From noreply at buildbot.pypy.org Wed Feb 1 05:53:01 2012 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Wed, 1 Feb 2012 05:53:01 +0100 (CET) Subject: [pypy-commit] extradoc extradoc: start an outline for the tutorial Message-ID: <20120201045301.033D082B67@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: extradoc Changeset: r4071:dbfdfec72ca7 Date: 2012-01-31 23:52 -0500 http://bitbucket.org/pypy/extradoc/changeset/dbfdfec72ca7/ Log: start an outline for the tutorial diff --git a/talk/pycon2012/tutorial/outline.rst b/talk/pycon2012/tutorial/outline.rst new file mode 100644 --- /dev/null +++ b/talk/pycon2012/tutorial/outline.rst @@ -0,0 +1,20 @@ +How to get the most out of PyPy +=============================== + +* How PyPy Works + * Bytecode VM + * GC + * Generational + * Implications (building large objects) + * JIT + * JIT + Python + * mapdict + * globals/builtins + * tracing + * resops + * optimizations +* A case study + * Open source application (TBD) + * Tracebin or jitviewer +* Putting it to work + * Workshop style From noreply at buildbot.pypy.org Wed Feb 1 05:53:02 2012 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Wed, 1 Feb 2012 05:53:02 +0100 (CET) Subject: [pypy-commit] extradoc extradoc: merged upstream Message-ID: <20120201045302.287DE82B67@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: extradoc Changeset: r4072:cc2d83cc2256 Date: 2012-01-31 23:52 -0500 http://bitbucket.org/pypy/extradoc/changeset/cc2d83cc2256/ Log: merged upstream diff --git a/planning/stm.txt b/planning/stm.txt --- a/planning/stm.txt +++ b/planning/stm.txt @@ -2,6 +2,9 @@ STM planning ============ +Comments in << >> describe the next thing to work on. + + Overview -------- @@ -72,6 +75,9 @@ use 4-5 bits, where in addition we use some "thread hash" value if there is only one copy. +<< NOW: think of a minimal GC model with these properties. We probably +need GC_GLOBAL, a single bit of GC_WAS_COPIED, and the version number. >> + stm_read -------- @@ -96,6 +102,10 @@ depending on cases). And if the read is accepted then we need to remember in a local list that we've read that object. +<< NOW: implement the thread's local dictionary in C, as say a search +tree. Should be easy enough if we don't try to be as efficient as +possible. The rest of the logic here is straightforward. >> + stm_write --------- @@ -114,6 +124,8 @@ consistent copy (i.e. nobody changed the object in the middle of us reading it). If it is too recent, then we might have to abort. +<< NOW: straightforward >> + End-of-transaction collections ------------------------------ @@ -132,6 +144,9 @@ We need to check that each of these global objects' versions have not been modified in the meantime. +<< NOW: should be easy, but with unclear interactions between the C +code and the GC. >> + Annotator support ----------------- @@ -149,6 +164,8 @@ of a localobj are themselves localobjs. This would be useful for 'PyFrame.fastlocals_w': it should also be known to always be a localobj. +<< do later >> + Local collections ----------------- @@ -173,6 +190,9 @@ all global reads done --- "compress" in the sense of removing duplicates. +<< do later; memory usage grows unboundedly during one transaction for +now. >> + Global collections ------------------ @@ -218,6 +238,10 @@ * Parallelism: there are multiple threads all doing something GC-related, like all scanning the heap together. +<< at first the global area keeps growing unboundedly. 
The next step +will be to add the LIL but run the global collection by keeping all +other threads blocked. >> + When not running transactively ------------------------------ @@ -240,11 +264,11 @@ is called, we can try to do such a collection, but what about the pinned objects? -Some intermediate solution would be to let this mode be rather slow: +<< NOW: let this mode be rather slow. To implement this mode, we would have only global objects, and have the stm_write barrier of 'obj' return -'obj'. Do only global collections. Allocation would allocate -immediately a global object, mostly without being able to benefit from -bump-pointer allocation. +'obj'. Do only global collections (one we have them; at first, don't +collect at all). Allocation would allocate immediately a global object, +without being able to benefit from bump-pointer allocation. >> Pointer equality @@ -261,6 +285,8 @@ dictionary if they map to each other. And we need to take care of the cases of NULL pointers. +<< NOW: straightforward, if we're careful not to forget cases >> + From noreply at buildbot.pypy.org Wed Feb 1 11:13:34 2012 From: noreply at buildbot.pypy.org (arigo) Date: Wed, 1 Feb 2012 11:13:34 +0100 (CET) Subject: [pypy-commit] pypy builtin-module: Closing branch. Message-ID: <20120201101334.96B6382B67@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: builtin-module Changeset: r52005:b21805c30c04 Date: 2012-02-01 09:52 +0100 http://bitbucket.org/pypy/pypy/changeset/b21805c30c04/ Log: Closing branch. The original issue was resolved in a72429e0e0ed. From noreply at buildbot.pypy.org Wed Feb 1 11:26:26 2012 From: noreply at buildbot.pypy.org (arigo) Date: Wed, 1 Feb 2012 11:26:26 +0100 (CET) Subject: [pypy-commit] pypy stm-gc: Start the branch stm-gc, where we'll try to implement the basic Message-ID: <20120201102626.463F382B67@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-gc Changeset: r52006:5d4160f25c47 Date: 2012-01-31 19:22 +0100 http://bitbucket.org/pypy/pypy/changeset/5d4160f25c47/ Log: Start the branch stm-gc, where we'll try to implement the basic model described in extradoc/planning/stm.txt. 
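The stm.txt plan quoted above names the pieces of the model without showing how they fit together: a GC_GLOBAL flag and a GC_WAS_COPIED bit on each object, a version number on global objects, a per-thread "local dictionary" mapping global objects to their local copies, stm_read/stm_write barriers, end-of-transaction validation of everything read, and dictionary-aware pointer equality. What follows is a minimal, illustrative pure-Python sketch of that flow, written against hypothetical names (GlobalObject, LocalCopy, Transaction, a GLOBAL_CLOCK counter) rather than the C/GC-level code the notes actually call for; it is only meant to make the barrier logic concrete and does not reflect what the stm-gc branch eventually implements.

class GlobalObject(object):
    def __init__(self, value):
        self.gc_global = True        # GC_GLOBAL flag
        self.gc_was_copied = False   # GC_WAS_COPIED bit
        self.version = 0             # bumped by every commit that wrote it
        self.value = value

class LocalCopy(object):
    def __init__(self, globj):
        self.gc_global = False
        self.globj = globj
        self.value = globj.value

class TransactionAborted(Exception):
    pass

GLOBAL_CLOCK = [0]     # crude stand-in for a global commit counter

class Transaction(object):
    def __init__(self):
        self.start_time = GLOBAL_CLOCK[0]
        self.local_copies = {}   # the thread's "local dictionary": global -> local copy
        self.reads = []          # global objects read so far

    def stm_read(self, obj):
        if not obj.gc_global:
            return obj.value                    # local objects are read directly
        copy = self.local_copies.get(obj)
        if copy is not None:                    # the GC_WAS_COPIED fast path
            return copy.value
        if obj.version > self.start_time:       # modified since we started
            raise TransactionAborted
        self.reads.append(obj)                  # remember the read for commit time
        return obj.value

    def stm_write(self, obj, value):
        if not obj.gc_global:
            obj.value = value                   # writes to local objects are free
            return
        copy = self.local_copies.get(obj)
        if copy is None:
            if obj.version > self.start_time:   # too recent: abort
                raise TransactionAborted
            copy = LocalCopy(obj)
            obj.gc_was_copied = True
            self.local_copies[obj] = copy
        copy.value = value

    def ptr_eq(self, p, q):
        # a global object and its own local copy must compare equal,
        # and NULL (None here) needs special casing
        if p is q:
            return True
        if p is None or q is None:
            return False
        return (self.local_copies.get(p) is q or
                self.local_copies.get(q) is p)

    def commit(self):
        # end-of-transaction check: nothing we read or copied may have
        # been committed to by somebody else in the meantime
        for obj in self.reads + list(self.local_copies):
            if obj.version > self.start_time:
                raise TransactionAborted
        GLOBAL_CLOCK[0] += 1
        now = GLOBAL_CLOCK[0]
        for obj, copy in self.local_copies.items():
            obj.value = copy.value
            obj.version = now

A transaction that meets a too-recent version, or fails the commit-time check, simply raises TransactionAborted, which is where a real implementation would roll back and retry; in the notes all of this is meant to live in C inside the GC, with the local dictionary implemented as a search tree rather than a Python dict.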
From noreply at buildbot.pypy.org Wed Feb 1 12:24:21 2012 From: noreply at buildbot.pypy.org (fijal) Date: Wed, 1 Feb 2012 12:24:21 +0100 (CET) Subject: [pypy-commit] pypy backend-vector-ops: start declaring what we need Message-ID: <20120201112421.8D60F82B67@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: backend-vector-ops Changeset: r52007:52478d84d718 Date: 2012-01-31 22:08 +0200 http://bitbucket.org/pypy/pypy/changeset/52478d84d718/ Log: start declaring what we need diff --git a/pypy/jit/backend/test/runner_test.py b/pypy/jit/backend/test/runner_test.py --- a/pypy/jit/backend/test/runner_test.py +++ b/pypy/jit/backend/test/runner_test.py @@ -3140,6 +3140,11 @@ fail = self.cpu.execute_token(looptoken2, -9) assert fail.identifier == 42 + def test_vector_ops(self): + ops = """ + [p0] + guard_array_aligned(p0) [] + """ class OOtypeBackendTest(BaseBackendTest): diff --git a/pypy/jit/metainterp/history.py b/pypy/jit/metainterp/history.py --- a/pypy/jit/metainterp/history.py +++ b/pypy/jit/metainterp/history.py @@ -17,6 +17,7 @@ INT = 'i' REF = 'r' FLOAT = 'f' +VECTOR = 'e' STRUCT = 's' HOLE = '_' VOID = 'v' @@ -481,6 +482,18 @@ def repr_rpython(self): return repr_rpython(self, 'bi') +class BoxFloatVector(Box): + type = VECTOR + + def __init__(self, floats): + self.floats = floats + +class BoxIntVector(Box): + type = VECTOR + + def __init__(self, ints): + self.ints = ints + class BoxFloat(Box): type = FLOAT _attrs_ = ('value',) diff --git a/pypy/jit/metainterp/resoperation.py b/pypy/jit/metainterp/resoperation.py --- a/pypy/jit/metainterp/resoperation.py +++ b/pypy/jit/metainterp/resoperation.py @@ -393,6 +393,7 @@ 'GUARD_OVERFLOW/0d', 'GUARD_NOT_FORCED/0d', # may be called with an exception currently set 'GUARD_NOT_INVALIDATED/0d', + 'GUARD_ARRAY_ALIGNED/1d', '_GUARD_LAST', # ----- end of guard operations ----- '_NOSIDEEFFECT_FIRST', # ----- start of no_side_effect operations ----- @@ -415,6 +416,7 @@ 'FLOAT_TRUEDIV/2', 'FLOAT_NEG/1', 'FLOAT_ABS/1', + 'FLOAT_VECTOR_ADD/2', 'CAST_FLOAT_TO_INT/1', # don't use for unsigned ints; we would 'CAST_INT_TO_FLOAT/1', # need some messy code in the backend 'CAST_FLOAT_TO_SINGLEFLOAT/1', @@ -467,6 +469,7 @@ '_ALWAYS_PURE_LAST', # ----- end of always_pure operations ----- 'GETARRAYITEM_GC/2d', + 'GETARRAYITEM_VECTOR_RAW/2d', 'GETARRAYITEM_RAW/2d', 'GETINTERIORFIELD_GC/2d', 'GETINTERIORFIELD_RAW/2d', @@ -487,6 +490,7 @@ 'SETARRAYITEM_GC/3d', 'SETARRAYITEM_RAW/3d', + 'SETARRAYITEM_VECTOR_RAW/2d', 'SETINTERIORFIELD_GC/3d', 'SETINTERIORFIELD_RAW/3d', 'SETFIELD_GC/2d', From noreply at buildbot.pypy.org Wed Feb 1 12:24:22 2012 From: noreply at buildbot.pypy.org (fijal) Date: Wed, 1 Feb 2012 12:24:22 +0100 (CET) Subject: [pypy-commit] pypy backend-vector-ops: (fijal, arigo) aligned arrays support for ll2ctypes Message-ID: <20120201112422.CCE1982B67@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: backend-vector-ops Changeset: r52008:3db72c7b6012 Date: 2012-02-01 13:23 +0200 http://bitbucket.org/pypy/pypy/changeset/3db72c7b6012/ Log: (fijal, arigo) aligned arrays support for ll2ctypes diff --git a/pypy/rpython/lltypesystem/ll2ctypes.py b/pypy/rpython/lltypesystem/ll2ctypes.py --- a/pypy/rpython/lltypesystem/ll2ctypes.py +++ b/pypy/rpython/lltypesystem/ll2ctypes.py @@ -183,7 +183,11 @@ ('items', max_n * ctypes_item)] else: _fields_ = [('items', max_n * ctypes_item)] - + if A._hints.get('memory_position_alignment'): + # This pack means the same as #pragma pack in MSVC and *not* + # in gcc + _pack_ = 
A._hints.get('memory_position_alignment') + @classmethod def _malloc(cls, n=None): if not isinstance(n, int): diff --git a/pypy/rpython/lltypesystem/lltype.py b/pypy/rpython/lltypesystem/lltype.py --- a/pypy/rpython/lltypesystem/lltype.py +++ b/pypy/rpython/lltypesystem/lltype.py @@ -179,6 +179,8 @@ raise TypeError, "%r cannot be inlined in structure" % self def _install_extras(self, adtmeths={}, hints={}): + if not isinstance(self, Array): + assert not 'memory_position_alignment' in hints self._adtmeths = frozendict(adtmeths) self._hints = frozendict(hints) @@ -422,6 +424,10 @@ % (self.OF._gckind,)) self.OF._inline_is_varsize(False) + if 'memory_position_alignment' in kwds.get('_hints', {}): + assert kwds['_hints']['nolength'], 'alignment only for raw non-lenght arrays' + assert self._gckind == 'raw', 'alignment only for raw non-lenght arrays' + assert kwds['_hints']['memory_position_alignment'] in (1, 2, 4, 8, 16) self._install_extras(**kwds) def _inline_is_varsize(self, last): diff --git a/pypy/rpython/lltypesystem/test/test_ll2ctypes.py b/pypy/rpython/lltypesystem/test/test_ll2ctypes.py --- a/pypy/rpython/lltypesystem/test/test_ll2ctypes.py +++ b/pypy/rpython/lltypesystem/test/test_ll2ctypes.py @@ -1356,6 +1356,19 @@ a2 = ctypes2lltype(lltype.Ptr(A), lltype2ctypes(a)) assert a2._obj.getitem(0)._obj._parentstructure() is a2._obj + def test_aligned_alloc(Self): + A = lltype.Array(lltype.Signed, + hints={'memory_position_alignment': 16, + 'nolength': True}) + l = [] + for i in range(10): + a = lltype.malloc(A, 5, flavor='raw') + ca = lltype2ctypes(a) + assert ctypes.cast(ca, ctypes.c_void_p).value % 16 == 0 + l.append(a) + for i in range(10): + lltype.free(l[i], 'raw') + class TestPlatform(object): def test_lib_on_libpaths(self): from pypy.translator.platform import platform From noreply at buildbot.pypy.org Wed Feb 1 12:24:47 2012 From: noreply at buildbot.pypy.org (fijal) Date: Wed, 1 Feb 2012 12:24:47 +0100 (CET) Subject: [pypy-commit] pypy backend-vector-ops: remove GUARD_ARRAY_ALIGNED for now Message-ID: <20120201112447.A892B82B67@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: backend-vector-ops Changeset: r52009:65a964afbef0 Date: 2012-02-01 13:24 +0200 http://bitbucket.org/pypy/pypy/changeset/65a964afbef0/ Log: remove GUARD_ARRAY_ALIGNED for now diff --git a/pypy/jit/metainterp/resoperation.py b/pypy/jit/metainterp/resoperation.py --- a/pypy/jit/metainterp/resoperation.py +++ b/pypy/jit/metainterp/resoperation.py @@ -393,7 +393,6 @@ 'GUARD_OVERFLOW/0d', 'GUARD_NOT_FORCED/0d', # may be called with an exception currently set 'GUARD_NOT_INVALIDATED/0d', - 'GUARD_ARRAY_ALIGNED/1d', '_GUARD_LAST', # ----- end of guard operations ----- '_NOSIDEEFFECT_FIRST', # ----- start of no_side_effect operations ----- From pullrequests-noreply at bitbucket.org Wed Feb 1 12:25:16 2012 From: pullrequests-noreply at bitbucket.org (Bitbucket) Date: Wed, 01 Feb 2012 11:25:16 -0000 Subject: [pypy-commit] [ACCEPTED] Pull request #24 for pypy/pypy: Apply hpaulj's patch to fix issue950 (startup_hook in readline / pyrepl) In-Reply-To: References: Message-ID: <20120201112516.22802.24737@bitbucket03.managed.contegix.com> Pull request #24 has been accepted by Stefano Parmesan. Changes in dripton/pypy have been pulled into pypy/pypy. https://bitbucket.org/pypy/pypy/pull-request/24/apply-hpauljs-patch-to-fix-issue950 -- This is an issue notification from bitbucket.org. You are receiving this either because you are the participating in a pull request, or you are following it. 
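The two ll2ctypes/lltype changesets above add a 'memory_position_alignment' hint for raw, no-length arrays and make ll2ctypes honour it through the ctypes _pack_ attribute. The sketch below shows how the hint is meant to be used; it closely mirrors the test_aligned_alloc test from that diff, assumes it runs untranslated inside a checkout of this branch (so that pypy.rpython.lltypesystem and ll2ctypes are importable), and alloc_aligned_doubles is just a hypothetical helper name.

import ctypes
from pypy.rpython.lltypesystem import lltype
from pypy.rpython.lltypesystem.ll2ctypes import lltype2ctypes

# The hint is only accepted for raw, no-length arrays, and the alignment
# must be one of 1, 2, 4, 8, 16 (the asserts added to lltype.Array above).
DOUBLE_ARRAY = lltype.Array(lltype.Float,
                            hints={'nolength': True,
                                   'memory_position_alignment': 16})

def alloc_aligned_doubles(n):
    # hypothetical helper: allocate a raw array and check, through its
    # ctypes view, that the address really is 16-byte aligned, exactly as
    # test_aligned_alloc does in the diff above
    a = lltype.malloc(DOUBLE_ARRAY, n, flavor='raw')
    ca = lltype2ctypes(a)
    assert ctypes.cast(ca, ctypes.c_void_p).value % 16 == 0
    return a

if __name__ == '__main__':
    a = alloc_aligned_doubles(4)
    a[0] = 13.0
    a[1] = 15.0
    print a[0] + a[1]
    lltype.free(a, flavor='raw')

The 16-byte figure matters because the SSE MOVDQA loads and stores introduced by the later backend-vector-ops changesets in this digest require 16-byte-aligned memory operands, which is why the vector test there declares its array with the same hint.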
From pullrequests-noreply at bitbucket.org Wed Feb 1 12:25:23 2012 From: pullrequests-noreply at bitbucket.org (Bitbucket) Date: Wed, 01 Feb 2012 11:25:23 -0000 Subject: [pypy-commit] [ACCEPTED] Pull request #21 for pypy/pypy: datetime.py fix for issue972, with unit test In-Reply-To: References: Message-ID: <20120201112523.22802.43872@bitbucket03.managed.contegix.com> Pull request #21 has been accepted by Stefano Parmesan. Changes in / have been pulled into pypy/pypy. https://bitbucket.org/pypy/pypy/pull-request/21/datetimepy-fix-for-issue972-with-unit-test -- This is an issue notification from bitbucket.org. You are receiving this either because you are the participating in a pull request, or you are following it. From pullrequests-noreply at bitbucket.org Wed Feb 1 12:28:23 2012 From: pullrequests-noreply at bitbucket.org (Bitbucket) Date: Wed, 01 Feb 2012 11:28:23 -0000 Subject: [pypy-commit] [OPEN] Pull request #26 for pypy/pypy: json/decoder speed-up Message-ID: A new pull request has been opened by Stefano Parmesan. armisael/pypy has changes to be pulled into pypy/pypy. https://bitbucket.org/pypy/pypy/pull-request/26/json-decoder-speed-up Title: json/decoder speed-up Following what Fijal wrote on his blogpost of October 2011 I worked on cleanin the code of json/decoder.py Changes to be pulled: a330d824b42f by Stefano Parmesa?: "merged from pypy's 2c2944d51e51 and fixed conflicts" de5504a0f4f0 by Stefano Parmesa?: "removed cPython-oriented code in json and added KeyValueBuilder(s) for speeding ?" -- This is an issue notification from bitbucket.org. You are receiving this either because you are the participating in a pull request, or you are following it. From noreply at buildbot.pypy.org Wed Feb 1 13:43:25 2012 From: noreply at buildbot.pypy.org (fijal) Date: Wed, 1 Feb 2012 13:43:25 +0100 (CET) Subject: [pypy-commit] pypy backend-vector-ops: Good. First go at vectorized operations - support double reading writing Message-ID: <20120201124325.E4B4782B67@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: backend-vector-ops Changeset: r52010:b60d7a3bcf8f Date: 2012-02-01 14:42 +0200 http://bitbucket.org/pypy/pypy/changeset/b60d7a3bcf8f/ Log: Good. First go at vectorized operations - support double reading writing and adding in the x86 backend. No spilling so far diff --git a/pypy/jit/backend/model.py b/pypy/jit/backend/model.py --- a/pypy/jit/backend/model.py +++ b/pypy/jit/backend/model.py @@ -10,6 +10,8 @@ # longlongs are supported by the JIT, but stored as doubles. # Boxes and Consts are BoxFloats and ConstFloats. 
supports_singlefloats = False + supports_vector_ops = False + # SSE and similar done_with_this_frame_void_v = -1 done_with_this_frame_int_v = -1 diff --git a/pypy/jit/backend/test/runner_test.py b/pypy/jit/backend/test/runner_test.py --- a/pypy/jit/backend/test/runner_test.py +++ b/pypy/jit/backend/test/runner_test.py @@ -3141,10 +3141,29 @@ assert fail.identifier == 42 def test_vector_ops(self): - ops = """ - [p0] - guard_array_aligned(p0) [] - """ + if not self.cpu.supports_vector_ops: + py.test.skip("unsupported vector ops") + + A = lltype.Array(lltype.Float, hints={'nolength': True, + 'memory_position_alignment': 16}) + descr0 = self.cpu.arraydescrof(A) + looptoken = JitCellToken() + ops = parse(""" + [p0, p1] + vec0 = getarrayitem_vector_raw(p0, 0, descr=descr0) + vec1 = getarrayitem_vector_raw(p1, 0, descr=descr0) + vec2 = float_vector_add(vec0, vec1) + setarrayitem_vector_raw(p0, 0, vec2, descr=descr0) + finish() + """, namespace=locals()) + self.cpu.compile_loop(ops.inputargs, ops.operations, looptoken) + a = lltype.malloc(A, 10, flavor='raw') + a[0] = 13.0 + a[1] = 15.0 + self.cpu.execute_token(looptoken, a, a) + assert a[0] == 26 + assert a[1] == 30 + lltype.free(a, flavor='raw') class OOtypeBackendTest(BaseBackendTest): diff --git a/pypy/jit/backend/x86/assembler.py b/pypy/jit/backend/x86/assembler.py --- a/pypy/jit/backend/x86/assembler.py +++ b/pypy/jit/backend/x86/assembler.py @@ -45,6 +45,7 @@ # darwin requires the stack to be 16 bytes aligned on calls. Same for gcc 4.5.0, # better safe than sorry CALL_ALIGN = 16 // WORD +FLOAT_VECTOR_SIZE = 1 # multiply by 2 def align_stack_words(words): return (words + CALL_ALIGN - 1) & ~(CALL_ALIGN-1) @@ -1164,6 +1165,7 @@ genop_int_rshift = _binaryop("SAR") genop_uint_rshift = _binaryop("SHR") genop_float_add = _binaryop("ADDSD", True) + genop_float_vector_add = _binaryop("ADDPD", True) genop_float_sub = _binaryop('SUBSD') genop_float_mul = _binaryop('MULSD', True) genop_float_truediv = _binaryop('DIVSD') @@ -1458,6 +1460,13 @@ genop_getarrayitem_gc_pure = genop_getarrayitem_gc genop_getarrayitem_raw = genop_getarrayitem_gc + def genop_getarrayitem_vector_raw(self, op, arglocs, resloc): + base_loc, ofs_loc, size_loc, _, sign_loc = arglocs + assert isinstance(size_loc, ImmedLoc) + scale = _get_scale(size_loc.value) + src_addr = addr_add(base_loc, ofs_loc, 0, scale) + self.mc.MOVDQA(resloc, src_addr) + def _get_interiorfield_addr(self, temp_loc, index_loc, itemsize_loc, base_loc, ofs_loc): assert isinstance(itemsize_loc, ImmedLoc) @@ -1510,6 +1519,13 @@ dest_addr = AddressLoc(base_loc, ofs_loc, scale, baseofs.value) self.save_into_mem(dest_addr, value_loc, size_loc) + def genop_discard_setarrayitem_vector_raw(self, op, arglocs): + base_loc, ofs_loc, value_loc, size_loc, _ = arglocs + assert isinstance(size_loc, ImmedLoc) + scale = _get_scale(size_loc.value) + dest_addr = AddressLoc(base_loc, ofs_loc, scale, 0) + self.mc.MOVDQA(dest_addr, value_loc) + def genop_discard_strsetitem(self, op, arglocs): base_loc, ofs_loc, val_loc = arglocs basesize, itemsize, ofs_length = symbolic.get_array_token(rstr.STR, diff --git a/pypy/jit/backend/x86/regalloc.py b/pypy/jit/backend/x86/regalloc.py --- a/pypy/jit/backend/x86/regalloc.py +++ b/pypy/jit/backend/x86/regalloc.py @@ -5,7 +5,7 @@ import os from pypy.jit.metainterp.history import (Box, Const, ConstInt, ConstPtr, ResOperation, BoxPtr, ConstFloat, - BoxFloat, INT, REF, FLOAT, + BoxFloat, INT, REF, FLOAT, VECTOR, TargetToken, JitCellToken) from pypy.jit.backend.x86.regloc import * from 
pypy.rpython.lltypesystem import lltype, rffi, rstr @@ -87,7 +87,7 @@ class X86XMMRegisterManager(RegisterManager): - box_types = [FLOAT] + box_types = [FLOAT, VECTOR] all_regs = [xmm0, xmm1, xmm2, xmm3, xmm4, xmm5, xmm6, xmm7] # we never need lower byte I hope save_around_call_regs = all_regs @@ -256,7 +256,7 @@ return pass_on_stack def possibly_free_var(self, var): - if var.type == FLOAT: + if var.type in self.xrm.box_types: self.xrm.possibly_free_var(var) else: self.rm.possibly_free_var(var) @@ -274,7 +274,7 @@ def make_sure_var_in_reg(self, var, forbidden_vars=[], selected_reg=None, need_lower_byte=False): - if var.type == FLOAT: + if var.type in self.xrm.box_types: if isinstance(var, ConstFloat): return FloatImmedLoc(var.getfloatstorage()) return self.xrm.make_sure_var_in_reg(var, forbidden_vars, @@ -285,7 +285,7 @@ def force_allocate_reg(self, var, forbidden_vars=[], selected_reg=None, need_lower_byte=False): - if var.type == FLOAT: + if var.type in self.xrm.box_types: return self.xrm.force_allocate_reg(var, forbidden_vars, selected_reg, need_lower_byte) else: @@ -293,7 +293,7 @@ selected_reg, need_lower_byte) def force_spill_var(self, var): - if var.type == FLOAT: + if var.type in self.xrm.box_types: return self.xrm.force_spill_var(var) else: return self.rm.force_spill_var(var) @@ -530,7 +530,7 @@ def loc(self, v): if v is None: # xxx kludgy return None - if v.type == FLOAT: + if v.type in self.xrm.box_types: return self.xrm.loc(v) return self.rm.loc(v) @@ -701,6 +701,7 @@ self.xrm.possibly_free_vars_for_op(op) consider_float_add = _consider_float_op + consider_float_vector_add = _consider_float_op consider_float_sub = _consider_float_op consider_float_mul = _consider_float_op consider_float_truediv = _consider_float_op @@ -1080,6 +1081,7 @@ imm(itemsize), imm(ofs)]) consider_setarrayitem_raw = consider_setarrayitem_gc + consider_setarrayitem_vector_raw = consider_setarrayitem_gc def consider_getfield_gc(self, op): ofs_loc, size_loc, sign = self._unpack_fielddescr(op.getdescr()) @@ -1112,6 +1114,7 @@ sign_loc], result_loc) consider_getarrayitem_raw = consider_getarrayitem_gc + consider_getarrayitem_vector_raw = consider_getarrayitem_gc consider_getarrayitem_gc_pure = consider_getarrayitem_gc def consider_getinteriorfield_gc(self, op): diff --git a/pypy/jit/backend/x86/regloc.py b/pypy/jit/backend/x86/regloc.py --- a/pypy/jit/backend/x86/regloc.py +++ b/pypy/jit/backend/x86/regloc.py @@ -556,6 +556,7 @@ MOVSD = _binaryop('MOVSD') MOVAPD = _binaryop('MOVAPD') + MOVDQA = _binaryop('MOVDQA') ADDSD = _binaryop('ADDSD') ADDPD = _binaryop('ADDPD') SUBSD = _binaryop('SUBSD') diff --git a/pypy/jit/backend/x86/runner.py b/pypy/jit/backend/x86/runner.py --- a/pypy/jit/backend/x86/runner.py +++ b/pypy/jit/backend/x86/runner.py @@ -21,6 +21,7 @@ debug = True supports_floats = True supports_singlefloats = True + supports_vector_ops = True dont_keepalive_stuff = False # for tests with_threads = False diff --git a/pypy/jit/backend/x86/rx86.py b/pypy/jit/backend/x86/rx86.py --- a/pypy/jit/backend/x86/rx86.py +++ b/pypy/jit/backend/x86/rx86.py @@ -714,12 +714,18 @@ define_modrm_modes('MOVSX16_r*', [rex_w, '\x0F\xBF', register(1, 8)]) define_modrm_modes('MOVSX32_r*', [rex_w, '\x63', register(1, 8)]) -define_modrm_modes('MOVSD_x*', ['\xF2', rex_nw, '\x0F\x10', register(1,8)], regtype='XMM') -define_modrm_modes('MOVSD_*x', ['\xF2', rex_nw, '\x0F\x11', register(2,8)], regtype='XMM') +define_modrm_modes('MOVSD_x*', ['\xF2', rex_nw, '\x0F\x10', register(1,8)], + regtype='XMM') 
+define_modrm_modes('MOVSD_*x', ['\xF2', rex_nw, '\x0F\x11', register(2,8)], + regtype='XMM') define_modrm_modes('MOVAPD_x*', ['\x66', rex_nw, '\x0F\x28', register(1,8)], regtype='XMM') define_modrm_modes('MOVAPD_*x', ['\x66', rex_nw, '\x0F\x29', register(2,8)], regtype='XMM') +define_modrm_modes('MOVDQA_x*', ['\x66', rex_nw, '\x0F\x6F', register(1, 8)], + regtype='XMM') +define_modrm_modes('MOVDQA_*x', ['\x66', rex_nw, '\x0F\x7F', register(2, 8)], + regtype='XMM') define_modrm_modes('SQRTSD_x*', ['\xF2', rex_nw, '\x0F\x51', register(1,8)], regtype='XMM') diff --git a/pypy/jit/metainterp/executor.py b/pypy/jit/metainterp/executor.py --- a/pypy/jit/metainterp/executor.py +++ b/pypy/jit/metainterp/executor.py @@ -273,6 +273,9 @@ # ____________________________________________________________ +IGNORED = ['FLOAT_VECTOR_ADD', 'GETARRAYITEM_VECTOR_RAW', + 'SETARRAYITEM_VECTOR_RAW'] + def _make_execute_list(): if 0: # enable this to trace calls to do_xxx def wrap(fn): @@ -349,7 +352,8 @@ rop.LABEL, ): # list of opcodes never executed by pyjitpl continue - raise AssertionError("missing %r" % (key,)) + if not key in IGNORED: + raise AssertionError("missing %r" % (key,)) return execute_by_num_args def make_execute_function_with_boxes(name, func): diff --git a/pypy/jit/metainterp/history.py b/pypy/jit/metainterp/history.py --- a/pypy/jit/metainterp/history.py +++ b/pypy/jit/metainterp/history.py @@ -482,17 +482,14 @@ def repr_rpython(self): return repr_rpython(self, 'bi') -class BoxFloatVector(Box): +class BoxVector(Box): type = VECTOR - def __init__(self, floats): - self.floats = floats + def __init__(self): + pass -class BoxIntVector(Box): - type = VECTOR - - def __init__(self, ints): - self.ints = ints + def _getrepr_(self): + return '' class BoxFloat(Box): type = FLOAT diff --git a/pypy/jit/metainterp/resoperation.py b/pypy/jit/metainterp/resoperation.py --- a/pypy/jit/metainterp/resoperation.py +++ b/pypy/jit/metainterp/resoperation.py @@ -489,7 +489,7 @@ 'SETARRAYITEM_GC/3d', 'SETARRAYITEM_RAW/3d', - 'SETARRAYITEM_VECTOR_RAW/2d', + 'SETARRAYITEM_VECTOR_RAW/3d', 'SETINTERIORFIELD_GC/3d', 'SETINTERIORFIELD_RAW/3d', 'SETFIELD_GC/2d', diff --git a/pypy/jit/tool/oparser.py b/pypy/jit/tool/oparser.py --- a/pypy/jit/tool/oparser.py +++ b/pypy/jit/tool/oparser.py @@ -114,6 +114,9 @@ elif elem.startswith('f'): box = self.model.BoxFloat() _box_counter_more_than(self.model, elem[1:]) + elif elem.startswith('vec'): + box = self.model.BoxVector() + _box_counter_more_than(self.model, elem[3:]) elif elem.startswith('p'): # pointer ts = getattr(self.cpu, 'ts', self.model.llhelper) diff --git a/pypy/jit/tool/oparser_model.py b/pypy/jit/tool/oparser_model.py --- a/pypy/jit/tool/oparser_model.py +++ b/pypy/jit/tool/oparser_model.py @@ -4,7 +4,7 @@ def get_real_model(): class LoopModel(object): from pypy.jit.metainterp.history import TreeLoop, JitCellToken - from pypy.jit.metainterp.history import Box, BoxInt, BoxFloat + from pypy.jit.metainterp.history import Box, BoxInt, BoxFloat, BoxVector from pypy.jit.metainterp.history import ConstInt, ConstObj, ConstPtr, ConstFloat from pypy.jit.metainterp.history import BasicFailDescr, TargetToken from pypy.jit.metainterp.typesystem import llhelper @@ -76,6 +76,9 @@ class BoxRef(Box): type = 'p' + class BoxVector(Box): + type = 'e' + class Const(object): def __init__(self, value=None): self.value = value From noreply at buildbot.pypy.org Wed Feb 1 13:48:44 2012 From: noreply at buildbot.pypy.org (l.diekmann) Date: Wed, 1 Feb 2012 13:48:44 +0100 (CET) Subject: [pypy-commit] 
pypy set-strategies: merge default Message-ID: <20120201124844.4B9F682B67@wyvern.cs.uni-duesseldorf.de> Author: l.diekmann Branch: set-strategies Changeset: r52011:1317685b0c35 Date: 2012-01-30 14:16 +0000 http://bitbucket.org/pypy/pypy/changeset/1317685b0c35/ Log: merge default diff too long, truncating to 10000 out of 13220 lines diff --git a/.hgignore b/.hgignore --- a/.hgignore +++ b/.hgignore @@ -2,6 +2,9 @@ *.py[co] *~ .*.swp +.idea +.project +.pydevproject syntax: regexp ^testresult$ diff --git a/lib_pypy/_ctypes/structure.py b/lib_pypy/_ctypes/structure.py --- a/lib_pypy/_ctypes/structure.py +++ b/lib_pypy/_ctypes/structure.py @@ -73,8 +73,12 @@ class Field(object): def __init__(self, name, offset, size, ctype, num, is_bitfield): - for k in ('name', 'offset', 'size', 'ctype', 'num', 'is_bitfield'): - self.__dict__[k] = locals()[k] + self.__dict__['name'] = name + self.__dict__['offset'] = offset + self.__dict__['size'] = size + self.__dict__['ctype'] = ctype + self.__dict__['num'] = num + self.__dict__['is_bitfield'] = is_bitfield def __setattr__(self, name, value): raise AttributeError(name) diff --git a/lib_pypy/_sqlite3.py b/lib_pypy/_sqlite3.py --- a/lib_pypy/_sqlite3.py +++ b/lib_pypy/_sqlite3.py @@ -20,6 +20,8 @@ # 2. Altered source versions must be plainly marked as such, and must not be # misrepresented as being the original software. # 3. This notice may not be removed or altered from any source distribution. +# +# Note: This software has been modified for use in PyPy. from ctypes import c_void_p, c_int, c_double, c_int64, c_char_p, cdll from ctypes import POINTER, byref, string_at, CFUNCTYPE, cast @@ -27,7 +29,6 @@ from collections import OrderedDict import datetime import sys -import time import weakref from threading import _get_ident as thread_get_ident @@ -606,7 +607,7 @@ def authorizer(userdata, action, arg1, arg2, dbname, source): try: return int(callback(action, arg1, arg2, dbname, source)) - except Exception, e: + except Exception: return SQLITE_DENY c_authorizer = AUTHORIZER(authorizer) @@ -653,7 +654,7 @@ if not aggregate_ptr[0]: try: aggregate = cls() - except Exception, e: + except Exception: msg = ("user-defined aggregate's '__init__' " "method raised error") sqlite.sqlite3_result_error(context, msg, len(msg)) @@ -667,7 +668,7 @@ params = _convert_params(context, argc, c_params) try: aggregate.step(*params) - except Exception, e: + except Exception: msg = ("user-defined aggregate's 'step' " "method raised error") sqlite.sqlite3_result_error(context, msg, len(msg)) @@ -683,7 +684,7 @@ aggregate = self.aggregate_instances[aggregate_ptr[0]] try: val = aggregate.finalize() - except Exception, e: + except Exception: msg = ("user-defined aggregate's 'finalize' " "method raised error") sqlite.sqlite3_result_error(context, msg, len(msg)) @@ -771,7 +772,7 @@ self.statement.item = None self.statement.exhausted = True - if self.statement.kind == DML or self.statement.kind == DDL: + if self.statement.kind == DML: self.statement.reset() self.rowcount = -1 @@ -791,7 +792,7 @@ if self.statement.kind == DML: self.connection._begin() else: - raise ProgrammingError, "executemany is only for DML statements" + raise ProgrammingError("executemany is only for DML statements") self.rowcount = 0 for params in many_params: @@ -861,8 +862,6 @@ except StopIteration: return None - return nextrow - def fetchmany(self, size=None): self._check_closed() self._check_reset() @@ -915,7 +914,7 @@ def __init__(self, connection, sql): self.statement = None if not isinstance(sql, str): - raise 
ValueError, "sql must be a string" + raise ValueError("sql must be a string") self.con = connection self.sql = sql # DEBUG ONLY first_word = self._statement_kind = sql.lstrip().split(" ")[0].upper() @@ -944,8 +943,8 @@ raise self.con._get_exception(ret) self.con._remember_statement(self) if _check_remaining_sql(next_char.value): - raise Warning, "One and only one statement required: %r" % ( - next_char.value,) + raise Warning("One and only one statement required: %r" % ( + next_char.value,)) # sql_char should remain alive until here self._build_row_cast_map() @@ -1016,7 +1015,7 @@ elif type(param) is buffer: sqlite.sqlite3_bind_blob(self.statement, idx, str(param), len(param), SQLITE_TRANSIENT) else: - raise InterfaceError, "parameter type %s is not supported" % str(type(param)) + raise InterfaceError("parameter type %s is not supported" % str(type(param))) def set_params(self, params): ret = sqlite.sqlite3_reset(self.statement) @@ -1045,11 +1044,11 @@ for idx in range(1, sqlite.sqlite3_bind_parameter_count(self.statement) + 1): param_name = sqlite.sqlite3_bind_parameter_name(self.statement, idx) if param_name is None: - raise ProgrammingError, "need named parameters" + raise ProgrammingError("need named parameters") param_name = param_name[1:] try: param = params[param_name] - except KeyError, e: + except KeyError: raise ProgrammingError("missing parameter '%s'" %param) self.set_param(idx, param) @@ -1260,7 +1259,7 @@ params = _convert_params(context, nargs, c_params) try: val = real_cb(*params) - except Exception, e: + except Exception: msg = "user-defined function raised exception" sqlite.sqlite3_result_error(context, msg, len(msg)) else: diff --git a/lib_pypy/datetime.py b/lib_pypy/datetime.py --- a/lib_pypy/datetime.py +++ b/lib_pypy/datetime.py @@ -13,7 +13,7 @@ Sources for time zone and DST data: http://www.twinsun.com/tz/tz-link.htm This was originally copied from the sandbox of the CPython CVS repository. -Thanks to Tim Peters for suggesting using it. +Thanks to Tim Peters for suggesting using it. """ import time as _time @@ -271,6 +271,8 @@ raise ValueError("%s()=%d, must be in -1439..1439" % (name, offset)) def _check_date_fields(year, month, day): + if not isinstance(year, (int, long)): + raise TypeError('int expected') if not MINYEAR <= year <= MAXYEAR: raise ValueError('year must be in %d..%d' % (MINYEAR, MAXYEAR), year) if not 1 <= month <= 12: @@ -280,6 +282,8 @@ raise ValueError('day must be in 1..%d' % dim, day) def _check_time_fields(hour, minute, second, microsecond): + if not isinstance(hour, (int, long)): + raise TypeError('int expected') if not 0 <= hour <= 23: raise ValueError('hour must be in 0..23', hour) if not 0 <= minute <= 59: @@ -543,61 +547,76 @@ self = object.__new__(cls) - self.__days = d - self.__seconds = s - self.__microseconds = us + self._days = d + self._seconds = s + self._microseconds = us if abs(d) > 999999999: raise OverflowError("timedelta # of days is too large: %d" % d) return self def __repr__(self): - if self.__microseconds: + if self._microseconds: return "%s(%d, %d, %d)" % ('datetime.' + self.__class__.__name__, - self.__days, - self.__seconds, - self.__microseconds) - if self.__seconds: + self._days, + self._seconds, + self._microseconds) + if self._seconds: return "%s(%d, %d)" % ('datetime.' + self.__class__.__name__, - self.__days, - self.__seconds) - return "%s(%d)" % ('datetime.' + self.__class__.__name__, self.__days) + self._days, + self._seconds) + return "%s(%d)" % ('datetime.' 
+ self.__class__.__name__, self._days) def __str__(self): - mm, ss = divmod(self.__seconds, 60) + mm, ss = divmod(self._seconds, 60) hh, mm = divmod(mm, 60) s = "%d:%02d:%02d" % (hh, mm, ss) - if self.__days: + if self._days: def plural(n): return n, abs(n) != 1 and "s" or "" - s = ("%d day%s, " % plural(self.__days)) + s - if self.__microseconds: - s = s + ".%06d" % self.__microseconds + s = ("%d day%s, " % plural(self._days)) + s + if self._microseconds: + s = s + ".%06d" % self._microseconds return s - days = property(lambda self: self.__days, doc="days") - seconds = property(lambda self: self.__seconds, doc="seconds") - microseconds = property(lambda self: self.__microseconds, - doc="microseconds") - def total_seconds(self): return ((self.days * 86400 + self.seconds) * 10**6 + self.microseconds) / 1e6 + # Read-only field accessors + @property + def days(self): + """days""" + return self._days + + @property + def seconds(self): + """seconds""" + return self._seconds + + @property + def microseconds(self): + """microseconds""" + return self._microseconds + def __add__(self, other): if isinstance(other, timedelta): # for CPython compatibility, we cannot use # our __class__ here, but need a real timedelta - return timedelta(self.__days + other.__days, - self.__seconds + other.__seconds, - self.__microseconds + other.__microseconds) + return timedelta(self._days + other._days, + self._seconds + other._seconds, + self._microseconds + other._microseconds) return NotImplemented __radd__ = __add__ def __sub__(self, other): if isinstance(other, timedelta): - return self + -other + # for CPython compatibility, we cannot use + # our __class__ here, but need a real timedelta + return timedelta(self._days - other._days, + self._seconds - other._seconds, + self._microseconds - other._microseconds) return NotImplemented def __rsub__(self, other): @@ -606,17 +625,17 @@ return NotImplemented def __neg__(self): - # for CPython compatibility, we cannot use - # our __class__ here, but need a real timedelta - return timedelta(-self.__days, - -self.__seconds, - -self.__microseconds) + # for CPython compatibility, we cannot use + # our __class__ here, but need a real timedelta + return timedelta(-self._days, + -self._seconds, + -self._microseconds) def __pos__(self): return self def __abs__(self): - if self.__days < 0: + if self._days < 0: return -self else: return self @@ -625,81 +644,81 @@ if isinstance(other, (int, long)): # for CPython compatibility, we cannot use # our __class__ here, but need a real timedelta - return timedelta(self.__days * other, - self.__seconds * other, - self.__microseconds * other) + return timedelta(self._days * other, + self._seconds * other, + self._microseconds * other) return NotImplemented __rmul__ = __mul__ def __div__(self, other): if isinstance(other, (int, long)): - usec = ((self.__days * (24*3600L) + self.__seconds) * 1000000 + - self.__microseconds) + usec = ((self._days * (24*3600L) + self._seconds) * 1000000 + + self._microseconds) return timedelta(0, 0, usec // other) return NotImplemented __floordiv__ = __div__ - # Comparisons. + # Comparisons of timedelta objects with other. 
def __eq__(self, other): if isinstance(other, timedelta): - return self.__cmp(other) == 0 + return self._cmp(other) == 0 else: return False def __ne__(self, other): if isinstance(other, timedelta): - return self.__cmp(other) != 0 + return self._cmp(other) != 0 else: return True def __le__(self, other): if isinstance(other, timedelta): - return self.__cmp(other) <= 0 + return self._cmp(other) <= 0 else: _cmperror(self, other) def __lt__(self, other): if isinstance(other, timedelta): - return self.__cmp(other) < 0 + return self._cmp(other) < 0 else: _cmperror(self, other) def __ge__(self, other): if isinstance(other, timedelta): - return self.__cmp(other) >= 0 + return self._cmp(other) >= 0 else: _cmperror(self, other) def __gt__(self, other): if isinstance(other, timedelta): - return self.__cmp(other) > 0 + return self._cmp(other) > 0 else: _cmperror(self, other) - def __cmp(self, other): + def _cmp(self, other): assert isinstance(other, timedelta) - return cmp(self.__getstate(), other.__getstate()) + return cmp(self._getstate(), other._getstate()) def __hash__(self): - return hash(self.__getstate()) + return hash(self._getstate()) def __nonzero__(self): - return (self.__days != 0 or - self.__seconds != 0 or - self.__microseconds != 0) + return (self._days != 0 or + self._seconds != 0 or + self._microseconds != 0) # Pickle support. __safe_for_unpickling__ = True # For Python 2.2 - def __getstate(self): - return (self.__days, self.__seconds, self.__microseconds) + def _getstate(self): + return (self._days, self._seconds, self._microseconds) def __reduce__(self): - return (self.__class__, self.__getstate()) + return (self.__class__, self._getstate()) timedelta.min = timedelta(-999999999) timedelta.max = timedelta(days=999999999, hours=23, minutes=59, seconds=59, @@ -749,25 +768,26 @@ return self _check_date_fields(year, month, day) self = object.__new__(cls) - self.__year = year - self.__month = month - self.__day = day + self._year = year + self._month = month + self._day = day return self # Additional constructors + @classmethod def fromtimestamp(cls, t): "Construct a date from a POSIX timestamp (like time.time())." y, m, d, hh, mm, ss, weekday, jday, dst = _time.localtime(t) return cls(y, m, d) - fromtimestamp = classmethod(fromtimestamp) + @classmethod def today(cls): "Construct a date from time.time()." t = _time.time() return cls.fromtimestamp(t) - today = classmethod(today) + @classmethod def fromordinal(cls, n): """Contruct a date from a proleptic Gregorian ordinal. @@ -776,16 +796,24 @@ """ y, m, d = _ord2ymd(n) return cls(y, m, d) - fromordinal = classmethod(fromordinal) # Conversions to string def __repr__(self): - "Convert to formal string, for repr()." + """Convert to formal string, for repr(). + + >>> dt = datetime(2010, 1, 1) + >>> repr(dt) + 'datetime.datetime(2010, 1, 1, 0, 0)' + + >>> dt = datetime(2010, 1, 1, tzinfo=timezone.utc) + >>> repr(dt) + 'datetime.datetime(2010, 1, 1, 0, 0, tzinfo=datetime.timezone.utc)' + """ return "%s(%d, %d, %d)" % ('datetime.' + self.__class__.__name__, - self.__year, - self.__month, - self.__day) + self._year, + self._month, + self._day) # XXX These shouldn't depend on time.localtime(), because that # clips the usable dates to [1970 .. 2038). At least ctime() is # easily done without using strftime() -- that's better too because @@ -793,12 +821,20 @@ def ctime(self): "Format a la ctime()." 
- return tmxxx(self.__year, self.__month, self.__day).ctime() + return tmxxx(self._year, self._month, self._day).ctime() def strftime(self, fmt): "Format using strftime()." return _wrap_strftime(self, fmt, self.timetuple()) + def __format__(self, fmt): + if not isinstance(fmt, (str, unicode)): + raise ValueError("__format__ excepts str or unicode, not %s" % + fmt.__class__.__name__) + if len(fmt) != 0: + return self.strftime(fmt) + return str(self) + def isoformat(self): """Return the date formatted according to ISO. @@ -808,29 +844,31 @@ - http://www.w3.org/TR/NOTE-datetime - http://www.cl.cam.ac.uk/~mgk25/iso-time.html """ - return "%04d-%02d-%02d" % (self.__year, self.__month, self.__day) + return "%04d-%02d-%02d" % (self._year, self._month, self._day) __str__ = isoformat - def __format__(self, format): - if not isinstance(format, (str, unicode)): - raise ValueError("__format__ excepts str or unicode, not %s" % - format.__class__.__name__) - if not format: - return str(self) - return self.strftime(format) + # Read-only field accessors + @property + def year(self): + """year (1-9999)""" + return self._year - # Read-only field accessors - year = property(lambda self: self.__year, - doc="year (%d-%d)" % (MINYEAR, MAXYEAR)) - month = property(lambda self: self.__month, doc="month (1-12)") - day = property(lambda self: self.__day, doc="day (1-31)") + @property + def month(self): + """month (1-12)""" + return self._month + + @property + def day(self): + """day (1-31)""" + return self._day # Standard conversions, __cmp__, __hash__ (and helpers) def timetuple(self): "Return local time tuple compatible with time.localtime()." - return _build_struct_time(self.__year, self.__month, self.__day, + return _build_struct_time(self._year, self._month, self._day, 0, 0, 0, -1) def toordinal(self): @@ -839,24 +877,24 @@ January 1 of year 1 is day 1. Only the year, month and day values contribute to the result. """ - return _ymd2ord(self.__year, self.__month, self.__day) + return _ymd2ord(self._year, self._month, self._day) def replace(self, year=None, month=None, day=None): """Return a new date with new values for the specified fields.""" if year is None: - year = self.__year + year = self._year if month is None: - month = self.__month + month = self._month if day is None: - day = self.__day + day = self._day _check_date_fields(year, month, day) return date(year, month, day) - # Comparisons. + # Comparisons of date objects with other. 
def __eq__(self, other): if isinstance(other, date): - return self.__cmp(other) == 0 + return self._cmp(other) == 0 elif hasattr(other, "timetuple"): return NotImplemented else: @@ -864,7 +902,7 @@ def __ne__(self, other): if isinstance(other, date): - return self.__cmp(other) != 0 + return self._cmp(other) != 0 elif hasattr(other, "timetuple"): return NotImplemented else: @@ -872,7 +910,7 @@ def __le__(self, other): if isinstance(other, date): - return self.__cmp(other) <= 0 + return self._cmp(other) <= 0 elif hasattr(other, "timetuple"): return NotImplemented else: @@ -880,7 +918,7 @@ def __lt__(self, other): if isinstance(other, date): - return self.__cmp(other) < 0 + return self._cmp(other) < 0 elif hasattr(other, "timetuple"): return NotImplemented else: @@ -888,7 +926,7 @@ def __ge__(self, other): if isinstance(other, date): - return self.__cmp(other) >= 0 + return self._cmp(other) >= 0 elif hasattr(other, "timetuple"): return NotImplemented else: @@ -896,21 +934,21 @@ def __gt__(self, other): if isinstance(other, date): - return self.__cmp(other) > 0 + return self._cmp(other) > 0 elif hasattr(other, "timetuple"): return NotImplemented else: _cmperror(self, other) - def __cmp(self, other): + def _cmp(self, other): assert isinstance(other, date) - y, m, d = self.__year, self.__month, self.__day - y2, m2, d2 = other.__year, other.__month, other.__day + y, m, d = self._year, self._month, self._day + y2, m2, d2 = other._year, other._month, other._day return cmp((y, m, d), (y2, m2, d2)) def __hash__(self): "Hash." - return hash(self.__getstate()) + return hash(self._getstate()) # Computations @@ -922,9 +960,9 @@ def __add__(self, other): "Add a date to a timedelta." if isinstance(other, timedelta): - t = tmxxx(self.__year, - self.__month, - self.__day + other.days) + t = tmxxx(self._year, + self._month, + self._day + other.days) self._checkOverflow(t.year) result = date(t.year, t.month, t.day) return result @@ -966,9 +1004,9 @@ ISO calendar algorithm taken from http://www.phys.uu.nl/~vgent/calendar/isocalendar.htm """ - year = self.__year + year = self._year week1monday = _isoweek1monday(year) - today = _ymd2ord(self.__year, self.__month, self.__day) + today = _ymd2ord(self._year, self._month, self._day) # Internally, week and day have origin 0 week, day = divmod(today - week1monday, 7) if week < 0: @@ -985,18 +1023,18 @@ __safe_for_unpickling__ = True # For Python 2.2 - def __getstate(self): - yhi, ylo = divmod(self.__year, 256) - return ("%c%c%c%c" % (yhi, ylo, self.__month, self.__day), ) + def _getstate(self): + yhi, ylo = divmod(self._year, 256) + return ("%c%c%c%c" % (yhi, ylo, self._month, self._day), ) def __setstate(self, string): if len(string) != 4 or not (1 <= ord(string[2]) <= 12): raise TypeError("not enough arguments") - yhi, ylo, self.__month, self.__day = map(ord, string) - self.__year = yhi * 256 + ylo + yhi, ylo, self._month, self._day = map(ord, string) + self._year = yhi * 256 + ylo def __reduce__(self): - return (self.__class__, self.__getstate()) + return (self.__class__, self._getstate()) _date_class = date # so functions w/ args named "date" can get at the class @@ -1118,62 +1156,80 @@ return self _check_tzinfo_arg(tzinfo) _check_time_fields(hour, minute, second, microsecond) - self.__hour = hour - self.__minute = minute - self.__second = second - self.__microsecond = microsecond + self._hour = hour + self._minute = minute + self._second = second + self._microsecond = microsecond self._tzinfo = tzinfo return self # Read-only field accessors - hour = 
property(lambda self: self.__hour, doc="hour (0-23)") - minute = property(lambda self: self.__minute, doc="minute (0-59)") - second = property(lambda self: self.__second, doc="second (0-59)") - microsecond = property(lambda self: self.__microsecond, - doc="microsecond (0-999999)") - tzinfo = property(lambda self: self._tzinfo, doc="timezone info object") + @property + def hour(self): + """hour (0-23)""" + return self._hour + + @property + def minute(self): + """minute (0-59)""" + return self._minute + + @property + def second(self): + """second (0-59)""" + return self._second + + @property + def microsecond(self): + """microsecond (0-999999)""" + return self._microsecond + + @property + def tzinfo(self): + """timezone info object""" + return self._tzinfo # Standard conversions, __hash__ (and helpers) - # Comparisons. + # Comparisons of time objects with other. def __eq__(self, other): if isinstance(other, time): - return self.__cmp(other) == 0 + return self._cmp(other) == 0 else: return False def __ne__(self, other): if isinstance(other, time): - return self.__cmp(other) != 0 + return self._cmp(other) != 0 else: return True def __le__(self, other): if isinstance(other, time): - return self.__cmp(other) <= 0 + return self._cmp(other) <= 0 else: _cmperror(self, other) def __lt__(self, other): if isinstance(other, time): - return self.__cmp(other) < 0 + return self._cmp(other) < 0 else: _cmperror(self, other) def __ge__(self, other): if isinstance(other, time): - return self.__cmp(other) >= 0 + return self._cmp(other) >= 0 else: _cmperror(self, other) def __gt__(self, other): if isinstance(other, time): - return self.__cmp(other) > 0 + return self._cmp(other) > 0 else: _cmperror(self, other) - def __cmp(self, other): + def _cmp(self, other): assert isinstance(other, time) mytz = self._tzinfo ottz = other._tzinfo @@ -1187,23 +1243,23 @@ base_compare = myoff == otoff if base_compare: - return cmp((self.__hour, self.__minute, self.__second, - self.__microsecond), - (other.__hour, other.__minute, other.__second, - other.__microsecond)) + return cmp((self._hour, self._minute, self._second, + self._microsecond), + (other._hour, other._minute, other._second, + other._microsecond)) if myoff is None or otoff is None: # XXX Buggy in 2.2.2. raise TypeError("cannot compare naive and aware times") - myhhmm = self.__hour * 60 + self.__minute - myoff - othhmm = other.__hour * 60 + other.__minute - otoff - return cmp((myhhmm, self.__second, self.__microsecond), - (othhmm, other.__second, other.__microsecond)) + myhhmm = self._hour * 60 + self._minute - myoff + othhmm = other._hour * 60 + other._minute - otoff + return cmp((myhhmm, self._second, self._microsecond), + (othhmm, other._second, other._microsecond)) def __hash__(self): """Hash.""" tzoff = self._utcoffset() if not tzoff: # zero or None - return hash(self.__getstate()[0]) + return hash(self._getstate()[0]) h, m = divmod(self.hour * 60 + self.minute - tzoff, 60) if 0 <= h < 24: return hash(time(h, m, self.second, self.microsecond)) @@ -1227,14 +1283,14 @@ def __repr__(self): """Convert to formal string, for repr().""" - if self.__microsecond != 0: - s = ", %d, %d" % (self.__second, self.__microsecond) - elif self.__second != 0: - s = ", %d" % self.__second + if self._microsecond != 0: + s = ", %d, %d" % (self._second, self._microsecond) + elif self._second != 0: + s = ", %d" % self._second else: s = "" s= "%s(%d, %d%s)" % ('datetime.' 
+ self.__class__.__name__, - self.__hour, self.__minute, s) + self._hour, self._minute, s) if self._tzinfo is not None: assert s[-1:] == ")" s = s[:-1] + ", tzinfo=%r" % self._tzinfo + ")" @@ -1246,8 +1302,8 @@ This is 'HH:MM:SS.mmmmmm+zz:zz', or 'HH:MM:SS+zz:zz' if self.microsecond == 0. """ - s = _format_time(self.__hour, self.__minute, self.__second, - self.__microsecond) + s = _format_time(self._hour, self._minute, self._second, + self._microsecond) tz = self._tzstr() if tz: s += tz @@ -1255,14 +1311,6 @@ __str__ = isoformat - def __format__(self, format): - if not isinstance(format, (str, unicode)): - raise ValueError("__format__ excepts str or unicode, not %s" % - format.__class__.__name__) - if not format: - return str(self) - return self.strftime(format) - def strftime(self, fmt): """Format using strftime(). The date part of the timestamp passed to underlying strftime should not be used. @@ -1270,10 +1318,18 @@ # The year must be >= 1900 else Python's strftime implementation # can raise a bogus exception. timetuple = (1900, 1, 1, - self.__hour, self.__minute, self.__second, + self._hour, self._minute, self._second, 0, 1, -1) return _wrap_strftime(self, fmt, timetuple) + def __format__(self, fmt): + if not isinstance(fmt, (str, unicode)): + raise ValueError("__format__ excepts str or unicode, not %s" % + fmt.__class__.__name__) + if len(fmt) != 0: + return self.strftime(fmt) + return str(self) + # Timezone functions def utcoffset(self): @@ -1350,10 +1406,10 @@ __safe_for_unpickling__ = True # For Python 2.2 - def __getstate(self): - us2, us3 = divmod(self.__microsecond, 256) + def _getstate(self): + us2, us3 = divmod(self._microsecond, 256) us1, us2 = divmod(us2, 256) - basestate = ("%c" * 6) % (self.__hour, self.__minute, self.__second, + basestate = ("%c" * 6) % (self._hour, self._minute, self._second, us1, us2, us3) if self._tzinfo is None: return (basestate,) @@ -1363,13 +1419,13 @@ def __setstate(self, string, tzinfo): if len(string) != 6 or ord(string[0]) >= 24: raise TypeError("an integer is required") - self.__hour, self.__minute, self.__second, us1, us2, us3 = \ + self._hour, self._minute, self._second, us1, us2, us3 = \ map(ord, string) - self.__microsecond = (((us1 << 8) | us2) << 8) | us3 + self._microsecond = (((us1 << 8) | us2) << 8) | us3 self._tzinfo = tzinfo def __reduce__(self): - return (time, self.__getstate()) + return (time, self._getstate()) _time_class = time # so functions w/ args named "time" can get at the class @@ -1378,9 +1434,11 @@ time.resolution = timedelta(microseconds=1) class datetime(date): + """datetime(year, month, day[, hour[, minute[, second[, microsecond[,tzinfo]]]]]) - # XXX needs docstrings - # See http://www.zope.org/Members/fdrake/DateTimeWiki/TimeZoneInfo + The year, month and day arguments are required. tzinfo may be None, or an + instance of a tzinfo subclass. The remaining arguments may be ints or longs. 
+ """ def __new__(cls, year, month=None, day=None, hour=0, minute=0, second=0, microsecond=0, tzinfo=None): @@ -1393,24 +1451,43 @@ _check_time_fields(hour, minute, second, microsecond) self = date.__new__(cls, year, month, day) # XXX This duplicates __year, __month, __day for convenience :-( - self.__year = year - self.__month = month - self.__day = day - self.__hour = hour - self.__minute = minute - self.__second = second - self.__microsecond = microsecond + self._year = year + self._month = month + self._day = day + self._hour = hour + self._minute = minute + self._second = second + self._microsecond = microsecond self._tzinfo = tzinfo return self # Read-only field accessors - hour = property(lambda self: self.__hour, doc="hour (0-23)") - minute = property(lambda self: self.__minute, doc="minute (0-59)") - second = property(lambda self: self.__second, doc="second (0-59)") - microsecond = property(lambda self: self.__microsecond, - doc="microsecond (0-999999)") - tzinfo = property(lambda self: self._tzinfo, doc="timezone info object") + @property + def hour(self): + """hour (0-23)""" + return self._hour + @property + def minute(self): + """minute (0-59)""" + return self._minute + + @property + def second(self): + """second (0-59)""" + return self._second + + @property + def microsecond(self): + """microsecond (0-999999)""" + return self._microsecond + + @property + def tzinfo(self): + """timezone info object""" + return self._tzinfo + + @classmethod def fromtimestamp(cls, t, tz=None): """Construct a datetime from a POSIX timestamp (like time.time()). @@ -1438,37 +1515,42 @@ if tz is not None: result = tz.fromutc(result) return result - fromtimestamp = classmethod(fromtimestamp) + @classmethod def utcfromtimestamp(cls, t): "Construct a UTC datetime from a POSIX timestamp (like time.time())." - if 1 - (t % 1.0) < 0.0000005: - t = float(int(t)) + 1 - if t < 0: - t -= 1 + t, frac = divmod(t, 1.0) + us = round(frac * 1e6) + + # If timestamp is less than one microsecond smaller than a + # full second, us can be rounded up to 1000000. In this case, + # roll over to seconds, otherwise, ValueError is raised + # by the constructor. + if us == 1000000: + t += 1 + us = 0 y, m, d, hh, mm, ss, weekday, jday, dst = _time.gmtime(t) - us = int((t % 1.0) * 1000000) ss = min(ss, 59) # clamp out leap seconds if the platform has them return cls(y, m, d, hh, mm, ss, us) - utcfromtimestamp = classmethod(utcfromtimestamp) # XXX This is supposed to do better than we *can* do by using time.time(), # XXX if the platform supports a more accurate way. The C implementation # XXX uses gettimeofday on platforms that have it, but that isn't # XXX available from Python. So now() may return different results # XXX across the implementations. + @classmethod def now(cls, tz=None): "Construct a datetime from time.time() and optional time zone info." t = _time.time() return cls.fromtimestamp(t, tz) - now = classmethod(now) + @classmethod def utcnow(cls): "Construct a UTC datetime from time.time()." t = _time.time() return cls.utcfromtimestamp(t) - utcnow = classmethod(utcnow) + @classmethod def combine(cls, date, time): "Construct a datetime from a given date and a given time." if not isinstance(date, _date_class): @@ -1478,7 +1560,6 @@ return cls(date.year, date.month, date.day, time.hour, time.minute, time.second, time.microsecond, time.tzinfo) - combine = classmethod(combine) def timetuple(self): "Return local time tuple compatible with time.localtime()." @@ -1504,7 +1585,7 @@ def date(self): "Return the date part." 
- return date(self.__year, self.__month, self.__day) + return date(self._year, self._month, self._day) def time(self): "Return the time part, with tzinfo None." @@ -1564,8 +1645,8 @@ def ctime(self): "Format a la ctime()." - t = tmxxx(self.__year, self.__month, self.__day, self.__hour, - self.__minute, self.__second) + t = tmxxx(self._year, self._month, self._day, self._hour, + self._minute, self._second) return t.ctime() def isoformat(self, sep='T'): @@ -1580,10 +1661,10 @@ Optional argument sep specifies the separator between date and time, default 'T'. """ - s = ("%04d-%02d-%02d%c" % (self.__year, self.__month, self.__day, + s = ("%04d-%02d-%02d%c" % (self._year, self._month, self._day, sep) + - _format_time(self.__hour, self.__minute, self.__second, - self.__microsecond)) + _format_time(self._hour, self._minute, self._second, + self._microsecond)) off = self._utcoffset() if off is not None: if off < 0: @@ -1596,13 +1677,13 @@ return s def __repr__(self): - "Convert to formal string, for repr()." - L = [self.__year, self.__month, self.__day, # These are never zero - self.__hour, self.__minute, self.__second, self.__microsecond] + """Convert to formal string, for repr().""" + L = [self._year, self._month, self._day, # These are never zero + self._hour, self._minute, self._second, self._microsecond] if L[-1] == 0: del L[-1] if L[-1] == 0: - del L[-1] + del L[-1] s = ", ".join(map(str, L)) s = "%s(%s)" % ('datetime.' + self.__class__.__name__, s) if self._tzinfo is not None: @@ -1675,7 +1756,7 @@ def __eq__(self, other): if isinstance(other, datetime): - return self.__cmp(other) == 0 + return self._cmp(other) == 0 elif hasattr(other, "timetuple") and not isinstance(other, date): return NotImplemented else: @@ -1683,7 +1764,7 @@ def __ne__(self, other): if isinstance(other, datetime): - return self.__cmp(other) != 0 + return self._cmp(other) != 0 elif hasattr(other, "timetuple") and not isinstance(other, date): return NotImplemented else: @@ -1691,7 +1772,7 @@ def __le__(self, other): if isinstance(other, datetime): - return self.__cmp(other) <= 0 + return self._cmp(other) <= 0 elif hasattr(other, "timetuple") and not isinstance(other, date): return NotImplemented else: @@ -1699,7 +1780,7 @@ def __lt__(self, other): if isinstance(other, datetime): - return self.__cmp(other) < 0 + return self._cmp(other) < 0 elif hasattr(other, "timetuple") and not isinstance(other, date): return NotImplemented else: @@ -1707,7 +1788,7 @@ def __ge__(self, other): if isinstance(other, datetime): - return self.__cmp(other) >= 0 + return self._cmp(other) >= 0 elif hasattr(other, "timetuple") and not isinstance(other, date): return NotImplemented else: @@ -1715,13 +1796,13 @@ def __gt__(self, other): if isinstance(other, datetime): - return self.__cmp(other) > 0 + return self._cmp(other) > 0 elif hasattr(other, "timetuple") and not isinstance(other, date): return NotImplemented else: _cmperror(self, other) - def __cmp(self, other): + def _cmp(self, other): assert isinstance(other, datetime) mytz = self._tzinfo ottz = other._tzinfo @@ -1737,12 +1818,12 @@ base_compare = myoff == otoff if base_compare: - return cmp((self.__year, self.__month, self.__day, - self.__hour, self.__minute, self.__second, - self.__microsecond), - (other.__year, other.__month, other.__day, - other.__hour, other.__minute, other.__second, - other.__microsecond)) + return cmp((self._year, self._month, self._day, + self._hour, self._minute, self._second, + self._microsecond), + (other._year, other._month, other._day, + other._hour, 
other._minute, other._second, + other._microsecond)) if myoff is None or otoff is None: # XXX Buggy in 2.2.2. raise TypeError("cannot compare naive and aware datetimes") @@ -1756,13 +1837,13 @@ "Add a datetime and a timedelta." if not isinstance(other, timedelta): return NotImplemented - t = tmxxx(self.__year, - self.__month, - self.__day + other.days, - self.__hour, - self.__minute, - self.__second + other.seconds, - self.__microsecond + other.microseconds) + t = tmxxx(self._year, + self._month, + self._day + other.days, + self._hour, + self._minute, + self._second + other.seconds, + self._microsecond + other.microseconds) self._checkOverflow(t.year) result = datetime(t.year, t.month, t.day, t.hour, t.minute, t.second, @@ -1780,11 +1861,11 @@ days1 = self.toordinal() days2 = other.toordinal() - secs1 = self.__second + self.__minute * 60 + self.__hour * 3600 - secs2 = other.__second + other.__minute * 60 + other.__hour * 3600 + secs1 = self._second + self._minute * 60 + self._hour * 3600 + secs2 = other._second + other._minute * 60 + other._hour * 3600 base = timedelta(days1 - days2, secs1 - secs2, - self.__microsecond - other.__microsecond) + self._microsecond - other._microsecond) if self._tzinfo is other._tzinfo: return base myoff = self._utcoffset() @@ -1792,13 +1873,13 @@ if myoff == otoff: return base if myoff is None or otoff is None: - raise TypeError, "cannot mix naive and timezone-aware time" + raise TypeError("cannot mix naive and timezone-aware time") return base + timedelta(minutes = otoff-myoff) def __hash__(self): tzoff = self._utcoffset() if tzoff is None: - return hash(self.__getstate()[0]) + return hash(self._getstate()[0]) days = _ymd2ord(self.year, self.month, self.day) seconds = self.hour * 3600 + (self.minute - tzoff) * 60 + self.second return hash(timedelta(days, seconds, self.microsecond)) @@ -1807,12 +1888,12 @@ __safe_for_unpickling__ = True # For Python 2.2 - def __getstate(self): - yhi, ylo = divmod(self.__year, 256) - us2, us3 = divmod(self.__microsecond, 256) + def _getstate(self): + yhi, ylo = divmod(self._year, 256) + us2, us3 = divmod(self._microsecond, 256) us1, us2 = divmod(us2, 256) - basestate = ("%c" * 10) % (yhi, ylo, self.__month, self.__day, - self.__hour, self.__minute, self.__second, + basestate = ("%c" * 10) % (yhi, ylo, self._month, self._day, + self._hour, self._minute, self._second, us1, us2, us3) if self._tzinfo is None: return (basestate,) @@ -1820,14 +1901,14 @@ return (basestate, self._tzinfo) def __setstate(self, string, tzinfo): - (yhi, ylo, self.__month, self.__day, self.__hour, - self.__minute, self.__second, us1, us2, us3) = map(ord, string) - self.__year = yhi * 256 + ylo - self.__microsecond = (((us1 << 8) | us2) << 8) | us3 + (yhi, ylo, self._month, self._day, self._hour, + self._minute, self._second, us1, us2, us3) = map(ord, string) + self._year = yhi * 256 + ylo + self._microsecond = (((us1 << 8) | us2) << 8) | us3 self._tzinfo = tzinfo def __reduce__(self): - return (self.__class__, self.__getstate()) + return (self.__class__, self._getstate()) datetime.min = datetime(1, 1, 1) @@ -2009,7 +2090,7 @@ Because we know z.d said z was in daylight time (else [5] would have held and we would have stopped then), and we know z.d != z'.d (else [8] would have held -and we we have stopped then), and there are only 2 possible values dst() can +and we have stopped then), and there are only 2 possible values dst() can return in Eastern, it follows that z'.d must be 0 (which it is in the example, but the reasoning doesn't depend on the example 
-- it depends on there being two possible dst() outcomes, one zero and the other non-zero). Therefore diff --git a/lib_pypy/numpypy/__init__.py b/lib_pypy/numpypy/__init__.py --- a/lib_pypy/numpypy/__init__.py +++ b/lib_pypy/numpypy/__init__.py @@ -1,2 +1,2 @@ from _numpypy import * -from .fromnumeric import * +from .core import * diff --git a/lib_pypy/numpypy/core/__init__.py b/lib_pypy/numpypy/core/__init__.py new file mode 100644 --- /dev/null +++ b/lib_pypy/numpypy/core/__init__.py @@ -0,0 +1,2 @@ +from .fromnumeric import * +from .numeric import * diff --git a/lib_pypy/numpypy/core/_methods.py b/lib_pypy/numpypy/core/_methods.py new file mode 100644 --- /dev/null +++ b/lib_pypy/numpypy/core/_methods.py @@ -0,0 +1,98 @@ +# Array methods which are called by the both the C-code for the method +# and the Python code for the NumPy-namespace function + +import _numpypy as mu +um = mu +#from numpypy.core import umath as um +from numpypy.core.numeric import asanyarray + +def _amax(a, axis=None, out=None, skipna=False, keepdims=False): + return um.maximum.reduce(a, axis=axis, + out=out, skipna=skipna, keepdims=keepdims) + +def _amin(a, axis=None, out=None, skipna=False, keepdims=False): + return um.minimum.reduce(a, axis=axis, + out=out, skipna=skipna, keepdims=keepdims) + +def _sum(a, axis=None, dtype=None, out=None, skipna=False, keepdims=False): + return um.add.reduce(a, axis=axis, dtype=dtype, + out=out, skipna=skipna, keepdims=keepdims) + +def _prod(a, axis=None, dtype=None, out=None, skipna=False, keepdims=False): + return um.multiply.reduce(a, axis=axis, dtype=dtype, + out=out, skipna=skipna, keepdims=keepdims) + +def _mean(a, axis=None, dtype=None, out=None, skipna=False, keepdims=False): + arr = asanyarray(a) + + # Upgrade bool, unsigned int, and int to float64 + if dtype is None and arr.dtype.kind in ['b','u','i']: + ret = um.add.reduce(arr, axis=axis, dtype='f8', + out=out, skipna=skipna, keepdims=keepdims) + else: + ret = um.add.reduce(arr, axis=axis, dtype=dtype, + out=out, skipna=skipna, keepdims=keepdims) + rcount = mu.count_reduce_items(arr, axis=axis, + skipna=skipna, keepdims=keepdims) + if isinstance(ret, mu.ndarray): + ret = um.true_divide(ret, rcount, + casting='unsafe', subok=False) + else: + ret = ret / float(rcount) + return ret + +def _var(a, axis=None, dtype=None, out=None, ddof=0, + skipna=False, keepdims=False): + arr = asanyarray(a) + + # First compute the mean, saving 'rcount' for reuse later + if dtype is None and arr.dtype.kind in ['b','u','i']: + arrmean = um.add.reduce(arr, axis=axis, dtype='f8', + skipna=skipna, keepdims=True) + else: + arrmean = um.add.reduce(arr, axis=axis, dtype=dtype, + skipna=skipna, keepdims=True) + rcount = mu.count_reduce_items(arr, axis=axis, + skipna=skipna, keepdims=True) + if isinstance(arrmean, mu.ndarray): + arrmean = um.true_divide(arrmean, rcount, + casting='unsafe', subok=False) + else: + arrmean = arrmean / float(rcount) + + # arr - arrmean + x = arr - arrmean + + # (arr - arrmean) ** 2 + if arr.dtype.kind == 'c': + x = um.multiply(x, um.conjugate(x)).real + else: + x = um.multiply(x, x) + + # add.reduce((arr - arrmean) ** 2, axis) + ret = um.add.reduce(x, axis=axis, dtype=dtype, out=out, + skipna=skipna, keepdims=keepdims) + + # add.reduce((arr - arrmean) ** 2, axis) / (n - ddof) + if not keepdims and isinstance(rcount, mu.ndarray): + rcount = rcount.squeeze(axis=axis) + rcount -= ddof + if isinstance(ret, mu.ndarray): + ret = um.true_divide(ret, rcount, + casting='unsafe', subok=False) + else: + ret = ret / float(rcount) + 
+ return ret + +def _std(a, axis=None, dtype=None, out=None, ddof=0, + skipna=False, keepdims=False): + ret = _var(a, axis=axis, dtype=dtype, out=out, ddof=ddof, + skipna=skipna, keepdims=keepdims) + + if isinstance(ret, mu.ndarray): + ret = um.sqrt(ret) + else: + ret = um.sqrt(ret) + + return ret diff --git a/lib_pypy/numpypy/core/arrayprint.py b/lib_pypy/numpypy/core/arrayprint.py new file mode 100644 --- /dev/null +++ b/lib_pypy/numpypy/core/arrayprint.py @@ -0,0 +1,789 @@ +"""Array printing function + +$Id: arrayprint.py,v 1.9 2005/09/13 13:58:44 teoliphant Exp $ +""" +__all__ = ["array2string", "set_printoptions", "get_printoptions"] +__docformat__ = 'restructuredtext' + +# +# Written by Konrad Hinsen +# last revision: 1996-3-13 +# modified by Jim Hugunin 1997-3-3 for repr's and str's (and other details) +# and by Perry Greenfield 2000-4-1 for numarray +# and by Travis Oliphant 2005-8-22 for numpy + +import sys +import _numpypy as _nt +from _numpypy import maximum, minimum, absolute, not_equal, isinf, isnan, isna +#from _numpypy import format_longfloat, datetime_as_string, datetime_data +from .fromnumeric import ravel + + +def product(x, y): return x*y + +_summaryEdgeItems = 3 # repr N leading and trailing items of each dimension +_summaryThreshold = 1000 # total items > triggers array summarization + +_float_output_precision = 8 +_float_output_suppress_small = False +_line_width = 75 +_nan_str = 'nan' +_inf_str = 'inf' +_na_str = 'NA' +_formatter = None # formatting function for array elements + +if sys.version_info[0] >= 3: + from functools import reduce + +def set_printoptions(precision=None, threshold=None, edgeitems=None, + linewidth=None, suppress=None, + nanstr=None, infstr=None, nastr=None, + formatter=None): + """ + Set printing options. + + These options determine the way floating point numbers, arrays and + other NumPy objects are displayed. + + Parameters + ---------- + precision : int, optional + Number of digits of precision for floating point output (default 8). + threshold : int, optional + Total number of array elements which trigger summarization + rather than full repr (default 1000). + edgeitems : int, optional + Number of array items in summary at beginning and end of + each dimension (default 3). + linewidth : int, optional + The number of characters per line for the purpose of inserting + line breaks (default 75). + suppress : bool, optional + Whether or not suppress printing of small floating point values + using scientific notation (default False). + nanstr : str, optional + String representation of floating point not-a-number (default nan). + infstr : str, optional + String representation of floating point infinity (default inf). + nastr : str, optional + String representation of NA missing value (default NA). + formatter : dict of callables, optional + If not None, the keys should indicate the type(s) that the respective + formatting function applies to. Callables should return a string. + Types that are not specified (by their corresponding keys) are handled + by the default formatters. 
Individual types for which a formatter + can be set are:: + + - 'bool' + - 'int' + - 'timedelta' : a `numpy.timedelta64` + - 'datetime' : a `numpy.datetime64` + - 'float' + - 'longfloat' : 128-bit floats + - 'complexfloat' + - 'longcomplexfloat' : composed of two 128-bit floats + - 'numpy_str' : types `numpy.string_` and `numpy.unicode_` + - 'str' : all other strings + + Other keys that can be used to set a group of types at once are:: + + - 'all' : sets all types + - 'int_kind' : sets 'int' + - 'float_kind' : sets 'float' and 'longfloat' + - 'complex_kind' : sets 'complexfloat' and 'longcomplexfloat' + - 'str_kind' : sets 'str' and 'numpystr' + + See Also + -------- + get_printoptions, set_string_function, array2string + + Notes + ----- + `formatter` is always reset with a call to `set_printoptions`. + + Examples + -------- + Floating point precision can be set: + + >>> np.set_printoptions(precision=4) + >>> print np.array([1.123456789]) + [ 1.1235] + + Long arrays can be summarised: + + >>> np.set_printoptions(threshold=5) + >>> print np.arange(10) + [0 1 2 ..., 7 8 9] + + Small results can be suppressed: + + >>> eps = np.finfo(float).eps + >>> x = np.arange(4.) + >>> x**2 - (x + eps)**2 + array([ -4.9304e-32, -4.4409e-16, 0.0000e+00, 0.0000e+00]) + >>> np.set_printoptions(suppress=True) + >>> x**2 - (x + eps)**2 + array([-0., -0., 0., 0.]) + + A custom formatter can be used to display array elements as desired: + + >>> np.set_printoptions(formatter={'all':lambda x: 'int: '+str(-x)}) + >>> x = np.arange(3) + >>> x + array([int: 0, int: -1, int: -2]) + >>> np.set_printoptions() # formatter gets reset + >>> x + array([0, 1, 2]) + + To put back the default options, you can use: + + >>> np.set_printoptions(edgeitems=3,infstr='inf', + ... linewidth=75, nanstr='nan', precision=8, + ... suppress=False, threshold=1000, formatter=None) + """ + + global _summaryThreshold, _summaryEdgeItems, _float_output_precision, \ + _line_width, _float_output_suppress_small, _nan_str, _inf_str, \ + _na_str, _formatter + if linewidth is not None: + _line_width = linewidth + if threshold is not None: + _summaryThreshold = threshold + if edgeitems is not None: + _summaryEdgeItems = edgeitems + if precision is not None: + _float_output_precision = precision + if suppress is not None: + _float_output_suppress_small = not not suppress + if nanstr is not None: + _nan_str = nanstr + if infstr is not None: + _inf_str = infstr + if nastr is not None: + _na_str = nastr + _formatter = formatter + +def get_printoptions(): + """ + Return the current print options. + + Returns + ------- + print_opts : dict + Dictionary of current print options with keys + + - precision : int + - threshold : int + - edgeitems : int + - linewidth : int + - suppress : bool + - nanstr : str + - infstr : str + - formatter : dict of callables + + For a full description of these options, see `set_printoptions`. 
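Since the get_printoptions()/set_printoptions() pair being added here returns and accepts exactly the same keyword arguments, it supports a simple save/tweak/restore pattern. A small usage sketch, assuming a PyPy build where the new lib_pypy/numpypy/core/arrayprint.py module is importable; this is illustration only and not part of the patch::

    from numpypy.core import arrayprint

    saved = arrayprint.get_printoptions()       # snapshot current settings
    arrayprint.set_printoptions(precision=3, suppress=True)
    try:
        pass  # ... print some arrays with the temporary settings ...
    finally:
        arrayprint.set_printoptions(**saved)    # always restore afterwards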
+ + See Also + -------- + set_printoptions, set_string_function + + """ + d = dict(precision=_float_output_precision, + threshold=_summaryThreshold, + edgeitems=_summaryEdgeItems, + linewidth=_line_width, + suppress=_float_output_suppress_small, + nanstr=_nan_str, + infstr=_inf_str, + nastr=_na_str, + formatter=_formatter) + return d + +def _leading_trailing(a): + import numeric as _nc + if a.ndim == 1: + if len(a) > 2*_summaryEdgeItems: + b = _nc.concatenate((a[:_summaryEdgeItems], + a[-_summaryEdgeItems:])) + else: + b = a + else: + if len(a) > 2*_summaryEdgeItems: + l = [_leading_trailing(a[i]) for i in range( + min(len(a), _summaryEdgeItems))] + l.extend([_leading_trailing(a[-i]) for i in range( + min(len(a), _summaryEdgeItems),0,-1)]) + else: + l = [_leading_trailing(a[i]) for i in range(0, len(a))] + b = _nc.concatenate(tuple(l)) + return b + +def _boolFormatter(x): + if isna(x): + return str(x).replace('NA', _na_str, 1) + elif x: + return ' True' + else: + return 'False' + + +def repr_format(x): + if isna(x): + return str(x).replace('NA', _na_str, 1) + else: + return repr(x) + +def _array2string(a, max_line_width, precision, suppress_small, separator=' ', + prefix="", formatter=None): + + if max_line_width is None: + max_line_width = _line_width + + if precision is None: + precision = _float_output_precision + + if suppress_small is None: + suppress_small = _float_output_suppress_small + + if formatter is None: + formatter = _formatter + + if a.size > _summaryThreshold: + summary_insert = "..., " + data = _leading_trailing(a) + else: + summary_insert = "" + data = ravel(a) + + formatdict = {'bool' : _boolFormatter, + 'int' : IntegerFormat(data), + 'float' : FloatFormat(data, precision, suppress_small), + 'longfloat' : LongFloatFormat(precision), + #'complexfloat' : ComplexFormat(data, precision, + # suppress_small), + #'longcomplexfloat' : LongComplexFormat(precision), + #'datetime' : DatetimeFormat(data), + #'timedelta' : TimedeltaFormat(data), + 'numpystr' : repr_format, + 'str' : str} + + if formatter is not None: + fkeys = [k for k in formatter.keys() if formatter[k] is not None] + if 'all' in fkeys: + for key in formatdict.keys(): + formatdict[key] = formatter['all'] + if 'int_kind' in fkeys: + for key in ['int']: + formatdict[key] = formatter['int_kind'] + if 'float_kind' in fkeys: + for key in ['float', 'longfloat']: + formatdict[key] = formatter['float_kind'] + if 'complex_kind' in fkeys: + for key in ['complexfloat', 'longcomplexfloat']: + formatdict[key] = formatter['complex_kind'] + if 'str_kind' in fkeys: + for key in ['numpystr', 'str']: + formatdict[key] = formatter['str_kind'] + for key in formatdict.keys(): + if key in fkeys: + formatdict[key] = formatter[key] + + try: + format_function = a._format + msg = "The `_format` attribute is deprecated in Numpy 2.0 and " \ + "will be removed in 2.1. Use the `formatter` kw instead." 
+ import warnings + warnings.warn(msg, DeprecationWarning) + except AttributeError: + # find the right formatting function for the array + dtypeobj = a.dtype.type + if issubclass(dtypeobj, _nt.bool_): + format_function = formatdict['bool'] + elif issubclass(dtypeobj, _nt.integer): + #if issubclass(dtypeobj, _nt.timedelta64): + # format_function = formatdict['timedelta'] + #else: + format_function = formatdict['int'] + elif issubclass(dtypeobj, _nt.floating): + #if issubclass(dtypeobj, _nt.longfloat): + # format_function = formatdict['longfloat'] + #else: + format_function = formatdict['float'] + elif issubclass(dtypeobj, _nt.complexfloating): + if issubclass(dtypeobj, _nt.clongfloat): + format_function = formatdict['longcomplexfloat'] + else: + format_function = formatdict['complexfloat'] + elif issubclass(dtypeobj, (_nt.unicode_, _nt.string_)): + format_function = formatdict['numpystr'] + elif issubclass(dtypeobj, _nt.datetime64): + format_function = formatdict['datetime'] + else: + format_function = formatdict['str'] + + # skip over "[" + next_line_prefix = " " + # skip over array( + next_line_prefix += " "*len(prefix) + + lst = _formatArray(a, format_function, len(a.shape), max_line_width, + next_line_prefix, separator, + _summaryEdgeItems, summary_insert)[:-1] + return lst + +def _convert_arrays(obj): + import numeric as _nc + newtup = [] + for k in obj: + if isinstance(k, _nc.ndarray): + k = k.tolist() + elif isinstance(k, tuple): + k = _convert_arrays(k) + newtup.append(k) + return tuple(newtup) + + +def array2string(a, max_line_width=None, precision=None, + suppress_small=None, separator=' ', prefix="", + style=repr, formatter=None): + """ + Return a string representation of an array. + + Parameters + ---------- + a : ndarray + Input array. + max_line_width : int, optional + The maximum number of columns the string should span. Newline + characters splits the string appropriately after array elements. + precision : int, optional + Floating point precision. Default is the current printing + precision (usually 8), which can be altered using `set_printoptions`. + suppress_small : bool, optional + Represent very small numbers as zero. A number is "very small" if it + is smaller than the current printing precision. + separator : str, optional + Inserted between elements. + prefix : str, optional + An array is typically printed as:: + + 'prefix(' + array2string(a) + ')' + + The length of the prefix string is used to align the + output correctly. + style : function, optional + A function that accepts an ndarray and returns a string. Used only + when the shape of `a` is equal to ``()``, i.e. for 0-D arrays. + formatter : dict of callables, optional + If not None, the keys should indicate the type(s) that the respective + formatting function applies to. Callables should return a string. + Types that are not specified (by their corresponding keys) are handled + by the default formatters. 
Individual types for which a formatter + can be set are:: + + - 'bool' + - 'int' + - 'timedelta' : a `numpy.timedelta64` + - 'datetime' : a `numpy.datetime64` + - 'float' + - 'longfloat' : 128-bit floats + - 'complexfloat' + - 'longcomplexfloat' : composed of two 128-bit floats + - 'numpy_str' : types `numpy.string_` and `numpy.unicode_` + - 'str' : all other strings + + Other keys that can be used to set a group of types at once are:: + + - 'all' : sets all types + - 'int_kind' : sets 'int' + - 'float_kind' : sets 'float' and 'longfloat' + - 'complex_kind' : sets 'complexfloat' and 'longcomplexfloat' + - 'str_kind' : sets 'str' and 'numpystr' + + Returns + ------- + array_str : str + String representation of the array. + + Raises + ------ + TypeError : if a callable in `formatter` does not return a string. + + See Also + -------- + array_str, array_repr, set_printoptions, get_printoptions + + Notes + ----- + If a formatter is specified for a certain type, the `precision` keyword is + ignored for that type. + + Examples + -------- + >>> x = np.array([1e-16,1,2,3]) + >>> print np.array2string(x, precision=2, separator=',', + ... suppress_small=True) + [ 0., 1., 2., 3.] + + >>> x = np.arange(3.) + >>> np.array2string(x, formatter={'float_kind':lambda x: "%.2f" % x}) + '[0.00 1.00 2.00]' + + >>> x = np.arange(3) + >>> np.array2string(x, formatter={'int':lambda x: hex(x)}) + '[0x0L 0x1L 0x2L]' + + """ + + if a.shape == (): + x = a.item() + if isna(x): + lst = str(x).replace('NA', _na_str, 1) + else: + try: + lst = a._format(x) + msg = "The `_format` attribute is deprecated in Numpy " \ + "2.0 and will be removed in 2.1. Use the " \ + "`formatter` kw instead." + import warnings + warnings.warn(msg, DeprecationWarning) + except AttributeError: + if isinstance(x, tuple): + x = _convert_arrays(x) + lst = style(x) + elif reduce(product, a.shape) == 0: + # treat as a null array if any of shape elements == 0 + lst = "[]" + else: + lst = _array2string(a, max_line_width, precision, suppress_small, + separator, prefix, formatter=formatter) + return lst + +def _extendLine(s, line, word, max_line_len, next_line_prefix): + if len(line.rstrip()) + len(word.rstrip()) >= max_line_len: + s += line.rstrip() + "\n" + line = next_line_prefix + line += word + return s, line + + +def _formatArray(a, format_function, rank, max_line_len, + next_line_prefix, separator, edge_items, summary_insert): + """formatArray is designed for two modes of operation: + + 1. Full output + + 2. 
Summarized output + + """ + if rank == 0: + obj = a.item() + if isinstance(obj, tuple): + obj = _convert_arrays(obj) + return str(obj) + + if summary_insert and 2*edge_items < len(a): + leading_items, trailing_items, summary_insert1 = \ + edge_items, edge_items, summary_insert + else: + leading_items, trailing_items, summary_insert1 = 0, len(a), "" + + if rank == 1: + s = "" + line = next_line_prefix + for i in xrange(leading_items): + word = format_function(a[i]) + separator + s, line = _extendLine(s, line, word, max_line_len, next_line_prefix) + + if summary_insert1: + s, line = _extendLine(s, line, summary_insert1, max_line_len, next_line_prefix) + + for i in xrange(trailing_items, 1, -1): + word = format_function(a[-i]) + separator + s, line = _extendLine(s, line, word, max_line_len, next_line_prefix) + + word = format_function(a[-1]) + s, line = _extendLine(s, line, word, max_line_len, next_line_prefix) + s += line + "]\n" + s = '[' + s[len(next_line_prefix):] + else: + s = '[' + sep = separator.rstrip() + for i in xrange(leading_items): + if i > 0: + s += next_line_prefix + s += _formatArray(a[i], format_function, rank-1, max_line_len, + " " + next_line_prefix, separator, edge_items, + summary_insert) + s = s.rstrip() + sep.rstrip() + '\n'*max(rank-1,1) + + if summary_insert1: + s += next_line_prefix + summary_insert1 + "\n" + + for i in xrange(trailing_items, 1, -1): + if leading_items or i != trailing_items: + s += next_line_prefix + s += _formatArray(a[-i], format_function, rank-1, max_line_len, + " " + next_line_prefix, separator, edge_items, + summary_insert) + s = s.rstrip() + sep.rstrip() + '\n'*max(rank-1,1) + if leading_items or trailing_items > 1: + s += next_line_prefix + s += _formatArray(a[-1], format_function, rank-1, max_line_len, + " " + next_line_prefix, separator, edge_items, + summary_insert).rstrip()+']\n' + return s + +class FloatFormat(object): + def __init__(self, data, precision, suppress_small, sign=False): + self.precision = precision + self.suppress_small = suppress_small + self.sign = sign + self.exp_format = False + self.large_exponent = False + self.max_str_len = 0 + #try: + self.fillFormat(data) + #except (TypeError, NotImplementedError): + # if reduce(data) fails, this instance will not be called, just + # instantiated in formatdict. + #pass + + def fillFormat(self, data): + import numeric as _nc + # XXX pypy unimplemented + #errstate = _nc.seterr(all='ignore') + try: + special = isnan(data) | isinf(data) | isna(data) + special[isna(data)] = False + valid = not_equal(data, 0) & ~special + valid[isna(data)] = False + non_zero = absolute(data.compress(valid)) + if len(non_zero) == 0: + max_val = 0. + min_val = 0. 
+ else: + max_val = maximum.reduce(non_zero, skipna=True) + min_val = minimum.reduce(non_zero, skipna=True) + if max_val >= 1.e8: + self.exp_format = True + if not self.suppress_small and (min_val < 0.0001 + or max_val/min_val > 1000.): + self.exp_format = True + finally: + pass + # XXX pypy unimplemented + #_nc.seterr(**errstate) + + if self.exp_format: + self.large_exponent = 0 < min_val < 1e-99 or max_val >= 1e100 + self.max_str_len = 8 + self.precision + if self.large_exponent: + self.max_str_len += 1 + if self.sign: + format = '%+' + else: + format = '%' + format = format + '%d.%de' % (self.max_str_len, self.precision) + else: + format = '%%.%df' % (self.precision,) + if len(non_zero): + precision = max([_digits(x, self.precision, format) + for x in non_zero]) + else: + precision = 0 + precision = min(self.precision, precision) + self.max_str_len = len(str(int(max_val))) + precision + 2 + if special.any(): + self.max_str_len = max(self.max_str_len, + len(_nan_str), + len(_inf_str)+1, + len(_na_str)) + if self.sign: + format = '%#+' + else: + format = '%#' + format = format + '%d.%df' % (self.max_str_len, precision) + + self.special_fmt = '%%%ds' % (self.max_str_len,) + self.format = format + + def __call__(self, x, strip_zeros=True): + import numeric as _nc + #err = _nc.seterr(invalid='ignore') + try: + if isna(x): + return self.special_fmt % (str(x).replace('NA', _na_str, 1),) + elif isnan(x): + if self.sign: + return self.special_fmt % ('+' + _nan_str,) + else: + return self.special_fmt % (_nan_str,) + elif isinf(x): + if x > 0: + if self.sign: + return self.special_fmt % ('+' + _inf_str,) + else: + return self.special_fmt % (_inf_str,) + else: + return self.special_fmt % ('-' + _inf_str,) + finally: + pass + #_nc.seterr(**err) + + s = self.format % x + if self.large_exponent: + # 3-digit exponent + expsign = s[-3] + if expsign == '+' or expsign == '-': + s = s[1:-2] + '0' + s[-2:] + elif self.exp_format: + # 2-digit exponent + if s[-3] == '0': + s = ' ' + s[:-3] + s[-2:] + elif strip_zeros: + z = s.rstrip('0') + s = z + ' '*(len(s)-len(z)) + return s + + +def _digits(x, precision, format): + s = format % x + z = s.rstrip('0') + return precision - len(s) + len(z) + + +_MAXINT = sys.maxint +_MININT = -sys.maxint-1 +class IntegerFormat(object): + def __init__(self, data): + try: + max_str_len = max(len(str(maximum.reduce(data, skipna=True))), + len(str(minimum.reduce(data, skipna=True)))) + self.format = '%' + str(max_str_len) + 'd' + except TypeError, NotImplementedError: + # if reduce(data) fails, this instance will not be called, just + # instantiated in formatdict. 
+ pass + except ValueError: + # this occurs when everything is NA + pass + + def __call__(self, x): + if isna(x): + return str(x).replace('NA', _na_str, 1) + elif _MININT < x < _MAXINT: + return self.format % x + else: + return "%s" % x + +class LongFloatFormat(object): + # XXX Have to add something to determine the width to use a la FloatFormat + # Right now, things won't line up properly + def __init__(self, precision, sign=False): + self.precision = precision + self.sign = sign + + def __call__(self, x): + if isna(x): + return str(x).replace('NA', _na_str, 1) + elif isnan(x): + if self.sign: + return '+' + _nan_str + else: + return ' ' + _nan_str + elif isinf(x): + if x > 0: + if self.sign: + return '+' + _inf_str + else: + return ' ' + _inf_str + else: + return '-' + _inf_str + elif x >= 0: + if self.sign: + return '+' + format_longfloat(x, self.precision) + else: + return ' ' + format_longfloat(x, self.precision) + else: + return format_longfloat(x, self.precision) + + +class LongComplexFormat(object): + def __init__(self, precision): + self.real_format = LongFloatFormat(precision) + self.imag_format = LongFloatFormat(precision, sign=True) + + def __call__(self, x): + if isna(x): + return str(x).replace('NA', _na_str, 1) + else: + r = self.real_format(x.real) + i = self.imag_format(x.imag) + return r + i + 'j' + + +class ComplexFormat(object): + def __init__(self, x, precision, suppress_small): + self.real_format = FloatFormat(x.real, precision, suppress_small) + self.imag_format = FloatFormat(x.imag, precision, suppress_small, + sign=True) + + def __call__(self, x): + if isna(x): + return str(x).replace('NA', _na_str, 1) + else: + r = self.real_format(x.real, strip_zeros=False) + i = self.imag_format(x.imag, strip_zeros=False) + if not self.imag_format.exp_format: + z = i.rstrip('0') + i = z + 'j' + ' '*(len(i)-len(z)) + else: + i = i + 'j' + return r + i + +class DatetimeFormat(object): + def __init__(self, x, unit=None, + timezone=None, casting='same_kind'): + # Get the unit from the dtype + if unit is None: + if x.dtype.kind == 'M': + unit = datetime_data(x.dtype)[0] + else: + unit = 's' + + # If timezone is default, make it 'local' or 'UTC' based on the unit + if timezone is None: + # Date units -> UTC, time units -> local + if unit in ('Y', 'M', 'W', 'D'): + self.timezone = 'UTC' + else: + self.timezone = 'local' + else: + self.timezone = timezone + self.unit = unit + self.casting = casting + + def __call__(self, x): + if isna(x): + return str(x).replace('NA', _na_str, 1) + else: + return "'%s'" % datetime_as_string(x, + unit=self.unit, + timezone=self.timezone, + casting=self.casting) + +class TimedeltaFormat(object): + def __init__(self, data): + if data.dtype.kind == 'm': + v = data.view('i8') + max_str_len = max(len(str(maximum.reduce(v))), + len(str(minimum.reduce(v)))) + self.format = '%' + str(max_str_len) + 'd' + + def __call__(self, x): + if isna(x): + return str(x).replace('NA', _na_str, 1) + else: + return self.format % x.astype('i8') + diff --git a/lib_pypy/numpypy/fromnumeric.py b/lib_pypy/numpypy/core/fromnumeric.py rename from lib_pypy/numpypy/fromnumeric.py rename to lib_pypy/numpypy/core/fromnumeric.py --- a/lib_pypy/numpypy/fromnumeric.py +++ b/lib_pypy/numpypy/core/fromnumeric.py @@ -30,7 +30,7 @@ 'rank', 'size', 'around', 'round_', 'mean', 'std', 'var', 'squeeze', 'amax', 'amin', ] - + def take(a, indices, axis=None, out=None, mode='raise'): """ Take elements from an array along an axis. 
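The hunks below switch several stubs from ``raise NotImplemented(...)`` to ``raise NotImplementedError(...)``. The difference matters: NotImplemented is the singleton that rich-comparison and arithmetic special methods are supposed to *return*, not an exception class, so the old spelling never raises the intended error at all. A minimal illustration, independent of the patch (the stub names are invented)::

    def take_stub(*args, **kwargs):
        # old spelling: NotImplemented is not callable, so this line fails
        # with "TypeError: 'NotImplementedType' object is not callable"
        raise NotImplemented('Waiting on interp level method')

    def take_stub_fixed(*args, **kwargs):
        # new spelling: a real exception class carrying the message
        raise NotImplementedError('Waiting on interp level method')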
@@ -85,7 +85,7 @@ array([4, 3, 6]) """ - raise NotImplemented('Waiting on interp level method') + raise NotImplementedError('Waiting on interp level method') # not deprecated --- copy if necessary, view otherwise @@ -149,6 +149,7 @@ [5, 6]]) """ + assert order == 'C' if not hasattr(a, 'reshape'): a = numpypy.array(a) return a.reshape(newshape) @@ -273,7 +274,7 @@ [-1, -2, -3, -4, -5]]]) """ - raise NotImplemented('Waiting on interp level method') + raise NotImplementedError('Waiting on interp level method') def repeat(a, repeats, axis=None): @@ -315,7 +316,7 @@ [3, 4]]) """ - raise NotImplemented('Waiting on interp level method') + raise NotImplementedError('Waiting on interp level method') def put(a, ind, v, mode='raise'): @@ -366,7 +367,7 @@ array([ 0, 1, 2, 3, -5]) """ - raise NotImplemented('Waiting on interp level method') + raise NotImplementedError('Waiting on interp level method') def swapaxes(a, axis1, axis2): @@ -410,7 +411,7 @@ [3, 7]]]) """ - raise NotImplemented('Waiting on interp level method') + raise NotImplementedError('Waiting on interp level method') def transpose(a, axes=None): @@ -451,8 +452,11 @@ (2, 1, 3) """ - raise NotImplemented('Waiting on interp level method') - + if axes is not None: + raise NotImplementedError('No "axes" arg yet.') + if not hasattr(a, 'T'): + a = numpypy.array(a) + return a.T def sort(a, axis=-1, kind='quicksort', order=None): """ @@ -553,7 +557,7 @@ dtype=[('name', '|S10'), ('height', '>> a = [1, 2] + >>> np.asanyarray(a) + array([1, 2]) + + Instances of `ndarray` subclasses are passed through as-is: + + >>> a = np.matrix([1, 2]) + >>> np.asanyarray(a) is a + True + + """ + return array(a, dtype, copy=False, order=order, subok=True, + maskna=maskna, ownmaskna=ownmaskna) + +def base_repr(number, base=2, padding=0): + """ + Return a string representation of a number in the given base system. + + Parameters + ---------- + number : int + The value to convert. Only positive values are handled. + base : int, optional + Convert `number` to the `base` number system. The valid range is 2-36, + the default value is 2. + padding : int, optional + Number of zeros padded on the left. Default is 0 (no padding). + + Returns + ------- + out : str + String representation of `number` in `base` system. + + See Also + -------- + binary_repr : Faster version of `base_repr` for base 2. + + Examples + -------- + >>> np.base_repr(5) + '101' + >>> np.base_repr(6, 5) + '11' + >>> np.base_repr(7, base=5, padding=3) + '00012' + + >>> np.base_repr(10, base=16) + 'A' + >>> np.base_repr(32, base=16) + '20' + + """ + digits = '0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ' + if base > len(digits): + raise ValueError("Bases greater than 36 not handled in base_repr.") + + num = abs(number) + res = [] + while num: + res.append(digits[num % base]) + num //= base + if padding: + res.append('0' * padding) + if number < 0: + res.append('-') + return ''.join(reversed(res or '0')) + +_typelessdata = [int_, float_]#, complex_] +# XXX +#if issubclass(intc, int): +# _typelessdata.append(intc) + +#if issubclass(longlong, int): +# _typelessdata.append(longlong) + +def array_repr(arr, max_line_width=None, precision=None, suppress_small=None): + """ + Return the string representation of an array. + + Parameters + ---------- + arr : ndarray + Input array. + max_line_width : int, optional + The maximum number of columns the string should span. Newline + characters split the string appropriately after array elements. + precision : int, optional + Floating point precision. 
Default is the current printing precision + (usually 8), which can be altered using `set_printoptions`. + suppress_small : bool, optional + Represent very small numbers as zero, default is False. Very small + is defined by `precision`, if the precision is 8 then + numbers smaller than 5e-9 are represented as zero. + + Returns + ------- + string : str + The string representation of an array. + + See Also + -------- + array_str, array2string, set_printoptions + + Examples + -------- + >>> np.array_repr(np.array([1,2])) + 'array([1, 2])' + >>> np.array_repr(np.ma.array([0.])) + 'MaskedArray([ 0.])' + >>> np.array_repr(np.array([], np.int32)) + 'array([], dtype=int32)' + + >>> x = np.array([1e-6, 4e-7, 2, 3]) + >>> np.array_repr(x, precision=6, suppress_small=True) + 'array([ 0.000001, 0. , 2. , 3. ])' + + """ + if arr.size > 0 or arr.shape==(0,): + lst = array2string(arr, max_line_width, precision, suppress_small, + ', ', "array(") + else: # show zero-length shape unless it is (0,) + lst = "[], shape=%s" % (repr(arr.shape),) + + if arr.__class__ is not ndarray: + cName= arr.__class__.__name__ + else: + cName = "array" + + skipdtype = (arr.dtype.type in _typelessdata) and arr.size > 0 + + # XXX pypy lacks support + if 0 and arr.flags.maskna: + whichna = isna(arr) + # If nothing is NA, explicitly signal the NA-mask + if not any(whichna): + lst += ", maskna=True" + # If everything is NA, can't skip the dtype + if skipdtype and all(whichna): + skipdtype = False + + if skipdtype: + return "%s(%s)" % (cName, lst) + else: + typename = arr.dtype.name + # Quote typename in the output if it is "complex". + if typename and not (typename[0].isalpha() and typename.isalnum()): + typename = "'%s'" % typename + + lf = '' + if 0: # or issubclass(arr.dtype.type, flexible): + if arr.dtype.names: + typename = "%s" % str(arr.dtype) + else: + typename = "'%s'" % str(arr.dtype) + lf = '\n'+' '*len("array(") + return cName + "(%s, %sdtype=%s)" % (lst, lf, typename) + +def array_str(a, max_line_width=None, precision=None, suppress_small=None): + """ + Return a string representation of the data in an array. + + The data in the array is returned as a single string. This function is + similar to `array_repr`, the difference being that `array_repr` also + returns information on the kind of array and its data type. + + Parameters + ---------- + a : ndarray + Input array. + max_line_width : int, optional + Inserts newlines if text is longer than `max_line_width`. The + default is, indirectly, 75. + precision : int, optional + Floating point precision. Default is the current printing precision + (usually 8), which can be altered using `set_printoptions`. + suppress_small : bool, optional + Represent numbers "very close" to zero as zero; default is False. + Very close is defined by precision: if the precision is 8, e.g., + numbers smaller (in absolute value) than 5e-9 are represented as + zero. + + See Also + -------- + array2string, array_repr, set_printoptions + + Examples + -------- + >>> np.array_str(np.arange(3)) + '[0 1 2]' + + """ + return array2string(a, max_line_width, precision, suppress_small, ' ', "", str) + +def set_string_function(f, repr=True): + """ + Set a Python function to be used when pretty printing arrays. + + Parameters + ---------- + f : function or None + Function to be used to pretty print arrays. The function should expect + a single array argument and return a string of the representation of + the array. If None, the function is reset to the default NumPy function + to print arrays. 
+ repr : bool, optional + If True (default), the function for pretty printing (``__repr__``) + is set, if False the function that returns the default string + representation (``__str__``) is set. + + See Also + -------- + set_printoptions, get_printoptions + + Examples + -------- + >>> def pprint(arr): + ... return 'HA! - What are you going to do now?' + ... + >>> np.set_string_function(pprint) + >>> a = np.arange(10) + >>> a + HA! - What are you going to do now? + >>> print a + [0 1 2 3 4 5 6 7 8 9] + + We can reset the function to the default: + + >>> np.set_string_function(None) + >>> a + array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) + + `repr` affects either pretty printing or normal string representation. + Note that ``__repr__`` is still affected by setting ``__str__`` + because the width of each array element in the returned string becomes + equal to the length of the result of ``__str__()``. + + >>> x = np.arange(4) + >>> np.set_string_function(lambda x:'random', repr=False) + >>> x.__str__() + 'random' + >>> x.__repr__() + 'array([ 0, 1, 2, 3])' + + """ + if f is None: + if repr: + return multiarray.set_string_function(array_repr, 1) + else: + return multiarray.set_string_function(array_str, 0) + else: + return multiarray.set_string_function(f, repr) + +set_string_function(array_str, 0) +set_string_function(array_repr, 1) + +little_endian = (sys.byteorder == 'little') diff --git a/pypy/annotation/unaryop.py b/pypy/annotation/unaryop.py --- a/pypy/annotation/unaryop.py +++ b/pypy/annotation/unaryop.py @@ -440,6 +440,12 @@ def method_popitem(dct): return dct.getanyitem('items') + def method_pop(dct, s_key, s_dfl=None): + dct.dictdef.generalize_key(s_key) + if s_dfl is not None: + dct.dictdef.generalize_value(s_dfl) + return dct.dictdef.read_value() + def _can_only_throw(dic, *ignore): if dic1.dictdef.dictkey.custom_eq_hash: return None # r_dict: can throw anything diff --git a/pypy/bin/py.py b/pypy/bin/py.py --- a/pypy/bin/py.py +++ b/pypy/bin/py.py @@ -76,6 +76,8 @@ config.objspace.suggest(allworkingmodules=False) if config.objspace.allworkingmodules: pypyoption.enable_allworkingmodules(config) + if config.objspace.usemodules._continuation: + config.translation.continuation = True if config.objspace.usemodules.thread: config.translation.thread = True diff --git a/pypy/config/pypyoption.py b/pypy/config/pypyoption.py --- a/pypy/config/pypyoption.py +++ b/pypy/config/pypyoption.py @@ -340,7 +340,7 @@ requires=[("objspace.std.builtinshortcut", True)]), BoolOption("withidentitydict", "track types that override __hash__, __eq__ or __cmp__ and use a special dict strategy for those which do not", - default=True), + default=False), ]), ]) @@ -370,6 +370,7 @@ config.objspace.std.suggest(getattributeshortcut=True) config.objspace.std.suggest(newshortcut=True) config.objspace.std.suggest(withspecialisedtuple=True) + config.objspace.std.suggest(withidentitydict=True) #if not IS_64_BITS: # config.objspace.std.suggest(withsmalllong=True) diff --git a/pypy/doc/getting-started.rst b/pypy/doc/getting-started.rst --- a/pypy/doc/getting-started.rst +++ b/pypy/doc/getting-started.rst @@ -53,11 +53,11 @@ PyPy is ready to be executed as soon as you unpack the tarball or the zip file, with no need to install it in any specific location:: - $ tar xf pypy-1.6-linux.tar.bz2 + $ tar xf pypy-1.7-linux.tar.bz2 - $ ./pypy-1.6/bin/pypy + $ ./pypy-1.7/bin/pypy Python 2.7.1 (?, Apr 27 2011, 12:44:21) - [PyPy 1.6.0 with GCC 4.4.3] on linux2 + [PyPy 1.7.0 with GCC 4.4.3] on linux2 Type "help", "copyright", "credits" or 
"license" for more information. And now for something completely different: ``implementing LOGO in LOGO: "turtles all the way down"'' @@ -75,14 +75,14 @@ $ curl -O https://raw.github.com/pypa/pip/master/contrib/get-pip.py - $ ./pypy-1.6/bin/pypy distribute_setup.py + $ ./pypy-1.7/bin/pypy distribute_setup.py - $ ./pypy-1.6/bin/pypy get-pip.py + $ ./pypy-1.7/bin/pypy get-pip.py - $ ./pypy-1.6/bin/pip install pygments # for example + $ ./pypy-1.7/bin/pip install pygments # for example -3rd party libraries will be installed in ``pypy-1.6/site-packages``, and -the scripts in ``pypy-1.6/bin``. +3rd party libraries will be installed in ``pypy-1.7/site-packages``, and +the scripts in ``pypy-1.7/bin``. Installing using virtualenv --------------------------- diff --git a/pypy/doc/translation.rst b/pypy/doc/translation.rst --- a/pypy/doc/translation.rst +++ b/pypy/doc/translation.rst @@ -155,7 +155,7 @@ function. The two input variables are the exception class and the exception value, respectively. (No other block will actually link to the exceptblock if the function does not - explicitely raise exceptions.) + explicitly raise exceptions.) ``Block`` @@ -325,7 +325,7 @@ Mutable objects need special treatment during annotation, because the annotation of contained values needs to be possibly updated to account for mutation operations, and consequently the annotation information -reflown through the relevant parts of the flow the graphs. +reflown through the relevant parts of the flow graphs. * ``SomeList`` stands for a list of homogeneous type (i.e. all the elements of the list are represented by a single common ``SomeXxx`` @@ -503,8 +503,8 @@ Since RPython is a garbage collected language there is a lot of heap memory allocation going on all the time, which would either not occur at all in a more -traditional explicitely managed language or results in an object which dies at -a time known in advance and can thus be explicitely deallocated. For example a +traditional explicitly managed language or results in an object which dies at +a time known in advance and can thus be explicitly deallocated. For example a loop of the following form:: for i in range(n): @@ -696,7 +696,7 @@ So far it is the second most mature high level backend after GenCLI: it still can't translate the full Standard Interpreter, but after the -Leysin sprint we were able to compile and run the rpytstone and +Leysin sprint we were able to compile and run the rpystone and richards benchmarks. GenJVM is almost entirely the work of Niko Matsakis, who worked on it diff --git a/pypy/interpreter/eval.py b/pypy/interpreter/eval.py --- a/pypy/interpreter/eval.py +++ b/pypy/interpreter/eval.py @@ -2,7 +2,6 @@ This module defines the abstract base classes that support execution: Code and Frame. 
""" -from pypy.rlib import jit from pypy.interpreter.error import OperationError from pypy.interpreter.baseobjspace import Wrappable diff --git a/pypy/interpreter/executioncontext.py b/pypy/interpreter/executioncontext.py --- a/pypy/interpreter/executioncontext.py +++ b/pypy/interpreter/executioncontext.py @@ -445,6 +445,7 @@ AsyncAction.__init__(self, space) self.dying_objects = [] self.finalizers_lock_count = 0 + self.enabled_at_app_level = True def register_callback(self, w_obj, callback, descrname): self.dying_objects.append((w_obj, callback, descrname)) diff --git a/pypy/interpreter/generator.py b/pypy/interpreter/generator.py --- a/pypy/interpreter/generator.py +++ b/pypy/interpreter/generator.py @@ -162,7 +162,8 @@ # generate 2 versions of the function and 2 jit drivers. def _create_unpack_into(): jitdriver = jit.JitDriver(greens=['pycode'], - reds=['self', 'frame', 'results']) + reds=['self', 'frame', 'results'], + name='unpack_into') def unpack_into(self, results): """This is a hack for performance: runs the generator and collects all produced items in a list.""" @@ -196,4 +197,4 @@ self.frame = None return unpack_into unpack_into = _create_unpack_into() - unpack_into_w = _create_unpack_into() \ No newline at end of file + unpack_into_w = _create_unpack_into() diff --git a/pypy/jit/backend/llsupport/test/test_runner.py b/pypy/jit/backend/llsupport/test/test_runner.py --- a/pypy/jit/backend/llsupport/test/test_runner.py +++ b/pypy/jit/backend/llsupport/test/test_runner.py @@ -8,6 +8,12 @@ class MyLLCPU(AbstractLLCPU): supports_floats = True + + class assembler(object): + @staticmethod + def set_debug(flag): + pass + def compile_loop(self, inputargs, operations, looptoken): py.test.skip("llsupport test: cannot compile operations") diff --git a/pypy/jit/backend/test/runner_test.py b/pypy/jit/backend/test/runner_test.py --- a/pypy/jit/backend/test/runner_test.py +++ b/pypy/jit/backend/test/runner_test.py @@ -17,6 +17,7 @@ from pypy.rpython.llinterp import LLException from pypy.jit.codewriter import heaptracker, longlong from pypy.rlib.rarithmetic import intmask +from pypy.jit.backend.detect_cpu import autodetect_main_model_and_size def boxfloat(x): return BoxFloat(longlong.getfloatstorage(x)) @@ -27,6 +28,9 @@ class Runner(object): + add_loop_instruction = ['overload for a specific cpu'] + bridge_loop_instruction = ['overload for a specific cpu'] + def execute_operation(self, opname, valueboxes, result_type, descr=None): inputargs, operations = self._get_single_operation_list(opname, result_type, @@ -547,6 +551,28 @@ res = self.execute_operation(rop.CALL, [funcbox] + map(BoxInt, args), 'int', descr=calldescr) assert res.value == func(*args) + def test_call_box_func(self): + def a(a1, a2): + return a1 + a2 + def b(b1, b2): + return b1 * b2 + + arg1 = 40 + arg2 = 2 + for f in [a, b]: + TP = lltype.Signed + FPTR = self.Ptr(self.FuncType([TP, TP], TP)) + func_ptr = llhelper(FPTR, f) + FUNC = deref(FPTR) + funcconst = self.get_funcbox(self.cpu, func_ptr) + funcbox = funcconst.clonebox() + calldescr = self.cpu.calldescrof(FUNC, FUNC.ARGS, FUNC.RESULT, + EffectInfo.MOST_GENERAL) + res = self.execute_operation(rop.CALL, + [funcbox, BoxInt(arg1), BoxInt(arg2)], + 'int', descr=calldescr) + assert res.getint() == f(arg1, arg2) + def test_call_stack_alignment(self): # test stack alignment issues, notably for Mac OS/X. # also test the ordering of the arguments. 
@@ -1868,6 +1894,7 @@ values.append(descr) values.append(self.cpu.get_latest_value_int(0)) values.append(self.cpu.get_latest_value_int(1)) + values.append(token) FUNC = self.FuncType([lltype.Signed, lltype.Signed], lltype.Void) func_ptr = llhelper(lltype.Ptr(FUNC), maybe_force) @@ -1898,7 +1925,8 @@ assert fail.identifier == 1 assert self.cpu.get_latest_value_int(0) == 1 assert self.cpu.get_latest_value_int(1) == 10 - assert values == [faildescr, 1, 10] + token = self.cpu.get_latest_force_token() + assert values == [faildescr, 1, 10, token] def test_force_operations_returning_int(self): values = [] @@ -1907,6 +1935,7 @@ self.cpu.force(token) values.append(self.cpu.get_latest_value_int(0)) values.append(self.cpu.get_latest_value_int(2)) + values.append(token) return 42 FUNC = self.FuncType([lltype.Signed, lltype.Signed], lltype.Signed) @@ -1940,7 +1969,8 @@ assert self.cpu.get_latest_value_int(0) == 1 assert self.cpu.get_latest_value_int(1) == 42 assert self.cpu.get_latest_value_int(2) == 10 - assert values == [1, 10] + token = self.cpu.get_latest_force_token() + assert values == [1, 10, token] def test_force_operations_returning_float(self): values = [] @@ -1949,6 +1979,7 @@ self.cpu.force(token) values.append(self.cpu.get_latest_value_int(0)) values.append(self.cpu.get_latest_value_int(2)) + values.append(token) return 42.5 FUNC = self.FuncType([lltype.Signed, lltype.Signed], lltype.Float) @@ -1984,7 +2015,8 @@ x = self.cpu.get_latest_value_float(1) assert longlong.getrealfloat(x) == 42.5 assert self.cpu.get_latest_value_int(2) == 10 - assert values == [1, 10] + token = self.cpu.get_latest_force_token() + assert values == [1, 10, token] def test_call_to_c_function(self): from pypy.rlib.libffi import CDLL, types, ArgChain, FUNCFLAG_CDECL @@ -2974,6 +3006,56 @@ res = self.cpu.get_latest_value_int(0) assert res == -10 + def test_compile_asmlen(self): + from pypy.jit.backend.llsupport.llmodel import AbstractLLCPU + if not isinstance(self.cpu, AbstractLLCPU): + py.test.skip("pointless test on non-asm") + from pypy.jit.backend.x86.tool.viewcode import machine_code_dump + import ctypes + ops = """ + [i2] + i0 = same_as(i2) # but forced to be in a register + label(i0, descr=1) + i1 = int_add(i0, i0) + guard_true(i1, descr=faildesr) [i1] + jump(i1, descr=1) + """ + faildescr = BasicFailDescr(2) + loop = parse(ops, self.cpu, namespace=locals()) + faildescr = loop.operations[-2].getdescr() + jumpdescr = loop.operations[-1].getdescr() + bridge_ops = """ + [i0] + jump(i0, descr=jumpdescr) + """ + bridge = parse(bridge_ops, self.cpu, namespace=locals()) + looptoken = JitCellToken() + self.cpu.assembler.set_debug(False) + info = self.cpu.compile_loop(loop.inputargs, loop.operations, looptoken) + bridge_info = self.cpu.compile_bridge(faildescr, bridge.inputargs, + bridge.operations, + looptoken) + self.cpu.assembler.set_debug(True) # always on untranslated + assert info.asmlen != 0 + cpuname = autodetect_main_model_and_size() + # XXX we have to check the precise assembler, otherwise + # we don't quite know if borders are correct + + def checkops(mc, ops): + assert len(mc) == len(ops) + for i in range(len(mc)): + assert mc[i].split("\t")[-1].startswith(ops[i]) + + data = ctypes.string_at(info.asmaddr, info.asmlen) + mc = list(machine_code_dump(data, info.asmaddr, cpuname)) + lines = [line for line in mc if line.count('\t') == 2] + checkops(lines, self.add_loop_instructions) + data = ctypes.string_at(bridge_info.asmaddr, bridge_info.asmlen) + mc = list(machine_code_dump(data, bridge_info.asmaddr, cpuname)) 
+ lines = [line for line in mc if line.count('\t') == 2] + checkops(lines, self.bridge_loop_instructions) + + def test_compile_bridge_with_target(self): # This test creates a loopy piece of code in a bridge, and builds another # unrelated loop that ends in a jump directly to this loopy bit of code. diff --git a/pypy/jit/backend/x86/assembler.py b/pypy/jit/backend/x86/assembler.py --- a/pypy/jit/backend/x86/assembler.py +++ b/pypy/jit/backend/x86/assembler.py @@ -7,6 +7,7 @@ from pypy.rpython.lltypesystem import lltype, rffi, rstr, llmemory from pypy.rpython.lltypesystem.lloperation import llop from pypy.rpython.annlowlevel import llhelper +from pypy.rlib.jit import AsmInfo from pypy.jit.backend.model import CompiledLoopToken from pypy.jit.backend.x86.regalloc import (RegAlloc, get_ebp_ofs, _get_scale, gpr_reg_mgr_cls, _valid_addressing_size) @@ -411,6 +412,7 @@ '''adds the following attributes to looptoken: _x86_function_addr (address of the generated func, as an int) _x86_loop_code (debug: addr of the start of the ResOps) + _x86_fullsize (debug: full size including failure) _x86_debug_checksum ''' # XXX this function is too longish and contains some code @@ -476,7 +478,8 @@ name = "Loop # %s: %s" % (looptoken.number, loopname) self.cpu.profile_agent.native_code_written(name, rawstart, full_size) - return ops_offset + return AsmInfo(ops_offset, rawstart + looppos, + size_excluding_failure_stuff - looppos) def assemble_bridge(self, faildescr, inputargs, operations, original_loop_token, log): @@ -485,12 +488,7 @@ assert len(set(inputargs)) == len(inputargs) descr_number = self.cpu.get_fail_descr_number(faildescr) - try: - failure_recovery = self._find_failure_recovery_bytecode(faildescr) - except ValueError: - debug_print("Bridge out of guard", descr_number, - "was already compiled!") - return + failure_recovery = self._find_failure_recovery_bytecode(faildescr) self.setup(original_loop_token) if log: @@ -503,6 +501,7 @@ [loc.assembler() for loc in faildescr._x86_debug_faillocs]) regalloc = RegAlloc(self, self.cpu.translate_support_code) fail_depths = faildescr._x86_current_depths + startpos = self.mc.get_relative_pos() operations = regalloc.prepare_bridge(fail_depths, inputargs, arglocs, operations, self.current_clt.allgcrefs) @@ -537,7 +536,7 @@ name = "Bridge # %s" % (descr_number,) self.cpu.profile_agent.native_code_written(name, rawstart, fullsize) - return ops_offset + return AsmInfo(ops_offset, startpos + rawstart, codeendpos - startpos) def write_pending_failure_recoveries(self): # for each pending guard, generate the code of the recovery stub @@ -621,7 +620,10 @@ def _find_failure_recovery_bytecode(self, faildescr): adr_jump_offset = faildescr._x86_adr_jump_offset if adr_jump_offset == 0: - raise ValueError + # This case should be prevented by the logic in compile.py: + # look for CNT_BUSY_FLAG, which disables tracing from a guard + # when another tracing from the same guard is already in progress. 
+ raise BridgeAlreadyCompiled # follow the JMP/Jcond p = rffi.cast(rffi.INTP, adr_jump_offset) adr_target = adr_jump_offset + 4 + rffi.cast(lltype.Signed, p[0]) @@ -810,7 +812,10 @@ target = newlooptoken._x86_function_addr mc = codebuf.MachineCodeBlockWrapper() mc.JMP(imm(target)) - assert mc.get_relative_pos() <= 13 # keep in sync with prepare_loop() + if WORD == 4: # keep in sync with prepare_loop() + assert mc.get_relative_pos() == 5 + else: + assert mc.get_relative_pos() <= 13 mc.copy_to_raw_memory(oldadr) def dump(self, text): @@ -1113,6 +1118,12 @@ for src, dst in singlefloats: self.mc.MOVD(dst, src) # Finally remap the arguments in the main regs + # If x is a register and is in dst_locs, then oups, it needs to + # be moved away: + if x in dst_locs: + src_locs.append(x) + dst_locs.append(r10) + x = r10 remap_frame_layout(self, src_locs, dst_locs, X86_64_SCRATCH_REG) self._regalloc.reserve_param(len(pass_on_stack)) @@ -2037,10 +2048,7 @@ size = sizeloc.value signloc = arglocs[1] - if isinstance(op.getarg(0), Const): - x = imm(op.getarg(0).getint()) - else: - x = arglocs[2] + x = arglocs[2] # the function address if x is eax: tmp = ecx else: @@ -2550,3 +2558,6 @@ def not_implemented(msg): os.write(2, '[x86/asm] %s\n' % msg) raise NotImplementedError(msg) + +class BridgeAlreadyCompiled(Exception): + pass diff --git a/pypy/jit/backend/x86/regalloc.py b/pypy/jit/backend/x86/regalloc.py --- a/pypy/jit/backend/x86/regalloc.py +++ b/pypy/jit/backend/x86/regalloc.py @@ -188,7 +188,10 @@ # note: we need to make a copy of inputargs because possibly_free_vars # is also used on op args, which is a non-resizable list self.possibly_free_vars(list(inputargs)) - self.min_bytes_before_label = 13 + if WORD == 4: # see redirect_call_assembler() + self.min_bytes_before_label = 5 + else: + self.min_bytes_before_label = 13 return operations def prepare_bridge(self, prev_depths, inputargs, arglocs, operations, diff --git a/pypy/jit/backend/x86/runner.py b/pypy/jit/backend/x86/runner.py --- a/pypy/jit/backend/x86/runner.py +++ b/pypy/jit/backend/x86/runner.py @@ -6,7 +6,7 @@ from pypy.jit.codewriter import longlong from pypy.jit.metainterp import history, compile from pypy.jit.backend.x86.assembler import Assembler386 -from pypy.jit.backend.x86.arch import FORCE_INDEX_OFS +from pypy.jit.backend.x86.arch import FORCE_INDEX_OFS, IS_X86_32 from pypy.jit.backend.x86.profagent import ProfileAgent from pypy.jit.backend.llsupport.llmodel import AbstractLLCPU from pypy.jit.backend.x86 import regloc @@ -142,7 +142,9 @@ cast_ptr_to_int._annspecialcase_ = 'specialize:arglltype(0)' cast_ptr_to_int = staticmethod(cast_ptr_to_int) - all_null_registers = lltype.malloc(rffi.LONGP.TO, 24, + all_null_registers = lltype.malloc(rffi.LONGP.TO, + IS_X86_32 and (16+8) # 16 + 8 regs + or (16+16), # 16 + 16 regs flavor='raw', zero=True, immortal=True) diff --git a/pypy/jit/backend/x86/test/test_runner.py b/pypy/jit/backend/x86/test/test_runner.py --- a/pypy/jit/backend/x86/test/test_runner.py +++ b/pypy/jit/backend/x86/test/test_runner.py @@ -33,6 +33,13 @@ # for the individual tests see # ====> ../../test/runner_test.py + add_loop_instructions = ['mov', 'add', 'test', 'je', 'jmp'] + if WORD == 4: + bridge_loop_instructions = ['lea', 'jmp'] + else: + # the 'mov' is part of the 'jmp' so far + bridge_loop_instructions = ['lea', 'mov', 'jmp'] + def setup_method(self, meth): self.cpu = CPU(rtyper=None, stats=FakeStats()) self.cpu.setup_once() @@ -416,7 +423,8 @@ ] inputargs = [i0] debug._log = dlog = debug.DebugLog() - ops_offset = 
self.cpu.compile_loop(inputargs, operations, looptoken) + info = self.cpu.compile_loop(inputargs, operations, looptoken) + ops_offset = info.ops_offset debug._log = None # assert ops_offset is looptoken._x86_ops_offset diff --git a/pypy/jit/backend/x86/tool/viewcode.py b/pypy/jit/backend/x86/tool/viewcode.py --- a/pypy/jit/backend/x86/tool/viewcode.py +++ b/pypy/jit/backend/x86/tool/viewcode.py @@ -39,6 +39,7 @@ def machine_code_dump(data, originaddr, backend_name, label_list=None): objdump_backend_option = { 'x86': 'i386', + 'x86_32': 'i386', 'x86_64': 'x86-64', 'i386': 'i386', } diff --git a/pypy/jit/codewriter/assembler.py b/pypy/jit/codewriter/assembler.py --- a/pypy/jit/codewriter/assembler.py +++ b/pypy/jit/codewriter/assembler.py @@ -81,10 +81,15 @@ if not isinstance(value, (llmemory.AddressAsInt, ComputedIntSymbolic)): value = lltype.cast_primitive(lltype.Signed, value) - if allow_short and -128 <= value <= 127: - # emit the constant as a small integer - self.code.append(chr(value & 0xFF)) - return True + if allow_short: + try: + short_num = -128 <= value <= 127 + except TypeError: # "Symbolics cannot be compared!" + short_num = False + if short_num: + # emit the constant as a small integer + self.code.append(chr(value & 0xFF)) + return True constants = self.constants_i elif kind == 'ref': value = lltype.cast_opaque_ptr(llmemory.GCREF, value) diff --git a/pypy/jit/codewriter/policy.py b/pypy/jit/codewriter/policy.py --- a/pypy/jit/codewriter/policy.py +++ b/pypy/jit/codewriter/policy.py @@ -8,11 +8,15 @@ class JitPolicy(object): - def __init__(self): + def __init__(self, jithookiface=None): self.unsafe_loopy_graphs = set() self.supports_floats = False self.supports_longlong = False self.supports_singlefloats = False + if jithookiface is None: + from pypy.rlib.jit import JitHookInterface + jithookiface = JitHookInterface() + self.jithookiface = jithookiface def set_supports_floats(self, flag): self.supports_floats = flag diff --git a/pypy/jit/metainterp/compile.py b/pypy/jit/metainterp/compile.py --- a/pypy/jit/metainterp/compile.py +++ b/pypy/jit/metainterp/compile.py @@ -5,6 +5,7 @@ from pypy.rlib.objectmodel import we_are_translated from pypy.rlib.debug import debug_start, debug_stop, debug_print from pypy.rlib import rstack +from pypy.rlib.jit import JitDebugInfo from pypy.conftest import option from pypy.tool.sourcetools import func_with_new_name @@ -75,7 +76,7 @@ if descr is not original_jitcell_token: original_jitcell_token.record_jump_to(descr) descr.exported_state = None - op._descr = None # clear reference, mostly for tests + op.cleardescr() # clear reference, mostly for tests elif isinstance(descr, TargetToken): # for a JUMP: record it as a potential jump. 
# (the following test is not enough to prevent more complicated @@ -90,8 +91,8 @@ assert descr.exported_state is None if not we_are_translated(): op._descr_wref = weakref.ref(op._descr) - op._descr = None # clear reference to prevent the history.Stats - # from keeping the loop alive during tests + op.cleardescr() # clear reference to prevent the history.Stats + # from keeping the loop alive during tests # record this looptoken on the QuasiImmut used in the code if loop.quasi_immutable_deps is not None: for qmut in loop.quasi_immutable_deps: @@ -296,8 +297,6 @@ patch_new_loop_to_load_virtualizable_fields(loop, jitdriver_sd) original_jitcell_token = loop.original_jitcell_token - jitdriver_sd.on_compile(metainterp_sd.logger_ops, original_jitcell_token, - loop.operations, type, greenkey) loopname = jitdriver_sd.warmstate.get_location_str(greenkey) globaldata = metainterp_sd.globaldata original_jitcell_token.number = n = globaldata.loopnumbering @@ -307,21 +306,38 @@ show_procedures(metainterp_sd, loop) loop.check_consistency() + if metainterp_sd.warmrunnerdesc is not None: + hooks = metainterp_sd.warmrunnerdesc.hooks + debug_info = JitDebugInfo(jitdriver_sd, metainterp_sd.logger_ops, + original_jitcell_token, loop.operations, + type, greenkey) + hooks.before_compile(debug_info) + else: + debug_info = None + hooks = None operations = get_deep_immutable_oplist(loop.operations) metainterp_sd.profiler.start_backend() debug_start("jit-backend") try: - ops_offset = metainterp_sd.cpu.compile_loop(loop.inputargs, operations, - original_jitcell_token, name=loopname) + asminfo = metainterp_sd.cpu.compile_loop(loop.inputargs, operations, + original_jitcell_token, + name=loopname) finally: debug_stop("jit-backend") metainterp_sd.profiler.end_backend() + if hooks is not None: + debug_info.asminfo = asminfo + hooks.after_compile(debug_info) metainterp_sd.stats.add_new_loop(loop) if not we_are_translated(): metainterp_sd.stats.compiled() metainterp_sd.log("compiled new " + type) # loopname = jitdriver_sd.warmstate.get_location_str(greenkey) + if asminfo is not None: + ops_offset = asminfo.ops_offset + else: + ops_offset = None metainterp_sd.logger_ops.log_loop(loop.inputargs, loop.operations, n, type, ops_offset, name=loopname) @@ -332,25 +348,40 @@ def send_bridge_to_backend(jitdriver_sd, metainterp_sd, faildescr, inputargs, operations, original_loop_token): n = metainterp_sd.cpu.get_fail_descr_number(faildescr) - jitdriver_sd.on_compile_bridge(metainterp_sd.logger_ops, - original_loop_token, operations, n) if not we_are_translated(): show_procedures(metainterp_sd) seen = dict.fromkeys(inputargs) TreeLoop.check_consistency_of_branch(operations, seen) + if metainterp_sd.warmrunnerdesc is not None: + hooks = metainterp_sd.warmrunnerdesc.hooks + debug_info = JitDebugInfo(jitdriver_sd, metainterp_sd.logger_ops, + original_loop_token, operations, 'bridge', + fail_descr_no=n) + hooks.before_compile_bridge(debug_info) + else: + hooks = None + debug_info = None + operations = get_deep_immutable_oplist(operations) metainterp_sd.profiler.start_backend() - operations = get_deep_immutable_oplist(operations) debug_start("jit-backend") try: - ops_offset = metainterp_sd.cpu.compile_bridge(faildescr, inputargs, operations, - original_loop_token) + asminfo = metainterp_sd.cpu.compile_bridge(faildescr, inputargs, + operations, + original_loop_token) finally: debug_stop("jit-backend") metainterp_sd.profiler.end_backend() + if hooks is not None: + debug_info.asminfo = asminfo + hooks.after_compile_bridge(debug_info) if not 
we_are_translated(): metainterp_sd.stats.compiled() metainterp_sd.log("compiled new bridge") # + if asminfo is not None: + ops_offset = asminfo.ops_offset + else: + ops_offset = None metainterp_sd.logger_ops.log_bridge(inputargs, operations, n, ops_offset) # #if metainterp_sd.warmrunnerdesc is not None: # for tests diff --git a/pypy/jit/metainterp/history.py b/pypy/jit/metainterp/history.py --- a/pypy/jit/metainterp/history.py +++ b/pypy/jit/metainterp/history.py @@ -1003,16 +1003,16 @@ return insns def check_simple_loop(self, expected=None, **check): - # Usefull in the simplest case when we have only one trace ending with - # a jump back to itself and possibly a few bridges ending with finnish. - # Only the operations within the loop formed by that single jump will - # be counted. + """ Usefull in the simplest case when we have only one trace ending with + a jump back to itself and possibly a few bridges. + Only the operations within the loop formed by that single jump will + be counted. + """ loops = self.get_all_loops() assert len(loops) == 1 loop = loops[0] jumpop = loop.operations[-1] assert jumpop.getopnum() == rop.JUMP - assert self.check_resops(jump=1) labels = [op for op in loop.operations if op.getopnum() == rop.LABEL] targets = [op._descr_wref() for op in labels] assert None not in targets # TargetToken was freed, give up diff --git a/pypy/jit/metainterp/jitdriver.py b/pypy/jit/metainterp/jitdriver.py --- a/pypy/jit/metainterp/jitdriver.py +++ b/pypy/jit/metainterp/jitdriver.py @@ -21,7 +21,6 @@ # self.portal_finishtoken... pypy.jit.metainterp.pyjitpl # self.index ... pypy.jit.codewriter.call # self.mainjitcode ... pypy.jit.codewriter.call - # self.on_compile ... pypy.jit.metainterp.warmstate # These attributes are read by the backend in CALL_ASSEMBLER: # self.assembler_helper_adr diff --git a/pypy/jit/metainterp/jitprof.py b/pypy/jit/metainterp/jitprof.py --- a/pypy/jit/metainterp/jitprof.py +++ b/pypy/jit/metainterp/jitprof.py @@ -18,8 +18,8 @@ OPT_FORCINGS ABORT_TOO_LONG ABORT_BRIDGE +ABORT_BAD_LOOP ABORT_ESCAPE -ABORT_BAD_LOOP ABORT_FORCE_QUASIIMMUT NVIRTUALS NVHOLES @@ -30,10 +30,13 @@ TOTAL_FREED_BRIDGES """ +counter_names = [] + def _setup(): names = counters.split() for i, name in enumerate(names): globals()[name] = i + counter_names.append(name) global ncounters ncounters = len(names) _setup() diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py @@ -117,7 +117,7 @@ def optimize_loop(self, ops, optops, call_pure_results=None): loop = self.parse(ops) - token = JitCellToken() + token = JitCellToken() loop.operations = [ResOperation(rop.LABEL, loop.inputargs, None, descr=TargetToken(token))] + \ loop.operations if loop.operations[-1].getopnum() == rop.JUMP: diff --git a/pypy/jit/metainterp/pyjitpl.py b/pypy/jit/metainterp/pyjitpl.py --- a/pypy/jit/metainterp/pyjitpl.py +++ b/pypy/jit/metainterp/pyjitpl.py @@ -1553,6 +1553,7 @@ class MetaInterp(object): in_recursion = 0 + cancel_count = 0 def __init__(self, staticdata, jitdriver_sd): self.staticdata = staticdata @@ -1793,6 +1794,15 @@ def aborted_tracing(self, reason): self.staticdata.profiler.count(reason) debug_print('~~~ ABORTING TRACING') + jd_sd = self.jitdriver_sd + if not self.current_merge_points: + greenkey = None # we're in the bridge + else: + greenkey = self.current_merge_points[0][0][:jd_sd.num_green_args] 
+ self.staticdata.warmrunnerdesc.hooks.on_abort(reason, + jd_sd.jitdriver, + greenkey, + jd_sd.warmstate.get_location_str(greenkey)) self.staticdata.stats.aborted() def blackhole_if_trace_too_long(self): @@ -1966,9 +1976,14 @@ raise SwitchToBlackhole(ABORT_BAD_LOOP) # For now self.compile_loop(original_boxes, live_arg_boxes, start, resumedescr) # creation of the loop was cancelled! + self.cancel_count += 1 + if self.staticdata.warmrunnerdesc: + memmgr = self.staticdata.warmrunnerdesc.memory_manager + if memmgr: + if self.cancel_count > memmgr.max_unroll_loops: + self.staticdata.log('cancelled too many times!') + raise SwitchToBlackhole(ABORT_BAD_LOOP) self.staticdata.log('cancelled, tracing more...') - #self.staticdata.log('cancelled, stopping tracing') - #raise SwitchToBlackhole(ABORT_BAD_LOOP) # Otherwise, no loop found so far, so continue tracing. start = len(self.history.operations) diff --git a/pypy/jit/metainterp/resoperation.py b/pypy/jit/metainterp/resoperation.py --- a/pypy/jit/metainterp/resoperation.py +++ b/pypy/jit/metainterp/resoperation.py @@ -18,6 +18,8 @@ pc = 0 opnum = 0 + _attrs_ = ('result',) + def __init__(self, result): self.result = result @@ -62,6 +64,9 @@ def setdescr(self, descr): raise NotImplementedError + def cleardescr(self): + pass + # common methods # -------------- @@ -194,6 +199,9 @@ self._check_descr(descr) self._descr = descr + def cleardescr(self): + self._descr = None + def _check_descr(self, descr): if not we_are_translated() and getattr(descr, 'I_am_a_descr', False): return # needed for the mock case in oparser_model diff --git a/pypy/jit/metainterp/test/support.py b/pypy/jit/metainterp/test/support.py --- a/pypy/jit/metainterp/test/support.py +++ b/pypy/jit/metainterp/test/support.py @@ -56,8 +56,6 @@ greenfield_info = None result_type = result_kind portal_runner_ptr = "???" 
- on_compile = lambda *args: None - on_compile_bridge = lambda *args: None stats = history.Stats() cpu = CPUClass(rtyper, stats, None, False) diff --git a/pypy/jit/metainterp/test/test_ajit.py b/pypy/jit/metainterp/test/test_ajit.py --- a/pypy/jit/metainterp/test/test_ajit.py +++ b/pypy/jit/metainterp/test/test_ajit.py @@ -2629,6 +2629,38 @@ self.check_jitcell_token_count(1) self.check_target_token_count(5) + def test_max_unroll_loops(self): + from pypy.jit.metainterp.optimize import InvalidLoop + from pypy.jit.metainterp import optimizeopt + myjitdriver = JitDriver(greens = [], reds = ['n', 'i']) + # + def f(n, limit): + set_param(myjitdriver, 'threshold', 5) + set_param(myjitdriver, 'max_unroll_loops', limit) + i = 0 + while i < n: + myjitdriver.jit_merge_point(n=n, i=i) + print i + i += 1 + return i + # + def my_optimize_trace(*args, **kwds): + raise InvalidLoop + old_optimize_trace = optimizeopt.optimize_trace + optimizeopt.optimize_trace = my_optimize_trace + try: + res = self.meta_interp(f, [23, 4]) + assert res == 23 + self.check_trace_count(0) + self.check_aborted_count(3) + # + res = self.meta_interp(f, [23, 20]) + assert res == 23 + self.check_trace_count(0) + self.check_aborted_count(2) + finally: + optimizeopt.optimize_trace = old_optimize_trace + def test_retrace_limit_with_extra_guards(self): myjitdriver = JitDriver(greens = [], reds = ['n', 'i', 'sa', 'a', 'node']) diff --git a/pypy/jit/metainterp/test/test_compile.py b/pypy/jit/metainterp/test/test_compile.py --- a/pypy/jit/metainterp/test/test_compile.py +++ b/pypy/jit/metainterp/test/test_compile.py @@ -53,8 +53,6 @@ call_pure_results = {} class jitdriver_sd: warmstate = FakeState() - on_compile = staticmethod(lambda *args: None) - on_compile_bridge = staticmethod(lambda *args: None) virtualizable_info = None def test_compile_loop(): diff --git a/pypy/jit/metainterp/test/test_jitdriver.py b/pypy/jit/metainterp/test/test_jitdriver.py --- a/pypy/jit/metainterp/test/test_jitdriver.py +++ b/pypy/jit/metainterp/test/test_jitdriver.py @@ -10,57 +10,6 @@ def getloc2(g): return "in jitdriver2, with g=%d" % g -class JitDriverTests(object): - def test_on_compile(self): - called = {} - - class MyJitDriver(JitDriver): - def on_compile(self, logger, looptoken, operations, type, n, m): - called[(m, n, type)] = looptoken - - driver = MyJitDriver(greens = ['n', 'm'], reds = ['i']) - - def loop(n, m): - i = 0 - while i < n + m: - driver.can_enter_jit(n=n, m=m, i=i) - driver.jit_merge_point(n=n, m=m, i=i) - i += 1 - - self.meta_interp(loop, [1, 4]) - assert sorted(called.keys()) == [(4, 1, "loop")] - self.meta_interp(loop, [2, 4]) - assert sorted(called.keys()) == [(4, 1, "loop"), - (4, 2, "loop")] - - def test_on_compile_bridge(self): - called = {} - - class MyJitDriver(JitDriver): - def on_compile(self, logger, looptoken, operations, type, n, m): - called[(m, n, type)] = loop - def on_compile_bridge(self, logger, orig_token, operations, n): - assert 'bridge' not in called - called['bridge'] = orig_token - - driver = MyJitDriver(greens = ['n', 'm'], reds = ['i']) - - def loop(n, m): - i = 0 - while i < n + m: - driver.can_enter_jit(n=n, m=m, i=i) - driver.jit_merge_point(n=n, m=m, i=i) - if i >= 4: - i += 2 - i += 1 - - self.meta_interp(loop, [1, 10]) - assert sorted(called.keys()) == ['bridge', (10, 1, "loop")] - - -class TestLLtypeSingle(JitDriverTests, LLJitMixin): - pass - class MultipleJitDriversTests(object): def test_simple(self): diff --git a/pypy/jit/metainterp/test/test_jitiface.py b/pypy/jit/metainterp/test/test_jitiface.py new 
file mode 100644 --- /dev/null +++ b/pypy/jit/metainterp/test/test_jitiface.py @@ -0,0 +1,148 @@ + +from pypy.rlib.jit import JitDriver, JitHookInterface +from pypy.rlib import jit_hooks +from pypy.jit.metainterp.test.support import LLJitMixin +from pypy.jit.codewriter.policy import JitPolicy +from pypy.jit.metainterp.jitprof import ABORT_FORCE_QUASIIMMUT +from pypy.jit.metainterp.resoperation import rop +from pypy.rpython.annlowlevel import hlstr + +class TestJitHookInterface(LLJitMixin): + def test_abort_quasi_immut(self): + reasons = [] + + class MyJitIface(JitHookInterface): + def on_abort(self, reason, jitdriver, greenkey, greenkey_repr): + assert jitdriver is myjitdriver + assert len(greenkey) == 1 + reasons.append(reason) + assert greenkey_repr == 'blah' + + iface = MyJitIface() + + myjitdriver = JitDriver(greens=['foo'], reds=['x', 'total'], + get_printable_location=lambda *args: 'blah') + + class Foo: + _immutable_fields_ = ['a?'] + def __init__(self, a): + self.a = a + def f(a, x): + foo = Foo(a) + total = 0 + while x > 0: + myjitdriver.jit_merge_point(foo=foo, x=x, total=total) + # read a quasi-immutable field out of a Constant + total += foo.a + foo.a += 1 + x -= 1 + return total + # + assert f(100, 7) == 721 + res = self.meta_interp(f, [100, 7], policy=JitPolicy(iface)) + assert res == 721 + assert reasons == [ABORT_FORCE_QUASIIMMUT] * 2 + + def test_on_compile(self): + called = [] + + class MyJitIface(JitHookInterface): + def after_compile(self, di): + called.append(("compile", di.greenkey[1].getint(), + di.greenkey[0].getint(), di.type)) + + def before_compile(self, di): + called.append(("optimize", di.greenkey[1].getint(), + di.greenkey[0].getint(), di.type)) + + #def before_optimize(self, jitdriver, logger, looptoken, oeprations, + # type, greenkey): + # called.append(("trace", greenkey[1].getint(), + # greenkey[0].getint(), type)) + + iface = MyJitIface() + + driver = JitDriver(greens = ['n', 'm'], reds = ['i']) + + def loop(n, m): + i = 0 + while i < n + m: + driver.can_enter_jit(n=n, m=m, i=i) + driver.jit_merge_point(n=n, m=m, i=i) + i += 1 + + self.meta_interp(loop, [1, 4], policy=JitPolicy(iface)) + assert called == [#("trace", 4, 1, "loop"), + ("optimize", 4, 1, "loop"), + ("compile", 4, 1, "loop")] + self.meta_interp(loop, [2, 4], policy=JitPolicy(iface)) + assert called == [#("trace", 4, 1, "loop"), + ("optimize", 4, 1, "loop"), + ("compile", 4, 1, "loop"), + #("trace", 4, 2, "loop"), + ("optimize", 4, 2, "loop"), + ("compile", 4, 2, "loop")] + + def test_on_compile_bridge(self): + called = [] + + class MyJitIface(JitHookInterface): + def after_compile(self, di): + called.append("compile") + + def after_compile_bridge(self, di): + called.append("compile_bridge") + + def before_compile_bridge(self, di): + called.append("before_compile_bridge") + + driver = JitDriver(greens = ['n', 'm'], reds = ['i']) + + def loop(n, m): + i = 0 + while i < n + m: + driver.can_enter_jit(n=n, m=m, i=i) + driver.jit_merge_point(n=n, m=m, i=i) + if i >= 4: + i += 2 + i += 1 + + self.meta_interp(loop, [1, 10], policy=JitPolicy(MyJitIface())) + assert called == ["compile", "before_compile_bridge", "compile_bridge"] + + def test_resop_interface(self): + driver = JitDriver(greens = [], reds = ['i']) + + def loop(i): + while i > 0: + driver.jit_merge_point(i=i) + i -= 1 + + def main(): + loop(1) + op = jit_hooks.resop_new(rop.INT_ADD, + [jit_hooks.boxint_new(3), + jit_hooks.boxint_new(4)], + jit_hooks.boxint_new(1)) + assert hlstr(jit_hooks.resop_getopname(op)) == 'int_add' + assert 
jit_hooks.resop_getopnum(op) == rop.INT_ADD + box = jit_hooks.resop_getarg(op, 0) + assert jit_hooks.box_getint(box) == 3 + box2 = jit_hooks.box_clone(box) + assert box2 != box + assert jit_hooks.box_getint(box2) == 3 + assert not jit_hooks.box_isconst(box2) + box3 = jit_hooks.box_constbox(box) + assert jit_hooks.box_getint(box) == 3 + assert jit_hooks.box_isconst(box3) + box4 = jit_hooks.box_nonconstbox(box) + assert not jit_hooks.box_isconst(box4) + box5 = jit_hooks.boxint_new(18) + jit_hooks.resop_setarg(op, 0, box5) + assert jit_hooks.resop_getarg(op, 0) == box5 + box6 = jit_hooks.resop_getresult(op) + assert jit_hooks.box_getint(box6) == 1 + jit_hooks.resop_setresult(op, box5) + assert jit_hooks.resop_getresult(op) == box5 + + self.meta_interp(main, []) diff --git a/pypy/jit/metainterp/test/test_virtualstate.py b/pypy/jit/metainterp/test/test_virtualstate.py --- a/pypy/jit/metainterp/test/test_virtualstate.py +++ b/pypy/jit/metainterp/test/test_virtualstate.py @@ -5,7 +5,7 @@ VArrayStateInfo, NotVirtualStateInfo, VirtualState, ShortBoxes from pypy.jit.metainterp.optimizeopt.optimizer import OptValue from pypy.jit.metainterp.history import BoxInt, BoxFloat, BoxPtr, ConstInt, ConstPtr -from pypy.rpython.lltypesystem import lltype +from pypy.rpython.lltypesystem import lltype, llmemory from pypy.jit.metainterp.optimizeopt.test.test_util import LLtypeMixin, BaseTest, \ equaloplists, FakeDescrWithSnapshot from pypy.jit.metainterp.optimizeopt.intutils import IntBound @@ -82,6 +82,13 @@ assert isgeneral(value1, value2) assert not isgeneral(value2, value1) + assert isgeneral(OptValue(ConstInt(7)), OptValue(ConstInt(7))) + S = lltype.GcStruct('S') + foo = lltype.malloc(S) + fooref = lltype.cast_opaque_ptr(llmemory.GCREF, foo) + assert isgeneral(OptValue(ConstPtr(fooref)), + OptValue(ConstPtr(fooref))) + def test_field_matching_generalization(self): const1 = NotVirtualStateInfo(OptValue(ConstInt(1))) const2 = NotVirtualStateInfo(OptValue(ConstInt(2))) diff --git a/pypy/jit/metainterp/test/test_ztranslation.py b/pypy/jit/metainterp/test/test_ztranslation.py --- a/pypy/jit/metainterp/test/test_ztranslation.py +++ b/pypy/jit/metainterp/test/test_ztranslation.py @@ -3,7 +3,9 @@ from pypy.jit.backend.llgraph import runner from pypy.rlib.jit import JitDriver, unroll_parameters, set_param from pypy.rlib.jit import PARAMETERS, dont_look_inside, hint +from pypy.rlib.jit_hooks import boxint_new, resop_new, resop_getopnum from pypy.jit.metainterp.jitprof import Profiler +from pypy.jit.metainterp.resoperation import rop from pypy.rpython.lltypesystem import lltype, llmemory class TranslationTest: @@ -22,6 +24,7 @@ # - jitdriver hooks # - two JITs # - string concatenation, slicing and comparison + # - jit hooks interface class Frame(object): _virtualizable2_ = ['l[*]'] @@ -91,7 +94,9 @@ return f.i # def main(i, j): - return f(i) - f2(i+j, i, j) + op = resop_new(rop.INT_ADD, [boxint_new(3), boxint_new(5)], + boxint_new(8)) + return f(i) - f2(i+j, i, j) + resop_getopnum(op) res = ll_meta_interp(main, [40, 5], CPUClass=self.CPUClass, type_system=self.type_system, listops=True) diff --git a/pypy/jit/metainterp/warmspot.py b/pypy/jit/metainterp/warmspot.py --- a/pypy/jit/metainterp/warmspot.py +++ b/pypy/jit/metainterp/warmspot.py @@ -1,4 +1,5 @@ import sys, py +from pypy.tool.sourcetools import func_with_new_name from pypy.rpython.lltypesystem import lltype, llmemory from pypy.rpython.annlowlevel import llhelper, MixLevelHelperAnnotator,\ cast_base_ptr_to_instance, hlstr @@ -112,7 +113,7 @@ return 
ll_meta_interp(function, args, backendopt=backendopt, translate_support_code=True, **kwds) -def _find_jit_marker(graphs, marker_name): +def _find_jit_marker(graphs, marker_name, check_driver=True): results = [] for graph in graphs: for block in graph.iterblocks(): @@ -120,8 +121,8 @@ op = block.operations[i] if (op.opname == 'jit_marker' and op.args[0].value == marker_name and - (op.args[1].value is None or - op.args[1].value.active)): # the jitdriver + (not check_driver or op.args[1].value is None or + op.args[1].value.active)): # the jitdriver results.append((graph, block, i)) return results @@ -140,6 +141,9 @@ "found several jit_merge_points in the same graph") return results +def find_access_helpers(graphs): + return _find_jit_marker(graphs, 'access_helper', False) + def locate_jit_merge_point(graph): [(graph, block, pos)] = find_jit_merge_points([graph]) return block, pos, block.operations[pos] @@ -206,6 +210,7 @@ vrefinfo = VirtualRefInfo(self) self.codewriter.setup_vrefinfo(vrefinfo) # + self.hooks = policy.jithookiface self.make_virtualizable_infos() self.make_exception_classes() self.make_driverhook_graphs() @@ -213,6 +218,7 @@ self.rewrite_jit_merge_points(policy) verbose = False # not self.cpu.translate_support_code + self.rewrite_access_helpers() self.codewriter.make_jitcodes(verbose=verbose) self.rewrite_can_enter_jits() self.rewrite_set_param() @@ -619,6 +625,24 @@ graph = self.annhelper.getgraph(func, args_s, s_result) return self.annhelper.graph2delayed(graph, FUNC) + def rewrite_access_helpers(self): + ah = find_access_helpers(self.translator.graphs) + for graph, block, index in ah: + op = block.operations[index] + self.rewrite_access_helper(op) + + def rewrite_access_helper(self, op): + ARGS = [arg.concretetype for arg in op.args[2:]] + RESULT = op.result.concretetype + FUNCPTR = lltype.Ptr(lltype.FuncType(ARGS, RESULT)) + # make sure we make a copy of function so it no longer belongs + # to extregistry + func = op.args[1].value + func = func_with_new_name(func, func.func_name + '_compiled') + ptr = self.helper_func(FUNCPTR, func) + op.opname = 'direct_call' + op.args = [Constant(ptr, FUNCPTR)] + op.args[2:] + def rewrite_jit_merge_points(self, policy): for jd in self.jitdrivers_sd: self.rewrite_jit_merge_point(jd, policy) diff --git a/pypy/jit/metainterp/warmstate.py b/pypy/jit/metainterp/warmstate.py --- a/pypy/jit/metainterp/warmstate.py +++ b/pypy/jit/metainterp/warmstate.py @@ -244,6 +244,11 @@ if self.warmrunnerdesc.memory_manager: self.warmrunnerdesc.memory_manager.max_retrace_guards = value + def set_param_max_unroll_loops(self, value): + if self.warmrunnerdesc: + if self.warmrunnerdesc.memory_manager: + self.warmrunnerdesc.memory_manager.max_unroll_loops = value + def disable_noninlinable_function(self, greenkey): cell = self.jit_cell_at_key(greenkey) cell.dont_trace_here = True @@ -596,20 +601,6 @@ return fn(*greenargs) self.should_unroll_one_iteration = should_unroll_one_iteration - if hasattr(jd.jitdriver, 'on_compile'): - def on_compile(logger, token, operations, type, greenkey): - greenargs = unwrap_greenkey(greenkey) - return jd.jitdriver.on_compile(logger, token, operations, type, - *greenargs) - def on_compile_bridge(logger, orig_token, operations, n): - return jd.jitdriver.on_compile_bridge(logger, orig_token, - operations, n) - jd.on_compile = on_compile - jd.on_compile_bridge = on_compile_bridge - else: - jd.on_compile = lambda *args: None - jd.on_compile_bridge = lambda *args: None - redargtypes = ''.join([kind[0] for kind in jd.red_args_types]) def 
get_assembler_token(greenkey): diff --git a/pypy/jit/tool/oparser.py b/pypy/jit/tool/oparser.py --- a/pypy/jit/tool/oparser.py +++ b/pypy/jit/tool/oparser.py @@ -89,11 +89,18 @@ assert typ == 'class' return self.model.ConstObj(ootype.cast_to_object(obj)) - def get_descr(self, poss_descr): + def get_descr(self, poss_descr, allow_invent): if poss_descr.startswith('<'): return None - else: + try: return self._consts[poss_descr] + except KeyError: + if allow_invent: + int(poss_descr) + token = self.model.JitCellToken() + tt = self.model.TargetToken(token) + self._consts[poss_descr] = tt + return tt def box_for_var(self, elem): try: @@ -186,7 +193,8 @@ poss_descr = allargs[-1].strip() if poss_descr.startswith('descr='): - descr = self.get_descr(poss_descr[len('descr='):]) + descr = self.get_descr(poss_descr[len('descr='):], + opname == 'label') allargs = allargs[:-1] for arg in allargs: arg = arg.strip() diff --git a/pypy/jit/tool/oparser_model.py b/pypy/jit/tool/oparser_model.py --- a/pypy/jit/tool/oparser_model.py +++ b/pypy/jit/tool/oparser_model.py @@ -6,7 +6,7 @@ from pypy.jit.metainterp.history import TreeLoop, JitCellToken from pypy.jit.metainterp.history import Box, BoxInt, BoxFloat from pypy.jit.metainterp.history import ConstInt, ConstObj, ConstPtr, ConstFloat - from pypy.jit.metainterp.history import BasicFailDescr + from pypy.jit.metainterp.history import BasicFailDescr, TargetToken from pypy.jit.metainterp.typesystem import llhelper from pypy.jit.metainterp.history import get_const_ptr_for_string @@ -42,6 +42,10 @@ class JitCellToken(object): I_am_a_descr = True + class TargetToken(object): + def __init__(self, jct): + pass + class BasicFailDescr(object): I_am_a_descr = True diff --git a/pypy/jit/tool/pypytrace.vim b/pypy/jit/tool/pypytrace.vim --- a/pypy/jit/tool/pypytrace.vim +++ b/pypy/jit/tool/pypytrace.vim @@ -19,6 +19,7 @@ syn match pypyLoopArgs '^[[].*' syn match pypyLoopStart '^#.*' syn match pypyDebugMergePoint '^debug_merge_point(.\+)' +syn match pypyLogBoundary '[[][0-9a-f]\+[]] \([{].\+\|.\+[}]\)$' hi def link pypyLoopStart Structure "hi def link pypyLoopArgs PreProc @@ -29,3 +30,4 @@ hi def link pypyNumber Number hi def link pypyDescr PreProc hi def link pypyDescrField Label +hi def link pypyLogBoundary Statement diff --git a/pypy/jit/tool/test/test_oparser.py b/pypy/jit/tool/test/test_oparser.py --- a/pypy/jit/tool/test/test_oparser.py +++ b/pypy/jit/tool/test/test_oparser.py @@ -4,7 +4,8 @@ from pypy.jit.tool.oparser import parse, OpParser from pypy.jit.metainterp.resoperation import rop -from pypy.jit.metainterp.history import AbstractDescr, BoxInt, JitCellToken +from pypy.jit.metainterp.history import AbstractDescr, BoxInt, JitCellToken,\ + TargetToken class BaseTestOparser(object): @@ -243,6 +244,16 @@ b = loop.getboxes() assert isinstance(b.sum0, BoxInt) + def test_label(self): + x = """ + [i0] + label(i0, descr=1) + jump(i0, descr=1) + """ + loop = self.parse(x) + assert loop.operations[0].getdescr() is loop.operations[1].getdescr() + assert isinstance(loop.operations[0].getdescr(), TargetToken) + class ForbiddenModule(object): def __init__(self, name, old_mod): diff --git a/pypy/module/_codecs/interp_codecs.py b/pypy/module/_codecs/interp_codecs.py --- a/pypy/module/_codecs/interp_codecs.py +++ b/pypy/module/_codecs/interp_codecs.py @@ -108,6 +108,10 @@ w_result = state.codec_search_cache.get(normalized_encoding, None) if w_result is not None: return w_result + return _lookup_codec_loop(space, encoding, normalized_encoding) + +def _lookup_codec_loop(space, 
encoding, normalized_encoding): + state = space.fromcache(CodecState) if state.codec_need_encodings: w_import = space.getattr(space.builtin, space.wrap("__import__")) # registers new codecs diff --git a/pypy/module/_codecs/test/test_codecs.py b/pypy/module/_codecs/test/test_codecs.py --- a/pypy/module/_codecs/test/test_codecs.py +++ b/pypy/module/_codecs/test/test_codecs.py @@ -588,10 +588,18 @@ raises(UnicodeDecodeError, '+3ADYAA-'.decode, 'utf-7') def test_utf_16_encode_decode(self): - import codecs + import codecs, sys x = u'123abc' - assert codecs.getencoder('utf-16')(x) == ('\xff\xfe1\x002\x003\x00a\x00b\x00c\x00', 6) - assert codecs.getdecoder('utf-16')('\xff\xfe1\x002\x003\x00a\x00b\x00c\x00') == (x, 14) + if sys.byteorder == 'big': + assert codecs.getencoder('utf-16')(x) == ( + '\xfe\xff\x001\x002\x003\x00a\x00b\x00c', 6) + assert codecs.getdecoder('utf-16')( + '\xfe\xff\x001\x002\x003\x00a\x00b\x00c') == (x, 14) + else: + assert codecs.getencoder('utf-16')(x) == ( + '\xff\xfe1\x002\x003\x00a\x00b\x00c\x00', 6) + assert codecs.getdecoder('utf-16')( + '\xff\xfe1\x002\x003\x00a\x00b\x00c\x00') == (x, 14) def test_unicode_escape(self): assert u'\\'.encode('unicode-escape') == '\\\\' diff --git a/pypy/module/_codecs/test/test_ztranslation.py b/pypy/module/_codecs/test/test_ztranslation.py new file mode 100644 --- /dev/null +++ b/pypy/module/_codecs/test/test_ztranslation.py @@ -0,0 +1,5 @@ +from pypy.objspace.fake.checkmodule import checkmodule + + +def test__codecs_translates(): + checkmodule('_codecs') diff --git a/pypy/module/_hashlib/interp_hashlib.py b/pypy/module/_hashlib/interp_hashlib.py --- a/pypy/module/_hashlib/interp_hashlib.py +++ b/pypy/module/_hashlib/interp_hashlib.py @@ -34,8 +34,12 @@ ctx = lltype.malloc(ropenssl.EVP_MD_CTX.TO, flavor='raw') rgc.add_memory_pressure(HASH_MALLOC_SIZE + self.digest_size) - ropenssl.EVP_DigestInit(ctx, digest_type) - self.ctx = ctx + try: + ropenssl.EVP_DigestInit(ctx, digest_type) + self.ctx = ctx + except: + lltype.free(ctx, flavor='raw') + raise def __del__(self): # self.lock.free() diff --git a/pypy/module/_io/interp_fileio.py b/pypy/module/_io/interp_fileio.py --- a/pypy/module/_io/interp_fileio.py +++ b/pypy/module/_io/interp_fileio.py @@ -349,6 +349,8 @@ try: s = os.read(self.fd, size) except OSError, e: + if e.errno == errno.EAGAIN: + return space.w_None raise wrap_oserror(space, e, exception_name='w_IOError') @@ -362,6 +364,8 @@ try: buf = os.read(self.fd, length) except OSError, e: + if e.errno == errno.EAGAIN: + return space.w_None raise wrap_oserror(space, e, exception_name='w_IOError') rwbuffer.setslice(0, buf) diff --git a/pypy/module/_io/test/test_fileio.py b/pypy/module/_io/test/test_fileio.py --- a/pypy/module/_io/test/test_fileio.py +++ b/pypy/module/_io/test/test_fileio.py @@ -133,6 +133,19 @@ f.close() assert a == 'a\nbxxxxxxx' + def test_nonblocking_read(self): + import os, fcntl + r_fd, w_fd = os.pipe() + # set nonblocking + fcntl.fcntl(r_fd, fcntl.F_SETFL, os.O_NONBLOCK) + import _io + f = _io.FileIO(r_fd, 'r') + # Read from stream sould return None + assert f.read() is None + assert f.read(10) is None + a = bytearray('x' * 10) + assert f.readinto(a) is None + def test_repr(self): import _io f = _io.FileIO(self.tmpfile, 'r') diff --git a/pypy/module/_socket/interp_socket.py b/pypy/module/_socket/interp_socket.py --- a/pypy/module/_socket/interp_socket.py +++ b/pypy/module/_socket/interp_socket.py @@ -67,9 +67,6 @@ self.connect(self.addr_from_object(space, w_addr)) except SocketError, e: raise converted_error(space, 
e) - except TypeError, e: - raise OperationError(space.w_TypeError, - space.wrap(str(e))) def connect_ex_w(self, space, w_addr): """connect_ex(address) -> errno diff --git a/pypy/module/_socket/test/test_sock_app.py b/pypy/module/_socket/test/test_sock_app.py --- a/pypy/module/_socket/test/test_sock_app.py +++ b/pypy/module/_socket/test/test_sock_app.py @@ -529,26 +529,31 @@ import _socket, os if not hasattr(_socket, 'AF_UNIX'): skip('AF_UNIX not supported.') - sockpath = os.path.join(self.udir, 'app_test_unix_socket_connect') + oldcwd = os.getcwd() + os.chdir(self.udir) + try: + sockpath = 'app_test_unix_socket_connect' - serversock = _socket.socket(_socket.AF_UNIX) - serversock.bind(sockpath) - serversock.listen(1) + serversock = _socket.socket(_socket.AF_UNIX) + serversock.bind(sockpath) + serversock.listen(1) - clientsock = _socket.socket(_socket.AF_UNIX) - clientsock.connect(sockpath) - s, addr = serversock.accept() - assert not addr + clientsock = _socket.socket(_socket.AF_UNIX) + clientsock.connect(sockpath) + s, addr = serversock.accept() + assert not addr - s.send('X') - data = clientsock.recv(100) - assert data == 'X' - clientsock.send('Y') - data = s.recv(100) - assert data == 'Y' + s.send('X') + data = clientsock.recv(100) + assert data == 'X' + clientsock.send('Y') + data = s.recv(100) + assert data == 'Y' - clientsock.close() - s.close() + clientsock.close() + s.close() + finally: + os.chdir(oldcwd) class AppTestSocketTCP: diff --git a/pypy/module/_ssl/test/test_ssl.py b/pypy/module/_ssl/test/test_ssl.py --- a/pypy/module/_ssl/test/test_ssl.py +++ b/pypy/module/_ssl/test/test_ssl.py @@ -64,8 +64,8 @@ def test_sslwrap(self): import _ssl, _socket, sys, gc - if sys.platform == 'darwin': - skip("hangs indefinitely on OSX (also on CPython)") + if sys.platform == 'darwin' or 'freebsd' in sys.platform: + skip("hangs indefinitely on OSX & FreeBSD (also on CPython)") s = _socket.socket() ss = _ssl.sslwrap(s, 0) exc = raises(_socket.error, ss.do_handshake) diff --git a/pypy/module/cpyext/include/patchlevel.h b/pypy/module/cpyext/include/patchlevel.h --- a/pypy/module/cpyext/include/patchlevel.h +++ b/pypy/module/cpyext/include/patchlevel.h @@ -29,7 +29,7 @@ #define PY_VERSION "2.7.1" /* PyPy version as a string */ -#define PYPY_VERSION "1.7.1" +#define PYPY_VERSION "1.8.1" /* Subversion Revision number of this file (not of the repository). * Empty since Mercurial migration. */ diff --git a/pypy/module/cpyext/listobject.py b/pypy/module/cpyext/listobject.py --- a/pypy/module/cpyext/listobject.py +++ b/pypy/module/cpyext/listobject.py @@ -32,11 +32,10 @@ Py_DecRef(space, w_item) if not isinstance(w_list, W_ListObject): PyErr_BadInternalCall(space) - wrappeditems = w_list.getitems() - if index < 0 or index >= len(wrappeditems): + if index < 0 or index >= w_list.length(): raise OperationError(space.w_IndexError, space.wrap( "list assignment index out of range")) - wrappeditems[index] = w_item + w_list.setitem(index, w_item) return 0 @cpython_api([PyObject, Py_ssize_t], PyObject) @@ -74,7 +73,7 @@ """Macro form of PyList_Size() without error checking. 
""" assert isinstance(w_list, W_ListObject) - return len(w_list.getitems()) + return w_list.length() @cpython_api([PyObject], Py_ssize_t, error=-1) diff --git a/pypy/module/cpyext/test/test_listobject.py b/pypy/module/cpyext/test/test_listobject.py --- a/pypy/module/cpyext/test/test_listobject.py +++ b/pypy/module/cpyext/test/test_listobject.py @@ -128,3 +128,6 @@ module.setslice(l, None) assert l == [0, 4, 5] + l = [1, 2, 3] + module.setlistitem(l,0) + assert l == [None, 2, 3] diff --git a/pypy/module/fcntl/test/test_fcntl.py b/pypy/module/fcntl/test/test_fcntl.py --- a/pypy/module/fcntl/test/test_fcntl.py +++ b/pypy/module/fcntl/test/test_fcntl.py @@ -42,13 +42,9 @@ else: start_len = "qq" - if sys.platform in ('netbsd1', 'netbsd2', 'netbsd3', - 'Darwin1.2', 'darwin', - 'freebsd2', 'freebsd3', 'freebsd4', 'freebsd5', - 'freebsd6', 'freebsd7', 'freebsd8', 'freebsd9', - 'bsdos2', 'bsdos3', 'bsdos4', - 'openbsd', 'openbsd2', 'openbsd3', 'openbsd4', - 'openbsd5'): + if any(substring in sys.platform.lower() + for substring in ('netbsd', 'darwin', 'freebsd', 'bsdos', + 'openbsd')): if struct.calcsize('l') == 8: off_t = 'l' pid_t = 'i' @@ -182,7 +178,8 @@ def test_large_flag(self): import sys - if sys.platform == "darwin" or sys.platform.startswith("openbsd"): + if any(plat in sys.platform + for plat in ('darwin', 'openbsd', 'freebsd')): skip("Mac OS doesn't have any large flag in fcntl.h") import fcntl, sys if sys.maxint == 2147483647: diff --git a/pypy/module/gc/__init__.py b/pypy/module/gc/__init__.py --- a/pypy/module/gc/__init__.py +++ b/pypy/module/gc/__init__.py @@ -1,18 +1,18 @@ from pypy.interpreter.mixedmodule import MixedModule class Module(MixedModule): - appleveldefs = { - 'enable': 'app_gc.enable', - 'disable': 'app_gc.disable', - 'isenabled': 'app_gc.isenabled', - } interpleveldefs = { 'collect': 'interp_gc.collect', + 'enable': 'interp_gc.enable', + 'disable': 'interp_gc.disable', + 'isenabled': 'interp_gc.isenabled', 'enable_finalizers': 'interp_gc.enable_finalizers', 'disable_finalizers': 'interp_gc.disable_finalizers', 'garbage' : 'space.newlist([])', #'dump_heap_stats': 'interp_gc.dump_heap_stats', } + appleveldefs = { + } def __init__(self, space, w_name): if (not space.config.translating or diff --git a/pypy/module/gc/app_gc.py b/pypy/module/gc/app_gc.py deleted file mode 100644 --- a/pypy/module/gc/app_gc.py +++ /dev/null @@ -1,21 +0,0 @@ -# NOT_RPYTHON - -enabled = True - -def isenabled(): - global enabled - return enabled - -def enable(): - global enabled - import gc - if not enabled: - gc.enable_finalizers() - enabled = True - -def disable(): - global enabled - import gc - if enabled: - gc.disable_finalizers() - enabled = False diff --git a/pypy/module/gc/interp_gc.py b/pypy/module/gc/interp_gc.py --- a/pypy/module/gc/interp_gc.py +++ b/pypy/module/gc/interp_gc.py @@ -17,6 +17,26 @@ rgc.collect() return space.wrap(0) +def enable(space): + """Non-recursive version. Enable finalizers now. + If they were already enabled, no-op. + If they were disabled even several times, enable them anyway. + """ + if not space.user_del_action.enabled_at_app_level: + space.user_del_action.enabled_at_app_level = True + enable_finalizers(space) + +def disable(space): + """Non-recursive version. Disable finalizers now. Several calls + to this function are ignored. 
+ """ + if space.user_del_action.enabled_at_app_level: + space.user_del_action.enabled_at_app_level = False + disable_finalizers(space) + +def isenabled(space): + return space.newbool(space.user_del_action.enabled_at_app_level) + def enable_finalizers(space): if space.user_del_action.finalizers_lock_count == 0: raise OperationError(space.w_ValueError, diff --git a/pypy/module/micronumpy/__init__.py b/pypy/module/micronumpy/__init__.py --- a/pypy/module/micronumpy/__init__.py +++ b/pypy/module/micronumpy/__init__.py @@ -27,6 +27,12 @@ 'dot': 'interp_numarray.dot', 'fromstring': 'interp_support.fromstring', 'flatiter': 'interp_numarray.W_FlatIterator', + 'isna': 'interp_numarray.isna', + 'concatenate': 'interp_numarray.concatenate', + + 'set_string_function': 'appbridge.set_string_function', + + 'count_reduce_items': 'interp_numarray.count_reduce_items', 'True_': 'types.Bool.True', 'False_': 'types.Bool.False', @@ -66,10 +72,12 @@ ("copysign", "copysign"), ("cos", "cos"), ("divide", "divide"), + ("true_divide", "true_divide"), ("equal", "equal"), ("exp", "exp"), ("fabs", "fabs"), ("floor", "floor"), + ("ceil", "ceil"), ("greater", "greater"), ("greater_equal", "greater_equal"), ("less", "less"), @@ -85,12 +93,16 @@ ("subtract", "subtract"), ('sqrt', 'sqrt'), ("tan", "tan"), + ('bitwise_and', 'bitwise_and'), + ('bitwise_or', 'bitwise_or'), + ('bitwise_not', 'invert'), + ('isnan', 'isnan'), + ('isinf', 'isinf'), ]: interpleveldefs[exposed] = "interp_ufuncs.get(space).%s" % impl appleveldefs = { 'average': 'app_numpy.average', - 'mean': 'app_numpy.mean', 'sum': 'app_numpy.sum', 'min': 'app_numpy.min', 'identity': 'app_numpy.identity', @@ -99,5 +111,4 @@ 'e': 'app_numpy.e', 'pi': 'app_numpy.pi', 'arange': 'app_numpy.arange', - 'reshape': 'app_numpy.reshape', } diff --git a/pypy/module/micronumpy/app_numpy.py b/pypy/module/micronumpy/app_numpy.py --- a/pypy/module/micronumpy/app_numpy.py +++ b/pypy/module/micronumpy/app_numpy.py @@ -11,34 +11,55 @@ def average(a): # This implements a weighted average, for now we don't implement the # weighting, just the average part! - return mean(a) + if not hasattr(a, "mean"): + a = _numpypy.array(a) + return a.mean() def identity(n, dtype=None): - a = _numpypy.zeros((n,n), dtype=dtype) + a = _numpypy.zeros((n, n), dtype=dtype) for i in range(n): a[i][i] = 1 return a -def mean(a): - if not hasattr(a, "mean"): - a = _numpypy.array(a) - return a.mean() +def sum(a,axis=None): + '''sum(a, axis=None) + Sum of array elements over a given axis. -def sum(a): + Parameters + ---------- + a : array_like + Elements to sum. + axis : integer, optional + Axis over which the sum is taken. By default `axis` is None, + and all elements are summed. + + Returns + ------- + sum_along_axis : ndarray + An array with the same shape as `a`, with the specified + axis removed. If `a` is a 0-d array, or if `axis` is None, a scalar + is returned. If an output array is specified, a reference to + `out` is returned. + + See Also + -------- + ndarray.sum : Equivalent method. + ''' + # TODO: add to doc (once it's implemented): cumsum : Cumulative sum of array elements. 
if not hasattr(a, "sum"): a = _numpypy.array(a) - return a.sum() + return a.sum(axis) -def min(a): +def min(a, axis=None): if not hasattr(a, "min"): a = _numpypy.array(a) - return a.min() + return a.min(axis) -def max(a): +def max(a, axis=None): if not hasattr(a, "max"): a = _numpypy.array(a) - return a.max() - + return a.max(axis) + def arange(start, stop=None, step=1, dtype=None): '''arange([start], stop[, step], dtype=None) Generate values in the half-interval [start, stop). @@ -55,40 +76,3 @@ arr[j] = i i += step return arr - - -def reshape(a, shape): - '''reshape(a, newshape) - Gives a new shape to an array without changing its data. - - Parameters - ---------- - a : array_like - Array to be reshaped. - newshape : int or tuple of ints - The new shape should be compatible with the original shape. If - an integer, then the result will be a 1-D array of that length. - One shape dimension can be -1. In this case, the value is inferred - from the length of the array and remaining dimensions. - - Returns - ------- - reshaped_array : ndarray - This will be a new view object if possible; otherwise, it will - be a copy. - - - See Also - -------- - ndarray.reshape : Equivalent method. - - Notes - ----- - - It is not always possible to change the shape of an array without - copying the data. If you want an error to be raise if the data is copied, - you should assign the new shape to the shape attribute of the array -''' - if not hasattr(a, 'reshape'): - a = _numpypy.array(a) - return a.reshape(shape) diff --git a/pypy/module/micronumpy/appbridge.py b/pypy/module/micronumpy/appbridge.py new file mode 100644 --- /dev/null +++ b/pypy/module/micronumpy/appbridge.py @@ -0,0 +1,38 @@ + +from pypy.rlib.objectmodel import specialize + +class AppBridgeCache(object): + w__var = None + w__std = None + w_module = None + w_array_repr = None + w_array_str = None + + def __init__(self, space): + self.w_import = space.appexec([], """(): + def f(): + import sys + __import__('numpypy.core._methods') + return sys.modules['numpypy.core._methods'] + return f + """) + + @specialize.arg(2) + def call_method(self, space, name, *args): + w_meth = getattr(self, 'w_' + name) + if w_meth is None: + if self.w_module is None: + self.w_module = space.call_function(self.w_import) + w_meth = space.getattr(self.w_module, space.wrap(name)) + setattr(self, 'w_' + name, w_meth) + return space.call_function(w_meth, *args) + +def set_string_function(space, w_f, w_repr): + cache = get_appbridge_cache(space) + if space.is_true(w_repr): + cache.w_array_repr = w_f + else: + cache.w_array_str = w_f + +def get_appbridge_cache(space): + return space.fromcache(AppBridgeCache) diff --git a/pypy/module/micronumpy/compile.py b/pypy/module/micronumpy/compile.py --- a/pypy/module/micronumpy/compile.py +++ b/pypy/module/micronumpy/compile.py @@ -32,13 +32,16 @@ class BadToken(Exception): pass -SINGLE_ARG_FUNCTIONS = ["sum", "prod", "max", "min", "all", "any", "unegative"] +SINGLE_ARG_FUNCTIONS = ["sum", "prod", "max", "min", "all", "any", + "unegative", "flat"] +TWO_ARG_FUNCTIONS = ['take'] class FakeSpace(object): w_ValueError = None w_TypeError = None w_IndexError = None w_OverflowError = None + w_NotImplementedError = None w_None = None w_bool = "bool" @@ -53,6 +56,9 @@ """NOT_RPYTHON""" self.fromcache = InternalSpaceCache(self).getorbuild + def _freeze_(self): + return True + def issequence_w(self, w_obj): return isinstance(w_obj, ListObject) or isinstance(w_obj, W_NDimArray) @@ -144,6 +150,9 @@ def allocate_instance(self, klass, w_subtype): 
return instantiate(klass) + def newtuple(self, list_w): + raise ValueError + def len_w(self, w_obj): if isinstance(w_obj, ListObject): return len(w_obj.items) @@ -371,14 +380,18 @@ for arg in self.args])) def execute(self, interp): + arr = self.args[0].execute(interp) + if not isinstance(arr, BaseArray): + raise ArgumentNotAnArray if self.name in SINGLE_ARG_FUNCTIONS: - if len(self.args) != 1: + if len(self.args) != 1 and self.name != 'sum': raise ArgumentMismatch - arr = self.args[0].execute(interp) - if not isinstance(arr, BaseArray): - raise ArgumentNotAnArray if self.name == "sum": - w_res = arr.descr_sum(interp.space) + if len(self.args)>1: + w_res = arr.descr_sum(interp.space, + self.args[1].execute(interp)) + else: + w_res = arr.descr_sum(interp.space) elif self.name == "prod": w_res = arr.descr_prod(interp.space) elif self.name == "max": @@ -392,21 +405,31 @@ elif self.name == "unegative": neg = interp_ufuncs.get(interp.space).negative w_res = neg.call(interp.space, [arr]) + elif self.name == "flat": + w_res = arr.descr_get_flatiter(interp.space) else: assert False # unreachable code - if isinstance(w_res, BaseArray): - return w_res - if isinstance(w_res, FloatObject): - dtype = get_dtype_cache(interp.space).w_float64dtype - elif isinstance(w_res, BoolObject): - dtype = get_dtype_cache(interp.space).w_booldtype - elif isinstance(w_res, interp_boxes.W_GenericBox): - dtype = w_res.get_dtype(interp.space) + elif self.name in TWO_ARG_FUNCTIONS: + arg = self.args[1].execute(interp) + if not isinstance(arg, BaseArray): + raise ArgumentNotAnArray + if self.name == 'take': + w_res = arr.descr_take(interp.space, arg) else: - dtype = None - return scalar_w(interp.space, dtype, w_res) + assert False # unreachable else: raise WrongFunctionName + if isinstance(w_res, BaseArray): + return w_res + if isinstance(w_res, FloatObject): + dtype = get_dtype_cache(interp.space).w_float64dtype + elif isinstance(w_res, BoolObject): + dtype = get_dtype_cache(interp.space).w_booldtype + elif isinstance(w_res, interp_boxes.W_GenericBox): + dtype = w_res.get_dtype(interp.space) + else: + dtype = None + return scalar_w(interp.space, dtype, w_res) _REGEXES = [ ('-?[\d\.]+', 'number'), @@ -416,7 +439,7 @@ ('\]', 'array_right'), ('(->)|[\+\-\*\/]', 'operator'), ('=', 'assign'), - (',', 'coma'), + (',', 'comma'), ('\|', 'pipe'), ('\(', 'paren_left'), ('\)', 'paren_right'), @@ -504,7 +527,7 @@ return SliceConstant(start, stop, step) - def parse_expression(self, tokens): + def parse_expression(self, tokens, accept_comma=False): stack = [] while tokens.remaining(): token = tokens.pop() @@ -524,9 +547,13 @@ stack.append(RangeConstant(tokens.pop().v)) end = tokens.pop() assert end.name == 'pipe' + elif accept_comma and token.name == 'comma': + continue else: tokens.push() break + if accept_comma: + return stack stack.reverse() lhs = stack.pop() while stack: @@ -540,7 +567,7 @@ args = [] tokens.pop() # lparen while tokens.get(0).name != 'paren_right': - args.append(self.parse_expression(tokens)) + args += self.parse_expression(tokens, accept_comma=True) return FunctionCall(name, args) def parse_array_const(self, tokens): @@ -556,7 +583,7 @@ token = tokens.pop() if token.name == 'array_right': return elems - assert token.name == 'coma' + assert token.name == 'comma' def parse_statement(self, tokens): if (tokens.get(0).name == 'identifier' and diff --git a/pypy/module/micronumpy/interp_boxes.py b/pypy/module/micronumpy/interp_boxes.py --- a/pypy/module/micronumpy/interp_boxes.py +++ 
b/pypy/module/micronumpy/interp_boxes.py @@ -8,6 +8,7 @@ from pypy.tool.sourcetools import func_with_new_name +MIXIN_32 = (int_typedef,) if LONG_BIT == 32 else () MIXIN_64 = (int_typedef,) if LONG_BIT == 64 else () def new_dtype_getter(name): @@ -28,6 +29,7 @@ def convert_to(self, dtype): return dtype.box(self.value) + class W_GenericBox(Wrappable): _attrs_ = () @@ -93,7 +95,7 @@ descr_neg = _unaryop_impl("negative") descr_abs = _unaryop_impl("absolute") - def descr_tolist(self, space): + def item(self, space): return self.get_dtype(space).itemtype.to_builtin_type(space, self) @@ -104,7 +106,8 @@ _attrs_ = () class W_IntegerBox(W_NumberBox): - pass + def int_w(self, space): + return space.int_w(self.descr_int(space)) class W_SignedIntegerBox(W_IntegerBox): pass @@ -187,7 +190,7 @@ __neg__ = interp2app(W_GenericBox.descr_neg), __abs__ = interp2app(W_GenericBox.descr_abs), - tolist = interp2app(W_GenericBox.descr_tolist), + tolist = interp2app(W_GenericBox.item), ) W_BoolBox.typedef = TypeDef("bool_", W_GenericBox.typedef, @@ -231,7 +234,7 @@ __new__ = interp2app(W_UInt16Box.descr__new__.im_func), ) -W_Int32Box.typedef = TypeDef("int32", W_SignedIntegerBox.typedef, +W_Int32Box.typedef = TypeDef("int32", (W_SignedIntegerBox.typedef,) + MIXIN_32, __module__ = "numpypy", __new__ = interp2app(W_Int32Box.descr__new__.im_func), ) @@ -241,24 +244,18 @@ __new__ = interp2app(W_UInt32Box.descr__new__.im_func), ) -if LONG_BIT == 32: - long_name = "int32" -elif LONG_BIT == 64: - long_name = "int64" -W_LongBox.typedef = TypeDef(long_name, (W_SignedIntegerBox.typedef, int_typedef,), - __module__ = "numpypy", - __new__ = interp2app(W_LongBox.descr__new__.im_func), -) - -W_ULongBox.typedef = TypeDef("u" + long_name, W_UnsignedIntegerBox.typedef, - __module__ = "numpypy", -) - W_Int64Box.typedef = TypeDef("int64", (W_SignedIntegerBox.typedef,) + MIXIN_64, __module__ = "numpypy", __new__ = interp2app(W_Int64Box.descr__new__.im_func), ) +if LONG_BIT == 32: + W_LongBox = W_Int32Box + W_ULongBox = W_UInt32Box +elif LONG_BIT == 64: + W_LongBox = W_Int64Box + W_ULongBox = W_UInt64Box + W_UInt64Box.typedef = TypeDef("uint64", W_UnsignedIntegerBox.typedef, __module__ = "numpypy", __new__ = interp2app(W_UInt64Box.descr__new__.im_func), @@ -283,3 +280,4 @@ __new__ = interp2app(W_Float64Box.descr__new__.im_func), ) + diff --git a/pypy/module/micronumpy/interp_dtype.py b/pypy/module/micronumpy/interp_dtype.py --- a/pypy/module/micronumpy/interp_dtype.py +++ b/pypy/module/micronumpy/interp_dtype.py @@ -20,7 +20,7 @@ class W_Dtype(Wrappable): _immutable_fields_ = ["itemtype", "num", "kind"] - def __init__(self, itemtype, num, kind, name, char, w_box_type, alternate_constructors=[]): + def __init__(self, itemtype, num, kind, name, char, w_box_type, alternate_constructors=[], aliases=[]): self.itemtype = itemtype self.num = num self.kind = kind @@ -28,6 +28,7 @@ self.char = char self.w_box_type = w_box_type self.alternate_constructors = alternate_constructors + self.aliases = aliases def malloc(self, length): # XXX find out why test_zjit explodes with tracking of allocations @@ -46,6 +47,10 @@ def getitem(self, storage, i): return self.itemtype.read(storage, self.itemtype.get_element_size(), i, 0) + def getitem_bool(self, storage, i): + isize = self.itemtype.get_element_size() + return self.itemtype.read_bool(storage, isize, i, 0) + def setitem(self, storage, i, box): self.itemtype.store(storage, self.itemtype.get_element_size(), i, 0, box) @@ -62,7 +67,7 @@ elif space.isinstance_w(w_dtype, space.w_str): name = 
space.str_w(w_dtype) for dtype in cache.builtin_dtypes: - if dtype.name == name or dtype.char == name: + if dtype.name == name or dtype.char == name or name in dtype.aliases: return dtype else: for dtype in cache.builtin_dtypes: @@ -84,18 +89,38 @@ def descr_get_shape(self, space): return space.newtuple([]) + def eq(self, space, w_other): + w_other = space.call_function(space.gettypefor(W_Dtype), w_other) + return space.is_w(self, w_other) + + def descr_eq(self, space, w_other): + return space.wrap(self.eq(space, w_other)) + + def descr_ne(self, space, w_other): + return space.wrap(not self.eq(space, w_other)) + + def is_int_type(self): + return (self.kind == SIGNEDLTR or self.kind == UNSIGNEDLTR or + self.kind == BOOLLTR) + + def is_bool_type(self): + return self.kind == BOOLLTR + W_Dtype.typedef = TypeDef("dtype", __module__ = "numpypy", __new__ = interp2app(W_Dtype.descr__new__.im_func), __str__= interp2app(W_Dtype.descr_str), __repr__ = interp2app(W_Dtype.descr_repr), + __eq__ = interp2app(W_Dtype.descr_eq), + __ne__ = interp2app(W_Dtype.descr_ne), num = interp_attrproperty("num", cls=W_Dtype), kind = interp_attrproperty("kind", cls=W_Dtype), type = interp_attrproperty_w("w_box_type", cls=W_Dtype), itemsize = GetSetProperty(W_Dtype.descr_get_itemsize), shape = GetSetProperty(W_Dtype.descr_get_shape), + name = interp_attrproperty('name', cls=W_Dtype), ) W_Dtype.typedef.acceptable_as_base_class = False @@ -107,7 +132,7 @@ kind=BOOLLTR, name="bool", char="?", - w_box_type = space.gettypefor(interp_boxes.W_BoolBox), + w_box_type=space.gettypefor(interp_boxes.W_BoolBox), alternate_constructors=[space.w_bool], ) self.w_int8dtype = W_Dtype( @@ -116,7 +141,7 @@ kind=SIGNEDLTR, name="int8", char="b", - w_box_type = space.gettypefor(interp_boxes.W_Int8Box) + w_box_type=space.gettypefor(interp_boxes.W_Int8Box) ) self.w_uint8dtype = W_Dtype( types.UInt8(), @@ -124,7 +149,7 @@ kind=UNSIGNEDLTR, name="uint8", char="B", - w_box_type = space.gettypefor(interp_boxes.W_UInt8Box), + w_box_type=space.gettypefor(interp_boxes.W_UInt8Box), ) self.w_int16dtype = W_Dtype( types.Int16(), @@ -132,7 +157,7 @@ kind=SIGNEDLTR, name="int16", char="h", - w_box_type = space.gettypefor(interp_boxes.W_Int16Box), + w_box_type=space.gettypefor(interp_boxes.W_Int16Box), ) self.w_uint16dtype = W_Dtype( types.UInt16(), @@ -140,7 +165,7 @@ kind=UNSIGNEDLTR, name="uint16", char="H", - w_box_type = space.gettypefor(interp_boxes.W_UInt16Box), + w_box_type=space.gettypefor(interp_boxes.W_UInt16Box), ) self.w_int32dtype = W_Dtype( types.Int32(), @@ -148,7 +173,7 @@ kind=SIGNEDLTR, name="int32", char="i", - w_box_type = space.gettypefor(interp_boxes.W_Int32Box), + w_box_type=space.gettypefor(interp_boxes.W_Int32Box), ) self.w_uint32dtype = W_Dtype( types.UInt32(), @@ -156,7 +181,7 @@ kind=UNSIGNEDLTR, name="uint32", char="I", - w_box_type = space.gettypefor(interp_boxes.W_UInt32Box), + w_box_type=space.gettypefor(interp_boxes.W_UInt32Box), ) if LONG_BIT == 32: name = "int32" @@ -168,7 +193,7 @@ kind=SIGNEDLTR, name=name, char="l", - w_box_type = space.gettypefor(interp_boxes.W_LongBox), + w_box_type=space.gettypefor(interp_boxes.W_LongBox), alternate_constructors=[space.w_int], ) self.w_ulongdtype = W_Dtype( @@ -177,7 +202,7 @@ kind=UNSIGNEDLTR, name="u" + name, char="L", - w_box_type = space.gettypefor(interp_boxes.W_ULongBox), + w_box_type=space.gettypefor(interp_boxes.W_ULongBox), ) self.w_int64dtype = W_Dtype( types.Int64(), @@ -185,7 +210,7 @@ kind=SIGNEDLTR, name="int64", char="q", - w_box_type = 
space.gettypefor(interp_boxes.W_Int64Box), + w_box_type=space.gettypefor(interp_boxes.W_Int64Box), alternate_constructors=[space.w_long], ) self.w_uint64dtype = W_Dtype( @@ -194,7 +219,7 @@ kind=UNSIGNEDLTR, name="uint64", char="Q", - w_box_type = space.gettypefor(interp_boxes.W_UInt64Box), + w_box_type=space.gettypefor(interp_boxes.W_UInt64Box), ) self.w_float32dtype = W_Dtype( types.Float32(), @@ -202,7 +227,7 @@ kind=FLOATINGLTR, name="float32", char="f", - w_box_type = space.gettypefor(interp_boxes.W_Float32Box), + w_box_type=space.gettypefor(interp_boxes.W_Float32Box), ) self.w_float64dtype = W_Dtype( types.Float64(), @@ -212,6 +237,7 @@ char="d", w_box_type = space.gettypefor(interp_boxes.W_Float64Box), alternate_constructors=[space.w_float], + aliases=["float"], ) self.builtin_dtypes = [ diff --git a/pypy/module/micronumpy/interp_iter.py b/pypy/module/micronumpy/interp_iter.py --- a/pypy/module/micronumpy/interp_iter.py +++ b/pypy/module/micronumpy/interp_iter.py @@ -1,19 +1,80 @@ from pypy.rlib import jit from pypy.rlib.objectmodel import instantiate -from pypy.module.micronumpy.strides import calculate_broadcast_strides +from pypy.module.micronumpy.strides import calculate_broadcast_strides,\ + calculate_slice_strides -# Iterators for arrays -# -------------------- -# all those iterators with the exception of BroadcastIterator iterate over the -# entire array in C order (the last index changes the fastest). This will -# yield all elements. Views iterate over indices and look towards strides and -# backstrides to find the correct position. Notably the offset between -# x[..., i + 1] and x[..., i] will be strides[-1]. Offset between -# x[..., k + 1, 0] and x[..., k, i_max] will be backstrides[-2] etc. +""" This is a mini-tutorial on iterators, strides, and +memory layout. It assumes you are familiar with the terms, see +http://docs.scipy.org/doc/numpy/reference/arrays.ndarray.html +for a more gentle introduction. -# BroadcastIterator works like that, but for indexes that don't change source -# in the original array, strides[i] == backstrides[i] == 0 +Given an array x: x.shape == [5,6], + +At which byte in x.data does the item x[3,4] begin? +if x.strides==[1,5]: + pData = x.pData + (x.start + 3*1 + 4*5)*sizeof(x.pData[0]) + pData = x.pData + (x.start + 24) * sizeof(x.pData[0]) +so the offset of the element is 24 elements after the first + +What is the next element in x after coordinates [3,4]? +if x.order =='C': + next == [3,5] => offset is 28 +if x.order =='F': + next == [4,4] => offset is 24 +so for the strides [1,5] x is 'F' contiguous +likewise, for the strides [6,1] x would be 'C' contiguous. + +Iterators have an internal representation of the current coordinates +(indices), the array, strides, and backstrides. A short digression to +explain backstrides: what is the coordinate and offset after [3,5] in +the example above? +if x.order == 'C': + next == [4,0] => offset is 4 +if x.order == 'F': + next == [4,5] => offset is 25 +Note that in 'C' order we stepped BACKWARDS 24 while 'overflowing' a +shape dimension + which is back 25 and forward 1, + which is x.strides[1] * (x.shape[1] - 1) + x.strides[0] +so if we precalculate the overflow backstride as +[x.strides[i] * (x.shape[i] - 1) for i in range(len(x.shape))] +we can go faster. +All the calculations happen in next() + +next_step_x() tries to do the iteration for a number of steps at once, +but then we cannot gaurentee that we only overflow one single shape +dimension, perhaps we could overflow times in one big step. 
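A minimal worked illustration of the backstride formula above, continuing the
x.shape == [5,6] example (a sketch for the reader, not part of this changeset):

    shape   = [5, 6]
    strides = [1, 5]          # the 'F' contiguous case discussed above
    backstrides = [strides[i] * (shape[i] - 1) for i in range(len(shape))]
    # backstrides == [4, 25]; the "back 25 and forward 1" step described
    # above for the 'C' order case is exactly -backstrides[1] + strides[0],
    # i.e. a net move of -24 elements.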
+""" + +# structures to describe slicing + +class Chunk(object): + def __init__(self, start, stop, step, lgt): + self.start = start + self.stop = stop + self.step = step + self.lgt = lgt + + def extend_shape(self, shape): + if self.step != 0: + shape.append(self.lgt) + + def __repr__(self): + return 'Chunk(%d, %d, %d, %d)' % (self.start, self.stop, self.step, + self.lgt) + +class BaseTransform(object): + pass + +class ViewTransform(BaseTransform): + def __init__(self, chunks): + # 4-tuple specifying slicing + self.chunks = chunks + +class BroadcastTransform(BaseTransform): + def __init__(self, res_shape): + self.res_shape = res_shape class BaseIterator(object): def next(self, shapelen): @@ -22,20 +83,40 @@ def done(self): raise NotImplementedError + def apply_transformations(self, arr, transformations): + v = self + for transform in transformations: + v = v.transform(arr, transform) + return v + + def transform(self, arr, t): + raise NotImplementedError + class ArrayIterator(BaseIterator): def __init__(self, size): self.offset = 0 self.size = size def next(self, shapelen): + return self.next_skip_x(1) + + def next_skip_x(self, ofs): arr = instantiate(ArrayIterator) arr.size = self.size - arr.offset = self.offset + 1 + arr.offset = self.offset + ofs return arr + def next_no_increase(self, shapelen): + # a hack to make JIT believe this is always virtual + return self.next_skip_x(0) + def done(self): return self.offset >= self.size + def transform(self, arr, t): + return ViewIterator(arr.start, arr.strides, arr.backstrides, + arr.shape).transform(arr, t) + class OneDimIterator(BaseIterator): def __init__(self, start, step, stop): self.offset = start @@ -52,26 +133,29 @@ def done(self): return self.offset == self.size -def view_iter_from_arr(arr): - return ViewIterator(arr.start, arr.strides, arr.backstrides, arr.shape) - class ViewIterator(BaseIterator): - def __init__(self, start, strides, backstrides, shape, res_shape=None): + def __init__(self, start, strides, backstrides, shape): self.offset = start self._done = False - if res_shape is not None and res_shape != shape: - r = calculate_broadcast_strides(strides, backstrides, - shape, res_shape) - self.strides, self.backstrides = r - self.res_shape = res_shape - else: - self.strides = strides - self.backstrides = backstrides - self.res_shape = shape + self.strides = strides + self.backstrides = backstrides + self.res_shape = shape self.indices = [0] * len(self.res_shape) + def transform(self, arr, t): + if isinstance(t, BroadcastTransform): + r = calculate_broadcast_strides(self.strides, self.backstrides, + self.res_shape, t.res_shape) + return ViewIterator(self.offset, r[0], r[1], t.res_shape) + elif isinstance(t, ViewTransform): + r = calculate_slice_strides(self.res_shape, self.offset, + self.strides, + self.backstrides, t.chunks) + return ViewIterator(r[1], r[2], r[3], r[0]) + @jit.unroll_safe def next(self, shapelen): + shapelen = jit.promote(len(self.res_shape)) offset = self.offset indices = [0] * shapelen for i in range(shapelen): @@ -96,6 +180,43 @@ res._done = done return res + @jit.unroll_safe + def next_skip_x(self, shapelen, step): + shapelen = jit.promote(len(self.res_shape)) + offset = self.offset + indices = [0] * shapelen + for i in range(shapelen): + indices[i] = self.indices[i] + done = False + for i in range(shapelen - 1, -1, -1): + if indices[i] < self.res_shape[i] - step: + indices[i] += step + offset += self.strides[i] * step + break + else: + remaining_step = (indices[i] + step) // self.res_shape[i] + this_i_step = 
step - remaining_step * self.res_shape[i] + offset += self.strides[i] * this_i_step + indices[i] = indices[i] + this_i_step + step = remaining_step + else: + done = True + res = instantiate(ViewIterator) + res.offset = offset + res.indices = indices + res.strides = self.strides + res.backstrides = self.backstrides + res.res_shape = self.res_shape + res._done = done + return res + + def apply_transformations(self, arr, transformations): + v = BaseIterator.apply_transformations(self, arr, transformations) + if len(arr.shape) == 1: + return OneDimIterator(self.offset, self.strides[0], + self.res_shape[0]) + return v + def done(self): return self._done @@ -103,11 +224,64 @@ def next(self, shapelen): return self + def transform(self, arr, t): + pass + +class AxisIterator(BaseIterator): + def __init__(self, start, dim, shape, strides, backstrides): + self.res_shape = shape[:] + if len(shape) == len(strides): + # keepdims = True + self.strides = strides[:dim] + [0] + strides[dim + 1:] + self.backstrides = backstrides[:dim] + [0] + backstrides[dim + 1:] + else: + self.strides = strides[:dim] + [0] + strides[dim:] + self.backstrides = backstrides[:dim] + [0] + backstrides[dim:] + self.first_line = True + self.indices = [0] * len(shape) + self._done = False + self.offset = start + self.dim = dim + + @jit.unroll_safe + def next(self, shapelen): + offset = self.offset + first_line = self.first_line + indices = [0] * shapelen + for i in range(shapelen): + indices[i] = self.indices[i] + done = False + for i in range(shapelen - 1, -1, -1): + if indices[i] < self.res_shape[i] - 1: + if i == self.dim: + first_line = False + indices[i] += 1 + offset += self.strides[i] + break + else: + if i == self.dim: + first_line = True + indices[i] = 0 + offset -= self.backstrides[i] + else: + done = True + res = instantiate(AxisIterator) + res.offset = offset + res.indices = indices + res.strides = self.strides + res.backstrides = self.backstrides + res.res_shape = self.res_shape + res._done = done + res.first_line = first_line + res.dim = self.dim + return res + + def done(self): + return self._done + # ------ other iterators that are not part of the computation frame ---------- - -class AxisIterator(object): - """ This object will return offsets of each start of the last stride - """ + +class SkipLastAxisIterator(object): def __init__(self, arr): self.arr = arr self.indices = [0] * (len(arr.shape) - 1) @@ -125,4 +299,3 @@ self.offset -= self.arr.backstrides[i] else: self.done = True - diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -1,211 +1,82 @@ from pypy.interpreter.baseobjspace import Wrappable from pypy.interpreter.error import OperationError, operationerrfmt -from pypy.interpreter.gateway import interp2app, NoneNotWrapped +from pypy.interpreter.gateway import interp2app, unwrap_spec from pypy.interpreter.typedef import TypeDef, GetSetProperty -from pypy.module.micronumpy import interp_ufuncs, interp_dtype, signature -from pypy.module.micronumpy.strides import calculate_slice_strides +from pypy.module.micronumpy import (interp_ufuncs, interp_dtype, interp_boxes, + signature, support) +from pypy.module.micronumpy.strides import (calculate_slice_strides, + shape_agreement, find_shape_and_elems, get_shape_from_iterable, + calc_new_strides, to_coords) from pypy.rlib import jit from pypy.rpython.lltypesystem import lltype, rffi from pypy.tool.sourcetools import 
func_with_new_name from pypy.rlib.rstring import StringBuilder -from pypy.module.micronumpy.interp_iter import ArrayIterator,\ - view_iter_from_arr, OneDimIterator, AxisIterator +from pypy.module.micronumpy.interp_iter import (ArrayIterator, OneDimIterator, + SkipLastAxisIterator, Chunk, ViewIterator) +from pypy.module.micronumpy.appbridge import get_appbridge_cache + numpy_driver = jit.JitDriver( greens=['shapelen', 'sig'], virtualizables=['frame'], reds=['result_size', 'frame', 'ri', 'self', 'result'], get_printable_location=signature.new_printable_location('numpy'), + name='numpy', ) all_driver = jit.JitDriver( greens=['shapelen', 'sig'], virtualizables=['frame'], reds=['frame', 'self', 'dtype'], get_printable_location=signature.new_printable_location('all'), + name='numpy_all', ) any_driver = jit.JitDriver( greens=['shapelen', 'sig'], virtualizables=['frame'], reds=['frame', 'self', 'dtype'], get_printable_location=signature.new_printable_location('any'), + name='numpy_any', ) slice_driver = jit.JitDriver( greens=['shapelen', 'sig'], virtualizables=['frame'], - reds=['self', 'frame', 'source', 'res_iter'], + reds=['self', 'frame', 'arr'], get_printable_location=signature.new_printable_location('slice'), + name='numpy_slice', ) - -def _find_shape_and_elems(space, w_iterable): - shape = [space.len_w(w_iterable)] - batch = space.listview(w_iterable) - while True: - new_batch = [] - if not batch: - return shape, [] - if not space.issequence_w(batch[0]): - for elem in batch: - if space.issequence_w(elem): - raise OperationError(space.w_ValueError, space.wrap( - "setting an array element with a sequence")) - return shape, batch - size = space.len_w(batch[0]) - for w_elem in batch: - if not space.issequence_w(w_elem) or space.len_w(w_elem) != size: - raise OperationError(space.w_ValueError, space.wrap( - "setting an array element with a sequence")) - new_batch += space.listview(w_elem) - shape.append(size) - batch = new_batch - -def shape_agreement(space, shape1, shape2): - ret = _shape_agreement(shape1, shape2) - if len(ret) < max(len(shape1), len(shape2)): - raise OperationError(space.w_ValueError, - space.wrap("operands could not be broadcast together with shapes (%s) (%s)" % ( - ",".join([str(x) for x in shape1]), - ",".join([str(x) for x in shape2]), - )) - ) - return ret - -def _shape_agreement(shape1, shape2): - """ Checks agreement about two shapes with respect to broadcasting. Returns - the resulting shape. 
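shape_agreement, now imported from strides.py above, implements the usual broadcasting rule that the removed helper below spelled out: shapes are compared from the rightmost dimension, a pair of sizes is compatible when they are equal or one of them is 1, and the result keeps the larger of the two. A minimal stand-alone restatement of that rule (not the module's actual code) is:

    # Sketch of the broadcasting rule; returns the broadcast shape, or None
    # when the shapes are incompatible (shape_agreement raises an app-level
    # ValueError in that case).
    def broadcast_shapes(shape1, shape2):
        result = []
        # walk both shapes from the right; missing dimensions act like size 1
        for i in range(1, max(len(shape1), len(shape2)) + 1):
            d1 = shape1[-i] if i <= len(shape1) else 1
            d2 = shape2[-i] if i <= len(shape2) else 1
            if d1 == d2 or d1 == 1 or d2 == 1:
                result.append(max(d1, d2))
            else:
                return None
        result.reverse()
        return result

    print(broadcast_shapes([5, 1, 6], [3, 6]))   # [5, 3, 6]
    print(broadcast_shapes([5, 6], [6]))         # [5, 6]
    print(broadcast_shapes([5, 6], [3, 6]))      # None: could not be broadcast together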
- """ - lshift = 0 - rshift = 0 - if len(shape1) > len(shape2): - m = len(shape1) - n = len(shape2) - rshift = len(shape2) - len(shape1) - remainder = shape1 - else: - m = len(shape2) - n = len(shape1) - lshift = len(shape1) - len(shape2) - remainder = shape2 - endshape = [0] * m - indices1 = [True] * m - indices2 = [True] * m - for i in range(m - 1, m - n - 1, -1): - left = shape1[i + lshift] - right = shape2[i + rshift] - if left == right: - endshape[i] = left - elif left == 1: - endshape[i] = right - indices1[i + lshift] = False - elif right == 1: - endshape[i] = left - indices2[i + rshift] = False - else: - return [] - #raise OperationError(space.w_ValueError, space.wrap( - # "frames are not aligned")) - for i in range(m - n): - endshape[i] = remainder[i] - return endshape - -def get_shape_from_iterable(space, old_size, w_iterable): - new_size = 0 - new_shape = [] - if space.isinstance_w(w_iterable, space.w_int): - new_size = space.int_w(w_iterable) - if new_size < 0: - new_size = old_size - new_shape = [new_size] - else: - neg_dim = -1 - batch = space.listview(w_iterable) - new_size = 1 - if len(batch) < 1: - if old_size == 1: - # Scalars can have an empty size. - new_size = 1 - else: - new_size = 0 - new_shape = [] - i = 0 - for elem in batch: - s = space.int_w(elem) - if s < 0: - if neg_dim >= 0: - raise OperationError(space.w_ValueError, space.wrap( - "can only specify one unknown dimension")) - s = 1 - neg_dim = i - new_size *= s - new_shape.append(s) - i += 1 - if neg_dim >= 0: - new_shape[neg_dim] = old_size / new_size - new_size *= new_shape[neg_dim] - if new_size != old_size: - raise OperationError(space.w_ValueError, - space.wrap("total size of new array must be unchanged")) - return new_shape - -# Recalculating strides. Find the steps that the iteration does for each -# dimension, given the stride and shape. Then try to create a new stride that -# fits the new shape, using those steps. If there is a shape/step mismatch -# (meaning that the realignment of elements crosses from one step into another) -# return None so that the caller can raise an exception. 
-def calc_new_strides(new_shape, old_shape, old_strides): - # Return the proper strides for new_shape, or None if the mapping crosses - # stepping boundaries - - # Assumes that prod(old_shape) == prod(new_shape), len(old_shape) > 1, and - # len(new_shape) > 0 - steps = [] - last_step = 1 - oldI = 0 - new_strides = [] - if old_strides[0] < old_strides[-1]: - for i in range(len(old_shape)): - steps.append(old_strides[i] / last_step) - last_step *= old_shape[i] - cur_step = steps[0] - n_new_elems_used = 1 - n_old_elems_to_use = old_shape[0] - for s in new_shape: - new_strides.append(cur_step * n_new_elems_used) - n_new_elems_used *= s - while n_new_elems_used > n_old_elems_to_use: - oldI += 1 - if steps[oldI] != steps[oldI - 1]: - return None - n_old_elems_to_use *= old_shape[oldI] - if n_new_elems_used == n_old_elems_to_use: - oldI += 1 - if oldI >= len(old_shape): - break - cur_step = steps[oldI] - n_old_elems_to_use *= old_shape[oldI] - else: - for i in range(len(old_shape) - 1, -1, -1): - steps.insert(0, old_strides[i] / last_step) - last_step *= old_shape[i] - cur_step = steps[-1] - n_new_elems_used = 1 - oldI = -1 - n_old_elems_to_use = old_shape[-1] - for i in range(len(new_shape) - 1, -1, -1): - s = new_shape[i] - new_strides.insert(0, cur_step * n_new_elems_used) - n_new_elems_used *= s - while n_new_elems_used > n_old_elems_to_use: - oldI -= 1 - if steps[oldI] != steps[oldI + 1]: - return None - n_old_elems_to_use *= old_shape[oldI] - if n_new_elems_used == n_old_elems_to_use: - oldI -= 1 - if oldI < -len(old_shape): - break - cur_step = steps[oldI] - n_old_elems_to_use *= old_shape[oldI] - return new_strides +count_driver = jit.JitDriver( + greens=['shapelen'], + virtualizables=['frame'], + reds=['s', 'frame', 'iter', 'arr'], + name='numpy_count' +) +filter_driver = jit.JitDriver( + greens=['shapelen', 'sig'], + virtualizables=['frame'], + reds=['concr', 'argi', 'ri', 'frame', 'v', 'res', 'self'], + name='numpy_filter', +) +filter_set_driver = jit.JitDriver( + greens=['shapelen', 'sig'], + virtualizables=['frame'], + reds=['idx', 'idxi', 'frame', 'arr'], + name='numpy_filterset', +) +take_driver = jit.JitDriver( + greens=['shapelen', 'sig'], + reds=['index_i', 'res_i', 'concr', 'index', 'res'], + name='numpy_take', +) +flat_get_driver = jit.JitDriver( + greens=['shapelen', 'base'], + reds=['step', 'ri', 'basei', 'res'], + name='numpy_flatget', +) +flat_set_driver = jit.JitDriver( + greens=['shapelen', 'base'], + reds=['step', 'ai', 'lngth', 'arr', 'basei'], + name='numpy_flatset', +) class BaseArray(Wrappable): _attrs_ = ["invalidates", "shape", 'size'] @@ -246,6 +117,7 @@ descr_pos = _unaryop_impl("positive") descr_neg = _unaryop_impl("negative") descr_abs = _unaryop_impl("absolute") + descr_invert = _unaryop_impl("invert") def _binop_impl(ufunc_name): def impl(self, space, w_other): @@ -266,6 +138,9 @@ descr_gt = _binop_impl("greater") descr_ge = _binop_impl("greater_equal") + descr_and = _binop_impl("bitwise_and") + descr_or = _binop_impl("bitwise_or") + def _binop_right_impl(ufunc_name): def impl(self, space, w_other): w_other = scalar_w(space, @@ -282,13 +157,19 @@ descr_rpow = _binop_right_impl("power") descr_rmod = _binop_right_impl("mod") - def _reduce_ufunc_impl(ufunc_name): - def impl(self, space): - return getattr(interp_ufuncs.get(space), ufunc_name).reduce(space, self, multidim=True) + def _reduce_ufunc_impl(ufunc_name, promote_to_largest=False): + def impl(self, space, w_axis=None): + if space.is_w(w_axis, space.w_None): + axis = -1 + else: + axis = 
space.int_w(w_axis) + return getattr(interp_ufuncs.get(space), ufunc_name).reduce(space, + self, True, promote_to_largest, axis) return func_with_new_name(impl, "reduce_%s_impl" % ufunc_name) descr_sum = _reduce_ufunc_impl("add") - descr_prod = _reduce_ufunc_impl("multiply") + descr_sum_promote = _reduce_ufunc_impl("add", True) + descr_prod = _reduce_ufunc_impl("multiply", True) descr_max = _reduce_ufunc_impl("maximum") descr_min = _reduce_ufunc_impl("minimum") @@ -297,6 +178,7 @@ greens=['shapelen', 'sig'], reds=['result', 'idx', 'frame', 'self', 'cur_best', 'dtype'], get_printable_location=signature.new_printable_location(op_name), + name='numpy_' + op_name, ) def loop(self): sig = self.find_sig() @@ -372,7 +254,7 @@ else: w_res = self.descr_mul(space, w_other) assert isinstance(w_res, BaseArray) - return w_res.descr_sum(space) + return w_res.descr_sum(space, space.wrap(-1)) def get_concrete(self): raise NotImplementedError @@ -400,9 +282,23 @@ def descr_copy(self, space): return self.copy(space) + def descr_flatten(self, space, w_order=None): + if isinstance(self, Scalar): + # scalars have no storage + return self.descr_reshape(space, [space.wrap(1)]) + concr = self.get_concrete() + w_res = concr.descr_ravel(space, w_order) + if w_res.storage == concr.storage: + return w_res.copy(space) + return w_res + def copy(self, space): return self.get_concrete().copy(space) + def empty_copy(self, space, dtype): + shape = self.shape + return W_NDimArray(support.product(shape), shape[:], dtype, 'C') + def descr_len(self, space): if len(self.shape): return space.wrap(self.shape[0]) @@ -410,33 +306,32 @@ "len() of unsized object")) def descr_repr(self, space): - res = StringBuilder() - res.append("array(") - concrete = self.get_concrete_or_scalar() - dtype = concrete.find_dtype() - if not concrete.size: - res.append('[]') - if len(self.shape) > 1: - # An empty slice reports its shape - res.append(", shape=(") - self_shape = str(self.shape) - res.append_slice(str(self_shape), 1, len(self_shape) - 1) - res.append(')') - else: - concrete.to_str(space, 1, res, indent=' ') - if (dtype is not interp_dtype.get_dtype_cache(space).w_float64dtype and - not (dtype.kind == interp_dtype.SIGNEDLTR and - dtype.itemtype.get_element_size() == rffi.sizeof(lltype.Signed)) or - not self.size): - res.append(", dtype=" + dtype.name) - res.append(")") - return space.wrap(res.build()) + cache = get_appbridge_cache(space) + if cache.w_array_repr is None: + return space.wrap(self.dump_data()) + return space.call_function(cache.w_array_repr, self) + + def dump_data(self): + concr = self.get_concrete() + i = concr.create_iter() + first = True + s = StringBuilder() + s.append('array([') + while not i.done(): + if first: + first = False + else: + s.append(', ') + s.append(concr.dtype.itemtype.str_format(concr.getitem(i.offset))) + i = i.next(len(concr.shape)) + s.append('])') + return s.build() def descr_str(self, space): - ret = StringBuilder() - concrete = self.get_concrete_or_scalar() - concrete.to_str(space, 0, ret, ' ') - return space.wrap(ret.build()) + cache = get_appbridge_cache(space) + if cache.w_array_str is None: + return space.wrap(self.dump_data()) + return space.call_function(cache.w_array_str, self) @jit.unroll_safe def _single_item_result(self, space, w_idx): @@ -470,20 +365,86 @@ def _prepare_slice_args(self, space, w_idx): if (space.isinstance_w(w_idx, space.w_int) or space.isinstance_w(w_idx, space.w_slice)): - return [space.decode_index4(w_idx, self.shape[0])] - return [space.decode_index4(w_item, 
self.shape[i]) for i, w_item in + return [Chunk(*space.decode_index4(w_idx, self.shape[0]))] + return [Chunk(*space.decode_index4(w_item, self.shape[i])) for i, w_item in enumerate(space.fixedview(w_idx))] + def count_all_true(self, arr): + sig = arr.find_sig() + frame = sig.create_frame(arr) + shapelen = len(arr.shape) + s = 0 + iter = None + while not frame.done(): + count_driver.jit_merge_point(arr=arr, frame=frame, iter=iter, s=s, + shapelen=shapelen) + iter = frame.get_final_iter() + s += arr.dtype.getitem_bool(arr.storage, iter.offset) + frame.next(shapelen) + return s + + def getitem_filter(self, space, arr): + concr = arr.get_concrete() + if concr.size > self.size: + raise OperationError(space.w_IndexError, + space.wrap("index out of range for array")) + size = self.count_all_true(concr) + res = W_NDimArray(size, [size], self.find_dtype()) + ri = ArrayIterator(size) + shapelen = len(self.shape) + argi = concr.create_iter() + sig = self.find_sig() + frame = sig.create_frame(self) + v = None + while not ri.done(): + filter_driver.jit_merge_point(concr=concr, argi=argi, ri=ri, + frame=frame, v=v, res=res, sig=sig, + shapelen=shapelen, self=self) + if concr.dtype.getitem_bool(concr.storage, argi.offset): + v = sig.eval(frame, self) + res.setitem(ri.offset, v) + ri = ri.next(1) + else: + ri = ri.next_no_increase(1) + argi = argi.next(shapelen) + frame.next(shapelen) + return res + + def setitem_filter(self, space, idx, val): + size = self.count_all_true(idx) + arr = SliceArray([size], self.dtype, self, val) + sig = arr.find_sig() + shapelen = len(self.shape) + frame = sig.create_frame(arr) + idxi = idx.create_iter() + while not frame.done(): + filter_set_driver.jit_merge_point(idx=idx, idxi=idxi, sig=sig, + frame=frame, arr=arr, + shapelen=shapelen) + if idx.dtype.getitem_bool(idx.storage, idxi.offset): + sig.eval(frame, arr) + frame.next_from_second(1) + frame.next_first(shapelen) + idxi = idxi.next(shapelen) + def descr_getitem(self, space, w_idx): + if (isinstance(w_idx, BaseArray) and w_idx.shape == self.shape and + w_idx.find_dtype().is_bool_type()): + return self.getitem_filter(space, w_idx) if self._single_item_result(space, w_idx): concrete = self.get_concrete() item = concrete._index_of_single_item(space, w_idx) return concrete.getitem(item) chunks = self._prepare_slice_args(space, w_idx) - return space.wrap(self.create_slice(chunks)) + return self.create_slice(chunks) def descr_setitem(self, space, w_idx, w_value): self.invalidated() + if (isinstance(w_idx, BaseArray) and w_idx.shape == self.shape and + w_idx.find_dtype().is_bool_type()): + return self.get_concrete().setitem_filter(space, + w_idx.get_concrete(), + convert_to_array(space, w_value)) if self._single_item_result(space, w_idx): concrete = self.get_concrete() item = concrete._index_of_single_item(space, w_idx) @@ -500,9 +461,8 @@ def create_slice(self, chunks): shape = [] i = -1 - for i, (start_, stop, step, lgt) in enumerate(chunks): - if step != 0: - shape.append(lgt) + for i, chunk in enumerate(chunks): + chunk.extend_shape(shape) s = i + 1 assert s >= 0 shape += self.shape[s:] @@ -530,11 +490,14 @@ w_shape = args_w[0] else: w_shape = space.newtuple(args_w) + new_shape = get_shape_from_iterable(space, self.size, w_shape) + return self.reshape(space, new_shape) + + def reshape(self, space, new_shape): concrete = self.get_concrete() - new_shape = get_shape_from_iterable(space, concrete.size, w_shape) # Since we got to here, prod(new_shape) == self.size - new_strides = calc_new_strides(new_shape, - concrete.shape, 
concrete.strides) + new_strides = calc_new_strides(new_shape, concrete.shape, + concrete.strides, concrete.order) if new_strides: # We can create a view, strides somehow match up. ndims = len(new_shape) @@ -542,7 +505,7 @@ for nd in range(ndims): new_backstrides[nd] = (new_shape[nd] - 1) * new_strides[nd] arr = W_NDimSlice(concrete.start, new_strides, new_backstrides, - new_shape, self) + new_shape, concrete) else: # Create copy with contiguous data arr = concrete.copy(space) @@ -552,7 +515,7 @@ def descr_tolist(self, space): if len(self.shape) == 0: assert isinstance(self, Scalar) - return self.value.descr_tolist(space) + return self.value.item(space) w_result = space.newlist([]) for i in range(self.shape[0]): space.call_method(w_result, "append", @@ -560,20 +523,26 @@ ) return w_result - def descr_mean(self, space): - return space.div(self.descr_sum(space), space.wrap(self.size)) + def descr_mean(self, space, w_axis=None): + if space.is_w(w_axis, space.w_None): + w_axis = space.wrap(-1) + w_denom = space.wrap(self.size) + else: + dim = space.int_w(w_axis) + w_denom = space.wrap(self.shape[dim]) + return space.div(self.descr_sum_promote(space, w_axis), w_denom) - def descr_var(self, space): - # var = mean((values - mean(values)) ** 2) - w_res = self.descr_sub(space, self.descr_mean(space)) - assert isinstance(w_res, BaseArray) - w_res = w_res.descr_pow(space, space.wrap(2)) - assert isinstance(w_res, BaseArray) - return w_res.descr_mean(space) + def descr_var(self, space, w_axis=None): + return get_appbridge_cache(space).call_method(space, '_var', self, + w_axis) - def descr_std(self, space): - # std(v) = sqrt(var(v)) - return interp_ufuncs.get(space).sqrt.call(space, [self.descr_var(space)]) + def descr_std(self, space, w_axis=None): + return get_appbridge_cache(space).call_method(space, '_std', self, + w_axis) + + def descr_fill(self, space, w_value): + concr = self.get_concrete_or_scalar() + concr.fill(space, w_value) def descr_nonzero(self, space): if self.size > 1: @@ -602,17 +571,28 @@ return space.wrap(W_NDimSlice(concrete.start, strides, backstrides, shape, concrete)) + def descr_ravel(self, space, w_order=None): + if w_order is None or space.is_w(w_order, space.w_None): + order = 'C' + else: + order = space.str_w(w_order) + if order != 'C': + raise OperationError(space.w_NotImplementedError, space.wrap( + "order not implemented")) + return self.descr_reshape(space, [space.wrap(-1)]) + def descr_get_flatiter(self, space): return space.wrap(W_FlatIterator(self)) def getitem(self, item): raise NotImplementedError - def find_sig(self, res_shape=None): + def find_sig(self, res_shape=None, arr=None): """ find a correct signature for the array """ res_shape = res_shape or self.shape - return signature.find_sig(self.create_sig(res_shape), self) + arr = arr or self + return signature.find_sig(self.create_sig(), arr) def descr_array_iface(self, space): if not self.shape: @@ -630,6 +610,60 @@ def supports_fast_slicing(self): return False + def descr_compress(self, space, w_obj, w_axis=None): + index = convert_to_array(space, w_obj) + return self.getitem_filter(space, index) + + def descr_take(self, space, w_obj, w_axis=None): + index = convert_to_array(space, w_obj).get_concrete() + concr = self.get_concrete() + if space.is_w(w_axis, space.w_None): + concr = concr.descr_ravel(space) + else: + raise OperationError(space.w_NotImplementedError, + space.wrap("axis unsupported for take")) + index_i = index.create_iter() + res_shape = index.shape + size = support.product(res_shape) + res = 
W_NDimArray(size, res_shape[:], concr.dtype, concr.order) + res_i = res.create_iter() + shapelen = len(index.shape) + sig = concr.find_sig() + while not index_i.done(): + take_driver.jit_merge_point(index_i=index_i, index=index, + res_i=res_i, concr=concr, + res=res, + shapelen=shapelen, sig=sig) + w_item = index._getitem_long(space, index_i.offset) + res.setitem(res_i.offset, concr.descr_getitem(space, w_item)) + index_i = index_i.next(shapelen) + res_i = res_i.next(shapelen) + return res + + def _getitem_long(self, space, offset): + # an obscure hack to not have longdtype inside a jitted loop + longdtype = interp_dtype.get_dtype_cache(space).w_longdtype + return self.getitem(offset).convert_to(longdtype).item( + space) + + def descr_item(self, space, w_arg=None): + if space.is_w(w_arg, space.w_None): + if not isinstance(self, Scalar): + raise OperationError(space.w_ValueError, space.wrap("index out of bounds")) + return self.value.item(space) + if space.isinstance_w(w_arg, space.w_int): + if isinstance(self, Scalar): + raise OperationError(space.w_ValueError, space.wrap("index out of bounds")) + concr = self.get_concrete() + i = to_coords(space, self.shape, concr.size, concr.order, w_arg)[0] + # XXX a bit around + item = self.descr_getitem(space, space.newtuple([space.wrap(x) + for x in i])) + assert isinstance(item, interp_boxes.W_GenericBox) + return item.item(space) + raise OperationError(space.w_NotImplementedError, space.wrap( + "non-int arg not supported")) + def convert_to_array(space, w_obj): if isinstance(w_obj, BaseArray): return w_obj @@ -655,23 +689,29 @@ self.shape = [] BaseArray.__init__(self, []) self.dtype = dtype + assert isinstance(value, interp_boxes.W_GenericBox) self.value = value def find_dtype(self): return self.dtype - def to_str(self, space, comma, builder, indent=' ', use_ellipsis=False): - builder.append(self.dtype.itemtype.str_format(self.value)) - def copy(self, space): return Scalar(self.dtype, self.value) - def create_sig(self, res_shape): + def fill(self, space, w_value): + self.value = self.dtype.coerce(space, w_value) + + def create_sig(self): return signature.ScalarSignature(self.dtype) def get_concrete_or_scalar(self): return self + def reshape(self, space, new_shape): + size = support.product(new_shape) + res = W_NDimArray(size, new_shape, self.dtype, 'C') + res.setitem(0, self.value) + return res class VirtualArray(BaseArray): """ @@ -684,7 +724,8 @@ self.name = name def _del_sources(self): - # Function for deleting references to source arrays, to allow garbage-collecting them + # Function for deleting references to source arrays, + # to allow garbage-collecting them raise NotImplementedError def compute(self): @@ -700,8 +741,7 @@ frame=frame, ri=ri, self=self, result=result) - result.dtype.setitem(result.storage, ri.offset, - sig.eval(frame, self)) + result.setitem(ri.offset, sig.eval(frame, self)) frame.next(shapelen) ri = ri.next(shapelen) return result @@ -728,19 +768,16 @@ class VirtualSlice(VirtualArray): def __init__(self, child, chunks, shape): - size = 1 - for sh in shape: - size *= sh self.child = child self.chunks = chunks - self.size = size + self.size = support.product(shape) VirtualArray.__init__(self, 'slice', shape, child.find_dtype()) - def create_sig(self, res_shape): + def create_sig(self): if self.forced_result is not None: - return self.forced_result.create_sig(res_shape) + return self.forced_result.create_sig() return signature.VirtualSliceSignature( - self.child.create_sig(res_shape)) + self.child.create_sig()) def 
force_if_needed(self): if self.forced_result is None: @@ -750,46 +787,91 @@ def _del_sources(self): self.child = None + class Call1(VirtualArray): - def __init__(self, ufunc, name, shape, res_dtype, values): + def __init__(self, ufunc, name, shape, calc_dtype, res_dtype, values): VirtualArray.__init__(self, name, shape, res_dtype) self.values = values self.size = values.size self.ufunc = ufunc + self.calc_dtype = calc_dtype def _del_sources(self): self.values = None - def create_sig(self, res_shape): + def create_sig(self): if self.forced_result is not None: - return self.forced_result.create_sig(res_shape) - return signature.Call1(self.ufunc, self.name, - self.values.create_sig(res_shape)) + return self.forced_result.create_sig() + return signature.Call1(self.ufunc, self.name, self.values.create_sig()) class Call2(VirtualArray): """ Intermediate class for performing binary operations. """ + _immutable_fields_ = ['left', 'right'] + def __init__(self, ufunc, name, shape, calc_dtype, res_dtype, left, right): VirtualArray.__init__(self, name, shape, res_dtype) self.ufunc = ufunc self.left = left self.right = right self.calc_dtype = calc_dtype - self.size = 1 - for s in self.shape: - self.size *= s + self.size = support.product(self.shape) def _del_sources(self): self.left = None self.right = None - def create_sig(self, res_shape): + def create_sig(self): if self.forced_result is not None: - return self.forced_result.create_sig(res_shape) + return self.forced_result.create_sig() + if self.shape != self.left.shape and self.shape != self.right.shape: + return signature.BroadcastBoth(self.ufunc, self.name, + self.calc_dtype, + self.left.create_sig(), + self.right.create_sig()) + elif self.shape != self.left.shape: + return signature.BroadcastLeft(self.ufunc, self.name, + self.calc_dtype, + self.left.create_sig(), + self.right.create_sig()) + elif self.shape != self.right.shape: + return signature.BroadcastRight(self.ufunc, self.name, + self.calc_dtype, + self.left.create_sig(), + self.right.create_sig()) return signature.Call2(self.ufunc, self.name, self.calc_dtype, - self.left.create_sig(res_shape), - self.right.create_sig(res_shape)) + self.left.create_sig(), self.right.create_sig()) + +class SliceArray(Call2): + def __init__(self, shape, dtype, left, right, no_broadcast=False): + self.no_broadcast = no_broadcast + Call2.__init__(self, None, 'sliceloop', shape, dtype, dtype, left, + right) + + def create_sig(self): + lsig = self.left.create_sig() + rsig = self.right.create_sig() + if not self.no_broadcast and self.shape != self.right.shape: + return signature.SliceloopBroadcastSignature(self.ufunc, + self.name, + self.calc_dtype, + lsig, rsig) + return signature.SliceloopSignature(self.ufunc, self.name, + self.calc_dtype, + lsig, rsig) + +class AxisReduce(Call2): + """ NOTE: this is only used as a container, you should never + encounter such things in the wild. 
Remove this comment + when we'll make AxisReduce lazy + """ + _immutable_fields_ = ['left', 'right'] + + def __init__(self, ufunc, name, shape, dtype, left, right, dim): + Call2.__init__(self, ufunc, name, shape, dtype, dtype, + left, right) + self.dim = dim class ConcreteArray(BaseArray): """ An array that have actual storage, whether owned or not @@ -844,94 +926,6 @@ self.strides = strides self.backstrides = backstrides - def array_sig(self, res_shape): - if res_shape is not None and self.shape != res_shape: - return signature.ViewSignature(self.dtype) - return signature.ArraySignature(self.dtype) - - def to_str(self, space, comma, builder, indent=' ', use_ellipsis=False): - '''Modifies builder with a representation of the array/slice - The items will be seperated by a comma if comma is 1 - Multidimensional arrays/slices will span a number of lines, - each line will begin with indent. - ''' - size = self.size - ccomma = ',' * comma - ncomma = ',' * (1 - comma) - dtype = self.find_dtype() - if size < 1: - builder.append('[]') - return - elif size == 1: - builder.append(dtype.itemtype.str_format(self.getitem(0))) - return - if size > 1000: - # Once this goes True it does not go back to False for recursive - # calls - use_ellipsis = True - ndims = len(self.shape) - i = 0 - builder.append('[') - if ndims > 1: - if use_ellipsis: - for i in range(min(3, self.shape[0])): - if i > 0: - builder.append(ccomma + '\n') - if ndims >= 3: - builder.append('\n' + indent) - else: - builder.append(indent) - view = self.create_slice([(i, 0, 0, 1)]).get_concrete() - view.to_str(space, comma, builder, indent=indent + ' ', - use_ellipsis=use_ellipsis) - if i < self.shape[0] - 1: - builder.append(ccomma +'\n' + indent + '...' + ncomma) - i = self.shape[0] - 3 - else: - i += 1 - while i < self.shape[0]: - if i > 0: - builder.append(ccomma + '\n') - if ndims >= 3: - builder.append('\n' + indent) - else: - builder.append(indent) - # create_slice requires len(chunks) > 1 in order to reduce - # shape - view = self.create_slice([(i, 0, 0, 1)]).get_concrete() - view.to_str(space, comma, builder, indent=indent + ' ', - use_ellipsis=use_ellipsis) - i += 1 - elif ndims == 1: - spacer = ccomma + ' ' - item = self.start - # An iterator would be a nicer way to walk along the 1d array, but - # how do I reset it if printing ellipsis? iterators have no - # "set_offset()" - i = 0 - if use_ellipsis: - for i in range(min(3, self.shape[0])): - if i > 0: - builder.append(spacer) - builder.append(dtype.itemtype.str_format(self.getitem(item))) - item += self.strides[0] - if i < self.shape[0] - 1: - # Add a comma only if comma is False - this prevents adding - # two commas - builder.append(spacer + '...' + ncomma) - # Ugly, but can this be done with an iterator? 
- item = self.start + self.backstrides[0] - 2 * self.strides[0] - i = self.shape[0] - 3 - else: - i += 1 - while i < self.shape[0]: - if i > 0: - builder.append(spacer) - builder.append(dtype.itemtype.str_format(self.getitem(item))) - item += self.strides[0] - i += 1 - builder.append(']') - @jit.unroll_safe def _index_of_single_item(self, space, w_idx): if space.isinstance_w(w_idx, space.w_int): @@ -963,20 +957,22 @@ self.dtype is w_value.find_dtype()): self._fast_setslice(space, w_value) else: - self._sliceloop(w_value, res_shape) + arr = SliceArray(self.shape, self.dtype, self, w_value) + self._sliceloop(arr) def _fast_setslice(self, space, w_value): assert isinstance(w_value, ConcreteArray) itemsize = self.dtype.itemtype.get_element_size() - if len(self.shape) == 1: + shapelen = len(self.shape) + if shapelen == 1: rffi.c_memcpy( rffi.ptradd(self.storage, self.start * itemsize), rffi.ptradd(w_value.storage, w_value.start * itemsize), self.size * itemsize ) else: - dest = AxisIterator(self) - source = AxisIterator(w_value) + dest = SkipLastAxisIterator(self) + source = SkipLastAxisIterator(w_value) while not dest.done: rffi.c_memcpy( rffi.ptradd(self.storage, dest.offset * itemsize), @@ -986,30 +982,28 @@ source.next() dest.next() - def _sliceloop(self, source, res_shape): - sig = source.find_sig(res_shape) - frame = sig.create_frame(source, res_shape) - res_iter = view_iter_from_arr(self) - shapelen = len(res_shape) - while not res_iter.done(): - slice_driver.jit_merge_point(sig=sig, - frame=frame, - shapelen=shapelen, - self=self, source=source, - res_iter=res_iter) - self.setitem(res_iter.offset, sig.eval(frame, source).convert_to( - self.find_dtype())) + def _sliceloop(self, arr): + sig = arr.find_sig() + frame = sig.create_frame(arr) + shapelen = len(self.shape) + while not frame.done(): + slice_driver.jit_merge_point(sig=sig, frame=frame, self=self, + arr=arr, + shapelen=shapelen) + sig.eval(frame, arr) frame.next(shapelen) - res_iter = res_iter.next(shapelen) def copy(self, space): array = W_NDimArray(self.size, self.shape[:], self.dtype, self.order) array.setslice(space, self) return array + def fill(self, space, w_value): + self.setslice(space, scalar_w(space, self.dtype, w_value)) + class ViewArray(ConcreteArray): - def create_sig(self, res_shape): + def create_sig(self): return signature.ViewSignature(self.dtype) @@ -1018,15 +1012,16 @@ assert isinstance(parent, ConcreteArray) if isinstance(parent, W_NDimSlice): parent = parent.parent - size = 1 - for sh in shape: - size *= sh self.strides = strides self.backstrides = backstrides - ViewArray.__init__(self, size, shape, parent.dtype, parent.order, - parent) + ViewArray.__init__(self, support.product(shape), shape, parent.dtype, + parent.order, parent) self.start = start + def create_iter(self): + return ViewIterator(self.start, self.strides, self.backstrides, + self.shape) + def setshape(self, space, new_shape): if len(self.shape) < 1: return @@ -1050,7 +1045,8 @@ self.backstrides = backstrides self.shape = new_shape return - new_strides = calc_new_strides(new_shape, self.shape, self.strides) + new_strides = calc_new_strides(new_shape, self.shape, self.strides, + self.order) if new_strides is None: raise OperationError(space.w_AttributeError, space.wrap( "incompatible shape for a non-contiguous array")) @@ -1073,8 +1069,11 @@ self.shape = new_shape self.calc_strides(new_shape) - def create_sig(self, res_shape): - return self.array_sig(res_shape) + def create_iter(self): + return ArrayIterator(self.size) + + def create_sig(self): 
+ return signature.ArraySignature(self.dtype) def __del__(self): lltype.free(self.storage, flavor='raw', track_allocation=False) @@ -1092,27 +1091,42 @@ shape.append(item) return size, shape -def array(space, w_item_or_iterable, w_dtype=None, w_order=NoneNotWrapped): + at unwrap_spec(subok=bool, copy=bool, ownmaskna=bool) +def array(space, w_item_or_iterable, w_dtype=None, w_order=None, + subok=True, copy=True, w_maskna=None, ownmaskna=False): # find scalar + if w_maskna is None: + w_maskna = space.w_None + if (not subok or not space.is_w(w_maskna, space.w_None) or + ownmaskna): + raise OperationError(space.w_NotImplementedError, space.wrap("Unsupported args")) if not space.issequence_w(w_item_or_iterable): - if space.is_w(w_dtype, space.w_None): + if w_dtype is None or space.is_w(w_dtype, space.w_None): w_dtype = interp_ufuncs.find_dtype_for_scalar(space, w_item_or_iterable) dtype = space.interp_w(interp_dtype.W_Dtype, space.call_function(space.gettypefor(interp_dtype.W_Dtype), w_dtype) ) return scalar_w(space, dtype, w_item_or_iterable) - if w_order is None: + if space.is_w(w_order, space.w_None) or w_order is None: order = 'C' else: order = space.str_w(w_order) if order != 'C': # or order != 'F': raise operationerrfmt(space.w_ValueError, "Unknown order: %s", order) - shape, elems_w = _find_shape_and_elems(space, w_item_or_iterable) + if isinstance(w_item_or_iterable, BaseArray): + if (not space.is_w(w_dtype, space.w_None) and + w_item_or_iterable.find_dtype() is not w_dtype): + raise OperationError(space.w_NotImplementedError, space.wrap( + "copying over different dtypes unsupported")) + if copy: + return w_item_or_iterable.copy(space) + return w_item_or_iterable + shape, elems_w = find_shape_and_elems(space, w_item_or_iterable) # they come back in C order size = len(elems_w) - if space.is_w(w_dtype, space.w_None): + if w_dtype is None or space.is_w(w_dtype, space.w_None): w_dtype = None for w_elem in elems_w: w_dtype = interp_ufuncs.find_dtype_for_scalar(space, w_elem, @@ -1127,6 +1141,7 @@ arr = W_NDimArray(size, shape[:], dtype=dtype, order=order) shapelen = len(shape) arr_iter = ArrayIterator(arr.size) + # XXX we might want to have a jitdriver here for i in range(len(elems_w)): w_elem = elems_w[i] dtype.setitem(arr.storage, arr_iter.offset, @@ -1139,6 +1154,8 @@ space.call_function(space.gettypefor(interp_dtype.W_Dtype), w_dtype) ) size, shape = _find_size_and_shape(space, w_size) + if not shape: + return scalar_w(space, dtype, space.wrap(0)) return space.wrap(W_NDimArray(size, shape[:], dtype=dtype)) def ones(space, w_size, w_dtype=None): @@ -1147,17 +1164,66 @@ ) size, shape = _find_size_and_shape(space, w_size) + if not shape: + return scalar_w(space, dtype, space.wrap(1)) arr = W_NDimArray(size, shape[:], dtype=dtype) one = dtype.box(1) arr.dtype.fill(arr.storage, one, 0, size) return space.wrap(arr) + at unwrap_spec(arr=BaseArray, skipna=bool, keepdims=bool) +def count_reduce_items(space, arr, w_axis=None, skipna=False, keepdims=True): + if not keepdims: + raise OperationError(space.w_NotImplementedError, space.wrap("unsupported")) + if space.is_w(w_axis, space.w_None): + return space.wrap(support.product(arr.shape)) + if space.isinstance_w(w_axis, space.w_int): + return space.wrap(arr.shape[space.int_w(w_axis)]) + s = 1 + elems = space.fixedview(w_axis) + for w_elem in elems: + s *= arr.shape[space.int_w(w_elem)] + return space.wrap(s) + def dot(space, w_obj, w_obj2): w_arr = convert_to_array(space, w_obj) if isinstance(w_arr, Scalar): return convert_to_array(space, 
w_obj2).descr_dot(space, w_arr) return w_arr.descr_dot(space, w_obj2) + at unwrap_spec(axis=int) +def concatenate(space, w_args, axis=0): + args_w = space.listview(w_args) + if len(args_w) == 0: + raise OperationError(space.w_ValueError, space.wrap("concatenation of zero-length sequences is impossible")) + args_w = [convert_to_array(space, w_arg) for w_arg in args_w] + dtype = args_w[0].find_dtype() + shape = args_w[0].shape[:] + if len(shape) <= axis: + raise OperationError(space.w_ValueError, + space.wrap("bad axis argument")) + for arr in args_w[1:]: + dtype = interp_ufuncs.find_binop_result_dtype(space, dtype, + arr.find_dtype()) + if len(arr.shape) <= axis: + raise OperationError(space.w_ValueError, + space.wrap("bad axis argument")) + for i, axis_size in enumerate(arr.shape): + if len(arr.shape) != len(shape) or (i != axis and axis_size != shape[i]): + raise OperationError(space.w_ValueError, space.wrap( + "array dimensions must agree except for axis being concatenated")) + elif i == axis: + shape[i] += axis_size + res = W_NDimArray(support.product(shape), shape, dtype, 'C') + chunks = [Chunk(0, i, 1, i) for i in shape] + axis_start = 0 + for arr in args_w: + chunks[axis] = Chunk(axis_start, axis_start + arr.shape[axis], 1, + arr.shape[axis]) + res.create_slice(chunks).setslice(space, arr) + axis_start += arr.shape[axis] + return res + BaseArray.typedef = TypeDef( 'ndarray', __module__ = "numpypy", @@ -1193,6 +1259,10 @@ __gt__ = interp2app(BaseArray.descr_gt), __ge__ = interp2app(BaseArray.descr_ge), + __and__ = interp2app(BaseArray.descr_and), + __or__ = interp2app(BaseArray.descr_or), + __invert__ = interp2app(BaseArray.descr_invert), + __repr__ = interp2app(BaseArray.descr_repr), __str__ = interp2app(BaseArray.descr_str), __array_interface__ = GetSetProperty(BaseArray.descr_array_iface), @@ -1202,9 +1272,11 @@ BaseArray.descr_set_shape), size = GetSetProperty(BaseArray.descr_get_size), ndim = GetSetProperty(BaseArray.descr_get_ndim), + item = interp2app(BaseArray.descr_item), T = GetSetProperty(BaseArray.descr_get_transpose), flat = GetSetProperty(BaseArray.descr_get_flatiter), + ravel = interp2app(BaseArray.descr_ravel), mean = interp2app(BaseArray.descr_mean), sum = interp2app(BaseArray.descr_sum), @@ -1219,9 +1291,14 @@ var = interp2app(BaseArray.descr_var), std = interp2app(BaseArray.descr_std), + fill = interp2app(BaseArray.descr_fill), + copy = interp2app(BaseArray.descr_copy), + flatten = interp2app(BaseArray.descr_flatten), reshape = interp2app(BaseArray.descr_reshape), tolist = interp2app(BaseArray.descr_tolist), + take = interp2app(BaseArray.descr_take), + compress = interp2app(BaseArray.descr_compress), ) @@ -1230,30 +1307,129 @@ @jit.unroll_safe def __init__(self, arr): arr = arr.get_concrete() - size = 1 - for sh in arr.shape: - size *= sh self.strides = [arr.strides[-1]] self.backstrides = [arr.backstrides[-1]] - ViewArray.__init__(self, size, [size], arr.dtype, arr.order, - arr) self.shapelen = len(arr.shape) - self.iter = OneDimIterator(arr.start, self.strides[0], - self.shape[0]) + sig = arr.find_sig() + self.iter = sig.create_frame(arr).get_final_iter() + self.base = arr + self.index = 0 + ViewArray.__init__(self, arr.size, [arr.size], arr.dtype, arr.order, + arr) def descr_next(self, space): if self.iter.done(): raise OperationError(space.w_StopIteration, space.w_None) - result = self.getitem(self.iter.offset) + result = self.base.getitem(self.iter.offset) self.iter = self.iter.next(self.shapelen) + self.index += 1 return result def descr_iter(self): return 
self + def descr_index(self, space): + return space.wrap(self.index) + + def descr_coords(self, space): + coords, step, lngth = to_coords(space, self.base.shape, + self.base.size, self.base.order, + space.wrap(self.index)) + return space.newtuple([space.wrap(c) for c in coords]) + + @jit.unroll_safe + def descr_getitem(self, space, w_idx): + if not (space.isinstance_w(w_idx, space.w_int) or + space.isinstance_w(w_idx, space.w_slice)): + raise OperationError(space.w_IndexError, + space.wrap('unsupported iterator index')) + base = self.base + start, stop, step, lngth = space.decode_index4(w_idx, base.size) + # setslice would have been better, but flat[u:v] for arbitrary + # shapes of array a cannot be represented as a[x1:x2, y1:y2] + basei = ViewIterator(base.start, base.strides, + base.backstrides,base.shape) + shapelen = len(base.shape) + basei = basei.next_skip_x(shapelen, start) + if lngth <2: + return base.getitem(basei.offset) + ri = ArrayIterator(lngth) + res = W_NDimArray(lngth, [lngth], base.dtype, + base.order) + while not ri.done(): + flat_get_driver.jit_merge_point(shapelen=shapelen, + base=base, + basei=basei, + step=step, + res=res, + ri=ri, + ) + w_val = base.getitem(basei.offset) + res.setitem(ri.offset,w_val) + basei = basei.next_skip_x(shapelen, step) + ri = ri.next(shapelen) + return res + + def descr_setitem(self, space, w_idx, w_value): + if not (space.isinstance_w(w_idx, space.w_int) or + space.isinstance_w(w_idx, space.w_slice)): + raise OperationError(space.w_IndexError, + space.wrap('unsupported iterator index')) + base = self.base + start, stop, step, lngth = space.decode_index4(w_idx, base.size) + arr = convert_to_array(space, w_value) + ai = 0 + basei = ViewIterator(base.start, base.strides, + base.backstrides,base.shape) + shapelen = len(base.shape) + basei = basei.next_skip_x(shapelen, start) + while lngth > 0: + flat_set_driver.jit_merge_point(shapelen=shapelen, + basei=basei, + base=base, + step=step, + arr=arr, + ai=ai, + lngth=lngth, + ) + v = arr.getitem(ai).convert_to(base.dtype) + base.setitem(basei.offset, v) + # need to repeat input values until all assignments are done + ai = (ai + 1) % arr.size + basei = basei.next_skip_x(shapelen, step) + lngth -= 1 + + def create_sig(self): + return signature.FlatSignature(self.base.dtype) + + def descr_base(self, space): + return space.wrap(self.base) + W_FlatIterator.typedef = TypeDef( 'flatiter', + #__array__ = #MISSING + __iter__ = interp2app(W_FlatIterator.descr_iter), + __getitem__ = interp2app(W_FlatIterator.descr_getitem), + __setitem__ = interp2app(W_FlatIterator.descr_setitem), + __eq__ = interp2app(BaseArray.descr_eq), + __ne__ = interp2app(BaseArray.descr_ne), + __lt__ = interp2app(BaseArray.descr_lt), + __le__ = interp2app(BaseArray.descr_le), + __gt__ = interp2app(BaseArray.descr_gt), + __ge__ = interp2app(BaseArray.descr_ge), + #__sizeof__ #MISSING + base = GetSetProperty(W_FlatIterator.descr_base), + index = GetSetProperty(W_FlatIterator.descr_index), + coords = GetSetProperty(W_FlatIterator.descr_coords), next = interp2app(W_FlatIterator.descr_next), - __iter__ = interp2app(W_FlatIterator.descr_iter), + ) W_FlatIterator.acceptable_as_base_class = False + +def isna(space, w_obj): + if isinstance(w_obj, BaseArray): + arr = w_obj.empty_copy(space, + interp_dtype.get_dtype_cache(space).w_booldtype) + arr.fill(space, space.wrap(False)) + return arr + return space.wrap(False) diff --git a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py --- 
a/pypy/module/micronumpy/interp_ufuncs.py +++ b/pypy/module/micronumpy/interp_ufuncs.py @@ -1,31 +1,44 @@ from pypy.interpreter.baseobjspace import Wrappable from pypy.interpreter.error import OperationError, operationerrfmt -from pypy.interpreter.gateway import interp2app +from pypy.interpreter.gateway import interp2app, unwrap_spec, NoneNotWrapped from pypy.interpreter.typedef import TypeDef, GetSetProperty, interp_attrproperty -from pypy.module.micronumpy import interp_boxes, interp_dtype -from pypy.module.micronumpy.signature import ReduceSignature, ScalarSignature,\ - find_sig, new_printable_location +from pypy.module.micronumpy import interp_boxes, interp_dtype, support +from pypy.module.micronumpy.signature import (ReduceSignature, find_sig, + new_printable_location, AxisReduceSignature, ScalarSignature) from pypy.rlib import jit from pypy.rlib.rarithmetic import LONG_BIT from pypy.tool.sourcetools import func_with_new_name + reduce_driver = jit.JitDriver( - greens = ['shapelen', "sig"], - virtualizables = ["frame"], - reds = ["frame", "self", "dtype", "value", "obj"], + greens=['shapelen', "sig"], + virtualizables=["frame"], + reds=["frame", "self", "dtype", "value", "obj"], get_printable_location=new_printable_location('reduce'), + name='numpy_reduce', ) +axisreduce_driver = jit.JitDriver( + greens=['shapelen', 'sig'], + virtualizables=['frame'], + reds=['self','arr', 'identity', 'frame'], + name='numpy_axisreduce', + get_printable_location=new_printable_location('axisreduce'), +) + + class W_Ufunc(Wrappable): _attrs_ = ["name", "promote_to_float", "promote_bools", "identity"] _immutable_fields_ = ["promote_to_float", "promote_bools", "name"] - def __init__(self, name, promote_to_float, promote_bools, identity): + def __init__(self, name, promote_to_float, promote_bools, identity, + int_only): self.name = name self.promote_to_float = promote_to_float self.promote_bools = promote_bools self.identity = identity + self.int_only = int_only def descr_repr(self, space): return space.wrap("" % self.name) @@ -36,30 +49,105 @@ return self.identity def descr_call(self, space, __args__): - if __args__.keywords or len(__args__.arguments_w) < self.argcount: + args_w, kwds_w = __args__.unpack() + # it occurs to me that we don't support any datatypes that + # require casting, change it later when we do + kwds_w.pop('casting', None) + w_subok = kwds_w.pop('subok', None) + w_out = kwds_w.pop('out', space.w_None) + if ((w_subok is not None and space.is_true(w_subok)) or + not space.is_w(w_out, space.w_None)): + raise OperationError(space.w_NotImplementedError, + space.wrap("parameters unsupported")) + if kwds_w or len(args_w) < self.argcount: raise OperationError(space.w_ValueError, space.wrap("invalid number of arguments") ) - elif len(__args__.arguments_w) > self.argcount: + elif len(args_w) > self.argcount: # The extra arguments should actually be the output array, but we # don't support that yet. raise OperationError(space.w_TypeError, space.wrap("invalid number of arguments") ) - return self.call(space, __args__.arguments_w) + return self.call(space, args_w) - def descr_reduce(self, space, w_obj): - return self.reduce(space, w_obj, multidim=False) + @unwrap_spec(skipna=bool, keepdims=bool) + def descr_reduce(self, space, w_obj, w_axis=NoneNotWrapped, w_dtype=None, + skipna=False, keepdims=False, w_out=None): + """reduce(...) 
+ reduce(a, axis=0) - def reduce(self, space, w_obj, multidim): - from pypy.module.micronumpy.interp_numarray import convert_to_array, Scalar - + Reduces `a`'s dimension by one, by applying ufunc along one axis. + + Let :math:`a.shape = (N_0, ..., N_i, ..., N_{M-1})`. Then + :math:`ufunc.reduce(a, axis=i)[k_0, ..,k_{i-1}, k_{i+1}, .., k_{M-1}]` = + the result of iterating `j` over :math:`range(N_i)`, cumulatively applying + ufunc to each :math:`a[k_0, ..,k_{i-1}, j, k_{i+1}, .., k_{M-1}]`. + For a one-dimensional array, reduce produces results equivalent to: + :: + + r = op.identity # op = ufunc + for i in xrange(len(A)): + r = op(r, A[i]) + return r + + For example, add.reduce() is equivalent to sum(). + + Parameters + ---------- + a : array_like + The array to act on. + axis : int, optional + The axis along which to apply the reduction. + + Examples + -------- + >>> np.multiply.reduce([2,3,5]) + 30 + + A multi-dimensional array example: + + >>> X = np.arange(8).reshape((2,2,2)) + >>> X + array([[[0, 1], + [2, 3]], + [[4, 5], + [6, 7]]]) + >>> np.add.reduce(X, 0) + array([[ 4, 6], + [ 8, 10]]) + >>> np.add.reduce(X) # confirm: default axis value is 0 + array([[ 4, 6], + [ 8, 10]]) + >>> np.add.reduce(X, 1) + array([[ 2, 4], + [10, 12]]) + >>> np.add.reduce(X, 2) + array([[ 1, 5], + [ 9, 13]]) + """ + if not space.is_w(w_out, space.w_None): + raise OperationError(space.w_NotImplementedError, space.wrap( + "out not supported")) + if w_axis is None: + axis = 0 + elif space.is_w(w_axis, space.w_None): + axis = -1 + else: + axis = space.int_w(w_axis) + return self.reduce(space, w_obj, False, False, axis, keepdims) + + def reduce(self, space, w_obj, multidim, promote_to_largest, dim, + keepdims=False): + from pypy.module.micronumpy.interp_numarray import convert_to_array, \ + Scalar if self.argcount != 2: raise OperationError(space.w_ValueError, space.wrap("reduce only " "supported for binary functions")) - assert isinstance(self, W_Ufunc2) obj = convert_to_array(space, w_obj) + if dim >= len(obj.shape): + raise OperationError(space.w_ValueError, space.wrap("axis(=%d) out of bounds" % dim)) if isinstance(obj, Scalar): raise OperationError(space.w_TypeError, space.wrap("cannot reduce " "on a scalar")) @@ -67,26 +155,80 @@ size = obj.size dtype = find_unaryop_result_dtype( space, obj.find_dtype(), - promote_to_largest=True + promote_to_float=self.promote_to_float, + promote_to_largest=promote_to_largest, + promote_bools=True ) shapelen = len(obj.shape) + if self.identity is None and size == 0: + raise operationerrfmt(space.w_ValueError, "zero-size array to " + "%s.reduce without identity", self.name) + if shapelen > 1 and dim >= 0: + res = self.do_axis_reduce(obj, dtype, dim, keepdims) + return space.wrap(res) + scalarsig = ScalarSignature(dtype) sig = find_sig(ReduceSignature(self.func, self.name, dtype, - ScalarSignature(dtype), - obj.create_sig(obj.shape)), obj) + scalarsig, + obj.create_sig()), obj) frame = sig.create_frame(obj) - if shapelen > 1 and not multidim: - raise OperationError(space.w_NotImplementedError, - space.wrap("not implemented yet")) if self.identity is None: - if size == 0: - raise operationerrfmt(space.w_ValueError, "zero-size array to " - "%s.reduce without identity", self.name) value = sig.eval(frame, obj).convert_to(dtype) frame.next(shapelen) else: value = self.identity.convert_to(dtype) return self.reduce_loop(shapelen, sig, frame, value, obj, dtype) + def do_axis_reduce(self, obj, dtype, dim, keepdims): + from pypy.module.micronumpy.interp_numarray import 
AxisReduce,\ + W_NDimArray + + if keepdims: + shape = obj.shape[:dim] + [1] + obj.shape[dim + 1:] + else: + shape = obj.shape[:dim] + obj.shape[dim + 1:] + result = W_NDimArray(support.product(shape), shape, dtype) + rightsig = obj.create_sig() + # note - this is just a wrapper so signature can fetch + # both left and right, nothing more, especially + # this is not a true virtual array, because shapes + # don't quite match + arr = AxisReduce(self.func, self.name, obj.shape, dtype, + result, obj, dim) + scalarsig = ScalarSignature(dtype) + sig = find_sig(AxisReduceSignature(self.func, self.name, dtype, + scalarsig, rightsig), arr) + assert isinstance(sig, AxisReduceSignature) + frame = sig.create_frame(arr) + shapelen = len(obj.shape) + if self.identity is not None: + identity = self.identity.convert_to(dtype) + else: + identity = None + self.reduce_axis_loop(frame, sig, shapelen, arr, identity) + return result + + def reduce_axis_loop(self, frame, sig, shapelen, arr, identity): + # note - we can be advanterous here, depending on the exact field + # layout. For now let's say we iterate the original way and + # simply follow the original iteration order + while not frame.done(): + axisreduce_driver.jit_merge_point(frame=frame, self=self, + sig=sig, + identity=identity, + shapelen=shapelen, arr=arr) + iter = frame.get_final_iter() + v = sig.eval(frame, arr).convert_to(sig.calc_dtype) + if iter.first_line: + if identity is not None: + value = self.func(sig.calc_dtype, identity, v) + else: + value = v + else: + cur = arr.left.getitem(iter.offset) + value = self.func(sig.calc_dtype, cur, v) + arr.left.setitem(iter.offset, value) + frame.next(shapelen) + def reduce_loop(self, shapelen, sig, frame, value, obj, dtype): while not frame.done(): reduce_driver.jit_merge_point(sig=sig, @@ -94,20 +236,24 @@ value=value, obj=obj, frame=frame, dtype=dtype) assert isinstance(sig, ReduceSignature) - value = sig.binfunc(dtype, value, sig.eval(frame, obj).convert_to(dtype)) + value = sig.binfunc(dtype, value, + sig.eval(frame, obj).convert_to(dtype)) frame.next(shapelen) return value + class W_Ufunc1(W_Ufunc): argcount = 1 _immutable_fields_ = ["func", "name"] def __init__(self, func, name, promote_to_float=False, promote_bools=False, - identity=None): + identity=None, bool_result=False, int_only=False): - W_Ufunc.__init__(self, name, promote_to_float, promote_bools, identity) + W_Ufunc.__init__(self, name, promote_to_float, promote_bools, identity, + int_only) self.func = func + self.bool_result = bool_result def call(self, space, args_w): from pypy.module.micronumpy.interp_numarray import (Call1, @@ -115,27 +261,32 @@ [w_obj] = args_w w_obj = convert_to_array(space, w_obj) - res_dtype = find_unaryop_result_dtype(space, - w_obj.find_dtype(), - promote_to_float=self.promote_to_float, - promote_bools=self.promote_bools, - ) + calc_dtype = find_unaryop_result_dtype(space, + w_obj.find_dtype(), + promote_to_float=self.promote_to_float, + promote_bools=self.promote_bools) + if self.bool_result: + res_dtype = interp_dtype.get_dtype_cache(space).w_booldtype + else: + res_dtype = calc_dtype if isinstance(w_obj, Scalar): - return self.func(res_dtype, w_obj.value.convert_to(res_dtype)) + return space.wrap(self.func(calc_dtype, w_obj.value.convert_to(calc_dtype))) - w_res = Call1(self.func, self.name, w_obj.shape, res_dtype, w_obj) + w_res = Call1(self.func, self.name, w_obj.shape, calc_dtype, res_dtype, + w_obj) w_obj.add_invalidates(w_res) return w_res class W_Ufunc2(W_Ufunc): - _immutable_fields_ = 
["comparison_func", "func", "name"] + _immutable_fields_ = ["comparison_func", "func", "name", "int_only"] argcount = 2 def __init__(self, func, name, promote_to_float=False, promote_bools=False, - identity=None, comparison_func=False): + identity=None, comparison_func=False, int_only=False): - W_Ufunc.__init__(self, name, promote_to_float, promote_bools, identity) + W_Ufunc.__init__(self, name, promote_to_float, promote_bools, identity, + int_only) self.func = func self.comparison_func = comparison_func @@ -148,6 +299,7 @@ w_rhs = convert_to_array(space, w_rhs) calc_dtype = find_binop_result_dtype(space, w_lhs.find_dtype(), w_rhs.find_dtype(), + int_only=self.int_only, promote_to_float=self.promote_to_float, promote_bools=self.promote_bools, ) @@ -156,10 +308,10 @@ else: res_dtype = calc_dtype if isinstance(w_lhs, Scalar) and isinstance(w_rhs, Scalar): - return self.func(calc_dtype, + return space.wrap(self.func(calc_dtype, w_lhs.value.convert_to(calc_dtype), w_rhs.value.convert_to(calc_dtype) - ) + )) new_shape = shape_agreement(space, w_lhs.shape, w_rhs.shape) w_res = Call2(self.func, self.name, @@ -182,11 +334,14 @@ reduce = interp2app(W_Ufunc.descr_reduce), ) + def find_binop_result_dtype(space, dt1, dt2, promote_to_float=False, - promote_bools=False): + promote_bools=False, int_only=False): # dt1.num should be <= dt2.num if dt1.num > dt2.num: dt1, dt2 = dt2, dt1 + if int_only and (not dt1.is_int_type() or not dt2.is_int_type()): + raise OperationError(space.w_TypeError, space.wrap("Unsupported types")) # Some operations promote op(bool, bool) to return int8, rather than bool if promote_bools and (dt1.kind == dt2.kind == interp_dtype.BOOLLTR): return interp_dtype.get_dtype_cache(space).w_int8dtype @@ -230,6 +385,7 @@ dtypenum += 3 return interp_dtype.get_dtype_cache(space).builtin_dtypes[dtypenum] + def find_unaryop_result_dtype(space, dt, promote_to_float=False, promote_bools=False, promote_to_largest=False): if promote_bools and (dt.kind == interp_dtype.BOOLLTR): @@ -254,6 +410,7 @@ assert False return dt + def find_dtype_for_scalar(space, w_obj, current_guess=None): bool_dtype = interp_dtype.get_dtype_cache(space).w_booldtype long_dtype = interp_dtype.get_dtype_cache(space).w_longdtype @@ -282,12 +439,16 @@ return interp_dtype.get_dtype_cache(space).w_float64dtype -def ufunc_dtype_caller(space, ufunc_name, op_name, argcount, comparison_func): +def ufunc_dtype_caller(space, ufunc_name, op_name, argcount, comparison_func, + bool_result): + dtype_cache = interp_dtype.get_dtype_cache(space) if argcount == 1: def impl(res_dtype, value): - return getattr(res_dtype.itemtype, op_name)(value) + res = getattr(res_dtype.itemtype, op_name)(value) + if bool_result: + return dtype_cache.w_booldtype.box(res) + return res elif argcount == 2: - dtype_cache = interp_dtype.get_dtype_cache(space) def impl(res_dtype, lvalue, rvalue): res = getattr(res_dtype.itemtype, op_name)(lvalue, rvalue) if comparison_func: @@ -302,7 +463,13 @@ ("add", "add", 2, {"identity": 0}), ("subtract", "sub", 2), ("multiply", "mul", 2, {"identity": 1}), + ("bitwise_and", "bitwise_and", 2, {"identity": 1, + 'int_only': True}), + ("bitwise_or", "bitwise_or", 2, {"identity": 0, + 'int_only': True}), + ("invert", "invert", 1, {"int_only": True}), ("divide", "div", 2, {"promote_bools": True}), + ("true_divide", "div", 2, {"promote_to_float": True}), ("mod", "mod", 2, {"promote_bools": True}), ("power", "pow", 2, {"promote_bools": True}), @@ -312,6 +479,8 @@ ("less_equal", "le", 2, {"comparison_func": True}), ("greater", "gt", 2, 
{"comparison_func": True}), ("greater_equal", "ge", 2, {"comparison_func": True}), + ("isnan", "isnan", 1, {"bool_result": True}), + ("isinf", "isinf", 1, {"bool_result": True}), ("maximum", "max", 2), ("minimum", "min", 2), @@ -326,6 +495,7 @@ ("fabs", "fabs", 1, {"promote_to_float": True}), ("floor", "floor", 1, {"promote_to_float": True}), + ("ceil", "ceil", 1, {"promote_to_float": True}), ("exp", "exp", 1, {"promote_to_float": True}), ('sqrt', 'sqrt', 1, {'promote_to_float': True}), @@ -347,11 +517,13 @@ identity = extra_kwargs.get("identity") if identity is not None: - identity = interp_dtype.get_dtype_cache(space).w_longdtype.box(identity) + identity = \ + interp_dtype.get_dtype_cache(space).w_longdtype.box(identity) extra_kwargs["identity"] = identity func = ufunc_dtype_caller(space, ufunc_name, op_name, argcount, - comparison_func=extra_kwargs.get("comparison_func", False) + comparison_func=extra_kwargs.get("comparison_func", False), + bool_result=extra_kwargs.get("bool_result", False), ) if argcount == 1: ufunc = W_Ufunc1(func, ufunc_name, **extra_kwargs) @@ -361,3 +533,4 @@ def get(space): return space.fromcache(UfuncState) + diff --git a/pypy/module/micronumpy/signature.py b/pypy/module/micronumpy/signature.py --- a/pypy/module/micronumpy/signature.py +++ b/pypy/module/micronumpy/signature.py @@ -1,10 +1,32 @@ from pypy.rlib.objectmodel import r_dict, compute_identity_hash, compute_hash from pypy.rlib.rarithmetic import intmask from pypy.module.micronumpy.interp_iter import ViewIterator, ArrayIterator, \ - OneDimIterator, ConstantIterator -from pypy.module.micronumpy.strides import calculate_slice_strides + ConstantIterator, AxisIterator, ViewTransform,\ + BroadcastTransform from pypy.rlib.jit import hint, unroll_safe, promote +""" Signature specifies both the numpy expression that has been constructed +and the assembler to be compiled. This is a very important observation - +Two expressions will be using the same assembler if and only if they are +compiled to the same signature. + +This is also a very convinient tool for specializations. For example +a + a and a + b (where a != b) will compile to different assembler because +we specialize on the same array access. + +When evaluating, signatures will create iterators per signature node, +potentially sharing some of them. Iterators depend also on the actual +expression, they're not only dependant on the array itself. For example +a + b where a is dim 2 and b is dim 1 would create a broadcasted iterator for +the array b. + +Such iterator changes are called Transformations. An actual iterator would +be a combination of array and various transformation, like view, broadcast, +dimension swapping etc. 
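The caching-by-signature idea described here can be pictured with a small standalone sketch (purely illustrative; the class and function names below are invented and are not part of micronumpy):

    # A toy "signature" for an expression tree: identical array objects map
    # to the same slot, so a + a and a + b summarize to different keys and
    # would therefore reuse different compiled loops.
    class Add(object):
        def __init__(self, left, right):
            self.left = left
            self.right = right

    def signature(expr, slots=None):
        if slots is None:
            slots = {}
        if isinstance(expr, Add):
            return ('add', signature(expr.left, slots),
                           signature(expr.right, slots))
        return ('array', slots.setdefault(id(expr), len(slots)))

    loop_cache = {}
    def compiled_loop_for(expr):
        return loop_cache.setdefault(signature(expr), object())

    a, b = object(), object()
    assert compiled_loop_for(Add(a, a)) is compiled_loop_for(Add(b, b))
    assert signature(Add(a, b)) != signature(Add(a, a))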
+ +See interp_iter for transformations +""" + def new_printable_location(driver_name): def get_printable_location(shapelen, sig): return 'numpy ' + sig.debug_repr() + ' [%d dims,%s]' % (shapelen, driver_name) @@ -33,7 +55,8 @@ return sig class NumpyEvalFrame(object): - _virtualizable2_ = ['iterators[*]', 'final_iter', 'arraylist[*]'] + _virtualizable2_ = ['iterators[*]', 'final_iter', 'arraylist[*]', + 'value', 'identity'] @unroll_safe def __init__(self, iterators, arrays): @@ -51,7 +74,7 @@ def done(self): final_iter = promote(self.final_iter) if final_iter < 0: - return False + assert False return self.iterators[final_iter].done() @unroll_safe @@ -59,6 +82,22 @@ for i in range(len(self.iterators)): self.iterators[i] = self.iterators[i].next(shapelen) + @unroll_safe + def next_from_second(self, shapelen): + """ Don't increase the first iterator + """ + for i in range(1, len(self.iterators)): + self.iterators[i] = self.iterators[i].next(shapelen) + + def next_first(self, shapelen): + self.iterators[0] = self.iterators[0].next(shapelen) + + def get_final_iter(self): + final_iter = promote(self.final_iter) + if final_iter < 0: + assert False + return self.iterators[final_iter] + def _add_ptr_to_cache(ptr, cache): i = 0 for p in cache: @@ -70,6 +109,9 @@ cache.append(ptr) return res +def new_cache(): + return r_dict(sigeq_no_numbering, sighash) + class Signature(object): _attrs_ = ['iter_no', 'array_no'] _immutable_fields_ = ['iter_no', 'array_no'] @@ -78,7 +120,7 @@ iter_no = 0 def invent_numbering(self): - cache = r_dict(sigeq_no_numbering, sighash) + cache = new_cache() allnumbers = [] self._invent_numbering(cache, allnumbers) @@ -95,13 +137,13 @@ allnumbers.append(no) self.iter_no = no - def create_frame(self, arr, res_shape=None): - res_shape = res_shape or arr.shape + def create_frame(self, arr): iterlist = [] arraylist = [] - self._create_iter(iterlist, arraylist, arr, res_shape, []) + self._create_iter(iterlist, arraylist, arr, []) return NumpyEvalFrame(iterlist, arraylist) + class ConcreteSignature(Signature): _immutable_fields_ = ['dtype'] @@ -120,16 +162,6 @@ def hash(self): return compute_identity_hash(self.dtype) - def allocate_view_iter(self, arr, res_shape, chunklist): - r = arr.shape, arr.start, arr.strides, arr.backstrides - if chunklist: - for chunkelem in chunklist: - r = calculate_slice_strides(r[0], r[1], r[2], r[3], chunkelem) - shape, start, strides, backstrides = r - if len(res_shape) == 1: - return OneDimIterator(start, strides[0], res_shape[0]) - return ViewIterator(start, strides, backstrides, shape, res_shape) - class ArraySignature(ConcreteSignature): def debug_repr(self): return 'Array' @@ -137,23 +169,25 @@ def _invent_array_numbering(self, arr, cache): from pypy.module.micronumpy.interp_numarray import ConcreteArray concr = arr.get_concrete() + # this get_concrete never forces assembler. 
If we're here and array + # is not of a concrete class it means that we have a _forced_result, + # otherwise the signature would not match assert isinstance(concr, ConcreteArray) + assert concr.dtype is self.dtype self.array_no = _add_ptr_to_cache(concr.storage, cache) - def _create_iter(self, iterlist, arraylist, arr, res_shape, chunklist): + def _create_iter(self, iterlist, arraylist, arr, transforms): from pypy.module.micronumpy.interp_numarray import ConcreteArray concr = arr.get_concrete() assert isinstance(concr, ConcreteArray) storage = concr.storage if self.iter_no >= len(iterlist): - iterlist.append(self.allocate_iter(concr, res_shape, chunklist)) + iterlist.append(self.allocate_iter(concr, transforms)) if self.array_no >= len(arraylist): arraylist.append(storage) - def allocate_iter(self, arr, res_shape, chunklist): - if chunklist: - return self.allocate_view_iter(arr, res_shape, chunklist) - return ArrayIterator(arr.size) + def allocate_iter(self, arr, transforms): + return ArrayIterator(arr.size).apply_transformations(arr, transforms) def eval(self, frame, arr): iter = frame.iterators[self.iter_no] @@ -166,7 +200,7 @@ def _invent_array_numbering(self, arr, cache): pass - def _create_iter(self, iterlist, arraylist, arr, res_shape, chunklist): + def _create_iter(self, iterlist, arraylist, arr, transforms): if self.iter_no >= len(iterlist): iter = ConstantIterator() iterlist.append(iter) @@ -186,8 +220,21 @@ allnumbers.append(no) self.iter_no = no - def allocate_iter(self, arr, res_shape, chunklist): - return self.allocate_view_iter(arr, res_shape, chunklist) + def allocate_iter(self, arr, transforms): + return ViewIterator(arr.start, arr.strides, arr.backstrides, + arr.shape).apply_transformations(arr, transforms) + +class FlatSignature(ViewSignature): + def debug_repr(self): + return 'Flat' + + def allocate_iter(self, arr, transforms): + from pypy.module.micronumpy.interp_numarray import W_FlatIterator + assert isinstance(arr, W_FlatIterator) + return ViewIterator(arr.base.start, arr.base.strides, + arr.base.backstrides, + arr.base.shape).apply_transformations(arr.base, + transforms) class VirtualSliceSignature(Signature): def __init__(self, child): @@ -198,6 +245,9 @@ assert isinstance(arr, VirtualSlice) self.child._invent_array_numbering(arr.child, cache) + def _invent_numbering(self, cache, allnumbers): + self.child._invent_numbering(new_cache(), allnumbers) + def hash(self): return intmask(self.child.hash() ^ 1234) @@ -207,12 +257,11 @@ assert isinstance(other, VirtualSliceSignature) return self.child.eq(other.child, compare_array_no) - def _create_iter(self, iterlist, arraylist, arr, res_shape, chunklist): + def _create_iter(self, iterlist, arraylist, arr, transforms): from pypy.module.micronumpy.interp_numarray import VirtualSlice assert isinstance(arr, VirtualSlice) - chunklist.append(arr.chunks) - self.child._create_iter(iterlist, arraylist, arr.child, res_shape, - chunklist) + transforms = transforms + [ViewTransform(arr.chunks)] + self.child._create_iter(iterlist, arraylist, arr.child, transforms) def eval(self, frame, arr): from pypy.module.micronumpy.interp_numarray import VirtualSlice @@ -248,17 +297,16 @@ assert isinstance(arr, Call1) self.child._invent_array_numbering(arr.values, cache) - def _create_iter(self, iterlist, arraylist, arr, res_shape, chunklist): + def _create_iter(self, iterlist, arraylist, arr, transforms): from pypy.module.micronumpy.interp_numarray import Call1 assert isinstance(arr, Call1) - self.child._create_iter(iterlist, arraylist, 
arr.values, res_shape, - chunklist) + self.child._create_iter(iterlist, arraylist, arr.values, transforms) def eval(self, frame, arr): from pypy.module.micronumpy.interp_numarray import Call1 assert isinstance(arr, Call1) - v = self.child.eval(frame, arr.values).convert_to(arr.res_dtype) - return self.unfunc(arr.res_dtype, v) + v = self.child.eval(frame, arr.values).convert_to(arr.calc_dtype) + return self.unfunc(arr.calc_dtype, v) class Call2(Signature): _immutable_fields_ = ['binfunc', 'name', 'calc_dtype', 'left', 'right'] @@ -293,29 +341,68 @@ self.left._invent_numbering(cache, allnumbers) self.right._invent_numbering(cache, allnumbers) - def _create_iter(self, iterlist, arraylist, arr, res_shape, chunklist): + def _create_iter(self, iterlist, arraylist, arr, transforms): from pypy.module.micronumpy.interp_numarray import Call2 assert isinstance(arr, Call2) - self.left._create_iter(iterlist, arraylist, arr.left, res_shape, - chunklist) - self.right._create_iter(iterlist, arraylist, arr.right, res_shape, - chunklist) + self.left._create_iter(iterlist, arraylist, arr.left, transforms) + self.right._create_iter(iterlist, arraylist, arr.right, transforms) def eval(self, frame, arr): from pypy.module.micronumpy.interp_numarray import Call2 assert isinstance(arr, Call2) lhs = self.left.eval(frame, arr.left).convert_to(self.calc_dtype) rhs = self.right.eval(frame, arr.right).convert_to(self.calc_dtype) + return self.binfunc(self.calc_dtype, lhs, rhs) def debug_repr(self): return 'Call2(%s, %s, %s)' % (self.name, self.left.debug_repr(), self.right.debug_repr()) +class BroadcastLeft(Call2): + def _invent_numbering(self, cache, allnumbers): + self.left._invent_numbering(new_cache(), allnumbers) + self.right._invent_numbering(cache, allnumbers) + + def _create_iter(self, iterlist, arraylist, arr, transforms): + from pypy.module.micronumpy.interp_numarray import Call2 + + assert isinstance(arr, Call2) + ltransforms = transforms + [BroadcastTransform(arr.shape)] + self.left._create_iter(iterlist, arraylist, arr.left, ltransforms) + self.right._create_iter(iterlist, arraylist, arr.right, transforms) + +class BroadcastRight(Call2): + def _invent_numbering(self, cache, allnumbers): + self.left._invent_numbering(cache, allnumbers) + self.right._invent_numbering(new_cache(), allnumbers) + + def _create_iter(self, iterlist, arraylist, arr, transforms): + from pypy.module.micronumpy.interp_numarray import Call2 + + assert isinstance(arr, Call2) + rtransforms = transforms + [BroadcastTransform(arr.shape)] + self.left._create_iter(iterlist, arraylist, arr.left, transforms) + self.right._create_iter(iterlist, arraylist, arr.right, rtransforms) + +class BroadcastBoth(Call2): + def _invent_numbering(self, cache, allnumbers): + self.left._invent_numbering(new_cache(), allnumbers) + self.right._invent_numbering(new_cache(), allnumbers) + + def _create_iter(self, iterlist, arraylist, arr, transforms): + from pypy.module.micronumpy.interp_numarray import Call2 + + assert isinstance(arr, Call2) + rtransforms = transforms + [BroadcastTransform(arr.shape)] + ltransforms = transforms + [BroadcastTransform(arr.shape)] + self.left._create_iter(iterlist, arraylist, arr.left, ltransforms) + self.right._create_iter(iterlist, arraylist, arr.right, rtransforms) + class ReduceSignature(Call2): - def _create_iter(self, iterlist, arraylist, arr, res_shape, chunklist): - self.right._create_iter(iterlist, arraylist, arr, res_shape, chunklist) + def _create_iter(self, iterlist, arraylist, arr, transforms): + 
self.right._create_iter(iterlist, arraylist, arr, transforms) def _invent_numbering(self, cache, allnumbers): self.right._invent_numbering(cache, allnumbers) @@ -325,3 +412,63 @@ def eval(self, frame, arr): return self.right.eval(frame, arr) + + def debug_repr(self): + return 'ReduceSig(%s, %s)' % (self.name, self.right.debug_repr()) + +class SliceloopSignature(Call2): + def eval(self, frame, arr): + from pypy.module.micronumpy.interp_numarray import Call2 + + assert isinstance(arr, Call2) + ofs = frame.iterators[0].offset + arr.left.setitem(ofs, self.right.eval(frame, arr.right).convert_to( + self.calc_dtype)) + + def debug_repr(self): + return 'SliceLoop(%s, %s, %s)' % (self.name, self.left.debug_repr(), + self.right.debug_repr()) + +class SliceloopBroadcastSignature(SliceloopSignature): + def _invent_numbering(self, cache, allnumbers): + self.left._invent_numbering(new_cache(), allnumbers) + self.right._invent_numbering(cache, allnumbers) + + def _create_iter(self, iterlist, arraylist, arr, transforms): + from pypy.module.micronumpy.interp_numarray import SliceArray + + assert isinstance(arr, SliceArray) + rtransforms = transforms + [BroadcastTransform(arr.shape)] + self.left._create_iter(iterlist, arraylist, arr.left, transforms) + self.right._create_iter(iterlist, arraylist, arr.right, rtransforms) + +class AxisReduceSignature(Call2): + def _create_iter(self, iterlist, arraylist, arr, transforms): + from pypy.module.micronumpy.interp_numarray import AxisReduce,\ + ConcreteArray + + assert isinstance(arr, AxisReduce) + left = arr.left + assert isinstance(left, ConcreteArray) + iterlist.append(AxisIterator(left.start, arr.dim, arr.shape, + left.strides, left.backstrides)) + self.right._create_iter(iterlist, arraylist, arr.right, transforms) + + def _invent_numbering(self, cache, allnumbers): + allnumbers.append(0) + self.right._invent_numbering(cache, allnumbers) + + def _invent_array_numbering(self, arr, cache): + from pypy.module.micronumpy.interp_numarray import AxisReduce + + assert isinstance(arr, AxisReduce) + self.right._invent_array_numbering(arr.right, cache) + + def eval(self, frame, arr): + from pypy.module.micronumpy.interp_numarray import AxisReduce + + assert isinstance(arr, AxisReduce) + return self.right.eval(frame, arr.right).convert_to(self.calc_dtype) + + def debug_repr(self): + return 'AxisReduceSig(%s, %s)' % (self.name, self.right.debug_repr()) diff --git a/pypy/module/micronumpy/strides.py b/pypy/module/micronumpy/strides.py --- a/pypy/module/micronumpy/strides.py +++ b/pypy/module/micronumpy/strides.py @@ -1,5 +1,5 @@ from pypy.rlib import jit - +from pypy.interpreter.error import OperationError @jit.look_inside_iff(lambda shape, start, strides, backstrides, chunks: jit.isconstant(len(chunks)) @@ -10,12 +10,12 @@ rstart = start rshape = [] i = -1 - for i, (start_, stop, step, lgt) in enumerate(chunks): - if step != 0: - rstrides.append(strides[i] * step) - rbackstrides.append(strides[i] * (lgt - 1) * step) - rshape.append(lgt) - rstart += strides[i] * start_ + for i, chunk in enumerate(chunks): + if chunk.step != 0: + rstrides.append(strides[i] * chunk.step) + rbackstrides.append(strides[i] * (chunk.lgt - 1) * chunk.step) + rshape.append(chunk.lgt) + rstart += strides[i] * chunk.start # add a reminder s = i + 1 assert s >= 0 @@ -37,3 +37,196 @@ rstrides = [0] * (len(res_shape) - len(orig_shape)) + rstrides rbackstrides = [0] * (len(res_shape) - len(orig_shape)) + rbackstrides return rstrides, rbackstrides + +def find_shape_and_elems(space, w_iterable): + shape = 
[space.len_w(w_iterable)] + batch = space.listview(w_iterable) + while True: + new_batch = [] + if not batch: + return shape, [] + if not space.issequence_w(batch[0]): + for elem in batch: + if space.issequence_w(elem): + raise OperationError(space.w_ValueError, space.wrap( + "setting an array element with a sequence")) + return shape, batch + size = space.len_w(batch[0]) + for w_elem in batch: + if not space.issequence_w(w_elem) or space.len_w(w_elem) != size: + raise OperationError(space.w_ValueError, space.wrap( + "setting an array element with a sequence")) + new_batch += space.listview(w_elem) + shape.append(size) + batch = new_batch + +def to_coords(space, shape, size, order, w_item_or_slice): + '''Returns a start coord, step, and length. + ''' + start = lngth = step = 0 + if not (space.isinstance_w(w_item_or_slice, space.w_int) or + space.isinstance_w(w_item_or_slice, space.w_slice)): + raise OperationError(space.w_IndexError, + space.wrap('unsupported iterator index')) + + start, stop, step, lngth = space.decode_index4(w_item_or_slice, size) + + coords = [0] * len(shape) + i = start + if order == 'C': + for s in range(len(shape) -1, -1, -1): + coords[s] = i % shape[s] + i //= shape[s] + else: + for s in range(len(shape)): + coords[s] = i % shape[s] + i //= shape[s] + return coords, step, lngth + +def shape_agreement(space, shape1, shape2): + ret = _shape_agreement(shape1, shape2) + if len(ret) < max(len(shape1), len(shape2)): + raise OperationError(space.w_ValueError, + space.wrap("operands could not be broadcast together with shapes (%s) (%s)" % ( + ",".join([str(x) for x in shape1]), + ",".join([str(x) for x in shape2]), + )) + ) + return ret + +def _shape_agreement(shape1, shape2): + """ Checks agreement about two shapes with respect to broadcasting. Returns + the resulting shape. + """ + lshift = 0 + rshift = 0 + if len(shape1) > len(shape2): + m = len(shape1) + n = len(shape2) + rshift = len(shape2) - len(shape1) + remainder = shape1 + else: + m = len(shape2) + n = len(shape1) + lshift = len(shape1) - len(shape2) + remainder = shape2 + endshape = [0] * m + indices1 = [True] * m + indices2 = [True] * m + for i in range(m - 1, m - n - 1, -1): + left = shape1[i + lshift] + right = shape2[i + rshift] + if left == right: + endshape[i] = left + elif left == 1: + endshape[i] = right + indices1[i + lshift] = False + elif right == 1: + endshape[i] = left + indices2[i + rshift] = False + else: + return [] + #raise OperationError(space.w_ValueError, space.wrap( + # "frames are not aligned")) + for i in range(m - n): + endshape[i] = remainder[i] + return endshape + +def get_shape_from_iterable(space, old_size, w_iterable): + new_size = 0 + new_shape = [] + if space.isinstance_w(w_iterable, space.w_int): + new_size = space.int_w(w_iterable) + if new_size < 0: + new_size = old_size + new_shape = [new_size] + else: + neg_dim = -1 + batch = space.listview(w_iterable) + new_size = 1 + if len(batch) < 1: + if old_size == 1: + # Scalars can have an empty size. 
+ new_size = 1 + else: + new_size = 0 + new_shape = [] + i = 0 + for elem in batch: + s = space.int_w(elem) + if s < 0: + if neg_dim >= 0: + raise OperationError(space.w_ValueError, space.wrap( + "can only specify one unknown dimension")) + s = 1 + neg_dim = i + new_size *= s + new_shape.append(s) + i += 1 + if neg_dim >= 0: + new_shape[neg_dim] = old_size / new_size + new_size *= new_shape[neg_dim] + if new_size != old_size: + raise OperationError(space.w_ValueError, + space.wrap("total size of new array must be unchanged")) + return new_shape + +# Recalculating strides. Find the steps that the iteration does for each +# dimension, given the stride and shape. Then try to create a new stride that +# fits the new shape, using those steps. If there is a shape/step mismatch +# (meaning that the realignment of elements crosses from one step into another) +# return None so that the caller can raise an exception. +def calc_new_strides(new_shape, old_shape, old_strides, order): + # Return the proper strides for new_shape, or None if the mapping crosses + # stepping boundaries + + # Assumes that prod(old_shape) == prod(new_shape), len(old_shape) > 1, and + # len(new_shape) > 0 + steps = [] + last_step = 1 + oldI = 0 + new_strides = [] + if order == 'F': + for i in range(len(old_shape)): + steps.append(old_strides[i] / last_step) + last_step *= old_shape[i] + cur_step = steps[0] + n_new_elems_used = 1 + n_old_elems_to_use = old_shape[0] + for s in new_shape: + new_strides.append(cur_step * n_new_elems_used) + n_new_elems_used *= s + while n_new_elems_used > n_old_elems_to_use: + oldI += 1 + if steps[oldI] != steps[oldI - 1]: + return None + n_old_elems_to_use *= old_shape[oldI] + if n_new_elems_used == n_old_elems_to_use: + oldI += 1 + if oldI < len(old_shape): + cur_step = steps[oldI] + n_old_elems_to_use *= old_shape[oldI] + elif order == 'C': + for i in range(len(old_shape) - 1, -1, -1): + steps.insert(0, old_strides[i] / last_step) + last_step *= old_shape[i] + cur_step = steps[-1] + n_new_elems_used = 1 + oldI = -1 + n_old_elems_to_use = old_shape[-1] + for i in range(len(new_shape) - 1, -1, -1): + s = new_shape[i] + new_strides.insert(0, cur_step * n_new_elems_used) + n_new_elems_used *= s + while n_new_elems_used > n_old_elems_to_use: + oldI -= 1 + if steps[oldI] != steps[oldI + 1]: + return None + n_old_elems_to_use *= old_shape[oldI] + if n_new_elems_used == n_old_elems_to_use: + oldI -= 1 + if oldI >= -len(old_shape): + cur_step = steps[oldI] + n_old_elems_to_use *= old_shape[oldI] + assert len(new_strides) == len(new_shape) + return new_strides diff --git a/pypy/module/micronumpy/support.py b/pypy/module/micronumpy/support.py new file mode 100644 --- /dev/null +++ b/pypy/module/micronumpy/support.py @@ -0,0 +1,5 @@ +def product(s): + i = 1 + for x in s: + i *= x + return i \ No newline at end of file diff --git a/pypy/module/micronumpy/test/test_base.py b/pypy/module/micronumpy/test/test_base.py --- a/pypy/module/micronumpy/test/test_base.py +++ b/pypy/module/micronumpy/test/test_base.py @@ -3,9 +3,17 @@ from pypy.module.micronumpy.interp_numarray import W_NDimArray, Scalar from pypy.module.micronumpy.interp_ufuncs import (find_binop_result_dtype, find_unaryop_result_dtype) +from pypy.module.micronumpy.interp_boxes import W_Float64Box +from pypy.conftest import option +import sys class BaseNumpyAppTest(object): def setup_class(cls): + if option.runappdirect: + if '__pypy__' not in sys.builtin_module_names: + import numpy + sys.modules['numpypy'] = numpy + sys.modules['_numpypy'] = numpy 
cls.space = gettestobjspace(usemodules=['micronumpy']) class TestSignature(object): @@ -16,7 +24,7 @@ ar = W_NDimArray(10, [10], dtype=float64_dtype) ar2 = W_NDimArray(10, [10], dtype=float64_dtype) v1 = ar.descr_add(space, ar) - v2 = ar.descr_add(space, Scalar(float64_dtype, 2.0)) + v2 = ar.descr_add(space, Scalar(float64_dtype, W_Float64Box(2.0))) sig1 = v1.find_sig() sig2 = v2.find_sig() assert v1 is not v2 @@ -26,7 +34,7 @@ sig1b = ar2.descr_add(space, ar).find_sig() assert sig1b.left.array_no != sig1b.right.array_no assert sig1b is not sig1 - v3 = ar.descr_add(space, Scalar(float64_dtype, 1.0)) + v3 = ar.descr_add(space, Scalar(float64_dtype, W_Float64Box(1.0))) sig3 = v3.find_sig() assert sig2 is sig3 v4 = ar.descr_add(space, ar) diff --git a/pypy/module/micronumpy/test/test_compile.py b/pypy/module/micronumpy/test/test_compile.py --- a/pypy/module/micronumpy/test/test_compile.py +++ b/pypy/module/micronumpy/test/test_compile.py @@ -245,3 +245,19 @@ a -> 3 """) assert interp.results[0].value == 11 + + def test_flat_iter(self): + interp = self.run(''' + a = |30| + b = flat(a) + b -> 3 + ''') + assert interp.results[0].value == 3 + + def test_take(self): + interp = self.run(""" + a = |10| + b = take(a, [1, 1, 3, 2]) + b -> 2 + """) + assert interp.results[0].value == 3 diff --git a/pypy/module/micronumpy/test/test_dtypes.py b/pypy/module/micronumpy/test/test_dtypes.py --- a/pypy/module/micronumpy/test/test_dtypes.py +++ b/pypy/module/micronumpy/test/test_dtypes.py @@ -11,8 +11,17 @@ assert dtype('int8').num == 1 assert dtype(d) is d assert dtype(None) is dtype(float) + assert dtype('int8').name == 'int8' raises(TypeError, dtype, 1042) + def test_dtype_eq(self): + from _numpypy import dtype + + assert dtype("int8") == "int8" + assert "int8" == dtype("int8") + raises(TypeError, lambda: dtype("int8") == 3) + assert dtype(bool) == bool + def test_dtype_with_types(self): from _numpypy import dtype @@ -30,7 +39,7 @@ def test_repr_str(self): from _numpypy import dtype - assert repr(dtype) == "" + assert '.dtype' in repr(dtype) d = dtype('?') assert repr(d) == "dtype('bool')" assert str(d) == "bool" @@ -166,14 +175,11 @@ # You can't subclass dtype raises(TypeError, type, "Foo", (dtype,), {}) - def test_new(self): - import _numpypy as np - assert np.int_(4) == 4 - assert np.float_(3.4) == 3.4 + def test_aliases(self): + from _numpypy import dtype - def test_pow(self): - from _numpypy import int_ - assert int_(4) ** 2 == 16 + assert dtype("float") is dtype(float) + class AppTestTypes(BaseNumpyAppTest): def test_abstract_types(self): @@ -189,6 +195,15 @@ raises(TypeError, numpy.floating, 0) raises(TypeError, numpy.inexact, 0) + def test_new(self): + import _numpypy as np + assert np.int_(4) == 4 + assert np.float_(3.4) == 3.4 + + def test_pow(self): + from _numpypy import int_ + assert int_(4) ** 2 == 16 + def test_bool(self): import _numpypy as numpy @@ -318,7 +333,7 @@ else: raises(OverflowError, numpy.int64, 9223372036854775807) raises(OverflowError, numpy.int64, '9223372036854775807') - + raises(OverflowError, numpy.int64, 9223372036854775808) raises(OverflowError, numpy.int64, '9223372036854775808') @@ -370,3 +385,19 @@ b = X(10) assert type(b) is X assert b.m() == 12 + + def test_long_as_index(self): + skip("waiting for removal of multimethods of __index__") + from _numpypy import int_ + assert (1, 2, 3)[int_(1)] == 2 + + def test_int(self): + import sys + from _numpypy import int32, int64, int_ + assert issubclass(int_, int) + if sys.maxint == (1<<31) - 1: + assert issubclass(int32, int) + 
assert int_ is int32 + else: + assert issubclass(int64, int) + assert int_ is int64 diff --git a/pypy/module/micronumpy/test/test_iter.py b/pypy/module/micronumpy/test/test_iter.py new file mode 100644 --- /dev/null +++ b/pypy/module/micronumpy/test/test_iter.py @@ -0,0 +1,88 @@ +from pypy.module.micronumpy.interp_iter import ViewIterator + +class TestIterDirect(object): + def test_C_viewiterator(self): + #Let's get started, simple iteration in C order with + #contiguous layout => strides[-1] is 1 + start = 0 + shape = [3, 5] + strides = [5, 1] + backstrides = [x * (y - 1) for x,y in zip(strides, shape)] + assert backstrides == [10, 4] + i = ViewIterator(start, strides, backstrides, shape) + i = i.next(2) + i = i.next(2) + i = i.next(2) + assert i.offset == 3 + assert not i.done() + assert i.indices == [0,3] + #cause a dimension overflow + i = i.next(2) + i = i.next(2) + assert i.offset == 5 + assert i.indices == [1,0] + + #Now what happens if the array is transposed? strides[-1] != 1 + # therefore layout is non-contiguous + strides = [1, 3] + backstrides = [x * (y - 1) for x,y in zip(strides, shape)] + assert backstrides == [2, 12] + i = ViewIterator(start, strides, backstrides, shape) + i = i.next(2) + i = i.next(2) + i = i.next(2) + assert i.offset == 9 + assert not i.done() + assert i.indices == [0,3] + #cause a dimension overflow + i = i.next(2) + i = i.next(2) + assert i.offset == 1 + assert i.indices == [1,0] + + def test_C_viewiterator_step(self): + #iteration in C order with #contiguous layout => strides[-1] is 1 + #skip less than the shape + start = 0 + shape = [3, 5] + strides = [5, 1] + backstrides = [x * (y - 1) for x,y in zip(strides, shape)] + assert backstrides == [10, 4] + i = ViewIterator(start, strides, backstrides, shape) + i = i.next_skip_x(2,2) + i = i.next_skip_x(2,2) + i = i.next_skip_x(2,2) + assert i.offset == 6 + assert not i.done() + assert i.indices == [1,1] + #And for some big skips + i = i.next_skip_x(2,5) + assert i.offset == 11 + assert i.indices == [2,1] + i = i.next_skip_x(2,5) + # Note: the offset does not overflow but recycles, + # this is good for broadcast + assert i.offset == 1 + assert i.indices == [0,1] + assert i.done() + + #Now what happens if the array is transposed? 
strides[-1] != 1 + # therefore layout is non-contiguous + strides = [1, 3] + backstrides = [x * (y - 1) for x,y in zip(strides, shape)] + assert backstrides == [2, 12] + i = ViewIterator(start, strides, backstrides, shape) + i = i.next_skip_x(2,2) + i = i.next_skip_x(2,2) + i = i.next_skip_x(2,2) + assert i.offset == 4 + assert i.indices == [1,1] + assert not i.done() + i = i.next_skip_x(2,5) + assert i.offset == 5 + assert i.indices == [2,1] + assert not i.done() + i = i.next_skip_x(2,5) + assert i.indices == [0,1] + assert i.offset == 3 + assert i.done() diff --git a/pypy/module/micronumpy/test/test_module.py b/pypy/module/micronumpy/test/test_module.py --- a/pypy/module/micronumpy/test/test_module.py +++ b/pypy/module/micronumpy/test/test_module.py @@ -2,16 +2,11 @@ class AppTestNumPyModule(BaseNumpyAppTest): - def test_mean(self): - from _numpypy import array, mean - assert mean(array(range(5))) == 2.0 - assert mean(range(5)) == 2.0 - def test_average(self): from _numpypy import array, average assert average(range(10)) == 4.5 assert average(array(range(10))) == 4.5 - + def test_sum(self): from _numpypy import array, sum assert sum(range(10)) == 45 @@ -21,7 +16,7 @@ from _numpypy import array, min assert min(range(10)) == 0 assert min(array(range(10))) == 0 - + def test_max(self): from _numpypy import array, max assert max(range(10)) == 9 diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -2,6 +2,7 @@ import py from pypy.module.micronumpy.test.test_base import BaseNumpyAppTest from pypy.module.micronumpy.interp_numarray import W_NDimArray, shape_agreement +from pypy.module.micronumpy.interp_iter import Chunk from pypy.module.micronumpy import signature from pypy.interpreter.error import OperationError from pypy.conftest import gettestobjspace @@ -37,53 +38,54 @@ def test_create_slice_f(self): a = W_NDimArray(10 * 5 * 3, [10, 5, 3], MockDtype(), 'F') - s = a.create_slice([(3, 0, 0, 1)]) + s = a.create_slice([Chunk(3, 0, 0, 1)]) assert s.start == 3 assert s.strides == [10, 50] assert s.backstrides == [40, 100] - s = a.create_slice([(1, 9, 2, 4)]) + s = a.create_slice([Chunk(1, 9, 2, 4)]) assert s.start == 1 assert s.strides == [2, 10, 50] assert s.backstrides == [6, 40, 100] - s = a.create_slice([(1, 5, 3, 2), (1, 2, 1, 1), (1, 0, 0, 1)]) + s = a.create_slice([Chunk(1, 5, 3, 2), Chunk(1, 2, 1, 1), Chunk(1, 0, 0, 1)]) assert s.shape == [2, 1] assert s.strides == [3, 10] assert s.backstrides == [3, 0] - s = a.create_slice([(0, 10, 1, 10), (2, 0, 0, 1)]) + s = a.create_slice([Chunk(0, 10, 1, 10), Chunk(2, 0, 0, 1)]) assert s.start == 20 assert s.shape == [10, 3] def test_create_slice_c(self): a = W_NDimArray(10 * 5 * 3, [10, 5, 3], MockDtype(), 'C') - s = a.create_slice([(3, 0, 0, 1)]) + s = a.create_slice([Chunk(3, 0, 0, 1)]) assert s.start == 45 assert s.strides == [3, 1] assert s.backstrides == [12, 2] - s = a.create_slice([(1, 9, 2, 4)]) + s = a.create_slice([Chunk(1, 9, 2, 4)]) assert s.start == 15 assert s.strides == [30, 3, 1] assert s.backstrides == [90, 12, 2] - s = a.create_slice([(1, 5, 3, 2), (1, 2, 1, 1), (1, 0, 0, 1)]) + s = a.create_slice([Chunk(1, 5, 3, 2), Chunk(1, 2, 1, 1), + Chunk(1, 0, 0, 1)]) assert s.start == 19 assert s.shape == [2, 1] assert s.strides == [45, 3] assert s.backstrides == [45, 0] - s = a.create_slice([(0, 10, 1, 10), (2, 0, 0, 1)]) + s = a.create_slice([Chunk(0, 10, 1, 10), Chunk(2, 0, 0, 1)]) 
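The start offset of 6 and shape of [10, 3] asserted just below can be reproduced with a few lines of plain Python (an illustrative sketch, not the real calculate_slice_strides; the chunk fields follow the (start, stop, step, lgt) order used in these tests):

    def slice_strides(shape, strides, chunks):
        # step == 0 marks "pick a single index": it moves the start offset
        # but contributes no dimension to the resulting view.
        rstart, rshape, rstrides = 0, [], []
        for i, (start, stop, step, lgt) in enumerate(chunks):
            rstart += strides[i] * start
            if step != 0:
                rstrides.append(strides[i] * step)
                rshape.append(lgt)
        # trailing dimensions not touched by any chunk are kept unchanged
        return (rstart,
                rshape + shape[len(chunks):],
                rstrides + strides[len(chunks):])

    # a C-ordered (10, 5, 3) array has strides [15, 3, 1]
    assert slice_strides([10, 5, 3], [15, 3, 1],
                         [(0, 10, 1, 10), (2, 0, 0, 1)]) == (6, [10, 3], [15, 1])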
assert s.start == 6 assert s.shape == [10, 3] def test_slice_of_slice_f(self): a = W_NDimArray(10 * 5 * 3, [10, 5, 3], MockDtype(), 'F') - s = a.create_slice([(5, 0, 0, 1)]) + s = a.create_slice([Chunk(5, 0, 0, 1)]) assert s.start == 5 - s2 = s.create_slice([(3, 0, 0, 1)]) + s2 = s.create_slice([Chunk(3, 0, 0, 1)]) assert s2.shape == [3] assert s2.strides == [50] assert s2.parent is a assert s2.backstrides == [100] assert s2.start == 35 - s = a.create_slice([(1, 5, 3, 2)]) - s2 = s.create_slice([(0, 2, 1, 2), (2, 0, 0, 1)]) + s = a.create_slice([Chunk(1, 5, 3, 2)]) + s2 = s.create_slice([Chunk(0, 2, 1, 2), Chunk(2, 0, 0, 1)]) assert s2.shape == [2, 3] assert s2.strides == [3, 50] assert s2.backstrides == [3, 100] @@ -91,16 +93,16 @@ def test_slice_of_slice_c(self): a = W_NDimArray(10 * 5 * 3, [10, 5, 3], MockDtype(), order='C') - s = a.create_slice([(5, 0, 0, 1)]) + s = a.create_slice([Chunk(5, 0, 0, 1)]) assert s.start == 15 * 5 - s2 = s.create_slice([(3, 0, 0, 1)]) + s2 = s.create_slice([Chunk(3, 0, 0, 1)]) assert s2.shape == [3] assert s2.strides == [1] assert s2.parent is a assert s2.backstrides == [2] assert s2.start == 5 * 15 + 3 * 3 - s = a.create_slice([(1, 5, 3, 2)]) - s2 = s.create_slice([(0, 2, 1, 2), (2, 0, 0, 1)]) + s = a.create_slice([Chunk(1, 5, 3, 2)]) + s2 = s.create_slice([Chunk(0, 2, 1, 2), Chunk(2, 0, 0, 1)]) assert s2.shape == [2, 3] assert s2.strides == [45, 1] assert s2.backstrides == [45, 2] @@ -108,14 +110,14 @@ def test_negative_step_f(self): a = W_NDimArray(10 * 5 * 3, [10, 5, 3], MockDtype(), 'F') - s = a.create_slice([(9, -1, -2, 5)]) + s = a.create_slice([Chunk(9, -1, -2, 5)]) assert s.start == 9 assert s.strides == [-2, 10, 50] assert s.backstrides == [-8, 40, 100] def test_negative_step_c(self): a = W_NDimArray(10 * 5 * 3, [10, 5, 3], MockDtype(), order='C') - s = a.create_slice([(9, -1, -2, 5)]) + s = a.create_slice([Chunk(9, -1, -2, 5)]) assert s.start == 135 assert s.strides == [-30, 3, 1] assert s.backstrides == [-120, 12, 2] @@ -124,7 +126,7 @@ a = W_NDimArray(10 * 5 * 3, [10, 5, 3], MockDtype(), 'F') r = a._index_of_single_item(self.space, self.newtuple(1, 2, 2)) assert r == 1 + 2 * 10 + 2 * 50 - s = a.create_slice([(0, 10, 1, 10), (2, 0, 0, 1)]) + s = a.create_slice([Chunk(0, 10, 1, 10), Chunk(2, 0, 0, 1)]) r = s._index_of_single_item(self.space, self.newtuple(1, 0)) assert r == a._index_of_single_item(self.space, self.newtuple(1, 2, 0)) r = s._index_of_single_item(self.space, self.newtuple(1, 1)) @@ -134,7 +136,7 @@ a = W_NDimArray(10 * 5 * 3, [10, 5, 3], MockDtype(), 'C') r = a._index_of_single_item(self.space, self.newtuple(1, 2, 2)) assert r == 1 * 3 * 5 + 2 * 3 + 2 - s = a.create_slice([(0, 10, 1, 10), (2, 0, 0, 1)]) + s = a.create_slice([Chunk(0, 10, 1, 10), Chunk(2, 0, 0, 1)]) r = s._index_of_single_item(self.space, self.newtuple(1, 0)) assert r == a._index_of_single_item(self.space, self.newtuple(1, 2, 0)) r = s._index_of_single_item(self.space, self.newtuple(1, 1)) @@ -152,12 +154,36 @@ def test_calc_new_strides(self): from pypy.module.micronumpy.interp_numarray import calc_new_strides - assert calc_new_strides([2, 4], [4, 2], [4, 2]) == [8, 2] - assert calc_new_strides([2, 4, 3], [8, 3], [1, 16]) == [1, 2, 16] - assert calc_new_strides([2, 3, 4], [8, 3], [1, 16]) is None - assert calc_new_strides([24], [2, 4, 3], [48, 6, 1]) is None - assert calc_new_strides([24], [2, 4, 3], [24, 6, 2]) == [2] + assert calc_new_strides([2, 4], [4, 2], [4, 2], "C") == [8, 2] + assert calc_new_strides([2, 4, 3], [8, 3], [1, 16], 'F') == [1, 2, 16] + assert 
calc_new_strides([2, 3, 4], [8, 3], [1, 16], 'F') is None + assert calc_new_strides([24], [2, 4, 3], [48, 6, 1], 'C') is None + assert calc_new_strides([24], [2, 4, 3], [24, 6, 2], 'C') == [2] + assert calc_new_strides([105, 1], [3, 5, 7], [35, 7, 1],'C') == [1, 1] + assert calc_new_strides([1, 105], [3, 5, 7], [35, 7, 1],'C') == [105, 1] + assert calc_new_strides([1, 105], [3, 5, 7], [35, 7, 1],'F') is None + assert calc_new_strides([1, 1, 1, 105, 1], [15, 7], [7, 1],'C') == \ + [105, 105, 105, 1, 1] + assert calc_new_strides([1, 1, 105, 1, 1], [7, 15], [1, 7],'F') == \ + [1, 1, 1, 105, 105] + def test_to_coords(self): + from pypy.module.micronumpy.strides import to_coords + + def _to_coords(index, order): + return to_coords(self.space, [2, 3, 4], 24, order, + self.space.wrap(index))[0] + + assert _to_coords(0, 'C') == [0, 0, 0] + assert _to_coords(1, 'C') == [0, 0, 1] + assert _to_coords(-1, 'C') == [1, 2, 3] + assert _to_coords(5, 'C') == [0, 1, 1] + assert _to_coords(13, 'C') == [1, 0, 1] + assert _to_coords(0, 'F') == [0, 0, 0] + assert _to_coords(1, 'F') == [1, 0, 0] + assert _to_coords(-1, 'F') == [1, 2, 3] + assert _to_coords(5, 'F') == [1, 2, 0] + assert _to_coords(13, 'F') == [1, 0, 2] class AppTestNumArray(BaseNumpyAppTest): def test_ndarray(self): @@ -202,6 +228,7 @@ # And check that changes stick. a[13] = 5.3 assert a[13] == 5.3 + assert zeros(()).shape == () def test_size(self): from _numpypy import array @@ -245,6 +272,11 @@ b = a[::2] c = b.copy() assert (c == b).all() + assert ((a + a).copy() == (a + a)).all() + + a = arange(15).reshape(5,3) + b = a.copy() + assert (b == a).all() def test_iterator_init(self): from _numpypy import array @@ -270,6 +302,12 @@ for i in xrange(5): assert a[i] == b[i] + def test_getitem_nd(self): + from _numpypy import arange + a = arange(15).reshape(3, 5) + assert a[1, 3] == 8 + assert a.T[1, 2] == 11 + def test_setitem(self): from _numpypy import array a = array(range(5)) @@ -357,6 +395,7 @@ assert b.shape == (5,) c = a[:3] assert c.shape == (3,) + assert array([]).shape == (0,) def test_set_shape(self): from _numpypy import array, zeros @@ -377,6 +416,8 @@ a.shape = () #numpy allows this a.shape = (1,) + a = array(range(6)).reshape(2,3).T + raises(AttributeError, 'a.shape = 6') def test_reshape(self): from _numpypy import array, zeros @@ -390,6 +431,7 @@ assert (a == [1000, 1, 2, 3, 1000, 5, 6, 7, 1000, 9, 10, 11]).all() a = zeros((4, 2, 3)) a.shape = (12, 2) + (a + a).reshape(2, 12) # assert did not explode def test_slice_reshape(self): from _numpypy import zeros, arange @@ -433,6 +475,13 @@ y = z.reshape(4, 3, 8) assert y.shape == (4, 3, 8) + def test_scalar_reshape(self): + from numpypy import array + a = array(3) + assert a.reshape([1, 1]).shape == (1, 1) + assert a.reshape([1]).shape == (1,) + raises(ValueError, "a.reshape(3)") + def test_add(self): from _numpypy import array a = array(range(5)) @@ -720,10 +769,17 @@ assert d[1] == 12 def test_mean(self): - from _numpypy import array + from _numpypy import array, arange a = array(range(5)) assert a.mean() == 2.0 assert a[:4].mean() == 1.5 + a = array(range(105)).reshape(3, 5, 7) + b = a.mean(axis=0) + b[0, 0]==35. 
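The value of 35 asserted just below for the axis-0 mean follows from which elements get averaged; a throwaway plain-Python check (no numpy involved, names invented):

    # For arange(105).reshape(3, 5, 7) in C order, element (i, j, k) sits at
    # flat index i*35 + j*7 + k, so the axis-0 mean at (0, 0) averages
    # the values 0, 35 and 70.
    flat = list(range(105))
    def at(i, j, k):
        return flat[i * 35 + j * 7 + k]
    assert sum(at(i, 0, 0) for i in range(3)) / 3.0 == 35.0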
+ assert a.mean(axis=0)[0, 0] == 35 + assert (b == array(range(35, 70), dtype=float).reshape(5, 7)).all() + assert (a.mean(2) == array(range(0, 15), dtype=float).reshape(3, 5) * 7 + 3).all() + assert (arange(10).reshape(5, 2).mean(axis=1) == [0.5, 2.5, 4.5, 6.5, 8.5]).all() def test_sum(self): from _numpypy import array @@ -734,6 +790,34 @@ a = array([True] * 5, bool) assert a.sum() == 5 + raises(TypeError, 'a.sum(2, 3)') + + def test_reduce_nd(self): + from numpypy import arange, array, multiply + a = arange(15).reshape(5, 3) + assert a.sum() == 105 + assert a.max() == 14 + assert array([]).sum() == 0.0 + raises(ValueError, 'array([]).max()') + assert (a.sum(0) == [30, 35, 40]).all() + assert (a.sum(axis=0) == [30, 35, 40]).all() + assert (a.sum(1) == [3, 12, 21, 30, 39]).all() + assert (a.max(0) == [12, 13, 14]).all() + assert (a.max(1) == [2, 5, 8, 11, 14]).all() + assert ((a + a).max() == 28) + assert ((a + a).max(0) == [24, 26, 28]).all() + assert ((a + a).sum(1) == [6, 24, 42, 60, 78]).all() + assert (multiply.reduce(a) == array([0, 3640, 12320])).all() + a = array(range(105)).reshape(3, 5, 7) + assert (a[:, 1, :].sum(0) == [126, 129, 132, 135, 138, 141, 144]).all() + assert (a[:, 1, :].sum(1) == [70, 315, 560]).all() + raises (ValueError, 'a[:, 1, :].sum(2)') + assert ((a + a).T.sum(2).T == (a + a).sum(0)).all() + assert (a.reshape(1,-1).sum(0) == range(105)).all() + assert (a.reshape(1,-1).sum(1) == 5460) + assert (array([[1,2],[3,4]]).prod(0) == [3, 8]).all() + assert (array([[1,2],[3,4]]).prod(1) == [2, 12]).all() + def test_identity(self): from _numpypy import identity, array from _numpypy import int32, float64, dtype @@ -906,7 +990,7 @@ assert debug_repr(a + a) == 'Call2(add, Array, Array)' assert debug_repr(a[::2]) == 'Slice' assert debug_repr(a + 2) == 'Call2(add, Array, Scalar)' - assert debug_repr(a + a.flat) == 'Call2(add, Array, Slice)' + assert debug_repr(a + a.flat) == 'Call2(add, Array, Flat)' assert debug_repr(sin(a)) == 'Call1(sin, Array)' b = a + a @@ -979,11 +1063,49 @@ assert a[0].tolist() == [17.1, 27.2] def test_var(self): - from _numpypy import array + from _numpypy import array, arange a = array(range(10)) assert a.var() == 8.25 a = array([5.0]) assert a.var() == 0.0 + a = arange(10).reshape(5, 2) + assert a.var() == 8.25 + assert (a.var(0) == [8, 8]).all() + assert (a.var(1) == [.25] * 5).all() + + def test_concatenate(self): + from numpypy import array, concatenate, dtype + a1 = array([0,1,2]) + a2 = array([3,4,5]) + a = concatenate((a1, a2)) + assert len(a) == 6 + assert (a == [0,1,2,3,4,5]).all() + assert a.dtype is dtype(int) + b1 = array([[1, 2], [3, 4]]) + b2 = array([[5, 6]]) + b = concatenate((b1, b2), axis=0) + assert (b == [[1, 2],[3, 4],[5, 6]]).all() + c = concatenate((b1, b2.T), axis=1) + assert (c == [[1, 2, 5],[3, 4, 6]]).all() + d = concatenate(([0],[1])) + assert (d == [0,1]).all() + e1 = array([[0,1],[2,3]]) + e = concatenate(e1) + assert (e == [0,1,2,3]).all() + f1 = array([0,1]) + f = concatenate((f1, [2], f1, [7])) + assert (f == [0,1,2,0,1,7]).all() + + bad_axis = raises(ValueError, concatenate, (a1,a2), axis=1) + assert str(bad_axis.value) == "bad axis argument" + + concat_zero = raises(ValueError, concatenate, ()) + assert str(concat_zero.value) == \ + "concatenation of zero-length sequences is impossible" + + dims_disagree = raises(ValueError, concatenate, (a1, b1), axis=0) + assert str(dims_disagree.value) == \ + "array dimensions must agree except for axis being concatenated" def test_std(self): from _numpypy import array @@ -992,6 
+1114,28 @@ a = array([5.0]) assert a.std() == 0.0 + def test_flatten(self): + from _numpypy import array + + assert array(3).flatten().shape == (1,) + a = array([[1, 2], [3, 4]]) + b = a.flatten() + c = a.ravel() + a[0, 0] = 15 + assert b[0] == 1 + assert c[0] == 15 + a = array([[1, 2, 3], [4, 5, 6]]) + assert (a.flatten() == [1, 2, 3, 4, 5, 6]).all() + a = array([[[1, 2], [3, 4]], [[5, 6], [7, 8]]]) + assert (a.flatten() == [1, 2, 3, 4, 5, 6, 7, 8]).all() + a = array([1, 2, 3, 4, 5, 6, 7, 8]) + assert (a[::2].flatten() == [1, 3, 5, 7]).all() + a = array([1, 2, 3]) + assert ((a + a).flatten() == [2, 4, 6]).all() + a = array(2) + assert (a.flatten() == [2]).all() + a = array([[1, 2], [3, 4]]) + assert (a.T.flatten() == [1, 3, 2, 4]).all() class AppTestMultiDim(BaseNumpyAppTest): def test_init(self): @@ -1007,7 +1151,7 @@ assert _numpypy.array([[1], [2], [3]]).shape == (3, 1) assert len(_numpypy.zeros((3, 1, 2))) == 3 raises(TypeError, len, _numpypy.zeros(())) - raises(ValueError, _numpypy.array, [[1, 2], 3]) + raises(ValueError, _numpypy.array, [[1, 2], 3], dtype=float) def test_getsetitem(self): import _numpypy @@ -1222,7 +1366,7 @@ assert(b[:, 0] == a[0, :]).all() def test_flatiter(self): - from _numpypy import array, flatiter + from _numpypy import array, flatiter, arange a = array([[10, 30], [40, 60]]) f_iter = a.flat assert f_iter.next() == 10 @@ -1235,6 +1379,9 @@ for k in a.flat: s += k assert s == 140 + a = arange(10).reshape(5, 2) + raises(IndexError, 'a.flat[(1, 2)]') + assert a.flat.base is a def test_flatiter_array_conv(self): from _numpypy import array, dot @@ -1246,6 +1393,75 @@ a = ones((2, 2)) assert list(((a + a).flat)) == [2, 2, 2, 2] + def test_flatiter_getitem(self): + from _numpypy import arange + a = arange(10) + assert a.flat[3] == 3 + assert a[2:].flat[3] == 5 + assert (a + a).flat[3] == 6 + assert a[::2].flat[3] == 6 + assert a.reshape(2,5).flat[3] == 3 + b = a.reshape(2,5).flat + b.next() + b.next() + b.next() + assert b[3] == 3 + assert (b[::3] == [0, 3, 6, 9]).all() + assert (b[2::5] == [2, 7]).all() + assert b[-2] == 8 + raises(IndexError, "b[11]") + raises(IndexError, "b[-11]") + raises(IndexError, 'b[0, 1]') + assert b.index == 3 + assert b.coords == (0,3) + + def test_flatiter_setitem(self): + from _numpypy import arange, array + a = arange(12).reshape(3,4) + b = a.T.flat + b[6::2] = [-1, -2] + assert (a == [[0, 1, -1, 3], [4, 5, 6, -1], [8, 9, -2, 11]]).all() + b[0:2] = [[[100]]] + assert(a[0,0] == 100) + assert(a[1,0] == 100) + raises(IndexError, 'b[array([10, 11])] == [-20, -40]') + + def test_flatiter_ops(self): + from _numpypy import arange, array + a = arange(12).reshape(3,4) + b = a.T.flat + assert (b == [0, 4, 8, 1, 5, 9, 2, 6, 10, 3, 7, 11]).all() + assert not (b != [0, 4, 8, 1, 5, 9, 2, 6, 10, 3, 7, 11]).any() + assert ((b >= range(12)) == [True, True, True,False, True, True, + False, False, True, False, False, True]).all() + assert ((b < range(12)) != [True, True, True,False, True, True, + False, False, True, False, False, True]).all() + assert ((b <= range(12)) != [False, True, True,False, True, True, + False, False, True, False, False, False]).all() + assert ((b > range(12)) == [False, True, True,False, True, True, + False, False, True, False, False, False]).all() + def test_flatiter_view(self): + from _numpypy import arange + a = arange(10).reshape(5, 2) + #no == yet. 
+ # a[::2].flat == [0, 1, 4, 5, 8, 9] + isequal = True + for y,z in zip(a[::2].flat, [0, 1, 4, 5, 8, 9]): + if y != z: + isequal = False + assert isequal == True + + def test_flatiter_transpose(self): + from _numpypy import arange + a = arange(10).reshape(2,5).T + b = a.flat + assert (b[:5] == [0, 5, 1, 6, 2]).all() + b.next() + b.next() + b.next() + assert b.index == 3 + assert b.coords == (1,1) + def test_slice_copy(self): from _numpypy import zeros a = zeros((10, 10)) @@ -1262,6 +1478,110 @@ assert isinstance(i['data'][0], int) raises(TypeError, getattr, array(3), '__array_interface__') + def test_array_indexing_one_elem(self): + skip("not yet") + from _numpypy import array, arange + raises(IndexError, 'arange(3)[array([3.5])]') + a = arange(3)[array([1])] + assert a == 1 + assert a[0] == 1 + raises(IndexError,'arange(3)[array([15])]') + assert arange(3)[array([-3])] == 0 + raises(IndexError,'arange(3)[array([-15])]') + assert arange(3)[array(1)] == 1 + + def test_fill(self): + from _numpypy import array + a = array([1, 2, 3]) + a.fill(10) + assert (a == [10, 10, 10]).all() + a.fill(False) + assert (a == [0, 0, 0]).all() + b = a[:1] + b.fill(4) + assert (b == [4]).all() + assert (a == [4, 0, 0]).all() + + c = b + b + c.fill(27) + assert (c == [27]).all() + + d = array(10) + d.fill(100) + assert d == 100 + + def test_array_indexing_bool(self): + from _numpypy import arange + a = arange(10) + assert (a[a > 3] == [4, 5, 6, 7, 8, 9]).all() + a = arange(10).reshape(5, 2) + assert (a[a > 3] == [4, 5, 6, 7, 8, 9]).all() + assert (a[a & 1 == 1] == [1, 3, 5, 7, 9]).all() + + def test_array_indexing_bool_setitem(self): + from _numpypy import arange, array + a = arange(6) + a[a > 3] = 15 + assert (a == [0, 1, 2, 3, 15, 15]).all() + a = arange(6).reshape(3, 2) + a[a & 1 == 1] = array([8, 9, 10]) + assert (a == [[0, 8], [2, 9], [4, 10]]).all() + + def test_copy_kwarg(self): + from _numpypy import array + x = array([1, 2, 3]) + assert (array(x) == x).all() + assert array(x) is not x + assert array(x, copy=False) is x + assert array(x, copy=True) is not x + + def test_isna(self): + from _numpypy import isna, array + # XXX for now + assert not isna(3) + assert (isna(array([1, 2, 3, 4])) == [False, False, False, False]).all() + + def test_ravel(self): + from _numpypy import arange + assert (arange(3).ravel() == arange(3)).all() + assert (arange(6).reshape(2, 3).ravel() == arange(6)).all() + assert (arange(6).reshape(2, 3).T.ravel() == [0, 3, 1, 4, 2, 5]).all() + + def test_take(self): + from _numpypy import arange + assert (arange(10).take([1, 2, 1, 1]) == [1, 2, 1, 1]).all() + raises(IndexError, "arange(3).take([15])") + a = arange(6).reshape(2, 3) + assert (a.take([1, 0, 3]) == [1, 0, 3]).all() + assert ((a + a).take([3]) == [6]).all() + a = arange(12).reshape(2, 6) + assert (a[:,::2].take([3, 2, 1]) == [6, 4, 2]).all() + + def test_compress(self): + from _numpypy import arange + a = arange(10) + assert (a.compress([True, False, True]) == [0, 2]).all() + assert (a.compress([1, 0, 13]) == [0, 2]).all() + assert (a.compress([1, 0, 13.5]) == [0, 2]).all() + a = arange(10).reshape(2, 5) + assert (a.compress([True, False, True]) == [0, 2]).all() + raises(IndexError, "a.compress([1] * 100)") + + def test_item(self): + from _numpypy import array + assert array(3).item() == 3 + assert type(array(3).item()) is int + assert type(array(True).item()) is bool + assert type(array(3.5).item()) is float + raises((ValueError, IndexError), "array(3).item(15)") + raises(ValueError, "array([1, 2, 3]).item()") + assert 
array([3]).item(0) == 3 + assert type(array([3]).item(0)) is int + assert array([1, 2, 3]).item(-1) == 3 + a = array([1, 2, 3]) + assert a[::2].item(1) == 3 + assert (a + a).item(1) == 4 + raises(ValueError, "array(5).item(1)") class AppTestSupport(BaseNumpyAppTest): def setup_class(cls): @@ -1373,123 +1693,6 @@ raises(ValueError, fromstring, "\x01\x02\x03", count=5, dtype=uint8) -class AppTestRepr(BaseNumpyAppTest): - def test_repr(self): - from _numpypy import array, zeros - int_size = array(5).dtype.itemsize - a = array(range(5), float) - assert repr(a) == "array([0.0, 1.0, 2.0, 3.0, 4.0])" - a = array([], float) - assert repr(a) == "array([], dtype=float64)" - a = zeros(1001) - assert repr(a) == "array([0.0, 0.0, 0.0, ..., 0.0, 0.0, 0.0])" - a = array(range(5), long) - if a.dtype.itemsize == int_size: - assert repr(a) == "array([0, 1, 2, 3, 4])" - else: - assert repr(a) == "array([0, 1, 2, 3, 4], dtype=int64)" - a = array(range(5), 'int32') - if a.dtype.itemsize == int_size: - assert repr(a) == "array([0, 1, 2, 3, 4])" - else: - assert repr(a) == "array([0, 1, 2, 3, 4], dtype=int32)" - a = array([], long) - assert repr(a) == "array([], dtype=int64)" - a = array([True, False, True, False], "?") - assert repr(a) == "array([True, False, True, False], dtype=bool)" - a = zeros([]) - assert repr(a) == "array(0.0)" - a = array(0.2) - assert repr(a) == "array(0.2)" - - def test_repr_multi(self): - from _numpypy import arange, zeros - a = zeros((3, 4)) - assert repr(a) == '''array([[0.0, 0.0, 0.0, 0.0], - [0.0, 0.0, 0.0, 0.0], - [0.0, 0.0, 0.0, 0.0]])''' - a = zeros((2, 3, 4)) - assert repr(a) == '''array([[[0.0, 0.0, 0.0, 0.0], - [0.0, 0.0, 0.0, 0.0], - [0.0, 0.0, 0.0, 0.0]], - - [[0.0, 0.0, 0.0, 0.0], - [0.0, 0.0, 0.0, 0.0], - [0.0, 0.0, 0.0, 0.0]]])''' - a = arange(1002).reshape((2, 501)) - assert repr(a) == '''array([[0, 1, 2, ..., 498, 499, 500], - [501, 502, 503, ..., 999, 1000, 1001]])''' - assert repr(a.T) == '''array([[0, 501], - [1, 502], - [2, 503], - ..., - [498, 999], - [499, 1000], - [500, 1001]])''' - - def test_repr_slice(self): - from _numpypy import array, zeros - a = array(range(5), float) - b = a[1::2] - assert repr(b) == "array([1.0, 3.0])" - a = zeros(2002) - b = a[::2] - assert repr(b) == "array([0.0, 0.0, 0.0, ..., 0.0, 0.0, 0.0])" - a = array((range(5), range(5, 10)), dtype="int16") - b = a[1, 2:] - assert repr(b) == "array([7, 8, 9], dtype=int16)" - # an empty slice prints its shape - b = a[2:1, ] - assert repr(b) == "array([], shape=(0, 5), dtype=int16)" - - def test_str(self): - from _numpypy import array, zeros - a = array(range(5), float) - assert str(a) == "[0.0 1.0 2.0 3.0 4.0]" - assert str((2 * a)[:]) == "[0.0 2.0 4.0 6.0 8.0]" - a = zeros(1001) - assert str(a) == "[0.0 0.0 0.0 ..., 0.0 0.0 0.0]" - - a = array(range(5), dtype=long) - assert str(a) == "[0 1 2 3 4]" - a = array([True, False, True, False], dtype="?") - assert str(a) == "[True False True False]" - - a = array(range(5), dtype="int8") - assert str(a) == "[0 1 2 3 4]" - - a = array(range(5), dtype="int16") - assert str(a) == "[0 1 2 3 4]" - - a = array((range(5), range(5, 10)), dtype="int16") - assert str(a) == "[[0 1 2 3 4]\n [5 6 7 8 9]]" - - a = array(3, dtype=int) - assert str(a) == "3" - - a = zeros((400, 400), dtype=int) - assert str(a) == "[[0 0 0 ..., 0 0 0]\n [0 0 0 ..., 0 0 0]\n" \ - " [0 0 0 ..., 0 0 0]\n ...,\n [0 0 0 ..., 0 0 0]\n" \ - " [0 0 0 ..., 0 0 0]\n [0 0 0 ..., 0 0 0]]" - a = zeros((2, 2, 2)) - r = str(a) - assert r == '[[[0.0 0.0]\n [0.0 0.0]]\n\n [[0.0 0.0]\n [0.0 0.0]]]' - - 
def test_str_slice(self): - from _numpypy import array, zeros - a = array(range(5), float) - b = a[1::2] - assert str(b) == "[1.0 3.0]" - a = zeros(2002) - b = a[::2] - assert str(b) == "[0.0 0.0 0.0 ..., 0.0 0.0 0.0]" - a = array((range(5), range(5, 10)), dtype="int16") - b = a[1, 2:] - assert str(b) == "[7 8 9]" - b = a[2:1, ] - assert str(b) == "[]" - - class AppTestRanges(BaseNumpyAppTest): def test_arange(self): from _numpypy import arange, array, dtype @@ -1511,13 +1714,23 @@ assert len(a) == 8 assert arange(False, True, True).dtype is dtype(int) +from pypy.module.micronumpy.appbridge import get_appbridge_cache -class AppTestRanges(BaseNumpyAppTest): - def test_app_reshape(self): - from _numpypy import arange, array, dtype, reshape - a = arange(12) - b = reshape(a, (3, 4)) - assert b.shape == (3, 4) - a = range(12) - b = reshape(a, (3, 4)) - assert b.shape == (3, 4) +class AppTestRepr(BaseNumpyAppTest): + def setup_class(cls): + BaseNumpyAppTest.setup_class.im_func(cls) + cache = get_appbridge_cache(cls.space) + cls.old_array_repr = cache.w_array_repr + cls.old_array_str = cache.w_array_str + cache.w_array_str = None + cache.w_array_repr = None + + def test_repr_str(self): + from _numpypy import array + assert repr(array([1, 2, 3])) == 'array([1, 2, 3])' + assert str(array([1, 2, 3])) == 'array([1, 2, 3])' + + def teardown_class(cls): + cache = get_appbridge_cache(cls.space) + cache.w_array_repr = cls.old_array_repr + cache.w_array_str = cls.old_array_str diff --git a/pypy/module/micronumpy/test/test_ufuncs.py b/pypy/module/micronumpy/test/test_ufuncs.py --- a/pypy/module/micronumpy/test/test_ufuncs.py +++ b/pypy/module/micronumpy/test/test_ufuncs.py @@ -190,14 +190,24 @@ for i in range(3): assert c[i] == a[i] - b[i] - def test_floor(self): - from _numpypy import array, floor - - reference = [-2.0, -1.0, 0.0, 1.0, 1.0] - a = array([-1.4, -1.0, 0.0, 1.0, 1.4]) + def test_floorceil(self): + from _numpypy import array, floor, ceil + import math + reference = [-2.0, -2.0, -1.0, 0.0, 1.0, 1.0, 0] + a = array([-1.4, -1.5, -1.0, 0.0, 1.0, 1.4, 0.5]) b = floor(a) for i in range(5): assert b[i] == reference[i] + reference = [-1.0, -1.0, -1.0, 0.0, 1.0, 2.0, 1.0] + a = array([-1.4, -1.5, -1.0, 0.0, 1.0, 1.4, 0.5]) + b = ceil(a) + assert (reference == b).all() + inf = float("inf") + data = [1.5, 2.9999, -1.999, inf] + results = [math.floor(x) for x in data] + assert (floor(data) == results).all() + results = [math.ceil(x) for x in data] + assert (ceil(data) == results).all() def test_copysign(self): from _numpypy import array, copysign @@ -238,7 +248,7 @@ assert b[i] == math.sin(a[i]) a = sin(array([True, False], dtype=bool)) - assert abs(a[0] - sin(1)) < 1e-7 # a[0] will be less precise + assert abs(a[0] - sin(1)) < 1e-7 # a[0] will be less precise assert a[1] == 0.0 def test_cos(self): @@ -259,7 +269,6 @@ for i in range(len(a)): assert b[i] == math.tan(a[i]) - def test_arcsin(self): import math from _numpypy import array, arcsin @@ -283,7 +292,6 @@ for i in range(len(a)): assert b[i] == math.acos(a[i]) - a = array([-10, -1.5, -1.01, 1.01, 1.5, 10, float('nan'), float('inf'), float('-inf')]) b = arccos(a) for f in b: @@ -298,7 +306,7 @@ for i in range(len(a)): assert b[i] == math.atan(a[i]) - a = array([float('nan')]) + a = array([float('nan')]) b = arctan(a) assert math.isnan(b[0]) @@ -336,9 +344,9 @@ from _numpypy import sin, add raises(ValueError, sin.reduce, [1, 2, 3]) - raises(TypeError, add.reduce, 1) + raises((ValueError, TypeError), add.reduce, 1) - def test_reduce(self): + def 
test_reduce_1d(self): from _numpypy import add, maximum assert add.reduce([1, 2, 3]) == 6 @@ -346,6 +354,35 @@ assert maximum.reduce([1, 2, 3]) == 3 raises(ValueError, maximum.reduce, []) + def test_reduceND(self): + from _numpypy import add, arange + a = arange(12).reshape(3, 4) + assert (add.reduce(a, 0) == [12, 15, 18, 21]).all() + assert (add.reduce(a, 1) == [6.0, 22.0, 38.0]).all() + + def test_reduce_keepdims(self): + from _numpypy import add, arange + a = arange(12).reshape(3, 4) + b = add.reduce(a, 0, keepdims=True) + assert b.shape == (1, 4) + assert (add.reduce(a, 0, keepdims=True) == [12, 15, 18, 21]).all() + + + def test_bitwise(self): + from _numpypy import bitwise_and, bitwise_or, arange, array + a = arange(6).reshape(2, 3) + assert (a & 1 == [[0, 1, 0], [1, 0, 1]]).all() + assert (a & 1 == bitwise_and(a, 1)).all() + assert (a | 1 == [[1, 1, 3], [3, 5, 5]]).all() + assert (a | 1 == bitwise_or(a, 1)).all() + raises(TypeError, 'array([1.0]) & 1') + + def test_unary_bitops(self): + from _numpypy import bitwise_not, array + a = array([1, 2, 3, 4]) + assert (~a == [-2, -3, -4, -5]).all() + assert (bitwise_not(a) == ~a).all() + def test_comparisons(self): import operator from _numpypy import equal, not_equal, less, less_equal, greater, greater_equal @@ -371,3 +408,28 @@ (3, 3.5), ]: assert ufunc(a, b) == func(a, b) + + def test_count_reduce_items(self): + from _numpypy import count_reduce_items, arange + a = arange(24).reshape(2, 3, 4) + assert count_reduce_items(a) == 24 + assert count_reduce_items(a, 1) == 3 + assert count_reduce_items(a, (1, 2)) == 3 * 4 + + def test_true_divide(self): + from _numpypy import arange, array, true_divide + assert (true_divide(arange(3), array([2, 2, 2])) == array([0, 0.5, 1])).all() + + def test_isnan_isinf(self): + from _numpypy import isnan, isinf, float64, array + assert isnan(float('nan')) + assert isnan(float64(float('nan'))) + assert not isnan(3) + assert isinf(float('inf')) + assert not isnan(3.5) + assert not isinf(3.5) + assert not isnan(float('inf')) + assert not isinf(float('nan')) + assert (isnan(array([0.2, float('inf'), float('nan')])) == [False, False, True]).all() + assert (isinf(array([0.2, float('inf'), float('nan')])) == [False, True, False]).all() + assert isinf(array([0.2])).dtype.kind == 'b' diff --git a/pypy/module/micronumpy/test/test_zjit.py b/pypy/module/micronumpy/test/test_zjit.py --- a/pypy/module/micronumpy/test/test_zjit.py +++ b/pypy/module/micronumpy/test/test_zjit.py @@ -12,7 +12,7 @@ from pypy.module.micronumpy.compile import (FakeSpace, IntObject, Parser, InterpreterState) from pypy.module.micronumpy.interp_numarray import (W_NDimArray, - BaseArray) + BaseArray, W_FlatIterator) from pypy.rlib.nonconst import NonConstant @@ -47,6 +47,8 @@ def f(i): interp = InterpreterState(codes[i]) interp.run(space) + if not len(interp.results): + raise Exception("need results") w_res = interp.results[-1] if isinstance(w_res, BaseArray): concr = w_res.get_concrete_or_scalar() @@ -115,6 +117,28 @@ "int_add": 1, "int_ge": 1, "guard_false": 1, "jump": 1, 'arraylen_gc': 1}) + def define_axissum(): + return """ + a = [[1, 2], [3, 4], [5, 6], [7, 8], [9, 10]] + b = sum(a,0) + b -> 1 + """ + + def test_axissum(self): + result = self.run("axissum") + assert result == 30 + # XXX note - the bridge here is fairly crucial and yet it's pretty + # bogus. We need to improve the situation somehow. 
+ self.check_simple_loop({'getinteriorfield_raw': 2, + 'setinteriorfield_raw': 1, + 'arraylen_gc': 1, + 'guard_true': 1, + 'int_lt': 1, + 'jump': 1, + 'float_add': 1, + 'int_add': 3, + }) + def define_prod(): return """ a = |30| @@ -193,9 +217,10 @@ # This is the sum of the ops for both loops, however if you remove the # optimization then you end up with 2 float_adds, so we can still be # sure it was optimized correctly. - self.check_resops({'setinteriorfield_raw': 4, 'getfield_gc': 26, + py.test.skip("too fragile") + self.check_resops({'setinteriorfield_raw': 4, 'getfield_gc': 22, 'getarrayitem_gc': 4, 'getarrayitem_gc_pure': 2, - 'getfield_gc_pure': 4, + 'getfield_gc_pure': 8, 'guard_class': 8, 'int_add': 8, 'float_mul': 2, 'jump': 2, 'int_ge': 4, 'getinteriorfield_raw': 4, 'float_add': 2, @@ -212,7 +237,8 @@ def test_ufunc(self): result = self.run("ufunc") assert result == -6 - self.check_simple_loop({"getinteriorfield_raw": 2, "float_add": 1, "float_neg": 1, + self.check_simple_loop({"getinteriorfield_raw": 2, "float_add": 1, + "float_neg": 1, "setinteriorfield_raw": 1, "int_add": 2, "int_ge": 1, "guard_false": 1, "jump": 1, 'arraylen_gc': 1}) @@ -261,6 +287,27 @@ 'jump': 1, 'arraylen_gc': 1}) + def define_take(): + return """ + a = |10| + b = take(a, [1, 1, 3, 2]) + b -> 2 + """ + + def test_take(self): + result = self.run("take") + assert result == 3 + self.check_simple_loop({'getinteriorfield_raw': 2, + 'cast_float_to_int': 1, + 'int_lt': 1, + 'int_ge': 2, + 'guard_false': 3, + 'setinteriorfield_raw': 1, + 'int_mul': 1, + 'int_add': 3, + 'jump': 1, + 'arraylen_gc': 2}) + def define_multidim(): return """ a = [[1, 2], [3, 4], [5, 6], [7, 8], [9, 10]] @@ -322,10 +369,10 @@ result = self.run("setslice") assert result == 11.0 self.check_trace_count(1) - self.check_simple_loop({'getinteriorfield_raw': 2, 'float_add' : 1, - 'setinteriorfield_raw': 1, 'int_add': 3, - 'int_lt': 1, 'guard_true': 1, 'jump': 1, - 'arraylen_gc': 3}) + self.check_simple_loop({'getinteriorfield_raw': 2, 'float_add': 1, + 'setinteriorfield_raw': 1, 'int_add': 2, + 'int_eq': 1, 'guard_false': 1, 'jump': 1, + 'arraylen_gc': 1}) def define_virtual_slice(): return """ @@ -339,10 +386,75 @@ result = self.run("virtual_slice") assert result == 4 self.check_trace_count(1) - self.check_simple_loop({'getinteriorfield_raw': 2, 'float_add' : 1, + self.check_simple_loop({'getinteriorfield_raw': 2, 'float_add': 1, 'setinteriorfield_raw': 1, 'int_add': 2, 'int_ge': 1, 'guard_false': 1, 'jump': 1, 'arraylen_gc': 1}) + def define_flat_iter(): + return ''' + a = |30| + b = flat(a) + c = b + a + c -> 3 + ''' + + def test_flat_iter(self): + result = self.run("flat_iter") + assert result == 6 + self.check_trace_count(1) + self.check_simple_loop({'getinteriorfield_raw': 2, 'float_add': 1, + 'setinteriorfield_raw': 1, 'int_add': 3, + 'int_ge': 1, 'guard_false': 1, + 'arraylen_gc': 1, 'jump': 1}) + + def define_flat_getitem(): + return ''' + a = |30| + b = flat(a) + b -> 4: -> 6 + ''' + + def test_flat_getitem(self): + result = self.run("flat_getitem") + assert result == 10.0 + self.check_trace_count(1) + self.check_simple_loop({'getinteriorfield_raw': 1, + 'setinteriorfield_raw': 1, + 'int_lt': 1, + 'int_ge': 1, + 'int_add': 3, + 'guard_true': 1, + 'guard_false': 1, + 'arraylen_gc': 2, + 'jump': 1}) + + def define_flat_setitem(): + return ''' + a = |30| + b = flat(a) + b[4:] = a->:26 + a -> 5 + ''' + + def test_flat_setitem(self): + result = self.run("flat_setitem") + assert result == 1.0 + self.check_trace_count(1) + # XXX not ideal, 
but hey, let's ignore it for now + self.check_simple_loop({'getinteriorfield_raw': 1, + 'setinteriorfield_raw': 1, + 'int_lt': 1, + 'int_gt': 1, + 'int_add': 4, + 'guard_true': 2, + 'arraylen_gc': 2, + 'jump': 1, + 'int_sub': 1, + # XXX bad part + 'int_and': 1, + 'int_mod': 1, + 'int_rshift': 1, + }) class TestNumpyOld(LLJitMixin): def setup_class(cls): @@ -377,4 +489,3 @@ result = self.meta_interp(f, [5], listops=True, backendopt=True) assert result == f(5) - diff --git a/pypy/module/micronumpy/tool/numready/__init__.py b/pypy/module/micronumpy/tool/numready/__init__.py new file mode 100644 diff --git a/pypy/module/micronumpy/tool/numready/__main__.py b/pypy/module/micronumpy/tool/numready/__main__.py new file mode 100644 --- /dev/null +++ b/pypy/module/micronumpy/tool/numready/__main__.py @@ -0,0 +1,6 @@ +import sys + +from .main import main + + +main(sys.argv) diff --git a/pypy/module/micronumpy/tool/numready/kinds.py b/pypy/module/micronumpy/tool/numready/kinds.py new file mode 100644 --- /dev/null +++ b/pypy/module/micronumpy/tool/numready/kinds.py @@ -0,0 +1,4 @@ +KINDS = { + "UNKNOWN": "U", + "TYPE": "T", +} diff --git a/pypy/module/micronumpy/tool/numready/main.py b/pypy/module/micronumpy/tool/numready/main.py new file mode 100644 --- /dev/null +++ b/pypy/module/micronumpy/tool/numready/main.py @@ -0,0 +1,123 @@ +#!/usr/bin/env python +# -*- coding: utf-8 -*- + +""" +This should be run under PyPy. +""" + +import os +import platform +import subprocess +import tempfile +import webbrowser +from collections import OrderedDict + +import jinja2 + +from .kinds import KINDS + + +class SearchableSet(object): + def __init__(self, items=()): + self._items = {} + for item in items: + self.add(item) + + def __iter__(self): + return iter(self._items) + + def __contains__(self, other): + return other in self._items + + def __getitem__(self, idx): + return self._items[idx] + + def add(self, item): + self._items[item] = item + + def __len__(self): + return len(self._items) + +class Item(object): + def __init__(self, name, kind, subitems=None): + self.name = name + self.kind = kind + self.subitems = subitems + + def __hash__(self): + return hash(self.name) + + def __eq__(self, other): + if isinstance(other, str): + return self.name == other + return self.name == other.name + + +class ItemStatus(object): + def __init__(self, name, pypy_exists): + self.name = name + self.cls = 'exists' if pypy_exists else '' + self.symbol = u"✔" if pypy_exists else u'✖' + + def __lt__(self, other): + return self.name < other.name + +def find_numpy_items(python, modname="numpy", attr=None): + args = [ + python, os.path.join(os.path.dirname(__file__), "search.py"), modname + ] + if attr is not None: + args.append(attr) + lines = subprocess.check_output(args).splitlines() + items = SearchableSet() + for line in lines: + kind, name = line.split(" : ", 1) + subitems = None + if kind == KINDS["TYPE"] and name in SPECIAL_NAMES and attr is None: + subitems = find_numpy_items(python, modname, name) + items.add(Item(name, kind, subitems)) + return items + +def split(lst): + SPLIT = 5 + lgt = len(lst) // SPLIT + 1 + l = [[] for i in range(lgt)] + for i in range(lgt): + for k in range(SPLIT): + if k * lgt + i < len(lst): + l[i].append(lst[k * lgt + i]) + return l + +SPECIAL_NAMES = ["ndarray", "dtype", "generic"] + +def main(argv): + cpy_items = find_numpy_items("/usr/bin/python") + pypy_items = find_numpy_items(argv[1], "numpypy") + all_items = [] + + msg = "{:d}/{:d} names".format(len(pypy_items), len(cpy_items)) + " " + msg += 
", ".join( + "{:d}/{:d} {} attributes".format( + len(pypy_items[name].subitems), len(cpy_items[name].subitems), name + ) + for name in SPECIAL_NAMES + ) + for item in cpy_items: + pypy_exists = item in pypy_items + if item.subitems: + for sub in item.subitems: + all_items.append( + ItemStatus(item.name + "." + sub.name, pypy_exists=pypy_exists and pypy_items[item].subitems and sub in pypy_items[item].subitems) + ) + all_items.append(ItemStatus(item.name, pypy_exists=item in pypy_items)) + env = jinja2.Environment( + loader=jinja2.FileSystemLoader(os.path.dirname(__file__)) + ) + html = env.get_template("page.html").render(all_items=split(sorted(all_items)), msg=msg) + if len(argv) > 2: + with open(argv[2], 'w') as f: + f.write(html.encode("utf-8")) + else: + with tempfile.NamedTemporaryFile(delete=False, suffix=".html") as f: + f.write(html.encode("utf-8")) + print "Saved in: %s" % f.name diff --git a/pypy/module/micronumpy/tool/numready/page.html b/pypy/module/micronumpy/tool/numready/page.html new file mode 100644 --- /dev/null +++ b/pypy/module/micronumpy/tool/numready/page.html @@ -0,0 +1,65 @@ + + + + NumPyPy Status + + + + +

[page.html body: the Jinja2/HTML markup was stripped by the list archive.
 Recoverable content: a page titled "NumPyPy Status", an "Overall: {{ msg }}"
 summary line, and a table whose header repeats a "PyPy" column for each of
 the five chunks and whose rows are built by nested
 {% for chunk in all_items %} / {% for item in chunk %} loops rendering
 {{ item.name }} and {{ item.symbol }} cells.]
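Since the committed markup cannot be recovered from the archive, here is a
minimal sketch of a template that consumes the same context that main.py
passes to page.html (all_items as row chunks of ItemStatus objects, msg as
the coverage summary string).  The concrete tags and attribute usage are
assumptions for illustration only, not the file added in this changeset:

    # Hypothetical stand-in for the lost page.html markup; it renders the
    # same variables that main.py supplies.  Requires jinja2.
    import jinja2

    PAGE = jinja2.Template(u"""
    <h1>NumPyPy Status</h1>
    <h3>Overall: {{ msg }}</h3>
    <table>
      {% for chunk in all_items %}
      <tr>
        {% for item in chunk %}
        <td>{{ item.name }}</td><td class="{{ item.cls }}">{{ item.symbol }}</td>
        {% endfor %}
      </tr>
      {% endfor %}
    </table>
    """)

    # item.cls and item.symbol come from ItemStatus as defined in main.py above.
    print PAGE.render(all_items=[], msg="0/0 names")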
+ + diff --git a/pypy/module/micronumpy/tool/numready/search.py b/pypy/module/micronumpy/tool/numready/search.py new file mode 100644 --- /dev/null +++ b/pypy/module/micronumpy/tool/numready/search.py @@ -0,0 +1,33 @@ +import sys +import types + +# Evil implicit relative import. +from kinds import KINDS + + +def main(argv): + if len(argv) == 2: + [_, modname] = argv + attr = None + elif len(argv) == 3: + [_, modname, attr] = argv + else: + sys.exit("Wrong number of args") + __import__(modname) + obj = sys.modules[modname] + + if attr is not None: + obj = getattr(obj, attr) + + for name in dir(obj): + if attr is None and name.startswith("_"): + continue + subobj = getattr(obj, name) + if isinstance(subobj, types.TypeType): + kind = KINDS["TYPE"] + else: + kind = KINDS["UNKNOWN"] + print kind, ":", name + +if __name__ == "__main__": + main(sys.argv) \ No newline at end of file diff --git a/pypy/module/micronumpy/types.py b/pypy/module/micronumpy/types.py --- a/pypy/module/micronumpy/types.py +++ b/pypy/module/micronumpy/types.py @@ -23,6 +23,16 @@ ) return dispatcher +def raw_unary_op(func): + specialize.argtype(1)(func) + @functools.wraps(func) + def dispatcher(self, v): + return func( + self, + self.for_computation(self.unbox(v)) + ) + return dispatcher + def simple_binary_op(func): specialize.argtype(1, 2)(func) @functools.wraps(func) @@ -94,6 +104,11 @@ width, storage, i, offset )) + def read_bool(self, storage, width, i, offset): + return bool(self.for_computation( + libffi.array_getitem(clibffi.cast_type_to_ffitype(self.T), + width, storage, i, offset))) + def store(self, storage, width, i, offset, box): value = self.unbox(box) libffi.array_setitem(clibffi.cast_type_to_ffitype(self.T), @@ -134,6 +149,14 @@ def abs(self, v): return abs(v) + @raw_unary_op + def isnan(self, v): + return False + + @raw_unary_op + def isinf(self, v): + return False + @raw_binary_op def eq(self, v1, v2): return v1 == v2 @@ -169,6 +192,7 @@ def min(self, v1, v2): return min(v1, v2) + class Bool(BaseType, Primitive): T = lltype.Bool BoxType = interp_boxes.W_BoolBox @@ -205,6 +229,18 @@ def default_fromstring(self, space): return self.box(False) + @simple_binary_op + def bitwise_and(self, v1, v2): + return v1 & v2 + + @simple_binary_op + def bitwise_or(self, v1, v2): + return v1 | v2 + + @simple_unary_op + def invert(self, v): + return ~v + class Integer(Primitive): _mixin_ = True @@ -253,6 +289,18 @@ assert v == 0 return 0 + @simple_binary_op + def bitwise_and(self, v1, v2): + return v1 & v2 + + @simple_binary_op + def bitwise_or(self, v1, v2): + return v1 | v2 + + @simple_unary_op + def invert(self, v): + return ~v + class Int8(BaseType, Integer): T = rffi.SIGNEDCHAR BoxType = interp_boxes.W_Int8Box @@ -374,6 +422,10 @@ return math.floor(v) @simple_unary_op + def ceil(self, v): + return math.ceil(v) + + @simple_unary_op def exp(self, v): try: return math.exp(v) @@ -427,6 +479,14 @@ except ValueError: return rfloat.NAN + @raw_unary_op + def isnan(self, v): + return rfloat.isnan(v) + + @raw_unary_op + def isinf(self, v): + return rfloat.isinf(v) + class Float32(BaseType, Float): T = rffi.FLOAT @@ -436,4 +496,4 @@ class Float64(BaseType, Float): T = rffi.DOUBLE BoxType = interp_boxes.W_Float64Box - format_code = "d" \ No newline at end of file + format_code = "d" diff --git a/pypy/module/posix/test/test_posix2.py b/pypy/module/posix/test/test_posix2.py --- a/pypy/module/posix/test/test_posix2.py +++ b/pypy/module/posix/test/test_posix2.py @@ -371,6 +371,8 @@ if hasattr(__import__(os.name), "forkpty"): def 
test_forkpty(self): import sys + if 'freebsd' in sys.platform: + skip("hangs indifinitly on FreeBSD (also on CPython).") os = self.posix childpid, master_fd = os.forkpty() assert isinstance(childpid, int) diff --git a/pypy/module/pypyjit/__init__.py b/pypy/module/pypyjit/__init__.py --- a/pypy/module/pypyjit/__init__.py +++ b/pypy/module/pypyjit/__init__.py @@ -7,16 +7,22 @@ interpleveldefs = { 'set_param': 'interp_jit.set_param', 'residual_call': 'interp_jit.residual_call', - 'set_compile_hook': 'interp_jit.set_compile_hook', - 'DebugMergePoint': 'interp_resop.W_DebugMergePoint', + 'set_compile_hook': 'interp_resop.set_compile_hook', + 'set_optimize_hook': 'interp_resop.set_optimize_hook', + 'set_abort_hook': 'interp_resop.set_abort_hook', + 'ResOperation': 'interp_resop.WrappedOp', + 'DebugMergePoint': 'interp_resop.DebugMergePoint', + 'Box': 'interp_resop.WrappedBox', } def setup_after_space_initialization(self): # force the __extend__ hacks to occur early from pypy.module.pypyjit.interp_jit import pypyjitdriver + from pypy.module.pypyjit.policy import pypy_hooks # add the 'defaults' attribute from pypy.rlib.jit import PARAMETERS space = self.space pypyjitdriver.space = space w_obj = space.wrap(PARAMETERS) space.setattr(space.wrap(self), space.wrap('defaults'), w_obj) + pypy_hooks.space = space diff --git a/pypy/module/pypyjit/interp_jit.py b/pypy/module/pypyjit/interp_jit.py --- a/pypy/module/pypyjit/interp_jit.py +++ b/pypy/module/pypyjit/interp_jit.py @@ -13,11 +13,7 @@ from pypy.interpreter.pycode import PyCode, CO_GENERATOR from pypy.interpreter.pyframe import PyFrame from pypy.interpreter.pyopcode import ExitFrame -from pypy.interpreter.gateway import unwrap_spec from opcode import opmap -from pypy.rlib.nonconst import NonConstant -from pypy.jit.metainterp.resoperation import rop -from pypy.module.pypyjit.interp_resop import debug_merge_point_from_boxes PyFrame._virtualizable2_ = ['last_instr', 'pycode', 'valuestackdepth', 'locals_stack_w[*]', @@ -51,72 +47,19 @@ def should_unroll_one_iteration(next_instr, is_being_profiled, bytecode): return (bytecode.co_flags & CO_GENERATOR) != 0 -def wrap_oplist(space, logops, operations): - list_w = [] - for op in operations: - if op.getopnum() == rop.DEBUG_MERGE_POINT: - list_w.append(space.wrap(debug_merge_point_from_boxes( - op.getarglist()))) - else: - list_w.append(space.wrap(logops.repr_of_resop(op))) - return list_w - class PyPyJitDriver(JitDriver): reds = ['frame', 'ec'] greens = ['next_instr', 'is_being_profiled', 'pycode'] virtualizables = ['frame'] - def on_compile(self, logger, looptoken, operations, type, next_instr, - is_being_profiled, ll_pycode): - from pypy.rpython.annlowlevel import cast_base_ptr_to_instance - - space = self.space - cache = space.fromcache(Cache) - if cache.in_recursion: - return - if space.is_true(cache.w_compile_hook): - logops = logger._make_log_operations() - list_w = wrap_oplist(space, logops, operations) - pycode = cast_base_ptr_to_instance(PyCode, ll_pycode) - cache.in_recursion = True - try: - space.call_function(cache.w_compile_hook, - space.wrap('main'), - space.wrap(type), - space.newtuple([pycode, - space.wrap(next_instr), - space.wrap(is_being_profiled)]), - space.newlist(list_w)) - except OperationError, e: - e.write_unraisable(space, "jit hook ", cache.w_compile_hook) - cache.in_recursion = False - - def on_compile_bridge(self, logger, orig_looptoken, operations, n): - space = self.space - cache = space.fromcache(Cache) - if cache.in_recursion: - return - if space.is_true(cache.w_compile_hook): 
- logops = logger._make_log_operations() - list_w = wrap_oplist(space, logops, operations) - cache.in_recursion = True - try: - space.call_function(cache.w_compile_hook, - space.wrap('main'), - space.wrap('bridge'), - space.wrap(n), - space.newlist(list_w)) - except OperationError, e: - e.write_unraisable(space, "jit hook ", cache.w_compile_hook) - cache.in_recursion = False - pypyjitdriver = PyPyJitDriver(get_printable_location = get_printable_location, get_jitcell_at = get_jitcell_at, set_jitcell_at = set_jitcell_at, confirm_enter_jit = confirm_enter_jit, can_never_inline = can_never_inline, should_unroll_one_iteration = - should_unroll_one_iteration) + should_unroll_one_iteration, + name='pypyjit') class __extend__(PyFrame): @@ -223,34 +166,3 @@ '''For testing. Invokes callable(...), but without letting the JIT follow the call.''' return space.call_args(w_callable, __args__) - -class Cache(object): - in_recursion = False - - def __init__(self, space): - self.w_compile_hook = space.w_None - -def set_compile_hook(space, w_hook): - """ set_compile_hook(hook) - - Set a compiling hook that will be called each time a loop is compiled. - The hook will be called with the following signature: - hook(merge_point_type, loop_type, greenkey or guard_number, operations) - - for now merge point type is always `main` - - loop_type can be either `loop` `entry_bridge` or `bridge` - in case loop is not `bridge`, greenkey will be a set of constants - for jit merge point. in case it's `main` it'll be a tuple - (code, offset, is_being_profiled) - From noreply at buildbot.pypy.org Wed Feb 1 13:48:45 2012 From: noreply at buildbot.pypy.org (l.diekmann) Date: Wed, 1 Feb 2012 13:48:45 +0100 (CET) Subject: [pypy-commit] pypy type-specialized-instances: merged default Message-ID: <20120201124845.D620182B67@wyvern.cs.uni-duesseldorf.de> Author: l.diekmann Branch: type-specialized-instances Changeset: r52012:a7f2f24549b4 Date: 2012-01-31 12:55 +0000 http://bitbucket.org/pypy/pypy/changeset/a7f2f24549b4/ Log: merged default diff --git a/lib_pypy/numpypy/core/__init__.py b/lib_pypy/numpypy/core/__init__.py --- a/lib_pypy/numpypy/core/__init__.py +++ b/lib_pypy/numpypy/core/__init__.py @@ -1,1 +1,2 @@ from .fromnumeric import * +from .numeric import * diff --git a/lib_pypy/numpypy/core/_methods.py b/lib_pypy/numpypy/core/_methods.py new file mode 100644 --- /dev/null +++ b/lib_pypy/numpypy/core/_methods.py @@ -0,0 +1,98 @@ +# Array methods which are called by the both the C-code for the method +# and the Python code for the NumPy-namespace function + +import _numpypy as mu +um = mu +#from numpypy.core import umath as um +from numpypy.core.numeric import asanyarray + +def _amax(a, axis=None, out=None, skipna=False, keepdims=False): + return um.maximum.reduce(a, axis=axis, + out=out, skipna=skipna, keepdims=keepdims) + +def _amin(a, axis=None, out=None, skipna=False, keepdims=False): + return um.minimum.reduce(a, axis=axis, + out=out, skipna=skipna, keepdims=keepdims) + +def _sum(a, axis=None, dtype=None, out=None, skipna=False, keepdims=False): + return um.add.reduce(a, axis=axis, dtype=dtype, + out=out, skipna=skipna, keepdims=keepdims) + +def _prod(a, axis=None, dtype=None, out=None, skipna=False, keepdims=False): + return um.multiply.reduce(a, axis=axis, dtype=dtype, + out=out, skipna=skipna, keepdims=keepdims) + +def _mean(a, axis=None, dtype=None, out=None, skipna=False, keepdims=False): + arr = asanyarray(a) + + # Upgrade bool, unsigned int, and int to float64 + if dtype is None and arr.dtype.kind in 
['b','u','i']: + ret = um.add.reduce(arr, axis=axis, dtype='f8', + out=out, skipna=skipna, keepdims=keepdims) + else: + ret = um.add.reduce(arr, axis=axis, dtype=dtype, + out=out, skipna=skipna, keepdims=keepdims) + rcount = mu.count_reduce_items(arr, axis=axis, + skipna=skipna, keepdims=keepdims) + if isinstance(ret, mu.ndarray): + ret = um.true_divide(ret, rcount, + casting='unsafe', subok=False) + else: + ret = ret / float(rcount) + return ret + +def _var(a, axis=None, dtype=None, out=None, ddof=0, + skipna=False, keepdims=False): + arr = asanyarray(a) + + # First compute the mean, saving 'rcount' for reuse later + if dtype is None and arr.dtype.kind in ['b','u','i']: + arrmean = um.add.reduce(arr, axis=axis, dtype='f8', + skipna=skipna, keepdims=True) + else: + arrmean = um.add.reduce(arr, axis=axis, dtype=dtype, + skipna=skipna, keepdims=True) + rcount = mu.count_reduce_items(arr, axis=axis, + skipna=skipna, keepdims=True) + if isinstance(arrmean, mu.ndarray): + arrmean = um.true_divide(arrmean, rcount, + casting='unsafe', subok=False) + else: + arrmean = arrmean / float(rcount) + + # arr - arrmean + x = arr - arrmean + + # (arr - arrmean) ** 2 + if arr.dtype.kind == 'c': + x = um.multiply(x, um.conjugate(x)).real + else: + x = um.multiply(x, x) + + # add.reduce((arr - arrmean) ** 2, axis) + ret = um.add.reduce(x, axis=axis, dtype=dtype, out=out, + skipna=skipna, keepdims=keepdims) + + # add.reduce((arr - arrmean) ** 2, axis) / (n - ddof) + if not keepdims and isinstance(rcount, mu.ndarray): + rcount = rcount.squeeze(axis=axis) + rcount -= ddof + if isinstance(ret, mu.ndarray): + ret = um.true_divide(ret, rcount, + casting='unsafe', subok=False) + else: + ret = ret / float(rcount) + + return ret + +def _std(a, axis=None, dtype=None, out=None, ddof=0, + skipna=False, keepdims=False): + ret = _var(a, axis=axis, dtype=dtype, out=out, ddof=ddof, + skipna=skipna, keepdims=keepdims) + + if isinstance(ret, mu.ndarray): + ret = um.sqrt(ret) + else: + ret = um.sqrt(ret) + + return ret diff --git a/lib_pypy/numpypy/core/arrayprint.py b/lib_pypy/numpypy/core/arrayprint.py new file mode 100644 --- /dev/null +++ b/lib_pypy/numpypy/core/arrayprint.py @@ -0,0 +1,789 @@ +"""Array printing function + +$Id: arrayprint.py,v 1.9 2005/09/13 13:58:44 teoliphant Exp $ +""" +__all__ = ["array2string", "set_printoptions", "get_printoptions"] +__docformat__ = 'restructuredtext' + +# +# Written by Konrad Hinsen +# last revision: 1996-3-13 +# modified by Jim Hugunin 1997-3-3 for repr's and str's (and other details) +# and by Perry Greenfield 2000-4-1 for numarray +# and by Travis Oliphant 2005-8-22 for numpy + +import sys +import _numpypy as _nt +from _numpypy import maximum, minimum, absolute, not_equal, isinf, isnan, isna +#from _numpypy import format_longfloat, datetime_as_string, datetime_data +from .fromnumeric import ravel + + +def product(x, y): return x*y + +_summaryEdgeItems = 3 # repr N leading and trailing items of each dimension +_summaryThreshold = 1000 # total items > triggers array summarization + +_float_output_precision = 8 +_float_output_suppress_small = False +_line_width = 75 +_nan_str = 'nan' +_inf_str = 'inf' +_na_str = 'NA' +_formatter = None # formatting function for array elements + +if sys.version_info[0] >= 3: + from functools import reduce + +def set_printoptions(precision=None, threshold=None, edgeitems=None, + linewidth=None, suppress=None, + nanstr=None, infstr=None, nastr=None, + formatter=None): + """ + Set printing options. 
+ + These options determine the way floating point numbers, arrays and + other NumPy objects are displayed. + + Parameters + ---------- + precision : int, optional + Number of digits of precision for floating point output (default 8). + threshold : int, optional + Total number of array elements which trigger summarization + rather than full repr (default 1000). + edgeitems : int, optional + Number of array items in summary at beginning and end of + each dimension (default 3). + linewidth : int, optional + The number of characters per line for the purpose of inserting + line breaks (default 75). + suppress : bool, optional + Whether or not suppress printing of small floating point values + using scientific notation (default False). + nanstr : str, optional + String representation of floating point not-a-number (default nan). + infstr : str, optional + String representation of floating point infinity (default inf). + nastr : str, optional + String representation of NA missing value (default NA). + formatter : dict of callables, optional + If not None, the keys should indicate the type(s) that the respective + formatting function applies to. Callables should return a string. + Types that are not specified (by their corresponding keys) are handled + by the default formatters. Individual types for which a formatter + can be set are:: + + - 'bool' + - 'int' + - 'timedelta' : a `numpy.timedelta64` + - 'datetime' : a `numpy.datetime64` + - 'float' + - 'longfloat' : 128-bit floats + - 'complexfloat' + - 'longcomplexfloat' : composed of two 128-bit floats + - 'numpy_str' : types `numpy.string_` and `numpy.unicode_` + - 'str' : all other strings + + Other keys that can be used to set a group of types at once are:: + + - 'all' : sets all types + - 'int_kind' : sets 'int' + - 'float_kind' : sets 'float' and 'longfloat' + - 'complex_kind' : sets 'complexfloat' and 'longcomplexfloat' + - 'str_kind' : sets 'str' and 'numpystr' + + See Also + -------- + get_printoptions, set_string_function, array2string + + Notes + ----- + `formatter` is always reset with a call to `set_printoptions`. + + Examples + -------- + Floating point precision can be set: + + >>> np.set_printoptions(precision=4) + >>> print np.array([1.123456789]) + [ 1.1235] + + Long arrays can be summarised: + + >>> np.set_printoptions(threshold=5) + >>> print np.arange(10) + [0 1 2 ..., 7 8 9] + + Small results can be suppressed: + + >>> eps = np.finfo(float).eps + >>> x = np.arange(4.) + >>> x**2 - (x + eps)**2 + array([ -4.9304e-32, -4.4409e-16, 0.0000e+00, 0.0000e+00]) + >>> np.set_printoptions(suppress=True) + >>> x**2 - (x + eps)**2 + array([-0., -0., 0., 0.]) + + A custom formatter can be used to display array elements as desired: + + >>> np.set_printoptions(formatter={'all':lambda x: 'int: '+str(-x)}) + >>> x = np.arange(3) + >>> x + array([int: 0, int: -1, int: -2]) + >>> np.set_printoptions() # formatter gets reset + >>> x + array([0, 1, 2]) + + To put back the default options, you can use: + + >>> np.set_printoptions(edgeitems=3,infstr='inf', + ... linewidth=75, nanstr='nan', precision=8, + ... 
suppress=False, threshold=1000, formatter=None) + """ + + global _summaryThreshold, _summaryEdgeItems, _float_output_precision, \ + _line_width, _float_output_suppress_small, _nan_str, _inf_str, \ + _na_str, _formatter + if linewidth is not None: + _line_width = linewidth + if threshold is not None: + _summaryThreshold = threshold + if edgeitems is not None: + _summaryEdgeItems = edgeitems + if precision is not None: + _float_output_precision = precision + if suppress is not None: + _float_output_suppress_small = not not suppress + if nanstr is not None: + _nan_str = nanstr + if infstr is not None: + _inf_str = infstr + if nastr is not None: + _na_str = nastr + _formatter = formatter + +def get_printoptions(): + """ + Return the current print options. + + Returns + ------- + print_opts : dict + Dictionary of current print options with keys + + - precision : int + - threshold : int + - edgeitems : int + - linewidth : int + - suppress : bool + - nanstr : str + - infstr : str + - formatter : dict of callables + + For a full description of these options, see `set_printoptions`. + + See Also + -------- + set_printoptions, set_string_function + + """ + d = dict(precision=_float_output_precision, + threshold=_summaryThreshold, + edgeitems=_summaryEdgeItems, + linewidth=_line_width, + suppress=_float_output_suppress_small, + nanstr=_nan_str, + infstr=_inf_str, + nastr=_na_str, + formatter=_formatter) + return d + +def _leading_trailing(a): + import numeric as _nc + if a.ndim == 1: + if len(a) > 2*_summaryEdgeItems: + b = _nc.concatenate((a[:_summaryEdgeItems], + a[-_summaryEdgeItems:])) + else: + b = a + else: + if len(a) > 2*_summaryEdgeItems: + l = [_leading_trailing(a[i]) for i in range( + min(len(a), _summaryEdgeItems))] + l.extend([_leading_trailing(a[-i]) for i in range( + min(len(a), _summaryEdgeItems),0,-1)]) + else: + l = [_leading_trailing(a[i]) for i in range(0, len(a))] + b = _nc.concatenate(tuple(l)) + return b + +def _boolFormatter(x): + if isna(x): + return str(x).replace('NA', _na_str, 1) + elif x: + return ' True' + else: + return 'False' + + +def repr_format(x): + if isna(x): + return str(x).replace('NA', _na_str, 1) + else: + return repr(x) + +def _array2string(a, max_line_width, precision, suppress_small, separator=' ', + prefix="", formatter=None): + + if max_line_width is None: + max_line_width = _line_width + + if precision is None: + precision = _float_output_precision + + if suppress_small is None: + suppress_small = _float_output_suppress_small + + if formatter is None: + formatter = _formatter + + if a.size > _summaryThreshold: + summary_insert = "..., " + data = _leading_trailing(a) + else: + summary_insert = "" + data = ravel(a) + + formatdict = {'bool' : _boolFormatter, + 'int' : IntegerFormat(data), + 'float' : FloatFormat(data, precision, suppress_small), + 'longfloat' : LongFloatFormat(precision), + #'complexfloat' : ComplexFormat(data, precision, + # suppress_small), + #'longcomplexfloat' : LongComplexFormat(precision), + #'datetime' : DatetimeFormat(data), + #'timedelta' : TimedeltaFormat(data), + 'numpystr' : repr_format, + 'str' : str} + + if formatter is not None: + fkeys = [k for k in formatter.keys() if formatter[k] is not None] + if 'all' in fkeys: + for key in formatdict.keys(): + formatdict[key] = formatter['all'] + if 'int_kind' in fkeys: + for key in ['int']: + formatdict[key] = formatter['int_kind'] + if 'float_kind' in fkeys: + for key in ['float', 'longfloat']: + formatdict[key] = formatter['float_kind'] + if 'complex_kind' in fkeys: + for key in 
['complexfloat', 'longcomplexfloat']: + formatdict[key] = formatter['complex_kind'] + if 'str_kind' in fkeys: + for key in ['numpystr', 'str']: + formatdict[key] = formatter['str_kind'] + for key in formatdict.keys(): + if key in fkeys: + formatdict[key] = formatter[key] + + try: + format_function = a._format + msg = "The `_format` attribute is deprecated in Numpy 2.0 and " \ + "will be removed in 2.1. Use the `formatter` kw instead." + import warnings + warnings.warn(msg, DeprecationWarning) + except AttributeError: + # find the right formatting function for the array + dtypeobj = a.dtype.type + if issubclass(dtypeobj, _nt.bool_): + format_function = formatdict['bool'] + elif issubclass(dtypeobj, _nt.integer): + #if issubclass(dtypeobj, _nt.timedelta64): + # format_function = formatdict['timedelta'] + #else: + format_function = formatdict['int'] + elif issubclass(dtypeobj, _nt.floating): + #if issubclass(dtypeobj, _nt.longfloat): + # format_function = formatdict['longfloat'] + #else: + format_function = formatdict['float'] + elif issubclass(dtypeobj, _nt.complexfloating): + if issubclass(dtypeobj, _nt.clongfloat): + format_function = formatdict['longcomplexfloat'] + else: + format_function = formatdict['complexfloat'] + elif issubclass(dtypeobj, (_nt.unicode_, _nt.string_)): + format_function = formatdict['numpystr'] + elif issubclass(dtypeobj, _nt.datetime64): + format_function = formatdict['datetime'] + else: + format_function = formatdict['str'] + + # skip over "[" + next_line_prefix = " " + # skip over array( + next_line_prefix += " "*len(prefix) + + lst = _formatArray(a, format_function, len(a.shape), max_line_width, + next_line_prefix, separator, + _summaryEdgeItems, summary_insert)[:-1] + return lst + +def _convert_arrays(obj): + import numeric as _nc + newtup = [] + for k in obj: + if isinstance(k, _nc.ndarray): + k = k.tolist() + elif isinstance(k, tuple): + k = _convert_arrays(k) + newtup.append(k) + return tuple(newtup) + + +def array2string(a, max_line_width=None, precision=None, + suppress_small=None, separator=' ', prefix="", + style=repr, formatter=None): + """ + Return a string representation of an array. + + Parameters + ---------- + a : ndarray + Input array. + max_line_width : int, optional + The maximum number of columns the string should span. Newline + characters splits the string appropriately after array elements. + precision : int, optional + Floating point precision. Default is the current printing + precision (usually 8), which can be altered using `set_printoptions`. + suppress_small : bool, optional + Represent very small numbers as zero. A number is "very small" if it + is smaller than the current printing precision. + separator : str, optional + Inserted between elements. + prefix : str, optional + An array is typically printed as:: + + 'prefix(' + array2string(a) + ')' + + The length of the prefix string is used to align the + output correctly. + style : function, optional + A function that accepts an ndarray and returns a string. Used only + when the shape of `a` is equal to ``()``, i.e. for 0-D arrays. + formatter : dict of callables, optional + If not None, the keys should indicate the type(s) that the respective + formatting function applies to. Callables should return a string. + Types that are not specified (by their corresponding keys) are handled + by the default formatters. 
Individual types for which a formatter + can be set are:: + + - 'bool' + - 'int' + - 'timedelta' : a `numpy.timedelta64` + - 'datetime' : a `numpy.datetime64` + - 'float' + - 'longfloat' : 128-bit floats + - 'complexfloat' + - 'longcomplexfloat' : composed of two 128-bit floats + - 'numpy_str' : types `numpy.string_` and `numpy.unicode_` + - 'str' : all other strings + + Other keys that can be used to set a group of types at once are:: + + - 'all' : sets all types + - 'int_kind' : sets 'int' + - 'float_kind' : sets 'float' and 'longfloat' + - 'complex_kind' : sets 'complexfloat' and 'longcomplexfloat' + - 'str_kind' : sets 'str' and 'numpystr' + + Returns + ------- + array_str : str + String representation of the array. + + Raises + ------ + TypeError : if a callable in `formatter` does not return a string. + + See Also + -------- + array_str, array_repr, set_printoptions, get_printoptions + + Notes + ----- + If a formatter is specified for a certain type, the `precision` keyword is + ignored for that type. + + Examples + -------- + >>> x = np.array([1e-16,1,2,3]) + >>> print np.array2string(x, precision=2, separator=',', + ... suppress_small=True) + [ 0., 1., 2., 3.] + + >>> x = np.arange(3.) + >>> np.array2string(x, formatter={'float_kind':lambda x: "%.2f" % x}) + '[0.00 1.00 2.00]' + + >>> x = np.arange(3) + >>> np.array2string(x, formatter={'int':lambda x: hex(x)}) + '[0x0L 0x1L 0x2L]' + + """ + + if a.shape == (): + x = a.item() + if isna(x): + lst = str(x).replace('NA', _na_str, 1) + else: + try: + lst = a._format(x) + msg = "The `_format` attribute is deprecated in Numpy " \ + "2.0 and will be removed in 2.1. Use the " \ + "`formatter` kw instead." + import warnings + warnings.warn(msg, DeprecationWarning) + except AttributeError: + if isinstance(x, tuple): + x = _convert_arrays(x) + lst = style(x) + elif reduce(product, a.shape) == 0: + # treat as a null array if any of shape elements == 0 + lst = "[]" + else: + lst = _array2string(a, max_line_width, precision, suppress_small, + separator, prefix, formatter=formatter) + return lst + +def _extendLine(s, line, word, max_line_len, next_line_prefix): + if len(line.rstrip()) + len(word.rstrip()) >= max_line_len: + s += line.rstrip() + "\n" + line = next_line_prefix + line += word + return s, line + + +def _formatArray(a, format_function, rank, max_line_len, + next_line_prefix, separator, edge_items, summary_insert): + """formatArray is designed for two modes of operation: + + 1. Full output + + 2. 
Summarized output + + """ + if rank == 0: + obj = a.item() + if isinstance(obj, tuple): + obj = _convert_arrays(obj) + return str(obj) + + if summary_insert and 2*edge_items < len(a): + leading_items, trailing_items, summary_insert1 = \ + edge_items, edge_items, summary_insert + else: + leading_items, trailing_items, summary_insert1 = 0, len(a), "" + + if rank == 1: + s = "" + line = next_line_prefix + for i in xrange(leading_items): + word = format_function(a[i]) + separator + s, line = _extendLine(s, line, word, max_line_len, next_line_prefix) + + if summary_insert1: + s, line = _extendLine(s, line, summary_insert1, max_line_len, next_line_prefix) + + for i in xrange(trailing_items, 1, -1): + word = format_function(a[-i]) + separator + s, line = _extendLine(s, line, word, max_line_len, next_line_prefix) + + word = format_function(a[-1]) + s, line = _extendLine(s, line, word, max_line_len, next_line_prefix) + s += line + "]\n" + s = '[' + s[len(next_line_prefix):] + else: + s = '[' + sep = separator.rstrip() + for i in xrange(leading_items): + if i > 0: + s += next_line_prefix + s += _formatArray(a[i], format_function, rank-1, max_line_len, + " " + next_line_prefix, separator, edge_items, + summary_insert) + s = s.rstrip() + sep.rstrip() + '\n'*max(rank-1,1) + + if summary_insert1: + s += next_line_prefix + summary_insert1 + "\n" + + for i in xrange(trailing_items, 1, -1): + if leading_items or i != trailing_items: + s += next_line_prefix + s += _formatArray(a[-i], format_function, rank-1, max_line_len, + " " + next_line_prefix, separator, edge_items, + summary_insert) + s = s.rstrip() + sep.rstrip() + '\n'*max(rank-1,1) + if leading_items or trailing_items > 1: + s += next_line_prefix + s += _formatArray(a[-1], format_function, rank-1, max_line_len, + " " + next_line_prefix, separator, edge_items, + summary_insert).rstrip()+']\n' + return s + +class FloatFormat(object): + def __init__(self, data, precision, suppress_small, sign=False): + self.precision = precision + self.suppress_small = suppress_small + self.sign = sign + self.exp_format = False + self.large_exponent = False + self.max_str_len = 0 + #try: + self.fillFormat(data) + #except (TypeError, NotImplementedError): + # if reduce(data) fails, this instance will not be called, just + # instantiated in formatdict. + #pass + + def fillFormat(self, data): + import numeric as _nc + # XXX pypy unimplemented + #errstate = _nc.seterr(all='ignore') + try: + special = isnan(data) | isinf(data) | isna(data) + special[isna(data)] = False + valid = not_equal(data, 0) & ~special + valid[isna(data)] = False + non_zero = absolute(data.compress(valid)) + if len(non_zero) == 0: + max_val = 0. + min_val = 0. 
+ else: + max_val = maximum.reduce(non_zero, skipna=True) + min_val = minimum.reduce(non_zero, skipna=True) + if max_val >= 1.e8: + self.exp_format = True + if not self.suppress_small and (min_val < 0.0001 + or max_val/min_val > 1000.): + self.exp_format = True + finally: + pass + # XXX pypy unimplemented + #_nc.seterr(**errstate) + + if self.exp_format: + self.large_exponent = 0 < min_val < 1e-99 or max_val >= 1e100 + self.max_str_len = 8 + self.precision + if self.large_exponent: + self.max_str_len += 1 + if self.sign: + format = '%+' + else: + format = '%' + format = format + '%d.%de' % (self.max_str_len, self.precision) + else: + format = '%%.%df' % (self.precision,) + if len(non_zero): + precision = max([_digits(x, self.precision, format) + for x in non_zero]) + else: + precision = 0 + precision = min(self.precision, precision) + self.max_str_len = len(str(int(max_val))) + precision + 2 + if special.any(): + self.max_str_len = max(self.max_str_len, + len(_nan_str), + len(_inf_str)+1, + len(_na_str)) + if self.sign: + format = '%#+' + else: + format = '%#' + format = format + '%d.%df' % (self.max_str_len, precision) + + self.special_fmt = '%%%ds' % (self.max_str_len,) + self.format = format + + def __call__(self, x, strip_zeros=True): + import numeric as _nc + #err = _nc.seterr(invalid='ignore') + try: + if isna(x): + return self.special_fmt % (str(x).replace('NA', _na_str, 1),) + elif isnan(x): + if self.sign: + return self.special_fmt % ('+' + _nan_str,) + else: + return self.special_fmt % (_nan_str,) + elif isinf(x): + if x > 0: + if self.sign: + return self.special_fmt % ('+' + _inf_str,) + else: + return self.special_fmt % (_inf_str,) + else: + return self.special_fmt % ('-' + _inf_str,) + finally: + pass + #_nc.seterr(**err) + + s = self.format % x + if self.large_exponent: + # 3-digit exponent + expsign = s[-3] + if expsign == '+' or expsign == '-': + s = s[1:-2] + '0' + s[-2:] + elif self.exp_format: + # 2-digit exponent + if s[-3] == '0': + s = ' ' + s[:-3] + s[-2:] + elif strip_zeros: + z = s.rstrip('0') + s = z + ' '*(len(s)-len(z)) + return s + + +def _digits(x, precision, format): + s = format % x + z = s.rstrip('0') + return precision - len(s) + len(z) + + +_MAXINT = sys.maxint +_MININT = -sys.maxint-1 +class IntegerFormat(object): + def __init__(self, data): + try: + max_str_len = max(len(str(maximum.reduce(data, skipna=True))), + len(str(minimum.reduce(data, skipna=True)))) + self.format = '%' + str(max_str_len) + 'd' + except TypeError, NotImplementedError: + # if reduce(data) fails, this instance will not be called, just + # instantiated in formatdict. 
+ pass + except ValueError: + # this occurs when everything is NA + pass + + def __call__(self, x): + if isna(x): + return str(x).replace('NA', _na_str, 1) + elif _MININT < x < _MAXINT: + return self.format % x + else: + return "%s" % x + +class LongFloatFormat(object): + # XXX Have to add something to determine the width to use a la FloatFormat + # Right now, things won't line up properly + def __init__(self, precision, sign=False): + self.precision = precision + self.sign = sign + + def __call__(self, x): + if isna(x): + return str(x).replace('NA', _na_str, 1) + elif isnan(x): + if self.sign: + return '+' + _nan_str + else: + return ' ' + _nan_str + elif isinf(x): + if x > 0: + if self.sign: + return '+' + _inf_str + else: + return ' ' + _inf_str + else: + return '-' + _inf_str + elif x >= 0: + if self.sign: + return '+' + format_longfloat(x, self.precision) + else: + return ' ' + format_longfloat(x, self.precision) + else: + return format_longfloat(x, self.precision) + + +class LongComplexFormat(object): + def __init__(self, precision): + self.real_format = LongFloatFormat(precision) + self.imag_format = LongFloatFormat(precision, sign=True) + + def __call__(self, x): + if isna(x): + return str(x).replace('NA', _na_str, 1) + else: + r = self.real_format(x.real) + i = self.imag_format(x.imag) + return r + i + 'j' + + +class ComplexFormat(object): + def __init__(self, x, precision, suppress_small): + self.real_format = FloatFormat(x.real, precision, suppress_small) + self.imag_format = FloatFormat(x.imag, precision, suppress_small, + sign=True) + + def __call__(self, x): + if isna(x): + return str(x).replace('NA', _na_str, 1) + else: + r = self.real_format(x.real, strip_zeros=False) + i = self.imag_format(x.imag, strip_zeros=False) + if not self.imag_format.exp_format: + z = i.rstrip('0') + i = z + 'j' + ' '*(len(i)-len(z)) + else: + i = i + 'j' + return r + i + +class DatetimeFormat(object): + def __init__(self, x, unit=None, + timezone=None, casting='same_kind'): + # Get the unit from the dtype + if unit is None: + if x.dtype.kind == 'M': + unit = datetime_data(x.dtype)[0] + else: + unit = 's' + + # If timezone is default, make it 'local' or 'UTC' based on the unit + if timezone is None: + # Date units -> UTC, time units -> local + if unit in ('Y', 'M', 'W', 'D'): + self.timezone = 'UTC' + else: + self.timezone = 'local' + else: + self.timezone = timezone + self.unit = unit + self.casting = casting + + def __call__(self, x): + if isna(x): + return str(x).replace('NA', _na_str, 1) + else: + return "'%s'" % datetime_as_string(x, + unit=self.unit, + timezone=self.timezone, + casting=self.casting) + +class TimedeltaFormat(object): + def __init__(self, data): + if data.dtype.kind == 'm': + v = data.view('i8') + max_str_len = max(len(str(maximum.reduce(v))), + len(str(minimum.reduce(v)))) + self.format = '%' + str(max_str_len) + 'd' + + def __call__(self, x): + if isna(x): + return str(x).replace('NA', _na_str, 1) + else: + return self.format % x.astype('i8') + diff --git a/lib_pypy/numpypy/core/fromnumeric.py b/lib_pypy/numpypy/core/fromnumeric.py --- a/lib_pypy/numpypy/core/fromnumeric.py +++ b/lib_pypy/numpypy/core/fromnumeric.py @@ -30,7 +30,7 @@ 'rank', 'size', 'around', 'round_', 'mean', 'std', 'var', 'squeeze', 'amax', 'amin', ] - + def take(a, indices, axis=None, out=None, mode='raise'): """ Take elements from an array along an axis. 
@@ -1054,8 +1054,9 @@ array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]) """ - raise NotImplementedError('Waiting on interp level method') - + if not hasattr(a, 'ravel'): + a = numpypy.array(a) + return a.ravel(order=order) def nonzero(a): """ @@ -2324,13 +2325,12 @@ 0.44999999925552653 """ - assert axis is None assert dtype is None assert out is None assert ddof == 0 if not hasattr(a, "std"): a = numpypy.array(a) - return a.std() + return a.std(axis=axis) def var(a, axis=None, dtype=None, out=None, ddof=0): @@ -2421,10 +2421,9 @@ 0.20250000000000001 """ - assert axis is None assert dtype is None assert out is None assert ddof == 0 if not hasattr(a, "var"): a = numpypy.array(a) - return a.var() + return a.var(axis=axis) diff --git a/lib_pypy/numpypy/core/numeric.py b/lib_pypy/numpypy/core/numeric.py new file mode 100644 --- /dev/null +++ b/lib_pypy/numpypy/core/numeric.py @@ -0,0 +1,311 @@ + +from _numpypy import array, ndarray, int_, float_ #, complex_# , longlong +from _numpypy import concatenate +import sys +import _numpypy as multiarray # ARGH +from numpypy.core.arrayprint import array2string + + + +def asanyarray(a, dtype=None, order=None, maskna=None, ownmaskna=False): + """ + Convert the input to an ndarray, but pass ndarray subclasses through. + + Parameters + ---------- + a : array_like + Input data, in any form that can be converted to an array. This + includes scalars, lists, lists of tuples, tuples, tuples of tuples, + tuples of lists, and ndarrays. + dtype : data-type, optional + By default, the data-type is inferred from the input data. + order : {'C', 'F'}, optional + Whether to use row-major ('C') or column-major ('F') memory + representation. Defaults to 'C'. + maskna : bool or None, optional + If this is set to True, it forces the array to have an NA mask. + If this is set to False, it forces the array to not have an NA + mask. + ownmaskna : bool, optional + If this is set to True, forces the array to have a mask which + it owns. + + Returns + ------- + out : ndarray or an ndarray subclass + Array interpretation of `a`. If `a` is an ndarray or a subclass + of ndarray, it is returned as-is and no copy is performed. + + See Also + -------- + asarray : Similar function which always returns ndarrays. + ascontiguousarray : Convert input to a contiguous array. + asfarray : Convert input to a floating point ndarray. + asfortranarray : Convert input to an ndarray with column-major + memory order. + asarray_chkfinite : Similar function which checks input for NaNs and + Infs. + fromiter : Create an array from an iterator. + fromfunction : Construct an array by executing a function on grid + positions. + + Examples + -------- + Convert a list into an array: + + >>> a = [1, 2] + >>> np.asanyarray(a) + array([1, 2]) + + Instances of `ndarray` subclasses are passed through as-is: + + >>> a = np.matrix([1, 2]) + >>> np.asanyarray(a) is a + True + + """ + return array(a, dtype, copy=False, order=order, subok=True, + maskna=maskna, ownmaskna=ownmaskna) + +def base_repr(number, base=2, padding=0): + """ + Return a string representation of a number in the given base system. + + Parameters + ---------- + number : int + The value to convert. Only positive values are handled. + base : int, optional + Convert `number` to the `base` number system. The valid range is 2-36, + the default value is 2. + padding : int, optional + Number of zeros padded on the left. Default is 0 (no padding). + + Returns + ------- + out : str + String representation of `number` in `base` system. 
+ + See Also + -------- + binary_repr : Faster version of `base_repr` for base 2. + + Examples + -------- + >>> np.base_repr(5) + '101' + >>> np.base_repr(6, 5) + '11' + >>> np.base_repr(7, base=5, padding=3) + '00012' + + >>> np.base_repr(10, base=16) + 'A' + >>> np.base_repr(32, base=16) + '20' + + """ + digits = '0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ' + if base > len(digits): + raise ValueError("Bases greater than 36 not handled in base_repr.") + + num = abs(number) + res = [] + while num: + res.append(digits[num % base]) + num //= base + if padding: + res.append('0' * padding) + if number < 0: + res.append('-') + return ''.join(reversed(res or '0')) + +_typelessdata = [int_, float_]#, complex_] +# XXX +#if issubclass(intc, int): +# _typelessdata.append(intc) + +#if issubclass(longlong, int): +# _typelessdata.append(longlong) + +def array_repr(arr, max_line_width=None, precision=None, suppress_small=None): + """ + Return the string representation of an array. + + Parameters + ---------- + arr : ndarray + Input array. + max_line_width : int, optional + The maximum number of columns the string should span. Newline + characters split the string appropriately after array elements. + precision : int, optional + Floating point precision. Default is the current printing precision + (usually 8), which can be altered using `set_printoptions`. + suppress_small : bool, optional + Represent very small numbers as zero, default is False. Very small + is defined by `precision`, if the precision is 8 then + numbers smaller than 5e-9 are represented as zero. + + Returns + ------- + string : str + The string representation of an array. + + See Also + -------- + array_str, array2string, set_printoptions + + Examples + -------- + >>> np.array_repr(np.array([1,2])) + 'array([1, 2])' + >>> np.array_repr(np.ma.array([0.])) + 'MaskedArray([ 0.])' + >>> np.array_repr(np.array([], np.int32)) + 'array([], dtype=int32)' + + >>> x = np.array([1e-6, 4e-7, 2, 3]) + >>> np.array_repr(x, precision=6, suppress_small=True) + 'array([ 0.000001, 0. , 2. , 3. ])' + + """ + if arr.size > 0 or arr.shape==(0,): + lst = array2string(arr, max_line_width, precision, suppress_small, + ', ', "array(") + else: # show zero-length shape unless it is (0,) + lst = "[], shape=%s" % (repr(arr.shape),) + + if arr.__class__ is not ndarray: + cName= arr.__class__.__name__ + else: + cName = "array" + + skipdtype = (arr.dtype.type in _typelessdata) and arr.size > 0 + + # XXX pypy lacks support + if 0 and arr.flags.maskna: + whichna = isna(arr) + # If nothing is NA, explicitly signal the NA-mask + if not any(whichna): + lst += ", maskna=True" + # If everything is NA, can't skip the dtype + if skipdtype and all(whichna): + skipdtype = False + + if skipdtype: + return "%s(%s)" % (cName, lst) + else: + typename = arr.dtype.name + # Quote typename in the output if it is "complex". + if typename and not (typename[0].isalpha() and typename.isalnum()): + typename = "'%s'" % typename + + lf = '' + if 0: # or issubclass(arr.dtype.type, flexible): + if arr.dtype.names: + typename = "%s" % str(arr.dtype) + else: + typename = "'%s'" % str(arr.dtype) + lf = '\n'+' '*len("array(") + return cName + "(%s, %sdtype=%s)" % (lst, lf, typename) + +def array_str(a, max_line_width=None, precision=None, suppress_small=None): + """ + Return a string representation of the data in an array. + + The data in the array is returned as a single string. 
This function is + similar to `array_repr`, the difference being that `array_repr` also + returns information on the kind of array and its data type. + + Parameters + ---------- + a : ndarray + Input array. + max_line_width : int, optional + Inserts newlines if text is longer than `max_line_width`. The + default is, indirectly, 75. + precision : int, optional + Floating point precision. Default is the current printing precision + (usually 8), which can be altered using `set_printoptions`. + suppress_small : bool, optional + Represent numbers "very close" to zero as zero; default is False. + Very close is defined by precision: if the precision is 8, e.g., + numbers smaller (in absolute value) than 5e-9 are represented as + zero. + + See Also + -------- + array2string, array_repr, set_printoptions + + Examples + -------- + >>> np.array_str(np.arange(3)) + '[0 1 2]' + + """ + return array2string(a, max_line_width, precision, suppress_small, ' ', "", str) + +def set_string_function(f, repr=True): + """ + Set a Python function to be used when pretty printing arrays. + + Parameters + ---------- + f : function or None + Function to be used to pretty print arrays. The function should expect + a single array argument and return a string of the representation of + the array. If None, the function is reset to the default NumPy function + to print arrays. + repr : bool, optional + If True (default), the function for pretty printing (``__repr__``) + is set, if False the function that returns the default string + representation (``__str__``) is set. + + See Also + -------- + set_printoptions, get_printoptions + + Examples + -------- + >>> def pprint(arr): + ... return 'HA! - What are you going to do now?' + ... + >>> np.set_string_function(pprint) + >>> a = np.arange(10) + >>> a + HA! - What are you going to do now? + >>> print a + [0 1 2 3 4 5 6 7 8 9] + + We can reset the function to the default: + + >>> np.set_string_function(None) + >>> a + array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) + + `repr` affects either pretty printing or normal string representation. + Note that ``__repr__`` is still affected by setting ``__str__`` + because the width of each array element in the returned string becomes + equal to the length of the result of ``__str__()``. 
+ + >>> x = np.arange(4) + >>> np.set_string_function(lambda x:'random', repr=False) + >>> x.__str__() + 'random' + >>> x.__repr__() + 'array([ 0, 1, 2, 3])' + + """ + if f is None: + if repr: + return multiarray.set_string_function(array_repr, 1) + else: + return multiarray.set_string_function(array_str, 0) + else: + return multiarray.set_string_function(f, repr) + +set_string_function(array_str, 0) +set_string_function(array_repr, 1) + +little_endian = (sys.byteorder == 'little') diff --git a/pypy/annotation/unaryop.py b/pypy/annotation/unaryop.py --- a/pypy/annotation/unaryop.py +++ b/pypy/annotation/unaryop.py @@ -440,6 +440,12 @@ def method_popitem(dct): return dct.getanyitem('items') + def method_pop(dct, s_key, s_dfl=None): + dct.dictdef.generalize_key(s_key) + if s_dfl is not None: + dct.dictdef.generalize_value(s_dfl) + return dct.dictdef.read_value() + def _can_only_throw(dic, *ignore): if dic1.dictdef.dictkey.custom_eq_hash: return None # r_dict: can throw anything diff --git a/pypy/jit/codewriter/assembler.py b/pypy/jit/codewriter/assembler.py --- a/pypy/jit/codewriter/assembler.py +++ b/pypy/jit/codewriter/assembler.py @@ -81,10 +81,15 @@ if not isinstance(value, (llmemory.AddressAsInt, ComputedIntSymbolic)): value = lltype.cast_primitive(lltype.Signed, value) - if allow_short and -128 <= value <= 127: - # emit the constant as a small integer - self.code.append(chr(value & 0xFF)) - return True + if allow_short: + try: + short_num = -128 <= value <= 127 + except TypeError: # "Symbolics cannot be compared!" + short_num = False + if short_num: + # emit the constant as a small integer + self.code.append(chr(value & 0xFF)) + return True constants = self.constants_i elif kind == 'ref': value = lltype.cast_opaque_ptr(llmemory.GCREF, value) diff --git a/pypy/module/cpyext/listobject.py b/pypy/module/cpyext/listobject.py --- a/pypy/module/cpyext/listobject.py +++ b/pypy/module/cpyext/listobject.py @@ -32,11 +32,10 @@ Py_DecRef(space, w_item) if not isinstance(w_list, W_ListObject): PyErr_BadInternalCall(space) - wrappeditems = w_list.getitems() - if index < 0 or index >= len(wrappeditems): + if index < 0 or index >= w_list.length(): raise OperationError(space.w_IndexError, space.wrap( "list assignment index out of range")) - wrappeditems[index] = w_item + w_list.setitem(index, w_item) return 0 @cpython_api([PyObject, Py_ssize_t], PyObject) @@ -74,7 +73,7 @@ """Macro form of PyList_Size() without error checking. 
""" assert isinstance(w_list, W_ListObject) - return len(w_list.getitems()) + return w_list.length() @cpython_api([PyObject], Py_ssize_t, error=-1) diff --git a/pypy/module/cpyext/test/test_listobject.py b/pypy/module/cpyext/test/test_listobject.py --- a/pypy/module/cpyext/test/test_listobject.py +++ b/pypy/module/cpyext/test/test_listobject.py @@ -128,3 +128,6 @@ module.setslice(l, None) assert l == [0, 4, 5] + l = [1, 2, 3] + module.setlistitem(l,0) + assert l == [None, 2, 3] diff --git a/pypy/module/micronumpy/__init__.py b/pypy/module/micronumpy/__init__.py --- a/pypy/module/micronumpy/__init__.py +++ b/pypy/module/micronumpy/__init__.py @@ -28,6 +28,11 @@ 'fromstring': 'interp_support.fromstring', 'flatiter': 'interp_numarray.W_FlatIterator', 'isna': 'interp_numarray.isna', + 'concatenate': 'interp_numarray.concatenate', + + 'set_string_function': 'appbridge.set_string_function', + + 'count_reduce_items': 'interp_numarray.count_reduce_items', 'True_': 'types.Bool.True', 'False_': 'types.Bool.False', @@ -67,6 +72,7 @@ ("copysign", "copysign"), ("cos", "cos"), ("divide", "divide"), + ("true_divide", "true_divide"), ("equal", "equal"), ("exp", "exp"), ("fabs", "fabs"), @@ -89,6 +95,9 @@ ("tan", "tan"), ('bitwise_and', 'bitwise_and'), ('bitwise_or', 'bitwise_or'), + ('bitwise_not', 'invert'), + ('isnan', 'isnan'), + ('isinf', 'isinf'), ]: interpleveldefs[exposed] = "interp_ufuncs.get(space).%s" % impl diff --git a/pypy/module/micronumpy/app_numpy.py b/pypy/module/micronumpy/app_numpy.py --- a/pypy/module/micronumpy/app_numpy.py +++ b/pypy/module/micronumpy/app_numpy.py @@ -59,7 +59,7 @@ if not hasattr(a, "max"): a = _numpypy.array(a) return a.max(axis) - + def arange(start, stop=None, step=1, dtype=None): '''arange([start], stop[, step], dtype=None) Generate values in the half-interval [start, stop). 
diff --git a/pypy/module/micronumpy/appbridge.py b/pypy/module/micronumpy/appbridge.py new file mode 100644 --- /dev/null +++ b/pypy/module/micronumpy/appbridge.py @@ -0,0 +1,38 @@ + +from pypy.rlib.objectmodel import specialize + +class AppBridgeCache(object): + w__var = None + w__std = None + w_module = None + w_array_repr = None + w_array_str = None + + def __init__(self, space): + self.w_import = space.appexec([], """(): + def f(): + import sys + __import__('numpypy.core._methods') + return sys.modules['numpypy.core._methods'] + return f + """) + + @specialize.arg(2) + def call_method(self, space, name, *args): + w_meth = getattr(self, 'w_' + name) + if w_meth is None: + if self.w_module is None: + self.w_module = space.call_function(self.w_import) + w_meth = space.getattr(self.w_module, space.wrap(name)) + setattr(self, 'w_' + name, w_meth) + return space.call_function(w_meth, *args) + +def set_string_function(space, w_f, w_repr): + cache = get_appbridge_cache(space) + if space.is_true(w_repr): + cache.w_array_repr = w_f + else: + cache.w_array_str = w_f + +def get_appbridge_cache(space): + return space.fromcache(AppBridgeCache) diff --git a/pypy/module/micronumpy/compile.py b/pypy/module/micronumpy/compile.py --- a/pypy/module/micronumpy/compile.py +++ b/pypy/module/micronumpy/compile.py @@ -32,13 +32,16 @@ class BadToken(Exception): pass -SINGLE_ARG_FUNCTIONS = ["sum", "prod", "max", "min", "all", "any", "unegative"] +SINGLE_ARG_FUNCTIONS = ["sum", "prod", "max", "min", "all", "any", + "unegative", "flat"] +TWO_ARG_FUNCTIONS = ['take'] class FakeSpace(object): w_ValueError = None w_TypeError = None w_IndexError = None w_OverflowError = None + w_NotImplementedError = None w_None = None w_bool = "bool" @@ -53,6 +56,9 @@ """NOT_RPYTHON""" self.fromcache = InternalSpaceCache(self).getorbuild + def _freeze_(self): + return True + def issequence_w(self, w_obj): return isinstance(w_obj, ListObject) or isinstance(w_obj, W_NDimArray) @@ -144,6 +150,9 @@ def allocate_instance(self, klass, w_subtype): return instantiate(klass) + def newtuple(self, list_w): + raise ValueError + def len_w(self, w_obj): if isinstance(w_obj, ListObject): return len(w_obj.items) @@ -371,12 +380,12 @@ for arg in self.args])) def execute(self, interp): + arr = self.args[0].execute(interp) + if not isinstance(arr, BaseArray): + raise ArgumentNotAnArray if self.name in SINGLE_ARG_FUNCTIONS: if len(self.args) != 1 and self.name != 'sum': raise ArgumentMismatch - arr = self.args[0].execute(interp) - if not isinstance(arr, BaseArray): - raise ArgumentNotAnArray if self.name == "sum": if len(self.args)>1: w_res = arr.descr_sum(interp.space, @@ -396,21 +405,31 @@ elif self.name == "unegative": neg = interp_ufuncs.get(interp.space).negative w_res = neg.call(interp.space, [arr]) + elif self.name == "flat": + w_res = arr.descr_get_flatiter(interp.space) else: assert False # unreachable code - if isinstance(w_res, BaseArray): - return w_res - if isinstance(w_res, FloatObject): - dtype = get_dtype_cache(interp.space).w_float64dtype - elif isinstance(w_res, BoolObject): - dtype = get_dtype_cache(interp.space).w_booldtype - elif isinstance(w_res, interp_boxes.W_GenericBox): - dtype = w_res.get_dtype(interp.space) + elif self.name in TWO_ARG_FUNCTIONS: + arg = self.args[1].execute(interp) + if not isinstance(arg, BaseArray): + raise ArgumentNotAnArray + if self.name == 'take': + w_res = arr.descr_take(interp.space, arg) else: - dtype = None - return scalar_w(interp.space, dtype, w_res) + assert False # unreachable else: raise 
WrongFunctionName + if isinstance(w_res, BaseArray): + return w_res + if isinstance(w_res, FloatObject): + dtype = get_dtype_cache(interp.space).w_float64dtype + elif isinstance(w_res, BoolObject): + dtype = get_dtype_cache(interp.space).w_booldtype + elif isinstance(w_res, interp_boxes.W_GenericBox): + dtype = w_res.get_dtype(interp.space) + else: + dtype = None + return scalar_w(interp.space, dtype, w_res) _REGEXES = [ ('-?[\d\.]+', 'number'), diff --git a/pypy/module/micronumpy/interp_boxes.py b/pypy/module/micronumpy/interp_boxes.py --- a/pypy/module/micronumpy/interp_boxes.py +++ b/pypy/module/micronumpy/interp_boxes.py @@ -8,6 +8,7 @@ from pypy.tool.sourcetools import func_with_new_name +MIXIN_32 = (int_typedef,) if LONG_BIT == 32 else () MIXIN_64 = (int_typedef,) if LONG_BIT == 64 else () def new_dtype_getter(name): @@ -28,6 +29,7 @@ def convert_to(self, dtype): return dtype.box(self.value) + class W_GenericBox(Wrappable): _attrs_ = () @@ -93,7 +95,7 @@ descr_neg = _unaryop_impl("negative") descr_abs = _unaryop_impl("absolute") - def descr_tolist(self, space): + def item(self, space): return self.get_dtype(space).itemtype.to_builtin_type(space, self) @@ -104,7 +106,8 @@ _attrs_ = () class W_IntegerBox(W_NumberBox): - pass + def int_w(self, space): + return space.int_w(self.descr_int(space)) class W_SignedIntegerBox(W_IntegerBox): pass @@ -187,7 +190,7 @@ __neg__ = interp2app(W_GenericBox.descr_neg), __abs__ = interp2app(W_GenericBox.descr_abs), - tolist = interp2app(W_GenericBox.descr_tolist), + tolist = interp2app(W_GenericBox.item), ) W_BoolBox.typedef = TypeDef("bool_", W_GenericBox.typedef, @@ -231,7 +234,7 @@ __new__ = interp2app(W_UInt16Box.descr__new__.im_func), ) -W_Int32Box.typedef = TypeDef("int32", W_SignedIntegerBox.typedef, +W_Int32Box.typedef = TypeDef("int32", (W_SignedIntegerBox.typedef,) + MIXIN_32, __module__ = "numpypy", __new__ = interp2app(W_Int32Box.descr__new__.im_func), ) @@ -241,24 +244,18 @@ __new__ = interp2app(W_UInt32Box.descr__new__.im_func), ) -if LONG_BIT == 32: - long_name = "int32" -elif LONG_BIT == 64: - long_name = "int64" -W_LongBox.typedef = TypeDef(long_name, (W_SignedIntegerBox.typedef, int_typedef,), - __module__ = "numpypy", - __new__ = interp2app(W_LongBox.descr__new__.im_func), -) - -W_ULongBox.typedef = TypeDef("u" + long_name, W_UnsignedIntegerBox.typedef, - __module__ = "numpypy", -) - W_Int64Box.typedef = TypeDef("int64", (W_SignedIntegerBox.typedef,) + MIXIN_64, __module__ = "numpypy", __new__ = interp2app(W_Int64Box.descr__new__.im_func), ) +if LONG_BIT == 32: + W_LongBox = W_Int32Box + W_ULongBox = W_UInt32Box +elif LONG_BIT == 64: + W_LongBox = W_Int64Box + W_ULongBox = W_UInt64Box + W_UInt64Box.typedef = TypeDef("uint64", W_UnsignedIntegerBox.typedef, __module__ = "numpypy", __new__ = interp2app(W_UInt64Box.descr__new__.im_func), @@ -283,3 +280,4 @@ __new__ = interp2app(W_Float64Box.descr__new__.im_func), ) + diff --git a/pypy/module/micronumpy/interp_dtype.py b/pypy/module/micronumpy/interp_dtype.py --- a/pypy/module/micronumpy/interp_dtype.py +++ b/pypy/module/micronumpy/interp_dtype.py @@ -89,8 +89,19 @@ def descr_get_shape(self, space): return space.newtuple([]) + def eq(self, space, w_other): + w_other = space.call_function(space.gettypefor(W_Dtype), w_other) + return space.is_w(self, w_other) + + def descr_eq(self, space, w_other): + return space.wrap(self.eq(space, w_other)) + + def descr_ne(self, space, w_other): + return space.wrap(not self.eq(space, w_other)) + def is_int_type(self): - return self.kind == SIGNEDLTR or 
self.kind == UNSIGNEDLTR + return (self.kind == SIGNEDLTR or self.kind == UNSIGNEDLTR or + self.kind == BOOLLTR) def is_bool_type(self): return self.kind == BOOLLTR @@ -101,12 +112,15 @@ __str__= interp2app(W_Dtype.descr_str), __repr__ = interp2app(W_Dtype.descr_repr), + __eq__ = interp2app(W_Dtype.descr_eq), + __ne__ = interp2app(W_Dtype.descr_ne), num = interp_attrproperty("num", cls=W_Dtype), kind = interp_attrproperty("kind", cls=W_Dtype), type = interp_attrproperty_w("w_box_type", cls=W_Dtype), itemsize = GetSetProperty(W_Dtype.descr_get_itemsize), shape = GetSetProperty(W_Dtype.descr_get_shape), + name = interp_attrproperty('name', cls=W_Dtype), ) W_Dtype.typedef.acceptable_as_base_class = False diff --git a/pypy/module/micronumpy/interp_iter.py b/pypy/module/micronumpy/interp_iter.py --- a/pypy/module/micronumpy/interp_iter.py +++ b/pypy/module/micronumpy/interp_iter.py @@ -4,6 +4,49 @@ from pypy.module.micronumpy.strides import calculate_broadcast_strides,\ calculate_slice_strides +""" This is a mini-tutorial on iterators, strides, and +memory layout. It assumes you are familiar with the terms, see +http://docs.scipy.org/doc/numpy/reference/arrays.ndarray.html +for a more gentle introduction. + +Given an array x: x.shape == [5,6], + +At which byte in x.data does the item x[3,4] begin? +if x.strides==[1,5]: + pData = x.pData + (x.start + 3*1 + 4*5)*sizeof(x.pData[0]) + pData = x.pData + (x.start + 23) * sizeof(x.pData[0]) +so the offset of the element is 23 elements after the first + +What is the next element in x after coordinates [3,4]? +if x.order =='C': + next == [3,5] => offset is 28 +if x.order =='F': + next == [4,4] => offset is 24 +so for the strides [1,5] x is 'F' contiguous +likewise, for the strides [6,1] x would be 'C' contiguous. + +Iterators have an internal representation of the current coordinates +(indices), the array, strides, and backstrides. A short digression to +explain backstrides: what is the coordinate and offset after [3,5] in +the example above? +if x.order == 'C': + next == [4,0] => offset is 4 +if x.order == 'F': + next == [4,5] => offset is 29 +Note that in 'C' order we stepped BACKWARDS 24 while 'overflowing' a +shape dimension + which is back 25 and forward 1, + which is x.strides[1] * (x.shape[1] - 1) + x.strides[0] +so if we precalculate the overflow backstride as +[x.strides[i] * (x.shape[i] - 1) for i in range(len(x.shape))] +we can go faster. +All the calculations happen in next() + +next_skip_x() tries to do the iteration for a number of steps at once, +but then we cannot guarantee that we only overflow one single shape +dimension, perhaps we could overflow several times in one big step.
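    # A runnable sketch of the arithmetic described in the tutorial above
    # (illustrative plain Python only, not the RPython iterator classes):
    shape, strides, start = [5, 6], [1, 5], 0

    def offset_of(indices):
        # element offset of x[indices] relative to x.pData
        return start + sum(i * s for i, s in zip(indices, strides))

    backstrides = [strides[i] * (shape[i] - 1) for i in range(len(shape))]

    assert offset_of([3, 4]) == 23
    assert offset_of([3, 5]) == 28
    assert offset_of([4, 0]) == 4             # C-order step after [3, 5]
    assert backstrides == [4, 25]
    # stepping from [3, 5] to [4, 0]: back backstrides[1], forward strides[0]
    assert offset_of([3, 5]) - backstrides[1] + strides[0] == offset_of([4, 0])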
+""" + # structures to describe slicing class Chunk(object): @@ -17,6 +60,10 @@ if self.step != 0: shape.append(self.lgt) + def __repr__(self): + return 'Chunk(%d, %d, %d, %d)' % (self.start, self.stop, self.step, + self.lgt) + class BaseTransform(object): pass @@ -51,9 +98,9 @@ self.size = size def next(self, shapelen): - return self._next(1) + return self.next_skip_x(1) - def _next(self, ofs): + def next_skip_x(self, ofs): arr = instantiate(ArrayIterator) arr.size = self.size arr.offset = self.offset + ofs @@ -61,7 +108,7 @@ def next_no_increase(self, shapelen): # a hack to make JIT believe this is always virtual - return self._next(0) + return self.next_skip_x(0) def done(self): return self.offset >= self.size @@ -133,6 +180,36 @@ res._done = done return res + @jit.unroll_safe + def next_skip_x(self, shapelen, step): + shapelen = jit.promote(len(self.res_shape)) + offset = self.offset + indices = [0] * shapelen + for i in range(shapelen): + indices[i] = self.indices[i] + done = False + for i in range(shapelen - 1, -1, -1): + if indices[i] < self.res_shape[i] - step: + indices[i] += step + offset += self.strides[i] * step + break + else: + remaining_step = (indices[i] + step) // self.res_shape[i] + this_i_step = step - remaining_step * self.res_shape[i] + offset += self.strides[i] * this_i_step + indices[i] = indices[i] + this_i_step + step = remaining_step + else: + done = True + res = instantiate(ViewIterator) + res.offset = offset + res.indices = indices + res.strides = self.strides + res.backstrides = self.backstrides + res.res_shape = self.res_shape + res._done = done + return res + def apply_transformations(self, arr, transformations): v = BaseIterator.apply_transformations(self, arr, transformations) if len(arr.shape) == 1: @@ -153,8 +230,13 @@ class AxisIterator(BaseIterator): def __init__(self, start, dim, shape, strides, backstrides): self.res_shape = shape[:] - self.strides = strides[:dim] + [0] + strides[dim:] - self.backstrides = backstrides[:dim] + [0] + backstrides[dim:] + if len(shape) == len(strides): + # keepdims = True + self.strides = strides[:dim] + [0] + strides[dim + 1:] + self.backstrides = backstrides[:dim] + [0] + backstrides[dim + 1:] + else: + self.strides = strides[:dim] + [0] + strides[dim:] + self.backstrides = backstrides[:dim] + [0] + backstrides[dim:] self.first_line = True self.indices = [0] * len(shape) self._done = False diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -1,16 +1,20 @@ from pypy.interpreter.baseobjspace import Wrappable from pypy.interpreter.error import OperationError, operationerrfmt -from pypy.interpreter.gateway import interp2app, NoneNotWrapped +from pypy.interpreter.gateway import interp2app, unwrap_spec from pypy.interpreter.typedef import TypeDef, GetSetProperty -from pypy.module.micronumpy import interp_ufuncs, interp_dtype, signature,\ - interp_boxes -from pypy.module.micronumpy.strides import calculate_slice_strides +from pypy.module.micronumpy import (interp_ufuncs, interp_dtype, interp_boxes, + signature, support) +from pypy.module.micronumpy.strides import (calculate_slice_strides, + shape_agreement, find_shape_and_elems, get_shape_from_iterable, + calc_new_strides, to_coords) from pypy.rlib import jit from pypy.rpython.lltypesystem import lltype, rffi from pypy.tool.sourcetools import func_with_new_name from pypy.rlib.rstring import StringBuilder -from 
pypy.module.micronumpy.interp_iter import ArrayIterator, OneDimIterator,\ - SkipLastAxisIterator, Chunk, ViewIterator +from pypy.module.micronumpy.interp_iter import (ArrayIterator, OneDimIterator, + SkipLastAxisIterator, Chunk, ViewIterator) +from pypy.module.micronumpy.appbridge import get_appbridge_cache + numpy_driver = jit.JitDriver( greens=['shapelen', 'sig'], @@ -58,176 +62,21 @@ reds=['idx', 'idxi', 'frame', 'arr'], name='numpy_filterset', ) - -def _find_shape_and_elems(space, w_iterable): - shape = [space.len_w(w_iterable)] - batch = space.listview(w_iterable) - while True: - new_batch = [] - if not batch: - return shape, [] - if not space.issequence_w(batch[0]): - for elem in batch: - if space.issequence_w(elem): - raise OperationError(space.w_ValueError, space.wrap( - "setting an array element with a sequence")) - return shape, batch - size = space.len_w(batch[0]) - for w_elem in batch: - if not space.issequence_w(w_elem) or space.len_w(w_elem) != size: - raise OperationError(space.w_ValueError, space.wrap( - "setting an array element with a sequence")) - new_batch += space.listview(w_elem) - shape.append(size) - batch = new_batch - -def shape_agreement(space, shape1, shape2): - ret = _shape_agreement(shape1, shape2) - if len(ret) < max(len(shape1), len(shape2)): - raise OperationError(space.w_ValueError, - space.wrap("operands could not be broadcast together with shapes (%s) (%s)" % ( - ",".join([str(x) for x in shape1]), - ",".join([str(x) for x in shape2]), - )) - ) - return ret - -def _shape_agreement(shape1, shape2): - """ Checks agreement about two shapes with respect to broadcasting. Returns - the resulting shape. - """ - lshift = 0 - rshift = 0 - if len(shape1) > len(shape2): - m = len(shape1) - n = len(shape2) - rshift = len(shape2) - len(shape1) - remainder = shape1 - else: - m = len(shape2) - n = len(shape1) - lshift = len(shape1) - len(shape2) - remainder = shape2 - endshape = [0] * m - indices1 = [True] * m - indices2 = [True] * m - for i in range(m - 1, m - n - 1, -1): - left = shape1[i + lshift] - right = shape2[i + rshift] - if left == right: - endshape[i] = left - elif left == 1: - endshape[i] = right - indices1[i + lshift] = False - elif right == 1: - endshape[i] = left - indices2[i + rshift] = False - else: - return [] - #raise OperationError(space.w_ValueError, space.wrap( - # "frames are not aligned")) - for i in range(m - n): - endshape[i] = remainder[i] - return endshape - -def get_shape_from_iterable(space, old_size, w_iterable): - new_size = 0 - new_shape = [] - if space.isinstance_w(w_iterable, space.w_int): - new_size = space.int_w(w_iterable) - if new_size < 0: - new_size = old_size - new_shape = [new_size] - else: - neg_dim = -1 - batch = space.listview(w_iterable) - new_size = 1 - if len(batch) < 1: - if old_size == 1: - # Scalars can have an empty size. - new_size = 1 - else: - new_size = 0 - new_shape = [] - i = 0 - for elem in batch: - s = space.int_w(elem) - if s < 0: - if neg_dim >= 0: - raise OperationError(space.w_ValueError, space.wrap( - "can only specify one unknown dimension")) - s = 1 - neg_dim = i - new_size *= s - new_shape.append(s) - i += 1 - if neg_dim >= 0: - new_shape[neg_dim] = old_size / new_size - new_size *= new_shape[neg_dim] - if new_size != old_size: - raise OperationError(space.w_ValueError, - space.wrap("total size of new array must be unchanged")) - return new_shape - -# Recalculating strides. Find the steps that the iteration does for each -# dimension, given the stride and shape. 
Then try to create a new stride that -# fits the new shape, using those steps. If there is a shape/step mismatch -# (meaning that the realignment of elements crosses from one step into another) -# return None so that the caller can raise an exception. -def calc_new_strides(new_shape, old_shape, old_strides, order): - # Return the proper strides for new_shape, or None if the mapping crosses - # stepping boundaries - - # Assumes that prod(old_shape) == prod(new_shape), len(old_shape) > 1, and - # len(new_shape) > 0 - steps = [] - last_step = 1 - oldI = 0 - new_strides = [] - if order == 'F': - for i in range(len(old_shape)): - steps.append(old_strides[i] / last_step) - last_step *= old_shape[i] - cur_step = steps[0] - n_new_elems_used = 1 - n_old_elems_to_use = old_shape[0] - for s in new_shape: - new_strides.append(cur_step * n_new_elems_used) - n_new_elems_used *= s - while n_new_elems_used > n_old_elems_to_use: - oldI += 1 - if steps[oldI] != steps[oldI - 1]: - return None - n_old_elems_to_use *= old_shape[oldI] - if n_new_elems_used == n_old_elems_to_use: - oldI += 1 - if oldI < len(old_shape): - cur_step = steps[oldI] - n_old_elems_to_use *= old_shape[oldI] - elif order == 'C': - for i in range(len(old_shape) - 1, -1, -1): - steps.insert(0, old_strides[i] / last_step) - last_step *= old_shape[i] - cur_step = steps[-1] - n_new_elems_used = 1 - oldI = -1 - n_old_elems_to_use = old_shape[-1] - for i in range(len(new_shape) - 1, -1, -1): - s = new_shape[i] - new_strides.insert(0, cur_step * n_new_elems_used) - n_new_elems_used *= s - while n_new_elems_used > n_old_elems_to_use: - oldI -= 1 - if steps[oldI] != steps[oldI + 1]: - return None - n_old_elems_to_use *= old_shape[oldI] - if n_new_elems_used == n_old_elems_to_use: - oldI -= 1 - if oldI >= -len(old_shape): - cur_step = steps[oldI] - n_old_elems_to_use *= old_shape[oldI] - assert len(new_strides) == len(new_shape) - return new_strides +take_driver = jit.JitDriver( + greens=['shapelen', 'sig'], + reds=['index_i', 'res_i', 'concr', 'index', 'res'], + name='numpy_take', +) +flat_get_driver = jit.JitDriver( + greens=['shapelen', 'base'], + reds=['step', 'ri', 'basei', 'res'], + name='numpy_flatget', +) +flat_set_driver = jit.JitDriver( + greens=['shapelen', 'base'], + reds=['step', 'ai', 'lngth', 'arr', 'basei'], + name='numpy_flatset', +) class BaseArray(Wrappable): _attrs_ = ["invalidates", "shape", 'size'] @@ -268,6 +117,7 @@ descr_pos = _unaryop_impl("positive") descr_neg = _unaryop_impl("negative") descr_abs = _unaryop_impl("absolute") + descr_invert = _unaryop_impl("invert") def _binop_impl(ufunc_name): def impl(self, space, w_other): @@ -310,9 +160,11 @@ def _reduce_ufunc_impl(ufunc_name, promote_to_largest=False): def impl(self, space, w_axis=None): if space.is_w(w_axis, space.w_None): - w_axis = space.wrap(-1) + axis = -1 + else: + axis = space.int_w(w_axis) return getattr(interp_ufuncs.get(space), ufunc_name).reduce(space, - self, True, promote_to_largest, w_axis) + self, True, promote_to_largest, axis) return func_with_new_name(impl, "reduce_%s_impl" % ufunc_name) descr_sum = _reduce_ufunc_impl("add") @@ -430,21 +282,22 @@ def descr_copy(self, space): return self.copy(space) - def descr_flatten(self, space): - return self.flatten(space) + def descr_flatten(self, space, w_order=None): + if isinstance(self, Scalar): + # scalars have no storage + return self.descr_reshape(space, [space.wrap(1)]) + concr = self.get_concrete() + w_res = concr.descr_ravel(space, w_order) + if w_res.storage == concr.storage: + return 
w_res.copy(space) + return w_res def copy(self, space): return self.get_concrete().copy(space) def empty_copy(self, space, dtype): shape = self.shape - size = 1 - for elem in shape: - size *= elem - return W_NDimArray(size, shape[:], dtype, 'C') - - def flatten(self, space): - return self.get_concrete().flatten(space) + return W_NDimArray(support.product(shape), shape[:], dtype, 'C') def descr_len(self, space): if len(self.shape): @@ -453,33 +306,32 @@ "len() of unsized object")) def descr_repr(self, space): - res = StringBuilder() - res.append("array(") - concrete = self.get_concrete_or_scalar() - dtype = concrete.find_dtype() - if not concrete.size: - res.append('[]') - if len(self.shape) > 1: - # An empty slice reports its shape - res.append(", shape=(") - self_shape = str(self.shape) - res.append_slice(str(self_shape), 1, len(self_shape) - 1) - res.append(')') - else: - concrete.to_str(space, 1, res, indent=' ') - if (dtype is not interp_dtype.get_dtype_cache(space).w_float64dtype and - not (dtype.kind == interp_dtype.SIGNEDLTR and - dtype.itemtype.get_element_size() == rffi.sizeof(lltype.Signed)) or - not self.size): - res.append(", dtype=" + dtype.name) - res.append(")") - return space.wrap(res.build()) + cache = get_appbridge_cache(space) + if cache.w_array_repr is None: + return space.wrap(self.dump_data()) + return space.call_function(cache.w_array_repr, self) + + def dump_data(self): + concr = self.get_concrete() + i = concr.create_iter() + first = True + s = StringBuilder() + s.append('array([') + while not i.done(): + if first: + first = False + else: + s.append(', ') + s.append(concr.dtype.itemtype.str_format(concr.getitem(i.offset))) + i = i.next(len(concr.shape)) + s.append('])') + return s.build() def descr_str(self, space): - ret = StringBuilder() - concrete = self.get_concrete_or_scalar() - concrete.to_str(space, 0, ret, ' ') - return space.wrap(ret.build()) + cache = get_appbridge_cache(space) + if cache.w_array_str is None: + return space.wrap(self.dump_data()) + return space.call_function(cache.w_array_str, self) @jit.unroll_safe def _single_item_result(self, space, w_idx): @@ -519,7 +371,7 @@ def count_all_true(self, arr): sig = arr.find_sig() - frame = sig.create_frame(self) + frame = sig.create_frame(arr) shapelen = len(arr.shape) s = 0 iter = None @@ -533,6 +385,9 @@ def getitem_filter(self, space, arr): concr = arr.get_concrete() + if concr.size > self.size: + raise OperationError(space.w_IndexError, + space.wrap("index out of range for array")) size = self.count_all_true(concr) res = W_NDimArray(size, [size], self.find_dtype()) ri = ArrayIterator(size) @@ -541,7 +396,7 @@ sig = self.find_sig() frame = sig.create_frame(self) v = None - while not frame.done(): + while not ri.done(): filter_driver.jit_merge_point(concr=concr, argi=argi, ri=ri, frame=frame, v=v, res=res, sig=sig, shapelen=shapelen, self=self) @@ -581,7 +436,7 @@ item = concrete._index_of_single_item(space, w_idx) return concrete.getitem(item) chunks = self._prepare_slice_args(space, w_idx) - return space.wrap(self.create_slice(chunks)) + return self.create_slice(chunks) def descr_setitem(self, space, w_idx, w_value): self.invalidated() @@ -635,8 +490,11 @@ w_shape = args_w[0] else: w_shape = space.newtuple(args_w) + new_shape = get_shape_from_iterable(space, self.size, w_shape) + return self.reshape(space, new_shape) + + def reshape(self, space, new_shape): concrete = self.get_concrete() - new_shape = get_shape_from_iterable(space, concrete.size, w_shape) # Since we got to here, prod(new_shape) == 
self.size new_strides = calc_new_strides(new_shape, concrete.shape, concrete.strides, concrete.order) @@ -647,7 +505,7 @@ for nd in range(ndims): new_backstrides[nd] = (new_shape[nd] - 1) * new_strides[nd] arr = W_NDimSlice(concrete.start, new_strides, new_backstrides, - new_shape, self) + new_shape, concrete) else: # Create copy with contiguous data arr = concrete.copy(space) @@ -657,7 +515,7 @@ def descr_tolist(self, space): if len(self.shape) == 0: assert isinstance(self, Scalar) - return self.value.descr_tolist(space) + return self.value.item(space) w_result = space.newlist([]) for i in range(self.shape[0]): space.call_method(w_result, "append", @@ -674,17 +532,13 @@ w_denom = space.wrap(self.shape[dim]) return space.div(self.descr_sum_promote(space, w_axis), w_denom) - def descr_var(self, space): - # var = mean((values - mean(values)) ** 2) - w_res = self.descr_sub(space, self.descr_mean(space, space.w_None)) - assert isinstance(w_res, BaseArray) - w_res = w_res.descr_pow(space, space.wrap(2)) - assert isinstance(w_res, BaseArray) - return w_res.descr_mean(space, space.w_None) + def descr_var(self, space, w_axis=None): + return get_appbridge_cache(space).call_method(space, '_var', self, + w_axis) - def descr_std(self, space): - # std(v) = sqrt(var(v)) - return interp_ufuncs.get(space).sqrt.call(space, [self.descr_var(space)]) + def descr_std(self, space, w_axis=None): + return get_appbridge_cache(space).call_method(space, '_std', self, + w_axis) def descr_fill(self, space, w_value): concr = self.get_concrete_or_scalar() @@ -717,6 +571,16 @@ return space.wrap(W_NDimSlice(concrete.start, strides, backstrides, shape, concrete)) + def descr_ravel(self, space, w_order=None): + if w_order is None or space.is_w(w_order, space.w_None): + order = 'C' + else: + order = space.str_w(w_order) + if order != 'C': + raise OperationError(space.w_NotImplementedError, space.wrap( + "order not implemented")) + return self.descr_reshape(space, [space.wrap(-1)]) + def descr_get_flatiter(self, space): return space.wrap(W_FlatIterator(self)) @@ -746,6 +610,60 @@ def supports_fast_slicing(self): return False + def descr_compress(self, space, w_obj, w_axis=None): + index = convert_to_array(space, w_obj) + return self.getitem_filter(space, index) + + def descr_take(self, space, w_obj, w_axis=None): + index = convert_to_array(space, w_obj).get_concrete() + concr = self.get_concrete() + if space.is_w(w_axis, space.w_None): + concr = concr.descr_ravel(space) + else: + raise OperationError(space.w_NotImplementedError, + space.wrap("axis unsupported for take")) + index_i = index.create_iter() + res_shape = index.shape + size = support.product(res_shape) + res = W_NDimArray(size, res_shape[:], concr.dtype, concr.order) + res_i = res.create_iter() + shapelen = len(index.shape) + sig = concr.find_sig() + while not index_i.done(): + take_driver.jit_merge_point(index_i=index_i, index=index, + res_i=res_i, concr=concr, + res=res, + shapelen=shapelen, sig=sig) + w_item = index._getitem_long(space, index_i.offset) + res.setitem(res_i.offset, concr.descr_getitem(space, w_item)) + index_i = index_i.next(shapelen) + res_i = res_i.next(shapelen) + return res + + def _getitem_long(self, space, offset): + # an obscure hack to not have longdtype inside a jitted loop + longdtype = interp_dtype.get_dtype_cache(space).w_longdtype + return self.getitem(offset).convert_to(longdtype).item( + space) + + def descr_item(self, space, w_arg=None): + if space.is_w(w_arg, space.w_None): + if not isinstance(self, Scalar): + raise 
OperationError(space.w_ValueError, space.wrap("index out of bounds")) + return self.value.item(space) + if space.isinstance_w(w_arg, space.w_int): + if isinstance(self, Scalar): + raise OperationError(space.w_ValueError, space.wrap("index out of bounds")) + concr = self.get_concrete() + i = to_coords(space, self.shape, concr.size, concr.order, w_arg)[0] + # XXX a bit around + item = self.descr_getitem(space, space.newtuple([space.wrap(x) + for x in i])) + assert isinstance(item, interp_boxes.W_GenericBox) + return item.item(space) + raise OperationError(space.w_NotImplementedError, space.wrap( + "non-int arg not supported")) + def convert_to_array(space, w_obj): if isinstance(w_obj, BaseArray): return w_obj @@ -771,22 +689,15 @@ self.shape = [] BaseArray.__init__(self, []) self.dtype = dtype + assert isinstance(value, interp_boxes.W_GenericBox) self.value = value def find_dtype(self): return self.dtype - def to_str(self, space, comma, builder, indent=' ', use_ellipsis=False): - builder.append(self.dtype.itemtype.str_format(self.value)) - def copy(self, space): return Scalar(self.dtype, self.value) - def flatten(self, space): - array = W_NDimArray(self.size, [self.size], self.dtype) - array.setitem(0, self.value) - return array - def fill(self, space, w_value): self.value = self.dtype.coerce(space, w_value) @@ -796,6 +707,11 @@ def get_concrete_or_scalar(self): return self + def reshape(self, space, new_shape): + size = support.product(new_shape) + res = W_NDimArray(size, new_shape, self.dtype, 'C') + res.setitem(0, self.value) + return res class VirtualArray(BaseArray): """ @@ -852,12 +768,9 @@ class VirtualSlice(VirtualArray): def __init__(self, child, chunks, shape): - size = 1 - for sh in shape: - size *= sh self.child = child self.chunks = chunks - self.size = size + self.size = support.product(shape) VirtualArray.__init__(self, 'slice', shape, child.find_dtype()) def create_sig(self): @@ -876,11 +789,12 @@ class Call1(VirtualArray): - def __init__(self, ufunc, name, shape, res_dtype, values): + def __init__(self, ufunc, name, shape, calc_dtype, res_dtype, values): VirtualArray.__init__(self, name, shape, res_dtype) self.values = values self.size = values.size self.ufunc = ufunc + self.calc_dtype = calc_dtype def _del_sources(self): self.values = None @@ -902,9 +816,7 @@ self.left = left self.right = right self.calc_dtype = calc_dtype - self.size = 1 - for s in self.shape: - self.size *= s + self.size = support.product(self.shape) def _del_sources(self): self.left = None @@ -1014,89 +926,6 @@ self.strides = strides self.backstrides = backstrides - def to_str(self, space, comma, builder, indent=' ', use_ellipsis=False): - '''Modifies builder with a representation of the array/slice - The items will be seperated by a comma if comma is 1 - Multidimensional arrays/slices will span a number of lines, - each line will begin with indent. 
- ''' - size = self.size - ccomma = ',' * comma - ncomma = ',' * (1 - comma) - dtype = self.find_dtype() - if size < 1: - builder.append('[]') - return - if size > 1000: - # Once this goes True it does not go back to False for recursive - # calls - use_ellipsis = True - ndims = len(self.shape) - if ndims == 0: - builder.append(dtype.itemtype.str_format(self.getitem(0))) - return - i = 0 - builder.append('[') - if ndims > 1: - if use_ellipsis: - for i in range(min(3, self.shape[0])): - if i > 0: - builder.append(ccomma + '\n') - if ndims >= 3: - builder.append('\n' + indent) - else: - builder.append(indent) - view = self.create_slice([Chunk(i, 0, 0, 1)]).get_concrete() - view.to_str(space, comma, builder, indent=indent + ' ', - use_ellipsis=use_ellipsis) - if i < self.shape[0] - 1: - builder.append(ccomma + '\n' + indent + '...' + ncomma) - i = self.shape[0] - 3 - else: - i += 1 - while i < self.shape[0]: - if i > 0: - builder.append(ccomma + '\n') - if ndims >= 3: - builder.append('\n' + indent) - else: - builder.append(indent) - # create_slice requires len(chunks) > 1 in order to reduce - # shape - view = self.create_slice([Chunk(i, 0, 0, 1)]).get_concrete() - view.to_str(space, comma, builder, indent=indent + ' ', - use_ellipsis=use_ellipsis) - i += 1 - elif ndims == 1: - spacer = ccomma + ' ' - item = self.start - # An iterator would be a nicer way to walk along the 1d array, but - # how do I reset it if printing ellipsis? iterators have no - # "set_offset()" - i = 0 - if use_ellipsis: - for i in range(min(3, self.shape[0])): - if i > 0: - builder.append(spacer) - builder.append(dtype.itemtype.str_format(self.getitem(item))) - item += self.strides[0] - if i < self.shape[0] - 1: - # Add a comma only if comma is False - this prevents adding - # two commas - builder.append(spacer + '...' + ncomma) - # Ugly, but can this be done with an iterator? 
- item = self.start + self.backstrides[0] - 2 * self.strides[0] - i = self.shape[0] - 3 - else: - i += 1 - while i < self.shape[0]: - if i > 0: - builder.append(spacer) - builder.append(dtype.itemtype.str_format(self.getitem(item))) - item += self.strides[0] - i += 1 - builder.append(']') - @jit.unroll_safe def _index_of_single_item(self, space, w_idx): if space.isinstance_w(w_idx, space.w_int): @@ -1169,15 +998,6 @@ array.setslice(space, self) return array - def flatten(self, space): - array = W_NDimArray(self.size, [self.size], self.dtype, self.order) - if self.supports_fast_slicing(): - array._fast_setslice(space, self) - else: - arr = SliceArray(array.shape, array.dtype, array, self, no_broadcast=True) - array._sliceloop(arr) - return array - def fill(self, space, w_value): self.setslice(space, scalar_w(space, self.dtype, w_value)) @@ -1192,13 +1012,10 @@ assert isinstance(parent, ConcreteArray) if isinstance(parent, W_NDimSlice): parent = parent.parent - size = 1 - for sh in shape: - size *= sh self.strides = strides self.backstrides = backstrides - ViewArray.__init__(self, size, shape, parent.dtype, parent.order, - parent) + ViewArray.__init__(self, support.product(shape), shape, parent.dtype, + parent.order, parent) self.start = start def create_iter(self): @@ -1274,27 +1091,42 @@ shape.append(item) return size, shape -def array(space, w_item_or_iterable, w_dtype=None, w_order=NoneNotWrapped): + at unwrap_spec(subok=bool, copy=bool, ownmaskna=bool) +def array(space, w_item_or_iterable, w_dtype=None, w_order=None, + subok=True, copy=True, w_maskna=None, ownmaskna=False): # find scalar + if w_maskna is None: + w_maskna = space.w_None + if (not subok or not space.is_w(w_maskna, space.w_None) or + ownmaskna): + raise OperationError(space.w_NotImplementedError, space.wrap("Unsupported args")) if not space.issequence_w(w_item_or_iterable): - if space.is_w(w_dtype, space.w_None): + if w_dtype is None or space.is_w(w_dtype, space.w_None): w_dtype = interp_ufuncs.find_dtype_for_scalar(space, w_item_or_iterable) dtype = space.interp_w(interp_dtype.W_Dtype, space.call_function(space.gettypefor(interp_dtype.W_Dtype), w_dtype) ) return scalar_w(space, dtype, w_item_or_iterable) - if w_order is None: + if space.is_w(w_order, space.w_None) or w_order is None: order = 'C' else: order = space.str_w(w_order) if order != 'C': # or order != 'F': raise operationerrfmt(space.w_ValueError, "Unknown order: %s", order) - shape, elems_w = _find_shape_and_elems(space, w_item_or_iterable) + if isinstance(w_item_or_iterable, BaseArray): + if (not space.is_w(w_dtype, space.w_None) and + w_item_or_iterable.find_dtype() is not w_dtype): + raise OperationError(space.w_NotImplementedError, space.wrap( + "copying over different dtypes unsupported")) + if copy: + return w_item_or_iterable.copy(space) + return w_item_or_iterable + shape, elems_w = find_shape_and_elems(space, w_item_or_iterable) # they come back in C order size = len(elems_w) - if space.is_w(w_dtype, space.w_None): + if w_dtype is None or space.is_w(w_dtype, space.w_None): w_dtype = None for w_elem in elems_w: w_dtype = interp_ufuncs.find_dtype_for_scalar(space, w_elem, @@ -1322,6 +1154,8 @@ space.call_function(space.gettypefor(interp_dtype.W_Dtype), w_dtype) ) size, shape = _find_size_and_shape(space, w_size) + if not shape: + return scalar_w(space, dtype, space.wrap(0)) return space.wrap(W_NDimArray(size, shape[:], dtype=dtype)) def ones(space, w_size, w_dtype=None): @@ -1330,17 +1164,66 @@ ) size, shape = _find_size_and_shape(space, w_size) + if not 
shape: + return scalar_w(space, dtype, space.wrap(1)) arr = W_NDimArray(size, shape[:], dtype=dtype) one = dtype.box(1) arr.dtype.fill(arr.storage, one, 0, size) return space.wrap(arr) + at unwrap_spec(arr=BaseArray, skipna=bool, keepdims=bool) +def count_reduce_items(space, arr, w_axis=None, skipna=False, keepdims=True): + if not keepdims: + raise OperationError(space.w_NotImplementedError, space.wrap("unsupported")) + if space.is_w(w_axis, space.w_None): + return space.wrap(support.product(arr.shape)) + if space.isinstance_w(w_axis, space.w_int): + return space.wrap(arr.shape[space.int_w(w_axis)]) + s = 1 + elems = space.fixedview(w_axis) + for w_elem in elems: + s *= arr.shape[space.int_w(w_elem)] + return space.wrap(s) + def dot(space, w_obj, w_obj2): w_arr = convert_to_array(space, w_obj) if isinstance(w_arr, Scalar): return convert_to_array(space, w_obj2).descr_dot(space, w_arr) return w_arr.descr_dot(space, w_obj2) + at unwrap_spec(axis=int) +def concatenate(space, w_args, axis=0): + args_w = space.listview(w_args) + if len(args_w) == 0: + raise OperationError(space.w_ValueError, space.wrap("concatenation of zero-length sequences is impossible")) + args_w = [convert_to_array(space, w_arg) for w_arg in args_w] + dtype = args_w[0].find_dtype() + shape = args_w[0].shape[:] + if len(shape) <= axis: + raise OperationError(space.w_ValueError, + space.wrap("bad axis argument")) + for arr in args_w[1:]: + dtype = interp_ufuncs.find_binop_result_dtype(space, dtype, + arr.find_dtype()) + if len(arr.shape) <= axis: + raise OperationError(space.w_ValueError, + space.wrap("bad axis argument")) + for i, axis_size in enumerate(arr.shape): + if len(arr.shape) != len(shape) or (i != axis and axis_size != shape[i]): + raise OperationError(space.w_ValueError, space.wrap( + "array dimensions must agree except for axis being concatenated")) + elif i == axis: + shape[i] += axis_size + res = W_NDimArray(support.product(shape), shape, dtype, 'C') + chunks = [Chunk(0, i, 1, i) for i in shape] + axis_start = 0 + for arr in args_w: + chunks[axis] = Chunk(axis_start, axis_start + arr.shape[axis], 1, + arr.shape[axis]) + res.create_slice(chunks).setslice(space, arr) + axis_start += arr.shape[axis] + return res + BaseArray.typedef = TypeDef( 'ndarray', __module__ = "numpypy", @@ -1378,6 +1261,7 @@ __and__ = interp2app(BaseArray.descr_and), __or__ = interp2app(BaseArray.descr_or), + __invert__ = interp2app(BaseArray.descr_invert), __repr__ = interp2app(BaseArray.descr_repr), __str__ = interp2app(BaseArray.descr_str), @@ -1388,9 +1272,11 @@ BaseArray.descr_set_shape), size = GetSetProperty(BaseArray.descr_get_size), ndim = GetSetProperty(BaseArray.descr_get_ndim), + item = interp2app(BaseArray.descr_item), T = GetSetProperty(BaseArray.descr_get_transpose), flat = GetSetProperty(BaseArray.descr_get_flatiter), + ravel = interp2app(BaseArray.descr_ravel), mean = interp2app(BaseArray.descr_mean), sum = interp2app(BaseArray.descr_sum), @@ -1411,6 +1297,8 @@ flatten = interp2app(BaseArray.descr_flatten), reshape = interp2app(BaseArray.descr_reshape), tolist = interp2app(BaseArray.descr_tolist), + take = interp2app(BaseArray.descr_take), + compress = interp2app(BaseArray.descr_compress), ) @@ -1419,31 +1307,122 @@ @jit.unroll_safe def __init__(self, arr): arr = arr.get_concrete() - size = 1 - for sh in arr.shape: - size *= sh self.strides = [arr.strides[-1]] self.backstrides = [arr.backstrides[-1]] - ViewArray.__init__(self, size, [size], arr.dtype, arr.order, - arr) self.shapelen = len(arr.shape) - self.iter = 
OneDimIterator(arr.start, self.strides[0], - self.shape[0]) + sig = arr.find_sig() + self.iter = sig.create_frame(arr).get_final_iter() + self.base = arr + self.index = 0 + ViewArray.__init__(self, arr.size, [arr.size], arr.dtype, arr.order, + arr) def descr_next(self, space): if self.iter.done(): raise OperationError(space.w_StopIteration, space.w_None) - result = self.getitem(self.iter.offset) + result = self.base.getitem(self.iter.offset) self.iter = self.iter.next(self.shapelen) + self.index += 1 return result def descr_iter(self): return self + def descr_index(self, space): + return space.wrap(self.index) + + def descr_coords(self, space): + coords, step, lngth = to_coords(space, self.base.shape, + self.base.size, self.base.order, + space.wrap(self.index)) + return space.newtuple([space.wrap(c) for c in coords]) + + @jit.unroll_safe + def descr_getitem(self, space, w_idx): + if not (space.isinstance_w(w_idx, space.w_int) or + space.isinstance_w(w_idx, space.w_slice)): + raise OperationError(space.w_IndexError, + space.wrap('unsupported iterator index')) + base = self.base + start, stop, step, lngth = space.decode_index4(w_idx, base.size) + # setslice would have been better, but flat[u:v] for arbitrary + # shapes of array a cannot be represented as a[x1:x2, y1:y2] + basei = ViewIterator(base.start, base.strides, + base.backstrides,base.shape) + shapelen = len(base.shape) + basei = basei.next_skip_x(shapelen, start) + if lngth <2: + return base.getitem(basei.offset) + ri = ArrayIterator(lngth) + res = W_NDimArray(lngth, [lngth], base.dtype, + base.order) + while not ri.done(): + flat_get_driver.jit_merge_point(shapelen=shapelen, + base=base, + basei=basei, + step=step, + res=res, + ri=ri, + ) + w_val = base.getitem(basei.offset) + res.setitem(ri.offset,w_val) + basei = basei.next_skip_x(shapelen, step) + ri = ri.next(shapelen) + return res + + def descr_setitem(self, space, w_idx, w_value): + if not (space.isinstance_w(w_idx, space.w_int) or + space.isinstance_w(w_idx, space.w_slice)): + raise OperationError(space.w_IndexError, + space.wrap('unsupported iterator index')) + base = self.base + start, stop, step, lngth = space.decode_index4(w_idx, base.size) + arr = convert_to_array(space, w_value) + ai = 0 + basei = ViewIterator(base.start, base.strides, + base.backstrides,base.shape) + shapelen = len(base.shape) + basei = basei.next_skip_x(shapelen, start) + while lngth > 0: + flat_set_driver.jit_merge_point(shapelen=shapelen, + basei=basei, + base=base, + step=step, + arr=arr, + ai=ai, + lngth=lngth, + ) + v = arr.getitem(ai).convert_to(base.dtype) + base.setitem(basei.offset, v) + # need to repeat input values until all assignments are done + ai = (ai + 1) % arr.size + basei = basei.next_skip_x(shapelen, step) + lngth -= 1 + + def create_sig(self): + return signature.FlatSignature(self.base.dtype) + + def descr_base(self, space): + return space.wrap(self.base) + W_FlatIterator.typedef = TypeDef( 'flatiter', + #__array__ = #MISSING + __iter__ = interp2app(W_FlatIterator.descr_iter), + __getitem__ = interp2app(W_FlatIterator.descr_getitem), + __setitem__ = interp2app(W_FlatIterator.descr_setitem), + __eq__ = interp2app(BaseArray.descr_eq), + __ne__ = interp2app(BaseArray.descr_ne), + __lt__ = interp2app(BaseArray.descr_lt), + __le__ = interp2app(BaseArray.descr_le), + __gt__ = interp2app(BaseArray.descr_gt), + __ge__ = interp2app(BaseArray.descr_ge), + #__sizeof__ #MISSING + base = GetSetProperty(W_FlatIterator.descr_base), + index = GetSetProperty(W_FlatIterator.descr_index), + coords = 
GetSetProperty(W_FlatIterator.descr_coords), next = interp2app(W_FlatIterator.descr_next), - __iter__ = interp2app(W_FlatIterator.descr_iter), + ) W_FlatIterator.acceptable_as_base_class = False diff --git a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py --- a/pypy/module/micronumpy/interp_ufuncs.py +++ b/pypy/module/micronumpy/interp_ufuncs.py @@ -1,14 +1,15 @@ from pypy.interpreter.baseobjspace import Wrappable from pypy.interpreter.error import OperationError, operationerrfmt -from pypy.interpreter.gateway import interp2app +from pypy.interpreter.gateway import interp2app, unwrap_spec, NoneNotWrapped from pypy.interpreter.typedef import TypeDef, GetSetProperty, interp_attrproperty -from pypy.module.micronumpy import interp_boxes, interp_dtype -from pypy.module.micronumpy.signature import ReduceSignature,\ - find_sig, new_printable_location, AxisReduceSignature, ScalarSignature +from pypy.module.micronumpy import interp_boxes, interp_dtype, support +from pypy.module.micronumpy.signature import (ReduceSignature, find_sig, + new_printable_location, AxisReduceSignature, ScalarSignature) from pypy.rlib import jit from pypy.rlib.rarithmetic import LONG_BIT from pypy.tool.sourcetools import func_with_new_name + reduce_driver = jit.JitDriver( greens=['shapelen', "sig"], virtualizables=["frame"], @@ -30,12 +31,14 @@ _attrs_ = ["name", "promote_to_float", "promote_bools", "identity"] _immutable_fields_ = ["promote_to_float", "promote_bools", "name"] - def __init__(self, name, promote_to_float, promote_bools, identity): + def __init__(self, name, promote_to_float, promote_bools, identity, + int_only): self.name = name self.promote_to_float = promote_to_float self.promote_bools = promote_bools self.identity = identity + self.int_only = int_only def descr_repr(self, space): return space.wrap("" % self.name) @@ -46,19 +49,31 @@ return self.identity def descr_call(self, space, __args__): - if __args__.keywords or len(__args__.arguments_w) < self.argcount: + args_w, kwds_w = __args__.unpack() + # it occurs to me that we don't support any datatypes that + # require casting, change it later when we do + kwds_w.pop('casting', None) + w_subok = kwds_w.pop('subok', None) + w_out = kwds_w.pop('out', space.w_None) + if ((w_subok is not None and space.is_true(w_subok)) or + not space.is_w(w_out, space.w_None)): + raise OperationError(space.w_NotImplementedError, + space.wrap("parameters unsupported")) + if kwds_w or len(args_w) < self.argcount: raise OperationError(space.w_ValueError, space.wrap("invalid number of arguments") ) - elif len(__args__.arguments_w) > self.argcount: + elif len(args_w) > self.argcount: # The extra arguments should actually be the output array, but we # don't support that yet. raise OperationError(space.w_TypeError, space.wrap("invalid number of arguments") ) - return self.call(space, __args__.arguments_w) + return self.call(space, args_w) - def descr_reduce(self, space, w_obj, w_dim=0): + @unwrap_spec(skipna=bool, keepdims=bool) + def descr_reduce(self, space, w_obj, w_axis=NoneNotWrapped, w_dtype=None, + skipna=False, keepdims=False, w_out=None): """reduce(...) 
reduce(a, axis=0) @@ -111,15 +126,24 @@ array([[ 1, 5], [ 9, 13]]) """ - return self.reduce(space, w_obj, False, False, w_dim) + if not space.is_w(w_out, space.w_None): + raise OperationError(space.w_NotImplementedError, space.wrap( + "out not supported")) + if w_axis is None: + axis = 0 + elif space.is_w(w_axis, space.w_None): + axis = -1 + else: + axis = space.int_w(w_axis) + return self.reduce(space, w_obj, False, False, axis, keepdims) - def reduce(self, space, w_obj, multidim, promote_to_largest, w_dim): + def reduce(self, space, w_obj, multidim, promote_to_largest, dim, + keepdims=False): from pypy.module.micronumpy.interp_numarray import convert_to_array, \ Scalar if self.argcount != 2: raise OperationError(space.w_ValueError, space.wrap("reduce only " "supported for binary functions")) - dim = space.int_w(w_dim) assert isinstance(self, W_Ufunc2) obj = convert_to_array(space, w_obj) if dim >= len(obj.shape): @@ -140,7 +164,7 @@ raise operationerrfmt(space.w_ValueError, "zero-size array to " "%s.reduce without identity", self.name) if shapelen > 1 and dim >= 0: - res = self.do_axis_reduce(obj, dtype, dim) + res = self.do_axis_reduce(obj, dtype, dim, keepdims) return space.wrap(res) scalarsig = ScalarSignature(dtype) sig = find_sig(ReduceSignature(self.func, self.name, dtype, @@ -154,15 +178,15 @@ value = self.identity.convert_to(dtype) return self.reduce_loop(shapelen, sig, frame, value, obj, dtype) - def do_axis_reduce(self, obj, dtype, dim): + def do_axis_reduce(self, obj, dtype, dim, keepdims): from pypy.module.micronumpy.interp_numarray import AxisReduce,\ W_NDimArray - - shape = obj.shape[0:dim] + obj.shape[dim + 1:len(obj.shape)] - size = 1 - for s in shape: - size *= s - result = W_NDimArray(size, shape, dtype) + + if keepdims: + shape = obj.shape[:dim] + [1] + obj.shape[dim + 1:] + else: + shape = obj.shape[:dim] + obj.shape[dim + 1:] + result = W_NDimArray(support.product(shape), shape, dtype) rightsig = obj.create_sig() # note - this is just a wrapper so signature can fetch # both left and right, nothing more, especially @@ -224,10 +248,12 @@ _immutable_fields_ = ["func", "name"] def __init__(self, func, name, promote_to_float=False, promote_bools=False, - identity=None): + identity=None, bool_result=False, int_only=False): - W_Ufunc.__init__(self, name, promote_to_float, promote_bools, identity) + W_Ufunc.__init__(self, name, promote_to_float, promote_bools, identity, + int_only) self.func = func + self.bool_result = bool_result def call(self, space, args_w): from pypy.module.micronumpy.interp_numarray import (Call1, @@ -235,15 +261,19 @@ [w_obj] = args_w w_obj = convert_to_array(space, w_obj) - res_dtype = find_unaryop_result_dtype(space, - w_obj.find_dtype(), - promote_to_float=self.promote_to_float, - promote_bools=self.promote_bools, - ) + calc_dtype = find_unaryop_result_dtype(space, + w_obj.find_dtype(), + promote_to_float=self.promote_to_float, + promote_bools=self.promote_bools) + if self.bool_result: + res_dtype = interp_dtype.get_dtype_cache(space).w_booldtype + else: + res_dtype = calc_dtype if isinstance(w_obj, Scalar): - return self.func(res_dtype, w_obj.value.convert_to(res_dtype)) + return space.wrap(self.func(calc_dtype, w_obj.value.convert_to(calc_dtype))) - w_res = Call1(self.func, self.name, w_obj.shape, res_dtype, w_obj) + w_res = Call1(self.func, self.name, w_obj.shape, calc_dtype, res_dtype, + w_obj) w_obj.add_invalidates(w_res) return w_res @@ -255,10 +285,10 @@ def __init__(self, func, name, promote_to_float=False, promote_bools=False, 
identity=None, comparison_func=False, int_only=False): - W_Ufunc.__init__(self, name, promote_to_float, promote_bools, identity) + W_Ufunc.__init__(self, name, promote_to_float, promote_bools, identity, + int_only) self.func = func self.comparison_func = comparison_func - self.int_only = int_only def call(self, space, args_w): from pypy.module.micronumpy.interp_numarray import (Call2, @@ -278,10 +308,10 @@ else: res_dtype = calc_dtype if isinstance(w_lhs, Scalar) and isinstance(w_rhs, Scalar): - return self.func(calc_dtype, + return space.wrap(self.func(calc_dtype, w_lhs.value.convert_to(calc_dtype), w_rhs.value.convert_to(calc_dtype) - ) + )) new_shape = shape_agreement(space, w_lhs.shape, w_rhs.shape) w_res = Call2(self.func, self.name, @@ -409,12 +439,16 @@ return interp_dtype.get_dtype_cache(space).w_float64dtype -def ufunc_dtype_caller(space, ufunc_name, op_name, argcount, comparison_func): +def ufunc_dtype_caller(space, ufunc_name, op_name, argcount, comparison_func, + bool_result): + dtype_cache = interp_dtype.get_dtype_cache(space) if argcount == 1: def impl(res_dtype, value): - return getattr(res_dtype.itemtype, op_name)(value) + res = getattr(res_dtype.itemtype, op_name)(value) + if bool_result: + return dtype_cache.w_booldtype.box(res) + return res elif argcount == 2: - dtype_cache = interp_dtype.get_dtype_cache(space) def impl(res_dtype, lvalue, rvalue): res = getattr(res_dtype.itemtype, op_name)(lvalue, rvalue) if comparison_func: @@ -433,7 +467,9 @@ 'int_only': True}), ("bitwise_or", "bitwise_or", 2, {"identity": 0, 'int_only': True}), + ("invert", "invert", 1, {"int_only": True}), ("divide", "div", 2, {"promote_bools": True}), + ("true_divide", "div", 2, {"promote_to_float": True}), ("mod", "mod", 2, {"promote_bools": True}), ("power", "pow", 2, {"promote_bools": True}), @@ -443,6 +479,8 @@ ("less_equal", "le", 2, {"comparison_func": True}), ("greater", "gt", 2, {"comparison_func": True}), ("greater_equal", "ge", 2, {"comparison_func": True}), + ("isnan", "isnan", 1, {"bool_result": True}), + ("isinf", "isinf", 1, {"bool_result": True}), ("maximum", "max", 2), ("minimum", "min", 2), @@ -485,6 +523,7 @@ func = ufunc_dtype_caller(space, ufunc_name, op_name, argcount, comparison_func=extra_kwargs.get("comparison_func", False), + bool_result=extra_kwargs.get("bool_result", False), ) if argcount == 1: ufunc = W_Ufunc1(func, ufunc_name, **extra_kwargs) @@ -494,3 +533,4 @@ def get(space): return space.fromcache(UfuncState) + diff --git a/pypy/module/micronumpy/signature.py b/pypy/module/micronumpy/signature.py --- a/pypy/module/micronumpy/signature.py +++ b/pypy/module/micronumpy/signature.py @@ -224,6 +224,18 @@ return ViewIterator(arr.start, arr.strides, arr.backstrides, arr.shape).apply_transformations(arr, transforms) +class FlatSignature(ViewSignature): + def debug_repr(self): + return 'Flat' + + def allocate_iter(self, arr, transforms): + from pypy.module.micronumpy.interp_numarray import W_FlatIterator + assert isinstance(arr, W_FlatIterator) + return ViewIterator(arr.base.start, arr.base.strides, + arr.base.backstrides, + arr.base.shape).apply_transformations(arr.base, + transforms) + class VirtualSliceSignature(Signature): def __init__(self, child): self.child = child @@ -293,8 +305,8 @@ def eval(self, frame, arr): from pypy.module.micronumpy.interp_numarray import Call1 assert isinstance(arr, Call1) - v = self.child.eval(frame, arr.values).convert_to(arr.res_dtype) - return self.unfunc(arr.res_dtype, v) + v = self.child.eval(frame, arr.values).convert_to(arr.calc_dtype) + 
return self.unfunc(arr.calc_dtype, v) class Call2(Signature): _immutable_fields_ = ['binfunc', 'name', 'calc_dtype', 'left', 'right'] diff --git a/pypy/module/micronumpy/strides.py b/pypy/module/micronumpy/strides.py --- a/pypy/module/micronumpy/strides.py +++ b/pypy/module/micronumpy/strides.py @@ -1,5 +1,5 @@ from pypy.rlib import jit - +from pypy.interpreter.error import OperationError @jit.look_inside_iff(lambda shape, start, strides, backstrides, chunks: jit.isconstant(len(chunks)) @@ -37,3 +37,196 @@ rstrides = [0] * (len(res_shape) - len(orig_shape)) + rstrides rbackstrides = [0] * (len(res_shape) - len(orig_shape)) + rbackstrides return rstrides, rbackstrides + +def find_shape_and_elems(space, w_iterable): + shape = [space.len_w(w_iterable)] + batch = space.listview(w_iterable) + while True: + new_batch = [] + if not batch: + return shape, [] + if not space.issequence_w(batch[0]): + for elem in batch: + if space.issequence_w(elem): + raise OperationError(space.w_ValueError, space.wrap( + "setting an array element with a sequence")) + return shape, batch + size = space.len_w(batch[0]) + for w_elem in batch: + if not space.issequence_w(w_elem) or space.len_w(w_elem) != size: + raise OperationError(space.w_ValueError, space.wrap( + "setting an array element with a sequence")) + new_batch += space.listview(w_elem) + shape.append(size) + batch = new_batch + +def to_coords(space, shape, size, order, w_item_or_slice): + '''Returns a start coord, step, and length. + ''' + start = lngth = step = 0 + if not (space.isinstance_w(w_item_or_slice, space.w_int) or + space.isinstance_w(w_item_or_slice, space.w_slice)): + raise OperationError(space.w_IndexError, + space.wrap('unsupported iterator index')) + + start, stop, step, lngth = space.decode_index4(w_item_or_slice, size) + + coords = [0] * len(shape) + i = start + if order == 'C': + for s in range(len(shape) -1, -1, -1): + coords[s] = i % shape[s] + i //= shape[s] + else: + for s in range(len(shape)): + coords[s] = i % shape[s] + i //= shape[s] + return coords, step, lngth + +def shape_agreement(space, shape1, shape2): + ret = _shape_agreement(shape1, shape2) + if len(ret) < max(len(shape1), len(shape2)): + raise OperationError(space.w_ValueError, + space.wrap("operands could not be broadcast together with shapes (%s) (%s)" % ( + ",".join([str(x) for x in shape1]), + ",".join([str(x) for x in shape2]), + )) + ) + return ret + +def _shape_agreement(shape1, shape2): + """ Checks agreement about two shapes with respect to broadcasting. Returns + the resulting shape. 
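    # A simplified, self-contained sketch of the broadcasting check performed
    # by _shape_agreement below (illustrative only, not the RPython code):
    # dimensions are matched from the right; each pair must be equal or
    # contain a 1.
    def broadcast_shape(shape1, shape2):
        # pad the shorter shape with leading 1s, then combine dimension-wise
        n = max(len(shape1), len(shape2))
        s1 = [1] * (n - len(shape1)) + list(shape1)
        s2 = [1] * (n - len(shape2)) + list(shape2)
        out = []
        for a, b in zip(s1, s2):
            if a == b or a == 1 or b == 1:
                out.append(max(a, b))
            else:
                return []   # mirrors the "not broadcastable" empty result
        return out

    assert broadcast_shape([5, 6], [6]) == [5, 6]
    assert broadcast_shape([5, 1], [1, 6]) == [5, 6]
    assert broadcast_shape([5, 6], [4]) == []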
+ """ + lshift = 0 + rshift = 0 + if len(shape1) > len(shape2): + m = len(shape1) + n = len(shape2) + rshift = len(shape2) - len(shape1) + remainder = shape1 + else: + m = len(shape2) + n = len(shape1) + lshift = len(shape1) - len(shape2) + remainder = shape2 + endshape = [0] * m + indices1 = [True] * m + indices2 = [True] * m + for i in range(m - 1, m - n - 1, -1): + left = shape1[i + lshift] + right = shape2[i + rshift] + if left == right: + endshape[i] = left + elif left == 1: + endshape[i] = right + indices1[i + lshift] = False + elif right == 1: + endshape[i] = left + indices2[i + rshift] = False + else: + return [] + #raise OperationError(space.w_ValueError, space.wrap( + # "frames are not aligned")) + for i in range(m - n): + endshape[i] = remainder[i] + return endshape + +def get_shape_from_iterable(space, old_size, w_iterable): + new_size = 0 + new_shape = [] + if space.isinstance_w(w_iterable, space.w_int): + new_size = space.int_w(w_iterable) + if new_size < 0: + new_size = old_size + new_shape = [new_size] + else: + neg_dim = -1 + batch = space.listview(w_iterable) + new_size = 1 + if len(batch) < 1: + if old_size == 1: + # Scalars can have an empty size. + new_size = 1 + else: + new_size = 0 + new_shape = [] + i = 0 + for elem in batch: + s = space.int_w(elem) + if s < 0: + if neg_dim >= 0: + raise OperationError(space.w_ValueError, space.wrap( + "can only specify one unknown dimension")) + s = 1 + neg_dim = i + new_size *= s + new_shape.append(s) + i += 1 + if neg_dim >= 0: + new_shape[neg_dim] = old_size / new_size + new_size *= new_shape[neg_dim] + if new_size != old_size: + raise OperationError(space.w_ValueError, + space.wrap("total size of new array must be unchanged")) + return new_shape + +# Recalculating strides. Find the steps that the iteration does for each +# dimension, given the stride and shape. Then try to create a new stride that +# fits the new shape, using those steps. If there is a shape/step mismatch +# (meaning that the realignment of elements crosses from one step into another) +# return None so that the caller can raise an exception. 
+def calc_new_strides(new_shape, old_shape, old_strides, order): + # Return the proper strides for new_shape, or None if the mapping crosses + # stepping boundaries + + # Assumes that prod(old_shape) == prod(new_shape), len(old_shape) > 1, and + # len(new_shape) > 0 + steps = [] + last_step = 1 + oldI = 0 + new_strides = [] + if order == 'F': + for i in range(len(old_shape)): + steps.append(old_strides[i] / last_step) + last_step *= old_shape[i] + cur_step = steps[0] + n_new_elems_used = 1 + n_old_elems_to_use = old_shape[0] + for s in new_shape: + new_strides.append(cur_step * n_new_elems_used) + n_new_elems_used *= s + while n_new_elems_used > n_old_elems_to_use: + oldI += 1 + if steps[oldI] != steps[oldI - 1]: + return None + n_old_elems_to_use *= old_shape[oldI] + if n_new_elems_used == n_old_elems_to_use: + oldI += 1 + if oldI < len(old_shape): + cur_step = steps[oldI] + n_old_elems_to_use *= old_shape[oldI] + elif order == 'C': + for i in range(len(old_shape) - 1, -1, -1): + steps.insert(0, old_strides[i] / last_step) + last_step *= old_shape[i] + cur_step = steps[-1] + n_new_elems_used = 1 + oldI = -1 + n_old_elems_to_use = old_shape[-1] + for i in range(len(new_shape) - 1, -1, -1): + s = new_shape[i] + new_strides.insert(0, cur_step * n_new_elems_used) + n_new_elems_used *= s + while n_new_elems_used > n_old_elems_to_use: + oldI -= 1 + if steps[oldI] != steps[oldI + 1]: + return None + n_old_elems_to_use *= old_shape[oldI] + if n_new_elems_used == n_old_elems_to_use: + oldI -= 1 + if oldI >= -len(old_shape): + cur_step = steps[oldI] + n_old_elems_to_use *= old_shape[oldI] + assert len(new_strides) == len(new_shape) + return new_strides diff --git a/pypy/module/micronumpy/support.py b/pypy/module/micronumpy/support.py new file mode 100644 --- /dev/null +++ b/pypy/module/micronumpy/support.py @@ -0,0 +1,5 @@ +def product(s): + i = 1 + for x in s: + i *= x + return i \ No newline at end of file diff --git a/pypy/module/micronumpy/test/test_base.py b/pypy/module/micronumpy/test/test_base.py --- a/pypy/module/micronumpy/test/test_base.py +++ b/pypy/module/micronumpy/test/test_base.py @@ -3,9 +3,17 @@ from pypy.module.micronumpy.interp_numarray import W_NDimArray, Scalar from pypy.module.micronumpy.interp_ufuncs import (find_binop_result_dtype, find_unaryop_result_dtype) +from pypy.module.micronumpy.interp_boxes import W_Float64Box +from pypy.conftest import option +import sys class BaseNumpyAppTest(object): def setup_class(cls): + if option.runappdirect: + if '__pypy__' not in sys.builtin_module_names: + import numpy + sys.modules['numpypy'] = numpy + sys.modules['_numpypy'] = numpy cls.space = gettestobjspace(usemodules=['micronumpy']) class TestSignature(object): @@ -16,7 +24,7 @@ ar = W_NDimArray(10, [10], dtype=float64_dtype) ar2 = W_NDimArray(10, [10], dtype=float64_dtype) v1 = ar.descr_add(space, ar) - v2 = ar.descr_add(space, Scalar(float64_dtype, 2.0)) + v2 = ar.descr_add(space, Scalar(float64_dtype, W_Float64Box(2.0))) sig1 = v1.find_sig() sig2 = v2.find_sig() assert v1 is not v2 @@ -26,7 +34,7 @@ sig1b = ar2.descr_add(space, ar).find_sig() assert sig1b.left.array_no != sig1b.right.array_no assert sig1b is not sig1 - v3 = ar.descr_add(space, Scalar(float64_dtype, 1.0)) + v3 = ar.descr_add(space, Scalar(float64_dtype, W_Float64Box(1.0))) sig3 = v3.find_sig() assert sig2 is sig3 v4 = ar.descr_add(space, ar) diff --git a/pypy/module/micronumpy/test/test_compile.py b/pypy/module/micronumpy/test/test_compile.py --- a/pypy/module/micronumpy/test/test_compile.py +++ 
b/pypy/module/micronumpy/test/test_compile.py @@ -245,3 +245,19 @@ a -> 3 """) assert interp.results[0].value == 11 + + def test_flat_iter(self): + interp = self.run(''' + a = |30| + b = flat(a) + b -> 3 + ''') + assert interp.results[0].value == 3 + + def test_take(self): + interp = self.run(""" + a = |10| + b = take(a, [1, 1, 3, 2]) + b -> 2 + """) + assert interp.results[0].value == 3 diff --git a/pypy/module/micronumpy/test/test_dtypes.py b/pypy/module/micronumpy/test/test_dtypes.py --- a/pypy/module/micronumpy/test/test_dtypes.py +++ b/pypy/module/micronumpy/test/test_dtypes.py @@ -11,8 +11,17 @@ assert dtype('int8').num == 1 assert dtype(d) is d assert dtype(None) is dtype(float) + assert dtype('int8').name == 'int8' raises(TypeError, dtype, 1042) + def test_dtype_eq(self): + from _numpypy import dtype + + assert dtype("int8") == "int8" + assert "int8" == dtype("int8") + raises(TypeError, lambda: dtype("int8") == 3) + assert dtype(bool) == bool + def test_dtype_with_types(self): from _numpypy import dtype @@ -30,7 +39,7 @@ def test_repr_str(self): from _numpypy import dtype - assert repr(dtype) == "" + assert '.dtype' in repr(dtype) d = dtype('?') assert repr(d) == "dtype('bool')" assert str(d) == "bool" @@ -376,3 +385,19 @@ b = X(10) assert type(b) is X assert b.m() == 12 + + def test_long_as_index(self): + skip("waiting for removal of multimethods of __index__") + from _numpypy import int_ + assert (1, 2, 3)[int_(1)] == 2 + + def test_int(self): + import sys + from _numpypy import int32, int64, int_ + assert issubclass(int_, int) + if sys.maxint == (1<<31) - 1: + assert issubclass(int32, int) + assert int_ is int32 + else: + assert issubclass(int64, int) + assert int_ is int64 diff --git a/pypy/module/micronumpy/test/test_iter.py b/pypy/module/micronumpy/test/test_iter.py new file mode 100644 --- /dev/null +++ b/pypy/module/micronumpy/test/test_iter.py @@ -0,0 +1,88 @@ +from pypy.module.micronumpy.interp_iter import ViewIterator + +class TestIterDirect(object): + def test_C_viewiterator(self): + #Let's get started, simple iteration in C order with + #contiguous layout => strides[-1] is 1 + start = 0 + shape = [3, 5] + strides = [5, 1] + backstrides = [x * (y - 1) for x,y in zip(strides, shape)] + assert backstrides == [10, 4] + i = ViewIterator(start, strides, backstrides, shape) + i = i.next(2) + i = i.next(2) + i = i.next(2) + assert i.offset == 3 + assert not i.done() + assert i.indices == [0,3] + #cause a dimension overflow + i = i.next(2) + i = i.next(2) + assert i.offset == 5 + assert i.indices == [1,0] + + #Now what happens if the array is transposed? 
strides[-1] != 1 + # therefore layout is non-contiguous + strides = [1, 3] + backstrides = [x * (y - 1) for x,y in zip(strides, shape)] + assert backstrides == [2, 12] + i = ViewIterator(start, strides, backstrides, shape) + i = i.next(2) + i = i.next(2) + i = i.next(2) + assert i.offset == 9 + assert not i.done() + assert i.indices == [0,3] + #cause a dimension overflow + i = i.next(2) + i = i.next(2) + assert i.offset == 1 + assert i.indices == [1,0] + + def test_C_viewiterator_step(self): + #iteration in C order with #contiguous layout => strides[-1] is 1 + #skip less than the shape + start = 0 + shape = [3, 5] + strides = [5, 1] + backstrides = [x * (y - 1) for x,y in zip(strides, shape)] + assert backstrides == [10, 4] + i = ViewIterator(start, strides, backstrides, shape) + i = i.next_skip_x(2,2) + i = i.next_skip_x(2,2) + i = i.next_skip_x(2,2) + assert i.offset == 6 + assert not i.done() + assert i.indices == [1,1] + #And for some big skips + i = i.next_skip_x(2,5) + assert i.offset == 11 + assert i.indices == [2,1] + i = i.next_skip_x(2,5) + # Note: the offset does not overflow but recycles, + # this is good for broadcast + assert i.offset == 1 + assert i.indices == [0,1] + assert i.done() + + #Now what happens if the array is transposed? strides[-1] != 1 + # therefore layout is non-contiguous + strides = [1, 3] + backstrides = [x * (y - 1) for x,y in zip(strides, shape)] + assert backstrides == [2, 12] + i = ViewIterator(start, strides, backstrides, shape) + i = i.next_skip_x(2,2) + i = i.next_skip_x(2,2) + i = i.next_skip_x(2,2) + assert i.offset == 4 + assert i.indices == [1,1] + assert not i.done() + i = i.next_skip_x(2,5) + assert i.offset == 5 + assert i.indices == [2,1] + assert not i.done() + i = i.next_skip_x(2,5) + assert i.indices == [0,1] + assert i.offset == 3 + assert i.done() diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -167,6 +167,23 @@ assert calc_new_strides([1, 1, 105, 1, 1], [7, 15], [1, 7],'F') == \ [1, 1, 1, 105, 105] + def test_to_coords(self): + from pypy.module.micronumpy.strides import to_coords + + def _to_coords(index, order): + return to_coords(self.space, [2, 3, 4], 24, order, + self.space.wrap(index))[0] + + assert _to_coords(0, 'C') == [0, 0, 0] + assert _to_coords(1, 'C') == [0, 0, 1] + assert _to_coords(-1, 'C') == [1, 2, 3] + assert _to_coords(5, 'C') == [0, 1, 1] + assert _to_coords(13, 'C') == [1, 0, 1] + assert _to_coords(0, 'F') == [0, 0, 0] + assert _to_coords(1, 'F') == [1, 0, 0] + assert _to_coords(-1, 'F') == [1, 2, 3] + assert _to_coords(5, 'F') == [1, 2, 0] + assert _to_coords(13, 'F') == [1, 0, 2] class AppTestNumArray(BaseNumpyAppTest): def test_ndarray(self): @@ -211,6 +228,7 @@ # And check that changes stick. 
a[13] = 5.3 assert a[13] == 5.3 + assert zeros(()).shape == () def test_size(self): from _numpypy import array @@ -254,6 +272,7 @@ b = a[::2] c = b.copy() assert (c == b).all() + assert ((a + a).copy() == (a + a)).all() a = arange(15).reshape(5,3) b = a.copy() @@ -283,6 +302,12 @@ for i in xrange(5): assert a[i] == b[i] + def test_getitem_nd(self): + from _numpypy import arange + a = arange(15).reshape(3, 5) + assert a[1, 3] == 8 + assert a.T[1, 2] == 11 + def test_setitem(self): from _numpypy import array a = array(range(5)) @@ -370,6 +395,7 @@ assert b.shape == (5,) c = a[:3] assert c.shape == (3,) + assert array([]).shape == (0,) def test_set_shape(self): from _numpypy import array, zeros @@ -405,6 +431,7 @@ assert (a == [1000, 1, 2, 3, 1000, 5, 6, 7, 1000, 9, 10, 11]).all() a = zeros((4, 2, 3)) a.shape = (12, 2) + (a + a).reshape(2, 12) # assert did not explode def test_slice_reshape(self): from _numpypy import zeros, arange @@ -448,6 +475,13 @@ y = z.reshape(4, 3, 8) assert y.shape == (4, 3, 8) + def test_scalar_reshape(self): + from numpypy import array + a = array(3) + assert a.reshape([1, 1]).shape == (1, 1) + assert a.reshape([1]).shape == (1,) + raises(ValueError, "a.reshape(3)") + def test_add(self): from _numpypy import array a = array(range(5)) @@ -956,7 +990,7 @@ assert debug_repr(a + a) == 'Call2(add, Array, Array)' assert debug_repr(a[::2]) == 'Slice' assert debug_repr(a + 2) == 'Call2(add, Array, Scalar)' - assert debug_repr(a + a.flat) == 'Call2(add, Array, Slice)' + assert debug_repr(a + a.flat) == 'Call2(add, Array, Flat)' assert debug_repr(sin(a)) == 'Call1(sin, Array)' b = a + a @@ -1036,8 +1070,42 @@ assert a.var() == 0.0 a = arange(10).reshape(5, 2) assert a.var() == 8.25 - #assert (a.var(0) == [8, 8]).all() - #assert (a.var(1) == [.25] * 5).all() + assert (a.var(0) == [8, 8]).all() + assert (a.var(1) == [.25] * 5).all() + + def test_concatenate(self): + from numpypy import array, concatenate, dtype + a1 = array([0,1,2]) + a2 = array([3,4,5]) + a = concatenate((a1, a2)) + assert len(a) == 6 + assert (a == [0,1,2,3,4,5]).all() + assert a.dtype is dtype(int) + b1 = array([[1, 2], [3, 4]]) + b2 = array([[5, 6]]) + b = concatenate((b1, b2), axis=0) + assert (b == [[1, 2],[3, 4],[5, 6]]).all() + c = concatenate((b1, b2.T), axis=1) + assert (c == [[1, 2, 5],[3, 4, 6]]).all() + d = concatenate(([0],[1])) + assert (d == [0,1]).all() + e1 = array([[0,1],[2,3]]) + e = concatenate(e1) + assert (e == [0,1,2,3]).all() + f1 = array([0,1]) + f = concatenate((f1, [2], f1, [7])) + assert (f == [0,1,2,0,1,7]).all() + + bad_axis = raises(ValueError, concatenate, (a1,a2), axis=1) + assert str(bad_axis.value) == "bad axis argument" + + concat_zero = raises(ValueError, concatenate, ()) + assert str(concat_zero.value) == \ + "concatenation of zero-length sequences is impossible" + + dims_disagree = raises(ValueError, concatenate, (a1, b1), axis=0) + assert str(dims_disagree.value) == \ + "array dimensions must agree except for axis being concatenated" def test_std(self): from _numpypy import array @@ -1049,6 +1117,13 @@ def test_flatten(self): from _numpypy import array + assert array(3).flatten().shape == (1,) + a = array([[1, 2], [3, 4]]) + b = a.flatten() + c = a.ravel() + a[0, 0] = 15 + assert b[0] == 1 + assert c[0] == 15 a = array([[1, 2, 3], [4, 5, 6]]) assert (a.flatten() == [1, 2, 3, 4, 5, 6]).all() a = array([[[1, 2], [3, 4]], [[5, 6], [7, 8]]]) @@ -1062,7 +1137,6 @@ a = array([[1, 2], [3, 4]]) assert (a.T.flatten() == [1, 3, 2, 4]).all() - class 
AppTestMultiDim(BaseNumpyAppTest): def test_init(self): import _numpypy @@ -1077,7 +1151,7 @@ assert _numpypy.array([[1], [2], [3]]).shape == (3, 1) assert len(_numpypy.zeros((3, 1, 2))) == 3 raises(TypeError, len, _numpypy.zeros(())) - raises(ValueError, _numpypy.array, [[1, 2], 3]) + raises(ValueError, _numpypy.array, [[1, 2], 3], dtype=float) def test_getsetitem(self): import _numpypy @@ -1292,7 +1366,7 @@ assert(b[:, 0] == a[0, :]).all() def test_flatiter(self): - from _numpypy import array, flatiter + from _numpypy import array, flatiter, arange a = array([[10, 30], [40, 60]]) f_iter = a.flat assert f_iter.next() == 10 @@ -1305,6 +1379,9 @@ for k in a.flat: s += k assert s == 140 + a = arange(10).reshape(5, 2) + raises(IndexError, 'a.flat[(1, 2)]') + assert a.flat.base is a def test_flatiter_array_conv(self): from _numpypy import array, dot @@ -1316,6 +1393,75 @@ a = ones((2, 2)) assert list(((a + a).flat)) == [2, 2, 2, 2] + def test_flatiter_getitem(self): + from _numpypy import arange + a = arange(10) + assert a.flat[3] == 3 + assert a[2:].flat[3] == 5 + assert (a + a).flat[3] == 6 + assert a[::2].flat[3] == 6 + assert a.reshape(2,5).flat[3] == 3 + b = a.reshape(2,5).flat + b.next() + b.next() + b.next() + assert b[3] == 3 + assert (b[::3] == [0, 3, 6, 9]).all() + assert (b[2::5] == [2, 7]).all() + assert b[-2] == 8 + raises(IndexError, "b[11]") + raises(IndexError, "b[-11]") + raises(IndexError, 'b[0, 1]') + assert b.index == 3 + assert b.coords == (0,3) + + def test_flatiter_setitem(self): + from _numpypy import arange, array + a = arange(12).reshape(3,4) + b = a.T.flat + b[6::2] = [-1, -2] + assert (a == [[0, 1, -1, 3], [4, 5, 6, -1], [8, 9, -2, 11]]).all() + b[0:2] = [[[100]]] + assert(a[0,0] == 100) + assert(a[1,0] == 100) + raises(IndexError, 'b[array([10, 11])] == [-20, -40]') + + def test_flatiter_ops(self): + from _numpypy import arange, array + a = arange(12).reshape(3,4) + b = a.T.flat + assert (b == [0, 4, 8, 1, 5, 9, 2, 6, 10, 3, 7, 11]).all() + assert not (b != [0, 4, 8, 1, 5, 9, 2, 6, 10, 3, 7, 11]).any() + assert ((b >= range(12)) == [True, True, True,False, True, True, + False, False, True, False, False, True]).all() + assert ((b < range(12)) != [True, True, True,False, True, True, + False, False, True, False, False, True]).all() + assert ((b <= range(12)) != [False, True, True,False, True, True, + False, False, True, False, False, False]).all() + assert ((b > range(12)) == [False, True, True,False, True, True, + False, False, True, False, False, False]).all() + def test_flatiter_view(self): + from _numpypy import arange + a = arange(10).reshape(5, 2) + #no == yet. 
+ # a[::2].flat == [0, 1, 4, 5, 8, 9] + isequal = True + for y,z in zip(a[::2].flat, [0, 1, 4, 5, 8, 9]): + if y != z: + isequal = False + assert isequal == True + + def test_flatiter_transpose(self): + from _numpypy import arange + a = arange(10).reshape(2,5).T + b = a.flat + assert (b[:5] == [0, 5, 1, 6, 2]).all() + b.next() + b.next() + b.next() + assert b.index == 3 + assert b.coords == (1,1) + def test_slice_copy(self): from _numpypy import zeros a = zeros((10, 10)) @@ -1381,12 +1527,61 @@ a[a & 1 == 1] = array([8, 9, 10]) assert (a == [[0, 8], [2, 9], [4, 10]]).all() + def test_copy_kwarg(self): + from _numpypy import array + x = array([1, 2, 3]) + assert (array(x) == x).all() + assert array(x) is not x + assert array(x, copy=False) is x + assert array(x, copy=True) is not x + def test_isna(self): from _numpypy import isna, array # XXX for now assert not isna(3) assert (isna(array([1, 2, 3, 4])) == [False, False, False, False]).all() + def test_ravel(self): + from _numpypy import arange + assert (arange(3).ravel() == arange(3)).all() + assert (arange(6).reshape(2, 3).ravel() == arange(6)).all() + assert (arange(6).reshape(2, 3).T.ravel() == [0, 3, 1, 4, 2, 5]).all() + + def test_take(self): + from _numpypy import arange + assert (arange(10).take([1, 2, 1, 1]) == [1, 2, 1, 1]).all() + raises(IndexError, "arange(3).take([15])") + a = arange(6).reshape(2, 3) + assert (a.take([1, 0, 3]) == [1, 0, 3]).all() + assert ((a + a).take([3]) == [6]).all() + a = arange(12).reshape(2, 6) + assert (a[:,::2].take([3, 2, 1]) == [6, 4, 2]).all() + + def test_compress(self): + from _numpypy import arange + a = arange(10) + assert (a.compress([True, False, True]) == [0, 2]).all() + assert (a.compress([1, 0, 13]) == [0, 2]).all() + assert (a.compress([1, 0, 13.5]) == [0, 2]).all() + a = arange(10).reshape(2, 5) + assert (a.compress([True, False, True]) == [0, 2]).all() + raises(IndexError, "a.compress([1] * 100)") + + def test_item(self): + from _numpypy import array + assert array(3).item() == 3 + assert type(array(3).item()) is int + assert type(array(True).item()) is bool + assert type(array(3.5).item()) is float + raises((ValueError, IndexError), "array(3).item(15)") + raises(ValueError, "array([1, 2, 3]).item()") + assert array([3]).item(0) == 3 + assert type(array([3]).item(0)) is int + assert array([1, 2, 3]).item(-1) == 3 + a = array([1, 2, 3]) + assert a[::2].item(1) == 3 + assert (a + a).item(1) == 4 + raises(ValueError, "array(5).item(1)") class AppTestSupport(BaseNumpyAppTest): def setup_class(cls): @@ -1498,128 +1693,6 @@ raises(ValueError, fromstring, "\x01\x02\x03", count=5, dtype=uint8) -class AppTestRepr(BaseNumpyAppTest): - def test_repr(self): - from _numpypy import array, zeros - int_size = array(5).dtype.itemsize - a = array(range(5), float) - assert repr(a) == "array([0.0, 1.0, 2.0, 3.0, 4.0])" - a = array([], float) - assert repr(a) == "array([], dtype=float64)" - a = zeros(1001) - assert repr(a) == "array([0.0, 0.0, 0.0, ..., 0.0, 0.0, 0.0])" - a = array(range(5), long) - if a.dtype.itemsize == int_size: - assert repr(a) == "array([0, 1, 2, 3, 4])" - else: - assert repr(a) == "array([0, 1, 2, 3, 4], dtype=int64)" - a = array(range(5), 'int32') - if a.dtype.itemsize == int_size: - assert repr(a) == "array([0, 1, 2, 3, 4])" - else: - assert repr(a) == "array([0, 1, 2, 3, 4], dtype=int32)" - a = array([], long) - assert repr(a) == "array([], dtype=int64)" - a = array([True, False, True, False], "?") - assert repr(a) == "array([True, False, True, False], dtype=bool)" - a = zeros([]) - 
assert repr(a) == "array(0.0)" - a = array(0.2) - assert repr(a) == "array(0.2)" - a = array([2]) - assert repr(a) == "array([2])" - - def test_repr_multi(self): - from _numpypy import arange, zeros, array - a = zeros((3, 4)) - assert repr(a) == '''array([[0.0, 0.0, 0.0, 0.0], - [0.0, 0.0, 0.0, 0.0], - [0.0, 0.0, 0.0, 0.0]])''' - a = zeros((2, 3, 4)) - assert repr(a) == '''array([[[0.0, 0.0, 0.0, 0.0], - [0.0, 0.0, 0.0, 0.0], - [0.0, 0.0, 0.0, 0.0]], - - [[0.0, 0.0, 0.0, 0.0], - [0.0, 0.0, 0.0, 0.0], - [0.0, 0.0, 0.0, 0.0]]])''' - a = arange(1002).reshape((2, 501)) - assert repr(a) == '''array([[0, 1, 2, ..., 498, 499, 500], - [501, 502, 503, ..., 999, 1000, 1001]])''' - assert repr(a.T) == '''array([[0, 501], - [1, 502], - [2, 503], - ..., - [498, 999], - [499, 1000], - [500, 1001]])''' - a = arange(2).reshape((2,1)) - assert repr(a) == '''array([[0], - [1]])''' - - def test_repr_slice(self): - from _numpypy import array, zeros - a = array(range(5), float) - b = a[1::2] - assert repr(b) == "array([1.0, 3.0])" - a = zeros(2002) - b = a[::2] - assert repr(b) == "array([0.0, 0.0, 0.0, ..., 0.0, 0.0, 0.0])" - a = array((range(5), range(5, 10)), dtype="int16") - b = a[1, 2:] - assert repr(b) == "array([7, 8, 9], dtype=int16)" - # an empty slice prints its shape - b = a[2:1, ] - assert repr(b) == "array([], shape=(0, 5), dtype=int16)" - - def test_str(self): - from _numpypy import array, zeros - a = array(range(5), float) - assert str(a) == "[0.0 1.0 2.0 3.0 4.0]" - assert str((2 * a)[:]) == "[0.0 2.0 4.0 6.0 8.0]" - a = zeros(1001) - assert str(a) == "[0.0 0.0 0.0 ..., 0.0 0.0 0.0]" - - a = array(range(5), dtype=long) - assert str(a) == "[0 1 2 3 4]" - a = array([True, False, True, False], dtype="?") - assert str(a) == "[True False True False]" - - a = array(range(5), dtype="int8") - assert str(a) == "[0 1 2 3 4]" - - a = array(range(5), dtype="int16") - assert str(a) == "[0 1 2 3 4]" - - a = array((range(5), range(5, 10)), dtype="int16") - assert str(a) == "[[0 1 2 3 4]\n [5 6 7 8 9]]" - - a = array(3, dtype=int) - assert str(a) == "3" - - a = zeros((400, 400), dtype=int) - assert str(a) == "[[0 0 0 ..., 0 0 0]\n [0 0 0 ..., 0 0 0]\n" \ - " [0 0 0 ..., 0 0 0]\n ...,\n [0 0 0 ..., 0 0 0]\n" \ - " [0 0 0 ..., 0 0 0]\n [0 0 0 ..., 0 0 0]]" - a = zeros((2, 2, 2)) - r = str(a) - assert r == '[[[0.0 0.0]\n [0.0 0.0]]\n\n [[0.0 0.0]\n [0.0 0.0]]]' - - def test_str_slice(self): - from _numpypy import array, zeros - a = array(range(5), float) - b = a[1::2] - assert str(b) == "[1.0 3.0]" - a = zeros(2002) - b = a[::2] - assert str(b) == "[0.0 0.0 0.0 ..., 0.0 0.0 0.0]" - a = array((range(5), range(5, 10)), dtype="int16") - b = a[1, 2:] - assert str(b) == "[7 8 9]" - b = a[2:1, ] - assert str(b) == "[]" - - class AppTestRanges(BaseNumpyAppTest): def test_arange(self): from _numpypy import arange, array, dtype @@ -1640,3 +1713,24 @@ a = arange(0, 0.8, 0.1) assert len(a) == 8 assert arange(False, True, True).dtype is dtype(int) + +from pypy.module.micronumpy.appbridge import get_appbridge_cache + +class AppTestRepr(BaseNumpyAppTest): + def setup_class(cls): + BaseNumpyAppTest.setup_class.im_func(cls) + cache = get_appbridge_cache(cls.space) + cls.old_array_repr = cache.w_array_repr + cls.old_array_str = cache.w_array_str + cache.w_array_str = None + cache.w_array_repr = None + + def test_repr_str(self): + from _numpypy import array + assert repr(array([1, 2, 3])) == 'array([1, 2, 3])' + assert str(array([1, 2, 3])) == 'array([1, 2, 3])' + + def teardown_class(cls): + cache = get_appbridge_cache(cls.space) 
+ cache.w_array_repr = cls.old_array_repr + cache.w_array_str = cls.old_array_str diff --git a/pypy/module/micronumpy/test/test_ufuncs.py b/pypy/module/micronumpy/test/test_ufuncs.py --- a/pypy/module/micronumpy/test/test_ufuncs.py +++ b/pypy/module/micronumpy/test/test_ufuncs.py @@ -344,7 +344,7 @@ from _numpypy import sin, add raises(ValueError, sin.reduce, [1, 2, 3]) - raises(ValueError, add.reduce, 1) + raises((ValueError, TypeError), add.reduce, 1) def test_reduce_1d(self): from _numpypy import add, maximum @@ -360,6 +360,14 @@ assert (add.reduce(a, 0) == [12, 15, 18, 21]).all() assert (add.reduce(a, 1) == [6.0, 22.0, 38.0]).all() + def test_reduce_keepdims(self): + from _numpypy import add, arange + a = arange(12).reshape(3, 4) + b = add.reduce(a, 0, keepdims=True) + assert b.shape == (1, 4) + assert (add.reduce(a, 0, keepdims=True) == [12, 15, 18, 21]).all() + + def test_bitwise(self): from _numpypy import bitwise_and, bitwise_or, arange, array a = arange(6).reshape(2, 3) @@ -369,6 +377,12 @@ assert (a | 1 == bitwise_or(a, 1)).all() raises(TypeError, 'array([1.0]) & 1') + def test_unary_bitops(self): + from _numpypy import bitwise_not, array + a = array([1, 2, 3, 4]) + assert (~a == [-2, -3, -4, -5]).all() + assert (bitwise_not(a) == ~a).all() + def test_comparisons(self): import operator from _numpypy import equal, not_equal, less, less_equal, greater, greater_equal @@ -394,3 +408,28 @@ (3, 3.5), ]: assert ufunc(a, b) == func(a, b) + + def test_count_reduce_items(self): + from _numpypy import count_reduce_items, arange + a = arange(24).reshape(2, 3, 4) + assert count_reduce_items(a) == 24 + assert count_reduce_items(a, 1) == 3 + assert count_reduce_items(a, (1, 2)) == 3 * 4 + + def test_true_divide(self): + from _numpypy import arange, array, true_divide + assert (true_divide(arange(3), array([2, 2, 2])) == array([0, 0.5, 1])).all() + + def test_isnan_isinf(self): + from _numpypy import isnan, isinf, float64, array + assert isnan(float('nan')) + assert isnan(float64(float('nan'))) + assert not isnan(3) + assert isinf(float('inf')) + assert not isnan(3.5) + assert not isinf(3.5) + assert not isnan(float('inf')) + assert not isinf(float('nan')) + assert (isnan(array([0.2, float('inf'), float('nan')])) == [False, False, True]).all() + assert (isinf(array([0.2, float('inf'), float('nan')])) == [False, True, False]).all() + assert isinf(array([0.2])).dtype.kind == 'b' diff --git a/pypy/module/micronumpy/test/test_zjit.py b/pypy/module/micronumpy/test/test_zjit.py --- a/pypy/module/micronumpy/test/test_zjit.py +++ b/pypy/module/micronumpy/test/test_zjit.py @@ -12,7 +12,7 @@ from pypy.module.micronumpy.compile import (FakeSpace, IntObject, Parser, InterpreterState) from pypy.module.micronumpy.interp_numarray import (W_NDimArray, - BaseArray) + BaseArray, W_FlatIterator) from pypy.rlib.nonconst import NonConstant @@ -287,6 +287,27 @@ 'jump': 1, 'arraylen_gc': 1}) + def define_take(): + return """ + a = |10| + b = take(a, [1, 1, 3, 2]) + b -> 2 + """ + + def test_take(self): + result = self.run("take") + assert result == 3 + self.check_simple_loop({'getinteriorfield_raw': 2, + 'cast_float_to_int': 1, + 'int_lt': 1, + 'int_ge': 2, + 'guard_false': 3, + 'setinteriorfield_raw': 1, + 'int_mul': 1, + 'int_add': 3, + 'jump': 1, + 'arraylen_gc': 2}) + def define_multidim(): return """ a = [[1, 2], [3, 4], [5, 6], [7, 8], [9, 10]] @@ -369,7 +390,71 @@ 'setinteriorfield_raw': 1, 'int_add': 2, 'int_ge': 1, 'guard_false': 1, 'jump': 1, 'arraylen_gc': 1}) + def define_flat_iter(): + return ''' + a = 
|30| + b = flat(a) + c = b + a + c -> 3 + ''' + def test_flat_iter(self): + result = self.run("flat_iter") + assert result == 6 + self.check_trace_count(1) + self.check_simple_loop({'getinteriorfield_raw': 2, 'float_add': 1, + 'setinteriorfield_raw': 1, 'int_add': 3, + 'int_ge': 1, 'guard_false': 1, + 'arraylen_gc': 1, 'jump': 1}) + + def define_flat_getitem(): + return ''' + a = |30| + b = flat(a) + b -> 4: -> 6 + ''' + + def test_flat_getitem(self): + result = self.run("flat_getitem") + assert result == 10.0 + self.check_trace_count(1) + self.check_simple_loop({'getinteriorfield_raw': 1, + 'setinteriorfield_raw': 1, + 'int_lt': 1, + 'int_ge': 1, + 'int_add': 3, + 'guard_true': 1, + 'guard_false': 1, + 'arraylen_gc': 2, + 'jump': 1}) + + def define_flat_setitem(): + return ''' + a = |30| + b = flat(a) + b[4:] = a->:26 + a -> 5 + ''' + + def test_flat_setitem(self): + result = self.run("flat_setitem") + assert result == 1.0 + self.check_trace_count(1) + # XXX not ideal, but hey, let's ignore it for now + self.check_simple_loop({'getinteriorfield_raw': 1, + 'setinteriorfield_raw': 1, + 'int_lt': 1, + 'int_gt': 1, + 'int_add': 4, + 'guard_true': 2, + 'arraylen_gc': 2, + 'jump': 1, + 'int_sub': 1, + # XXX bad part + 'int_and': 1, + 'int_mod': 1, + 'int_rshift': 1, + }) class TestNumpyOld(LLJitMixin): def setup_class(cls): diff --git a/pypy/module/micronumpy/tool/numready/main.py b/pypy/module/micronumpy/tool/numready/main.py --- a/pypy/module/micronumpy/tool/numready/main.py +++ b/pypy/module/micronumpy/tool/numready/main.py @@ -73,9 +73,8 @@ for line in lines: kind, name = line.split(" : ", 1) subitems = None - if kind == KINDS["TYPE"]: - if name in ['ndarray', 'dtype']: - subitems = find_numpy_items(python, modname, name) + if kind == KINDS["TYPE"] and name in SPECIAL_NAMES and attr is None: + subitems = find_numpy_items(python, modname, name) items.add(Item(name, kind, subitems)) return items @@ -89,15 +88,20 @@ l[i].append(lst[k * lgt + i]) return l +SPECIAL_NAMES = ["ndarray", "dtype", "generic"] + def main(argv): cpy_items = find_numpy_items("/usr/bin/python") pypy_items = find_numpy_items(argv[1], "numpypy") all_items = [] - msg = '%d/%d names, %d/%d ndarray attributes, %d/%d dtype attributes' % ( - len(pypy_items), len(cpy_items), len(pypy_items['ndarray'].subitems), - len(cpy_items['ndarray'].subitems), len(pypy_items['dtype'].subitems), - len(cpy_items['dtype'].subitems)) + msg = "{:d}/{:d} names".format(len(pypy_items), len(cpy_items)) + " " + msg += ", ".join( + "{:d}/{:d} {} attributes".format( + len(pypy_items[name].subitems), len(cpy_items[name].subitems), name + ) + for name in SPECIAL_NAMES + ) for item in cpy_items: pypy_exists = item in pypy_items if item.subitems: @@ -114,6 +118,6 @@ with open(argv[2], 'w') as f: f.write(html.encode("utf-8")) else: - with tempfile.NamedTemporaryFile(delete=False) as f: + with tempfile.NamedTemporaryFile(delete=False, suffix=".html") as f: f.write(html.encode("utf-8")) print "Saved in: %s" % f.name diff --git a/pypy/module/micronumpy/types.py b/pypy/module/micronumpy/types.py --- a/pypy/module/micronumpy/types.py +++ b/pypy/module/micronumpy/types.py @@ -23,6 +23,16 @@ ) return dispatcher +def raw_unary_op(func): + specialize.argtype(1)(func) + @functools.wraps(func) + def dispatcher(self, v): + return func( + self, + self.for_computation(self.unbox(v)) + ) + return dispatcher + def simple_binary_op(func): specialize.argtype(1, 2)(func) @functools.wraps(func) @@ -95,7 +105,9 @@ )) def read_bool(self, storage, width, i, offset): - raise 
NotImplementedError + return bool(self.for_computation( + libffi.array_getitem(clibffi.cast_type_to_ffitype(self.T), + width, storage, i, offset))) def store(self, storage, width, i, offset, box): value = self.unbox(box) @@ -137,6 +149,14 @@ def abs(self, v): return abs(v) + @raw_unary_op + def isnan(self, v): + return False + + @raw_unary_op + def isinf(self, v): + return False + @raw_binary_op def eq(self, v1, v2): return v1 == v2 @@ -171,7 +191,7 @@ @simple_binary_op def min(self, v1, v2): return min(v1, v2) - + class Bool(BaseType, Primitive): T = lltype.Bool @@ -189,11 +209,6 @@ else: return self.False - - def read_bool(self, storage, width, i, offset): - return libffi.array_getitem(clibffi.cast_type_to_ffitype(self.T), - width, storage, i, offset) - def coerce_subtype(self, space, w_subtype, w_item): # Doesn't return subclasses so it can return the constants. return self._coerce(space, w_item) @@ -214,6 +229,18 @@ def default_fromstring(self, space): return self.box(False) + @simple_binary_op + def bitwise_and(self, v1, v2): + return v1 & v2 + + @simple_binary_op + def bitwise_or(self, v1, v2): + return v1 | v2 + + @simple_unary_op + def invert(self, v): + return ~v + class Integer(Primitive): _mixin_ = True @@ -270,6 +297,10 @@ def bitwise_or(self, v1, v2): return v1 | v2 + @simple_unary_op + def invert(self, v): + return ~v + class Int8(BaseType, Integer): T = rffi.SIGNEDCHAR BoxType = interp_boxes.W_Int8Box @@ -448,6 +479,14 @@ except ValueError: return rfloat.NAN + @raw_unary_op + def isnan(self, v): + return rfloat.isnan(v) + + @raw_unary_op + def isinf(self, v): + return rfloat.isinf(v) + class Float32(BaseType, Float): T = rffi.FLOAT diff --git a/pypy/module/test_lib_pypy/numpypy/core/test_fromnumeric.py b/pypy/module/test_lib_pypy/numpypy/core/test_fromnumeric.py --- a/pypy/module/test_lib_pypy/numpypy/core/test_fromnumeric.py +++ b/pypy/module/test_lib_pypy/numpypy/core/test_fromnumeric.py @@ -98,15 +98,15 @@ from numpypy import array, var a = array([[1,2],[3,4]]) assert var(a) == 1.25 - #assert (var(a,0) == array([ 1., 1.])).all() - #assert (var(a,1) == array([ 0.25, 0.25])).all() + assert (var(a,0) == array([ 1., 1.])).all() + assert (var(a,1) == array([ 0.25, 0.25])).all() def test_std(self): from numpypy import array, std a = array([[1, 2], [3, 4]]) assert std(a) == 1.1180339887498949 - #assert (std(a, axis=0) == array([ 1., 1.])).all() - #assert (std(a, axis=1) == array([ 0.5, 0.5])).all() + assert (std(a, axis=0) == array([ 1., 1.])).all() + assert (std(a, axis=1) == array([ 0.5, 0.5])).all() def test_mean(self): from numpypy import array, mean, arange @@ -136,4 +136,4 @@ raises(NotImplementedError, "transpose(x, axes=(1, 0, 2))") # x = ones((1, 2, 3)) # assert transpose(x, (1, 0, 2)).shape == (2, 1, 3) - + diff --git a/pypy/module/test_lib_pypy/numpypy/core/test_numeric.py b/pypy/module/test_lib_pypy/numpypy/core/test_numeric.py new file mode 100644 --- /dev/null +++ b/pypy/module/test_lib_pypy/numpypy/core/test_numeric.py @@ -0,0 +1,144 @@ + +from pypy.module.micronumpy.test.test_base import BaseNumpyAppTest + + +class AppTestBaseRepr(BaseNumpyAppTest): + def test_base3(self): + from numpypy import base_repr + assert base_repr(3**5, 3) == '100000' + + def test_positive(self): + from numpypy import base_repr + assert base_repr(12, 10) == '12' + assert base_repr(12, 10, 4) == '000012' + assert base_repr(12, 4) == '30' + assert base_repr(3731624803700888, 36) == '10QR0ROFCEW' + + def test_negative(self): + from numpypy import base_repr + assert base_repr(-12, 10) == 
'-12' + assert base_repr(-12, 10, 4) == '-000012' + assert base_repr(-12, 4) == '-30' + +class AppTestRepr(BaseNumpyAppTest): + def test_repr(self): + from numpypy import array + assert repr(array([1, 2, 3, 4])) == 'array([1, 2, 3, 4])' + + def test_repr_2(self): + from numpypy import array, zeros + int_size = array(5).dtype.itemsize + a = array(range(5), float) + assert repr(a) == "array([ 0., 1., 2., 3., 4.])" + a = array([], float) + assert repr(a) == "array([], dtype=float64)" + a = zeros(1001) + assert repr(a) == "array([ 0., 0., 0., ..., 0., 0., 0.])" + a = array(range(5), long) + if a.dtype.itemsize == int_size: + assert repr(a) == "array([0, 1, 2, 3, 4])" + else: + assert repr(a) == "array([0, 1, 2, 3, 4], dtype=int64)" + a = array(range(5), 'int32') + if a.dtype.itemsize == int_size: + assert repr(a) == "array([0, 1, 2, 3, 4])" + else: + assert repr(a) == "array([0, 1, 2, 3, 4], dtype=int32)" + a = array([], long) + assert repr(a) == "array([], dtype=int64)" + a = array([True, False, True, False], "?") + assert repr(a) == "array([ True, False, True, False], dtype=bool)" + a = zeros([]) + assert repr(a) == "array(0.0)" + a = array(0.2) + assert repr(a) == "array(0.2)" + a = array([2]) + assert repr(a) == "array([2])" + + def test_repr_multi(self): + from numpypy import arange, zeros, array + a = zeros((3, 4)) + assert repr(a) == '''array([[ 0., 0., 0., 0.], + [ 0., 0., 0., 0.], + [ 0., 0., 0., 0.]])''' + a = zeros((2, 3, 4)) + assert repr(a) == '''array([[[ 0., 0., 0., 0.], + [ 0., 0., 0., 0.], + [ 0., 0., 0., 0.]], + + [[ 0., 0., 0., 0.], + [ 0., 0., 0., 0.], + [ 0., 0., 0., 0.]]])''' + a = arange(1002).reshape((2, 501)) + assert repr(a) == '''array([[ 0, 1, 2, ..., 498, 499, 500], + [ 501, 502, 503, ..., 999, 1000, 1001]])''' + assert repr(a.T) == '''array([[ 0, 501], + [ 1, 502], + [ 2, 503], + ..., + [ 498, 999], + [ 499, 1000], + [ 500, 1001]])''' + a = arange(2).reshape((2,1)) + assert repr(a) == '''array([[0], + [1]])''' + + def test_repr_slice(self): + from numpypy import array, zeros + a = array(range(5), float) + b = a[1::2] + assert repr(b) == "array([ 1., 3.])" + a = zeros(2002) + b = a[::2] + assert repr(b) == "array([ 0., 0., 0., ..., 0., 0., 0.])" + a = array((range(5), range(5, 10)), dtype="int16") + b = a[1, 2:] + assert repr(b) == "array([7, 8, 9], dtype=int16)" + # an empty slice prints its shape + b = a[2:1, ] + assert repr(b) == "array([], shape=(0, 5), dtype=int16)" + + def test_str(self): + from numpypy import array, zeros + a = array(range(5), float) + assert str(a) == "[ 0. 1. 2. 3. 4.]" + assert str((2 * a)[:]) == "[ 0. 2. 4. 6. 8.]" + a = zeros(1001) + assert str(a) == "[ 0. 0. 0. ..., 0. 0. 0.]" + + a = array(range(5), dtype=long) + assert str(a) == "[0 1 2 3 4]" + a = array([True, False, True, False], dtype="?") + assert str(a) == "[ True False True False]" + + a = array(range(5), dtype="int8") + assert str(a) == "[0 1 2 3 4]" + + a = array(range(5), dtype="int16") + assert str(a) == "[0 1 2 3 4]" + + a = array((range(5), range(5, 10)), dtype="int16") + assert str(a) == "[[0 1 2 3 4]\n [5 6 7 8 9]]" + + a = array(3, dtype=int) + assert str(a) == "3" + + a = zeros((400, 400), dtype=int) + assert str(a) == '[[0 0 0 ..., 0 0 0]\n [0 0 0 ..., 0 0 0]\n [0 0 0 ..., 0 0 0]\n ..., \n [0 0 0 ..., 0 0 0]\n [0 0 0 ..., 0 0 0]\n [0 0 0 ..., 0 0 0]]' + a = zeros((2, 2, 2)) + r = str(a) + assert r == '[[[ 0. 0.]\n [ 0. 0.]]\n\n [[ 0. 0.]\n [ 0. 
0.]]]' + + def test_str_slice(self): + from numpypy import array, zeros + a = array(range(5), float) + b = a[1::2] + assert str(b) == "[ 1. 3.]" + a = zeros(2002) + b = a[::2] + assert str(b) == "[ 0. 0. 0. ..., 0. 0. 0.]" + a = array((range(5), range(5, 10)), dtype="int16") + b = a[1, 2:] + assert str(b) == "[7 8 9]" + b = a[2:1, ] + assert str(b) == "[]" diff --git a/pypy/objspace/fake/objspace.py b/pypy/objspace/fake/objspace.py --- a/pypy/objspace/fake/objspace.py +++ b/pypy/objspace/fake/objspace.py @@ -150,6 +150,8 @@ self._see_getsetproperty(x) if isinstance(x, r_singlefloat): self._wrap_not_rpython(x) + if isinstance(x, list): + self._wrap_not_rpython(x) return w_some_obj() wrap._annspecialcase_ = "specialize:argtype(1)" diff --git a/pypy/objspace/std/listobject.py b/pypy/objspace/std/listobject.py --- a/pypy/objspace/std/listobject.py +++ b/pypy/objspace/std/listobject.py @@ -110,7 +110,10 @@ return list(items) def switch_to_object_strategy(self): - list_w = self.getitems() + if self.strategy is self.space.fromcache(EmptyListStrategy): + list_w = [] + else: + list_w = self.getitems() self.strategy = self.space.fromcache(ObjectListStrategy) # XXX this is quite indirect self.init_from_list_w(list_w) @@ -344,8 +347,6 @@ def __init__(self, space): ListStrategy.__init__(self, space) - # cache an empty list that is used whenever getitems is called (i.e. sorting) - self.cached_emptylist_w = [] def init_from_list_w(self, w_list, list_w): assert len(list_w) == 0 @@ -373,10 +374,10 @@ def getslice(self, w_list, start, stop, step, length): # will never be called because the empty list case is already caught in # getslice__List_ANY_ANY and getitem__List_Slice - return W_ListObject(self.space, self.cached_emptylist_w) + return W_ListObject(self.space, []) def getitems(self, w_list): - return self.cached_emptylist_w + return [] def getitems_copy(self, w_list): return [] diff --git a/pypy/objspace/std/test/test_listobject.py b/pypy/objspace/std/test/test_listobject.py --- a/pypy/objspace/std/test/test_listobject.py +++ b/pypy/objspace/std/test/test_listobject.py @@ -1251,6 +1251,20 @@ l.reverse() assert l == [2,1,0] +class AppTestWithoutStrategies(object): + + def setup_class(cls): + cls.space = gettestobjspace(**{"objspace.std.withliststrategies" : + False}) + + def test_no_shared_empty_list(self): + l = [] + copy = l[:] + copy.append({}) + assert copy == [{}] + + notshared = l[:] + assert notshared == [] class AppTestListFastSubscr: diff --git a/pypy/rlib/objectmodel.py b/pypy/rlib/objectmodel.py --- a/pypy/rlib/objectmodel.py +++ b/pypy/rlib/objectmodel.py @@ -130,19 +130,29 @@ if self is other: return 0 else: - raise TypeError("Symbolics can not be compared!") + raise TypeError("Symbolics cannot be compared! (%r, %r)" + % (self, other)) def __hash__(self): - raise TypeError("Symbolics are not hashable!") + raise TypeError("Symbolics are not hashable! %r" % (self,)) def __nonzero__(self): - raise TypeError("Symbolics are not comparable") + raise TypeError("Symbolics are not comparable! 
%r" % (self,)) class ComputedIntSymbolic(Symbolic): def __init__(self, compute_fn): self.compute_fn = compute_fn + def __repr__(self): + # repr(self.compute_fn) can arrive back here in an + # infinite recursion + try: + name = self.compute_fn.__name__ + except (AttributeError, TypeError): + name = hex(id(self.compute_fn)) + return '%s(%r)' % (self.__class__.__name__, name) + def annotation(self): from pypy.annotation import model return model.SomeInteger() @@ -157,6 +167,9 @@ self.expr = expr self.default = default + def __repr__(self): + return '%s(%r)' % (self.__class__.__name__, self.expr) + def annotation(self): from pypy.annotation import model return model.SomeInteger() diff --git a/pypy/rlib/rstacklet.py b/pypy/rlib/rstacklet.py --- a/pypy/rlib/rstacklet.py +++ b/pypy/rlib/rstacklet.py @@ -1,3 +1,4 @@ +import sys from pypy.rlib import _rffi_stacklet as _c from pypy.rlib import jit from pypy.rlib.objectmodel import we_are_translated @@ -72,6 +73,11 @@ if translated: assert config is not None, ("you have to pass a valid config, " "e.g. from 'driver.config'") + elif '__pypy__' in sys.builtin_module_names: + import py + py.test.skip("cannot run the stacklet tests on top of pypy: " + "calling directly the C function stacklet_switch() " + "will crash, depending on details of your config") if config is not None: assert config.translation.continuation, ( "stacklet: you have to translate with --continuation") diff --git a/pypy/rpython/lltypesystem/llgroup.py b/pypy/rpython/lltypesystem/llgroup.py --- a/pypy/rpython/lltypesystem/llgroup.py +++ b/pypy/rpython/lltypesystem/llgroup.py @@ -76,6 +76,10 @@ self.index = memberindex self.member = grp.members[memberindex]._as_ptr() + def __repr__(self): + return '%s(%s, %s)' % (self.__class__.__name__, + self.grpptr, self.index) + def __nonzero__(self): return True diff --git a/pypy/rpython/lltypesystem/llmemory.py b/pypy/rpython/lltypesystem/llmemory.py --- a/pypy/rpython/lltypesystem/llmemory.py +++ b/pypy/rpython/lltypesystem/llmemory.py @@ -32,7 +32,8 @@ self.known_nonneg()): return True else: - raise TypeError("Symbolics can not be compared!") + raise TypeError("Symbolics cannot be compared! 
(%r, %r)" + % (self, other)) def __lt__(self, other): return not self.__ge__(other) diff --git a/pypy/rpython/lltypesystem/rdict.py b/pypy/rpython/lltypesystem/rdict.py --- a/pypy/rpython/lltypesystem/rdict.py +++ b/pypy/rpython/lltypesystem/rdict.py @@ -323,6 +323,17 @@ hop.exception_is_here() return hop.gendirectcall(ll_popitem, cTUPLE, v_dict) + def rtype_method_pop(self, hop): + if hop.nb_args == 2: + v_args = hop.inputargs(self, self.key_repr) + target = ll_pop + elif hop.nb_args == 3: + v_args = hop.inputargs(self, self.key_repr, self.value_repr) + target = ll_pop_default + hop.exception_is_here() + v_res = hop.gendirectcall(target, *v_args) + return self.recast_value(hop.llops, v_res) + class __extend__(pairtype(DictRepr, rmodel.Repr)): def rtype_getitem((r_dict, r_key), hop): @@ -874,3 +885,18 @@ r.item1 = recast(ELEM.TO.item1, entry.value) _ll_dict_del(dic, i) return r + +def ll_pop(dic, key): + i = ll_dict_lookup(dic, key, dic.keyhash(key)) + if not i & HIGHEST_BIT: + value = ll_get_value(dic, i) + _ll_dict_del(dic, i) + return value + else: + raise KeyError + +def ll_pop_default(dic, key, dfl): + try: + return ll_pop(dic, key) + except KeyError: + return dfl diff --git a/pypy/rpython/lltypesystem/rffi.py b/pypy/rpython/lltypesystem/rffi.py --- a/pypy/rpython/lltypesystem/rffi.py +++ b/pypy/rpython/lltypesystem/rffi.py @@ -27,6 +27,10 @@ self.c_name = c_name self.TP = TP + def __repr__(self): + return '%s(%r, %s)' % (self.__class__.__name__, + self.c_name, self.TP) + def annotation(self): return lltype_to_annotation(self.TP) diff --git a/pypy/rpython/ootypesystem/rdict.py b/pypy/rpython/ootypesystem/rdict.py --- a/pypy/rpython/ootypesystem/rdict.py +++ b/pypy/rpython/ootypesystem/rdict.py @@ -160,6 +160,16 @@ hop.exception_is_here() return hop.gendirectcall(ll_popitem, cTUPLE, v_dict) + def rtype_method_pop(self, hop): + if hop.nb_args == 2: + v_args = hop.inputargs(self, self.key_repr) + target = ll_pop + elif hop.nb_args == 3: + v_args = hop.inputargs(self, self.key_repr, self.value_repr) + target = ll_pop_default + hop.exception_is_here() + return hop.gendirectcall(target, *v_args) + def __get_func(self, interp, r_func, fn, TYPE): if isinstance(r_func, MethodOfFrozenPBCRepr): obj = r_func.r_im_self.convert_const(fn.im_self) @@ -370,6 +380,20 @@ return res raise KeyError +def ll_pop(d, key): + if d.ll_contains(key): + value = d.ll_get(key) + d.ll_remove(key) + return value + else: + raise KeyError + +def ll_pop_default(d, key, dfl): + try: + return ll_pop(d, key) + except KeyError: + return dfl + # ____________________________________________________________ # # Iteration. 
diff --git a/pypy/rpython/test/test_rdict.py b/pypy/rpython/test/test_rdict.py --- a/pypy/rpython/test/test_rdict.py +++ b/pypy/rpython/test/test_rdict.py @@ -25,37 +25,37 @@ class BaseTestRdict(BaseRtypingTest): def test_dict_creation(self): - def createdict(i): + def createdict(i): d = {'hello' : i} return d['hello'] res = self.interpret(createdict, [42]) assert res == 42 - def test_dict_getitem_setitem(self): - def func(i): + def test_dict_getitem_setitem(self): + def func(i): d = {'hello' : i} d['world'] = i + 1 - return d['hello'] * d['world'] + return d['hello'] * d['world'] res = self.interpret(func, [6]) assert res == 42 - def test_dict_getitem_keyerror(self): - def func(i): + def test_dict_getitem_keyerror(self): + def func(i): d = {'hello' : i} try: return d['world'] except KeyError: - return 0 + return 0 res = self.interpret(func, [6]) assert res == 0 def test_dict_del_simple(self): - def func(i): + def func(i): d = {'hello' : i} d['world'] = i + 1 del d['hello'] - return len(d) + return len(d) res = self.interpret(func, [6]) assert res == 1 @@ -71,7 +71,7 @@ assert res == True def test_empty_strings(self): - def func(i): + def func(i): d = {'' : i} del d[''] try: @@ -83,7 +83,7 @@ res = self.interpret(func, [6]) assert res == 1 - def func(i): + def func(i): d = {'' : i} del d[''] d[''] = i + 1 @@ -146,8 +146,8 @@ d1 = {} d1['hello'] = i + 1 d2 = {} - d2['world'] = d1 - return d2['world']['hello'] + d2['world'] = d1 + return d2['world']['hello'] res = self.interpret(func, [5]) assert res == 6 @@ -297,7 +297,7 @@ a = 0 for k, v in items: b += isinstance(k, B) - a += isinstance(v, A) + a += isinstance(v, A) return 3*b+a res = self.interpret(func, []) assert res == 8 @@ -316,7 +316,7 @@ a = 0 for k, v in dic.iteritems(): b += isinstance(k, B) - a += isinstance(v, A) + a += isinstance(v, A) return 3*b+a res = self.interpret(func, []) assert res == 8 @@ -342,11 +342,11 @@ def test_dict_contains_with_constant_dict(self): dic = {'4':1000, ' 8':200} def func(i): - return chr(i) in dic - res = self.interpret(func, [ord('4')]) + return chr(i) in dic + res = self.interpret(func, [ord('4')]) assert res is True - res = self.interpret(func, [1]) - assert res is False + res = self.interpret(func, [1]) + assert res is False def test_dict_or_none(self): class A: @@ -413,7 +413,7 @@ return g(get) res = self.interpret(f, []) - assert res == 2 + assert res == 2 def test_specific_obscure_bug(self): class A: pass @@ -622,6 +622,42 @@ res = self.interpret(func, []) assert res in [5263, 6352] + def test_dict_pop(self): + def f(n, default): + d = {} + d[2] = 3 + d[4] = 5 + if default == -1: + try: + x = d.pop(n) + except KeyError: + x = -1 + else: + x = d.pop(n, default) + return x * 10 + len(d) + res = self.interpret(f, [2, -1]) + assert res == 31 + res = self.interpret(f, [3, -1]) + assert res == -8 + res = self.interpret(f, [2, 5]) + assert res == 31 + + def test_dict_pop_instance(self): + class A(object): + pass + def f(n): + d = {} + d[2] = A() + x = d.pop(n, None) + if x is None: + return 12 + else: + return 15 + res = self.interpret(f, [2]) + assert res == 15 + res = self.interpret(f, [700]) + assert res == 12 + class TestLLtype(BaseTestRdict, LLRtypeMixin): def test_dict_but_not_with_char_keys(self): def func(i): @@ -633,19 +669,19 @@ res = self.interpret(func, [6]) assert res == 0 - def test_deleted_entry_reusage_with_colliding_hashes(self): - def lowlevelhash(value): + def test_deleted_entry_reusage_with_colliding_hashes(self): + def lowlevelhash(value): p = rstr.mallocstr(len(value)) for i in 
range(len(value)): p.chars[i] = value[i] - return rstr.LLHelpers.ll_strhash(p) + return rstr.LLHelpers.ll_strhash(p) - def func(c1, c2): - c1 = chr(c1) - c2 = chr(c2) + def func(c1, c2): + c1 = chr(c1) + c2 = chr(c2) d = {} d[c1] = 1 - d[c2] = 2 + d[c2] = 2 del d[c1] return d[c2] @@ -653,7 +689,7 @@ base = rdict.DICT_INITSIZE for y in range(0, 256): y = chr(y) - y_hash = lowlevelhash(y) % base + y_hash = lowlevelhash(y) % base char_by_hash.setdefault(y_hash, []).append(y) x, y = char_by_hash[0][:2] # find a collision @@ -661,18 +697,18 @@ res = self.interpret(func, [ord(x), ord(y)]) assert res == 2 - def func2(c1, c2): - c1 = chr(c1) - c2 = chr(c2) + def func2(c1, c2): + c1 = chr(c1) + c2 = chr(c2) d = {} d[c1] = 1 - d[c2] = 2 + d[c2] = 2 del d[c1] d[c1] = 3 - return d + return d res = self.interpret(func2, [ord(x), ord(y)]) - for i in range(len(res.entries)): + for i in range(len(res.entries)): assert not (res.entries.everused(i) and not res.entries.valid(i)) def func3(c0, c1, c2, c3, c4, c5, c6, c7): @@ -687,9 +723,9 @@ c7 = chr(c7) ; d[c7] = 1; del d[c7] return d - if rdict.DICT_INITSIZE != 8: + if rdict.DICT_INITSIZE != 8: py.test.skip("make dict tests more indepdent from initsize") - res = self.interpret(func3, [ord(char_by_hash[i][0]) + res = self.interpret(func3, [ord(char_by_hash[i][0]) for i in range(rdict.DICT_INITSIZE)]) count_frees = 0 for i in range(len(res.entries)): @@ -707,9 +743,9 @@ del d[chr(ord('a') + i)] return d res = self.interpret(func, [0]) - assert len(res.entries) > rdict.DICT_INITSIZE + assert len(res.entries) > rdict.DICT_INITSIZE res = self.interpret(func, [1]) - assert len(res.entries) == rdict.DICT_INITSIZE + assert len(res.entries) == rdict.DICT_INITSIZE def test_dict_valid_resize(self): # see if we find our keys after resize @@ -844,7 +880,7 @@ def test_prebuilt_list_of_addresses(self): from pypy.rpython.lltypesystem import llmemory - + TP = lltype.Struct('x', ('y', lltype.Signed)) a = lltype.malloc(TP, flavor='raw', immortal=True) b = lltype.malloc(TP, flavor='raw', immortal=True) @@ -858,7 +894,7 @@ d = {a_a: 3, a_b: 4, a_c: 5} d[a0] = 8 - + def func(i): if i == 0: ptr = a @@ -888,7 +924,7 @@ return a == b def rhash(a): return 3 - + def func(i): d = r_dict(eq, rhash, force_non_null=True) if not i: diff --git a/pypy/translator/goal/targetnopstandalone.py b/pypy/translator/goal/targetnopstandalone.py --- a/pypy/translator/goal/targetnopstandalone.py +++ b/pypy/translator/goal/targetnopstandalone.py @@ -7,10 +7,8 @@ actually implementing argv of the executable. 
""" -import os, sys - -def debug(msg): - os.write(2, "debug: " + msg + '\n') +def debug(msg): + print "debug:", msg # __________ Entry point __________ From noreply at buildbot.pypy.org Wed Feb 1 13:48:47 2012 From: noreply at buildbot.pypy.org (l.diekmann) Date: Wed, 1 Feb 2012 13:48:47 +0100 (CET) Subject: [pypy-commit] pypy type-specialized-instances: fixed OverflowError for type-specialized instances Message-ID: <20120201124847.2121882B67@wyvern.cs.uni-duesseldorf.de> Author: l.diekmann Branch: type-specialized-instances Changeset: r52013:ed1dbd45c349 Date: 2012-02-01 12:48 +0000 http://bitbucket.org/pypy/pypy/changeset/ed1dbd45c349/ Log: fixed OverflowError for type-specialized instances diff --git a/pypy/objspace/std/mapdict.py b/pypy/objspace/std/mapdict.py --- a/pypy/objspace/std/mapdict.py +++ b/pypy/objspace/std/mapdict.py @@ -29,26 +29,29 @@ self.terminator = terminator def read(self, obj, selector): - attr = self.findmap(selector) # index = self.index(selector) + attr = self.findmap(selector) if attr is None: return self.terminator._read_terminator(obj, selector) - return attr.read_attr(obj) #obj._mapdict_read_storage(index) + return attr.read_attr(obj) def write(self, obj, selector, w_value): from pypy.interpreter.error import OperationError - attr = self.findmap(selector) # index = self.index(selector) + attr = self.findmap(selector) if attr is None: return self.terminator._write_terminator(obj, selector, w_value) try: - attr.write_attr(obj, w_value) #obj._mapdict_write_storage(index, w_value) + attr.write_attr(obj, w_value) except OperationError, e: if not e.match(self.space, self.space.w_TypeError): raise - firstattr = obj._get_mapdict_map() - firstattr.delete(obj, selector) - firstattr.add_attr(obj, selector, w_value) + self._replace(obj, selector, w_value) return True + def _replace(self, obj, selector, w_value): + firstattr = obj._get_mapdict_map() + firstattr.delete(obj, selector) + firstattr.add_attr(obj, selector, w_value) + def delete(self, obj, selector): return None @@ -362,6 +365,9 @@ return self.space.wrap(value) def write_attr(self, obj, w_value): + if not is_taggable_int(self.space, w_value): + self._replace(obj, self.selector, w_value) + return erased = self.erase_item(self.space.int_w(w_value)) obj._mapdict_write_storage(self.position, erased) diff --git a/pypy/objspace/std/test/test_mapdict.py b/pypy/objspace/std/test/test_mapdict.py --- a/pypy/objspace/std/test/test_mapdict.py +++ b/pypy/objspace/std/test/test_mapdict.py @@ -776,6 +776,16 @@ assert a.x == 5 + def test_too_large_int(self): + class A(object): + def __init__(self): + self.x = 1 + + a = A() + a.x = 1234567890L + + assert a.x == 1234567890L + class AppTestWithMapDictAndCounters(object): def setup_class(cls): from pypy.interpreter import gateway From noreply at buildbot.pypy.org Wed Feb 1 13:56:16 2012 From: noreply at buildbot.pypy.org (l.diekmann) Date: Wed, 1 Feb 2012 13:56:16 +0100 (CET) Subject: [pypy-commit] pypy type-specialized-instances: these lines are now unnecessary Message-ID: <20120201125616.572D182B67@wyvern.cs.uni-duesseldorf.de> Author: l.diekmann Branch: type-specialized-instances Changeset: r52014:ce9d7cbdddfb Date: 2012-02-01 12:56 +0000 http://bitbucket.org/pypy/pypy/changeset/ce9d7cbdddfb/ Log: these lines are now unnecessary diff --git a/pypy/objspace/std/mapdict.py b/pypy/objspace/std/mapdict.py --- a/pypy/objspace/std/mapdict.py +++ b/pypy/objspace/std/mapdict.py @@ -39,12 +39,7 @@ attr = self.findmap(selector) if attr is None: return 
self.terminator._write_terminator(obj, selector, w_value) - try: - attr.write_attr(obj, w_value) - except OperationError, e: - if not e.match(self.space, self.space.w_TypeError): - raise - self._replace(obj, selector, w_value) + attr.write_attr(obj, w_value) return True def _replace(self, obj, selector, w_value): From noreply at buildbot.pypy.org Wed Feb 1 13:56:28 2012 From: noreply at buildbot.pypy.org (fijal) Date: Wed, 1 Feb 2012 13:56:28 +0100 (CET) Subject: [pypy-commit] pypy backend-vector-ops: following armin's suggestion remove the VECTOR type Message-ID: <20120201125628.78E4282B67@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: backend-vector-ops Changeset: r52015:51f070072eca Date: 2012-02-01 14:56 +0200 http://bitbucket.org/pypy/pypy/changeset/51f070072eca/ Log: following armin's suggestion remove the VECTOR type diff --git a/pypy/jit/backend/x86/regalloc.py b/pypy/jit/backend/x86/regalloc.py --- a/pypy/jit/backend/x86/regalloc.py +++ b/pypy/jit/backend/x86/regalloc.py @@ -5,7 +5,7 @@ import os from pypy.jit.metainterp.history import (Box, Const, ConstInt, ConstPtr, ResOperation, BoxPtr, ConstFloat, - BoxFloat, INT, REF, FLOAT, VECTOR, + BoxFloat, INT, REF, FLOAT, TargetToken, JitCellToken) from pypy.jit.backend.x86.regloc import * from pypy.rpython.lltypesystem import lltype, rffi, rstr @@ -87,7 +87,7 @@ class X86XMMRegisterManager(RegisterManager): - box_types = [FLOAT, VECTOR] + box_types = [FLOAT] all_regs = [xmm0, xmm1, xmm2, xmm3, xmm4, xmm5, xmm6, xmm7] # we never need lower byte I hope save_around_call_regs = all_regs @@ -256,7 +256,7 @@ return pass_on_stack def possibly_free_var(self, var): - if var.type in self.xrm.box_types: + if var.type == FLOAT: self.xrm.possibly_free_var(var) else: self.rm.possibly_free_var(var) @@ -274,7 +274,7 @@ def make_sure_var_in_reg(self, var, forbidden_vars=[], selected_reg=None, need_lower_byte=False): - if var.type in self.xrm.box_types: + if var.type == FLOAT: if isinstance(var, ConstFloat): return FloatImmedLoc(var.getfloatstorage()) return self.xrm.make_sure_var_in_reg(var, forbidden_vars, @@ -285,7 +285,7 @@ def force_allocate_reg(self, var, forbidden_vars=[], selected_reg=None, need_lower_byte=False): - if var.type in self.xrm.box_types: + if var.type == FLOAT: return self.xrm.force_allocate_reg(var, forbidden_vars, selected_reg, need_lower_byte) else: @@ -293,7 +293,7 @@ selected_reg, need_lower_byte) def force_spill_var(self, var): - if var.type in self.xrm.box_types: + if var.type == FLOAT: return self.xrm.force_spill_var(var) else: return self.rm.force_spill_var(var) @@ -530,7 +530,7 @@ def loc(self, v): if v is None: # xxx kludgy return None - if v.type in self.xrm.box_types: + if v.type == FLOAT: return self.xrm.loc(v) return self.rm.loc(v) diff --git a/pypy/jit/metainterp/history.py b/pypy/jit/metainterp/history.py --- a/pypy/jit/metainterp/history.py +++ b/pypy/jit/metainterp/history.py @@ -17,7 +17,6 @@ INT = 'i' REF = 'r' FLOAT = 'f' -VECTOR = 'e' STRUCT = 's' HOLE = '_' VOID = 'v' @@ -482,15 +481,6 @@ def repr_rpython(self): return repr_rpython(self, 'bi') -class BoxVector(Box): - type = VECTOR - - def __init__(self): - pass - - def _getrepr_(self): - return '' - class BoxFloat(Box): type = FLOAT _attrs_ = ('value',) @@ -523,6 +513,12 @@ def repr_rpython(self): return repr_rpython(self, 'bf') +class BoxVector(BoxFloat): + value = float('nan') + + def _getrepr_(self): + return '' + class BoxPtr(Box): type = REF _attrs_ = ('value',) From noreply at buildbot.pypy.org Wed Feb 1 14:52:36 2012 From: noreply at 
buildbot.pypy.org (fijal) Date: Wed, 1 Feb 2012 14:52:36 +0100 (CET) Subject: [pypy-commit] pypy backend-vector-ops: implement spilling. A bit of fun with alignment Message-ID: <20120201135236.4D67782B67@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: backend-vector-ops Changeset: r52016:f33339bbc410 Date: 2012-02-01 15:52 +0200 http://bitbucket.org/pypy/pypy/changeset/f33339bbc410/ Log: implement spilling. A bit of fun with alignment diff --git a/pypy/jit/backend/llsupport/regalloc.py b/pypy/jit/backend/llsupport/regalloc.py --- a/pypy/jit/backend/llsupport/regalloc.py +++ b/pypy/jit/backend/llsupport/regalloc.py @@ -20,8 +20,6 @@ self.used = [] # list of bools self.hint_frame_locations = {} - frame_depth = property(lambda:xxx, lambda:xxx) # XXX kill me - def get_frame_depth(self): return len(self.used) @@ -45,7 +43,7 @@ return self.get_new_loc(box) def get_new_loc(self, box): - size = self.frame_size(box.type) + size = self.frame_size(box) # frame_depth is rounded up to a multiple of 'size', assuming # that 'size' is a power of two. The reason for doing so is to # avoid obscure issues in jump.py with stack locations that try @@ -54,7 +52,7 @@ self.used.append(False) # index = self.get_frame_depth() - newloc = self.frame_pos(index, box.type) + newloc = self.frame_pos(index, box) for i in range(size): self.used.append(True) # @@ -71,7 +69,7 @@ index = self.get_loc_index(loc) if index < 0: return - endindex = index + self.frame_size(box.type) + endindex = index + self.frame_size(box) while len(self.used) < endindex: self.used.append(False) while index < endindex: @@ -91,7 +89,7 @@ return # already gone del self.bindings[box] # - size = self.frame_size(box.type) + size = self.frame_size(box) baseindex = self.get_loc_index(loc) if baseindex < 0: return @@ -104,7 +102,7 @@ index = self.get_loc_index(loc) if index < 0: return False - size = self.frame_size(box.type) + size = self.frame_size(box) for i in range(size): while (index + i) >= len(self.used): self.used.append(False) @@ -118,10 +116,10 @@ # abstract methods that need to be overwritten for specific assemblers @staticmethod - def frame_pos(loc, type): + def frame_pos(loc, box): raise NotImplementedError("Purely abstract") @staticmethod - def frame_size(type): + def frame_size(box): return 1 @staticmethod def get_loc_index(loc): @@ -256,7 +254,7 @@ del self.reg_bindings[v_to_spill] if self.frame_manager.get(v_to_spill) is None: newloc = self.frame_manager.loc(v_to_spill) - self.assembler.regalloc_mov(loc, newloc) + self.assembler.regalloc_mov(v_to_spill, loc, newloc) return loc def _pick_variable_to_spill(self, v, forbidden_vars, selected_reg=None, @@ -343,11 +341,11 @@ immloc = self.convert_to_imm(v) if selected_reg: if selected_reg in self.free_regs: - self.assembler.regalloc_mov(immloc, selected_reg) + self.assembler.regalloc_mov(v, immloc, selected_reg) return selected_reg loc = self._spill_var(v, forbidden_vars, selected_reg) self.free_regs.append(loc) - self.assembler.regalloc_mov(immloc, loc) + self.assembler.regalloc_mov(v, immloc, loc) return loc return immloc @@ -366,7 +364,7 @@ loc = self.force_allocate_reg(v, forbidden_vars, selected_reg, need_lower_byte=need_lower_byte) if prev_loc is not loc: - self.assembler.regalloc_mov(prev_loc, loc) + self.assembler.regalloc_mov(v, prev_loc, loc) return loc def _reallocate_from_to(self, from_v, to_v): @@ -378,10 +376,10 @@ if self.free_regs: loc = self.free_regs.pop() self.reg_bindings[v] = loc - self.assembler.regalloc_mov(prev_loc, loc) + 
self.assembler.regalloc_mov(v, prev_loc, loc) else: loc = self.frame_manager.loc(v) - self.assembler.regalloc_mov(prev_loc, loc) + self.assembler.regalloc_mov(v, prev_loc, loc) def force_result_in_reg(self, result_v, v, forbidden_vars=[]): """ Make sure that result is in the same register as v. @@ -395,13 +393,13 @@ loc = self.free_regs.pop() else: loc = self._spill_var(v, forbidden_vars, None) - self.assembler.regalloc_mov(self.convert_to_imm(v), loc) + self.assembler.regalloc_mov(v, self.convert_to_imm(v), loc) self.reg_bindings[result_v] = loc return loc if v not in self.reg_bindings: prev_loc = self.frame_manager.loc(v) loc = self.force_allocate_reg(v, forbidden_vars) - self.assembler.regalloc_mov(prev_loc, loc) + self.assembler.regalloc_mov(v, prev_loc, loc) assert v in self.reg_bindings if self.longevity[v][1] > self.position: # we need to find a new place for variable v and @@ -420,7 +418,7 @@ if not self.frame_manager.get(v): reg = self.reg_bindings[v] to = self.frame_manager.loc(v) - self.assembler.regalloc_mov(reg, to) + self.assembler.regalloc_mov(v, reg, to) # otherwise it's clean def before_call(self, force_store=[], save_all_regs=0): diff --git a/pypy/jit/backend/test/runner_test.py b/pypy/jit/backend/test/runner_test.py --- a/pypy/jit/backend/test/runner_test.py +++ b/pypy/jit/backend/test/runner_test.py @@ -3164,6 +3164,7 @@ assert a[0] == 26 assert a[1] == 30 lltype.free(a, flavor='raw') + class OOtypeBackendTest(BaseBackendTest): diff --git a/pypy/jit/backend/x86/assembler.py b/pypy/jit/backend/x86/assembler.py --- a/pypy/jit/backend/x86/assembler.py +++ b/pypy/jit/backend/x86/assembler.py @@ -1,7 +1,7 @@ import sys, os from pypy.jit.backend.llsupport import symbolic from pypy.jit.backend.llsupport.asmmemmgr import MachineDataBlockWrapper -from pypy.jit.metainterp.history import Const, Box, BoxInt, ConstInt +from pypy.jit.metainterp.history import Const, Box, BoxInt, ConstInt, BoxVector from pypy.jit.metainterp.history import AbstractFailDescr, INT, REF, FLOAT from pypy.jit.metainterp.history import JitCellToken from pypy.rpython.lltypesystem import lltype, rffi, rstr, llmemory @@ -833,8 +833,10 @@ # ------------------------------------------------------------ - def mov(self, from_loc, to_loc): - if (isinstance(from_loc, RegLoc) and from_loc.is_xmm) or (isinstance(to_loc, RegLoc) and to_loc.is_xmm): + def mov(self, box, from_loc, to_loc): + if isinstance(box, BoxVector): + self.mc.MOVDQU(to_loc, from_loc) + elif (isinstance(from_loc, RegLoc) and from_loc.is_xmm) or (isinstance(to_loc, RegLoc) and to_loc.is_xmm): self.mc.MOVSD(to_loc, from_loc) else: assert to_loc is not ebp @@ -1285,7 +1287,7 @@ self.mc.MOVZX8(resloc, rl) def genop_same_as(self, op, arglocs, resloc): - self.mov(arglocs[0], resloc) + self.mov(op.getarg(0), arglocs[0], resloc) genop_cast_ptr_to_int = genop_same_as genop_cast_int_to_ptr = genop_same_as diff --git a/pypy/jit/backend/x86/regalloc.py b/pypy/jit/backend/x86/regalloc.py --- a/pypy/jit/backend/x86/regalloc.py +++ b/pypy/jit/backend/x86/regalloc.py @@ -5,7 +5,7 @@ import os from pypy.jit.metainterp.history import (Box, Const, ConstInt, ConstPtr, ResOperation, BoxPtr, ConstFloat, - BoxFloat, INT, REF, FLOAT, + BoxFloat, INT, REF, FLOAT, BoxVector, TargetToken, JitCellToken) from pypy.jit.backend.x86.regloc import * from pypy.rpython.lltypesystem import lltype, rffi, rstr @@ -128,17 +128,25 @@ class X86FrameManager(FrameManager): @staticmethod - def frame_pos(i, box_type): - if IS_X86_32 and box_type == FLOAT: - return StackLoc(i, get_ebp_ofs(i+1), 
box_type) - else: - return StackLoc(i, get_ebp_ofs(i), box_type) + def frame_pos(i, box): + assert isinstance(box, Box) + if isinstance(box, BoxVector): + if IS_X86_32: + return StackLoc(i, get_ebp_ofs(i + 3), box.type) + return StackLoc(i, get_ebp_ofs(i + 1), box.type) + if IS_X86_32 and box.type == FLOAT: + return StackLoc(i, get_ebp_ofs(i+1), box.type) + return StackLoc(i, get_ebp_ofs(i), box.type) @staticmethod - def frame_size(box_type): - if IS_X86_32 and box_type == FLOAT: + def frame_size(box): + assert isinstance(box, Box) + if isinstance(box, BoxVector): + if IS_X86_32: + return 4 return 2 - else: - return 1 + if IS_X86_32 and box.type == FLOAT: + return 2 + return 1 @staticmethod def get_loc_index(loc): assert isinstance(loc, StackLoc) @@ -370,7 +378,10 @@ self.assembler.regalloc_perform_math(op, arglocs, result_loc) def locs_for_fail(self, guard_op): - return [self.loc(v) for v in guard_op.getfailargs()] + failargs = guard_op.getfailargs() + for arg in failargs: + assert not isinstance(arg, BoxVector) + return [self.loc(v) for v in failargs] def get_current_depth(self): # return (self.fm.frame_depth, self.param_depth), but trying to share @@ -701,11 +712,19 @@ self.xrm.possibly_free_vars_for_op(op) consider_float_add = _consider_float_op - consider_float_vector_add = _consider_float_op consider_float_sub = _consider_float_op consider_float_mul = _consider_float_op consider_float_truediv = _consider_float_op + def _consider_float_vector_op(self, op): + loc1 = self.xrm.make_sure_var_in_reg(op.getarg(1)) + args = op.getarglist() + loc0 = self.xrm.force_result_in_reg(op.result, op.getarg(0), args) + self.Perform(op, [loc0, loc1], loc0) + self.xrm.possibly_free_vars_for_op(op) + + consider_float_vector_add = _consider_float_vector_op + def _consider_float_cmp(self, op, guard_op): vx = op.getarg(0) vy = op.getarg(1) @@ -1240,7 +1259,7 @@ scale = self._get_unicode_item_scale() if not (isinstance(length_loc, ImmedLoc) or isinstance(length_loc, RegLoc)): - self.assembler.mov(length_loc, bytes_loc) + self.assembler.mov(args[4], ength_loc, bytes_loc) length_loc = bytes_loc self.assembler.load_effective_addr(length_loc, 0, scale, bytes_loc) length_box = bytes_box @@ -1347,6 +1366,7 @@ # Build the four lists for i in range(op.numargs()): box = op.getarg(i) + assert not isinstance(box, BoxVector) src_loc = self.loc(box) dst_loc = arglocs[i] if box.type != FLOAT: diff --git a/pypy/jit/backend/x86/regloc.py b/pypy/jit/backend/x86/regloc.py --- a/pypy/jit/backend/x86/regloc.py +++ b/pypy/jit/backend/x86/regloc.py @@ -557,6 +557,7 @@ MOVSD = _binaryop('MOVSD') MOVAPD = _binaryop('MOVAPD') MOVDQA = _binaryop('MOVDQA') + MOVDQU = _binaryop('MOVDQU') ADDSD = _binaryop('ADDSD') ADDPD = _binaryop('ADDPD') SUBSD = _binaryop('SUBSD') diff --git a/pypy/jit/backend/x86/rx86.py b/pypy/jit/backend/x86/rx86.py --- a/pypy/jit/backend/x86/rx86.py +++ b/pypy/jit/backend/x86/rx86.py @@ -726,6 +726,10 @@ regtype='XMM') define_modrm_modes('MOVDQA_*x', ['\x66', rex_nw, '\x0F\x7F', register(2, 8)], regtype='XMM') +define_modrm_modes('MOVDQU_x*', ['\xF3', rex_nw, '\x0F\x6F', register(1, 8)], + regtype='XMM') +define_modrm_modes('MOVDQU_*x', ['\xF3', rex_nw, '\x0F\x7F', register(2, 8)], + regtype='XMM') define_modrm_modes('SQRTSD_x*', ['\xF2', rex_nw, '\x0F\x51', register(1,8)], regtype='XMM') diff --git a/pypy/jit/backend/x86/test/test_runner.py b/pypy/jit/backend/x86/test/test_runner.py --- a/pypy/jit/backend/x86/test/test_runner.py +++ b/pypy/jit/backend/x86/test/test_runner.py @@ -518,6 +518,58 @@ assert 
self.cpu.get_latest_value_int(3) == 42 + def test_vector_spill(self): + A = lltype.Array(lltype.Float, hints={'nolength': True, + 'memory_position_alignment': 16}) + descr0 = self.cpu.arraydescrof(A) + looptoken = JitCellToken() + ops = parse(""" + [p0, p1] + vec0 = getarrayitem_vector_raw(p0, 0, descr=descr0) + vec1 = getarrayitem_vector_raw(p1, 2, descr=descr0) + vec2 = getarrayitem_vector_raw(p1, 4, descr=descr0) + vec3 = getarrayitem_vector_raw(p1, 6, descr=descr0) + vec4 = getarrayitem_vector_raw(p1, 8, descr=descr0) + vec5 = getarrayitem_vector_raw(p1, 10, descr=descr0) + vec6 = getarrayitem_vector_raw(p1, 12, descr=descr0) + vec7 = getarrayitem_vector_raw(p1, 14, descr=descr0) + vec8 = getarrayitem_vector_raw(p1, 16, descr=descr0) + vec9 = getarrayitem_vector_raw(p1, 18, descr=descr0) + vec10 = getarrayitem_vector_raw(p1, 20, descr=descr0) + vec11 = getarrayitem_vector_raw(p1, 22, descr=descr0) + vec12 = getarrayitem_vector_raw(p1, 24, descr=descr0) + vec13 = getarrayitem_vector_raw(p1, 26, descr=descr0) + vec14 = getarrayitem_vector_raw(p1, 28, descr=descr0) + vec15 = getarrayitem_vector_raw(p1, 30, descr=descr0) + vec16 = float_vector_add(vec0, vec1) + vec17 = float_vector_add(vec16, vec2) + vec18 = float_vector_add(vec17, vec3) + vec19 = float_vector_add(vec18, vec4) + vec20 = float_vector_add(vec19, vec5) + vec21 = float_vector_add(vec20, vec6) + vec22 = float_vector_add(vec21, vec7) + vec23 = float_vector_add(vec22, vec8) + vec24 = float_vector_add(vec23, vec9) + vec25 = float_vector_add(vec24, vec10) + vec26 = float_vector_add(vec25, vec11) + vec27 = float_vector_add(vec26, vec12) + vec28 = float_vector_add(vec27, vec13) + vec29 = float_vector_add(vec28, vec14) + vec30 = float_vector_add(vec29, vec15) + setarrayitem_vector_raw(p0, 0, vec30, descr=descr0) + finish() + """, namespace=locals()) + self.cpu.compile_loop(ops.inputargs, ops.operations, looptoken) + a = lltype.malloc(A, 32, flavor='raw') + assert rffi.cast(lltype.Signed, a) % 16 == 0 + for i in range(32): + a[i] = float(i) + self.cpu.execute_token(looptoken, a, a) + assert a[0] == 16 * 15 + assert a[1] == 16 * 16 + lltype.free(a, flavor='raw') + + class TestDebuggingAssembler(object): def setup_method(self, meth): self.cpu = CPU(rtyper=None, stats=FakeStats()) From noreply at buildbot.pypy.org Wed Feb 1 14:57:40 2012 From: noreply at buildbot.pypy.org (fijal) Date: Wed, 1 Feb 2012 14:57:40 +0100 (CET) Subject: [pypy-commit] pypy backend-vector-ops: fix test_assembler Message-ID: <20120201135740.9E15F82B67@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: backend-vector-ops Changeset: r52017:e64e4871ab14 Date: 2012-02-01 15:57 +0200 http://bitbucket.org/pypy/pypy/changeset/e64e4871ab14/ Log: fix test_assembler diff --git a/pypy/jit/backend/x86/assembler.py b/pypy/jit/backend/x86/assembler.py --- a/pypy/jit/backend/x86/assembler.py +++ b/pypy/jit/backend/x86/assembler.py @@ -3,7 +3,7 @@ from pypy.jit.backend.llsupport.asmmemmgr import MachineDataBlockWrapper from pypy.jit.metainterp.history import Const, Box, BoxInt, ConstInt, BoxVector from pypy.jit.metainterp.history import AbstractFailDescr, INT, REF, FLOAT -from pypy.jit.metainterp.history import JitCellToken +from pypy.jit.metainterp.history import JitCellToken, BoxPtr, BoxFloat from pypy.rpython.lltypesystem import lltype, rffi, rstr, llmemory from pypy.rpython.lltypesystem.lloperation import llop from pypy.rpython.annlowlevel import llhelper @@ -1802,7 +1802,7 @@ def rebuild_faillocs_from_descr(self, bytecode): from pypy.jit.backend.x86.regalloc 
import X86FrameManager - descr_to_box_type = [REF, INT, FLOAT] + descr_to_box_type = [BoxPtr(), BoxInt(), BoxFloat()] bytecode = rffi.cast(rffi.UCHARP, bytecode) arglocs = [] code_inputarg = False diff --git a/pypy/jit/backend/x86/test/test_assembler.py b/pypy/jit/backend/x86/test/test_assembler.py --- a/pypy/jit/backend/x86/test/test_assembler.py +++ b/pypy/jit/backend/x86/test/test_assembler.py @@ -33,12 +33,12 @@ failargs = [BoxInt(), BoxPtr(), BoxFloat()] * 3 failargs.insert(6, None) failargs.insert(7, None) - locs = [X86FrameManager.frame_pos(0, INT), - X86FrameManager.frame_pos(1, REF), - X86FrameManager.frame_pos(10, FLOAT), - X86FrameManager.frame_pos(100, INT), - X86FrameManager.frame_pos(101, REF), - X86FrameManager.frame_pos(110, FLOAT), + locs = [X86FrameManager.frame_pos(0, BoxInt()), + X86FrameManager.frame_pos(1, BoxPtr()), + X86FrameManager.frame_pos(10, BoxFloat()), + X86FrameManager.frame_pos(100, BoxInt()), + X86FrameManager.frame_pos(101, BoxPtr()), + X86FrameManager.frame_pos(110, BoxFloat()), None, None, ebx, @@ -272,7 +272,7 @@ def test_simple(self): def callback(asm): - asm.mov(imm(42), edx) + asm.mov(BoxInt(), imm(42), edx) asm.regalloc_push(edx) asm.regalloc_pop(eax) res = self.do_test(callback) @@ -280,9 +280,9 @@ def test_push_stack(self): def callback(asm): - loc = self.fm.frame_pos(5, INT) + loc = self.fm.frame_pos(5, BoxInt()) asm.mc.SUB_ri(esp.value, 64) - asm.mov(imm(42), loc) + asm.mov(BoxInt(), imm(42), loc) asm.regalloc_push(loc) asm.regalloc_pop(eax) asm.mc.ADD_ri(esp.value, 64) @@ -291,12 +291,12 @@ def test_pop_stack(self): def callback(asm): - loc = self.fm.frame_pos(5, INT) + loc = self.fm.frame_pos(5, BoxInt()) asm.mc.SUB_ri(esp.value, 64) - asm.mov(imm(42), edx) + asm.mov(BoxInt(), imm(42), edx) asm.regalloc_push(edx) asm.regalloc_pop(loc) - asm.mov(loc, eax) + asm.mov(BoxInt(), loc, eax) asm.mc.ADD_ri(esp.value, 64) res = self.do_test(callback) assert res == 42 @@ -305,7 +305,7 @@ def callback(asm): c = ConstFloat(longlong.getfloatstorage(-42.5)) loc = self.xrm.convert_to_imm(c) - asm.mov(loc, xmm5) + asm.mov(BoxInt(), loc, xmm5) asm.regalloc_push(xmm5) asm.regalloc_pop(xmm0) asm.mc.CVTTSD2SI(eax, xmm0) @@ -316,10 +316,10 @@ def callback(asm): c = ConstFloat(longlong.getfloatstorage(-42.5)) loc = self.xrm.convert_to_imm(c) - loc2 = self.fm.frame_pos(4, FLOAT) + loc2 = self.fm.frame_pos(4, BoxFloat()) asm.mc.SUB_ri(esp.value, 64) - asm.mov(loc, xmm5) - asm.mov(xmm5, loc2) + asm.mov(BoxInt(), loc, xmm5) + asm.mov(BoxInt(), xmm5, loc2) asm.regalloc_push(loc2) asm.regalloc_pop(xmm0) asm.mc.ADD_ri(esp.value, 64) @@ -331,12 +331,12 @@ def callback(asm): c = ConstFloat(longlong.getfloatstorage(-42.5)) loc = self.xrm.convert_to_imm(c) - loc2 = self.fm.frame_pos(4, FLOAT) + loc2 = self.fm.frame_pos(4, BoxFloat()) asm.mc.SUB_ri(esp.value, 64) - asm.mov(loc, xmm5) + asm.mov(BoxInt(), loc, xmm5) asm.regalloc_push(xmm5) asm.regalloc_pop(loc2) - asm.mov(loc2, xmm0) + asm.mov(BoxInt(), loc2, xmm0) asm.mc.ADD_ri(esp.value, 64) asm.mc.CVTTSD2SI(eax, xmm0) res = self.do_test(callback) From noreply at buildbot.pypy.org Wed Feb 1 14:59:32 2012 From: noreply at buildbot.pypy.org (fijal) Date: Wed, 1 Feb 2012 14:59:32 +0100 (CET) Subject: [pypy-commit] pypy backend-vector-ops: pass None here, jumps have no vectors Message-ID: <20120201135932.69C9982B67@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: backend-vector-ops Changeset: r52018:607bec7b7a95 Date: 2012-02-01 15:59 +0200 http://bitbucket.org/pypy/pypy/changeset/607bec7b7a95/ Log: pass None here, 
jumps have no vectors diff --git a/pypy/jit/backend/x86/jump.py b/pypy/jit/backend/x86/jump.py --- a/pypy/jit/backend/x86/jump.py +++ b/pypy/jit/backend/x86/jump.py @@ -76,9 +76,9 @@ assembler.regalloc_push(src) assembler.regalloc_pop(dst) return - assembler.regalloc_mov(src, tmpreg) + assembler.regalloc_mov(None, src, tmpreg) src = tmpreg - assembler.regalloc_mov(src, dst) + assembler.regalloc_mov(None, src, dst) def remap_frame_layout_mixed(assembler, src_locations1, dst_locations1, tmpreg1, From noreply at buildbot.pypy.org Wed Feb 1 15:03:50 2012 From: noreply at buildbot.pypy.org (fijal) Date: Wed, 1 Feb 2012 15:03:50 +0100 (CET) Subject: [pypy-commit] pypy backend-vector-ops: fix calling convention Message-ID: <20120201140350.6F6A382B67@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: backend-vector-ops Changeset: r52019:73c3fd1e59c0 Date: 2012-02-01 16:03 +0200 http://bitbucket.org/pypy/pypy/changeset/73c3fd1e59c0/ Log: fix calling convention diff --git a/pypy/jit/backend/x86/regalloc.py b/pypy/jit/backend/x86/regalloc.py --- a/pypy/jit/backend/x86/regalloc.py +++ b/pypy/jit/backend/x86/regalloc.py @@ -233,7 +233,7 @@ cur_frame_pos -= 2 else: cur_frame_pos -= 1 - loc = self.fm.frame_pos(cur_frame_pos, box.type) + loc = self.fm.frame_pos(cur_frame_pos, box) self.fm.set_binding(box, loc) def _set_initial_bindings_regs_64(self, inputargs): From noreply at buildbot.pypy.org Wed Feb 1 15:08:48 2012 From: noreply at buildbot.pypy.org (fijal) Date: Wed, 1 Feb 2012 15:08:48 +0100 (CET) Subject: [pypy-commit] pypy backend-vector-ops: another test fix Message-ID: <20120201140848.7FBBD82B6F@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: backend-vector-ops Changeset: r52020:6444c851ac68 Date: 2012-02-01 16:08 +0200 http://bitbucket.org/pypy/pypy/changeset/6444c851ac68/ Log: another test fix diff --git a/pypy/jit/backend/x86/test/test_regalloc.py b/pypy/jit/backend/x86/test/test_regalloc.py --- a/pypy/jit/backend/x86/test/test_regalloc.py +++ b/pypy/jit/backend/x86/test/test_regalloc.py @@ -54,7 +54,7 @@ def dump(self, *args): pass - def regalloc_mov(self, from_loc, to_loc): + def regalloc_mov(self, box, from_loc, to_loc): self.movs.append((from_loc, to_loc)) def regalloc_perform(self, op, arglocs, resloc): From noreply at buildbot.pypy.org Wed Feb 1 15:13:05 2012 From: noreply at buildbot.pypy.org (fijal) Date: Wed, 1 Feb 2012 15:13:05 +0100 (CET) Subject: [pypy-commit] pypy backend-vector-ops: fix test jump Message-ID: <20120201141305.1841B71069B@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: backend-vector-ops Changeset: r52021:0fef229172ef Date: 2012-02-01 16:12 +0200 http://bitbucket.org/pypy/pypy/changeset/0fef229172ef/ Log: fix test jump diff --git a/pypy/jit/backend/x86/test/test_jump.py b/pypy/jit/backend/x86/test/test_jump.py --- a/pypy/jit/backend/x86/test/test_jump.py +++ b/pypy/jit/backend/x86/test/test_jump.py @@ -3,15 +3,20 @@ from pypy.jit.backend.x86.regalloc import X86FrameManager from pypy.jit.backend.x86.jump import remap_frame_layout from pypy.jit.backend.x86.jump import remap_frame_layout_mixed -from pypy.jit.metainterp.history import INT +from pypy.jit.metainterp.history import INT, BoxInt, BoxFloat -frame_pos = X86FrameManager.frame_pos +def frame_pos(pos, tp): + if tp == INT: + box = BoxInt() + else: + box = BoxFloat() + return X86FrameManager.frame_pos(pos, box) class MockAssembler: def __init__(self): self.ops = [] - def regalloc_mov(self, from_loc, to_loc): + def regalloc_mov(self, v, from_loc, to_loc): 
self.ops.append(('mov', from_loc, to_loc)) def regalloc_push(self, loc): @@ -396,7 +401,7 @@ (-488, -488), # - one self-application of -488 ] class FakeAssembler: - def regalloc_mov(self, src, dst): + def regalloc_mov(self, v, src, dst): print "mov", src, dst def regalloc_push(self, x): print "push", x From noreply at buildbot.pypy.org Wed Feb 1 15:24:20 2012 From: noreply at buildbot.pypy.org (fijal) Date: Wed, 1 Feb 2012 15:24:20 +0100 (CET) Subject: [pypy-commit] pypy backend-vector-ops: ate a letter Message-ID: <20120201142420.E9AB371073C@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: backend-vector-ops Changeset: r52022:a39f6d580a09 Date: 2012-02-01 16:23 +0200 http://bitbucket.org/pypy/pypy/changeset/a39f6d580a09/ Log: ate a letter diff --git a/pypy/jit/backend/x86/regalloc.py b/pypy/jit/backend/x86/regalloc.py --- a/pypy/jit/backend/x86/regalloc.py +++ b/pypy/jit/backend/x86/regalloc.py @@ -1259,7 +1259,7 @@ scale = self._get_unicode_item_scale() if not (isinstance(length_loc, ImmedLoc) or isinstance(length_loc, RegLoc)): - self.assembler.mov(args[4], ength_loc, bytes_loc) + self.assembler.mov(args[4], length_loc, bytes_loc) length_loc = bytes_loc self.assembler.load_effective_addr(length_loc, 0, scale, bytes_loc) length_box = bytes_box diff --git a/pypy/jit/metainterp/optimizeopt/__init__.py b/pypy/jit/metainterp/optimizeopt/__init__.py --- a/pypy/jit/metainterp/optimizeopt/__init__.py +++ b/pypy/jit/metainterp/optimizeopt/__init__.py @@ -9,6 +9,7 @@ from pypy.jit.metainterp.optimizeopt.simplify import OptSimplify from pypy.jit.metainterp.optimizeopt.pure import OptPure from pypy.jit.metainterp.optimizeopt.earlyforce import OptEarlyForce +from pypy.jit.metainterp.optimizeopt.vectorize import OptVectorize from pypy.rlib.jit import PARAMETERS from pypy.rlib.unroll import unrolling_iterable from pypy.rlib.debug import debug_start, debug_stop, debug_print @@ -21,6 +22,7 @@ ('earlyforce', OptEarlyForce), ('pure', OptPure), ('heap', OptHeap), + ('vectorize', OptVectorize), # XXX check if CPU supports that maybe ('ffi', None), ('unroll', None)] # no direct instantiation of unroll From noreply at buildbot.pypy.org Wed Feb 1 16:53:58 2012 From: noreply at buildbot.pypy.org (arigo) Date: Wed, 1 Feb 2012 16:53:58 +0100 (CET) Subject: [pypy-commit] pypy stm-gc: Started work on the STM GC. Message-ID: <20120201155358.603FB7106B3@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-gc Changeset: r52023:beaddf970c53 Date: 2012-02-01 16:53 +0100 http://bitbucket.org/pypy/pypy/changeset/beaddf970c53/ Log: Started work on the STM GC. diff --git a/pypy/rpython/memory/gc/stmgc.py b/pypy/rpython/memory/gc/stmgc.py new file mode 100644 --- /dev/null +++ b/pypy/rpython/memory/gc/stmgc.py @@ -0,0 +1,92 @@ +from pypy.rpython.lltypesystem import lltype, llmemory, llarena +from pypy.rpython.memory.gc.base import GCBase +from pypy.rlib.rarithmetic import LONG_BIT + + +WORD = LONG_BIT // 8 +NULL = llmemory.NULL + +first_gcflag = 1 << (LONG_BIT//2) + +GCFLAG_GLOBAL = first_gcflag << 0 +GCFLAG_WAS_COPIED = first_gcflag << 1 + + +def always_inline(fn): + fn._always_inline_ = True + return fn +def dont_inline(fn): + fn._dont_inline_ = True + return fn + + +class StmOperations(object): + def _freeze_(self): + return True + + +class StmGC(GCBase): + _alloc_flavor_ = "raw" + inline_simple_malloc = True + inline_simple_malloc_varsize = True + needs_write_barrier = "stm" + prebuilt_gc_objects_are_static_roots = False + malloc_zero_filled = True # xxx? 
+ + HDR = lltype.Struct('header', ('tid', lltype.Signed), + ('version', lltype.Signed)) + typeid_is_in_field = 'tid' + withhash_flag_is_in_field = 'tid', 'XXX' + + GCTLS = lltype.Struct('GCTLS', ('nursery_free', llmemory.Address), + ('nursery_top', llmemory.Address), + ('nursery_start', llmemory.Address), + ('nursery_size', lltype.Signed)) + + + def __init__(self, config, stm_operations, + max_nursery_size=1024, + **kwds): + GCBase.__init__(self, config, **kwds) + self.stm_operations = stm_operations + self.max_nursery_size = max_nursery_size + + + def setup(self): + """Called at run-time to initialize the GC.""" + GCBase.setup(self) + self.setup_thread() + + def _alloc_nursery(self): + nursery = llarena.arena_malloc(self.max_nursery_size, 1) + if not nursery: + raise MemoryError("cannot allocate nursery") + return nursery + + def setup_thread(self): + tls = lltype.malloc(self.GCTLS, flavor='raw') + self.stm_operations.set_tls(llmemory.cast_ptr_to_adr(tls)) + tls.nursery_start = self._alloc_nursery() + tls.nursery_size = self.max_nursery_size + tls.nursery_free = tls.nursery_start + tls.nursery_top = tls.nursery_start + tls.nursery_size + + @always_inline + def get_tls(self): + tls = self.stm_operations.get_tls() + return llmemory.cast_adr_to_ptr(tls, lltype.Ptr(self.GCTLS)) + + @always_inline + def allocate_bump_pointer(self, size): + tls = self.get_tls() + free = tls.nursery_free + top = tls.nursery_top + new = free + size + tls.nursery_free = new + if new > top: + free = self.local_collection(free) + return free + + @dont_inline + def local_collection(self, oldfree): + raise MemoryError("nursery exhausted") # XXX for now diff --git a/pypy/rpython/memory/gc/test/test_stmgc.py b/pypy/rpython/memory/gc/test/test_stmgc.py new file mode 100644 --- /dev/null +++ b/pypy/rpython/memory/gc/test/test_stmgc.py @@ -0,0 +1,35 @@ +from pypy.rpython.lltypesystem import lltype, llmemory +from pypy.rpython.memory.gc.stmgc import StmGC + + +class FakeStmOperations: + def set_tls(self, tls): + assert lltype.typeOf(tls) == llmemory.Address + self._tls = tls + + def get_tls(self): + return self._tls + + +class TestBasic: + GCClass = StmGC + + def setup_method(self, meth): + from pypy.config.pypyoption import get_pypy_config + config = get_pypy_config(translating=True).translation + self.gc = self.GCClass(config, FakeStmOperations(), + translated_to_c=False) + self.gc.DEBUG = True + self.gc.setup() + + def test_gc_creation_works(self): + pass + + def test_allocate_bump_pointer(self): + a3 = self.gc.allocate_bump_pointer(3) + a4 = self.gc.allocate_bump_pointer(4) + a5 = self.gc.allocate_bump_pointer(5) + a6 = self.gc.allocate_bump_pointer(6) + assert a4 - a3 == 3 + assert a5 - a4 == 4 + assert a6 - a5 == 5 From noreply at buildbot.pypy.org Wed Feb 1 17:20:37 2012 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Wed, 1 Feb 2012 17:20:37 +0100 (CET) Subject: [pypy-commit] pypy default: Rename in_recursion to portal_call_depth in the metainterp, which makes much more sense. Message-ID: <20120201162037.34A0B7106B3@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: Changeset: r52024:f054c58ba588 Date: 2012-02-01 11:20 -0500 http://bitbucket.org/pypy/pypy/changeset/f054c58ba588/ Log: Rename in_recursion to portal_call_depth in the metainterp, which makes much more sense. 
diff --git a/pypy/jit/metainterp/pyjitpl.py b/pypy/jit/metainterp/pyjitpl.py --- a/pypy/jit/metainterp/pyjitpl.py +++ b/pypy/jit/metainterp/pyjitpl.py @@ -974,13 +974,13 @@ any_operation = len(self.metainterp.history.operations) > 0 jitdriver_sd = self.metainterp.staticdata.jitdrivers_sd[jdindex] self.verify_green_args(jitdriver_sd, greenboxes) - self.debug_merge_point(jitdriver_sd, jdindex, self.metainterp.in_recursion, + self.debug_merge_point(jitdriver_sd, jdindex, self.metainterp.portal_call_depth, greenboxes) if self.metainterp.seen_loop_header_for_jdindex < 0: if not any_operation: return - if self.metainterp.in_recursion or not self.metainterp.get_procedure_token(greenboxes, True): + if self.metainterp.portal_call_depth or not self.metainterp.get_procedure_token(greenboxes, True): if not jitdriver_sd.no_loop_header: return # automatically add a loop_header if there is none @@ -992,7 +992,7 @@ self.metainterp.seen_loop_header_for_jdindex = -1 # - if not self.metainterp.in_recursion: + if not self.metainterp.portal_call_depth: assert jitdriver_sd is self.metainterp.jitdriver_sd # Set self.pc to point to jit_merge_point instead of just after: # if reached_loop_header() raises SwitchToBlackhole, then the @@ -1028,11 +1028,11 @@ assembler_call=True) raise ChangeFrame - def debug_merge_point(self, jitdriver_sd, jd_index, in_recursion, greenkey): + def debug_merge_point(self, jitdriver_sd, jd_index, portal_call_depth, greenkey): # debugging: produce a DEBUG_MERGE_POINT operation loc = jitdriver_sd.warmstate.get_location_str(greenkey) debug_print(loc) - args = [ConstInt(jd_index), ConstInt(in_recursion)] + greenkey + args = [ConstInt(jd_index), ConstInt(portal_call_depth)] + greenkey self.metainterp.history.record(rop.DEBUG_MERGE_POINT, args, None) @arguments("box", "label") @@ -1552,7 +1552,7 @@ # ____________________________________________________________ class MetaInterp(object): - in_recursion = 0 + portal_call_depth = 0 cancel_count = 0 def __init__(self, staticdata, jitdriver_sd): @@ -1587,7 +1587,7 @@ def newframe(self, jitcode, greenkey=None): if jitcode.is_portal: - self.in_recursion += 1 + self.portal_call_depth += 1 if greenkey is not None and self.is_main_jitcode(jitcode): self.portal_trace_positions.append( (greenkey, len(self.history.operations))) @@ -1603,7 +1603,7 @@ frame = self.framestack.pop() jitcode = frame.jitcode if jitcode.is_portal: - self.in_recursion -= 1 + self.portal_call_depth -= 1 if frame.greenkey is not None and self.is_main_jitcode(jitcode): self.portal_trace_positions.append( (None, len(self.history.operations))) @@ -1662,17 +1662,17 @@ raise self.staticdata.ExitFrameWithExceptionRef(self.cpu, excvaluebox.getref_base()) def check_recursion_invariant(self): - in_recursion = -1 + portal_call_depth = -1 for frame in self.framestack: jitcode = frame.jitcode assert jitcode.is_portal == len([ jd for jd in self.staticdata.jitdrivers_sd if jd.mainjitcode is jitcode]) if jitcode.is_portal: - in_recursion += 1 - if in_recursion != self.in_recursion: - print "in_recursion problem!!!" - print in_recursion, self.in_recursion + portal_call_depth += 1 + if portal_call_depth != self.portal_call_depth: + print "portal_call_depth problem!!!" 
+ print portal_call_depth, self.portal_call_depth for frame in self.framestack: jitcode = frame.jitcode if jitcode.is_portal: @@ -2183,11 +2183,11 @@ def initialize_state_from_start(self, original_boxes): # ----- make a new frame ----- - self.in_recursion = -1 # always one portal around + self.portal_call_depth = -1 # always one portal around self.framestack = [] f = self.newframe(self.jitdriver_sd.mainjitcode) f.setup_call(original_boxes) - assert self.in_recursion == 0 + assert self.portal_call_depth == 0 self.virtualref_boxes = [] self.initialize_withgreenfields(original_boxes) self.initialize_virtualizable(original_boxes) @@ -2198,7 +2198,7 @@ # otherwise the jit_virtual_refs are left in a dangling state. rstack._stack_criticalcode_start() try: - self.in_recursion = -1 # always one portal around + self.portal_call_depth = -1 # always one portal around self.history = history.History() inputargs_and_holes = self.rebuild_state_after_failure(resumedescr) self.history.inputargs = [box for box in inputargs_and_holes if box] From noreply at buildbot.pypy.org Thu Feb 2 04:30:32 2012 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Thu, 2 Feb 2012 04:30:32 +0100 (CET) Subject: [pypy-commit] pypy default: Added ndarray.{itemsize, nbytes} Message-ID: <20120202033032.203827106B3@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: Changeset: r52025:e2ff34308249 Date: 2012-02-01 22:30 -0500 http://bitbucket.org/pypy/pypy/changeset/e2ff34308249/ Log: Added ndarray.{itemsize, nbytes} diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -267,7 +267,7 @@ out_size = support.product(out_shape) result = W_NDimArray(out_size, out_shape, dtype) # This is the place to add fpypy and blas - return multidim_dot(space, self.get_concrete(), + return multidim_dot(space, self.get_concrete(), other.get_concrete(), result, dtype, other_critical_dim) @@ -280,6 +280,12 @@ def descr_get_ndim(self, space): return space.wrap(len(self.shape)) + def descr_get_itemsize(self, space): + return space.wrap(self.find_dtype().itemtype.get_element_size()) + + def descr_get_nbytes(self, space): + return space.wrap(self.size * self.find_dtype().itemtype.get_element_size()) + @jit.unroll_safe def descr_get_shape(self, space): return space.newtuple([space.wrap(i) for i in self.shape]) @@ -507,7 +513,7 @@ w_shape = space.newtuple(args_w) new_shape = get_shape_from_iterable(space, self.size, w_shape) return self.reshape(space, new_shape) - + def reshape(self, space, new_shape): concrete = self.get_concrete() # Since we got to here, prod(new_shape) == self.size @@ -1289,11 +1295,13 @@ BaseArray.descr_set_shape), size = GetSetProperty(BaseArray.descr_get_size), ndim = GetSetProperty(BaseArray.descr_get_ndim), - item = interp2app(BaseArray.descr_item), + itemsize = GetSetProperty(BaseArray.descr_get_itemsize), + nbytes = GetSetProperty(BaseArray.descr_get_nbytes), T = GetSetProperty(BaseArray.descr_get_transpose), flat = GetSetProperty(BaseArray.descr_get_flatiter), ravel = interp2app(BaseArray.descr_ravel), + item = interp2app(BaseArray.descr_item), mean = interp2app(BaseArray.descr_mean), sum = interp2app(BaseArray.descr_sum), @@ -1349,8 +1357,8 @@ return space.wrap(self.index) def descr_coords(self, space): - coords, step, lngth = to_coords(space, self.base.shape, - self.base.size, self.base.order, + coords, step, lngth = to_coords(space, self.base.shape, + self.base.size, 
self.base.order, space.wrap(self.index)) return space.newtuple([space.wrap(c) for c in coords]) @@ -1380,7 +1388,7 @@ step=step, res=res, ri=ri, - ) + ) w_val = base.getitem(basei.offset) res.setitem(ri.offset,w_val) basei = basei.next_skip_x(shapelen, step) @@ -1408,7 +1416,7 @@ arr=arr, ai=ai, lngth=lngth, - ) + ) v = arr.getitem(ai).convert_to(base.dtype) base.setitem(basei.offset, v) # need to repeat input values until all assignments are done @@ -1424,7 +1432,6 @@ W_FlatIterator.typedef = TypeDef( 'flatiter', - #__array__ = #MISSING __iter__ = interp2app(W_FlatIterator.descr_iter), __getitem__ = interp2app(W_FlatIterator.descr_getitem), __setitem__ = interp2app(W_FlatIterator.descr_setitem), @@ -1434,7 +1441,6 @@ __le__ = interp2app(BaseArray.descr_le), __gt__ = interp2app(BaseArray.descr_gt), __ge__ = interp2app(BaseArray.descr_ge), - #__sizeof__ #MISSING base = GetSetProperty(W_FlatIterator.descr_base), index = GetSetProperty(W_FlatIterator.descr_index), coords = GetSetProperty(W_FlatIterator.descr_coords), diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -173,7 +173,7 @@ def _to_coords(index, order): return to_coords(self.space, [2, 3, 4], 24, order, self.space.wrap(index))[0] - + assert _to_coords(0, 'C') == [0, 0, 0] assert _to_coords(1, 'C') == [0, 0, 1] assert _to_coords(-1, 'C') == [1, 2, 3] @@ -306,7 +306,7 @@ from _numpypy import arange a = arange(15).reshape(3, 5) assert a[1, 3] == 8 - assert a.T[1, 2] == 11 + assert a.T[1, 2] == 11 def test_setitem(self): from _numpypy import array @@ -1121,14 +1121,14 @@ f1 = array([0,1]) f = concatenate((f1, [2], f1, [7])) assert (f == [0,1,2,0,1,7]).all() - + bad_axis = raises(ValueError, concatenate, (a1,a2), axis=1) assert str(bad_axis.value) == "bad axis argument" - + concat_zero = raises(ValueError, concatenate, ()) assert str(concat_zero.value) == \ "concatenation of zero-length sequences is impossible" - + dims_disagree = raises(ValueError, concatenate, (a1, b1), axis=0) assert str(dims_disagree.value) == \ "array dimensions must agree except for axis being concatenated" @@ -1163,6 +1163,25 @@ a = array([[1, 2], [3, 4]]) assert (a.T.flatten() == [1, 3, 2, 4]).all() + def test_itemsize(self): + from _numpypy import ones, dtype, array + + for obj in [float, bool, int]: + assert ones(1, dtype=obj).itemsize == dtype(obj).itemsize + assert (ones(1) + ones(1)).itemsize == 8 + assert array(1).itemsize == 8 + assert ones(1)[:].itemsize == 8 + + def test_nbytes(self): + from _numpypy import array, ones + + assert ones(1).nbytes == 8 + assert ones((2, 2)).nbytes == 32 + assert ones((2, 2))[1:,].nbytes == 16 + assert (ones(1) + ones(1)).nbytes == 8 + assert array(3).nbytes == 8 + + class AppTestMultiDim(BaseNumpyAppTest): def test_init(self): import _numpypy @@ -1458,13 +1477,13 @@ b = a.T.flat assert (b == [0, 4, 8, 1, 5, 9, 2, 6, 10, 3, 7, 11]).all() assert not (b != [0, 4, 8, 1, 5, 9, 2, 6, 10, 3, 7, 11]).any() - assert ((b >= range(12)) == [True, True, True,False, True, True, + assert ((b >= range(12)) == [True, True, True,False, True, True, False, False, True, False, False, True]).all() - assert ((b < range(12)) != [True, True, True,False, True, True, + assert ((b < range(12)) != [True, True, True,False, True, True, False, False, True, False, False, True]).all() - assert ((b <= range(12)) != [False, True, True,False, True, True, + assert ((b <= range(12)) != [False, True, 
True,False, True, True, False, False, True, False, False, False]).all() - assert ((b > range(12)) == [False, True, True,False, True, True, + assert ((b > range(12)) == [False, True, True,False, True, True, False, False, True, False, False, False]).all() def test_flatiter_view(self): from _numpypy import arange From notifications-noreply at bitbucket.org Thu Feb 2 11:38:03 2012 From: notifications-noreply at bitbucket.org (Bitbucket) Date: Thu, 02 Feb 2012 10:38:03 -0000 Subject: [pypy-commit] Notification: pypy Message-ID: <20120202103803.15588.44788@bitbucket13.managed.contegix.com> You have received a notification from Sirius Dely. Hi, I forked pypy. My fork is at https://bitbucket.org/riousdelie/pypy. -- Disable notifications at https://bitbucket.org/account/notifications/ From noreply at buildbot.pypy.org Thu Feb 2 13:17:23 2012 From: noreply at buildbot.pypy.org (fijal) Date: Thu, 2 Feb 2012 13:17:23 +0100 (CET) Subject: [pypy-commit] pypy backend-vector-ops: a first pass at vectorizing, a simple test passes Message-ID: <20120202121723.C8688710739@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: backend-vector-ops Changeset: r52026:dfaccda61176 Date: 2012-02-02 14:16 +0200 http://bitbucket.org/pypy/pypy/changeset/dfaccda61176/ Log: a first pass at vectorizing, a simple test passes diff --git a/pypy/jit/metainterp/executor.py b/pypy/jit/metainterp/executor.py --- a/pypy/jit/metainterp/executor.py +++ b/pypy/jit/metainterp/executor.py @@ -274,7 +274,7 @@ IGNORED = ['FLOAT_VECTOR_ADD', 'GETARRAYITEM_VECTOR_RAW', - 'SETARRAYITEM_VECTOR_RAW'] + 'SETARRAYITEM_VECTOR_RAW', 'ASSERT_ALIGNED'] def _make_execute_list(): if 0: # enable this to trace calls to do_xxx diff --git a/pypy/jit/metainterp/optimizeopt/test/test_vectorize.py b/pypy/jit/metainterp/optimizeopt/test/test_vectorize.py new file mode 100644 --- /dev/null +++ b/pypy/jit/metainterp/optimizeopt/test/test_vectorize.py @@ -0,0 +1,37 @@ + +from pypy.jit.metainterp.optimizeopt.test.test_optimizebasic import BaseTestBasic, LLtypeMixin + +class TestVectorize(BaseTestBasic, LLtypeMixin): + enable_opts = "intbounds:rewrite:virtualize:string:earlyforce:pure:heap:unroll:vectorize" + + def test_vectorize_basic(self): + ops = """ + [p0, p1, p2, i0, i1, i2] + assert_aligned(p0, i0) + assert_aligned(p1, i1) + assert_aligned(p1, i2) + f0 = getarrayitem_raw(p0, i0, descr=arraydescr) + f1 = getarrayitem_raw(p1, i1, descr=arraydescr) + f2 = float_add(f0, f1) + setarrayitem_raw(p2, i2, f2, descr=arraydescr) + i0_1 = int_add(i0, 1) + i1_1 = int_add(1, i1) + i2_1 = int_add(i2, 1) + f0_1 = getarrayitem_raw(p0, i0_1, descr=arraydescr) + f1_1 = getarrayitem_raw(p1, i1_1, descr=arraydescr) + f2_1 = float_add(f0_1, f1_1) + setarrayitem_raw(p2, i2_1, f2_1, descr=arraydescr) + finish(p0, p1, p2, i0_1, i1_1, i2_1) + """ + expected = """ + [p0, p1, p2, i0, i1, i2] + i0_1 = int_add(i0, 1) + i1_1 = int_add(1, i1) + i2_1 = int_add(i2, 1) + vec0 = getarrayitem_vector_raw(p0, i0, descr=arraydescr) + vec1 = getarrayitem_vector_raw(p1, i1, descr=arraydescr) + vec2 = float_vector_add(vec0, vec1) + setarrayitem_vector_raw(p2, i2, vec2, descr=arraydescr) + finish(p0, p1, p2, i0_1, i1_1, i2_1) + """ + self.optimize_loop(ops, expected) diff --git a/pypy/jit/metainterp/optimizeopt/vectorize.py b/pypy/jit/metainterp/optimizeopt/vectorize.py new file mode 100644 --- /dev/null +++ b/pypy/jit/metainterp/optimizeopt/vectorize.py @@ -0,0 +1,184 @@ + +from pypy.jit.metainterp.optimizeopt.optimizer import Optimization +from pypy.jit.metainterp.optimizeopt.util 
import make_dispatcher_method +from pypy.jit.metainterp.resoperation import rop, ResOperation +from pypy.jit.metainterp.history import BoxVector + +VECTOR_SIZE = 2 +VEC_MAP = {rop.FLOAT_ADD: rop.FLOAT_VECTOR_ADD} + +class BaseTrack(object): + pass + +class Read(BaseTrack): + def __init__(self, arr, index, op): + self.arr = arr + self.op = op + self.index = index + + def match(self, other): + if not isinstance(other, Read): + return False + return self.arr == other.arr + + def emit(self, optimizer): + box = BoxVector() + op = ResOperation(rop.GETARRAYITEM_VECTOR_RAW, [self.arr.box, + self.index.val.box], + box, descr=self.op.getdescr()) + optimizer.emit_operation(op) + return box + +class Write(BaseTrack): + def __init__(self, arr, index, v, op): + self.arr = arr + self.index = index + self.v = v + self.op = op + + def match(self, other): + if not isinstance(other, Write): + return False + return self.v.match(other.v) + + def emit(self, optimizer): + arg = self.v.emit(optimizer) + op = ResOperation(rop.SETARRAYITEM_VECTOR_RAW, [self.arr.box, + self.index.box, arg], + None, descr=self.op.getdescr()) + optimizer.emit_operation(op) + +class BinOp(BaseTrack): + def __init__(self, left, right, op): + self.op = op + self.left = left + self.right = right + + def match(self, other): + if not isinstance(other, BinOp): + return False + if self.op.getopnum() != other.op.getopnum(): + return False + return self.left.match(other.left) and self.right.match(other.right) + + def emit(self, optimizer): + left_box = self.left.emit(optimizer) + right_box = self.right.emit(optimizer) + res_box = BoxVector() + op = ResOperation(VEC_MAP[self.op.getopnum()], [left_box, right_box], + res_box) + optimizer.emit_operation(op) + return res_box + +class TrackIndex(object): + def __init__(self, val, index): + self.val = val + self.index = index + + def advance(self): + return TrackIndex(self.val, self.index + 1) + +class OptVectorize(Optimization): + def __init__(self): + self.ops_so_far = [] + self.reset() + + def reset(self): + # deal with reset + for op in self.ops_so_far: + self.emit_operation(op) + self.ops_so_far = [] + self.track = {} + self.tracked_indexes = {} + self.full = {} + + def new(self): + return OptVectorize() + + def optimize_ASSERT_ALIGNED(self, op): + index = self.getvalue(op.getarg(1)) + self.tracked_indexes[index] = TrackIndex(index, 0) + + def optimize_GETARRAYITEM_RAW(self, op): + arr = self.getvalue(op.getarg(0)) + index = self.getvalue(op.getarg(1)) + track = self.tracked_indexes.get(index) + if track is None: + self.emit_operation(op) + else: + self.ops_so_far.append(op) + self.track[self.getvalue(op.result)] = Read(arr, track, op) + + def optimize_INT_ADD(self, op): + # only for += 1 + one = self.getvalue(op.getarg(0)) + two = self.getvalue(op.getarg(1)) + self.emit_operation(op) + if (one.is_constant() and one.box.getint() == 1 and + two in self.tracked_indexes): + index = two + elif (two.is_constant() and two.box.getint() == 1 and + one in self.tracked_indexes): + index = one + else: + return + self.tracked_indexes[self.getvalue(op.result)] = self.tracked_indexes[index].advance() + + def optimize_FLOAT_ADD(self, op): + left = self.getvalue(op.getarg(0)) + right = self.getvalue(op.getarg(1)) + if left not in self.track or right not in self.track: + self.emit_operation(op) + else: + self.ops_so_far.append(op) + lt = self.track[left] + rt = self.track[right] + self.track[self.getvalue(op.result)] = BinOp(lt, rt, op) + + def optimize_SETARRAYITEM_RAW(self, op): + index = 
self.getvalue(op.getarg(1)) + val = self.getvalue(op.getarg(2)) + if index not in self.tracked_indexes or val not in self.track: + self.emit_operation(op) + return + v = self.track[val] + arr = self.getvalue(op.getarg(0)) + ti = self.tracked_indexes[index] + if arr not in self.full: + self.full[arr] = [None] * VECTOR_SIZE + self.full[arr][ti.index] = Write(arr, index, v, op) + + def emit_vector_ops(self, forbidden_boxes): + for arg in forbidden_boxes: + if arg in self.track: + self.reset() + return + if self.full: + for arr, items in self.full.iteritems(): + for item in items[1:]: + if item is None or not items[0].match(item): + self.reset() + return + # XXX Right now we blow up on any of the vectorizers not + # working. We need something more advanced in terms of ops + # tracking + for arr, items in self.full.iteritems(): + items[0].emit(self) + self.ops_so_far = [] + + def optimize_default(self, op): + # list operations that are fine, not that many + if op.opnum in [rop.JUMP, rop.FINISH, rop.LABEL]: + self.emit_vector_ops(op.getarglist()) + elif op.is_guard(): + xxx + else: + self.reset() + self.emit_operation(op) + + def propagate_forward(self, op): + dispatch_opt(self, op) + +dispatch_opt = make_dispatcher_method(OptVectorize, 'optimize_', + default=OptVectorize.optimize_default) + diff --git a/pypy/jit/metainterp/resoperation.py b/pypy/jit/metainterp/resoperation.py --- a/pypy/jit/metainterp/resoperation.py +++ b/pypy/jit/metainterp/resoperation.py @@ -376,6 +376,7 @@ '_FINAL_LAST', 'LABEL/*d', + 'ASSERT_ALIGNED/2', '_GUARD_FIRST', '_GUARD_FOLDABLE_FIRST', From noreply at buildbot.pypy.org Thu Feb 2 13:24:07 2012 From: noreply at buildbot.pypy.org (fijal) Date: Thu, 2 Feb 2012 13:24:07 +0100 (CET) Subject: [pypy-commit] pypy backend-vector-ops: a test and a fix Message-ID: <20120202122407.CAA48710739@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: backend-vector-ops Changeset: r52027:4be184bcabec Date: 2012-02-02 14:23 +0200 http://bitbucket.org/pypy/pypy/changeset/4be184bcabec/ Log: a test and a fix diff --git a/pypy/jit/metainterp/optimizeopt/test/test_vectorize.py b/pypy/jit/metainterp/optimizeopt/test/test_vectorize.py --- a/pypy/jit/metainterp/optimizeopt/test/test_vectorize.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_vectorize.py @@ -35,3 +35,41 @@ finish(p0, p1, p2, i0_1, i1_1, i2_1) """ self.optimize_loop(ops, expected) + + def test_vectorize_unfit_trees(self): + ops = """ + [p0, p1, p2, i0, i1, i2] + assert_aligned(p0, i0) + assert_aligned(p1, i1) + assert_aligned(p1, i2) + f0 = getarrayitem_raw(p0, i0, descr=arraydescr) + f1 = getarrayitem_raw(p1, i1, descr=arraydescr) + f2 = float_add(f0, f1) + setarrayitem_raw(p2, i2, f2, descr=arraydescr) + i0_1 = int_add(i0, 1) + i1_1 = int_add(1, i1) + i1_2 = int_add(1, i1_1) + i2_1 = int_add(i2, 1) + f0_1 = getarrayitem_raw(p0, i0_1, descr=arraydescr) + f1_1 = getarrayitem_raw(p1, i1_2, descr=arraydescr) + f2_1 = float_add(f0_1, f1_1) + setarrayitem_raw(p2, i2_1, f2_1, descr=arraydescr) + finish(p0, p1, p2, i0_1, i1_1, i2_1) + """ + expected = """ + [p0, p1, p2, i0, i1, i2] + i0_1 = int_add(i0, 1) + i1_1 = int_add(1, i1) + i1_2 = int_add(1, i1_1) + i2_1 = int_add(i2, 1) + f0 = getarrayitem_raw(p0, i0, descr=arraydescr) + f1 = getarrayitem_raw(p1, i1, descr=arraydescr) + f2 = float_add(f0, f1) + setarrayitem_raw(p2, i2, f2, descr=arraydescr) + f0_1 = getarrayitem_raw(p0, i0_1, descr=arraydescr) + f1_1 = getarrayitem_raw(p1, i1_2, descr=arraydescr) + f2_1 = float_add(f0_1, f1_1) + setarrayitem_raw(p2, i2_1, f2_1, 
descr=arraydescr) + finish(p0, p1, p2, i0_1, i1_1, i2_1) + """ + self.optimize_loop(ops, expected) diff --git a/pypy/jit/metainterp/optimizeopt/vectorize.py b/pypy/jit/metainterp/optimizeopt/vectorize.py --- a/pypy/jit/metainterp/optimizeopt/vectorize.py +++ b/pypy/jit/metainterp/optimizeopt/vectorize.py @@ -16,10 +16,10 @@ self.op = op self.index = index - def match(self, other): + def match(self, other, i): if not isinstance(other, Read): return False - return self.arr == other.arr + return self.arr == other.arr and other.index.index == i def emit(self, optimizer): box = BoxVector() @@ -36,10 +36,10 @@ self.v = v self.op = op - def match(self, other): + def match(self, other, i): if not isinstance(other, Write): return False - return self.v.match(other.v) + return self.v.match(other.v, i) def emit(self, optimizer): arg = self.v.emit(optimizer) @@ -54,12 +54,13 @@ self.left = left self.right = right - def match(self, other): + def match(self, other, i): if not isinstance(other, BinOp): return False if self.op.getopnum() != other.op.getopnum(): return False - return self.left.match(other.left) and self.right.match(other.right) + return (self.left.match(other.left, i) and + self.right.match(other.right, i)) def emit(self, optimizer): left_box = self.left.emit(optimizer) @@ -141,6 +142,7 @@ if index not in self.tracked_indexes or val not in self.track: self.emit_operation(op) return + self.ops_so_far.append(op) v = self.track[val] arr = self.getvalue(op.getarg(0)) ti = self.tracked_indexes[index] @@ -155,8 +157,9 @@ return if self.full: for arr, items in self.full.iteritems(): - for item in items[1:]: - if item is None or not items[0].match(item): + for i in range(1, len(items)): + item = items[i] + if item is None or not items[0].match(item, i): self.reset() return # XXX Right now we blow up on any of the vectorizers not From noreply at buildbot.pypy.org Thu Feb 2 13:37:44 2012 From: noreply at buildbot.pypy.org (fijal) Date: Thu, 2 Feb 2012 13:37:44 +0100 (CET) Subject: [pypy-commit] pypy backend-vector-ops: always pure operations don't have to reset anything. Add float vector_sub Message-ID: <20120202123744.D412182B6A@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: backend-vector-ops Changeset: r52028:9d52e03206ff Date: 2012-02-02 14:37 +0200 http://bitbucket.org/pypy/pypy/changeset/9d52e03206ff/ Log: always pure operations don't have to reset anything. 
Add float vector_sub diff --git a/pypy/jit/metainterp/executor.py b/pypy/jit/metainterp/executor.py --- a/pypy/jit/metainterp/executor.py +++ b/pypy/jit/metainterp/executor.py @@ -273,7 +273,8 @@ # ____________________________________________________________ -IGNORED = ['FLOAT_VECTOR_ADD', 'GETARRAYITEM_VECTOR_RAW', +IGNORED = ['FLOAT_VECTOR_ADD', 'FLOAT_VECTOR_SUB', + 'GETARRAYITEM_VECTOR_RAW', 'SETARRAYITEM_VECTOR_RAW', 'ASSERT_ALIGNED'] def _make_execute_list(): diff --git a/pypy/jit/metainterp/optimizeopt/test/test_vectorize.py b/pypy/jit/metainterp/optimizeopt/test/test_vectorize.py --- a/pypy/jit/metainterp/optimizeopt/test/test_vectorize.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_vectorize.py @@ -4,7 +4,7 @@ class TestVectorize(BaseTestBasic, LLtypeMixin): enable_opts = "intbounds:rewrite:virtualize:string:earlyforce:pure:heap:unroll:vectorize" - def test_vectorize_basic(self): + def test_basic(self): ops = """ [p0, p1, p2, i0, i1, i2] assert_aligned(p0, i0) @@ -36,7 +36,41 @@ """ self.optimize_loop(ops, expected) - def test_vectorize_unfit_trees(self): + def test_basic_sub(self): + ops = """ + [p0, p1, p2, i0, i1, i2] + assert_aligned(p0, i0) + assert_aligned(p1, i1) + assert_aligned(p1, i2) + f0 = getarrayitem_raw(p0, i0, descr=arraydescr) + f1 = getarrayitem_raw(p1, i1, descr=arraydescr) + f2 = float_sub(f0, f1) + setarrayitem_raw(p2, i2, f2, descr=arraydescr) + i0_1 = int_add(i0, 1) + i1_1 = int_add(1, i1) + i3 = int_sub(i1_1, 2) + i2_1 = int_add(i2, 1) + f0_1 = getarrayitem_raw(p0, i0_1, descr=arraydescr) + f1_1 = getarrayitem_raw(p1, i1_1, descr=arraydescr) + f2_1 = float_sub(f0_1, f1_1) + setarrayitem_raw(p2, i2_1, f2_1, descr=arraydescr) + finish(p0, p1, p2, i0_1, i1_1, i2_1, i3) + """ + expected = """ + [p0, p1, p2, i0, i1, i2] + i0_1 = int_add(i0, 1) + i1_1 = int_add(1, i1) + i3 = int_sub(i1_1, 2) + i2_1 = int_add(i2, 1) + vec0 = getarrayitem_vector_raw(p0, i0, descr=arraydescr) + vec1 = getarrayitem_vector_raw(p1, i1, descr=arraydescr) + vec2 = float_vector_sub(vec0, vec1) + setarrayitem_vector_raw(p2, i2, vec2, descr=arraydescr) + finish(p0, p1, p2, i0_1, i1_1, i2_1, i3) + """ + self.optimize_loop(ops, expected) + + def test_unfit_trees(self): ops = """ [p0, p1, p2, i0, i1, i2] assert_aligned(p0, i0) @@ -73,3 +107,72 @@ finish(p0, p1, p2, i0_1, i1_1, i2_1) """ self.optimize_loop(ops, expected) + + def test_unfit_trees_2(self): + ops = """ + [p0, p1, p2, i0, i1, i2] + assert_aligned(p0, i0) + assert_aligned(p1, i1) + assert_aligned(p1, i2) + f0 = getarrayitem_raw(p0, i0, descr=arraydescr) + f1 = getarrayitem_raw(p1, i1, descr=arraydescr) + f2 = float_add(f0, f1) + setarrayitem_raw(p2, i2, f2, descr=arraydescr) + i0_1 = int_add(i0, 1) + i1_1 = int_add(1, i1) + i2_1 = int_add(i2, 1) + f0_1 = getarrayitem_raw(p0, i0_1, descr=arraydescr) + setarrayitem_raw(p2, i2_1, f0_1, descr=arraydescr) + finish(p0, p1, p2, i0_1, i1_1, i2_1) + """ + expected = """ + [p0, p1, p2, i0, i1, i2] + i0_1 = int_add(i0, 1) + i1_1 = int_add(1, i1) + i2_1 = int_add(i2, 1) + f0 = getarrayitem_raw(p0, i0, descr=arraydescr) + f1 = getarrayitem_raw(p1, i1, descr=arraydescr) + f2 = float_add(f0, f1) + setarrayitem_raw(p2, i2, f2, descr=arraydescr) + f0_1 = getarrayitem_raw(p0, i0_1, descr=arraydescr) + setarrayitem_raw(p2, i2_1, f0_1, descr=arraydescr) + finish(p0, p1, p2, i0_1, i1_1, i2_1) + """ + self.optimize_loop(ops, expected) + + def test_unfit_trees_3(self): + ops = """ + [p0, p1, p2, i0, i1, i2] + assert_aligned(p0, i0) + assert_aligned(p1, i1) + assert_aligned(p1, i2) + f0 = 
getarrayitem_raw(p0, i0, descr=arraydescr) + f1 = getarrayitem_raw(p1, i1, descr=arraydescr) + f2 = float_add(f0, f1) + setarrayitem_raw(p2, i2, f2, descr=arraydescr) + i0_1 = int_add(i0, 1) + i1_1 = int_add(1, i1) + i2_1 = int_add(i2, 1) + f0_1 = getarrayitem_raw(p0, i0_1, descr=arraydescr) + f1_1 = getarrayitem_raw(p1, i1_1, descr=arraydescr) + f2_1 = float_sub(f0_1, f1_1) + setarrayitem_raw(p2, i2_1, f2_1, descr=arraydescr) + finish(p0, p1, p2, i0_1, i1_1, i2_1) + """ + expected = """ + [p0, p1, p2, i0, i1, i2] + i0_1 = int_add(i0, 1) + i1_1 = int_add(1, i1) + i2_1 = int_add(i2, 1) + f0 = getarrayitem_raw(p0, i0, descr=arraydescr) + f1 = getarrayitem_raw(p1, i1, descr=arraydescr) + f2 = float_add(f0, f1) + setarrayitem_raw(p2, i2, f2, descr=arraydescr) + f0_1 = getarrayitem_raw(p0, i0_1, descr=arraydescr) + f1_1 = getarrayitem_raw(p1, i1_1, descr=arraydescr) + f2_1 = float_sub(f0_1, f1_1) + setarrayitem_raw(p2, i2_1, f2_1, descr=arraydescr) + finish(p0, p1, p2, i0_1, i1_1, i2_1) + """ + self.optimize_loop(ops, expected) + diff --git a/pypy/jit/metainterp/optimizeopt/vectorize.py b/pypy/jit/metainterp/optimizeopt/vectorize.py --- a/pypy/jit/metainterp/optimizeopt/vectorize.py +++ b/pypy/jit/metainterp/optimizeopt/vectorize.py @@ -5,7 +5,8 @@ from pypy.jit.metainterp.history import BoxVector VECTOR_SIZE = 2 -VEC_MAP = {rop.FLOAT_ADD: rop.FLOAT_VECTOR_ADD} +VEC_MAP = {rop.FLOAT_ADD: rop.FLOAT_VECTOR_ADD, + rop.FLOAT_SUB: rop.FLOAT_VECTOR_SUB} class BaseTrack(object): pass @@ -125,7 +126,7 @@ return self.tracked_indexes[self.getvalue(op.result)] = self.tracked_indexes[index].advance() - def optimize_FLOAT_ADD(self, op): + def _optimize_binop(self, op): left = self.getvalue(op.getarg(0)) right = self.getvalue(op.getarg(1)) if left not in self.track or right not in self.track: @@ -136,6 +137,9 @@ rt = self.track[right] self.track[self.getvalue(op.result)] = BinOp(lt, rt, op) + optimize_FLOAT_ADD = _optimize_binop + optimize_FLOAT_SUB = _optimize_binop + def optimize_SETARRAYITEM_RAW(self, op): index = self.getvalue(op.getarg(1)) val = self.getvalue(op.getarg(2)) @@ -175,6 +179,10 @@ self.emit_vector_ops(op.getarglist()) elif op.is_guard(): xxx + elif op.is_always_pure(): + # in theory no side effect ops, but stuff like malloc + # can go in the way + pass else: self.reset() self.emit_operation(op) diff --git a/pypy/jit/metainterp/resoperation.py b/pypy/jit/metainterp/resoperation.py --- a/pypy/jit/metainterp/resoperation.py +++ b/pypy/jit/metainterp/resoperation.py @@ -417,6 +417,7 @@ 'FLOAT_NEG/1', 'FLOAT_ABS/1', 'FLOAT_VECTOR_ADD/2', + 'FLOAT_VECTOR_SUB/2', 'CAST_FLOAT_TO_INT/1', # don't use for unsigned ints; we would 'CAST_INT_TO_FLOAT/1', # need some messy code in the backend 'CAST_FLOAT_TO_SINGLEFLOAT/1', From noreply at buildbot.pypy.org Thu Feb 2 13:44:51 2012 From: noreply at buildbot.pypy.org (fijal) Date: Thu, 2 Feb 2012 13:44:51 +0100 (CET) Subject: [pypy-commit] pypy backend-vector-ops: guard support and a fix Message-ID: <20120202124451.14C9082B6A@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: backend-vector-ops Changeset: r52029:f51969a42502 Date: 2012-02-02 14:44 +0200 http://bitbucket.org/pypy/pypy/changeset/f51969a42502/ Log: guard support and a fix diff --git a/pypy/jit/metainterp/optimizeopt/test/test_vectorize.py b/pypy/jit/metainterp/optimizeopt/test/test_vectorize.py --- a/pypy/jit/metainterp/optimizeopt/test/test_vectorize.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_vectorize.py @@ -176,3 +176,73 @@ """ self.optimize_loop(ops, expected) + def 
test_guard_forces(self): + ops = """ + [p0, p1, p2, i0, i1, i2] + assert_aligned(p0, i0) + assert_aligned(p1, i1) + assert_aligned(p1, i2) + f0 = getarrayitem_raw(p0, i0, descr=arraydescr) + f1 = getarrayitem_raw(p1, i1, descr=arraydescr) + f2 = float_add(f0, f1) + setarrayitem_raw(p2, i2, f2, descr=arraydescr) + i0_1 = int_add(i0, 1) + i1_1 = int_add(1, i1) + i2_1 = int_add(i2, 1) + f0_1 = getarrayitem_raw(p0, i0_1, descr=arraydescr) + f1_1 = getarrayitem_raw(p1, i1_1, descr=arraydescr) + f2_1 = float_add(f0_1, f1_1) + setarrayitem_raw(p2, i2_1, f2_1, descr=arraydescr) + guard_true(i2_1) [p0, p1, p2, i0_1, i1_1, i2_1] + finish(p0, p1, p2, i0_1, i1_1) + """ + expected = """ + [p0, p1, p2, i0, i1, i2] + i0_1 = int_add(i0, 1) + i1_1 = int_add(1, i1) + i2_1 = int_add(i2, 1) + vec0 = getarrayitem_vector_raw(p0, i0, descr=arraydescr) + vec1 = getarrayitem_vector_raw(p1, i1, descr=arraydescr) + vec2 = float_vector_add(vec0, vec1) + setarrayitem_vector_raw(p2, i2, vec2, descr=arraydescr) + guard_true(i2_1) [p0, p1, p2, i0_1, i1_1, i2_1] + finish(p0, p1, p2, i0_1, i1_1) + """ + self.optimize_loop(ops, expected) + + def test_guard_prevents(self): + ops = """ + [p0, p1, p2, i0, i1, i2] + assert_aligned(p0, i0) + assert_aligned(p1, i1) + assert_aligned(p1, i2) + f0 = getarrayitem_raw(p0, i0, descr=arraydescr) + f1 = getarrayitem_raw(p1, i1, descr=arraydescr) + f2 = float_add(f0, f1) + setarrayitem_raw(p2, i2, f2, descr=arraydescr) + guard_true(i1) [p0, p1, p2, i1, i0, i2, f1, f2] + i0_1 = int_add(i0, 1) + i1_1 = int_add(1, i1) + i2_1 = int_add(i2, 1) + f0_1 = getarrayitem_raw(p0, i0_1, descr=arraydescr) + f1_1 = getarrayitem_raw(p1, i1_1, descr=arraydescr) + f2_1 = float_add(f0_1, f1_1) + setarrayitem_raw(p2, i2_1, f2_1, descr=arraydescr) + finish(p0, p1, p2, i0_1, i2_1) + """ + expected = """ + [p0, p1, p2, i0, i1, i2] + f0 = getarrayitem_raw(p0, i0, descr=arraydescr) + f1 = getarrayitem_raw(p1, i1, descr=arraydescr) + f2 = float_add(f0, f1) + setarrayitem_raw(p2, i2, f2, descr=arraydescr) + guard_true(i1) [p0, p1, p2, i1, i0, i2, f1, f2] + i0_1 = int_add(i0, 1) + i2_1 = int_add(i2, 1) + f0_1 = getarrayitem_raw(p0, i0_1, descr=arraydescr) + f1_1 = getarrayitem_raw(p1, 2, descr=arraydescr) + f2_1 = float_add(f0_1, f1_1) + setarrayitem_raw(p2, i2_1, f2_1, descr=arraydescr) + finish(p0, p1, p2, i0_1, i2_1) + """ + self.optimize_loop(ops, expected) diff --git a/pypy/jit/metainterp/optimizeopt/vectorize.py b/pypy/jit/metainterp/optimizeopt/vectorize.py --- a/pypy/jit/metainterp/optimizeopt/vectorize.py +++ b/pypy/jit/metainterp/optimizeopt/vectorize.py @@ -172,13 +172,14 @@ for arr, items in self.full.iteritems(): items[0].emit(self) self.ops_so_far = [] - + self.reset() + def optimize_default(self, op): # list operations that are fine, not that many if op.opnum in [rop.JUMP, rop.FINISH, rop.LABEL]: self.emit_vector_ops(op.getarglist()) elif op.is_guard(): - xxx + self.emit_vector_ops(op.getarglist() + op.getfailargs()) elif op.is_always_pure(): # in theory no side effect ops, but stuff like malloc # can go in the way From noreply at buildbot.pypy.org Thu Feb 2 16:01:19 2012 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Thu, 2 Feb 2012 16:01:19 +0100 (CET) Subject: [pypy-commit] pypy default: Fix for 32 bits. Message-ID: <20120202150119.BD52F82B6A@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: Changeset: r52030:f91dd3570b06 Date: 2012-02-02 10:01 -0500 http://bitbucket.org/pypy/pypy/changeset/f91dd3570b06/ Log: Fix for 32 bits. 
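A note on the change below (editorial sketch, not part of the commit): the tests assumed that array(1) allocates 8-byte items, which only holds where the default integer dtype matches a 64-bit machine word; switching the literal to 1.0 pins the expected dtype to float64, whose itemsize is 8 on every platform. A minimal illustration, assuming any NumPy-compatible module named numpy is importable:

import numpy

print(numpy.array(1).itemsize)    # platform-dependent: 4 on a 32-bit build
print(numpy.array(1.0).itemsize)  # float64 is 8 bytes everywhere
assert numpy.array(1.0).itemsize == 8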
diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -1169,7 +1169,7 @@ for obj in [float, bool, int]: assert ones(1, dtype=obj).itemsize == dtype(obj).itemsize assert (ones(1) + ones(1)).itemsize == 8 - assert array(1).itemsize == 8 + assert array(1.0).itemsize == 8 assert ones(1)[:].itemsize == 8 def test_nbytes(self): @@ -1179,7 +1179,7 @@ assert ones((2, 2)).nbytes == 32 assert ones((2, 2))[1:,].nbytes == 16 assert (ones(1) + ones(1)).nbytes == 8 - assert array(3).nbytes == 8 + assert array(3.0).nbytes == 8 class AppTestMultiDim(BaseNumpyAppTest): From noreply at buildbot.pypy.org Thu Feb 2 16:08:08 2012 From: noreply at buildbot.pypy.org (fijal) Date: Thu, 2 Feb 2012 16:08:08 +0100 (CET) Subject: [pypy-commit] pypy numpy-single-jitdriver: this branch is meant to reduce the number of jit merge points required for numpy Message-ID: <20120202150808.2DA3F82B6A@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-single-jitdriver Changeset: r52031:de5495b0f7fb Date: 2012-02-02 17:07 +0200 http://bitbucket.org/pypy/pypy/changeset/de5495b0f7fb/ Log: this branch is meant to reduce the number of jit merge points required for numpy to one. This should make it easier to do future optimizations diff --git a/pypy/module/micronumpy/interp_iter.py b/pypy/module/micronumpy/interp_iter.py --- a/pypy/module/micronumpy/interp_iter.py +++ b/pypy/module/micronumpy/interp_iter.py @@ -86,8 +86,9 @@ def apply_transformations(self, arr, transformations): v = self - for transform in transformations: - v = v.transform(arr, transform) + if transformations is not None: + for transform in transformations: + v = v.transform(arr, transform) return v def transform(self, arr, t): diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -12,7 +12,7 @@ from pypy.rpython.lltypesystem import lltype, rffi from pypy.tool.sourcetools import func_with_new_name from pypy.rlib.rstring import StringBuilder -from pypy.module.micronumpy.interp_iter import (ArrayIterator, OneDimIterator, +from pypy.module.micronumpy.interp_iter import (ArrayIterator, SkipLastAxisIterator, Chunk, ViewIterator) from pypy.module.micronumpy.appbridge import get_appbridge_cache @@ -750,22 +750,10 @@ raise NotImplementedError def compute(self): - result = W_NDimArray(self.size, self.shape, self.find_dtype()) - shapelen = len(self.shape) - sig = self.find_sig() - frame = sig.create_frame(self) - ri = ArrayIterator(self.size) - while not ri.done(): - numpy_driver.jit_merge_point(sig=sig, - shapelen=shapelen, - result_size=self.size, - frame=frame, - ri=ri, - self=self, result=result) - result.setitem(ri.offset, sig.eval(frame, self)) - frame.next(shapelen) - ri = ri.next(shapelen) - return result + from pypy.module.micronumpy import loop + ra = ResultArray(self, self.size, self.shape, self.res_dtype) + loop.compute(ra) + return ra.left def force_if_needed(self): if self.forced_result is None: @@ -864,6 +852,27 @@ return signature.Call2(self.ufunc, self.name, self.calc_dtype, self.left.create_sig(), self.right.create_sig()) +class ComputationArray(BaseArray): + """ A base class for all objects that describe operations for computation + """ + +class ResultArray(Call2): + def __init__(self, child, size, shape, dtype, res=None, order='C'): + if res 
is None: + res = W_NDimArray(size, shape, dtype, order) + Call2.__init__(self, None, 'assign', shape, dtype, dtype, res, child) + + def create_sig(self): + return signature.ResultSignature(self.res_dtype, self.left.create_sig(), + self.right.create_sig()) + +class Reduce(ComputationArray): + def __init__(self): + pass + + def create_sig(self): + return signature.ReduceSignature(self.func) + class SliceArray(Call2): def __init__(self, shape, dtype, left, right, no_broadcast=False): self.no_broadcast = no_broadcast @@ -883,10 +892,6 @@ lsig, rsig) class AxisReduce(Call2): - """ NOTE: this is only used as a container, you should never - encounter such things in the wild. Remove this comment - when we'll make AxisReduce lazy - """ _immutable_fields_ = ['left', 'right'] def __init__(self, ufunc, name, shape, dtype, left, right, dim): @@ -1039,9 +1044,9 @@ parent.order, parent) self.start = start - def create_iter(self): + def create_iter(self, transforms=None): return ViewIterator(self.start, self.strides, self.backstrides, - self.shape) + self.shape).apply_transformations(self, transforms) def setshape(self, space, new_shape): if len(self.shape) < 1: @@ -1090,8 +1095,8 @@ self.shape = new_shape self.calc_strides(new_shape) - def create_iter(self): - return ArrayIterator(self.size) + def create_iter(self, transforms=None): + return ArrayIterator(self.size).apply_transformations(self, transforms) def create_sig(self): return signature.ArraySignature(self.dtype) @@ -1427,6 +1432,12 @@ def create_sig(self): return signature.FlatSignature(self.base.dtype) + def create_iter(self, transforms=None): + return ViewIterator(self.base.start, self.base.strides, + self.base.backstrides, + self.base.shape).apply_transformations(self.base, + transforms) + def descr_base(self, space): return space.wrap(self.base) diff --git a/pypy/module/micronumpy/loop.py b/pypy/module/micronumpy/loop.py new file mode 100644 --- /dev/null +++ b/pypy/module/micronumpy/loop.py @@ -0,0 +1,30 @@ + +""" This file is the main run loop as well as evaluation loops for various +signatures +""" + +from pypy.rlib.jit import JitDriver +from pypy.module.micronumpy import signature + +def get_printable_location(shapelen, sig): + return 'numpy ' + sig.debug_repr() + ' [%d dims]' % (shapelen,) + +numpy_driver = JitDriver( + greens=['shapelen', 'sig'], + virtualizables=['frame'], + reds=['frame', 'arr'], + get_printable_location=signature.new_printable_location('numpy'), + name='numpy', +) + +def compute(arr): + sig = arr.find_sig() + shapelen = len(arr.shape) + frame = sig.create_frame(arr) + while not frame.done(): + numpy_driver.jit_merge_point(sig=sig, + shapelen=shapelen, + frame=frame, arr=arr) + sig.eval(frame, arr) + frame.next(shapelen) + diff --git a/pypy/module/micronumpy/signature.py b/pypy/module/micronumpy/signature.py --- a/pypy/module/micronumpy/signature.py +++ b/pypy/module/micronumpy/signature.py @@ -4,6 +4,7 @@ ConstantIterator, AxisIterator, ViewTransform,\ BroadcastTransform from pypy.rlib.jit import hint, unroll_safe, promote +from pypy.tool.pairtype import extendabletype """ Signature specifies both the numpy expression that has been constructed and the assembler to be compiled. 
This is a very important observation - @@ -113,6 +114,8 @@ return r_dict(sigeq_no_numbering, sighash) class Signature(object): + __metaclass_ = extendabletype + _attrs_ = ['iter_no', 'array_no'] _immutable_fields_ = ['iter_no', 'array_no'] @@ -143,7 +146,6 @@ self._create_iter(iterlist, arraylist, arr, []) return NumpyEvalFrame(iterlist, arraylist) - class ConcreteSignature(Signature): _immutable_fields_ = ['dtype'] @@ -182,13 +184,10 @@ assert isinstance(concr, ConcreteArray) storage = concr.storage if self.iter_no >= len(iterlist): - iterlist.append(self.allocate_iter(concr, transforms)) + iterlist.append(concr.create_iter(transforms)) if self.array_no >= len(arraylist): arraylist.append(storage) - def allocate_iter(self, arr, transforms): - return ArrayIterator(arr.size).apply_transformations(arr, transforms) - def eval(self, frame, arr): iter = frame.iterators[self.iter_no] return self.dtype.getitem(frame.arrays[self.array_no], iter.offset) @@ -220,22 +219,10 @@ allnumbers.append(no) self.iter_no = no - def allocate_iter(self, arr, transforms): - return ViewIterator(arr.start, arr.strides, arr.backstrides, - arr.shape).apply_transformations(arr, transforms) - class FlatSignature(ViewSignature): def debug_repr(self): return 'Flat' - def allocate_iter(self, arr, transforms): - from pypy.module.micronumpy.interp_numarray import W_FlatIterator - assert isinstance(arr, W_FlatIterator) - return ViewIterator(arr.base.start, arr.base.strides, - arr.base.backstrides, - arr.base.shape).apply_transformations(arr.base, - transforms) - class VirtualSliceSignature(Signature): def __init__(self, child): self.child = child @@ -359,6 +346,17 @@ return 'Call2(%s, %s, %s)' % (self.name, self.left.debug_repr(), self.right.debug_repr()) +class ResultSignature(Call2): + def __init__(self, dtype, left, right): + Call2.__init__(self, None, 'assign', dtype, left, right) + + def eval(self, frame, arr): + from pypy.module.micronumpy.interp_numarray import ResultArray + + assert isinstance(arr, ResultArray) + offset = frame.get_final_iter().offset + arr.left.setitem(offset, self.right.eval(frame, arr.right)) + class BroadcastLeft(Call2): def _invent_numbering(self, cache, allnumbers): self.left._invent_numbering(new_cache(), allnumbers) From noreply at buildbot.pypy.org Thu Feb 2 16:46:38 2012 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 2 Feb 2012 16:46:38 +0100 (CET) Subject: [pypy-commit] pypy stm-gc: malloc_fixedsize_clear(). Use a simple scheme to use the same Message-ID: <20120202154638.A746D82B6A@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-gc Changeset: r52032:5e10a34f2e55 Date: 2012-02-01 18:02 +0100 http://bitbucket.org/pypy/pypy/changeset/5e10a34f2e55/ Log: malloc_fixedsize_clear(). Use a simple scheme to use the same code for both transactional and non-transactional modes. 
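The scheme, in a nutshell: each thread's GCTLS records the flags that freshly allocated objects should carry, so a single allocation path serves both the main thread (objects born with GCFLAG_GLOBAL) and transactional threads (objects born local). Below is a toy rendering of that idea, with made-up names (ToyTLS, toy_malloc) and an arbitrary flag value; the real code in the diff that follows works on raw memory through llarena:

GCFLAG_GLOBAL = 1 << 16   # arbitrary toy value

class ToyTLS(object):
    def __init__(self, in_main_thread):
        # the only mode-dependent state: which flags new objects get
        self.malloc_flags = GCFLAG_GLOBAL if in_main_thread else 0

def toy_malloc(tls, typeid):
    # same allocation routine in both modes; only the stamped flags differ
    return {'tid': typeid | tls.malloc_flags}

assert toy_malloc(ToyTLS(True), 123)['tid'] & GCFLAG_GLOBAL
assert not toy_malloc(ToyTLS(False), 123)['tid'] & GCFLAG_GLOBAL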
diff --git a/pypy/rpython/memory/gc/stmgc.py b/pypy/rpython/memory/gc/stmgc.py --- a/pypy/rpython/memory/gc/stmgc.py +++ b/pypy/rpython/memory/gc/stmgc.py @@ -1,4 +1,6 @@ from pypy.rpython.lltypesystem import lltype, llmemory, llarena +from pypy.rpython.lltypesystem.lloperation import llop +from pypy.rpython.lltypesystem.llmemory import raw_malloc_usage from pypy.rpython.memory.gc.base import GCBase from pypy.rlib.rarithmetic import LONG_BIT @@ -41,7 +43,8 @@ GCTLS = lltype.Struct('GCTLS', ('nursery_free', llmemory.Address), ('nursery_top', llmemory.Address), ('nursery_start', llmemory.Address), - ('nursery_size', lltype.Signed)) + ('nursery_size', lltype.Signed), + ('malloc_flags', lltype.Signed)) def __init__(self, config, stm_operations, @@ -55,7 +58,7 @@ def setup(self): """Called at run-time to initialize the GC.""" GCBase.setup(self) - self.setup_thread() + self.main_thread_tls = self.setup_thread(True) def _alloc_nursery(self): nursery = llarena.arena_malloc(self.max_nursery_size, 1) @@ -63,22 +66,43 @@ raise MemoryError("cannot allocate nursery") return nursery - def setup_thread(self): + def _free_nursery(self, nursery): + llarena.arena_free(nursery) + + def setup_thread(self, in_main_thread): tls = lltype.malloc(self.GCTLS, flavor='raw') self.stm_operations.set_tls(llmemory.cast_ptr_to_adr(tls)) tls.nursery_start = self._alloc_nursery() tls.nursery_size = self.max_nursery_size tls.nursery_free = tls.nursery_start tls.nursery_top = tls.nursery_start + tls.nursery_size + # + # XXX for now, we use as the "global area" the nursery of the + # main thread. So allocation in the main thread is the same as + # allocation in another thread, except that the new objects + # should be immediately marked as GCFLAG_GLOBAL. + if in_main_thread: + tls.malloc_flags = GCFLAG_GLOBAL + else: + tls.malloc_flags = 0 + return tls + + def teardown_thread(self): + tls = self.get_tls() + self.stm_operations.set_tls(NULL) + self._free_nursery(tls.nursery_start) + lltype.free(tls, flavor='raw') @always_inline def get_tls(self): tls = self.stm_operations.get_tls() return llmemory.cast_adr_to_ptr(tls, lltype.Ptr(self.GCTLS)) + def allocate_bump_pointer(self, size): + return self._allocate_bump_pointer(self.get_tls(), size) + @always_inline - def allocate_bump_pointer(self, size): - tls = self.get_tls() + def _allocate_bump_pointer(self, tls, size): free = tls.nursery_free top = tls.nursery_top new = free + size @@ -90,3 +114,39 @@ @dont_inline def local_collection(self, oldfree): raise MemoryError("nursery exhausted") # XXX for now + + + def malloc_fixedsize_clear(self, typeid, size, + needs_finalizer=False, + is_finalizer_light=False, + contains_weakptr=False): + assert not needs_finalizer, "XXX" + assert not contains_weakptr, "XXX" + # + # Check the mode: either in a transactional thread, or in + # the main thread. For now we do the same thing in both + # modes, but set different flags. + tls = self.get_tls() + flags = tls.malloc_flags + # + # Get the memory from the nursery. + size_gc_header = self.gcheaderbuilder.size_gc_header + totalsize = size_gc_header + size + result = self._allocate_bump_pointer(tls, totalsize) + # + # Build the object. 
+ llarena.arena_reserve(result, totalsize) + obj = result + size_gc_header + self.init_gc_object(result, typeid, flags=flags) + # + return llmemory.cast_adr_to_ptr(obj, llmemory.GCREF) + + + @always_inline + def combine(self, typeid16, flags): + return llop.combine_ushort(lltype.Signed, typeid16, flags) + + @always_inline + def init_gc_object(self, addr, typeid16, flags=0): + hdr = llmemory.cast_adr_to_ptr(addr, lltype.Ptr(self.HDR)) + hdr.tid = self.combine(typeid16, flags) diff --git a/pypy/rpython/memory/gc/test/test_stmgc.py b/pypy/rpython/memory/gc/test/test_stmgc.py --- a/pypy/rpython/memory/gc/test/test_stmgc.py +++ b/pypy/rpython/memory/gc/test/test_stmgc.py @@ -1,5 +1,6 @@ from pypy.rpython.lltypesystem import lltype, llmemory from pypy.rpython.memory.gc.stmgc import StmGC +from pypy.rpython.memory.gc.stmgc import GCFLAG_GLOBAL class FakeStmOperations: @@ -33,3 +34,24 @@ assert a4 - a3 == 3 assert a5 - a4 == 4 assert a6 - a5 == 5 + + def test_malloc_fixedsize_clear(self): + S = lltype.GcStruct('S', ('a', lltype.Signed), ('b', lltype.Signed)) + gcref = self.gc.malloc_fixedsize_clear(123, llmemory.sizeof(S)) + s = lltype.cast_opaque_ptr(lltype.Ptr(S), gcref) + assert s.a == 0 + assert s.b == 0 + gcref2 = self.gc.malloc_fixedsize_clear(123, llmemory.sizeof(S)) + assert gcref2 != gcref + + def test_malloc_main_vs_thread(self): + S = lltype.GcStruct('S', ('a', lltype.Signed), ('b', lltype.Signed)) + gcref = self.gc.malloc_fixedsize_clear(123, llmemory.sizeof(S)) + obj = llmemory.cast_ptr_to_adr(gcref) + assert (self.gc.header(obj).tid & GCFLAG_GLOBAL) != 0 + # + self.gc.setup_thread(False) + gcref = self.gc.malloc_fixedsize_clear(123, llmemory.sizeof(S)) + obj = llmemory.cast_ptr_to_adr(gcref) + assert (self.gc.header(obj).tid & GCFLAG_GLOBAL) == 0 + self.gc.teardown_thread() From noreply at buildbot.pypy.org Thu Feb 2 16:46:39 2012 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 2 Feb 2012 16:46:39 +0100 (CET) Subject: [pypy-commit] pypy stm-gc: Read and write barriers. Message-ID: <20120202154639.E11D782B6A@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-gc Changeset: r52033:28f4cfffe4db Date: 2012-02-02 16:46 +0100 http://bitbucket.org/pypy/pypy/changeset/28f4cfffe4db/ Log: Read and write barriers. 
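For readers skimming the diff below, the barrier logic reduces to a couple of flag tests plus a thread-local dictionary mapping global objects to their local copies. Here is a toy, dictionary-based rendering (illustration only: the flag values, the names stm_read/stm_write_barrier and the dict-of-dicts objects are stand-ins for the raw-memory code and the C-level stm_operations):

GCFLAG_GLOBAL, GCFLAG_WAS_COPIED = 1, 2          # toy values

def stm_read(obj, field, tldict, fallback_read):
    if not obj['tid'] & GCFLAG_GLOBAL:
        return obj[field]                        # local object: read directly
    if obj['tid'] & GCFLAG_WAS_COPIED and id(obj) in tldict:
        return tldict[id(obj)][field]            # read this thread's copy
    return fallback_read(obj, field)             # else ask the STM runtime

def stm_write_barrier(obj, tldict):
    if not obj['tid'] & GCFLAG_GLOBAL:
        return obj                               # local objects need no copy
    localobj = tldict.get(id(obj))
    if localobj is None:                         # first write: make a copy
        obj['tid'] |= GCFLAG_WAS_COPIED          # mark the global original
        localobj = dict(obj)                     # raw copy, header included
        localobj['tid'] &= ~GCFLAG_GLOBAL        # the copy is a local object
        tldict[id(obj)] = localobj
    return localobj

# usage: writes go to the local copy, later reads find it via the tldict
tldict = {}
glob = {'tid': GCFLAG_GLOBAL, 'a': 42}
copy = stm_write_barrier(glob, tldict)
copy['a'] = 84
assert stm_read(glob, 'a', tldict, lambda o, f: o[f]) == 84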
diff --git a/pypy/rpython/memory/gc/stmgc.py b/pypy/rpython/memory/gc/stmgc.py --- a/pypy/rpython/memory/gc/stmgc.py +++ b/pypy/rpython/memory/gc/stmgc.py @@ -3,6 +3,7 @@ from pypy.rpython.lltypesystem.llmemory import raw_malloc_usage from pypy.rpython.memory.gc.base import GCBase from pypy.rlib.rarithmetic import LONG_BIT +from pypy.rlib.debug import ll_assert WORD = LONG_BIT // 8 @@ -53,7 +54,9 @@ GCBase.__init__(self, config, **kwds) self.stm_operations = stm_operations self.max_nursery_size = max_nursery_size - + # + self.declare_readers() + self.declare_write_barrier() def setup(self): """Called at run-time to initialize the GC.""" @@ -71,7 +74,7 @@ def setup_thread(self, in_main_thread): tls = lltype.malloc(self.GCTLS, flavor='raw') - self.stm_operations.set_tls(llmemory.cast_ptr_to_adr(tls)) + self.stm_operations.set_tls(self, llmemory.cast_ptr_to_adr(tls)) tls.nursery_start = self._alloc_nursery() tls.nursery_size = self.max_nursery_size tls.nursery_free = tls.nursery_start @@ -89,7 +92,7 @@ def teardown_thread(self): tls = self.get_tls() - self.stm_operations.set_tls(NULL) + self.stm_operations.set_tls(self, NULL) self._free_nursery(tls.nursery_start) lltype.free(tls, flavor='raw') @@ -142,6 +145,17 @@ return llmemory.cast_adr_to_ptr(obj, llmemory.GCREF) + def _malloc_local_raw(self, size): + # for _stm_write_barrier_global(): a version of malloc that does + # no initialization of the malloc'ed object + size_gc_header = self.gcheaderbuilder.size_gc_header + totalsize = size_gc_header + size + result = self.allocate_bump_pointer(totalsize) + llarena.arena_reserve(result, totalsize) + obj = result + size_gc_header + return obj + + @always_inline def combine(self, typeid16, flags): return llop.combine_ushort(lltype.Signed, typeid16, flags) @@ -150,3 +164,95 @@ def init_gc_object(self, addr, typeid16, flags=0): hdr = llmemory.cast_adr_to_ptr(addr, lltype.Ptr(self.HDR)) hdr.tid = self.combine(typeid16, flags) + + # ---------- + + def declare_readers(self): + # Reading functions. Defined here to avoid the extra burden of + # passing 'self' explicitly. + stm_operations = self.stm_operations + # + @always_inline + def read_signed(obj, offset): + if self.header(obj).tid & GCFLAG_GLOBAL == 0: + return (obj + offset).signed[0] # local obj: read directly + else: + return _read_word_global(obj, offset) # else: call a helper + self.read_signed = read_signed + # + @dont_inline + def _read_word_global(obj, offset): + hdr = self.header(obj) + if hdr.tid & GCFLAG_WAS_COPIED != 0: + # + # Look up in the thread-local dictionary. + localobj = stm_operations.tldict_lookup(obj) + if localobj: + ll_assert(self.header(localobj).tid & GCFLAG_GLOBAL == 0, + "stm_read: tldict_lookup() -> GLOBAL obj") + return (localobj + offset).signed[0] + # + return stm_operations.stm_read_word(obj, offset) + + + def declare_write_barrier(self): + # Write barrier. Defined here to avoid the extra burden of + # passing 'self' explicitly. 
+ stm_operations = self.stm_operations + # + @always_inline + def write_barrier(obj): + if self.header(obj).tid & GCFLAG_GLOBAL != 0: + obj = _stm_write_barrier_global(obj) + return obj + self.write_barrier = write_barrier + # + @dont_inline + def _stm_write_barrier_global(obj): + # we need to find of make a local copy + hdr = self.header(obj) + if hdr.tid & GCFLAG_WAS_COPIED == 0: + # in this case, we are sure that we don't have a copy + hdr.tid |= GCFLAG_WAS_COPIED + # ^^^ non-protected write, but concurrent writes should + # have the same effect, so fine + else: + # in this case, we need to check first + localobj = stm_operations.tldict_lookup(obj) + if localobj: + hdr = self.header(localobj) + ll_assert(hdr.tid & GCFLAG_GLOBAL == 0, + "stm_write: tldict_lookup() -> GLOBAL obj") + ll_assert(hdr.tid & GCFLAG_WAS_COPIED != 0, + "stm_write: tldict_lookup() -> non-COPIED obj") + return localobj + # + # Here, we need to really make a local copy + size = self.get_size(obj) + try: + localobj = self._malloc_local_raw(size) + except MemoryError: + # XXX + fatalerror("MemoryError in _stm_write_barrier_global -- sorry") + return llmemory.NULL + # + # Initialize the copy by doing an stm raw copy of the bytes + stm_operations.stm_copy_transactional_to_raw(obj, localobj, size) + # + # The raw copy done above includes all header fields. + # Check at least the gc flags of the copy. + hdr = self.header(obj) + localhdr = self.header(localobj) + GCFLAGS = (GCFLAG_GLOBAL | GCFLAG_WAS_COPIED) + ll_assert(hdr.tid & GCFLAGS == GCFLAGS, + "stm_write: bogus flags on source object") + ll_assert(localhdr.tid & GCFLAGS == GCFLAGS, + "stm_write: flags not copied!") + # + # Remove the GCFLAG_GLOBAL from the copy + localhdr.tid &= ~GCFLAG_GLOBAL + # + # Register the object as a valid copy + stm_operations.tldict_add(obj, localobj) + # + return localobj diff --git a/pypy/rpython/memory/gc/test/test_stmgc.py b/pypy/rpython/memory/gc/test/test_stmgc.py --- a/pypy/rpython/memory/gc/test/test_stmgc.py +++ b/pypy/rpython/memory/gc/test/test_stmgc.py @@ -1,15 +1,71 @@ from pypy.rpython.lltypesystem import lltype, llmemory from pypy.rpython.memory.gc.stmgc import StmGC -from pypy.rpython.memory.gc.stmgc import GCFLAG_GLOBAL +from pypy.rpython.memory.gc.stmgc import GCFLAG_GLOBAL, GCFLAG_WAS_COPIED + + +S = lltype.GcStruct('S', ('a', lltype.Signed), ('b', lltype.Signed)) +ofs_a = llmemory.offsetof(S, 'a') class FakeStmOperations: - def set_tls(self, tls): + threadnum = 0 # 0 = main thread; 1,2,3... 
= transactional threads + + def set_tls(self, gc, tls): assert lltype.typeOf(tls) == llmemory.Address - self._tls = tls + if self.threadnum == 0: + assert not hasattr(self, '_tls_dict') + assert not hasattr(self, '_gc') + self._tls_dict = {0: tls} + self._tldicts = {0: {}} + self._gc = gc + self._transactional_copies = [] + else: + assert self._gc is gc + self._tls_dict[self.threadnum] = tls + self._tldicts[self.threadnum] = {} def get_tls(self): - return self._tls + return self._tls_dict[self.threadnum] + + def tldict_lookup(self, obj): + assert lltype.typeOf(obj) == llmemory.Address + assert obj + tldict = self._tldicts[self.threadnum] + return tldict.get(obj, llmemory.NULL) + + def tldict_add(self, obj, localobj): + assert lltype.typeOf(obj) == llmemory.Address + tldict = self._tldicts[self.threadnum] + assert obj not in tldict + tldict[obj] = localobj + + class stm_read_word: + def __init__(self, obj, offset): + self.obj = obj + self.offset = offset + def __repr__(self): + return 'stm_read_word(%r, %r)' % (self.obj, self.offset) + def __eq__(self, other): + return (type(self) is type(other) and + self.__dict__ == other.__dict__) + def __ne__(self, other): + return not (self == other) + + def stm_copy_transactional_to_raw(self, srcobj, dstobj, size): + sizehdr = self._gc.gcheaderbuilder.size_gc_header + srchdr = srcobj - sizehdr + dsthdr = dstobj - sizehdr + llmemory.raw_memcopy(srchdr, dsthdr, sizehdr) + llmemory.raw_memcopy(srcobj, dstobj, size) + self._transactional_copies.append((srcobj, dstobj)) + + +def fake_get_size(obj): + TYPE = obj.ptr._TYPE.TO + if isinstance(TYPE, lltype.GcStruct): + return llmemory.sizeof(TYPE) + else: + assert 0 class TestBasic: @@ -21,8 +77,27 @@ self.gc = self.GCClass(config, FakeStmOperations(), translated_to_c=False) self.gc.DEBUG = True + self.gc.get_size = fake_get_size self.gc.setup() + def teardown_method(self, meth): + for key in self.gc.stm_operations._tls_dict.keys(): + if key != 0: + self.gc.stm_operations.threadnum = key + self.gc.teardown_thread() + + # ---------- + # test helpers + def malloc(self, STRUCT): + gcref = self.gc.malloc_fixedsize_clear(123, llmemory.sizeof(STRUCT)) + realobj = lltype.cast_opaque_ptr(lltype.Ptr(STRUCT), gcref) + addr = llmemory.cast_ptr_to_adr(realobj) + return realobj, addr + def select_thread(self, threadnum): + self.gc.stm_operations.threadnum = threadnum + if threadnum not in self.gc.stm_operations._tls_dict: + self.gc.setup_thread(False) + def test_gc_creation_works(self): pass @@ -36,7 +111,6 @@ assert a6 - a5 == 5 def test_malloc_fixedsize_clear(self): - S = lltype.GcStruct('S', ('a', lltype.Signed), ('b', lltype.Signed)) gcref = self.gc.malloc_fixedsize_clear(123, llmemory.sizeof(S)) s = lltype.cast_opaque_ptr(lltype.Ptr(S), gcref) assert s.a == 0 @@ -45,13 +119,77 @@ assert gcref2 != gcref def test_malloc_main_vs_thread(self): - S = lltype.GcStruct('S', ('a', lltype.Signed), ('b', lltype.Signed)) gcref = self.gc.malloc_fixedsize_clear(123, llmemory.sizeof(S)) obj = llmemory.cast_ptr_to_adr(gcref) - assert (self.gc.header(obj).tid & GCFLAG_GLOBAL) != 0 + assert self.gc.header(obj).tid & GCFLAG_GLOBAL != 0 # - self.gc.setup_thread(False) + self.select_thread(1) gcref = self.gc.malloc_fixedsize_clear(123, llmemory.sizeof(S)) obj = llmemory.cast_ptr_to_adr(gcref) - assert (self.gc.header(obj).tid & GCFLAG_GLOBAL) == 0 - self.gc.teardown_thread() + assert self.gc.header(obj).tid & GCFLAG_GLOBAL == 0 + + def test_reader_direct(self): + s, s_adr = self.malloc(S) + assert self.gc.header(s_adr).tid & GCFLAG_GLOBAL 
!= 0 + s.a = 42 + value = self.gc.read_signed(s_adr, ofs_a) + assert value == FakeStmOperations.stm_read_word(s_adr, ofs_a) + # + self.select_thread(1) + s, s_adr = self.malloc(S) + assert self.gc.header(s_adr).tid & GCFLAG_GLOBAL == 0 + self.gc.header(s_adr).tid |= GCFLAG_WAS_COPIED # should be ignored + s.a = 42 + value = self.gc.read_signed(s_adr, ofs_a) + assert value == 42 + + def test_reader_through_dict(self): + s, s_adr = self.malloc(S) + s.a = 42 + # + self.select_thread(1) + t, t_adr = self.malloc(S) + t.a = 84 + # + self.gc.header(s_adr).tid |= GCFLAG_WAS_COPIED + self.gc.stm_operations._tldicts[1][s_adr] = t_adr + # + value = self.gc.read_signed(s_adr, ofs_a) + assert value == 84 + + def test_write_barrier_exists(self): + self.select_thread(1) + t, t_adr = self.malloc(S) + obj = self.gc.write_barrier(t_adr) # local object + assert obj == t_adr + # + self.select_thread(0) + s, s_adr = self.malloc(S) + # + self.select_thread(1) + self.gc.header(s_adr).tid |= GCFLAG_WAS_COPIED + self.gc.header(t_adr).tid |= GCFLAG_WAS_COPIED + self.gc.stm_operations._tldicts[1][s_adr] = t_adr + obj = self.gc.write_barrier(s_adr) # global copied object + assert obj == t_adr + assert self.gc.stm_operations._transactional_copies == [] + + def test_write_barrier_new(self): + self.select_thread(0) + s, s_adr = self.malloc(S) + s.a = 12 + s.b = 34 + # + self.select_thread(1) + t_adr = self.gc.write_barrier(s_adr) # global object, not copied so far + assert t_adr != s_adr + t = t_adr.ptr + assert t.a == 12 + assert t.b == 34 + assert self.gc.stm_operations._transactional_copies == [(s_adr, t_adr)] + # + u_adr = self.gc.write_barrier(s_adr) # again + assert u_adr == t_adr + # + u_adr = self.gc.write_barrier(u_adr) # local object + assert u_adr == t_adr From noreply at buildbot.pypy.org Thu Feb 2 16:46:51 2012 From: noreply at buildbot.pypy.org (fijal) Date: Thu, 2 Feb 2012 16:46:51 +0100 (CET) Subject: [pypy-commit] pypy numpy-single-jitdriver: good, I broke tests Message-ID: <20120202154651.0F30682B6A@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-single-jitdriver Changeset: r52034:a8cc16d70a11 Date: 2012-02-02 17:46 +0200 http://bitbucket.org/pypy/pypy/changeset/a8cc16d70a11/ Log: good, I broke tests diff --git a/pypy/module/micronumpy/test/test_zjit.py b/pypy/module/micronumpy/test/test_zjit.py --- a/pypy/module/micronumpy/test/test_zjit.py +++ b/pypy/module/micronumpy/test/test_zjit.py @@ -84,7 +84,7 @@ def test_add(self): result = self.run("add") self.check_simple_loop({'getinteriorfield_raw': 2, 'float_add': 1, - 'setinteriorfield_raw': 1, 'int_add': 2, + 'setinteriorfield_raw': 1, 'int_add': 1, 'int_ge': 1, 'guard_false': 1, 'jump': 1, 'arraylen_gc': 1}) assert result == 3 + 3 @@ -99,7 +99,7 @@ result = self.run("float_add") assert result == 3 + 3 self.check_simple_loop({"getinteriorfield_raw": 1, "float_add": 1, - "setinteriorfield_raw": 1, "int_add": 2, + "setinteriorfield_raw": 1, "int_add": 1, "int_ge": 1, "guard_false": 1, "jump": 1, 'arraylen_gc': 1}) @@ -239,7 +239,7 @@ assert result == -6 self.check_simple_loop({"getinteriorfield_raw": 2, "float_add": 1, "float_neg": 1, - "setinteriorfield_raw": 1, "int_add": 2, + "setinteriorfield_raw": 1, "int_add": 1, "int_ge": 1, "guard_false": 1, "jump": 1, 'arraylen_gc': 1}) @@ -321,7 +321,7 @@ # int_add might be 1 here if we try slightly harder with # reusing indexes or some optimization self.check_simple_loop({'float_add': 1, 'getinteriorfield_raw': 2, - 'guard_false': 1, 'int_add': 2, 'int_ge': 1, + 'guard_false': 1, 
'int_add': 1, 'int_ge': 1, 'jump': 1, 'setinteriorfield_raw': 1, 'arraylen_gc': 1}) @@ -387,7 +387,7 @@ assert result == 4 self.check_trace_count(1) self.check_simple_loop({'getinteriorfield_raw': 2, 'float_add': 1, - 'setinteriorfield_raw': 1, 'int_add': 2, + 'setinteriorfield_raw': 1, 'int_add': 1, 'int_ge': 1, 'guard_false': 1, 'jump': 1, 'arraylen_gc': 1}) def define_flat_iter(): @@ -403,7 +403,7 @@ assert result == 6 self.check_trace_count(1) self.check_simple_loop({'getinteriorfield_raw': 2, 'float_add': 1, - 'setinteriorfield_raw': 1, 'int_add': 3, + 'setinteriorfield_raw': 1, 'int_add': 2, 'int_ge': 1, 'guard_false': 1, 'arraylen_gc': 1, 'jump': 1}) From noreply at buildbot.pypy.org Thu Feb 2 17:40:45 2012 From: noreply at buildbot.pypy.org (fijal) Date: Thu, 2 Feb 2012 17:40:45 +0100 (CET) Subject: [pypy-commit] pypy numpy-single-jitdriver: shuffle stuff around so reduce does not need it's own jitdriver. 16 LOC removed Message-ID: <20120202164045.E767182B6A@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-single-jitdriver Changeset: r52035:ef9343793d60 Date: 2012-02-02 18:40 +0200 http://bitbucket.org/pypy/pypy/changeset/ef9343793d60/ Log: shuffle stuff around so reduce does not need it's own jitdriver. 16 LOC removed WIN! diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -17,13 +17,6 @@ from pypy.module.micronumpy.appbridge import get_appbridge_cache -numpy_driver = jit.JitDriver( - greens=['shapelen', 'sig'], - virtualizables=['frame'], - reds=['result_size', 'frame', 'ri', 'self', 'result'], - get_printable_location=signature.new_printable_location('numpy'), - name='numpy', -) all_driver = jit.JitDriver( greens=['shapelen', 'sig'], virtualizables=['frame'], @@ -685,6 +678,9 @@ raise OperationError(space.w_NotImplementedError, space.wrap( "non-int arg not supported")) + def compute_first_step(self, sig, frame): + pass + def convert_to_array(space, w_obj): if isinstance(w_obj, BaseArray): return w_obj @@ -811,7 +807,8 @@ def create_sig(self): if self.forced_result is not None: return self.forced_result.create_sig() - return signature.Call1(self.ufunc, self.name, self.values.create_sig()) + return signature.Call1(self.ufunc, self.name, self.calc_dtype, + self.values.create_sig()) class Call2(VirtualArray): """ @@ -852,10 +849,6 @@ return signature.Call2(self.ufunc, self.name, self.calc_dtype, self.left.create_sig(), self.right.create_sig()) -class ComputationArray(BaseArray): - """ A base class for all objects that describe operations for computation - """ - class ResultArray(Call2): def __init__(self, child, size, shape, dtype, res=None, order='C'): if res is None: @@ -866,12 +859,25 @@ return signature.ResultSignature(self.res_dtype, self.left.create_sig(), self.right.create_sig()) -class Reduce(ComputationArray): - def __init__(self): - pass +class ReduceArray(Call2): + def __init__(self, func, name, identity, child, dtype): + self.identity = identity + Call2.__init__(self, func, name, [1], dtype, dtype, None, child) + + def compute_first_step(self, sig, frame): + assert isinstance(sig, signature.ReduceSignature) + if self.identity is None: + frame.cur_value = sig.right.eval(frame, self).convert_to( + self.calc_dtype) + frame.next(len(self.right.shape)) + else: + frame.cur_value = self.identity.convert_to(self.calc_dtype) + def create_sig(self): - return signature.ReduceSignature(self.func) + return 
signature.ReduceSignature(self.ufunc, self.name, self.res_dtype, + signature.ScalarSignature(self.res_dtype), + self.right.create_sig()) class SliceArray(Call2): def __init__(self, shape, dtype, left, right, no_broadcast=False): diff --git a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py --- a/pypy/module/micronumpy/interp_ufuncs.py +++ b/pypy/module/micronumpy/interp_ufuncs.py @@ -3,21 +3,13 @@ from pypy.interpreter.gateway import interp2app, unwrap_spec, NoneNotWrapped from pypy.interpreter.typedef import TypeDef, GetSetProperty, interp_attrproperty from pypy.module.micronumpy import interp_boxes, interp_dtype, support -from pypy.module.micronumpy.signature import (ReduceSignature, find_sig, +from pypy.module.micronumpy.signature import (find_sig, new_printable_location, AxisReduceSignature, ScalarSignature) from pypy.rlib import jit from pypy.rlib.rarithmetic import LONG_BIT from pypy.tool.sourcetools import func_with_new_name -reduce_driver = jit.JitDriver( - greens=['shapelen', "sig"], - virtualizables=["frame"], - reds=["frame", "self", "dtype", "value", "obj"], - get_printable_location=new_printable_location('reduce'), - name='numpy_reduce', -) - axisreduce_driver = jit.JitDriver( greens=['shapelen', 'sig'], virtualizables=['frame'], @@ -140,7 +132,9 @@ def reduce(self, space, w_obj, multidim, promote_to_largest, dim, keepdims=False): from pypy.module.micronumpy.interp_numarray import convert_to_array, \ - Scalar + Scalar, ReduceArray + from pypy.module.micronumpy import loop + if self.argcount != 2: raise OperationError(space.w_ValueError, space.wrap("reduce only " "supported for binary functions")) @@ -166,17 +160,8 @@ if shapelen > 1 and dim >= 0: res = self.do_axis_reduce(obj, dtype, dim, keepdims) return space.wrap(res) - scalarsig = ScalarSignature(dtype) - sig = find_sig(ReduceSignature(self.func, self.name, dtype, - scalarsig, - obj.create_sig()), obj) - frame = sig.create_frame(obj) - if self.identity is None: - value = sig.eval(frame, obj).convert_to(dtype) - frame.next(shapelen) - else: - value = self.identity.convert_to(dtype) - return self.reduce_loop(shapelen, sig, frame, value, obj, dtype) + arr = ReduceArray(self.func, self.name, self.identity, obj, dtype) + return loop.compute(arr) def do_axis_reduce(self, obj, dtype, dim, keepdims): from pypy.module.micronumpy.interp_numarray import AxisReduce,\ @@ -229,19 +214,6 @@ arr.left.setitem(iterator.offset, value) frame.next(shapelen) - def reduce_loop(self, shapelen, sig, frame, value, obj, dtype): - while not frame.done(): - reduce_driver.jit_merge_point(sig=sig, - shapelen=shapelen, self=self, - value=value, obj=obj, frame=frame, - dtype=dtype) - assert isinstance(sig, ReduceSignature) - value = sig.binfunc(dtype, value, - sig.eval(frame, obj).convert_to(dtype)) - frame.next(shapelen) - return value - - class W_Ufunc1(W_Ufunc): argcount = 1 diff --git a/pypy/module/micronumpy/loop.py b/pypy/module/micronumpy/loop.py --- a/pypy/module/micronumpy/loop.py +++ b/pypy/module/micronumpy/loop.py @@ -3,8 +3,55 @@ signatures """ -from pypy.rlib.jit import JitDriver +from pypy.rlib.jit import JitDriver, hint, unroll_safe, promote from pypy.module.micronumpy import signature +from pypy.module.micronumpy.interp_iter import ConstantIterator + +class NumpyEvalFrame(object): + _virtualizable2_ = ['iterators[*]', 'final_iter', 'arraylist[*]', + 'value', 'identity', 'cur_value'] + + @unroll_safe + def __init__(self, iterators, arrays, identity=None): + self = hint(self, access_directly=True, 
fresh_virtualizable=True) + self.iterators = iterators[:] + self.arrays = arrays[:] + for i in range(len(self.iterators)): + iter = self.iterators[i] + if not isinstance(iter, ConstantIterator): + self.final_iter = i + break + else: + self.final_iter = -1 + self.cur_value = None + self.identity = identity + + def done(self): + final_iter = promote(self.final_iter) + if final_iter < 0: + assert False + return self.iterators[final_iter].done() + + @unroll_safe + def next(self, shapelen): + for i in range(len(self.iterators)): + self.iterators[i] = self.iterators[i].next(shapelen) + + @unroll_safe + def next_from_second(self, shapelen): + """ Don't increase the first iterator + """ + for i in range(1, len(self.iterators)): + self.iterators[i] = self.iterators[i].next(shapelen) + + def next_first(self, shapelen): + self.iterators[0] = self.iterators[0].next(shapelen) + + def get_final_iter(self): + final_iter = promote(self.final_iter) + if final_iter < 0: + assert False + return self.iterators[final_iter] def get_printable_location(shapelen, sig): return 'numpy ' + sig.debug_repr() + ' [%d dims]' % (shapelen,) @@ -27,4 +74,4 @@ frame=frame, arr=arr) sig.eval(frame, arr) frame.next(shapelen) - + return frame.cur_value diff --git a/pypy/module/micronumpy/signature.py b/pypy/module/micronumpy/signature.py --- a/pypy/module/micronumpy/signature.py +++ b/pypy/module/micronumpy/signature.py @@ -1,9 +1,7 @@ from pypy.rlib.objectmodel import r_dict, compute_identity_hash, compute_hash from pypy.rlib.rarithmetic import intmask -from pypy.module.micronumpy.interp_iter import ViewIterator, ArrayIterator, \ - ConstantIterator, AxisIterator, ViewTransform,\ - BroadcastTransform -from pypy.rlib.jit import hint, unroll_safe, promote +from pypy.module.micronumpy.interp_iter import ConstantIterator, AxisIterator,\ + ViewTransform, BroadcastTransform from pypy.tool.pairtype import extendabletype """ Signature specifies both the numpy expression that has been constructed @@ -55,50 +53,6 @@ known_sigs[sig] = sig return sig -class NumpyEvalFrame(object): - _virtualizable2_ = ['iterators[*]', 'final_iter', 'arraylist[*]', - 'value', 'identity'] - - @unroll_safe - def __init__(self, iterators, arrays): - self = hint(self, access_directly=True, fresh_virtualizable=True) - self.iterators = iterators[:] - self.arrays = arrays[:] - for i in range(len(self.iterators)): - iter = self.iterators[i] - if not isinstance(iter, ConstantIterator): - self.final_iter = i - break - else: - self.final_iter = -1 - - def done(self): - final_iter = promote(self.final_iter) - if final_iter < 0: - assert False - return self.iterators[final_iter].done() - - @unroll_safe - def next(self, shapelen): - for i in range(len(self.iterators)): - self.iterators[i] = self.iterators[i].next(shapelen) - - @unroll_safe - def next_from_second(self, shapelen): - """ Don't increase the first iterator - """ - for i in range(1, len(self.iterators)): - self.iterators[i] = self.iterators[i].next(shapelen) - - def next_first(self, shapelen): - self.iterators[0] = self.iterators[0].next(shapelen) - - def get_final_iter(self): - final_iter = promote(self.final_iter) - if final_iter < 0: - assert False - return self.iterators[final_iter] - def _add_ptr_to_cache(ptr, cache): i = 0 for p in cache: @@ -141,10 +95,20 @@ self.iter_no = no def create_frame(self, arr): + from pypy.module.micronumpy.loop import NumpyEvalFrame + from pypy.module.micronumpy.interp_numarray import ReduceArray + iterlist = [] arraylist = [] self._create_iter(iterlist, arraylist, arr, []) - 
return NumpyEvalFrame(iterlist, arraylist) + if isinstance(arr, ReduceArray): + identity = arr.identity + else: + identity = None + f = NumpyEvalFrame(iterlist, arraylist, identity) + # hook for cur_value being used by reduce + arr.compute_first_step(self, f) + return f class ConcreteSignature(Signature): _immutable_fields_ = ['dtype'] @@ -256,12 +220,13 @@ return self.child.eval(frame, arr.child) class Call1(Signature): - _immutable_fields_ = ['unfunc', 'name', 'child'] + _immutable_fields_ = ['unfunc', 'name', 'child', 'dtype'] - def __init__(self, func, name, child): + def __init__(self, func, name, dtype, child): self.unfunc = func self.child = child self.name = name + self.dtype = dtype def hash(self): return compute_hash(self.name) ^ intmask(self.child.hash() << 1) @@ -398,20 +363,14 @@ self.right._create_iter(iterlist, arraylist, arr.right, rtransforms) class ReduceSignature(Call2): - def _create_iter(self, iterlist, arraylist, arr, transforms): - self.right._create_iter(iterlist, arraylist, arr, transforms) - - def _invent_numbering(self, cache, allnumbers): - self.right._invent_numbering(cache, allnumbers) - - def _invent_array_numbering(self, arr, cache): - self.right._invent_array_numbering(arr, cache) - def eval(self, frame, arr): - return self.right.eval(frame, arr) + from pypy.module.micronumpy.interp_numarray import ReduceArray + assert isinstance(arr, ReduceArray) + rval = self.right.eval(frame, arr.right).convert_to(self.calc_dtype) + frame.cur_value = self.binfunc(self.calc_dtype, frame.cur_value, rval) def debug_repr(self): - return 'ReduceSig(%s, %s)' % (self.name, self.right.debug_repr()) + return 'ReduceSig(%s)' % (self.name, self.right.debug_repr()) class SliceloopSignature(Call2): def eval(self, frame, arr): From noreply at buildbot.pypy.org Thu Feb 2 17:48:07 2012 From: noreply at buildbot.pypy.org (fijal) Date: Thu, 2 Feb 2012 17:48:07 +0100 (CET) Subject: [pypy-commit] pypy numpy-single-jitdriver: fix and shuffle stuff around a bit Message-ID: <20120202164807.7742E82B6A@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-single-jitdriver Changeset: r52036:ad25c403e833 Date: 2012-02-02 18:47 +0200 http://bitbucket.org/pypy/pypy/changeset/ad25c403e833/ Log: fix and shuffle stuff around a bit diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -867,7 +867,7 @@ def compute_first_step(self, sig, frame): assert isinstance(sig, signature.ReduceSignature) if self.identity is None: - frame.cur_value = sig.right.eval(frame, self).convert_to( + frame.cur_value = sig.right.eval(frame, self.right).convert_to( self.calc_dtype) frame.next(len(self.right.shape)) else: @@ -879,6 +879,17 @@ signature.ScalarSignature(self.res_dtype), self.right.create_sig()) +class AxisReduce(Call2): + _immutable_fields_ = ['left', 'right'] + + def __init__(self, ufunc, name, shape, dtype, left, right, dim): + Call2.__init__(self, ufunc, name, shape, dtype, dtype, + left, right) + self.dim = dim + +# def create_sig(self): +# return signature.AxisReduceSignature(self.ufunc + class SliceArray(Call2): def __init__(self, shape, dtype, left, right, no_broadcast=False): self.no_broadcast = no_broadcast @@ -897,14 +908,6 @@ self.calc_dtype, lsig, rsig) -class AxisReduce(Call2): - _immutable_fields_ = ['left', 'right'] - - def __init__(self, ufunc, name, shape, dtype, left, right, dim): - Call2.__init__(self, ufunc, name, shape, dtype, dtype, - 
left, right) - self.dim = dim - class ConcreteArray(BaseArray): """ An array that have actual storage, whether owned or not """ diff --git a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py --- a/pypy/module/micronumpy/interp_ufuncs.py +++ b/pypy/module/micronumpy/interp_ufuncs.py @@ -158,8 +158,7 @@ raise operationerrfmt(space.w_ValueError, "zero-size array to " "%s.reduce without identity", self.name) if shapelen > 1 and dim >= 0: - res = self.do_axis_reduce(obj, dtype, dim, keepdims) - return space.wrap(res) + return self.do_axis_reduce(obj, dtype, dim, keepdims) arr = ReduceArray(self.func, self.name, self.identity, obj, dtype) return loop.compute(arr) From noreply at buildbot.pypy.org Thu Feb 2 18:20:52 2012 From: noreply at buildbot.pypy.org (fijal) Date: Thu, 2 Feb 2012 18:20:52 +0100 (CET) Subject: [pypy-commit] pypy numpy-single-jitdriver: axis reduce driver -> trash Message-ID: <20120202172052.6479482B6A@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-single-jitdriver Changeset: r52037:6ae636ba7590 Date: 2012-02-02 19:20 +0200 http://bitbucket.org/pypy/pypy/changeset/6ae636ba7590/ Log: axis reduce driver -> trash diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -872,7 +872,6 @@ frame.next(len(self.right.shape)) else: frame.cur_value = self.identity.convert_to(self.calc_dtype) - def create_sig(self): return signature.ReduceSignature(self.ufunc, self.name, self.res_dtype, @@ -882,13 +881,20 @@ class AxisReduce(Call2): _immutable_fields_ = ['left', 'right'] - def __init__(self, ufunc, name, shape, dtype, left, right, dim): + def __init__(self, ufunc, name, identity, shape, dtype, left, right, dim): Call2.__init__(self, ufunc, name, shape, dtype, dtype, left, right) self.dim = dim + self.identity = identity -# def create_sig(self): -# return signature.AxisReduceSignature(self.ufunc + def compute_first_step(self, sig, frame): + frame.identity = self.identity.convert_to(self.calc_dtype) + + def create_sig(self): + return signature.AxisReduceSignature(self.ufunc, self.name, + self.res_dtype, + signature.ScalarSignature(self.res_dtype), + self.right.create_sig()) class SliceArray(Call2): def __init__(self, shape, dtype, left, right, no_broadcast=False): diff --git a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py --- a/pypy/module/micronumpy/interp_ufuncs.py +++ b/pypy/module/micronumpy/interp_ufuncs.py @@ -3,22 +3,9 @@ from pypy.interpreter.gateway import interp2app, unwrap_spec, NoneNotWrapped from pypy.interpreter.typedef import TypeDef, GetSetProperty, interp_attrproperty from pypy.module.micronumpy import interp_boxes, interp_dtype, support -from pypy.module.micronumpy.signature import (find_sig, - new_printable_location, AxisReduceSignature, ScalarSignature) -from pypy.rlib import jit from pypy.rlib.rarithmetic import LONG_BIT from pypy.tool.sourcetools import func_with_new_name - -axisreduce_driver = jit.JitDriver( - greens=['shapelen', 'sig'], - virtualizables=['frame'], - reds=['self','arr', 'identity', 'frame'], - name='numpy_axisreduce', - get_printable_location=new_printable_location('axisreduce'), -) - - class W_Ufunc(Wrappable): _attrs_ = ["name", "promote_to_float", "promote_bools", "identity"] _immutable_fields_ = ["promote_to_float", "promote_bools", "name"] @@ -165,53 +152,17 @@ def do_axis_reduce(self, obj, dtype, dim, keepdims): from 
pypy.module.micronumpy.interp_numarray import AxisReduce,\ W_NDimArray + from pypy.module.micronumpy import loop if keepdims: shape = obj.shape[:dim] + [1] + obj.shape[dim + 1:] else: shape = obj.shape[:dim] + obj.shape[dim + 1:] result = W_NDimArray(support.product(shape), shape, dtype) - rightsig = obj.create_sig() - # note - this is just a wrapper so signature can fetch - # both left and right, nothing more, especially - # this is not a true virtual array, because shapes - # don't quite match - arr = AxisReduce(self.func, self.name, obj.shape, dtype, + arr = AxisReduce(self.func, self.name, self.identity, obj.shape, dtype, result, obj, dim) - scalarsig = ScalarSignature(dtype) - sig = find_sig(AxisReduceSignature(self.func, self.name, dtype, - scalarsig, rightsig), arr) - assert isinstance(sig, AxisReduceSignature) - frame = sig.create_frame(arr) - shapelen = len(obj.shape) - if self.identity is not None: - identity = self.identity.convert_to(dtype) - else: - identity = None - self.reduce_axis_loop(frame, sig, shapelen, arr, identity) - return result - - def reduce_axis_loop(self, frame, sig, shapelen, arr, identity): - # note - we can be advanterous here, depending on the exact field - # layout. For now let's say we iterate the original way and - # simply follow the original iteration order - while not frame.done(): - axisreduce_driver.jit_merge_point(frame=frame, self=self, - sig=sig, - identity=identity, - shapelen=shapelen, arr=arr) - iterator = frame.get_final_iter() - v = sig.eval(frame, arr).convert_to(sig.calc_dtype) - if iterator.first_line: - if identity is not None: - value = self.func(sig.calc_dtype, identity, v) - else: - value = v - else: - cur = arr.left.getitem(iterator.offset) - value = self.func(sig.calc_dtype, cur, v) - arr.left.setitem(iterator.offset, value) - frame.next(shapelen) + loop.compute(arr) + return arr.left class W_Ufunc1(W_Ufunc): argcount = 1 diff --git a/pypy/module/micronumpy/loop.py b/pypy/module/micronumpy/loop.py --- a/pypy/module/micronumpy/loop.py +++ b/pypy/module/micronumpy/loop.py @@ -12,7 +12,7 @@ 'value', 'identity', 'cur_value'] @unroll_safe - def __init__(self, iterators, arrays, identity=None): + def __init__(self, iterators, arrays): self = hint(self, access_directly=True, fresh_virtualizable=True) self.iterators = iterators[:] self.arrays = arrays[:] @@ -24,7 +24,7 @@ else: self.final_iter = -1 self.cur_value = None - self.identity = identity + self.identity = None def done(self): final_iter = promote(self.final_iter) diff --git a/pypy/module/micronumpy/signature.py b/pypy/module/micronumpy/signature.py --- a/pypy/module/micronumpy/signature.py +++ b/pypy/module/micronumpy/signature.py @@ -96,16 +96,11 @@ def create_frame(self, arr): from pypy.module.micronumpy.loop import NumpyEvalFrame - from pypy.module.micronumpy.interp_numarray import ReduceArray iterlist = [] arraylist = [] self._create_iter(iterlist, arraylist, arr, []) - if isinstance(arr, ReduceArray): - identity = arr.identity - else: - identity = None - f = NumpyEvalFrame(iterlist, arraylist, identity) + f = NumpyEvalFrame(iterlist, arraylist) # hook for cur_value being used by reduce arr.compute_first_step(self, f) return f @@ -424,7 +419,17 @@ from pypy.module.micronumpy.interp_numarray import AxisReduce assert isinstance(arr, AxisReduce) - return self.right.eval(frame, arr.right).convert_to(self.calc_dtype) + iterator = frame.get_final_iter() + v = self.right.eval(frame, arr.right).convert_to(self.calc_dtype) + if iterator.first_line: + if frame.identity is not None: + 
value = self.binfunc(self.calc_dtype, frame.identity, v) + else: + value = v + else: + cur = arr.left.getitem(iterator.offset) + value = self.binfunc(self.calc_dtype, cur, v) + arr.left.setitem(iterator.offset, value) def debug_repr(self): return 'AxisReduceSig(%s, %s)' % (self.name, self.right.debug_repr()) From noreply at buildbot.pypy.org Thu Feb 2 18:22:27 2012 From: noreply at buildbot.pypy.org (fijal) Date: Thu, 2 Feb 2012 18:22:27 +0100 (CET) Subject: [pypy-commit] pypy numpy-single-jitdriver: oops Message-ID: <20120202172227.C2FC682B6A@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-single-jitdriver Changeset: r52038:67dce8d83391 Date: 2012-02-02 19:22 +0200 http://bitbucket.org/pypy/pypy/changeset/67dce8d83391/ Log: oops diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -888,7 +888,8 @@ self.identity = identity def compute_first_step(self, sig, frame): - frame.identity = self.identity.convert_to(self.calc_dtype) + if self.identity is not None: + frame.identity = self.identity.convert_to(self.calc_dtype) def create_sig(self): return signature.AxisReduceSignature(self.ufunc, self.name, From noreply at buildbot.pypy.org Fri Feb 3 00:34:50 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Fri, 3 Feb 2012 00:34:50 +0100 (CET) Subject: [pypy-commit] pypy py3k: Fix test_class.py Message-ID: <20120202233450.9944A710757@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: py3k Changeset: r52040:87f8c6c473f8 Date: 2012-01-31 21:01 +0100 http://bitbucket.org/pypy/pypy/changeset/87f8c6c473f8/ Log: Fix test_class.py diff --git a/pypy/interpreter/test/test_class.py b/pypy/interpreter/test/test_class.py --- a/pypy/interpreter/test/test_class.py +++ b/pypy/interpreter/test/test_class.py @@ -9,41 +9,28 @@ assert c.__class__ == C def test_metaclass_explicit(self): + """ class M(type): pass - class C: - __metaclass__ = M + class C(metaclass=M): + pass assert C.__class__ == M c = C() assert c.__class__ == C + """ def test_metaclass_inherited(self): + """ class M(type): pass - class B: - __metaclass__ = M + class B(metaclass=M): + pass class C(B): pass assert C.__class__ == M c = C() assert c.__class__ == C - - def test_metaclass_global(self): - d = {} - metatest_text = """if 1: - class M(type): - pass - - __metaclass__ = M - - class C: - pass\n""" - exec metatest_text in d - C = d['C'] - M = d['M'] - assert C.__class__ == M - c = C() - assert c.__class__ == C + """ def test_method(self): class C(object): @@ -91,15 +78,6 @@ assert type(int(x)) == int assert int(x) == 5 - def test_long_subclass(self): - class R(long): - pass - x = R(5L) - assert type(x) == R - assert x == 5L - assert type(long(x)) == long - assert long(x) == 5L - def test_float_subclass(self): class R(float): pass From noreply at buildbot.pypy.org Fri Feb 3 00:34:51 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Fri, 3 Feb 2012 00:34:51 +0100 (CET) Subject: [pypy-commit] pypy py3k: remove import for sys.setdefaultencoding Message-ID: <20120202233451.DBC01710757@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: py3k Changeset: r52041:06083ead8c02 Date: 2012-02-02 21:12 +0100 http://bitbucket.org/pypy/pypy/changeset/06083ead8c02/ Log: remove import for sys.setdefaultencoding diff --git a/pypy/module/cpyext/unicodeobject.py b/pypy/module/cpyext/unicodeobject.py --- a/pypy/module/cpyext/unicodeobject.py +++ 
b/pypy/module/cpyext/unicodeobject.py @@ -11,7 +11,6 @@ PyObject, PyObjectP, Py_DecRef, make_ref, from_ref, track_reference, make_typedescr, get_typedescr) from pypy.module.cpyext.stringobject import PyString_Check -from pypy.module.sys.interp_encoding import setdefaultencoding from pypy.objspace.std import unicodeobject, unicodetype from pypy.rlib import runicode from pypy.tool.sourcetools import func_renamer From noreply at buildbot.pypy.org Fri Feb 3 00:34:49 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Fri, 3 Feb 2012 00:34:49 +0100 (CET) Subject: [pypy-commit] pypy py3k: Fixes for test_exec Message-ID: <20120202233449.42DD6710756@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: py3k Changeset: r52039:ab6e952d3c52 Date: 2012-01-30 21:54 +0100 http://bitbucket.org/pypy/pypy/changeset/ab6e952d3c52/ Log: Fixes for test_exec diff --git a/pypy/interpreter/pyopcode.py b/pypy/interpreter/pyopcode.py --- a/pypy/interpreter/pyopcode.py +++ b/pypy/interpreter/pyopcode.py @@ -1477,13 +1477,9 @@ if not isinstance(prog, codetype): filename = '' - if not isinstance(prog, str): - if isinstance(prog, file): - filename = prog.name - prog = prog.read() - else: - raise TypeError("exec: arg 1 must be a string, file, " - "or code object") + if not isinstance(prog, (str, bytes)): + raise TypeError("exec: arg 1 must be a string, bytes, " + "or code object") prog = compile(prog, filename, 'exec', compile_flags, 1) return (prog, globals, locals) ''', filename=__file__) diff --git a/pypy/interpreter/test/test_exec.py b/pypy/interpreter/test/test_exec.py --- a/pypy/interpreter/test/test_exec.py +++ b/pypy/interpreter/test/test_exec.py @@ -5,74 +5,61 @@ from pypy.tool.udir import udir -def test_file(space): - fn = udir.join('test_exec_file') - fn.write('abc=1\ncba=2\n') - space.appexec([space.wrap(str(fn))], ''' - (filename): - fo = open(filename, 'r') - g = {} - exec fo in g - assert 'abc' in g - assert 'cba' in g - ''') - - class AppTestExecStmt: def test_string(self): g = {} l = {} - exec "a = 3" in g, l + exec("a = 3", g, l) assert l['a'] == 3 def test_localfill(self): g = {} - exec "a = 3" in g + exec("a = 3", g) assert g['a'] == 3 def test_builtinsupply(self): g = {} - exec "pass" in g - assert g.has_key('__builtins__') + exec("pass", g) + assert '__builtins__' in g def test_invalidglobal(self): def f(): - exec 'pass' in 1 - raises(TypeError,f) + exec('pass', 1) + raises(TypeError, f) def test_invalidlocal(self): def f(): - exec 'pass' in {}, 2 - raises(TypeError,f) + exec('pass', {}, 2) + raises(TypeError, f) def test_codeobject(self): co = compile("a = 3", '', 'exec') g = {} l = {} - exec co in g, l + exec(co, g, l) assert l['a'] == 3 def test_implicit(self): a = 4 - exec "a = 3" + exec("a = 3") assert a == 3 def test_tuplelocals(self): g = {} l = {} - exec ("a = 3", g, l) + exec("a = 3", g, l) assert l['a'] == 3 def test_tupleglobals(self): g = {} - exec ("a = 3", g) + exec("a = 3", g) assert g['a'] == 3 def test_exceptionfallthrough(self): def f(): - exec 'raise TypeError' in {} - raises(TypeError,f) + exec('raise TypeError', {}) + raises(TypeError, f) def test_global_stmt(self): g = {} @@ -80,41 +67,24 @@ co = compile("global a; a=5", '', 'exec') #import dis #dis.dis(co) - exec co in g, l + exec(co, g, l) assert l == {} assert g['a'] == 5 def test_specialcase_free_load(self): - exec """if 1: + exec("""if 1: def f(): - exec 'a=3' + exec('a=3') return a - x = f()\n""" - assert x == 3 + raises(NameError, f)\n""") def test_specialcase_free_load2(self): - exec """if 1: + 
exec("""if 1: def f(a): - exec 'a=3' + exec('a=3') return a - x = f(4)\n""" - assert x == 3 - - def test_specialcase_globals_and_exec(self): - d = {} - exec """if 1: - b = 2 - c = 3 - d = 4 - def f(a): - global b - exec 'd=42 ; b=7' - return a,b,c,d - #import dis - #dis.dis(f) - res = f(3)\n""" in d - r = d['res'] - assert r == (3,2,3,42) + x = f(4)\n""") + assert eval("x") == 3 def test_nested_names_are_not_confused(self): def get_nested_class(): @@ -134,46 +104,25 @@ assert t.test() == 'var' assert t.method_and_var() == 'method' - def test_import_star_shadows_global(self): - d = {'platform' : 3} - exec """if 1: - def f(): - from sys import * - return platform - res = f()\n""" in d - import sys - assert d['res'] == sys.platform - - def test_import_global_takes_precendence(self): - d = {'platform' : 3} - exec """if 1: - def f(): - global platform - from sys import * - return platform - res = f()\n""" in d - import sys - assert d['platform'] == 3 - def test_exec_load_name(self): d = {'x': 2} - exec """if 1: + exec("""if 1: def f(): save = x - exec "x=3" + exec("x=3") return x,save - \n""" in d + \n""", d) res = d['f']() - assert res == (3, 2) + assert res == (2, 2) def test_space_bug(self): d = {} - exec "x=5 " in d + exec("x=5 ", d) assert d['x'] == 5 def test_synerr(self): def x(): - exec "1 2" + exec("1 2") raises(SyntaxError, x) def test_mapping_as_locals(self): @@ -189,43 +138,44 @@ assert key == '__builtins__' m = M() m.result = {} - exec "x=m" in {}, m + exec("x=m", {}, m) assert m.result == {'x': 'm'} - exec "y=n" in m # NOTE: this doesn't work in CPython 2.4 + exec("y=n", m) assert m.result == {'x': 'm', 'y': 'n'} def test_filename(self): try: - exec "'unmatched_quote" - except SyntaxError, msg: + exec("'unmatched_quote") + except SyntaxError as msg: assert msg.filename == '' try: eval("'unmatched_quote") - except SyntaxError, msg: + except SyntaxError as msg: assert msg.filename == '' def test_exec_and_name_lookups(self): ns = {} - exec """def f(): - exec 'x=1' in locals() - return x -""" in ns + exec("""def f(): + exec('x=1', globals()) + return x\n""", ns) f = ns['f'] try: res = f() - except NameError, e: # keep py.test from exploding confused + except NameError as e: # keep py.test from exploding confused raise e assert res == 1 def test_exec_unicode(self): - # 's' is a string - s = "x = u'\xd0\xb9\xd1\x86\xd1\x83\xd0\xba\xd0\xb5\xd0\xbd'" + # 's' is a bytes string + s = b"x = '\xd0\xb9\xd1\x86\xd1\x83\xd0\xba\xd0\xb5\xd0\xbd'" # 'u' is a unicode u = s.decode('utf-8') - exec u + ns = {} + exec(u, ns) + x = ns['x'] assert len(x) == 6 assert ord(x[0]) == 0x0439 assert ord(x[1]) == 0x0446 @@ -235,14 +185,15 @@ assert ord(x[5]) == 0x043d def test_eval_unicode(self): - u = "u'%s'" % unichr(0x1234) + u = "'%s'" % chr(0x1234) v = eval(u) - assert v == unichr(0x1234) + assert v == chr(0x1234) - def test_compile_unicode(self): - s = "x = u'\xd0\xb9\xd1\x86\xd1\x83\xd0\xba\xd0\xb5\xd0\xbd'" - u = s.decode('utf-8') - c = compile(u, '', 'exec') - exec c + def test_compile_bytes(self): + s = b"x = '\xd0\xb9\xd1\x86\xd1\x83\xd0\xba\xd0\xb5\xd0\xbd'" + c = compile(s, '', 'exec') + ns = {} + exec(c, ns) + x = ns['x'] assert len(x) == 6 assert ord(x[0]) == 0x0439 From noreply at buildbot.pypy.org Fri Feb 3 00:34:53 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Fri, 3 Feb 2012 00:34:53 +0100 (CET) Subject: [pypy-commit] pypy py3k: Assign correct scope for expressions used in the class header Message-ID: <20120202233453.27FD5710756@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot 
d'Arc Branch: py3k Changeset: r52042:873f1a85f982 Date: 2012-02-02 21:47 +0100 http://bitbucket.org/pypy/pypy/changeset/873f1a85f982/ Log: Assign correct scope for expressions used in the class header class X(__metaclass=expr1, *expr2, **expr3) Remember to do the same thing when we add argument annotations! diff --git a/pypy/interpreter/astcompiler/symtable.py b/pypy/interpreter/astcompiler/symtable.py --- a/pypy/interpreter/astcompiler/symtable.py +++ b/pypy/interpreter/astcompiler/symtable.py @@ -370,6 +370,11 @@ def visit_ClassDef(self, clsdef): self.note_symbol(clsdef.name, SYM_ASSIGNED) self.visit_sequence(clsdef.bases) + self.visit_sequence(clsdef.keywords) + if clsdef.starargs: + clsdef.starargs.walkabout(self) + if clsdef.kwargs: + clsdef.kwargs.walkabout(self) self.visit_sequence(clsdef.decorator_list) self.push_scope(ClassScope(clsdef), clsdef) self.note_symbol('@__class__', SYM_ASSIGNED) diff --git a/pypy/interpreter/astcompiler/test/test_symtable.py b/pypy/interpreter/astcompiler/test/test_symtable.py --- a/pypy/interpreter/astcompiler/test/test_symtable.py +++ b/pypy/interpreter/astcompiler/test/test_symtable.py @@ -217,6 +217,15 @@ xscp = cscp.children[1] assert xscp.lookup("n") == symtable.SCOPE_FREE + def test_class_kwargs(self): + scp = self.func_scope("""def f(n): + class X(meta=Z, *args, **kwargs): + pass""") + assert scp.lookup("X") == symtable.SCOPE_LOCAL + assert scp.lookup("Z") == symtable.SCOPE_GLOBAL_IMPLICIT + assert scp.lookup("args") == symtable.SCOPE_GLOBAL_IMPLICIT + assert scp.lookup("kwargs") == symtable.SCOPE_GLOBAL_IMPLICIT + def test_lambda(self): scp = self.mod_scope("lambda x: y") self.check_unknown(scp, "x", "y") diff --git a/pypy/interpreter/astcompiler/tools/Python.asdl b/pypy/interpreter/astcompiler/tools/Python.asdl --- a/pypy/interpreter/astcompiler/tools/Python.asdl +++ b/pypy/interpreter/astcompiler/tools/Python.asdl @@ -1,4 +1,4 @@ --- ASDL's five builtin types are identifier, int, string, object, bool +-- ASDL's four builtin types are identifier, int, string, object module Python version "$Revision: 43614 $" { @@ -103,7 +103,7 @@ -- not sure what to call the first argument for raise and except excepthandler = ExceptHandler(expr? type, identifier? name, stmt* body) - attributes(int lineno, int col_offset) + attributes (int lineno, int col_offset) arguments = (expr* args, identifier? vararg, expr* kwonlyargs, identifier? kwarg, expr* defaults) @@ -114,3 +114,4 @@ -- import name with optional 'as' alias. alias = (identifier name, identifier? asname) } + diff --git a/pypy/module/cpyext/test/test_cpyext.py b/pypy/module/cpyext/test/test_cpyext.py --- a/pypy/module/cpyext/test/test_cpyext.py +++ b/pypy/module/cpyext/test/test_cpyext.py @@ -204,7 +204,8 @@ filename. """ name = name.encode() - init = init.encode() + if init is not None: + init = init.encode() body = body.encode() if init is not None: code = """ From noreply at buildbot.pypy.org Fri Feb 3 00:34:54 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Fri, 3 Feb 2012 00:34:54 +0100 (CET) Subject: [pypy-commit] pypy py3k: Fix various segfaults and internal error while testing the cpyext module. Message-ID: <20120202233454.8385D710756@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: py3k Changeset: r52043:d718067d0780 Date: 2012-02-02 23:17 +0100 http://bitbucket.org/pypy/pypy/changeset/d718067d0780/ Log: Fix various segfaults and internal error while testing the cpyext module. Many failures remain though. 
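The diff below, among other things, renames stringobject.c to unicodeobject.c and ports the formatting helpers to the unicode API: PyUnicode_FromFormat()/PyUnicode_FromFormatV() take over from the old PyString_FromFormat() pair, PyErr_Format() now builds its message through them, and PyObject_ASCII() is added to back the %A format code. As a rough sketch of what this enables for an extension module compiled against cpyext -- not part of the changeset; the helper name report_failure and its arguments are invented for illustration:

#include <Python.h>

/* Illustrative only: raise an exception whose message is built with the
   PyUnicode_FromFormat() machinery added in this changeset.  %R is routed
   through PyObject_Repr() and %d through the plain C integer path, both of
   which the new PyUnicode_FromFormatV() implementation handles. */
static PyObject *
report_failure(PyObject *obj, int code)
{
    PyObject *msg = PyUnicode_FromFormat("operation on %R failed (code %d)",
                                         obj, code);
    if (msg == NULL)
        return NULL;
    PyErr_SetObject(PyExc_RuntimeError, msg);
    Py_DECREF(msg);
    return NULL;   /* callers propagate the exception */
}
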
diff --git a/pypy/module/cpyext/api.py b/pypy/module/cpyext/api.py --- a/pypy/module/cpyext/api.py +++ b/pypy/module/cpyext/api.py @@ -910,7 +910,7 @@ source_dir / "pyerrors.c", source_dir / "modsupport.c", source_dir / "getargs.c", - source_dir / "stringobject.c", + source_dir / "unicodeobject.c", source_dir / "mysnprintf.c", source_dir / "pythonrun.c", source_dir / "sysmodule.c", diff --git a/pypy/module/cpyext/funcobject.py b/pypy/module/cpyext/funcobject.py --- a/pypy/module/cpyext/funcobject.py +++ b/pypy/module/cpyext/funcobject.py @@ -126,6 +126,7 @@ version since the definition of the bytecode changes often.""" return space.wrap(PyCode(space, argcount=rffi.cast(lltype.Signed, argcount), + kwonlyargcount = 0, # XXX fix signature nlocals=rffi.cast(lltype.Signed, nlocals), stacksize=rffi.cast(lltype.Signed, stacksize), flags=rffi.cast(lltype.Signed, flags), diff --git a/pypy/module/cpyext/include/pyconfig.h b/pypy/module/cpyext/include/pyconfig.h --- a/pypy/module/cpyext/include/pyconfig.h +++ b/pypy/module/cpyext/include/pyconfig.h @@ -25,6 +25,10 @@ #define Py_UNICODE_SIZE 2 #endif +#ifndef _WIN32 +#define VA_LIST_IS_ARRAY +#endif + #ifdef __cplusplus } #endif diff --git a/pypy/module/cpyext/include/pyport.h b/pypy/module/cpyext/include/pyport.h --- a/pypy/module/cpyext/include/pyport.h +++ b/pypy/module/cpyext/include/pyport.h @@ -64,4 +64,14 @@ # error "Python needs a typedef for Py_uintptr_t in pyport.h." #endif /* HAVE_UINTPTR_T */ +#ifdef VA_LIST_IS_ARRAY +#define Py_VA_COPY(x, y) Py_MEMCPY((x), (y), sizeof(va_list)) +#else +#ifdef __va_copy +#define Py_VA_COPY __va_copy +#else +#define Py_VA_COPY(x, y) (x) = (y) +#endif +#endif + #endif /* Py_PYPORT_H */ diff --git a/pypy/module/cpyext/include/stringobject.h b/pypy/module/cpyext/include/stringobject.h --- a/pypy/module/cpyext/include/stringobject.h +++ b/pypy/module/cpyext/include/stringobject.h @@ -18,9 +18,6 @@ Py_ssize_t size; } PyStringObject; -PyObject *PyString_FromFormatV(const char *format, va_list vargs); -PyObject *PyString_FromFormat(const char *format, ...); - #ifdef __cplusplus } #endif diff --git a/pypy/module/cpyext/include/unicodeobject.h b/pypy/module/cpyext/include/unicodeobject.h --- a/pypy/module/cpyext/include/unicodeobject.h +++ b/pypy/module/cpyext/include/unicodeobject.h @@ -26,6 +26,9 @@ } PyUnicodeObject; +PyObject *PyUnicode_FromFormatV(const char *format, va_list vargs); +PyObject *PyUnicode_FromFormat(const char *format, ...); + #ifdef __cplusplus } #endif diff --git a/pypy/module/cpyext/object.py b/pypy/module/cpyext/object.py --- a/pypy/module/cpyext/object.py +++ b/pypy/module/cpyext/object.py @@ -229,6 +229,15 @@ return space.repr(w_obj) @cpython_api([PyObject], PyObject) +def PyObject_ASCII(space, w_obj): + r"""As PyObject_Repr(), compute a string representation of object + o, but escape the non-ASCII characters in the string returned by + PyObject_Repr() with \x, \u or \U escapes. This generates a + string similar to that returned by PyObject_Repr() in Python 2. + Called by the ascii() built-in function.""" + return operation.ascii(space, w_obj) + + at cpython_api([PyObject], PyObject) def PyObject_Unicode(space, w_obj): """Compute a Unicode string representation of object o. Returns the Unicode string representation on success, NULL on failure. 
This is the equivalent of diff --git a/pypy/module/cpyext/src/bufferobject.c b/pypy/module/cpyext/src/bufferobject.c --- a/pypy/module/cpyext/src/bufferobject.c +++ b/pypy/module/cpyext/src/bufferobject.c @@ -13,207 +13,207 @@ static int get_buf(PyBufferObject *self, void **ptr, Py_ssize_t *size, - enum buffer_t buffer_type) + enum buffer_t buffer_type) { - if (self->b_base == NULL) { - assert (ptr != NULL); - *ptr = self->b_ptr; - *size = self->b_size; - } - else { - Py_ssize_t count, offset; - readbufferproc proc = 0; - PyBufferProcs *bp = self->b_base->ob_type->tp_as_buffer; - if ((*bp->bf_getsegcount)(self->b_base, NULL) != 1) { - PyErr_SetString(PyExc_TypeError, - "single-segment buffer object expected"); - return 0; - } - if ((buffer_type == READ_BUFFER) || - ((buffer_type == ANY_BUFFER) && self->b_readonly)) - proc = bp->bf_getreadbuffer; - else if ((buffer_type == WRITE_BUFFER) || - (buffer_type == ANY_BUFFER)) - proc = (readbufferproc)bp->bf_getwritebuffer; - else if (buffer_type == CHAR_BUFFER) { + if (self->b_base == NULL) { + assert (ptr != NULL); + *ptr = self->b_ptr; + *size = self->b_size; + } + else { + Py_ssize_t count, offset; + readbufferproc proc = 0; + PyBufferProcs *bp = self->b_base->ob_type->tp_as_buffer; + if ((*bp->bf_getsegcount)(self->b_base, NULL) != 1) { + PyErr_SetString(PyExc_TypeError, + "single-segment buffer object expected"); + return 0; + } + if ((buffer_type == READ_BUFFER) || + ((buffer_type == ANY_BUFFER) && self->b_readonly)) + proc = bp->bf_getreadbuffer; + else if ((buffer_type == WRITE_BUFFER) || + (buffer_type == ANY_BUFFER)) + proc = (readbufferproc)bp->bf_getwritebuffer; + else if (buffer_type == CHAR_BUFFER) { if (!PyType_HasFeature(self->ob_type, - Py_TPFLAGS_HAVE_GETCHARBUFFER)) { + Py_TPFLAGS_HAVE_GETCHARBUFFER)) { PyErr_SetString(PyExc_TypeError, "Py_TPFLAGS_HAVE_GETCHARBUFFER needed"); return 0; - } - proc = (readbufferproc)bp->bf_getcharbuffer; - } - if (!proc) { - char *buffer_type_name; - switch (buffer_type) { - case READ_BUFFER: - buffer_type_name = "read"; - break; - case WRITE_BUFFER: - buffer_type_name = "write"; - break; - case CHAR_BUFFER: - buffer_type_name = "char"; - break; - default: - buffer_type_name = "no"; - break; - } - PyErr_Format(PyExc_TypeError, - "%s buffer type not available", - buffer_type_name); - return 0; - } - if ((count = (*proc)(self->b_base, 0, ptr)) < 0) - return 0; - /* apply constraints to the start/end */ - if (self->b_offset > count) - offset = count; - else - offset = self->b_offset; - *(char **)ptr = *(char **)ptr + offset; - if (self->b_size == Py_END_OF_BUFFER) - *size = count; - else - *size = self->b_size; - if (offset + *size > count) - *size = count - offset; - } - return 1; + } + proc = (readbufferproc)bp->bf_getcharbuffer; + } + if (!proc) { + char *buffer_type_name; + switch (buffer_type) { + case READ_BUFFER: + buffer_type_name = "read"; + break; + case WRITE_BUFFER: + buffer_type_name = "write"; + break; + case CHAR_BUFFER: + buffer_type_name = "char"; + break; + default: + buffer_type_name = "no"; + break; + } + PyErr_Format(PyExc_TypeError, + "%s buffer type not available", + buffer_type_name); + return 0; + } + if ((count = (*proc)(self->b_base, 0, ptr)) < 0) + return 0; + /* apply constraints to the start/end */ + if (self->b_offset > count) + offset = count; + else + offset = self->b_offset; + *(char **)ptr = *(char **)ptr + offset; + if (self->b_size == Py_END_OF_BUFFER) + *size = count; + else + *size = self->b_size; + if (offset + *size > count) + *size = count - offset; + } + 
return 1; } static PyObject * buffer_from_memory(PyObject *base, Py_ssize_t size, Py_ssize_t offset, void *ptr, - int readonly) + int readonly) { - PyBufferObject * b; + PyBufferObject * b; - if (size < 0 && size != Py_END_OF_BUFFER) { - PyErr_SetString(PyExc_ValueError, - "size must be zero or positive"); - return NULL; - } - if (offset < 0) { - PyErr_SetString(PyExc_ValueError, - "offset must be zero or positive"); - return NULL; - } + if (size < 0 && size != Py_END_OF_BUFFER) { + PyErr_SetString(PyExc_ValueError, + "size must be zero or positive"); + return NULL; + } + if (offset < 0) { + PyErr_SetString(PyExc_ValueError, + "offset must be zero or positive"); + return NULL; + } - b = PyObject_NEW(PyBufferObject, &PyBuffer_Type); - if ( b == NULL ) - return NULL; + b = PyObject_NEW(PyBufferObject, &PyBuffer_Type); + if ( b == NULL ) + return NULL; - Py_XINCREF(base); - b->b_base = base; - b->b_ptr = ptr; - b->b_size = size; - b->b_offset = offset; - b->b_readonly = readonly; - b->b_hash = -1; + Py_XINCREF(base); + b->b_base = base; + b->b_ptr = ptr; + b->b_size = size; + b->b_offset = offset; + b->b_readonly = readonly; + b->b_hash = -1; - return (PyObject *) b; + return (PyObject *) b; } static PyObject * buffer_from_object(PyObject *base, Py_ssize_t size, Py_ssize_t offset, int readonly) { - if (offset < 0) { - PyErr_SetString(PyExc_ValueError, - "offset must be zero or positive"); - return NULL; - } - if ( PyBuffer_Check(base) && (((PyBufferObject *)base)->b_base) ) { - /* another buffer, refer to the base object */ - PyBufferObject *b = (PyBufferObject *)base; - if (b->b_size != Py_END_OF_BUFFER) { - Py_ssize_t base_size = b->b_size - offset; - if (base_size < 0) - base_size = 0; - if (size == Py_END_OF_BUFFER || size > base_size) - size = base_size; - } - offset += b->b_offset; - base = b->b_base; - } - return buffer_from_memory(base, size, offset, NULL, readonly); + if (offset < 0) { + PyErr_SetString(PyExc_ValueError, + "offset must be zero or positive"); + return NULL; + } + if ( PyBuffer_Check(base) && (((PyBufferObject *)base)->b_base) ) { + /* another buffer, refer to the base object */ + PyBufferObject *b = (PyBufferObject *)base; + if (b->b_size != Py_END_OF_BUFFER) { + Py_ssize_t base_size = b->b_size - offset; + if (base_size < 0) + base_size = 0; + if (size == Py_END_OF_BUFFER || size > base_size) + size = base_size; + } + offset += b->b_offset; + base = b->b_base; + } + return buffer_from_memory(base, size, offset, NULL, readonly); } PyObject * PyBuffer_FromObject(PyObject *base, Py_ssize_t offset, Py_ssize_t size) { - PyBufferProcs *pb = base->ob_type->tp_as_buffer; + PyBufferProcs *pb = base->ob_type->tp_as_buffer; - if ( pb == NULL || - pb->bf_getreadbuffer == NULL || - pb->bf_getsegcount == NULL ) - { - PyErr_SetString(PyExc_TypeError, "buffer object expected"); - return NULL; - } + if ( pb == NULL || + pb->bf_getreadbuffer == NULL || + pb->bf_getsegcount == NULL ) + { + PyErr_SetString(PyExc_TypeError, "buffer object expected"); + return NULL; + } - return buffer_from_object(base, size, offset, 1); + return buffer_from_object(base, size, offset, 1); } PyObject * PyBuffer_FromReadWriteObject(PyObject *base, Py_ssize_t offset, Py_ssize_t size) { - PyBufferProcs *pb = base->ob_type->tp_as_buffer; + PyBufferProcs *pb = base->ob_type->tp_as_buffer; - if ( pb == NULL || - pb->bf_getwritebuffer == NULL || - pb->bf_getsegcount == NULL ) - { - PyErr_SetString(PyExc_TypeError, "buffer object expected"); - return NULL; - } + if ( pb == NULL || + pb->bf_getwritebuffer == NULL 
|| + pb->bf_getsegcount == NULL ) + { + PyErr_SetString(PyExc_TypeError, "buffer object expected"); + return NULL; + } - return buffer_from_object(base, size, offset, 0); + return buffer_from_object(base, size, offset, 0); } PyObject * PyBuffer_FromMemory(void *ptr, Py_ssize_t size) { - return buffer_from_memory(NULL, size, 0, ptr, 1); + return buffer_from_memory(NULL, size, 0, ptr, 1); } PyObject * PyBuffer_FromReadWriteMemory(void *ptr, Py_ssize_t size) { - return buffer_from_memory(NULL, size, 0, ptr, 0); + return buffer_from_memory(NULL, size, 0, ptr, 0); } PyObject * PyBuffer_New(Py_ssize_t size) { - PyObject *o; - PyBufferObject * b; + PyObject *o; + PyBufferObject * b; - if (size < 0) { - PyErr_SetString(PyExc_ValueError, - "size must be zero or positive"); - return NULL; - } - if (sizeof(*b) > PY_SSIZE_T_MAX - size) { - /* unlikely */ - return PyErr_NoMemory(); - } - /* Inline PyObject_New */ - o = (PyObject *)PyObject_MALLOC(sizeof(*b) + size); - if ( o == NULL ) - return PyErr_NoMemory(); - b = (PyBufferObject *) PyObject_INIT(o, &PyBuffer_Type); + if (size < 0) { + PyErr_SetString(PyExc_ValueError, + "size must be zero or positive"); + return NULL; + } + if (sizeof(*b) > PY_SSIZE_T_MAX - size) { + /* unlikely */ + return PyErr_NoMemory(); + } + /* Inline PyObject_New */ + o = (PyObject *)PyObject_MALLOC(sizeof(*b) + size); + if ( o == NULL ) + return PyErr_NoMemory(); + b = (PyBufferObject *) PyObject_INIT(o, &PyBuffer_Type); - b->b_base = NULL; - b->b_ptr = (void *)(b + 1); - b->b_size = size; - b->b_offset = 0; - b->b_readonly = 0; - b->b_hash = -1; + b->b_base = NULL; + b->b_ptr = (void *)(b + 1); + b->b_size = size; + b->b_offset = 0; + b->b_readonly = 0; + b->b_hash = -1; - return o; + return o; } /* Methods */ @@ -221,19 +221,19 @@ static PyObject * buffer_new(PyTypeObject *type, PyObject *args, PyObject *kw) { - PyObject *ob; - Py_ssize_t offset = 0; - Py_ssize_t size = Py_END_OF_BUFFER; + PyObject *ob; + Py_ssize_t offset = 0; + Py_ssize_t size = Py_END_OF_BUFFER; - /*if (PyErr_WarnPy3k("buffer() not supported in 3.x", 1) < 0) - return NULL;*/ - - if (!_PyArg_NoKeywords("buffer()", kw)) - return NULL; + /*if (PyErr_WarnPy3k("buffer() not supported in 3.x", 1) < 0) + return NULL;*/ + + if (!_PyArg_NoKeywords("buffer()", kw)) + return NULL; - if (!PyArg_ParseTuple(args, "O|nn:buffer", &ob, &offset, &size)) - return NULL; - return PyBuffer_FromObject(ob, offset, size); + if (!PyArg_ParseTuple(args, "O|nn:buffer", &ob, &offset, &size)) + return NULL; + return PyBuffer_FromObject(ob, offset, size); } PyDoc_STRVAR(buffer_doc, @@ -248,99 +248,100 @@ static void buffer_dealloc(PyBufferObject *self) { - Py_XDECREF(self->b_base); - PyObject_DEL(self); + Py_XDECREF(self->b_base); + PyObject_DEL(self); } static int buffer_compare(PyBufferObject *self, PyBufferObject *other) { - void *p1, *p2; - Py_ssize_t len_self, len_other, min_len; - int cmp; + void *p1, *p2; + Py_ssize_t len_self, len_other, min_len; + int cmp; - if (!get_buf(self, &p1, &len_self, ANY_BUFFER)) - return -1; - if (!get_buf(other, &p2, &len_other, ANY_BUFFER)) - return -1; - min_len = (len_self < len_other) ? len_self : len_other; - if (min_len > 0) { - cmp = memcmp(p1, p2, min_len); - if (cmp != 0) - return cmp < 0 ? -1 : 1; - } - return (len_self < len_other) ? -1 : (len_self > len_other) ? 1 : 0; + if (!get_buf(self, &p1, &len_self, ANY_BUFFER)) + return -1; + if (!get_buf(other, &p2, &len_other, ANY_BUFFER)) + return -1; + min_len = (len_self < len_other) ? 
len_self : len_other; + if (min_len > 0) { + cmp = memcmp(p1, p2, min_len); + if (cmp != 0) + return cmp < 0 ? -1 : 1; + } + return (len_self < len_other) ? -1 : (len_self > len_other) ? 1 : 0; } static PyObject * buffer_repr(PyBufferObject *self) { - const char *status = self->b_readonly ? "read-only" : "read-write"; + const char *status = self->b_readonly ? "read-only" : "read-write"; if ( self->b_base == NULL ) - return PyString_FromFormat("<%s buffer ptr %p, size %zd at %p>", - status, - self->b_ptr, - self->b_size, - self); - else - return PyString_FromFormat( - "<%s buffer for %p, size %zd, offset %zd at %p>", - status, - self->b_base, - self->b_size, - self->b_offset, - self); + return PyUnicode_FromFormat( + "<%s buffer ptr %p, size %zd at %p>", + status, + self->b_ptr, + self->b_size, + self); + else + return PyUnicode_FromFormat( + "<%s buffer for %p, size %zd, offset %zd at %p>", + status, + self->b_base, + self->b_size, + self->b_offset, + self); } static long buffer_hash(PyBufferObject *self) { - void *ptr; - Py_ssize_t size; - register Py_ssize_t len; - register unsigned char *p; - register long x; + void *ptr; + Py_ssize_t size; + register Py_ssize_t len; + register unsigned char *p; + register long x; - if ( self->b_hash != -1 ) - return self->b_hash; + if ( self->b_hash != -1 ) + return self->b_hash; - /* XXX potential bugs here, a readonly buffer does not imply that the - * underlying memory is immutable. b_readonly is a necessary but not - * sufficient condition for a buffer to be hashable. Perhaps it would - * be better to only allow hashing if the underlying object is known to - * be immutable (e.g. PyString_Check() is true). Another idea would - * be to call tp_hash on the underlying object and see if it raises - * an error. */ - if ( !self->b_readonly ) - { - PyErr_SetString(PyExc_TypeError, - "writable buffers are not hashable"); - return -1; - } + /* XXX potential bugs here, a readonly buffer does not imply that the + * underlying memory is immutable. b_readonly is a necessary but not + * sufficient condition for a buffer to be hashable. Perhaps it would + * be better to only allow hashing if the underlying object is known to + * be immutable (e.g. PyString_Check() is true). Another idea would + * be to call tp_hash on the underlying object and see if it raises + * an error. 
*/ + if ( !self->b_readonly ) + { + PyErr_SetString(PyExc_TypeError, + "writable buffers are not hashable"); + return -1; + } - if (!get_buf(self, &ptr, &size, ANY_BUFFER)) - return -1; - p = (unsigned char *) ptr; - len = size; - x = *p << 7; - while (--len >= 0) - x = (1000003*x) ^ *p++; - x ^= size; - if (x == -1) - x = -2; - self->b_hash = x; - return x; + if (!get_buf(self, &ptr, &size, ANY_BUFFER)) + return -1; + p = (unsigned char *) ptr; + len = size; + x = *p << 7; + while (--len >= 0) + x = (1000003*x) ^ *p++; + x ^= size; + if (x == -1) + x = -2; + self->b_hash = x; + return x; } static PyObject * buffer_str(PyBufferObject *self) { - void *ptr; - Py_ssize_t size; - if (!get_buf(self, &ptr, &size, ANY_BUFFER)) - return NULL; - return PyString_FromStringAndSize((const char *)ptr, size); + void *ptr; + Py_ssize_t size; + if (!get_buf(self, &ptr, &size, ANY_BUFFER)) + return NULL; + return PyString_FromStringAndSize((const char *)ptr, size); } /* Sequence methods */ @@ -348,374 +349,374 @@ static Py_ssize_t buffer_length(PyBufferObject *self) { - void *ptr; - Py_ssize_t size; - if (!get_buf(self, &ptr, &size, ANY_BUFFER)) - return -1; - return size; + void *ptr; + Py_ssize_t size; + if (!get_buf(self, &ptr, &size, ANY_BUFFER)) + return -1; + return size; } static PyObject * buffer_concat(PyBufferObject *self, PyObject *other) { - PyBufferProcs *pb = other->ob_type->tp_as_buffer; - void *ptr1, *ptr2; - char *p; - PyObject *ob; - Py_ssize_t size, count; + PyBufferProcs *pb = other->ob_type->tp_as_buffer; + void *ptr1, *ptr2; + char *p; + PyObject *ob; + Py_ssize_t size, count; - if ( pb == NULL || - pb->bf_getreadbuffer == NULL || - pb->bf_getsegcount == NULL ) - { - PyErr_BadArgument(); - return NULL; - } - if ( (*pb->bf_getsegcount)(other, NULL) != 1 ) - { - /* ### use a different exception type/message? */ - PyErr_SetString(PyExc_TypeError, - "single-segment buffer object expected"); - return NULL; - } + if ( pb == NULL || + pb->bf_getreadbuffer == NULL || + pb->bf_getsegcount == NULL ) + { + PyErr_BadArgument(); + return NULL; + } + if ( (*pb->bf_getsegcount)(other, NULL) != 1 ) + { + /* ### use a different exception type/message? 
*/ + PyErr_SetString(PyExc_TypeError, + "single-segment buffer object expected"); + return NULL; + } - if (!get_buf(self, &ptr1, &size, ANY_BUFFER)) - return NULL; + if (!get_buf(self, &ptr1, &size, ANY_BUFFER)) + return NULL; - /* optimize special case */ - if ( size == 0 ) - { - Py_INCREF(other); - return other; - } + /* optimize special case */ + if ( size == 0 ) + { + Py_INCREF(other); + return other; + } - if ( (count = (*pb->bf_getreadbuffer)(other, 0, &ptr2)) < 0 ) - return NULL; + if ( (count = (*pb->bf_getreadbuffer)(other, 0, &ptr2)) < 0 ) + return NULL; - assert(count <= PY_SIZE_MAX - size); + assert(count <= PY_SIZE_MAX - size); - ob = PyString_FromStringAndSize(NULL, size + count); - if ( ob == NULL ) - return NULL; - p = PyString_AS_STRING(ob); - memcpy(p, ptr1, size); - memcpy(p + size, ptr2, count); + ob = PyString_FromStringAndSize(NULL, size + count); + if ( ob == NULL ) + return NULL; + p = PyString_AS_STRING(ob); + memcpy(p, ptr1, size); + memcpy(p + size, ptr2, count); - /* there is an extra byte in the string object, so this is safe */ - p[size + count] = '\0'; + /* there is an extra byte in the string object, so this is safe */ + p[size + count] = '\0'; - return ob; + return ob; } static PyObject * buffer_repeat(PyBufferObject *self, Py_ssize_t count) { - PyObject *ob; - register char *p; - void *ptr; - Py_ssize_t size; + PyObject *ob; + register char *p; + void *ptr; + Py_ssize_t size; - if ( count < 0 ) - count = 0; - if (!get_buf(self, &ptr, &size, ANY_BUFFER)) - return NULL; - if (count > PY_SSIZE_T_MAX / size) { - PyErr_SetString(PyExc_MemoryError, "result too large"); - return NULL; - } - ob = PyString_FromStringAndSize(NULL, size * count); - if ( ob == NULL ) - return NULL; + if ( count < 0 ) + count = 0; + if (!get_buf(self, &ptr, &size, ANY_BUFFER)) + return NULL; + if (count > PY_SSIZE_T_MAX / size) { + PyErr_SetString(PyExc_MemoryError, "result too large"); + return NULL; + } + ob = PyString_FromStringAndSize(NULL, size * count); + if ( ob == NULL ) + return NULL; - p = PyString_AS_STRING(ob); - while ( count-- ) - { - memcpy(p, ptr, size); - p += size; - } + p = PyString_AS_STRING(ob); + while ( count-- ) + { + memcpy(p, ptr, size); + p += size; + } - /* there is an extra byte in the string object, so this is safe */ - *p = '\0'; + /* there is an extra byte in the string object, so this is safe */ + *p = '\0'; - return ob; + return ob; } static PyObject * buffer_item(PyBufferObject *self, Py_ssize_t idx) { - void *ptr; - Py_ssize_t size; - if (!get_buf(self, &ptr, &size, ANY_BUFFER)) - return NULL; - if ( idx < 0 || idx >= size ) { - PyErr_SetString(PyExc_IndexError, "buffer index out of range"); - return NULL; - } - return PyString_FromStringAndSize((char *)ptr + idx, 1); + void *ptr; + Py_ssize_t size; + if (!get_buf(self, &ptr, &size, ANY_BUFFER)) + return NULL; + if ( idx < 0 || idx >= size ) { + PyErr_SetString(PyExc_IndexError, "buffer index out of range"); + return NULL; + } + return PyString_FromStringAndSize((char *)ptr + idx, 1); } static PyObject * buffer_slice(PyBufferObject *self, Py_ssize_t left, Py_ssize_t right) { - void *ptr; - Py_ssize_t size; - if (!get_buf(self, &ptr, &size, ANY_BUFFER)) - return NULL; - if ( left < 0 ) - left = 0; - if ( right < 0 ) - right = 0; - if ( right > size ) - right = size; - if ( right < left ) - right = left; - return PyString_FromStringAndSize((char *)ptr + left, - right - left); + void *ptr; + Py_ssize_t size; + if (!get_buf(self, &ptr, &size, ANY_BUFFER)) + return NULL; + if ( left < 0 ) + left = 0; + if 
( right < 0 ) + right = 0; + if ( right > size ) + right = size; + if ( right < left ) + right = left; + return PyString_FromStringAndSize((char *)ptr + left, + right - left); } static PyObject * buffer_subscript(PyBufferObject *self, PyObject *item) { - void *p; - Py_ssize_t size; - - if (!get_buf(self, &p, &size, ANY_BUFFER)) - return NULL; + void *p; + Py_ssize_t size; + + if (!get_buf(self, &p, &size, ANY_BUFFER)) + return NULL; if (PyIndex_Check(item)) { - Py_ssize_t i = PyNumber_AsSsize_t(item, PyExc_IndexError); - if (i == -1 && PyErr_Occurred()) - return NULL; - if (i < 0) - i += size; - return buffer_item(self, i); - } - else if (PySlice_Check(item)) { - Py_ssize_t start, stop, step, slicelength, cur, i; + Py_ssize_t i = PyNumber_AsSsize_t(item, PyExc_IndexError); + if (i == -1 && PyErr_Occurred()) + return NULL; + if (i < 0) + i += size; + return buffer_item(self, i); + } + else if (PySlice_Check(item)) { + Py_ssize_t start, stop, step, slicelength, cur, i; - if (PySlice_GetIndicesEx((PySliceObject*)item, size, - &start, &stop, &step, &slicelength) < 0) { - return NULL; - } + if (PySlice_GetIndicesEx((PySliceObject*)item, size, + &start, &stop, &step, &slicelength) < 0) { + return NULL; + } - if (slicelength <= 0) - return PyString_FromStringAndSize("", 0); - else if (step == 1) - return PyString_FromStringAndSize((char *)p + start, - stop - start); - else { - PyObject *result; - char *source_buf = (char *)p; - char *result_buf = (char *)PyMem_Malloc(slicelength); + if (slicelength <= 0) + return PyString_FromStringAndSize("", 0); + else if (step == 1) + return PyString_FromStringAndSize((char *)p + start, + stop - start); + else { + PyObject *result; + char *source_buf = (char *)p; + char *result_buf = (char *)PyMem_Malloc(slicelength); - if (result_buf == NULL) - return PyErr_NoMemory(); + if (result_buf == NULL) + return PyErr_NoMemory(); - for (cur = start, i = 0; i < slicelength; - cur += step, i++) { - result_buf[i] = source_buf[cur]; - } + for (cur = start, i = 0; i < slicelength; + cur += step, i++) { + result_buf[i] = source_buf[cur]; + } - result = PyString_FromStringAndSize(result_buf, - slicelength); - PyMem_Free(result_buf); - return result; - } - } - else { - PyErr_SetString(PyExc_TypeError, - "sequence index must be integer"); - return NULL; - } + result = PyString_FromStringAndSize(result_buf, + slicelength); + PyMem_Free(result_buf); + return result; + } + } + else { + PyErr_SetString(PyExc_TypeError, + "sequence index must be integer"); + return NULL; + } } static int buffer_ass_item(PyBufferObject *self, Py_ssize_t idx, PyObject *other) { - PyBufferProcs *pb; - void *ptr1, *ptr2; - Py_ssize_t size; - Py_ssize_t count; + PyBufferProcs *pb; + void *ptr1, *ptr2; + Py_ssize_t size; + Py_ssize_t count; - if ( self->b_readonly ) { - PyErr_SetString(PyExc_TypeError, - "buffer is read-only"); - return -1; - } + if ( self->b_readonly ) { + PyErr_SetString(PyExc_TypeError, + "buffer is read-only"); + return -1; + } - if (!get_buf(self, &ptr1, &size, ANY_BUFFER)) - return -1; + if (!get_buf(self, &ptr1, &size, ANY_BUFFER)) + return -1; - if (idx < 0 || idx >= size) { - PyErr_SetString(PyExc_IndexError, - "buffer assignment index out of range"); - return -1; - } + if (idx < 0 || idx >= size) { + PyErr_SetString(PyExc_IndexError, + "buffer assignment index out of range"); + return -1; + } - pb = other ? 
other->ob_type->tp_as_buffer : NULL; - if ( pb == NULL || - pb->bf_getreadbuffer == NULL || - pb->bf_getsegcount == NULL ) - { - PyErr_BadArgument(); - return -1; - } - if ( (*pb->bf_getsegcount)(other, NULL) != 1 ) - { - /* ### use a different exception type/message? */ - PyErr_SetString(PyExc_TypeError, - "single-segment buffer object expected"); - return -1; - } + pb = other ? other->ob_type->tp_as_buffer : NULL; + if ( pb == NULL || + pb->bf_getreadbuffer == NULL || + pb->bf_getsegcount == NULL ) + { + PyErr_BadArgument(); + return -1; + } + if ( (*pb->bf_getsegcount)(other, NULL) != 1 ) + { + /* ### use a different exception type/message? */ + PyErr_SetString(PyExc_TypeError, + "single-segment buffer object expected"); + return -1; + } - if ( (count = (*pb->bf_getreadbuffer)(other, 0, &ptr2)) < 0 ) - return -1; - if ( count != 1 ) { - PyErr_SetString(PyExc_TypeError, - "right operand must be a single byte"); - return -1; - } + if ( (count = (*pb->bf_getreadbuffer)(other, 0, &ptr2)) < 0 ) + return -1; + if ( count != 1 ) { + PyErr_SetString(PyExc_TypeError, + "right operand must be a single byte"); + return -1; + } - ((char *)ptr1)[idx] = *(char *)ptr2; - return 0; + ((char *)ptr1)[idx] = *(char *)ptr2; + return 0; } static int buffer_ass_slice(PyBufferObject *self, Py_ssize_t left, Py_ssize_t right, PyObject *other) { - PyBufferProcs *pb; - void *ptr1, *ptr2; - Py_ssize_t size; - Py_ssize_t slice_len; - Py_ssize_t count; + PyBufferProcs *pb; + void *ptr1, *ptr2; + Py_ssize_t size; + Py_ssize_t slice_len; + Py_ssize_t count; - if ( self->b_readonly ) { - PyErr_SetString(PyExc_TypeError, - "buffer is read-only"); - return -1; - } + if ( self->b_readonly ) { + PyErr_SetString(PyExc_TypeError, + "buffer is read-only"); + return -1; + } - pb = other ? other->ob_type->tp_as_buffer : NULL; - if ( pb == NULL || - pb->bf_getreadbuffer == NULL || - pb->bf_getsegcount == NULL ) - { - PyErr_BadArgument(); - return -1; - } - if ( (*pb->bf_getsegcount)(other, NULL) != 1 ) - { - /* ### use a different exception type/message? */ - PyErr_SetString(PyExc_TypeError, - "single-segment buffer object expected"); - return -1; - } - if (!get_buf(self, &ptr1, &size, ANY_BUFFER)) - return -1; - if ( (count = (*pb->bf_getreadbuffer)(other, 0, &ptr2)) < 0 ) - return -1; + pb = other ? other->ob_type->tp_as_buffer : NULL; + if ( pb == NULL || + pb->bf_getreadbuffer == NULL || + pb->bf_getsegcount == NULL ) + { + PyErr_BadArgument(); + return -1; + } + if ( (*pb->bf_getsegcount)(other, NULL) != 1 ) + { + /* ### use a different exception type/message? 
*/ + PyErr_SetString(PyExc_TypeError, + "single-segment buffer object expected"); + return -1; + } + if (!get_buf(self, &ptr1, &size, ANY_BUFFER)) + return -1; + if ( (count = (*pb->bf_getreadbuffer)(other, 0, &ptr2)) < 0 ) + return -1; - if ( left < 0 ) - left = 0; - else if ( left > size ) - left = size; - if ( right < left ) - right = left; - else if ( right > size ) - right = size; - slice_len = right - left; + if ( left < 0 ) + left = 0; + else if ( left > size ) + left = size; + if ( right < left ) + right = left; + else if ( right > size ) + right = size; + slice_len = right - left; - if ( count != slice_len ) { - PyErr_SetString( - PyExc_TypeError, - "right operand length must match slice length"); - return -1; - } + if ( count != slice_len ) { + PyErr_SetString( + PyExc_TypeError, + "right operand length must match slice length"); + return -1; + } - if ( slice_len ) - memcpy((char *)ptr1 + left, ptr2, slice_len); + if ( slice_len ) + memcpy((char *)ptr1 + left, ptr2, slice_len); - return 0; + return 0; } static int buffer_ass_subscript(PyBufferObject *self, PyObject *item, PyObject *value) { - PyBufferProcs *pb; - void *ptr1, *ptr2; - Py_ssize_t selfsize; - Py_ssize_t othersize; + PyBufferProcs *pb; + void *ptr1, *ptr2; + Py_ssize_t selfsize; + Py_ssize_t othersize; - if ( self->b_readonly ) { - PyErr_SetString(PyExc_TypeError, - "buffer is read-only"); - return -1; - } + if ( self->b_readonly ) { + PyErr_SetString(PyExc_TypeError, + "buffer is read-only"); + return -1; + } - pb = value ? value->ob_type->tp_as_buffer : NULL; - if ( pb == NULL || - pb->bf_getreadbuffer == NULL || - pb->bf_getsegcount == NULL ) - { - PyErr_BadArgument(); - return -1; - } - if ( (*pb->bf_getsegcount)(value, NULL) != 1 ) - { - /* ### use a different exception type/message? */ - PyErr_SetString(PyExc_TypeError, - "single-segment buffer object expected"); - return -1; - } - if (!get_buf(self, &ptr1, &selfsize, ANY_BUFFER)) - return -1; + pb = value ? value->ob_type->tp_as_buffer : NULL; + if ( pb == NULL || + pb->bf_getreadbuffer == NULL || + pb->bf_getsegcount == NULL ) + { + PyErr_BadArgument(); + return -1; + } + if ( (*pb->bf_getsegcount)(value, NULL) != 1 ) + { + /* ### use a different exception type/message? 
*/ + PyErr_SetString(PyExc_TypeError, + "single-segment buffer object expected"); + return -1; + } + if (!get_buf(self, &ptr1, &selfsize, ANY_BUFFER)) + return -1; if (PyIndex_Check(item)) { - Py_ssize_t i = PyNumber_AsSsize_t(item, PyExc_IndexError); - if (i == -1 && PyErr_Occurred()) - return -1; - if (i < 0) - i += selfsize; - return buffer_ass_item(self, i, value); - } - else if (PySlice_Check(item)) { - Py_ssize_t start, stop, step, slicelength; - - if (PySlice_GetIndicesEx((PySliceObject *)item, selfsize, - &start, &stop, &step, &slicelength) < 0) - return -1; + Py_ssize_t i = PyNumber_AsSsize_t(item, PyExc_IndexError); + if (i == -1 && PyErr_Occurred()) + return -1; + if (i < 0) + i += selfsize; + return buffer_ass_item(self, i, value); + } + else if (PySlice_Check(item)) { + Py_ssize_t start, stop, step, slicelength; + + if (PySlice_GetIndicesEx((PySliceObject *)item, selfsize, + &start, &stop, &step, &slicelength) < 0) + return -1; - if ((othersize = (*pb->bf_getreadbuffer)(value, 0, &ptr2)) < 0) - return -1; + if ((othersize = (*pb->bf_getreadbuffer)(value, 0, &ptr2)) < 0) + return -1; - if (othersize != slicelength) { - PyErr_SetString( - PyExc_TypeError, - "right operand length must match slice length"); - return -1; - } + if (othersize != slicelength) { + PyErr_SetString( + PyExc_TypeError, + "right operand length must match slice length"); + return -1; + } - if (slicelength == 0) - return 0; - else if (step == 1) { - memcpy((char *)ptr1 + start, ptr2, slicelength); - return 0; - } - else { - Py_ssize_t cur, i; - - for (cur = start, i = 0; i < slicelength; - cur += step, i++) { - ((char *)ptr1)[cur] = ((char *)ptr2)[i]; - } + if (slicelength == 0) + return 0; + else if (step == 1) { + memcpy((char *)ptr1 + start, ptr2, slicelength); + return 0; + } + else { + Py_ssize_t cur, i; + + for (cur = start, i = 0; i < slicelength; + cur += step, i++) { + ((char *)ptr1)[cur] = ((char *)ptr2)[i]; + } - return 0; - } - } else { - PyErr_SetString(PyExc_TypeError, - "buffer indices must be integers"); - return -1; - } + return 0; + } + } else { + PyErr_SetString(PyExc_TypeError, + "buffer indices must be integers"); + return -1; + } } /* Buffer methods */ @@ -723,64 +724,64 @@ static Py_ssize_t buffer_getreadbuf(PyBufferObject *self, Py_ssize_t idx, void **pp) { - Py_ssize_t size; - if ( idx != 0 ) { - PyErr_SetString(PyExc_SystemError, - "accessing non-existent buffer segment"); - return -1; - } - if (!get_buf(self, pp, &size, READ_BUFFER)) - return -1; - return size; + Py_ssize_t size; + if ( idx != 0 ) { + PyErr_SetString(PyExc_SystemError, + "accessing non-existent buffer segment"); + return -1; + } + if (!get_buf(self, pp, &size, READ_BUFFER)) + return -1; + return size; } static Py_ssize_t buffer_getwritebuf(PyBufferObject *self, Py_ssize_t idx, void **pp) { - Py_ssize_t size; + Py_ssize_t size; - if ( self->b_readonly ) - { - PyErr_SetString(PyExc_TypeError, "buffer is read-only"); - return -1; - } + if ( self->b_readonly ) + { + PyErr_SetString(PyExc_TypeError, "buffer is read-only"); + return -1; + } - if ( idx != 0 ) { - PyErr_SetString(PyExc_SystemError, - "accessing non-existent buffer segment"); - return -1; - } - if (!get_buf(self, pp, &size, WRITE_BUFFER)) - return -1; - return size; + if ( idx != 0 ) { + PyErr_SetString(PyExc_SystemError, + "accessing non-existent buffer segment"); + return -1; + } + if (!get_buf(self, pp, &size, WRITE_BUFFER)) + return -1; + return size; } static Py_ssize_t buffer_getsegcount(PyBufferObject *self, Py_ssize_t *lenp) { - void *ptr; - 
Py_ssize_t size; - if (!get_buf(self, &ptr, &size, ANY_BUFFER)) - return -1; - if (lenp) - *lenp = size; - return 1; + void *ptr; + Py_ssize_t size; + if (!get_buf(self, &ptr, &size, ANY_BUFFER)) + return -1; + if (lenp) + *lenp = size; + return 1; } static Py_ssize_t buffer_getcharbuf(PyBufferObject *self, Py_ssize_t idx, const char **pp) { - void *ptr; - Py_ssize_t size; - if ( idx != 0 ) { - PyErr_SetString(PyExc_SystemError, - "accessing non-existent buffer segment"); - return -1; - } - if (!get_buf(self, &ptr, &size, CHAR_BUFFER)) - return -1; - *pp = (const char *)ptr; - return size; + void *ptr; + Py_ssize_t size; + if ( idx != 0 ) { + PyErr_SetString(PyExc_SystemError, + "accessing non-existent buffer segment"); + return -1; + } + if (!get_buf(self, &ptr, &size, CHAR_BUFFER)) + return -1; + *pp = (const char *)ptr; + return size; } void init_bufferobject(void) @@ -789,67 +790,67 @@ } static PySequenceMethods buffer_as_sequence = { - (lenfunc)buffer_length, /*sq_length*/ - (binaryfunc)buffer_concat, /*sq_concat*/ - (ssizeargfunc)buffer_repeat, /*sq_repeat*/ - (ssizeargfunc)buffer_item, /*sq_item*/ - (ssizessizeargfunc)buffer_slice, /*sq_slice*/ - (ssizeobjargproc)buffer_ass_item, /*sq_ass_item*/ - (ssizessizeobjargproc)buffer_ass_slice, /*sq_ass_slice*/ + (lenfunc)buffer_length, /*sq_length*/ + (binaryfunc)buffer_concat, /*sq_concat*/ + (ssizeargfunc)buffer_repeat, /*sq_repeat*/ + (ssizeargfunc)buffer_item, /*sq_item*/ + (ssizessizeargfunc)buffer_slice, /*sq_slice*/ + (ssizeobjargproc)buffer_ass_item, /*sq_ass_item*/ + (ssizessizeobjargproc)buffer_ass_slice, /*sq_ass_slice*/ }; static PyMappingMethods buffer_as_mapping = { - (lenfunc)buffer_length, - (binaryfunc)buffer_subscript, - (objobjargproc)buffer_ass_subscript, + (lenfunc)buffer_length, + (binaryfunc)buffer_subscript, + (objobjargproc)buffer_ass_subscript, }; static PyBufferProcs buffer_as_buffer = { - (readbufferproc)buffer_getreadbuf, - (writebufferproc)buffer_getwritebuf, - (segcountproc)buffer_getsegcount, - (charbufferproc)buffer_getcharbuf, + (readbufferproc)buffer_getreadbuf, + (writebufferproc)buffer_getwritebuf, + (segcountproc)buffer_getsegcount, + (charbufferproc)buffer_getcharbuf, }; PyTypeObject PyBuffer_Type = { PyObject_HEAD_INIT(NULL) 0, - "buffer", - sizeof(PyBufferObject), - 0, - (destructor)buffer_dealloc, /* tp_dealloc */ - 0, /* tp_print */ - 0, /* tp_getattr */ - 0, /* tp_setattr */ - (cmpfunc)buffer_compare, /* tp_compare */ - (reprfunc)buffer_repr, /* tp_repr */ - 0, /* tp_as_number */ - &buffer_as_sequence, /* tp_as_sequence */ - &buffer_as_mapping, /* tp_as_mapping */ - (hashfunc)buffer_hash, /* tp_hash */ - 0, /* tp_call */ - (reprfunc)buffer_str, /* tp_str */ - PyObject_GenericGetAttr, /* tp_getattro */ - 0, /* tp_setattro */ - &buffer_as_buffer, /* tp_as_buffer */ - Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GETCHARBUFFER, /* tp_flags */ - buffer_doc, /* tp_doc */ - 0, /* tp_traverse */ - 0, /* tp_clear */ - 0, /* tp_richcompare */ - 0, /* tp_weaklistoffset */ - 0, /* tp_iter */ - 0, /* tp_iternext */ - 0, /* tp_methods */ - 0, /* tp_members */ - 0, /* tp_getset */ - 0, /* tp_base */ - 0, /* tp_dict */ - 0, /* tp_descr_get */ - 0, /* tp_descr_set */ - 0, /* tp_dictoffset */ - 0, /* tp_init */ - 0, /* tp_alloc */ - buffer_new, /* tp_new */ + "buffer", + sizeof(PyBufferObject), + 0, + (destructor)buffer_dealloc, /* tp_dealloc */ + 0, /* tp_print */ + 0, /* tp_getattr */ + 0, /* tp_setattr */ + (cmpfunc)buffer_compare, /* tp_compare */ + (reprfunc)buffer_repr, /* tp_repr */ + 0, /* tp_as_number */ + 
&buffer_as_sequence, /* tp_as_sequence */ + &buffer_as_mapping, /* tp_as_mapping */ + (hashfunc)buffer_hash, /* tp_hash */ + 0, /* tp_call */ + (reprfunc)buffer_str, /* tp_str */ + PyObject_GenericGetAttr, /* tp_getattro */ + 0, /* tp_setattro */ + &buffer_as_buffer, /* tp_as_buffer */ + Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GETCHARBUFFER, /* tp_flags */ + buffer_doc, /* tp_doc */ + 0, /* tp_traverse */ + 0, /* tp_clear */ + 0, /* tp_richcompare */ + 0, /* tp_weaklistoffset */ + 0, /* tp_iter */ + 0, /* tp_iternext */ + 0, /* tp_methods */ + 0, /* tp_members */ + 0, /* tp_getset */ + 0, /* tp_base */ + 0, /* tp_dict */ + 0, /* tp_descr_get */ + 0, /* tp_descr_set */ + 0, /* tp_dictoffset */ + 0, /* tp_init */ + 0, /* tp_alloc */ + buffer_new, /* tp_new */ }; diff --git a/pypy/module/cpyext/src/capsule.c b/pypy/module/cpyext/src/capsule.c --- a/pypy/module/cpyext/src/capsule.c +++ b/pypy/module/cpyext/src/capsule.c @@ -279,7 +279,7 @@ name = "NULL"; } - return PyString_FromFormat("", + return PyUnicode_FromFormat("", quote, name, quote, capsule); } @@ -298,27 +298,27 @@ PyTypeObject PyCapsule_Type = { PyVarObject_HEAD_INIT(&PyType_Type, 0) - "PyCapsule", /*tp_name*/ - sizeof(PyCapsule), /*tp_basicsize*/ - 0, /*tp_itemsize*/ + "PyCapsule", /*tp_name*/ + sizeof(PyCapsule), /*tp_basicsize*/ + 0, /*tp_itemsize*/ /* methods */ capsule_dealloc, /*tp_dealloc*/ - 0, /*tp_print*/ - 0, /*tp_getattr*/ - 0, /*tp_setattr*/ - 0, /*tp_reserved*/ + 0, /*tp_print*/ + 0, /*tp_getattr*/ + 0, /*tp_setattr*/ + 0, /*tp_reserved*/ capsule_repr, /*tp_repr*/ - 0, /*tp_as_number*/ - 0, /*tp_as_sequence*/ - 0, /*tp_as_mapping*/ - 0, /*tp_hash*/ - 0, /*tp_call*/ - 0, /*tp_str*/ - 0, /*tp_getattro*/ - 0, /*tp_setattro*/ - 0, /*tp_as_buffer*/ - 0, /*tp_flags*/ - PyCapsule_Type__doc__ /*tp_doc*/ + 0, /*tp_as_number*/ + 0, /*tp_as_sequence*/ + 0, /*tp_as_mapping*/ + 0, /*tp_hash*/ + 0, /*tp_call*/ + 0, /*tp_str*/ + 0, /*tp_getattro*/ + 0, /*tp_setattro*/ + 0, /*tp_as_buffer*/ + 0, /*tp_flags*/ + PyCapsule_Type__doc__ /*tp_doc*/ }; void init_capsule() diff --git a/pypy/module/cpyext/src/pyerrors.c b/pypy/module/cpyext/src/pyerrors.c --- a/pypy/module/cpyext/src/pyerrors.c +++ b/pypy/module/cpyext/src/pyerrors.c @@ -4,75 +4,79 @@ PyObject * PyErr_Format(PyObject *exception, const char *format, ...) 
{ - va_list vargs; - PyObject* string; + va_list vargs; + PyObject* string; #ifdef HAVE_STDARG_PROTOTYPES - va_start(vargs, format); + va_start(vargs, format); #else - va_start(vargs); + va_start(vargs); #endif - string = PyString_FromFormatV(format, vargs); - PyErr_SetObject(exception, string); - Py_XDECREF(string); - va_end(vargs); - return NULL; + string = PyUnicode_FromFormatV(format, vargs); + PyErr_SetObject(exception, string); + Py_XDECREF(string); + va_end(vargs); + return NULL; } + + PyObject * PyErr_NewException(const char *name, PyObject *base, PyObject *dict) { - char *dot; - PyObject *modulename = NULL; - PyObject *classname = NULL; - PyObject *mydict = NULL; - PyObject *bases = NULL; - PyObject *result = NULL; - dot = strrchr(name, '.'); - if (dot == NULL) { - PyErr_SetString(PyExc_SystemError, - "PyErr_NewException: name must be module.class"); - return NULL; - } - if (base == NULL) - base = PyExc_Exception; - if (dict == NULL) { - dict = mydict = PyDict_New(); - if (dict == NULL) - goto failure; - } - if (PyDict_GetItemString(dict, "__module__") == NULL) { - modulename = PyString_FromStringAndSize(name, - (Py_ssize_t)(dot-name)); - if (modulename == NULL) - goto failure; - if (PyDict_SetItemString(dict, "__module__", modulename) != 0) - goto failure; - } - if (PyTuple_Check(base)) { - bases = base; - /* INCREF as we create a new ref in the else branch */ - Py_INCREF(bases); - } else { - bases = PyTuple_Pack(1, base); - if (bases == NULL) - goto failure; - } - /* Create a real new-style class. */ - result = PyObject_CallFunction((PyObject *)&PyType_Type, "sOO", - dot+1, bases, dict); + const char *dot; + PyObject *modulename = NULL; + PyObject *classname = NULL; + PyObject *mydict = NULL; + PyObject *bases = NULL; + PyObject *result = NULL; + dot = strrchr(name, '.'); + if (dot == NULL) { + PyErr_SetString(PyExc_SystemError, + "PyErr_NewException: name must be module.class"); + return NULL; + } + if (base == NULL) + base = PyExc_Exception; + if (dict == NULL) { + dict = mydict = PyDict_New(); + if (dict == NULL) + goto failure; + } + if (PyDict_GetItemString(dict, "__module__") == NULL) { + modulename = PyUnicode_FromStringAndSize(name, + (Py_ssize_t)(dot-name)); + if (modulename == NULL) + goto failure; + if (PyDict_SetItemString(dict, "__module__", modulename) != 0) + goto failure; + } + if (PyTuple_Check(base)) { + bases = base; + /* INCREF as we create a new ref in the else branch */ + Py_INCREF(bases); + } else { + bases = PyTuple_Pack(1, base); + if (bases == NULL) + goto failure; + } + /* Create a real new-style class. 
*/ + result = PyObject_CallFunction((PyObject *)&PyType_Type, "sOO", + dot+1, bases, dict); failure: - Py_XDECREF(bases); - Py_XDECREF(mydict); - Py_XDECREF(classname); - Py_XDECREF(modulename); - return result; + Py_XDECREF(bases); + Py_XDECREF(mydict); + Py_XDECREF(classname); + Py_XDECREF(modulename); + return result; } + /* Create an exception with docstring */ PyObject * -PyErr_NewExceptionWithDoc(const char *name, const char *doc, PyObject *base, PyObject *dict) +PyErr_NewExceptionWithDoc(const char *name, const char *doc, + PyObject *base, PyObject *dict) { int result; PyObject *ret = NULL; @@ -87,7 +91,7 @@ } if (doc != NULL) { - docobj = PyString_FromString(doc); + docobj = PyUnicode_FromString(doc); if (docobj == NULL) goto failure; result = PyDict_SetItemString(dict, "__doc__", docobj); diff --git a/pypy/module/cpyext/src/stringobject.c b/pypy/module/cpyext/src/unicodeobject.c rename from pypy/module/cpyext/src/stringobject.c rename to pypy/module/cpyext/src/unicodeobject.c --- a/pypy/module/cpyext/src/stringobject.c +++ b/pypy/module/cpyext/src/unicodeobject.c @@ -1,249 +1,522 @@ - #include "Python.h" +#if defined(Py_ISDIGIT) || defined(Py_ISALPHA) +#error remove these definitions +#endif +#define Py_ISDIGIT isdigit +#define Py_ISALPHA isalpha + +#define PyObject_Malloc malloc +#define PyObject_Free free + +static void +makefmt(char *fmt, int longflag, int longlongflag, int size_tflag, + int zeropad, int width, int precision, char c) +{ + *fmt++ = '%'; + if (width) { + if (zeropad) + *fmt++ = '0'; + fmt += sprintf(fmt, "%d", width); + } + if (precision) + fmt += sprintf(fmt, ".%d", precision); + if (longflag) + *fmt++ = 'l'; + else if (longlongflag) { + /* longlongflag should only ever be nonzero on machines with + HAVE_LONG_LONG defined */ +#ifdef HAVE_LONG_LONG + char *f = PY_FORMAT_LONG_LONG; + while (*f) + *fmt++ = *f++; +#else + /* we shouldn't ever get here */ + assert(0); + *fmt++ = 'l'; +#endif + } + else if (size_tflag) { + char *f = PY_FORMAT_SIZE_T; + while (*f) + *fmt++ = *f++; + } + *fmt++ = c; + *fmt = '\0'; +} + +#define appendstring(string) {for (copy = string;*copy;) *s++ = *copy++;} + +/* size of fixed-size buffer for formatting single arguments */ +#define ITEM_BUFFER_LEN 21 +/* maximum number of characters required for output of %ld. 21 characters + allows for 64-bit integers (in decimal) and an optional sign. */ +#define MAX_LONG_CHARS 21 +/* maximum number of characters required for output of %lld. + We need at most ceil(log10(256)*SIZEOF_LONG_LONG) digits, + plus 1 for the sign. 53/22 is an upper bound for log10(256). */ +#define MAX_LONG_LONG_CHARS (2 + (SIZEOF_LONG_LONG*53-1) / 22) + PyObject * -PyString_FromFormatV(const char *format, va_list vargs) +PyUnicode_FromFormatV(const char *format, va_list vargs) { - va_list count; - Py_ssize_t n = 0; - const char* f; - char *s; - PyObject* string; + va_list count; + Py_ssize_t callcount = 0; + PyObject **callresults = NULL; + PyObject **callresult = NULL; + Py_ssize_t n = 0; + int width = 0; + int precision = 0; + int zeropad; + const char* f; + Py_UNICODE *s; + PyObject *string; + /* used by sprintf */ + char buffer[ITEM_BUFFER_LEN+1]; + /* use abuffer instead of buffer, if we need more space + * (which can happen if there's a format specifier with width). 
*/ + char *abuffer = NULL; + char *realbuffer; + Py_ssize_t abuffersize = 0; + char fmt[61]; /* should be enough for %0width.precisionlld */ + const char *copy; -#ifdef VA_LIST_IS_ARRAY - Py_MEMCPY(count, vargs, sizeof(va_list)); + Py_VA_COPY(count, vargs); + /* step 1: count the number of %S/%R/%A/%s format specifications + * (we call PyObject_Str()/PyObject_Repr()/PyObject_ASCII()/ + * PyUnicode_DecodeUTF8() for these objects once during step 3 and put the + * result in an array) */ + for (f = format; *f; f++) { + if (*f == '%') { + if (*(f+1)=='%') + continue; + if (*(f+1)=='S' || *(f+1)=='R' || *(f+1)=='A' || *(f+1) == 'V') + ++callcount; + while (Py_ISDIGIT((unsigned)*f)) + width = (width*10) + *f++ - '0'; + while (*++f && *f != '%' && !Py_ISALPHA((unsigned)*f)) + ; + if (*f == 's') + ++callcount; + } + else if (128 <= (unsigned char)*f) { + PyErr_Format(PyExc_ValueError, + "PyUnicode_FromFormatV() expects an ASCII-encoded format " + "string, got a non-ASCII byte: 0x%02x", + (unsigned char)*f); + return NULL; + } + } + /* step 2: allocate memory for the results of + * PyObject_Str()/PyObject_Repr()/PyUnicode_DecodeUTF8() calls */ + if (callcount) { + callresults = PyObject_Malloc(sizeof(PyObject *)*callcount); + if (!callresults) { + PyErr_NoMemory(); + return NULL; + } + callresult = callresults; + } + /* step 3: figure out how large a buffer we need */ + for (f = format; *f; f++) { + if (*f == '%') { +#ifdef HAVE_LONG_LONG + int longlongflag = 0; +#endif + const char* p = f; + width = 0; + while (Py_ISDIGIT((unsigned)*f)) + width = (width*10) + *f++ - '0'; + while (*++f && *f != '%' && !Py_ISALPHA((unsigned)*f)) + ; + + /* skip the 'l' or 'z' in {%ld, %zd, %lu, %zu} since + * they don't affect the amount of space we reserve. + */ + if (*f == 'l') { + if (f[1] == 'd' || f[1] == 'u') { + ++f; + } +#ifdef HAVE_LONG_LONG + else if (f[1] == 'l' && + (f[2] == 'd' || f[2] == 'u')) { + longlongflag = 1; + f += 2; + } +#endif + } + else if (*f == 'z' && (f[1] == 'd' || f[1] == 'u')) { + ++f; + } + + switch (*f) { + case 'c': + { +#ifndef Py_UNICODE_WIDE + int ordinal = va_arg(count, int); + if (ordinal > 0xffff) + n += 2; + else + n++; #else -#ifdef __va_copy - __va_copy(count, vargs); -#else - count = vargs; + (void)va_arg(count, int); + n++; #endif + break; + } + case '%': + n++; + break; + case 'd': case 'u': case 'i': case 'x': + (void) va_arg(count, int); +#ifdef HAVE_LONG_LONG + if (longlongflag) { + if (width < MAX_LONG_LONG_CHARS) + width = MAX_LONG_LONG_CHARS; + } + else #endif - /* step 1: figure out how large a buffer we need */ - for (f = format; *f; f++) { - if (*f == '%') { + /* MAX_LONG_CHARS is enough to hold a 64-bit integer, + including sign. Decimal takes the most space. This + isn't enough for octal. If a width is specified we + need more (which we allocate later). */ + if (width < MAX_LONG_CHARS) + width = MAX_LONG_CHARS; + n += width; + /* XXX should allow for large precision here too. 
*/ + if (abuffersize < width) + abuffersize = width; + break; + case 's': + { + /* UTF-8 */ + const char *s = va_arg(count, const char*); + PyObject *str = PyUnicode_DecodeUTF8(s, strlen(s), "replace"); + if (!str) + goto fail; + n += PyUnicode_GET_SIZE(str); + /* Remember the str and switch to the next slot */ + *callresult++ = str; + break; + } + case 'U': + { + PyObject *obj = va_arg(count, PyObject *); + assert(obj && PyUnicode_Check(obj)); + n += PyUnicode_GET_SIZE(obj); + break; + } + case 'V': + { + PyObject *obj = va_arg(count, PyObject *); + const char *str = va_arg(count, const char *); + PyObject *str_obj; + assert(obj || str); + assert(!obj || PyUnicode_Check(obj)); + if (obj) { + n += PyUnicode_GET_SIZE(obj); + *callresult++ = NULL; + } + else { + str_obj = PyUnicode_DecodeUTF8(str, strlen(str), "replace"); + if (!str_obj) + goto fail; + n += PyUnicode_GET_SIZE(str_obj); + *callresult++ = str_obj; + } + break; + } + case 'S': + { + PyObject *obj = va_arg(count, PyObject *); + PyObject *str; + assert(obj); + str = PyObject_Str(obj); + if (!str) + goto fail; + n += PyUnicode_GET_SIZE(str); + /* Remember the str and switch to the next slot */ + *callresult++ = str; + break; + } + case 'R': + { + PyObject *obj = va_arg(count, PyObject *); + PyObject *repr; + assert(obj); + repr = PyObject_Repr(obj); + if (!repr) + goto fail; + n += PyUnicode_GET_SIZE(repr); + /* Remember the repr and switch to the next slot */ + *callresult++ = repr; + break; + } + case 'A': + { + PyObject *obj = va_arg(count, PyObject *); + PyObject *ascii; + assert(obj); + ascii = PyObject_ASCII(obj); + if (!ascii) + goto fail; + n += PyUnicode_GET_SIZE(ascii); + /* Remember the repr and switch to the next slot */ + *callresult++ = ascii; + break; + } + case 'p': + (void) va_arg(count, int); + /* maximum 64-bit pointer representation: + * 0xffffffffffffffff + * so 19 characters is enough. + * XXX I count 18 -- what's the extra for? + */ + n += 19; + break; + default: + /* if we stumble upon an unknown + formatting code, copy the rest of + the format string to the output + string. (we cannot just skip the + code, since there's no way to know + what's in the argument list) */ + n += strlen(p); + goto expand; + } + } else + n++; + } + expand: + if (abuffersize > ITEM_BUFFER_LEN) { + /* add 1 for sprintf's trailing null byte */ + abuffer = PyObject_Malloc(abuffersize + 1); + if (!abuffer) { + PyErr_NoMemory(); + goto fail; + } + realbuffer = abuffer; + } + else + realbuffer = buffer; + /* step 4: fill the buffer */ + /* Since we've analyzed how much space we need for the worst case, + we don't have to resize the string. + There can be no errors beyond this point. */ + string = PyUnicode_FromUnicode(NULL, n); + if (!string) + goto fail; + + s = PyUnicode_AS_UNICODE(string); + callresult = callresults; + + for (f = format; *f; f++) { + if (*f == '%') { + const char* p = f++; + int longflag = 0; + int longlongflag = 0; + int size_tflag = 0; + zeropad = (*f == '0'); + /* parse the width.precision part */ + width = 0; + while (Py_ISDIGIT((unsigned)*f)) + width = (width*10) + *f++ - '0'; + precision = 0; + if (*f == '.') { + f++; + while (Py_ISDIGIT((unsigned)*f)) + precision = (precision*10) + *f++ - '0'; + } + /* Handle %ld, %lu, %lld and %llu. 
*/ + if (*f == 'l') { + if (f[1] == 'd' || f[1] == 'u') { + longflag = 1; + ++f; + } #ifdef HAVE_LONG_LONG - int longlongflag = 0; + else if (f[1] == 'l' && + (f[2] == 'd' || f[2] == 'u')) { + longlongflag = 1; + f += 2; + } #endif - const char* p = f; - while (*++f && *f != '%' && !isalpha(Py_CHARMASK(*f))) - ; + } + /* handle the size_t flag. */ + if (*f == 'z' && (f[1] == 'd' || f[1] == 'u')) { + size_tflag = 1; + ++f; + } - /* skip the 'l' or 'z' in {%ld, %zd, %lu, %zu} since - * they don't affect the amount of space we reserve. - */ - if (*f == 'l') { - if (f[1] == 'd' || f[1] == 'u') { - ++f; - } + switch (*f) { + case 'c': + { + int ordinal = va_arg(vargs, int); +#ifndef Py_UNICODE_WIDE + if (ordinal > 0xffff) { + ordinal -= 0x10000; + *s++ = 0xD800 | (ordinal >> 10); + *s++ = 0xDC00 | (ordinal & 0x3FF); + } else +#endif + *s++ = ordinal; + break; + } + case 'd': + makefmt(fmt, longflag, longlongflag, size_tflag, zeropad, + width, precision, 'd'); + if (longflag) + sprintf(realbuffer, fmt, va_arg(vargs, long)); #ifdef HAVE_LONG_LONG - else if (f[1] == 'l' && - (f[2] == 'd' || f[2] == 'u')) { - longlongflag = 1; - f += 2; - } + else if (longlongflag) + sprintf(realbuffer, fmt, va_arg(vargs, PY_LONG_LONG)); #endif - } - else if (*f == 'z' && (f[1] == 'd' || f[1] == 'u')) { - ++f; - } + else if (size_tflag) + sprintf(realbuffer, fmt, va_arg(vargs, Py_ssize_t)); + else + sprintf(realbuffer, fmt, va_arg(vargs, int)); + appendstring(realbuffer); + break; + case 'u': + makefmt(fmt, longflag, longlongflag, size_tflag, zeropad, + width, precision, 'u'); + if (longflag) + sprintf(realbuffer, fmt, va_arg(vargs, unsigned long)); +#ifdef HAVE_LONG_LONG + else if (longlongflag) + sprintf(realbuffer, fmt, va_arg(vargs, + unsigned PY_LONG_LONG)); +#endif + else if (size_tflag) + sprintf(realbuffer, fmt, va_arg(vargs, size_t)); + else + sprintf(realbuffer, fmt, va_arg(vargs, unsigned int)); + appendstring(realbuffer); + break; + case 'i': + makefmt(fmt, 0, 0, 0, zeropad, width, precision, 'i'); + sprintf(realbuffer, fmt, va_arg(vargs, int)); + appendstring(realbuffer); + break; + case 'x': + makefmt(fmt, 0, 0, 0, zeropad, width, precision, 'x'); + sprintf(realbuffer, fmt, va_arg(vargs, int)); + appendstring(realbuffer); + break; + case 's': + { + /* unused, since we already have the result */ + (void) va_arg(vargs, char *); + Py_UNICODE_COPY(s, PyUnicode_AS_UNICODE(*callresult), + PyUnicode_GET_SIZE(*callresult)); + s += PyUnicode_GET_SIZE(*callresult); + /* We're done with the unicode()/repr() => forget it */ + Py_DECREF(*callresult); + /* switch to next unicode()/repr() result */ + ++callresult; + break; + } + case 'U': + { + PyObject *obj = va_arg(vargs, PyObject *); + Py_ssize_t size = PyUnicode_GET_SIZE(obj); + Py_UNICODE_COPY(s, PyUnicode_AS_UNICODE(obj), size); + s += size; + break; + } + case 'V': + { + PyObject *obj = va_arg(vargs, PyObject *); + va_arg(vargs, const char *); + if (obj) { + Py_ssize_t size = PyUnicode_GET_SIZE(obj); + Py_UNICODE_COPY(s, PyUnicode_AS_UNICODE(obj), size); + s += size; + } else { + Py_UNICODE_COPY(s, PyUnicode_AS_UNICODE(*callresult), + PyUnicode_GET_SIZE(*callresult)); + s += PyUnicode_GET_SIZE(*callresult); + Py_DECREF(*callresult); + } + ++callresult; + break; + } + case 'S': + case 'R': + case 'A': + { + Py_UNICODE *ucopy; + Py_ssize_t usize; + Py_ssize_t upos; + /* unused, since we already have the result */ + (void) va_arg(vargs, PyObject *); + ucopy = PyUnicode_AS_UNICODE(*callresult); + usize = PyUnicode_GET_SIZE(*callresult); + for (upos = 0; upos 
< usize;) + *s++ = ucopy[upos++]; + /* We're done with the unicode()/repr() =>
forget it */ + Py_DECREF(*callresult); + /* switch to next unicode()/repr() result */ + ++callresult; + break; + } + case 'p': + sprintf(buffer, "%p", va_arg(vargs, void*)); + /* %p is ill-defined: ensure leading 0x. */ + if (buffer[1] == 'X') + buffer[1] = 'x'; + else if (buffer[1] != 'x') { + memmove(buffer+2, buffer, strlen(buffer)+1); + buffer[0] = '0'; + buffer[1] = 'x'; + } + appendstring(buffer); + break; + case '%': + *s++ = '%'; + break; + default: + appendstring(p); + goto end; + } + } + else + *s++ = *f; + } - switch (*f) { - case 'c': - (void)va_arg(count, int); - /* fall through... */ - case '%': - n++; - break; - case 'd': case 'u': case 'i': case 'x': - (void) va_arg(count, int); -#ifdef HAVE_LONG_LONG - /* Need at most - ceil(log10(256)*SIZEOF_LONG_LONG) digits, - plus 1 for the sign. 53/22 is an upper - bound for log10(256). */ - if (longlongflag) - n += 2 + (SIZEOF_LONG_LONG*53-1) / 22; - else -#endif - /* 20 bytes is enough to hold a 64-bit - integer. Decimal takes the most - space. This isn't enough for - octal. */ - n += 20; - - break; - case 's': - s = va_arg(count, char*); - n += strlen(s); - break; - case 'p': - (void) va_arg(count, int); - /* maximum 64-bit pointer representation: - * 0xffffffffffffffff - * so 19 characters is enough. - * XXX I count 18 -- what's the extra for? - */ - n += 19; - break; - default: - /* if we stumble upon an unknown - formatting code, copy the rest of - the format string to the output - string. (we cannot just skip the - code, since there's no way to know - what's in the argument list) */ - n += strlen(p); - goto expand; - } - } else - n++; - } - expand: - /* step 2: fill the buffer */ - /* Since we've analyzed how much space we need for the worst case, - use sprintf directly instead of the slower PyOS_snprintf. */ - string = PyString_FromStringAndSize(NULL, n); - if (!string) - return NULL; - - s = PyString_AsString(string); - - for (f = format; *f; f++) { - if (*f == '%') { - const char* p = f++; - Py_ssize_t i; - int longflag = 0; -#ifdef HAVE_LONG_LONG - int longlongflag = 0; -#endif - int size_tflag = 0; - /* parse the width.precision part (we're only - interested in the precision value, if any) */ - n = 0; - while (isdigit(Py_CHARMASK(*f))) - n = (n*10) + *f++ - '0'; - if (*f == '.') { - f++; - n = 0; - while (isdigit(Py_CHARMASK(*f))) - n = (n*10) + *f++ - '0'; - } - while (*f && *f != '%' && !isalpha(Py_CHARMASK(*f))) - f++; - /* Handle %ld, %lu, %lld and %llu. */ - if (*f == 'l') { - if (f[1] == 'd' || f[1] == 'u') { - longflag = 1; - ++f; - } -#ifdef HAVE_LONG_LONG - else if (f[1] == 'l' && - (f[2] == 'd' || f[2] == 'u')) { - longlongflag = 1; - f += 2; - } -#endif - } - /* handle the size_t flag. 
*/ - else if (*f == 'z' && (f[1] == 'd' || f[1] == 'u')) { - size_tflag = 1; - ++f; - } - - switch (*f) { - case 'c': - *s++ = va_arg(vargs, int); - break; - case 'd': - if (longflag) - sprintf(s, "%ld", va_arg(vargs, long)); -#ifdef HAVE_LONG_LONG - else if (longlongflag) - sprintf(s, "%" PY_FORMAT_LONG_LONG "d", - va_arg(vargs, PY_LONG_LONG)); -#endif - else if (size_tflag) - sprintf(s, "%" PY_FORMAT_SIZE_T "d", - va_arg(vargs, Py_ssize_t)); - else - sprintf(s, "%d", va_arg(vargs, int)); - s += strlen(s); - break; - case 'u': - if (longflag) - sprintf(s, "%lu", - va_arg(vargs, unsigned long)); -#ifdef HAVE_LONG_LONG - else if (longlongflag) - sprintf(s, "%" PY_FORMAT_LONG_LONG "u", - va_arg(vargs, PY_LONG_LONG)); -#endif - else if (size_tflag) - sprintf(s, "%" PY_FORMAT_SIZE_T "u", - va_arg(vargs, size_t)); - else - sprintf(s, "%u", - va_arg(vargs, unsigned int)); - s += strlen(s); - break; - case 'i': - sprintf(s, "%i", va_arg(vargs, int)); - s += strlen(s); - break; - case 'x': - sprintf(s, "%x", va_arg(vargs, int)); - s += strlen(s); - break; - case 's': - p = va_arg(vargs, char*); - i = strlen(p); - if (n > 0 && i > n) - i = n; - Py_MEMCPY(s, p, i); - s += i; - break; - case 'p': - sprintf(s, "%p", va_arg(vargs, void*)); - /* %p is ill-defined: ensure leading 0x. */ - if (s[1] == 'X') - s[1] = 'x'; - else if (s[1] != 'x') { - memmove(s+2, s, strlen(s)+1); - s[0] = '0'; - s[1] = 'x'; - } - s += strlen(s); - break; - case '%': - *s++ = '%'; - break; - default: - strcpy(s, p); - s += strlen(s); - goto end; - } - } else - *s++ = *f; - } - - end: - _PyString_Resize(&string, s - PyString_AS_STRING(string)); - return string; + end: + if (callresults) + PyObject_Free(callresults); + if (abuffer) + PyObject_Free(abuffer); + PyUnicode_Resize(&string, s - PyUnicode_AS_UNICODE(string)); + return string; + fail: + if (callresults) { + PyObject **callresult2 = callresults; + while (callresult2 < callresult) { + Py_XDECREF(*callresult2); + ++callresult2; + } + PyObject_Free(callresults); + } + if (abuffer) + PyObject_Free(abuffer); + return NULL; } PyObject * -PyString_FromFormat(const char *format, ...) +PyUnicode_FromFormat(const char *format, ...) 
{ - PyObject* ret; - va_list vargs; + PyObject* ret; + va_list vargs; #ifdef HAVE_STDARG_PROTOTYPES - va_start(vargs, format); + va_start(vargs, format); #else - va_start(vargs); + va_start(vargs); #endif - ret = PyString_FromFormatV(format, vargs); - va_end(vargs); - return ret; + ret = PyUnicode_FromFormatV(format, vargs); + va_end(vargs); + return ret; } + diff --git a/pypy/module/cpyext/test/test_cpyext.py b/pypy/module/cpyext/test/test_cpyext.py --- a/pypy/module/cpyext/test/test_cpyext.py +++ b/pypy/module/cpyext/test/test_cpyext.py @@ -690,7 +690,7 @@ mod = self.import_extension('foo', [ ('newexc', 'METH_VARARGS', ''' - char *name = PyString_AsString(PyTuple_GetItem(args, 0)); + char *name = _PyUnicode_AsString(PyTuple_GetItem(args, 0)); return PyErr_NewException(name, PyTuple_GetItem(args, 1), PyTuple_GetItem(args, 2)); ''' diff --git a/pypy/module/cpyext/test/test_frameobject.py b/pypy/module/cpyext/test/test_frameobject.py --- a/pypy/module/cpyext/test/test_frameobject.py +++ b/pypy/module/cpyext/test/test_frameobject.py @@ -6,10 +6,10 @@ module = self.import_extension('foo', [ ("raise_exception", "METH_NOARGS", """ - PyObject *py_srcfile = PyString_FromString("filename"); - PyObject *py_funcname = PyString_FromString("funcname"); + PyObject *py_srcfile = PyUnicode_FromString("filename"); + PyObject *py_funcname = PyUnicode_FromString("funcname"); PyObject *py_globals = PyDict_New(); - PyObject *empty_string = PyString_FromString(""); + PyObject *empty_bytes = PyString_FromString(""); PyObject *empty_tuple = PyTuple_New(0); PyCodeObject *py_code; PyFrameObject *py_frame; @@ -22,7 +22,7 @@ 0, /*int nlocals,*/ 0, /*int stacksize,*/ 0, /*int flags,*/ - empty_string, /*PyObject *code,*/ + empty_bytes, /*PyObject *code,*/ empty_tuple, /*PyObject *consts,*/ empty_tuple, /*PyObject *names,*/ empty_tuple, /*PyObject *varnames,*/ @@ -31,7 +31,7 @@ py_srcfile, /*PyObject *filename,*/ py_funcname, /*PyObject *name,*/ 42, /*int firstlineno,*/ - empty_string /*PyObject *lnotab*/ + empty_bytes /*PyObject *lnotab*/ ); if (!py_code) goto bad; @@ -48,7 +48,7 @@ bad: Py_XDECREF(py_srcfile); Py_XDECREF(py_funcname); - Py_XDECREF(empty_string); + Py_XDECREF(empty_bytes); Py_XDECREF(empty_tuple); Py_XDECREF(py_globals); Py_XDECREF(py_code); diff --git a/pypy/module/cpyext/test/test_stringobject.py b/pypy/module/cpyext/test/test_stringobject.py --- a/pypy/module/cpyext/test/test_stringobject.py +++ b/pypy/module/cpyext/test/test_stringobject.py @@ -130,42 +130,6 @@ ]) module.getstring() - def test_format_v(self): - module = self.import_extension('foo', [ - ("test_string_format_v", "METH_VARARGS", - ''' - return helper("bla %d ble %s\\n", - PyInt_AsLong(PyTuple_GetItem(args, 0)), - PyString_AsString(PyTuple_GetItem(args, 1))); - ''' - ) - ], prologue=''' - PyObject* helper(char* fmt, ...) 
- { - va_list va; - PyObject* res; - va_start(va, fmt); - res = PyString_FromFormatV(fmt, va); - va_end(va); - return res; - } - ''') - res = module.test_string_format_v(1, b"xyz") - assert res == "bla 1 ble xyz\n" - - def test_format(self): - module = self.import_extension('foo', [ - ("test_string_format", "METH_VARARGS", - ''' - return PyString_FromFormat("bla %d ble %s\\n", - PyInt_AsLong(PyTuple_GetItem(args, 0)), - PyString_AsString(PyTuple_GetItem(args, 1))); - ''' - ) - ]) - res = module.test_string_format(1, b"xyz") - assert res == "bla 1 ble xyz\n" - class TestString(BaseApiTest): def test_string_resize(self, space, api): py_str = new_empty_str(space, 10) diff --git a/pypy/module/cpyext/test/test_unicodeobject.py b/pypy/module/cpyext/test/test_unicodeobject.py --- a/pypy/module/cpyext/test/test_unicodeobject.py +++ b/pypy/module/cpyext/test/test_unicodeobject.py @@ -74,6 +74,41 @@ assert len(s) == 4 assert s == u'a�\x00c' + def test_format_v(self): + module = self.import_extension('foo', [ + ("test_unicode_format_v", "METH_VARARGS", + ''' + return helper("bla %d ble %s\\n", + PyInt_AsLong(PyTuple_GetItem(args, 0)), + _PyUnicode_AsString(PyTuple_GetItem(args, 1))); + ''' + ) + ], prologue=''' + PyObject* helper(char* fmt, ...) + { + va_list va; + PyObject* res; + va_start(va, fmt); + res = PyUnicode_FromFormatV(fmt, va); + va_end(va); + return res; + } + ''') + res = module.test_unicode_format_v(1, "xyz") + assert res == "bla 1 ble xyz\n" + + def test_format(self): + module = self.import_extension('foo', [ + ("test_unicode_format", "METH_VARARGS", + ''' + return PyUnicode_FromFormat("bla %d ble %s\\n", + PyInt_AsLong(PyTuple_GetItem(args, 0)), + _PyUnicode_AsString(PyTuple_GetItem(args, 1))); + ''' + ) + ]) + res = module.test_unicode_format(1, "xyz") + assert res == "bla 1 ble xyz\n" class TestUnicode(BaseApiTest): diff --git a/pypy/module/cpyext/unicodeobject.py b/pypy/module/cpyext/unicodeobject.py --- a/pypy/module/cpyext/unicodeobject.py +++ b/pypy/module/cpyext/unicodeobject.py @@ -383,7 +383,7 @@ @cpython_api([CONST_STRING], PyObject) def PyUnicode_FromString(space, s): """Create a Unicode object from an UTF-8 encoded null-terminated char buffer""" - w_str = space.wrap(rffi.charp2str(s)) + w_str = space.wrapbytes(rffi.charp2str(s)) return space.call_method(w_str, 'decode', space.wrap("utf-8")) @cpython_api([CONST_STRING, Py_ssize_t], PyObject) From noreply at buildbot.pypy.org Fri Feb 3 00:34:55 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Fri, 3 Feb 2012 00:34:55 +0100 (CET) Subject: [pypy-commit] pypy py3k: Fix most failures in module/_rawffi Message-ID: <20120202233455.CD8FF710756@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: py3k Changeset: r52044:561c7c51de58 Date: 2012-02-02 23:52 +0100 http://bitbucket.org/pypy/pypy/changeset/561c7c51de58/ Log: Fix most failures in module/_rawffi diff --git a/pypy/module/_rawffi/array.py b/pypy/module/_rawffi/array.py --- a/pypy/module/_rawffi/array.py +++ b/pypy/module/_rawffi/array.py @@ -198,7 +198,7 @@ start, stop = self.decodeslice(space, w_slice) ll_buffer = self.ll_buffer result = [ll_buffer[i] for i in range(start, stop)] - return space.wrap(''.join(result)) + return space.wrapbytes(''.join(result)) def setslice(self, space, w_slice, w_value): start, stop = self.decodeslice(space, w_slice) diff --git a/pypy/module/_rawffi/callback.py b/pypy/module/_rawffi/callback.py --- a/pypy/module/_rawffi/callback.py +++ b/pypy/module/_rawffi/callback.py @@ -17,7 +17,7 @@ def tbprint(tb, err): import 
traceback, sys traceback.print_tb(tb) - print >>sys.stderr, err + print(err, file=sys.stderr) ''', filename=__file__) tbprint = app.interphook("tbprint") diff --git a/pypy/module/_rawffi/interp_rawffi.py b/pypy/module/_rawffi/interp_rawffi.py --- a/pypy/module/_rawffi/interp_rawffi.py +++ b/pypy/module/_rawffi/interp_rawffi.py @@ -91,7 +91,7 @@ def unpack_simple_shape(space, w_shape): # 'w_shape' must be either a letter or a tuple (struct, 1). - if space.is_true(space.isinstance(w_shape, space.w_str)): + if space.is_true(space.isinstance(w_shape, space.w_unicode)): letter = space.str_w(w_shape) return letter2tp(space, letter) else: @@ -102,7 +102,7 @@ def unpack_shape_with_length(space, w_shape): # Allow 'w_shape' to be a letter or any (shape, number). # The result is always a W_Array. - if space.is_true(space.isinstance(w_shape, space.w_str)): + if space.is_true(space.isinstance(w_shape, space.w_unicode)): letter = space.str_w(w_shape) return letter2tp(space, letter) else: @@ -166,7 +166,7 @@ else: ffi_restype = ffi_type_void - if space.is_true(space.isinstance(w_name, space.w_str)): + if space.is_true(space.isinstance(w_name, space.w_unicode)): name = space.str_w(w_name) try: @@ -327,7 +327,7 @@ push_func(add_arg, argdesc, rffi.cast(rffi.LONGDOUBLE, space.float_w(w_arg))) elif letter == "c": - s = space.str_w(w_arg) + s = space.bytes_w(w_arg) if len(s) != 1: raise OperationError(space.w_TypeError, w( "Expected string of length one as character")) @@ -360,7 +360,9 @@ if c in TYPEMAP_PTR_LETTERS: res = func(add_arg, argdesc, rffi.VOIDP) return space.wrap(rffi.cast(lltype.Unsigned, res)) - elif c == 'q' or c == 'Q' or c == 'L' or c == 'c' or c == 'u': + elif c == 'c': + return space.wrapbytes(func(add_arg, argdesc, ll_type)) + elif c == 'q' or c == 'Q' or c == 'L' or c == 'u': return space.wrap(func(add_arg, argdesc, ll_type)) elif c == 'f' or c == 'd' or c == 'g': return space.wrap(float(func(add_arg, argdesc, ll_type))) diff --git a/pypy/module/_rawffi/test/test__rawffi.py b/pypy/module/_rawffi/test/test__rawffi.py --- a/pypy/module/_rawffi/test/test__rawffi.py +++ b/pypy/module/_rawffi/test/test__rawffi.py @@ -222,8 +222,8 @@ import _rawffi try: _rawffi.CDLL("xxxxx_this_name_does_not_exist_xxxxx") - except OSError, e: - print e + except OSError as e: + print(e) assert str(e).startswith("xxxxx_this_name_does_not_exist_xxxxx: ") else: raise AssertionError("did not fail??") @@ -267,13 +267,13 @@ get_char = lib.ptr('get_char', ['P', 'H'], 'c') A = _rawffi.Array('c') B = _rawffi.Array('H') - dupa = A(5, 'dupa') + dupa = A(5, b'dupa') dupaptr = dupa.byptr() for i in range(4): intptr = B(1) intptr[0] = i res = get_char(dupaptr, intptr) - assert res[0] == 'dupa'[i] + assert res[0] == 'dupa'[i:i+1] intptr.free() dupaptr.free() dupa.free() @@ -283,11 +283,11 @@ import _rawffi A = _rawffi.Array('c') buf = A(10, autofree=True) - buf[0] = '*' - assert buf[1:5] == '\x00' * 4 - buf[7:] = 'abc' - assert buf[9] == 'c' - assert buf[:8] == '*' + '\x00'*6 + 'a' + buf[0] = b'*' + assert buf[1:5] == b'\x00' * 4 + buf[7:] = b'abc' + assert buf[9] == b'c' + assert buf[:8] == b'*' + b'\x00'*6 + b'a' def test_returning_str(self): import _rawffi @@ -296,17 +296,17 @@ A = _rawffi.Array('c') arg1 = A(1) arg2 = A(1) - arg1[0] = 'y' - arg2[0] = 'x' + arg1[0] = b'y' + arg2[0] = b'x' res = char_check(arg1, arg2) - assert _rawffi.charp2string(res[0]) == 'xxxxxx' - assert _rawffi.charp2rawstring(res[0]) == 'xxxxxx' - assert _rawffi.charp2rawstring(res[0], 3) == 'xxx' - a = A(6, 'xx\x00\x00xx') - assert 
_rawffi.charp2string(a.buffer) == 'xx' - assert _rawffi.charp2rawstring(a.buffer, 4) == 'xx\x00\x00' - arg1[0] = 'x' - arg2[0] = 'y' + assert _rawffi.charp2string(res[0]) == b'xxxxxx' + assert _rawffi.charp2rawstring(res[0]) == b'xxxxxx' + assert _rawffi.charp2rawstring(res[0], 3) == b'xxx' + a = A(6, b'xx\x00\x00xx') + assert _rawffi.charp2string(a.buffer) == b'xx' + assert _rawffi.charp2rawstring(a.buffer, 4) == b'xx\x00\x00' + arg1[0] = b'x' + arg2[0] = b'y' res = char_check(arg1, arg2) assert res[0] == 0 assert _rawffi.charp2string(res[0]) is None @@ -317,10 +317,10 @@ def test_returning_unicode(self): import _rawffi A = _rawffi.Array('u') - a = A(6, u'xx\x00\x00xx') + a = A(6, 'xx\x00\x00xx') res = _rawffi.wcharp2unicode(a.buffer) - assert isinstance(res, unicode) - assert res == u'xx' + assert isinstance(res, str) + assert res == 'xx' a.free() def test_raw_callable(self): @@ -450,13 +450,13 @@ X = _rawffi.Structure([('x1', 'i'), ('x2', 'h'), ('x3', 'c'), ('next', 'P')]) next = X() next.next = 0 - next.x3 = 'x' + next.x3 = b'x' x = X() x.next = next x.x1 = 1 x.x2 = 2 - x.x3 = 'x' - assert X.fromaddress(x.next).x3 == 'x' + x.x3 = b'x' + assert X.fromaddress(x.next).x3 == b'x' x.free() next.free() create_double_struct = lib.ptr("create_double_struct", [], 'P') @@ -585,7 +585,7 @@ def compare(a, b): a1 = _rawffi.Array('i').fromaddress(_rawffi.Array('P').fromaddress(a, 1)[0], 1) a2 = _rawffi.Array('i').fromaddress(_rawffi.Array('P').fromaddress(b, 1)[0], 1) - print "comparing", a1[0], "with", a2[0] + print("comparing", a1[0], "with", a2[0]) if a1[0] not in [1,2,3,4] or a2[0] not in [1,2,3,4]: bogus_args.append((a1[0], a2[0])) if a1[0] > a2[0]: @@ -641,9 +641,9 @@ def test_raising_callback(self): import _rawffi, sys - import StringIO + from io import StringIO lib = _rawffi.CDLL(self.lib_name) - err = StringIO.StringIO() + err = StringIO() orig = sys.stderr sys.stderr = err try: @@ -659,7 +659,7 @@ val = err.getvalue() assert 'ZeroDivisionError' in val assert 'callback' in val - assert res[0] == 0L + assert res[0] == 0 finally: sys.stderr = orig @@ -761,23 +761,23 @@ import _rawffi, sys A = _rawffi.Array('u') a = A(3) - a[0] = u'x' - a[1] = u'y' - a[2] = u'z' - assert a[0] == u'x' + a[0] = 'x' + a[1] = 'y' + a[2] = 'z' + assert a[0] == 'x' b = _rawffi.Array('c').fromaddress(a.buffer, 38) if sys.maxunicode > 65535: # UCS4 build - assert b[0] == 'x' - assert b[1] == '\x00' - assert b[2] == '\x00' - assert b[3] == '\x00' - assert b[4] == 'y' + assert b[0] == b'x' + assert b[1] == b'\x00' + assert b[2] == b'\x00' + assert b[3] == b'\x00' + assert b[4] == b'y' else: # UCS2 build - assert b[0] == 'x' - assert b[1] == '\x00' - assert b[2] == 'y' + assert b[0] == b'x' + assert b[1] == b'\x00' + assert b[2] == b'y' a.free() def test_truncate(self): @@ -785,7 +785,7 @@ a = _rawffi.Array('b')(1) a[0] = -5 assert a[0] == -5 - a[0] = 123L + a[0] = 123 assert a[0] == 123 a[0] = 0x97817182ab128111111111111171817d042 assert a[0] == 0x42 @@ -798,7 +798,7 @@ a.free() a = _rawffi.Array('B')(1) - a[0] = 123L + a[0] = 123 assert a[0] == 123 a[0] = 0x18329b1718b97d89b7198db817d042 assert a[0] == 0x42 @@ -811,7 +811,7 @@ a.free() a = _rawffi.Array('h')(1) - a[0] = 123L + a[0] = 123 assert a[0] == 123 a[0] = 0x9112cbc91bd91db19aaaaaaaaaaaaaa8170d42 assert a[0] == 0x0d42 @@ -824,7 +824,7 @@ a.free() a = _rawffi.Array('H')(1) - a[0] = 123L + a[0] = 123 assert a[0] == 123 a[0] = 0xeeeeeeeeeeeeeeeeeeeeeeeeeeeee817d042 assert a[0] == 0xd042 @@ -834,7 +834,7 @@ maxptr = (256 ** struct.calcsize("P")) - 1 a = 
_rawffi.Array('P')(1) - a[0] = 123L + a[0] = 123 assert a[0] == 123 a[0] = 0xeeeeeeeeeeeeeeeeeeeeeeeeeeeee817d042 assert a[0] == 0xeeeeeeeeeeeeeeeeeeeeeeeeeeeee817d042 & maxptr @@ -889,7 +889,7 @@ f = lib.ptr('SetLastError', [], 'i') try: f() - except ValueError, e: + except ValueError as e: assert "Procedure called with not enough arguments" in e.message else: assert 0, "Did not raise" @@ -900,7 +900,7 @@ arg[0] = 1 try: f(arg) - except ValueError, e: + except ValueError as e: assert "Procedure called with too many arguments" in e.message else: assert 0, "Did not raise" @@ -911,13 +911,13 @@ X_Y = _rawffi.Structure([('x', 'l'), ('y', 'l')]) x_y = X_Y() lib = _rawffi.CDLL(self.lib_name) - print >> sys.stderr, "getting..." + print("getting...") sum_x_y = lib.ptr('sum_x_y', [(X_Y, 1)], 'l') x_y.x = 200 x_y.y = 220 - print >> sys.stderr, "calling..." + print("calling...") res = sum_x_y(x_y) - print >> sys.stderr, "done" + print("done") assert res[0] == 420 x_y.free() @@ -994,21 +994,21 @@ s = S(autofree=True) b = buffer(s) assert len(b) == 40 - b[4] = 'X' - b[:3] = 'ABC' - assert b[:6] == 'ABC\x00X\x00' + b[4] = b'X' + b[:3] = b'ABC' + assert b[:6] == b'ABC\x00X\x00' A = _rawffi.Array('c') a = A(10, autofree=True) - a[3] = 'x' + a[3] = b'x' b = buffer(a) assert len(b) == 10 - assert b[3] == 'x' - b[6] = 'y' - assert a[6] == 'y' - b[3:5] = 'zt' - assert a[3] == 'z' - assert a[4] == 't' + assert b[3] == b'x' + b[6] = b'y' + assert a[6] == b'y' + b[3:5] = b'zt' + assert a[3] == b'z' + assert a[4] == b't' def test_union(self): import _rawffi @@ -1054,7 +1054,7 @@ oldnum = _rawffi._num_of_allocated_objects() A = _rawffi.Array('c') - a = A(6, 'xxyxx\x00', autofree=True) + a = A(6, b'xxyxx\x00', autofree=True) assert _rawffi.charp2string(a.buffer) == 'xxyxx' a = None gc.collect() diff --git a/pypy/module/_rawffi/test/test_nested.py b/pypy/module/_rawffi/test/test_nested.py --- a/pypy/module/_rawffi/test/test_nested.py +++ b/pypy/module/_rawffi/test/test_nested.py @@ -47,14 +47,14 @@ assert S.fieldoffset('x') == 0 assert S.fieldoffset('s1') == S1.alignment s = S() - s.x = 'G' + s.x = b'G' raises(TypeError, 's.s1') assert s.fieldaddress('s1') == s.buffer + S.fieldoffset('s1') s1 = S1.fromaddress(s.fieldaddress('s1')) - s1.c = 'H' + s1.c = b'H' rawbuf = _rawffi.Array('c').fromaddress(s.buffer, S.size) - assert rawbuf[0] == 'G' - assert rawbuf[S1.alignment + S1.fieldoffset('c')] == 'H' + assert rawbuf[0] == b'G' + assert rawbuf[S1.alignment + S1.fieldoffset('c')] == b'H' s.free() def test_array_of_structures(self): @@ -64,17 +64,17 @@ a = A(3) raises(TypeError, "a[0]") s0 = S.fromaddress(a.buffer) - s0.c = 'B' + s0.c = b'B' assert a.itemaddress(1) == a.buffer + S.size s1 = S.fromaddress(a.itemaddress(1)) - s1.c = 'A' + s1.c = b'A' s2 = S.fromaddress(a.itemaddress(2)) - s2.c = 'Z' + s2.c = b'Z' rawbuf = _rawffi.Array('c').fromaddress(a.buffer, S.size * len(a)) ofs = S.fieldoffset('c') - assert rawbuf[0*S.size+ofs] == 'B' - assert rawbuf[1*S.size+ofs] == 'A' - assert rawbuf[2*S.size+ofs] == 'Z' + assert rawbuf[0*S.size+ofs] == b'B' + assert rawbuf[1*S.size+ofs] == b'A' + assert rawbuf[2*S.size+ofs] == b'Z' a.free() def test_array_of_array(self): @@ -107,13 +107,13 @@ assert S.fieldoffset('x') == 0 assert S.fieldoffset('ar') == A5alignment s = S() - s.x = 'G' + s.x = b'G' raises(TypeError, 's.ar') assert s.fieldaddress('ar') == s.buffer + S.fieldoffset('ar') a1 = A.fromaddress(s.fieldaddress('ar'), 5) a1[4] = 33 rawbuf = _rawffi.Array('c').fromaddress(s.buffer, S.size) - assert rawbuf[0] == 'G' + assert 
rawbuf[0] == b'G' sizeofint = struct.calcsize("i") v = 0 for i in range(sizeofint): From noreply at buildbot.pypy.org Fri Feb 3 00:35:06 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Fri, 3 Feb 2012 00:35:06 +0100 (CET) Subject: [pypy-commit] pypy py3k: hg merge default Message-ID: <20120202233506.37FDA710756@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: py3k Changeset: r52045:ab27fe8ad919 Date: 2012-02-03 00:30 +0100 http://bitbucket.org/pypy/pypy/changeset/ab27fe8ad919/ Log: hg merge default diff too long, truncating to 10000 out of 144186 lines diff --git a/lib-python/2.7/BaseHTTPServer.py b/lib-python/2.7/BaseHTTPServer.py --- a/lib-python/2.7/BaseHTTPServer.py +++ b/lib-python/2.7/BaseHTTPServer.py @@ -310,7 +310,13 @@ """ try: - self.raw_requestline = self.rfile.readline() + self.raw_requestline = self.rfile.readline(65537) + if len(self.raw_requestline) > 65536: + self.requestline = '' + self.request_version = '' + self.command = '' + self.send_error(414) + return if not self.raw_requestline: self.close_connection = 1 return diff --git a/lib-python/2.7/ConfigParser.py b/lib-python/2.7/ConfigParser.py --- a/lib-python/2.7/ConfigParser.py +++ b/lib-python/2.7/ConfigParser.py @@ -545,6 +545,38 @@ if isinstance(val, list): options[name] = '\n'.join(val) +import UserDict as _UserDict + +class _Chainmap(_UserDict.DictMixin): + """Combine multiple mappings for successive lookups. + + For example, to emulate Python's normal lookup sequence: + + import __builtin__ + pylookup = _Chainmap(locals(), globals(), vars(__builtin__)) + """ + + def __init__(self, *maps): + self._maps = maps + + def __getitem__(self, key): + for mapping in self._maps: + try: + return mapping[key] + except KeyError: + pass + raise KeyError(key) + + def keys(self): + result = [] + seen = set() + for mapping in self_maps: + for key in mapping: + if key not in seen: + result.append(key) + seen.add(key) + return result + class ConfigParser(RawConfigParser): def get(self, section, option, raw=False, vars=None): @@ -559,16 +591,18 @@ The section DEFAULT is special. 
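(Aside for readers of the ConfigParser hunk above, not part of the diff: a minimal sketch of the successive-lookup behaviour that _Chainmap provides and that the get() rewrite just below relies on. Plain dicts stand in for the parser's vars/section/defaults mappings; all names here are illustrative.)

    vardict = {'opt': 'from-vars'}
    sectiondict = {'opt': 'from-section'}
    defaults = {'opt': 'from-defaults', 'fallback': 'y'}
    d = _Chainmap(vardict, sectiondict, defaults)
    assert d['opt'] == 'from-vars'     # the earliest mapping wins
    assert d['fallback'] == 'y'        # unknown keys fall through to the defaults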
""" - d = self._defaults.copy() + sectiondict = {} try: - d.update(self._sections[section]) + sectiondict = self._sections[section] except KeyError: if section != DEFAULTSECT: raise NoSectionError(section) # Update with the entry specific variables + vardict = {} if vars: for key, value in vars.items(): - d[self.optionxform(key)] = value + vardict[self.optionxform(key)] = value + d = _Chainmap(vardict, sectiondict, self._defaults) option = self.optionxform(option) try: value = d[option] diff --git a/lib-python/2.7/Cookie.py b/lib-python/2.7/Cookie.py --- a/lib-python/2.7/Cookie.py +++ b/lib-python/2.7/Cookie.py @@ -258,6 +258,11 @@ '\033' : '\\033', '\034' : '\\034', '\035' : '\\035', '\036' : '\\036', '\037' : '\\037', + # Because of the way browsers really handle cookies (as opposed + # to what the RFC says) we also encode , and ; + + ',' : '\\054', ';' : '\\073', + '"' : '\\"', '\\' : '\\\\', '\177' : '\\177', '\200' : '\\200', '\201' : '\\201', diff --git a/lib-python/2.7/HTMLParser.py b/lib-python/2.7/HTMLParser.py --- a/lib-python/2.7/HTMLParser.py +++ b/lib-python/2.7/HTMLParser.py @@ -26,7 +26,7 @@ tagfind = re.compile('[a-zA-Z][-.a-zA-Z0-9:_]*') attrfind = re.compile( r'\s*([a-zA-Z_][-.:a-zA-Z_0-9]*)(\s*=\s*' - r'(\'[^\']*\'|"[^"]*"|[-a-zA-Z0-9./,:;+*%?!&$\(\)_#=~@]*))?') + r'(\'[^\']*\'|"[^"]*"|[^\s"\'=<>`]*))?') locatestarttagend = re.compile(r""" <[a-zA-Z][-.a-zA-Z0-9:_]* # tag name @@ -99,7 +99,7 @@ markupbase.ParserBase.reset(self) def feed(self, data): - """Feed data to the parser. + r"""Feed data to the parser. Call this as often as you want, with as little or as much text as you want (may include '\n'). @@ -367,13 +367,16 @@ return s def replaceEntities(s): s = s.groups()[0] - if s[0] == "#": - s = s[1:] - if s[0] in ['x','X']: - c = int(s[1:], 16) - else: - c = int(s) - return unichr(c) + try: + if s[0] == "#": + s = s[1:] + if s[0] in ['x','X']: + c = int(s[1:], 16) + else: + c = int(s) + return unichr(c) + except ValueError: + return '&#'+s+';' else: # Cannot use name2codepoint directly, because HTMLParser supports apos, # which is not part of HTML 4 diff --git a/lib-python/2.7/SimpleHTTPServer.py b/lib-python/2.7/SimpleHTTPServer.py --- a/lib-python/2.7/SimpleHTTPServer.py +++ b/lib-python/2.7/SimpleHTTPServer.py @@ -15,6 +15,7 @@ import BaseHTTPServer import urllib import cgi +import sys import shutil import mimetypes try: @@ -131,7 +132,8 @@ length = f.tell() f.seek(0) self.send_response(200) - self.send_header("Content-type", "text/html") + encoding = sys.getfilesystemencoding() + self.send_header("Content-type", "text/html; charset=%s" % encoding) self.send_header("Content-Length", str(length)) self.end_headers() return f diff --git a/lib-python/2.7/SimpleXMLRPCServer.py b/lib-python/2.7/SimpleXMLRPCServer.py --- a/lib-python/2.7/SimpleXMLRPCServer.py +++ b/lib-python/2.7/SimpleXMLRPCServer.py @@ -246,7 +246,7 @@ marshalled data. For backwards compatibility, a dispatch function can be provided as an argument (see comment in SimpleXMLRPCRequestHandler.do_POST) but overriding the - existing method through subclassing is the prefered means + existing method through subclassing is the preferred means of changing method dispatch behavior. """ diff --git a/lib-python/2.7/SocketServer.py b/lib-python/2.7/SocketServer.py --- a/lib-python/2.7/SocketServer.py +++ b/lib-python/2.7/SocketServer.py @@ -675,7 +675,7 @@ # A timeout to apply to the request socket, if not None. timeout = None - # Disable nagle algoritm for this socket, if True. 
+ # Disable nagle algorithm for this socket, if True. # Use only when wbufsize != 0, to avoid small packets. disable_nagle_algorithm = False diff --git a/lib-python/2.7/StringIO.py b/lib-python/2.7/StringIO.py --- a/lib-python/2.7/StringIO.py +++ b/lib-python/2.7/StringIO.py @@ -266,6 +266,7 @@ 8th bit) will cause a UnicodeError to be raised when getvalue() is called. """ + _complain_ifclosed(self.closed) if self.buflist: self.buf += ''.join(self.buflist) self.buflist = [] diff --git a/lib-python/2.7/_abcoll.py b/lib-python/2.7/_abcoll.py --- a/lib-python/2.7/_abcoll.py +++ b/lib-python/2.7/_abcoll.py @@ -82,7 +82,7 @@ @classmethod def __subclasshook__(cls, C): if cls is Iterator: - if _hasattr(C, "next"): + if _hasattr(C, "next") and _hasattr(C, "__iter__"): return True return NotImplemented diff --git a/lib-python/2.7/_pyio.py b/lib-python/2.7/_pyio.py --- a/lib-python/2.7/_pyio.py +++ b/lib-python/2.7/_pyio.py @@ -16,6 +16,7 @@ import io from io import (__all__, SEEK_SET, SEEK_CUR, SEEK_END) +from errno import EINTR __metaclass__ = type @@ -559,7 +560,11 @@ if not data: break res += data - return bytes(res) + if res: + return bytes(res) + else: + # b'' or None + return data def readinto(self, b): """Read up to len(b) bytes into b. @@ -678,7 +683,7 @@ """ def __init__(self, raw): - self.raw = raw + self._raw = raw ### Positioning ### @@ -722,8 +727,8 @@ if self.raw is None: raise ValueError("raw stream already detached") self.flush() - raw = self.raw - self.raw = None + raw = self._raw + self._raw = None return raw ### Inquiries ### @@ -738,6 +743,10 @@ return self.raw.writable() @property + def raw(self): + return self._raw + + @property def closed(self): return self.raw.closed @@ -933,7 +942,12 @@ current_size = 0 while True: # Read until EOF or until read() would block. 
- chunk = self.raw.read() + try: + chunk = self.raw.read() + except IOError as e: + if e.errno != EINTR: + raise + continue if chunk in empty_values: nodata_val = chunk break @@ -952,7 +966,12 @@ chunks = [buf[pos:]] wanted = max(self.buffer_size, n) while avail < n: - chunk = self.raw.read(wanted) + try: + chunk = self.raw.read(wanted) + except IOError as e: + if e.errno != EINTR: + raise + continue if chunk in empty_values: nodata_val = chunk break @@ -981,7 +1000,14 @@ have = len(self._read_buf) - self._read_pos if have < want or have <= 0: to_read = self.buffer_size - have - current = self.raw.read(to_read) + while True: + try: + current = self.raw.read(to_read) + except IOError as e: + if e.errno != EINTR: + raise + continue + break if current: self._read_buf = self._read_buf[self._read_pos:] + current self._read_pos = 0 @@ -1088,7 +1114,12 @@ written = 0 try: while self._write_buf: - n = self.raw.write(self._write_buf) + try: + n = self.raw.write(self._write_buf) + except IOError as e: + if e.errno != EINTR: + raise + continue if n > len(self._write_buf) or n < 0: raise IOError("write() returned incorrect number of bytes") del self._write_buf[:n] @@ -1456,7 +1487,7 @@ if not isinstance(errors, basestring): raise ValueError("invalid errors: %r" % errors) - self.buffer = buffer + self._buffer = buffer self._line_buffering = line_buffering self._encoding = encoding self._errors = errors @@ -1511,6 +1542,10 @@ def line_buffering(self): return self._line_buffering + @property + def buffer(self): + return self._buffer + def seekable(self): return self._seekable @@ -1724,8 +1759,8 @@ if self.buffer is None: raise ValueError("buffer is already detached") self.flush() - buffer = self.buffer - self.buffer = None + buffer = self._buffer + self._buffer = None return buffer def seek(self, cookie, whence=0): diff --git a/lib-python/2.7/_weakrefset.py b/lib-python/2.7/_weakrefset.py --- a/lib-python/2.7/_weakrefset.py +++ b/lib-python/2.7/_weakrefset.py @@ -66,7 +66,11 @@ return sum(x() is not None for x in self.data) def __contains__(self, item): - return ref(item) in self.data + try: + wr = ref(item) + except TypeError: + return False + return wr in self.data def __reduce__(self): return (self.__class__, (list(self),), diff --git a/lib-python/2.7/anydbm.py b/lib-python/2.7/anydbm.py --- a/lib-python/2.7/anydbm.py +++ b/lib-python/2.7/anydbm.py @@ -29,17 +29,8 @@ list = d.keys() # return a list of all existing keys (slow!) Future versions may change the order in which implementations are -tested for existence, add interfaces to other dbm-like +tested for existence, and add interfaces to other dbm-like implementations. - -The open function has an optional second argument. This can be 'r', -for read-only access, 'w', for read-write access of an existing -database, 'c' for read-write access to a new or existing database, and -'n' for read-write access to a new database. The default is 'r'. - -Note: 'r' and 'w' fail if the database doesn't exist; 'c' creates it -only if it doesn't exist; and 'n' always creates a new database. - """ class error(Exception): @@ -63,7 +54,18 @@ error = tuple(_errors) -def open(file, flag = 'r', mode = 0666): +def open(file, flag='r', mode=0666): + """Open or create database at path given by *file*. + + Optional argument *flag* can be 'r' (default) for read-only access, 'w' + for read-write access of an existing database, 'c' for read-write access + to a new or existing database, and 'n' for read-write access to a new + database. 
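(Aside, not part of the diff: a tiny usage sketch of the flag values documented in the paragraph above; 'scores.db' is only an illustrative file name.)

    import anydbm
    db = anydbm.open('scores.db', 'c')   # 'c': create the file if it does not exist
    db['alice'] = '10'                   # keys and values are plain strings
    db.close()
    db = anydbm.open('scores.db', 'r')   # 'r': read-only, fails if the file is missing
    assert db['alice'] == '10'
    db.close()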
+ + Note: 'r' and 'w' fail if the database doesn't exist; 'c' creates it + only if it doesn't exist; and 'n' always creates a new database. + """ + # guess the type of an existing database from whichdb import whichdb result=whichdb(file) diff --git a/lib-python/2.7/argparse.py b/lib-python/2.7/argparse.py --- a/lib-python/2.7/argparse.py +++ b/lib-python/2.7/argparse.py @@ -82,6 +82,7 @@ ] +import collections as _collections import copy as _copy import os as _os import re as _re @@ -1037,7 +1038,7 @@ self._prog_prefix = prog self._parser_class = parser_class - self._name_parser_map = {} + self._name_parser_map = _collections.OrderedDict() self._choices_actions = [] super(_SubParsersAction, self).__init__( @@ -1080,7 +1081,7 @@ parser = self._name_parser_map[parser_name] except KeyError: tup = parser_name, ', '.join(self._name_parser_map) - msg = _('unknown parser %r (choices: %s)' % tup) + msg = _('unknown parser %r (choices: %s)') % tup raise ArgumentError(self, msg) # parse all the remaining options into the namespace @@ -1109,7 +1110,7 @@ the builtin open() function. """ - def __init__(self, mode='r', bufsize=None): + def __init__(self, mode='r', bufsize=-1): self._mode = mode self._bufsize = bufsize @@ -1121,18 +1122,19 @@ elif 'w' in self._mode: return _sys.stdout else: - msg = _('argument "-" with mode %r' % self._mode) + msg = _('argument "-" with mode %r') % self._mode raise ValueError(msg) # all other arguments are used as file names - if self._bufsize: + try: return open(string, self._mode, self._bufsize) - else: - return open(string, self._mode) + except IOError as e: + message = _("can't open '%s': %s") + raise ArgumentTypeError(message % (string, e)) def __repr__(self): - args = [self._mode, self._bufsize] - args_str = ', '.join([repr(arg) for arg in args if arg is not None]) + args = self._mode, self._bufsize + args_str = ', '.join(repr(arg) for arg in args if arg != -1) return '%s(%s)' % (type(self).__name__, args_str) # =========================== @@ -1275,13 +1277,20 @@ # create the action object, and add it to the parser action_class = self._pop_action_class(kwargs) if not _callable(action_class): - raise ValueError('unknown action "%s"' % action_class) + raise ValueError('unknown action "%s"' % (action_class,)) action = action_class(**kwargs) # raise an error if the action type is not callable type_func = self._registry_get('type', action.type, action.type) if not _callable(type_func): - raise ValueError('%r is not callable' % type_func) + raise ValueError('%r is not callable' % (type_func,)) + + # raise an error if the metavar does not match the type + if hasattr(self, "_get_formatter"): + try: + self._get_formatter()._format_args(action, None) + except TypeError: + raise ValueError("length of metavar tuple does not match nargs") return self._add_action(action) @@ -1481,6 +1490,7 @@ self._defaults = container._defaults self._has_negative_number_optionals = \ container._has_negative_number_optionals + self._mutually_exclusive_groups = container._mutually_exclusive_groups def _add_action(self, action): action = super(_ArgumentGroup, self)._add_action(action) diff --git a/lib-python/2.7/ast.py b/lib-python/2.7/ast.py --- a/lib-python/2.7/ast.py +++ b/lib-python/2.7/ast.py @@ -29,12 +29,12 @@ from _ast import __version__ -def parse(expr, filename='', mode='exec'): +def parse(source, filename='', mode='exec'): """ - Parse an expression into an AST node. - Equivalent to compile(expr, filename, mode, PyCF_ONLY_AST). + Parse the source into an AST node. 
+ Equivalent to compile(source, filename, mode, PyCF_ONLY_AST). """ - return compile(expr, filename, mode, PyCF_ONLY_AST) + return compile(source, filename, mode, PyCF_ONLY_AST) def literal_eval(node_or_string): @@ -152,8 +152,6 @@ Increment the line number of each node in the tree starting at *node* by *n*. This is useful to "move code" to a different location in a file. """ - if 'lineno' in node._attributes: - node.lineno = getattr(node, 'lineno', 0) + n for child in walk(node): if 'lineno' in child._attributes: child.lineno = getattr(child, 'lineno', 0) + n @@ -204,9 +202,9 @@ def walk(node): """ - Recursively yield all child nodes of *node*, in no specified order. This is - useful if you only want to modify nodes in place and don't care about the - context. + Recursively yield all descendant nodes in the tree starting at *node* + (including *node* itself), in no specified order. This is useful if you + only want to modify nodes in place and don't care about the context. """ from collections import deque todo = deque([node]) diff --git a/lib-python/2.7/asyncore.py b/lib-python/2.7/asyncore.py --- a/lib-python/2.7/asyncore.py +++ b/lib-python/2.7/asyncore.py @@ -54,7 +54,11 @@ import os from errno import EALREADY, EINPROGRESS, EWOULDBLOCK, ECONNRESET, EINVAL, \ - ENOTCONN, ESHUTDOWN, EINTR, EISCONN, EBADF, ECONNABORTED, errorcode + ENOTCONN, ESHUTDOWN, EINTR, EISCONN, EBADF, ECONNABORTED, EPIPE, EAGAIN, \ + errorcode + +_DISCONNECTED = frozenset((ECONNRESET, ENOTCONN, ESHUTDOWN, ECONNABORTED, EPIPE, + EBADF)) try: socket_map @@ -109,7 +113,7 @@ if flags & (select.POLLHUP | select.POLLERR | select.POLLNVAL): obj.handle_close() except socket.error, e: - if e.args[0] not in (EBADF, ECONNRESET, ENOTCONN, ESHUTDOWN, ECONNABORTED): + if e.args[0] not in _DISCONNECTED: obj.handle_error() else: obj.handle_close() @@ -353,7 +357,7 @@ except TypeError: return None except socket.error as why: - if why.args[0] in (EWOULDBLOCK, ECONNABORTED): + if why.args[0] in (EWOULDBLOCK, ECONNABORTED, EAGAIN): return None else: raise @@ -367,7 +371,7 @@ except socket.error, why: if why.args[0] == EWOULDBLOCK: return 0 - elif why.args[0] in (ECONNRESET, ENOTCONN, ESHUTDOWN, ECONNABORTED): + elif why.args[0] in _DISCONNECTED: self.handle_close() return 0 else: @@ -385,7 +389,7 @@ return data except socket.error, why: # winsock sometimes throws ENOTCONN - if why.args[0] in [ECONNRESET, ENOTCONN, ESHUTDOWN, ECONNABORTED]: + if why.args[0] in _DISCONNECTED: self.handle_close() return '' else: diff --git a/lib-python/2.7/bdb.py b/lib-python/2.7/bdb.py --- a/lib-python/2.7/bdb.py +++ b/lib-python/2.7/bdb.py @@ -250,6 +250,12 @@ list.append(lineno) bp = Breakpoint(filename, lineno, temporary, cond, funcname) + def _prune_breaks(self, filename, lineno): + if (filename, lineno) not in Breakpoint.bplist: + self.breaks[filename].remove(lineno) + if not self.breaks[filename]: + del self.breaks[filename] + def clear_break(self, filename, lineno): filename = self.canonic(filename) if not filename in self.breaks: @@ -261,10 +267,7 @@ # pair, then remove the breaks entry for bp in Breakpoint.bplist[filename, lineno][:]: bp.deleteMe() - if (filename, lineno) not in Breakpoint.bplist: - self.breaks[filename].remove(lineno) - if not self.breaks[filename]: - del self.breaks[filename] + self._prune_breaks(filename, lineno) def clear_bpbynumber(self, arg): try: @@ -277,7 +280,8 @@ return 'Breakpoint number (%d) out of range' % number if not bp: return 'Breakpoint (%d) already deleted' % number - self.clear_break(bp.file, bp.line) + 
bp.deleteMe() + self._prune_breaks(bp.file, bp.line) def clear_all_file_breaks(self, filename): filename = self.canonic(filename) diff --git a/lib-python/2.7/collections.py b/lib-python/2.7/collections.py --- a/lib-python/2.7/collections.py +++ b/lib-python/2.7/collections.py @@ -6,59 +6,38 @@ __all__ += _abcoll.__all__ from _collections import deque, defaultdict -from operator import itemgetter as _itemgetter, eq as _eq +from operator import itemgetter as _itemgetter from keyword import iskeyword as _iskeyword import sys as _sys import heapq as _heapq -from itertools import repeat as _repeat, chain as _chain, starmap as _starmap, \ - ifilter as _ifilter, imap as _imap +from itertools import repeat as _repeat, chain as _chain, starmap as _starmap + try: - from thread import get_ident + from thread import get_ident as _get_ident except ImportError: - from dummy_thread import get_ident - -def _recursive_repr(user_function): - 'Decorator to make a repr function return "..." for a recursive call' - repr_running = set() - - def wrapper(self): - key = id(self), get_ident() - if key in repr_running: - return '...' - repr_running.add(key) - try: - result = user_function(self) - finally: - repr_running.discard(key) - return result - - # Can't use functools.wraps() here because of bootstrap issues - wrapper.__module__ = getattr(user_function, '__module__') - wrapper.__doc__ = getattr(user_function, '__doc__') - wrapper.__name__ = getattr(user_function, '__name__') - return wrapper + from dummy_thread import get_ident as _get_ident ################################################################################ ### OrderedDict ################################################################################ -class OrderedDict(dict, MutableMapping): +class OrderedDict(dict): 'Dictionary that remembers insertion order' # An inherited dict maps keys to values. # The inherited dict provides __getitem__, __len__, __contains__, and get. # The remaining methods are order-aware. - # Big-O running times for all methods are the same as for regular dictionaries. + # Big-O running times for all methods are the same as regular dictionaries. - # The internal self.__map dictionary maps keys to links in a doubly linked list. + # The internal self.__map dict maps keys to links in a doubly linked list. # The circular doubly linked list starts and ends with a sentinel element. # The sentinel element never gets deleted (this simplifies the algorithm). # Each link is stored as a list of length three: [PREV, NEXT, KEY]. def __init__(self, *args, **kwds): - '''Initialize an ordered dictionary. Signature is the same as for - regular dictionaries, but keyword arguments are not recommended - because their insertion order is arbitrary. + '''Initialize an ordered dictionary. The signature is the same as + regular dictionaries, but keyword arguments are not recommended because + their insertion order is arbitrary. ''' if len(args) > 1: @@ -66,17 +45,15 @@ try: self.__root except AttributeError: - self.__root = root = [None, None, None] # sentinel node - PREV = 0 - NEXT = 1 - root[PREV] = root[NEXT] = root + self.__root = root = [] # sentinel node + root[:] = [root, root, None] self.__map = {} - self.update(*args, **kwds) + self.__update(*args, **kwds) def __setitem__(self, key, value, PREV=0, NEXT=1, dict_setitem=dict.__setitem__): 'od.__setitem__(i, y) <==> od[i]=y' - # Setting a new item creates a new link which goes at the end of the linked - # list, and the inherited dictionary is updated with the new key/value pair. 
+ # Setting a new item creates a new link at the end of the linked list, + # and the inherited dictionary is updated with the new key/value pair. if key not in self: root = self.__root last = root[PREV] @@ -85,65 +62,160 @@ def __delitem__(self, key, PREV=0, NEXT=1, dict_delitem=dict.__delitem__): 'od.__delitem__(y) <==> del od[y]' - # Deleting an existing item uses self.__map to find the link which is - # then removed by updating the links in the predecessor and successor nodes. + # Deleting an existing item uses self.__map to find the link which gets + # removed by updating the links in the predecessor and successor nodes. dict_delitem(self, key) - link = self.__map.pop(key) - link_prev = link[PREV] - link_next = link[NEXT] + link_prev, link_next, key = self.__map.pop(key) link_prev[NEXT] = link_next link_next[PREV] = link_prev - def __iter__(self, NEXT=1, KEY=2): + def __iter__(self): 'od.__iter__() <==> iter(od)' # Traverse the linked list in order. + NEXT, KEY = 1, 2 root = self.__root curr = root[NEXT] while curr is not root: yield curr[KEY] curr = curr[NEXT] - def __reversed__(self, PREV=0, KEY=2): + def __reversed__(self): 'od.__reversed__() <==> reversed(od)' # Traverse the linked list in reverse order. + PREV, KEY = 0, 2 root = self.__root curr = root[PREV] while curr is not root: yield curr[KEY] curr = curr[PREV] + def clear(self): + 'od.clear() -> None. Remove all items from od.' + for node in self.__map.itervalues(): + del node[:] + root = self.__root + root[:] = [root, root, None] + self.__map.clear() + dict.clear(self) + + # -- the following methods do not depend on the internal structure -- + + def keys(self): + 'od.keys() -> list of keys in od' + return list(self) + + def values(self): + 'od.values() -> list of values in od' + return [self[key] for key in self] + + def items(self): + 'od.items() -> list of (key, value) pairs in od' + return [(key, self[key]) for key in self] + + def iterkeys(self): + 'od.iterkeys() -> an iterator over the keys in od' + return iter(self) + + def itervalues(self): + 'od.itervalues -> an iterator over the values in od' + for k in self: + yield self[k] + + def iteritems(self): + 'od.iteritems -> an iterator over the (key, value) pairs in od' + for k in self: + yield (k, self[k]) + + update = MutableMapping.update + + __update = update # let subclasses override update without breaking __init__ + + __marker = object() + + def pop(self, key, default=__marker): + '''od.pop(k[,d]) -> v, remove specified key and return the corresponding + value. If key is not found, d is returned if given, otherwise KeyError + is raised. + + ''' + if key in self: + result = self[key] + del self[key] + return result + if default is self.__marker: + raise KeyError(key) + return default + + def setdefault(self, key, default=None): + 'od.setdefault(k[,d]) -> od.get(k,d), also set od[k]=d if k not in od' + if key in self: + return self[key] + self[key] = default + return default + + def popitem(self, last=True): + '''od.popitem() -> (k, v), return and remove a (key, value) pair. + Pairs are returned in LIFO order if last is true or FIFO order if false. + + ''' + if not self: + raise KeyError('dictionary is empty') + key = next(reversed(self) if last else iter(self)) + value = self.pop(key) + return key, value + + def __repr__(self, _repr_running={}): + 'od.__repr__() <==> repr(od)' + call_key = id(self), _get_ident() + if call_key in _repr_running: + return '...' 
+ _repr_running[call_key] = 1 + try: + if not self: + return '%s()' % (self.__class__.__name__,) + return '%s(%r)' % (self.__class__.__name__, self.items()) + finally: + del _repr_running[call_key] + def __reduce__(self): 'Return state information for pickling' items = [[k, self[k]] for k in self] - tmp = self.__map, self.__root - del self.__map, self.__root inst_dict = vars(self).copy() - self.__map, self.__root = tmp + for k in vars(OrderedDict()): + inst_dict.pop(k, None) if inst_dict: return (self.__class__, (items,), inst_dict) return self.__class__, (items,) - def clear(self): - 'od.clear() -> None. Remove all items from od.' - try: - for node in self.__map.itervalues(): - del node[:] - self.__root[:] = [self.__root, self.__root, None] - self.__map.clear() - except AttributeError: - pass - dict.clear(self) + def copy(self): + 'od.copy() -> a shallow copy of od' + return self.__class__(self) - setdefault = MutableMapping.setdefault - update = MutableMapping.update - pop = MutableMapping.pop - keys = MutableMapping.keys - values = MutableMapping.values - items = MutableMapping.items - iterkeys = MutableMapping.iterkeys - itervalues = MutableMapping.itervalues - iteritems = MutableMapping.iteritems - __ne__ = MutableMapping.__ne__ + @classmethod + def fromkeys(cls, iterable, value=None): + '''OD.fromkeys(S[, v]) -> New ordered dictionary with keys from S. + If not specified, the value defaults to None. + + ''' + self = cls() + for key in iterable: + self[key] = value + return self + + def __eq__(self, other): + '''od.__eq__(y) <==> od==y. Comparison to another OD is order-sensitive + while comparison to a regular mapping is order-insensitive. + + ''' + if isinstance(other, OrderedDict): + return len(self)==len(other) and self.items() == other.items() + return dict.__eq__(self, other) + + def __ne__(self, other): + 'od.__ne__(y) <==> od!=y' + return not self == other + + # -- the following methods support python 3.x style dictionary views -- def viewkeys(self): "od.viewkeys() -> a set-like object providing a view on od's keys" @@ -157,49 +229,6 @@ "od.viewitems() -> a set-like object providing a view on od's items" return ItemsView(self) - def popitem(self, last=True): - '''od.popitem() -> (k, v), return and remove a (key, value) pair. - Pairs are returned in LIFO order if last is true or FIFO order if false. - - ''' - if not self: - raise KeyError('dictionary is empty') - key = next(reversed(self) if last else iter(self)) - value = self.pop(key) - return key, value - - @_recursive_repr - def __repr__(self): - 'od.__repr__() <==> repr(od)' - if not self: - return '%s()' % (self.__class__.__name__,) - return '%s(%r)' % (self.__class__.__name__, self.items()) - - def copy(self): - 'od.copy() -> a shallow copy of od' - return self.__class__(self) - - @classmethod - def fromkeys(cls, iterable, value=None): - '''OD.fromkeys(S[, v]) -> New ordered dictionary with keys from S - and values equal to v (which defaults to None). - - ''' - d = cls() - for key in iterable: - d[key] = value - return d - - def __eq__(self, other): - '''od.__eq__(y) <==> od==y. Comparison to another OD is order-sensitive - while comparison to a regular mapping is order-insensitive. - - ''' - if isinstance(other, OrderedDict): - return len(self)==len(other) and \ - all(_imap(_eq, self.iteritems(), other.iteritems())) - return dict.__eq__(self, other) - ################################################################################ ### namedtuple @@ -328,16 +357,16 @@ or multiset. 
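(Aside, not part of the diff: a minimal sketch of the sentinel-based circular doubly linked list that the rewritten OrderedDict above maintains through self.__root and self.__map; the helper names are illustrative only.)

    PREV, NEXT, KEY = 0, 1, 2
    root = []                       # sentinel node, never deleted
    root[:] = [root, root, None]    # empty list: the root links to itself
    links = {}                      # key -> [prev, next, key], like self.__map

    def link_append(key):
        last = root[PREV]
        links[key] = last[NEXT] = root[PREV] = [last, root, key]

    def link_remove(key):
        link_prev, link_next, _ = links.pop(key)
        link_prev[NEXT] = link_next
        link_next[PREV] = link_prev

    for k in 'abc':
        link_append(k)
    link_remove('b')
    # following root[NEXT] until the root comes around again yields 'a', 'c' in order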
Elements are stored as dictionary keys and their counts are stored as dictionary values. - >>> c = Counter('abracadabra') # count elements from a string + >>> c = Counter('abcdeabcdabcaba') # count elements from a string >>> c.most_common(3) # three most common elements - [('a', 5), ('r', 2), ('b', 2)] + [('a', 5), ('b', 4), ('c', 3)] >>> sorted(c) # list all unique elements - ['a', 'b', 'c', 'd', 'r'] + ['a', 'b', 'c', 'd', 'e'] >>> ''.join(sorted(c.elements())) # list elements with repetitions - 'aaaaabbcdrr' + 'aaaaabbbbcccdde' >>> sum(c.values()) # total of all counts - 11 + 15 >>> c['a'] # count of letter 'a' 5 @@ -345,8 +374,8 @@ ... c[elem] += 1 # by adding 1 to each element's count >>> c['a'] # now there are seven 'a' 7 - >>> del c['r'] # remove all 'r' - >>> c['r'] # now there are zero 'r' + >>> del c['b'] # remove all 'b' + >>> c['b'] # now there are zero 'b' 0 >>> d = Counter('simsalabim') # make another counter @@ -385,6 +414,7 @@ >>> c = Counter(a=4, b=2) # a new counter from keyword args ''' + super(Counter, self).__init__() self.update(iterable, **kwds) def __missing__(self, key): @@ -396,8 +426,8 @@ '''List the n most common elements and their counts from the most common to the least. If n is None, then list all element counts. - >>> Counter('abracadabra').most_common(3) - [('a', 5), ('r', 2), ('b', 2)] + >>> Counter('abcdeabcdabcaba').most_common(3) + [('a', 5), ('b', 4), ('c', 3)] ''' # Emulate Bag.sortedByCount from Smalltalk @@ -463,7 +493,7 @@ for elem, count in iterable.iteritems(): self[elem] = self_get(elem, 0) + count else: - dict.update(self, iterable) # fast path when counter is empty + super(Counter, self).update(iterable) # fast path when counter is empty else: self_get = self.get for elem in iterable: @@ -499,13 +529,16 @@ self.subtract(kwds) def copy(self): - 'Like dict.copy() but returns a Counter instance instead of a dict.' - return Counter(self) + 'Return a shallow copy.' + return self.__class__(self) + + def __reduce__(self): + return self.__class__, (dict(self),) def __delitem__(self, elem): 'Like dict.__delitem__() but does not raise KeyError for missing values.' 
if elem in self: - dict.__delitem__(self, elem) + super(Counter, self).__delitem__(elem) def __repr__(self): if not self: @@ -532,10 +565,13 @@ if not isinstance(other, Counter): return NotImplemented result = Counter() - for elem in set(self) | set(other): - newcount = self[elem] + other[elem] + for elem, count in self.items(): + newcount = count + other[elem] if newcount > 0: result[elem] = newcount + for elem, count in other.items(): + if elem not in self and count > 0: + result[elem] = count return result def __sub__(self, other): @@ -548,10 +584,13 @@ if not isinstance(other, Counter): return NotImplemented result = Counter() - for elem in set(self) | set(other): - newcount = self[elem] - other[elem] + for elem, count in self.items(): + newcount = count - other[elem] if newcount > 0: result[elem] = newcount + for elem, count in other.items(): + if elem not in self and count < 0: + result[elem] = 0 - count return result def __or__(self, other): @@ -564,11 +603,14 @@ if not isinstance(other, Counter): return NotImplemented result = Counter() - for elem in set(self) | set(other): - p, q = self[elem], other[elem] - newcount = q if p < q else p + for elem, count in self.items(): + other_count = other[elem] + newcount = other_count if count < other_count else count if newcount > 0: result[elem] = newcount + for elem, count in other.items(): + if elem not in self and count > 0: + result[elem] = count return result def __and__(self, other): @@ -581,11 +623,9 @@ if not isinstance(other, Counter): return NotImplemented result = Counter() - if len(self) < len(other): - self, other = other, self - for elem in _ifilter(self.__contains__, other): - p, q = self[elem], other[elem] - newcount = p if p < q else q + for elem, count in self.items(): + other_count = other[elem] + newcount = count if count < other_count else other_count if newcount > 0: result[elem] = newcount return result diff --git a/lib-python/2.7/compileall.py b/lib-python/2.7/compileall.py --- a/lib-python/2.7/compileall.py +++ b/lib-python/2.7/compileall.py @@ -9,7 +9,6 @@ packages -- for now, you'll have to deal with packages separately.) See module py_compile for details of the actual byte-compilation. - """ import os import sys @@ -31,7 +30,6 @@ directory name that will show up in error messages) force: if 1, force compilation, even if timestamps are up-to-date quiet: if 1, be quiet during compilation - """ if not quiet: print 'Listing', dir, '...' @@ -61,15 +59,16 @@ return success def compile_file(fullname, ddir=None, force=0, rx=None, quiet=0): - """Byte-compile file. - file: the file to byte-compile + """Byte-compile one file. + + Arguments (only fullname is required): + + fullname: the file to byte-compile ddir: if given, purported directory name (this is the directory name that will show up in error messages) force: if 1, force compilation, even if timestamps are up-to-date quiet: if 1, be quiet during compilation - """ - success = 1 name = os.path.basename(fullname) if ddir is not None: @@ -120,7 +119,6 @@ maxlevels: max recursion level (default 0) force: as for compile_dir() (default 0) quiet: as for compile_dir() (default 0) - """ success = 1 for dir in sys.path: diff --git a/lib-python/2.7/csv.py b/lib-python/2.7/csv.py --- a/lib-python/2.7/csv.py +++ b/lib-python/2.7/csv.py @@ -281,7 +281,7 @@ an all or nothing approach, so we allow for small variations in this number. 1) build a table of the frequency of each character on every line. 
- 2) build a table of freqencies of this frequency (meta-frequency?), + 2) build a table of frequencies of this frequency (meta-frequency?), e.g. 'x occurred 5 times in 10 rows, 6 times in 1000 rows, 7 times in 2 rows' 3) use the mode of the meta-frequency to determine the /expected/ diff --git a/lib-python/2.7/ctypes/test/test_arrays.py b/lib-python/2.7/ctypes/test/test_arrays.py --- a/lib-python/2.7/ctypes/test/test_arrays.py +++ b/lib-python/2.7/ctypes/test/test_arrays.py @@ -37,7 +37,7 @@ values = [ia[i] for i in range(len(init))] self.assertEqual(values, [0] * len(init)) - # Too many in itializers should be caught + # Too many initializers should be caught self.assertRaises(IndexError, int_array, *range(alen*2)) CharArray = ARRAY(c_char, 3) diff --git a/lib-python/2.7/ctypes/test/test_as_parameter.py b/lib-python/2.7/ctypes/test/test_as_parameter.py --- a/lib-python/2.7/ctypes/test/test_as_parameter.py +++ b/lib-python/2.7/ctypes/test/test_as_parameter.py @@ -187,6 +187,18 @@ self.assertEqual((s8i.a, s8i.b, s8i.c, s8i.d, s8i.e, s8i.f, s8i.g, s8i.h), (9*2, 8*3, 7*4, 6*5, 5*6, 4*7, 3*8, 2*9)) + def test_recursive_as_param(self): + from ctypes import c_int + + class A(object): + pass + + a = A() + a._as_parameter_ = a + with self.assertRaises(RuntimeError): + c_int.from_param(a) + + #~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ class AsParamWrapper(object): diff --git a/lib-python/2.7/ctypes/test/test_callbacks.py b/lib-python/2.7/ctypes/test/test_callbacks.py --- a/lib-python/2.7/ctypes/test/test_callbacks.py +++ b/lib-python/2.7/ctypes/test/test_callbacks.py @@ -206,6 +206,42 @@ windll.user32.EnumWindows(EnumWindowsCallbackFunc, 0) + def test_callback_register_int(self): + # Issue #8275: buggy handling of callback args under Win64 + # NOTE: should be run on release builds as well + dll = CDLL(_ctypes_test.__file__) + CALLBACK = CFUNCTYPE(c_int, c_int, c_int, c_int, c_int, c_int) + # All this function does is call the callback with its args squared + func = dll._testfunc_cbk_reg_int + func.argtypes = (c_int, c_int, c_int, c_int, c_int, CALLBACK) + func.restype = c_int + + def callback(a, b, c, d, e): + return a + b + c + d + e + + result = func(2, 3, 4, 5, 6, CALLBACK(callback)) + self.assertEqual(result, callback(2*2, 3*3, 4*4, 5*5, 6*6)) + + def test_callback_register_double(self): + # Issue #8275: buggy handling of callback args under Win64 + # NOTE: should be run on release builds as well + dll = CDLL(_ctypes_test.__file__) + CALLBACK = CFUNCTYPE(c_double, c_double, c_double, c_double, + c_double, c_double) + # All this function does is call the callback with its args squared + func = dll._testfunc_cbk_reg_double + func.argtypes = (c_double, c_double, c_double, + c_double, c_double, CALLBACK) + func.restype = c_double + + def callback(a, b, c, d, e): + return a + b + c + d + e + + result = func(1.1, 2.2, 3.3, 4.4, 5.5, CALLBACK(callback)) + self.assertEqual(result, + callback(1.1*1.1, 2.2*2.2, 3.3*3.3, 4.4*4.4, 5.5*5.5)) + + ################################################################ if __name__ == '__main__': diff --git a/lib-python/2.7/ctypes/test/test_functions.py b/lib-python/2.7/ctypes/test/test_functions.py --- a/lib-python/2.7/ctypes/test/test_functions.py +++ b/lib-python/2.7/ctypes/test/test_functions.py @@ -116,7 +116,7 @@ self.assertEqual(result, 21) self.assertEqual(type(result), int) - # You cannot assing character format codes as restype any longer + # You cannot assign character format codes as restype any longer self.assertRaises(TypeError, setattr, f, 
"restype", "i") def test_floatresult(self): diff --git a/lib-python/2.7/ctypes/test/test_init.py b/lib-python/2.7/ctypes/test/test_init.py --- a/lib-python/2.7/ctypes/test/test_init.py +++ b/lib-python/2.7/ctypes/test/test_init.py @@ -27,7 +27,7 @@ self.assertEqual((y.x.a, y.x.b), (0, 0)) self.assertEqual(y.x.new_was_called, False) - # But explicitely creating an X structure calls __new__ and __init__, of course. + # But explicitly creating an X structure calls __new__ and __init__, of course. x = X() self.assertEqual((x.a, x.b), (9, 12)) self.assertEqual(x.new_was_called, True) diff --git a/lib-python/2.7/ctypes/test/test_numbers.py b/lib-python/2.7/ctypes/test/test_numbers.py --- a/lib-python/2.7/ctypes/test/test_numbers.py +++ b/lib-python/2.7/ctypes/test/test_numbers.py @@ -157,7 +157,7 @@ def test_int_from_address(self): from array import array for t in signed_types + unsigned_types: - # the array module doesn't suppport all format codes + # the array module doesn't support all format codes # (no 'q' or 'Q') try: array(t._type_) diff --git a/lib-python/2.7/ctypes/test/test_win32.py b/lib-python/2.7/ctypes/test/test_win32.py --- a/lib-python/2.7/ctypes/test/test_win32.py +++ b/lib-python/2.7/ctypes/test/test_win32.py @@ -17,7 +17,7 @@ # ValueError: Procedure probably called with not enough arguments (4 bytes missing) self.assertRaises(ValueError, IsWindow) - # This one should succeeed... + # This one should succeed... self.assertEqual(0, IsWindow(0)) # ValueError: Procedure probably called with too many arguments (8 bytes in excess) diff --git a/lib-python/2.7/curses/wrapper.py b/lib-python/2.7/curses/wrapper.py --- a/lib-python/2.7/curses/wrapper.py +++ b/lib-python/2.7/curses/wrapper.py @@ -43,7 +43,8 @@ return func(stdscr, *args, **kwds) finally: # Set everything back to normal - stdscr.keypad(0) - curses.echo() - curses.nocbreak() - curses.endwin() + if 'stdscr' in locals(): + stdscr.keypad(0) + curses.echo() + curses.nocbreak() + curses.endwin() diff --git a/lib-python/2.7/decimal.py b/lib-python/2.7/decimal.py --- a/lib-python/2.7/decimal.py +++ b/lib-python/2.7/decimal.py @@ -1068,14 +1068,16 @@ if ans: return ans - if not self: - # -Decimal('0') is Decimal('0'), not Decimal('-0') + if context is None: + context = getcontext() + + if not self and context.rounding != ROUND_FLOOR: + # -Decimal('0') is Decimal('0'), not Decimal('-0'), except + # in ROUND_FLOOR rounding mode. ans = self.copy_abs() else: ans = self.copy_negate() - if context is None: - context = getcontext() return ans._fix(context) def __pos__(self, context=None): @@ -1088,14 +1090,15 @@ if ans: return ans - if not self: - # + (-0) = 0 + if context is None: + context = getcontext() + + if not self and context.rounding != ROUND_FLOOR: + # + (-0) = 0, except in ROUND_FLOOR rounding mode. 
ans = self.copy_abs() else: ans = Decimal(self) - if context is None: - context = getcontext() return ans._fix(context) def __abs__(self, round=True, context=None): @@ -1680,7 +1683,7 @@ self = _dec_from_triple(self._sign, '1', exp_min-1) digits = 0 rounding_method = self._pick_rounding_function[context.rounding] - changed = getattr(self, rounding_method)(digits) + changed = rounding_method(self, digits) coeff = self._int[:digits] or '0' if changed > 0: coeff = str(int(coeff)+1) @@ -1720,8 +1723,6 @@ # here self was representable to begin with; return unchanged return Decimal(self) - _pick_rounding_function = {} - # for each of the rounding functions below: # self is a finite, nonzero Decimal # prec is an integer satisfying 0 <= prec < len(self._int) @@ -1788,6 +1789,17 @@ else: return -self._round_down(prec) + _pick_rounding_function = dict( + ROUND_DOWN = _round_down, + ROUND_UP = _round_up, + ROUND_HALF_UP = _round_half_up, + ROUND_HALF_DOWN = _round_half_down, + ROUND_HALF_EVEN = _round_half_even, + ROUND_CEILING = _round_ceiling, + ROUND_FLOOR = _round_floor, + ROUND_05UP = _round_05up, + ) + def fma(self, other, third, context=None): """Fused multiply-add. @@ -2492,8 +2504,8 @@ if digits < 0: self = _dec_from_triple(self._sign, '1', exp-1) digits = 0 - this_function = getattr(self, self._pick_rounding_function[rounding]) - changed = this_function(digits) + this_function = self._pick_rounding_function[rounding] + changed = this_function(self, digits) coeff = self._int[:digits] or '0' if changed == 1: coeff = str(int(coeff)+1) @@ -3705,18 +3717,6 @@ ##### Context class ####################################################### - -# get rounding method function: -rounding_functions = [name for name in Decimal.__dict__.keys() - if name.startswith('_round_')] -for name in rounding_functions: - # name is like _round_half_even, goes to the global ROUND_HALF_EVEN value. - globalname = name[1:].upper() - val = globals()[globalname] - Decimal._pick_rounding_function[val] = name - -del name, val, globalname, rounding_functions - class _ContextManager(object): """Context manager class to support localcontext(). @@ -5990,7 +5990,7 @@ def _format_align(sign, body, spec): """Given an unpadded, non-aligned numeric string 'body' and sign - string 'sign', add padding and aligment conforming to the given + string 'sign', add padding and alignment conforming to the given format specifier dictionary 'spec' (as produced by parse_format_specifier). 
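The decimal.py hunks above make two visible changes: rounding dispatch now goes
through a plain dict mapping the ROUND_* constants straight to the _round_*
functions (invoked as rounding_method(self, digits)), and unary plus/minus of a
zero now honour ROUND_FLOOR. A minimal sketch of the second change (illustrative
only, not part of the patch; assumes Python 2.7 with this decimal module):

    from decimal import Decimal, Context, ROUND_FLOOR, ROUND_HALF_EVEN

    floor = Context(rounding=ROUND_FLOOR)
    even = Context(rounding=ROUND_HALF_EVEN)

    # Unary plus normally drops the sign of a negative zero ...
    print even.plus(Decimal('-0'))     # 0
    # ... but ROUND_FLOOR preserves it,
    print floor.plus(Decimal('-0'))    # -0
    # and negating a positive zero yields a negative zero.
    print floor.minus(Decimal('0'))    # -0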
diff --git a/lib-python/2.7/difflib.py b/lib-python/2.7/difflib.py --- a/lib-python/2.7/difflib.py +++ b/lib-python/2.7/difflib.py @@ -1140,6 +1140,21 @@ return ch in ws +######################################################################## +### Unified Diff +######################################################################## + +def _format_range_unified(start, stop): + 'Convert range to the "ed" format' + # Per the diff spec at http://www.unix.org/single_unix_specification/ + beginning = start + 1 # lines start numbering with one + length = stop - start + if length == 1: + return '{}'.format(beginning) + if not length: + beginning -= 1 # empty ranges begin at line just before the range + return '{},{}'.format(beginning, length) + def unified_diff(a, b, fromfile='', tofile='', fromfiledate='', tofiledate='', n=3, lineterm='\n'): r""" @@ -1184,25 +1199,45 @@ started = False for group in SequenceMatcher(None,a,b).get_grouped_opcodes(n): if not started: - fromdate = '\t%s' % fromfiledate if fromfiledate else '' - todate = '\t%s' % tofiledate if tofiledate else '' - yield '--- %s%s%s' % (fromfile, fromdate, lineterm) - yield '+++ %s%s%s' % (tofile, todate, lineterm) started = True - i1, i2, j1, j2 = group[0][1], group[-1][2], group[0][3], group[-1][4] - yield "@@ -%d,%d +%d,%d @@%s" % (i1+1, i2-i1, j1+1, j2-j1, lineterm) + fromdate = '\t{}'.format(fromfiledate) if fromfiledate else '' + todate = '\t{}'.format(tofiledate) if tofiledate else '' + yield '--- {}{}{}'.format(fromfile, fromdate, lineterm) + yield '+++ {}{}{}'.format(tofile, todate, lineterm) + + first, last = group[0], group[-1] + file1_range = _format_range_unified(first[1], last[2]) + file2_range = _format_range_unified(first[3], last[4]) + yield '@@ -{} +{} @@{}'.format(file1_range, file2_range, lineterm) + for tag, i1, i2, j1, j2 in group: if tag == 'equal': for line in a[i1:i2]: yield ' ' + line continue - if tag == 'replace' or tag == 'delete': + if tag in ('replace', 'delete'): for line in a[i1:i2]: yield '-' + line - if tag == 'replace' or tag == 'insert': + if tag in ('replace', 'insert'): for line in b[j1:j2]: yield '+' + line + +######################################################################## +### Context Diff +######################################################################## + +def _format_range_context(start, stop): + 'Convert range to the "ed" format' + # Per the diff spec at http://www.unix.org/single_unix_specification/ + beginning = start + 1 # lines start numbering with one + length = stop - start + if not length: + beginning -= 1 # empty ranges begin at line just before the range + if length <= 1: + return '{}'.format(beginning) + return '{},{}'.format(beginning, beginning + length - 1) + # See http://www.unix.org/single_unix_specification/ def context_diff(a, b, fromfile='', tofile='', fromfiledate='', tofiledate='', n=3, lineterm='\n'): @@ -1247,38 +1282,36 @@ four """ + prefix = dict(insert='+ ', delete='- ', replace='! ', equal=' ') started = False - prefixmap = {'insert':'+ ', 'delete':'- ', 'replace':'! 
', 'equal':' '} for group in SequenceMatcher(None,a,b).get_grouped_opcodes(n): if not started: - fromdate = '\t%s' % fromfiledate if fromfiledate else '' - todate = '\t%s' % tofiledate if tofiledate else '' - yield '*** %s%s%s' % (fromfile, fromdate, lineterm) - yield '--- %s%s%s' % (tofile, todate, lineterm) started = True + fromdate = '\t{}'.format(fromfiledate) if fromfiledate else '' + todate = '\t{}'.format(tofiledate) if tofiledate else '' + yield '*** {}{}{}'.format(fromfile, fromdate, lineterm) + yield '--- {}{}{}'.format(tofile, todate, lineterm) - yield '***************%s' % (lineterm,) - if group[-1][2] - group[0][1] >= 2: - yield '*** %d,%d ****%s' % (group[0][1]+1, group[-1][2], lineterm) - else: - yield '*** %d ****%s' % (group[-1][2], lineterm) - visiblechanges = [e for e in group if e[0] in ('replace', 'delete')] - if visiblechanges: + first, last = group[0], group[-1] + yield '***************' + lineterm + + file1_range = _format_range_context(first[1], last[2]) + yield '*** {} ****{}'.format(file1_range, lineterm) + + if any(tag in ('replace', 'delete') for tag, _, _, _, _ in group): for tag, i1, i2, _, _ in group: if tag != 'insert': for line in a[i1:i2]: - yield prefixmap[tag] + line + yield prefix[tag] + line - if group[-1][4] - group[0][3] >= 2: - yield '--- %d,%d ----%s' % (group[0][3]+1, group[-1][4], lineterm) - else: - yield '--- %d ----%s' % (group[-1][4], lineterm) - visiblechanges = [e for e in group if e[0] in ('replace', 'insert')] - if visiblechanges: + file2_range = _format_range_context(first[3], last[4]) + yield '--- {} ----{}'.format(file2_range, lineterm) + + if any(tag in ('replace', 'insert') for tag, _, _, _, _ in group): for tag, _, _, j1, j2 in group: if tag != 'delete': for line in b[j1:j2]: - yield prefixmap[tag] + line + yield prefix[tag] + line def ndiff(a, b, linejunk=None, charjunk=IS_CHARACTER_JUNK): r""" @@ -1714,7 +1747,7 @@ line = line.replace(' ','\0') # expand tabs into spaces line = line.expandtabs(self._tabsize) - # relace spaces from expanded tabs back into tab characters + # replace spaces from expanded tabs back into tab characters # (we'll replace them with markup after we do differencing) line = line.replace(' ','\t') return line.replace('\0',' ').rstrip('\n') diff --git a/lib-python/2.7/distutils/__init__.py b/lib-python/2.7/distutils/__init__.py --- a/lib-python/2.7/distutils/__init__.py +++ b/lib-python/2.7/distutils/__init__.py @@ -15,5 +15,5 @@ # Updated automatically by the Python release process. # #--start constants-- -__version__ = "2.7.1" +__version__ = "2.7.2" #--end constants-- diff --git a/lib-python/2.7/distutils/archive_util.py b/lib-python/2.7/distutils/archive_util.py --- a/lib-python/2.7/distutils/archive_util.py +++ b/lib-python/2.7/distutils/archive_util.py @@ -121,7 +121,7 @@ def make_zipfile(base_name, base_dir, verbose=0, dry_run=0): """Create a zip file from all the files under 'base_dir'. - The output zip file will be named 'base_dir' + ".zip". Uses either the + The output zip file will be named 'base_name' + ".zip". Uses either the "zipfile" Python module (if available) or the InfoZIP "zip" utility (if installed and found on the default search path). If neither tool is available, raises DistutilsExecError. 
Returns the name of the output zip diff --git a/lib-python/2.7/distutils/cmd.py b/lib-python/2.7/distutils/cmd.py --- a/lib-python/2.7/distutils/cmd.py +++ b/lib-python/2.7/distutils/cmd.py @@ -377,7 +377,7 @@ dry_run=self.dry_run) def move_file (self, src, dst, level=1): - """Move a file respectin dry-run flag.""" + """Move a file respecting dry-run flag.""" return file_util.move_file(src, dst, dry_run = self.dry_run) def spawn (self, cmd, search_path=1, level=1): diff --git a/lib-python/2.7/distutils/command/build_ext.py b/lib-python/2.7/distutils/command/build_ext.py --- a/lib-python/2.7/distutils/command/build_ext.py +++ b/lib-python/2.7/distutils/command/build_ext.py @@ -207,7 +207,7 @@ elif MSVC_VERSION == 8: self.library_dirs.append(os.path.join(sys.exec_prefix, - 'PC', 'VS8.0', 'win32release')) + 'PC', 'VS8.0')) elif MSVC_VERSION == 7: self.library_dirs.append(os.path.join(sys.exec_prefix, 'PC', 'VS7.1')) diff --git a/lib-python/2.7/distutils/command/sdist.py b/lib-python/2.7/distutils/command/sdist.py --- a/lib-python/2.7/distutils/command/sdist.py +++ b/lib-python/2.7/distutils/command/sdist.py @@ -306,17 +306,20 @@ rstrip_ws=1, collapse_join=1) - while 1: - line = template.readline() - if line is None: # end of file - break + try: + while 1: + line = template.readline() + if line is None: # end of file + break - try: - self.filelist.process_template_line(line) - except DistutilsTemplateError, msg: - self.warn("%s, line %d: %s" % (template.filename, - template.current_line, - msg)) + try: + self.filelist.process_template_line(line) + except DistutilsTemplateError, msg: + self.warn("%s, line %d: %s" % (template.filename, + template.current_line, + msg)) + finally: + template.close() def prune_file_list(self): """Prune off branches that might slip into the file list as created diff --git a/lib-python/2.7/distutils/command/upload.py b/lib-python/2.7/distutils/command/upload.py --- a/lib-python/2.7/distutils/command/upload.py +++ b/lib-python/2.7/distutils/command/upload.py @@ -176,6 +176,9 @@ result = urlopen(request) status = result.getcode() reason = result.msg + if self.show_response: + msg = '\n'.join(('-' * 75, r.read(), '-' * 75)) + self.announce(msg, log.INFO) except socket.error, e: self.announce(str(e), log.ERROR) return @@ -189,6 +192,3 @@ else: self.announce('Upload failed (%s): %s' % (status, reason), log.ERROR) - if self.show_response: - msg = '\n'.join(('-' * 75, r.read(), '-' * 75)) - self.announce(msg, log.INFO) diff --git a/lib-python/2.7/distutils/sysconfig.py b/lib-python/2.7/distutils/sysconfig.py --- a/lib-python/2.7/distutils/sysconfig.py +++ b/lib-python/2.7/distutils/sysconfig.py @@ -389,7 +389,7 @@ cur_target = os.getenv('MACOSX_DEPLOYMENT_TARGET', '') if cur_target == '': cur_target = cfg_target - os.putenv('MACOSX_DEPLOYMENT_TARGET', cfg_target) + os.environ['MACOSX_DEPLOYMENT_TARGET'] = cfg_target elif map(int, cfg_target.split('.')) > map(int, cur_target.split('.')): my_msg = ('$MACOSX_DEPLOYMENT_TARGET mismatch: now "%s" but "%s" during configure' % (cur_target, cfg_target)) diff --git a/lib-python/2.7/distutils/tests/__init__.py b/lib-python/2.7/distutils/tests/__init__.py --- a/lib-python/2.7/distutils/tests/__init__.py +++ b/lib-python/2.7/distutils/tests/__init__.py @@ -15,9 +15,10 @@ import os import sys import unittest +from test.test_support import run_unittest -here = os.path.dirname(__file__) +here = os.path.dirname(__file__) or os.curdir def test_suite(): @@ -32,4 +33,4 @@ if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + 
run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_archive_util.py b/lib-python/2.7/distutils/tests/test_archive_util.py --- a/lib-python/2.7/distutils/tests/test_archive_util.py +++ b/lib-python/2.7/distutils/tests/test_archive_util.py @@ -12,7 +12,7 @@ ARCHIVE_FORMATS) from distutils.spawn import find_executable, spawn from distutils.tests import support -from test.test_support import check_warnings +from test.test_support import check_warnings, run_unittest try: import grp @@ -281,4 +281,4 @@ return unittest.makeSuite(ArchiveUtilTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_bdist_msi.py b/lib-python/2.7/distutils/tests/test_bdist_msi.py --- a/lib-python/2.7/distutils/tests/test_bdist_msi.py +++ b/lib-python/2.7/distutils/tests/test_bdist_msi.py @@ -11,7 +11,7 @@ support.LoggingSilencer, unittest.TestCase): - def test_minial(self): + def test_minimal(self): # minimal test XXX need more tests from distutils.command.bdist_msi import bdist_msi pkg_pth, dist = self.create_dist() diff --git a/lib-python/2.7/distutils/tests/test_build.py b/lib-python/2.7/distutils/tests/test_build.py --- a/lib-python/2.7/distutils/tests/test_build.py +++ b/lib-python/2.7/distutils/tests/test_build.py @@ -2,6 +2,7 @@ import unittest import os import sys +from test.test_support import run_unittest from distutils.command.build import build from distutils.tests import support @@ -51,4 +52,4 @@ return unittest.makeSuite(BuildTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_build_clib.py b/lib-python/2.7/distutils/tests/test_build_clib.py --- a/lib-python/2.7/distutils/tests/test_build_clib.py +++ b/lib-python/2.7/distutils/tests/test_build_clib.py @@ -3,6 +3,8 @@ import os import sys +from test.test_support import run_unittest + from distutils.command.build_clib import build_clib from distutils.errors import DistutilsSetupError from distutils.tests import support @@ -140,4 +142,4 @@ return unittest.makeSuite(BuildCLibTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_build_ext.py b/lib-python/2.7/distutils/tests/test_build_ext.py --- a/lib-python/2.7/distutils/tests/test_build_ext.py +++ b/lib-python/2.7/distutils/tests/test_build_ext.py @@ -3,12 +3,13 @@ import tempfile import shutil from StringIO import StringIO +import textwrap from distutils.core import Extension, Distribution from distutils.command.build_ext import build_ext from distutils import sysconfig from distutils.tests import support -from distutils.errors import DistutilsSetupError +from distutils.errors import DistutilsSetupError, CompileError import unittest from test import test_support @@ -430,6 +431,59 @@ wanted = os.path.join(cmd.build_lib, 'UpdateManager', 'fdsend' + ext) self.assertEqual(ext_path, wanted) + @unittest.skipUnless(sys.platform == 'darwin', 'test only relevant for MacOSX') + def test_deployment_target(self): + self._try_compile_deployment_target() + + orig_environ = os.environ + os.environ = orig_environ.copy() + self.addCleanup(setattr, os, 'environ', orig_environ) + + os.environ['MACOSX_DEPLOYMENT_TARGET']='10.1' + self._try_compile_deployment_target() + + + def _try_compile_deployment_target(self): + deptarget_c = os.path.join(self.tmp_dir, 'deptargetmodule.c') + + with 
open(deptarget_c, 'w') as fp: + fp.write(textwrap.dedent('''\ + #include + + int dummy; + + #if TARGET != MAC_OS_X_VERSION_MIN_REQUIRED + #error "Unexpected target" + #endif + + ''')) + + target = sysconfig.get_config_var('MACOSX_DEPLOYMENT_TARGET') + target = tuple(map(int, target.split('.'))) + target = '%02d%01d0' % target + + deptarget_ext = Extension( + 'deptarget', + [deptarget_c], + extra_compile_args=['-DTARGET=%s'%(target,)], + ) + dist = Distribution({ + 'name': 'deptarget', + 'ext_modules': [deptarget_ext] + }) + dist.package_dir = self.tmp_dir + cmd = build_ext(dist) + cmd.build_lib = self.tmp_dir + cmd.build_temp = self.tmp_dir + + try: + old_stdout = sys.stdout + cmd.ensure_finalized() + cmd.run() + + except CompileError: + self.fail("Wrong deployment target during compilation") + def test_suite(): return unittest.makeSuite(BuildExtTestCase) diff --git a/lib-python/2.7/distutils/tests/test_build_py.py b/lib-python/2.7/distutils/tests/test_build_py.py --- a/lib-python/2.7/distutils/tests/test_build_py.py +++ b/lib-python/2.7/distutils/tests/test_build_py.py @@ -10,13 +10,14 @@ from distutils.errors import DistutilsFileError from distutils.tests import support +from test.test_support import run_unittest class BuildPyTestCase(support.TempdirManager, support.LoggingSilencer, unittest.TestCase): - def _setup_package_data(self): + def test_package_data(self): sources = self.mkdtemp() f = open(os.path.join(sources, "__init__.py"), "w") try: @@ -56,20 +57,15 @@ self.assertEqual(len(cmd.get_outputs()), 3) pkgdest = os.path.join(destination, "pkg") files = os.listdir(pkgdest) - return files + self.assertIn("__init__.py", files) + self.assertIn("README.txt", files) + # XXX even with -O, distutils writes pyc, not pyo; bug? + if sys.dont_write_bytecode: + self.assertNotIn("__init__.pyc", files) + else: + self.assertIn("__init__.pyc", files) - def test_package_data(self): - files = self._setup_package_data() - self.assertTrue("__init__.py" in files) - self.assertTrue("README.txt" in files) - - @unittest.skipIf(sys.flags.optimize >= 2, - "pyc files are not written with -O2 and above") - def test_package_data_pyc(self): - files = self._setup_package_data() - self.assertTrue("__init__.pyc" in files) - - def test_empty_package_dir (self): + def test_empty_package_dir(self): # See SF 1668596/1720897. 
cwd = os.getcwd() @@ -117,10 +113,10 @@ finally: sys.dont_write_bytecode = old_dont_write_bytecode - self.assertTrue('byte-compiling is disabled' in self.logs[0][1]) + self.assertIn('byte-compiling is disabled', self.logs[0][1]) def test_suite(): return unittest.makeSuite(BuildPyTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_build_scripts.py b/lib-python/2.7/distutils/tests/test_build_scripts.py --- a/lib-python/2.7/distutils/tests/test_build_scripts.py +++ b/lib-python/2.7/distutils/tests/test_build_scripts.py @@ -8,6 +8,7 @@ import sysconfig from distutils.tests import support +from test.test_support import run_unittest class BuildScriptsTestCase(support.TempdirManager, @@ -108,4 +109,4 @@ return unittest.makeSuite(BuildScriptsTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_check.py b/lib-python/2.7/distutils/tests/test_check.py --- a/lib-python/2.7/distutils/tests/test_check.py +++ b/lib-python/2.7/distutils/tests/test_check.py @@ -1,5 +1,6 @@ """Tests for distutils.command.check.""" import unittest +from test.test_support import run_unittest from distutils.command.check import check, HAS_DOCUTILS from distutils.tests import support @@ -95,4 +96,4 @@ return unittest.makeSuite(CheckTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_clean.py b/lib-python/2.7/distutils/tests/test_clean.py --- a/lib-python/2.7/distutils/tests/test_clean.py +++ b/lib-python/2.7/distutils/tests/test_clean.py @@ -6,6 +6,7 @@ from distutils.command.clean import clean from distutils.tests import support +from test.test_support import run_unittest class cleanTestCase(support.TempdirManager, support.LoggingSilencer, @@ -38,7 +39,7 @@ self.assertTrue(not os.path.exists(path), '%s was not removed' % path) - # let's run the command again (should spit warnings but suceed) + # let's run the command again (should spit warnings but succeed) cmd.all = 1 cmd.ensure_finalized() cmd.run() @@ -47,4 +48,4 @@ return unittest.makeSuite(cleanTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_cmd.py b/lib-python/2.7/distutils/tests/test_cmd.py --- a/lib-python/2.7/distutils/tests/test_cmd.py +++ b/lib-python/2.7/distutils/tests/test_cmd.py @@ -99,7 +99,7 @@ def test_ensure_dirname(self): cmd = self.cmd - cmd.option1 = os.path.dirname(__file__) + cmd.option1 = os.path.dirname(__file__) or os.curdir cmd.ensure_dirname('option1') cmd.option2 = 'xxx' self.assertRaises(DistutilsOptionError, cmd.ensure_dirname, 'option2') diff --git a/lib-python/2.7/distutils/tests/test_config.py b/lib-python/2.7/distutils/tests/test_config.py --- a/lib-python/2.7/distutils/tests/test_config.py +++ b/lib-python/2.7/distutils/tests/test_config.py @@ -11,6 +11,7 @@ from distutils.log import WARN from distutils.tests import support +from test.test_support import run_unittest PYPIRC = """\ [distutils] @@ -119,4 +120,4 @@ return unittest.makeSuite(PyPIRCCommandTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_config_cmd.py b/lib-python/2.7/distutils/tests/test_config_cmd.py --- a/lib-python/2.7/distutils/tests/test_config_cmd.py 
+++ b/lib-python/2.7/distutils/tests/test_config_cmd.py @@ -2,6 +2,7 @@ import unittest import os import sys +from test.test_support import run_unittest from distutils.command.config import dump_file, config from distutils.tests import support @@ -86,4 +87,4 @@ return unittest.makeSuite(ConfigTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_core.py b/lib-python/2.7/distutils/tests/test_core.py --- a/lib-python/2.7/distutils/tests/test_core.py +++ b/lib-python/2.7/distutils/tests/test_core.py @@ -6,7 +6,7 @@ import shutil import sys import test.test_support -from test.test_support import captured_stdout +from test.test_support import captured_stdout, run_unittest import unittest from distutils.tests import support @@ -105,4 +105,4 @@ return unittest.makeSuite(CoreTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_dep_util.py b/lib-python/2.7/distutils/tests/test_dep_util.py --- a/lib-python/2.7/distutils/tests/test_dep_util.py +++ b/lib-python/2.7/distutils/tests/test_dep_util.py @@ -6,6 +6,7 @@ from distutils.dep_util import newer, newer_pairwise, newer_group from distutils.errors import DistutilsFileError from distutils.tests import support +from test.test_support import run_unittest class DepUtilTestCase(support.TempdirManager, unittest.TestCase): @@ -77,4 +78,4 @@ return unittest.makeSuite(DepUtilTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_dir_util.py b/lib-python/2.7/distutils/tests/test_dir_util.py --- a/lib-python/2.7/distutils/tests/test_dir_util.py +++ b/lib-python/2.7/distutils/tests/test_dir_util.py @@ -10,6 +10,7 @@ from distutils import log from distutils.tests import support +from test.test_support import run_unittest class DirUtilTestCase(support.TempdirManager, unittest.TestCase): @@ -112,4 +113,4 @@ return unittest.makeSuite(DirUtilTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_dist.py b/lib-python/2.7/distutils/tests/test_dist.py --- a/lib-python/2.7/distutils/tests/test_dist.py +++ b/lib-python/2.7/distutils/tests/test_dist.py @@ -11,7 +11,7 @@ from distutils.dist import Distribution, fix_help_options, DistributionMetadata from distutils.cmd import Command import distutils.dist -from test.test_support import TESTFN, captured_stdout +from test.test_support import TESTFN, captured_stdout, run_unittest from distutils.tests import support class test_dist(Command): @@ -433,4 +433,4 @@ return suite if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_file_util.py b/lib-python/2.7/distutils/tests/test_file_util.py --- a/lib-python/2.7/distutils/tests/test_file_util.py +++ b/lib-python/2.7/distutils/tests/test_file_util.py @@ -6,6 +6,7 @@ from distutils.file_util import move_file, write_file, copy_file from distutils import log from distutils.tests import support +from test.test_support import run_unittest class FileUtilTestCase(support.TempdirManager, unittest.TestCase): @@ -77,4 +78,4 @@ return unittest.makeSuite(FileUtilTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git 
a/lib-python/2.7/distutils/tests/test_filelist.py b/lib-python/2.7/distutils/tests/test_filelist.py --- a/lib-python/2.7/distutils/tests/test_filelist.py +++ b/lib-python/2.7/distutils/tests/test_filelist.py @@ -1,7 +1,7 @@ """Tests for distutils.filelist.""" from os.path import join import unittest -from test.test_support import captured_stdout +from test.test_support import captured_stdout, run_unittest from distutils.filelist import glob_to_re, FileList from distutils import debug @@ -82,4 +82,4 @@ return unittest.makeSuite(FileListTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_install.py b/lib-python/2.7/distutils/tests/test_install.py --- a/lib-python/2.7/distutils/tests/test_install.py +++ b/lib-python/2.7/distutils/tests/test_install.py @@ -3,6 +3,8 @@ import os import unittest +from test.test_support import run_unittest + from distutils.command.install import install from distutils.core import Distribution @@ -52,4 +54,4 @@ return unittest.makeSuite(InstallTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_install_data.py b/lib-python/2.7/distutils/tests/test_install_data.py --- a/lib-python/2.7/distutils/tests/test_install_data.py +++ b/lib-python/2.7/distutils/tests/test_install_data.py @@ -6,6 +6,7 @@ from distutils.command.install_data import install_data from distutils.tests import support +from test.test_support import run_unittest class InstallDataTestCase(support.TempdirManager, support.LoggingSilencer, @@ -73,4 +74,4 @@ return unittest.makeSuite(InstallDataTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_install_headers.py b/lib-python/2.7/distutils/tests/test_install_headers.py --- a/lib-python/2.7/distutils/tests/test_install_headers.py +++ b/lib-python/2.7/distutils/tests/test_install_headers.py @@ -6,6 +6,7 @@ from distutils.command.install_headers import install_headers from distutils.tests import support +from test.test_support import run_unittest class InstallHeadersTestCase(support.TempdirManager, support.LoggingSilencer, @@ -37,4 +38,4 @@ return unittest.makeSuite(InstallHeadersTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_install_lib.py b/lib-python/2.7/distutils/tests/test_install_lib.py --- a/lib-python/2.7/distutils/tests/test_install_lib.py +++ b/lib-python/2.7/distutils/tests/test_install_lib.py @@ -7,6 +7,7 @@ from distutils.extension import Extension from distutils.tests import support from distutils.errors import DistutilsOptionError +from test.test_support import run_unittest class InstallLibTestCase(support.TempdirManager, support.LoggingSilencer, @@ -103,4 +104,4 @@ return unittest.makeSuite(InstallLibTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_install_scripts.py b/lib-python/2.7/distutils/tests/test_install_scripts.py --- a/lib-python/2.7/distutils/tests/test_install_scripts.py +++ b/lib-python/2.7/distutils/tests/test_install_scripts.py @@ -7,6 +7,7 @@ from distutils.core import Distribution from distutils.tests import support +from test.test_support import run_unittest class 
InstallScriptsTestCase(support.TempdirManager, @@ -78,4 +79,4 @@ return unittest.makeSuite(InstallScriptsTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_msvc9compiler.py b/lib-python/2.7/distutils/tests/test_msvc9compiler.py --- a/lib-python/2.7/distutils/tests/test_msvc9compiler.py +++ b/lib-python/2.7/distutils/tests/test_msvc9compiler.py @@ -5,6 +5,7 @@ from distutils.errors import DistutilsPlatformError from distutils.tests import support +from test.test_support import run_unittest _MANIFEST = """\ @@ -137,4 +138,4 @@ return unittest.makeSuite(msvc9compilerTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_register.py b/lib-python/2.7/distutils/tests/test_register.py --- a/lib-python/2.7/distutils/tests/test_register.py +++ b/lib-python/2.7/distutils/tests/test_register.py @@ -7,7 +7,7 @@ import urllib2 import warnings -from test.test_support import check_warnings +from test.test_support import check_warnings, run_unittest from distutils.command import register as register_module from distutils.command.register import register @@ -138,7 +138,7 @@ # let's see what the server received : we should # have 2 similar requests - self.assertTrue(self.conn.reqs, 2) + self.assertEqual(len(self.conn.reqs), 2) req1 = dict(self.conn.reqs[0].headers) req2 = dict(self.conn.reqs[1].headers) self.assertEqual(req2['Content-length'], req1['Content-length']) @@ -168,7 +168,7 @@ del register_module.raw_input # we should have send a request - self.assertTrue(self.conn.reqs, 1) + self.assertEqual(len(self.conn.reqs), 1) req = self.conn.reqs[0] headers = dict(req.headers) self.assertEqual(headers['Content-length'], '608') @@ -186,7 +186,7 @@ del register_module.raw_input # we should have send a request - self.assertTrue(self.conn.reqs, 1) + self.assertEqual(len(self.conn.reqs), 1) req = self.conn.reqs[0] headers = dict(req.headers) self.assertEqual(headers['Content-length'], '290') @@ -258,4 +258,4 @@ return unittest.makeSuite(RegisterTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_sdist.py b/lib-python/2.7/distutils/tests/test_sdist.py --- a/lib-python/2.7/distutils/tests/test_sdist.py +++ b/lib-python/2.7/distutils/tests/test_sdist.py @@ -24,11 +24,9 @@ import tempfile import warnings -from test.test_support import check_warnings -from test.test_support import captured_stdout +from test.test_support import captured_stdout, check_warnings, run_unittest -from distutils.command.sdist import sdist -from distutils.command.sdist import show_formats +from distutils.command.sdist import sdist, show_formats from distutils.core import Distribution from distutils.tests.test_config import PyPIRCCommandTestCase from distutils.errors import DistutilsExecError, DistutilsOptionError @@ -372,7 +370,7 @@ # adding a file self.write_file((self.tmp_dir, 'somecode', 'doc2.txt'), '#') - # make sure build_py is reinitinialized, like a fresh run + # make sure build_py is reinitialized, like a fresh run build_py = dist.get_command_obj('build_py') build_py.finalized = False build_py.ensure_finalized() @@ -390,6 +388,7 @@ self.assertEqual(len(manifest2), 6) self.assertIn('doc2.txt', manifest2[-1]) + @unittest.skipUnless(zlib, "requires zlib") def test_manifest_marker(self): # check that autogenerated MANIFESTs have a 
marker dist, cmd = self.get_cmd() @@ -406,6 +405,7 @@ self.assertEqual(manifest[0], '# file GENERATED by distutils, do NOT edit') + @unittest.skipUnless(zlib, "requires zlib") def test_manual_manifest(self): # check that a MANIFEST without a marker is left alone dist, cmd = self.get_cmd() @@ -426,4 +426,4 @@ return unittest.makeSuite(SDistTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_spawn.py b/lib-python/2.7/distutils/tests/test_spawn.py --- a/lib-python/2.7/distutils/tests/test_spawn.py +++ b/lib-python/2.7/distutils/tests/test_spawn.py @@ -2,7 +2,7 @@ import unittest import os import time -from test.test_support import captured_stdout +from test.test_support import captured_stdout, run_unittest from distutils.spawn import _nt_quote_args from distutils.spawn import spawn, find_executable @@ -57,4 +57,4 @@ return unittest.makeSuite(SpawnTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_text_file.py b/lib-python/2.7/distutils/tests/test_text_file.py --- a/lib-python/2.7/distutils/tests/test_text_file.py +++ b/lib-python/2.7/distutils/tests/test_text_file.py @@ -3,6 +3,7 @@ import unittest from distutils.text_file import TextFile from distutils.tests import support +from test.test_support import run_unittest TEST_DATA = """# test file @@ -103,4 +104,4 @@ return unittest.makeSuite(TextFileTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_unixccompiler.py b/lib-python/2.7/distutils/tests/test_unixccompiler.py --- a/lib-python/2.7/distutils/tests/test_unixccompiler.py +++ b/lib-python/2.7/distutils/tests/test_unixccompiler.py @@ -1,6 +1,7 @@ """Tests for distutils.unixccompiler.""" import sys import unittest +from test.test_support import run_unittest from distutils import sysconfig from distutils.unixccompiler import UnixCCompiler @@ -126,4 +127,4 @@ return unittest.makeSuite(UnixCCompilerTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_upload.py b/lib-python/2.7/distutils/tests/test_upload.py --- a/lib-python/2.7/distutils/tests/test_upload.py +++ b/lib-python/2.7/distutils/tests/test_upload.py @@ -1,14 +1,13 @@ +# -*- encoding: utf8 -*- """Tests for distutils.command.upload.""" -# -*- encoding: utf8 -*- -import sys import os import unittest +from test.test_support import run_unittest from distutils.command import upload as upload_mod from distutils.command.upload import upload from distutils.core import Distribution -from distutils.tests import support from distutils.tests.test_config import PYPIRC, PyPIRCCommandTestCase PYPIRC_LONG_PASSWORD = """\ @@ -129,4 +128,4 @@ return unittest.makeSuite(uploadTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_util.py b/lib-python/2.7/distutils/tests/test_util.py --- a/lib-python/2.7/distutils/tests/test_util.py +++ b/lib-python/2.7/distutils/tests/test_util.py @@ -1,6 +1,7 @@ """Tests for distutils.util.""" import sys import unittest +from test.test_support import run_unittest from distutils.errors import DistutilsPlatformError, DistutilsByteCompileError from distutils.util import byte_compile @@ -21,4 +22,4 @@ return 
unittest.makeSuite(UtilTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_version.py b/lib-python/2.7/distutils/tests/test_version.py --- a/lib-python/2.7/distutils/tests/test_version.py +++ b/lib-python/2.7/distutils/tests/test_version.py @@ -2,6 +2,7 @@ import unittest from distutils.version import LooseVersion from distutils.version import StrictVersion +from test.test_support import run_unittest class VersionTestCase(unittest.TestCase): @@ -67,4 +68,4 @@ return unittest.makeSuite(VersionTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_versionpredicate.py b/lib-python/2.7/distutils/tests/test_versionpredicate.py --- a/lib-python/2.7/distutils/tests/test_versionpredicate.py +++ b/lib-python/2.7/distutils/tests/test_versionpredicate.py @@ -4,6 +4,10 @@ import distutils.versionpredicate import doctest +from test.test_support import run_unittest def test_suite(): return doctest.DocTestSuite(distutils.versionpredicate) + +if __name__ == '__main__': + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/util.py b/lib-python/2.7/distutils/util.py --- a/lib-python/2.7/distutils/util.py +++ b/lib-python/2.7/distutils/util.py @@ -97,9 +97,7 @@ from distutils.sysconfig import get_config_vars cfgvars = get_config_vars() - macver = os.environ.get('MACOSX_DEPLOYMENT_TARGET') - if not macver: - macver = cfgvars.get('MACOSX_DEPLOYMENT_TARGET') + macver = cfgvars.get('MACOSX_DEPLOYMENT_TARGET') if 1: # Always calculate the release of the running machine, diff --git a/lib-python/2.7/doctest.py b/lib-python/2.7/doctest.py --- a/lib-python/2.7/doctest.py +++ b/lib-python/2.7/doctest.py @@ -1217,7 +1217,7 @@ # Process each example. for examplenum, example in enumerate(test.examples): - # If REPORT_ONLY_FIRST_FAILURE is set, then supress + # If REPORT_ONLY_FIRST_FAILURE is set, then suppress # reporting after the first failure. quiet = (self.optionflags & REPORT_ONLY_FIRST_FAILURE and failures > 0) @@ -2186,7 +2186,7 @@ caller can catch the errors and initiate post-mortem debugging. The DocTestCase provides a debug method that raises - UnexpectedException errors if there is an unexepcted + UnexpectedException errors if there is an unexpected exception: >>> test = DocTestParser().get_doctest('>>> raise KeyError\n42', diff --git a/lib-python/2.7/email/charset.py b/lib-python/2.7/email/charset.py --- a/lib-python/2.7/email/charset.py +++ b/lib-python/2.7/email/charset.py @@ -209,7 +209,7 @@ input_charset = unicode(input_charset, 'ascii') except UnicodeError: raise errors.CharsetError(input_charset) - input_charset = input_charset.lower() + input_charset = input_charset.lower().encode('ascii') # Set the input charset after filtering through the aliases and/or codecs if not (input_charset in ALIASES or input_charset in CHARSETS): try: diff --git a/lib-python/2.7/email/generator.py b/lib-python/2.7/email/generator.py --- a/lib-python/2.7/email/generator.py +++ b/lib-python/2.7/email/generator.py @@ -202,18 +202,13 @@ g = self.clone(s) g.flatten(part, unixfrom=False) msgtexts.append(s.getvalue()) - # Now make sure the boundary we've selected doesn't appear in any of - # the message texts. - alltext = NL.join(msgtexts) # BAW: What about boundaries that are wrapped in double-quotes? 
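        # (With this change the "join all the sub-part texts and invent a
        #  non-colliding boundary" work below is only done when the message has
        #  no boundary yet; an existing boundary is simply kept.)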
- boundary = msg.get_boundary(failobj=_make_boundary(alltext)) - # If we had to calculate a new boundary because the body text - # contained that string, set the new boundary. We don't do it - # unconditionally because, while set_boundary() preserves order, it - # doesn't preserve newlines/continuations in headers. This is no big - # deal in practice, but turns out to be inconvenient for the unittest - # suite. - if msg.get_boundary() != boundary: + boundary = msg.get_boundary() + if not boundary: + # Create a boundary that doesn't appear in any of the + # message texts. + alltext = NL.join(msgtexts) + boundary = _make_boundary(alltext) msg.set_boundary(boundary) # If there's a preamble, write it out, with a trailing CRLF if msg.preamble is not None: @@ -292,7 +287,7 @@ _FMT = '[Non-text (%(type)s) part of message omitted, filename %(filename)s]' class DecodedGenerator(Generator): - """Generator a text representation of a message. + """Generates a text representation of a message. Like the Generator base class, except that non-text parts are substituted with a format string representing the part. diff --git a/lib-python/2.7/email/header.py b/lib-python/2.7/email/header.py --- a/lib-python/2.7/email/header.py +++ b/lib-python/2.7/email/header.py @@ -47,6 +47,10 @@ # For use with .match() fcre = re.compile(r'[\041-\176]+:$') +# Find a header embedded in a putative header value. Used to check for +# header injection attack. +_embeded_header = re.compile(r'\n[^ \t]+:') + # Helpers @@ -403,7 +407,11 @@ newchunks += self._split(s, charset, targetlen, splitchars) lastchunk, lastcharset = newchunks[-1] lastlen = lastcharset.encoded_header_len(lastchunk) - return self._encode_chunks(newchunks, maxlinelen) + value = self._encode_chunks(newchunks, maxlinelen) + if _embeded_header.search(value): + raise HeaderParseError("header value appears to contain " + "an embedded header: {!r}".format(value)) + return value diff --git a/lib-python/2.7/email/message.py b/lib-python/2.7/email/message.py --- a/lib-python/2.7/email/message.py +++ b/lib-python/2.7/email/message.py @@ -38,7 +38,9 @@ def _formatparam(param, value=None, quote=True): """Convenience function to format and return a key=value pair. - This will quote the value if needed or if quote is true. + This will quote the value if needed or if quote is true. If value is a + three tuple (charset, language, value), it will be encoded according + to RFC2231 rules. """ if value is not None and len(value) > 0: # A tuple is used for RFC 2231 encoded parameter values where items @@ -97,7 +99,7 @@ objects, otherwise it is a string. Message objects implement part of the `mapping' interface, which assumes - there is exactly one occurrance of the header per message. Some headers + there is exactly one occurrence of the header per message. Some headers do in fact appear multiple times (e.g. Received) and for those headers, you must use the explicit API to set or get all the headers. Not all of the mapping methods are implemented. @@ -286,7 +288,7 @@ Return None if the header is missing instead of raising an exception. Note that if the header appeared multiple times, exactly which - occurrance gets returned is undefined. Use get_all() to get all + occurrence gets returned is undefined. Use get_all() to get all the values matching a header field name. """ return self.get(name) @@ -389,7 +391,10 @@ name is the header field to add. keyword arguments can be used to set additional parameters for the header field, with underscores converted to dashes. 
Normally the parameter will be added as key="value" unless - value is None, in which case only the key will be added. + value is None, in which case only the key will be added. If a + parameter value contains non-ASCII characters it must be specified as a + three-tuple of (charset, language, value), in which case it will be + encoded according to RFC2231 rules. Example: diff --git a/lib-python/2.7/email/mime/application.py b/lib-python/2.7/email/mime/application.py --- a/lib-python/2.7/email/mime/application.py +++ b/lib-python/2.7/email/mime/application.py @@ -17,7 +17,7 @@ _encoder=encoders.encode_base64, **_params): """Create an application/* type MIME document. - _data is a string containing the raw applicatoin data. + _data is a string containing the raw application data. _subtype is the MIME content type subtype, defaulting to 'octet-stream'. diff --git a/lib-python/2.7/email/test/data/msg_26.txt b/lib-python/2.7/email/test/data/msg_26.txt --- a/lib-python/2.7/email/test/data/msg_26.txt +++ b/lib-python/2.7/email/test/data/msg_26.txt @@ -42,4 +42,4 @@ MzMAAAAACH97tzAAAAALu3c3gAAAAAAL+7tzDABAu7f7cAAAAAAACA+3MA7EQAv/sIAA AAAAAAAIAAAAAAAAAIAAAAAA ---1618492860--2051301190--113853680-- +--1618492860--2051301190--113853680-- \ No newline at end of file diff --git a/lib-python/2.7/email/test/test_email.py b/lib-python/2.7/email/test/test_email.py --- a/lib-python/2.7/email/test/test_email.py +++ b/lib-python/2.7/email/test/test_email.py @@ -179,6 +179,17 @@ self.assertRaises(Errors.HeaderParseError, msg.set_boundary, 'BOUNDARY') + def test_make_boundary(self): + msg = MIMEMultipart('form-data') + # Note that when the boundary gets created is an implementation + # detail and might change. + self.assertEqual(msg.items()[0][1], 'multipart/form-data') + # Trigger creation of boundary + msg.as_string() + self.assertEqual(msg.items()[0][1][:33], + 'multipart/form-data; boundary="==') + # XXX: there ought to be tests of the uniqueness of the boundary, too. + def test_message_rfc822_only(self): # Issue 7970: message/rfc822 not in multipart parsed by # HeaderParser caused an exception when flattened. @@ -542,6 +553,17 @@ msg.set_charset(u'us-ascii') self.assertEqual('us-ascii', msg.get_content_charset()) + # Issue 5871: reject an attempt to embed a header inside a header value + # (header injection attack). 
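    # (The enforcement itself lives in email/header.py above: Header.encode()
    #  now raises HeaderParseError when the encoded value matches the new
    #  _embeded_header pattern, i.e. contains a newline followed by what looks
    #  like another header name and a colon.)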
+ def test_embeded_header_via_Header_rejected(self): + msg = Message() + msg['Dummy'] = Header('dummy\nX-Injected-Header: test') + self.assertRaises(Errors.HeaderParseError, msg.as_string) + + def test_embeded_header_via_string_rejected(self): + msg = Message() + msg['Dummy'] = 'dummy\nX-Injected-Header: test' + self.assertRaises(Errors.HeaderParseError, msg.as_string) # Test the email.Encoders module @@ -3113,6 +3135,28 @@ s = 'Subject: =?EUC-KR?B?CSixpLDtKSC/7Liuvsax4iC6uLmwMcijIKHaILzSwd/H0SC8+LCjwLsgv7W/+Mj3I ?=' raises(Errors.HeaderParseError, decode_header, s) + # Issue 1078919 + def test_ascii_add_header(self): + msg = Message() + msg.add_header('Content-Disposition', 'attachment', + filename='bud.gif') + self.assertEqual('attachment; filename="bud.gif"', + msg['Content-Disposition']) + + def test_nonascii_add_header_via_triple(self): + msg = Message() + msg.add_header('Content-Disposition', 'attachment', + filename=('iso-8859-1', '', 'Fu\xdfballer.ppt')) + self.assertEqual( + 'attachment; filename*="iso-8859-1\'\'Fu%DFballer.ppt"', + msg['Content-Disposition']) + + def test_encode_unaliased_charset(self): + # Issue 1379416: when the charset has no output conversion, + # output was accidentally getting coerced to unicode. + res = Header('abc','iso-8859-2').encode() + self.assertEqual(res, '=?iso-8859-2?q?abc?=') + self.assertIsInstance(res, str) # Test RFC 2231 header parameters (en/de)coding diff --git a/lib-python/2.7/ftplib.py b/lib-python/2.7/ftplib.py --- a/lib-python/2.7/ftplib.py +++ b/lib-python/2.7/ftplib.py @@ -599,7 +599,7 @@ Usage example: >>> from ftplib import FTP_TLS >>> ftps = FTP_TLS('ftp.python.org') - >>> ftps.login() # login anonimously previously securing control channel + >>> ftps.login() # login anonymously previously securing control channel '230 Guest login ok, access restrictions apply.' 
>>> ftps.prot_p() # switch to secure data connection '200 Protection level set to P' diff --git a/lib-python/2.7/functools.py b/lib-python/2.7/functools.py --- a/lib-python/2.7/functools.py +++ b/lib-python/2.7/functools.py @@ -53,17 +53,17 @@ def total_ordering(cls): """Class decorator that fills in missing ordering methods""" convert = { - '__lt__': [('__gt__', lambda self, other: other < self), - ('__le__', lambda self, other: not other < self), + '__lt__': [('__gt__', lambda self, other: not (self < other or self == other)), + ('__le__', lambda self, other: self < other or self == other), ('__ge__', lambda self, other: not self < other)], - '__le__': [('__ge__', lambda self, other: other <= self), - ('__lt__', lambda self, other: not other <= self), + '__le__': [('__ge__', lambda self, other: not self <= other or self == other), + ('__lt__', lambda self, other: self <= other and not self == other), ('__gt__', lambda self, other: not self <= other)], - '__gt__': [('__lt__', lambda self, other: other > self), - ('__ge__', lambda self, other: not other > self), + '__gt__': [('__lt__', lambda self, other: not (self > other or self == other)), + ('__ge__', lambda self, other: self > other or self == other), ('__le__', lambda self, other: not self > other)], - '__ge__': [('__le__', lambda self, other: other >= self), - ('__gt__', lambda self, other: not other >= self), + '__ge__': [('__le__', lambda self, other: (not self >= other) or self == other), + ('__gt__', lambda self, other: self >= other and not self == other), ('__lt__', lambda self, other: not self >= other)] } roots = set(dir(cls)) & set(convert) @@ -80,6 +80,7 @@ def cmp_to_key(mycmp): """Convert a cmp= function into a key= function""" class K(object): + __slots__ = ['obj'] def __init__(self, obj, *args): self.obj = obj def __lt__(self, other): diff --git a/lib-python/2.7/getpass.py b/lib-python/2.7/getpass.py --- a/lib-python/2.7/getpass.py +++ b/lib-python/2.7/getpass.py @@ -62,7 +62,7 @@ try: old = termios.tcgetattr(fd) # a copy to save new = old[:] - new[3] &= ~(termios.ECHO|termios.ISIG) # 3 == 'lflags' + new[3] &= ~termios.ECHO # 3 == 'lflags' tcsetattr_flags = termios.TCSAFLUSH if hasattr(termios, 'TCSASOFT'): tcsetattr_flags |= termios.TCSASOFT diff --git a/lib-python/2.7/gettext.py b/lib-python/2.7/gettext.py --- a/lib-python/2.7/gettext.py +++ b/lib-python/2.7/gettext.py @@ -316,7 +316,7 @@ # Note: we unconditionally convert both msgids and msgstrs to # Unicode using the character encoding specified in the charset # parameter of the Content-Type header. The gettext documentation - # strongly encourages msgids to be us-ascii, but some appliations + # strongly encourages msgids to be us-ascii, but some applications # require alternative encodings (e.g. Zope's ZCML and ZPT). 
For # traditional gettext applications, the msgid conversion will # cause no problems since us-ascii should always be a subset of diff --git a/lib-python/2.7/hashlib.py b/lib-python/2.7/hashlib.py --- a/lib-python/2.7/hashlib.py +++ b/lib-python/2.7/hashlib.py @@ -64,26 +64,29 @@ def __get_builtin_constructor(name): - if name in ('SHA1', 'sha1'): - import _sha - return _sha.new - elif name in ('MD5', 'md5'): - import _md5 - return _md5.new - elif name in ('SHA256', 'sha256', 'SHA224', 'sha224'): - import _sha256 - bs = name[3:] - if bs == '256': - return _sha256.sha256 - elif bs == '224': - return _sha256.sha224 - elif name in ('SHA512', 'sha512', 'SHA384', 'sha384'): - import _sha512 - bs = name[3:] - if bs == '512': - return _sha512.sha512 - elif bs == '384': - return _sha512.sha384 + try: + if name in ('SHA1', 'sha1'): + import _sha + return _sha.new + elif name in ('MD5', 'md5'): + import _md5 + return _md5.new + elif name in ('SHA256', 'sha256', 'SHA224', 'sha224'): + import _sha256 + bs = name[3:] + if bs == '256': + return _sha256.sha256 + elif bs == '224': + return _sha256.sha224 + elif name in ('SHA512', 'sha512', 'SHA384', 'sha384'): + import _sha512 + bs = name[3:] + if bs == '512': + return _sha512.sha512 + elif bs == '384': + return _sha512.sha384 + except ImportError: + pass # no extension module, this hash is unsupported. raise ValueError('unsupported hash type %s' % name) diff --git a/lib-python/2.7/heapq.py b/lib-python/2.7/heapq.py --- a/lib-python/2.7/heapq.py +++ b/lib-python/2.7/heapq.py @@ -133,6 +133,11 @@ from operator import itemgetter import bisect +def cmp_lt(x, y): + # Use __lt__ if available; otherwise, try __le__. + # In Py3.x, only __lt__ will be called. + return (x < y) if hasattr(x, '__lt__') else (not y <= x) + def heappush(heap, item): """Push item onto heap, maintaining the heap invariant.""" heap.append(item) @@ -167,13 +172,13 @@ def heappushpop(heap, item): """Fast version of a heappush followed by a heappop.""" - if heap and heap[0] < item: + if heap and cmp_lt(heap[0], item): item, heap[0] = heap[0], item _siftup(heap, 0) return item def heapify(x): - """Transform list into a heap, in-place, in O(len(heap)) time.""" + """Transform list into a heap, in-place, in O(len(x)) time.""" n = len(x) # Transform bottom-up. The largest index there's any point to looking at # is the largest with a child index in-range, so must have 2*i + 1 < n, @@ -215,11 +220,10 @@ pop = result.pop los = result[-1] # los --> Largest of the nsmallest for elem in it: - if los <= elem: - continue - insort(result, elem) - pop() - los = result[-1] + if cmp_lt(elem, los): + insort(result, elem) + pop() + los = result[-1] return result # An alternative approach manifests the whole iterable in memory but # saves comparisons by heapifying all at once. Also, saves time @@ -240,7 +244,7 @@ while pos > startpos: parentpos = (pos - 1) >> 1 parent = heap[parentpos] - if newitem < parent: + if cmp_lt(newitem, parent): heap[pos] = parent pos = parentpos continue @@ -295,7 +299,7 @@ while childpos < endpos: # Set childpos to index of smaller child. rightpos = childpos + 1 - if rightpos < endpos and not heap[childpos] < heap[rightpos]: + if rightpos < endpos and not cmp_lt(heap[childpos], heap[rightpos]): childpos = rightpos # Move the smaller child up. 
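        # (cmp_lt(), added at the top of this module, is what the comparisons
        #  above go through: it uses x.__lt__ when the object has one and
        #  otherwise falls back to "not y <= x", since Py3.x will only ever
        #  call __lt__.)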
heap[pos] = heap[childpos] @@ -364,7 +368,7 @@ return [min(chain(head, it))] return [min(chain(head, it), key=key)] - # When n>=size, it's faster to use sort() + # When n>=size, it's faster to use sorted() try: size = len(iterable) except (TypeError, AttributeError): @@ -402,7 +406,7 @@ return [max(chain(head, it))] return [max(chain(head, it), key=key)] - # When n>=size, it's faster to use sort() + # When n>=size, it's faster to use sorted() try: size = len(iterable) except (TypeError, AttributeError): diff --git a/lib-python/2.7/httplib.py b/lib-python/2.7/httplib.py --- a/lib-python/2.7/httplib.py +++ b/lib-python/2.7/httplib.py @@ -212,6 +212,9 @@ # maximal amount of data to read at one time in _safe_read MAXAMOUNT = 1048576 +# maximal line length when calling readline(). +_MAXLINE = 65536 + class HTTPMessage(mimetools.Message): def addheader(self, key, value): @@ -274,7 +277,9 @@ except IOError: startofline = tell = None self.seekable = 0 - line = self.fp.readline() + line = self.fp.readline(_MAXLINE + 1) + if len(line) > _MAXLINE: + raise LineTooLong("header line") if not line: self.status = 'EOF in headers' break @@ -404,7 +409,10 @@ break # skip the header from the 100 response while True: - skip = self.fp.readline().strip() + skip = self.fp.readline(_MAXLINE + 1) + if len(skip) > _MAXLINE: + raise LineTooLong("header line") + skip = skip.strip() if not skip: break if self.debuglevel > 0: @@ -563,7 +571,9 @@ value = [] while True: if chunk_left is None: - line = self.fp.readline() + line = self.fp.readline(_MAXLINE + 1) + if len(line) > _MAXLINE: + raise LineTooLong("chunk size") i = line.find(';') if i >= 0: line = line[:i] # strip chunk-extensions @@ -598,7 +608,9 @@ # read and discard trailer up to the CRLF terminator ### note: we shouldn't have any trailers! while True: - line = self.fp.readline() + line = self.fp.readline(_MAXLINE + 1) + if len(line) > _MAXLINE: + raise LineTooLong("trailer line") if not line: # a vanishingly small number of sites EOF without # sending the trailer @@ -730,7 +742,9 @@ raise socket.error("Tunnel connection failed: %d %s" % (code, message.strip())) while True: - line = response.fp.readline() + line = response.fp.readline(_MAXLINE + 1) + if len(line) > _MAXLINE: + raise LineTooLong("header line") if line == '\r\n': break @@ -790,7 +804,7 @@ del self._buffer[:] # If msg and message_body are sent in a single send() call, # it will avoid performance problems caused by the interaction - # between delayed ack and the Nagle algorithim. + # between delayed ack and the Nagle algorithm. 
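Stepping back to the hashlib hunk further up: wrapping the builtin-constructor imports in try/except ImportError means that a hash whose extension module is missing (possible on stripped-down or alternative builds) is reported as ValueError('unsupported hash type ...') rather than leaking an ImportError. A sketch of code relying on that behaviour, not part of the patch:

    import hashlib

    def digest_or_none(name, data):
        # With the patched __get_builtin_constructor, an unavailable or
        # unknown algorithm surfaces as ValueError, not ImportError.
        try:
            h = hashlib.new(name)
        except ValueError:
            return None
        h.update(data)
        return h.hexdigest()

    print(digest_or_none('sha256', 'abc'))
    print(digest_or_none('no-such-hash', 'abc'))    # None
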
if isinstance(message_body, str): msg += message_body message_body = None @@ -1233,6 +1247,11 @@ self.args = line, self.line = line +class LineTooLong(HTTPException): + def __init__(self, line_type): + HTTPException.__init__(self, "got more than %d bytes when reading %s" + % (_MAXLINE, line_type)) + # for backwards compatibility error = HTTPException diff --git a/lib-python/2.7/idlelib/Bindings.py b/lib-python/2.7/idlelib/Bindings.py --- a/lib-python/2.7/idlelib/Bindings.py +++ b/lib-python/2.7/idlelib/Bindings.py @@ -98,14 +98,6 @@ # menu del menudefs[-1][1][0:2] - menudefs.insert(0, - ('application', [ - ('About IDLE', '<>'), - None, - ('_Preferences....', '<>'), - ])) - - default_keydefs = idleConf.GetCurrentKeySet() del sys diff --git a/lib-python/2.7/idlelib/EditorWindow.py b/lib-python/2.7/idlelib/EditorWindow.py --- a/lib-python/2.7/idlelib/EditorWindow.py +++ b/lib-python/2.7/idlelib/EditorWindow.py @@ -48,6 +48,21 @@ path = module.__path__ except AttributeError: raise ImportError, 'No source for module ' + module.__name__ + if descr[2] != imp.PY_SOURCE: + # If all of the above fails and didn't raise an exception,fallback + # to a straight import which can find __init__.py in a package. + m = __import__(fullname) + try: + filename = m.__file__ + except AttributeError: + pass + else: + file = None + base, ext = os.path.splitext(filename) + if ext == '.pyc': + ext = '.py' + filename = base + ext + descr = filename, None, imp.PY_SOURCE return file, filename, descr class EditorWindow(object): @@ -102,8 +117,8 @@ self.top = top = WindowList.ListedToplevel(root, menu=self.menubar) if flist: self.tkinter_vars = flist.vars - #self.top.instance_dict makes flist.inversedict avalable to - #configDialog.py so it can access all EditorWindow instaces + #self.top.instance_dict makes flist.inversedict available to + #configDialog.py so it can access all EditorWindow instances self.top.instance_dict = flist.inversedict else: self.tkinter_vars = {} # keys: Tkinter event names @@ -136,6 +151,14 @@ if macosxSupport.runningAsOSXApp(): # Command-W on editorwindows doesn't work without this. text.bind('<>', self.close_event) + # Some OS X systems have only one mouse button, + # so use control-click for pulldown menus there. + # (Note, AquaTk defines <2> as the right button if + # present and the Tk Text widget already binds <2>.) + text.bind("",self.right_menu_event) + else: + # Elsewhere, use right-click for pulldown menus. + text.bind("<3>",self.right_menu_event) text.bind("<>", self.cut) text.bind("<>", self.copy) text.bind("<>", self.paste) @@ -154,7 +177,6 @@ text.bind("<>", self.find_selection_event) text.bind("<>", self.replace_event) text.bind("<>", self.goto_line_event) - text.bind("<3>", self.right_menu_event) text.bind("<>",self.smart_backspace_event) text.bind("<>",self.newline_and_indent_event) text.bind("<>",self.smart_indent_event) @@ -300,13 +322,13 @@ return "break" def home_callback(self, event): - if (event.state & 12) != 0 and event.keysym == "Home": - # state&1==shift, state&4==control, state&8==alt - return # ; fall back to class binding - + if (event.state & 4) != 0 and event.keysym == "Home": + # state&4==Control. If , use the Tk binding. 
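The httplib hunks above cap each readline() at _MAXLINE bytes and raise the new LineTooLong exception (a subclass of HTTPException) when a server sends an oversized status, header, chunk-size or trailer line. Existing clients that catch HTTPException keep working; a defensive sketch against a hypothetical host:

    import httplib

    conn = httplib.HTTPConnection('www.example.com')    # hypothetical host
    try:
        conn.request('GET', '/')
        resp = conn.getresponse()
        print('%s %s' % (resp.status, resp.reason))
    except httplib.HTTPException as exc:                # includes LineTooLong
        print('malformed response: %s' % exc)
    finally:
        conn.close()
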
+ return if self.text.index("iomark") and \ self.text.compare("iomark", "<=", "insert lineend") and \ self.text.compare("insert linestart", "<=", "iomark"): + # In Shell on input line, go to just after prompt insertpt = int(self.text.index("iomark").split(".")[1]) else: line = self.text.get("insert linestart", "insert lineend") @@ -315,30 +337,27 @@ break else: insertpt=len(line) - lineat = int(self.text.index("insert").split('.')[1]) - if insertpt == lineat: insertpt = 0 - dest = "insert linestart+"+str(insertpt)+"c" - if (event.state&1) == 0: - # shift not pressed + # shift was not pressed self.text.tag_remove("sel", "1.0", "end") else: if not self.text.index("sel.first"): - self.text.mark_set("anchor","insert") - + self.text.mark_set("my_anchor", "insert") # there was no previous selection + else: + if self.text.compare(self.text.index("sel.first"), "<", self.text.index("insert")): + self.text.mark_set("my_anchor", "sel.first") # extend back + else: + self.text.mark_set("my_anchor", "sel.last") # extend forward first = self.text.index(dest) - last = self.text.index("anchor") - + last = self.text.index("my_anchor") if self.text.compare(first,">",last): first,last = last,first - self.text.tag_remove("sel", "1.0", "end") self.text.tag_add("sel", first, last) - self.text.mark_set("insert", dest) self.text.see("insert") return "break" @@ -385,7 +404,7 @@ menudict[name] = menu = Menu(mbar, name=name) mbar.add_cascade(label=label, menu=menu, underline=underline) - if macosxSupport.runningAsOSXApp(): + if macosxSupport.isCarbonAquaTk(self.root): # Insert the application menu menudict['application'] = menu = Menu(mbar, name='apple') mbar.add_cascade(label='IDLE', menu=menu) @@ -445,7 +464,11 @@ def python_docs(self, event=None): if sys.platform[:3] == 'win': - os.startfile(self.help_url) + try: + os.startfile(self.help_url) + except WindowsError as why: + tkMessageBox.showerror(title='Document Start Failure', + message=str(why), parent=self.text) else: webbrowser.open(self.help_url) return "break" @@ -740,9 +763,13 @@ "Create a callback with the helpfile value frozen at definition time" def display_extra_help(helpfile=helpfile): if not helpfile.startswith(('www', 'http')): - url = os.path.normpath(helpfile) + helpfile = os.path.normpath(helpfile) if sys.platform[:3] == 'win': - os.startfile(helpfile) + try: + os.startfile(helpfile) + except WindowsError as why: + tkMessageBox.showerror(title='Document Start Failure', + message=str(why), parent=self.text) else: webbrowser.open(helpfile) return display_extra_help @@ -1526,7 +1553,12 @@ def get_accelerator(keydefs, eventname): keylist = keydefs.get(eventname) - if not keylist: + # issue10940: temporary workaround to prevent hang with OS X Cocoa Tk 8.5 + # if not keylist: + if (not keylist) or (macosxSupport.runningAsOSXApp() and eventname in { + "<>", + "<>", + "<>"}): return "" s = keylist[0] s = re.sub(r"-[a-z]\b", lambda m: m.group().upper(), s) diff --git a/lib-python/2.7/idlelib/FileList.py b/lib-python/2.7/idlelib/FileList.py --- a/lib-python/2.7/idlelib/FileList.py +++ b/lib-python/2.7/idlelib/FileList.py @@ -43,7 +43,7 @@ def new(self, filename=None): return self.EditorWindow(self, filename) - def close_all_callback(self, event): + def close_all_callback(self, *args, **kwds): for edit in self.inversedict.keys(): reply = edit.close() if reply == "cancel": diff --git a/lib-python/2.7/idlelib/FormatParagraph.py b/lib-python/2.7/idlelib/FormatParagraph.py --- a/lib-python/2.7/idlelib/FormatParagraph.py +++ 
b/lib-python/2.7/idlelib/FormatParagraph.py @@ -54,7 +54,7 @@ # If the block ends in a \n, we dont want the comment # prefix inserted after it. (Im not sure it makes sense to # reformat a comment block that isnt made of complete - # lines, but whatever!) Can't think of a clean soltution, + # lines, but whatever!) Can't think of a clean solution, # so we hack away block_suffix = "" if not newdata[-1]: diff --git a/lib-python/2.7/idlelib/HISTORY.txt b/lib-python/2.7/idlelib/HISTORY.txt --- a/lib-python/2.7/idlelib/HISTORY.txt +++ b/lib-python/2.7/idlelib/HISTORY.txt @@ -13,7 +13,7 @@ - New tarball released as a result of the 'revitalisation' of the IDLEfork project. -- This release requires python 2.1 or better. Compatability with earlier +- This release requires python 2.1 or better. Compatibility with earlier versions of python (especially ancient ones like 1.5x) is no longer a priority in IDLEfork development. diff --git a/lib-python/2.7/idlelib/IOBinding.py b/lib-python/2.7/idlelib/IOBinding.py --- a/lib-python/2.7/idlelib/IOBinding.py +++ b/lib-python/2.7/idlelib/IOBinding.py @@ -320,17 +320,20 @@ return "yes" message = "Do you want to save %s before closing?" % ( self.filename or "this untitled document") - m = tkMessageBox.Message( - title="Save On Close", - message=message, - icon=tkMessageBox.QUESTION, - type=tkMessageBox.YESNOCANCEL, - master=self.text) - reply = m.show() - if reply == "yes": + confirm = tkMessageBox.askyesnocancel( + title="Save On Close", + message=message, + default=tkMessageBox.YES, + master=self.text) + if confirm: + reply = "yes" self.save(None) if not self.get_saved(): reply = "cancel" + elif confirm is None: + reply = "cancel" + else: + reply = "no" self.text.focus_set() return reply @@ -339,7 +342,7 @@ self.save_as(event) else: if self.writefile(self.filename): - self.set_saved(1) + self.set_saved(True) try: self.editwin.store_file_breaks() except AttributeError: # may be a PyShell @@ -465,15 +468,12 @@ self.text.insert("end-1c", "\n") def print_window(self, event): - m = tkMessageBox.Message( - title="Print", - message="Print to Default Printer", - icon=tkMessageBox.QUESTION, - type=tkMessageBox.OKCANCEL, - default=tkMessageBox.OK, - master=self.text) - reply = m.show() - if reply != tkMessageBox.OK: + confirm = tkMessageBox.askokcancel( + title="Print", + message="Print to Default Printer", + default=tkMessageBox.OK, + master=self.text) + if not confirm: self.text.focus_set() return "break" tempfilename = None @@ -488,8 +488,8 @@ if not self.writefile(tempfilename): os.unlink(tempfilename) return "break" - platform=os.name - printPlatform=1 + platform = os.name + printPlatform = True if platform == 'posix': #posix platform command = idleConf.GetOption('main','General', 'print-command-posix') @@ -497,7 +497,7 @@ elif platform == 'nt': #win32 platform command = idleConf.GetOption('main','General','print-command-win') else: #no printing for this platform - printPlatform=0 + printPlatform = False if printPlatform: #we can try to print for this platform command = command % filename pipe = os.popen(command, "r") @@ -511,7 +511,7 @@ output = "Printing command: %s\n" % repr(command) + output tkMessageBox.showerror("Print status", output, master=self.text) else: #no printing for this platform - message="Printing is not enabled for this platform: %s" % platform + message = "Printing is not enabled for this platform: %s" % platform tkMessageBox.showinfo("Print status", message, master=self.text) if tempfilename: os.unlink(tempfilename) diff --git 
a/lib-python/2.7/idlelib/NEWS.txt b/lib-python/2.7/idlelib/NEWS.txt --- a/lib-python/2.7/idlelib/NEWS.txt +++ b/lib-python/2.7/idlelib/NEWS.txt @@ -1,3 +1,18 @@ +What's New in IDLE 2.7.2? +======================= + +*Release date: 29-May-2011* + +- Issue #6378: Further adjust idle.bat to start associated Python + +- Issue #11896: Save on Close failed despite selecting "Yes" in dialog. + +- toggle failing on Tk 8.5, causing IDLE exits and strange selection + behavior. Issue 4676. Improve selection extension behaviour. + +- toggle non-functional when NumLock set on Windows. Issue 3851. + + What's New in IDLE 2.7? ======================= @@ -21,7 +36,7 @@ - Tk 8.5 Text widget requires 'wordprocessor' tabstyle attr to handle mixed space/tab properly. Issue 5129, patch by Guilherme Polo. - + - Issue #3549: On MacOS the preferences menu was not present diff --git a/lib-python/2.7/idlelib/PyShell.py b/lib-python/2.7/idlelib/PyShell.py --- a/lib-python/2.7/idlelib/PyShell.py +++ b/lib-python/2.7/idlelib/PyShell.py @@ -1432,6 +1432,13 @@ shell.interp.prepend_syspath(script) shell.interp.execfile(script) + # Check for problematic OS X Tk versions and print a warning message + # in the IDLE shell window; this is less intrusive than always opening + # a separate window. + tkversionwarning = macosxSupport.tkVersionWarning(root) + if tkversionwarning: + shell.interp.runcommand(''.join(("print('", tkversionwarning, "')"))) + root.mainloop() root.destroy() diff --git a/lib-python/2.7/idlelib/ScriptBinding.py b/lib-python/2.7/idlelib/ScriptBinding.py --- a/lib-python/2.7/idlelib/ScriptBinding.py +++ b/lib-python/2.7/idlelib/ScriptBinding.py @@ -26,6 +26,7 @@ from idlelib import PyShell from idlelib.configHandler import idleConf +from idlelib import macosxSupport IDENTCHARS = string.ascii_letters + string.digits + "_" @@ -53,6 +54,9 @@ self.flist = self.editwin.flist self.root = self.editwin.root + if macosxSupport.runningAsOSXApp(): + self.editwin.text_frame.bind('<>', self._run_module_event) + def check_module_event(self, event): filename = self.getfilename() if not filename: @@ -166,6 +170,19 @@ interp.runcode(code) return 'break' + if macosxSupport.runningAsOSXApp(): + # Tk-Cocoa in MacOSX is broken until at least + # Tk 8.5.9, and without this rather + # crude workaround IDLE would hang when a user + # tries to run a module using the keyboard shortcut + # (the menu item works fine). + _run_module_event = run_module_event + + def run_module_event(self, event): + self.editwin.text_frame.after(200, + lambda: self.editwin.text_frame.event_generate('<>')) + return 'break' + def getfilename(self): """Get source filename. If not saved, offer to save (or create) file @@ -184,9 +201,9 @@ if autosave and filename: self.editwin.io.save(None) else: - reply = self.ask_save_dialog() + confirm = self.ask_save_dialog() self.editwin.text.focus_set() - if reply == "ok": + if confirm: self.editwin.io.save(None) filename = self.editwin.io.filename else: @@ -195,13 +212,11 @@ def ask_save_dialog(self): msg = "Source Must Be Saved\n" + 5*' ' + "OK to Save?" - mb = tkMessageBox.Message(title="Save Before Run or Check", - message=msg, - icon=tkMessageBox.QUESTION, - type=tkMessageBox.OKCANCEL, - default=tkMessageBox.OK, - master=self.editwin.text) - return mb.show() + confirm = tkMessageBox.askokcancel(title="Save Before Run or Check", + message=msg, + default=tkMessageBox.OK, + master=self.editwin.text) + return confirm def errorbox(self, title, message): # XXX This should really be a function of EditorWindow... 
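A pattern repeated in the IOBinding and ScriptBinding hunks above is replacing hand-built tkMessageBox.Message(...).show() dialogs with the askyesnocancel/askokcancel helpers, whose return value already encodes the answer: True for Yes/OK, False for No, None for Cancel. A standalone sketch of the same idiom, not IDLE code:

    import Tkinter
    import tkMessageBox

    root = Tkinter.Tk()
    root.withdraw()                         # the demo needs no main window

    confirm = tkMessageBox.askyesnocancel(
        title="Save On Close",
        message="Save this document before closing?",
        default=tkMessageBox.YES,
        master=root)
    if confirm:                             # True  -> Yes
        print("saving...")
    elif confirm is None:                   # None  -> Cancel
        print("close cancelled")
    else:                                   # False -> No
        print("discarding changes")
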
diff --git a/lib-python/2.7/idlelib/config-keys.def b/lib-python/2.7/idlelib/config-keys.def --- a/lib-python/2.7/idlelib/config-keys.def +++ b/lib-python/2.7/idlelib/config-keys.def @@ -176,7 +176,7 @@ redo = close-window = restart-shell = -save-window-as-file = +save-window-as-file = close-all-windows = view-restart = tabify-region = @@ -208,7 +208,7 @@ open-module = find-selection = python-context-help = -save-copy-of-window-as-file = +save-copy-of-window-as-file = open-window-from-file = python-docs = diff --git a/lib-python/2.7/idlelib/extend.txt b/lib-python/2.7/idlelib/extend.txt --- a/lib-python/2.7/idlelib/extend.txt +++ b/lib-python/2.7/idlelib/extend.txt @@ -18,7 +18,7 @@ An IDLE extension class is instantiated with a single argument, `editwin', an EditorWindow instance. The extension cannot assume much -about this argument, but it is guarateed to have the following instance +about this argument, but it is guaranteed to have the following instance variables: text a Text instance (a widget) diff --git a/lib-python/2.7/idlelib/idle.bat b/lib-python/2.7/idlelib/idle.bat --- a/lib-python/2.7/idlelib/idle.bat +++ b/lib-python/2.7/idlelib/idle.bat @@ -1,4 +1,4 @@ @echo off rem Start IDLE using the appropriate Python interpreter set CURRDIR=%~dp0 -start "%CURRDIR%..\..\pythonw.exe" "%CURRDIR%idle.pyw" %1 %2 %3 %4 %5 %6 %7 %8 %9 +start "IDLE" "%CURRDIR%..\..\pythonw.exe" "%CURRDIR%idle.pyw" %1 %2 %3 %4 %5 %6 %7 %8 %9 diff --git a/lib-python/2.7/idlelib/idlever.py b/lib-python/2.7/idlelib/idlever.py --- a/lib-python/2.7/idlelib/idlever.py +++ b/lib-python/2.7/idlelib/idlever.py @@ -1,1 +1,1 @@ -IDLE_VERSION = "2.7.1" +IDLE_VERSION = "2.7.2" diff --git a/lib-python/2.7/idlelib/macosxSupport.py b/lib-python/2.7/idlelib/macosxSupport.py --- a/lib-python/2.7/idlelib/macosxSupport.py +++ b/lib-python/2.7/idlelib/macosxSupport.py @@ -4,6 +4,7 @@ """ import sys import Tkinter +from os import path _appbundle = None @@ -19,10 +20,41 @@ _appbundle = (sys.platform == 'darwin' and '.app' in sys.executable) return _appbundle +_carbonaquatk = None + +def isCarbonAquaTk(root): + """ + Returns True if IDLE is using a Carbon Aqua Tk (instead of the + newer Cocoa Aqua Tk). + """ + global _carbonaquatk + if _carbonaquatk is None: + _carbonaquatk = (runningAsOSXApp() and + 'aqua' in root.tk.call('tk', 'windowingsystem') and + 'AppKit' not in root.tk.call('winfo', 'server', '.')) + return _carbonaquatk + +def tkVersionWarning(root): + """ + Returns a string warning message if the Tk version in use appears to + be one known to cause problems with IDLE. The Apple Cocoa-based Tk 8.5 + that was shipped with Mac OS X 10.6. + """ + + if (runningAsOSXApp() and + ('AppKit' in root.tk.call('winfo', 'server', '.')) and + (root.tk.call('info', 'patchlevel') == '8.5.7') ): + return (r"WARNING: The version of Tcl/Tk (8.5.7) in use may" + r" be unstable.\n" + r"Visit http://www.python.org/download/mac/tcltk/" + r" for current information.") + else: + return False + def addOpenEventSupport(root, flist): """ - This ensures that the application will respont to open AppleEvents, which - makes is feaseable to use IDLE as the default application for python files. + This ensures that the application will respond to open AppleEvents, which + makes is feasible to use IDLE as the default application for python files. 
""" def doOpenFile(*args): for fn in args: @@ -79,9 +111,6 @@ WindowList.add_windows_to_menu(menu) WindowList.register_callback(postwindowsmenu) - menudict['application'] = menu = Menu(menubar, name='apple') - menubar.add_cascade(label='IDLE', menu=menu) - def about_dialog(event=None): from idlelib import aboutDialog aboutDialog.AboutDialog(root, 'About IDLE') @@ -91,41 +120,45 @@ root.instance_dict = flist.inversedict configDialog.ConfigDialog(root, 'Settings') + def help_dialog(event=None): + from idlelib import textView + fn = path.join(path.abspath(path.dirname(__file__)), 'help.txt') + textView.view_file(root, 'Help', fn) root.bind('<>', about_dialog) root.bind('<>', config_dialog) + root.createcommand('::tk::mac::ShowPreferences', config_dialog) if flist: root.bind('<>', flist.close_all_callback) + # The binding above doesn't reliably work on all versions of Tk + # on MacOSX. Adding command definition below does seem to do the + # right thing for now. + root.createcommand('exit', flist.close_all_callback) - ###check if Tk version >= 8.4.14; if so, use hard-coded showprefs binding - tkversion = root.tk.eval('info patchlevel') - # Note: we cannot check if the string tkversion >= '8.4.14', because - # the string '8.4.7' is greater than the string '8.4.14'. - if tuple(map(int, tkversion.split('.'))) >= (8, 4, 14): - Bindings.menudefs[0] = ('application', [ + if isCarbonAquaTk(root): + # for Carbon AquaTk, replace the default Tk apple menu + menudict['application'] = menu = Menu(menubar, name='apple') + menubar.add_cascade(label='IDLE', menu=menu) + Bindings.menudefs.insert(0, + ('application', [ ('About IDLE', '<>'), - None, - ]) - root.createcommand('::tk::mac::ShowPreferences', config_dialog) + None, + ])) + tkversion = root.tk.eval('info patchlevel') + if tuple(map(int, tkversion.split('.'))) < (8, 4, 14): + # for earlier AquaTk versions, supply a Preferences menu item + Bindings.menudefs[0][1].append( + ('_Preferences....', '<>'), + ) else: - for mname, entrylist in Bindings.menudefs: - menu = menudict.get(mname) - if not menu: - continue - else: - for entry in entrylist: - if not entry: - menu.add_separator() - else: - label, eventname = entry - underline, label = prepstr(label) - accelerator = get_accelerator(Bindings.default_keydefs, - eventname) - def command(text=root, eventname=eventname): - text.event_generate(eventname) - menu.add_command(label=label, underline=underline, - command=command, accelerator=accelerator) + # assume Cocoa AquaTk + # replace default About dialog with About IDLE one + root.createcommand('tkAboutDialog', about_dialog) + # replace default "Help" item in Help menu + root.createcommand('::tk::mac::ShowHelp', help_dialog) + # remove redundant "IDLE Help" from menu + del Bindings.menudefs[-1][1][0] def setupApp(root, flist): """ diff --git a/lib-python/2.7/imaplib.py b/lib-python/2.7/imaplib.py --- a/lib-python/2.7/imaplib.py +++ b/lib-python/2.7/imaplib.py @@ -1158,28 +1158,17 @@ self.port = port self.sock = socket.create_connection((host, port)) self.sslobj = ssl.wrap_socket(self.sock, self.keyfile, self.certfile) + self.file = self.sslobj.makefile('rb') def read(self, size): """Read 'size' bytes from remote.""" - # sslobj.read() sometimes returns < size bytes - chunks = [] - read = 0 - while read < size: - data = self.sslobj.read(min(size-read, 16384)) - read += len(data) - chunks.append(data) - - return ''.join(chunks) + return self.file.read(size) def readline(self): """Read line from remote.""" - line = [] - while 1: - char = self.sslobj.read(1) - 
line.append(char) - if char in ("\n", ""): return ''.join(line) + return self.file.readline() def send(self, data): @@ -1195,6 +1184,7 @@ def shutdown(self): """Close I/O established in "open".""" + self.file.close() self.sock.close() @@ -1321,9 +1311,10 @@ 'Jul': 7, 'Aug': 8, 'Sep': 9, 'Oct': 10, 'Nov': 11, 'Dec': 12} def Internaldate2tuple(resp): - """Convert IMAP4 INTERNALDATE to UT. + """Parse an IMAP4 INTERNALDATE string. - Returns Python time module tuple. + Return corresponding local time. The return value is a + time.struct_time instance or None if the string has wrong format. """ mo = InternalDate.match(resp) @@ -1390,9 +1381,14 @@ def Time2Internaldate(date_time): - """Convert 'date_time' to IMAP4 INTERNALDATE representation. + """Convert date_time to IMAP4 INTERNALDATE representation. - Return string in form: '"DD-Mmm-YYYY HH:MM:SS +HHMM"' + Return string in form: '"DD-Mmm-YYYY HH:MM:SS +HHMM"'. The + date_time argument can be a number (int or float) representing + seconds since epoch (as returned by time.time()), a 9-tuple + representing local time (as returned by time.localtime()), or a + double-quoted string. In the last case, it is assumed to already + be in the correct format. """ if isinstance(date_time, (int, float)): diff --git a/lib-python/2.7/inspect.py b/lib-python/2.7/inspect.py --- a/lib-python/2.7/inspect.py +++ b/lib-python/2.7/inspect.py @@ -943,8 +943,14 @@ f_name, 'at most' if defaults else 'exactly', num_args, 'arguments' if num_args > 1 else 'argument', num_total)) elif num_args == 0 and num_total: - raise TypeError('%s() takes no arguments (%d given)' % - (f_name, num_total)) + if varkw: + if num_pos: + # XXX: We should use num_pos, but Python also uses num_total: + raise TypeError('%s() takes exactly 0 arguments ' + '(%d given)' % (f_name, num_total)) + else: + raise TypeError('%s() takes no arguments (%d given)' % + (f_name, num_total)) for arg in args: if isinstance(arg, str) and arg in named: if is_assigned(arg): diff --git a/lib-python/2.7/json/decoder.py b/lib-python/2.7/json/decoder.py --- a/lib-python/2.7/json/decoder.py +++ b/lib-python/2.7/json/decoder.py @@ -4,7 +4,7 @@ import sys import struct -from json.scanner import make_scanner +from json import scanner try: from _json import scanstring as c_scanstring except ImportError: @@ -161,6 +161,12 @@ nextchar = s[end:end + 1] # Trivial empty object if nextchar == '}': + if object_pairs_hook is not None: + result = object_pairs_hook(pairs) + return result, end + pairs = {} + if object_hook is not None: + pairs = object_hook(pairs) return pairs, end + 1 elif nextchar != '"': raise ValueError(errmsg("Expecting property name", s, end)) @@ -350,7 +356,7 @@ self.parse_object = JSONObject self.parse_array = JSONArray self.parse_string = scanstring - self.scan_once = make_scanner(self) + self.scan_once = scanner.make_scanner(self) def decode(self, s, _w=WHITESPACE.match): """Return the Python representation of ``s`` (a ``str`` or ``unicode`` diff --git a/lib-python/2.7/json/encoder.py b/lib-python/2.7/json/encoder.py --- a/lib-python/2.7/json/encoder.py +++ b/lib-python/2.7/json/encoder.py @@ -251,7 +251,7 @@ if (_one_shot and c_make_encoder is not None - and not self.indent and not self.sort_keys): + and self.indent is None and not self.sort_keys): _iterencode = c_make_encoder( markers, self.default, _encoder, self.indent, self.key_separator, self.item_separator, self.sort_keys, diff --git a/lib-python/2.7/json/tests/__init__.py b/lib-python/2.7/json/tests/__init__.py --- 
a/lib-python/2.7/json/tests/__init__.py +++ b/lib-python/2.7/json/tests/__init__.py @@ -1,7 +1,46 @@ import os import sys +import json +import doctest import unittest -import doctest + +from test import test_support + +# import json with and without accelerations +cjson = test_support.import_fresh_module('json', fresh=['_json']) +pyjson = test_support.import_fresh_module('json', blocked=['_json']) + +# create two base classes that will be used by the other tests +class PyTest(unittest.TestCase): + json = pyjson + loads = staticmethod(pyjson.loads) + dumps = staticmethod(pyjson.dumps) + + at unittest.skipUnless(cjson, 'requires _json') +class CTest(unittest.TestCase): + if cjson is not None: + json = cjson + loads = staticmethod(cjson.loads) + dumps = staticmethod(cjson.dumps) + +# test PyTest and CTest checking if the functions come from the right module +class TestPyTest(PyTest): + def test_pyjson(self): + self.assertEqual(self.json.scanner.make_scanner.__module__, + 'json.scanner') + self.assertEqual(self.json.decoder.scanstring.__module__, + 'json.decoder') + self.assertEqual(self.json.encoder.encode_basestring_ascii.__module__, + 'json.encoder') + +class TestCTest(CTest): + def test_cjson(self): + self.assertEqual(self.json.scanner.make_scanner.__module__, '_json') + self.assertEqual(self.json.decoder.scanstring.__module__, '_json') + self.assertEqual(self.json.encoder.c_make_encoder.__module__, '_json') + self.assertEqual(self.json.encoder.encode_basestring_ascii.__module__, + '_json') + here = os.path.dirname(__file__) @@ -17,12 +56,11 @@ return suite def additional_tests(): - import json - import json.encoder - import json.decoder suite = unittest.TestSuite() for mod in (json, json.encoder, json.decoder): suite.addTest(doctest.DocTestSuite(mod)) + suite.addTest(TestPyTest('test_pyjson')) + suite.addTest(TestCTest('test_cjson')) return suite def main(): diff --git a/lib-python/2.7/json/tests/test_check_circular.py b/lib-python/2.7/json/tests/test_check_circular.py --- a/lib-python/2.7/json/tests/test_check_circular.py +++ b/lib-python/2.7/json/tests/test_check_circular.py @@ -1,30 +1,34 @@ -from unittest import TestCase -import json +from json.tests import PyTest, CTest + def default_iterable(obj): return list(obj) -class TestCheckCircular(TestCase): +class TestCheckCircular(object): def test_circular_dict(self): dct = {} dct['a'] = dct - self.assertRaises(ValueError, json.dumps, dct) + self.assertRaises(ValueError, self.dumps, dct) def test_circular_list(self): lst = [] lst.append(lst) - self.assertRaises(ValueError, json.dumps, lst) + self.assertRaises(ValueError, self.dumps, lst) def test_circular_composite(self): dct2 = {} dct2['a'] = [] dct2['a'].append(dct2) - self.assertRaises(ValueError, json.dumps, dct2) + self.assertRaises(ValueError, self.dumps, dct2) def test_circular_default(self): - json.dumps([set()], default=default_iterable) - self.assertRaises(TypeError, json.dumps, [set()]) + self.dumps([set()], default=default_iterable) + self.assertRaises(TypeError, self.dumps, [set()]) def test_circular_off_default(self): - json.dumps([set()], default=default_iterable, check_circular=False) - self.assertRaises(TypeError, json.dumps, [set()], check_circular=False) + self.dumps([set()], default=default_iterable, check_circular=False) + self.assertRaises(TypeError, self.dumps, [set()], check_circular=False) + + +class TestPyCheckCircular(TestCheckCircular, PyTest): pass +class TestCCheckCircular(TestCheckCircular, CTest): pass diff --git a/lib-python/2.7/json/tests/test_decode.py 
b/lib-python/2.7/json/tests/test_decode.py --- a/lib-python/2.7/json/tests/test_decode.py +++ b/lib-python/2.7/json/tests/test_decode.py @@ -1,18 +1,17 @@ import decimal -from unittest import TestCase from StringIO import StringIO +from collections import OrderedDict +from json.tests import PyTest, CTest -import json -from collections import OrderedDict -class TestDecode(TestCase): +class TestDecode(object): def test_decimal(self): - rval = json.loads('1.1', parse_float=decimal.Decimal) + rval = self.loads('1.1', parse_float=decimal.Decimal) self.assertTrue(isinstance(rval, decimal.Decimal)) self.assertEqual(rval, decimal.Decimal('1.1')) def test_float(self): - rval = json.loads('1', parse_int=float) + rval = self.loads('1', parse_int=float) self.assertTrue(isinstance(rval, float)) self.assertEqual(rval, 1.0) @@ -20,22 +19,32 @@ # Several optimizations were made that skip over calls to # the whitespace regex, so this test is designed to try and # exercise the uncommon cases. The array cases are already covered. - rval = json.loads('{ "key" : "value" , "k":"v" }') + rval = self.loads('{ "key" : "value" , "k":"v" }') self.assertEqual(rval, {"key":"value", "k":"v"}) + def test_empty_objects(self): + self.assertEqual(self.loads('{}'), {}) + self.assertEqual(self.loads('[]'), []) + self.assertEqual(self.loads('""'), u"") + self.assertIsInstance(self.loads('""'), unicode) + def test_object_pairs_hook(self): s = '{"xkd":1, "kcw":2, "art":3, "hxm":4, "qrt":5, "pad":6, "hoy":7}' p = [("xkd", 1), ("kcw", 2), ("art", 3), ("hxm", 4), ("qrt", 5), ("pad", 6), ("hoy", 7)] - self.assertEqual(json.loads(s), eval(s)) - self.assertEqual(json.loads(s, object_pairs_hook=lambda x: x), p) - self.assertEqual(json.load(StringIO(s), - object_pairs_hook=lambda x: x), p) - od = json.loads(s, object_pairs_hook=OrderedDict) + self.assertEqual(self.loads(s), eval(s)) + self.assertEqual(self.loads(s, object_pairs_hook=lambda x: x), p) + self.assertEqual(self.json.load(StringIO(s), + object_pairs_hook=lambda x: x), p) + od = self.loads(s, object_pairs_hook=OrderedDict) self.assertEqual(od, OrderedDict(p)) self.assertEqual(type(od), OrderedDict) # the object_pairs_hook takes priority over the object_hook - self.assertEqual(json.loads(s, + self.assertEqual(self.loads(s, object_pairs_hook=OrderedDict, object_hook=lambda x: None), OrderedDict(p)) + + +class TestPyDecode(TestDecode, PyTest): pass +class TestCDecode(TestDecode, CTest): pass diff --git a/lib-python/2.7/json/tests/test_default.py b/lib-python/2.7/json/tests/test_default.py --- a/lib-python/2.7/json/tests/test_default.py +++ b/lib-python/2.7/json/tests/test_default.py @@ -1,9 +1,12 @@ -from unittest import TestCase +from json.tests import PyTest, CTest -import json -class TestDefault(TestCase): +class TestDefault(object): def test_default(self): self.assertEqual( - json.dumps(type, default=repr), - json.dumps(repr(type))) + self.dumps(type, default=repr), + self.dumps(repr(type))) + + +class TestPyDefault(TestDefault, PyTest): pass +class TestCDefault(TestDefault, CTest): pass diff --git a/lib-python/2.7/json/tests/test_dump.py b/lib-python/2.7/json/tests/test_dump.py --- a/lib-python/2.7/json/tests/test_dump.py +++ b/lib-python/2.7/json/tests/test_dump.py @@ -1,21 +1,23 @@ -from unittest import TestCase from cStringIO import StringIO +from json.tests import PyTest, CTest -import json -class TestDump(TestCase): +class TestDump(object): def test_dump(self): sio = StringIO() - json.dump({}, sio) + self.json.dump({}, sio) self.assertEqual(sio.getvalue(), '{}') def 
test_dumps(self): - self.assertEqual(json.dumps({}), '{}') + self.assertEqual(self.dumps({}), '{}') def test_encode_truefalse(self): - self.assertEqual(json.dumps( + self.assertEqual(self.dumps( {True: False, False: True}, sort_keys=True), '{"false": true, "true": false}') - self.assertEqual(json.dumps( + self.assertEqual(self.dumps( {2: 3.0, 4.0: 5L, False: 1, 6L: True}, sort_keys=True), '{"false": 1, "2": 3.0, "4.0": 5, "6": true}') + +class TestPyDump(TestDump, PyTest): pass +class TestCDump(TestDump, CTest): pass diff --git a/lib-python/2.7/json/tests/test_encode_basestring_ascii.py b/lib-python/2.7/json/tests/test_encode_basestring_ascii.py --- a/lib-python/2.7/json/tests/test_encode_basestring_ascii.py +++ b/lib-python/2.7/json/tests/test_encode_basestring_ascii.py @@ -1,8 +1,6 @@ -from unittest import TestCase +from collections import OrderedDict +from json.tests import PyTest, CTest -import json.encoder -from json import dumps -from collections import OrderedDict CASES = [ (u'/\\"\ucafe\ubabe\uab98\ufcde\ubcda\uef4a\x08\x0c\n\r\t`1~!@#$%^&*()_+-=[]{}|;:\',./<>?', '"/\\\\\\"\\ucafe\\ubabe\\uab98\\ufcde\\ubcda\\uef4a\\b\\f\\n\\r\\t`1~!@#$%^&*()_+-=[]{}|;:\',./<>?"'), @@ -23,19 +21,11 @@ (u'\u0123\u4567\u89ab\ucdef\uabcd\uef4a', '"\\u0123\\u4567\\u89ab\\ucdef\\uabcd\\uef4a"'), ] -class TestEncodeBaseStringAscii(TestCase): - def test_py_encode_basestring_ascii(self): - self._test_encode_basestring_ascii(json.encoder.py_encode_basestring_ascii) - - def test_c_encode_basestring_ascii(self): - if not json.encoder.c_encode_basestring_ascii: - return - self._test_encode_basestring_ascii(json.encoder.c_encode_basestring_ascii) - - def _test_encode_basestring_ascii(self, encode_basestring_ascii): - fname = encode_basestring_ascii.__name__ +class TestEncodeBasestringAscii(object): + def test_encode_basestring_ascii(self): + fname = self.json.encoder.encode_basestring_ascii.__name__ for input_string, expect in CASES: - result = encode_basestring_ascii(input_string) + result = self.json.encoder.encode_basestring_ascii(input_string) self.assertEqual(result, expect, '{0!r} != {1!r} for {2}({3!r})'.format( result, expect, fname, input_string)) @@ -43,5 +33,9 @@ def test_ordered_dict(self): # See issue 6105 items = [('one', 1), ('two', 2), ('three', 3), ('four', 4), ('five', 5)] - s = json.dumps(OrderedDict(items)) + s = self.dumps(OrderedDict(items)) self.assertEqual(s, '{"one": 1, "two": 2, "three": 3, "four": 4, "five": 5}') + + +class TestPyEncodeBasestringAscii(TestEncodeBasestringAscii, PyTest): pass +class TestCEncodeBasestringAscii(TestEncodeBasestringAscii, CTest): pass diff --git a/lib-python/2.7/json/tests/test_fail.py b/lib-python/2.7/json/tests/test_fail.py --- a/lib-python/2.7/json/tests/test_fail.py +++ b/lib-python/2.7/json/tests/test_fail.py @@ -1,6 +1,4 @@ -from unittest import TestCase - -import json +from json.tests import PyTest, CTest # Fri Dec 30 18:57:26 2005 JSONDOCS = [ @@ -61,15 +59,15 @@ 18: "spec doesn't specify any nesting limitations", } -class TestFail(TestCase): +class TestFail(object): def test_failures(self): for idx, doc in enumerate(JSONDOCS): idx = idx + 1 if idx in SKIPS: - json.loads(doc) + self.loads(doc) continue try: - json.loads(doc) + self.loads(doc) except ValueError: pass else: @@ -79,7 +77,11 @@ data = {'a' : 1, (1, 2) : 2} #This is for c encoder - self.assertRaises(TypeError, json.dumps, data) + self.assertRaises(TypeError, self.dumps, data) #This is for python encoder - self.assertRaises(TypeError, json.dumps, data, indent=True) + 
self.assertRaises(TypeError, self.dumps, data, indent=True) + + +class TestPyFail(TestFail, PyTest): pass +class TestCFail(TestFail, CTest): pass diff --git a/lib-python/2.7/json/tests/test_float.py b/lib-python/2.7/json/tests/test_float.py --- a/lib-python/2.7/json/tests/test_float.py +++ b/lib-python/2.7/json/tests/test_float.py @@ -1,19 +1,22 @@ import math -from unittest import TestCase +from json.tests import PyTest, CTest -import json -class TestFloat(TestCase): +class TestFloat(object): def test_floats(self): for num in [1617161771.7650001, math.pi, math.pi**100, math.pi**-100, 3.1]: - self.assertEqual(float(json.dumps(num)), num) - self.assertEqual(json.loads(json.dumps(num)), num) - self.assertEqual(json.loads(unicode(json.dumps(num))), num) + self.assertEqual(float(self.dumps(num)), num) + self.assertEqual(self.loads(self.dumps(num)), num) + self.assertEqual(self.loads(unicode(self.dumps(num))), num) def test_ints(self): for num in [1, 1L, 1<<32, 1<<64]: - self.assertEqual(json.dumps(num), str(num)) - self.assertEqual(int(json.dumps(num)), num) - self.assertEqual(json.loads(json.dumps(num)), num) - self.assertEqual(json.loads(unicode(json.dumps(num))), num) + self.assertEqual(self.dumps(num), str(num)) + self.assertEqual(int(self.dumps(num)), num) + self.assertEqual(self.loads(self.dumps(num)), num) + self.assertEqual(self.loads(unicode(self.dumps(num))), num) + + +class TestPyFloat(TestFloat, PyTest): pass +class TestCFloat(TestFloat, CTest): pass diff --git a/lib-python/2.7/json/tests/test_indent.py b/lib-python/2.7/json/tests/test_indent.py --- a/lib-python/2.7/json/tests/test_indent.py +++ b/lib-python/2.7/json/tests/test_indent.py @@ -1,9 +1,9 @@ -from unittest import TestCase +import textwrap +from StringIO import StringIO +from json.tests import PyTest, CTest -import json -import textwrap -class TestIndent(TestCase): +class TestIndent(object): def test_indent(self): h = [['blorpie'], ['whoops'], [], 'd-shtaeou', 'd-nthiouh', 'i-vhbjkhnth', {'nifty': 87}, {'field': 'yes', 'morefield': False} ] @@ -30,12 +30,31 @@ ]""") - d1 = json.dumps(h) - d2 = json.dumps(h, indent=2, sort_keys=True, separators=(',', ': ')) + d1 = self.dumps(h) + d2 = self.dumps(h, indent=2, sort_keys=True, separators=(',', ': ')) - h1 = json.loads(d1) - h2 = json.loads(d2) + h1 = self.loads(d1) + h2 = self.loads(d2) self.assertEqual(h1, h) self.assertEqual(h2, h) self.assertEqual(d2, expect) + + def test_indent0(self): + h = {3: 1} + def check(indent, expected): + d1 = self.dumps(h, indent=indent) + self.assertEqual(d1, expected) + + sio = StringIO() + self.json.dump(h, sio, indent=indent) + self.assertEqual(sio.getvalue(), expected) + + # indent=0 should emit newlines + check(0, '{\n"3": 1\n}') + # indent=None is more compact + check(None, '{"3": 1}') + + +class TestPyIndent(TestIndent, PyTest): pass +class TestCIndent(TestIndent, CTest): pass diff --git a/lib-python/2.7/json/tests/test_pass1.py b/lib-python/2.7/json/tests/test_pass1.py --- a/lib-python/2.7/json/tests/test_pass1.py +++ b/lib-python/2.7/json/tests/test_pass1.py @@ -1,6 +1,5 @@ -from unittest import TestCase +from json.tests import PyTest, CTest -import json # from http://json.org/JSON_checker/test/pass1.json JSON = r''' @@ -62,15 +61,19 @@ ,"rosebud"] ''' -class TestPass1(TestCase): +class TestPass1(object): def test_parse(self): # test in/out equivalence and parsing - res = json.loads(JSON) - out = json.dumps(res) - self.assertEqual(res, json.loads(out)) + res = self.loads(JSON) + out = self.dumps(res) + self.assertEqual(res, 
self.loads(out)) try: - json.dumps(res, allow_nan=False) + self.dumps(res, allow_nan=False) except ValueError: pass else: self.fail("23456789012E666 should be out of range") + + +class TestPyPass1(TestPass1, PyTest): pass +class TestCPass1(TestPass1, CTest): pass diff --git a/lib-python/2.7/json/tests/test_pass2.py b/lib-python/2.7/json/tests/test_pass2.py --- a/lib-python/2.7/json/tests/test_pass2.py +++ b/lib-python/2.7/json/tests/test_pass2.py @@ -1,14 +1,18 @@ -from unittest import TestCase -import json +from json.tests import PyTest, CTest + # from http://json.org/JSON_checker/test/pass2.json JSON = r''' [[[[[[[[[[[[[[[[[[["Not too deep"]]]]]]]]]]]]]]]]]]] ''' -class TestPass2(TestCase): +class TestPass2(object): def test_parse(self): # test in/out equivalence and parsing - res = json.loads(JSON) - out = json.dumps(res) - self.assertEqual(res, json.loads(out)) + res = self.loads(JSON) + out = self.dumps(res) + self.assertEqual(res, self.loads(out)) + + +class TestPyPass2(TestPass2, PyTest): pass +class TestCPass2(TestPass2, CTest): pass diff --git a/lib-python/2.7/json/tests/test_pass3.py b/lib-python/2.7/json/tests/test_pass3.py --- a/lib-python/2.7/json/tests/test_pass3.py +++ b/lib-python/2.7/json/tests/test_pass3.py @@ -1,6 +1,5 @@ -from unittest import TestCase +from json.tests import PyTest, CTest -import json # from http://json.org/JSON_checker/test/pass3.json JSON = r''' @@ -12,9 +11,14 @@ } ''' -class TestPass3(TestCase): + +class TestPass3(object): def test_parse(self): # test in/out equivalence and parsing - res = json.loads(JSON) - out = json.dumps(res) - self.assertEqual(res, json.loads(out)) + res = self.loads(JSON) + out = self.dumps(res) + self.assertEqual(res, self.loads(out)) + + +class TestPyPass3(TestPass3, PyTest): pass +class TestCPass3(TestPass3, CTest): pass diff --git a/lib-python/2.7/json/tests/test_recursion.py b/lib-python/2.7/json/tests/test_recursion.py --- a/lib-python/2.7/json/tests/test_recursion.py +++ b/lib-python/2.7/json/tests/test_recursion.py @@ -1,28 +1,16 @@ -from unittest import TestCase +from json.tests import PyTest, CTest -import json class JSONTestObject: pass -class RecursiveJSONEncoder(json.JSONEncoder): - recurse = False - def default(self, o): - if o is JSONTestObject: - if self.recurse: - return [JSONTestObject] - else: - return 'JSONTestObject' - return json.JSONEncoder.default(o) - - -class TestRecursion(TestCase): +class TestRecursion(object): def test_listrecursion(self): x = [] x.append(x) try: - json.dumps(x) + self.dumps(x) except ValueError: pass else: @@ -31,7 +19,7 @@ y = [x] x.append(y) try: - json.dumps(x) + self.dumps(x) except ValueError: pass else: @@ -39,13 +27,13 @@ y = [] x = [y, y] # ensure that the marker is cleared - json.dumps(x) + self.dumps(x) def test_dictrecursion(self): x = {} x["test"] = x try: - json.dumps(x) + self.dumps(x) except ValueError: pass else: @@ -53,9 +41,19 @@ x = {} y = {"a": x, "b": x} # ensure that the marker is cleared - json.dumps(x) + self.dumps(x) def test_defaultrecursion(self): + class RecursiveJSONEncoder(self.json.JSONEncoder): + recurse = False + def default(self, o): + if o is JSONTestObject: + if self.recurse: + return [JSONTestObject] + else: + return 'JSONTestObject' + return pyjson.JSONEncoder.default(o) + enc = RecursiveJSONEncoder() self.assertEqual(enc.encode(JSONTestObject), '"JSONTestObject"') enc.recurse = True @@ -65,3 +63,46 @@ pass else: self.fail("didn't raise ValueError on default recursion") + + + def test_highly_nested_objects_decoding(self): + # test that loading 
highly-nested objects doesn't segfault when C + # accelerations are used. See #12017 + # str + with self.assertRaises(RuntimeError): + self.loads('{"a":' * 100000 + '1' + '}' * 100000) + with self.assertRaises(RuntimeError): + self.loads('{"a":' * 100000 + '[1]' + '}' * 100000) + with self.assertRaises(RuntimeError): + self.loads('[' * 100000 + '1' + ']' * 100000) + # unicode + with self.assertRaises(RuntimeError): + self.loads(u'{"a":' * 100000 + u'1' + u'}' * 100000) + with self.assertRaises(RuntimeError): + self.loads(u'{"a":' * 100000 + u'[1]' + u'}' * 100000) + with self.assertRaises(RuntimeError): + self.loads(u'[' * 100000 + u'1' + u']' * 100000) + + def test_highly_nested_objects_encoding(self): + # See #12051 + l, d = [], {} + for x in xrange(100000): + l, d = [l], {'k':d} + with self.assertRaises(RuntimeError): + self.dumps(l) + with self.assertRaises(RuntimeError): + self.dumps(d) + + def test_endless_recursion(self): + # See #12051 + class EndlessJSONEncoder(self.json.JSONEncoder): + def default(self, o): + """If check_circular is False, this will keep adding another list.""" + return [o] + + with self.assertRaises(RuntimeError): + EndlessJSONEncoder(check_circular=False).encode(5j) + + +class TestPyRecursion(TestRecursion, PyTest): pass +class TestCRecursion(TestRecursion, CTest): pass diff --git a/lib-python/2.7/json/tests/test_scanstring.py b/lib-python/2.7/json/tests/test_scanstring.py --- a/lib-python/2.7/json/tests/test_scanstring.py +++ b/lib-python/2.7/json/tests/test_scanstring.py @@ -1,18 +1,10 @@ import sys -import decimal -from unittest import TestCase +from json.tests import PyTest, CTest -import json -import json.decoder -class TestScanString(TestCase): - def test_py_scanstring(self): - self._test_scanstring(json.decoder.py_scanstring) - - def test_c_scanstring(self): - self._test_scanstring(json.decoder.c_scanstring) - - def _test_scanstring(self, scanstring): +class TestScanstring(object): + def test_scanstring(self): + scanstring = self.json.decoder.scanstring self.assertEqual( scanstring('"z\\ud834\\udd20x"', 1, None, True), (u'z\U0001d120x', 16)) @@ -103,10 +95,15 @@ (u'Bad value', 12)) def test_issue3623(self): - self.assertRaises(ValueError, json.decoder.scanstring, b"xxx", 1, + self.assertRaises(ValueError, self.json.decoder.scanstring, b"xxx", 1, "xxx") self.assertRaises(UnicodeDecodeError, - json.encoder.encode_basestring_ascii, b"xx\xff") + self.json.encoder.encode_basestring_ascii, b"xx\xff") def test_overflow(self): - self.assertRaises(OverflowError, json.decoder.scanstring, b"xxx", sys.maxsize+1) + with self.assertRaises(OverflowError): + self.json.decoder.scanstring(b"xxx", sys.maxsize+1) + + +class TestPyScanstring(TestScanstring, PyTest): pass +class TestCScanstring(TestScanstring, CTest): pass diff --git a/lib-python/2.7/json/tests/test_separators.py b/lib-python/2.7/json/tests/test_separators.py --- a/lib-python/2.7/json/tests/test_separators.py +++ b/lib-python/2.7/json/tests/test_separators.py @@ -1,10 +1,8 @@ import textwrap -from unittest import TestCase +from json.tests import PyTest, CTest -import json - -class TestSeparators(TestCase): +class TestSeparators(object): def test_separators(self): h = [['blorpie'], ['whoops'], [], 'd-shtaeou', 'd-nthiouh', 'i-vhbjkhnth', {'nifty': 87}, {'field': 'yes', 'morefield': False} ] @@ -31,12 +29,16 @@ ]""") - d1 = json.dumps(h) - d2 = json.dumps(h, indent=2, sort_keys=True, separators=(' ,', ' : ')) + d1 = self.dumps(h) + d2 = self.dumps(h, indent=2, sort_keys=True, separators=(' ,', ' : ')) - h1 = 
json.loads(d1) - h2 = json.loads(d2) + h1 = self.loads(d1) + h2 = self.loads(d2) self.assertEqual(h1, h) self.assertEqual(h2, h) self.assertEqual(d2, expect) + + +class TestPySeparators(TestSeparators, PyTest): pass +class TestCSeparators(TestSeparators, CTest): pass diff --git a/lib-python/2.7/json/tests/test_speedups.py b/lib-python/2.7/json/tests/test_speedups.py --- a/lib-python/2.7/json/tests/test_speedups.py +++ b/lib-python/2.7/json/tests/test_speedups.py @@ -1,24 +1,23 @@ -import decimal -from unittest import TestCase +from json.tests import CTest -from json import decoder, encoder, scanner -class TestSpeedups(TestCase): +class TestSpeedups(CTest): def test_scanstring(self): - self.assertEqual(decoder.scanstring.__module__, "_json") - self.assertTrue(decoder.scanstring is decoder.c_scanstring) + self.assertEqual(self.json.decoder.scanstring.__module__, "_json") + self.assertIs(self.json.decoder.scanstring, self.json.decoder.c_scanstring) def test_encode_basestring_ascii(self): - self.assertEqual(encoder.encode_basestring_ascii.__module__, "_json") - self.assertTrue(encoder.encode_basestring_ascii is - encoder.c_encode_basestring_ascii) + self.assertEqual(self.json.encoder.encode_basestring_ascii.__module__, + "_json") + self.assertIs(self.json.encoder.encode_basestring_ascii, + self.json.encoder.c_encode_basestring_ascii) -class TestDecode(TestCase): +class TestDecode(CTest): def test_make_scanner(self): - self.assertRaises(AttributeError, scanner.c_make_scanner, 1) + self.assertRaises(AttributeError, self.json.scanner.c_make_scanner, 1) def test_make_encoder(self): - self.assertRaises(TypeError, encoder.c_make_encoder, + self.assertRaises(TypeError, self.json.encoder.c_make_encoder, None, "\xCD\x7D\x3D\x4E\x12\x4C\xF9\x79\xD7\x52\xBA\x82\xF2\x27\x4A\x7D\xA0\xCA\x75", None) diff --git a/lib-python/2.7/json/tests/test_unicode.py b/lib-python/2.7/json/tests/test_unicode.py --- a/lib-python/2.7/json/tests/test_unicode.py +++ b/lib-python/2.7/json/tests/test_unicode.py @@ -1,11 +1,10 @@ -from unittest import TestCase +from collections import OrderedDict +from json.tests import PyTest, CTest -import json -from collections import OrderedDict -class TestUnicode(TestCase): +class TestUnicode(object): def test_encoding1(self): - encoder = json.JSONEncoder(encoding='utf-8') + encoder = self.json.JSONEncoder(encoding='utf-8') u = u'\N{GREEK SMALL LETTER ALPHA}\N{GREEK CAPITAL LETTER OMEGA}' s = u.encode('utf-8') ju = encoder.encode(u) @@ -15,68 +14,72 @@ def test_encoding2(self): u = u'\N{GREEK SMALL LETTER ALPHA}\N{GREEK CAPITAL LETTER OMEGA}' s = u.encode('utf-8') - ju = json.dumps(u, encoding='utf-8') - js = json.dumps(s, encoding='utf-8') + ju = self.dumps(u, encoding='utf-8') + js = self.dumps(s, encoding='utf-8') self.assertEqual(ju, js) def test_encoding3(self): u = u'\N{GREEK SMALL LETTER ALPHA}\N{GREEK CAPITAL LETTER OMEGA}' - j = json.dumps(u) + j = self.dumps(u) self.assertEqual(j, '"\\u03b1\\u03a9"') def test_encoding4(self): u = u'\N{GREEK SMALL LETTER ALPHA}\N{GREEK CAPITAL LETTER OMEGA}' - j = json.dumps([u]) + j = self.dumps([u]) self.assertEqual(j, '["\\u03b1\\u03a9"]') def test_encoding5(self): u = u'\N{GREEK SMALL LETTER ALPHA}\N{GREEK CAPITAL LETTER OMEGA}' - j = json.dumps(u, ensure_ascii=False) + j = self.dumps(u, ensure_ascii=False) self.assertEqual(j, u'"{0}"'.format(u)) def test_encoding6(self): u = u'\N{GREEK SMALL LETTER ALPHA}\N{GREEK CAPITAL LETTER OMEGA}' - j = json.dumps([u], ensure_ascii=False) + j = self.dumps([u], ensure_ascii=False) self.assertEqual(j, 
u'["{0}"]'.format(u)) def test_big_unicode_encode(self): u = u'\U0001d120' - self.assertEqual(json.dumps(u), '"\\ud834\\udd20"') - self.assertEqual(json.dumps(u, ensure_ascii=False), u'"\U0001d120"') + self.assertEqual(self.dumps(u), '"\\ud834\\udd20"') + self.assertEqual(self.dumps(u, ensure_ascii=False), u'"\U0001d120"') def test_big_unicode_decode(self): u = u'z\U0001d120x' - self.assertEqual(json.loads('"' + u + '"'), u) - self.assertEqual(json.loads('"z\\ud834\\udd20x"'), u) + self.assertEqual(self.loads('"' + u + '"'), u) + self.assertEqual(self.loads('"z\\ud834\\udd20x"'), u) def test_unicode_decode(self): for i in range(0, 0xd7ff): u = unichr(i) s = '"\\u{0:04x}"'.format(i) - self.assertEqual(json.loads(s), u) + self.assertEqual(self.loads(s), u) def test_object_pairs_hook_with_unicode(self): s = u'{"xkd":1, "kcw":2, "art":3, "hxm":4, "qrt":5, "pad":6, "hoy":7}' p = [(u"xkd", 1), (u"kcw", 2), (u"art", 3), (u"hxm", 4), (u"qrt", 5), (u"pad", 6), (u"hoy", 7)] - self.assertEqual(json.loads(s), eval(s)) - self.assertEqual(json.loads(s, object_pairs_hook = lambda x: x), p) - od = json.loads(s, object_pairs_hook = OrderedDict) + self.assertEqual(self.loads(s), eval(s)) + self.assertEqual(self.loads(s, object_pairs_hook = lambda x: x), p) + od = self.loads(s, object_pairs_hook = OrderedDict) self.assertEqual(od, OrderedDict(p)) self.assertEqual(type(od), OrderedDict) # the object_pairs_hook takes priority over the object_hook - self.assertEqual(json.loads(s, + self.assertEqual(self.loads(s, object_pairs_hook = OrderedDict, object_hook = lambda x: None), OrderedDict(p)) def test_default_encoding(self): - self.assertEqual(json.loads(u'{"a": "\xe9"}'.encode('utf-8')), + self.assertEqual(self.loads(u'{"a": "\xe9"}'.encode('utf-8')), {'a': u'\xe9'}) def test_unicode_preservation(self): - self.assertEqual(type(json.loads(u'""')), unicode) - self.assertEqual(type(json.loads(u'"a"')), unicode) - self.assertEqual(type(json.loads(u'["a"]')[0]), unicode) + self.assertEqual(type(self.loads(u'""')), unicode) + self.assertEqual(type(self.loads(u'"a"')), unicode) + self.assertEqual(type(self.loads(u'["a"]')[0]), unicode) # Issue 10038. - self.assertEqual(type(json.loads('"foo"')), unicode) + self.assertEqual(type(self.loads('"foo"')), unicode) + + +class TestPyUnicode(TestUnicode, PyTest): pass +class TestCUnicode(TestUnicode, CTest): pass diff --git a/lib-python/2.7/lib-tk/Tix.py b/lib-python/2.7/lib-tk/Tix.py --- a/lib-python/2.7/lib-tk/Tix.py +++ b/lib-python/2.7/lib-tk/Tix.py @@ -163,7 +163,7 @@ extensions) exist, then the image type is chosen according to the depth of the X display: xbm images are chosen on monochrome displays and color images are chosen on color displays. By using - tix_ getimage, you can advoid hard coding the pathnames of the + tix_ getimage, you can avoid hard coding the pathnames of the image files in your application. When successful, this command returns the name of the newly created image, which can be used to configure the -image option of the Tk and Tix widgets. @@ -171,7 +171,7 @@ return self.tk.call('tix', 'getimage', name) def tix_option_get(self, name): - """Gets the options manitained by the Tix + """Gets the options maintained by the Tix scheme mechanism. Available options include: active_bg active_fg bg @@ -576,7 +576,7 @@ class ComboBox(TixWidget): """ComboBox - an Entry field with a dropdown menu. 
The user can select a - choice by either typing in the entry subwdget or selecting from the + choice by either typing in the entry subwidget or selecting from the listbox subwidget. Subwidget Class @@ -869,7 +869,7 @@ """HList - Hierarchy display widget can be used to display any data that have a hierarchical structure, for example, file system directory trees. The list entries are indented and connected by branch lines - according to their places in the hierachy. + according to their places in the hierarchy. Subwidgets - None""" @@ -1520,7 +1520,7 @@ self.tk.call(self._w, 'selection', 'set', first, last) class Tree(TixWidget): - """Tree - The tixTree widget can be used to display hierachical + """Tree - The tixTree widget can be used to display hierarchical data in a tree form. The user can adjust the view of the tree by opening or closing parts of the tree.""" diff --git a/lib-python/2.7/lib-tk/Tkinter.py b/lib-python/2.7/lib-tk/Tkinter.py --- a/lib-python/2.7/lib-tk/Tkinter.py +++ b/lib-python/2.7/lib-tk/Tkinter.py @@ -1660,7 +1660,7 @@ class Tk(Misc, Wm): """Toplevel widget of Tk which represents mostly the main window - of an appliation. It has an associated Tcl interpreter.""" + of an application. It has an associated Tcl interpreter.""" _w = '.' def __init__(self, screenName=None, baseName=None, className='Tk', useTk=1, sync=0, use=None): diff --git a/lib-python/2.7/lib-tk/test/test_ttk/test_functions.py b/lib-python/2.7/lib-tk/test/test_ttk/test_functions.py --- a/lib-python/2.7/lib-tk/test/test_ttk/test_functions.py +++ b/lib-python/2.7/lib-tk/test/test_ttk/test_functions.py @@ -136,7 +136,7 @@ # minimum acceptable for image type self.assertEqual(ttk._format_elemcreate('image', False, 'test'), ("test ", ())) - # specifiyng a state spec + # specifying a state spec self.assertEqual(ttk._format_elemcreate('image', False, 'test', ('', 'a')), ("test {} a", ())) # state spec with multiple states diff --git a/lib-python/2.7/lib-tk/ttk.py b/lib-python/2.7/lib-tk/ttk.py --- a/lib-python/2.7/lib-tk/ttk.py +++ b/lib-python/2.7/lib-tk/ttk.py @@ -707,7 +707,7 @@ textvariable, values, width """ # The "values" option may need special formatting, so leave to - # _format_optdict the responsability to format it + # _format_optdict the responsibility to format it if "values" in kw: kw["values"] = _format_optdict({'v': kw["values"]})[1] @@ -993,7 +993,7 @@ pane is either an integer index or the name of a managed subwindow. If kw is not given, returns a dict of the pane option values. If option is specified then the value for that option is returned. - Otherwise, sets the options to the correspoding values.""" + Otherwise, sets the options to the corresponding values.""" if option is not None: kw[option] = None return _val_or_dict(kw, self.tk.call, self._w, "pane", pane) diff --git a/lib-python/2.7/lib-tk/turtle.py b/lib-python/2.7/lib-tk/turtle.py --- a/lib-python/2.7/lib-tk/turtle.py +++ b/lib-python/2.7/lib-tk/turtle.py @@ -1385,7 +1385,7 @@ Optional argument: picname -- a string, name of a gif-file or "nopic". - If picname is a filename, set the corresponing image as background. + If picname is a filename, set the corresponding image as background. If picname is "nopic", delete backgroundimage, if present. If picname is None, return the filename of the current backgroundimage. 
@@ -1409,7 +1409,7 @@ Optional arguments: canvwidth -- positive integer, new width of canvas in pixels canvheight -- positive integer, new height of canvas in pixels - bg -- colorstring or color-tupel, new backgroundcolor + bg -- colorstring or color-tuple, new backgroundcolor If no arguments are given, return current (canvaswidth, canvasheight) Do not alter the drawing window. To observe hidden parts of @@ -3079,9 +3079,9 @@ fill="", width=ps) # Turtle now at position old, self._position = old - ## if undo is done during crating a polygon, the last vertex - ## will be deleted. if the polygon is entirel deleted, - ## creatigPoly will be set to False. + ## if undo is done during creating a polygon, the last vertex + ## will be deleted. if the polygon is entirely deleted, + ## creatingPoly will be set to False. ## Polygons created before the last one will not be affected by undo() if self._creatingPoly: if len(self._poly) > 0: @@ -3221,7 +3221,7 @@ def dot(self, size=None, *color): """Draw a dot with diameter size, using color. - Optional argumentS: + Optional arguments: size -- an integer >= 1 (if given) color -- a colorstring or a numeric color tuple @@ -3691,7 +3691,7 @@ class Turtle(RawTurtle): - """RawTurtle auto-crating (scrolled) canvas. + """RawTurtle auto-creating (scrolled) canvas. When a Turtle object is created or a function derived from some Turtle method is called a TurtleScreen object is automatically created. @@ -3731,7 +3731,7 @@ filename -- a string, used as filename default value is turtle_docstringdict - Has to be called explicitely, (not used by the turtle-graphics classes) + Has to be called explicitly, (not used by the turtle-graphics classes) The docstring dictionary will be written to the Python script .py It is intended to serve as a template for translation of the docstrings into different languages. 
diff --git a/lib-python/2.7/lib2to3/__main__.py b/lib-python/2.7/lib2to3/__main__.py new file mode 100644 --- /dev/null +++ b/lib-python/2.7/lib2to3/__main__.py @@ -0,0 +1,4 @@ +import sys +from .main import main + +sys.exit(main("lib2to3.fixes")) diff --git a/lib-python/2.7/lib2to3/fixes/fix_itertools.py b/lib-python/2.7/lib2to3/fixes/fix_itertools.py --- a/lib-python/2.7/lib2to3/fixes/fix_itertools.py +++ b/lib-python/2.7/lib2to3/fixes/fix_itertools.py @@ -13,7 +13,7 @@ class FixItertools(fixer_base.BaseFix): BM_compatible = True - it_funcs = "('imap'|'ifilter'|'izip'|'ifilterfalse')" + it_funcs = "('imap'|'ifilter'|'izip'|'izip_longest'|'ifilterfalse')" PATTERN = """ power< it='itertools' trailer< @@ -28,7 +28,8 @@ def transform(self, node, results): prefix = None func = results['func'][0] - if 'it' in results and func.value != u'ifilterfalse': + if ('it' in results and + func.value not in (u'ifilterfalse', u'izip_longest')): dot, it = (results['dot'], results['it']) # Remove the 'itertools' prefix = it.prefix diff --git a/lib-python/2.7/lib2to3/fixes/fix_itertools_imports.py b/lib-python/2.7/lib2to3/fixes/fix_itertools_imports.py --- a/lib-python/2.7/lib2to3/fixes/fix_itertools_imports.py +++ b/lib-python/2.7/lib2to3/fixes/fix_itertools_imports.py @@ -31,9 +31,10 @@ if member_name in (u'imap', u'izip', u'ifilter'): child.value = None child.remove() - elif member_name == u'ifilterfalse': + elif member_name in (u'ifilterfalse', u'izip_longest'): node.changed() - name_node.value = u'filterfalse' + name_node.value = (u'filterfalse' if member_name[1] == u'f' + else u'zip_longest') # Make sure the import statement is still sane children = imports.children[:] or [imports] diff --git a/lib-python/2.7/lib2to3/fixes/fix_metaclass.py b/lib-python/2.7/lib2to3/fixes/fix_metaclass.py --- a/lib-python/2.7/lib2to3/fixes/fix_metaclass.py +++ b/lib-python/2.7/lib2to3/fixes/fix_metaclass.py @@ -48,7 +48,7 @@ """ for node in cls_node.children: if node.type == syms.suite: - # already in the prefered format, do nothing + # already in the preferred format, do nothing return # !%@#! 
oneliners have no suite node, we have to fake one up diff --git a/lib-python/2.7/lib2to3/fixes/fix_urllib.py b/lib-python/2.7/lib2to3/fixes/fix_urllib.py --- a/lib-python/2.7/lib2to3/fixes/fix_urllib.py +++ b/lib-python/2.7/lib2to3/fixes/fix_urllib.py @@ -12,7 +12,7 @@ MAPPING = {"urllib": [ ("urllib.request", - ["URLOpener", "FancyURLOpener", "urlretrieve", + ["URLopener", "FancyURLopener", "urlretrieve", "_urlopener", "urlopen", "urlcleanup", "pathname2url", "url2pathname"]), ("urllib.parse", diff --git a/lib-python/2.7/lib2to3/main.py b/lib-python/2.7/lib2to3/main.py --- a/lib-python/2.7/lib2to3/main.py +++ b/lib-python/2.7/lib2to3/main.py @@ -101,7 +101,7 @@ parser.add_option("-j", "--processes", action="store", default=1, type="int", help="Run 2to3 concurrently") parser.add_option("-x", "--nofix", action="append", default=[], - help="Prevent a fixer from being run.") + help="Prevent a transformation from being run") parser.add_option("-l", "--list-fixes", action="store_true", help="List available transformations") parser.add_option("-p", "--print-function", action="store_true", @@ -113,7 +113,7 @@ parser.add_option("-w", "--write", action="store_true", help="Write back modified files") parser.add_option("-n", "--nobackups", action="store_true", default=False, - help="Don't write backups for modified files.") + help="Don't write backups for modified files") # Parse command line arguments refactor_stdin = False diff --git a/lib-python/2.7/lib2to3/patcomp.py b/lib-python/2.7/lib2to3/patcomp.py --- a/lib-python/2.7/lib2to3/patcomp.py +++ b/lib-python/2.7/lib2to3/patcomp.py @@ -12,6 +12,7 @@ # Python imports import os +import StringIO # Fairly local imports from .pgen2 import driver, literals, token, tokenize, parse, grammar @@ -32,7 +33,7 @@ def tokenize_wrapper(input): """Tokenizes a string suppressing significant whitespace.""" skip = set((token.NEWLINE, token.INDENT, token.DEDENT)) - tokens = tokenize.generate_tokens(driver.generate_lines(input).next) + tokens = tokenize.generate_tokens(StringIO.StringIO(input).readline) for quintuple in tokens: type, value, start, end, line_text = quintuple if type not in skip: diff --git a/lib-python/2.7/lib2to3/pgen2/conv.py b/lib-python/2.7/lib2to3/pgen2/conv.py --- a/lib-python/2.7/lib2to3/pgen2/conv.py +++ b/lib-python/2.7/lib2to3/pgen2/conv.py @@ -51,7 +51,7 @@ self.finish_off() def parse_graminit_h(self, filename): - """Parse the .h file writen by pgen. (Internal) + """Parse the .h file written by pgen. (Internal) This file is a sequence of #define statements defining the nonterminals of the grammar as numbers. We build two tables @@ -82,7 +82,7 @@ return True def parse_graminit_c(self, filename): - """Parse the .c file writen by pgen. (Internal) + """Parse the .c file written by pgen. (Internal) The file looks as follows. 
The first two lines are always this: diff --git a/lib-python/2.7/lib2to3/pgen2/driver.py b/lib-python/2.7/lib2to3/pgen2/driver.py --- a/lib-python/2.7/lib2to3/pgen2/driver.py +++ b/lib-python/2.7/lib2to3/pgen2/driver.py @@ -19,6 +19,7 @@ import codecs import os import logging +import StringIO import sys # Pgen imports @@ -101,18 +102,10 @@ def parse_string(self, text, debug=False): """Parse a string and return the syntax tree.""" - tokens = tokenize.generate_tokens(generate_lines(text).next) + tokens = tokenize.generate_tokens(StringIO.StringIO(text).readline) return self.parse_tokens(tokens, debug) -def generate_lines(text): - """Generator that behaves like readline without using StringIO.""" - for line in text.splitlines(True): - yield line - while True: - yield "" - - def load_grammar(gt="Grammar.txt", gp=None, save=True, force=False, logger=None): """Load the grammar (maybe from a pickle).""" diff --git a/lib-python/2.7/lib2to3/pytree.py b/lib-python/2.7/lib2to3/pytree.py --- a/lib-python/2.7/lib2to3/pytree.py +++ b/lib-python/2.7/lib2to3/pytree.py @@ -658,8 +658,8 @@ content: optional sequence of subsequences of patterns; if absent, matches one node; if present, each subsequence is an alternative [*] - min: optinal minumum number of times to match, default 0 - max: optional maximum number of times tro match, default HUGE + min: optional minimum number of times to match, default 0 + max: optional maximum number of times to match, default HUGE name: optional name assigned to this match [*] Thus, if content is [[a, b, c], [d, e], [f, g, h]] this is @@ -743,9 +743,11 @@ else: # The reason for this is that hitting the recursion limit usually # results in some ugly messages about how RuntimeErrors are being - # ignored. - save_stderr = sys.stderr - sys.stderr = StringIO() + # ignored. We don't do this on non-CPython implementation because + # they don't have this problem. + if hasattr(sys, "getrefcount"): + save_stderr = sys.stderr + sys.stderr = StringIO() try: for count, r in self._recursive_matches(nodes, 0): if self.name: @@ -759,7 +761,8 @@ r[self.name] = nodes[:count] yield count, r finally: - sys.stderr = save_stderr + if hasattr(sys, "getrefcount"): + sys.stderr = save_stderr def _iterative_matches(self, nodes): """Helper to iteratively yield the matches.""" diff --git a/lib-python/2.7/lib2to3/refactor.py b/lib-python/2.7/lib2to3/refactor.py --- a/lib-python/2.7/lib2to3/refactor.py +++ b/lib-python/2.7/lib2to3/refactor.py @@ -302,13 +302,14 @@ Files and subdirectories starting with '.' are skipped. 
""" + py_ext = os.extsep + "py" for dirpath, dirnames, filenames in os.walk(dir_name): self.log_debug("Descending into %s", dirpath) dirnames.sort() filenames.sort() for name in filenames: - if not name.startswith(".") and \ - os.path.splitext(name)[1].endswith("py"): + if (not name.startswith(".") and + os.path.splitext(name)[1] == py_ext): fullname = os.path.join(dirpath, name) self.refactor_file(fullname, write, doctests_only) # Modify dirnames in-place to remove subdirs with leading dots diff --git a/lib-python/2.7/lib2to3/tests/data/py2_test_grammar.py b/lib-python/2.7/lib2to3/tests/data/py2_test_grammar.py --- a/lib-python/2.7/lib2to3/tests/data/py2_test_grammar.py +++ b/lib-python/2.7/lib2to3/tests/data/py2_test_grammar.py @@ -316,7 +316,7 @@ ### simple_stmt: small_stmt (';' small_stmt)* [';'] x = 1; pass; del x def foo(): - # verify statments that end with semi-colons + # verify statements that end with semi-colons x = 1; pass; del x; foo() diff --git a/lib-python/2.7/lib2to3/tests/data/py3_test_grammar.py b/lib-python/2.7/lib2to3/tests/data/py3_test_grammar.py --- a/lib-python/2.7/lib2to3/tests/data/py3_test_grammar.py +++ b/lib-python/2.7/lib2to3/tests/data/py3_test_grammar.py @@ -356,7 +356,7 @@ ### simple_stmt: small_stmt (';' small_stmt)* [';'] x = 1; pass; del x def foo(): - # verify statments that end with semi-colons + # verify statements that end with semi-colons x = 1; pass; del x; foo() diff --git a/lib-python/2.7/lib2to3/tests/test_fixers.py b/lib-python/2.7/lib2to3/tests/test_fixers.py --- a/lib-python/2.7/lib2to3/tests/test_fixers.py +++ b/lib-python/2.7/lib2to3/tests/test_fixers.py @@ -3623,16 +3623,24 @@ a = """%s(f, a)""" self.checkall(b, a) - def test_2(self): + def test_qualified(self): b = """itertools.ifilterfalse(a, b)""" a = """itertools.filterfalse(a, b)""" self.check(b, a) - def test_4(self): + b = """itertools.izip_longest(a, b)""" + a = """itertools.zip_longest(a, b)""" + self.check(b, a) + + def test_2(self): b = """ifilterfalse(a, b)""" a = """filterfalse(a, b)""" self.check(b, a) + b = """izip_longest(a, b)""" + a = """zip_longest(a, b)""" + self.check(b, a) + def test_space_1(self): b = """ %s(f, a)""" a = """ %s(f, a)""" @@ -3643,9 +3651,14 @@ a = """ itertools.filterfalse(a, b)""" self.check(b, a) + b = """ itertools.izip_longest(a, b)""" + a = """ itertools.zip_longest(a, b)""" + self.check(b, a) + def test_run_order(self): self.assert_runs_after('map', 'zip', 'filter') + class Test_itertools_imports(FixerTestCase): fixer = 'itertools_imports' @@ -3696,18 +3709,19 @@ s = "from itertools import bar as bang" self.unchanged(s) - def test_ifilter(self): - b = "from itertools import ifilterfalse" - a = "from itertools import filterfalse" - self.check(b, a) - - b = "from itertools import imap, ifilterfalse, foo" - a = "from itertools import filterfalse, foo" - self.check(b, a) - - b = "from itertools import bar, ifilterfalse, foo" - a = "from itertools import bar, filterfalse, foo" - self.check(b, a) + def test_ifilter_and_zip_longest(self): + for name in "filterfalse", "zip_longest": + b = "from itertools import i%s" % (name,) + a = "from itertools import %s" % (name,) + self.check(b, a) + + b = "from itertools import imap, i%s, foo" % (name,) + a = "from itertools import %s, foo" % (name,) + self.check(b, a) + + b = "from itertools import bar, i%s, foo" % (name,) + a = "from itertools import bar, %s, foo" % (name,) + self.check(b, a) def test_import_star(self): s = "from itertools import *" diff --git a/lib-python/2.7/lib2to3/tests/test_parser.py 
b/lib-python/2.7/lib2to3/tests/test_parser.py --- a/lib-python/2.7/lib2to3/tests/test_parser.py +++ b/lib-python/2.7/lib2to3/tests/test_parser.py @@ -19,6 +19,16 @@ # Local imports from lib2to3.pgen2 import tokenize from ..pgen2.parse import ParseError +from lib2to3.pygram import python_symbols as syms + + +class TestDriver(support.TestCase): + + def test_formfeed(self): + s = """print 1\n\x0Cprint 2\n""" + t = driver.parse_string(s) + self.assertEqual(t.children[0].children[0].type, syms.print_stmt) + self.assertEqual(t.children[1].children[0].type, syms.print_stmt) class GrammarTest(support.TestCase): diff --git a/lib-python/2.7/lib2to3/tests/test_refactor.py b/lib-python/2.7/lib2to3/tests/test_refactor.py --- a/lib-python/2.7/lib2to3/tests/test_refactor.py +++ b/lib-python/2.7/lib2to3/tests/test_refactor.py @@ -223,6 +223,7 @@ "hi.py", ".dumb", ".after.py", + "notpy.npy", "sappy"] expected = ["hi.py"] check(tree, expected) diff --git a/lib-python/2.7/lib2to3/tests/test_util.py b/lib-python/2.7/lib2to3/tests/test_util.py --- a/lib-python/2.7/lib2to3/tests/test_util.py +++ b/lib-python/2.7/lib2to3/tests/test_util.py @@ -568,8 +568,8 @@ def test_from_import(self): node = parse('bar()') - fixer_util.touch_import("cgi", "escape", node) - self.assertEqual(str(node), 'from cgi import escape\nbar()\n\n') + fixer_util.touch_import("html", "escape", node) + self.assertEqual(str(node), 'from html import escape\nbar()\n\n') def test_name_import(self): node = parse('bar()') diff --git a/lib-python/2.7/locale.py b/lib-python/2.7/locale.py --- a/lib-python/2.7/locale.py +++ b/lib-python/2.7/locale.py @@ -621,7 +621,7 @@ 'tactis': 'TACTIS', 'euc_jp': 'eucJP', 'euc_kr': 'eucKR', - 'utf_8': 'UTF8', + 'utf_8': 'UTF-8', 'koi8_r': 'KOI8-R', 'koi8_u': 'KOI8-U', # XXX This list is still incomplete. If you know more diff --git a/lib-python/2.7/logging/__init__.py b/lib-python/2.7/logging/__init__.py --- a/lib-python/2.7/logging/__init__.py +++ b/lib-python/2.7/logging/__init__.py @@ -1627,6 +1627,7 @@ h = wr() if h: try: + h.acquire() h.flush() h.close() except (IOError, ValueError): @@ -1635,6 +1636,8 @@ # references to them are still around at # application exit. pass + finally: + h.release() except: if raiseExceptions: raise diff --git a/lib-python/2.7/logging/config.py b/lib-python/2.7/logging/config.py --- a/lib-python/2.7/logging/config.py +++ b/lib-python/2.7/logging/config.py @@ -226,14 +226,14 @@ propagate = 1 logger = logging.getLogger(qn) if qn in existing: - i = existing.index(qn) + i = existing.index(qn) + 1 # start with the entry after qn prefixed = qn + "." 
pflen = len(prefixed) num_existing = len(existing) - i = i + 1 # look at the entry after qn - while (i < num_existing) and (existing[i][:pflen] == prefixed): - child_loggers.append(existing[i]) - i = i + 1 + while i < num_existing: + if existing[i][:pflen] == prefixed: + child_loggers.append(existing[i]) + i += 1 existing.remove(qn) if "level" in opts: level = cp.get(sectname, "level") diff --git a/lib-python/2.7/logging/handlers.py b/lib-python/2.7/logging/handlers.py --- a/lib-python/2.7/logging/handlers.py +++ b/lib-python/2.7/logging/handlers.py @@ -125,6 +125,7 @@ """ if self.stream: self.stream.close() + self.stream = None if self.backupCount > 0: for i in range(self.backupCount - 1, 0, -1): sfn = "%s.%d" % (self.baseFilename, i) @@ -324,6 +325,7 @@ """ if self.stream: self.stream.close() + self.stream = None # get the time that this sequence started at and make it a TimeTuple t = self.rolloverAt - self.interval if self.utc: diff --git a/lib-python/2.7/mailbox.py b/lib-python/2.7/mailbox.py --- a/lib-python/2.7/mailbox.py +++ b/lib-python/2.7/mailbox.py @@ -234,27 +234,35 @@ def __init__(self, dirname, factory=rfc822.Message, create=True): """Initialize a Maildir instance.""" Mailbox.__init__(self, dirname, factory, create) + self._paths = { + 'tmp': os.path.join(self._path, 'tmp'), + 'new': os.path.join(self._path, 'new'), + 'cur': os.path.join(self._path, 'cur'), + } if not os.path.exists(self._path): if create: os.mkdir(self._path, 0700) - os.mkdir(os.path.join(self._path, 'tmp'), 0700) - os.mkdir(os.path.join(self._path, 'new'), 0700) - os.mkdir(os.path.join(self._path, 'cur'), 0700) + for path in self._paths.values(): + os.mkdir(path, 0o700) else: raise NoSuchMailboxError(self._path) self._toc = {} - self._last_read = None # Records last time we read cur/new - # NOTE: we manually invalidate _last_read each time we do any - # modifications ourselves, otherwise we might get tripped up by - # bogus mtime behaviour on some systems (see issue #6896). 
+ self._toc_mtimes = {} + for subdir in ('cur', 'new'): + self._toc_mtimes[subdir] = os.path.getmtime(self._paths[subdir]) + self._last_read = time.time() # Records last time we read cur/new + self._skewfactor = 0.1 # Adjust if os/fs clocks are skewing def add(self, message): """Add message and return assigned key.""" tmp_file = self._create_tmp() try: self._dump_message(message, tmp_file) - finally: - _sync_close(tmp_file) + except BaseException: + tmp_file.close() + os.remove(tmp_file.name) + raise + _sync_close(tmp_file) if isinstance(message, MaildirMessage): subdir = message.get_subdir() suffix = self.colon + message.get_info() @@ -280,15 +288,11 @@ raise if isinstance(message, MaildirMessage): os.utime(dest, (os.path.getatime(dest), message.get_date())) - # Invalidate cached toc - self._last_read = None return uniq def remove(self, key): """Remove the keyed message; raise KeyError if it doesn't exist.""" os.remove(os.path.join(self._path, self._lookup(key))) - # Invalidate cached toc (only on success) - self._last_read = None def discard(self, key): """If the keyed message exists, remove it.""" @@ -323,8 +327,6 @@ if isinstance(message, MaildirMessage): os.utime(new_path, (os.path.getatime(new_path), message.get_date())) - # Invalidate cached toc - self._last_read = None def get_message(self, key): """Return a Message representation or raise a KeyError.""" @@ -380,8 +382,8 @@ def flush(self): """Write any pending changes to disk.""" # Maildir changes are always written immediately, so there's nothing - # to do except invalidate our cached toc. - self._last_read = None + # to do. + pass def lock(self): """Lock the mailbox.""" @@ -479,36 +481,39 @@ def _refresh(self): """Update table of contents mapping.""" - if self._last_read is not None: - for subdir in ('new', 'cur'): - mtime = os.path.getmtime(os.path.join(self._path, subdir)) - if mtime > self._last_read: - break - else: + # If it has been less than two seconds since the last _refresh() call, + # we have to unconditionally re-read the mailbox just in case it has + # been modified, because os.path.mtime() has a 2 sec resolution in the + # most common worst case (FAT) and a 1 sec resolution typically. This + # results in a few unnecessary re-reads when _refresh() is called + # multiple times in that interval, but once the clock ticks over, we + # will only re-read as needed. Because the filesystem might be being + # served by an independent system with its own clock, we record and + # compare with the mtimes from the filesystem. Because the other + # system's clock might be skewing relative to our clock, we add an + # extra delta to our wait. The default is one tenth second, but is an + # instance variable and so can be adjusted if dealing with a + # particularly skewed or irregular system. + if time.time() - self._last_read > 2 + self._skewfactor: + refresh = False + for subdir in self._toc_mtimes: + mtime = os.path.getmtime(self._paths[subdir]) + if mtime > self._toc_mtimes[subdir]: + refresh = True + self._toc_mtimes[subdir] = mtime + if not refresh: return - - # We record the current time - 1sec so that, if _refresh() is called - # again in the same second, we will always re-read the mailbox - # just in case it's been modified. (os.path.mtime() only has - # 1sec resolution.) This results in a few unnecessary re-reads - # when _refresh() is called multiple times in the same second, - # but once the clock ticks over, we will only re-read as needed. 
- now = time.time() - 1 - + # Refresh toc self._toc = {} - def update_dir (subdir): - path = os.path.join(self._path, subdir) + for subdir in self._toc_mtimes: + path = self._paths[subdir] for entry in os.listdir(path): p = os.path.join(path, entry) if os.path.isdir(p): continue uniq = entry.split(self.colon)[0] self._toc[uniq] = os.path.join(subdir, entry) - - update_dir('new') - update_dir('cur') - - self._last_read = now + self._last_read = time.time() def _lookup(self, key): """Use TOC to return subpath for given key, or raise a KeyError.""" @@ -551,7 +556,7 @@ f = open(self._path, 'wb+') else: raise NoSuchMailboxError(self._path) - elif e.errno == errno.EACCES: + elif e.errno in (errno.EACCES, errno.EROFS): f = open(self._path, 'rb') else: raise @@ -700,9 +705,14 @@ def _append_message(self, message): """Append message to mailbox and return (start, stop) offsets.""" self._file.seek(0, 2) - self._pre_message_hook(self._file) - offsets = self._install_message(message) - self._post_message_hook(self._file) + before = self._file.tell() + try: + self._pre_message_hook(self._file) + offsets = self._install_message(message) + self._post_message_hook(self._file) + except BaseException: + self._file.truncate(before) + raise self._file.flush() self._file_length = self._file.tell() # Record current length of mailbox return offsets @@ -868,18 +878,29 @@ new_key = max(keys) + 1 new_path = os.path.join(self._path, str(new_key)) f = _create_carefully(new_path) + closed = False try: if self._locked: _lock_file(f) try: - self._dump_message(message, f) + try: + self._dump_message(message, f) + except BaseException: + # Unlock and close so it can be deleted on Windows + if self._locked: + _unlock_file(f) + _sync_close(f) + closed = True + os.remove(new_path) + raise if isinstance(message, MHMessage): self._dump_sequences(message, new_key) finally: if self._locked: _unlock_file(f) finally: - _sync_close(f) + if not closed: + _sync_close(f) return new_key def remove(self, key): @@ -1886,7 +1907,7 @@ try: fcntl.lockf(f, fcntl.LOCK_EX | fcntl.LOCK_NB) except IOError, e: - if e.errno in (errno.EAGAIN, errno.EACCES): + if e.errno in (errno.EAGAIN, errno.EACCES, errno.EROFS): raise ExternalClashError('lockf: lock unavailable: %s' % f.name) else: @@ -1896,7 +1917,7 @@ pre_lock = _create_temporary(f.name + '.lock') pre_lock.close() except IOError, e: - if e.errno == errno.EACCES: + if e.errno in (errno.EACCES, errno.EROFS): return # Without write access, just skip dotlocking. 
else: raise diff --git a/lib-python/2.7/msilib/__init__.py b/lib-python/2.7/msilib/__init__.py --- a/lib-python/2.7/msilib/__init__.py +++ b/lib-python/2.7/msilib/__init__.py @@ -173,11 +173,10 @@ add_data(db, table, getattr(module, table)) def make_id(str): - #str = str.replace(".", "_") # colons are allowed - str = str.replace(" ", "_") - str = str.replace("-", "_") - if str[0] in string.digits: - str = "_"+str + identifier_chars = string.ascii_letters + string.digits + "._" + str = "".join([c if c in identifier_chars else "_" for c in str]) + if str[0] in (string.digits + "."): + str = "_" + str assert re.match("^[A-Za-z_][A-Za-z0-9_.]*$", str), "FILE"+str return str @@ -285,19 +284,28 @@ [(feature.id, component)]) def make_short(self, file): + oldfile = file + file = file.replace('+', '_') + file = ''.join(c for c in file if not c in ' "/\[]:;=,') parts = file.split(".") - if len(parts)>1: + if len(parts) > 1: + prefix = "".join(parts[:-1]).upper() suffix = parts[-1].upper() + if not prefix: + prefix = suffix + suffix = None else: + prefix = file.upper() suffix = None - prefix = parts[0].upper() - if len(prefix) <= 8 and (not suffix or len(suffix)<=3): + if len(parts) < 3 and len(prefix) <= 8 and file == oldfile and ( + not suffix or len(suffix) <= 3): if suffix: file = prefix+"."+suffix else: file = prefix - assert file not in self.short_names else: + file = None + if file is None or file in self.short_names: prefix = prefix[:6] if suffix: suffix = suffix[:3] diff --git a/lib-python/2.7/multiprocessing/__init__.py b/lib-python/2.7/multiprocessing/__init__.py --- a/lib-python/2.7/multiprocessing/__init__.py +++ b/lib-python/2.7/multiprocessing/__init__.py @@ -38,6 +38,7 @@ # HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT # LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY # OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __version__ = '0.70a1' @@ -115,8 +116,11 @@ except (ValueError, KeyError): num = 0 elif 'bsd' in sys.platform or sys.platform == 'darwin': + comm = '/sbin/sysctl -n hw.ncpu' + if sys.platform == 'darwin': + comm = '/usr' + comm try: - with os.popen('sysctl -n hw.ncpu') as p: + with os.popen(comm) as p: num = int(p.read()) except ValueError: num = 0 diff --git a/lib-python/2.7/multiprocessing/connection.py b/lib-python/2.7/multiprocessing/connection.py --- a/lib-python/2.7/multiprocessing/connection.py +++ b/lib-python/2.7/multiprocessing/connection.py @@ -3,7 +3,33 @@ # # multiprocessing/connection.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. 
+# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __all__ = [ 'Client', 'Listener', 'Pipe' ] diff --git a/lib-python/2.7/multiprocessing/dummy/__init__.py b/lib-python/2.7/multiprocessing/dummy/__init__.py --- a/lib-python/2.7/multiprocessing/dummy/__init__.py +++ b/lib-python/2.7/multiprocessing/dummy/__init__.py @@ -3,7 +3,33 @@ # # multiprocessing/dummy/__init__.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __all__ = [ diff --git a/lib-python/2.7/multiprocessing/dummy/connection.py b/lib-python/2.7/multiprocessing/dummy/connection.py --- a/lib-python/2.7/multiprocessing/dummy/connection.py +++ b/lib-python/2.7/multiprocessing/dummy/connection.py @@ -3,7 +3,33 @@ # # multiprocessing/dummy/connection.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. 
Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __all__ = [ 'Client', 'Listener', 'Pipe' ] diff --git a/lib-python/2.7/multiprocessing/forking.py b/lib-python/2.7/multiprocessing/forking.py --- a/lib-python/2.7/multiprocessing/forking.py +++ b/lib-python/2.7/multiprocessing/forking.py @@ -3,7 +3,33 @@ # # multiprocessing/forking.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # import os @@ -172,6 +198,7 @@ TERMINATE = 0x10000 WINEXE = (sys.platform == 'win32' and getattr(sys, 'frozen', False)) + WINSERVICE = sys.executable.lower().endswith("pythonservice.exe") exit = win32.ExitProcess close = win32.CloseHandle @@ -181,7 +208,7 @@ # People embedding Python want to modify it. 
# - if sys.executable.lower().endswith('pythonservice.exe'): + if WINSERVICE: _python_exe = os.path.join(sys.exec_prefix, 'python.exe') else: _python_exe = sys.executable @@ -371,7 +398,7 @@ if _logger is not None: d['log_level'] = _logger.getEffectiveLevel() - if not WINEXE: + if not WINEXE and not WINSERVICE: main_path = getattr(sys.modules['__main__'], '__file__', None) if not main_path and sys.argv[0] not in ('', '-c'): main_path = sys.argv[0] diff --git a/lib-python/2.7/multiprocessing/heap.py b/lib-python/2.7/multiprocessing/heap.py --- a/lib-python/2.7/multiprocessing/heap.py +++ b/lib-python/2.7/multiprocessing/heap.py @@ -3,7 +3,33 @@ # # multiprocessing/heap.py # -# Copyright (c) 2007-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # import bisect diff --git a/lib-python/2.7/multiprocessing/managers.py b/lib-python/2.7/multiprocessing/managers.py --- a/lib-python/2.7/multiprocessing/managers.py +++ b/lib-python/2.7/multiprocessing/managers.py @@ -4,7 +4,33 @@ # # multiprocessing/managers.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. 
+# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __all__ = [ 'BaseManager', 'SyncManager', 'BaseProxy', 'Token' ] diff --git a/lib-python/2.7/multiprocessing/pool.py b/lib-python/2.7/multiprocessing/pool.py --- a/lib-python/2.7/multiprocessing/pool.py +++ b/lib-python/2.7/multiprocessing/pool.py @@ -3,7 +3,33 @@ # # multiprocessing/pool.py # -# Copyright (c) 2007-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __all__ = ['Pool'] @@ -269,6 +295,8 @@ while pool._worker_handler._state == RUN and pool._state == RUN: pool._maintain_pool() time.sleep(0.1) + # send sentinel to stop workers + pool._taskqueue.put(None) debug('worker handler exiting') @staticmethod @@ -387,7 +415,6 @@ if self._state == RUN: self._state = CLOSE self._worker_handler._state = CLOSE - self._taskqueue.put(None) def terminate(self): debug('terminating pool') @@ -421,7 +448,6 @@ worker_handler._state = TERMINATE task_handler._state = TERMINATE - taskqueue.put(None) # sentinel debug('helping task handler/workers to finish') cls._help_stuff_finish(inqueue, task_handler, len(pool)) @@ -431,6 +457,11 @@ result_handler._state = TERMINATE outqueue.put(None) # sentinel + # We must wait for the worker handler to exit before terminating + # workers because we don't want workers to be restarted behind our back. 
+ debug('joining worker handler') + worker_handler.join() + # Terminate workers which haven't already finished. if pool and hasattr(pool[0], 'terminate'): debug('terminating workers') diff --git a/lib-python/2.7/multiprocessing/process.py b/lib-python/2.7/multiprocessing/process.py --- a/lib-python/2.7/multiprocessing/process.py +++ b/lib-python/2.7/multiprocessing/process.py @@ -3,7 +3,33 @@ # # multiprocessing/process.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __all__ = ['Process', 'current_process', 'active_children'] diff --git a/lib-python/2.7/multiprocessing/queues.py b/lib-python/2.7/multiprocessing/queues.py --- a/lib-python/2.7/multiprocessing/queues.py +++ b/lib-python/2.7/multiprocessing/queues.py @@ -3,7 +3,33 @@ # # multiprocessing/queues.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. 
IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __all__ = ['Queue', 'SimpleQueue', 'JoinableQueue'] diff --git a/lib-python/2.7/multiprocessing/reduction.py b/lib-python/2.7/multiprocessing/reduction.py --- a/lib-python/2.7/multiprocessing/reduction.py +++ b/lib-python/2.7/multiprocessing/reduction.py @@ -4,7 +4,33 @@ # # multiprocessing/reduction.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __all__ = [] diff --git a/lib-python/2.7/multiprocessing/sharedctypes.py b/lib-python/2.7/multiprocessing/sharedctypes.py --- a/lib-python/2.7/multiprocessing/sharedctypes.py +++ b/lib-python/2.7/multiprocessing/sharedctypes.py @@ -3,7 +3,33 @@ # # multiprocessing/sharedctypes.py # -# Copyright (c) 2007-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. 
+# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # import sys @@ -52,9 +78,11 @@ Returns a ctypes array allocated from shared memory ''' type_ = typecode_to_type.get(typecode_or_type, typecode_or_type) - if isinstance(size_or_initializer, int): + if isinstance(size_or_initializer, (int, long)): type_ = type_ * size_or_initializer - return _new_value(type_) + obj = _new_value(type_) + ctypes.memset(ctypes.addressof(obj), 0, ctypes.sizeof(obj)) + return obj else: type_ = type_ * len(size_or_initializer) result = _new_value(type_) diff --git a/lib-python/2.7/multiprocessing/synchronize.py b/lib-python/2.7/multiprocessing/synchronize.py --- a/lib-python/2.7/multiprocessing/synchronize.py +++ b/lib-python/2.7/multiprocessing/synchronize.py @@ -3,7 +3,33 @@ # # multiprocessing/synchronize.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __all__ = [ diff --git a/lib-python/2.7/multiprocessing/util.py b/lib-python/2.7/multiprocessing/util.py --- a/lib-python/2.7/multiprocessing/util.py +++ b/lib-python/2.7/multiprocessing/util.py @@ -3,7 +3,33 @@ # # multiprocessing/util.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. 
+# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # import itertools diff --git a/lib-python/2.7/netrc.py b/lib-python/2.7/netrc.py --- a/lib-python/2.7/netrc.py +++ b/lib-python/2.7/netrc.py @@ -34,11 +34,19 @@ def _parse(self, file, fp): lexer = shlex.shlex(fp) lexer.wordchars += r"""!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~""" + lexer.commenters = lexer.commenters.replace('#', '') while 1: # Look for a machine, default, or macdef top-level keyword toplevel = tt = lexer.get_token() if not tt: break + elif tt[0] == '#': + # seek to beginning of comment, in case reading the token put + # us on a new line, and then skip the rest of the line. + pos = len(tt) + 1 + lexer.instream.seek(-pos, 1) + lexer.instream.readline() + continue elif tt == 'machine': entryname = lexer.get_token() elif tt == 'default': @@ -64,8 +72,8 @@ self.hosts[entryname] = {} while 1: tt = lexer.get_token() - if (tt=='' or tt == 'machine' or - tt == 'default' or tt =='macdef'): + if (tt.startswith('#') or + tt in {'', 'machine', 'default', 'macdef'}): if password: self.hosts[entryname] = (login, account, password) lexer.push_token(tt) diff --git a/lib-python/2.7/nntplib.py b/lib-python/2.7/nntplib.py --- a/lib-python/2.7/nntplib.py +++ b/lib-python/2.7/nntplib.py @@ -103,7 +103,7 @@ readermode is sometimes necessary if you are connecting to an NNTP server on the local machine and intend to call - reader-specific comamnds, such as `group'. If you get + reader-specific commands, such as `group'. If you get unexpected NNTPPermanentErrors, you might need to set readermode. """ diff --git a/lib-python/2.7/ntpath.py b/lib-python/2.7/ntpath.py --- a/lib-python/2.7/ntpath.py +++ b/lib-python/2.7/ntpath.py @@ -310,7 +310,7 @@ # - $varname is accepted. # - %varname% is accepted. # - varnames can be made out of letters, digits and the characters '_-' -# (though is not verifed in the ${varname} and %varname% cases) +# (though is not verified in the ${varname} and %varname% cases) # XXX With COMMAND.COM you can use any characters in a variable name, # XXX except '^|<>='. 
diff --git a/lib-python/2.7/nturl2path.py b/lib-python/2.7/nturl2path.py --- a/lib-python/2.7/nturl2path.py +++ b/lib-python/2.7/nturl2path.py @@ -25,11 +25,14 @@ error = 'Bad URL: ' + url raise IOError, error drive = comp[0][-1].upper() + path = drive + ':' components = comp[1].split('/') - path = drive + ':' - for comp in components: + for comp in components: if comp: path = path + '\\' + urllib.unquote(comp) + # Issue #11474: url like '/C|/' should convert into 'C:\\' + if path.endswith(':') and url.endswith('/'): + path += '\\' return path def pathname2url(p): diff --git a/lib-python/2.7/numbers.py b/lib-python/2.7/numbers.py --- a/lib-python/2.7/numbers.py +++ b/lib-python/2.7/numbers.py @@ -63,7 +63,7 @@ @abstractproperty def imag(self): - """Retrieve the real component of this number. + """Retrieve the imaginary component of this number. This should subclass Real. """ diff --git a/lib-python/2.7/optparse.py b/lib-python/2.7/optparse.py --- a/lib-python/2.7/optparse.py +++ b/lib-python/2.7/optparse.py @@ -1131,6 +1131,11 @@ prog : string the name of the current program (to override os.path.basename(sys.argv[0])). + description : string + A paragraph of text giving a brief overview of your program. + optparse reformats this paragraph to fit the current terminal + width and prints it when the user requests help (after usage, + but before the list of options). epilog : string paragraph of help text to print after option help diff --git a/lib-python/2.7/pickletools.py b/lib-python/2.7/pickletools.py --- a/lib-python/2.7/pickletools.py +++ b/lib-python/2.7/pickletools.py @@ -1370,7 +1370,7 @@ proto=0, doc="""Read an object from the memo and push it on the stack. - The index of the memo object to push is given by the newline-teriminated + The index of the memo object to push is given by the newline-terminated decimal string following. BINGET and LONG_BINGET are space-optimized versions. """), diff --git a/lib-python/2.7/pkgutil.py b/lib-python/2.7/pkgutil.py --- a/lib-python/2.7/pkgutil.py +++ b/lib-python/2.7/pkgutil.py @@ -11,7 +11,7 @@ __all__ = [ 'get_importer', 'iter_importers', 'get_loader', 'find_loader', - 'walk_packages', 'iter_modules', + 'walk_packages', 'iter_modules', 'get_data', 'ImpImporter', 'ImpLoader', 'read_code', 'extend_path', ] diff --git a/lib-python/2.7/platform.py b/lib-python/2.7/platform.py --- a/lib-python/2.7/platform.py +++ b/lib-python/2.7/platform.py @@ -503,7 +503,7 @@ info = pipe.read() if pipe.close(): raise os.error,'command failed' - # XXX How can I supress shell errors from being written + # XXX How can I suppress shell errors from being written # to stderr ? except os.error,why: #print 'Command %s failed: %s' % (cmd,why) @@ -1448,9 +1448,10 @@ """ Returns a string identifying the Python implementation. Currently, the following implementations are identified: - 'CPython' (C implementation of Python), - 'IronPython' (.NET implementation of Python), - 'Jython' (Java implementation of Python). + 'CPython' (C implementation of Python), + 'IronPython' (.NET implementation of Python), + 'Jython' (Java implementation of Python), + 'PyPy' (Python implementation of Python). """ return _sys_version()[0] diff --git a/lib-python/2.7/pydoc.py b/lib-python/2.7/pydoc.py --- a/lib-python/2.7/pydoc.py +++ b/lib-python/2.7/pydoc.py @@ -156,7 +156,7 @@ no.append(x) return yes, no -def visiblename(name, all=None): +def visiblename(name, all=None, obj=None): """Decide whether to show documentation on a variable.""" # Certain special names are redundant. 
 _hidden_names = ('__builtins__', '__doc__', '__file__', '__path__',
@@ -164,6 +164,9 @@
     if name in _hidden_names: return 0
     # Private names are hidden, but special names are displayed.
     if name.startswith('__') and name.endswith('__'): return 1
+    # Namedtuples have public fields and methods with a single leading underscore
+    if name.startswith('_') and hasattr(obj, '_fields'):
+        return 1
     if all is not None:
         # only document that which the programmer exported in __all__
         return name in all
@@ -475,9 +478,9 @@
     def multicolumn(self, list, format, cols=4):
         """Format a list of items into a multi-column list."""
         result = ''
-        rows = (len(list)+cols-1)/cols
+        rows = (len(list)+cols-1)//cols
         for col in range(cols):
-            result = result + '<td width="%d%%" valign=top>' % (100/cols)
+            result = result + '<td width="%d%%" valign=top>' % (100//cols)
             for i in range(rows*col, rows*col+rows):
                 if i < len(list):
                     result = result + format(list[i]) + '<br>
\n' @@ -627,7 +630,7 @@ # if __all__ exists, believe it. Otherwise use old heuristic. if (all is not None or (inspect.getmodule(value) or object) is object): - if visiblename(key, all): + if visiblename(key, all, object): classes.append((key, value)) cdict[key] = cdict[value] = '#' + key for key, value in classes: @@ -643,13 +646,13 @@ # if __all__ exists, believe it. Otherwise use old heuristic. if (all is not None or inspect.isbuiltin(value) or inspect.getmodule(value) is object): - if visiblename(key, all): + if visiblename(key, all, object): funcs.append((key, value)) fdict[key] = '#-' + key if inspect.isfunction(value): fdict[value] = fdict[key] data = [] for key, value in inspect.getmembers(object, isdata): - if visiblename(key, all): + if visiblename(key, all, object): data.append((key, value)) doc = self.markup(getdoc(object), self.preformat, fdict, cdict) @@ -773,7 +776,7 @@ push('\n') return attrs - attrs = filter(lambda data: visiblename(data[0]), + attrs = filter(lambda data: visiblename(data[0], obj=object), classify_class_attrs(object)) mdict = {} for key, kind, homecls, value in attrs: @@ -1042,18 +1045,18 @@ # if __all__ exists, believe it. Otherwise use old heuristic. if (all is not None or (inspect.getmodule(value) or object) is object): - if visiblename(key, all): + if visiblename(key, all, object): classes.append((key, value)) funcs = [] for key, value in inspect.getmembers(object, inspect.isroutine): # if __all__ exists, believe it. Otherwise use old heuristic. if (all is not None or inspect.isbuiltin(value) or inspect.getmodule(value) is object): - if visiblename(key, all): + if visiblename(key, all, object): funcs.append((key, value)) data = [] for key, value in inspect.getmembers(object, isdata): - if visiblename(key, all): + if visiblename(key, all, object): data.append((key, value)) modpkgs = [] @@ -1113,7 +1116,7 @@ result = result + self.section('CREDITS', str(object.__credits__)) return result - def docclass(self, object, name=None, mod=None): + def docclass(self, object, name=None, mod=None, *ignored): """Produce text documentation for a given class object.""" realname = object.__name__ name = name or realname @@ -1186,7 +1189,7 @@ name, mod, maxlen=70, doc=doc) + '\n') return attrs - attrs = filter(lambda data: visiblename(data[0]), + attrs = filter(lambda data: visiblename(data[0], obj=object), classify_class_attrs(object)) while attrs: if mro: @@ -1718,8 +1721,9 @@ return '' return '' - def __call__(self, request=None): - if request is not None: + _GoInteractive = object() + def __call__(self, request=_GoInteractive): + if request is not self._GoInteractive: self.help(request) else: self.intro() diff --git a/lib-python/2.7/pydoc_data/topics.py b/lib-python/2.7/pydoc_data/topics.py --- a/lib-python/2.7/pydoc_data/topics.py +++ b/lib-python/2.7/pydoc_data/topics.py @@ -1,16 +1,16 @@ -# Autogenerated by Sphinx on Sat Jul 3 08:52:04 2010 +# Autogenerated by Sphinx on Sat Jun 11 09:49:30 2011 topics = {'assert': u'\nThe ``assert`` statement\n************************\n\nAssert statements are a convenient way to insert debugging assertions\ninto a program:\n\n assert_stmt ::= "assert" expression ["," expression]\n\nThe simple form, ``assert expression``, is equivalent to\n\n if __debug__:\n if not expression: raise AssertionError\n\nThe extended form, ``assert expression1, expression2``, is equivalent\nto\n\n if __debug__:\n if not expression1: raise AssertionError(expression2)\n\nThese equivalences assume that ``__debug__`` and ``AssertionError``\nrefer to the 
built-in variables with those names. In the current\nimplementation, the built-in variable ``__debug__`` is ``True`` under\nnormal circumstances, ``False`` when optimization is requested\n(command line option -O). The current code generator emits no code\nfor an assert statement when optimization is requested at compile\ntime. Note that it is unnecessary to include the source code for the\nexpression that failed in the error message; it will be displayed as\npart of the stack trace.\n\nAssignments to ``__debug__`` are illegal. The value for the built-in\nvariable is determined when the interpreter starts.\n', - 'assignment': u'\nAssignment statements\n*********************\n\nAssignment statements are used to (re)bind names to values and to\nmodify attributes or items of mutable objects:\n\n assignment_stmt ::= (target_list "=")+ (expression_list | yield_expression)\n target_list ::= target ("," target)* [","]\n target ::= identifier\n | "(" target_list ")"\n | "[" target_list "]"\n | attributeref\n | subscription\n | slicing\n\n(See section *Primaries* for the syntax definitions for the last three\nsymbols.)\n\nAn assignment statement evaluates the expression list (remember that\nthis can be a single expression or a comma-separated list, the latter\nyielding a tuple) and assigns the single resulting object to each of\nthe target lists, from left to right.\n\nAssignment is defined recursively depending on the form of the target\n(list). When a target is part of a mutable object (an attribute\nreference, subscription or slicing), the mutable object must\nultimately perform the assignment and decide about its validity, and\nmay raise an exception if the assignment is unacceptable. The rules\nobserved by various types and the exceptions raised are given with the\ndefinition of the object types (see section *The standard type\nhierarchy*).\n\nAssignment of an object to a target list is recursively defined as\nfollows.\n\n* If the target list is a single target: The object is assigned to\n that target.\n\n* If the target list is a comma-separated list of targets: The object\n must be an iterable with the same number of items as there are\n targets in the target list, and the items are assigned, from left to\n right, to the corresponding targets. (This rule is relaxed as of\n Python 1.5; in earlier versions, the object had to be a tuple.\n Since strings are sequences, an assignment like ``a, b = "xy"`` is\n now legal as long as the string has the right length.)\n\nAssignment of an object to a single target is recursively defined as\nfollows.\n\n* If the target is an identifier (name):\n\n * If the name does not occur in a ``global`` statement in the\n current code block: the name is bound to the object in the current\n local namespace.\n\n * Otherwise: the name is bound to the object in the current global\n namespace.\n\n The name is rebound if it was already bound. This may cause the\n reference count for the object previously bound to the name to reach\n zero, causing the object to be deallocated and its destructor (if it\n has one) to be called.\n\n* If the target is a target list enclosed in parentheses or in square\n brackets: The object must be an iterable with the same number of\n items as there are targets in the target list, and its items are\n assigned, from left to right, to the corresponding targets.\n\n* If the target is an attribute reference: The primary expression in\n the reference is evaluated. 
It should yield an object with\n assignable attributes; if this is not the case, ``TypeError`` is\n raised. That object is then asked to assign the assigned object to\n the given attribute; if it cannot perform the assignment, it raises\n an exception (usually but not necessarily ``AttributeError``).\n\n Note: If the object is a class instance and the attribute reference\n occurs on both sides of the assignment operator, the RHS expression,\n ``a.x`` can access either an instance attribute or (if no instance\n attribute exists) a class attribute. The LHS target ``a.x`` is\n always set as an instance attribute, creating it if necessary.\n Thus, the two occurrences of ``a.x`` do not necessarily refer to the\n same attribute: if the RHS expression refers to a class attribute,\n the LHS creates a new instance attribute as the target of the\n assignment:\n\n class Cls:\n x = 3 # class variable\n inst = Cls()\n inst.x = inst.x + 1 # writes inst.x as 4 leaving Cls.x as 3\n\n This description does not necessarily apply to descriptor\n attributes, such as properties created with ``property()``.\n\n* If the target is a subscription: The primary expression in the\n reference is evaluated. It should yield either a mutable sequence\n object (such as a list) or a mapping object (such as a dictionary).\n Next, the subscript expression is evaluated.\n\n If the primary is a mutable sequence object (such as a list), the\n subscript must yield a plain integer. If it is negative, the\n sequence\'s length is added to it. The resulting value must be a\n nonnegative integer less than the sequence\'s length, and the\n sequence is asked to assign the assigned object to its item with\n that index. If the index is out of range, ``IndexError`` is raised\n (assignment to a subscripted sequence cannot add new items to a\n list).\n\n If the primary is a mapping object (such as a dictionary), the\n subscript must have a type compatible with the mapping\'s key type,\n and the mapping is then asked to create a key/datum pair which maps\n the subscript to the assigned object. This can either replace an\n existing key/value pair with the same key value, or insert a new\n key/value pair (if no key with the same value existed).\n\n* If the target is a slicing: The primary expression in the reference\n is evaluated. It should yield a mutable sequence object (such as a\n list). The assigned object should be a sequence object of the same\n type. Next, the lower and upper bound expressions are evaluated,\n insofar they are present; defaults are zero and the sequence\'s\n length. The bounds should evaluate to (small) integers. If either\n bound is negative, the sequence\'s length is added to it. The\n resulting bounds are clipped to lie between zero and the sequence\'s\n length, inclusive. Finally, the sequence object is asked to replace\n the slice with the items of the assigned sequence. 
The length of\n the slice may be different from the length of the assigned sequence,\n thus changing the length of the target sequence, if the object\n allows it.\n\n**CPython implementation detail:** In the current implementation, the\nsyntax for targets is taken to be the same as for expressions, and\ninvalid syntax is rejected during the code generation phase, causing\nless detailed error messages.\n\nWARNING: Although the definition of assignment implies that overlaps\nbetween the left-hand side and the right-hand side are \'safe\' (for\nexample ``a, b = b, a`` swaps two variables), overlaps *within* the\ncollection of assigned-to variables are not safe! For instance, the\nfollowing program prints ``[0, 2]``:\n\n x = [0, 1]\n i = 0\n i, x[i] = 1, 2\n print x\n\n\nAugmented assignment statements\n===============================\n\nAugmented assignment is the combination, in a single statement, of a\nbinary operation and an assignment statement:\n\n augmented_assignment_stmt ::= augtarget augop (expression_list | yield_expression)\n augtarget ::= identifier | attributeref | subscription | slicing\n augop ::= "+=" | "-=" | "*=" | "/=" | "//=" | "%=" | "**="\n | ">>=" | "<<=" | "&=" | "^=" | "|="\n\n(See section *Primaries* for the syntax definitions for the last three\nsymbols.)\n\nAn augmented assignment evaluates the target (which, unlike normal\nassignment statements, cannot be an unpacking) and the expression\nlist, performs the binary operation specific to the type of assignment\non the two operands, and assigns the result to the original target.\nThe target is only evaluated once.\n\nAn augmented assignment expression like ``x += 1`` can be rewritten as\n``x = x + 1`` to achieve a similar, but not exactly equal effect. In\nthe augmented version, ``x`` is only evaluated once. Also, when\npossible, the actual operation is performed *in-place*, meaning that\nrather than creating a new object and assigning that to the target,\nthe old object is modified instead.\n\nWith the exception of assigning to tuples and multiple targets in a\nsingle statement, the assignment done by augmented assignment\nstatements is handled the same way as normal assignments. Similarly,\nwith the exception of the possible *in-place* behavior, the binary\noperation performed by augmented assignment is the same as the normal\nbinary operations.\n\nFor targets which are attribute references, the same *caveat about\nclass and instance attributes* applies as for regular assignments.\n', + 'assignment': u'\nAssignment statements\n*********************\n\nAssignment statements are used to (re)bind names to values and to\nmodify attributes or items of mutable objects:\n\n assignment_stmt ::= (target_list "=")+ (expression_list | yield_expression)\n target_list ::= target ("," target)* [","]\n target ::= identifier\n | "(" target_list ")"\n | "[" target_list "]"\n | attributeref\n | subscription\n | slicing\n\n(See section *Primaries* for the syntax definitions for the last three\nsymbols.)\n\nAn assignment statement evaluates the expression list (remember that\nthis can be a single expression or a comma-separated list, the latter\nyielding a tuple) and assigns the single resulting object to each of\nthe target lists, from left to right.\n\nAssignment is defined recursively depending on the form of the target\n(list). 
When a target is part of a mutable object (an attribute\nreference, subscription or slicing), the mutable object must\nultimately perform the assignment and decide about its validity, and\nmay raise an exception if the assignment is unacceptable. The rules\nobserved by various types and the exceptions raised are given with the\ndefinition of the object types (see section *The standard type\nhierarchy*).\n\nAssignment of an object to a target list is recursively defined as\nfollows.\n\n* If the target list is a single target: The object is assigned to\n that target.\n\n* If the target list is a comma-separated list of targets: The object\n must be an iterable with the same number of items as there are\n targets in the target list, and the items are assigned, from left to\n right, to the corresponding targets.\n\nAssignment of an object to a single target is recursively defined as\nfollows.\n\n* If the target is an identifier (name):\n\n * If the name does not occur in a ``global`` statement in the\n current code block: the name is bound to the object in the current\n local namespace.\n\n * Otherwise: the name is bound to the object in the current global\n namespace.\n\n The name is rebound if it was already bound. This may cause the\n reference count for the object previously bound to the name to reach\n zero, causing the object to be deallocated and its destructor (if it\n has one) to be called.\n\n* If the target is a target list enclosed in parentheses or in square\n brackets: The object must be an iterable with the same number of\n items as there are targets in the target list, and its items are\n assigned, from left to right, to the corresponding targets.\n\n* If the target is an attribute reference: The primary expression in\n the reference is evaluated. It should yield an object with\n assignable attributes; if this is not the case, ``TypeError`` is\n raised. That object is then asked to assign the assigned object to\n the given attribute; if it cannot perform the assignment, it raises\n an exception (usually but not necessarily ``AttributeError``).\n\n Note: If the object is a class instance and the attribute reference\n occurs on both sides of the assignment operator, the RHS expression,\n ``a.x`` can access either an instance attribute or (if no instance\n attribute exists) a class attribute. The LHS target ``a.x`` is\n always set as an instance attribute, creating it if necessary.\n Thus, the two occurrences of ``a.x`` do not necessarily refer to the\n same attribute: if the RHS expression refers to a class attribute,\n the LHS creates a new instance attribute as the target of the\n assignment:\n\n class Cls:\n x = 3 # class variable\n inst = Cls()\n inst.x = inst.x + 1 # writes inst.x as 4 leaving Cls.x as 3\n\n This description does not necessarily apply to descriptor\n attributes, such as properties created with ``property()``.\n\n* If the target is a subscription: The primary expression in the\n reference is evaluated. It should yield either a mutable sequence\n object (such as a list) or a mapping object (such as a dictionary).\n Next, the subscript expression is evaluated.\n\n If the primary is a mutable sequence object (such as a list), the\n subscript must yield a plain integer. If it is negative, the\n sequence\'s length is added to it. The resulting value must be a\n nonnegative integer less than the sequence\'s length, and the\n sequence is asked to assign the assigned object to its item with\n that index. 
If the index is out of range, ``IndexError`` is raised\n (assignment to a subscripted sequence cannot add new items to a\n list).\n\n If the primary is a mapping object (such as a dictionary), the\n subscript must have a type compatible with the mapping\'s key type,\n and the mapping is then asked to create a key/datum pair which maps\n the subscript to the assigned object. This can either replace an\n existing key/value pair with the same key value, or insert a new\n key/value pair (if no key with the same value existed).\n\n* If the target is a slicing: The primary expression in the reference\n is evaluated. It should yield a mutable sequence object (such as a\n list). The assigned object should be a sequence object of the same\n type. Next, the lower and upper bound expressions are evaluated,\n insofar they are present; defaults are zero and the sequence\'s\n length. The bounds should evaluate to (small) integers. If either\n bound is negative, the sequence\'s length is added to it. The\n resulting bounds are clipped to lie between zero and the sequence\'s\n length, inclusive. Finally, the sequence object is asked to replace\n the slice with the items of the assigned sequence. The length of\n the slice may be different from the length of the assigned sequence,\n thus changing the length of the target sequence, if the object\n allows it.\n\n**CPython implementation detail:** In the current implementation, the\nsyntax for targets is taken to be the same as for expressions, and\ninvalid syntax is rejected during the code generation phase, causing\nless detailed error messages.\n\nWARNING: Although the definition of assignment implies that overlaps\nbetween the left-hand side and the right-hand side are \'safe\' (for\nexample ``a, b = b, a`` swaps two variables), overlaps *within* the\ncollection of assigned-to variables are not safe! For instance, the\nfollowing program prints ``[0, 2]``:\n\n x = [0, 1]\n i = 0\n i, x[i] = 1, 2\n print x\n\n\nAugmented assignment statements\n===============================\n\nAugmented assignment is the combination, in a single statement, of a\nbinary operation and an assignment statement:\n\n augmented_assignment_stmt ::= augtarget augop (expression_list | yield_expression)\n augtarget ::= identifier | attributeref | subscription | slicing\n augop ::= "+=" | "-=" | "*=" | "/=" | "//=" | "%=" | "**="\n | ">>=" | "<<=" | "&=" | "^=" | "|="\n\n(See section *Primaries* for the syntax definitions for the last three\nsymbols.)\n\nAn augmented assignment evaluates the target (which, unlike normal\nassignment statements, cannot be an unpacking) and the expression\nlist, performs the binary operation specific to the type of assignment\non the two operands, and assigns the result to the original target.\nThe target is only evaluated once.\n\nAn augmented assignment expression like ``x += 1`` can be rewritten as\n``x = x + 1`` to achieve a similar, but not exactly equal effect. In\nthe augmented version, ``x`` is only evaluated once. Also, when\npossible, the actual operation is performed *in-place*, meaning that\nrather than creating a new object and assigning that to the target,\nthe old object is modified instead.\n\nWith the exception of assigning to tuples and multiple targets in a\nsingle statement, the assignment done by augmented assignment\nstatements is handled the same way as normal assignments. 
Similarly,\nwith the exception of the possible *in-place* behavior, the binary\noperation performed by augmented assignment is the same as the normal\nbinary operations.\n\nFor targets which are attribute references, the same *caveat about\nclass and instance attributes* applies as for regular assignments.\n', 'atom-identifiers': u'\nIdentifiers (Names)\n*******************\n\nAn identifier occurring as an atom is a name. See section\n*Identifiers and keywords* for lexical definition and section *Naming\nand binding* for documentation of naming and binding.\n\nWhen the name is bound to an object, evaluation of the atom yields\nthat object. When a name is not bound, an attempt to evaluate it\nraises a ``NameError`` exception.\n\n**Private name mangling:** When an identifier that textually occurs in\na class definition begins with two or more underscore characters and\ndoes not end in two or more underscores, it is considered a *private\nname* of that class. Private names are transformed to a longer form\nbefore code is generated for them. The transformation inserts the\nclass name in front of the name, with leading underscores removed, and\na single underscore inserted in front of the class name. For example,\nthe identifier ``__spam`` occurring in a class named ``Ham`` will be\ntransformed to ``_Ham__spam``. This transformation is independent of\nthe syntactical context in which the identifier is used. If the\ntransformed name is extremely long (longer than 255 characters),\nimplementation defined truncation may happen. If the class name\nconsists only of underscores, no transformation is done.\n', 'atom-literals': u"\nLiterals\n********\n\nPython supports string literals and various numeric literals:\n\n literal ::= stringliteral | integer | longinteger\n | floatnumber | imagnumber\n\nEvaluation of a literal yields an object of the given type (string,\ninteger, long integer, floating point number, complex number) with the\ngiven value. The value may be approximated in the case of floating\npoint and imaginary (complex) literals. See section *Literals* for\ndetails.\n\nAll literals correspond to immutable data types, and hence the\nobject's identity is less important than its value. Multiple\nevaluations of literals with the same value (either the same\noccurrence in the program text or a different occurrence) may obtain\nthe same object or a different object with the same value.\n", - 'attribute-access': u'\nCustomizing attribute access\n****************************\n\nThe following methods can be defined to customize the meaning of\nattribute access (use of, assignment to, or deletion of ``x.name``)\nfor class instances.\n\nobject.__getattr__(self, name)\n\n Called when an attribute lookup has not found the attribute in the\n usual places (i.e. it is not an instance attribute nor is it found\n in the class tree for ``self``). ``name`` is the attribute name.\n This method should return the (computed) attribute value or raise\n an ``AttributeError`` exception.\n\n Note that if the attribute is found through the normal mechanism,\n ``__getattr__()`` is not called. (This is an intentional asymmetry\n between ``__getattr__()`` and ``__setattr__()``.) This is done both\n for efficiency reasons and because otherwise ``__getattr__()``\n would have no way to access other attributes of the instance. Note\n that at least for instance variables, you can fake total control by\n not inserting any values in the instance attribute dictionary (but\n instead inserting them in another object). 
See the\n ``__getattribute__()`` method below for a way to actually get total\n control in new-style classes.\n\nobject.__setattr__(self, name, value)\n\n Called when an attribute assignment is attempted. This is called\n instead of the normal mechanism (i.e. store the value in the\n instance dictionary). *name* is the attribute name, *value* is the\n value to be assigned to it.\n\n If ``__setattr__()`` wants to assign to an instance attribute, it\n should not simply execute ``self.name = value`` --- this would\n cause a recursive call to itself. Instead, it should insert the\n value in the dictionary of instance attributes, e.g.,\n ``self.__dict__[name] = value``. For new-style classes, rather\n than accessing the instance dictionary, it should call the base\n class method with the same name, for example,\n ``object.__setattr__(self, name, value)``.\n\nobject.__delattr__(self, name)\n\n Like ``__setattr__()`` but for attribute deletion instead of\n assignment. This should only be implemented if ``del obj.name`` is\n meaningful for the object.\n\n\nMore attribute access for new-style classes\n===========================================\n\nThe following methods only apply to new-style classes.\n\nobject.__getattribute__(self, name)\n\n Called unconditionally to implement attribute accesses for\n instances of the class. If the class also defines\n ``__getattr__()``, the latter will not be called unless\n ``__getattribute__()`` either calls it explicitly or raises an\n ``AttributeError``. This method should return the (computed)\n attribute value or raise an ``AttributeError`` exception. In order\n to avoid infinite recursion in this method, its implementation\n should always call the base class method with the same name to\n access any attributes it needs, for example,\n ``object.__getattribute__(self, name)``.\n\n Note: This method may still be bypassed when looking up special methods\n as the result of implicit invocation via language syntax or\n built-in functions. See *Special method lookup for new-style\n classes*.\n\n\nImplementing Descriptors\n========================\n\nThe following methods only apply when an instance of the class\ncontaining the method (a so-called *descriptor* class) appears in the\nclass dictionary of another new-style class, known as the *owner*\nclass. In the examples below, "the attribute" refers to the attribute\nwhose name is the key of the property in the owner class\'\n``__dict__``. Descriptors can only be implemented as new-style\nclasses themselves.\n\nobject.__get__(self, instance, owner)\n\n Called to get the attribute of the owner class (class attribute\n access) or of an instance of that class (instance attribute\n access). *owner* is always the owner class, while *instance* is the\n instance that the attribute was accessed through, or ``None`` when\n the attribute is accessed through the *owner*. This method should\n return the (computed) attribute value or raise an\n ``AttributeError`` exception.\n\nobject.__set__(self, instance, value)\n\n Called to set the attribute on an instance *instance* of the owner\n class to a new value, *value*.\n\nobject.__delete__(self, instance)\n\n Called to delete the attribute on an instance *instance* of the\n owner class.\n\n\nInvoking Descriptors\n====================\n\nIn general, a descriptor is an object attribute with "binding\nbehavior", one whose attribute access has been overridden by methods\nin the descriptor protocol: ``__get__()``, ``__set__()``, and\n``__delete__()``. 
If any of those methods are defined for an object,\nit is said to be a descriptor.\n\nThe default behavior for attribute access is to get, set, or delete\nthe attribute from an object\'s dictionary. For instance, ``a.x`` has a\nlookup chain starting with ``a.__dict__[\'x\']``, then\n``type(a).__dict__[\'x\']``, and continuing through the base classes of\n``type(a)`` excluding metaclasses.\n\nHowever, if the looked-up value is an object defining one of the\ndescriptor methods, then Python may override the default behavior and\ninvoke the descriptor method instead. Where this occurs in the\nprecedence chain depends on which descriptor methods were defined and\nhow they were called. Note that descriptors are only invoked for new\nstyle objects or classes (ones that subclass ``object()`` or\n``type()``).\n\nThe starting point for descriptor invocation is a binding, ``a.x``.\nHow the arguments are assembled depends on ``a``:\n\nDirect Call\n The simplest and least common call is when user code directly\n invokes a descriptor method: ``x.__get__(a)``.\n\nInstance Binding\n If binding to a new-style object instance, ``a.x`` is transformed\n into the call: ``type(a).__dict__[\'x\'].__get__(a, type(a))``.\n\nClass Binding\n If binding to a new-style class, ``A.x`` is transformed into the\n call: ``A.__dict__[\'x\'].__get__(None, A)``.\n\nSuper Binding\n If ``a`` is an instance of ``super``, then the binding ``super(B,\n obj).m()`` searches ``obj.__class__.__mro__`` for the base class\n ``A`` immediately preceding ``B`` and then invokes the descriptor\n with the call: ``A.__dict__[\'m\'].__get__(obj, A)``.\n\nFor instance bindings, the precedence of descriptor invocation depends\non the which descriptor methods are defined. A descriptor can define\nany combination of ``__get__()``, ``__set__()`` and ``__delete__()``.\nIf it does not define ``__get__()``, then accessing the attribute will\nreturn the descriptor object itself unless there is a value in the\nobject\'s instance dictionary. If the descriptor defines ``__set__()``\nand/or ``__delete__()``, it is a data descriptor; if it defines\nneither, it is a non-data descriptor. Normally, data descriptors\ndefine both ``__get__()`` and ``__set__()``, while non-data\ndescriptors have just the ``__get__()`` method. Data descriptors with\n``__set__()`` and ``__get__()`` defined always override a redefinition\nin an instance dictionary. In contrast, non-data descriptors can be\noverridden by instances.\n\nPython methods (including ``staticmethod()`` and ``classmethod()``)\nare implemented as non-data descriptors. Accordingly, instances can\nredefine and override methods. This allows individual instances to\nacquire behaviors that differ from other instances of the same class.\n\nThe ``property()`` function is implemented as a data descriptor.\nAccordingly, instances cannot override the behavior of a property.\n\n\n__slots__\n=========\n\nBy default, instances of both old and new-style classes have a\ndictionary for attribute storage. This wastes space for objects\nhaving very few instance variables. The space consumption can become\nacute when creating large numbers of instances.\n\nThe default can be overridden by defining *__slots__* in a new-style\nclass definition. The *__slots__* declaration takes a sequence of\ninstance variables and reserves just enough space in each instance to\nhold a value for each variable. 
Space is saved because *__dict__* is\nnot created for each instance.\n\n__slots__\n\n This class variable can be assigned a string, iterable, or sequence\n of strings with variable names used by instances. If defined in a\n new-style class, *__slots__* reserves space for the declared\n variables and prevents the automatic creation of *__dict__* and\n *__weakref__* for each instance.\n\n New in version 2.2.\n\nNotes on using *__slots__*\n\n* When inheriting from a class without *__slots__*, the *__dict__*\n attribute of that class will always be accessible, so a *__slots__*\n definition in the subclass is meaningless.\n\n* Without a *__dict__* variable, instances cannot be assigned new\n variables not listed in the *__slots__* definition. Attempts to\n assign to an unlisted variable name raises ``AttributeError``. If\n dynamic assignment of new variables is desired, then add\n ``\'__dict__\'`` to the sequence of strings in the *__slots__*\n declaration.\n\n Changed in version 2.3: Previously, adding ``\'__dict__\'`` to the\n *__slots__* declaration would not enable the assignment of new\n attributes not specifically listed in the sequence of instance\n variable names.\n\n* Without a *__weakref__* variable for each instance, classes defining\n *__slots__* do not support weak references to its instances. If weak\n reference support is needed, then add ``\'__weakref__\'`` to the\n sequence of strings in the *__slots__* declaration.\n\n Changed in version 2.3: Previously, adding ``\'__weakref__\'`` to the\n *__slots__* declaration would not enable support for weak\n references.\n\n* *__slots__* are implemented at the class level by creating\n descriptors (*Implementing Descriptors*) for each variable name. As\n a result, class attributes cannot be used to set default values for\n instance variables defined by *__slots__*; otherwise, the class\n attribute would overwrite the descriptor assignment.\n\n* The action of a *__slots__* declaration is limited to the class\n where it is defined. As a result, subclasses will have a *__dict__*\n unless they also define *__slots__* (which must only contain names\n of any *additional* slots).\n\n* If a class defines a slot also defined in a base class, the instance\n variable defined by the base class slot is inaccessible (except by\n retrieving its descriptor directly from the base class). This\n renders the meaning of the program undefined. In the future, a\n check may be added to prevent this.\n\n* Nonempty *__slots__* does not work for classes derived from\n "variable-length" built-in types such as ``long``, ``str`` and\n ``tuple``.\n\n* Any non-string iterable may be assigned to *__slots__*. Mappings may\n also be used; however, in the future, special meaning may be\n assigned to the values corresponding to each key.\n\n* *__class__* assignment works only if both classes have the same\n *__slots__*.\n\n Changed in version 2.6: Previously, *__class__* assignment raised an\n error if either new or old class had *__slots__*.\n', + 'attribute-access': u'\nCustomizing attribute access\n****************************\n\nThe following methods can be defined to customize the meaning of\nattribute access (use of, assignment to, or deletion of ``x.name``)\nfor class instances.\n\nobject.__getattr__(self, name)\n\n Called when an attribute lookup has not found the attribute in the\n usual places (i.e. it is not an instance attribute nor is it found\n in the class tree for ``self``). 
``name`` is the attribute name.\n This method should return the (computed) attribute value or raise\n an ``AttributeError`` exception.\n\n Note that if the attribute is found through the normal mechanism,\n ``__getattr__()`` is not called. (This is an intentional asymmetry\n between ``__getattr__()`` and ``__setattr__()``.) This is done both\n for efficiency reasons and because otherwise ``__getattr__()``\n would have no way to access other attributes of the instance. Note\n that at least for instance variables, you can fake total control by\n not inserting any values in the instance attribute dictionary (but\n instead inserting them in another object). See the\n ``__getattribute__()`` method below for a way to actually get total\n control in new-style classes.\n\nobject.__setattr__(self, name, value)\n\n Called when an attribute assignment is attempted. This is called\n instead of the normal mechanism (i.e. store the value in the\n instance dictionary). *name* is the attribute name, *value* is the\n value to be assigned to it.\n\n If ``__setattr__()`` wants to assign to an instance attribute, it\n should not simply execute ``self.name = value`` --- this would\n cause a recursive call to itself. Instead, it should insert the\n value in the dictionary of instance attributes, e.g.,\n ``self.__dict__[name] = value``. For new-style classes, rather\n than accessing the instance dictionary, it should call the base\n class method with the same name, for example,\n ``object.__setattr__(self, name, value)``.\n\nobject.__delattr__(self, name)\n\n Like ``__setattr__()`` but for attribute deletion instead of\n assignment. This should only be implemented if ``del obj.name`` is\n meaningful for the object.\n\n\nMore attribute access for new-style classes\n===========================================\n\nThe following methods only apply to new-style classes.\n\nobject.__getattribute__(self, name)\n\n Called unconditionally to implement attribute accesses for\n instances of the class. If the class also defines\n ``__getattr__()``, the latter will not be called unless\n ``__getattribute__()`` either calls it explicitly or raises an\n ``AttributeError``. This method should return the (computed)\n attribute value or raise an ``AttributeError`` exception. In order\n to avoid infinite recursion in this method, its implementation\n should always call the base class method with the same name to\n access any attributes it needs, for example,\n ``object.__getattribute__(self, name)``.\n\n Note: This method may still be bypassed when looking up special methods\n as the result of implicit invocation via language syntax or\n built-in functions. See *Special method lookup for new-style\n classes*.\n\n\nImplementing Descriptors\n========================\n\nThe following methods only apply when an instance of the class\ncontaining the method (a so-called *descriptor* class) appears in an\n*owner* class (the descriptor must be in either the owner\'s class\ndictionary or in the class dictionary for one of its parents). In the\nexamples below, "the attribute" refers to the attribute whose name is\nthe key of the property in the owner class\' ``__dict__``.\n\nobject.__get__(self, instance, owner)\n\n Called to get the attribute of the owner class (class attribute\n access) or of an instance of that class (instance attribute\n access). *owner* is always the owner class, while *instance* is the\n instance that the attribute was accessed through, or ``None`` when\n the attribute is accessed through the *owner*. 
This method should\n return the (computed) attribute value or raise an\n ``AttributeError`` exception.\n\nobject.__set__(self, instance, value)\n\n Called to set the attribute on an instance *instance* of the owner\n class to a new value, *value*.\n\nobject.__delete__(self, instance)\n\n Called to delete the attribute on an instance *instance* of the\n owner class.\n\n\nInvoking Descriptors\n====================\n\nIn general, a descriptor is an object attribute with "binding\nbehavior", one whose attribute access has been overridden by methods\nin the descriptor protocol: ``__get__()``, ``__set__()``, and\n``__delete__()``. If any of those methods are defined for an object,\nit is said to be a descriptor.\n\nThe default behavior for attribute access is to get, set, or delete\nthe attribute from an object\'s dictionary. For instance, ``a.x`` has a\nlookup chain starting with ``a.__dict__[\'x\']``, then\n``type(a).__dict__[\'x\']``, and continuing through the base classes of\n``type(a)`` excluding metaclasses.\n\nHowever, if the looked-up value is an object defining one of the\ndescriptor methods, then Python may override the default behavior and\ninvoke the descriptor method instead. Where this occurs in the\nprecedence chain depends on which descriptor methods were defined and\nhow they were called. Note that descriptors are only invoked for new\nstyle objects or classes (ones that subclass ``object()`` or\n``type()``).\n\nThe starting point for descriptor invocation is a binding, ``a.x``.\nHow the arguments are assembled depends on ``a``:\n\nDirect Call\n The simplest and least common call is when user code directly\n invokes a descriptor method: ``x.__get__(a)``.\n\nInstance Binding\n If binding to a new-style object instance, ``a.x`` is transformed\n into the call: ``type(a).__dict__[\'x\'].__get__(a, type(a))``.\n\nClass Binding\n If binding to a new-style class, ``A.x`` is transformed into the\n call: ``A.__dict__[\'x\'].__get__(None, A)``.\n\nSuper Binding\n If ``a`` is an instance of ``super``, then the binding ``super(B,\n obj).m()`` searches ``obj.__class__.__mro__`` for the base class\n ``A`` immediately preceding ``B`` and then invokes the descriptor\n with the call: ``A.__dict__[\'m\'].__get__(obj, obj.__class__)``.\n\nFor instance bindings, the precedence of descriptor invocation depends\non the which descriptor methods are defined. A descriptor can define\nany combination of ``__get__()``, ``__set__()`` and ``__delete__()``.\nIf it does not define ``__get__()``, then accessing the attribute will\nreturn the descriptor object itself unless there is a value in the\nobject\'s instance dictionary. If the descriptor defines ``__set__()``\nand/or ``__delete__()``, it is a data descriptor; if it defines\nneither, it is a non-data descriptor. Normally, data descriptors\ndefine both ``__get__()`` and ``__set__()``, while non-data\ndescriptors have just the ``__get__()`` method. Data descriptors with\n``__set__()`` and ``__get__()`` defined always override a redefinition\nin an instance dictionary. In contrast, non-data descriptors can be\noverridden by instances.\n\nPython methods (including ``staticmethod()`` and ``classmethod()``)\nare implemented as non-data descriptors. Accordingly, instances can\nredefine and override methods. 
This allows individual instances to\nacquire behaviors that differ from other instances of the same class.\n\nThe ``property()`` function is implemented as a data descriptor.\nAccordingly, instances cannot override the behavior of a property.\n\n\n__slots__\n=========\n\nBy default, instances of both old and new-style classes have a\ndictionary for attribute storage. This wastes space for objects\nhaving very few instance variables. The space consumption can become\nacute when creating large numbers of instances.\n\nThe default can be overridden by defining *__slots__* in a new-style\nclass definition. The *__slots__* declaration takes a sequence of\ninstance variables and reserves just enough space in each instance to\nhold a value for each variable. Space is saved because *__dict__* is\nnot created for each instance.\n\n__slots__\n\n This class variable can be assigned a string, iterable, or sequence\n of strings with variable names used by instances. If defined in a\n new-style class, *__slots__* reserves space for the declared\n variables and prevents the automatic creation of *__dict__* and\n *__weakref__* for each instance.\n\n New in version 2.2.\n\nNotes on using *__slots__*\n\n* When inheriting from a class without *__slots__*, the *__dict__*\n attribute of that class will always be accessible, so a *__slots__*\n definition in the subclass is meaningless.\n\n* Without a *__dict__* variable, instances cannot be assigned new\n variables not listed in the *__slots__* definition. Attempts to\n assign to an unlisted variable name raises ``AttributeError``. If\n dynamic assignment of new variables is desired, then add\n ``\'__dict__\'`` to the sequence of strings in the *__slots__*\n declaration.\n\n Changed in version 2.3: Previously, adding ``\'__dict__\'`` to the\n *__slots__* declaration would not enable the assignment of new\n attributes not specifically listed in the sequence of instance\n variable names.\n\n* Without a *__weakref__* variable for each instance, classes defining\n *__slots__* do not support weak references to its instances. If weak\n reference support is needed, then add ``\'__weakref__\'`` to the\n sequence of strings in the *__slots__* declaration.\n\n Changed in version 2.3: Previously, adding ``\'__weakref__\'`` to the\n *__slots__* declaration would not enable support for weak\n references.\n\n* *__slots__* are implemented at the class level by creating\n descriptors (*Implementing Descriptors*) for each variable name. As\n a result, class attributes cannot be used to set default values for\n instance variables defined by *__slots__*; otherwise, the class\n attribute would overwrite the descriptor assignment.\n\n* The action of a *__slots__* declaration is limited to the class\n where it is defined. As a result, subclasses will have a *__dict__*\n unless they also define *__slots__* (which must only contain names\n of any *additional* slots).\n\n* If a class defines a slot also defined in a base class, the instance\n variable defined by the base class slot is inaccessible (except by\n retrieving its descriptor directly from the base class). This\n renders the meaning of the program undefined. In the future, a\n check may be added to prevent this.\n\n* Nonempty *__slots__* does not work for classes derived from\n "variable-length" built-in types such as ``long``, ``str`` and\n ``tuple``.\n\n* Any non-string iterable may be assigned to *__slots__*. 
Mappings may\n also be used; however, in the future, special meaning may be\n assigned to the values corresponding to each key.\n\n* *__class__* assignment works only if both classes have the same\n *__slots__*.\n\n Changed in version 2.6: Previously, *__class__* assignment raised an\n error if either new or old class had *__slots__*.\n', 'attribute-references': u'\nAttribute references\n********************\n\nAn attribute reference is a primary followed by a period and a name:\n\n attributeref ::= primary "." identifier\n\nThe primary must evaluate to an object of a type that supports\nattribute references, e.g., a module, list, or an instance. This\nobject is then asked to produce the attribute whose name is the\nidentifier. If this attribute is not available, the exception\n``AttributeError`` is raised. Otherwise, the type and value of the\nobject produced is determined by the object. Multiple evaluations of\nthe same attribute reference may yield different objects.\n', 'augassign': u'\nAugmented assignment statements\n*******************************\n\nAugmented assignment is the combination, in a single statement, of a\nbinary operation and an assignment statement:\n\n augmented_assignment_stmt ::= augtarget augop (expression_list | yield_expression)\n augtarget ::= identifier | attributeref | subscription | slicing\n augop ::= "+=" | "-=" | "*=" | "/=" | "//=" | "%=" | "**="\n | ">>=" | "<<=" | "&=" | "^=" | "|="\n\n(See section *Primaries* for the syntax definitions for the last three\nsymbols.)\n\nAn augmented assignment evaluates the target (which, unlike normal\nassignment statements, cannot be an unpacking) and the expression\nlist, performs the binary operation specific to the type of assignment\non the two operands, and assigns the result to the original target.\nThe target is only evaluated once.\n\nAn augmented assignment expression like ``x += 1`` can be rewritten as\n``x = x + 1`` to achieve a similar, but not exactly equal effect. In\nthe augmented version, ``x`` is only evaluated once. Also, when\npossible, the actual operation is performed *in-place*, meaning that\nrather than creating a new object and assigning that to the target,\nthe old object is modified instead.\n\nWith the exception of assigning to tuples and multiple targets in a\nsingle statement, the assignment done by augmented assignment\nstatements is handled the same way as normal assignments. Similarly,\nwith the exception of the possible *in-place* behavior, the binary\noperation performed by augmented assignment is the same as the normal\nbinary operations.\n\nFor targets which are attribute references, the same *caveat about\nclass and instance attributes* applies as for regular assignments.\n', 'binary': u'\nBinary arithmetic operations\n****************************\n\nThe binary arithmetic operations have the conventional priority\nlevels. Note that some of these operations also apply to certain non-\nnumeric types. Apart from the power operator, there are only two\nlevels, one for multiplicative operators and one for additive\noperators:\n\n m_expr ::= u_expr | m_expr "*" u_expr | m_expr "//" u_expr | m_expr "/" u_expr\n | m_expr "%" u_expr\n a_expr ::= m_expr | a_expr "+" m_expr | a_expr "-" m_expr\n\nThe ``*`` (multiplication) operator yields the product of its\narguments. The arguments must either both be numbers, or one argument\nmust be an integer (plain or long) and the other must be a sequence.\nIn the former case, the numbers are converted to a common type and\nthen multiplied together. 
In the latter case, sequence repetition is\nperformed; a negative repetition factor yields an empty sequence.\n\nThe ``/`` (division) and ``//`` (floor division) operators yield the\nquotient of their arguments. The numeric arguments are first\nconverted to a common type. Plain or long integer division yields an\ninteger of the same type; the result is that of mathematical division\nwith the \'floor\' function applied to the result. Division by zero\nraises the ``ZeroDivisionError`` exception.\n\nThe ``%`` (modulo) operator yields the remainder from the division of\nthe first argument by the second. The numeric arguments are first\nconverted to a common type. A zero right argument raises the\n``ZeroDivisionError`` exception. The arguments may be floating point\nnumbers, e.g., ``3.14%0.7`` equals ``0.34`` (since ``3.14`` equals\n``4*0.7 + 0.34``.) The modulo operator always yields a result with\nthe same sign as its second operand (or zero); the absolute value of\nthe result is strictly smaller than the absolute value of the second\noperand [2].\n\nThe integer division and modulo operators are connected by the\nfollowing identity: ``x == (x/y)*y + (x%y)``. Integer division and\nmodulo are also connected with the built-in function ``divmod()``:\n``divmod(x, y) == (x/y, x%y)``. These identities don\'t hold for\nfloating point numbers; there similar identities hold approximately\nwhere ``x/y`` is replaced by ``floor(x/y)`` or ``floor(x/y) - 1`` [3].\n\nIn addition to performing the modulo operation on numbers, the ``%``\noperator is also overloaded by string and unicode objects to perform\nstring formatting (also known as interpolation). The syntax for string\nformatting is described in the Python Library Reference, section\n*String Formatting Operations*.\n\nDeprecated since version 2.3: The floor division operator, the modulo\noperator, and the ``divmod()`` function are no longer defined for\ncomplex numbers. Instead, convert to a floating point number using\nthe ``abs()`` function if appropriate.\n\nThe ``+`` (addition) operator yields the sum of its arguments. The\narguments must either both be numbers or both sequences of the same\ntype. In the former case, the numbers are converted to a common type\nand then added together. In the latter case, the sequences are\nconcatenated.\n\nThe ``-`` (subtraction) operator yields the difference of its\narguments. The numeric arguments are first converted to a common\ntype.\n', 'bitwise': u'\nBinary bitwise operations\n*************************\n\nEach of the three bitwise operations has a different priority level:\n\n and_expr ::= shift_expr | and_expr "&" shift_expr\n xor_expr ::= and_expr | xor_expr "^" and_expr\n or_expr ::= xor_expr | or_expr "|" xor_expr\n\nThe ``&`` operator yields the bitwise AND of its arguments, which must\nbe plain or long integers. The arguments are converted to a common\ntype.\n\nThe ``^`` operator yields the bitwise XOR (exclusive OR) of its\narguments, which must be plain or long integers. The arguments are\nconverted to a common type.\n\nThe ``|`` operator yields the bitwise (inclusive) OR of its arguments,\nwhich must be plain or long integers. The arguments are converted to\na common type.\n', 'bltin-code-objects': u'\nCode Objects\n************\n\nCode objects are used by the implementation to represent "pseudo-\ncompiled" executable Python code such as a function body. They differ\nfrom function objects because they don\'t contain a reference to their\nglobal execution environment. 
Code objects are returned by the built-\nin ``compile()`` function and can be extracted from function objects\nthrough their ``func_code`` attribute. See also the ``code`` module.\n\nA code object can be executed or evaluated by passing it (instead of a\nsource string) to the ``exec`` statement or the built-in ``eval()``\nfunction.\n\nSee *The standard type hierarchy* for more information.\n', 'bltin-ellipsis-object': u'\nThe Ellipsis Object\n*******************\n\nThis object is used by extended slice notation (see *Slicings*). It\nsupports no special operations. There is exactly one ellipsis object,\nnamed ``Ellipsis`` (a built-in name).\n\nIt is written as ``Ellipsis``.\n', - 'bltin-file-objects': u'\nFile Objects\n************\n\nFile objects are implemented using C\'s ``stdio`` package and can be\ncreated with the built-in ``open()`` function. File objects are also\nreturned by some other built-in functions and methods, such as\n``os.popen()`` and ``os.fdopen()`` and the ``makefile()`` method of\nsocket objects. Temporary files can be created using the ``tempfile``\nmodule, and high-level file operations such as copying, moving, and\ndeleting files and directories can be achieved with the ``shutil``\nmodule.\n\nWhen a file operation fails for an I/O-related reason, the exception\n``IOError`` is raised. This includes situations where the operation\nis not defined for some reason, like ``seek()`` on a tty device or\nwriting a file opened for reading.\n\nFiles have the following methods:\n\nfile.close()\n\n Close the file. A closed file cannot be read or written any more.\n Any operation which requires that the file be open will raise a\n ``ValueError`` after the file has been closed. Calling ``close()``\n more than once is allowed.\n\n As of Python 2.5, you can avoid having to call this method\n explicitly if you use the ``with`` statement. For example, the\n following code will automatically close *f* when the ``with`` block\n is exited:\n\n from __future__ import with_statement # This isn\'t required in Python 2.6\n\n with open("hello.txt") as f:\n for line in f:\n print line\n\n In older versions of Python, you would have needed to do this to\n get the same effect:\n\n f = open("hello.txt")\n try:\n for line in f:\n print line\n finally:\n f.close()\n\n Note: Not all "file-like" types in Python support use as a context\n manager for the ``with`` statement. If your code is intended to\n work with any file-like object, you can use the function\n ``contextlib.closing()`` instead of using the object directly.\n\nfile.flush()\n\n Flush the internal buffer, like ``stdio``\'s ``fflush()``. 
This may\n be a no-op on some file-like objects.\n\n Note: ``flush()`` does not necessarily write the file\'s data to disk.\n Use ``flush()`` followed by ``os.fsync()`` to ensure this\n behavior.\n\nfile.fileno()\n\n Return the integer "file descriptor" that is used by the underlying\n implementation to request I/O operations from the operating system.\n This can be useful for other, lower level interfaces that use file\n descriptors, such as the ``fcntl`` module or ``os.read()`` and\n friends.\n\n Note: File-like objects which do not have a real file descriptor should\n *not* provide this method!\n\nfile.isatty()\n\n Return ``True`` if the file is connected to a tty(-like) device,\n else ``False``.\n\n Note: If a file-like object is not associated with a real file, this\n method should *not* be implemented.\n\nfile.next()\n\n A file object is its own iterator, for example ``iter(f)`` returns\n *f* (unless *f* is closed). When a file is used as an iterator,\n typically in a ``for`` loop (for example, ``for line in f: print\n line``), the ``next()`` method is called repeatedly. This method\n returns the next input line, or raises ``StopIteration`` when EOF\n is hit when the file is open for reading (behavior is undefined\n when the file is open for writing). In order to make a ``for``\n loop the most efficient way of looping over the lines of a file (a\n very common operation), the ``next()`` method uses a hidden read-\n ahead buffer. As a consequence of using a read-ahead buffer,\n combining ``next()`` with other file methods (like ``readline()``)\n does not work right. However, using ``seek()`` to reposition the\n file to an absolute position will flush the read-ahead buffer.\n\n New in version 2.3.\n\nfile.read([size])\n\n Read at most *size* bytes from the file (less if the read hits EOF\n before obtaining *size* bytes). If the *size* argument is negative\n or omitted, read all data until EOF is reached. The bytes are\n returned as a string object. An empty string is returned when EOF\n is encountered immediately. (For certain files, like ttys, it\n makes sense to continue reading after an EOF is hit.) Note that\n this method may call the underlying C function ``fread()`` more\n than once in an effort to acquire as close to *size* bytes as\n possible. Also note that when in non-blocking mode, less data than\n was requested may be returned, even if no *size* parameter was\n given.\n\n Note: This function is simply a wrapper for the underlying ``fread()``\n C function, and will behave the same in corner cases, such as\n whether the EOF value is cached.\n\nfile.readline([size])\n\n Read one entire line from the file. A trailing newline character\n is kept in the string (but may be absent when a file ends with an\n incomplete line). [5] If the *size* argument is present and non-\n negative, it is a maximum byte count (including the trailing\n newline) and an incomplete line may be returned. An empty string is\n returned *only* when EOF is encountered immediately.\n\n Note: Unlike ``stdio``\'s ``fgets()``, the returned string contains null\n characters (``\'\\0\'``) if they occurred in the input.\n\nfile.readlines([sizehint])\n\n Read until EOF using ``readline()`` and return a list containing\n the lines thus read. If the optional *sizehint* argument is\n present, instead of reading up to EOF, whole lines totalling\n approximately *sizehint* bytes (possibly after rounding up to an\n internal buffer size) are read. 
Objects implementing a file-like\n interface may choose to ignore *sizehint* if it cannot be\n implemented, or cannot be implemented efficiently.\n\nfile.xreadlines()\n\n This method returns the same thing as ``iter(f)``.\n\n New in version 2.1.\n\n Deprecated since version 2.3: Use ``for line in file`` instead.\n\nfile.seek(offset[, whence])\n\n Set the file\'s current position, like ``stdio``\'s ``fseek()``. The\n *whence* argument is optional and defaults to ``os.SEEK_SET`` or\n ``0`` (absolute file positioning); other values are ``os.SEEK_CUR``\n or ``1`` (seek relative to the current position) and\n ``os.SEEK_END`` or ``2`` (seek relative to the file\'s end). There\n is no return value.\n\n For example, ``f.seek(2, os.SEEK_CUR)`` advances the position by\n two and ``f.seek(-3, os.SEEK_END)`` sets the position to the third\n to last.\n\n Note that if the file is opened for appending (mode ``\'a\'`` or\n ``\'a+\'``), any ``seek()`` operations will be undone at the next\n write. If the file is only opened for writing in append mode (mode\n ``\'a\'``), this method is essentially a no-op, but it remains useful\n for files opened in append mode with reading enabled (mode\n ``\'a+\'``). If the file is opened in text mode (without ``\'b\'``),\n only offsets returned by ``tell()`` are legal. Use of other\n offsets causes undefined behavior.\n\n Note that not all file objects are seekable.\n\n Changed in version 2.6: Passing float values as offset has been\n deprecated.\n\nfile.tell()\n\n Return the file\'s current position, like ``stdio``\'s ``ftell()``.\n\n Note: On Windows, ``tell()`` can return illegal values (after an\n ``fgets()``) when reading files with Unix-style line-endings. Use\n binary mode (``\'rb\'``) to circumvent this problem.\n\nfile.truncate([size])\n\n Truncate the file\'s size. If the optional *size* argument is\n present, the file is truncated to (at most) that size. The size\n defaults to the current position. The current file position is not\n changed. Note that if a specified size exceeds the file\'s current\n size, the result is platform-dependent: possibilities include that\n the file may remain unchanged, increase to the specified size as if\n zero-filled, or increase to the specified size with undefined new\n content. Availability: Windows, many Unix variants.\n\nfile.write(str)\n\n Write a string to the file. There is no return value. Due to\n buffering, the string may not actually show up in the file until\n the ``flush()`` or ``close()`` method is called.\n\nfile.writelines(sequence)\n\n Write a sequence of strings to the file. The sequence can be any\n iterable object producing strings, typically a list of strings.\n There is no return value. (The name is intended to match\n ``readlines()``; ``writelines()`` does not add line separators.)\n\nFiles support the iterator protocol. Each iteration returns the same\nresult as ``file.readline()``, and iteration ends when the\n``readline()`` method returns an empty string.\n\nFile objects also offer a number of other interesting attributes.\nThese are not required for file-like objects, but should be\nimplemented if they make sense for the particular object.\n\nfile.closed\n\n bool indicating the current state of the file object. This is a\n read-only attribute; the ``close()`` method changes the value. It\n may not be available on all file-like objects.\n\nfile.encoding\n\n The encoding that this file uses. When Unicode strings are written\n to a file, they will be converted to byte strings using this\n encoding. 
In addition, when the file is connected to a terminal,\n the attribute gives the encoding that the terminal is likely to use\n (that information might be incorrect if the user has misconfigured\n the terminal). The attribute is read-only and may not be present\n on all file-like objects. It may also be ``None``, in which case\n the file uses the system default encoding for converting Unicode\n strings.\n\n New in version 2.3.\n\nfile.errors\n\n The Unicode error handler used along with the encoding.\n\n New in version 2.6.\n\nfile.mode\n\n The I/O mode for the file. If the file was created using the\n ``open()`` built-in function, this will be the value of the *mode*\n parameter. This is a read-only attribute and may not be present on\n all file-like objects.\n\nfile.name\n\n If the file object was created using ``open()``, the name of the\n file. Otherwise, some string that indicates the source of the file\n object, of the form ``<...>``. This is a read-only attribute and\n may not be present on all file-like objects.\n\nfile.newlines\n\n If Python was built with the *--with-universal-newlines* option to\n **configure** (the default) this read-only attribute exists, and\n for files opened in universal newline read mode it keeps track of\n the types of newlines encountered while reading the file. The\n values it can take are ``\'\\r\'``, ``\'\\n\'``, ``\'\\r\\n\'``, ``None``\n (unknown, no newlines read yet) or a tuple containing all the\n newline types seen, to indicate that multiple newline conventions\n were encountered. For files not opened in universal newline read\n mode the value of this attribute will be ``None``.\n\nfile.softspace\n\n Boolean that indicates whether a space character needs to be\n printed before another value when using the ``print`` statement.\n Classes that are trying to simulate a file object should also have\n a writable ``softspace`` attribute, which should be initialized to\n zero. This will be automatic for most classes implemented in\n Python (care may be needed for objects that override attribute\n access); types implemented in C will have to provide a writable\n ``softspace`` attribute.\n\n Note: This attribute is not used to control the ``print`` statement,\n but to allow the implementation of ``print`` to keep track of its\n internal state.\n', + 'bltin-file-objects': u'\nFile Objects\n************\n\nFile objects are implemented using C\'s ``stdio`` package and can be\ncreated with the built-in ``open()`` function. File objects are also\nreturned by some other built-in functions and methods, such as\n``os.popen()`` and ``os.fdopen()`` and the ``makefile()`` method of\nsocket objects. Temporary files can be created using the ``tempfile``\nmodule, and high-level file operations such as copying, moving, and\ndeleting files and directories can be achieved with the ``shutil``\nmodule.\n\nWhen a file operation fails for an I/O-related reason, the exception\n``IOError`` is raised. This includes situations where the operation\nis not defined for some reason, like ``seek()`` on a tty device or\nwriting a file opened for reading.\n\nFiles have the following methods:\n\nfile.close()\n\n Close the file. A closed file cannot be read or written any more.\n Any operation which requires that the file be open will raise a\n ``ValueError`` after the file has been closed. Calling ``close()``\n more than once is allowed.\n\n As of Python 2.5, you can avoid having to call this method\n explicitly if you use the ``with`` statement. 
For example, the\n following code will automatically close *f* when the ``with`` block\n is exited:\n\n from __future__ import with_statement # This isn\'t required in Python 2.6\n\n with open("hello.txt") as f:\n for line in f:\n print line\n\n In older versions of Python, you would have needed to do this to\n get the same effect:\n\n f = open("hello.txt")\n try:\n for line in f:\n print line\n finally:\n f.close()\n\n Note: Not all "file-like" types in Python support use as a context\n manager for the ``with`` statement. If your code is intended to\n work with any file-like object, you can use the function\n ``contextlib.closing()`` instead of using the object directly.\n\nfile.flush()\n\n Flush the internal buffer, like ``stdio``\'s ``fflush()``. This may\n be a no-op on some file-like objects.\n\n Note: ``flush()`` does not necessarily write the file\'s data to disk.\n Use ``flush()`` followed by ``os.fsync()`` to ensure this\n behavior.\n\nfile.fileno()\n\n Return the integer "file descriptor" that is used by the underlying\n implementation to request I/O operations from the operating system.\n This can be useful for other, lower level interfaces that use file\n descriptors, such as the ``fcntl`` module or ``os.read()`` and\n friends.\n\n Note: File-like objects which do not have a real file descriptor should\n *not* provide this method!\n\nfile.isatty()\n\n Return ``True`` if the file is connected to a tty(-like) device,\n else ``False``.\n\n Note: If a file-like object is not associated with a real file, this\n method should *not* be implemented.\n\nfile.next()\n\n A file object is its own iterator, for example ``iter(f)`` returns\n *f* (unless *f* is closed). When a file is used as an iterator,\n typically in a ``for`` loop (for example, ``for line in f: print\n line``), the ``next()`` method is called repeatedly. This method\n returns the next input line, or raises ``StopIteration`` when EOF\n is hit when the file is open for reading (behavior is undefined\n when the file is open for writing). In order to make a ``for``\n loop the most efficient way of looping over the lines of a file (a\n very common operation), the ``next()`` method uses a hidden read-\n ahead buffer. As a consequence of using a read-ahead buffer,\n combining ``next()`` with other file methods (like ``readline()``)\n does not work right. However, using ``seek()`` to reposition the\n file to an absolute position will flush the read-ahead buffer.\n\n New in version 2.3.\n\nfile.read([size])\n\n Read at most *size* bytes from the file (less if the read hits EOF\n before obtaining *size* bytes). If the *size* argument is negative\n or omitted, read all data until EOF is reached. The bytes are\n returned as a string object. An empty string is returned when EOF\n is encountered immediately. (For certain files, like ttys, it\n makes sense to continue reading after an EOF is hit.) Note that\n this method may call the underlying C function ``fread()`` more\n than once in an effort to acquire as close to *size* bytes as\n possible. Also note that when in non-blocking mode, less data than\n was requested may be returned, even if no *size* parameter was\n given.\n\n Note: This function is simply a wrapper for the underlying ``fread()``\n C function, and will behave the same in corner cases, such as\n whether the EOF value is cached.\n\nfile.readline([size])\n\n Read one entire line from the file. A trailing newline character\n is kept in the string (but may be absent when a file ends with an\n incomplete line). 
[5] If the *size* argument is present and non-\n negative, it is a maximum byte count (including the trailing\n newline) and an incomplete line may be returned. When *size* is not\n 0, an empty string is returned *only* when EOF is encountered\n immediately.\n\n Note: Unlike ``stdio``\'s ``fgets()``, the returned string contains null\n characters (``\'\\0\'``) if they occurred in the input.\n\nfile.readlines([sizehint])\n\n Read until EOF using ``readline()`` and return a list containing\n the lines thus read. If the optional *sizehint* argument is\n present, instead of reading up to EOF, whole lines totalling\n approximately *sizehint* bytes (possibly after rounding up to an\n internal buffer size) are read. Objects implementing a file-like\n interface may choose to ignore *sizehint* if it cannot be\n implemented, or cannot be implemented efficiently.\n\nfile.xreadlines()\n\n This method returns the same thing as ``iter(f)``.\n\n New in version 2.1.\n\n Deprecated since version 2.3: Use ``for line in file`` instead.\n\nfile.seek(offset[, whence])\n\n Set the file\'s current position, like ``stdio``\'s ``fseek()``. The\n *whence* argument is optional and defaults to ``os.SEEK_SET`` or\n ``0`` (absolute file positioning); other values are ``os.SEEK_CUR``\n or ``1`` (seek relative to the current position) and\n ``os.SEEK_END`` or ``2`` (seek relative to the file\'s end). There\n is no return value.\n\n For example, ``f.seek(2, os.SEEK_CUR)`` advances the position by\n two and ``f.seek(-3, os.SEEK_END)`` sets the position to the third\n to last.\n\n Note that if the file is opened for appending (mode ``\'a\'`` or\n ``\'a+\'``), any ``seek()`` operations will be undone at the next\n write. If the file is only opened for writing in append mode (mode\n ``\'a\'``), this method is essentially a no-op, but it remains useful\n for files opened in append mode with reading enabled (mode\n ``\'a+\'``). If the file is opened in text mode (without ``\'b\'``),\n only offsets returned by ``tell()`` are legal. Use of other\n offsets causes undefined behavior.\n\n Note that not all file objects are seekable.\n\n Changed in version 2.6: Passing float values as offset has been\n deprecated.\n\nfile.tell()\n\n Return the file\'s current position, like ``stdio``\'s ``ftell()``.\n\n Note: On Windows, ``tell()`` can return illegal values (after an\n ``fgets()``) when reading files with Unix-style line-endings. Use\n binary mode (``\'rb\'``) to circumvent this problem.\n\nfile.truncate([size])\n\n Truncate the file\'s size. If the optional *size* argument is\n present, the file is truncated to (at most) that size. The size\n defaults to the current position. The current file position is not\n changed. Note that if a specified size exceeds the file\'s current\n size, the result is platform-dependent: possibilities include that\n the file may remain unchanged, increase to the specified size as if\n zero-filled, or increase to the specified size with undefined new\n content. Availability: Windows, many Unix variants.\n\nfile.write(str)\n\n Write a string to the file. There is no return value. Due to\n buffering, the string may not actually show up in the file until\n the ``flush()`` or ``close()`` method is called.\n\nfile.writelines(sequence)\n\n Write a sequence of strings to the file. The sequence can be any\n iterable object producing strings, typically a list of strings.\n There is no return value. 
(The name is intended to match\n ``readlines()``; ``writelines()`` does not add line separators.)\n\nFiles support the iterator protocol. Each iteration returns the same\nresult as ``file.readline()``, and iteration ends when the\n``readline()`` method returns an empty string.\n\nFile objects also offer a number of other interesting attributes.\nThese are not required for file-like objects, but should be\nimplemented if they make sense for the particular object.\n\nfile.closed\n\n bool indicating the current state of the file object. This is a\n read-only attribute; the ``close()`` method changes the value. It\n may not be available on all file-like objects.\n\nfile.encoding\n\n The encoding that this file uses. When Unicode strings are written\n to a file, they will be converted to byte strings using this\n encoding. In addition, when the file is connected to a terminal,\n the attribute gives the encoding that the terminal is likely to use\n (that information might be incorrect if the user has misconfigured\n the terminal). The attribute is read-only and may not be present\n on all file-like objects. It may also be ``None``, in which case\n the file uses the system default encoding for converting Unicode\n strings.\n\n New in version 2.3.\n\nfile.errors\n\n The Unicode error handler used along with the encoding.\n\n New in version 2.6.\n\nfile.mode\n\n The I/O mode for the file. If the file was created using the\n ``open()`` built-in function, this will be the value of the *mode*\n parameter. This is a read-only attribute and may not be present on\n all file-like objects.\n\nfile.name\n\n If the file object was created using ``open()``, the name of the\n file. Otherwise, some string that indicates the source of the file\n object, of the form ``<...>``. This is a read-only attribute and\n may not be present on all file-like objects.\n\nfile.newlines\n\n If Python was built with universal newlines enabled (the default)\n this read-only attribute exists, and for files opened in universal\n newline read mode it keeps track of the types of newlines\n encountered while reading the file. The values it can take are\n ``\'\\r\'``, ``\'\\n\'``, ``\'\\r\\n\'``, ``None`` (unknown, no newlines read\n yet) or a tuple containing all the newline types seen, to indicate\n that multiple newline conventions were encountered. For files not\n opened in universal newline read mode the value of this attribute\n will be ``None``.\n\nfile.softspace\n\n Boolean that indicates whether a space character needs to be\n printed before another value when using the ``print`` statement.\n Classes that are trying to simulate a file object should also have\n a writable ``softspace`` attribute, which should be initialized to\n zero. This will be automatic for most classes implemented in\n Python (care may be needed for objects that override attribute\n access); types implemented in C will have to provide a writable\n ``softspace`` attribute.\n\n Note: This attribute is not used to control the ``print`` statement,\n but to allow the implementation of ``print`` to keep track of its\n internal state.\n', 'bltin-null-object': u"\nThe Null Object\n***************\n\nThis object is returned by functions that don't explicitly return a\nvalue. It supports no special operations. There is exactly one null\nobject, named ``None`` (a built-in name).\n\nIt is written as ``None``.\n", 'bltin-type-objects': u"\nType Objects\n************\n\nType objects represent the various object types. 
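The 'bltin-file-objects' entry above walks through the file-object API (``close()``, ``flush()``, ``read()``, ``readline()``, ``seek()``/``tell()``, iteration). As a minimal, hedged sketch of that behaviour — the filename is made up, and the file is created and removed on the spot so the snippet is self-contained:

    import os

    with open("example.txt", "w") as f:                   # closed automatically on exit
        f.writelines(["first line\n", "second line\n"])   # writelines adds no separators

    with open("example.txt") as f:
        first = f.readline()            # keeps the trailing newline
        print(f.tell())                 # current position, like stdio's ftell()
        rest = f.read()                 # read the remainder up to EOF
        f.seek(0)                       # back to the start, like stdio's fseek()
        for line in f:                  # a file is its own iterator
            print(line.rstrip("\n"))

    print(f.closed)                     # True: the with block closed the file
    os.remove("example.txt")            # clean up the temporary file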
An object's type is\naccessed by the built-in function ``type()``. There are no special\noperations on types. The standard module ``types`` defines names for\nall standard built-in types.\n\nTypes are written like this: ````.\n", 'booleans': u'\nBoolean operations\n******************\n\n or_test ::= and_test | or_test "or" and_test\n and_test ::= not_test | and_test "and" not_test\n not_test ::= comparison | "not" not_test\n\nIn the context of Boolean operations, and also when expressions are\nused by control flow statements, the following values are interpreted\nas false: ``False``, ``None``, numeric zero of all types, and empty\nstrings and containers (including strings, tuples, lists,\ndictionaries, sets and frozensets). All other values are interpreted\nas true. (See the ``__nonzero__()`` special method for a way to\nchange this.)\n\nThe operator ``not`` yields ``True`` if its argument is false,\n``False`` otherwise.\n\nThe expression ``x and y`` first evaluates *x*; if *x* is false, its\nvalue is returned; otherwise, *y* is evaluated and the resulting value\nis returned.\n\nThe expression ``x or y`` first evaluates *x*; if *x* is true, its\nvalue is returned; otherwise, *y* is evaluated and the resulting value\nis returned.\n\n(Note that neither ``and`` nor ``or`` restrict the value and type they\nreturn to ``False`` and ``True``, but rather return the last evaluated\nargument. This is sometimes useful, e.g., if ``s`` is a string that\nshould be replaced by a default value if it is empty, the expression\n``s or \'foo\'`` yields the desired value. Because ``not`` has to\ninvent a value anyway, it does not bother to return a value of the\nsame type as its argument, so e.g., ``not \'foo\'`` yields ``False``,\nnot ``\'\'``.)\n', @@ -20,39 +20,39 @@ 'class': u'\nClass definitions\n*****************\n\nA class definition defines a class object (see section *The standard\ntype hierarchy*):\n\n classdef ::= "class" classname [inheritance] ":" suite\n inheritance ::= "(" [expression_list] ")"\n classname ::= identifier\n\nA class definition is an executable statement. It first evaluates the\ninheritance list, if present. Each item in the inheritance list\nshould evaluate to a class object or class type which allows\nsubclassing. The class\'s suite is then executed in a new execution\nframe (see section *Naming and binding*), using a newly created local\nnamespace and the original global namespace. (Usually, the suite\ncontains only function definitions.) When the class\'s suite finishes\nexecution, its execution frame is discarded but its local namespace is\nsaved. [4] A class object is then created using the inheritance list\nfor the base classes and the saved local namespace for the attribute\ndictionary. The class name is bound to this class object in the\noriginal local namespace.\n\n**Programmer\'s note:** Variables defined in the class definition are\nclass variables; they are shared by all instances. To create instance\nvariables, they can be set in a method with ``self.name = value``.\nBoth class and instance variables are accessible through the notation\n"``self.name``", and an instance variable hides a class variable with\nthe same name when accessed in this way. Class variables can be used\nas defaults for instance variables, but using mutable values there can\nlead to unexpected results. 
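The 'booleans' topic above notes that ``and`` and ``or`` return the last evaluated operand rather than a strict ``True``/``False``, while ``not`` always returns a bool. A small sketch of that short-circuit behaviour:

    s = ""
    print(s or "foo")     # 'foo': s is false, so the right operand is returned
    print("bar" or s)     # 'bar': left operand is true, right side never evaluated
    print(0 and 1 / 0)    # 0: short-circuits, so no ZeroDivisionError is raised
    print(not "foo")      # False: 'not' does return a real boolean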
For *new-style class*es, descriptors can\nbe used to create instance variables with different implementation\ndetails.\n\nClass definitions, like function definitions, may be wrapped by one or\nmore *decorator* expressions. The evaluation rules for the decorator\nexpressions are the same as for functions. The result must be a class\nobject, which is then bound to the class name.\n\n-[ Footnotes ]-\n\n[1] The exception is propagated to the invocation stack only if there\n is no ``finally`` clause that negates the exception.\n\n[2] Currently, control "flows off the end" except in the case of an\n exception or the execution of a ``return``, ``continue``, or\n ``break`` statement.\n\n[3] A string literal appearing as the first statement in the function\n body is transformed into the function\'s ``__doc__`` attribute and\n therefore the function\'s *docstring*.\n\n[4] A string literal appearing as the first statement in the class\n body is transformed into the namespace\'s ``__doc__`` item and\n therefore the class\'s *docstring*.\n', 'coercion-rules': u"\nCoercion rules\n**************\n\nThis section used to document the rules for coercion. As the language\nhas evolved, the coercion rules have become hard to document\nprecisely; documenting what one version of one particular\nimplementation does is undesirable. Instead, here are some informal\nguidelines regarding coercion. In Python 3.0, coercion will not be\nsupported.\n\n* If the left operand of a % operator is a string or Unicode object,\n no coercion takes place and the string formatting operation is\n invoked instead.\n\n* It is no longer recommended to define a coercion operation. Mixed-\n mode operations on types that don't define coercion pass the\n original arguments to the operation.\n\n* New-style classes (those derived from ``object``) never invoke the\n ``__coerce__()`` method in response to a binary operator; the only\n time ``__coerce__()`` is invoked is when the built-in function\n ``coerce()`` is called.\n\n* For most intents and purposes, an operator that returns\n ``NotImplemented`` is treated the same as one that is not\n implemented at all.\n\n* Below, ``__op__()`` and ``__rop__()`` are used to signify the\n generic method names corresponding to an operator; ``__iop__()`` is\n used for the corresponding in-place operator. For example, for the\n operator '``+``', ``__add__()`` and ``__radd__()`` are used for the\n left and right variant of the binary operator, and ``__iadd__()``\n for the in-place variant.\n\n* For objects *x* and *y*, first ``x.__op__(y)`` is tried. If this is\n not implemented or returns ``NotImplemented``, ``y.__rop__(x)`` is\n tried. If this is also not implemented or returns\n ``NotImplemented``, a ``TypeError`` exception is raised. But see\n the following exception:\n\n* Exception to the previous item: if the left operand is an instance\n of a built-in type or a new-style class, and the right operand is an\n instance of a proper subclass of that type or class and overrides\n the base's ``__rop__()`` method, the right operand's ``__rop__()``\n method is tried *before* the left operand's ``__op__()`` method.\n\n This is done so that a subclass can completely override binary\n operators. 
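The programmer's note in the 'class' entry above warns that mutable class variables are shared by all instances. A brief illustrative sketch (the class and attribute names are invented):

    class Tricky(object):
        shared = []                 # class variable: one list for every instance
        def __init__(self):
            self.own = []           # instance variable: one list per instance

    a, b = Tricky(), Tricky()
    a.shared.append(1)
    a.own.append(1)
    print(b.shared)                 # [1]  -- the mutation is visible through b
    print(b.own)                    # []   -- instance state stays independent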
Otherwise, the left operand's ``__op__()`` method would\n always accept the right operand: when an instance of a given class\n is expected, an instance of a subclass of that class is always\n acceptable.\n\n* When either operand type defines a coercion, this coercion is called\n before that type's ``__op__()`` or ``__rop__()`` method is called,\n but no sooner. If the coercion returns an object of a different\n type for the operand whose coercion is invoked, part of the process\n is redone using the new object.\n\n* When an in-place operator (like '``+=``') is used, if the left\n operand implements ``__iop__()``, it is invoked without any\n coercion. When the operation falls back to ``__op__()`` and/or\n ``__rop__()``, the normal coercion rules apply.\n\n* In ``x + y``, if *x* is a sequence that implements sequence\n concatenation, sequence concatenation is invoked.\n\n* In ``x * y``, if one operator is a sequence that implements sequence\n repetition, and the other is an integer (``int`` or ``long``),\n sequence repetition is invoked.\n\n* Rich comparisons (implemented by methods ``__eq__()`` and so on)\n never use coercion. Three-way comparison (implemented by\n ``__cmp__()``) does use coercion under the same conditions as other\n binary operations use it.\n\n* In the current implementation, the built-in numeric types ``int``,\n ``long``, ``float``, and ``complex`` do not use coercion. All these\n types implement a ``__coerce__()`` method, for use by the built-in\n ``coerce()`` function.\n\n Changed in version 2.7.\n", 'comparisons': u'\nComparisons\n***********\n\nUnlike C, all comparison operations in Python have the same priority,\nwhich is lower than that of any arithmetic, shifting or bitwise\noperation. Also unlike C, expressions like ``a < b < c`` have the\ninterpretation that is conventional in mathematics:\n\n comparison ::= or_expr ( comp_operator or_expr )*\n comp_operator ::= "<" | ">" | "==" | ">=" | "<=" | "<>" | "!="\n | "is" ["not"] | ["not"] "in"\n\nComparisons yield boolean values: ``True`` or ``False``.\n\nComparisons can be chained arbitrarily, e.g., ``x < y <= z`` is\nequivalent to ``x < y and y <= z``, except that ``y`` is evaluated\nonly once (but in both cases ``z`` is not evaluated at all when ``x <\ny`` is found to be false).\n\nFormally, if *a*, *b*, *c*, ..., *y*, *z* are expressions and *op1*,\n*op2*, ..., *opN* are comparison operators, then ``a op1 b op2 c ... y\nopN z`` is equivalent to ``a op1 b and b op2 c and ... y opN z``,\nexcept that each expression is evaluated at most once.\n\nNote that ``a op1 b op2 c`` doesn\'t imply any kind of comparison\nbetween *a* and *c*, so that, e.g., ``x < y > z`` is perfectly legal\n(though perhaps not pretty).\n\nThe forms ``<>`` and ``!=`` are equivalent; for consistency with C,\n``!=`` is preferred; where ``!=`` is mentioned below ``<>`` is also\naccepted. The ``<>`` spelling is considered obsolescent.\n\nThe operators ``<``, ``>``, ``==``, ``>=``, ``<=``, and ``!=`` compare\nthe values of two objects. The objects need not have the same type.\nIf both are numbers, they are converted to a common type. Otherwise,\nobjects of different types *always* compare unequal, and are ordered\nconsistently but arbitrarily. 
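The 'coercion-rules' entry above states that when the right operand is an instance of a proper subclass of the left operand's class and overrides the reflected method, that ``__rop__()`` is tried before the left operand's ``__op__()``. A minimal sketch of that rule (the class names are invented):

    class Base(object):
        def __add__(self, other):
            return "Base.__add__"

    class Derived(Base):
        def __radd__(self, other):
            return "Derived.__radd__"

    print(Base() + Derived())       # 'Derived.__radd__': the subclass is asked first
    print(Base() + Base())          # 'Base.__add__'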
You can control comparison behavior of\nobjects of non-built-in types by defining a ``__cmp__`` method or rich\ncomparison methods like ``__gt__``, described in section *Special\nmethod names*.\n\n(This unusual definition of comparison was used to simplify the\ndefinition of operations like sorting and the ``in`` and ``not in``\noperators. In the future, the comparison rules for objects of\ndifferent types are likely to change.)\n\nComparison of objects of the same type depends on the type:\n\n* Numbers are compared arithmetically.\n\n* Strings are compared lexicographically using the numeric equivalents\n (the result of the built-in function ``ord()``) of their characters.\n Unicode and 8-bit strings are fully interoperable in this behavior.\n [4]\n\n* Tuples and lists are compared lexicographically using comparison of\n corresponding elements. This means that to compare equal, each\n element must compare equal and the two sequences must be of the same\n type and have the same length.\n\n If not equal, the sequences are ordered the same as their first\n differing elements. For example, ``cmp([1,2,x], [1,2,y])`` returns\n the same as ``cmp(x,y)``. If the corresponding element does not\n exist, the shorter sequence is ordered first (for example, ``[1,2] <\n [1,2,3]``).\n\n* Mappings (dictionaries) compare equal if and only if their sorted\n (key, value) lists compare equal. [5] Outcomes other than equality\n are resolved consistently, but are not otherwise defined. [6]\n\n* Most other objects of built-in types compare unequal unless they are\n the same object; the choice whether one object is considered smaller\n or larger than another one is made arbitrarily but consistently\n within one execution of a program.\n\nThe operators ``in`` and ``not in`` test for collection membership.\n``x in s`` evaluates to true if *x* is a member of the collection *s*,\nand false otherwise. ``x not in s`` returns the negation of ``x in\ns``. The collection membership test has traditionally been bound to\nsequences; an object is a member of a collection if the collection is\na sequence and contains an element equal to that object. However, it\nmake sense for many other object types to support membership tests\nwithout being a sequence. In particular, dictionaries (for keys) and\nsets support membership testing.\n\nFor the list and tuple types, ``x in y`` is true if and only if there\nexists an index *i* such that ``x == y[i]`` is true.\n\nFor the Unicode and string types, ``x in y`` is true if and only if\n*x* is a substring of *y*. An equivalent test is ``y.find(x) != -1``.\nNote, *x* and *y* need not be the same type; consequently, ``u\'ab\' in\n\'abc\'`` will return ``True``. Empty strings are always considered to\nbe a substring of any other string, so ``"" in "abc"`` will return\n``True``.\n\nChanged in version 2.3: Previously, *x* was required to be a string of\nlength ``1``.\n\nFor user-defined classes which define the ``__contains__()`` method,\n``x in y`` is true if and only if ``y.__contains__(x)`` is true.\n\nFor user-defined classes which do not define ``__contains__()`` but do\ndefine ``__iter__()``, ``x in y`` is true if some value ``z`` with ``x\n== z`` is produced while iterating over ``y``. 
If an exception is\nraised during the iteration, it is as if ``in`` raised that exception.\n\nLastly, the old-style iteration protocol is tried: if a class defines\n``__getitem__()``, ``x in y`` is true if and only if there is a non-\nnegative integer index *i* such that ``x == y[i]``, and all lower\ninteger indices do not raise ``IndexError`` exception. (If any other\nexception is raised, it is as if ``in`` raised that exception).\n\nThe operator ``not in`` is defined to have the inverse true value of\n``in``.\n\nThe operators ``is`` and ``is not`` test for object identity: ``x is\ny`` is true if and only if *x* and *y* are the same object. ``x is\nnot y`` yields the inverse truth value. [7]\n', - 'compound': u'\nCompound statements\n*******************\n\nCompound statements contain (groups of) other statements; they affect\nor control the execution of those other statements in some way. In\ngeneral, compound statements span multiple lines, although in simple\nincarnations a whole compound statement may be contained in one line.\n\nThe ``if``, ``while`` and ``for`` statements implement traditional\ncontrol flow constructs. ``try`` specifies exception handlers and/or\ncleanup code for a group of statements. Function and class\ndefinitions are also syntactically compound statements.\n\nCompound statements consist of one or more \'clauses.\' A clause\nconsists of a header and a \'suite.\' The clause headers of a\nparticular compound statement are all at the same indentation level.\nEach clause header begins with a uniquely identifying keyword and ends\nwith a colon. A suite is a group of statements controlled by a\nclause. A suite can be one or more semicolon-separated simple\nstatements on the same line as the header, following the header\'s\ncolon, or it can be one or more indented statements on subsequent\nlines. Only the latter form of suite can contain nested compound\nstatements; the following is illegal, mostly because it wouldn\'t be\nclear to which ``if`` clause a following ``else`` clause would belong:\n\n if test1: if test2: print x\n\nAlso note that the semicolon binds tighter than the colon in this\ncontext, so that in the following example, either all or none of the\n``print`` statements are executed:\n\n if x < y < z: print x; print y; print z\n\nSummarizing:\n\n compound_stmt ::= if_stmt\n | while_stmt\n | for_stmt\n | try_stmt\n | with_stmt\n | funcdef\n | classdef\n | decorated\n suite ::= stmt_list NEWLINE | NEWLINE INDENT statement+ DEDENT\n statement ::= stmt_list NEWLINE | compound_stmt\n stmt_list ::= simple_stmt (";" simple_stmt)* [";"]\n\nNote that statements always end in a ``NEWLINE`` possibly followed by\na ``DEDENT``. 
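To make the chaining, membership and ``__contains__()`` rules from the 'comparisons' entry concrete, here is a short illustrative sketch (the ``Box`` class is invented):

    x, y, z = 1, 2, 3
    print(x < y <= z)               # True: chained, y is evaluated only once
    print(x < y > 0)                # True: no comparison between x and 0 is implied
    print("ab" in "abc")            # True: substring test for string types
    print(2 in [1, 2, 3])           # True: element test for sequences

    class Box(object):
        def __contains__(self, item):
            return item == "key"

    print("key" in Box())           # True: __contains__ drives 'in' for user classes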
Also note that optional continuation clauses always\nbegin with a keyword that cannot start a statement, thus there are no\nambiguities (the \'dangling ``else``\' problem is solved in Python by\nrequiring nested ``if`` statements to be indented).\n\nThe formatting of the grammar rules in the following sections places\neach clause on a separate line for clarity.\n\n\nThe ``if`` statement\n====================\n\nThe ``if`` statement is used for conditional execution:\n\n if_stmt ::= "if" expression ":" suite\n ( "elif" expression ":" suite )*\n ["else" ":" suite]\n\nIt selects exactly one of the suites by evaluating the expressions one\nby one until one is found to be true (see section *Boolean operations*\nfor the definition of true and false); then that suite is executed\n(and no other part of the ``if`` statement is executed or evaluated).\nIf all expressions are false, the suite of the ``else`` clause, if\npresent, is executed.\n\n\nThe ``while`` statement\n=======================\n\nThe ``while`` statement is used for repeated execution as long as an\nexpression is true:\n\n while_stmt ::= "while" expression ":" suite\n ["else" ":" suite]\n\nThis repeatedly tests the expression and, if it is true, executes the\nfirst suite; if the expression is false (which may be the first time\nit is tested) the suite of the ``else`` clause, if present, is\nexecuted and the loop terminates.\n\nA ``break`` statement executed in the first suite terminates the loop\nwithout executing the ``else`` clause\'s suite. A ``continue``\nstatement executed in the first suite skips the rest of the suite and\ngoes back to testing the expression.\n\n\nThe ``for`` statement\n=====================\n\nThe ``for`` statement is used to iterate over the elements of a\nsequence (such as a string, tuple or list) or other iterable object:\n\n for_stmt ::= "for" target_list "in" expression_list ":" suite\n ["else" ":" suite]\n\nThe expression list is evaluated once; it should yield an iterable\nobject. An iterator is created for the result of the\n``expression_list``. The suite is then executed once for each item\nprovided by the iterator, in the order of ascending indices. Each\nitem in turn is assigned to the target list using the standard rules\nfor assignments, and then the suite is executed. When the items are\nexhausted (which is immediately when the sequence is empty), the suite\nin the ``else`` clause, if present, is executed, and the loop\nterminates.\n\nA ``break`` statement executed in the first suite terminates the loop\nwithout executing the ``else`` clause\'s suite. A ``continue``\nstatement executed in the first suite skips the rest of the suite and\ncontinues with the next item, or with the ``else`` clause if there was\nno next item.\n\nThe suite may assign to the variable(s) in the target list; this does\nnot affect the next item assigned to it.\n\nThe target list is not deleted when the loop is finished, but if the\nsequence is empty, it will not have been assigned to at all by the\nloop. Hint: the built-in function ``range()`` returns a sequence of\nintegers suitable to emulate the effect of Pascal\'s ``for i := a to b\ndo``; e.g., ``range(3)`` returns the list ``[0, 1, 2]``.\n\nNote: There is a subtlety when the sequence is being modified by the loop\n (this can only occur for mutable sequences, i.e. lists). An internal\n counter is used to keep track of which item is used next, and this\n is incremented on each iteration. When this counter has reached the\n length of the sequence the loop terminates. 
This means that if the\n suite deletes the current (or a previous) item from the sequence,\n the next item will be skipped (since it gets the index of the\n current item which has already been treated). Likewise, if the\n suite inserts an item in the sequence before the current item, the\n current item will be treated again the next time through the loop.\n This can lead to nasty bugs that can be avoided by making a\n temporary copy using a slice of the whole sequence, e.g.,\n\n for x in a[:]:\n if x < 0: a.remove(x)\n\n\nThe ``try`` statement\n=====================\n\nThe ``try`` statement specifies exception handlers and/or cleanup code\nfor a group of statements:\n\n try_stmt ::= try1_stmt | try2_stmt\n try1_stmt ::= "try" ":" suite\n ("except" [expression [("as" | ",") target]] ":" suite)+\n ["else" ":" suite]\n ["finally" ":" suite]\n try2_stmt ::= "try" ":" suite\n "finally" ":" suite\n\nChanged in version 2.5: In previous versions of Python,\n``try``...``except``...``finally`` did not work. ``try``...``except``\nhad to be nested in ``try``...``finally``.\n\nThe ``except`` clause(s) specify one or more exception handlers. When\nno exception occurs in the ``try`` clause, no exception handler is\nexecuted. When an exception occurs in the ``try`` suite, a search for\nan exception handler is started. This search inspects the except\nclauses in turn until one is found that matches the exception. An\nexpression-less except clause, if present, must be last; it matches\nany exception. For an except clause with an expression, that\nexpression is evaluated, and the clause matches the exception if the\nresulting object is "compatible" with the exception. An object is\ncompatible with an exception if it is the class or a base class of the\nexception object, a tuple containing an item compatible with the\nexception, or, in the (deprecated) case of string exceptions, is the\nraised string itself (note that the object identities must match, i.e.\nit must be the same string object, not just a string with the same\nvalue).\n\nIf no except clause matches the exception, the search for an exception\nhandler continues in the surrounding code and on the invocation stack.\n[1]\n\nIf the evaluation of an expression in the header of an except clause\nraises an exception, the original search for a handler is canceled and\na search starts for the new exception in the surrounding code and on\nthe call stack (it is treated as if the entire ``try`` statement\nraised the exception).\n\nWhen a matching except clause is found, the exception is assigned to\nthe target specified in that except clause, if present, and the except\nclause\'s suite is executed. All except clauses must have an\nexecutable block. When the end of this block is reached, execution\ncontinues normally after the entire try statement. (This means that\nif two nested handlers exist for the same exception, and the exception\noccurs in the try clause of the inner handler, the outer handler will\nnot handle the exception.)\n\nBefore an except clause\'s suite is executed, details about the\nexception are assigned to three variables in the ``sys`` module:\n``sys.exc_type`` receives the object identifying the exception;\n``sys.exc_value`` receives the exception\'s parameter;\n``sys.exc_traceback`` receives a traceback object (see section *The\nstandard type hierarchy*) identifying the point in the program where\nthe exception occurred. 
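The ``while`` and ``for`` sections above mention the optional ``else`` clause, which runs only when the loop was not terminated by ``break``. A small sketch of that pattern (the function name is invented):

    def find(needle, haystack):
        for item in haystack:
            if item == needle:
                print("found %r" % (needle,))
                break
        else:
            print("%r not found" % (needle,))

    find(2, [1, 2, 3])              # found 2
    find(9, [1, 2, 3])              # 9 not found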
These details are also available through the\n``sys.exc_info()`` function, which returns a tuple ``(exc_type,\nexc_value, exc_traceback)``. Use of the corresponding variables is\ndeprecated in favor of this function, since their use is unsafe in a\nthreaded program. As of Python 1.5, the variables are restored to\ntheir previous values (before the call) when returning from a function\nthat handled an exception.\n\nThe optional ``else`` clause is executed if and when control flows off\nthe end of the ``try`` clause. [2] Exceptions in the ``else`` clause\nare not handled by the preceding ``except`` clauses.\n\nIf ``finally`` is present, it specifies a \'cleanup\' handler. The\n``try`` clause is executed, including any ``except`` and ``else``\nclauses. If an exception occurs in any of the clauses and is not\nhandled, the exception is temporarily saved. The ``finally`` clause is\nexecuted. If there is a saved exception, it is re-raised at the end\nof the ``finally`` clause. If the ``finally`` clause raises another\nexception or executes a ``return`` or ``break`` statement, the saved\nexception is lost. The exception information is not available to the\nprogram during execution of the ``finally`` clause.\n\nWhen a ``return``, ``break`` or ``continue`` statement is executed in\nthe ``try`` suite of a ``try``...``finally`` statement, the\n``finally`` clause is also executed \'on the way out.\' A ``continue``\nstatement is illegal in the ``finally`` clause. (The reason is a\nproblem with the current implementation --- this restriction may be\nlifted in the future).\n\nAdditional information on exceptions can be found in section\n*Exceptions*, and information on using the ``raise`` statement to\ngenerate exceptions may be found in section *The raise statement*.\n\n\nThe ``with`` statement\n======================\n\nNew in version 2.5.\n\nThe ``with`` statement is used to wrap the execution of a block with\nmethods defined by a context manager (see section *With Statement\nContext Managers*). This allows common\n``try``...``except``...``finally`` usage patterns to be encapsulated\nfor convenient reuse.\n\n with_stmt ::= "with" with_item ("," with_item)* ":" suite\n with_item ::= expression ["as" target]\n\nThe execution of the ``with`` statement with one "item" proceeds as\nfollows:\n\n1. The context expression is evaluated to obtain a context manager.\n\n2. The context manager\'s ``__exit__()`` is loaded for later use.\n\n3. The context manager\'s ``__enter__()`` method is invoked.\n\n4. If a target was included in the ``with`` statement, the return\n value from ``__enter__()`` is assigned to it.\n\n Note: The ``with`` statement guarantees that if the ``__enter__()``\n method returns without an error, then ``__exit__()`` will always\n be called. Thus, if an error occurs during the assignment to the\n target list, it will be treated the same as an error occurring\n within the suite would be. See step 6 below.\n\n5. The suite is executed.\n\n6. The context manager\'s ``__exit__()`` method is invoked. If an\n exception caused the suite to be exited, its type, value, and\n traceback are passed as arguments to ``__exit__()``. Otherwise,\n three ``None`` arguments are supplied.\n\n If the suite was exited due to an exception, and the return value\n from the ``__exit__()`` method was false, the exception is\n reraised. 
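As a hedged illustration of the ``try`` flow just described — ``except`` on a matching exception, ``else`` when no exception occurs, ``finally`` in every case:

    def divide(a, b):
        try:
            result = a / b
        except ZeroDivisionError:
            print("division by zero")
        else:
            print("result is %r" % (result,))   # only runs if no exception was raised
        finally:
            print("finally always runs")        # cleanup happens on every path

    divide(8, 2)                    # the result, then the finally message
    divide(8, 0)                    # the except message, then the finally message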
If the return value was true, the exception is\n suppressed, and execution continues with the statement following\n the ``with`` statement.\n\n If the suite was exited for any reason other than an exception, the\n return value from ``__exit__()`` is ignored, and execution proceeds\n at the normal location for the kind of exit that was taken.\n\nWith more than one item, the context managers are processed as if\nmultiple ``with`` statements were nested:\n\n with A() as a, B() as b:\n suite\n\nis equivalent to\n\n with A() as a:\n with B() as b:\n suite\n\nNote: In Python 2.5, the ``with`` statement is only allowed when the\n ``with_statement`` feature has been enabled. It is always enabled\n in Python 2.6.\n\nChanged in version 2.7: Support for multiple context expressions.\n\nSee also:\n\n **PEP 0343** - The "with" statement\n The specification, background, and examples for the Python\n ``with`` statement.\n\n\nFunction definitions\n====================\n\nA function definition defines a user-defined function object (see\nsection *The standard type hierarchy*):\n\n decorated ::= decorators (classdef | funcdef)\n decorators ::= decorator+\n decorator ::= "@" dotted_name ["(" [argument_list [","]] ")"] NEWLINE\n funcdef ::= "def" funcname "(" [parameter_list] ")" ":" suite\n dotted_name ::= identifier ("." identifier)*\n parameter_list ::= (defparameter ",")*\n ( "*" identifier [, "**" identifier]\n | "**" identifier\n | defparameter [","] )\n defparameter ::= parameter ["=" expression]\n sublist ::= parameter ("," parameter)* [","]\n parameter ::= identifier | "(" sublist ")"\n funcname ::= identifier\n\nA function definition is an executable statement. Its execution binds\nthe function name in the current local namespace to a function object\n(a wrapper around the executable code for the function). This\nfunction object contains a reference to the current global namespace\nas the global namespace to be used when the function is called.\n\nThe function definition does not execute the function body; this gets\nexecuted only when the function is called. [3]\n\nA function definition may be wrapped by one or more *decorator*\nexpressions. Decorator expressions are evaluated when the function is\ndefined, in the scope that contains the function definition. The\nresult must be a callable, which is invoked with the function object\nas the only argument. The returned value is bound to the function name\ninstead of the function object. Multiple decorators are applied in\nnested fashion. For example, the following code:\n\n @f1(arg)\n @f2\n def func(): pass\n\nis equivalent to:\n\n def func(): pass\n func = f1(arg)(f2(func))\n\nWhen one or more top-level parameters have the form *parameter* ``=``\n*expression*, the function is said to have "default parameter values."\nFor a parameter with a default value, the corresponding argument may\nbe omitted from a call, in which case the parameter\'s default value is\nsubstituted. If a parameter has a default value, all following\nparameters must also have a default value --- this is a syntactic\nrestriction that is not expressed by the grammar.\n\n**Default parameter values are evaluated when the function definition\nis executed.** This means that the expression is evaluated once, when\nthe function is defined, and that that same "pre-computed" value is\nused for each call. This is especially important to understand when a\ndefault parameter is a mutable object, such as a list or a dictionary:\nif the function modifies the object (e.g. 
by appending an item to a\nlist), the default value is in effect modified. This is generally not\nwhat was intended. A way around this is to use ``None`` as the\ndefault, and explicitly test for it in the body of the function, e.g.:\n\n def whats_on_the_telly(penguin=None):\n if penguin is None:\n penguin = []\n penguin.append("property of the zoo")\n return penguin\n\nFunction call semantics are described in more detail in section\n*Calls*. A function call always assigns values to all parameters\nmentioned in the parameter list, either from position arguments, from\nkeyword arguments, or from default values. If the form\n"``*identifier``" is present, it is initialized to a tuple receiving\nany excess positional parameters, defaulting to the empty tuple. If\nthe form "``**identifier``" is present, it is initialized to a new\ndictionary receiving any excess keyword arguments, defaulting to a new\nempty dictionary.\n\nIt is also possible to create anonymous functions (functions not bound\nto a name), for immediate use in expressions. This uses lambda forms,\ndescribed in section *Lambdas*. Note that the lambda form is merely a\nshorthand for a simplified function definition; a function defined in\na "``def``" statement can be passed around or assigned to another name\njust like a function defined by a lambda form. The "``def``" form is\nactually more powerful since it allows the execution of multiple\nstatements.\n\n**Programmer\'s note:** Functions are first-class objects. A "``def``"\nform executed inside a function definition defines a local function\nthat can be returned or passed around. Free variables used in the\nnested function can access the local variables of the function\ncontaining the def. See section *Naming and binding* for details.\n\n\nClass definitions\n=================\n\nA class definition defines a class object (see section *The standard\ntype hierarchy*):\n\n classdef ::= "class" classname [inheritance] ":" suite\n inheritance ::= "(" [expression_list] ")"\n classname ::= identifier\n\nA class definition is an executable statement. It first evaluates the\ninheritance list, if present. Each item in the inheritance list\nshould evaluate to a class object or class type which allows\nsubclassing. The class\'s suite is then executed in a new execution\nframe (see section *Naming and binding*), using a newly created local\nnamespace and the original global namespace. (Usually, the suite\ncontains only function definitions.) When the class\'s suite finishes\nexecution, its execution frame is discarded but its local namespace is\nsaved. [4] A class object is then created using the inheritance list\nfor the base classes and the saved local namespace for the attribute\ndictionary. The class name is bound to this class object in the\noriginal local namespace.\n\n**Programmer\'s note:** Variables defined in the class definition are\nclass variables; they are shared by all instances. To create instance\nvariables, they can be set in a method with ``self.name = value``.\nBoth class and instance variables are accessible through the notation\n"``self.name``", and an instance variable hides a class variable with\nthe same name when accessed in this way. Class variables can be used\nas defaults for instance variables, but using mutable values there can\nlead to unexpected results. 
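The function-definition text above explains that a ``*identifier`` parameter collects excess positional arguments into a tuple and ``**identifier`` collects excess keyword arguments into a new dictionary. A minimal sketch (the function name is invented):

    def report(first, *args, **kwargs):
        print(first)                # the ordinary positional parameter
        print(args)                 # tuple of extra positional arguments
        print(kwargs)               # dict of extra keyword arguments

    report(1)                       # 1, then (), then {}
    report(1, 2, 3, colour="red")   # 1, then (2, 3), then {'colour': 'red'}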
For *new-style class*es, descriptors can\nbe used to create instance variables with different implementation\ndetails.\n\nClass definitions, like function definitions, may be wrapped by one or\nmore *decorator* expressions. The evaluation rules for the decorator\nexpressions are the same as for functions. The result must be a class\nobject, which is then bound to the class name.\n\n-[ Footnotes ]-\n\n[1] The exception is propagated to the invocation stack only if there\n is no ``finally`` clause that negates the exception.\n\n[2] Currently, control "flows off the end" except in the case of an\n exception or the execution of a ``return``, ``continue``, or\n ``break`` statement.\n\n[3] A string literal appearing as the first statement in the function\n body is transformed into the function\'s ``__doc__`` attribute and\n therefore the function\'s *docstring*.\n\n[4] A string literal appearing as the first statement in the class\n body is transformed into the namespace\'s ``__doc__`` item and\n therefore the class\'s *docstring*.\n', + 'compound': u'\nCompound statements\n*******************\n\nCompound statements contain (groups of) other statements; they affect\nor control the execution of those other statements in some way. In\ngeneral, compound statements span multiple lines, although in simple\nincarnations a whole compound statement may be contained in one line.\n\nThe ``if``, ``while`` and ``for`` statements implement traditional\ncontrol flow constructs. ``try`` specifies exception handlers and/or\ncleanup code for a group of statements. Function and class\ndefinitions are also syntactically compound statements.\n\nCompound statements consist of one or more \'clauses.\' A clause\nconsists of a header and a \'suite.\' The clause headers of a\nparticular compound statement are all at the same indentation level.\nEach clause header begins with a uniquely identifying keyword and ends\nwith a colon. A suite is a group of statements controlled by a\nclause. A suite can be one or more semicolon-separated simple\nstatements on the same line as the header, following the header\'s\ncolon, or it can be one or more indented statements on subsequent\nlines. Only the latter form of suite can contain nested compound\nstatements; the following is illegal, mostly because it wouldn\'t be\nclear to which ``if`` clause a following ``else`` clause would belong:\n\n if test1: if test2: print x\n\nAlso note that the semicolon binds tighter than the colon in this\ncontext, so that in the following example, either all or none of the\n``print`` statements are executed:\n\n if x < y < z: print x; print y; print z\n\nSummarizing:\n\n compound_stmt ::= if_stmt\n | while_stmt\n | for_stmt\n | try_stmt\n | with_stmt\n | funcdef\n | classdef\n | decorated\n suite ::= stmt_list NEWLINE | NEWLINE INDENT statement+ DEDENT\n statement ::= stmt_list NEWLINE | compound_stmt\n stmt_list ::= simple_stmt (";" simple_stmt)* [";"]\n\nNote that statements always end in a ``NEWLINE`` possibly followed by\na ``DEDENT``. 
Also note that optional continuation clauses always\nbegin with a keyword that cannot start a statement, thus there are no\nambiguities (the \'dangling ``else``\' problem is solved in Python by\nrequiring nested ``if`` statements to be indented).\n\nThe formatting of the grammar rules in the following sections places\neach clause on a separate line for clarity.\n\n\nThe ``if`` statement\n====================\n\nThe ``if`` statement is used for conditional execution:\n\n if_stmt ::= "if" expression ":" suite\n ( "elif" expression ":" suite )*\n ["else" ":" suite]\n\nIt selects exactly one of the suites by evaluating the expressions one\nby one until one is found to be true (see section *Boolean operations*\nfor the definition of true and false); then that suite is executed\n(and no other part of the ``if`` statement is executed or evaluated).\nIf all expressions are false, the suite of the ``else`` clause, if\npresent, is executed.\n\n\nThe ``while`` statement\n=======================\n\nThe ``while`` statement is used for repeated execution as long as an\nexpression is true:\n\n while_stmt ::= "while" expression ":" suite\n ["else" ":" suite]\n\nThis repeatedly tests the expression and, if it is true, executes the\nfirst suite; if the expression is false (which may be the first time\nit is tested) the suite of the ``else`` clause, if present, is\nexecuted and the loop terminates.\n\nA ``break`` statement executed in the first suite terminates the loop\nwithout executing the ``else`` clause\'s suite. A ``continue``\nstatement executed in the first suite skips the rest of the suite and\ngoes back to testing the expression.\n\n\nThe ``for`` statement\n=====================\n\nThe ``for`` statement is used to iterate over the elements of a\nsequence (such as a string, tuple or list) or other iterable object:\n\n for_stmt ::= "for" target_list "in" expression_list ":" suite\n ["else" ":" suite]\n\nThe expression list is evaluated once; it should yield an iterable\nobject. An iterator is created for the result of the\n``expression_list``. The suite is then executed once for each item\nprovided by the iterator, in the order of ascending indices. Each\nitem in turn is assigned to the target list using the standard rules\nfor assignments, and then the suite is executed. When the items are\nexhausted (which is immediately when the sequence is empty), the suite\nin the ``else`` clause, if present, is executed, and the loop\nterminates.\n\nA ``break`` statement executed in the first suite terminates the loop\nwithout executing the ``else`` clause\'s suite. A ``continue``\nstatement executed in the first suite skips the rest of the suite and\ncontinues with the next item, or with the ``else`` clause if there was\nno next item.\n\nThe suite may assign to the variable(s) in the target list; this does\nnot affect the next item assigned to it.\n\nThe target list is not deleted when the loop is finished, but if the\nsequence is empty, it will not have been assigned to at all by the\nloop. Hint: the built-in function ``range()`` returns a sequence of\nintegers suitable to emulate the effect of Pascal\'s ``for i := a to b\ndo``; e.g., ``range(3)`` returns the list ``[0, 1, 2]``.\n\nNote: There is a subtlety when the sequence is being modified by the loop\n (this can only occur for mutable sequences, i.e. lists). An internal\n counter is used to keep track of which item is used next, and this\n is incremented on each iteration. When this counter has reached the\n length of the sequence the loop terminates. 
This means that if the\n suite deletes the current (or a previous) item from the sequence,\n the next item will be skipped (since it gets the index of the\n current item which has already been treated). Likewise, if the\n suite inserts an item in the sequence before the current item, the\n current item will be treated again the next time through the loop.\n This can lead to nasty bugs that can be avoided by making a\n temporary copy using a slice of the whole sequence, e.g.,\n\n for x in a[:]:\n if x < 0: a.remove(x)\n\n\nThe ``try`` statement\n=====================\n\nThe ``try`` statement specifies exception handlers and/or cleanup code\nfor a group of statements:\n\n try_stmt ::= try1_stmt | try2_stmt\n try1_stmt ::= "try" ":" suite\n ("except" [expression [("as" | ",") target]] ":" suite)+\n ["else" ":" suite]\n ["finally" ":" suite]\n try2_stmt ::= "try" ":" suite\n "finally" ":" suite\n\nChanged in version 2.5: In previous versions of Python,\n``try``...``except``...``finally`` did not work. ``try``...``except``\nhad to be nested in ``try``...``finally``.\n\nThe ``except`` clause(s) specify one or more exception handlers. When\nno exception occurs in the ``try`` clause, no exception handler is\nexecuted. When an exception occurs in the ``try`` suite, a search for\nan exception handler is started. This search inspects the except\nclauses in turn until one is found that matches the exception. An\nexpression-less except clause, if present, must be last; it matches\nany exception. For an except clause with an expression, that\nexpression is evaluated, and the clause matches the exception if the\nresulting object is "compatible" with the exception. An object is\ncompatible with an exception if it is the class or a base class of the\nexception object, a tuple containing an item compatible with the\nexception, or, in the (deprecated) case of string exceptions, is the\nraised string itself (note that the object identities must match, i.e.\nit must be the same string object, not just a string with the same\nvalue).\n\nIf no except clause matches the exception, the search for an exception\nhandler continues in the surrounding code and on the invocation stack.\n[1]\n\nIf the evaluation of an expression in the header of an except clause\nraises an exception, the original search for a handler is canceled and\na search starts for the new exception in the surrounding code and on\nthe call stack (it is treated as if the entire ``try`` statement\nraised the exception).\n\nWhen a matching except clause is found, the exception is assigned to\nthe target specified in that except clause, if present, and the except\nclause\'s suite is executed. All except clauses must have an\nexecutable block. When the end of this block is reached, execution\ncontinues normally after the entire try statement. (This means that\nif two nested handlers exist for the same exception, and the exception\noccurs in the try clause of the inner handler, the outer handler will\nnot handle the exception.)\n\nBefore an except clause\'s suite is executed, details about the\nexception are assigned to three variables in the ``sys`` module:\n``sys.exc_type`` receives the object identifying the exception;\n``sys.exc_value`` receives the exception\'s parameter;\n``sys.exc_traceback`` receives a traceback object (see section *The\nstandard type hierarchy*) identifying the point in the program where\nthe exception occurred. 
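Since the except clauses are inspected in turn and the first compatible one wins, the order of handlers matters. A minimal sketch, with arbitrarily chosen exceptions:

    try:
        {}['spam']                  # raises KeyError
    except LookupError:             # KeyError is a subclass, so this clause matches
        print "lookup failed"
    except KeyError:                # never reached; the earlier clause already matched
        print "no such key"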
These details are also available through the\n``sys.exc_info()`` function, which returns a tuple ``(exc_type,\nexc_value, exc_traceback)``. Use of the corresponding variables is\ndeprecated in favor of this function, since their use is unsafe in a\nthreaded program. As of Python 1.5, the variables are restored to\ntheir previous values (before the call) when returning from a function\nthat handled an exception.\n\nThe optional ``else`` clause is executed if and when control flows off\nthe end of the ``try`` clause. [2] Exceptions in the ``else`` clause\nare not handled by the preceding ``except`` clauses.\n\nIf ``finally`` is present, it specifies a \'cleanup\' handler. The\n``try`` clause is executed, including any ``except`` and ``else``\nclauses. If an exception occurs in any of the clauses and is not\nhandled, the exception is temporarily saved. The ``finally`` clause is\nexecuted. If there is a saved exception, it is re-raised at the end\nof the ``finally`` clause. If the ``finally`` clause raises another\nexception or executes a ``return`` or ``break`` statement, the saved\nexception is lost. The exception information is not available to the\nprogram during execution of the ``finally`` clause.\n\nWhen a ``return``, ``break`` or ``continue`` statement is executed in\nthe ``try`` suite of a ``try``...``finally`` statement, the\n``finally`` clause is also executed \'on the way out.\' A ``continue``\nstatement is illegal in the ``finally`` clause. (The reason is a\nproblem with the current implementation --- this restriction may be\nlifted in the future).\n\nAdditional information on exceptions can be found in section\n*Exceptions*, and information on using the ``raise`` statement to\ngenerate exceptions may be found in section *The raise statement*.\n\n\nThe ``with`` statement\n======================\n\nNew in version 2.5.\n\nThe ``with`` statement is used to wrap the execution of a block with\nmethods defined by a context manager (see section *With Statement\nContext Managers*). This allows common\n``try``...``except``...``finally`` usage patterns to be encapsulated\nfor convenient reuse.\n\n with_stmt ::= "with" with_item ("," with_item)* ":" suite\n with_item ::= expression ["as" target]\n\nThe execution of the ``with`` statement with one "item" proceeds as\nfollows:\n\n1. The context expression (the expression given in the **with_item**)\n is evaluated to obtain a context manager.\n\n2. The context manager\'s ``__exit__()`` is loaded for later use.\n\n3. The context manager\'s ``__enter__()`` method is invoked.\n\n4. If a target was included in the ``with`` statement, the return\n value from ``__enter__()`` is assigned to it.\n\n Note: The ``with`` statement guarantees that if the ``__enter__()``\n method returns without an error, then ``__exit__()`` will always\n be called. Thus, if an error occurs during the assignment to the\n target list, it will be treated the same as an error occurring\n within the suite would be. See step 6 below.\n\n5. The suite is executed.\n\n6. The context manager\'s ``__exit__()`` method is invoked. If an\n exception caused the suite to be exited, its type, value, and\n traceback are passed as arguments to ``__exit__()``. Otherwise,\n three ``None`` arguments are supplied.\n\n If the suite was exited due to an exception, and the return value\n from the ``__exit__()`` method was false, the exception is\n reraised. 
If the return value was true, the exception is\n suppressed, and execution continues with the statement following\n the ``with`` statement.\n\n If the suite was exited for any reason other than an exception, the\n return value from ``__exit__()`` is ignored, and execution proceeds\n at the normal location for the kind of exit that was taken.\n\nWith more than one item, the context managers are processed as if\nmultiple ``with`` statements were nested:\n\n with A() as a, B() as b:\n suite\n\nis equivalent to\n\n with A() as a:\n with B() as b:\n suite\n\nNote: In Python 2.5, the ``with`` statement is only allowed when the\n ``with_statement`` feature has been enabled. It is always enabled\n in Python 2.6.\n\nChanged in version 2.7: Support for multiple context expressions.\n\nSee also:\n\n **PEP 0343** - The "with" statement\n The specification, background, and examples for the Python\n ``with`` statement.\n\n\nFunction definitions\n====================\n\nA function definition defines a user-defined function object (see\nsection *The standard type hierarchy*):\n\n decorated ::= decorators (classdef | funcdef)\n decorators ::= decorator+\n decorator ::= "@" dotted_name ["(" [argument_list [","]] ")"] NEWLINE\n funcdef ::= "def" funcname "(" [parameter_list] ")" ":" suite\n dotted_name ::= identifier ("." identifier)*\n parameter_list ::= (defparameter ",")*\n ( "*" identifier [, "**" identifier]\n | "**" identifier\n | defparameter [","] )\n defparameter ::= parameter ["=" expression]\n sublist ::= parameter ("," parameter)* [","]\n parameter ::= identifier | "(" sublist ")"\n funcname ::= identifier\n\nA function definition is an executable statement. Its execution binds\nthe function name in the current local namespace to a function object\n(a wrapper around the executable code for the function). This\nfunction object contains a reference to the current global namespace\nas the global namespace to be used when the function is called.\n\nThe function definition does not execute the function body; this gets\nexecuted only when the function is called. [3]\n\nA function definition may be wrapped by one or more *decorator*\nexpressions. Decorator expressions are evaluated when the function is\ndefined, in the scope that contains the function definition. The\nresult must be a callable, which is invoked with the function object\nas the only argument. The returned value is bound to the function name\ninstead of the function object. Multiple decorators are applied in\nnested fashion. For example, the following code:\n\n @f1(arg)\n @f2\n def func(): pass\n\nis equivalent to:\n\n def func(): pass\n func = f1(arg)(f2(func))\n\nWhen one or more top-level parameters have the form *parameter* ``=``\n*expression*, the function is said to have "default parameter values."\nFor a parameter with a default value, the corresponding argument may\nbe omitted from a call, in which case the parameter\'s default value is\nsubstituted. If a parameter has a default value, all following\nparameters must also have a default value --- this is a syntactic\nrestriction that is not expressed by the grammar.\n\n**Default parameter values are evaluated when the function definition\nis executed.** This means that the expression is evaluated once, when\nthe function is defined, and that that same "pre-computed" value is\nused for each call. This is especially important to understand when a\ndefault parameter is a mutable object, such as a list or a dictionary:\nif the function modifies the object (e.g. 
by appending an item to a\nlist), the default value is in effect modified. This is generally not\nwhat was intended. A way around this is to use ``None`` as the\ndefault, and explicitly test for it in the body of the function, e.g.:\n\n def whats_on_the_telly(penguin=None):\n if penguin is None:\n penguin = []\n penguin.append("property of the zoo")\n return penguin\n\nFunction call semantics are described in more detail in section\n*Calls*. A function call always assigns values to all parameters\nmentioned in the parameter list, either from position arguments, from\nkeyword arguments, or from default values. If the form\n"``*identifier``" is present, it is initialized to a tuple receiving\nany excess positional parameters, defaulting to the empty tuple. If\nthe form "``**identifier``" is present, it is initialized to a new\ndictionary receiving any excess keyword arguments, defaulting to a new\nempty dictionary.\n\nIt is also possible to create anonymous functions (functions not bound\nto a name), for immediate use in expressions. This uses lambda forms,\ndescribed in section *Lambdas*. Note that the lambda form is merely a\nshorthand for a simplified function definition; a function defined in\na "``def``" statement can be passed around or assigned to another name\njust like a function defined by a lambda form. The "``def``" form is\nactually more powerful since it allows the execution of multiple\nstatements.\n\n**Programmer\'s note:** Functions are first-class objects. A "``def``"\nform executed inside a function definition defines a local function\nthat can be returned or passed around. Free variables used in the\nnested function can access the local variables of the function\ncontaining the def. See section *Naming and binding* for details.\n\n\nClass definitions\n=================\n\nA class definition defines a class object (see section *The standard\ntype hierarchy*):\n\n classdef ::= "class" classname [inheritance] ":" suite\n inheritance ::= "(" [expression_list] ")"\n classname ::= identifier\n\nA class definition is an executable statement. It first evaluates the\ninheritance list, if present. Each item in the inheritance list\nshould evaluate to a class object or class type which allows\nsubclassing. The class\'s suite is then executed in a new execution\nframe (see section *Naming and binding*), using a newly created local\nnamespace and the original global namespace. (Usually, the suite\ncontains only function definitions.) When the class\'s suite finishes\nexecution, its execution frame is discarded but its local namespace is\nsaved. [4] A class object is then created using the inheritance list\nfor the base classes and the saved local namespace for the attribute\ndictionary. The class name is bound to this class object in the\noriginal local namespace.\n\n**Programmer\'s note:** Variables defined in the class definition are\nclass variables; they are shared by all instances. To create instance\nvariables, they can be set in a method with ``self.name = value``.\nBoth class and instance variables are accessible through the notation\n"``self.name``", and an instance variable hides a class variable with\nthe same name when accessed in this way. Class variables can be used\nas defaults for instance variables, but using mutable values there can\nlead to unexpected results. 
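The pitfall with mutable values as class-variable defaults mentioned above can be shown with a short sketch (the class and names are made up for illustration):

    class Bag(object):
        items = []                      # class variable, shared by every instance

        def add(self, thing):
            self.items.append(thing)    # mutates the single shared list

    a = Bag()
    b = Bag()
    a.add('pen')
    print b.items                       # prints ['pen'], which is rarely what was intended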
For *new-style class*es, descriptors can\nbe used to create instance variables with different implementation\ndetails.\n\nClass definitions, like function definitions, may be wrapped by one or\nmore *decorator* expressions. The evaluation rules for the decorator\nexpressions are the same as for functions. The result must be a class\nobject, which is then bound to the class name.\n\n-[ Footnotes ]-\n\n[1] The exception is propagated to the invocation stack only if there\n is no ``finally`` clause that negates the exception.\n\n[2] Currently, control "flows off the end" except in the case of an\n exception or the execution of a ``return``, ``continue``, or\n ``break`` statement.\n\n[3] A string literal appearing as the first statement in the function\n body is transformed into the function\'s ``__doc__`` attribute and\n therefore the function\'s *docstring*.\n\n[4] A string literal appearing as the first statement in the class\n body is transformed into the namespace\'s ``__doc__`` item and\n therefore the class\'s *docstring*.\n', 'context-managers': u'\nWith Statement Context Managers\n*******************************\n\nNew in version 2.5.\n\nA *context manager* is an object that defines the runtime context to\nbe established when executing a ``with`` statement. The context\nmanager handles the entry into, and the exit from, the desired runtime\ncontext for the execution of the block of code. Context managers are\nnormally invoked using the ``with`` statement (described in section\n*The with statement*), but can also be used by directly invoking their\nmethods.\n\nTypical uses of context managers include saving and restoring various\nkinds of global state, locking and unlocking resources, closing opened\nfiles, etc.\n\nFor more information on context managers, see *Context Manager Types*.\n\nobject.__enter__(self)\n\n Enter the runtime context related to this object. The ``with``\n statement will bind this method\'s return value to the target(s)\n specified in the ``as`` clause of the statement, if any.\n\nobject.__exit__(self, exc_type, exc_value, traceback)\n\n Exit the runtime context related to this object. The parameters\n describe the exception that caused the context to be exited. If the\n context was exited without an exception, all three arguments will\n be ``None``.\n\n If an exception is supplied, and the method wishes to suppress the\n exception (i.e., prevent it from being propagated), it should\n return a true value. Otherwise, the exception will be processed\n normally upon exit from this method.\n\n Note that ``__exit__()`` methods should not reraise the passed-in\n exception; this is the caller\'s responsibility.\n\nSee also:\n\n **PEP 0343** - The "with" statement\n The specification, background, and examples for the Python\n ``with`` statement.\n', 'continue': u'\nThe ``continue`` statement\n**************************\n\n continue_stmt ::= "continue"\n\n``continue`` may only occur syntactically nested in a ``for`` or\n``while`` loop, but not nested in a function or class definition or\n``finally`` clause within that loop. 
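A minimal sketch of ``continue`` nested in a loop (the numbers are arbitrary):

    total = 0
    for n in range(10):
        if n % 3:
            continue        # go on with the next iteration for non-multiples of 3
        total += n
    print total             # prints 18 (0 + 3 + 6 + 9)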
It continues with the next cycle\nof the nearest enclosing loop.\n\nWhen ``continue`` passes control out of a ``try`` statement with a\n``finally`` clause, that ``finally`` clause is executed before really\nstarting the next loop cycle.\n', 'conversions': u'\nArithmetic conversions\n**********************\n\nWhen a description of an arithmetic operator below uses the phrase\n"the numeric arguments are converted to a common type," the arguments\nare coerced using the coercion rules listed at *Coercion rules*. If\nboth arguments are standard numeric types, the following coercions are\napplied:\n\n* If either argument is a complex number, the other is converted to\n complex;\n\n* otherwise, if either argument is a floating point number, the other\n is converted to floating point;\n\n* otherwise, if either argument is a long integer, the other is\n converted to long integer;\n\n* otherwise, both must be plain integers and no conversion is\n necessary.\n\nSome additional rules apply for certain operators (e.g., a string left\nargument to the \'%\' operator). Extensions can define their own\ncoercions.\n', 'customization': u'\nBasic customization\n*******************\n\nobject.__new__(cls[, ...])\n\n Called to create a new instance of class *cls*. ``__new__()`` is a\n static method (special-cased so you need not declare it as such)\n that takes the class of which an instance was requested as its\n first argument. The remaining arguments are those passed to the\n object constructor expression (the call to the class). The return\n value of ``__new__()`` should be the new object instance (usually\n an instance of *cls*).\n\n Typical implementations create a new instance of the class by\n invoking the superclass\'s ``__new__()`` method using\n ``super(currentclass, cls).__new__(cls[, ...])`` with appropriate\n arguments and then modifying the newly-created instance as\n necessary before returning it.\n\n If ``__new__()`` returns an instance of *cls*, then the new\n instance\'s ``__init__()`` method will be invoked like\n ``__init__(self[, ...])``, where *self* is the new instance and the\n remaining arguments are the same as were passed to ``__new__()``.\n\n If ``__new__()`` does not return an instance of *cls*, then the new\n instance\'s ``__init__()`` method will not be invoked.\n\n ``__new__()`` is intended mainly to allow subclasses of immutable\n types (like int, str, or tuple) to customize instance creation. It\n is also commonly overridden in custom metaclasses in order to\n customize class creation.\n\nobject.__init__(self[, ...])\n\n Called when the instance is created. The arguments are those\n passed to the class constructor expression. If a base class has an\n ``__init__()`` method, the derived class\'s ``__init__()`` method,\n if any, must explicitly call it to ensure proper initialization of\n the base class part of the instance; for example:\n ``BaseClass.__init__(self, [args...])``. As a special constraint\n on constructors, no value may be returned; doing so will cause a\n ``TypeError`` to be raised at runtime.\n\nobject.__del__(self)\n\n Called when the instance is about to be destroyed. This is also\n called a destructor. If a base class has a ``__del__()`` method,\n the derived class\'s ``__del__()`` method, if any, must explicitly\n call it to ensure proper deletion of the base class part of the\n instance. Note that it is possible (though not recommended!) for\n the ``__del__()`` method to postpone destruction of the instance by\n creating a new reference to it. 
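As a sketch of the ``__new__()`` use case mentioned above, subclassing an immutable type, the class below is made up for illustration; the value has to be adjusted in ``__new__()`` because the instance cannot be modified afterwards:

    class UpperStr(str):
        def __new__(cls, value):
            # customize creation here rather than in __init__(),
            # since the str instance is immutable once built
            return super(UpperStr, cls).__new__(cls, value.upper())

    print UpperStr('spam')      # prints SPAM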
It may then be called at a later\n time when this new reference is deleted. It is not guaranteed that\n ``__del__()`` methods are called for objects that still exist when\n the interpreter exits.\n\n Note: ``del x`` doesn\'t directly call ``x.__del__()`` --- the former\n decrements the reference count for ``x`` by one, and the latter\n is only called when ``x``\'s reference count reaches zero. Some\n common situations that may prevent the reference count of an\n object from going to zero include: circular references between\n objects (e.g., a doubly-linked list or a tree data structure with\n parent and child pointers); a reference to the object on the\n stack frame of a function that caught an exception (the traceback\n stored in ``sys.exc_traceback`` keeps the stack frame alive); or\n a reference to the object on the stack frame that raised an\n unhandled exception in interactive mode (the traceback stored in\n ``sys.last_traceback`` keeps the stack frame alive). The first\n situation can only be remedied by explicitly breaking the cycles;\n the latter two situations can be resolved by storing ``None`` in\n ``sys.exc_traceback`` or ``sys.last_traceback``. Circular\n references which are garbage are detected when the option cycle\n detector is enabled (it\'s on by default), but can only be cleaned\n up if there are no Python-level ``__del__()`` methods involved.\n Refer to the documentation for the ``gc`` module for more\n information about how ``__del__()`` methods are handled by the\n cycle detector, particularly the description of the ``garbage``\n value.\n\n Warning: Due to the precarious circumstances under which ``__del__()``\n methods are invoked, exceptions that occur during their execution\n are ignored, and a warning is printed to ``sys.stderr`` instead.\n Also, when ``__del__()`` is invoked in response to a module being\n deleted (e.g., when execution of the program is done), other\n globals referenced by the ``__del__()`` method may already have\n been deleted or in the process of being torn down (e.g. the\n import machinery shutting down). For this reason, ``__del__()``\n methods should do the absolute minimum needed to maintain\n external invariants. Starting with version 1.5, Python\n guarantees that globals whose name begins with a single\n underscore are deleted from their module before other globals are\n deleted; if no other references to such globals exist, this may\n help in assuring that imported modules are still available at the\n time when the ``__del__()`` method is called.\n\nobject.__repr__(self)\n\n Called by the ``repr()`` built-in function and by string\n conversions (reverse quotes) to compute the "official" string\n representation of an object. If at all possible, this should look\n like a valid Python expression that could be used to recreate an\n object with the same value (given an appropriate environment). If\n this is not possible, a string of the form ``<...some useful\n description...>`` should be returned. The return value must be a\n string object. If a class defines ``__repr__()`` but not\n ``__str__()``, then ``__repr__()`` is also used when an "informal"\n string representation of instances of that class is required.\n\n This is typically used for debugging, so it is important that the\n representation is information-rich and unambiguous.\n\nobject.__str__(self)\n\n Called by the ``str()`` built-in function and by the ``print``\n statement to compute the "informal" string representation of an\n object. 
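The difference between the two representations can be illustrated with a small made-up class:

    class Point(object):
        def __init__(self, x, y):
            self.x, self.y = x, y

        def __repr__(self):
            # "official" representation, aims to look like a constructor call
            return 'Point(%r, %r)' % (self.x, self.y)

        def __str__(self):
            # "informal" representation used by print and str()
            return '(%s, %s)' % (self.x, self.y)

    p = Point(2, 3)
    print repr(p)       # Point(2, 3)
    print p             # (2, 3)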
This differs from ``__repr__()`` in that it does not have\n to be a valid Python expression: a more convenient or concise\n representation may be used instead. The return value must be a\n string object.\n\nobject.__lt__(self, other)\nobject.__le__(self, other)\nobject.__eq__(self, other)\nobject.__ne__(self, other)\nobject.__gt__(self, other)\nobject.__ge__(self, other)\n\n New in version 2.1.\n\n These are the so-called "rich comparison" methods, and are called\n for comparison operators in preference to ``__cmp__()`` below. The\n correspondence between operator symbols and method names is as\n follows: ``x<y`` calls ``x.__lt__(y)``, ``x<=y`` calls\n ``x.__le__(y)``, ``x==y`` calls ``x.__eq__(y)``, ``x!=y`` and\n ``x<>y`` call ``x.__ne__(y)``, ``x>y`` calls ``x.__gt__(y)``, and\n ``x>=y`` calls ``x.__ge__(y)``.\n\n A rich comparison method may return the singleton\n ``NotImplemented`` if it does not implement the operation for a\n given pair of arguments. By convention, ``False`` and ``True`` are\n returned for a successful comparison. However, these methods can\n return any value, so if the comparison operator is used in a\n Boolean context (e.g., in the condition of an ``if`` statement),\n Python will call ``bool()`` on the value to determine if the result\n is true or false.\n\n There are no implied relationships among the comparison operators.\n The truth of ``x==y`` does not imply that ``x!=y`` is false.\n Accordingly, when defining ``__eq__()``, one should also define\n ``__ne__()`` so that the operators will behave as expected. See\n the paragraph on ``__hash__()`` for some important notes on\n creating *hashable* objects which support custom comparison\n operations and are usable as dictionary keys.\n\n There are no swapped-argument versions of these methods (to be used\n when the left argument does not support the operation but the right\n argument does); rather, ``__lt__()`` and ``__gt__()`` are each\n other\'s reflection, ``__le__()`` and ``__ge__()`` are each other\'s\n reflection, and ``__eq__()`` and ``__ne__()`` are their own\n reflection.\n\n Arguments to rich comparison methods are never coerced.\n\n To automatically generate ordering operations from a single root\n operation, see ``functools.total_ordering()``.\n\nobject.__cmp__(self, other)\n\n Called by comparison operations if rich comparison (see above) is\n not defined. Should return a negative integer if ``self < other``,\n zero if ``self == other``, a positive integer if ``self > other``.\n If no ``__cmp__()``, ``__eq__()`` or ``__ne__()`` operation is\n defined, class instances are compared by object identity\n ("address"). See also the description of ``__hash__()`` for some\n important notes on creating *hashable* objects which support custom\n comparison operations and are usable as dictionary keys. (Note: the\n restriction that exceptions are not propagated by ``__cmp__()`` has\n been removed since Python 1.5.)\n\nobject.__rcmp__(self, other)\n\n Changed in version 2.1: No longer supported.\n\nobject.__hash__(self)\n\n Called by built-in function ``hash()`` and for operations on\n members of hashed collections including ``set``, ``frozenset``, and\n ``dict``. ``__hash__()`` should return an integer. The only\n required property is that objects which compare equal have the same\n hash value; it is advised to somehow mix together (e.g. 
using\n exclusive or) the hash values for the components of the object that\n also play a part in comparison of objects.\n\n If a class does not define a ``__cmp__()`` or ``__eq__()`` method\n it should not define a ``__hash__()`` operation either; if it\n defines ``__cmp__()`` or ``__eq__()`` but not ``__hash__()``, its\n instances will not be usable in hashed collections. If a class\n defines mutable objects and implements a ``__cmp__()`` or\n ``__eq__()`` method, it should not implement ``__hash__()``, since\n hashable collection implementations require that a object\'s hash\n value is immutable (if the object\'s hash value changes, it will be\n in the wrong hash bucket).\n\n User-defined classes have ``__cmp__()`` and ``__hash__()`` methods\n by default; with them, all objects compare unequal (except with\n themselves) and ``x.__hash__()`` returns ``id(x)``.\n\n Classes which inherit a ``__hash__()`` method from a parent class\n but change the meaning of ``__cmp__()`` or ``__eq__()`` such that\n the hash value returned is no longer appropriate (e.g. by switching\n to a value-based concept of equality instead of the default\n identity based equality) can explicitly flag themselves as being\n unhashable by setting ``__hash__ = None`` in the class definition.\n Doing so means that not only will instances of the class raise an\n appropriate ``TypeError`` when a program attempts to retrieve their\n hash value, but they will also be correctly identified as\n unhashable when checking ``isinstance(obj, collections.Hashable)``\n (unlike classes which define their own ``__hash__()`` to explicitly\n raise ``TypeError``).\n\n Changed in version 2.5: ``__hash__()`` may now also return a long\n integer object; the 32-bit integer is then derived from the hash of\n that object.\n\n Changed in version 2.6: ``__hash__`` may now be set to ``None`` to\n explicitly flag instances of a class as unhashable.\n\nobject.__nonzero__(self)\n\n Called to implement truth value testing and the built-in operation\n ``bool()``; should return ``False`` or ``True``, or their integer\n equivalents ``0`` or ``1``. When this method is not defined,\n ``__len__()`` is called, if it is defined, and the object is\n considered true if its result is nonzero. If a class defines\n neither ``__len__()`` nor ``__nonzero__()``, all its instances are\n considered true.\n\nobject.__unicode__(self)\n\n Called to implement ``unicode()`` built-in; should return a Unicode\n object. When this method is not defined, string conversion is\n attempted, and the result of string conversion is converted to\n Unicode using the system default encoding.\n', - 'debugger': u'\n``pdb`` --- The Python Debugger\n*******************************\n\nThe module ``pdb`` defines an interactive source code debugger for\nPython programs. It supports setting (conditional) breakpoints and\nsingle stepping at the source line level, inspection of stack frames,\nsource code listing, and evaluation of arbitrary Python code in the\ncontext of any stack frame. It also supports post-mortem debugging\nand can be called under program control.\n\nThe debugger is extensible --- it is actually defined as the class\n``Pdb``. This is currently undocumented but easily understood by\nreading the source. The extension interface uses the modules ``bdb``\nand ``cmd``.\n\nThe debugger\'s prompt is ``(Pdb)``. 
Typical usage to run a program\nunder control of the debugger is:\n\n >>> import pdb\n >>> import mymodule\n >>> pdb.run(\'mymodule.test()\')\n > (0)?()\n (Pdb) continue\n > (1)?()\n (Pdb) continue\n NameError: \'spam\'\n > (1)?()\n (Pdb)\n\n``pdb.py`` can also be invoked as a script to debug other scripts.\nFor example:\n\n python -m pdb myscript.py\n\nWhen invoked as a script, pdb will automatically enter post-mortem\ndebugging if the program being debugged exits abnormally. After post-\nmortem debugging (or after normal exit of the program), pdb will\nrestart the program. Automatic restarting preserves pdb\'s state (such\nas breakpoints) and in most cases is more useful than quitting the\ndebugger upon program\'s exit.\n\nNew in version 2.4: Restarting post-mortem behavior added.\n\nThe typical usage to break into the debugger from a running program is\nto insert\n\n import pdb; pdb.set_trace()\n\nat the location you want to break into the debugger. You can then\nstep through the code following this statement, and continue running\nwithout the debugger using the ``c`` command.\n\nThe typical usage to inspect a crashed program is:\n\n >>> import pdb\n >>> import mymodule\n >>> mymodule.test()\n Traceback (most recent call last):\n File "", line 1, in ?\n File "./mymodule.py", line 4, in test\n test2()\n File "./mymodule.py", line 3, in test2\n print spam\n NameError: spam\n >>> pdb.pm()\n > ./mymodule.py(3)test2()\n -> print spam\n (Pdb)\n\nThe module defines the following functions; each enters the debugger\nin a slightly different way:\n\npdb.run(statement[, globals[, locals]])\n\n Execute the *statement* (given as a string) under debugger control.\n The debugger prompt appears before any code is executed; you can\n set breakpoints and type ``continue``, or you can step through the\n statement using ``step`` or ``next`` (all these commands are\n explained below). The optional *globals* and *locals* arguments\n specify the environment in which the code is executed; by default\n the dictionary of the module ``__main__`` is used. (See the\n explanation of the ``exec`` statement or the ``eval()`` built-in\n function.)\n\npdb.runeval(expression[, globals[, locals]])\n\n Evaluate the *expression* (given as a string) under debugger\n control. When ``runeval()`` returns, it returns the value of the\n expression. Otherwise this function is similar to ``run()``.\n\npdb.runcall(function[, argument, ...])\n\n Call the *function* (a function or method object, not a string)\n with the given arguments. When ``runcall()`` returns, it returns\n whatever the function call returned. The debugger prompt appears\n as soon as the function is entered.\n\npdb.set_trace()\n\n Enter the debugger at the calling stack frame. This is useful to\n hard-code a breakpoint at a given point in a program, even if the\n code is not otherwise being debugged (e.g. when an assertion\n fails).\n\npdb.post_mortem([traceback])\n\n Enter post-mortem debugging of the given *traceback* object. If no\n *traceback* is given, it uses the one of the exception that is\n currently being handled (an exception must be being handled if the\n default is to be used).\n\npdb.pm()\n\n Enter post-mortem debugging of the traceback found in\n ``sys.last_traceback``.\n\nThe ``run_*`` functions and ``set_trace()`` are aliases for\ninstantiating the ``Pdb`` class and calling the method of the same\nname. 
If you want to access further features, you have to do this\nyourself:\n\nclass class pdb.Pdb(completekey=\'tab\', stdin=None, stdout=None, skip=None)\n\n ``Pdb`` is the debugger class.\n\n The *completekey*, *stdin* and *stdout* arguments are passed to the\n underlying ``cmd.Cmd`` class; see the description there.\n\n The *skip* argument, if given, must be an iterable of glob-style\n module name patterns. The debugger will not step into frames that\n originate in a module that matches one of these patterns. [1]\n\n Example call to enable tracing with *skip*:\n\n import pdb; pdb.Pdb(skip=[\'django.*\']).set_trace()\n\n New in version 2.7: The *skip* argument.\n\n run(statement[, globals[, locals]])\n runeval(expression[, globals[, locals]])\n runcall(function[, argument, ...])\n set_trace()\n\n See the documentation for the functions explained above.\n', + 'debugger': u'\n``pdb`` --- The Python Debugger\n*******************************\n\nThe module ``pdb`` defines an interactive source code debugger for\nPython programs. It supports setting (conditional) breakpoints and\nsingle stepping at the source line level, inspection of stack frames,\nsource code listing, and evaluation of arbitrary Python code in the\ncontext of any stack frame. It also supports post-mortem debugging\nand can be called under program control.\n\nThe debugger is extensible --- it is actually defined as the class\n``Pdb``. This is currently undocumented but easily understood by\nreading the source. The extension interface uses the modules ``bdb``\nand ``cmd``.\n\nThe debugger\'s prompt is ``(Pdb)``. Typical usage to run a program\nunder control of the debugger is:\n\n >>> import pdb\n >>> import mymodule\n >>> pdb.run(\'mymodule.test()\')\n > <string>(0)?()\n (Pdb) continue\n > <string>(1)?()\n (Pdb) continue\n NameError: \'spam\'\n > <string>(1)?()\n (Pdb)\n\n``pdb.py`` can also be invoked as a script to debug other scripts.\nFor example:\n\n python -m pdb myscript.py\n\nWhen invoked as a script, pdb will automatically enter post-mortem\ndebugging if the program being debugged exits abnormally. After post-\nmortem debugging (or after normal exit of the program), pdb will\nrestart the program. Automatic restarting preserves pdb\'s state (such\nas breakpoints) and in most cases is more useful than quitting the\ndebugger upon program\'s exit.\n\nNew in version 2.4: Restarting post-mortem behavior added.\n\nThe typical usage to break into the debugger from a running program is\nto insert\n\n import pdb; pdb.set_trace()\n\nat the location you want to break into the debugger. You can then\nstep through the code following this statement, and continue running\nwithout the debugger using the ``c`` command.\n\nThe typical usage to inspect a crashed program is:\n\n >>> import pdb\n >>> import mymodule\n >>> mymodule.test()\n Traceback (most recent call last):\n File "<stdin>", line 1, in ?\n File "./mymodule.py", line 4, in test\n test2()\n File "./mymodule.py", line 3, in test2\n print spam\n NameError: spam\n >>> pdb.pm()\n > ./mymodule.py(3)test2()\n -> print spam\n (Pdb)\n\nThe module defines the following functions; each enters the debugger\nin a slightly different way:\n\npdb.run(statement[, globals[, locals]])\n\n Execute the *statement* (given as a string) under debugger control.\n The debugger prompt appears before any code is executed; you can\n set breakpoints and type ``continue``, or you can step through the\n statement using ``step`` or ``next`` (all these commands are\n explained below). 
The optional *globals* and *locals* arguments\n specify the environment in which the code is executed; by default\n the dictionary of the module ``__main__`` is used. (See the\n explanation of the ``exec`` statement or the ``eval()`` built-in\n function.)\n\npdb.runeval(expression[, globals[, locals]])\n\n Evaluate the *expression* (given as a string) under debugger\n control. When ``runeval()`` returns, it returns the value of the\n expression. Otherwise this function is similar to ``run()``.\n\npdb.runcall(function[, argument, ...])\n\n Call the *function* (a function or method object, not a string)\n with the given arguments. When ``runcall()`` returns, it returns\n whatever the function call returned. The debugger prompt appears\n as soon as the function is entered.\n\npdb.set_trace()\n\n Enter the debugger at the calling stack frame. This is useful to\n hard-code a breakpoint at a given point in a program, even if the\n code is not otherwise being debugged (e.g. when an assertion\n fails).\n\npdb.post_mortem([traceback])\n\n Enter post-mortem debugging of the given *traceback* object. If no\n *traceback* is given, it uses the one of the exception that is\n currently being handled (an exception must be being handled if the\n default is to be used).\n\npdb.pm()\n\n Enter post-mortem debugging of the traceback found in\n ``sys.last_traceback``.\n\nThe ``run*`` functions and ``set_trace()`` are aliases for\ninstantiating the ``Pdb`` class and calling the method of the same\nname. If you want to access further features, you have to do this\nyourself:\n\nclass class pdb.Pdb(completekey=\'tab\', stdin=None, stdout=None, skip=None)\n\n ``Pdb`` is the debugger class.\n\n The *completekey*, *stdin* and *stdout* arguments are passed to the\n underlying ``cmd.Cmd`` class; see the description there.\n\n The *skip* argument, if given, must be an iterable of glob-style\n module name patterns. The debugger will not step into frames that\n originate in a module that matches one of these patterns. [1]\n\n Example call to enable tracing with *skip*:\n\n import pdb; pdb.Pdb(skip=[\'django.*\']).set_trace()\n\n New in version 2.7: The *skip* argument.\n\n run(statement[, globals[, locals]])\n runeval(expression[, globals[, locals]])\n runcall(function[, argument, ...])\n set_trace()\n\n See the documentation for the functions explained above.\n', 'del': u'\nThe ``del`` statement\n*********************\n\n del_stmt ::= "del" target_list\n\nDeletion is recursively defined very similar to the way assignment is\ndefined. Rather that spelling it out in full details, here are some\nhints.\n\nDeletion of a target list recursively deletes each target, from left\nto right.\n\nDeletion of a name removes the binding of that name from the local or\nglobal namespace, depending on whether the name occurs in a ``global``\nstatement in the same code block. 
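A short illustration of deleting a target list from left to right (the names are arbitrary):

    x = 1
    y = [1, 2, 3]
    del x, y[0]         # unbinds the name 'x', then deletes the first list item
    print y             # prints [2, 3]
    print x             # raises NameError, since the name is now unbound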
If the name is unbound, a\n``NameError`` exception will be raised.\n\nIt is illegal to delete a name from the local namespace if it occurs\nas a free variable in a nested block.\n\nDeletion of attribute references, subscriptions and slicings is passed\nto the primary object involved; deletion of a slicing is in general\nequivalent to assignment of an empty slice of the right type (but even\nthis is determined by the sliced object).\n', 'dict': u'\nDictionary displays\n*******************\n\nA dictionary display is a possibly empty series of key/datum pairs\nenclosed in curly braces:\n\n dict_display ::= "{" [key_datum_list | dict_comprehension] "}"\n key_datum_list ::= key_datum ("," key_datum)* [","]\n key_datum ::= expression ":" expression\n dict_comprehension ::= expression ":" expression comp_for\n\nA dictionary display yields a new dictionary object.\n\nIf a comma-separated sequence of key/datum pairs is given, they are\nevaluated from left to right to define the entries of the dictionary:\neach key object is used as a key into the dictionary to store the\ncorresponding datum. This means that you can specify the same key\nmultiple times in the key/datum list, and the final dictionary\'s value\nfor that key will be the last one given.\n\nA dict comprehension, in contrast to list and set comprehensions,\nneeds two expressions separated with a colon followed by the usual\n"for" and "if" clauses. When the comprehension is run, the resulting\nkey and value elements are inserted in the new dictionary in the order\nthey are produced.\n\nRestrictions on the types of the key values are listed earlier in\nsection *The standard type hierarchy*. (To summarize, the key type\nshould be *hashable*, which excludes all mutable objects.) Clashes\nbetween duplicate keys are not detected; the last datum (textually\nrightmost in the display) stored for a given key value prevails.\n', 'dynamic-features': u'\nInteraction with dynamic features\n*********************************\n\nThere are several cases where Python statements are illegal when used\nin conjunction with nested scopes that contain free variables.\n\nIf a variable is referenced in an enclosing scope, it is illegal to\ndelete the name. An error will be reported at compile time.\n\nIf the wild card form of import --- ``import *`` --- is used in a\nfunction and the function contains or is a nested block with free\nvariables, the compiler will raise a ``SyntaxError``.\n\nIf ``exec`` is used in a function and the function contains or is a\nnested block with free variables, the compiler will raise a\n``SyntaxError`` unless the exec explicitly specifies the local\nnamespace for the ``exec``. (In other words, ``exec obj`` would be\nillegal, but ``exec obj in ns`` would be legal.)\n\nThe ``eval()``, ``execfile()``, and ``input()`` functions and the\n``exec`` statement do not have access to the full environment for\nresolving names. Names may be resolved in the local and global\nnamespaces of the caller. Free variables are not resolved in the\nnearest enclosing namespace, but in the global namespace. 
[1] The\n``exec`` statement and the ``eval()`` and ``execfile()`` functions\nhave optional arguments to override the global and local namespace.\nIf only one namespace is specified, it is used for both.\n', 'else': u'\nThe ``if`` statement\n********************\n\nThe ``if`` statement is used for conditional execution:\n\n if_stmt ::= "if" expression ":" suite\n ( "elif" expression ":" suite )*\n ["else" ":" suite]\n\nIt selects exactly one of the suites by evaluating the expressions one\nby one until one is found to be true (see section *Boolean operations*\nfor the definition of true and false); then that suite is executed\n(and no other part of the ``if`` statement is executed or evaluated).\nIf all expressions are false, the suite of the ``else`` clause, if\npresent, is executed.\n', 'exceptions': u'\nExceptions\n**********\n\nExceptions are a means of breaking out of the normal flow of control\nof a code block in order to handle errors or other exceptional\nconditions. An exception is *raised* at the point where the error is\ndetected; it may be *handled* by the surrounding code block or by any\ncode block that directly or indirectly invoked the code block where\nthe error occurred.\n\nThe Python interpreter raises an exception when it detects a run-time\nerror (such as division by zero). A Python program can also\nexplicitly raise an exception with the ``raise`` statement. Exception\nhandlers are specified with the ``try`` ... ``except`` statement. The\n``finally`` clause of such a statement can be used to specify cleanup\ncode which does not handle the exception, but is executed whether an\nexception occurred or not in the preceding code.\n\nPython uses the "termination" model of error handling: an exception\nhandler can find out what happened and continue execution at an outer\nlevel, but it cannot repair the cause of the error and retry the\nfailing operation (except by re-entering the offending piece of code\nfrom the top).\n\nWhen an exception is not handled at all, the interpreter terminates\nexecution of the program, or returns to its interactive main loop. In\neither case, it prints a stack backtrace, except when the exception is\n``SystemExit``.\n\nExceptions are identified by class instances. The ``except`` clause\nis selected depending on the class of the instance: it must reference\nthe class of the instance or a base class thereof. The instance can\nbe received by the handler and can carry additional information about\nthe exceptional condition.\n\nExceptions can also be identified by strings, in which case the\n``except`` clause is selected by object identity. An arbitrary value\ncan be raised along with the identifying string which can be passed to\nthe handler.\n\nNote: Messages to exceptions are not part of the Python API. Their\n contents may change from one version of Python to the next without\n warning and should not be relied on by code which will run under\n multiple versions of the interpreter.\n\nSee also the description of the ``try`` statement in section *The try\nstatement* and ``raise`` statement in section *The raise statement*.\n\n-[ Footnotes ]-\n\n[1] This limitation occurs because the code that is executed by these\n operations is not available at the time the module is compiled.\n', 'exec': u'\nThe ``exec`` statement\n**********************\n\n exec_stmt ::= "exec" or_expr ["in" expression ["," expression]]\n\nThis statement supports dynamic execution of Python code. 
The first\nexpression should evaluate to either a string, an open file object, or\na code object. If it is a string, the string is parsed as a suite of\nPython statements which is then executed (unless a syntax error\noccurs). [1] If it is an open file, the file is parsed until EOF and\nexecuted. If it is a code object, it is simply executed. In all\ncases, the code that\'s executed is expected to be valid as file input\n(see section *File input*). Be aware that the ``return`` and\n``yield`` statements may not be used outside of function definitions\neven within the context of code passed to the ``exec`` statement.\n\nIn all cases, if the optional parts are omitted, the code is executed\nin the current scope. If only the first expression after ``in`` is\nspecified, it should be a dictionary, which will be used for both the\nglobal and the local variables. If two expressions are given, they\nare used for the global and local variables, respectively. If\nprovided, *locals* can be any mapping object.\n\nChanged in version 2.4: Formerly, *locals* was required to be a\ndictionary.\n\nAs a side effect, an implementation may insert additional keys into\nthe dictionaries given besides those corresponding to variable names\nset by the executed code. For example, the current implementation may\nadd a reference to the dictionary of the built-in module\n``__builtin__`` under the key ``__builtins__`` (!).\n\n**Programmer\'s hints:** dynamic evaluation of expressions is supported\nby the built-in function ``eval()``. The built-in functions\n``globals()`` and ``locals()`` return the current global and local\ndictionary, respectively, which may be useful to pass around for use\nby ``exec``.\n\n-[ Footnotes ]-\n\n[1] Note that the parser only accepts the Unix-style end of line\n convention. If you are reading the code from a file, make sure to\n use universal newline mode to convert Windows or Mac-style\n newlines.\n', - 'execmodel': u'\nExecution model\n***************\n\n\nNaming and binding\n==================\n\n*Names* refer to objects. Names are introduced by name binding\noperations. Each occurrence of a name in the program text refers to\nthe *binding* of that name established in the innermost function block\ncontaining the use.\n\nA *block* is a piece of Python program text that is executed as a\nunit. The following are blocks: a module, a function body, and a class\ndefinition. Each command typed interactively is a block. A script\nfile (a file given as standard input to the interpreter or specified\non the interpreter command line the first argument) is a code block.\nA script command (a command specified on the interpreter command line\nwith the \'**-c**\' option) is a code block. The file read by the\nbuilt-in function ``execfile()`` is a code block. The string argument\npassed to the built-in function ``eval()`` and to the ``exec``\nstatement is a code block. The expression read and evaluated by the\nbuilt-in function ``input()`` is a code block.\n\nA code block is executed in an *execution frame*. A frame contains\nsome administrative information (used for debugging) and determines\nwhere and how execution continues after the code block\'s execution has\ncompleted.\n\nA *scope* defines the visibility of a name within a block. If a local\nvariable is defined in a block, its scope includes that block. If the\ndefinition occurs in a function block, the scope extends to any blocks\ncontained within the defining one, unless a contained block introduces\na different binding for the name. 
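Referring back to the ``exec`` statement just described, a minimal sketch of the one-dictionary form (the dictionary name is arbitrary):

    ns = {}
    exec "x = 6 * 7" in ns      # 'ns' is used as both the global and the local namespace
    print ns['x']               # prints 42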
The scope of names defined in a\nclass block is limited to the class block; it does not extend to the\ncode blocks of methods -- this includes generator expressions since\nthey are implemented using a function scope. This means that the\nfollowing will fail:\n\n class A:\n a = 42\n b = list(a + i for i in range(10))\n\nWhen a name is used in a code block, it is resolved using the nearest\nenclosing scope. The set of all such scopes visible to a code block\nis called the block\'s *environment*.\n\nIf a name is bound in a block, it is a local variable of that block.\nIf a name is bound at the module level, it is a global variable. (The\nvariables of the module code block are local and global.) If a\nvariable is used in a code block but not defined there, it is a *free\nvariable*.\n\nWhen a name is not found at all, a ``NameError`` exception is raised.\nIf the name refers to a local variable that has not been bound, a\n``UnboundLocalError`` exception is raised. ``UnboundLocalError`` is a\nsubclass of ``NameError``.\n\nThe following constructs bind names: formal parameters to functions,\n``import`` statements, class and function definitions (these bind the\nclass or function name in the defining block), and targets that are\nidentifiers if occurring in an assignment, ``for`` loop header, in the\nsecond position of an ``except`` clause header or after ``as`` in a\n``with`` statement. The ``import`` statement of the form ``from ...\nimport *`` binds all names defined in the imported module, except\nthose beginning with an underscore. This form may only be used at the\nmodule level.\n\nA target occurring in a ``del`` statement is also considered bound for\nthis purpose (though the actual semantics are to unbind the name). It\nis illegal to unbind a name that is referenced by an enclosing scope;\nthe compiler will report a ``SyntaxError``.\n\nEach assignment or import statement occurs within a block defined by a\nclass or function definition or at the module level (the top-level\ncode block).\n\nIf a name binding operation occurs anywhere within a code block, all\nuses of the name within the block are treated as references to the\ncurrent block. This can lead to errors when a name is used within a\nblock before it is bound. This rule is subtle. Python lacks\ndeclarations and allows name binding operations to occur anywhere\nwithin a code block. The local variables of a code block can be\ndetermined by scanning the entire text of the block for name binding\noperations.\n\nIf the global statement occurs within a block, all uses of the name\nspecified in the statement refer to the binding of that name in the\ntop-level namespace. Names are resolved in the top-level namespace by\nsearching the global namespace, i.e. the namespace of the module\ncontaining the code block, and the builtins namespace, the namespace\nof the module ``__builtin__``. The global namespace is searched\nfirst. If the name is not found there, the builtins namespace is\nsearched. The global statement must precede all uses of the name.\n\nThe builtins namespace associated with the execution of a code block\nis actually found by looking up the name ``__builtins__`` in its\nglobal namespace; this should be a dictionary or a module (in the\nlatter case the module\'s dictionary is used). By default, when in the\n``__main__`` module, ``__builtins__`` is the built-in module\n``__builtin__`` (note: no \'s\'); when in any other module,\n``__builtins__`` is an alias for the dictionary of the ``__builtin__``\nmodule itself. 
``__builtins__`` can be set to a user-created\ndictionary to create a weak form of restricted execution.\n\n**CPython implementation detail:** Users should not touch\n``__builtins__``; it is strictly an implementation detail. Users\nwanting to override values in the builtins namespace should ``import``\nthe ``__builtin__`` (no \'s\') module and modify its attributes\nappropriately.\n\nThe namespace for a module is automatically created the first time a\nmodule is imported. The main module for a script is always called\n``__main__``.\n\nThe global statement has the same scope as a name binding operation in\nthe same block. If the nearest enclosing scope for a free variable\ncontains a global statement, the free variable is treated as a global.\n\nA class definition is an executable statement that may use and define\nnames. These references follow the normal rules for name resolution.\nThe namespace of the class definition becomes the attribute dictionary\nof the class. Names defined at the class scope are not visible in\nmethods.\n\n\nInteraction with dynamic features\n---------------------------------\n\nThere are several cases where Python statements are illegal when used\nin conjunction with nested scopes that contain free variables.\n\nIf a variable is referenced in an enclosing scope, it is illegal to\ndelete the name. An error will be reported at compile time.\n\nIf the wild card form of import --- ``import *`` --- is used in a\nfunction and the function contains or is a nested block with free\nvariables, the compiler will raise a ``SyntaxError``.\n\nIf ``exec`` is used in a function and the function contains or is a\nnested block with free variables, the compiler will raise a\n``SyntaxError`` unless the exec explicitly specifies the local\nnamespace for the ``exec``. (In other words, ``exec obj`` would be\nillegal, but ``exec obj in ns`` would be legal.)\n\nThe ``eval()``, ``execfile()``, and ``input()`` functions and the\n``exec`` statement do not have access to the full environment for\nresolving names. Names may be resolved in the local and global\nnamespaces of the caller. Free variables are not resolved in the\nnearest enclosing namespace, but in the global namespace. [1] The\n``exec`` statement and the ``eval()`` and ``execfile()`` functions\nhave optional arguments to override the global and local namespace.\nIf only one namespace is specified, it is used for both.\n\n\nExceptions\n==========\n\nExceptions are a means of breaking out of the normal flow of control\nof a code block in order to handle errors or other exceptional\nconditions. An exception is *raised* at the point where the error is\ndetected; it may be *handled* by the surrounding code block or by any\ncode block that directly or indirectly invoked the code block where\nthe error occurred.\n\nThe Python interpreter raises an exception when it detects a run-time\nerror (such as division by zero). A Python program can also\nexplicitly raise an exception with the ``raise`` statement. Exception\nhandlers are specified with the ``try`` ... ``except`` statement. 
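A small sketch of raising and handling an exception as described here (the function and message are made up):

    def fetch(mapping, key):
        if key not in mapping:
            raise KeyError(key)         # explicitly raise an exception
        return mapping[key]

    try:
        fetch({}, 'spam')
    except KeyError as exc:             # the handler receives the instance
        print "missing key:", exc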
The\n``finally`` clause of such a statement can be used to specify cleanup\ncode which does not handle the exception, but is executed whether an\nexception occurred or not in the preceding code.\n\nPython uses the "termination" model of error handling: an exception\nhandler can find out what happened and continue execution at an outer\nlevel, but it cannot repair the cause of the error and retry the\nfailing operation (except by re-entering the offending piece of code\nfrom the top).\n\nWhen an exception is not handled at all, the interpreter terminates\nexecution of the program, or returns to its interactive main loop. In\neither case, it prints a stack backtrace, except when the exception is\n``SystemExit``.\n\nExceptions are identified by class instances. The ``except`` clause\nis selected depending on the class of the instance: it must reference\nthe class of the instance or a base class thereof. The instance can\nbe received by the handler and can carry additional information about\nthe exceptional condition.\n\nExceptions can also be identified by strings, in which case the\n``except`` clause is selected by object identity. An arbitrary value\ncan be raised along with the identifying string which can be passed to\nthe handler.\n\nNote: Messages to exceptions are not part of the Python API. Their\n contents may change from one version of Python to the next without\n warning and should not be relied on by code which will run under\n multiple versions of the interpreter.\n\nSee also the description of the ``try`` statement in section *The try\nstatement* and ``raise`` statement in section *The raise statement*.\n\n-[ Footnotes ]-\n\n[1] This limitation occurs because the code that is executed by these\n operations is not available at the time the module is compiled.\n', + 'execmodel': u'\nExecution model\n***************\n\n\nNaming and binding\n==================\n\n*Names* refer to objects. Names are introduced by name binding\noperations. Each occurrence of a name in the program text refers to\nthe *binding* of that name established in the innermost function block\ncontaining the use.\n\nA *block* is a piece of Python program text that is executed as a\nunit. The following are blocks: a module, a function body, and a class\ndefinition. Each command typed interactively is a block. A script\nfile (a file given as standard input to the interpreter or specified\non the interpreter command line the first argument) is a code block.\nA script command (a command specified on the interpreter command line\nwith the \'**-c**\' option) is a code block. The file read by the\nbuilt-in function ``execfile()`` is a code block. The string argument\npassed to the built-in function ``eval()`` and to the ``exec``\nstatement is a code block. The expression read and evaluated by the\nbuilt-in function ``input()`` is a code block.\n\nA code block is executed in an *execution frame*. A frame contains\nsome administrative information (used for debugging) and determines\nwhere and how execution continues after the code block\'s execution has\ncompleted.\n\nA *scope* defines the visibility of a name within a block. If a local\nvariable is defined in a block, its scope includes that block. If the\ndefinition occurs in a function block, the scope extends to any blocks\ncontained within the defining one, unless a contained block introduces\na different binding for the name. 
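A minimal doctest-style sketch of that rule (the names here are purely illustrative): a nested function block sees the binding made in the enclosing function block unless it introduces a binding of its own.

    >>> def outer():
    ...     x = 1
    ...     def reads_x():
    ...         return x          # resolved in the enclosing function block
    ...     def rebinds_x():
    ...         x = 2             # a new local binding; the outer 'x' is untouched
    ...         return x
    ...     return reads_x(), rebinds_x(), x
    ...
    >>> outer()
    (1, 2, 1)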
The scope of names defined in a\nclass block is limited to the class block; it does not extend to the\ncode blocks of methods -- this includes generator expressions since\nthey are implemented using a function scope. This means that the\nfollowing will fail:\n\n class A:\n a = 42\n b = list(a + i for i in range(10))\n\nWhen a name is used in a code block, it is resolved using the nearest\nenclosing scope. The set of all such scopes visible to a code block\nis called the block\'s *environment*.\n\nIf a name is bound in a block, it is a local variable of that block.\nIf a name is bound at the module level, it is a global variable. (The\nvariables of the module code block are local and global.) If a\nvariable is used in a code block but not defined there, it is a *free\nvariable*.\n\nWhen a name is not found at all, a ``NameError`` exception is raised.\nIf the name refers to a local variable that has not been bound, a\n``UnboundLocalError`` exception is raised. ``UnboundLocalError`` is a\nsubclass of ``NameError``.\n\nThe following constructs bind names: formal parameters to functions,\n``import`` statements, class and function definitions (these bind the\nclass or function name in the defining block), and targets that are\nidentifiers if occurring in an assignment, ``for`` loop header, in the\nsecond position of an ``except`` clause header or after ``as`` in a\n``with`` statement. The ``import`` statement of the form ``from ...\nimport *`` binds all names defined in the imported module, except\nthose beginning with an underscore. This form may only be used at the\nmodule level.\n\nA target occurring in a ``del`` statement is also considered bound for\nthis purpose (though the actual semantics are to unbind the name). It\nis illegal to unbind a name that is referenced by an enclosing scope;\nthe compiler will report a ``SyntaxError``.\n\nEach assignment or import statement occurs within a block defined by a\nclass or function definition or at the module level (the top-level\ncode block).\n\nIf a name binding operation occurs anywhere within a code block, all\nuses of the name within the block are treated as references to the\ncurrent block. This can lead to errors when a name is used within a\nblock before it is bound. This rule is subtle. Python lacks\ndeclarations and allows name binding operations to occur anywhere\nwithin a code block. The local variables of a code block can be\ndetermined by scanning the entire text of the block for name binding\noperations.\n\nIf the global statement occurs within a block, all uses of the name\nspecified in the statement refer to the binding of that name in the\ntop-level namespace. Names are resolved in the top-level namespace by\nsearching the global namespace, i.e. the namespace of the module\ncontaining the code block, and the builtins namespace, the namespace\nof the module ``__builtin__``. The global namespace is searched\nfirst. If the name is not found there, the builtins namespace is\nsearched. The global statement must precede all uses of the name.\n\nThe builtins namespace associated with the execution of a code block\nis actually found by looking up the name ``__builtins__`` in its\nglobal namespace; this should be a dictionary or a module (in the\nlatter case the module\'s dictionary is used). By default, when in the\n``__main__`` module, ``__builtins__`` is the built-in module\n``__builtin__`` (note: no \'s\'); when in any other module,\n``__builtins__`` is an alias for the dictionary of the ``__builtin__``\nmodule itself. 
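As a small interactive check of the lookup described above (assuming a stock CPython 2.7 session, where the behaviour is exactly as this paragraph documents):

    >>> import __builtin__
    >>> __builtins__ is __builtin__              # in the __main__ module
    True
    >>> import os
    >>> os.__builtins__ is __builtin__.__dict__  # in any other module
    True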
``__builtins__`` can be set to a user-created\ndictionary to create a weak form of restricted execution.\n\n**CPython implementation detail:** Users should not touch\n``__builtins__``; it is strictly an implementation detail. Users\nwanting to override values in the builtins namespace should ``import``\nthe ``__builtin__`` (no \'s\') module and modify its attributes\nappropriately.\n\nThe namespace for a module is automatically created the first time a\nmodule is imported. The main module for a script is always called\n``__main__``.\n\nThe ``global`` statement has the same scope as a name binding\noperation in the same block. If the nearest enclosing scope for a\nfree variable contains a global statement, the free variable is\ntreated as a global.\n\nA class definition is an executable statement that may use and define\nnames. These references follow the normal rules for name resolution.\nThe namespace of the class definition becomes the attribute dictionary\nof the class. Names defined at the class scope are not visible in\nmethods.\n\n\nInteraction with dynamic features\n---------------------------------\n\nThere are several cases where Python statements are illegal when used\nin conjunction with nested scopes that contain free variables.\n\nIf a variable is referenced in an enclosing scope, it is illegal to\ndelete the name. An error will be reported at compile time.\n\nIf the wild card form of import --- ``import *`` --- is used in a\nfunction and the function contains or is a nested block with free\nvariables, the compiler will raise a ``SyntaxError``.\n\nIf ``exec`` is used in a function and the function contains or is a\nnested block with free variables, the compiler will raise a\n``SyntaxError`` unless the exec explicitly specifies the local\nnamespace for the ``exec``. (In other words, ``exec obj`` would be\nillegal, but ``exec obj in ns`` would be legal.)\n\nThe ``eval()``, ``execfile()``, and ``input()`` functions and the\n``exec`` statement do not have access to the full environment for\nresolving names. Names may be resolved in the local and global\nnamespaces of the caller. Free variables are not resolved in the\nnearest enclosing namespace, but in the global namespace. [1] The\n``exec`` statement and the ``eval()`` and ``execfile()`` functions\nhave optional arguments to override the global and local namespace.\nIf only one namespace is specified, it is used for both.\n\n\nExceptions\n==========\n\nExceptions are a means of breaking out of the normal flow of control\nof a code block in order to handle errors or other exceptional\nconditions. An exception is *raised* at the point where the error is\ndetected; it may be *handled* by the surrounding code block or by any\ncode block that directly or indirectly invoked the code block where\nthe error occurred.\n\nThe Python interpreter raises an exception when it detects a run-time\nerror (such as division by zero). A Python program can also\nexplicitly raise an exception with the ``raise`` statement. Exception\nhandlers are specified with the ``try`` ... ``except`` statement. 
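A short hedged sketch of such a handler in a CPython 2.7 session (the message text is CPython's wording for this error and may differ on other interpreters):

    >>> try:
    ...     1 / 0
    ... except ZeroDivisionError as exc:   # selected because exc is an instance of this class
    ...     print 'handled:', exc
    ...
    handled: integer division or modulo by zero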
The\n``finally`` clause of such a statement can be used to specify cleanup\ncode which does not handle the exception, but is executed whether an\nexception occurred or not in the preceding code.\n\nPython uses the "termination" model of error handling: an exception\nhandler can find out what happened and continue execution at an outer\nlevel, but it cannot repair the cause of the error and retry the\nfailing operation (except by re-entering the offending piece of code\nfrom the top).\n\nWhen an exception is not handled at all, the interpreter terminates\nexecution of the program, or returns to its interactive main loop. In\neither case, it prints a stack backtrace, except when the exception is\n``SystemExit``.\n\nExceptions are identified by class instances. The ``except`` clause\nis selected depending on the class of the instance: it must reference\nthe class of the instance or a base class thereof. The instance can\nbe received by the handler and can carry additional information about\nthe exceptional condition.\n\nExceptions can also be identified by strings, in which case the\n``except`` clause is selected by object identity. An arbitrary value\ncan be raised along with the identifying string which can be passed to\nthe handler.\n\nNote: Messages to exceptions are not part of the Python API. Their\n contents may change from one version of Python to the next without\n warning and should not be relied on by code which will run under\n multiple versions of the interpreter.\n\nSee also the description of the ``try`` statement in section *The try\nstatement* and ``raise`` statement in section *The raise statement*.\n\n-[ Footnotes ]-\n\n[1] This limitation occurs because the code that is executed by these\n operations is not available at the time the module is compiled.\n', 'exprlists': u'\nExpression lists\n****************\n\n expression_list ::= expression ( "," expression )* [","]\n\nAn expression list containing at least one comma yields a tuple. The\nlength of the tuple is the number of expressions in the list. The\nexpressions are evaluated from left to right.\n\nThe trailing comma is required only to create a single tuple (a.k.a. a\n*singleton*); it is optional in all other cases. A single expression\nwithout a trailing comma doesn\'t create a tuple, but rather yields the\nvalue of that expression. (To create an empty tuple, use an empty pair\nof parentheses: ``()``.)\n', 'floating': u'\nFloating point literals\n***********************\n\nFloating point literals are described by the following lexical\ndefinitions:\n\n floatnumber ::= pointfloat | exponentfloat\n pointfloat ::= [intpart] fraction | intpart "."\n exponentfloat ::= (intpart | pointfloat) exponent\n intpart ::= digit+\n fraction ::= "." digit+\n exponent ::= ("e" | "E") ["+" | "-"] digit+\n\nNote that the integer and exponent parts of floating point numbers can\nlook like octal integers, but are interpreted using radix 10. For\nexample, ``077e010`` is legal, and denotes the same number as\n``77e10``. The allowed range of floating point literals is\nimplementation-dependent. Some examples of floating point literals:\n\n 3.14 10. 
.001 1e100 3.14e-10 0e0\n\nNote that numeric literals do not include a sign; a phrase like ``-1``\nis actually an expression composed of the unary operator ``-`` and the\nliteral ``1``.\n', 'for': u'\nThe ``for`` statement\n*********************\n\nThe ``for`` statement is used to iterate over the elements of a\nsequence (such as a string, tuple or list) or other iterable object:\n\n for_stmt ::= "for" target_list "in" expression_list ":" suite\n ["else" ":" suite]\n\nThe expression list is evaluated once; it should yield an iterable\nobject. An iterator is created for the result of the\n``expression_list``. The suite is then executed once for each item\nprovided by the iterator, in the order of ascending indices. Each\nitem in turn is assigned to the target list using the standard rules\nfor assignments, and then the suite is executed. When the items are\nexhausted (which is immediately when the sequence is empty), the suite\nin the ``else`` clause, if present, is executed, and the loop\nterminates.\n\nA ``break`` statement executed in the first suite terminates the loop\nwithout executing the ``else`` clause\'s suite. A ``continue``\nstatement executed in the first suite skips the rest of the suite and\ncontinues with the next item, or with the ``else`` clause if there was\nno next item.\n\nThe suite may assign to the variable(s) in the target list; this does\nnot affect the next item assigned to it.\n\nThe target list is not deleted when the loop is finished, but if the\nsequence is empty, it will not have been assigned to at all by the\nloop. Hint: the built-in function ``range()`` returns a sequence of\nintegers suitable to emulate the effect of Pascal\'s ``for i := a to b\ndo``; e.g., ``range(3)`` returns the list ``[0, 1, 2]``.\n\nNote: There is a subtlety when the sequence is being modified by the loop\n (this can only occur for mutable sequences, i.e. lists). An internal\n counter is used to keep track of which item is used next, and this\n is incremented on each iteration. When this counter has reached the\n length of the sequence the loop terminates. This means that if the\n suite deletes the current (or a previous) item from the sequence,\n the next item will be skipped (since it gets the index of the\n current item which has already been treated). Likewise, if the\n suite inserts an item in the sequence before the current item, the\n current item will be treated again the next time through the loop.\n This can lead to nasty bugs that can be avoided by making a\n temporary copy using a slice of the whole sequence, e.g.,\n\n for x in a[:]:\n if x < 0: a.remove(x)\n', - 'formatstrings': u'\nFormat String Syntax\n********************\n\nThe ``str.format()`` method and the ``Formatter`` class share the same\nsyntax for format strings (although in the case of ``Formatter``,\nsubclasses can define their own format string syntax).\n\nFormat strings contain "replacement fields" surrounded by curly braces\n``{}``. Anything that is not contained in braces is considered literal\ntext, which is copied unchanged to the output. If you need to include\na brace character in the literal text, it can be escaped by doubling:\n``{{`` and ``}}``.\n\nThe grammar for a replacement field is as follows:\n\n replacement_field ::= "{" [field_name] ["!" conversion] [":" format_spec] "}"\n field_name ::= arg_name ("." 
attribute_name | "[" element_index "]")*\n arg_name ::= [identifier | integer]\n attribute_name ::= identifier\n element_index ::= integer | index_string\n index_string ::= +\n conversion ::= "r" | "s"\n format_spec ::= \n\nIn less formal terms, the replacement field can start with a\n*field_name* that specifies the object whose value is to be formatted\nand inserted into the output instead of the replacement field. The\n*field_name* is optionally followed by a *conversion* field, which is\npreceded by an exclamation point ``\'!\'``, and a *format_spec*, which\nis preceded by a colon ``\':\'``. These specify a non-default format\nfor the replacement value.\n\nSee also the *Format Specification Mini-Language* section.\n\nThe *field_name* itself begins with an *arg_name* that is either\neither a number or a keyword. If it\'s a number, it refers to a\npositional argument, and if it\'s a keyword, it refers to a named\nkeyword argument. If the numerical arg_names in a format string are\n0, 1, 2, ... in sequence, they can all be omitted (not just some) and\nthe numbers 0, 1, 2, ... will be automatically inserted in that order.\nThe *arg_name* can be followed by any number of index or attribute\nexpressions. An expression of the form ``\'.name\'`` selects the named\nattribute using ``getattr()``, while an expression of the form\n``\'[index]\'`` does an index lookup using ``__getitem__()``.\n\nChanged in version 2.7: The positional argument specifiers can be\nomitted, so ``\'{} {}\'`` is equivalent to ``\'{0} {1}\'``.\n\nSome simple format string examples:\n\n "First, thou shalt count to {0}" # References first positional argument\n "Bring me a {}" # Implicitly references the first positional argument\n "From {} to {}" # Same as "From {0} to {1}"\n "My quest is {name}" # References keyword argument \'name\'\n "Weight in tons {0.weight}" # \'weight\' attribute of first positional arg\n "Units destroyed: {players[0]}" # First element of keyword argument \'players\'.\n\nThe *conversion* field causes a type coercion before formatting.\nNormally, the job of formatting a value is done by the\n``__format__()`` method of the value itself. However, in some cases\nit is desirable to force a type to be formatted as a string,\noverriding its own definition of formatting. By converting the value\nto a string before calling ``__format__()``, the normal formatting\nlogic is bypassed.\n\nTwo conversion flags are currently supported: ``\'!s\'`` which calls\n``str()`` on the value, and ``\'!r\'`` which calls ``repr()``.\n\nSome examples:\n\n "Harold\'s a clever {0!s}" # Calls str() on the argument first\n "Bring out the holy {name!r}" # Calls repr() on the argument first\n\nThe *format_spec* field contains a specification of how the value\nshould be presented, including such details as field width, alignment,\npadding, decimal precision and so on. Each value type can define its\nown "formatting mini-language" or interpretation of the *format_spec*.\n\nMost built-in types support a common formatting mini-language, which\nis described in the next section.\n\nA *format_spec* field can also include nested replacement fields\nwithin it. These nested replacement fields can contain only a field\nname; conversion flags and format specifications are not allowed. The\nreplacement fields within the format_spec are substituted before the\n*format_spec* string is interpreted. 
This allows the formatting of a\nvalue to be dynamically specified.\n\nSee the *Format examples* section for some examples.\n\n\nFormat Specification Mini-Language\n==================================\n\n"Format specifications" are used within replacement fields contained\nwithin a format string to define how individual values are presented\n(see *Format String Syntax*). They can also be passed directly to the\nbuilt-in ``format()`` function. Each formattable type may define how\nthe format specification is to be interpreted.\n\nMost built-in types implement the following options for format\nspecifications, although some of the formatting options are only\nsupported by the numeric types.\n\nA general convention is that an empty format string (``""``) produces\nthe same result as if you had called ``str()`` on the value. A non-\nempty format string typically modifies the result.\n\nThe general form of a *standard format specifier* is:\n\n format_spec ::= [[fill]align][sign][#][0][width][,][.precision][type]\n fill ::= \n align ::= "<" | ">" | "=" | "^"\n sign ::= "+" | "-" | " "\n width ::= integer\n precision ::= integer\n type ::= "b" | "c" | "d" | "e" | "E" | "f" | "F" | "g" | "G" | "n" | "o" | "s" | "x" | "X" | "%"\n\nThe *fill* character can be any character other than \'}\' (which\nsignifies the end of the field). The presence of a fill character is\nsignaled by the *next* character, which must be one of the alignment\noptions. If the second character of *format_spec* is not a valid\nalignment option, then it is assumed that both the fill character and\nthe alignment option are absent.\n\nThe meaning of the various alignment options is as follows:\n\n +-----------+------------------------------------------------------------+\n | Option | Meaning |\n +===========+============================================================+\n | ``\'<\'`` | Forces the field to be left-aligned within the available |\n | | space (this is the default). |\n +-----------+------------------------------------------------------------+\n | ``\'>\'`` | Forces the field to be right-aligned within the available |\n | | space. |\n +-----------+------------------------------------------------------------+\n | ``\'=\'`` | Forces the padding to be placed after the sign (if any) |\n | | but before the digits. This is used for printing fields |\n | | in the form \'+000000120\'. This alignment option is only |\n | | valid for numeric types. |\n +-----------+------------------------------------------------------------+\n | ``\'^\'`` | Forces the field to be centered within the available |\n | | space. |\n +-----------+------------------------------------------------------------+\n\nNote that unless a minimum field width is defined, the field width\nwill always be the same size as the data to fill it, so that the\nalignment option has no meaning in this case.\n\nThe *sign* option is only valid for number types, and can be one of\nthe following:\n\n +-----------+------------------------------------------------------------+\n | Option | Meaning |\n +===========+============================================================+\n | ``\'+\'`` | indicates that a sign should be used for both positive as |\n | | well as negative numbers. |\n +-----------+------------------------------------------------------------+\n | ``\'-\'`` | indicates that a sign should be used only for negative |\n | | numbers (this is the default behavior). 
|\n +-----------+------------------------------------------------------------+\n | space | indicates that a leading space should be used on positive |\n | | numbers, and a minus sign on negative numbers. |\n +-----------+------------------------------------------------------------+\n\nThe ``\'#\'`` option is only valid for integers, and only for binary,\noctal, or hexadecimal output. If present, it specifies that the\noutput will be prefixed by ``\'0b\'``, ``\'0o\'``, or ``\'0x\'``,\nrespectively.\n\nThe ``\',\'`` option signals the use of a comma for a thousands\nseparator. For a locale aware separator, use the ``\'n\'`` integer\npresentation type instead.\n\nChanged in version 2.7: Added the ``\',\'`` option (see also **PEP\n378**).\n\n*width* is a decimal integer defining the minimum field width. If not\nspecified, then the field width will be determined by the content.\n\nIf the *width* field is preceded by a zero (``\'0\'``) character, this\nenables zero-padding. This is equivalent to an *alignment* type of\n``\'=\'`` and a *fill* character of ``\'0\'``.\n\nThe *precision* is a decimal number indicating how many digits should\nbe displayed after the decimal point for a floating point value\nformatted with ``\'f\'`` and ``\'F\'``, or before and after the decimal\npoint for a floating point value formatted with ``\'g\'`` or ``\'G\'``.\nFor non-number types the field indicates the maximum field size - in\nother words, how many characters will be used from the field content.\nThe *precision* is not allowed for integer values.\n\nFinally, the *type* determines how the data should be presented.\n\nThe available string presentation types are:\n\n +-----------+------------------------------------------------------------+\n | Type | Meaning |\n +===========+============================================================+\n | ``\'s\'`` | String format. This is the default type for strings and |\n | | may be omitted. |\n +-----------+------------------------------------------------------------+\n | None | The same as ``\'s\'``. |\n +-----------+------------------------------------------------------------+\n\nThe available integer presentation types are:\n\n +-----------+------------------------------------------------------------+\n | Type | Meaning |\n +===========+============================================================+\n | ``\'b\'`` | Binary format. Outputs the number in base 2. |\n +-----------+------------------------------------------------------------+\n | ``\'c\'`` | Character. Converts the integer to the corresponding |\n | | unicode character before printing. |\n +-----------+------------------------------------------------------------+\n | ``\'d\'`` | Decimal Integer. Outputs the number in base 10. |\n +-----------+------------------------------------------------------------+\n | ``\'o\'`` | Octal format. Outputs the number in base 8. |\n +-----------+------------------------------------------------------------+\n | ``\'x\'`` | Hex format. Outputs the number in base 16, using lower- |\n | | case letters for the digits above 9. |\n +-----------+------------------------------------------------------------+\n | ``\'X\'`` | Hex format. Outputs the number in base 16, using upper- |\n | | case letters for the digits above 9. |\n +-----------+------------------------------------------------------------+\n | ``\'n\'`` | Number. This is the same as ``\'d\'``, except that it uses |\n | | the current locale setting to insert the appropriate |\n | | number separator characters. 
|\n +-----------+------------------------------------------------------------+\n | None | The same as ``\'d\'``. |\n +-----------+------------------------------------------------------------+\n\nIn addition to the above presentation types, integers can be formatted\nwith the floating point presentation types listed below (except\n``\'n\'`` and None). When doing so, ``float()`` is used to convert the\ninteger to a floating point number before formatting.\n\nThe available presentation types for floating point and decimal values\nare:\n\n +-----------+------------------------------------------------------------+\n | Type | Meaning |\n +===========+============================================================+\n | ``\'e\'`` | Exponent notation. Prints the number in scientific |\n | | notation using the letter \'e\' to indicate the exponent. |\n +-----------+------------------------------------------------------------+\n | ``\'E\'`` | Exponent notation. Same as ``\'e\'`` except it uses an upper |\n | | case \'E\' as the separator character. |\n +-----------+------------------------------------------------------------+\n | ``\'f\'`` | Fixed point. Displays the number as a fixed-point number. |\n +-----------+------------------------------------------------------------+\n | ``\'F\'`` | Fixed point. Same as ``\'f\'``. |\n +-----------+------------------------------------------------------------+\n | ``\'g\'`` | General format. For a given precision ``p >= 1``, this |\n | | rounds the number to ``p`` significant digits and then |\n | | formats the result in either fixed-point format or in |\n | | scientific notation, depending on its magnitude. The |\n | | precise rules are as follows: suppose that the result |\n | | formatted with presentation type ``\'e\'`` and precision |\n | | ``p-1`` would have exponent ``exp``. Then if ``-4 <= exp |\n | | < p``, the number is formatted with presentation type |\n | | ``\'f\'`` and precision ``p-1-exp``. Otherwise, the number |\n | | is formatted with presentation type ``\'e\'`` and precision |\n | | ``p-1``. In both cases insignificant trailing zeros are |\n | | removed from the significand, and the decimal point is |\n | | also removed if there are no remaining digits following |\n | | it. Postive and negative infinity, positive and negative |\n | | zero, and nans, are formatted as ``inf``, ``-inf``, ``0``, |\n | | ``-0`` and ``nan`` respectively, regardless of the |\n | | precision. A precision of ``0`` is treated as equivalent |\n | | to a precision of ``1``. |\n +-----------+------------------------------------------------------------+\n | ``\'G\'`` | General format. Same as ``\'g\'`` except switches to ``\'E\'`` |\n | | if the number gets too large. The representations of |\n | | infinity and NaN are uppercased, too. |\n +-----------+------------------------------------------------------------+\n | ``\'n\'`` | Number. This is the same as ``\'g\'``, except that it uses |\n | | the current locale setting to insert the appropriate |\n | | number separator characters. |\n +-----------+------------------------------------------------------------+\n | ``\'%\'`` | Percentage. Multiplies the number by 100 and displays in |\n | | fixed (``\'f\'``) format, followed by a percent sign. |\n +-----------+------------------------------------------------------------+\n | None | The same as ``\'g\'``. 
|\n +-----------+------------------------------------------------------------+\n\n\nFormat examples\n===============\n\nThis section contains examples of the new format syntax and comparison\nwith the old ``%``-formatting.\n\nIn most of the cases the syntax is similar to the old\n``%``-formatting, with the addition of the ``{}`` and with ``:`` used\ninstead of ``%``. For example, ``\'%03.2f\'`` can be translated to\n``\'{:03.2f}\'``.\n\nThe new format syntax also supports new and different options, shown\nin the follow examples.\n\nAccessing arguments by position:\n\n >>> \'{0}, {1}, {2}\'.format(\'a\', \'b\', \'c\')\n \'a, b, c\'\n >>> \'{}, {}, {}\'.format(\'a\', \'b\', \'c\') # 2.7+ only\n \'a, b, c\'\n >>> \'{2}, {1}, {0}\'.format(\'a\', \'b\', \'c\')\n \'c, b, a\'\n >>> \'{2}, {1}, {0}\'.format(*\'abc\') # unpacking argument sequence\n \'c, b, a\'\n >>> \'{0}{1}{0}\'.format(\'abra\', \'cad\') # arguments\' indices can be repeated\n \'abracadabra\'\n\nAccessing arguments by name:\n\n >>> \'Coordinates: {latitude}, {longitude}\'.format(latitude=\'37.24N\', longitude=\'-115.81W\')\n \'Coordinates: 37.24N, -115.81W\'\n >>> coord = {\'latitude\': \'37.24N\', \'longitude\': \'-115.81W\'}\n >>> \'Coordinates: {latitude}, {longitude}\'.format(**coord)\n \'Coordinates: 37.24N, -115.81W\'\n\nAccessing arguments\' attributes:\n\n >>> c = 3-5j\n >>> (\'The complex number {0} is formed from the real part {0.real} \'\n ... \'and the imaginary part {0.imag}.\').format(c)\n \'The complex number (3-5j) is formed from the real part 3.0 and the imaginary part -5.0.\'\n >>> class Point(object):\n ... def __init__(self, x, y):\n ... self.x, self.y = x, y\n ... def __str__(self):\n ... return \'Point({self.x}, {self.y})\'.format(self=self)\n ...\n >>> str(Point(4, 2))\n \'Point(4, 2)\'\n\nAccessing arguments\' items:\n\n >>> coord = (3, 5)\n >>> \'X: {0[0]}; Y: {0[1]}\'.format(coord)\n \'X: 3; Y: 5\'\n\nReplacing ``%s`` and ``%r``:\n\n >>> "repr() shows quotes: {!r}; str() doesn\'t: {!s}".format(\'test1\', \'test2\')\n "repr() shows quotes: \'test1\'; str() doesn\'t: test2"\n\nAligning the text and specifying a width:\n\n >>> \'{:<30}\'.format(\'left aligned\')\n \'left aligned \'\n >>> \'{:>30}\'.format(\'right aligned\')\n \' right aligned\'\n >>> \'{:^30}\'.format(\'centered\')\n \' centered \'\n >>> \'{:*^30}\'.format(\'centered\') # use \'*\' as a fill char\n \'***********centered***********\'\n\nReplacing ``%+f``, ``%-f``, and ``% f`` and specifying a sign:\n\n >>> \'{:+f}; {:+f}\'.format(3.14, -3.14) # show it always\n \'+3.140000; -3.140000\'\n >>> \'{: f}; {: f}\'.format(3.14, -3.14) # show a space for positive numbers\n \' 3.140000; -3.140000\'\n >>> \'{:-f}; {:-f}\'.format(3.14, -3.14) # show only the minus -- same as \'{:f}; {:f}\'\n \'3.140000; -3.140000\'\n\nReplacing ``%x`` and ``%o`` and converting the value to different\nbases:\n\n >>> # format also supports binary numbers\n >>> "int: {0:d}; hex: {0:x}; oct: {0:o}; bin: {0:b}".format(42)\n \'int: 42; hex: 2a; oct: 52; bin: 101010\'\n >>> # with 0x, 0o, or 0b as prefix:\n >>> "int: {0:d}; hex: {0:#x}; oct: {0:#o}; bin: {0:#b}".format(42)\n \'int: 42; hex: 0x2a; oct: 0o52; bin: 0b101010\'\n\nUsing the comma as a thousands separator:\n\n >>> \'{:,}\'.format(1234567890)\n \'1,234,567,890\'\n\nExpressing a percentage:\n\n >>> points = 19.5\n >>> total = 22\n >>> \'Correct answers: {:.2%}.\'.format(points/total)\n \'Correct answers: 88.64%\'\n\nUsing type-specific formatting:\n\n >>> import datetime\n >>> d = datetime.datetime(2010, 7, 4, 12, 
15, 58)\n >>> \'{:%Y-%m-%d %H:%M:%S}\'.format(d)\n \'2010-07-04 12:15:58\'\n\nNesting arguments and more complex examples:\n\n >>> for align, text in zip(\'<^>\', [\'left\', \'center\', \'right\']):\n ... \'{0:{align}{fill}16}\'.format(text, fill=align, align=align)\n ...\n \'left<<<<<<<<<<<<\'\n \'^^^^^center^^^^^\'\n \'>>>>>>>>>>>right\'\n >>>\n >>> octets = [192, 168, 0, 1]\n >>> \'{:02X}{:02X}{:02X}{:02X}\'.format(*octets)\n \'C0A80001\'\n >>> int(_, 16)\n 3232235521\n >>>\n >>> width = 5\n >>> for num in range(5,12):\n ... for base in \'dXob\':\n ... print \'{0:{width}{base}}\'.format(num, base=base, width=width),\n ... print\n ...\n 5 5 5 101\n 6 6 6 110\n 7 7 7 111\n 8 8 10 1000\n 9 9 11 1001\n 10 A 12 1010\n 11 B 13 1011\n', + 'formatstrings': u'\nFormat String Syntax\n********************\n\nThe ``str.format()`` method and the ``Formatter`` class share the same\nsyntax for format strings (although in the case of ``Formatter``,\nsubclasses can define their own format string syntax).\n\nFormat strings contain "replacement fields" surrounded by curly braces\n``{}``. Anything that is not contained in braces is considered literal\ntext, which is copied unchanged to the output. If you need to include\na brace character in the literal text, it can be escaped by doubling:\n``{{`` and ``}}``.\n\nThe grammar for a replacement field is as follows:\n\n replacement_field ::= "{" [field_name] ["!" conversion] [":" format_spec] "}"\n field_name ::= arg_name ("." attribute_name | "[" element_index "]")*\n arg_name ::= [identifier | integer]\n attribute_name ::= identifier\n element_index ::= integer | index_string\n index_string ::= +\n conversion ::= "r" | "s"\n format_spec ::= \n\nIn less formal terms, the replacement field can start with a\n*field_name* that specifies the object whose value is to be formatted\nand inserted into the output instead of the replacement field. The\n*field_name* is optionally followed by a *conversion* field, which is\npreceded by an exclamation point ``\'!\'``, and a *format_spec*, which\nis preceded by a colon ``\':\'``. These specify a non-default format\nfor the replacement value.\n\nSee also the *Format Specification Mini-Language* section.\n\nThe *field_name* itself begins with an *arg_name* that is either\neither a number or a keyword. If it\'s a number, it refers to a\npositional argument, and if it\'s a keyword, it refers to a named\nkeyword argument. If the numerical arg_names in a format string are\n0, 1, 2, ... in sequence, they can all be omitted (not just some) and\nthe numbers 0, 1, 2, ... will be automatically inserted in that order.\nThe *arg_name* can be followed by any number of index or attribute\nexpressions. 
An expression of the form ``\'.name\'`` selects the named\nattribute using ``getattr()``, while an expression of the form\n``\'[index]\'`` does an index lookup using ``__getitem__()``.\n\nChanged in version 2.7: The positional argument specifiers can be\nomitted, so ``\'{} {}\'`` is equivalent to ``\'{0} {1}\'``.\n\nSome simple format string examples:\n\n "First, thou shalt count to {0}" # References first positional argument\n "Bring me a {}" # Implicitly references the first positional argument\n "From {} to {}" # Same as "From {0} to {1}"\n "My quest is {name}" # References keyword argument \'name\'\n "Weight in tons {0.weight}" # \'weight\' attribute of first positional arg\n "Units destroyed: {players[0]}" # First element of keyword argument \'players\'.\n\nThe *conversion* field causes a type coercion before formatting.\nNormally, the job of formatting a value is done by the\n``__format__()`` method of the value itself. However, in some cases\nit is desirable to force a type to be formatted as a string,\noverriding its own definition of formatting. By converting the value\nto a string before calling ``__format__()``, the normal formatting\nlogic is bypassed.\n\nTwo conversion flags are currently supported: ``\'!s\'`` which calls\n``str()`` on the value, and ``\'!r\'`` which calls ``repr()``.\n\nSome examples:\n\n "Harold\'s a clever {0!s}" # Calls str() on the argument first\n "Bring out the holy {name!r}" # Calls repr() on the argument first\n\nThe *format_spec* field contains a specification of how the value\nshould be presented, including such details as field width, alignment,\npadding, decimal precision and so on. Each value type can define its\nown "formatting mini-language" or interpretation of the *format_spec*.\n\nMost built-in types support a common formatting mini-language, which\nis described in the next section.\n\nA *format_spec* field can also include nested replacement fields\nwithin it. These nested replacement fields can contain only a field\nname; conversion flags and format specifications are not allowed. The\nreplacement fields within the format_spec are substituted before the\n*format_spec* string is interpreted. This allows the formatting of a\nvalue to be dynamically specified.\n\nSee the *Format examples* section for some examples.\n\n\nFormat Specification Mini-Language\n==================================\n\n"Format specifications" are used within replacement fields contained\nwithin a format string to define how individual values are presented\n(see *Format String Syntax*). They can also be passed directly to the\nbuilt-in ``format()`` function. Each formattable type may define how\nthe format specification is to be interpreted.\n\nMost built-in types implement the following options for format\nspecifications, although some of the formatting options are only\nsupported by the numeric types.\n\nA general convention is that an empty format string (``""``) produces\nthe same result as if you had called ``str()`` on the value. A non-\nempty format string typically modifies the result.\n\nThe general form of a *standard format specifier* is:\n\n format_spec ::= [[fill]align][sign][#][0][width][,][.precision][type]\n fill ::= \n align ::= "<" | ">" | "=" | "^"\n sign ::= "+" | "-" | " "\n width ::= integer\n precision ::= integer\n type ::= "b" | "c" | "d" | "e" | "E" | "f" | "F" | "g" | "G" | "n" | "o" | "s" | "x" | "X" | "%"\n\nThe *fill* character can be any character other than \'{\' or \'}\'. 
The\npresence of a fill character is signaled by the character following\nit, which must be one of the alignment options. If the second\ncharacter of *format_spec* is not a valid alignment option, then it is\nassumed that both the fill character and the alignment option are\nabsent.\n\nThe meaning of the various alignment options is as follows:\n\n +-----------+------------------------------------------------------------+\n | Option | Meaning |\n +===========+============================================================+\n | ``\'<\'`` | Forces the field to be left-aligned within the available |\n | | space (this is the default for most objects). |\n +-----------+------------------------------------------------------------+\n | ``\'>\'`` | Forces the field to be right-aligned within the available |\n | | space (this is the default for numbers). |\n +-----------+------------------------------------------------------------+\n | ``\'=\'`` | Forces the padding to be placed after the sign (if any) |\n | | but before the digits. This is used for printing fields |\n | | in the form \'+000000120\'. This alignment option is only |\n | | valid for numeric types. |\n +-----------+------------------------------------------------------------+\n | ``\'^\'`` | Forces the field to be centered within the available |\n | | space. |\n +-----------+------------------------------------------------------------+\n\nNote that unless a minimum field width is defined, the field width\nwill always be the same size as the data to fill it, so that the\nalignment option has no meaning in this case.\n\nThe *sign* option is only valid for number types, and can be one of\nthe following:\n\n +-----------+------------------------------------------------------------+\n | Option | Meaning |\n +===========+============================================================+\n | ``\'+\'`` | indicates that a sign should be used for both positive as |\n | | well as negative numbers. |\n +-----------+------------------------------------------------------------+\n | ``\'-\'`` | indicates that a sign should be used only for negative |\n | | numbers (this is the default behavior). |\n +-----------+------------------------------------------------------------+\n | space | indicates that a leading space should be used on positive |\n | | numbers, and a minus sign on negative numbers. |\n +-----------+------------------------------------------------------------+\n\nThe ``\'#\'`` option is only valid for integers, and only for binary,\noctal, or hexadecimal output. If present, it specifies that the\noutput will be prefixed by ``\'0b\'``, ``\'0o\'``, or ``\'0x\'``,\nrespectively.\n\nThe ``\',\'`` option signals the use of a comma for a thousands\nseparator. For a locale aware separator, use the ``\'n\'`` integer\npresentation type instead.\n\nChanged in version 2.7: Added the ``\',\'`` option (see also **PEP\n378**).\n\n*width* is a decimal integer defining the minimum field width. If not\nspecified, then the field width will be determined by the content.\n\nIf the *width* field is preceded by a zero (``\'0\'``) character, this\nenables zero-padding. 
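A quick illustrative example (the value is chosen arbitrarily); as the sentence that follows notes, the shorthand is equivalent to a ``'='`` alignment with a ``'0'`` fill:

    >>> '{:08.2f}'.format(-3.14159)    # '0' before the width enables zero-padding
    '-0003.14'
    >>> '{:0=8.2f}'.format(-3.14159)   # explicit fill and alignment, same result
    '-0003.14'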
This is equivalent to an *alignment* type of\n``\'=\'`` and a *fill* character of ``\'0\'``.\n\nThe *precision* is a decimal number indicating how many digits should\nbe displayed after the decimal point for a floating point value\nformatted with ``\'f\'`` and ``\'F\'``, or before and after the decimal\npoint for a floating point value formatted with ``\'g\'`` or ``\'G\'``.\nFor non-number types the field indicates the maximum field size - in\nother words, how many characters will be used from the field content.\nThe *precision* is not allowed for integer values.\n\nFinally, the *type* determines how the data should be presented.\n\nThe available string presentation types are:\n\n +-----------+------------------------------------------------------------+\n | Type | Meaning |\n +===========+============================================================+\n | ``\'s\'`` | String format. This is the default type for strings and |\n | | may be omitted. |\n +-----------+------------------------------------------------------------+\n | None | The same as ``\'s\'``. |\n +-----------+------------------------------------------------------------+\n\nThe available integer presentation types are:\n\n +-----------+------------------------------------------------------------+\n | Type | Meaning |\n +===========+============================================================+\n | ``\'b\'`` | Binary format. Outputs the number in base 2. |\n +-----------+------------------------------------------------------------+\n | ``\'c\'`` | Character. Converts the integer to the corresponding |\n | | unicode character before printing. |\n +-----------+------------------------------------------------------------+\n | ``\'d\'`` | Decimal Integer. Outputs the number in base 10. |\n +-----------+------------------------------------------------------------+\n | ``\'o\'`` | Octal format. Outputs the number in base 8. |\n +-----------+------------------------------------------------------------+\n | ``\'x\'`` | Hex format. Outputs the number in base 16, using lower- |\n | | case letters for the digits above 9. |\n +-----------+------------------------------------------------------------+\n | ``\'X\'`` | Hex format. Outputs the number in base 16, using upper- |\n | | case letters for the digits above 9. |\n +-----------+------------------------------------------------------------+\n | ``\'n\'`` | Number. This is the same as ``\'d\'``, except that it uses |\n | | the current locale setting to insert the appropriate |\n | | number separator characters. |\n +-----------+------------------------------------------------------------+\n | None | The same as ``\'d\'``. |\n +-----------+------------------------------------------------------------+\n\nIn addition to the above presentation types, integers can be formatted\nwith the floating point presentation types listed below (except\n``\'n\'`` and None). When doing so, ``float()`` is used to convert the\ninteger to a floating point number before formatting.\n\nThe available presentation types for floating point and decimal values\nare:\n\n +-----------+------------------------------------------------------------+\n | Type | Meaning |\n +===========+============================================================+\n | ``\'e\'`` | Exponent notation. Prints the number in scientific |\n | | notation using the letter \'e\' to indicate the exponent. |\n +-----------+------------------------------------------------------------+\n | ``\'E\'`` | Exponent notation. 
Same as ``\'e\'`` except it uses an upper |\n | | case \'E\' as the separator character. |\n +-----------+------------------------------------------------------------+\n | ``\'f\'`` | Fixed point. Displays the number as a fixed-point number. |\n +-----------+------------------------------------------------------------+\n | ``\'F\'`` | Fixed point. Same as ``\'f\'``. |\n +-----------+------------------------------------------------------------+\n | ``\'g\'`` | General format. For a given precision ``p >= 1``, this |\n | | rounds the number to ``p`` significant digits and then |\n | | formats the result in either fixed-point format or in |\n | | scientific notation, depending on its magnitude. The |\n | | precise rules are as follows: suppose that the result |\n | | formatted with presentation type ``\'e\'`` and precision |\n | | ``p-1`` would have exponent ``exp``. Then if ``-4 <= exp |\n | | < p``, the number is formatted with presentation type |\n | | ``\'f\'`` and precision ``p-1-exp``. Otherwise, the number |\n | | is formatted with presentation type ``\'e\'`` and precision |\n | | ``p-1``. In both cases insignificant trailing zeros are |\n | | removed from the significand, and the decimal point is |\n | | also removed if there are no remaining digits following |\n | | it. Positive and negative infinity, positive and negative |\n | | zero, and nans, are formatted as ``inf``, ``-inf``, ``0``, |\n | | ``-0`` and ``nan`` respectively, regardless of the |\n | | precision. A precision of ``0`` is treated as equivalent |\n | | to a precision of ``1``. |\n +-----------+------------------------------------------------------------+\n | ``\'G\'`` | General format. Same as ``\'g\'`` except switches to ``\'E\'`` |\n | | if the number gets too large. The representations of |\n | | infinity and NaN are uppercased, too. |\n +-----------+------------------------------------------------------------+\n | ``\'n\'`` | Number. This is the same as ``\'g\'``, except that it uses |\n | | the current locale setting to insert the appropriate |\n | | number separator characters. |\n +-----------+------------------------------------------------------------+\n | ``\'%\'`` | Percentage. Multiplies the number by 100 and displays in |\n | | fixed (``\'f\'``) format, followed by a percent sign. |\n +-----------+------------------------------------------------------------+\n | None | The same as ``\'g\'``. |\n +-----------+------------------------------------------------------------+\n\n\nFormat examples\n===============\n\nThis section contains examples of the new format syntax and comparison\nwith the old ``%``-formatting.\n\nIn most of the cases the syntax is similar to the old\n``%``-formatting, with the addition of the ``{}`` and with ``:`` used\ninstead of ``%``. 
For example, ``\'%03.2f\'`` can be translated to\n``\'{:03.2f}\'``.\n\nThe new format syntax also supports new and different options, shown\nin the follow examples.\n\nAccessing arguments by position:\n\n >>> \'{0}, {1}, {2}\'.format(\'a\', \'b\', \'c\')\n \'a, b, c\'\n >>> \'{}, {}, {}\'.format(\'a\', \'b\', \'c\') # 2.7+ only\n \'a, b, c\'\n >>> \'{2}, {1}, {0}\'.format(\'a\', \'b\', \'c\')\n \'c, b, a\'\n >>> \'{2}, {1}, {0}\'.format(*\'abc\') # unpacking argument sequence\n \'c, b, a\'\n >>> \'{0}{1}{0}\'.format(\'abra\', \'cad\') # arguments\' indices can be repeated\n \'abracadabra\'\n\nAccessing arguments by name:\n\n >>> \'Coordinates: {latitude}, {longitude}\'.format(latitude=\'37.24N\', longitude=\'-115.81W\')\n \'Coordinates: 37.24N, -115.81W\'\n >>> coord = {\'latitude\': \'37.24N\', \'longitude\': \'-115.81W\'}\n >>> \'Coordinates: {latitude}, {longitude}\'.format(**coord)\n \'Coordinates: 37.24N, -115.81W\'\n\nAccessing arguments\' attributes:\n\n >>> c = 3-5j\n >>> (\'The complex number {0} is formed from the real part {0.real} \'\n ... \'and the imaginary part {0.imag}.\').format(c)\n \'The complex number (3-5j) is formed from the real part 3.0 and the imaginary part -5.0.\'\n >>> class Point(object):\n ... def __init__(self, x, y):\n ... self.x, self.y = x, y\n ... def __str__(self):\n ... return \'Point({self.x}, {self.y})\'.format(self=self)\n ...\n >>> str(Point(4, 2))\n \'Point(4, 2)\'\n\nAccessing arguments\' items:\n\n >>> coord = (3, 5)\n >>> \'X: {0[0]}; Y: {0[1]}\'.format(coord)\n \'X: 3; Y: 5\'\n\nReplacing ``%s`` and ``%r``:\n\n >>> "repr() shows quotes: {!r}; str() doesn\'t: {!s}".format(\'test1\', \'test2\')\n "repr() shows quotes: \'test1\'; str() doesn\'t: test2"\n\nAligning the text and specifying a width:\n\n >>> \'{:<30}\'.format(\'left aligned\')\n \'left aligned \'\n >>> \'{:>30}\'.format(\'right aligned\')\n \' right aligned\'\n >>> \'{:^30}\'.format(\'centered\')\n \' centered \'\n >>> \'{:*^30}\'.format(\'centered\') # use \'*\' as a fill char\n \'***********centered***********\'\n\nReplacing ``%+f``, ``%-f``, and ``% f`` and specifying a sign:\n\n >>> \'{:+f}; {:+f}\'.format(3.14, -3.14) # show it always\n \'+3.140000; -3.140000\'\n >>> \'{: f}; {: f}\'.format(3.14, -3.14) # show a space for positive numbers\n \' 3.140000; -3.140000\'\n >>> \'{:-f}; {:-f}\'.format(3.14, -3.14) # show only the minus -- same as \'{:f}; {:f}\'\n \'3.140000; -3.140000\'\n\nReplacing ``%x`` and ``%o`` and converting the value to different\nbases:\n\n >>> # format also supports binary numbers\n >>> "int: {0:d}; hex: {0:x}; oct: {0:o}; bin: {0:b}".format(42)\n \'int: 42; hex: 2a; oct: 52; bin: 101010\'\n >>> # with 0x, 0o, or 0b as prefix:\n >>> "int: {0:d}; hex: {0:#x}; oct: {0:#o}; bin: {0:#b}".format(42)\n \'int: 42; hex: 0x2a; oct: 0o52; bin: 0b101010\'\n\nUsing the comma as a thousands separator:\n\n >>> \'{:,}\'.format(1234567890)\n \'1,234,567,890\'\n\nExpressing a percentage:\n\n >>> points = 19.5\n >>> total = 22\n >>> \'Correct answers: {:.2%}.\'.format(points/total)\n \'Correct answers: 88.64%\'\n\nUsing type-specific formatting:\n\n >>> import datetime\n >>> d = datetime.datetime(2010, 7, 4, 12, 15, 58)\n >>> \'{:%Y-%m-%d %H:%M:%S}\'.format(d)\n \'2010-07-04 12:15:58\'\n\nNesting arguments and more complex examples:\n\n >>> for align, text in zip(\'<^>\', [\'left\', \'center\', \'right\']):\n ... 
\'{0:{fill}{align}16}\'.format(text, fill=align, align=align)\n ...\n \'left<<<<<<<<<<<<\'\n \'^^^^^center^^^^^\'\n \'>>>>>>>>>>>right\'\n >>>\n >>> octets = [192, 168, 0, 1]\n >>> \'{:02X}{:02X}{:02X}{:02X}\'.format(*octets)\n \'C0A80001\'\n >>> int(_, 16)\n 3232235521\n >>>\n >>> width = 5\n >>> for num in range(5,12):\n ... for base in \'dXob\':\n ... print \'{0:{width}{base}}\'.format(num, base=base, width=width),\n ... print\n ...\n 5 5 5 101\n 6 6 6 110\n 7 7 7 111\n 8 8 10 1000\n 9 9 11 1001\n 10 A 12 1010\n 11 B 13 1011\n', 'function': u'\nFunction definitions\n********************\n\nA function definition defines a user-defined function object (see\nsection *The standard type hierarchy*):\n\n decorated ::= decorators (classdef | funcdef)\n decorators ::= decorator+\n decorator ::= "@" dotted_name ["(" [argument_list [","]] ")"] NEWLINE\n funcdef ::= "def" funcname "(" [parameter_list] ")" ":" suite\n dotted_name ::= identifier ("." identifier)*\n parameter_list ::= (defparameter ",")*\n ( "*" identifier [, "**" identifier]\n | "**" identifier\n | defparameter [","] )\n defparameter ::= parameter ["=" expression]\n sublist ::= parameter ("," parameter)* [","]\n parameter ::= identifier | "(" sublist ")"\n funcname ::= identifier\n\nA function definition is an executable statement. Its execution binds\nthe function name in the current local namespace to a function object\n(a wrapper around the executable code for the function). This\nfunction object contains a reference to the current global namespace\nas the global namespace to be used when the function is called.\n\nThe function definition does not execute the function body; this gets\nexecuted only when the function is called. [3]\n\nA function definition may be wrapped by one or more *decorator*\nexpressions. Decorator expressions are evaluated when the function is\ndefined, in the scope that contains the function definition. The\nresult must be a callable, which is invoked with the function object\nas the only argument. The returned value is bound to the function name\ninstead of the function object. Multiple decorators are applied in\nnested fashion. For example, the following code:\n\n @f1(arg)\n @f2\n def func(): pass\n\nis equivalent to:\n\n def func(): pass\n func = f1(arg)(f2(func))\n\nWhen one or more top-level parameters have the form *parameter* ``=``\n*expression*, the function is said to have "default parameter values."\nFor a parameter with a default value, the corresponding argument may\nbe omitted from a call, in which case the parameter\'s default value is\nsubstituted. If a parameter has a default value, all following\nparameters must also have a default value --- this is a syntactic\nrestriction that is not expressed by the grammar.\n\n**Default parameter values are evaluated when the function definition\nis executed.** This means that the expression is evaluated once, when\nthe function is defined, and that that same "pre-computed" value is\nused for each call. This is especially important to understand when a\ndefault parameter is a mutable object, such as a list or a dictionary:\nif the function modifies the object (e.g. by appending an item to a\nlist), the default value is in effect modified. This is generally not\nwhat was intended. 
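A brief sketch of the pitfall just described (the function and argument names are invented for illustration); the recommended workaround is given immediately below:

    >>> def append_to(item, bucket=[]):   # the list is created once, when 'def' executes
    ...     bucket.append(item)
    ...     return bucket
    ...
    >>> append_to(1)
    [1]
    >>> append_to(2)                      # the same, already-modified list is reused
    [1, 2]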
A way around this is to use ``None`` as the\ndefault, and explicitly test for it in the body of the function, e.g.:\n\n def whats_on_the_telly(penguin=None):\n if penguin is None:\n penguin = []\n penguin.append("property of the zoo")\n return penguin\n\nFunction call semantics are described in more detail in section\n*Calls*. A function call always assigns values to all parameters\nmentioned in the parameter list, either from position arguments, from\nkeyword arguments, or from default values. If the form\n"``*identifier``" is present, it is initialized to a tuple receiving\nany excess positional parameters, defaulting to the empty tuple. If\nthe form "``**identifier``" is present, it is initialized to a new\ndictionary receiving any excess keyword arguments, defaulting to a new\nempty dictionary.\n\nIt is also possible to create anonymous functions (functions not bound\nto a name), for immediate use in expressions. This uses lambda forms,\ndescribed in section *Lambdas*. Note that the lambda form is merely a\nshorthand for a simplified function definition; a function defined in\na "``def``" statement can be passed around or assigned to another name\njust like a function defined by a lambda form. The "``def``" form is\nactually more powerful since it allows the execution of multiple\nstatements.\n\n**Programmer\'s note:** Functions are first-class objects. A "``def``"\nform executed inside a function definition defines a local function\nthat can be returned or passed around. Free variables used in the\nnested function can access the local variables of the function\ncontaining the def. See section *Naming and binding* for details.\n', 'global': u'\nThe ``global`` statement\n************************\n\n global_stmt ::= "global" identifier ("," identifier)*\n\nThe ``global`` statement is a declaration which holds for the entire\ncurrent code block. It means that the listed identifiers are to be\ninterpreted as globals. It would be impossible to assign to a global\nvariable without ``global``, although free variables may refer to\nglobals without being declared global.\n\nNames listed in a ``global`` statement must not be used in the same\ncode block textually preceding that ``global`` statement.\n\nNames listed in a ``global`` statement must not be defined as formal\nparameters or in a ``for`` loop control target, ``class`` definition,\nfunction definition, or ``import`` statement.\n\n**CPython implementation detail:** The current implementation does not\nenforce the latter two restrictions, but programs should not abuse\nthis freedom, as future implementations may enforce them or silently\nchange the meaning of the program.\n\n**Programmer\'s note:** the ``global`` is a directive to the parser.\nIt applies only to code parsed at the same time as the ``global``\nstatement. In particular, a ``global`` statement contained in an\n``exec`` statement does not affect the code block *containing* the\n``exec`` statement, and code contained in an ``exec`` statement is\nunaffected by ``global`` statements in the code containing the\n``exec`` statement. The same applies to the ``eval()``,\n``execfile()`` and ``compile()`` functions.\n', - 'id-classes': u'\nReserved classes of identifiers\n*******************************\n\nCertain classes of identifiers (besides keywords) have special\nmeanings. These classes are identified by the patterns of leading and\ntrailing underscore characters:\n\n``_*``\n Not imported by ``from module import *``. 
The special identifier\n ``_`` is used in the interactive interpreter to store the result of\n the last evaluation; it is stored in the ``__builtin__`` module.\n When not in interactive mode, ``_`` has no special meaning and is\n not defined. See section *The import statement*.\n\n Note: The name ``_`` is often used in conjunction with\n internationalization; refer to the documentation for the\n ``gettext`` module for more information on this convention.\n\n``__*__``\n System-defined names. These names are defined by the interpreter\n and its implementation (including the standard library);\n applications should not expect to define additional names using\n this convention. The set of names of this class defined by Python\n may be extended in future versions. See section *Special method\n names*.\n\n``__*``\n Class-private names. Names in this category, when used within the\n context of a class definition, are re-written to use a mangled form\n to help avoid name clashes between "private" attributes of base and\n derived classes. See section *Identifiers (Names)*.\n', - 'identifiers': u'\nIdentifiers and keywords\n************************\n\nIdentifiers (also referred to as *names*) are described by the\nfollowing lexical definitions:\n\n identifier ::= (letter|"_") (letter | digit | "_")*\n letter ::= lowercase | uppercase\n lowercase ::= "a"..."z"\n uppercase ::= "A"..."Z"\n digit ::= "0"..."9"\n\nIdentifiers are unlimited in length. Case is significant.\n\n\nKeywords\n========\n\nThe following identifiers are used as reserved words, or *keywords* of\nthe language, and cannot be used as ordinary identifiers. They must\nbe spelled exactly as written here:\n\n and del from not while\n as elif global or with\n assert else if pass yield\n break except import print\n class exec in raise\n continue finally is return\n def for lambda try\n\nChanged in version 2.4: ``None`` became a constant and is now\nrecognized by the compiler as a name for the built-in object ``None``.\nAlthough it is not a keyword, you cannot assign a different object to\nit.\n\nChanged in version 2.5: Both ``as`` and ``with`` are only recognized\nwhen the ``with_statement`` future feature has been enabled. It will\nalways be enabled in Python 2.6. See section *The with statement* for\ndetails. Note that using ``as`` and ``with`` as identifiers will\nalways issue a warning, even when the ``with_statement`` future\ndirective is not in effect.\n\n\nReserved classes of identifiers\n===============================\n\nCertain classes of identifiers (besides keywords) have special\nmeanings. These classes are identified by the patterns of leading and\ntrailing underscore characters:\n\n``_*``\n Not imported by ``from module import *``. The special identifier\n ``_`` is used in the interactive interpreter to store the result of\n the last evaluation; it is stored in the ``__builtin__`` module.\n When not in interactive mode, ``_`` has no special meaning and is\n not defined. See section *The import statement*.\n\n Note: The name ``_`` is often used in conjunction with\n internationalization; refer to the documentation for the\n ``gettext`` module for more information on this convention.\n\n``__*__``\n System-defined names. These names are defined by the interpreter\n and its implementation (including the standard library);\n applications should not expect to define additional names using\n this convention. The set of names of this class defined by Python\n may be extended in future versions. 
See section *Special method\n names*.\n\n``__*``\n Class-private names. Names in this category, when used within the\n context of a class definition, are re-written to use a mangled form\n to help avoid name clashes between "private" attributes of base and\n derived classes. See section *Identifiers (Names)*.\n', + 'id-classes': u'\nReserved classes of identifiers\n*******************************\n\nCertain classes of identifiers (besides keywords) have special\nmeanings. These classes are identified by the patterns of leading and\ntrailing underscore characters:\n\n``_*``\n Not imported by ``from module import *``. The special identifier\n ``_`` is used in the interactive interpreter to store the result of\n the last evaluation; it is stored in the ``__builtin__`` module.\n When not in interactive mode, ``_`` has no special meaning and is\n not defined. See section *The import statement*.\n\n Note: The name ``_`` is often used in conjunction with\n internationalization; refer to the documentation for the\n ``gettext`` module for more information on this convention.\n\n``__*__``\n System-defined names. These names are defined by the interpreter\n and its implementation (including the standard library). Current\n system names are discussed in the *Special method names* section\n and elsewhere. More will likely be defined in future versions of\n Python. *Any* use of ``__*__`` names, in any context, that does\n not follow explicitly documented use, is subject to breakage\n without warning.\n\n``__*``\n Class-private names. Names in this category, when used within the\n context of a class definition, are re-written to use a mangled form\n to help avoid name clashes between "private" attributes of base and\n derived classes. See section *Identifiers (Names)*.\n', + 'identifiers': u'\nIdentifiers and keywords\n************************\n\nIdentifiers (also referred to as *names*) are described by the\nfollowing lexical definitions:\n\n identifier ::= (letter|"_") (letter | digit | "_")*\n letter ::= lowercase | uppercase\n lowercase ::= "a"..."z"\n uppercase ::= "A"..."Z"\n digit ::= "0"..."9"\n\nIdentifiers are unlimited in length. Case is significant.\n\n\nKeywords\n========\n\nThe following identifiers are used as reserved words, or *keywords* of\nthe language, and cannot be used as ordinary identifiers. They must\nbe spelled exactly as written here:\n\n and del from not while\n as elif global or with\n assert else if pass yield\n break except import print\n class exec in raise\n continue finally is return\n def for lambda try\n\nChanged in version 2.4: ``None`` became a constant and is now\nrecognized by the compiler as a name for the built-in object ``None``.\nAlthough it is not a keyword, you cannot assign a different object to\nit.\n\nChanged in version 2.5: Both ``as`` and ``with`` are only recognized\nwhen the ``with_statement`` future feature has been enabled. It will\nalways be enabled in Python 2.6. See section *The with statement* for\ndetails. Note that using ``as`` and ``with`` as identifiers will\nalways issue a warning, even when the ``with_statement`` future\ndirective is not in effect.\n\n\nReserved classes of identifiers\n===============================\n\nCertain classes of identifiers (besides keywords) have special\nmeanings. These classes are identified by the patterns of leading and\ntrailing underscore characters:\n\n``_*``\n Not imported by ``from module import *``. 
The special identifier\n ``_`` is used in the interactive interpreter to store the result of\n the last evaluation; it is stored in the ``__builtin__`` module.\n When not in interactive mode, ``_`` has no special meaning and is\n not defined. See section *The import statement*.\n\n Note: The name ``_`` is often used in conjunction with\n internationalization; refer to the documentation for the\n ``gettext`` module for more information on this convention.\n\n``__*__``\n System-defined names. These names are defined by the interpreter\n and its implementation (including the standard library). Current\n system names are discussed in the *Special method names* section\n and elsewhere. More will likely be defined in future versions of\n Python. *Any* use of ``__*__`` names, in any context, that does\n not follow explicitly documented use, is subject to breakage\n without warning.\n\n``__*``\n Class-private names. Names in this category, when used within the\n context of a class definition, are re-written to use a mangled form\n to help avoid name clashes between "private" attributes of base and\n derived classes. See section *Identifiers (Names)*.\n', 'if': u'\nThe ``if`` statement\n********************\n\nThe ``if`` statement is used for conditional execution:\n\n if_stmt ::= "if" expression ":" suite\n ( "elif" expression ":" suite )*\n ["else" ":" suite]\n\nIt selects exactly one of the suites by evaluating the expressions one\nby one until one is found to be true (see section *Boolean operations*\nfor the definition of true and false); then that suite is executed\n(and no other part of the ``if`` statement is executed or evaluated).\nIf all expressions are false, the suite of the ``else`` clause, if\npresent, is executed.\n', 'imaginary': u'\nImaginary literals\n******************\n\nImaginary literals are described by the following lexical definitions:\n\n imagnumber ::= (floatnumber | intpart) ("j" | "J")\n\nAn imaginary literal yields a complex number with a real part of 0.0.\nComplex numbers are represented as a pair of floating point numbers\nand have the same restrictions on their range. To create a complex\nnumber with a nonzero real part, add a floating point number to it,\ne.g., ``(3+4j)``. Some examples of imaginary literals:\n\n 3.14j 10.j 10j .001j 1e100j 3.14e-10j\n', - 'import': u'\nThe ``import`` statement\n************************\n\n import_stmt ::= "import" module ["as" name] ( "," module ["as" name] )*\n | "from" relative_module "import" identifier ["as" name]\n ( "," identifier ["as" name] )*\n | "from" relative_module "import" "(" identifier ["as" name]\n ( "," identifier ["as" name] )* [","] ")"\n | "from" module "import" "*"\n module ::= (identifier ".")* identifier\n relative_module ::= "."* module | "."+\n name ::= identifier\n\nImport statements are executed in two steps: (1) find a module, and\ninitialize it if necessary; (2) define a name or names in the local\nnamespace (of the scope where the ``import`` statement occurs). The\nstatement comes in two forms differing on whether it uses the ``from``\nkeyword. The first form (without ``from``) repeats these steps for\neach identifier in the list. The form with ``from`` performs step (1)\nonce, and then performs step (2) repeatedly.\n\nTo understand how step (1) occurs, one must first understand how\nPython handles hierarchical naming of modules. To help organize\nmodules and provide a hierarchy in naming, Python has a concept of\npackages. 
A package can contain other packages and modules while\nmodules cannot contain other modules or packages. From a file system\nperspective, packages are directories and modules are files. The\noriginal specification for packages is still available to read,\nalthough minor details have changed since the writing of that\ndocument.\n\nOnce the name of the module is known (unless otherwise specified, the\nterm "module" will refer to both packages and modules), searching for\nthe module or package can begin. The first place checked is\n``sys.modules``, the cache of all modules that have been imported\npreviously. If the module is found there then it is used in step (2)\nof import.\n\nIf the module is not found in the cache, then ``sys.meta_path`` is\nsearched (the specification for ``sys.meta_path`` can be found in\n**PEP 302**). The object is a list of *finder* objects which are\nqueried in order as to whether they know how to load the module by\ncalling their ``find_module()`` method with the name of the module. If\nthe module happens to be contained within a package (as denoted by the\nexistence of a dot in the name), then a second argument to\n``find_module()`` is given as the value of the ``__path__`` attribute\nfrom the parent package (everything up to the last dot in the name of\nthe module being imported). If a finder can find the module it returns\na *loader* (discussed later) or returns ``None``.\n\nIf none of the finders on ``sys.meta_path`` are able to find the\nmodule then some implicitly defined finders are queried.\nImplementations of Python vary in what implicit meta path finders are\ndefined. The one they all do define, though, is one that handles\n``sys.path_hooks``, ``sys.path_importer_cache``, and ``sys.path``.\n\nThe implicit finder searches for the requested module in the "paths"\nspecified in one of two places ("paths" do not have to be file system\npaths). If the module being imported is supposed to be contained\nwithin a package then the second argument passed to ``find_module()``,\n``__path__`` on the parent package, is used as the source of paths. If\nthe module is not contained in a package then ``sys.path`` is used as\nthe source of paths.\n\nOnce the source of paths is chosen it is iterated over to find a\nfinder that can handle that path. The dict at\n``sys.path_importer_cache`` caches finders for paths and is checked\nfor a finder. If the path does not have a finder cached then\n``sys.path_hooks`` is searched by calling each object in the list with\na single argument of the path, returning a finder or raises\n``ImportError``. If a finder is returned then it is cached in\n``sys.path_importer_cache`` and then used for that path entry. If no\nfinder can be found but the path exists then a value of ``None`` is\nstored in ``sys.path_importer_cache`` to signify that an implicit,\nfile-based finder that handles modules stored as individual files\nshould be used for that path. If the path does not exist then a finder\nwhich always returns ``None`` is placed in the cache for the path.\n\nIf no finder can find the module then ``ImportError`` is raised.\nOtherwise some finder returned a loader whose ``load_module()`` method\nis called with the name of the module to load (see **PEP 302** for the\noriginal definition of loaders). A loader has several responsibilities\nto perform on a module it loads. 
First, if the module already exists\nin ``sys.modules`` (a possibility if the loader is called outside of\nthe import machinery) then it is to use that module for initialization\nand not a new module. But if the module does not exist in\n``sys.modules`` then it is to be added to that dict before\ninitialization begins. If an error occurs during loading of the module\nand it was added to ``sys.modules`` it is to be removed from the dict.\nIf an error occurs but the module was already in ``sys.modules`` it is\nleft in the dict.\n\nThe loader must set several attributes on the module. ``__name__`` is\nto be set to the name of the module. ``__file__`` is to be the "path"\nto the file unless the module is built-in (and thus listed in\n``sys.builtin_module_names``) in which case the attribute is not set.\nIf what is being imported is a package then ``__path__`` is to be set\nto a list of paths to be searched when looking for modules and\npackages contained within the package being imported. ``__package__``\nis optional but should be set to the name of package that contains the\nmodule or package (the empty string is used for module not contained\nin a package). ``__loader__`` is also optional but should be set to\nthe loader object that is loading the module.\n\nIf an error occurs during loading then the loader raises\n``ImportError`` if some other exception is not already being\npropagated. Otherwise the loader returns the module that was loaded\nand initialized.\n\nWhen step (1) finishes without raising an exception, step (2) can\nbegin.\n\nThe first form of ``import`` statement binds the module name in the\nlocal namespace to the module object, and then goes on to import the\nnext identifier, if any. If the module name is followed by ``as``,\nthe name following ``as`` is used as the local name for the module.\n\nThe ``from`` form does not bind the module name: it goes through the\nlist of identifiers, looks each one of them up in the module found in\nstep (1), and binds the name in the local namespace to the object thus\nfound. As with the first form of ``import``, an alternate local name\ncan be supplied by specifying "``as`` localname". If a name is not\nfound, ``ImportError`` is raised. If the list of identifiers is\nreplaced by a star (``\'*\'``), all public names defined in the module\nare bound in the local namespace of the ``import`` statement..\n\nThe *public names* defined by a module are determined by checking the\nmodule\'s namespace for a variable named ``__all__``; if defined, it\nmust be a sequence of strings which are names defined or imported by\nthat module. The names given in ``__all__`` are all considered public\nand are required to exist. If ``__all__`` is not defined, the set of\npublic names includes all names found in the module\'s namespace which\ndo not begin with an underscore character (``\'_\'``). ``__all__``\nshould contain the entire public API. It is intended to avoid\naccidentally exporting items that are not part of the API (such as\nlibrary modules which were imported and used within the module).\n\nThe ``from`` form with ``*`` may only occur in a module scope. If the\nwild card form of import --- ``import *`` --- is used in a function\nand the function contains or is a nested block with free variables,\nthe compiler will raise a ``SyntaxError``.\n\nWhen specifying what module to import you do not have to specify the\nabsolute name of the module. 
When a module or package is contained\nwithin another package it is possible to make a relative import within\nthe same top package without having to mention the package name. By\nusing leading dots in the specified module or package after ``from``\nyou can specify how high to traverse up the current package hierarchy\nwithout specifying exact names. One leading dot means the current\npackage where the module making the import exists. Two dots means up\none package level. Three dots is up two levels, etc. So if you execute\n``from . import mod`` from a module in the ``pkg`` package then you\nwill end up importing ``pkg.mod``. If you execute ``from ..subpkg2\nimprt mod`` from within ``pkg.subpkg1`` you will import\n``pkg.subpkg2.mod``. The specification for relative imports is\ncontained within **PEP 328**.\n\n``importlib.import_module()`` is provided to support applications that\ndetermine which modules need to be loaded dynamically.\n\n\nFuture statements\n=================\n\nA *future statement* is a directive to the compiler that a particular\nmodule should be compiled using syntax or semantics that will be\navailable in a specified future release of Python. The future\nstatement is intended to ease migration to future versions of Python\nthat introduce incompatible changes to the language. It allows use of\nthe new features on a per-module basis before the release in which the\nfeature becomes standard.\n\n future_statement ::= "from" "__future__" "import" feature ["as" name]\n ("," feature ["as" name])*\n | "from" "__future__" "import" "(" feature ["as" name]\n ("," feature ["as" name])* [","] ")"\n feature ::= identifier\n name ::= identifier\n\nA future statement must appear near the top of the module. The only\nlines that can appear before a future statement are:\n\n* the module docstring (if any),\n\n* comments,\n\n* blank lines, and\n\n* other future statements.\n\nThe features recognized by Python 2.6 are ``unicode_literals``,\n``print_function``, ``absolute_import``, ``division``, ``generators``,\n``nested_scopes`` and ``with_statement``. ``generators``,\n``with_statement``, ``nested_scopes`` are redundant in Python version\n2.6 and above because they are always enabled.\n\nA future statement is recognized and treated specially at compile\ntime: Changes to the semantics of core constructs are often\nimplemented by generating different code. It may even be the case\nthat a new feature introduces new incompatible syntax (such as a new\nreserved word), in which case the compiler may need to parse the\nmodule differently. 
Such decisions cannot be pushed off until\nruntime.\n\nFor any given release, the compiler knows which feature names have\nbeen defined, and raises a compile-time error if a future statement\ncontains a feature not known to it.\n\nThe direct runtime semantics are the same as for any import statement:\nthere is a standard module ``__future__``, described later, and it\nwill be imported in the usual way at the time the future statement is\nexecuted.\n\nThe interesting runtime semantics depend on the specific feature\nenabled by the future statement.\n\nNote that there is nothing special about the statement:\n\n import __future__ [as name]\n\nThat is not a future statement; it\'s an ordinary import statement with\nno special semantics or syntax restrictions.\n\nCode compiled by an ``exec`` statement or calls to the built-in\nfunctions ``compile()`` and ``execfile()`` that occur in a module\n``M`` containing a future statement will, by default, use the new\nsyntax or semantics associated with the future statement. This can,\nstarting with Python 2.2 be controlled by optional arguments to\n``compile()`` --- see the documentation of that function for details.\n\nA future statement typed at an interactive interpreter prompt will\ntake effect for the rest of the interpreter session. If an\ninterpreter is started with the *-i* option, is passed a script name\nto execute, and the script includes a future statement, it will be in\neffect in the interactive session started after the script is\nexecuted.\n\nSee also:\n\n **PEP 236** - Back to the __future__\n The original proposal for the __future__ mechanism.\n', + 'import': u'\nThe ``import`` statement\n************************\n\n import_stmt ::= "import" module ["as" name] ( "," module ["as" name] )*\n | "from" relative_module "import" identifier ["as" name]\n ( "," identifier ["as" name] )*\n | "from" relative_module "import" "(" identifier ["as" name]\n ( "," identifier ["as" name] )* [","] ")"\n | "from" module "import" "*"\n module ::= (identifier ".")* identifier\n relative_module ::= "."* module | "."+\n name ::= identifier\n\nImport statements are executed in two steps: (1) find a module, and\ninitialize it if necessary; (2) define a name or names in the local\nnamespace (of the scope where the ``import`` statement occurs). The\nstatement comes in two forms differing on whether it uses the ``from``\nkeyword. The first form (without ``from``) repeats these steps for\neach identifier in the list. The form with ``from`` performs step (1)\nonce, and then performs step (2) repeatedly.\n\nTo understand how step (1) occurs, one must first understand how\nPython handles hierarchical naming of modules. To help organize\nmodules and provide a hierarchy in naming, Python has a concept of\npackages. A package can contain other packages and modules while\nmodules cannot contain other modules or packages. From a file system\nperspective, packages are directories and modules are files. The\noriginal specification for packages is still available to read,\nalthough minor details have changed since the writing of that\ndocument.\n\nOnce the name of the module is known (unless otherwise specified, the\nterm "module" will refer to both packages and modules), searching for\nthe module or package can begin. The first place checked is\n``sys.modules``, the cache of all modules that have been imported\npreviously. 
If the module is found there then it is used in step (2)\nof import.\n\nIf the module is not found in the cache, then ``sys.meta_path`` is\nsearched (the specification for ``sys.meta_path`` can be found in\n**PEP 302**). The object is a list of *finder* objects which are\nqueried in order as to whether they know how to load the module by\ncalling their ``find_module()`` method with the name of the module. If\nthe module happens to be contained within a package (as denoted by the\nexistence of a dot in the name), then a second argument to\n``find_module()`` is given as the value of the ``__path__`` attribute\nfrom the parent package (everything up to the last dot in the name of\nthe module being imported). If a finder can find the module it returns\na *loader* (discussed later) or returns ``None``.\n\nIf none of the finders on ``sys.meta_path`` are able to find the\nmodule then some implicitly defined finders are queried.\nImplementations of Python vary in what implicit meta path finders are\ndefined. The one they all do define, though, is one that handles\n``sys.path_hooks``, ``sys.path_importer_cache``, and ``sys.path``.\n\nThe implicit finder searches for the requested module in the "paths"\nspecified in one of two places ("paths" do not have to be file system\npaths). If the module being imported is supposed to be contained\nwithin a package then the second argument passed to ``find_module()``,\n``__path__`` on the parent package, is used as the source of paths. If\nthe module is not contained in a package then ``sys.path`` is used as\nthe source of paths.\n\nOnce the source of paths is chosen it is iterated over to find a\nfinder that can handle that path. The dict at\n``sys.path_importer_cache`` caches finders for paths and is checked\nfor a finder. If the path does not have a finder cached then\n``sys.path_hooks`` is searched by calling each object in the list with\na single argument of the path, returning a finder or raises\n``ImportError``. If a finder is returned then it is cached in\n``sys.path_importer_cache`` and then used for that path entry. If no\nfinder can be found but the path exists then a value of ``None`` is\nstored in ``sys.path_importer_cache`` to signify that an implicit,\nfile-based finder that handles modules stored as individual files\nshould be used for that path. If the path does not exist then a finder\nwhich always returns ``None`` is placed in the cache for the path.\n\nIf no finder can find the module then ``ImportError`` is raised.\nOtherwise some finder returned a loader whose ``load_module()`` method\nis called with the name of the module to load (see **PEP 302** for the\noriginal definition of loaders). A loader has several responsibilities\nto perform on a module it loads. First, if the module already exists\nin ``sys.modules`` (a possibility if the loader is called outside of\nthe import machinery) then it is to use that module for initialization\nand not a new module. But if the module does not exist in\n``sys.modules`` then it is to be added to that dict before\ninitialization begins. If an error occurs during loading of the module\nand it was added to ``sys.modules`` it is to be removed from the dict.\nIf an error occurs but the module was already in ``sys.modules`` it is\nleft in the dict.\n\nThe loader must set several attributes on the module. ``__name__`` is\nto be set to the name of the module. 
``__file__`` is to be the "path"\nto the file unless the module is built-in (and thus listed in\n``sys.builtin_module_names``) in which case the attribute is not set.\nIf what is being imported is a package then ``__path__`` is to be set\nto a list of paths to be searched when looking for modules and\npackages contained within the package being imported. ``__package__``\nis optional but should be set to the name of package that contains the\nmodule or package (the empty string is used for module not contained\nin a package). ``__loader__`` is also optional but should be set to\nthe loader object that is loading the module.\n\nIf an error occurs during loading then the loader raises\n``ImportError`` if some other exception is not already being\npropagated. Otherwise the loader returns the module that was loaded\nand initialized.\n\nWhen step (1) finishes without raising an exception, step (2) can\nbegin.\n\nThe first form of ``import`` statement binds the module name in the\nlocal namespace to the module object, and then goes on to import the\nnext identifier, if any. If the module name is followed by ``as``,\nthe name following ``as`` is used as the local name for the module.\n\nThe ``from`` form does not bind the module name: it goes through the\nlist of identifiers, looks each one of them up in the module found in\nstep (1), and binds the name in the local namespace to the object thus\nfound. As with the first form of ``import``, an alternate local name\ncan be supplied by specifying "``as`` localname". If a name is not\nfound, ``ImportError`` is raised. If the list of identifiers is\nreplaced by a star (``\'*\'``), all public names defined in the module\nare bound in the local namespace of the ``import`` statement..\n\nThe *public names* defined by a module are determined by checking the\nmodule\'s namespace for a variable named ``__all__``; if defined, it\nmust be a sequence of strings which are names defined or imported by\nthat module. The names given in ``__all__`` are all considered public\nand are required to exist. If ``__all__`` is not defined, the set of\npublic names includes all names found in the module\'s namespace which\ndo not begin with an underscore character (``\'_\'``). ``__all__``\nshould contain the entire public API. It is intended to avoid\naccidentally exporting items that are not part of the API (such as\nlibrary modules which were imported and used within the module).\n\nThe ``from`` form with ``*`` may only occur in a module scope. If the\nwild card form of import --- ``import *`` --- is used in a function\nand the function contains or is a nested block with free variables,\nthe compiler will raise a ``SyntaxError``.\n\nWhen specifying what module to import you do not have to specify the\nabsolute name of the module. When a module or package is contained\nwithin another package it is possible to make a relative import within\nthe same top package without having to mention the package name. By\nusing leading dots in the specified module or package after ``from``\nyou can specify how high to traverse up the current package hierarchy\nwithout specifying exact names. One leading dot means the current\npackage where the module making the import exists. Two dots means up\none package level. Three dots is up two levels, etc. So if you execute\n``from . import mod`` from a module in the ``pkg`` package then you\nwill end up importing ``pkg.mod``. If you execute ``from ..subpkg2\nimport mod`` from within ``pkg.subpkg1`` you will import\n``pkg.subpkg2.mod``. 
The specification for relative imports is\ncontained within **PEP 328**.\n\n``importlib.import_module()`` is provided to support applications that\ndetermine which modules need to be loaded dynamically.\n\n\nFuture statements\n=================\n\nA *future statement* is a directive to the compiler that a particular\nmodule should be compiled using syntax or semantics that will be\navailable in a specified future release of Python. The future\nstatement is intended to ease migration to future versions of Python\nthat introduce incompatible changes to the language. It allows use of\nthe new features on a per-module basis before the release in which the\nfeature becomes standard.\n\n future_statement ::= "from" "__future__" "import" feature ["as" name]\n ("," feature ["as" name])*\n | "from" "__future__" "import" "(" feature ["as" name]\n ("," feature ["as" name])* [","] ")"\n feature ::= identifier\n name ::= identifier\n\nA future statement must appear near the top of the module. The only\nlines that can appear before a future statement are:\n\n* the module docstring (if any),\n\n* comments,\n\n* blank lines, and\n\n* other future statements.\n\nThe features recognized by Python 2.6 are ``unicode_literals``,\n``print_function``, ``absolute_import``, ``division``, ``generators``,\n``nested_scopes`` and ``with_statement``. ``generators``,\n``with_statement``, ``nested_scopes`` are redundant in Python version\n2.6 and above because they are always enabled.\n\nA future statement is recognized and treated specially at compile\ntime: Changes to the semantics of core constructs are often\nimplemented by generating different code. It may even be the case\nthat a new feature introduces new incompatible syntax (such as a new\nreserved word), in which case the compiler may need to parse the\nmodule differently. Such decisions cannot be pushed off until\nruntime.\n\nFor any given release, the compiler knows which feature names have\nbeen defined, and raises a compile-time error if a future statement\ncontains a feature not known to it.\n\nThe direct runtime semantics are the same as for any import statement:\nthere is a standard module ``__future__``, described later, and it\nwill be imported in the usual way at the time the future statement is\nexecuted.\n\nThe interesting runtime semantics depend on the specific feature\nenabled by the future statement.\n\nNote that there is nothing special about the statement:\n\n import __future__ [as name]\n\nThat is not a future statement; it\'s an ordinary import statement with\nno special semantics or syntax restrictions.\n\nCode compiled by an ``exec`` statement or calls to the built-in\nfunctions ``compile()`` and ``execfile()`` that occur in a module\n``M`` containing a future statement will, by default, use the new\nsyntax or semantics associated with the future statement. This can,\nstarting with Python 2.2 be controlled by optional arguments to\n``compile()`` --- see the documentation of that function for details.\n\nA future statement typed at an interactive interpreter prompt will\ntake effect for the rest of the interpreter session. 
If an\ninterpreter is started with the *-i* option, is passed a script name\nto execute, and the script includes a future statement, it will be in\neffect in the interactive session started after the script is\nexecuted.\n\nSee also:\n\n **PEP 236** - Back to the __future__\n The original proposal for the __future__ mechanism.\n', 'in': u'\nComparisons\n***********\n\nUnlike C, all comparison operations in Python have the same priority,\nwhich is lower than that of any arithmetic, shifting or bitwise\noperation. Also unlike C, expressions like ``a < b < c`` have the\ninterpretation that is conventional in mathematics:\n\n comparison ::= or_expr ( comp_operator or_expr )*\n comp_operator ::= "<" | ">" | "==" | ">=" | "<=" | "<>" | "!="\n | "is" ["not"] | ["not"] "in"\n\nComparisons yield boolean values: ``True`` or ``False``.\n\nComparisons can be chained arbitrarily, e.g., ``x < y <= z`` is\nequivalent to ``x < y and y <= z``, except that ``y`` is evaluated\nonly once (but in both cases ``z`` is not evaluated at all when ``x <\ny`` is found to be false).\n\nFormally, if *a*, *b*, *c*, ..., *y*, *z* are expressions and *op1*,\n*op2*, ..., *opN* are comparison operators, then ``a op1 b op2 c ... y\nopN z`` is equivalent to ``a op1 b and b op2 c and ... y opN z``,\nexcept that each expression is evaluated at most once.\n\nNote that ``a op1 b op2 c`` doesn\'t imply any kind of comparison\nbetween *a* and *c*, so that, e.g., ``x < y > z`` is perfectly legal\n(though perhaps not pretty).\n\nThe forms ``<>`` and ``!=`` are equivalent; for consistency with C,\n``!=`` is preferred; where ``!=`` is mentioned below ``<>`` is also\naccepted. The ``<>`` spelling is considered obsolescent.\n\nThe operators ``<``, ``>``, ``==``, ``>=``, ``<=``, and ``!=`` compare\nthe values of two objects. The objects need not have the same type.\nIf both are numbers, they are converted to a common type. Otherwise,\nobjects of different types *always* compare unequal, and are ordered\nconsistently but arbitrarily. You can control comparison behavior of\nobjects of non-built-in types by defining a ``__cmp__`` method or rich\ncomparison methods like ``__gt__``, described in section *Special\nmethod names*.\n\n(This unusual definition of comparison was used to simplify the\ndefinition of operations like sorting and the ``in`` and ``not in``\noperators. In the future, the comparison rules for objects of\ndifferent types are likely to change.)\n\nComparison of objects of the same type depends on the type:\n\n* Numbers are compared arithmetically.\n\n* Strings are compared lexicographically using the numeric equivalents\n (the result of the built-in function ``ord()``) of their characters.\n Unicode and 8-bit strings are fully interoperable in this behavior.\n [4]\n\n* Tuples and lists are compared lexicographically using comparison of\n corresponding elements. This means that to compare equal, each\n element must compare equal and the two sequences must be of the same\n type and have the same length.\n\n If not equal, the sequences are ordered the same as their first\n differing elements. For example, ``cmp([1,2,x], [1,2,y])`` returns\n the same as ``cmp(x,y)``. If the corresponding element does not\n exist, the shorter sequence is ordered first (for example, ``[1,2] <\n [1,2,3]``).\n\n* Mappings (dictionaries) compare equal if and only if their sorted\n (key, value) lists compare equal. [5] Outcomes other than equality\n are resolved consistently, but are not otherwise defined. 
[6]\n\n* Most other objects of built-in types compare unequal unless they are\n the same object; the choice whether one object is considered smaller\n or larger than another one is made arbitrarily but consistently\n within one execution of a program.\n\nThe operators ``in`` and ``not in`` test for collection membership.\n``x in s`` evaluates to true if *x* is a member of the collection *s*,\nand false otherwise. ``x not in s`` returns the negation of ``x in\ns``. The collection membership test has traditionally been bound to\nsequences; an object is a member of a collection if the collection is\na sequence and contains an element equal to that object. However, it\nmake sense for many other object types to support membership tests\nwithout being a sequence. In particular, dictionaries (for keys) and\nsets support membership testing.\n\nFor the list and tuple types, ``x in y`` is true if and only if there\nexists an index *i* such that ``x == y[i]`` is true.\n\nFor the Unicode and string types, ``x in y`` is true if and only if\n*x* is a substring of *y*. An equivalent test is ``y.find(x) != -1``.\nNote, *x* and *y* need not be the same type; consequently, ``u\'ab\' in\n\'abc\'`` will return ``True``. Empty strings are always considered to\nbe a substring of any other string, so ``"" in "abc"`` will return\n``True``.\n\nChanged in version 2.3: Previously, *x* was required to be a string of\nlength ``1``.\n\nFor user-defined classes which define the ``__contains__()`` method,\n``x in y`` is true if and only if ``y.__contains__(x)`` is true.\n\nFor user-defined classes which do not define ``__contains__()`` but do\ndefine ``__iter__()``, ``x in y`` is true if some value ``z`` with ``x\n== z`` is produced while iterating over ``y``. If an exception is\nraised during the iteration, it is as if ``in`` raised that exception.\n\nLastly, the old-style iteration protocol is tried: if a class defines\n``__getitem__()``, ``x in y`` is true if and only if there is a non-\nnegative integer index *i* such that ``x == y[i]``, and all lower\ninteger indices do not raise ``IndexError`` exception. (If any other\nexception is raised, it is as if ``in`` raised that exception).\n\nThe operator ``not in`` is defined to have the inverse true value of\n``in``.\n\nThe operators ``is`` and ``is not`` test for object identity: ``x is\ny`` is true if and only if *x* and *y* are the same object. ``x is\nnot y`` yields the inverse truth value. [7]\n', 'integers': u'\nInteger and long integer literals\n*********************************\n\nInteger and long integer literals are described by the following\nlexical definitions:\n\n longinteger ::= integer ("l" | "L")\n integer ::= decimalinteger | octinteger | hexinteger | bininteger\n decimalinteger ::= nonzerodigit digit* | "0"\n octinteger ::= "0" ("o" | "O") octdigit+ | "0" octdigit+\n hexinteger ::= "0" ("x" | "X") hexdigit+\n bininteger ::= "0" ("b" | "B") bindigit+\n nonzerodigit ::= "1"..."9"\n octdigit ::= "0"..."7"\n bindigit ::= "0" | "1"\n hexdigit ::= digit | "a"..."f" | "A"..."F"\n\nAlthough both lower case ``\'l\'`` and upper case ``\'L\'`` are allowed as\nsuffix for long integers, it is strongly recommended to always use\n``\'L\'``, since the letter ``\'l\'`` looks too much like the digit\n``\'1\'``.\n\nPlain integer literals that are above the largest representable plain\ninteger (e.g., 2147483647 when using 32-bit arithmetic) are accepted\nas if they were long integers instead. 
[1] There is no limit for long\ninteger literals apart from what can be stored in available memory.\n\nSome examples of plain integer literals (first row) and long integer\nliterals (second and third rows):\n\n 7 2147483647 0177\n 3L 79228162514264337593543950336L 0377L 0x100000000L\n 79228162514264337593543950336 0xdeadbeef\n', 'lambda': u'\nLambdas\n*******\n\n lambda_form ::= "lambda" [parameter_list]: expression\n old_lambda_form ::= "lambda" [parameter_list]: old_expression\n\nLambda forms (lambda expressions) have the same syntactic position as\nexpressions. They are a shorthand to create anonymous functions; the\nexpression ``lambda arguments: expression`` yields a function object.\nThe unnamed object behaves like a function object defined with\n\n def name(arguments):\n return expression\n\nSee section *Function definitions* for the syntax of parameter lists.\nNote that functions created with lambda forms cannot contain\nstatements.\n', 'lists': u'\nList displays\n*************\n\nA list display is a possibly empty series of expressions enclosed in\nsquare brackets:\n\n list_display ::= "[" [expression_list | list_comprehension] "]"\n list_comprehension ::= expression list_for\n list_for ::= "for" target_list "in" old_expression_list [list_iter]\n old_expression_list ::= old_expression [("," old_expression)+ [","]]\n old_expression ::= or_test | old_lambda_form\n list_iter ::= list_for | list_if\n list_if ::= "if" old_expression [list_iter]\n\nA list display yields a new list object. Its contents are specified\nby providing either a list of expressions or a list comprehension.\nWhen a comma-separated list of expressions is supplied, its elements\nare evaluated from left to right and placed into the list object in\nthat order. When a list comprehension is supplied, it consists of a\nsingle expression followed by at least one ``for`` clause and zero or\nmore ``for`` or ``if`` clauses. In this case, the elements of the new\nlist are those that would be produced by considering each of the\n``for`` or ``if`` clauses a block, nesting from left to right, and\nevaluating the expression to produce a list element each time the\ninnermost block is reached [1].\n', - 'naming': u"\nNaming and binding\n******************\n\n*Names* refer to objects. Names are introduced by name binding\noperations. Each occurrence of a name in the program text refers to\nthe *binding* of that name established in the innermost function block\ncontaining the use.\n\nA *block* is a piece of Python program text that is executed as a\nunit. The following are blocks: a module, a function body, and a class\ndefinition. Each command typed interactively is a block. A script\nfile (a file given as standard input to the interpreter or specified\non the interpreter command line the first argument) is a code block.\nA script command (a command specified on the interpreter command line\nwith the '**-c**' option) is a code block. The file read by the\nbuilt-in function ``execfile()`` is a code block. The string argument\npassed to the built-in function ``eval()`` and to the ``exec``\nstatement is a code block. The expression read and evaluated by the\nbuilt-in function ``input()`` is a code block.\n\nA code block is executed in an *execution frame*. A frame contains\nsome administrative information (used for debugging) and determines\nwhere and how execution continues after the code block's execution has\ncompleted.\n\nA *scope* defines the visibility of a name within a block. 
If a local\nvariable is defined in a block, its scope includes that block. If the\ndefinition occurs in a function block, the scope extends to any blocks\ncontained within the defining one, unless a contained block introduces\na different binding for the name. The scope of names defined in a\nclass block is limited to the class block; it does not extend to the\ncode blocks of methods -- this includes generator expressions since\nthey are implemented using a function scope. This means that the\nfollowing will fail:\n\n class A:\n a = 42\n b = list(a + i for i in range(10))\n\nWhen a name is used in a code block, it is resolved using the nearest\nenclosing scope. The set of all such scopes visible to a code block\nis called the block's *environment*.\n\nIf a name is bound in a block, it is a local variable of that block.\nIf a name is bound at the module level, it is a global variable. (The\nvariables of the module code block are local and global.) If a\nvariable is used in a code block but not defined there, it is a *free\nvariable*.\n\nWhen a name is not found at all, a ``NameError`` exception is raised.\nIf the name refers to a local variable that has not been bound, a\n``UnboundLocalError`` exception is raised. ``UnboundLocalError`` is a\nsubclass of ``NameError``.\n\nThe following constructs bind names: formal parameters to functions,\n``import`` statements, class and function definitions (these bind the\nclass or function name in the defining block), and targets that are\nidentifiers if occurring in an assignment, ``for`` loop header, in the\nsecond position of an ``except`` clause header or after ``as`` in a\n``with`` statement. The ``import`` statement of the form ``from ...\nimport *`` binds all names defined in the imported module, except\nthose beginning with an underscore. This form may only be used at the\nmodule level.\n\nA target occurring in a ``del`` statement is also considered bound for\nthis purpose (though the actual semantics are to unbind the name). It\nis illegal to unbind a name that is referenced by an enclosing scope;\nthe compiler will report a ``SyntaxError``.\n\nEach assignment or import statement occurs within a block defined by a\nclass or function definition or at the module level (the top-level\ncode block).\n\nIf a name binding operation occurs anywhere within a code block, all\nuses of the name within the block are treated as references to the\ncurrent block. This can lead to errors when a name is used within a\nblock before it is bound. This rule is subtle. Python lacks\ndeclarations and allows name binding operations to occur anywhere\nwithin a code block. The local variables of a code block can be\ndetermined by scanning the entire text of the block for name binding\noperations.\n\nIf the global statement occurs within a block, all uses of the name\nspecified in the statement refer to the binding of that name in the\ntop-level namespace. Names are resolved in the top-level namespace by\nsearching the global namespace, i.e. the namespace of the module\ncontaining the code block, and the builtins namespace, the namespace\nof the module ``__builtin__``. The global namespace is searched\nfirst. If the name is not found there, the builtins namespace is\nsearched. 
The global statement must precede all uses of the name.\n\nThe builtins namespace associated with the execution of a code block\nis actually found by looking up the name ``__builtins__`` in its\nglobal namespace; this should be a dictionary or a module (in the\nlatter case the module's dictionary is used). By default, when in the\n``__main__`` module, ``__builtins__`` is the built-in module\n``__builtin__`` (note: no 's'); when in any other module,\n``__builtins__`` is an alias for the dictionary of the ``__builtin__``\nmodule itself. ``__builtins__`` can be set to a user-created\ndictionary to create a weak form of restricted execution.\n\n**CPython implementation detail:** Users should not touch\n``__builtins__``; it is strictly an implementation detail. Users\nwanting to override values in the builtins namespace should ``import``\nthe ``__builtin__`` (no 's') module and modify its attributes\nappropriately.\n\nThe namespace for a module is automatically created the first time a\nmodule is imported. The main module for a script is always called\n``__main__``.\n\nThe global statement has the same scope as a name binding operation in\nthe same block. If the nearest enclosing scope for a free variable\ncontains a global statement, the free variable is treated as a global.\n\nA class definition is an executable statement that may use and define\nnames. These references follow the normal rules for name resolution.\nThe namespace of the class definition becomes the attribute dictionary\nof the class. Names defined at the class scope are not visible in\nmethods.\n\n\nInteraction with dynamic features\n=================================\n\nThere are several cases where Python statements are illegal when used\nin conjunction with nested scopes that contain free variables.\n\nIf a variable is referenced in an enclosing scope, it is illegal to\ndelete the name. An error will be reported at compile time.\n\nIf the wild card form of import --- ``import *`` --- is used in a\nfunction and the function contains or is a nested block with free\nvariables, the compiler will raise a ``SyntaxError``.\n\nIf ``exec`` is used in a function and the function contains or is a\nnested block with free variables, the compiler will raise a\n``SyntaxError`` unless the exec explicitly specifies the local\nnamespace for the ``exec``. (In other words, ``exec obj`` would be\nillegal, but ``exec obj in ns`` would be legal.)\n\nThe ``eval()``, ``execfile()``, and ``input()`` functions and the\n``exec`` statement do not have access to the full environment for\nresolving names. Names may be resolved in the local and global\nnamespaces of the caller. Free variables are not resolved in the\nnearest enclosing namespace, but in the global namespace. [1] The\n``exec`` statement and the ``eval()`` and ``execfile()`` functions\nhave optional arguments to override the global and local namespace.\nIf only one namespace is specified, it is used for both.\n", + 'naming': u"\nNaming and binding\n******************\n\n*Names* refer to objects. Names are introduced by name binding\noperations. Each occurrence of a name in the program text refers to\nthe *binding* of that name established in the innermost function block\ncontaining the use.\n\nA *block* is a piece of Python program text that is executed as a\nunit. The following are blocks: a module, a function body, and a class\ndefinition. Each command typed interactively is a block. 
A script\nfile (a file given as standard input to the interpreter or specified\non the interpreter command line the first argument) is a code block.\nA script command (a command specified on the interpreter command line\nwith the '**-c**' option) is a code block. The file read by the\nbuilt-in function ``execfile()`` is a code block. The string argument\npassed to the built-in function ``eval()`` and to the ``exec``\nstatement is a code block. The expression read and evaluated by the\nbuilt-in function ``input()`` is a code block.\n\nA code block is executed in an *execution frame*. A frame contains\nsome administrative information (used for debugging) and determines\nwhere and how execution continues after the code block's execution has\ncompleted.\n\nA *scope* defines the visibility of a name within a block. If a local\nvariable is defined in a block, its scope includes that block. If the\ndefinition occurs in a function block, the scope extends to any blocks\ncontained within the defining one, unless a contained block introduces\na different binding for the name. The scope of names defined in a\nclass block is limited to the class block; it does not extend to the\ncode blocks of methods -- this includes generator expressions since\nthey are implemented using a function scope. This means that the\nfollowing will fail:\n\n class A:\n a = 42\n b = list(a + i for i in range(10))\n\nWhen a name is used in a code block, it is resolved using the nearest\nenclosing scope. The set of all such scopes visible to a code block\nis called the block's *environment*.\n\nIf a name is bound in a block, it is a local variable of that block.\nIf a name is bound at the module level, it is a global variable. (The\nvariables of the module code block are local and global.) If a\nvariable is used in a code block but not defined there, it is a *free\nvariable*.\n\nWhen a name is not found at all, a ``NameError`` exception is raised.\nIf the name refers to a local variable that has not been bound, a\n``UnboundLocalError`` exception is raised. ``UnboundLocalError`` is a\nsubclass of ``NameError``.\n\nThe following constructs bind names: formal parameters to functions,\n``import`` statements, class and function definitions (these bind the\nclass or function name in the defining block), and targets that are\nidentifiers if occurring in an assignment, ``for`` loop header, in the\nsecond position of an ``except`` clause header or after ``as`` in a\n``with`` statement. The ``import`` statement of the form ``from ...\nimport *`` binds all names defined in the imported module, except\nthose beginning with an underscore. This form may only be used at the\nmodule level.\n\nA target occurring in a ``del`` statement is also considered bound for\nthis purpose (though the actual semantics are to unbind the name). It\nis illegal to unbind a name that is referenced by an enclosing scope;\nthe compiler will report a ``SyntaxError``.\n\nEach assignment or import statement occurs within a block defined by a\nclass or function definition or at the module level (the top-level\ncode block).\n\nIf a name binding operation occurs anywhere within a code block, all\nuses of the name within the block are treated as references to the\ncurrent block. This can lead to errors when a name is used within a\nblock before it is bound. This rule is subtle. Python lacks\ndeclarations and allows name binding operations to occur anywhere\nwithin a code block. 
The local variables of a code block can be\ndetermined by scanning the entire text of the block for name binding\noperations.\n\nIf the global statement occurs within a block, all uses of the name\nspecified in the statement refer to the binding of that name in the\ntop-level namespace. Names are resolved in the top-level namespace by\nsearching the global namespace, i.e. the namespace of the module\ncontaining the code block, and the builtins namespace, the namespace\nof the module ``__builtin__``. The global namespace is searched\nfirst. If the name is not found there, the builtins namespace is\nsearched. The global statement must precede all uses of the name.\n\nThe builtins namespace associated with the execution of a code block\nis actually found by looking up the name ``__builtins__`` in its\nglobal namespace; this should be a dictionary or a module (in the\nlatter case the module's dictionary is used). By default, when in the\n``__main__`` module, ``__builtins__`` is the built-in module\n``__builtin__`` (note: no 's'); when in any other module,\n``__builtins__`` is an alias for the dictionary of the ``__builtin__``\nmodule itself. ``__builtins__`` can be set to a user-created\ndictionary to create a weak form of restricted execution.\n\n**CPython implementation detail:** Users should not touch\n``__builtins__``; it is strictly an implementation detail. Users\nwanting to override values in the builtins namespace should ``import``\nthe ``__builtin__`` (no 's') module and modify its attributes\nappropriately.\n\nThe namespace for a module is automatically created the first time a\nmodule is imported. The main module for a script is always called\n``__main__``.\n\nThe ``global`` statement has the same scope as a name binding\noperation in the same block. If the nearest enclosing scope for a\nfree variable contains a global statement, the free variable is\ntreated as a global.\n\nA class definition is an executable statement that may use and define\nnames. These references follow the normal rules for name resolution.\nThe namespace of the class definition becomes the attribute dictionary\nof the class. Names defined at the class scope are not visible in\nmethods.\n\n\nInteraction with dynamic features\n=================================\n\nThere are several cases where Python statements are illegal when used\nin conjunction with nested scopes that contain free variables.\n\nIf a variable is referenced in an enclosing scope, it is illegal to\ndelete the name. An error will be reported at compile time.\n\nIf the wild card form of import --- ``import *`` --- is used in a\nfunction and the function contains or is a nested block with free\nvariables, the compiler will raise a ``SyntaxError``.\n\nIf ``exec`` is used in a function and the function contains or is a\nnested block with free variables, the compiler will raise a\n``SyntaxError`` unless the exec explicitly specifies the local\nnamespace for the ``exec``. (In other words, ``exec obj`` would be\nillegal, but ``exec obj in ns`` would be legal.)\n\nThe ``eval()``, ``execfile()``, and ``input()`` functions and the\n``exec`` statement do not have access to the full environment for\nresolving names. Names may be resolved in the local and global\nnamespaces of the caller. Free variables are not resolved in the\nnearest enclosing namespace, but in the global namespace. 
[1] The\n``exec`` statement and the ``eval()`` and ``execfile()`` functions\nhave optional arguments to override the global and local namespace.\nIf only one namespace is specified, it is used for both.\n", 'numbers': u"\nNumeric literals\n****************\n\nThere are four types of numeric literals: plain integers, long\nintegers, floating point numbers, and imaginary numbers. There are no\ncomplex literals (complex numbers can be formed by adding a real\nnumber and an imaginary number).\n\nNote that numeric literals do not include a sign; a phrase like ``-1``\nis actually an expression composed of the unary operator '``-``' and\nthe literal ``1``.\n", 'numeric-types': u'\nEmulating numeric types\n***********************\n\nThe following methods can be defined to emulate numeric objects.\nMethods corresponding to operations that are not supported by the\nparticular kind of number implemented (e.g., bitwise operations for\nnon-integral numbers) should be left undefined.\n\nobject.__add__(self, other)\nobject.__sub__(self, other)\nobject.__mul__(self, other)\nobject.__floordiv__(self, other)\nobject.__mod__(self, other)\nobject.__divmod__(self, other)\nobject.__pow__(self, other[, modulo])\nobject.__lshift__(self, other)\nobject.__rshift__(self, other)\nobject.__and__(self, other)\nobject.__xor__(self, other)\nobject.__or__(self, other)\n\n These methods are called to implement the binary arithmetic\n operations (``+``, ``-``, ``*``, ``//``, ``%``, ``divmod()``,\n ``pow()``, ``**``, ``<<``, ``>>``, ``&``, ``^``, ``|``). For\n instance, to evaluate the expression ``x + y``, where *x* is an\n instance of a class that has an ``__add__()`` method,\n ``x.__add__(y)`` is called. The ``__divmod__()`` method should be\n the equivalent to using ``__floordiv__()`` and ``__mod__()``; it\n should not be related to ``__truediv__()`` (described below). Note\n that ``__pow__()`` should be defined to accept an optional third\n argument if the ternary version of the built-in ``pow()`` function\n is to be supported.\n\n If one of those methods does not support the operation with the\n supplied arguments, it should return ``NotImplemented``.\n\nobject.__div__(self, other)\nobject.__truediv__(self, other)\n\n The division operator (``/``) is implemented by these methods. The\n ``__truediv__()`` method is used when ``__future__.division`` is in\n effect, otherwise ``__div__()`` is used. If only one of these two\n methods is defined, the object will not support division in the\n alternate context; ``TypeError`` will be raised instead.\n\nobject.__radd__(self, other)\nobject.__rsub__(self, other)\nobject.__rmul__(self, other)\nobject.__rdiv__(self, other)\nobject.__rtruediv__(self, other)\nobject.__rfloordiv__(self, other)\nobject.__rmod__(self, other)\nobject.__rdivmod__(self, other)\nobject.__rpow__(self, other)\nobject.__rlshift__(self, other)\nobject.__rrshift__(self, other)\nobject.__rand__(self, other)\nobject.__rxor__(self, other)\nobject.__ror__(self, other)\n\n These methods are called to implement the binary arithmetic\n operations (``+``, ``-``, ``*``, ``/``, ``%``, ``divmod()``,\n ``pow()``, ``**``, ``<<``, ``>>``, ``&``, ``^``, ``|``) with\n reflected (swapped) operands. These functions are only called if\n the left operand does not support the corresponding operation and\n the operands are of different types. 
[2] For instance, to evaluate\n the expression ``x - y``, where *y* is an instance of a class that\n has an ``__rsub__()`` method, ``y.__rsub__(x)`` is called if\n ``x.__sub__(y)`` returns *NotImplemented*.\n\n Note that ternary ``pow()`` will not try calling ``__rpow__()``\n (the coercion rules would become too complicated).\n\n Note: If the right operand\'s type is a subclass of the left operand\'s\n type and that subclass provides the reflected method for the\n operation, this method will be called before the left operand\'s\n non-reflected method. This behavior allows subclasses to\n override their ancestors\' operations.\n\nobject.__iadd__(self, other)\nobject.__isub__(self, other)\nobject.__imul__(self, other)\nobject.__idiv__(self, other)\nobject.__itruediv__(self, other)\nobject.__ifloordiv__(self, other)\nobject.__imod__(self, other)\nobject.__ipow__(self, other[, modulo])\nobject.__ilshift__(self, other)\nobject.__irshift__(self, other)\nobject.__iand__(self, other)\nobject.__ixor__(self, other)\nobject.__ior__(self, other)\n\n These methods are called to implement the augmented arithmetic\n assignments (``+=``, ``-=``, ``*=``, ``/=``, ``//=``, ``%=``,\n ``**=``, ``<<=``, ``>>=``, ``&=``, ``^=``, ``|=``). These methods\n should attempt to do the operation in-place (modifying *self*) and\n return the result (which could be, but does not have to be,\n *self*). If a specific method is not defined, the augmented\n assignment falls back to the normal methods. For instance, to\n execute the statement ``x += y``, where *x* is an instance of a\n class that has an ``__iadd__()`` method, ``x.__iadd__(y)`` is\n called. If *x* is an instance of a class that does not define a\n ``__iadd__()`` method, ``x.__add__(y)`` and ``y.__radd__(x)`` are\n considered, as with the evaluation of ``x + y``.\n\nobject.__neg__(self)\nobject.__pos__(self)\nobject.__abs__(self)\nobject.__invert__(self)\n\n Called to implement the unary arithmetic operations (``-``, ``+``,\n ``abs()`` and ``~``).\n\nobject.__complex__(self)\nobject.__int__(self)\nobject.__long__(self)\nobject.__float__(self)\n\n Called to implement the built-in functions ``complex()``,\n ``int()``, ``long()``, and ``float()``. Should return a value of\n the appropriate type.\n\nobject.__oct__(self)\nobject.__hex__(self)\n\n Called to implement the built-in functions ``oct()`` and ``hex()``.\n Should return a string value.\n\nobject.__index__(self)\n\n Called to implement ``operator.index()``. Also called whenever\n Python needs an integer object (such as in slicing). Must return\n an integer (int or long).\n\n New in version 2.5.\n\nobject.__coerce__(self, other)\n\n Called to implement "mixed-mode" numeric arithmetic. Should either\n return a 2-tuple containing *self* and *other* converted to a\n common numeric type, or ``None`` if conversion is impossible. When\n the common type would be the type of ``other``, it is sufficient to\n return ``None``, since the interpreter will also ask the other\n object to attempt a coercion (but sometimes, if the implementation\n of the other type cannot be changed, it is useful to do the\n conversion to the other type here). A return value of\n ``NotImplemented`` is equivalent to returning ``None``.\n', - 'objects': u'\nObjects, values and types\n*************************\n\n*Objects* are Python\'s abstraction for data. All data in a Python\nprogram is represented by objects or by relations between objects. 
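As a rough sketch of the binary, reflected and in-place arithmetic methods described above (the ``Meters`` class and its attribute are invented for illustration; Python 2.x):

   class Meters(object):
       def __init__(self, value):
           self.value = value

       def __add__(self, other):
           # called for  m + other
           if isinstance(other, Meters):
               return Meters(self.value + other.value)
           if isinstance(other, (int, float)):
               return Meters(self.value + other)
           return NotImplemented        # let the other operand try

       def __radd__(self, other):
           # called for  other + m  when type(other) cannot handle Meters
           return self.__add__(other)

       def __iadd__(self, other):
           # called for  m += other; modifies self in place and returns it
           self.value = (self + other).value
           return self

   m = Meters(2)
   print (m + 3).value          # 5, via __add__
   print (3 + m).value          # 5, via __radd__
   m += Meters(4)               # __iadd__; m.value is now 6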
(In\na sense, and in conformance to Von Neumann\'s model of a "stored\nprogram computer," code is also represented by objects.)\n\nEvery object has an identity, a type and a value. An object\'s\n*identity* never changes once it has been created; you may think of it\nas the object\'s address in memory. The \'``is``\' operator compares the\nidentity of two objects; the ``id()`` function returns an integer\nrepresenting its identity (currently implemented as its address). An\nobject\'s *type* is also unchangeable. [1] An object\'s type determines\nthe operations that the object supports (e.g., "does it have a\nlength?") and also defines the possible values for objects of that\ntype. The ``type()`` function returns an object\'s type (which is an\nobject itself). The *value* of some objects can change. Objects\nwhose value can change are said to be *mutable*; objects whose value\nis unchangeable once they are created are called *immutable*. (The\nvalue of an immutable container object that contains a reference to a\nmutable object can change when the latter\'s value is changed; however\nthe container is still considered immutable, because the collection of\nobjects it contains cannot be changed. So, immutability is not\nstrictly the same as having an unchangeable value, it is more subtle.)\nAn object\'s mutability is determined by its type; for instance,\nnumbers, strings and tuples are immutable, while dictionaries and\nlists are mutable.\n\nObjects are never explicitly destroyed; however, when they become\nunreachable they may be garbage-collected. An implementation is\nallowed to postpone garbage collection or omit it altogether --- it is\na matter of implementation quality how garbage collection is\nimplemented, as long as no objects are collected that are still\nreachable.\n\n**CPython implementation detail:** CPython currently uses a reference-\ncounting scheme with (optional) delayed detection of cyclically linked\ngarbage, which collects most objects as soon as they become\nunreachable, but is not guaranteed to collect garbage containing\ncircular references. See the documentation of the ``gc`` module for\ninformation on controlling the collection of cyclic garbage. Other\nimplementations act differently and CPython may change.\n\nNote that the use of the implementation\'s tracing or debugging\nfacilities may keep objects alive that would normally be collectable.\nAlso note that catching an exception with a \'``try``...``except``\'\nstatement may keep objects alive.\n\nSome objects contain references to "external" resources such as open\nfiles or windows. It is understood that these resources are freed\nwhen the object is garbage-collected, but since garbage collection is\nnot guaranteed to happen, such objects also provide an explicit way to\nrelease the external resource, usually a ``close()`` method. Programs\nare strongly recommended to explicitly close such objects. The\n\'``try``...``finally``\' statement provides a convenient way to do\nthis.\n\nSome objects contain references to other objects; these are called\n*containers*. Examples of containers are tuples, lists and\ndictionaries. The references are part of a container\'s value. In\nmost cases, when we talk about the value of a container, we imply the\nvalues, not the identities of the contained objects; however, when we\ntalk about the mutability of a container, only the identities of the\nimmediately contained objects are implied. 
So, if an immutable\ncontainer (like a tuple) contains a reference to a mutable object, its\nvalue changes if that mutable object is changed.\n\nTypes affect almost all aspects of object behavior. Even the\nimportance of object identity is affected in some sense: for immutable\ntypes, operations that compute new values may actually return a\nreference to any existing object with the same type and value, while\nfor mutable objects this is not allowed. E.g., after ``a = 1; b =\n1``, ``a`` and ``b`` may or may not refer to the same object with the\nvalue one, depending on the implementation, but after ``c = []; d =\n[]``, ``c`` and ``d`` are guaranteed to refer to two different,\nunique, newly created empty lists. (Note that ``c = d = []`` assigns\nthe same object to both ``c`` and ``d``.)\n', - 'operator-summary': u'\nSummary\n*******\n\nThe following table summarizes the operator precedences in Python,\nfrom lowest precedence (least binding) to highest precedence (most\nbinding). Operators in the same box have the same precedence. Unless\nthe syntax is explicitly given, operators are binary. Operators in\nthe same box group left to right (except for comparisons, including\ntests, which all have the same precedence and chain from left to right\n--- see section *Comparisons* --- and exponentiation, which groups\nfrom right to left).\n\n+-------------------------------------------------+---------------------------------------+\n| Operator | Description |\n+=================================================+=======================================+\n| ``lambda`` | Lambda expression |\n+-------------------------------------------------+---------------------------------------+\n| ``if`` -- ``else`` | Conditional expression |\n+-------------------------------------------------+---------------------------------------+\n| ``or`` | Boolean OR |\n+-------------------------------------------------+---------------------------------------+\n| ``and`` | Boolean AND |\n+-------------------------------------------------+---------------------------------------+\n| ``not`` *x* | Boolean NOT |\n+-------------------------------------------------+---------------------------------------+\n| ``in``, ``not`` ``in``, ``is``, ``is not``, | Comparisons, including membership |\n| ``<``, ``<=``, ``>``, ``>=``, ``<>``, ``!=``, | tests and identity tests, |\n| ``==`` | |\n+-------------------------------------------------+---------------------------------------+\n| ``|`` | Bitwise OR |\n+-------------------------------------------------+---------------------------------------+\n| ``^`` | Bitwise XOR |\n+-------------------------------------------------+---------------------------------------+\n| ``&`` | Bitwise AND |\n+-------------------------------------------------+---------------------------------------+\n| ``<<``, ``>>`` | Shifts |\n+-------------------------------------------------+---------------------------------------+\n| ``+``, ``-`` | Addition and subtraction |\n+-------------------------------------------------+---------------------------------------+\n| ``*``, ``/``, ``//``, ``%`` | Multiplication, division, remainder |\n+-------------------------------------------------+---------------------------------------+\n| ``+x``, ``-x``, ``~x`` | Positive, negative, bitwise NOT |\n+-------------------------------------------------+---------------------------------------+\n| ``**`` | Exponentiation [8] |\n+-------------------------------------------------+---------------------------------------+\n| ``x[index]``, 
``x[index:index]``, | Subscription, slicing, call, |\n| ``x(arguments...)``, ``x.attribute`` | attribute reference |\n+-------------------------------------------------+---------------------------------------+\n| ``(expressions...)``, ``[expressions...]``, | Binding or tuple display, list |\n| ``{key:datum...}``, ```expressions...``` | display, dictionary display, string |\n| | conversion |\n+-------------------------------------------------+---------------------------------------+\n\n-[ Footnotes ]-\n\n[1] In Python 2.3 and later releases, a list comprehension "leaks" the\n control variables of each ``for`` it contains into the containing\n scope. However, this behavior is deprecated, and relying on it\n will not work in Python 3.0\n\n[2] While ``abs(x%y) < abs(y)`` is true mathematically, for floats it\n may not be true numerically due to roundoff. For example, and\n assuming a platform on which a Python float is an IEEE 754 double-\n precision number, in order that ``-1e-100 % 1e100`` have the same\n sign as ``1e100``, the computed result is ``-1e-100 + 1e100``,\n which is numerically exactly equal to ``1e100``. Function\n ``fmod()`` in the ``math`` module returns a result whose sign\n matches the sign of the first argument instead, and so returns\n ``-1e-100`` in this case. Which approach is more appropriate\n depends on the application.\n\n[3] If x is very close to an exact integer multiple of y, it\'s\n possible for ``floor(x/y)`` to be one larger than ``(x-x%y)/y``\n due to rounding. In such cases, Python returns the latter result,\n in order to preserve that ``divmod(x,y)[0] * y + x % y`` be very\n close to ``x``.\n\n[4] While comparisons between unicode strings make sense at the byte\n level, they may be counter-intuitive to users. For example, the\n strings ``u"\\u00C7"`` and ``u"\\u0043\\u0327"`` compare differently,\n even though they both represent the same unicode character (LATIN\n CAPITAL LETTER C WITH CEDILLA). To compare strings in a human\n recognizable way, compare using ``unicodedata.normalize()``.\n\n[5] The implementation computes this efficiently, without constructing\n lists or sorting.\n\n[6] Earlier versions of Python used lexicographic comparison of the\n sorted (key, value) lists, but this was very expensive for the\n common case of comparing for equality. An even earlier version of\n Python compared dictionaries by identity only, but this caused\n surprises because people expected to be able to test a dictionary\n for emptiness by comparing it to ``{}``.\n\n[7] Due to automatic garbage-collection, free lists, and the dynamic\n nature of descriptors, you may notice seemingly unusual behaviour\n in certain uses of the ``is`` operator, like those involving\n comparisons between instance methods, or constants. Check their\n documentation for more info.\n\n[8] The power operator ``**`` binds less tightly than an arithmetic or\n bitwise unary operator on its right, that is, ``2**-1`` is\n ``0.5``.\n', + 'objects': u'\nObjects, values and types\n*************************\n\n*Objects* are Python\'s abstraction for data. All data in a Python\nprogram is represented by objects or by relations between objects. (In\na sense, and in conformance to Von Neumann\'s model of a "stored\nprogram computer," code is also represented by objects.)\n\nEvery object has an identity, a type and a value. An object\'s\n*identity* never changes once it has been created; you may think of it\nas the object\'s address in memory. 
The \'``is``\' operator compares the\nidentity of two objects; the ``id()`` function returns an integer\nrepresenting its identity (currently implemented as its address). An\nobject\'s *type* is also unchangeable. [1] An object\'s type determines\nthe operations that the object supports (e.g., "does it have a\nlength?") and also defines the possible values for objects of that\ntype. The ``type()`` function returns an object\'s type (which is an\nobject itself). The *value* of some objects can change. Objects\nwhose value can change are said to be *mutable*; objects whose value\nis unchangeable once they are created are called *immutable*. (The\nvalue of an immutable container object that contains a reference to a\nmutable object can change when the latter\'s value is changed; however\nthe container is still considered immutable, because the collection of\nobjects it contains cannot be changed. So, immutability is not\nstrictly the same as having an unchangeable value, it is more subtle.)\nAn object\'s mutability is determined by its type; for instance,\nnumbers, strings and tuples are immutable, while dictionaries and\nlists are mutable.\n\nObjects are never explicitly destroyed; however, when they become\nunreachable they may be garbage-collected. An implementation is\nallowed to postpone garbage collection or omit it altogether --- it is\na matter of implementation quality how garbage collection is\nimplemented, as long as no objects are collected that are still\nreachable.\n\n**CPython implementation detail:** CPython currently uses a reference-\ncounting scheme with (optional) delayed detection of cyclically linked\ngarbage, which collects most objects as soon as they become\nunreachable, but is not guaranteed to collect garbage containing\ncircular references. See the documentation of the ``gc`` module for\ninformation on controlling the collection of cyclic garbage. Other\nimplementations act differently and CPython may change. Do not depend\non immediate finalization of objects when they become unreachable (ex:\nalways close files).\n\nNote that the use of the implementation\'s tracing or debugging\nfacilities may keep objects alive that would normally be collectable.\nAlso note that catching an exception with a \'``try``...``except``\'\nstatement may keep objects alive.\n\nSome objects contain references to "external" resources such as open\nfiles or windows. It is understood that these resources are freed\nwhen the object is garbage-collected, but since garbage collection is\nnot guaranteed to happen, such objects also provide an explicit way to\nrelease the external resource, usually a ``close()`` method. Programs\nare strongly recommended to explicitly close such objects. The\n\'``try``...``finally``\' statement provides a convenient way to do\nthis.\n\nSome objects contain references to other objects; these are called\n*containers*. Examples of containers are tuples, lists and\ndictionaries. The references are part of a container\'s value. In\nmost cases, when we talk about the value of a container, we imply the\nvalues, not the identities of the contained objects; however, when we\ntalk about the mutability of a container, only the identities of the\nimmediately contained objects are implied. So, if an immutable\ncontainer (like a tuple) contains a reference to a mutable object, its\nvalue changes if that mutable object is changed.\n\nTypes affect almost all aspects of object behavior. 
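A small sketch of the identity and mutability rules described above (variable names invented; Python 2.x):

   t = (1, [2, 3])       # the tuple itself is immutable ...
   t[1].append(4)        # ... but the list it refers to can still change
   print t               # (1, [2, 3, 4]) -- same tuple object throughout

   c = []
   d = []
   print c is d          # False: two distinct, newly created lists
   e = f = []
   print e is f          # True: one object bound to two names
   print id(e) == id(f)  # True: 'is' compares identity, like id()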
Even the\nimportance of object identity is affected in some sense: for immutable\ntypes, operations that compute new values may actually return a\nreference to any existing object with the same type and value, while\nfor mutable objects this is not allowed. E.g., after ``a = 1; b =\n1``, ``a`` and ``b`` may or may not refer to the same object with the\nvalue one, depending on the implementation, but after ``c = []; d =\n[]``, ``c`` and ``d`` are guaranteed to refer to two different,\nunique, newly created empty lists. (Note that ``c = d = []`` assigns\nthe same object to both ``c`` and ``d``.)\n', + 'operator-summary': u'\nSummary\n*******\n\nThe following table summarizes the operator precedences in Python,\nfrom lowest precedence (least binding) to highest precedence (most\nbinding). Operators in the same box have the same precedence. Unless\nthe syntax is explicitly given, operators are binary. Operators in\nthe same box group left to right (except for comparisons, including\ntests, which all have the same precedence and chain from left to right\n--- see section *Comparisons* --- and exponentiation, which groups\nfrom right to left).\n\n+-------------------------------------------------+---------------------------------------+\n| Operator | Description |\n+=================================================+=======================================+\n| ``lambda`` | Lambda expression |\n+-------------------------------------------------+---------------------------------------+\n| ``if`` -- ``else`` | Conditional expression |\n+-------------------------------------------------+---------------------------------------+\n| ``or`` | Boolean OR |\n+-------------------------------------------------+---------------------------------------+\n| ``and`` | Boolean AND |\n+-------------------------------------------------+---------------------------------------+\n| ``not`` *x* | Boolean NOT |\n+-------------------------------------------------+---------------------------------------+\n| ``in``, ``not`` ``in``, ``is``, ``is not``, | Comparisons, including membership |\n| ``<``, ``<=``, ``>``, ``>=``, ``<>``, ``!=``, | tests and identity tests, |\n| ``==`` | |\n+-------------------------------------------------+---------------------------------------+\n| ``|`` | Bitwise OR |\n+-------------------------------------------------+---------------------------------------+\n| ``^`` | Bitwise XOR |\n+-------------------------------------------------+---------------------------------------+\n| ``&`` | Bitwise AND |\n+-------------------------------------------------+---------------------------------------+\n| ``<<``, ``>>`` | Shifts |\n+-------------------------------------------------+---------------------------------------+\n| ``+``, ``-`` | Addition and subtraction |\n+-------------------------------------------------+---------------------------------------+\n| ``*``, ``/``, ``//``, ``%`` | Multiplication, division, remainder |\n| | [8] |\n+-------------------------------------------------+---------------------------------------+\n| ``+x``, ``-x``, ``~x`` | Positive, negative, bitwise NOT |\n+-------------------------------------------------+---------------------------------------+\n| ``**`` | Exponentiation [9] |\n+-------------------------------------------------+---------------------------------------+\n| ``x[index]``, ``x[index:index]``, | Subscription, slicing, call, |\n| ``x(arguments...)``, ``x.attribute`` | attribute reference 
|\n+-------------------------------------------------+---------------------------------------+\n| ``(expressions...)``, ``[expressions...]``, | Binding or tuple display, list |\n| ``{key:datum...}``, ```expressions...``` | display, dictionary display, string |\n| | conversion |\n+-------------------------------------------------+---------------------------------------+\n\n-[ Footnotes ]-\n\n[1] In Python 2.3 and later releases, a list comprehension "leaks" the\n control variables of each ``for`` it contains into the containing\n scope. However, this behavior is deprecated, and relying on it\n will not work in Python 3.0\n\n[2] While ``abs(x%y) < abs(y)`` is true mathematically, for floats it\n may not be true numerically due to roundoff. For example, and\n assuming a platform on which a Python float is an IEEE 754 double-\n precision number, in order that ``-1e-100 % 1e100`` have the same\n sign as ``1e100``, the computed result is ``-1e-100 + 1e100``,\n which is numerically exactly equal to ``1e100``. The function\n ``math.fmod()`` returns a result whose sign matches the sign of\n the first argument instead, and so returns ``-1e-100`` in this\n case. Which approach is more appropriate depends on the\n application.\n\n[3] If x is very close to an exact integer multiple of y, it\'s\n possible for ``floor(x/y)`` to be one larger than ``(x-x%y)/y``\n due to rounding. In such cases, Python returns the latter result,\n in order to preserve that ``divmod(x,y)[0] * y + x % y`` be very\n close to ``x``.\n\n[4] While comparisons between unicode strings make sense at the byte\n level, they may be counter-intuitive to users. For example, the\n strings ``u"\\u00C7"`` and ``u"\\u0043\\u0327"`` compare differently,\n even though they both represent the same unicode character (LATIN\n CAPITAL LETTER C WITH CEDILLA). To compare strings in a human\n recognizable way, compare using ``unicodedata.normalize()``.\n\n[5] The implementation computes this efficiently, without constructing\n lists or sorting.\n\n[6] Earlier versions of Python used lexicographic comparison of the\n sorted (key, value) lists, but this was very expensive for the\n common case of comparing for equality. An even earlier version of\n Python compared dictionaries by identity only, but this caused\n surprises because people expected to be able to test a dictionary\n for emptiness by comparing it to ``{}``.\n\n[7] Due to automatic garbage-collection, free lists, and the dynamic\n nature of descriptors, you may notice seemingly unusual behaviour\n in certain uses of the ``is`` operator, like those involving\n comparisons between instance methods, or constants. Check their\n documentation for more info.\n\n[8] The ``%`` operator is also used for string formatting; the same\n precedence applies.\n\n[9] The power operator ``**`` binds less tightly than an arithmetic or\n bitwise unary operator on its right, that is, ``2**-1`` is\n ``0.5``.\n', 'pass': u'\nThe ``pass`` statement\n**********************\n\n pass_stmt ::= "pass"\n\n``pass`` is a null operation --- when it is executed, nothing happens.\nIt is useful as a placeholder when a statement is required\nsyntactically, but no code needs to be executed, for example:\n\n def f(arg): pass # a function that does nothing (yet)\n\n class C: pass # a class with no methods (yet)\n', 'power': u'\nThe power operator\n******************\n\nThe power operator binds more tightly than unary operators on its\nleft; it binds less tightly than unary operators on its right. 
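A few interpreter-style checks of the grouping rules summarized above (illustrative only; Python 2.x):

   print -1 ** 2       # -1  : '**' binds tighter than unary minus on its left
   print 2 ** -1       # 0.5 : but less tightly than unary minus on its right
   print 2 ** 3 ** 2   # 512 : exponentiation groups from right to left
   print not 0 == 1    # True: 'not' binds less tightly than '=='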
The\nsyntax is:\n\n power ::= primary ["**" u_expr]\n\nThus, in an unparenthesized sequence of power and unary operators, the\noperators are evaluated from right to left (this does not constrain\nthe evaluation order for the operands): ``-1**2`` results in ``-1``.\n\nThe power operator has the same semantics as the built-in ``pow()``\nfunction, when called with two arguments: it yields its left argument\nraised to the power of its right argument. The numeric arguments are\nfirst converted to a common type. The result type is that of the\narguments after coercion.\n\nWith mixed operand types, the coercion rules for binary arithmetic\noperators apply. For int and long int operands, the result has the\nsame type as the operands (after coercion) unless the second argument\nis negative; in that case, all arguments are converted to float and a\nfloat result is delivered. For example, ``10**2`` returns ``100``, but\n``10**-2`` returns ``0.01``. (This last feature was added in Python\n2.2. In Python 2.1 and before, if both arguments were of integer types\nand the second argument was negative, an exception was raised).\n\nRaising ``0.0`` to a negative power results in a\n``ZeroDivisionError``. Raising a negative number to a fractional power\nresults in a ``ValueError``.\n', 'print': u'\nThe ``print`` statement\n***********************\n\n print_stmt ::= "print" ([expression ("," expression)* [","]]\n | ">>" expression [("," expression)+ [","]])\n\n``print`` evaluates each expression in turn and writes the resulting\nobject to standard output (see below). If an object is not a string,\nit is first converted to a string using the rules for string\nconversions. The (resulting or original) string is then written. A\nspace is written before each object is (converted and) written, unless\nthe output system believes it is positioned at the beginning of a\nline. This is the case (1) when no characters have yet been written\nto standard output, (2) when the last character written to standard\noutput is a whitespace character except ``\' \'``, or (3) when the last\nwrite operation on standard output was not a ``print`` statement. (In\nsome cases it may be functional to write an empty string to standard\noutput for this reason.)\n\nNote: Objects which act like file objects but which are not the built-in\n file objects often do not properly emulate this aspect of the file\n object\'s behavior, so it is best not to rely on this.\n\nA ``\'\\n\'`` character is written at the end, unless the ``print``\nstatement ends with a comma. This is the only action if the statement\ncontains just the keyword ``print``.\n\nStandard output is defined as the file object named ``stdout`` in the\nbuilt-in module ``sys``. If no such object exists, or if it does not\nhave a ``write()`` method, a ``RuntimeError`` exception is raised.\n\n``print`` also has an extended form, defined by the second portion of\nthe syntax described above. This form is sometimes referred to as\n"``print`` chevron." In this form, the first expression after the\n``>>`` must evaluate to a "file-like" object, specifically an object\nthat has a ``write()`` method as described above. With this extended\nform, the subsequent expressions are printed to this file object. 
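A short sketch of the forms described above, assuming Python 2.x (``sys`` is the standard module):

   import sys

   print "partial line",                 # trailing comma: no newline is written
   print "continues here"
   print >> sys.stderr, "error:", 42     # "print chevron" writes to sys.stderr
   print                                 # just the keyword: writes a newline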
If\nthe first expression evaluates to ``None``, then ``sys.stdout`` is\nused as the file for output.\n', @@ -63,21 +63,21 @@ 'shifting': u'\nShifting operations\n*******************\n\nThe shifting operations have lower priority than the arithmetic\noperations:\n\n shift_expr ::= a_expr | shift_expr ( "<<" | ">>" ) a_expr\n\nThese operators accept plain or long integers as arguments. The\narguments are converted to a common type. They shift the first\nargument to the left or right by the number of bits given by the\nsecond argument.\n\nA right shift by *n* bits is defined as division by ``pow(2, n)``. A\nleft shift by *n* bits is defined as multiplication with ``pow(2,\nn)``. Negative shift counts raise a ``ValueError`` exception.\n\nNote: In the current implementation, the right-hand operand is required to\n be at most ``sys.maxsize``. If the right-hand operand is larger\n than ``sys.maxsize`` an ``OverflowError`` exception is raised.\n', 'slicings': u'\nSlicings\n********\n\nA slicing selects a range of items in a sequence object (e.g., a\nstring, tuple or list). Slicings may be used as expressions or as\ntargets in assignment or ``del`` statements. The syntax for a\nslicing:\n\n slicing ::= simple_slicing | extended_slicing\n simple_slicing ::= primary "[" short_slice "]"\n extended_slicing ::= primary "[" slice_list "]"\n slice_list ::= slice_item ("," slice_item)* [","]\n slice_item ::= expression | proper_slice | ellipsis\n proper_slice ::= short_slice | long_slice\n short_slice ::= [lower_bound] ":" [upper_bound]\n long_slice ::= short_slice ":" [stride]\n lower_bound ::= expression\n upper_bound ::= expression\n stride ::= expression\n ellipsis ::= "..."\n\nThere is ambiguity in the formal syntax here: anything that looks like\nan expression list also looks like a slice list, so any subscription\ncan be interpreted as a slicing. Rather than further complicating the\nsyntax, this is disambiguated by defining that in this case the\ninterpretation as a subscription takes priority over the\ninterpretation as a slicing (this is the case if the slice list\ncontains no proper slice nor ellipses). Similarly, when the slice\nlist has exactly one short slice and no trailing comma, the\ninterpretation as a simple slicing takes priority over that as an\nextended slicing.\n\nThe semantics for a simple slicing are as follows. The primary must\nevaluate to a sequence object. The lower and upper bound expressions,\nif present, must evaluate to plain integers; defaults are zero and the\n``sys.maxint``, respectively. If either bound is negative, the\nsequence\'s length is added to it. The slicing now selects all items\nwith index *k* such that ``i <= k < j`` where *i* and *j* are the\nspecified lower and upper bounds. This may be an empty sequence. It\nis not an error if *i* or *j* lie outside the range of valid indexes\n(such items don\'t exist so they aren\'t selected).\n\nThe semantics for an extended slicing are as follows. The primary\nmust evaluate to a mapping object, and it is indexed with a key that\nis constructed from the slice list, as follows. If the slice list\ncontains at least one comma, the key is a tuple containing the\nconversion of the slice items; otherwise, the conversion of the lone\nslice item is the key. The conversion of a slice item that is an\nexpression is that expression. The conversion of an ellipsis slice\nitem is the built-in ``Ellipsis`` object. 
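A brief sketch of simple versus extended slicing as described above; the ``ShowKey`` class is invented only to display the key object that gets constructed (Python 2.x):

   s = "abcdefg"
   print s[2:5]          # 'cde': items with index k such that 2 <= k < 5
   print s[-3:]          # 'efg': a negative bound has len(s) added to it
   print s[10:20]        # ''   : out-of-range bounds are not an error

   class ShowKey(object):
       def __getitem__(self, key):
           return key     # just report the key built by the slicing

   k = ShowKey()
   print k[1:2]           # slice(1, 2, None)
   print k[1:10:3, ...]   # (slice(1, 10, 3), Ellipsis) -- a tuple key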
The conversion of a proper\nslice is a slice object (see section *The standard type hierarchy*)\nwhose ``start``, ``stop`` and ``step`` attributes are the values of\nthe expressions given as lower bound, upper bound and stride,\nrespectively, substituting ``None`` for missing expressions.\n', 'specialattrs': u"\nSpecial Attributes\n******************\n\nThe implementation adds a few special read-only attributes to several\nobject types, where they are relevant. Some of these are not reported\nby the ``dir()`` built-in function.\n\nobject.__dict__\n\n A dictionary or other mapping object used to store an object's\n (writable) attributes.\n\nobject.__methods__\n\n Deprecated since version 2.2: Use the built-in function ``dir()``\n to get a list of an object's attributes. This attribute is no\n longer available.\n\nobject.__members__\n\n Deprecated since version 2.2: Use the built-in function ``dir()``\n to get a list of an object's attributes. This attribute is no\n longer available.\n\ninstance.__class__\n\n The class to which a class instance belongs.\n\nclass.__bases__\n\n The tuple of base classes of a class object.\n\nclass.__name__\n\n The name of the class or type.\n\nThe following attributes are only supported by *new-style class*es.\n\nclass.__mro__\n\n This attribute is a tuple of classes that are considered when\n looking for base classes during method resolution.\n\nclass.mro()\n\n This method can be overridden by a metaclass to customize the\n method resolution order for its instances. It is called at class\n instantiation, and its result is stored in ``__mro__``.\n\nclass.__subclasses__()\n\n Each new-style class keeps a list of weak references to its\n immediate subclasses. This method returns a list of all those\n references still alive. Example:\n\n >>> int.__subclasses__()\n []\n\n-[ Footnotes ]-\n\n[1] Additional information on these special methods may be found in\n the Python Reference Manual (*Basic customization*).\n\n[2] As a consequence, the list ``[1, 2]`` is considered equal to\n ``[1.0, 2.0]``, and similarly for tuples.\n\n[3] They must have since the parser can't tell the type of the\n operands.\n\n[4] To format only a tuple you should therefore provide a singleton\n tuple whose only element is the tuple to be formatted.\n\n[5] The advantage of leaving the newline on is that returning an empty\n string is then an unambiguous EOF indication. It is also possible\n (in cases where it might matter, for example, if you want to make\n an exact copy of a file while scanning its lines) to tell whether\n the last line of a file ended in a newline or not (yes this\n happens!).\n", - 'specialnames': u'\nSpecial method names\n********************\n\nA class can implement certain operations that are invoked by special\nsyntax (such as arithmetic operations or subscripting and slicing) by\ndefining methods with special names. This is Python\'s approach to\n*operator overloading*, allowing classes to define their own behavior\nwith respect to language operators. For instance, if a class defines\na method named ``__getitem__()``, and ``x`` is an instance of this\nclass, then ``x[i]`` is roughly equivalent to ``x.__getitem__(i)`` for\nold-style classes and ``type(x).__getitem__(x, i)`` for new-style\nclasses. 
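A minimal sketch of this kind of operator overloading; the ``Squares`` class is purely illustrative (Python 2.x):

   class Squares(object):
       def __getitem__(self, i):
           # sq[i] is evaluated as type(sq).__getitem__(sq, i)
           if not 0 <= i < 5:
               raise IndexError(i)
           return i * i

   sq = Squares()
   print sq[3]        # 9
   print list(sq)     # [0, 1, 4, 9, 16]: iteration retries 0, 1, 2, ...
                      # until __getitem__ raises IndexError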
Except where mentioned, attempts to execute an operation\nraise an exception when no appropriate method is defined (typically\n``AttributeError`` or ``TypeError``).\n\nWhen implementing a class that emulates any built-in type, it is\nimportant that the emulation only be implemented to the degree that it\nmakes sense for the object being modelled. For example, some\nsequences may work well with retrieval of individual elements, but\nextracting a slice may not make sense. (One example of this is the\n``NodeList`` interface in the W3C\'s Document Object Model.)\n\n\nBasic customization\n===================\n\nobject.__new__(cls[, ...])\n\n Called to create a new instance of class *cls*. ``__new__()`` is a\n static method (special-cased so you need not declare it as such)\n that takes the class of which an instance was requested as its\n first argument. The remaining arguments are those passed to the\n object constructor expression (the call to the class). The return\n value of ``__new__()`` should be the new object instance (usually\n an instance of *cls*).\n\n Typical implementations create a new instance of the class by\n invoking the superclass\'s ``__new__()`` method using\n ``super(currentclass, cls).__new__(cls[, ...])`` with appropriate\n arguments and then modifying the newly-created instance as\n necessary before returning it.\n\n If ``__new__()`` returns an instance of *cls*, then the new\n instance\'s ``__init__()`` method will be invoked like\n ``__init__(self[, ...])``, where *self* is the new instance and the\n remaining arguments are the same as were passed to ``__new__()``.\n\n If ``__new__()`` does not return an instance of *cls*, then the new\n instance\'s ``__init__()`` method will not be invoked.\n\n ``__new__()`` is intended mainly to allow subclasses of immutable\n types (like int, str, or tuple) to customize instance creation. It\n is also commonly overridden in custom metaclasses in order to\n customize class creation.\n\nobject.__init__(self[, ...])\n\n Called when the instance is created. The arguments are those\n passed to the class constructor expression. If a base class has an\n ``__init__()`` method, the derived class\'s ``__init__()`` method,\n if any, must explicitly call it to ensure proper initialization of\n the base class part of the instance; for example:\n ``BaseClass.__init__(self, [args...])``. As a special constraint\n on constructors, no value may be returned; doing so will cause a\n ``TypeError`` to be raised at runtime.\n\nobject.__del__(self)\n\n Called when the instance is about to be destroyed. This is also\n called a destructor. If a base class has a ``__del__()`` method,\n the derived class\'s ``__del__()`` method, if any, must explicitly\n call it to ensure proper deletion of the base class part of the\n instance. Note that it is possible (though not recommended!) for\n the ``__del__()`` method to postpone destruction of the instance by\n creating a new reference to it. It may then be called at a later\n time when this new reference is deleted. It is not guaranteed that\n ``__del__()`` methods are called for objects that still exist when\n the interpreter exits.\n\n Note: ``del x`` doesn\'t directly call ``x.__del__()`` --- the former\n decrements the reference count for ``x`` by one, and the latter\n is only called when ``x``\'s reference count reaches zero. 
Some\n common situations that may prevent the reference count of an\n object from going to zero include: circular references between\n objects (e.g., a doubly-linked list or a tree data structure with\n parent and child pointers); a reference to the object on the\n stack frame of a function that caught an exception (the traceback\n stored in ``sys.exc_traceback`` keeps the stack frame alive); or\n a reference to the object on the stack frame that raised an\n unhandled exception in interactive mode (the traceback stored in\n ``sys.last_traceback`` keeps the stack frame alive). The first\n situation can only be remedied by explicitly breaking the cycles;\n the latter two situations can be resolved by storing ``None`` in\n ``sys.exc_traceback`` or ``sys.last_traceback``. Circular\n references which are garbage are detected when the option cycle\n detector is enabled (it\'s on by default), but can only be cleaned\n up if there are no Python-level ``__del__()`` methods involved.\n Refer to the documentation for the ``gc`` module for more\n information about how ``__del__()`` methods are handled by the\n cycle detector, particularly the description of the ``garbage``\n value.\n\n Warning: Due to the precarious circumstances under which ``__del__()``\n methods are invoked, exceptions that occur during their execution\n are ignored, and a warning is printed to ``sys.stderr`` instead.\n Also, when ``__del__()`` is invoked in response to a module being\n deleted (e.g., when execution of the program is done), other\n globals referenced by the ``__del__()`` method may already have\n been deleted or in the process of being torn down (e.g. the\n import machinery shutting down). For this reason, ``__del__()``\n methods should do the absolute minimum needed to maintain\n external invariants. Starting with version 1.5, Python\n guarantees that globals whose name begins with a single\n underscore are deleted from their module before other globals are\n deleted; if no other references to such globals exist, this may\n help in assuring that imported modules are still available at the\n time when the ``__del__()`` method is called.\n\nobject.__repr__(self)\n\n Called by the ``repr()`` built-in function and by string\n conversions (reverse quotes) to compute the "official" string\n representation of an object. If at all possible, this should look\n like a valid Python expression that could be used to recreate an\n object with the same value (given an appropriate environment). If\n this is not possible, a string of the form ``<...some useful\n description...>`` should be returned. The return value must be a\n string object. If a class defines ``__repr__()`` but not\n ``__str__()``, then ``__repr__()`` is also used when an "informal"\n string representation of instances of that class is required.\n\n This is typically used for debugging, so it is important that the\n representation is information-rich and unambiguous.\n\nobject.__str__(self)\n\n Called by the ``str()`` built-in function and by the ``print``\n statement to compute the "informal" string representation of an\n object. This differs from ``__repr__()`` in that it does not have\n to be a valid Python expression: a more convenient or concise\n representation may be used instead. 
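For instance, a hypothetical class might define both methods along these lines (Python 2.x; the ``Point`` name is illustrative only):

   class Point(object):
       def __init__(self, x, y):
           self.x, self.y = x, y

       def __repr__(self):
           # "official", unambiguous form; looks like a constructor call
           return "Point(%r, %r)" % (self.x, self.y)

       def __str__(self):
           # "informal" form used by str() and the print statement
           return "(%s, %s)" % (self.x, self.y)

   p = Point(1, 2)
   print repr(p)      # Point(1, 2)
   print p            # (1, 2)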
The return value must be a\n string object.\n\nobject.__lt__(self, other)\nobject.__le__(self, other)\nobject.__eq__(self, other)\nobject.__ne__(self, other)\nobject.__gt__(self, other)\nobject.__ge__(self, other)\n\n New in version 2.1.\n\n These are the so-called "rich comparison" methods, and are called\n for comparison operators in preference to ``__cmp__()`` below. The\n correspondence between operator symbols and method names is as\n follows: ``x<y`` calls ``x.__lt__(y)``, ``x<=y`` calls ``x.__le__(y)``,\n ``x==y`` calls ``x.__eq__(y)``, ``x!=y`` and ``x<>y`` call\n ``x.__ne__(y)``, ``x>y`` calls ``x.__gt__(y)``, and\n ``x>=y`` calls ``x.__ge__(y)``.\n\n A rich comparison method may return the singleton\n ``NotImplemented`` if it does not implement the operation for a\n given pair of arguments. By convention, ``False`` and ``True`` are\n returned for a successful comparison. However, these methods can\n return any value, so if the comparison operator is used in a\n Boolean context (e.g., in the condition of an ``if`` statement),\n Python will call ``bool()`` on the value to determine if the result\n is true or false.\n\n There are no implied relationships among the comparison operators.\n The truth of ``x==y`` does not imply that ``x!=y`` is false.\n Accordingly, when defining ``__eq__()``, one should also define\n ``__ne__()`` so that the operators will behave as expected. See\n the paragraph on ``__hash__()`` for some important notes on\n creating *hashable* objects which support custom comparison\n operations and are usable as dictionary keys.\n\n There are no swapped-argument versions of these methods (to be used\n when the left argument does not support the operation but the right\n argument does); rather, ``__lt__()`` and ``__gt__()`` are each\n other\'s reflection, ``__le__()`` and ``__ge__()`` are each other\'s\n reflection, and ``__eq__()`` and ``__ne__()`` are their own\n reflection.\n\n Arguments to rich comparison methods are never coerced.\n\n To automatically generate ordering operations from a single root\n operation, see ``functools.total_ordering()``.\n\nobject.__cmp__(self, other)\n\n Called by comparison operations if rich comparison (see above) is\n not defined. Should return a negative integer if ``self < other``,\n zero if ``self == other``, a positive integer if ``self > other``.\n If no ``__cmp__()``, ``__eq__()`` or ``__ne__()`` operation is\n defined, class instances are compared by object identity\n ("address"). See also the description of ``__hash__()`` for some\n important notes on creating *hashable* objects which support custom\n comparison operations and are usable as dictionary keys. (Note: the\n restriction that exceptions are not propagated by ``__cmp__()`` has\n been removed since Python 1.5.)\n\nobject.__rcmp__(self, other)\n\n Changed in version 2.1: No longer supported.\n\nobject.__hash__(self)\n\n Called by built-in function ``hash()`` and for operations on\n members of hashed collections including ``set``, ``frozenset``, and\n ``dict``. ``__hash__()`` should return an integer. The only\n required property is that objects which compare equal have the same\n hash value; it is advised to somehow mix together (e.g. using\n exclusive or) the hash values for the components of the object that\n also play a part in comparison of objects.\n\n If a class does not define a ``__cmp__()`` or ``__eq__()`` method\n it should not define a ``__hash__()`` operation either; if it\n defines ``__cmp__()`` or ``__eq__()`` but not ``__hash__()``, its\n instances will not be usable in hashed collections. 
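A hedged sketch of a class following these recommendations; the ``Account`` class is invented for illustration (Python 2.x):

   class Account(object):
       def __init__(self, number):
           self.number = number

       def __eq__(self, other):
           if not isinstance(other, Account):
               return NotImplemented
           return self.number == other.number

       def __ne__(self, other):
           # __eq__ does not imply __ne__, so define both explicitly
           result = self.__eq__(other)
           return result if result is NotImplemented else not result

       def __hash__(self):
           # objects that compare equal must hash equal
           return hash(self.number)

   a, b = Account(7), Account(7)
   print a == b, a != b        # True False
   print len(set([a, b]))      # 1: both fall into the same hash bucket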
If a class\n defines mutable objects and implements a ``__cmp__()`` or\n ``__eq__()`` method, it should not implement ``__hash__()``, since\n hashable collection implementations require that a object\'s hash\n value is immutable (if the object\'s hash value changes, it will be\n in the wrong hash bucket).\n\n User-defined classes have ``__cmp__()`` and ``__hash__()`` methods\n by default; with them, all objects compare unequal (except with\n themselves) and ``x.__hash__()`` returns ``id(x)``.\n\n Classes which inherit a ``__hash__()`` method from a parent class\n but change the meaning of ``__cmp__()`` or ``__eq__()`` such that\n the hash value returned is no longer appropriate (e.g. by switching\n to a value-based concept of equality instead of the default\n identity based equality) can explicitly flag themselves as being\n unhashable by setting ``__hash__ = None`` in the class definition.\n Doing so means that not only will instances of the class raise an\n appropriate ``TypeError`` when a program attempts to retrieve their\n hash value, but they will also be correctly identified as\n unhashable when checking ``isinstance(obj, collections.Hashable)``\n (unlike classes which define their own ``__hash__()`` to explicitly\n raise ``TypeError``).\n\n Changed in version 2.5: ``__hash__()`` may now also return a long\n integer object; the 32-bit integer is then derived from the hash of\n that object.\n\n Changed in version 2.6: ``__hash__`` may now be set to ``None`` to\n explicitly flag instances of a class as unhashable.\n\nobject.__nonzero__(self)\n\n Called to implement truth value testing and the built-in operation\n ``bool()``; should return ``False`` or ``True``, or their integer\n equivalents ``0`` or ``1``. When this method is not defined,\n ``__len__()`` is called, if it is defined, and the object is\n considered true if its result is nonzero. If a class defines\n neither ``__len__()`` nor ``__nonzero__()``, all its instances are\n considered true.\n\nobject.__unicode__(self)\n\n Called to implement ``unicode()`` built-in; should return a Unicode\n object. When this method is not defined, string conversion is\n attempted, and the result of string conversion is converted to\n Unicode using the system default encoding.\n\n\nCustomizing attribute access\n============================\n\nThe following methods can be defined to customize the meaning of\nattribute access (use of, assignment to, or deletion of ``x.name``)\nfor class instances.\n\nobject.__getattr__(self, name)\n\n Called when an attribute lookup has not found the attribute in the\n usual places (i.e. it is not an instance attribute nor is it found\n in the class tree for ``self``). ``name`` is the attribute name.\n This method should return the (computed) attribute value or raise\n an ``AttributeError`` exception.\n\n Note that if the attribute is found through the normal mechanism,\n ``__getattr__()`` is not called. (This is an intentional asymmetry\n between ``__getattr__()`` and ``__setattr__()``.) This is done both\n for efficiency reasons and because otherwise ``__getattr__()``\n would have no way to access other attributes of the instance. Note\n that at least for instance variables, you can fake total control by\n not inserting any values in the instance attribute dictionary (but\n instead inserting them in another object). 
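A small sketch of that technique; the ``Record`` class and its ``_data`` dictionary are invented for illustration (Python 2.x):

   class Record(object):
       def __init__(self):
           # bypass our own __setattr__ to avoid recursing into it
           object.__setattr__(self, '_data', {})

       def __getattr__(self, name):
           # only called when normal attribute lookup fails
           try:
               return self._data[name]
           except KeyError:
               raise AttributeError(name)

       def __setattr__(self, name, value):
           # called for every assignment; store values in the inner dict
           self._data[name] = value

   r = Record()
   r.color = "red"
   print r.color               # 'red', found via __getattr__
   print hasattr(r, "size")    # False: __getattr__ raised AttributeError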
See the\n ``__getattribute__()`` method below for a way to actually get total\n control in new-style classes.\n\nobject.__setattr__(self, name, value)\n\n Called when an attribute assignment is attempted. This is called\n instead of the normal mechanism (i.e. store the value in the\n instance dictionary). *name* is the attribute name, *value* is the\n value to be assigned to it.\n\n If ``__setattr__()`` wants to assign to an instance attribute, it\n should not simply execute ``self.name = value`` --- this would\n cause a recursive call to itself. Instead, it should insert the\n value in the dictionary of instance attributes, e.g.,\n ``self.__dict__[name] = value``. For new-style classes, rather\n than accessing the instance dictionary, it should call the base\n class method with the same name, for example,\n ``object.__setattr__(self, name, value)``.\n\nobject.__delattr__(self, name)\n\n Like ``__setattr__()`` but for attribute deletion instead of\n assignment. This should only be implemented if ``del obj.name`` is\n meaningful for the object.\n\n\nMore attribute access for new-style classes\n-------------------------------------------\n\nThe following methods only apply to new-style classes.\n\nobject.__getattribute__(self, name)\n\n Called unconditionally to implement attribute accesses for\n instances of the class. If the class also defines\n ``__getattr__()``, the latter will not be called unless\n ``__getattribute__()`` either calls it explicitly or raises an\n ``AttributeError``. This method should return the (computed)\n attribute value or raise an ``AttributeError`` exception. In order\n to avoid infinite recursion in this method, its implementation\n should always call the base class method with the same name to\n access any attributes it needs, for example,\n ``object.__getattribute__(self, name)``.\n\n Note: This method may still be bypassed when looking up special methods\n as the result of implicit invocation via language syntax or\n built-in functions. See *Special method lookup for new-style\n classes*.\n\n\nImplementing Descriptors\n------------------------\n\nThe following methods only apply when an instance of the class\ncontaining the method (a so-called *descriptor* class) appears in the\nclass dictionary of another new-style class, known as the *owner*\nclass. In the examples below, "the attribute" refers to the attribute\nwhose name is the key of the property in the owner class\'\n``__dict__``. Descriptors can only be implemented as new-style\nclasses themselves.\n\nobject.__get__(self, instance, owner)\n\n Called to get the attribute of the owner class (class attribute\n access) or of an instance of that class (instance attribute\n access). *owner* is always the owner class, while *instance* is the\n instance that the attribute was accessed through, or ``None`` when\n the attribute is accessed through the *owner*. This method should\n return the (computed) attribute value or raise an\n ``AttributeError`` exception.\n\nobject.__set__(self, instance, value)\n\n Called to set the attribute on an instance *instance* of the owner\n class to a new value, *value*.\n\nobject.__delete__(self, instance)\n\n Called to delete the attribute on an instance *instance* of the\n owner class.\n\n\nInvoking Descriptors\n--------------------\n\nIn general, a descriptor is an object attribute with "binding\nbehavior", one whose attribute access has been overridden by methods\nin the descriptor protocol: ``__get__()``, ``__set__()``, and\n``__delete__()``. 
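A compact sketch of a data descriptor built on these hooks; ``Positive`` and ``Order`` are invented names (Python 2.x, new-style classes):

   class Positive(object):
       # a data descriptor: it defines both __get__ and __set__
       def __init__(self, name):
           self.name = name

       def __get__(self, instance, owner):
           if instance is None:
               return self                       # accessed on the class
           return instance.__dict__[self.name]

       def __set__(self, instance, value):
           if value <= 0:
               raise ValueError("%s must be positive" % self.name)
           instance.__dict__[self.name] = value

   class Order(object):
       quantity = Positive('quantity')           # lives in the owner class

   o = Order()
   o.quantity = 3          # goes through Positive.__set__
   print o.quantity        # 3, via Positive.__get__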
If any of those methods are defined for an object,\nit is said to be a descriptor.\n\nThe default behavior for attribute access is to get, set, or delete\nthe attribute from an object\'s dictionary. For instance, ``a.x`` has a\nlookup chain starting with ``a.__dict__[\'x\']``, then\n``type(a).__dict__[\'x\']``, and continuing through the base classes of\n``type(a)`` excluding metaclasses.\n\nHowever, if the looked-up value is an object defining one of the\ndescriptor methods, then Python may override the default behavior and\ninvoke the descriptor method instead. Where this occurs in the\nprecedence chain depends on which descriptor methods were defined and\nhow they were called. Note that descriptors are only invoked for new\nstyle objects or classes (ones that subclass ``object()`` or\n``type()``).\n\nThe starting point for descriptor invocation is a binding, ``a.x``.\nHow the arguments are assembled depends on ``a``:\n\nDirect Call\n The simplest and least common call is when user code directly\n invokes a descriptor method: ``x.__get__(a)``.\n\nInstance Binding\n If binding to a new-style object instance, ``a.x`` is transformed\n into the call: ``type(a).__dict__[\'x\'].__get__(a, type(a))``.\n\nClass Binding\n If binding to a new-style class, ``A.x`` is transformed into the\n call: ``A.__dict__[\'x\'].__get__(None, A)``.\n\nSuper Binding\n If ``a`` is an instance of ``super``, then the binding ``super(B,\n obj).m()`` searches ``obj.__class__.__mro__`` for the base class\n ``A`` immediately preceding ``B`` and then invokes the descriptor\n with the call: ``A.__dict__[\'m\'].__get__(obj, A)``.\n\nFor instance bindings, the precedence of descriptor invocation depends\non the which descriptor methods are defined. A descriptor can define\nany combination of ``__get__()``, ``__set__()`` and ``__delete__()``.\nIf it does not define ``__get__()``, then accessing the attribute will\nreturn the descriptor object itself unless there is a value in the\nobject\'s instance dictionary. If the descriptor defines ``__set__()``\nand/or ``__delete__()``, it is a data descriptor; if it defines\nneither, it is a non-data descriptor. Normally, data descriptors\ndefine both ``__get__()`` and ``__set__()``, while non-data\ndescriptors have just the ``__get__()`` method. Data descriptors with\n``__set__()`` and ``__get__()`` defined always override a redefinition\nin an instance dictionary. In contrast, non-data descriptors can be\noverridden by instances.\n\nPython methods (including ``staticmethod()`` and ``classmethod()``)\nare implemented as non-data descriptors. Accordingly, instances can\nredefine and override methods. This allows individual instances to\nacquire behaviors that differ from other instances of the same class.\n\nThe ``property()`` function is implemented as a data descriptor.\nAccordingly, instances cannot override the behavior of a property.\n\n\n__slots__\n---------\n\nBy default, instances of both old and new-style classes have a\ndictionary for attribute storage. This wastes space for objects\nhaving very few instance variables. The space consumption can become\nacute when creating large numbers of instances.\n\nThe default can be overridden by defining *__slots__* in a new-style\nclass definition. The *__slots__* declaration takes a sequence of\ninstance variables and reserves just enough space in each instance to\nhold a value for each variable. 
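A tiny sketch of such a declaration; the ``Pixel`` class is illustrative only (Python 2.x, new-style class):

   class Pixel(object):
       __slots__ = ('x', 'y')       # no per-instance __dict__ is created

       def __init__(self, x, y):
           self.x = x
           self.y = y

   p = Pixel(1, 2)
   p.x = 5                          # fine: 'x' is a declared slot
   try:
       p.color = "red"              # not in __slots__, nowhere to store it
   except AttributeError:
       print "cannot add attributes outside __slots__"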
Space is saved because *__dict__* is\nnot created for each instance.\n\n__slots__\n\n This class variable can be assigned a string, iterable, or sequence\n of strings with variable names used by instances. If defined in a\n new-style class, *__slots__* reserves space for the declared\n variables and prevents the automatic creation of *__dict__* and\n *__weakref__* for each instance.\n\n New in version 2.2.\n\nNotes on using *__slots__*\n\n* When inheriting from a class without *__slots__*, the *__dict__*\n attribute of that class will always be accessible, so a *__slots__*\n definition in the subclass is meaningless.\n\n* Without a *__dict__* variable, instances cannot be assigned new\n variables not listed in the *__slots__* definition. Attempts to\n assign to an unlisted variable name raises ``AttributeError``. If\n dynamic assignment of new variables is desired, then add\n ``\'__dict__\'`` to the sequence of strings in the *__slots__*\n declaration.\n\n Changed in version 2.3: Previously, adding ``\'__dict__\'`` to the\n *__slots__* declaration would not enable the assignment of new\n attributes not specifically listed in the sequence of instance\n variable names.\n\n* Without a *__weakref__* variable for each instance, classes defining\n *__slots__* do not support weak references to its instances. If weak\n reference support is needed, then add ``\'__weakref__\'`` to the\n sequence of strings in the *__slots__* declaration.\n\n Changed in version 2.3: Previously, adding ``\'__weakref__\'`` to the\n *__slots__* declaration would not enable support for weak\n references.\n\n* *__slots__* are implemented at the class level by creating\n descriptors (*Implementing Descriptors*) for each variable name. As\n a result, class attributes cannot be used to set default values for\n instance variables defined by *__slots__*; otherwise, the class\n attribute would overwrite the descriptor assignment.\n\n* The action of a *__slots__* declaration is limited to the class\n where it is defined. As a result, subclasses will have a *__dict__*\n unless they also define *__slots__* (which must only contain names\n of any *additional* slots).\n\n* If a class defines a slot also defined in a base class, the instance\n variable defined by the base class slot is inaccessible (except by\n retrieving its descriptor directly from the base class). This\n renders the meaning of the program undefined. In the future, a\n check may be added to prevent this.\n\n* Nonempty *__slots__* does not work for classes derived from\n "variable-length" built-in types such as ``long``, ``str`` and\n ``tuple``.\n\n* Any non-string iterable may be assigned to *__slots__*. Mappings may\n also be used; however, in the future, special meaning may be\n assigned to the values corresponding to each key.\n\n* *__class__* assignment works only if both classes have the same\n *__slots__*.\n\n Changed in version 2.6: Previously, *__class__* assignment raised an\n error if either new or old class had *__slots__*.\n\n\nCustomizing class creation\n==========================\n\nBy default, new-style classes are constructed using ``type()``. A\nclass definition is read into a separate namespace and the value of\nclass name is bound to the result of ``type(name, bases, dict)``.\n\nWhen the class definition is read, if *__metaclass__* is defined then\nthe callable assigned to it will be called instead of ``type()``. 
This\nallows classes or functions to be written which monitor or alter the\nclass creation process:\n\n* Modifying the class dictionary prior to the class being created.\n\n* Returning an instance of another class -- essentially performing the\n role of a factory function.\n\nThese steps will have to be performed in the metaclass\'s ``__new__()``\nmethod -- ``type.__new__()`` can then be called from this method to\ncreate a class with different properties. This example adds a new\nelement to the class dictionary before creating the class:\n\n class metacls(type):\n def __new__(mcs, name, bases, dict):\n dict[\'foo\'] = \'metacls was here\'\n return type.__new__(mcs, name, bases, dict)\n\nYou can of course also override other class methods (or add new\nmethods); for example defining a custom ``__call__()`` method in the\nmetaclass allows custom behavior when the class is called, e.g. not\nalways creating a new instance.\n\n__metaclass__\n\n This variable can be any callable accepting arguments for ``name``,\n ``bases``, and ``dict``. Upon class creation, the callable is used\n instead of the built-in ``type()``.\n\n New in version 2.2.\n\nThe appropriate metaclass is determined by the following precedence\nrules:\n\n* If ``dict[\'__metaclass__\']`` exists, it is used.\n\n* Otherwise, if there is at least one base class, its metaclass is\n used (this looks for a *__class__* attribute first and if not found,\n uses its type).\n\n* Otherwise, if a global variable named __metaclass__ exists, it is\n used.\n\n* Otherwise, the old-style, classic metaclass (types.ClassType) is\n used.\n\nThe potential uses for metaclasses are boundless. Some ideas that have\nbeen explored including logging, interface checking, automatic\ndelegation, automatic property creation, proxies, frameworks, and\nautomatic resource locking/synchronization.\n\n\nCustomizing instance and subclass checks\n========================================\n\nNew in version 2.6.\n\nThe following methods are used to override the default behavior of the\n``isinstance()`` and ``issubclass()`` built-in functions.\n\nIn particular, the metaclass ``abc.ABCMeta`` implements these methods\nin order to allow the addition of Abstract Base Classes (ABCs) as\n"virtual base classes" to any class or type (including built-in\ntypes), including other ABCs.\n\nclass.__instancecheck__(self, instance)\n\n Return true if *instance* should be considered a (direct or\n indirect) instance of *class*. If defined, called to implement\n ``isinstance(instance, class)``.\n\nclass.__subclasscheck__(self, subclass)\n\n Return true if *subclass* should be considered a (direct or\n indirect) subclass of *class*. If defined, called to implement\n ``issubclass(subclass, class)``.\n\nNote that these methods are looked up on the type (metaclass) of a\nclass. 
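For illustration only (the ``HasRead`` and ``DuckFile`` names are invented, not part of the reference text), a metaclass might override the instance check roughly like this:

   class HasRead(type):
       def __instancecheck__(cls, instance):
           # treat anything that provides a read() method as an instance
           return hasattr(instance, 'read')

   class DuckFile(object):
       __metaclass__ = HasRead

With this, ``isinstance(obj, DuckFile)`` returns ``True`` for any object that provides ``read()``, even though ``obj`` was never derived from ``DuckFile``.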
They cannot be defined as class methods in the actual class.\nThis is consistent with the lookup of special methods that are called\non instances, only in this case the instance is itself a class.\n\nSee also:\n\n **PEP 3119** - Introducing Abstract Base Classes\n Includes the specification for customizing ``isinstance()`` and\n ``issubclass()`` behavior through ``__instancecheck__()`` and\n ``__subclasscheck__()``, with motivation for this functionality\n in the context of adding Abstract Base Classes (see the ``abc``\n module) to the language.\n\n\nEmulating callable objects\n==========================\n\nobject.__call__(self[, args...])\n\n Called when the instance is "called" as a function; if this method\n is defined, ``x(arg1, arg2, ...)`` is a shorthand for\n ``x.__call__(arg1, arg2, ...)``.\n\n\nEmulating container types\n=========================\n\nThe following methods can be defined to implement container objects.\nContainers usually are sequences (such as lists or tuples) or mappings\n(like dictionaries), but can represent other containers as well. The\nfirst set of methods is used either to emulate a sequence or to\nemulate a mapping; the difference is that for a sequence, the\nallowable keys should be the integers *k* for which ``0 <= k < N``\nwhere *N* is the length of the sequence, or slice objects, which\ndefine a range of items. (For backwards compatibility, the method\n``__getslice__()`` (see below) can also be defined to handle simple,\nbut not extended slices.) It is also recommended that mappings provide\nthe methods ``keys()``, ``values()``, ``items()``, ``has_key()``,\n``get()``, ``clear()``, ``setdefault()``, ``iterkeys()``,\n``itervalues()``, ``iteritems()``, ``pop()``, ``popitem()``,\n``copy()``, and ``update()`` behaving similar to those for Python\'s\nstandard dictionary objects. The ``UserDict`` module provides a\n``DictMixin`` class to help create those methods from a base set of\n``__getitem__()``, ``__setitem__()``, ``__delitem__()``, and\n``keys()``. Mutable sequences should provide methods ``append()``,\n``count()``, ``index()``, ``extend()``, ``insert()``, ``pop()``,\n``remove()``, ``reverse()`` and ``sort()``, like Python standard list\nobjects. Finally, sequence types should implement addition (meaning\nconcatenation) and multiplication (meaning repetition) by defining the\nmethods ``__add__()``, ``__radd__()``, ``__iadd__()``, ``__mul__()``,\n``__rmul__()`` and ``__imul__()`` described below; they should not\ndefine ``__coerce__()`` or other numerical operators. It is\nrecommended that both mappings and sequences implement the\n``__contains__()`` method to allow efficient use of the ``in``\noperator; for mappings, ``in`` should be equivalent of ``has_key()``;\nfor sequences, it should search through the values. It is further\nrecommended that both mappings and sequences implement the\n``__iter__()`` method to allow efficient iteration through the\ncontainer; for mappings, ``__iter__()`` should be the same as\n``iterkeys()``; for sequences, it should iterate through the values.\n\nobject.__len__(self)\n\n Called to implement the built-in function ``len()``. Should return\n the length of the object, an integer ``>=`` 0. Also, an object\n that doesn\'t define a ``__nonzero__()`` method and whose\n ``__len__()`` method returns zero is considered to be false in a\n Boolean context.\n\nobject.__getitem__(self, key)\n\n Called to implement evaluation of ``self[key]``. 
For sequence\n types, the accepted keys should be integers and slice objects.\n Note that the special interpretation of negative indexes (if the\n class wishes to emulate a sequence type) is up to the\n ``__getitem__()`` method. If *key* is of an inappropriate type,\n ``TypeError`` may be raised; if of a value outside the set of\n indexes for the sequence (after any special interpretation of\n negative values), ``IndexError`` should be raised. For mapping\n types, if *key* is missing (not in the container), ``KeyError``\n should be raised.\n\n Note: ``for`` loops expect that an ``IndexError`` will be raised for\n illegal indexes to allow proper detection of the end of the\n sequence.\n\nobject.__setitem__(self, key, value)\n\n Called to implement assignment to ``self[key]``. Same note as for\n ``__getitem__()``. This should only be implemented for mappings if\n the objects support changes to the values for keys, or if new keys\n can be added, or for sequences if elements can be replaced. The\n same exceptions should be raised for improper *key* values as for\n the ``__getitem__()`` method.\n\nobject.__delitem__(self, key)\n\n Called to implement deletion of ``self[key]``. Same note as for\n ``__getitem__()``. This should only be implemented for mappings if\n the objects support removal of keys, or for sequences if elements\n can be removed from the sequence. The same exceptions should be\n raised for improper *key* values as for the ``__getitem__()``\n method.\n\nobject.__iter__(self)\n\n This method is called when an iterator is required for a container.\n This method should return a new iterator object that can iterate\n over all the objects in the container. For mappings, it should\n iterate over the keys of the container, and should also be made\n available as the method ``iterkeys()``.\n\n Iterator objects also need to implement this method; they are\n required to return themselves. For more information on iterator\n objects, see *Iterator Types*.\n\nobject.__reversed__(self)\n\n Called (if present) by the ``reversed()`` built-in to implement\n reverse iteration. It should return a new iterator object that\n iterates over all the objects in the container in reverse order.\n\n If the ``__reversed__()`` method is not provided, the\n ``reversed()`` built-in will fall back to using the sequence\n protocol (``__len__()`` and ``__getitem__()``). Objects that\n support the sequence protocol should only provide\n ``__reversed__()`` if they can provide an implementation that is\n more efficient than the one provided by ``reversed()``.\n\n New in version 2.6.\n\nThe membership test operators (``in`` and ``not in``) are normally\nimplemented as an iteration through a sequence. However, container\nobjects can supply the following special method with a more efficient\nimplementation, which also does not require the object be a sequence.\n\nobject.__contains__(self, item)\n\n Called to implement membership test operators. Should return true\n if *item* is in *self*, false otherwise. 
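As a rough sketch of these container methods working together (the ``Deck`` class below is invented for illustration):

   class Deck(object):
       def __init__(self, cards):
           self._cards = list(cards)
       def __len__(self):
           return len(self._cards)
       def __getitem__(self, index):
           return self._cards[index]      # the list handles IndexError and slice objects
       def __iter__(self):
           return iter(self._cards)
       def __contains__(self, card):
           return card in self._cards     # efficient membership test for ``in``

``len(d)``, ``d[0]``, ``for card in d`` and ``'ace' in d`` then all work through these methods.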
For mapping objects, this\n should consider the keys of the mapping rather than the values or\n the key-item pairs.\n\n For objects that don\'t define ``__contains__()``, the membership\n test first tries iteration via ``__iter__()``, then the old\n sequence iteration protocol via ``__getitem__()``, see *this\n section in the language reference*.\n\n\nAdditional methods for emulation of sequence types\n==================================================\n\nThe following optional methods can be defined to further emulate\nsequence objects. Immutable sequences methods should at most only\ndefine ``__getslice__()``; mutable sequences might define all three\nmethods.\n\nobject.__getslice__(self, i, j)\n\n Deprecated since version 2.0: Support slice objects as parameters\n to the ``__getitem__()`` method. (However, built-in types in\n CPython currently still implement ``__getslice__()``. Therefore,\n you have to override it in derived classes when implementing\n slicing.)\n\n Called to implement evaluation of ``self[i:j]``. The returned\n object should be of the same type as *self*. Note that missing *i*\n or *j* in the slice expression are replaced by zero or\n ``sys.maxint``, respectively. If negative indexes are used in the\n slice, the length of the sequence is added to that index. If the\n instance does not implement the ``__len__()`` method, an\n ``AttributeError`` is raised. No guarantee is made that indexes\n adjusted this way are not still negative. Indexes which are\n greater than the length of the sequence are not modified. If no\n ``__getslice__()`` is found, a slice object is created instead, and\n passed to ``__getitem__()`` instead.\n\nobject.__setslice__(self, i, j, sequence)\n\n Called to implement assignment to ``self[i:j]``. Same notes for *i*\n and *j* as for ``__getslice__()``.\n\n This method is deprecated. If no ``__setslice__()`` is found, or\n for extended slicing of the form ``self[i:j:k]``, a slice object is\n created, and passed to ``__setitem__()``, instead of\n ``__setslice__()`` being called.\n\nobject.__delslice__(self, i, j)\n\n Called to implement deletion of ``self[i:j]``. Same notes for *i*\n and *j* as for ``__getslice__()``. This method is deprecated. If no\n ``__delslice__()`` is found, or for extended slicing of the form\n ``self[i:j:k]``, a slice object is created, and passed to\n ``__delitem__()``, instead of ``__delslice__()`` being called.\n\nNotice that these methods are only invoked when a single slice with a\nsingle colon is used, and the slice method is available. 
For slice\noperations involving extended slice notation, or in absence of the\nslice methods, ``__getitem__()``, ``__setitem__()`` or\n``__delitem__()`` is called with a slice object as argument.\n\nThe following example demonstrate how to make your program or module\ncompatible with earlier versions of Python (assuming that methods\n``__getitem__()``, ``__setitem__()`` and ``__delitem__()`` support\nslice objects as arguments):\n\n class MyClass:\n ...\n def __getitem__(self, index):\n ...\n def __setitem__(self, index, value):\n ...\n def __delitem__(self, index):\n ...\n\n if sys.version_info < (2, 0):\n # They won\'t be defined if version is at least 2.0 final\n\n def __getslice__(self, i, j):\n return self[max(0, i):max(0, j):]\n def __setslice__(self, i, j, seq):\n self[max(0, i):max(0, j):] = seq\n def __delslice__(self, i, j):\n del self[max(0, i):max(0, j):]\n ...\n\nNote the calls to ``max()``; these are necessary because of the\nhandling of negative indices before the ``__*slice__()`` methods are\ncalled. When negative indexes are used, the ``__*item__()`` methods\nreceive them as provided, but the ``__*slice__()`` methods get a\n"cooked" form of the index values. For each negative index value, the\nlength of the sequence is added to the index before calling the method\n(which may still result in a negative index); this is the customary\nhandling of negative indexes by the built-in sequence types, and the\n``__*item__()`` methods are expected to do this as well. However,\nsince they should already be doing that, negative indexes cannot be\npassed in; they must be constrained to the bounds of the sequence\nbefore being passed to the ``__*item__()`` methods. Calling ``max(0,\ni)`` conveniently returns the proper value.\n\n\nEmulating numeric types\n=======================\n\nThe following methods can be defined to emulate numeric objects.\nMethods corresponding to operations that are not supported by the\nparticular kind of number implemented (e.g., bitwise operations for\nnon-integral numbers) should be left undefined.\n\nobject.__add__(self, other)\nobject.__sub__(self, other)\nobject.__mul__(self, other)\nobject.__floordiv__(self, other)\nobject.__mod__(self, other)\nobject.__divmod__(self, other)\nobject.__pow__(self, other[, modulo])\nobject.__lshift__(self, other)\nobject.__rshift__(self, other)\nobject.__and__(self, other)\nobject.__xor__(self, other)\nobject.__or__(self, other)\n\n These methods are called to implement the binary arithmetic\n operations (``+``, ``-``, ``*``, ``//``, ``%``, ``divmod()``,\n ``pow()``, ``**``, ``<<``, ``>>``, ``&``, ``^``, ``|``). For\n instance, to evaluate the expression ``x + y``, where *x* is an\n instance of a class that has an ``__add__()`` method,\n ``x.__add__(y)`` is called. The ``__divmod__()`` method should be\n the equivalent to using ``__floordiv__()`` and ``__mod__()``; it\n should not be related to ``__truediv__()`` (described below). Note\n that ``__pow__()`` should be defined to accept an optional third\n argument if the ternary version of the built-in ``pow()`` function\n is to be supported.\n\n If one of those methods does not support the operation with the\n supplied arguments, it should return ``NotImplemented``.\n\nobject.__div__(self, other)\nobject.__truediv__(self, other)\n\n The division operator (``/``) is implemented by these methods. The\n ``__truediv__()`` method is used when ``__future__.division`` is in\n effect, otherwise ``__div__()`` is used. 
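A common sketch of this pattern (the ``Ratio`` class is invented for illustration) is to define true division once and alias classic division to it, so the class behaves the same with and without the future import:

   class Ratio(object):
       def __init__(self, value):
           self.value = float(value)
       def __truediv__(self, other):
           # ``other`` is assumed to be a plain number in this sketch
           return Ratio(self.value / other)
       __div__ = __truediv__       # same behaviour when __future__.division is off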
If only one of these two\n methods is defined, the object will not support division in the\n alternate context; ``TypeError`` will be raised instead.\n\nobject.__radd__(self, other)\nobject.__rsub__(self, other)\nobject.__rmul__(self, other)\nobject.__rdiv__(self, other)\nobject.__rtruediv__(self, other)\nobject.__rfloordiv__(self, other)\nobject.__rmod__(self, other)\nobject.__rdivmod__(self, other)\nobject.__rpow__(self, other)\nobject.__rlshift__(self, other)\nobject.__rrshift__(self, other)\nobject.__rand__(self, other)\nobject.__rxor__(self, other)\nobject.__ror__(self, other)\n\n These methods are called to implement the binary arithmetic\n operations (``+``, ``-``, ``*``, ``/``, ``%``, ``divmod()``,\n ``pow()``, ``**``, ``<<``, ``>>``, ``&``, ``^``, ``|``) with\n reflected (swapped) operands. These functions are only called if\n the left operand does not support the corresponding operation and\n the operands are of different types. [2] For instance, to evaluate\n the expression ``x - y``, where *y* is an instance of a class that\n has an ``__rsub__()`` method, ``y.__rsub__(x)`` is called if\n ``x.__sub__(y)`` returns *NotImplemented*.\n\n Note that ternary ``pow()`` will not try calling ``__rpow__()``\n (the coercion rules would become too complicated).\n\n Note: If the right operand\'s type is a subclass of the left operand\'s\n type and that subclass provides the reflected method for the\n operation, this method will be called before the left operand\'s\n non-reflected method. This behavior allows subclasses to\n override their ancestors\' operations.\n\nobject.__iadd__(self, other)\nobject.__isub__(self, other)\nobject.__imul__(self, other)\nobject.__idiv__(self, other)\nobject.__itruediv__(self, other)\nobject.__ifloordiv__(self, other)\nobject.__imod__(self, other)\nobject.__ipow__(self, other[, modulo])\nobject.__ilshift__(self, other)\nobject.__irshift__(self, other)\nobject.__iand__(self, other)\nobject.__ixor__(self, other)\nobject.__ior__(self, other)\n\n These methods are called to implement the augmented arithmetic\n assignments (``+=``, ``-=``, ``*=``, ``/=``, ``//=``, ``%=``,\n ``**=``, ``<<=``, ``>>=``, ``&=``, ``^=``, ``|=``). These methods\n should attempt to do the operation in-place (modifying *self*) and\n return the result (which could be, but does not have to be,\n *self*). If a specific method is not defined, the augmented\n assignment falls back to the normal methods. For instance, to\n execute the statement ``x += y``, where *x* is an instance of a\n class that has an ``__iadd__()`` method, ``x.__iadd__(y)`` is\n called. If *x* is an instance of a class that does not define a\n ``__iadd__()`` method, ``x.__add__(y)`` and ``y.__radd__(x)`` are\n considered, as with the evaluation of ``x + y``.\n\nobject.__neg__(self)\nobject.__pos__(self)\nobject.__abs__(self)\nobject.__invert__(self)\n\n Called to implement the unary arithmetic operations (``-``, ``+``,\n ``abs()`` and ``~``).\n\nobject.__complex__(self)\nobject.__int__(self)\nobject.__long__(self)\nobject.__float__(self)\n\n Called to implement the built-in functions ``complex()``,\n ``int()``, ``long()``, and ``float()``. Should return a value of\n the appropriate type.\n\nobject.__oct__(self)\nobject.__hex__(self)\n\n Called to implement the built-in functions ``oct()`` and ``hex()``.\n Should return a string value.\n\nobject.__index__(self)\n\n Called to implement ``operator.index()``. Also called whenever\n Python needs an integer object (such as in slicing). 
Must return\n an integer (int or long).\n\n New in version 2.5.\n\nobject.__coerce__(self, other)\n\n Called to implement "mixed-mode" numeric arithmetic. Should either\n return a 2-tuple containing *self* and *other* converted to a\n common numeric type, or ``None`` if conversion is impossible. When\n the common type would be the type of ``other``, it is sufficient to\n return ``None``, since the interpreter will also ask the other\n object to attempt a coercion (but sometimes, if the implementation\n of the other type cannot be changed, it is useful to do the\n conversion to the other type here). A return value of\n ``NotImplemented`` is equivalent to returning ``None``.\n\n\nCoercion rules\n==============\n\nThis section used to document the rules for coercion. As the language\nhas evolved, the coercion rules have become hard to document\nprecisely; documenting what one version of one particular\nimplementation does is undesirable. Instead, here are some informal\nguidelines regarding coercion. In Python 3.0, coercion will not be\nsupported.\n\n* If the left operand of a % operator is a string or Unicode object,\n no coercion takes place and the string formatting operation is\n invoked instead.\n\n* It is no longer recommended to define a coercion operation. Mixed-\n mode operations on types that don\'t define coercion pass the\n original arguments to the operation.\n\n* New-style classes (those derived from ``object``) never invoke the\n ``__coerce__()`` method in response to a binary operator; the only\n time ``__coerce__()`` is invoked is when the built-in function\n ``coerce()`` is called.\n\n* For most intents and purposes, an operator that returns\n ``NotImplemented`` is treated the same as one that is not\n implemented at all.\n\n* Below, ``__op__()`` and ``__rop__()`` are used to signify the\n generic method names corresponding to an operator; ``__iop__()`` is\n used for the corresponding in-place operator. For example, for the\n operator \'``+``\', ``__add__()`` and ``__radd__()`` are used for the\n left and right variant of the binary operator, and ``__iadd__()``\n for the in-place variant.\n\n* For objects *x* and *y*, first ``x.__op__(y)`` is tried. If this is\n not implemented or returns ``NotImplemented``, ``y.__rop__(x)`` is\n tried. If this is also not implemented or returns\n ``NotImplemented``, a ``TypeError`` exception is raised. But see\n the following exception:\n\n* Exception to the previous item: if the left operand is an instance\n of a built-in type or a new-style class, and the right operand is an\n instance of a proper subclass of that type or class and overrides\n the base\'s ``__rop__()`` method, the right operand\'s ``__rop__()``\n method is tried *before* the left operand\'s ``__op__()`` method.\n\n This is done so that a subclass can completely override binary\n operators. Otherwise, the left operand\'s ``__op__()`` method would\n always accept the right operand: when an instance of a given class\n is expected, an instance of a subclass of that class is always\n acceptable.\n\n* When either operand type defines a coercion, this coercion is called\n before that type\'s ``__op__()`` or ``__rop__()`` method is called,\n but no sooner. If the coercion returns an object of a different\n type for the operand whose coercion is invoked, part of the process\n is redone using the new object.\n\n* When an in-place operator (like \'``+=``\') is used, if the left\n operand implements ``__iop__()``, it is invoked without any\n coercion. 
When the operation falls back to ``__op__()`` and/or\n ``__rop__()``, the normal coercion rules apply.\n\n* In ``x + y``, if *x* is a sequence that implements sequence\n concatenation, sequence concatenation is invoked.\n\n* In ``x * y``, if one operator is a sequence that implements sequence\n repetition, and the other is an integer (``int`` or ``long``),\n sequence repetition is invoked.\n\n* Rich comparisons (implemented by methods ``__eq__()`` and so on)\n never use coercion. Three-way comparison (implemented by\n ``__cmp__()``) does use coercion under the same conditions as other\n binary operations use it.\n\n* In the current implementation, the built-in numeric types ``int``,\n ``long``, ``float``, and ``complex`` do not use coercion. All these\n types implement a ``__coerce__()`` method, for use by the built-in\n ``coerce()`` function.\n\n Changed in version 2.7.\n\n\nWith Statement Context Managers\n===============================\n\nNew in version 2.5.\n\nA *context manager* is an object that defines the runtime context to\nbe established when executing a ``with`` statement. The context\nmanager handles the entry into, and the exit from, the desired runtime\ncontext for the execution of the block of code. Context managers are\nnormally invoked using the ``with`` statement (described in section\n*The with statement*), but can also be used by directly invoking their\nmethods.\n\nTypical uses of context managers include saving and restoring various\nkinds of global state, locking and unlocking resources, closing opened\nfiles, etc.\n\nFor more information on context managers, see *Context Manager Types*.\n\nobject.__enter__(self)\n\n Enter the runtime context related to this object. The ``with``\n statement will bind this method\'s return value to the target(s)\n specified in the ``as`` clause of the statement, if any.\n\nobject.__exit__(self, exc_type, exc_value, traceback)\n\n Exit the runtime context related to this object. The parameters\n describe the exception that caused the context to be exited. If the\n context was exited without an exception, all three arguments will\n be ``None``.\n\n If an exception is supplied, and the method wishes to suppress the\n exception (i.e., prevent it from being propagated), it should\n return a true value. Otherwise, the exception will be processed\n normally upon exit from this method.\n\n Note that ``__exit__()`` methods should not reraise the passed-in\n exception; this is the caller\'s responsibility.\n\nSee also:\n\n **PEP 0343** - The "with" statement\n The specification, background, and examples for the Python\n ``with`` statement.\n\n\nSpecial method lookup for old-style classes\n===========================================\n\nFor old-style classes, special methods are always looked up in exactly\nthe same way as any other method or attribute. This is the case\nregardless of whether the method is being looked up explicitly as in\n``x.__getitem__(i)`` or implicitly as in ``x[i]``.\n\nThis behaviour means that special methods may exhibit different\nbehaviour for different instances of a single old-style class if the\nappropriate special attributes are set differently:\n\n >>> class C:\n ... 
pass\n ...\n >>> c1 = C()\n >>> c2 = C()\n >>> c1.__len__ = lambda: 5\n >>> c2.__len__ = lambda: 9\n >>> len(c1)\n 5\n >>> len(c2)\n 9\n\n\nSpecial method lookup for new-style classes\n===========================================\n\nFor new-style classes, implicit invocations of special methods are\nonly guaranteed to work correctly if defined on an object\'s type, not\nin the object\'s instance dictionary. That behaviour is the reason why\nthe following code raises an exception (unlike the equivalent example\nwith old-style classes):\n\n >>> class C(object):\n ... pass\n ...\n >>> c = C()\n >>> c.__len__ = lambda: 5\n >>> len(c)\n Traceback (most recent call last):\n File "", line 1, in \n TypeError: object of type \'C\' has no len()\n\nThe rationale behind this behaviour lies with a number of special\nmethods such as ``__hash__()`` and ``__repr__()`` that are implemented\nby all objects, including type objects. If the implicit lookup of\nthese methods used the conventional lookup process, they would fail\nwhen invoked on the type object itself:\n\n >>> 1 .__hash__() == hash(1)\n True\n >>> int.__hash__() == hash(int)\n Traceback (most recent call last):\n File "", line 1, in \n TypeError: descriptor \'__hash__\' of \'int\' object needs an argument\n\nIncorrectly attempting to invoke an unbound method of a class in this\nway is sometimes referred to as \'metaclass confusion\', and is avoided\nby bypassing the instance when looking up special methods:\n\n >>> type(1).__hash__(1) == hash(1)\n True\n >>> type(int).__hash__(int) == hash(int)\n True\n\nIn addition to bypassing any instance attributes in the interest of\ncorrectness, implicit special method lookup generally also bypasses\nthe ``__getattribute__()`` method even of the object\'s metaclass:\n\n >>> class Meta(type):\n ... def __getattribute__(*args):\n ... print "Metaclass getattribute invoked"\n ... return type.__getattribute__(*args)\n ...\n >>> class C(object):\n ... __metaclass__ = Meta\n ... def __len__(self):\n ... return 10\n ... def __getattribute__(*args):\n ... print "Class getattribute invoked"\n ... return object.__getattribute__(*args)\n ...\n >>> c = C()\n >>> c.__len__() # Explicit lookup via instance\n Class getattribute invoked\n 10\n >>> type(c).__len__(c) # Explicit lookup via type\n Metaclass getattribute invoked\n 10\n >>> len(c) # Implicit lookup\n 10\n\nBypassing the ``__getattribute__()`` machinery in this fashion\nprovides significant scope for speed optimisations within the\ninterpreter, at the cost of some flexibility in the handling of\nspecial methods (the special method *must* be set on the class object\nitself in order to be consistently invoked by the interpreter).\n\n-[ Footnotes ]-\n\n[1] It *is* possible in some cases to change an object\'s type, under\n certain controlled conditions. It generally isn\'t a good idea\n though, since it can lead to some very strange behaviour if it is\n handled incorrectly.\n\n[2] For operands of the same type, it is assumed that if the non-\n reflected method (such as ``__add__()``) fails the operation is\n not supported, which is why the reflected method is not called.\n', + 'specialnames': u'\nSpecial method names\n********************\n\nA class can implement certain operations that are invoked by special\nsyntax (such as arithmetic operations or subscripting and slicing) by\ndefining methods with special names. This is Python\'s approach to\n*operator overloading*, allowing classes to define their own behavior\nwith respect to language operators. 
For instance, if a class defines\na method named ``__getitem__()``, and ``x`` is an instance of this\nclass, then ``x[i]`` is roughly equivalent to ``x.__getitem__(i)`` for\nold-style classes and ``type(x).__getitem__(x, i)`` for new-style\nclasses. Except where mentioned, attempts to execute an operation\nraise an exception when no appropriate method is defined (typically\n``AttributeError`` or ``TypeError``).\n\nWhen implementing a class that emulates any built-in type, it is\nimportant that the emulation only be implemented to the degree that it\nmakes sense for the object being modelled. For example, some\nsequences may work well with retrieval of individual elements, but\nextracting a slice may not make sense. (One example of this is the\n``NodeList`` interface in the W3C\'s Document Object Model.)\n\n\nBasic customization\n===================\n\nobject.__new__(cls[, ...])\n\n Called to create a new instance of class *cls*. ``__new__()`` is a\n static method (special-cased so you need not declare it as such)\n that takes the class of which an instance was requested as its\n first argument. The remaining arguments are those passed to the\n object constructor expression (the call to the class). The return\n value of ``__new__()`` should be the new object instance (usually\n an instance of *cls*).\n\n Typical implementations create a new instance of the class by\n invoking the superclass\'s ``__new__()`` method using\n ``super(currentclass, cls).__new__(cls[, ...])`` with appropriate\n arguments and then modifying the newly-created instance as\n necessary before returning it.\n\n If ``__new__()`` returns an instance of *cls*, then the new\n instance\'s ``__init__()`` method will be invoked like\n ``__init__(self[, ...])``, where *self* is the new instance and the\n remaining arguments are the same as were passed to ``__new__()``.\n\n If ``__new__()`` does not return an instance of *cls*, then the new\n instance\'s ``__init__()`` method will not be invoked.\n\n ``__new__()`` is intended mainly to allow subclasses of immutable\n types (like int, str, or tuple) to customize instance creation. It\n is also commonly overridden in custom metaclasses in order to\n customize class creation.\n\nobject.__init__(self[, ...])\n\n Called when the instance is created. The arguments are those\n passed to the class constructor expression. If a base class has an\n ``__init__()`` method, the derived class\'s ``__init__()`` method,\n if any, must explicitly call it to ensure proper initialization of\n the base class part of the instance; for example:\n ``BaseClass.__init__(self, [args...])``. As a special constraint\n on constructors, no value may be returned; doing so will cause a\n ``TypeError`` to be raised at runtime.\n\nobject.__del__(self)\n\n Called when the instance is about to be destroyed. This is also\n called a destructor. If a base class has a ``__del__()`` method,\n the derived class\'s ``__del__()`` method, if any, must explicitly\n call it to ensure proper deletion of the base class part of the\n instance. Note that it is possible (though not recommended!) for\n the ``__del__()`` method to postpone destruction of the instance by\n creating a new reference to it. It may then be called at a later\n time when this new reference is deleted. 
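(A short aside on ``__new__()`` described above: a minimal sketch of customizing an immutable type; the ``UpperStr`` name is invented for illustration.)

   class UpperStr(str):
       def __new__(cls, value):
           # the value must be fixed in __new__(); str is immutable, so
           # changing it in __init__() would already be too late
           return super(UpperStr, cls).__new__(cls, value.upper())

``UpperStr('abc')`` then compares equal to ``'ABC'``.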
It is not guaranteed that\n ``__del__()`` methods are called for objects that still exist when\n the interpreter exits.\n\n Note: ``del x`` doesn\'t directly call ``x.__del__()`` --- the former\n decrements the reference count for ``x`` by one, and the latter\n is only called when ``x``\'s reference count reaches zero. Some\n common situations that may prevent the reference count of an\n object from going to zero include: circular references between\n objects (e.g., a doubly-linked list or a tree data structure with\n parent and child pointers); a reference to the object on the\n stack frame of a function that caught an exception (the traceback\n stored in ``sys.exc_traceback`` keeps the stack frame alive); or\n a reference to the object on the stack frame that raised an\n unhandled exception in interactive mode (the traceback stored in\n ``sys.last_traceback`` keeps the stack frame alive). The first\n situation can only be remedied by explicitly breaking the cycles;\n the latter two situations can be resolved by storing ``None`` in\n ``sys.exc_traceback`` or ``sys.last_traceback``. Circular\n references which are garbage are detected when the option cycle\n detector is enabled (it\'s on by default), but can only be cleaned\n up if there are no Python-level ``__del__()`` methods involved.\n Refer to the documentation for the ``gc`` module for more\n information about how ``__del__()`` methods are handled by the\n cycle detector, particularly the description of the ``garbage``\n value.\n\n Warning: Due to the precarious circumstances under which ``__del__()``\n methods are invoked, exceptions that occur during their execution\n are ignored, and a warning is printed to ``sys.stderr`` instead.\n Also, when ``__del__()`` is invoked in response to a module being\n deleted (e.g., when execution of the program is done), other\n globals referenced by the ``__del__()`` method may already have\n been deleted or in the process of being torn down (e.g. the\n import machinery shutting down). For this reason, ``__del__()``\n methods should do the absolute minimum needed to maintain\n external invariants. Starting with version 1.5, Python\n guarantees that globals whose name begins with a single\n underscore are deleted from their module before other globals are\n deleted; if no other references to such globals exist, this may\n help in assuring that imported modules are still available at the\n time when the ``__del__()`` method is called.\n\nobject.__repr__(self)\n\n Called by the ``repr()`` built-in function and by string\n conversions (reverse quotes) to compute the "official" string\n representation of an object. If at all possible, this should look\n like a valid Python expression that could be used to recreate an\n object with the same value (given an appropriate environment). If\n this is not possible, a string of the form ``<...some useful\n description...>`` should be returned. The return value must be a\n string object. If a class defines ``__repr__()`` but not\n ``__str__()``, then ``__repr__()`` is also used when an "informal"\n string representation of instances of that class is required.\n\n This is typically used for debugging, so it is important that the\n representation is information-rich and unambiguous.\n\nobject.__str__(self)\n\n Called by the ``str()`` built-in function and by the ``print``\n statement to compute the "informal" string representation of an\n object. 
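A small sketch of the two string-conversion methods side by side (the ``Temperature`` class is invented for illustration):

   class Temperature(object):
       def __init__(self, celsius):
           self.celsius = celsius
       def __repr__(self):
           # unambiguous; looks like the constructor call
           return 'Temperature(%r)' % self.celsius
       def __str__(self):
           # informal representation, used by print
           return '%g degrees C' % self.celsius

``repr(t)`` gives ``'Temperature(21.5)'`` while ``print t`` shows ``21.5 degrees C``.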
This differs from ``__repr__()`` in that it does not have\n to be a valid Python expression: a more convenient or concise\n representation may be used instead. The return value must be a\n string object.\n\nobject.__lt__(self, other)\nobject.__le__(self, other)\nobject.__eq__(self, other)\nobject.__ne__(self, other)\nobject.__gt__(self, other)\nobject.__ge__(self, other)\n\n New in version 2.1.\n\n These are the so-called "rich comparison" methods, and are called\n for comparison operators in preference to ``__cmp__()`` below. The\n correspondence between operator symbols and method names is as\n follows: ``x<y`` calls ``x.__lt__(y)``, ``x<=y`` calls ``x.__le__(y)``,\n ``x==y`` calls ``x.__eq__(y)``, ``x!=y`` and ``x<>y`` call\n ``x.__ne__(y)``, ``x>y`` calls ``x.__gt__(y)``, and\n ``x>=y`` calls ``x.__ge__(y)``.\n\n A rich comparison method may return the singleton\n ``NotImplemented`` if it does not implement the operation for a\n given pair of arguments. By convention, ``False`` and ``True`` are\n returned for a successful comparison. However, these methods can\n return any value, so if the comparison operator is used in a\n Boolean context (e.g., in the condition of an ``if`` statement),\n Python will call ``bool()`` on the value to determine if the result\n is true or false.\n\n There are no implied relationships among the comparison operators.\n The truth of ``x==y`` does not imply that ``x!=y`` is false.\n Accordingly, when defining ``__eq__()``, one should also define\n ``__ne__()`` so that the operators will behave as expected. See\n the paragraph on ``__hash__()`` for some important notes on\n creating *hashable* objects which support custom comparison\n operations and are usable as dictionary keys.\n\n There are no swapped-argument versions of these methods (to be used\n when the left argument does not support the operation but the right\n argument does); rather, ``__lt__()`` and ``__gt__()`` are each\n other\'s reflection, ``__le__()`` and ``__ge__()`` are each other\'s\n reflection, and ``__eq__()`` and ``__ne__()`` are their own\n reflection.\n\n Arguments to rich comparison methods are never coerced.\n\n To automatically generate ordering operations from a single root\n operation, see ``functools.total_ordering()``.\n\nobject.__cmp__(self, other)\n\n Called by comparison operations if rich comparison (see above) is\n not defined. Should return a negative integer if ``self < other``,\n zero if ``self == other``, a positive integer if ``self > other``.\n If no ``__cmp__()``, ``__eq__()`` or ``__ne__()`` operation is\n defined, class instances are compared by object identity\n ("address"). See also the description of ``__hash__()`` for some\n important notes on creating *hashable* objects which support custom\n comparison operations and are usable as dictionary keys. (Note: the\n restriction that exceptions are not propagated by ``__cmp__()`` has\n been removed since Python 1.5.)\n\nobject.__rcmp__(self, other)\n\n Changed in version 2.1: No longer supported.\n\nobject.__hash__(self)\n\n Called by built-in function ``hash()`` and for operations on\n members of hashed collections including ``set``, ``frozenset``, and\n ``dict``. ``__hash__()`` should return an integer. The only\n required property is that objects which compare equal have the same\n hash value; it is advised to somehow mix together (e.g. 
using\n exclusive or) the hash values for the components of the object that\n also play a part in comparison of objects.\n\n If a class does not define a ``__cmp__()`` or ``__eq__()`` method\n it should not define a ``__hash__()`` operation either; if it\n defines ``__cmp__()`` or ``__eq__()`` but not ``__hash__()``, its\n instances will not be usable in hashed collections. If a class\n defines mutable objects and implements a ``__cmp__()`` or\n ``__eq__()`` method, it should not implement ``__hash__()``, since\n hashable collection implementations require that a object\'s hash\n value is immutable (if the object\'s hash value changes, it will be\n in the wrong hash bucket).\n\n User-defined classes have ``__cmp__()`` and ``__hash__()`` methods\n by default; with them, all objects compare unequal (except with\n themselves) and ``x.__hash__()`` returns ``id(x)``.\n\n Classes which inherit a ``__hash__()`` method from a parent class\n but change the meaning of ``__cmp__()`` or ``__eq__()`` such that\n the hash value returned is no longer appropriate (e.g. by switching\n to a value-based concept of equality instead of the default\n identity based equality) can explicitly flag themselves as being\n unhashable by setting ``__hash__ = None`` in the class definition.\n Doing so means that not only will instances of the class raise an\n appropriate ``TypeError`` when a program attempts to retrieve their\n hash value, but they will also be correctly identified as\n unhashable when checking ``isinstance(obj, collections.Hashable)``\n (unlike classes which define their own ``__hash__()`` to explicitly\n raise ``TypeError``).\n\n Changed in version 2.5: ``__hash__()`` may now also return a long\n integer object; the 32-bit integer is then derived from the hash of\n that object.\n\n Changed in version 2.6: ``__hash__`` may now be set to ``None`` to\n explicitly flag instances of a class as unhashable.\n\nobject.__nonzero__(self)\n\n Called to implement truth value testing and the built-in operation\n ``bool()``; should return ``False`` or ``True``, or their integer\n equivalents ``0`` or ``1``. When this method is not defined,\n ``__len__()`` is called, if it is defined, and the object is\n considered true if its result is nonzero. If a class defines\n neither ``__len__()`` nor ``__nonzero__()``, all its instances are\n considered true.\n\nobject.__unicode__(self)\n\n Called to implement ``unicode()`` built-in; should return a Unicode\n object. When this method is not defined, string conversion is\n attempted, and the result of string conversion is converted to\n Unicode using the system default encoding.\n\n\nCustomizing attribute access\n============================\n\nThe following methods can be defined to customize the meaning of\nattribute access (use of, assignment to, or deletion of ``x.name``)\nfor class instances.\n\nobject.__getattr__(self, name)\n\n Called when an attribute lookup has not found the attribute in the\n usual places (i.e. it is not an instance attribute nor is it found\n in the class tree for ``self``). ``name`` is the attribute name.\n This method should return the (computed) attribute value or raise\n an ``AttributeError`` exception.\n\n Note that if the attribute is found through the normal mechanism,\n ``__getattr__()`` is not called. (This is an intentional asymmetry\n between ``__getattr__()`` and ``__setattr__()``.) This is done both\n for efficiency reasons and because otherwise ``__getattr__()``\n would have no way to access other attributes of the instance. 
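As an illustration of the fallback behaviour just described (the ``Wrapper`` class is invented, not part of the reference text), ``__getattr__()`` is commonly used for delegation:

   class Wrapper(object):
       def __init__(self, target):
           self._target = target
       def __getattr__(self, name):
           # only reached when normal lookup fails; self._target is found
           # in the instance dictionary first, so there is no recursion
           return getattr(self._target, name)

Attributes defined on ``Wrapper`` itself are served normally; anything else is looked up on the wrapped object.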
Note\n that at least for instance variables, you can fake total control by\n not inserting any values in the instance attribute dictionary (but\n instead inserting them in another object). See the\n ``__getattribute__()`` method below for a way to actually get total\n control in new-style classes.\n\nobject.__setattr__(self, name, value)\n\n Called when an attribute assignment is attempted. This is called\n instead of the normal mechanism (i.e. store the value in the\n instance dictionary). *name* is the attribute name, *value* is the\n value to be assigned to it.\n\n If ``__setattr__()`` wants to assign to an instance attribute, it\n should not simply execute ``self.name = value`` --- this would\n cause a recursive call to itself. Instead, it should insert the\n value in the dictionary of instance attributes, e.g.,\n ``self.__dict__[name] = value``. For new-style classes, rather\n than accessing the instance dictionary, it should call the base\n class method with the same name, for example,\n ``object.__setattr__(self, name, value)``.\n\nobject.__delattr__(self, name)\n\n Like ``__setattr__()`` but for attribute deletion instead of\n assignment. This should only be implemented if ``del obj.name`` is\n meaningful for the object.\n\n\nMore attribute access for new-style classes\n-------------------------------------------\n\nThe following methods only apply to new-style classes.\n\nobject.__getattribute__(self, name)\n\n Called unconditionally to implement attribute accesses for\n instances of the class. If the class also defines\n ``__getattr__()``, the latter will not be called unless\n ``__getattribute__()`` either calls it explicitly or raises an\n ``AttributeError``. This method should return the (computed)\n attribute value or raise an ``AttributeError`` exception. In order\n to avoid infinite recursion in this method, its implementation\n should always call the base class method with the same name to\n access any attributes it needs, for example,\n ``object.__getattribute__(self, name)``.\n\n Note: This method may still be bypassed when looking up special methods\n as the result of implicit invocation via language syntax or\n built-in functions. See *Special method lookup for new-style\n classes*.\n\n\nImplementing Descriptors\n------------------------\n\nThe following methods only apply when an instance of the class\ncontaining the method (a so-called *descriptor* class) appears in an\n*owner* class (the descriptor must be in either the owner\'s class\ndictionary or in the class dictionary for one of its parents). In the\nexamples below, "the attribute" refers to the attribute whose name is\nthe key of the property in the owner class\' ``__dict__``.\n\nobject.__get__(self, instance, owner)\n\n Called to get the attribute of the owner class (class attribute\n access) or of an instance of that class (instance attribute\n access). *owner* is always the owner class, while *instance* is the\n instance that the attribute was accessed through, or ``None`` when\n the attribute is accessed through the *owner*. 
This method should\n return the (computed) attribute value or raise an\n ``AttributeError`` exception.\n\nobject.__set__(self, instance, value)\n\n Called to set the attribute on an instance *instance* of the owner\n class to a new value, *value*.\n\nobject.__delete__(self, instance)\n\n Called to delete the attribute on an instance *instance* of the\n owner class.\n\n\nInvoking Descriptors\n--------------------\n\nIn general, a descriptor is an object attribute with "binding\nbehavior", one whose attribute access has been overridden by methods\nin the descriptor protocol: ``__get__()``, ``__set__()``, and\n``__delete__()``. If any of those methods are defined for an object,\nit is said to be a descriptor.\n\nThe default behavior for attribute access is to get, set, or delete\nthe attribute from an object\'s dictionary. For instance, ``a.x`` has a\nlookup chain starting with ``a.__dict__[\'x\']``, then\n``type(a).__dict__[\'x\']``, and continuing through the base classes of\n``type(a)`` excluding metaclasses.\n\nHowever, if the looked-up value is an object defining one of the\ndescriptor methods, then Python may override the default behavior and\ninvoke the descriptor method instead. Where this occurs in the\nprecedence chain depends on which descriptor methods were defined and\nhow they were called. Note that descriptors are only invoked for new\nstyle objects or classes (ones that subclass ``object()`` or\n``type()``).\n\nThe starting point for descriptor invocation is a binding, ``a.x``.\nHow the arguments are assembled depends on ``a``:\n\nDirect Call\n The simplest and least common call is when user code directly\n invokes a descriptor method: ``x.__get__(a)``.\n\nInstance Binding\n If binding to a new-style object instance, ``a.x`` is transformed\n into the call: ``type(a).__dict__[\'x\'].__get__(a, type(a))``.\n\nClass Binding\n If binding to a new-style class, ``A.x`` is transformed into the\n call: ``A.__dict__[\'x\'].__get__(None, A)``.\n\nSuper Binding\n If ``a`` is an instance of ``super``, then the binding ``super(B,\n obj).m()`` searches ``obj.__class__.__mro__`` for the base class\n ``A`` immediately preceding ``B`` and then invokes the descriptor\n with the call: ``A.__dict__[\'m\'].__get__(obj, obj.__class__)``.\n\nFor instance bindings, the precedence of descriptor invocation depends\non the which descriptor methods are defined. A descriptor can define\nany combination of ``__get__()``, ``__set__()`` and ``__delete__()``.\nIf it does not define ``__get__()``, then accessing the attribute will\nreturn the descriptor object itself unless there is a value in the\nobject\'s instance dictionary. If the descriptor defines ``__set__()``\nand/or ``__delete__()``, it is a data descriptor; if it defines\nneither, it is a non-data descriptor. Normally, data descriptors\ndefine both ``__get__()`` and ``__set__()``, while non-data\ndescriptors have just the ``__get__()`` method. Data descriptors with\n``__set__()`` and ``__get__()`` defined always override a redefinition\nin an instance dictionary. In contrast, non-data descriptors can be\noverridden by instances.\n\nPython methods (including ``staticmethod()`` and ``classmethod()``)\nare implemented as non-data descriptors. Accordingly, instances can\nredefine and override methods. 
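For example (an invented interactive sketch, in the spirit of the examples elsewhere in this text):

   >>> class Greeter(object):
   ...     def hello(self):
   ...         return 'hello'
   ...
   >>> g = Greeter()
   >>> g.hello = lambda: 'hi'    # the instance attribute shadows the non-data descriptor
   >>> g.hello()
   'hi'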
This allows individual instances to\nacquire behaviors that differ from other instances of the same class.\n\nThe ``property()`` function is implemented as a data descriptor.\nAccordingly, instances cannot override the behavior of a property.\n\n\n__slots__\n---------\n\nBy default, instances of both old and new-style classes have a\ndictionary for attribute storage. This wastes space for objects\nhaving very few instance variables. The space consumption can become\nacute when creating large numbers of instances.\n\nThe default can be overridden by defining *__slots__* in a new-style\nclass definition. The *__slots__* declaration takes a sequence of\ninstance variables and reserves just enough space in each instance to\nhold a value for each variable. Space is saved because *__dict__* is\nnot created for each instance.\n\n__slots__\n\n This class variable can be assigned a string, iterable, or sequence\n of strings with variable names used by instances. If defined in a\n new-style class, *__slots__* reserves space for the declared\n variables and prevents the automatic creation of *__dict__* and\n *__weakref__* for each instance.\n\n New in version 2.2.\n\nNotes on using *__slots__*\n\n* When inheriting from a class without *__slots__*, the *__dict__*\n attribute of that class will always be accessible, so a *__slots__*\n definition in the subclass is meaningless.\n\n* Without a *__dict__* variable, instances cannot be assigned new\n variables not listed in the *__slots__* definition. Attempts to\n assign to an unlisted variable name raises ``AttributeError``. If\n dynamic assignment of new variables is desired, then add\n ``\'__dict__\'`` to the sequence of strings in the *__slots__*\n declaration.\n\n Changed in version 2.3: Previously, adding ``\'__dict__\'`` to the\n *__slots__* declaration would not enable the assignment of new\n attributes not specifically listed in the sequence of instance\n variable names.\n\n* Without a *__weakref__* variable for each instance, classes defining\n *__slots__* do not support weak references to its instances. If weak\n reference support is needed, then add ``\'__weakref__\'`` to the\n sequence of strings in the *__slots__* declaration.\n\n Changed in version 2.3: Previously, adding ``\'__weakref__\'`` to the\n *__slots__* declaration would not enable support for weak\n references.\n\n* *__slots__* are implemented at the class level by creating\n descriptors (*Implementing Descriptors*) for each variable name. As\n a result, class attributes cannot be used to set default values for\n instance variables defined by *__slots__*; otherwise, the class\n attribute would overwrite the descriptor assignment.\n\n* The action of a *__slots__* declaration is limited to the class\n where it is defined. As a result, subclasses will have a *__dict__*\n unless they also define *__slots__* (which must only contain names\n of any *additional* slots).\n\n* If a class defines a slot also defined in a base class, the instance\n variable defined by the base class slot is inaccessible (except by\n retrieving its descriptor directly from the base class). This\n renders the meaning of the program undefined. In the future, a\n check may be added to prevent this.\n\n* Nonempty *__slots__* does not work for classes derived from\n "variable-length" built-in types such as ``long``, ``str`` and\n ``tuple``.\n\n* Any non-string iterable may be assigned to *__slots__*. 
Mappings may\n also be used; however, in the future, special meaning may be\n assigned to the values corresponding to each key.\n\n* *__class__* assignment works only if both classes have the same\n *__slots__*.\n\n Changed in version 2.6: Previously, *__class__* assignment raised an\n error if either new or old class had *__slots__*.\n\n\nCustomizing class creation\n==========================\n\nBy default, new-style classes are constructed using ``type()``. A\nclass definition is read into a separate namespace and the value of\nclass name is bound to the result of ``type(name, bases, dict)``.\n\nWhen the class definition is read, if *__metaclass__* is defined then\nthe callable assigned to it will be called instead of ``type()``. This\nallows classes or functions to be written which monitor or alter the\nclass creation process:\n\n* Modifying the class dictionary prior to the class being created.\n\n* Returning an instance of another class -- essentially performing the\n role of a factory function.\n\nThese steps will have to be performed in the metaclass\'s ``__new__()``\nmethod -- ``type.__new__()`` can then be called from this method to\ncreate a class with different properties. This example adds a new\nelement to the class dictionary before creating the class:\n\n class metacls(type):\n def __new__(mcs, name, bases, dict):\n dict[\'foo\'] = \'metacls was here\'\n return type.__new__(mcs, name, bases, dict)\n\nYou can of course also override other class methods (or add new\nmethods); for example defining a custom ``__call__()`` method in the\nmetaclass allows custom behavior when the class is called, e.g. not\nalways creating a new instance.\n\n__metaclass__\n\n This variable can be any callable accepting arguments for ``name``,\n ``bases``, and ``dict``. Upon class creation, the callable is used\n instead of the built-in ``type()``.\n\n New in version 2.2.\n\nThe appropriate metaclass is determined by the following precedence\nrules:\n\n* If ``dict[\'__metaclass__\']`` exists, it is used.\n\n* Otherwise, if there is at least one base class, its metaclass is\n used (this looks for a *__class__* attribute first and if not found,\n uses its type).\n\n* Otherwise, if a global variable named __metaclass__ exists, it is\n used.\n\n* Otherwise, the old-style, classic metaclass (types.ClassType) is\n used.\n\nThe potential uses for metaclasses are boundless. Some ideas that have\nbeen explored including logging, interface checking, automatic\ndelegation, automatic property creation, proxies, frameworks, and\nautomatic resource locking/synchronization.\n\n\nCustomizing instance and subclass checks\n========================================\n\nNew in version 2.6.\n\nThe following methods are used to override the default behavior of the\n``isinstance()`` and ``issubclass()`` built-in functions.\n\nIn particular, the metaclass ``abc.ABCMeta`` implements these methods\nin order to allow the addition of Abstract Base Classes (ABCs) as\n"virtual base classes" to any class or type (including built-in\ntypes), including other ABCs.\n\nclass.__instancecheck__(self, instance)\n\n Return true if *instance* should be considered a (direct or\n indirect) instance of *class*. If defined, called to implement\n ``isinstance(instance, class)``.\n\nclass.__subclasscheck__(self, subclass)\n\n Return true if *subclass* should be considered a (direct or\n indirect) subclass of *class*. 
If defined, called to implement\n ``issubclass(subclass, class)``.\n\nNote that these methods are looked up on the type (metaclass) of a\nclass. They cannot be defined as class methods in the actual class.\nThis is consistent with the lookup of special methods that are called\non instances, only in this case the instance is itself a class.\n\nSee also:\n\n **PEP 3119** - Introducing Abstract Base Classes\n Includes the specification for customizing ``isinstance()`` and\n ``issubclass()`` behavior through ``__instancecheck__()`` and\n ``__subclasscheck__()``, with motivation for this functionality\n in the context of adding Abstract Base Classes (see the ``abc``\n module) to the language.\n\n\nEmulating callable objects\n==========================\n\nobject.__call__(self[, args...])\n\n Called when the instance is "called" as a function; if this method\n is defined, ``x(arg1, arg2, ...)`` is a shorthand for\n ``x.__call__(arg1, arg2, ...)``.\n\n\nEmulating container types\n=========================\n\nThe following methods can be defined to implement container objects.\nContainers usually are sequences (such as lists or tuples) or mappings\n(like dictionaries), but can represent other containers as well. The\nfirst set of methods is used either to emulate a sequence or to\nemulate a mapping; the difference is that for a sequence, the\nallowable keys should be the integers *k* for which ``0 <= k < N``\nwhere *N* is the length of the sequence, or slice objects, which\ndefine a range of items. (For backwards compatibility, the method\n``__getslice__()`` (see below) can also be defined to handle simple,\nbut not extended slices.) It is also recommended that mappings provide\nthe methods ``keys()``, ``values()``, ``items()``, ``has_key()``,\n``get()``, ``clear()``, ``setdefault()``, ``iterkeys()``,\n``itervalues()``, ``iteritems()``, ``pop()``, ``popitem()``,\n``copy()``, and ``update()`` behaving similar to those for Python\'s\nstandard dictionary objects. The ``UserDict`` module provides a\n``DictMixin`` class to help create those methods from a base set of\n``__getitem__()``, ``__setitem__()``, ``__delitem__()``, and\n``keys()``. Mutable sequences should provide methods ``append()``,\n``count()``, ``index()``, ``extend()``, ``insert()``, ``pop()``,\n``remove()``, ``reverse()`` and ``sort()``, like Python standard list\nobjects. Finally, sequence types should implement addition (meaning\nconcatenation) and multiplication (meaning repetition) by defining the\nmethods ``__add__()``, ``__radd__()``, ``__iadd__()``, ``__mul__()``,\n``__rmul__()`` and ``__imul__()`` described below; they should not\ndefine ``__coerce__()`` or other numerical operators. It is\nrecommended that both mappings and sequences implement the\n``__contains__()`` method to allow efficient use of the ``in``\noperator; for mappings, ``in`` should be equivalent of ``has_key()``;\nfor sequences, it should search through the values. It is further\nrecommended that both mappings and sequences implement the\n``__iter__()`` method to allow efficient iteration through the\ncontainer; for mappings, ``__iter__()`` should be the same as\n``iterkeys()``; for sequences, it should iterate through the values.\n\nobject.__len__(self)\n\n Called to implement the built-in function ``len()``. Should return\n the length of the object, an integer ``>=`` 0. 
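To illustrate the ``__call__()`` protocol described a little earlier (a hypothetical example, not taken from the quoted text; ``Adder`` is an invented name):

    class Adder(object):
        """Instances behave like one-argument functions."""
        def __init__(self, amount):
            self.amount = amount
        def __call__(self, value):
            # add5(3) is shorthand for add5.__call__(3)
            return value + self.amount

    add5 = Adder(5)
    assert add5(3) == 8
    assert callable(add5)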
Also, an object\n that doesn\'t define a ``__nonzero__()`` method and whose\n ``__len__()`` method returns zero is considered to be false in a\n Boolean context.\n\nobject.__getitem__(self, key)\n\n Called to implement evaluation of ``self[key]``. For sequence\n types, the accepted keys should be integers and slice objects.\n Note that the special interpretation of negative indexes (if the\n class wishes to emulate a sequence type) is up to the\n ``__getitem__()`` method. If *key* is of an inappropriate type,\n ``TypeError`` may be raised; if of a value outside the set of\n indexes for the sequence (after any special interpretation of\n negative values), ``IndexError`` should be raised. For mapping\n types, if *key* is missing (not in the container), ``KeyError``\n should be raised.\n\n Note: ``for`` loops expect that an ``IndexError`` will be raised for\n illegal indexes to allow proper detection of the end of the\n sequence.\n\nobject.__setitem__(self, key, value)\n\n Called to implement assignment to ``self[key]``. Same note as for\n ``__getitem__()``. This should only be implemented for mappings if\n the objects support changes to the values for keys, or if new keys\n can be added, or for sequences if elements can be replaced. The\n same exceptions should be raised for improper *key* values as for\n the ``__getitem__()`` method.\n\nobject.__delitem__(self, key)\n\n Called to implement deletion of ``self[key]``. Same note as for\n ``__getitem__()``. This should only be implemented for mappings if\n the objects support removal of keys, or for sequences if elements\n can be removed from the sequence. The same exceptions should be\n raised for improper *key* values as for the ``__getitem__()``\n method.\n\nobject.__iter__(self)\n\n This method is called when an iterator is required for a container.\n This method should return a new iterator object that can iterate\n over all the objects in the container. For mappings, it should\n iterate over the keys of the container, and should also be made\n available as the method ``iterkeys()``.\n\n Iterator objects also need to implement this method; they are\n required to return themselves. For more information on iterator\n objects, see *Iterator Types*.\n\nobject.__reversed__(self)\n\n Called (if present) by the ``reversed()`` built-in to implement\n reverse iteration. It should return a new iterator object that\n iterates over all the objects in the container in reverse order.\n\n If the ``__reversed__()`` method is not provided, the\n ``reversed()`` built-in will fall back to using the sequence\n protocol (``__len__()`` and ``__getitem__()``). Objects that\n support the sequence protocol should only provide\n ``__reversed__()`` if they can provide an implementation that is\n more efficient than the one provided by ``reversed()``.\n\n New in version 2.6.\n\nThe membership test operators (``in`` and ``not in``) are normally\nimplemented as an iteration through a sequence. However, container\nobjects can supply the following special method with a more efficient\nimplementation, which also does not require the object be a sequence.\n\nobject.__contains__(self, item)\n\n Called to implement membership test operators. Should return true\n if *item* is in *self*, false otherwise. 
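As an illustration of the container protocol sketched above (the ``Squares`` class is hypothetical and not part of the quoted documentation), a minimal read-only sequence could look like:

    class Squares(object):
        """A read-only sequence of the first n squares."""
        def __init__(self, n):
            self.n = n
        def __len__(self):
            return self.n
        def __getitem__(self, index):
            if not 0 <= index < self.n:
                raise IndexError(index)   # lets plain 'for' loops terminate
            return index * index
        def __iter__(self):
            return iter([i * i for i in range(self.n)])
        def __contains__(self, value):
            return any(i * i == value for i in range(self.n))

    s = Squares(5)
    assert len(s) == 5
    assert s[3] == 9
    assert 16 in s
    assert list(s) == [0, 1, 4, 9, 16]

Raising ``IndexError`` for out-of-range indexes is what allows iteration via the old sequence protocol to stop, as noted in the ``__getitem__()`` entry above.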
For mapping objects, this\n should consider the keys of the mapping rather than the values or\n the key-item pairs.\n\n For objects that don\'t define ``__contains__()``, the membership\n test first tries iteration via ``__iter__()``, then the old\n sequence iteration protocol via ``__getitem__()``, see *this\n section in the language reference*.\n\n\nAdditional methods for emulation of sequence types\n==================================================\n\nThe following optional methods can be defined to further emulate\nsequence objects. Immutable sequences methods should at most only\ndefine ``__getslice__()``; mutable sequences might define all three\nmethods.\n\nobject.__getslice__(self, i, j)\n\n Deprecated since version 2.0: Support slice objects as parameters\n to the ``__getitem__()`` method. (However, built-in types in\n CPython currently still implement ``__getslice__()``. Therefore,\n you have to override it in derived classes when implementing\n slicing.)\n\n Called to implement evaluation of ``self[i:j]``. The returned\n object should be of the same type as *self*. Note that missing *i*\n or *j* in the slice expression are replaced by zero or\n ``sys.maxint``, respectively. If negative indexes are used in the\n slice, the length of the sequence is added to that index. If the\n instance does not implement the ``__len__()`` method, an\n ``AttributeError`` is raised. No guarantee is made that indexes\n adjusted this way are not still negative. Indexes which are\n greater than the length of the sequence are not modified. If no\n ``__getslice__()`` is found, a slice object is created instead, and\n passed to ``__getitem__()`` instead.\n\nobject.__setslice__(self, i, j, sequence)\n\n Called to implement assignment to ``self[i:j]``. Same notes for *i*\n and *j* as for ``__getslice__()``.\n\n This method is deprecated. If no ``__setslice__()`` is found, or\n for extended slicing of the form ``self[i:j:k]``, a slice object is\n created, and passed to ``__setitem__()``, instead of\n ``__setslice__()`` being called.\n\nobject.__delslice__(self, i, j)\n\n Called to implement deletion of ``self[i:j]``. Same notes for *i*\n and *j* as for ``__getslice__()``. This method is deprecated. If no\n ``__delslice__()`` is found, or for extended slicing of the form\n ``self[i:j:k]``, a slice object is created, and passed to\n ``__delitem__()``, instead of ``__delslice__()`` being called.\n\nNotice that these methods are only invoked when a single slice with a\nsingle colon is used, and the slice method is available. 
For slice\noperations involving extended slice notation, or in absence of the\nslice methods, ``__getitem__()``, ``__setitem__()`` or\n``__delitem__()`` is called with a slice object as argument.\n\nThe following example demonstrate how to make your program or module\ncompatible with earlier versions of Python (assuming that methods\n``__getitem__()``, ``__setitem__()`` and ``__delitem__()`` support\nslice objects as arguments):\n\n class MyClass:\n ...\n def __getitem__(self, index):\n ...\n def __setitem__(self, index, value):\n ...\n def __delitem__(self, index):\n ...\n\n if sys.version_info < (2, 0):\n # They won\'t be defined if version is at least 2.0 final\n\n def __getslice__(self, i, j):\n return self[max(0, i):max(0, j):]\n def __setslice__(self, i, j, seq):\n self[max(0, i):max(0, j):] = seq\n def __delslice__(self, i, j):\n del self[max(0, i):max(0, j):]\n ...\n\nNote the calls to ``max()``; these are necessary because of the\nhandling of negative indices before the ``__*slice__()`` methods are\ncalled. When negative indexes are used, the ``__*item__()`` methods\nreceive them as provided, but the ``__*slice__()`` methods get a\n"cooked" form of the index values. For each negative index value, the\nlength of the sequence is added to the index before calling the method\n(which may still result in a negative index); this is the customary\nhandling of negative indexes by the built-in sequence types, and the\n``__*item__()`` methods are expected to do this as well. However,\nsince they should already be doing that, negative indexes cannot be\npassed in; they must be constrained to the bounds of the sequence\nbefore being passed to the ``__*item__()`` methods. Calling ``max(0,\ni)`` conveniently returns the proper value.\n\n\nEmulating numeric types\n=======================\n\nThe following methods can be defined to emulate numeric objects.\nMethods corresponding to operations that are not supported by the\nparticular kind of number implemented (e.g., bitwise operations for\nnon-integral numbers) should be left undefined.\n\nobject.__add__(self, other)\nobject.__sub__(self, other)\nobject.__mul__(self, other)\nobject.__floordiv__(self, other)\nobject.__mod__(self, other)\nobject.__divmod__(self, other)\nobject.__pow__(self, other[, modulo])\nobject.__lshift__(self, other)\nobject.__rshift__(self, other)\nobject.__and__(self, other)\nobject.__xor__(self, other)\nobject.__or__(self, other)\n\n These methods are called to implement the binary arithmetic\n operations (``+``, ``-``, ``*``, ``//``, ``%``, ``divmod()``,\n ``pow()``, ``**``, ``<<``, ``>>``, ``&``, ``^``, ``|``). For\n instance, to evaluate the expression ``x + y``, where *x* is an\n instance of a class that has an ``__add__()`` method,\n ``x.__add__(y)`` is called. The ``__divmod__()`` method should be\n the equivalent to using ``__floordiv__()`` and ``__mod__()``; it\n should not be related to ``__truediv__()`` (described below). Note\n that ``__pow__()`` should be defined to accept an optional third\n argument if the ternary version of the built-in ``pow()`` function\n is to be supported.\n\n If one of those methods does not support the operation with the\n supplied arguments, it should return ``NotImplemented``.\n\nobject.__div__(self, other)\nobject.__truediv__(self, other)\n\n The division operator (``/``) is implemented by these methods. The\n ``__truediv__()`` method is used when ``__future__.division`` is in\n effect, otherwise ``__div__()`` is used. 
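To make the ``NotImplemented`` convention for binary operators concrete (an invented ``Money`` class, offered only as a sketch, not as part of the quoted documentation):

    class Money(object):
        """Toy numeric type that only supports Money + Money."""
        def __init__(self, cents):
            self.cents = cents
        def __add__(self, other):
            if not isinstance(other, Money):
                # Decline: Python then tries other.__radd__, and finally
                # raises TypeError if that also declines.
                return NotImplemented
            return Money(self.cents + other.cents)

    total = Money(150) + Money(99)
    assert total.cents == 249

Returning ``NotImplemented`` rather than raising is what gives the right-hand operand's reflected method a chance before a ``TypeError`` is finally raised.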
If only one of these two\n methods is defined, the object will not support division in the\n alternate context; ``TypeError`` will be raised instead.\n\nobject.__radd__(self, other)\nobject.__rsub__(self, other)\nobject.__rmul__(self, other)\nobject.__rdiv__(self, other)\nobject.__rtruediv__(self, other)\nobject.__rfloordiv__(self, other)\nobject.__rmod__(self, other)\nobject.__rdivmod__(self, other)\nobject.__rpow__(self, other)\nobject.__rlshift__(self, other)\nobject.__rrshift__(self, other)\nobject.__rand__(self, other)\nobject.__rxor__(self, other)\nobject.__ror__(self, other)\n\n These methods are called to implement the binary arithmetic\n operations (``+``, ``-``, ``*``, ``/``, ``%``, ``divmod()``,\n ``pow()``, ``**``, ``<<``, ``>>``, ``&``, ``^``, ``|``) with\n reflected (swapped) operands. These functions are only called if\n the left operand does not support the corresponding operation and\n the operands are of different types. [2] For instance, to evaluate\n the expression ``x - y``, where *y* is an instance of a class that\n has an ``__rsub__()`` method, ``y.__rsub__(x)`` is called if\n ``x.__sub__(y)`` returns *NotImplemented*.\n\n Note that ternary ``pow()`` will not try calling ``__rpow__()``\n (the coercion rules would become too complicated).\n\n Note: If the right operand\'s type is a subclass of the left operand\'s\n type and that subclass provides the reflected method for the\n operation, this method will be called before the left operand\'s\n non-reflected method. This behavior allows subclasses to\n override their ancestors\' operations.\n\nobject.__iadd__(self, other)\nobject.__isub__(self, other)\nobject.__imul__(self, other)\nobject.__idiv__(self, other)\nobject.__itruediv__(self, other)\nobject.__ifloordiv__(self, other)\nobject.__imod__(self, other)\nobject.__ipow__(self, other[, modulo])\nobject.__ilshift__(self, other)\nobject.__irshift__(self, other)\nobject.__iand__(self, other)\nobject.__ixor__(self, other)\nobject.__ior__(self, other)\n\n These methods are called to implement the augmented arithmetic\n assignments (``+=``, ``-=``, ``*=``, ``/=``, ``//=``, ``%=``,\n ``**=``, ``<<=``, ``>>=``, ``&=``, ``^=``, ``|=``). These methods\n should attempt to do the operation in-place (modifying *self*) and\n return the result (which could be, but does not have to be,\n *self*). If a specific method is not defined, the augmented\n assignment falls back to the normal methods. For instance, to\n execute the statement ``x += y``, where *x* is an instance of a\n class that has an ``__iadd__()`` method, ``x.__iadd__(y)`` is\n called. If *x* is an instance of a class that does not define a\n ``__iadd__()`` method, ``x.__add__(y)`` and ``y.__radd__(x)`` are\n considered, as with the evaluation of ``x + y``.\n\nobject.__neg__(self)\nobject.__pos__(self)\nobject.__abs__(self)\nobject.__invert__(self)\n\n Called to implement the unary arithmetic operations (``-``, ``+``,\n ``abs()`` and ``~``).\n\nobject.__complex__(self)\nobject.__int__(self)\nobject.__long__(self)\nobject.__float__(self)\n\n Called to implement the built-in functions ``complex()``,\n ``int()``, ``long()``, and ``float()``. Should return a value of\n the appropriate type.\n\nobject.__oct__(self)\nobject.__hex__(self)\n\n Called to implement the built-in functions ``oct()`` and ``hex()``.\n Should return a string value.\n\nobject.__index__(self)\n\n Called to implement ``operator.index()``. Also called whenever\n Python needs an integer object (such as in slicing). 
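A small, hypothetical sketch of the reflected and in-place variants described above (``Tally`` is an invented name; this is not part of the quoted documentation):

    class Tally(object):
        """Accumulator illustrating reflected and in-place addition."""
        def __init__(self, total=0):
            self.total = total
        def __add__(self, other):
            return Tally(self.total + other)
        def __radd__(self, other):
            # Called for e.g. 10 + t, after int.__add__ returns NotImplemented.
            return Tally(other + self.total)
        def __iadd__(self, other):
            # Called for t += n; modifies in place and returns self.
            self.total += other
            return self

    t = Tally(1)
    t = 10 + t          # uses __radd__
    t += 5              # uses __iadd__
    assert t.total == 16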
Must return\n an integer (int or long).\n\n New in version 2.5.\n\nobject.__coerce__(self, other)\n\n Called to implement "mixed-mode" numeric arithmetic. Should either\n return a 2-tuple containing *self* and *other* converted to a\n common numeric type, or ``None`` if conversion is impossible. When\n the common type would be the type of ``other``, it is sufficient to\n return ``None``, since the interpreter will also ask the other\n object to attempt a coercion (but sometimes, if the implementation\n of the other type cannot be changed, it is useful to do the\n conversion to the other type here). A return value of\n ``NotImplemented`` is equivalent to returning ``None``.\n\n\nCoercion rules\n==============\n\nThis section used to document the rules for coercion. As the language\nhas evolved, the coercion rules have become hard to document\nprecisely; documenting what one version of one particular\nimplementation does is undesirable. Instead, here are some informal\nguidelines regarding coercion. In Python 3.0, coercion will not be\nsupported.\n\n* If the left operand of a % operator is a string or Unicode object,\n no coercion takes place and the string formatting operation is\n invoked instead.\n\n* It is no longer recommended to define a coercion operation. Mixed-\n mode operations on types that don\'t define coercion pass the\n original arguments to the operation.\n\n* New-style classes (those derived from ``object``) never invoke the\n ``__coerce__()`` method in response to a binary operator; the only\n time ``__coerce__()`` is invoked is when the built-in function\n ``coerce()`` is called.\n\n* For most intents and purposes, an operator that returns\n ``NotImplemented`` is treated the same as one that is not\n implemented at all.\n\n* Below, ``__op__()`` and ``__rop__()`` are used to signify the\n generic method names corresponding to an operator; ``__iop__()`` is\n used for the corresponding in-place operator. For example, for the\n operator \'``+``\', ``__add__()`` and ``__radd__()`` are used for the\n left and right variant of the binary operator, and ``__iadd__()``\n for the in-place variant.\n\n* For objects *x* and *y*, first ``x.__op__(y)`` is tried. If this is\n not implemented or returns ``NotImplemented``, ``y.__rop__(x)`` is\n tried. If this is also not implemented or returns\n ``NotImplemented``, a ``TypeError`` exception is raised. But see\n the following exception:\n\n* Exception to the previous item: if the left operand is an instance\n of a built-in type or a new-style class, and the right operand is an\n instance of a proper subclass of that type or class and overrides\n the base\'s ``__rop__()`` method, the right operand\'s ``__rop__()``\n method is tried *before* the left operand\'s ``__op__()`` method.\n\n This is done so that a subclass can completely override binary\n operators. Otherwise, the left operand\'s ``__op__()`` method would\n always accept the right operand: when an instance of a given class\n is expected, an instance of a subclass of that class is always\n acceptable.\n\n* When either operand type defines a coercion, this coercion is called\n before that type\'s ``__op__()`` or ``__rop__()`` method is called,\n but no sooner. If the coercion returns an object of a different\n type for the operand whose coercion is invoked, part of the process\n is redone using the new object.\n\n* When an in-place operator (like \'``+=``\') is used, if the left\n operand implements ``__iop__()``, it is invoked without any\n coercion. 
When the operation falls back to ``__op__()`` and/or\n ``__rop__()``, the normal coercion rules apply.\n\n* In ``x + y``, if *x* is a sequence that implements sequence\n concatenation, sequence concatenation is invoked.\n\n* In ``x * y``, if one operator is a sequence that implements sequence\n repetition, and the other is an integer (``int`` or ``long``),\n sequence repetition is invoked.\n\n* Rich comparisons (implemented by methods ``__eq__()`` and so on)\n never use coercion. Three-way comparison (implemented by\n ``__cmp__()``) does use coercion under the same conditions as other\n binary operations use it.\n\n* In the current implementation, the built-in numeric types ``int``,\n ``long``, ``float``, and ``complex`` do not use coercion. All these\n types implement a ``__coerce__()`` method, for use by the built-in\n ``coerce()`` function.\n\n Changed in version 2.7.\n\n\nWith Statement Context Managers\n===============================\n\nNew in version 2.5.\n\nA *context manager* is an object that defines the runtime context to\nbe established when executing a ``with`` statement. The context\nmanager handles the entry into, and the exit from, the desired runtime\ncontext for the execution of the block of code. Context managers are\nnormally invoked using the ``with`` statement (described in section\n*The with statement*), but can also be used by directly invoking their\nmethods.\n\nTypical uses of context managers include saving and restoring various\nkinds of global state, locking and unlocking resources, closing opened\nfiles, etc.\n\nFor more information on context managers, see *Context Manager Types*.\n\nobject.__enter__(self)\n\n Enter the runtime context related to this object. The ``with``\n statement will bind this method\'s return value to the target(s)\n specified in the ``as`` clause of the statement, if any.\n\nobject.__exit__(self, exc_type, exc_value, traceback)\n\n Exit the runtime context related to this object. The parameters\n describe the exception that caused the context to be exited. If the\n context was exited without an exception, all three arguments will\n be ``None``.\n\n If an exception is supplied, and the method wishes to suppress the\n exception (i.e., prevent it from being propagated), it should\n return a true value. Otherwise, the exception will be processed\n normally upon exit from this method.\n\n Note that ``__exit__()`` methods should not reraise the passed-in\n exception; this is the caller\'s responsibility.\n\nSee also:\n\n **PEP 0343** - The "with" statement\n The specification, background, and examples for the Python\n ``with`` statement.\n\n\nSpecial method lookup for old-style classes\n===========================================\n\nFor old-style classes, special methods are always looked up in exactly\nthe same way as any other method or attribute. This is the case\nregardless of whether the method is being looked up explicitly as in\n``x.__getitem__(i)`` or implicitly as in ``x[i]``.\n\nThis behaviour means that special methods may exhibit different\nbehaviour for different instances of a single old-style class if the\nappropriate special attributes are set differently:\n\n >>> class C:\n ... 
pass\n ...\n >>> c1 = C()\n >>> c2 = C()\n >>> c1.__len__ = lambda: 5\n >>> c2.__len__ = lambda: 9\n >>> len(c1)\n 5\n >>> len(c2)\n 9\n\n\nSpecial method lookup for new-style classes\n===========================================\n\nFor new-style classes, implicit invocations of special methods are\nonly guaranteed to work correctly if defined on an object\'s type, not\nin the object\'s instance dictionary. That behaviour is the reason why\nthe following code raises an exception (unlike the equivalent example\nwith old-style classes):\n\n >>> class C(object):\n ... pass\n ...\n >>> c = C()\n >>> c.__len__ = lambda: 5\n >>> len(c)\n Traceback (most recent call last):\n File "", line 1, in \n TypeError: object of type \'C\' has no len()\n\nThe rationale behind this behaviour lies with a number of special\nmethods such as ``__hash__()`` and ``__repr__()`` that are implemented\nby all objects, including type objects. If the implicit lookup of\nthese methods used the conventional lookup process, they would fail\nwhen invoked on the type object itself:\n\n >>> 1 .__hash__() == hash(1)\n True\n >>> int.__hash__() == hash(int)\n Traceback (most recent call last):\n File "", line 1, in \n TypeError: descriptor \'__hash__\' of \'int\' object needs an argument\n\nIncorrectly attempting to invoke an unbound method of a class in this\nway is sometimes referred to as \'metaclass confusion\', and is avoided\nby bypassing the instance when looking up special methods:\n\n >>> type(1).__hash__(1) == hash(1)\n True\n >>> type(int).__hash__(int) == hash(int)\n True\n\nIn addition to bypassing any instance attributes in the interest of\ncorrectness, implicit special method lookup generally also bypasses\nthe ``__getattribute__()`` method even of the object\'s metaclass:\n\n >>> class Meta(type):\n ... def __getattribute__(*args):\n ... print "Metaclass getattribute invoked"\n ... return type.__getattribute__(*args)\n ...\n >>> class C(object):\n ... __metaclass__ = Meta\n ... def __len__(self):\n ... return 10\n ... def __getattribute__(*args):\n ... print "Class getattribute invoked"\n ... return object.__getattribute__(*args)\n ...\n >>> c = C()\n >>> c.__len__() # Explicit lookup via instance\n Class getattribute invoked\n 10\n >>> type(c).__len__(c) # Explicit lookup via type\n Metaclass getattribute invoked\n 10\n >>> len(c) # Implicit lookup\n 10\n\nBypassing the ``__getattribute__()`` machinery in this fashion\nprovides significant scope for speed optimisations within the\ninterpreter, at the cost of some flexibility in the handling of\nspecial methods (the special method *must* be set on the class object\nitself in order to be consistently invoked by the interpreter).\n\n-[ Footnotes ]-\n\n[1] It *is* possible in some cases to change an object\'s type, under\n certain controlled conditions. 
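Returning briefly to the ``__enter__()``/``__exit__()`` protocol described in the context-manager section above, a minimal hypothetical example (the ``Timer`` class is invented for illustration and makes no claims beyond standard library behaviour):

    import time

    class Timer(object):
        """Context manager measuring the duration of a 'with' block."""
        def __enter__(self):
            self.start = time.time()
            return self                  # bound to the 'as' target, if any
        def __exit__(self, exc_type, exc_value, traceback):
            self.elapsed = time.time() - self.start
            return False                 # do not suppress exceptions

    with Timer() as t:
        sum(range(1000))
    # t.elapsed now holds the duration of the block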
It generally isn\'t a good idea\n though, since it can lead to some very strange behaviour if it is\n handled incorrectly.\n\n[2] For operands of the same type, it is assumed that if the non-\n reflected method (such as ``__add__()``) fails the operation is\n not supported, which is why the reflected method is not called.\n', 'string-conversions': u'\nString conversions\n******************\n\nA string conversion is an expression list enclosed in reverse (a.k.a.\nbackward) quotes:\n\n string_conversion ::= "\'" expression_list "\'"\n\nA string conversion evaluates the contained expression list and\nconverts the resulting object into a string according to rules\nspecific to its type.\n\nIf the object is a string, a number, ``None``, or a tuple, list or\ndictionary containing only objects whose type is one of these, the\nresulting string is a valid Python expression which can be passed to\nthe built-in function ``eval()`` to yield an expression with the same\nvalue (or an approximation, if floating point numbers are involved).\n\n(In particular, converting a string adds quotes around it and converts\n"funny" characters to escape sequences that are safe to print.)\n\nRecursive objects (for example, lists or dictionaries that contain a\nreference to themselves, directly or indirectly) use ``...`` to\nindicate a recursive reference, and the result cannot be passed to\n``eval()`` to get an equal value (``SyntaxError`` will be raised\ninstead).\n\nThe built-in function ``repr()`` performs exactly the same conversion\nin its argument as enclosing it in parentheses and reverse quotes\ndoes. The built-in function ``str()`` performs a similar but more\nuser-friendly conversion.\n', - 'string-methods': u'\nString Methods\n**************\n\nBelow are listed the string methods which both 8-bit strings and\nUnicode objects support.\n\nIn addition, Python\'s strings support the sequence type methods\ndescribed in the *Sequence Types --- str, unicode, list, tuple,\nbuffer, xrange* section. To output formatted strings use template\nstrings or the ``%`` operator described in the *String Formatting\nOperations* section. Also, see the ``re`` module for string functions\nbased on regular expressions.\n\nstr.capitalize()\n\n Return a copy of the string with only its first character\n capitalized.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.center(width[, fillchar])\n\n Return centered in a string of length *width*. Padding is done\n using the specified *fillchar* (default is a space).\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.count(sub[, start[, end]])\n\n Return the number of non-overlapping occurrences of substring *sub*\n in the range [*start*, *end*]. Optional arguments *start* and\n *end* are interpreted as in slice notation.\n\nstr.decode([encoding[, errors]])\n\n Decodes the string using the codec registered for *encoding*.\n *encoding* defaults to the default string encoding. *errors* may\n be given to set a different error handling scheme. The default is\n ``\'strict\'``, meaning that encoding errors raise ``UnicodeError``.\n Other possible values are ``\'ignore\'``, ``\'replace\'`` and any other\n name registered via ``codecs.register_error()``, see section *Codec\n Base Classes*.\n\n New in version 2.2.\n\n Changed in version 2.3: Support for other error handling schemes\n added.\n\n Changed in version 2.7: Support for keyword arguments added.\n\nstr.encode([encoding[, errors]])\n\n Return an encoded version of the string. 
Default encoding is the\n current default string encoding. *errors* may be given to set a\n different error handling scheme. The default for *errors* is\n ``\'strict\'``, meaning that encoding errors raise a\n ``UnicodeError``. Other possible values are ``\'ignore\'``,\n ``\'replace\'``, ``\'xmlcharrefreplace\'``, ``\'backslashreplace\'`` and\n any other name registered via ``codecs.register_error()``, see\n section *Codec Base Classes*. For a list of possible encodings, see\n section *Standard Encodings*.\n\n New in version 2.0.\n\n Changed in version 2.3: Support for ``\'xmlcharrefreplace\'`` and\n ``\'backslashreplace\'`` and other error handling schemes added.\n\n Changed in version 2.7: Support for keyword arguments added.\n\nstr.endswith(suffix[, start[, end]])\n\n Return ``True`` if the string ends with the specified *suffix*,\n otherwise return ``False``. *suffix* can also be a tuple of\n suffixes to look for. With optional *start*, test beginning at\n that position. With optional *end*, stop comparing at that\n position.\n\n Changed in version 2.5: Accept tuples as *suffix*.\n\nstr.expandtabs([tabsize])\n\n Return a copy of the string where all tab characters are replaced\n by one or more spaces, depending on the current column and the\n given tab size. The column number is reset to zero after each\n newline occurring in the string. If *tabsize* is not given, a tab\n size of ``8`` characters is assumed. This doesn\'t understand other\n non-printing characters or escape sequences.\n\nstr.find(sub[, start[, end]])\n\n Return the lowest index in the string where substring *sub* is\n found, such that *sub* is contained in the slice ``s[start:end]``.\n Optional arguments *start* and *end* are interpreted as in slice\n notation. Return ``-1`` if *sub* is not found.\n\nstr.format(*args, **kwargs)\n\n Perform a string formatting operation. The string on which this\n method is called can contain literal text or replacement fields\n delimited by braces ``{}``. Each replacement field contains either\n the numeric index of a positional argument, or the name of a\n keyword argument. 
Returns a copy of the string where each\n replacement field is replaced with the string value of the\n corresponding argument.\n\n >>> "The sum of 1 + 2 is {0}".format(1+2)\n \'The sum of 1 + 2 is 3\'\n\n See *Format String Syntax* for a description of the various\n formatting options that can be specified in format strings.\n\n This method of string formatting is the new standard in Python 3.0,\n and should be preferred to the ``%`` formatting described in\n *String Formatting Operations* in new code.\n\n New in version 2.6.\n\nstr.index(sub[, start[, end]])\n\n Like ``find()``, but raise ``ValueError`` when the substring is not\n found.\n\nstr.isalnum()\n\n Return true if all characters in the string are alphanumeric and\n there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isalpha()\n\n Return true if all characters in the string are alphabetic and\n there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isdigit()\n\n Return true if all characters in the string are digits and there is\n at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.islower()\n\n Return true if all cased characters in the string are lowercase and\n there is at least one cased character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isspace()\n\n Return true if there are only whitespace characters in the string\n and there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.istitle()\n\n Return true if the string is a titlecased string and there is at\n least one character, for example uppercase characters may only\n follow uncased characters and lowercase characters only cased ones.\n Return false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isupper()\n\n Return true if all cased characters in the string are uppercase and\n there is at least one cased character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.join(iterable)\n\n Return a string which is the concatenation of the strings in the\n *iterable* *iterable*. The separator between elements is the\n string providing this method.\n\nstr.ljust(width[, fillchar])\n\n Return the string left justified in a string of length *width*.\n Padding is done using the specified *fillchar* (default is a\n space). The original string is returned if *width* is less than\n ``len(s)``.\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.lower()\n\n Return a copy of the string converted to lowercase.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.lstrip([chars])\n\n Return a copy of the string with leading characters removed. The\n *chars* argument is a string specifying the set of characters to be\n removed. If omitted or ``None``, the *chars* argument defaults to\n removing whitespace. The *chars* argument is not a prefix; rather,\n all combinations of its values are stripped:\n\n >>> \' spacious \'.lstrip()\n \'spacious \'\n >>> \'www.example.com\'.lstrip(\'cmowz.\')\n \'example.com\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.partition(sep)\n\n Split the string at the first occurrence of *sep*, and return a\n 3-tuple containing the part before the separator, the separator\n itself, and the part after the separator. 
If the separator is not\n found, return a 3-tuple containing the string itself, followed by\n two empty strings.\n\n New in version 2.5.\n\nstr.replace(old, new[, count])\n\n Return a copy of the string with all occurrences of substring *old*\n replaced by *new*. If the optional argument *count* is given, only\n the first *count* occurrences are replaced.\n\nstr.rfind(sub[, start[, end]])\n\n Return the highest index in the string where substring *sub* is\n found, such that *sub* is contained within ``s[start:end]``.\n Optional arguments *start* and *end* are interpreted as in slice\n notation. Return ``-1`` on failure.\n\nstr.rindex(sub[, start[, end]])\n\n Like ``rfind()`` but raises ``ValueError`` when the substring *sub*\n is not found.\n\nstr.rjust(width[, fillchar])\n\n Return the string right justified in a string of length *width*.\n Padding is done using the specified *fillchar* (default is a\n space). The original string is returned if *width* is less than\n ``len(s)``.\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.rpartition(sep)\n\n Split the string at the last occurrence of *sep*, and return a\n 3-tuple containing the part before the separator, the separator\n itself, and the part after the separator. If the separator is not\n found, return a 3-tuple containing two empty strings, followed by\n the string itself.\n\n New in version 2.5.\n\nstr.rsplit([sep[, maxsplit]])\n\n Return a list of the words in the string, using *sep* as the\n delimiter string. If *maxsplit* is given, at most *maxsplit* splits\n are done, the *rightmost* ones. If *sep* is not specified or\n ``None``, any whitespace string is a separator. Except for\n splitting from the right, ``rsplit()`` behaves like ``split()``\n which is described in detail below.\n\n New in version 2.4.\n\nstr.rstrip([chars])\n\n Return a copy of the string with trailing characters removed. The\n *chars* argument is a string specifying the set of characters to be\n removed. If omitted or ``None``, the *chars* argument defaults to\n removing whitespace. The *chars* argument is not a suffix; rather,\n all combinations of its values are stripped:\n\n >>> \' spacious \'.rstrip()\n \' spacious\'\n >>> \'mississippi\'.rstrip(\'ipz\')\n \'mississ\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.split([sep[, maxsplit]])\n\n Return a list of the words in the string, using *sep* as the\n delimiter string. If *maxsplit* is given, at most *maxsplit*\n splits are done (thus, the list will have at most ``maxsplit+1``\n elements). If *maxsplit* is not specified, then there is no limit\n on the number of splits (all possible splits are made).\n\n If *sep* is given, consecutive delimiters are not grouped together\n and are deemed to delimit empty strings (for example,\n ``\'1,,2\'.split(\',\')`` returns ``[\'1\', \'\', \'2\']``). The *sep*\n argument may consist of multiple characters (for example,\n ``\'1<>2<>3\'.split(\'<>\')`` returns ``[\'1\', \'2\', \'3\']``). Splitting\n an empty string with a specified separator returns ``[\'\']``.\n\n If *sep* is not specified or is ``None``, a different splitting\n algorithm is applied: runs of consecutive whitespace are regarded\n as a single separator, and the result will contain no empty strings\n at the start or end if the string has leading or trailing\n whitespace. 
Consequently, splitting an empty string or a string\n consisting of just whitespace with a ``None`` separator returns\n ``[]``.\n\n For example, ``\' 1 2 3 \'.split()`` returns ``[\'1\', \'2\', \'3\']``,\n and ``\' 1 2 3 \'.split(None, 1)`` returns ``[\'1\', \'2 3 \']``.\n\nstr.splitlines([keepends])\n\n Return a list of the lines in the string, breaking at line\n boundaries. Line breaks are not included in the resulting list\n unless *keepends* is given and true.\n\nstr.startswith(prefix[, start[, end]])\n\n Return ``True`` if string starts with the *prefix*, otherwise\n return ``False``. *prefix* can also be a tuple of prefixes to look\n for. With optional *start*, test string beginning at that\n position. With optional *end*, stop comparing string at that\n position.\n\n Changed in version 2.5: Accept tuples as *prefix*.\n\nstr.strip([chars])\n\n Return a copy of the string with the leading and trailing\n characters removed. The *chars* argument is a string specifying the\n set of characters to be removed. If omitted or ``None``, the\n *chars* argument defaults to removing whitespace. The *chars*\n argument is not a prefix or suffix; rather, all combinations of its\n values are stripped:\n\n >>> \' spacious \'.strip()\n \'spacious\'\n >>> \'www.example.com\'.strip(\'cmowz.\')\n \'example\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.swapcase()\n\n Return a copy of the string with uppercase characters converted to\n lowercase and vice versa.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.title()\n\n Return a titlecased version of the string where words start with an\n uppercase character and the remaining characters are lowercase.\n\n The algorithm uses a simple language-independent definition of a\n word as groups of consecutive letters. The definition works in\n many contexts but it means that apostrophes in contractions and\n possessives form word boundaries, which may not be the desired\n result:\n\n >>> "they\'re bill\'s friends from the UK".title()\n "They\'Re Bill\'S Friends From The Uk"\n\n A workaround for apostrophes can be constructed using regular\n expressions:\n\n >>> import re\n >>> def titlecase(s):\n return re.sub(r"[A-Za-z]+(\'[A-Za-z]+)?",\n lambda mo: mo.group(0)[0].upper() +\n mo.group(0)[1:].lower(),\n s)\n\n >>> titlecase("they\'re bill\'s friends.")\n "They\'re Bill\'s Friends."\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.translate(table[, deletechars])\n\n Return a copy of the string where all characters occurring in the\n optional argument *deletechars* are removed, and the remaining\n characters have been mapped through the given translation table,\n which must be a string of length 256.\n\n You can use the ``maketrans()`` helper function in the ``string``\n module to create a translation table. For string objects, set the\n *table* argument to ``None`` for translations that only delete\n characters:\n\n >>> \'read this short text\'.translate(None, \'aeiou\')\n \'rd ths shrt txt\'\n\n New in version 2.6: Support for a ``None`` *table* argument.\n\n For Unicode objects, the ``translate()`` method does not accept the\n optional *deletechars* argument. Instead, it returns a copy of the\n *s* where all characters have been mapped through the given\n translation table which must be a mapping of Unicode ordinals to\n Unicode ordinals, Unicode strings or ``None``. Unmapped characters\n are left untouched. 
Characters mapped to ``None`` are deleted.\n Note, a more flexible approach is to create a custom character\n mapping codec using the ``codecs`` module (see ``encodings.cp1251``\n for an example).\n\nstr.upper()\n\n Return a copy of the string converted to uppercase.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.zfill(width)\n\n Return the numeric string left filled with zeros in a string of\n length *width*. A sign prefix is handled correctly. The original\n string is returned if *width* is less than ``len(s)``.\n\n New in version 2.2.2.\n\nThe following methods are present only on unicode objects:\n\nunicode.isnumeric()\n\n Return ``True`` if there are only numeric characters in S,\n ``False`` otherwise. Numeric characters include digit characters,\n and all characters that have the Unicode numeric value property,\n e.g. U+2155, VULGAR FRACTION ONE FIFTH.\n\nunicode.isdecimal()\n\n Return ``True`` if there are only decimal characters in S,\n ``False`` otherwise. Decimal characters include digit characters,\n and all characters that that can be used to form decimal-radix\n numbers, e.g. U+0660, ARABIC-INDIC DIGIT ZERO.\n', - 'strings': u'\nString literals\n***************\n\nString literals are described by the following lexical definitions:\n\n stringliteral ::= [stringprefix](shortstring | longstring)\n stringprefix ::= "r" | "u" | "ur" | "R" | "U" | "UR" | "Ur" | "uR"\n shortstring ::= "\'" shortstringitem* "\'" | \'"\' shortstringitem* \'"\'\n longstring ::= "\'\'\'" longstringitem* "\'\'\'"\n | \'"""\' longstringitem* \'"""\'\n shortstringitem ::= shortstringchar | escapeseq\n longstringitem ::= longstringchar | escapeseq\n shortstringchar ::= \n longstringchar ::= \n escapeseq ::= "\\" \n\nOne syntactic restriction not indicated by these productions is that\nwhitespace is not allowed between the **stringprefix** and the rest of\nthe string literal. The source character set is defined by the\nencoding declaration; it is ASCII if no encoding declaration is given\nin the source file; see section *Encoding declarations*.\n\nIn plain English: String literals can be enclosed in matching single\nquotes (``\'``) or double quotes (``"``). They can also be enclosed in\nmatching groups of three single or double quotes (these are generally\nreferred to as *triple-quoted strings*). The backslash (``\\``)\ncharacter is used to escape characters that otherwise have a special\nmeaning, such as newline, backslash itself, or the quote character.\nString literals may optionally be prefixed with a letter ``\'r\'`` or\n``\'R\'``; such strings are called *raw strings* and use different rules\nfor interpreting backslash escape sequences. A prefix of ``\'u\'`` or\n``\'U\'`` makes the string a Unicode string. Unicode strings use the\nUnicode character set as defined by the Unicode Consortium and ISO\n10646. Some additional escape sequences, described below, are\navailable in Unicode strings. The two prefix characters may be\ncombined; in this case, ``\'u\'`` must appear before ``\'r\'``.\n\nIn triple-quoted strings, unescaped newlines and quotes are allowed\n(and are retained), except that three unescaped quotes in a row\nterminate the string. (A "quote" is the character used to open the\nstring, i.e. either ``\'`` or ``"``.)\n\nUnless an ``\'r\'`` or ``\'R\'`` prefix is present, escape sequences in\nstrings are interpreted according to rules similar to those used by\nStandard C. 
The recognized escape sequences are:\n\n+-------------------+-----------------------------------+---------+\n| Escape Sequence | Meaning | Notes |\n+===================+===================================+=========+\n| ``\\newline`` | Ignored | |\n+-------------------+-----------------------------------+---------+\n| ``\\\\`` | Backslash (``\\``) | |\n+-------------------+-----------------------------------+---------+\n| ``\\\'`` | Single quote (``\'``) | |\n+-------------------+-----------------------------------+---------+\n| ``\\"`` | Double quote (``"``) | |\n+-------------------+-----------------------------------+---------+\n| ``\\a`` | ASCII Bell (BEL) | |\n+-------------------+-----------------------------------+---------+\n| ``\\b`` | ASCII Backspace (BS) | |\n+-------------------+-----------------------------------+---------+\n| ``\\f`` | ASCII Formfeed (FF) | |\n+-------------------+-----------------------------------+---------+\n| ``\\n`` | ASCII Linefeed (LF) | |\n+-------------------+-----------------------------------+---------+\n| ``\\N{name}`` | Character named *name* in the | |\n| | Unicode database (Unicode only) | |\n+-------------------+-----------------------------------+---------+\n| ``\\r`` | ASCII Carriage Return (CR) | |\n+-------------------+-----------------------------------+---------+\n| ``\\t`` | ASCII Horizontal Tab (TAB) | |\n+-------------------+-----------------------------------+---------+\n| ``\\uxxxx`` | Character with 16-bit hex value | (1) |\n| | *xxxx* (Unicode only) | |\n+-------------------+-----------------------------------+---------+\n| ``\\Uxxxxxxxx`` | Character with 32-bit hex value | (2) |\n| | *xxxxxxxx* (Unicode only) | |\n+-------------------+-----------------------------------+---------+\n| ``\\v`` | ASCII Vertical Tab (VT) | |\n+-------------------+-----------------------------------+---------+\n| ``\\ooo`` | Character with octal value *ooo* | (3,5) |\n+-------------------+-----------------------------------+---------+\n| ``\\xhh`` | Character with hex value *hh* | (4,5) |\n+-------------------+-----------------------------------+---------+\n\nNotes:\n\n1. Individual code units which form parts of a surrogate pair can be\n encoded using this escape sequence.\n\n2. Any Unicode character can be encoded this way, but characters\n outside the Basic Multilingual Plane (BMP) will be encoded using a\n surrogate pair if Python is compiled to use 16-bit code units (the\n default). Individual code units which form parts of a surrogate\n pair can be encoded using this escape sequence.\n\n3. As in Standard C, up to three octal digits are accepted.\n\n4. Unlike in Standard C, exactly two hex digits are required.\n\n5. In a string literal, hexadecimal and octal escapes denote the byte\n with the given value; it is not necessary that the byte encodes a\n character in the source character set. In a Unicode literal, these\n escapes denote a Unicode character with the given value.\n\nUnlike Standard C, all unrecognized escape sequences are left in the\nstring unchanged, i.e., *the backslash is left in the string*. (This\nbehavior is useful when debugging: if an escape sequence is mistyped,\nthe resulting output is more easily recognized as broken.) 
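A few interactive checks illustrating the escape-sequence behaviour described here (these reflect standard Python 2 string handling; the particular literals are chosen only as examples):

    >>> '\x41'                # hex escape
    'A'
    >>> '\101'                # octal escape
    'A'
    >>> len('\n')             # a single linefeed character
    1
    >>> len('\d')             # unrecognized escape: the backslash is kept
    2
    >>> u'\u0041'             # Unicode escape (Unicode literals only)
    u'A'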
It is also\nimportant to note that the escape sequences marked as "(Unicode only)"\nin the table above fall into the category of unrecognized escapes for\nnon-Unicode string literals.\n\nWhen an ``\'r\'`` or ``\'R\'`` prefix is present, a character following a\nbackslash is included in the string without change, and *all\nbackslashes are left in the string*. For example, the string literal\n``r"\\n"`` consists of two characters: a backslash and a lowercase\n``\'n\'``. String quotes can be escaped with a backslash, but the\nbackslash remains in the string; for example, ``r"\\""`` is a valid\nstring literal consisting of two characters: a backslash and a double\nquote; ``r"\\"`` is not a valid string literal (even a raw string\ncannot end in an odd number of backslashes). Specifically, *a raw\nstring cannot end in a single backslash* (since the backslash would\nescape the following quote character). Note also that a single\nbackslash followed by a newline is interpreted as those two characters\nas part of the string, *not* as a line continuation.\n\nWhen an ``\'r\'`` or ``\'R\'`` prefix is used in conjunction with a\n``\'u\'`` or ``\'U\'`` prefix, then the ``\\uXXXX`` and ``\\UXXXXXXXX``\nescape sequences are processed while *all other backslashes are left\nin the string*. For example, the string literal ``ur"\\u0062\\n"``\nconsists of three Unicode characters: \'LATIN SMALL LETTER B\', \'REVERSE\nSOLIDUS\', and \'LATIN SMALL LETTER N\'. Backslashes can be escaped with\na preceding backslash; however, both remain in the string. As a\nresult, ``\\uXXXX`` escape sequences are only recognized when there are\nan odd number of backslashes.\n', + 'string-methods': u'\nString Methods\n**************\n\nBelow are listed the string methods which both 8-bit strings and\nUnicode objects support. Some of them are also available on\n``bytearray`` objects.\n\nIn addition, Python\'s strings support the sequence type methods\ndescribed in the *Sequence Types --- str, unicode, list, tuple,\nbytearray, buffer, xrange* section. To output formatted strings use\ntemplate strings or the ``%`` operator described in the *String\nFormatting Operations* section. Also, see the ``re`` module for string\nfunctions based on regular expressions.\n\nstr.capitalize()\n\n Return a copy of the string with its first character capitalized\n and the rest lowercased.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.center(width[, fillchar])\n\n Return centered in a string of length *width*. Padding is done\n using the specified *fillchar* (default is a space).\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.count(sub[, start[, end]])\n\n Return the number of non-overlapping occurrences of substring *sub*\n in the range [*start*, *end*]. Optional arguments *start* and\n *end* are interpreted as in slice notation.\n\nstr.decode([encoding[, errors]])\n\n Decodes the string using the codec registered for *encoding*.\n *encoding* defaults to the default string encoding. *errors* may\n be given to set a different error handling scheme. 
The default is\n ``\'strict\'``, meaning that encoding errors raise ``UnicodeError``.\n Other possible values are ``\'ignore\'``, ``\'replace\'`` and any other\n name registered via ``codecs.register_error()``, see section *Codec\n Base Classes*.\n\n New in version 2.2.\n\n Changed in version 2.3: Support for other error handling schemes\n added.\n\n Changed in version 2.7: Support for keyword arguments added.\n\nstr.encode([encoding[, errors]])\n\n Return an encoded version of the string. Default encoding is the\n current default string encoding. *errors* may be given to set a\n different error handling scheme. The default for *errors* is\n ``\'strict\'``, meaning that encoding errors raise a\n ``UnicodeError``. Other possible values are ``\'ignore\'``,\n ``\'replace\'``, ``\'xmlcharrefreplace\'``, ``\'backslashreplace\'`` and\n any other name registered via ``codecs.register_error()``, see\n section *Codec Base Classes*. For a list of possible encodings, see\n section *Standard Encodings*.\n\n New in version 2.0.\n\n Changed in version 2.3: Support for ``\'xmlcharrefreplace\'`` and\n ``\'backslashreplace\'`` and other error handling schemes added.\n\n Changed in version 2.7: Support for keyword arguments added.\n\nstr.endswith(suffix[, start[, end]])\n\n Return ``True`` if the string ends with the specified *suffix*,\n otherwise return ``False``. *suffix* can also be a tuple of\n suffixes to look for. With optional *start*, test beginning at\n that position. With optional *end*, stop comparing at that\n position.\n\n Changed in version 2.5: Accept tuples as *suffix*.\n\nstr.expandtabs([tabsize])\n\n Return a copy of the string where all tab characters are replaced\n by one or more spaces, depending on the current column and the\n given tab size. The column number is reset to zero after each\n newline occurring in the string. If *tabsize* is not given, a tab\n size of ``8`` characters is assumed. This doesn\'t understand other\n non-printing characters or escape sequences.\n\nstr.find(sub[, start[, end]])\n\n Return the lowest index in the string where substring *sub* is\n found, such that *sub* is contained in the slice ``s[start:end]``.\n Optional arguments *start* and *end* are interpreted as in slice\n notation. Return ``-1`` if *sub* is not found.\n\n Note: The ``find()`` method should be used only if you need to know the\n position of *sub*. To check if *sub* is a substring or not, use\n the ``in`` operator:\n\n >>> \'Py\' in \'Python\'\n True\n\nstr.format(*args, **kwargs)\n\n Perform a string formatting operation. The string on which this\n method is called can contain literal text or replacement fields\n delimited by braces ``{}``. Each replacement field contains either\n the numeric index of a positional argument, or the name of a\n keyword argument. 
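As a short illustration of the ``encode()``/``decode()`` round trip described above (the sample values are invented; UTF-8 is used only as an example codec):

    >>> u = u'caf\xe9'                 # u'café'
    >>> s = u.encode('utf-8')          # unicode -> 8-bit str
    >>> s
    'caf\xc3\xa9'
    >>> s.decode('utf-8') == u         # str -> unicode round trip
    True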
Returns a copy of the string where each\n replacement field is replaced with the string value of the\n corresponding argument.\n\n >>> "The sum of 1 + 2 is {0}".format(1+2)\n \'The sum of 1 + 2 is 3\'\n\n See *Format String Syntax* for a description of the various\n formatting options that can be specified in format strings.\n\n This method of string formatting is the new standard in Python 3.0,\n and should be preferred to the ``%`` formatting described in\n *String Formatting Operations* in new code.\n\n New in version 2.6.\n\nstr.index(sub[, start[, end]])\n\n Like ``find()``, but raise ``ValueError`` when the substring is not\n found.\n\nstr.isalnum()\n\n Return true if all characters in the string are alphanumeric and\n there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isalpha()\n\n Return true if all characters in the string are alphabetic and\n there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isdigit()\n\n Return true if all characters in the string are digits and there is\n at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.islower()\n\n Return true if all cased characters in the string are lowercase and\n there is at least one cased character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isspace()\n\n Return true if there are only whitespace characters in the string\n and there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.istitle()\n\n Return true if the string is a titlecased string and there is at\n least one character, for example uppercase characters may only\n follow uncased characters and lowercase characters only cased ones.\n Return false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isupper()\n\n Return true if all cased characters in the string are uppercase and\n there is at least one cased character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.join(iterable)\n\n Return a string which is the concatenation of the strings in the\n *iterable* *iterable*. The separator between elements is the\n string providing this method.\n\nstr.ljust(width[, fillchar])\n\n Return the string left justified in a string of length *width*.\n Padding is done using the specified *fillchar* (default is a\n space). The original string is returned if *width* is less than\n ``len(s)``.\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.lower()\n\n Return a copy of the string converted to lowercase.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.lstrip([chars])\n\n Return a copy of the string with leading characters removed. The\n *chars* argument is a string specifying the set of characters to be\n removed. If omitted or ``None``, the *chars* argument defaults to\n removing whitespace. The *chars* argument is not a prefix; rather,\n all combinations of its values are stripped:\n\n >>> \' spacious \'.lstrip()\n \'spacious \'\n >>> \'www.example.com\'.lstrip(\'cmowz.\')\n \'example.com\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.partition(sep)\n\n Split the string at the first occurrence of *sep*, and return a\n 3-tuple containing the part before the separator, the separator\n itself, and the part after the separator. 
If the separator is not\n found, return a 3-tuple containing the string itself, followed by\n two empty strings.\n\n New in version 2.5.\n\nstr.replace(old, new[, count])\n\n Return a copy of the string with all occurrences of substring *old*\n replaced by *new*. If the optional argument *count* is given, only\n the first *count* occurrences are replaced.\n\nstr.rfind(sub[, start[, end]])\n\n Return the highest index in the string where substring *sub* is\n found, such that *sub* is contained within ``s[start:end]``.\n Optional arguments *start* and *end* are interpreted as in slice\n notation. Return ``-1`` on failure.\n\nstr.rindex(sub[, start[, end]])\n\n Like ``rfind()`` but raises ``ValueError`` when the substring *sub*\n is not found.\n\nstr.rjust(width[, fillchar])\n\n Return the string right justified in a string of length *width*.\n Padding is done using the specified *fillchar* (default is a\n space). The original string is returned if *width* is less than\n ``len(s)``.\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.rpartition(sep)\n\n Split the string at the last occurrence of *sep*, and return a\n 3-tuple containing the part before the separator, the separator\n itself, and the part after the separator. If the separator is not\n found, return a 3-tuple containing two empty strings, followed by\n the string itself.\n\n New in version 2.5.\n\nstr.rsplit([sep[, maxsplit]])\n\n Return a list of the words in the string, using *sep* as the\n delimiter string. If *maxsplit* is given, at most *maxsplit* splits\n are done, the *rightmost* ones. If *sep* is not specified or\n ``None``, any whitespace string is a separator. Except for\n splitting from the right, ``rsplit()`` behaves like ``split()``\n which is described in detail below.\n\n New in version 2.4.\n\nstr.rstrip([chars])\n\n Return a copy of the string with trailing characters removed. The\n *chars* argument is a string specifying the set of characters to be\n removed. If omitted or ``None``, the *chars* argument defaults to\n removing whitespace. The *chars* argument is not a suffix; rather,\n all combinations of its values are stripped:\n\n >>> \' spacious \'.rstrip()\n \' spacious\'\n >>> \'mississippi\'.rstrip(\'ipz\')\n \'mississ\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.split([sep[, maxsplit]])\n\n Return a list of the words in the string, using *sep* as the\n delimiter string. If *maxsplit* is given, at most *maxsplit*\n splits are done (thus, the list will have at most ``maxsplit+1``\n elements). If *maxsplit* is not specified, then there is no limit\n on the number of splits (all possible splits are made).\n\n If *sep* is given, consecutive delimiters are not grouped together\n and are deemed to delimit empty strings (for example,\n ``\'1,,2\'.split(\',\')`` returns ``[\'1\', \'\', \'2\']``). The *sep*\n argument may consist of multiple characters (for example,\n ``\'1<>2<>3\'.split(\'<>\')`` returns ``[\'1\', \'2\', \'3\']``). Splitting\n an empty string with a specified separator returns ``[\'\']``.\n\n If *sep* is not specified or is ``None``, a different splitting\n algorithm is applied: runs of consecutive whitespace are regarded\n as a single separator, and the result will contain no empty strings\n at the start or end if the string has leading or trailing\n whitespace. 
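The right-hand variants above differ from their left-hand counterparts only in the direction of the search, for example (illustrative values):

   >>> 'banana'.replace('a', 'o', 2)
   'bonona'
   >>> 'banana'.find('an'), 'banana'.rfind('an')
   (1, 3)
   >>> 'a.b.c'.rpartition('.')
   ('a.b', '.', 'c')
   >>> 'one two three'.rsplit(None, 1)
   ['one two', 'three']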
Consequently, splitting an empty string or a string\n consisting of just whitespace with a ``None`` separator returns\n ``[]``.\n\n For example, ``\' 1 2 3 \'.split()`` returns ``[\'1\', \'2\', \'3\']``,\n and ``\' 1 2 3 \'.split(None, 1)`` returns ``[\'1\', \'2 3 \']``.\n\nstr.splitlines([keepends])\n\n Return a list of the lines in the string, breaking at line\n boundaries. Line breaks are not included in the resulting list\n unless *keepends* is given and true.\n\nstr.startswith(prefix[, start[, end]])\n\n Return ``True`` if string starts with the *prefix*, otherwise\n return ``False``. *prefix* can also be a tuple of prefixes to look\n for. With optional *start*, test string beginning at that\n position. With optional *end*, stop comparing string at that\n position.\n\n Changed in version 2.5: Accept tuples as *prefix*.\n\nstr.strip([chars])\n\n Return a copy of the string with the leading and trailing\n characters removed. The *chars* argument is a string specifying the\n set of characters to be removed. If omitted or ``None``, the\n *chars* argument defaults to removing whitespace. The *chars*\n argument is not a prefix or suffix; rather, all combinations of its\n values are stripped:\n\n >>> \' spacious \'.strip()\n \'spacious\'\n >>> \'www.example.com\'.strip(\'cmowz.\')\n \'example\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.swapcase()\n\n Return a copy of the string with uppercase characters converted to\n lowercase and vice versa.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.title()\n\n Return a titlecased version of the string where words start with an\n uppercase character and the remaining characters are lowercase.\n\n The algorithm uses a simple language-independent definition of a\n word as groups of consecutive letters. The definition works in\n many contexts but it means that apostrophes in contractions and\n possessives form word boundaries, which may not be the desired\n result:\n\n >>> "they\'re bill\'s friends from the UK".title()\n "They\'Re Bill\'S Friends From The Uk"\n\n A workaround for apostrophes can be constructed using regular\n expressions:\n\n >>> import re\n >>> def titlecase(s):\n return re.sub(r"[A-Za-z]+(\'[A-Za-z]+)?",\n lambda mo: mo.group(0)[0].upper() +\n mo.group(0)[1:].lower(),\n s)\n\n >>> titlecase("they\'re bill\'s friends.")\n "They\'re Bill\'s Friends."\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.translate(table[, deletechars])\n\n Return a copy of the string where all characters occurring in the\n optional argument *deletechars* are removed, and the remaining\n characters have been mapped through the given translation table,\n which must be a string of length 256.\n\n You can use the ``maketrans()`` helper function in the ``string``\n module to create a translation table. For string objects, set the\n *table* argument to ``None`` for translations that only delete\n characters:\n\n >>> \'read this short text\'.translate(None, \'aeiou\')\n \'rd ths shrt txt\'\n\n New in version 2.6: Support for a ``None`` *table* argument.\n\n For Unicode objects, the ``translate()`` method does not accept the\n optional *deletechars* argument. Instead, it returns a copy of the\n *s* where all characters have been mapped through the given\n translation table which must be a mapping of Unicode ordinals to\n Unicode ordinals, Unicode strings or ``None``. Unmapped characters\n are left untouched. 
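For example, a translation table built with ``string.maketrans()`` can be combined with ``translate()``, and the Unicode form takes a mapping of ordinals instead (the characters chosen are arbitrary):

   >>> import string
   >>> table = string.maketrans('abc', 'xyz')
   >>> 'abracadabra'.translate(table)
   'xyrxzxdxyrx'
   >>> 'abracadabra'.translate(None, 'ab')
   'rcdr'
   >>> u'abc'.translate({ord(u'a'): u'A', ord(u'b'): None})
   u'Ac'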
Characters mapped to ``None`` are deleted.\n Note, a more flexible approach is to create a custom character\n mapping codec using the ``codecs`` module (see ``encodings.cp1251``\n for an example).\n\nstr.upper()\n\n Return a copy of the string converted to uppercase.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.zfill(width)\n\n Return the numeric string left filled with zeros in a string of\n length *width*. A sign prefix is handled correctly. The original\n string is returned if *width* is less than ``len(s)``.\n\n New in version 2.2.2.\n\nThe following methods are present only on unicode objects:\n\nunicode.isnumeric()\n\n Return ``True`` if there are only numeric characters in S,\n ``False`` otherwise. Numeric characters include digit characters,\n and all characters that have the Unicode numeric value property,\n e.g. U+2155, VULGAR FRACTION ONE FIFTH.\n\nunicode.isdecimal()\n\n Return ``True`` if there are only decimal characters in S,\n ``False`` otherwise. Decimal characters include digit characters,\n and all characters that that can be used to form decimal-radix\n numbers, e.g. U+0660, ARABIC-INDIC DIGIT ZERO.\n', + 'strings': u'\nString literals\n***************\n\nString literals are described by the following lexical definitions:\n\n stringliteral ::= [stringprefix](shortstring | longstring)\n stringprefix ::= "r" | "u" | "ur" | "R" | "U" | "UR" | "Ur" | "uR"\n | "b" | "B" | "br" | "Br" | "bR" | "BR"\n shortstring ::= "\'" shortstringitem* "\'" | \'"\' shortstringitem* \'"\'\n longstring ::= "\'\'\'" longstringitem* "\'\'\'"\n | \'"""\' longstringitem* \'"""\'\n shortstringitem ::= shortstringchar | escapeseq\n longstringitem ::= longstringchar | escapeseq\n shortstringchar ::= \n longstringchar ::= \n escapeseq ::= "\\" \n\nOne syntactic restriction not indicated by these productions is that\nwhitespace is not allowed between the **stringprefix** and the rest of\nthe string literal. The source character set is defined by the\nencoding declaration; it is ASCII if no encoding declaration is given\nin the source file; see section *Encoding declarations*.\n\nIn plain English: String literals can be enclosed in matching single\nquotes (``\'``) or double quotes (``"``). They can also be enclosed in\nmatching groups of three single or double quotes (these are generally\nreferred to as *triple-quoted strings*). The backslash (``\\``)\ncharacter is used to escape characters that otherwise have a special\nmeaning, such as newline, backslash itself, or the quote character.\nString literals may optionally be prefixed with a letter ``\'r\'`` or\n``\'R\'``; such strings are called *raw strings* and use different rules\nfor interpreting backslash escape sequences. A prefix of ``\'u\'`` or\n``\'U\'`` makes the string a Unicode string. Unicode strings use the\nUnicode character set as defined by the Unicode Consortium and ISO\n10646. Some additional escape sequences, described below, are\navailable in Unicode strings. A prefix of ``\'b\'`` or ``\'B\'`` is\nignored in Python 2; it indicates that the literal should become a\nbytes literal in Python 3 (e.g. when code is automatically converted\nwith 2to3). A ``\'u\'`` or ``\'b\'`` prefix may be followed by an ``\'r\'``\nprefix.\n\nIn triple-quoted strings, unescaped newlines and quotes are allowed\n(and are retained), except that three unescaped quotes in a row\nterminate the string. (A "quote" is the character used to open the\nstring, i.e. 
either ``\'`` or ``"``.)\n\nUnless an ``\'r\'`` or ``\'R\'`` prefix is present, escape sequences in\nstrings are interpreted according to rules similar to those used by\nStandard C. The recognized escape sequences are:\n\n+-------------------+-----------------------------------+---------+\n| Escape Sequence | Meaning | Notes |\n+===================+===================================+=========+\n| ``\\newline`` | Ignored | |\n+-------------------+-----------------------------------+---------+\n| ``\\\\`` | Backslash (``\\``) | |\n+-------------------+-----------------------------------+---------+\n| ``\\\'`` | Single quote (``\'``) | |\n+-------------------+-----------------------------------+---------+\n| ``\\"`` | Double quote (``"``) | |\n+-------------------+-----------------------------------+---------+\n| ``\\a`` | ASCII Bell (BEL) | |\n+-------------------+-----------------------------------+---------+\n| ``\\b`` | ASCII Backspace (BS) | |\n+-------------------+-----------------------------------+---------+\n| ``\\f`` | ASCII Formfeed (FF) | |\n+-------------------+-----------------------------------+---------+\n| ``\\n`` | ASCII Linefeed (LF) | |\n+-------------------+-----------------------------------+---------+\n| ``\\N{name}`` | Character named *name* in the | |\n| | Unicode database (Unicode only) | |\n+-------------------+-----------------------------------+---------+\n| ``\\r`` | ASCII Carriage Return (CR) | |\n+-------------------+-----------------------------------+---------+\n| ``\\t`` | ASCII Horizontal Tab (TAB) | |\n+-------------------+-----------------------------------+---------+\n| ``\\uxxxx`` | Character with 16-bit hex value | (1) |\n| | *xxxx* (Unicode only) | |\n+-------------------+-----------------------------------+---------+\n| ``\\Uxxxxxxxx`` | Character with 32-bit hex value | (2) |\n| | *xxxxxxxx* (Unicode only) | |\n+-------------------+-----------------------------------+---------+\n| ``\\v`` | ASCII Vertical Tab (VT) | |\n+-------------------+-----------------------------------+---------+\n| ``\\ooo`` | Character with octal value *ooo* | (3,5) |\n+-------------------+-----------------------------------+---------+\n| ``\\xhh`` | Character with hex value *hh* | (4,5) |\n+-------------------+-----------------------------------+---------+\n\nNotes:\n\n1. Individual code units which form parts of a surrogate pair can be\n encoded using this escape sequence.\n\n2. Any Unicode character can be encoded this way, but characters\n outside the Basic Multilingual Plane (BMP) will be encoded using a\n surrogate pair if Python is compiled to use 16-bit code units (the\n default). Individual code units which form parts of a surrogate\n pair can be encoded using this escape sequence.\n\n3. As in Standard C, up to three octal digits are accepted.\n\n4. Unlike in Standard C, exactly two hex digits are required.\n\n5. In a string literal, hexadecimal and octal escapes denote the byte\n with the given value; it is not necessary that the byte encodes a\n character in the source character set. In a Unicode literal, these\n escapes denote a Unicode character with the given value.\n\nUnlike Standard C, all unrecognized escape sequences are left in the\nstring unchanged, i.e., *the backslash is left in the string*. (This\nbehavior is useful when debugging: if an escape sequence is mistyped,\nthe resulting output is more easily recognized as broken.) 
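A few of these escape rules in an interactive session (the characters chosen are arbitrary):

   >>> len('\n'), len(r'\n')
   (1, 2)
   >>> '\x41', '\101'
   ('A', 'A')
   >>> print u'\N{LATIN SMALL LETTER A}\u0062'
   ab
   >>> '\q'        # unrecognized escape: the backslash is kept
   '\\q'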
It is also\nimportant to note that the escape sequences marked as "(Unicode only)"\nin the table above fall into the category of unrecognized escapes for\nnon-Unicode string literals.\n\nWhen an ``\'r\'`` or ``\'R\'`` prefix is present, a character following a\nbackslash is included in the string without change, and *all\nbackslashes are left in the string*. For example, the string literal\n``r"\\n"`` consists of two characters: a backslash and a lowercase\n``\'n\'``. String quotes can be escaped with a backslash, but the\nbackslash remains in the string; for example, ``r"\\""`` is a valid\nstring literal consisting of two characters: a backslash and a double\nquote; ``r"\\"`` is not a valid string literal (even a raw string\ncannot end in an odd number of backslashes). Specifically, *a raw\nstring cannot end in a single backslash* (since the backslash would\nescape the following quote character). Note also that a single\nbackslash followed by a newline is interpreted as those two characters\nas part of the string, *not* as a line continuation.\n\nWhen an ``\'r\'`` or ``\'R\'`` prefix is used in conjunction with a\n``\'u\'`` or ``\'U\'`` prefix, then the ``\\uXXXX`` and ``\\UXXXXXXXX``\nescape sequences are processed while *all other backslashes are left\nin the string*. For example, the string literal ``ur"\\u0062\\n"``\nconsists of three Unicode characters: \'LATIN SMALL LETTER B\', \'REVERSE\nSOLIDUS\', and \'LATIN SMALL LETTER N\'. Backslashes can be escaped with\na preceding backslash; however, both remain in the string. As a\nresult, ``\\uXXXX`` escape sequences are only recognized when there are\nan odd number of backslashes.\n', 'subscriptions': u'\nSubscriptions\n*************\n\nA subscription selects an item of a sequence (string, tuple or list)\nor mapping (dictionary) object:\n\n subscription ::= primary "[" expression_list "]"\n\nThe primary must evaluate to an object of a sequence or mapping type.\n\nIf the primary is a mapping, the expression list must evaluate to an\nobject whose value is one of the keys of the mapping, and the\nsubscription selects the value in the mapping that corresponds to that\nkey. (The expression list is a tuple except if it has exactly one\nitem.)\n\nIf the primary is a sequence, the expression (list) must evaluate to a\nplain integer. If this value is negative, the length of the sequence\nis added to it (so that, e.g., ``x[-1]`` selects the last item of\n``x``.) The resulting value must be a nonnegative integer less than\nthe number of items in the sequence, and the subscription selects the\nitem whose index is that value (counting from zero).\n\nA string\'s items are characters. A character is not a separate data\ntype but a string of exactly one character.\n', 'truth': u"\nTruth Value Testing\n*******************\n\nAny object can be tested for truth value, for use in an ``if`` or\n``while`` condition or as operand of the Boolean operations below. The\nfollowing values are considered false:\n\n* ``None``\n\n* ``False``\n\n* zero of any numeric type, for example, ``0``, ``0L``, ``0.0``,\n ``0j``.\n\n* any empty sequence, for example, ``''``, ``()``, ``[]``.\n\n* any empty mapping, for example, ``{}``.\n\n* instances of user-defined classes, if the class defines a\n ``__nonzero__()`` or ``__len__()`` method, when that method returns\n the integer zero or ``bool`` value ``False``. 
[1]\n\nAll other values are considered true --- so objects of many types are\nalways true.\n\nOperations and built-in functions that have a Boolean result always\nreturn ``0`` or ``False`` for false and ``1`` or ``True`` for true,\nunless otherwise stated. (Important exception: the Boolean operations\n``or`` and ``and`` always return one of their operands.)\n", 'try': u'\nThe ``try`` statement\n*********************\n\nThe ``try`` statement specifies exception handlers and/or cleanup code\nfor a group of statements:\n\n try_stmt ::= try1_stmt | try2_stmt\n try1_stmt ::= "try" ":" suite\n ("except" [expression [("as" | ",") target]] ":" suite)+\n ["else" ":" suite]\n ["finally" ":" suite]\n try2_stmt ::= "try" ":" suite\n "finally" ":" suite\n\nChanged in version 2.5: In previous versions of Python,\n``try``...``except``...``finally`` did not work. ``try``...``except``\nhad to be nested in ``try``...``finally``.\n\nThe ``except`` clause(s) specify one or more exception handlers. When\nno exception occurs in the ``try`` clause, no exception handler is\nexecuted. When an exception occurs in the ``try`` suite, a search for\nan exception handler is started. This search inspects the except\nclauses in turn until one is found that matches the exception. An\nexpression-less except clause, if present, must be last; it matches\nany exception. For an except clause with an expression, that\nexpression is evaluated, and the clause matches the exception if the\nresulting object is "compatible" with the exception. An object is\ncompatible with an exception if it is the class or a base class of the\nexception object, a tuple containing an item compatible with the\nexception, or, in the (deprecated) case of string exceptions, is the\nraised string itself (note that the object identities must match, i.e.\nit must be the same string object, not just a string with the same\nvalue).\n\nIf no except clause matches the exception, the search for an exception\nhandler continues in the surrounding code and on the invocation stack.\n[1]\n\nIf the evaluation of an expression in the header of an except clause\nraises an exception, the original search for a handler is canceled and\na search starts for the new exception in the surrounding code and on\nthe call stack (it is treated as if the entire ``try`` statement\nraised the exception).\n\nWhen a matching except clause is found, the exception is assigned to\nthe target specified in that except clause, if present, and the except\nclause\'s suite is executed. All except clauses must have an\nexecutable block. When the end of this block is reached, execution\ncontinues normally after the entire try statement. (This means that\nif two nested handlers exist for the same exception, and the exception\noccurs in the try clause of the inner handler, the outer handler will\nnot handle the exception.)\n\nBefore an except clause\'s suite is executed, details about the\nexception are assigned to three variables in the ``sys`` module:\n``sys.exc_type`` receives the object identifying the exception;\n``sys.exc_value`` receives the exception\'s parameter;\n``sys.exc_traceback`` receives a traceback object (see section *The\nstandard type hierarchy*) identifying the point in the program where\nthe exception occurred. These details are also available through the\n``sys.exc_info()`` function, which returns a tuple ``(exc_type,\nexc_value, exc_traceback)``. Use of the corresponding variables is\ndeprecated in favor of this function, since their use is unsafe in a\nthreaded program. 
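A minimal sketch of the matching and cleanup behaviour described above, using a tuple of exception classes and a single ``try``...``except``...``finally`` statement (the names ``caught`` and ``cleaned_up`` are local variables invented for the example):

   >>> try:
   ...     {}['missing']
   ... except (KeyError, IndexError) as exc:
   ...     caught = type(exc).__name__
   ... finally:
   ...     cleaned_up = True
   ...
   >>> caught, cleaned_up
   ('KeyError', True)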
As of Python 1.5, the variables are restored to\ntheir previous values (before the call) when returning from a function\nthat handled an exception.\n\nThe optional ``else`` clause is executed if and when control flows off\nthe end of the ``try`` clause. [2] Exceptions in the ``else`` clause\nare not handled by the preceding ``except`` clauses.\n\nIf ``finally`` is present, it specifies a \'cleanup\' handler. The\n``try`` clause is executed, including any ``except`` and ``else``\nclauses. If an exception occurs in any of the clauses and is not\nhandled, the exception is temporarily saved. The ``finally`` clause is\nexecuted. If there is a saved exception, it is re-raised at the end\nof the ``finally`` clause. If the ``finally`` clause raises another\nexception or executes a ``return`` or ``break`` statement, the saved\nexception is lost. The exception information is not available to the\nprogram during execution of the ``finally`` clause.\n\nWhen a ``return``, ``break`` or ``continue`` statement is executed in\nthe ``try`` suite of a ``try``...``finally`` statement, the\n``finally`` clause is also executed \'on the way out.\' A ``continue``\nstatement is illegal in the ``finally`` clause. (The reason is a\nproblem with the current implementation --- this restriction may be\nlifted in the future).\n\nAdditional information on exceptions can be found in section\n*Exceptions*, and information on using the ``raise`` statement to\ngenerate exceptions may be found in section *The raise statement*.\n', - 'types': u'\nThe standard type hierarchy\n***************************\n\nBelow is a list of the types that are built into Python. Extension\nmodules (written in C, Java, or other languages, depending on the\nimplementation) can define additional types. Future versions of\nPython may add types to the type hierarchy (e.g., rational numbers,\nefficiently stored arrays of integers, etc.).\n\nSome of the type descriptions below contain a paragraph listing\n\'special attributes.\' These are attributes that provide access to the\nimplementation and are not intended for general use. Their definition\nmay change in the future.\n\nNone\n This type has a single value. There is a single object with this\n value. This object is accessed through the built-in name ``None``.\n It is used to signify the absence of a value in many situations,\n e.g., it is returned from functions that don\'t explicitly return\n anything. Its truth value is false.\n\nNotImplemented\n This type has a single value. There is a single object with this\n value. This object is accessed through the built-in name\n ``NotImplemented``. Numeric methods and rich comparison methods may\n return this value if they do not implement the operation for the\n operands provided. (The interpreter will then try the reflected\n operation, or some other fallback, depending on the operator.) Its\n truth value is true.\n\nEllipsis\n This type has a single value. There is a single object with this\n value. This object is accessed through the built-in name\n ``Ellipsis``. It is used to indicate the presence of the ``...``\n syntax in a slice. Its truth value is true.\n\n``numbers.Number``\n These are created by numeric literals and returned as results by\n arithmetic operators and arithmetic built-in functions. 
Numeric\n objects are immutable; once created their value never changes.\n Python numbers are of course strongly related to mathematical\n numbers, but subject to the limitations of numerical representation\n in computers.\n\n Python distinguishes between integers, floating point numbers, and\n complex numbers:\n\n ``numbers.Integral``\n These represent elements from the mathematical set of integers\n (positive and negative).\n\n There are three types of integers:\n\n Plain integers\n These represent numbers in the range -2147483648 through\n 2147483647. (The range may be larger on machines with a\n larger natural word size, but not smaller.) When the result\n of an operation would fall outside this range, the result is\n normally returned as a long integer (in some cases, the\n exception ``OverflowError`` is raised instead). For the\n purpose of shift and mask operations, integers are assumed to\n have a binary, 2\'s complement notation using 32 or more bits,\n and hiding no bits from the user (i.e., all 4294967296\n different bit patterns correspond to different values).\n\n Long integers\n These represent numbers in an unlimited range, subject to\n available (virtual) memory only. For the purpose of shift\n and mask operations, a binary representation is assumed, and\n negative numbers are represented in a variant of 2\'s\n complement which gives the illusion of an infinite string of\n sign bits extending to the left.\n\n Booleans\n These represent the truth values False and True. The two\n objects representing the values False and True are the only\n Boolean objects. The Boolean type is a subtype of plain\n integers, and Boolean values behave like the values 0 and 1,\n respectively, in almost all contexts, the exception being\n that when converted to a string, the strings ``"False"`` or\n ``"True"`` are returned, respectively.\n\n The rules for integer representation are intended to give the\n most meaningful interpretation of shift and mask operations\n involving negative integers and the least surprises when\n switching between the plain and long integer domains. Any\n operation, if it yields a result in the plain integer domain,\n will yield the same result in the long integer domain or when\n using mixed operands. The switch between domains is transparent\n to the programmer.\n\n ``numbers.Real`` (``float``)\n These represent machine-level double precision floating point\n numbers. You are at the mercy of the underlying machine\n architecture (and C or Java implementation) for the accepted\n range and handling of overflow. Python does not support single-\n precision floating point numbers; the savings in processor and\n memory usage that are usually the reason for using these is\n dwarfed by the overhead of using objects in Python, so there is\n no reason to complicate the language with two kinds of floating\n point numbers.\n\n ``numbers.Complex``\n These represent complex numbers as a pair of machine-level\n double precision floating point numbers. The same caveats apply\n as for floating point numbers. The real and imaginary parts of a\n complex number ``z`` can be retrieved through the read-only\n attributes ``z.real`` and ``z.imag``.\n\nSequences\n These represent finite ordered sets indexed by non-negative\n numbers. The built-in function ``len()`` returns the number of\n items of a sequence. When the length of a sequence is *n*, the\n index set contains the numbers 0, 1, ..., *n*-1. 
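The switch between the plain and long integer domains, the Boolean subtype and the complex number attributes mentioned above can be observed directly (a sketch; the exact value of ``sys.maxint`` is platform dependent):

   >>> import sys
   >>> type(sys.maxint), type(sys.maxint + 1)
   (<type 'int'>, <type 'long'>)
   >>> isinstance(True, int), True + True
   (True, 2)
   >>> z = 3 - 4j
   >>> z.real, z.imag
   (3.0, -4.0)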
Item *i* of\n sequence *a* is selected by ``a[i]``.\n\n Sequences also support slicing: ``a[i:j]`` selects all items with\n index *k* such that *i* ``<=`` *k* ``<`` *j*. When used as an\n expression, a slice is a sequence of the same type. This implies\n that the index set is renumbered so that it starts at 0.\n\n Some sequences also support "extended slicing" with a third "step"\n parameter: ``a[i:j:k]`` selects all items of *a* with index *x*\n where ``x = i + n*k``, *n* ``>=`` ``0`` and *i* ``<=`` *x* ``<``\n *j*.\n\n Sequences are distinguished according to their mutability:\n\n Immutable sequences\n An object of an immutable sequence type cannot change once it is\n created. (If the object contains references to other objects,\n these other objects may be mutable and may be changed; however,\n the collection of objects directly referenced by an immutable\n object cannot change.)\n\n The following types are immutable sequences:\n\n Strings\n The items of a string are characters. There is no separate\n character type; a character is represented by a string of one\n item. Characters represent (at least) 8-bit bytes. The\n built-in functions ``chr()`` and ``ord()`` convert between\n characters and nonnegative integers representing the byte\n values. Bytes with the values 0-127 usually represent the\n corresponding ASCII values, but the interpretation of values\n is up to the program. The string data type is also used to\n represent arrays of bytes, e.g., to hold data read from a\n file.\n\n (On systems whose native character set is not ASCII, strings\n may use EBCDIC in their internal representation, provided the\n functions ``chr()`` and ``ord()`` implement a mapping between\n ASCII and EBCDIC, and string comparison preserves the ASCII\n order. Or perhaps someone can propose a better rule?)\n\n Unicode\n The items of a Unicode object are Unicode code units. A\n Unicode code unit is represented by a Unicode object of one\n item and can hold either a 16-bit or 32-bit value\n representing a Unicode ordinal (the maximum value for the\n ordinal is given in ``sys.maxunicode``, and depends on how\n Python is configured at compile time). Surrogate pairs may\n be present in the Unicode object, and will be reported as two\n separate items. The built-in functions ``unichr()`` and\n ``ord()`` convert between code units and nonnegative integers\n representing the Unicode ordinals as defined in the Unicode\n Standard 3.0. Conversion from and to other encodings are\n possible through the Unicode method ``encode()`` and the\n built-in function ``unicode()``.\n\n Tuples\n The items of a tuple are arbitrary Python objects. Tuples of\n two or more items are formed by comma-separated lists of\n expressions. A tuple of one item (a \'singleton\') can be\n formed by affixing a comma to an expression (an expression by\n itself does not create a tuple, since parentheses must be\n usable for grouping of expressions). An empty tuple can be\n formed by an empty pair of parentheses.\n\n Mutable sequences\n Mutable sequences can be changed after they are created. The\n subscription and slicing notations can be used as the target of\n assignment and ``del`` (delete) statements.\n\n There are currently two intrinsic mutable sequence types:\n\n Lists\n The items of a list are arbitrary Python objects. Lists are\n formed by placing a comma-separated list of expressions in\n square brackets. 
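For example, simple and extended slicing of a list, and the trailing comma needed to form a one-item tuple (illustrative values):

   >>> a = [0, 1, 2, 3, 4, 5]
   >>> a[1:4]
   [1, 2, 3]
   >>> a[::2]
   [0, 2, 4]
   >>> (1,), type((1))
   ((1,), <type 'int'>)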
(Note that there are no special cases needed\n to form lists of length 0 or 1.)\n\n Byte Arrays\n A bytearray object is a mutable array. They are created by\n the built-in ``bytearray()`` constructor. Aside from being\n mutable (and hence unhashable), byte arrays otherwise provide\n the same interface and functionality as immutable bytes\n objects.\n\n The extension module ``array`` provides an additional example of\n a mutable sequence type.\n\nSet types\n These represent unordered, finite sets of unique, immutable\n objects. As such, they cannot be indexed by any subscript. However,\n they can be iterated over, and the built-in function ``len()``\n returns the number of items in a set. Common uses for sets are fast\n membership testing, removing duplicates from a sequence, and\n computing mathematical operations such as intersection, union,\n difference, and symmetric difference.\n\n For set elements, the same immutability rules apply as for\n dictionary keys. Note that numeric types obey the normal rules for\n numeric comparison: if two numbers compare equal (e.g., ``1`` and\n ``1.0``), only one of them can be contained in a set.\n\n There are currently two intrinsic set types:\n\n Sets\n These represent a mutable set. They are created by the built-in\n ``set()`` constructor and can be modified afterwards by several\n methods, such as ``add()``.\n\n Frozen sets\n These represent an immutable set. They are created by the\n built-in ``frozenset()`` constructor. As a frozenset is\n immutable and *hashable*, it can be used again as an element of\n another set, or as a dictionary key.\n\nMappings\n These represent finite sets of objects indexed by arbitrary index\n sets. The subscript notation ``a[k]`` selects the item indexed by\n ``k`` from the mapping ``a``; this can be used in expressions and\n as the target of assignments or ``del`` statements. The built-in\n function ``len()`` returns the number of items in a mapping.\n\n There is currently a single intrinsic mapping type:\n\n Dictionaries\n These represent finite sets of objects indexed by nearly\n arbitrary values. The only types of values not acceptable as\n keys are values containing lists or dictionaries or other\n mutable types that are compared by value rather than by object\n identity, the reason being that the efficient implementation of\n dictionaries requires a key\'s hash value to remain constant.\n Numeric types used for keys obey the normal rules for numeric\n comparison: if two numbers compare equal (e.g., ``1`` and\n ``1.0``) then they can be used interchangeably to index the same\n dictionary entry.\n\n Dictionaries are mutable; they can be created by the ``{...}``\n notation (see section *Dictionary displays*).\n\n The extension modules ``dbm``, ``gdbm``, and ``bsddb`` provide\n additional examples of mapping types.\n\nCallable types\n These are the types to which the function call operation (see\n section *Calls*) can be applied:\n\n User-defined functions\n A user-defined function object is created by a function\n definition (see section *Function definitions*). 
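The numeric-equality rule for set elements and dictionary keys described above, and the hashability of frozensets, can be seen here (illustrative values):

   >>> set([1, 1.0, 2])
   set([1, 2])
   >>> d = {1: 'one'}
   >>> d[1.0]
   'one'
   >>> frozenset(['a']) in set([frozenset(['a'])])
   True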
It should be\n called with an argument list containing the same number of items\n as the function\'s formal parameter list.\n\n Special attributes:\n\n +-------------------------+---------------------------------+-------------+\n | Attribute | Meaning | |\n +=========================+=================================+=============+\n | ``func_doc`` | The function\'s documentation | Writable |\n | | string, or ``None`` if | |\n | | unavailable | |\n +-------------------------+---------------------------------+-------------+\n | ``__doc__`` | Another way of spelling | Writable |\n | | ``func_doc`` | |\n +-------------------------+---------------------------------+-------------+\n | ``func_name`` | The function\'s name | Writable |\n +-------------------------+---------------------------------+-------------+\n | ``__name__`` | Another way of spelling | Writable |\n | | ``func_name`` | |\n +-------------------------+---------------------------------+-------------+\n | ``__module__`` | The name of the module the | Writable |\n | | function was defined in, or | |\n | | ``None`` if unavailable. | |\n +-------------------------+---------------------------------+-------------+\n | ``func_defaults`` | A tuple containing default | Writable |\n | | argument values for those | |\n | | arguments that have defaults, | |\n | | or ``None`` if no arguments | |\n | | have a default value | |\n +-------------------------+---------------------------------+-------------+\n | ``func_code`` | The code object representing | Writable |\n | | the compiled function body. | |\n +-------------------------+---------------------------------+-------------+\n | ``func_globals`` | A reference to the dictionary | Read-only |\n | | that holds the function\'s | |\n | | global variables --- the global | |\n | | namespace of the module in | |\n | | which the function was defined. | |\n +-------------------------+---------------------------------+-------------+\n | ``func_dict`` | The namespace supporting | Writable |\n | | arbitrary function attributes. | |\n +-------------------------+---------------------------------+-------------+\n | ``func_closure`` | ``None`` or a tuple of cells | Read-only |\n | | that contain bindings for the | |\n | | function\'s free variables. | |\n +-------------------------+---------------------------------+-------------+\n\n Most of the attributes labelled "Writable" check the type of the\n assigned value.\n\n Changed in version 2.4: ``func_name`` is now writable.\n\n Function objects also support getting and setting arbitrary\n attributes, which can be used, for example, to attach metadata\n to functions. Regular attribute dot-notation is used to get and\n set such attributes. *Note that the current implementation only\n supports function attributes on user-defined functions. 
Function\n attributes on built-in functions may be supported in the\n future.*\n\n Additional information about a function\'s definition can be\n retrieved from its code object; see the description of internal\n types below.\n\n User-defined methods\n A user-defined method object combines a class, a class instance\n (or ``None``) and any callable object (normally a user-defined\n function).\n\n Special read-only attributes: ``im_self`` is the class instance\n object, ``im_func`` is the function object; ``im_class`` is the\n class of ``im_self`` for bound methods or the class that asked\n for the method for unbound methods; ``__doc__`` is the method\'s\n documentation (same as ``im_func.__doc__``); ``__name__`` is the\n method name (same as ``im_func.__name__``); ``__module__`` is\n the name of the module the method was defined in, or ``None`` if\n unavailable.\n\n Changed in version 2.2: ``im_self`` used to refer to the class\n that defined the method.\n\n Changed in version 2.6: For 3.0 forward-compatibility,\n ``im_func`` is also available as ``__func__``, and ``im_self``\n as ``__self__``.\n\n Methods also support accessing (but not setting) the arbitrary\n function attributes on the underlying function object.\n\n User-defined method objects may be created when getting an\n attribute of a class (perhaps via an instance of that class), if\n that attribute is a user-defined function object, an unbound\n user-defined method object, or a class method object. When the\n attribute is a user-defined method object, a new method object\n is only created if the class from which it is being retrieved is\n the same as, or a derived class of, the class stored in the\n original method object; otherwise, the original method object is\n used as it is.\n\n When a user-defined method object is created by retrieving a\n user-defined function object from a class, its ``im_self``\n attribute is ``None`` and the method object is said to be\n unbound. When one is created by retrieving a user-defined\n function object from a class via one of its instances, its\n ``im_self`` attribute is the instance, and the method object is\n said to be bound. In either case, the new method\'s ``im_class``\n attribute is the class from which the retrieval takes place, and\n its ``im_func`` attribute is the original function object.\n\n When a user-defined method object is created by retrieving\n another method object from a class or instance, the behaviour is\n the same as for a function object, except that the ``im_func``\n attribute of the new instance is not the original method object\n but its ``im_func`` attribute.\n\n When a user-defined method object is created by retrieving a\n class method object from a class or instance, its ``im_self``\n attribute is the class itself (the same as the ``im_class``\n attribute), and its ``im_func`` attribute is the function object\n underlying the class method.\n\n When an unbound user-defined method object is called, the\n underlying function (``im_func``) is called, with the\n restriction that the first argument must be an instance of the\n proper class (``im_class``) or of a derived class thereof.\n\n When a bound user-defined method object is called, the\n underlying function (``im_func``) is called, inserting the class\n instance (``im_self``) in front of the argument list. 
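A short session showing the bound and unbound method objects described above (the class ``C`` and method ``f()`` are invented for the example):

   >>> class C(object):
   ...     def f(self, x):
   ...         return x * 2
   ...
   >>> C.f.im_self is None, C.f.im_class is C
   (True, True)
   >>> c = C()
   >>> c.f.im_self is c, c.f.im_func is C.__dict__['f']
   (True, True)
   >>> c.f(1) == C.f(c, 1)
   True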
For\n instance, when ``C`` is a class which contains a definition for\n a function ``f()``, and ``x`` is an instance of ``C``, calling\n ``x.f(1)`` is equivalent to calling ``C.f(x, 1)``.\n\n When a user-defined method object is derived from a class method\n object, the "class instance" stored in ``im_self`` will actually\n be the class itself, so that calling either ``x.f(1)`` or\n ``C.f(1)`` is equivalent to calling ``f(C,1)`` where ``f`` is\n the underlying function.\n\n Note that the transformation from function object to (unbound or\n bound) method object happens each time the attribute is\n retrieved from the class or instance. In some cases, a fruitful\n optimization is to assign the attribute to a local variable and\n call that local variable. Also notice that this transformation\n only happens for user-defined functions; other callable objects\n (and all non-callable objects) are retrieved without\n transformation. It is also important to note that user-defined\n functions which are attributes of a class instance are not\n converted to bound methods; this *only* happens when the\n function is an attribute of the class.\n\n Generator functions\n A function or method which uses the ``yield`` statement (see\n section *The yield statement*) is called a *generator function*.\n Such a function, when called, always returns an iterator object\n which can be used to execute the body of the function: calling\n the iterator\'s ``next()`` method will cause the function to\n execute until it provides a value using the ``yield`` statement.\n When the function executes a ``return`` statement or falls off\n the end, a ``StopIteration`` exception is raised and the\n iterator will have reached the end of the set of values to be\n returned.\n\n Built-in functions\n A built-in function object is a wrapper around a C function.\n Examples of built-in functions are ``len()`` and ``math.sin()``\n (``math`` is a standard built-in module). The number and type of\n the arguments are determined by the C function. Special read-\n only attributes: ``__doc__`` is the function\'s documentation\n string, or ``None`` if unavailable; ``__name__`` is the\n function\'s name; ``__self__`` is set to ``None`` (but see the\n next item); ``__module__`` is the name of the module the\n function was defined in or ``None`` if unavailable.\n\n Built-in methods\n This is really a different disguise of a built-in function, this\n time containing an object passed to the C function as an\n implicit extra argument. An example of a built-in method is\n ``alist.append()``, assuming *alist* is a list object. In this\n case, the special read-only attribute ``__self__`` is set to the\n object denoted by *list*.\n\n Class Types\n Class types, or "new-style classes," are callable. These\n objects normally act as factories for new instances of\n themselves, but variations are possible for class types that\n override ``__new__()``. The arguments of the call are passed to\n ``__new__()`` and, in the typical case, to ``__init__()`` to\n initialize the new instance.\n\n Classic Classes\n Class objects are described below. When a class object is\n called, a new class instance (also described below) is created\n and returned. This implies a call to the class\'s ``__init__()``\n method if it has one. Any arguments are passed on to the\n ``__init__()`` method. If there is no ``__init__()`` method,\n the class must be called without arguments.\n\n Class instances\n Class instances are described below. 
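A generator function as described above returns an iterator whose ``next()`` method runs the body up to each ``yield`` (the function ``countdown()`` is invented for the example):

   >>> def countdown(n):
   ...     while n > 0:
   ...         yield n
   ...         n -= 1
   ...
   >>> it = countdown(2)
   >>> it.next(), it.next()
   (2, 1)
   >>> it.next()
   Traceback (most recent call last):
     ...
   StopIteration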
Class instances are\n callable only when the class has a ``__call__()`` method;\n ``x(arguments)`` is a shorthand for ``x.__call__(arguments)``.\n\nModules\n Modules are imported by the ``import`` statement (see section *The\n import statement*). A module object has a namespace implemented by\n a dictionary object (this is the dictionary referenced by the\n func_globals attribute of functions defined in the module).\n Attribute references are translated to lookups in this dictionary,\n e.g., ``m.x`` is equivalent to ``m.__dict__["x"]``. A module object\n does not contain the code object used to initialize the module\n (since it isn\'t needed once the initialization is done).\n\n Attribute assignment updates the module\'s namespace dictionary,\n e.g., ``m.x = 1`` is equivalent to ``m.__dict__["x"] = 1``.\n\n Special read-only attribute: ``__dict__`` is the module\'s namespace\n as a dictionary object.\n\n Predefined (writable) attributes: ``__name__`` is the module\'s\n name; ``__doc__`` is the module\'s documentation string, or ``None``\n if unavailable; ``__file__`` is the pathname of the file from which\n the module was loaded, if it was loaded from a file. The\n ``__file__`` attribute is not present for C modules that are\n statically linked into the interpreter; for extension modules\n loaded dynamically from a shared library, it is the pathname of the\n shared library file.\n\nClasses\n Both class types (new-style classes) and class objects (old-\n style/classic classes) are typically created by class definitions\n (see section *Class definitions*). A class has a namespace\n implemented by a dictionary object. Class attribute references are\n translated to lookups in this dictionary, e.g., ``C.x`` is\n translated to ``C.__dict__["x"]`` (although for new-style classes\n in particular there are a number of hooks which allow for other\n means of locating attributes). When the attribute name is not found\n there, the attribute search continues in the base classes. For\n old-style classes, the search is depth-first, left-to-right in the\n order of occurrence in the base class list. New-style classes use\n the more complex C3 method resolution order which behaves correctly\n even in the presence of \'diamond\' inheritance structures where\n there are multiple inheritance paths leading back to a common\n ancestor. Additional details on the C3 MRO used by new-style\n classes can be found in the documentation accompanying the 2.3\n release at http://www.python.org/download/releases/2.3/mro/.\n\n When a class attribute reference (for class ``C``, say) would yield\n a user-defined function object or an unbound user-defined method\n object whose associated class is either ``C`` or one of its base\n classes, it is transformed into an unbound user-defined method\n object whose ``im_class`` attribute is ``C``. When it would yield a\n class method object, it is transformed into a bound user-defined\n method object whose ``im_class`` and ``im_self`` attributes are\n both ``C``. 
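Module and class attribute lookup as described above, including the method resolution order of a small new-style hierarchy (the classes ``A`` and ``B`` are invented for the example):

   >>> import os
   >>> os.sep == os.__dict__['sep']
   True
   >>> class A(object):
   ...     x = 1
   ...
   >>> class B(A):
   ...     pass
   ...
   >>> B.x
   1
   >>> [cls.__name__ for cls in B.__mro__]
   ['B', 'A', 'object']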
When it would yield a static method object, it is\n transformed into the object wrapped by the static method object.\n See section *Implementing Descriptors* for another way in which\n attributes retrieved from a class may differ from those actually\n contained in its ``__dict__`` (note that only new-style classes\n support descriptors).\n\n Class attribute assignments update the class\'s dictionary, never\n the dictionary of a base class.\n\n A class object can be called (see above) to yield a class instance\n (see below).\n\n Special attributes: ``__name__`` is the class name; ``__module__``\n is the module name in which the class was defined; ``__dict__`` is\n the dictionary containing the class\'s namespace; ``__bases__`` is a\n tuple (possibly empty or a singleton) containing the base classes,\n in the order of their occurrence in the base class list;\n ``__doc__`` is the class\'s documentation string, or None if\n undefined.\n\nClass instances\n A class instance is created by calling a class object (see above).\n A class instance has a namespace implemented as a dictionary which\n is the first place in which attribute references are searched.\n When an attribute is not found there, and the instance\'s class has\n an attribute by that name, the search continues with the class\n attributes. If a class attribute is found that is a user-defined\n function object or an unbound user-defined method object whose\n associated class is the class (call it ``C``) of the instance for\n which the attribute reference was initiated or one of its bases, it\n is transformed into a bound user-defined method object whose\n ``im_class`` attribute is ``C`` and whose ``im_self`` attribute is\n the instance. Static method and class method objects are also\n transformed, as if they had been retrieved from class ``C``; see\n above under "Classes". See section *Implementing Descriptors* for\n another way in which attributes of a class retrieved via its\n instances may differ from the objects actually stored in the\n class\'s ``__dict__``. If no class attribute is found, and the\n object\'s class has a ``__getattr__()`` method, that is called to\n satisfy the lookup.\n\n Attribute assignments and deletions update the instance\'s\n dictionary, never a class\'s dictionary. If the class has a\n ``__setattr__()`` or ``__delattr__()`` method, this is called\n instead of updating the instance dictionary directly.\n\n Class instances can pretend to be numbers, sequences, or mappings\n if they have methods with certain special names. See section\n *Special method names*.\n\n Special attributes: ``__dict__`` is the attribute dictionary;\n ``__class__`` is the instance\'s class.\n\nFiles\n A file object represents an open file. File objects are created by\n the ``open()`` built-in function, and also by ``os.popen()``,\n ``os.fdopen()``, and the ``makefile()`` method of socket objects\n (and perhaps by other functions or methods provided by extension\n modules). The objects ``sys.stdin``, ``sys.stdout`` and\n ``sys.stderr`` are initialized to file objects corresponding to the\n interpreter\'s standard input, output and error streams. See *File\n Objects* for complete documentation of file objects.\n\nInternal types\n A few types used internally by the interpreter are exposed to the\n user. Their definitions may change with future versions of the\n interpreter, but they are mentioned here for completeness.\n\n Code objects\n Code objects represent *byte-compiled* executable Python code,\n or *bytecode*. 
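Instance attribute lookup falls back to the class and then, if defined, to ``__getattr__()``, for example (the class ``C`` is invented for the example):

   >>> class C(object):
   ...     greeting = 'hello'
   ...     def __getattr__(self, name):
   ...         return 'missing: ' + name
   ...
   >>> c = C()
   >>> c.greeting
   'hello'
   >>> c.color
   'missing: color'
   >>> c.__class__ is C
   True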
The difference between a code object and a\n function object is that the function object contains an explicit\n reference to the function\'s globals (the module in which it was\n defined), while a code object contains no context; also the\n default argument values are stored in the function object, not\n in the code object (because they represent values calculated at\n run-time). Unlike function objects, code objects are immutable\n and contain no references (directly or indirectly) to mutable\n objects.\n\n Special read-only attributes: ``co_name`` gives the function\n name; ``co_argcount`` is the number of positional arguments\n (including arguments with default values); ``co_nlocals`` is the\n number of local variables used by the function (including\n arguments); ``co_varnames`` is a tuple containing the names of\n the local variables (starting with the argument names);\n ``co_cellvars`` is a tuple containing the names of local\n variables that are referenced by nested functions;\n ``co_freevars`` is a tuple containing the names of free\n variables; ``co_code`` is a string representing the sequence of\n bytecode instructions; ``co_consts`` is a tuple containing the\n literals used by the bytecode; ``co_names`` is a tuple\n containing the names used by the bytecode; ``co_filename`` is\n the filename from which the code was compiled;\n ``co_firstlineno`` is the first line number of the function;\n ``co_lnotab`` is a string encoding the mapping from bytecode\n offsets to line numbers (for details see the source code of the\n interpreter); ``co_stacksize`` is the required stack size\n (including local variables); ``co_flags`` is an integer encoding\n a number of flags for the interpreter.\n\n The following flag bits are defined for ``co_flags``: bit\n ``0x04`` is set if the function uses the ``*arguments`` syntax\n to accept an arbitrary number of positional arguments; bit\n ``0x08`` is set if the function uses the ``**keywords`` syntax\n to accept arbitrary keyword arguments; bit ``0x20`` is set if\n the function is a generator.\n\n Future feature declarations (``from __future__ import\n division``) also use bits in ``co_flags`` to indicate whether a\n code object was compiled with a particular feature enabled: bit\n ``0x2000`` is set if the function was compiled with future\n division enabled; bits ``0x10`` and ``0x1000`` were used in\n earlier versions of Python.\n\n Other bits in ``co_flags`` are reserved for internal use.\n\n If a code object represents a function, the first item in\n ``co_consts`` is the documentation string of the function, or\n ``None`` if undefined.\n\n Frame objects\n Frame objects represent execution frames. 
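A few of the code object attributes described above, read back from an ordinary function (the function ``g()`` is invented for the example; bits ``0x04`` and ``0x08`` of ``co_flags`` correspond to ``*args`` and ``**kw``):

   >>> def g(a, b=0, *args, **kw):
   ...     return a
   ...
   >>> g.func_code.co_name, g.func_code.co_argcount
   ('g', 2)
   >>> g.func_code.co_varnames[:2]
   ('a', 'b')
   >>> bool(g.func_code.co_flags & 0x04), bool(g.func_code.co_flags & 0x08)
   (True, True)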
They may occur in\n traceback objects (see below).\n\n Special read-only attributes: ``f_back`` is to the previous\n stack frame (towards the caller), or ``None`` if this is the\n bottom stack frame; ``f_code`` is the code object being executed\n in this frame; ``f_locals`` is the dictionary used to look up\n local variables; ``f_globals`` is used for global variables;\n ``f_builtins`` is used for built-in (intrinsic) names;\n ``f_restricted`` is a flag indicating whether the function is\n executing in restricted execution mode; ``f_lasti`` gives the\n precise instruction (this is an index into the bytecode string\n of the code object).\n\n Special writable attributes: ``f_trace``, if not ``None``, is a\n function called at the start of each source code line (this is\n used by the debugger); ``f_exc_type``, ``f_exc_value``,\n ``f_exc_traceback`` represent the last exception raised in the\n parent frame provided another exception was ever raised in the\n current frame (in all other cases they are None); ``f_lineno``\n is the current line number of the frame --- writing to this from\n within a trace function jumps to the given line (only for the\n bottom-most frame). A debugger can implement a Jump command\n (aka Set Next Statement) by writing to f_lineno.\n\n Traceback objects\n Traceback objects represent a stack trace of an exception. A\n traceback object is created when an exception occurs. When the\n search for an exception handler unwinds the execution stack, at\n each unwound level a traceback object is inserted in front of\n the current traceback. When an exception handler is entered,\n the stack trace is made available to the program. (See section\n *The try statement*.) It is accessible as ``sys.exc_traceback``,\n and also as the third item of the tuple returned by\n ``sys.exc_info()``. The latter is the preferred interface,\n since it works correctly when the program is using multiple\n threads. When the program contains no suitable handler, the\n stack trace is written (nicely formatted) to the standard error\n stream; if the interpreter is interactive, it is also made\n available to the user as ``sys.last_traceback``.\n\n Special read-only attributes: ``tb_next`` is the next level in\n the stack trace (towards the frame where the exception\n occurred), or ``None`` if there is no next level; ``tb_frame``\n points to the execution frame of the current level;\n ``tb_lineno`` gives the line number where the exception\n occurred; ``tb_lasti`` indicates the precise instruction. The\n line number and last instruction in the traceback may differ\n from the line number of its frame object if the exception\n occurred in a ``try`` statement with no matching except clause\n or with a finally clause.\n\n Slice objects\n Slice objects are used to represent slices when *extended slice\n syntax* is used. This is a slice using two colons, or multiple\n slices or ellipses separated by commas, e.g., ``a[i:j:step]``,\n ``a[i:j, k:l]``, or ``a[..., i:j]``. They are also created by\n the built-in ``slice()`` function.\n\n Special read-only attributes: ``start`` is the lower bound;\n ``stop`` is the upper bound; ``step`` is the step value; each is\n ``None`` if omitted. These attributes can have any type.\n\n Slice objects support one method:\n\n slice.indices(self, length)\n\n This method takes a single integer argument *length* and\n computes information about the extended slice that the slice\n object would describe if applied to a sequence of *length*\n items. 
It returns a tuple of three integers; respectively\n these are the *start* and *stop* indices and the *step* or\n stride length of the slice. Missing or out-of-bounds indices\n are handled in a manner consistent with regular slices.\n\n New in version 2.3.\n\n Static method objects\n Static method objects provide a way of defeating the\n transformation of function objects to method objects described\n above. A static method object is a wrapper around any other\n object, usually a user-defined method object. When a static\n method object is retrieved from a class or a class instance, the\n object actually returned is the wrapped object, which is not\n subject to any further transformation. Static method objects are\n not themselves callable, although the objects they wrap usually\n are. Static method objects are created by the built-in\n ``staticmethod()`` constructor.\n\n Class method objects\n A class method object, like a static method object, is a wrapper\n around another object that alters the way in which that object\n is retrieved from classes and class instances. The behaviour of\n class method objects upon such retrieval is described above,\n under "User-defined methods". Class method objects are created\n by the built-in ``classmethod()`` constructor.\n', + 'types': u'\nThe standard type hierarchy\n***************************\n\nBelow is a list of the types that are built into Python. Extension\nmodules (written in C, Java, or other languages, depending on the\nimplementation) can define additional types. Future versions of\nPython may add types to the type hierarchy (e.g., rational numbers,\nefficiently stored arrays of integers, etc.).\n\nSome of the type descriptions below contain a paragraph listing\n\'special attributes.\' These are attributes that provide access to the\nimplementation and are not intended for general use. Their definition\nmay change in the future.\n\nNone\n This type has a single value. There is a single object with this\n value. This object is accessed through the built-in name ``None``.\n It is used to signify the absence of a value in many situations,\n e.g., it is returned from functions that don\'t explicitly return\n anything. Its truth value is false.\n\nNotImplemented\n This type has a single value. There is a single object with this\n value. This object is accessed through the built-in name\n ``NotImplemented``. Numeric methods and rich comparison methods may\n return this value if they do not implement the operation for the\n operands provided. (The interpreter will then try the reflected\n operation, or some other fallback, depending on the operator.) Its\n truth value is true.\n\nEllipsis\n This type has a single value. There is a single object with this\n value. This object is accessed through the built-in name\n ``Ellipsis``. It is used to indicate the presence of the ``...``\n syntax in a slice. Its truth value is true.\n\n``numbers.Number``\n These are created by numeric literals and returned as results by\n arithmetic operators and arithmetic built-in functions. 
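The ``indices()`` method of slice objects and the static and class method wrappers described above behave as follows (the class ``C`` is invented for the example):

   >>> slice(None, None, 2).indices(5)
   (0, 5, 2)
   >>> slice(-2, None).indices(10)
   (8, 10, 1)
   >>> class C(object):
   ...     @staticmethod
   ...     def s():
   ...         return 'static'
   ...     @classmethod
   ...     def c(cls):
   ...         return cls.__name__
   ...
   >>> C.s(), C().s(), C.c(), C().c()
   ('static', 'static', 'C', 'C')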
Numeric\n objects are immutable; once created their value never changes.\n Python numbers are of course strongly related to mathematical\n numbers, but subject to the limitations of numerical representation\n in computers.\n\n Python distinguishes between integers, floating point numbers, and\n complex numbers:\n\n ``numbers.Integral``\n These represent elements from the mathematical set of integers\n (positive and negative).\n\n There are three types of integers:\n\n Plain integers\n These represent numbers in the range -2147483648 through\n 2147483647. (The range may be larger on machines with a\n larger natural word size, but not smaller.) When the result\n of an operation would fall outside this range, the result is\n normally returned as a long integer (in some cases, the\n exception ``OverflowError`` is raised instead). For the\n purpose of shift and mask operations, integers are assumed to\n have a binary, 2\'s complement notation using 32 or more bits,\n and hiding no bits from the user (i.e., all 4294967296\n different bit patterns correspond to different values).\n\n Long integers\n These represent numbers in an unlimited range, subject to\n available (virtual) memory only. For the purpose of shift\n and mask operations, a binary representation is assumed, and\n negative numbers are represented in a variant of 2\'s\n complement which gives the illusion of an infinite string of\n sign bits extending to the left.\n\n Booleans\n These represent the truth values False and True. The two\n objects representing the values False and True are the only\n Boolean objects. The Boolean type is a subtype of plain\n integers, and Boolean values behave like the values 0 and 1,\n respectively, in almost all contexts, the exception being\n that when converted to a string, the strings ``"False"`` or\n ``"True"`` are returned, respectively.\n\n The rules for integer representation are intended to give the\n most meaningful interpretation of shift and mask operations\n involving negative integers and the least surprises when\n switching between the plain and long integer domains. Any\n operation, if it yields a result in the plain integer domain,\n will yield the same result in the long integer domain or when\n using mixed operands. The switch between domains is transparent\n to the programmer.\n\n ``numbers.Real`` (``float``)\n These represent machine-level double precision floating point\n numbers. You are at the mercy of the underlying machine\n architecture (and C or Java implementation) for the accepted\n range and handling of overflow. Python does not support single-\n precision floating point numbers; the savings in processor and\n memory usage that are usually the reason for using these is\n dwarfed by the overhead of using objects in Python, so there is\n no reason to complicate the language with two kinds of floating\n point numbers.\n\n ``numbers.Complex``\n These represent complex numbers as a pair of machine-level\n double precision floating point numbers. The same caveats apply\n as for floating point numbers. The real and imaginary parts of a\n complex number ``z`` can be retrieved through the read-only\n attributes ``z.real`` and ``z.imag``.\n\nSequences\n These represent finite ordered sets indexed by non-negative\n numbers. The built-in function ``len()`` returns the number of\n items of a sequence. When the length of a sequence is *n*, the\n index set contains the numbers 0, 1, ..., *n*-1. 
Item *i* of\n sequence *a* is selected by ``a[i]``.\n\n Sequences also support slicing: ``a[i:j]`` selects all items with\n index *k* such that *i* ``<=`` *k* ``<`` *j*. When used as an\n expression, a slice is a sequence of the same type. This implies\n that the index set is renumbered so that it starts at 0.\n\n Some sequences also support "extended slicing" with a third "step"\n parameter: ``a[i:j:k]`` selects all items of *a* with index *x*\n where ``x = i + n*k``, *n* ``>=`` ``0`` and *i* ``<=`` *x* ``<``\n *j*.\n\n Sequences are distinguished according to their mutability:\n\n Immutable sequences\n An object of an immutable sequence type cannot change once it is\n created. (If the object contains references to other objects,\n these other objects may be mutable and may be changed; however,\n the collection of objects directly referenced by an immutable\n object cannot change.)\n\n The following types are immutable sequences:\n\n Strings\n The items of a string are characters. There is no separate\n character type; a character is represented by a string of one\n item. Characters represent (at least) 8-bit bytes. The\n built-in functions ``chr()`` and ``ord()`` convert between\n characters and nonnegative integers representing the byte\n values. Bytes with the values 0-127 usually represent the\n corresponding ASCII values, but the interpretation of values\n is up to the program. The string data type is also used to\n represent arrays of bytes, e.g., to hold data read from a\n file.\n\n (On systems whose native character set is not ASCII, strings\n may use EBCDIC in their internal representation, provided the\n functions ``chr()`` and ``ord()`` implement a mapping between\n ASCII and EBCDIC, and string comparison preserves the ASCII\n order. Or perhaps someone can propose a better rule?)\n\n Unicode\n The items of a Unicode object are Unicode code units. A\n Unicode code unit is represented by a Unicode object of one\n item and can hold either a 16-bit or 32-bit value\n representing a Unicode ordinal (the maximum value for the\n ordinal is given in ``sys.maxunicode``, and depends on how\n Python is configured at compile time). Surrogate pairs may\n be present in the Unicode object, and will be reported as two\n separate items. The built-in functions ``unichr()`` and\n ``ord()`` convert between code units and nonnegative integers\n representing the Unicode ordinals as defined in the Unicode\n Standard 3.0. Conversion from and to other encodings are\n possible through the Unicode method ``encode()`` and the\n built-in function ``unicode()``.\n\n Tuples\n The items of a tuple are arbitrary Python objects. Tuples of\n two or more items are formed by comma-separated lists of\n expressions. A tuple of one item (a \'singleton\') can be\n formed by affixing a comma to an expression (an expression by\n itself does not create a tuple, since parentheses must be\n usable for grouping of expressions). An empty tuple can be\n formed by an empty pair of parentheses.\n\n Mutable sequences\n Mutable sequences can be changed after they are created. The\n subscription and slicing notations can be used as the target of\n assignment and ``del`` (delete) statements.\n\n There are currently two intrinsic mutable sequence types:\n\n Lists\n The items of a list are arbitrary Python objects. Lists are\n formed by placing a comma-separated list of expressions in\n square brackets. 
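A brief interactive sketch of the constructions just described (illustrative only):

   >>> t = 12, 'ab'             # the commas build the tuple; parentheses are optional
   >>> single = 'x',            # a one-item tuple needs the trailing comma
   >>> empty = ()
   >>> len(t), len(single), len(empty)
   (2, 1, 0)
   >>> [], [7], [7, 'eight']    # lists of length 0, 1 or more use the same notation
   ([], [7], [7, 'eight'])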
(Note that there are no special cases needed\n to form lists of length 0 or 1.)\n\n Byte Arrays\n A bytearray object is a mutable array. They are created by\n the built-in ``bytearray()`` constructor. Aside from being\n mutable (and hence unhashable), byte arrays otherwise provide\n the same interface and functionality as immutable bytes\n objects.\n\n The extension module ``array`` provides an additional example of\n a mutable sequence type.\n\nSet types\n These represent unordered, finite sets of unique, immutable\n objects. As such, they cannot be indexed by any subscript. However,\n they can be iterated over, and the built-in function ``len()``\n returns the number of items in a set. Common uses for sets are fast\n membership testing, removing duplicates from a sequence, and\n computing mathematical operations such as intersection, union,\n difference, and symmetric difference.\n\n For set elements, the same immutability rules apply as for\n dictionary keys. Note that numeric types obey the normal rules for\n numeric comparison: if two numbers compare equal (e.g., ``1`` and\n ``1.0``), only one of them can be contained in a set.\n\n There are currently two intrinsic set types:\n\n Sets\n These represent a mutable set. They are created by the built-in\n ``set()`` constructor and can be modified afterwards by several\n methods, such as ``add()``.\n\n Frozen sets\n These represent an immutable set. They are created by the\n built-in ``frozenset()`` constructor. As a frozenset is\n immutable and *hashable*, it can be used again as an element of\n another set, or as a dictionary key.\n\nMappings\n These represent finite sets of objects indexed by arbitrary index\n sets. The subscript notation ``a[k]`` selects the item indexed by\n ``k`` from the mapping ``a``; this can be used in expressions and\n as the target of assignments or ``del`` statements. The built-in\n function ``len()`` returns the number of items in a mapping.\n\n There is currently a single intrinsic mapping type:\n\n Dictionaries\n These represent finite sets of objects indexed by nearly\n arbitrary values. The only types of values not acceptable as\n keys are values containing lists or dictionaries or other\n mutable types that are compared by value rather than by object\n identity, the reason being that the efficient implementation of\n dictionaries requires a key\'s hash value to remain constant.\n Numeric types used for keys obey the normal rules for numeric\n comparison: if two numbers compare equal (e.g., ``1`` and\n ``1.0``) then they can be used interchangeably to index the same\n dictionary entry.\n\n Dictionaries are mutable; they can be created by the ``{...}``\n notation (see section *Dictionary displays*).\n\n The extension modules ``dbm``, ``gdbm``, and ``bsddb`` provide\n additional examples of mapping types.\n\nCallable types\n These are the types to which the function call operation (see\n section *Calls*) can be applied:\n\n User-defined functions\n A user-defined function object is created by a function\n definition (see section *Function definitions*). 
It should be\n called with an argument list containing the same number of items\n as the function\'s formal parameter list.\n\n Special attributes:\n\n +-------------------------+---------------------------------+-------------+\n | Attribute | Meaning | |\n +=========================+=================================+=============+\n | ``func_doc`` | The function\'s documentation | Writable |\n | | string, or ``None`` if | |\n | | unavailable | |\n +-------------------------+---------------------------------+-------------+\n | ``__doc__`` | Another way of spelling | Writable |\n | | ``func_doc`` | |\n +-------------------------+---------------------------------+-------------+\n | ``func_name`` | The function\'s name | Writable |\n +-------------------------+---------------------------------+-------------+\n | ``__name__`` | Another way of spelling | Writable |\n | | ``func_name`` | |\n +-------------------------+---------------------------------+-------------+\n | ``__module__`` | The name of the module the | Writable |\n | | function was defined in, or | |\n | | ``None`` if unavailable. | |\n +-------------------------+---------------------------------+-------------+\n | ``func_defaults`` | A tuple containing default | Writable |\n | | argument values for those | |\n | | arguments that have defaults, | |\n | | or ``None`` if no arguments | |\n | | have a default value | |\n +-------------------------+---------------------------------+-------------+\n | ``func_code`` | The code object representing | Writable |\n | | the compiled function body. | |\n +-------------------------+---------------------------------+-------------+\n | ``func_globals`` | A reference to the dictionary | Read-only |\n | | that holds the function\'s | |\n | | global variables --- the global | |\n | | namespace of the module in | |\n | | which the function was defined. | |\n +-------------------------+---------------------------------+-------------+\n | ``func_dict`` | The namespace supporting | Writable |\n | | arbitrary function attributes. | |\n +-------------------------+---------------------------------+-------------+\n | ``func_closure`` | ``None`` or a tuple of cells | Read-only |\n | | that contain bindings for the | |\n | | function\'s free variables. | |\n +-------------------------+---------------------------------+-------------+\n\n Most of the attributes labelled "Writable" check the type of the\n assigned value.\n\n Changed in version 2.4: ``func_name`` is now writable.\n\n Function objects also support getting and setting arbitrary\n attributes, which can be used, for example, to attach metadata\n to functions. Regular attribute dot-notation is used to get and\n set such attributes. *Note that the current implementation only\n supports function attributes on user-defined functions. 
Function\n attributes on built-in functions may be supported in the\n future.*\n\n Additional information about a function\'s definition can be\n retrieved from its code object; see the description of internal\n types below.\n\n User-defined methods\n A user-defined method object combines a class, a class instance\n (or ``None``) and any callable object (normally a user-defined\n function).\n\n Special read-only attributes: ``im_self`` is the class instance\n object, ``im_func`` is the function object; ``im_class`` is the\n class of ``im_self`` for bound methods or the class that asked\n for the method for unbound methods; ``__doc__`` is the method\'s\n documentation (same as ``im_func.__doc__``); ``__name__`` is the\n method name (same as ``im_func.__name__``); ``__module__`` is\n the name of the module the method was defined in, or ``None`` if\n unavailable.\n\n Changed in version 2.2: ``im_self`` used to refer to the class\n that defined the method.\n\n Changed in version 2.6: For 3.0 forward-compatibility,\n ``im_func`` is also available as ``__func__``, and ``im_self``\n as ``__self__``.\n\n Methods also support accessing (but not setting) the arbitrary\n function attributes on the underlying function object.\n\n User-defined method objects may be created when getting an\n attribute of a class (perhaps via an instance of that class), if\n that attribute is a user-defined function object, an unbound\n user-defined method object, or a class method object. When the\n attribute is a user-defined method object, a new method object\n is only created if the class from which it is being retrieved is\n the same as, or a derived class of, the class stored in the\n original method object; otherwise, the original method object is\n used as it is.\n\n When a user-defined method object is created by retrieving a\n user-defined function object from a class, its ``im_self``\n attribute is ``None`` and the method object is said to be\n unbound. When one is created by retrieving a user-defined\n function object from a class via one of its instances, its\n ``im_self`` attribute is the instance, and the method object is\n said to be bound. In either case, the new method\'s ``im_class``\n attribute is the class from which the retrieval takes place, and\n its ``im_func`` attribute is the original function object.\n\n When a user-defined method object is created by retrieving\n another method object from a class or instance, the behaviour is\n the same as for a function object, except that the ``im_func``\n attribute of the new instance is not the original method object\n but its ``im_func`` attribute.\n\n When a user-defined method object is created by retrieving a\n class method object from a class or instance, its ``im_self``\n attribute is the class itself (the same as the ``im_class``\n attribute), and its ``im_func`` attribute is the function object\n underlying the class method.\n\n When an unbound user-defined method object is called, the\n underlying function (``im_func``) is called, with the\n restriction that the first argument must be an instance of the\n proper class (``im_class``) or of a derived class thereof.\n\n When a bound user-defined method object is called, the\n underlying function (``im_func``) is called, inserting the class\n instance (``im_self``) in front of the argument list. 
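Concretely, the insertion of the instance can be checked interactively (a minimal sketch, using the same names ``C``, ``f`` and ``x`` as the prose example that follows):

   >>> class C(object):
   ...     def f(self, arg):
   ...         return (self, arg)
   ...
   >>> x = C()
   >>> x.f(1) == C.f(x, 1)      # the bound call inserts x as the first argument
   True
   >>> x.f.im_self is x, x.f.im_func is C.__dict__['f']
   (True, True)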
For\n instance, when ``C`` is a class which contains a definition for\n a function ``f()``, and ``x`` is an instance of ``C``, calling\n ``x.f(1)`` is equivalent to calling ``C.f(x, 1)``.\n\n When a user-defined method object is derived from a class method\n object, the "class instance" stored in ``im_self`` will actually\n be the class itself, so that calling either ``x.f(1)`` or\n ``C.f(1)`` is equivalent to calling ``f(C,1)`` where ``f`` is\n the underlying function.\n\n Note that the transformation from function object to (unbound or\n bound) method object happens each time the attribute is\n retrieved from the class or instance. In some cases, a fruitful\n optimization is to assign the attribute to a local variable and\n call that local variable. Also notice that this transformation\n only happens for user-defined functions; other callable objects\n (and all non-callable objects) are retrieved without\n transformation. It is also important to note that user-defined\n functions which are attributes of a class instance are not\n converted to bound methods; this *only* happens when the\n function is an attribute of the class.\n\n Generator functions\n A function or method which uses the ``yield`` statement (see\n section *The yield statement*) is called a *generator function*.\n Such a function, when called, always returns an iterator object\n which can be used to execute the body of the function: calling\n the iterator\'s ``next()`` method will cause the function to\n execute until it provides a value using the ``yield`` statement.\n When the function executes a ``return`` statement or falls off\n the end, a ``StopIteration`` exception is raised and the\n iterator will have reached the end of the set of values to be\n returned.\n\n Built-in functions\n A built-in function object is a wrapper around a C function.\n Examples of built-in functions are ``len()`` and ``math.sin()``\n (``math`` is a standard built-in module). The number and type of\n the arguments are determined by the C function. Special read-\n only attributes: ``__doc__`` is the function\'s documentation\n string, or ``None`` if unavailable; ``__name__`` is the\n function\'s name; ``__self__`` is set to ``None`` (but see the\n next item); ``__module__`` is the name of the module the\n function was defined in or ``None`` if unavailable.\n\n Built-in methods\n This is really a different disguise of a built-in function, this\n time containing an object passed to the C function as an\n implicit extra argument. An example of a built-in method is\n ``alist.append()``, assuming *alist* is a list object. In this\n case, the special read-only attribute ``__self__`` is set to the\n object denoted by *alist*.\n\n Class Types\n Class types, or "new-style classes," are callable. These\n objects normally act as factories for new instances of\n themselves, but variations are possible for class types that\n override ``__new__()``. The arguments of the call are passed to\n ``__new__()`` and, in the typical case, to ``__init__()`` to\n initialize the new instance.\n\n Classic Classes\n Class objects are described below. When a class object is\n called, a new class instance (also described below) is created\n and returned. This implies a call to the class\'s ``__init__()``\n method if it has one. Any arguments are passed on to the\n ``__init__()`` method. If there is no ``__init__()`` method,\n the class must be called without arguments.\n\n Class instances\n Class instances are described below. 
Class instances are\n callable only when the class has a ``__call__()`` method;\n ``x(arguments)`` is a shorthand for ``x.__call__(arguments)``.\n\nModules\n Modules are imported by the ``import`` statement (see section *The\n import statement*). A module object has a namespace implemented by\n a dictionary object (this is the dictionary referenced by the\n func_globals attribute of functions defined in the module).\n Attribute references are translated to lookups in this dictionary,\n e.g., ``m.x`` is equivalent to ``m.__dict__["x"]``. A module object\n does not contain the code object used to initialize the module\n (since it isn\'t needed once the initialization is done).\n\n Attribute assignment updates the module\'s namespace dictionary,\n e.g., ``m.x = 1`` is equivalent to ``m.__dict__["x"] = 1``.\n\n Special read-only attribute: ``__dict__`` is the module\'s namespace\n as a dictionary object.\n\n **CPython implementation detail:** Because of the way CPython\n clears module dictionaries, the module dictionary will be cleared\n when the module falls out of scope even if the dictionary still has\n live references. To avoid this, copy the dictionary or keep the\n module around while using its dictionary directly.\n\n Predefined (writable) attributes: ``__name__`` is the module\'s\n name; ``__doc__`` is the module\'s documentation string, or ``None``\n if unavailable; ``__file__`` is the pathname of the file from which\n the module was loaded, if it was loaded from a file. The\n ``__file__`` attribute is not present for C modules that are\n statically linked into the interpreter; for extension modules\n loaded dynamically from a shared library, it is the pathname of the\n shared library file.\n\nClasses\n Both class types (new-style classes) and class objects (old-\n style/classic classes) are typically created by class definitions\n (see section *Class definitions*). A class has a namespace\n implemented by a dictionary object. Class attribute references are\n translated to lookups in this dictionary, e.g., ``C.x`` is\n translated to ``C.__dict__["x"]`` (although for new-style classes\n in particular there are a number of hooks which allow for other\n means of locating attributes). When the attribute name is not found\n there, the attribute search continues in the base classes. For\n old-style classes, the search is depth-first, left-to-right in the\n order of occurrence in the base class list. New-style classes use\n the more complex C3 method resolution order which behaves correctly\n even in the presence of \'diamond\' inheritance structures where\n there are multiple inheritance paths leading back to a common\n ancestor. Additional details on the C3 MRO used by new-style\n classes can be found in the documentation accompanying the 2.3\n release at http://www.python.org/download/releases/2.3/mro/.\n\n When a class attribute reference (for class ``C``, say) would yield\n a user-defined function object or an unbound user-defined method\n object whose associated class is either ``C`` or one of its base\n classes, it is transformed into an unbound user-defined method\n object whose ``im_class`` attribute is ``C``. When it would yield a\n class method object, it is transformed into a bound user-defined\n method object whose ``im_class`` and ``im_self`` attributes are\n both ``C``. 
When it would yield a static method object, it is\n transformed into the object wrapped by the static method object.\n See section *Implementing Descriptors* for another way in which\n attributes retrieved from a class may differ from those actually\n contained in its ``__dict__`` (note that only new-style classes\n support descriptors).\n\n Class attribute assignments update the class\'s dictionary, never\n the dictionary of a base class.\n\n A class object can be called (see above) to yield a class instance\n (see below).\n\n Special attributes: ``__name__`` is the class name; ``__module__``\n is the module name in which the class was defined; ``__dict__`` is\n the dictionary containing the class\'s namespace; ``__bases__`` is a\n tuple (possibly empty or a singleton) containing the base classes,\n in the order of their occurrence in the base class list;\n ``__doc__`` is the class\'s documentation string, or None if\n undefined.\n\nClass instances\n A class instance is created by calling a class object (see above).\n A class instance has a namespace implemented as a dictionary which\n is the first place in which attribute references are searched.\n When an attribute is not found there, and the instance\'s class has\n an attribute by that name, the search continues with the class\n attributes. If a class attribute is found that is a user-defined\n function object or an unbound user-defined method object whose\n associated class is the class (call it ``C``) of the instance for\n which the attribute reference was initiated or one of its bases, it\n is transformed into a bound user-defined method object whose\n ``im_class`` attribute is ``C`` and whose ``im_self`` attribute is\n the instance. Static method and class method objects are also\n transformed, as if they had been retrieved from class ``C``; see\n above under "Classes". See section *Implementing Descriptors* for\n another way in which attributes of a class retrieved via its\n instances may differ from the objects actually stored in the\n class\'s ``__dict__``. If no class attribute is found, and the\n object\'s class has a ``__getattr__()`` method, that is called to\n satisfy the lookup.\n\n Attribute assignments and deletions update the instance\'s\n dictionary, never a class\'s dictionary. If the class has a\n ``__setattr__()`` or ``__delattr__()`` method, this is called\n instead of updating the instance dictionary directly.\n\n Class instances can pretend to be numbers, sequences, or mappings\n if they have methods with certain special names. See section\n *Special method names*.\n\n Special attributes: ``__dict__`` is the attribute dictionary;\n ``__class__`` is the instance\'s class.\n\nFiles\n A file object represents an open file. File objects are created by\n the ``open()`` built-in function, and also by ``os.popen()``,\n ``os.fdopen()``, and the ``makefile()`` method of socket objects\n (and perhaps by other functions or methods provided by extension\n modules). The objects ``sys.stdin``, ``sys.stdout`` and\n ``sys.stderr`` are initialized to file objects corresponding to the\n interpreter\'s standard input, output and error streams. See *File\n Objects* for complete documentation of file objects.\n\nInternal types\n A few types used internally by the interpreter are exposed to the\n user. Their definitions may change with future versions of the\n interpreter, but they are mentioned here for completeness.\n\n Code objects\n Code objects represent *byte-compiled* executable Python code,\n or *bytecode*. 
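A function's compiled body can be reached through its ``func_code`` attribute; the individual ``co_*`` attributes are described below. A minimal sketch (``f`` is an arbitrary example function):

   >>> def f(x, y=1):
   ...     return x + y
   ...
   >>> type(f.func_code)
   <type 'code'>
   >>> f.func_code.co_name, f.func_code.co_argcount, f.func_code.co_varnames
   ('f', 2, ('x', 'y'))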
The difference between a code object and a\n function object is that the function object contains an explicit\n reference to the function\'s globals (the module in which it was\n defined), while a code object contains no context; also the\n default argument values are stored in the function object, not\n in the code object (because they represent values calculated at\n run-time). Unlike function objects, code objects are immutable\n and contain no references (directly or indirectly) to mutable\n objects.\n\n Special read-only attributes: ``co_name`` gives the function\n name; ``co_argcount`` is the number of positional arguments\n (including arguments with default values); ``co_nlocals`` is the\n number of local variables used by the function (including\n arguments); ``co_varnames`` is a tuple containing the names of\n the local variables (starting with the argument names);\n ``co_cellvars`` is a tuple containing the names of local\n variables that are referenced by nested functions;\n ``co_freevars`` is a tuple containing the names of free\n variables; ``co_code`` is a string representing the sequence of\n bytecode instructions; ``co_consts`` is a tuple containing the\n literals used by the bytecode; ``co_names`` is a tuple\n containing the names used by the bytecode; ``co_filename`` is\n the filename from which the code was compiled;\n ``co_firstlineno`` is the first line number of the function;\n ``co_lnotab`` is a string encoding the mapping from bytecode\n offsets to line numbers (for details see the source code of the\n interpreter); ``co_stacksize`` is the required stack size\n (including local variables); ``co_flags`` is an integer encoding\n a number of flags for the interpreter.\n\n The following flag bits are defined for ``co_flags``: bit\n ``0x04`` is set if the function uses the ``*arguments`` syntax\n to accept an arbitrary number of positional arguments; bit\n ``0x08`` is set if the function uses the ``**keywords`` syntax\n to accept arbitrary keyword arguments; bit ``0x20`` is set if\n the function is a generator.\n\n Future feature declarations (``from __future__ import\n division``) also use bits in ``co_flags`` to indicate whether a\n code object was compiled with a particular feature enabled: bit\n ``0x2000`` is set if the function was compiled with future\n division enabled; bits ``0x10`` and ``0x1000`` were used in\n earlier versions of Python.\n\n Other bits in ``co_flags`` are reserved for internal use.\n\n If a code object represents a function, the first item in\n ``co_consts`` is the documentation string of the function, or\n ``None`` if undefined.\n\n Frame objects\n Frame objects represent execution frames. 
They may occur in\n traceback objects (see below).\n\n Special read-only attributes: ``f_back`` is to the previous\n stack frame (towards the caller), or ``None`` if this is the\n bottom stack frame; ``f_code`` is the code object being executed\n in this frame; ``f_locals`` is the dictionary used to look up\n local variables; ``f_globals`` is used for global variables;\n ``f_builtins`` is used for built-in (intrinsic) names;\n ``f_restricted`` is a flag indicating whether the function is\n executing in restricted execution mode; ``f_lasti`` gives the\n precise instruction (this is an index into the bytecode string\n of the code object).\n\n Special writable attributes: ``f_trace``, if not ``None``, is a\n function called at the start of each source code line (this is\n used by the debugger); ``f_exc_type``, ``f_exc_value``,\n ``f_exc_traceback`` represent the last exception raised in the\n parent frame provided another exception was ever raised in the\n current frame (in all other cases they are None); ``f_lineno``\n is the current line number of the frame --- writing to this from\n within a trace function jumps to the given line (only for the\n bottom-most frame). A debugger can implement a Jump command\n (aka Set Next Statement) by writing to f_lineno.\n\n Traceback objects\n Traceback objects represent a stack trace of an exception. A\n traceback object is created when an exception occurs. When the\n search for an exception handler unwinds the execution stack, at\n each unwound level a traceback object is inserted in front of\n the current traceback. When an exception handler is entered,\n the stack trace is made available to the program. (See section\n *The try statement*.) It is accessible as ``sys.exc_traceback``,\n and also as the third item of the tuple returned by\n ``sys.exc_info()``. The latter is the preferred interface,\n since it works correctly when the program is using multiple\n threads. When the program contains no suitable handler, the\n stack trace is written (nicely formatted) to the standard error\n stream; if the interpreter is interactive, it is also made\n available to the user as ``sys.last_traceback``.\n\n Special read-only attributes: ``tb_next`` is the next level in\n the stack trace (towards the frame where the exception\n occurred), or ``None`` if there is no next level; ``tb_frame``\n points to the execution frame of the current level;\n ``tb_lineno`` gives the line number where the exception\n occurred; ``tb_lasti`` indicates the precise instruction. The\n line number and last instruction in the traceback may differ\n from the line number of its frame object if the exception\n occurred in a ``try`` statement with no matching except clause\n or with a finally clause.\n\n Slice objects\n Slice objects are used to represent slices when *extended slice\n syntax* is used. This is a slice using two colons, or multiple\n slices or ellipses separated by commas, e.g., ``a[i:j:step]``,\n ``a[i:j, k:l]``, or ``a[..., i:j]``. They are also created by\n the built-in ``slice()`` function.\n\n Special read-only attributes: ``start`` is the lower bound;\n ``stop`` is the upper bound; ``step`` is the step value; each is\n ``None`` if omitted. These attributes can have any type.\n\n Slice objects support one method:\n\n slice.indices(self, length)\n\n This method takes a single integer argument *length* and\n computes information about the extended slice that the slice\n object would describe if applied to a sequence of *length*\n items. 
It returns a tuple of three integers; respectively\n these are the *start* and *stop* indices and the *step* or\n stride length of the slice. Missing or out-of-bounds indices\n are handled in a manner consistent with regular slices.\n\n New in version 2.3.\n\n Static method objects\n Static method objects provide a way of defeating the\n transformation of function objects to method objects described\n above. A static method object is a wrapper around any other\n object, usually a user-defined method object. When a static\n method object is retrieved from a class or a class instance, the\n object actually returned is the wrapped object, which is not\n subject to any further transformation. Static method objects are\n not themselves callable, although the objects they wrap usually\n are. Static method objects are created by the built-in\n ``staticmethod()`` constructor.\n\n Class method objects\n A class method object, like a static method object, is a wrapper\n around another object that alters the way in which that object\n is retrieved from classes and class instances. The behaviour of\n class method objects upon such retrieval is described above,\n under "User-defined methods". Class method objects are created\n by the built-in ``classmethod()`` constructor.\n', 'typesfunctions': u'\nFunctions\n*********\n\nFunction objects are created by function definitions. The only\noperation on a function object is to call it: ``func(argument-list)``.\n\nThere are really two flavors of function objects: built-in functions\nand user-defined functions. Both support the same operation (to call\nthe function), but the implementation is different, hence the\ndifferent object types.\n\nSee *Function definitions* for more information.\n', - 'typesmapping': u'\nMapping Types --- ``dict``\n**************************\n\nA *mapping* object maps *hashable* values to arbitrary objects.\nMappings are mutable objects. There is currently only one standard\nmapping type, the *dictionary*. (For other containers see the built\nin ``list``, ``set``, and ``tuple`` classes, and the ``collections``\nmodule.)\n\nA dictionary\'s keys are *almost* arbitrary values. Values that are\nnot *hashable*, that is, values containing lists, dictionaries or\nother mutable types (that are compared by value rather than by object\nidentity) may not be used as keys. Numeric types used for keys obey\nthe normal rules for numeric comparison: if two numbers compare equal\n(such as ``1`` and ``1.0``) then they can be used interchangeably to\nindex the same dictionary entry. (Note however, that since computers\nstore floating-point numbers as approximations it is usually unwise to\nuse them as dictionary keys.)\n\nDictionaries can be created by placing a comma-separated list of\n``key: value`` pairs within braces, for example: ``{\'jack\': 4098,\n\'sjoerd\': 4127}`` or ``{4098: \'jack\', 4127: \'sjoerd\'}``, or by the\n``dict`` constructor.\n\nclass class dict([arg])\n\n Return a new dictionary initialized from an optional positional\n argument or from a set of keyword arguments. If no arguments are\n given, return a new empty dictionary. If the positional argument\n *arg* is a mapping object, return a dictionary mapping the same\n keys to the same values as does the mapping object. Otherwise the\n positional argument must be a sequence, a container that supports\n iteration, or an iterator object. The elements of the argument\n must each also be of one of those kinds, and each must in turn\n contain exactly two objects. 
The first is used as a key in the new\n dictionary, and the second as the key\'s value. If a given key is\n seen more than once, the last value associated with it is retained\n in the new dictionary.\n\n If keyword arguments are given, the keywords themselves with their\n associated values are added as items to the dictionary. If a key is\n specified both in the positional argument and as a keyword\n argument, the value associated with the keyword is retained in the\n dictionary. For example, these all return a dictionary equal to\n ``{"one": 2, "two": 3}``:\n\n * ``dict(one=2, two=3)``\n\n * ``dict({\'one\': 2, \'two\': 3})``\n\n * ``dict(zip((\'one\', \'two\'), (2, 3)))``\n\n * ``dict([[\'two\', 3], [\'one\', 2]])``\n\n The first example only works for keys that are valid Python\n identifiers; the others work with any valid keys.\n\n New in version 2.2.\n\n Changed in version 2.3: Support for building a dictionary from\n keyword arguments added.\n\n These are the operations that dictionaries support (and therefore,\n custom mapping types should support too):\n\n len(d)\n\n Return the number of items in the dictionary *d*.\n\n d[key]\n\n Return the item of *d* with key *key*. Raises a ``KeyError`` if\n *key* is not in the map.\n\n New in version 2.5: If a subclass of dict defines a method\n ``__missing__()``, if the key *key* is not present, the\n ``d[key]`` operation calls that method with the key *key* as\n argument. The ``d[key]`` operation then returns or raises\n whatever is returned or raised by the ``__missing__(key)`` call\n if the key is not present. No other operations or methods invoke\n ``__missing__()``. If ``__missing__()`` is not defined,\n ``KeyError`` is raised. ``__missing__()`` must be a method; it\n cannot be an instance variable. For an example, see\n ``collections.defaultdict``.\n\n d[key] = value\n\n Set ``d[key]`` to *value*.\n\n del d[key]\n\n Remove ``d[key]`` from *d*. Raises a ``KeyError`` if *key* is\n not in the map.\n\n key in d\n\n Return ``True`` if *d* has a key *key*, else ``False``.\n\n New in version 2.2.\n\n key not in d\n\n Equivalent to ``not key in d``.\n\n New in version 2.2.\n\n iter(d)\n\n Return an iterator over the keys of the dictionary. This is a\n shortcut for ``iterkeys()``.\n\n clear()\n\n Remove all items from the dictionary.\n\n copy()\n\n Return a shallow copy of the dictionary.\n\n fromkeys(seq[, value])\n\n Create a new dictionary with keys from *seq* and values set to\n *value*.\n\n ``fromkeys()`` is a class method that returns a new dictionary.\n *value* defaults to ``None``.\n\n New in version 2.3.\n\n get(key[, default])\n\n Return the value for *key* if *key* is in the dictionary, else\n *default*. If *default* is not given, it defaults to ``None``,\n so that this method never raises a ``KeyError``.\n\n has_key(key)\n\n Test for the presence of *key* in the dictionary. ``has_key()``\n is deprecated in favor of ``key in d``.\n\n items()\n\n Return a copy of the dictionary\'s list of ``(key, value)``\n pairs.\n\n **CPython implementation detail:** Keys and values are listed in\n an arbitrary order which is non-random, varies across Python\n implementations, and depends on the dictionary\'s history of\n insertions and deletions.\n\n If ``items()``, ``keys()``, ``values()``, ``iteritems()``,\n ``iterkeys()``, and ``itervalues()`` are called with no\n intervening modifications to the dictionary, the lists will\n directly correspond. 
This allows the creation of ``(value,\n key)`` pairs using ``zip()``: ``pairs = zip(d.values(),\n d.keys())``. The same relationship holds for the ``iterkeys()``\n and ``itervalues()`` methods: ``pairs = zip(d.itervalues(),\n d.iterkeys())`` provides the same value for ``pairs``. Another\n way to create the same list is ``pairs = [(v, k) for (k, v) in\n d.iteritems()]``.\n\n iteritems()\n\n Return an iterator over the dictionary\'s ``(key, value)`` pairs.\n See the note for ``dict.items()``.\n\n Using ``iteritems()`` while adding or deleting entries in the\n dictionary may raise a ``RuntimeError`` or fail to iterate over\n all entries.\n\n New in version 2.2.\n\n iterkeys()\n\n Return an iterator over the dictionary\'s keys. See the note for\n ``dict.items()``.\n\n Using ``iterkeys()`` while adding or deleting entries in the\n dictionary may raise a ``RuntimeError`` or fail to iterate over\n all entries.\n\n New in version 2.2.\n\n itervalues()\n\n Return an iterator over the dictionary\'s values. See the note\n for ``dict.items()``.\n\n Using ``itervalues()`` while adding or deleting entries in the\n dictionary may raise a ``RuntimeError`` or fail to iterate over\n all entries.\n\n New in version 2.2.\n\n keys()\n\n Return a copy of the dictionary\'s list of keys. See the note\n for ``dict.items()``.\n\n pop(key[, default])\n\n If *key* is in the dictionary, remove it and return its value,\n else return *default*. If *default* is not given and *key* is\n not in the dictionary, a ``KeyError`` is raised.\n\n New in version 2.3.\n\n popitem()\n\n Remove and return an arbitrary ``(key, value)`` pair from the\n dictionary.\n\n ``popitem()`` is useful to destructively iterate over a\n dictionary, as often used in set algorithms. If the dictionary\n is empty, calling ``popitem()`` raises a ``KeyError``.\n\n setdefault(key[, default])\n\n If *key* is in the dictionary, return its value. If not, insert\n *key* with a value of *default* and return *default*. *default*\n defaults to ``None``.\n\n update([other])\n\n Update the dictionary with the key/value pairs from *other*,\n overwriting existing keys. Return ``None``.\n\n ``update()`` accepts either another dictionary object or an\n iterable of key/value pairs (as a tuple or other iterable of\n length two). If keyword arguments are specified, the dictionary\n is then updated with those key/value pairs: ``d.update(red=1,\n blue=2)``.\n\n Changed in version 2.4: Allowed the argument to be an iterable\n of key/value pairs and allowed keyword arguments.\n\n values()\n\n Return a copy of the dictionary\'s list of values. See the note\n for ``dict.items()``.\n\n viewitems()\n\n Return a new view of the dictionary\'s items (``(key, value)``\n pairs). See below for documentation of view objects.\n\n New in version 2.7.\n\n viewkeys()\n\n Return a new view of the dictionary\'s keys. See below for\n documentation of view objects.\n\n New in version 2.7.\n\n viewvalues()\n\n Return a new view of the dictionary\'s values. See below for\n documentation of view objects.\n\n New in version 2.7.\n\n\nDictionary view objects\n=======================\n\nThe objects returned by ``dict.viewkeys()``, ``dict.viewvalues()`` and\n``dict.viewitems()`` are *view objects*. 
They provide a dynamic view\non the dictionary\'s entries, which means that when the dictionary\nchanges, the view reflects these changes.\n\nDictionary views can be iterated over to yield their respective data,\nand support membership tests:\n\nlen(dictview)\n\n Return the number of entries in the dictionary.\n\niter(dictview)\n\n Return an iterator over the keys, values or items (represented as\n tuples of ``(key, value)``) in the dictionary.\n\n Keys and values are iterated over in an arbitrary order which is\n non-random, varies across Python implementations, and depends on\n the dictionary\'s history of insertions and deletions. If keys,\n values and items views are iterated over with no intervening\n modifications to the dictionary, the order of items will directly\n correspond. This allows the creation of ``(value, key)`` pairs\n using ``zip()``: ``pairs = zip(d.values(), d.keys())``. Another\n way to create the same list is ``pairs = [(v, k) for (k, v) in\n d.items()]``.\n\n Iterating views while adding or deleting entries in the dictionary\n may raise a ``RuntimeError`` or fail to iterate over all entries.\n\nx in dictview\n\n Return ``True`` if *x* is in the underlying dictionary\'s keys,\n values or items (in the latter case, *x* should be a ``(key,\n value)`` tuple).\n\nKeys views are set-like since their entries are unique and hashable.\nIf all values are hashable, so that (key, value) pairs are unique and\nhashable, then the items view is also set-like. (Values views are not\ntreated as set-like since the entries are generally not unique.) Then\nthese set operations are available ("other" refers either to another\nview or a set):\n\ndictview & other\n\n Return the intersection of the dictview and the other object as a\n new set.\n\ndictview | other\n\n Return the union of the dictview and the other object as a new set.\n\ndictview - other\n\n Return the difference between the dictview and the other object\n (all elements in *dictview* that aren\'t in *other*) as a new set.\n\ndictview ^ other\n\n Return the symmetric difference (all elements either in *dictview*\n or *other*, but not in both) of the dictview and the other object\n as a new set.\n\nAn example of dictionary view usage:\n\n >>> dishes = {\'eggs\': 2, \'sausage\': 1, \'bacon\': 1, \'spam\': 500}\n >>> keys = dishes.viewkeys()\n >>> values = dishes.viewvalues()\n\n >>> # iteration\n >>> n = 0\n >>> for val in values:\n ... n += val\n >>> print(n)\n 504\n\n >>> # keys and values are iterated over in the same order\n >>> list(keys)\n [\'eggs\', \'bacon\', \'sausage\', \'spam\']\n >>> list(values)\n [2, 1, 1, 500]\n\n >>> # view objects are dynamic and reflect dict changes\n >>> del dishes[\'eggs\']\n >>> del dishes[\'sausage\']\n >>> list(keys)\n [\'spam\', \'bacon\']\n\n >>> # set operations\n >>> keys & {\'eggs\', \'bacon\', \'salad\'}\n {\'bacon\'}\n', + 'typesmapping': u'\nMapping Types --- ``dict``\n**************************\n\nA *mapping* object maps *hashable* values to arbitrary objects.\nMappings are mutable objects. There is currently only one standard\nmapping type, the *dictionary*. (For other containers see the built\nin ``list``, ``set``, and ``tuple`` classes, and the ``collections``\nmodule.)\n\nA dictionary\'s keys are *almost* arbitrary values. Values that are\nnot *hashable*, that is, values containing lists, dictionaries or\nother mutable types (that are compared by value rather than by object\nidentity) may not be used as keys. 
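For example (an illustrative interactive sketch): a hashable key such as a tuple is accepted, while a list used as a key is rejected:

   >>> d = {}
   >>> d[(1, 2)] = 'ok'         # a tuple of hashable items works as a key
   >>> d[[1, 2]] = 'boom'       # a list is mutable, hence unhashable
   Traceback (most recent call last):
     ...
   TypeError: unhashable type: 'list'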
Numeric types used for keys obey\nthe normal rules for numeric comparison: if two numbers compare equal\n(such as ``1`` and ``1.0``) then they can be used interchangeably to\nindex the same dictionary entry. (Note however, that since computers\nstore floating-point numbers as approximations it is usually unwise to\nuse them as dictionary keys.)\n\nDictionaries can be created by placing a comma-separated list of\n``key: value`` pairs within braces, for example: ``{\'jack\': 4098,\n\'sjoerd\': 4127}`` or ``{4098: \'jack\', 4127: \'sjoerd\'}``, or by the\n``dict`` constructor.\n\nclass class dict([arg])\n\n Return a new dictionary initialized from an optional positional\n argument or from a set of keyword arguments. If no arguments are\n given, return a new empty dictionary. If the positional argument\n *arg* is a mapping object, return a dictionary mapping the same\n keys to the same values as does the mapping object. Otherwise the\n positional argument must be a sequence, a container that supports\n iteration, or an iterator object. The elements of the argument\n must each also be of one of those kinds, and each must in turn\n contain exactly two objects. The first is used as a key in the new\n dictionary, and the second as the key\'s value. If a given key is\n seen more than once, the last value associated with it is retained\n in the new dictionary.\n\n If keyword arguments are given, the keywords themselves with their\n associated values are added as items to the dictionary. If a key is\n specified both in the positional argument and as a keyword\n argument, the value associated with the keyword is retained in the\n dictionary. For example, these all return a dictionary equal to\n ``{"one": 1, "two": 2}``:\n\n * ``dict(one=1, two=2)``\n\n * ``dict({\'one\': 1, \'two\': 2})``\n\n * ``dict(zip((\'one\', \'two\'), (1, 2)))``\n\n * ``dict([[\'two\', 2], [\'one\', 1]])``\n\n The first example only works for keys that are valid Python\n identifiers; the others work with any valid keys.\n\n New in version 2.2.\n\n Changed in version 2.3: Support for building a dictionary from\n keyword arguments added.\n\n These are the operations that dictionaries support (and therefore,\n custom mapping types should support too):\n\n len(d)\n\n Return the number of items in the dictionary *d*.\n\n d[key]\n\n Return the item of *d* with key *key*. Raises a ``KeyError`` if\n *key* is not in the map.\n\n New in version 2.5: If a subclass of dict defines a method\n ``__missing__()``, if the key *key* is not present, the\n ``d[key]`` operation calls that method with the key *key* as\n argument. The ``d[key]`` operation then returns or raises\n whatever is returned or raised by the ``__missing__(key)`` call\n if the key is not present. No other operations or methods invoke\n ``__missing__()``. If ``__missing__()`` is not defined,\n ``KeyError`` is raised. ``__missing__()`` must be a method; it\n cannot be an instance variable. For an example, see\n ``collections.defaultdict``.\n\n d[key] = value\n\n Set ``d[key]`` to *value*.\n\n del d[key]\n\n Remove ``d[key]`` from *d*. Raises a ``KeyError`` if *key* is\n not in the map.\n\n key in d\n\n Return ``True`` if *d* has a key *key*, else ``False``.\n\n New in version 2.2.\n\n key not in d\n\n Equivalent to ``not key in d``.\n\n New in version 2.2.\n\n iter(d)\n\n Return an iterator over the keys of the dictionary. 
This is a\n shortcut for ``iterkeys()``.\n\n clear()\n\n Remove all items from the dictionary.\n\n copy()\n\n Return a shallow copy of the dictionary.\n\n fromkeys(seq[, value])\n\n Create a new dictionary with keys from *seq* and values set to\n *value*.\n\n ``fromkeys()`` is a class method that returns a new dictionary.\n *value* defaults to ``None``.\n\n New in version 2.3.\n\n get(key[, default])\n\n Return the value for *key* if *key* is in the dictionary, else\n *default*. If *default* is not given, it defaults to ``None``,\n so that this method never raises a ``KeyError``.\n\n has_key(key)\n\n Test for the presence of *key* in the dictionary. ``has_key()``\n is deprecated in favor of ``key in d``.\n\n items()\n\n Return a copy of the dictionary\'s list of ``(key, value)``\n pairs.\n\n **CPython implementation detail:** Keys and values are listed in\n an arbitrary order which is non-random, varies across Python\n implementations, and depends on the dictionary\'s history of\n insertions and deletions.\n\n If ``items()``, ``keys()``, ``values()``, ``iteritems()``,\n ``iterkeys()``, and ``itervalues()`` are called with no\n intervening modifications to the dictionary, the lists will\n directly correspond. This allows the creation of ``(value,\n key)`` pairs using ``zip()``: ``pairs = zip(d.values(),\n d.keys())``. The same relationship holds for the ``iterkeys()``\n and ``itervalues()`` methods: ``pairs = zip(d.itervalues(),\n d.iterkeys())`` provides the same value for ``pairs``. Another\n way to create the same list is ``pairs = [(v, k) for (k, v) in\n d.iteritems()]``.\n\n iteritems()\n\n Return an iterator over the dictionary\'s ``(key, value)`` pairs.\n See the note for ``dict.items()``.\n\n Using ``iteritems()`` while adding or deleting entries in the\n dictionary may raise a ``RuntimeError`` or fail to iterate over\n all entries.\n\n New in version 2.2.\n\n iterkeys()\n\n Return an iterator over the dictionary\'s keys. See the note for\n ``dict.items()``.\n\n Using ``iterkeys()`` while adding or deleting entries in the\n dictionary may raise a ``RuntimeError`` or fail to iterate over\n all entries.\n\n New in version 2.2.\n\n itervalues()\n\n Return an iterator over the dictionary\'s values. See the note\n for ``dict.items()``.\n\n Using ``itervalues()`` while adding or deleting entries in the\n dictionary may raise a ``RuntimeError`` or fail to iterate over\n all entries.\n\n New in version 2.2.\n\n keys()\n\n Return a copy of the dictionary\'s list of keys. See the note\n for ``dict.items()``.\n\n pop(key[, default])\n\n If *key* is in the dictionary, remove it and return its value,\n else return *default*. If *default* is not given and *key* is\n not in the dictionary, a ``KeyError`` is raised.\n\n New in version 2.3.\n\n popitem()\n\n Remove and return an arbitrary ``(key, value)`` pair from the\n dictionary.\n\n ``popitem()`` is useful to destructively iterate over a\n dictionary, as often used in set algorithms. If the dictionary\n is empty, calling ``popitem()`` raises a ``KeyError``.\n\n setdefault(key[, default])\n\n If *key* is in the dictionary, return its value. If not, insert\n *key* with a value of *default* and return *default*. *default*\n defaults to ``None``.\n\n update([other])\n\n Update the dictionary with the key/value pairs from *other*,\n overwriting existing keys. Return ``None``.\n\n ``update()`` accepts either another dictionary object or an\n iterable of key/value pairs (as tuples or other iterables of\n length two). 
If keyword arguments are specified, the dictionary\n is then updated with those key/value pairs: ``d.update(red=1,\n blue=2)``.\n\n Changed in version 2.4: Allowed the argument to be an iterable\n of key/value pairs and allowed keyword arguments.\n\n values()\n\n Return a copy of the dictionary\'s list of values. See the note\n for ``dict.items()``.\n\n viewitems()\n\n Return a new view of the dictionary\'s items (``(key, value)``\n pairs). See below for documentation of view objects.\n\n New in version 2.7.\n\n viewkeys()\n\n Return a new view of the dictionary\'s keys. See below for\n documentation of view objects.\n\n New in version 2.7.\n\n viewvalues()\n\n Return a new view of the dictionary\'s values. See below for\n documentation of view objects.\n\n New in version 2.7.\n\n\nDictionary view objects\n=======================\n\nThe objects returned by ``dict.viewkeys()``, ``dict.viewvalues()`` and\n``dict.viewitems()`` are *view objects*. They provide a dynamic view\non the dictionary\'s entries, which means that when the dictionary\nchanges, the view reflects these changes.\n\nDictionary views can be iterated over to yield their respective data,\nand support membership tests:\n\nlen(dictview)\n\n Return the number of entries in the dictionary.\n\niter(dictview)\n\n Return an iterator over the keys, values or items (represented as\n tuples of ``(key, value)``) in the dictionary.\n\n Keys and values are iterated over in an arbitrary order which is\n non-random, varies across Python implementations, and depends on\n the dictionary\'s history of insertions and deletions. If keys,\n values and items views are iterated over with no intervening\n modifications to the dictionary, the order of items will directly\n correspond. This allows the creation of ``(value, key)`` pairs\n using ``zip()``: ``pairs = zip(d.values(), d.keys())``. Another\n way to create the same list is ``pairs = [(v, k) for (k, v) in\n d.items()]``.\n\n Iterating views while adding or deleting entries in the dictionary\n may raise a ``RuntimeError`` or fail to iterate over all entries.\n\nx in dictview\n\n Return ``True`` if *x* is in the underlying dictionary\'s keys,\n values or items (in the latter case, *x* should be a ``(key,\n value)`` tuple).\n\nKeys views are set-like since their entries are unique and hashable.\nIf all values are hashable, so that (key, value) pairs are unique and\nhashable, then the items view is also set-like. (Values views are not\ntreated as set-like since the entries are generally not unique.) Then\nthese set operations are available ("other" refers either to another\nview or a set):\n\ndictview & other\n\n Return the intersection of the dictview and the other object as a\n new set.\n\ndictview | other\n\n Return the union of the dictview and the other object as a new set.\n\ndictview - other\n\n Return the difference between the dictview and the other object\n (all elements in *dictview* that aren\'t in *other*) as a new set.\n\ndictview ^ other\n\n Return the symmetric difference (all elements either in *dictview*\n or *other*, but not in both) of the dictview and the other object\n as a new set.\n\nAn example of dictionary view usage:\n\n >>> dishes = {\'eggs\': 2, \'sausage\': 1, \'bacon\': 1, \'spam\': 500}\n >>> keys = dishes.viewkeys()\n >>> values = dishes.viewvalues()\n\n >>> # iteration\n >>> n = 0\n >>> for val in values:\n ... 
n += val\n >>> print(n)\n 504\n\n >>> # keys and values are iterated over in the same order\n >>> list(keys)\n [\'eggs\', \'bacon\', \'sausage\', \'spam\']\n >>> list(values)\n [2, 1, 1, 500]\n\n >>> # view objects are dynamic and reflect dict changes\n >>> del dishes[\'eggs\']\n >>> del dishes[\'sausage\']\n >>> list(keys)\n [\'spam\', \'bacon\']\n\n >>> # set operations\n >>> keys & {\'eggs\', \'bacon\', \'salad\'}\n {\'bacon\'}\n', 'typesmethods': u"\nMethods\n*******\n\nMethods are functions that are called using the attribute notation.\nThere are two flavors: built-in methods (such as ``append()`` on\nlists) and class instance methods. Built-in methods are described\nwith the types that support them.\n\nThe implementation adds two special read-only attributes to class\ninstance methods: ``m.im_self`` is the object on which the method\noperates, and ``m.im_func`` is the function implementing the method.\nCalling ``m(arg-1, arg-2, ..., arg-n)`` is completely equivalent to\ncalling ``m.im_func(m.im_self, arg-1, arg-2, ..., arg-n)``.\n\nClass instance methods are either *bound* or *unbound*, referring to\nwhether the method was accessed through an instance or a class,\nrespectively. When a method is unbound, its ``im_self`` attribute\nwill be ``None`` and if called, an explicit ``self`` object must be\npassed as the first argument. In this case, ``self`` must be an\ninstance of the unbound method's class (or a subclass of that class),\notherwise a ``TypeError`` is raised.\n\nLike function objects, methods objects support getting arbitrary\nattributes. However, since method attributes are actually stored on\nthe underlying function object (``meth.im_func``), setting method\nattributes on either bound or unbound methods is disallowed.\nAttempting to set a method attribute results in a ``TypeError`` being\nraised. In order to set a method attribute, you need to explicitly\nset it on the underlying function object:\n\n class C:\n def method(self):\n pass\n\n c = C()\n c.method.im_func.whoami = 'my name is c'\n\nSee *The standard type hierarchy* for more information.\n", 'typesmodules': u"\nModules\n*******\n\nThe only special operation on a module is attribute access:\n``m.name``, where *m* is a module and *name* accesses a name defined\nin *m*'s symbol table. Module attributes can be assigned to. (Note\nthat the ``import`` statement is not, strictly speaking, an operation\non a module object; ``import foo`` does not require a module object\nnamed *foo* to exist, rather it requires an (external) *definition*\nfor a module named *foo* somewhere.)\n\nA special member of every module is ``__dict__``. This is the\ndictionary containing the module's symbol table. Modifying this\ndictionary will actually change the module's symbol table, but direct\nassignment to the ``__dict__`` attribute is not possible (you can\nwrite ``m.__dict__['a'] = 1``, which defines ``m.a`` to be ``1``, but\nyou can't write ``m.__dict__ = {}``). Modifying ``__dict__`` directly\nis not recommended.\n\nModules built into the interpreter are written like this: ````. 
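For example (illustrative; the exact file path in the second repr depends on the installation):

   >>> import sys, os
   >>> sys
   <module 'sys' (built-in)>
   >>> os
   <module 'os' from '/usr/lib/python2.7/os.pyc'>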
If loaded from a file, they are written as\n````.\n", - 'typesseq': u'\nSequence Types --- ``str``, ``unicode``, ``list``, ``tuple``, ``buffer``, ``xrange``\n************************************************************************************\n\nThere are six sequence types: strings, Unicode strings, lists, tuples,\nbuffers, and xrange objects.\n\nFor other containers see the built in ``dict`` and ``set`` classes,\nand the ``collections`` module.\n\nString literals are written in single or double quotes: ``\'xyzzy\'``,\n``"frobozz"``. See *String literals* for more about string literals.\nUnicode strings are much like strings, but are specified in the syntax\nusing a preceding ``\'u\'`` character: ``u\'abc\'``, ``u"def"``. In\naddition to the functionality described here, there are also string-\nspecific methods described in the *String Methods* section. Lists are\nconstructed with square brackets, separating items with commas: ``[a,\nb, c]``. Tuples are constructed by the comma operator (not within\nsquare brackets), with or without enclosing parentheses, but an empty\ntuple must have the enclosing parentheses, such as ``a, b, c`` or\n``()``. A single item tuple must have a trailing comma, such as\n``(d,)``.\n\nBuffer objects are not directly supported by Python syntax, but can be\ncreated by calling the built-in function ``buffer()``. They don\'t\nsupport concatenation or repetition.\n\nObjects of type xrange are similar to buffers in that there is no\nspecific syntax to create them, but they are created using the\n``xrange()`` function. They don\'t support slicing, concatenation or\nrepetition, and using ``in``, ``not in``, ``min()`` or ``max()`` on\nthem is inefficient.\n\nMost sequence types support the following operations. The ``in`` and\n``not in`` operations have the same priorities as the comparison\noperations. The ``+`` and ``*`` operations have the same priority as\nthe corresponding numeric operations. [3] Additional methods are\nprovided for *Mutable Sequence Types*.\n\nThis table lists the sequence operations sorted in ascending priority\n(operations in the same box have the same priority). 
In the table,\n*s* and *t* are sequences of the same type; *n*, *i* and *j* are\nintegers:\n\n+--------------------+----------------------------------+------------+\n| Operation | Result | Notes |\n+====================+==================================+============+\n| ``x in s`` | ``True`` if an item of *s* is | (1) |\n| | equal to *x*, else ``False`` | |\n+--------------------+----------------------------------+------------+\n| ``x not in s`` | ``False`` if an item of *s* is | (1) |\n| | equal to *x*, else ``True`` | |\n+--------------------+----------------------------------+------------+\n| ``s + t`` | the concatenation of *s* and *t* | (6) |\n+--------------------+----------------------------------+------------+\n| ``s * n, n * s`` | *n* shallow copies of *s* | (2) |\n| | concatenated | |\n+--------------------+----------------------------------+------------+\n| ``s[i]`` | *i*\'th item of *s*, origin 0 | (3) |\n+--------------------+----------------------------------+------------+\n| ``s[i:j]`` | slice of *s* from *i* to *j* | (3)(4) |\n+--------------------+----------------------------------+------------+\n| ``s[i:j:k]`` | slice of *s* from *i* to *j* | (3)(5) |\n| | with step *k* | |\n+--------------------+----------------------------------+------------+\n| ``len(s)`` | length of *s* | |\n+--------------------+----------------------------------+------------+\n| ``min(s)`` | smallest item of *s* | |\n+--------------------+----------------------------------+------------+\n| ``max(s)`` | largest item of *s* | |\n+--------------------+----------------------------------+------------+\n\nSequence types also support comparisons. In particular, tuples and\nlists are compared lexicographically by comparing corresponding\nelements. This means that to compare equal, every element must compare\nequal and the two sequences must be of the same type and have the same\nlength. (For full details see *Comparisons* in the language\nreference.)\n\nNotes:\n\n1. When *s* is a string or Unicode string object the ``in`` and ``not\n in`` operations act like a substring test. In Python versions\n before 2.3, *x* had to be a string of length 1. In Python 2.3 and\n beyond, *x* may be a string of any length.\n\n2. Values of *n* less than ``0`` are treated as ``0`` (which yields an\n empty sequence of the same type as *s*). Note also that the copies\n are shallow; nested structures are not copied. This often haunts\n new Python programmers; consider:\n\n >>> lists = [[]] * 3\n >>> lists\n [[], [], []]\n >>> lists[0].append(3)\n >>> lists\n [[3], [3], [3]]\n\n What has happened is that ``[[]]`` is a one-element list containing\n an empty list, so all three elements of ``[[]] * 3`` are (pointers\n to) this single empty list. Modifying any of the elements of\n ``lists`` modifies this single list. You can create a list of\n different lists this way:\n\n >>> lists = [[] for i in range(3)]\n >>> lists[0].append(3)\n >>> lists[1].append(5)\n >>> lists[2].append(7)\n >>> lists\n [[3], [5], [7]]\n\n3. If *i* or *j* is negative, the index is relative to the end of the\n string: ``len(s) + i`` or ``len(s) + j`` is substituted. But note\n that ``-0`` is still ``0``.\n\n4. The slice of *s* from *i* to *j* is defined as the sequence of\n items with index *k* such that ``i <= k < j``. If *i* or *j* is\n greater than ``len(s)``, use ``len(s)``. If *i* is omitted or\n ``None``, use ``0``. If *j* is omitted or ``None``, use\n ``len(s)``. If *i* is greater than or equal to *j*, the slice is\n empty.\n\n5. 
The slice of *s* from *i* to *j* with step *k* is defined as the\n sequence of items with index ``x = i + n*k`` such that ``0 <= n <\n (j-i)/k``. In other words, the indices are ``i``, ``i+k``,\n ``i+2*k``, ``i+3*k`` and so on, stopping when *j* is reached (but\n never including *j*). If *i* or *j* is greater than ``len(s)``,\n use ``len(s)``. If *i* or *j* are omitted or ``None``, they become\n "end" values (which end depends on the sign of *k*). Note, *k*\n cannot be zero. If *k* is ``None``, it is treated like ``1``.\n\n6. **CPython implementation detail:** If *s* and *t* are both strings,\n some Python implementations such as CPython can usually perform an\n in-place optimization for assignments of the form ``s = s + t`` or\n ``s += t``. When applicable, this optimization makes quadratic\n run-time much less likely. This optimization is both version and\n implementation dependent. For performance sensitive code, it is\n preferable to use the ``str.join()`` method which assures\n consistent linear concatenation performance across versions and\n implementations.\n\n Changed in version 2.4: Formerly, string concatenation never\n occurred in-place.\n\n\nString Methods\n==============\n\nBelow are listed the string methods which both 8-bit strings and\nUnicode objects support.\n\nIn addition, Python\'s strings support the sequence type methods\ndescribed in the *Sequence Types --- str, unicode, list, tuple,\nbuffer, xrange* section. To output formatted strings use template\nstrings or the ``%`` operator described in the *String Formatting\nOperations* section. Also, see the ``re`` module for string functions\nbased on regular expressions.\n\nstr.capitalize()\n\n Return a copy of the string with only its first character\n capitalized.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.center(width[, fillchar])\n\n Return centered in a string of length *width*. Padding is done\n using the specified *fillchar* (default is a space).\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.count(sub[, start[, end]])\n\n Return the number of non-overlapping occurrences of substring *sub*\n in the range [*start*, *end*]. Optional arguments *start* and\n *end* are interpreted as in slice notation.\n\nstr.decode([encoding[, errors]])\n\n Decodes the string using the codec registered for *encoding*.\n *encoding* defaults to the default string encoding. *errors* may\n be given to set a different error handling scheme. The default is\n ``\'strict\'``, meaning that encoding errors raise ``UnicodeError``.\n Other possible values are ``\'ignore\'``, ``\'replace\'`` and any other\n name registered via ``codecs.register_error()``, see section *Codec\n Base Classes*.\n\n New in version 2.2.\n\n Changed in version 2.3: Support for other error handling schemes\n added.\n\n Changed in version 2.7: Support for keyword arguments added.\n\nstr.encode([encoding[, errors]])\n\n Return an encoded version of the string. Default encoding is the\n current default string encoding. *errors* may be given to set a\n different error handling scheme. The default for *errors* is\n ``\'strict\'``, meaning that encoding errors raise a\n ``UnicodeError``. Other possible values are ``\'ignore\'``,\n ``\'replace\'``, ``\'xmlcharrefreplace\'``, ``\'backslashreplace\'`` and\n any other name registered via ``codecs.register_error()``, see\n section *Codec Base Classes*. 
For a list of possible encodings, see\n section *Standard Encodings*.\n\n New in version 2.0.\n\n Changed in version 2.3: Support for ``\'xmlcharrefreplace\'`` and\n ``\'backslashreplace\'`` and other error handling schemes added.\n\n Changed in version 2.7: Support for keyword arguments added.\n\nstr.endswith(suffix[, start[, end]])\n\n Return ``True`` if the string ends with the specified *suffix*,\n otherwise return ``False``. *suffix* can also be a tuple of\n suffixes to look for. With optional *start*, test beginning at\n that position. With optional *end*, stop comparing at that\n position.\n\n Changed in version 2.5: Accept tuples as *suffix*.\n\nstr.expandtabs([tabsize])\n\n Return a copy of the string where all tab characters are replaced\n by one or more spaces, depending on the current column and the\n given tab size. The column number is reset to zero after each\n newline occurring in the string. If *tabsize* is not given, a tab\n size of ``8`` characters is assumed. This doesn\'t understand other\n non-printing characters or escape sequences.\n\nstr.find(sub[, start[, end]])\n\n Return the lowest index in the string where substring *sub* is\n found, such that *sub* is contained in the slice ``s[start:end]``.\n Optional arguments *start* and *end* are interpreted as in slice\n notation. Return ``-1`` if *sub* is not found.\n\nstr.format(*args, **kwargs)\n\n Perform a string formatting operation. The string on which this\n method is called can contain literal text or replacement fields\n delimited by braces ``{}``. Each replacement field contains either\n the numeric index of a positional argument, or the name of a\n keyword argument. Returns a copy of the string where each\n replacement field is replaced with the string value of the\n corresponding argument.\n\n >>> "The sum of 1 + 2 is {0}".format(1+2)\n \'The sum of 1 + 2 is 3\'\n\n See *Format String Syntax* for a description of the various\n formatting options that can be specified in format strings.\n\n This method of string formatting is the new standard in Python 3.0,\n and should be preferred to the ``%`` formatting described in\n *String Formatting Operations* in new code.\n\n New in version 2.6.\n\nstr.index(sub[, start[, end]])\n\n Like ``find()``, but raise ``ValueError`` when the substring is not\n found.\n\nstr.isalnum()\n\n Return true if all characters in the string are alphanumeric and\n there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isalpha()\n\n Return true if all characters in the string are alphabetic and\n there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isdigit()\n\n Return true if all characters in the string are digits and there is\n at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.islower()\n\n Return true if all cased characters in the string are lowercase and\n there is at least one cased character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isspace()\n\n Return true if there are only whitespace characters in the string\n and there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.istitle()\n\n Return true if the string is a titlecased string and there is at\n least one character, for example uppercase characters may only\n follow uncased characters and lowercase characters only cased ones.\n Return false otherwise.\n\n For 
8-bit strings, this method is locale-dependent.\n\nstr.isupper()\n\n Return true if all cased characters in the string are uppercase and\n there is at least one cased character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.join(iterable)\n\n Return a string which is the concatenation of the strings in the\n *iterable* *iterable*. The separator between elements is the\n string providing this method.\n\nstr.ljust(width[, fillchar])\n\n Return the string left justified in a string of length *width*.\n Padding is done using the specified *fillchar* (default is a\n space). The original string is returned if *width* is less than\n ``len(s)``.\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.lower()\n\n Return a copy of the string converted to lowercase.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.lstrip([chars])\n\n Return a copy of the string with leading characters removed. The\n *chars* argument is a string specifying the set of characters to be\n removed. If omitted or ``None``, the *chars* argument defaults to\n removing whitespace. The *chars* argument is not a prefix; rather,\n all combinations of its values are stripped:\n\n >>> \' spacious \'.lstrip()\n \'spacious \'\n >>> \'www.example.com\'.lstrip(\'cmowz.\')\n \'example.com\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.partition(sep)\n\n Split the string at the first occurrence of *sep*, and return a\n 3-tuple containing the part before the separator, the separator\n itself, and the part after the separator. If the separator is not\n found, return a 3-tuple containing the string itself, followed by\n two empty strings.\n\n New in version 2.5.\n\nstr.replace(old, new[, count])\n\n Return a copy of the string with all occurrences of substring *old*\n replaced by *new*. If the optional argument *count* is given, only\n the first *count* occurrences are replaced.\n\nstr.rfind(sub[, start[, end]])\n\n Return the highest index in the string where substring *sub* is\n found, such that *sub* is contained within ``s[start:end]``.\n Optional arguments *start* and *end* are interpreted as in slice\n notation. Return ``-1`` on failure.\n\nstr.rindex(sub[, start[, end]])\n\n Like ``rfind()`` but raises ``ValueError`` when the substring *sub*\n is not found.\n\nstr.rjust(width[, fillchar])\n\n Return the string right justified in a string of length *width*.\n Padding is done using the specified *fillchar* (default is a\n space). The original string is returned if *width* is less than\n ``len(s)``.\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.rpartition(sep)\n\n Split the string at the last occurrence of *sep*, and return a\n 3-tuple containing the part before the separator, the separator\n itself, and the part after the separator. If the separator is not\n found, return a 3-tuple containing two empty strings, followed by\n the string itself.\n\n New in version 2.5.\n\nstr.rsplit([sep[, maxsplit]])\n\n Return a list of the words in the string, using *sep* as the\n delimiter string. If *maxsplit* is given, at most *maxsplit* splits\n are done, the *rightmost* ones. If *sep* is not specified or\n ``None``, any whitespace string is a separator. Except for\n splitting from the right, ``rsplit()`` behaves like ``split()``\n which is described in detail below.\n\n New in version 2.4.\n\nstr.rstrip([chars])\n\n Return a copy of the string with trailing characters removed. 
The\n *chars* argument is a string specifying the set of characters to be\n removed. If omitted or ``None``, the *chars* argument defaults to\n removing whitespace. The *chars* argument is not a suffix; rather,\n all combinations of its values are stripped:\n\n >>> \' spacious \'.rstrip()\n \' spacious\'\n >>> \'mississippi\'.rstrip(\'ipz\')\n \'mississ\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.split([sep[, maxsplit]])\n\n Return a list of the words in the string, using *sep* as the\n delimiter string. If *maxsplit* is given, at most *maxsplit*\n splits are done (thus, the list will have at most ``maxsplit+1``\n elements). If *maxsplit* is not specified, then there is no limit\n on the number of splits (all possible splits are made).\n\n If *sep* is given, consecutive delimiters are not grouped together\n and are deemed to delimit empty strings (for example,\n ``\'1,,2\'.split(\',\')`` returns ``[\'1\', \'\', \'2\']``). The *sep*\n argument may consist of multiple characters (for example,\n ``\'1<>2<>3\'.split(\'<>\')`` returns ``[\'1\', \'2\', \'3\']``). Splitting\n an empty string with a specified separator returns ``[\'\']``.\n\n If *sep* is not specified or is ``None``, a different splitting\n algorithm is applied: runs of consecutive whitespace are regarded\n as a single separator, and the result will contain no empty strings\n at the start or end if the string has leading or trailing\n whitespace. Consequently, splitting an empty string or a string\n consisting of just whitespace with a ``None`` separator returns\n ``[]``.\n\n For example, ``\' 1 2 3 \'.split()`` returns ``[\'1\', \'2\', \'3\']``,\n and ``\' 1 2 3 \'.split(None, 1)`` returns ``[\'1\', \'2 3 \']``.\n\nstr.splitlines([keepends])\n\n Return a list of the lines in the string, breaking at line\n boundaries. Line breaks are not included in the resulting list\n unless *keepends* is given and true.\n\nstr.startswith(prefix[, start[, end]])\n\n Return ``True`` if string starts with the *prefix*, otherwise\n return ``False``. *prefix* can also be a tuple of prefixes to look\n for. With optional *start*, test string beginning at that\n position. With optional *end*, stop comparing string at that\n position.\n\n Changed in version 2.5: Accept tuples as *prefix*.\n\nstr.strip([chars])\n\n Return a copy of the string with the leading and trailing\n characters removed. The *chars* argument is a string specifying the\n set of characters to be removed. If omitted or ``None``, the\n *chars* argument defaults to removing whitespace. The *chars*\n argument is not a prefix or suffix; rather, all combinations of its\n values are stripped:\n\n >>> \' spacious \'.strip()\n \'spacious\'\n >>> \'www.example.com\'.strip(\'cmowz.\')\n \'example\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.swapcase()\n\n Return a copy of the string with uppercase characters converted to\n lowercase and vice versa.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.title()\n\n Return a titlecased version of the string where words start with an\n uppercase character and the remaining characters are lowercase.\n\n The algorithm uses a simple language-independent definition of a\n word as groups of consecutive letters. 
The definition works in\n many contexts but it means that apostrophes in contractions and\n possessives form word boundaries, which may not be the desired\n result:\n\n >>> "they\'re bill\'s friends from the UK".title()\n "They\'Re Bill\'S Friends From The Uk"\n\n A workaround for apostrophes can be constructed using regular\n expressions:\n\n >>> import re\n >>> def titlecase(s):\n return re.sub(r"[A-Za-z]+(\'[A-Za-z]+)?",\n lambda mo: mo.group(0)[0].upper() +\n mo.group(0)[1:].lower(),\n s)\n\n >>> titlecase("they\'re bill\'s friends.")\n "They\'re Bill\'s Friends."\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.translate(table[, deletechars])\n\n Return a copy of the string where all characters occurring in the\n optional argument *deletechars* are removed, and the remaining\n characters have been mapped through the given translation table,\n which must be a string of length 256.\n\n You can use the ``maketrans()`` helper function in the ``string``\n module to create a translation table. For string objects, set the\n *table* argument to ``None`` for translations that only delete\n characters:\n\n >>> \'read this short text\'.translate(None, \'aeiou\')\n \'rd ths shrt txt\'\n\n New in version 2.6: Support for a ``None`` *table* argument.\n\n For Unicode objects, the ``translate()`` method does not accept the\n optional *deletechars* argument. Instead, it returns a copy of the\n *s* where all characters have been mapped through the given\n translation table which must be a mapping of Unicode ordinals to\n Unicode ordinals, Unicode strings or ``None``. Unmapped characters\n are left untouched. Characters mapped to ``None`` are deleted.\n Note, a more flexible approach is to create a custom character\n mapping codec using the ``codecs`` module (see ``encodings.cp1251``\n for an example).\n\nstr.upper()\n\n Return a copy of the string converted to uppercase.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.zfill(width)\n\n Return the numeric string left filled with zeros in a string of\n length *width*. A sign prefix is handled correctly. The original\n string is returned if *width* is less than ``len(s)``.\n\n New in version 2.2.2.\n\nThe following methods are present only on unicode objects:\n\nunicode.isnumeric()\n\n Return ``True`` if there are only numeric characters in S,\n ``False`` otherwise. Numeric characters include digit characters,\n and all characters that have the Unicode numeric value property,\n e.g. U+2155, VULGAR FRACTION ONE FIFTH.\n\nunicode.isdecimal()\n\n Return ``True`` if there are only decimal characters in S,\n ``False`` otherwise. Decimal characters include digit characters,\n and all characters that that can be used to form decimal-radix\n numbers, e.g. U+0660, ARABIC-INDIC DIGIT ZERO.\n\n\nString Formatting Operations\n============================\n\nString and Unicode objects have one unique built-in operation: the\n``%`` operator (modulo). This is also known as the string\n*formatting* or *interpolation* operator. Given ``format % values``\n(where *format* is a string or Unicode object), ``%`` conversion\nspecifications in *format* are replaced with zero or more elements of\n*values*. The effect is similar to the using ``sprintf()`` in the C\nlanguage. If *format* is a Unicode object, or if any of the objects\nbeing converted using the ``%s`` conversion are Unicode objects, the\nresult will also be a Unicode object.\n\nIf *format* requires a single argument, *values* may be a single non-\ntuple object. 
[4] Otherwise, *values* must be a tuple with exactly\nthe number of items specified by the format string, or a single\nmapping object (for example, a dictionary).\n\nA conversion specifier contains two or more characters and has the\nfollowing components, which must occur in this order:\n\n1. The ``\'%\'`` character, which marks the start of the specifier.\n\n2. Mapping key (optional), consisting of a parenthesised sequence of\n characters (for example, ``(somename)``).\n\n3. Conversion flags (optional), which affect the result of some\n conversion types.\n\n4. Minimum field width (optional). If specified as an ``\'*\'``\n (asterisk), the actual width is read from the next element of the\n tuple in *values*, and the object to convert comes after the\n minimum field width and optional precision.\n\n5. Precision (optional), given as a ``\'.\'`` (dot) followed by the\n precision. If specified as ``\'*\'`` (an asterisk), the actual width\n is read from the next element of the tuple in *values*, and the\n value to convert comes after the precision.\n\n6. Length modifier (optional).\n\n7. Conversion type.\n\nWhen the right argument is a dictionary (or other mapping type), then\nthe formats in the string *must* include a parenthesised mapping key\ninto that dictionary inserted immediately after the ``\'%\'`` character.\nThe mapping key selects the value to be formatted from the mapping.\nFor example:\n\n>>> print \'%(language)s has %(#)03d quote types.\' % \\\n... {\'language\': "Python", "#": 2}\nPython has 002 quote types.\n\nIn this case no ``*`` specifiers may occur in a format (since they\nrequire a sequential parameter list).\n\nThe conversion flag characters are:\n\n+-----------+-----------------------------------------------------------------------+\n| Flag | Meaning |\n+===========+=======================================================================+\n| ``\'#\'`` | The value conversion will use the "alternate form" (where defined |\n| | below). |\n+-----------+-----------------------------------------------------------------------+\n| ``\'0\'`` | The conversion will be zero padded for numeric values. |\n+-----------+-----------------------------------------------------------------------+\n| ``\'-\'`` | The converted value is left adjusted (overrides the ``\'0\'`` |\n| | conversion if both are given). |\n+-----------+-----------------------------------------------------------------------+\n| ``\' \'`` | (a space) A blank should be left before a positive number (or empty |\n| | string) produced by a signed conversion. |\n+-----------+-----------------------------------------------------------------------+\n| ``\'+\'`` | A sign character (``\'+\'`` or ``\'-\'``) will precede the conversion |\n| | (overrides a "space" flag). |\n+-----------+-----------------------------------------------------------------------+\n\nA length modifier (``h``, ``l``, or ``L``) may be present, but is\nignored as it is not necessary for Python -- so e.g. ``%ld`` is\nidentical to ``%d``.\n\nThe conversion types are:\n\n+--------------+-------------------------------------------------------+---------+\n| Conversion | Meaning | Notes |\n+==============+=======================================================+=========+\n| ``\'d\'`` | Signed integer decimal. | |\n+--------------+-------------------------------------------------------+---------+\n| ``\'i\'`` | Signed integer decimal. | |\n+--------------+-------------------------------------------------------+---------+\n| ``\'o\'`` | Signed octal value. 
| (1) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'u\'`` | Obsolete type -- it is identical to ``\'d\'``. | (7) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'x\'`` | Signed hexadecimal (lowercase). | (2) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'X\'`` | Signed hexadecimal (uppercase). | (2) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'e\'`` | Floating point exponential format (lowercase). | (3) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'E\'`` | Floating point exponential format (uppercase). | (3) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'f\'`` | Floating point decimal format. | (3) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'F\'`` | Floating point decimal format. | (3) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'g\'`` | Floating point format. Uses lowercase exponential | (4) |\n| | format if exponent is less than -4 or not less than | |\n| | precision, decimal format otherwise. | |\n+--------------+-------------------------------------------------------+---------+\n| ``\'G\'`` | Floating point format. Uses uppercase exponential | (4) |\n| | format if exponent is less than -4 or not less than | |\n| | precision, decimal format otherwise. | |\n+--------------+-------------------------------------------------------+---------+\n| ``\'c\'`` | Single character (accepts integer or single character | |\n| | string). | |\n+--------------+-------------------------------------------------------+---------+\n| ``\'r\'`` | String (converts any Python object using ``repr()``). | (5) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'s\'`` | String (converts any Python object using ``str()``). | (6) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'%\'`` | No argument is converted, results in a ``\'%\'`` | |\n| | character in the result. | |\n+--------------+-------------------------------------------------------+---------+\n\nNotes:\n\n1. The alternate form causes a leading zero (``\'0\'``) to be inserted\n between left-hand padding and the formatting of the number if the\n leading character of the result is not already a zero.\n\n2. The alternate form causes a leading ``\'0x\'`` or ``\'0X\'`` (depending\n on whether the ``\'x\'`` or ``\'X\'`` format was used) to be inserted\n between left-hand padding and the formatting of the number if the\n leading character of the result is not already a zero.\n\n3. The alternate form causes the result to always contain a decimal\n point, even if no digits follow it.\n\n The precision determines the number of digits after the decimal\n point and defaults to 6.\n\n4. The alternate form causes the result to always contain a decimal\n point, and trailing zeroes are not removed as they would otherwise\n be.\n\n The precision determines the number of significant digits before\n and after the decimal point and defaults to 6.\n\n5. The ``%r`` conversion was added in Python 2.0.\n\n The precision determines the maximal number of characters used.\n\n6. 
If the object or format provided is a ``unicode`` string, the\n resulting string will also be ``unicode``.\n\n The precision determines the maximal number of characters used.\n\n7. See **PEP 237**.\n\nSince Python strings have an explicit length, ``%s`` conversions do\nnot assume that ``\'\\0\'`` is the end of the string.\n\nChanged in version 2.7: ``%f`` conversions for numbers whose absolute\nvalue is over 1e50 are no longer replaced by ``%g`` conversions.\n\nAdditional string operations are defined in standard modules\n``string`` and ``re``.\n\n\nXRange Type\n===========\n\nThe ``xrange`` type is an immutable sequence which is commonly used\nfor looping. The advantage of the ``xrange`` type is that an\n``xrange`` object will always take the same amount of memory, no\nmatter the size of the range it represents. There are no consistent\nperformance advantages.\n\nXRange objects have very little behavior: they only support indexing,\niteration, and the ``len()`` function.\n\n\nMutable Sequence Types\n======================\n\nList objects support additional operations that allow in-place\nmodification of the object. Other mutable sequence types (when added\nto the language) should also support these operations. Strings and\ntuples are immutable sequence types: such objects cannot be modified\nonce created. The following operations are defined on mutable sequence\ntypes (where *x* is an arbitrary object):\n\n+--------------------------------+----------------------------------+-----------------------+\n| Operation | Result | Notes |\n+================================+==================================+=======================+\n| ``s[i] = x`` | item *i* of *s* is replaced by | |\n| | *x* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s[i:j] = t`` | slice of *s* from *i* to *j* is | |\n| | replaced by the contents of the | |\n| | iterable *t* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``del s[i:j]`` | same as ``s[i:j] = []`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s[i:j:k] = t`` | the elements of ``s[i:j:k]`` are | (1) |\n| | replaced by those of *t* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``del s[i:j:k]`` | removes the elements of | |\n| | ``s[i:j:k]`` from the list | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.append(x)`` | same as ``s[len(s):len(s)] = | (2) |\n| | [x]`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.extend(x)`` | same as ``s[len(s):len(s)] = x`` | (3) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.count(x)`` | return number of *i*\'s for which | |\n| | ``s[i] == x`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.index(x[, i[, j]])`` | return smallest *k* such that | (4) |\n| | ``s[k] == x`` and ``i <= k < j`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.insert(i, x)`` | same as ``s[i:i] = [x]`` | (5) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.pop([i])`` | same as ``x = s[i]; del s[i]; | (6) |\n| | return x`` | 
|\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.remove(x)`` | same as ``del s[s.index(x)]`` | (4) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.reverse()`` | reverses the items of *s* in | (7) |\n| | place | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.sort([cmp[, key[, | sort the items of *s* in place | (7)(8)(9)(10) |\n| reverse]]])`` | | |\n+--------------------------------+----------------------------------+-----------------------+\n\nNotes:\n\n1. *t* must have the same length as the slice it is replacing.\n\n2. The C implementation of Python has historically accepted multiple\n parameters and implicitly joined them into a tuple; this no longer\n works in Python 2.0. Use of this misfeature has been deprecated\n since Python 1.4.\n\n3. *x* can be any iterable object.\n\n4. Raises ``ValueError`` when *x* is not found in *s*. When a negative\n index is passed as the second or third parameter to the ``index()``\n method, the list length is added, as for slice indices. If it is\n still negative, it is truncated to zero, as for slice indices.\n\n Changed in version 2.3: Previously, ``index()`` didn\'t have\n arguments for specifying start and stop positions.\n\n5. When a negative index is passed as the first parameter to the\n ``insert()`` method, the list length is added, as for slice\n indices. If it is still negative, it is truncated to zero, as for\n slice indices.\n\n Changed in version 2.3: Previously, all negative indices were\n truncated to zero.\n\n6. The ``pop()`` method is only supported by the list and array types.\n The optional argument *i* defaults to ``-1``, so that by default\n the last item is removed and returned.\n\n7. The ``sort()`` and ``reverse()`` methods modify the list in place\n for economy of space when sorting or reversing a large list. To\n remind you that they operate by side effect, they don\'t return the\n sorted or reversed list.\n\n8. The ``sort()`` method takes optional arguments for controlling the\n comparisons.\n\n *cmp* specifies a custom comparison function of two arguments (list\n items) which should return a negative, zero or positive number\n depending on whether the first argument is considered smaller than,\n equal to, or larger than the second argument: ``cmp=lambda x,y:\n cmp(x.lower(), y.lower())``. The default value is ``None``.\n\n *key* specifies a function of one argument that is used to extract\n a comparison key from each list element: ``key=str.lower``. The\n default value is ``None``.\n\n *reverse* is a boolean value. If set to ``True``, then the list\n elements are sorted as if each comparison were reversed.\n\n In general, the *key* and *reverse* conversion processes are much\n faster than specifying an equivalent *cmp* function. This is\n because *cmp* is called multiple times for each list element while\n *key* and *reverse* touch each element only once. Use\n ``functools.cmp_to_key()`` to convert an old-style *cmp* function\n to a *key* function.\n\n Changed in version 2.3: Support for ``None`` as an equivalent to\n omitting *cmp* was added.\n\n Changed in version 2.4: Support for *key* and *reverse* was added.\n\n9. Starting with Python 2.3, the ``sort()`` method is guaranteed to be\n stable. 
A sort is stable if it guarantees not to change the\n relative order of elements that compare equal --- this is helpful\n for sorting in multiple passes (for example, sort by department,\n then by salary grade).\n\n10. **CPython implementation detail:** While a list is being sorted,\n the effect of attempting to mutate, or even inspect, the list is\n undefined. The C implementation of Python 2.3 and newer makes the\n list appear empty for the duration, and raises ``ValueError`` if\n it can detect that the list has been mutated during a sort.\n', - 'typesseq-mutable': u"\nMutable Sequence Types\n**********************\n\nList objects support additional operations that allow in-place\nmodification of the object. Other mutable sequence types (when added\nto the language) should also support these operations. Strings and\ntuples are immutable sequence types: such objects cannot be modified\nonce created. The following operations are defined on mutable sequence\ntypes (where *x* is an arbitrary object):\n\n+--------------------------------+----------------------------------+-----------------------+\n| Operation | Result | Notes |\n+================================+==================================+=======================+\n| ``s[i] = x`` | item *i* of *s* is replaced by | |\n| | *x* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s[i:j] = t`` | slice of *s* from *i* to *j* is | |\n| | replaced by the contents of the | |\n| | iterable *t* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``del s[i:j]`` | same as ``s[i:j] = []`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s[i:j:k] = t`` | the elements of ``s[i:j:k]`` are | (1) |\n| | replaced by those of *t* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``del s[i:j:k]`` | removes the elements of | |\n| | ``s[i:j:k]`` from the list | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.append(x)`` | same as ``s[len(s):len(s)] = | (2) |\n| | [x]`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.extend(x)`` | same as ``s[len(s):len(s)] = x`` | (3) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.count(x)`` | return number of *i*'s for which | |\n| | ``s[i] == x`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.index(x[, i[, j]])`` | return smallest *k* such that | (4) |\n| | ``s[k] == x`` and ``i <= k < j`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.insert(i, x)`` | same as ``s[i:i] = [x]`` | (5) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.pop([i])`` | same as ``x = s[i]; del s[i]; | (6) |\n| | return x`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.remove(x)`` | same as ``del s[s.index(x)]`` | (4) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.reverse()`` | reverses the items of *s* in | (7) |\n| | place | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.sort([cmp[, key[, | sort the items of *s* in place | (7)(8)(9)(10) 
|\n| reverse]]])`` | | |\n+--------------------------------+----------------------------------+-----------------------+\n\nNotes:\n\n1. *t* must have the same length as the slice it is replacing.\n\n2. The C implementation of Python has historically accepted multiple\n parameters and implicitly joined them into a tuple; this no longer\n works in Python 2.0. Use of this misfeature has been deprecated\n since Python 1.4.\n\n3. *x* can be any iterable object.\n\n4. Raises ``ValueError`` when *x* is not found in *s*. When a negative\n index is passed as the second or third parameter to the ``index()``\n method, the list length is added, as for slice indices. If it is\n still negative, it is truncated to zero, as for slice indices.\n\n Changed in version 2.3: Previously, ``index()`` didn't have\n arguments for specifying start and stop positions.\n\n5. When a negative index is passed as the first parameter to the\n ``insert()`` method, the list length is added, as for slice\n indices. If it is still negative, it is truncated to zero, as for\n slice indices.\n\n Changed in version 2.3: Previously, all negative indices were\n truncated to zero.\n\n6. The ``pop()`` method is only supported by the list and array types.\n The optional argument *i* defaults to ``-1``, so that by default\n the last item is removed and returned.\n\n7. The ``sort()`` and ``reverse()`` methods modify the list in place\n for economy of space when sorting or reversing a large list. To\n remind you that they operate by side effect, they don't return the\n sorted or reversed list.\n\n8. The ``sort()`` method takes optional arguments for controlling the\n comparisons.\n\n *cmp* specifies a custom comparison function of two arguments (list\n items) which should return a negative, zero or positive number\n depending on whether the first argument is considered smaller than,\n equal to, or larger than the second argument: ``cmp=lambda x,y:\n cmp(x.lower(), y.lower())``. The default value is ``None``.\n\n *key* specifies a function of one argument that is used to extract\n a comparison key from each list element: ``key=str.lower``. The\n default value is ``None``.\n\n *reverse* is a boolean value. If set to ``True``, then the list\n elements are sorted as if each comparison were reversed.\n\n In general, the *key* and *reverse* conversion processes are much\n faster than specifying an equivalent *cmp* function. This is\n because *cmp* is called multiple times for each list element while\n *key* and *reverse* touch each element only once. Use\n ``functools.cmp_to_key()`` to convert an old-style *cmp* function\n to a *key* function.\n\n Changed in version 2.3: Support for ``None`` as an equivalent to\n omitting *cmp* was added.\n\n Changed in version 2.4: Support for *key* and *reverse* was added.\n\n9. Starting with Python 2.3, the ``sort()`` method is guaranteed to be\n stable. A sort is stable if it guarantees not to change the\n relative order of elements that compare equal --- this is helpful\n for sorting in multiple passes (for example, sort by department,\n then by salary grade).\n\n10. **CPython implementation detail:** While a list is being sorted,\n the effect of attempting to mutate, or even inspect, the list is\n undefined. 
The C implementation of Python 2.3 and newer makes the\n list appear empty for the duration, and raises ``ValueError`` if\n it can detect that the list has been mutated during a sort.\n", + 'typesseq': u'\nSequence Types --- ``str``, ``unicode``, ``list``, ``tuple``, ``bytearray``, ``buffer``, ``xrange``\n***************************************************************************************************\n\nThere are seven sequence types: strings, Unicode strings, lists,\ntuples, bytearrays, buffers, and xrange objects.\n\nFor other containers see the built in ``dict`` and ``set`` classes,\nand the ``collections`` module.\n\nString literals are written in single or double quotes: ``\'xyzzy\'``,\n``"frobozz"``. See *String literals* for more about string literals.\nUnicode strings are much like strings, but are specified in the syntax\nusing a preceding ``\'u\'`` character: ``u\'abc\'``, ``u"def"``. In\naddition to the functionality described here, there are also string-\nspecific methods described in the *String Methods* section. Lists are\nconstructed with square brackets, separating items with commas: ``[a,\nb, c]``. Tuples are constructed by the comma operator (not within\nsquare brackets), with or without enclosing parentheses, but an empty\ntuple must have the enclosing parentheses, such as ``a, b, c`` or\n``()``. A single item tuple must have a trailing comma, such as\n``(d,)``.\n\nBytearray objects are created with the built-in function\n``bytearray()``.\n\nBuffer objects are not directly supported by Python syntax, but can be\ncreated by calling the built-in function ``buffer()``. They don\'t\nsupport concatenation or repetition.\n\nObjects of type xrange are similar to buffers in that there is no\nspecific syntax to create them, but they are created using the\n``xrange()`` function. They don\'t support slicing, concatenation or\nrepetition, and using ``in``, ``not in``, ``min()`` or ``max()`` on\nthem is inefficient.\n\nMost sequence types support the following operations. The ``in`` and\n``not in`` operations have the same priorities as the comparison\noperations. The ``+`` and ``*`` operations have the same priority as\nthe corresponding numeric operations. [3] Additional methods are\nprovided for *Mutable Sequence Types*.\n\nThis table lists the sequence operations sorted in ascending priority\n(operations in the same box have the same priority). 
In the table,\n*s* and *t* are sequences of the same type; *n*, *i* and *j* are\nintegers:\n\n+--------------------+----------------------------------+------------+\n| Operation | Result | Notes |\n+====================+==================================+============+\n| ``x in s`` | ``True`` if an item of *s* is | (1) |\n| | equal to *x*, else ``False`` | |\n+--------------------+----------------------------------+------------+\n| ``x not in s`` | ``False`` if an item of *s* is | (1) |\n| | equal to *x*, else ``True`` | |\n+--------------------+----------------------------------+------------+\n| ``s + t`` | the concatenation of *s* and *t* | (6) |\n+--------------------+----------------------------------+------------+\n| ``s * n, n * s`` | *n* shallow copies of *s* | (2) |\n| | concatenated | |\n+--------------------+----------------------------------+------------+\n| ``s[i]`` | *i*\'th item of *s*, origin 0 | (3) |\n+--------------------+----------------------------------+------------+\n| ``s[i:j]`` | slice of *s* from *i* to *j* | (3)(4) |\n+--------------------+----------------------------------+------------+\n| ``s[i:j:k]`` | slice of *s* from *i* to *j* | (3)(5) |\n| | with step *k* | |\n+--------------------+----------------------------------+------------+\n| ``len(s)`` | length of *s* | |\n+--------------------+----------------------------------+------------+\n| ``min(s)`` | smallest item of *s* | |\n+--------------------+----------------------------------+------------+\n| ``max(s)`` | largest item of *s* | |\n+--------------------+----------------------------------+------------+\n| ``s.index(i)`` | index of the first occurence of | |\n| | *i* in *s* | |\n+--------------------+----------------------------------+------------+\n| ``s.count(i)`` | total number of occurences of | |\n| | *i* in *s* | |\n+--------------------+----------------------------------+------------+\n\nSequence types also support comparisons. In particular, tuples and\nlists are compared lexicographically by comparing corresponding\nelements. This means that to compare equal, every element must compare\nequal and the two sequences must be of the same type and have the same\nlength. (For full details see *Comparisons* in the language\nreference.)\n\nNotes:\n\n1. When *s* is a string or Unicode string object the ``in`` and ``not\n in`` operations act like a substring test. In Python versions\n before 2.3, *x* had to be a string of length 1. In Python 2.3 and\n beyond, *x* may be a string of any length.\n\n2. Values of *n* less than ``0`` are treated as ``0`` (which yields an\n empty sequence of the same type as *s*). Note also that the copies\n are shallow; nested structures are not copied. This often haunts\n new Python programmers; consider:\n\n >>> lists = [[]] * 3\n >>> lists\n [[], [], []]\n >>> lists[0].append(3)\n >>> lists\n [[3], [3], [3]]\n\n What has happened is that ``[[]]`` is a one-element list containing\n an empty list, so all three elements of ``[[]] * 3`` are (pointers\n to) this single empty list. Modifying any of the elements of\n ``lists`` modifies this single list. You can create a list of\n different lists this way:\n\n >>> lists = [[] for i in range(3)]\n >>> lists[0].append(3)\n >>> lists[1].append(5)\n >>> lists[2].append(7)\n >>> lists\n [[3], [5], [7]]\n\n3. If *i* or *j* is negative, the index is relative to the end of the\n string: ``len(s) + i`` or ``len(s) + j`` is substituted. But note\n that ``-0`` is still ``0``.\n\n4. 
The slice of *s* from *i* to *j* is defined as the sequence of\n items with index *k* such that ``i <= k < j``. If *i* or *j* is\n greater than ``len(s)``, use ``len(s)``. If *i* is omitted or\n ``None``, use ``0``. If *j* is omitted or ``None``, use\n ``len(s)``. If *i* is greater than or equal to *j*, the slice is\n empty.\n\n5. The slice of *s* from *i* to *j* with step *k* is defined as the\n sequence of items with index ``x = i + n*k`` such that ``0 <= n <\n (j-i)/k``. In other words, the indices are ``i``, ``i+k``,\n ``i+2*k``, ``i+3*k`` and so on, stopping when *j* is reached (but\n never including *j*). If *i* or *j* is greater than ``len(s)``,\n use ``len(s)``. If *i* or *j* are omitted or ``None``, they become\n "end" values (which end depends on the sign of *k*). Note, *k*\n cannot be zero. If *k* is ``None``, it is treated like ``1``.\n\n6. **CPython implementation detail:** If *s* and *t* are both strings,\n some Python implementations such as CPython can usually perform an\n in-place optimization for assignments of the form ``s = s + t`` or\n ``s += t``. When applicable, this optimization makes quadratic\n run-time much less likely. This optimization is both version and\n implementation dependent. For performance sensitive code, it is\n preferable to use the ``str.join()`` method which assures\n consistent linear concatenation performance across versions and\n implementations.\n\n Changed in version 2.4: Formerly, string concatenation never\n occurred in-place.\n\n\nString Methods\n==============\n\nBelow are listed the string methods which both 8-bit strings and\nUnicode objects support. Some of them are also available on\n``bytearray`` objects.\n\nIn addition, Python\'s strings support the sequence type methods\ndescribed in the *Sequence Types --- str, unicode, list, tuple,\nbytearray, buffer, xrange* section. To output formatted strings use\ntemplate strings or the ``%`` operator described in the *String\nFormatting Operations* section. Also, see the ``re`` module for string\nfunctions based on regular expressions.\n\nstr.capitalize()\n\n Return a copy of the string with its first character capitalized\n and the rest lowercased.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.center(width[, fillchar])\n\n Return centered in a string of length *width*. Padding is done\n using the specified *fillchar* (default is a space).\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.count(sub[, start[, end]])\n\n Return the number of non-overlapping occurrences of substring *sub*\n in the range [*start*, *end*]. Optional arguments *start* and\n *end* are interpreted as in slice notation.\n\nstr.decode([encoding[, errors]])\n\n Decodes the string using the codec registered for *encoding*.\n *encoding* defaults to the default string encoding. *errors* may\n be given to set a different error handling scheme. The default is\n ``\'strict\'``, meaning that encoding errors raise ``UnicodeError``.\n Other possible values are ``\'ignore\'``, ``\'replace\'`` and any other\n name registered via ``codecs.register_error()``, see section *Codec\n Base Classes*.\n\n New in version 2.2.\n\n Changed in version 2.3: Support for other error handling schemes\n added.\n\n Changed in version 2.7: Support for keyword arguments added.\n\nstr.encode([encoding[, errors]])\n\n Return an encoded version of the string. Default encoding is the\n current default string encoding. *errors* may be given to set a\n different error handling scheme. 
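   For instance, a minimal interactive sketch (assuming a Python 2.7
   session; ``utf-8`` and ``ascii`` name standard codecs, and the sample
   string is made up):

      >>> u'caf\xe9'.encode('utf-8')             # default 'strict' handling
      'caf\xc3\xa9'
      >>> u'caf\xe9'.encode('ascii', 'ignore')   # drop unencodable characters
      'caf'
      >>> u'caf\xe9'.encode('ascii', 'replace')  # substitute '?' instead
      'caf?'
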
The default for *errors* is\n ``\'strict\'``, meaning that encoding errors raise a\n ``UnicodeError``. Other possible values are ``\'ignore\'``,\n ``\'replace\'``, ``\'xmlcharrefreplace\'``, ``\'backslashreplace\'`` and\n any other name registered via ``codecs.register_error()``, see\n section *Codec Base Classes*. For a list of possible encodings, see\n section *Standard Encodings*.\n\n New in version 2.0.\n\n Changed in version 2.3: Support for ``\'xmlcharrefreplace\'`` and\n ``\'backslashreplace\'`` and other error handling schemes added.\n\n Changed in version 2.7: Support for keyword arguments added.\n\nstr.endswith(suffix[, start[, end]])\n\n Return ``True`` if the string ends with the specified *suffix*,\n otherwise return ``False``. *suffix* can also be a tuple of\n suffixes to look for. With optional *start*, test beginning at\n that position. With optional *end*, stop comparing at that\n position.\n\n Changed in version 2.5: Accept tuples as *suffix*.\n\nstr.expandtabs([tabsize])\n\n Return a copy of the string where all tab characters are replaced\n by one or more spaces, depending on the current column and the\n given tab size. The column number is reset to zero after each\n newline occurring in the string. If *tabsize* is not given, a tab\n size of ``8`` characters is assumed. This doesn\'t understand other\n non-printing characters or escape sequences.\n\nstr.find(sub[, start[, end]])\n\n Return the lowest index in the string where substring *sub* is\n found, such that *sub* is contained in the slice ``s[start:end]``.\n Optional arguments *start* and *end* are interpreted as in slice\n notation. Return ``-1`` if *sub* is not found.\n\n Note: The ``find()`` method should be used only if you need to know the\n position of *sub*. To check if *sub* is a substring or not, use\n the ``in`` operator:\n\n >>> \'Py\' in \'Python\'\n True\n\nstr.format(*args, **kwargs)\n\n Perform a string formatting operation. The string on which this\n method is called can contain literal text or replacement fields\n delimited by braces ``{}``. Each replacement field contains either\n the numeric index of a positional argument, or the name of a\n keyword argument. 
Returns a copy of the string where each\n replacement field is replaced with the string value of the\n corresponding argument.\n\n >>> "The sum of 1 + 2 is {0}".format(1+2)\n \'The sum of 1 + 2 is 3\'\n\n See *Format String Syntax* for a description of the various\n formatting options that can be specified in format strings.\n\n This method of string formatting is the new standard in Python 3.0,\n and should be preferred to the ``%`` formatting described in\n *String Formatting Operations* in new code.\n\n New in version 2.6.\n\nstr.index(sub[, start[, end]])\n\n Like ``find()``, but raise ``ValueError`` when the substring is not\n found.\n\nstr.isalnum()\n\n Return true if all characters in the string are alphanumeric and\n there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isalpha()\n\n Return true if all characters in the string are alphabetic and\n there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isdigit()\n\n Return true if all characters in the string are digits and there is\n at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.islower()\n\n Return true if all cased characters in the string are lowercase and\n there is at least one cased character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isspace()\n\n Return true if there are only whitespace characters in the string\n and there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.istitle()\n\n Return true if the string is a titlecased string and there is at\n least one character, for example uppercase characters may only\n follow uncased characters and lowercase characters only cased ones.\n Return false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isupper()\n\n Return true if all cased characters in the string are uppercase and\n there is at least one cased character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.join(iterable)\n\n Return a string which is the concatenation of the strings in the\n *iterable* *iterable*. The separator between elements is the\n string providing this method.\n\nstr.ljust(width[, fillchar])\n\n Return the string left justified in a string of length *width*.\n Padding is done using the specified *fillchar* (default is a\n space). The original string is returned if *width* is less than\n ``len(s)``.\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.lower()\n\n Return a copy of the string converted to lowercase.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.lstrip([chars])\n\n Return a copy of the string with leading characters removed. The\n *chars* argument is a string specifying the set of characters to be\n removed. If omitted or ``None``, the *chars* argument defaults to\n removing whitespace. The *chars* argument is not a prefix; rather,\n all combinations of its values are stripped:\n\n >>> \' spacious \'.lstrip()\n \'spacious \'\n >>> \'www.example.com\'.lstrip(\'cmowz.\')\n \'example.com\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.partition(sep)\n\n Split the string at the first occurrence of *sep*, and return a\n 3-tuple containing the part before the separator, the separator\n itself, and the part after the separator. 
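   A small sketch (Python 2.7 session assumed; the address is a made-up
   example value):

      >>> 'user@example.com'.partition('@')
      ('user', '@', 'example.com')
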
If the separator is not\n found, return a 3-tuple containing the string itself, followed by\n two empty strings.\n\n New in version 2.5.\n\nstr.replace(old, new[, count])\n\n Return a copy of the string with all occurrences of substring *old*\n replaced by *new*. If the optional argument *count* is given, only\n the first *count* occurrences are replaced.\n\nstr.rfind(sub[, start[, end]])\n\n Return the highest index in the string where substring *sub* is\n found, such that *sub* is contained within ``s[start:end]``.\n Optional arguments *start* and *end* are interpreted as in slice\n notation. Return ``-1`` on failure.\n\nstr.rindex(sub[, start[, end]])\n\n Like ``rfind()`` but raises ``ValueError`` when the substring *sub*\n is not found.\n\nstr.rjust(width[, fillchar])\n\n Return the string right justified in a string of length *width*.\n Padding is done using the specified *fillchar* (default is a\n space). The original string is returned if *width* is less than\n ``len(s)``.\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.rpartition(sep)\n\n Split the string at the last occurrence of *sep*, and return a\n 3-tuple containing the part before the separator, the separator\n itself, and the part after the separator. If the separator is not\n found, return a 3-tuple containing two empty strings, followed by\n the string itself.\n\n New in version 2.5.\n\nstr.rsplit([sep[, maxsplit]])\n\n Return a list of the words in the string, using *sep* as the\n delimiter string. If *maxsplit* is given, at most *maxsplit* splits\n are done, the *rightmost* ones. If *sep* is not specified or\n ``None``, any whitespace string is a separator. Except for\n splitting from the right, ``rsplit()`` behaves like ``split()``\n which is described in detail below.\n\n New in version 2.4.\n\nstr.rstrip([chars])\n\n Return a copy of the string with trailing characters removed. The\n *chars* argument is a string specifying the set of characters to be\n removed. If omitted or ``None``, the *chars* argument defaults to\n removing whitespace. The *chars* argument is not a suffix; rather,\n all combinations of its values are stripped:\n\n >>> \' spacious \'.rstrip()\n \' spacious\'\n >>> \'mississippi\'.rstrip(\'ipz\')\n \'mississ\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.split([sep[, maxsplit]])\n\n Return a list of the words in the string, using *sep* as the\n delimiter string. If *maxsplit* is given, at most *maxsplit*\n splits are done (thus, the list will have at most ``maxsplit+1``\n elements). If *maxsplit* is not specified, then there is no limit\n on the number of splits (all possible splits are made).\n\n If *sep* is given, consecutive delimiters are not grouped together\n and are deemed to delimit empty strings (for example,\n ``\'1,,2\'.split(\',\')`` returns ``[\'1\', \'\', \'2\']``). The *sep*\n argument may consist of multiple characters (for example,\n ``\'1<>2<>3\'.split(\'<>\')`` returns ``[\'1\', \'2\', \'3\']``). Splitting\n an empty string with a specified separator returns ``[\'\']``.\n\n If *sep* is not specified or is ``None``, a different splitting\n algorithm is applied: runs of consecutive whitespace are regarded\n as a single separator, and the result will contain no empty strings\n at the start or end if the string has leading or trailing\n whitespace. 
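[Editorial aside, not part of the original help text: the two splitting algorithms described above can be contrasted directly; an explicit separator keeps the empty strings produced by consecutive delimiters, while the default whitespace algorithm does not:

    >>> '  a  b '.split(' ')
    ['', '', 'a', '', 'b', '']
    >>> '  a  b '.split()
    ['a', 'b']
]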
Consequently, splitting an empty string or a string\n consisting of just whitespace with a ``None`` separator returns\n ``[]``.\n\n For example, ``\' 1 2 3 \'.split()`` returns ``[\'1\', \'2\', \'3\']``,\n and ``\' 1 2 3 \'.split(None, 1)`` returns ``[\'1\', \'2 3 \']``.\n\nstr.splitlines([keepends])\n\n Return a list of the lines in the string, breaking at line\n boundaries. Line breaks are not included in the resulting list\n unless *keepends* is given and true.\n\nstr.startswith(prefix[, start[, end]])\n\n Return ``True`` if string starts with the *prefix*, otherwise\n return ``False``. *prefix* can also be a tuple of prefixes to look\n for. With optional *start*, test string beginning at that\n position. With optional *end*, stop comparing string at that\n position.\n\n Changed in version 2.5: Accept tuples as *prefix*.\n\nstr.strip([chars])\n\n Return a copy of the string with the leading and trailing\n characters removed. The *chars* argument is a string specifying the\n set of characters to be removed. If omitted or ``None``, the\n *chars* argument defaults to removing whitespace. The *chars*\n argument is not a prefix or suffix; rather, all combinations of its\n values are stripped:\n\n >>> \' spacious \'.strip()\n \'spacious\'\n >>> \'www.example.com\'.strip(\'cmowz.\')\n \'example\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.swapcase()\n\n Return a copy of the string with uppercase characters converted to\n lowercase and vice versa.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.title()\n\n Return a titlecased version of the string where words start with an\n uppercase character and the remaining characters are lowercase.\n\n The algorithm uses a simple language-independent definition of a\n word as groups of consecutive letters. The definition works in\n many contexts but it means that apostrophes in contractions and\n possessives form word boundaries, which may not be the desired\n result:\n\n >>> "they\'re bill\'s friends from the UK".title()\n "They\'Re Bill\'S Friends From The Uk"\n\n A workaround for apostrophes can be constructed using regular\n expressions:\n\n >>> import re\n >>> def titlecase(s):\n return re.sub(r"[A-Za-z]+(\'[A-Za-z]+)?",\n lambda mo: mo.group(0)[0].upper() +\n mo.group(0)[1:].lower(),\n s)\n\n >>> titlecase("they\'re bill\'s friends.")\n "They\'re Bill\'s Friends."\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.translate(table[, deletechars])\n\n Return a copy of the string where all characters occurring in the\n optional argument *deletechars* are removed, and the remaining\n characters have been mapped through the given translation table,\n which must be a string of length 256.\n\n You can use the ``maketrans()`` helper function in the ``string``\n module to create a translation table. For string objects, set the\n *table* argument to ``None`` for translations that only delete\n characters:\n\n >>> \'read this short text\'.translate(None, \'aeiou\')\n \'rd ths shrt txt\'\n\n New in version 2.6: Support for a ``None`` *table* argument.\n\n For Unicode objects, the ``translate()`` method does not accept the\n optional *deletechars* argument. Instead, it returns a copy of the\n *s* where all characters have been mapped through the given\n translation table which must be a mapping of Unicode ordinals to\n Unicode ordinals, Unicode strings or ``None``. Unmapped characters\n are left untouched. 
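[Editorial aside, not part of the original help text: a small Unicode example consistent with the mapping behaviour just described; ordinals may map to ordinals or to strings, and unmapped characters pass through unchanged:

    >>> u'abc'.translate({ord(u'a'): u'A', ord(u'c'): u'see'})
    u'Absee'
]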
Characters mapped to ``None`` are deleted.\n Note, a more flexible approach is to create a custom character\n mapping codec using the ``codecs`` module (see ``encodings.cp1251``\n for an example).\n\nstr.upper()\n\n Return a copy of the string converted to uppercase.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.zfill(width)\n\n Return the numeric string left filled with zeros in a string of\n length *width*. A sign prefix is handled correctly. The original\n string is returned if *width* is less than ``len(s)``.\n\n New in version 2.2.2.\n\nThe following methods are present only on unicode objects:\n\nunicode.isnumeric()\n\n Return ``True`` if there are only numeric characters in S,\n ``False`` otherwise. Numeric characters include digit characters,\n and all characters that have the Unicode numeric value property,\n e.g. U+2155, VULGAR FRACTION ONE FIFTH.\n\nunicode.isdecimal()\n\n Return ``True`` if there are only decimal characters in S,\n ``False`` otherwise. Decimal characters include digit characters,\n and all characters that that can be used to form decimal-radix\n numbers, e.g. U+0660, ARABIC-INDIC DIGIT ZERO.\n\n\nString Formatting Operations\n============================\n\nString and Unicode objects have one unique built-in operation: the\n``%`` operator (modulo). This is also known as the string\n*formatting* or *interpolation* operator. Given ``format % values``\n(where *format* is a string or Unicode object), ``%`` conversion\nspecifications in *format* are replaced with zero or more elements of\n*values*. The effect is similar to the using ``sprintf()`` in the C\nlanguage. If *format* is a Unicode object, or if any of the objects\nbeing converted using the ``%s`` conversion are Unicode objects, the\nresult will also be a Unicode object.\n\nIf *format* requires a single argument, *values* may be a single non-\ntuple object. [4] Otherwise, *values* must be a tuple with exactly\nthe number of items specified by the format string, or a single\nmapping object (for example, a dictionary).\n\nA conversion specifier contains two or more characters and has the\nfollowing components, which must occur in this order:\n\n1. The ``\'%\'`` character, which marks the start of the specifier.\n\n2. Mapping key (optional), consisting of a parenthesised sequence of\n characters (for example, ``(somename)``).\n\n3. Conversion flags (optional), which affect the result of some\n conversion types.\n\n4. Minimum field width (optional). If specified as an ``\'*\'``\n (asterisk), the actual width is read from the next element of the\n tuple in *values*, and the object to convert comes after the\n minimum field width and optional precision.\n\n5. Precision (optional), given as a ``\'.\'`` (dot) followed by the\n precision. If specified as ``\'*\'`` (an asterisk), the actual width\n is read from the next element of the tuple in *values*, and the\n value to convert comes after the precision.\n\n6. Length modifier (optional).\n\n7. Conversion type.\n\nWhen the right argument is a dictionary (or other mapping type), then\nthe formats in the string *must* include a parenthesised mapping key\ninto that dictionary inserted immediately after the ``\'%\'`` character.\nThe mapping key selects the value to be formatted from the mapping.\nFor example:\n\n>>> print \'%(language)s has %(number)03d quote types.\' % \\\n... 
{"language": "Python", "number": 2}\nPython has 002 quote types.\n\nIn this case no ``*`` specifiers may occur in a format (since they\nrequire a sequential parameter list).\n\nThe conversion flag characters are:\n\n+-----------+-----------------------------------------------------------------------+\n| Flag | Meaning |\n+===========+=======================================================================+\n| ``\'#\'`` | The value conversion will use the "alternate form" (where defined |\n| | below). |\n+-----------+-----------------------------------------------------------------------+\n| ``\'0\'`` | The conversion will be zero padded for numeric values. |\n+-----------+-----------------------------------------------------------------------+\n| ``\'-\'`` | The converted value is left adjusted (overrides the ``\'0\'`` |\n| | conversion if both are given). |\n+-----------+-----------------------------------------------------------------------+\n| ``\' \'`` | (a space) A blank should be left before a positive number (or empty |\n| | string) produced by a signed conversion. |\n+-----------+-----------------------------------------------------------------------+\n| ``\'+\'`` | A sign character (``\'+\'`` or ``\'-\'``) will precede the conversion |\n| | (overrides a "space" flag). |\n+-----------+-----------------------------------------------------------------------+\n\nA length modifier (``h``, ``l``, or ``L``) may be present, but is\nignored as it is not necessary for Python -- so e.g. ``%ld`` is\nidentical to ``%d``.\n\nThe conversion types are:\n\n+--------------+-------------------------------------------------------+---------+\n| Conversion | Meaning | Notes |\n+==============+=======================================================+=========+\n| ``\'d\'`` | Signed integer decimal. | |\n+--------------+-------------------------------------------------------+---------+\n| ``\'i\'`` | Signed integer decimal. | |\n+--------------+-------------------------------------------------------+---------+\n| ``\'o\'`` | Signed octal value. | (1) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'u\'`` | Obsolete type -- it is identical to ``\'d\'``. | (7) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'x\'`` | Signed hexadecimal (lowercase). | (2) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'X\'`` | Signed hexadecimal (uppercase). | (2) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'e\'`` | Floating point exponential format (lowercase). | (3) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'E\'`` | Floating point exponential format (uppercase). | (3) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'f\'`` | Floating point decimal format. | (3) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'F\'`` | Floating point decimal format. | (3) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'g\'`` | Floating point format. Uses lowercase exponential | (4) |\n| | format if exponent is less than -4 or not less than | |\n| | precision, decimal format otherwise. | |\n+--------------+-------------------------------------------------------+---------+\n| ``\'G\'`` | Floating point format. 
Uses uppercase exponential | (4) |\n| | format if exponent is less than -4 or not less than | |\n| | precision, decimal format otherwise. | |\n+--------------+-------------------------------------------------------+---------+\n| ``\'c\'`` | Single character (accepts integer or single character | |\n| | string). | |\n+--------------+-------------------------------------------------------+---------+\n| ``\'r\'`` | String (converts any Python object using ``repr()``). | (5) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'s\'`` | String (converts any Python object using ``str()``). | (6) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'%\'`` | No argument is converted, results in a ``\'%\'`` | |\n| | character in the result. | |\n+--------------+-------------------------------------------------------+---------+\n\nNotes:\n\n1. The alternate form causes a leading zero (``\'0\'``) to be inserted\n between left-hand padding and the formatting of the number if the\n leading character of the result is not already a zero.\n\n2. The alternate form causes a leading ``\'0x\'`` or ``\'0X\'`` (depending\n on whether the ``\'x\'`` or ``\'X\'`` format was used) to be inserted\n between left-hand padding and the formatting of the number if the\n leading character of the result is not already a zero.\n\n3. The alternate form causes the result to always contain a decimal\n point, even if no digits follow it.\n\n The precision determines the number of digits after the decimal\n point and defaults to 6.\n\n4. The alternate form causes the result to always contain a decimal\n point, and trailing zeroes are not removed as they would otherwise\n be.\n\n The precision determines the number of significant digits before\n and after the decimal point and defaults to 6.\n\n5. The ``%r`` conversion was added in Python 2.0.\n\n The precision determines the maximal number of characters used.\n\n6. If the object or format provided is a ``unicode`` string, the\n resulting string will also be ``unicode``.\n\n The precision determines the maximal number of characters used.\n\n7. See **PEP 237**.\n\nSince Python strings have an explicit length, ``%s`` conversions do\nnot assume that ``\'\\0\'`` is the end of the string.\n\nChanged in version 2.7: ``%f`` conversions for numbers whose absolute\nvalue is over 1e50 are no longer replaced by ``%g`` conversions.\n\nAdditional string operations are defined in standard modules\n``string`` and ``re``.\n\n\nXRange Type\n===========\n\nThe ``xrange`` type is an immutable sequence which is commonly used\nfor looping. The advantage of the ``xrange`` type is that an\n``xrange`` object will always take the same amount of memory, no\nmatter the size of the range it represents. There are no consistent\nperformance advantages.\n\nXRange objects have very little behavior: they only support indexing,\niteration, and the ``len()`` function.\n\n\nMutable Sequence Types\n======================\n\nList and ``bytearray`` objects support additional operations that\nallow in-place modification of the object. Other mutable sequence\ntypes (when added to the language) should also support these\noperations. Strings and tuples are immutable sequence types: such\nobjects cannot be modified once created. 
The following operations are\ndefined on mutable sequence types (where *x* is an arbitrary object):\n\n+--------------------------------+----------------------------------+-----------------------+\n| Operation | Result | Notes |\n+================================+==================================+=======================+\n| ``s[i] = x`` | item *i* of *s* is replaced by | |\n| | *x* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s[i:j] = t`` | slice of *s* from *i* to *j* is | |\n| | replaced by the contents of the | |\n| | iterable *t* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``del s[i:j]`` | same as ``s[i:j] = []`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s[i:j:k] = t`` | the elements of ``s[i:j:k]`` are | (1) |\n| | replaced by those of *t* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``del s[i:j:k]`` | removes the elements of | |\n| | ``s[i:j:k]`` from the list | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.append(x)`` | same as ``s[len(s):len(s)] = | (2) |\n| | [x]`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.extend(x)`` | same as ``s[len(s):len(s)] = x`` | (3) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.count(x)`` | return number of *i*\'s for which | |\n| | ``s[i] == x`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.index(x[, i[, j]])`` | return smallest *k* such that | (4) |\n| | ``s[k] == x`` and ``i <= k < j`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.insert(i, x)`` | same as ``s[i:i] = [x]`` | (5) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.pop([i])`` | same as ``x = s[i]; del s[i]; | (6) |\n| | return x`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.remove(x)`` | same as ``del s[s.index(x)]`` | (4) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.reverse()`` | reverses the items of *s* in | (7) |\n| | place | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.sort([cmp[, key[, | sort the items of *s* in place | (7)(8)(9)(10) |\n| reverse]]])`` | | |\n+--------------------------------+----------------------------------+-----------------------+\n\nNotes:\n\n1. *t* must have the same length as the slice it is replacing.\n\n2. The C implementation of Python has historically accepted multiple\n parameters and implicitly joined them into a tuple; this no longer\n works in Python 2.0. Use of this misfeature has been deprecated\n since Python 1.4.\n\n3. *x* can be any iterable object.\n\n4. Raises ``ValueError`` when *x* is not found in *s*. When a negative\n index is passed as the second or third parameter to the ``index()``\n method, the list length is added, as for slice indices. If it is\n still negative, it is truncated to zero, as for slice indices.\n\n Changed in version 2.3: Previously, ``index()`` didn\'t have\n arguments for specifying start and stop positions.\n\n5. 
When a negative index is passed as the first parameter to the\n ``insert()`` method, the list length is added, as for slice\n indices. If it is still negative, it is truncated to zero, as for\n slice indices.\n\n Changed in version 2.3: Previously, all negative indices were\n truncated to zero.\n\n6. The ``pop()`` method is only supported by the list and array types.\n The optional argument *i* defaults to ``-1``, so that by default\n the last item is removed and returned.\n\n7. The ``sort()`` and ``reverse()`` methods modify the list in place\n for economy of space when sorting or reversing a large list. To\n remind you that they operate by side effect, they don\'t return the\n sorted or reversed list.\n\n8. The ``sort()`` method takes optional arguments for controlling the\n comparisons.\n\n *cmp* specifies a custom comparison function of two arguments (list\n items) which should return a negative, zero or positive number\n depending on whether the first argument is considered smaller than,\n equal to, or larger than the second argument: ``cmp=lambda x,y:\n cmp(x.lower(), y.lower())``. The default value is ``None``.\n\n *key* specifies a function of one argument that is used to extract\n a comparison key from each list element: ``key=str.lower``. The\n default value is ``None``.\n\n *reverse* is a boolean value. If set to ``True``, then the list\n elements are sorted as if each comparison were reversed.\n\n In general, the *key* and *reverse* conversion processes are much\n faster than specifying an equivalent *cmp* function. This is\n because *cmp* is called multiple times for each list element while\n *key* and *reverse* touch each element only once. Use\n ``functools.cmp_to_key()`` to convert an old-style *cmp* function\n to a *key* function.\n\n Changed in version 2.3: Support for ``None`` as an equivalent to\n omitting *cmp* was added.\n\n Changed in version 2.4: Support for *key* and *reverse* was added.\n\n9. Starting with Python 2.3, the ``sort()`` method is guaranteed to be\n stable. A sort is stable if it guarantees not to change the\n relative order of elements that compare equal --- this is helpful\n for sorting in multiple passes (for example, sort by department,\n then by salary grade).\n\n10. **CPython implementation detail:** While a list is being sorted,\n the effect of attempting to mutate, or even inspect, the list is\n undefined. The C implementation of Python 2.3 and newer makes the\n list appear empty for the duration, and raises ``ValueError`` if\n it can detect that the list has been mutated during a sort.\n', + 'typesseq-mutable': u"\nMutable Sequence Types\n**********************\n\nList and ``bytearray`` objects support additional operations that\nallow in-place modification of the object. Other mutable sequence\ntypes (when added to the language) should also support these\noperations. Strings and tuples are immutable sequence types: such\nobjects cannot be modified once created. 
The following operations are\ndefined on mutable sequence types (where *x* is an arbitrary object):\n\n+--------------------------------+----------------------------------+-----------------------+\n| Operation | Result | Notes |\n+================================+==================================+=======================+\n| ``s[i] = x`` | item *i* of *s* is replaced by | |\n| | *x* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s[i:j] = t`` | slice of *s* from *i* to *j* is | |\n| | replaced by the contents of the | |\n| | iterable *t* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``del s[i:j]`` | same as ``s[i:j] = []`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s[i:j:k] = t`` | the elements of ``s[i:j:k]`` are | (1) |\n| | replaced by those of *t* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``del s[i:j:k]`` | removes the elements of | |\n| | ``s[i:j:k]`` from the list | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.append(x)`` | same as ``s[len(s):len(s)] = | (2) |\n| | [x]`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.extend(x)`` | same as ``s[len(s):len(s)] = x`` | (3) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.count(x)`` | return number of *i*'s for which | |\n| | ``s[i] == x`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.index(x[, i[, j]])`` | return smallest *k* such that | (4) |\n| | ``s[k] == x`` and ``i <= k < j`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.insert(i, x)`` | same as ``s[i:i] = [x]`` | (5) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.pop([i])`` | same as ``x = s[i]; del s[i]; | (6) |\n| | return x`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.remove(x)`` | same as ``del s[s.index(x)]`` | (4) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.reverse()`` | reverses the items of *s* in | (7) |\n| | place | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.sort([cmp[, key[, | sort the items of *s* in place | (7)(8)(9)(10) |\n| reverse]]])`` | | |\n+--------------------------------+----------------------------------+-----------------------+\n\nNotes:\n\n1. *t* must have the same length as the slice it is replacing.\n\n2. The C implementation of Python has historically accepted multiple\n parameters and implicitly joined them into a tuple; this no longer\n works in Python 2.0. Use of this misfeature has been deprecated\n since Python 1.4.\n\n3. *x* can be any iterable object.\n\n4. Raises ``ValueError`` when *x* is not found in *s*. When a negative\n index is passed as the second or third parameter to the ``index()``\n method, the list length is added, as for slice indices. If it is\n still negative, it is truncated to zero, as for slice indices.\n\n Changed in version 2.3: Previously, ``index()`` didn't have\n arguments for specifying start and stop positions.\n\n5. 
When a negative index is passed as the first parameter to the\n ``insert()`` method, the list length is added, as for slice\n indices. If it is still negative, it is truncated to zero, as for\n slice indices.\n\n Changed in version 2.3: Previously, all negative indices were\n truncated to zero.\n\n6. The ``pop()`` method is only supported by the list and array types.\n The optional argument *i* defaults to ``-1``, so that by default\n the last item is removed and returned.\n\n7. The ``sort()`` and ``reverse()`` methods modify the list in place\n for economy of space when sorting or reversing a large list. To\n remind you that they operate by side effect, they don't return the\n sorted or reversed list.\n\n8. The ``sort()`` method takes optional arguments for controlling the\n comparisons.\n\n *cmp* specifies a custom comparison function of two arguments (list\n items) which should return a negative, zero or positive number\n depending on whether the first argument is considered smaller than,\n equal to, or larger than the second argument: ``cmp=lambda x,y:\n cmp(x.lower(), y.lower())``. The default value is ``None``.\n\n *key* specifies a function of one argument that is used to extract\n a comparison key from each list element: ``key=str.lower``. The\n default value is ``None``.\n\n *reverse* is a boolean value. If set to ``True``, then the list\n elements are sorted as if each comparison were reversed.\n\n In general, the *key* and *reverse* conversion processes are much\n faster than specifying an equivalent *cmp* function. This is\n because *cmp* is called multiple times for each list element while\n *key* and *reverse* touch each element only once. Use\n ``functools.cmp_to_key()`` to convert an old-style *cmp* function\n to a *key* function.\n\n Changed in version 2.3: Support for ``None`` as an equivalent to\n omitting *cmp* was added.\n\n Changed in version 2.4: Support for *key* and *reverse* was added.\n\n9. Starting with Python 2.3, the ``sort()`` method is guaranteed to be\n stable. A sort is stable if it guarantees not to change the\n relative order of elements that compare equal --- this is helpful\n for sorting in multiple passes (for example, sort by department,\n then by salary grade).\n\n10. **CPython implementation detail:** While a list is being sorted,\n the effect of attempting to mutate, or even inspect, the list is\n undefined. The C implementation of Python 2.3 and newer makes the\n list appear empty for the duration, and raises ``ValueError`` if\n it can detect that the list has been mutated during a sort.\n", 'unary': u'\nUnary arithmetic and bitwise operations\n***************************************\n\nAll unary arithmetic and bitwise operations have the same priority:\n\n u_expr ::= power | "-" u_expr | "+" u_expr | "~" u_expr\n\nThe unary ``-`` (minus) operator yields the negation of its numeric\nargument.\n\nThe unary ``+`` (plus) operator yields its numeric argument unchanged.\n\nThe unary ``~`` (invert) operator yields the bitwise inversion of its\nplain or long integer argument. The bitwise inversion of ``x`` is\ndefined as ``-(x+1)``. 
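[Editorial aside, not part of the original help text: the identity ``~x == -(x+1)`` stated above can be checked directly:

    >>> ~5
    -6
    >>> ~0
    -1
]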
It only applies to integral numbers.\n\nIn all three cases, if the argument does not have the proper type, a\n``TypeError`` exception is raised.\n', 'while': u'\nThe ``while`` statement\n***********************\n\nThe ``while`` statement is used for repeated execution as long as an\nexpression is true:\n\n while_stmt ::= "while" expression ":" suite\n ["else" ":" suite]\n\nThis repeatedly tests the expression and, if it is true, executes the\nfirst suite; if the expression is false (which may be the first time\nit is tested) the suite of the ``else`` clause, if present, is\nexecuted and the loop terminates.\n\nA ``break`` statement executed in the first suite terminates the loop\nwithout executing the ``else`` clause\'s suite. A ``continue``\nstatement executed in the first suite skips the rest of the suite and\ngoes back to testing the expression.\n', - 'with': u'\nThe ``with`` statement\n**********************\n\nNew in version 2.5.\n\nThe ``with`` statement is used to wrap the execution of a block with\nmethods defined by a context manager (see section *With Statement\nContext Managers*). This allows common\n``try``...``except``...``finally`` usage patterns to be encapsulated\nfor convenient reuse.\n\n with_stmt ::= "with" with_item ("," with_item)* ":" suite\n with_item ::= expression ["as" target]\n\nThe execution of the ``with`` statement with one "item" proceeds as\nfollows:\n\n1. The context expression is evaluated to obtain a context manager.\n\n2. The context manager\'s ``__exit__()`` is loaded for later use.\n\n3. The context manager\'s ``__enter__()`` method is invoked.\n\n4. If a target was included in the ``with`` statement, the return\n value from ``__enter__()`` is assigned to it.\n\n Note: The ``with`` statement guarantees that if the ``__enter__()``\n method returns without an error, then ``__exit__()`` will always\n be called. Thus, if an error occurs during the assignment to the\n target list, it will be treated the same as an error occurring\n within the suite would be. See step 6 below.\n\n5. The suite is executed.\n\n6. The context manager\'s ``__exit__()`` method is invoked. If an\n exception caused the suite to be exited, its type, value, and\n traceback are passed as arguments to ``__exit__()``. Otherwise,\n three ``None`` arguments are supplied.\n\n If the suite was exited due to an exception, and the return value\n from the ``__exit__()`` method was false, the exception is\n reraised. If the return value was true, the exception is\n suppressed, and execution continues with the statement following\n the ``with`` statement.\n\n If the suite was exited for any reason other than an exception, the\n return value from ``__exit__()`` is ignored, and execution proceeds\n at the normal location for the kind of exit that was taken.\n\nWith more than one item, the context managers are processed as if\nmultiple ``with`` statements were nested:\n\n with A() as a, B() as b:\n suite\n\nis equivalent to\n\n with A() as a:\n with B() as b:\n suite\n\nNote: In Python 2.5, the ``with`` statement is only allowed when the\n ``with_statement`` feature has been enabled. 
It is always enabled\n in Python 2.6.\n\nChanged in version 2.7: Support for multiple context expressions.\n\nSee also:\n\n **PEP 0343** - The "with" statement\n The specification, background, and examples for the Python\n ``with`` statement.\n', + 'with': u'\nThe ``with`` statement\n**********************\n\nNew in version 2.5.\n\nThe ``with`` statement is used to wrap the execution of a block with\nmethods defined by a context manager (see section *With Statement\nContext Managers*). This allows common\n``try``...``except``...``finally`` usage patterns to be encapsulated\nfor convenient reuse.\n\n with_stmt ::= "with" with_item ("," with_item)* ":" suite\n with_item ::= expression ["as" target]\n\nThe execution of the ``with`` statement with one "item" proceeds as\nfollows:\n\n1. The context expression (the expression given in the **with_item**)\n is evaluated to obtain a context manager.\n\n2. The context manager\'s ``__exit__()`` is loaded for later use.\n\n3. The context manager\'s ``__enter__()`` method is invoked.\n\n4. If a target was included in the ``with`` statement, the return\n value from ``__enter__()`` is assigned to it.\n\n Note: The ``with`` statement guarantees that if the ``__enter__()``\n method returns without an error, then ``__exit__()`` will always\n be called. Thus, if an error occurs during the assignment to the\n target list, it will be treated the same as an error occurring\n within the suite would be. See step 6 below.\n\n5. The suite is executed.\n\n6. The context manager\'s ``__exit__()`` method is invoked. If an\n exception caused the suite to be exited, its type, value, and\n traceback are passed as arguments to ``__exit__()``. Otherwise,\n three ``None`` arguments are supplied.\n\n If the suite was exited due to an exception, and the return value\n from the ``__exit__()`` method was false, the exception is\n reraised. If the return value was true, the exception is\n suppressed, and execution continues with the statement following\n the ``with`` statement.\n\n If the suite was exited for any reason other than an exception, the\n return value from ``__exit__()`` is ignored, and execution proceeds\n at the normal location for the kind of exit that was taken.\n\nWith more than one item, the context managers are processed as if\nmultiple ``with`` statements were nested:\n\n with A() as a, B() as b:\n suite\n\nis equivalent to\n\n with A() as a:\n with B() as b:\n suite\n\nNote: In Python 2.5, the ``with`` statement is only allowed when the\n ``with_statement`` feature has been enabled. It is always enabled\n in Python 2.6.\n\nChanged in version 2.7: Support for multiple context expressions.\n\nSee also:\n\n **PEP 0343** - The "with" statement\n The specification, background, and examples for the Python\n ``with`` statement.\n', 'yield': u'\nThe ``yield`` statement\n***********************\n\n yield_stmt ::= yield_expression\n\nThe ``yield`` statement is only used when defining a generator\nfunction, and is only used in the body of the generator function.\nUsing a ``yield`` statement in a function definition is sufficient to\ncause that definition to create a generator function instead of a\nnormal function.\n\nWhen a generator function is called, it returns an iterator known as a\ngenerator iterator, or more commonly, a generator. 
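[Editorial aside, not part of the original help text: a minimal generator sketch matching the description above; in Python 2 the generator is advanced with its ``next()`` method and signals exhaustion by raising ``StopIteration``:

    >>> def gen():
    ...     yield 1
    ...     yield 2
    ...
    >>> g = gen()
    >>> g.next()
    1
    >>> g.next()
    2
    >>> g.next()
    Traceback (most recent call last):
      ...
    StopIteration
]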
The body of the\ngenerator function is executed by calling the generator\'s ``next()``\nmethod repeatedly until it raises an exception.\n\nWhen a ``yield`` statement is executed, the state of the generator is\nfrozen and the value of **expression_list** is returned to\n``next()``\'s caller. By "frozen" we mean that all local state is\nretained, including the current bindings of local variables, the\ninstruction pointer, and the internal evaluation stack: enough\ninformation is saved so that the next time ``next()`` is invoked, the\nfunction can proceed exactly as if the ``yield`` statement were just\nanother external call.\n\nAs of Python version 2.5, the ``yield`` statement is now allowed in\nthe ``try`` clause of a ``try`` ... ``finally`` construct. If the\ngenerator is not resumed before it is finalized (by reaching a zero\nreference count or by being garbage collected), the generator-\niterator\'s ``close()`` method will be called, allowing any pending\n``finally`` clauses to execute.\n\nNote: In Python 2.2, the ``yield`` statement was only allowed when the\n ``generators`` feature has been enabled. This ``__future__`` import\n statement was used to enable the feature:\n\n from __future__ import generators\n\nSee also:\n\n **PEP 0255** - Simple Generators\n The proposal for adding generators and the ``yield`` statement\n to Python.\n\n **PEP 0342** - Coroutines via Enhanced Generators\n The proposal that, among other generator enhancements, proposed\n allowing ``yield`` to appear inside a ``try`` ... ``finally``\n block.\n'} diff --git a/lib-python/2.7/random.py b/lib-python/2.7/random.py --- a/lib-python/2.7/random.py +++ b/lib-python/2.7/random.py @@ -317,7 +317,7 @@ n = len(population) if not 0 <= k <= n: - raise ValueError, "sample larger than population" + raise ValueError("sample larger than population") random = self.random _int = int result = [None] * k @@ -490,6 +490,12 @@ Conditions on the parameters are alpha > 0 and beta > 0. + The probability distribution function is: + + x ** (alpha - 1) * math.exp(-x / beta) + pdf(x) = -------------------------------------- + math.gamma(alpha) * beta ** alpha + """ # alpha > 0, beta > 0, mean is alpha*beta, variance is alpha*beta**2 @@ -592,7 +598,7 @@ ## -------------------- beta -------------------- ## See -## http://sourceforge.net/bugs/?func=detailbug&bug_id=130030&group_id=5470 +## http://mail.python.org/pipermail/python-bugs-list/2001-January/003752.html ## for Ivan Frohne's insightful analysis of why the original implementation: ## ## def betavariate(self, alpha, beta): diff --git a/lib-python/2.7/re.py b/lib-python/2.7/re.py --- a/lib-python/2.7/re.py +++ b/lib-python/2.7/re.py @@ -207,8 +207,7 @@ "Escape all non-alphanumeric characters in pattern." s = list(pattern) alphanum = _alphanum - for i in range(len(pattern)): - c = pattern[i] + for i, c in enumerate(pattern): if c not in alphanum: if c == "\000": s[i] = "\\000" diff --git a/lib-python/2.7/shutil.py b/lib-python/2.7/shutil.py --- a/lib-python/2.7/shutil.py +++ b/lib-python/2.7/shutil.py @@ -277,6 +277,12 @@ """ real_dst = dst if os.path.isdir(dst): + if _samefile(src, dst): + # We might be on a case insensitive filesystem, + # perform the rename anyway. + os.rename(src, dst) + return + real_dst = os.path.join(dst, _basename(src)) if os.path.exists(real_dst): raise Error, "Destination path '%s' already exists" % real_dst @@ -336,7 +342,7 @@ archive that is being built. If not provided, the current owner and group will be used. 
- The output tar file will be named 'base_dir' + ".tar", possibly plus + The output tar file will be named 'base_name' + ".tar", possibly plus the appropriate compression extension (".gz", or ".bz2"). Returns the output filename. @@ -406,7 +412,7 @@ def _make_zipfile(base_name, base_dir, verbose=0, dry_run=0, logger=None): """Create a zip file from all the files under 'base_dir'. - The output zip file will be named 'base_dir' + ".zip". Uses either the + The output zip file will be named 'base_name' + ".zip". Uses either the "zipfile" Python module (if available) or the InfoZIP "zip" utility (if installed and found on the default search path). If neither tool is available, raises ExecError. Returns the name of the output zip diff --git a/lib-python/2.7/site.py b/lib-python/2.7/site.py --- a/lib-python/2.7/site.py +++ b/lib-python/2.7/site.py @@ -61,6 +61,7 @@ import sys import os import __builtin__ +import traceback # Prefixes for site-packages; add additional prefixes like /usr/local here PREFIXES = [sys.prefix, sys.exec_prefix] @@ -155,17 +156,26 @@ except IOError: return with f: - for line in f: + for n, line in enumerate(f): if line.startswith("#"): continue - if line.startswith(("import ", "import\t")): - exec line - continue - line = line.rstrip() - dir, dircase = makepath(sitedir, line) - if not dircase in known_paths and os.path.exists(dir): - sys.path.append(dir) - known_paths.add(dircase) + try: + if line.startswith(("import ", "import\t")): + exec line + continue + line = line.rstrip() + dir, dircase = makepath(sitedir, line) + if not dircase in known_paths and os.path.exists(dir): + sys.path.append(dir) + known_paths.add(dircase) + except Exception as err: + print >>sys.stderr, "Error processing line {:d} of {}:\n".format( + n+1, fullname) + for record in traceback.format_exception(*sys.exc_info()): + for line in record.splitlines(): + print >>sys.stderr, ' '+line + print >>sys.stderr, "\nRemainder of file ignored" + break if reset: known_paths = None return known_paths diff --git a/lib-python/2.7/smtplib.py b/lib-python/2.7/smtplib.py --- a/lib-python/2.7/smtplib.py +++ b/lib-python/2.7/smtplib.py @@ -49,17 +49,18 @@ from email.base64mime import encode as encode_base64 from sys import stderr -__all__ = ["SMTPException","SMTPServerDisconnected","SMTPResponseException", - "SMTPSenderRefused","SMTPRecipientsRefused","SMTPDataError", - "SMTPConnectError","SMTPHeloError","SMTPAuthenticationError", - "quoteaddr","quotedata","SMTP"] +__all__ = ["SMTPException", "SMTPServerDisconnected", "SMTPResponseException", + "SMTPSenderRefused", "SMTPRecipientsRefused", "SMTPDataError", + "SMTPConnectError", "SMTPHeloError", "SMTPAuthenticationError", + "quoteaddr", "quotedata", "SMTP"] SMTP_PORT = 25 SMTP_SSL_PORT = 465 -CRLF="\r\n" +CRLF = "\r\n" OLDSTYLE_AUTH = re.compile(r"auth=(.*)", re.I) + # Exception classes used by this module. class SMTPException(Exception): """Base class for all exceptions raised by this module.""" @@ -109,7 +110,7 @@ def __init__(self, recipients): self.recipients = recipients - self.args = ( recipients,) + self.args = (recipients,) class SMTPDataError(SMTPResponseException): @@ -128,6 +129,7 @@ combination provided. """ + def quoteaddr(addr): """Quote a subset of the email addresses defined by RFC 821. @@ -138,7 +140,7 @@ m = email.utils.parseaddr(addr)[1] except AttributeError: pass - if m == (None, None): # Indicates parse failure or AttributeError + if m == (None, None): # Indicates parse failure or AttributeError # something weird here.. 
punt -ddm return "<%s>" % addr elif m is None: @@ -175,7 +177,8 @@ chr = None while chr != "\n": chr = self.sslobj.read(1) - if not chr: break + if not chr: + break str += chr return str @@ -219,6 +222,7 @@ ehlo_msg = "ehlo" ehlo_resp = None does_esmtp = 0 + default_port = SMTP_PORT def __init__(self, host='', port=0, local_hostname=None, timeout=socket._GLOBAL_DEFAULT_TIMEOUT): @@ -234,7 +238,6 @@ """ self.timeout = timeout self.esmtp_features = {} - self.default_port = SMTP_PORT if host: (code, msg) = self.connect(host, port) if code != 220: @@ -269,10 +272,11 @@ def _get_socket(self, port, host, timeout): # This makes it simpler for SMTP_SSL to use the SMTP connect code # and just alter the socket connection bit. - if self.debuglevel > 0: print>>stderr, 'connect:', (host, port) + if self.debuglevel > 0: + print>>stderr, 'connect:', (host, port) return socket.create_connection((port, host), timeout) - def connect(self, host='localhost', port = 0): + def connect(self, host='localhost', port=0): """Connect to a host on a given port. If the hostname ends with a colon (`:') followed by a number, and @@ -286,20 +290,25 @@ if not port and (host.find(':') == host.rfind(':')): i = host.rfind(':') if i >= 0: - host, port = host[:i], host[i+1:] - try: port = int(port) + host, port = host[:i], host[i + 1:] + try: + port = int(port) except ValueError: raise socket.error, "nonnumeric port" - if not port: port = self.default_port - if self.debuglevel > 0: print>>stderr, 'connect:', (host, port) + if not port: + port = self.default_port + if self.debuglevel > 0: + print>>stderr, 'connect:', (host, port) self.sock = self._get_socket(host, port, self.timeout) (code, msg) = self.getreply() - if self.debuglevel > 0: print>>stderr, "connect:", msg + if self.debuglevel > 0: + print>>stderr, "connect:", msg return (code, msg) def send(self, str): """Send `str' to the server.""" - if self.debuglevel > 0: print>>stderr, 'send:', repr(str) + if self.debuglevel > 0: + print>>stderr, 'send:', repr(str) if hasattr(self, 'sock') and self.sock: try: self.sock.sendall(str) @@ -330,7 +339,7 @@ Raises SMTPServerDisconnected if end-of-file is reached. """ - resp=[] + resp = [] if self.file is None: self.file = self.sock.makefile('rb') while 1: @@ -341,9 +350,10 @@ if line == '': self.close() raise SMTPServerDisconnected("Connection unexpectedly closed") - if self.debuglevel > 0: print>>stderr, 'reply:', repr(line) + if self.debuglevel > 0: + print>>stderr, 'reply:', repr(line) resp.append(line[4:].strip()) - code=line[:3] + code = line[:3] # Check that the error code is syntactically correct. # Don't attempt to read a continuation line if it is broken. try: @@ -352,17 +362,17 @@ errcode = -1 break # Check if multiline response. - if line[3:4]!="-": + if line[3:4] != "-": break errmsg = "\n".join(resp) if self.debuglevel > 0: - print>>stderr, 'reply: retcode (%s); Msg: %s' % (errcode,errmsg) + print>>stderr, 'reply: retcode (%s); Msg: %s' % (errcode, errmsg) return errcode, errmsg def docmd(self, cmd, args=""): """Send a command, and return its response code.""" - self.putcmd(cmd,args) + self.putcmd(cmd, args) return self.getreply() # std smtp commands @@ -372,9 +382,9 @@ host. """ self.putcmd("helo", name or self.local_hostname) - (code,msg)=self.getreply() - self.helo_resp=msg - return (code,msg) + (code, msg) = self.getreply() + self.helo_resp = msg + return (code, msg) def ehlo(self, name=''): """ SMTP 'ehlo' command. 
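[Editorial aside between hunks, not part of the patch: the hunks above move ``default_port`` from an instance attribute assigned in ``__init__`` to a class attribute on ``SMTP``, and a later hunk gives ``SMTP_SSL`` its own ``default_port = SMTP_SSL_PORT``, so the subclass overrides the default declaratively. A minimal sketch of the pattern, with hypothetical class names:

    class BaseClient(object):
        default_port = 25              # class-level default

        def __init__(self, port=0):
            # fall back to whatever the concrete class declares
            self.port = port or self.default_port

    class SecureClient(BaseClient):
        default_port = 465             # subclass overrides the class attribute
]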
@@ -383,19 +393,19 @@ """ self.esmtp_features = {} self.putcmd(self.ehlo_msg, name or self.local_hostname) - (code,msg)=self.getreply() + (code, msg) = self.getreply() # According to RFC1869 some (badly written) # MTA's will disconnect on an ehlo. Toss an exception if # that happens -ddm if code == -1 and len(msg) == 0: self.close() raise SMTPServerDisconnected("Server not connected") - self.ehlo_resp=msg + self.ehlo_resp = msg if code != 250: - return (code,msg) - self.does_esmtp=1 + return (code, msg) + self.does_esmtp = 1 #parse the ehlo response -ddm - resp=self.ehlo_resp.split('\n') + resp = self.ehlo_resp.split('\n') del resp[0] for each in resp: # To be able to communicate with as many SMTP servers as possible, @@ -415,16 +425,16 @@ # It's actually stricter, in that only spaces are allowed between # parameters, but were not going to check for that here. Note # that the space isn't present if there are no parameters. - m=re.match(r'(?P[A-Za-z0-9][A-Za-z0-9\-]*) ?',each) + m = re.match(r'(?P[A-Za-z0-9][A-Za-z0-9\-]*) ?', each) if m: - feature=m.group("feature").lower() - params=m.string[m.end("feature"):].strip() + feature = m.group("feature").lower() + params = m.string[m.end("feature"):].strip() if feature == "auth": self.esmtp_features[feature] = self.esmtp_features.get(feature, "") \ + " " + params else: - self.esmtp_features[feature]=params - return (code,msg) + self.esmtp_features[feature] = params + return (code, msg) def has_extn(self, opt): """Does the server support a given SMTP service extension?""" @@ -444,23 +454,23 @@ """SMTP 'noop' command -- doesn't do anything :>""" return self.docmd("noop") - def mail(self,sender,options=[]): + def mail(self, sender, options=[]): """SMTP 'mail' command -- begins mail xfer session.""" optionlist = '' if options and self.does_esmtp: optionlist = ' ' + ' '.join(options) - self.putcmd("mail", "FROM:%s%s" % (quoteaddr(sender) ,optionlist)) + self.putcmd("mail", "FROM:%s%s" % (quoteaddr(sender), optionlist)) return self.getreply() - def rcpt(self,recip,options=[]): + def rcpt(self, recip, options=[]): """SMTP 'rcpt' command -- indicates 1 recipient for this mail.""" optionlist = '' if options and self.does_esmtp: optionlist = ' ' + ' '.join(options) - self.putcmd("rcpt","TO:%s%s" % (quoteaddr(recip),optionlist)) + self.putcmd("rcpt", "TO:%s%s" % (quoteaddr(recip), optionlist)) return self.getreply() - def data(self,msg): + def data(self, msg): """SMTP 'DATA' command -- sends message data to server. Automatically quotes lines beginning with a period per rfc821. @@ -469,26 +479,28 @@ response code received when the all data is sent. """ self.putcmd("data") - (code,repl)=self.getreply() - if self.debuglevel >0 : print>>stderr, "data:", (code,repl) + (code, repl) = self.getreply() + if self.debuglevel > 0: + print>>stderr, "data:", (code, repl) if code != 354: - raise SMTPDataError(code,repl) + raise SMTPDataError(code, repl) else: q = quotedata(msg) if q[-2:] != CRLF: q = q + CRLF q = q + "." + CRLF self.send(q) - (code,msg)=self.getreply() - if self.debuglevel >0 : print>>stderr, "data:", (code,msg) - return (code,msg) + (code, msg) = self.getreply() + if self.debuglevel > 0: + print>>stderr, "data:", (code, msg) + return (code, msg) def verify(self, address): """SMTP 'verify' command -- checks for address validity.""" self.putcmd("vrfy", quoteaddr(address)) return self.getreply() # a.k.a. 
- vrfy=verify + vrfy = verify def expn(self, address): """SMTP 'expn' command -- expands a mailing list.""" @@ -592,7 +604,7 @@ raise SMTPAuthenticationError(code, resp) return (code, resp) - def starttls(self, keyfile = None, certfile = None): + def starttls(self, keyfile=None, certfile=None): """Puts the connection to the SMTP server into TLS mode. If there has been no previous EHLO or HELO command this session, this @@ -695,22 +707,22 @@ for option in mail_options: esmtp_opts.append(option) - (code,resp) = self.mail(from_addr, esmtp_opts) + (code, resp) = self.mail(from_addr, esmtp_opts) if code != 250: self.rset() raise SMTPSenderRefused(code, resp, from_addr) - senderrs={} + senderrs = {} if isinstance(to_addrs, basestring): to_addrs = [to_addrs] for each in to_addrs: - (code,resp)=self.rcpt(each, rcpt_options) + (code, resp) = self.rcpt(each, rcpt_options) if (code != 250) and (code != 251): - senderrs[each]=(code,resp) - if len(senderrs)==len(to_addrs): + senderrs[each] = (code, resp) + if len(senderrs) == len(to_addrs): # the server refused all our recipients self.rset() raise SMTPRecipientsRefused(senderrs) - (code,resp) = self.data(msg) + (code, resp) = self.data(msg) if code != 250: self.rset() raise SMTPDataError(code, resp) @@ -744,16 +756,19 @@ are also optional - they can contain a PEM formatted private key and certificate chain file for the SSL connection. """ + + default_port = SMTP_SSL_PORT + def __init__(self, host='', port=0, local_hostname=None, keyfile=None, certfile=None, timeout=socket._GLOBAL_DEFAULT_TIMEOUT): self.keyfile = keyfile self.certfile = certfile SMTP.__init__(self, host, port, local_hostname, timeout) - self.default_port = SMTP_SSL_PORT def _get_socket(self, host, port, timeout): - if self.debuglevel > 0: print>>stderr, 'connect:', (host, port) + if self.debuglevel > 0: + print>>stderr, 'connect:', (host, port) new_socket = socket.create_connection((host, port), timeout) new_socket = ssl.wrap_socket(new_socket, self.keyfile, self.certfile) self.file = SSLFakeFile(new_socket) @@ -781,11 +796,11 @@ ehlo_msg = "lhlo" - def __init__(self, host = '', port = LMTP_PORT, local_hostname = None): + def __init__(self, host='', port=LMTP_PORT, local_hostname=None): """Initialize a new instance.""" SMTP.__init__(self, host, port, local_hostname) - def connect(self, host = 'localhost', port = 0): + def connect(self, host='localhost', port=0): """Connect to the LMTP daemon, on either a Unix or a TCP socket.""" if host[0] != '/': return SMTP.connect(self, host, port) @@ -795,13 +810,15 @@ self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) self.sock.connect(host) except socket.error, msg: - if self.debuglevel > 0: print>>stderr, 'connect fail:', host + if self.debuglevel > 0: + print>>stderr, 'connect fail:', host if self.sock: self.sock.close() self.sock = None raise socket.error, msg (code, msg) = self.getreply() - if self.debuglevel > 0: print>>stderr, "connect:", msg + if self.debuglevel > 0: + print>>stderr, "connect:", msg return (code, msg) @@ -815,7 +832,7 @@ return sys.stdin.readline().strip() fromaddr = prompt("From") - toaddrs = prompt("To").split(',') + toaddrs = prompt("To").split(',') print "Enter message, end with ^D:" msg = '' while 1: diff --git a/lib-python/2.7/ssl.py b/lib-python/2.7/ssl.py --- a/lib-python/2.7/ssl.py +++ b/lib-python/2.7/ssl.py @@ -121,9 +121,11 @@ if e.errno != errno.ENOTCONN: raise # no, no connection yet + self._connected = False self._sslobj = None else: # yes, create the SSL object + self._connected = True 
self._sslobj = _ssl.sslwrap(self._sock, server_side, keyfile, certfile, cert_reqs, ssl_version, ca_certs, @@ -293,21 +295,36 @@ self._sslobj.do_handshake() - def connect(self, addr): - - """Connects to remote ADDR, and then wraps the connection in - an SSL channel.""" - + def _real_connect(self, addr, return_errno): # Here we assume that the socket is client-side, and not # connected at the time of the call. We connect it, then wrap it. - if self._sslobj: + if self._connected: raise ValueError("attempt to connect already-connected SSLSocket!") - socket.connect(self, addr) self._sslobj = _ssl.sslwrap(self._sock, False, self.keyfile, self.certfile, self.cert_reqs, self.ssl_version, self.ca_certs, self.ciphers) - if self.do_handshake_on_connect: - self.do_handshake() + try: + socket.connect(self, addr) + if self.do_handshake_on_connect: + self.do_handshake() + except socket_error as e: + if return_errno: + return e.errno + else: + self._sslobj = None + raise e + self._connected = True + return 0 + + def connect(self, addr): + """Connects to remote ADDR, and then wraps the connection in + an SSL channel.""" + self._real_connect(addr, False) + + def connect_ex(self, addr): + """Connects to remote ADDR, and then wraps the connection in + an SSL channel.""" + return self._real_connect(addr, True) def accept(self): diff --git a/lib-python/2.7/subprocess.py b/lib-python/2.7/subprocess.py --- a/lib-python/2.7/subprocess.py +++ b/lib-python/2.7/subprocess.py @@ -396,6 +396,7 @@ import traceback import gc import signal +import errno # Exception classes used by this module. class CalledProcessError(Exception): @@ -427,7 +428,6 @@ else: import select _has_poll = hasattr(select, 'poll') - import errno import fcntl import pickle @@ -441,8 +441,15 @@ "check_output", "CalledProcessError"] if mswindows: - from _subprocess import CREATE_NEW_CONSOLE, CREATE_NEW_PROCESS_GROUP - __all__.extend(["CREATE_NEW_CONSOLE", "CREATE_NEW_PROCESS_GROUP"]) + from _subprocess import (CREATE_NEW_CONSOLE, CREATE_NEW_PROCESS_GROUP, + STD_INPUT_HANDLE, STD_OUTPUT_HANDLE, + STD_ERROR_HANDLE, SW_HIDE, + STARTF_USESTDHANDLES, STARTF_USESHOWWINDOW) + + __all__.extend(["CREATE_NEW_CONSOLE", "CREATE_NEW_PROCESS_GROUP", + "STD_INPUT_HANDLE", "STD_OUTPUT_HANDLE", + "STD_ERROR_HANDLE", "SW_HIDE", + "STARTF_USESTDHANDLES", "STARTF_USESHOWWINDOW"]) try: MAXFD = os.sysconf("SC_OPEN_MAX") except: @@ -726,7 +733,11 @@ stderr = None if self.stdin: if input: - self.stdin.write(input) + try: + self.stdin.write(input) + except IOError as e: + if e.errno != errno.EPIPE and e.errno != errno.EINVAL: + raise self.stdin.close() elif self.stdout: stdout = self.stdout.read() @@ -883,7 +894,7 @@ except pywintypes.error, e: # Translate pywintypes.error to WindowsError, which is # a subclass of OSError. FIXME: We should really - # translate errno using _sys_errlist (or simliar), but + # translate errno using _sys_errlist (or similar), but # how can this be done from Python? 
raise WindowsError(*e.args) finally: @@ -956,7 +967,11 @@ if self.stdin: if input is not None: - self.stdin.write(input) + try: + self.stdin.write(input) + except IOError as e: + if e.errno != errno.EPIPE: + raise self.stdin.close() if self.stdout: @@ -1051,14 +1066,17 @@ errread, errwrite) - def _set_cloexec_flag(self, fd): + def _set_cloexec_flag(self, fd, cloexec=True): try: cloexec_flag = fcntl.FD_CLOEXEC except AttributeError: cloexec_flag = 1 old = fcntl.fcntl(fd, fcntl.F_GETFD) - fcntl.fcntl(fd, fcntl.F_SETFD, old | cloexec_flag) + if cloexec: + fcntl.fcntl(fd, fcntl.F_SETFD, old | cloexec_flag) + else: + fcntl.fcntl(fd, fcntl.F_SETFD, old & ~cloexec_flag) def _close_fds(self, but): @@ -1128,21 +1146,25 @@ os.close(errpipe_read) # Dup fds for child - if p2cread is not None: - os.dup2(p2cread, 0) - if c2pwrite is not None: - os.dup2(c2pwrite, 1) - if errwrite is not None: - os.dup2(errwrite, 2) + def _dup2(a, b): + # dup2() removes the CLOEXEC flag but + # we must do it ourselves if dup2() + # would be a no-op (issue #10806). + if a == b: + self._set_cloexec_flag(a, False) + elif a is not None: + os.dup2(a, b) + _dup2(p2cread, 0) + _dup2(c2pwrite, 1) + _dup2(errwrite, 2) - # Close pipe fds. Make sure we don't close the same - # fd more than once, or standard fds. - if p2cread is not None and p2cread not in (0,): - os.close(p2cread) - if c2pwrite is not None and c2pwrite not in (p2cread, 1): - os.close(c2pwrite) - if errwrite is not None and errwrite not in (p2cread, c2pwrite, 2): - os.close(errwrite) + # Close pipe fds. Make sure we don't close the + # same fd more than once, or standard fds. + closed = { None } + for fd in [p2cread, c2pwrite, errwrite]: + if fd not in closed and fd > 2: + os.close(fd) + closed.add(fd) # Close all other fds, if asked for if close_fds: @@ -1194,7 +1216,11 @@ os.close(errpipe_read) if data != "": - _eintr_retry_call(os.waitpid, self.pid, 0) + try: + _eintr_retry_call(os.waitpid, self.pid, 0) + except OSError as e: + if e.errno != errno.ECHILD: + raise child_exception = pickle.loads(data) for fd in (p2cwrite, c2pread, errread): if fd is not None: @@ -1240,7 +1266,15 @@ """Wait for child process to terminate. Returns returncode attribute.""" if self.returncode is None: - pid, sts = _eintr_retry_call(os.waitpid, self.pid, 0) + try: + pid, sts = _eintr_retry_call(os.waitpid, self.pid, 0) + except OSError as e: + if e.errno != errno.ECHILD: + raise + # This happens if SIGCLD is set to be ignored or waiting + # for child processes has otherwise been disabled for our + # process. This child is dead, we can't get the status. 
+ sts = 0 self._handle_exitstatus(sts) return self.returncode @@ -1317,9 +1351,16 @@ for fd, mode in ready: if mode & select.POLLOUT: chunk = input[input_offset : input_offset + _PIPE_BUF] - input_offset += os.write(fd, chunk) - if input_offset >= len(input): - close_unregister_and_remove(fd) + try: + input_offset += os.write(fd, chunk) + except OSError as e: + if e.errno == errno.EPIPE: + close_unregister_and_remove(fd) + else: + raise + else: + if input_offset >= len(input): + close_unregister_and_remove(fd) elif mode & select_POLLIN_POLLPRI: data = os.read(fd, 4096) if not data: @@ -1358,11 +1399,19 @@ if self.stdin in wlist: chunk = input[input_offset : input_offset + _PIPE_BUF] - bytes_written = os.write(self.stdin.fileno(), chunk) - input_offset += bytes_written - if input_offset >= len(input): - self.stdin.close() - write_set.remove(self.stdin) + try: + bytes_written = os.write(self.stdin.fileno(), chunk) + except OSError as e: + if e.errno == errno.EPIPE: + self.stdin.close() + write_set.remove(self.stdin) + else: + raise + else: + input_offset += bytes_written + if input_offset >= len(input): + self.stdin.close() + write_set.remove(self.stdin) if self.stdout in rlist: data = os.read(self.stdout.fileno(), 1024) diff --git a/lib-python/2.7/symbol.py b/lib-python/2.7/symbol.py --- a/lib-python/2.7/symbol.py +++ b/lib-python/2.7/symbol.py @@ -82,20 +82,19 @@ sliceop = 325 exprlist = 326 testlist = 327 -dictmaker = 328 -dictorsetmaker = 329 -classdef = 330 -arglist = 331 -argument = 332 -list_iter = 333 -list_for = 334 -list_if = 335 -comp_iter = 336 -comp_for = 337 -comp_if = 338 -testlist1 = 339 -encoding_decl = 340 -yield_expr = 341 +dictorsetmaker = 328 +classdef = 329 +arglist = 330 +argument = 331 +list_iter = 332 +list_for = 333 +list_if = 334 +comp_iter = 335 +comp_for = 336 +comp_if = 337 +testlist1 = 338 +encoding_decl = 339 +yield_expr = 340 #--end constants-- sym_name = {} diff --git a/lib-python/2.7/sysconfig.py b/lib-python/2.7/sysconfig.py --- a/lib-python/2.7/sysconfig.py +++ b/lib-python/2.7/sysconfig.py @@ -271,7 +271,7 @@ def _get_makefile_filename(): if _PYTHON_BUILD: return os.path.join(_PROJECT_BASE, "Makefile") - return os.path.join(get_path('stdlib'), "config", "Makefile") + return os.path.join(get_path('platstdlib'), "config", "Makefile") def _init_posix(vars): @@ -297,21 +297,6 @@ msg = msg + " (%s)" % e.strerror raise IOError(msg) - # On MacOSX we need to check the setting of the environment variable - # MACOSX_DEPLOYMENT_TARGET: configure bases some choices on it so - # it needs to be compatible. - # If it isn't set we set it to the configure-time value - if sys.platform == 'darwin' and 'MACOSX_DEPLOYMENT_TARGET' in vars: - cfg_target = vars['MACOSX_DEPLOYMENT_TARGET'] - cur_target = os.getenv('MACOSX_DEPLOYMENT_TARGET', '') - if cur_target == '': - cur_target = cfg_target - os.putenv('MACOSX_DEPLOYMENT_TARGET', cfg_target) - elif map(int, cfg_target.split('.')) > map(int, cur_target.split('.')): - msg = ('$MACOSX_DEPLOYMENT_TARGET mismatch: now "%s" but "%s" ' - 'during configure' % (cur_target, cfg_target)) - raise IOError(msg) - # On AIX, there are wrong paths to the linker scripts in the Makefile # -- these paths are relative to the Python source, but when installed # the scripts are in another directory. @@ -616,9 +601,7 @@ # machine is going to compile and link as if it were # MACOSX_DEPLOYMENT_TARGET. 
cfgvars = get_config_vars() - macver = os.environ.get('MACOSX_DEPLOYMENT_TARGET') - if not macver: - macver = cfgvars.get('MACOSX_DEPLOYMENT_TARGET') + macver = cfgvars.get('MACOSX_DEPLOYMENT_TARGET') if 1: # Always calculate the release of the running machine, @@ -639,7 +622,6 @@ m = re.search( r'ProductUserVisibleVersion\s*' + r'(.*?)', f.read()) - f.close() if m is not None: macrelease = '.'.join(m.group(1).split('.')[:2]) # else: fall back to the default behaviour diff --git a/lib-python/2.7/tarfile.py b/lib-python/2.7/tarfile.py --- a/lib-python/2.7/tarfile.py +++ b/lib-python/2.7/tarfile.py @@ -2239,10 +2239,14 @@ if hasattr(os, "symlink") and hasattr(os, "link"): # For systems that support symbolic and hard links. if tarinfo.issym(): + if os.path.lexists(targetpath): + os.unlink(targetpath) os.symlink(tarinfo.linkname, targetpath) else: # See extract(). if os.path.exists(tarinfo._link_target): + if os.path.lexists(targetpath): + os.unlink(targetpath) os.link(tarinfo._link_target, targetpath) else: self._extract_member(self._find_link_target(tarinfo), targetpath) diff --git a/lib-python/2.7/telnetlib.py b/lib-python/2.7/telnetlib.py --- a/lib-python/2.7/telnetlib.py +++ b/lib-python/2.7/telnetlib.py @@ -236,7 +236,7 @@ """ if self.debuglevel > 0: - print 'Telnet(%s,%d):' % (self.host, self.port), + print 'Telnet(%s,%s):' % (self.host, self.port), if args: print msg % args else: diff --git a/lib-python/2.7/test/cjkencodings/big5-utf8.txt b/lib-python/2.7/test/cjkencodings/big5-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/big5-utf8.txt @@ -0,0 +1,9 @@ +如何在 Python 中使用既有的 C library? + 在資訊科技快速發展的今天, 開發及測試軟體的速度是不容忽視的 +課題. 為加快開發及測試的速度, 我們便常希望能利用一些已開發好的 +library, 並有一個 fast prototyping 的 programming language 可 +供使用. 目前有許許多多的 library 是以 C 寫成, 而 Python 是一個 +fast prototyping 的 programming language. 故我們希望能將既有的 +C library 拿到 Python 的環境中測試及整合. 其中最主要也是我們所 +要討論的問題就是: + diff --git a/lib-python/2.7/test/cjkencodings/big5.txt b/lib-python/2.7/test/cjkencodings/big5.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/big5.txt @@ -0,0 +1,9 @@ +�p��b Python ���ϥάJ���� C library? +�@�b��T��ާֳt�o�i������, �}�o�δ��ճn�骺�t�׬O���e������ +���D. ���[�ֶ}�o�δ��ժ��t��, �ڭ̫K�`�Ʊ��Q�Τ@�Ǥw�}�o�n�� +library, �æ��@�� fast prototyping �� programming language �i +�Ѩϥ�. �ثe���\�\�h�h�� library �O�H C �g��, �� Python �O�@�� +fast prototyping �� programming language. �G�ڭ̧Ʊ��N�J���� +C library ���� Python �����Ҥ����դξ�X. �䤤�̥D�n�]�O�ڭ̩� +�n�Q�ת����D�N�O: + diff --git a/lib-python/2.7/test/cjkencodings/big5hkscs-utf8.txt b/lib-python/2.7/test/cjkencodings/big5hkscs-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/big5hkscs-utf8.txt @@ -0,0 +1,2 @@ +𠄌Ě鵮罓洆 +ÊÊ̄ê êê̄ diff --git a/lib-python/2.7/test/cjkencodings/big5hkscs.txt b/lib-python/2.7/test/cjkencodings/big5hkscs.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/big5hkscs.txt @@ -0,0 +1,2 @@ +�E�\�s�ڍ� +�f�b�� ���� diff --git a/lib-python/2.7/test/cjkencodings/cp949-utf8.txt b/lib-python/2.7/test/cjkencodings/cp949-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/cp949-utf8.txt @@ -0,0 +1,9 @@ +똠방각하 펲시콜라 + +㉯㉯납!! 因九月패믤릔궈 ⓡⓖ훀¿¿¿ 긍뒙 ⓔ뎨 ㉯. . +亞영ⓔ능횹 . . . . 서울뤄 뎐학乙 家훀 ! ! !ㅠ.ㅠ +흐흐흐 ㄱㄱㄱ☆ㅠ_ㅠ 어릨 탸콰긐 뎌응 칑九들乙 ㉯드긐 +설릌 家훀 . . . . 굴애쉌 ⓔ궈 ⓡ릘㉱긐 因仁川女中까즼 +와쒀훀 ! ! 亞영ⓔ 家능궈 ☆上관 없능궈능 亞능뒈훀 글애듴 +ⓡ려듀九 싀풔숴훀 어릨 因仁川女中싁⑨들앜!! 
㉯㉯납♡ ⌒⌒* + diff --git a/lib-python/2.7/test/cjkencodings/cp949.txt b/lib-python/2.7/test/cjkencodings/cp949.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/cp949.txt @@ -0,0 +1,9 @@ +�c�氢�� �����ݶ� + +������!! �������В�p�� �ި��R������ ���� �ѵ� ��. . +䬿��Ѵ��� . . . . ����� ������ ʫ�R ! ! !��.�� +������ �������٤�_�� � ����O ���� �h������ ����O +���j ʫ�R . . . . ���֚f �ѱ� �ސt�ƒO ���������� +�;��R ! ! 䬿��� ʫ�ɱ� ��߾�� ���ɱŴ� 䬴ɵ��R �۾֊� +�޷����� ��Ǵ���R � ����������Ĩ���!! �������� �ҡ�* + diff --git a/lib-python/2.7/test/cjkencodings/euc_jisx0213-utf8.txt b/lib-python/2.7/test/cjkencodings/euc_jisx0213-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/euc_jisx0213-utf8.txt @@ -0,0 +1,8 @@ +Python の開発は、1990 年ごろから開始されています。 +開発者の Guido van Rossum は教育用のプログラミング言語「ABC」の開発に参加していましたが、ABC は実用上の目的にはあまり適していませんでした。 +このため、Guido はより実用的なプログラミング言語の開発を開始し、英国 BBS 放送のコメディ番組「モンティ パイソン」のファンである Guido はこの言語を「Python」と名づけました。 +このような背景から生まれた Python の言語設計は、「シンプル」で「習得が容易」という目標に重点が置かれています。 +多くのスクリプト系言語ではユーザの目先の利便性を優先して色々な機能を言語要素として取り入れる場合が多いのですが、Python ではそういった小細工が追加されることはあまりありません。 +言語自体の機能は最小限に押さえ、必要な機能は拡張モジュールとして追加する、というのが Python のポリシーです。 + +ノか゚ ト゚ トキ喝塀 𡚴𪎌 麀齁𩛰 diff --git a/lib-python/2.7/test/cjkencodings/euc_jisx0213.txt b/lib-python/2.7/test/cjkencodings/euc_jisx0213.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/euc_jisx0213.txt @@ -0,0 +1,8 @@ +Python �γ�ȯ�ϡ�1990 ǯ�����鳫�Ϥ���Ƥ��ޤ��� +��ȯ�Ԥ� Guido van Rossum �϶����ѤΥץ���ߥ󥰸����ABC�פγ�ȯ�˻��ä��Ƥ��ޤ�������ABC �ϼ��Ѿ����Ū�ˤϤ��ޤ�Ŭ���Ƥ��ޤ���Ǥ����� +���Τ��ᡢGuido �Ϥ�����Ū�ʥץ���ߥ󥰸���γ�ȯ�򳫻Ϥ����ѹ� BBS �����Υ���ǥ����ȡ֥��ƥ� �ѥ�����פΥե���Ǥ��� Guido �Ϥ��θ�����Python�פ�̾�Ť��ޤ����� +���Τ褦���طʤ������ޤ줿 Python �θ����߷פϡ��֥���ץ�פǡֽ������ưספȤ�����ɸ�˽������֤���Ƥ��ޤ��� +¿���Υ�����ץȷϸ���Ǥϥ桼�����������������ͥ�褷�ƿ����ʵ�ǽ��������ǤȤ��Ƽ��������礬¿���ΤǤ�����Python �ǤϤ������ä����ٹ����ɲä���뤳�ȤϤ��ޤꤢ��ޤ��� +���켫�Τε�ǽ�ϺǾ��¤˲�������ɬ�פʵ�ǽ�ϳ�ĥ�⥸�塼��Ȥ����ɲä��롢�Ȥ����Τ� Python �Υݥꥷ���Ǥ��� + +�Τ� �� �ȥ����� ���� ��ԏ���� diff --git a/lib-python/2.7/test/cjkencodings/euc_jp-utf8.txt b/lib-python/2.7/test/cjkencodings/euc_jp-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/euc_jp-utf8.txt @@ -0,0 +1,7 @@ +Python の開発は、1990 年ごろから開始されています。 +開発者の Guido van Rossum は教育用のプログラミング言語「ABC」の開発に参加していましたが、ABC は実用上の目的にはあまり適していませんでした。 +このため、Guido はより実用的なプログラミング言語の開発を開始し、英国 BBS 放送のコメディ番組「モンティ パイソン」のファンである Guido はこの言語を「Python」と名づけました。 +このような背景から生まれた Python の言語設計は、「シンプル」で「習得が容易」という目標に重点が置かれています。 +多くのスクリプト系言語ではユーザの目先の利便性を優先して色々な機能を言語要素として取り入れる場合が多いのですが、Python ではそういった小細工が追加されることはあまりありません。 +言語自体の機能は最小限に押さえ、必要な機能は拡張モジュールとして追加する、というのが Python のポリシーです。 + diff --git a/lib-python/2.7/test/cjkencodings/euc_jp.txt b/lib-python/2.7/test/cjkencodings/euc_jp.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/euc_jp.txt @@ -0,0 +1,7 @@ +Python �γ�ȯ�ϡ�1990 ǯ�����鳫�Ϥ���Ƥ��ޤ��� +��ȯ�Ԥ� Guido van Rossum �϶����ѤΥץ���ߥ󥰸����ABC�פγ�ȯ�˻��ä��Ƥ��ޤ�������ABC �ϼ��Ѿ����Ū�ˤϤ��ޤ�Ŭ���Ƥ��ޤ���Ǥ����� +���Τ��ᡢGuido �Ϥ�����Ū�ʥץ���ߥ󥰸���γ�ȯ�򳫻Ϥ����ѹ� BBS �����Υ���ǥ����ȡ֥��ƥ� �ѥ�����פΥե���Ǥ��� Guido �Ϥ��θ�����Python�פ�̾�Ť��ޤ����� +���Τ褦���طʤ������ޤ줿 Python �θ����߷פϡ��֥���ץ�פǡֽ������ưספȤ�����ɸ�˽������֤���Ƥ��ޤ��� +¿���Υ�����ץȷϸ���Ǥϥ桼�����������������ͥ�褷�ƿ����ʵ�ǽ��������ǤȤ��Ƽ��������礬¿���ΤǤ�����Python �ǤϤ������ä����ٹ����ɲä���뤳�ȤϤ��ޤꤢ��ޤ��� +���켫�Τε�ǽ�ϺǾ��¤˲�������ɬ�פʵ�ǽ�ϳ�ĥ�⥸�塼��Ȥ����ɲä��롢�Ȥ����Τ� Python �Υݥꥷ���Ǥ��� + diff --git a/lib-python/2.7/test/cjkencodings/euc_kr-utf8.txt 
b/lib-python/2.7/test/cjkencodings/euc_kr-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/euc_kr-utf8.txt @@ -0,0 +1,7 @@ +◎ 파이썬(Python)은 배우기 쉽고, 강력한 프로그래밍 언어입니다. 파이썬은 +효율적인 고수준 데이터 구조와 간단하지만 효율적인 객체지향프로그래밍을 +지원합니다. 파이썬의 우아(優雅)한 문법과 동적 타이핑, 그리고 인터프리팅 +환경은 파이썬을 스크립팅과 여러 분야에서와 대부분의 플랫폼에서의 빠른 +애플리케이션 개발을 할 수 있는 이상적인 언어로 만들어줍니다. + +☆첫가끝: 날아라 쓔쓔쓩~ 닁큼! 뜽금없이 전홥니다. 뷁. 그런거 읎다. diff --git a/lib-python/2.7/test/cjkencodings/euc_kr.txt b/lib-python/2.7/test/cjkencodings/euc_kr.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/euc_kr.txt @@ -0,0 +1,7 @@ +�� ���̽�(Python)�� ���� ����, ������ ���α׷��� ����Դϴ�. ���̽��� +ȿ������ ����� ������ ������ ���������� ȿ������ ��ü�������α׷����� +�����մϴ�. ���̽��� ���(���)�� ������ ���� Ÿ����, �׸��� ���������� +ȯ���� ���̽��� ��ũ���ð� ���� �о߿����� ��κ��� �÷��������� ���� +���ø����̼� ������ �� �� �ִ� �̻����� ���� ������ݴϴ�. + +��ù����: ���ƶ� �Ԥ��ФԤԤ��ФԾ�~ �Ԥ��Ҥ�ŭ! �Ԥ��Ѥ��ݾ��� ���Ԥ��Ȥ��ϴ�. �Ԥ��Τ�. �׷��� �Ԥ��Ѥ���. diff --git a/lib-python/2.7/test/cjkencodings/gb18030-utf8.txt b/lib-python/2.7/test/cjkencodings/gb18030-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/gb18030-utf8.txt @@ -0,0 +1,15 @@ +Python(派森)语言是一种功能强大而完善的通用型计算机程序设计语言, +已经具有十多年的发展历史,成熟且稳定。这种语言具有非常简捷而清晰 +的语法特点,适合完成各种高层任务,几乎可以在所有的操作系统中 +运行。这种语言简单而强大,适合各种人士学习使用。目前,基于这 +种语言的相关技术正在飞速的发展,用户数量急剧扩大,相关的资源非常多。 +如何在 Python 中使用既有的 C library? + 在資訊科技快速發展的今天, 開發及測試軟體的速度是不容忽視的 +課題. 為加快開發及測試的速度, 我們便常希望能利用一些已開發好的 +library, 並有一個 fast prototyping 的 programming language 可 +供使用. 目前有許許多多的 library 是以 C 寫成, 而 Python 是一個 +fast prototyping 的 programming language. 故我們希望能將既有的 +C library 拿到 Python 的環境中測試及整合. 其中最主要也是我們所 +要討論的問題就是: +파이썬은 강력한 기능을 지닌 범용 컴퓨터 프로그래밍 언어다. + diff --git a/lib-python/2.7/test/cjkencodings/gb18030.txt b/lib-python/2.7/test/cjkencodings/gb18030.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/gb18030.txt @@ -0,0 +1,15 @@ +Python����ɭ��������һ�ֹ���ǿ������Ƶ�ͨ���ͼ��������������ԣ� +�Ѿ�����ʮ����ķ�չ��ʷ���������ȶ����������Ծ��зdz���ݶ����� +���﷨�ص㣬�ʺ���ɸ��ָ߲����񣬼������������еIJ���ϵͳ�� +���С��������Լ򵥶�ǿ���ʺϸ�����ʿѧϰʹ�á�Ŀǰ�������� +�����Ե���ؼ������ڷ��ٵķ�չ���û���������������ص���Դ�dz��ࡣ +����� Python ��ʹ�ü��е� C library? +�����YӍ�Ƽ����ٰlչ�Ľ���, �_�l���yԇܛ�w���ٶ��Dz��ݺ�ҕ�� +�n�}. ��ӿ��_�l���yԇ���ٶ�, �҂��㳣ϣ��������һЩ���_�l�õ� +library, �K��һ�� fast prototyping �� programming language �� +��ʹ��. Ŀǰ���S�S���� library ���� C ����, �� Python ��һ�� +fast prototyping �� programming language. ���҂�ϣ���܌����е� +C library �õ� Python �ĭh���Мyԇ������. ��������ҪҲ���҂��� +ҪӑՓ�Ć��}����: +�5�1�3�3�2�1�3�1 �7�6�0�4�6�3 �8�5�8�6�3�5 �3�1�9�5 �0�9�3�0 �4�3�5�7�5�5 �5�5�0�9�8�9�9�3�0�4 �2�9�2�5�9�9. 
+ diff --git a/lib-python/2.7/test/cjkencodings/gb2312-utf8.txt b/lib-python/2.7/test/cjkencodings/gb2312-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/gb2312-utf8.txt @@ -0,0 +1,6 @@ +Python(派森)语言是一种功能强大而完善的通用型计算机程序设计语言, +已经具有十多年的发展历史,成熟且稳定。这种语言具有非常简捷而清晰 +的语法特点,适合完成各种高层任务,几乎可以在所有的操作系统中 +运行。这种语言简单而强大,适合各种人士学习使用。目前,基于这 +种语言的相关技术正在飞速的发展,用户数量急剧扩大,相关的资源非常多。 + diff --git a/lib-python/2.7/test/cjkencodings/gb2312.txt b/lib-python/2.7/test/cjkencodings/gb2312.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/gb2312.txt @@ -0,0 +1,6 @@ +Python����ɭ��������һ�ֹ���ǿ������Ƶ�ͨ���ͼ��������������ԣ� +�Ѿ�����ʮ����ķ�չ��ʷ���������ȶ����������Ծ��зdz���ݶ����� +���﷨�ص㣬�ʺ���ɸ��ָ߲����񣬼������������еIJ���ϵͳ�� +���С��������Լ򵥶�ǿ���ʺϸ�����ʿѧϰʹ�á�Ŀǰ�������� +�����Ե���ؼ������ڷ��ٵķ�չ���û���������������ص���Դ�dz��ࡣ + diff --git a/lib-python/2.7/test/cjkencodings/gbk-utf8.txt b/lib-python/2.7/test/cjkencodings/gbk-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/gbk-utf8.txt @@ -0,0 +1,14 @@ +Python(派森)语言是一种功能强大而完善的通用型计算机程序设计语言, +已经具有十多年的发展历史,成熟且稳定。这种语言具有非常简捷而清晰 +的语法特点,适合完成各种高层任务,几乎可以在所有的操作系统中 +运行。这种语言简单而强大,适合各种人士学习使用。目前,基于这 +种语言的相关技术正在飞速的发展,用户数量急剧扩大,相关的资源非常多。 +如何在 Python 中使用既有的 C library? + 在資訊科技快速發展的今天, 開發及測試軟體的速度是不容忽視的 +課題. 為加快開發及測試的速度, 我們便常希望能利用一些已開發好的 +library, 並有一個 fast prototyping 的 programming language 可 +供使用. 目前有許許多多的 library 是以 C 寫成, 而 Python 是一個 +fast prototyping 的 programming language. 故我們希望能將既有的 +C library 拿到 Python 的環境中測試及整合. 其中最主要也是我們所 +要討論的問題就是: + diff --git a/lib-python/2.7/test/cjkencodings/gbk.txt b/lib-python/2.7/test/cjkencodings/gbk.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/gbk.txt @@ -0,0 +1,14 @@ +Python����ɭ��������һ�ֹ���ǿ������Ƶ�ͨ���ͼ��������������ԣ� +�Ѿ�����ʮ����ķ�չ��ʷ���������ȶ����������Ծ��зdz���ݶ����� +���﷨�ص㣬�ʺ���ɸ��ָ߲����񣬼������������еIJ���ϵͳ�� +���С��������Լ򵥶�ǿ���ʺϸ�����ʿѧϰʹ�á�Ŀǰ�������� +�����Ե���ؼ������ڷ��ٵķ�չ���û���������������ص���Դ�dz��ࡣ +����� Python ��ʹ�ü��е� C library? +�����YӍ�Ƽ����ٰlչ�Ľ���, �_�l���yԇܛ�w���ٶ��Dz��ݺ�ҕ�� +�n�}. ��ӿ��_�l���yԇ���ٶ�, �҂��㳣ϣ��������һЩ���_�l�õ� +library, �K��һ�� fast prototyping �� programming language �� +��ʹ��. Ŀǰ���S�S���� library ���� C ����, �� Python ��һ�� +fast prototyping �� programming language. ���҂�ϣ���܌����е� +C library �õ� Python �ĭh���Мyԇ������. ��������ҪҲ���҂��� +ҪӑՓ�Ć��}����: + diff --git a/lib-python/2.7/test/cjkencodings/hz-utf8.txt b/lib-python/2.7/test/cjkencodings/hz-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/hz-utf8.txt @@ -0,0 +1,2 @@ +This sentence is in ASCII. +The next sentence is in GB.己所不欲,勿施於人。Bye. diff --git a/lib-python/2.7/test/cjkencodings/hz.txt b/lib-python/2.7/test/cjkencodings/hz.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/hz.txt @@ -0,0 +1,2 @@ +This sentence is in ASCII. +The next sentence is in GB.~{<:Ky2;S{#,NpJ)l6HK!#~}Bye. diff --git a/lib-python/2.7/test/cjkencodings/johab-utf8.txt b/lib-python/2.7/test/cjkencodings/johab-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/johab-utf8.txt @@ -0,0 +1,9 @@ +똠방각하 펲시콜라 + +㉯㉯납!! 因九月패믤릔궈 ⓡⓖ훀¿¿¿ 긍뒙 ⓔ뎨 ㉯. . +亞영ⓔ능횹 . . . . 서울뤄 뎐학乙 家훀 ! ! !ㅠ.ㅠ +흐흐흐 ㄱㄱㄱ☆ㅠ_ㅠ 어릨 탸콰긐 뎌응 칑九들乙 ㉯드긐 +설릌 家훀 . . . . 굴애쉌 ⓔ궈 ⓡ릘㉱긐 因仁川女中까즼 +와쒀훀 ! ! 亞영ⓔ 家능궈 ☆上관 없능궈능 亞능뒈훀 글애듴 +ⓡ려듀九 싀풔숴훀 어릨 因仁川女中싁⑨들앜!! 
㉯㉯납♡ ⌒⌒* + diff --git a/lib-python/2.7/test/cjkencodings/johab.txt b/lib-python/2.7/test/cjkencodings/johab.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/johab.txt @@ -0,0 +1,9 @@ +���w�b�a �\��ũ�a + +�����s!! �g��Ú������ �����zٯٯٯ �w�� �ѕ� ��. . +�<�w�ѓw�s . . . . �ᶉ�� �e�b�� �;�z ! ! !�A.�A +�a�a�a �A�A�A�i�A_�A �៚ ȡ���z �a�w ×✗i�� ���a�z +��z �;�z . . . . ������ �ъ� �ޟ��‹z �g�b�I����a�� +�����z ! ! �<�w�� �;�w�� �i꾉� ���w���w �<�w���z �i���z +�ޝa�A� ��Ρ���z �៚ �g�b�I���鯂��i�z!! �����sٽ �b�b* + diff --git a/lib-python/2.7/test/cjkencodings/shift_jis-utf8.txt b/lib-python/2.7/test/cjkencodings/shift_jis-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/shift_jis-utf8.txt @@ -0,0 +1,7 @@ +Python の開発は、1990 年ごろから開始されています。 +開発者の Guido van Rossum は教育用のプログラミング言語「ABC」の開発に参加していましたが、ABC は実用上の目的にはあまり適していませんでした。 +このため、Guido はより実用的なプログラミング言語の開発を開始し、英国 BBS 放送のコメディ番組「モンティ パイソン」のファンである Guido はこの言語を「Python」と名づけました。 +このような背景から生まれた Python の言語設計は、「シンプル」で「習得が容易」という目標に重点が置かれています。 +多くのスクリプト系言語ではユーザの目先の利便性を優先して色々な機能を言語要素として取り入れる場合が多いのですが、Python ではそういった小細工が追加されることはあまりありません。 +言語自体の機能は最小限に押さえ、必要な機能は拡張モジュールとして追加する、というのが Python のポリシーです。 + diff --git a/lib-python/2.7/test/cjkencodings/shift_jis.txt b/lib-python/2.7/test/cjkencodings/shift_jis.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/shift_jis.txt @@ -0,0 +1,7 @@ +Python �̊J���́A1990 �N���납��J�n����Ă��܂��B +�J���҂� Guido van Rossum �͋���p�̃v���O���~���O����uABC�v�̊J���ɎQ�����Ă��܂������AABC �͎��p��̖ړI�ɂ͂��܂�K���Ă��܂���ł����B +���̂��߁AGuido �͂����p�I�ȃv���O���~���O����̊J�����J�n���A�p�� BBS �����̃R���f�B�ԑg�u�����e�B �p�C�\���v�̃t�@���ł��� Guido �͂��̌�����uPython�v�Ɩ��Â��܂����B +���̂悤�Ȕw�i���琶�܂ꂽ Python �̌���݌v�́A�u�V���v���v�Łu�K�����e�Ձv�Ƃ����ڕW�ɏd�_���u����Ă��܂��B +�����̃X�N���v�g�n����ł̓��[�U�̖ڐ�̗��֐���D�悵�ĐF�X�ȋ@�\������v�f�Ƃ��Ď������ꍇ�������̂ł����APython �ł͂������������׍H���lj�����邱�Ƃ͂��܂肠��܂���B +���ꎩ�̂̋@�\�͍ŏ����ɉ������A�K�v�ȋ@�\�͊g�����W���[���Ƃ��Ēlj�����A�Ƃ����̂� Python �̃|���V�[�ł��B + diff --git a/lib-python/2.7/test/cjkencodings/shift_jisx0213-utf8.txt b/lib-python/2.7/test/cjkencodings/shift_jisx0213-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/shift_jisx0213-utf8.txt @@ -0,0 +1,8 @@ +Python の開発は、1990 年ごろから開始されています。 +開発者の Guido van Rossum は教育用のプログラミング言語「ABC」の開発に参加していましたが、ABC は実用上の目的にはあまり適していませんでした。 +このため、Guido はより実用的なプログラミング言語の開発を開始し、英国 BBS 放送のコメディ番組「モンティ パイソン」のファンである Guido はこの言語を「Python」と名づけました。 +このような背景から生まれた Python の言語設計は、「シンプル」で「習得が容易」という目標に重点が置かれています。 +多くのスクリプト系言語ではユーザの目先の利便性を優先して色々な機能を言語要素として取り入れる場合が多いのですが、Python ではそういった小細工が追加されることはあまりありません。 +言語自体の機能は最小限に押さえ、必要な機能は拡張モジュールとして追加する、というのが Python のポリシーです。 + +ノか゚ ト゚ トキ喝塀 𡚴𪎌 麀齁𩛰 diff --git a/lib-python/2.7/test/cjkencodings/shift_jisx0213.txt b/lib-python/2.7/test/cjkencodings/shift_jisx0213.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/shift_jisx0213.txt @@ -0,0 +1,8 @@ +Python �̊J���́A1990 �N���납��J�n����Ă��܂��B +�J���҂� Guido van Rossum �͋���p�̃v���O���~���O����uABC�v�̊J���ɎQ�����Ă��܂������AABC �͎��p��̖ړI�ɂ͂��܂�K���Ă��܂���ł����B +���̂��߁AGuido �͂����p�I�ȃv���O���~���O����̊J�����J�n���A�p�� BBS �����̃R���f�B�ԑg�u�����e�B �p�C�\���v�̃t�@���ł��� Guido �͂��̌�����uPython�v�Ɩ��Â��܂����B +���̂悤�Ȕw�i���琶�܂ꂽ Python �̌���݌v�́A�u�V���v���v�Łu�K�����e�Ձv�Ƃ����ڕW�ɏd�_���u����Ă��܂��B +�����̃X�N���v�g�n����ł̓��[�U�̖ڐ�̗��֐���D�悵�ĐF�X�ȋ@�\������v�f�Ƃ��Ď������ꍇ�������̂ł����APython �ł͂������������׍H���lj�����邱�Ƃ͂��܂肠��܂���B 
+���ꎩ�̂̋@�\�͍ŏ����ɉ������A�K�v�ȋ@�\�͊g�����W���[���Ƃ��Ēlj�����A�Ƃ����̂� Python �̃|���V�[�ł��B + +�m�� �� �g�L�K�y ���� ������ diff --git a/lib-python/2.7/test/cjkencodings_test.py b/lib-python/2.7/test/cjkencodings_test.py deleted file mode 100644 --- a/lib-python/2.7/test/cjkencodings_test.py +++ /dev/null @@ -1,1019 +0,0 @@ -teststring = { -'big5': ( -"\xa6\x70\xa6\xf3\xa6\x62\x20\x50\x79\x74\x68\x6f\x6e\x20\xa4\xa4" -"\xa8\xcf\xa5\xce\xac\x4a\xa6\xb3\xaa\xba\x20\x43\x20\x6c\x69\x62" -"\x72\x61\x72\x79\x3f\x0a\xa1\x40\xa6\x62\xb8\xea\xb0\x54\xac\xec" -"\xa7\xde\xa7\xd6\xb3\x74\xb5\x6f\xae\x69\xaa\xba\xa4\xb5\xa4\xd1" -"\x2c\x20\xb6\x7d\xb5\x6f\xa4\xce\xb4\xfa\xb8\xd5\xb3\x6e\xc5\xe9" -"\xaa\xba\xb3\x74\xab\xd7\xac\x4f\xa4\xa3\xae\x65\xa9\xbf\xb5\xf8" -"\xaa\xba\x0a\xbd\xd2\xc3\x44\x2e\x20\xac\xb0\xa5\x5b\xa7\xd6\xb6" -"\x7d\xb5\x6f\xa4\xce\xb4\xfa\xb8\xd5\xaa\xba\xb3\x74\xab\xd7\x2c" -"\x20\xa7\xda\xad\xcc\xab\x4b\xb1\x60\xa7\xc6\xb1\xe6\xaf\xe0\xa7" -"\x51\xa5\xce\xa4\x40\xa8\xc7\xa4\x77\xb6\x7d\xb5\x6f\xa6\x6e\xaa" -"\xba\x0a\x6c\x69\x62\x72\x61\x72\x79\x2c\x20\xa8\xc3\xa6\xb3\xa4" -"\x40\xad\xd3\x20\x66\x61\x73\x74\x20\x70\x72\x6f\x74\x6f\x74\x79" -"\x70\x69\x6e\x67\x20\xaa\xba\x20\x70\x72\x6f\x67\x72\x61\x6d\x6d" -"\x69\x6e\x67\x20\x6c\x61\x6e\x67\x75\x61\x67\x65\x20\xa5\x69\x0a" -"\xa8\xd1\xa8\xcf\xa5\xce\x2e\x20\xa5\xd8\xab\x65\xa6\xb3\xb3\x5c" -"\xb3\x5c\xa6\x68\xa6\x68\xaa\xba\x20\x6c\x69\x62\x72\x61\x72\x79" -"\x20\xac\x4f\xa5\x48\x20\x43\x20\xbc\x67\xa6\xa8\x2c\x20\xa6\xd3" -"\x20\x50\x79\x74\x68\x6f\x6e\x20\xac\x4f\xa4\x40\xad\xd3\x0a\x66" -"\x61\x73\x74\x20\x70\x72\x6f\x74\x6f\x74\x79\x70\x69\x6e\x67\x20" -"\xaa\xba\x20\x70\x72\x6f\x67\x72\x61\x6d\x6d\x69\x6e\x67\x20\x6c" -"\x61\x6e\x67\x75\x61\x67\x65\x2e\x20\xac\x47\xa7\xda\xad\xcc\xa7" -"\xc6\xb1\xe6\xaf\xe0\xb1\x4e\xac\x4a\xa6\xb3\xaa\xba\x0a\x43\x20" -"\x6c\x69\x62\x72\x61\x72\x79\x20\xae\xb3\xa8\xec\x20\x50\x79\x74" -"\x68\x6f\x6e\x20\xaa\xba\xc0\xf4\xb9\xd2\xa4\xa4\xb4\xfa\xb8\xd5" -"\xa4\xce\xbe\xe3\xa6\x58\x2e\x20\xa8\xe4\xa4\xa4\xb3\xcc\xa5\x44" -"\xad\x6e\xa4\x5d\xac\x4f\xa7\xda\xad\xcc\xa9\xd2\x0a\xad\x6e\xb0" -"\x51\xbd\xd7\xaa\xba\xb0\xdd\xc3\x44\xb4\x4e\xac\x4f\x3a\x0a\x0a", -"\xe5\xa6\x82\xe4\xbd\x95\xe5\x9c\xa8\x20\x50\x79\x74\x68\x6f\x6e" -"\x20\xe4\xb8\xad\xe4\xbd\xbf\xe7\x94\xa8\xe6\x97\xa2\xe6\x9c\x89" -"\xe7\x9a\x84\x20\x43\x20\x6c\x69\x62\x72\x61\x72\x79\x3f\x0a\xe3" -"\x80\x80\xe5\x9c\xa8\xe8\xb3\x87\xe8\xa8\x8a\xe7\xa7\x91\xe6\x8a" -"\x80\xe5\xbf\xab\xe9\x80\x9f\xe7\x99\xbc\xe5\xb1\x95\xe7\x9a\x84" -"\xe4\xbb\x8a\xe5\xa4\xa9\x2c\x20\xe9\x96\x8b\xe7\x99\xbc\xe5\x8f" -"\x8a\xe6\xb8\xac\xe8\xa9\xa6\xe8\xbb\x9f\xe9\xab\x94\xe7\x9a\x84" -"\xe9\x80\x9f\xe5\xba\xa6\xe6\x98\xaf\xe4\xb8\x8d\xe5\xae\xb9\xe5" -"\xbf\xbd\xe8\xa6\x96\xe7\x9a\x84\x0a\xe8\xaa\xb2\xe9\xa1\x8c\x2e" -"\x20\xe7\x82\xba\xe5\x8a\xa0\xe5\xbf\xab\xe9\x96\x8b\xe7\x99\xbc" -"\xe5\x8f\x8a\xe6\xb8\xac\xe8\xa9\xa6\xe7\x9a\x84\xe9\x80\x9f\xe5" -"\xba\xa6\x2c\x20\xe6\x88\x91\xe5\x80\x91\xe4\xbe\xbf\xe5\xb8\xb8" -"\xe5\xb8\x8c\xe6\x9c\x9b\xe8\x83\xbd\xe5\x88\xa9\xe7\x94\xa8\xe4" -"\xb8\x80\xe4\xba\x9b\xe5\xb7\xb2\xe9\x96\x8b\xe7\x99\xbc\xe5\xa5" -"\xbd\xe7\x9a\x84\x0a\x6c\x69\x62\x72\x61\x72\x79\x2c\x20\xe4\xb8" -"\xa6\xe6\x9c\x89\xe4\xb8\x80\xe5\x80\x8b\x20\x66\x61\x73\x74\x20" -"\x70\x72\x6f\x74\x6f\x74\x79\x70\x69\x6e\x67\x20\xe7\x9a\x84\x20" -"\x70\x72\x6f\x67\x72\x61\x6d\x6d\x69\x6e\x67\x20\x6c\x61\x6e\x67" -"\x75\x61\x67\x65\x20\xe5\x8f\xaf\x0a\xe4\xbe\x9b\xe4\xbd\xbf\xe7" -"\x94\xa8\x2e\x20\xe7\x9b\xae\xe5\x89\x8d\xe6\x9c\x89\xe8\xa8\xb1" 
-"\xe8\xa8\xb1\xe5\xa4\x9a\xe5\xa4\x9a\xe7\x9a\x84\x20\x6c\x69\x62" -"\x72\x61\x72\x79\x20\xe6\x98\xaf\xe4\xbb\xa5\x20\x43\x20\xe5\xaf" -"\xab\xe6\x88\x90\x2c\x20\xe8\x80\x8c\x20\x50\x79\x74\x68\x6f\x6e" -"\x20\xe6\x98\xaf\xe4\xb8\x80\xe5\x80\x8b\x0a\x66\x61\x73\x74\x20" -"\x70\x72\x6f\x74\x6f\x74\x79\x70\x69\x6e\x67\x20\xe7\x9a\x84\x20" -"\x70\x72\x6f\x67\x72\x61\x6d\x6d\x69\x6e\x67\x20\x6c\x61\x6e\x67" -"\x75\x61\x67\x65\x2e\x20\xe6\x95\x85\xe6\x88\x91\xe5\x80\x91\xe5" -"\xb8\x8c\xe6\x9c\x9b\xe8\x83\xbd\xe5\xb0\x87\xe6\x97\xa2\xe6\x9c" -"\x89\xe7\x9a\x84\x0a\x43\x20\x6c\x69\x62\x72\x61\x72\x79\x20\xe6" -"\x8b\xbf\xe5\x88\xb0\x20\x50\x79\x74\x68\x6f\x6e\x20\xe7\x9a\x84" -"\xe7\x92\xb0\xe5\xa2\x83\xe4\xb8\xad\xe6\xb8\xac\xe8\xa9\xa6\xe5" -"\x8f\x8a\xe6\x95\xb4\xe5\x90\x88\x2e\x20\xe5\x85\xb6\xe4\xb8\xad" -"\xe6\x9c\x80\xe4\xb8\xbb\xe8\xa6\x81\xe4\xb9\x9f\xe6\x98\xaf\xe6" -"\x88\x91\xe5\x80\x91\xe6\x89\x80\x0a\xe8\xa6\x81\xe8\xa8\x8e\xe8" -"\xab\x96\xe7\x9a\x84\xe5\x95\x8f\xe9\xa1\x8c\xe5\xb0\xb1\xe6\x98" -"\xaf\x3a\x0a\x0a"), -'big5hkscs': ( -"\x88\x45\x88\x5c\x8a\x73\x8b\xda\x8d\xd8\x0a\x88\x66\x88\x62\x88" -"\xa7\x20\x88\xa7\x88\xa3\x0a", -"\xf0\xa0\x84\x8c\xc4\x9a\xe9\xb5\xae\xe7\xbd\x93\xe6\xb4\x86\x0a" -"\xc3\x8a\xc3\x8a\xcc\x84\xc3\xaa\x20\xc3\xaa\xc3\xaa\xcc\x84\x0a"), -'cp949': ( -"\x8c\x63\xb9\xe6\xb0\xa2\xc7\xcf\x20\xbc\x84\xbd\xc3\xc4\xdd\xb6" -"\xf3\x0a\x0a\xa8\xc0\xa8\xc0\xb3\xb3\x21\x21\x20\xec\xd7\xce\xfa" -"\xea\xc5\xc6\xd0\x92\xe6\x90\x70\xb1\xc5\x20\xa8\xde\xa8\xd3\xc4" -"\x52\xa2\xaf\xa2\xaf\xa2\xaf\x20\xb1\xe0\x8a\x96\x20\xa8\xd1\xb5" -"\xb3\x20\xa8\xc0\x2e\x20\x2e\x0a\xe4\xac\xbf\xb5\xa8\xd1\xb4\xc9" -"\xc8\xc2\x20\x2e\x20\x2e\x20\x2e\x20\x2e\x20\xbc\xad\xbf\xef\xb7" -"\xef\x20\xb5\xaf\xc7\xd0\xeb\xe0\x20\xca\xab\xc4\x52\x20\x21\x20" -"\x21\x20\x21\xa4\xd0\x2e\xa4\xd0\x0a\xc8\xe5\xc8\xe5\xc8\xe5\x20" -"\xa4\xa1\xa4\xa1\xa4\xa1\xa1\xd9\xa4\xd0\x5f\xa4\xd0\x20\xbe\xee" -"\x90\x8a\x20\xc5\xcb\xc4\xe2\x83\x4f\x20\xb5\xae\xc0\xc0\x20\xaf" -"\x68\xce\xfa\xb5\xe9\xeb\xe0\x20\xa8\xc0\xb5\xe5\x83\x4f\x0a\xbc" -"\xb3\x90\x6a\x20\xca\xab\xc4\x52\x20\x2e\x20\x2e\x20\x2e\x20\x2e" -"\x20\xb1\xbc\xbe\xd6\x9a\x66\x20\xa8\xd1\xb1\xc5\x20\xa8\xde\x90" -"\x74\xa8\xc2\x83\x4f\x20\xec\xd7\xec\xd2\xf4\xb9\xe5\xfc\xf1\xe9" -"\xb1\xee\xa3\x8e\x0a\xbf\xcd\xbe\xac\xc4\x52\x20\x21\x20\x21\x20" -"\xe4\xac\xbf\xb5\xa8\xd1\x20\xca\xab\xb4\xc9\xb1\xc5\x20\xa1\xd9" -"\xdf\xbe\xb0\xfc\x20\xbe\xf8\xb4\xc9\xb1\xc5\xb4\xc9\x20\xe4\xac" -"\xb4\xc9\xb5\xd8\xc4\x52\x20\xb1\xdb\xbe\xd6\x8a\xdb\x0a\xa8\xde" -"\xb7\xc1\xb5\xe0\xce\xfa\x20\x9a\xc3\xc7\xb4\xbd\xa4\xc4\x52\x20" -"\xbe\xee\x90\x8a\x20\xec\xd7\xec\xd2\xf4\xb9\xe5\xfc\xf1\xe9\x9a" -"\xc4\xa8\xef\xb5\xe9\x9d\xda\x21\x21\x20\xa8\xc0\xa8\xc0\xb3\xb3" -"\xa2\xbd\x20\xa1\xd2\xa1\xd2\x2a\x0a\x0a", -"\xeb\x98\xa0\xeb\xb0\xa9\xea\xb0\x81\xed\x95\x98\x20\xed\x8e\xb2" -"\xec\x8b\x9c\xec\xbd\x9c\xeb\x9d\xbc\x0a\x0a\xe3\x89\xaf\xe3\x89" -"\xaf\xeb\x82\xa9\x21\x21\x20\xe5\x9b\xa0\xe4\xb9\x9d\xe6\x9c\x88" -"\xed\x8c\xa8\xeb\xaf\xa4\xeb\xa6\x94\xea\xb6\x88\x20\xe2\x93\xa1" -"\xe2\x93\x96\xed\x9b\x80\xc2\xbf\xc2\xbf\xc2\xbf\x20\xea\xb8\x8d" -"\xeb\x92\x99\x20\xe2\x93\x94\xeb\x8e\xa8\x20\xe3\x89\xaf\x2e\x20" -"\x2e\x0a\xe4\xba\x9e\xec\x98\x81\xe2\x93\x94\xeb\x8a\xa5\xed\x9a" -"\xb9\x20\x2e\x20\x2e\x20\x2e\x20\x2e\x20\xec\x84\x9c\xec\x9a\xb8" -"\xeb\xa4\x84\x20\xeb\x8e\x90\xed\x95\x99\xe4\xb9\x99\x20\xe5\xae" -"\xb6\xed\x9b\x80\x20\x21\x20\x21\x20\x21\xe3\x85\xa0\x2e\xe3\x85" -"\xa0\x0a\xed\x9d\x90\xed\x9d\x90\xed\x9d\x90\x20\xe3\x84\xb1\xe3" 
-"\x84\xb1\xe3\x84\xb1\xe2\x98\x86\xe3\x85\xa0\x5f\xe3\x85\xa0\x20" -"\xec\x96\xb4\xeb\xa6\xa8\x20\xed\x83\xb8\xec\xbd\xb0\xea\xb8\x90" -"\x20\xeb\x8e\x8c\xec\x9d\x91\x20\xec\xb9\x91\xe4\xb9\x9d\xeb\x93" -"\xa4\xe4\xb9\x99\x20\xe3\x89\xaf\xeb\x93\x9c\xea\xb8\x90\x0a\xec" -"\x84\xa4\xeb\xa6\x8c\x20\xe5\xae\xb6\xed\x9b\x80\x20\x2e\x20\x2e" -"\x20\x2e\x20\x2e\x20\xea\xb5\xb4\xec\x95\xa0\xec\x89\x8c\x20\xe2" -"\x93\x94\xea\xb6\x88\x20\xe2\x93\xa1\xeb\xa6\x98\xe3\x89\xb1\xea" -"\xb8\x90\x20\xe5\x9b\xa0\xe4\xbb\x81\xe5\xb7\x9d\xef\xa6\x81\xe4" -"\xb8\xad\xea\xb9\x8c\xec\xa6\xbc\x0a\xec\x99\x80\xec\x92\x80\xed" -"\x9b\x80\x20\x21\x20\x21\x20\xe4\xba\x9e\xec\x98\x81\xe2\x93\x94" -"\x20\xe5\xae\xb6\xeb\x8a\xa5\xea\xb6\x88\x20\xe2\x98\x86\xe4\xb8" -"\x8a\xea\xb4\x80\x20\xec\x97\x86\xeb\x8a\xa5\xea\xb6\x88\xeb\x8a" -"\xa5\x20\xe4\xba\x9e\xeb\x8a\xa5\xeb\x92\x88\xed\x9b\x80\x20\xea" -"\xb8\x80\xec\x95\xa0\xeb\x93\xb4\x0a\xe2\x93\xa1\xeb\xa0\xa4\xeb" -"\x93\x80\xe4\xb9\x9d\x20\xec\x8b\x80\xed\x92\x94\xec\x88\xb4\xed" -"\x9b\x80\x20\xec\x96\xb4\xeb\xa6\xa8\x20\xe5\x9b\xa0\xe4\xbb\x81" -"\xe5\xb7\x9d\xef\xa6\x81\xe4\xb8\xad\xec\x8b\x81\xe2\x91\xa8\xeb" -"\x93\xa4\xec\x95\x9c\x21\x21\x20\xe3\x89\xaf\xe3\x89\xaf\xeb\x82" -"\xa9\xe2\x99\xa1\x20\xe2\x8c\x92\xe2\x8c\x92\x2a\x0a\x0a"), -'euc_jisx0213': ( -"\x50\x79\x74\x68\x6f\x6e\x20\xa4\xce\xb3\xab\xc8\xaf\xa4\xcf\xa1" -"\xa2\x31\x39\x39\x30\x20\xc7\xaf\xa4\xb4\xa4\xed\xa4\xab\xa4\xe9" -"\xb3\xab\xbb\xcf\xa4\xb5\xa4\xec\xa4\xc6\xa4\xa4\xa4\xde\xa4\xb9" -"\xa1\xa3\x0a\xb3\xab\xc8\xaf\xbc\xd4\xa4\xce\x20\x47\x75\x69\x64" -"\x6f\x20\x76\x61\x6e\x20\x52\x6f\x73\x73\x75\x6d\x20\xa4\xcf\xb6" -"\xb5\xb0\xe9\xcd\xd1\xa4\xce\xa5\xd7\xa5\xed\xa5\xb0\xa5\xe9\xa5" -"\xdf\xa5\xf3\xa5\xb0\xb8\xc0\xb8\xec\xa1\xd6\x41\x42\x43\xa1\xd7" -"\xa4\xce\xb3\xab\xc8\xaf\xa4\xcb\xbb\xb2\xb2\xc3\xa4\xb7\xa4\xc6" -"\xa4\xa4\xa4\xde\xa4\xb7\xa4\xbf\xa4\xac\xa1\xa2\x41\x42\x43\x20" -"\xa4\xcf\xbc\xc2\xcd\xd1\xbe\xe5\xa4\xce\xcc\xdc\xc5\xaa\xa4\xcb" -"\xa4\xcf\xa4\xa2\xa4\xde\xa4\xea\xc5\xac\xa4\xb7\xa4\xc6\xa4\xa4" -"\xa4\xde\xa4\xbb\xa4\xf3\xa4\xc7\xa4\xb7\xa4\xbf\xa1\xa3\x0a\xa4" -"\xb3\xa4\xce\xa4\xbf\xa4\xe1\xa1\xa2\x47\x75\x69\x64\x6f\x20\xa4" -"\xcf\xa4\xe8\xa4\xea\xbc\xc2\xcd\xd1\xc5\xaa\xa4\xca\xa5\xd7\xa5" -"\xed\xa5\xb0\xa5\xe9\xa5\xdf\xa5\xf3\xa5\xb0\xb8\xc0\xb8\xec\xa4" -"\xce\xb3\xab\xc8\xaf\xa4\xf2\xb3\xab\xbb\xcf\xa4\xb7\xa1\xa2\xb1" -"\xd1\xb9\xf1\x20\x42\x42\x53\x20\xca\xfc\xc1\xf7\xa4\xce\xa5\xb3" -"\xa5\xe1\xa5\xc7\xa5\xa3\xc8\xd6\xc1\xc8\xa1\xd6\xa5\xe2\xa5\xf3" -"\xa5\xc6\xa5\xa3\x20\xa5\xd1\xa5\xa4\xa5\xbd\xa5\xf3\xa1\xd7\xa4" -"\xce\xa5\xd5\xa5\xa1\xa5\xf3\xa4\xc7\xa4\xa2\xa4\xeb\x20\x47\x75" -"\x69\x64\x6f\x20\xa4\xcf\xa4\xb3\xa4\xce\xb8\xc0\xb8\xec\xa4\xf2" -"\xa1\xd6\x50\x79\x74\x68\x6f\x6e\xa1\xd7\xa4\xc8\xcc\xbe\xa4\xc5" -"\xa4\xb1\xa4\xde\xa4\xb7\xa4\xbf\xa1\xa3\x0a\xa4\xb3\xa4\xce\xa4" -"\xe8\xa4\xa6\xa4\xca\xc7\xd8\xb7\xca\xa4\xab\xa4\xe9\xc0\xb8\xa4" -"\xde\xa4\xec\xa4\xbf\x20\x50\x79\x74\x68\x6f\x6e\x20\xa4\xce\xb8" -"\xc0\xb8\xec\xc0\xdf\xb7\xd7\xa4\xcf\xa1\xa2\xa1\xd6\xa5\xb7\xa5" -"\xf3\xa5\xd7\xa5\xeb\xa1\xd7\xa4\xc7\xa1\xd6\xbd\xac\xc6\xc0\xa4" -"\xac\xcd\xc6\xb0\xd7\xa1\xd7\xa4\xc8\xa4\xa4\xa4\xa6\xcc\xdc\xc9" -"\xb8\xa4\xcb\xbd\xc5\xc5\xc0\xa4\xac\xc3\xd6\xa4\xab\xa4\xec\xa4" -"\xc6\xa4\xa4\xa4\xde\xa4\xb9\xa1\xa3\x0a\xc2\xbf\xa4\xaf\xa4\xce" -"\xa5\xb9\xa5\xaf\xa5\xea\xa5\xd7\xa5\xc8\xb7\xcf\xb8\xc0\xb8\xec" -"\xa4\xc7\xa4\xcf\xa5\xe6\xa1\xbc\xa5\xb6\xa4\xce\xcc\xdc\xc0\xe8" -"\xa4\xce\xcd\xf8\xca\xd8\xc0\xad\xa4\xf2\xcd\xa5\xc0\xe8\xa4\xb7" 
-"\xa4\xc6\xbf\xa7\xa1\xb9\xa4\xca\xb5\xa1\xc7\xbd\xa4\xf2\xb8\xc0" -"\xb8\xec\xcd\xd7\xc1\xc7\xa4\xc8\xa4\xb7\xa4\xc6\xbc\xe8\xa4\xea" -"\xc6\xfe\xa4\xec\xa4\xeb\xbe\xec\xb9\xe7\xa4\xac\xc2\xbf\xa4\xa4" -"\xa4\xce\xa4\xc7\xa4\xb9\xa4\xac\xa1\xa2\x50\x79\x74\x68\x6f\x6e" -"\x20\xa4\xc7\xa4\xcf\xa4\xbd\xa4\xa6\xa4\xa4\xa4\xc3\xa4\xbf\xbe" -"\xae\xba\xd9\xb9\xa9\xa4\xac\xc4\xc9\xb2\xc3\xa4\xb5\xa4\xec\xa4" -"\xeb\xa4\xb3\xa4\xc8\xa4\xcf\xa4\xa2\xa4\xde\xa4\xea\xa4\xa2\xa4" -"\xea\xa4\xde\xa4\xbb\xa4\xf3\xa1\xa3\x0a\xb8\xc0\xb8\xec\xbc\xab" -"\xc2\xce\xa4\xce\xb5\xa1\xc7\xbd\xa4\xcf\xba\xc7\xbe\xae\xb8\xc2" -"\xa4\xcb\xb2\xa1\xa4\xb5\xa4\xa8\xa1\xa2\xc9\xac\xcd\xd7\xa4\xca" -"\xb5\xa1\xc7\xbd\xa4\xcf\xb3\xc8\xc4\xa5\xa5\xe2\xa5\xb8\xa5\xe5" -"\xa1\xbc\xa5\xeb\xa4\xc8\xa4\xb7\xa4\xc6\xc4\xc9\xb2\xc3\xa4\xb9" -"\xa4\xeb\xa1\xa2\xa4\xc8\xa4\xa4\xa4\xa6\xa4\xce\xa4\xac\x20\x50" -"\x79\x74\x68\x6f\x6e\x20\xa4\xce\xa5\xdd\xa5\xea\xa5\xb7\xa1\xbc" -"\xa4\xc7\xa4\xb9\xa1\xa3\x0a\x0a\xa5\xce\xa4\xf7\x20\xa5\xfe\x20" -"\xa5\xc8\xa5\xad\xaf\xac\xaf\xda\x20\xcf\xe3\x8f\xfe\xd8\x20\x8f" -"\xfe\xd4\x8f\xfe\xe8\x8f\xfc\xd6\x0a", -"\x50\x79\x74\x68\x6f\x6e\x20\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba" -"\xe3\x81\xaf\xe3\x80\x81\x31\x39\x39\x30\x20\xe5\xb9\xb4\xe3\x81" -"\x94\xe3\x82\x8d\xe3\x81\x8b\xe3\x82\x89\xe9\x96\x8b\xe5\xa7\x8b" -"\xe3\x81\x95\xe3\x82\x8c\xe3\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3" -"\x81\x99\xe3\x80\x82\x0a\xe9\x96\x8b\xe7\x99\xba\xe8\x80\x85\xe3" -"\x81\xae\x20\x47\x75\x69\x64\x6f\x20\x76\x61\x6e\x20\x52\x6f\x73" -"\x73\x75\x6d\x20\xe3\x81\xaf\xe6\x95\x99\xe8\x82\xb2\xe7\x94\xa8" -"\xe3\x81\xae\xe3\x83\x97\xe3\x83\xad\xe3\x82\xb0\xe3\x83\xa9\xe3" -"\x83\x9f\xe3\x83\xb3\xe3\x82\xb0\xe8\xa8\x80\xe8\xaa\x9e\xe3\x80" -"\x8c\x41\x42\x43\xe3\x80\x8d\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba" -"\xe3\x81\xab\xe5\x8f\x82\xe5\x8a\xa0\xe3\x81\x97\xe3\x81\xa6\xe3" -"\x81\x84\xe3\x81\xbe\xe3\x81\x97\xe3\x81\x9f\xe3\x81\x8c\xe3\x80" -"\x81\x41\x42\x43\x20\xe3\x81\xaf\xe5\xae\x9f\xe7\x94\xa8\xe4\xb8" -"\x8a\xe3\x81\xae\xe7\x9b\xae\xe7\x9a\x84\xe3\x81\xab\xe3\x81\xaf" -"\xe3\x81\x82\xe3\x81\xbe\xe3\x82\x8a\xe9\x81\xa9\xe3\x81\x97\xe3" -"\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3\x81\x9b\xe3\x82\x93\xe3\x81" -"\xa7\xe3\x81\x97\xe3\x81\x9f\xe3\x80\x82\x0a\xe3\x81\x93\xe3\x81" -"\xae\xe3\x81\x9f\xe3\x82\x81\xe3\x80\x81\x47\x75\x69\x64\x6f\x20" -"\xe3\x81\xaf\xe3\x82\x88\xe3\x82\x8a\xe5\xae\x9f\xe7\x94\xa8\xe7" -"\x9a\x84\xe3\x81\xaa\xe3\x83\x97\xe3\x83\xad\xe3\x82\xb0\xe3\x83" -"\xa9\xe3\x83\x9f\xe3\x83\xb3\xe3\x82\xb0\xe8\xa8\x80\xe8\xaa\x9e" -"\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba\xe3\x82\x92\xe9\x96\x8b\xe5" -"\xa7\x8b\xe3\x81\x97\xe3\x80\x81\xe8\x8b\xb1\xe5\x9b\xbd\x20\x42" -"\x42\x53\x20\xe6\x94\xbe\xe9\x80\x81\xe3\x81\xae\xe3\x82\xb3\xe3" -"\x83\xa1\xe3\x83\x87\xe3\x82\xa3\xe7\x95\xaa\xe7\xb5\x84\xe3\x80" -"\x8c\xe3\x83\xa2\xe3\x83\xb3\xe3\x83\x86\xe3\x82\xa3\x20\xe3\x83" -"\x91\xe3\x82\xa4\xe3\x82\xbd\xe3\x83\xb3\xe3\x80\x8d\xe3\x81\xae" -"\xe3\x83\x95\xe3\x82\xa1\xe3\x83\xb3\xe3\x81\xa7\xe3\x81\x82\xe3" -"\x82\x8b\x20\x47\x75\x69\x64\x6f\x20\xe3\x81\xaf\xe3\x81\x93\xe3" -"\x81\xae\xe8\xa8\x80\xe8\xaa\x9e\xe3\x82\x92\xe3\x80\x8c\x50\x79" -"\x74\x68\x6f\x6e\xe3\x80\x8d\xe3\x81\xa8\xe5\x90\x8d\xe3\x81\xa5" -"\xe3\x81\x91\xe3\x81\xbe\xe3\x81\x97\xe3\x81\x9f\xe3\x80\x82\x0a" -"\xe3\x81\x93\xe3\x81\xae\xe3\x82\x88\xe3\x81\x86\xe3\x81\xaa\xe8" -"\x83\x8c\xe6\x99\xaf\xe3\x81\x8b\xe3\x82\x89\xe7\x94\x9f\xe3\x81" -"\xbe\xe3\x82\x8c\xe3\x81\x9f\x20\x50\x79\x74\x68\x6f\x6e\x20\xe3" 
-"\x81\xae\xe8\xa8\x80\xe8\xaa\x9e\xe8\xa8\xad\xe8\xa8\x88\xe3\x81" -"\xaf\xe3\x80\x81\xe3\x80\x8c\xe3\x82\xb7\xe3\x83\xb3\xe3\x83\x97" -"\xe3\x83\xab\xe3\x80\x8d\xe3\x81\xa7\xe3\x80\x8c\xe7\xbf\x92\xe5" -"\xbe\x97\xe3\x81\x8c\xe5\xae\xb9\xe6\x98\x93\xe3\x80\x8d\xe3\x81" -"\xa8\xe3\x81\x84\xe3\x81\x86\xe7\x9b\xae\xe6\xa8\x99\xe3\x81\xab" -"\xe9\x87\x8d\xe7\x82\xb9\xe3\x81\x8c\xe7\xbd\xae\xe3\x81\x8b\xe3" -"\x82\x8c\xe3\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3\x81\x99\xe3\x80" -"\x82\x0a\xe5\xa4\x9a\xe3\x81\x8f\xe3\x81\xae\xe3\x82\xb9\xe3\x82" -"\xaf\xe3\x83\xaa\xe3\x83\x97\xe3\x83\x88\xe7\xb3\xbb\xe8\xa8\x80" -"\xe8\xaa\x9e\xe3\x81\xa7\xe3\x81\xaf\xe3\x83\xa6\xe3\x83\xbc\xe3" -"\x82\xb6\xe3\x81\xae\xe7\x9b\xae\xe5\x85\x88\xe3\x81\xae\xe5\x88" -"\xa9\xe4\xbe\xbf\xe6\x80\xa7\xe3\x82\x92\xe5\x84\xaa\xe5\x85\x88" -"\xe3\x81\x97\xe3\x81\xa6\xe8\x89\xb2\xe3\x80\x85\xe3\x81\xaa\xe6" -"\xa9\x9f\xe8\x83\xbd\xe3\x82\x92\xe8\xa8\x80\xe8\xaa\x9e\xe8\xa6" -"\x81\xe7\xb4\xa0\xe3\x81\xa8\xe3\x81\x97\xe3\x81\xa6\xe5\x8f\x96" -"\xe3\x82\x8a\xe5\x85\xa5\xe3\x82\x8c\xe3\x82\x8b\xe5\xa0\xb4\xe5" -"\x90\x88\xe3\x81\x8c\xe5\xa4\x9a\xe3\x81\x84\xe3\x81\xae\xe3\x81" -"\xa7\xe3\x81\x99\xe3\x81\x8c\xe3\x80\x81\x50\x79\x74\x68\x6f\x6e" -"\x20\xe3\x81\xa7\xe3\x81\xaf\xe3\x81\x9d\xe3\x81\x86\xe3\x81\x84" -"\xe3\x81\xa3\xe3\x81\x9f\xe5\xb0\x8f\xe7\xb4\xb0\xe5\xb7\xa5\xe3" -"\x81\x8c\xe8\xbf\xbd\xe5\x8a\xa0\xe3\x81\x95\xe3\x82\x8c\xe3\x82" -"\x8b\xe3\x81\x93\xe3\x81\xa8\xe3\x81\xaf\xe3\x81\x82\xe3\x81\xbe" -"\xe3\x82\x8a\xe3\x81\x82\xe3\x82\x8a\xe3\x81\xbe\xe3\x81\x9b\xe3" -"\x82\x93\xe3\x80\x82\x0a\xe8\xa8\x80\xe8\xaa\x9e\xe8\x87\xaa\xe4" -"\xbd\x93\xe3\x81\xae\xe6\xa9\x9f\xe8\x83\xbd\xe3\x81\xaf\xe6\x9c" -"\x80\xe5\xb0\x8f\xe9\x99\x90\xe3\x81\xab\xe6\x8a\xbc\xe3\x81\x95" -"\xe3\x81\x88\xe3\x80\x81\xe5\xbf\x85\xe8\xa6\x81\xe3\x81\xaa\xe6" -"\xa9\x9f\xe8\x83\xbd\xe3\x81\xaf\xe6\x8b\xa1\xe5\xbc\xb5\xe3\x83" -"\xa2\xe3\x82\xb8\xe3\x83\xa5\xe3\x83\xbc\xe3\x83\xab\xe3\x81\xa8" -"\xe3\x81\x97\xe3\x81\xa6\xe8\xbf\xbd\xe5\x8a\xa0\xe3\x81\x99\xe3" -"\x82\x8b\xe3\x80\x81\xe3\x81\xa8\xe3\x81\x84\xe3\x81\x86\xe3\x81" -"\xae\xe3\x81\x8c\x20\x50\x79\x74\x68\x6f\x6e\x20\xe3\x81\xae\xe3" -"\x83\x9d\xe3\x83\xaa\xe3\x82\xb7\xe3\x83\xbc\xe3\x81\xa7\xe3\x81" -"\x99\xe3\x80\x82\x0a\x0a\xe3\x83\x8e\xe3\x81\x8b\xe3\x82\x9a\x20" -"\xe3\x83\x88\xe3\x82\x9a\x20\xe3\x83\x88\xe3\x82\xad\xef\xa8\xb6" -"\xef\xa8\xb9\x20\xf0\xa1\x9a\xb4\xf0\xaa\x8e\x8c\x20\xe9\xba\x80" -"\xe9\xbd\x81\xf0\xa9\x9b\xb0\x0a"), -'euc_jp': ( -"\x50\x79\x74\x68\x6f\x6e\x20\xa4\xce\xb3\xab\xc8\xaf\xa4\xcf\xa1" -"\xa2\x31\x39\x39\x30\x20\xc7\xaf\xa4\xb4\xa4\xed\xa4\xab\xa4\xe9" -"\xb3\xab\xbb\xcf\xa4\xb5\xa4\xec\xa4\xc6\xa4\xa4\xa4\xde\xa4\xb9" -"\xa1\xa3\x0a\xb3\xab\xc8\xaf\xbc\xd4\xa4\xce\x20\x47\x75\x69\x64" -"\x6f\x20\x76\x61\x6e\x20\x52\x6f\x73\x73\x75\x6d\x20\xa4\xcf\xb6" -"\xb5\xb0\xe9\xcd\xd1\xa4\xce\xa5\xd7\xa5\xed\xa5\xb0\xa5\xe9\xa5" -"\xdf\xa5\xf3\xa5\xb0\xb8\xc0\xb8\xec\xa1\xd6\x41\x42\x43\xa1\xd7" -"\xa4\xce\xb3\xab\xc8\xaf\xa4\xcb\xbb\xb2\xb2\xc3\xa4\xb7\xa4\xc6" -"\xa4\xa4\xa4\xde\xa4\xb7\xa4\xbf\xa4\xac\xa1\xa2\x41\x42\x43\x20" -"\xa4\xcf\xbc\xc2\xcd\xd1\xbe\xe5\xa4\xce\xcc\xdc\xc5\xaa\xa4\xcb" -"\xa4\xcf\xa4\xa2\xa4\xde\xa4\xea\xc5\xac\xa4\xb7\xa4\xc6\xa4\xa4" -"\xa4\xde\xa4\xbb\xa4\xf3\xa4\xc7\xa4\xb7\xa4\xbf\xa1\xa3\x0a\xa4" -"\xb3\xa4\xce\xa4\xbf\xa4\xe1\xa1\xa2\x47\x75\x69\x64\x6f\x20\xa4" -"\xcf\xa4\xe8\xa4\xea\xbc\xc2\xcd\xd1\xc5\xaa\xa4\xca\xa5\xd7\xa5" -"\xed\xa5\xb0\xa5\xe9\xa5\xdf\xa5\xf3\xa5\xb0\xb8\xc0\xb8\xec\xa4" 
-"\xce\xb3\xab\xc8\xaf\xa4\xf2\xb3\xab\xbb\xcf\xa4\xb7\xa1\xa2\xb1" -"\xd1\xb9\xf1\x20\x42\x42\x53\x20\xca\xfc\xc1\xf7\xa4\xce\xa5\xb3" -"\xa5\xe1\xa5\xc7\xa5\xa3\xc8\xd6\xc1\xc8\xa1\xd6\xa5\xe2\xa5\xf3" -"\xa5\xc6\xa5\xa3\x20\xa5\xd1\xa5\xa4\xa5\xbd\xa5\xf3\xa1\xd7\xa4" -"\xce\xa5\xd5\xa5\xa1\xa5\xf3\xa4\xc7\xa4\xa2\xa4\xeb\x20\x47\x75" -"\x69\x64\x6f\x20\xa4\xcf\xa4\xb3\xa4\xce\xb8\xc0\xb8\xec\xa4\xf2" -"\xa1\xd6\x50\x79\x74\x68\x6f\x6e\xa1\xd7\xa4\xc8\xcc\xbe\xa4\xc5" -"\xa4\xb1\xa4\xde\xa4\xb7\xa4\xbf\xa1\xa3\x0a\xa4\xb3\xa4\xce\xa4" -"\xe8\xa4\xa6\xa4\xca\xc7\xd8\xb7\xca\xa4\xab\xa4\xe9\xc0\xb8\xa4" -"\xde\xa4\xec\xa4\xbf\x20\x50\x79\x74\x68\x6f\x6e\x20\xa4\xce\xb8" -"\xc0\xb8\xec\xc0\xdf\xb7\xd7\xa4\xcf\xa1\xa2\xa1\xd6\xa5\xb7\xa5" -"\xf3\xa5\xd7\xa5\xeb\xa1\xd7\xa4\xc7\xa1\xd6\xbd\xac\xc6\xc0\xa4" -"\xac\xcd\xc6\xb0\xd7\xa1\xd7\xa4\xc8\xa4\xa4\xa4\xa6\xcc\xdc\xc9" -"\xb8\xa4\xcb\xbd\xc5\xc5\xc0\xa4\xac\xc3\xd6\xa4\xab\xa4\xec\xa4" -"\xc6\xa4\xa4\xa4\xde\xa4\xb9\xa1\xa3\x0a\xc2\xbf\xa4\xaf\xa4\xce" -"\xa5\xb9\xa5\xaf\xa5\xea\xa5\xd7\xa5\xc8\xb7\xcf\xb8\xc0\xb8\xec" -"\xa4\xc7\xa4\xcf\xa5\xe6\xa1\xbc\xa5\xb6\xa4\xce\xcc\xdc\xc0\xe8" -"\xa4\xce\xcd\xf8\xca\xd8\xc0\xad\xa4\xf2\xcd\xa5\xc0\xe8\xa4\xb7" -"\xa4\xc6\xbf\xa7\xa1\xb9\xa4\xca\xb5\xa1\xc7\xbd\xa4\xf2\xb8\xc0" -"\xb8\xec\xcd\xd7\xc1\xc7\xa4\xc8\xa4\xb7\xa4\xc6\xbc\xe8\xa4\xea" -"\xc6\xfe\xa4\xec\xa4\xeb\xbe\xec\xb9\xe7\xa4\xac\xc2\xbf\xa4\xa4" -"\xa4\xce\xa4\xc7\xa4\xb9\xa4\xac\xa1\xa2\x50\x79\x74\x68\x6f\x6e" -"\x20\xa4\xc7\xa4\xcf\xa4\xbd\xa4\xa6\xa4\xa4\xa4\xc3\xa4\xbf\xbe" -"\xae\xba\xd9\xb9\xa9\xa4\xac\xc4\xc9\xb2\xc3\xa4\xb5\xa4\xec\xa4" -"\xeb\xa4\xb3\xa4\xc8\xa4\xcf\xa4\xa2\xa4\xde\xa4\xea\xa4\xa2\xa4" -"\xea\xa4\xde\xa4\xbb\xa4\xf3\xa1\xa3\x0a\xb8\xc0\xb8\xec\xbc\xab" -"\xc2\xce\xa4\xce\xb5\xa1\xc7\xbd\xa4\xcf\xba\xc7\xbe\xae\xb8\xc2" -"\xa4\xcb\xb2\xa1\xa4\xb5\xa4\xa8\xa1\xa2\xc9\xac\xcd\xd7\xa4\xca" -"\xb5\xa1\xc7\xbd\xa4\xcf\xb3\xc8\xc4\xa5\xa5\xe2\xa5\xb8\xa5\xe5" -"\xa1\xbc\xa5\xeb\xa4\xc8\xa4\xb7\xa4\xc6\xc4\xc9\xb2\xc3\xa4\xb9" -"\xa4\xeb\xa1\xa2\xa4\xc8\xa4\xa4\xa4\xa6\xa4\xce\xa4\xac\x20\x50" -"\x79\x74\x68\x6f\x6e\x20\xa4\xce\xa5\xdd\xa5\xea\xa5\xb7\xa1\xbc" -"\xa4\xc7\xa4\xb9\xa1\xa3\x0a\x0a", -"\x50\x79\x74\x68\x6f\x6e\x20\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba" -"\xe3\x81\xaf\xe3\x80\x81\x31\x39\x39\x30\x20\xe5\xb9\xb4\xe3\x81" -"\x94\xe3\x82\x8d\xe3\x81\x8b\xe3\x82\x89\xe9\x96\x8b\xe5\xa7\x8b" -"\xe3\x81\x95\xe3\x82\x8c\xe3\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3" -"\x81\x99\xe3\x80\x82\x0a\xe9\x96\x8b\xe7\x99\xba\xe8\x80\x85\xe3" -"\x81\xae\x20\x47\x75\x69\x64\x6f\x20\x76\x61\x6e\x20\x52\x6f\x73" -"\x73\x75\x6d\x20\xe3\x81\xaf\xe6\x95\x99\xe8\x82\xb2\xe7\x94\xa8" -"\xe3\x81\xae\xe3\x83\x97\xe3\x83\xad\xe3\x82\xb0\xe3\x83\xa9\xe3" -"\x83\x9f\xe3\x83\xb3\xe3\x82\xb0\xe8\xa8\x80\xe8\xaa\x9e\xe3\x80" -"\x8c\x41\x42\x43\xe3\x80\x8d\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba" -"\xe3\x81\xab\xe5\x8f\x82\xe5\x8a\xa0\xe3\x81\x97\xe3\x81\xa6\xe3" -"\x81\x84\xe3\x81\xbe\xe3\x81\x97\xe3\x81\x9f\xe3\x81\x8c\xe3\x80" -"\x81\x41\x42\x43\x20\xe3\x81\xaf\xe5\xae\x9f\xe7\x94\xa8\xe4\xb8" -"\x8a\xe3\x81\xae\xe7\x9b\xae\xe7\x9a\x84\xe3\x81\xab\xe3\x81\xaf" -"\xe3\x81\x82\xe3\x81\xbe\xe3\x82\x8a\xe9\x81\xa9\xe3\x81\x97\xe3" -"\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3\x81\x9b\xe3\x82\x93\xe3\x81" -"\xa7\xe3\x81\x97\xe3\x81\x9f\xe3\x80\x82\x0a\xe3\x81\x93\xe3\x81" -"\xae\xe3\x81\x9f\xe3\x82\x81\xe3\x80\x81\x47\x75\x69\x64\x6f\x20" -"\xe3\x81\xaf\xe3\x82\x88\xe3\x82\x8a\xe5\xae\x9f\xe7\x94\xa8\xe7" 
-"\x9a\x84\xe3\x81\xaa\xe3\x83\x97\xe3\x83\xad\xe3\x82\xb0\xe3\x83" -"\xa9\xe3\x83\x9f\xe3\x83\xb3\xe3\x82\xb0\xe8\xa8\x80\xe8\xaa\x9e" -"\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba\xe3\x82\x92\xe9\x96\x8b\xe5" -"\xa7\x8b\xe3\x81\x97\xe3\x80\x81\xe8\x8b\xb1\xe5\x9b\xbd\x20\x42" -"\x42\x53\x20\xe6\x94\xbe\xe9\x80\x81\xe3\x81\xae\xe3\x82\xb3\xe3" -"\x83\xa1\xe3\x83\x87\xe3\x82\xa3\xe7\x95\xaa\xe7\xb5\x84\xe3\x80" -"\x8c\xe3\x83\xa2\xe3\x83\xb3\xe3\x83\x86\xe3\x82\xa3\x20\xe3\x83" -"\x91\xe3\x82\xa4\xe3\x82\xbd\xe3\x83\xb3\xe3\x80\x8d\xe3\x81\xae" -"\xe3\x83\x95\xe3\x82\xa1\xe3\x83\xb3\xe3\x81\xa7\xe3\x81\x82\xe3" -"\x82\x8b\x20\x47\x75\x69\x64\x6f\x20\xe3\x81\xaf\xe3\x81\x93\xe3" -"\x81\xae\xe8\xa8\x80\xe8\xaa\x9e\xe3\x82\x92\xe3\x80\x8c\x50\x79" -"\x74\x68\x6f\x6e\xe3\x80\x8d\xe3\x81\xa8\xe5\x90\x8d\xe3\x81\xa5" -"\xe3\x81\x91\xe3\x81\xbe\xe3\x81\x97\xe3\x81\x9f\xe3\x80\x82\x0a" -"\xe3\x81\x93\xe3\x81\xae\xe3\x82\x88\xe3\x81\x86\xe3\x81\xaa\xe8" -"\x83\x8c\xe6\x99\xaf\xe3\x81\x8b\xe3\x82\x89\xe7\x94\x9f\xe3\x81" -"\xbe\xe3\x82\x8c\xe3\x81\x9f\x20\x50\x79\x74\x68\x6f\x6e\x20\xe3" -"\x81\xae\xe8\xa8\x80\xe8\xaa\x9e\xe8\xa8\xad\xe8\xa8\x88\xe3\x81" -"\xaf\xe3\x80\x81\xe3\x80\x8c\xe3\x82\xb7\xe3\x83\xb3\xe3\x83\x97" -"\xe3\x83\xab\xe3\x80\x8d\xe3\x81\xa7\xe3\x80\x8c\xe7\xbf\x92\xe5" -"\xbe\x97\xe3\x81\x8c\xe5\xae\xb9\xe6\x98\x93\xe3\x80\x8d\xe3\x81" -"\xa8\xe3\x81\x84\xe3\x81\x86\xe7\x9b\xae\xe6\xa8\x99\xe3\x81\xab" -"\xe9\x87\x8d\xe7\x82\xb9\xe3\x81\x8c\xe7\xbd\xae\xe3\x81\x8b\xe3" -"\x82\x8c\xe3\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3\x81\x99\xe3\x80" -"\x82\x0a\xe5\xa4\x9a\xe3\x81\x8f\xe3\x81\xae\xe3\x82\xb9\xe3\x82" -"\xaf\xe3\x83\xaa\xe3\x83\x97\xe3\x83\x88\xe7\xb3\xbb\xe8\xa8\x80" -"\xe8\xaa\x9e\xe3\x81\xa7\xe3\x81\xaf\xe3\x83\xa6\xe3\x83\xbc\xe3" -"\x82\xb6\xe3\x81\xae\xe7\x9b\xae\xe5\x85\x88\xe3\x81\xae\xe5\x88" -"\xa9\xe4\xbe\xbf\xe6\x80\xa7\xe3\x82\x92\xe5\x84\xaa\xe5\x85\x88" -"\xe3\x81\x97\xe3\x81\xa6\xe8\x89\xb2\xe3\x80\x85\xe3\x81\xaa\xe6" -"\xa9\x9f\xe8\x83\xbd\xe3\x82\x92\xe8\xa8\x80\xe8\xaa\x9e\xe8\xa6" -"\x81\xe7\xb4\xa0\xe3\x81\xa8\xe3\x81\x97\xe3\x81\xa6\xe5\x8f\x96" -"\xe3\x82\x8a\xe5\x85\xa5\xe3\x82\x8c\xe3\x82\x8b\xe5\xa0\xb4\xe5" -"\x90\x88\xe3\x81\x8c\xe5\xa4\x9a\xe3\x81\x84\xe3\x81\xae\xe3\x81" -"\xa7\xe3\x81\x99\xe3\x81\x8c\xe3\x80\x81\x50\x79\x74\x68\x6f\x6e" -"\x20\xe3\x81\xa7\xe3\x81\xaf\xe3\x81\x9d\xe3\x81\x86\xe3\x81\x84" -"\xe3\x81\xa3\xe3\x81\x9f\xe5\xb0\x8f\xe7\xb4\xb0\xe5\xb7\xa5\xe3" -"\x81\x8c\xe8\xbf\xbd\xe5\x8a\xa0\xe3\x81\x95\xe3\x82\x8c\xe3\x82" -"\x8b\xe3\x81\x93\xe3\x81\xa8\xe3\x81\xaf\xe3\x81\x82\xe3\x81\xbe" -"\xe3\x82\x8a\xe3\x81\x82\xe3\x82\x8a\xe3\x81\xbe\xe3\x81\x9b\xe3" -"\x82\x93\xe3\x80\x82\x0a\xe8\xa8\x80\xe8\xaa\x9e\xe8\x87\xaa\xe4" -"\xbd\x93\xe3\x81\xae\xe6\xa9\x9f\xe8\x83\xbd\xe3\x81\xaf\xe6\x9c" -"\x80\xe5\xb0\x8f\xe9\x99\x90\xe3\x81\xab\xe6\x8a\xbc\xe3\x81\x95" -"\xe3\x81\x88\xe3\x80\x81\xe5\xbf\x85\xe8\xa6\x81\xe3\x81\xaa\xe6" -"\xa9\x9f\xe8\x83\xbd\xe3\x81\xaf\xe6\x8b\xa1\xe5\xbc\xb5\xe3\x83" -"\xa2\xe3\x82\xb8\xe3\x83\xa5\xe3\x83\xbc\xe3\x83\xab\xe3\x81\xa8" -"\xe3\x81\x97\xe3\x81\xa6\xe8\xbf\xbd\xe5\x8a\xa0\xe3\x81\x99\xe3" -"\x82\x8b\xe3\x80\x81\xe3\x81\xa8\xe3\x81\x84\xe3\x81\x86\xe3\x81" -"\xae\xe3\x81\x8c\x20\x50\x79\x74\x68\x6f\x6e\x20\xe3\x81\xae\xe3" -"\x83\x9d\xe3\x83\xaa\xe3\x82\xb7\xe3\x83\xbc\xe3\x81\xa7\xe3\x81" -"\x99\xe3\x80\x82\x0a\x0a"), -'euc_kr': ( -"\xa1\xdd\x20\xc6\xc4\xc0\xcc\xbd\xe3\x28\x50\x79\x74\x68\x6f\x6e" -"\x29\xc0\xba\x20\xb9\xe8\xbf\xec\xb1\xe2\x20\xbd\xb1\xb0\xed\x2c" 
-"\x20\xb0\xad\xb7\xc2\xc7\xd1\x20\xc7\xc1\xb7\xce\xb1\xd7\xb7\xa1" -"\xb9\xd6\x20\xbe\xf0\xbe\xee\xc0\xd4\xb4\xcf\xb4\xd9\x2e\x20\xc6" -"\xc4\xc0\xcc\xbd\xe3\xc0\xba\x0a\xc8\xbf\xc0\xb2\xc0\xfb\xc0\xce" -"\x20\xb0\xed\xbc\xf6\xc1\xd8\x20\xb5\xa5\xc0\xcc\xc5\xcd\x20\xb1" -"\xb8\xc1\xb6\xbf\xcd\x20\xb0\xa3\xb4\xdc\xc7\xcf\xc1\xf6\xb8\xb8" -"\x20\xc8\xbf\xc0\xb2\xc0\xfb\xc0\xce\x20\xb0\xb4\xc3\xbc\xc1\xf6" -"\xc7\xe2\xc7\xc1\xb7\xce\xb1\xd7\xb7\xa1\xb9\xd6\xc0\xbb\x0a\xc1" -"\xf6\xbf\xf8\xc7\xd5\xb4\xcf\xb4\xd9\x2e\x20\xc6\xc4\xc0\xcc\xbd" -"\xe3\xc0\xc7\x20\xbf\xec\xbe\xc6\x28\xe9\xd0\xe4\xba\x29\xc7\xd1" -"\x20\xb9\xae\xb9\xfd\xb0\xfa\x20\xb5\xbf\xc0\xfb\x20\xc5\xb8\xc0" -"\xcc\xc7\xce\x2c\x20\xb1\xd7\xb8\xae\xb0\xed\x20\xc0\xce\xc5\xcd" -"\xc7\xc1\xb8\xae\xc6\xc3\x0a\xc8\xaf\xb0\xe6\xc0\xba\x20\xc6\xc4" -"\xc0\xcc\xbd\xe3\xc0\xbb\x20\xbd\xba\xc5\xa9\xb8\xb3\xc6\xc3\xb0" -"\xfa\x20\xbf\xa9\xb7\xaf\x20\xba\xd0\xbe\xdf\xbf\xa1\xbc\xad\xbf" -"\xcd\x20\xb4\xeb\xba\xce\xba\xd0\xc0\xc7\x20\xc7\xc3\xb7\xa7\xc6" -"\xfb\xbf\xa1\xbc\xad\xc0\xc7\x20\xba\xfc\xb8\xa5\x0a\xbe\xd6\xc7" -"\xc3\xb8\xae\xc4\xc9\xc0\xcc\xbc\xc7\x20\xb0\xb3\xb9\xdf\xc0\xbb" -"\x20\xc7\xd2\x20\xbc\xf6\x20\xc0\xd6\xb4\xc2\x20\xc0\xcc\xbb\xf3" -"\xc0\xfb\xc0\xce\x20\xbe\xf0\xbe\xee\xb7\xce\x20\xb8\xb8\xb5\xe9" -"\xbe\xee\xc1\xdd\xb4\xcf\xb4\xd9\x2e\x0a\x0a\xa1\xd9\xc3\xb9\xb0" -"\xa1\xb3\xa1\x3a\x20\xb3\xaf\xbe\xc6\xb6\xf3\x20\xa4\xd4\xa4\xb6" -"\xa4\xd0\xa4\xd4\xa4\xd4\xa4\xb6\xa4\xd0\xa4\xd4\xbe\xb1\x7e\x20" -"\xa4\xd4\xa4\xa4\xa4\xd2\xa4\xb7\xc5\xad\x21\x20\xa4\xd4\xa4\xa8" -"\xa4\xd1\xa4\xb7\xb1\xdd\xbe\xf8\xc0\xcc\x20\xc0\xfc\xa4\xd4\xa4" -"\xbe\xa4\xc8\xa4\xb2\xb4\xcf\xb4\xd9\x2e\x20\xa4\xd4\xa4\xb2\xa4" -"\xce\xa4\xaa\x2e\x20\xb1\xd7\xb7\xb1\xb0\xc5\x20\xa4\xd4\xa4\xb7" -"\xa4\xd1\xa4\xb4\xb4\xd9\x2e\x0a", -"\xe2\x97\x8e\x20\xed\x8c\x8c\xec\x9d\xb4\xec\x8d\xac\x28\x50\x79" -"\x74\x68\x6f\x6e\x29\xec\x9d\x80\x20\xeb\xb0\xb0\xec\x9a\xb0\xea" -"\xb8\xb0\x20\xec\x89\xbd\xea\xb3\xa0\x2c\x20\xea\xb0\x95\xeb\xa0" -"\xa5\xed\x95\x9c\x20\xed\x94\x84\xeb\xa1\x9c\xea\xb7\xb8\xeb\x9e" -"\x98\xeb\xb0\x8d\x20\xec\x96\xb8\xec\x96\xb4\xec\x9e\x85\xeb\x8b" -"\x88\xeb\x8b\xa4\x2e\x20\xed\x8c\x8c\xec\x9d\xb4\xec\x8d\xac\xec" -"\x9d\x80\x0a\xed\x9a\xa8\xec\x9c\xa8\xec\xa0\x81\xec\x9d\xb8\x20" -"\xea\xb3\xa0\xec\x88\x98\xec\xa4\x80\x20\xeb\x8d\xb0\xec\x9d\xb4" -"\xed\x84\xb0\x20\xea\xb5\xac\xec\xa1\xb0\xec\x99\x80\x20\xea\xb0" -"\x84\xeb\x8b\xa8\xed\x95\x98\xec\xa7\x80\xeb\xa7\x8c\x20\xed\x9a" -"\xa8\xec\x9c\xa8\xec\xa0\x81\xec\x9d\xb8\x20\xea\xb0\x9d\xec\xb2" -"\xb4\xec\xa7\x80\xed\x96\xa5\xed\x94\x84\xeb\xa1\x9c\xea\xb7\xb8" -"\xeb\x9e\x98\xeb\xb0\x8d\xec\x9d\x84\x0a\xec\xa7\x80\xec\x9b\x90" -"\xed\x95\xa9\xeb\x8b\x88\xeb\x8b\xa4\x2e\x20\xed\x8c\x8c\xec\x9d" -"\xb4\xec\x8d\xac\xec\x9d\x98\x20\xec\x9a\xb0\xec\x95\x84\x28\xe5" -"\x84\xaa\xe9\x9b\x85\x29\xed\x95\x9c\x20\xeb\xac\xb8\xeb\xb2\x95" -"\xea\xb3\xbc\x20\xeb\x8f\x99\xec\xa0\x81\x20\xed\x83\x80\xec\x9d" -"\xb4\xed\x95\x91\x2c\x20\xea\xb7\xb8\xeb\xa6\xac\xea\xb3\xa0\x20" -"\xec\x9d\xb8\xed\x84\xb0\xed\x94\x84\xeb\xa6\xac\xed\x8c\x85\x0a" -"\xed\x99\x98\xea\xb2\xbd\xec\x9d\x80\x20\xed\x8c\x8c\xec\x9d\xb4" -"\xec\x8d\xac\xec\x9d\x84\x20\xec\x8a\xa4\xed\x81\xac\xeb\xa6\xbd" -"\xed\x8c\x85\xea\xb3\xbc\x20\xec\x97\xac\xeb\x9f\xac\x20\xeb\xb6" -"\x84\xec\x95\xbc\xec\x97\x90\xec\x84\x9c\xec\x99\x80\x20\xeb\x8c" -"\x80\xeb\xb6\x80\xeb\xb6\x84\xec\x9d\x98\x20\xed\x94\x8c\xeb\x9e" -"\xab\xed\x8f\xbc\xec\x97\x90\xec\x84\x9c\xec\x9d\x98\x20\xeb\xb9" 
-"\xa0\xeb\xa5\xb8\x0a\xec\x95\xa0\xed\x94\x8c\xeb\xa6\xac\xec\xbc" -"\x80\xec\x9d\xb4\xec\x85\x98\x20\xea\xb0\x9c\xeb\xb0\x9c\xec\x9d" -"\x84\x20\xed\x95\xa0\x20\xec\x88\x98\x20\xec\x9e\x88\xeb\x8a\x94" -"\x20\xec\x9d\xb4\xec\x83\x81\xec\xa0\x81\xec\x9d\xb8\x20\xec\x96" -"\xb8\xec\x96\xb4\xeb\xa1\x9c\x20\xeb\xa7\x8c\xeb\x93\xa4\xec\x96" -"\xb4\xec\xa4\x8d\xeb\x8b\x88\xeb\x8b\xa4\x2e\x0a\x0a\xe2\x98\x86" -"\xec\xb2\xab\xea\xb0\x80\xeb\x81\x9d\x3a\x20\xeb\x82\xa0\xec\x95" -"\x84\xeb\x9d\xbc\x20\xec\x93\x94\xec\x93\x94\xec\x93\xa9\x7e\x20" -"\xeb\x8b\x81\xed\x81\xbc\x21\x20\xeb\x9c\xbd\xea\xb8\x88\xec\x97" -"\x86\xec\x9d\xb4\x20\xec\xa0\x84\xed\x99\xa5\xeb\x8b\x88\xeb\x8b" -"\xa4\x2e\x20\xeb\xb7\x81\x2e\x20\xea\xb7\xb8\xeb\x9f\xb0\xea\xb1" -"\xb0\x20\xec\x9d\x8e\xeb\x8b\xa4\x2e\x0a"), -'gb18030': ( -"\x50\x79\x74\x68\x6f\x6e\xa3\xa8\xc5\xc9\xc9\xad\xa3\xa9\xd3\xef" -"\xd1\xd4\xca\xc7\xd2\xbb\xd6\xd6\xb9\xa6\xc4\xdc\xc7\xbf\xb4\xf3" -"\xb6\xf8\xcd\xea\xc9\xc6\xb5\xc4\xcd\xa8\xd3\xc3\xd0\xcd\xbc\xc6" -"\xcb\xe3\xbb\xfa\xb3\xcc\xd0\xf2\xc9\xe8\xbc\xc6\xd3\xef\xd1\xd4" -"\xa3\xac\x0a\xd2\xd1\xbe\xad\xbe\xdf\xd3\xd0\xca\xae\xb6\xe0\xc4" -"\xea\xb5\xc4\xb7\xa2\xd5\xb9\xc0\xfa\xca\xb7\xa3\xac\xb3\xc9\xca" -"\xec\xc7\xd2\xce\xc8\xb6\xa8\xa1\xa3\xd5\xe2\xd6\xd6\xd3\xef\xd1" -"\xd4\xbe\xdf\xd3\xd0\xb7\xc7\xb3\xa3\xbc\xf2\xbd\xdd\xb6\xf8\xc7" -"\xe5\xce\xfa\x0a\xb5\xc4\xd3\xef\xb7\xa8\xcc\xd8\xb5\xe3\xa3\xac" -"\xca\xca\xba\xcf\xcd\xea\xb3\xc9\xb8\xf7\xd6\xd6\xb8\xdf\xb2\xe3" -"\xc8\xce\xce\xf1\xa3\xac\xbc\xb8\xba\xf5\xbf\xc9\xd2\xd4\xd4\xda" -"\xcb\xf9\xd3\xd0\xb5\xc4\xb2\xd9\xd7\xf7\xcf\xb5\xcd\xb3\xd6\xd0" -"\x0a\xd4\xcb\xd0\xd0\xa1\xa3\xd5\xe2\xd6\xd6\xd3\xef\xd1\xd4\xbc" -"\xf2\xb5\xa5\xb6\xf8\xc7\xbf\xb4\xf3\xa3\xac\xca\xca\xba\xcf\xb8" -"\xf7\xd6\xd6\xc8\xcb\xca\xbf\xd1\xa7\xcf\xb0\xca\xb9\xd3\xc3\xa1" -"\xa3\xc4\xbf\xc7\xb0\xa3\xac\xbb\xf9\xd3\xda\xd5\xe2\x0a\xd6\xd6" -"\xd3\xef\xd1\xd4\xb5\xc4\xcf\xe0\xb9\xd8\xbc\xbc\xca\xf5\xd5\xfd" -"\xd4\xda\xb7\xc9\xcb\xd9\xb5\xc4\xb7\xa2\xd5\xb9\xa3\xac\xd3\xc3" -"\xbb\xa7\xca\xfd\xc1\xbf\xbc\xb1\xbe\xe7\xc0\xa9\xb4\xf3\xa3\xac" -"\xcf\xe0\xb9\xd8\xb5\xc4\xd7\xca\xd4\xb4\xb7\xc7\xb3\xa3\xb6\xe0" -"\xa1\xa3\x0a\xc8\xe7\xba\xce\xd4\xda\x20\x50\x79\x74\x68\x6f\x6e" -"\x20\xd6\xd0\xca\xb9\xd3\xc3\xbc\xc8\xd3\xd0\xb5\xc4\x20\x43\x20" -"\x6c\x69\x62\x72\x61\x72\x79\x3f\x0a\xa1\xa1\xd4\xda\xd9\x59\xd3" -"\x8d\xbf\xc6\xbc\xbc\xbf\xec\xcb\xd9\xb0\x6c\xd5\xb9\xb5\xc4\xbd" -"\xf1\xcc\xec\x2c\x20\xe9\x5f\xb0\x6c\xbc\xb0\x9c\x79\xd4\x87\xdc" -"\x9b\xf3\x77\xb5\xc4\xcb\xd9\xb6\xc8\xca\xc7\xb2\xbb\xc8\xdd\xba" -"\xf6\xd2\x95\xb5\xc4\x0a\xd5\x6e\xee\x7d\x2e\x20\x9e\xe9\xbc\xd3" -"\xbf\xec\xe9\x5f\xb0\x6c\xbc\xb0\x9c\x79\xd4\x87\xb5\xc4\xcb\xd9" -"\xb6\xc8\x2c\x20\xce\xd2\x82\x83\xb1\xe3\xb3\xa3\xcf\xa3\xcd\xfb" -"\xc4\xdc\xc0\xfb\xd3\xc3\xd2\xbb\xd0\xa9\xd2\xd1\xe9\x5f\xb0\x6c" -"\xba\xc3\xb5\xc4\x0a\x6c\x69\x62\x72\x61\x72\x79\x2c\x20\x81\x4b" -"\xd3\xd0\xd2\xbb\x82\x80\x20\x66\x61\x73\x74\x20\x70\x72\x6f\x74" -"\x6f\x74\x79\x70\x69\x6e\x67\x20\xb5\xc4\x20\x70\x72\x6f\x67\x72" -"\x61\x6d\x6d\x69\x6e\x67\x20\x6c\x61\x6e\x67\x75\x61\x67\x65\x20" -"\xbf\xc9\x0a\xb9\xa9\xca\xb9\xd3\xc3\x2e\x20\xc4\xbf\xc7\xb0\xd3" -"\xd0\xd4\x53\xd4\x53\xb6\xe0\xb6\xe0\xb5\xc4\x20\x6c\x69\x62\x72" -"\x61\x72\x79\x20\xca\xc7\xd2\xd4\x20\x43\x20\x8c\x91\xb3\xc9\x2c" -"\x20\xb6\xf8\x20\x50\x79\x74\x68\x6f\x6e\x20\xca\xc7\xd2\xbb\x82" -"\x80\x0a\x66\x61\x73\x74\x20\x70\x72\x6f\x74\x6f\x74\x79\x70\x69" -"\x6e\x67\x20\xb5\xc4\x20\x70\x72\x6f\x67\x72\x61\x6d\x6d\x69\x6e" 
-"\x67\x20\x6c\x61\x6e\x67\x75\x61\x67\x65\x2e\x20\xb9\xca\xce\xd2" -"\x82\x83\xcf\xa3\xcd\xfb\xc4\xdc\x8c\xa2\xbc\xc8\xd3\xd0\xb5\xc4" -"\x0a\x43\x20\x6c\x69\x62\x72\x61\x72\x79\x20\xc4\xc3\xb5\xbd\x20" -"\x50\x79\x74\x68\x6f\x6e\x20\xb5\xc4\xad\x68\xbe\xb3\xd6\xd0\x9c" -"\x79\xd4\x87\xbc\xb0\xd5\xfb\xba\xcf\x2e\x20\xc6\xe4\xd6\xd0\xd7" -"\xee\xd6\xf7\xd2\xaa\xd2\xb2\xca\xc7\xce\xd2\x82\x83\xcb\xf9\x0a" -"\xd2\xaa\xd3\x91\xd5\x93\xb5\xc4\x86\x96\xee\x7d\xbe\xcd\xca\xc7" -"\x3a\x0a\x83\x35\xc7\x31\x83\x33\x9a\x33\x83\x32\xb1\x31\x83\x33" -"\x95\x31\x20\x82\x37\xd1\x36\x83\x30\x8c\x34\x83\x36\x84\x33\x20" -"\x82\x38\x89\x35\x82\x38\xfb\x36\x83\x33\x95\x35\x20\x83\x33\xd5" -"\x31\x82\x39\x81\x35\x20\x83\x30\xfd\x39\x83\x33\x86\x30\x20\x83" -"\x34\xdc\x33\x83\x35\xf6\x37\x83\x35\x97\x35\x20\x83\x35\xf9\x35" -"\x83\x30\x91\x39\x82\x38\x83\x39\x82\x39\xfc\x33\x83\x30\xf0\x34" -"\x20\x83\x32\xeb\x39\x83\x32\xeb\x35\x82\x39\x83\x39\x2e\x0a\x0a", -"\x50\x79\x74\x68\x6f\x6e\xef\xbc\x88\xe6\xb4\xbe\xe6\xa3\xae\xef" -"\xbc\x89\xe8\xaf\xad\xe8\xa8\x80\xe6\x98\xaf\xe4\xb8\x80\xe7\xa7" -"\x8d\xe5\x8a\x9f\xe8\x83\xbd\xe5\xbc\xba\xe5\xa4\xa7\xe8\x80\x8c" -"\xe5\xae\x8c\xe5\x96\x84\xe7\x9a\x84\xe9\x80\x9a\xe7\x94\xa8\xe5" -"\x9e\x8b\xe8\xae\xa1\xe7\xae\x97\xe6\x9c\xba\xe7\xa8\x8b\xe5\xba" -"\x8f\xe8\xae\xbe\xe8\xae\xa1\xe8\xaf\xad\xe8\xa8\x80\xef\xbc\x8c" -"\x0a\xe5\xb7\xb2\xe7\xbb\x8f\xe5\x85\xb7\xe6\x9c\x89\xe5\x8d\x81" -"\xe5\xa4\x9a\xe5\xb9\xb4\xe7\x9a\x84\xe5\x8f\x91\xe5\xb1\x95\xe5" -"\x8e\x86\xe5\x8f\xb2\xef\xbc\x8c\xe6\x88\x90\xe7\x86\x9f\xe4\xb8" -"\x94\xe7\xa8\xb3\xe5\xae\x9a\xe3\x80\x82\xe8\xbf\x99\xe7\xa7\x8d" -"\xe8\xaf\xad\xe8\xa8\x80\xe5\x85\xb7\xe6\x9c\x89\xe9\x9d\x9e\xe5" -"\xb8\xb8\xe7\xae\x80\xe6\x8d\xb7\xe8\x80\x8c\xe6\xb8\x85\xe6\x99" -"\xb0\x0a\xe7\x9a\x84\xe8\xaf\xad\xe6\xb3\x95\xe7\x89\xb9\xe7\x82" -"\xb9\xef\xbc\x8c\xe9\x80\x82\xe5\x90\x88\xe5\xae\x8c\xe6\x88\x90" -"\xe5\x90\x84\xe7\xa7\x8d\xe9\xab\x98\xe5\xb1\x82\xe4\xbb\xbb\xe5" -"\x8a\xa1\xef\xbc\x8c\xe5\x87\xa0\xe4\xb9\x8e\xe5\x8f\xaf\xe4\xbb" -"\xa5\xe5\x9c\xa8\xe6\x89\x80\xe6\x9c\x89\xe7\x9a\x84\xe6\x93\x8d" -"\xe4\xbd\x9c\xe7\xb3\xbb\xe7\xbb\x9f\xe4\xb8\xad\x0a\xe8\xbf\x90" -"\xe8\xa1\x8c\xe3\x80\x82\xe8\xbf\x99\xe7\xa7\x8d\xe8\xaf\xad\xe8" -"\xa8\x80\xe7\xae\x80\xe5\x8d\x95\xe8\x80\x8c\xe5\xbc\xba\xe5\xa4" -"\xa7\xef\xbc\x8c\xe9\x80\x82\xe5\x90\x88\xe5\x90\x84\xe7\xa7\x8d" -"\xe4\xba\xba\xe5\xa3\xab\xe5\xad\xa6\xe4\xb9\xa0\xe4\xbd\xbf\xe7" -"\x94\xa8\xe3\x80\x82\xe7\x9b\xae\xe5\x89\x8d\xef\xbc\x8c\xe5\x9f" -"\xba\xe4\xba\x8e\xe8\xbf\x99\x0a\xe7\xa7\x8d\xe8\xaf\xad\xe8\xa8" -"\x80\xe7\x9a\x84\xe7\x9b\xb8\xe5\x85\xb3\xe6\x8a\x80\xe6\x9c\xaf" -"\xe6\xad\xa3\xe5\x9c\xa8\xe9\xa3\x9e\xe9\x80\x9f\xe7\x9a\x84\xe5" -"\x8f\x91\xe5\xb1\x95\xef\xbc\x8c\xe7\x94\xa8\xe6\x88\xb7\xe6\x95" -"\xb0\xe9\x87\x8f\xe6\x80\xa5\xe5\x89\xa7\xe6\x89\xa9\xe5\xa4\xa7" -"\xef\xbc\x8c\xe7\x9b\xb8\xe5\x85\xb3\xe7\x9a\x84\xe8\xb5\x84\xe6" -"\xba\x90\xe9\x9d\x9e\xe5\xb8\xb8\xe5\xa4\x9a\xe3\x80\x82\x0a\xe5" -"\xa6\x82\xe4\xbd\x95\xe5\x9c\xa8\x20\x50\x79\x74\x68\x6f\x6e\x20" -"\xe4\xb8\xad\xe4\xbd\xbf\xe7\x94\xa8\xe6\x97\xa2\xe6\x9c\x89\xe7" -"\x9a\x84\x20\x43\x20\x6c\x69\x62\x72\x61\x72\x79\x3f\x0a\xe3\x80" -"\x80\xe5\x9c\xa8\xe8\xb3\x87\xe8\xa8\x8a\xe7\xa7\x91\xe6\x8a\x80" -"\xe5\xbf\xab\xe9\x80\x9f\xe7\x99\xbc\xe5\xb1\x95\xe7\x9a\x84\xe4" -"\xbb\x8a\xe5\xa4\xa9\x2c\x20\xe9\x96\x8b\xe7\x99\xbc\xe5\x8f\x8a" -"\xe6\xb8\xac\xe8\xa9\xa6\xe8\xbb\x9f\xe9\xab\x94\xe7\x9a\x84\xe9" -"\x80\x9f\xe5\xba\xa6\xe6\x98\xaf\xe4\xb8\x8d\xe5\xae\xb9\xe5\xbf" 
-"\xbd\xe8\xa6\x96\xe7\x9a\x84\x0a\xe8\xaa\xb2\xe9\xa1\x8c\x2e\x20" -"\xe7\x82\xba\xe5\x8a\xa0\xe5\xbf\xab\xe9\x96\x8b\xe7\x99\xbc\xe5" -"\x8f\x8a\xe6\xb8\xac\xe8\xa9\xa6\xe7\x9a\x84\xe9\x80\x9f\xe5\xba" -"\xa6\x2c\x20\xe6\x88\x91\xe5\x80\x91\xe4\xbe\xbf\xe5\xb8\xb8\xe5" -"\xb8\x8c\xe6\x9c\x9b\xe8\x83\xbd\xe5\x88\xa9\xe7\x94\xa8\xe4\xb8" -"\x80\xe4\xba\x9b\xe5\xb7\xb2\xe9\x96\x8b\xe7\x99\xbc\xe5\xa5\xbd" -"\xe7\x9a\x84\x0a\x6c\x69\x62\x72\x61\x72\x79\x2c\x20\xe4\xb8\xa6" -"\xe6\x9c\x89\xe4\xb8\x80\xe5\x80\x8b\x20\x66\x61\x73\x74\x20\x70" -"\x72\x6f\x74\x6f\x74\x79\x70\x69\x6e\x67\x20\xe7\x9a\x84\x20\x70" -"\x72\x6f\x67\x72\x61\x6d\x6d\x69\x6e\x67\x20\x6c\x61\x6e\x67\x75" -"\x61\x67\x65\x20\xe5\x8f\xaf\x0a\xe4\xbe\x9b\xe4\xbd\xbf\xe7\x94" -"\xa8\x2e\x20\xe7\x9b\xae\xe5\x89\x8d\xe6\x9c\x89\xe8\xa8\xb1\xe8" -"\xa8\xb1\xe5\xa4\x9a\xe5\xa4\x9a\xe7\x9a\x84\x20\x6c\x69\x62\x72" -"\x61\x72\x79\x20\xe6\x98\xaf\xe4\xbb\xa5\x20\x43\x20\xe5\xaf\xab" -"\xe6\x88\x90\x2c\x20\xe8\x80\x8c\x20\x50\x79\x74\x68\x6f\x6e\x20" -"\xe6\x98\xaf\xe4\xb8\x80\xe5\x80\x8b\x0a\x66\x61\x73\x74\x20\x70" -"\x72\x6f\x74\x6f\x74\x79\x70\x69\x6e\x67\x20\xe7\x9a\x84\x20\x70" -"\x72\x6f\x67\x72\x61\x6d\x6d\x69\x6e\x67\x20\x6c\x61\x6e\x67\x75" -"\x61\x67\x65\x2e\x20\xe6\x95\x85\xe6\x88\x91\xe5\x80\x91\xe5\xb8" -"\x8c\xe6\x9c\x9b\xe8\x83\xbd\xe5\xb0\x87\xe6\x97\xa2\xe6\x9c\x89" -"\xe7\x9a\x84\x0a\x43\x20\x6c\x69\x62\x72\x61\x72\x79\x20\xe6\x8b" -"\xbf\xe5\x88\xb0\x20\x50\x79\x74\x68\x6f\x6e\x20\xe7\x9a\x84\xe7" -"\x92\xb0\xe5\xa2\x83\xe4\xb8\xad\xe6\xb8\xac\xe8\xa9\xa6\xe5\x8f" -"\x8a\xe6\x95\xb4\xe5\x90\x88\x2e\x20\xe5\x85\xb6\xe4\xb8\xad\xe6" -"\x9c\x80\xe4\xb8\xbb\xe8\xa6\x81\xe4\xb9\x9f\xe6\x98\xaf\xe6\x88" -"\x91\xe5\x80\x91\xe6\x89\x80\x0a\xe8\xa6\x81\xe8\xa8\x8e\xe8\xab" -"\x96\xe7\x9a\x84\xe5\x95\x8f\xe9\xa1\x8c\xe5\xb0\xb1\xe6\x98\xaf" -"\x3a\x0a\xed\x8c\x8c\xec\x9d\xb4\xec\x8d\xac\xec\x9d\x80\x20\xea" -"\xb0\x95\xeb\xa0\xa5\xed\x95\x9c\x20\xea\xb8\xb0\xeb\x8a\xa5\xec" -"\x9d\x84\x20\xec\xa7\x80\xeb\x8b\x8c\x20\xeb\xb2\x94\xec\x9a\xa9" -"\x20\xec\xbb\xb4\xed\x93\xa8\xed\x84\xb0\x20\xed\x94\x84\xeb\xa1" -"\x9c\xea\xb7\xb8\xeb\x9e\x98\xeb\xb0\x8d\x20\xec\x96\xb8\xec\x96" -"\xb4\xeb\x8b\xa4\x2e\x0a\x0a"), -'gb2312': ( -"\x50\x79\x74\x68\x6f\x6e\xa3\xa8\xc5\xc9\xc9\xad\xa3\xa9\xd3\xef" -"\xd1\xd4\xca\xc7\xd2\xbb\xd6\xd6\xb9\xa6\xc4\xdc\xc7\xbf\xb4\xf3" -"\xb6\xf8\xcd\xea\xc9\xc6\xb5\xc4\xcd\xa8\xd3\xc3\xd0\xcd\xbc\xc6" -"\xcb\xe3\xbb\xfa\xb3\xcc\xd0\xf2\xc9\xe8\xbc\xc6\xd3\xef\xd1\xd4" -"\xa3\xac\x0a\xd2\xd1\xbe\xad\xbe\xdf\xd3\xd0\xca\xae\xb6\xe0\xc4" -"\xea\xb5\xc4\xb7\xa2\xd5\xb9\xc0\xfa\xca\xb7\xa3\xac\xb3\xc9\xca" -"\xec\xc7\xd2\xce\xc8\xb6\xa8\xa1\xa3\xd5\xe2\xd6\xd6\xd3\xef\xd1" -"\xd4\xbe\xdf\xd3\xd0\xb7\xc7\xb3\xa3\xbc\xf2\xbd\xdd\xb6\xf8\xc7" -"\xe5\xce\xfa\x0a\xb5\xc4\xd3\xef\xb7\xa8\xcc\xd8\xb5\xe3\xa3\xac" -"\xca\xca\xba\xcf\xcd\xea\xb3\xc9\xb8\xf7\xd6\xd6\xb8\xdf\xb2\xe3" -"\xc8\xce\xce\xf1\xa3\xac\xbc\xb8\xba\xf5\xbf\xc9\xd2\xd4\xd4\xda" -"\xcb\xf9\xd3\xd0\xb5\xc4\xb2\xd9\xd7\xf7\xcf\xb5\xcd\xb3\xd6\xd0" -"\x0a\xd4\xcb\xd0\xd0\xa1\xa3\xd5\xe2\xd6\xd6\xd3\xef\xd1\xd4\xbc" -"\xf2\xb5\xa5\xb6\xf8\xc7\xbf\xb4\xf3\xa3\xac\xca\xca\xba\xcf\xb8" -"\xf7\xd6\xd6\xc8\xcb\xca\xbf\xd1\xa7\xcf\xb0\xca\xb9\xd3\xc3\xa1" -"\xa3\xc4\xbf\xc7\xb0\xa3\xac\xbb\xf9\xd3\xda\xd5\xe2\x0a\xd6\xd6" -"\xd3\xef\xd1\xd4\xb5\xc4\xcf\xe0\xb9\xd8\xbc\xbc\xca\xf5\xd5\xfd" -"\xd4\xda\xb7\xc9\xcb\xd9\xb5\xc4\xb7\xa2\xd5\xb9\xa3\xac\xd3\xc3" -"\xbb\xa7\xca\xfd\xc1\xbf\xbc\xb1\xbe\xe7\xc0\xa9\xb4\xf3\xa3\xac" 
-"\xcf\xe0\xb9\xd8\xb5\xc4\xd7\xca\xd4\xb4\xb7\xc7\xb3\xa3\xb6\xe0" -"\xa1\xa3\x0a\x0a", -"\x50\x79\x74\x68\x6f\x6e\xef\xbc\x88\xe6\xb4\xbe\xe6\xa3\xae\xef" -"\xbc\x89\xe8\xaf\xad\xe8\xa8\x80\xe6\x98\xaf\xe4\xb8\x80\xe7\xa7" -"\x8d\xe5\x8a\x9f\xe8\x83\xbd\xe5\xbc\xba\xe5\xa4\xa7\xe8\x80\x8c" -"\xe5\xae\x8c\xe5\x96\x84\xe7\x9a\x84\xe9\x80\x9a\xe7\x94\xa8\xe5" -"\x9e\x8b\xe8\xae\xa1\xe7\xae\x97\xe6\x9c\xba\xe7\xa8\x8b\xe5\xba" -"\x8f\xe8\xae\xbe\xe8\xae\xa1\xe8\xaf\xad\xe8\xa8\x80\xef\xbc\x8c" -"\x0a\xe5\xb7\xb2\xe7\xbb\x8f\xe5\x85\xb7\xe6\x9c\x89\xe5\x8d\x81" -"\xe5\xa4\x9a\xe5\xb9\xb4\xe7\x9a\x84\xe5\x8f\x91\xe5\xb1\x95\xe5" -"\x8e\x86\xe5\x8f\xb2\xef\xbc\x8c\xe6\x88\x90\xe7\x86\x9f\xe4\xb8" -"\x94\xe7\xa8\xb3\xe5\xae\x9a\xe3\x80\x82\xe8\xbf\x99\xe7\xa7\x8d" -"\xe8\xaf\xad\xe8\xa8\x80\xe5\x85\xb7\xe6\x9c\x89\xe9\x9d\x9e\xe5" -"\xb8\xb8\xe7\xae\x80\xe6\x8d\xb7\xe8\x80\x8c\xe6\xb8\x85\xe6\x99" -"\xb0\x0a\xe7\x9a\x84\xe8\xaf\xad\xe6\xb3\x95\xe7\x89\xb9\xe7\x82" -"\xb9\xef\xbc\x8c\xe9\x80\x82\xe5\x90\x88\xe5\xae\x8c\xe6\x88\x90" -"\xe5\x90\x84\xe7\xa7\x8d\xe9\xab\x98\xe5\xb1\x82\xe4\xbb\xbb\xe5" -"\x8a\xa1\xef\xbc\x8c\xe5\x87\xa0\xe4\xb9\x8e\xe5\x8f\xaf\xe4\xbb" -"\xa5\xe5\x9c\xa8\xe6\x89\x80\xe6\x9c\x89\xe7\x9a\x84\xe6\x93\x8d" -"\xe4\xbd\x9c\xe7\xb3\xbb\xe7\xbb\x9f\xe4\xb8\xad\x0a\xe8\xbf\x90" -"\xe8\xa1\x8c\xe3\x80\x82\xe8\xbf\x99\xe7\xa7\x8d\xe8\xaf\xad\xe8" -"\xa8\x80\xe7\xae\x80\xe5\x8d\x95\xe8\x80\x8c\xe5\xbc\xba\xe5\xa4" -"\xa7\xef\xbc\x8c\xe9\x80\x82\xe5\x90\x88\xe5\x90\x84\xe7\xa7\x8d" -"\xe4\xba\xba\xe5\xa3\xab\xe5\xad\xa6\xe4\xb9\xa0\xe4\xbd\xbf\xe7" -"\x94\xa8\xe3\x80\x82\xe7\x9b\xae\xe5\x89\x8d\xef\xbc\x8c\xe5\x9f" -"\xba\xe4\xba\x8e\xe8\xbf\x99\x0a\xe7\xa7\x8d\xe8\xaf\xad\xe8\xa8" -"\x80\xe7\x9a\x84\xe7\x9b\xb8\xe5\x85\xb3\xe6\x8a\x80\xe6\x9c\xaf" -"\xe6\xad\xa3\xe5\x9c\xa8\xe9\xa3\x9e\xe9\x80\x9f\xe7\x9a\x84\xe5" -"\x8f\x91\xe5\xb1\x95\xef\xbc\x8c\xe7\x94\xa8\xe6\x88\xb7\xe6\x95" -"\xb0\xe9\x87\x8f\xe6\x80\xa5\xe5\x89\xa7\xe6\x89\xa9\xe5\xa4\xa7" -"\xef\xbc\x8c\xe7\x9b\xb8\xe5\x85\xb3\xe7\x9a\x84\xe8\xb5\x84\xe6" -"\xba\x90\xe9\x9d\x9e\xe5\xb8\xb8\xe5\xa4\x9a\xe3\x80\x82\x0a\x0a"), -'gbk': ( -"\x50\x79\x74\x68\x6f\x6e\xa3\xa8\xc5\xc9\xc9\xad\xa3\xa9\xd3\xef" -"\xd1\xd4\xca\xc7\xd2\xbb\xd6\xd6\xb9\xa6\xc4\xdc\xc7\xbf\xb4\xf3" -"\xb6\xf8\xcd\xea\xc9\xc6\xb5\xc4\xcd\xa8\xd3\xc3\xd0\xcd\xbc\xc6" -"\xcb\xe3\xbb\xfa\xb3\xcc\xd0\xf2\xc9\xe8\xbc\xc6\xd3\xef\xd1\xd4" -"\xa3\xac\x0a\xd2\xd1\xbe\xad\xbe\xdf\xd3\xd0\xca\xae\xb6\xe0\xc4" -"\xea\xb5\xc4\xb7\xa2\xd5\xb9\xc0\xfa\xca\xb7\xa3\xac\xb3\xc9\xca" -"\xec\xc7\xd2\xce\xc8\xb6\xa8\xa1\xa3\xd5\xe2\xd6\xd6\xd3\xef\xd1" -"\xd4\xbe\xdf\xd3\xd0\xb7\xc7\xb3\xa3\xbc\xf2\xbd\xdd\xb6\xf8\xc7" -"\xe5\xce\xfa\x0a\xb5\xc4\xd3\xef\xb7\xa8\xcc\xd8\xb5\xe3\xa3\xac" -"\xca\xca\xba\xcf\xcd\xea\xb3\xc9\xb8\xf7\xd6\xd6\xb8\xdf\xb2\xe3" -"\xc8\xce\xce\xf1\xa3\xac\xbc\xb8\xba\xf5\xbf\xc9\xd2\xd4\xd4\xda" -"\xcb\xf9\xd3\xd0\xb5\xc4\xb2\xd9\xd7\xf7\xcf\xb5\xcd\xb3\xd6\xd0" -"\x0a\xd4\xcb\xd0\xd0\xa1\xa3\xd5\xe2\xd6\xd6\xd3\xef\xd1\xd4\xbc" -"\xf2\xb5\xa5\xb6\xf8\xc7\xbf\xb4\xf3\xa3\xac\xca\xca\xba\xcf\xb8" -"\xf7\xd6\xd6\xc8\xcb\xca\xbf\xd1\xa7\xcf\xb0\xca\xb9\xd3\xc3\xa1" -"\xa3\xc4\xbf\xc7\xb0\xa3\xac\xbb\xf9\xd3\xda\xd5\xe2\x0a\xd6\xd6" -"\xd3\xef\xd1\xd4\xb5\xc4\xcf\xe0\xb9\xd8\xbc\xbc\xca\xf5\xd5\xfd" -"\xd4\xda\xb7\xc9\xcb\xd9\xb5\xc4\xb7\xa2\xd5\xb9\xa3\xac\xd3\xc3" -"\xbb\xa7\xca\xfd\xc1\xbf\xbc\xb1\xbe\xe7\xc0\xa9\xb4\xf3\xa3\xac" -"\xcf\xe0\xb9\xd8\xb5\xc4\xd7\xca\xd4\xb4\xb7\xc7\xb3\xa3\xb6\xe0" 
-"\xa1\xa3\x0a\xc8\xe7\xba\xce\xd4\xda\x20\x50\x79\x74\x68\x6f\x6e" -"\x20\xd6\xd0\xca\xb9\xd3\xc3\xbc\xc8\xd3\xd0\xb5\xc4\x20\x43\x20" -"\x6c\x69\x62\x72\x61\x72\x79\x3f\x0a\xa1\xa1\xd4\xda\xd9\x59\xd3" -"\x8d\xbf\xc6\xbc\xbc\xbf\xec\xcb\xd9\xb0\x6c\xd5\xb9\xb5\xc4\xbd" -"\xf1\xcc\xec\x2c\x20\xe9\x5f\xb0\x6c\xbc\xb0\x9c\x79\xd4\x87\xdc" -"\x9b\xf3\x77\xb5\xc4\xcb\xd9\xb6\xc8\xca\xc7\xb2\xbb\xc8\xdd\xba" -"\xf6\xd2\x95\xb5\xc4\x0a\xd5\x6e\xee\x7d\x2e\x20\x9e\xe9\xbc\xd3" -"\xbf\xec\xe9\x5f\xb0\x6c\xbc\xb0\x9c\x79\xd4\x87\xb5\xc4\xcb\xd9" -"\xb6\xc8\x2c\x20\xce\xd2\x82\x83\xb1\xe3\xb3\xa3\xcf\xa3\xcd\xfb" -"\xc4\xdc\xc0\xfb\xd3\xc3\xd2\xbb\xd0\xa9\xd2\xd1\xe9\x5f\xb0\x6c" -"\xba\xc3\xb5\xc4\x0a\x6c\x69\x62\x72\x61\x72\x79\x2c\x20\x81\x4b" -"\xd3\xd0\xd2\xbb\x82\x80\x20\x66\x61\x73\x74\x20\x70\x72\x6f\x74" -"\x6f\x74\x79\x70\x69\x6e\x67\x20\xb5\xc4\x20\x70\x72\x6f\x67\x72" -"\x61\x6d\x6d\x69\x6e\x67\x20\x6c\x61\x6e\x67\x75\x61\x67\x65\x20" -"\xbf\xc9\x0a\xb9\xa9\xca\xb9\xd3\xc3\x2e\x20\xc4\xbf\xc7\xb0\xd3" -"\xd0\xd4\x53\xd4\x53\xb6\xe0\xb6\xe0\xb5\xc4\x20\x6c\x69\x62\x72" -"\x61\x72\x79\x20\xca\xc7\xd2\xd4\x20\x43\x20\x8c\x91\xb3\xc9\x2c" -"\x20\xb6\xf8\x20\x50\x79\x74\x68\x6f\x6e\x20\xca\xc7\xd2\xbb\x82" -"\x80\x0a\x66\x61\x73\x74\x20\x70\x72\x6f\x74\x6f\x74\x79\x70\x69" -"\x6e\x67\x20\xb5\xc4\x20\x70\x72\x6f\x67\x72\x61\x6d\x6d\x69\x6e" -"\x67\x20\x6c\x61\x6e\x67\x75\x61\x67\x65\x2e\x20\xb9\xca\xce\xd2" -"\x82\x83\xcf\xa3\xcd\xfb\xc4\xdc\x8c\xa2\xbc\xc8\xd3\xd0\xb5\xc4" -"\x0a\x43\x20\x6c\x69\x62\x72\x61\x72\x79\x20\xc4\xc3\xb5\xbd\x20" -"\x50\x79\x74\x68\x6f\x6e\x20\xb5\xc4\xad\x68\xbe\xb3\xd6\xd0\x9c" -"\x79\xd4\x87\xbc\xb0\xd5\xfb\xba\xcf\x2e\x20\xc6\xe4\xd6\xd0\xd7" -"\xee\xd6\xf7\xd2\xaa\xd2\xb2\xca\xc7\xce\xd2\x82\x83\xcb\xf9\x0a" -"\xd2\xaa\xd3\x91\xd5\x93\xb5\xc4\x86\x96\xee\x7d\xbe\xcd\xca\xc7" -"\x3a\x0a\x0a", -"\x50\x79\x74\x68\x6f\x6e\xef\xbc\x88\xe6\xb4\xbe\xe6\xa3\xae\xef" -"\xbc\x89\xe8\xaf\xad\xe8\xa8\x80\xe6\x98\xaf\xe4\xb8\x80\xe7\xa7" -"\x8d\xe5\x8a\x9f\xe8\x83\xbd\xe5\xbc\xba\xe5\xa4\xa7\xe8\x80\x8c" -"\xe5\xae\x8c\xe5\x96\x84\xe7\x9a\x84\xe9\x80\x9a\xe7\x94\xa8\xe5" -"\x9e\x8b\xe8\xae\xa1\xe7\xae\x97\xe6\x9c\xba\xe7\xa8\x8b\xe5\xba" -"\x8f\xe8\xae\xbe\xe8\xae\xa1\xe8\xaf\xad\xe8\xa8\x80\xef\xbc\x8c" -"\x0a\xe5\xb7\xb2\xe7\xbb\x8f\xe5\x85\xb7\xe6\x9c\x89\xe5\x8d\x81" -"\xe5\xa4\x9a\xe5\xb9\xb4\xe7\x9a\x84\xe5\x8f\x91\xe5\xb1\x95\xe5" -"\x8e\x86\xe5\x8f\xb2\xef\xbc\x8c\xe6\x88\x90\xe7\x86\x9f\xe4\xb8" -"\x94\xe7\xa8\xb3\xe5\xae\x9a\xe3\x80\x82\xe8\xbf\x99\xe7\xa7\x8d" -"\xe8\xaf\xad\xe8\xa8\x80\xe5\x85\xb7\xe6\x9c\x89\xe9\x9d\x9e\xe5" -"\xb8\xb8\xe7\xae\x80\xe6\x8d\xb7\xe8\x80\x8c\xe6\xb8\x85\xe6\x99" -"\xb0\x0a\xe7\x9a\x84\xe8\xaf\xad\xe6\xb3\x95\xe7\x89\xb9\xe7\x82" -"\xb9\xef\xbc\x8c\xe9\x80\x82\xe5\x90\x88\xe5\xae\x8c\xe6\x88\x90" -"\xe5\x90\x84\xe7\xa7\x8d\xe9\xab\x98\xe5\xb1\x82\xe4\xbb\xbb\xe5" -"\x8a\xa1\xef\xbc\x8c\xe5\x87\xa0\xe4\xb9\x8e\xe5\x8f\xaf\xe4\xbb" -"\xa5\xe5\x9c\xa8\xe6\x89\x80\xe6\x9c\x89\xe7\x9a\x84\xe6\x93\x8d" -"\xe4\xbd\x9c\xe7\xb3\xbb\xe7\xbb\x9f\xe4\xb8\xad\x0a\xe8\xbf\x90" -"\xe8\xa1\x8c\xe3\x80\x82\xe8\xbf\x99\xe7\xa7\x8d\xe8\xaf\xad\xe8" -"\xa8\x80\xe7\xae\x80\xe5\x8d\x95\xe8\x80\x8c\xe5\xbc\xba\xe5\xa4" -"\xa7\xef\xbc\x8c\xe9\x80\x82\xe5\x90\x88\xe5\x90\x84\xe7\xa7\x8d" -"\xe4\xba\xba\xe5\xa3\xab\xe5\xad\xa6\xe4\xb9\xa0\xe4\xbd\xbf\xe7" -"\x94\xa8\xe3\x80\x82\xe7\x9b\xae\xe5\x89\x8d\xef\xbc\x8c\xe5\x9f" -"\xba\xe4\xba\x8e\xe8\xbf\x99\x0a\xe7\xa7\x8d\xe8\xaf\xad\xe8\xa8" -"\x80\xe7\x9a\x84\xe7\x9b\xb8\xe5\x85\xb3\xe6\x8a\x80\xe6\x9c\xaf" 
-"\xe6\xad\xa3\xe5\x9c\xa8\xe9\xa3\x9e\xe9\x80\x9f\xe7\x9a\x84\xe5" -"\x8f\x91\xe5\xb1\x95\xef\xbc\x8c\xe7\x94\xa8\xe6\x88\xb7\xe6\x95" -"\xb0\xe9\x87\x8f\xe6\x80\xa5\xe5\x89\xa7\xe6\x89\xa9\xe5\xa4\xa7" -"\xef\xbc\x8c\xe7\x9b\xb8\xe5\x85\xb3\xe7\x9a\x84\xe8\xb5\x84\xe6" -"\xba\x90\xe9\x9d\x9e\xe5\xb8\xb8\xe5\xa4\x9a\xe3\x80\x82\x0a\xe5" -"\xa6\x82\xe4\xbd\x95\xe5\x9c\xa8\x20\x50\x79\x74\x68\x6f\x6e\x20" -"\xe4\xb8\xad\xe4\xbd\xbf\xe7\x94\xa8\xe6\x97\xa2\xe6\x9c\x89\xe7" -"\x9a\x84\x20\x43\x20\x6c\x69\x62\x72\x61\x72\x79\x3f\x0a\xe3\x80" -"\x80\xe5\x9c\xa8\xe8\xb3\x87\xe8\xa8\x8a\xe7\xa7\x91\xe6\x8a\x80" -"\xe5\xbf\xab\xe9\x80\x9f\xe7\x99\xbc\xe5\xb1\x95\xe7\x9a\x84\xe4" -"\xbb\x8a\xe5\xa4\xa9\x2c\x20\xe9\x96\x8b\xe7\x99\xbc\xe5\x8f\x8a" -"\xe6\xb8\xac\xe8\xa9\xa6\xe8\xbb\x9f\xe9\xab\x94\xe7\x9a\x84\xe9" -"\x80\x9f\xe5\xba\xa6\xe6\x98\xaf\xe4\xb8\x8d\xe5\xae\xb9\xe5\xbf" -"\xbd\xe8\xa6\x96\xe7\x9a\x84\x0a\xe8\xaa\xb2\xe9\xa1\x8c\x2e\x20" -"\xe7\x82\xba\xe5\x8a\xa0\xe5\xbf\xab\xe9\x96\x8b\xe7\x99\xbc\xe5" -"\x8f\x8a\xe6\xb8\xac\xe8\xa9\xa6\xe7\x9a\x84\xe9\x80\x9f\xe5\xba" -"\xa6\x2c\x20\xe6\x88\x91\xe5\x80\x91\xe4\xbe\xbf\xe5\xb8\xb8\xe5" -"\xb8\x8c\xe6\x9c\x9b\xe8\x83\xbd\xe5\x88\xa9\xe7\x94\xa8\xe4\xb8" -"\x80\xe4\xba\x9b\xe5\xb7\xb2\xe9\x96\x8b\xe7\x99\xbc\xe5\xa5\xbd" -"\xe7\x9a\x84\x0a\x6c\x69\x62\x72\x61\x72\x79\x2c\x20\xe4\xb8\xa6" -"\xe6\x9c\x89\xe4\xb8\x80\xe5\x80\x8b\x20\x66\x61\x73\x74\x20\x70" -"\x72\x6f\x74\x6f\x74\x79\x70\x69\x6e\x67\x20\xe7\x9a\x84\x20\x70" -"\x72\x6f\x67\x72\x61\x6d\x6d\x69\x6e\x67\x20\x6c\x61\x6e\x67\x75" -"\x61\x67\x65\x20\xe5\x8f\xaf\x0a\xe4\xbe\x9b\xe4\xbd\xbf\xe7\x94" -"\xa8\x2e\x20\xe7\x9b\xae\xe5\x89\x8d\xe6\x9c\x89\xe8\xa8\xb1\xe8" -"\xa8\xb1\xe5\xa4\x9a\xe5\xa4\x9a\xe7\x9a\x84\x20\x6c\x69\x62\x72" -"\x61\x72\x79\x20\xe6\x98\xaf\xe4\xbb\xa5\x20\x43\x20\xe5\xaf\xab" -"\xe6\x88\x90\x2c\x20\xe8\x80\x8c\x20\x50\x79\x74\x68\x6f\x6e\x20" -"\xe6\x98\xaf\xe4\xb8\x80\xe5\x80\x8b\x0a\x66\x61\x73\x74\x20\x70" -"\x72\x6f\x74\x6f\x74\x79\x70\x69\x6e\x67\x20\xe7\x9a\x84\x20\x70" -"\x72\x6f\x67\x72\x61\x6d\x6d\x69\x6e\x67\x20\x6c\x61\x6e\x67\x75" -"\x61\x67\x65\x2e\x20\xe6\x95\x85\xe6\x88\x91\xe5\x80\x91\xe5\xb8" -"\x8c\xe6\x9c\x9b\xe8\x83\xbd\xe5\xb0\x87\xe6\x97\xa2\xe6\x9c\x89" -"\xe7\x9a\x84\x0a\x43\x20\x6c\x69\x62\x72\x61\x72\x79\x20\xe6\x8b" -"\xbf\xe5\x88\xb0\x20\x50\x79\x74\x68\x6f\x6e\x20\xe7\x9a\x84\xe7" -"\x92\xb0\xe5\xa2\x83\xe4\xb8\xad\xe6\xb8\xac\xe8\xa9\xa6\xe5\x8f" -"\x8a\xe6\x95\xb4\xe5\x90\x88\x2e\x20\xe5\x85\xb6\xe4\xb8\xad\xe6" -"\x9c\x80\xe4\xb8\xbb\xe8\xa6\x81\xe4\xb9\x9f\xe6\x98\xaf\xe6\x88" -"\x91\xe5\x80\x91\xe6\x89\x80\x0a\xe8\xa6\x81\xe8\xa8\x8e\xe8\xab" -"\x96\xe7\x9a\x84\xe5\x95\x8f\xe9\xa1\x8c\xe5\xb0\xb1\xe6\x98\xaf" -"\x3a\x0a\x0a"), -'johab': ( -"\x99\xb1\xa4\x77\x88\x62\xd0\x61\x20\xcd\x5c\xaf\xa1\xc5\xa9\x9c" -"\x61\x0a\x0a\xdc\xc0\xdc\xc0\x90\x73\x21\x21\x20\xf1\x67\xe2\x9c" -"\xf0\x55\xcc\x81\xa3\x89\x9f\x85\x8a\xa1\x20\xdc\xde\xdc\xd3\xd2" -"\x7a\xd9\xaf\xd9\xaf\xd9\xaf\x20\x8b\x77\x96\xd3\x20\xdc\xd1\x95" -"\x81\x20\xdc\xc0\x2e\x20\x2e\x0a\xed\x3c\xb5\x77\xdc\xd1\x93\x77" -"\xd2\x73\x20\x2e\x20\x2e\x20\x2e\x20\x2e\x20\xac\xe1\xb6\x89\x9e" -"\xa1\x20\x95\x65\xd0\x62\xf0\xe0\x20\xe0\x3b\xd2\x7a\x20\x21\x20" -"\x21\x20\x21\x87\x41\x2e\x87\x41\x0a\xd3\x61\xd3\x61\xd3\x61\x20" -"\x88\x41\x88\x41\x88\x41\xd9\x69\x87\x41\x5f\x87\x41\x20\xb4\xe1" -"\x9f\x9a\x20\xc8\xa1\xc5\xc1\x8b\x7a\x20\x95\x61\xb7\x77\x20\xc3" -"\x97\xe2\x9c\x97\x69\xf0\xe0\x20\xdc\xc0\x97\x61\x8b\x7a\x0a\xac" 
-"\xe9\x9f\x7a\x20\xe0\x3b\xd2\x7a\x20\x2e\x20\x2e\x20\x2e\x20\x2e" -"\x20\x8a\x89\xb4\x81\xae\xba\x20\xdc\xd1\x8a\xa1\x20\xdc\xde\x9f" -"\x89\xdc\xc2\x8b\x7a\x20\xf1\x67\xf1\x62\xf5\x49\xed\xfc\xf3\xe9" -"\x8c\x61\xbb\x9a\x0a\xb5\xc1\xb2\xa1\xd2\x7a\x20\x21\x20\x21\x20" -"\xed\x3c\xb5\x77\xdc\xd1\x20\xe0\x3b\x93\x77\x8a\xa1\x20\xd9\x69" -"\xea\xbe\x89\xc5\x20\xb4\xf4\x93\x77\x8a\xa1\x93\x77\x20\xed\x3c" -"\x93\x77\x96\xc1\xd2\x7a\x20\x8b\x69\xb4\x81\x97\x7a\x0a\xdc\xde" -"\x9d\x61\x97\x41\xe2\x9c\x20\xaf\x81\xce\xa1\xae\xa1\xd2\x7a\x20" -"\xb4\xe1\x9f\x9a\x20\xf1\x67\xf1\x62\xf5\x49\xed\xfc\xf3\xe9\xaf" -"\x82\xdc\xef\x97\x69\xb4\x7a\x21\x21\x20\xdc\xc0\xdc\xc0\x90\x73" -"\xd9\xbd\x20\xd9\x62\xd9\x62\x2a\x0a\x0a", -"\xeb\x98\xa0\xeb\xb0\xa9\xea\xb0\x81\xed\x95\x98\x20\xed\x8e\xb2" -"\xec\x8b\x9c\xec\xbd\x9c\xeb\x9d\xbc\x0a\x0a\xe3\x89\xaf\xe3\x89" -"\xaf\xeb\x82\xa9\x21\x21\x20\xe5\x9b\xa0\xe4\xb9\x9d\xe6\x9c\x88" -"\xed\x8c\xa8\xeb\xaf\xa4\xeb\xa6\x94\xea\xb6\x88\x20\xe2\x93\xa1" -"\xe2\x93\x96\xed\x9b\x80\xc2\xbf\xc2\xbf\xc2\xbf\x20\xea\xb8\x8d" -"\xeb\x92\x99\x20\xe2\x93\x94\xeb\x8e\xa8\x20\xe3\x89\xaf\x2e\x20" -"\x2e\x0a\xe4\xba\x9e\xec\x98\x81\xe2\x93\x94\xeb\x8a\xa5\xed\x9a" -"\xb9\x20\x2e\x20\x2e\x20\x2e\x20\x2e\x20\xec\x84\x9c\xec\x9a\xb8" -"\xeb\xa4\x84\x20\xeb\x8e\x90\xed\x95\x99\xe4\xb9\x99\x20\xe5\xae" -"\xb6\xed\x9b\x80\x20\x21\x20\x21\x20\x21\xe3\x85\xa0\x2e\xe3\x85" -"\xa0\x0a\xed\x9d\x90\xed\x9d\x90\xed\x9d\x90\x20\xe3\x84\xb1\xe3" -"\x84\xb1\xe3\x84\xb1\xe2\x98\x86\xe3\x85\xa0\x5f\xe3\x85\xa0\x20" -"\xec\x96\xb4\xeb\xa6\xa8\x20\xed\x83\xb8\xec\xbd\xb0\xea\xb8\x90" -"\x20\xeb\x8e\x8c\xec\x9d\x91\x20\xec\xb9\x91\xe4\xb9\x9d\xeb\x93" -"\xa4\xe4\xb9\x99\x20\xe3\x89\xaf\xeb\x93\x9c\xea\xb8\x90\x0a\xec" -"\x84\xa4\xeb\xa6\x8c\x20\xe5\xae\xb6\xed\x9b\x80\x20\x2e\x20\x2e" -"\x20\x2e\x20\x2e\x20\xea\xb5\xb4\xec\x95\xa0\xec\x89\x8c\x20\xe2" -"\x93\x94\xea\xb6\x88\x20\xe2\x93\xa1\xeb\xa6\x98\xe3\x89\xb1\xea" -"\xb8\x90\x20\xe5\x9b\xa0\xe4\xbb\x81\xe5\xb7\x9d\xef\xa6\x81\xe4" -"\xb8\xad\xea\xb9\x8c\xec\xa6\xbc\x0a\xec\x99\x80\xec\x92\x80\xed" -"\x9b\x80\x20\x21\x20\x21\x20\xe4\xba\x9e\xec\x98\x81\xe2\x93\x94" -"\x20\xe5\xae\xb6\xeb\x8a\xa5\xea\xb6\x88\x20\xe2\x98\x86\xe4\xb8" -"\x8a\xea\xb4\x80\x20\xec\x97\x86\xeb\x8a\xa5\xea\xb6\x88\xeb\x8a" -"\xa5\x20\xe4\xba\x9e\xeb\x8a\xa5\xeb\x92\x88\xed\x9b\x80\x20\xea" -"\xb8\x80\xec\x95\xa0\xeb\x93\xb4\x0a\xe2\x93\xa1\xeb\xa0\xa4\xeb" -"\x93\x80\xe4\xb9\x9d\x20\xec\x8b\x80\xed\x92\x94\xec\x88\xb4\xed" -"\x9b\x80\x20\xec\x96\xb4\xeb\xa6\xa8\x20\xe5\x9b\xa0\xe4\xbb\x81" -"\xe5\xb7\x9d\xef\xa6\x81\xe4\xb8\xad\xec\x8b\x81\xe2\x91\xa8\xeb" -"\x93\xa4\xec\x95\x9c\x21\x21\x20\xe3\x89\xaf\xe3\x89\xaf\xeb\x82" -"\xa9\xe2\x99\xa1\x20\xe2\x8c\x92\xe2\x8c\x92\x2a\x0a\x0a"), -'shift_jis': ( -"\x50\x79\x74\x68\x6f\x6e\x20\x82\xcc\x8a\x4a\x94\xad\x82\xcd\x81" -"\x41\x31\x39\x39\x30\x20\x94\x4e\x82\xb2\x82\xeb\x82\xa9\x82\xe7" -"\x8a\x4a\x8e\x6e\x82\xb3\x82\xea\x82\xc4\x82\xa2\x82\xdc\x82\xb7" -"\x81\x42\x0a\x8a\x4a\x94\xad\x8e\xd2\x82\xcc\x20\x47\x75\x69\x64" -"\x6f\x20\x76\x61\x6e\x20\x52\x6f\x73\x73\x75\x6d\x20\x82\xcd\x8b" -"\xb3\x88\xe7\x97\x70\x82\xcc\x83\x76\x83\x8d\x83\x4f\x83\x89\x83" -"\x7e\x83\x93\x83\x4f\x8c\xbe\x8c\xea\x81\x75\x41\x42\x43\x81\x76" -"\x82\xcc\x8a\x4a\x94\xad\x82\xc9\x8e\x51\x89\xc1\x82\xb5\x82\xc4" -"\x82\xa2\x82\xdc\x82\xb5\x82\xbd\x82\xaa\x81\x41\x41\x42\x43\x20" -"\x82\xcd\x8e\xc0\x97\x70\x8f\xe3\x82\xcc\x96\xda\x93\x49\x82\xc9" -"\x82\xcd\x82\xa0\x82\xdc\x82\xe8\x93\x4b\x82\xb5\x82\xc4\x82\xa2" 
-"\x82\xdc\x82\xb9\x82\xf1\x82\xc5\x82\xb5\x82\xbd\x81\x42\x0a\x82" -"\xb1\x82\xcc\x82\xbd\x82\xdf\x81\x41\x47\x75\x69\x64\x6f\x20\x82" -"\xcd\x82\xe6\x82\xe8\x8e\xc0\x97\x70\x93\x49\x82\xc8\x83\x76\x83" -"\x8d\x83\x4f\x83\x89\x83\x7e\x83\x93\x83\x4f\x8c\xbe\x8c\xea\x82" -"\xcc\x8a\x4a\x94\xad\x82\xf0\x8a\x4a\x8e\x6e\x82\xb5\x81\x41\x89" -"\x70\x8d\x91\x20\x42\x42\x53\x20\x95\xfa\x91\x97\x82\xcc\x83\x52" -"\x83\x81\x83\x66\x83\x42\x94\xd4\x91\x67\x81\x75\x83\x82\x83\x93" -"\x83\x65\x83\x42\x20\x83\x70\x83\x43\x83\x5c\x83\x93\x81\x76\x82" -"\xcc\x83\x74\x83\x40\x83\x93\x82\xc5\x82\xa0\x82\xe9\x20\x47\x75" -"\x69\x64\x6f\x20\x82\xcd\x82\xb1\x82\xcc\x8c\xbe\x8c\xea\x82\xf0" -"\x81\x75\x50\x79\x74\x68\x6f\x6e\x81\x76\x82\xc6\x96\xbc\x82\xc3" -"\x82\xaf\x82\xdc\x82\xb5\x82\xbd\x81\x42\x0a\x82\xb1\x82\xcc\x82" -"\xe6\x82\xa4\x82\xc8\x94\x77\x8c\x69\x82\xa9\x82\xe7\x90\xb6\x82" -"\xdc\x82\xea\x82\xbd\x20\x50\x79\x74\x68\x6f\x6e\x20\x82\xcc\x8c" -"\xbe\x8c\xea\x90\xdd\x8c\x76\x82\xcd\x81\x41\x81\x75\x83\x56\x83" -"\x93\x83\x76\x83\x8b\x81\x76\x82\xc5\x81\x75\x8f\x4b\x93\xbe\x82" -"\xaa\x97\x65\x88\xd5\x81\x76\x82\xc6\x82\xa2\x82\xa4\x96\xda\x95" -"\x57\x82\xc9\x8f\x64\x93\x5f\x82\xaa\x92\x75\x82\xa9\x82\xea\x82" -"\xc4\x82\xa2\x82\xdc\x82\xb7\x81\x42\x0a\x91\xbd\x82\xad\x82\xcc" -"\x83\x58\x83\x4e\x83\x8a\x83\x76\x83\x67\x8c\x6e\x8c\xbe\x8c\xea" -"\x82\xc5\x82\xcd\x83\x86\x81\x5b\x83\x55\x82\xcc\x96\xda\x90\xe6" -"\x82\xcc\x97\x98\x95\xd6\x90\xab\x82\xf0\x97\x44\x90\xe6\x82\xb5" -"\x82\xc4\x90\x46\x81\x58\x82\xc8\x8b\x40\x94\x5c\x82\xf0\x8c\xbe" -"\x8c\xea\x97\x76\x91\x66\x82\xc6\x82\xb5\x82\xc4\x8e\xe6\x82\xe8" -"\x93\xfc\x82\xea\x82\xe9\x8f\xea\x8d\x87\x82\xaa\x91\xbd\x82\xa2" -"\x82\xcc\x82\xc5\x82\xb7\x82\xaa\x81\x41\x50\x79\x74\x68\x6f\x6e" -"\x20\x82\xc5\x82\xcd\x82\xbb\x82\xa4\x82\xa2\x82\xc1\x82\xbd\x8f" -"\xac\x8d\xd7\x8d\x48\x82\xaa\x92\xc7\x89\xc1\x82\xb3\x82\xea\x82" -"\xe9\x82\xb1\x82\xc6\x82\xcd\x82\xa0\x82\xdc\x82\xe8\x82\xa0\x82" -"\xe8\x82\xdc\x82\xb9\x82\xf1\x81\x42\x0a\x8c\xbe\x8c\xea\x8e\xa9" -"\x91\xcc\x82\xcc\x8b\x40\x94\x5c\x82\xcd\x8d\xc5\x8f\xac\x8c\xc0" -"\x82\xc9\x89\x9f\x82\xb3\x82\xa6\x81\x41\x95\x4b\x97\x76\x82\xc8" -"\x8b\x40\x94\x5c\x82\xcd\x8a\x67\x92\xa3\x83\x82\x83\x57\x83\x85" -"\x81\x5b\x83\x8b\x82\xc6\x82\xb5\x82\xc4\x92\xc7\x89\xc1\x82\xb7" -"\x82\xe9\x81\x41\x82\xc6\x82\xa2\x82\xa4\x82\xcc\x82\xaa\x20\x50" -"\x79\x74\x68\x6f\x6e\x20\x82\xcc\x83\x7c\x83\x8a\x83\x56\x81\x5b" -"\x82\xc5\x82\xb7\x81\x42\x0a\x0a", -"\x50\x79\x74\x68\x6f\x6e\x20\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba" -"\xe3\x81\xaf\xe3\x80\x81\x31\x39\x39\x30\x20\xe5\xb9\xb4\xe3\x81" -"\x94\xe3\x82\x8d\xe3\x81\x8b\xe3\x82\x89\xe9\x96\x8b\xe5\xa7\x8b" -"\xe3\x81\x95\xe3\x82\x8c\xe3\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3" -"\x81\x99\xe3\x80\x82\x0a\xe9\x96\x8b\xe7\x99\xba\xe8\x80\x85\xe3" -"\x81\xae\x20\x47\x75\x69\x64\x6f\x20\x76\x61\x6e\x20\x52\x6f\x73" -"\x73\x75\x6d\x20\xe3\x81\xaf\xe6\x95\x99\xe8\x82\xb2\xe7\x94\xa8" -"\xe3\x81\xae\xe3\x83\x97\xe3\x83\xad\xe3\x82\xb0\xe3\x83\xa9\xe3" -"\x83\x9f\xe3\x83\xb3\xe3\x82\xb0\xe8\xa8\x80\xe8\xaa\x9e\xe3\x80" -"\x8c\x41\x42\x43\xe3\x80\x8d\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba" -"\xe3\x81\xab\xe5\x8f\x82\xe5\x8a\xa0\xe3\x81\x97\xe3\x81\xa6\xe3" -"\x81\x84\xe3\x81\xbe\xe3\x81\x97\xe3\x81\x9f\xe3\x81\x8c\xe3\x80" -"\x81\x41\x42\x43\x20\xe3\x81\xaf\xe5\xae\x9f\xe7\x94\xa8\xe4\xb8" -"\x8a\xe3\x81\xae\xe7\x9b\xae\xe7\x9a\x84\xe3\x81\xab\xe3\x81\xaf" -"\xe3\x81\x82\xe3\x81\xbe\xe3\x82\x8a\xe9\x81\xa9\xe3\x81\x97\xe3" 
-"\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3\x81\x9b\xe3\x82\x93\xe3\x81" -"\xa7\xe3\x81\x97\xe3\x81\x9f\xe3\x80\x82\x0a\xe3\x81\x93\xe3\x81" -"\xae\xe3\x81\x9f\xe3\x82\x81\xe3\x80\x81\x47\x75\x69\x64\x6f\x20" -"\xe3\x81\xaf\xe3\x82\x88\xe3\x82\x8a\xe5\xae\x9f\xe7\x94\xa8\xe7" -"\x9a\x84\xe3\x81\xaa\xe3\x83\x97\xe3\x83\xad\xe3\x82\xb0\xe3\x83" -"\xa9\xe3\x83\x9f\xe3\x83\xb3\xe3\x82\xb0\xe8\xa8\x80\xe8\xaa\x9e" -"\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba\xe3\x82\x92\xe9\x96\x8b\xe5" -"\xa7\x8b\xe3\x81\x97\xe3\x80\x81\xe8\x8b\xb1\xe5\x9b\xbd\x20\x42" -"\x42\x53\x20\xe6\x94\xbe\xe9\x80\x81\xe3\x81\xae\xe3\x82\xb3\xe3" -"\x83\xa1\xe3\x83\x87\xe3\x82\xa3\xe7\x95\xaa\xe7\xb5\x84\xe3\x80" -"\x8c\xe3\x83\xa2\xe3\x83\xb3\xe3\x83\x86\xe3\x82\xa3\x20\xe3\x83" -"\x91\xe3\x82\xa4\xe3\x82\xbd\xe3\x83\xb3\xe3\x80\x8d\xe3\x81\xae" -"\xe3\x83\x95\xe3\x82\xa1\xe3\x83\xb3\xe3\x81\xa7\xe3\x81\x82\xe3" -"\x82\x8b\x20\x47\x75\x69\x64\x6f\x20\xe3\x81\xaf\xe3\x81\x93\xe3" -"\x81\xae\xe8\xa8\x80\xe8\xaa\x9e\xe3\x82\x92\xe3\x80\x8c\x50\x79" -"\x74\x68\x6f\x6e\xe3\x80\x8d\xe3\x81\xa8\xe5\x90\x8d\xe3\x81\xa5" -"\xe3\x81\x91\xe3\x81\xbe\xe3\x81\x97\xe3\x81\x9f\xe3\x80\x82\x0a" -"\xe3\x81\x93\xe3\x81\xae\xe3\x82\x88\xe3\x81\x86\xe3\x81\xaa\xe8" -"\x83\x8c\xe6\x99\xaf\xe3\x81\x8b\xe3\x82\x89\xe7\x94\x9f\xe3\x81" -"\xbe\xe3\x82\x8c\xe3\x81\x9f\x20\x50\x79\x74\x68\x6f\x6e\x20\xe3" -"\x81\xae\xe8\xa8\x80\xe8\xaa\x9e\xe8\xa8\xad\xe8\xa8\x88\xe3\x81" -"\xaf\xe3\x80\x81\xe3\x80\x8c\xe3\x82\xb7\xe3\x83\xb3\xe3\x83\x97" -"\xe3\x83\xab\xe3\x80\x8d\xe3\x81\xa7\xe3\x80\x8c\xe7\xbf\x92\xe5" -"\xbe\x97\xe3\x81\x8c\xe5\xae\xb9\xe6\x98\x93\xe3\x80\x8d\xe3\x81" -"\xa8\xe3\x81\x84\xe3\x81\x86\xe7\x9b\xae\xe6\xa8\x99\xe3\x81\xab" -"\xe9\x87\x8d\xe7\x82\xb9\xe3\x81\x8c\xe7\xbd\xae\xe3\x81\x8b\xe3" -"\x82\x8c\xe3\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3\x81\x99\xe3\x80" -"\x82\x0a\xe5\xa4\x9a\xe3\x81\x8f\xe3\x81\xae\xe3\x82\xb9\xe3\x82" -"\xaf\xe3\x83\xaa\xe3\x83\x97\xe3\x83\x88\xe7\xb3\xbb\xe8\xa8\x80" -"\xe8\xaa\x9e\xe3\x81\xa7\xe3\x81\xaf\xe3\x83\xa6\xe3\x83\xbc\xe3" -"\x82\xb6\xe3\x81\xae\xe7\x9b\xae\xe5\x85\x88\xe3\x81\xae\xe5\x88" -"\xa9\xe4\xbe\xbf\xe6\x80\xa7\xe3\x82\x92\xe5\x84\xaa\xe5\x85\x88" -"\xe3\x81\x97\xe3\x81\xa6\xe8\x89\xb2\xe3\x80\x85\xe3\x81\xaa\xe6" -"\xa9\x9f\xe8\x83\xbd\xe3\x82\x92\xe8\xa8\x80\xe8\xaa\x9e\xe8\xa6" -"\x81\xe7\xb4\xa0\xe3\x81\xa8\xe3\x81\x97\xe3\x81\xa6\xe5\x8f\x96" -"\xe3\x82\x8a\xe5\x85\xa5\xe3\x82\x8c\xe3\x82\x8b\xe5\xa0\xb4\xe5" -"\x90\x88\xe3\x81\x8c\xe5\xa4\x9a\xe3\x81\x84\xe3\x81\xae\xe3\x81" -"\xa7\xe3\x81\x99\xe3\x81\x8c\xe3\x80\x81\x50\x79\x74\x68\x6f\x6e" -"\x20\xe3\x81\xa7\xe3\x81\xaf\xe3\x81\x9d\xe3\x81\x86\xe3\x81\x84" -"\xe3\x81\xa3\xe3\x81\x9f\xe5\xb0\x8f\xe7\xb4\xb0\xe5\xb7\xa5\xe3" -"\x81\x8c\xe8\xbf\xbd\xe5\x8a\xa0\xe3\x81\x95\xe3\x82\x8c\xe3\x82" -"\x8b\xe3\x81\x93\xe3\x81\xa8\xe3\x81\xaf\xe3\x81\x82\xe3\x81\xbe" -"\xe3\x82\x8a\xe3\x81\x82\xe3\x82\x8a\xe3\x81\xbe\xe3\x81\x9b\xe3" -"\x82\x93\xe3\x80\x82\x0a\xe8\xa8\x80\xe8\xaa\x9e\xe8\x87\xaa\xe4" -"\xbd\x93\xe3\x81\xae\xe6\xa9\x9f\xe8\x83\xbd\xe3\x81\xaf\xe6\x9c" -"\x80\xe5\xb0\x8f\xe9\x99\x90\xe3\x81\xab\xe6\x8a\xbc\xe3\x81\x95" -"\xe3\x81\x88\xe3\x80\x81\xe5\xbf\x85\xe8\xa6\x81\xe3\x81\xaa\xe6" -"\xa9\x9f\xe8\x83\xbd\xe3\x81\xaf\xe6\x8b\xa1\xe5\xbc\xb5\xe3\x83" -"\xa2\xe3\x82\xb8\xe3\x83\xa5\xe3\x83\xbc\xe3\x83\xab\xe3\x81\xa8" -"\xe3\x81\x97\xe3\x81\xa6\xe8\xbf\xbd\xe5\x8a\xa0\xe3\x81\x99\xe3" -"\x82\x8b\xe3\x80\x81\xe3\x81\xa8\xe3\x81\x84\xe3\x81\x86\xe3\x81" -"\xae\xe3\x81\x8c\x20\x50\x79\x74\x68\x6f\x6e\x20\xe3\x81\xae\xe3" 
-"\x83\x9d\xe3\x83\xaa\xe3\x82\xb7\xe3\x83\xbc\xe3\x81\xa7\xe3\x81" -"\x99\xe3\x80\x82\x0a\x0a"), -'shift_jisx0213': ( -"\x50\x79\x74\x68\x6f\x6e\x20\x82\xcc\x8a\x4a\x94\xad\x82\xcd\x81" -"\x41\x31\x39\x39\x30\x20\x94\x4e\x82\xb2\x82\xeb\x82\xa9\x82\xe7" -"\x8a\x4a\x8e\x6e\x82\xb3\x82\xea\x82\xc4\x82\xa2\x82\xdc\x82\xb7" -"\x81\x42\x0a\x8a\x4a\x94\xad\x8e\xd2\x82\xcc\x20\x47\x75\x69\x64" -"\x6f\x20\x76\x61\x6e\x20\x52\x6f\x73\x73\x75\x6d\x20\x82\xcd\x8b" -"\xb3\x88\xe7\x97\x70\x82\xcc\x83\x76\x83\x8d\x83\x4f\x83\x89\x83" -"\x7e\x83\x93\x83\x4f\x8c\xbe\x8c\xea\x81\x75\x41\x42\x43\x81\x76" -"\x82\xcc\x8a\x4a\x94\xad\x82\xc9\x8e\x51\x89\xc1\x82\xb5\x82\xc4" -"\x82\xa2\x82\xdc\x82\xb5\x82\xbd\x82\xaa\x81\x41\x41\x42\x43\x20" -"\x82\xcd\x8e\xc0\x97\x70\x8f\xe3\x82\xcc\x96\xda\x93\x49\x82\xc9" -"\x82\xcd\x82\xa0\x82\xdc\x82\xe8\x93\x4b\x82\xb5\x82\xc4\x82\xa2" -"\x82\xdc\x82\xb9\x82\xf1\x82\xc5\x82\xb5\x82\xbd\x81\x42\x0a\x82" -"\xb1\x82\xcc\x82\xbd\x82\xdf\x81\x41\x47\x75\x69\x64\x6f\x20\x82" -"\xcd\x82\xe6\x82\xe8\x8e\xc0\x97\x70\x93\x49\x82\xc8\x83\x76\x83" -"\x8d\x83\x4f\x83\x89\x83\x7e\x83\x93\x83\x4f\x8c\xbe\x8c\xea\x82" -"\xcc\x8a\x4a\x94\xad\x82\xf0\x8a\x4a\x8e\x6e\x82\xb5\x81\x41\x89" -"\x70\x8d\x91\x20\x42\x42\x53\x20\x95\xfa\x91\x97\x82\xcc\x83\x52" -"\x83\x81\x83\x66\x83\x42\x94\xd4\x91\x67\x81\x75\x83\x82\x83\x93" -"\x83\x65\x83\x42\x20\x83\x70\x83\x43\x83\x5c\x83\x93\x81\x76\x82" -"\xcc\x83\x74\x83\x40\x83\x93\x82\xc5\x82\xa0\x82\xe9\x20\x47\x75" -"\x69\x64\x6f\x20\x82\xcd\x82\xb1\x82\xcc\x8c\xbe\x8c\xea\x82\xf0" -"\x81\x75\x50\x79\x74\x68\x6f\x6e\x81\x76\x82\xc6\x96\xbc\x82\xc3" -"\x82\xaf\x82\xdc\x82\xb5\x82\xbd\x81\x42\x0a\x82\xb1\x82\xcc\x82" -"\xe6\x82\xa4\x82\xc8\x94\x77\x8c\x69\x82\xa9\x82\xe7\x90\xb6\x82" -"\xdc\x82\xea\x82\xbd\x20\x50\x79\x74\x68\x6f\x6e\x20\x82\xcc\x8c" -"\xbe\x8c\xea\x90\xdd\x8c\x76\x82\xcd\x81\x41\x81\x75\x83\x56\x83" -"\x93\x83\x76\x83\x8b\x81\x76\x82\xc5\x81\x75\x8f\x4b\x93\xbe\x82" -"\xaa\x97\x65\x88\xd5\x81\x76\x82\xc6\x82\xa2\x82\xa4\x96\xda\x95" -"\x57\x82\xc9\x8f\x64\x93\x5f\x82\xaa\x92\x75\x82\xa9\x82\xea\x82" -"\xc4\x82\xa2\x82\xdc\x82\xb7\x81\x42\x0a\x91\xbd\x82\xad\x82\xcc" -"\x83\x58\x83\x4e\x83\x8a\x83\x76\x83\x67\x8c\x6e\x8c\xbe\x8c\xea" -"\x82\xc5\x82\xcd\x83\x86\x81\x5b\x83\x55\x82\xcc\x96\xda\x90\xe6" -"\x82\xcc\x97\x98\x95\xd6\x90\xab\x82\xf0\x97\x44\x90\xe6\x82\xb5" -"\x82\xc4\x90\x46\x81\x58\x82\xc8\x8b\x40\x94\x5c\x82\xf0\x8c\xbe" -"\x8c\xea\x97\x76\x91\x66\x82\xc6\x82\xb5\x82\xc4\x8e\xe6\x82\xe8" -"\x93\xfc\x82\xea\x82\xe9\x8f\xea\x8d\x87\x82\xaa\x91\xbd\x82\xa2" -"\x82\xcc\x82\xc5\x82\xb7\x82\xaa\x81\x41\x50\x79\x74\x68\x6f\x6e" -"\x20\x82\xc5\x82\xcd\x82\xbb\x82\xa4\x82\xa2\x82\xc1\x82\xbd\x8f" -"\xac\x8d\xd7\x8d\x48\x82\xaa\x92\xc7\x89\xc1\x82\xb3\x82\xea\x82" -"\xe9\x82\xb1\x82\xc6\x82\xcd\x82\xa0\x82\xdc\x82\xe8\x82\xa0\x82" -"\xe8\x82\xdc\x82\xb9\x82\xf1\x81\x42\x0a\x8c\xbe\x8c\xea\x8e\xa9" -"\x91\xcc\x82\xcc\x8b\x40\x94\x5c\x82\xcd\x8d\xc5\x8f\xac\x8c\xc0" -"\x82\xc9\x89\x9f\x82\xb3\x82\xa6\x81\x41\x95\x4b\x97\x76\x82\xc8" -"\x8b\x40\x94\x5c\x82\xcd\x8a\x67\x92\xa3\x83\x82\x83\x57\x83\x85" -"\x81\x5b\x83\x8b\x82\xc6\x82\xb5\x82\xc4\x92\xc7\x89\xc1\x82\xb7" -"\x82\xe9\x81\x41\x82\xc6\x82\xa2\x82\xa4\x82\xcc\x82\xaa\x20\x50" -"\x79\x74\x68\x6f\x6e\x20\x82\xcc\x83\x7c\x83\x8a\x83\x56\x81\x5b" -"\x82\xc5\x82\xb7\x81\x42\x0a\x0a\x83\x6d\x82\xf5\x20\x83\x9e\x20" -"\x83\x67\x83\x4c\x88\x4b\x88\x79\x20\x98\x83\xfc\xd6\x20\xfc\xd2" -"\xfc\xe6\xfb\xd4\x0a", -"\x50\x79\x74\x68\x6f\x6e\x20\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba" 
-"\xe3\x81\xaf\xe3\x80\x81\x31\x39\x39\x30\x20\xe5\xb9\xb4\xe3\x81" -"\x94\xe3\x82\x8d\xe3\x81\x8b\xe3\x82\x89\xe9\x96\x8b\xe5\xa7\x8b" -"\xe3\x81\x95\xe3\x82\x8c\xe3\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3" -"\x81\x99\xe3\x80\x82\x0a\xe9\x96\x8b\xe7\x99\xba\xe8\x80\x85\xe3" -"\x81\xae\x20\x47\x75\x69\x64\x6f\x20\x76\x61\x6e\x20\x52\x6f\x73" -"\x73\x75\x6d\x20\xe3\x81\xaf\xe6\x95\x99\xe8\x82\xb2\xe7\x94\xa8" -"\xe3\x81\xae\xe3\x83\x97\xe3\x83\xad\xe3\x82\xb0\xe3\x83\xa9\xe3" -"\x83\x9f\xe3\x83\xb3\xe3\x82\xb0\xe8\xa8\x80\xe8\xaa\x9e\xe3\x80" -"\x8c\x41\x42\x43\xe3\x80\x8d\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba" -"\xe3\x81\xab\xe5\x8f\x82\xe5\x8a\xa0\xe3\x81\x97\xe3\x81\xa6\xe3" -"\x81\x84\xe3\x81\xbe\xe3\x81\x97\xe3\x81\x9f\xe3\x81\x8c\xe3\x80" -"\x81\x41\x42\x43\x20\xe3\x81\xaf\xe5\xae\x9f\xe7\x94\xa8\xe4\xb8" -"\x8a\xe3\x81\xae\xe7\x9b\xae\xe7\x9a\x84\xe3\x81\xab\xe3\x81\xaf" -"\xe3\x81\x82\xe3\x81\xbe\xe3\x82\x8a\xe9\x81\xa9\xe3\x81\x97\xe3" -"\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3\x81\x9b\xe3\x82\x93\xe3\x81" -"\xa7\xe3\x81\x97\xe3\x81\x9f\xe3\x80\x82\x0a\xe3\x81\x93\xe3\x81" -"\xae\xe3\x81\x9f\xe3\x82\x81\xe3\x80\x81\x47\x75\x69\x64\x6f\x20" -"\xe3\x81\xaf\xe3\x82\x88\xe3\x82\x8a\xe5\xae\x9f\xe7\x94\xa8\xe7" -"\x9a\x84\xe3\x81\xaa\xe3\x83\x97\xe3\x83\xad\xe3\x82\xb0\xe3\x83" -"\xa9\xe3\x83\x9f\xe3\x83\xb3\xe3\x82\xb0\xe8\xa8\x80\xe8\xaa\x9e" -"\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba\xe3\x82\x92\xe9\x96\x8b\xe5" -"\xa7\x8b\xe3\x81\x97\xe3\x80\x81\xe8\x8b\xb1\xe5\x9b\xbd\x20\x42" -"\x42\x53\x20\xe6\x94\xbe\xe9\x80\x81\xe3\x81\xae\xe3\x82\xb3\xe3" -"\x83\xa1\xe3\x83\x87\xe3\x82\xa3\xe7\x95\xaa\xe7\xb5\x84\xe3\x80" -"\x8c\xe3\x83\xa2\xe3\x83\xb3\xe3\x83\x86\xe3\x82\xa3\x20\xe3\x83" -"\x91\xe3\x82\xa4\xe3\x82\xbd\xe3\x83\xb3\xe3\x80\x8d\xe3\x81\xae" -"\xe3\x83\x95\xe3\x82\xa1\xe3\x83\xb3\xe3\x81\xa7\xe3\x81\x82\xe3" -"\x82\x8b\x20\x47\x75\x69\x64\x6f\x20\xe3\x81\xaf\xe3\x81\x93\xe3" -"\x81\xae\xe8\xa8\x80\xe8\xaa\x9e\xe3\x82\x92\xe3\x80\x8c\x50\x79" -"\x74\x68\x6f\x6e\xe3\x80\x8d\xe3\x81\xa8\xe5\x90\x8d\xe3\x81\xa5" -"\xe3\x81\x91\xe3\x81\xbe\xe3\x81\x97\xe3\x81\x9f\xe3\x80\x82\x0a" -"\xe3\x81\x93\xe3\x81\xae\xe3\x82\x88\xe3\x81\x86\xe3\x81\xaa\xe8" -"\x83\x8c\xe6\x99\xaf\xe3\x81\x8b\xe3\x82\x89\xe7\x94\x9f\xe3\x81" -"\xbe\xe3\x82\x8c\xe3\x81\x9f\x20\x50\x79\x74\x68\x6f\x6e\x20\xe3" -"\x81\xae\xe8\xa8\x80\xe8\xaa\x9e\xe8\xa8\xad\xe8\xa8\x88\xe3\x81" -"\xaf\xe3\x80\x81\xe3\x80\x8c\xe3\x82\xb7\xe3\x83\xb3\xe3\x83\x97" -"\xe3\x83\xab\xe3\x80\x8d\xe3\x81\xa7\xe3\x80\x8c\xe7\xbf\x92\xe5" -"\xbe\x97\xe3\x81\x8c\xe5\xae\xb9\xe6\x98\x93\xe3\x80\x8d\xe3\x81" -"\xa8\xe3\x81\x84\xe3\x81\x86\xe7\x9b\xae\xe6\xa8\x99\xe3\x81\xab" -"\xe9\x87\x8d\xe7\x82\xb9\xe3\x81\x8c\xe7\xbd\xae\xe3\x81\x8b\xe3" -"\x82\x8c\xe3\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3\x81\x99\xe3\x80" -"\x82\x0a\xe5\xa4\x9a\xe3\x81\x8f\xe3\x81\xae\xe3\x82\xb9\xe3\x82" -"\xaf\xe3\x83\xaa\xe3\x83\x97\xe3\x83\x88\xe7\xb3\xbb\xe8\xa8\x80" -"\xe8\xaa\x9e\xe3\x81\xa7\xe3\x81\xaf\xe3\x83\xa6\xe3\x83\xbc\xe3" -"\x82\xb6\xe3\x81\xae\xe7\x9b\xae\xe5\x85\x88\xe3\x81\xae\xe5\x88" -"\xa9\xe4\xbe\xbf\xe6\x80\xa7\xe3\x82\x92\xe5\x84\xaa\xe5\x85\x88" -"\xe3\x81\x97\xe3\x81\xa6\xe8\x89\xb2\xe3\x80\x85\xe3\x81\xaa\xe6" -"\xa9\x9f\xe8\x83\xbd\xe3\x82\x92\xe8\xa8\x80\xe8\xaa\x9e\xe8\xa6" -"\x81\xe7\xb4\xa0\xe3\x81\xa8\xe3\x81\x97\xe3\x81\xa6\xe5\x8f\x96" -"\xe3\x82\x8a\xe5\x85\xa5\xe3\x82\x8c\xe3\x82\x8b\xe5\xa0\xb4\xe5" -"\x90\x88\xe3\x81\x8c\xe5\xa4\x9a\xe3\x81\x84\xe3\x81\xae\xe3\x81" -"\xa7\xe3\x81\x99\xe3\x81\x8c\xe3\x80\x81\x50\x79\x74\x68\x6f\x6e" 
-"\x20\xe3\x81\xa7\xe3\x81\xaf\xe3\x81\x9d\xe3\x81\x86\xe3\x81\x84" -"\xe3\x81\xa3\xe3\x81\x9f\xe5\xb0\x8f\xe7\xb4\xb0\xe5\xb7\xa5\xe3" -"\x81\x8c\xe8\xbf\xbd\xe5\x8a\xa0\xe3\x81\x95\xe3\x82\x8c\xe3\x82" -"\x8b\xe3\x81\x93\xe3\x81\xa8\xe3\x81\xaf\xe3\x81\x82\xe3\x81\xbe" -"\xe3\x82\x8a\xe3\x81\x82\xe3\x82\x8a\xe3\x81\xbe\xe3\x81\x9b\xe3" -"\x82\x93\xe3\x80\x82\x0a\xe8\xa8\x80\xe8\xaa\x9e\xe8\x87\xaa\xe4" -"\xbd\x93\xe3\x81\xae\xe6\xa9\x9f\xe8\x83\xbd\xe3\x81\xaf\xe6\x9c" -"\x80\xe5\xb0\x8f\xe9\x99\x90\xe3\x81\xab\xe6\x8a\xbc\xe3\x81\x95" -"\xe3\x81\x88\xe3\x80\x81\xe5\xbf\x85\xe8\xa6\x81\xe3\x81\xaa\xe6" -"\xa9\x9f\xe8\x83\xbd\xe3\x81\xaf\xe6\x8b\xa1\xe5\xbc\xb5\xe3\x83" -"\xa2\xe3\x82\xb8\xe3\x83\xa5\xe3\x83\xbc\xe3\x83\xab\xe3\x81\xa8" -"\xe3\x81\x97\xe3\x81\xa6\xe8\xbf\xbd\xe5\x8a\xa0\xe3\x81\x99\xe3" -"\x82\x8b\xe3\x80\x81\xe3\x81\xa8\xe3\x81\x84\xe3\x81\x86\xe3\x81" -"\xae\xe3\x81\x8c\x20\x50\x79\x74\x68\x6f\x6e\x20\xe3\x81\xae\xe3" -"\x83\x9d\xe3\x83\xaa\xe3\x82\xb7\xe3\x83\xbc\xe3\x81\xa7\xe3\x81" -"\x99\xe3\x80\x82\x0a\x0a\xe3\x83\x8e\xe3\x81\x8b\xe3\x82\x9a\x20" -"\xe3\x83\x88\xe3\x82\x9a\x20\xe3\x83\x88\xe3\x82\xad\xef\xa8\xb6" -"\xef\xa8\xb9\x20\xf0\xa1\x9a\xb4\xf0\xaa\x8e\x8c\x20\xe9\xba\x80" -"\xe9\xbd\x81\xf0\xa9\x9b\xb0\x0a"), -} diff --git a/lib-python/2.7/test/crashers/README b/lib-python/2.7/test/crashers/README --- a/lib-python/2.7/test/crashers/README +++ b/lib-python/2.7/test/crashers/README @@ -1,20 +1,16 @@ -This directory only contains tests for outstanding bugs that cause -the interpreter to segfault. Ideally this directory should always -be empty. Sometimes it may not be easy to fix the underlying cause. +This directory only contains tests for outstanding bugs that cause the +interpreter to segfault. Ideally this directory should always be empty, but +sometimes it may not be easy to fix the underlying cause and the bug is deemed +too obscure to invest the effort. Each test should fail when run from the command line: ./python Lib/test/crashers/weakref_in_del.py -Each test should have a link to the bug report: +Put as much info into a docstring or comments to help determine the cause of the +failure, as well as a bugs.python.org issue number if it exists. Particularly +note if the cause is system or environment dependent and what the variables are. - # http://python.org/sf/BUG# - -Put as much info into a docstring or comments to help determine -the cause of the failure. Particularly note if the cause is -system or environment dependent and what the variables are. - -Once the crash is fixed, the test case should be moved into an appropriate -test (even if it was originally from the test suite). This ensures the -regression doesn't happen again. And if it does, it should be easier -to track down. +Once the crash is fixed, the test case should be moved into an appropriate test +(even if it was originally from the test suite). This ensures the regression +doesn't happen again. And if it does, it should be easier to track down. diff --git a/lib-python/2.7/test/crashers/recursion_limit_too_high.py b/lib-python/2.7/test/crashers/recursion_limit_too_high.py --- a/lib-python/2.7/test/crashers/recursion_limit_too_high.py +++ b/lib-python/2.7/test/crashers/recursion_limit_too_high.py @@ -5,7 +5,7 @@ # file handles. # The point of this example is to show that sys.setrecursionlimit() is a -# hack, and not a robust solution. This example simply exercices a path +# hack, and not a robust solution. 
This example simply exercises a path # where it takes many C-level recursions, consuming a lot of stack # space, for each Python-level recursion. So 1000 times this amount of # stack space may be too much for standard platforms already. diff --git a/lib-python/2.7/test/decimaltestdata/and.decTest b/lib-python/2.7/test/decimaltestdata/and.decTest --- a/lib-python/2.7/test/decimaltestdata/and.decTest +++ b/lib-python/2.7/test/decimaltestdata/and.decTest @@ -1,338 +1,338 @@ ------------------------------------------------------------------------- --- and.decTest -- digitwise logical AND -- --- Copyright (c) IBM Corporation, 1981, 2008. All rights reserved. -- ------------------------------------------------------------------------- --- Please see the document "General Decimal Arithmetic Testcases" -- --- at http://www2.hursley.ibm.com/decimal for the description of -- --- these testcases. -- --- -- --- These testcases are experimental ('beta' versions), and they -- --- may contain errors. They are offered on an as-is basis. In -- --- particular, achieving the same results as the tests here is not -- --- a guarantee that an implementation complies with any Standard -- --- or specification. The tests are not exhaustive. -- --- -- --- Please send comments, suggestions, and corrections to the author: -- --- Mike Cowlishaw, IBM Fellow -- --- IBM UK, PO Box 31, Birmingham Road, Warwick CV34 5JL, UK -- --- mfc at uk.ibm.com -- ------------------------------------------------------------------------- -version: 2.59 - -extended: 1 -precision: 9 -rounding: half_up -maxExponent: 999 -minExponent: -999 - --- Sanity check (truth table) -andx001 and 0 0 -> 0 -andx002 and 0 1 -> 0 -andx003 and 1 0 -> 0 -andx004 and 1 1 -> 1 -andx005 and 1100 1010 -> 1000 -andx006 and 1111 10 -> 10 -andx007 and 1111 1010 -> 1010 - --- and at msd and msd-1 -andx010 and 000000000 000000000 -> 0 -andx011 and 000000000 100000000 -> 0 -andx012 and 100000000 000000000 -> 0 -andx013 and 100000000 100000000 -> 100000000 -andx014 and 000000000 000000000 -> 0 -andx015 and 000000000 010000000 -> 0 -andx016 and 010000000 000000000 -> 0 -andx017 and 010000000 010000000 -> 10000000 - --- Various lengths --- 123456789 123456789 123456789 -andx021 and 111111111 111111111 -> 111111111 -andx022 and 111111111111 111111111 -> 111111111 -andx023 and 111111111111 11111111 -> 11111111 -andx024 and 111111111 11111111 -> 11111111 -andx025 and 111111111 1111111 -> 1111111 -andx026 and 111111111111 111111 -> 111111 -andx027 and 111111111111 11111 -> 11111 -andx028 and 111111111111 1111 -> 1111 -andx029 and 111111111111 111 -> 111 -andx031 and 111111111111 11 -> 11 -andx032 and 111111111111 1 -> 1 -andx033 and 111111111111 1111111111 -> 111111111 -andx034 and 11111111111 11111111111 -> 111111111 -andx035 and 1111111111 111111111111 -> 111111111 -andx036 and 111111111 1111111111111 -> 111111111 - -andx040 and 111111111 111111111111 -> 111111111 -andx041 and 11111111 111111111111 -> 11111111 -andx042 and 11111111 111111111 -> 11111111 -andx043 and 1111111 111111111 -> 1111111 -andx044 and 111111 111111111 -> 111111 -andx045 and 11111 111111111 -> 11111 -andx046 and 1111 111111111 -> 1111 -andx047 and 111 111111111 -> 111 -andx048 and 11 111111111 -> 11 -andx049 and 1 111111111 -> 1 - -andx050 and 1111111111 1 -> 1 -andx051 and 111111111 1 -> 1 -andx052 and 11111111 1 -> 1 -andx053 and 1111111 1 -> 1 -andx054 and 111111 1 -> 1 -andx055 and 11111 1 -> 1 -andx056 and 1111 1 -> 1 -andx057 and 111 1 -> 1 -andx058 and 11 1 -> 1 -andx059 and 1 1 -> 1 - -andx060 
and 1111111111 0 -> 0 -andx061 and 111111111 0 -> 0 -andx062 and 11111111 0 -> 0 -andx063 and 1111111 0 -> 0 -andx064 and 111111 0 -> 0 -andx065 and 11111 0 -> 0 -andx066 and 1111 0 -> 0 -andx067 and 111 0 -> 0 -andx068 and 11 0 -> 0 -andx069 and 1 0 -> 0 - -andx070 and 1 1111111111 -> 1 -andx071 and 1 111111111 -> 1 -andx072 and 1 11111111 -> 1 -andx073 and 1 1111111 -> 1 -andx074 and 1 111111 -> 1 -andx075 and 1 11111 -> 1 -andx076 and 1 1111 -> 1 -andx077 and 1 111 -> 1 -andx078 and 1 11 -> 1 -andx079 and 1 1 -> 1 - -andx080 and 0 1111111111 -> 0 -andx081 and 0 111111111 -> 0 -andx082 and 0 11111111 -> 0 -andx083 and 0 1111111 -> 0 -andx084 and 0 111111 -> 0 -andx085 and 0 11111 -> 0 -andx086 and 0 1111 -> 0 -andx087 and 0 111 -> 0 -andx088 and 0 11 -> 0 -andx089 and 0 1 -> 0 - -andx090 and 011111111 111111111 -> 11111111 -andx091 and 101111111 111111111 -> 101111111 -andx092 and 110111111 111111111 -> 110111111 -andx093 and 111011111 111111111 -> 111011111 -andx094 and 111101111 111111111 -> 111101111 -andx095 and 111110111 111111111 -> 111110111 -andx096 and 111111011 111111111 -> 111111011 -andx097 and 111111101 111111111 -> 111111101 -andx098 and 111111110 111111111 -> 111111110 - -andx100 and 111111111 011111111 -> 11111111 -andx101 and 111111111 101111111 -> 101111111 -andx102 and 111111111 110111111 -> 110111111 -andx103 and 111111111 111011111 -> 111011111 -andx104 and 111111111 111101111 -> 111101111 -andx105 and 111111111 111110111 -> 111110111 -andx106 and 111111111 111111011 -> 111111011 -andx107 and 111111111 111111101 -> 111111101 -andx108 and 111111111 111111110 -> 111111110 - --- non-0/1 should not be accepted, nor should signs -andx220 and 111111112 111111111 -> NaN Invalid_operation -andx221 and 333333333 333333333 -> NaN Invalid_operation -andx222 and 555555555 555555555 -> NaN Invalid_operation -andx223 and 777777777 777777777 -> NaN Invalid_operation -andx224 and 999999999 999999999 -> NaN Invalid_operation -andx225 and 222222222 999999999 -> NaN Invalid_operation -andx226 and 444444444 999999999 -> NaN Invalid_operation -andx227 and 666666666 999999999 -> NaN Invalid_operation -andx228 and 888888888 999999999 -> NaN Invalid_operation -andx229 and 999999999 222222222 -> NaN Invalid_operation -andx230 and 999999999 444444444 -> NaN Invalid_operation -andx231 and 999999999 666666666 -> NaN Invalid_operation -andx232 and 999999999 888888888 -> NaN Invalid_operation --- a few randoms -andx240 and 567468689 -934981942 -> NaN Invalid_operation -andx241 and 567367689 934981942 -> NaN Invalid_operation -andx242 and -631917772 -706014634 -> NaN Invalid_operation -andx243 and -756253257 138579234 -> NaN Invalid_operation -andx244 and 835590149 567435400 -> NaN Invalid_operation --- test MSD -andx250 and 200000000 100000000 -> NaN Invalid_operation -andx251 and 700000000 100000000 -> NaN Invalid_operation -andx252 and 800000000 100000000 -> NaN Invalid_operation -andx253 and 900000000 100000000 -> NaN Invalid_operation -andx254 and 200000000 000000000 -> NaN Invalid_operation -andx255 and 700000000 000000000 -> NaN Invalid_operation -andx256 and 800000000 000000000 -> NaN Invalid_operation -andx257 and 900000000 000000000 -> NaN Invalid_operation -andx258 and 100000000 200000000 -> NaN Invalid_operation -andx259 and 100000000 700000000 -> NaN Invalid_operation -andx260 and 100000000 800000000 -> NaN Invalid_operation -andx261 and 100000000 900000000 -> NaN Invalid_operation -andx262 and 000000000 200000000 -> NaN Invalid_operation -andx263 and 000000000 700000000 -> NaN 
Invalid_operation -andx264 and 000000000 800000000 -> NaN Invalid_operation -andx265 and 000000000 900000000 -> NaN Invalid_operation --- test MSD-1 -andx270 and 020000000 100000000 -> NaN Invalid_operation -andx271 and 070100000 100000000 -> NaN Invalid_operation -andx272 and 080010000 100000001 -> NaN Invalid_operation -andx273 and 090001000 100000010 -> NaN Invalid_operation -andx274 and 100000100 020010100 -> NaN Invalid_operation -andx275 and 100000000 070001000 -> NaN Invalid_operation -andx276 and 100000010 080010100 -> NaN Invalid_operation -andx277 and 100000000 090000010 -> NaN Invalid_operation --- test LSD -andx280 and 001000002 100000000 -> NaN Invalid_operation -andx281 and 000000007 100000000 -> NaN Invalid_operation -andx282 and 000000008 100000000 -> NaN Invalid_operation -andx283 and 000000009 100000000 -> NaN Invalid_operation -andx284 and 100000000 000100002 -> NaN Invalid_operation -andx285 and 100100000 001000007 -> NaN Invalid_operation -andx286 and 100010000 010000008 -> NaN Invalid_operation -andx287 and 100001000 100000009 -> NaN Invalid_operation --- test Middie -andx288 and 001020000 100000000 -> NaN Invalid_operation -andx289 and 000070001 100000000 -> NaN Invalid_operation -andx290 and 000080000 100010000 -> NaN Invalid_operation -andx291 and 000090000 100001000 -> NaN Invalid_operation -andx292 and 100000010 000020100 -> NaN Invalid_operation -andx293 and 100100000 000070010 -> NaN Invalid_operation -andx294 and 100010100 000080001 -> NaN Invalid_operation -andx295 and 100001000 000090000 -> NaN Invalid_operation --- signs -andx296 and -100001000 -000000000 -> NaN Invalid_operation -andx297 and -100001000 000010000 -> NaN Invalid_operation -andx298 and 100001000 -000000000 -> NaN Invalid_operation -andx299 and 100001000 000011000 -> 1000 - --- Nmax, Nmin, Ntiny -andx331 and 2 9.99999999E+999 -> NaN Invalid_operation -andx332 and 3 1E-999 -> NaN Invalid_operation -andx333 and 4 1.00000000E-999 -> NaN Invalid_operation -andx334 and 5 1E-1007 -> NaN Invalid_operation -andx335 and 6 -1E-1007 -> NaN Invalid_operation -andx336 and 7 -1.00000000E-999 -> NaN Invalid_operation -andx337 and 8 -1E-999 -> NaN Invalid_operation -andx338 and 9 -9.99999999E+999 -> NaN Invalid_operation -andx341 and 9.99999999E+999 -18 -> NaN Invalid_operation -andx342 and 1E-999 01 -> NaN Invalid_operation -andx343 and 1.00000000E-999 -18 -> NaN Invalid_operation -andx344 and 1E-1007 18 -> NaN Invalid_operation -andx345 and -1E-1007 -10 -> NaN Invalid_operation -andx346 and -1.00000000E-999 18 -> NaN Invalid_operation -andx347 and -1E-999 10 -> NaN Invalid_operation -andx348 and -9.99999999E+999 -18 -> NaN Invalid_operation - --- A few other non-integers -andx361 and 1.0 1 -> NaN Invalid_operation -andx362 and 1E+1 1 -> NaN Invalid_operation -andx363 and 0.0 1 -> NaN Invalid_operation -andx364 and 0E+1 1 -> NaN Invalid_operation -andx365 and 9.9 1 -> NaN Invalid_operation -andx366 and 9E+1 1 -> NaN Invalid_operation -andx371 and 0 1.0 -> NaN Invalid_operation -andx372 and 0 1E+1 -> NaN Invalid_operation -andx373 and 0 0.0 -> NaN Invalid_operation -andx374 and 0 0E+1 -> NaN Invalid_operation -andx375 and 0 9.9 -> NaN Invalid_operation -andx376 and 0 9E+1 -> NaN Invalid_operation - --- All Specials are in error -andx780 and -Inf -Inf -> NaN Invalid_operation -andx781 and -Inf -1000 -> NaN Invalid_operation -andx782 and -Inf -1 -> NaN Invalid_operation -andx783 and -Inf -0 -> NaN Invalid_operation -andx784 and -Inf 0 -> NaN Invalid_operation -andx785 and -Inf 1 -> NaN Invalid_operation 
-andx786 and -Inf 1000 -> NaN Invalid_operation -andx787 and -1000 -Inf -> NaN Invalid_operation -andx788 and -Inf -Inf -> NaN Invalid_operation -andx789 and -1 -Inf -> NaN Invalid_operation -andx790 and -0 -Inf -> NaN Invalid_operation -andx791 and 0 -Inf -> NaN Invalid_operation -andx792 and 1 -Inf -> NaN Invalid_operation -andx793 and 1000 -Inf -> NaN Invalid_operation -andx794 and Inf -Inf -> NaN Invalid_operation - -andx800 and Inf -Inf -> NaN Invalid_operation -andx801 and Inf -1000 -> NaN Invalid_operation -andx802 and Inf -1 -> NaN Invalid_operation -andx803 and Inf -0 -> NaN Invalid_operation -andx804 and Inf 0 -> NaN Invalid_operation -andx805 and Inf 1 -> NaN Invalid_operation -andx806 and Inf 1000 -> NaN Invalid_operation -andx807 and Inf Inf -> NaN Invalid_operation -andx808 and -1000 Inf -> NaN Invalid_operation -andx809 and -Inf Inf -> NaN Invalid_operation -andx810 and -1 Inf -> NaN Invalid_operation -andx811 and -0 Inf -> NaN Invalid_operation -andx812 and 0 Inf -> NaN Invalid_operation -andx813 and 1 Inf -> NaN Invalid_operation -andx814 and 1000 Inf -> NaN Invalid_operation -andx815 and Inf Inf -> NaN Invalid_operation - -andx821 and NaN -Inf -> NaN Invalid_operation -andx822 and NaN -1000 -> NaN Invalid_operation -andx823 and NaN -1 -> NaN Invalid_operation -andx824 and NaN -0 -> NaN Invalid_operation -andx825 and NaN 0 -> NaN Invalid_operation -andx826 and NaN 1 -> NaN Invalid_operation -andx827 and NaN 1000 -> NaN Invalid_operation -andx828 and NaN Inf -> NaN Invalid_operation -andx829 and NaN NaN -> NaN Invalid_operation -andx830 and -Inf NaN -> NaN Invalid_operation -andx831 and -1000 NaN -> NaN Invalid_operation -andx832 and -1 NaN -> NaN Invalid_operation -andx833 and -0 NaN -> NaN Invalid_operation -andx834 and 0 NaN -> NaN Invalid_operation -andx835 and 1 NaN -> NaN Invalid_operation -andx836 and 1000 NaN -> NaN Invalid_operation -andx837 and Inf NaN -> NaN Invalid_operation - -andx841 and sNaN -Inf -> NaN Invalid_operation -andx842 and sNaN -1000 -> NaN Invalid_operation -andx843 and sNaN -1 -> NaN Invalid_operation -andx844 and sNaN -0 -> NaN Invalid_operation -andx845 and sNaN 0 -> NaN Invalid_operation -andx846 and sNaN 1 -> NaN Invalid_operation -andx847 and sNaN 1000 -> NaN Invalid_operation -andx848 and sNaN NaN -> NaN Invalid_operation -andx849 and sNaN sNaN -> NaN Invalid_operation -andx850 and NaN sNaN -> NaN Invalid_operation -andx851 and -Inf sNaN -> NaN Invalid_operation -andx852 and -1000 sNaN -> NaN Invalid_operation -andx853 and -1 sNaN -> NaN Invalid_operation -andx854 and -0 sNaN -> NaN Invalid_operation -andx855 and 0 sNaN -> NaN Invalid_operation -andx856 and 1 sNaN -> NaN Invalid_operation -andx857 and 1000 sNaN -> NaN Invalid_operation -andx858 and Inf sNaN -> NaN Invalid_operation -andx859 and NaN sNaN -> NaN Invalid_operation - --- propagating NaNs -andx861 and NaN1 -Inf -> NaN Invalid_operation -andx862 and +NaN2 -1000 -> NaN Invalid_operation -andx863 and NaN3 1000 -> NaN Invalid_operation -andx864 and NaN4 Inf -> NaN Invalid_operation -andx865 and NaN5 +NaN6 -> NaN Invalid_operation -andx866 and -Inf NaN7 -> NaN Invalid_operation -andx867 and -1000 NaN8 -> NaN Invalid_operation -andx868 and 1000 NaN9 -> NaN Invalid_operation -andx869 and Inf +NaN10 -> NaN Invalid_operation -andx871 and sNaN11 -Inf -> NaN Invalid_operation -andx872 and sNaN12 -1000 -> NaN Invalid_operation -andx873 and sNaN13 1000 -> NaN Invalid_operation -andx874 and sNaN14 NaN17 -> NaN Invalid_operation -andx875 and sNaN15 sNaN18 -> NaN Invalid_operation -andx876 and 
NaN16 sNaN19 -> NaN Invalid_operation -andx877 and -Inf +sNaN20 -> NaN Invalid_operation -andx878 and -1000 sNaN21 -> NaN Invalid_operation -andx879 and 1000 sNaN22 -> NaN Invalid_operation -andx880 and Inf sNaN23 -> NaN Invalid_operation -andx881 and +NaN25 +sNaN24 -> NaN Invalid_operation -andx882 and -NaN26 NaN28 -> NaN Invalid_operation -andx883 and -sNaN27 sNaN29 -> NaN Invalid_operation -andx884 and 1000 -NaN30 -> NaN Invalid_operation -andx885 and 1000 -sNaN31 -> NaN Invalid_operation +------------------------------------------------------------------------ +-- and.decTest -- digitwise logical AND -- +-- Copyright (c) IBM Corporation, 1981, 2008. All rights reserved. -- +------------------------------------------------------------------------ +-- Please see the document "General Decimal Arithmetic Testcases" -- +-- at http://www2.hursley.ibm.com/decimal for the description of -- +-- these testcases. -- +-- -- +-- These testcases are experimental ('beta' versions), and they -- +-- may contain errors. They are offered on an as-is basis. In -- +-- particular, achieving the same results as the tests here is not -- +-- a guarantee that an implementation complies with any Standard -- +-- or specification. The tests are not exhaustive. -- +-- -- +-- Please send comments, suggestions, and corrections to the author: -- +-- Mike Cowlishaw, IBM Fellow -- +-- IBM UK, PO Box 31, Birmingham Road, Warwick CV34 5JL, UK -- +-- mfc at uk.ibm.com -- +------------------------------------------------------------------------ +version: 2.59 + +extended: 1 +precision: 9 +rounding: half_up +maxExponent: 999 +minExponent: -999 + +-- Sanity check (truth table) +andx001 and 0 0 -> 0 +andx002 and 0 1 -> 0 +andx003 and 1 0 -> 0 +andx004 and 1 1 -> 1 +andx005 and 1100 1010 -> 1000 +andx006 and 1111 10 -> 10 +andx007 and 1111 1010 -> 1010 + +-- and at msd and msd-1 +andx010 and 000000000 000000000 -> 0 +andx011 and 000000000 100000000 -> 0 +andx012 and 100000000 000000000 -> 0 +andx013 and 100000000 100000000 -> 100000000 +andx014 and 000000000 000000000 -> 0 +andx015 and 000000000 010000000 -> 0 +andx016 and 010000000 000000000 -> 0 +andx017 and 010000000 010000000 -> 10000000 + +-- Various lengths +-- 123456789 123456789 123456789 +andx021 and 111111111 111111111 -> 111111111 +andx022 and 111111111111 111111111 -> 111111111 +andx023 and 111111111111 11111111 -> 11111111 +andx024 and 111111111 11111111 -> 11111111 +andx025 and 111111111 1111111 -> 1111111 +andx026 and 111111111111 111111 -> 111111 +andx027 and 111111111111 11111 -> 11111 +andx028 and 111111111111 1111 -> 1111 +andx029 and 111111111111 111 -> 111 +andx031 and 111111111111 11 -> 11 +andx032 and 111111111111 1 -> 1 +andx033 and 111111111111 1111111111 -> 111111111 +andx034 and 11111111111 11111111111 -> 111111111 +andx035 and 1111111111 111111111111 -> 111111111 +andx036 and 111111111 1111111111111 -> 111111111 + +andx040 and 111111111 111111111111 -> 111111111 +andx041 and 11111111 111111111111 -> 11111111 +andx042 and 11111111 111111111 -> 11111111 +andx043 and 1111111 111111111 -> 1111111 +andx044 and 111111 111111111 -> 111111 +andx045 and 11111 111111111 -> 11111 +andx046 and 1111 111111111 -> 1111 +andx047 and 111 111111111 -> 111 +andx048 and 11 111111111 -> 11 +andx049 and 1 111111111 -> 1 + +andx050 and 1111111111 1 -> 1 +andx051 and 111111111 1 -> 1 +andx052 and 11111111 1 -> 1 +andx053 and 1111111 1 -> 1 +andx054 and 111111 1 -> 1 +andx055 and 11111 1 -> 1 +andx056 and 1111 1 -> 1 +andx057 and 111 1 -> 1 +andx058 and 11 1 -> 1 +andx059 
and 1 1 -> 1 + +andx060 and 1111111111 0 -> 0 +andx061 and 111111111 0 -> 0 +andx062 and 11111111 0 -> 0 +andx063 and 1111111 0 -> 0 +andx064 and 111111 0 -> 0 +andx065 and 11111 0 -> 0 +andx066 and 1111 0 -> 0 +andx067 and 111 0 -> 0 +andx068 and 11 0 -> 0 +andx069 and 1 0 -> 0 + +andx070 and 1 1111111111 -> 1 +andx071 and 1 111111111 -> 1 +andx072 and 1 11111111 -> 1 +andx073 and 1 1111111 -> 1 +andx074 and 1 111111 -> 1 +andx075 and 1 11111 -> 1 +andx076 and 1 1111 -> 1 +andx077 and 1 111 -> 1 +andx078 and 1 11 -> 1 +andx079 and 1 1 -> 1 + +andx080 and 0 1111111111 -> 0 +andx081 and 0 111111111 -> 0 +andx082 and 0 11111111 -> 0 +andx083 and 0 1111111 -> 0 +andx084 and 0 111111 -> 0 +andx085 and 0 11111 -> 0 +andx086 and 0 1111 -> 0 +andx087 and 0 111 -> 0 +andx088 and 0 11 -> 0 +andx089 and 0 1 -> 0 + +andx090 and 011111111 111111111 -> 11111111 +andx091 and 101111111 111111111 -> 101111111 +andx092 and 110111111 111111111 -> 110111111 +andx093 and 111011111 111111111 -> 111011111 +andx094 and 111101111 111111111 -> 111101111 +andx095 and 111110111 111111111 -> 111110111 +andx096 and 111111011 111111111 -> 111111011 +andx097 and 111111101 111111111 -> 111111101 +andx098 and 111111110 111111111 -> 111111110 + +andx100 and 111111111 011111111 -> 11111111 +andx101 and 111111111 101111111 -> 101111111 +andx102 and 111111111 110111111 -> 110111111 +andx103 and 111111111 111011111 -> 111011111 +andx104 and 111111111 111101111 -> 111101111 +andx105 and 111111111 111110111 -> 111110111 +andx106 and 111111111 111111011 -> 111111011 +andx107 and 111111111 111111101 -> 111111101 +andx108 and 111111111 111111110 -> 111111110 + +-- non-0/1 should not be accepted, nor should signs +andx220 and 111111112 111111111 -> NaN Invalid_operation +andx221 and 333333333 333333333 -> NaN Invalid_operation +andx222 and 555555555 555555555 -> NaN Invalid_operation +andx223 and 777777777 777777777 -> NaN Invalid_operation +andx224 and 999999999 999999999 -> NaN Invalid_operation +andx225 and 222222222 999999999 -> NaN Invalid_operation +andx226 and 444444444 999999999 -> NaN Invalid_operation +andx227 and 666666666 999999999 -> NaN Invalid_operation +andx228 and 888888888 999999999 -> NaN Invalid_operation +andx229 and 999999999 222222222 -> NaN Invalid_operation +andx230 and 999999999 444444444 -> NaN Invalid_operation +andx231 and 999999999 666666666 -> NaN Invalid_operation +andx232 and 999999999 888888888 -> NaN Invalid_operation +-- a few randoms +andx240 and 567468689 -934981942 -> NaN Invalid_operation +andx241 and 567367689 934981942 -> NaN Invalid_operation +andx242 and -631917772 -706014634 -> NaN Invalid_operation +andx243 and -756253257 138579234 -> NaN Invalid_operation +andx244 and 835590149 567435400 -> NaN Invalid_operation +-- test MSD +andx250 and 200000000 100000000 -> NaN Invalid_operation +andx251 and 700000000 100000000 -> NaN Invalid_operation +andx252 and 800000000 100000000 -> NaN Invalid_operation +andx253 and 900000000 100000000 -> NaN Invalid_operation +andx254 and 200000000 000000000 -> NaN Invalid_operation +andx255 and 700000000 000000000 -> NaN Invalid_operation +andx256 and 800000000 000000000 -> NaN Invalid_operation +andx257 and 900000000 000000000 -> NaN Invalid_operation +andx258 and 100000000 200000000 -> NaN Invalid_operation +andx259 and 100000000 700000000 -> NaN Invalid_operation +andx260 and 100000000 800000000 -> NaN Invalid_operation +andx261 and 100000000 900000000 -> NaN Invalid_operation +andx262 and 000000000 200000000 -> NaN Invalid_operation +andx263 and 000000000 
700000000 -> NaN Invalid_operation +andx264 and 000000000 800000000 -> NaN Invalid_operation +andx265 and 000000000 900000000 -> NaN Invalid_operation +-- test MSD-1 +andx270 and 020000000 100000000 -> NaN Invalid_operation +andx271 and 070100000 100000000 -> NaN Invalid_operation +andx272 and 080010000 100000001 -> NaN Invalid_operation +andx273 and 090001000 100000010 -> NaN Invalid_operation +andx274 and 100000100 020010100 -> NaN Invalid_operation +andx275 and 100000000 070001000 -> NaN Invalid_operation +andx276 and 100000010 080010100 -> NaN Invalid_operation +andx277 and 100000000 090000010 -> NaN Invalid_operation +-- test LSD +andx280 and 001000002 100000000 -> NaN Invalid_operation +andx281 and 000000007 100000000 -> NaN Invalid_operation +andx282 and 000000008 100000000 -> NaN Invalid_operation +andx283 and 000000009 100000000 -> NaN Invalid_operation +andx284 and 100000000 000100002 -> NaN Invalid_operation +andx285 and 100100000 001000007 -> NaN Invalid_operation +andx286 and 100010000 010000008 -> NaN Invalid_operation +andx287 and 100001000 100000009 -> NaN Invalid_operation +-- test Middie +andx288 and 001020000 100000000 -> NaN Invalid_operation +andx289 and 000070001 100000000 -> NaN Invalid_operation +andx290 and 000080000 100010000 -> NaN Invalid_operation +andx291 and 000090000 100001000 -> NaN Invalid_operation +andx292 and 100000010 000020100 -> NaN Invalid_operation +andx293 and 100100000 000070010 -> NaN Invalid_operation +andx294 and 100010100 000080001 -> NaN Invalid_operation +andx295 and 100001000 000090000 -> NaN Invalid_operation +-- signs +andx296 and -100001000 -000000000 -> NaN Invalid_operation +andx297 and -100001000 000010000 -> NaN Invalid_operation +andx298 and 100001000 -000000000 -> NaN Invalid_operation +andx299 and 100001000 000011000 -> 1000 + +-- Nmax, Nmin, Ntiny +andx331 and 2 9.99999999E+999 -> NaN Invalid_operation +andx332 and 3 1E-999 -> NaN Invalid_operation +andx333 and 4 1.00000000E-999 -> NaN Invalid_operation +andx334 and 5 1E-1007 -> NaN Invalid_operation +andx335 and 6 -1E-1007 -> NaN Invalid_operation +andx336 and 7 -1.00000000E-999 -> NaN Invalid_operation +andx337 and 8 -1E-999 -> NaN Invalid_operation +andx338 and 9 -9.99999999E+999 -> NaN Invalid_operation +andx341 and 9.99999999E+999 -18 -> NaN Invalid_operation +andx342 and 1E-999 01 -> NaN Invalid_operation +andx343 and 1.00000000E-999 -18 -> NaN Invalid_operation +andx344 and 1E-1007 18 -> NaN Invalid_operation +andx345 and -1E-1007 -10 -> NaN Invalid_operation +andx346 and -1.00000000E-999 18 -> NaN Invalid_operation +andx347 and -1E-999 10 -> NaN Invalid_operation +andx348 and -9.99999999E+999 -18 -> NaN Invalid_operation + +-- A few other non-integers +andx361 and 1.0 1 -> NaN Invalid_operation +andx362 and 1E+1 1 -> NaN Invalid_operation +andx363 and 0.0 1 -> NaN Invalid_operation +andx364 and 0E+1 1 -> NaN Invalid_operation +andx365 and 9.9 1 -> NaN Invalid_operation +andx366 and 9E+1 1 -> NaN Invalid_operation +andx371 and 0 1.0 -> NaN Invalid_operation +andx372 and 0 1E+1 -> NaN Invalid_operation +andx373 and 0 0.0 -> NaN Invalid_operation +andx374 and 0 0E+1 -> NaN Invalid_operation +andx375 and 0 9.9 -> NaN Invalid_operation +andx376 and 0 9E+1 -> NaN Invalid_operation + +-- All Specials are in error +andx780 and -Inf -Inf -> NaN Invalid_operation +andx781 and -Inf -1000 -> NaN Invalid_operation +andx782 and -Inf -1 -> NaN Invalid_operation +andx783 and -Inf -0 -> NaN Invalid_operation +andx784 and -Inf 0 -> NaN Invalid_operation +andx785 and -Inf 1 -> NaN 
Invalid_operation +andx786 and -Inf 1000 -> NaN Invalid_operation +andx787 and -1000 -Inf -> NaN Invalid_operation +andx788 and -Inf -Inf -> NaN Invalid_operation +andx789 and -1 -Inf -> NaN Invalid_operation +andx790 and -0 -Inf -> NaN Invalid_operation +andx791 and 0 -Inf -> NaN Invalid_operation +andx792 and 1 -Inf -> NaN Invalid_operation +andx793 and 1000 -Inf -> NaN Invalid_operation +andx794 and Inf -Inf -> NaN Invalid_operation + +andx800 and Inf -Inf -> NaN Invalid_operation +andx801 and Inf -1000 -> NaN Invalid_operation +andx802 and Inf -1 -> NaN Invalid_operation +andx803 and Inf -0 -> NaN Invalid_operation +andx804 and Inf 0 -> NaN Invalid_operation +andx805 and Inf 1 -> NaN Invalid_operation +andx806 and Inf 1000 -> NaN Invalid_operation +andx807 and Inf Inf -> NaN Invalid_operation +andx808 and -1000 Inf -> NaN Invalid_operation +andx809 and -Inf Inf -> NaN Invalid_operation +andx810 and -1 Inf -> NaN Invalid_operation +andx811 and -0 Inf -> NaN Invalid_operation +andx812 and 0 Inf -> NaN Invalid_operation +andx813 and 1 Inf -> NaN Invalid_operation +andx814 and 1000 Inf -> NaN Invalid_operation +andx815 and Inf Inf -> NaN Invalid_operation + +andx821 and NaN -Inf -> NaN Invalid_operation +andx822 and NaN -1000 -> NaN Invalid_operation +andx823 and NaN -1 -> NaN Invalid_operation +andx824 and NaN -0 -> NaN Invalid_operation +andx825 and NaN 0 -> NaN Invalid_operation +andx826 and NaN 1 -> NaN Invalid_operation +andx827 and NaN 1000 -> NaN Invalid_operation +andx828 and NaN Inf -> NaN Invalid_operation +andx829 and NaN NaN -> NaN Invalid_operation +andx830 and -Inf NaN -> NaN Invalid_operation +andx831 and -1000 NaN -> NaN Invalid_operation +andx832 and -1 NaN -> NaN Invalid_operation +andx833 and -0 NaN -> NaN Invalid_operation +andx834 and 0 NaN -> NaN Invalid_operation +andx835 and 1 NaN -> NaN Invalid_operation +andx836 and 1000 NaN -> NaN Invalid_operation +andx837 and Inf NaN -> NaN Invalid_operation + +andx841 and sNaN -Inf -> NaN Invalid_operation +andx842 and sNaN -1000 -> NaN Invalid_operation +andx843 and sNaN -1 -> NaN Invalid_operation +andx844 and sNaN -0 -> NaN Invalid_operation +andx845 and sNaN 0 -> NaN Invalid_operation +andx846 and sNaN 1 -> NaN Invalid_operation +andx847 and sNaN 1000 -> NaN Invalid_operation +andx848 and sNaN NaN -> NaN Invalid_operation +andx849 and sNaN sNaN -> NaN Invalid_operation +andx850 and NaN sNaN -> NaN Invalid_operation +andx851 and -Inf sNaN -> NaN Invalid_operation +andx852 and -1000 sNaN -> NaN Invalid_operation +andx853 and -1 sNaN -> NaN Invalid_operation +andx854 and -0 sNaN -> NaN Invalid_operation +andx855 and 0 sNaN -> NaN Invalid_operation +andx856 and 1 sNaN -> NaN Invalid_operation +andx857 and 1000 sNaN -> NaN Invalid_operation +andx858 and Inf sNaN -> NaN Invalid_operation +andx859 and NaN sNaN -> NaN Invalid_operation + +-- propagating NaNs +andx861 and NaN1 -Inf -> NaN Invalid_operation +andx862 and +NaN2 -1000 -> NaN Invalid_operation +andx863 and NaN3 1000 -> NaN Invalid_operation +andx864 and NaN4 Inf -> NaN Invalid_operation +andx865 and NaN5 +NaN6 -> NaN Invalid_operation +andx866 and -Inf NaN7 -> NaN Invalid_operation +andx867 and -1000 NaN8 -> NaN Invalid_operation +andx868 and 1000 NaN9 -> NaN Invalid_operation +andx869 and Inf +NaN10 -> NaN Invalid_operation +andx871 and sNaN11 -Inf -> NaN Invalid_operation +andx872 and sNaN12 -1000 -> NaN Invalid_operation +andx873 and sNaN13 1000 -> NaN Invalid_operation +andx874 and sNaN14 NaN17 -> NaN Invalid_operation +andx875 and sNaN15 sNaN18 -> NaN 
Invalid_operation +andx876 and NaN16 sNaN19 -> NaN Invalid_operation +andx877 and -Inf +sNaN20 -> NaN Invalid_operation +andx878 and -1000 sNaN21 -> NaN Invalid_operation +andx879 and 1000 sNaN22 -> NaN Invalid_operation +andx880 and Inf sNaN23 -> NaN Invalid_operation +andx881 and +NaN25 +sNaN24 -> NaN Invalid_operation +andx882 and -NaN26 NaN28 -> NaN Invalid_operation +andx883 and -sNaN27 sNaN29 -> NaN Invalid_operation +andx884 and 1000 -NaN30 -> NaN Invalid_operation +andx885 and 1000 -sNaN31 -> NaN Invalid_operation diff --git a/lib-python/2.7/test/decimaltestdata/class.decTest b/lib-python/2.7/test/decimaltestdata/class.decTest --- a/lib-python/2.7/test/decimaltestdata/class.decTest +++ b/lib-python/2.7/test/decimaltestdata/class.decTest @@ -1,131 +1,131 @@ ------------------------------------------------------------------------- --- class.decTest -- Class operations -- --- Copyright (c) IBM Corporation, 1981, 2008. All rights reserved. -- ------------------------------------------------------------------------- --- Please see the document "General Decimal Arithmetic Testcases" -- --- at http://www2.hursley.ibm.com/decimal for the description of -- --- these testcases. -- --- -- --- These testcases are experimental ('beta' versions), and they -- --- may contain errors. They are offered on an as-is basis. In -- --- particular, achieving the same results as the tests here is not -- --- a guarantee that an implementation complies with any Standard -- --- or specification. The tests are not exhaustive. -- --- -- --- Please send comments, suggestions, and corrections to the author: -- --- Mike Cowlishaw, IBM Fellow -- --- IBM UK, PO Box 31, Birmingham Road, Warwick CV34 5JL, UK -- --- mfc at uk.ibm.com -- ------------------------------------------------------------------------- -version: 2.59 - --- [New 2006.11.27] - -precision: 9 -maxExponent: 999 -minExponent: -999 -extended: 1 -clamp: 1 -rounding: half_even - -clasx001 class 0 -> +Zero -clasx002 class 0.00 -> +Zero -clasx003 class 0E+5 -> +Zero -clasx004 class 1E-1007 -> +Subnormal -clasx005 class 0.1E-999 -> +Subnormal -clasx006 class 0.99999999E-999 -> +Subnormal -clasx007 class 1.00000000E-999 -> +Normal -clasx008 class 1E-999 -> +Normal -clasx009 class 1E-100 -> +Normal -clasx010 class 1E-10 -> +Normal -clasx012 class 1E-1 -> +Normal -clasx013 class 1 -> +Normal -clasx014 class 2.50 -> +Normal -clasx015 class 100.100 -> +Normal -clasx016 class 1E+30 -> +Normal -clasx017 class 1E+999 -> +Normal -clasx018 class 9.99999999E+999 -> +Normal -clasx019 class Inf -> +Infinity - -clasx021 class -0 -> -Zero -clasx022 class -0.00 -> -Zero -clasx023 class -0E+5 -> -Zero -clasx024 class -1E-1007 -> -Subnormal -clasx025 class -0.1E-999 -> -Subnormal -clasx026 class -0.99999999E-999 -> -Subnormal -clasx027 class -1.00000000E-999 -> -Normal -clasx028 class -1E-999 -> -Normal -clasx029 class -1E-100 -> -Normal -clasx030 class -1E-10 -> -Normal -clasx032 class -1E-1 -> -Normal -clasx033 class -1 -> -Normal -clasx034 class -2.50 -> -Normal -clasx035 class -100.100 -> -Normal -clasx036 class -1E+30 -> -Normal -clasx037 class -1E+999 -> -Normal -clasx038 class -9.99999999E+999 -> -Normal -clasx039 class -Inf -> -Infinity - -clasx041 class NaN -> NaN -clasx042 class -NaN -> NaN -clasx043 class +NaN12345 -> NaN -clasx044 class sNaN -> sNaN -clasx045 class -sNaN -> sNaN -clasx046 class +sNaN12345 -> sNaN - - --- decimal64 bounds - -precision: 16 -maxExponent: 384 -minExponent: -383 -clamp: 1 -rounding: half_even - -clasx201 class 0 -> +Zero -clasx202 
class 0.00 -> +Zero -clasx203 class 0E+5 -> +Zero -clasx204 class 1E-396 -> +Subnormal -clasx205 class 0.1E-383 -> +Subnormal -clasx206 class 0.999999999999999E-383 -> +Subnormal -clasx207 class 1.000000000000000E-383 -> +Normal -clasx208 class 1E-383 -> +Normal -clasx209 class 1E-100 -> +Normal -clasx210 class 1E-10 -> +Normal -clasx212 class 1E-1 -> +Normal -clasx213 class 1 -> +Normal -clasx214 class 2.50 -> +Normal -clasx215 class 100.100 -> +Normal -clasx216 class 1E+30 -> +Normal -clasx217 class 1E+384 -> +Normal -clasx218 class 9.999999999999999E+384 -> +Normal -clasx219 class Inf -> +Infinity - -clasx221 class -0 -> -Zero -clasx222 class -0.00 -> -Zero -clasx223 class -0E+5 -> -Zero -clasx224 class -1E-396 -> -Subnormal -clasx225 class -0.1E-383 -> -Subnormal -clasx226 class -0.999999999999999E-383 -> -Subnormal -clasx227 class -1.000000000000000E-383 -> -Normal -clasx228 class -1E-383 -> -Normal -clasx229 class -1E-100 -> -Normal -clasx230 class -1E-10 -> -Normal -clasx232 class -1E-1 -> -Normal -clasx233 class -1 -> -Normal -clasx234 class -2.50 -> -Normal -clasx235 class -100.100 -> -Normal -clasx236 class -1E+30 -> -Normal -clasx237 class -1E+384 -> -Normal -clasx238 class -9.999999999999999E+384 -> -Normal -clasx239 class -Inf -> -Infinity - -clasx241 class NaN -> NaN -clasx242 class -NaN -> NaN -clasx243 class +NaN12345 -> NaN -clasx244 class sNaN -> sNaN -clasx245 class -sNaN -> sNaN -clasx246 class +sNaN12345 -> sNaN - - - +------------------------------------------------------------------------ +-- class.decTest -- Class operations -- +-- Copyright (c) IBM Corporation, 1981, 2008. All rights reserved. -- +------------------------------------------------------------------------ +-- Please see the document "General Decimal Arithmetic Testcases" -- +-- at http://www2.hursley.ibm.com/decimal for the description of -- +-- these testcases. -- +-- -- +-- These testcases are experimental ('beta' versions), and they -- +-- may contain errors. They are offered on an as-is basis. In -- +-- particular, achieving the same results as the tests here is not -- +-- a guarantee that an implementation complies with any Standard -- +-- or specification. The tests are not exhaustive. 
-- +-- -- +-- Please send comments, suggestions, and corrections to the author: -- +-- Mike Cowlishaw, IBM Fellow -- +-- IBM UK, PO Box 31, Birmingham Road, Warwick CV34 5JL, UK -- +-- mfc at uk.ibm.com -- +------------------------------------------------------------------------ +version: 2.59 + +-- [New 2006.11.27] + +precision: 9 +maxExponent: 999 +minExponent: -999 +extended: 1 +clamp: 1 +rounding: half_even + +clasx001 class 0 -> +Zero +clasx002 class 0.00 -> +Zero +clasx003 class 0E+5 -> +Zero +clasx004 class 1E-1007 -> +Subnormal +clasx005 class 0.1E-999 -> +Subnormal +clasx006 class 0.99999999E-999 -> +Subnormal +clasx007 class 1.00000000E-999 -> +Normal +clasx008 class 1E-999 -> +Normal +clasx009 class 1E-100 -> +Normal +clasx010 class 1E-10 -> +Normal +clasx012 class 1E-1 -> +Normal +clasx013 class 1 -> +Normal +clasx014 class 2.50 -> +Normal +clasx015 class 100.100 -> +Normal +clasx016 class 1E+30 -> +Normal +clasx017 class 1E+999 -> +Normal +clasx018 class 9.99999999E+999 -> +Normal +clasx019 class Inf -> +Infinity + +clasx021 class -0 -> -Zero +clasx022 class -0.00 -> -Zero +clasx023 class -0E+5 -> -Zero +clasx024 class -1E-1007 -> -Subnormal +clasx025 class -0.1E-999 -> -Subnormal +clasx026 class -0.99999999E-999 -> -Subnormal +clasx027 class -1.00000000E-999 -> -Normal +clasx028 class -1E-999 -> -Normal +clasx029 class -1E-100 -> -Normal +clasx030 class -1E-10 -> -Normal +clasx032 class -1E-1 -> -Normal +clasx033 class -1 -> -Normal +clasx034 class -2.50 -> -Normal +clasx035 class -100.100 -> -Normal +clasx036 class -1E+30 -> -Normal +clasx037 class -1E+999 -> -Normal +clasx038 class -9.99999999E+999 -> -Normal +clasx039 class -Inf -> -Infinity + +clasx041 class NaN -> NaN +clasx042 class -NaN -> NaN +clasx043 class +NaN12345 -> NaN +clasx044 class sNaN -> sNaN +clasx045 class -sNaN -> sNaN +clasx046 class +sNaN12345 -> sNaN + + +-- decimal64 bounds + +precision: 16 +maxExponent: 384 +minExponent: -383 +clamp: 1 +rounding: half_even + +clasx201 class 0 -> +Zero +clasx202 class 0.00 -> +Zero +clasx203 class 0E+5 -> +Zero +clasx204 class 1E-396 -> +Subnormal +clasx205 class 0.1E-383 -> +Subnormal +clasx206 class 0.999999999999999E-383 -> +Subnormal +clasx207 class 1.000000000000000E-383 -> +Normal +clasx208 class 1E-383 -> +Normal +clasx209 class 1E-100 -> +Normal +clasx210 class 1E-10 -> +Normal +clasx212 class 1E-1 -> +Normal +clasx213 class 1 -> +Normal +clasx214 class 2.50 -> +Normal +clasx215 class 100.100 -> +Normal +clasx216 class 1E+30 -> +Normal +clasx217 class 1E+384 -> +Normal +clasx218 class 9.999999999999999E+384 -> +Normal +clasx219 class Inf -> +Infinity + +clasx221 class -0 -> -Zero +clasx222 class -0.00 -> -Zero +clasx223 class -0E+5 -> -Zero +clasx224 class -1E-396 -> -Subnormal +clasx225 class -0.1E-383 -> -Subnormal +clasx226 class -0.999999999999999E-383 -> -Subnormal +clasx227 class -1.000000000000000E-383 -> -Normal +clasx228 class -1E-383 -> -Normal +clasx229 class -1E-100 -> -Normal +clasx230 class -1E-10 -> -Normal +clasx232 class -1E-1 -> -Normal +clasx233 class -1 -> -Normal +clasx234 class -2.50 -> -Normal +clasx235 class -100.100 -> -Normal +clasx236 class -1E+30 -> -Normal +clasx237 class -1E+384 -> -Normal +clasx238 class -9.999999999999999E+384 -> -Normal +clasx239 class -Inf -> -Infinity + +clasx241 class NaN -> NaN +clasx242 class -NaN -> NaN +clasx243 class +NaN12345 -> NaN +clasx244 class sNaN -> sNaN +clasx245 class -sNaN -> sNaN +clasx246 class +sNaN12345 -> sNaN + + + diff --git a/lib-python/2.7/test/decimaltestdata/comparetotal.decTest 
b/lib-python/2.7/test/decimaltestdata/comparetotal.decTest --- a/lib-python/2.7/test/decimaltestdata/comparetotal.decTest +++ b/lib-python/2.7/test/decimaltestdata/comparetotal.decTest @@ -1,798 +1,798 @@ ------------------------------------------------------------------------- --- comparetotal.decTest -- decimal comparison using total ordering -- --- Copyright (c) IBM Corporation, 1981, 2008. All rights reserved. -- ------------------------------------------------------------------------- --- Please see the document "General Decimal Arithmetic Testcases" -- --- at http://www2.hursley.ibm.com/decimal for the description of -- --- these testcases. -- --- -- --- These testcases are experimental ('beta' versions), and they -- --- may contain errors. They are offered on an as-is basis. In -- --- particular, achieving the same results as the tests here is not -- --- a guarantee that an implementation complies with any Standard -- --- or specification. The tests are not exhaustive. -- --- -- --- Please send comments, suggestions, and corrections to the author: -- --- Mike Cowlishaw, IBM Fellow -- --- IBM UK, PO Box 31, Birmingham Road, Warwick CV34 5JL, UK -- --- mfc at uk.ibm.com -- ------------------------------------------------------------------------- -version: 2.59 - --- Note that we cannot assume add/subtract tests cover paths adequately, --- here, because the code might be quite different (comparison cannot --- overflow or underflow, so actual subtractions are not necessary). --- Similarly, comparetotal will have some radically different paths --- than compare. - -extended: 1 -precision: 16 -rounding: half_up -maxExponent: 384 -minExponent: -383 - --- sanity checks -cotx001 comparetotal -2 -2 -> 0 -cotx002 comparetotal -2 -1 -> -1 -cotx003 comparetotal -2 0 -> -1 -cotx004 comparetotal -2 1 -> -1 -cotx005 comparetotal -2 2 -> -1 -cotx006 comparetotal -1 -2 -> 1 -cotx007 comparetotal -1 -1 -> 0 -cotx008 comparetotal -1 0 -> -1 -cotx009 comparetotal -1 1 -> -1 -cotx010 comparetotal -1 2 -> -1 -cotx011 comparetotal 0 -2 -> 1 -cotx012 comparetotal 0 -1 -> 1 -cotx013 comparetotal 0 0 -> 0 -cotx014 comparetotal 0 1 -> -1 -cotx015 comparetotal 0 2 -> -1 -cotx016 comparetotal 1 -2 -> 1 -cotx017 comparetotal 1 -1 -> 1 -cotx018 comparetotal 1 0 -> 1 -cotx019 comparetotal 1 1 -> 0 -cotx020 comparetotal 1 2 -> -1 -cotx021 comparetotal 2 -2 -> 1 -cotx022 comparetotal 2 -1 -> 1 -cotx023 comparetotal 2 0 -> 1 -cotx025 comparetotal 2 1 -> 1 -cotx026 comparetotal 2 2 -> 0 - -cotx031 comparetotal -20 -20 -> 0 -cotx032 comparetotal -20 -10 -> -1 -cotx033 comparetotal -20 00 -> -1 -cotx034 comparetotal -20 10 -> -1 -cotx035 comparetotal -20 20 -> -1 -cotx036 comparetotal -10 -20 -> 1 -cotx037 comparetotal -10 -10 -> 0 -cotx038 comparetotal -10 00 -> -1 -cotx039 comparetotal -10 10 -> -1 -cotx040 comparetotal -10 20 -> -1 -cotx041 comparetotal 00 -20 -> 1 -cotx042 comparetotal 00 -10 -> 1 -cotx043 comparetotal 00 00 -> 0 -cotx044 comparetotal 00 10 -> -1 -cotx045 comparetotal 00 20 -> -1 -cotx046 comparetotal 10 -20 -> 1 -cotx047 comparetotal 10 -10 -> 1 -cotx048 comparetotal 10 00 -> 1 -cotx049 comparetotal 10 10 -> 0 -cotx050 comparetotal 10 20 -> -1 -cotx051 comparetotal 20 -20 -> 1 -cotx052 comparetotal 20 -10 -> 1 -cotx053 comparetotal 20 00 -> 1 -cotx055 comparetotal 20 10 -> 1 -cotx056 comparetotal 20 20 -> 0 - -cotx061 comparetotal -2.0 -2.0 -> 0 -cotx062 comparetotal -2.0 -1.0 -> -1 -cotx063 comparetotal -2.0 0.0 -> -1 -cotx064 comparetotal -2.0 1.0 -> -1 -cotx065 comparetotal -2.0 2.0 -> -1 -cotx066 
comparetotal -1.0 -2.0 -> 1 -cotx067 comparetotal -1.0 -1.0 -> 0 -cotx068 comparetotal -1.0 0.0 -> -1 -cotx069 comparetotal -1.0 1.0 -> -1 -cotx070 comparetotal -1.0 2.0 -> -1 -cotx071 comparetotal 0.0 -2.0 -> 1 -cotx072 comparetotal 0.0 -1.0 -> 1 -cotx073 comparetotal 0.0 0.0 -> 0 -cotx074 comparetotal 0.0 1.0 -> -1 -cotx075 comparetotal 0.0 2.0 -> -1 -cotx076 comparetotal 1.0 -2.0 -> 1 -cotx077 comparetotal 1.0 -1.0 -> 1 -cotx078 comparetotal 1.0 0.0 -> 1 -cotx079 comparetotal 1.0 1.0 -> 0 -cotx080 comparetotal 1.0 2.0 -> -1 -cotx081 comparetotal 2.0 -2.0 -> 1 -cotx082 comparetotal 2.0 -1.0 -> 1 -cotx083 comparetotal 2.0 0.0 -> 1 -cotx085 comparetotal 2.0 1.0 -> 1 -cotx086 comparetotal 2.0 2.0 -> 0 - --- now some cases which might overflow if subtract were used -maxexponent: 999999999 -minexponent: -999999999 -cotx090 comparetotal 9.99999999E+999999999 9.99999999E+999999999 -> 0 -cotx091 comparetotal -9.99999999E+999999999 9.99999999E+999999999 -> -1 -cotx092 comparetotal 9.99999999E+999999999 -9.99999999E+999999999 -> 1 -cotx093 comparetotal -9.99999999E+999999999 -9.99999999E+999999999 -> 0 - --- Examples -cotx094 comparetotal 12.73 127.9 -> -1 -cotx095 comparetotal -127 12 -> -1 -cotx096 comparetotal 12.30 12.3 -> -1 -cotx097 comparetotal 12.30 12.30 -> 0 -cotx098 comparetotal 12.3 12.300 -> 1 -cotx099 comparetotal 12.3 NaN -> -1 - --- some differing length/exponent cases --- in this first group, compare would compare all equal -cotx100 comparetotal 7.0 7.0 -> 0 -cotx101 comparetotal 7.0 7 -> -1 -cotx102 comparetotal 7 7.0 -> 1 -cotx103 comparetotal 7E+0 7.0 -> 1 -cotx104 comparetotal 70E-1 7.0 -> 0 -cotx105 comparetotal 0.7E+1 7 -> 0 -cotx106 comparetotal 70E-1 7 -> -1 -cotx107 comparetotal 7.0 7E+0 -> -1 -cotx108 comparetotal 7.0 70E-1 -> 0 -cotx109 comparetotal 7 0.7E+1 -> 0 -cotx110 comparetotal 7 70E-1 -> 1 - -cotx120 comparetotal 8.0 7.0 -> 1 -cotx121 comparetotal 8.0 7 -> 1 -cotx122 comparetotal 8 7.0 -> 1 -cotx123 comparetotal 8E+0 7.0 -> 1 -cotx124 comparetotal 80E-1 7.0 -> 1 -cotx125 comparetotal 0.8E+1 7 -> 1 -cotx126 comparetotal 80E-1 7 -> 1 -cotx127 comparetotal 8.0 7E+0 -> 1 -cotx128 comparetotal 8.0 70E-1 -> 1 -cotx129 comparetotal 8 0.7E+1 -> 1 -cotx130 comparetotal 8 70E-1 -> 1 - -cotx140 comparetotal 8.0 9.0 -> -1 -cotx141 comparetotal 8.0 9 -> -1 -cotx142 comparetotal 8 9.0 -> -1 -cotx143 comparetotal 8E+0 9.0 -> -1 -cotx144 comparetotal 80E-1 9.0 -> -1 -cotx145 comparetotal 0.8E+1 9 -> -1 -cotx146 comparetotal 80E-1 9 -> -1 -cotx147 comparetotal 8.0 9E+0 -> -1 -cotx148 comparetotal 8.0 90E-1 -> -1 -cotx149 comparetotal 8 0.9E+1 -> -1 -cotx150 comparetotal 8 90E-1 -> -1 - --- and again, with sign changes -+ .. 
-cotx200 comparetotal -7.0 7.0 -> -1 -cotx201 comparetotal -7.0 7 -> -1 -cotx202 comparetotal -7 7.0 -> -1 -cotx203 comparetotal -7E+0 7.0 -> -1 -cotx204 comparetotal -70E-1 7.0 -> -1 -cotx205 comparetotal -0.7E+1 7 -> -1 -cotx206 comparetotal -70E-1 7 -> -1 -cotx207 comparetotal -7.0 7E+0 -> -1 -cotx208 comparetotal -7.0 70E-1 -> -1 -cotx209 comparetotal -7 0.7E+1 -> -1 -cotx210 comparetotal -7 70E-1 -> -1 - -cotx220 comparetotal -8.0 7.0 -> -1 -cotx221 comparetotal -8.0 7 -> -1 -cotx222 comparetotal -8 7.0 -> -1 -cotx223 comparetotal -8E+0 7.0 -> -1 -cotx224 comparetotal -80E-1 7.0 -> -1 -cotx225 comparetotal -0.8E+1 7 -> -1 -cotx226 comparetotal -80E-1 7 -> -1 -cotx227 comparetotal -8.0 7E+0 -> -1 -cotx228 comparetotal -8.0 70E-1 -> -1 -cotx229 comparetotal -8 0.7E+1 -> -1 -cotx230 comparetotal -8 70E-1 -> -1 - -cotx240 comparetotal -8.0 9.0 -> -1 -cotx241 comparetotal -8.0 9 -> -1 -cotx242 comparetotal -8 9.0 -> -1 -cotx243 comparetotal -8E+0 9.0 -> -1 -cotx244 comparetotal -80E-1 9.0 -> -1 -cotx245 comparetotal -0.8E+1 9 -> -1 -cotx246 comparetotal -80E-1 9 -> -1 -cotx247 comparetotal -8.0 9E+0 -> -1 -cotx248 comparetotal -8.0 90E-1 -> -1 -cotx249 comparetotal -8 0.9E+1 -> -1 -cotx250 comparetotal -8 90E-1 -> -1 - --- and again, with sign changes +- .. -cotx300 comparetotal 7.0 -7.0 -> 1 -cotx301 comparetotal 7.0 -7 -> 1 -cotx302 comparetotal 7 -7.0 -> 1 -cotx303 comparetotal 7E+0 -7.0 -> 1 -cotx304 comparetotal 70E-1 -7.0 -> 1 -cotx305 comparetotal .7E+1 -7 -> 1 -cotx306 comparetotal 70E-1 -7 -> 1 -cotx307 comparetotal 7.0 -7E+0 -> 1 -cotx308 comparetotal 7.0 -70E-1 -> 1 -cotx309 comparetotal 7 -.7E+1 -> 1 -cotx310 comparetotal 7 -70E-1 -> 1 - -cotx320 comparetotal 8.0 -7.0 -> 1 -cotx321 comparetotal 8.0 -7 -> 1 -cotx322 comparetotal 8 -7.0 -> 1 -cotx323 comparetotal 8E+0 -7.0 -> 1 -cotx324 comparetotal 80E-1 -7.0 -> 1 -cotx325 comparetotal .8E+1 -7 -> 1 -cotx326 comparetotal 80E-1 -7 -> 1 -cotx327 comparetotal 8.0 -7E+0 -> 1 -cotx328 comparetotal 8.0 -70E-1 -> 1 -cotx329 comparetotal 8 -.7E+1 -> 1 -cotx330 comparetotal 8 -70E-1 -> 1 - -cotx340 comparetotal 8.0 -9.0 -> 1 -cotx341 comparetotal 8.0 -9 -> 1 -cotx342 comparetotal 8 -9.0 -> 1 -cotx343 comparetotal 8E+0 -9.0 -> 1 -cotx344 comparetotal 80E-1 -9.0 -> 1 -cotx345 comparetotal .8E+1 -9 -> 1 -cotx346 comparetotal 80E-1 -9 -> 1 -cotx347 comparetotal 8.0 -9E+0 -> 1 -cotx348 comparetotal 8.0 -90E-1 -> 1 -cotx349 comparetotal 8 -.9E+1 -> 1 -cotx350 comparetotal 8 -90E-1 -> 1 - --- and again, with sign changes -- .. -cotx400 comparetotal -7.0 -7.0 -> 0 -cotx401 comparetotal -7.0 -7 -> 1 -cotx402 comparetotal -7 -7.0 -> -1 -cotx403 comparetotal -7E+0 -7.0 -> -1 -cotx404 comparetotal -70E-1 -7.0 -> 0 -cotx405 comparetotal -.7E+1 -7 -> 0 -cotx406 comparetotal -70E-1 -7 -> 1 -cotx407 comparetotal -7.0 -7E+0 -> 1 -cotx408 comparetotal -7.0 -70E-1 -> 0 -cotx409 comparetotal -7 -.7E+1 -> 0 From noreply at buildbot.pypy.org Fri Feb 3 00:35:07 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Fri, 3 Feb 2012 00:35:07 +0100 (CET) Subject: [pypy-commit] pypy py3k: Remove deprecated max_buffer_size from buffered IO. Message-ID: <20120202233507.90750710757@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: py3k Changeset: r52046:72c719815c26 Date: 2012-02-03 00:34 +0100 http://bitbucket.org/pypy/pypy/changeset/72c719815c26/ Log: Remove deprecated max_buffer_size from buffered IO. 
(and also the slowest test in module/_io) diff --git a/pypy/module/_io/interp_bufferedio.py b/pypy/module/_io/interp_bufferedio.py --- a/pypy/module/_io/interp_bufferedio.py +++ b/pypy/module/_io/interp_bufferedio.py @@ -74,10 +74,6 @@ def _check_init(self, space): raise NotImplementedError - def _deprecated_max_buffer_size(self, space): - space.warn("max_buffer_size is deprecated", - space.w_DeprecationWarning) - def read_w(self, space, w_size=None): self._unsupportedoperation(space, "read") @@ -821,12 +817,8 @@ ) class W_BufferedWriter(BufferedMixin, W_BufferedIOBase): - @unwrap_spec(buffer_size=int, max_buffer_size=int) - def descr_init(self, space, w_raw, buffer_size=DEFAULT_BUFFER_SIZE, - max_buffer_size=-234): - if max_buffer_size != -234: - self._deprecated_max_buffer_size(space) - + @unwrap_spec(buffer_size=int) + def descr_init(self, space, w_raw, buffer_size=DEFAULT_BUFFER_SIZE): self.state = STATE_ZERO check_writable_w(space, w_raw) @@ -885,12 +877,9 @@ w_reader = None w_writer = None - @unwrap_spec(buffer_size=int, max_buffer_size=int) - def descr_init(self, space, w_reader, w_writer, - buffer_size=DEFAULT_BUFFER_SIZE, max_buffer_size=-234): - if max_buffer_size != -234: - self._deprecated_max_buffer_size(space) - + @unwrap_spec(buffer_size=int) + def descr_init(self, space, w_reader, w_writer, + buffer_size=DEFAULT_BUFFER_SIZE): try: self.w_reader = W_BufferedReader(space) self.w_reader.descr_init(space, w_reader, buffer_size) @@ -943,12 +932,8 @@ ) class W_BufferedRandom(BufferedMixin, W_BufferedIOBase): - @unwrap_spec(buffer_size=int, max_buffer_size=int) - def descr_init(self, space, w_raw, buffer_size=DEFAULT_BUFFER_SIZE, - max_buffer_size = -234): - if max_buffer_size != -234: - self._deprecated_max_buffer_size(space) - + @unwrap_spec(buffer_size=int) + def descr_init(self, space, w_raw, buffer_size=DEFAULT_BUFFER_SIZE): self.state = STATE_ZERO check_readable_w(space, w_raw) diff --git a/pypy/module/_io/test/test_bufferedio.py b/pypy/module/_io/test/test_bufferedio.py --- a/pypy/module/_io/test/test_bufferedio.py +++ b/pypy/module/_io/test/test_bufferedio.py @@ -255,17 +255,6 @@ raises(ValueError, b.flush) raises(ValueError, b.close) - def test_deprecated_max_buffer_size(self): - import _io, warnings - raw = _io.FileIO(self.tmpfile, 'w') - with warnings.catch_warnings(record=True) as w: - warnings.simplefilter("always") - f = _io.BufferedWriter(raw, max_buffer_size=8192) - f.close() - assert len(w) == 1 - assert str(w[0].message) == "max_buffer_size is deprecated" - assert w[0].category is DeprecationWarning - def test_check_several_writes(self): import _io raw = _io.FileIO(self.tmpfile, 'w') From noreply at buildbot.pypy.org Fri Feb 3 00:35:56 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Fri, 3 Feb 2012 00:35:56 +0100 (CET) Subject: [pypy-commit] pypy py3k: hg merge default Message-ID: <20120202233556.DEC1D710756@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: py3k Changeset: r52047:772300a58841 Date: 2012-02-03 00:35 +0100 http://bitbucket.org/pypy/pypy/changeset/772300a58841/ Log: hg merge default diff --git a/pypy/jit/metainterp/pyjitpl.py b/pypy/jit/metainterp/pyjitpl.py --- a/pypy/jit/metainterp/pyjitpl.py +++ b/pypy/jit/metainterp/pyjitpl.py @@ -974,13 +974,13 @@ any_operation = len(self.metainterp.history.operations) > 0 jitdriver_sd = self.metainterp.staticdata.jitdrivers_sd[jdindex] self.verify_green_args(jitdriver_sd, greenboxes) - self.debug_merge_point(jitdriver_sd, jdindex, self.metainterp.in_recursion, + 
self.debug_merge_point(jitdriver_sd, jdindex, self.metainterp.portal_call_depth, greenboxes) if self.metainterp.seen_loop_header_for_jdindex < 0: if not any_operation: return - if self.metainterp.in_recursion or not self.metainterp.get_procedure_token(greenboxes, True): + if self.metainterp.portal_call_depth or not self.metainterp.get_procedure_token(greenboxes, True): if not jitdriver_sd.no_loop_header: return # automatically add a loop_header if there is none @@ -992,7 +992,7 @@ self.metainterp.seen_loop_header_for_jdindex = -1 # - if not self.metainterp.in_recursion: + if not self.metainterp.portal_call_depth: assert jitdriver_sd is self.metainterp.jitdriver_sd # Set self.pc to point to jit_merge_point instead of just after: # if reached_loop_header() raises SwitchToBlackhole, then the @@ -1028,11 +1028,11 @@ assembler_call=True) raise ChangeFrame - def debug_merge_point(self, jitdriver_sd, jd_index, in_recursion, greenkey): + def debug_merge_point(self, jitdriver_sd, jd_index, portal_call_depth, greenkey): # debugging: produce a DEBUG_MERGE_POINT operation loc = jitdriver_sd.warmstate.get_location_str(greenkey) debug_print(loc) - args = [ConstInt(jd_index), ConstInt(in_recursion)] + greenkey + args = [ConstInt(jd_index), ConstInt(portal_call_depth)] + greenkey self.metainterp.history.record(rop.DEBUG_MERGE_POINT, args, None) @arguments("box", "label") @@ -1552,7 +1552,7 @@ # ____________________________________________________________ class MetaInterp(object): - in_recursion = 0 + portal_call_depth = 0 cancel_count = 0 def __init__(self, staticdata, jitdriver_sd): @@ -1587,7 +1587,7 @@ def newframe(self, jitcode, greenkey=None): if jitcode.is_portal: - self.in_recursion += 1 + self.portal_call_depth += 1 if greenkey is not None and self.is_main_jitcode(jitcode): self.portal_trace_positions.append( (greenkey, len(self.history.operations))) @@ -1603,7 +1603,7 @@ frame = self.framestack.pop() jitcode = frame.jitcode if jitcode.is_portal: - self.in_recursion -= 1 + self.portal_call_depth -= 1 if frame.greenkey is not None and self.is_main_jitcode(jitcode): self.portal_trace_positions.append( (None, len(self.history.operations))) @@ -1662,17 +1662,17 @@ raise self.staticdata.ExitFrameWithExceptionRef(self.cpu, excvaluebox.getref_base()) def check_recursion_invariant(self): - in_recursion = -1 + portal_call_depth = -1 for frame in self.framestack: jitcode = frame.jitcode assert jitcode.is_portal == len([ jd for jd in self.staticdata.jitdrivers_sd if jd.mainjitcode is jitcode]) if jitcode.is_portal: - in_recursion += 1 - if in_recursion != self.in_recursion: - print "in_recursion problem!!!" - print in_recursion, self.in_recursion + portal_call_depth += 1 + if portal_call_depth != self.portal_call_depth: + print "portal_call_depth problem!!!" + print portal_call_depth, self.portal_call_depth for frame in self.framestack: jitcode = frame.jitcode if jitcode.is_portal: @@ -2183,11 +2183,11 @@ def initialize_state_from_start(self, original_boxes): # ----- make a new frame ----- - self.in_recursion = -1 # always one portal around + self.portal_call_depth = -1 # always one portal around self.framestack = [] f = self.newframe(self.jitdriver_sd.mainjitcode) f.setup_call(original_boxes) - assert self.in_recursion == 0 + assert self.portal_call_depth == 0 self.virtualref_boxes = [] self.initialize_withgreenfields(original_boxes) self.initialize_virtualizable(original_boxes) @@ -2198,7 +2198,7 @@ # otherwise the jit_virtual_refs are left in a dangling state. 
rstack._stack_criticalcode_start() try: - self.in_recursion = -1 # always one portal around + self.portal_call_depth = -1 # always one portal around self.history = history.History() inputargs_and_holes = self.rebuild_state_after_failure(resumedescr) self.history.inputargs = [box for box in inputargs_and_holes if box] diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -267,7 +267,7 @@ out_size = support.product(out_shape) result = W_NDimArray(out_size, out_shape, dtype) # This is the place to add fpypy and blas - return multidim_dot(space, self.get_concrete(), + return multidim_dot(space, self.get_concrete(), other.get_concrete(), result, dtype, other_critical_dim) @@ -280,6 +280,12 @@ def descr_get_ndim(self, space): return space.wrap(len(self.shape)) + def descr_get_itemsize(self, space): + return space.wrap(self.find_dtype().itemtype.get_element_size()) + + def descr_get_nbytes(self, space): + return space.wrap(self.size * self.find_dtype().itemtype.get_element_size()) + @jit.unroll_safe def descr_get_shape(self, space): return space.newtuple([space.wrap(i) for i in self.shape]) @@ -507,7 +513,7 @@ w_shape = space.newtuple(args_w) new_shape = get_shape_from_iterable(space, self.size, w_shape) return self.reshape(space, new_shape) - + def reshape(self, space, new_shape): concrete = self.get_concrete() # Since we got to here, prod(new_shape) == self.size @@ -1289,11 +1295,13 @@ BaseArray.descr_set_shape), size = GetSetProperty(BaseArray.descr_get_size), ndim = GetSetProperty(BaseArray.descr_get_ndim), - item = interp2app(BaseArray.descr_item), + itemsize = GetSetProperty(BaseArray.descr_get_itemsize), + nbytes = GetSetProperty(BaseArray.descr_get_nbytes), T = GetSetProperty(BaseArray.descr_get_transpose), flat = GetSetProperty(BaseArray.descr_get_flatiter), ravel = interp2app(BaseArray.descr_ravel), + item = interp2app(BaseArray.descr_item), mean = interp2app(BaseArray.descr_mean), sum = interp2app(BaseArray.descr_sum), @@ -1349,8 +1357,8 @@ return space.wrap(self.index) def descr_coords(self, space): - coords, step, lngth = to_coords(space, self.base.shape, - self.base.size, self.base.order, + coords, step, lngth = to_coords(space, self.base.shape, + self.base.size, self.base.order, space.wrap(self.index)) return space.newtuple([space.wrap(c) for c in coords]) @@ -1380,7 +1388,7 @@ step=step, res=res, ri=ri, - ) + ) w_val = base.getitem(basei.offset) res.setitem(ri.offset,w_val) basei = basei.next_skip_x(shapelen, step) @@ -1408,7 +1416,7 @@ arr=arr, ai=ai, lngth=lngth, - ) + ) v = arr.getitem(ai).convert_to(base.dtype) base.setitem(basei.offset, v) # need to repeat input values until all assignments are done @@ -1424,7 +1432,6 @@ W_FlatIterator.typedef = TypeDef( 'flatiter', - #__array__ = #MISSING __iter__ = interp2app(W_FlatIterator.descr_iter), __getitem__ = interp2app(W_FlatIterator.descr_getitem), __setitem__ = interp2app(W_FlatIterator.descr_setitem), @@ -1434,7 +1441,6 @@ __le__ = interp2app(BaseArray.descr_le), __gt__ = interp2app(BaseArray.descr_gt), __ge__ = interp2app(BaseArray.descr_ge), - #__sizeof__ #MISSING base = GetSetProperty(W_FlatIterator.descr_base), index = GetSetProperty(W_FlatIterator.descr_index), coords = GetSetProperty(W_FlatIterator.descr_coords), diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ 
b/pypy/module/micronumpy/test/test_numarray.py @@ -173,7 +173,7 @@ def _to_coords(index, order): return to_coords(self.space, [2, 3, 4], 24, order, self.space.wrap(index))[0] - + assert _to_coords(0, 'C') == [0, 0, 0] assert _to_coords(1, 'C') == [0, 0, 1] assert _to_coords(-1, 'C') == [1, 2, 3] @@ -306,7 +306,7 @@ from _numpypy import arange a = arange(15).reshape(3, 5) assert a[1, 3] == 8 - assert a.T[1, 2] == 11 + assert a.T[1, 2] == 11 def test_setitem(self): from _numpypy import array @@ -1121,14 +1121,14 @@ f1 = array([0,1]) f = concatenate((f1, [2], f1, [7])) assert (f == [0,1,2,0,1,7]).all() - + bad_axis = raises(ValueError, concatenate, (a1,a2), axis=1) assert str(bad_axis.value) == "bad axis argument" - + concat_zero = raises(ValueError, concatenate, ()) assert str(concat_zero.value) == \ "concatenation of zero-length sequences is impossible" - + dims_disagree = raises(ValueError, concatenate, (a1, b1), axis=0) assert str(dims_disagree.value) == \ "array dimensions must agree except for axis being concatenated" @@ -1163,6 +1163,25 @@ a = array([[1, 2], [3, 4]]) assert (a.T.flatten() == [1, 3, 2, 4]).all() + def test_itemsize(self): + from _numpypy import ones, dtype, array + + for obj in [float, bool, int]: + assert ones(1, dtype=obj).itemsize == dtype(obj).itemsize + assert (ones(1) + ones(1)).itemsize == 8 + assert array(1.0).itemsize == 8 + assert ones(1)[:].itemsize == 8 + + def test_nbytes(self): + from _numpypy import array, ones + + assert ones(1).nbytes == 8 + assert ones((2, 2)).nbytes == 32 + assert ones((2, 2))[1:,].nbytes == 16 + assert (ones(1) + ones(1)).nbytes == 8 + assert array(3.0).nbytes == 8 + + class AppTestMultiDim(BaseNumpyAppTest): def test_init(self): import _numpypy @@ -1458,13 +1477,13 @@ b = a.T.flat assert (b == [0, 4, 8, 1, 5, 9, 2, 6, 10, 3, 7, 11]).all() assert not (b != [0, 4, 8, 1, 5, 9, 2, 6, 10, 3, 7, 11]).any() - assert ((b >= range(12)) == [True, True, True,False, True, True, + assert ((b >= range(12)) == [True, True, True,False, True, True, False, False, True, False, False, True]).all() - assert ((b < range(12)) != [True, True, True,False, True, True, + assert ((b < range(12)) != [True, True, True,False, True, True, False, False, True, False, False, True]).all() - assert ((b <= range(12)) != [False, True, True,False, True, True, + assert ((b <= range(12)) != [False, True, True,False, True, True, False, False, True, False, False, False]).all() - assert ((b > range(12)) == [False, True, True,False, True, True, + assert ((b > range(12)) == [False, True, True,False, True, True, False, False, True, False, False, False]).all() def test_flatiter_view(self): from _numpypy import arange diff --git a/pypy/module/pypyjit/interp_resop.py b/pypy/module/pypyjit/interp_resop.py --- a/pypy/module/pypyjit/interp_resop.py +++ b/pypy/module/pypyjit/interp_resop.py @@ -127,6 +127,7 @@ l_w.append(DebugMergePoint(space, jit_hooks._cast_to_gcref(op), logops.repr_of_resop(op), jd_sd.jitdriver.name, + op.getarg(1).getint(), w_greenkey)) else: l_w.append(WrappedOp(jit_hooks._cast_to_gcref(op), ofs, @@ -163,14 +164,14 @@ llres = res.llbox return WrappedOp(jit_hooks.resop_new(num, args, llres), offset, repr) - at unwrap_spec(repr=str, jd_name=str) -def descr_new_dmp(space, w_tp, w_args, repr, jd_name, w_greenkey): + at unwrap_spec(repr=str, jd_name=str, call_depth=int) +def descr_new_dmp(space, w_tp, w_args, repr, jd_name, call_depth, w_greenkey): args = [space.interp_w(WrappedBox, w_arg).llbox for w_arg in space.listview(w_args)] num = rop.DEBUG_MERGE_POINT return 
DebugMergePoint(space, jit_hooks.resop_new(num, args, jit_hooks.emptyval()), - repr, jd_name, w_greenkey) + repr, jd_name, call_depth, w_greenkey) class WrappedOp(Wrappable): """ A class representing a single ResOperation, wrapped nicely @@ -205,10 +206,11 @@ jit_hooks.resop_setresult(self.op, box.llbox) class DebugMergePoint(WrappedOp): - def __init__(self, space, op, repr_of_resop, jd_name, w_greenkey): + def __init__(self, space, op, repr_of_resop, jd_name, call_depth, w_greenkey): WrappedOp.__init__(self, op, -1, repr_of_resop) + self.jd_name = jd_name + self.call_depth = call_depth self.w_greenkey = w_greenkey - self.jd_name = jd_name def get_pycode(self, space): if self.jd_name == pypyjitdriver.name: @@ -243,6 +245,7 @@ greenkey = interp_attrproperty_w("w_greenkey", cls=DebugMergePoint), pycode = GetSetProperty(DebugMergePoint.get_pycode), bytecode_no = GetSetProperty(DebugMergePoint.get_bytecode_no), + call_depth = interp_attrproperty("call_depth", cls=DebugMergePoint), jitdriver_name = GetSetProperty(DebugMergePoint.get_jitdriver_name), ) DebugMergePoint.acceptable_as_base_class = False diff --git a/pypy/module/pypyjit/test/test_jit_hook.py b/pypy/module/pypyjit/test/test_jit_hook.py --- a/pypy/module/pypyjit/test/test_jit_hook.py +++ b/pypy/module/pypyjit/test/test_jit_hook.py @@ -122,7 +122,8 @@ assert isinstance(dmp, pypyjit.DebugMergePoint) assert dmp.pycode is self.f.func_code assert dmp.greenkey == (self.f.func_code, 0, False) - #assert int_add.name == 'int_add' + assert dmp.call_depth == 0 + assert int_add.name == 'int_add' assert int_add.num == self.int_add_num self.on_compile_bridge() assert len(all) == 2 @@ -223,11 +224,13 @@ def f(): pass - op = DebugMergePoint([Box(0)], 'repr', 'pypyjit', (f.func_code, 0, 0)) + op = DebugMergePoint([Box(0)], 'repr', 'pypyjit', 2, (f.func_code, 0, 0)) assert op.bytecode_no == 0 assert op.pycode is f.func_code assert repr(op) == 'repr' assert op.jitdriver_name == 'pypyjit' assert op.num == self.dmp_num - op = DebugMergePoint([Box(0)], 'repr', 'notmain', ('str',)) + assert op.call_depth == 2 + op = DebugMergePoint([Box(0)], 'repr', 'notmain', 5, ('str',)) raises(AttributeError, 'op.pycode') + assert op.call_depth == 5 From noreply at buildbot.pypy.org Fri Feb 3 04:45:29 2012 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Fri, 3 Feb 2012 04:45:29 +0100 (CET) Subject: [pypy-commit] pypy default: Skip this test on appdirect Message-ID: <20120203034529.190B78203C@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: Changeset: r52048:e254dd2774a7 Date: 2012-02-02 22:45 -0500 http://bitbucket.org/pypy/pypy/changeset/e254dd2774a7/ Log: Skip this test on appdirect diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -1,11 +1,13 @@ import py + +from pypy.conftest import gettestobjspace, option +from pypy.interpreter.error import OperationError +from pypy.module.micronumpy import signature +from pypy.module.micronumpy.appbridge import get_appbridge_cache +from pypy.module.micronumpy.interp_iter import Chunk +from pypy.module.micronumpy.interp_numarray import W_NDimArray, shape_agreement from pypy.module.micronumpy.test.test_base import BaseNumpyAppTest -from pypy.module.micronumpy.interp_numarray import W_NDimArray, shape_agreement -from pypy.module.micronumpy.interp_iter import Chunk -from pypy.module.micronumpy import signature -from pypy.interpreter.error import OperationError 
-from pypy.conftest import gettestobjspace class MockDtype(object): @@ -1759,10 +1761,11 @@ assert len(a) == 8 assert arange(False, True, True).dtype is dtype(int) -from pypy.module.micronumpy.appbridge import get_appbridge_cache class AppTestRepr(BaseNumpyAppTest): def setup_class(cls): + if option.runappdirect: + py.test.skip("Can't be run directly.") BaseNumpyAppTest.setup_class.im_func(cls) cache = get_appbridge_cache(cls.space) cls.old_array_repr = cache.w_array_repr @@ -1776,6 +1779,8 @@ assert str(array([1, 2, 3])) == 'array([1, 2, 3])' def teardown_class(cls): + if option.runappdirect: + return cache = get_appbridge_cache(cls.space) cache.w_array_repr = cls.old_array_repr cache.w_array_str = cls.old_array_str From noreply at buildbot.pypy.org Fri Feb 3 09:53:13 2012 From: noreply at buildbot.pypy.org (fijal) Date: Fri, 3 Feb 2012 09:53:13 +0100 (CET) Subject: [pypy-commit] pypy numpy-single-jitdriver: expose logical ops, not lazy yet. Message-ID: <20120203085313.4A34C8203C@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-single-jitdriver Changeset: r52049:0d51aa842503 Date: 2012-02-03 10:52 +0200 http://bitbucket.org/pypy/pypy/changeset/0d51aa842503/ Log: expose logical ops, not lazy yet. diff --git a/pypy/module/micronumpy/__init__.py b/pypy/module/micronumpy/__init__.py --- a/pypy/module/micronumpy/__init__.py +++ b/pypy/module/micronumpy/__init__.py @@ -98,6 +98,10 @@ ('bitwise_not', 'invert'), ('isnan', 'isnan'), ('isinf', 'isinf'), + ('logical_and', 'logical_and'), + ('logical_xor', 'logical_xor'), + ('logical_not', 'logical_not'), + ('logical_or', 'logical_or'), ]: interpleveldefs[exposed] = "interp_ufuncs.get(space).%s" % impl diff --git a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py --- a/pypy/module/micronumpy/interp_ufuncs.py +++ b/pypy/module/micronumpy/interp_ufuncs.py @@ -404,6 +404,11 @@ ("isnan", "isnan", 1, {"bool_result": True}), ("isinf", "isinf", 1, {"bool_result": True}), + ('logical_and', 'logical_and', 2, {'comparison_func': True}), + ('logical_or', 'logical_or', 2, {'comparison_func': True}), + ('logical_xor', 'logical_xor', 2, {'comparison_func': True}), + ('logical_not', 'logical_not', 1, {'bool_result': True}), + ("maximum", "max", 2), ("minimum", "min", 2), diff --git a/pypy/module/micronumpy/test/test_ufuncs.py b/pypy/module/micronumpy/test/test_ufuncs.py --- a/pypy/module/micronumpy/test/test_ufuncs.py +++ b/pypy/module/micronumpy/test/test_ufuncs.py @@ -433,3 +433,14 @@ assert (isnan(array([0.2, float('inf'), float('nan')])) == [False, False, True]).all() assert (isinf(array([0.2, float('inf'), float('nan')])) == [False, True, False]).all() assert isinf(array([0.2])).dtype.kind == 'b' + + def test_logical_ops(self): + from _numpypy import logical_and, logical_or, logical_xor, logical_not + + assert (logical_and([True, False , True, True], [1, 1, 3, 0]) + == [True, False, True, False]).all() + assert (logical_or([True, False, True, False], [1, 2, 0, 0]) + == [True, True, True, False]).all() + assert (logical_xor([True, False, True, False], [1, 2, 0, 0]) + == [False, True, True, False]).all() + assert (logical_not([True, False]) == [False, True]).all() diff --git a/pypy/module/micronumpy/types.py b/pypy/module/micronumpy/types.py --- a/pypy/module/micronumpy/types.py +++ b/pypy/module/micronumpy/types.py @@ -181,6 +181,22 @@ def ge(self, v1, v2): return v1 >= v2 + @raw_binary_op + def logical_and(self, v1, v2): + return bool(v1) and bool(v2) + + @raw_binary_op + def logical_or(self, v1, v2): + 
return bool(v1) or bool(v2) + + @raw_unary_op + def logical_not(self, v): + return not bool(v) + + @raw_binary_op + def logical_xor(self, v1, v2): + return bool(v1) ^ bool(v2) + def bool(self, v): return bool(self.for_computation(self.unbox(v))) From noreply at buildbot.pypy.org Fri Feb 3 11:03:30 2012 From: noreply at buildbot.pypy.org (fijal) Date: Fri, 3 Feb 2012 11:03:30 +0100 (CET) Subject: [pypy-commit] pypy numpy-single-jitdriver: some fixes Message-ID: <20120203100330.E5F6682B20@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-single-jitdriver Changeset: r52050:16f093b4c8be Date: 2012-02-03 12:03 +0200 http://bitbucket.org/pypy/pypy/changeset/16f093b4c8be/ Log: some fixes diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -3,7 +3,7 @@ from pypy.interpreter.gateway import interp2app, unwrap_spec from pypy.interpreter.typedef import TypeDef, GetSetProperty from pypy.module.micronumpy import (interp_ufuncs, interp_dtype, interp_boxes, - signature, support) + signature, support, loop) from pypy.module.micronumpy.strides import (calculate_slice_strides, shape_agreement, find_shape_and_elems, get_shape_from_iterable, calc_new_strides, to_coords) @@ -17,20 +17,6 @@ from pypy.module.micronumpy.appbridge import get_appbridge_cache -all_driver = jit.JitDriver( - greens=['shapelen', 'sig'], - virtualizables=['frame'], - reds=['frame', 'self', 'dtype'], - get_printable_location=signature.new_printable_location('all'), - name='numpy_all', -) -any_driver = jit.JitDriver( - greens=['shapelen', 'sig'], - virtualizables=['frame'], - reds=['frame', 'self', 'dtype'], - get_printable_location=signature.new_printable_location('any'), - name='numpy_any', -) slice_driver = jit.JitDriver( greens=['shapelen', 'sig'], virtualizables=['frame'], @@ -166,6 +152,8 @@ descr_prod = _reduce_ufunc_impl("multiply", True) descr_max = _reduce_ufunc_impl("maximum") descr_min = _reduce_ufunc_impl("minimum") + descr_all = _reduce_ufunc_impl('logical_and') + descr_any = _reduce_ufunc_impl('logical_or') def _reduce_argmax_argmin_impl(op_name): reduce_driver = jit.JitDriver( @@ -205,40 +193,6 @@ return space.wrap(loop(self)) return func_with_new_name(impl, "reduce_arg%s_impl" % op_name) - def _all(self): - dtype = self.find_dtype() - sig = self.find_sig() - frame = sig.create_frame(self) - shapelen = len(self.shape) - while not frame.done(): - all_driver.jit_merge_point(sig=sig, - shapelen=shapelen, self=self, - dtype=dtype, frame=frame) - if not dtype.itemtype.bool(sig.eval(frame, self)): - return False - frame.next(shapelen) - return True - - def descr_all(self, space): - return space.wrap(self._all()) - - def _any(self): - dtype = self.find_dtype() - sig = self.find_sig() - frame = sig.create_frame(self) - shapelen = len(self.shape) - while not frame.done(): - any_driver.jit_merge_point(sig=sig, frame=frame, - shapelen=shapelen, self=self, - dtype=dtype) - if dtype.itemtype.bool(sig.eval(frame, self)): - return True - frame.next(shapelen) - return False - - def descr_any(self, space): - return space.wrap(self._any()) - descr_argmax = _reduce_argmax_argmin_impl("max") descr_argmin = _reduce_argmax_argmin_impl("min") @@ -746,7 +700,6 @@ raise NotImplementedError def compute(self): - from pypy.module.micronumpy import loop ra = ResultArray(self, self.size, self.shape, self.res_dtype) loop.compute(ra) return ra.left @@ -859,6 +812,12 @@ return 
signature.ResultSignature(self.res_dtype, self.left.create_sig(), self.right.create_sig()) +def done_if_true(dtype, val): + return dtype.itemtype.bool(val) + +def done_if_false(dtype, val): + return not dtype.itemtype.bool(val) + class ReduceArray(Call2): def __init__(self, func, name, identity, child, dtype): self.identity = identity @@ -874,9 +833,15 @@ frame.cur_value = self.identity.convert_to(self.calc_dtype) def create_sig(self): + if self.name == 'logical_and': + done_func = done_if_false + elif self.name == 'logical_or': + done_func = done_if_true + else: + done_func = None return signature.ReduceSignature(self.ufunc, self.name, self.res_dtype, signature.ScalarSignature(self.res_dtype), - self.right.create_sig()) + self.right.create_sig(), done_func) class AxisReduce(Call2): _immutable_fields_ = ['left', 'right'] diff --git a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py --- a/pypy/module/micronumpy/interp_ufuncs.py +++ b/pypy/module/micronumpy/interp_ufuncs.py @@ -2,7 +2,7 @@ from pypy.interpreter.error import OperationError, operationerrfmt from pypy.interpreter.gateway import interp2app, unwrap_spec, NoneNotWrapped from pypy.interpreter.typedef import TypeDef, GetSetProperty, interp_attrproperty -from pypy.module.micronumpy import interp_boxes, interp_dtype, support +from pypy.module.micronumpy import interp_boxes, interp_dtype, support, loop from pypy.rlib.rarithmetic import LONG_BIT from pypy.tool.sourcetools import func_with_new_name @@ -120,8 +120,6 @@ keepdims=False): from pypy.module.micronumpy.interp_numarray import convert_to_array, \ Scalar, ReduceArray - from pypy.module.micronumpy import loop - if self.argcount != 2: raise OperationError(space.w_ValueError, space.wrap("reduce only " "supported for binary functions")) @@ -132,14 +130,16 @@ if isinstance(obj, Scalar): raise OperationError(space.w_TypeError, space.wrap("cannot reduce " "on a scalar")) - size = obj.size - dtype = find_unaryop_result_dtype( - space, obj.find_dtype(), - promote_to_float=self.promote_to_float, - promote_to_largest=promote_to_largest, - promote_bools=True - ) + if self.comparison_func: + dtype = interp_dtype.get_dtype_cache(space).w_booldtype + else: + dtype = find_unaryop_result_dtype( + space, obj.find_dtype(), + promote_to_float=self.promote_to_float, + promote_to_largest=promote_to_largest, + promote_bools=True + ) shapelen = len(obj.shape) if self.identity is None and size == 0: raise operationerrfmt(space.w_ValueError, "zero-size array to " @@ -152,8 +152,6 @@ def do_axis_reduce(self, obj, dtype, dim, keepdims): from pypy.module.micronumpy.interp_numarray import AxisReduce,\ W_NDimArray - from pypy.module.micronumpy import loop - if keepdims: shape = obj.shape[:dim] + [1] + obj.shape[dim + 1:] else: @@ -234,7 +232,6 @@ w_lhs.value.convert_to(calc_dtype), w_rhs.value.convert_to(calc_dtype) )) - new_shape = shape_agreement(space, w_lhs.shape, w_rhs.shape) w_res = Call2(self.func, self.name, new_shape, calc_dtype, @@ -404,8 +401,10 @@ ("isnan", "isnan", 1, {"bool_result": True}), ("isinf", "isinf", 1, {"bool_result": True}), - ('logical_and', 'logical_and', 2, {'comparison_func': True}), - ('logical_or', 'logical_or', 2, {'comparison_func': True}), + ('logical_and', 'logical_and', 2, {'comparison_func': True, + 'identity': 1}), + ('logical_or', 'logical_or', 2, {'comparison_func': True, + 'identity': 0}), ('logical_xor', 'logical_xor', 2, {'comparison_func': True}), ('logical_not', 'logical_not', 1, {'bool_result': True}), diff --git 
a/pypy/module/micronumpy/loop.py b/pypy/module/micronumpy/loop.py --- a/pypy/module/micronumpy/loop.py +++ b/pypy/module/micronumpy/loop.py @@ -4,7 +4,6 @@ """ from pypy.rlib.jit import JitDriver, hint, unroll_safe, promote -from pypy.module.micronumpy import signature from pypy.module.micronumpy.interp_iter import ConstantIterator class NumpyEvalFrame(object): @@ -60,18 +59,25 @@ greens=['shapelen', 'sig'], virtualizables=['frame'], reds=['frame', 'arr'], - get_printable_location=signature.new_printable_location('numpy'), + get_printable_location=get_printable_location, name='numpy', ) +class ComputationDone(Exception): + def __init__(self, value): + self.value = value + def compute(arr): sig = arr.find_sig() shapelen = len(arr.shape) frame = sig.create_frame(arr) - while not frame.done(): - numpy_driver.jit_merge_point(sig=sig, - shapelen=shapelen, - frame=frame, arr=arr) - sig.eval(frame, arr) - frame.next(shapelen) - return frame.cur_value + try: + while not frame.done(): + numpy_driver.jit_merge_point(sig=sig, + shapelen=shapelen, + frame=frame, arr=arr) + sig.eval(frame, arr) + frame.next(shapelen) + return frame.cur_value + except ComputationDone, e: + return e.value diff --git a/pypy/module/micronumpy/signature.py b/pypy/module/micronumpy/signature.py --- a/pypy/module/micronumpy/signature.py +++ b/pypy/module/micronumpy/signature.py @@ -3,6 +3,7 @@ from pypy.module.micronumpy.interp_iter import ConstantIterator, AxisIterator,\ ViewTransform, BroadcastTransform from pypy.tool.pairtype import extendabletype +from pypy.module.micronumpy.loop import ComputationDone """ Signature specifies both the numpy expression that has been constructed and the assembler to be compiled. This is a very important observation - @@ -358,10 +359,20 @@ self.right._create_iter(iterlist, arraylist, arr.right, rtransforms) class ReduceSignature(Call2): + _immutable_fields_ = ['binfunc', 'name', 'calc_dtype', + 'left', 'right', 'done_func'] + + def __init__(self, func, name, calc_dtype, left, right, + done_func): + Call2.__init__(self, func, name, calc_dtype, left, right) + self.done_func = done_func + def eval(self, frame, arr): from pypy.module.micronumpy.interp_numarray import ReduceArray assert isinstance(arr, ReduceArray) rval = self.right.eval(frame, arr.right).convert_to(self.calc_dtype) + if self.done_func is not None and self.done_func(self.calc_dtype, rval): + raise ComputationDone(rval) frame.cur_value = self.binfunc(self.calc_dtype, frame.cur_value, rval) def debug_repr(self): diff --git a/pypy/module/micronumpy/test/test_ufuncs.py b/pypy/module/micronumpy/test/test_ufuncs.py --- a/pypy/module/micronumpy/test/test_ufuncs.py +++ b/pypy/module/micronumpy/test/test_ufuncs.py @@ -347,8 +347,9 @@ raises((ValueError, TypeError), add.reduce, 1) def test_reduce_1d(self): - from _numpypy import add, maximum + from _numpypy import add, maximum, less + assert less.reduce([5, 4, 3, 2, 1]) assert add.reduce([1, 2, 3]) == 6 assert maximum.reduce([1]) == 1 assert maximum.reduce([1, 2, 3]) == 3 From noreply at buildbot.pypy.org Fri Feb 3 11:14:28 2012 From: noreply at buildbot.pypy.org (fijal) Date: Fri, 3 Feb 2012 11:14:28 +0100 (CET) Subject: [pypy-commit] pypy numpy-single-jitdriver: fix the test, I think it makes sense Message-ID: <20120203101428.8517882B20@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-single-jitdriver Changeset: r52051:7a9d0664b7f2 Date: 2012-02-03 12:14 +0200 http://bitbucket.org/pypy/pypy/changeset/7a9d0664b7f2/ Log: fix the test, I think it makes sense diff --git 
a/pypy/module/micronumpy/test/test_zjit.py b/pypy/module/micronumpy/test/test_zjit.py --- a/pypy/module/micronumpy/test/test_zjit.py +++ b/pypy/module/micronumpy/test/test_zjit.py @@ -198,7 +198,8 @@ result = self.run("any") assert result == 1 self.check_simple_loop({"getinteriorfield_raw": 2, "float_add": 1, - "float_ne": 1, "int_add": 1, + "int_and": 1, "int_add": 1, + 'convert_float_to_int': 1, "int_ge": 1, "jump": 1, "guard_false": 2, 'arraylen_gc': 1}) From noreply at buildbot.pypy.org Fri Feb 3 11:38:46 2012 From: noreply at buildbot.pypy.org (fijal) Date: Fri, 3 Feb 2012 11:38:46 +0100 (CET) Subject: [pypy-commit] pypy numpy-single-jitdriver: boring :) Message-ID: <20120203103846.D667982B20@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-single-jitdriver Changeset: r52052:ce3a929dc501 Date: 2012-02-03 12:38 +0200 http://bitbucket.org/pypy/pypy/changeset/ce3a929dc501/ Log: boring :) diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -17,13 +17,6 @@ from pypy.module.micronumpy.appbridge import get_appbridge_cache -slice_driver = jit.JitDriver( - greens=['shapelen', 'sig'], - virtualizables=['frame'], - reds=['self', 'frame', 'arr'], - get_printable_location=signature.new_printable_location('slice'), - name='numpy_slice', -) count_driver = jit.JitDriver( greens=['shapelen'], virtualizables=['frame'], @@ -965,7 +958,7 @@ self._fast_setslice(space, w_value) else: arr = SliceArray(self.shape, self.dtype, self, w_value) - self._sliceloop(arr) + loop.compute(arr) def _fast_setslice(self, space, w_value): assert isinstance(w_value, ConcreteArray) @@ -989,17 +982,6 @@ source.next() dest.next() - def _sliceloop(self, arr): - sig = arr.find_sig() - frame = sig.create_frame(arr) - shapelen = len(self.shape) - while not frame.done(): - slice_driver.jit_merge_point(sig=sig, frame=frame, self=self, - arr=arr, - shapelen=shapelen) - sig.eval(frame, arr) - frame.next(shapelen) - def copy(self, space): array = W_NDimArray(self.size, self.shape[:], self.dtype, self.order) array.setslice(space, self) From noreply at buildbot.pypy.org Fri Feb 3 11:46:57 2012 From: noreply at buildbot.pypy.org (fijal) Date: Fri, 3 Feb 2012 11:46:57 +0100 (CET) Subject: [pypy-commit] pypy default: Merge numpy-single-jitdriver. This branch refactors the jitdrivers a bit so Message-ID: <20120203104657.6DB5382B20@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: Changeset: r52053:06c10732a5bd Date: 2012-02-03 12:45 +0200 http://bitbucket.org/pypy/pypy/changeset/06c10732a5bd/ Log: Merge numpy-single-jitdriver. This branch refactors the jitdrivers a bit so that we have (mostly) one important jitdriver. This would be useful for future optimizations like vectorization. The rest can be incorporated, but "later" if at all, since those are not amenable to "easy" vectorization.
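A minimal usage sketch of what this merge brings in, assuming a PyPy build whose _numpypy module includes the changes in the diff below (names taken from the diff; the session itself is illustrative, not part of the commit):

    # Element-wise logical ufuncs are now exported from _numpypy, and
    # any()/all() are implemented as reductions of logical_or/logical_and
    # that can stop early via done_if_true/done_if_false.
    from _numpypy import array, logical_and, logical_or, logical_not

    a = array([1.0, 0.0, 3.0])
    b = array([2.0, 2.0, 0.0])

    logical_and(a, b)   # elementwise: [True, False, False]
    logical_or(a, b)    # elementwise: [True, True, True]
    logical_not(a)      # elementwise: [False, True, False]

    # any()/all() now go through the same reduce machinery (and the single
    # jitdriver introduced in loop.py) as the other reductions, and they
    # exit as soon as the result is decided:
    a.any()             # True  -- stops at the first nonzero element
    a.all()             # False -- stops at the first zero element

The relevant hunks are in the merge diff that follows.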
diff --git a/pypy/module/micronumpy/__init__.py b/pypy/module/micronumpy/__init__.py --- a/pypy/module/micronumpy/__init__.py +++ b/pypy/module/micronumpy/__init__.py @@ -98,6 +98,10 @@ ('bitwise_not', 'invert'), ('isnan', 'isnan'), ('isinf', 'isinf'), + ('logical_and', 'logical_and'), + ('logical_xor', 'logical_xor'), + ('logical_not', 'logical_not'), + ('logical_or', 'logical_or'), ]: interpleveldefs[exposed] = "interp_ufuncs.get(space).%s" % impl diff --git a/pypy/module/micronumpy/interp_iter.py b/pypy/module/micronumpy/interp_iter.py --- a/pypy/module/micronumpy/interp_iter.py +++ b/pypy/module/micronumpy/interp_iter.py @@ -86,8 +86,9 @@ def apply_transformations(self, arr, transformations): v = self - for transform in transformations: - v = v.transform(arr, transform) + if transformations is not None: + for transform in transformations: + v = v.transform(arr, transform) return v def transform(self, arr, t): diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -3,7 +3,7 @@ from pypy.interpreter.gateway import interp2app, unwrap_spec from pypy.interpreter.typedef import TypeDef, GetSetProperty from pypy.module.micronumpy import (interp_ufuncs, interp_dtype, interp_boxes, - signature, support) + signature, support, loop) from pypy.module.micronumpy.strides import (calculate_slice_strides, shape_agreement, find_shape_and_elems, get_shape_from_iterable, calc_new_strides, to_coords) @@ -12,39 +12,11 @@ from pypy.rpython.lltypesystem import lltype, rffi from pypy.tool.sourcetools import func_with_new_name from pypy.rlib.rstring import StringBuilder -from pypy.module.micronumpy.interp_iter import (ArrayIterator, OneDimIterator, +from pypy.module.micronumpy.interp_iter import (ArrayIterator, SkipLastAxisIterator, Chunk, ViewIterator) from pypy.module.micronumpy.appbridge import get_appbridge_cache -numpy_driver = jit.JitDriver( - greens=['shapelen', 'sig'], - virtualizables=['frame'], - reds=['result_size', 'frame', 'ri', 'self', 'result'], - get_printable_location=signature.new_printable_location('numpy'), - name='numpy', -) -all_driver = jit.JitDriver( - greens=['shapelen', 'sig'], - virtualizables=['frame'], - reds=['frame', 'self', 'dtype'], - get_printable_location=signature.new_printable_location('all'), - name='numpy_all', -) -any_driver = jit.JitDriver( - greens=['shapelen', 'sig'], - virtualizables=['frame'], - reds=['frame', 'self', 'dtype'], - get_printable_location=signature.new_printable_location('any'), - name='numpy_any', -) -slice_driver = jit.JitDriver( - greens=['shapelen', 'sig'], - virtualizables=['frame'], - reds=['self', 'frame', 'arr'], - get_printable_location=signature.new_printable_location('slice'), - name='numpy_slice', -) count_driver = jit.JitDriver( greens=['shapelen'], virtualizables=['frame'], @@ -173,6 +145,8 @@ descr_prod = _reduce_ufunc_impl("multiply", True) descr_max = _reduce_ufunc_impl("maximum") descr_min = _reduce_ufunc_impl("minimum") + descr_all = _reduce_ufunc_impl('logical_and') + descr_any = _reduce_ufunc_impl('logical_or') def _reduce_argmax_argmin_impl(op_name): reduce_driver = jit.JitDriver( @@ -212,40 +186,6 @@ return space.wrap(loop(self)) return func_with_new_name(impl, "reduce_arg%s_impl" % op_name) - def _all(self): - dtype = self.find_dtype() - sig = self.find_sig() - frame = sig.create_frame(self) - shapelen = len(self.shape) - while not frame.done(): - all_driver.jit_merge_point(sig=sig, - 
shapelen=shapelen, self=self, - dtype=dtype, frame=frame) - if not dtype.itemtype.bool(sig.eval(frame, self)): - return False - frame.next(shapelen) - return True - - def descr_all(self, space): - return space.wrap(self._all()) - - def _any(self): - dtype = self.find_dtype() - sig = self.find_sig() - frame = sig.create_frame(self) - shapelen = len(self.shape) - while not frame.done(): - any_driver.jit_merge_point(sig=sig, frame=frame, - shapelen=shapelen, self=self, - dtype=dtype) - if dtype.itemtype.bool(sig.eval(frame, self)): - return True - frame.next(shapelen) - return False - - def descr_any(self, space): - return space.wrap(self._any()) - descr_argmax = _reduce_argmax_argmin_impl("max") descr_argmin = _reduce_argmax_argmin_impl("min") @@ -685,6 +625,9 @@ raise OperationError(space.w_NotImplementedError, space.wrap( "non-int arg not supported")) + def compute_first_step(self, sig, frame): + pass + def convert_to_array(space, w_obj): if isinstance(w_obj, BaseArray): return w_obj @@ -750,22 +693,9 @@ raise NotImplementedError def compute(self): - result = W_NDimArray(self.size, self.shape, self.find_dtype()) - shapelen = len(self.shape) - sig = self.find_sig() - frame = sig.create_frame(self) - ri = ArrayIterator(self.size) - while not ri.done(): - numpy_driver.jit_merge_point(sig=sig, - shapelen=shapelen, - result_size=self.size, - frame=frame, - ri=ri, - self=self, result=result) - result.setitem(ri.offset, sig.eval(frame, self)) - frame.next(shapelen) - ri = ri.next(shapelen) - return result + ra = ResultArray(self, self.size, self.shape, self.res_dtype) + loop.compute(ra) + return ra.left def force_if_needed(self): if self.forced_result is None: @@ -823,7 +753,8 @@ def create_sig(self): if self.forced_result is not None: return self.forced_result.create_sig() - return signature.Call1(self.ufunc, self.name, self.values.create_sig()) + return signature.Call1(self.ufunc, self.name, self.calc_dtype, + self.values.create_sig()) class Call2(VirtualArray): """ @@ -864,6 +795,66 @@ return signature.Call2(self.ufunc, self.name, self.calc_dtype, self.left.create_sig(), self.right.create_sig()) +class ResultArray(Call2): + def __init__(self, child, size, shape, dtype, res=None, order='C'): + if res is None: + res = W_NDimArray(size, shape, dtype, order) + Call2.__init__(self, None, 'assign', shape, dtype, dtype, res, child) + + def create_sig(self): + return signature.ResultSignature(self.res_dtype, self.left.create_sig(), + self.right.create_sig()) + +def done_if_true(dtype, val): + return dtype.itemtype.bool(val) + +def done_if_false(dtype, val): + return not dtype.itemtype.bool(val) + +class ReduceArray(Call2): + def __init__(self, func, name, identity, child, dtype): + self.identity = identity + Call2.__init__(self, func, name, [1], dtype, dtype, None, child) + + def compute_first_step(self, sig, frame): + assert isinstance(sig, signature.ReduceSignature) + if self.identity is None: + frame.cur_value = sig.right.eval(frame, self.right).convert_to( + self.calc_dtype) + frame.next(len(self.right.shape)) + else: + frame.cur_value = self.identity.convert_to(self.calc_dtype) + + def create_sig(self): + if self.name == 'logical_and': + done_func = done_if_false + elif self.name == 'logical_or': + done_func = done_if_true + else: + done_func = None + return signature.ReduceSignature(self.ufunc, self.name, self.res_dtype, + signature.ScalarSignature(self.res_dtype), + self.right.create_sig(), done_func) + +class AxisReduce(Call2): + _immutable_fields_ = ['left', 'right'] + + def __init__(self, 
ufunc, name, identity, shape, dtype, left, right, dim): + Call2.__init__(self, ufunc, name, shape, dtype, dtype, + left, right) + self.dim = dim + self.identity = identity + + def compute_first_step(self, sig, frame): + if self.identity is not None: + frame.identity = self.identity.convert_to(self.calc_dtype) + + def create_sig(self): + return signature.AxisReduceSignature(self.ufunc, self.name, + self.res_dtype, + signature.ScalarSignature(self.res_dtype), + self.right.create_sig()) + class SliceArray(Call2): def __init__(self, shape, dtype, left, right, no_broadcast=False): self.no_broadcast = no_broadcast @@ -882,18 +873,6 @@ self.calc_dtype, lsig, rsig) -class AxisReduce(Call2): - """ NOTE: this is only used as a container, you should never - encounter such things in the wild. Remove this comment - when we'll make AxisReduce lazy - """ - _immutable_fields_ = ['left', 'right'] - - def __init__(self, ufunc, name, shape, dtype, left, right, dim): - Call2.__init__(self, ufunc, name, shape, dtype, dtype, - left, right) - self.dim = dim - class ConcreteArray(BaseArray): """ An array that have actual storage, whether owned or not """ @@ -979,7 +958,7 @@ self._fast_setslice(space, w_value) else: arr = SliceArray(self.shape, self.dtype, self, w_value) - self._sliceloop(arr) + loop.compute(arr) def _fast_setslice(self, space, w_value): assert isinstance(w_value, ConcreteArray) @@ -1003,17 +982,6 @@ source.next() dest.next() - def _sliceloop(self, arr): - sig = arr.find_sig() - frame = sig.create_frame(arr) - shapelen = len(self.shape) - while not frame.done(): - slice_driver.jit_merge_point(sig=sig, frame=frame, self=self, - arr=arr, - shapelen=shapelen) - sig.eval(frame, arr) - frame.next(shapelen) - def copy(self, space): array = W_NDimArray(self.size, self.shape[:], self.dtype, self.order) array.setslice(space, self) @@ -1039,9 +1007,9 @@ parent.order, parent) self.start = start - def create_iter(self): + def create_iter(self, transforms=None): return ViewIterator(self.start, self.strides, self.backstrides, - self.shape) + self.shape).apply_transformations(self, transforms) def setshape(self, space, new_shape): if len(self.shape) < 1: @@ -1090,8 +1058,8 @@ self.shape = new_shape self.calc_strides(new_shape) - def create_iter(self): - return ArrayIterator(self.size) + def create_iter(self, transforms=None): + return ArrayIterator(self.size).apply_transformations(self, transforms) def create_sig(self): return signature.ArraySignature(self.dtype) @@ -1427,6 +1395,12 @@ def create_sig(self): return signature.FlatSignature(self.base.dtype) + def create_iter(self, transforms=None): + return ViewIterator(self.base.start, self.base.strides, + self.base.backstrides, + self.base.shape).apply_transformations(self.base, + transforms) + def descr_base(self, space): return space.wrap(self.base) diff --git a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py --- a/pypy/module/micronumpy/interp_ufuncs.py +++ b/pypy/module/micronumpy/interp_ufuncs.py @@ -2,31 +2,10 @@ from pypy.interpreter.error import OperationError, operationerrfmt from pypy.interpreter.gateway import interp2app, unwrap_spec, NoneNotWrapped from pypy.interpreter.typedef import TypeDef, GetSetProperty, interp_attrproperty -from pypy.module.micronumpy import interp_boxes, interp_dtype, support -from pypy.module.micronumpy.signature import (ReduceSignature, find_sig, - new_printable_location, AxisReduceSignature, ScalarSignature) -from pypy.rlib import jit +from pypy.module.micronumpy import interp_boxes, 
interp_dtype, support, loop from pypy.rlib.rarithmetic import LONG_BIT from pypy.tool.sourcetools import func_with_new_name - -reduce_driver = jit.JitDriver( - greens=['shapelen', "sig"], - virtualizables=["frame"], - reds=["frame", "self", "dtype", "value", "obj"], - get_printable_location=new_printable_location('reduce'), - name='numpy_reduce', -) - -axisreduce_driver = jit.JitDriver( - greens=['shapelen', 'sig'], - virtualizables=['frame'], - reds=['self','arr', 'identity', 'frame'], - name='numpy_axisreduce', - get_printable_location=new_printable_location('axisreduce'), -) - - class W_Ufunc(Wrappable): _attrs_ = ["name", "promote_to_float", "promote_bools", "identity"] _immutable_fields_ = ["promote_to_float", "promote_bools", "name"] @@ -140,7 +119,7 @@ def reduce(self, space, w_obj, multidim, promote_to_largest, dim, keepdims=False): from pypy.module.micronumpy.interp_numarray import convert_to_array, \ - Scalar + Scalar, ReduceArray if self.argcount != 2: raise OperationError(space.w_ValueError, space.wrap("reduce only " "supported for binary functions")) @@ -151,96 +130,37 @@ if isinstance(obj, Scalar): raise OperationError(space.w_TypeError, space.wrap("cannot reduce " "on a scalar")) - size = obj.size - dtype = find_unaryop_result_dtype( - space, obj.find_dtype(), - promote_to_float=self.promote_to_float, - promote_to_largest=promote_to_largest, - promote_bools=True - ) + if self.comparison_func: + dtype = interp_dtype.get_dtype_cache(space).w_booldtype + else: + dtype = find_unaryop_result_dtype( + space, obj.find_dtype(), + promote_to_float=self.promote_to_float, + promote_to_largest=promote_to_largest, + promote_bools=True + ) shapelen = len(obj.shape) if self.identity is None and size == 0: raise operationerrfmt(space.w_ValueError, "zero-size array to " "%s.reduce without identity", self.name) if shapelen > 1 and dim >= 0: - res = self.do_axis_reduce(obj, dtype, dim, keepdims) - return space.wrap(res) - scalarsig = ScalarSignature(dtype) - sig = find_sig(ReduceSignature(self.func, self.name, dtype, - scalarsig, - obj.create_sig()), obj) - frame = sig.create_frame(obj) - if self.identity is None: - value = sig.eval(frame, obj).convert_to(dtype) - frame.next(shapelen) - else: - value = self.identity.convert_to(dtype) - return self.reduce_loop(shapelen, sig, frame, value, obj, dtype) + return self.do_axis_reduce(obj, dtype, dim, keepdims) + arr = ReduceArray(self.func, self.name, self.identity, obj, dtype) + return loop.compute(arr) def do_axis_reduce(self, obj, dtype, dim, keepdims): from pypy.module.micronumpy.interp_numarray import AxisReduce,\ W_NDimArray - if keepdims: shape = obj.shape[:dim] + [1] + obj.shape[dim + 1:] else: shape = obj.shape[:dim] + obj.shape[dim + 1:] result = W_NDimArray(support.product(shape), shape, dtype) - rightsig = obj.create_sig() - # note - this is just a wrapper so signature can fetch - # both left and right, nothing more, especially - # this is not a true virtual array, because shapes - # don't quite match - arr = AxisReduce(self.func, self.name, obj.shape, dtype, + arr = AxisReduce(self.func, self.name, self.identity, obj.shape, dtype, result, obj, dim) - scalarsig = ScalarSignature(dtype) - sig = find_sig(AxisReduceSignature(self.func, self.name, dtype, - scalarsig, rightsig), arr) - assert isinstance(sig, AxisReduceSignature) - frame = sig.create_frame(arr) - shapelen = len(obj.shape) - if self.identity is not None: - identity = self.identity.convert_to(dtype) - else: - identity = None - self.reduce_axis_loop(frame, sig, shapelen, arr, 
identity) - return result - - def reduce_axis_loop(self, frame, sig, shapelen, arr, identity): - # note - we can be advanterous here, depending on the exact field - # layout. For now let's say we iterate the original way and - # simply follow the original iteration order - while not frame.done(): - axisreduce_driver.jit_merge_point(frame=frame, self=self, - sig=sig, - identity=identity, - shapelen=shapelen, arr=arr) - iterator = frame.get_final_iter() - v = sig.eval(frame, arr).convert_to(sig.calc_dtype) - if iterator.first_line: - if identity is not None: - value = self.func(sig.calc_dtype, identity, v) - else: - value = v - else: - cur = arr.left.getitem(iterator.offset) - value = self.func(sig.calc_dtype, cur, v) - arr.left.setitem(iterator.offset, value) - frame.next(shapelen) - - def reduce_loop(self, shapelen, sig, frame, value, obj, dtype): - while not frame.done(): - reduce_driver.jit_merge_point(sig=sig, - shapelen=shapelen, self=self, - value=value, obj=obj, frame=frame, - dtype=dtype) - assert isinstance(sig, ReduceSignature) - value = sig.binfunc(dtype, value, - sig.eval(frame, obj).convert_to(dtype)) - frame.next(shapelen) - return value - + loop.compute(arr) + return arr.left class W_Ufunc1(W_Ufunc): argcount = 1 @@ -312,7 +232,6 @@ w_lhs.value.convert_to(calc_dtype), w_rhs.value.convert_to(calc_dtype) )) - new_shape = shape_agreement(space, w_lhs.shape, w_rhs.shape) w_res = Call2(self.func, self.name, new_shape, calc_dtype, @@ -482,6 +401,13 @@ ("isnan", "isnan", 1, {"bool_result": True}), ("isinf", "isinf", 1, {"bool_result": True}), + ('logical_and', 'logical_and', 2, {'comparison_func': True, + 'identity': 1}), + ('logical_or', 'logical_or', 2, {'comparison_func': True, + 'identity': 0}), + ('logical_xor', 'logical_xor', 2, {'comparison_func': True}), + ('logical_not', 'logical_not', 1, {'bool_result': True}), + ("maximum", "max", 2), ("minimum", "min", 2), diff --git a/pypy/module/micronumpy/loop.py b/pypy/module/micronumpy/loop.py new file mode 100644 --- /dev/null +++ b/pypy/module/micronumpy/loop.py @@ -0,0 +1,83 @@ + +""" This file is the main run loop as well as evaluation loops for various +signatures +""" + +from pypy.rlib.jit import JitDriver, hint, unroll_safe, promote +from pypy.module.micronumpy.interp_iter import ConstantIterator + +class NumpyEvalFrame(object): + _virtualizable2_ = ['iterators[*]', 'final_iter', 'arraylist[*]', + 'value', 'identity', 'cur_value'] + + @unroll_safe + def __init__(self, iterators, arrays): + self = hint(self, access_directly=True, fresh_virtualizable=True) + self.iterators = iterators[:] + self.arrays = arrays[:] + for i in range(len(self.iterators)): + iter = self.iterators[i] + if not isinstance(iter, ConstantIterator): + self.final_iter = i + break + else: + self.final_iter = -1 + self.cur_value = None + self.identity = None + + def done(self): + final_iter = promote(self.final_iter) + if final_iter < 0: + assert False + return self.iterators[final_iter].done() + + @unroll_safe + def next(self, shapelen): + for i in range(len(self.iterators)): + self.iterators[i] = self.iterators[i].next(shapelen) + + @unroll_safe + def next_from_second(self, shapelen): + """ Don't increase the first iterator + """ + for i in range(1, len(self.iterators)): + self.iterators[i] = self.iterators[i].next(shapelen) + + def next_first(self, shapelen): + self.iterators[0] = self.iterators[0].next(shapelen) + + def get_final_iter(self): + final_iter = promote(self.final_iter) + if final_iter < 0: + assert False + return self.iterators[final_iter] + 
+def get_printable_location(shapelen, sig): + return 'numpy ' + sig.debug_repr() + ' [%d dims]' % (shapelen,) + +numpy_driver = JitDriver( + greens=['shapelen', 'sig'], + virtualizables=['frame'], + reds=['frame', 'arr'], + get_printable_location=get_printable_location, + name='numpy', +) + +class ComputationDone(Exception): + def __init__(self, value): + self.value = value + +def compute(arr): + sig = arr.find_sig() + shapelen = len(arr.shape) + frame = sig.create_frame(arr) + try: + while not frame.done(): + numpy_driver.jit_merge_point(sig=sig, + shapelen=shapelen, + frame=frame, arr=arr) + sig.eval(frame, arr) + frame.next(shapelen) + return frame.cur_value + except ComputationDone, e: + return e.value diff --git a/pypy/module/micronumpy/signature.py b/pypy/module/micronumpy/signature.py --- a/pypy/module/micronumpy/signature.py +++ b/pypy/module/micronumpy/signature.py @@ -1,9 +1,9 @@ from pypy.rlib.objectmodel import r_dict, compute_identity_hash, compute_hash from pypy.rlib.rarithmetic import intmask -from pypy.module.micronumpy.interp_iter import ViewIterator, ArrayIterator, \ - ConstantIterator, AxisIterator, ViewTransform,\ - BroadcastTransform -from pypy.rlib.jit import hint, unroll_safe, promote +from pypy.module.micronumpy.interp_iter import ConstantIterator, AxisIterator,\ + ViewTransform, BroadcastTransform +from pypy.tool.pairtype import extendabletype +from pypy.module.micronumpy.loop import ComputationDone """ Signature specifies both the numpy expression that has been constructed and the assembler to be compiled. This is a very important observation - @@ -54,50 +54,6 @@ known_sigs[sig] = sig return sig -class NumpyEvalFrame(object): - _virtualizable2_ = ['iterators[*]', 'final_iter', 'arraylist[*]', - 'value', 'identity'] - - @unroll_safe - def __init__(self, iterators, arrays): - self = hint(self, access_directly=True, fresh_virtualizable=True) - self.iterators = iterators[:] - self.arrays = arrays[:] - for i in range(len(self.iterators)): - iter = self.iterators[i] - if not isinstance(iter, ConstantIterator): - self.final_iter = i - break - else: - self.final_iter = -1 - - def done(self): - final_iter = promote(self.final_iter) - if final_iter < 0: - assert False - return self.iterators[final_iter].done() - - @unroll_safe - def next(self, shapelen): - for i in range(len(self.iterators)): - self.iterators[i] = self.iterators[i].next(shapelen) - - @unroll_safe - def next_from_second(self, shapelen): - """ Don't increase the first iterator - """ - for i in range(1, len(self.iterators)): - self.iterators[i] = self.iterators[i].next(shapelen) - - def next_first(self, shapelen): - self.iterators[0] = self.iterators[0].next(shapelen) - - def get_final_iter(self): - final_iter = promote(self.final_iter) - if final_iter < 0: - assert False - return self.iterators[final_iter] - def _add_ptr_to_cache(ptr, cache): i = 0 for p in cache: @@ -113,6 +69,8 @@ return r_dict(sigeq_no_numbering, sighash) class Signature(object): + __metaclass_ = extendabletype + _attrs_ = ['iter_no', 'array_no'] _immutable_fields_ = ['iter_no', 'array_no'] @@ -138,11 +96,15 @@ self.iter_no = no def create_frame(self, arr): + from pypy.module.micronumpy.loop import NumpyEvalFrame + iterlist = [] arraylist = [] self._create_iter(iterlist, arraylist, arr, []) - return NumpyEvalFrame(iterlist, arraylist) - + f = NumpyEvalFrame(iterlist, arraylist) + # hook for cur_value being used by reduce + arr.compute_first_step(self, f) + return f class ConcreteSignature(Signature): _immutable_fields_ = ['dtype'] @@ 
-182,13 +144,10 @@ assert isinstance(concr, ConcreteArray) storage = concr.storage if self.iter_no >= len(iterlist): - iterlist.append(self.allocate_iter(concr, transforms)) + iterlist.append(concr.create_iter(transforms)) if self.array_no >= len(arraylist): arraylist.append(storage) - def allocate_iter(self, arr, transforms): - return ArrayIterator(arr.size).apply_transformations(arr, transforms) - def eval(self, frame, arr): iter = frame.iterators[self.iter_no] return self.dtype.getitem(frame.arrays[self.array_no], iter.offset) @@ -220,22 +179,10 @@ allnumbers.append(no) self.iter_no = no - def allocate_iter(self, arr, transforms): - return ViewIterator(arr.start, arr.strides, arr.backstrides, - arr.shape).apply_transformations(arr, transforms) - class FlatSignature(ViewSignature): def debug_repr(self): return 'Flat' - def allocate_iter(self, arr, transforms): - from pypy.module.micronumpy.interp_numarray import W_FlatIterator - assert isinstance(arr, W_FlatIterator) - return ViewIterator(arr.base.start, arr.base.strides, - arr.base.backstrides, - arr.base.shape).apply_transformations(arr.base, - transforms) - class VirtualSliceSignature(Signature): def __init__(self, child): self.child = child @@ -269,12 +216,13 @@ return self.child.eval(frame, arr.child) class Call1(Signature): - _immutable_fields_ = ['unfunc', 'name', 'child'] + _immutable_fields_ = ['unfunc', 'name', 'child', 'dtype'] - def __init__(self, func, name, child): + def __init__(self, func, name, dtype, child): self.unfunc = func self.child = child self.name = name + self.dtype = dtype def hash(self): return compute_hash(self.name) ^ intmask(self.child.hash() << 1) @@ -359,6 +307,17 @@ return 'Call2(%s, %s, %s)' % (self.name, self.left.debug_repr(), self.right.debug_repr()) +class ResultSignature(Call2): + def __init__(self, dtype, left, right): + Call2.__init__(self, None, 'assign', dtype, left, right) + + def eval(self, frame, arr): + from pypy.module.micronumpy.interp_numarray import ResultArray + + assert isinstance(arr, ResultArray) + offset = frame.get_final_iter().offset + arr.left.setitem(offset, self.right.eval(frame, arr.right)) + class BroadcastLeft(Call2): def _invent_numbering(self, cache, allnumbers): self.left._invent_numbering(new_cache(), allnumbers) @@ -400,20 +359,24 @@ self.right._create_iter(iterlist, arraylist, arr.right, rtransforms) class ReduceSignature(Call2): - def _create_iter(self, iterlist, arraylist, arr, transforms): - self.right._create_iter(iterlist, arraylist, arr, transforms) - - def _invent_numbering(self, cache, allnumbers): - self.right._invent_numbering(cache, allnumbers) - - def _invent_array_numbering(self, arr, cache): - self.right._invent_array_numbering(arr, cache) - + _immutable_fields_ = ['binfunc', 'name', 'calc_dtype', + 'left', 'right', 'done_func'] + + def __init__(self, func, name, calc_dtype, left, right, + done_func): + Call2.__init__(self, func, name, calc_dtype, left, right) + self.done_func = done_func + def eval(self, frame, arr): - return self.right.eval(frame, arr) + from pypy.module.micronumpy.interp_numarray import ReduceArray + assert isinstance(arr, ReduceArray) + rval = self.right.eval(frame, arr.right).convert_to(self.calc_dtype) + if self.done_func is not None and self.done_func(self.calc_dtype, rval): + raise ComputationDone(rval) + frame.cur_value = self.binfunc(self.calc_dtype, frame.cur_value, rval) def debug_repr(self): - return 'ReduceSig(%s, %s)' % (self.name, self.right.debug_repr()) + return 'ReduceSig(%s)' % (self.name, self.right.debug_repr()) 
class SliceloopSignature(Call2): def eval(self, frame, arr): @@ -467,7 +430,17 @@ from pypy.module.micronumpy.interp_numarray import AxisReduce assert isinstance(arr, AxisReduce) - return self.right.eval(frame, arr.right).convert_to(self.calc_dtype) + iterator = frame.get_final_iter() + v = self.right.eval(frame, arr.right).convert_to(self.calc_dtype) + if iterator.first_line: + if frame.identity is not None: + value = self.binfunc(self.calc_dtype, frame.identity, v) + else: + value = v + else: + cur = arr.left.getitem(iterator.offset) + value = self.binfunc(self.calc_dtype, cur, v) + arr.left.setitem(iterator.offset, value) def debug_repr(self): return 'AxisReduceSig(%s, %s)' % (self.name, self.right.debug_repr()) diff --git a/pypy/module/micronumpy/test/test_ufuncs.py b/pypy/module/micronumpy/test/test_ufuncs.py --- a/pypy/module/micronumpy/test/test_ufuncs.py +++ b/pypy/module/micronumpy/test/test_ufuncs.py @@ -347,8 +347,9 @@ raises((ValueError, TypeError), add.reduce, 1) def test_reduce_1d(self): - from _numpypy import add, maximum + from _numpypy import add, maximum, less + assert less.reduce([5, 4, 3, 2, 1]) assert add.reduce([1, 2, 3]) == 6 assert maximum.reduce([1]) == 1 assert maximum.reduce([1, 2, 3]) == 3 @@ -433,3 +434,14 @@ assert (isnan(array([0.2, float('inf'), float('nan')])) == [False, False, True]).all() assert (isinf(array([0.2, float('inf'), float('nan')])) == [False, True, False]).all() assert isinf(array([0.2])).dtype.kind == 'b' + + def test_logical_ops(self): + from _numpypy import logical_and, logical_or, logical_xor, logical_not + + assert (logical_and([True, False , True, True], [1, 1, 3, 0]) + == [True, False, True, False]).all() + assert (logical_or([True, False, True, False], [1, 2, 0, 0]) + == [True, True, True, False]).all() + assert (logical_xor([True, False, True, False], [1, 2, 0, 0]) + == [False, True, True, False]).all() + assert (logical_not([True, False]) == [False, True]).all() diff --git a/pypy/module/micronumpy/test/test_zjit.py b/pypy/module/micronumpy/test/test_zjit.py --- a/pypy/module/micronumpy/test/test_zjit.py +++ b/pypy/module/micronumpy/test/test_zjit.py @@ -84,7 +84,7 @@ def test_add(self): result = self.run("add") self.check_simple_loop({'getinteriorfield_raw': 2, 'float_add': 1, - 'setinteriorfield_raw': 1, 'int_add': 2, + 'setinteriorfield_raw': 1, 'int_add': 1, 'int_ge': 1, 'guard_false': 1, 'jump': 1, 'arraylen_gc': 1}) assert result == 3 + 3 @@ -99,7 +99,7 @@ result = self.run("float_add") assert result == 3 + 3 self.check_simple_loop({"getinteriorfield_raw": 1, "float_add": 1, - "setinteriorfield_raw": 1, "int_add": 2, + "setinteriorfield_raw": 1, "int_add": 1, "int_ge": 1, "guard_false": 1, "jump": 1, 'arraylen_gc': 1}) @@ -198,7 +198,8 @@ result = self.run("any") assert result == 1 self.check_simple_loop({"getinteriorfield_raw": 2, "float_add": 1, - "float_ne": 1, "int_add": 1, + "int_and": 1, "int_add": 1, + 'convert_float_to_int': 1, "int_ge": 1, "jump": 1, "guard_false": 2, 'arraylen_gc': 1}) @@ -239,7 +240,7 @@ assert result == -6 self.check_simple_loop({"getinteriorfield_raw": 2, "float_add": 1, "float_neg": 1, - "setinteriorfield_raw": 1, "int_add": 2, + "setinteriorfield_raw": 1, "int_add": 1, "int_ge": 1, "guard_false": 1, "jump": 1, 'arraylen_gc': 1}) @@ -321,7 +322,7 @@ # int_add might be 1 here if we try slightly harder with # reusing indexes or some optimization self.check_simple_loop({'float_add': 1, 'getinteriorfield_raw': 2, - 'guard_false': 1, 'int_add': 2, 'int_ge': 1, + 'guard_false': 1, 'int_add': 1, 'int_ge': 
1, 'jump': 1, 'setinteriorfield_raw': 1, 'arraylen_gc': 1}) @@ -387,7 +388,7 @@ assert result == 4 self.check_trace_count(1) self.check_simple_loop({'getinteriorfield_raw': 2, 'float_add': 1, - 'setinteriorfield_raw': 1, 'int_add': 2, + 'setinteriorfield_raw': 1, 'int_add': 1, 'int_ge': 1, 'guard_false': 1, 'jump': 1, 'arraylen_gc': 1}) def define_flat_iter(): @@ -403,7 +404,7 @@ assert result == 6 self.check_trace_count(1) self.check_simple_loop({'getinteriorfield_raw': 2, 'float_add': 1, - 'setinteriorfield_raw': 1, 'int_add': 3, + 'setinteriorfield_raw': 1, 'int_add': 2, 'int_ge': 1, 'guard_false': 1, 'arraylen_gc': 1, 'jump': 1}) diff --git a/pypy/module/micronumpy/types.py b/pypy/module/micronumpy/types.py --- a/pypy/module/micronumpy/types.py +++ b/pypy/module/micronumpy/types.py @@ -181,6 +181,22 @@ def ge(self, v1, v2): return v1 >= v2 + @raw_binary_op + def logical_and(self, v1, v2): + return bool(v1) and bool(v2) + + @raw_binary_op + def logical_or(self, v1, v2): + return bool(v1) or bool(v2) + + @raw_unary_op + def logical_not(self, v): + return not bool(v) + + @raw_binary_op + def logical_xor(self, v1, v2): + return bool(v1) ^ bool(v2) + def bool(self, v): return bool(self.for_computation(self.unbox(v))) From noreply at buildbot.pypy.org Fri Feb 3 11:46:58 2012 From: noreply at buildbot.pypy.org (fijal) Date: Fri, 3 Feb 2012 11:46:58 +0100 (CET) Subject: [pypy-commit] pypy numpy-single-jitdriver: close merged branch Message-ID: <20120203104658.9A93A82B20@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-single-jitdriver Changeset: r52054:df5b775bc528 Date: 2012-02-03 12:45 +0200 http://bitbucket.org/pypy/pypy/changeset/df5b775bc528/ Log: close merged branch From noreply at buildbot.pypy.org Fri Feb 3 11:46:59 2012 From: noreply at buildbot.pypy.org (fijal) Date: Fri, 3 Feb 2012 11:46:59 +0100 (CET) Subject: [pypy-commit] pypy default: merge default Message-ID: <20120203104659.D866482B20@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: Changeset: r52055:e7d6f39ba721 Date: 2012-02-03 12:46 +0200 http://bitbucket.org/pypy/pypy/changeset/e7d6f39ba721/ Log: merge default diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -1,11 +1,13 @@ import py + +from pypy.conftest import gettestobjspace, option +from pypy.interpreter.error import OperationError +from pypy.module.micronumpy import signature +from pypy.module.micronumpy.appbridge import get_appbridge_cache +from pypy.module.micronumpy.interp_iter import Chunk +from pypy.module.micronumpy.interp_numarray import W_NDimArray, shape_agreement from pypy.module.micronumpy.test.test_base import BaseNumpyAppTest -from pypy.module.micronumpy.interp_numarray import W_NDimArray, shape_agreement -from pypy.module.micronumpy.interp_iter import Chunk -from pypy.module.micronumpy import signature -from pypy.interpreter.error import OperationError -from pypy.conftest import gettestobjspace class MockDtype(object): @@ -1169,7 +1171,7 @@ for obj in [float, bool, int]: assert ones(1, dtype=obj).itemsize == dtype(obj).itemsize assert (ones(1) + ones(1)).itemsize == 8 - assert array(1).itemsize == 8 + assert array(1.0).itemsize == 8 assert ones(1)[:].itemsize == 8 def test_nbytes(self): @@ -1179,7 +1181,7 @@ assert ones((2, 2)).nbytes == 32 assert ones((2, 2))[1:,].nbytes == 16 assert (ones(1) + ones(1)).nbytes == 8 - assert array(3).nbytes == 8 + assert 
array(3.0).nbytes == 8 class AppTestMultiDim(BaseNumpyAppTest): @@ -1759,10 +1761,11 @@ assert len(a) == 8 assert arange(False, True, True).dtype is dtype(int) -from pypy.module.micronumpy.appbridge import get_appbridge_cache class AppTestRepr(BaseNumpyAppTest): def setup_class(cls): + if option.runappdirect: + py.test.skip("Can't be run directly.") BaseNumpyAppTest.setup_class.im_func(cls) cache = get_appbridge_cache(cls.space) cls.old_array_repr = cache.w_array_repr @@ -1776,6 +1779,8 @@ assert str(array([1, 2, 3])) == 'array([1, 2, 3])' def teardown_class(cls): + if option.runappdirect: + return cache = get_appbridge_cache(cls.space) cache.w_array_repr = cls.old_array_repr cache.w_array_str = cls.old_array_str From noreply at buildbot.pypy.org Fri Feb 3 12:02:36 2012 From: noreply at buildbot.pypy.org (mattip) Date: Fri, 3 Feb 2012 12:02:36 +0100 (CET) Subject: [pypy-commit] pypy default: shrink test Message-ID: <20120203110236.D44B482B20@wyvern.cs.uni-duesseldorf.de> Author: mattip Branch: Changeset: r52056:5bf9a08deeb4 Date: 2012-01-29 08:25 +0200 http://bitbucket.org/pypy/pypy/changeset/5bf9a08deeb4/ Log: shrink test diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -938,10 +938,9 @@ [[86, 302, 518], [110, 390, 670], [134, 478, 822]]]).all() c = dot(a, b[:, 2]) assert (c == [[62, 214, 366], [518, 670, 822]]).all() - a = arange(3*4*5*6).reshape((3,4,5,6)) - b = arange(3*4*5*6)[::-1].reshape((5,4,6,3)) - assert dot(a, b)[2,3,2,1,2,2] == 499128 - assert sum(a[2,3,2,:] * b[1,2,:,2]) == 499128 + a = arange(3*2*6).reshape((3,2,6)) + b = arange(3*2*6)[::-1].reshape((2,6,3)) + assert dot(a, b)[2,0,1,2] == 1140 def test_dot_constant(self): from _numpypy import array, dot From noreply at buildbot.pypy.org Fri Feb 3 12:33:36 2012 From: noreply at buildbot.pypy.org (bivab) Date: Fri, 3 Feb 2012 12:33:36 +0100 (CET) Subject: [pypy-commit] pypy arm-backend-2: partially revert db27ab55d51b Message-ID: <20120203113336.9736A82B20@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: arm-backend-2 Changeset: r52057:651ccc6e4f1a Date: 2012-01-26 12:01 +0100 http://bitbucket.org/pypy/pypy/changeset/651ccc6e4f1a/ Log: partially revert db27ab55d51b diff --git a/pypy/jit/backend/arm/helper/assembler.py b/pypy/jit/backend/arm/helper/assembler.py --- a/pypy/jit/backend/arm/helper/assembler.py +++ b/pypy/jit/backend/arm/helper/assembler.py @@ -143,9 +143,8 @@ class saved_registers(object): - def __init__(self, assembler, regs_to_save, vfp_regs_to_save=None): - self.assembler = assembler - self.supports_floats = assembler.cpu.supports_floats + def __init__(self, cb, regs_to_save, vfp_regs_to_save=None): + self.cb = cb if vfp_regs_to_save is None: vfp_regs_to_save = [] self.regs = regs_to_save @@ -153,15 +152,15 @@ def __enter__(self): if len(self.regs) > 0: - self.assembler.PUSH([r.value for r in self.regs]) - if self.supports_floats and len(self.vfp_regs) > 0: - self.assembler.VPUSH([r.value for r in self.vfp_regs]) + self.cb.PUSH([r.value for r in self.regs]) + if len(self.vfp_regs) > 0: + self.cb.VPUSH([r.value for r in self.vfp_regs]) def __exit__(self, *args): - if self.supports_floats and len(self.vfp_regs) > 0: - self.assembler.VPOP([r.value for r in self.vfp_regs]) + if len(self.vfp_regs) > 0: + self.cb.VPOP([r.value for r in self.vfp_regs]) if len(self.regs) > 0: - self.assembler.POP([r.value for r in self.regs]) + self.cb.POP([r.value 
for r in self.regs]) def count_reg_args(args): reg_args = 0 From noreply at buildbot.pypy.org Fri Feb 3 12:33:37 2012 From: noreply at buildbot.pypy.org (bivab) Date: Fri, 3 Feb 2012 12:33:37 +0100 (CET) Subject: [pypy-commit] pypy arm-backend-2: initial implementation of card_marking in cond_call_gc_wb_array Message-ID: <20120203113337.CFB4082B20@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: arm-backend-2 Changeset: r52058:f34972c516b0 Date: 2012-01-26 15:36 +0100 http://bitbucket.org/pypy/pypy/changeset/f34972c516b0/ Log: initial implementation of card_marking in cond_call_gc_wb_array diff --git a/pypy/jit/backend/arm/opassembler.py b/pypy/jit/backend/arm/opassembler.py --- a/pypy/jit/backend/arm/opassembler.py +++ b/pypy/jit/backend/arm/opassembler.py @@ -524,10 +524,12 @@ if opnum == rop.COND_CALL_GC_WB: N = 2 addr = descr.get_write_barrier_fn(self.cpu) + card_marking = False elif opnum == rop.COND_CALL_GC_WB_ARRAY: N = 3 addr = descr.get_write_barrier_from_array_fn(self.cpu) assert addr != 0 + card_marking = descr.jit_wb_cards_set != 0 else: raise AssertionError(opnum) loc_base = arglocs[0] @@ -545,6 +547,21 @@ jz_location = self.mc.currpos() self.mc.BKPT() + # for cond_call_gc_wb_array, also add another fast path: + # if GCFLAG_CARDS_SET, then we can just set one bit and be done + if card_marking: + # calculate the shift value to rotate the ofs according to the ARM + # shifted imm values + ofs = (((4 - descr.jit_wb_cards_set_byteofs) * 4) & 0xF) << 8 + ofs |= descr.jit_wb_cards_set_singlebyte + self.mc.TST_ri(r.ip.value, imm=ofs) + # + jnz_location = self.mc.currpos() + self.mc.BKPT() + # + else: + jnz_location = 0 + # the following is supposed to be the slow path, so whenever possible # we choose the most compact encoding over the most efficient one. with saved_registers(self.mc, r.caller_resp): @@ -558,6 +575,53 @@ # barrier is not going to call anything more. self.mc.BL(func) + # if GCFLAG_CARDS_SET, then we can do the whole thing that would + # be done in the CALL above with just four instructions, so here + # is an inline copy of them + if card_marking: + jmp_location = self.mc.get_relative_pos() + self.mc.BKPT() # jump to the exit, patched later + # patch the JNZ above + offset = self.mc.currpos() + pmc = OverwritingBuilder(self.mc, jnz_location, WORD) + pmc.B_offs(offset, c.NE) #NZ? 
+ # + loc_index = arglocs[1] + if loc_index.is_reg(): + tmp1 = loc_index + # store additional scratch reg + self.mc.PUSH([tmp1.value]) + # byte_index + self.mc.LSR_ri(tmp1.value, tmp1.value, + imm=descr.jit_wb_card_page_shift) + #byteofs + self.mc.LSR_ri(r.lr.value, tmp1.value, imm=3) + self.mc.MVN_rr(r.lr.value, r.lr.value) + #byteval + self.mc.MOV_ri(r.ip.value, imm=1) + self.mc.AND_ri(tmp1.value, tmp1.value, imm=7) + self.mc.LSL_rr(tmp1.value, r.ip.value, tmp1.value) + + # set the bit + self.mc.LDRB_rr(r.ip.value, loc_base.value, r.lr.value) + self.mc.ORR_rr(r.ip.value, r.ip.value, tmp1.value) + self.mc.STRB_rr(r.ip.value, loc_base.value, r.lr.value) + # done + self.mc.POP([tmp1.value]) + elif loc_index.is_imm(): + byte_index = loc_index.value >> descr.jit_wb_card_page_shift + byte_ofs = ~(byte_index >> 3) + byte_val = 1 << (byte_index & 7) + self.mc.LDRB_ri(r.ip.value, loc_base.value, byte_ofs) + self.mc.ORR_ri(r.ip.value, r.ip.value, imm=byte_val) + self.mc.STRB_ri(r.ip.value, loc_base.value, byte_ofs) + else: + raise AssertionError("index is neither RegLoc nor ImmedLoc") + # patch the JMP above + offset = self.mc.currpos() + pmc = OverwritingBuilder(self.mc, jmp_location, WORD) + pmc.B_offs(offset) + # # patch the JZ above offset = self.mc.currpos() pmc = OverwritingBuilder(self.mc, jz_location, WORD) diff --git a/pypy/jit/backend/arm/test/test_runner.py b/pypy/jit/backend/arm/test/test_runner.py --- a/pypy/jit/backend/arm/test/test_runner.py +++ b/pypy/jit/backend/arm/test/test_runner.py @@ -112,9 +112,6 @@ self.cpu.execute_token(lt1, 11) assert self.cpu.get_latest_value_int(0) == 10 - def test_cond_call_gc_wb_array_card_marking_fast_path(self): - py.test.skip('ignore this fast path for now') - SFloat = lltype.GcForwardReference() SFloat.become(lltype.GcStruct('SFloat', ('parent', rclass.OBJECT), ('v1', lltype.Signed), ('v2', lltype.Signed), From noreply at buildbot.pypy.org Fri Feb 3 12:33:39 2012 From: noreply at buildbot.pypy.org (bivab) Date: Fri, 3 Feb 2012 12:33:39 +0100 (CET) Subject: [pypy-commit] pypy arm-backend-2: use shifted immediate and reg arguments for the operations Message-ID: <20120203113339.0C8A982B20@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: arm-backend-2 Changeset: r52059:fe97ecdb2301 Date: 2012-01-26 16:29 +0100 http://bitbucket.org/pypy/pypy/changeset/fe97ecdb2301/ Log: use shifted immediate and reg arguments for the operations diff --git a/pypy/jit/backend/arm/opassembler.py b/pypy/jit/backend/arm/opassembler.py --- a/pypy/jit/backend/arm/opassembler.py +++ b/pypy/jit/backend/arm/opassembler.py @@ -591,15 +591,15 @@ tmp1 = loc_index # store additional scratch reg self.mc.PUSH([tmp1.value]) + #byteofs + s = 3 + descr.jit_wb_card_page_shift + self.mc.MVN_rr(r.lr.value, tmp1.value, + imm=s, shifttype=shift.LSR) # byte_index - self.mc.LSR_ri(tmp1.value, tmp1.value, - imm=descr.jit_wb_card_page_shift) - #byteofs - self.mc.LSR_ri(r.lr.value, tmp1.value, imm=3) - self.mc.MVN_rr(r.lr.value, r.lr.value) - #byteval + self.mc.MOV_ri(r.ip.value, imm=7) + self.mc.AND_rr(tmp1.value, r.ip.value, tmp1.value, + imm=descr.jit_wb_card_page_shift, shifttype=shift.LSR) self.mc.MOV_ri(r.ip.value, imm=1) - self.mc.AND_ri(tmp1.value, tmp1.value, imm=7) self.mc.LSL_rr(tmp1.value, r.ip.value, tmp1.value) # set the bit From noreply at buildbot.pypy.org Fri Feb 3 12:33:40 2012 From: noreply at buildbot.pypy.org (bivab) Date: Fri, 3 Feb 2012 12:33:40 +0100 (CET) Subject: [pypy-commit] pypy arm-backend-2: use more implicit shifts Message-ID: 
<20120203113340.4345F82B20@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: arm-backend-2 Changeset: r52060:c76d9a617a80 Date: 2012-01-26 17:04 +0100 http://bitbucket.org/pypy/pypy/changeset/c76d9a617a80/ Log: use more implicit shifts diff --git a/pypy/jit/backend/arm/opassembler.py b/pypy/jit/backend/arm/opassembler.py --- a/pypy/jit/backend/arm/opassembler.py +++ b/pypy/jit/backend/arm/opassembler.py @@ -588,26 +588,25 @@ # loc_index = arglocs[1] if loc_index.is_reg(): - tmp1 = loc_index + tmp1 = regalloc.get_scratch_reg(INT, [loc_index, loc_base]) + tmp2 = regalloc.get_scratch_reg(INT, [tmp1, loc_base]) # store additional scratch reg - self.mc.PUSH([tmp1.value]) #byteofs s = 3 + descr.jit_wb_card_page_shift - self.mc.MVN_rr(r.lr.value, tmp1.value, + self.mc.MVN_rr(r.lr.value, loc_index.value, imm=s, shifttype=shift.LSR) # byte_index self.mc.MOV_ri(r.ip.value, imm=7) - self.mc.AND_rr(tmp1.value, r.ip.value, tmp1.value, - imm=descr.jit_wb_card_page_shift, shifttype=shift.LSR) - self.mc.MOV_ri(r.ip.value, imm=1) - self.mc.LSL_rr(tmp1.value, r.ip.value, tmp1.value) + self.mc.AND_rr(tmp1.value, r.ip.value, loc_index.value, + imm=descr.jit_wb_card_page_shift, shifttype=shift.LSR) # set the bit + self.mc.MOV_ri(tmp2.value, imm=1) self.mc.LDRB_rr(r.ip.value, loc_base.value, r.lr.value) - self.mc.ORR_rr(r.ip.value, r.ip.value, tmp1.value) + self.mc.ORR_rr_sr(r.ip.value, r.ip.value, tmp2.value, + tmp1.value, shifttype=shift.LSL) self.mc.STRB_rr(r.ip.value, loc_base.value, r.lr.value) # done - self.mc.POP([tmp1.value]) elif loc_index.is_imm(): byte_index = loc_index.value >> descr.jit_wb_card_page_shift byte_ofs = ~(byte_index >> 3) From noreply at buildbot.pypy.org Fri Feb 3 12:33:41 2012 From: noreply at buildbot.pypy.org (bivab) Date: Fri, 3 Feb 2012 12:33:41 +0100 (CET) Subject: [pypy-commit] pypy arm-backend-2: Explicitely use immediate values where possible Message-ID: <20120203113341.74AD482B20@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: arm-backend-2 Changeset: r52061:d41a0ed47ef8 Date: 2012-01-26 18:22 +0100 http://bitbucket.org/pypy/pypy/changeset/d41a0ed47ef8/ Log: Explicitely use immediate values where possible diff --git a/pypy/jit/backend/arm/helper/regalloc.py b/pypy/jit/backend/arm/helper/regalloc.py --- a/pypy/jit/backend/arm/helper/regalloc.py +++ b/pypy/jit/backend/arm/helper/regalloc.py @@ -33,9 +33,9 @@ imm_a1 = check_imm_box(a1, imm_size, allow_zero=allow_zero) if not imm_a0 and imm_a1: l0 = self._ensure_value_is_boxed(a0) - l1 = self._ensure_value_is_boxed(a1, boxes) + l1 = self.convert_to_imm(a1) elif commutative and imm_a0 and not imm_a1: - l1 = self._ensure_value_is_boxed(a0, boxes) + l1 = self.convert_to_imm(a0) l0 = self._ensure_value_is_boxed(a1, boxes) else: l0 = self._ensure_value_is_boxed(a0, boxes) @@ -90,17 +90,15 @@ assert fcond is not None a0 = op.getarg(0) a1 = op.getarg(1) - assert isinstance(a0, Box) - assert isinstance(a1, Box) arg1 = self.rm.make_sure_var_in_reg(a0, selected_reg=r.r0) arg2 = self.rm.make_sure_var_in_reg(a1, selected_reg=r.r1) assert arg1 == r.r0 assert arg2 == r.r1 if isinstance(a0, Box) and self.stays_alive(a0): self.force_spill_var(a0) - self.possibly_free_var(a0) + self.possibly_free_vars_for_op(op) + self.free_temp_vars() self.after_call(op.result) - self.possibly_free_var(a1) self.possibly_free_var(op.result) return [] f.__name__ = name @@ -115,7 +113,7 @@ l0 = self._ensure_value_is_boxed(arg0, forbidden_vars=boxes) if imm_a1: - l1 = self._ensure_value_is_boxed(arg1, boxes) + l1 = 
self.convert_to_imm(arg1) else: l1 = self._ensure_value_is_boxed(arg1, forbidden_vars=boxes) diff --git a/pypy/jit/backend/arm/regalloc.py b/pypy/jit/backend/arm/regalloc.py --- a/pypy/jit/backend/arm/regalloc.py +++ b/pypy/jit/backend/arm/regalloc.py @@ -408,9 +408,9 @@ imm_a1 = check_imm_box(a1) if not imm_a0 and imm_a1: l0 = self._ensure_value_is_boxed(a0, boxes) - l1 = self._ensure_value_is_boxed(a1, boxes) + l1 = self.convert_to_imm(a1) elif imm_a0 and not imm_a1: - l0 = self._ensure_value_is_boxed(a0, boxes) + l0 = self.convert_to_imm(a0) l1 = self._ensure_value_is_boxed(a1, boxes) else: l0 = self._ensure_value_is_boxed(a0, boxes) @@ -430,9 +430,9 @@ imm_a1 = check_imm_box(a1) if not imm_a0 and imm_a1: l0 = self._ensure_value_is_boxed(a0, boxes) - l1 = self._ensure_value_is_boxed(a1, boxes) + l1 = self.convert_to_imm(a1) elif imm_a0 and not imm_a1: - l0 = self._ensure_value_is_boxed(a0, boxes) + l0 = self.convert_to_imm(a0) l1 = self._ensure_value_is_boxed(a1, boxes) else: l0 = self._ensure_value_is_boxed(a0, boxes) @@ -612,7 +612,7 @@ if not imm_a1: l1 = self._ensure_value_is_boxed(a1, boxes) else: - l1 = self._ensure_value_is_boxed(a1, boxes) + l1 = self.convert_to_imm(a1) assert op.result is None arglocs = self._prepare_guard(op, [l0, l1]) self.possibly_free_vars(op.getarglist()) @@ -868,7 +868,7 @@ a1 = boxes[1] imm_a1 = check_imm_box(a1) if imm_a1: - ofs_loc = self._ensure_value_is_boxed(a1, boxes) + ofs_loc = self.convert_to_imm(a1) else: ofs_loc = self._ensure_value_is_boxed(a1, boxes) @@ -940,7 +940,7 @@ arg = op.getarg(0) imm_arg = check_imm_box(arg) if imm_arg: - argloc = self._ensure_value_is_boxed(arg) + argloc = self.convert_to_imm(arg) else: argloc = self._ensure_value_is_boxed(arg) self.possibly_free_vars_for_op(op) From noreply at buildbot.pypy.org Fri Feb 3 12:33:42 2012 From: noreply at buildbot.pypy.org (bivab) Date: Fri, 3 Feb 2012 12:33:42 +0100 (CET) Subject: [pypy-commit] pypy arm-backend-2: remove the case where loc_index is an immediate as it does not happen in our case and move the allocation of the scratch registers to the register allocator Message-ID: <20120203113342.A988882B20@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: arm-backend-2 Changeset: r52062:001a7119b6ba Date: 2012-01-26 19:36 +0100 http://bitbucket.org/pypy/pypy/changeset/001a7119b6ba/ Log: remove the case where loc_index is an immediate as it does not happen in our case and move the allocation of the scratch registers to the register allocator diff --git a/pypy/jit/backend/arm/opassembler.py b/pypy/jit/backend/arm/opassembler.py --- a/pypy/jit/backend/arm/opassembler.py +++ b/pypy/jit/backend/arm/opassembler.py @@ -584,38 +584,29 @@ # patch the JNZ above offset = self.mc.currpos() pmc = OverwritingBuilder(self.mc, jnz_location, WORD) - pmc.B_offs(offset, c.NE) #NZ? 
+ pmc.B_offs(offset, c.NE) # loc_index = arglocs[1] - if loc_index.is_reg(): - tmp1 = regalloc.get_scratch_reg(INT, [loc_index, loc_base]) - tmp2 = regalloc.get_scratch_reg(INT, [tmp1, loc_base]) - # store additional scratch reg - #byteofs - s = 3 + descr.jit_wb_card_page_shift - self.mc.MVN_rr(r.lr.value, loc_index.value, - imm=s, shifttype=shift.LSR) - # byte_index - self.mc.MOV_ri(r.ip.value, imm=7) - self.mc.AND_rr(tmp1.value, r.ip.value, loc_index.value, - imm=descr.jit_wb_card_page_shift, shifttype=shift.LSR) + assert loc_index.is_reg() + tmp1 = arglocs[-2] + tmp2 = arglocs[-1] + #byteofs + s = 3 + descr.jit_wb_card_page_shift + self.mc.MVN_rr(r.lr.value, loc_index.value, + imm=s, shifttype=shift.LSR) + # byte_index + self.mc.MOV_ri(r.ip.value, imm=7) + self.mc.AND_rr(tmp1.value, r.ip.value, loc_index.value, + imm=descr.jit_wb_card_page_shift, shifttype=shift.LSR) - # set the bit - self.mc.MOV_ri(tmp2.value, imm=1) - self.mc.LDRB_rr(r.ip.value, loc_base.value, r.lr.value) - self.mc.ORR_rr_sr(r.ip.value, r.ip.value, tmp2.value, - tmp1.value, shifttype=shift.LSL) - self.mc.STRB_rr(r.ip.value, loc_base.value, r.lr.value) - # done - elif loc_index.is_imm(): - byte_index = loc_index.value >> descr.jit_wb_card_page_shift - byte_ofs = ~(byte_index >> 3) - byte_val = 1 << (byte_index & 7) - self.mc.LDRB_ri(r.ip.value, loc_base.value, byte_ofs) - self.mc.ORR_ri(r.ip.value, r.ip.value, imm=byte_val) - self.mc.STRB_ri(r.ip.value, loc_base.value, byte_ofs) - else: - raise AssertionError("index is neither RegLoc nor ImmedLoc") + # set the bit + self.mc.MOV_ri(tmp2.value, imm=1) + self.mc.LDRB_rr(r.ip.value, loc_base.value, r.lr.value) + self.mc.ORR_rr_sr(r.ip.value, r.ip.value, tmp2.value, + tmp1.value, shifttype=shift.LSL) + self.mc.STRB_rr(r.ip.value, loc_base.value, r.lr.value) + # done + # patch the JMP above offset = self.mc.currpos() pmc = OverwritingBuilder(self.mc, jmp_location, WORD) diff --git a/pypy/jit/backend/arm/regalloc.py b/pypy/jit/backend/arm/regalloc.py --- a/pypy/jit/backend/arm/regalloc.py +++ b/pypy/jit/backend/arm/regalloc.py @@ -988,14 +988,22 @@ def prepare_op_cond_call_gc_wb(self, op, fcond): assert op.result is None N = op.numargs() - # we force all arguments in a reg (unless they are Consts), - # because it will be needed anyway by the following setfield_gc - # or setarrayitem_gc. It avoids loading it twice from the memory. + # we force all arguments in a reg because it will be needed anyway by + # the following setfield_gc or setarrayitem_gc. It avoids loading it + # twice from the memory. 
arglocs = [] args = op.getarglist() for i in range(N): loc = self._ensure_value_is_boxed(op.getarg(i), args) arglocs.append(loc) + card_marking = False + if op.getopnum() == rop.COND_CALL_GC_WB_ARRAY: + card_marking = op.getdescr().jit_wb_cards_set != 0 + if card_marking: # allocate scratch registers + tmp1 = self.get_scratch_reg(INT) + tmp2 = self.get_scratch_reg(INT) + arglocs.append(tmp1) + arglocs.append(tmp2) return arglocs prepare_op_cond_call_gc_wb_array = prepare_op_cond_call_gc_wb From noreply at buildbot.pypy.org Fri Feb 3 12:33:43 2012 From: noreply at buildbot.pypy.org (bivab) Date: Fri, 3 Feb 2012 12:33:43 +0100 (CET) Subject: [pypy-commit] pypy arm-backend-2: translation fix Message-ID: <20120203113343.DDE9B82B20@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: arm-backend-2 Changeset: r52063:9dbd5fb612df Date: 2012-02-02 14:23 +0100 http://bitbucket.org/pypy/pypy/changeset/9dbd5fb612df/ Log: translation fix diff --git a/pypy/jit/backend/arm/regalloc.py b/pypy/jit/backend/arm/regalloc.py --- a/pypy/jit/backend/arm/regalloc.py +++ b/pypy/jit/backend/arm/regalloc.py @@ -27,6 +27,7 @@ from pypy.jit.backend.llsupport.descr import unpack_arraydescr from pypy.jit.backend.llsupport.descr import unpack_fielddescr from pypy.jit.backend.llsupport.descr import unpack_interiorfielddescr +from pypy.rlib.objectmodel import we_are_translated # xxx hack: set a default value for TargetToken._arm_loop_code. If 0, we know @@ -998,7 +999,11 @@ arglocs.append(loc) card_marking = False if op.getopnum() == rop.COND_CALL_GC_WB_ARRAY: - card_marking = op.getdescr().jit_wb_cards_set != 0 + descr = op.getdescr() + if we_are_translated(): + cls = self.cpu.gc_ll_descr.has_write_barrier_class() + assert cls is not None and isinstance(descr, cls) + card_marking = descr.jit_wb_cards_set != 0 if card_marking: # allocate scratch registers tmp1 = self.get_scratch_reg(INT) tmp2 = self.get_scratch_reg(INT) From noreply at buildbot.pypy.org Fri Feb 3 12:33:45 2012 From: noreply at buildbot.pypy.org (bivab) Date: Fri, 3 Feb 2012 12:33:45 +0100 (CET) Subject: [pypy-commit] pypy arm-backend-2: remove obsolete imports Message-ID: <20120203113345.1715482B20@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: arm-backend-2 Changeset: r52064:c36242ce9db3 Date: 2012-02-03 10:12 +0100 http://bitbucket.org/pypy/pypy/changeset/c36242ce9db3/ Log: remove obsolete imports diff --git a/pypy/jit/backend/arm/test/test_helper.py b/pypy/jit/backend/arm/test/test_helper.py --- a/pypy/jit/backend/arm/test/test_helper.py +++ b/pypy/jit/backend/arm/test/test_helper.py @@ -1,6 +1,4 @@ -from pypy.jit.backend.arm.helper.assembler import count_reg_args, \ - decode32, encode32, \ - decode64, encode64 +from pypy.jit.backend.arm.helper.assembler import count_reg_args from pypy.jit.metainterp.history import (BoxInt, BoxPtr, BoxFloat, INT, REF, FLOAT) from pypy.jit.backend.arm.test.support import skip_unless_arm From noreply at buildbot.pypy.org Fri Feb 3 12:33:46 2012 From: noreply at buildbot.pypy.org (bivab) Date: Fri, 3 Feb 2012 12:33:46 +0100 (CET) Subject: [pypy-commit] pypy arm-backend-2: move get_fp_offset function to locations module Message-ID: <20120203113346.4B7D082B20@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: arm-backend-2 Changeset: r52065:c20da47745b5 Date: 2012-02-03 10:17 +0100 http://bitbucket.org/pypy/pypy/changeset/c20da47745b5/ Log: move get_fp_offset function to locations module diff --git a/pypy/jit/backend/arm/assembler.py b/pypy/jit/backend/arm/assembler.py --- 
a/pypy/jit/backend/arm/assembler.py +++ b/pypy/jit/backend/arm/assembler.py @@ -6,10 +6,10 @@ from pypy.jit.backend.arm.arch import WORD, DOUBLE_WORD, FUNC_ALIGN, \ N_REGISTERS_SAVED_BY_MALLOC from pypy.jit.backend.arm.codebuilder import ARMv7Builder, OverwritingBuilder +from pypy.jit.backend.arm.locations import get_fp_offset from pypy.jit.backend.arm.regalloc import (Regalloc, ARMFrameManager, ARMv7RegisterManager, check_imm_arg, operations as regalloc_operations, - get_fp_offset, operations_with_guard as regalloc_operations_with_guard) from pypy.jit.backend.llsupport.asmmemmgr import MachineDataBlockWrapper from pypy.jit.backend.model import CompiledLoopToken diff --git a/pypy/jit/backend/arm/locations.py b/pypy/jit/backend/arm/locations.py --- a/pypy/jit/backend/arm/locations.py +++ b/pypy/jit/backend/arm/locations.py @@ -134,3 +134,11 @@ def imm(i): return ImmLocation(i) + + +def get_fp_offset(i): + if i >= 0: + # Take the FORCE_TOKEN into account + return (1 + i) * WORD + else: + return i * WORD diff --git a/pypy/jit/backend/arm/regalloc.py b/pypy/jit/backend/arm/regalloc.py --- a/pypy/jit/backend/arm/regalloc.py +++ b/pypy/jit/backend/arm/regalloc.py @@ -2,7 +2,7 @@ RegisterManager, TempBox, compute_vars_longevity from pypy.jit.backend.arm import registers as r from pypy.jit.backend.arm import locations -from pypy.jit.backend.arm.locations import imm +from pypy.jit.backend.arm.locations import imm, get_fp_offset from pypy.jit.backend.arm.helper.regalloc import (prepare_op_by_helper_call, prepare_op_unary_cmp, prepare_op_ri, @@ -55,13 +55,6 @@ return "" % (id(self),) -def get_fp_offset(i): - if i >= 0: - # Take the FORCE_TOKEN into account - return (1 + i) * WORD - else: - return i * WORD - class ARMFrameManager(FrameManager): From noreply at buildbot.pypy.org Fri Feb 3 12:33:47 2012 From: noreply at buildbot.pypy.org (bivab) Date: Fri, 3 Feb 2012 12:33:47 +0100 (CET) Subject: [pypy-commit] pypy arm-backend-2: improve backend logging Message-ID: <20120203113347.7FEA682B20@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: arm-backend-2 Changeset: r52066:383a3f05a654 Date: 2012-02-03 10:18 +0100 http://bitbucket.org/pypy/pypy/changeset/383a3f05a654/ Log: improve backend logging diff --git a/pypy/jit/backend/arm/assembler.py b/pypy/jit/backend/arm/assembler.py --- a/pypy/jit/backend/arm/assembler.py +++ b/pypy/jit/backend/arm/assembler.py @@ -566,7 +566,7 @@ self.gen_func_prolog() # cpu interface - def assemble_loop(self, inputargs, operations, looptoken, log): + def assemble_loop(self, loopname, inputargs, operations, looptoken, log): clt = CompiledLoopToken(self.cpu, looptoken.number) clt.allgcrefs = [] looptoken.compiled_loop_token = clt @@ -580,7 +580,6 @@ if log: operations = self._inject_debugging_code(looptoken, operations, 'e', looptoken.number) - self._dump(operations) self._call_header() sp_patch_location = self._prepare_sp_patch_position() @@ -607,14 +606,20 @@ self.fixup_target_tokens(rawstart) if log and not we_are_translated(): - print 'Loop', inputargs, operations self.mc._dump_trace(rawstart, 'loop_%s.asm' % self.cpu.total_compiled_loops) - print 'Done assembling loop with token %r' % looptoken ops_offset = self.mc.ops_offset self.teardown() + debug_start("jit-backend-addr") + debug_print("Loop %d (%s) has address %x to %x (bootstrap %x)" % ( + looptoken.number, loopname, + rawstart + loop_head, + rawstart + size_excluding_failure_stuff, + rawstart)) + debug_stop("jit-backend-addr") + return AsmInfo(ops_offset, rawstart + loop_head, 
size_excluding_failure_stuff - loop_head) @@ -635,7 +640,6 @@ if log: operations = self._inject_debugging_code(faildescr, operations, 'b', descr_number) - self._dump(operations, 'bridge') assert isinstance(faildescr, AbstractFailDescr) code = self._find_failure_recovery_bytecode(faildescr) frame_depth = faildescr._arm_current_frame_depth @@ -670,13 +674,18 @@ # for the benefit of tests faildescr._arm_bridge_frame_depth = frame_depth if log: - print 'Bridge', inputargs, operations self.mc._dump_trace(rawstart, 'bridge_%d.asm' % self.cpu.total_compiled_bridges) self.current_clt.frame_depth = max(self.current_clt.frame_depth, frame_depth) ops_offset = self.mc.ops_offset self.teardown() + + debug_start("jit-backend-addr") + debug_print("bridge out of Guard %d has address %x to %x" % + (descr_number, rawstart, rawstart + codeendpos)) + debug_stop("jit-backend-addr") + return AsmInfo(ops_offset, startpos + rawstart, codeendpos - startpos) def _find_failure_recovery_bytecode(self, faildescr): diff --git a/pypy/jit/backend/arm/opassembler.py b/pypy/jit/backend/arm/opassembler.py --- a/pypy/jit/backend/arm/opassembler.py +++ b/pypy/jit/backend/arm/opassembler.py @@ -194,9 +194,6 @@ descr = op.getdescr() assert isinstance(descr, AbstractFailDescr) - if not we_are_translated() and hasattr(op, 'getfailargs'): - print 'Failargs: ', op.getfailargs() - pos = self.mc.currpos() # For all guards that are not GUARD_NOT_INVALIDATED we emit a # breakpoint to ensure the location is patched correctly. In the case diff --git a/pypy/jit/backend/arm/runner.py b/pypy/jit/backend/arm/runner.py --- a/pypy/jit/backend/arm/runner.py +++ b/pypy/jit/backend/arm/runner.py @@ -35,7 +35,7 @@ def compile_loop(self, inputargs, operations, looptoken, log=True, name=''): - return self.assembler.assemble_loop(inputargs, operations, + return self.assembler.assemble_loop(name, inputargs, operations, looptoken, log=log) def compile_bridge(self, faildescr, inputargs, operations, From noreply at buildbot.pypy.org Fri Feb 3 12:33:48 2012 From: noreply at buildbot.pypy.org (bivab) Date: Fri, 3 Feb 2012 12:33:48 +0100 (CET) Subject: [pypy-commit] pypy arm-backend-2: insert checks only when running tests Message-ID: <20120203113348.B1C7182B20@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: arm-backend-2 Changeset: r52067:83eb16c29d36 Date: 2012-02-03 11:15 +0100 http://bitbucket.org/pypy/pypy/changeset/83eb16c29d36/ Log: insert checks only when running tests diff --git a/pypy/jit/backend/arm/assembler.py b/pypy/jit/backend/arm/assembler.py --- a/pypy/jit/backend/arm/assembler.py +++ b/pypy/jit/backend/arm/assembler.py @@ -854,7 +854,7 @@ return True def _insert_checks(self, mc=None): - if self._debug: + if not we_are_translated() and self._debug: if mc is None: mc = self.mc mc.CMP_rr(r.fp.value, r.sp.value) From noreply at buildbot.pypy.org Fri Feb 3 12:33:49 2012 From: noreply at buildbot.pypy.org (bivab) Date: Fri, 3 Feb 2012 12:33:49 +0100 (CET) Subject: [pypy-commit] pypy arm-backend-2: rename objdump.py to viewcode.py in arm/tool Message-ID: <20120203113349.E71DF82B20@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: arm-backend-2 Changeset: r52068:a2c367d64c5c Date: 2012-02-03 12:11 +0100 http://bitbucket.org/pypy/pypy/changeset/a2c367d64c5c/ Log: rename objdump.py to viewcode.py in arm/tool diff --git a/pypy/jit/backend/arm/tool/objdump.py b/pypy/jit/backend/arm/tool/viewcode.py rename from pypy/jit/backend/arm/tool/objdump.py rename to pypy/jit/backend/arm/tool/viewcode.py --- 
a/pypy/jit/backend/arm/tool/objdump.py +++ b/pypy/jit/backend/arm/tool/viewcode.py @@ -1,14 +1,18 @@ #!/usr/bin/env python """ Try: - ./objdump.py file.asm - ./objdump.py --decode dumpfile + ./viewcode.py file.asm + ./viewcode.py --decode dumpfile """ import os, sys, py import subprocess def machine_code_dump(data, originaddr, backend_name, label_list=None): - assert backend_name == 'arm_32' + objdump_backend_option = { + 'arm': 'arm', + 'arm_32': 'arm', + } + assert backend_name in objdump_backend_option tmpfile = get_tmp_file() objdump = 'objdump -M reg-names-std --adjust-vma=%(origin)d -D --architecture=arm --target=binary %(file)s' # From noreply at buildbot.pypy.org Fri Feb 3 12:33:51 2012 From: noreply at buildbot.pypy.org (bivab) Date: Fri, 3 Feb 2012 12:33:51 +0100 (CET) Subject: [pypy-commit] pypy arm-backend-2: add a viewcode module in jit/backend/tool that detects and imports the Message-ID: <20120203113351.28CA282B20@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: arm-backend-2 Changeset: r52069:c8e4776519d5 Date: 2012-02-03 12:15 +0100 http://bitbucket.org/pypy/pypy/changeset/c8e4776519d5/ Log: add a viewcode module in jit/backend/tool that detects and imports the functionality (for now machine_code_dump) from the backend corresponding to the current machine diff --git a/pypy/jit/backend/arm/test/test_runner.py b/pypy/jit/backend/arm/test/test_runner.py --- a/pypy/jit/backend/arm/test/test_runner.py +++ b/pypy/jit/backend/arm/test/test_runner.py @@ -30,10 +30,6 @@ add_loop_instructions = ['mov', 'adds', 'cmp', 'beq', 'b'] bridge_loop_instructions = ['movw', 'movt', 'bx'] - def get_machine_code_dump_func(self): - from pypy.jit.backend.arm.tool.objdump import machine_code_dump - return machine_code_dump - def setup_method(self, meth): self.cpu = ArmCPU(rtyper=None, stats=FakeStats()) self.cpu.setup_once() diff --git a/pypy/jit/backend/test/runner_test.py b/pypy/jit/backend/test/runner_test.py --- a/pypy/jit/backend/test/runner_test.py +++ b/pypy/jit/backend/test/runner_test.py @@ -3219,7 +3219,7 @@ from pypy.jit.backend.llsupport.llmodel import AbstractLLCPU if not isinstance(self.cpu, AbstractLLCPU): py.test.skip("pointless test on non-asm") - machine_code_dump = self.get_machine_code_dump_func() + from pypy.jit.backend.tool.viewcode import machine_code_dump import ctypes ops = """ [i3, i2] diff --git a/pypy/jit/backend/tool/__init__.py b/pypy/jit/backend/tool/__init__.py new file mode 100644 diff --git a/pypy/jit/backend/tool/viewcode.py b/pypy/jit/backend/tool/viewcode.py new file mode 100644 --- /dev/null +++ b/pypy/jit/backend/tool/viewcode.py @@ -0,0 +1,11 @@ +from pypy.jit.backend.detect_cpu import autodetect_main_model +import sys + + +def get_module(mod): + __import__(mod) + return sys.modules[mod] + +cpu = autodetect_main_model() +viewcode = get_module("pypy.jit.backend.%s.tool.viewcode" % cpu) +machine_code_dump = getattr(viewcode, 'machine_code_dump') diff --git a/pypy/jit/backend/x86/test/test_runner.py b/pypy/jit/backend/x86/test/test_runner.py --- a/pypy/jit/backend/x86/test/test_runner.py +++ b/pypy/jit/backend/x86/test/test_runner.py @@ -40,10 +40,6 @@ # the 'mov' is part of the 'jmp' so far bridge_loop_instructions = ['lea', 'mov', 'jmp'] - def get_machine_code_dump_func(self): - from pypy.jit.backend.x86.tool.viewcode import machine_code_dump - return machine_code_dump - def setup_method(self, meth): self.cpu = CPU(rtyper=None, stats=FakeStats()) self.cpu.setup_once() diff --git a/pypy/tool/jitlogparser/parser.py b/pypy/tool/jitlogparser/parser.py
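The new pypy/jit/backend/tool/viewcode.py above resolves the backend-specific viewcode module by building its dotted name from the detected CPU model and importing it when the shim itself is imported. A minimal standalone sketch of that dispatch pattern (the helper name load_backend_viewcode is illustrative, not part of the changeset):

    import sys

    def load_backend_viewcode(cpu_name):
        # cpu_name comes from autodetect_main_model(), e.g. 'x86' or 'arm';
        # the dotted path mirrors pypy.jit.backend.<cpu>.tool.viewcode
        modname = "pypy.jit.backend.%s.tool.viewcode" % cpu_name
        __import__(modname)           # import the backend-specific module
        return sys.modules[modname]   # hand back the module object

Callers such as runner_test.py then import only the generic pypy.jit.backend.tool.viewcode entry point and no longer need to know which backend they are running on.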
--- a/pypy/tool/jitlogparser/parser.py +++ b/pypy/tool/jitlogparser/parser.py @@ -95,7 +95,7 @@ return loop def _asm_disassemble(self, d, origin_addr, tp): - from pypy.jit.backend.x86.tool.viewcode import machine_code_dump + from pypy.jit.backend.tool.viewcode import machine_code_dump return list(machine_code_dump(d, tp, origin_addr)) @classmethod From noreply at buildbot.pypy.org Fri Feb 3 12:33:52 2012 From: noreply at buildbot.pypy.org (bivab) Date: Fri, 3 Feb 2012 12:33:52 +0100 (CET) Subject: [pypy-commit] pypy arm-backend-2: Extract names from loop comments that contain parenthesis before the first colon Message-ID: <20120203113352.58B3482B20@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: arm-backend-2 Changeset: r52070:b2a50d3300dc Date: 2012-02-03 12:16 +0100 http://bitbucket.org/pypy/pypy/changeset/b2a50d3300dc/ Log: Extract names from loop comments that contain parenthesis before the first colon diff --git a/pypy/tool/jitlogparser/parser.py b/pypy/tool/jitlogparser/parser.py --- a/pypy/tool/jitlogparser/parser.py +++ b/pypy/tool/jitlogparser/parser.py @@ -386,6 +386,8 @@ if comm.startswith('# bridge'): m = re.search('guard \d+', comm) name = m.group(0) + elif "(" in name: + name = comm[2:comm.find('(')-1] else: name = comm[2:comm.find(':')-1] if name in dumps: From noreply at buildbot.pypy.org Fri Feb 3 12:33:53 2012 From: noreply at buildbot.pypy.org (bivab) Date: Fri, 3 Feb 2012 12:33:53 +0100 (CET) Subject: [pypy-commit] pypy arm-backend-2: modify the jit log parser to also work with the ARM output of objdump Message-ID: <20120203113353.97D2C82B20@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: arm-backend-2 Changeset: r52071:d05d55cbe71a Date: 2012-02-03 12:18 +0100 http://bitbucket.org/pypy/pypy/changeset/d05d55cbe71a/ Log: modify the jit log parser to also work with the ARM output of objdump diff --git a/pypy/tool/jitlogparser/parser.py b/pypy/tool/jitlogparser/parser.py --- a/pypy/tool/jitlogparser/parser.py +++ b/pypy/tool/jitlogparser/parser.py @@ -65,9 +65,11 @@ asm = [] start = 0 for elem in raw_asm: - if len(elem.split("\t")) != 3: + if len(elem.split("\t")) < 3: continue - adr, _, v = elem.split("\t") + e = elem.split("\t") + adr = e[0] + v = " ".join(e[2:]) if not start: start = int(adr.strip(":"), 16) ofs = int(adr.strip(":"), 16) - start diff --git a/pypy/tool/jitlogparser/test/test_parser.py b/pypy/tool/jitlogparser/test/test_parser.py --- a/pypy/tool/jitlogparser/test/test_parser.py +++ b/pypy/tool/jitlogparser/test/test_parser.py @@ -4,6 +4,7 @@ parse_log_counts) from pypy.tool.jitlogparser.storage import LoopStorage import py, sys +from pypy.jit.backend.detect_cpu import autodetect_main_model def parse(input, **kwds): return SimpleParser.parse_from_input(input, **kwds) @@ -188,6 +189,8 @@ assert chunk.bytecode_name.startswith('StrLiteralSearch') def test_parsing_assembler(): + if not autodetect_main_model() == 'x86': + py.test.skip('x86 only test') backend_dump = 
"554889E5534154415541564157488DA500000000488B042590C5540148C7042590C554010000000048898570FFFFFF488B042598C5540148C7042598C554010000000048898568FFFFFF488B0425A0C5540148C70425A0C554010000000048898560FFFFFF488B0425A8C5540148C70425A8C554010000000048898558FFFFFF4C8B3C2550525B0149BB30E06C96FC7F00004D8B334983C60149BB30E06C96FC7F00004D89334981FF102700000F8D000000004983C7014C8B342580F76A024983EE014C89342580F76A024983FE000F8C00000000E9AEFFFFFF488B042588F76A024829E0483B042580EC3C01760D49BB05F30894FC7F000041FFD3554889E5534154415541564157488DA550FFFFFF4889BD70FFFFFF4889B568FFFFFF48899560FFFFFF48898D58FFFFFF4D89C7E954FFFFFF49BB00F00894FC7F000041FFD34440484C3D030300000049BB00F00894FC7F000041FFD34440484C3D070304000000" dump_start = 0x7f3b0b2e63d5 loop = parse(""" @@ -214,7 +217,63 @@ assert '0x2710' in cmp.asm assert 'jmp' in loop.operations[-1].asm +def test_parsing_arm_assembler(): + if not autodetect_main_model() == 'arm': + py.test.skip('ARM only test') + backend_dump = "F04F2DE9108B2DED2CD04DE20DB0A0E17CC302E3DFC040E300409CE5085084E2086000E3006084E504B084E500508CE508D04BE20000A0E10000A0E1B0A10DE30EA044E300A09AE501A08AE2B0910DE30E9044E300A089E5C0910DE30E9044E3009099E5019089E2C0A10DE30EA044E300908AE5010050E1700020E124A092E500C08AE00C90DCE5288000E3090058E10180A0030080A013297000E3090057E10170A0030070A013077088E1200059E30180A0030080A013099049E2050059E30190A0330090A023099088E1000059E30190A0130090A003099087E1000059E3700020E1010080E204200BE5D0210DE30E2044E3002092E5012082E2D0910DE30E9044E3002089E5010050E1700020E100C08AE00C90DCE5282000E3090052E10120A0030020A013297000E3090057E10170A0030070A013077082E1200059E30120A0030020A013099049E2050059E30190A0330090A023099082E1000059E30190A0130090A003099087E1000059E3700020E1010080E20D005BE10FF0A0A1700020E1D8FFFFEA68C100E301C04BE33CFF2FE105010803560000000000000068C100E301C04BE33CFF2FE105010803570000000000000068C100E301C04BE33CFF2FE105014003580000000000000068C100E301C04BE33CFF2FE1050140035900000000000000" + dump_start = int(-0x4ffee930) + loop = parse(""" +# Loop 5 (re StrMatchIn at 92 [17, 4, 0, 20, 393237, 21, 0, 29, 9, 1, 65535, 15, 4, 9, 3, 0, 1, 21, 1, 29, 9, 1, 65535, 15, 4, 9, 2, 0, 1, 1...) : loop with 38 ops +[i0, i1, p2] ++88: label(i0, i1, p2, descr=TargetToken(1081858608)) +debug_merge_point(0, 're StrMatchIn at 92 [17. 4. 0. 20. 393237. 21. 0. 29. 9. 1. 65535. 15. 4. 9. 3. 0. 1. 21. 1. 29. 9. 1. 65535. 15. 4. 9. 2. 0. 1. 1...') ++116: i3 = int_lt(i0, i1) +guard_true(i3, descr=) [i1, i0, p2] ++124: p4 = getfield_gc(p2, descr=) ++128: i5 = strgetitem(p4, i0) ++136: i7 = int_eq(40, i5) ++152: i9 = int_eq(41, i5) ++168: i10 = int_or(i7, i9) ++172: i12 = int_eq(i5, 32) ++184: i14 = int_sub(i5, 9) ++188: i16 = uint_lt(i14, 5) ++200: i17 = int_or(i12, i16) ++204: i18 = int_is_true(i17) ++216: i19 = int_or(i10, i18) ++220: i20 = int_is_true(i19) +guard_false(i20, descr=) [i1, i0, p2] ++228: i22 = int_add(i0, 1) +debug_merge_point(0, 're StrMatchIn at 92 [17. 4. 0. 20. 393237. 21. 0. 29. 9. 1. 65535. 15. 4. 9. 3. 0. 1. 21. 1. 29. 9. 1. 65535. 15. 4. 9. 2. 0. 1. 1...') ++232: label(i22, i1, p2, p4, descr=TargetToken(1081858656)) +debug_merge_point(0, 're StrMatchIn at 92 [17. 4. 0. 20. 393237. 21. 0. 29. 9. 1. 65535. 15. 4. 9. 3. 0. 1. 21. 1. 29. 9. 1. 65535. 15. 4. 9. 2. 0. 1. 
1...') ++264: i23 = int_lt(i22, i1) +guard_true(i23, descr=) [i1, i22, p2] ++272: i24 = strgetitem(p4, i22) ++280: i25 = int_eq(40, i24) ++296: i26 = int_eq(41, i24) ++312: i27 = int_or(i25, i26) ++316: i28 = int_eq(i24, 32) ++328: i29 = int_sub(i24, 9) ++332: i30 = uint_lt(i29, 5) ++344: i31 = int_or(i28, i30) ++348: i32 = int_is_true(i31) ++360: i33 = int_or(i27, i32) ++364: i34 = int_is_true(i33) +guard_false(i34, descr=) [i1, i22, p2] ++372: i35 = int_add(i22, 1) +debug_merge_point(0, 're StrMatchIn at 92 [17. 4. 0. 20. 393237. 21. 0. 29. 9. 1. 65535. 15. 4. 9. 3. 0. 1. 21. 1. 29. 9. 1. 65535. 15. 4. 9. 2. 0. 1. 1...') ++376: jump(i35, i1, p2, p4, descr=TargetToken(1081858656)) ++392: --end of the loop--""", backend_dump=backend_dump, + dump_start=dump_start, + backend_tp='arm_32') + cmp = loop.operations[2] + assert 'cmp' in cmp.asm + assert 'bkpt' in loop.operations[-1].asm # the guard that would be patched + + def test_import_log(): + if not autodetect_main_model() == 'x86': + py.test.skip('x86 only test') _, loops = import_log(str(py.path.local(__file__).join('..', 'logtest.log'))) for loop in loops: @@ -222,6 +281,8 @@ assert 'jge' in loops[0].operations[3].asm def test_import_log_2(): + if not autodetect_main_model() == 'x86': + py.test.skip('x86 only test') _, loops = import_log(str(py.path.local(__file__).join('..', 'logtest2.log'))) for loop in loops: From noreply at buildbot.pypy.org Fri Feb 3 14:51:39 2012 From: noreply at buildbot.pypy.org (mattip) Date: Fri, 3 Feb 2012 14:51:39 +0100 (CET) Subject: [pypy-commit] pypy numpypy-out: branch to handle "out" arg, first stab at tests and implementation for reduce functions Message-ID: <20120203135139.5472271069C@wyvern.cs.uni-duesseldorf.de> Author: mattip Branch: numpypy-out Changeset: r52072:620d218f4ddf Date: 2012-02-03 15:45 +0200 http://bitbucket.org/pypy/pypy/changeset/620d218f4ddf/ Log: branch to handle "out" arg, first stab at tests and implementation for reduce functions diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -131,13 +131,20 @@ descr_rmod = _binop_right_impl("mod") def _reduce_ufunc_impl(ufunc_name, promote_to_largest=False): - def impl(self, space, w_axis=None): + def impl(self, space, w_axis=None, w_out=None): if space.is_w(w_axis, space.w_None): axis = -1 else: axis = space.int_w(w_axis) + if space.is_w(w_out, space.w_None): + out = None + elif not isinstance(w_out, W_NDimArray): + raise OperationError(space.w_TypeError, space.wrap( + 'output must be an array')) + else: + out = w_out return getattr(interp_ufuncs.get(space), ufunc_name).reduce(space, - self, True, promote_to_largest, axis) + self, True, promote_to_largest, axis, False, out) return func_with_new_name(impl, "reduce_%s_impl" % ufunc_name) descr_sum = _reduce_ufunc_impl("add") @@ -484,14 +491,14 @@ ) return w_result - def descr_mean(self, space, w_axis=None): + def descr_mean(self, space, w_axis=None, w_out=None): if space.is_w(w_axis, space.w_None): w_axis = space.wrap(-1) w_denom = space.wrap(self.size) else: dim = space.int_w(w_axis) w_denom = space.wrap(self.shape[dim]) - return space.div(self.descr_sum_promote(space, w_axis), w_denom) + return space.div(self.descr_sum_promote(space, w_axis, w_out), w_denom) def descr_var(self, space, w_axis=None): return get_appbridge_cache(space).call_method(space, '_var', self, diff --git a/pypy/module/micronumpy/interp_ufuncs.py 
b/pypy/module/micronumpy/interp_ufuncs.py --- a/pypy/module/micronumpy/interp_ufuncs.py +++ b/pypy/module/micronumpy/interp_ufuncs.py @@ -105,28 +105,32 @@ array([[ 1, 5], [ 9, 13]]) """ - if not space.is_w(w_out, space.w_None): - raise OperationError(space.w_NotImplementedError, space.wrap( - "out not supported")) if w_axis is None: axis = 0 elif space.is_w(w_axis, space.w_None): axis = -1 else: axis = space.int_w(w_axis) - return self.reduce(space, w_obj, False, False, axis, keepdims) + if space.is_w(w_out, space.w_None): + out = None + elif not isinstance(w_out, W_NDimArray): + raise OperationError(space.w_TypeError, space.wrap( + 'output must be an array')) + else: + out = w_out + return self.reduce(space, w_obj, False, False, axis, keepdims, out) - def reduce(self, space, w_obj, multidim, promote_to_largest, dim, - keepdims=False): + def reduce(self, space, w_obj, multidim, promote_to_largest, axis, + keepdims=False, out=None): from pypy.module.micronumpy.interp_numarray import convert_to_array, \ - Scalar, ReduceArray + Scalar, ReduceArray, W_NDimArray if self.argcount != 2: raise OperationError(space.w_ValueError, space.wrap("reduce only " "supported for binary functions")) assert isinstance(self, W_Ufunc2) obj = convert_to_array(space, w_obj) - if dim >= len(obj.shape): - raise OperationError(space.w_ValueError, space.wrap("axis(=%d) out of bounds" % dim)) + if axis >= len(obj.shape): + raise OperationError(space.w_ValueError, space.wrap("axis(=%d) out of bounds" % axis)) if isinstance(obj, Scalar): raise OperationError(space.w_TypeError, space.wrap("cannot reduce " "on a scalar")) @@ -144,21 +148,32 @@ if self.identity is None and size == 0: raise operationerrfmt(space.w_ValueError, "zero-size array to " "%s.reduce without identity", self.name) - if shapelen > 1 and dim >= 0: - return self.do_axis_reduce(obj, dtype, dim, keepdims) + if shapelen > 1 and axis >= 0: + if keepdims: + shape = obj.shape[:axis] + [1] + obj.shape[axis + 1:] + else: + shape = obj.shape[:axis] + obj.shape[axis + 1:] + if out: + #Test for shape agreement + return self.do_axis_reduce(obj, dtype, axis, out) + else: + result = W_NDimArray(support.product(shape), shape, dtype) + return self.do_axis_reduce(obj, dtype, axis, result) arr = ReduceArray(self.func, self.name, self.identity, obj, dtype) - return loop.compute(arr) + val = loop.compute(arr) + if out: + if len(out.shape)>0: + raise operationerrfmt(space.w_ValueError, "output parameter " + "for reduction operation %s has too many" + " dimensions",self.name) + out.setitem(0, out.dtype.coerce(space, val)) + return out + return val - def do_axis_reduce(self, obj, dtype, dim, keepdims): - from pypy.module.micronumpy.interp_numarray import AxisReduce,\ - W_NDimArray - if keepdims: - shape = obj.shape[:dim] + [1] + obj.shape[dim + 1:] - else: - shape = obj.shape[:dim] + obj.shape[dim + 1:] - result = W_NDimArray(support.product(shape), shape, dtype) + def do_axis_reduce(self, obj, dtype, axis, result): + from pypy.module.micronumpy.interp_numarray import AxisReduce arr = AxisReduce(self.func, self.name, self.identity, obj.shape, dtype, - result, obj, dim) + result, obj, axis) loop.compute(arr) return arr.left diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -785,7 +785,7 @@ assert (arange(10).reshape(5, 2).mean(axis=1) == [0.5, 2.5, 4.5, 6.5, 8.5]).all() def test_sum(self): - from _numpypy import array + 
from _numpypy import array,zeros a = array(range(5)) assert a.sum() == 10 assert a[:4].sum() == 6 @@ -794,6 +794,10 @@ assert a.sum() == 5 raises(TypeError, 'a.sum(2, 3)') + skip('fails since Scalar is not a subclass of W_NDimArray') + d = zeros(()) + b = a.sum(out=d) + assert b == d def test_reduce_nd(self): from numpypy import arange, array, multiply @@ -821,6 +825,14 @@ assert (array([[1,2],[3,4]]).prod(0) == [3, 8]).all() assert (array([[1,2],[3,4]]).prod(1) == [2, 12]).all() + def test_reduce_out(self): + from numpypy import arange, array, multiply + a = arange(15).reshape(5, 3) + b = arange(3) + c = a.sum(0, out=b) + assert (c == [30, 35, 40]).all() + assert (c == b).all() + def test_identity(self): from _numpypy import identity, array from _numpypy import int32, float64, dtype From noreply at buildbot.pypy.org Sat Feb 4 01:55:22 2012 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Sat, 4 Feb 2012 01:55:22 +0100 (CET) Subject: [pypy-commit] pypy numpypy-complex: random progress Message-ID: <20120204005522.8A39C710770@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: numpypy-complex Changeset: r52073:e226e803f299 Date: 2012-02-03 19:54 -0500 http://bitbucket.org/pypy/pypy/changeset/e226e803f299/ Log: random progress diff --git a/pypy/module/micronumpy/__init__.py b/pypy/module/micronumpy/__init__.py --- a/pypy/module/micronumpy/__init__.py +++ b/pypy/module/micronumpy/__init__.py @@ -31,7 +31,7 @@ 'concatenate': 'interp_numarray.concatenate', 'set_string_function': 'appbridge.set_string_function', - + 'count_reduce_items': 'interp_numarray.count_reduce_items', 'True_': 'types.Bool.True', @@ -57,6 +57,8 @@ 'float_': 'interp_boxes.W_Float64Box', 'float32': 'interp_boxes.W_Float32Box', 'float64': 'interp_boxes.W_Float64Box', + 'complexfloating': 'interp_boxes.W_ComplexFloatingBox', + 'complex128': 'interp_boxes.W_Complex128Box', } # ufuncs diff --git a/pypy/module/micronumpy/interp_boxes.py b/pypy/module/micronumpy/interp_boxes.py --- a/pypy/module/micronumpy/interp_boxes.py +++ b/pypy/module/micronumpy/interp_boxes.py @@ -2,6 +2,7 @@ from pypy.interpreter.error import operationerrfmt from pypy.interpreter.gateway import interp2app from pypy.interpreter.typedef import TypeDef +from pypy.objspace.std.complextype import complex_typedef from pypy.objspace.std.floattype import float_typedef from pypy.objspace.std.inttype import int_typedef from pypy.rlib.rarithmetic import LONG_BIT @@ -29,6 +30,18 @@ def convert_to(self, dtype): return dtype.box(self.value) +class ComplexBox(object): + _mixin_ = True + + def __init__(self, real, imag): + self.real = real + self.imag = imag + + def convert_to(self, dtype): + if type(self) is dtype.itemtype.BoxType: + return self + raise NotImplementedError + class W_GenericBox(Wrappable): _attrs_ = () @@ -158,6 +171,12 @@ descr__new__, get_dtype = new_dtype_getter("float64") +class W_ComplexFloatingBox(W_InexactBox): + pass + +class W_Complex128Box(W_ComplexFloatingBox, ComplexBox): + descr__new__, get_dtype = new_dtype_getter("complex128") + W_GenericBox.typedef = TypeDef("generic", __module__ = "numpypy", @@ -271,13 +290,19 @@ W_Float32Box.typedef = TypeDef("float32", W_FloatingBox.typedef, __module__ = "numpypy", - __new__ = interp2app(W_Float32Box.descr__new__.im_func), ) W_Float64Box.typedef = TypeDef("float64", (W_FloatingBox.typedef, float_typedef), __module__ = "numpypy", - __new__ = interp2app(W_Float64Box.descr__new__.im_func), ) +W_ComplexFloatingBox.typedef = TypeDef("complexfloating", W_InexactBox.typedef, + __module__ = "numpypy", 
+) + +W_Complex128Box.typedef = TypeDef("complex128", (W_ComplexFloatingBox.typedef, complex_typedef), + __module__ = "numpypy", + __new__ = interp2app(W_Complex128Box.descr__new__.im_func), +) \ No newline at end of file diff --git a/pypy/module/micronumpy/interp_dtype.py b/pypy/module/micronumpy/interp_dtype.py --- a/pypy/module/micronumpy/interp_dtype.py +++ b/pypy/module/micronumpy/interp_dtype.py @@ -13,6 +13,7 @@ SIGNEDLTR = "i" BOOLLTR = "b" FLOATINGLTR = "f" +COMPLEXLTR = "c" VOID_STORAGE = lltype.Array(lltype.Char, hints={'nolength': True, 'render_as_void': True}) @@ -235,17 +236,26 @@ kind=FLOATINGLTR, name="float64", char="d", - w_box_type = space.gettypefor(interp_boxes.W_Float64Box), + w_box_type=space.gettypefor(interp_boxes.W_Float64Box), alternate_constructors=[space.w_float], aliases=["float"], ) + self.w_complex128dtype = W_Dtype( + types.Complex128(), + num=15, + kind=COMPLEXLTR, + name="complex128", + char="c", + w_box_type=space.gettypefor(interp_boxes.W_Complex128Box), + alternate_constructors=[space.w_complex], + ) self.builtin_dtypes = [ self.w_booldtype, self.w_int8dtype, self.w_uint8dtype, self.w_int16dtype, self.w_uint16dtype, self.w_int32dtype, self.w_uint32dtype, self.w_longdtype, self.w_ulongdtype, self.w_int64dtype, self.w_uint64dtype, self.w_float32dtype, - self.w_float64dtype + self.w_float64dtype, self.w_complex128dtype ] self.dtypes_by_num_bytes = sorted( (dtype.itemtype.get_element_size(), dtype) diff --git a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py --- a/pypy/module/micronumpy/interp_ufuncs.py +++ b/pypy/module/micronumpy/interp_ufuncs.py @@ -334,6 +334,7 @@ bool_dtype = interp_dtype.get_dtype_cache(space).w_booldtype long_dtype = interp_dtype.get_dtype_cache(space).w_longdtype int64_dtype = interp_dtype.get_dtype_cache(space).w_int64dtype + complex128_dtype = interp_dtype.get_dtype_cache(space).w_complex128dtype if isinstance(w_obj, interp_boxes.W_GenericBox): dtype = w_obj.get_dtype(space) @@ -355,6 +356,8 @@ current_guess is long_dtype or current_guess is int64_dtype): return int64_dtype return current_guess + elif space.isinstance_w(w_obj, space.w_complex): + return complex128_dtype return interp_dtype.get_dtype_cache(space).w_float64dtype diff --git a/pypy/module/micronumpy/test/test_dtypes.py b/pypy/module/micronumpy/test/test_dtypes.py --- a/pypy/module/micronumpy/test/test_dtypes.py +++ b/pypy/module/micronumpy/test/test_dtypes.py @@ -164,6 +164,16 @@ for i in range(5): assert b[i] == i * 2 + def test_complex128(self): + from _numpypy import dtype, complex128 + + d = dtype("complex128") + assert d.num == 15 + assert d.kind== "c" + assert d.type is complex128 + assert d.name == "complex128" + assert d.itemsize == 16 + def test_shape(self): from _numpypy import dtype @@ -375,6 +385,19 @@ assert numpy.float64('23.4') == numpy.float64(23.4) raises(ValueError, numpy.float64, '23.2df') + def test_complex128(self): + import _numpypy as numpy + + assert numpy.complex128.mro() == [numpy.complex128, numpy.complexfloating, numpy.inexact, numpy.number, numpy.generic, complex, object] + a = numpy.array([1j], numpy.complex128) + assert type(a[0]) is numpy.complex128 + assert numpy.dtype(complex).type is numpy.complex128 + assert a[0] == 1j + + assert numpy.complex128(2) == 2 + assert numpy.complex128(2.0 + 3j) == 2.0 + 3j + raises(TypeError, numpy.complex128, "23") + def test_subclass_type(self): import _numpypy as numpy diff --git a/pypy/module/micronumpy/types.py b/pypy/module/micronumpy/types.py --- 
a/pypy/module/micronumpy/types.py +++ b/pypy/module/micronumpy/types.py @@ -57,12 +57,11 @@ return dispatcher class BaseType(object): - def _unimplemented_ufunc(self, *args): - raise NotImplementedError - # add = sub = mul = div = mod = pow = eq = ne = lt = le = gt = ge = max = \ - # min = copysign = pos = neg = abs = sign = reciprocal = fabs = floor = \ - # exp = sin = cos = tan = arcsin = arccos = arctan = arcsinh = \ - # arctanh = _unimplemented_ufunc + def coerce(self, space, w_item): + if isinstance(w_item, self.BoxType): + return w_item + return self.coerce_subtype(space, space.gettypefor(self.BoxType), w_item) + class Primitive(object): _mixin_ = True @@ -78,11 +77,6 @@ assert isinstance(box, self.BoxType) return box.value - def coerce(self, space, w_item): - if isinstance(w_item, self.BoxType): - return w_item - return self.coerce_subtype(space, space.gettypefor(self.BoxType), w_item) - def coerce_subtype(self, space, w_subtype, w_item): # XXX: ugly w_obj = space.allocate_instance(self.BoxType, w_subtype) @@ -513,3 +507,58 @@ T = rffi.DOUBLE BoxType = interp_boxes.W_Float64Box format_code = "d" + + +class CompositeType(BaseType): + def __init__(self, fields): + BaseType.__init__(self) + self.fields = fields + + def get_element_size(self): + s = 0 + for field in self.fields: + s += field.get_element_size() + return s + + def read(self, storage, width, i, offset): + subboxes = [] + for field in self.fields: + subboxes.append(field.read(storage, width, i, offset)) + offset += field.get_element_size() + return self.box(subboxes) + + def store(self, storage, width, i, offset, box): + subboxes = self.get_subboxes(box) + assert len(subboxes) == len(self.fields) + for field, subbox in zip(self.fields, subboxes): + field.store(storage, width, i, offset, subbox) + offset += field.get_element_size() + +class Complex128(CompositeType): + BoxType = interp_boxes.W_Complex128Box + COMPLEX_TYPE = Float64 + + def __init__(self): + self.type = self.COMPLEX_TYPE() + CompositeType.__init__(self, [self.type, self.type]) + + def box(self, subboxes): + [real, imag] = subboxes + return self.BoxType(self.type.unbox(real), self.type.unbox(imag)) + + def get_subboxes(self, box): + return [self.type.box(box.real), self.type.box(box.imag)] + + def coerce_subtype(self, space, w_subtype, w_item): + real, imag = space.unpackcomplex(w_item) + w_obj = space.allocate_instance(self.BoxType, w_subtype) + assert isinstance(w_obj, self.BoxType) + w_obj.__init__(real, imag) + return w_obj + + def for_computation(self, v): + pass + + @raw_binary_op + def eq(self, v1, v2): + return v1.real == v2.real and v1.imag == v2.imag \ No newline at end of file From noreply at buildbot.pypy.org Sat Feb 4 02:00:32 2012 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Sat, 4 Feb 2012 02:00:32 +0100 (CET) Subject: [pypy-commit] pypy default: added flatiter.__len__ Message-ID: <20120204010032.E783E710770@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: Changeset: r52074:2610a9ed0d7c Date: 2012-02-03 20:00 -0500 http://bitbucket.org/pypy/pypy/changeset/2610a9ed0d7c/ Log: added flatiter.__len__ diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -796,7 +796,7 @@ self.left.create_sig(), self.right.create_sig()) class ResultArray(Call2): - def __init__(self, child, size, shape, dtype, res=None, order='C'): + def __init__(self, child, size, shape, dtype, res=None, order='C'): if res 
is None: res = W_NDimArray(size, shape, dtype, order) Call2.__init__(self, None, 'assign', shape, dtype, dtype, res, child) @@ -824,7 +824,7 @@ frame.next(len(self.right.shape)) else: frame.cur_value = self.identity.convert_to(self.calc_dtype) - + def create_sig(self): if self.name == 'logical_and': done_func = done_if_false @@ -1321,6 +1321,9 @@ def descr_iter(self): return self + def descr_len(self, space): + return space.wrap(self.size) + def descr_index(self, space): return space.wrap(self.index) @@ -1396,7 +1399,7 @@ return signature.FlatSignature(self.base.dtype) def create_iter(self, transforms=None): - return ViewIterator(self.base.start, self.base.strides, + return ViewIterator(self.base.start, self.base.strides, self.base.backstrides, self.base.shape).apply_transformations(self.base, transforms) @@ -1407,14 +1410,17 @@ W_FlatIterator.typedef = TypeDef( 'flatiter', __iter__ = interp2app(W_FlatIterator.descr_iter), + __len__ = interp2app(W_FlatIterator.descr_len), __getitem__ = interp2app(W_FlatIterator.descr_getitem), __setitem__ = interp2app(W_FlatIterator.descr_setitem), + __eq__ = interp2app(BaseArray.descr_eq), __ne__ = interp2app(BaseArray.descr_ne), __lt__ = interp2app(BaseArray.descr_lt), __le__ = interp2app(BaseArray.descr_le), __gt__ = interp2app(BaseArray.descr_gt), __ge__ = interp2app(BaseArray.descr_ge), + base = GetSetProperty(W_FlatIterator.descr_base), index = GetSetProperty(W_FlatIterator.descr_index), coords = GetSetProperty(W_FlatIterator.descr_coords), diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -1489,24 +1489,26 @@ def test_flatiter_view(self): from _numpypy import arange a = arange(10).reshape(5, 2) - #no == yet. 
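The flatiter changes above (descr_len plus the comparison slots wired into the flatiter typedef) make the flat view of an array behave more like a sequence at application level. A small illustrative session, assuming an interpreter built with the _numpypy module; the expected values are taken from the tests in this changeset:

    from _numpypy import arange

    a = arange(10).reshape(5, 2)
    assert len(a.flat) == 10                          # new __len__ on the flat iterator
    assert (a[::2].flat == [0, 1, 4, 5, 8, 9]).all()  # elementwise __eq__ on the flat view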
- # a[::2].flat == [0, 1, 4, 5, 8, 9] - isequal = True - for y,z in zip(a[::2].flat, [0, 1, 4, 5, 8, 9]): - if y != z: - isequal = False - assert isequal == True + assert (a[::2].flat == [0, 1, 4, 5, 8, 9]).all() def test_flatiter_transpose(self): from _numpypy import arange - a = arange(10).reshape(2,5).T + a = arange(10).reshape(2, 5).T b = a.flat assert (b[:5] == [0, 5, 1, 6, 2]).all() b.next() b.next() b.next() assert b.index == 3 - assert b.coords == (1,1) + assert b.coords == (1, 1) + + def test_flatiter_len(self): + from _numpypy import arange + + assert len(arange(10).flat) == 10 + assert len(arange(10).reshape(2, 5).flat) == 10 + assert len(arange(10)[:2].flat) == 2 + assert len((arange(2) + arange(2)).flat) == 2 def test_slice_copy(self): from _numpypy import zeros From noreply at buildbot.pypy.org Sat Feb 4 05:38:53 2012 From: noreply at buildbot.pypy.org (wlav) Date: Sat, 4 Feb 2012 05:38:53 +0100 (CET) Subject: [pypy-commit] pypy reflex-support: merge default into branch Message-ID: <20120204043853.5A873710770@wyvern.cs.uni-duesseldorf.de> Author: Wim Lavrijsen Branch: reflex-support Changeset: r52075:ae06c687bf97 Date: 2011-09-28 10:25 -0700 http://bitbucket.org/pypy/pypy/changeset/ae06c687bf97/ Log: merge default into branch diff too long, truncating to 10000 out of 14654 lines diff --git a/.hgtags b/.hgtags --- a/.hgtags +++ b/.hgtags @@ -1,2 +1,3 @@ b590cf6de4190623aad9aa698694c22e614d67b9 release-1.5 b48df0bf4e75b81d98f19ce89d4a7dc3e1dab5e5 benchmarked +d8ac7d23d3ec5f9a0fa1264972f74a010dbfd07f release-1.6 diff --git a/dotviewer/graphparse.py b/dotviewer/graphparse.py --- a/dotviewer/graphparse.py +++ b/dotviewer/graphparse.py @@ -36,48 +36,45 @@ print >> sys.stderr, "Warning: could not guess file type, using 'dot'" return 'unknown' -def dot2plain(content, contenttype, use_codespeak=False): - if contenttype == 'plain': - # already a .plain file - return content +def dot2plain_graphviz(content, contenttype, use_codespeak=False): + if contenttype != 'neato': + cmdline = 'dot -Tplain' + else: + cmdline = 'neato -Tplain' + #print >> sys.stderr, '* running:', cmdline + close_fds = sys.platform != 'win32' + p = subprocess.Popen(cmdline, shell=True, close_fds=close_fds, + stdin=subprocess.PIPE, stdout=subprocess.PIPE) + (child_in, child_out) = (p.stdin, p.stdout) + try: + import thread + except ImportError: + bkgndwrite(child_in, content) + else: + thread.start_new_thread(bkgndwrite, (child_in, content)) + plaincontent = child_out.read() + child_out.close() + if not plaincontent: # 'dot' is likely not installed + raise PlainParseError("no result from running 'dot'") + return plaincontent - if not use_codespeak: - if contenttype != 'neato': - cmdline = 'dot -Tplain' - else: - cmdline = 'neato -Tplain' - #print >> sys.stderr, '* running:', cmdline - close_fds = sys.platform != 'win32' - p = subprocess.Popen(cmdline, shell=True, close_fds=close_fds, - stdin=subprocess.PIPE, stdout=subprocess.PIPE) - (child_in, child_out) = (p.stdin, p.stdout) - try: - import thread - except ImportError: - bkgndwrite(child_in, content) - else: - thread.start_new_thread(bkgndwrite, (child_in, content)) - plaincontent = child_out.read() - child_out.close() - if not plaincontent: # 'dot' is likely not installed - raise PlainParseError("no result from running 'dot'") - else: - import urllib - request = urllib.urlencode({'dot': content}) - url = 'http://codespeak.net/pypy/convertdot.cgi' - print >> sys.stderr, '* posting:', url - g = urllib.urlopen(url, data=request) - result = [] - while True: - data = 
g.read(16384) - if not data: - break - result.append(data) - g.close() - plaincontent = ''.join(result) - # very simple-minded way to give a somewhat better error message - if plaincontent.startswith('> sys.stderr, '* posting:', url + g = urllib.urlopen(url, data=request) + result = [] + while True: + data = g.read(16384) + if not data: + break + result.append(data) + g.close() + plaincontent = ''.join(result) + # very simple-minded way to give a somewhat better error message + if plaincontent.startswith('" + return _PROTOCOL_NAMES.get(protocol_code, '') # a replacement for the old socket.ssl function diff --git a/lib-python/2.7/test/test_ssl.py b/lib-python/2.7/test/test_ssl.py --- a/lib-python/2.7/test/test_ssl.py +++ b/lib-python/2.7/test/test_ssl.py @@ -58,32 +58,35 @@ # Issue #9415: Ubuntu hijacks their OpenSSL and forcefully disables SSLv2 def skip_if_broken_ubuntu_ssl(func): - # We need to access the lower-level wrapper in order to create an - # implicit SSL context without trying to connect or listen. - try: - import _ssl - except ImportError: - # The returned function won't get executed, just ignore the error - pass - @functools.wraps(func) - def f(*args, **kwargs): + if hasattr(ssl, 'PROTOCOL_SSLv2'): + # We need to access the lower-level wrapper in order to create an + # implicit SSL context without trying to connect or listen. try: - s = socket.socket(socket.AF_INET) - _ssl.sslwrap(s._sock, 0, None, None, - ssl.CERT_NONE, ssl.PROTOCOL_SSLv2, None, None) - except ssl.SSLError as e: - if (ssl.OPENSSL_VERSION_INFO == (0, 9, 8, 15, 15) and - platform.linux_distribution() == ('debian', 'squeeze/sid', '') - and 'Invalid SSL protocol variant specified' in str(e)): - raise unittest.SkipTest("Patched Ubuntu OpenSSL breaks behaviour") - return func(*args, **kwargs) - return f + import _ssl + except ImportError: + # The returned function won't get executed, just ignore the error + pass + @functools.wraps(func) + def f(*args, **kwargs): + try: + s = socket.socket(socket.AF_INET) + _ssl.sslwrap(s._sock, 0, None, None, + ssl.CERT_NONE, ssl.PROTOCOL_SSLv2, None, None) + except ssl.SSLError as e: + if (ssl.OPENSSL_VERSION_INFO == (0, 9, 8, 15, 15) and + platform.linux_distribution() == ('debian', 'squeeze/sid', '') + and 'Invalid SSL protocol variant specified' in str(e)): + raise unittest.SkipTest("Patched Ubuntu OpenSSL breaks behaviour") + return func(*args, **kwargs) + return f + else: + return func class BasicSocketTests(unittest.TestCase): def test_constants(self): - ssl.PROTOCOL_SSLv2 + #ssl.PROTOCOL_SSLv2 ssl.PROTOCOL_SSLv23 ssl.PROTOCOL_SSLv3 ssl.PROTOCOL_TLSv1 @@ -964,7 +967,8 @@ try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_SSLv3, True) try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_SSLv3, True, ssl.CERT_OPTIONAL) try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_SSLv3, True, ssl.CERT_REQUIRED) - try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_SSLv2, False) + if hasattr(ssl, 'PROTOCOL_SSLv2'): + try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_SSLv2, False) try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_SSLv23, False) try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_TLSv1, False) @@ -976,7 +980,8 @@ try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_TLSv1, True) try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_TLSv1, True, ssl.CERT_OPTIONAL) try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_TLSv1, True, ssl.CERT_REQUIRED) - try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_SSLv2, False) + if hasattr(ssl, 'PROTOCOL_SSLv2'): + 
try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_SSLv2, False) try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_SSLv3, False) try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_SSLv23, False) diff --git a/lib-python/conftest.py b/lib-python/conftest.py --- a/lib-python/conftest.py +++ b/lib-python/conftest.py @@ -317,7 +317,7 @@ RegrTest('test_multibytecodec.py', usemodules='_multibytecodec'), RegrTest('test_multibytecodec_support.py', skip="not a test"), RegrTest('test_multifile.py'), - RegrTest('test_multiprocessing.py', skip='FIXME leaves subprocesses'), + RegrTest('test_multiprocessing.py'), RegrTest('test_mutants.py', core="possibly"), RegrTest('test_mutex.py'), RegrTest('test_netrc.py'), diff --git a/lib-python/modified-2.7/gzip.py b/lib-python/modified-2.7/gzip.py deleted file mode 100644 --- a/lib-python/modified-2.7/gzip.py +++ /dev/null @@ -1,514 +0,0 @@ -"""Functions that read and write gzipped files. - -The user of the file doesn't have to worry about the compression, -but random access is not allowed.""" - -# based on Andrew Kuchling's minigzip.py distributed with the zlib module - -import struct, sys, time, os -import zlib -import io -import __builtin__ - -__all__ = ["GzipFile","open"] - -FTEXT, FHCRC, FEXTRA, FNAME, FCOMMENT = 1, 2, 4, 8, 16 - -READ, WRITE = 1, 2 - -def write32u(output, value): - # The L format writes the bit pattern correctly whether signed - # or unsigned. - output.write(struct.pack("' - - def _check_closed(self): - """Raises a ValueError if the underlying file object has been closed. - - """ - if self.closed: - raise ValueError('I/O operation on closed file.') - - def _init_write(self, filename): - self.name = filename - self.crc = zlib.crc32("") & 0xffffffffL - self.size = 0 - self.writebuf = [] - self.bufsize = 0 - - def _write_gzip_header(self): - self.fileobj.write('\037\213') # magic header - self.fileobj.write('\010') # compression method - fname = os.path.basename(self.name) - if fname.endswith(".gz"): - fname = fname[:-3] - flags = 0 - if fname: - flags = FNAME - self.fileobj.write(chr(flags)) - mtime = self.mtime - if mtime is None: - mtime = time.time() - write32u(self.fileobj, long(mtime)) - self.fileobj.write('\002') - self.fileobj.write('\377') - if fname: - self.fileobj.write(fname + '\000') - - def _init_read(self): - self.crc = zlib.crc32("") & 0xffffffffL - self.size = 0 - - def _read_gzip_header(self): - magic = self.fileobj.read(2) - if magic != '\037\213': - raise IOError, 'Not a gzipped file' - method = ord( self.fileobj.read(1) ) - if method != 8: - raise IOError, 'Unknown compression method' - flag = ord( self.fileobj.read(1) ) - self.mtime = read32(self.fileobj) - # extraflag = self.fileobj.read(1) - # os = self.fileobj.read(1) - self.fileobj.read(2) - - if flag & FEXTRA: - # Read & discard the extra field, if present - xlen = ord(self.fileobj.read(1)) - xlen = xlen + 256*ord(self.fileobj.read(1)) - self.fileobj.read(xlen) - if flag & FNAME: - # Read and discard a null-terminated string containing the filename - while True: - s = self.fileobj.read(1) - if not s or s=='\000': - break - if flag & FCOMMENT: - # Read and discard a null-terminated string containing a comment - while True: - s = self.fileobj.read(1) - if not s or s=='\000': - break - if flag & FHCRC: - self.fileobj.read(2) # Read & discard the 16-bit header CRC - - def write(self,data): - self._check_closed() - if self.mode != WRITE: - import errno - raise IOError(errno.EBADF, "write() on read-only GzipFile object") - - if self.fileobj is None: - raise ValueError, 
"write() on closed GzipFile object" - - # Convert data type if called by io.BufferedWriter. - if isinstance(data, memoryview): - data = data.tobytes() - - if len(data) > 0: - self.size = self.size + len(data) - self.crc = zlib.crc32(data, self.crc) & 0xffffffffL - self.fileobj.write( self.compress.compress(data) ) - self.offset += len(data) - - return len(data) - - def read(self, size=-1): - self._check_closed() - if self.mode != READ: - import errno - raise IOError(errno.EBADF, "read() on write-only GzipFile object") - - if self.extrasize <= 0 and self.fileobj is None: - return '' - - readsize = 1024 - if size < 0: # get the whole thing - try: - while True: - self._read(readsize) - readsize = min(self.max_read_chunk, readsize * 2) - except EOFError: - size = self.extrasize - elif size == 0: - return "" - else: # just get some more of it - try: - while size > self.extrasize: - self._read(readsize) - readsize = min(self.max_read_chunk, readsize * 2) - except EOFError: - if size > self.extrasize: - size = self.extrasize - - offset = self.offset - self.extrastart - chunk = self.extrabuf[offset: offset + size] - self.extrasize = self.extrasize - size - - self.offset += size - return chunk - - def _unread(self, buf): - self.extrasize = len(buf) + self.extrasize - self.offset -= len(buf) - - def _read(self, size=1024): - if self.fileobj is None: - raise EOFError, "Reached EOF" - - if self._new_member: - # If the _new_member flag is set, we have to - # jump to the next member, if there is one. - # - # First, check if we're at the end of the file; - # if so, it's time to stop; no more members to read. - pos = self.fileobj.tell() # Save current position - self.fileobj.seek(0, 2) # Seek to end of file - if pos == self.fileobj.tell(): - raise EOFError, "Reached EOF" - else: - self.fileobj.seek( pos ) # Return to original position - - self._init_read() - self._read_gzip_header() - self.decompress = zlib.decompressobj(-zlib.MAX_WBITS) - self._new_member = False - - # Read a chunk of data from the file - buf = self.fileobj.read(size) - - # If the EOF has been reached, flush the decompression object - # and mark this object as finished. - - if buf == "": - uncompress = self.decompress.flush() - self._read_eof() - self._add_read_data( uncompress ) - raise EOFError, 'Reached EOF' - - uncompress = self.decompress.decompress(buf) - self._add_read_data( uncompress ) - - if self.decompress.unused_data != "": - # Ending case: we've come to the end of a member in the file, - # so seek back to the start of the unused data, finish up - # this member, and read a new gzip header. - # (The number of bytes to seek back is the length of the unused - # data, minus 8 because _read_eof() will rewind a further 8 bytes) - self.fileobj.seek( -len(self.decompress.unused_data)+8, 1) - - # Check the CRC and file size, and set the flag so we read - # a new member on the next call - self._read_eof() - self._new_member = True - - def _add_read_data(self, data): - self.crc = zlib.crc32(data, self.crc) & 0xffffffffL - offset = self.offset - self.extrastart - self.extrabuf = self.extrabuf[offset:] + data - self.extrasize = self.extrasize + len(data) - self.extrastart = self.offset - self.size = self.size + len(data) - - def _read_eof(self): - # We've read to the end of the file, so we have to rewind in order - # to reread the 8 bytes containing the CRC and the file size. - # We check the that the computed CRC and size of the - # uncompressed data matches the stored values. 
Note that the size - # stored is the true file size mod 2**32. - self.fileobj.seek(-8, 1) - crc32 = read32(self.fileobj) - isize = read32(self.fileobj) # may exceed 2GB - if crc32 != self.crc: - raise IOError("CRC check failed %s != %s" % (hex(crc32), - hex(self.crc))) - elif isize != (self.size & 0xffffffffL): - raise IOError, "Incorrect length of data produced" - - # Gzip files can be padded with zeroes and still have archives. - # Consume all zero bytes and set the file position to the first - # non-zero byte. See http://www.gzip.org/#faq8 - c = "\x00" - while c == "\x00": - c = self.fileobj.read(1) - if c: - self.fileobj.seek(-1, 1) - - @property - def closed(self): - return self.fileobj is None - - def close(self): - if self.fileobj is None: - return - if self.mode == WRITE: - self.fileobj.write(self.compress.flush()) - write32u(self.fileobj, self.crc) - # self.size may exceed 2GB, or even 4GB - write32u(self.fileobj, self.size & 0xffffffffL) - self.fileobj = None - elif self.mode == READ: - self.fileobj = None - if self.myfileobj: - self.myfileobj.close() - self.myfileobj = None - - def flush(self,zlib_mode=zlib.Z_SYNC_FLUSH): - self._check_closed() - if self.mode == WRITE: - # Ensure the compressor's buffer is flushed - self.fileobj.write(self.compress.flush(zlib_mode)) - self.fileobj.flush() - - def fileno(self): - """Invoke the underlying file object's fileno() method. - - This will raise AttributeError if the underlying file object - doesn't support fileno(). - """ - return self.fileobj.fileno() - - def rewind(self): - '''Return the uncompressed stream file position indicator to the - beginning of the file''' - if self.mode != READ: - raise IOError("Can't rewind in write mode") - self.fileobj.seek(0) - self._new_member = True - self.extrabuf = "" - self.extrasize = 0 - self.extrastart = 0 - self.offset = 0 - - def readable(self): - return self.mode == READ - - def writable(self): - return self.mode == WRITE - - def seekable(self): - return True - - def seek(self, offset, whence=0): - if whence: - if whence == 1: - offset = self.offset + offset - else: - raise ValueError('Seek from end not supported') - if self.mode == WRITE: - if offset < self.offset: - raise IOError('Negative seek in write mode') - count = offset - self.offset - for i in range(count // 1024): - self.write(1024 * '\0') - self.write((count % 1024) * '\0') - elif self.mode == READ: - if offset == self.offset: - self.read(0) # to make sure that this file is open - return self.offset - if offset < self.offset: - # for negative seek, rewind and do positive seek - self.rewind() - count = offset - self.offset - for i in range(count // 1024): - self.read(1024) - self.read(count % 1024) - - return self.offset - - def readline(self, size=-1): - if size < 0: - # Shortcut common case - newline found in buffer. - offset = self.offset - self.extrastart - i = self.extrabuf.find('\n', offset) + 1 - if i > 0: - self.extrasize -= i - offset - self.offset += i - offset - return self.extrabuf[offset: i] - - size = sys.maxint - readsize = self.min_readsize - else: - readsize = size - bufs = [] - while size != 0: - c = self.read(readsize) - i = c.find('\n') - - # We set i=size to break out of the loop under two - # conditions: 1) there's no newline, and the chunk is - # larger than size, or 2) there is a newline, but the - # resulting line would be longer than 'size'. 
- if (size <= i) or (i == -1 and len(c) > size): - i = size - 1 - - if i >= 0 or c == '': - bufs.append(c[:i + 1]) # Add portion of last chunk - self._unread(c[i + 1:]) # Push back rest of chunk - break - - # Append chunk to list, decrease 'size', - bufs.append(c) - size = size - len(c) - readsize = min(size, readsize * 2) - if readsize > self.min_readsize: - self.min_readsize = min(readsize, self.min_readsize * 2, 512) - return ''.join(bufs) # Return resulting line - - -def _test(): - # Act like gzip; with -d, act like gunzip. - # The input file is not deleted, however, nor are any other gzip - # options or features supported. - args = sys.argv[1:] - decompress = args and args[0] == "-d" - if decompress: - args = args[1:] - if not args: - args = ["-"] - for arg in args: - if decompress: - if arg == "-": - f = GzipFile(filename="", mode="rb", fileobj=sys.stdin) - g = sys.stdout - else: - if arg[-3:] != ".gz": - print "filename doesn't end in .gz:", repr(arg) - continue - f = open(arg, "rb") - g = __builtin__.open(arg[:-3], "wb") - else: - if arg == "-": - f = sys.stdin - g = GzipFile(filename="", mode="wb", fileobj=sys.stdout) - else: - f = __builtin__.open(arg, "rb") - g = open(arg + ".gz", "wb") - while True: - chunk = f.read(1024) - if not chunk: - break - g.write(chunk) - if g is not sys.stdout: - g.close() - if f is not sys.stdin: - f.close() - -if __name__ == '__main__': - _test() diff --git a/lib-python/modified-2.7/ssl.py b/lib-python/modified-2.7/ssl.py --- a/lib-python/modified-2.7/ssl.py +++ b/lib-python/modified-2.7/ssl.py @@ -62,7 +62,6 @@ from _ssl import OPENSSL_VERSION_NUMBER, OPENSSL_VERSION_INFO, OPENSSL_VERSION from _ssl import SSLError from _ssl import CERT_NONE, CERT_OPTIONAL, CERT_REQUIRED -from _ssl import PROTOCOL_SSLv2, PROTOCOL_SSLv3, PROTOCOL_SSLv23, PROTOCOL_TLSv1 from _ssl import RAND_status, RAND_egd, RAND_add from _ssl import \ SSL_ERROR_ZERO_RETURN, \ @@ -74,6 +73,18 @@ SSL_ERROR_WANT_CONNECT, \ SSL_ERROR_EOF, \ SSL_ERROR_INVALID_ERROR_CODE +from _ssl import PROTOCOL_SSLv3, PROTOCOL_SSLv23, PROTOCOL_TLSv1 +_PROTOCOL_NAMES = { + PROTOCOL_TLSv1: "TLSv1", + PROTOCOL_SSLv23: "SSLv23", + PROTOCOL_SSLv3: "SSLv3", +} +try: + from _ssl import PROTOCOL_SSLv2 +except ImportError: + pass +else: + _PROTOCOL_NAMES[PROTOCOL_SSLv2] = "SSLv2" from socket import socket, _fileobject, error as socket_error from socket import getnameinfo as _getnameinfo @@ -400,16 +411,7 @@ return DER_cert_to_PEM_cert(dercert) def get_protocol_name(protocol_code): - if protocol_code == PROTOCOL_TLSv1: - return "TLSv1" - elif protocol_code == PROTOCOL_SSLv23: - return "SSLv23" - elif protocol_code == PROTOCOL_SSLv2: - return "SSLv2" - elif protocol_code == PROTOCOL_SSLv3: - return "SSLv3" - else: - return "" + return _PROTOCOL_NAMES.get(protocol_code, '') # a replacement for the old socket.ssl function diff --git a/lib-python/modified-2.7/tarfile.py b/lib-python/modified-2.7/tarfile.py --- a/lib-python/modified-2.7/tarfile.py +++ b/lib-python/modified-2.7/tarfile.py @@ -252,8 +252,8 @@ the high bit set. So we calculate two checksums, unsigned and signed. 
""" - unsigned_chksum = 256 + sum(struct.unpack("148B8x356B", buf[:512])) - signed_chksum = 256 + sum(struct.unpack("148b8x356b", buf[:512])) + unsigned_chksum = 256 + sum(struct.unpack("148B", buf[:148]) + struct.unpack("356B", buf[156:512])) + signed_chksum = 256 + sum(struct.unpack("148b", buf[:148]) + struct.unpack("356b", buf[156:512])) return unsigned_chksum, signed_chksum def copyfileobj(src, dst, length=None): @@ -265,6 +265,7 @@ if length is None: shutil.copyfileobj(src, dst) return + BUFSIZE = 16 * 1024 blocks, remainder = divmod(length, BUFSIZE) for b in xrange(blocks): @@ -801,19 +802,19 @@ if self.closed: raise ValueError("I/O operation on closed file") + buf = "" if self.buffer: if size is None: - buf = self.buffer + self.fileobj.read() + buf = self.buffer self.buffer = "" else: buf = self.buffer[:size] self.buffer = self.buffer[size:] - buf += self.fileobj.read(size - len(buf)) + + if size is None: + buf += self.fileobj.read() else: - if size is None: - buf = self.fileobj.read() - else: - buf = self.fileobj.read(size) + buf += self.fileobj.read(size - len(buf)) self.position += len(buf) return buf diff --git a/lib-python/modified-2.7/test/test_multiprocessing.py b/lib-python/modified-2.7/test/test_multiprocessing.py --- a/lib-python/modified-2.7/test/test_multiprocessing.py +++ b/lib-python/modified-2.7/test/test_multiprocessing.py @@ -510,7 +510,6 @@ p.join() - @unittest.skipIf(os.name == 'posix', "PYPY: FIXME") def test_qsize(self): q = self.Queue() try: @@ -532,7 +531,6 @@ time.sleep(DELTA) q.task_done() - @unittest.skipIf(os.name == 'posix', "PYPY: FIXME") def test_task_done(self): queue = self.JoinableQueue() @@ -1091,7 +1089,6 @@ class _TestPoolWorkerLifetime(BaseTestCase): ALLOWED_TYPES = ('processes', ) - @unittest.skipIf(os.name == 'posix', "PYPY: FIXME") def test_pool_worker_lifetime(self): p = multiprocessing.Pool(3, maxtasksperchild=10) self.assertEqual(3, len(p._pool)) @@ -1280,7 +1277,6 @@ queue = manager.get_queue() queue.put('hello world') - @unittest.skipIf(os.name == 'posix', "PYPY: FIXME") def test_rapid_restart(self): authkey = os.urandom(32) manager = QueueManager( @@ -1297,6 +1293,7 @@ queue = manager.get_queue() self.assertEqual(queue.get(), 'hello world') del queue + test_support.gc_collect() manager.shutdown() manager = QueueManager( address=addr, authkey=authkey, serializer=SERIALIZER) @@ -1573,7 +1570,6 @@ ALLOWED_TYPES = ('processes',) - @unittest.skipIf(os.name == 'posix', "PYPY: FIXME") def test_heap(self): iterations = 5000 maxblocks = 50 diff --git a/lib-python/modified-2.7/test/test_ssl.py b/lib-python/modified-2.7/test/test_ssl.py --- a/lib-python/modified-2.7/test/test_ssl.py +++ b/lib-python/modified-2.7/test/test_ssl.py @@ -58,32 +58,35 @@ # Issue #9415: Ubuntu hijacks their OpenSSL and forcefully disables SSLv2 def skip_if_broken_ubuntu_ssl(func): - # We need to access the lower-level wrapper in order to create an - # implicit SSL context without trying to connect or listen. - try: - import _ssl - except ImportError: - # The returned function won't get executed, just ignore the error - pass - @functools.wraps(func) - def f(*args, **kwargs): + if hasattr(ssl, 'PROTOCOL_SSLv2'): + # We need to access the lower-level wrapper in order to create an + # implicit SSL context without trying to connect or listen. 
try: - s = socket.socket(socket.AF_INET) - _ssl.sslwrap(s._sock, 0, None, None, - ssl.CERT_NONE, ssl.PROTOCOL_SSLv2, None, None) - except ssl.SSLError as e: - if (ssl.OPENSSL_VERSION_INFO == (0, 9, 8, 15, 15) and - platform.linux_distribution() == ('debian', 'squeeze/sid', '') - and 'Invalid SSL protocol variant specified' in str(e)): - raise unittest.SkipTest("Patched Ubuntu OpenSSL breaks behaviour") - return func(*args, **kwargs) - return f + import _ssl + except ImportError: + # The returned function won't get executed, just ignore the error + pass + @functools.wraps(func) + def f(*args, **kwargs): + try: + s = socket.socket(socket.AF_INET) + _ssl.sslwrap(s._sock, 0, None, None, + ssl.CERT_NONE, ssl.PROTOCOL_SSLv2, None, None) + except ssl.SSLError as e: + if (ssl.OPENSSL_VERSION_INFO == (0, 9, 8, 15, 15) and + platform.linux_distribution() == ('debian', 'squeeze/sid', '') + and 'Invalid SSL protocol variant specified' in str(e)): + raise unittest.SkipTest("Patched Ubuntu OpenSSL breaks behaviour") + return func(*args, **kwargs) + return f + else: + return func class BasicSocketTests(unittest.TestCase): def test_constants(self): - ssl.PROTOCOL_SSLv2 + #ssl.PROTOCOL_SSLv2 ssl.PROTOCOL_SSLv23 ssl.PROTOCOL_SSLv3 ssl.PROTOCOL_TLSv1 @@ -966,7 +969,8 @@ try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_SSLv3, True) try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_SSLv3, True, ssl.CERT_OPTIONAL) try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_SSLv3, True, ssl.CERT_REQUIRED) - try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_SSLv2, False) + if hasattr(ssl, 'PROTOCOL_SSLv2'): + try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_SSLv2, False) try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_SSLv23, False) try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_TLSv1, False) @@ -978,7 +982,8 @@ try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_TLSv1, True) try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_TLSv1, True, ssl.CERT_OPTIONAL) try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_TLSv1, True, ssl.CERT_REQUIRED) - try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_SSLv2, False) + if hasattr(ssl, 'PROTOCOL_SSLv2'): + try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_SSLv2, False) try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_SSLv3, False) try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_SSLv23, False) diff --git a/lib_pypy/_functools.py b/lib_pypy/_functools.py --- a/lib_pypy/_functools.py +++ b/lib_pypy/_functools.py @@ -14,10 +14,9 @@ raise TypeError("the first argument must be callable") self.func = func self.args = args - self.keywords = keywords + self.keywords = keywords or None def __call__(self, *fargs, **fkeywords): - newkeywords = self.keywords.copy() - newkeywords.update(fkeywords) - return self.func(*(self.args + fargs), **newkeywords) - + if self.keywords is not None: + fkeywords = dict(self.keywords, **fkeywords) + return self.func(*(self.args + fargs), **fkeywords) diff --git a/lib_pypy/greenlet.py b/lib_pypy/greenlet.py --- a/lib_pypy/greenlet.py +++ b/lib_pypy/greenlet.py @@ -48,23 +48,23 @@ def switch(self, *args): "Switch execution to this greenlet, optionally passing the values " "given as argument(s). Returns the value passed when switching back." 
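As the docstring above says, a value passed to switch() reappears as the return value of the switch() call that suspended the other side. A tiny usage sketch of that protocol, assuming the lib_pypy greenlet module shown in this diff is importable as greenlet (plain application code, not part of the merge):

    from greenlet import greenlet, getcurrent

    def worker(x):
        # hand x + 1 back to the main greenlet and suspend here
        return getcurrent().parent.switch(x + 1)

    g = greenlet(worker)
    assert g.switch(41) == 42   # worker received 41 and switched 42 back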
- return self.__switch(_continulet.switch, args) + return self.__switch('switch', args) def throw(self, typ=GreenletExit, val=None, tb=None): "raise exception in greenlet, return value passed when switching back" - return self.__switch(_continulet.throw, typ, val, tb) + return self.__switch('throw', typ, val, tb) - def __switch(target, unbound_method, *args): + def __switch(target, methodname, *args): current = getcurrent() # while not target: if not target.__started: - if unbound_method != _continulet.throw: + if methodname == 'switch': greenlet_func = _greenlet_start else: greenlet_func = _greenlet_throw _continulet.__init__(target, greenlet_func, *args) - unbound_method = _continulet.switch + methodname = 'switch' args = () target.__started = True break @@ -75,22 +75,8 @@ target = target.parent # try: - if current.__main: - if target.__main: - # switch from main to main - if unbound_method == _continulet.throw: - raise args[0], args[1], args[2] - (args,) = args - else: - # enter from main to target - args = unbound_method(target, *args) - else: - if target.__main: - # leave to go to target=main - args = unbound_method(current, *args) - else: - # switch from non-main to non-main - args = unbound_method(current, *args, to=target) + unbound_method = getattr(_continulet, methodname) + args = unbound_method(current, *args, to=target) except GreenletExit, e: args = (e,) finally: @@ -110,7 +96,16 @@ @property def gr_frame(self): - raise NotImplementedError("attribute 'gr_frame' of greenlet objects") + # xxx this doesn't work when called on either the current or + # the main greenlet of another thread + if self is getcurrent(): + return None + if self.__main: + self = getcurrent() + f = _continulet.__reduce__(self)[2][0] + if not f: + return None + return f.f_back.f_back.f_back # go past start(), __switch(), switch() # ____________________________________________________________ # Internal stuff @@ -138,8 +133,7 @@ try: res = greenlet.run(*args) finally: - if greenlet.parent is not _tls.main: - _continuation.permute(greenlet, greenlet.parent) + _continuation.permute(greenlet, greenlet.parent) return (res,) def _greenlet_throw(greenlet, exc, value, tb): @@ -147,5 +141,4 @@ try: raise exc, value, tb finally: - if greenlet.parent is not _tls.main: - _continuation.permute(greenlet, greenlet.parent) + _continuation.permute(greenlet, greenlet.parent) diff --git a/lib_pypy/pypy_test/test_stackless_pickling.py b/lib_pypy/pypy_test/test_stackless_pickling.py --- a/lib_pypy/pypy_test/test_stackless_pickling.py +++ b/lib_pypy/pypy_test/test_stackless_pickling.py @@ -1,7 +1,3 @@ -""" -this test should probably not run from CPython or py.py. -I'm not entirely sure, how to do that. 
-""" from __future__ import absolute_import from py.test import skip try: @@ -16,11 +12,15 @@ class Test_StacklessPickling: + def test_pickle_main_coroutine(self): + import stackless, pickle + s = pickle.dumps(stackless.coroutine.getcurrent()) + print s + c = pickle.loads(s) + assert c is stackless.coroutine.getcurrent() + def test_basic_tasklet_pickling(self): - try: - import stackless - except ImportError: - skip("can't load stackless and don't know why!!!") + import stackless from stackless import run, schedule, tasklet import pickle diff --git a/lib_pypy/pyrepl/completing_reader.py b/lib_pypy/pyrepl/completing_reader.py --- a/lib_pypy/pyrepl/completing_reader.py +++ b/lib_pypy/pyrepl/completing_reader.py @@ -229,7 +229,8 @@ def after_command(self, cmd): super(CompletingReader, self).after_command(cmd) - if not isinstance(cmd, complete) and not isinstance(cmd, self_insert): + if not isinstance(cmd, self.commands['complete']) \ + and not isinstance(cmd, self.commands['self_insert']): self.cmpltn_reset() def calc_screen(self): diff --git a/lib_pypy/stackless.py b/lib_pypy/stackless.py --- a/lib_pypy/stackless.py +++ b/lib_pypy/stackless.py @@ -5,51 +5,54 @@ """ -import traceback import _continuation -from functools import partial class TaskletExit(Exception): pass CoroutineExit = TaskletExit -class GWrap(_continuation.continulet): - """This is just a wrapper around continulet to allow - to stick additional attributes to a continulet. - To be more concrete, we need a backreference to - the coroutine object""" + +def _coroutine_getcurrent(): + "Returns the current coroutine (i.e. the one which called this function)." + try: + return _tls.current_coroutine + except AttributeError: + # first call in this thread: current == main + return _coroutine_getmain() + +def _coroutine_getmain(): + try: + return _tls.main_coroutine + except AttributeError: + # create the main coroutine for this thread + continulet = _continuation.continulet + main = coroutine() + main._frame = continulet.__new__(continulet) + main._is_started = -1 + _tls.current_coroutine = _tls.main_coroutine = main + return _tls.main_coroutine class coroutine(object): - "we can't have continulet as a base, because continulets can't be rebound" + _is_started = 0 # 0=no, 1=yes, -1=main def __init__(self): self._frame = None - self.is_zombie = False - - def __getattr__(self, attr): - return getattr(self._frame, attr) - - def __del__(self): - self.is_zombie = True - del self._frame - self._frame = None def bind(self, func, *argl, **argd): """coro.bind(f, *argl, **argd) -> None. binds function f to coro. 
f will be called with arguments *argl, **argd """ - if self._frame is None or not self._frame.is_pending(): - - def _func(c, *args, **kwargs): - return func(*args, **kwargs) - - run = partial(_func, *argl, **argd) - self._frame = frame = GWrap(run) - else: + if self.is_alive: raise ValueError("cannot bind a bound coroutine") + def run(c): + _tls.current_coroutine = self + self._is_started = 1 + return func(*argl, **argd) + self._is_started = 0 + self._frame = _continuation.continulet(run) def switch(self): """coro.switch() -> returnvalue @@ -57,46 +60,38 @@ f finishes, the returnvalue is that of f, otherwise None is returned """ - current = _getcurrent() - current._jump_to(self) - - def _jump_to(self, coroutine): - _tls.current_coroutine = coroutine - self._frame.switch(to=coroutine._frame) + current = _coroutine_getcurrent() + try: + current._frame.switch(to=self._frame) + finally: + _tls.current_coroutine = current def kill(self): """coro.kill() : kill coroutine coro""" - _tls.current_coroutine = self - self._frame.throw(CoroutineExit) + current = _coroutine_getcurrent() + try: + current._frame.throw(CoroutineExit, to=self._frame) + finally: + _tls.current_coroutine = current - def _is_alive(self): - if self._frame is None: - return False - return not self._frame.is_pending() - is_alive = property(_is_alive) - del _is_alive + @property + def is_alive(self): + return self._is_started < 0 or ( + self._frame is not None and self._frame.is_pending()) - def getcurrent(): - """coroutine.getcurrent() -> the currently running coroutine""" - try: - return _getcurrent() - except AttributeError: - return _maincoro - getcurrent = staticmethod(getcurrent) + @property + def is_zombie(self): + return self._is_started > 0 and not self._frame.is_pending() + + getcurrent = staticmethod(_coroutine_getcurrent) def __reduce__(self): - raise TypeError, 'pickling is not possible based upon continulets' + if self._is_started < 0: + return _coroutine_getmain, () + else: + return type(self), (), self.__dict__ -def _getcurrent(): - "Returns the current coroutine (i.e. the one which called this function)." - try: - return _tls.current_coroutine - except AttributeError: - # first call in this thread: current == main - _coroutine_create_main() - return _tls.current_coroutine - try: from thread import _local except ImportError: @@ -105,17 +100,8 @@ _tls = _local() -def _coroutine_create_main(): - # create the main coroutine for this thread - _tls.current_coroutine = None - main_coroutine = coroutine() - main_coroutine.bind(lambda x:x) - _tls.main_coroutine = main_coroutine - _tls.current_coroutine = main_coroutine - return main_coroutine - -_maincoro = _coroutine_create_main() +# ____________________________________________________________ from collections import deque @@ -161,10 +147,7 @@ _last_task = next assert not next.blocked if next is not current: - try: - next.switch() - except CoroutineExit: - raise TaskletExit + next.switch() return current def set_schedule_callback(callback): @@ -188,34 +171,6 @@ raise self.type, self.value, self.traceback # -# helpers for pickling -# - -_stackless_primitive_registry = {} - -def register_stackless_primitive(thang, retval_expr='None'): - import types - func = thang - if isinstance(thang, types.MethodType): - func = thang.im_func - code = func.func_code - _stackless_primitive_registry[code] = retval_expr - # It is not too nice to attach info via the code object, but - # I can't think of a better solution without a real transform. 
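
The rewritten stackless.py above creates the main coroutine of each thread lazily, on the first call to getcurrent() in that thread. A plain-Python sketch of that pattern, with threading.local standing in for the interpreter-level _tls object (the names below are illustrative, not the module's API):

    import threading

    _tls = threading.local()

    class Coro(object):
        pass

    def getmain():
        try:
            return _tls.main
        except AttributeError:
            # first use in this thread: current == main
            _tls.current = _tls.main = Coro()
            return _tls.main

    def getcurrent():
        try:
            return _tls.current
        except AttributeError:
            return getmain()
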
- -def rewrite_stackless_primitive(coro_state, alive, tempval): - flags, frame, thunk, parent = coro_state - while frame is not None: - retval_expr = _stackless_primitive_registry.get(frame.f_code) - if retval_expr: - # this tasklet needs to stop pickling here and return its value. - tempval = eval(retval_expr, globals(), frame.f_locals) - coro_state = flags, frame, thunk, parent - break - frame = frame.f_back - return coro_state, alive, tempval - -# # class channel(object): @@ -367,8 +322,6 @@ """ return self._channel_action(None, -1) - register_stackless_primitive(receive, retval_expr='receiver.tempval') - def send_exception(self, exp_type, msg): self.send(bomb(exp_type, exp_type(msg))) @@ -385,9 +338,8 @@ the runnables list. """ return self._channel_action(msg, 1) - - register_stackless_primitive(send) - + + class tasklet(coroutine): """ A tasklet object represents a tiny task in a Python thread. @@ -459,6 +411,7 @@ def _func(): try: try: + coroutine.switch(back) func(*argl, **argd) except TaskletExit: pass @@ -468,6 +421,8 @@ self.func = None coroutine.bind(self, _func) + back = _coroutine_getcurrent() + coroutine.switch(self) self.alive = True _scheduler_append(self) return self @@ -490,39 +445,6 @@ raise RuntimeError, "The current tasklet cannot be removed." # not sure if I will revive this " Use t=tasklet().capture()" _scheduler_remove(self) - - def __reduce__(self): - one, two, coro_state = coroutine.__reduce__(self) - assert one is coroutine - assert two == () - # we want to get rid of the parent thing. - # for now, we just drop it - a, frame, c, d = coro_state - - # Removing all frames related to stackless.py. - # They point to stuff we don't want to be pickled. - - pickleframe = frame - while frame is not None: - if frame.f_code == schedule.func_code: - # Removing everything including and after the - # call to stackless.schedule() - pickleframe = frame.f_back - break - frame = frame.f_back - if d: - assert isinstance(d, coroutine) - coro_state = a, pickleframe, c, None - coro_state, alive, tempval = rewrite_stackless_primitive(coro_state, self.alive, self.tempval) - inst_dict = self.__dict__.copy() - inst_dict.pop('tempval', None) - return self.__class__, (), (coro_state, alive, tempval, inst_dict) - - def __setstate__(self, (coro_state, alive, tempval, inst_dict)): - coroutine.__setstate__(self, coro_state) - self.__dict__.update(inst_dict) - self.alive = alive - self.tempval = tempval def getmain(): """ @@ -611,30 +533,7 @@ global _last_task _global_task_id = 0 _main_tasklet = coroutine.getcurrent() - try: - _main_tasklet.__class__ = tasklet - except TypeError: # we are running pypy-c - class TaskletProxy(object): - """TaskletProxy is needed to give the _main_coroutine tasklet behaviour""" - def __init__(self, coro): - self._coro = coro - - def __getattr__(self,attr): - return getattr(self._coro,attr) - - def __str__(self): - return '' % (self._task_id, self.is_alive) - - def __reduce__(self): - return getmain, () - - __repr__ = __str__ - - - global _main_coroutine - _main_coroutine = _main_tasklet - _main_tasklet = TaskletProxy(_main_tasklet) - assert _main_tasklet.is_alive and not _main_tasklet.is_zombie + _main_tasklet.__class__ = tasklet # XXX HAAAAAAAAAAAAAAAAAAAAACK _last_task = _main_tasklet tasklet._init.im_func(_main_tasklet, label='main') _squeue = deque() diff --git a/py/_code/source.py b/py/_code/source.py --- a/py/_code/source.py +++ b/py/_code/source.py @@ -139,7 +139,7 @@ trysource = self[start:end] if trysource.isparseable(): return start, end - return start, 
end + return start, len(self) def getblockend(self, lineno): # XXX diff --git a/pypy/annotation/annrpython.py b/pypy/annotation/annrpython.py --- a/pypy/annotation/annrpython.py +++ b/pypy/annotation/annrpython.py @@ -149,7 +149,7 @@ desc = olddesc.bind_self(classdef) args = self.bookkeeper.build_args("simple_call", args_s[:]) desc.consider_call_site(self.bookkeeper, desc.getcallfamily(), [desc], - args, annmodel.s_ImpossibleValue) + args, annmodel.s_ImpossibleValue, None) result = [] def schedule(graph, inputcells): result.append((graph, inputcells)) diff --git a/pypy/annotation/bookkeeper.py b/pypy/annotation/bookkeeper.py --- a/pypy/annotation/bookkeeper.py +++ b/pypy/annotation/bookkeeper.py @@ -209,8 +209,8 @@ self.consider_call_site(call_op) for pbc, args_s in self.emulated_pbc_calls.itervalues(): - self.consider_call_site_for_pbc(pbc, 'simple_call', - args_s, s_ImpossibleValue) + self.consider_call_site_for_pbc(pbc, 'simple_call', + args_s, s_ImpossibleValue, None) self.emulated_pbc_calls = {} finally: self.leave() @@ -257,18 +257,18 @@ args_s = [lltype_to_annotation(adtmeth.ll_ptrtype)] + args_s if isinstance(s_callable, SomePBC): s_result = binding(call_op.result, s_ImpossibleValue) - self.consider_call_site_for_pbc(s_callable, - call_op.opname, - args_s, s_result) + self.consider_call_site_for_pbc(s_callable, call_op.opname, args_s, + s_result, call_op) - def consider_call_site_for_pbc(self, s_callable, opname, args_s, s_result): + def consider_call_site_for_pbc(self, s_callable, opname, args_s, s_result, + call_op): descs = list(s_callable.descriptions) if not descs: return family = descs[0].getcallfamily() args = self.build_args(opname, args_s) s_callable.getKind().consider_call_site(self, family, descs, args, - s_result) + s_result, call_op) def getuniqueclassdef(self, cls): """Get the ClassDef associated with the given user cls. 
@@ -656,6 +656,7 @@ whence = None else: whence = emulated # callback case + op = None s_previous_result = s_ImpossibleValue def schedule(graph, inputcells): @@ -663,7 +664,7 @@ results = [] for desc in descs: - results.append(desc.pycall(schedule, args, s_previous_result)) + results.append(desc.pycall(schedule, args, s_previous_result, op)) s_result = unionof(*results) return s_result diff --git a/pypy/annotation/description.py b/pypy/annotation/description.py --- a/pypy/annotation/description.py +++ b/pypy/annotation/description.py @@ -255,7 +255,11 @@ raise TypeError, "signature mismatch: %s" % e.getmsg(self.name) return inputcells - def specialize(self, inputcells): + def specialize(self, inputcells, op=None): + if (op is None and + getattr(self.bookkeeper, "position_key", None) is not None): + _, block, i = self.bookkeeper.position_key + op = block.operations[i] if self.specializer is None: # get the specializer based on the tag of the 'pyobj' # (if any), according to the current policy @@ -269,11 +273,14 @@ enforceargs = Sig(*enforceargs) self.pyobj._annenforceargs_ = enforceargs enforceargs(self, inputcells) # can modify inputcells in-place - return self.specializer(self, inputcells) + if getattr(self.pyobj, '_annspecialcase_', '').endswith("call_location"): + return self.specializer(self, inputcells, op) + else: + return self.specializer(self, inputcells) - def pycall(self, schedule, args, s_previous_result): + def pycall(self, schedule, args, s_previous_result, op=None): inputcells = self.parse_arguments(args) - result = self.specialize(inputcells) + result = self.specialize(inputcells, op) if isinstance(result, FunctionGraph): graph = result # common case # if that graph has a different signature, we need to re-parse @@ -296,17 +303,17 @@ None, # selfclassdef name) - def consider_call_site(bookkeeper, family, descs, args, s_result): + def consider_call_site(bookkeeper, family, descs, args, s_result, op): shape = rawshape(args) - row = FunctionDesc.row_to_consider(descs, args) + row = FunctionDesc.row_to_consider(descs, args, op) family.calltable_add_row(shape, row) consider_call_site = staticmethod(consider_call_site) - def variant_for_call_site(bookkeeper, family, descs, args): + def variant_for_call_site(bookkeeper, family, descs, args, op): shape = rawshape(args) bookkeeper.enter(None) try: - row = FunctionDesc.row_to_consider(descs, args) + row = FunctionDesc.row_to_consider(descs, args, op) finally: bookkeeper.leave() index = family.calltable_lookup_row(shape, row) @@ -316,7 +323,7 @@ def rowkey(self): return self - def row_to_consider(descs, args): + def row_to_consider(descs, args, op): # see comments in CallFamily from pypy.annotation.model import s_ImpossibleValue row = {} @@ -324,7 +331,7 @@ def enlist(graph, ignore): row[desc.rowkey()] = graph return s_ImpossibleValue # meaningless - desc.pycall(enlist, args, s_ImpossibleValue) + desc.pycall(enlist, args, s_ImpossibleValue, op) return row row_to_consider = staticmethod(row_to_consider) @@ -521,7 +528,7 @@ "specialization" % (self.name,)) return self.getclassdef(None) - def pycall(self, schedule, args, s_previous_result): + def pycall(self, schedule, args, s_previous_result, op=None): from pypy.annotation.model import SomeInstance, SomeImpossibleValue if self.specialize: if self.specialize == 'specialize:ctr_location': @@ -664,7 +671,7 @@ cdesc = cdesc.basedesc return s_result # common case - def consider_call_site(bookkeeper, family, descs, args, s_result): + def consider_call_site(bookkeeper, family, descs, args, 
s_result, op): from pypy.annotation.model import SomeInstance, SomePBC, s_None if len(descs) == 1: # call to a single class, look at the result annotation @@ -709,7 +716,7 @@ initdescs[0].mergecallfamilies(*initdescs[1:]) initfamily = initdescs[0].getcallfamily() MethodDesc.consider_call_site(bookkeeper, initfamily, initdescs, - args, s_None) + args, s_None, op) consider_call_site = staticmethod(consider_call_site) def getallbases(self): @@ -782,13 +789,13 @@ def getuniquegraph(self): return self.funcdesc.getuniquegraph() - def pycall(self, schedule, args, s_previous_result): + def pycall(self, schedule, args, s_previous_result, op=None): from pypy.annotation.model import SomeInstance if self.selfclassdef is None: raise Exception("calling %r" % (self,)) s_instance = SomeInstance(self.selfclassdef, flags = self.flags) args = args.prepend(s_instance) - return self.funcdesc.pycall(schedule, args, s_previous_result) + return self.funcdesc.pycall(schedule, args, s_previous_result, op) def bind_under(self, classdef, name): self.bookkeeper.warning("rebinding an already bound %r" % (self,)) @@ -801,10 +808,10 @@ self.name, flags) - def consider_call_site(bookkeeper, family, descs, args, s_result): + def consider_call_site(bookkeeper, family, descs, args, s_result, op): shape = rawshape(args, nextra=1) # account for the extra 'self' funcdescs = [methoddesc.funcdesc for methoddesc in descs] - row = FunctionDesc.row_to_consider(descs, args) + row = FunctionDesc.row_to_consider(descs, args, op) family.calltable_add_row(shape, row) consider_call_site = staticmethod(consider_call_site) @@ -956,16 +963,16 @@ return '' % (self.funcdesc, self.frozendesc) - def pycall(self, schedule, args, s_previous_result): + def pycall(self, schedule, args, s_previous_result, op=None): from pypy.annotation.model import SomePBC s_self = SomePBC([self.frozendesc]) args = args.prepend(s_self) - return self.funcdesc.pycall(schedule, args, s_previous_result) + return self.funcdesc.pycall(schedule, args, s_previous_result, op) - def consider_call_site(bookkeeper, family, descs, args, s_result): + def consider_call_site(bookkeeper, family, descs, args, s_result, op): shape = rawshape(args, nextra=1) # account for the extra 'self' funcdescs = [mofdesc.funcdesc for mofdesc in descs] - row = FunctionDesc.row_to_consider(descs, args) + row = FunctionDesc.row_to_consider(descs, args, op) family.calltable_add_row(shape, row) consider_call_site = staticmethod(consider_call_site) diff --git a/pypy/annotation/policy.py b/pypy/annotation/policy.py --- a/pypy/annotation/policy.py +++ b/pypy/annotation/policy.py @@ -1,7 +1,7 @@ # base annotation policy for specialization from pypy.annotation.specialize import default_specialize as default from pypy.annotation.specialize import specialize_argvalue, specialize_argtype, specialize_arglistitemtype -from pypy.annotation.specialize import memo +from pypy.annotation.specialize import memo, specialize_call_location # for some reason, model must be imported first, # or we create a cycle. 
from pypy.annotation import model as annmodel @@ -75,6 +75,7 @@ specialize__arg = staticmethod(specialize_argvalue) # specialize:arg(N) specialize__argtype = staticmethod(specialize_argtype) # specialize:argtype(N) specialize__arglistitemtype = staticmethod(specialize_arglistitemtype) + specialize__call_location = staticmethod(specialize_call_location) def specialize__ll(pol, *args): from pypy.rpython.annlowlevel import LowLevelAnnotatorPolicy diff --git a/pypy/annotation/specialize.py b/pypy/annotation/specialize.py --- a/pypy/annotation/specialize.py +++ b/pypy/annotation/specialize.py @@ -370,3 +370,7 @@ else: key = s.listdef.listitem.s_value.knowntype return maybe_star_args(funcdesc, key, args_s) + +def specialize_call_location(funcdesc, args_s, op): + assert op is not None + return maybe_star_args(funcdesc, op, args_s) diff --git a/pypy/annotation/test/test_annrpython.py b/pypy/annotation/test/test_annrpython.py --- a/pypy/annotation/test/test_annrpython.py +++ b/pypy/annotation/test/test_annrpython.py @@ -1099,8 +1099,8 @@ allocdesc = a.bookkeeper.getdesc(alloc) s_C1 = a.bookkeeper.immutablevalue(C1) s_C2 = a.bookkeeper.immutablevalue(C2) - graph1 = allocdesc.specialize([s_C1]) - graph2 = allocdesc.specialize([s_C2]) + graph1 = allocdesc.specialize([s_C1], None) + graph2 = allocdesc.specialize([s_C2], None) assert a.binding(graph1.getreturnvar()).classdef == C1df assert a.binding(graph2.getreturnvar()).classdef == C2df assert graph1 in a.translator.graphs @@ -1135,8 +1135,8 @@ allocdesc = a.bookkeeper.getdesc(alloc) s_C1 = a.bookkeeper.immutablevalue(C1) s_C2 = a.bookkeeper.immutablevalue(C2) - graph1 = allocdesc.specialize([s_C1, s_C2]) - graph2 = allocdesc.specialize([s_C2, s_C2]) + graph1 = allocdesc.specialize([s_C1, s_C2], None) + graph2 = allocdesc.specialize([s_C2, s_C2], None) assert a.binding(graph1.getreturnvar()).classdef == C1df assert a.binding(graph2.getreturnvar()).classdef == C2df assert graph1 in a.translator.graphs @@ -1194,6 +1194,19 @@ assert len(executedesc._cache[(0, 'star', 2)].startblock.inputargs) == 4 assert len(executedesc._cache[(1, 'star', 3)].startblock.inputargs) == 5 + def test_specialize_call_location(self): + def g(a): + return a + g._annspecialcase_ = "specialize:call_location" + def f(x): + return g(x) + f._annspecialcase_ = "specialize:argtype(0)" + def h(y): + w = f(y) + return int(f(str(y))) + w + a = self.RPythonAnnotator() + assert a.build_types(h, [int]) == annmodel.SomeInteger() + def test_assert_list_doesnt_lose_info(self): class T(object): pass diff --git a/pypy/doc/index.rst b/pypy/doc/index.rst --- a/pypy/doc/index.rst +++ b/pypy/doc/index.rst @@ -21,8 +21,6 @@ * `Papers`_: Academic papers, talks, and related projects -* `Videos`_: Videos of PyPy talks and presentations - * `speed.pypy.org`_: Daily benchmarks of how fast PyPy is * `potential project ideas`_: In case you want to get your feet wet... diff --git a/pypy/doc/stackless.rst b/pypy/doc/stackless.rst --- a/pypy/doc/stackless.rst +++ b/pypy/doc/stackless.rst @@ -66,7 +66,7 @@ In practice, in PyPy, you cannot change the ``f_back`` of an abitrary frame, but only of frames stored in ``continulets``. -Continulets are internally implemented using stacklets. Stacklets are a +Continulets are internally implemented using stacklets_. Stacklets are a bit more primitive (they are really one-shot continuations), but that idea only works in C, not in Python. 
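
The specialize_call_location() hook and its test earlier in this changeset give each call site of a function its own specialized graph. As a loose pure-Python analogy (the decorator and cache below are made up for illustration and are not RPython), the effect is like keying a cache on the caller's location, so different call sites never share a specialization:

    import sys

    _graphs = {}   # (filename, lineno) -> per-call-site "graph"

    def call_location_specialized(func):
        def wrapper(*args):
            caller = sys._getframe(1)
            key = (caller.f_code.co_filename, caller.f_lineno)
            # every distinct call site gets, and keeps, its own entry
            graph = _graphs.setdefault(key, {"argtypes": set()})
            graph["argtypes"].add(tuple(type(a).__name__ for a in args))
            return func(*args)
        return wrapper

    @call_location_specialized
    def g(a):
        return a

    def h(y):
        w = g(y)                       # call site 1: only ever sees ints
        return int(g(str(y))) + w      # call site 2: only ever sees strs

    h(3)
    assert len(_graphs) == 2
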
The basic idea of continulets is to have at any point in time a complete valid stack; this is important @@ -215,11 +215,6 @@ * Support for other CPUs than x86 and x86-64 -* The app-level ``f_back`` field of frames crossing continulet boundaries - is None for now, unlike what I explain in the theoretical overview - above. It mostly means that in a ``pdb.set_trace()`` you cannot go - ``up`` past countinulet boundaries. This could be fixed. - .. __: `recursion depth limit`_ (*) Pickling, as well as changing threads, could be implemented by using @@ -285,6 +280,24 @@ to use other interfaces like genlets and greenlets.) +Stacklets ++++++++++ + +Continulets are internally implemented using stacklets, which is the +generic RPython-level building block for "one-shot continuations". For +more information about them please see the documentation in the C source +at `pypy/translator/c/src/stacklet/stacklet.h`_. + +The module ``pypy.rlib.rstacklet`` is a thin wrapper around the above +functions. The key point is that new() and switch() always return a +fresh stacklet handle (or an empty one), and switch() additionally +consumes one. It makes no sense to have code in which the returned +handle is ignored, or used more than once. Note that ``stacklet.c`` is +written assuming that the user knows that, and so no additional checking +occurs; this can easily lead to obscure crashes if you don't use a +wrapper like PyPy's '_continuation' module. + + Theory of composability +++++++++++++++++++++++ diff --git a/pypy/interpreter/argument.py b/pypy/interpreter/argument.py --- a/pypy/interpreter/argument.py +++ b/pypy/interpreter/argument.py @@ -125,6 +125,7 @@ ### Manipulation ### + @jit.look_inside_iff(lambda self: not self._dont_jit) def unpack(self): # slowish "Return a ([w1,w2...], {'kw':w3...}) pair." kwds_w = {} @@ -245,6 +246,8 @@ ### Parsing for function calls ### + # XXX: this should be @jit.look_inside_iff, but we need key word arguments, + # and it doesn't support them for now. def _match_signature(self, w_firstarg, scope_w, signature, defaults_w=None, blindargs=0): """Parse args and kwargs according to the signature of a code object, diff --git a/pypy/interpreter/baseobjspace.py b/pypy/interpreter/baseobjspace.py --- a/pypy/interpreter/baseobjspace.py +++ b/pypy/interpreter/baseobjspace.py @@ -8,13 +8,13 @@ from pypy.interpreter.miscutils import ThreadLocals from pypy.tool.cache import Cache from pypy.tool.uid import HUGEVAL_BYTES -from pypy.rlib.objectmodel import we_are_translated +from pypy.rlib.objectmodel import we_are_translated, newlist from pypy.rlib.debug import make_sure_not_resized from pypy.rlib.timer import DummyTimer, Timer from pypy.rlib.rarithmetic import r_uint from pypy.rlib import jit from pypy.tool.sourcetools import func_with_new_name -import os, sys, py +import os, sys __all__ = ['ObjSpace', 'OperationError', 'Wrappable', 'W_Root'] @@ -757,7 +757,18 @@ w_iterator = self.iter(w_iterable) # If we know the expected length we can preallocate. 
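
The unpackiterable() change above asks the iterable for a length estimate and preallocates when it can, treating the answer purely as a hint: objects may not support len(), may raise from it, or may simply lie. The idea in plain Python (unpack_with_hint() is a made-up helper; the real code uses pypy.rlib.objectmodel.newlist, which only preallocates storage):

    def unpack_with_hint(iterable):
        try:
            hint = len(iterable)
        except (AttributeError, TypeError):
            hint = -1                     # no usable estimate
        items = [None] * hint if hint > 0 else []
        idx = 0
        for x in iterable:
            if idx < len(items):
                items[idx] = x            # fill a preallocated slot
            else:
                items.append(x)           # the hint was too small
            idx += 1
        del items[idx:]                   # the hint was too large
        return items

    assert unpack_with_hint(iter("abc")) == ["a", "b", "c"]
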
if expected_length == -1: - items = [] + try: + lgt_estimate = self.len_w(w_iterable) + except OperationError, o: + if (not o.match(self, self.w_AttributeError) and + not o.match(self, self.w_TypeError)): + raise + items = [] + else: + try: + items = newlist(lgt_estimate) + except MemoryError: + items = [] # it might have lied else: items = [None] * expected_length idx = 0 diff --git a/pypy/interpreter/executioncontext.py b/pypy/interpreter/executioncontext.py --- a/pypy/interpreter/executioncontext.py +++ b/pypy/interpreter/executioncontext.py @@ -1,5 +1,4 @@ import sys -from pypy.interpreter.miscutils import Stack from pypy.interpreter.error import OperationError from pypy.rlib.rarithmetic import LONG_BIT from pypy.rlib.unroll import unrolling_iterable @@ -48,6 +47,7 @@ return frame @staticmethod + @jit.unroll_safe # should usually loop 0 times, very rarely more than once def getnextframe_nohidden(frame): frame = frame.f_backref() while frame and frame.hide(): @@ -81,58 +81,6 @@ # ________________________________________________________________ - - class Subcontext(object): - # coroutine: subcontext support - - def __init__(self): - self.topframe = None - self.w_tracefunc = None - self.profilefunc = None - self.w_profilefuncarg = None - self.is_tracing = 0 - - def enter(self, ec): - ec.topframeref = jit.non_virtual_ref(self.topframe) - ec.w_tracefunc = self.w_tracefunc - ec.profilefunc = self.profilefunc - ec.w_profilefuncarg = self.w_profilefuncarg - ec.is_tracing = self.is_tracing - ec.space.frame_trace_action.fire() - - def leave(self, ec): - self.topframe = ec.gettopframe() - self.w_tracefunc = ec.w_tracefunc - self.profilefunc = ec.profilefunc - self.w_profilefuncarg = ec.w_profilefuncarg - self.is_tracing = ec.is_tracing - - def clear_framestack(self): - self.topframe = None - - # the following interface is for pickling and unpickling - def getstate(self, space): - if self.topframe is None: - return space.w_None - return self.topframe - - def setstate(self, space, w_state): - from pypy.interpreter.pyframe import PyFrame - if space.is_w(w_state, space.w_None): - self.topframe = None - else: - self.topframe = space.interp_w(PyFrame, w_state) - - def getframestack(self): - lst = [] - f = self.topframe - while f is not None: - lst.append(f) - f = f.f_backref() - lst.reverse() - return lst - # coroutine: I think this is all, folks! - def c_call_trace(self, frame, w_func, args=None): "Profile the call of a builtin function" self._c_call_return_trace(frame, w_func, args, 'c_call') diff --git a/pypy/interpreter/function.py b/pypy/interpreter/function.py --- a/pypy/interpreter/function.py +++ b/pypy/interpreter/function.py @@ -242,8 +242,10 @@ # we have been seen by other means so rtyping should not choke # on us identifier = self.code.identifier - assert Function._all.get(identifier, self) is self, ("duplicate " - "function ids") + previous = Function._all.get(identifier, self) + assert previous is self, ( + "duplicate function ids with identifier=%r: %r and %r" % ( + identifier, previous, self)) self.add_to_table() return False diff --git a/pypy/interpreter/miscutils.py b/pypy/interpreter/miscutils.py --- a/pypy/interpreter/miscutils.py +++ b/pypy/interpreter/miscutils.py @@ -2,154 +2,6 @@ Miscellaneous utilities. 
""" -import types - -from pypy.rlib.rarithmetic import r_uint - -class RootStack: - pass - -class Stack(RootStack): - """Utility class implementing a stack.""" - - _annspecialcase_ = "specialize:ctr_location" # polymorphic - - def __init__(self): - self.items = [] - - def clone(self): - s = self.__class__() - for item in self.items: - try: - item = item.clone() - except AttributeError: - pass - s.push(item) - return s - - def push(self, item): - self.items.append(item) - - def pop(self): - return self.items.pop() - - def drop(self, n): - if n > 0: - del self.items[-n:] - - def top(self, position=0): - """'position' is 0 for the top of the stack, 1 for the item below, - and so on. It must not be negative.""" - if position < 0: - raise ValueError, 'negative stack position' - if position >= len(self.items): - raise IndexError, 'not enough entries in stack' - return self.items[~position] - - def set_top(self, value, position=0): - """'position' is 0 for the top of the stack, 1 for the item below, - and so on. It must not be negative.""" - if position < 0: - raise ValueError, 'negative stack position' - if position >= len(self.items): - raise IndexError, 'not enough entries in stack' - self.items[~position] = value - - def depth(self): - return len(self.items) - - def empty(self): - return len(self.items) == 0 - - -class FixedStack(RootStack): - _annspecialcase_ = "specialize:ctr_location" # polymorphic - - # unfortunately, we have to re-do everything - def __init__(self): - pass - - def setup(self, stacksize): - self.ptr = r_uint(0) # we point after the last element - self.items = [None] * stacksize - - def clone(self): - # this is only needed if we support flow space - s = self.__class__() - s.setup(len(self.items)) - for item in self.items[:self.ptr]: - try: - item = item.clone() - except AttributeError: - pass - s.push(item) - return s - - def push(self, item): - ptr = self.ptr - self.items[ptr] = item - self.ptr = ptr + 1 - - def pop(self): - ptr = self.ptr - 1 - ret = self.items[ptr] # you get OverflowError if the stack is empty - self.items[ptr] = None - self.ptr = ptr - return ret - - def drop(self, n): - while n > 0: - n -= 1 - self.ptr -= 1 - self.items[self.ptr] = None - - def top(self, position=0): - # for a fixed stack, we assume correct indices - return self.items[self.ptr + ~position] - - def set_top(self, value, position=0): - # for a fixed stack, we assume correct indices - self.items[self.ptr + ~position] = value - - def depth(self): - return self.ptr - - def empty(self): - return not self.ptr - - -class InitializedClass(type): - """NOT_RPYTHON. A meta-class that allows a class to initialize itself (or - its subclasses) by calling __initclass__() as a class method.""" - def __init__(self, name, bases, dict): - super(InitializedClass, self).__init__(name, bases, dict) - for basecls in self.__mro__: - raw = basecls.__dict__.get('__initclass__') - if isinstance(raw, types.FunctionType): - raw(self) # call it as a class method - - -class RwDictProxy(object): - """NOT_RPYTHON. 
A dict-like class standing for 'cls.__dict__', to work - around the fact that the latter is a read-only proxy for new-style - classes.""" - - def __init__(self, cls): - self.cls = cls - - def __getitem__(self, attr): - return self.cls.__dict__[attr] - - def __setitem__(self, attr, value): - setattr(self.cls, attr, value) - - def __contains__(self, value): - return value in self.cls.__dict__ - - def items(self): - return self.cls.__dict__.items() - - class ThreadLocals: """Pseudo thread-local storage, for 'space.threadlocals'. This is not really thread-local at all; the intention is that the PyPy diff --git a/pypy/interpreter/pycode.py b/pypy/interpreter/pycode.py --- a/pypy/interpreter/pycode.py +++ b/pypy/interpreter/pycode.py @@ -10,7 +10,7 @@ from pypy.interpreter.argument import Signature from pypy.interpreter.error import OperationError from pypy.interpreter.gateway import NoneNotWrapped, unwrap_spec -from pypy.interpreter.astcompiler.consts import (CO_OPTIMIZED, +from pypy.interpreter.astcompiler.consts import ( CO_OPTIMIZED, CO_NEWLOCALS, CO_VARARGS, CO_VARKEYWORDS, CO_NESTED, CO_GENERATOR, CO_CONTAINSGLOBALS) from pypy.rlib.rarithmetic import intmask diff --git a/pypy/interpreter/pyframe.py b/pypy/interpreter/pyframe.py --- a/pypy/interpreter/pyframe.py +++ b/pypy/interpreter/pyframe.py @@ -614,7 +614,8 @@ return self.get_builtin().getdict(space) def fget_f_back(self, space): - return self.space.wrap(self.f_backref()) + f_back = ExecutionContext.getnextframe_nohidden(self) + return self.space.wrap(f_back) def fget_f_lasti(self, space): return self.space.wrap(self.last_instr) diff --git a/pypy/interpreter/pyparser/future.py b/pypy/interpreter/pyparser/future.py --- a/pypy/interpreter/pyparser/future.py +++ b/pypy/interpreter/pyparser/future.py @@ -225,14 +225,16 @@ raise DoneException self.consume_whitespace() - def consume_whitespace(self): + def consume_whitespace(self, newline_ok=False): while 1: c = self.getc() if c in whitespace: self.pos += 1 continue - elif c == '\\': - self.pos += 1 + elif c == '\\' or newline_ok: + slash = c == '\\' + if slash: + self.pos += 1 c = self.getc() if c == '\n': self.pos += 1 @@ -243,8 +245,10 @@ if self.getc() == '\n': self.pos += 1 self.atbol() + elif slash: + raise DoneException else: - raise DoneException + return else: return @@ -281,7 +285,7 @@ return else: self.pos += 1 - self.consume_whitespace() + self.consume_whitespace(paren_list) if paren_list and self.getc() == ')': self.pos += 1 return # Handles trailing comma inside parenthesis diff --git a/pypy/interpreter/pyparser/test/test_futureautomaton.py b/pypy/interpreter/pyparser/test/test_futureautomaton.py --- a/pypy/interpreter/pyparser/test/test_futureautomaton.py +++ b/pypy/interpreter/pyparser/test/test_futureautomaton.py @@ -3,7 +3,7 @@ from pypy.tool import stdlib___future__ as fut def run(s): - f = future.FutureAutomaton(future.futureFlags_2_5, s) + f = future.FutureAutomaton(future.futureFlags_2_7, s) try: f.start() except future.DoneException: @@ -113,6 +113,14 @@ assert f.lineno == 1 assert f.col_offset == 0 +def test_paren_with_newline(): + s = 'from __future__ import (division,\nabsolute_import)\n' + f = run(s) + assert f.pos == len(s) + assert f.flags == (fut.CO_FUTURE_DIVISION | fut.CO_FUTURE_ABSOLUTE_IMPORT) + assert f.lineno == 1 + assert f.col_offset == 0 + def test_multiline(): s = '"abc" #def\n #ghi\nfrom __future__ import (division as b, generators,)\nfrom __future__ import with_statement\n' f = run(s) diff --git a/pypy/interpreter/test/test_objspace.py 
b/pypy/interpreter/test/test_objspace.py --- a/pypy/interpreter/test/test_objspace.py +++ b/pypy/interpreter/test/test_objspace.py @@ -71,6 +71,23 @@ assert err.value.match(space, space.w_ValueError) err = raises(OperationError, space.unpackiterable, w_l, 5) assert err.value.match(space, space.w_ValueError) + w_a = space.appexec((), """(): + class A(object): + def __iter__(self): + return self + def next(self): + raise StopIteration + def __len__(self): + 1/0 + return A() + """) + try: + space.unpackiterable(w_a) + except OperationError, o: + if not o.match(space, space.w_ZeroDivisionError): + raise Exception("DID NOT RAISE") + else: + raise Exception("DID NOT RAISE") def test_fixedview(self): space = self.space diff --git a/pypy/interpreter/test/test_pyframe.py b/pypy/interpreter/test/test_pyframe.py --- a/pypy/interpreter/test/test_pyframe.py +++ b/pypy/interpreter/test/test_pyframe.py @@ -1,4 +1,5 @@ from pypy.tool import udir +from pypy.conftest import option class AppTestPyFrame: @@ -6,6 +7,15 @@ def setup_class(cls): cls.w_udir = cls.space.wrap(str(udir.udir)) cls.w_tempfile1 = cls.space.wrap(str(udir.udir.join('tempfile1'))) + if not option.runappdirect: + w_call_further = cls.space.appexec([], """(): + def call_further(f): + return f() + return call_further + """) + assert not w_call_further.code.hidden_applevel + w_call_further.code.hidden_applevel = True # hack + cls.w_call_further = w_call_further # test for the presence of the attributes, not functionality @@ -107,6 +117,22 @@ frame = f() assert frame.f_back.f_code.co_name == 'f' + def test_f_back_hidden(self): + if not hasattr(self, 'call_further'): + skip("not for runappdirect testing") + import sys + def f(): + return (sys._getframe(0), + sys._getframe(1), + sys._getframe(0).f_back) + def main(): + return self.call_further(f) + f0, f1, f1bis = main() + assert f0.f_code.co_name == 'f' + assert f1.f_code.co_name == 'main' + assert f1bis is f1 + assert f0.f_back is f1 + def test_f_exc_xxx(self): import sys diff --git a/pypy/jit/backend/llsupport/llmodel.py b/pypy/jit/backend/llsupport/llmodel.py --- a/pypy/jit/backend/llsupport/llmodel.py +++ b/pypy/jit/backend/llsupport/llmodel.py @@ -496,6 +496,16 @@ u = lltype.cast_opaque_ptr(lltype.Ptr(rstr.UNICODE), string) u.chars[index] = unichr(newvalue) + def bh_copystrcontent(self, src, dst, srcstart, dststart, length): + src = lltype.cast_opaque_ptr(lltype.Ptr(rstr.STR), src) + dst = lltype.cast_opaque_ptr(lltype.Ptr(rstr.STR), dst) + rstr.copy_string_contents(src, dst, srcstart, dststart, length) + + def bh_copyunicodecontent(self, src, dst, srcstart, dststart, length): + src = lltype.cast_opaque_ptr(lltype.Ptr(rstr.UNICODE), src) + dst = lltype.cast_opaque_ptr(lltype.Ptr(rstr.UNICODE), dst) + rstr.copy_unicode_contents(src, dst, srcstart, dststart, length) + def bh_call_i(self, func, calldescr, args_i, args_r, args_f): assert isinstance(calldescr, BaseIntCallDescr) if not we_are_translated(): diff --git a/pypy/jit/backend/model.py b/pypy/jit/backend/model.py --- a/pypy/jit/backend/model.py +++ b/pypy/jit/backend/model.py @@ -78,7 +78,7 @@ Optionally, return a ``ops_offset`` dictionary. See the docstring of ``compiled_loop`` for more informations about it. 
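
The fget_f_back() change and the test_f_back_hidden test above make frame.f_back skip frames whose code object is marked hidden_applevel, so application code never sees the interpreter's internal helper frames. A toy model of that walk (the Frame class below is made up for illustration):

    class Frame(object):
        def __init__(self, name, back=None, hidden=False):
            self.name = name
            self._back = back
            self.hidden = hidden          # stands for code.hidden_applevel

        @property
        def f_back(self):
            f = self._back
            while f is not None and f.hidden:
                f = f._back               # keep walking past hidden frames
            return f

    main = Frame("main")
    helper = Frame("call_further", back=main, hidden=True)
    inner = Frame("f", back=helper)
    assert inner.f_back is main           # the hidden helper is skipped
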
""" - raise NotImplementedError + raise NotImplementedError def dump_loop_token(self, looptoken): """Print a disassembled version of looptoken to stdout""" @@ -298,6 +298,10 @@ raise NotImplementedError def bh_unicodesetitem(self, string, index, newvalue): raise NotImplementedError + def bh_copystrcontent(self, src, dst, srcstart, dststart, length): + raise NotImplementedError + def bh_copyunicodecontent(self, src, dst, srcstart, dststart, length): + raise NotImplementedError def force(self, force_token): raise NotImplementedError diff --git a/pypy/jit/codewriter/jtransform.py b/pypy/jit/codewriter/jtransform.py --- a/pypy/jit/codewriter/jtransform.py +++ b/pypy/jit/codewriter/jtransform.py @@ -1158,6 +1158,12 @@ return SpaceOperation('%s_assert_green' % kind, args, None) elif oopspec_name == 'jit.current_trace_length': return SpaceOperation('current_trace_length', [], op.result) + elif oopspec_name == 'jit.isconstant': + kind = getkind(args[0].concretetype) + return SpaceOperation('%s_isconstant' % kind, args, op.result) + elif oopspec_name == 'jit.isvirtual': + kind = getkind(args[0].concretetype) + return SpaceOperation('%s_isvirtual' % kind, args, op.result) else: raise AssertionError("missing support for %r" % oopspec_name) @@ -1415,6 +1421,14 @@ else: assert 0, "args[0].concretetype must be STR or UNICODE" # + if oopspec_name == 'stroruni.copy_contents': + if SoU.TO == rstr.STR: + new_op = 'copystrcontent' + elif SoU.TO == rstr.UNICODE: + new_op = 'copyunicodecontent' + else: + assert 0 + return SpaceOperation(new_op, args, op.result) if oopspec_name == "stroruni.equal": for otherindex, othername, argtypes, resulttype in [ (EffectInfo.OS_STREQ_SLICE_CHECKNULL, diff --git a/pypy/jit/metainterp/blackhole.py b/pypy/jit/metainterp/blackhole.py --- a/pypy/jit/metainterp/blackhole.py +++ b/pypy/jit/metainterp/blackhole.py @@ -835,6 +835,18 @@ def bhimpl_current_trace_length(): return -1 + @arguments("i", returns="i") + def bhimpl_int_isconstant(x): + return False + + @arguments("r", returns="i") + def bhimpl_ref_isconstant(x): + return False + + @arguments("r", returns="i") + def bhimpl_ref_isvirtual(x): + return False + # ---------- # the main hints and recursive calls @@ -1224,6 +1236,9 @@ @arguments("cpu", "r", "i", "i") def bhimpl_strsetitem(cpu, string, index, newchr): cpu.bh_strsetitem(string, index, newchr) + @arguments("cpu", "r", "r", "i", "i", "i") + def bhimpl_copystrcontent(cpu, src, dst, srcstart, dststart, length): + cpu.bh_copystrcontent(src, dst, srcstart, dststart, length) @arguments("cpu", "i", returns="r") def bhimpl_newunicode(cpu, length): @@ -1237,6 +1252,9 @@ @arguments("cpu", "r", "i", "i") def bhimpl_unicodesetitem(cpu, unicode, index, newchr): cpu.bh_unicodesetitem(unicode, index, newchr) + @arguments("cpu", "r", "r", "i", "i", "i") + def bhimpl_copyunicodecontent(cpu, src, dst, srcstart, dststart, length): + cpu.bh_copyunicodecontent(src, dst, srcstart, dststart, length) @arguments(returns=(longlong.is_64_bit and "i" or "f")) def bhimpl_ll_read_timestamp(): @@ -1441,7 +1459,7 @@ def resume_in_blackhole(metainterp_sd, jitdriver_sd, resumedescr, all_virtuals=None): from pypy.jit.metainterp.resume import blackhole_from_resumedata - debug_start('jit-blackhole') + #debug_start('jit-blackhole') metainterp_sd.profiler.start_blackhole() blackholeinterp = blackhole_from_resumedata( metainterp_sd.blackholeinterpbuilder, @@ -1460,12 +1478,12 @@ _run_forever(blackholeinterp, current_exc) finally: metainterp_sd.profiler.end_blackhole() - debug_stop('jit-blackhole') + 
#debug_stop('jit-blackhole') def convert_and_run_from_pyjitpl(metainterp, raising_exception=False): # Get a chain of blackhole interpreters and fill them by copying # 'metainterp.framestack'. - debug_start('jit-blackhole') + #debug_start('jit-blackhole') metainterp_sd = metainterp.staticdata metainterp_sd.profiler.start_blackhole() nextbh = None @@ -1488,4 +1506,4 @@ _run_forever(firstbh, current_exc) finally: metainterp_sd.profiler.end_blackhole() - debug_stop('jit-blackhole') + #debug_stop('jit-blackhole') diff --git a/pypy/jit/metainterp/heapcache.py b/pypy/jit/metainterp/heapcache.py new file mode 100644 --- /dev/null +++ b/pypy/jit/metainterp/heapcache.py @@ -0,0 +1,210 @@ +from pypy.jit.metainterp.history import ConstInt +from pypy.jit.metainterp.resoperation import rop + + +class HeapCache(object): + def __init__(self): + self.reset() + + def reset(self): + # contains boxes where the class is already known + self.known_class_boxes = {} + # store the boxes that contain newly allocated objects, this maps the + # boxes to a bool, the bool indicates whether or not the object has + # escaped the trace or not (True means the box never escaped, False + # means it did escape), its presences in the mapping shows that it was + # allocated inside the trace + self.new_boxes = {} + # Tracks which boxes should be marked as escaped when the key box + # escapes. + self.dependencies = {} + # contains frame boxes that are not virtualizables + self.nonstandard_virtualizables = {} + # heap cache + # maps descrs to {from_box, to_box} dicts + self.heap_cache = {} + # heap array cache + # maps descrs to {index: {from_box: to_box}} dicts + self.heap_array_cache = {} + # cache the length of arrays + self.length_cache = {} + + def invalidate_caches(self, opnum, descr, argboxes): + self.mark_escaped(opnum, argboxes) + self.clear_caches(opnum, descr, argboxes) + + def mark_escaped(self, opnum, argboxes): + idx = 0 + if opnum == rop.SETFIELD_GC: + assert len(argboxes) == 2 + box, valuebox = argboxes + if self.is_unescaped(box) and self.is_unescaped(valuebox): + self.dependencies.setdefault(box, []).append(valuebox) + else: + self._escape(valuebox) + # GETFIELD_GC doesn't escape it's argument + elif opnum != rop.GETFIELD_GC: + for box in argboxes: + # setarrayitem_gc don't escape its first argument + if not (idx == 0 and opnum in [rop.SETARRAYITEM_GC]): + self._escape(box) + idx += 1 + + def _escape(self, box): + if box in self.new_boxes: + self.new_boxes[box] = False + if box in self.dependencies: + for dep in self.dependencies[box]: + self._escape(dep) + del self.dependencies[box] + + def clear_caches(self, opnum, descr, argboxes): + if opnum == rop.SETFIELD_GC: + return + if opnum == rop.SETARRAYITEM_GC: + return + if opnum == rop.SETFIELD_RAW: + return + if opnum == rop.SETARRAYITEM_RAW: + return + if rop._OVF_FIRST <= opnum <= rop._OVF_LAST: + return + if rop._NOSIDEEFFECT_FIRST <= opnum <= rop._NOSIDEEFFECT_LAST: + return + if opnum == rop.CALL or opnum == rop.CALL_LOOPINVARIANT: + effectinfo = descr.get_extra_info() + ef = effectinfo.extraeffect + if ef == effectinfo.EF_LOOPINVARIANT or \ + ef == effectinfo.EF_ELIDABLE_CANNOT_RAISE or \ + ef == effectinfo.EF_ELIDABLE_CAN_RAISE: + return + # A special case for ll_arraycopy, because it is so common, and its + # effects are so well defined. 
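
The mark_escaped()/_escape() logic of the new HeapCache above tracks which boxes hold objects allocated inside the trace and which of those have escaped; storing one fresh object into another fresh one only records a dependency, and the whole chain is flushed once the outer object escapes. Reduced to a standalone toy (EscapeTracker is not a real PyPy class):

    class EscapeTracker(object):
        def __init__(self):
            self.new_objects = {}     # obj -> True (unescaped) / False (escaped)
            self.dependencies = {}    # obj -> objects that escape along with it

        def new(self, obj):
            self.new_objects[obj] = True

        def is_unescaped(self, obj):
            return self.new_objects.get(obj, False)

        def setfield(self, target, value):
            if self.is_unescaped(target) and self.is_unescaped(value):
                # both are trace-local: remember the link, escape lazily
                self.dependencies.setdefault(target, []).append(value)
            else:
                self.escape(value)

        def escape(self, obj):
            if obj in self.new_objects:
                self.new_objects[obj] = False
            for dep in self.dependencies.pop(obj, []):
                self.escape(dep)

    t = EscapeTracker()
    t.new("a"); t.new("b")
    t.setfield("a", "b")              # b only escapes if a ever does
    assert t.is_unescaped("b")
    t.escape("a")
    assert not t.is_unescaped("b")    # a escaped and dragged b with it
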
+ elif effectinfo.oopspecindex == effectinfo.OS_ARRAYCOPY: + # The destination box + if argboxes[2] in self.new_boxes: + # XXX: no descr here so we invalidate any of them, not just + # of the correct type + # XXX: in theory the indices of the copy could be looked at + # as well + for descr, cache in self.heap_array_cache.iteritems(): + for idx, cache in cache.iteritems(): + for frombox in cache.keys(): + if frombox not in self.new_boxes: + del cache[frombox] + return + + self.heap_cache.clear() + self.heap_array_cache.clear() + + def is_class_known(self, box): + return box in self.known_class_boxes + + def class_now_known(self, box): + self.known_class_boxes[box] = None + + def is_nonstandard_virtualizable(self, box): + return box in self.nonstandard_virtualizables + + def nonstandard_virtualizables_now_known(self, box): + self.nonstandard_virtualizables[box] = None + + def is_unescaped(self, box): + return self.new_boxes.get(box, False) + + def new(self, box): + self.new_boxes[box] = True + + def new_array(self, box, lengthbox): + self.new(box) + self.arraylen_now_known(box, lengthbox) + + def getfield(self, box, descr): + d = self.heap_cache.get(descr, None) + if d: + tobox = d.get(box, None) + if tobox: + return tobox + return None + + def getfield_now_known(self, box, descr, fieldbox): + self.heap_cache.setdefault(descr, {})[box] = fieldbox + + def setfield(self, box, descr, fieldbox): + d = self.heap_cache.get(descr, None) + new_d = self._do_write_with_aliasing(d, box, fieldbox) + self.heap_cache[descr] = new_d + + def _do_write_with_aliasing(self, d, box, fieldbox): + # slightly subtle logic here + # a write to an arbitrary box, all other boxes can alias this one + if not d or box not in self.new_boxes: + # therefore we throw away the cache + return {box: fieldbox} + # the object we are writing to is freshly allocated + # only remove some boxes from the cache + new_d = {} + for frombox, tobox in d.iteritems(): + # the other box is *also* freshly allocated + # therefore frombox and box *must* contain different objects + # thus we can keep it in the cache + if frombox in self.new_boxes: + new_d[frombox] = tobox + new_d[box] = fieldbox + return new_d + + def getarrayitem(self, box, descr, indexbox): + if not isinstance(indexbox, ConstInt): + return + index = indexbox.getint() + cache = self.heap_array_cache.get(descr, None) + if cache: + indexcache = cache.get(index, None) + if indexcache is not None: + return indexcache.get(box, None) + + def getarrayitem_now_known(self, box, descr, indexbox, valuebox): + if not isinstance(indexbox, ConstInt): + return + index = indexbox.getint() + cache = self.heap_array_cache.setdefault(descr, {}) + indexcache = cache.get(index, None) + if indexcache is not None: + indexcache[box] = valuebox + else: + cache[index] = {box: valuebox} + + def setarrayitem(self, box, descr, indexbox, valuebox): + if not isinstance(indexbox, ConstInt): + cache = self.heap_array_cache.get(descr, None) + if cache is not None: + cache.clear() + return + index = indexbox.getint() + cache = self.heap_array_cache.setdefault(descr, {}) + indexcache = cache.get(index, None) + cache[index] = self._do_write_with_aliasing(indexcache, box, valuebox) + + def arraylen(self, box): + return self.length_cache.get(box, None) + + def arraylen_now_known(self, box, lengthbox): + self.length_cache[box] = lengthbox + + def _replace_box(self, d, oldbox, newbox): + new_d = {} + for frombox, tobox in d.iteritems(): + if frombox is oldbox: + frombox = newbox + if tobox is oldbox: + tobox = newbox + 
new_d[frombox] = tobox + return new_d + + def replace_box(self, oldbox, newbox): + for descr, d in self.heap_cache.iteritems(): + self.heap_cache[descr] = self._replace_box(d, oldbox, newbox) + for descr, d in self.heap_array_cache.iteritems(): + for index, cache in d.iteritems(): + d[index] = self._replace_box(cache, oldbox, newbox) + self.length_cache = self._replace_box(self.length_cache, oldbox, newbox) diff --git a/pypy/jit/metainterp/optimizeopt/heap.py b/pypy/jit/metainterp/optimizeopt/heap.py --- a/pypy/jit/metainterp/optimizeopt/heap.py +++ b/pypy/jit/metainterp/optimizeopt/heap.py @@ -37,6 +37,12 @@ self.force_lazy_setfield(optheap) assert not self.possible_aliasing(optheap, structvalue) cached_fieldvalue = self._cached_fields.get(structvalue, None) + + # Hack to ensure constants are imported from the preamble + if cached_fieldvalue and fieldvalue.is_constant(): + optheap.optimizer.ensure_imported(cached_fieldvalue) + cached_fieldvalue = self._cached_fields.get(structvalue, None) + if cached_fieldvalue is not fieldvalue: # common case: store the 'op' as lazy_setfield, and register # myself in the optheap's _lazy_setfields_and_arrayitems list @@ -132,9 +138,7 @@ result = newresult getop = ResOperation(rop.GETFIELD_GC, [op.getarg(0)], result, op.getdescr()) - getop = shortboxes.add_potential(getop) - self._cached_fields_getfield_op[structvalue] = getop - self._cached_fields[structvalue] = optimizer.getvalue(result) + shortboxes.add_potential(getop, synthetic=True) elif op.result is not None: shortboxes.add_potential(op) diff --git a/pypy/jit/metainterp/optimizeopt/optimizer.py b/pypy/jit/metainterp/optimizeopt/optimizer.py --- a/pypy/jit/metainterp/optimizeopt/optimizer.py +++ b/pypy/jit/metainterp/optimizeopt/optimizer.py @@ -10,6 +10,7 @@ from pypy.jit.metainterp.typesystem import llhelper, oohelper from pypy.tool.pairtype import extendabletype from pypy.rlib.debug import debug_start, debug_stop, debug_print +from pypy.rlib.objectmodel import specialize LEVEL_UNKNOWN = '\x00' LEVEL_NONNULL = '\x01' @@ -25,6 +26,9 @@ self.descr = descr self.bound = bound + def clone(self): + return LenBound(self.mode, self.descr, self.bound.clone()) + class OptValue(object): __metaclass__ = extendabletype _attrs_ = ('box', 'known_class', 'last_guard_index', 'level', 'intbound', 'lenbound') @@ -67,7 +71,7 @@ guards.append(op) elif self.level == LEVEL_KNOWNCLASS: op = ResOperation(rop.GUARD_NONNULL, [box], None) - guards.append(op) + guards.append(op) op = ResOperation(rop.GUARD_CLASS, [box, self.known_class], None) guards.append(op) else: @@ -88,8 +92,27 @@ assert False guards.append(op) self.lenbound.bound.make_guards(lenbox, guards) + return guards - return guards + def import_from(self, other, optimizer): + assert self.level <= LEVEL_NONNULL + if other.level == LEVEL_CONSTANT: + self.make_constant(other.get_key_box()) + optimizer.turned_constant(self) + elif other.level == LEVEL_KNOWNCLASS: + self.make_constant_class(other.known_class, -1) + else: + if other.level == LEVEL_NONNULL: + self.ensure_nonnull() + self.intbound.intersect(other.intbound) + if other.lenbound: + if self.lenbound: + assert other.lenbound.mode == self.lenbound.mode + assert other.lenbound.descr == self.lenbound.descr + self.lenbound.bound.intersect(other.lenbound.bound) + else: + self.lenbound = other.lenbound.clone() + def force_box(self): return self.box @@ -123,7 +146,7 @@ assert isinstance(constbox, Const) self.box = constbox self.level = LEVEL_CONSTANT - + if isinstance(constbox, ConstInt): val = constbox.getint() 
self.intbound = IntBound(val, val) @@ -200,6 +223,9 @@ def __init__(self, box): self.make_constant(box) + def __repr__(self): + return 'Constant(%r)' % (self.box,) + CONST_0 = ConstInt(0) CONST_1 = ConstInt(1) CVAL_ZERO = ConstantValue(CONST_0) @@ -308,7 +334,6 @@ self.resumedata_memo = resume.ResumeDataLoopMemo(metainterp_sd) self.bool_boxes = {} self.pure_operations = args_dict() - self.emitted_pure_operations = {} self.producer = {} self.pendingfields = [] self.posponedop = None @@ -316,12 +341,11 @@ self.quasi_immutable_deps = None self.opaque_pointers = {} self.newoperations = [] - self.emitting_dissabled = False - self.emitted_guards = 0 if loop is not None: self.call_pure_results = loop.call_pure_results self.set_optimizations(optimizations) + self.setup() def set_optimizations(self, optimizations): if optimizations: @@ -348,23 +372,18 @@ assert self.posponedop is None def new(self): + new = Optimizer(self.metainterp_sd, self.loop) + return self._new(new) + + def _new(self, new): assert self.posponedop is None - new = Optimizer(self.metainterp_sd, self.loop) optimizations = [o.new() for o in self.optimizations] new.set_optimizations(optimizations) new.quasi_immutable_deps = self.quasi_immutable_deps return new - + def produce_potential_short_preamble_ops(self, sb): - for op in self.emitted_pure_operations: - if op.getopnum() == rop.GETARRAYITEM_GC_PURE or \ - op.getopnum() == rop.STRGETITEM or \ - op.getopnum() == rop.UNICODEGETITEM: - if not self.getvalue(op.getarg(1)).is_constant(): - continue - sb.add_potential(op) - for opt in self.optimizations: - opt.produce_potential_short_preamble_ops(sb) + raise NotImplementedError('This is implemented in unroll.UnrollableOptimizer') def turned_constant(self, value): for o in self.optimizations: @@ -386,19 +405,26 @@ else: return box + @specialize.argtype(0) def getvalue(self, box): box = self.getinterned(box) try: value = self.values[box] except KeyError: value = self.values[box] = OptValue(box) + self.ensure_imported(value) return value + def ensure_imported(self, value): + pass + + @specialize.argtype(0) def get_constant_box(self, box): if isinstance(box, Const): return box try: value = self.values[box] + self.ensure_imported(value) except KeyError: return None if value.is_constant(): @@ -481,18 +507,22 @@ def emit_operation(self, op): if op.returns_bool_result(): self.bool_boxes[self.getvalue(op.result)] = None - if self.emitting_dissabled: - return - + self._emit_operation(op) + + @specialize.argtype(0) + def _emit_operation(self, op): for i in range(op.numargs()): arg = op.getarg(i) - if arg in self.values: - box = self.values[arg].force_box() - op.setarg(i, box) + try: + value = self.values[arg] + except KeyError: + pass + else: + self.ensure_imported(value) + op.setarg(i, value.force_box()) self.metainterp_sd.profiler.count(jitprof.OPT_OPS) if op.is_guard(): self.metainterp_sd.profiler.count(jitprof.OPT_GUARDS) - self.emitted_guards += 1 # FIXME: can we reuse above counter? 
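
The import_from() method added to OptValue above merges what the preamble already proved about a value into the loop's copy, for instance by intersecting integer bounds. A minimal sketch of that intersection step (Bound is a toy class; the real IntBound in optimizeopt also tracks open-ended bounds):

    class Bound(object):
        def __init__(self, lower, upper):
            self.lower = lower
            self.upper = upper

        def intersect(self, other):
            # keep the tightest range implied by both sources of information
            self.lower = max(self.lower, other.lower)
            self.upper = min(self.upper, other.upper)

    b = Bound(0, 100)
    b.intersect(Bound(10, 1000))
    assert (b.lower, b.upper) == (10, 100)
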
op = self.store_final_boxes_in_guard(op) elif op.can_raise(): self.exception_might_have_happened = True @@ -541,9 +571,10 @@ arg = value.get_key_box() args[i] = arg args[n] = ConstInt(op.getopnum()) - args[n+1] = op.getdescr() + args[n + 1] = op.getdescr() return args + @specialize.argtype(0) def optimize_default(self, op): canfold = op.is_always_pure() if op.is_ovf(): @@ -579,13 +610,16 @@ return else: self.pure_operations[args] = op - self.emitted_pure_operations[op] = True + self.remember_emitting_pure(op) # otherwise, the operation remains self.emit_operation(op) if nextop: self.emit_operation(nextop) + def remember_emitting_pure(self, op): + pass + def constant_fold(self, op): argboxes = [self.get_constant_box(op.getarg(i)) for i in range(op.numargs())] @@ -627,9 +661,9 @@ arrayvalue = self.getvalue(op.getarg(0)) arrayvalue.make_len_gt(MODE_UNICODE, op.getdescr(), indexvalue.box.getint()) self.optimize_default(op) - - + + dispatch_opt = make_dispatcher_method(Optimizer, 'optimize_', default=Optimizer.optimize_default) diff --git a/pypy/jit/metainterp/optimizeopt/rewrite.py b/pypy/jit/metainterp/optimizeopt/rewrite.py --- a/pypy/jit/metainterp/optimizeopt/rewrite.py +++ b/pypy/jit/metainterp/optimizeopt/rewrite.py @@ -19,7 +19,7 @@ def new(self): return OptRewrite() - + def produce_potential_short_preamble_ops(self, sb): for op in self.loop_invariant_producer.values(): sb.add_potential(op) @@ -231,6 +231,17 @@ else: self.make_constant(op.result, result) return + + args = self.optimizer.make_args_key(op) + oldop = self.optimizer.pure_operations.get(args, None) + if oldop is not None and oldop.getdescr() is op.getdescr(): + assert oldop.getopnum() == op.getopnum() + self.make_equal_to(op.result, self.getvalue(oldop.result)) + return + else: + self.optimizer.pure_operations[args] = op + self.optimizer.remember_emitting_pure(op) + # replace CALL_PURE with just CALL args = op.getarglist() self.emit_operation(ResOperation(rop.CALL, args, op.result, @@ -351,7 +362,7 @@ # expects a compile-time constant assert isinstance(arg, Const) key = make_hashable_int(arg.getint()) - + resvalue = self.loop_invariant_results.get(key, None) if resvalue is not None: self.make_equal_to(op.result, resvalue) diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py @@ -4711,6 +4711,35 @@ """ self.optimize_loop(ops, expected) + def test_empty_copystrunicontent(self): + ops = """ + [p0, p1, i0, i2, i3] + i4 = int_eq(i3, 0) + guard_true(i4) [] + copystrcontent(p0, p1, i0, i2, i3) + jump(p0, p1, i0, i2, i3) + """ + expected = """ + [p0, p1, i0, i2, i3] + i4 = int_eq(i3, 0) + guard_true(i4) [] + jump(p0, p1, i0, i2, 0) + """ + self.optimize_strunicode_loop(ops, expected) + + def test_empty_copystrunicontent_virtual(self): + ops = """ + [p0] + p1 = newstr(23) + copystrcontent(p0, p1, 0, 0, 0) + jump(p0) + """ + expected = """ + [p0] + jump(p0) + """ + self.optimize_strunicode_loop(ops, expected) + def test_forced_virtuals_aliasing(self): ops = """ [i0, i1] @@ -4738,6 +4767,27 @@ # other self.optimize_loop(ops, expected) + def test_plain_virtual_string_copy_content(self): + ops = """ + [] + p0 = newstr(6) + copystrcontent(s"hello!", p0, 0, 0, 6) + p1 = call(0, p0, s"abc123", descr=strconcatdescr) + i0 = strgetitem(p1, 0) + finish(i0) + """ + expected = """ + [] + p0 = newstr(6) + copystrcontent(s"hello!", p0, 0, 0, 
6) + p1 = newstr(12) + copystrcontent(p0, p1, 0, 0, 6) + copystrcontent(s"abc123", p1, 0, 6, 6) + i0 = strgetitem(p1, 0) + finish(i0) + """ + self.optimize_strunicode_loop(ops, expected) + class TestLLtype(BaseTestOptimizeBasic, LLtypeMixin): pass diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py @@ -102,9 +102,9 @@ print "Short Preamble:" short = loop.preamble.token.short_preamble[0] print short.inputargs - print '\n'.join([str(o) for o in short.operations]) + print '\n'.join([str(o) for o in short.operations]) print - + assert expected != "crash!", "should have raised an exception" self.assert_equal(loop, expected) if expected_preamble: @@ -113,7 +113,7 @@ if expected_short: self.assert_equal(short, expected_short, text_right='expected short preamble') - + return loop class OptimizeOptTest(BaseTestWithUnroll): @@ -472,7 +472,13 @@ [i0] jump(i0) """ - self.optimize_loop(ops, expected, preamble) + short = """ + [i0] + i1 = int_is_true(i0) + guard_value(i1, 1) [] + jump(i0) + """ + self.optimize_loop(ops, expected, preamble, expected_short=short) def test_bound_int_is_true(self): ops = """ @@ -860,10 +866,10 @@ setfield_gc(p3sub, i1, descr=valuedescr) setfield_gc(p1, p3sub, descr=nextdescr) # XXX: We get two extra operations here because the setfield - # above is the result of forcing p1 and thus not + # above is the result of forcing p1 and thus not # registered with the heap optimizer. I've makred tests # below with VIRTUALHEAP if they suffer from this issue - p3sub2 = getfield_gc(p1, descr=nextdescr) + p3sub2 = getfield_gc(p1, descr=nextdescr) guard_nonnull_class(p3sub2, ConstClass(node_vtable2)) [] jump(i1, p1, p3sub2) """ @@ -1405,7 +1411,7 @@ guard_isnull(p18) [p0, p8] p31 = new(descr=ssize) p35 = new_with_vtable(ConstClass(node_vtable)) - setfield_gc(p35, p31, descr=valuedescr) + setfield_gc(p35, p31, descr=valuedescr) jump(p0, p35) """ expected = """ @@ -1420,7 +1426,7 @@ guard_isnull(p18) [p0, p8] p31 = new(descr=ssize) p35 = new_with_vtable(ConstClass(node_vtable)) - setfield_gc(p35, p31, descr=valuedescr) + setfield_gc(p35, p31, descr=valuedescr) jump(p0, p35, p19, p18) """ expected = """ @@ -1429,7 +1435,7 @@ jump(p0, NULL) """ self.optimize_loop(ops, expected) - + def test_varray_1(self): ops = """ [i1] @@ -2175,7 +2181,7 @@ jump(p1) """ self.optimize_loop(ops, expected) - + def test_duplicate_getarrayitem_2(self): ops = """ [p1, i0] @@ -2193,7 +2199,7 @@ jump(p1, i7, i6) """ self.optimize_loop(ops, expected) - + def test_duplicate_getarrayitem_after_setarrayitem_1(self): ops = """ [p1, p2] @@ -2806,14 +2812,14 @@ guard_no_overflow() [] i3b = int_is_true(i3) guard_true(i3b) [] - setfield_gc(p1, i1, descr=valuedescr) + setfield_gc(p1, i1, descr=valuedescr) escape(i3) escape(i3) jump(i1, p1, i3) """ expected = """ [i1, p1, i3] - setfield_gc(p1, i1, descr=valuedescr) + setfield_gc(p1, i1, descr=valuedescr) escape(i3) escape(i3) jump(i1, p1, i3) @@ -2824,7 +2830,7 @@ ops = """ [p8, p11, i24] p26 = new_with_vtable(ConstClass(node_vtable)) - setfield_gc(p26, i24, descr=adescr) + setfield_gc(p26, i24, descr=adescr) i34 = getfield_gc_pure(p11, descr=valuedescr) i35 = getfield_gc_pure(p26, descr=adescr) i36 = int_add_ovf(i34, i35) @@ -2833,10 +2839,10 @@ """ expected = """ [p8, p11, i26] - jump(p8, p11, i26) - """ - self.optimize_loop(ops, expected) - + jump(p8, p11, i26) + """ + 
self.optimize_loop(ops, expected) + def test_ovf_guard_in_short_preamble2(self): ops = """ [p8, p11, p12] @@ -3185,13 +3191,18 @@ jump(p1, i4, i3) ''' expected = ''' + [p1, i4, i3, i5] + setfield_gc(p1, i5, descr=valuedescr) + jump(p1, i3, i5, i5) + ''' + preamble = ''' [p1, i1, i4] setfield_gc(p1, i1, descr=valuedescr) i3 = call(p1, descr=plaincalldescr) setfield_gc(p1, i3, descr=valuedescr) - jump(p1, i4, i3) + jump(p1, i4, i3, i3) ''' - self.optimize_loop(ops, expected, expected) + self.optimize_loop(ops, expected, preamble) def test_call_pure_invalidates_heap_knowledge(self): # CALL_PURE should still force the setfield_gc() to occur before it @@ -3203,21 +3214,20 @@ jump(p1, i4, i3) ''' expected = ''' + [p1, i4, i3, i5] + setfield_gc(p1, i4, descr=valuedescr) + jump(p1, i3, i5, i5) + ''' + preamble = ''' [p1, i1, i4] setfield_gc(p1, i1, descr=valuedescr) i3 = call(p1, descr=plaincalldescr) setfield_gc(p1, i1, descr=valuedescr) - jump(p1, i4, i3) + jump(p1, i4, i3, i3) ''' - self.optimize_loop(ops, expected, expected) + self.optimize_loop(ops, expected, preamble) def test_call_pure_constant_folding(self): - # CALL_PURE is not marked as is_always_pure(), because it is wrong - # to call the function arbitrary many times at arbitrary points in - # time. Check that it is either constant-folded (and replaced by - # the result of the call, recorded as the first arg), or turned into - # a regular CALL. - # XXX can this test be improved with unrolling? arg_consts = [ConstInt(i) for i in (123456, 4, 5, 6)] call_pure_results = {tuple(arg_consts): ConstInt(42)} ops = ''' @@ -3233,14 +3243,13 @@ escape(i1) escape(i2) i4 = call(123456, 4, i0, 6, descr=plaincalldescr) - jump(i0, i4) + jump(i0, i4, i4) ''' expected = ''' - [i0, i2] + [i0, i4, i5] escape(42) - escape(i2) - i4 = call(123456, 4, i0, 6, descr=plaincalldescr) - jump(i0, i4) + escape(i4) + jump(i0, i5, i5) ''' self.optimize_loop(ops, expected, preamble, call_pure_results) @@ -3264,18 +3273,43 @@ escape(i2) i4 = call(123456, 4, i0, 6, descr=plaincalldescr) guard_no_exception() [] - jump(i0, i4) + jump(i0, i4, i4) ''' expected = ''' - [i0, i2] + [i0, i2, i3] escape(42) escape(i2) - i4 = call(123456, 4, i0, 6, descr=plaincalldescr) - guard_no_exception() [] - jump(i0, i4) + jump(i0, i3, i3) ''' self.optimize_loop(ops, expected, preamble, call_pure_results) + def test_call_pure_returning_virtual(self): + # XXX: This kind of loop invaraint call_pure will be forced + # both in the preamble and in the peeled loop + ops = ''' + [p1, i1, i2] + p2 = call_pure(0, p1, i1, i2, descr=strslicedescr) + escape(p2) + jump(p1, i1, i2) + ''' + preamble = ''' + [p1, i1, i2] + i6 = int_sub(i2, i1) + p2 = newstr(i6) + copystrcontent(p1, p2, i1, 0, i6) + escape(p2) + jump(p1, i1, i2, i6) + ''' + expected = ''' + [p1, i1, i2, i6] + p2 = newstr(i6) + copystrcontent(p1, p2, i1, 0, i6) + escape(p2) + jump(p1, i1, i2, i6) + ''' + self.optimize_loop(ops, expected, preamble) + + # ---------- def test_vref_nonvirtual_nonescape(self): @@ -5144,14 +5178,14 @@ [i0, i1, i10, i11, i2, i3, i4] escape(i2) escape(i3) - escape(i4) + escape(i4) i24 = int_mul_ovf(i10, i11) guard_no_overflow() [] i23 = int_sub_ovf(i10, i11) guard_no_overflow() [] i22 = int_add_ovf(i10, i11) guard_no_overflow() [] - jump(i0, i1, i10, i11, i2, i3, i4) + jump(i0, i1, i10, i11, i2, i3, i4) """ self.optimize_loop(ops, expected) @@ -5360,6 +5394,8 @@ """ self.optimize_strunicode_loop(ops, expected, expected) + # XXX Should some of the call's below now be call_pure? 
+ def test_str_concat_1(self): ops = """ [p1, p2] @@ -5693,14 +5729,14 @@ ops = """ [p0, i0] i1 = unicodegetitem(p0, i0) - i10 = unicodegetitem(p0, i0) + i10 = unicodegetitem(p0, i0) i2 = int_lt(i1, 0) guard_false(i2) [] jump(p0, i0) """ expected = """ [p0, i0] - i1 = unicodegetitem(p0, i0) + i1 = unicodegetitem(p0, i0) jump(p0, i0) """ self.optimize_loop(ops, expected) @@ -5859,7 +5895,7 @@ """ preamble = """ [p1, i1, i2, p3] - guard_nonnull(p3) [] + guard_nonnull(p3) [] i4 = int_sub(i2, i1) i0 = call(0, p1, i1, i4, p3, descr=streq_slice_nonnull_descr) escape(i0) @@ -6468,7 +6504,7 @@ setfield_gc(p3, i1, descr=adescr) setfield_gc(p3, i2, descr=bdescr) i5 = int_gt(ii, 42) - guard_true(i5) [] + guard_true(i5) [] jump(p0, p1, p3, ii2, ii, i1, i2) """ self.optimize_loop(ops, expected) @@ -6494,7 +6530,7 @@ p1 = getfield_gc(p0, descr=nextdescr) guard_nonnull_class(p1, ConstClass(node_vtable)) [] p2 = getfield_gc(p1, descr=nextdescr) - guard_nonnull_class(p2, ConstClass(node_vtable)) [] + guard_nonnull_class(p2, ConstClass(node_vtable)) [] jump(p0) """ expected = """ @@ -6508,11 +6544,11 @@ guard_class(p1, ConstClass(node_vtable)) [] p2 = getfield_gc(p1, descr=nextdescr) guard_nonnull(p2) [] - guard_class(p2, ConstClass(node_vtable)) [] + guard_class(p2, ConstClass(node_vtable)) [] jump(p0) """ self.optimize_loop(ops, expected, expected_short=short) - + def test_forced_virtual_pure_getfield(self): ops = """ [p0] @@ -6576,7 +6612,7 @@ jump(p1, i2) """ self.optimize_loop(ops, expected) - + def test_loopinvariant_strlen(self): ops = """ [p9] @@ -6709,7 +6745,7 @@ [p0, p1] p2 = new_with_vtable(ConstClass(node_vtable)) p3 = new_with_vtable(ConstClass(node_vtable)) - setfield_gc(p2, p3, descr=nextdescr) + setfield_gc(p2, p3, descr=nextdescr) jump(p2, p3) """ expected = """ @@ -6728,7 +6764,7 @@ jump(p2, i2) """ expected = """ - [p1] + [p1] p2 = getarrayitem_gc(p1, 7, descr=) i1 = arraylen_gc(p1) jump(p2) @@ -6769,8 +6805,8 @@ jump(p0, p2, p1) """ self.optimize_loop(ops, expected, expected_short=short) - - + + def test_loopinvariant_constant_strgetitem(self): ops = """ [p0] @@ -6824,11 +6860,11 @@ expected = """ [p0, i22, p1] call(i22, descr=nonwritedescr) - i3 = unicodelen(p1) # Should be killed by backend + i3 = unicodelen(p1) # Should be killed by backend jump(p0, i22, p1) """ self.optimize_loop(ops, expected, expected_short=short) - + def test_propagate_virtual_arryalen(self): ops = """ [p0] @@ -6897,7 +6933,7 @@ [p0, p1, p10, p11] i1 = arraylen_gc(p10, descr=arraydescr) getarrayitem_gc(p11, 1, descr=arraydescr) - call(i1, descr=nonwritedescr) + call(i1, descr=nonwritedescr) jump(p1, p0, p11, p10) """ self.optimize_loop(ops, expected) @@ -6906,20 +6942,20 @@ ops = """ [p5] i10 = getfield_gc(p5, descr=valuedescr) - call(i10, descr=nonwritedescr) + call(i10, descr=nonwritedescr) setfield_gc(p5, 1, descr=valuedescr) jump(p5) """ preamble = """ [p5] i10 = getfield_gc(p5, descr=valuedescr) - call(i10, descr=nonwritedescr) + call(i10, descr=nonwritedescr) setfield_gc(p5, 1, descr=valuedescr) jump(p5) """ expected = """ [p5] - call(1, descr=nonwritedescr) + call(1, descr=nonwritedescr) jump(p5) """ self.optimize_loop(ops, expected, preamble) @@ -6957,7 +6993,7 @@ [p9] call_assembler(0, descr=asmdescr) i18 = getfield_gc(p9, descr=valuedescr) - guard_value(i18, 0) [] + guard_value(i18, 0) [] jump(p9) """ self.optimize_loop(ops, expected) @@ -6986,17 +7022,37 @@ i10 = getfield_gc(p5, descr=valuedescr) i11 = getfield_gc(p6, descr=nextdescr) call(i10, i11, descr=nonwritedescr) - setfield_gc(p6, i10, 
descr=nextdescr) + setfield_gc(p6, i10, descr=nextdescr) jump(p5, p6) """ expected = """ [p5, p6, i10, i11] call(i10, i11, descr=nonwritedescr) - setfield_gc(p6, i10, descr=nextdescr) + setfield_gc(p6, i10, descr=nextdescr) jump(p5, p6, i10, i10) """ self.optimize_loop(ops, expected) - + + def test_cached_pure_func_of_equal_fields(self): + ops = """ + [p5, p6] + i10 = getfield_gc(p5, descr=valuedescr) + i11 = getfield_gc(p6, descr=nextdescr) + i12 = int_add(i10, 7) + i13 = int_add(i11, 7) + call(i12, i13, descr=nonwritedescr) + setfield_gc(p6, i10, descr=nextdescr) + jump(p5, p6) + """ + expected = """ + [p5, p6, i14, i12, i10] + i13 = int_add(i14, 7) + call(i12, i13, descr=nonwritedescr) + setfield_gc(p6, i10, descr=nextdescr) + jump(p5, p6, i10, i12, i10) + """ + self.optimize_loop(ops, expected) + def test_forced_counter(self): # XXX: VIRTUALHEAP (see above) py.test.skip("would be fixed by make heap optimizer aware of virtual setfields") @@ -7086,8 +7142,84 @@ """ self.optimize_loop(ops, expected) - + def test_import_constants_when_folding_pure_operations(self): + ops = """ + [p0] + f1 = getfield_gc(p0, descr=valuedescr) + f2 = float_abs(f1) + call(7.0, descr=nonwritedescr) + setfield_gc(p0, -7.0, descr=valuedescr) + jump(p0) + """ + expected = """ + [p0] + call(7.0, descr=nonwritedescr) + jump(p0) + """ + self.optimize_loop(ops, expected) + + def test_exploding_duplicatipon(self): + ops = """ + [i1, i2] + i3 = int_add(i1, i1) + i4 = int_add(i3, i3) + i5 = int_add(i4, i4) + i6 = int_add(i5, i5) + call(i6, descr=nonwritedescr) + jump(i1, i3) + """ + expected = """ + [i1, i2, i6, i3] + call(i6, descr=nonwritedescr) + jump(i1, i3, i6, i3) + """ + short = """ + [i1, i2] + i3 = int_add(i1, i1) + i4 = int_add(i3, i3) + i5 = int_add(i4, i4) + i6 = int_add(i5, i5) + jump(i1, i2, i6, i3) + """ + self.optimize_loop(ops, expected, expected_short=short) + + def test_prioritize_getfield1(self): + ops = """ + [p1, p2] + i1 = getfield_gc(p1, descr=valuedescr) + setfield_gc(p2, i1, descr=nextdescr) + i2 = int_neg(i1) + call(i2, descr=nonwritedescr) + jump(p1, p2) + """ + expected = """ + [p1, p2, i2, i1] + call(i2, descr=nonwritedescr) + setfield_gc(p2, i1, descr=nextdescr) + jump(p1, p2, i2, i1) + """ + self.optimize_loop(ops, expected) + + def test_prioritize_getfield2(self): + # Same as previous, but with descrs intercahnged which means + # that the getfield is discovered first when looking for + # potential short boxes during tests + ops = """ + [p1, p2] + i1 = getfield_gc(p1, descr=nextdescr) + setfield_gc(p2, i1, descr=valuedescr) + i2 = int_neg(i1) + call(i2, descr=nonwritedescr) + jump(p1, p2) + """ + expected = """ + [p1, p2, i2, i1] + call(i2, descr=nonwritedescr) + setfield_gc(p2, i1, descr=valuedescr) + jump(p1, p2, i2, i1) + """ + self.optimize_loop(ops, expected) class TestLLtype(OptimizeOptTest, LLtypeMixin): pass - + diff --git a/pypy/jit/metainterp/optimizeopt/unroll.py b/pypy/jit/metainterp/optimizeopt/unroll.py --- a/pypy/jit/metainterp/optimizeopt/unroll.py +++ b/pypy/jit/metainterp/optimizeopt/unroll.py @@ -70,6 +70,47 @@ self.snapshot_map[snapshot] = new_snapshot return new_snapshot +class UnrollableOptimizer(Optimizer): + def setup(self): + self.importable_values = {} + self.emitting_dissabled = False + self.emitted_guards = 0 + self.emitted_pure_operations = {} + + def ensure_imported(self, value): + if not self.emitting_dissabled and value in self.importable_values: + imp = self.importable_values[value] + del self.importable_values[value] + imp.import_value(value) + + def 
emit_operation(self, op): + if op.returns_bool_result(): + self.bool_boxes[self.getvalue(op.result)] = None + if self.emitting_dissabled: + return + if op.is_guard(): + self.emitted_guards += 1 # FIXME: can we use counter in self._emit_operation? + self._emit_operation(op) + + def new(self): + new = UnrollableOptimizer(self.metainterp_sd, self.loop) + return self._new(new) + + def remember_emitting_pure(self, op): + self.emitted_pure_operations[op] = True + + def produce_potential_short_preamble_ops(self, sb): + for op in self.emitted_pure_operations: + if op.getopnum() == rop.GETARRAYITEM_GC_PURE or \ + op.getopnum() == rop.STRGETITEM or \ + op.getopnum() == rop.UNICODEGETITEM: + if not self.getvalue(op.getarg(1)).is_constant(): + continue + sb.add_potential(op) + for opt in self.optimizations: + opt.produce_potential_short_preamble_ops(sb) + + class UnrollOptimizer(Optimization): """Unroll the loop into two iterations. The first one will @@ -77,7 +118,7 @@ distinction anymore)""" def __init__(self, metainterp_sd, loop, optimizations): - self.optimizer = Optimizer(metainterp_sd, loop, optimizations) + self.optimizer = UnrollableOptimizer(metainterp_sd, loop, optimizations) self.cloned_operations = [] for op in self.optimizer.loop.operations: newop = op.clone() @@ -150,6 +191,7 @@ args = ", ".join([logops.repr_of_arg(arg) for arg in short_inputargs]) debug_print('short inputargs: ' + args) self.short_boxes.debug_print(logops) + # Force virtuals amoung the jump_args of the preamble to get the # operations needed to setup the proper state of those virtuals @@ -161,8 +203,9 @@ if box in seen: continue seen[box] = True - value = preamble_optimizer.getvalue(box) - inputarg_setup_ops.extend(value.make_guards(box)) + preamble_value = preamble_optimizer.getvalue(box) + value = self.optimizer.getvalue(box) + value.import_from(preamble_value, self.optimizer) for box in short_inputargs: if box in seen: continue @@ -181,23 +224,17 @@ for op in self.short_boxes.operations(): self.ensure_short_op_emitted(op, self.optimizer, seen) if op and op.result: - # The order of these guards is not important as - # self.optimizer.emitting_dissabled is False - value = preamble_optimizer.getvalue(op.result) - for guard in value.make_guards(op.result): - self.optimizer.send_extra_operation(guard) + preamble_value = preamble_optimizer.getvalue(op.result) + value = self.optimizer.getvalue(op.result) + if not value.is_virtual(): + imp = ValueImporter(self, preamble_value, op) + self.optimizer.importable_values[value] = imp newresult = self.optimizer.getvalue(op.result).get_key_box() if newresult is not op.result: self.short_boxes.alias(newresult, op.result) self.optimizer.flush() self.optimizer.emitting_dissabled = False - # XXX Hack to prevent the arraylen/strlen/unicodelen ops generated - # by value.make_guards() from ending up in pure_operations - for key, op in self.optimizer.pure_operations.items(): - if not self.short_boxes.has_producer(op.result): - del self.optimizer.pure_operations[key] - initial_inputargs_len = len(inputargs) self.inliner = Inliner(loop.inputargs, jump_args) @@ -276,16 +313,11 @@ short_jumpargs = inputargs[:] - short = [] - short_seen = {} + short = self.short = [] + short_seen = self.short_seen = {} for box, const in self.constant_inputargs.items(): short_seen[box] = True - for op in self.short_boxes.operations(): - if op is not None: - if len(self.getvalue(op.result).make_guards(op.result)) > 0: - self.add_op_to_short(op, short, short_seen, False, True) - # This loop is equivalent to the main 
optimization loop in # Optimizer.propagate_all_forward jumpop = None @@ -380,7 +412,7 @@ if op.is_ovf(): guard = ResOperation(rop.GUARD_NO_OVERFLOW, [], None) optimizer.send_extra_operation(guard) - + def add_op_to_short(self, op, short, short_seen, emit=True, guards_needed=False): if op is None: return None @@ -536,6 +568,13 @@ loop_token.failed_states.append(virtual_state) self.emit_operation(op) +class ValueImporter(object): + def __init__(self, unroll, value, op): + self.unroll = unroll + self.preamble_value = value + self.op = op - - + def import_value(self, value): + value.import_from(self.preamble_value, self.unroll.optimizer) + self.unroll.add_op_to_short(self.op, self.unroll.short, self.unroll.short_seen, False, True) + diff --git a/pypy/jit/metainterp/optimizeopt/virtualize.py b/pypy/jit/metainterp/optimizeopt/virtualize.py --- a/pypy/jit/metainterp/optimizeopt/virtualize.py +++ b/pypy/jit/metainterp/optimizeopt/virtualize.py @@ -58,6 +58,9 @@ def _really_force(self): raise NotImplementedError("abstract base") + def import_from(self, other, optimizer): + raise NotImplementedError("should not be called at this level") + def get_fielddescrlist_cache(cpu): if not hasattr(cpu, '_optimizeopt_fielddescrlist_cache'): result = descrlist_dict() diff --git a/pypy/jit/metainterp/optimizeopt/virtualstate.py b/pypy/jit/metainterp/optimizeopt/virtualstate.py --- a/pypy/jit/metainterp/optimizeopt/virtualstate.py +++ b/pypy/jit/metainterp/optimizeopt/virtualstate.py @@ -12,6 +12,7 @@ from pypy.rlib.objectmodel import we_are_translated from pypy.rlib.debug import debug_start, debug_stop, debug_print from pypy.rlib.objectmodel import we_are_translated +import os class AbstractVirtualStateInfo(resume.AbstractVirtualInfo): position = -1 @@ -461,8 +462,10 @@ class ShortBoxes(object): def __init__(self, optimizer, surviving_boxes): self.potential_ops = {} - self.duplicates = {} + self.alternatives = {} + self.synthetic = {} self.aliases = {} + self.rename = {} self.optimizer = optimizer for box in surviving_boxes: self.potential_ops[box] = None @@ -476,33 +479,81 @@ except BoxNotProducable: pass + def prioritized_alternatives(self, box): + if box not in self.alternatives: + return [self.potential_ops[box]] + alts = self.alternatives[box] + hi, lo = 0, len(alts) - 1 + while hi < lo: + if alts[lo] is None: # Inputarg, lowest priority + alts[lo], alts[-1] = alts[-1], alts[lo] + lo -= 1 + elif alts[lo] not in self.synthetic: # Hi priority + alts[hi], alts[lo] = alts[lo], alts[hi] + hi += 1 + else: # Low priority + lo -= 1 + return alts + + def renamed(self, box): + if box in self.rename: + return self.rename[box] + return box + + def add_to_short(self, box, op): + if op: + op = op.clone() + for i in range(op.numargs()): + op.setarg(i, self.renamed(op.getarg(i))) + if box in self.short_boxes: + if op is None: + oldop = self.short_boxes[box].clone() + oldres = oldop.result + newbox = oldop.result = oldres.clonebox() + self.rename[box] = newbox + self.short_boxes[box] = None + self.short_boxes[newbox] = oldop + else: + newop = op.clone() + newbox = newop.result = op.result.clonebox() + self.short_boxes[newop.result] = newop + value = self.optimizer.getvalue(box) + self.optimizer.make_equal_to(newbox, value) + else: + self.short_boxes[box] = op + def produce_short_preamble_box(self, box): if box in self.short_boxes: return if isinstance(box, Const): return if box in self.potential_ops: - op = self.potential_ops[box] - if op: - for arg in op.getarglist(): - self.produce_short_preamble_box(arg) - 
self.short_boxes[box] = op + ops = self.prioritized_alternatives(box) + produced_one = False + for op in ops: + try: + if op: + for arg in op.getarglist(): + self.produce_short_preamble_box(arg) + except BoxNotProducable: + pass + else: + produced_one = True + self.add_to_short(box, op) + if not produced_one: + raise BoxNotProducable else: raise BoxNotProducable - def add_potential(self, op): + def add_potential(self, op, synthetic=False): if op.result not in self.potential_ops: self.potential_ops[op.result] = op - return op - newop = op.clone() - newop.result = op.result.clonebox() - self.potential_ops[newop.result] = newop - if op.result in self.duplicates: - self.duplicates[op.result].append(newop.result) else: - self.duplicates[op.result] = [newop.result] - self.optimizer.make_equal_to(newop.result, self.optimizer.getvalue(op.result)) - return newop + if op.result not in self.alternatives: + self.alternatives[op.result] = [self.potential_ops[op.result]] + self.alternatives[op.result].append(op) + if synthetic: + self.synthetic[op] = True def debug_print(self, logops): debug_start('jit-short-boxes') diff --git a/pypy/jit/metainterp/optimizeopt/vstring.py b/pypy/jit/metainterp/optimizeopt/vstring.py --- a/pypy/jit/metainterp/optimizeopt/vstring.py +++ b/pypy/jit/metainterp/optimizeopt/vstring.py @@ -141,6 +141,11 @@ for c in self._chars]) def string_copy_parts(self, optimizer, targetbox, offsetbox, mode): + if not self.is_virtual() and targetbox is not self.box: + lengthbox = self.getstrlen(optimizer, mode) + srcbox = self.force_box() + return copy_str_content(optimizer, srcbox, targetbox, + CONST_0, offsetbox, lengthbox, mode) for i in range(len(self._chars)): charbox = self._chars[i].force_box() if not (isinstance(charbox, Const) and charbox.same_constant(CONST_0)): @@ -296,7 +301,7 @@ def copy_str_content(optimizer, srcbox, targetbox, - srcoffsetbox, offsetbox, lengthbox, mode): + srcoffsetbox, offsetbox, lengthbox, mode, need_next_offset=True): if isinstance(srcbox, ConstPtr) and isinstance(srcoffsetbox, Const): M = 5 else: @@ -313,7 +318,10 @@ None)) offsetbox = _int_add(optimizer, offsetbox, CONST_1) else: - nextoffsetbox = _int_add(optimizer, offsetbox, lengthbox) + if need_next_offset: + nextoffsetbox = _int_add(optimizer, offsetbox, lengthbox) + else: + nextoffsetbox = None op = ResOperation(mode.COPYSTRCONTENT, [srcbox, targetbox, srcoffsetbox, offsetbox, lengthbox], None) @@ -365,7 +373,7 @@ def new(self): return OptString() - + def make_vstring_plain(self, box, source_op, mode): vvalue = VStringPlainValue(self.optimizer, box, source_op, mode) self.make_equal_to(box, vvalue) @@ -435,7 +443,11 @@ # if isinstance(value, VStringPlainValue): # even if no longer virtual if vindex.is_constant(): - return value.getitem(vindex.box.getint()) + res = value.getitem(vindex.box.getint()) + # If it is uninitialized we can't return it, it was set by a + # COPYSTRCONTENT, not a STRSETITEM + if res is not optimizer.CVAL_UNINITIALIZED_ZERO: + return res # resbox = _strgetitem(self.optimizer, value.force_box(), vindex.force_box(), mode) return self.getvalue(resbox) @@ -450,6 +462,30 @@ lengthbox = value.getstrlen(self.optimizer, mode) self.make_equal_to(op.result, self.getvalue(lengthbox)) + def optimize_COPYSTRCONTENT(self, op): + self._optimize_COPYSTRCONTENT(op, mode_string) + def optimize_COPYUNICODECONTENT(self, op): + self._optimize_COPYSTRCONTENT(op, mode_unicode) + + def _optimize_COPYSTRCONTENT(self, op, mode): + # args: src dst srcstart dststart length + src = 
self.getvalue(op.getarg(0)) + dst = self.getvalue(op.getarg(1)) + srcstart = self.getvalue(op.getarg(2)) + dststart = self.getvalue(op.getarg(3)) + length = self.getvalue(op.getarg(4)) + + if length.is_constant() and length.box.getint() == 0: + return + copy_str_content(self.optimizer, + src.force_box(), + dst.force_box(), + srcstart.force_box(), + dststart.force_box(), + length.force_box(), + mode, need_next_offset=False + ) + def optimize_CALL(self, op): # dispatch based on 'oopspecindex' to a method that handles # specifically the given oopspec call. For non-oopspec calls, diff --git a/pypy/jit/metainterp/pyjitpl.py b/pypy/jit/metainterp/pyjitpl.py --- a/pypy/jit/metainterp/pyjitpl.py +++ b/pypy/jit/metainterp/pyjitpl.py @@ -17,6 +17,7 @@ from pypy.jit.metainterp.jitprof import ABORT_TOO_LONG, ABORT_BRIDGE, \ ABORT_FORCE_QUASIIMMUT, ABORT_BAD_LOOP from pypy.jit.metainterp.jitexc import JitException, get_llexception +from pypy.jit.metainterp.heapcache import HeapCache from pypy.rlib.objectmodel import specialize from pypy.jit.codewriter.jitcode import JitCode, SwitchDictDescr from pypy.jit.codewriter import heaptracker @@ -209,7 +210,8 @@ self.metainterp.clear_exception() resbox = self.execute(rop.%s, b1, b2) self.make_result_of_lastop(resbox) # same as execute_varargs() - self.metainterp.handle_possible_overflow_error() + if not isinstance(resbox, Const): + self.metainterp.handle_possible_overflow_error() return resbox ''' % (_opimpl, _opimpl.upper())).compile() @@ -321,7 +323,7 @@ def _establish_nullity(self, box, orgpc): value = box.nonnull() if value: - if box not in self.metainterp.known_class_boxes: + if not self.metainterp.heapcache.is_class_known(box): self.generate_guard(rop.GUARD_NONNULL, box, resumepc=orgpc) else: if not isinstance(box, Const): @@ -366,14 +368,17 @@ @arguments("descr") def opimpl_new(self, sizedescr): - return self.execute_with_descr(rop.NEW, sizedescr) + resbox = self.execute_with_descr(rop.NEW, sizedescr) + self.metainterp.heapcache.new(resbox) + return resbox @arguments("descr") def opimpl_new_with_vtable(self, sizedescr): cpu = self.metainterp.cpu cls = heaptracker.descr2vtable(cpu, sizedescr) resbox = self.execute(rop.NEW_WITH_VTABLE, ConstInt(cls)) - self.metainterp.known_class_boxes[resbox] = None + self.metainterp.heapcache.new(resbox) + self.metainterp.heapcache.class_now_known(resbox) return resbox ## @FixME #arguments("box") @@ -392,26 +397,30 @@ ## self.execute(rop.SUBCLASSOF, box1, box2) @arguments("descr", "box") - def opimpl_new_array(self, itemsizedescr, countbox): - return self.execute_with_descr(rop.NEW_ARRAY, itemsizedescr, countbox) + def opimpl_new_array(self, itemsizedescr, lengthbox): + resbox = self.execute_with_descr(rop.NEW_ARRAY, itemsizedescr, lengthbox) + self.metainterp.heapcache.new_array(resbox, lengthbox) + return resbox + + @specialize.arg(1) + def _do_getarrayitem_gc_any(self, op, arraybox, arraydescr, indexbox): + tobox = self.metainterp.heapcache.getarrayitem( + arraybox, arraydescr, indexbox) + if tobox: + # sanity check: see whether the current array value + # corresponds to what the cache thinks the value is + resbox = executor.execute(self.metainterp.cpu, self.metainterp, op, + arraydescr, arraybox, indexbox) + assert resbox.constbox().same_constant(tobox.constbox()) + return tobox + resbox = self.execute_with_descr(op, arraydescr, arraybox, indexbox) + self.metainterp.heapcache.getarrayitem_now_known( + arraybox, arraydescr, indexbox, resbox) + return resbox @arguments("box", "descr", "box") def 
_opimpl_getarrayitem_gc_any(self, arraybox, arraydescr, indexbox): - cache = self.metainterp.heap_array_cache.get(arraydescr, None) - if cache and isinstance(indexbox, ConstInt): - index = indexbox.getint() - frombox, tobox = cache.get(index, (None, None)) - if frombox is arraybox: - return tobox - resbox = self.execute_with_descr(rop.GETARRAYITEM_GC, - arraydescr, arraybox, indexbox) - if isinstance(indexbox, ConstInt): - if not cache: - cache = self.metainterp.heap_array_cache[arraydescr] = {} - index = indexbox.getint() - cache[index] = arraybox, resbox - return resbox - + return self._do_getarrayitem_gc_any(rop.GETARRAYITEM_GC, arraybox, arraydescr, indexbox) opimpl_getarrayitem_gc_i = _opimpl_getarrayitem_gc_any opimpl_getarrayitem_gc_r = _opimpl_getarrayitem_gc_any @@ -427,8 +436,7 @@ @arguments("box", "descr", "box") def _opimpl_getarrayitem_gc_pure_any(self, arraybox, arraydescr, indexbox): - return self.execute_with_descr(rop.GETARRAYITEM_GC_PURE, - arraydescr, arraybox, indexbox) + return self._do_getarrayitem_gc_any(rop.GETARRAYITEM_GC_PURE, arraybox, arraydescr, indexbox) opimpl_getarrayitem_gc_pure_i = _opimpl_getarrayitem_gc_pure_any opimpl_getarrayitem_gc_pure_r = _opimpl_getarrayitem_gc_pure_any @@ -439,13 +447,8 @@ indexbox, itembox): self.execute_with_descr(rop.SETARRAYITEM_GC, arraydescr, arraybox, indexbox, itembox) - if isinstance(indexbox, ConstInt): - cache = self.metainterp.heap_array_cache.setdefault(arraydescr, {}) - cache[indexbox.getint()] = arraybox, itembox - else: - cache = self.metainterp.heap_array_cache.get(arraydescr, None) - if cache: - cache.clear() + self.metainterp.heapcache.setarrayitem( + arraybox, arraydescr, indexbox, itembox) opimpl_setarrayitem_gc_i = _opimpl_setarrayitem_gc_any opimpl_setarrayitem_gc_r = _opimpl_setarrayitem_gc_any @@ -462,7 +465,12 @@ @arguments("box", "descr") def opimpl_arraylen_gc(self, arraybox, arraydescr): - return self.execute_with_descr(rop.ARRAYLEN_GC, arraydescr, arraybox) + lengthbox = self.metainterp.heapcache.arraylen(arraybox) + if lengthbox is None: + lengthbox = self.execute_with_descr( + rop.ARRAYLEN_GC, arraydescr, arraybox) + self.metainterp.heapcache.arraylen_now_known(arraybox, lengthbox) + return lengthbox @arguments("orgpc", "box", "descr", "box") def opimpl_check_neg_index(self, orgpc, arraybox, arraydescr, indexbox): @@ -471,19 +479,17 @@ negbox = self.implement_guard_value(orgpc, negbox) if negbox.getint(): # the index is < 0; add the array length to it - lenbox = self.metainterp.execute_and_record( - rop.ARRAYLEN_GC, arraydescr, arraybox) + lengthbox = self.opimpl_arraylen_gc(arraybox, arraydescr) indexbox = self.metainterp.execute_and_record( - rop.INT_ADD, None, indexbox, lenbox) + rop.INT_ADD, None, indexbox, lengthbox) return indexbox @arguments("descr", "descr", "descr", "descr", "box") def opimpl_newlist(self, structdescr, lengthdescr, itemsdescr, arraydescr, sizebox): - sbox = self.metainterp.execute_and_record(rop.NEW, structdescr) + sbox = self.opimpl_new(structdescr) self._opimpl_setfield_gc_any(sbox, lengthdescr, sizebox) - abox = self.metainterp.execute_and_record(rop.NEW_ARRAY, arraydescr, - sizebox) + abox = self.opimpl_new_array(arraydescr, sizebox) self._opimpl_setfield_gc_any(sbox, itemsdescr, abox) return sbox @@ -540,11 +546,15 @@ @specialize.arg(1) def _opimpl_getfield_gc_any_pureornot(self, opnum, box, fielddescr): - frombox, tobox = self.metainterp.heap_cache.get(fielddescr, (None, None)) - if frombox is box: + tobox = self.metainterp.heapcache.getfield(box, fielddescr) + if 
tobox is not None: + # sanity check: see whether the current struct value + # corresponds to what the cache thinks the value is + resbox = executor.execute(self.metainterp.cpu, self.metainterp, + rop.GETFIELD_GC, fielddescr, box) return tobox resbox = self.execute_with_descr(opnum, fielddescr, box) - self.metainterp.heap_cache[fielddescr] = (box, resbox) + self.metainterp.heapcache.getfield_now_known(box, fielddescr, resbox) return resbox @arguments("orgpc", "box", "descr") @@ -565,11 +575,11 @@ @arguments("box", "descr", "box") def _opimpl_setfield_gc_any(self, box, fielddescr, valuebox): - frombox, tobox = self.metainterp.heap_cache.get(fielddescr, (None, None)) - if frombox is box and tobox is valuebox: + tobox = self.metainterp.heapcache.getfield(box, fielddescr) + if tobox is valuebox: return self.execute_with_descr(rop.SETFIELD_GC, fielddescr, box, valuebox) - self.metainterp.heap_cache[fielddescr] = (box, valuebox) + self.metainterp.heapcache.setfield(box, fielddescr, valuebox) opimpl_setfield_gc_i = _opimpl_setfield_gc_any opimpl_setfield_gc_r = _opimpl_setfield_gc_any opimpl_setfield_gc_f = _opimpl_setfield_gc_any @@ -633,7 +643,7 @@ standard_box = self.metainterp.virtualizable_boxes[-1] if standard_box is box: return False - if box in self.metainterp.nonstandard_virtualizables: + if self.metainterp.heapcache.is_nonstandard_virtualizable(box): return True eqbox = self.metainterp.execute_and_record(rop.PTR_EQ, None, box, standard_box) @@ -642,7 +652,7 @@ if isstandard: self.metainterp.replace_box(box, standard_box) else: - self.metainterp.nonstandard_virtualizables[box] = None + self.metainterp.heapcache.nonstandard_virtualizables_now_known(box) return not isstandard def _get_virtualizable_field_index(self, fielddescr): @@ -727,7 +737,7 @@ def opimpl_arraylen_vable(self, pc, box, fdescr, adescr): if self._nonstandard_virtualizable(pc, box): arraybox = self._opimpl_getfield_gc_any(box, fdescr) - return self.execute_with_descr(rop.ARRAYLEN_GC, adescr, arraybox) + return self.opimpl_arraylen_gc(arraybox, adescr) vinfo = self.metainterp.jitdriver_sd.virtualizable_info virtualizable_box = self.metainterp.virtualizable_boxes[-1] virtualizable = vinfo.unwrap_virtualizable_box(virtualizable_box) @@ -858,6 +868,14 @@ def opimpl_newunicode(self, lengthbox): return self.execute(rop.NEWUNICODE, lengthbox) + @arguments("box", "box", "box", "box", "box") + def opimpl_copystrcontent(self, srcbox, dstbox, srcstartbox, dststartbox, lengthbox): + return self.execute(rop.COPYSTRCONTENT, srcbox, dstbox, srcstartbox, dststartbox, lengthbox) + + @arguments("box", "box", "box", "box", "box") + def opimpl_copyunicodecontent(self, srcbox, dstbox, srcstartbox, dststartbox, lengthbox): + return self.execute(rop.COPYUNICODECONTENT, srcbox, dstbox, srcstartbox, dststartbox, lengthbox) + ## @FixME #arguments("descr", "varargs") ## def opimpl_residual_oosend_canraise(self, methdescr, varargs): ## return self.execute_varargs(rop.OOSEND, varargs, descr=methdescr, @@ -884,9 +902,9 @@ @arguments("orgpc", "box") def opimpl_guard_class(self, orgpc, box): clsbox = self.cls_of_box(box) - if box not in self.metainterp.known_class_boxes: + if not self.metainterp.heapcache.is_class_known(box): self.generate_guard(rop.GUARD_CLASS, box, [clsbox], resumepc=orgpc) - self.metainterp.known_class_boxes[box] = None + self.metainterp.heapcache.class_now_known(box) return clsbox @arguments("int", "orgpc") @@ -1052,6 +1070,18 @@ return ConstInt(trace_length) @arguments("box") + def _opimpl_isconstant(self, box): + return 
ConstInt(isinstance(box, Const)) + + opimpl_int_isconstant = opimpl_ref_isconstant = _opimpl_isconstant + + @arguments("box") + def _opimpl_isvirtual(self, box): + return ConstInt(self.metainterp.heapcache.is_unescaped(box)) + + opimpl_ref_isvirtual = _opimpl_isvirtual + + @arguments("box") def opimpl_virtual_ref(self, box): # Details on the content of metainterp.virtualref_boxes: # @@ -1492,16 +1522,7 @@ self.last_exc_value_box = None self.retracing_loop_from = None self.call_pure_results = args_dict_box() - # contains boxes where the class is already known - self.known_class_boxes = {} - # contains frame boxes that are not virtualizables - self.nonstandard_virtualizables = {} - # heap cache - # maps descrs to (from_box, to_box) tuples - self.heap_cache = {} - # heap array cache - # maps descrs to {index: (from_box, to_box)} dicts - self.heap_array_cache = {} + self.heapcache = HeapCache() def perform_call(self, jitcode, boxes, greenkey=None): # causes the metainterp to enter the given subfunction @@ -1674,32 +1695,18 @@ def _record_helper_nonpure_varargs(self, opnum, resbox, descr, argboxes): assert resbox is None or isinstance(resbox, Box) + if (rop._OVF_FIRST <= opnum <= rop._OVF_LAST and + self.last_exc_value_box is None and + self._all_constants_varargs(argboxes)): + return resbox.constbox() # record the operation profiler = self.staticdata.profiler profiler.count_ops(opnum, RECORDED_OPS) - self._invalidate_caches(opnum, descr) + self.heapcache.invalidate_caches(opnum, descr, argboxes) op = self.history.record(opnum, argboxes, resbox, descr) self.attach_debug_info(op) return resbox - def _invalidate_caches(self, opnum, descr): - if opnum == rop.SETFIELD_GC: - return - if opnum == rop.SETARRAYITEM_GC: - return - if rop._NOSIDEEFFECT_FIRST <= opnum <= rop._NOSIDEEFFECT_LAST: - return - if opnum == rop.CALL: - effectinfo = descr.get_extra_info() - ef = effectinfo.extraeffect - if ef == effectinfo.EF_LOOPINVARIANT or \ - ef == effectinfo.EF_ELIDABLE_CANNOT_RAISE or \ - ef == effectinfo.EF_ELIDABLE_CAN_RAISE: - return - if self.heap_cache: - self.heap_cache.clear() - if self.heap_array_cache: - self.heap_array_cache.clear() def attach_debug_info(self, op): if (not we_are_translated() and op is not None @@ -1862,10 +1869,7 @@ duplicates[box] = None def reached_loop_header(self, greenboxes, redboxes, resumedescr): - self.known_class_boxes = {} - self.nonstandard_virtualizables = {} # XXX maybe not needed? - self.heap_cache = {} - self.heap_array_cache = {} + self.heapcache.reset() duplicates = {} self.remove_consts_and_duplicates(redboxes, len(redboxes), @@ -2373,17 +2377,7 @@ for i in range(len(boxes)): if boxes[i] is oldbox: boxes[i] = newbox - for descr, (frombox, tobox) in self.heap_cache.iteritems(): - change = False - if frombox is oldbox: - change = True - frombox = newbox - if tobox is oldbox: - change = True - tobox = newbox - if change: - self.heap_cache[descr] = frombox, tobox - # XXX what about self.heap_array_cache? 
+ self.heapcache.replace_box(oldbox, newbox) def find_biggest_function(self): start_stack = [] diff --git a/pypy/jit/metainterp/test/test_ajit.py b/pypy/jit/metainterp/test/test_ajit.py --- a/pypy/jit/metainterp/test/test_ajit.py +++ b/pypy/jit/metainterp/test/test_ajit.py @@ -1,23 +1,25 @@ +import sys + import py -import sys -from pypy.rlib.jit import JitDriver, we_are_jitted, hint, dont_look_inside -from pypy.rlib.jit import loop_invariant, elidable, promote -from pypy.rlib.jit import jit_debug, assert_green, AssertGreenFailed -from pypy.rlib.jit import unroll_safe, current_trace_length + +from pypy import conftest +from pypy.jit.codewriter.policy import JitPolicy, StopAtXPolicy from pypy.jit.metainterp import pyjitpl, history +from pypy.jit.metainterp.optimizeopt import ALL_OPTS_DICT +from pypy.jit.metainterp.test.support import LLJitMixin, OOJitMixin, noConst +from pypy.jit.metainterp.typesystem import LLTypeHelper, OOTypeHelper +from pypy.jit.metainterp.warmspot import get_stats from pypy.jit.metainterp.warmstate import set_future_value -from pypy.jit.metainterp.warmspot import get_stats -from pypy.jit.codewriter.policy import JitPolicy, StopAtXPolicy -from pypy import conftest +from pypy.rlib.jit import (JitDriver, we_are_jitted, hint, dont_look_inside, + loop_invariant, elidable, promote, jit_debug, assert_green, + AssertGreenFailed, unroll_safe, current_trace_length, look_inside_iff, + isconstant, isvirtual) from pypy.rlib.rarithmetic import ovfcheck -from pypy.jit.metainterp.typesystem import LLTypeHelper, OOTypeHelper from pypy.rpython.lltypesystem import lltype, llmemory, rffi from pypy.rpython.ootypesystem import ootype -from pypy.jit.metainterp.optimizeopt import ALL_OPTS_DICT -from pypy.jit.metainterp.test.support import LLJitMixin, OOJitMixin, noConst + class BasicTests: - def test_basic(self): def f(x, y): return x + y @@ -99,14 +101,14 @@ myjitdriver.jit_merge_point(x=x, y=y, res=res) res += x * x x += 1 - res += x * x + res += x * x y -= 1 return res res = self.meta_interp(f, [6, 7]) assert res == 1323 self.check_loop_count(1) self.check_loops(int_mul=1) - + def test_loop_variant_mul_ovf(self): myjitdriver = JitDriver(greens = [], reds = ['y', 'res', 'x']) def f(x, y): @@ -1372,7 +1374,7 @@ return x res = self.meta_interp(f, [299], listops=True) assert res == f(299) - self.check_loops(guard_class=0, guard_value=3) + self.check_loops(guard_class=0, guard_value=3) self.check_loops(guard_class=0, guard_value=6, everywhere=True) def test_merge_guardnonnull_guardclass(self): @@ -2118,7 +2120,7 @@ return sa res = self.meta_interp(f, [32, 7]) assert res == f(32, 7) - + def test_caching_setarrayitem_fixed(self): myjitdriver = JitDriver(greens = [], reds = ['sa', 'i', 'n', 'a', 'node']) def f(n, a): @@ -2138,7 +2140,7 @@ return sa res = self.meta_interp(f, [32, 7]) assert res == f(32, 7) - + def test_caching_setarrayitem_var(self): myjitdriver = JitDriver(greens = [], reds = ['sa', 'i', 'n', 'a', 'b', 'node']) def f(n, a, b): @@ -2668,7 +2670,7 @@ myjitdriver.set_param('threshold', 3) myjitdriver.set_param('trace_eagerness', 1) myjitdriver.set_param('retrace_limit', 5) - myjitdriver.set_param('function_threshold', -1) + myjitdriver.set_param('function_threshold', -1) pc = sa = i = 0 while pc < len(bytecode): myjitdriver.jit_merge_point(pc=pc, n=n, sa=sa, i=i) @@ -2693,12 +2695,12 @@ def g(n1, n2): for i in range(10): f(n1) - for i in range(10): + for i in range(10): f(n2) nn = [10, 3] assert self.meta_interp(g, nn) == g(*nn) - + # The attempts of retracing first loop will end up 
retracing the # second and thus fail 5 times, saturating the retrace_count. Instead a # bridge back to the preamble of the first loop is produced. A guard in @@ -2709,7 +2711,7 @@ self.check_tree_loop_count(2 + 3) # FIXME: Add a gloabl retrace counter and test that we are not trying more than 5 times. - + def g(n): for i in range(n): for j in range(10): @@ -2945,15 +2947,15 @@ a = [0, 1, 2, 3, 4] while i < n: myjitdriver.jit_merge_point(sa=sa, n=n, a=a, i=i) - if i < n/2: + if i < n / 2: sa += a[4] - elif i == n/2: + elif i == n / 2: a.pop() i += 1 res = self.meta_interp(f, [32]) assert res == f(32) self.check_loops(arraylen_gc=2) - + class TestOOtype(BasicTests, OOJitMixin): def test_oohash(self): @@ -3173,7 +3175,7 @@ res = self.meta_interp(f, [32]) assert res == f(32) self.check_tree_loop_count(3) - + def test_two_loopinvariant_arrays3(self): from pypy.rpython.lltypesystem import lltype, llmemory, rffi myjitdriver = JitDriver(greens = [], reds = ['sa', 'n', 'i', 'a']) @@ -3197,7 +3199,7 @@ res = self.meta_interp(f, [32]) assert res == f(32) self.check_tree_loop_count(2) - + def test_two_loopinvariant_arrays_boxed(self): class A(object): def __init__(self, a): @@ -3222,7 +3224,7 @@ res = self.meta_interp(f, [32]) assert res == f(32) self.check_loops(arraylen_gc=2, everywhere=True) - + def test_release_gil_flush_heap_cache(self): if sys.platform == "win32": py.test.skip("needs 'time'") @@ -3276,7 +3278,136 @@ return n self.meta_interp(f, [10], repeat=3) - + + def test_jit_merge_point_with_pbc(self): + driver = JitDriver(greens = [], reds = ['x']) + + class A(object): + def __init__(self, x): + self.x = x + def _freeze_(self): + return True + pbc = A(1) + + def main(x): + return f(x, pbc) + + def f(x, pbc): + while x > 0: + driver.jit_merge_point(x = x) + x -= pbc.x + return x + + self.meta_interp(main, [10]) + + def test_look_inside_iff_const(self): + @look_inside_iff(lambda arg: isconstant(arg)) + def f(arg): + s = 0 + while arg > 0: + s += arg + arg -= 1 + return s + + driver = JitDriver(greens = ['code'], reds = ['n', 'arg', 's']) + + def main(code, n, arg): + s = 0 + while n > 0: + driver.jit_merge_point(code=code, n=n, arg=arg, s=s) + if code == 0: + s += f(arg) + else: + s += f(1) + n -= 1 + return s + + res = self.meta_interp(main, [0, 10, 2], enable_opts='') + assert res == main(0, 10, 2) + self.check_loops(call=1) + res = self.meta_interp(main, [1, 10, 2], enable_opts='') + assert res == main(1, 10, 2) + self.check_loops(call=0) + + def test_look_inside_iff_virtual(self): + # There's no good reason for this to be look_inside_iff, but it's a test! 
+ @look_inside_iff(lambda arg, n: isvirtual(arg)) + def f(arg, n): + if n == 100: + for i in xrange(n): + n += i + return arg.x + class A(object): + def __init__(self, x): + self.x = x + driver = JitDriver(greens=['n'], reds=['i', 'a']) + def main(n): + i = 0 + a = A(3) + while i < 20: + driver.jit_merge_point(i=i, n=n, a=a) + if n == 0: + i += f(a, n) + else: + i += f(A(2), n) + res = self.meta_interp(main, [0], enable_opts='') + assert res == main(0) + self.check_loops(call=1, getfield_gc=0) + res = self.meta_interp(main, [1], enable_opts='') + assert res == main(1) + self.check_loops(call=0, getfield_gc=0) + + def test_reuse_elidable_result(self): + driver = JitDriver(reds=['n', 's'], greens = []) + def main(n): + s = 0 + while n > 0: + driver.jit_merge_point(s=s, n=n) + s += len(str(n)) + len(str(n)) + n -= 1 + return s + res = self.meta_interp(main, [10]) + assert res == main(10) + self.check_loops({ + 'call': 1, 'guard_no_exception': 1, 'guard_true': 1, 'int_add': 2, + 'int_gt': 1, 'int_sub': 1, 'strlen': 1, 'jump': 1, + }) + + def test_look_inside_iff_const_getarrayitem_gc_pure(self): + driver = JitDriver(greens=['unroll'], reds=['s', 'n']) + + class A(object): + _immutable_fields_ = ["x[*]"] + def __init__(self, x): + self.x = [x] + + @look_inside_iff(lambda x: isconstant(x)) + def f(x): + i = 0 + for c in x: + i += 1 + return i + + def main(unroll, n): + s = 0 + while n > 0: + driver.jit_merge_point(s=s, n=n, unroll=unroll) + if unroll: + x = A("xx") + else: + x = A("x" * n) + s += f(x.x[0]) + n -= 1 + return s + + res = self.meta_interp(main, [0, 10]) + assert res == main(0, 10) + # 2 calls, one for f() and one for char_mul + self.check_loops(call=2) + res = self.meta_interp(main, [1, 10]) + assert res == main(1, 10) + self.check_loops(call=0) + class TestLLtype(BaseLLtypeTests, LLJitMixin): pass diff --git a/pypy/jit/metainterp/test/test_dict.py b/pypy/jit/metainterp/test/test_dict.py --- a/pypy/jit/metainterp/test/test_dict.py +++ b/pypy/jit/metainterp/test/test_dict.py @@ -153,11 +153,7 @@ res = self.meta_interp(f, [100], listops=True) assert res == f(50) - # XXX: ideally there would be 7 calls here, but repeated CALL_PURE with - # the same arguments are not folded, because we have conflicting - # definitions of pure, once strhash can be appropriately folded - # this should be decreased to seven. 
- self.check_loops({"call": 8, "guard_false": 1, "guard_no_exception": 6, + self.check_loops({"call": 7, "guard_false": 1, "guard_no_exception": 6, "guard_true": 1, "int_and": 1, "int_gt": 1, "int_is_true": 1, "int_sub": 1, "jump": 1, "new_with_vtable": 1, "setfield_gc": 1}) diff --git a/pypy/jit/metainterp/test/test_heapcache.py b/pypy/jit/metainterp/test/test_heapcache.py new file mode 100644 --- /dev/null +++ b/pypy/jit/metainterp/test/test_heapcache.py @@ -0,0 +1,365 @@ +from pypy.jit.metainterp.heapcache import HeapCache +from pypy.jit.metainterp.resoperation import rop +from pypy.jit.metainterp.history import ConstInt + +box1 = object() +box2 = object() +box3 = object() +box4 = object() +lengthbox1 = object() +lengthbox2 = object() +descr1 = object() +descr2 = object() +descr3 = object() + +index1 = ConstInt(0) +index2 = ConstInt(1) + + +class FakeEffektinfo(object): + EF_ELIDABLE_CANNOT_RAISE = 0 #elidable function (and cannot raise) + EF_LOOPINVARIANT = 1 #special: call it only once per loop + EF_CANNOT_RAISE = 2 #a function which cannot raise + EF_ELIDABLE_CAN_RAISE = 3 #elidable function (but can raise) + EF_CAN_RAISE = 4 #normal function (can raise) + EF_FORCES_VIRTUAL_OR_VIRTUALIZABLE = 5 #can raise and force virtualizables + EF_RANDOM_EFFECTS = 6 #can do whatever + + OS_ARRAYCOPY = 0 + + def __init__(self, extraeffect, oopspecindex): + self.extraeffect = extraeffect + self.oopspecindex = oopspecindex + +class FakeCallDescr(object): + def __init__(self, extraeffect, oopspecindex=None): + self.extraeffect = extraeffect + self.oopspecindex = oopspecindex + + def get_extra_info(self): + return FakeEffektinfo(self.extraeffect, self.oopspecindex) + +class TestHeapCache(object): + def test_known_class_box(self): + h = HeapCache() + assert not h.is_class_known(1) + assert not h.is_class_known(2) + h.class_now_known(1) + assert h.is_class_known(1) + assert not h.is_class_known(2) + + h.reset() + assert not h.is_class_known(1) + assert not h.is_class_known(2) + + def test_nonstandard_virtualizable(self): + h = HeapCache() + assert not h.is_nonstandard_virtualizable(1) + assert not h.is_nonstandard_virtualizable(2) + h.nonstandard_virtualizables_now_known(1) + assert h.is_nonstandard_virtualizable(1) + assert not h.is_nonstandard_virtualizable(2) + + h.reset() + assert not h.is_nonstandard_virtualizable(1) + assert not h.is_nonstandard_virtualizable(2) + + + def test_heapcache_fields(self): + h = HeapCache() + assert h.getfield(box1, descr1) is None + assert h.getfield(box1, descr2) is None + h.setfield(box1, descr1, box2) + assert h.getfield(box1, descr1) is box2 + assert h.getfield(box1, descr2) is None + h.setfield(box1, descr2, box3) + assert h.getfield(box1, descr1) is box2 + assert h.getfield(box1, descr2) is box3 + h.setfield(box1, descr1, box3) + assert h.getfield(box1, descr1) is box3 + assert h.getfield(box1, descr2) is box3 + h.setfield(box3, descr1, box1) + assert h.getfield(box3, descr1) is box1 + assert h.getfield(box1, descr1) is None + assert h.getfield(box1, descr2) is box3 + + h.reset() + assert h.getfield(box1, descr1) is None + assert h.getfield(box1, descr2) is None + assert h.getfield(box3, descr1) is None + + def test_heapcache_read_fields_multiple(self): + h = HeapCache() + h.getfield_now_known(box1, descr1, box2) + h.getfield_now_known(box3, descr1, box4) + assert h.getfield(box1, descr1) is box2 + assert h.getfield(box1, descr2) is None + assert h.getfield(box3, descr1) is box4 + assert h.getfield(box3, descr2) is None + + h.reset() + assert h.getfield(box1, 
descr1) is None + assert h.getfield(box1, descr2) is None + assert h.getfield(box3, descr1) is None + assert h.getfield(box3, descr2) is None + + def test_heapcache_write_fields_multiple(self): + h = HeapCache() + h.setfield(box1, descr1, box2) + assert h.getfield(box1, descr1) is box2 + h.setfield(box3, descr1, box4) + assert h.getfield(box3, descr1) is box4 + assert h.getfield(box1, descr1) is None # box1 and box3 can alias + + h = HeapCache() + h.new(box1) + h.setfield(box1, descr1, box2) + assert h.getfield(box1, descr1) is box2 + h.setfield(box3, descr1, box4) + assert h.getfield(box3, descr1) is box4 + assert h.getfield(box1, descr1) is None # box1 and box3 can alias + + h = HeapCache() + h.new(box1) + h.new(box3) + h.setfield(box1, descr1, box2) + assert h.getfield(box1, descr1) is box2 + h.setfield(box3, descr1, box4) + assert h.getfield(box3, descr1) is box4 + assert h.getfield(box1, descr1) is box2 # box1 and box3 cannot alias + h.setfield(box1, descr1, box3) + assert h.getfield(box1, descr1) is box3 + + + def test_heapcache_arrays(self): + h = HeapCache() + assert h.getarrayitem(box1, descr1, index1) is None + assert h.getarrayitem(box1, descr2, index1) is None + assert h.getarrayitem(box1, descr1, index2) is None + assert h.getarrayitem(box1, descr2, index2) is None + + h.setarrayitem(box1, descr1, index1, box2) + assert h.getarrayitem(box1, descr1, index1) is box2 + assert h.getarrayitem(box1, descr2, index1) is None + assert h.getarrayitem(box1, descr1, index2) is None + assert h.getarrayitem(box1, descr2, index2) is None + h.setarrayitem(box1, descr1, index2, box4) + assert h.getarrayitem(box1, descr1, index1) is box2 + assert h.getarrayitem(box1, descr2, index1) is None + assert h.getarrayitem(box1, descr1, index2) is box4 + assert h.getarrayitem(box1, descr2, index2) is None + + h.setarrayitem(box1, descr2, index1, box3) + assert h.getarrayitem(box1, descr1, index1) is box2 + assert h.getarrayitem(box1, descr2, index1) is box3 + assert h.getarrayitem(box1, descr1, index2) is box4 + assert h.getarrayitem(box1, descr2, index2) is None + + h.setarrayitem(box1, descr1, index1, box3) + assert h.getarrayitem(box1, descr1, index1) is box3 + assert h.getarrayitem(box1, descr2, index1) is box3 + assert h.getarrayitem(box1, descr1, index2) is box4 + assert h.getarrayitem(box1, descr2, index2) is None + + h.setarrayitem(box3, descr1, index1, box1) + assert h.getarrayitem(box3, descr1, index1) is box1 + assert h.getarrayitem(box1, descr1, index1) is None + assert h.getarrayitem(box1, descr2, index1) is box3 + assert h.getarrayitem(box1, descr1, index2) is box4 + assert h.getarrayitem(box1, descr2, index2) is None + + h.reset() + assert h.getarrayitem(box1, descr1, index1) is None + assert h.getarrayitem(box1, descr2, index1) is None + assert h.getarrayitem(box3, descr1, index1) is None + + def test_heapcache_array_nonconst_index(self): + h = HeapCache() + h.setarrayitem(box1, descr1, index1, box2) + h.setarrayitem(box1, descr1, index2, box4) + assert h.getarrayitem(box1, descr1, index1) is box2 + assert h.getarrayitem(box1, descr1, index2) is box4 + h.setarrayitem(box1, descr1, box2, box3) + assert h.getarrayitem(box1, descr1, index1) is None + assert h.getarrayitem(box1, descr1, index2) is None + + def test_heapcache_read_fields_multiple_array(self): + h = HeapCache() + h.getarrayitem_now_known(box1, descr1, index1, box2) + h.getarrayitem_now_known(box3, descr1, index1, box4) + assert h.getarrayitem(box1, descr1, index1) is box2 + assert h.getarrayitem(box1, descr2, index1) is None + 
assert h.getarrayitem(box3, descr1, index1) is box4 + assert h.getarrayitem(box3, descr2, index1) is None + + h.reset() + assert h.getarrayitem(box1, descr1, index1) is None + assert h.getarrayitem(box1, descr2, index1) is None + assert h.getarrayitem(box3, descr1, index1) is None + assert h.getarrayitem(box3, descr2, index1) is None + + def test_heapcache_write_fields_multiple_array(self): + h = HeapCache() + h.setarrayitem(box1, descr1, index1, box2) + assert h.getarrayitem(box1, descr1, index1) is box2 + h.setarrayitem(box3, descr1, index1, box4) + assert h.getarrayitem(box3, descr1, index1) is box4 + assert h.getarrayitem(box1, descr1, index1) is None # box1 and box3 can alias + + h = HeapCache() + h.new(box1) + h.setarrayitem(box1, descr1, index1, box2) + assert h.getarrayitem(box1, descr1, index1) is box2 + h.setarrayitem(box3, descr1, index1, box4) + assert h.getarrayitem(box3, descr1, index1) is box4 + assert h.getarrayitem(box1, descr1, index1) is None # box1 and box3 can alias + + h = HeapCache() + h.new(box1) + h.new(box3) + h.setarrayitem(box1, descr1, index1, box2) + assert h.getarrayitem(box1, descr1, index1) is box2 + h.setarrayitem(box3, descr1, index1, box4) + assert h.getarrayitem(box3, descr1, index1) is box4 + assert h.getarrayitem(box1, descr1, index1) is box2 # box1 and box3 cannot alias + h.setarrayitem(box1, descr1, index1, box3) + assert h.getarrayitem(box3, descr1, index1) is box4 + assert h.getarrayitem(box1, descr1, index1) is box3 # box1 and box3 cannot alias + + def test_length_cache(self): + h = HeapCache() + h.new_array(box1, lengthbox1) + assert h.arraylen(box1) is lengthbox1 + + assert h.arraylen(box2) is None + h.arraylen_now_known(box2, lengthbox2) + assert h.arraylen(box2) is lengthbox2 + + + def test_invalidate_cache(self): + h = HeapCache() + h.setfield(box1, descr1, box2) + h.setarrayitem(box1, descr1, index1, box2) + h.setarrayitem(box1, descr1, index2, box4) + h.invalidate_caches(rop.INT_ADD, None, []) + h.invalidate_caches(rop.INT_ADD_OVF, None, []) + h.invalidate_caches(rop.SETFIELD_RAW, None, []) + h.invalidate_caches(rop.SETARRAYITEM_RAW, None, []) + assert h.getfield(box1, descr1) is box2 + assert h.getarrayitem(box1, descr1, index1) is box2 + assert h.getarrayitem(box1, descr1, index2) is box4 + + h.invalidate_caches( + rop.CALL, FakeCallDescr(FakeEffektinfo.EF_ELIDABLE_CANNOT_RAISE), []) + assert h.getfield(box1, descr1) is box2 + assert h.getarrayitem(box1, descr1, index1) is box2 + assert h.getarrayitem(box1, descr1, index2) is box4 + + h.invalidate_caches( + rop.CALL_LOOPINVARIANT, FakeCallDescr(FakeEffektinfo.EF_LOOPINVARIANT), []) + + h.invalidate_caches( + rop.CALL, FakeCallDescr(FakeEffektinfo.EF_RANDOM_EFFECTS), []) + assert h.getfield(box1, descr1) is None + assert h.getarrayitem(box1, descr1, index1) is None + assert h.getarrayitem(box1, descr1, index2) is None + + + def test_replace_box(self): + h = HeapCache() + h.setfield(box1, descr1, box2) + h.setfield(box1, descr2, box3) + h.setfield(box2, descr3, box3) + h.replace_box(box1, box4) + assert h.getfield(box1, descr1) is None + assert h.getfield(box1, descr2) is None + assert h.getfield(box4, descr1) is box2 + assert h.getfield(box4, descr2) is box3 + assert h.getfield(box2, descr3) is box3 + + def test_replace_box_array(self): + h = HeapCache() + h.setarrayitem(box1, descr1, index1, box2) + h.setarrayitem(box1, descr2, index1, box3) + h.arraylen_now_known(box1, lengthbox1) + h.setarrayitem(box2, descr1, index2, box1) + h.setarrayitem(box3, descr2, index2, box1) + 
h.setarrayitem(box2, descr3, index2, box3) + h.replace_box(box1, box4) + assert h.getarrayitem(box1, descr1, index1) is None + assert h.getarrayitem(box1, descr2, index1) is None + assert h.arraylen(box1) is None + assert h.arraylen(box4) is lengthbox1 + assert h.getarrayitem(box4, descr1, index1) is box2 + assert h.getarrayitem(box4, descr2, index1) is box3 + assert h.getarrayitem(box2, descr1, index2) is box4 + assert h.getarrayitem(box3, descr2, index2) is box4 + assert h.getarrayitem(box2, descr3, index2) is box3 + + h.replace_box(lengthbox1, lengthbox2) + assert h.arraylen(box4) is lengthbox2 + + def test_ll_arraycopy(self): + h = HeapCache() + h.new_array(box1, lengthbox1) + h.setarrayitem(box1, descr1, index1, box2) + h.new_array(box2, lengthbox1) + # Just need the destination box for this call + h.invalidate_caches( + rop.CALL, + FakeCallDescr(FakeEffektinfo.EF_CANNOT_RAISE, FakeEffektinfo.OS_ARRAYCOPY), + [None, None, box2, None, None] + ) + assert h.getarrayitem(box1, descr1, index1) is box2 + h.invalidate_caches( + rop.CALL, + FakeCallDescr(FakeEffektinfo.EF_CANNOT_RAISE, FakeEffektinfo.OS_ARRAYCOPY), + [None, None, box3, None, None] + ) + assert h.getarrayitem(box1, descr1, index1) is None + + h.setarrayitem(box4, descr1, index1, box2) + assert h.getarrayitem(box4, descr1, index1) is box2 + h.invalidate_caches( + rop.CALL, + FakeCallDescr(FakeEffektinfo.EF_CANNOT_RAISE, FakeEffektinfo.OS_ARRAYCOPY), + [None, None, box2, None, None] + ) + assert h.getarrayitem(box4, descr1, index1) is None + + def test_unescaped(self): + h = HeapCache() + assert not h.is_unescaped(box1) + h.new(box2) + assert h.is_unescaped(box2) + h.invalidate_caches(rop.SETFIELD_GC, None, [box2, box1]) + assert h.is_unescaped(box2) + h.invalidate_caches(rop.SETFIELD_GC, None, [box1, box2]) + assert not h.is_unescaped(box2) + + def test_unescaped_testing(self): + h = HeapCache() + h.new(box1) + h.new(box2) + assert h.is_unescaped(box1) + assert h.is_unescaped(box2) + # Putting a virtual inside of another virtual doesn't escape it. + h.invalidate_caches(rop.SETFIELD_GC, None, [box1, box2]) + assert h.is_unescaped(box2) + # Reading a field from a virtual doesn't escape it. + h.invalidate_caches(rop.GETFIELD_GC, None, [box1]) + assert h.is_unescaped(box1) + # Escaping a virtual transitively escapes anything inside of it. 
+ assert not h.is_unescaped(box3) + h.invalidate_caches(rop.SETFIELD_GC, None, [box3, box1]) + assert not h.is_unescaped(box1) + assert not h.is_unescaped(box2) + + def test_unescaped_array(self): + h = HeapCache() + h.new_array(box1, lengthbox1) + assert h.is_unescaped(box1) + h.invalidate_caches(rop.SETARRAYITEM_GC, None, [box1, index1, box2]) + assert h.is_unescaped(box1) + h.invalidate_caches(rop.SETARRAYITEM_GC, None, [box2, index1, box1]) + assert not h.is_unescaped(box1) \ No newline at end of file diff --git a/pypy/jit/metainterp/test/test_list.py b/pypy/jit/metainterp/test/test_list.py --- a/pypy/jit/metainterp/test/test_list.py +++ b/pypy/jit/metainterp/test/test_list.py @@ -34,7 +34,7 @@ l = [x + 1] n -= 1 return l[0] - + res = self.meta_interp(f, [10], listops=True) assert res == f(10) self.check_all_virtualized() @@ -60,7 +60,7 @@ def test_ll_fixed_setitem_fast(self): jitdriver = JitDriver(greens = [], reds = ['n', 'l']) - + def f(n): l = [1, 2, 3] @@ -116,7 +116,7 @@ assert res == f(10) py.test.skip("'[non-null] * n' gives a residual call so far") self.check_loops(setarrayitem_gc=0, getarrayitem_gc=0, call=0) - + def test_arraycopy_simpleoptimize(self): def f(): l = [1, 2, 3, 4] @@ -208,6 +208,26 @@ assert res == f(15) self.check_loops(guard_exception=0) + def test_virtual_resize(self): + jitdriver = JitDriver(greens = [], reds = ['n', 's']) + def f(n): + s = 0 + while n > 0: + jitdriver.jit_merge_point(n=n, s=s) + lst = [] + lst += [1] + n -= len(lst) + s += lst[0] + lst.pop() + lst.append(1) + s /= lst.pop() + return s + res = self.meta_interp(f, [15], listops=True) + assert res == f(15) + self.check_loops({"int_add": 1, "int_sub": 1, "int_gt": 1, + "guard_true": 1, "jump": 1}) + + class TestOOtype(ListTests, OOJitMixin): pass @@ -236,8 +256,6 @@ return a * b res = self.meta_interp(f, [37]) assert res == f(37) - # There is the one actual field on a, plus 2 getfield's from the list - # itself, 1 to get the length (which is then incremented and passed to - # the resize func), and then a read of the items field to actually - # perform the setarrayitem on - self.check_loops(getfield_gc=5, everywhere=True) + # There is the one actual field on a, plus several fields on the list + # itself + self.check_loops(getfield_gc=10, everywhere=True) diff --git a/pypy/jit/metainterp/test/test_slist.py b/pypy/jit/metainterp/test/test_slist.py --- a/pypy/jit/metainterp/test/test_slist.py +++ b/pypy/jit/metainterp/test/test_slist.py @@ -5,7 +5,6 @@ class ListTests(object): def test_basic_list(self): - py.test.skip("not yet") myjitdriver = JitDriver(greens = [], reds = ['n', 'lst']) def f(n): lst = [] @@ -34,7 +33,7 @@ return m res = self.interp_operations(f, [11], listops=True) assert res == 49 - self.check_operations_history(call=5) + self.check_operations_history(call=3) def test_list_of_voids(self): myjitdriver = JitDriver(greens = [], reds = ['n', 'lst']) @@ -93,7 +92,7 @@ return x res = self.meta_interp(f, [-2], listops=True) assert res == 41 - self.check_loops(call=1, guard_value=0) + self.check_loops(call=0, guard_value=0) # we don't support resizable lists on ootype #class TestOOtype(ListTests, OOJitMixin): diff --git a/pypy/jit/metainterp/test/test_string.py b/pypy/jit/metainterp/test/test_string.py --- a/pypy/jit/metainterp/test/test_string.py +++ b/pypy/jit/metainterp/test/test_string.py @@ -1,9 +1,11 @@ import py + +from pypy.jit.codewriter.policy import StopAtXPolicy +from pypy.jit.metainterp.test.support import LLJitMixin, OOJitMixin +from pypy.rlib.debug import debug_print from 
pypy.rlib.jit import JitDriver, dont_look_inside, we_are_jitted -from pypy.rlib.debug import debug_print -from pypy.jit.codewriter.policy import StopAtXPolicy +from pypy.rlib.rstring import StringBuilder from pypy.rpython.ootypesystem import ootype -from pypy.jit.metainterp.test.support import LLJitMixin, OOJitMixin class StringTests: @@ -27,7 +29,7 @@ return i res = self.meta_interp(f, [10, True, _str('h')], listops=True) assert res == 5 - self.check_loops(**{self.CALL: 1, self.CALL_PURE: 0}) + self.check_loops(**{self.CALL: 1, self.CALL_PURE: 0, 'everywhere': True}) def test_eq_folded(self): _str = self._str @@ -327,7 +329,7 @@ def test_str_slice_len_surviving(self): _str = self._str longstring = _str("Unrolling Trouble") - mydriver = JitDriver(reds = ['i', 'a', 'sa'], greens = []) + mydriver = JitDriver(reds = ['i', 'a', 'sa'], greens = []) def f(a): i = sa = a while i < len(longstring): @@ -343,7 +345,7 @@ fillers = _str("abcdefghijklmnopqrstuvwxyz") data = _str("ABCDEFGHIJKLMNOPQRSTUVWXYZ") - mydriver = JitDriver(reds = ['line', 'noise', 'res'], greens = []) + mydriver = JitDriver(reds = ['line', 'noise', 'res'], greens = []) def f(): line = data noise = fillers @@ -370,7 +372,7 @@ def __init__(self, value): self.value = value mydriver = JitDriver(reds = ['ratio', 'line', 'noise', 'res'], - greens = []) + greens = []) def f(): line = Str(data) noise = Str(fillers) @@ -408,7 +410,7 @@ return len(sa) assert self.meta_interp(f, [16]) == f(16) - def test_loop_invariant_string_slize(self): + def test_loop_invariant_string_slice(self): _str = self._str mydriver = JitDriver(reds = ['i', 'n', 'sa', 's', 's1'], greens = []) def f(n, c): @@ -425,7 +427,7 @@ return sa assert self.meta_interp(f, [16, 'a']) == f(16, 'a') - def test_loop_invariant_string_slize_boxed(self): + def test_loop_invariant_string_slice_boxed(self): class Str(object): def __init__(self, value): self.value = value @@ -445,7 +447,7 @@ return sa assert self.meta_interp(f, [16, 'a']) == f(16, 'a') - def test_loop_invariant_string_slize_in_array(self): + def test_loop_invariant_string_slice_in_array(self): _str = self._str mydriver = JitDriver(reds = ['i', 'n', 'sa', 's', 's1'], greens = []) def f(n, c): @@ -513,7 +515,7 @@ m -= 1 return 42 self.meta_interp(f, [6, 7]) - self.check_loops(call=3, # str(), _str(), escape() + self.check_loops(call=1, # escape() newunicode=1, unicodegetitem=0, unicodesetitem=1, copyunicodecontent=1) @@ -536,3 +538,55 @@ self.check_loops(call_pure=0, call=1, newunicode=0, unicodegetitem=0, unicodesetitem=0, copyunicodecontent=0) + + def test_join_chars(self): + jitdriver = JitDriver(reds=['a', 'b', 'c', 'i'], greens=[]) + def f(a, b, c): + i = 0 + while i < 10: + jitdriver.jit_merge_point(a=a, b=b, c=c, i=i) + x = [] + if a: + x.append("a") + if b: + x.append("b") + if c: + x.append("c") + i += len("".join(x)) + return i + res = self.meta_interp(f, [1, 1, 1]) + assert res == f(True, True, True) + # The "".join should be unrolled, since the length of x is known since + # it is virtual, ensure there are no calls to ll_join_chars, or + # allocations. 
+ self.check_loops({ + "guard_true": 5, "int_is_true": 3, "int_lt": 2, "int_add": 2, "jump": 2, + }, everywhere=True) + + def test_virtual_copystringcontent(self): + jitdriver = JitDriver(reds=['n', 'result'], greens=[]) + def main(n): + result = 0 + while n >= 0: + jitdriver.jit_merge_point(n=n, result=result) + b = StringBuilder(6) + b.append("Hello!") + result += ord(b.build()[0]) + n -= 1 + return result + res = self.meta_interp(main, [9]) + assert res == main(9) + + def test_virtual_copystringcontent2(self): + jitdriver = JitDriver(reds=['n', 'result'], greens=[]) + def main(n): + result = 0 + while n >= 0: + jitdriver.jit_merge_point(n=n, result=result) + b = StringBuilder(6) + b.append("Hello!") + result += ord((b.build() + "xyz")[0]) + n -= 1 + return result + res = self.meta_interp(main, [9]) + assert res == main(9) diff --git a/pypy/jit/metainterp/test/test_tracingopts.py b/pypy/jit/metainterp/test/test_tracingopts.py --- a/pypy/jit/metainterp/test/test_tracingopts.py +++ b/pypy/jit/metainterp/test/test_tracingopts.py @@ -1,7 +1,10 @@ +import sys + +from pypy.jit.metainterp.test.support import LLJitMixin +from pypy.rlib import jit +from pypy.rlib.rarithmetic import ovfcheck + import py -import sys -from pypy.rlib import jit -from pypy.jit.metainterp.test.support import LLJitMixin class TestLLtype(LLJitMixin): @@ -257,6 +260,28 @@ self.check_operations_history(setarrayitem_gc=2, setfield_gc=2, getarrayitem_gc=0, getfield_gc=2) + def test_promote_changes_array_cache(self): + a1 = [0, 0] + a2 = [0, 0] + def fn(n): + if n > 0: + a = a1 + else: + a = a2 + a[0] = n + jit.hint(n, promote=True) + x1 = a[0] + jit.hint(x1, promote=True) + a[n - n] = n + 1 + return a[0] + x1 + res = self.interp_operations(fn, [7]) + assert res == 7 + 7 + 1 + self.check_operations_history(getarrayitem_gc=0, guard_value=1) + res = self.interp_operations(fn, [-7]) + assert res == -7 - 7 + 1 + self.check_operations_history(getarrayitem_gc=0, guard_value=1) + + def test_list_caching(self): a1 = [0, 0] a2 = [0, 0] @@ -357,7 +382,7 @@ assert res == f(10, 1, 1) self.check_history(getarrayitem_gc=0, getfield_gc=0) - def test_heap_caching_pure(self): + def test_heap_caching_array_pure(self): class A(object): pass p1 = A() @@ -405,3 +430,164 @@ assert res == -7 + 7 self.check_operations_history(getfield_gc=0) return + + def test_heap_caching_multiple_objects(self): + class Gbl(object): + pass + g = Gbl() + class A(object): + pass + a1 = A() + g.a1 = a1 + a1.x = 7 + a2 = A() + g.a2 = a2 + a2.x = 7 + def gn(a1, a2): + return a1.x + a2.x + def fn(n): + if n < 0: + a1 = A() + g.a1 = a1 + a1.x = n + a2 = A() + g.a2 = a2 + a2.x = n - 1 + else: + a1 = g.a1 + a2 = g.a2 + return a1.x + a2.x + gn(a1, a2) + res = self.interp_operations(fn, [-7]) + assert res == 2 * -7 + 2 * -8 + self.check_operations_history(setfield_gc=4, getfield_gc=0) + res = self.interp_operations(fn, [7]) + assert res == 4 * 7 + self.check_operations_history(getfield_gc=4) + + def test_heap_caching_multiple_tuples(self): + class Gbl(object): + pass + g = Gbl() + def gn(a1, a2): + return a1[0] + a2[0] + def fn(n): + a1 = (n, ) + g.a = a1 + a2 = (n - 1, ) + g.a = a2 + jit.promote(n) + return a1[0] + a2[0] + gn(a1, a2) + res = self.interp_operations(fn, [7]) + assert res == 2 * 7 + 2 * 6 + self.check_operations_history(getfield_gc_pure=0) + res = self.interp_operations(fn, [-7]) + assert res == 2 * -7 + 2 * -8 + self.check_operations_history(getfield_gc_pure=0) + + def test_heap_caching_multiple_arrays(self): + class Gbl(object): + pass + g = Gbl() + def 
fn(n): + a1 = [n, n, n] + g.a = a1 + a1[0] = n + a2 = [n, n, n] + g.a = a2 + a2[0] = n - 1 + return a1[0] + a2[0] + a1[0] + a2[0] + res = self.interp_operations(fn, [7]) + assert res == 2 * 7 + 2 * 6 + self.check_operations_history(getarrayitem_gc=0) + res = self.interp_operations(fn, [-7]) + assert res == 2 * -7 + 2 * -8 + self.check_operations_history(getarrayitem_gc=0) + + def test_heap_caching_multiple_arrays_getarrayitem(self): + class Gbl(object): + pass + g = Gbl() + g.a1 = [7, 8, 9] + g.a2 = [8, 9, 10, 11] + + def fn(i): + if i < 0: + g.a1 = [7, 8, 9] + g.a2 = [7, 8, 9, 10] + jit.promote(i) + a1 = g.a1 + a1[i + 1] = 15 # make lists mutable + a2 = g.a2 + a2[i + 1] = 19 + return a1[i] + a2[i] + a1[i] + a2[i] + res = self.interp_operations(fn, [0]) + assert res == 2 * 7 + 2 * 8 + self.check_operations_history(getarrayitem_gc=2) + + + def test_heap_caching_multiple_lists(self): + class Gbl(object): + pass + g = Gbl() + g.l = [] + def fn(n): + if n < -100: + g.l.append(1) + a1 = [n, n, n] + g.l = a1 + a1[0] = n + a2 = [n, n, n] + g.l = a2 + a2[0] = n - 1 + return a1[0] + a2[0] + a1[0] + a2[0] + res = self.interp_operations(fn, [7]) + assert res == 2 * 7 + 2 * 6 + self.check_operations_history(getarrayitem_gc=0, getfield_gc=0) + res = self.interp_operations(fn, [-7]) + assert res == 2 * -7 + 2 * -8 + self.check_operations_history(getarrayitem_gc=0, getfield_gc=0) + + def test_length_caching(self): + class Gbl(object): + pass + g = Gbl() + g.a = [0] * 7 + def fn(n): + a = g.a + res = len(a) + len(a) + a1 = [0] * n + g.a = a1 + return len(a1) + res + res = self.interp_operations(fn, [7]) + assert res == 7 * 3 + self.check_operations_history(arraylen_gc=1) + + def test_arraycopy(self): + class Gbl(object): + pass + g = Gbl() + g.a = [0] * 7 + def fn(n): + assert n >= 0 + a = g.a + x = [0] * n + x[2] = 21 + return len(a[:n]) + x[2] + res = self.interp_operations(fn, [3]) + assert res == 24 + self.check_operations_history(getarrayitem_gc=0) + + def test_fold_int_add_ovf(self): + def fn(n): + jit.promote(n) + try: + n = ovfcheck(n + 1) + except OverflowError: + return 12 + else: + return n + res = self.interp_operations(fn, [3]) + assert res == 4 + self.check_operations_history(int_add_ovf=0) + res = self.interp_operations(fn, [sys.maxint]) + assert res == 12 \ No newline at end of file diff --git a/pypy/jit/metainterp/test/test_virtualstate.py b/pypy/jit/metainterp/test/test_virtualstate.py --- a/pypy/jit/metainterp/test/test_virtualstate.py +++ b/pypy/jit/metainterp/test/test_virtualstate.py @@ -2,7 +2,7 @@ import py from pypy.jit.metainterp.optimize import InvalidLoop from pypy.jit.metainterp.optimizeopt.virtualstate import VirtualStateInfo, VStructStateInfo, \ - VArrayStateInfo, NotVirtualStateInfo, VirtualState + VArrayStateInfo, NotVirtualStateInfo, VirtualState, ShortBoxes from pypy.jit.metainterp.optimizeopt.optimizer import OptValue from pypy.jit.metainterp.history import BoxInt, BoxFloat, BoxPtr, ConstInt, ConstPtr from pypy.rpython.lltypesystem import lltype @@ -11,6 +11,7 @@ from pypy.jit.metainterp.history import TreeLoop, LoopToken from pypy.jit.metainterp.optimizeopt.test.test_optimizeopt import FakeDescr, FakeMetaInterpStaticData from pypy.jit.metainterp.optimize import RetraceLoop +from pypy.jit.metainterp.resoperation import ResOperation, rop class TestBasic: someptr1 = LLtypeMixin.myptr @@ -129,6 +130,7 @@ info.fieldstate = [info] assert info.generalization_of(info, {}, {}) + class BaseTestGenerateGuards(BaseTest): def guards(self, info1, info2, box, expected): 
info1.position = info2.position = 0 @@ -910,3 +912,111 @@ class TestLLtypeBridges(BaseTestBridges, LLtypeMixin): pass +class FakeOptimizer: + def make_equal_to(*args): + pass + def getvalue(*args): + pass + +class TestShortBoxes: + p1 = BoxPtr() + p2 = BoxPtr() + p3 = BoxPtr() + p4 = BoxPtr() + i1 = BoxInt() + i2 = BoxInt() + i3 = BoxInt() + i4 = BoxInt() + + def test_short_box_duplication_direct(self): + class Optimizer(FakeOptimizer): + def produce_potential_short_preamble_ops(_self, sb): + sb.add_potential(ResOperation(rop.GETFIELD_GC, [self.p1], self.i1)) + sb.add_potential(ResOperation(rop.GETFIELD_GC, [self.p2], self.i1)) + sb = ShortBoxes(Optimizer(), [self.p1, self.p2]) + assert len(sb.short_boxes) == 4 + assert self.i1 in sb.short_boxes + assert sum([op.result is self.i1 for op in sb.short_boxes.values() if op]) == 1 + + def test_dont_duplicate_potential_boxes(self): + class Optimizer(FakeOptimizer): + def produce_potential_short_preamble_ops(_self, sb): + sb.add_potential(ResOperation(rop.GETFIELD_GC, [self.p1], self.i1)) + sb.add_potential(ResOperation(rop.GETFIELD_GC, [BoxPtr()], self.i1)) + sb.add_potential(ResOperation(rop.INT_NEG, [self.i1], self.i2)) + sb.add_potential(ResOperation(rop.INT_ADD, [ConstInt(7), self.i2], + self.i3)) + sb = ShortBoxes(Optimizer(), [self.p1, self.p2]) + assert len(sb.short_boxes) == 5 + + def test_prioritize1(self): + class Optimizer(FakeOptimizer): + def produce_potential_short_preamble_ops(_self, sb): + sb.add_potential(ResOperation(rop.GETFIELD_GC, [self.p1], self.i1)) + sb.add_potential(ResOperation(rop.GETFIELD_GC, [self.p2], self.i1)) + sb.add_potential(ResOperation(rop.INT_NEG, [self.i1], self.i2)) + sb = ShortBoxes(Optimizer(), [self.p1, self.p2]) + assert len(sb.short_boxes.values()) == 5 + int_neg = [op for op in sb.short_boxes.values() + if op and op.getopnum() == rop.INT_NEG] + assert len(int_neg) == 1 + int_neg = int_neg[0] + getfield = [op for op in sb.short_boxes.values() + if op and op.result == int_neg.getarg(0)] + assert len(getfield) == 1 + assert getfield[0].getarg(0) in [self.p1, self.p2] + + def test_prioritize1bis(self): + class Optimizer(FakeOptimizer): + def produce_potential_short_preamble_ops(_self, sb): + sb.add_potential(ResOperation(rop.GETFIELD_GC, [self.p1], self.i1), + synthetic=True) + sb.add_potential(ResOperation(rop.GETFIELD_GC, [self.p2], self.i1), + synthetic=True) + sb.add_potential(ResOperation(rop.INT_NEG, [self.i1], self.i2)) + sb = ShortBoxes(Optimizer(), [self.p1, self.p2]) + assert len(sb.short_boxes.values()) == 5 + int_neg = [op for op in sb.short_boxes.values() + if op and op.getopnum() == rop.INT_NEG] + assert len(int_neg) == 1 + int_neg = int_neg[0] + getfield = [op for op in sb.short_boxes.values() + if op and op.result == int_neg.getarg(0)] + assert len(getfield) == 1 + assert getfield[0].getarg(0) in [self.p1, self.p2] + + def test_prioritize2(self): + class Optimizer(FakeOptimizer): + def produce_potential_short_preamble_ops(_self, sb): + sb.add_potential(ResOperation(rop.GETFIELD_GC, [self.p1], self.i1), + synthetic=True) + sb.add_potential(ResOperation(rop.GETFIELD_GC, [self.p2], self.i1)) + sb.add_potential(ResOperation(rop.INT_NEG, [self.i1], self.i2)) + sb = ShortBoxes(Optimizer(), [self.p1, self.p2]) + assert len(sb.short_boxes.values()) == 5 + int_neg = [op for op in sb.short_boxes.values() + if op and op.getopnum() == rop.INT_NEG] + assert len(int_neg) == 1 + int_neg = int_neg[0] + getfield = [op for op in sb.short_boxes.values() + if op and op.result == int_neg.getarg(0)] + assert 
len(getfield) == 1 + assert getfield[0].getarg(0) == self.p2 + + def test_prioritize3(self): + class Optimizer(FakeOptimizer): + def produce_potential_short_preamble_ops(_self, sb): + sb.add_potential(ResOperation(rop.GETFIELD_GC, [self.p1], self.i1)) + sb.add_potential(ResOperation(rop.GETFIELD_GC, [self.p2], self.i1), + synthetic=True) + sb.add_potential(ResOperation(rop.INT_NEG, [self.i1], self.i2)) + sb = ShortBoxes(Optimizer(), [self.p1, self.p2]) + assert len(sb.short_boxes.values()) == 5 + int_neg = [op for op in sb.short_boxes.values() + if op and op.getopnum() == rop.INT_NEG] + assert len(int_neg) == 1 + int_neg = int_neg[0] + getfield = [op for op in sb.short_boxes.values() + if op and op.result == int_neg.getarg(0)] + assert len(getfield) == 1 + assert getfield[0].getarg(0) == self.p1 diff --git a/pypy/jit/metainterp/warmstate.py b/pypy/jit/metainterp/warmstate.py --- a/pypy/jit/metainterp/warmstate.py +++ b/pypy/jit/metainterp/warmstate.py @@ -367,9 +367,9 @@ # ---------- execute assembler ---------- while True: # until interrupted by an exception metainterp_sd.profiler.start_running() - debug_start("jit-running") + #debug_start("jit-running") fail_descr = warmrunnerdesc.execute_token(loop_token) - debug_stop("jit-running") + #debug_stop("jit-running") metainterp_sd.profiler.end_running() loop_token = None # for test_memmgr if vinfo is not None: diff --git a/pypy/jit/tl/pypyjit.py b/pypy/jit/tl/pypyjit.py --- a/pypy/jit/tl/pypyjit.py +++ b/pypy/jit/tl/pypyjit.py @@ -40,7 +40,7 @@ config.objspace.usemodules.array = False config.objspace.usemodules._weakref = True config.objspace.usemodules._sre = False -config.objspace.usemodules._lsprof = True +config.objspace.usemodules._lsprof = False # config.objspace.usemodules._ffi = True config.objspace.usemodules.cppyy = True @@ -78,7 +78,7 @@ def read_code(): from pypy.module.marshal.interp_marshal import dumps - + filename = 'pypyjit_demo.py' source = readfile(filename) ec = space.getexecutioncontext() diff --git a/pypy/jit/tl/pypyjit_demo.py b/pypy/jit/tl/pypyjit_demo.py --- a/pypy/jit/tl/pypyjit_demo.py +++ b/pypy/jit/tl/pypyjit_demo.py @@ -2,22 +2,16 @@ pypyjit.set_param(threshold=200) -def main(a, b): - i = sa = 0 - while i < 300: - if a > 0: # Specialises the loop - pass - if b < 2 and b > 0: - pass - if (a >> b) >= 0: - sa += 1 - if (a << b) > 2: - sa += 10000 - i += 1 - return sa +def f(n): + pairs = [(0.0, 1.0), (2.0, 3.0)] * n + mag = 0 + for (x1, x2) in pairs: + dx = x1 - x2 + mag += ((dx * dx ) ** (-1.5)) + return n try: - print main(2, 1) + print f(301) except Exception, e: print "Exception: ", type(e) diff --git a/pypy/module/__builtin__/functional.py b/pypy/module/__builtin__/functional.py --- a/pypy/module/__builtin__/functional.py +++ b/pypy/module/__builtin__/functional.py @@ -3,13 +3,13 @@ """ +from pypy.interpreter.baseobjspace import Wrappable from pypy.interpreter.error import OperationError -from pypy.interpreter.gateway import NoneNotWrapped -from pypy.interpreter.gateway import interp2app, unwrap_spec +from pypy.interpreter.gateway import NoneNotWrapped, interp2app, unwrap_spec from pypy.interpreter.typedef import TypeDef -from pypy.interpreter.baseobjspace import Wrappable +from pypy.rlib import jit +from pypy.rlib.objectmodel import specialize from pypy.rlib.rarithmetic import r_uint, intmask -from pypy.rlib.objectmodel import specialize from pypy.rlib.rbigint import rbigint @@ -134,29 +134,15 @@ @specialize.arg(2) + at jit.look_inside_iff(lambda space, args, implementation_of: + 
jit.isconstant(len(args.arguments_w)) and + len(args.arguments_w) == 2 +) def min_max(space, args, implementation_of): if implementation_of == "max": compare = space.gt else: compare = space.lt - - args_w = args.arguments_w - if len(args_w) == 2 and not args.keywords: - # simple case, suitable for the JIT - w_arg0, w_arg1 = args_w - if space.is_true(compare(w_arg0, w_arg1)): - return w_arg0 - else: - return w_arg1 - else: - return min_max_loop(space, args, implementation_of) - - at specialize.arg(2) -def min_max_loop(space, args, implementation_of): - if implementation_of == "max": - compare = space.gt - else: - compare = space.lt args_w = args.arguments_w if len(args_w) > 1: w_sequence = space.newtuple(args_w) diff --git a/pypy/module/__pypy__/__init__.py b/pypy/module/__pypy__/__init__.py --- a/pypy/module/__pypy__/__init__.py +++ b/pypy/module/__pypy__/__init__.py @@ -8,6 +8,7 @@ appleveldefs = {} interpleveldefs = { + "StringBuilder": "interp_builders.W_StringBuilder", "UnicodeBuilder": "interp_builders.W_UnicodeBuilder", } diff --git a/pypy/module/__pypy__/interp_builders.py b/pypy/module/__pypy__/interp_builders.py --- a/pypy/module/__pypy__/interp_builders.py +++ b/pypy/module/__pypy__/interp_builders.py @@ -2,49 +2,55 @@ from pypy.interpreter.error import OperationError from pypy.interpreter.gateway import interp2app, unwrap_spec from pypy.interpreter.typedef import TypeDef -from pypy.rlib.rstring import UnicodeBuilder +from pypy.rlib.rstring import UnicodeBuilder, StringBuilder +from pypy.tool.sourcetools import func_with_new_name -class W_UnicodeBuilder(Wrappable): - def __init__(self, space, size): - if size < 0: - self.builder = UnicodeBuilder() - else: - self.builder = UnicodeBuilder(size) - self.done = False +def create_builder(name, strtype, builder_cls): + class W_Builder(Wrappable): + def __init__(self, space, size): + if size < 0: + self.builder = builder_cls() + else: + self.builder = builder_cls(size) - def _check_done(self, space): - if self.done: - raise OperationError(space.w_ValueError, space.wrap("Can't operate on a done builder")) + def _check_done(self, space): + if self.builder is None: + raise OperationError(space.w_ValueError, space.wrap("Can't operate on a done builder")) - @unwrap_spec(size=int) - def descr__new__(space, w_subtype, size=-1): - return W_UnicodeBuilder(space, size) + @unwrap_spec(size=int) + def descr__new__(space, w_subtype, size=-1): + return W_Builder(space, size) - @unwrap_spec(s=unicode) - def descr_append(self, space, s): - self._check_done(space) - self.builder.append(s) + @unwrap_spec(s=strtype) + def descr_append(self, space, s): + self._check_done(space) + self.builder.append(s) - @unwrap_spec(s=unicode, start=int, end=int) - def descr_append_slice(self, space, s, start, end): - self._check_done(space) - if not 0 <= start <= end <= len(s): - raise OperationError(space.w_ValueError, space.wrap("bad start/stop")) - self.builder.append_slice(s, start, end) + @unwrap_spec(s=strtype, start=int, end=int) + def descr_append_slice(self, space, s, start, end): + self._check_done(space) + if not 0 <= start <= end <= len(s): + raise OperationError(space.w_ValueError, space.wrap("bad start/stop")) + self.builder.append_slice(s, start, end) - def descr_build(self, space): - self._check_done(space) - w_s = space.wrap(self.builder.build()) - self.done = True - return w_s + def descr_build(self, space): + self._check_done(space) + w_s = space.wrap(self.builder.build()) + self.builder = None + return w_s + W_Builder.__name__ = "W_%s" % name + 
W_Builder.typedef = TypeDef(name, + __new__ = interp2app(func_with_new_name( + W_Builder.descr__new__.im_func, + '%s_new' % (name,))), + append = interp2app(W_Builder.descr_append), + append_slice = interp2app(W_Builder.descr_append_slice), + build = interp2app(W_Builder.descr_build), + ) + W_Builder.typedef.acceptable_as_base_class = False + return W_Builder -W_UnicodeBuilder.typedef = TypeDef("UnicodeBuilder", - __new__ = interp2app(W_UnicodeBuilder.descr__new__.im_func), - - append = interp2app(W_UnicodeBuilder.descr_append), - append_slice = interp2app(W_UnicodeBuilder.descr_append_slice), - build = interp2app(W_UnicodeBuilder.descr_build), -) -W_UnicodeBuilder.typedef.acceptable_as_base_class = False +W_StringBuilder = create_builder("StringBuilder", str, StringBuilder) +W_UnicodeBuilder = create_builder("UnicodeBuilder", unicode, UnicodeBuilder) diff --git a/pypy/module/__pypy__/test/test_builders.py b/pypy/module/__pypy__/test/test_builders.py --- a/pypy/module/__pypy__/test/test_builders.py +++ b/pypy/module/__pypy__/test/test_builders.py @@ -31,4 +31,14 @@ raises(ValueError, b.append_slice, u"1", 2, 1) s = b.build() assert s == "cde" - raises(ValueError, b.append_slice, u"abc", 1, 2) \ No newline at end of file + raises(ValueError, b.append_slice, u"abc", 1, 2) + + def test_stringbuilder(self): + from __pypy__.builders import StringBuilder + b = StringBuilder() + b.append("abc") + b.append("123") + b.append("you and me") + s = b.build() + assert s == "abc123you and me" + raises(ValueError, b.build) \ No newline at end of file diff --git a/pypy/module/_continuation/__init__.py b/pypy/module/_continuation/__init__.py --- a/pypy/module/_continuation/__init__.py +++ b/pypy/module/_continuation/__init__.py @@ -37,4 +37,5 @@ interpleveldefs = { 'continulet': 'interp_continuation.W_Continulet', 'permute': 'interp_continuation.permute', + '_p': 'interp_continuation.unpickle', # pickle support } diff --git a/pypy/module/_continuation/interp_continuation.py b/pypy/module/_continuation/interp_continuation.py --- a/pypy/module/_continuation/interp_continuation.py +++ b/pypy/module/_continuation/interp_continuation.py @@ -5,6 +5,8 @@ from pypy.interpreter.baseobjspace import Wrappable from pypy.interpreter.typedef import TypeDef from pypy.interpreter.gateway import interp2app +from pypy.interpreter.pycode import PyCode +from pypy.interpreter.pyframe import PyFrame class W_Continulet(Wrappable): @@ -20,66 +22,69 @@ def check_sthread(self): ec = self.space.getexecutioncontext() if ec.stacklet_thread is not self.sthread: - start_state.clear() + global_state.clear() raise geterror(self.space, "inter-thread support is missing") - return ec def descr_init(self, w_callable, __args__): if self.sthread is not None: raise geterror(self.space, "continulet already __init__ialized") - start_state.origin = self - start_state.w_callable = w_callable - start_state.args = __args__ - self.sthread = build_sthread(self.space) - try: - self.h = self.sthread.new(new_stacklet_callback) - if self.sthread.is_empty_handle(self.h): # early return - raise MemoryError - except MemoryError: - self.sthread = None - start_state.clear() - raise getmemoryerror(self.space) + # + # hackish: build the frame "by hand", passing it the correct arguments + space = self.space + w_args, w_kwds = __args__.topacked() + bottomframe = space.createframe(get_entrypoint_pycode(space), + get_w_module_dict(space), None) + bottomframe.locals_stack_w[0] = space.wrap(self) + bottomframe.locals_stack_w[1] = w_callable + 
bottomframe.locals_stack_w[2] = w_args + bottomframe.locals_stack_w[3] = w_kwds + self.bottomframe = bottomframe + # + global_state.origin = self + sthread = build_sthread(self.space) + self.sthread = sthread + h = sthread.new(new_stacklet_callback) + post_switch(sthread, h) def switch(self, w_to): + sthread = self.sthread + if sthread is not None and sthread.is_empty_handle(self.h): + global_state.clear() + raise geterror(self.space, "continulet already finished") to = self.space.interp_w(W_Continulet, w_to, can_be_None=True) + if to is not None and to.sthread is None: + to = None + if sthread is None: # if self is non-initialized: + if to is not None: # if we are given a 'to' + self = to # then just use it and ignore 'self' + sthread = self.sthread + to = None + else: + return get_result() # else: no-op if to is not None: - if to.sthread is None: - start_state.clear() - raise geterror(self.space, "continulet not initialized yet") + if to.sthread is not sthread: + global_state.clear() + raise geterror(self.space, "cross-thread double switch") if self is to: # double-switch to myself: no-op return get_result() - if self.sthread is None: - start_state.clear() - raise geterror(self.space, "continulet not initialized yet") - ec = self.check_sthread() - saved_topframeref = ec.topframeref + if sthread.is_empty_handle(to.h): + global_state.clear() + raise geterror(self.space, "continulet already finished") + self.check_sthread() # - start_state.origin = self + global_state.origin = self if to is None: # simple switch: going to self.h - start_state.destination = self + global_state.destination = self else: # double switch: the final destination is to.h - start_state.destination = to + global_state.destination = to # - h = start_state.destination.h - sthread = self.sthread - if sthread.is_empty_handle(h): - start_state.clear() - raise geterror(self.space, "continulet already finished") - # - try: - do_switch(sthread, h) - except MemoryError: - start_state.clear() - raise getmemoryerror(self.space) - # - ec = sthread.ec - ec.topframeref = saved_topframeref - return get_result() + h = sthread.switch(global_state.destination.h) + return post_switch(sthread, h) def descr_switch(self, w_value=None, w_to=None): - start_state.w_value = w_value + global_state.w_value = w_value return self.switch(w_to) def descr_throw(self, w_type, w_val=None, w_tb=None, w_to=None): @@ -94,8 +99,8 @@ # operr = OperationError(w_type, w_val, tb) operr.normalize_exception(space) - start_state.w_value = None - start_state.propagate_exception = operr + global_state.w_value = None + global_state.propagate_exception = operr return self.switch(w_to) def descr_is_pending(self): @@ -103,12 +108,26 @@ and not self.sthread.is_empty_handle(self.h)) return self.space.newbool(valid) + def descr__reduce__(self): + from pypy.module._continuation import interp_pickle + return interp_pickle.reduce(self) + + def descr__setstate__(self, w_args): + from pypy.module._continuation import interp_pickle + interp_pickle.setstate(self, w_args) + def W_Continulet___new__(space, w_subtype, __args__): r = space.allocate_instance(W_Continulet, w_subtype) r.__init__(space) return space.wrap(r) +def unpickle(space, w_subtype): + """Pickle support.""" + r = space.allocate_instance(W_Continulet, w_subtype) + r.__init__(space) + return space.wrap(r) + W_Continulet.typedef = TypeDef( 'continulet', @@ -118,26 +137,52 @@ switch = interp2app(W_Continulet.descr_switch), throw = interp2app(W_Continulet.descr_throw), is_pending = 
interp2app(W_Continulet.descr_is_pending), + __reduce__ = interp2app(W_Continulet.descr__reduce__), + __setstate__= interp2app(W_Continulet.descr__setstate__), ) - # ____________________________________________________________ +# Continulet objects maintain a dummy frame object in order to ensure +# that the 'f_back' chain is consistent. We hide this dummy frame +# object by giving it a dummy code object with hidden_applevel=True. class State: def __init__(self, space): - self.space = space + self.space = space w_module = space.getbuiltinmodule('_continuation') self.w_error = space.getattr(w_module, space.wrap('error')) - self.w_memoryerror = OperationError(space.w_MemoryError, space.w_None) + # the following function switches away immediately, so that + # continulet.__init__() doesn't immediately run func(), but it + # also has the hidden purpose of making sure we have a single + # bottomframe for the whole duration of the continulet's run. + # Hackish: only the func_code is used, and used in the context + # of w_globals == this module, so we can access the name + # 'continulet' directly. + w_code = space.appexec([], '''(): + def start(c, func, args, kwds): + if continulet.switch(c) is not None: + raise TypeError( + "can\'t send non-None value to a just-started continulet") + return func(c, *args, **kwds) + return start.func_code + ''') + self.entrypoint_pycode = space.interp_w(PyCode, w_code) + self.entrypoint_pycode.hidden_applevel = True + self.w_unpickle = w_module.get('_p') + self.w_module_dict = w_module.getdict(space) def geterror(space, message): cs = space.fromcache(State) return OperationError(cs.w_error, space.wrap(message)) -def getmemoryerror(space): +def get_entrypoint_pycode(space): cs = space.fromcache(State) - return cs.w_memoryerror + return cs.entrypoint_pycode + +def get_w_module_dict(space): + cs = space.fromcache(State) + return cs.w_module_dict # ____________________________________________________________ @@ -148,71 +193,63 @@ StackletThread.__init__(self, space.config) self.space = space self.ec = ec + # for unpickling + from pypy.rlib.rweakref import RWeakKeyDictionary + self.frame2continulet = RWeakKeyDictionary(PyFrame, W_Continulet) ExecutionContext.stacklet_thread = None # ____________________________________________________________ -class StartState: # xxx a single global to pass around the function to start +class GlobalState: def clear(self): self.origin = None self.destination = None - self.w_callable = None - self.args = None self.w_value = None self.propagate_exception = None -start_state = StartState() -start_state.clear() +global_state = GlobalState() +global_state.clear() def new_stacklet_callback(h, arg): - self = start_state.origin - w_callable = start_state.w_callable - args = start_state.args - start_state.clear() - try: - do_switch(self.sthread, h) - except MemoryError: - return h # oups! 
do an early return in this case - # + self = global_state.origin + self.h = h + global_state.clear() space = self.space try: - ec = self.sthread.ec - ec.topframeref = jit.vref_None - - if start_state.propagate_exception is not None: - raise start_state.propagate_exception # just propagate it further - if start_state.w_value is not space.w_None: - raise OperationError(space.w_TypeError, space.wrap( - "can't send non-None value to a just-started continulet")) - - args = args.prepend(self.space.wrap(self)) - w_result = space.call_args(w_callable, args) + frame = self.bottomframe + w_result = frame.execute_frame() except Exception, e: - start_state.propagate_exception = e + global_state.propagate_exception = e else: - start_state.w_value = w_result - start_state.origin = self - start_state.destination = self + global_state.w_value = w_result + self.sthread.ec.topframeref = jit.vref_None + global_state.origin = self + global_state.destination = self return self.h - -def do_switch(sthread, h): - h = sthread.switch(h) - origin = start_state.origin - self = start_state.destination - start_state.origin = None - start_state.destination = None +def post_switch(sthread, h): + origin = global_state.origin + self = global_state.destination + global_state.origin = None + global_state.destination = None self.h, origin.h = origin.h, h + # + current = sthread.ec.topframeref + sthread.ec.topframeref = self.bottomframe.f_backref + self.bottomframe.f_backref = origin.bottomframe.f_backref + origin.bottomframe.f_backref = current + # + return get_result() def get_result(): - if start_state.propagate_exception: - e = start_state.propagate_exception - start_state.propagate_exception = None + if global_state.propagate_exception: + e = global_state.propagate_exception + global_state.propagate_exception = None raise e - w_value = start_state.w_value - start_state.w_value = None + w_value = global_state.w_value + global_state.w_value = None return w_value def build_sthread(space): @@ -232,7 +269,7 @@ cont = space.interp_w(W_Continulet, w_cont) if cont.sthread is not sthread: if cont.sthread is None: - raise geterror(space, "got a non-initialized continulet") + continue # ignore non-initialized continulets else: raise geterror(space, "inter-thread support is missing") elif sthread.is_empty_handle(cont.h): @@ -240,6 +277,9 @@ contlist.append(cont) # if len(contlist) > 1: - other = contlist[-1].h + otherh = contlist[-1].h + otherb = contlist[-1].bottomframe.f_backref for cont in contlist: - other, cont.h = cont.h, other + otherh, cont.h = cont.h, otherh + b = cont.bottomframe + otherb, b.f_backref = b.f_backref, otherb diff --git a/pypy/module/_continuation/interp_pickle.py b/pypy/module/_continuation/interp_pickle.py new file mode 100644 --- /dev/null +++ b/pypy/module/_continuation/interp_pickle.py @@ -0,0 +1,128 @@ +from pypy.tool import stdlib_opcode as pythonopcode +from pypy.rlib import jit +from pypy.interpreter.error import OperationError +from pypy.interpreter.pyframe import PyFrame +from pypy.module._continuation.interp_continuation import State, global_state +from pypy.module._continuation.interp_continuation import build_sthread +from pypy.module._continuation.interp_continuation import post_switch +from pypy.module._continuation.interp_continuation import get_result, geterror + + +def getunpickle(space): + cs = space.fromcache(State) + return cs.w_unpickle + + +def reduce(self): + # xxx this is known to be not completely correct with respect + # to subclasses, e.g. 
no __slots__ support, no looking for a + # __getnewargs__ or __getstate__ defined in the subclass, etc. + # Doing the right thing looks involved, though... + space = self.space + if self.sthread is None: + w_frame = space.w_False + elif self.sthread.is_empty_handle(self.h): + w_frame = space.w_None + else: + w_frame = space.wrap(self.bottomframe) + w_continulet_type = space.type(space.wrap(self)) + w_dict = self.getdict(space) or space.w_None + args = [getunpickle(space), + space.newtuple([w_continulet_type]), + space.newtuple([w_frame, w_dict]), + ] + return space.newtuple(args) + +def setstate(self, w_args): + space = self.space + if self.sthread is not None: + raise geterror(space, "continulet.__setstate__() on an already-" + "initialized continulet") + w_frame, w_dict = space.fixedview(w_args, expected_length=2) + if not space.is_w(w_dict, space.w_None): + self.setdict(space, w_dict) + if space.is_w(w_frame, space.w_False): + return # not initialized + sthread = build_sthread(self.space) + self.sthread = sthread + self.bottomframe = space.interp_w(PyFrame, w_frame, can_be_None=True) + # + global_state.origin = self + if self.bottomframe is not None: + sthread.frame2continulet.set(self.bottomframe, self) + self.h = sthread.new(resume_trampoline_callback) + get_result() # propagate the eventual MemoryError + +# ____________________________________________________________ + +def resume_trampoline_callback(h, arg): + self = global_state.origin + self.h = h + space = self.space + sthread = self.sthread + try: + global_state.clear() + if self.bottomframe is None: + w_result = space.w_None + else: + h = sthread.switch(self.h) + try: + w_result = post_switch(sthread, h) + operr = None + except OperationError, e: + w_result = None + operr = e + # + while True: + ec = sthread.ec + frame = ec.topframeref() + assert frame is not None # XXX better error message + exit_continulet = sthread.frame2continulet.get(frame) + # + continue_after_call(frame) + # + # small hack: unlink frame out of the execution context, + # because execute_frame will add it there again + ec.topframeref = frame.f_backref + # + try: + w_result = frame.execute_frame(w_result, operr) + operr = None + except OperationError, e: + w_result = None + operr = e + if exit_continulet is not None: + self = exit_continulet + break + sthread.ec.topframeref = jit.vref_None + if operr: + raise operr + except Exception, e: + global_state.propagate_exception = e + else: + global_state.w_value = w_result + global_state.origin = self + global_state.destination = self + return self.h + +def continue_after_call(frame): + code = frame.pycode.co_code + instr = frame.last_instr + opcode = ord(code[instr]) + map = pythonopcode.opmap + call_ops = [map['CALL_FUNCTION'], map['CALL_FUNCTION_KW'], + map['CALL_FUNCTION_VAR'], map['CALL_FUNCTION_VAR_KW'], + map['CALL_METHOD']] + assert opcode in call_ops # XXX check better, and complain better + instr += 1 + oparg = ord(code[instr]) | ord(code[instr + 1]) << 8 + nargs = oparg & 0xff + nkwds = (oparg >> 8) & 0xff + if nkwds == 0: # only positional arguments + # fast paths leaves things on the stack, pop them + if (frame.space.config.objspace.opcodes.CALL_METHOD and + opcode == map['CALL_METHOD']): + frame.dropvalues(nargs + 2) + elif opcode == map['CALL_FUNCTION']: + frame.dropvalues(nargs + 1) + frame.last_instr = instr + 1 # continue after the call diff --git a/pypy/module/_continuation/test/support.py b/pypy/module/_continuation/test/support.py --- a/pypy/module/_continuation/test/support.py +++ 
b/pypy/module/_continuation/test/support.py @@ -9,4 +9,4 @@ import pypy.rlib.rstacklet except CompilationError, e: py.test.skip("cannot import rstacklet: %s" % e) - cls.space = gettestobjspace(usemodules=['_continuation']) + cls.space = gettestobjspace(usemodules=['_continuation'], continuation=True) diff --git a/pypy/module/_continuation/test/test_stacklet.py b/pypy/module/_continuation/test/test_stacklet.py --- a/pypy/module/_continuation/test/test_stacklet.py +++ b/pypy/module/_continuation/test/test_stacklet.py @@ -13,7 +13,7 @@ from _continuation import continulet # def empty_callback(c): - pass + never_called # c = continulet(empty_callback) assert type(c) is continulet @@ -36,7 +36,7 @@ from _continuation import continulet, error # def empty_callback(c1): - pass + never_called # c = continulet(empty_callback) raises(error, c.__init__, empty_callback) @@ -135,12 +135,6 @@ e = raises(error, c.switch) assert str(e.value) == "continulet already finished" - def test_not_initialized_yet(self): - from _continuation import continulet, error - c = continulet.__new__(continulet) - e = raises(error, c.switch) - assert str(e.value) == "continulet not initialized yet" - def test_go_depth2(self): from _continuation import continulet # @@ -254,6 +248,15 @@ res = c_upper.switch('D') assert res == 'E' + def test_switch_not_initialized(self): + from _continuation import continulet + c0 = continulet.__new__(continulet) + res = c0.switch() + assert res is None + res = c0.switch(123) + assert res == 123 + raises(ValueError, c0.throw, ValueError) + def test_exception_with_switch_depth2(self): from _continuation import continulet # @@ -312,7 +315,7 @@ res = f() assert res == 2002 - def test_f_back_is_None_for_now(self): + def test_f_back(self): import sys from _continuation import continulet # @@ -321,6 +324,7 @@ c.switch(sys._getframe(0).f_back) c.switch(sys._getframe(1)) c.switch(sys._getframe(1).f_back) + assert sys._getframe(2) is f3.f_back c.switch(sys._getframe(2)) def f(c): g(c) @@ -331,10 +335,21 @@ f2 = c.switch() assert f2.f_code.co_name == 'f' f3 = c.switch() - assert f3.f_code.co_name == 'f' - f4 = c.switch() - assert f4 is None - raises(ValueError, c.switch) # "call stack is not deep enough" + assert f3 is f2 + assert f1.f_back is f3 + def main(): + f4 = c.switch() + assert f4.f_code.co_name == 'main', repr(f4.f_code.co_name) + assert f3.f_back is f1 # not running, so a loop + def main2(): + f5 = c.switch() + assert f5.f_code.co_name == 'main2', repr(f5.f_code.co_name) + assert f3.f_back is f1 # not running, so a loop + main() + main2() + res = c.switch() + assert res is None + assert f3.f_back is None def test_traceback_is_complete(self): import sys @@ -487,16 +502,31 @@ assert res == 'z' raises(TypeError, c1.switch, to=c2) # "can't send non-None value" - def test_switch2_not_initialized_yet(self): - from _continuation import continulet, error + def test_switch2_not_initialized(self): + from _continuation import continulet + c0 = continulet.__new__(continulet) + c0bis = continulet.__new__(continulet) + res = c0.switch(123, to=c0) + assert res == 123 + res = c0.switch(123, to=c0bis) + assert res == 123 + raises(ValueError, c0.throw, ValueError, to=c0) + raises(ValueError, c0.throw, ValueError, to=c0bis) # def f1(c1): - not_reachable - # + c1.switch('a') + raises(ValueError, c1.switch, 'b') + raises(KeyError, c1.switch, 'c') + return 'd' c1 = continulet(f1) - c2 = continulet.__new__(continulet) - e = raises(error, c1.switch, to=c2) - assert str(e.value) == "continulet not initialized yet" + 
res = c0.switch(to=c1) + assert res == 'a' + res = c1.switch(to=c0) + assert res == 'b' + res = c1.throw(ValueError, to=c0) + assert res == 'c' + res = c0.throw(KeyError, to=c1) + assert res == 'd' def test_switch2_already_finished(self): from _continuation import continulet, error @@ -609,6 +639,7 @@ assert res == "ok" def test_permute(self): + import sys from _continuation import continulet, permute # def f1(c1): @@ -617,14 +648,34 @@ return "done" # def f2(c2): + assert sys._getframe(1).f_code.co_name == 'main' permute(c1, c2) + assert sys._getframe(1).f_code.co_name == 'f1' return "ok" # c1 = continulet(f1) c2 = continulet(f2) + def main(): + c1.switch() + res = c2.switch() + assert res == "done" + main() + + def test_permute_noninitialized(self): + from _continuation import continulet, permute + permute(continulet.__new__(continulet)) # ignored + permute(continulet.__new__(continulet), # ignored + continulet.__new__(continulet)) + + def test_bug_finish_with_already_finished_stacklet(self): + from _continuation import continulet, error + # make an already-finished continulet + c1 = continulet(lambda x: x) c1.switch() - res = c2.switch() - assert res == "done" + # make another continulet + c2 = continulet(lambda x: x) + # this switch is forbidden, because it causes a crash when c2 finishes + raises(error, c1.switch, to=c2) def test_various_depths(self): skip("may fail on top of CPython") diff --git a/pypy/module/_continuation/test/test_zpickle.py b/pypy/module/_continuation/test/test_zpickle.py new file mode 100644 --- /dev/null +++ b/pypy/module/_continuation/test/test_zpickle.py @@ -0,0 +1,262 @@ +from pypy.conftest import gettestobjspace + + +class AppTestCopy: + def setup_class(cls): + cls.space = gettestobjspace(usemodules=('_continuation',), + CALL_METHOD=True) + cls.space.config.translation.continuation = True + + def test_basic_setup(self): + from _continuation import continulet + lst = [4] + co = continulet(lst.append) + assert lst == [4] + res = co.switch() + assert res is None + assert lst == [4, co] + + def test_copy_continulet_not_started(self): + from _continuation import continulet, error + import copy + lst = [] + co = continulet(lst.append) + co2, lst2 = copy.deepcopy((co, lst)) + # + assert lst == [] + co.switch() + assert lst == [co] + # + assert lst2 == [] + co2.switch() + assert lst2 == [co2] + + def test_copy_continulet_not_started_multiple(self): + from _continuation import continulet, error + import copy + lst = [] + co = continulet(lst.append) + co2, lst2 = copy.deepcopy((co, lst)) + co3, lst3 = copy.deepcopy((co, lst)) + co4, lst4 = copy.deepcopy((co, lst)) + # + assert lst == [] + co.switch() + assert lst == [co] + # + assert lst2 == [] + co2.switch() + assert lst2 == [co2] + # + assert lst3 == [] + co3.switch() + assert lst3 == [co3] + # + assert lst4 == [] + co4.switch() + assert lst4 == [co4] + + def test_copy_continulet_real(self): + import new, sys + mod = new.module('test_copy_continulet_real') + sys.modules['test_copy_continulet_real'] = mod + exec '''if 1: + from _continuation import continulet + import copy + def f(co, x): + co.switch(x + 1) + co.switch(x + 2) + return x + 3 + co = continulet(f, 40) + res = co.switch() + assert res == 41 + co2 = copy.deepcopy(co) + # + res = co2.switch() + assert res == 42 + assert co2.is_pending() + res = co2.switch() + assert res == 43 + assert not co2.is_pending() + # + res = co.switch() + assert res == 42 + assert co.is_pending() + res = co.switch() + assert res == 43 + assert not co.is_pending() + ''' in 
mod.__dict__ + + def test_copy_continulet_already_finished(self): + from _continuation import continulet, error + import copy + lst = [] + co = continulet(lst.append) + co.switch() + co2 = copy.deepcopy(co) + assert not co.is_pending() + assert not co2.is_pending() + raises(error, co.__init__, lst.append) + raises(error, co2.__init__, lst.append) + raises(error, co.switch) + raises(error, co2.switch) + + +class AppTestPickle: + version = 0 + + def setup_class(cls): + cls.space = gettestobjspace(usemodules=('_continuation',), + CALL_METHOD=True) + cls.space.appexec([], """(): + global continulet, A, __name__ + + import sys + __name__ = 'test_pickle_continulet' + thismodule = type(sys)(__name__) + sys.modules[__name__] = thismodule + + from _continuation import continulet + class A(continulet): + pass + + thismodule.__dict__.update(globals()) + """) + cls.w_version = cls.space.wrap(cls.version) + + def test_pickle_continulet_empty(self): + from _continuation import continulet + lst = [4] + co = continulet.__new__(continulet) + import pickle + pckl = pickle.dumps(co, self.version) + print repr(pckl) + co2 = pickle.loads(pckl) + assert co2 is not co + assert not co.is_pending() + assert not co2.is_pending() + # the empty unpickled coroutine can still be used: + result = [5] + co2.__init__(result.append) + res = co2.switch() + assert res is None + assert result == [5, co2] + + def test_pickle_continulet_empty_subclass(self): + from test_pickle_continulet import continulet, A + lst = [4] + co = continulet.__new__(A) + co.foo = 'bar' + co.bar = 'baz' + import pickle + pckl = pickle.dumps(co, self.version) + print repr(pckl) + co2 = pickle.loads(pckl) + assert co2 is not co + assert not co.is_pending() + assert not co2.is_pending() + assert type(co) is type(co2) is A + assert co.foo == co2.foo == 'bar' + assert co.bar == co2.bar == 'baz' + # the empty unpickled coroutine can still be used: + result = [5] + co2.__init__(result.append) + res = co2.switch() + assert res is None + assert result == [5, co2] + + def test_pickle_continulet_not_started(self): + from _continuation import continulet, error + import pickle + lst = [] + co = continulet(lst.append) + pckl = pickle.dumps((co, lst)) + print pckl + del co, lst + for i in range(2): + print 'resume...' 
+ co2, lst2 = pickle.loads(pckl) + assert lst2 == [] + co2.switch() + assert lst2 == [co2] + + def test_pickle_continulet_real(self): + import new, sys + mod = new.module('test_pickle_continulet_real') + sys.modules['test_pickle_continulet_real'] = mod + mod.version = self.version + exec '''if 1: + from _continuation import continulet + import pickle + def f(co, x): + co.switch(x + 1) + co.switch(x + 2) + return x + 3 + co = continulet(f, 40) + res = co.switch() + assert res == 41 + pckl = pickle.dumps(co, version) + print repr(pckl) + co2 = pickle.loads(pckl) + # + res = co2.switch() + assert res == 42 + assert co2.is_pending() + res = co2.switch() + assert res == 43 + assert not co2.is_pending() + # + res = co.switch() + assert res == 42 + assert co.is_pending() + res = co.switch() + assert res == 43 + assert not co.is_pending() + ''' in mod.__dict__ + + def test_pickle_continulet_real_subclass(self): + import new, sys + mod = new.module('test_pickle_continulet_real_subclass') + sys.modules['test_pickle_continulet_real_subclass'] = mod + mod.version = self.version + exec '''if 1: + from _continuation import continulet + import pickle + class A(continulet): + def __init__(self): + crash + def f(co): + co.switch(co.x + 1) + co.switch(co.x + 2) + return co.x + 3 + co = A.__new__(A) + continulet.__init__(co, f) + co.x = 40 + res = co.switch() + assert res == 41 + pckl = pickle.dumps(co, version) + print repr(pckl) + co2 = pickle.loads(pckl) + # + assert type(co2) is A + res = co2.switch() + assert res == 42 + assert co2.is_pending() + res = co2.switch() + assert res == 43 + assert not co2.is_pending() + # + res = co.switch() + assert res == 42 + assert co.is_pending() + res = co.switch() + assert res == 43 + assert not co.is_pending() + ''' in mod.__dict__ + + +class AppTestPickle_v1(AppTestPickle): + version = 1 + +class AppTestPickle_v2(AppTestPickle): + version = 2 diff --git a/pypy/module/_multiprocessing/interp_connection.py b/pypy/module/_multiprocessing/interp_connection.py --- a/pypy/module/_multiprocessing/interp_connection.py +++ b/pypy/module/_multiprocessing/interp_connection.py @@ -225,7 +225,9 @@ except OSError: pass - def __init__(self, fd, flags): + def __init__(self, space, fd, flags): + if fd == self.INVALID_HANDLE_VALUE or fd < 0: + raise OperationError(space.w_IOError, space.wrap("invalid handle %d" % fd)) W_BaseConnection.__init__(self, flags) self.fd = fd @@ -234,7 +236,7 @@ flags = (readable and READABLE) | (writable and WRITABLE) self = space.allocate_instance(W_FileConnection, w_subtype) - W_FileConnection.__init__(self, fd, flags) + W_FileConnection.__init__(self, space, fd, flags) return space.wrap(self) def fileno(self, space): diff --git a/pypy/module/_multiprocessing/interp_semaphore.py b/pypy/module/_multiprocessing/interp_semaphore.py --- a/pypy/module/_multiprocessing/interp_semaphore.py +++ b/pypy/module/_multiprocessing/interp_semaphore.py @@ -468,6 +468,9 @@ self.count -= 1 + def after_fork(self): + self.count = 0 + @unwrap_spec(kind=int, maxvalue=int) def rebuild(space, w_cls, w_handle, kind, maxvalue): self = space.allocate_instance(W_SemLock, w_cls) @@ -512,6 +515,7 @@ acquire = interp2app(W_SemLock.acquire), release = interp2app(W_SemLock.release), _rebuild = interp2app(W_SemLock.rebuild.im_func, as_classmethod=True), + _after_fork = interp2app(W_SemLock.after_fork), __enter__=interp2app(W_SemLock.enter), __exit__=interp2app(W_SemLock.exit), SEM_VALUE_MAX=SEM_VALUE_MAX, diff --git a/pypy/module/_multiprocessing/test/test_connection.py 
b/pypy/module/_multiprocessing/test/test_connection.py --- a/pypy/module/_multiprocessing/test/test_connection.py +++ b/pypy/module/_multiprocessing/test/test_connection.py @@ -145,3 +145,9 @@ else: c.close() space.delslice(w_connections, space.wrap(0), space.wrap(100)) + + def test_bad_fd(self): + import _multiprocessing + + raises(IOError, _multiprocessing.Connection, -1) + raises(IOError, _multiprocessing.Connection, -15) \ No newline at end of file diff --git a/pypy/module/_multiprocessing/test/test_semaphore.py b/pypy/module/_multiprocessing/test/test_semaphore.py --- a/pypy/module/_multiprocessing/test/test_semaphore.py +++ b/pypy/module/_multiprocessing/test/test_semaphore.py @@ -39,6 +39,10 @@ sem.release() assert sem._count() == 0 + sem.acquire() + sem._after_fork() + assert sem._count() == 0 + def test_recursive(self): from _multiprocessing import SemLock kind = self.RECURSIVE diff --git a/pypy/module/_ssl/interp_ssl.py b/pypy/module/_ssl/interp_ssl.py --- a/pypy/module/_ssl/interp_ssl.py +++ b/pypy/module/_ssl/interp_ssl.py @@ -52,7 +52,8 @@ constants["CERT_OPTIONAL"] = PY_SSL_CERT_OPTIONAL constants["CERT_REQUIRED"] = PY_SSL_CERT_REQUIRED -constants["PROTOCOL_SSLv2"] = PY_SSL_VERSION_SSL2 +if not OPENSSL_NO_SSL2: + constants["PROTOCOL_SSLv2"] = PY_SSL_VERSION_SSL2 constants["PROTOCOL_SSLv3"] = PY_SSL_VERSION_SSL3 constants["PROTOCOL_SSLv23"] = PY_SSL_VERSION_SSL23 constants["PROTOCOL_TLSv1"] = PY_SSL_VERSION_TLS1 @@ -673,7 +674,7 @@ method = libssl_TLSv1_method() elif protocol == PY_SSL_VERSION_SSL3: method = libssl_SSLv3_method() - elif protocol == PY_SSL_VERSION_SSL2: + elif protocol == PY_SSL_VERSION_SSL2 and not OPENSSL_NO_SSL2: method = libssl_SSLv2_method() elif protocol == PY_SSL_VERSION_SSL23: method = libssl_SSLv23_method() diff --git a/pypy/module/cpyext/api.py b/pypy/module/cpyext/api.py --- a/pypy/module/cpyext/api.py +++ b/pypy/module/cpyext/api.py @@ -57,6 +57,9 @@ compile_extra=['-DPy_BUILD_CORE'], ) +class CConfig2: + _compilation_info_ = CConfig._compilation_info_ + class CConfig_constants: _compilation_info_ = CConfig._compilation_info_ @@ -300,9 +303,13 @@ return unwrapper_raise # used in 'normal' RPython code. 
return decorate -def cpython_struct(name, fields, forward=None): +def cpython_struct(name, fields, forward=None, level=1): configname = name.replace(' ', '__') - setattr(CConfig, configname, rffi_platform.Struct(name, fields)) + if level == 1: + config = CConfig + else: + config = CConfig2 + setattr(config, configname, rffi_platform.Struct(name, fields)) if forward is None: forward = lltype.ForwardReference() TYPES[configname] = forward @@ -445,9 +452,10 @@ # 'int*': rffi.INTP} def configure_types(): - for name, TYPE in rffi_platform.configure(CConfig).iteritems(): - if name in TYPES: - TYPES[name].become(TYPE) + for config in (CConfig, CConfig2): + for name, TYPE in rffi_platform.configure(config).iteritems(): + if name in TYPES: + TYPES[name].become(TYPE) def build_type_checkers(type_name, cls=None): """ diff --git a/pypy/module/cpyext/funcobject.py b/pypy/module/cpyext/funcobject.py --- a/pypy/module/cpyext/funcobject.py +++ b/pypy/module/cpyext/funcobject.py @@ -4,9 +4,21 @@ cpython_api, bootstrap_function, cpython_struct, build_type_checkers) from pypy.module.cpyext.pyobject import ( PyObject, make_ref, from_ref, Py_DecRef, make_typedescr, borrow_from) +from pypy.rlib.unroll import unrolling_iterable from pypy.interpreter.error import OperationError from pypy.interpreter.function import Function, Method from pypy.interpreter.pycode import PyCode +from pypy.interpreter import pycode + +CODE_FLAGS = dict( + CO_OPTIMIZED = 0x0001, + CO_NEWLOCALS = 0x0002, + CO_VARARGS = 0x0004, + CO_VARKEYWORDS = 0x0008, + CO_NESTED = 0x0010, + CO_GENERATOR = 0x0020, +) +ALL_CODE_FLAGS = unrolling_iterable(CODE_FLAGS.items()) PyFunctionObjectStruct = lltype.ForwardReference() PyFunctionObject = lltype.Ptr(PyFunctionObjectStruct) @@ -16,7 +28,12 @@ PyCodeObjectStruct = lltype.ForwardReference() PyCodeObject = lltype.Ptr(PyCodeObjectStruct) -cpython_struct("PyCodeObject", PyObjectFields, PyCodeObjectStruct) +PyCodeObjectFields = PyObjectFields + \ + (("co_name", PyObject), + ("co_flags", rffi.INT), + ("co_argcount", rffi.INT), + ) +cpython_struct("PyCodeObject", PyCodeObjectFields, PyCodeObjectStruct) @bootstrap_function def init_functionobject(space): @@ -24,6 +41,10 @@ basestruct=PyFunctionObject.TO, attach=function_attach, dealloc=function_dealloc) + make_typedescr(PyCode.typedef, + basestruct=PyCodeObject.TO, + attach=code_attach, + dealloc=code_dealloc) PyFunction_Check, PyFunction_CheckExact = build_type_checkers("Function", Function) PyMethod_Check, PyMethod_CheckExact = build_type_checkers("Method", Method) @@ -40,6 +61,31 @@ from pypy.module.cpyext.object import PyObject_dealloc PyObject_dealloc(space, py_obj) +def code_attach(space, py_obj, w_obj): + py_code = rffi.cast(PyCodeObject, py_obj) + assert isinstance(w_obj, PyCode) + py_code.c_co_name = make_ref(space, space.wrap(w_obj.co_name)) + co_flags = 0 + for name, value in ALL_CODE_FLAGS: + if w_obj.co_flags & getattr(pycode, name): + co_flags |= value + rffi.setintfield(py_code, 'c_co_flags', co_flags) + rffi.setintfield(py_code, 'c_co_argcount', w_obj.co_argcount) + + at cpython_api([PyObject], lltype.Void, external=False) +def code_dealloc(space, py_obj): + py_code = rffi.cast(PyCodeObject, py_obj) + Py_DecRef(space, py_code.c_co_name) + from pypy.module.cpyext.object import PyObject_dealloc + PyObject_dealloc(space, py_obj) + + at cpython_api([PyObject], PyObject) +def PyFunction_GetCode(space, w_func): + """Return the code object associated with the function object op.""" + func = space.interp_w(Function, w_func) + w_code = 
space.wrap(func.code) + return borrow_from(w_func, w_code) + @cpython_api([PyObject, PyObject, PyObject], PyObject) def PyMethod_New(space, w_func, w_self, w_cls): """Return a new method object, with func being any callable object; this is the diff --git a/pypy/module/cpyext/include/code.h b/pypy/module/cpyext/include/code.h --- a/pypy/module/cpyext/include/code.h +++ b/pypy/module/cpyext/include/code.h @@ -4,7 +4,21 @@ extern "C" { #endif -typedef PyObject PyCodeObject; +typedef struct { + PyObject_HEAD + PyObject *co_name; + int co_argcount; + int co_flags; +} PyCodeObject; + +/* Masks for co_flags above */ +/* These values are also in funcobject.py */ +#define CO_OPTIMIZED 0x0001 +#define CO_NEWLOCALS 0x0002 +#define CO_VARARGS 0x0004 +#define CO_VARKEYWORDS 0x0008 +#define CO_NESTED 0x0010 +#define CO_GENERATOR 0x0020 #ifdef __cplusplus } diff --git a/pypy/module/cpyext/include/funcobject.h b/pypy/module/cpyext/include/funcobject.h --- a/pypy/module/cpyext/include/funcobject.h +++ b/pypy/module/cpyext/include/funcobject.h @@ -12,6 +12,8 @@ PyObject *func_name; /* The __name__ attribute, a string object */ } PyFunctionObject; +#define PyFunction_GET_CODE(obj) PyFunction_GetCode((PyObject*)(obj)) + #define PyMethod_GET_FUNCTION(obj) PyMethod_Function((PyObject*)(obj)) #define PyMethod_GET_SELF(obj) PyMethod_Self((PyObject*)(obj)) #define PyMethod_GET_CLASS(obj) PyMethod_Class((PyObject*)(obj)) diff --git a/pypy/module/cpyext/include/object.h b/pypy/module/cpyext/include/object.h --- a/pypy/module/cpyext/include/object.h +++ b/pypy/module/cpyext/include/object.h @@ -321,6 +321,15 @@ } PyTypeObject; +typedef struct { + PyTypeObject ht_type; + PyNumberMethods as_number; + PyMappingMethods as_mapping; + PySequenceMethods as_sequence; + PyBufferProcs as_buffer; + PyObject *ht_name, *ht_slots; +} PyHeapTypeObject; + /* Flag bits for printing: */ #define Py_PRINT_RAW 1 /* No string quotes etc. */ @@ -501,6 +510,9 @@ #define PyObject_TypeCheck(ob, tp) \ ((ob)->ob_type == (tp) || PyType_IsSubtype((ob)->ob_type, (tp))) +#define Py_TRASHCAN_SAFE_BEGIN(pyObj) +#define Py_TRASHCAN_SAFE_END(pyObj) + /* Copied from CPython ----------------------------- */ int PyObject_AsReadBuffer(PyObject *, const void **, Py_ssize_t *); int PyObject_AsWriteBuffer(PyObject *, void **, Py_ssize_t *); diff --git a/pypy/module/cpyext/include/patchlevel.h b/pypy/module/cpyext/include/patchlevel.h --- a/pypy/module/cpyext/include/patchlevel.h +++ b/pypy/module/cpyext/include/patchlevel.h @@ -29,7 +29,7 @@ #define PY_VERSION "2.7.1" /* PyPy version as a string */ -#define PYPY_VERSION "1.6.0" +#define PYPY_VERSION "1.6.1" /* Subversion Revision number of this file (not of the repository). * Empty since Mercurial migration. 
*/ diff --git a/pypy/module/cpyext/include/pythonrun.h b/pypy/module/cpyext/include/pythonrun.h --- a/pypy/module/cpyext/include/pythonrun.h +++ b/pypy/module/cpyext/include/pythonrun.h @@ -12,6 +12,7 @@ #define Py_Py3kWarningFlag 0 #define Py_FrozenFlag 0 +#define Py_VerboseFlag 0 typedef struct { int cf_flags; /* bitmask of CO_xxx flags relevant to future */ diff --git a/pypy/module/cpyext/pyobject.py b/pypy/module/cpyext/pyobject.py --- a/pypy/module/cpyext/pyobject.py +++ b/pypy/module/cpyext/pyobject.py @@ -19,13 +19,42 @@ basestruct = PyObject.TO def get_dealloc(self, space): - raise NotImplementedError + from pypy.module.cpyext.typeobject import subtype_dealloc + return llhelper( + subtype_dealloc.api_func.functype, + subtype_dealloc.api_func.get_wrapper(space)) + def allocate(self, space, w_type, itemcount=0): - raise NotImplementedError + # similar to PyType_GenericAlloc? + # except that it's not related to any pypy object. + + pytype = rffi.cast(PyTypeObjectPtr, make_ref(space, w_type)) + # Don't increase refcount for non-heaptypes + if pytype: + flags = rffi.cast(lltype.Signed, pytype.c_tp_flags) + if not flags & Py_TPFLAGS_HEAPTYPE: + Py_DecRef(space, w_type) + + if pytype: + size = pytype.c_tp_basicsize + else: + size = rffi.sizeof(self.basestruct) + if itemcount: + size += itemcount * pytype.c_tp_itemsize + buf = lltype.malloc(rffi.VOIDP.TO, size, + flavor='raw', zero=True) + pyobj = rffi.cast(PyObject, buf) + pyobj.c_ob_refcnt = 1 + pyobj.c_ob_type = pytype + return pyobj + def attach(self, space, pyobj, w_obj): - raise NotImplementedError + pass + def realize(self, space, ref): - raise NotImplementedError + # For most types, a reference cannot exist without + # a real interpreter object + raise InvalidPointerException(str(ref)) typedescr_cache = {} @@ -40,6 +69,7 @@ """ tp_basestruct = kw.pop('basestruct', PyObject.TO) + tp_alloc = kw.pop('alloc', None) tp_attach = kw.pop('attach', None) tp_realize = kw.pop('realize', None) tp_dealloc = kw.pop('dealloc', None) @@ -49,58 +79,24 @@ class CpyTypedescr(BaseCpyTypedescr): basestruct = tp_basestruct - realize = tp_realize - def get_dealloc(self, space): - if tp_dealloc: + if tp_alloc: + def allocate(self, space, w_type, itemcount=0): + return tp_alloc(space, w_type) + + if tp_dealloc: + def get_dealloc(self, space): return llhelper( tp_dealloc.api_func.functype, tp_dealloc.api_func.get_wrapper(space)) - else: - from pypy.module.cpyext.typeobject import subtype_dealloc - return llhelper( - subtype_dealloc.api_func.functype, - subtype_dealloc.api_func.get_wrapper(space)) - - def allocate(self, space, w_type, itemcount=0): - # similar to PyType_GenericAlloc? - # except that it's not related to any pypy object.
- - pytype = rffi.cast(PyTypeObjectPtr, make_ref(space, w_type)) - # Don't increase refcount for non-heaptypes - if pytype: - flags = rffi.cast(lltype.Signed, pytype.c_tp_flags) - if not flags & Py_TPFLAGS_HEAPTYPE: - Py_DecRef(space, w_type) - - if pytype: - size = pytype.c_tp_basicsize - else: - size = rffi.sizeof(tp_basestruct) - if itemcount: - size += itemcount * pytype.c_tp_itemsize - buf = lltype.malloc(rffi.VOIDP.TO, size, - flavor='raw', zero=True) - pyobj = rffi.cast(PyObject, buf) - pyobj.c_ob_refcnt = 1 - pyobj.c_ob_type = pytype - return pyobj if tp_attach: def attach(self, space, pyobj, w_obj): tp_attach(space, pyobj, w_obj) - else: - def attach(self, space, pyobj, w_obj): - pass if tp_realize: def realize(self, space, ref): return tp_realize(space, ref) - else: - def realize(self, space, ref): - # For most types, a reference cannot exist without - # a real interpreter object - raise InvalidPointerException(str(ref)) if typedef: CpyTypedescr.__name__ = "CpyTypedescr_%s" % (typedef.name,) diff --git a/pypy/module/cpyext/stubs.py b/pypy/module/cpyext/stubs.py --- a/pypy/module/cpyext/stubs.py +++ b/pypy/module/cpyext/stubs.py @@ -920,12 +920,6 @@ raise NotImplementedError @cpython_api([PyObject], PyObject) -def PyFunction_GetCode(space, op): - """Return the code object associated with the function object op.""" - borrow_from() - raise NotImplementedError - - at cpython_api([PyObject], PyObject) def PyFunction_GetGlobals(space, op): """Return the globals dictionary associated with the function object op.""" borrow_from() diff --git a/pypy/module/cpyext/test/foo.c b/pypy/module/cpyext/test/foo.c --- a/pypy/module/cpyext/test/foo.c +++ b/pypy/module/cpyext/test/foo.c @@ -215,36 +215,36 @@ typedef struct { PyUnicodeObject HEAD; int val; -} FuuObject; +} UnicodeSubclassObject; -static int Fuu_init(FuuObject *self, PyObject *args, PyObject *kwargs) { +static int UnicodeSubclass_init(UnicodeSubclassObject *self, PyObject *args, PyObject *kwargs) { self->val = 42; return 0; } static PyObject * -Fuu_escape(PyTypeObject* type, PyObject *args) +UnicodeSubclass_escape(PyTypeObject* type, PyObject *args) { Py_RETURN_TRUE; } static PyObject * -Fuu_get_val(FuuObject *self) { +UnicodeSubclass_get_val(UnicodeSubclassObject *self) { return PyInt_FromLong(self->val); } -static PyMethodDef Fuu_methods[] = { - {"escape", (PyCFunction) Fuu_escape, METH_VARARGS, NULL}, - {"get_val", (PyCFunction) Fuu_get_val, METH_NOARGS, NULL}, +static PyMethodDef UnicodeSubclass_methods[] = { + {"escape", (PyCFunction) UnicodeSubclass_escape, METH_VARARGS, NULL}, + {"get_val", (PyCFunction) UnicodeSubclass_get_val, METH_NOARGS, NULL}, {NULL} /* Sentinel */ }; -PyTypeObject FuuType = { +PyTypeObject UnicodeSubtype = { PyObject_HEAD_INIT(NULL) 0, "foo.fuu", - sizeof(FuuObject), + sizeof(UnicodeSubclassObject), 0, 0, /*tp_dealloc*/ 0, /*tp_print*/ @@ -277,7 +277,7 @@ /* Attribute descriptor and subclassing stuff */ - Fuu_methods,/*tp_methods*/ + UnicodeSubclass_methods,/*tp_methods*/ 0, /*tp_members*/ 0, /*tp_getset*/ 0, /*tp_base*/ @@ -287,7 +287,7 @@ 0, /*tp_descr_set*/ 0, /*tp_dictoffset*/ - (initproc) Fuu_init, /*tp_init*/ + (initproc) UnicodeSubclass_init, /*tp_init*/ 0, /*tp_alloc will be set to PyType_GenericAlloc in module init*/ 0, /*tp_new*/ 0, /*tp_free Low-level free-memory routine */ @@ -299,11 +299,11 @@ 0 /*tp_weaklist*/ }; -PyTypeObject Fuu2Type = { +PyTypeObject UnicodeSubtype2 = { PyObject_HEAD_INIT(NULL) 0, "foo.fuu2", - sizeof(FuuObject), + sizeof(UnicodeSubclassObject), 0, 0, /*tp_dealloc*/ 0, 
/*tp_print*/ @@ -628,15 +628,15 @@ footype.tp_new = PyType_GenericNew; - FuuType.tp_base = &PyUnicode_Type; - Fuu2Type.tp_base = &FuuType; + UnicodeSubtype.tp_base = &PyUnicode_Type; + UnicodeSubtype2.tp_base = &UnicodeSubtype; MetaType.tp_base = &PyType_Type; if (PyType_Ready(&footype) < 0) return; - if (PyType_Ready(&FuuType) < 0) + if (PyType_Ready(&UnicodeSubtype) < 0) return; - if (PyType_Ready(&Fuu2Type) < 0) + if (PyType_Ready(&UnicodeSubtype2) < 0) return; if (PyType_Ready(&MetaType) < 0) return; @@ -655,9 +655,9 @@ return; if (PyDict_SetItemString(d, "fooType", (PyObject *)&footype) < 0) return; - if (PyDict_SetItemString(d, "FuuType", (PyObject *) &FuuType) < 0) + if (PyDict_SetItemString(d, "UnicodeSubtype", (PyObject *) &UnicodeSubtype) < 0) return; - if(PyDict_SetItemString(d, "Fuu2Type", (PyObject *) &Fuu2Type) < 0) + if (PyDict_SetItemString(d, "UnicodeSubtype2", (PyObject *) &UnicodeSubtype2) < 0) return; if (PyDict_SetItemString(d, "MetaType", (PyObject *) &MetaType) < 0) return; diff --git a/pypy/module/cpyext/test/test_funcobject.py b/pypy/module/cpyext/test/test_funcobject.py --- a/pypy/module/cpyext/test/test_funcobject.py +++ b/pypy/module/cpyext/test/test_funcobject.py @@ -2,8 +2,12 @@ from pypy.module.cpyext.test.test_cpyext import AppTestCpythonExtensionBase from pypy.module.cpyext.test.test_api import BaseApiTest from pypy.module.cpyext.pyobject import PyObject, make_ref, from_ref -from pypy.module.cpyext.funcobject import PyFunctionObject +from pypy.module.cpyext.funcobject import ( + PyFunctionObject, PyCodeObject, CODE_FLAGS) from pypy.interpreter.function import Function, Method +from pypy.interpreter.pycode import PyCode + +globals().update(CODE_FLAGS) class TestFunctionObject(BaseApiTest): def test_function(self, space, api): @@ -36,6 +40,38 @@ w_method2 = api.PyMethod_New(w_function, w_self, w_class) assert space.eq_w(w_method, w_method2) + def test_getcode(self, space, api): + w_function = space.appexec([], """(): + def func(x, y, z): return x + return func + """) + w_code = api.PyFunction_GetCode(w_function) + assert w_code.co_name == "func" + + ref = make_ref(space, w_code) + assert (from_ref(space, rffi.cast(PyObject, ref.c_ob_type)) is + space.gettypeobject(PyCode.typedef)) + assert "func" == space.unwrap( + from_ref(space, rffi.cast(PyCodeObject, ref).c_co_name)) + assert 3 == rffi.cast(PyCodeObject, ref).c_co_argcount + api.Py_DecRef(ref) + + def test_co_flags(self, space, api): + def get_flags(signature, body="pass"): + w_code = space.appexec([], """(): + def func(%s): %s + return func.__code__ + """ % (signature, body)) + ref = make_ref(space, w_code) + co_flags = rffi.cast(PyCodeObject, ref).c_co_flags + api.Py_DecRef(ref) + return co_flags + assert get_flags("x") == CO_NESTED | CO_OPTIMIZED | CO_NEWLOCALS + assert get_flags("x", "exec x") == CO_NESTED | CO_NEWLOCALS + assert get_flags("x, *args") & CO_VARARGS + assert get_flags("x, **kw") & CO_VARKEYWORDS + assert get_flags("x", "yield x") & CO_GENERATOR + def test_newcode(self, space, api): filename = rffi.str2charp('filename') funcname = rffi.str2charp('funcname') diff --git a/pypy/module/cpyext/test/test_tupleobject.py b/pypy/module/cpyext/test/test_tupleobject.py --- a/pypy/module/cpyext/test/test_tupleobject.py +++ b/pypy/module/cpyext/test/test_tupleobject.py @@ -48,3 +48,4 @@ w_slice = api.PyTuple_GetSlice(w_tuple, 3, -3) assert space.eq_w(w_slice, space.newtuple([space.wrap(i) for i in range(3, 7)])) + diff --git a/pypy/module/cpyext/test/test_typeobject.py 
b/pypy/module/cpyext/test/test_typeobject.py --- a/pypy/module/cpyext/test/test_typeobject.py +++ b/pypy/module/cpyext/test/test_typeobject.py @@ -119,16 +119,16 @@ module = self.import_module(name='foo') obj = module.new() # call __new__ - newobj = module.FuuType(u"xyz") + newobj = module.UnicodeSubtype(u"xyz") assert newobj == u"xyz" - assert isinstance(newobj, module.FuuType) + assert isinstance(newobj, module.UnicodeSubtype) assert isinstance(module.fooType(), module.fooType) class bar(module.fooType): pass assert isinstance(bar(), bar) - fuu = module.FuuType + fuu = module.UnicodeSubtype class fuu2(fuu): def baz(self): return self @@ -137,20 +137,20 @@ def test_init(self): module = self.import_module(name="foo") - newobj = module.FuuType() + newobj = module.UnicodeSubtype() assert newobj.get_val() == 42 # this subtype should inherit tp_init - newobj = module.Fuu2Type() + newobj = module.UnicodeSubtype2() assert newobj.get_val() == 42 # this subclass redefines __init__ - class Fuu2(module.FuuType): + class UnicodeSubclass2(module.UnicodeSubtype): def __init__(self): self.foobar = 32 - super(Fuu2, self).__init__() + super(UnicodeSubclass2, self).__init__() - newobj = Fuu2() + newobj = UnicodeSubclass2() assert newobj.get_val() == 42 assert newobj.foobar == 32 @@ -268,6 +268,21 @@ assert type(obj) is foo.Custom assert type(foo.Custom) is foo.MetaType + def test_heaptype(self): + module = self.import_extension('foo', [ + ("name_by_heaptype", "METH_O", + ''' + PyHeapTypeObject *heaptype = (PyHeapTypeObject *)args; + Py_INCREF(heaptype->ht_name); + return heaptype->ht_name; + ''' + ) + ]) + class C(object): + pass + assert module.name_by_heaptype(C) == "C" + + class TestTypes(BaseApiTest): def test_type_attributes(self, space, api): w_class = space.appexec([], """(): diff --git a/pypy/module/cpyext/typeobject.py b/pypy/module/cpyext/typeobject.py --- a/pypy/module/cpyext/typeobject.py +++ b/pypy/module/cpyext/typeobject.py @@ -11,7 +11,7 @@ generic_cpy_call, Py_TPFLAGS_READY, Py_TPFLAGS_READYING, Py_TPFLAGS_HEAPTYPE, METH_VARARGS, METH_KEYWORDS, CANNOT_FAIL, Py_TPFLAGS_HAVE_GETCHARBUFFER, - build_type_checkers) + build_type_checkers, PyObjectFields) from pypy.module.cpyext.pyobject import ( PyObject, make_ref, create_ref, from_ref, get_typedescr, make_typedescr, track_reference, RefcountState, borrow_from) @@ -25,7 +25,7 @@ from pypy.module.cpyext.structmember import PyMember_GetOne, PyMember_SetOne from pypy.module.cpyext.typeobjectdefs import ( PyTypeObjectPtr, PyTypeObject, PyGetSetDef, PyMemberDef, newfunc, - PyNumberMethods, PySequenceMethods, PyBufferProcs) + PyNumberMethods, PyMappingMethods, PySequenceMethods, PyBufferProcs) from pypy.module.cpyext.slotdefs import ( slotdefs_for_tp_slots, slotdefs_for_wrappers, get_slot_tp_function) from pypy.interpreter.error import OperationError @@ -39,6 +39,19 @@ PyType_Check, PyType_CheckExact = build_type_checkers("Type", "w_type") +PyHeapTypeObjectStruct = lltype.ForwardReference() +PyHeapTypeObject = lltype.Ptr(PyHeapTypeObjectStruct) +PyHeapTypeObjectFields = ( + ("ht_type", PyTypeObject), + ("ht_name", PyObject), + ("as_number", PyNumberMethods), + ("as_mapping", PyMappingMethods), + ("as_sequence", PySequenceMethods), + ("as_buffer", PyBufferProcs), + ) +cpython_struct("PyHeapTypeObject", PyHeapTypeObjectFields, PyHeapTypeObjectStruct, + level=2) + class W_GetSetPropertyEx(GetSetProperty): def __init__(self, getset, w_type): self.getset = getset @@ -136,6 +149,8 @@ assert len(slot_names) == 2 struct = getattr(pto, slot_names[0]) if not 
struct: + assert not space.config.translating + assert not pto.c_tp_flags & Py_TPFLAGS_HEAPTYPE if slot_names[0] == 'c_tp_as_number': STRUCT_TYPE = PyNumberMethods elif slot_names[0] == 'c_tp_as_sequence': @@ -301,6 +316,7 @@ make_typedescr(space.w_type.instancetypedef, basestruct=PyTypeObject, + alloc=type_alloc, attach=type_attach, realize=type_realize, dealloc=type_dealloc) @@ -319,11 +335,13 @@ track_reference(space, lltype.nullptr(PyObject.TO), space.w_type) track_reference(space, lltype.nullptr(PyObject.TO), space.w_object) track_reference(space, lltype.nullptr(PyObject.TO), space.w_tuple) + track_reference(space, lltype.nullptr(PyObject.TO), space.w_str) # create the objects py_type = create_ref(space, space.w_type) py_object = create_ref(space, space.w_object) py_tuple = create_ref(space, space.w_tuple) + py_str = create_ref(space, space.w_str) # form cycles pto_type = rffi.cast(PyTypeObjectPtr, py_type) @@ -340,10 +358,15 @@ pto_object.c_tp_bases.c_ob_type = pto_tuple pto_tuple.c_tp_bases.c_ob_type = pto_tuple + for typ in (py_type, py_object, py_tuple, py_str): + heaptype = rffi.cast(PyHeapTypeObject, typ) + heaptype.c_ht_name.c_ob_type = pto_type + # Restore the mapping track_reference(space, py_type, space.w_type, replace=True) track_reference(space, py_object, space.w_object, replace=True) track_reference(space, py_tuple, space.w_tuple, replace=True) + track_reference(space, py_str, space.w_str, replace=True) @cpython_api([PyObject], lltype.Void, external=False) @@ -416,17 +439,34 @@ Py_DecRef(space, obj_pto.c_tp_cache) # let's do it like cpython Py_DecRef(space, obj_pto.c_tp_dict) if obj_pto.c_tp_flags & Py_TPFLAGS_HEAPTYPE: - if obj_pto.c_tp_as_buffer: - lltype.free(obj_pto.c_tp_as_buffer, flavor='raw') - if obj_pto.c_tp_as_number: - lltype.free(obj_pto.c_tp_as_number, flavor='raw') - if obj_pto.c_tp_as_sequence: - lltype.free(obj_pto.c_tp_as_sequence, flavor='raw') + heaptype = rffi.cast(PyHeapTypeObject, obj) + Py_DecRef(space, heaptype.c_ht_name) Py_DecRef(space, base_pyo) - rffi.free_charp(obj_pto.c_tp_name) PyObject_dealloc(space, obj) +def type_alloc(space, w_metatype): + size = rffi.sizeof(PyHeapTypeObject) + metatype = rffi.cast(PyTypeObjectPtr, make_ref(space, w_metatype)) + # Don't increase refcount for non-heaptypes + if metatype: + flags = rffi.cast(lltype.Signed, metatype.c_tp_flags) + if not flags & Py_TPFLAGS_HEAPTYPE: + Py_DecRef(space, w_metatype) + + heaptype = lltype.malloc(PyHeapTypeObject.TO, + flavor='raw', zero=True) + pto = heaptype.c_ht_type + pto.c_ob_refcnt = 1 + pto.c_ob_type = metatype + pto.c_tp_flags |= Py_TPFLAGS_HEAPTYPE + pto.c_tp_as_number = heaptype.c_as_number + pto.c_tp_as_sequence = heaptype.c_as_sequence + pto.c_tp_as_mapping = heaptype.c_as_mapping + pto.c_tp_as_buffer = heaptype.c_as_buffer + + return rffi.cast(PyObject, heaptype) + def type_attach(space, py_obj, w_type): """ Fills a newly allocated PyTypeObject from an existing type. 
@@ -445,12 +485,18 @@ if space.is_w(w_type, space.w_str): setup_string_buffer_procs(space, pto) - pto.c_tp_flags |= Py_TPFLAGS_HEAPTYPE pto.c_tp_free = llhelper(PyObject_Del.api_func.functype, PyObject_Del.api_func.get_wrapper(space)) pto.c_tp_alloc = llhelper(PyType_GenericAlloc.api_func.functype, PyType_GenericAlloc.api_func.get_wrapper(space)) - pto.c_tp_name = rffi.str2charp(w_type.getname(space)) + if pto.c_tp_flags & Py_TPFLAGS_HEAPTYPE: + w_typename = space.getattr(w_type, space.wrap('__name__')) + heaptype = rffi.cast(PyHeapTypeObject, pto) + heaptype.c_ht_name = make_ref(space, w_typename) + from pypy.module.cpyext.stringobject import PyString_AsString + pto.c_tp_name = PyString_AsString(space, heaptype.c_ht_name) + else: + pto.c_tp_name = rffi.str2charp(w_type.getname(space)) pto.c_tp_basicsize = -1 # hopefully this makes malloc bail out pto.c_tp_itemsize = 0 # uninitialized fields: diff --git a/pypy/module/pwd/test/test_pwd.py b/pypy/module/pwd/test/test_pwd.py --- a/pypy/module/pwd/test/test_pwd.py +++ b/pypy/module/pwd/test/test_pwd.py @@ -5,14 +5,17 @@ cls.space = gettestobjspace(usemodules=['pwd']) def test_getpwuid(self): - import pwd + import pwd, sys raises(KeyError, pwd.getpwuid, -1) pw = pwd.getpwuid(0) assert pw.pw_name == 'root' assert isinstance(pw.pw_passwd, str) assert pw.pw_uid == 0 assert pw.pw_gid == 0 - assert pw.pw_dir == '/root' + if sys.platform.startswith('linux'): + assert pw.pw_dir == '/root' + else: + assert pw.pw_dir.startswith('/') assert pw.pw_shell.startswith('/') # assert type(pw.pw_uid) is int diff --git a/pypy/module/pyexpat/interp_pyexpat.py b/pypy/module/pyexpat/interp_pyexpat.py --- a/pypy/module/pyexpat/interp_pyexpat.py +++ b/pypy/module/pyexpat/interp_pyexpat.py @@ -12,6 +12,7 @@ from pypy.translator.platform import platform import sys +import weakref import py if sys.platform == "win32": @@ -180,7 +181,7 @@ class CallbackData(Wrappable): def __init__(self, space, parser): self.space = space - self.parser = parser + self.parser = weakref.ref(parser) SETTERS = {} for index, (name, params) in enumerate(HANDLERS.items()): @@ -257,7 +258,7 @@ id = rffi.cast(lltype.Signed, %(ll_id)s) userdata = global_storage.get_object(id) space = userdata.space - parser = userdata.parser + parser = userdata.parser() handler = parser.handlers[%(index)s] if not handler: @@ -292,7 +293,7 @@ id = rffi.cast(lltype.Signed, ll_userdata) userdata = global_storage.get_object(id) space = userdata.space - parser = userdata.parser + parser = userdata.parser() name = rffi.charp2str(name) @@ -409,8 +410,7 @@ if XML_ParserFree: # careful with CPython interpreter shutdown XML_ParserFree(self.itself) if global_storage: - global_storage.free_nonmoving_id( - rffi.cast(lltype.Signed, self.itself)) + global_storage.free_nonmoving_id(self.id) @unwrap_spec(flag=int) def SetParamEntityParsing(self, space, flag): diff --git a/pypy/module/pypyjit/interp_jit.py b/pypy/module/pypyjit/interp_jit.py --- a/pypy/module/pypyjit/interp_jit.py +++ b/pypy/module/pypyjit/interp_jit.py @@ -13,7 +13,6 @@ from pypy.interpreter.pyframe import PyFrame from pypy.interpreter.pyopcode import ExitFrame from pypy.interpreter.gateway import unwrap_spec -from pypy.interpreter.baseobjspace import ObjSpace, W_Root from opcode import opmap from pypy.rlib.nonconst import NonConstant from pypy.jit.metainterp.resoperation import rop @@ -221,7 +220,6 @@ def __init__(self, space): self.w_compile_hook = space.w_None - at unwrap_spec(ObjSpace, W_Root) def set_compile_hook(space, w_hook): """ set_compile_hook(hook) diff 
--git a/pypy/module/pypyjit/policy.py b/pypy/module/pypyjit/policy.py --- a/pypy/module/pypyjit/policy.py +++ b/pypy/module/pypyjit/policy.py @@ -16,7 +16,8 @@ if modname in ['pypyjit', 'signal', 'micronumpy', 'math', 'exceptions', 'imp', 'sys', 'array', '_ffi', 'itertools', 'operator', 'posix', '_socket', '_sre', '_lsprof', '_weakref', - '__pypy__', 'cStringIO', '_collections', 'cppyy']: + '__pypy__', 'cStringIO', '_collections', 'struct', + 'cppyy']: return True return False diff --git a/pypy/module/pypyjit/test_pypy_c/test_00_model.py b/pypy/module/pypyjit/test_pypy_c/test_00_model.py --- a/pypy/module/pypyjit/test_pypy_c/test_00_model.py +++ b/pypy/module/pypyjit/test_pypy_c/test_00_model.py @@ -1,5 +1,5 @@ from __future__ import with_statement -import sys +import sys, os import types import subprocess import py diff --git a/pypy/module/pypyjit/test_pypy_c/test__ffi.py b/pypy/module/pypyjit/test_pypy_c/test__ffi.py --- a/pypy/module/pypyjit/test_pypy_c/test__ffi.py +++ b/pypy/module/pypyjit/test_pypy_c/test__ffi.py @@ -29,11 +29,13 @@ pow_addr, res = log.result assert res == 8.0 * 300 loop, = log.loops_by_filename(self.filepath) + if 'ConstClass(pow)' in repr(loop): # e.g. OS/X + pow_addr = 'ConstClass(pow)' assert loop.match_by_id('fficall', """ guard_not_invalidated(descr=...) i17 = force_token() setfield_gc(p0, i17, descr=<.* .*PyFrame.vable_token .*>) - f21 = call_release_gil(%d, 2.000000, 3.000000, descr=) + f21 = call_release_gil(%s, 2.000000, 3.000000, descr=) guard_not_forced(descr=...) guard_no_exception(descr=...) """ % pow_addr) @@ -129,4 +131,5 @@ assert opnames.count('call_release_gil') == 1 idx = opnames.index('call_release_gil') call = ops[idx] - assert int(call.args[0]) == fabs_addr + assert (call.args[0] == 'ConstClass(fabs)' or # e.g. OS/X + int(call.args[0]) == fabs_addr) diff --git a/pypy/module/pypyjit/test_pypy_c/test_call.py b/pypy/module/pypyjit/test_pypy_c/test_call.py --- a/pypy/module/pypyjit/test_pypy_c/test_call.py +++ b/pypy/module/pypyjit/test_pypy_c/test_call.py @@ -337,7 +337,9 @@ assert loop.match_by_id('append', """ i13 = getfield_gc(p8, descr=) i15 = int_add(i13, 1) - call(ConstClass(_ll_list_resize_ge__listPtr_Signed), p8, i15, descr=) + # Will be killed by the backend + i17 = arraylen_gc(p7, descr=) + call(ConstClass(_ll_list_resize_ge), p8, i15, descr=) guard_no_exception(descr=...) p17 = getfield_gc(p8, descr=) p19 = new_with_vtable(ConstClass(W_IntObject)) diff --git a/pypy/module/pypyjit/test_pypy_c/test_containers.py b/pypy/module/pypyjit/test_pypy_c/test_containers.py --- a/pypy/module/pypyjit/test_pypy_c/test_containers.py +++ b/pypy/module/pypyjit/test_pypy_c/test_containers.py @@ -40,12 +40,33 @@ log = self.run(fn, [1000]) assert log.result == 300 loop, = log.loops_by_filename(self.filepath) - # check that the call to ll_dict_lookup is not a call_may_force + # check that the call to ll_dict_lookup is not a call_may_force, the + # gc_id call is hoisted out of the loop, the id of a value obviously + # can't change ;) assert loop.match_by_id("getitem", """ - i25 = call(ConstClass(_ll_1_gc_identityhash__objectPtr), p6, descr=...) - ... i28 = call(ConstClass(ll_dict_lookup__dicttablePtr_objectPtr_Signed), p18, p6, i25, descr=...) ... p33 = call(ConstClass(ll_get_value__dicttablePtr_Signed), p18, i28, descr=...) ... 
""") + + def test_list(self): + def main(n): + i = 0 + while i < n: + z = list(()) + z.append(1) + i += z[-1] / len(z) + return i + + log = self.run(main, [1000]) + assert log.result == main(1000) + loop, = log.loops_by_filename(self.filepath) + assert loop.match(""" + i7 = int_lt(i5, i6) + guard_true(i7, descr=...) + guard_not_invalidated(descr=...) + i9 = int_add(i5, 1) + --TICK-- + jump(..., descr=...) + """) \ No newline at end of file diff --git a/pypy/module/pypyjit/test_pypy_c/test_min_max.py b/pypy/module/pypyjit/test_pypy_c/test_min_max.py --- a/pypy/module/pypyjit/test_pypy_c/test_min_max.py +++ b/pypy/module/pypyjit/test_pypy_c/test_min_max.py @@ -42,7 +42,7 @@ assert len(guards) < 20 assert loop.match_by_id('max',""" ... - p76 = call_may_force(ConstClass(min_max_loop__max), _, _, descr=...) + p76 = call_may_force(ConstClass(min_max_trampoline), _, _, descr=...) ... """) @@ -63,6 +63,6 @@ assert len(guards) < 20 assert loop.match_by_id('max',""" ... - p76 = call_may_force(ConstClass(min_max_loop__max), _, _, descr=...) + p76 = call_may_force(ConstClass(min_max_trampoline), _, _, descr=...) ... """) diff --git a/pypy/module/pypyjit/test_pypy_c/test_misc.py b/pypy/module/pypyjit/test_pypy_c/test_misc.py --- a/pypy/module/pypyjit/test_pypy_c/test_misc.py +++ b/pypy/module/pypyjit/test_pypy_c/test_misc.py @@ -92,6 +92,43 @@ """) + def test_cached_pure_func_of_equal_fields(self): + def main(n): + class A(object): + def __init__(self, val): + self.val1 = self.val2 = val + a = A(1) + b = A(1) + sa = 0 + while n: + sa += 2*a.val1 + sa += 2*b.val2 + b.val2 = a.val1 + n -= 1 + return sa + # + log = self.run(main, [1000]) + assert log.result == 4000 + loop, = log.loops_by_filename(self.filepath) + assert loop.match(""" + i12 = int_is_true(i4) + guard_true(i12, descr=...) + guard_not_invalidated(descr=...) + i13 = int_add_ovf(i8, i9) + guard_no_overflow(descr=...) + i10p = getfield_gc_pure(p10, descr=...) + i10 = int_mul_ovf(2, i10p) + guard_no_overflow(descr=...) + i14 = int_add_ovf(i13, i10) + guard_no_overflow(descr=...) + setfield_gc(p7, p11, descr=...) + i17 = int_sub_ovf(i4, 1) + guard_no_overflow(descr=...) + --TICK-- + jump(..., descr=...) + """) + + def test_range_iter(self): def main(n): def g(n): @@ -115,7 +152,6 @@ i21 = force_token() setfield_gc(p4, i20, descr=<.* .*W_AbstractSeqIterObject.inst_index .*>) guard_not_invalidated(descr=...) - i26 = int_sub(i9, 1) i23 = int_lt(i18, 0) guard_false(i23, descr=...) i25 = int_ge(i18, i9) @@ -249,3 +285,48 @@ loop, = log.loops_by_id("globalread", is_entry_bridge=True) assert len(loop.ops_by_id("globalread")) == 0 + + def test_struct_module(self): + def main(): + import struct + i = 1 + while i < 1000: + x = struct.unpack("i", struct.pack("i", i))[0] # ID: struct + i += x / i + return i + + log = self.run(main) + assert log.result == main() + + loop, = log.loops_by_id("struct") + if sys.maxint == 2 ** 63 - 1: + extra = """ + i8 = int_lt(i4, -2147483648) + guard_false(i8, descr=...) + """ + else: + extra = "" + # This could, of course stand some improvement, to remove all these + # arithmatic ops, but we've removed all the core overhead. + assert loop.match_by_id("struct", """ + guard_not_invalidated(descr=...) 
+ # struct.pack + %(32_bit_only)s + i11 = int_and(i4, 255) + i13 = int_rshift(i4, 8) + i14 = int_and(i13, 255) + i16 = int_rshift(i13, 8) + i17 = int_and(i16, 255) + i19 = int_rshift(i16, 8) + i20 = int_and(i19, 255) + + # struct.unpack + i22 = int_lshift(i14, 8) + i23 = int_or(i11, i22) + i25 = int_lshift(i17, 16) + i26 = int_or(i23, i25) + i28 = int_ge(i20, 128) + guard_false(i28, descr=...) + i30 = int_lshift(i20, 24) + i31 = int_or(i26, i30) + """ % {"32_bit_only": extra}) \ No newline at end of file diff --git a/pypy/module/pypyjit/test_pypy_c/test_string.py b/pypy/module/pypyjit/test_pypy_c/test_string.py --- a/pypy/module/pypyjit/test_pypy_c/test_string.py +++ b/pypy/module/pypyjit/test_pypy_c/test_string.py @@ -1,5 +1,6 @@ from pypy.module.pypyjit.test_pypy_c.test_00_model import BaseTestPyPyC + class TestString(BaseTestPyPyC): def test_lookup_default_encoding(self): def main(n): @@ -107,3 +108,52 @@ --TICK-- jump(p0, p1, p2, p3, p4, p5, i58, i7, descr=) """) + + def test_str_mod(self): + def main(n): + s = 0 + while n > 0: + s += len('%d %d' % (n, n)) + n -= 1 + return s + + log = self.run(main, [1000]) + assert log.result == main(1000) + loop, = log.loops_by_filename(self.filepath) + assert loop.match(""" + i7 = int_gt(i4, 0) + guard_true(i7, descr=...) + guard_not_invalidated(descr=...) + p9 = call(ConstClass(ll_int2dec__Signed), i4, descr=) + guard_no_exception(descr=...) + i10 = strlen(p9) + i11 = int_is_true(i10) + guard_true(i11, descr=...) + i13 = strgetitem(p9, 0) + i15 = int_eq(i13, 45) + guard_false(i15, descr=...) + i17 = int_sub(0, i10) + i19 = int_gt(i10, 23) + guard_false(i19, descr=...) + p21 = newstr(23) + copystrcontent(p9, p21, 0, 0, i10) + i25 = int_add(1, i10) + i26 = int_gt(i25, 23) + guard_false(i26, descr=...) + strsetitem(p21, i10, 32) + i29 = int_add(i10, 1) + i30 = int_add(i10, i25) + i31 = int_gt(i30, 23) + guard_false(i31, descr=...) + copystrcontent(p9, p21, 0, i25, i10) + i33 = int_eq(i30, 23) + guard_false(i33, descr=...) + p35 = call(ConstClass(ll_shrink_array__rpy_stringPtr_Signed), p21, i30, descr=) + guard_no_exception(descr=...) + i37 = strlen(p35) + i38 = int_add_ovf(i5, i37) + guard_no_overflow(descr=...) 
+ i40 = int_sub(i4, 1) + --TICK-- + jump(p0, p1, p2, p3, i40, i38, descr=) + """) \ No newline at end of file diff --git a/pypy/module/struct/formatiterator.py b/pypy/module/struct/formatiterator.py --- a/pypy/module/struct/formatiterator.py +++ b/pypy/module/struct/formatiterator.py @@ -1,9 +1,9 @@ -from pypy.interpreter.error import OperationError - +from pypy.rlib import jit from pypy.rlib.objectmodel import specialize from pypy.rlib.rstruct.error import StructError +from pypy.rlib.rstruct.formatiterator import FormatIterator from pypy.rlib.rstruct.standardfmttable import PACK_ACCEPTS_BROKEN_INPUT -from pypy.rlib.rstruct.formatiterator import FormatIterator +from pypy.interpreter.error import OperationError class PackFormatIterator(FormatIterator): @@ -14,15 +14,20 @@ self.args_index = 0 self.result = [] # list of characters + # This *should* be always unroll safe, the only way to get here is by + # unroll the interpret function, which means the fmt is const, and thus + # this should be const (in theory ;) + @jit.unroll_safe + @specialize.arg(1) def operate(self, fmtdesc, repetitions): if fmtdesc.needcount: fmtdesc.pack(self, repetitions) else: for i in range(repetitions): fmtdesc.pack(self) - operate._annspecialcase_ = 'specialize:arg(1)' _operate_is_specialized_ = True + @jit.unroll_safe def align(self, mask): pad = (-len(self.result)) & mask for i in range(pad): @@ -130,13 +135,15 @@ self.inputpos = 0 self.result_w = [] # list of wrapped objects + # See above comment on operate. + @jit.unroll_safe + @specialize.arg(1) def operate(self, fmtdesc, repetitions): if fmtdesc.needcount: fmtdesc.unpack(self, repetitions) else: for i in range(repetitions): fmtdesc.unpack(self) - operate._annspecialcase_ = 'specialize:arg(1)' _operate_is_specialized_ = True def align(self, mask): @@ -154,7 +161,6 @@ self.inputpos = end return s + @specialize.argtype(1) def appendobj(self, value): self.result_w.append(self.space.wrap(value)) - appendobj._annspecialcase_ = 'specialize:argtype(1)' - diff --git a/pypy/module/struct/interp_struct.py b/pypy/module/struct/interp_struct.py --- a/pypy/module/struct/interp_struct.py +++ b/pypy/module/struct/interp_struct.py @@ -3,6 +3,7 @@ from pypy.rlib.rstruct.error import StructError from pypy.rlib.rstruct.formatiterator import CalcSizeFormatIterator + @unwrap_spec(format=str) def calcsize(space, format): fmtiter = CalcSizeFormatIterator() diff --git a/pypy/module/sys/test/test_sysmodule.py b/pypy/module/sys/test/test_sysmodule.py --- a/pypy/module/sys/test/test_sysmodule.py +++ b/pypy/module/sys/test/test_sysmodule.py @@ -6,11 +6,11 @@ import sys def test_stdin_exists(space): - space.sys.get('stdin') + space.sys.get('stdin') space.sys.get('__stdin__') def test_stdout_exists(space): - space.sys.get('stdout') + space.sys.get('stdout') space.sys.get('__stdout__') class AppTestAppSysTests: @@ -25,7 +25,7 @@ assert 'sys' in modules, ( "An entry for sys " "is not in sys.modules.") sys2 = sys.modules['sys'] - assert sys is sys2, "import sys is not sys.modules[sys]." + assert sys is sys2, "import sys is not sys.modules[sys]." 
def test_builtin_in_modules(self): import sys modules = sys.modules @@ -89,12 +89,12 @@ else: raise AssertionError, "ZeroDivisionError not caught" - def test_io(self): + def test_io(self): import sys assert isinstance(sys.__stdout__, file) assert isinstance(sys.__stderr__, file) assert isinstance(sys.__stdin__, file) - + if self.appdirect and not isinstance(sys.stdin, file): return @@ -324,7 +324,7 @@ import sys if self.appdirect: skip("not worth running appdirect") - + encoding = sys.getdefaultencoding() try: sys.setdefaultencoding("ascii") @@ -334,11 +334,11 @@ sys.setdefaultencoding("latin-1") assert sys.getdefaultencoding() == 'latin-1' assert unicode('\x80') == u'\u0080' - + finally: sys.setdefaultencoding(encoding) - + # testing sys.settrace() is done in test_trace.py # testing sys.setprofile() is done in test_profile.py @@ -372,6 +372,21 @@ assert isinstance(v[3], int) assert isinstance(v[4], str) + assert v[0] == v.major + assert v[1] == v.minor + assert v[2] == v.build + assert v[3] == v.platform + assert v[4] == v.service_pack + + assert isinstance(v.service_pack_minor, int) + assert isinstance(v.service_pack_major, int) + assert isinstance(v.suite_mask, int) + assert isinstance(v.product_type, int) + + # This is how platform.py calls it. Make sure tuple still has 5 + # elements + maj, min, buildno, plat, csd = sys.getwindowsversion() + def test_winver(self): import sys if hasattr(sys, "winver"): @@ -564,7 +579,7 @@ if self.ready: break time.sleep(0.1) return sys._current_frames() - + frames = f() thisframe = frames.pop(thread_id) assert thisframe.f_code.co_name == 'f' diff --git a/pypy/module/sys/version.py b/pypy/module/sys/version.py --- a/pypy/module/sys/version.py +++ b/pypy/module/sys/version.py @@ -10,7 +10,7 @@ CPYTHON_VERSION = (2, 7, 1, "final", 42) #XXX # sync patchlevel.h CPYTHON_API_VERSION = 1013 #XXX # sync with include/modsupport.h -PYPY_VERSION = (1, 6, 0, "dev", 1) #XXX # sync patchlevel.h +PYPY_VERSION = (1, 6, 1, "dev", 0) #XXX # sync patchlevel.h if platform.name == 'msvc': COMPILER_INFO = 'MSC v.%d 32 bit' % (platform.version * 10 + 600) diff --git a/pypy/module/sys/vm.py b/pypy/module/sys/vm.py --- a/pypy/module/sys/vm.py +++ b/pypy/module/sys/vm.py @@ -1,11 +1,13 @@ """ Implementation of interpreter-level 'sys' routines. """ +import sys + +from pypy.interpreter import gateway from pypy.interpreter.error import OperationError from pypy.interpreter.gateway import unwrap_spec, NoneNotWrapped +from pypy.rlib import jit from pypy.rlib.runicode import MAXUNICODE -from pypy.rlib import jit -import sys # ____________________________________________________________ @@ -50,6 +52,8 @@ current stack frame. This function should be used for specialized purposes only.""" + raise OperationError(space.w_NotImplementedError, + space.wrap("XXX sys._current_frames() incompatible with the JIT")) w_result = space.newdict() ecs = space.threadlocals.getallvalues() for thread_ident, ec in ecs.items(): @@ -58,7 +62,7 @@ space.setitem(w_result, space.wrap(thread_ident), space.wrap(f)) - return w_result + return w_result def setrecursionlimit(space, w_new_limit): """setrecursionlimit() sets the maximum number of nested calls that @@ -124,7 +128,7 @@ """Set the global debug tracing function. It will be called on each function call. See the debugger chapter in the library manual.""" space.getexecutioncontext().settrace(w_func) - + def setprofile(space, w_func): """Set the profiling function. It will be called on each function call and return. 
See the profiler chapter in the library manual.""" @@ -145,14 +149,47 @@ a debugger from a checkpoint, to recursively debug some other code.""" return space.getexecutioncontext().call_tracing(w_func, w_args) + +app = gateway.applevel(''' +"NOT_RPYTHON" +from _structseq import structseqtype, structseqfield + +class windows_version_info: + __metaclass__ = structseqtype + + name = "sys.getwindowsversion" + + major = structseqfield(0, "Major version number") + minor = structseqfield(1, "Minor version number") + build = structseqfield(2, "Build number") + platform = structseqfield(3, "Operating system platform") + service_pack = structseqfield(4, "Latest Service Pack installed on the system") + + # Because the indices aren't consecutive, they aren't included when + # unpacking and other such operations. + service_pack_major = structseqfield(10, "Service Pack major version number") + service_pack_minor = structseqfield(11, "Service Pack minor version number") + suite_mask = structseqfield(12, "Bit mask identifying available product suites") + product_type = structseqfield(13, "System product type") +''') + + def getwindowsversion(space): from pypy.rlib import rwin32 info = rwin32.GetVersionEx() - return space.newtuple([space.wrap(info[0]), - space.wrap(info[1]), - space.wrap(info[2]), - space.wrap(info[3]), - space.wrap(info[4])]) + w_windows_version_info = app.wget(space, "windows_version_info") + raw_version = space.newtuple([ + space.wrap(info[0]), + space.wrap(info[1]), + space.wrap(info[2]), + space.wrap(info[3]), + space.wrap(info[4]), + space.wrap(info[5]), + space.wrap(info[6]), + space.wrap(info[7]), + space.wrap(info[8]), + ]) + return space.call_function(w_windows_version_info, raw_version) @jit.dont_look_inside def get_dllhandle(space): diff --git a/pypy/module/test_lib_pypy/test_greenlet.py b/pypy/module/test_lib_pypy/test_greenlet.py --- a/pypy/module/test_lib_pypy/test_greenlet.py +++ b/pypy/module/test_lib_pypy/test_greenlet.py @@ -3,7 +3,7 @@ class AppTestGreenlet: def setup_class(cls): - cls.space = gettestobjspace(usemodules=['_continuation']) + cls.space = gettestobjspace(usemodules=['_continuation'], continuation=True) def test_simple(self): from greenlet import greenlet @@ -241,3 +241,42 @@ g1 = greenlet(f1) raises(ValueError, g1.throw, ValueError) assert g1.dead + + def test_exc_info_save_restore(self): + # sys.exc_info save/restore behaviour is wrong on CPython's greenlet + from greenlet import greenlet + import sys + def f(): + try: + raise ValueError('fun') + except: + exc_info = sys.exc_info() + greenlet(h).switch() + assert exc_info == sys.exc_info() + + def h(): + assert sys.exc_info() == (None, None, None) + + greenlet(f).switch() + + def test_gr_frame(self): + from greenlet import greenlet + import sys + def f2(): + assert g.gr_frame is None + gmain.switch() + assert g.gr_frame is None + def f1(): + assert gmain.gr_frame is gmain_frame + assert g.gr_frame is None + f2() + assert g.gr_frame is None + gmain = greenlet.getcurrent() + assert gmain.gr_frame is None + gmain_frame = sys._getframe() + g = greenlet(f1) + assert g.gr_frame is None + g.switch() + assert g.gr_frame.f_code.co_name == 'f2' + g.switch() + assert g.gr_frame is None diff --git a/pypy/module/test_lib_pypy/test_stackless_pickle.py b/pypy/module/test_lib_pypy/test_stackless_pickle.py --- a/pypy/module/test_lib_pypy/test_stackless_pickle.py +++ b/pypy/module/test_lib_pypy/test_stackless_pickle.py @@ -1,25 +1,27 @@ -import py; py.test.skip("XXX port me") +import py +py.test.skip("in-progress, maybe") 
from pypy.conftest import gettestobjspace, option class AppTest_Stackless: def setup_class(cls): - import py.test - py.test.importorskip('greenlet') - space = gettestobjspace(usemodules=('_stackless', '_socket')) + space = gettestobjspace(usemodules=('_continuation', '_socket')) cls.space = space - # cannot test the unpickle part on top of py.py + if option.runappdirect: + cls.w_lev = space.wrap(14) + else: + cls.w_lev = space.wrap(2) def test_pickle(self): import new, sys mod = new.module('mod') sys.modules['mod'] = mod + mod.lev = self.lev try: exec ''' import pickle, sys import stackless -lev = 14 ch = stackless.channel() seen = [] diff --git a/pypy/objspace/descroperation.py b/pypy/objspace/descroperation.py --- a/pypy/objspace/descroperation.py +++ b/pypy/objspace/descroperation.py @@ -508,7 +508,7 @@ return space._type_issubtype(w_sub, w_type) def isinstance(space, w_inst, w_type): - return space._type_isinstance(w_inst, w_type) + return space.wrap(space._type_isinstance(w_inst, w_type)) def issubtype_allow_override(space, w_sub, w_type): w_check = space.lookup(w_type, "__subclasscheck__") diff --git a/pypy/objspace/std/bytearrayobject.py b/pypy/objspace/std/bytearrayobject.py --- a/pypy/objspace/std/bytearrayobject.py +++ b/pypy/objspace/std/bytearrayobject.py @@ -250,7 +250,8 @@ def repr__Bytearray(space, w_bytearray): s = w_bytearray.data - buf = StringBuilder(50) + # Good default if there are no replacements. + buf = StringBuilder(len("bytearray(b'')") + len(s)) buf.append("bytearray(b'") @@ -369,8 +370,8 @@ newdata = [] for i in range(len(list_w)): w_s = list_w[i] - if not (space.is_true(space.isinstance(w_s, space.w_str)) or - space.is_true(space.isinstance(w_s, space.w_bytearray))): + if not (space.isinstance_w(w_s, space.w_str) or + space.isinstance_w(w_s, space.w_bytearray)): raise operationerrfmt( space.w_TypeError, "sequence item %d: expected string, %s " diff --git a/pypy/objspace/std/complextype.py b/pypy/objspace/std/complextype.py --- a/pypy/objspace/std/complextype.py +++ b/pypy/objspace/std/complextype.py @@ -127,8 +127,8 @@ and space.is_w(space.type(w_real), space.w_complex)): return w_real - if space.is_true(space.isinstance(w_real, space.w_str)) or \ - space.is_true(space.isinstance(w_real, space.w_unicode)): + if space.isinstance_w(w_real, space.w_str) or \ + space.isinstance_w(w_real, space.w_unicode): # a string argument if not noarg2: raise OperationError(space.w_TypeError, @@ -203,8 +203,8 @@ return (w_complex.realval, w_complex.imagval) # # Check that it is not a string (on which space.float() would succeed). 
- if (space.is_true(space.isinstance(w_complex, space.w_str)) or - space.is_true(space.isinstance(w_complex, space.w_unicode))): + if (space.isinstance_w(w_complex, space.w_str) or + space.isinstance_w(w_complex, space.w_unicode)): raise operationerrfmt(space.w_TypeError, "complex number expected, got '%s'", space.type(w_complex).getname(space)) diff --git a/pypy/objspace/std/floattype.py b/pypy/objspace/std/floattype.py --- a/pypy/objspace/std/floattype.py +++ b/pypy/objspace/std/floattype.py @@ -32,14 +32,14 @@ if space.is_w(w_floattype, space.w_float): return w_obj value = space.float_w(w_obj) - elif space.is_true(space.isinstance(w_value, space.w_str)): + elif space.isinstance_w(w_value, space.w_str): strvalue = space.str_w(w_value) try: value = string_to_float(strvalue) except ParseStringError, e: raise OperationError(space.w_ValueError, space.wrap(e.msg)) - elif space.is_true(space.isinstance(w_value, space.w_unicode)): + elif space.isinstance_w(w_value, space.w_unicode): if space.config.objspace.std.withropeunicode: from pypy.objspace.std.ropeunicodeobject import unicode_to_decimal_w else: diff --git a/pypy/objspace/std/formatting.py b/pypy/objspace/std/formatting.py --- a/pypy/objspace/std/formatting.py +++ b/pypy/objspace/std/formatting.py @@ -1,13 +1,15 @@ """ String formatting routines. """ -from pypy.rlib.unroll import unrolling_iterable +from pypy.interpreter.error import OperationError +from pypy.objspace.std.unicodetype import unicode_from_object +from pypy.rlib import jit from pypy.rlib.rarithmetic import ovfcheck from pypy.rlib.rfloat import formatd, DTSF_ALT, isnan, isinf -from pypy.interpreter.error import OperationError +from pypy.rlib.rstring import StringBuilder, UnicodeBuilder +from pypy.rlib.unroll import unrolling_iterable from pypy.tool.sourcetools import func_with_new_name -from pypy.rlib.rstring import StringBuilder, UnicodeBuilder -from pypy.objspace.std.unicodetype import unicode_from_object + class BaseStringFormatter(object): def __init__(self, space, values_w, w_valuedict): @@ -173,6 +175,9 @@ raise OperationError(space.w_ValueError, space.wrap("incomplete format")) + # Only shows up if we've already started inlining format(), so just + # unconditionally unroll this. 
+ @jit.unroll_safe def getmappingkey(self): # return the mapping key in a '%(key)s' specifier fmt = self.fmt @@ -233,6 +238,8 @@ return w_value + # Same as getmappingkey + @jit.unroll_safe def peel_flags(self): self.f_ljust = False self.f_sign = False @@ -255,6 +262,8 @@ break self.forward() + # Same as getmappingkey + @jit.unroll_safe def peel_num(self): space = self.space c = self.peekchr() @@ -276,6 +285,7 @@ c = self.peekchr() return result + @jit.look_inside_iff(lambda self: jit.isconstant(self.fmt)) def format(self): lgt = len(self.fmt) + 4 * len(self.values_w) + 10 if do_unicode: @@ -415,15 +425,15 @@ space.wrap("operand does not support " "unary str")) w_result = space.get_and_call_function(w_impl, w_value) - if space.is_true(space.isinstance(w_result, - space.w_unicode)): + if space.isinstance_w(w_result, + space.w_unicode): raise NeedUnicodeFormattingError return space.str_w(w_result) def fmt_s(self, w_value): space = self.space - got_unicode = space.is_true(space.isinstance(w_value, - space.w_unicode)) + got_unicode = space.isinstance_w(w_value, + space.w_unicode) if not do_unicode: if got_unicode: raise NeedUnicodeFormattingError @@ -442,13 +452,13 @@ def fmt_c(self, w_value): self.prec = -1 # just because space = self.space - if space.is_true(space.isinstance(w_value, space.w_str)): + if space.isinstance_w(w_value, space.w_str): s = space.str_w(w_value) if len(s) != 1: raise OperationError(space.w_TypeError, space.wrap("%c requires int or char")) self.std_wp(s) - elif space.is_true(space.isinstance(w_value, space.w_unicode)): + elif space.isinstance_w(w_value, space.w_unicode): if not do_unicode: raise NeedUnicodeFormattingError ustr = space.unicode_w(w_value) @@ -510,15 +520,15 @@ return space.wrap(result) def mod_format(space, w_format, w_values, do_unicode=False): - if space.is_true(space.isinstance(w_values, space.w_tuple)): + if space.isinstance_w(w_values, space.w_tuple): values_w = space.fixedview(w_values) return format(space, w_format, values_w, None, do_unicode) else: # we check directly for dict to avoid obscure checking # in simplest case - if space.is_true(space.isinstance(w_values, space.w_dict)) or \ + if space.isinstance_w(w_values, space.w_dict) or \ (space.lookup(w_values, '__getitem__') and - not space.is_true(space.isinstance(w_values, space.w_basestring))): + not space.isinstance_w(w_values, space.w_basestring)): return format(space, w_format, [w_values], w_values, do_unicode) else: return format(space, w_format, [w_values], None, do_unicode) diff --git a/pypy/objspace/std/inttype.py b/pypy/objspace/std/inttype.py --- a/pypy/objspace/std/inttype.py +++ b/pypy/objspace/std/inttype.py @@ -99,10 +99,10 @@ if type(w_value) is W_IntObject: value = w_value.intval ok = True - elif space.is_true(space.isinstance(w_value, space.w_str)): + elif space.isinstance_w(w_value, space.w_str): value, w_longval = string_to_int_or_long(space, space.str_w(w_value)) ok = True - elif space.is_true(space.isinstance(w_value, space.w_unicode)): + elif space.isinstance_w(w_value, space.w_unicode): if space.config.objspace.std.withropeunicode: from pypy.objspace.std.ropeunicodeobject import unicode_to_decimal_w else: @@ -145,7 +145,7 @@ else: base = space.int_w(w_base) - if space.is_true(space.isinstance(w_value, space.w_unicode)): + if space.isinstance_w(w_value, space.w_unicode): if space.config.objspace.std.withropeunicode: from pypy.objspace.std.ropeunicodeobject import unicode_to_decimal_w else: diff --git a/pypy/objspace/std/listobject.py b/pypy/objspace/std/listobject.py --- 
a/pypy/objspace/std/listobject.py +++ b/pypy/objspace/std/listobject.py @@ -8,7 +8,7 @@ from pypy.objspace.std import slicetype from pypy.interpreter import gateway, baseobjspace -from pypy.rlib.listsort import TimSort +from pypy.rlib.listsort import make_timsort_class from pypy.interpreter.argument import Signature class W_ListObject(W_Object): @@ -44,7 +44,7 @@ if w_iterable is not None: # unfortunately this is duplicating space.unpackiterable to avoid # assigning a new RPython list to 'wrappeditems', which defeats the - # W_FastSeqIterObject optimization. + # W_FastListIterObject optimization. if isinstance(w_iterable, W_ListObject): items_w.extend(w_iterable.wrappeditems) elif isinstance(w_iterable, W_TupleObject): @@ -445,6 +445,7 @@ self.w_key = w_key self.w_item = w_item +TimSort = make_timsort_class() # NOTE: all the subclasses of TimSort should inherit from a common subclass, # so make sure that only SimpleSort inherits directly from TimSort. # This is necessary to hide the parent method TimSort.lt() from the diff --git a/pypy/objspace/std/longtype.py b/pypy/objspace/std/longtype.py --- a/pypy/objspace/std/longtype.py +++ b/pypy/objspace/std/longtype.py @@ -24,9 +24,9 @@ return w_value elif type(w_value) is W_LongObject: return newbigint(space, w_longtype, w_value.num) - elif space.is_true(space.isinstance(w_value, space.w_str)): + elif space.isinstance_w(w_value, space.w_str): return string_to_w_long(space, w_longtype, space.str_w(w_value)) - elif space.is_true(space.isinstance(w_value, space.w_unicode)): + elif space.isinstance_w(w_value, space.w_unicode): if space.config.objspace.std.withropeunicode: from pypy.objspace.std.ropeunicodeobject import unicode_to_decimal_w else: @@ -51,7 +51,7 @@ else: base = space.int_w(w_base) - if space.is_true(space.isinstance(w_value, space.w_unicode)): + if space.isinstance_w(w_value, space.w_unicode): from pypy.objspace.std.unicodeobject import unicode_to_decimal_w s = unicode_to_decimal_w(space, w_value) else: diff --git a/pypy/objspace/std/mapdict.py b/pypy/objspace/std/mapdict.py --- a/pypy/objspace/std/mapdict.py +++ b/pypy/objspace/std/mapdict.py @@ -132,7 +132,10 @@ cache[selector] = attr return attr - @jit.unroll_safe + @jit.look_inside_iff(lambda self, obj, selector, w_value: + jit.isconstant(self) and + jit.isconstant(selector[0]) and + jit.isconstant(selector[1])) def add_attr(self, obj, selector, w_value): # grumble, jit needs this attr = self._get_new_attr(selector[0], selector[1]) @@ -347,7 +350,7 @@ SLOTS_STARTING_FROM = 3 -class BaseMapdictObject: # slightly evil to make it inherit from W_Root +class BaseMapdictObject: _mixin_ = True def _init_empty(self, map): diff --git a/pypy/objspace/std/newformat.py b/pypy/objspace/std/newformat.py --- a/pypy/objspace/std/newformat.py +++ b/pypy/objspace/std/newformat.py @@ -3,9 +3,10 @@ import string from pypy.interpreter.error import OperationError -from pypy.rlib import rstring, runicode, rlocale, rarithmetic, rfloat +from pypy.rlib import rstring, runicode, rlocale, rarithmetic, rfloat, jit from pypy.rlib.objectmodel import specialize from pypy.rlib.rfloat import copysign, formatd +from pypy.tool import sourcetools @specialize.argtype(1) @@ -36,314 +37,321 @@ ANS_MANUAL = 3 -class TemplateFormatter(object): +def make_template_formatting_class(): + class TemplateFormatter(object): - _annspecialcase_ = "specialize:ctr_location" + parser_list_w = None - parser_list_w = None + def __init__(self, space, is_unicode, template): + self.space = space + self.is_unicode = is_unicode + self.empty 
= u"" if is_unicode else "" + self.template = template - def __init__(self, space, is_unicode, template): - self.space = space - self.is_unicode = is_unicode - self.empty = u"" if is_unicode else "" - self.template = template + def build(self, args): + self.args, self.kwargs = args.unpack() + self.auto_numbering = 0 + self.auto_numbering_state = ANS_INIT + return self._build_string(0, len(self.template), 2) - def build(self, args): - self.args, self.kwargs = args.unpack() - self.auto_numbering = 0 - self.auto_numbering_state = ANS_INIT - return self._build_string(0, len(self.template), 2) + def _build_string(self, start, end, level): + space = self.space + if self.is_unicode: + out = rstring.UnicodeBuilder() + else: + out = rstring.StringBuilder() + if not level: + raise OperationError(space.w_ValueError, + space.wrap("Recursion depth exceeded")) + level -= 1 + s = self.template + return self._do_build_string(start, end, level, out, s) - def _build_string(self, start, end, level): - space = self.space - if self.is_unicode: - out = rstring.UnicodeBuilder() - else: - out = rstring.StringBuilder() - if not level: - raise OperationError(space.w_ValueError, - space.wrap("Recursion depth exceeded")) - level -= 1 - s = self.template - last_literal = i = start - while i < end: - c = s[i] - i += 1 - if c == "{" or c == "}": - at_end = i == end - # Find escaped "{" and "}" - markup_follows = True - if c == "}": - if at_end or s[i] != "}": - raise OperationError(space.w_ValueError, - space.wrap("Single '}'")) - i += 1 - markup_follows = False - if c == "{": - if at_end: - raise OperationError(space.w_ValueError, - space.wrap("Single '{'")) - if s[i] == "{": + @jit.look_inside_iff(lambda self, start, end, level, out, s: jit.isconstant(s)) + def _do_build_string(self, start, end, level, out, s): + space = self.space + last_literal = i = start + while i < end: + c = s[i] + i += 1 + if c == "{" or c == "}": + at_end = i == end + # Find escaped "{" and "}" + markup_follows = True + if c == "}": + if at_end or s[i] != "}": + raise OperationError(space.w_ValueError, + space.wrap("Single '}'")) i += 1 markup_follows = False - # Attach literal data - out.append_slice(s, last_literal, i - 1) - if not markup_follows: + if c == "{": + if at_end: + raise OperationError(space.w_ValueError, + space.wrap("Single '{'")) + if s[i] == "{": + i += 1 + markup_follows = False + # Attach literal data + out.append_slice(s, last_literal, i - 1) + if not markup_follows: + last_literal = i + continue + nested = 1 + field_start = i + recursive = False + while i < end: + c = s[i] + if c == "{": + recursive = True + nested += 1 + elif c == "}": + nested -= 1 + if not nested: + break + i += 1 + if nested: + raise OperationError(space.w_ValueError, + space.wrap("Unmatched '{'")) + rendered = self._render_field(field_start, i, recursive, level) + out.append(rendered) + i += 1 last_literal = i - continue - nested = 1 - field_start = i - recursive = False - while i < end: - c = s[i] - if c == "{": - recursive = True - nested += 1 - elif c == "}": - nested -= 1 - if not nested: - break - i += 1 - if nested: - raise OperationError(space.w_ValueError, - space.wrap("Unmatched '{'")) - rendered = self._render_field(field_start, i, recursive, level) - out.append(rendered) + + out.append_slice(s, last_literal, end) + return out.build() + + def _parse_field(self, start, end): + s = self.template + # Find ":" or "!" 
+ i = start + while i < end: + c = s[i] + if c == ":" or c == "!": + end_name = i + if c == "!": + i += 1 + if i == end: + w_msg = self.space.wrap("expected conversion") + raise OperationError(self.space.w_ValueError, w_msg) + conversion = s[i] + i += 1 + if i < end: + if s[i] != ':': + w_msg = self.space.wrap("expected ':' after" + " format specifier") + raise OperationError(self.space.w_ValueError, + w_msg) + i += 1 + else: + conversion = None + i += 1 + return s[start:end_name], conversion, i i += 1 - last_literal = i + return s[start:end], None, end - out.append_slice(s, last_literal, end) - return out.build() - - def _parse_field(self, start, end): - s = self.template - # Find ":" or "!" - i = start - while i < end: - c = s[i] - if c == ":" or c == "!": - end_name = i - if c == "!": - i += 1 - if i == end: - w_msg = self.space.wrap("expected conversion") - raise OperationError(self.space.w_ValueError, w_msg) - conversion = s[i] - i += 1 - if i < end: - if s[i] != ':': - w_msg = self.space.wrap("expected ':' after" - " format specifier") - raise OperationError(self.space.w_ValueError, - w_msg) - i += 1 + def _get_argument(self, name): + # First, find the argument. + space = self.space + i = 0 + end = len(name) + while i < end: + c = name[i] + if c == "[" or c == ".": + break + i += 1 + empty = not i + if empty: + index = -1 + else: + index, stop = _parse_int(self.space, name, 0, i) + if stop != i: + index = -1 + use_numeric = empty or index != -1 + if self.auto_numbering_state == ANS_INIT and use_numeric: + if empty: + self.auto_numbering_state = ANS_AUTO else: - conversion = None - i += 1 - return s[start:end_name], conversion, i - i += 1 - return s[start:end], None, end - - def _get_argument(self, name): - # First, find the argument. - space = self.space - i = 0 - end = len(name) - while i < end: - c = name[i] - if c == "[" or c == ".": - break - i += 1 - empty = not i - if empty: - index = -1 - else: - index, stop = _parse_int(self.space, name, 0, i) - if stop != i: - index = -1 - use_numeric = empty or index != -1 - if self.auto_numbering_state == ANS_INIT and use_numeric: - if empty: - self.auto_numbering_state = ANS_AUTO - else: - self.auto_numbering_state = ANS_MANUAL - if use_numeric: - if self.auto_numbering_state == ANS_MANUAL: - if empty: - msg = "switching from manual to automatic numbering" + self.auto_numbering_state = ANS_MANUAL + if use_numeric: + if self.auto_numbering_state == ANS_MANUAL: + if empty: + msg = "switching from manual to automatic numbering" + raise OperationError(space.w_ValueError, + space.wrap(msg)) + elif not empty: + msg = "switching from automatic to manual numbering" raise OperationError(space.w_ValueError, space.wrap(msg)) - elif not empty: - msg = "switching from automatic to manual numbering" - raise OperationError(space.w_ValueError, - space.wrap(msg)) - if empty: - index = self.auto_numbering - self.auto_numbering += 1 - if index == -1: - kwarg = name[:i] - if self.is_unicode: + if empty: + index = self.auto_numbering + self.auto_numbering += 1 + if index == -1: + kwarg = name[:i] + if self.is_unicode: + try: + arg_key = kwarg.encode("latin-1") + except UnicodeEncodeError: + # Not going to be found in a dict of strings. + raise OperationError(space.w_KeyError, space.wrap(kwarg)) + else: + arg_key = kwarg try: - arg_key = kwarg.encode("latin-1") - except UnicodeEncodeError: - # Not going to be found in a dict of strings. 
- raise OperationError(space.w_KeyError, space.wrap(kwarg)) + w_arg = self.kwargs[arg_key] + except KeyError: + raise OperationError(space.w_KeyError, space.wrap(arg_key)) else: - arg_key = kwarg - try: - w_arg = self.kwargs[arg_key] - except KeyError: - raise OperationError(space.w_KeyError, space.wrap(arg_key)) - else: - try: - w_arg = self.args[index] - except IndexError: - w_msg = space.wrap("index out of range") - raise OperationError(space.w_IndexError, w_msg) - return self._resolve_lookups(w_arg, name, i, end) + try: + w_arg = self.args[index] + except IndexError: + w_msg = space.wrap("index out of range") + raise OperationError(space.w_IndexError, w_msg) + return self._resolve_lookups(w_arg, name, i, end) - def _resolve_lookups(self, w_obj, name, start, end): - # Resolve attribute and item lookups. - space = self.space - i = start - while i < end: - c = name[i] - if c == ".": + def _resolve_lookups(self, w_obj, name, start, end): + # Resolve attribute and item lookups. + space = self.space + i = start + while i < end: + c = name[i] + if c == ".": + i += 1 + start = i + while i < end: + c = name[i] + if c == "[" or c == ".": + break + i += 1 + if start == i: + w_msg = space.wrap("Empty attribute in format string") + raise OperationError(space.w_ValueError, w_msg) + w_attr = space.wrap(name[start:i]) + if w_obj is not None: + w_obj = space.getattr(w_obj, w_attr) + else: + self.parser_list_w.append(space.newtuple([ + space.w_True, w_attr])) + elif c == "[": + got_bracket = False + i += 1 + start = i + while i < end: + c = name[i] + if c == "]": + got_bracket = True + break + i += 1 + if not got_bracket: + raise OperationError(space.w_ValueError, + space.wrap("Missing ']'")) + index, reached = _parse_int(self.space, name, start, i) + if index != -1 and reached == i: + w_item = space.wrap(index) + else: + w_item = space.wrap(name[start:i]) + i += 1 # Skip "]" + if w_obj is not None: + w_obj = space.getitem(w_obj, w_item) + else: + self.parser_list_w.append(space.newtuple([ + space.w_False, w_item])) + else: + msg = "Only '[' and '.' may follow ']'" + raise OperationError(space.w_ValueError, space.wrap(msg)) + return w_obj + + def formatter_field_name_split(self): + space = self.space + name = self.template + i = 0 + end = len(name) + while i < end: + c = name[i] + if c == "[" or c == ".": + break i += 1 - start = i - while i < end: - c = name[i] - if c == "[" or c == ".": - break - i += 1 - if start == i: - w_msg = space.wrap("Empty attribute in format string") - raise OperationError(space.w_ValueError, w_msg) - w_attr = space.wrap(name[start:i]) - if w_obj is not None: - w_obj = space.getattr(w_obj, w_attr) - else: - self.parser_list_w.append(space.newtuple([ - space.w_True, w_attr])) - elif c == "[": - got_bracket = False - i += 1 - start = i - while i < end: - c = name[i] - if c == "]": - got_bracket = True - break - i += 1 - if not got_bracket: - raise OperationError(space.w_ValueError, - space.wrap("Missing ']'")) - index, reached = _parse_int(self.space, name, start, i) - if index != -1 and reached == i: - w_item = space.wrap(index) - else: - w_item = space.wrap(name[start:i]) - i += 1 # Skip "]" - if w_obj is not None: - w_obj = space.getitem(w_obj, w_item) - else: - self.parser_list_w.append(space.newtuple([ - space.w_False, w_item])) + if i == 0: + index = -1 else: - msg = "Only '[' and '.' 
may follow ']'" - raise OperationError(space.w_ValueError, space.wrap(msg)) - return w_obj + index, stop = _parse_int(self.space, name, 0, i) + if stop != i: + index = -1 + if index >= 0: + w_first = space.wrap(index) + else: + w_first = space.wrap(name[:i]) + # + self.parser_list_w = [] + self._resolve_lookups(None, name, i, end) + # + return space.newtuple([w_first, + space.iter(space.newlist(self.parser_list_w))]) - def formatter_field_name_split(self): - space = self.space - name = self.template - i = 0 - end = len(name) - while i < end: - c = name[i] - if c == "[" or c == ".": - break - i += 1 - if i == 0: - index = -1 - else: - index, stop = _parse_int(self.space, name, 0, i) - if stop != i: - index = -1 - if index >= 0: - w_first = space.wrap(index) - else: - w_first = space.wrap(name[:i]) - # - self.parser_list_w = [] - self._resolve_lookups(None, name, i, end) - # - return space.newtuple([w_first, - space.iter(space.newlist(self.parser_list_w))]) + def _convert(self, w_obj, conversion): + space = self.space + conv = conversion[0] + if conv == "r": + return space.repr(w_obj) + elif conv == "s": + if self.is_unicode: + return space.call_function(space.w_unicode, w_obj) + return space.str(w_obj) + else: + raise OperationError(self.space.w_ValueError, + self.space.wrap("invalid conversion")) - def _convert(self, w_obj, conversion): - space = self.space - conv = conversion[0] - if conv == "r": - return space.repr(w_obj) - elif conv == "s": - if self.is_unicode: - return space.call_function(space.w_unicode, w_obj) - return space.str(w_obj) - else: - raise OperationError(self.space.w_ValueError, - self.space.wrap("invalid conversion")) + def _render_field(self, start, end, recursive, level): + name, conversion, spec_start = self._parse_field(start, end) + spec = self.template[spec_start:end] + # + if self.parser_list_w is not None: + # used from formatter_parser() + if level == 1: # ignore recursive calls + space = self.space + startm1 = start - 1 + assert startm1 >= self.last_end + w_entry = space.newtuple([ + space.wrap(self.template[self.last_end:startm1]), + space.wrap(name), + space.wrap(spec), + space.wrap(conversion)]) + self.parser_list_w.append(w_entry) + self.last_end = end + 1 + return self.empty + # + w_obj = self._get_argument(name) + if conversion is not None: + w_obj = self._convert(w_obj, conversion) + if recursive: + spec = self._build_string(spec_start, end, level) + w_rendered = self.space.format(w_obj, self.space.wrap(spec)) + unwrapper = "unicode_w" if self.is_unicode else "str_w" + to_interp = getattr(self.space, unwrapper) + return to_interp(w_rendered) - def _render_field(self, start, end, recursive, level): - name, conversion, spec_start = self._parse_field(start, end) - spec = self.template[spec_start:end] - # - if self.parser_list_w is not None: - # used from formatter_parser() - if level == 1: # ignore recursive calls - space = self.space - startm1 = start - 1 - assert startm1 >= self.last_end - w_entry = space.newtuple([ - space.wrap(self.template[self.last_end:startm1]), - space.wrap(name), - space.wrap(spec), - space.wrap(conversion)]) - self.parser_list_w.append(w_entry) - self.last_end = end + 1 - return self.empty - # - w_obj = self._get_argument(name) - if conversion is not None: - w_obj = self._convert(w_obj, conversion) - if recursive: - spec = self._build_string(spec_start, end, level) - w_rendered = self.space.format(w_obj, self.space.wrap(spec)) - unwrapper = "unicode_w" if self.is_unicode else "str_w" - to_interp = getattr(self.space, unwrapper) - 
return to_interp(w_rendered) + def formatter_parser(self): + self.parser_list_w = [] + self.last_end = 0 + self._build_string(0, len(self.template), 2) + # + space = self.space + if self.last_end < len(self.template): + w_lastentry = space.newtuple([ + space.wrap(self.template[self.last_end:]), + space.w_None, + space.w_None, + space.w_None]) + self.parser_list_w.append(w_lastentry) + return space.iter(space.newlist(self.parser_list_w)) + return TemplateFormatter - def formatter_parser(self): - self.parser_list_w = [] - self.last_end = 0 - self._build_string(0, len(self.template), 2) - # - space = self.space - if self.last_end < len(self.template): - w_lastentry = space.newtuple([ - space.wrap(self.template[self.last_end:]), - space.w_None, - space.w_None, - space.w_None]) - self.parser_list_w.append(w_lastentry) - return space.iter(space.newlist(self.parser_list_w)) - +StrTemplateFormatter = make_template_formatting_class() +UnicodeTemplateFormatter = make_template_formatting_class() def str_template_formatter(space, template): - return TemplateFormatter(space, False, template) + return StrTemplateFormatter(space, False, template) def unicode_template_formatter(space, template): - return TemplateFormatter(space, True, template) + return UnicodeTemplateFormatter(space, True, template) def format_method(space, w_string, args, is_unicode): @@ -380,756 +388,759 @@ LONG_DIGITS = string.digits + string.ascii_lowercase -class Formatter(BaseFormatter): - """__format__ implementation for builtin types.""" +def make_formatting_class(): + class Formatter(BaseFormatter): + """__format__ implementation for builtin types.""" - _annspecialcase_ = "specialize:ctr_location" - _grouped_digits = None + _grouped_digits = None - def __init__(self, space, is_unicode, spec): - self.space = space - self.is_unicode = is_unicode - self.empty = u"" if is_unicode else "" - self.spec = spec + def __init__(self, space, is_unicode, spec): + self.space = space + self.is_unicode = is_unicode + self.empty = u"" if is_unicode else "" + self.spec = spec - def _is_alignment(self, c): - return (c == "<" or - c == ">" or - c == "=" or - c == "^") + def _is_alignment(self, c): + return (c == "<" or + c == ">" or + c == "=" or + c == "^") - def _is_sign(self, c): - return (c == " " or - c == "+" or - c == "-") + def _is_sign(self, c): + return (c == " " or + c == "+" or + c == "-") - def _parse_spec(self, default_type, default_align): - space = self.space - self._fill_char = self._lit("\0")[0] - self._align = default_align - self._alternate = False - self._sign = "\0" - self._thousands_sep = False - self._precision = -1 - the_type = default_type - spec = self.spec - if not spec: - return True - length = len(spec) - i = 0 - got_align = True - if length - i >= 2 and self._is_alignment(spec[i + 1]): - self._align = spec[i + 1] - self._fill_char = spec[i] - i += 2 - elif length - i >= 1 and self._is_alignment(spec[i]): - self._align = spec[i] - i += 1 - else: - got_align = False - if length - i >= 1 and self._is_sign(spec[i]): - self._sign = spec[i] - i += 1 - if length - i >= 1 and spec[i] == "#": - self._alternate = True - i += 1 - if self._fill_char == "\0" and length - i >= 1 and spec[i] == "0": - self._fill_char = self._lit("0")[0] - if not got_align: - self._align = "=" - i += 1 - start_i = i - self._width, i = _parse_int(self.space, spec, i, length) - if length != i and spec[i] == ",": - self._thousands_sep = True - i += 1 - if length != i and spec[i] == ".": - i += 1 - self._precision, i = _parse_int(self.space, spec, i, 
length) - if self._precision == -1: + def _parse_spec(self, default_type, default_align): + space = self.space + self._fill_char = self._lit("\0")[0] + self._align = default_align + self._alternate = False + self._sign = "\0" + self._thousands_sep = False + self._precision = -1 + the_type = default_type + spec = self.spec + if not spec: + return True + length = len(spec) + i = 0 + got_align = True + if length - i >= 2 and self._is_alignment(spec[i + 1]): + self._align = spec[i + 1] + self._fill_char = spec[i] + i += 2 + elif length - i >= 1 and self._is_alignment(spec[i]): + self._align = spec[i] + i += 1 + else: + got_align = False + if length - i >= 1 and self._is_sign(spec[i]): + self._sign = spec[i] + i += 1 + if length - i >= 1 and spec[i] == "#": + self._alternate = True + i += 1 + if self._fill_char == "\0" and length - i >= 1 and spec[i] == "0": + self._fill_char = self._lit("0")[0] + if not got_align: + self._align = "=" + i += 1 + start_i = i + self._width, i = _parse_int(self.space, spec, i, length) + if length != i and spec[i] == ",": + self._thousands_sep = True + i += 1 + if length != i and spec[i] == ".": + i += 1 + self._precision, i = _parse_int(self.space, spec, i, length) + if self._precision == -1: + raise OperationError(space.w_ValueError, + space.wrap("no precision given")) + if length - i > 1: raise OperationError(space.w_ValueError, - space.wrap("no precision given")) - if length - i > 1: - raise OperationError(space.w_ValueError, - space.wrap("invalid format spec")) - if length - i == 1: - presentation_type = spec[i] - if self.is_unicode: - try: - the_type = spec[i].encode("ascii")[0] - except UnicodeEncodeError: + space.wrap("invalid format spec")) + if length - i == 1: + presentation_type = spec[i] + if self.is_unicode: + try: + the_type = spec[i].encode("ascii")[0] + except UnicodeEncodeError: + raise OperationError(space.w_ValueError, + space.wrap("invalid presentation type")) + else: + the_type = presentation_type + i += 1 + self._type = the_type + if self._thousands_sep: + tp = self._type + if (tp == "d" or + tp == "e" or + tp == "f" or + tp == "g" or + tp == "E" or + tp == "G" or + tp == "%" or + tp == "F" or + tp == "\0"): + # ok + pass + else: raise OperationError(space.w_ValueError, - space.wrap("invalid presentation type")) + space.wrap("invalid type with ','")) + return False + + def _calc_padding(self, string, length): + """compute left and right padding, return total width of string""" + if self._width != -1 and length < self._width: + total = self._width else: - the_type = presentation_type - i += 1 - self._type = the_type - if self._thousands_sep: - tp = self._type - if (tp == "d" or - tp == "e" or - tp == "f" or - tp == "g" or - tp == "E" or - tp == "G" or - tp == "%" or - tp == "F" or - tp == "\0"): - # ok - pass + total = length + align = self._align + if align == ">": + left = total - length + elif align == "^": + left = (total - length) / 2 + elif align == "<" or align == "=": + left = 0 else: - raise OperationError(space.w_ValueError, - space.wrap("invalid type with ','")) - return False + raise AssertionError("shouldn't be here") + right = total - length - left + self._left_pad = left + self._right_pad = right + return total - def _calc_padding(self, string, length): - """compute left and right padding, return total width of string""" - if self._width != -1 and length < self._width: - total = self._width - else: - total = length - align = self._align - if align == ">": - left = total - length - elif align == "^": - left = (total - length) / 2 
- elif align == "<" or align == "=": - left = 0 - else: - raise AssertionError("shouldn't be here") - right = total - length - left - self._left_pad = left - self._right_pad = right - return total - - def _lit(self, s): - if self.is_unicode: - return s.decode("ascii") - else: - return s - - def _pad(self, string): - builder = self._builder() - builder.append_multiple_char(self._fill_char, self._left_pad) - builder.append(string) - builder.append_multiple_char(self._fill_char, self._right_pad) - return builder.build() - - def _builder(self): - if self.is_unicode: - return rstring.UnicodeBuilder() - else: - return rstring.StringBuilder() - - def _unknown_presentation(self, tp): - msg = "unknown presentation for %s: '%s'" - w_msg = self.space.wrap(msg % (tp, self._type)) - raise OperationError(self.space.w_ValueError, w_msg) - - def format_string(self, string): - space = self.space - if self._parse_spec("s", "<"): - return space.wrap(string) - if self._type != "s": - self._unknown_presentation("string") - if self._sign != "\0": - msg = "Sign not allowed in string format specifier" - raise OperationError(space.w_ValueError, space.wrap(msg)) - if self._alternate: - msg = "Alternate form not allowed in string format specifier" - raise OperationError(space.w_ValueError, space.wrap(msg)) - if self._align == "=": - msg = "'=' alignment not allowed in string format specifier" - raise OperationError(space.w_ValueError, space.wrap(msg)) - length = len(string) - precision = self._precision - if precision != -1 and length >= precision: - assert precision >= 0 - length = precision - string = string[:precision] - if self._fill_char == "\0": - self._fill_char = self._lit(" ")[0] - self._calc_padding(string, length) - return space.wrap(self._pad(string)) - - def _get_locale(self, tp): - space = self.space - if tp == "n": - dec, thousands, grouping = rlocale.numeric_formatting() - elif self._thousands_sep: - dec = "." - thousands = "," - grouping = "\3\0" - else: - dec = "." - thousands = "" - grouping = "\256" - if self.is_unicode: - self._loc_dec = dec.decode("ascii") - self._loc_thousands = thousands.decode("ascii") - else: - self._loc_dec = dec - self._loc_thousands = thousands - self._loc_grouping = grouping - - def _calc_num_width(self, n_prefix, sign_char, to_number, n_number, - n_remainder, has_dec, digits): - """Calculate widths of all parts of formatted number. 
- - Output will look like: - - - - - sign is computed from self._sign, and the sign of the number - prefix is given - digits is known - """ - spec = NumberSpec() - spec.n_digits = n_number - n_remainder - has_dec - spec.n_prefix = n_prefix - spec.n_lpadding = 0 - spec.n_decimal = int(has_dec) - spec.n_remainder = n_remainder - spec.n_spadding = 0 - spec.n_rpadding = 0 - spec.n_min_width = 0 - spec.n_total = 0 - spec.sign = "\0" - spec.n_sign = 0 - sign = self._sign - if sign == "+": - spec.n_sign = 1 - spec.sign = "-" if sign_char == "-" else "+" - elif sign == " ": - spec.n_sign = 1 - spec.sign = "-" if sign_char == "-" else " " - elif sign_char == "-": - spec.n_sign = 1 - spec.sign = "-" - extra_length = (spec.n_sign + spec.n_prefix + spec.n_decimal + - spec.n_remainder) # Not padding or digits - if self._fill_char == "0" and self._align == "=": - spec.n_min_width = self._width - extra_length - if self._loc_thousands: - self._group_digits(spec, digits[to_number:]) - n_grouped_digits = len(self._grouped_digits) - else: - n_grouped_digits = spec.n_digits - n_padding = self._width - (extra_length + n_grouped_digits) - if n_padding > 0: - align = self._align - if align == "<": - spec.n_rpadding = n_padding - elif align == ">": - spec.n_lpadding = n_padding - elif align == "^": - spec.n_lpadding = n_padding // 2 - spec.n_rpadding = n_padding - spec.n_lpadding - elif align == "=": - spec.n_spadding = n_padding - else: - raise AssertionError("shouldn't reach") - spec.n_total = spec.n_lpadding + spec.n_sign + spec.n_prefix + \ - spec.n_spadding + n_grouped_digits + \ - spec.n_decimal + spec.n_remainder + spec.n_rpadding - return spec - - def _fill_digits(self, buf, digits, d_state, n_chars, n_zeros, - thousands_sep): - if thousands_sep: - for c in thousands_sep: - buf.append(c) - for i in range(d_state - 1, d_state - n_chars - 1, -1): - buf.append(digits[i]) - for i in range(n_zeros): - buf.append("0") - - def _group_digits(self, spec, digits): - buf = [] - grouping = self._loc_grouping - min_width = spec.n_min_width - grouping_state = 0 - count = 0 - left = spec.n_digits - n_ts = len(self._loc_thousands) - need_separator = False - done = False - groupings = len(grouping) - previous = 0 - while True: - group = ord(grouping[grouping_state]) - if group > 0: - if group == 256: - break - grouping_state += 1 - previous = group - else: - group = previous - final_grouping = min(group, max(left, max(min_width, 1))) - n_zeros = max(0, final_grouping - left) - n_chars = max(0, min(left, final_grouping)) - ts = self._loc_thousands if need_separator else None - self._fill_digits(buf, digits, left, n_chars, n_zeros, ts) - need_separator = True - left -= n_chars - min_width -= final_grouping - if left <= 0 and min_width <= 0: - done = True - break - min_width -= n_ts - if not done: - group = max(max(left, min_width), 1) - n_zeros = max(0, group - left) - n_chars = max(0, min(left, group)) - ts = self._loc_thousands if need_separator else None - self._fill_digits(buf, digits, left, n_chars, n_zeros, ts) - buf.reverse() - self._grouped_digits = self.empty.join(buf) - - def _upcase_string(self, s): - buf = [] - for c in s: - index = ord(c) - if ord("a") <= index <= ord("z"): - c = chr(index - 32) - buf.append(c) - return self.empty.join(buf) - - - def _fill_number(self, spec, num, to_digits, to_prefix, fill_char, - to_remainder, upper, grouped_digits=None): - out = self._builder() - if spec.n_lpadding: - out.append_multiple_char(fill_char[0], spec.n_lpadding) - if spec.n_sign: - if self.is_unicode: - sign = 
spec.sign.decode("ascii") - else: - sign = spec.sign - out.append(sign) - if spec.n_prefix: - pref = num[to_prefix:to_prefix + spec.n_prefix] - if upper: - pref = self._upcase_string(pref) - out.append(pref) - if spec.n_spadding: - out.append_multiple_char(fill_char[0], spec.n_spadding) - if spec.n_digits != 0: - if self._loc_thousands: - if grouped_digits is not None: - digits = grouped_digits - else: - digits = self._grouped_digits - assert digits is not None - else: - stop = to_digits + spec.n_digits - assert stop >= 0 - digits = num[to_digits:stop] - if upper: - digits = self._upcase_string(digits) - out.append(digits) - if spec.n_decimal: - out.append(self._lit(".")[0]) - if spec.n_remainder: - out.append(num[to_remainder:]) - if spec.n_rpadding: - out.append_multiple_char(fill_char[0], spec.n_rpadding) - #if complex, need to call twice - just retun the buffer - return out.build() - - def _format_int_or_long(self, w_num, kind): - space = self.space - if self._precision != -1: - msg = "precision not allowed in integer type" - raise OperationError(space.w_ValueError, space.wrap(msg)) - sign_char = "\0" - tp = self._type - if tp == "c": - if self._sign != "\0": - msg = "sign not allowed with 'c' presentation type" - raise OperationError(space.w_ValueError, space.wrap(msg)) - value = space.int_w(w_num) - if self.is_unicode: - result = runicode.UNICHR(value) - else: - result = chr(value) - n_digits = 1 - n_remainder = 1 - to_remainder = 0 - n_prefix = 0 - to_prefix = 0 - to_numeric = 0 - else: - if tp == "b": - base = 2 - skip_leading = 2 - elif tp == "o": - base = 8 - skip_leading = 2 - elif tp == "x" or tp == "X": - base = 16 - skip_leading = 2 - elif tp == "n" or tp == "d": - base = 10 - skip_leading = 0 - else: - raise AssertionError("shouldn't reach") - if kind == INT_KIND: - result = self._int_to_base(base, space.int_w(w_num)) - else: - result = self._long_to_base(base, space.bigint_w(w_num)) - n_prefix = skip_leading if self._alternate else 0 - to_prefix = 0 - if result[0] == "-": - sign_char = "-" - skip_leading += 1 - to_prefix += 1 - n_digits = len(result) - skip_leading - n_remainder = 0 - to_remainder = 0 - to_numeric = skip_leading - self._get_locale(tp) - spec = self._calc_num_width(n_prefix, sign_char, to_numeric, n_digits, - n_remainder, False, result) - fill = self._lit(" ") if self._fill_char == "\0" else self._fill_char - upper = self._type == "X" - return self.space.wrap(self._fill_number(spec, result, to_numeric, - to_prefix, fill, to_remainder, upper)) - - def _long_to_base(self, base, value): - prefix = "" - if base == 2: - prefix = "0b" - elif base == 8: - prefix = "0o" - elif base == 16: - prefix = "0x" - as_str = value.format(LONG_DIGITS[:base], prefix) - if self.is_unicode: - return as_str.decode("ascii") - return as_str - - def _int_to_base(self, base, value): - if base == 10: - s = str(value) + def _lit(self, s): if self.is_unicode: return s.decode("ascii") - return s - # This part is slow. - negative = value < 0 - value = abs(value) - buf = ["\0"] * (8 * 8 + 6) # Too much on 32 bit, but who cares? 
- i = len(buf) - 1 - while True: - div = value // base - mod = value - div * base - digit = abs(mod) - digit += ord("0") if digit < 10 else ord("a") - 10 - buf[i] = chr(digit) - value = div + else: + return s + + def _pad(self, string): + builder = self._builder() + builder.append_multiple_char(self._fill_char, self._left_pad) + builder.append(string) + builder.append_multiple_char(self._fill_char, self._right_pad) + return builder.build() + + def _builder(self): + if self.is_unicode: + return rstring.UnicodeBuilder() + else: + return rstring.StringBuilder() + + def _unknown_presentation(self, tp): + msg = "unknown presentation for %s: '%s'" + w_msg = self.space.wrap(msg % (tp, self._type)) + raise OperationError(self.space.w_ValueError, w_msg) + + def format_string(self, string): + space = self.space + if self._parse_spec("s", "<"): + return space.wrap(string) + if self._type != "s": + self._unknown_presentation("string") + if self._sign != "\0": + msg = "Sign not allowed in string format specifier" + raise OperationError(space.w_ValueError, space.wrap(msg)) + if self._alternate: + msg = "Alternate form not allowed in string format specifier" + raise OperationError(space.w_ValueError, space.wrap(msg)) + if self._align == "=": + msg = "'=' alignment not allowed in string format specifier" + raise OperationError(space.w_ValueError, space.wrap(msg)) + length = len(string) + precision = self._precision + if precision != -1 and length >= precision: + assert precision >= 0 + length = precision + string = string[:precision] + if self._fill_char == "\0": + self._fill_char = self._lit(" ")[0] + self._calc_padding(string, length) + return space.wrap(self._pad(string)) + + def _get_locale(self, tp): + space = self.space + if tp == "n": + dec, thousands, grouping = rlocale.numeric_formatting() + elif self._thousands_sep: + dec = "." + thousands = "," + grouping = "\3\0" + else: + dec = "." + thousands = "" + grouping = "\256" + if self.is_unicode: + self._loc_dec = dec.decode("ascii") + self._loc_thousands = thousands.decode("ascii") + else: + self._loc_dec = dec + self._loc_thousands = thousands + self._loc_grouping = grouping + + def _calc_num_width(self, n_prefix, sign_char, to_number, n_number, + n_remainder, has_dec, digits): + """Calculate widths of all parts of formatted number. + + Output will look like: + + + + + sign is computed from self._sign, and the sign of the number + prefix is given + digits is known + """ + spec = NumberSpec() + spec.n_digits = n_number - n_remainder - has_dec From noreply at buildbot.pypy.org Sat Feb 4 05:39:07 2012 From: noreply at buildbot.pypy.org (wlav) Date: Sat, 4 Feb 2012 05:39:07 +0100 (CET) Subject: [pypy-commit] pypy reflex-support: merge default into branch Message-ID: <20120204043908.00ABE710770@wyvern.cs.uni-duesseldorf.de> Author: Wim Lavrijsen Branch: reflex-support Changeset: r52076:9d155526f05d Date: 2012-01-04 15:25 -0800 http://bitbucket.org/pypy/pypy/changeset/9d155526f05d/ Log: merge default into branch diff too long, truncating to 10000 out of 70583 lines diff --git a/.hgtags b/.hgtags --- a/.hgtags +++ b/.hgtags @@ -1,3 +1,4 @@ b590cf6de4190623aad9aa698694c22e614d67b9 release-1.5 b48df0bf4e75b81d98f19ce89d4a7dc3e1dab5e5 benchmarked d8ac7d23d3ec5f9a0fa1264972f74a010dbfd07f release-1.6 +ff4af8f318821f7f5ca998613a60fca09aa137da release-1.7 diff --git a/LICENSE b/LICENSE --- a/LICENSE +++ b/LICENSE @@ -27,7 +27,7 @@ DEALINGS IN THE SOFTWARE. 
-PyPy Copyright holders 2003-2011 +PyPy Copyright holders 2003-2012 ----------------------------------- Except when otherwise stated (look for LICENSE files or information at diff --git a/lib-python/2.7/test/test_os.py b/lib-python/2.7/test/test_os.py --- a/lib-python/2.7/test/test_os.py +++ b/lib-python/2.7/test/test_os.py @@ -74,7 +74,8 @@ self.assertFalse(os.path.exists(name), "file already exists for temporary file") # make sure we can create the file - open(name, "w") + f = open(name, "w") + f.close() self.files.append(name) def test_tempnam(self): diff --git a/lib-python/conftest.py b/lib-python/conftest.py --- a/lib-python/conftest.py +++ b/lib-python/conftest.py @@ -201,7 +201,7 @@ RegrTest('test_difflib.py'), RegrTest('test_dircache.py', core=True), RegrTest('test_dis.py'), - RegrTest('test_distutils.py'), + RegrTest('test_distutils.py', skip=True), RegrTest('test_dl.py', skip=True), RegrTest('test_doctest.py', usemodules="thread"), RegrTest('test_doctest2.py'), @@ -317,7 +317,7 @@ RegrTest('test_multibytecodec.py', usemodules='_multibytecodec'), RegrTest('test_multibytecodec_support.py', skip="not a test"), RegrTest('test_multifile.py'), - RegrTest('test_multiprocessing.py'), + RegrTest('test_multiprocessing.py', skip="FIXME leaves subprocesses"), RegrTest('test_mutants.py', core="possibly"), RegrTest('test_mutex.py'), RegrTest('test_netrc.py'), diff --git a/lib-python/modified-2.7/ctypes/__init__.py b/lib-python/modified-2.7/ctypes/__init__.py --- a/lib-python/modified-2.7/ctypes/__init__.py +++ b/lib-python/modified-2.7/ctypes/__init__.py @@ -351,7 +351,7 @@ self._FuncPtr = _FuncPtr if handle is None: - self._handle = _ffi.CDLL(name) + self._handle = _ffi.CDLL(name, mode) else: self._handle = handle diff --git a/lib-python/modified-2.7/ctypes/test/test_callbacks.py b/lib-python/modified-2.7/ctypes/test/test_callbacks.py --- a/lib-python/modified-2.7/ctypes/test/test_callbacks.py +++ b/lib-python/modified-2.7/ctypes/test/test_callbacks.py @@ -1,5 +1,6 @@ import unittest from ctypes import * +from ctypes.test import xfail import _ctypes_test class Callbacks(unittest.TestCase): @@ -98,6 +99,7 @@ ## self.check_type(c_char_p, "abc") ## self.check_type(c_char_p, "def") + @xfail def test_pyobject(self): o = () from sys import getrefcount as grc diff --git a/lib-python/modified-2.7/ctypes/test/test_libc.py b/lib-python/modified-2.7/ctypes/test/test_libc.py --- a/lib-python/modified-2.7/ctypes/test/test_libc.py +++ b/lib-python/modified-2.7/ctypes/test/test_libc.py @@ -25,7 +25,10 @@ lib.my_qsort(chars, len(chars)-1, sizeof(c_char), comparefunc(sort)) self.assertEqual(chars.raw, " ,,aaaadmmmnpppsss\x00") - def test_no_more_xfail(self): + def SKIPPED_test_no_more_xfail(self): + # We decided to not explicitly support the whole ctypes-2.7 + # and instead go for a case-by-case, demand-driven approach. + # So this test is skipped instead of failing. 
import socket import ctypes.test self.assertTrue(not hasattr(ctypes.test, 'xfail'), diff --git a/lib-python/modified-2.7/ctypes/test/test_simplesubclasses.py b/lib-python/modified-2.7/ctypes/test/test_simplesubclasses.py --- a/lib-python/modified-2.7/ctypes/test/test_simplesubclasses.py +++ b/lib-python/modified-2.7/ctypes/test/test_simplesubclasses.py @@ -1,6 +1,5 @@ import unittest from ctypes import * -from ctypes.test import xfail class MyInt(c_int): def __cmp__(self, other): @@ -27,7 +26,6 @@ self.assertEqual(None, cb()) - @xfail def test_int_callback(self): args = [] def func(arg): diff --git a/lib-python/modified-2.7/heapq.py b/lib-python/modified-2.7/heapq.py new file mode 100644 --- /dev/null +++ b/lib-python/modified-2.7/heapq.py @@ -0,0 +1,442 @@ +# -*- coding: latin-1 -*- + +"""Heap queue algorithm (a.k.a. priority queue). + +Heaps are arrays for which a[k] <= a[2*k+1] and a[k] <= a[2*k+2] for +all k, counting elements from 0. For the sake of comparison, +non-existing elements are considered to be infinite. The interesting +property of a heap is that a[0] is always its smallest element. + +Usage: + +heap = [] # creates an empty heap +heappush(heap, item) # pushes a new item on the heap +item = heappop(heap) # pops the smallest item from the heap +item = heap[0] # smallest item on the heap without popping it +heapify(x) # transforms list into a heap, in-place, in linear time +item = heapreplace(heap, item) # pops and returns smallest item, and adds + # new item; the heap size is unchanged + +Our API differs from textbook heap algorithms as follows: + +- We use 0-based indexing. This makes the relationship between the + index for a node and the indexes for its children slightly less + obvious, but is more suitable since Python uses 0-based indexing. + +- Our heappop() method returns the smallest item, not the largest. + +These two make it possible to view the heap as a regular Python list +without surprises: heap[0] is the smallest item, and heap.sort() +maintains the heap invariant! +""" + +# Original code by Kevin O'Connor, augmented by Tim Peters and Raymond Hettinger + +__about__ = """Heap queues + +[explanation by Fran�ois Pinard] + +Heaps are arrays for which a[k] <= a[2*k+1] and a[k] <= a[2*k+2] for +all k, counting elements from 0. For the sake of comparison, +non-existing elements are considered to be infinite. The interesting +property of a heap is that a[0] is always its smallest element. + +The strange invariant above is meant to be an efficient memory +representation for a tournament. The numbers below are `k', not a[k]: + + 0 + + 1 2 + + 3 4 5 6 + + 7 8 9 10 11 12 13 14 + + 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 + + +In the tree above, each cell `k' is topping `2*k+1' and `2*k+2'. In +an usual binary tournament we see in sports, each cell is the winner +over the two cells it tops, and we can trace the winner down the tree +to see all opponents s/he had. However, in many computer applications +of such tournaments, we do not need to trace the history of a winner. +To be more memory efficient, when a winner is promoted, we try to +replace it by something else at a lower level, and the rule becomes +that a cell and the two cells it tops contain three different items, +but the top cell "wins" over the two topped cells. + +If this heap invariant is protected at all time, index 0 is clearly +the overall winner. 
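As a quick illustration of the invariant just described (a sketch only, not part of the patch; the sample list is made up and it goes through the same public heapq interface this file provides):

    import heapq

    data = [9, 4, 7, 1, 3, 8, 2]       # made-up sample input
    heapq.heapify(data)                # in-place, O(len(data))

    # a[k] <= a[2*k+1] and a[k] <= a[2*k+2] for every k that has such children
    n = len(data)
    for k in range(n):
        for child in (2 * k + 1, 2 * k + 2):
            if child < n:
                assert data[k] <= data[child]

    assert data[0] == min(data)        # index 0 is the overall winner
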
The simplest algorithmic way to remove it and +find the "next" winner is to move some loser (let's say cell 30 in the +diagram above) into the 0 position, and then percolate this new 0 down +the tree, exchanging values, until the invariant is re-established. +This is clearly logarithmic on the total number of items in the tree. +By iterating over all items, you get an O(n ln n) sort. + +A nice feature of this sort is that you can efficiently insert new +items while the sort is going on, provided that the inserted items are +not "better" than the last 0'th element you extracted. This is +especially useful in simulation contexts, where the tree holds all +incoming events, and the "win" condition means the smallest scheduled +time. When an event schedule other events for execution, they are +scheduled into the future, so they can easily go into the heap. So, a +heap is a good structure for implementing schedulers (this is what I +used for my MIDI sequencer :-). + +Various structures for implementing schedulers have been extensively +studied, and heaps are good for this, as they are reasonably speedy, +the speed is almost constant, and the worst case is not much different +than the average case. However, there are other representations which +are more efficient overall, yet the worst cases might be terrible. + +Heaps are also very useful in big disk sorts. You most probably all +know that a big sort implies producing "runs" (which are pre-sorted +sequences, which size is usually related to the amount of CPU memory), +followed by a merging passes for these runs, which merging is often +very cleverly organised[1]. It is very important that the initial +sort produces the longest runs possible. Tournaments are a good way +to that. If, using all the memory available to hold a tournament, you +replace and percolate items that happen to fit the current run, you'll +produce runs which are twice the size of the memory for random input, +and much better for input fuzzily ordered. + +Moreover, if you output the 0'th item on disk and get an input which +may not fit in the current tournament (because the value "wins" over +the last output value), it cannot fit in the heap, so the size of the +heap decreases. The freed memory could be cleverly reused immediately +for progressively building a second heap, which grows at exactly the +same rate the first heap is melting. When the first heap completely +vanishes, you switch heaps and start a new run. Clever and quite +effective! + +In a word, heaps are useful memory structures to know. I use them in +a few applications, and I think it is good to keep a `heap' module +around. :-) + +-------------------- +[1] The disk balancing algorithms which are current, nowadays, are +more annoying than clever, and this is a consequence of the seeking +capabilities of the disks. On devices which cannot seek, like big +tape drives, the story was quite different, and one had to be very +clever to ensure (far in advance) that each tape movement will be the +most effective possible (that is, will best participate at +"progressing" the merge). Some tapes were even able to read +backwards, and this was also used to avoid the rewinding time. +Believe me, real good tape sorts were quite spectacular to watch! +From all times, sorting has always been a Great Art! 
:-) +""" + +__all__ = ['heappush', 'heappop', 'heapify', 'heapreplace', 'merge', + 'nlargest', 'nsmallest', 'heappushpop'] + +from itertools import islice, repeat, count, imap, izip, tee, chain +from operator import itemgetter +import bisect + +def heappush(heap, item): + """Push item onto heap, maintaining the heap invariant.""" + heap.append(item) + _siftdown(heap, 0, len(heap)-1) + +def heappop(heap): + """Pop the smallest item off the heap, maintaining the heap invariant.""" + lastelt = heap.pop() # raises appropriate IndexError if heap is empty + if heap: + returnitem = heap[0] + heap[0] = lastelt + _siftup(heap, 0) + else: + returnitem = lastelt + return returnitem + +def heapreplace(heap, item): + """Pop and return the current smallest value, and add the new item. + + This is more efficient than heappop() followed by heappush(), and can be + more appropriate when using a fixed-size heap. Note that the value + returned may be larger than item! That constrains reasonable uses of + this routine unless written as part of a conditional replacement: + + if item > heap[0]: + item = heapreplace(heap, item) + """ + returnitem = heap[0] # raises appropriate IndexError if heap is empty + heap[0] = item + _siftup(heap, 0) + return returnitem + +def heappushpop(heap, item): + """Fast version of a heappush followed by a heappop.""" + if heap and heap[0] < item: + item, heap[0] = heap[0], item + _siftup(heap, 0) + return item + +def heapify(x): + """Transform list into a heap, in-place, in O(len(heap)) time.""" + n = len(x) + # Transform bottom-up. The largest index there's any point to looking at + # is the largest with a child index in-range, so must have 2*i + 1 < n, + # or i < (n-1)/2. If n is even = 2*j, this is (2*j-1)/2 = j-1/2 so + # j-1 is the largest, which is n//2 - 1. If n is odd = 2*j+1, this is + # (2*j+1-1)/2 = j so j-1 is the largest, and that's again n//2-1. + for i in reversed(xrange(n//2)): + _siftup(x, i) + +def nlargest(n, iterable): + """Find the n largest elements in a dataset. + + Equivalent to: sorted(iterable, reverse=True)[:n] + """ + if n < 0: # for consistency with the c impl + return [] + it = iter(iterable) + result = list(islice(it, n)) + if not result: + return result + heapify(result) + _heappushpop = heappushpop + for elem in it: + _heappushpop(result, elem) + result.sort(reverse=True) + return result + +def nsmallest(n, iterable): + """Find the n smallest elements in a dataset. + + Equivalent to: sorted(iterable)[:n] + """ + if n < 0: # for consistency with the c impl + return [] + if hasattr(iterable, '__len__') and n * 10 <= len(iterable): + # For smaller values of n, the bisect method is faster than a minheap. + # It is also memory efficient, consuming only n elements of space. + it = iter(iterable) + result = sorted(islice(it, 0, n)) + if not result: + return result + insort = bisect.insort + pop = result.pop + los = result[-1] # los --> Largest of the nsmallest + for elem in it: + if los <= elem: + continue + insort(result, elem) + pop() + los = result[-1] + return result + # An alternative approach manifests the whole iterable in memory but + # saves comparisons by heapifying all at once. Also, saves time + # over bisect.insort() which has O(n) data movement time for every + # insertion. Finding the n smallest of an m length iterable requires + # O(m) + O(n log m) comparisons. + h = list(iterable) + heapify(h) + return map(heappop, repeat(h, min(n, len(h)))) + +# 'heap' is a heap at all indices >= startpos, except possibly for pos. 
pos +# is the index of a leaf with a possibly out-of-order value. Restore the +# heap invariant. +def _siftdown(heap, startpos, pos): + newitem = heap[pos] + # Follow the path to the root, moving parents down until finding a place + # newitem fits. + while pos > startpos: + parentpos = (pos - 1) >> 1 + parent = heap[parentpos] + if newitem < parent: + heap[pos] = parent + pos = parentpos + continue + break + heap[pos] = newitem + +# The child indices of heap index pos are already heaps, and we want to make +# a heap at index pos too. We do this by bubbling the smaller child of +# pos up (and so on with that child's children, etc) until hitting a leaf, +# then using _siftdown to move the oddball originally at index pos into place. +# +# We *could* break out of the loop as soon as we find a pos where newitem <= +# both its children, but turns out that's not a good idea, and despite that +# many books write the algorithm that way. During a heap pop, the last array +# element is sifted in, and that tends to be large, so that comparing it +# against values starting from the root usually doesn't pay (= usually doesn't +# get us out of the loop early). See Knuth, Volume 3, where this is +# explained and quantified in an exercise. +# +# Cutting the # of comparisons is important, since these routines have no +# way to extract "the priority" from an array element, so that intelligence +# is likely to be hiding in custom __cmp__ methods, or in array elements +# storing (priority, record) tuples. Comparisons are thus potentially +# expensive. +# +# On random arrays of length 1000, making this change cut the number of +# comparisons made by heapify() a little, and those made by exhaustive +# heappop() a lot, in accord with theory. Here are typical results from 3 +# runs (3 just to demonstrate how small the variance is): +# +# Compares needed by heapify Compares needed by 1000 heappops +# -------------------------- -------------------------------- +# 1837 cut to 1663 14996 cut to 8680 +# 1855 cut to 1659 14966 cut to 8678 +# 1847 cut to 1660 15024 cut to 8703 +# +# Building the heap by using heappush() 1000 times instead required +# 2198, 2148, and 2219 compares: heapify() is more efficient, when +# you can use it. +# +# The total compares needed by list.sort() on the same lists were 8627, +# 8627, and 8632 (this should be compared to the sum of heapify() and +# heappop() compares): list.sort() is (unsurprisingly!) more efficient +# for sorting. + +def _siftup(heap, pos): + endpos = len(heap) + startpos = pos + newitem = heap[pos] + # Bubble up the smaller child until hitting a leaf. + childpos = 2*pos + 1 # leftmost child position + while childpos < endpos: + # Set childpos to index of smaller child. + rightpos = childpos + 1 + if rightpos < endpos and not heap[childpos] < heap[rightpos]: + childpos = rightpos + # Move the smaller child up. + heap[pos] = heap[childpos] + pos = childpos + childpos = 2*pos + 1 + # The leaf at pos is empty now. Put newitem there, and bubble it up + # to its final resting place (by sifting its parents down). + heap[pos] = newitem + _siftdown(heap, startpos, pos) + +# If available, use C implementation +try: + from _heapq import * +except ImportError: + pass + +def merge(*iterables): + '''Merge multiple sorted inputs into a single sorted output. + + Similar to sorted(itertools.chain(*iterables)) but returns a generator, + does not pull the data into memory all at once, and assumes that each of + the input streams is already sorted (smallest to largest). 
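The generator behaviour claimed above can be sketched as follows (illustrative only, with made-up inputs; merge is exported by heapq, as listed in __all__ earlier in this file):

    from heapq import merge
    from itertools import islice

    evens = xrange(0, 10**6, 2)            # already sorted
    odds = xrange(1, 10**6, 2)             # already sorted
    merged = merge(evens, odds)            # a generator; nothing is pulled into memory yet
    print list(islice(merged, 10))         # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
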
+ + >>> list(merge([1,3,5,7], [0,2,4,8], [5,10,15,20], [], [25])) + [0, 1, 2, 3, 4, 5, 5, 7, 8, 10, 15, 20, 25] + + ''' + _heappop, _heapreplace, _StopIteration = heappop, heapreplace, StopIteration + + h = [] + h_append = h.append + for itnum, it in enumerate(map(iter, iterables)): + try: + next = it.next + h_append([next(), itnum, next]) + except _StopIteration: + pass + heapify(h) + + while 1: + try: + while 1: + v, itnum, next = s = h[0] # raises IndexError when h is empty + yield v + s[0] = next() # raises StopIteration when exhausted + _heapreplace(h, s) # restore heap condition + except _StopIteration: + _heappop(h) # remove empty iterator + except IndexError: + return + +# Extend the implementations of nsmallest and nlargest to use a key= argument +_nsmallest = nsmallest +def nsmallest(n, iterable, key=None): + """Find the n smallest elements in a dataset. + + Equivalent to: sorted(iterable, key=key)[:n] + """ + # Short-cut for n==1 is to use min() when len(iterable)>0 + if n == 1: + it = iter(iterable) + head = list(islice(it, 1)) + if not head: + return [] + if key is None: + return [min(chain(head, it))] + return [min(chain(head, it), key=key)] + + # When n>=size, it's faster to use sort() + try: + size = len(iterable) + except (TypeError, AttributeError): + pass + else: + if n >= size: + return sorted(iterable, key=key)[:n] + + # When key is none, use simpler decoration + if key is None: + it = izip(iterable, count()) # decorate + result = _nsmallest(n, it) + return map(itemgetter(0), result) # undecorate + + # General case, slowest method + in1, in2 = tee(iterable) + it = izip(imap(key, in1), count(), in2) # decorate + result = _nsmallest(n, it) + return map(itemgetter(2), result) # undecorate + +_nlargest = nlargest +def nlargest(n, iterable, key=None): + """Find the n largest elements in a dataset. + + Equivalent to: sorted(iterable, key=key, reverse=True)[:n] + """ + + # Short-cut for n==1 is to use max() when len(iterable)>0 + if n == 1: + it = iter(iterable) + head = list(islice(it, 1)) + if not head: + return [] + if key is None: + return [max(chain(head, it))] + return [max(chain(head, it), key=key)] + + # When n>=size, it's faster to use sort() + try: + size = len(iterable) + except (TypeError, AttributeError): + pass + else: + if n >= size: + return sorted(iterable, key=key, reverse=True)[:n] + + # When key is none, use simpler decoration + if key is None: + it = izip(iterable, count(0,-1)) # decorate + result = _nlargest(n, it) + return map(itemgetter(0), result) # undecorate + + # General case, slowest method + in1, in2 = tee(iterable) + it = izip(imap(key, in1), count(0,-1), in2) # decorate + result = _nlargest(n, it) + return map(itemgetter(2), result) # undecorate + +if __name__ == "__main__": + # Simple sanity test + heap = [] + data = [1, 3, 5, 7, 9, 2, 4, 6, 8, 0] + for item in data: + heappush(heap, item) + sort = [] + while heap: + sort.append(heappop(heap)) + print sort + + import doctest + doctest.testmod() diff --git a/lib-python/modified-2.7/httplib.py b/lib-python/modified-2.7/httplib.py new file mode 100644 --- /dev/null +++ b/lib-python/modified-2.7/httplib.py @@ -0,0 +1,1377 @@ +"""HTTP/1.1 client library + + + + +HTTPConnection goes through a number of "states", which define when a client +may legally make another request or fetch the response for a particular +request. 
This diagram details these state transitions: + + (null) + | + | HTTPConnection() + v + Idle + | + | putrequest() + v + Request-started + | + | ( putheader() )* endheaders() + v + Request-sent + | + | response = getresponse() + v + Unread-response [Response-headers-read] + |\____________________ + | | + | response.read() | putrequest() + v v + Idle Req-started-unread-response + ______/| + / | + response.read() | | ( putheader() )* endheaders() + v v + Request-started Req-sent-unread-response + | + | response.read() + v + Request-sent + +This diagram presents the following rules: + -- a second request may not be started until {response-headers-read} + -- a response [object] cannot be retrieved until {request-sent} + -- there is no differentiation between an unread response body and a + partially read response body + +Note: this enforcement is applied by the HTTPConnection class. The + HTTPResponse class does not enforce this state machine, which + implies sophisticated clients may accelerate the request/response + pipeline. Caution should be taken, though: accelerating the states + beyond the above pattern may imply knowledge of the server's + connection-close behavior for certain requests. For example, it + is impossible to tell whether the server will close the connection + UNTIL the response headers have been read; this means that further + requests cannot be placed into the pipeline until it is known that + the server will NOT be closing the connection. + +Logical State __state __response +------------- ------- ---------- +Idle _CS_IDLE None +Request-started _CS_REQ_STARTED None +Request-sent _CS_REQ_SENT None +Unread-response _CS_IDLE +Req-started-unread-response _CS_REQ_STARTED +Req-sent-unread-response _CS_REQ_SENT +""" + +from array import array +import os +import socket +from sys import py3kwarning +from urlparse import urlsplit +import warnings +with warnings.catch_warnings(): + if py3kwarning: + warnings.filterwarnings("ignore", ".*mimetools has been removed", + DeprecationWarning) + import mimetools + +try: + from cStringIO import StringIO +except ImportError: + from StringIO import StringIO + +__all__ = ["HTTP", "HTTPResponse", "HTTPConnection", + "HTTPException", "NotConnected", "UnknownProtocol", + "UnknownTransferEncoding", "UnimplementedFileMode", + "IncompleteRead", "InvalidURL", "ImproperConnectionState", + "CannotSendRequest", "CannotSendHeader", "ResponseNotReady", + "BadStatusLine", "error", "responses"] + +HTTP_PORT = 80 +HTTPS_PORT = 443 + +_UNKNOWN = 'UNKNOWN' + +# connection states +_CS_IDLE = 'Idle' +_CS_REQ_STARTED = 'Request-started' +_CS_REQ_SENT = 'Request-sent' + +# status codes +# informational +CONTINUE = 100 +SWITCHING_PROTOCOLS = 101 +PROCESSING = 102 + +# successful +OK = 200 +CREATED = 201 +ACCEPTED = 202 +NON_AUTHORITATIVE_INFORMATION = 203 +NO_CONTENT = 204 +RESET_CONTENT = 205 +PARTIAL_CONTENT = 206 +MULTI_STATUS = 207 +IM_USED = 226 + +# redirection +MULTIPLE_CHOICES = 300 +MOVED_PERMANENTLY = 301 +FOUND = 302 +SEE_OTHER = 303 +NOT_MODIFIED = 304 +USE_PROXY = 305 +TEMPORARY_REDIRECT = 307 + +# client error +BAD_REQUEST = 400 +UNAUTHORIZED = 401 +PAYMENT_REQUIRED = 402 +FORBIDDEN = 403 +NOT_FOUND = 404 +METHOD_NOT_ALLOWED = 405 +NOT_ACCEPTABLE = 406 +PROXY_AUTHENTICATION_REQUIRED = 407 +REQUEST_TIMEOUT = 408 +CONFLICT = 409 +GONE = 410 +LENGTH_REQUIRED = 411 +PRECONDITION_FAILED = 412 +REQUEST_ENTITY_TOO_LARGE = 413 +REQUEST_URI_TOO_LONG = 414 +UNSUPPORTED_MEDIA_TYPE = 415 +REQUESTED_RANGE_NOT_SATISFIABLE = 416 +EXPECTATION_FAILED = 417 
+UNPROCESSABLE_ENTITY = 422 +LOCKED = 423 +FAILED_DEPENDENCY = 424 +UPGRADE_REQUIRED = 426 + +# server error +INTERNAL_SERVER_ERROR = 500 +NOT_IMPLEMENTED = 501 +BAD_GATEWAY = 502 +SERVICE_UNAVAILABLE = 503 +GATEWAY_TIMEOUT = 504 +HTTP_VERSION_NOT_SUPPORTED = 505 +INSUFFICIENT_STORAGE = 507 +NOT_EXTENDED = 510 + +# Mapping status codes to official W3C names +responses = { + 100: 'Continue', + 101: 'Switching Protocols', + + 200: 'OK', + 201: 'Created', + 202: 'Accepted', + 203: 'Non-Authoritative Information', + 204: 'No Content', + 205: 'Reset Content', + 206: 'Partial Content', + + 300: 'Multiple Choices', + 301: 'Moved Permanently', + 302: 'Found', + 303: 'See Other', + 304: 'Not Modified', + 305: 'Use Proxy', + 306: '(Unused)', + 307: 'Temporary Redirect', + + 400: 'Bad Request', + 401: 'Unauthorized', + 402: 'Payment Required', + 403: 'Forbidden', + 404: 'Not Found', + 405: 'Method Not Allowed', + 406: 'Not Acceptable', + 407: 'Proxy Authentication Required', + 408: 'Request Timeout', + 409: 'Conflict', + 410: 'Gone', + 411: 'Length Required', + 412: 'Precondition Failed', + 413: 'Request Entity Too Large', + 414: 'Request-URI Too Long', + 415: 'Unsupported Media Type', + 416: 'Requested Range Not Satisfiable', + 417: 'Expectation Failed', + + 500: 'Internal Server Error', + 501: 'Not Implemented', + 502: 'Bad Gateway', + 503: 'Service Unavailable', + 504: 'Gateway Timeout', + 505: 'HTTP Version Not Supported', +} + +# maximal amount of data to read at one time in _safe_read +MAXAMOUNT = 1048576 + +class HTTPMessage(mimetools.Message): + + def addheader(self, key, value): + """Add header for field key handling repeats.""" + prev = self.dict.get(key) + if prev is None: + self.dict[key] = value + else: + combined = ", ".join((prev, value)) + self.dict[key] = combined + + def addcontinue(self, key, more): + """Add more field data from a continuation line.""" + prev = self.dict[key] + self.dict[key] = prev + "\n " + more + + def readheaders(self): + """Read header lines. + + Read header lines up to the entirely blank line that terminates them. + The (normally blank) line that ends the headers is skipped, but not + included in the returned list. If a non-header line ends the headers, + (which is an error), an attempt is made to backspace over it; it is + never included in the returned list. + + The variable self.status is set to the empty string if all went well, + otherwise it is an error message. The variable self.headers is a + completely uninterpreted list of lines contained in the header (so + printing them will reproduce the header exactly as it appears in the + file). + + If multiple header fields with the same name occur, they are combined + according to the rules in RFC 2616 sec 4.2: + + Appending each subsequent field-value to the first, each separated + by a comma. The order in which header fields with the same field-name + are received is significant to the interpretation of the combined + field value. + """ + # XXX The implementation overrides the readheaders() method of + # rfc822.Message. The base class design isn't amenable to + # customized behavior here so the method here is a copy of the + # base class code with a few small changes. 
+ + self.dict = {} + self.unixfrom = '' + self.headers = hlist = [] + self.status = '' + headerseen = "" + firstline = 1 + startofline = unread = tell = None + if hasattr(self.fp, 'unread'): + unread = self.fp.unread + elif self.seekable: + tell = self.fp.tell + while True: + if tell: + try: + startofline = tell() + except IOError: + startofline = tell = None + self.seekable = 0 + line = self.fp.readline() + if not line: + self.status = 'EOF in headers' + break + # Skip unix From name time lines + if firstline and line.startswith('From '): + self.unixfrom = self.unixfrom + line + continue + firstline = 0 + if headerseen and line[0] in ' \t': + # XXX Not sure if continuation lines are handled properly + # for http and/or for repeating headers + # It's a continuation line. + hlist.append(line) + self.addcontinue(headerseen, line.strip()) + continue + elif self.iscomment(line): + # It's a comment. Ignore it. + continue + elif self.islast(line): + # Note! No pushback here! The delimiter line gets eaten. + break + headerseen = self.isheader(line) + if headerseen: + # It's a legal header line, save it. + hlist.append(line) + self.addheader(headerseen, line[len(headerseen)+1:].strip()) + continue + else: + # It's not a header line; throw it back and stop here. + if not self.dict: + self.status = 'No headers' + else: + self.status = 'Non-header line where header expected' + # Try to undo the read. + if unread: + unread(line) + elif tell: + self.fp.seek(startofline) + else: + self.status = self.status + '; bad seek' + break + +class HTTPResponse: + + # strict: If true, raise BadStatusLine if the status line can't be + # parsed as a valid HTTP/1.0 or 1.1 status line. By default it is + # false because it prevents clients from talking to HTTP/0.9 + # servers. Note that a response with a sufficiently corrupted + # status line will look like an HTTP/0.9 response. + + # See RFC 2616 sec 19.6 and RFC 1945 sec 6 for details. + + def __init__(self, sock, debuglevel=0, strict=0, method=None, buffering=False): + if buffering: + # The caller won't be using any sock.recv() calls, so buffering + # is fine and recommended for performance. + self.fp = sock.makefile('rb') + else: + # The buffer size is specified as zero, because the headers of + # the response are read with readline(). If the reads were + # buffered the readline() calls could consume some of the + # response, which make be read via a recv() on the underlying + # socket. + self.fp = sock.makefile('rb', 0) + self.debuglevel = debuglevel + self.strict = strict + self._method = method + + self.msg = None + + # from the Status-Line of the response + self.version = _UNKNOWN # HTTP-Version + self.status = _UNKNOWN # Status-Code + self.reason = _UNKNOWN # Reason-Phrase + + self.chunked = _UNKNOWN # is "chunked" being used? + self.chunk_left = _UNKNOWN # bytes left to read in current chunk + self.length = _UNKNOWN # number of bytes left in response + self.will_close = _UNKNOWN # conn will close at end of response + + def _read_status(self): + # Initialize with Simple-Response defaults + line = self.fp.readline() + if self.debuglevel > 0: + print "reply:", repr(line) + if not line: + # Presumably, the server closed the connection before + # sending a valid response. + raise BadStatusLine(line) + try: + [version, status, reason] = line.split(None, 2) + except ValueError: + try: + [version, status] = line.split(None, 1) + reason = "" + except ValueError: + # empty version will cause next test to fail and status + # will be treated as 0.9 response. 
+ version = "" + if not version.startswith('HTTP/'): + if self.strict: + self.close() + raise BadStatusLine(line) + else: + # assume it's a Simple-Response from an 0.9 server + self.fp = LineAndFileWrapper(line, self.fp) + return "HTTP/0.9", 200, "" + + # The status code is a three-digit number + try: + status = int(status) + if status < 100 or status > 999: + raise BadStatusLine(line) + except ValueError: + raise BadStatusLine(line) + return version, status, reason + + def begin(self): + if self.msg is not None: + # we've already started reading the response + return + + # read until we get a non-100 response + while True: + version, status, reason = self._read_status() + if status != CONTINUE: + break + # skip the header from the 100 response + while True: + skip = self.fp.readline().strip() + if not skip: + break + if self.debuglevel > 0: + print "header:", skip + + self.status = status + self.reason = reason.strip() + if version == 'HTTP/1.0': + self.version = 10 + elif version.startswith('HTTP/1.'): + self.version = 11 # use HTTP/1.1 code for HTTP/1.x where x>=1 + elif version == 'HTTP/0.9': + self.version = 9 + else: + raise UnknownProtocol(version) + + if self.version == 9: + self.length = None + self.chunked = 0 + self.will_close = 1 + self.msg = HTTPMessage(StringIO()) + return + + self.msg = HTTPMessage(self.fp, 0) + if self.debuglevel > 0: + for hdr in self.msg.headers: + print "header:", hdr, + + # don't let the msg keep an fp + self.msg.fp = None + + # are we using the chunked-style of transfer encoding? + tr_enc = self.msg.getheader('transfer-encoding') + if tr_enc and tr_enc.lower() == "chunked": + self.chunked = 1 + self.chunk_left = None + else: + self.chunked = 0 + + # will the connection close at the end of the response? + self.will_close = self._check_close() + + # do we have a Content-Length? + # NOTE: RFC 2616, S4.4, #3 says we ignore this if tr_enc is "chunked" + length = self.msg.getheader('content-length') + if length and not self.chunked: + try: + self.length = int(length) + except ValueError: + self.length = None + else: + if self.length < 0: # ignore nonsensical negative lengths + self.length = None + else: + self.length = None + + # does the body have a fixed length? (of zero) + if (status == NO_CONTENT or status == NOT_MODIFIED or + 100 <= status < 200 or # 1xx codes + self._method == 'HEAD'): + self.length = 0 + + # if the connection remains open, and we aren't using chunked, and + # a content-length was not provided, then assume that the connection + # WILL close. + if not self.will_close and \ + not self.chunked and \ + self.length is None: + self.will_close = 1 + + def _check_close(self): + conn = self.msg.getheader('connection') + if self.version == 11: + # An HTTP/1.1 proxy is assumed to stay open unless + # explicitly closed. + conn = self.msg.getheader('connection') + if conn and "close" in conn.lower(): + return True + return False + + # Some HTTP/1.0 implementations have support for persistent + # connections, using rules different than HTTP/1.1. + + # For older HTTP, Keep-Alive indicates persistent connection. + if self.msg.getheader('keep-alive'): + return False + + # At least Akamai returns a "Connection: Keep-Alive" header, + # which was supposed to be sent by the client. + if conn and "keep-alive" in conn.lower(): + return False + + # Proxy-Connection is a netscape hack. 
+ pconn = self.msg.getheader('proxy-connection') + if pconn and "keep-alive" in pconn.lower(): + return False + + # otherwise, assume it will close + return True + + def close(self): + if self.fp: + self.fp.close() + self.fp = None + + def isclosed(self): + # NOTE: it is possible that we will not ever call self.close(). This + # case occurs when will_close is TRUE, length is None, and we + # read up to the last byte, but NOT past it. + # + # IMPLIES: if will_close is FALSE, then self.close() will ALWAYS be + # called, meaning self.isclosed() is meaningful. + return self.fp is None + + # XXX It would be nice to have readline and __iter__ for this, too. + + def read(self, amt=None): + if self.fp is None: + return '' + + if self._method == 'HEAD': + self.close() + return '' + + if self.chunked: + return self._read_chunked(amt) + + if amt is None: + # unbounded read + if self.length is None: + s = self.fp.read() + else: + s = self._safe_read(self.length) + self.length = 0 + self.close() # we read everything + return s + + if self.length is not None: + if amt > self.length: + # clip the read to the "end of response" + amt = self.length + + # we do not use _safe_read() here because this may be a .will_close + # connection, and the user is reading more bytes than will be provided + # (for example, reading in 1k chunks) + s = self.fp.read(amt) + if self.length is not None: + self.length -= len(s) + if not self.length: + self.close() + return s + + def _read_chunked(self, amt): + assert self.chunked != _UNKNOWN + chunk_left = self.chunk_left + value = [] + while True: + if chunk_left is None: + line = self.fp.readline() + i = line.find(';') + if i >= 0: + line = line[:i] # strip chunk-extensions + try: + chunk_left = int(line, 16) + except ValueError: + # close the connection as protocol synchronisation is + # probably lost + self.close() + raise IncompleteRead(''.join(value)) + if chunk_left == 0: + break + if amt is None: + value.append(self._safe_read(chunk_left)) + elif amt < chunk_left: + value.append(self._safe_read(amt)) + self.chunk_left = chunk_left - amt + return ''.join(value) + elif amt == chunk_left: + value.append(self._safe_read(amt)) + self._safe_read(2) # toss the CRLF at the end of the chunk + self.chunk_left = None + return ''.join(value) + else: + value.append(self._safe_read(chunk_left)) + amt -= chunk_left + + # we read the whole chunk, get another + self._safe_read(2) # toss the CRLF at the end of the chunk + chunk_left = None + + # read and discard trailer up to the CRLF terminator + ### note: we shouldn't have any trailers! + while True: + line = self.fp.readline() + if not line: + # a vanishingly small number of sites EOF without + # sending the trailer + break + if line == '\r\n': + break + + # we read everything; close the "file" + self.close() + + return ''.join(value) + + def _safe_read(self, amt): + """Read the number of bytes requested, compensating for partial reads. + + Normally, we have a blocking socket, but a read() can be interrupted + by a signal (resulting in a partial read). + + Note that we cannot distinguish between EOF and an interrupt when zero + bytes have been read. IncompleteRead() will be raised in this + situation. + + This function should be used when bytes "should" be present for + reading. If the bytes are truly not available (due to EOF), then the + IncompleteRead exception can be used to detect the problem. + """ + # NOTE(gps): As of svn r74426 socket._fileobject.read(x) will never + # return less than x bytes unless EOF is encountered. 
It now handles + # signal interruptions (socket.error EINTR) internally. This code + # never caught that exception anyways. It seems largely pointless. + # self.fp.read(amt) will work fine. + s = [] + while amt > 0: + chunk = self.fp.read(min(amt, MAXAMOUNT)) + if not chunk: + raise IncompleteRead(''.join(s), amt) + s.append(chunk) + amt -= len(chunk) + return ''.join(s) + + def fileno(self): + return self.fp.fileno() + + def getheader(self, name, default=None): + if self.msg is None: + raise ResponseNotReady() + return self.msg.getheader(name, default) + + def getheaders(self): + """Return list of (header, value) tuples.""" + if self.msg is None: + raise ResponseNotReady() + return self.msg.items() + + +class HTTPConnection: + + _http_vsn = 11 + _http_vsn_str = 'HTTP/1.1' + + response_class = HTTPResponse + default_port = HTTP_PORT + auto_open = 1 + debuglevel = 0 + strict = 0 + + def __init__(self, host, port=None, strict=None, + timeout=socket._GLOBAL_DEFAULT_TIMEOUT, source_address=None): + self.timeout = timeout + self.source_address = source_address + self.sock = None + self._buffer = [] + self.__response = None + self.__state = _CS_IDLE + self._method = None + self._tunnel_host = None + self._tunnel_port = None + self._tunnel_headers = {} + + self._set_hostport(host, port) + if strict is not None: + self.strict = strict + + def set_tunnel(self, host, port=None, headers=None): + """ Sets up the host and the port for the HTTP CONNECT Tunnelling. + + The headers argument should be a mapping of extra HTTP headers + to send with the CONNECT request. + """ + self._tunnel_host = host + self._tunnel_port = port + if headers: + self._tunnel_headers = headers + else: + self._tunnel_headers.clear() + + def _set_hostport(self, host, port): + if port is None: + i = host.rfind(':') + j = host.rfind(']') # ipv6 addresses have [...] + if i > j: + try: + port = int(host[i+1:]) + except ValueError: + raise InvalidURL("nonnumeric port: '%s'" % host[i+1:]) + host = host[:i] + else: + port = self.default_port + if host and host[0] == '[' and host[-1] == ']': + host = host[1:-1] + self.host = host + self.port = port + + def set_debuglevel(self, level): + self.debuglevel = level + + def _tunnel(self): + self._set_hostport(self._tunnel_host, self._tunnel_port) + self.send("CONNECT %s:%d HTTP/1.0\r\n" % (self.host, self.port)) + for header, value in self._tunnel_headers.iteritems(): + self.send("%s: %s\r\n" % (header, value)) + self.send("\r\n") + response = self.response_class(self.sock, strict = self.strict, + method = self._method) + (version, code, message) = response._read_status() + + if code != 200: + self.close() + raise socket.error("Tunnel connection failed: %d %s" % (code, + message.strip())) + while True: + line = response.fp.readline() + if line == '\r\n': break + + + def connect(self): + """Connect to the host and port specified in __init__.""" + self.sock = socket.create_connection((self.host,self.port), + self.timeout, self.source_address) + + if self._tunnel_host: + self._tunnel() + + def close(self): + """Close the connection to the HTTP server.""" + if self.sock: + self.sock.close() # close it manually... 
there may be other refs + self.sock = None + if self.__response: + self.__response.close() + self.__response = None + self.__state = _CS_IDLE + + def send(self, data): + """Send `data' to the server.""" + if self.sock is None: + if self.auto_open: + self.connect() + else: + raise NotConnected() + + if self.debuglevel > 0: + print "send:", repr(data) + blocksize = 8192 + if hasattr(data,'read') and not isinstance(data, array): + if self.debuglevel > 0: print "sendIng a read()able" + datablock = data.read(blocksize) + while datablock: + self.sock.sendall(datablock) + datablock = data.read(blocksize) + else: + self.sock.sendall(data) + + def _output(self, s): + """Add a line of output to the current request buffer. + + Assumes that the line does *not* end with \\r\\n. + """ + self._buffer.append(s) + + def _send_output(self, message_body=None): + """Send the currently buffered request and clear the buffer. + + Appends an extra \\r\\n to the buffer. + A message_body may be specified, to be appended to the request. + """ + self._buffer.extend(("", "")) + msg = "\r\n".join(self._buffer) + del self._buffer[:] + # If msg and message_body are sent in a single send() call, + # it will avoid performance problems caused by the interaction + # between delayed ack and the Nagle algorithim. + if isinstance(message_body, str): + msg += message_body + message_body = None + self.send(msg) + if message_body is not None: + #message_body was not a string (i.e. it is a file) and + #we must run the risk of Nagle + self.send(message_body) + + def putrequest(self, method, url, skip_host=0, skip_accept_encoding=0): + """Send a request to the server. + + `method' specifies an HTTP request method, e.g. 'GET'. + `url' specifies the object being requested, e.g. '/index.html'. + `skip_host' if True does not add automatically a 'Host:' header + `skip_accept_encoding' if True does not add automatically an + 'Accept-Encoding:' header + """ + + # if a prior response has been completed, then forget about it. + if self.__response and self.__response.isclosed(): + self.__response = None + + + # in certain cases, we cannot issue another request on this connection. + # this occurs when: + # 1) we are in the process of sending a request. (_CS_REQ_STARTED) + # 2) a response to a previous request has signalled that it is going + # to close the connection upon completion. + # 3) the headers for the previous response have not been read, thus + # we cannot determine whether point (2) is true. (_CS_REQ_SENT) + # + # if there is no prior response, then we can request at will. + # + # if point (2) is true, then we will have passed the socket to the + # response (effectively meaning, "there is no prior response"), and + # will open a new one when a new request is made. + # + # Note: if a prior response exists, then we *can* start a new request. + # We are not allowed to begin fetching the response to this new + # request, however, until that prior response is complete. + # + if self.__state == _CS_IDLE: + self.__state = _CS_REQ_STARTED + else: + raise CannotSendRequest() + + # Save the method we use, we need it later in the response phase + self._method = method + if not url: + url = '/' + hdr = '%s %s %s' % (method, url, self._http_vsn_str) + + self._output(hdr) + + if self._http_vsn == 11: + # Issue some standard headers for better HTTP/1.1 compliance + + if not skip_host: + # this header is issued *only* for HTTP/1.1 + # connections. 
more specifically, this means it is + # only issued when the client uses the new + # HTTPConnection() class. backwards-compat clients + # will be using HTTP/1.0 and those clients may be + # issuing this header themselves. we should NOT issue + # it twice; some web servers (such as Apache) barf + # when they see two Host: headers + + # If we need a non-standard port,include it in the + # header. If the request is going through a proxy, + # but the host of the actual URL, not the host of the + # proxy. + + netloc = '' + if url.startswith('http'): + nil, netloc, nil, nil, nil = urlsplit(url) + + if netloc: + try: + netloc_enc = netloc.encode("ascii") + except UnicodeEncodeError: + netloc_enc = netloc.encode("idna") + self.putheader('Host', netloc_enc) + else: + try: + host_enc = self.host.encode("ascii") + except UnicodeEncodeError: + host_enc = self.host.encode("idna") + # Wrap the IPv6 Host Header with [] (RFC 2732) + if host_enc.find(':') >= 0: + host_enc = "[" + host_enc + "]" + if self.port == self.default_port: + self.putheader('Host', host_enc) + else: + self.putheader('Host', "%s:%s" % (host_enc, self.port)) + + # note: we are assuming that clients will not attempt to set these + # headers since *this* library must deal with the + # consequences. this also means that when the supporting + # libraries are updated to recognize other forms, then this + # code should be changed (removed or updated). + + # we only want a Content-Encoding of "identity" since we don't + # support encodings such as x-gzip or x-deflate. + if not skip_accept_encoding: + self.putheader('Accept-Encoding', 'identity') + + # we can accept "chunked" Transfer-Encodings, but no others + # NOTE: no TE header implies *only* "chunked" + #self.putheader('TE', 'chunked') + + # if TE is supplied in the header, then it must appear in a + # Connection header. + #self.putheader('Connection', 'TE') + + else: + # For HTTP/1.0, the server will assume "not chunked" + pass + + def putheader(self, header, *values): + """Send a request header line to the server. + + For example: h.putheader('Accept', 'text/html') + """ + if self.__state != _CS_REQ_STARTED: + raise CannotSendHeader() + + hdr = '%s: %s' % (header, '\r\n\t'.join([str(v) for v in values])) + self._output(hdr) + + def endheaders(self, message_body=None): + """Indicate that the last header line has been sent to the server. + + This method sends the request to the server. The optional + message_body argument can be used to pass message body + associated with the request. The message body will be sent in + the same packet as the message headers if possible. The + message_body should be a string. + """ + if self.__state == _CS_REQ_STARTED: + self.__state = _CS_REQ_SENT + else: + raise CannotSendHeader() + self._send_output(message_body) + + def request(self, method, url, body=None, headers={}): + """Send a complete request to the server.""" + self._send_request(method, url, body, headers) + + def _set_content_length(self, body): + # Set the content-length based on the body. + thelen = None + try: + thelen = str(len(body)) + except TypeError, te: + # If this is a file-like object, try to + # fstat its file descriptor + try: + thelen = str(os.fstat(body.fileno()).st_size) + except (AttributeError, OSError): + # Don't send a length if this failed + if self.debuglevel > 0: print "Cannot stat!!" 
+ + if thelen is not None: + self.putheader('Content-Length', thelen) + + def _send_request(self, method, url, body, headers): + # Honor explicitly requested Host: and Accept-Encoding: headers. + header_names = dict.fromkeys([k.lower() for k in headers]) + skips = {} + if 'host' in header_names: + skips['skip_host'] = 1 + if 'accept-encoding' in header_names: + skips['skip_accept_encoding'] = 1 + + self.putrequest(method, url, **skips) + + if body and ('content-length' not in header_names): + self._set_content_length(body) + for hdr, value in headers.iteritems(): + self.putheader(hdr, value) + self.endheaders(body) + + def getresponse(self, buffering=False): + "Get the response from the server." + + # if a prior response has been completed, then forget about it. + if self.__response and self.__response.isclosed(): + self.__response = None + + # + # if a prior response exists, then it must be completed (otherwise, we + # cannot read this response's header to determine the connection-close + # behavior) + # + # note: if a prior response existed, but was connection-close, then the + # socket and response were made independent of this HTTPConnection + # object since a new request requires that we open a whole new + # connection + # + # this means the prior response had one of two states: + # 1) will_close: this connection was reset and the prior socket and + # response operate independently + # 2) persistent: the response was retained and we await its + # isclosed() status to become true. + # + if self.__state != _CS_REQ_SENT or self.__response: + raise ResponseNotReady() + + args = (self.sock,) + kwds = {"strict":self.strict, "method":self._method} + if self.debuglevel > 0: + args += (self.debuglevel,) + if buffering: + #only add this keyword if non-default, for compatibility with + #other response_classes. + kwds["buffering"] = True; + response = self.response_class(*args, **kwds) + + try: + response.begin() + except: + response.close() + raise + assert response.will_close != _UNKNOWN + self.__state = _CS_IDLE + + if response.will_close: + # this effectively passes the connection to the response + self.close() + else: + # remember this, so we can tell when it is complete + self.__response = response + + return response + + +class HTTP: + "Compatibility class with httplib.py from 1.5." + + _http_vsn = 10 + _http_vsn_str = 'HTTP/1.0' + + debuglevel = 0 + + _connection_class = HTTPConnection + + def __init__(self, host='', port=None, strict=None): + "Provide a default host, since the superclass requires one." + + # some joker passed 0 explicitly, meaning default port + if port == 0: + port = None + + # Note that we may pass an empty string as the host; this will throw + # an error when we attempt to connect. Presumably, the client code + # will call connect before then, with a proper host. + self._setup(self._connection_class(host, port, strict)) + + def _setup(self, conn): + self._conn = conn + + # set up delegation to flesh out interface + self.send = conn.send + self.putrequest = conn.putrequest + self.putheader = conn.putheader + self.endheaders = conn.endheaders + self.set_debuglevel = conn.set_debuglevel + + conn._http_vsn = self._http_vsn + conn._http_vsn_str = self._http_vsn_str + + self.file = None + + def connect(self, host=None, port=None): + "Accept arguments to set the host/port, since the superclass doesn't." 
+ + if host is not None: + self._conn._set_hostport(host, port) + self._conn.connect() + + def getfile(self): + "Provide a getfile, since the superclass' does not use this concept." + return self.file + + def getreply(self, buffering=False): + """Compat definition since superclass does not define it. + + Returns a tuple consisting of: + - server status code (e.g. '200' if all goes well) + - server "reason" corresponding to status code + - any RFC822 headers in the response from the server + """ + try: + if not buffering: + response = self._conn.getresponse() + else: + #only add this keyword if non-default for compatibility + #with other connection classes + response = self._conn.getresponse(buffering) + except BadStatusLine, e: + ### hmm. if getresponse() ever closes the socket on a bad request, + ### then we are going to have problems with self.sock + + ### should we keep this behavior? do people use it? + # keep the socket open (as a file), and return it + self.file = self._conn.sock.makefile('rb', 0) + + # close our socket -- we want to restart after any protocol error + self.close() + + self.headers = None + return -1, e.line, None + + self.headers = response.msg + self.file = response.fp + return response.status, response.reason, response.msg + + def close(self): + self._conn.close() + + # note that self.file == response.fp, which gets closed by the + # superclass. just clear the object ref here. + ### hmm. messy. if status==-1, then self.file is owned by us. + ### well... we aren't explicitly closing, but losing this ref will + ### do it + self.file = None + +try: + import ssl +except ImportError: + pass +else: + class HTTPSConnection(HTTPConnection): + "This class allows communication via SSL." + + default_port = HTTPS_PORT + + def __init__(self, host, port=None, key_file=None, cert_file=None, + strict=None, timeout=socket._GLOBAL_DEFAULT_TIMEOUT, + source_address=None): + HTTPConnection.__init__(self, host, port, strict, timeout, + source_address) + self.key_file = key_file + self.cert_file = cert_file + + def connect(self): + "Connect to a host on a given (SSL) port." + + sock = socket.create_connection((self.host, self.port), + self.timeout, self.source_address) + if self._tunnel_host: + self.sock = sock + self._tunnel() + self.sock = ssl.wrap_socket(sock, self.key_file, self.cert_file) + + __all__.append("HTTPSConnection") + + class HTTPS(HTTP): + """Compatibility with 1.5 httplib interface + + Python 1.5.2 did not have an HTTPS class, but it defined an + interface for sending http requests that is also useful for + https. + """ + + _connection_class = HTTPSConnection + + def __init__(self, host='', port=None, key_file=None, cert_file=None, + strict=None): + # provide a default host, pass the X509 cert info + + # urf. compensate for bad input. + if port == 0: + port = None + self._setup(self._connection_class(host, port, key_file, + cert_file, strict)) + + # we never actually use these for anything, but we keep them + # here for compatibility with post-1.5.2 CVS. + self.key_file = key_file + self.cert_file = cert_file + + + def FakeSocket (sock, sslobj): + warnings.warn("FakeSocket is deprecated, and won't be in 3.x. " + + "Use the result of ssl.wrap_socket() directly instead.", + DeprecationWarning, stacklevel=2) + return sslobj + + +class HTTPException(Exception): + # Subclasses that define an __init__ must call Exception.__init__ + # or define self.args. Otherwise, str() will fail. 
+ pass + +class NotConnected(HTTPException): + pass + +class InvalidURL(HTTPException): + pass + +class UnknownProtocol(HTTPException): + def __init__(self, version): + self.args = version, + self.version = version + +class UnknownTransferEncoding(HTTPException): + pass + +class UnimplementedFileMode(HTTPException): + pass + +class IncompleteRead(HTTPException): + def __init__(self, partial, expected=None): + self.args = partial, + self.partial = partial + self.expected = expected + def __repr__(self): + if self.expected is not None: + e = ', %i more expected' % self.expected + else: + e = '' + return 'IncompleteRead(%i bytes read%s)' % (len(self.partial), e) + def __str__(self): + return repr(self) + +class ImproperConnectionState(HTTPException): + pass + +class CannotSendRequest(ImproperConnectionState): + pass + +class CannotSendHeader(ImproperConnectionState): + pass + +class ResponseNotReady(ImproperConnectionState): + pass + +class BadStatusLine(HTTPException): + def __init__(self, line): + if not line: + line = repr(line) + self.args = line, + self.line = line + +# for backwards compatibility +error = HTTPException + +class LineAndFileWrapper: + """A limited file-like object for HTTP/0.9 responses.""" + + # The status-line parsing code calls readline(), which normally + # get the HTTP status line. For a 0.9 response, however, this is + # actually the first line of the body! Clients need to get a + # readable file object that contains that line. + + def __init__(self, line, file): + self._line = line + self._file = file + self._line_consumed = 0 + self._line_offset = 0 + self._line_left = len(line) + + def __getattr__(self, attr): + return getattr(self._file, attr) + + def _done(self): + # called when the last byte is read from the line. After the + # call, all read methods are delegated to the underlying file + # object. + self._line_consumed = 1 + self.read = self._file.read + self.readline = self._file.readline + self.readlines = self._file.readlines + + def read(self, amt=None): + if self._line_consumed: + return self._file.read(amt) + assert self._line_left + if amt is None or amt > self._line_left: + s = self._line[self._line_offset:] + self._done() + if amt is None: + return s + self._file.read() + else: + return s + self._file.read(amt - len(s)) + else: + assert amt <= self._line_left + i = self._line_offset + j = i + amt + s = self._line[i:j] + self._line_offset = j + self._line_left -= amt + if self._line_left == 0: + self._done() + return s + + def readline(self): + if self._line_consumed: + return self._file.readline() + assert self._line_left + s = self._line[self._line_offset:] + self._done() + return s + + def readlines(self, size=None): + if self._line_consumed: + return self._file.readlines(size) + assert self._line_left + L = [self._line[self._line_offset:]] + self._done() + if size is None: + return L + self._file.readlines() + else: + return L + self._file.readlines(size) + +def test(): + """Test this module. + + A hodge podge of tests collected here, because they have too many + external dependencies for the regular test suite. 
+ """ + + import sys + import getopt + opts, args = getopt.getopt(sys.argv[1:], 'd') + dl = 0 + for o, a in opts: + if o == '-d': dl = dl + 1 + host = 'www.python.org' + selector = '/' + if args[0:]: host = args[0] + if args[1:]: selector = args[1] + h = HTTP() + h.set_debuglevel(dl) + h.connect(host) + h.putrequest('GET', selector) + h.endheaders() + status, reason, headers = h.getreply() + print 'status =', status + print 'reason =', reason + print "read", len(h.getfile().read()) + print + if headers: + for header in headers.headers: print header.strip() + print + + # minimal test that code to extract host from url works + class HTTP11(HTTP): + _http_vsn = 11 + _http_vsn_str = 'HTTP/1.1' + + h = HTTP11('www.python.org') + h.putrequest('GET', 'http://www.python.org/~jeremy/') + h.endheaders() + h.getreply() + h.close() + + try: + import ssl + except ImportError: + pass + else: + + for host, selector in (('sourceforge.net', '/projects/python'), + ): + print "https://%s%s" % (host, selector) + hs = HTTPS() + hs.set_debuglevel(dl) + hs.connect(host) + hs.putrequest('GET', selector) + hs.endheaders() + status, reason, headers = hs.getreply() + print 'status =', status + print 'reason =', reason + print "read", len(hs.getfile().read()) + print + if headers: + for header in headers.headers: print header.strip() + print + +if __name__ == '__main__': + test() diff --git a/lib-python/modified-2.7/json/encoder.py b/lib-python/modified-2.7/json/encoder.py --- a/lib-python/modified-2.7/json/encoder.py +++ b/lib-python/modified-2.7/json/encoder.py @@ -2,14 +2,7 @@ """ import re -try: - from _json import encode_basestring_ascii as c_encode_basestring_ascii -except ImportError: - c_encode_basestring_ascii = None -try: - from _json import make_encoder as c_make_encoder -except ImportError: - c_make_encoder = None +from __pypy__.builders import StringBuilder, UnicodeBuilder ESCAPE = re.compile(r'[\x00-\x1f\\"\b\f\n\r\t]') ESCAPE_ASCII = re.compile(r'([\\"]|[^\ -~])') @@ -24,8 +17,7 @@ '\t': '\\t', } for i in range(0x20): - ESCAPE_DCT.setdefault(chr(i), '\\u{0:04x}'.format(i)) - #ESCAPE_DCT.setdefault(chr(i), '\\u%04x' % (i,)) + ESCAPE_DCT.setdefault(chr(i), '\\u%04x' % (i,)) # Assume this produces an infinity on all machines (probably not guaranteed) INFINITY = float('1e66666') @@ -37,10 +29,9 @@ """ def replace(match): return ESCAPE_DCT[match.group(0)] - return '"' + ESCAPE.sub(replace, s) + '"' + return ESCAPE.sub(replace, s) - -def py_encode_basestring_ascii(s): +def encode_basestring_ascii(s): """Return an ASCII-only JSON representation of a Python string """ @@ -53,20 +44,18 @@ except KeyError: n = ord(s) if n < 0x10000: - return '\\u{0:04x}'.format(n) - #return '\\u%04x' % (n,) + return '\\u%04x' % (n,) else: # surrogate pair n -= 0x10000 s1 = 0xd800 | ((n >> 10) & 0x3ff) s2 = 0xdc00 | (n & 0x3ff) - return '\\u{0:04x}\\u{1:04x}'.format(s1, s2) - #return '\\u%04x\\u%04x' % (s1, s2) - return '"' + str(ESCAPE_ASCII.sub(replace, s)) + '"' - - -encode_basestring_ascii = ( - c_encode_basestring_ascii or py_encode_basestring_ascii) + return '\\u%04x\\u%04x' % (s1, s2) + if ESCAPE_ASCII.search(s): + return str(ESCAPE_ASCII.sub(replace, s)) + return s +py_encode_basestring_ascii = lambda s: '"' + encode_basestring_ascii(s) + '"' +c_encode_basestring_ascii = None class JSONEncoder(object): """Extensible JSON encoder for Python data structures. 
@@ -147,6 +136,17 @@ self.skipkeys = skipkeys self.ensure_ascii = ensure_ascii + if ensure_ascii: + self.encoder = encode_basestring_ascii + else: + self.encoder = encode_basestring + if encoding != 'utf-8': + orig_encoder = self.encoder + def encoder(o): + if isinstance(o, str): + o = o.decode(encoding) + return orig_encoder(o) + self.encoder = encoder self.check_circular = check_circular self.allow_nan = allow_nan self.sort_keys = sort_keys @@ -184,24 +184,126 @@ '{"foo": ["bar", "baz"]}' """ - # This is for extremely simple cases and benchmarks. + if self.check_circular: + markers = {} + else: + markers = None + if self.ensure_ascii: + builder = StringBuilder() + else: + builder = UnicodeBuilder() + self._encode(o, markers, builder, 0) + return builder.build() + + def _emit_indent(self, builder, _current_indent_level): + if self.indent is not None: + _current_indent_level += 1 + newline_indent = '\n' + (' ' * (self.indent * + _current_indent_level)) + separator = self.item_separator + newline_indent + builder.append(newline_indent) + else: + separator = self.item_separator + return separator, _current_indent_level + + def _emit_unindent(self, builder, _current_indent_level): + if self.indent is not None: + builder.append('\n') + builder.append(' ' * (self.indent * (_current_indent_level - 1))) + + def _encode(self, o, markers, builder, _current_indent_level): if isinstance(o, basestring): - if isinstance(o, str): - _encoding = self.encoding - if (_encoding is not None - and not (_encoding == 'utf-8')): - o = o.decode(_encoding) - if self.ensure_ascii: - return encode_basestring_ascii(o) + builder.append('"') + builder.append(self.encoder(o)) + builder.append('"') + elif o is None: + builder.append('null') + elif o is True: + builder.append('true') + elif o is False: + builder.append('false') + elif isinstance(o, (int, long)): + builder.append(str(o)) + elif isinstance(o, float): + builder.append(self._floatstr(o)) + elif isinstance(o, (list, tuple)): + if not o: + builder.append('[]') + return + self._encode_list(o, markers, builder, _current_indent_level) + elif isinstance(o, dict): + if not o: + builder.append('{}') + return + self._encode_dict(o, markers, builder, _current_indent_level) + else: + self._mark_markers(markers, o) + res = self.default(o) + self._encode(res, markers, builder, _current_indent_level) + self._remove_markers(markers, o) + return res + + def _encode_list(self, l, markers, builder, _current_indent_level): + self._mark_markers(markers, l) + builder.append('[') + first = True + separator, _current_indent_level = self._emit_indent(builder, + _current_indent_level) + for elem in l: + if first: + first = False else: - return encode_basestring(o) - # This doesn't pass the iterator directly to ''.join() because the - # exceptions aren't as detailed. The list call should be roughly - # equivalent to the PySequence_Fast that ''.join() would do. 
- chunks = self.iterencode(o, _one_shot=True) - if not isinstance(chunks, (list, tuple)): - chunks = list(chunks) - return ''.join(chunks) + builder.append(separator) + self._encode(elem, markers, builder, _current_indent_level) + del elem # XXX grumble + self._emit_unindent(builder, _current_indent_level) + builder.append(']') + self._remove_markers(markers, l) + + def _encode_dict(self, d, markers, builder, _current_indent_level): + self._mark_markers(markers, d) + first = True + builder.append('{') + separator, _current_indent_level = self._emit_indent(builder, + _current_indent_level) + if self.sort_keys: + items = sorted(d.items(), key=lambda kv: kv[0]) + else: + items = d.iteritems() + + for key, v in items: + if first: + first = False + else: + builder.append(separator) + if isinstance(key, basestring): + pass + # JavaScript is weakly typed for these, so it makes sense to + # also allow them. Many encoders seem to do something like this. + elif isinstance(key, float): + key = self._floatstr(key) + elif key is True: + key = 'true' + elif key is False: + key = 'false' + elif key is None: + key = 'null' + elif isinstance(key, (int, long)): + key = str(key) + elif self.skipkeys: + continue + else: + raise TypeError("key " + repr(key) + " is not a string") + builder.append('"') + builder.append(self.encoder(key)) + builder.append('"') + builder.append(self.key_separator) + self._encode(v, markers, builder, _current_indent_level) + del key + del v # XXX grumble + self._emit_unindent(builder, _current_indent_level) + builder.append('}') + self._remove_markers(markers, d) def iterencode(self, o, _one_shot=False): """Encode the given object and yield each string @@ -217,86 +319,54 @@ markers = {} else: markers = None - if self.ensure_ascii: - _encoder = encode_basestring_ascii + return self._iterencode(o, markers, 0) + + def _floatstr(self, o): + # Check for specials. Note that this type of test is processor + # and/or platform-specific, so do tests which don't depend on the + # internals. + + if o != o: + text = 'NaN' + elif o == INFINITY: + text = 'Infinity' + elif o == -INFINITY: + text = '-Infinity' else: - _encoder = encode_basestring - if self.encoding != 'utf-8': - def _encoder(o, _orig_encoder=_encoder, _encoding=self.encoding): - if isinstance(o, str): - o = o.decode(_encoding) - return _orig_encoder(o) + return FLOAT_REPR(o) - def floatstr(o, allow_nan=self.allow_nan, - _repr=FLOAT_REPR, _inf=INFINITY, _neginf=-INFINITY): - # Check for specials. Note that this type of test is processor - # and/or platform-specific, so do tests which don't depend on the - # internals. 
+ if not self.allow_nan: + raise ValueError( + "Out of range float values are not JSON compliant: " + + repr(o)) - if o != o: - text = 'NaN' - elif o == _inf: - text = 'Infinity' - elif o == _neginf: - text = '-Infinity' - else: - return _repr(o) + return text - if not allow_nan: - raise ValueError( - "Out of range float values are not JSON compliant: " + - repr(o)) + def _mark_markers(self, markers, o): + if markers is not None: + if id(o) in markers: + raise ValueError("Circular reference detected") + markers[id(o)] = None - return text + def _remove_markers(self, markers, o): + if markers is not None: + del markers[id(o)] - - if (_one_shot and c_make_encoder is not None - and not self.indent and not self.sort_keys): - _iterencode = c_make_encoder( - markers, self.default, _encoder, self.indent, - self.key_separator, self.item_separator, self.sort_keys, - self.skipkeys, self.allow_nan) - else: - _iterencode = _make_iterencode( - markers, self.default, _encoder, self.indent, floatstr, - self.key_separator, self.item_separator, self.sort_keys, - self.skipkeys, _one_shot) - return _iterencode(o, 0) - -def _make_iterencode(markers, _default, _encoder, _indent, _floatstr, - _key_separator, _item_separator, _sort_keys, _skipkeys, _one_shot, - ## HACK: hand-optimized bytecode; turn globals into locals - ValueError=ValueError, - basestring=basestring, - dict=dict, - float=float, - id=id, - int=int, - isinstance=isinstance, - list=list, - long=long, - str=str, - tuple=tuple, - ): - - def _iterencode_list(lst, _current_indent_level): + def _iterencode_list(self, lst, markers, _current_indent_level): if not lst: yield '[]' return - if markers is not None: - markerid = id(lst) - if markerid in markers: - raise ValueError("Circular reference detected") - markers[markerid] = lst + self._mark_markers(markers, lst) buf = '[' - if _indent is not None: + if self.indent is not None: _current_indent_level += 1 - newline_indent = '\n' + (' ' * (_indent * _current_indent_level)) - separator = _item_separator + newline_indent + newline_indent = '\n' + (' ' * (self.indent * + _current_indent_level)) + separator = self.item_separator + newline_indent buf += newline_indent else: newline_indent = None - separator = _item_separator + separator = self.item_separator first = True for value in lst: if first: @@ -304,7 +374,7 @@ else: buf = separator if isinstance(value, basestring): - yield buf + _encoder(value) + yield buf + '"' + self.encoder(value) + '"' elif value is None: yield buf + 'null' elif value is True: @@ -314,44 +384,43 @@ elif isinstance(value, (int, long)): yield buf + str(value) elif isinstance(value, float): - yield buf + _floatstr(value) + yield buf + self._floatstr(value) else: yield buf if isinstance(value, (list, tuple)): - chunks = _iterencode_list(value, _current_indent_level) + chunks = self._iterencode_list(value, markers, + _current_indent_level) elif isinstance(value, dict): - chunks = _iterencode_dict(value, _current_indent_level) + chunks = self._iterencode_dict(value, markers, + _current_indent_level) else: - chunks = _iterencode(value, _current_indent_level) + chunks = self._iterencode(value, markers, + _current_indent_level) for chunk in chunks: yield chunk if newline_indent is not None: _current_indent_level -= 1 - yield '\n' + (' ' * (_indent * _current_indent_level)) + yield '\n' + (' ' * (self.indent * _current_indent_level)) yield ']' - if markers is not None: - del markers[markerid] + self._remove_markers(markers, lst) - def _iterencode_dict(dct, _current_indent_level): + def 
_iterencode_dict(self, dct, markers, _current_indent_level): if not dct: yield '{}' return - if markers is not None: - markerid = id(dct) - if markerid in markers: - raise ValueError("Circular reference detected") - markers[markerid] = dct + self._mark_markers(markers, dct) yield '{' - if _indent is not None: + if self.indent is not None: _current_indent_level += 1 - newline_indent = '\n' + (' ' * (_indent * _current_indent_level)) - item_separator = _item_separator + newline_indent + newline_indent = '\n' + (' ' * (self.indent * + _current_indent_level)) + item_separator = self.item_separator + newline_indent yield newline_indent else: newline_indent = None - item_separator = _item_separator + item_separator = self.item_separator first = True - if _sort_keys: + if self.sort_keys: items = sorted(dct.items(), key=lambda kv: kv[0]) else: items = dct.iteritems() @@ -361,7 +430,7 @@ # JavaScript is weakly typed for these, so it makes sense to # also allow them. Many encoders seem to do something like this. elif isinstance(key, float): - key = _floatstr(key) + key = self._floatstr(key) elif key is True: key = 'true' elif key is False: @@ -370,7 +439,7 @@ key = 'null' elif isinstance(key, (int, long)): key = str(key) - elif _skipkeys: + elif self.skipkeys: continue else: raise TypeError("key " + repr(key) + " is not a string") @@ -378,10 +447,10 @@ first = False else: yield item_separator - yield _encoder(key) - yield _key_separator + yield '"' + self.encoder(key) + '"' + yield self.key_separator if isinstance(value, basestring): - yield _encoder(value) + yield '"' + self.encoder(value) + '"' elif value is None: yield 'null' elif value is True: @@ -391,26 +460,28 @@ elif isinstance(value, (int, long)): yield str(value) elif isinstance(value, float): - yield _floatstr(value) + yield self._floatstr(value) else: if isinstance(value, (list, tuple)): - chunks = _iterencode_list(value, _current_indent_level) + chunks = self._iterencode_list(value, markers, + _current_indent_level) elif isinstance(value, dict): - chunks = _iterencode_dict(value, _current_indent_level) + chunks = self._iterencode_dict(value, markers, + _current_indent_level) else: - chunks = _iterencode(value, _current_indent_level) + chunks = self._iterencode(value, markers, + _current_indent_level) for chunk in chunks: yield chunk if newline_indent is not None: _current_indent_level -= 1 - yield '\n' + (' ' * (_indent * _current_indent_level)) + yield '\n' + (' ' * (self.indent * _current_indent_level)) yield '}' - if markers is not None: - del markers[markerid] + self._remove_markers(markers, dct) - def _iterencode(o, _current_indent_level): + def _iterencode(self, o, markers, _current_indent_level): if isinstance(o, basestring): - yield _encoder(o) + yield '"' + self.encoder(o) + '"' elif o is None: yield 'null' elif o is True: @@ -420,23 +491,19 @@ elif isinstance(o, (int, long)): yield str(o) elif isinstance(o, float): - yield _floatstr(o) + yield self._floatstr(o) elif isinstance(o, (list, tuple)): - for chunk in _iterencode_list(o, _current_indent_level): + for chunk in self._iterencode_list(o, markers, + _current_indent_level): yield chunk elif isinstance(o, dict): - for chunk in _iterencode_dict(o, _current_indent_level): + for chunk in self._iterencode_dict(o, markers, + _current_indent_level): yield chunk else: - if markers is not None: - markerid = id(o) - if markerid in markers: - raise ValueError("Circular reference detected") - markers[markerid] = o - o = _default(o) - for chunk in _iterencode(o, _current_indent_level): 
+ self._mark_markers(markers, o) + obj = self.default(o) + for chunk in self._iterencode(obj, markers, + _current_indent_level): yield chunk - if markers is not None: - del markers[markerid] - - return _iterencode + self._remove_markers(markers, o) diff --git a/lib-python/modified-2.7/json/tests/test_unicode.py b/lib-python/modified-2.7/json/tests/test_unicode.py --- a/lib-python/modified-2.7/json/tests/test_unicode.py +++ b/lib-python/modified-2.7/json/tests/test_unicode.py @@ -80,3 +80,9 @@ self.assertEqual(type(json.loads(u'["a"]')[0]), unicode) # Issue 10038. self.assertEqual(type(json.loads('"foo"')), unicode) + + def test_encode_not_utf_8(self): + self.assertEqual(json.dumps('\xb1\xe6', encoding='iso8859-2'), + '"\\u0105\\u0107"') + self.assertEqual(json.dumps(['\xb1\xe6'], encoding='iso8859-2'), + '["\\u0105\\u0107"]') diff --git a/lib-python/2.7/pkgutil.py b/lib-python/modified-2.7/pkgutil.py copy from lib-python/2.7/pkgutil.py copy to lib-python/modified-2.7/pkgutil.py --- a/lib-python/2.7/pkgutil.py +++ b/lib-python/modified-2.7/pkgutil.py @@ -244,7 +244,8 @@ return mod def get_data(self, pathname): - return open(pathname, "rb").read() + with open(pathname, "rb") as f: + return f.read() def _reopen(self): if self.file and self.file.closed: diff --git a/lib-python/modified-2.7/test/test_array.py b/lib-python/modified-2.7/test/test_array.py --- a/lib-python/modified-2.7/test/test_array.py +++ b/lib-python/modified-2.7/test/test_array.py @@ -295,9 +295,10 @@ ) b = array.array(self.badtypecode()) - self.assertRaises(TypeError, "a + b") - - self.assertRaises(TypeError, "a + 'bad'") + with self.assertRaises(TypeError): + a + b + with self.assertRaises(TypeError): + a + 'bad' def test_iadd(self): a = array.array(self.typecode, self.example[::-1]) @@ -316,9 +317,10 @@ ) b = array.array(self.badtypecode()) - self.assertRaises(TypeError, "a += b") - - self.assertRaises(TypeError, "a += 'bad'") + with self.assertRaises(TypeError): + a += b + with self.assertRaises(TypeError): + a += 'bad' def test_mul(self): a = 5*array.array(self.typecode, self.example) @@ -345,7 +347,8 @@ array.array(self.typecode) ) - self.assertRaises(TypeError, "a * 'bad'") + with self.assertRaises(TypeError): + a * 'bad' def test_imul(self): a = array.array(self.typecode, self.example) @@ -374,7 +377,8 @@ a *= -1 self.assertEqual(a, array.array(self.typecode)) - self.assertRaises(TypeError, "a *= 'bad'") + with self.assertRaises(TypeError): + a *= 'bad' def test_getitem(self): a = array.array(self.typecode, self.example) diff --git a/lib-python/modified-2.7/test/test_heapq.py b/lib-python/modified-2.7/test/test_heapq.py --- a/lib-python/modified-2.7/test/test_heapq.py +++ b/lib-python/modified-2.7/test/test_heapq.py @@ -186,6 +186,11 @@ self.assertFalse(sys.modules['heapq'] is self.module) self.assertTrue(hasattr(self.module.heapify, 'func_code')) + def test_islice_protection(self): + m = self.module + self.assertFalse(m.nsmallest(-1, [1])) + self.assertFalse(m.nlargest(-1, [1])) + class TestHeapC(TestHeap): module = c_heapq diff --git a/lib-python/modified-2.7/test/test_import.py b/lib-python/modified-2.7/test/test_import.py --- a/lib-python/modified-2.7/test/test_import.py +++ b/lib-python/modified-2.7/test/test_import.py @@ -64,6 +64,7 @@ except ImportError, err: self.fail("import from %s failed: %s" % (ext, err)) else: + # XXX importing .pyw is missing on Windows self.assertEqual(mod.a, a, "module loaded (%s) but contents invalid" % mod) self.assertEqual(mod.b, b, diff --git 
a/lib-python/modified-2.7/test/test_repr.py b/lib-python/modified-2.7/test/test_repr.py --- a/lib-python/modified-2.7/test/test_repr.py +++ b/lib-python/modified-2.7/test/test_repr.py @@ -254,8 +254,14 @@ eq = self.assertEqual touch(os.path.join(self.subpkgname, self.pkgname + os.extsep + 'py')) from areallylongpackageandmodulenametotestreprtruncation.areallylongpackageandmodulenametotestreprtruncation import areallylongpackageandmodulenametotestreprtruncation - eq(repr(areallylongpackageandmodulenametotestreprtruncation), - "" % (areallylongpackageandmodulenametotestreprtruncation.__name__, areallylongpackageandmodulenametotestreprtruncation.__file__)) + # On PyPy, we use %r to format the file name; on CPython it is done + # with '%s'. It seems to me that %r is safer . + if '__pypy__' in sys.builtin_module_names: + eq(repr(areallylongpackageandmodulenametotestreprtruncation), + "" % (areallylongpackageandmodulenametotestreprtruncation.__name__, areallylongpackageandmodulenametotestreprtruncation.__file__)) + else: + eq(repr(areallylongpackageandmodulenametotestreprtruncation), + "" % (areallylongpackageandmodulenametotestreprtruncation.__name__, areallylongpackageandmodulenametotestreprtruncation.__file__)) eq(repr(sys), "") def test_type(self): diff --git a/lib-python/2.7/test/test_subprocess.py b/lib-python/modified-2.7/test/test_subprocess.py copy from lib-python/2.7/test/test_subprocess.py copy to lib-python/modified-2.7/test/test_subprocess.py --- a/lib-python/2.7/test/test_subprocess.py +++ b/lib-python/modified-2.7/test/test_subprocess.py @@ -16,11 +16,11 @@ # Depends on the following external programs: Python # -if mswindows: - SETBINARY = ('import msvcrt; msvcrt.setmode(sys.stdout.fileno(), ' - 'os.O_BINARY);') -else: - SETBINARY = '' +#if mswindows: +# SETBINARY = ('import msvcrt; msvcrt.setmode(sys.stdout.fileno(), ' +# 'os.O_BINARY);') +#else: +# SETBINARY = '' try: @@ -420,8 +420,9 @@ self.assertStderrEqual(stderr, "") def test_universal_newlines(self): - p = subprocess.Popen([sys.executable, "-c", - 'import sys,os;' + SETBINARY + + # NB. replaced SETBINARY with the -u flag + p = subprocess.Popen([sys.executable, "-u", "-c", + 'import sys,os;' + #SETBINARY + 'sys.stdout.write("line1\\n");' 'sys.stdout.flush();' 'sys.stdout.write("line2\\r");' @@ -448,8 +449,9 @@ def test_universal_newlines_communicate(self): # universal newlines through communicate() - p = subprocess.Popen([sys.executable, "-c", - 'import sys,os;' + SETBINARY + + # NB. 
replaced SETBINARY with the -u flag + p = subprocess.Popen([sys.executable, "-u", "-c", + 'import sys,os;' + #SETBINARY + 'sys.stdout.write("line1\\n");' 'sys.stdout.flush();' 'sys.stdout.write("line2\\r");' diff --git a/lib-python/modified-2.7/test/test_sys_settrace.py b/lib-python/modified-2.7/test/test_sys_settrace.py --- a/lib-python/modified-2.7/test/test_sys_settrace.py +++ b/lib-python/modified-2.7/test/test_sys_settrace.py @@ -286,11 +286,11 @@ self.compare_events(func.func_code.co_firstlineno, tracer.events, func.events) - def set_and_retrieve_none(self): + def test_set_and_retrieve_none(self): sys.settrace(None) assert sys.gettrace() is None - def set_and_retrieve_func(self): + def test_set_and_retrieve_func(self): def fn(*args): pass diff --git a/lib-python/modified-2.7/test/test_urllib2.py b/lib-python/modified-2.7/test/test_urllib2.py --- a/lib-python/modified-2.7/test/test_urllib2.py +++ b/lib-python/modified-2.7/test/test_urllib2.py @@ -307,6 +307,9 @@ def getresponse(self): return MockHTTPResponse(MockFile(), {}, 200, "OK") + def close(self): + pass + class MockHandler: # useful for testing handler machinery # see add_ordered_mock_handlers() docstring diff --git a/lib-python/modified-2.7/urllib2.py b/lib-python/modified-2.7/urllib2.py new file mode 100644 --- /dev/null +++ b/lib-python/modified-2.7/urllib2.py @@ -0,0 +1,1436 @@ +"""An extensible library for opening URLs using a variety of protocols + +The simplest way to use this module is to call the urlopen function, +which accepts a string containing a URL or a Request object (described +below). It opens the URL and returns the results as file-like +object; the returned object has some extra methods described below. + +The OpenerDirector manages a collection of Handler objects that do +all the actual work. Each Handler implements a particular protocol or +option. The OpenerDirector is a composite object that invokes the +Handlers needed to open the requested URL. For example, the +HTTPHandler performs HTTP GET and POST requests and deals with +non-error returns. The HTTPRedirectHandler automatically deals with +HTTP 301, 302, 303 and 307 redirect errors, and the HTTPDigestAuthHandler +deals with digest authentication. + +urlopen(url, data=None) -- Basic usage is the same as original +urllib. pass the url and optionally data to post to an HTTP URL, and +get a file-like object back. One difference is that you can also pass +a Request instance instead of URL. Raises a URLError (subclass of +IOError); for HTTP errors, raises an HTTPError, which can also be +treated as a valid response. + +build_opener -- Function that creates a new OpenerDirector instance. +Will install the default handlers. Accepts one or more Handlers as +arguments, either instances or Handler classes that it will +instantiate. If one of the argument is a subclass of the default +handler, the argument will be installed instead of the default. + +install_opener -- Installs a new opener as the default opener. + +objects of interest: + +OpenerDirector -- Sets up the User Agent as the Python-urllib client and manages +the Handler classes, while dealing with requests and responses. + +Request -- An object that encapsulates the state of a request. The +state can be as simple as the URL. It can also include extra HTTP +headers, e.g. a User-Agent. + +BaseHandler -- + +exceptions: +URLError -- A subclass of IOError, individual protocols have their own +specific subclass. 
+ +HTTPError -- Also a valid HTTP response, so you can treat an HTTP error +as an exceptional event or valid response. + +internals: +BaseHandler and parent +_call_chain conventions + +Example usage: + +import urllib2 + +# set up authentication info +authinfo = urllib2.HTTPBasicAuthHandler() +authinfo.add_password(realm='PDQ Application', + uri='https://mahler:8092/site-updates.py', + user='klem', + passwd='geheim$parole') + +proxy_support = urllib2.ProxyHandler({"http" : "http://ahad-haam:3128"}) + +# build a new opener that adds authentication and caching FTP handlers +opener = urllib2.build_opener(proxy_support, authinfo, urllib2.CacheFTPHandler) + +# install it +urllib2.install_opener(opener) + +f = urllib2.urlopen('http://www.python.org/') + + +""" + +# XXX issues: +# If an authentication error handler that tries to perform +# authentication for some reason but fails, how should the error be +# signalled? The client needs to know the HTTP error code. But if +# the handler knows that the problem was, e.g., that it didn't know +# that hash algo that requested in the challenge, it would be good to +# pass that information along to the client, too. +# ftp errors aren't handled cleanly +# check digest against correct (i.e. non-apache) implementation + +# Possible extensions: +# complex proxies XXX not sure what exactly was meant by this +# abstract factory for opener + +import base64 +import hashlib +import httplib +import mimetools +import os +import posixpath +import random +import re +import socket +import sys +import time +import urlparse +import bisect + +try: + from cStringIO import StringIO +except ImportError: + from StringIO import StringIO + +from urllib import (unwrap, unquote, splittype, splithost, quote, + addinfourl, splitport, splittag, + splitattr, ftpwrapper, splituser, splitpasswd, splitvalue) + +# support for FileHandler, proxies via environment variables +from urllib import localhost, url2pathname, getproxies, proxy_bypass + +# used in User-Agent header sent +__version__ = sys.version[:3] + +_opener = None +def urlopen(url, data=None, timeout=socket._GLOBAL_DEFAULT_TIMEOUT): + global _opener + if _opener is None: + _opener = build_opener() + return _opener.open(url, data, timeout) + +def install_opener(opener): + global _opener + _opener = opener + +# do these error classes make sense? +# make sure all of the IOError stuff is overridden. we just want to be +# subtypes. + +class URLError(IOError): + # URLError is a sub-type of IOError, but it doesn't share any of + # the implementation. need to override __init__ and __str__. + # It sets self.args for compatibility with other EnvironmentError + # subclasses, but args doesn't have the typical format with errno in + # slot 0 and strerror in slot 1. This may be better than nothing. + def __init__(self, reason): + self.args = reason, + self.reason = reason + + def __str__(self): + return '' % self.reason + +class HTTPError(URLError, addinfourl): + """Raised when HTTP error occurs, but also acts like non-error return""" + __super_init = addinfourl.__init__ + + def __init__(self, url, code, msg, hdrs, fp): + self.code = code + self.msg = msg + self.hdrs = hdrs + self.fp = fp + self.filename = url + # The addinfourl classes depend on fp being a valid file + # object. In some cases, the HTTPError may not have a valid + # file object. If this happens, the simplest workaround is to + # not initialize the base classes. 
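As the module docstring above notes, HTTPError doubles as an exception and
as a response object.  A minimal usage sketch (the URL is a hypothetical
one that returns a 404; any HTTP error response with a body behaves the
same way):

    import urllib2
    try:
        urllib2.urlopen('http://www.example.com/no-such-page')
    except urllib2.HTTPError, e:
        print e.code        # the HTTP status, e.g. 404
        print e.read()      # the error page body, via the addinfourl base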
+ if fp is not None: + self.__super_init(fp, hdrs, url, code) + + def __str__(self): + return 'HTTP Error %s: %s' % (self.code, self.msg) + +# copied from cookielib.py +_cut_port_re = re.compile(r":\d+$") +def request_host(request): + """Return request-host, as defined by RFC 2965. + + Variation from RFC: returned value is lowercased, for convenient + comparison. + + """ + url = request.get_full_url() + host = urlparse.urlparse(url)[1] + if host == "": + host = request.get_header("Host", "") + + # remove port, if present + host = _cut_port_re.sub("", host, 1) + return host.lower() + +class Request: + + def __init__(self, url, data=None, headers={}, + origin_req_host=None, unverifiable=False): + # unwrap('') --> 'type://host/path' + self.__original = unwrap(url) + self.__original, fragment = splittag(self.__original) + self.type = None + # self.__r_type is what's left after doing the splittype + self.host = None + self.port = None + self._tunnel_host = None + self.data = data + self.headers = {} + for key, value in headers.items(): + self.add_header(key, value) + self.unredirected_hdrs = {} + if origin_req_host is None: + origin_req_host = request_host(self) + self.origin_req_host = origin_req_host + self.unverifiable = unverifiable + + def __getattr__(self, attr): + # XXX this is a fallback mechanism to guard against these + # methods getting called in a non-standard order. this may be + # too complicated and/or unnecessary. + # XXX should the __r_XXX attributes be public? + if attr[:12] == '_Request__r_': + name = attr[12:] + if hasattr(Request, 'get_' + name): + getattr(self, 'get_' + name)() + return getattr(self, attr) + raise AttributeError, attr + + def get_method(self): + if self.has_data(): + return "POST" + else: + return "GET" + + # XXX these helper methods are lame + + def add_data(self, data): + self.data = data + + def has_data(self): + return self.data is not None + + def get_data(self): + return self.data + + def get_full_url(self): + return self.__original + + def get_type(self): + if self.type is None: + self.type, self.__r_type = splittype(self.__original) + if self.type is None: + raise ValueError, "unknown url type: %s" % self.__original + return self.type + + def get_host(self): + if self.host is None: + self.host, self.__r_host = splithost(self.__r_type) + if self.host: + self.host = unquote(self.host) + return self.host + + def get_selector(self): + return self.__r_host + + def set_proxy(self, host, type): + if self.type == 'https' and not self._tunnel_host: + self._tunnel_host = self.host + else: + self.type = type + self.__r_host = self.__original + + self.host = host + + def has_proxy(self): + return self.__r_host == self.__original + + def get_origin_req_host(self): + return self.origin_req_host + + def is_unverifiable(self): + return self.unverifiable + + def add_header(self, key, val): + # useful for something like authentication + self.headers[key.capitalize()] = val + + def add_unredirected_header(self, key, val): + # will not be added to a redirected request + self.unredirected_hdrs[key.capitalize()] = val + + def has_header(self, header_name): + return (header_name in self.headers or + header_name in self.unredirected_hdrs) + + def get_header(self, header_name, default=None): + return self.headers.get( + header_name, + self.unredirected_hdrs.get(header_name, default)) + + def header_items(self): + hdrs = self.unredirected_hdrs.copy() + hdrs.update(self.headers) + return hdrs.items() + +class OpenerDirector: + def __init__(self): + client_version = 
"Python-urllib/%s" % __version__ + self.addheaders = [('User-agent', client_version)] + # manage the individual handlers + self.handlers = [] + self.handle_open = {} + self.handle_error = {} + self.process_response = {} + self.process_request = {} + + def add_handler(self, handler): + if not hasattr(handler, "add_parent"): + raise TypeError("expected BaseHandler instance, got %r" % + type(handler)) + + added = False + for meth in dir(handler): + if meth in ["redirect_request", "do_open", "proxy_open"]: + # oops, coincidental match + continue + + i = meth.find("_") + protocol = meth[:i] + condition = meth[i+1:] + + if condition.startswith("error"): + j = condition.find("_") + i + 1 + kind = meth[j+1:] + try: + kind = int(kind) + except ValueError: + pass + lookup = self.handle_error.get(protocol, {}) + self.handle_error[protocol] = lookup + elif condition == "open": + kind = protocol + lookup = self.handle_open + elif condition == "response": + kind = protocol + lookup = self.process_response + elif condition == "request": + kind = protocol + lookup = self.process_request + else: + continue + + handlers = lookup.setdefault(kind, []) + if handlers: + bisect.insort(handlers, handler) + else: + handlers.append(handler) + added = True + + if added: + # the handlers must work in an specific order, the order + # is specified in a Handler attribute + bisect.insort(self.handlers, handler) + handler.add_parent(self) + + def close(self): + # Only exists for backwards compatibility. + pass + + def _call_chain(self, chain, kind, meth_name, *args): + # Handlers raise an exception if no one else should try to handle + # the request, or return None if they can't but another handler + # could. Otherwise, they return the response. + handlers = chain.get(kind, ()) + for handler in handlers: + func = getattr(handler, meth_name) + + result = func(*args) + if result is not None: + return result + + def open(self, fullurl, data=None, timeout=socket._GLOBAL_DEFAULT_TIMEOUT): + # accept a URL or a Request object + if isinstance(fullurl, basestring): + req = Request(fullurl, data) + else: + req = fullurl + if data is not None: + req.add_data(data) + + req.timeout = timeout + protocol = req.get_type() + + # pre-process request + meth_name = protocol+"_request" + for processor in self.process_request.get(protocol, []): + meth = getattr(processor, meth_name) + req = meth(req) + + response = self._open(req, data) + + # post-process response + meth_name = protocol+"_response" + for processor in self.process_response.get(protocol, []): + meth = getattr(processor, meth_name) + response = meth(req, response) + + return response + + def _open(self, req, data=None): + result = self._call_chain(self.handle_open, 'default', + 'default_open', req) + if result: + return result + + protocol = req.get_type() + result = self._call_chain(self.handle_open, protocol, protocol + + '_open', req) + if result: + return result + + return self._call_chain(self.handle_open, 'unknown', + 'unknown_open', req) + + def error(self, proto, *args): + if proto in ('http', 'https'): + # XXX http[s] protocols are special-cased + dict = self.handle_error['http'] # https is not different than http + proto = args[2] # YUCK! 
+ meth_name = 'http_error_%s' % proto + http_err = 1 + orig_args = args + else: + dict = self.handle_error + meth_name = proto + '_error' + http_err = 0 + args = (dict, proto, meth_name) + args + result = self._call_chain(*args) + if result: + return result + + if http_err: + args = (dict, 'default', 'http_error_default') + orig_args + return self._call_chain(*args) + +# XXX probably also want an abstract factory that knows when it makes +# sense to skip a superclass in favor of a subclass and when it might +# make sense to include both + +def build_opener(*handlers): + """Create an opener object from a list of handlers. + + The opener will use several default handlers, including support + for HTTP, FTP and when applicable, HTTPS. + + If any of the handlers passed as arguments are subclasses of the + default handlers, the default handlers will not be used. + """ + import types + def isclass(obj): + return isinstance(obj, (types.ClassType, type)) + + opener = OpenerDirector() + default_classes = [ProxyHandler, UnknownHandler, HTTPHandler, + HTTPDefaultErrorHandler, HTTPRedirectHandler, + FTPHandler, FileHandler, HTTPErrorProcessor] + if hasattr(httplib, 'HTTPS'): + default_classes.append(HTTPSHandler) + skip = set() + for klass in default_classes: + for check in handlers: + if isclass(check): + if issubclass(check, klass): + skip.add(klass) + elif isinstance(check, klass): + skip.add(klass) + for klass in skip: + default_classes.remove(klass) + + for klass in default_classes: + opener.add_handler(klass()) + + for h in handlers: + if isclass(h): + h = h() + opener.add_handler(h) + return opener + +class BaseHandler: + handler_order = 500 + + def add_parent(self, parent): + self.parent = parent + + def close(self): + # Only exists for backwards compatibility + pass + + def __lt__(self, other): + if not hasattr(other, "handler_order"): + # Try to preserve the old behavior of having custom classes + # inserted after default ones (works only for custom user + # classes which are not aware of handler_order). + return True + return self.handler_order < other.handler_order + + +class HTTPErrorProcessor(BaseHandler): + """Process HTTP error responses.""" + handler_order = 1000 # after all other processing + + def http_response(self, request, response): + code, msg, hdrs = response.code, response.msg, response.info() + + # According to RFC 2616, "2xx" code indicates that the client's + # request was successfully received, understood, and accepted. + if not (200 <= code < 300): + response = self.parent.error( + 'http', request, response, code, msg, hdrs) + + return response + + https_response = http_response + +class HTTPDefaultErrorHandler(BaseHandler): + def http_error_default(self, req, fp, code, msg, hdrs): + raise HTTPError(req.get_full_url(), code, msg, hdrs, fp) + +class HTTPRedirectHandler(BaseHandler): + # maximum number of redirections to any single URL + # this is needed because of the state that cookies introduce + max_repeats = 4 + # maximum total number of redirections (regardless of URL) before + # assuming we're in a loop + max_redirections = 10 + + def redirect_request(self, req, fp, code, msg, headers, newurl): + """Return a Request or None in response to a redirect. + + This is called by the http_error_30x methods when a + redirection response is received. If a redirection should + take place, return a new Request to allow http_error_30x to + perform the redirect. Otherwise, raise HTTPError if no-one + else should try to handle this url. 
Return None if you can't + but another Handler might. + """ + m = req.get_method() + if (code in (301, 302, 303, 307) and m in ("GET", "HEAD") + or code in (301, 302, 303) and m == "POST"): + # Strictly (according to RFC 2616), 301 or 302 in response + # to a POST MUST NOT cause a redirection without confirmation + # from the user (of urllib2, in this case). In practice, + # essentially all clients do redirect in this case, so we + # do the same. + # be conciliant with URIs containing a space + newurl = newurl.replace(' ', '%20') + newheaders = dict((k,v) for k,v in req.headers.items() + if k.lower() not in ("content-length", "content-type") + ) + return Request(newurl, + headers=newheaders, + origin_req_host=req.get_origin_req_host(), + unverifiable=True) + else: + raise HTTPError(req.get_full_url(), code, msg, headers, fp) + + # Implementation note: To avoid the server sending us into an + # infinite loop, the request object needs to track what URLs we + # have already seen. Do this by adding a handler-specific + # attribute to the Request object. + def http_error_302(self, req, fp, code, msg, headers): + # Some servers (incorrectly) return multiple Location headers + # (so probably same goes for URI). Use first header. + if 'location' in headers: + newurl = headers.getheaders('location')[0] + elif 'uri' in headers: + newurl = headers.getheaders('uri')[0] + else: + return + + # fix a possible malformed URL + urlparts = urlparse.urlparse(newurl) + if not urlparts.path: + urlparts = list(urlparts) + urlparts[2] = "/" + newurl = urlparse.urlunparse(urlparts) + + newurl = urlparse.urljoin(req.get_full_url(), newurl) + + # XXX Probably want to forget about the state of the current + # request, although that might interact poorly with other + # handlers that also use handler-specific request attributes + new = self.redirect_request(req, fp, code, msg, headers, newurl) + if new is None: + return + + # loop detection + # .redirect_dict has a key url if url was previously visited. + if hasattr(req, 'redirect_dict'): + visited = new.redirect_dict = req.redirect_dict + if (visited.get(newurl, 0) >= self.max_repeats or + len(visited) >= self.max_redirections): + raise HTTPError(req.get_full_url(), code, + self.inf_msg + msg, headers, fp) + else: + visited = new.redirect_dict = req.redirect_dict = {} + visited[newurl] = visited.get(newurl, 0) + 1 + + # Don't close the fp until we are sure that we won't use it + # with HTTPError. + fp.read() + fp.close() + + return self.parent.open(new, timeout=req.timeout) + + http_error_301 = http_error_303 = http_error_307 = http_error_302 + + inf_msg = "The HTTP server returned a redirect error that would " \ + "lead to an infinite loop.\n" \ + "The last 30x error message was:\n" + + +def _parse_proxy(proxy): + """Return (scheme, user, password, host/port) given a URL or an authority. + + If a URL is supplied, it must have an authority (host:port) component. + According to RFC 3986, having an authority component means the URL must + have two slashes after the scheme: + + >>> _parse_proxy('file:/ftp.example.com/') + Traceback (most recent call last): + ValueError: proxy URL with no authority: 'file:/ftp.example.com/' + + The first three items of the returned tuple may be None. 
+ + Examples of authority parsing: + + >>> _parse_proxy('proxy.example.com') + (None, None, None, 'proxy.example.com') + >>> _parse_proxy('proxy.example.com:3128') + (None, None, None, 'proxy.example.com:3128') + + The authority component may optionally include userinfo (assumed to be + username:password): + + >>> _parse_proxy('joe:password at proxy.example.com') + (None, 'joe', 'password', 'proxy.example.com') + >>> _parse_proxy('joe:password at proxy.example.com:3128') + (None, 'joe', 'password', 'proxy.example.com:3128') + + Same examples, but with URLs instead: + + >>> _parse_proxy('http://proxy.example.com/') + ('http', None, None, 'proxy.example.com') + >>> _parse_proxy('http://proxy.example.com:3128/') + ('http', None, None, 'proxy.example.com:3128') + >>> _parse_proxy('http://joe:password at proxy.example.com/') + ('http', 'joe', 'password', 'proxy.example.com') + >>> _parse_proxy('http://joe:password at proxy.example.com:3128') + ('http', 'joe', 'password', 'proxy.example.com:3128') + + Everything after the authority is ignored: + + >>> _parse_proxy('ftp://joe:password at proxy.example.com/rubbish:3128') + ('ftp', 'joe', 'password', 'proxy.example.com') + + Test for no trailing '/' case: + + >>> _parse_proxy('http://joe:password at proxy.example.com') + ('http', 'joe', 'password', 'proxy.example.com') + + """ + scheme, r_scheme = splittype(proxy) + if not r_scheme.startswith("/"): + # authority + scheme = None + authority = proxy + else: + # URL + if not r_scheme.startswith("//"): + raise ValueError("proxy URL with no authority: %r" % proxy) + # We have an authority, so for RFC 3986-compliant URLs (by ss 3. + # and 3.3.), path is empty or starts with '/' + end = r_scheme.find("/", 2) + if end == -1: + end = None + authority = r_scheme[2:end] + userinfo, hostport = splituser(authority) + if userinfo is not None: + user, password = splitpasswd(userinfo) + else: + user = password = None + return scheme, user, password, hostport + +class ProxyHandler(BaseHandler): + # Proxies must be in front + handler_order = 100 + + def __init__(self, proxies=None): + if proxies is None: + proxies = getproxies() + assert hasattr(proxies, 'has_key'), "proxies must be a mapping" + self.proxies = proxies + for type, url in proxies.items(): + setattr(self, '%s_open' % type, + lambda r, proxy=url, type=type, meth=self.proxy_open: \ + meth(r, proxy, type)) + + def proxy_open(self, req, proxy, type): + orig_type = req.get_type() + proxy_type, user, password, hostport = _parse_proxy(proxy) + + if proxy_type is None: + proxy_type = orig_type + + if req.host and proxy_bypass(req.host): + return None + + if user and password: + user_pass = '%s:%s' % (unquote(user), unquote(password)) + creds = base64.b64encode(user_pass).strip() + req.add_header('Proxy-authorization', 'Basic ' + creds) + hostport = unquote(hostport) + req.set_proxy(hostport, proxy_type) + + if orig_type == proxy_type or orig_type == 'https': + # let other handlers take care of it + return None + else: + # need to start over, because the other handlers don't + # grok the proxy's URL type + # e.g. 
if we have a constructor arg proxies like so: + # {'http': 'ftp://proxy.example.com'}, we may end up turning + # a request for http://acme.example.com/a into one for + # ftp://proxy.example.com/a + return self.parent.open(req, timeout=req.timeout) + +class HTTPPasswordMgr: + + def __init__(self): + self.passwd = {} + + def add_password(self, realm, uri, user, passwd): + # uri could be a single URI or a sequence + if isinstance(uri, basestring): + uri = [uri] + if not realm in self.passwd: + self.passwd[realm] = {} + for default_port in True, False: + reduced_uri = tuple( + [self.reduce_uri(u, default_port) for u in uri]) + self.passwd[realm][reduced_uri] = (user, passwd) + + def find_user_password(self, realm, authuri): + domains = self.passwd.get(realm, {}) + for default_port in True, False: + reduced_authuri = self.reduce_uri(authuri, default_port) + for uris, authinfo in domains.iteritems(): + for uri in uris: + if self.is_suburi(uri, reduced_authuri): + return authinfo + return None, None + + def reduce_uri(self, uri, default_port=True): + """Accept authority or URI and extract only the authority and path.""" + # note HTTP URLs do not have a userinfo component + parts = urlparse.urlsplit(uri) + if parts[1]: + # URI + scheme = parts[0] + authority = parts[1] + path = parts[2] or '/' + else: + # host or host:port + scheme = None + authority = uri + path = '/' + host, port = splitport(authority) + if default_port and port is None and scheme is not None: + dport = {"http": 80, + "https": 443, + }.get(scheme) + if dport is not None: + authority = "%s:%d" % (host, dport) + return authority, path + + def is_suburi(self, base, test): + """Check if test is below base in a URI tree + + Both args must be URIs in reduced form. + """ + if base == test: + return True + if base[0] != test[0]: + return False + common = posixpath.commonprefix((base[1], test[1])) + if len(common) == len(base[1]): + return True + return False + + +class HTTPPasswordMgrWithDefaultRealm(HTTPPasswordMgr): + + def find_user_password(self, realm, authuri): + user, password = HTTPPasswordMgr.find_user_password(self, realm, + authuri) + if user is not None: + return user, password + return HTTPPasswordMgr.find_user_password(self, None, authuri) + + +class AbstractBasicAuthHandler: + + # XXX this allows for multiple auth-schemes, but will stupidly pick + # the last one with a realm specified. + + # allow for double- and single-quoted realm values + # (single quotes are a violation of the RFC, but appear in the wild) + rx = re.compile('(?:.*,)*[ \t]*([^ \t]+)[ \t]+' + 'realm=(["\'])(.*?)\\2', re.I) + + # XXX could pre-emptively send auth info already accepted (RFC 2617, + # end of section 2, and section 1.2 immediately after "credentials" + # production). + + def __init__(self, password_mgr=None): + if password_mgr is None: + password_mgr = HTTPPasswordMgr() + self.passwd = password_mgr + self.add_password = self.passwd.add_password + self.retried = 0 + + def reset_retry_count(self): + self.retried = 0 + + def http_error_auth_reqed(self, authreq, host, req, headers): + # host may be an authority (without userinfo) or a URL with an + # authority + # XXX could be multiple headers + authreq = headers.get(authreq, None) + + if self.retried > 5: + # retry sending the username:password 5 times before failing. 
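As an aside on the HTTPPasswordMgr defined above: credentials are keyed by
realm and by reduced (authority, path) pairs, so a lookup succeeds for any
URI at or below the one that was registered.  A minimal sketch reusing the
credentials from the module docstring:

    mgr = HTTPPasswordMgr()
    mgr.add_password(realm='PDQ Application',
                     uri='https://mahler:8092/site-updates.py',
                     user='klem', passwd='geheim$parole')
    mgr.find_user_password('PDQ Application',
                           'https://mahler:8092/site-updates.py')
    # -> ('klem', 'geheim$parole')
    mgr.find_user_password('PDQ Application', 'https://example.org/')
    # -> (None, None), since the authority does not match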
+ raise HTTPError(req.get_full_url(), 401, "basic auth failed", + headers, None) + else: + self.retried += 1 + + if authreq: + mo = AbstractBasicAuthHandler.rx.search(authreq) + if mo: + scheme, quote, realm = mo.groups() + if scheme.lower() == 'basic': + response = self.retry_http_basic_auth(host, req, realm) + if response and response.code != 401: + self.retried = 0 + return response + + def retry_http_basic_auth(self, host, req, realm): + user, pw = self.passwd.find_user_password(realm, host) + if pw is not None: + raw = "%s:%s" % (user, pw) + auth = 'Basic %s' % base64.b64encode(raw).strip() + if req.headers.get(self.auth_header, None) == auth: + return None + req.add_unredirected_header(self.auth_header, auth) + return self.parent.open(req, timeout=req.timeout) + else: + return None + + +class HTTPBasicAuthHandler(AbstractBasicAuthHandler, BaseHandler): + + auth_header = 'Authorization' + + def http_error_401(self, req, fp, code, msg, headers): + url = req.get_full_url() + response = self.http_error_auth_reqed('www-authenticate', + url, req, headers) + self.reset_retry_count() + return response + + +class ProxyBasicAuthHandler(AbstractBasicAuthHandler, BaseHandler): + + auth_header = 'Proxy-authorization' + + def http_error_407(self, req, fp, code, msg, headers): + # http_error_auth_reqed requires that there is no userinfo component in + # authority. Assume there isn't one, since urllib2 does not (and + # should not, RFC 3986 s. 3.2.1) support requests for URLs containing + # userinfo. + authority = req.get_host() + response = self.http_error_auth_reqed('proxy-authenticate', + authority, req, headers) + self.reset_retry_count() + return response + + +def randombytes(n): + """Return n random bytes.""" + # Use /dev/urandom if it is available. Fall back to random module + # if not. It might be worthwhile to extend this function to use + # other platform-specific mechanisms for getting random bytes. + if os.path.exists("/dev/urandom"): + f = open("/dev/urandom") + s = f.read(n) + f.close() + return s + else: + L = [chr(random.randrange(0, 256)) for i in range(n)] + return "".join(L) + +class AbstractDigestAuthHandler: + # Digest authentication is specified in RFC 2617. + + # XXX The client does not inspect the Authentication-Info header + # in a successful response. + + # XXX It should be possible to test this implementation against + # a mock server that just generates a static set of challenges. + + # XXX qop="auth-int" supports is shaky + + def __init__(self, passwd=None): + if passwd is None: + passwd = HTTPPasswordMgr() + self.passwd = passwd + self.add_password = self.passwd.add_password + self.retried = 0 + self.nonce_count = 0 + self.last_nonce = None + + def reset_retry_count(self): + self.retried = 0 + + def http_error_auth_reqed(self, auth_header, host, req, headers): + authreq = headers.get(auth_header, None) + if self.retried > 5: + # Don't fail endlessly - if we failed once, we'll probably + # fail a second time. Hm. Unless the Password Manager is + # prompting for the information. Crap. 
This isn't great + # but it's better than the current 'repeat until recursion + # depth exceeded' approach + raise HTTPError(req.get_full_url(), 401, "digest auth failed", + headers, None) + else: + self.retried += 1 + if authreq: + scheme = authreq.split()[0] + if scheme.lower() == 'digest': + return self.retry_http_digest_auth(req, authreq) + + def retry_http_digest_auth(self, req, auth): + token, challenge = auth.split(' ', 1) + chal = parse_keqv_list(parse_http_list(challenge)) + auth = self.get_authorization(req, chal) + if auth: + auth_val = 'Digest %s' % auth + if req.headers.get(self.auth_header, None) == auth_val: + return None + req.add_unredirected_header(self.auth_header, auth_val) + resp = self.parent.open(req, timeout=req.timeout) + return resp + + def get_cnonce(self, nonce): + # The cnonce-value is an opaque + # quoted string value provided by the client and used by both client + # and server to avoid chosen plaintext attacks, to provide mutual + # authentication, and to provide some message integrity protection. + # This isn't a fabulous effort, but it's probably Good Enough. + dig = hashlib.sha1("%s:%s:%s:%s" % (self.nonce_count, nonce, time.ctime(), + randombytes(8))).hexdigest() + return dig[:16] + + def get_authorization(self, req, chal): + try: + realm = chal['realm'] + nonce = chal['nonce'] + qop = chal.get('qop') + algorithm = chal.get('algorithm', 'MD5') + # mod_digest doesn't send an opaque, even though it isn't + # supposed to be optional + opaque = chal.get('opaque', None) + except KeyError: + return None + + H, KD = self.get_algorithm_impls(algorithm) + if H is None: + return None + + user, pw = self.passwd.find_user_password(realm, req.get_full_url()) + if user is None: + return None + + # XXX not implemented yet + if req.has_data(): + entdig = self.get_entity_digest(req.get_data(), chal) + else: + entdig = None + + A1 = "%s:%s:%s" % (user, realm, pw) + A2 = "%s:%s" % (req.get_method(), + # XXX selector: what about proxies and full urls + req.get_selector()) + if qop == 'auth': + if nonce == self.last_nonce: + self.nonce_count += 1 + else: + self.nonce_count = 1 + self.last_nonce = nonce + + ncvalue = '%08x' % self.nonce_count + cnonce = self.get_cnonce(nonce) + noncebit = "%s:%s:%s:%s:%s" % (nonce, ncvalue, cnonce, qop, H(A2)) + respdig = KD(H(A1), noncebit) + elif qop is None: + respdig = KD(H(A1), "%s:%s" % (nonce, H(A2))) + else: + # XXX handle auth-int. + raise URLError("qop '%s' is not supported." % qop) + + # XXX should the partial digests be encoded too? 
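For reference, the respdig computed above is the RFC 2617 digest response.
A standalone sketch of the same computation for the default MD5 algorithm
and qop="auth" (the user, realm, password, selector, nonce, cnonce and nc
values below are placeholders):

    import hashlib
    H = lambda x: hashlib.md5(x).hexdigest()
    KD = lambda s, d: H("%s:%s" % (s, d))
    HA1 = H("klem:PDQ Application:geheim$parole")  # A1 = user:realm:password
    HA2 = H("GET:/site-updates.py")                # A2 = method:selector
    respdig = KD(HA1, "somenonce:00000001:somecnonce:auth:" + HA2)

The credentials string assembled just below (returned by get_authorization)
carries this value in its response="..." field.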
+ + base = 'username="%s", realm="%s", nonce="%s", uri="%s", ' \ + 'response="%s"' % (user, realm, nonce, req.get_selector(), + respdig) + if opaque: + base += ', opaque="%s"' % opaque + if entdig: + base += ', digest="%s"' % entdig + base += ', algorithm="%s"' % algorithm + if qop: + base += ', qop=auth, nc=%s, cnonce="%s"' % (ncvalue, cnonce) + return base + + def get_algorithm_impls(self, algorithm): + # algorithm should be case-insensitive according to RFC2617 + algorithm = algorithm.upper() + # lambdas assume digest modules are imported at the top level + if algorithm == 'MD5': + H = lambda x: hashlib.md5(x).hexdigest() + elif algorithm == 'SHA': + H = lambda x: hashlib.sha1(x).hexdigest() + # XXX MD5-sess + KD = lambda s, d: H("%s:%s" % (s, d)) + return H, KD + + def get_entity_digest(self, data, chal): + # XXX not implemented yet + return None + + +class HTTPDigestAuthHandler(BaseHandler, AbstractDigestAuthHandler): + """An authentication protocol defined by RFC 2069 + + Digest authentication improves on basic authentication because it + does not transmit passwords in the clear. + """ + + auth_header = 'Authorization' + handler_order = 490 # before Basic auth + + def http_error_401(self, req, fp, code, msg, headers): + host = urlparse.urlparse(req.get_full_url())[1] + retry = self.http_error_auth_reqed('www-authenticate', + host, req, headers) + self.reset_retry_count() + return retry + + +class ProxyDigestAuthHandler(BaseHandler, AbstractDigestAuthHandler): + + auth_header = 'Proxy-Authorization' + handler_order = 490 # before Basic auth + + def http_error_407(self, req, fp, code, msg, headers): + host = req.get_host() + retry = self.http_error_auth_reqed('proxy-authenticate', + host, req, headers) + self.reset_retry_count() + return retry + +class AbstractHTTPHandler(BaseHandler): + + def __init__(self, debuglevel=0): + self._debuglevel = debuglevel + + def set_http_debuglevel(self, level): + self._debuglevel = level + + def do_request_(self, request): + host = request.get_host() + if not host: + raise URLError('no host given') + + if request.has_data(): # POST + data = request.get_data() + if not request.has_header('Content-type'): + request.add_unredirected_header( + 'Content-type', + 'application/x-www-form-urlencoded') + if not request.has_header('Content-length'): + request.add_unredirected_header( + 'Content-length', '%d' % len(data)) + + sel_host = host + if request.has_proxy(): + scheme, sel = splittype(request.get_selector()) + sel_host, sel_path = splithost(sel) + + if not request.has_header('Host'): + request.add_unredirected_header('Host', sel_host) + for name, value in self.parent.addheaders: + name = name.capitalize() + if not request.has_header(name): + request.add_unredirected_header(name, value) + + return request + + def do_open(self, http_class, req): + """Return an addinfourl object for the request, using http_class. + + http_class must implement the HTTPConnection API from httplib. + The addinfourl return value is a file-like object. 
It also + has methods and attributes including: + - info(): return a mimetools.Message object for the headers + - geturl(): return the original request URL + - code: HTTP status code + """ + host = req.get_host() + if not host: + raise URLError('no host given') + + h = http_class(host, timeout=req.timeout) # will parse host:port + h.set_debuglevel(self._debuglevel) + + headers = dict(req.unredirected_hdrs) + headers.update(dict((k, v) for k, v in req.headers.items() + if k not in headers)) + + # We want to make an HTTP/1.1 request, but the addinfourl + # class isn't prepared to deal with a persistent connection. + # It will try to read all remaining data from the socket, + # which will block while the server waits for the next request. + # So make sure the connection gets closed after the (only) + # request. + headers["Connection"] = "close" + headers = dict( + (name.title(), val) for name, val in headers.items()) + + if req._tunnel_host: + tunnel_headers = {} + proxy_auth_hdr = "Proxy-Authorization" + if proxy_auth_hdr in headers: + tunnel_headers[proxy_auth_hdr] = headers[proxy_auth_hdr] + # Proxy-Authorization should not be sent to origin + # server. + del headers[proxy_auth_hdr] + h.set_tunnel(req._tunnel_host, headers=tunnel_headers) + + try: + h.request(req.get_method(), req.get_selector(), req.data, headers) + try: + r = h.getresponse(buffering=True) + except TypeError: #buffering kw not supported + r = h.getresponse() + except socket.error, err: # XXX what error? + h.close() + raise URLError(err) + + # Pick apart the HTTPResponse object to get the addinfourl + # object initialized properly. + + # Wrap the HTTPResponse object in socket's file object adapter + # for Windows. That adapter calls recv(), so delegate recv() + # to read(). This weird wrapping allows the returned object to + # have readline() and readlines() methods. + + # XXX It might be better to extract the read buffering code + # out of socket._fileobject() and into a base class. + + r.recv = r.read + fp = socket._fileobject(r, close=True) + + resp = addinfourl(fp, r.msg, req.get_full_url()) + resp.code = r.status + resp.msg = r.reason + return resp + + +class HTTPHandler(AbstractHTTPHandler): + + def http_open(self, req): + return self.do_open(httplib.HTTPConnection, req) + + http_request = AbstractHTTPHandler.do_request_ + +if hasattr(httplib, 'HTTPS'): + class HTTPSHandler(AbstractHTTPHandler): + + def https_open(self, req): + return self.do_open(httplib.HTTPSConnection, req) + + https_request = AbstractHTTPHandler.do_request_ + +class HTTPCookieProcessor(BaseHandler): + def __init__(self, cookiejar=None): + import cookielib + if cookiejar is None: + cookiejar = cookielib.CookieJar() + self.cookiejar = cookiejar + + def http_request(self, request): + self.cookiejar.add_cookie_header(request) + return request + + def http_response(self, request, response): + self.cookiejar.extract_cookies(response, request) + return response + + https_request = http_request + https_response = http_response + +class UnknownHandler(BaseHandler): + def unknown_open(self, req): + type = req.get_type() + raise URLError('unknown url type: %s' % type) + +def parse_keqv_list(l): + """Parse list of key=value strings where keys are not duplicated.""" + parsed = {} + for elt in l: + k, v = elt.split('=', 1) + if v[0] == '"' and v[-1] == '"': + v = v[1:-1] + parsed[k] = v + return parsed + +def parse_http_list(s): + """Parse lists as described by RFC 2068 Section 2. 
+ + In particular, parse comma-separated lists where the elements of + the list may include quoted-strings. A quoted-string could + contain a comma. A non-quoted string could have quotes in the + middle. Neither commas nor quotes count if they are escaped. + Only double-quotes count, not single-quotes. + """ + res = [] + part = '' + + escape = quote = False + for cur in s: + if escape: + part += cur + escape = False + continue + if quote: + if cur == '\\': + escape = True + continue + elif cur == '"': + quote = False + part += cur + continue + + if cur == ',': + res.append(part) + part = '' + continue + + if cur == '"': + quote = True + + part += cur + + # append last part + if part: + res.append(part) + + return [part.strip() for part in res] + +def _safe_gethostbyname(host): + try: + return socket.gethostbyname(host) + except socket.gaierror: + return None + +class FileHandler(BaseHandler): + # Use local file or FTP depending on form of URL + def file_open(self, req): + url = req.get_selector() + if url[:2] == '//' and url[2:3] != '/' and (req.host and + req.host != 'localhost'): + req.type = 'ftp' + return self.parent.open(req) + else: + return self.open_local_file(req) + + # names for the localhost + names = None + def get_names(self): + if FileHandler.names is None: + try: + FileHandler.names = tuple( + socket.gethostbyname_ex('localhost')[2] + + socket.gethostbyname_ex(socket.gethostname())[2]) + except socket.gaierror: + FileHandler.names = (socket.gethostbyname('localhost'),) + return FileHandler.names + + # not entirely sure what the rules are here + def open_local_file(self, req): + import email.utils + import mimetypes + host = req.get_host() + filename = req.get_selector() + localfile = url2pathname(filename) + try: + stats = os.stat(localfile) + size = stats.st_size + modified = email.utils.formatdate(stats.st_mtime, usegmt=True) + mtype = mimetypes.guess_type(filename)[0] + headers = mimetools.Message(StringIO( + 'Content-type: %s\nContent-length: %d\nLast-modified: %s\n' % + (mtype or 'text/plain', size, modified))) + if host: + host, port = splitport(host) + if not host or \ + (not port and _safe_gethostbyname(host) in self.get_names()): + if host: + origurl = 'file://' + host + filename + else: + origurl = 'file://' + filename + return addinfourl(open(localfile, 'rb'), headers, origurl) + except OSError, msg: + # urllib2 users shouldn't expect OSErrors coming from urlopen() + raise URLError(msg) + raise URLError('file not on local host') + +class FTPHandler(BaseHandler): + def ftp_open(self, req): + import ftplib + import mimetypes + host = req.get_host() + if not host: + raise URLError('ftp error: no host given') + host, port = splitport(host) + if port is None: + port = ftplib.FTP_PORT + else: + port = int(port) + + # username/password handling + user, host = splituser(host) + if user: + user, passwd = splitpasswd(user) + else: + passwd = None + host = unquote(host) + user = user or '' + passwd = passwd or '' + + try: + host = socket.gethostbyname(host) + except socket.error, msg: + raise URLError(msg) + path, attrs = splitattr(req.get_selector()) + dirs = path.split('/') + dirs = map(unquote, dirs) + dirs, file = dirs[:-1], dirs[-1] + if dirs and not dirs[0]: + dirs = dirs[1:] + try: + fw = self.connect_ftp(user, passwd, host, port, dirs, req.timeout) + type = file and 'I' or 'D' + for attr in attrs: + attr, value = splitvalue(attr) + if attr.lower() == 'type' and \ + value in ('a', 'A', 'i', 'I', 'd', 'D'): + type = value.upper() + fp, retrlen = fw.retrfile(file, type) 
+ headers = "" + mtype = mimetypes.guess_type(req.get_full_url())[0] + if mtype: + headers += "Content-type: %s\n" % mtype + if retrlen is not None and retrlen >= 0: + headers += "Content-length: %d\n" % retrlen + sf = StringIO(headers) + headers = mimetools.Message(sf) + return addinfourl(fp, headers, req.get_full_url()) + except ftplib.all_errors, msg: + raise URLError, ('ftp error: %s' % msg), sys.exc_info()[2] + + def connect_ftp(self, user, passwd, host, port, dirs, timeout): + fw = ftpwrapper(user, passwd, host, port, dirs, timeout) +## fw.ftp.set_debuglevel(1) + return fw + +class CacheFTPHandler(FTPHandler): + # XXX would be nice to have pluggable cache strategies + # XXX this stuff is definitely not thread safe + def __init__(self): + self.cache = {} + self.timeout = {} + self.soonest = 0 + self.delay = 60 + self.max_conns = 16 + + def setTimeout(self, t): + self.delay = t + + def setMaxConns(self, m): + self.max_conns = m + + def connect_ftp(self, user, passwd, host, port, dirs, timeout): + key = user, host, port, '/'.join(dirs), timeout + if key in self.cache: + self.timeout[key] = time.time() + self.delay + else: + self.cache[key] = ftpwrapper(user, passwd, host, port, dirs, timeout) + self.timeout[key] = time.time() + self.delay + self.check_cache() + return self.cache[key] + + def check_cache(self): + # first check for old ones + t = time.time() + if self.soonest <= t: + for k, v in self.timeout.items(): + if v < t: + self.cache[k].close() + del self.cache[k] + del self.timeout[k] + self.soonest = min(self.timeout.values()) + + # then check the size + if len(self.cache) == self.max_conns: + for k, v in self.timeout.items(): + if v == self.soonest: + del self.cache[k] + del self.timeout[k] + break + self.soonest = min(self.timeout.values()) diff --git a/lib_pypy/_collections.py b/lib_pypy/_collections.py --- a/lib_pypy/_collections.py +++ b/lib_pypy/_collections.py @@ -379,12 +379,14 @@ class defaultdict(dict): def __init__(self, *args, **kwds): - self.default_factory = None - if 'default_factory' in kwds: - self.default_factory = kwds.pop('default_factory') - elif len(args) > 0 and (callable(args[0]) or args[0] is None): - self.default_factory = args[0] + if len(args) > 0: + default_factory = args[0] args = args[1:] + if not callable(default_factory) and default_factory is not None: + raise TypeError("first argument must be callable") + else: + default_factory = None + self.default_factory = default_factory super(defaultdict, self).__init__(*args, **kwds) def __missing__(self, key): @@ -404,7 +406,7 @@ recurse.remove(id(self)) def copy(self): - return type(self)(self, default_factory=self.default_factory) + return type(self)(self.default_factory, self) def __copy__(self): return self.copy() diff --git a/lib_pypy/_ctypes/pointer.py b/lib_pypy/_ctypes/pointer.py --- a/lib_pypy/_ctypes/pointer.py +++ b/lib_pypy/_ctypes/pointer.py @@ -124,7 +124,8 @@ # for now, we always allow types.pointer, else a lot of tests # break. 
We need to rethink how pointers are represented, though if my_ffitype is not ffitype and ffitype is not _ffi.types.void_p: - raise ArgumentError, "expected %s instance, got %s" % (type(value), ffitype) + raise ArgumentError("expected %s instance, got %s" % (type(value), + ffitype)) return value._get_buffer_value() def _cast_addr(obj, _, tp): diff --git a/lib_pypy/_ctypes/structure.py b/lib_pypy/_ctypes/structure.py --- a/lib_pypy/_ctypes/structure.py +++ b/lib_pypy/_ctypes/structure.py @@ -17,7 +17,7 @@ if len(f) == 3: if (not hasattr(tp, '_type_') or not isinstance(tp._type_, str) - or tp._type_ not in "iIhHbBlL"): + or tp._type_ not in "iIhHbBlLqQ"): #XXX: are those all types? # we just dont get the type name # in the interp levle thrown TypeError diff --git a/lib_pypy/_pypy_irc_topic.py b/lib_pypy/_pypy_irc_topic.py --- a/lib_pypy/_pypy_irc_topic.py +++ b/lib_pypy/_pypy_irc_topic.py @@ -1,117 +1,6 @@ -"""qvfgbcvna naq hgbcvna punvef -qlfgbcvna naq hgbcvna punvef -V'z fbeel, pbhyq lbh cyrnfr abg nterr jvgu gur png nf jryy? -V'z fbeel, pbhyq lbh cyrnfr abg nterr jvgu gur punve nf jryy? -jr cnffrq gur RH erivrj -cbfg RhebClguba fcevag fgnegf 12.IVV.2007, 10nz -RhebClguba raqrq -n Pyrna Ragrecevfrf cebqhpgvba -npnqrzl vf n pbzcyvpngrq ebyr tnzr -npnqrzvn vf n pbzcyvpngrq ebyr tnzr -jbexvat pbqr vf crn fbhc -abg lbhe snhyg, zber yvxr vg'f n zbivat gnetrg -guvf fragrapr vf snyfr -abguvat vf gehr -Yncfnat Fbhpubat -Oenpunzhgnaqn -fbeel, V'yy grnpu gur pnpghf ubj gb fjvz yngre -Jul fb znal znal znal znal znal ivbyvaf? -Jul fb znal znal znal znal znal bowrpgf? -"eha njnl naq yvir ba n snez" nccebnpu gb fbsgjner qrirybczrag -"va snpg, lbh zvtug xabj zber nobhg gur genafyngvba gbbypunva nsgre znfgrevat eclguba guna fbzr angvir fcrnxre xabjf nobhg uvf zbgure gbathr" - kbeNkNk -"jurer qvq nyy gur ivbyvaf tb?" -- ClCl fgnghf oybt: uggc://zberclcl.oybtfcbg.pbz/ -uggc://kxpq.pbz/353/ -pnfhnyvgl ivbyngvbaf naq sylvat -wrgmg abpu fpubxbynqvtre -R09 2X @PNN:85? -vs lbh'er gelvat gb oybj hc fghss, jub pnerf? -vs fghss oybjf hc, lbh pner -2008 jvyy or gur lrne bs clcl ba gur qrfxgbc -2008 jvyy or gur lrne bs gur qrfxgbc ba #clcl -2008 jvyy or gur lrne bs gur qrfxgbc ba #clcl, Wnahnel jvyy or gur zbagu bs gur nyc gbcf -lrf, ohg jung'g gur frafr bs 0 < "qhena qhena" -eclguba: flagnk naq frznagvpf bs clguba, fcrrq bs p, erfgevpgvbaf bs wnin naq pbzcvyre reebe zrffntrf nf crargenoyr nf ZHZCF +"""eclguba: flagnk naq frznagvpf bs clguba, fcrrq bs p, erfgevpgvbaf bs wnin naq pbzcvyre reebe zrffntrf nf crargenoyr nf ZHZCF pglcrf unf n fcva bs 1/3 ' ' vf n fcnpr gbb -2009 jvyy or gur lrne bs WVG ba gur qrfxgbc -N ynathntr vf n qvnyrpg jvgu na nezl naq anil -gbcvpf ner sbe gur srroyr zvaqrq -2009 vf gur lrne bs ersyrpgvba ba gur qrfxgbc -gur tybor vf bhe cbal, gur pbfzbf bhe erny ubefr -jub nz V naq vs lrf, ubj znal? -cebtenzzvat va orq vf n cresrpgyl svar npgvivgl -zbber'f ynj vf n qeht jvgu gur jbefg pbzr qbja -EClguba: jr hfr vg fb lbh qba'g unir gb -Zbber'f ynj vf n qeht jvgu gur jbefg pbzr qbja. EClguba: haqrpvqrq. -guvatf jvyy or avpr naq fghss -qba'g cbfg yvaxf gb cngragf urer -Abg lbhe hfhny nanylfrf. 
-Gur Neg bs gur Punaary -Clguba 300 -V fhccbfr ZO bs UGZY cre frpbaq vf abg gur hfhny fcrrq zrnfher crbcyr jbhyq rkcrpg sbe n wvg -gur fha arire frgf ba gur ClCl rzcver -ghegyrf ner snfgre guna lbh guvax -cebtenzzvat vf na nrfgrguvp raqrnibhe -P vf tbbq sbe fbzrguvat, whfg abg sbe jevgvat fbsgjner -trezna vf tbbq sbe fbzrguvat, whfg abg sbe jevgvat fbsgjner -trezna vf tbbq sbe artngvbaf, whfg abg sbe jevgvat fbsgjner -# nffreg qvq abg penfu -lbh fubhyq fgneg n cresrpg fbsgjner zbirzrag -lbh fubhyq fgneg n cresrpg punaary gbcvp zbirzrag -guvf vf n cresrpg punaary gbcvp -guvf vf n frys-ersreragvny punaary gbcvp -crrcubcr bcgvzvmngvbaf ner jung n Fhssvpvragyl Fzneg Pbzcvyre hfrf -"crrcubcr" bcgvzvmngvbaf ner jung na bcgvzvfgvp Pbzcvyre hfrf -pubbfr lbhe unpx -gur 'fhcre' xrljbeq vf abg gung uhttnoyr -wlguba cngpurf ner abg rabhtu sbe clcl -- qb lbh xabj oreyva? - nyy bs vg? - jryy, whfg oreyva -- ubj jvyy gur snpg gung gurl ner hfrq va bhe ercy punatr bhe gbcvpf? -- ubj pna vg rire unir jbexrq? -- jurer fubhyq gur unpx or fgberq? -- Vg'f uneq gb fnl rknpgyl jung pbafgvghgrf erfrnepu va gur pbzchgre jbeyq, ohg nf n svefg nccebkvzngvba, vg'f fbsgjner gung qbrfa'g unir hfref. -- Cebtenzzvat vf nyy nobhg xabjvat jura gb obvy gur benatr fcbatr qbaxrl npebff gur cuvyyvcvarf -- Jul fb znal, znal, znal, znal, znal, znal qhpxyvatf? -- ab qrgnvy vf bofpher rabhtu gb abg unir fbzr pbqr qrcraqvat ba vg. -- jung V trarenyyl jnag vf serr fcrrqhcf -- nyy bs ClCl vf kv-dhnyvgl -"lbh pna nyjnlf xvyy -9 be bf._rkvg() vs lbh'er va n uheel" -Ohernhpengf ohvyq npnqrzvp rzcverf juvpu puhea bhg zrnavatyrff fbyhgvbaf gb veeryrinag ceboyrzf. -vg'f abg n unpx, vg'f n jbexnebhaq -ClCl qbrfa'g unir pbcbylinevnqvp qrcraqragyl-zbabzbecurq ulcresyhknqf -ClCl qbrfa'g punatr gur shaqnzragny culfvpf pbafgnagf -Qnapr bs gur Fhtnecyhz Snvel -Wnin vf whfg tbbq rabhtu gb or cenpgvpny, ohg abg tbbq rabhtu gb or hfnoyr. -RhebClguba vf unccravat, qba'g rkcrpg nal dhvpx erfcbafr gvzrf. -"V jbhyq yvxr gb fgnl njnl sebz ernyvgl gura" -"gung'f jul gur 'be' vf ernyyl na 'naq' " -jvgu nyy nccebcevngr pbagrkghnyvfngvbavat -qba'g gevc ba gur cbjre pbeq -vzcyrzragvat YBTB va YBTB: "ghegyrf nyy gur jnl qbja" -gur ohooyrfbeg jbhyq or gur jebat jnl gb tb -gur cevapvcyr bs pbafreingvba bs zrff -gb fnir n gerr, rng n ornire -Qre Ovore znpugf evpugvt: Antg nyyrf xnchgg. -"Nal jbeyqivrj gung vfag jenpxrq ol frys-qbhog naq pbashfvba bire vgf bja vqragvgl vf abg n jbeyqivrj sbe zr." - Fpbgg Nnebafba -jr oryvrir va cnapnxrf, znlor -jr oryvrir va ghegyrf, znlor -jr qrsvavgryl oryvrir va zrgn -gur zngevk unf lbh -"Yvsr vf uneq, gura lbh anc" - n png -Vf Nezva ubzr jura gur havirefr prnfrf gb rkvfg? -Qhrffryqbes fcevag fgnegrq -frys.nobeeg("pnaabg ybnq negvpyrf") -QRAGVFGEL FLZOBY YVTUG IREGVPNY NAQ JNIR -"Gur UUH pnzchf vf n tbbq Dhnxr yriry" - Nezva -"Gur UUH pnzchf jbhyq or n greevoyr dhnxr yriry - lbh'q arire unir n pyhr jurer lbh ner" - zvpunry -N enqvbnpgvir png unf 18 unys-yvirf. - : j [fvtu] -f -pbybe-pbqrq oyhrf -"Neebtnapr va pbzchgre fpvrapr vf zrnfherq va anab-Qvwxfgenf." -ClCl arrqf n Whfg-va-Gvzr WVG -"Lbh pna'g gvzr geniry whfg ol frggvat lbhe pybpxf jebat" -Gjb guernqf jnyx vagb n one. Gur onexrrcre ybbxf hc naq lryyf, "url, V jnag qba'g nal pbaqvgvbaf enpr yvxr gvzr ynfg!" Clguba 2.k rfg cerfdhr zbeg, ivir Clguba! Clguba 2.k vf abg qrnq Riregvzr fbzrbar nethrf jvgu "Fznyygnyx unf nyjnlf qbar K", vg vf nyjnlf n tbbq uvag gung fbzrguvat arrqf gb or punatrq snfg. 
- Znephf Qraxre @@ -119,7 +8,6 @@ __kkk__ naq __ekkk__ if bcrengvba fybgf: cnegvpyr dhnaghz fhcrecbfvgvba xvaq bs sha ClCl vf na rkpvgvat grpuabybtl gung yrgf lbh gb jevgr snfg, cbegnoyr, zhygv-cyngsbez vagrecergref jvgu yrff rssbeg Nezva: "Cebybt vf n zrff.", PS: "Ab, vg'f irel pbby!", Nezva: "Vfa'g guvf jung V fnvq?" - tbbq, grfgf ner hfrshy fbzrgvzrf :-) ClCl vf yvxr nofheq gurngre jr unir ab nagv-vzcbffvoyr fgvpx gung znxrf fher gung nyy lbhe cebtenzf unyg clcl vf n enpr orgjrra crbcyr funivat lnxf naq gur havirefr cebqhpvat zber orneqrq lnxf. Fb sne, gur havirefr vf jvaavat @@ -136,14 +24,14 @@ ClCl 1.1.0orgn eryrnfrq: uggc://pbqrfcrnx.arg/clcl/qvfg/clcl/qbp/eryrnfr-1.1.0.ugzy "gurer fubhyq or bar naq bayl bar boivbhf jnl gb qb vg". ClCl inevnag: "gurer pna or A unys-ohttl jnlf gb qb vg" 1.1 svany eryrnfrq: uggc://pbqrfcrnx.arg/clcl/qvfg/clcl/qbp/eryrnfr-1.1.0.ugzy -1.1 svany eryrnfrq | nzq64 naq ccp ner bayl ninvynoyr va ragrecevfr irefvba + nzq64 naq ccp ner bayl ninvynoyr va ragrecevfr irefvba Vf gurer n clcl gvzr? - vs lbh pna srry vg (?) gura gurer vf ab, abezny jbex vf fhpu zhpu yrff gvevat guna inpngvbaf ab, abezny jbex vf fb zhpu yrff gvevat guna inpngvbaf -SVEFG gurl vtaber lbh, gura gurl ynhtu ng lbh, gura gurl svtug lbh, gura lbh jva. +-SVEFG gurl vtaber lbh, gura gurl ynhtu ng lbh, gura gurl svtug lbh, gura lbh jva.- vg'f Fhaqnl, znlor vg'f Fhaqnl, ntnva -"3 + 3 = 8" Nagb va gur WVG gnyx +"3 + 3 = 8" - Nagb va gur WVG gnyx RPBBC vf unccravat RPBBC vf svavfurq cflpb rngf bar oenva cre vapu bs cebterff @@ -175,10 +63,108 @@ "nu, whfg va gvzr qbphzragngvba" (__nc__) ClCl vf abg n erny IZ: ab frtsnhyg unaqyref gb qb gur ener pnfrf lbh pna'g unir obgu pbairavrapr naq fcrrq -gur WVG qbrfa'g jbex ba BF/K (abi'09) -ab fhccbeg sbe BF/K evtug abj! (abi'09) fyvccref urvtug pna or zrnfherq va k86 ertvfgref clcl vf n enpr orgjrra gur vaqhfgel gelvat gb ohvyq znpuvarf jvgu zber naq zber erfbheprf, naq gur clcl qrirybcref gelvat gb rng nyy bs gurz. Fb sne, gur jvaare vf fgvyy hapyrne +"znl pbagnva ahgf naq/be lbhat cbvagref" +vg'f nyy irel fvzcyr, yvxr gur ubyvqnlf +unccl ClCl'f lrne 2010! +fnzhryr fnlf gung jr ybfg n enmbe. fb jr pna'g funir lnxf +"yrg'f abg or bofpher, hayrff jr ernyyl arrq gb" + (abg guernq-fnsr, ohg jryy, abguvat vf) +clcl unf znal ceboyrzf, ohg rnpu bar unf znal fbyhgvbaf +whfg nabgure vgrz (1.333...) ba bhe erny-ahzorerq gbqb yvfg +ClCl vf Fuveg Bevtnzv erfrnepu + nafjrevat n dhrfgvba: "ab -- sbe ng yrnfg bar cbffvoyr vagrecergngvba bs lbhe fragrapr" +eryrnfr 1.2 hcpbzvat +ClCl 1.2 eryrnfrq - uggc://clcl.bet/ +AB IPF QVFPHFFVBAF +EClguba vf n svar pnzry unve oehfu +ClCl vf n npghnyyl n ivfhnyvmngvba cebwrpg, jr whfg ohvyq vagrecergref gb unir vagrerfgvat qngn gb ivfhnyvmr +clcl vf yvxr fnhfntrf +naq abj sbe fbzrguvat pbzcyrgryl qvssrerag +n 10gu bs sberire vf 1u45 +pbeerpg pbqr qbrfag arrq nal grfgf +cbfgfgehpghenyvfz rgp. +clcl UVG trarengbe +gur arj clcl fcbeg vf gb cnff clcl ohtf nf pclguba ohtf +jr unir zhpu zber vagrecergref guna hfref +ClCl 1.3 njnvgvat eryrnfr +ClCl 1.3 eryrnfrq +vg frrzf gb zr gung bapr lbh frggyr ba na rkrphgvba / bowrpg zbqry naq / be olgrpbqr sbezng, lbh'ir nyernql qrpvqrq jung ynathntrf (jurer gur 'f' frrzf fhcresyhbhf) fhccbeg vf tbvat gb or svefg pynff sbe +"Nyy ceboyrzf va ClCl pna or fbyirq ol nabgure yriry bs vagrecergngvba" +ClCl 1.3 eryrnfrq (jvaqbjf ovanevrf vapyhqrq) +jul qvq lbh thlf unir gb znxr gur ohvygva sbeghar zber vagrerfgvat guna npghny jbex? 
v whfg pngpurq zlfrys erfgnegvat clcl 20 gvzrf +"jr hfrq gb unir n zrff jvgu na bofpher vagresnpr, abj jr unir zrff urer naq bofpher vagresnpr gurer. cebterff" crqebavf ba n clcl fcevag +"phcf bs pbssrr ner yvxr nanybtvrf va gung V'z znxvat bar evtug abj" +"vg'f nyjnlf hc gb hf, va n jnl be gur bgure" +ClCl vf infg, naq pbagnvaf zhygvghqrf +qravny vf eneryl n tbbq qrohttvat grpuavdhr +"Yrg'f tb." - "Jr pna'g" - "Jul abg?" - "Jr'er jnvgvat sbe n Genafyngvba." - (qrfcnvevatyl) "Nu!" +'gung'f qrsvavgryl n pnfr bs "hu????"' +va gurbel gurer vf gur Ybbc, va cenpgvpr gurer ner oevqtrf +gur uneqqevir - pbafgnag qngn cvytevzntr +ClCl vf n gbby gb xrrc bgurejvfr qnatrebhf zvaqf fnsryl bpphcvrq. +jr ner n trareny senzrjbex ohvyg ba pbafvfgrag nccyvpngvba bs nqubp-arff +gur jnl gb nibvq n jbexnebhaq vf gb vagebqhpr n fgebatre jbexnebhaq fbzrjurer ryfr +pnyyvat gur genafyngvba gbby punva n 'fpevcg' vf xvaq bs bssrafvir +ehaavat clcl-p genafyngr.cl vf n ovg yvxr jngpuvat n guevyyre zbivr, vg pbhyq pbafhzr nyy gur zrzbel ng nal gvzr +ehaavat clcl-p genafyngr.cl vf n ovg yvxr jngpuvat n guevyyre zbivr, vg pbhyq qvr ng nal gvzr orpnhfr bs gur 32-ovg 4TO yvzvg bs ENZ +Qh jvefg rora tranh qnf reervpura, jbena xrvare tynhog +vs fjvgmreynaq jrer jurer terrpr vf (ba vfynaqf) jbhyq gurl nyy or pbaarpgrq ol oevqtrf? +genafyngvat clcl jvgu pclguba vf fbbbbbb fybj +ClCl 1.4 eryrnfrq! +Jr ner abg urebrf, whfg irel cngvrag. +QBAR zrnaf vg'f qbar +jul gurer vf ab "ClCl 1.4 eryrnfrq" va gbcvp nal zber? +fabj! fabj! +svanyyl, zrephevny zvtengvba vf unccravat! +Gur zvtengvba gb zrephevny vf pbzcyrgrq! uggc://ovgohpxrg.bet/clcl/clcl +fabj! fabj! (gre) +unccl arj lrne +naq anaanaw gb lbh nf jryy +Frrvat nf gur ynjf bs culfvpf ner ntnvafg lbh, lbh unir gb pnershyyl pbafvqre lbhe fpbcr fb gung lbhe tbnyf ner ernfbanoyr. +nf hfhny va clcl, gur fbyhgvba nccrnef pbzcyrgryl qvfcebcbegvbangr gb gur ceboyrz naq vafgrnq jr'yy tb sbe n pbzcyrgryl qvssrerag fvzcyre nccebnpu gb gur bevtvany ceboyrz +fabj, fabj! +va clcl lbh ner nyjnlf ng gur jebat yriry, va bar jnl be gur bgure +jryy, vg'f jebat ohg abg fb "irel jebat" nf vg ybbxrq + V ybir clcl +ynmvarff vzcngvrapr naq uhoevf +fabj, fabj +EClguba: guvatf lbh jbhyqa'g qb va Clguba, naq pna'g qb va P. +vg vf gur rkcrpgrq orunivbe, rkprcg jura lbh qba'g rkcrpg vg +erqrsvavat lryybj frrzf yvxr n orggre vqrn +"gung'f ubjrire whfg ratvarrevat" (svwny) +"[vg] whfg fubjf ntnva gung eclguba vf bofpher" (psobym) +"naljnl, clguba vf n infg ynathntr" (svwny) +bhg-bs-yvr-thneqf +"gurer ner qnlf ba juvpu lbh ybbx nebhaq naq abguvat fubhyq unir rire jbexrq" (svwny) +clcl vf n orggre xvaq bs sbbyvfuarff - ynp +ehaavat grfgf vf rffragvny sbe qrirybcvat clcl -- hu? qvq V oernx gur grfg? (svwny) +V'ir tbg guvf sybbe jnk gung'f nyfb n TERNG qrffreg gbccvat!! +rknexha: "gur cneg gung V gubhtug jnf tbvat gb or uneq jnf gevivny, fb abj V whfg unir guvf cneg gung V qvqa'g rira guvax bs gung vf uneq" +V fhccbfr jr pna yvir jvgu gur bofphevgl, nf ybat nf gurer vf n pbzzrag znxvat vg yvtugre +V nz n ovt oryvrire va ernfbaf. ohg gur nccnerag xvaq ner zl snibevgr. 
+clcl: trg n WVG sbe serr (jryy gur svefg qnl lbh jba'g znantr naq vg jvyy or irel sehfgengvat) + thgjbegu: bu, jr fubhyq znxr gur WVG zntvpnyyl orggre, jvgu qrpbengbef naq fghss +vg'f n pbzcyrgr unpx, ohg n irel zvavzny bar (nevtngb) +svefg gurl ynhtu ng lbh, gura gurl vtaber lbh, gura gurl svtug lbh, gura lbh jva +ClCl vf snzvyl sevraqyl +jr yvxr pbzcynvagf +gbqnl jr'er snfgre guna lrfgreqnl (hfhnyyl) +ClCl naq PClguba: gurl ner zbegny rarzvrf vagrag ba xvyyvat rnpu bgure +nethnoyl, rirelguvat vf n avpur +clcl unf ynlref yvxr bavbaf: crryvat gurz onpx jvyy znxr lbh pel +EClguba zntvpnyyl znxrf lbh evpu naq snzbhf (fnlf fb ba gur gva) +Vf evtbobg nebhaq jura gur havirefr prnfrf gb rkvfg? +ClCl vf gbb pbby sbe dhrelfgevatf. +< nevtngb> gura jung bpphef? < svwny> tbbq fghss V oryvrir +ClCl 1.6 eryrnfrq! + jurer ner gur grfgf? +uggc://gjvgcvp.pbz/52nr8s +N enaqbz dhbgr +Nyy rkprcgoybpxf frrz fnar. +N cvax tyvggrel ebgngvat ynzoqn +"vg'f yvxryl grzcbenel hagvy sberire" nevtb """ def some_topic(): diff --git a/lib_pypy/_sha.py b/lib_pypy/_sha.py --- a/lib_pypy/_sha.py +++ b/lib_pypy/_sha.py @@ -1,5 +1,5 @@ #!/usr/bin/env python -# -*- coding: iso-8859-1 +# -*- coding: iso-8859-1 -*- # Note that PyPy contains also a built-in module 'sha' which will hide # this one if compiled in. diff --git a/lib_pypy/_sqlite3.py b/lib_pypy/_sqlite3.py --- a/lib_pypy/_sqlite3.py +++ b/lib_pypy/_sqlite3.py @@ -231,6 +231,11 @@ sqlite.sqlite3_result_text.argtypes = [c_void_p, c_char_p, c_int, c_void_p] sqlite.sqlite3_result_text.restype = None +HAS_LOAD_EXTENSION = hasattr(sqlite, "sqlite3_enable_load_extension") +if HAS_LOAD_EXTENSION: + sqlite.sqlite3_enable_load_extension.argtypes = [c_void_p, c_int] + sqlite.sqlite3_enable_load_extension.restype = c_int + ########################################## # END Wrapped SQLite C API and constants ########################################## @@ -705,6 +710,15 @@ from sqlite3.dump import _iterdump return _iterdump(self) + if HAS_LOAD_EXTENSION: + def enable_load_extension(self, enabled): + self._check_thread() + self._check_closed() + + rc = sqlite.sqlite3_enable_load_extension(self.db, int(enabled)) + if rc != SQLITE_OK: + raise OperationalError("Error enabling load extension") + DML, DQL, DDL = range(3) class Cursor(object): diff --git a/lib_pypy/distributed/socklayer.py b/lib_pypy/distributed/socklayer.py --- a/lib_pypy/distributed/socklayer.py +++ b/lib_pypy/distributed/socklayer.py @@ -2,7 +2,7 @@ import py from socket import socket -XXX needs import adaptation as 'green' is removed from py lib for years +raise ImportError("XXX needs import adaptation as 'green' is removed from py lib for years") from py.impl.green.msgstruct import decodemessage, message from socket import socket, AF_INET, SOCK_STREAM import marshal diff --git a/lib_pypy/itertools.py b/lib_pypy/itertools.py --- a/lib_pypy/itertools.py +++ b/lib_pypy/itertools.py @@ -25,7 +25,7 @@ __all__ = ['chain', 'count', 'cycle', 'dropwhile', 'groupby', 'ifilter', 'ifilterfalse', 'imap', 'islice', 'izip', 'repeat', 'starmap', - 'takewhile', 'tee'] + 'takewhile', 'tee', 'compress', 'product'] try: from __pypy__ import builtinify except ImportError: builtinify = lambda f: f diff --git a/lib_pypy/pyrepl/commands.py b/lib_pypy/pyrepl/commands.py --- a/lib_pypy/pyrepl/commands.py +++ b/lib_pypy/pyrepl/commands.py @@ -33,10 +33,9 @@ class Command(object): finish = 0 kills_digit_arg = 1 - def __init__(self, reader, (event_name, event)): + def __init__(self, reader, cmd): self.reader = reader - self.event = event - 
self.event_name = event_name + self.event_name, self.event = cmd def do(self): pass diff --git a/lib_pypy/pyrepl/pygame_console.py b/lib_pypy/pyrepl/pygame_console.py --- a/lib_pypy/pyrepl/pygame_console.py +++ b/lib_pypy/pyrepl/pygame_console.py @@ -130,7 +130,7 @@ s.fill(c, [0, 600 - bmargin, 800, bmargin]) s.fill(c, [800 - rmargin, 0, lmargin, 600]) - def refresh(self, screen, (cx, cy)): + def refresh(self, screen, cxy): self.screen = screen self.pygame_screen.fill(colors.bg, [0, tmargin + self.cur_top + self.scroll, @@ -139,8 +139,8 @@ line_top = self.cur_top width, height = self.fontsize - self.cxy = (cx, cy) - cp = self.char_pos(cx, cy) + self.cxy = cxy + cp = self.char_pos(*cxy) if cp[1] < tmargin: self.scroll = - (cy*self.fh + self.cur_top) self.repaint() @@ -148,7 +148,7 @@ self.scroll += (600 - bmargin) - (cp[1] + self.fh) self.repaint() if self.curs_vis: - self.pygame_screen.blit(self.cursor, self.char_pos(cx, cy)) + self.pygame_screen.blit(self.cursor, self.char_pos(*cxy)) for line in screen: if 0 <= line_top + self.scroll <= (600 - bmargin - tmargin - self.fh): if line: diff --git a/lib_pypy/pyrepl/readline.py b/lib_pypy/pyrepl/readline.py --- a/lib_pypy/pyrepl/readline.py +++ b/lib_pypy/pyrepl/readline.py @@ -231,7 +231,11 @@ return ''.join(chars) def _histline(self, line): - return unicode(line.rstrip('\n'), ENCODING) + line = line.rstrip('\n') + try: + return unicode(line, ENCODING) + except UnicodeDecodeError: # bah, silently fall back... + return unicode(line, 'utf-8') def get_history_length(self): return self.saved_history_length @@ -268,7 +272,10 @@ f = open(os.path.expanduser(filename), 'w') for entry in history: if isinstance(entry, unicode): - entry = entry.encode(ENCODING) + try: + entry = entry.encode(ENCODING) + except UnicodeEncodeError: # bah, silently fall back... + entry = entry.encode('utf-8') entry = entry.replace('\n', '\r\n') # multiline history support f.write(entry + '\n') f.close() @@ -395,9 +402,21 @@ _wrapper.f_in = f_in _wrapper.f_out = f_out - if hasattr(sys, '__raw_input__'): # PyPy - _old_raw_input = sys.__raw_input__ + if '__pypy__' in sys.builtin_module_names: # PyPy + + def _old_raw_input(prompt=''): + # sys.__raw_input__() is only called when stdin and stdout are + # as expected and are ttys. If it is the case, then get_reader() + # should not really fail in _wrapper.raw_input(). If it still + # does, then we will just cancel the redirection and call again + # the built-in raw_input(). + try: + del sys.__raw_input__ + except AttributeError: + pass + return raw_input(prompt) sys.__raw_input__ = _wrapper.raw_input + else: # this is not really what readline.c does. Better than nothing I guess import __builtin__ diff --git a/lib_pypy/pyrepl/unix_console.py b/lib_pypy/pyrepl/unix_console.py --- a/lib_pypy/pyrepl/unix_console.py +++ b/lib_pypy/pyrepl/unix_console.py @@ -163,7 +163,7 @@ def change_encoding(self, encoding): self.encoding = encoding - def refresh(self, screen, (cx, cy)): + def refresh(self, screen, cxy): # this function is still too long (over 90 lines) if not self.__gone_tall: @@ -198,6 +198,7 @@ # we make sure the cursor is on the screen, and that we're # using all of the screen if we can + cx, cy = cxy if cy < offset: offset = cy elif cy >= offset + height: @@ -411,7 +412,12 @@ e.args[4] == 'unexpected end of data': pass else: - raise + # was: "raise". But it crashes pyrepl, and by extension the + # pypy currently running, in which we are e.g. in the middle + # of some debugging session. Argh. 
Instead just print an + # error message to stderr and continue running, for now. + self.partial_char = '' + sys.stderr.write('\n%s: %s\n' % (e.__class__.__name__, e)) else: self.partial_char = '' self.event_queue.push(c) diff --git a/lib_pypy/resource.py b/lib_pypy/resource.py --- a/lib_pypy/resource.py +++ b/lib_pypy/resource.py @@ -7,7 +7,7 @@ from ctypes_support import standard_c_lib as libc from ctypes_support import get_errno -from ctypes import Structure, c_int, c_long, byref, sizeof, POINTER +from ctypes import Structure, c_int, c_long, byref, POINTER from errno import EINVAL, EPERM import _structseq @@ -165,7 +165,6 @@ @builtinify def getpagesize(): - pagesize = 0 if _getpagesize: return _getpagesize() else: diff --git a/lib_pypy/syslog.py b/lib_pypy/syslog.py --- a/lib_pypy/syslog.py +++ b/lib_pypy/syslog.py @@ -38,9 +38,27 @@ _setlogmask.argtypes = (c_int,) _setlogmask.restype = c_int +_S_log_open = False +_S_ident_o = None + +def _get_argv(): + try: + import sys + script = sys.argv[0] + if isinstance(script, str): + return script[script.rfind('/')+1:] or None + except Exception: + pass + return None + @builtinify -def openlog(ident, option, facility): - _openlog(ident, option, facility) +def openlog(ident=None, logoption=0, facility=LOG_USER): + global _S_ident_o, _S_log_open + if ident is None: + ident = _get_argv() + _S_ident_o = c_char_p(ident) # keepalive + _openlog(_S_ident_o, logoption, facility) + _S_log_open = True @builtinify def syslog(arg1, arg2=None): @@ -48,11 +66,18 @@ priority, message = arg1, arg2 else: priority, message = LOG_INFO, arg1 + # if log is not opened, open it now + if not _S_log_open: + openlog() _syslog(priority, "%s", message) @builtinify def closelog(): - _closelog() + global _S_log_open, S_ident_o + if _S_log_open: + _closelog() + _S_log_open = False + _S_ident_o = None @builtinify def setlogmask(mask): diff --git a/py/_code/code.py b/py/_code/code.py --- a/py/_code/code.py +++ b/py/_code/code.py @@ -164,6 +164,7 @@ # if something: # assume this causes a NameError # # _this_ lines and the one # below we don't want from entry.getsource() + end = min(end, len(source)) for i in range(self.lineno, end): if source[i].rstrip().endswith(':'): end = i + 1 @@ -307,7 +308,7 @@ self._striptext = 'AssertionError: ' self._excinfo = tup self.type, self.value, tb = self._excinfo - self.typename = self.type.__name__ + self.typename = getattr(self.type, "__name__", "???") self.traceback = py.code.Traceback(tb) def __repr__(self): diff --git a/pypy/annotation/binaryop.py b/pypy/annotation/binaryop.py --- a/pypy/annotation/binaryop.py +++ b/pypy/annotation/binaryop.py @@ -252,7 +252,26 @@ # unsignedness is considered a rare and contagious disease def union((int1, int2)): - knowntype = rarithmetic.compute_restype(int1.knowntype, int2.knowntype) + if int1.unsigned == int2.unsigned: + knowntype = rarithmetic.compute_restype(int1.knowntype, int2.knowntype) + else: + t1 = int1.knowntype + if t1 is bool: + t1 = int + t2 = int2.knowntype + if t2 is bool: + t2 = int + + if t2 is int: + if int2.nonneg == False: + raise UnionError, "Merging %s and a possibly negative int is not allowed" % t1 + knowntype = t1 + elif t1 is int: + if int1.nonneg == False: + raise UnionError, "Merging %s and a possibly negative int is not allowed" % t2 + knowntype = t2 + else: + raise UnionError, "Merging these types (%s, %s) is not supported" % (t1, t2) return SomeInteger(nonneg=int1.nonneg and int2.nonneg, knowntype=knowntype) diff --git a/pypy/annotation/classdef.py 
b/pypy/annotation/classdef.py --- a/pypy/annotation/classdef.py +++ b/pypy/annotation/classdef.py @@ -276,8 +276,8 @@ # create the Attribute and do the generalization asked for newattr = Attribute(attr, self.bookkeeper) if s_value: - if newattr.name == 'intval' and getattr(s_value, 'unsigned', False): - import pdb; pdb.set_trace() + #if newattr.name == 'intval' and getattr(s_value, 'unsigned', False): + # import pdb; pdb.set_trace() newattr.s_value = s_value # keep all subattributes' values diff --git a/pypy/annotation/description.py b/pypy/annotation/description.py --- a/pypy/annotation/description.py +++ b/pypy/annotation/description.py @@ -180,7 +180,12 @@ if name is None: name = pyobj.func_name if signature is None: - signature = cpython_code_signature(pyobj.func_code) + if hasattr(pyobj, '_generator_next_method_of_'): + from pypy.interpreter.argument import Signature + signature = Signature(['entry']) # haaaaaack + defaults = () + else: + signature = cpython_code_signature(pyobj.func_code) if defaults is None: defaults = pyobj.func_defaults self.name = name diff --git a/pypy/annotation/model.py b/pypy/annotation/model.py --- a/pypy/annotation/model.py +++ b/pypy/annotation/model.py @@ -591,13 +591,11 @@ immutable = True def __init__(self, method): self.method = method - -NUMBER = object() + annotation_to_ll_map = [ (SomeSingleFloat(), lltype.SingleFloat), (s_None, lltype.Void), # also matches SomeImpossibleValue() (s_Bool, lltype.Bool), - (SomeInteger(knowntype=r_ulonglong), NUMBER), (SomeFloat(), lltype.Float), (SomeLongFloat(), lltype.LongFloat), (SomeChar(), lltype.Char), @@ -623,10 +621,11 @@ return lltype.Ptr(p.PARENTTYPE) if isinstance(s_val, SomePtr): return s_val.ll_ptrtype + if type(s_val) is SomeInteger: + return lltype.build_number(None, s_val.knowntype) + for witness, T in annotation_to_ll_map: if witness.contains(s_val): - if T is NUMBER: - return lltype.build_number(None, s_val.knowntype) return T if info is None: info = '' @@ -635,7 +634,7 @@ raise ValueError("%sshould return a low-level type,\ngot instead %r" % ( info, s_val)) -ll_to_annotation_map = dict([(ll, ann) for ann, ll in annotation_to_ll_map if ll is not NUMBER]) +ll_to_annotation_map = dict([(ll, ann) for ann, ll in annotation_to_ll_map]) def lltype_to_annotation(T): try: diff --git a/pypy/annotation/policy.py b/pypy/annotation/policy.py --- a/pypy/annotation/policy.py +++ b/pypy/annotation/policy.py @@ -1,6 +1,6 @@ # base annotation policy for specialization from pypy.annotation.specialize import default_specialize as default -from pypy.annotation.specialize import specialize_argvalue, specialize_argtype, specialize_arglistitemtype +from pypy.annotation.specialize import specialize_argvalue, specialize_argtype, specialize_arglistitemtype, specialize_arg_or_var from pypy.annotation.specialize import memo, specialize_call_location # for some reason, model must be imported first, # or we create a cycle. 
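The stricter union rule added to pypy/annotation/binaryop.py earlier in this diff means
that RPython code can no longer silently mix r_uint with a possibly negative int. A
minimal sketch of the effect, mirroring the new tests in
pypy/annotation/test/test_annrpython.py further down; the function names are
illustrative only and a PyPy source checkout is assumed to be on sys.path::

    import py
    from pypy.rlib.rarithmetic import r_uint
    from pypy.annotation.annrpython import RPythonAnnotator

    def merge_possibly_negative(a, b):
        if a:
            c = a          # r_uint
        else:
            c = b          # plain int, may be negative
        return c

    def merge_checked_nonneg(a, b):
        if a:
            c = a
        else:
            assert b >= 0  # tells the annotator that b is non-negative
            c = b
        return c

    # merging r_uint with a possibly negative int is now an annotation error
    py.test.raises(Exception, RPythonAnnotator().build_types,
                   merge_possibly_negative, [r_uint, int])

    # merging with an int proven non-negative is accepted and stays unsigned
    s = RPythonAnnotator().build_types(merge_checked_nonneg, [r_uint, int])
    assert s.unsigned and s.nonneg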
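The specialize:arg_or_var policy being wired up here (pypy/annotation/specialize.py plus
the registration in policy.py) behaves like specialize:arg when the selected argument is
a compile-time constant, and falls back to one shared generic version otherwise. A hedged
usage sketch with made-up function names, based on test_specialize_arg_or_var further
down::

    def scaled(base, factor):
        return base * factor
    scaled._annspecialcase_ = 'specialize:arg_or_var(1)'

    def entry_point(n):
        x = scaled(n, 10)   # constant 10: gets its own specialized copy
        y = scaled(n, n)    # non-constant factor: shares the generic copy
        return x + y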
@@ -73,6 +73,7 @@ default_specialize = staticmethod(default) specialize__memo = staticmethod(memo) specialize__arg = staticmethod(specialize_argvalue) # specialize:arg(N) + specialize__arg_or_var = staticmethod(specialize_arg_or_var) specialize__argtype = staticmethod(specialize_argtype) # specialize:argtype(N) specialize__arglistitemtype = staticmethod(specialize_arglistitemtype) specialize__call_location = staticmethod(specialize_call_location) diff --git a/pypy/annotation/specialize.py b/pypy/annotation/specialize.py --- a/pypy/annotation/specialize.py +++ b/pypy/annotation/specialize.py @@ -36,9 +36,7 @@ newtup = SpaceOperation('newtuple', starargs, argscopy[-1]) newstartblock.operations.append(newtup) newstartblock.closeblock(Link(argscopy, graph.startblock)) - graph.startblock.isstartblock = False graph.startblock = newstartblock - newstartblock.isstartblock = True argnames = argnames + ['.star%d' % i for i in range(nb_extra_args)] graph.signature = Signature(argnames) # note that we can mostly ignore defaults: if nb_extra_args > 0, @@ -353,6 +351,16 @@ key = tuple(key) return maybe_star_args(funcdesc, key, args_s) +def specialize_arg_or_var(funcdesc, args_s, *argindices): + for argno in argindices: + if not args_s[argno].is_constant(): + break + else: + # all constant + return specialize_argvalue(funcdesc, args_s, *argindices) + # some not constant + return maybe_star_args(funcdesc, None, args_s) + def specialize_argtype(funcdesc, args_s, *argindices): key = tuple([args_s[i].knowntype for i in argindices]) for cls in key: diff --git a/pypy/annotation/test/test_annrpython.py b/pypy/annotation/test/test_annrpython.py --- a/pypy/annotation/test/test_annrpython.py +++ b/pypy/annotation/test/test_annrpython.py @@ -856,6 +856,46 @@ py.test.raises(Exception, a.build_types, f, []) # if you want to get a r_uint, you have to be explicit about it + def test_add_different_ints(self): + def f(a, b): + return a + b + a = self.RPythonAnnotator() + py.test.raises(Exception, a.build_types, f, [r_uint, int]) + + def test_merge_different_ints(self): + def f(a, b): + if a: + c = a + else: + c = b + return c + a = self.RPythonAnnotator() + py.test.raises(Exception, a.build_types, f, [r_uint, int]) + + def test_merge_ruint_zero(self): + def f(a): + if a: + c = a + else: + c = 0 + return c + a = self.RPythonAnnotator() + s = a.build_types(f, [r_uint]) + assert s == annmodel.SomeInteger(nonneg = True, unsigned = True) + + def test_merge_ruint_nonneg_signed(self): + def f(a, b): + if a: + c = a + else: + assert b >= 0 + c = b + return c + a = self.RPythonAnnotator() + s = a.build_types(f, [r_uint, int]) + assert s == annmodel.SomeInteger(nonneg = True, unsigned = True) + + def test_prebuilt_long_that_is_not_too_long(self): small_constant = 12L def f(): @@ -1194,6 +1234,20 @@ assert len(executedesc._cache[(0, 'star', 2)].startblock.inputargs) == 4 assert len(executedesc._cache[(1, 'star', 3)].startblock.inputargs) == 5 + def test_specialize_arg_or_var(self): + def f(a): + return 1 + f._annspecialcase_ = 'specialize:arg_or_var(0)' + + def fn(a): + return f(3) + f(a) + + a = self.RPythonAnnotator() + a.build_types(fn, [int]) + executedesc = a.bookkeeper.getdesc(f) + assert sorted(executedesc._cache.keys()) == [None, (3,)] + # we got two different special + def test_specialize_call_location(self): def g(a): return a @@ -3015,7 +3069,7 @@ if g(x, y): g(x, r_uint(y)) a = self.RPythonAnnotator() - a.build_types(f, [int, int]) + py.test.raises(Exception, a.build_types, f, [int, int]) def 
test_compare_with_zero(self): def g(): @@ -3190,6 +3244,8 @@ s = a.build_types(f, []) assert isinstance(s, annmodel.SomeList) assert not s.listdef.listitem.resized + assert not s.listdef.listitem.immutable + assert s.listdef.listitem.mutated def test_delslice(self): def f(): diff --git a/pypy/annotation/unaryop.py b/pypy/annotation/unaryop.py --- a/pypy/annotation/unaryop.py +++ b/pypy/annotation/unaryop.py @@ -352,6 +352,7 @@ check_negative_slice(s_start, s_stop) if not isinstance(s_iterable, SomeList): raise Exception("list[start:stop] = x: x must be a list") + lst.listdef.mutate() lst.listdef.agree(s_iterable.listdef) # note that setslice is not allowed to resize a list in RPython diff --git a/pypy/bin/checkmodule.py b/pypy/bin/checkmodule.py --- a/pypy/bin/checkmodule.py +++ b/pypy/bin/checkmodule.py @@ -1,43 +1,45 @@ #! /usr/bin/env python """ -Usage: checkmodule.py [-b backend] +Usage: checkmodule.py -Compiles the PyPy extension module from pypy/module// -into a fake program which does nothing. Useful for testing whether a -modules compiles without doing a full translation. Default backend is cli. - -WARNING: this is still incomplete: there are chances that the -compilation fails with strange errors not due to the module. If a -module is known to compile during a translation but don't pass -checkmodule.py, please report the bug (or, better, correct it :-). +Check annotation and rtyping of the PyPy extension module from +pypy/module//. Useful for testing whether a +modules compiles without doing a full translation. """ import autopath -import sys +import sys, os from pypy.objspace.fake.checkmodule import checkmodule def main(argv): - try: - assert len(argv) in (2, 4) - if len(argv) == 2: - backend = 'cli' - modname = argv[1] - if modname in ('-h', '--help'): - print >> sys.stderr, __doc__ - sys.exit(0) - if modname.startswith('-'): - print >> sys.stderr, "Bad command line" - print >> sys.stderr, __doc__ - sys.exit(1) - else: - _, b, backend, modname = argv - assert b == '-b' - except AssertionError: + if len(argv) != 2: print >> sys.stderr, __doc__ sys.exit(2) + modname = argv[1] + if modname in ('-h', '--help'): + print >> sys.stderr, __doc__ + sys.exit(0) + if modname.startswith('-'): + print >> sys.stderr, "Bad command line" + print >> sys.stderr, __doc__ + sys.exit(1) + if os.path.sep in modname: + if os.path.basename(modname) == '': + modname = os.path.dirname(modname) + if os.path.basename(os.path.dirname(modname)) != 'module': + print >> sys.stderr, "Must give '../module/xxx', or just 'xxx'." + sys.exit(1) + modname = os.path.basename(modname) + try: + checkmodule(modname) + except Exception, e: + import traceback, pdb + traceback.print_exc() + pdb.post_mortem(sys.exc_info()[2]) + return 1 else: - checkmodule(modname, backend, interactive=True) - print 'Module compiled succesfully' + print 'Passed.' 
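The rewritten pypy/bin/checkmodule.py in this diff drops the backend option and simply
annotates and rtypes one built-in module, dropping into pdb on failure. A hedged usage
sketch ('termios' is only an example module name, and a PyPy checkout is assumed)::

    # from the shell:
    #     python pypy/bin/checkmodule.py termios
    #
    # or, driving the same check from Python:
    from pypy.objspace.fake.checkmodule import checkmodule

    checkmodule('termios')   # raises if annotation or rtyping fails
    print 'Passed.'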
+ return 0 if __name__ == '__main__': - main(sys.argv) + sys.exit(main(sys.argv)) diff --git a/pypy/config/pypyoption.py b/pypy/config/pypyoption.py --- a/pypy/config/pypyoption.py +++ b/pypy/config/pypyoption.py @@ -72,6 +72,7 @@ del working_modules['fcntl'] # LOCK_NB not defined del working_modules["_minimal_curses"] del working_modules["termios"] + del working_modules["_multiprocessing"] # depends on rctime @@ -91,7 +92,7 @@ module_import_dependencies = { # no _rawffi if importing pypy.rlib.clibffi raises ImportError - # or CompilationError + # or CompilationError or py.test.skip.Exception "_rawffi" : ["pypy.rlib.clibffi"], "_ffi" : ["pypy.rlib.clibffi"], @@ -112,7 +113,7 @@ try: for name in modlist: __import__(name) - except (ImportError, CompilationError), e: + except (ImportError, CompilationError, py.test.skip.Exception), e: errcls = e.__class__.__name__ config.add_warning( "The module %r is disabled\n" % (modname,) + @@ -127,7 +128,7 @@ pypy_optiondescription = OptionDescription("objspace", "Object Space Options", [ ChoiceOption("name", "Object Space name", - ["std", "flow", "thunk", "dump", "taint"], + ["std", "flow", "thunk", "dump"], "std", cmdline='--objspace -o'), @@ -251,6 +252,10 @@ "use small tuples", default=False), + BoolOption("withspecialisedtuple", + "use specialised tuples", + default=False), + BoolOption("withrope", "use ropes as the string implementation", default=False, requires=[("objspace.std.withstrslice", False), @@ -280,6 +285,9 @@ "actually create the full list until the resulting " "list is mutated", default=False), + BoolOption("withliststrategies", + "enable optimized ways to store lists of primitives ", + default=True), BoolOption("withtypeversion", "version type objects when changing them", @@ -361,6 +369,7 @@ config.objspace.std.suggest(optimized_list_getitem=True) config.objspace.std.suggest(getattributeshortcut=True) config.objspace.std.suggest(newshortcut=True) + config.objspace.std.suggest(withspecialisedtuple=True) #if not IS_64_BITS: # config.objspace.std.suggest(withsmalllong=True) diff --git a/pypy/config/test/test_translationoption.py b/pypy/config/test/test_translationoption.py new file mode 100644 --- /dev/null +++ b/pypy/config/test/test_translationoption.py @@ -0,0 +1,10 @@ +import py +from pypy.config.translationoption import get_combined_translation_config +from pypy.config.translationoption import set_opt_level +from pypy.config.config import ConflictConfigError + + +def test_no_gcrootfinder_with_boehm(): + config = get_combined_translation_config() + config.translation.gcrootfinder = "shadowstack" + py.test.raises(ConflictConfigError, set_opt_level, config, '0') diff --git a/pypy/config/translationoption.py b/pypy/config/translationoption.py --- a/pypy/config/translationoption.py +++ b/pypy/config/translationoption.py @@ -69,8 +69,8 @@ "statistics": [("translation.gctransformer", "framework")], "generation": [("translation.gctransformer", "framework")], "hybrid": [("translation.gctransformer", "framework")], - "boehm": [("translation.gctransformer", "boehm"), - ("translation.continuation", False)], # breaks + "boehm": [("translation.continuation", False), # breaks + ("translation.gctransformer", "boehm")], "markcompact": [("translation.gctransformer", "framework")], "minimark": [("translation.gctransformer", "framework")], }, @@ -398,6 +398,10 @@ # make_sure_not_resized often relies on it, so we always enable them config.translation.suggest(list_comprehension_operations=True) + # finally, make the choice of the gc definitive. 
This will fail + # if we have specified strange inconsistent settings. + config.translation.gc = config.translation.gc + # ---------------------------------------------------------------- def set_platform(config): diff --git a/pypy/conftest.py b/pypy/conftest.py --- a/pypy/conftest.py +++ b/pypy/conftest.py @@ -496,6 +496,17 @@ def setup(self): super(AppClassCollector, self).setup() cls = self.obj + # + # + for name in dir(cls): + if name.startswith('test_'): + func = getattr(cls, name, None) + code = getattr(func, 'func_code', None) + if code and code.co_flags & 32: + raise AssertionError("unsupported: %r is a generator " + "app-level test method" % (name,)) + # + # space = cls.space clsname = cls.__name__ if self.config.option.runappdirect: diff --git a/pypy/doc/__pypy__-module.rst b/pypy/doc/__pypy__-module.rst --- a/pypy/doc/__pypy__-module.rst +++ b/pypy/doc/__pypy__-module.rst @@ -37,29 +37,6 @@ .. _`thunk object space docs`: objspace-proxies.html#thunk .. _`interface section of the thunk object space docs`: objspace-proxies.html#thunk-interface -.. broken: - - Taint Object Space Functionality - ================================ - - When the taint object space is used (choose with :config:`objspace.name`), - the following names are put into ``__pypy__``: - - - ``taint`` - - ``is_tainted`` - - ``untaint`` - - ``taint_atomic`` - - ``_taint_debug`` - - ``_taint_look`` - - ``TaintError`` - - Those are all described in the `interface section of the taint object space - docs`_. - - For more detailed explanations and examples see the `taint object space docs`_. - - .. _`taint object space docs`: objspace-proxies.html#taint - .. _`interface section of the taint object space docs`: objspace-proxies.html#taint-interface Transparent Proxy Functionality =============================== diff --git a/pypy/doc/coding-guide.rst b/pypy/doc/coding-guide.rst --- a/pypy/doc/coding-guide.rst +++ b/pypy/doc/coding-guide.rst @@ -270,7 +270,12 @@ - *slicing*: the slice start must be within bounds. The stop doesn't need to, but it must not be smaller than the start. All negative indexes are disallowed, except for - the [:-1] special case. No step. + the [:-1] special case. No step. Slice deletion follows the same rules. + + - *slice assignment*: + only supports ``lst[x:y] = sublist``, if ``len(sublist) == y - x``. + In other words, slice assignment cannot change the total length of the list, + but just replace items. - *other operators*: ``+``, ``+=``, ``in``, ``*``, ``*=``, ``==``, ``!=`` work as expected. diff --git a/pypy/doc/conf.py b/pypy/doc/conf.py --- a/pypy/doc/conf.py +++ b/pypy/doc/conf.py @@ -45,9 +45,9 @@ # built documents. # # The short X.Y version. -version = '1.6' +version = '1.7' # The full version, including alpha/beta/rc tags. -release = '1.6' +release = '1.7' # The language for content autogenerated by Sphinx. Refer to documentation # for a list of supported languages. diff --git a/pypy/doc/config/objspace.name.txt b/pypy/doc/config/objspace.name.txt --- a/pypy/doc/config/objspace.name.txt +++ b/pypy/doc/config/objspace.name.txt @@ -4,7 +4,6 @@ for normal usage): * thunk_: The thunk object space adds lazy evaluation to PyPy. - * taint_: The taint object space adds soft security features. * dump_: Using this object spaces results in the dumpimp of all operations to a log. @@ -12,5 +11,4 @@ .. _`Object Space Proxies`: ../objspace-proxies.html .. _`Standard Object Space`: ../objspace.html#standard-object-space .. _thunk: ../objspace-proxies.html#thunk -.. 
_taint: ../objspace-proxies.html#taint .. _dump: ../objspace-proxies.html#dump diff --git a/pypy/doc/config/objspace.std.withliststrategies.txt b/pypy/doc/config/objspace.std.withliststrategies.txt new file mode 100644 --- /dev/null +++ b/pypy/doc/config/objspace.std.withliststrategies.txt @@ -0,0 +1,2 @@ +Enable list strategies: Use specialized representations for lists of primitive +objects, such as ints. diff --git a/pypy/doc/config/objspace.std.withspecialisedtuple.txt b/pypy/doc/config/objspace.std.withspecialisedtuple.txt new file mode 100644 --- /dev/null +++ b/pypy/doc/config/objspace.std.withspecialisedtuple.txt @@ -0,0 +1,3 @@ +Use "specialized tuples", a custom implementation for some common kinds +of tuples. Currently limited to tuples of length 2, in three variants: +(int, int), (float, float), and a generic (object, object). diff --git a/pypy/doc/cpython_differences.rst b/pypy/doc/cpython_differences.rst --- a/pypy/doc/cpython_differences.rst +++ b/pypy/doc/cpython_differences.rst @@ -262,6 +262,26 @@ documented as such (as e.g. for hasattr()), in most cases PyPy lets the exception propagate instead. +Object Identity of Primitive Values, ``is`` and ``id`` +------------------------------------------------------- + +Object identity of primitive values works by value equality, not by identity of +the wrapper. This means that ``x + 1 is x + 1`` is always true, for arbitrary +integers ``x``. The rule applies for the following types: + + - ``int`` + + - ``float`` + + - ``long`` + + - ``complex`` + +This change requires some changes to ``id`` as well. ``id`` fulfills the +following condition: ``x is y <=> id(x) == id(y)``. Therefore ``id`` of the +above types will return a value that is computed from the argument, and can +thus be larger than ``sys.maxint`` (i.e. it can be an arbitrary long). + Miscellaneous ------------- @@ -284,14 +304,14 @@ never a dictionary as it sometimes is in CPython. Assigning to ``__builtins__`` has no effect. -* Do not compare immutable objects with ``is``. For example on CPython - it is true that ``x is 0`` works, i.e. does the same as ``type(x) is - int and x == 0``, but it is so by accident. If you do instead - ``x is 1000``, then it stops working, because 1000 is too large and - doesn't come from the internal cache. In PyPy it fails to work in - both cases, because we have no need for a cache at all. +* directly calling the internal magic methods of a few built-in types + with invalid arguments may have a slightly different result. For + example, ``[].__add__(None)`` and ``(2).__add__(None)`` both return + ``NotImplemented`` on PyPy; on CPython, only the later does, and the + former raises ``TypeError``. (Of course, ``[]+None`` and ``2+None`` + both raise ``TypeError`` everywhere.) This difference is an + implementation detail that shows up because of internal C-level slots + that PyPy does not have. -* Also, object identity of immutable keys in dictionaries is not necessarily - preserved. .. include:: _ref.txt diff --git a/pypy/doc/faq.rst b/pypy/doc/faq.rst --- a/pypy/doc/faq.rst +++ b/pypy/doc/faq.rst @@ -112,10 +112,32 @@ You might be interested in our `benchmarking site`_ and our `jit documentation`_. +Note that the JIT has a very high warm-up cost, meaning that the +programs are slow at the beginning. If you want to compare the timings +with CPython, even relatively simple programs need to run *at least* one +second, preferrably at least a few seconds. Large, complicated programs +need even more time to warm-up the JIT. + .. 
_`benchmarking site`: http://speed.pypy.org .. _`jit documentation`: jit/index.html +--------------------------------------------------------------- +Couldn't the JIT dump and reload already-compiled machine code? +--------------------------------------------------------------- + +No, we found no way of doing that. The JIT generates machine code +containing a large number of constant addresses --- constant at the time +the machine code is written. The vast majority is probably not at all +constants that you find in the executable, with a nice link name. E.g. +the addresses of Python classes are used all the time, but Python +classes don't come statically from the executable; they are created anew +every time you restart your program. This makes saving and reloading +machine code completely impossible without some very advanced way of +mapping addresses in the old (now-dead) process to addresses in the new +process, including checking that all the previous assumptions about the +(now-dead) object are still true about the new object. + .. _`prolog and javascript`: diff --git a/pypy/doc/how-to-release.rst b/pypy/doc/how-to-release.rst --- a/pypy/doc/how-to-release.rst +++ b/pypy/doc/how-to-release.rst @@ -1,6 +1,3 @@ -.. include:: needswork.txt - -.. needs work, it talks about svn. also, it is not really user documentation Making a PyPy Release ======================= @@ -12,11 +9,8 @@ forgetting things. A set of todo files may also work. Check and prioritize all issues for the release, postpone some if necessary, -create new issues also as necessary. A meeting (or meetings) should be -organized to decide what things are priorities, should go in and work for -the release. - -An important thing is to get the documentation into an up-to-date state! +create new issues also as necessary. An important thing is to get +the documentation into an up-to-date state! Release Steps ---------------- diff --git a/pypy/doc/index.rst b/pypy/doc/index.rst --- a/pypy/doc/index.rst +++ b/pypy/doc/index.rst @@ -15,7 +15,7 @@ * `FAQ`_: some frequently asked questions. -* `Release 1.6`_: the latest official release +* `Release 1.7`_: the latest official release * `PyPy Blog`_: news and status info about PyPy @@ -75,7 +75,7 @@ .. _`Getting Started`: getting-started.html .. _`Papers`: extradoc.html .. _`Videos`: video-index.html -.. _`Release 1.6`: http://pypy.org/download.html +.. _`Release 1.7`: http://pypy.org/download.html .. _`speed.pypy.org`: http://speed.pypy.org .. _`RPython toolchain`: translation.html .. _`potential project ideas`: project-ideas.html @@ -120,9 +120,9 @@ Windows, on top of .NET, and on top of Java. To dig into PyPy it is recommended to try out the current Mercurial default branch, which is always working or mostly working, -instead of the latest release, which is `1.6`__. +instead of the latest release, which is `1.7`__. -.. __: release-1.6.0.html +.. __: release-1.7.0.html PyPy is mainly developed on Linux and Mac OS X. Windows is supported, but platform-specific bugs tend to take longer before we notice and fix @@ -309,7 +309,6 @@ .. _`object space`: objspace.html .. _FlowObjSpace: objspace.html#the-flow-object-space .. _`trace object space`: objspace.html#the-trace-object-space -.. _`taint object space`: objspace-proxies.html#taint .. _`thunk object space`: objspace-proxies.html#thunk .. _`transparent proxies`: objspace-proxies.html#tproxy .. 
_`Differences between PyPy and CPython`: cpython_differences.html diff --git a/pypy/doc/objspace-proxies.rst b/pypy/doc/objspace-proxies.rst --- a/pypy/doc/objspace-proxies.rst +++ b/pypy/doc/objspace-proxies.rst @@ -129,297 +129,6 @@ function behaves lazily: all calls to it return a thunk object. -.. broken right now: - - .. _taint: - - The Taint Object Space - ====================== - - Motivation - ---------- - - The Taint Object Space provides a form of security: "tainted objects", - inspired by various sources, see [D12.1]_ for a more detailed discussion. - - The basic idea of this kind of security is not to protect against - malicious code but to help with handling and boxing sensitive data. - It covers two kinds of sensitive data: secret data which should not leak, - and untrusted data coming from an external source and that must be - validated before it is used. - - The idea is that, considering a large application that handles these - kinds of sensitive data, there are typically only a small number of - places that need to explicitly manipulate that sensitive data; all the - other places merely pass it around, or do entirely unrelated things. - - Nevertheless, if a large application needs to be reviewed for security, - it must be entirely carefully checked, because it is possible that a - bug at some apparently unrelated place could lead to a leak of sensitive - information in a way that an external attacker could exploit. For - example, if any part of the application provides web services, an - attacker might be able to issue unexpected requests with a regular web - browser and deduce secret information from the details of the answers he - gets. Another example is the common CGI attack where an attacker sends - malformed inputs and causes the CGI script to do unintended things. - - An approach like that of the Taint Object Space allows the small parts - of the program that manipulate sensitive data to be explicitly marked. - The effect of this is that although these small parts still need a - careful security review, the rest of the application no longer does, - because even a bug would be unable to leak the information. - - We have implemented a simple two-level model: objects are either - regular (untainted), or sensitive (tainted). Objects are marked as - sensitive if they are secret or untrusted, and only declassified at - carefully-checked positions (e.g. where the secret data is needed, or - after the untrusted data has been fully validated). - - It would be simple to extend the code for more fine-grained scales of - secrecy. For example it is typical in the literature to consider - user-specified lattices of secrecy levels, corresponding to multiple - "owners" that cannot access data belonging to another "owner" unless - explicitly authorized to do so. - - Tainting and untainting - ----------------------- - - Start a py.py with the Taint Object Space and try the following example:: - - $ py.py -o taint - >>>> from __pypy__ import taint - >>>> x = taint(6) - - # x is hidden from now on. We can pass it around and - # even operate on it, but not inspect it. Taintness - # is propagated to operation results. - - >>>> x - TaintError - - >>>> if x > 5: y = 2 # see below - TaintError - - >>>> y = x + 5 # ok - >>>> lst = [x, y] - >>>> z = lst.pop() - >>>> t = type(z) # type() works too, tainted answer - >>>> t - TaintError - >>>> u = t is int # even 'is' works - >>>> u - TaintError - - Notice that using a tainted boolean like ``x > 5`` in an ``if`` - statement is forbidden. 
This is because knowing which path is followed - would give away a hint about ``x``; in the example above, if the - statement ``if x > 5: y = 2`` was allowed to run, we would know - something about the value of ``x`` by looking at the (untainted) value - in the variable ``y``. - - Of course, there is a way to inspect tainted objects. The basic way is - to explicitly "declassify" it with the ``untaint()`` function. In an - application, the places that use ``untaint()`` are the places that need - careful security review. To avoid unexpected objects showing up, the - ``untaint()`` function must be called with the exact type of the object - to declassify. It will raise ``TaintError`` if the type doesn't match:: - - >>>> from __pypy__ import taint - >>>> untaint(int, x) - 6 - >>>> untaint(int, z) - 11 - >>>> untaint(bool, x > 5) - True - >>>> untaint(int, x > 5) - TaintError - - - Taint Bombs - ----------- - - In this area, a common problem is what to do about failing operations. - If an operation raises an exception when manipulating a tainted object, - then the very presence of the exception can leak information about the - tainted object itself. Consider:: - - >>>> 5 / (x-6) - - By checking if this raises ``ZeroDivisionError`` or not, we would know - if ``x`` was equal to 6 or not. The solution to this problem in the - Taint Object Space is to introduce *Taint Bombs*. They are a kind of - tainted object that doesn't contain a real object, but a pending - exception. Taint Bombs are indistinguishable from normal tainted - objects to unprivileged code. See:: - - >>>> x = taint(6) - >>>> i = 5 / (x-6) # no exception here - >>>> j = i + 1 # nor here - >>>> k = j + 5 # nor here - >>>> untaint(int, k) - TaintError - - In the above example, all of ``i``, ``j`` and ``k`` contain a Taint - Bomb. Trying to untaint it raises an exception - a generic - ``TaintError``. What we win is that the exception gives little away, - and most importantly it occurs at the point where ``untaint()`` is - called, not where the operation failed. This means that all calls to - ``untaint()`` - but not the rest of the code - must be carefully - reviewed for what occurs if they receive a Taint Bomb; they might catch - the ``TaintError`` and give the user a generic message that something - went wrong, if we are reasonably careful that the message or even its - presence doesn't give information away. This might be a - problem by itself, but there is no satisfying general solution here: - it must be considered on a case-by-case basis. Again, what the - Taint Object Space approach achieves is not solving these problems, but - localizing them to well-defined small parts of the application - namely, - around calls to ``untaint()``. - - The ``TaintError`` exception deliberately does not include any - useful error messages, because they might give information away. - Of course, this makes debugging quite a bit harder; a difficult - problem to solve properly. So far we have implemented a way to peek in a Taint - Box or Bomb, ``__pypy__._taint_look(x)``, and a "debug mode" that - prints the exception as soon as a Bomb is created - both write - information to the low-level stderr of the application, where we hope - that it is unlikely to be seen by anyone but the application - developer. - - - Taint Atomic functions - ---------------------- - - Occasionally, a more complicated computation must be performed on a - tainted object. 
This requires first untainting the object, performing the - computations, and then carefully tainting the result again (including - hiding all exceptions into Bombs). - - There is a built-in decorator that does this for you:: - - >>>> @__pypy__.taint_atomic - >>>> def myop(x, y): - .... while x > 0: - .... x -= y - .... return x - .... - >>>> myop(42, 10) - -8 - >>>> z = myop(taint(42), 10) - >>>> z - TaintError - >>>> untaint(int, z) - -8 - - The decorator makes a whole function behave like a built-in operation. - If no tainted argument is passed in, the function behaves normally. But - if any of the arguments is tainted, it is automatically untainted - so - the function body always sees untainted arguments - and the eventual - result is tainted again (possibly in a Taint Bomb). - - It is important for the function marked as ``taint_atomic`` to have no - visible side effects, as these could cause information leakage. - This is currently not enforced, which means that all ``taint_atomic`` - functions have to be carefully reviewed for security (but not the - callers of ``taint_atomic`` functions). - - A possible future extension would be to forbid side-effects on - non-tainted objects from all ``taint_atomic`` functions. - - An example of usage: given a tainted object ``passwords_db`` that - references a database of passwords, we can write a function - that checks if a password is valid as follows:: - - @taint_atomic - def validate(passwords_db, username, password): - assert type(passwords_db) is PasswordDatabase - assert type(username) is str - assert type(password) is str - ...load username entry from passwords_db... - return expected_password == password - - It returns a tainted boolean answer, or a Taint Bomb if something - went wrong. A caller can do:: - - ok = validate(passwords_db, 'john', '1234') - ok = untaint(bool, ok) - - This can give three outcomes: ``True``, ``False``, or a ``TaintError`` - exception (with no information on it) if anything went wrong. If even - this is considered giving too much information away, the ``False`` case - can be made indistinguishable from the ``TaintError`` case (simply by - raising an exception in ``validate()`` if the password is wrong). - - In the above example, the security results achieved are the following: - as long as ``validate()`` does not leak information, no other part of - the code can obtain more information about a passwords database than a - Yes/No answer to a precise query. - - A possible extension of the ``taint_atomic`` decorator would be to check - the argument types, as ``untaint()`` does, for the same reason: to - prevent bugs where a function like ``validate()`` above is accidentally - called with the wrong kind of tainted object, which would make it - misbehave. For now, all ``taint_atomic`` functions should be - conservative and carefully check all assumptions on their input - arguments. - - - .. _`taint-interface`: - - Interface - --------- - - .. _`like a built-in operation`: - - The basic rule of the Tainted Object Space is that it introduces two new - kinds of objects, Tainted Boxes and Tainted Bombs (which are not types - in the Python sense). Each box internally contains a regular object; - each bomb internally contains an exception object. An operation - involving Tainted Boxes is performed on the objects contained in the - boxes, and gives a Tainted Box or a Tainted Bomb as a result (such an - operation does not let an exception be raised). 
An operation called - with a Tainted Bomb argument immediately returns the same Tainted Bomb. - - In a PyPy running with (or translated with) the Taint Object Space, - the ``__pypy__`` module exposes the following interface: - - * ``taint(obj)`` - - Return a new Tainted Box wrapping ``obj``. Return ``obj`` itself - if it is already tainted (a Box or a Bomb). - - * ``is_tainted(obj)`` - - Check if ``obj`` is tainted (a Box or a Bomb). - - * ``untaint(type, obj)`` - - Untaints ``obj`` if it is tainted. Raise ``TaintError`` if the type - of the untainted object is not exactly ``type``, or if ``obj`` is a - Bomb. - - * ``taint_atomic(func)`` - - Return a wrapper function around the callable ``func``. The wrapper - behaves `like a built-in operation`_ with respect to untainting the - arguments, tainting the result, and returning a Bomb. - - * ``TaintError`` - - Exception. On purpose, it provides no attribute or error message. - - * ``_taint_debug(level)`` - - Set the debugging level to ``level`` (0=off). At level 1 or above, - all Taint Bombs print a diagnostic message to stderr when they are - created. - - * ``_taint_look(obj)`` - - For debugging purposes: prints (to stderr) the type and address of - the object in a Tainted Box, or prints the exception if ``obj`` is - a Taint Bomb. - - .. _dump: The Dump Object Space diff --git a/pypy/doc/project-ideas.rst b/pypy/doc/project-ideas.rst --- a/pypy/doc/project-ideas.rst +++ b/pypy/doc/project-ideas.rst @@ -17,17 +17,26 @@ projects, or anything else in PyPy, pop up on IRC or write to us on the `mailing list`_. +Make big integers faster +------------------------- + +PyPy's implementation of the Python ``long`` type is slower than CPython's. +Find out why and optimize them. + +Make bytearray type fast +------------------------ + +PyPy's bytearray type is very inefficient. It would be an interesting +task to look into possible optimizations on this. + Numpy improvements ------------------ -This is more of a project-container than a single project. Possible ideas: +The numpy is rapidly progressing in pypy, so feel free to come to IRC and +ask for proposed topic. A not necesarilly up-to-date `list of topics`_ +is also available. -* experiment with auto-vectorization using SSE or implement vectorization - without automatically detecting it for array operations. - -* improve numpy, for example implement memory views. - -* interface with fortran/C libraries. +.. _`list of topics`: https://bitbucket.org/pypy/extradoc/src/extradoc/planning/micronumpy.txt Improving the jitviewer ------------------------ @@ -53,6 +62,18 @@ this is an ideal task to get started, because it does not require any deep knowledge of the internals. +Optimized Unicode Representation +-------------------------------- + +CPython 3.3 will use an `optimized unicode representation`_ which switches between +different ways to represent a unicode string, depending on whether the string +fits into ASCII, has only two-byte characters or needs four-byte characters. + +The actual details would be rather differen in PyPy, but we would like to have +the same optimization implemented. + +.. 
_`optimized unicode representation`: http://www.python.org/dev/peps/pep-0393/ + Translation Toolchain --------------------- diff --git a/pypy/doc/release-1.7.0.rst b/pypy/doc/release-1.7.0.rst new file mode 100644 --- /dev/null +++ b/pypy/doc/release-1.7.0.rst @@ -0,0 +1,94 @@ +================================== +PyPy 1.7 - widening the sweet spot +================================== + +We're pleased to announce the 1.7 release of PyPy. As became a habit, this +release brings a lot of bugfixes and performance improvements over the 1.6 +release. However, unlike the previous releases, the focus has been on widening +the "sweet spot" of PyPy. That is, classes of Python code that PyPy can greatly +speed up should be vastly improved with this release. You can download the 1.7 +release here: + + http://pypy.org/download.html + +What is PyPy? +============= + +PyPy is a very compliant Python interpreter, almost a drop-in replacement for +CPython 2.7. It's fast (`pypy 1.7 and cpython 2.7.1`_ performance comparison) +due to its integrated tracing JIT compiler. + +This release supports x86 machines running Linux 32/64, Mac OS X 32/64 or +Windows 32. Windows 64 work is ongoing, but not yet natively supported. + +The main topic of this release is widening the range of code which PyPy +can greatly speed up. On average on +our benchmark suite, PyPy 1.7 is around **30%** faster than PyPy 1.6 and up +to **20 times** faster on some benchmarks. + +.. _`pypy 1.7 and cpython 2.7.1`: http://speed.pypy.org + + +Highlights +========== + +* Numerous performance improvements. There are too many examples which python + constructs now should behave faster to list them. + +* Bugfixes and compatibility fixes with CPython. + +* Windows fixes. + +* PyPy now comes with stackless features enabled by default. However, + any loop using stackless features will interrupt the JIT for now, so no real + performance improvement for stackless-based programs. Contact pypy-dev for + info how to help on removing this restriction. + +* NumPy effort in PyPy was renamed numpypy. In order to try using it, simply + write:: + + import numpypy as numpy + + at the beginning of your program. There is a huge progress on numpy in PyPy + since 1.6, the main feature being implementation of dtypes. + +* JSON encoder (but not decoder) has been replaced with a new one. This one + is written in pure Python, but is known to outperform CPython's C extension + up to **2 times** in some cases. It's about **20 times** faster than + the one that we had in 1.6. + +* The memory footprint of some of our RPython modules has been drastically + improved. This should impact any applications using for example cryptography, + like tornado. + +* There was some progress in exposing even more CPython C API via cpyext. + +Things that didn't make it, expect in 1.8 soon +============================================== + +There is an ongoing work, which while didn't make it to the release, is +probably worth mentioning here. This is what you should probably expect in +1.8 some time soon: + +* Specialized list implementation. There is a branch that implements lists of + integers/floats/strings as compactly as array.array. This should drastically + improve performance/memory impact of some applications + +* NumPy effort is progressing forward, with multi-dimensional arrays coming + soon. + +* There are two brand new JIT assembler backends, notably for the PowerPC and + ARM processors. 
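To illustrate the numpypy rename highlighted in the release announcement above: a minimal
sketch, assuming only the small numpy subset (array construction, dtypes, elementwise
arithmetic) that PyPy 1.7 ships::

    import numpypy as numpy

    a = numpy.array([1.0, 2.0, 3.0])
    b = a * 2 + 1            # elementwise arithmetic on the array
    print b[0], b[1], b[2]   # 3.0 5.0 7.0
    print a.dtype            # dtypes are the headline numpy feature in 1.7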
+ +Fundraising +=========== + +It's maybe worth mentioning that we're running fundraising campaigns for +NumPy effort in PyPy and for Python 3 in PyPy. In case you want to see any +of those happen faster, we urge you to donate to `numpy proposal`_ or +`py3k proposal`_. In case you want PyPy to progress, but you trust us with +the general direction, you can always donate to the `general pot`_. + +.. _`numpy proposal`: http://pypy.org/numpydonate.html +.. _`py3k proposal`: http://pypy.org/py3donate.html +.. _`general pot`: http://pypy.org diff --git a/pypy/interpreter/astcompiler/ast.py b/pypy/interpreter/astcompiler/ast.py --- a/pypy/interpreter/astcompiler/ast.py +++ b/pypy/interpreter/astcompiler/ast.py @@ -2,7 +2,7 @@ from pypy.interpreter.baseobjspace import Wrappable from pypy.interpreter import typedef from pypy.interpreter.gateway import interp2app -from pypy.interpreter.error import OperationError +from pypy.interpreter.error import OperationError, operationerrfmt from pypy.rlib.unroll import unrolling_iterable from pypy.tool.pairtype import extendabletype from pypy.tool.sourcetools import func_with_new_name @@ -51,6 +51,24 @@ space.setattr(self, w_name, space.getitem(w_state, w_name)) + def missing_field(self, space, required, host): + "Find which required field is missing." + state = self.initialization_state + for i in range(len(required)): + if (state >> i) & 1: + continue # field is present + missing = required[i] + if missing is None: + continue # field is optional + w_obj = self.getdictvalue(space, missing) + if w_obj is None: + err = "required field \"%s\" missing from %s" + raise operationerrfmt(space.w_TypeError, err, missing, host) + else: + err = "incorrect type for field \"%s\" in %s" + raise operationerrfmt(space.w_TypeError, err, missing, host) + raise AssertionError("should not reach here") + class NodeVisitorNotImplemented(Exception): pass @@ -94,17 +112,6 @@ ) -def missing_field(space, state, required, host): - "Find which required field is missing." 
- for i in range(len(required)): - if not (state >> i) & 1: - missing = required[i] - if missing is not None: - err = "required field \"%s\" missing from %s" - err = err % (missing, host) - w_err = space.wrap(err) - raise OperationError(space.w_TypeError, w_err) - raise AssertionError("should not reach here") class mod(AST): @@ -112,7 +119,6 @@ class Module(mod): - def __init__(self, body): self.body = body self.w_body = None @@ -128,7 +134,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 1: - missing_field(space, self.initialization_state, ['body'], 'Module') + self.missing_field(space, ['body'], 'Module') else: pass w_list = self.w_body @@ -145,7 +151,6 @@ class Interactive(mod): - def __init__(self, body): self.body = body self.w_body = None @@ -161,7 +166,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 1: - missing_field(space, self.initialization_state, ['body'], 'Interactive') + self.missing_field(space, ['body'], 'Interactive') else: pass w_list = self.w_body @@ -178,7 +183,6 @@ class Expression(mod): - def __init__(self, body): self.body = body self.initialization_state = 1 @@ -192,7 +196,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 1: - missing_field(space, self.initialization_state, ['body'], 'Expression') + self.missing_field(space, ['body'], 'Expression') else: pass self.body.sync_app_attrs(space) @@ -200,7 +204,6 @@ class Suite(mod): - def __init__(self, body): self.body = body self.w_body = None @@ -216,7 +219,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 1: - missing_field(space, self.initialization_state, ['body'], 'Suite') + self.missing_field(space, ['body'], 'Suite') else: pass w_list = self.w_body @@ -232,15 +235,13 @@ class stmt(AST): + def __init__(self, lineno, col_offset): self.lineno = lineno self.col_offset = col_offset class FunctionDef(stmt): - _lineno_mask = 16 - _col_offset_mask = 32 - def __init__(self, name, args, body, decorator_list, lineno, col_offset): self.name = name self.args = args @@ -264,7 +265,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 63: - missing_field(space, self.initialization_state, ['name', 'args', 'body', 'decorator_list', 'lineno', 'col_offset'], 'FunctionDef') + self.missing_field(space, ['lineno', 'col_offset', 'name', 'args', 'body', 'decorator_list'], 'FunctionDef') else: pass self.args.sync_app_attrs(space) @@ -292,9 +293,6 @@ class ClassDef(stmt): - _lineno_mask = 16 - _col_offset_mask = 32 - def __init__(self, name, bases, body, decorator_list, lineno, col_offset): self.name = name self.bases = bases @@ -320,7 +318,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 63: - missing_field(space, self.initialization_state, ['name', 'bases', 'body', 'decorator_list', 'lineno', 'col_offset'], 'ClassDef') + self.missing_field(space, ['lineno', 'col_offset', 'name', 'bases', 'body', 'decorator_list'], 'ClassDef') else: pass w_list = self.w_bases @@ -357,9 +355,6 @@ class Return(stmt): - _lineno_mask = 2 - _col_offset_mask = 4 - def __init__(self, value, lineno, col_offset): self.value = value stmt.__init__(self, lineno, col_offset) @@ -374,10 +369,10 @@ return visitor.visit_Return(self) def sync_app_attrs(self, space): - if (self.initialization_state & ~1) ^ 6: - missing_field(space, self.initialization_state, [None, 'lineno', 'col_offset'], 'Return') + if (self.initialization_state & ~4) ^ 3: + self.missing_field(space, ['lineno', 'col_offset', None], 'Return') else: - 
if not self.initialization_state & 1: + if not self.initialization_state & 4: self.value = None if self.value: self.value.sync_app_attrs(space) @@ -385,9 +380,6 @@ class Delete(stmt): - _lineno_mask = 2 - _col_offset_mask = 4 - def __init__(self, targets, lineno, col_offset): self.targets = targets self.w_targets = None @@ -404,7 +396,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 7: - missing_field(space, self.initialization_state, ['targets', 'lineno', 'col_offset'], 'Delete') + self.missing_field(space, ['lineno', 'col_offset', 'targets'], 'Delete') else: pass w_list = self.w_targets @@ -421,9 +413,6 @@ class Assign(stmt): - _lineno_mask = 4 - _col_offset_mask = 8 - def __init__(self, targets, value, lineno, col_offset): self.targets = targets self.w_targets = None @@ -442,7 +431,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 15: - missing_field(space, self.initialization_state, ['targets', 'value', 'lineno', 'col_offset'], 'Assign') + self.missing_field(space, ['lineno', 'col_offset', 'targets', 'value'], 'Assign') else: pass w_list = self.w_targets @@ -460,9 +449,6 @@ class AugAssign(stmt): - _lineno_mask = 8 - _col_offset_mask = 16 - def __init__(self, target, op, value, lineno, col_offset): self.target = target self.op = op @@ -480,7 +466,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 31: - missing_field(space, self.initialization_state, ['target', 'op', 'value', 'lineno', 'col_offset'], 'AugAssign') + self.missing_field(space, ['lineno', 'col_offset', 'target', 'op', 'value'], 'AugAssign') else: pass self.target.sync_app_attrs(space) @@ -489,9 +475,6 @@ class Print(stmt): - _lineno_mask = 8 - _col_offset_mask = 16 - def __init__(self, dest, values, nl, lineno, col_offset): self.dest = dest self.values = values @@ -511,10 +494,10 @@ return visitor.visit_Print(self) def sync_app_attrs(self, space): - if (self.initialization_state & ~1) ^ 30: - missing_field(space, self.initialization_state, [None, 'values', 'nl', 'lineno', 'col_offset'], 'Print') + if (self.initialization_state & ~4) ^ 27: + self.missing_field(space, ['lineno', 'col_offset', None, 'values', 'nl'], 'Print') else: - if not self.initialization_state & 1: + if not self.initialization_state & 4: self.dest = None if self.dest: self.dest.sync_app_attrs(space) @@ -532,9 +515,6 @@ class For(stmt): - _lineno_mask = 16 - _col_offset_mask = 32 - def __init__(self, target, iter, body, orelse, lineno, col_offset): self.target = target self.iter = iter @@ -559,7 +539,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 63: - missing_field(space, self.initialization_state, ['target', 'iter', 'body', 'orelse', 'lineno', 'col_offset'], 'For') + self.missing_field(space, ['lineno', 'col_offset', 'target', 'iter', 'body', 'orelse'], 'For') else: pass self.target.sync_app_attrs(space) @@ -588,9 +568,6 @@ class While(stmt): - _lineno_mask = 8 - _col_offset_mask = 16 - def __init__(self, test, body, orelse, lineno, col_offset): self.test = test self.body = body @@ -613,7 +590,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 31: - missing_field(space, self.initialization_state, ['test', 'body', 'orelse', 'lineno', 'col_offset'], 'While') + self.missing_field(space, ['lineno', 'col_offset', 'test', 'body', 'orelse'], 'While') else: pass self.test.sync_app_attrs(space) @@ -641,9 +618,6 @@ class If(stmt): - _lineno_mask = 8 - _col_offset_mask = 16 - def __init__(self, test, body, orelse, lineno, col_offset): 
self.test = test self.body = body @@ -666,7 +640,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 31: - missing_field(space, self.initialization_state, ['test', 'body', 'orelse', 'lineno', 'col_offset'], 'If') + self.missing_field(space, ['lineno', 'col_offset', 'test', 'body', 'orelse'], 'If') else: pass self.test.sync_app_attrs(space) @@ -694,9 +668,6 @@ class With(stmt): - _lineno_mask = 8 - _col_offset_mask = 16 - def __init__(self, context_expr, optional_vars, body, lineno, col_offset): self.context_expr = context_expr self.optional_vars = optional_vars @@ -717,10 +688,10 @@ return visitor.visit_With(self) def sync_app_attrs(self, space): - if (self.initialization_state & ~2) ^ 29: - missing_field(space, self.initialization_state, ['context_expr', None, 'body', 'lineno', 'col_offset'], 'With') + if (self.initialization_state & ~8) ^ 23: + self.missing_field(space, ['lineno', 'col_offset', 'context_expr', None, 'body'], 'With') else: - if not self.initialization_state & 2: + if not self.initialization_state & 8: self.optional_vars = None self.context_expr.sync_app_attrs(space) if self.optional_vars: @@ -739,9 +710,6 @@ class Raise(stmt): - _lineno_mask = 8 - _col_offset_mask = 16 - def __init__(self, type, inst, tback, lineno, col_offset): self.type = type self.inst = inst @@ -762,14 +730,14 @@ return visitor.visit_Raise(self) def sync_app_attrs(self, space): - if (self.initialization_state & ~7) ^ 24: - missing_field(space, self.initialization_state, [None, None, None, 'lineno', 'col_offset'], 'Raise') + if (self.initialization_state & ~28) ^ 3: + self.missing_field(space, ['lineno', 'col_offset', None, None, None], 'Raise') else: - if not self.initialization_state & 1: + if not self.initialization_state & 4: self.type = None - if not self.initialization_state & 2: + if not self.initialization_state & 8: self.inst = None - if not self.initialization_state & 4: + if not self.initialization_state & 16: self.tback = None if self.type: self.type.sync_app_attrs(space) @@ -781,9 +749,6 @@ class TryExcept(stmt): - _lineno_mask = 8 - _col_offset_mask = 16 - def __init__(self, body, handlers, orelse, lineno, col_offset): self.body = body self.w_body = None @@ -808,7 +773,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 31: - missing_field(space, self.initialization_state, ['body', 'handlers', 'orelse', 'lineno', 'col_offset'], 'TryExcept') + self.missing_field(space, ['lineno', 'col_offset', 'body', 'handlers', 'orelse'], 'TryExcept') else: pass w_list = self.w_body @@ -845,9 +810,6 @@ class TryFinally(stmt): - _lineno_mask = 4 - _col_offset_mask = 8 - def __init__(self, body, finalbody, lineno, col_offset): self.body = body self.w_body = None @@ -868,7 +830,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 15: - missing_field(space, self.initialization_state, ['body', 'finalbody', 'lineno', 'col_offset'], 'TryFinally') + self.missing_field(space, ['lineno', 'col_offset', 'body', 'finalbody'], 'TryFinally') else: pass w_list = self.w_body @@ -895,9 +857,6 @@ class Assert(stmt): - _lineno_mask = 4 - _col_offset_mask = 8 - def __init__(self, test, msg, lineno, col_offset): self.test = test self.msg = msg @@ -914,10 +873,10 @@ return visitor.visit_Assert(self) def sync_app_attrs(self, space): - if (self.initialization_state & ~2) ^ 13: - missing_field(space, self.initialization_state, ['test', None, 'lineno', 'col_offset'], 'Assert') + if (self.initialization_state & ~8) ^ 7: + self.missing_field(space, ['lineno', 
'col_offset', 'test', None], 'Assert') else: - if not self.initialization_state & 2: + if not self.initialization_state & 8: self.msg = None self.test.sync_app_attrs(space) if self.msg: @@ -926,9 +885,6 @@ class Import(stmt): - _lineno_mask = 2 - _col_offset_mask = 4 - def __init__(self, names, lineno, col_offset): self.names = names self.w_names = None @@ -945,7 +901,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 7: - missing_field(space, self.initialization_state, ['names', 'lineno', 'col_offset'], 'Import') + self.missing_field(space, ['lineno', 'col_offset', 'names'], 'Import') else: pass w_list = self.w_names @@ -962,9 +918,6 @@ class ImportFrom(stmt): - _lineno_mask = 8 - _col_offset_mask = 16 - def __init__(self, module, names, level, lineno, col_offset): self.module = module self.names = names @@ -982,12 +935,12 @@ return visitor.visit_ImportFrom(self) def sync_app_attrs(self, space): - if (self.initialization_state & ~5) ^ 26: - missing_field(space, self.initialization_state, [None, 'names', None, 'lineno', 'col_offset'], 'ImportFrom') + if (self.initialization_state & ~20) ^ 11: + self.missing_field(space, ['lineno', 'col_offset', None, 'names', None], 'ImportFrom') else: - if not self.initialization_state & 1: + if not self.initialization_state & 4: self.module = None - if not self.initialization_state & 4: + if not self.initialization_state & 16: self.level = 0 w_list = self.w_names if w_list is not None: @@ -1003,9 +956,6 @@ class Exec(stmt): - _lineno_mask = 8 - _col_offset_mask = 16 - def __init__(self, body, globals, locals, lineno, col_offset): self.body = body self.globals = globals @@ -1025,12 +975,12 @@ return visitor.visit_Exec(self) def sync_app_attrs(self, space): - if (self.initialization_state & ~6) ^ 25: - missing_field(space, self.initialization_state, ['body', None, None, 'lineno', 'col_offset'], 'Exec') + if (self.initialization_state & ~24) ^ 7: + self.missing_field(space, ['lineno', 'col_offset', 'body', None, None], 'Exec') else: - if not self.initialization_state & 2: + if not self.initialization_state & 8: self.globals = None - if not self.initialization_state & 4: + if not self.initialization_state & 16: self.locals = None self.body.sync_app_attrs(space) if self.globals: @@ -1041,9 +991,6 @@ class Global(stmt): - _lineno_mask = 2 - _col_offset_mask = 4 - def __init__(self, names, lineno, col_offset): self.names = names self.w_names = None @@ -1058,7 +1005,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 7: - missing_field(space, self.initialization_state, ['names', 'lineno', 'col_offset'], 'Global') + self.missing_field(space, ['lineno', 'col_offset', 'names'], 'Global') else: pass w_list = self.w_names @@ -1072,9 +1019,6 @@ class Expr(stmt): - _lineno_mask = 2 - _col_offset_mask = 4 - def __init__(self, value, lineno, col_offset): self.value = value stmt.__init__(self, lineno, col_offset) @@ -1089,7 +1033,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 7: - missing_field(space, self.initialization_state, ['value', 'lineno', 'col_offset'], 'Expr') + self.missing_field(space, ['lineno', 'col_offset', 'value'], 'Expr') else: pass self.value.sync_app_attrs(space) @@ -1097,9 +1041,6 @@ class Pass(stmt): - _lineno_mask = 1 - _col_offset_mask = 2 - def __init__(self, lineno, col_offset): stmt.__init__(self, lineno, col_offset) self.initialization_state = 3 @@ -1112,16 +1053,13 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 3: - missing_field(space, 
self.initialization_state, ['lineno', 'col_offset'], 'Pass') + self.missing_field(space, ['lineno', 'col_offset'], 'Pass') else: pass class Break(stmt): - _lineno_mask = 1 - _col_offset_mask = 2 - def __init__(self, lineno, col_offset): stmt.__init__(self, lineno, col_offset) self.initialization_state = 3 @@ -1134,16 +1072,13 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 3: - missing_field(space, self.initialization_state, ['lineno', 'col_offset'], 'Break') + self.missing_field(space, ['lineno', 'col_offset'], 'Break') else: pass class Continue(stmt): - _lineno_mask = 1 - _col_offset_mask = 2 - def __init__(self, lineno, col_offset): stmt.__init__(self, lineno, col_offset) self.initialization_state = 3 @@ -1156,21 +1091,19 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 3: - missing_field(space, self.initialization_state, ['lineno', 'col_offset'], 'Continue') + self.missing_field(space, ['lineno', 'col_offset'], 'Continue') else: pass class expr(AST): + def __init__(self, lineno, col_offset): self.lineno = lineno self.col_offset = col_offset class BoolOp(expr): - _lineno_mask = 4 - _col_offset_mask = 8 - def __init__(self, op, values, lineno, col_offset): self.op = op self.values = values @@ -1188,7 +1121,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 15: - missing_field(space, self.initialization_state, ['op', 'values', 'lineno', 'col_offset'], 'BoolOp') + self.missing_field(space, ['lineno', 'col_offset', 'op', 'values'], 'BoolOp') else: pass w_list = self.w_values @@ -1205,9 +1138,6 @@ class BinOp(expr): - _lineno_mask = 8 - _col_offset_mask = 16 - def __init__(self, left, op, right, lineno, col_offset): self.left = left self.op = op @@ -1225,7 +1155,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 31: - missing_field(space, self.initialization_state, ['left', 'op', 'right', 'lineno', 'col_offset'], 'BinOp') + self.missing_field(space, ['lineno', 'col_offset', 'left', 'op', 'right'], 'BinOp') else: pass self.left.sync_app_attrs(space) @@ -1234,9 +1164,6 @@ class UnaryOp(expr): - _lineno_mask = 4 - _col_offset_mask = 8 - def __init__(self, op, operand, lineno, col_offset): self.op = op self.operand = operand @@ -1252,7 +1179,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 15: - missing_field(space, self.initialization_state, ['op', 'operand', 'lineno', 'col_offset'], 'UnaryOp') + self.missing_field(space, ['lineno', 'col_offset', 'op', 'operand'], 'UnaryOp') else: pass self.operand.sync_app_attrs(space) @@ -1260,9 +1187,6 @@ class Lambda(expr): - _lineno_mask = 4 - _col_offset_mask = 8 - def __init__(self, args, body, lineno, col_offset): self.args = args self.body = body @@ -1279,7 +1203,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 15: - missing_field(space, self.initialization_state, ['args', 'body', 'lineno', 'col_offset'], 'Lambda') + self.missing_field(space, ['lineno', 'col_offset', 'args', 'body'], 'Lambda') else: pass self.args.sync_app_attrs(space) @@ -1288,9 +1212,6 @@ class IfExp(expr): - _lineno_mask = 8 - _col_offset_mask = 16 - def __init__(self, test, body, orelse, lineno, col_offset): self.test = test self.body = body @@ -1309,7 +1230,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 31: - missing_field(space, self.initialization_state, ['test', 'body', 'orelse', 'lineno', 'col_offset'], 'IfExp') + self.missing_field(space, ['lineno', 'col_offset', 'test', 'body', 'orelse'], 
'IfExp') else: pass self.test.sync_app_attrs(space) @@ -1319,9 +1240,6 @@ class Dict(expr): - _lineno_mask = 4 - _col_offset_mask = 8 - def __init__(self, keys, values, lineno, col_offset): self.keys = keys self.w_keys = None @@ -1342,7 +1260,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 15: - missing_field(space, self.initialization_state, ['keys', 'values', 'lineno', 'col_offset'], 'Dict') + self.missing_field(space, ['lineno', 'col_offset', 'keys', 'values'], 'Dict') else: pass w_list = self.w_keys @@ -1369,9 +1287,6 @@ class Set(expr): - _lineno_mask = 2 - _col_offset_mask = 4 - def __init__(self, elts, lineno, col_offset): self.elts = elts self.w_elts = None @@ -1388,7 +1303,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 7: - missing_field(space, self.initialization_state, ['elts', 'lineno', 'col_offset'], 'Set') + self.missing_field(space, ['lineno', 'col_offset', 'elts'], 'Set') else: pass w_list = self.w_elts @@ -1405,9 +1320,6 @@ class ListComp(expr): - _lineno_mask = 4 - _col_offset_mask = 8 - def __init__(self, elt, generators, lineno, col_offset): self.elt = elt self.generators = generators @@ -1426,7 +1338,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 15: - missing_field(space, self.initialization_state, ['elt', 'generators', 'lineno', 'col_offset'], 'ListComp') + self.missing_field(space, ['lineno', 'col_offset', 'elt', 'generators'], 'ListComp') else: pass self.elt.sync_app_attrs(space) @@ -1444,9 +1356,6 @@ class SetComp(expr): - _lineno_mask = 4 - _col_offset_mask = 8 - def __init__(self, elt, generators, lineno, col_offset): self.elt = elt self.generators = generators @@ -1465,7 +1374,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 15: - missing_field(space, self.initialization_state, ['elt', 'generators', 'lineno', 'col_offset'], 'SetComp') + self.missing_field(space, ['lineno', 'col_offset', 'elt', 'generators'], 'SetComp') else: pass self.elt.sync_app_attrs(space) @@ -1483,9 +1392,6 @@ class DictComp(expr): - _lineno_mask = 8 - _col_offset_mask = 16 - def __init__(self, key, value, generators, lineno, col_offset): self.key = key self.value = value @@ -1506,7 +1412,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 31: - missing_field(space, self.initialization_state, ['key', 'value', 'generators', 'lineno', 'col_offset'], 'DictComp') + self.missing_field(space, ['lineno', 'col_offset', 'key', 'value', 'generators'], 'DictComp') else: pass self.key.sync_app_attrs(space) @@ -1525,9 +1431,6 @@ class GeneratorExp(expr): - _lineno_mask = 4 - _col_offset_mask = 8 - def __init__(self, elt, generators, lineno, col_offset): self.elt = elt self.generators = generators @@ -1546,7 +1449,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 15: - missing_field(space, self.initialization_state, ['elt', 'generators', 'lineno', 'col_offset'], 'GeneratorExp') + self.missing_field(space, ['lineno', 'col_offset', 'elt', 'generators'], 'GeneratorExp') else: pass self.elt.sync_app_attrs(space) @@ -1564,9 +1467,6 @@ class Yield(expr): - _lineno_mask = 2 - _col_offset_mask = 4 - def __init__(self, value, lineno, col_offset): self.value = value expr.__init__(self, lineno, col_offset) @@ -1581,10 +1481,10 @@ return visitor.visit_Yield(self) def sync_app_attrs(self, space): - if (self.initialization_state & ~1) ^ 6: - missing_field(space, self.initialization_state, [None, 'lineno', 'col_offset'], 'Yield') + if 
(self.initialization_state & ~4) ^ 3: + self.missing_field(space, ['lineno', 'col_offset', None], 'Yield') else: - if not self.initialization_state & 1: + if not self.initialization_state & 4: self.value = None if self.value: self.value.sync_app_attrs(space) @@ -1592,9 +1492,6 @@ class Compare(expr): - _lineno_mask = 8 - _col_offset_mask = 16 - def __init__(self, left, ops, comparators, lineno, col_offset): self.left = left self.ops = ops @@ -1615,7 +1512,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 31: - missing_field(space, self.initialization_state, ['left', 'ops', 'comparators', 'lineno', 'col_offset'], 'Compare') + self.missing_field(space, ['lineno', 'col_offset', 'left', 'ops', 'comparators'], 'Compare') else: pass self.left.sync_app_attrs(space) @@ -1640,9 +1537,6 @@ class Call(expr): - _lineno_mask = 32 - _col_offset_mask = 64 - def __init__(self, func, args, keywords, starargs, kwargs, lineno, col_offset): self.func = func self.args = args @@ -1670,12 +1564,12 @@ return visitor.visit_Call(self) def sync_app_attrs(self, space): - if (self.initialization_state & ~24) ^ 103: - missing_field(space, self.initialization_state, ['func', 'args', 'keywords', None, None, 'lineno', 'col_offset'], 'Call') + if (self.initialization_state & ~96) ^ 31: + self.missing_field(space, ['lineno', 'col_offset', 'func', 'args', 'keywords', None, None], 'Call') else: - if not self.initialization_state & 8: + if not self.initialization_state & 32: self.starargs = None - if not self.initialization_state & 16: + if not self.initialization_state & 64: self.kwargs = None self.func.sync_app_attrs(space) w_list = self.w_args @@ -1706,9 +1600,6 @@ class Repr(expr): - _lineno_mask = 2 - _col_offset_mask = 4 - def __init__(self, value, lineno, col_offset): self.value = value expr.__init__(self, lineno, col_offset) @@ -1723,7 +1614,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 7: - missing_field(space, self.initialization_state, ['value', 'lineno', 'col_offset'], 'Repr') + self.missing_field(space, ['lineno', 'col_offset', 'value'], 'Repr') else: pass self.value.sync_app_attrs(space) @@ -1731,9 +1622,6 @@ class Num(expr): - _lineno_mask = 2 - _col_offset_mask = 4 - def __init__(self, n, lineno, col_offset): self.n = n expr.__init__(self, lineno, col_offset) @@ -1747,16 +1635,13 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 7: - missing_field(space, self.initialization_state, ['n', 'lineno', 'col_offset'], 'Num') + self.missing_field(space, ['lineno', 'col_offset', 'n'], 'Num') else: pass class Str(expr): - _lineno_mask = 2 - _col_offset_mask = 4 - def __init__(self, s, lineno, col_offset): self.s = s expr.__init__(self, lineno, col_offset) @@ -1770,16 +1655,13 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 7: - missing_field(space, self.initialization_state, ['s', 'lineno', 'col_offset'], 'Str') + self.missing_field(space, ['lineno', 'col_offset', 's'], 'Str') else: pass class Attribute(expr): - _lineno_mask = 8 - _col_offset_mask = 16 - def __init__(self, value, attr, ctx, lineno, col_offset): self.value = value self.attr = attr @@ -1796,7 +1678,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 31: - missing_field(space, self.initialization_state, ['value', 'attr', 'ctx', 'lineno', 'col_offset'], 'Attribute') + self.missing_field(space, ['lineno', 'col_offset', 'value', 'attr', 'ctx'], 'Attribute') else: pass self.value.sync_app_attrs(space) @@ -1804,9 +1686,6 @@ class 
Subscript(expr): - _lineno_mask = 8 - _col_offset_mask = 16 - def __init__(self, value, slice, ctx, lineno, col_offset): self.value = value self.slice = slice @@ -1824,7 +1703,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 31: - missing_field(space, self.initialization_state, ['value', 'slice', 'ctx', 'lineno', 'col_offset'], 'Subscript') + self.missing_field(space, ['lineno', 'col_offset', 'value', 'slice', 'ctx'], 'Subscript') else: pass self.value.sync_app_attrs(space) @@ -1833,9 +1712,6 @@ class Name(expr): - _lineno_mask = 4 - _col_offset_mask = 8 - def __init__(self, id, ctx, lineno, col_offset): self.id = id self.ctx = ctx @@ -1850,16 +1726,13 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 15: - missing_field(space, self.initialization_state, ['id', 'ctx', 'lineno', 'col_offset'], 'Name') + self.missing_field(space, ['lineno', 'col_offset', 'id', 'ctx'], 'Name') else: pass class List(expr): - _lineno_mask = 4 - _col_offset_mask = 8 - def __init__(self, elts, ctx, lineno, col_offset): self.elts = elts self.w_elts = None @@ -1877,7 +1750,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 15: - missing_field(space, self.initialization_state, ['elts', 'ctx', 'lineno', 'col_offset'], 'List') + self.missing_field(space, ['lineno', 'col_offset', 'elts', 'ctx'], 'List') else: pass w_list = self.w_elts @@ -1894,9 +1767,6 @@ class Tuple(expr): - _lineno_mask = 4 - _col_offset_mask = 8 - def __init__(self, elts, ctx, lineno, col_offset): self.elts = elts self.w_elts = None @@ -1914,7 +1784,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 15: - missing_field(space, self.initialization_state, ['elts', 'ctx', 'lineno', 'col_offset'], 'Tuple') + self.missing_field(space, ['lineno', 'col_offset', 'elts', 'ctx'], 'Tuple') else: pass w_list = self.w_elts @@ -1931,9 +1801,6 @@ class Const(expr): - _lineno_mask = 2 - _col_offset_mask = 4 - def __init__(self, value, lineno, col_offset): self.value = value expr.__init__(self, lineno, col_offset) @@ -1947,7 +1814,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 7: - missing_field(space, self.initialization_state, ['value', 'lineno', 'col_offset'], 'Const') + self.missing_field(space, ['lineno', 'col_offset', 'value'], 'Const') else: pass @@ -2009,7 +1876,6 @@ class Ellipsis(slice): - def __init__(self): self.initialization_state = 0 @@ -2021,14 +1887,13 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 0: - missing_field(space, self.initialization_state, [], 'Ellipsis') + self.missing_field(space, [], 'Ellipsis') else: pass class Slice(slice): - def __init__(self, lower, upper, step): self.lower = lower self.upper = upper @@ -2049,7 +1914,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~7) ^ 0: - missing_field(space, self.initialization_state, [None, None, None], 'Slice') + self.missing_field(space, [None, None, None], 'Slice') else: if not self.initialization_state & 1: self.lower = None @@ -2067,7 +1932,6 @@ class ExtSlice(slice): - def __init__(self, dims): self.dims = dims self.w_dims = None @@ -2083,7 +1947,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 1: - missing_field(space, self.initialization_state, ['dims'], 'ExtSlice') + self.missing_field(space, ['dims'], 'ExtSlice') else: pass w_list = self.w_dims @@ -2100,7 +1964,6 @@ class Index(slice): - def __init__(self, value): self.value = value self.initialization_state = 1 @@ -2114,7 +1977,7 
@@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 1: - missing_field(space, self.initialization_state, ['value'], 'Index') + self.missing_field(space, ['value'], 'Index') else: pass self.value.sync_app_attrs(space) @@ -2377,7 +2240,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 7: - missing_field(space, self.initialization_state, ['target', 'iter', 'ifs'], 'comprehension') + self.missing_field(space, ['target', 'iter', 'ifs'], 'comprehension') else: pass self.target.sync_app_attrs(space) @@ -2394,15 +2257,13 @@ node.sync_app_attrs(space) class excepthandler(AST): + def __init__(self, lineno, col_offset): self.lineno = lineno self.col_offset = col_offset class ExceptHandler(excepthandler): - _lineno_mask = 8 - _col_offset_mask = 16 - def __init__(self, type, name, body, lineno, col_offset): self.type = type self.name = name @@ -2424,12 +2285,12 @@ return visitor.visit_ExceptHandler(self) def sync_app_attrs(self, space): - if (self.initialization_state & ~3) ^ 28: - missing_field(space, self.initialization_state, [None, None, 'body', 'lineno', 'col_offset'], 'ExceptHandler') + if (self.initialization_state & ~12) ^ 19: + self.missing_field(space, ['lineno', 'col_offset', None, None, 'body'], 'ExceptHandler') else: - if not self.initialization_state & 1: + if not self.initialization_state & 4: self.type = None - if not self.initialization_state & 2: + if not self.initialization_state & 8: self.name = None if self.type: self.type.sync_app_attrs(space) @@ -2470,7 +2331,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~6) ^ 9: - missing_field(space, self.initialization_state, ['args', None, None, 'defaults'], 'arguments') + self.missing_field(space, ['args', None, None, 'defaults'], 'arguments') else: if not self.initialization_state & 2: self.vararg = None @@ -2513,7 +2374,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 3: - missing_field(space, self.initialization_state, ['arg', 'value'], 'keyword') + self.missing_field(space, ['arg', 'value'], 'keyword') else: pass self.value.sync_app_attrs(space) @@ -2533,7 +2394,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~2) ^ 1: - missing_field(space, self.initialization_state, ['name', None], 'alias') + self.missing_field(space, ['name', None], 'alias') else: if not self.initialization_state & 2: self.asname = None @@ -2925,14 +2786,13 @@ def Module_get_body(space, w_self): if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'body'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') if w_self.w_body is None: if w_self.body is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.body] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_body = w_list return w_self.w_body @@ -2968,14 +2828,13 @@ def Interactive_get_body(space, w_self): if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'body'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') if w_self.w_body is None: if w_self.body is None: - w_list = space.newlist([]) + list_w = [] else: list_w = 
[space.wrap(node) for node in w_self.body] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_body = w_list return w_self.w_body @@ -3015,13 +2874,14 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'body'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') return space.wrap(w_self.body) def Expression_set_body(space, w_self, w_new_value): try: w_self.body = space.interp_w(expr, w_new_value, False) + if type(w_self.body) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise @@ -3057,14 +2917,13 @@ def Suite_get_body(space, w_self): if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'body'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') if w_self.w_body is None: if w_self.body is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.body] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_body = w_list return w_self.w_body @@ -3102,10 +2961,9 @@ w_obj = w_self.getdictvalue(space, 'lineno') if w_obj is not None: return w_obj - if not w_self.initialization_state & w_self._lineno_mask: + if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'lineno'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'lineno') return space.wrap(w_self.lineno) def stmt_set_lineno(space, w_self, w_new_value): @@ -3117,17 +2975,16 @@ w_self.setdictvalue(space, 'lineno', w_new_value) return w_self.deldictvalue(space, 'lineno') - w_self.initialization_state |= w_self._lineno_mask + w_self.initialization_state |= 1 def stmt_get_col_offset(space, w_self): if w_self.w_dict is not None: w_obj = w_self.getdictvalue(space, 'col_offset') if w_obj is not None: return w_obj - if not w_self.initialization_state & w_self._col_offset_mask: + if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'col_offset'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'col_offset') return space.wrap(w_self.col_offset) def stmt_set_col_offset(space, w_self, w_new_value): @@ -3139,7 +2996,7 @@ w_self.setdictvalue(space, 'col_offset', w_new_value) return w_self.deldictvalue(space, 'col_offset') - w_self.initialization_state |= w_self._col_offset_mask + w_self.initialization_state |= 2 stmt.typedef = typedef.TypeDef("stmt", AST.typedef, @@ -3155,10 +3012,9 @@ w_obj = w_self.getdictvalue(space, 'name') if w_obj is not None: return w_obj - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'name'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", 
typename, 'name') return space.wrap(w_self.name) def FunctionDef_set_name(space, w_self, w_new_value): @@ -3170,17 +3026,16 @@ w_self.setdictvalue(space, 'name', w_new_value) return w_self.deldictvalue(space, 'name') - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 def FunctionDef_get_args(space, w_self): if w_self.w_dict is not None: w_obj = w_self.getdictvalue(space, 'args') if w_obj is not None: return w_obj - if not w_self.initialization_state & 2: + if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'args'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'args') return space.wrap(w_self.args) def FunctionDef_set_args(space, w_self, w_new_value): @@ -3192,43 +3047,41 @@ w_self.setdictvalue(space, 'args', w_new_value) return w_self.deldictvalue(space, 'args') - w_self.initialization_state |= 2 + w_self.initialization_state |= 8 def FunctionDef_get_body(space, w_self): - if not w_self.initialization_state & 4: + if not w_self.initialization_state & 16: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'body'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') if w_self.w_body is None: if w_self.body is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.body] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_body = w_list return w_self.w_body def FunctionDef_set_body(space, w_self, w_new_value): w_self.w_body = w_new_value - w_self.initialization_state |= 4 + w_self.initialization_state |= 16 def FunctionDef_get_decorator_list(space, w_self): - if not w_self.initialization_state & 8: + if not w_self.initialization_state & 32: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'decorator_list'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'decorator_list') if w_self.w_decorator_list is None: if w_self.decorator_list is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.decorator_list] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_decorator_list = w_list return w_self.w_decorator_list def FunctionDef_set_decorator_list(space, w_self, w_new_value): w_self.w_decorator_list = w_new_value - w_self.initialization_state |= 8 + w_self.initialization_state |= 32 _FunctionDef_field_unroller = unrolling_iterable(['name', 'args', 'body', 'decorator_list']) def FunctionDef_init(space, w_self, __args__): @@ -3264,10 +3117,9 @@ w_obj = w_self.getdictvalue(space, 'name') if w_obj is not None: return w_obj - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'name'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'name') return space.wrap(w_self.name) def ClassDef_set_name(space, w_self, w_new_value): @@ -3279,61 +3131,58 @@ w_self.setdictvalue(space, 'name', w_new_value) return 
w_self.deldictvalue(space, 'name') - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 def ClassDef_get_bases(space, w_self): - if not w_self.initialization_state & 2: + if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'bases'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'bases') if w_self.w_bases is None: if w_self.bases is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.bases] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_bases = w_list return w_self.w_bases def ClassDef_set_bases(space, w_self, w_new_value): w_self.w_bases = w_new_value - w_self.initialization_state |= 2 + w_self.initialization_state |= 8 def ClassDef_get_body(space, w_self): - if not w_self.initialization_state & 4: + if not w_self.initialization_state & 16: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'body'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') if w_self.w_body is None: if w_self.body is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.body] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_body = w_list return w_self.w_body def ClassDef_set_body(space, w_self, w_new_value): w_self.w_body = w_new_value - w_self.initialization_state |= 4 + w_self.initialization_state |= 16 def ClassDef_get_decorator_list(space, w_self): - if not w_self.initialization_state & 8: + if not w_self.initialization_state & 32: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'decorator_list'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'decorator_list') if w_self.w_decorator_list is None: if w_self.decorator_list is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.decorator_list] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_decorator_list = w_list return w_self.w_decorator_list def ClassDef_set_decorator_list(space, w_self, w_new_value): w_self.w_decorator_list = w_new_value - w_self.initialization_state |= 8 + w_self.initialization_state |= 32 _ClassDef_field_unroller = unrolling_iterable(['name', 'bases', 'body', 'decorator_list']) def ClassDef_init(space, w_self, __args__): @@ -3370,22 +3219,23 @@ w_obj = w_self.getdictvalue(space, 'value') if w_obj is not None: return w_obj - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'value'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'value') return space.wrap(w_self.value) def Return_set_value(space, w_self, w_new_value): try: w_self.value = space.interp_w(expr, w_new_value, True) + if type(w_self.value) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise 
w_self.setdictvalue(space, 'value', w_new_value) return w_self.deldictvalue(space, 'value') - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 _Return_field_unroller = unrolling_iterable(['value']) def Return_init(space, w_self, __args__): @@ -3412,22 +3262,21 @@ ) def Delete_get_targets(space, w_self): - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'targets'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'targets') if w_self.w_targets is None: if w_self.targets is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.targets] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_targets = w_list return w_self.w_targets def Delete_set_targets(space, w_self, w_new_value): w_self.w_targets = w_new_value - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 _Delete_field_unroller = unrolling_iterable(['targets']) def Delete_init(space, w_self, __args__): @@ -3455,44 +3304,44 @@ ) def Assign_get_targets(space, w_self): - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'targets'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'targets') if w_self.w_targets is None: if w_self.targets is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.targets] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_targets = w_list return w_self.w_targets def Assign_set_targets(space, w_self, w_new_value): w_self.w_targets = w_new_value - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 def Assign_get_value(space, w_self): if w_self.w_dict is not None: w_obj = w_self.getdictvalue(space, 'value') if w_obj is not None: return w_obj - if not w_self.initialization_state & 2: + if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'value'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'value') return space.wrap(w_self.value) def Assign_set_value(space, w_self, w_new_value): try: w_self.value = space.interp_w(expr, w_new_value, False) + if type(w_self.value) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'value', w_new_value) return w_self.deldictvalue(space, 'value') - w_self.initialization_state |= 2 + w_self.initialization_state |= 8 _Assign_field_unroller = unrolling_iterable(['targets', 'value']) def Assign_init(space, w_self, __args__): @@ -3525,32 +3374,32 @@ w_obj = w_self.getdictvalue(space, 'target') if w_obj is not None: return w_obj - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'target'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise 
operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'target') return space.wrap(w_self.target) def AugAssign_set_target(space, w_self, w_new_value): try: w_self.target = space.interp_w(expr, w_new_value, False) + if type(w_self.target) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'target', w_new_value) return w_self.deldictvalue(space, 'target') - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 def AugAssign_get_op(space, w_self): if w_self.w_dict is not None: w_obj = w_self.getdictvalue(space, 'op') if w_obj is not None: return w_obj - if not w_self.initialization_state & 2: + if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'op'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'op') return operator_to_class[w_self.op - 1]() def AugAssign_set_op(space, w_self, w_new_value): @@ -3564,29 +3413,30 @@ return # need to save the original object too w_self.setdictvalue(space, 'op', w_new_value) - w_self.initialization_state |= 2 + w_self.initialization_state |= 8 def AugAssign_get_value(space, w_self): if w_self.w_dict is not None: w_obj = w_self.getdictvalue(space, 'value') if w_obj is not None: return w_obj - if not w_self.initialization_state & 4: + if not w_self.initialization_state & 16: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'value'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'value') return space.wrap(w_self.value) def AugAssign_set_value(space, w_self, w_new_value): try: w_self.value = space.interp_w(expr, w_new_value, False) + if type(w_self.value) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'value', w_new_value) return w_self.deldictvalue(space, 'value') - w_self.initialization_state |= 4 + w_self.initialization_state |= 16 _AugAssign_field_unroller = unrolling_iterable(['target', 'op', 'value']) def AugAssign_init(space, w_self, __args__): @@ -3619,50 +3469,49 @@ w_obj = w_self.getdictvalue(space, 'dest') if w_obj is not None: return w_obj - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'dest'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'dest') return space.wrap(w_self.dest) def Print_set_dest(space, w_self, w_new_value): try: w_self.dest = space.interp_w(expr, w_new_value, True) + if type(w_self.dest) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'dest', w_new_value) return w_self.deldictvalue(space, 'dest') - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 def Print_get_values(space, w_self): - if not w_self.initialization_state & 2: + if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) - w_err = 
space.wrap("'%s' object has no attribute 'values'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'values') if w_self.w_values is None: if w_self.values is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.values] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_values = w_list return w_self.w_values def Print_set_values(space, w_self, w_new_value): w_self.w_values = w_new_value - w_self.initialization_state |= 2 + w_self.initialization_state |= 8 def Print_get_nl(space, w_self): if w_self.w_dict is not None: w_obj = w_self.getdictvalue(space, 'nl') if w_obj is not None: return w_obj - if not w_self.initialization_state & 4: + if not w_self.initialization_state & 16: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'nl'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'nl') return space.wrap(w_self.nl) def Print_set_nl(space, w_self, w_new_value): @@ -3674,7 +3523,7 @@ w_self.setdictvalue(space, 'nl', w_new_value) return w_self.deldictvalue(space, 'nl') - w_self.initialization_state |= 4 + w_self.initialization_state |= 16 _Print_field_unroller = unrolling_iterable(['dest', 'values', 'nl']) def Print_init(space, w_self, __args__): @@ -3708,80 +3557,80 @@ w_obj = w_self.getdictvalue(space, 'target') if w_obj is not None: return w_obj - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'target'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'target') return space.wrap(w_self.target) def For_set_target(space, w_self, w_new_value): try: w_self.target = space.interp_w(expr, w_new_value, False) + if type(w_self.target) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'target', w_new_value) return w_self.deldictvalue(space, 'target') - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 def For_get_iter(space, w_self): if w_self.w_dict is not None: w_obj = w_self.getdictvalue(space, 'iter') if w_obj is not None: return w_obj - if not w_self.initialization_state & 2: + if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'iter'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'iter') return space.wrap(w_self.iter) def For_set_iter(space, w_self, w_new_value): try: w_self.iter = space.interp_w(expr, w_new_value, False) + if type(w_self.iter) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'iter', w_new_value) return w_self.deldictvalue(space, 'iter') - w_self.initialization_state |= 2 + w_self.initialization_state |= 8 def For_get_body(space, w_self): - if not w_self.initialization_state & 4: + if not w_self.initialization_state & 16: typename = 
space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'body'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') if w_self.w_body is None: if w_self.body is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.body] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_body = w_list return w_self.w_body def For_set_body(space, w_self, w_new_value): w_self.w_body = w_new_value - w_self.initialization_state |= 4 + w_self.initialization_state |= 16 def For_get_orelse(space, w_self): - if not w_self.initialization_state & 8: + if not w_self.initialization_state & 32: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'orelse'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'orelse') if w_self.w_orelse is None: if w_self.orelse is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.orelse] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_orelse = w_list return w_self.w_orelse def For_set_orelse(space, w_self, w_new_value): w_self.w_orelse = w_new_value - w_self.initialization_state |= 8 + w_self.initialization_state |= 32 _For_field_unroller = unrolling_iterable(['target', 'iter', 'body', 'orelse']) def For_init(space, w_self, __args__): @@ -3817,58 +3666,57 @@ w_obj = w_self.getdictvalue(space, 'test') if w_obj is not None: return w_obj - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'test'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'test') return space.wrap(w_self.test) def While_set_test(space, w_self, w_new_value): try: w_self.test = space.interp_w(expr, w_new_value, False) + if type(w_self.test) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'test', w_new_value) return w_self.deldictvalue(space, 'test') - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 def While_get_body(space, w_self): - if not w_self.initialization_state & 2: + if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'body'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') if w_self.w_body is None: if w_self.body is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.body] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_body = w_list return w_self.w_body def While_set_body(space, w_self, w_new_value): w_self.w_body = w_new_value - w_self.initialization_state |= 2 + w_self.initialization_state |= 8 def While_get_orelse(space, w_self): - if not w_self.initialization_state & 4: + if not w_self.initialization_state & 16: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'orelse'" % 
typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'orelse') if w_self.w_orelse is None: if w_self.orelse is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.orelse] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_orelse = w_list return w_self.w_orelse def While_set_orelse(space, w_self, w_new_value): w_self.w_orelse = w_new_value - w_self.initialization_state |= 4 + w_self.initialization_state |= 16 _While_field_unroller = unrolling_iterable(['test', 'body', 'orelse']) def While_init(space, w_self, __args__): @@ -3903,58 +3751,57 @@ w_obj = w_self.getdictvalue(space, 'test') if w_obj is not None: return w_obj - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'test'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'test') return space.wrap(w_self.test) def If_set_test(space, w_self, w_new_value): try: w_self.test = space.interp_w(expr, w_new_value, False) + if type(w_self.test) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'test', w_new_value) return w_self.deldictvalue(space, 'test') - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 def If_get_body(space, w_self): - if not w_self.initialization_state & 2: + if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'body'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') if w_self.w_body is None: if w_self.body is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.body] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_body = w_list return w_self.w_body def If_set_body(space, w_self, w_new_value): w_self.w_body = w_new_value - w_self.initialization_state |= 2 + w_self.initialization_state |= 8 def If_get_orelse(space, w_self): - if not w_self.initialization_state & 4: + if not w_self.initialization_state & 16: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'orelse'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'orelse') if w_self.w_orelse is None: if w_self.orelse is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.orelse] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_orelse = w_list return w_self.w_orelse def If_set_orelse(space, w_self, w_new_value): w_self.w_orelse = w_new_value - w_self.initialization_state |= 4 + w_self.initialization_state |= 16 _If_field_unroller = unrolling_iterable(['test', 'body', 'orelse']) def If_init(space, w_self, __args__): @@ -3989,62 +3836,63 @@ w_obj = w_self.getdictvalue(space, 'context_expr') if w_obj is not None: return w_obj - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = 
space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'context_expr'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'context_expr') return space.wrap(w_self.context_expr) def With_set_context_expr(space, w_self, w_new_value): try: w_self.context_expr = space.interp_w(expr, w_new_value, False) + if type(w_self.context_expr) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'context_expr', w_new_value) return w_self.deldictvalue(space, 'context_expr') - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 def With_get_optional_vars(space, w_self): if w_self.w_dict is not None: w_obj = w_self.getdictvalue(space, 'optional_vars') if w_obj is not None: return w_obj - if not w_self.initialization_state & 2: + if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'optional_vars'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'optional_vars') return space.wrap(w_self.optional_vars) def With_set_optional_vars(space, w_self, w_new_value): try: w_self.optional_vars = space.interp_w(expr, w_new_value, True) + if type(w_self.optional_vars) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'optional_vars', w_new_value) return w_self.deldictvalue(space, 'optional_vars') - w_self.initialization_state |= 2 + w_self.initialization_state |= 8 def With_get_body(space, w_self): - if not w_self.initialization_state & 4: + if not w_self.initialization_state & 16: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'body'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') if w_self.w_body is None: if w_self.body is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.body] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_body = w_list return w_self.w_body def With_set_body(space, w_self, w_new_value): w_self.w_body = w_new_value - w_self.initialization_state |= 4 + w_self.initialization_state |= 16 _With_field_unroller = unrolling_iterable(['context_expr', 'optional_vars', 'body']) def With_init(space, w_self, __args__): @@ -4078,66 +3926,69 @@ w_obj = w_self.getdictvalue(space, 'type') if w_obj is not None: return w_obj - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'type'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'type') return space.wrap(w_self.type) def Raise_set_type(space, w_self, w_new_value): try: w_self.type = space.interp_w(expr, w_new_value, True) + if type(w_self.type) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'type', 
w_new_value) return w_self.deldictvalue(space, 'type') - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 def Raise_get_inst(space, w_self): if w_self.w_dict is not None: w_obj = w_self.getdictvalue(space, 'inst') if w_obj is not None: return w_obj - if not w_self.initialization_state & 2: + if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'inst'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'inst') return space.wrap(w_self.inst) def Raise_set_inst(space, w_self, w_new_value): try: w_self.inst = space.interp_w(expr, w_new_value, True) + if type(w_self.inst) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'inst', w_new_value) return w_self.deldictvalue(space, 'inst') - w_self.initialization_state |= 2 + w_self.initialization_state |= 8 def Raise_get_tback(space, w_self): if w_self.w_dict is not None: w_obj = w_self.getdictvalue(space, 'tback') if w_obj is not None: return w_obj - if not w_self.initialization_state & 4: + if not w_self.initialization_state & 16: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'tback'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'tback') return space.wrap(w_self.tback) def Raise_set_tback(space, w_self, w_new_value): try: w_self.tback = space.interp_w(expr, w_new_value, True) + if type(w_self.tback) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'tback', w_new_value) return w_self.deldictvalue(space, 'tback') - w_self.initialization_state |= 4 + w_self.initialization_state |= 16 _Raise_field_unroller = unrolling_iterable(['type', 'inst', 'tback']) def Raise_init(space, w_self, __args__): @@ -4166,58 +4017,55 @@ ) def TryExcept_get_body(space, w_self): - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'body'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') if w_self.w_body is None: if w_self.body is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.body] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_body = w_list return w_self.w_body def TryExcept_set_body(space, w_self, w_new_value): w_self.w_body = w_new_value - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 def TryExcept_get_handlers(space, w_self): - if not w_self.initialization_state & 2: + if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'handlers'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'handlers') if w_self.w_handlers is None: if w_self.handlers is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in 
w_self.handlers] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_handlers = w_list return w_self.w_handlers def TryExcept_set_handlers(space, w_self, w_new_value): w_self.w_handlers = w_new_value - w_self.initialization_state |= 2 + w_self.initialization_state |= 8 def TryExcept_get_orelse(space, w_self): - if not w_self.initialization_state & 4: + if not w_self.initialization_state & 16: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'orelse'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'orelse') if w_self.w_orelse is None: if w_self.orelse is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.orelse] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_orelse = w_list return w_self.w_orelse def TryExcept_set_orelse(space, w_self, w_new_value): w_self.w_orelse = w_new_value - w_self.initialization_state |= 4 + w_self.initialization_state |= 16 _TryExcept_field_unroller = unrolling_iterable(['body', 'handlers', 'orelse']) def TryExcept_init(space, w_self, __args__): @@ -4249,40 +4097,38 @@ ) def TryFinally_get_body(space, w_self): - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'body'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') if w_self.w_body is None: if w_self.body is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.body] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_body = w_list return w_self.w_body def TryFinally_set_body(space, w_self, w_new_value): w_self.w_body = w_new_value - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 def TryFinally_get_finalbody(space, w_self): - if not w_self.initialization_state & 2: + if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'finalbody'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'finalbody') if w_self.w_finalbody is None: if w_self.finalbody is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.finalbody] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_finalbody = w_list return w_self.w_finalbody def TryFinally_set_finalbody(space, w_self, w_new_value): w_self.w_finalbody = w_new_value - w_self.initialization_state |= 2 + w_self.initialization_state |= 8 _TryFinally_field_unroller = unrolling_iterable(['body', 'finalbody']) def TryFinally_init(space, w_self, __args__): @@ -4316,44 +4162,46 @@ w_obj = w_self.getdictvalue(space, 'test') if w_obj is not None: return w_obj - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'test'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'test') return space.wrap(w_self.test) def 
Assert_set_test(space, w_self, w_new_value): try: w_self.test = space.interp_w(expr, w_new_value, False) + if type(w_self.test) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'test', w_new_value) return w_self.deldictvalue(space, 'test') - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 def Assert_get_msg(space, w_self): if w_self.w_dict is not None: w_obj = w_self.getdictvalue(space, 'msg') if w_obj is not None: return w_obj - if not w_self.initialization_state & 2: + if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'msg'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'msg') return space.wrap(w_self.msg) def Assert_set_msg(space, w_self, w_new_value): try: w_self.msg = space.interp_w(expr, w_new_value, True) + if type(w_self.msg) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'msg', w_new_value) return w_self.deldictvalue(space, 'msg') - w_self.initialization_state |= 2 + w_self.initialization_state |= 8 _Assert_field_unroller = unrolling_iterable(['test', 'msg']) def Assert_init(space, w_self, __args__): @@ -4381,22 +4229,21 @@ ) def Import_get_names(space, w_self): - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'names'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'names') if w_self.w_names is None: if w_self.names is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.names] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_names = w_list return w_self.w_names def Import_set_names(space, w_self, w_new_value): w_self.w_names = w_new_value - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 _Import_field_unroller = unrolling_iterable(['names']) def Import_init(space, w_self, __args__): @@ -4428,10 +4275,9 @@ w_obj = w_self.getdictvalue(space, 'module') if w_obj is not None: return w_obj - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'module'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'module') return space.wrap(w_self.module) def ImportFrom_set_module(space, w_self, w_new_value): @@ -4446,35 +4292,33 @@ w_self.setdictvalue(space, 'module', w_new_value) return w_self.deldictvalue(space, 'module') - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 def ImportFrom_get_names(space, w_self): - if not w_self.initialization_state & 2: + if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'names'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 
'names') if w_self.w_names is None: if w_self.names is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.names] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_names = w_list return w_self.w_names def ImportFrom_set_names(space, w_self, w_new_value): w_self.w_names = w_new_value - w_self.initialization_state |= 2 + w_self.initialization_state |= 8 def ImportFrom_get_level(space, w_self): if w_self.w_dict is not None: w_obj = w_self.getdictvalue(space, 'level') if w_obj is not None: return w_obj - if not w_self.initialization_state & 4: + if not w_self.initialization_state & 16: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'level'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'level') return space.wrap(w_self.level) def ImportFrom_set_level(space, w_self, w_new_value): @@ -4486,7 +4330,7 @@ w_self.setdictvalue(space, 'level', w_new_value) return w_self.deldictvalue(space, 'level') - w_self.initialization_state |= 4 + w_self.initialization_state |= 16 _ImportFrom_field_unroller = unrolling_iterable(['module', 'names', 'level']) def ImportFrom_init(space, w_self, __args__): @@ -4520,66 +4364,69 @@ w_obj = w_self.getdictvalue(space, 'body') if w_obj is not None: return w_obj - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'body'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') return space.wrap(w_self.body) def Exec_set_body(space, w_self, w_new_value): try: w_self.body = space.interp_w(expr, w_new_value, False) + if type(w_self.body) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'body', w_new_value) return w_self.deldictvalue(space, 'body') - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 def Exec_get_globals(space, w_self): if w_self.w_dict is not None: w_obj = w_self.getdictvalue(space, 'globals') if w_obj is not None: return w_obj - if not w_self.initialization_state & 2: + if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'globals'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'globals') return space.wrap(w_self.globals) def Exec_set_globals(space, w_self, w_new_value): try: w_self.globals = space.interp_w(expr, w_new_value, True) + if type(w_self.globals) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'globals', w_new_value) return w_self.deldictvalue(space, 'globals') - w_self.initialization_state |= 2 + w_self.initialization_state |= 8 def Exec_get_locals(space, w_self): if w_self.w_dict is not None: w_obj = w_self.getdictvalue(space, 'locals') if w_obj is not None: return w_obj - if not w_self.initialization_state & 4: + if not w_self.initialization_state & 16: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has 
no attribute 'locals'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'locals') return space.wrap(w_self.locals) def Exec_set_locals(space, w_self, w_new_value): try: w_self.locals = space.interp_w(expr, w_new_value, True) + if type(w_self.locals) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'locals', w_new_value) return w_self.deldictvalue(space, 'locals') - w_self.initialization_state |= 4 + w_self.initialization_state |= 16 _Exec_field_unroller = unrolling_iterable(['body', 'globals', 'locals']) def Exec_init(space, w_self, __args__): @@ -4608,22 +4455,21 @@ ) def Global_get_names(space, w_self): - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'names'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'names') if w_self.w_names is None: if w_self.names is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.names] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_names = w_list return w_self.w_names def Global_set_names(space, w_self, w_new_value): w_self.w_names = w_new_value - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 _Global_field_unroller = unrolling_iterable(['names']) def Global_init(space, w_self, __args__): @@ -4655,22 +4501,23 @@ w_obj = w_self.getdictvalue(space, 'value') if w_obj is not None: return w_obj - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'value'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'value') return space.wrap(w_self.value) def Expr_set_value(space, w_self, w_new_value): try: w_self.value = space.interp_w(expr, w_new_value, False) + if type(w_self.value) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'value', w_new_value) return w_self.deldictvalue(space, 'value') - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 _Expr_field_unroller = unrolling_iterable(['value']) def Expr_init(space, w_self, __args__): @@ -4752,10 +4599,9 @@ w_obj = w_self.getdictvalue(space, 'lineno') if w_obj is not None: return w_obj - if not w_self.initialization_state & w_self._lineno_mask: + if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'lineno'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'lineno') return space.wrap(w_self.lineno) def expr_set_lineno(space, w_self, w_new_value): @@ -4767,17 +4613,16 @@ w_self.setdictvalue(space, 'lineno', w_new_value) return w_self.deldictvalue(space, 'lineno') - w_self.initialization_state |= w_self._lineno_mask + w_self.initialization_state |= 1 def expr_get_col_offset(space, w_self): if 
w_self.w_dict is not None: w_obj = w_self.getdictvalue(space, 'col_offset') if w_obj is not None: return w_obj - if not w_self.initialization_state & w_self._col_offset_mask: + if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'col_offset'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'col_offset') return space.wrap(w_self.col_offset) def expr_set_col_offset(space, w_self, w_new_value): @@ -4789,7 +4634,7 @@ w_self.setdictvalue(space, 'col_offset', w_new_value) return w_self.deldictvalue(space, 'col_offset') - w_self.initialization_state |= w_self._col_offset_mask + w_self.initialization_state |= 2 expr.typedef = typedef.TypeDef("expr", AST.typedef, @@ -4805,10 +4650,9 @@ w_obj = w_self.getdictvalue(space, 'op') if w_obj is not None: return w_obj - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'op'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'op') return boolop_to_class[w_self.op - 1]() def BoolOp_set_op(space, w_self, w_new_value): @@ -4822,25 +4666,24 @@ return # need to save the original object too w_self.setdictvalue(space, 'op', w_new_value) - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 def BoolOp_get_values(space, w_self): - if not w_self.initialization_state & 2: + if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'values'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'values') if w_self.w_values is None: if w_self.values is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.values] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_values = w_list return w_self.w_values def BoolOp_set_values(space, w_self, w_new_value): w_self.w_values = w_new_value - w_self.initialization_state |= 2 + w_self.initialization_state |= 8 _BoolOp_field_unroller = unrolling_iterable(['op', 'values']) def BoolOp_init(space, w_self, __args__): @@ -4873,32 +4716,32 @@ w_obj = w_self.getdictvalue(space, 'left') if w_obj is not None: return w_obj - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'left'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'left') return space.wrap(w_self.left) def BinOp_set_left(space, w_self, w_new_value): try: w_self.left = space.interp_w(expr, w_new_value, False) + if type(w_self.left) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'left', w_new_value) return w_self.deldictvalue(space, 'left') - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 def BinOp_get_op(space, w_self): if w_self.w_dict is not None: w_obj = w_self.getdictvalue(space, 'op') if w_obj is not 
None: return w_obj - if not w_self.initialization_state & 2: + if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'op'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'op') return operator_to_class[w_self.op - 1]() def BinOp_set_op(space, w_self, w_new_value): @@ -4912,29 +4755,30 @@ return # need to save the original object too w_self.setdictvalue(space, 'op', w_new_value) - w_self.initialization_state |= 2 + w_self.initialization_state |= 8 def BinOp_get_right(space, w_self): if w_self.w_dict is not None: w_obj = w_self.getdictvalue(space, 'right') if w_obj is not None: return w_obj - if not w_self.initialization_state & 4: + if not w_self.initialization_state & 16: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'right'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'right') return space.wrap(w_self.right) def BinOp_set_right(space, w_self, w_new_value): try: w_self.right = space.interp_w(expr, w_new_value, False) + if type(w_self.right) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'right', w_new_value) return w_self.deldictvalue(space, 'right') - w_self.initialization_state |= 4 + w_self.initialization_state |= 16 _BinOp_field_unroller = unrolling_iterable(['left', 'op', 'right']) def BinOp_init(space, w_self, __args__): @@ -4967,10 +4811,9 @@ w_obj = w_self.getdictvalue(space, 'op') if w_obj is not None: return w_obj - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'op'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'op') return unaryop_to_class[w_self.op - 1]() def UnaryOp_set_op(space, w_self, w_new_value): @@ -4984,29 +4827,30 @@ return # need to save the original object too w_self.setdictvalue(space, 'op', w_new_value) - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 def UnaryOp_get_operand(space, w_self): if w_self.w_dict is not None: w_obj = w_self.getdictvalue(space, 'operand') if w_obj is not None: return w_obj - if not w_self.initialization_state & 2: + if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'operand'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'operand') return space.wrap(w_self.operand) def UnaryOp_set_operand(space, w_self, w_new_value): try: w_self.operand = space.interp_w(expr, w_new_value, False) + if type(w_self.operand) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'operand', w_new_value) return w_self.deldictvalue(space, 'operand') - w_self.initialization_state |= 2 + w_self.initialization_state |= 8 _UnaryOp_field_unroller = unrolling_iterable(['op', 'operand']) def UnaryOp_init(space, w_self, 
__args__): @@ -5038,10 +4882,9 @@ w_obj = w_self.getdictvalue(space, 'args') if w_obj is not None: return w_obj - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'args'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'args') return space.wrap(w_self.args) def Lambda_set_args(space, w_self, w_new_value): @@ -5053,29 +4896,30 @@ w_self.setdictvalue(space, 'args', w_new_value) return w_self.deldictvalue(space, 'args') - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 def Lambda_get_body(space, w_self): if w_self.w_dict is not None: w_obj = w_self.getdictvalue(space, 'body') if w_obj is not None: return w_obj - if not w_self.initialization_state & 2: + if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'body'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') return space.wrap(w_self.body) def Lambda_set_body(space, w_self, w_new_value): try: w_self.body = space.interp_w(expr, w_new_value, False) + if type(w_self.body) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'body', w_new_value) return w_self.deldictvalue(space, 'body') - w_self.initialization_state |= 2 + w_self.initialization_state |= 8 _Lambda_field_unroller = unrolling_iterable(['args', 'body']) def Lambda_init(space, w_self, __args__): @@ -5107,66 +4951,69 @@ w_obj = w_self.getdictvalue(space, 'test') if w_obj is not None: return w_obj - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'test'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'test') return space.wrap(w_self.test) def IfExp_set_test(space, w_self, w_new_value): try: w_self.test = space.interp_w(expr, w_new_value, False) + if type(w_self.test) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'test', w_new_value) return w_self.deldictvalue(space, 'test') - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 def IfExp_get_body(space, w_self): if w_self.w_dict is not None: w_obj = w_self.getdictvalue(space, 'body') if w_obj is not None: return w_obj - if not w_self.initialization_state & 2: + if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'body'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') return space.wrap(w_self.body) def IfExp_set_body(space, w_self, w_new_value): try: w_self.body = space.interp_w(expr, w_new_value, False) + if type(w_self.body) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 
'body', w_new_value) return w_self.deldictvalue(space, 'body') - w_self.initialization_state |= 2 + w_self.initialization_state |= 8 def IfExp_get_orelse(space, w_self): if w_self.w_dict is not None: w_obj = w_self.getdictvalue(space, 'orelse') if w_obj is not None: return w_obj - if not w_self.initialization_state & 4: + if not w_self.initialization_state & 16: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'orelse'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'orelse') return space.wrap(w_self.orelse) def IfExp_set_orelse(space, w_self, w_new_value): try: w_self.orelse = space.interp_w(expr, w_new_value, False) + if type(w_self.orelse) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'orelse', w_new_value) return w_self.deldictvalue(space, 'orelse') - w_self.initialization_state |= 4 + w_self.initialization_state |= 16 _IfExp_field_unroller = unrolling_iterable(['test', 'body', 'orelse']) def IfExp_init(space, w_self, __args__): @@ -5195,40 +5042,38 @@ ) def Dict_get_keys(space, w_self): - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'keys'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'keys') if w_self.w_keys is None: if w_self.keys is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.keys] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_keys = w_list return w_self.w_keys def Dict_set_keys(space, w_self, w_new_value): w_self.w_keys = w_new_value - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 def Dict_get_values(space, w_self): - if not w_self.initialization_state & 2: + if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'values'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'values') if w_self.w_values is None: if w_self.values is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.values] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_values = w_list return w_self.w_values def Dict_set_values(space, w_self, w_new_value): w_self.w_values = w_new_value - w_self.initialization_state |= 2 + w_self.initialization_state |= 8 _Dict_field_unroller = unrolling_iterable(['keys', 'values']) def Dict_init(space, w_self, __args__): @@ -5258,22 +5103,21 @@ ) def Set_get_elts(space, w_self): - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'elts'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'elts') if w_self.w_elts is None: if w_self.elts is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.elts] - w_list = 
space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_elts = w_list return w_self.w_elts def Set_set_elts(space, w_self, w_new_value): w_self.w_elts = w_new_value - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 _Set_field_unroller = unrolling_iterable(['elts']) def Set_init(space, w_self, __args__): @@ -5305,40 +5149,40 @@ w_obj = w_self.getdictvalue(space, 'elt') if w_obj is not None: return w_obj - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'elt'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'elt') return space.wrap(w_self.elt) def ListComp_set_elt(space, w_self, w_new_value): try: w_self.elt = space.interp_w(expr, w_new_value, False) + if type(w_self.elt) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'elt', w_new_value) return w_self.deldictvalue(space, 'elt') - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 def ListComp_get_generators(space, w_self): - if not w_self.initialization_state & 2: + if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'generators'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'generators') if w_self.w_generators is None: if w_self.generators is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.generators] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_generators = w_list return w_self.w_generators def ListComp_set_generators(space, w_self, w_new_value): w_self.w_generators = w_new_value - w_self.initialization_state |= 2 + w_self.initialization_state |= 8 _ListComp_field_unroller = unrolling_iterable(['elt', 'generators']) def ListComp_init(space, w_self, __args__): @@ -5371,40 +5215,40 @@ w_obj = w_self.getdictvalue(space, 'elt') if w_obj is not None: return w_obj - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'elt'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'elt') return space.wrap(w_self.elt) def SetComp_set_elt(space, w_self, w_new_value): try: w_self.elt = space.interp_w(expr, w_new_value, False) + if type(w_self.elt) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'elt', w_new_value) return w_self.deldictvalue(space, 'elt') - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 def SetComp_get_generators(space, w_self): - if not w_self.initialization_state & 2: + if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'generators'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'generators') if 
w_self.w_generators is None: if w_self.generators is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.generators] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_generators = w_list return w_self.w_generators def SetComp_set_generators(space, w_self, w_new_value): w_self.w_generators = w_new_value - w_self.initialization_state |= 2 + w_self.initialization_state |= 8 _SetComp_field_unroller = unrolling_iterable(['elt', 'generators']) def SetComp_init(space, w_self, __args__): @@ -5437,62 +5281,63 @@ w_obj = w_self.getdictvalue(space, 'key') if w_obj is not None: return w_obj - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'key'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'key') return space.wrap(w_self.key) def DictComp_set_key(space, w_self, w_new_value): try: w_self.key = space.interp_w(expr, w_new_value, False) + if type(w_self.key) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'key', w_new_value) return w_self.deldictvalue(space, 'key') - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 def DictComp_get_value(space, w_self): if w_self.w_dict is not None: w_obj = w_self.getdictvalue(space, 'value') if w_obj is not None: return w_obj - if not w_self.initialization_state & 2: + if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'value'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'value') return space.wrap(w_self.value) def DictComp_set_value(space, w_self, w_new_value): try: w_self.value = space.interp_w(expr, w_new_value, False) + if type(w_self.value) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'value', w_new_value) return w_self.deldictvalue(space, 'value') - w_self.initialization_state |= 2 + w_self.initialization_state |= 8 def DictComp_get_generators(space, w_self): - if not w_self.initialization_state & 4: + if not w_self.initialization_state & 16: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'generators'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'generators') if w_self.w_generators is None: if w_self.generators is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.generators] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_generators = w_list return w_self.w_generators def DictComp_set_generators(space, w_self, w_new_value): w_self.w_generators = w_new_value - w_self.initialization_state |= 4 + w_self.initialization_state |= 16 _DictComp_field_unroller = unrolling_iterable(['key', 'value', 'generators']) def DictComp_init(space, w_self, __args__): @@ -5526,40 +5371,40 @@ w_obj = w_self.getdictvalue(space, 'elt') if w_obj is not None: return w_obj - if not 
w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'elt'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'elt') return space.wrap(w_self.elt) def GeneratorExp_set_elt(space, w_self, w_new_value): try: w_self.elt = space.interp_w(expr, w_new_value, False) + if type(w_self.elt) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'elt', w_new_value) return w_self.deldictvalue(space, 'elt') - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 def GeneratorExp_get_generators(space, w_self): - if not w_self.initialization_state & 2: + if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'generators'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'generators') if w_self.w_generators is None: if w_self.generators is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.generators] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_generators = w_list return w_self.w_generators def GeneratorExp_set_generators(space, w_self, w_new_value): w_self.w_generators = w_new_value - w_self.initialization_state |= 2 + w_self.initialization_state |= 8 _GeneratorExp_field_unroller = unrolling_iterable(['elt', 'generators']) def GeneratorExp_init(space, w_self, __args__): @@ -5592,22 +5437,23 @@ w_obj = w_self.getdictvalue(space, 'value') if w_obj is not None: return w_obj - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'value'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'value') return space.wrap(w_self.value) def Yield_set_value(space, w_self, w_new_value): try: w_self.value = space.interp_w(expr, w_new_value, True) + if type(w_self.value) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'value', w_new_value) return w_self.deldictvalue(space, 'value') - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 _Yield_field_unroller = unrolling_iterable(['value']) def Yield_init(space, w_self, __args__): @@ -5638,58 +5484,57 @@ w_obj = w_self.getdictvalue(space, 'left') if w_obj is not None: return w_obj - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'left'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'left') return space.wrap(w_self.left) def Compare_set_left(space, w_self, w_new_value): try: w_self.left = space.interp_w(expr, w_new_value, False) + if type(w_self.left) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not 
e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'left', w_new_value) return w_self.deldictvalue(space, 'left') - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 def Compare_get_ops(space, w_self): - if not w_self.initialization_state & 2: + if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'ops'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'ops') if w_self.w_ops is None: if w_self.ops is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [cmpop_to_class[node - 1]() for node in w_self.ops] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_ops = w_list return w_self.w_ops def Compare_set_ops(space, w_self, w_new_value): w_self.w_ops = w_new_value - w_self.initialization_state |= 2 + w_self.initialization_state |= 8 def Compare_get_comparators(space, w_self): - if not w_self.initialization_state & 4: + if not w_self.initialization_state & 16: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'comparators'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'comparators') if w_self.w_comparators is None: if w_self.comparators is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.comparators] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_comparators = w_list return w_self.w_comparators def Compare_set_comparators(space, w_self, w_new_value): w_self.w_comparators = w_new_value - w_self.initialization_state |= 4 + w_self.initialization_state |= 16 _Compare_field_unroller = unrolling_iterable(['left', 'ops', 'comparators']) def Compare_init(space, w_self, __args__): @@ -5724,102 +5569,103 @@ w_obj = w_self.getdictvalue(space, 'func') if w_obj is not None: return w_obj - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'func'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'func') return space.wrap(w_self.func) def Call_set_func(space, w_self, w_new_value): try: w_self.func = space.interp_w(expr, w_new_value, False) + if type(w_self.func) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'func', w_new_value) return w_self.deldictvalue(space, 'func') - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 def Call_get_args(space, w_self): - if not w_self.initialization_state & 2: + if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'args'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'args') if w_self.w_args is None: if w_self.args is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.args] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_args = w_list 
return w_self.w_args def Call_set_args(space, w_self, w_new_value): w_self.w_args = w_new_value - w_self.initialization_state |= 2 + w_self.initialization_state |= 8 def Call_get_keywords(space, w_self): - if not w_self.initialization_state & 4: + if not w_self.initialization_state & 16: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'keywords'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'keywords') if w_self.w_keywords is None: if w_self.keywords is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.keywords] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_keywords = w_list return w_self.w_keywords def Call_set_keywords(space, w_self, w_new_value): w_self.w_keywords = w_new_value - w_self.initialization_state |= 4 + w_self.initialization_state |= 16 def Call_get_starargs(space, w_self): if w_self.w_dict is not None: w_obj = w_self.getdictvalue(space, 'starargs') if w_obj is not None: return w_obj - if not w_self.initialization_state & 8: + if not w_self.initialization_state & 32: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'starargs'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'starargs') return space.wrap(w_self.starargs) def Call_set_starargs(space, w_self, w_new_value): try: w_self.starargs = space.interp_w(expr, w_new_value, True) + if type(w_self.starargs) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'starargs', w_new_value) return w_self.deldictvalue(space, 'starargs') - w_self.initialization_state |= 8 + w_self.initialization_state |= 32 def Call_get_kwargs(space, w_self): if w_self.w_dict is not None: w_obj = w_self.getdictvalue(space, 'kwargs') if w_obj is not None: return w_obj - if not w_self.initialization_state & 16: + if not w_self.initialization_state & 64: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'kwargs'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'kwargs') return space.wrap(w_self.kwargs) def Call_set_kwargs(space, w_self, w_new_value): try: w_self.kwargs = space.interp_w(expr, w_new_value, True) + if type(w_self.kwargs) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'kwargs', w_new_value) return w_self.deldictvalue(space, 'kwargs') - w_self.initialization_state |= 16 + w_self.initialization_state |= 64 _Call_field_unroller = unrolling_iterable(['func', 'args', 'keywords', 'starargs', 'kwargs']) def Call_init(space, w_self, __args__): @@ -5856,22 +5702,23 @@ w_obj = w_self.getdictvalue(space, 'value') if w_obj is not None: return w_obj - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'value'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no 
attribute '%s'", typename, 'value') return space.wrap(w_self.value) def Repr_set_value(space, w_self, w_new_value): try: w_self.value = space.interp_w(expr, w_new_value, False) + if type(w_self.value) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'value', w_new_value) return w_self.deldictvalue(space, 'value') - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 _Repr_field_unroller = unrolling_iterable(['value']) def Repr_init(space, w_self, __args__): @@ -5902,10 +5749,9 @@ w_obj = w_self.getdictvalue(space, 'n') if w_obj is not None: return w_obj - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'n'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'n') return w_self.n def Num_set_n(space, w_self, w_new_value): @@ -5917,7 +5763,7 @@ w_self.setdictvalue(space, 'n', w_new_value) return w_self.deldictvalue(space, 'n') - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 _Num_field_unroller = unrolling_iterable(['n']) def Num_init(space, w_self, __args__): @@ -5948,10 +5794,9 @@ w_obj = w_self.getdictvalue(space, 's') if w_obj is not None: return w_obj - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 's'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 's') return w_self.s def Str_set_s(space, w_self, w_new_value): @@ -5963,7 +5808,7 @@ w_self.setdictvalue(space, 's', w_new_value) return w_self.deldictvalue(space, 's') - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 _Str_field_unroller = unrolling_iterable(['s']) def Str_init(space, w_self, __args__): @@ -5994,32 +5839,32 @@ w_obj = w_self.getdictvalue(space, 'value') if w_obj is not None: return w_obj - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'value'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'value') return space.wrap(w_self.value) def Attribute_set_value(space, w_self, w_new_value): try: w_self.value = space.interp_w(expr, w_new_value, False) + if type(w_self.value) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'value', w_new_value) return w_self.deldictvalue(space, 'value') - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 def Attribute_get_attr(space, w_self): if w_self.w_dict is not None: w_obj = w_self.getdictvalue(space, 'attr') if w_obj is not None: return w_obj - if not w_self.initialization_state & 2: + if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'attr'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute 
'%s'", typename, 'attr') return space.wrap(w_self.attr) def Attribute_set_attr(space, w_self, w_new_value): @@ -6031,17 +5876,16 @@ w_self.setdictvalue(space, 'attr', w_new_value) return w_self.deldictvalue(space, 'attr') - w_self.initialization_state |= 2 + w_self.initialization_state |= 8 def Attribute_get_ctx(space, w_self): if w_self.w_dict is not None: w_obj = w_self.getdictvalue(space, 'ctx') if w_obj is not None: return w_obj - if not w_self.initialization_state & 4: + if not w_self.initialization_state & 16: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'ctx'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'ctx') return expr_context_to_class[w_self.ctx - 1]() def Attribute_set_ctx(space, w_self, w_new_value): @@ -6055,7 +5899,7 @@ return # need to save the original object too w_self.setdictvalue(space, 'ctx', w_new_value) - w_self.initialization_state |= 4 + w_self.initialization_state |= 16 _Attribute_field_unroller = unrolling_iterable(['value', 'attr', 'ctx']) def Attribute_init(space, w_self, __args__): @@ -6088,54 +5932,55 @@ w_obj = w_self.getdictvalue(space, 'value') if w_obj is not None: return w_obj - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'value'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'value') return space.wrap(w_self.value) def Subscript_set_value(space, w_self, w_new_value): try: w_self.value = space.interp_w(expr, w_new_value, False) + if type(w_self.value) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'value', w_new_value) return w_self.deldictvalue(space, 'value') - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 def Subscript_get_slice(space, w_self): if w_self.w_dict is not None: w_obj = w_self.getdictvalue(space, 'slice') if w_obj is not None: return w_obj - if not w_self.initialization_state & 2: + if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'slice'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'slice') return space.wrap(w_self.slice) def Subscript_set_slice(space, w_self, w_new_value): try: w_self.slice = space.interp_w(slice, w_new_value, False) + if type(w_self.slice) is slice: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'slice', w_new_value) return w_self.deldictvalue(space, 'slice') - w_self.initialization_state |= 2 + w_self.initialization_state |= 8 def Subscript_get_ctx(space, w_self): if w_self.w_dict is not None: w_obj = w_self.getdictvalue(space, 'ctx') if w_obj is not None: return w_obj - if not w_self.initialization_state & 4: + if not w_self.initialization_state & 16: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'ctx'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise 
operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'ctx') return expr_context_to_class[w_self.ctx - 1]() def Subscript_set_ctx(space, w_self, w_new_value): @@ -6149,7 +5994,7 @@ return # need to save the original object too w_self.setdictvalue(space, 'ctx', w_new_value) - w_self.initialization_state |= 4 + w_self.initialization_state |= 16 _Subscript_field_unroller = unrolling_iterable(['value', 'slice', 'ctx']) def Subscript_init(space, w_self, __args__): @@ -6182,10 +6027,9 @@ w_obj = w_self.getdictvalue(space, 'id') if w_obj is not None: return w_obj - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'id'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'id') return space.wrap(w_self.id) def Name_set_id(space, w_self, w_new_value): @@ -6197,17 +6041,16 @@ w_self.setdictvalue(space, 'id', w_new_value) return w_self.deldictvalue(space, 'id') - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 def Name_get_ctx(space, w_self): if w_self.w_dict is not None: w_obj = w_self.getdictvalue(space, 'ctx') if w_obj is not None: return w_obj - if not w_self.initialization_state & 2: + if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'ctx'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'ctx') return expr_context_to_class[w_self.ctx - 1]() def Name_set_ctx(space, w_self, w_new_value): @@ -6221,7 +6064,7 @@ return # need to save the original object too w_self.setdictvalue(space, 'ctx', w_new_value) - w_self.initialization_state |= 2 + w_self.initialization_state |= 8 _Name_field_unroller = unrolling_iterable(['id', 'ctx']) def Name_init(space, w_self, __args__): @@ -6249,32 +6092,30 @@ ) def List_get_elts(space, w_self): - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'elts'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'elts') if w_self.w_elts is None: if w_self.elts is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.elts] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_elts = w_list return w_self.w_elts def List_set_elts(space, w_self, w_new_value): w_self.w_elts = w_new_value - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 def List_get_ctx(space, w_self): if w_self.w_dict is not None: w_obj = w_self.getdictvalue(space, 'ctx') if w_obj is not None: return w_obj - if not w_self.initialization_state & 2: + if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'ctx'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'ctx') return expr_context_to_class[w_self.ctx - 1]() def List_set_ctx(space, w_self, w_new_value): @@ -6288,7 +6129,7 @@ return # need to save the original object too 
w_self.setdictvalue(space, 'ctx', w_new_value) - w_self.initialization_state |= 2 + w_self.initialization_state |= 8 _List_field_unroller = unrolling_iterable(['elts', 'ctx']) def List_init(space, w_self, __args__): @@ -6317,32 +6158,30 @@ ) def Tuple_get_elts(space, w_self): - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'elts'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'elts') if w_self.w_elts is None: if w_self.elts is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.elts] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_elts = w_list return w_self.w_elts def Tuple_set_elts(space, w_self, w_new_value): w_self.w_elts = w_new_value - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 def Tuple_get_ctx(space, w_self): if w_self.w_dict is not None: w_obj = w_self.getdictvalue(space, 'ctx') if w_obj is not None: return w_obj - if not w_self.initialization_state & 2: + if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'ctx'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'ctx') return expr_context_to_class[w_self.ctx - 1]() def Tuple_set_ctx(space, w_self, w_new_value): @@ -6356,7 +6195,7 @@ return # need to save the original object too w_self.setdictvalue(space, 'ctx', w_new_value) - w_self.initialization_state |= 2 + w_self.initialization_state |= 8 _Tuple_field_unroller = unrolling_iterable(['elts', 'ctx']) def Tuple_init(space, w_self, __args__): @@ -6389,10 +6228,9 @@ w_obj = w_self.getdictvalue(space, 'value') if w_obj is not None: return w_obj - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'value'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'value') return w_self.value def Const_set_value(space, w_self, w_new_value): @@ -6404,7 +6242,7 @@ w_self.setdictvalue(space, 'value', w_new_value) return w_self.deldictvalue(space, 'value') - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 _Const_field_unroller = unrolling_iterable(['value']) def Const_init(space, w_self, __args__): @@ -6510,13 +6348,14 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'lower'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'lower') return space.wrap(w_self.lower) def Slice_set_lower(space, w_self, w_new_value): try: w_self.lower = space.interp_w(expr, w_new_value, True) + if type(w_self.lower) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise @@ -6532,13 +6371,14 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 
'upper'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'upper') return space.wrap(w_self.upper) def Slice_set_upper(space, w_self, w_new_value): try: w_self.upper = space.interp_w(expr, w_new_value, True) + if type(w_self.upper) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise @@ -6554,13 +6394,14 @@ return w_obj if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'step'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'step') return space.wrap(w_self.step) def Slice_set_step(space, w_self, w_new_value): try: w_self.step = space.interp_w(expr, w_new_value, True) + if type(w_self.step) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise @@ -6598,14 +6439,13 @@ def ExtSlice_get_dims(space, w_self): if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'dims'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'dims') if w_self.w_dims is None: if w_self.dims is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.dims] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_dims = w_list return w_self.w_dims @@ -6645,13 +6485,14 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'value'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'value') return space.wrap(w_self.value) def Index_set_value(space, w_self, w_new_value): try: w_self.value = space.interp_w(expr, w_new_value, False) + if type(w_self.value) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise @@ -6915,13 +6756,14 @@ return w_obj if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'target'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'target') return space.wrap(w_self.target) def comprehension_set_target(space, w_self, w_new_value): try: w_self.target = space.interp_w(expr, w_new_value, False) + if type(w_self.target) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise @@ -6937,13 +6779,14 @@ return w_obj if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'iter'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'iter') return space.wrap(w_self.iter) def comprehension_set_iter(space, w_self, w_new_value): try: w_self.iter = 
space.interp_w(expr, w_new_value, False) + if type(w_self.iter) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise @@ -6955,14 +6798,13 @@ def comprehension_get_ifs(space, w_self): if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'ifs'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'ifs') if w_self.w_ifs is None: if w_self.ifs is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.ifs] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_ifs = w_list return w_self.w_ifs @@ -7002,10 +6844,9 @@ w_obj = w_self.getdictvalue(space, 'lineno') if w_obj is not None: return w_obj - if not w_self.initialization_state & w_self._lineno_mask: + if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'lineno'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'lineno') return space.wrap(w_self.lineno) def excepthandler_set_lineno(space, w_self, w_new_value): @@ -7017,17 +6858,16 @@ w_self.setdictvalue(space, 'lineno', w_new_value) return w_self.deldictvalue(space, 'lineno') - w_self.initialization_state |= w_self._lineno_mask + w_self.initialization_state |= 1 def excepthandler_get_col_offset(space, w_self): if w_self.w_dict is not None: w_obj = w_self.getdictvalue(space, 'col_offset') if w_obj is not None: return w_obj - if not w_self.initialization_state & w_self._col_offset_mask: + if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'col_offset'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'col_offset') return space.wrap(w_self.col_offset) def excepthandler_set_col_offset(space, w_self, w_new_value): @@ -7039,7 +6879,7 @@ w_self.setdictvalue(space, 'col_offset', w_new_value) return w_self.deldictvalue(space, 'col_offset') - w_self.initialization_state |= w_self._col_offset_mask + w_self.initialization_state |= 2 excepthandler.typedef = typedef.TypeDef("excepthandler", AST.typedef, @@ -7055,62 +6895,63 @@ w_obj = w_self.getdictvalue(space, 'type') if w_obj is not None: return w_obj - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'type'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'type') return space.wrap(w_self.type) def ExceptHandler_set_type(space, w_self, w_new_value): try: w_self.type = space.interp_w(expr, w_new_value, True) + if type(w_self.type) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'type', w_new_value) return w_self.deldictvalue(space, 'type') - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 def ExceptHandler_get_name(space, w_self): if w_self.w_dict is not None: 
w_obj = w_self.getdictvalue(space, 'name') if w_obj is not None: return w_obj - if not w_self.initialization_state & 2: + if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'name'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'name') return space.wrap(w_self.name) def ExceptHandler_set_name(space, w_self, w_new_value): try: w_self.name = space.interp_w(expr, w_new_value, True) + if type(w_self.name) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'name', w_new_value) return w_self.deldictvalue(space, 'name') - w_self.initialization_state |= 2 + w_self.initialization_state |= 8 def ExceptHandler_get_body(space, w_self): - if not w_self.initialization_state & 4: + if not w_self.initialization_state & 16: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'body'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') if w_self.w_body is None: if w_self.body is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.body] - w_list = space.newlist(list_w) + w_list = space.newlist(list_w) w_self.w_body = w_list return w_self.w_body def ExceptHandler_set_body(space, w_self, w_new_value): w_self.w_body = w_new_value - w_self.initialization_state |= 4 + w_self.initialization_state |= 16 _ExceptHandler_field_unroller = unrolling_iterable(['type', 'name', 'body']) def ExceptHandler_init(space, w_self, __args__): @@ -7142,14 +6983,13 @@ def arguments_get_args(space, w_self): if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) - w_err = space.wrap("'%s' object has no attribute 'args'" % typename) - raise OperationError(space.w_AttributeError, w_err) + raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'args') if w_self.w_args is None: if w_self.args is None: - w_list = space.newlist([]) + list_w = [] else: list_w = [space.wrap(node) for node in w_self.args] From noreply at buildbot.pypy.org Sat Feb 4 05:39:14 2012 From: noreply at buildbot.pypy.org (wlav) Date: Sat, 4 Feb 2012 05:39:14 +0100 (CET) Subject: [pypy-commit] pypy reflex-support: o) merge default Message-ID: <20120204043914.D8D98710770@wyvern.cs.uni-duesseldorf.de> Author: Wim Lavrijsen Branch: reflex-support Changeset: r52077:867e6bde0e52 Date: 2012-01-30 15:35 -0800 http://bitbucket.org/pypy/pypy/changeset/867e6bde0e52/ Log: o) merge default o) use jitcell count to check ffi call in test_zjit o) more consistent use of opaque C handle types diff too long, truncating to 10000 out of 14540 lines diff --git a/LICENSE b/LICENSE --- a/LICENSE +++ b/LICENSE @@ -37,43 +37,47 @@ Armin Rigo Maciej Fijalkowski Carl Friedrich Bolz + Amaury Forgeot d'Arc Antonio Cuni - Amaury Forgeot d'Arc Samuele Pedroni Michael Hudson Holger Krekel - Benjamin Peterson + Alex Gaynor Christian Tismer Hakan Ardo - Alex Gaynor + Benjamin Peterson + David Schneider Eric van Riet Paap Anders Chrigstrom - David Schneider Richard Emslie Dan Villiom Podlaski Christiansen Alexander Schremmer + Lukas Diekmann Aurelien Campeas Anders Lehmann Camillo Bruni Niklaus Haldimann + Sven Hager 
Leonardo Santagada Toon Verwaest Seo Sanghyeon + Justin Peel Lawrence Oluyede Bartosz Skowron Jakub Gustak Guido Wesdorp Daniel Roberts + Laura Creighton Adrien Di Mascio - Laura Creighton Ludovic Aubry Niko Matsakis + Wim Lavrijsen + Matti Picus Jason Creighton Jacob Hallen Alex Martelli Anders Hammarquist Jan de Mooij - Wim Lavrijsen Stephan Diehl Michael Foord Stefan Schwarzer @@ -84,34 +88,36 @@ Alexandre Fayolle Marius Gedminas Simon Burton - Justin Peel + David Edelsohn Jean-Paul Calderone John Witulski - Lukas Diekmann + Timo Paulssen holger krekel - Wim Lavrijsen Dario Bertini + Mark Pearse Andreas Stührk Jean-Philippe St. Pierre Guido van Rossum Pavel Vinogradov Valentino Volonghi Paul deGrandis + Ilya Osadchiy + Ronny Pfannschmidt Adrian Kuhn tav Georg Brandl + Philip Jenvey Gerald Klix Wanja Saatkamp - Ronny Pfannschmidt Boris Feigin Oscar Nierstrasz David Malcolm Eugene Oden Henry Mason - Sven Hager + Jeff Terrace Lukas Renggli - Ilya Osadchiy Guenter Jantzen + Ned Batchelder Bert Freudenberg Amit Regmi Ben Young @@ -142,7 +148,6 @@ Anders Qvist Beatrice During Alexander Sedov - Timo Paulssen Corbin Simpson Vincent Legoll Romain Guillebert @@ -165,9 +170,10 @@ Lucio Torre Lene Wagner Miguel de Val Borro + Artur Lisiecki + Bruno Gola Ignas Mikalajunas - Artur Lisiecki - Philip Jenvey + Stefano Rivera Joshua Gilbert Godefroid Chappelle Yusei Tahara @@ -179,17 +185,17 @@ Kristjan Valur Jonsson Bobby Impollonia Michael Hudson-Doyle + Laurence Tratt + Yasir Suhail Andrew Thompson Anders Sigfridsson Floris Bruynooghe Jacek Generowicz Dan Colish Zooko Wilcox-O Hearn - Dan Villiom Podlaski Christiansen - Anders Hammarquist + Dan Loewenherz Chris Lambacher Dinu Gherman - Dan Colish Brett Cannon Daniel Neuhäuser Michael Chermside diff --git a/lib_pypy/_ctypes/structure.py b/lib_pypy/_ctypes/structure.py --- a/lib_pypy/_ctypes/structure.py +++ b/lib_pypy/_ctypes/structure.py @@ -73,8 +73,12 @@ class Field(object): def __init__(self, name, offset, size, ctype, num, is_bitfield): - for k in ('name', 'offset', 'size', 'ctype', 'num', 'is_bitfield'): - self.__dict__[k] = locals()[k] + self.__dict__['name'] = name + self.__dict__['offset'] = offset + self.__dict__['size'] = size + self.__dict__['ctype'] = ctype + self.__dict__['num'] = num + self.__dict__['is_bitfield'] = is_bitfield def __setattr__(self, name, value): raise AttributeError(name) diff --git a/lib_pypy/_sqlite3.py b/lib_pypy/_sqlite3.py --- a/lib_pypy/_sqlite3.py +++ b/lib_pypy/_sqlite3.py @@ -20,6 +20,8 @@ # 2. Altered source versions must be plainly marked as such, and must not be # misrepresented as being the original software. # 3. This notice may not be removed or altered from any source distribution. +# +# Note: This software has been modified for use in PyPy. 
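
The Field.__init__ hunk above (lib_pypy/_ctypes/structure.py) replaces the locals()-based loop with one explicit __dict__ assignment per field. A minimal standalone sketch of the resulting pattern follows; the _ReadOnlyField name and its three fields are made up for illustration and are not the real ctypes Field class:

    class _ReadOnlyField(object):
        # Attribute writes are blocked by __setattr__, so __init__ has to store
        # the values through __dict__ directly, one explicit assignment each.
        def __init__(self, name, offset, size):
            self.__dict__['name'] = name
            self.__dict__['offset'] = offset
            self.__dict__['size'] = size

        def __setattr__(self, name, value):
            raise AttributeError(name)

    f = _ReadOnlyField('x', 0, 4)
    assert (f.name, f.offset, f.size) == ('x', 0, 4)

Spelling the assignments out avoids building and indexing the locals() dictionary on every construction, which is comparatively costly on PyPy; that is presumably the motivation for the change.
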
from ctypes import c_void_p, c_int, c_double, c_int64, c_char_p, cdll from ctypes import POINTER, byref, string_at, CFUNCTYPE, cast @@ -27,7 +29,6 @@ from collections import OrderedDict import datetime import sys -import time import weakref from threading import _get_ident as thread_get_ident @@ -606,7 +607,7 @@ def authorizer(userdata, action, arg1, arg2, dbname, source): try: return int(callback(action, arg1, arg2, dbname, source)) - except Exception, e: + except Exception: return SQLITE_DENY c_authorizer = AUTHORIZER(authorizer) @@ -653,7 +654,7 @@ if not aggregate_ptr[0]: try: aggregate = cls() - except Exception, e: + except Exception: msg = ("user-defined aggregate's '__init__' " "method raised error") sqlite.sqlite3_result_error(context, msg, len(msg)) @@ -667,7 +668,7 @@ params = _convert_params(context, argc, c_params) try: aggregate.step(*params) - except Exception, e: + except Exception: msg = ("user-defined aggregate's 'step' " "method raised error") sqlite.sqlite3_result_error(context, msg, len(msg)) @@ -683,7 +684,7 @@ aggregate = self.aggregate_instances[aggregate_ptr[0]] try: val = aggregate.finalize() - except Exception, e: + except Exception: msg = ("user-defined aggregate's 'finalize' " "method raised error") sqlite.sqlite3_result_error(context, msg, len(msg)) @@ -771,7 +772,7 @@ self.statement.item = None self.statement.exhausted = True - if self.statement.kind == DML or self.statement.kind == DDL: + if self.statement.kind == DML: self.statement.reset() self.rowcount = -1 @@ -791,7 +792,7 @@ if self.statement.kind == DML: self.connection._begin() else: - raise ProgrammingError, "executemany is only for DML statements" + raise ProgrammingError("executemany is only for DML statements") self.rowcount = 0 for params in many_params: @@ -861,8 +862,6 @@ except StopIteration: return None - return nextrow - def fetchmany(self, size=None): self._check_closed() self._check_reset() @@ -915,7 +914,7 @@ def __init__(self, connection, sql): self.statement = None if not isinstance(sql, str): - raise ValueError, "sql must be a string" + raise ValueError("sql must be a string") self.con = connection self.sql = sql # DEBUG ONLY first_word = self._statement_kind = sql.lstrip().split(" ")[0].upper() @@ -944,8 +943,8 @@ raise self.con._get_exception(ret) self.con._remember_statement(self) if _check_remaining_sql(next_char.value): - raise Warning, "One and only one statement required: %r" % ( - next_char.value,) + raise Warning("One and only one statement required: %r" % ( + next_char.value,)) # sql_char should remain alive until here self._build_row_cast_map() @@ -1016,7 +1015,7 @@ elif type(param) is buffer: sqlite.sqlite3_bind_blob(self.statement, idx, str(param), len(param), SQLITE_TRANSIENT) else: - raise InterfaceError, "parameter type %s is not supported" % str(type(param)) + raise InterfaceError("parameter type %s is not supported" % str(type(param))) def set_params(self, params): ret = sqlite.sqlite3_reset(self.statement) @@ -1045,11 +1044,11 @@ for idx in range(1, sqlite.sqlite3_bind_parameter_count(self.statement) + 1): param_name = sqlite.sqlite3_bind_parameter_name(self.statement, idx) if param_name is None: - raise ProgrammingError, "need named parameters" + raise ProgrammingError("need named parameters") param_name = param_name[1:] try: param = params[param_name] - except KeyError, e: + except KeyError: raise ProgrammingError("missing parameter '%s'" %param) self.set_param(idx, param) @@ -1260,7 +1259,7 @@ params = _convert_params(context, nargs, c_params) try: val = 
real_cb(*params) - except Exception, e: + except Exception: msg = "user-defined function raised exception" sqlite.sqlite3_result_error(context, msg, len(msg)) else: diff --git a/lib_pypy/datetime.py b/lib_pypy/datetime.py --- a/lib_pypy/datetime.py +++ b/lib_pypy/datetime.py @@ -13,7 +13,7 @@ Sources for time zone and DST data: http://www.twinsun.com/tz/tz-link.htm This was originally copied from the sandbox of the CPython CVS repository. -Thanks to Tim Peters for suggesting using it. +Thanks to Tim Peters for suggesting using it. """ import time as _time @@ -271,6 +271,8 @@ raise ValueError("%s()=%d, must be in -1439..1439" % (name, offset)) def _check_date_fields(year, month, day): + if not isinstance(year, (int, long)): + raise TypeError('int expected') if not MINYEAR <= year <= MAXYEAR: raise ValueError('year must be in %d..%d' % (MINYEAR, MAXYEAR), year) if not 1 <= month <= 12: @@ -280,6 +282,8 @@ raise ValueError('day must be in 1..%d' % dim, day) def _check_time_fields(hour, minute, second, microsecond): + if not isinstance(hour, (int, long)): + raise TypeError('int expected') if not 0 <= hour <= 23: raise ValueError('hour must be in 0..23', hour) if not 0 <= minute <= 59: @@ -543,61 +547,76 @@ self = object.__new__(cls) - self.__days = d - self.__seconds = s - self.__microseconds = us + self._days = d + self._seconds = s + self._microseconds = us if abs(d) > 999999999: raise OverflowError("timedelta # of days is too large: %d" % d) return self def __repr__(self): - if self.__microseconds: + if self._microseconds: return "%s(%d, %d, %d)" % ('datetime.' + self.__class__.__name__, - self.__days, - self.__seconds, - self.__microseconds) - if self.__seconds: + self._days, + self._seconds, + self._microseconds) + if self._seconds: return "%s(%d, %d)" % ('datetime.' + self.__class__.__name__, - self.__days, - self.__seconds) - return "%s(%d)" % ('datetime.' + self.__class__.__name__, self.__days) + self._days, + self._seconds) + return "%s(%d)" % ('datetime.' 
+ self.__class__.__name__, self._days) def __str__(self): - mm, ss = divmod(self.__seconds, 60) + mm, ss = divmod(self._seconds, 60) hh, mm = divmod(mm, 60) s = "%d:%02d:%02d" % (hh, mm, ss) - if self.__days: + if self._days: def plural(n): return n, abs(n) != 1 and "s" or "" - s = ("%d day%s, " % plural(self.__days)) + s - if self.__microseconds: - s = s + ".%06d" % self.__microseconds + s = ("%d day%s, " % plural(self._days)) + s + if self._microseconds: + s = s + ".%06d" % self._microseconds return s - days = property(lambda self: self.__days, doc="days") - seconds = property(lambda self: self.__seconds, doc="seconds") - microseconds = property(lambda self: self.__microseconds, - doc="microseconds") - def total_seconds(self): return ((self.days * 86400 + self.seconds) * 10**6 + self.microseconds) / 1e6 + # Read-only field accessors + @property + def days(self): + """days""" + return self._days + + @property + def seconds(self): + """seconds""" + return self._seconds + + @property + def microseconds(self): + """microseconds""" + return self._microseconds + def __add__(self, other): if isinstance(other, timedelta): # for CPython compatibility, we cannot use # our __class__ here, but need a real timedelta - return timedelta(self.__days + other.__days, - self.__seconds + other.__seconds, - self.__microseconds + other.__microseconds) + return timedelta(self._days + other._days, + self._seconds + other._seconds, + self._microseconds + other._microseconds) return NotImplemented __radd__ = __add__ def __sub__(self, other): if isinstance(other, timedelta): - return self + -other + # for CPython compatibility, we cannot use + # our __class__ here, but need a real timedelta + return timedelta(self._days - other._days, + self._seconds - other._seconds, + self._microseconds - other._microseconds) return NotImplemented def __rsub__(self, other): @@ -606,17 +625,17 @@ return NotImplemented def __neg__(self): - # for CPython compatibility, we cannot use - # our __class__ here, but need a real timedelta - return timedelta(-self.__days, - -self.__seconds, - -self.__microseconds) + # for CPython compatibility, we cannot use + # our __class__ here, but need a real timedelta + return timedelta(-self._days, + -self._seconds, + -self._microseconds) def __pos__(self): return self def __abs__(self): - if self.__days < 0: + if self._days < 0: return -self else: return self @@ -625,81 +644,81 @@ if isinstance(other, (int, long)): # for CPython compatibility, we cannot use # our __class__ here, but need a real timedelta - return timedelta(self.__days * other, - self.__seconds * other, - self.__microseconds * other) + return timedelta(self._days * other, + self._seconds * other, + self._microseconds * other) return NotImplemented __rmul__ = __mul__ def __div__(self, other): if isinstance(other, (int, long)): - usec = ((self.__days * (24*3600L) + self.__seconds) * 1000000 + - self.__microseconds) + usec = ((self._days * (24*3600L) + self._seconds) * 1000000 + + self._microseconds) return timedelta(0, 0, usec // other) return NotImplemented __floordiv__ = __div__ - # Comparisons. + # Comparisons of timedelta objects with other. 
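
The timedelta hunks above rename the name-mangled double-underscore slots (self.__days and friends) to single-underscore attributes exposed through explicit @property accessors. A minimal sketch of the before/after shape, using a made-up Duration class rather than the real timedelta:

    class Duration(object):
        def __init__(self, days):
            self._days = days        # previously self.__days, mangled to _Duration__days

        @property
        def days(self):
            """days"""
            return self._days

    d = Duration(3)
    assert d.days == 3
    assert d._days == 3              # single underscore stays visible to subclasses and helpers

The explicit properties and _-prefixed names mirror CPython's pure-Python datetime module, which presumably makes it easier to keep lib_pypy/datetime.py in sync with upstream.
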
def __eq__(self, other): if isinstance(other, timedelta): - return self.__cmp(other) == 0 + return self._cmp(other) == 0 else: return False def __ne__(self, other): if isinstance(other, timedelta): - return self.__cmp(other) != 0 + return self._cmp(other) != 0 else: return True def __le__(self, other): if isinstance(other, timedelta): - return self.__cmp(other) <= 0 + return self._cmp(other) <= 0 else: _cmperror(self, other) def __lt__(self, other): if isinstance(other, timedelta): - return self.__cmp(other) < 0 + return self._cmp(other) < 0 else: _cmperror(self, other) def __ge__(self, other): if isinstance(other, timedelta): - return self.__cmp(other) >= 0 + return self._cmp(other) >= 0 else: _cmperror(self, other) def __gt__(self, other): if isinstance(other, timedelta): - return self.__cmp(other) > 0 + return self._cmp(other) > 0 else: _cmperror(self, other) - def __cmp(self, other): + def _cmp(self, other): assert isinstance(other, timedelta) - return cmp(self.__getstate(), other.__getstate()) + return cmp(self._getstate(), other._getstate()) def __hash__(self): - return hash(self.__getstate()) + return hash(self._getstate()) def __nonzero__(self): - return (self.__days != 0 or - self.__seconds != 0 or - self.__microseconds != 0) + return (self._days != 0 or + self._seconds != 0 or + self._microseconds != 0) # Pickle support. __safe_for_unpickling__ = True # For Python 2.2 - def __getstate(self): - return (self.__days, self.__seconds, self.__microseconds) + def _getstate(self): + return (self._days, self._seconds, self._microseconds) def __reduce__(self): - return (self.__class__, self.__getstate()) + return (self.__class__, self._getstate()) timedelta.min = timedelta(-999999999) timedelta.max = timedelta(days=999999999, hours=23, minutes=59, seconds=59, @@ -749,25 +768,26 @@ return self _check_date_fields(year, month, day) self = object.__new__(cls) - self.__year = year - self.__month = month - self.__day = day + self._year = year + self._month = month + self._day = day return self # Additional constructors + @classmethod def fromtimestamp(cls, t): "Construct a date from a POSIX timestamp (like time.time())." y, m, d, hh, mm, ss, weekday, jday, dst = _time.localtime(t) return cls(y, m, d) - fromtimestamp = classmethod(fromtimestamp) + @classmethod def today(cls): "Construct a date from time.time()." t = _time.time() return cls.fromtimestamp(t) - today = classmethod(today) + @classmethod def fromordinal(cls, n): """Contruct a date from a proleptic Gregorian ordinal. @@ -776,16 +796,24 @@ """ y, m, d = _ord2ymd(n) return cls(y, m, d) - fromordinal = classmethod(fromordinal) # Conversions to string def __repr__(self): - "Convert to formal string, for repr()." + """Convert to formal string, for repr(). + + >>> dt = datetime(2010, 1, 1) + >>> repr(dt) + 'datetime.datetime(2010, 1, 1, 0, 0)' + + >>> dt = datetime(2010, 1, 1, tzinfo=timezone.utc) + >>> repr(dt) + 'datetime.datetime(2010, 1, 1, 0, 0, tzinfo=datetime.timezone.utc)' + """ return "%s(%d, %d, %d)" % ('datetime.' + self.__class__.__name__, - self.__year, - self.__month, - self.__day) + self._year, + self._month, + self._day) # XXX These shouldn't depend on time.localtime(), because that # clips the usable dates to [1970 .. 2038). At least ctime() is # easily done without using strftime() -- that's better too because @@ -793,12 +821,20 @@ def ctime(self): "Format a la ctime()." 
- return tmxxx(self.__year, self.__month, self.__day).ctime() + return tmxxx(self._year, self._month, self._day).ctime() def strftime(self, fmt): "Format using strftime()." return _wrap_strftime(self, fmt, self.timetuple()) + def __format__(self, fmt): + if not isinstance(fmt, (str, unicode)): + raise ValueError("__format__ excepts str or unicode, not %s" % + fmt.__class__.__name__) + if len(fmt) != 0: + return self.strftime(fmt) + return str(self) + def isoformat(self): """Return the date formatted according to ISO. @@ -808,29 +844,31 @@ - http://www.w3.org/TR/NOTE-datetime - http://www.cl.cam.ac.uk/~mgk25/iso-time.html """ - return "%04d-%02d-%02d" % (self.__year, self.__month, self.__day) + return "%04d-%02d-%02d" % (self._year, self._month, self._day) __str__ = isoformat - def __format__(self, format): - if not isinstance(format, (str, unicode)): - raise ValueError("__format__ excepts str or unicode, not %s" % - format.__class__.__name__) - if not format: - return str(self) - return self.strftime(format) + # Read-only field accessors + @property + def year(self): + """year (1-9999)""" + return self._year - # Read-only field accessors - year = property(lambda self: self.__year, - doc="year (%d-%d)" % (MINYEAR, MAXYEAR)) - month = property(lambda self: self.__month, doc="month (1-12)") - day = property(lambda self: self.__day, doc="day (1-31)") + @property + def month(self): + """month (1-12)""" + return self._month + + @property + def day(self): + """day (1-31)""" + return self._day # Standard conversions, __cmp__, __hash__ (and helpers) def timetuple(self): "Return local time tuple compatible with time.localtime()." - return _build_struct_time(self.__year, self.__month, self.__day, + return _build_struct_time(self._year, self._month, self._day, 0, 0, 0, -1) def toordinal(self): @@ -839,24 +877,24 @@ January 1 of year 1 is day 1. Only the year, month and day values contribute to the result. """ - return _ymd2ord(self.__year, self.__month, self.__day) + return _ymd2ord(self._year, self._month, self._day) def replace(self, year=None, month=None, day=None): """Return a new date with new values for the specified fields.""" if year is None: - year = self.__year + year = self._year if month is None: - month = self.__month + month = self._month if day is None: - day = self.__day + day = self._day _check_date_fields(year, month, day) return date(year, month, day) - # Comparisons. + # Comparisons of date objects with other. 
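
The date hunks above add a __format__ hook: a non-empty format spec is routed through strftime(), while an empty spec falls back to str(), i.e. the ISO form. A small usage sketch; the date value is arbitrary, and the stdlib datetime module is imported only so the lines run on their own:

    from datetime import date

    d = date(2012, 2, 1)
    assert format(d, "%Y/%m/%d") == "2012/02/01"   # non-empty spec -> strftime()
    assert format(d, "") == "2012-02-01"           # empty spec -> str() == isoformat()
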
def __eq__(self, other): if isinstance(other, date): - return self.__cmp(other) == 0 + return self._cmp(other) == 0 elif hasattr(other, "timetuple"): return NotImplemented else: @@ -864,7 +902,7 @@ def __ne__(self, other): if isinstance(other, date): - return self.__cmp(other) != 0 + return self._cmp(other) != 0 elif hasattr(other, "timetuple"): return NotImplemented else: @@ -872,7 +910,7 @@ def __le__(self, other): if isinstance(other, date): - return self.__cmp(other) <= 0 + return self._cmp(other) <= 0 elif hasattr(other, "timetuple"): return NotImplemented else: @@ -880,7 +918,7 @@ def __lt__(self, other): if isinstance(other, date): - return self.__cmp(other) < 0 + return self._cmp(other) < 0 elif hasattr(other, "timetuple"): return NotImplemented else: @@ -888,7 +926,7 @@ def __ge__(self, other): if isinstance(other, date): - return self.__cmp(other) >= 0 + return self._cmp(other) >= 0 elif hasattr(other, "timetuple"): return NotImplemented else: @@ -896,21 +934,21 @@ def __gt__(self, other): if isinstance(other, date): - return self.__cmp(other) > 0 + return self._cmp(other) > 0 elif hasattr(other, "timetuple"): return NotImplemented else: _cmperror(self, other) - def __cmp(self, other): + def _cmp(self, other): assert isinstance(other, date) - y, m, d = self.__year, self.__month, self.__day - y2, m2, d2 = other.__year, other.__month, other.__day + y, m, d = self._year, self._month, self._day + y2, m2, d2 = other._year, other._month, other._day return cmp((y, m, d), (y2, m2, d2)) def __hash__(self): "Hash." - return hash(self.__getstate()) + return hash(self._getstate()) # Computations @@ -922,9 +960,9 @@ def __add__(self, other): "Add a date to a timedelta." if isinstance(other, timedelta): - t = tmxxx(self.__year, - self.__month, - self.__day + other.days) + t = tmxxx(self._year, + self._month, + self._day + other.days) self._checkOverflow(t.year) result = date(t.year, t.month, t.day) return result @@ -966,9 +1004,9 @@ ISO calendar algorithm taken from http://www.phys.uu.nl/~vgent/calendar/isocalendar.htm """ - year = self.__year + year = self._year week1monday = _isoweek1monday(year) - today = _ymd2ord(self.__year, self.__month, self.__day) + today = _ymd2ord(self._year, self._month, self._day) # Internally, week and day have origin 0 week, day = divmod(today - week1monday, 7) if week < 0: @@ -985,18 +1023,18 @@ __safe_for_unpickling__ = True # For Python 2.2 - def __getstate(self): - yhi, ylo = divmod(self.__year, 256) - return ("%c%c%c%c" % (yhi, ylo, self.__month, self.__day), ) + def _getstate(self): + yhi, ylo = divmod(self._year, 256) + return ("%c%c%c%c" % (yhi, ylo, self._month, self._day), ) def __setstate(self, string): if len(string) != 4 or not (1 <= ord(string[2]) <= 12): raise TypeError("not enough arguments") - yhi, ylo, self.__month, self.__day = map(ord, string) - self.__year = yhi * 256 + ylo + yhi, ylo, self._month, self._day = map(ord, string) + self._year = yhi * 256 + ylo def __reduce__(self): - return (self.__class__, self.__getstate()) + return (self.__class__, self._getstate()) _date_class = date # so functions w/ args named "date" can get at the class @@ -1118,62 +1156,80 @@ return self _check_tzinfo_arg(tzinfo) _check_time_fields(hour, minute, second, microsecond) - self.__hour = hour - self.__minute = minute - self.__second = second - self.__microsecond = microsecond + self._hour = hour + self._minute = minute + self._second = second + self._microsecond = microsecond self._tzinfo = tzinfo return self # Read-only field accessors - hour = 
property(lambda self: self.__hour, doc="hour (0-23)") - minute = property(lambda self: self.__minute, doc="minute (0-59)") - second = property(lambda self: self.__second, doc="second (0-59)") - microsecond = property(lambda self: self.__microsecond, - doc="microsecond (0-999999)") - tzinfo = property(lambda self: self._tzinfo, doc="timezone info object") + @property + def hour(self): + """hour (0-23)""" + return self._hour + + @property + def minute(self): + """minute (0-59)""" + return self._minute + + @property + def second(self): + """second (0-59)""" + return self._second + + @property + def microsecond(self): + """microsecond (0-999999)""" + return self._microsecond + + @property + def tzinfo(self): + """timezone info object""" + return self._tzinfo # Standard conversions, __hash__ (and helpers) - # Comparisons. + # Comparisons of time objects with other. def __eq__(self, other): if isinstance(other, time): - return self.__cmp(other) == 0 + return self._cmp(other) == 0 else: return False def __ne__(self, other): if isinstance(other, time): - return self.__cmp(other) != 0 + return self._cmp(other) != 0 else: return True def __le__(self, other): if isinstance(other, time): - return self.__cmp(other) <= 0 + return self._cmp(other) <= 0 else: _cmperror(self, other) def __lt__(self, other): if isinstance(other, time): - return self.__cmp(other) < 0 + return self._cmp(other) < 0 else: _cmperror(self, other) def __ge__(self, other): if isinstance(other, time): - return self.__cmp(other) >= 0 + return self._cmp(other) >= 0 else: _cmperror(self, other) def __gt__(self, other): if isinstance(other, time): - return self.__cmp(other) > 0 + return self._cmp(other) > 0 else: _cmperror(self, other) - def __cmp(self, other): + def _cmp(self, other): assert isinstance(other, time) mytz = self._tzinfo ottz = other._tzinfo @@ -1187,23 +1243,23 @@ base_compare = myoff == otoff if base_compare: - return cmp((self.__hour, self.__minute, self.__second, - self.__microsecond), - (other.__hour, other.__minute, other.__second, - other.__microsecond)) + return cmp((self._hour, self._minute, self._second, + self._microsecond), + (other._hour, other._minute, other._second, + other._microsecond)) if myoff is None or otoff is None: # XXX Buggy in 2.2.2. raise TypeError("cannot compare naive and aware times") - myhhmm = self.__hour * 60 + self.__minute - myoff - othhmm = other.__hour * 60 + other.__minute - otoff - return cmp((myhhmm, self.__second, self.__microsecond), - (othhmm, other.__second, other.__microsecond)) + myhhmm = self._hour * 60 + self._minute - myoff + othhmm = other._hour * 60 + other._minute - otoff + return cmp((myhhmm, self._second, self._microsecond), + (othhmm, other._second, other._microsecond)) def __hash__(self): """Hash.""" tzoff = self._utcoffset() if not tzoff: # zero or None - return hash(self.__getstate()[0]) + return hash(self._getstate()[0]) h, m = divmod(self.hour * 60 + self.minute - tzoff, 60) if 0 <= h < 24: return hash(time(h, m, self.second, self.microsecond)) @@ -1227,14 +1283,14 @@ def __repr__(self): """Convert to formal string, for repr().""" - if self.__microsecond != 0: - s = ", %d, %d" % (self.__second, self.__microsecond) - elif self.__second != 0: - s = ", %d" % self.__second + if self._microsecond != 0: + s = ", %d, %d" % (self._second, self._microsecond) + elif self._second != 0: + s = ", %d" % self._second else: s = "" s= "%s(%d, %d%s)" % ('datetime.' 
+ self.__class__.__name__, - self.__hour, self.__minute, s) + self._hour, self._minute, s) if self._tzinfo is not None: assert s[-1:] == ")" s = s[:-1] + ", tzinfo=%r" % self._tzinfo + ")" @@ -1246,8 +1302,8 @@ This is 'HH:MM:SS.mmmmmm+zz:zz', or 'HH:MM:SS+zz:zz' if self.microsecond == 0. """ - s = _format_time(self.__hour, self.__minute, self.__second, - self.__microsecond) + s = _format_time(self._hour, self._minute, self._second, + self._microsecond) tz = self._tzstr() if tz: s += tz @@ -1255,14 +1311,6 @@ __str__ = isoformat - def __format__(self, format): - if not isinstance(format, (str, unicode)): - raise ValueError("__format__ excepts str or unicode, not %s" % - format.__class__.__name__) - if not format: - return str(self) - return self.strftime(format) - def strftime(self, fmt): """Format using strftime(). The date part of the timestamp passed to underlying strftime should not be used. @@ -1270,10 +1318,18 @@ # The year must be >= 1900 else Python's strftime implementation # can raise a bogus exception. timetuple = (1900, 1, 1, - self.__hour, self.__minute, self.__second, + self._hour, self._minute, self._second, 0, 1, -1) return _wrap_strftime(self, fmt, timetuple) + def __format__(self, fmt): + if not isinstance(fmt, (str, unicode)): + raise ValueError("__format__ excepts str or unicode, not %s" % + fmt.__class__.__name__) + if len(fmt) != 0: + return self.strftime(fmt) + return str(self) + # Timezone functions def utcoffset(self): @@ -1350,10 +1406,10 @@ __safe_for_unpickling__ = True # For Python 2.2 - def __getstate(self): - us2, us3 = divmod(self.__microsecond, 256) + def _getstate(self): + us2, us3 = divmod(self._microsecond, 256) us1, us2 = divmod(us2, 256) - basestate = ("%c" * 6) % (self.__hour, self.__minute, self.__second, + basestate = ("%c" * 6) % (self._hour, self._minute, self._second, us1, us2, us3) if self._tzinfo is None: return (basestate,) @@ -1363,13 +1419,13 @@ def __setstate(self, string, tzinfo): if len(string) != 6 or ord(string[0]) >= 24: raise TypeError("an integer is required") - self.__hour, self.__minute, self.__second, us1, us2, us3 = \ + self._hour, self._minute, self._second, us1, us2, us3 = \ map(ord, string) - self.__microsecond = (((us1 << 8) | us2) << 8) | us3 + self._microsecond = (((us1 << 8) | us2) << 8) | us3 self._tzinfo = tzinfo def __reduce__(self): - return (time, self.__getstate()) + return (time, self._getstate()) _time_class = time # so functions w/ args named "time" can get at the class @@ -1378,9 +1434,11 @@ time.resolution = timedelta(microseconds=1) class datetime(date): + """datetime(year, month, day[, hour[, minute[, second[, microsecond[,tzinfo]]]]]) - # XXX needs docstrings - # See http://www.zope.org/Members/fdrake/DateTimeWiki/TimeZoneInfo + The year, month and day arguments are required. tzinfo may be None, or an + instance of a tzinfo subclass. The remaining arguments may be ints or longs. 
+ """ def __new__(cls, year, month=None, day=None, hour=0, minute=0, second=0, microsecond=0, tzinfo=None): @@ -1393,24 +1451,43 @@ _check_time_fields(hour, minute, second, microsecond) self = date.__new__(cls, year, month, day) # XXX This duplicates __year, __month, __day for convenience :-( - self.__year = year - self.__month = month - self.__day = day - self.__hour = hour - self.__minute = minute - self.__second = second - self.__microsecond = microsecond + self._year = year + self._month = month + self._day = day + self._hour = hour + self._minute = minute + self._second = second + self._microsecond = microsecond self._tzinfo = tzinfo return self # Read-only field accessors - hour = property(lambda self: self.__hour, doc="hour (0-23)") - minute = property(lambda self: self.__minute, doc="minute (0-59)") - second = property(lambda self: self.__second, doc="second (0-59)") - microsecond = property(lambda self: self.__microsecond, - doc="microsecond (0-999999)") - tzinfo = property(lambda self: self._tzinfo, doc="timezone info object") + @property + def hour(self): + """hour (0-23)""" + return self._hour + @property + def minute(self): + """minute (0-59)""" + return self._minute + + @property + def second(self): + """second (0-59)""" + return self._second + + @property + def microsecond(self): + """microsecond (0-999999)""" + return self._microsecond + + @property + def tzinfo(self): + """timezone info object""" + return self._tzinfo + + @classmethod def fromtimestamp(cls, t, tz=None): """Construct a datetime from a POSIX timestamp (like time.time()). @@ -1438,37 +1515,42 @@ if tz is not None: result = tz.fromutc(result) return result - fromtimestamp = classmethod(fromtimestamp) + @classmethod def utcfromtimestamp(cls, t): "Construct a UTC datetime from a POSIX timestamp (like time.time())." - if 1 - (t % 1.0) < 0.0000005: - t = float(int(t)) + 1 - if t < 0: - t -= 1 + t, frac = divmod(t, 1.0) + us = round(frac * 1e6) + + # If timestamp is less than one microsecond smaller than a + # full second, us can be rounded up to 1000000. In this case, + # roll over to seconds, otherwise, ValueError is raised + # by the constructor. + if us == 1000000: + t += 1 + us = 0 y, m, d, hh, mm, ss, weekday, jday, dst = _time.gmtime(t) - us = int((t % 1.0) * 1000000) ss = min(ss, 59) # clamp out leap seconds if the platform has them return cls(y, m, d, hh, mm, ss, us) - utcfromtimestamp = classmethod(utcfromtimestamp) # XXX This is supposed to do better than we *can* do by using time.time(), # XXX if the platform supports a more accurate way. The C implementation # XXX uses gettimeofday on platforms that have it, but that isn't # XXX available from Python. So now() may return different results # XXX across the implementations. + @classmethod def now(cls, tz=None): "Construct a datetime from time.time() and optional time zone info." t = _time.time() return cls.fromtimestamp(t, tz) - now = classmethod(now) + @classmethod def utcnow(cls): "Construct a UTC datetime from time.time()." t = _time.time() return cls.utcfromtimestamp(t) - utcnow = classmethod(utcnow) + @classmethod def combine(cls, date, time): "Construct a datetime from a given date and a given time." if not isinstance(date, _date_class): @@ -1478,7 +1560,6 @@ return cls(date.year, date.month, date.day, time.hour, time.minute, time.second, time.microsecond, time.tzinfo) - combine = classmethod(combine) def timetuple(self): "Return local time tuple compatible with time.localtime()." @@ -1504,7 +1585,7 @@ def date(self): "Return the date part." 
- return date(self.__year, self.__month, self.__day) + return date(self._year, self._month, self._day) def time(self): "Return the time part, with tzinfo None." @@ -1564,8 +1645,8 @@ def ctime(self): "Format a la ctime()." - t = tmxxx(self.__year, self.__month, self.__day, self.__hour, - self.__minute, self.__second) + t = tmxxx(self._year, self._month, self._day, self._hour, + self._minute, self._second) return t.ctime() def isoformat(self, sep='T'): @@ -1580,10 +1661,10 @@ Optional argument sep specifies the separator between date and time, default 'T'. """ - s = ("%04d-%02d-%02d%c" % (self.__year, self.__month, self.__day, + s = ("%04d-%02d-%02d%c" % (self._year, self._month, self._day, sep) + - _format_time(self.__hour, self.__minute, self.__second, - self.__microsecond)) + _format_time(self._hour, self._minute, self._second, + self._microsecond)) off = self._utcoffset() if off is not None: if off < 0: @@ -1596,13 +1677,13 @@ return s def __repr__(self): - "Convert to formal string, for repr()." - L = [self.__year, self.__month, self.__day, # These are never zero - self.__hour, self.__minute, self.__second, self.__microsecond] + """Convert to formal string, for repr().""" + L = [self._year, self._month, self._day, # These are never zero + self._hour, self._minute, self._second, self._microsecond] if L[-1] == 0: del L[-1] if L[-1] == 0: - del L[-1] + del L[-1] s = ", ".join(map(str, L)) s = "%s(%s)" % ('datetime.' + self.__class__.__name__, s) if self._tzinfo is not None: @@ -1675,7 +1756,7 @@ def __eq__(self, other): if isinstance(other, datetime): - return self.__cmp(other) == 0 + return self._cmp(other) == 0 elif hasattr(other, "timetuple") and not isinstance(other, date): return NotImplemented else: @@ -1683,7 +1764,7 @@ def __ne__(self, other): if isinstance(other, datetime): - return self.__cmp(other) != 0 + return self._cmp(other) != 0 elif hasattr(other, "timetuple") and not isinstance(other, date): return NotImplemented else: @@ -1691,7 +1772,7 @@ def __le__(self, other): if isinstance(other, datetime): - return self.__cmp(other) <= 0 + return self._cmp(other) <= 0 elif hasattr(other, "timetuple") and not isinstance(other, date): return NotImplemented else: @@ -1699,7 +1780,7 @@ def __lt__(self, other): if isinstance(other, datetime): - return self.__cmp(other) < 0 + return self._cmp(other) < 0 elif hasattr(other, "timetuple") and not isinstance(other, date): return NotImplemented else: @@ -1707,7 +1788,7 @@ def __ge__(self, other): if isinstance(other, datetime): - return self.__cmp(other) >= 0 + return self._cmp(other) >= 0 elif hasattr(other, "timetuple") and not isinstance(other, date): return NotImplemented else: @@ -1715,13 +1796,13 @@ def __gt__(self, other): if isinstance(other, datetime): - return self.__cmp(other) > 0 + return self._cmp(other) > 0 elif hasattr(other, "timetuple") and not isinstance(other, date): return NotImplemented else: _cmperror(self, other) - def __cmp(self, other): + def _cmp(self, other): assert isinstance(other, datetime) mytz = self._tzinfo ottz = other._tzinfo @@ -1737,12 +1818,12 @@ base_compare = myoff == otoff if base_compare: - return cmp((self.__year, self.__month, self.__day, - self.__hour, self.__minute, self.__second, - self.__microsecond), - (other.__year, other.__month, other.__day, - other.__hour, other.__minute, other.__second, - other.__microsecond)) + return cmp((self._year, self._month, self._day, + self._hour, self._minute, self._second, + self._microsecond), + (other._year, other._month, other._day, + other._hour, 
other._minute, other._second, + other._microsecond)) if myoff is None or otoff is None: # XXX Buggy in 2.2.2. raise TypeError("cannot compare naive and aware datetimes") @@ -1756,13 +1837,13 @@ "Add a datetime and a timedelta." if not isinstance(other, timedelta): return NotImplemented - t = tmxxx(self.__year, - self.__month, - self.__day + other.days, - self.__hour, - self.__minute, - self.__second + other.seconds, - self.__microsecond + other.microseconds) + t = tmxxx(self._year, + self._month, + self._day + other.days, + self._hour, + self._minute, + self._second + other.seconds, + self._microsecond + other.microseconds) self._checkOverflow(t.year) result = datetime(t.year, t.month, t.day, t.hour, t.minute, t.second, @@ -1780,11 +1861,11 @@ days1 = self.toordinal() days2 = other.toordinal() - secs1 = self.__second + self.__minute * 60 + self.__hour * 3600 - secs2 = other.__second + other.__minute * 60 + other.__hour * 3600 + secs1 = self._second + self._minute * 60 + self._hour * 3600 + secs2 = other._second + other._minute * 60 + other._hour * 3600 base = timedelta(days1 - days2, secs1 - secs2, - self.__microsecond - other.__microsecond) + self._microsecond - other._microsecond) if self._tzinfo is other._tzinfo: return base myoff = self._utcoffset() @@ -1792,13 +1873,13 @@ if myoff == otoff: return base if myoff is None or otoff is None: - raise TypeError, "cannot mix naive and timezone-aware time" + raise TypeError("cannot mix naive and timezone-aware time") return base + timedelta(minutes = otoff-myoff) def __hash__(self): tzoff = self._utcoffset() if tzoff is None: - return hash(self.__getstate()[0]) + return hash(self._getstate()[0]) days = _ymd2ord(self.year, self.month, self.day) seconds = self.hour * 3600 + (self.minute - tzoff) * 60 + self.second return hash(timedelta(days, seconds, self.microsecond)) @@ -1807,12 +1888,12 @@ __safe_for_unpickling__ = True # For Python 2.2 - def __getstate(self): - yhi, ylo = divmod(self.__year, 256) - us2, us3 = divmod(self.__microsecond, 256) + def _getstate(self): + yhi, ylo = divmod(self._year, 256) + us2, us3 = divmod(self._microsecond, 256) us1, us2 = divmod(us2, 256) - basestate = ("%c" * 10) % (yhi, ylo, self.__month, self.__day, - self.__hour, self.__minute, self.__second, + basestate = ("%c" * 10) % (yhi, ylo, self._month, self._day, + self._hour, self._minute, self._second, us1, us2, us3) if self._tzinfo is None: return (basestate,) @@ -1820,14 +1901,14 @@ return (basestate, self._tzinfo) def __setstate(self, string, tzinfo): - (yhi, ylo, self.__month, self.__day, self.__hour, - self.__minute, self.__second, us1, us2, us3) = map(ord, string) - self.__year = yhi * 256 + ylo - self.__microsecond = (((us1 << 8) | us2) << 8) | us3 + (yhi, ylo, self._month, self._day, self._hour, + self._minute, self._second, us1, us2, us3) = map(ord, string) + self._year = yhi * 256 + ylo + self._microsecond = (((us1 << 8) | us2) << 8) | us3 self._tzinfo = tzinfo def __reduce__(self): - return (self.__class__, self.__getstate()) + return (self.__class__, self._getstate()) datetime.min = datetime(1, 1, 1) @@ -2009,7 +2090,7 @@ Because we know z.d said z was in daylight time (else [5] would have held and we would have stopped then), and we know z.d != z'.d (else [8] would have held -and we we have stopped then), and there are only 2 possible values dst() can +and we have stopped then), and there are only 2 possible values dst() can return in Eastern, it follows that z'.d must be 0 (which it is in the example, but the reasoning doesn't depend on the example 
-- it depends on there being two possible dst() outcomes, one zero and the other non-zero). Therefore diff --git a/lib_pypy/numpypy/__init__.py b/lib_pypy/numpypy/__init__.py new file mode 100644 --- /dev/null +++ b/lib_pypy/numpypy/__init__.py @@ -0,0 +1,2 @@ +from _numpypy import * +from .core import * diff --git a/lib_pypy/numpypy/core/__init__.py b/lib_pypy/numpypy/core/__init__.py new file mode 100644 --- /dev/null +++ b/lib_pypy/numpypy/core/__init__.py @@ -0,0 +1,1 @@ +from .fromnumeric import * diff --git a/lib_pypy/numpypy/core/fromnumeric.py b/lib_pypy/numpypy/core/fromnumeric.py new file mode 100644 --- /dev/null +++ b/lib_pypy/numpypy/core/fromnumeric.py @@ -0,0 +1,2430 @@ +###################################################################### +# This is a copy of numpy/core/fromnumeric.py modified for numpypy +###################################################################### +# Each name in __all__ was a function in 'numeric' that is now +# a method in 'numpy'. +# When the corresponding method is added to numpypy BaseArray +# each function should be added as a module function +# at the applevel +# This can be as simple as doing the following +# +# def func(a, ...): +# if not hasattr(a, 'func') +# a = numpypy.array(a) +# return a.func(...) +# +###################################################################### + +import numpypy + +# Module containing non-deprecated functions borrowed from Numeric. +__docformat__ = "restructuredtext en" + +# functions that are now methods +__all__ = ['take', 'reshape', 'choose', 'repeat', 'put', + 'swapaxes', 'transpose', 'sort', 'argsort', 'argmax', 'argmin', + 'searchsorted', 'alen', + 'resize', 'diagonal', 'trace', 'ravel', 'nonzero', 'shape', + 'compress', 'clip', 'sum', 'product', 'prod', 'sometrue', 'alltrue', + 'any', 'all', 'cumsum', 'cumproduct', 'cumprod', 'ptp', 'ndim', + 'rank', 'size', 'around', 'round_', 'mean', 'std', 'var', 'squeeze', + 'amax', 'amin', + ] + +def take(a, indices, axis=None, out=None, mode='raise'): + """ + Take elements from an array along an axis. + + This function does the same thing as "fancy" indexing (indexing arrays + using arrays); however, it can be easier to use if you need elements + along a given axis. + + Parameters + ---------- + a : array_like + The source array. + indices : array_like + The indices of the values to extract. + axis : int, optional + The axis over which to select values. By default, the flattened + input array is used. + out : ndarray, optional + If provided, the result will be placed in this array. It should + be of the appropriate shape and dtype. + mode : {'raise', 'wrap', 'clip'}, optional + Specifies how out-of-bounds indices will behave. + + * 'raise' -- raise an error (default) + * 'wrap' -- wrap around + * 'clip' -- clip to the range + + 'clip' mode means that all indices that are too large are replaced + by the index that addresses the last element along that axis. Note + that this disables indexing with negative numbers. + + Returns + ------- + subarray : ndarray + The returned array has the same type as `a`. + + See Also + -------- + ndarray.take : equivalent method + + Examples + -------- + >>> a = [4, 3, 5, 7, 6, 8] + >>> indices = [0, 1, 4] + >>> np.take(a, indices) + array([4, 3, 6]) + + In this example if `a` is an ndarray, "fancy" indexing can be used. 
+ + >>> a = np.array(a) + >>> a[indices] + array([4, 3, 6]) + + """ + raise NotImplementedError('Waiting on interp level method') + + +# not deprecated --- copy if necessary, view otherwise +def reshape(a, newshape, order='C'): + """ + Gives a new shape to an array without changing its data. + + Parameters + ---------- + a : array_like + Array to be reshaped. + newshape : int or tuple of ints + The new shape should be compatible with the original shape. If + an integer, then the result will be a 1-D array of that length. + One shape dimension can be -1. In this case, the value is inferred + from the length of the array and remaining dimensions. + order : {'C', 'F', 'A'}, optional + Determines whether the array data should be viewed as in C + (row-major) order, FORTRAN (column-major) order, or the C/FORTRAN + order should be preserved. + + Returns + ------- + reshaped_array : ndarray + This will be a new view object if possible; otherwise, it will + be a copy. + + + See Also + -------- + ndarray.reshape : Equivalent method. + + Notes + ----- + + It is not always possible to change the shape of an array without + copying the data. If you want an error to be raise if the data is copied, + you should assign the new shape to the shape attribute of the array:: + + >>> a = np.zeros((10, 2)) + # A transpose make the array non-contiguous + >>> b = a.T + # Taking a view makes it possible to modify the shape without modiying the + # initial object. + >>> c = b.view() + >>> c.shape = (20) + AttributeError: incompatible shape for a non-contiguous array + + + Examples + -------- + >>> a = np.array([[1,2,3], [4,5,6]]) + >>> np.reshape(a, 6) + array([1, 2, 3, 4, 5, 6]) + >>> np.reshape(a, 6, order='F') + array([1, 4, 2, 5, 3, 6]) + + >>> np.reshape(a, (3,-1)) # the unspecified value is inferred to be 2 + array([[1, 2], + [3, 4], + [5, 6]]) + + """ + assert order == 'C' + if not hasattr(a, 'reshape'): + a = numpypy.array(a) + return a.reshape(newshape) + + +def choose(a, choices, out=None, mode='raise'): + """ + Construct an array from an index array and a set of arrays to choose from. + + First of all, if confused or uncertain, definitely look at the Examples - + in its full generality, this function is less simple than it might + seem from the following code description (below ndi = + `numpy.lib.index_tricks`): + + ``np.choose(a,c) == np.array([c[a[I]][I] for I in ndi.ndindex(a.shape)])``. + + But this omits some subtleties. Here is a fully general summary: + + Given an "index" array (`a`) of integers and a sequence of `n` arrays + (`choices`), `a` and each choice array are first broadcast, as necessary, + to arrays of a common shape; calling these *Ba* and *Bchoices[i], i = + 0,...,n-1* we have that, necessarily, ``Ba.shape == Bchoices[i].shape`` + for each `i`. 
Then, a new array with shape ``Ba.shape`` is created as + follows: + + * if ``mode=raise`` (the default), then, first of all, each element of + `a` (and thus `Ba`) must be in the range `[0, n-1]`; now, suppose that + `i` (in that range) is the value at the `(j0, j1, ..., jm)` position + in `Ba` - then the value at the same position in the new array is the + value in `Bchoices[i]` at that same position; + + * if ``mode=wrap``, values in `a` (and thus `Ba`) may be any (signed) + integer; modular arithmetic is used to map integers outside the range + `[0, n-1]` back into that range; and then the new array is constructed + as above; + + * if ``mode=clip``, values in `a` (and thus `Ba`) may be any (signed) + integer; negative integers are mapped to 0; values greater than `n-1` + are mapped to `n-1`; and then the new array is constructed as above. + + Parameters + ---------- + a : int array + This array must contain integers in `[0, n-1]`, where `n` is the number + of choices, unless ``mode=wrap`` or ``mode=clip``, in which cases any + integers are permissible. + choices : sequence of arrays + Choice arrays. `a` and all of the choices must be broadcastable to the + same shape. If `choices` is itself an array (not recommended), then + its outermost dimension (i.e., the one corresponding to + ``choices.shape[0]``) is taken as defining the "sequence". + out : array, optional + If provided, the result will be inserted into this array. It should + be of the appropriate shape and dtype. + mode : {'raise' (default), 'wrap', 'clip'}, optional + Specifies how indices outside `[0, n-1]` will be treated: + + * 'raise' : an exception is raised + * 'wrap' : value becomes value mod `n` + * 'clip' : values < 0 are mapped to 0, values > n-1 are mapped to n-1 + + Returns + ------- + merged_array : array + The merged result. + + Raises + ------ + ValueError: shape mismatch + If `a` and each choice array are not all broadcastable to the same + shape. + + See Also + -------- + ndarray.choose : equivalent method + + Notes + ----- + To reduce the chance of misinterpretation, even though the following + "abuse" is nominally supported, `choices` should neither be, nor be + thought of as, a single array, i.e., the outermost sequence-like container + should be either a list or a tuple. + + Examples + -------- + + >>> choices = [[0, 1, 2, 3], [10, 11, 12, 13], + ... [20, 21, 22, 23], [30, 31, 32, 33]] + >>> np.choose([2, 3, 1, 0], choices + ... # the first element of the result will be the first element of the + ... # third (2+1) "array" in choices, namely, 20; the second element + ... # will be the second element of the fourth (3+1) choice array, i.e., + ... # 31, etc. + ... 
) + array([20, 31, 12, 3]) + >>> np.choose([2, 4, 1, 0], choices, mode='clip') # 4 goes to 3 (4-1) + array([20, 31, 12, 3]) + >>> # because there are 4 choice arrays + >>> np.choose([2, 4, 1, 0], choices, mode='wrap') # 4 goes to (4 mod 4) + array([20, 1, 12, 3]) + >>> # i.e., 0 + + A couple examples illustrating how choose broadcasts: + + >>> a = [[1, 0, 1], [0, 1, 0], [1, 0, 1]] + >>> choices = [-10, 10] + >>> np.choose(a, choices) + array([[ 10, -10, 10], + [-10, 10, -10], + [ 10, -10, 10]]) + + >>> # With thanks to Anne Archibald + >>> a = np.array([0, 1]).reshape((2,1,1)) + >>> c1 = np.array([1, 2, 3]).reshape((1,3,1)) + >>> c2 = np.array([-1, -2, -3, -4, -5]).reshape((1,1,5)) + >>> np.choose(a, (c1, c2)) # result is 2x3x5, res[0,:,:]=c1, res[1,:,:]=c2 + array([[[ 1, 1, 1, 1, 1], + [ 2, 2, 2, 2, 2], + [ 3, 3, 3, 3, 3]], + [[-1, -2, -3, -4, -5], + [-1, -2, -3, -4, -5], + [-1, -2, -3, -4, -5]]]) + + """ + raise NotImplementedError('Waiting on interp level method') + + +def repeat(a, repeats, axis=None): + """ + Repeat elements of an array. + + Parameters + ---------- + a : array_like + Input array. + repeats : {int, array of ints} + The number of repetitions for each element. `repeats` is broadcasted + to fit the shape of the given axis. + axis : int, optional + The axis along which to repeat values. By default, use the + flattened input array, and return a flat output array. + + Returns + ------- + repeated_array : ndarray + Output array which has the same shape as `a`, except along + the given axis. + + See Also + -------- + tile : Tile an array. + + Examples + -------- + >>> x = np.array([[1,2],[3,4]]) + >>> np.repeat(x, 2) + array([1, 1, 2, 2, 3, 3, 4, 4]) + >>> np.repeat(x, 3, axis=1) + array([[1, 1, 1, 2, 2, 2], + [3, 3, 3, 4, 4, 4]]) + >>> np.repeat(x, [1, 2], axis=0) + array([[1, 2], + [3, 4], + [3, 4]]) + + """ + raise NotImplementedError('Waiting on interp level method') + + +def put(a, ind, v, mode='raise'): + """ + Replaces specified elements of an array with given values. + + The indexing works on the flattened target array. `put` is roughly + equivalent to: + + :: + + a.flat[ind] = v + + Parameters + ---------- + a : ndarray + Target array. + ind : array_like + Target indices, interpreted as integers. + v : array_like + Values to place in `a` at target indices. If `v` is shorter than + `ind` it will be repeated as necessary. + mode : {'raise', 'wrap', 'clip'}, optional + Specifies how out-of-bounds indices will behave. + + * 'raise' -- raise an error (default) + * 'wrap' -- wrap around + * 'clip' -- clip to the range + + 'clip' mode means that all indices that are too large are replaced + by the index that addresses the last element along that axis. Note + that this disables indexing with negative numbers. + + See Also + -------- + putmask, place + + Examples + -------- + >>> a = np.arange(5) + >>> np.put(a, [0, 2], [-44, -55]) + >>> a + array([-44, 1, -55, 3, 4]) + + >>> a = np.arange(5) + >>> np.put(a, 22, -5, mode='clip') + >>> a + array([ 0, 1, 2, 3, -5]) + + """ + raise NotImplementedError('Waiting on interp level method') + + +def swapaxes(a, axis1, axis2): + """ + Interchange two axes of an array. + + Parameters + ---------- + a : array_like + Input array. + axis1 : int + First axis. + axis2 : int + Second axis. + + Returns + ------- + a_swapped : ndarray + If `a` is an ndarray, then a view of `a` is returned; otherwise + a new array is created. 
+ + Examples + -------- + >>> x = np.array([[1,2,3]]) + >>> np.swapaxes(x,0,1) + array([[1], + [2], + [3]]) + + >>> x = np.array([[[0,1],[2,3]],[[4,5],[6,7]]]) + >>> x + array([[[0, 1], + [2, 3]], + [[4, 5], + [6, 7]]]) + + >>> np.swapaxes(x,0,2) + array([[[0, 4], + [2, 6]], + [[1, 5], + [3, 7]]]) + + """ + raise NotImplementedError('Waiting on interp level method') + + +def transpose(a, axes=None): + """ + Permute the dimensions of an array. + + Parameters + ---------- + a : array_like + Input array. + axes : list of ints, optional + By default, reverse the dimensions, otherwise permute the axes + according to the values given. + + Returns + ------- + p : ndarray + `a` with its axes permuted. A view is returned whenever + possible. + + See Also + -------- + rollaxis + + Examples + -------- + >>> x = np.arange(4).reshape((2,2)) + >>> x + array([[0, 1], + [2, 3]]) + + >>> np.transpose(x) + array([[0, 2], + [1, 3]]) + + >>> x = np.ones((1, 2, 3)) + >>> np.transpose(x, (1, 0, 2)).shape + (2, 1, 3) + + """ + if axes is not None: + raise NotImplementedError('No "axes" arg yet.') + if not hasattr(a, 'T'): + a = numpypy.array(a) + return a.T + +def sort(a, axis=-1, kind='quicksort', order=None): + """ + Return a sorted copy of an array. + + Parameters + ---------- + a : array_like + Array to be sorted. + axis : int or None, optional + Axis along which to sort. If None, the array is flattened before + sorting. The default is -1, which sorts along the last axis. + kind : {'quicksort', 'mergesort', 'heapsort'}, optional + Sorting algorithm. Default is 'quicksort'. + order : list, optional + When `a` is a structured array, this argument specifies which fields + to compare first, second, and so on. This list does not need to + include all of the fields. + + Returns + ------- + sorted_array : ndarray + Array of the same type and shape as `a`. + + See Also + -------- + ndarray.sort : Method to sort an array in-place. + argsort : Indirect sort. + lexsort : Indirect stable sort on multiple keys. + searchsorted : Find elements in a sorted array. + + Notes + ----- + The various sorting algorithms are characterized by their average speed, + worst case performance, work space size, and whether they are stable. A + stable sort keeps items with the same key in the same relative + order. The three available algorithms have the following + properties: + + =========== ======= ============= ============ ======= + kind speed worst case work space stable + =========== ======= ============= ============ ======= + 'quicksort' 1 O(n^2) 0 no + 'mergesort' 2 O(n*log(n)) ~n/2 yes + 'heapsort' 3 O(n*log(n)) 0 no + =========== ======= ============= ============ ======= + + All the sort algorithms make temporary copies of the data when + sorting along any but the last axis. Consequently, sorting along + the last axis is faster and uses less space than sorting along + any other axis. + + The sort order for complex numbers is lexicographic. If both the real + and imaginary parts are non-nan then the order is determined by the + real parts except when they are equal, in which case the order is + determined by the imaginary parts. + + Previous to numpy 1.4.0 sorting real and complex arrays containing nan + values led to undefined behaviour. In numpy versions >= 1.4.0 nan + values are sorted to the end. The extended sort order is: + + * Real: [R, nan] + * Complex: [R + Rj, R + nanj, nan + Rj, nan + nanj] + + where R is a non-nan real value. 
Complex values with the same nan + placements are sorted according to the non-nan part if it exists. + Non-nan values are sorted as before. + + Examples + -------- + >>> a = np.array([[1,4],[3,1]]) + >>> np.sort(a) # sort along the last axis + array([[1, 4], + [1, 3]]) + >>> np.sort(a, axis=None) # sort the flattened array + array([1, 1, 3, 4]) + >>> np.sort(a, axis=0) # sort along the first axis + array([[1, 1], + [3, 4]]) + + Use the `order` keyword to specify a field to use when sorting a + structured array: + + >>> dtype = [('name', 'S10'), ('height', float), ('age', int)] + >>> values = [('Arthur', 1.8, 41), ('Lancelot', 1.9, 38), + ... ('Galahad', 1.7, 38)] + >>> a = np.array(values, dtype=dtype) # create a structured array + >>> np.sort(a, order='height') # doctest: +SKIP + array([('Galahad', 1.7, 38), ('Arthur', 1.8, 41), + ('Lancelot', 1.8999999999999999, 38)], + dtype=[('name', '|S10'), ('height', '<f8'), ('age', '<i4')]) + + Sort by age, then height if ages are equal: + + >>> np.sort(a, order=['age', 'height']) # doctest: +SKIP + array([('Galahad', 1.7, 38), ('Lancelot', 1.8999999999999999, 38), + ('Arthur', 1.8, 41)], + dtype=[('name', '|S10'), ('height', '<f8'), ('age', '<i4')]) + + """ + raise NotImplementedError('Waiting on interp level method') + + +def argsort(a, axis=-1, kind='quicksort', order=None): + """ + Returns the indices that would sort an array. + + Perform an indirect sort along the given axis using the algorithm specified + by the `kind` keyword. It returns an array of indices of the same shape as + `a` that index data along the given axis in sorted order. + + Parameters + ---------- + a : array_like + Array to sort. + axis : int or None, optional + Axis along which to sort. The default is -1 (the last axis). If None, + the flattened array is used. + kind : {'quicksort', 'mergesort', 'heapsort'}, optional + Sorting algorithm. + order : list, optional + When `a` is an array with fields defined, this argument specifies + which fields to compare first, second, etc. Not all fields need be + specified. + + Returns + ------- + index_array : ndarray, int + Array of indices that sort `a` along the specified axis. + In other words, ``a[index_array]`` yields a sorted `a`. + + See Also + -------- + sort : Describes sorting algorithms used. + lexsort : Indirect stable sort with multiple keys. + ndarray.sort : Inplace sort. + + Notes + ----- + See `sort` for notes on the different sorting algorithms. + + As of NumPy 1.4.0 `argsort` works with real/complex arrays containing + nan values. The enhanced sort order is documented in `sort`. + + Examples + -------- + One dimensional array: + + >>> x = np.array([3, 1, 2]) + >>> np.argsort(x) + array([1, 2, 0]) + + Two-dimensional array: + + >>> x = np.array([[0, 3], [2, 2]]) + >>> x + array([[0, 3], + [2, 2]]) + + >>> np.argsort(x, axis=0) + array([[0, 1], + [1, 0]]) + + >>> np.argsort(x, axis=1) + array([[0, 1], + [0, 1]]) + + Sorting with keys: + + >>> x = np.array([(1, 0), (0, 1)], dtype=[('x', '<i4'), ('y', '<i4')]) + >>> x + array([(1, 0), (0, 1)], + dtype=[('x', '<i4'), ('y', '<i4')]) + + >>> np.argsort(x, order=('x','y')) + array([1, 0]) + + >>> np.argsort(x, order=('y','x')) + array([0, 1]) + + """ + raise NotImplementedError('Waiting on interp level method') + + +def argmax(a, axis=None): + """ + Indices of the maximum values along an axis. + + Parameters + ---------- + a : array_like + Input array. + axis : int, optional + By default, the index is into the flattened array, otherwise + along the specified axis. + + Returns + ------- + index_array : ndarray of ints + Array of indices into the array. It has the same shape as `a.shape` + with the dimension along `axis` removed. + + See Also + -------- + ndarray.argmax, argmin + amax : The maximum value along a given axis. + unravel_index : Convert a flat index into an index tuple. + + Notes + ----- + In case of multiple occurrences of the maximum values, the indices + corresponding to the first occurrence are returned. + + Examples + -------- + >>> a = np.arange(6).reshape(2,3) + >>> a + array([[0, 1, 2], + [3, 4, 5]]) + >>> np.argmax(a) + 5 + >>> np.argmax(a, axis=0) + array([1, 1, 1]) + >>> np.argmax(a, axis=1) + array([2, 2]) + + >>> b = np.arange(6) + >>> b[1] = 5 + >>> b + array([0, 5, 2, 3, 4, 5]) + >>> np.argmax(b) # Only the first occurrence is returned. + 1 + + """ + assert axis is None + if not hasattr(a, 'argmax'): + a = numpypy.array(a) + return a.argmax() + + +def argmin(a, axis=None): + """ + Return the indices of the minimum values along an axis. + + See Also + -------- + argmax : Similar function. Please refer to `numpy.argmax` for detailed + documentation. + + """ + assert axis is None + if not hasattr(a, 'argmin'): + a = numpypy.array(a) + return a.argmin() + + +def searchsorted(a, v, side='left'): + """ + Find indices where elements should be inserted to maintain order. + + Find the indices into a sorted array `a` such that, if the corresponding + elements in `v` were inserted before the indices, the order of `a` would + be preserved.
+ + Parameters + ---------- + a : 1-D array_like + Input array, sorted in ascending order. + v : array_like + Values to insert into `a`. + side : {'left', 'right'}, optional + If 'left', the index of the first suitable location found is given. If + 'right', return the last such index. If there is no suitable + index, return either 0 or N (where N is the length of `a`). + + Returns + ------- + indices : array of ints + Array of insertion points with the same shape as `v`. + + See Also + -------- + sort : Return a sorted copy of an array. + histogram : Produce histogram from 1-D data. + + Notes + ----- + Binary search is used to find the required insertion points. + + As of Numpy 1.4.0 `searchsorted` works with real/complex arrays containing + `nan` values. The enhanced sort order is documented in `sort`. + + Examples + -------- + >>> np.searchsorted([1,2,3,4,5], 3) + 2 + >>> np.searchsorted([1,2,3,4,5], 3, side='right') + 3 + >>> np.searchsorted([1,2,3,4,5], [-10, 10, 2, 3]) + array([0, 5, 1, 2]) + + """ + raise NotImplementedError('Waiting on interp level method') + + +def resize(a, new_shape): + """ + Return a new array with the specified shape. + + If the new array is larger than the original array, then the new + array is filled with repeated copies of `a`. Note that this behavior + is different from a.resize(new_shape) which fills with zeros instead + of repeated copies of `a`. + + Parameters + ---------- + a : array_like + Array to be resized. + + new_shape : int or tuple of int + Shape of resized array. + + Returns + ------- + reshaped_array : ndarray + The new array is formed from the data in the old array, repeated + if necessary to fill out the required number of elements. The + data are repeated in the order that they are stored in memory. + + See Also + -------- + ndarray.resize : resize an array in-place. + + Examples + -------- + >>> a=np.array([[0,1],[2,3]]) + >>> np.resize(a,(1,4)) + array([[0, 1, 2, 3]]) + >>> np.resize(a,(2,4)) + array([[0, 1, 2, 3], + [0, 1, 2, 3]]) + + """ + raise NotImplementedError('Waiting on interp level method') + + +def squeeze(a): + """ + Remove single-dimensional entries from the shape of an array. + + Parameters + ---------- + a : array_like + Input data. + + Returns + ------- + squeezed : ndarray + The input array, but with with all dimensions of length 1 + removed. Whenever possible, a view on `a` is returned. + + Examples + -------- + >>> x = np.array([[[0], [1], [2]]]) + >>> x.shape + (1, 3, 1) + >>> np.squeeze(x).shape + (3,) + + """ + raise NotImplementedError('Waiting on interp level method') + + +def diagonal(a, offset=0, axis1=0, axis2=1): + """ + Return specified diagonals. + + If `a` is 2-D, returns the diagonal of `a` with the given offset, + i.e., the collection of elements of the form ``a[i, i+offset]``. If + `a` has more than two dimensions, then the axes specified by `axis1` + and `axis2` are used to determine the 2-D sub-array whose diagonal is + returned. The shape of the resulting array can be determined by + removing `axis1` and `axis2` and appending an index to the right equal + to the size of the resulting diagonals. + + Parameters + ---------- + a : array_like + Array from which the diagonals are taken. + offset : int, optional + Offset of the diagonal from the main diagonal. Can be positive or + negative. Defaults to main diagonal (0). + axis1 : int, optional + Axis to be used as the first axis of the 2-D sub-arrays from which + the diagonals should be taken. Defaults to first axis (0). 
+ axis2 : int, optional + Axis to be used as the second axis of the 2-D sub-arrays from + which the diagonals should be taken. Defaults to second axis (1). + + Returns + ------- + array_of_diagonals : ndarray + If `a` is 2-D, a 1-D array containing the diagonal is returned. + If the dimension of `a` is larger, then an array of diagonals is + returned, "packed" from left-most dimension to right-most (e.g., + if `a` is 3-D, then the diagonals are "packed" along rows). + + Raises + ------ + ValueError + If the dimension of `a` is less than 2. + + See Also + -------- + diag : MATLAB work-a-like for 1-D and 2-D arrays. + diagflat : Create diagonal arrays. + trace : Sum along diagonals. + + Examples + -------- + >>> a = np.arange(4).reshape(2,2) + >>> a + array([[0, 1], + [2, 3]]) + >>> a.diagonal() + array([0, 3]) + >>> a.diagonal(1) + array([1]) + + A 3-D example: + + >>> a = np.arange(8).reshape(2,2,2); a + array([[[0, 1], + [2, 3]], + [[4, 5], + [6, 7]]]) + >>> a.diagonal(0, # Main diagonals of two arrays created by skipping + ... 0, # across the outer(left)-most axis last and + ... 1) # the "middle" (row) axis first. + array([[0, 6], + [1, 7]]) + + The sub-arrays whose main diagonals we just obtained; note that each + corresponds to fixing the right-most (column) axis, and that the + diagonals are "packed" in rows. + + >>> a[:,:,0] # main diagonal is [0 6] + array([[0, 2], + [4, 6]]) + >>> a[:,:,1] # main diagonal is [1 7] + array([[1, 3], + [5, 7]]) + + """ + raise NotImplementedError('Waiting on interp level method') + + +def trace(a, offset=0, axis1=0, axis2=1, dtype=None, out=None): + """ + Return the sum along diagonals of the array. + + If `a` is 2-D, the sum along its diagonal with the given offset + is returned, i.e., the sum of elements ``a[i,i+offset]`` for all i. + + If `a` has more than two dimensions, then the axes specified by axis1 and + axis2 are used to determine the 2-D sub-arrays whose traces are returned. + The shape of the resulting array is the same as that of `a` with `axis1` + and `axis2` removed. + + Parameters + ---------- + a : array_like + Input array, from which the diagonals are taken. + offset : int, optional + Offset of the diagonal from the main diagonal. Can be both positive + and negative. Defaults to 0. + axis1, axis2 : int, optional + Axes to be used as the first and second axis of the 2-D sub-arrays + from which the diagonals should be taken. Defaults are the first two + axes of `a`. + dtype : dtype, optional + Determines the data-type of the returned array and of the accumulator + where the elements are summed. If dtype has the value None and `a` is + of integer type of precision less than the default integer + precision, then the default integer precision is used. Otherwise, + the precision is the same as that of `a`. + out : ndarray, optional + Array into which the output is placed. Its type is preserved and + it must be of the right shape to hold the output. + + Returns + ------- + sum_along_diagonals : ndarray + If `a` is 2-D, the sum along the diagonal is returned. If `a` has + larger dimensions, then an array of sums along diagonals is returned. + + See Also + -------- + diag, diagonal, diagflat + + Examples + -------- + >>> np.trace(np.eye(3)) + 3.0 + >>> a = np.arange(8).reshape((2,2,2)) + >>> np.trace(a) + array([6, 8]) + + >>> a = np.arange(24).reshape((2,2,2,3)) + >>> np.trace(a).shape + (2, 3) + + """ + raise NotImplementedError('Waiting on interp level method') + +def ravel(a, order='C'): + """ + Return a flattened array. 
+ + A 1-D array, containing the elements of the input, is returned. A copy is + made only if needed. + + Parameters + ---------- + a : array_like + Input array. The elements in ``a`` are read in the order specified by + `order`, and packed as a 1-D array. + order : {'C','F', 'A', 'K'}, optional + The elements of ``a`` are read in this order. 'C' means to view + the elements in C (row-major) order. 'F' means to view the elements + in Fortran (column-major) order. 'A' means to view the elements + in 'F' order if a is Fortran contiguous, 'C' order otherwise. + 'K' means to view the elements in the order they occur in memory, + except for reversing the data when strides are negative. + By default, 'C' order is used. + + Returns + ------- + 1d_array : ndarray + Output of the same dtype as `a`, and of shape ``(a.size(),)``. + + See Also + -------- + ndarray.flat : 1-D iterator over an array. + ndarray.flatten : 1-D array copy of the elements of an array + in row-major order. + + Notes + ----- + In row-major order, the row index varies the slowest, and the column + index the quickest. This can be generalized to multiple dimensions, + where row-major order implies that the index along the first axis + varies slowest, and the index along the last quickest. The opposite holds + for Fortran-, or column-major, mode. + + Examples + -------- + It is equivalent to ``reshape(-1, order=order)``. + + >>> x = np.array([[1, 2, 3], [4, 5, 6]]) + >>> print np.ravel(x) + [1 2 3 4 5 6] + + >>> print x.reshape(-1) + [1 2 3 4 5 6] + + >>> print np.ravel(x, order='F') + [1 4 2 5 3 6] + + When ``order`` is 'A', it will preserve the array's 'C' or 'F' ordering: + + >>> print np.ravel(x.T) + [1 4 2 5 3 6] + >>> print np.ravel(x.T, order='A') + [1 2 3 4 5 6] + + When ``order`` is 'K', it will preserve orderings that are neither 'C' + nor 'F', but won't reverse axes: + + >>> a = np.arange(3)[::-1]; a + array([2, 1, 0]) + >>> a.ravel(order='C') + array([2, 1, 0]) + >>> a.ravel(order='K') + array([2, 1, 0]) + + >>> a = np.arange(12).reshape(2,3,2).swapaxes(1,2); a + array([[[ 0, 2, 4], + [ 1, 3, 5]], + [[ 6, 8, 10], + [ 7, 9, 11]]]) + >>> a.ravel(order='C') + array([ 0, 2, 4, 1, 3, 5, 6, 8, 10, 7, 9, 11]) + >>> a.ravel(order='K') + array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]) + + """ + raise NotImplementedError('Waiting on interp level method') + + +def nonzero(a): + """ + Return the indices of the elements that are non-zero. + + Returns a tuple of arrays, one for each dimension of `a`, containing + the indices of the non-zero elements in that dimension. The + corresponding non-zero values can be obtained with:: + + a[nonzero(a)] + + To group the indices by element, rather than dimension, use:: + + transpose(nonzero(a)) + + The result of this is always a 2-D array, with a row for + each non-zero element. + + Parameters + ---------- + a : array_like + Input array. + + Returns + ------- + tuple_of_arrays : tuple + Indices of elements that are non-zero. + + See Also + -------- + flatnonzero : + Return indices that are non-zero in the flattened version of the input + array. + ndarray.nonzero : + Equivalent ndarray method. + count_nonzero : + Counts the number of non-zero elements in the input array. 
+ + Examples + -------- + >>> x = np.eye(3) + >>> x + array([[ 1., 0., 0.], + [ 0., 1., 0.], + [ 0., 0., 1.]]) + >>> np.nonzero(x) + (array([0, 1, 2]), array([0, 1, 2])) + + >>> x[np.nonzero(x)] + array([ 1., 1., 1.]) + >>> np.transpose(np.nonzero(x)) + array([[0, 0], + [1, 1], + [2, 2]]) + + A common use for ``nonzero`` is to find the indices of an array, where + a condition is True. Given an array `a`, the condition `a` > 3 is a + boolean array and since False is interpreted as 0, np.nonzero(a > 3) + yields the indices of the `a` where the condition is true. + + >>> a = np.array([[1,2,3],[4,5,6],[7,8,9]]) + >>> a > 3 + array([[False, False, False], + [ True, True, True], + [ True, True, True]], dtype=bool) + >>> np.nonzero(a > 3) + (array([1, 1, 1, 2, 2, 2]), array([0, 1, 2, 0, 1, 2])) + + The ``nonzero`` method of the boolean array can also be called. + + >>> (a > 3).nonzero() + (array([1, 1, 1, 2, 2, 2]), array([0, 1, 2, 0, 1, 2])) + + """ + raise NotImplementedError('Waiting on interp level method') + + +def shape(a): + """ + Return the shape of an array. + + Parameters + ---------- + a : array_like + Input array. + + Returns + ------- + shape : tuple of ints + The elements of the shape tuple give the lengths of the + corresponding array dimensions. + + See Also + -------- + alen + ndarray.shape : Equivalent array method. + + Examples + -------- + >>> np.shape(np.eye(3)) + (3, 3) + >>> np.shape([[1, 2]]) + (1, 2) + >>> np.shape([0]) + (1,) + >>> np.shape(0) + () + + >>> a = np.array([(1, 2), (3, 4)], dtype=[('x', 'i4'), ('y', 'i4')]) + >>> np.shape(a) + (2,) + >>> a.shape + (2,) + + """ + if not hasattr(a, 'shape'): + a = numpypy.array(a) + return a.shape + + +def compress(condition, a, axis=None, out=None): + """ + Return selected slices of an array along given axis. + + When working along a given axis, a slice along that axis is returned in + `output` for each index where `condition` evaluates to True. When + working on a 1-D array, `compress` is equivalent to `extract`. + + Parameters + ---------- + condition : 1-D array of bools + Array that selects which entries to return. If len(condition) + is less than the size of `a` along the given axis, then output is + truncated to the length of the condition array. + a : array_like + Array from which to extract a part. + axis : int, optional + Axis along which to take slices. If None (default), work on the + flattened array. + out : ndarray, optional + Output array. Its type is preserved and it must be of the right + shape to hold the output. + + Returns + ------- + compressed_array : ndarray + A copy of `a` without the slices along axis for which `condition` + is false. + + See Also + -------- + take, choose, diag, diagonal, select + ndarray.compress : Equivalent method. + numpy.doc.ufuncs : Section "Output arguments" + + Examples + -------- + >>> a = np.array([[1, 2], [3, 4], [5, 6]]) + >>> a + array([[1, 2], + [3, 4], + [5, 6]]) + >>> np.compress([0, 1], a, axis=0) + array([[3, 4]]) + >>> np.compress([False, True, True], a, axis=0) + array([[3, 4], + [5, 6]]) + >>> np.compress([False, True], a, axis=1) + array([[2], + [4], + [6]]) + + Working on the flattened array does not return slices along an axis but + selects elements. + + >>> np.compress([False, True], a) + array([2]) + + """ + raise NotImplementedError('Waiting on interp level method') + + +def clip(a, a_min, a_max, out=None): + """ + Clip (limit) the values in an array. + + Given an interval, values outside the interval are clipped to + the interval edges. 
For example, if an interval of ``[0, 1]`` + is specified, values smaller than 0 become 0, and values larger + than 1 become 1. + + Parameters + ---------- + a : array_like + Array containing elements to clip. + a_min : scalar or array_like + Minimum value. + a_max : scalar or array_like + Maximum value. If `a_min` or `a_max` are array_like, then they will + be broadcasted to the shape of `a`. + out : ndarray, optional + The results will be placed in this array. It may be the input + array for in-place clipping. `out` must be of the right shape + to hold the output. Its type is preserved. + + Returns + ------- + clipped_array : ndarray + An array with the elements of `a`, but where values + < `a_min` are replaced with `a_min`, and those > `a_max` + with `a_max`. + + See Also + -------- + numpy.doc.ufuncs : Section "Output arguments" + + Examples + -------- + >>> a = np.arange(10) + >>> np.clip(a, 1, 8) + array([1, 1, 2, 3, 4, 5, 6, 7, 8, 8]) + >>> a + array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) + >>> np.clip(a, 3, 6, out=a) + array([3, 3, 3, 3, 4, 5, 6, 6, 6, 6]) + >>> a = np.arange(10) + >>> a + array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) + >>> np.clip(a, [3,4,1,1,1,4,4,4,4,4], 8) + array([3, 4, 2, 3, 4, 5, 6, 7, 8, 8]) + + """ + raise NotImplementedError('Waiting on interp level method') + + +def sum(a, axis=None, dtype=None, out=None): + """ + Sum of array elements over a given axis. + + Parameters + ---------- + a : array_like + Elements to sum. + axis : integer, optional + Axis over which the sum is taken. By default `axis` is None, + and all elements are summed. + dtype : dtype, optional + The type of the returned array and of the accumulator in which + the elements are summed. By default, the dtype of `a` is used. + An exception is when `a` has an integer type with less precision + than the default platform integer. In that case, the default + platform integer is used instead. + out : ndarray, optional + Array into which the output is placed. By default, a new array is + created. If `out` is given, it must be of the appropriate shape + (the shape of `a` with `axis` removed, i.e., + ``numpy.delete(a.shape, axis)``). Its type is preserved. See + `doc.ufuncs` (Section "Output arguments") for more details. + + Returns + ------- + sum_along_axis : ndarray + An array with the same shape as `a`, with the specified + axis removed. If `a` is a 0-d array, or if `axis` is None, a scalar + is returned. If an output array is specified, a reference to + `out` is returned. + + See Also + -------- + ndarray.sum : Equivalent method. + + cumsum : Cumulative sum of array elements. + + trapz : Integration of array values using the composite trapezoidal rule. + + mean, average + + Notes + ----- + Arithmetic is modular when using integer types, and no error is + raised on overflow. + + Examples + -------- + >>> np.sum([0.5, 1.5]) + 2.0 + >>> np.sum([0.5, 0.7, 0.2, 1.5], dtype=np.int32) + 1 + >>> np.sum([[0, 1], [0, 5]]) + 6 + >>> np.sum([[0, 1], [0, 5]], axis=0) + array([0, 6]) + >>> np.sum([[0, 1], [0, 5]], axis=1) + array([1, 5]) + + If the accumulator is too small, overflow occurs: + + >>> np.ones(128, dtype=np.int8).sum(dtype=np.int8) + -128 + + """ + assert dtype is None + assert out is None + if not hasattr(a, "sum"): + a = numpypy.array(a) + return a.sum(axis=axis) + + +def product (a, axis=None, dtype=None, out=None): + """ + Return the product of array elements over a given axis. + + See Also + -------- + prod : equivalent function; see for details. 
+ + """ + raise NotImplementedError('Waiting on interp level method') + + +def sometrue(a, axis=None, out=None): + """ + Check whether some values are true. + + Refer to `any` for full documentation. + + See Also + -------- + any : equivalent function + + """ + assert axis is None + assert out is None + if not hasattr(a, 'any'): + a = numpypy.array(a) + return a.any() + + +def alltrue (a, axis=None, out=None): + """ + Check if all elements of input array are true. + + See Also + -------- + numpy.all : Equivalent function; see for details. + + """ + assert axis is None + assert out is None + if not hasattr(a, 'all'): + a = numpypy.array(a) + return a.all() + +def any(a,axis=None, out=None): + """ + Test whether any array element along a given axis evaluates to True. + + Returns single boolean unless `axis` is not ``None`` + + Parameters + ---------- + a : array_like + Input array or object that can be converted to an array. + axis : int, optional + Axis along which a logical OR is performed. The default + (`axis` = `None`) is to perform a logical OR over a flattened + input array. `axis` may be negative, in which case it counts + from the last to the first axis. + out : ndarray, optional + Alternate output array in which to place the result. It must have + the same shape as the expected output and its type is preserved + (e.g., if it is of type float, then it will remain so, returning + 1.0 for True and 0.0 for False, regardless of the type of `a`). + See `doc.ufuncs` (Section "Output arguments") for details. + + Returns + ------- + any : bool or ndarray + A new boolean or `ndarray` is returned unless `out` is specified, + in which case a reference to `out` is returned. + + See Also + -------- + ndarray.any : equivalent method + + all : Test whether all elements along a given axis evaluate to True. + + Notes + ----- + Not a Number (NaN), positive infinity and negative infinity evaluate + to `True` because these are not equal to zero. + + Examples + -------- + >>> np.any([[True, False], [True, True]]) + True + + >>> np.any([[True, False], [False, False]], axis=0) + array([ True, False], dtype=bool) + + >>> np.any([-1, 0, 5]) + True + + >>> np.any(np.nan) + True + + >>> o=np.array([False]) + >>> z=np.any([-1, 4, 5], out=o) + >>> z, o + (array([ True], dtype=bool), array([ True], dtype=bool)) + >>> # Check now that z is a reference to o + >>> z is o + True + >>> id(z), id(o) # identity of z and o # doctest: +SKIP + (191614240, 191614240) + + """ + assert axis is None + assert out is None + if not hasattr(a, 'any'): + a = numpypy.array(a) + return a.any() + + +def all(a,axis=None, out=None): + """ + Test whether all array elements along a given axis evaluate to True. + + Parameters + ---------- + a : array_like + Input array or object that can be converted to an array. + axis : int, optional + Axis along which a logical AND is performed. + The default (`axis` = `None`) is to perform a logical AND + over a flattened input array. `axis` may be negative, in which + case it counts from the last to the first axis. + out : ndarray, optional + Alternate output array in which to place the result. + It must have the same shape as the expected output and its + type is preserved (e.g., if ``dtype(out)`` is float, the result + will consist of 0.0's and 1.0's). See `doc.ufuncs` (Section + "Output arguments") for more details. + + Returns + ------- + all : ndarray, bool + A new boolean or array is returned unless `out` is specified, + in which case a reference to `out` is returned. 
+ + See Also + -------- + ndarray.all : equivalent method + + any : Test whether any element along a given axis evaluates to True. + + Notes + ----- + Not a Number (NaN), positive infinity and negative infinity + evaluate to `True` because these are not equal to zero. + + Examples + -------- + >>> np.all([[True,False],[True,True]]) + False + + >>> np.all([[True,False],[True,True]], axis=0) + array([ True, False], dtype=bool) + + >>> np.all([-1, 4, 5]) + True + + >>> np.all([1.0, np.nan]) + True + + >>> o=np.array([False]) + >>> z=np.all([-1, 4, 5], out=o) + >>> id(z), id(o), z # doctest: +SKIP + (28293632, 28293632, array([ True], dtype=bool)) + + """ + assert axis is None + assert out is None + if not hasattr(a, 'all'): + a = numpypy.array(a) + return a.all() + + +def cumsum (a, axis=None, dtype=None, out=None): + """ + Return the cumulative sum of the elements along a given axis. + + Parameters + ---------- + a : array_like + Input array. + axis : int, optional + Axis along which the cumulative sum is computed. The default + (None) is to compute the cumsum over the flattened array. + dtype : dtype, optional + Type of the returned array and of the accumulator in which the + elements are summed. If `dtype` is not specified, it defaults + to the dtype of `a`, unless `a` has an integer dtype with a + precision less than that of the default platform integer. In + that case, the default platform integer is used. + out : ndarray, optional + Alternative output array in which to place the result. It must + have the same shape and buffer length as the expected output + but the type will be cast if necessary. See `doc.ufuncs` + (Section "Output arguments") for more details. + + Returns + ------- + cumsum_along_axis : ndarray. + A new array holding the result is returned unless `out` is + specified, in which case a reference to `out` is returned. The + result has the same size as `a`, and the same shape as `a` if + `axis` is not None or `a` is a 1-d array. + + + See Also + -------- + sum : Sum array elements. + + trapz : Integration of array values using the composite trapezoidal rule. + + Notes + ----- + Arithmetic is modular when using integer types, and no error is + raised on overflow. + + Examples + -------- + >>> a = np.array([[1,2,3], [4,5,6]]) + >>> a + array([[1, 2, 3], + [4, 5, 6]]) + >>> np.cumsum(a) + array([ 1, 3, 6, 10, 15, 21]) + >>> np.cumsum(a, dtype=float) # specifies type of output value(s) + array([ 1., 3., 6., 10., 15., 21.]) + + >>> np.cumsum(a,axis=0) # sum over rows for each of the 3 columns + array([[1, 2, 3], + [5, 7, 9]]) + >>> np.cumsum(a,axis=1) # sum over columns for each of the 2 rows + array([[ 1, 3, 6], + [ 4, 9, 15]]) + + """ + raise NotImplementedError('Waiting on interp level method') + + +def cumproduct(a, axis=None, dtype=None, out=None): + """ + Return the cumulative product over the given axis. + + + See Also + -------- + cumprod : equivalent function; see for details. + + """ + raise NotImplementedError('Waiting on interp level method') + + +def ptp(a, axis=None, out=None): + """ + Range of values (maximum - minimum) along an axis. + + The name of the function comes from the acronym for 'peak to peak'. + + Parameters + ---------- + a : array_like + Input values. + axis : int, optional + Axis along which to find the peaks. By default, flatten the + array. + out : array_like + Alternative output array in which to place the result. 
It must + have the same shape and buffer length as the expected output, + but the type of the output values will be cast if necessary. + + Returns + ------- + ptp : ndarray + A new array holding the result, unless `out` was + specified, in which case a reference to `out` is returned. + + Examples + -------- + >>> x = np.arange(4).reshape((2,2)) + >>> x + array([[0, 1], + [2, 3]]) + + >>> np.ptp(x, axis=0) + array([2, 2]) + + >>> np.ptp(x, axis=1) + array([1, 1]) + + """ + raise NotImplementedError('Waiting on interp level method') + + +def amax(a, axis=None, out=None): + """ + Return the maximum of an array or maximum along an axis. + + Parameters + ---------- + a : array_like + Input data. + axis : int, optional + Axis along which to operate. By default flattened input is used. + out : ndarray, optional + Alternate output array in which to place the result. Must be of + the same shape and buffer length as the expected output. See + `doc.ufuncs` (Section "Output arguments") for more details. + + Returns + ------- + amax : ndarray or scalar + Maximum of `a`. If `axis` is None, the result is a scalar value. + If `axis` is given, the result is an array of dimension + ``a.ndim - 1``. + + See Also + -------- + nanmax : NaN values are ignored instead of being propagated. + fmax : same behavior as the C99 fmax function. + argmax : indices of the maximum values. + + Notes + ----- + NaN values are propagated, that is if at least one item is NaN, the + corresponding max value will be NaN as well. To ignore NaN values + (MATLAB behavior), please use nanmax. + + Examples + -------- + >>> a = np.arange(4).reshape((2,2)) + >>> a + array([[0, 1], + [2, 3]]) + >>> np.amax(a) + 3 + >>> np.amax(a, axis=0) + array([2, 3]) + >>> np.amax(a, axis=1) + array([1, 3]) + + >>> b = np.arange(5, dtype=np.float) + >>> b[2] = np.NaN + >>> np.amax(b) + nan + >>> np.nanmax(b) + 4.0 + + """ + assert axis is None + assert out is None + if not hasattr(a, "max"): + a = numpypy.array(a) + return a.max() + + +def amin(a, axis=None, out=None): + """ + Return the minimum of an array or minimum along an axis. + + Parameters + ---------- + a : array_like + Input data. + axis : int, optional + Axis along which to operate. By default a flattened input is used. + out : ndarray, optional + Alternative output array in which to place the result. Must + be of the same shape and buffer length as the expected output. + See `doc.ufuncs` (Section "Output arguments") for more details. + + Returns + ------- + amin : ndarray + A new array or a scalar array with the result. + + See Also + -------- + nanmin: nan values are ignored instead of being propagated + fmin: same behavior as the C99 fmin function + argmin: Return the indices of the minimum values. + + amax, nanmax, fmax + + Notes + ----- + NaN values are propagated, that is if at least one item is nan, the + corresponding min value will be nan as well. To ignore NaN values (matlab + behavior), please use nanmin. 
+ + Examples + -------- + >>> a = np.arange(4).reshape((2,2)) + >>> a + array([[0, 1], + [2, 3]]) + >>> np.amin(a) # Minimum of the flattened array + 0 + >>> np.amin(a, axis=0) # Minima along the first axis + array([0, 1]) + >>> np.amin(a, axis=1) # Minima along the second axis + array([0, 2]) + + >>> b = np.arange(5, dtype=np.float) + >>> b[2] = np.NaN + >>> np.amin(b) + nan + >>> np.nanmin(b) + 0.0 + + """ + # amin() is equivalent to min() + assert axis is None + assert out is None + if not hasattr(a, 'min'): + a = numpypy.array(a) + return a.min() + +def alen(a): + """ + Return the length of the first dimension of the input array. + + Parameters + ---------- + a : array_like + Input array. + + Returns + ------- + l : int + Length of the first dimension of `a`. + + See Also + -------- + shape, size + + Examples + -------- + >>> a = np.zeros((7,4,5)) + >>> a.shape[0] + 7 + >>> np.alen(a) + 7 + + """ + if not hasattr(a, 'shape'): + a = numpypy.array(a) + return a.shape[0] + + +def prod(a, axis=None, dtype=None, out=None): + """ + Return the product of array elements over a given axis. + + Parameters + ---------- + a : array_like + Input data. + axis : int, optional + Axis over which the product is taken. By default, the product + of all elements is calculated. + dtype : data-type, optional + The data-type of the returned array, as well as of the accumulator + in which the elements are multiplied. By default, if `a` is of + integer type, `dtype` is the default platform integer. (Note: if + the type of `a` is unsigned, then so is `dtype`.) Otherwise, + the dtype is the same as that of `a`. + out : ndarray, optional + Alternative output array in which to place the result. It must have + the same shape as the expected output, but the type of the + output values will be cast if necessary. + + Returns + ------- + product_along_axis : ndarray, see `dtype` parameter above. + An array shaped as `a` but with the specified axis removed. + Returns a reference to `out` if specified. + + See Also + -------- + ndarray.prod : equivalent method + numpy.doc.ufuncs : Section "Output arguments" + + Notes + ----- + Arithmetic is modular when using integer types, and no error is + raised on overflow. That means that, on a 32-bit platform: + + >>> x = np.array([536870910, 536870910, 536870910, 536870910]) + >>> np.prod(x) #random + 16 + + Examples + -------- + By default, calculate the product of all elements: + + >>> np.prod([1.,2.]) + 2.0 + + Even when the input array is two-dimensional: + + >>> np.prod([[1.,2.],[3.,4.]]) + 24.0 + + But we can also specify the axis over which to multiply: + + >>> np.prod([[1.,2.],[3.,4.]], axis=1) + array([ 2., 12.]) + + If the type of `x` is unsigned, then the output type is + the unsigned platform integer: + + >>> x = np.array([1, 2, 3], dtype=np.uint8) + >>> np.prod(x).dtype == np.uint + True + + If `x` is of a signed integer type, then the output type + is the default platform integer: + + >>> x = np.array([1, 2, 3], dtype=np.int8) + >>> np.prod(x).dtype == np.int + True + + """ + raise NotImplementedError('Waiting on interp level method') + + +def cumprod(a, axis=None, dtype=None, out=None): + """ + Return the cumulative product of elements along a given axis. + + Parameters + ---------- + a : array_like + Input array. + axis : int, optional + Axis along which the cumulative product is computed. By default + the input is flattened. + dtype : dtype, optional + Type of the returned array, as well as of the accumulator in which + the elements are multiplied. 
If *dtype* is not specified, it + defaults to the dtype of `a`, unless `a` has an integer dtype with + a precision less than that of the default platform integer. In + that case, the default platform integer is used instead. + out : ndarray, optional + Alternative output array in which to place the result. It must + have the same shape and buffer length as the expected output + but the type of the resulting values will be cast if necessary. + + Returns + ------- + cumprod : ndarray + A new array holding the result is returned unless `out` is + specified, in which case a reference to out is returned. + + See Also + -------- + numpy.doc.ufuncs : Section "Output arguments" + + Notes + ----- + Arithmetic is modular when using integer types, and no error is + raised on overflow. + + Examples + -------- + >>> a = np.array([1,2,3]) + >>> np.cumprod(a) # intermediate results 1, 1*2 + ... # total product 1*2*3 = 6 + array([1, 2, 6]) + >>> a = np.array([[1, 2, 3], [4, 5, 6]]) + >>> np.cumprod(a, dtype=float) # specify type of output + array([ 1., 2., 6., 24., 120., 720.]) + + The cumulative product for each column (i.e., over the rows) of `a`: + + >>> np.cumprod(a, axis=0) + array([[ 1, 2, 3], + [ 4, 10, 18]]) + + The cumulative product for each row (i.e. over the columns) of `a`: + + >>> np.cumprod(a,axis=1) + array([[ 1, 2, 6], + [ 4, 20, 120]]) + + """ + raise NotImplementedError('Waiting on interp level method') + + +def ndim(a): + """ + Return the number of dimensions of an array. + + Parameters + ---------- + a : array_like + Input array. If it is not already an ndarray, a conversion is + attempted. + + Returns + ------- + number_of_dimensions : int + The number of dimensions in `a`. Scalars are zero-dimensional. + + See Also + -------- + ndarray.ndim : equivalent method + shape : dimensions of array + ndarray.shape : dimensions of array + + Examples + -------- + >>> np.ndim([[1,2,3],[4,5,6]]) + 2 + >>> np.ndim(np.array([[1,2,3],[4,5,6]])) + 2 + >>> np.ndim(1) + 0 + + """ + if not hasattr(a, 'ndim'): + a = numpypy.array(a) + return a.ndim + + +def rank(a): + """ + Return the number of dimensions of an array. + + If `a` is not already an array, a conversion is attempted. + Scalars are zero dimensional. + + Parameters + ---------- + a : array_like + Array whose number of dimensions is desired. If `a` is not an array, + a conversion is attempted. + + Returns + ------- + number_of_dimensions : int + The number of dimensions in the array. + + See Also + -------- + ndim : equivalent function + ndarray.ndim : equivalent property + shape : dimensions of array + ndarray.shape : dimensions of array + + Notes + ----- + In the old Numeric package, `rank` was the term used for the number of + dimensions, but in Numpy `ndim` is used instead. + + Examples + -------- + >>> np.rank([1,2,3]) + 1 + >>> np.rank(np.array([[1,2,3],[4,5,6]])) + 2 + >>> np.rank(1) + 0 + + """ + if not hasattr(a, 'ndim'): + a = numpypy.array(a) + return a.ndim + + +def size(a, axis=None): + """ + Return the number of elements along a given axis. + + Parameters + ---------- + a : array_like + Input data. + axis : int, optional + Axis along which the elements are counted. By default, give + the total number of elements. + + Returns + ------- + element_count : int + Number of elements along the specified axis. 
+ + See Also + -------- + shape : dimensions of array + ndarray.shape : dimensions of array + ndarray.size : number of elements in array + + Examples + -------- + >>> a = np.array([[1,2,3],[4,5,6]]) + >>> np.size(a) + 6 + >>> np.size(a,1) + 3 + >>> np.size(a,0) + 2 + + """ + raise NotImplementedError('Waiting on interp level method') + + +def around(a, decimals=0, out=None): + """ + Evenly round to the given number of decimals. + + Parameters + ---------- + a : array_like + Input data. + decimals : int, optional + Number of decimal places to round to (default: 0). If + decimals is negative, it specifies the number of positions to + the left of the decimal point. + out : ndarray, optional + Alternative output array in which to place the result. It must have + the same shape as the expected output, but the type of the output + values will be cast if necessary. See `doc.ufuncs` (Section + "Output arguments") for details. + + Returns + ------- + rounded_array : ndarray + An array of the same type as `a`, containing the rounded values. + Unless `out` was specified, a new array is created. A reference to + the result is returned. + + The real and imaginary parts of complex numbers are rounded + separately. The result of rounding a float is a float. + + See Also + -------- + ndarray.round : equivalent method + + ceil, fix, floor, rint, trunc + + + Notes + ----- + For values exactly halfway between rounded decimal values, Numpy + rounds to the nearest even value. Thus 1.5 and 2.5 round to 2.0, + -0.5 and 0.5 round to 0.0, etc. Results may also be surprising due + to the inexact representation of decimal fractions in the IEEE + floating point standard [1]_ and errors introduced when scaling + by powers of ten. + + References + ---------- + .. [1] "Lecture Notes on the Status of IEEE 754", William Kahan, + http://www.cs.berkeley.edu/~wkahan/ieee754status/IEEE754.PDF + .. [2] "How Futile are Mindless Assessments of + Roundoff in Floating-Point Computation?", William Kahan, + http://www.cs.berkeley.edu/~wkahan/Mindless.pdf + + Examples + -------- + >>> np.around([0.37, 1.64]) + array([ 0., 2.]) + >>> np.around([0.37, 1.64], decimals=1) + array([ 0.4, 1.6]) + >>> np.around([.5, 1.5, 2.5, 3.5, 4.5]) # rounds to nearest even value + array([ 0., 2., 2., 4., 4.]) + >>> np.around([1,2,3,11], decimals=1) # ndarray of ints is returned + array([ 1, 2, 3, 11]) + >>> np.around([1,2,3,11], decimals=-1) + array([ 0, 0, 0, 10]) + + """ + raise NotImplementedError('Waiting on interp level method') + + +def round_(a, decimals=0, out=None): + """ + Round an array to the given number of decimals. + + Refer to `around` for full documentation. + + See Also + -------- + around : equivalent function + + """ + raise NotImplementedError('Waiting on interp level method') + + +def mean(a, axis=None, dtype=None, out=None): + """ + Compute the arithmetic mean along the specified axis. + + Returns the average of the array elements. The average is taken over + the flattened array by default, otherwise over the specified axis. + `float64` intermediate and return values are used for integer inputs. + + Parameters + ---------- + a : array_like + Array containing numbers whose mean is desired. If `a` is not an + array, a conversion is attempted. + axis : int, optional + Axis along which the means are computed. The default is to compute + the mean of the flattened array. + dtype : data-type, optional + Type to use in computing the mean. 
For integer inputs, the default + is `float64`; for floating point inputs, it is the same as the + input dtype. + out : ndarray, optional + Alternate output array in which to place the result. The default + is ``None``; if provided, it must have the same shape as the + expected output, but the type will be cast if necessary. + See `doc.ufuncs` for details. + + Returns + ------- + m : ndarray, see dtype parameter above + If `out=None`, returns a new array containing the mean values, + otherwise a reference to the output array is returned. + + See Also + -------- + average : Weighted average + + Notes + ----- + The arithmetic mean is the sum of the elements along the axis divided + by the number of elements. + + Note that for floating-point input, the mean is computed using the + same precision the input has. Depending on the input data, this can + cause the results to be inaccurate, especially for `float32` (see + example below). Specifying a higher-precision accumulator using the + `dtype` keyword can alleviate this issue. + + Examples + -------- + >>> a = np.array([[1, 2], [3, 4]]) + >>> np.mean(a) + 2.5 + >>> np.mean(a, axis=0) + array([ 2., 3.]) + >>> np.mean(a, axis=1) + array([ 1.5, 3.5]) + + In single precision, `mean` can be inaccurate: + + >>> a = np.zeros((2, 512*512), dtype=np.float32) + >>> a[0, :] = 1.0 + >>> a[1, :] = 0.1 + >>> np.mean(a) + 0.546875 + + Computing the mean in float64 is more accurate: + + >>> np.mean(a, dtype=np.float64) + 0.55000000074505806 + + """ + assert dtype is None + assert out is None + if not hasattr(a, "mean"): + a = numpypy.array(a) + return a.mean(axis=axis) + + +def std(a, axis=None, dtype=None, out=None, ddof=0): + """ + Compute the standard deviation along the specified axis. + + Returns the standard deviation, a measure of the spread of a distribution, + of the array elements. The standard deviation is computed for the + flattened array by default, otherwise over the specified axis. + + Parameters + ---------- + a : array_like + Calculate the standard deviation of these values. + axis : int, optional + Axis along which the standard deviation is computed. The default is + to compute the standard deviation of the flattened array. + dtype : dtype, optional + Type to use in computing the standard deviation. For arrays of + integer type the default is float64, for arrays of float types it is + the same as the array type. + out : ndarray, optional + Alternative output array in which to place the result. It must have + the same shape as the expected output but the type (of the calculated + values) will be cast if necessary. + ddof : int, optional + Means Delta Degrees of Freedom. The divisor used in calculations + is ``N - ddof``, where ``N`` represents the number of elements. + By default `ddof` is zero. + + Returns + ------- + standard_deviation : ndarray, see dtype parameter above. + If `out` is None, return a new array containing the standard deviation, + otherwise return a reference to the output array. + + See Also + -------- + var, mean + numpy.doc.ufuncs : Section "Output arguments" + + Notes + ----- + The standard deviation is the square root of the average of the squared + deviations from the mean, i.e., ``std = sqrt(mean(abs(x - x.mean())**2))``. + + The average squared deviation is normally calculated as ``x.sum() / N``, where + ``N = len(x)``. If, however, `ddof` is specified, the divisor ``N - ddof`` + is used instead. 
In standard statistical practice, ``ddof=1`` provides an + unbiased estimator of the variance of the infinite population. ``ddof=0`` + provides a maximum likelihood estimate of the variance for normally + distributed variables. The standard deviation computed in this function + is the square root of the estimated variance, so even with ``ddof=1``, it + will not be an unbiased estimate of the standard deviation per se. + + Note that, for complex numbers, `std` takes the absolute + value before squaring, so that the result is always real and nonnegative. + + For floating-point input, the *std* is computed using the same + precision the input has. Depending on the input data, this can cause + the results to be inaccurate, especially for float32 (see example below). + Specifying a higher-accuracy accumulator using the `dtype` keyword can + alleviate this issue. + + Examples + -------- + >>> a = np.array([[1, 2], [3, 4]]) + >>> np.std(a) + 1.1180339887498949 + >>> np.std(a, axis=0) + array([ 1., 1.]) + >>> np.std(a, axis=1) + array([ 0.5, 0.5]) + + In single precision, std() can be inaccurate: + + >>> a = np.zeros((2,512*512), dtype=np.float32) + >>> a[0,:] = 1.0 + >>> a[1,:] = 0.1 + >>> np.std(a) + 0.45172946707416706 + + Computing the standard deviation in float64 is more accurate: + + >>> np.std(a, dtype=np.float64) + 0.44999999925552653 + + """ + assert axis is None + assert dtype is None + assert out is None + assert ddof == 0 + if not hasattr(a, "std"): + a = numpypy.array(a) + return a.std() + + +def var(a, axis=None, dtype=None, out=None, ddof=0): + """ + Compute the variance along the specified axis. + + Returns the variance of the array elements, a measure of the spread of a + distribution. The variance is computed for the flattened array by + default, otherwise over the specified axis. + + Parameters + ---------- + a : array_like + Array containing numbers whose variance is desired. If `a` is not an + array, a conversion is attempted. + axis : int, optional + Axis along which the variance is computed. The default is to compute + the variance of the flattened array. + dtype : data-type, optional + Type to use in computing the variance. For arrays of integer type + the default is `float32`; for arrays of float types it is the same as + the array type. + out : ndarray, optional + Alternate output array in which to place the result. It must have + the same shape as the expected output, but the type is cast if + necessary. + ddof : int, optional + "Delta Degrees of Freedom": the divisor used in the calculation is + ``N - ddof``, where ``N`` represents the number of elements. By + default `ddof` is zero. + + Returns + ------- + variance : ndarray, see dtype parameter above + If ``out=None``, returns a new array containing the variance; + otherwise, a reference to the output array is returned. + + See Also + -------- + std : Standard deviation + mean : Average + numpy.doc.ufuncs : Section "Output arguments" + + Notes + ----- + The variance is the average of the squared deviations from the mean, + i.e., ``var = mean(abs(x - x.mean())**2)``. + + The mean is normally calculated as ``x.sum() / N``, where ``N = len(x)``. + If, however, `ddof` is specified, the divisor ``N - ddof`` is used + instead. In standard statistical practice, ``ddof=1`` provides an + unbiased estimator of the variance of a hypothetical infinite population. + ``ddof=0`` provides a maximum likelihood estimate of the variance for + normally distributed variables. 
+ + Note that for complex numbers, the absolute value is taken before + squaring, so that the result is always real and nonnegative. + + For floating-point input, the variance is computed using the same + precision the input has. Depending on the input data, this can cause + the results to be inaccurate, especially for `float32` (see example + below). Specifying a higher-accuracy accumulator using the ``dtype`` + keyword can alleviate this issue. + + Examples + -------- + >>> a = np.array([[1,2],[3,4]]) + >>> np.var(a) + 1.25 + >>> np.var(a,0) + array([ 1., 1.]) + >>> np.var(a,1) + array([ 0.25, 0.25]) + + In single precision, var() can be inaccurate: + + >>> a = np.zeros((2,512*512), dtype=np.float32) + >>> a[0,:] = 1.0 + >>> a[1,:] = 0.1 + >>> np.var(a) + 0.20405951142311096 + + Computing the standard deviation in float64 is more accurate: + + >>> np.var(a, dtype=np.float64) + 0.20249999932997387 + >>> ((1-0.55)**2 + (0.1-0.55)**2)/2 + 0.20250000000000001 + + """ + assert axis is None + assert dtype is None + assert out is None + assert ddof == 0 + if not hasattr(a, "var"): + a = numpypy.array(a) + return a.var() diff --git a/pypy/annotation/description.py b/pypy/annotation/description.py --- a/pypy/annotation/description.py +++ b/pypy/annotation/description.py @@ -257,7 +257,8 @@ try: inputcells = args.match_signature(signature, defs_s) except ArgErr, e: - raise TypeError, "signature mismatch: %s" % e.getmsg(self.name) + raise TypeError("signature mismatch: %s() %s" % + (self.name, e.getmsg())) return inputcells def specialize(self, inputcells, op=None): diff --git a/pypy/bin/py.py b/pypy/bin/py.py --- a/pypy/bin/py.py +++ b/pypy/bin/py.py @@ -76,6 +76,8 @@ config.objspace.suggest(allworkingmodules=False) if config.objspace.allworkingmodules: pypyoption.enable_allworkingmodules(config) + if config.objspace.usemodules._continuation: + config.translation.continuation = True if config.objspace.usemodules.thread: config.translation.thread = True diff --git a/pypy/config/pypyoption.py b/pypy/config/pypyoption.py --- a/pypy/config/pypyoption.py +++ b/pypy/config/pypyoption.py @@ -340,7 +340,7 @@ requires=[("objspace.std.builtinshortcut", True)]), BoolOption("withidentitydict", "track types that override __hash__, __eq__ or __cmp__ and use a special dict strategy for those which do not", - default=True), + default=False), ]), ]) @@ -370,6 +370,7 @@ config.objspace.std.suggest(getattributeshortcut=True) config.objspace.std.suggest(newshortcut=True) config.objspace.std.suggest(withspecialisedtuple=True) + config.objspace.std.suggest(withidentitydict=True) #if not IS_64_BITS: # config.objspace.std.suggest(withsmalllong=True) diff --git a/pypy/doc/Makefile b/pypy/doc/Makefile --- a/pypy/doc/Makefile +++ b/pypy/doc/Makefile @@ -12,7 +12,7 @@ PAPEROPT_letter = -D latex_paper_size=letter ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) . 
-.PHONY: help clean html dirhtml pickle json htmlhelp qthelp latex changes linkcheck doctest +.PHONY: help clean html dirhtml pickle json htmlhelp qthelp latex man changes linkcheck doctest help: @echo "Please use \`make <target>' where <target> is one of" @@ -23,6 +23,7 @@ @echo " htmlhelp to make HTML files and a HTML help project" @echo " qthelp to make HTML files and a qthelp project" @echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter" + @echo " man to make manual pages" @echo " changes to make an overview of all changed/added/deprecated items" @echo " linkcheck to check all external links for integrity" @echo " doctest to run all doctests embedded in the documentation (if enabled)" @@ -79,6 +80,11 @@ @echo "Run \`make all-pdf' or \`make all-ps' in that directory to" \ "run these through (pdf)latex." +man: + $(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man + @echo + @echo "Build finished. The manual pages are in $(BUILDDIR)/man" + changes: python config/generate.py $(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes diff --git a/pypy/doc/coding-guide.rst b/pypy/doc/coding-guide.rst --- a/pypy/doc/coding-guide.rst +++ b/pypy/doc/coding-guide.rst @@ -175,15 +175,15 @@ RPython ================= -RPython Definition, not ------------------------ +RPython Definition +------------------ -The list and exact details of the "RPython" restrictions are a somewhat -evolving topic. In particular, we have no formal language definition -as we find it more practical to discuss and evolve the set of -restrictions while working on the whole program analysis. If you -have any questions about the restrictions below then please feel -free to mail us at pypy-dev at codespeak net. +RPython is a restricted subset of Python that is amenable to static analysis. +Although there are additions to the language and some things might surprisingly +work, this is a rough list of restrictions that should be considered. Note +that there are tons of special cased restrictions that you'll encounter +as you go. The exact definition is "RPython is everything that our translation +toolchain can accept" :) .. _`wrapped object`: coding-guide.html#wrapping-rules @@ -198,7 +198,7 @@ contain both a string and a int must be avoided. It is allowed to mix None (basically with the role of a null pointer) with many other types: `wrapped objects`, class instances, lists, dicts, strings, etc. - but *not* with int and floats. + but *not* with int, floats or tuples. **constants** @@ -209,9 +209,12 @@ have this restriction, so if you need mutable global state, store it in the attributes of some prebuilt singleton instance. + + **control structures** - all allowed but yield, ``for`` loops restricted to builtin types + all allowed, ``for`` loops restricted to builtin types, generators + very restricted. **range** @@ -226,7 +229,8 @@ **generators** - generators are not supported. + generators are supported, but their exact scope is very limited. you can't + merge two different generator in one control point. **exceptions** @@ -245,22 +249,27 @@ **strings** - a lot of, but not all string methods are supported. Indexes can be + a lot of, but not all string methods are supported and those that are + supported, not necesarilly accept all arguments. Indexes can be negative. In case they are not, then you get slightly more efficient code if the translator can prove that they are non-negative. When slicing a string it is necessary to prove that the slice start and - stop indexes are non-negative. + stop indexes are non-negative.
There is no implicit str-to-unicode cast + anywhere. **tuples** no variable-length tuples; use them to store or return pairs or n-tuples of - values. Each combination of types for elements and length constitute a separate - and not mixable type. + values. Each combination of types for elements and length constitute + a separate and not mixable type. **lists** lists are used as an allocated array. Lists are over-allocated, so list.append() - is reasonably fast. Negative or out-of-bound indexes are only allowed for the + is reasonably fast. However, if you use a fixed-size list, the code + is more efficient. Annotator can figure out most of the time that your + list is fixed-size, even when you use list comprehension. + Negative or out-of-bound indexes are only allowed for the most common operations, as follows: - *indexing*: @@ -287,16 +296,14 @@ **dicts** - dicts with a unique key type only, provided it is hashable. - String keys have been the only allowed key types for a while, but this was generalized. - After some re-optimization, - the implementation could safely decide that all string dict keys should be interned. + dicts with a unique key type only, provided it is hashable. Custom + hash functions and custom equality will not be honored. + Use ``pypy.rlib.objectmodel.r_dict`` for custom hash functions. **list comprehensions** - may be used to create allocated, initialized arrays. - After list over-allocation was introduced, there is no longer any restriction. + May be used to create allocated, initialized arrays. **functions** @@ -334,9 +341,8 @@ **objects** - in PyPy, wrapped objects are borrowed from the object space. Just like - in CPython, code that needs e.g. a dictionary can use a wrapped dict - and the object space operations on it. + Normal rules apply. Special methods are not honoured, except ``__init__`` and + ``__del__``. This layout makes the number of types to take care about quite limited. diff --git a/pypy/doc/conf.py b/pypy/doc/conf.py --- a/pypy/doc/conf.py +++ b/pypy/doc/conf.py @@ -197,3 +197,10 @@ # Example configuration for intersphinx: refer to the Python standard library. intersphinx_mapping = {'http://docs.python.org/': None} +# -- Options for manpage output------------------------------------------------- + +man_pages = [ + ('man/pypy.1', 'pypy', + u'fast, compliant alternative implementation of the Python language', + u'The PyPy Project', 1) +] diff --git a/pypy/doc/extradoc.rst b/pypy/doc/extradoc.rst --- a/pypy/doc/extradoc.rst +++ b/pypy/doc/extradoc.rst @@ -8,6 +8,9 @@ *Articles about PyPy published so far, most recent first:* (bibtex_ file) +* `Runtime Feedback in a Meta-Tracing JIT for Efficient Dynamic Languages`_, + C.F. Bolz, A. Cuni, M. Fijalkowski, M. Leuschel, S. Pedroni, A. Rigo + * `Allocation Removal by Partial Evaluation in a Tracing JIT`_, C.F. Bolz, A. Cuni, M. Fijalkowski, M. Leuschel, S. Pedroni, A. Rigo @@ -50,6 +53,9 @@ *Other research using PyPy (as far as we know it):* +* `Hardware Transactional Memory Support for Lightweight Dynamic Language Evolution`_, + N. Riley and C. Zilles + * `PyGirl: Generating Whole-System VMs from High-Level Prototypes using PyPy`_, C. Bruni and T. Verwaest @@ -65,6 +71,7 @@ .. _bibtex: https://bitbucket.org/pypy/extradoc/raw/tip/talk/bibtex.bib +.. _`Runtime Feedback in a Meta-Tracing JIT for Efficient Dynamic Languages`: https://bitbucket.org/pypy/extradoc/raw/extradoc/talk/icooolps2011/jit-hints.pdf .. 
_`Allocation Removal by Partial Evaluation in a Tracing JIT`: http://codespeak.net/svn/pypy/extradoc/talk/pepm2011/bolz-allocation-removal.pdf .. _`Towards a Jitting VM for Prolog Execution`: http://www.stups.uni-duesseldorf.de/publications/bolz-prolog-jit.pdf .. _`High performance implementation of Python for CLI/.NET with JIT compiler generation for dynamic languages`: http://buildbot.pypy.org/misc/antocuni-thesis.pdf @@ -74,6 +81,7 @@ .. _`Automatic JIT Compiler Generation with Runtime Partial Evaluation`: http://www.stups.uni-duesseldorf.de/thesis/final-master.pdf .. _`RPython: A Step towards Reconciling Dynamically and Statically Typed OO Languages`: http://www.disi.unige.it/person/AnconaD/papers/Recent_abstracts.html#AACM-DLS07 .. _`EU Reports`: index-report.html +.. _`Hardware Transactional Memory Support for Lightweight Dynamic Language Evolution`: http://sabi.net/nriley/pubs/dls6-riley.pdf .. _`PyGirl: Generating Whole-System VMs from High-Level Prototypes using PyPy`: http://scg.unibe.ch/archive/papers/Brun09cPyGirl.pdf .. _`Representation-Based Just-in-Time Specialization and the Psyco Prototype for Python`: http://psyco.sourceforge.net/psyco-pepm-a.ps.gz .. _`Back to the Future in One Week -- Implementing a Smalltalk VM in PyPy`: http://dx.doi.org/10.1007/978-3-540-89275-5_7 diff --git a/pypy/doc/getting-started.rst b/pypy/doc/getting-started.rst --- a/pypy/doc/getting-started.rst +++ b/pypy/doc/getting-started.rst @@ -53,11 +53,11 @@ PyPy is ready to be executed as soon as you unpack the tarball or the zip file, with no need to install it in any specific location:: - $ tar xf pypy-1.6-linux.tar.bz2 + $ tar xf pypy-1.7-linux.tar.bz2 - $ ./pypy-1.6/bin/pypy + $ ./pypy-1.7/bin/pypy Python 2.7.1 (?, Apr 27 2011, 12:44:21) - [PyPy 1.6.0 with GCC 4.4.3] on linux2 + [PyPy 1.7.0 with GCC 4.4.3] on linux2 Type "help", "copyright", "credits" or "license" for more information. And now for something completely different: ``implementing LOGO in LOGO: "turtles all the way down"'' @@ -75,14 +75,14 @@ $ curl -O https://raw.github.com/pypa/pip/master/contrib/get-pip.py - $ ./pypy-1.6/bin/pypy distribute_setup.py + $ ./pypy-1.7/bin/pypy distribute_setup.py - $ ./pypy-1.6/bin/pypy get-pip.py + $ ./pypy-1.7/bin/pypy get-pip.py - $ ./pypy-1.6/bin/pip install pygments # for example + $ ./pypy-1.7/bin/pip install pygments # for example -3rd party libraries will be installed in ``pypy-1.6/site-packages``, and -the scripts in ``pypy-1.6/bin``. +3rd party libraries will be installed in ``pypy-1.7/site-packages``, and +the scripts in ``pypy-1.7/bin``. Installing using virtualenv --------------------------- diff --git a/pypy/doc/man/pypy.1.rst b/pypy/doc/man/pypy.1.rst new file mode 100644 --- /dev/null +++ b/pypy/doc/man/pypy.1.rst @@ -0,0 +1,90 @@ +====== + pypy +====== + +SYNOPSIS +======== + +``pypy`` [*options*] +[``-c`` *cmd*\ \|\ ``-m`` *mod*\ \|\ *file.py*\ \|\ ``-``\ ] +[*arg*\ ...] + +OPTIONS +======= + +-i + Inspect interactively after running script. + +-O + Dummy optimization flag for compatibility with C Python. + +-c *cmd* + Program passed in as CMD (terminates option list). + +-S + Do not ``import site`` on initialization. + +-u + Unbuffered binary ``stdout`` and ``stderr``. + +-h, --help + Show a help message and exit. + +-m *mod* + Library module to be run as a script (terminates option list). + +-W *arg* + Warning control (*arg* is *action*:*message*:*category*:*module*:*lineno*). + +-E + Ignore environment variables (such as ``PYTHONPATH``). + +--version + Print the PyPy version. 
+ +--info + Print translation information about this PyPy executable. + +--jit *arg* + Low level JIT parameters. Format is + *arg*\ ``=``\ *value*\ [``,``\ *arg*\ ``=``\ *value*\ ...] + + ``off`` + Disable the JIT. + + ``threshold=``\ *value* + Number of times a loop has to run for it to become hot. + + ``function_threshold=``\ *value* + Number of times a function must run for it to become traced from + start. + + ``inlining=``\ *value* + Inline python functions or not (``1``/``0``). + + ``loop_longevity=``\ *value* + A parameter controlling how long loops will be kept before being + freed, an estimate. + + ``max_retrace_guards=``\ *value* + Number of extra guards a retrace can cause. + + ``retrace_limit=``\ *value* + How many times we can try retracing before giving up. + + ``trace_eagerness=``\ *value* + Number of times a guard has to fail before we start compiling a + bridge. + + ``trace_limit=``\ *value* + Number of recorded operations before we abort tracing with + ``ABORT_TRACE_TOO_LONG``. + + ``enable_opts=``\ *value* + Optimizations to enabled or ``all``. + Warning, this option is dangerous, and should be avoided. + +SEE ALSO +======== + +**python**\ (1) diff --git a/pypy/doc/tool/makecontributor.py b/pypy/doc/tool/makecontributor.py deleted file mode 100644 --- a/pypy/doc/tool/makecontributor.py +++ /dev/null @@ -1,47 +0,0 @@ -""" - -generates a contributor list - -""" -import py - -# this file is useless, use the following commandline instead: -# hg churn -c -t "{author}" | sed -e 's/ <.*//' - -try: - path = py.std.sys.argv[1] -except IndexError: - print "usage: %s ROOTPATH" %(py.std.sys.argv[0]) - raise SystemExit, 1 - -d = {} - -for logentry in py.path.svnwc(path).log(): - a = logentry.author - if a in d: - d[a] += 1 - else: - d[a] = 1 - -items = d.items() -items.sort(lambda x,y: -cmp(x[1], y[1])) - -import uconf # http://codespeak.net/svn/uconf/dist/uconf - -# Authors that don't want to be listed -excluded = set("anna gintas ignas".split()) -cutoff = 5 # cutoff for authors in the LICENSE file -mark = False -for author, count in items: - if author in excluded: - continue - user = uconf.system.User(author) - try: - realname = user.realname.strip() - except KeyError: - realname = author - if not mark and count < cutoff: - mark = True - print '-'*60 - print " ", realname - #print count, " ", author diff --git a/pypy/doc/translation.rst b/pypy/doc/translation.rst --- a/pypy/doc/translation.rst +++ b/pypy/doc/translation.rst @@ -155,7 +155,7 @@ function. The two input variables are the exception class and the exception value, respectively. (No other block will actually link to the exceptblock if the function does not - explicitely raise exceptions.) + explicitly raise exceptions.) ``Block`` @@ -325,7 +325,7 @@ Mutable objects need special treatment during annotation, because the annotation of contained values needs to be possibly updated to account for mutation operations, and consequently the annotation information -reflown through the relevant parts of the flow the graphs. +reflown through the relevant parts of the flow graphs. * ``SomeList`` stands for a list of homogeneous type (i.e. all the elements of the list are represented by a single common ``SomeXxx`` @@ -503,8 +503,8 @@ Since RPython is a garbage collected language there is a lot of heap memory allocation going on all the time, which would either not occur at all in a more -traditional explicitely managed language or results in an object which dies at -a time known in advance and can thus be explicitely deallocated. 
For example a +traditional explicitly managed language or results in an object which dies at +a time known in advance and can thus be explicitly deallocated. For example a loop of the following form:: for i in range(n): @@ -696,7 +696,7 @@ So far it is the second most mature high level backend after GenCLI: it still can't translate the full Standard Interpreter, but after the -Leysin sprint we were able to compile and run the rpytstone and +Leysin sprint we were able to compile and run the rpystone and richards benchmarks. GenJVM is almost entirely the work of Niko Matsakis, who worked on it diff --git a/pypy/interpreter/argument.py b/pypy/interpreter/argument.py --- a/pypy/interpreter/argument.py +++ b/pypy/interpreter/argument.py @@ -428,8 +428,8 @@ return self._match_signature(w_firstarg, scope_w, signature, defaults_w, 0) except ArgErr, e: - raise OperationError(self.space.w_TypeError, - self.space.wrap(e.getmsg(fnname))) + raise operationerrfmt(self.space.w_TypeError, + "%s() %s", fnname, e.getmsg()) def _parse(self, w_firstarg, signature, defaults_w, blindargs=0): """Parse args and kwargs according to the signature of a code object, @@ -450,8 +450,8 @@ try: return self._parse(w_firstarg, signature, defaults_w, blindargs) except ArgErr, e: - raise OperationError(self.space.w_TypeError, - self.space.wrap(e.getmsg(fnname))) + raise operationerrfmt(self.space.w_TypeError, + "%s() %s", fnname, e.getmsg()) @staticmethod def frompacked(space, w_args=None, w_kwds=None): @@ -626,7 +626,7 @@ class ArgErr(Exception): - def getmsg(self, fnname): + def getmsg(self): raise NotImplementedError class ArgErrCount(ArgErr): @@ -642,11 +642,10 @@ self.num_args = got_nargs self.num_kwds = nkwds - def getmsg(self, fnname): + def getmsg(self): n = self.expected_nargs if n == 0: - msg = "%s() takes no arguments (%d given)" % ( - fnname, + msg = "takes no arguments (%d given)" % ( self.num_args + self.num_kwds) else: defcount = self.num_defaults @@ -672,8 +671,7 @@ msg2 = " non-keyword" else: msg2 = "" - msg = "%s() takes %s %d%s argument%s (%d given)" % ( - fnname, + msg = "takes %s %d%s argument%s (%d given)" % ( msg1, n, msg2, @@ -686,9 +684,8 @@ def __init__(self, argname): self.argname = argname - def getmsg(self, fnname): - msg = "%s() got multiple values for keyword argument '%s'" % ( - fnname, + def getmsg(self): + msg = "got multiple values for keyword argument '%s'" % ( self.argname) return msg @@ -722,13 +719,11 @@ break self.kwd_name = name - def getmsg(self, fnname): + def getmsg(self): if self.num_kwds == 1: - msg = "%s() got an unexpected keyword argument '%s'" % ( - fnname, + msg = "got an unexpected keyword argument '%s'" % ( self.kwd_name) else: - msg = "%s() got %d unexpected keyword arguments" % ( - fnname, + msg = "got %d unexpected keyword arguments" % ( self.num_kwds) return msg diff --git a/pypy/interpreter/baseobjspace.py b/pypy/interpreter/baseobjspace.py --- a/pypy/interpreter/baseobjspace.py +++ b/pypy/interpreter/baseobjspace.py @@ -1591,12 +1591,15 @@ 'ArithmeticError', 'AssertionError', 'AttributeError', + 'BaseException', + 'DeprecationWarning', 'EOFError', 'EnvironmentError', 'Exception', 'FloatingPointError', 'IOError', 'ImportError', + 'ImportWarning', 'IndentationError', 'IndexError', 'KeyError', @@ -1617,7 +1620,10 @@ 'TabError', 'TypeError', 'UnboundLocalError', + 'UnicodeDecodeError', 'UnicodeError', + 'UnicodeEncodeError', + 'UnicodeTranslateError', 'ValueError', 'ZeroDivisionError', 'UnicodeEncodeError', diff --git a/pypy/interpreter/eval.py b/pypy/interpreter/eval.py 
--- a/pypy/interpreter/eval.py +++ b/pypy/interpreter/eval.py @@ -2,7 +2,6 @@ This module defines the abstract base classes that support execution: Code and Frame. """ -from pypy.rlib import jit from pypy.interpreter.error import OperationError from pypy.interpreter.baseobjspace import Wrappable diff --git a/pypy/interpreter/executioncontext.py b/pypy/interpreter/executioncontext.py --- a/pypy/interpreter/executioncontext.py +++ b/pypy/interpreter/executioncontext.py @@ -445,6 +445,7 @@ AsyncAction.__init__(self, space) self.dying_objects = [] self.finalizers_lock_count = 0 + self.enabled_at_app_level = True def register_callback(self, w_obj, callback, descrname): self.dying_objects.append((w_obj, callback, descrname)) diff --git a/pypy/interpreter/generator.py b/pypy/interpreter/generator.py --- a/pypy/interpreter/generator.py +++ b/pypy/interpreter/generator.py @@ -162,7 +162,8 @@ # generate 2 versions of the function and 2 jit drivers. def _create_unpack_into(): jitdriver = jit.JitDriver(greens=['pycode'], - reds=['self', 'frame', 'results']) + reds=['self', 'frame', 'results'], + name='unpack_into') def unpack_into(self, results): """This is a hack for performance: runs the generator and collects all produced items in a list.""" @@ -196,4 +197,4 @@ self.frame = None return unpack_into unpack_into = _create_unpack_into() - unpack_into_w = _create_unpack_into() \ No newline at end of file + unpack_into_w = _create_unpack_into() diff --git a/pypy/interpreter/test/test_argument.py b/pypy/interpreter/test/test_argument.py --- a/pypy/interpreter/test/test_argument.py +++ b/pypy/interpreter/test/test_argument.py @@ -393,8 +393,8 @@ class FakeArgErr(ArgErr): - def getmsg(self, fname): - return "msg "+fname + def getmsg(self): + return "msg" def _match_signature(*args): raise FakeArgErr() @@ -404,7 +404,7 @@ excinfo = py.test.raises(OperationError, args.parse_obj, "obj", "foo", Signature(["a", "b"], None, None)) assert excinfo.value.w_type is TypeError - assert excinfo.value._w_value == "msg foo" + assert excinfo.value.get_w_value(space) == "foo() msg" def test_args_parsing_into_scope(self): @@ -448,8 +448,8 @@ class FakeArgErr(ArgErr): - def getmsg(self, fname): - return "msg "+fname + def getmsg(self): + return "msg" def _match_signature(*args): raise FakeArgErr() @@ -460,7 +460,7 @@ "obj", [None, None], "foo", Signature(["a", "b"], None, None)) assert excinfo.value.w_type is TypeError - assert excinfo.value._w_value == "msg foo" + assert excinfo.value.get_w_value(space) == "foo() msg" def test_topacked_frompacked(self): space = DummySpace() @@ -493,35 +493,35 @@ # got_nargs, nkwds, expected_nargs, has_vararg, has_kwarg, # defaults_w, missing_args err = ArgErrCount(1, 0, 0, False, False, None, 0) - s = err.getmsg('foo') - assert s == "foo() takes no arguments (1 given)" + s = err.getmsg() + assert s == "takes no arguments (1 given)" err = ArgErrCount(0, 0, 1, False, False, [], 1) - s = err.getmsg('foo') - assert s == "foo() takes exactly 1 argument (0 given)" + s = err.getmsg() + assert s == "takes exactly 1 argument (0 given)" err = ArgErrCount(3, 0, 2, False, False, [], 0) - s = err.getmsg('foo') - assert s == "foo() takes exactly 2 arguments (3 given)" + s = err.getmsg() + assert s == "takes exactly 2 arguments (3 given)" err = ArgErrCount(3, 0, 2, False, False, ['a'], 0) - s = err.getmsg('foo') - assert s == "foo() takes at most 2 arguments (3 given)" + s = err.getmsg() + assert s == "takes at most 2 arguments (3 given)" err = ArgErrCount(1, 0, 2, True, False, [], 1) - s = 
err.getmsg('foo') - assert s == "foo() takes at least 2 arguments (1 given)" + s = err.getmsg() + assert s == "takes at least 2 arguments (1 given)" err = ArgErrCount(0, 1, 2, True, False, ['a'], 1) - s = err.getmsg('foo') - assert s == "foo() takes at least 1 non-keyword argument (0 given)" + s = err.getmsg() + assert s == "takes at least 1 non-keyword argument (0 given)" err = ArgErrCount(2, 1, 1, False, True, [], 0) - s = err.getmsg('foo') - assert s == "foo() takes exactly 1 non-keyword argument (2 given)" + s = err.getmsg() + assert s == "takes exactly 1 non-keyword argument (2 given)" err = ArgErrCount(0, 1, 1, False, True, [], 1) - s = err.getmsg('foo') - assert s == "foo() takes exactly 1 non-keyword argument (0 given)" + s = err.getmsg() + assert s == "takes exactly 1 non-keyword argument (0 given)" err = ArgErrCount(0, 1, 1, True, True, [], 1) - s = err.getmsg('foo') - assert s == "foo() takes at least 1 non-keyword argument (0 given)" + s = err.getmsg() + assert s == "takes at least 1 non-keyword argument (0 given)" err = ArgErrCount(2, 1, 1, False, True, ['a'], 0) - s = err.getmsg('foo') - assert s == "foo() takes at most 1 non-keyword argument (2 given)" + s = err.getmsg() + assert s == "takes at most 1 non-keyword argument (2 given)" def test_bad_type_for_star(self): space = self.space @@ -543,12 +543,12 @@ def test_unknown_keywords(self): space = DummySpace() err = ArgErrUnknownKwds(space, 1, ['a', 'b'], [True, False], None) - s = err.getmsg('foo') - assert s == "foo() got an unexpected keyword argument 'b'" + s = err.getmsg() + assert s == "got an unexpected keyword argument 'b'" err = ArgErrUnknownKwds(space, 2, ['a', 'b', 'c'], [True, False, False], None) - s = err.getmsg('foo') - assert s == "foo() got 2 unexpected keyword arguments" + s = err.getmsg() + assert s == "got 2 unexpected keyword arguments" def test_unknown_unicode_keyword(self): class DummySpaceUnicode(DummySpace): @@ -558,13 +558,13 @@ err = ArgErrUnknownKwds(space, 1, ['a', None, 'b', 'c'], [True, False, True, True], [unichr(0x1234), u'b', u'c']) - s = err.getmsg('foo') - assert s == "foo() got an unexpected keyword argument '\xe1\x88\xb4'" + s = err.getmsg() + assert s == "got an unexpected keyword argument '\xe1\x88\xb4'" def test_multiple_values(self): err = ArgErrMultipleValues('bla') - s = err.getmsg('foo') - assert s == "foo() got multiple values for keyword argument 'bla'" + s = err.getmsg() + assert s == "got multiple values for keyword argument 'bla'" class AppTestArgument: def test_error_message(self): diff --git a/pypy/jit/backend/llsupport/test/test_runner.py b/pypy/jit/backend/llsupport/test/test_runner.py --- a/pypy/jit/backend/llsupport/test/test_runner.py +++ b/pypy/jit/backend/llsupport/test/test_runner.py @@ -8,6 +8,12 @@ class MyLLCPU(AbstractLLCPU): supports_floats = True + + class assembler(object): + @staticmethod + def set_debug(flag): + pass + def compile_loop(self, inputargs, operations, looptoken): py.test.skip("llsupport test: cannot compile operations") diff --git a/pypy/jit/backend/test/runner_test.py b/pypy/jit/backend/test/runner_test.py --- a/pypy/jit/backend/test/runner_test.py +++ b/pypy/jit/backend/test/runner_test.py @@ -17,6 +17,7 @@ from pypy.rpython.llinterp import LLException from pypy.jit.codewriter import heaptracker, longlong from pypy.rlib.rarithmetic import intmask +from pypy.jit.backend.detect_cpu import autodetect_main_model_and_size def boxfloat(x): return BoxFloat(longlong.getfloatstorage(x)) @@ -27,6 +28,9 @@ class Runner(object): + add_loop_instruction = 
['overload for a specific cpu'] + bridge_loop_instruction = ['overload for a specific cpu'] + def execute_operation(self, opname, valueboxes, result_type, descr=None): inputargs, operations = self._get_single_operation_list(opname, result_type, @@ -547,6 +551,28 @@ res = self.execute_operation(rop.CALL, [funcbox] + map(BoxInt, args), 'int', descr=calldescr) assert res.value == func(*args) + def test_call_box_func(self): + def a(a1, a2): + return a1 + a2 + def b(b1, b2): + return b1 * b2 + + arg1 = 40 + arg2 = 2 + for f in [a, b]: + TP = lltype.Signed + FPTR = self.Ptr(self.FuncType([TP, TP], TP)) + func_ptr = llhelper(FPTR, f) + FUNC = deref(FPTR) + funcconst = self.get_funcbox(self.cpu, func_ptr) + funcbox = funcconst.clonebox() + calldescr = self.cpu.calldescrof(FUNC, FUNC.ARGS, FUNC.RESULT, + EffectInfo.MOST_GENERAL) + res = self.execute_operation(rop.CALL, + [funcbox, BoxInt(arg1), BoxInt(arg2)], + 'int', descr=calldescr) + assert res.getint() == f(arg1, arg2) + def test_call_stack_alignment(self): # test stack alignment issues, notably for Mac OS/X. # also test the ordering of the arguments. @@ -1868,6 +1894,7 @@ values.append(descr) values.append(self.cpu.get_latest_value_int(0)) values.append(self.cpu.get_latest_value_int(1)) + values.append(token) FUNC = self.FuncType([lltype.Signed, lltype.Signed], lltype.Void) func_ptr = llhelper(lltype.Ptr(FUNC), maybe_force) @@ -1898,7 +1925,8 @@ assert fail.identifier == 1 assert self.cpu.get_latest_value_int(0) == 1 assert self.cpu.get_latest_value_int(1) == 10 - assert values == [faildescr, 1, 10] + token = self.cpu.get_latest_force_token() + assert values == [faildescr, 1, 10, token] def test_force_operations_returning_int(self): values = [] @@ -1907,6 +1935,7 @@ self.cpu.force(token) values.append(self.cpu.get_latest_value_int(0)) values.append(self.cpu.get_latest_value_int(2)) + values.append(token) return 42 FUNC = self.FuncType([lltype.Signed, lltype.Signed], lltype.Signed) @@ -1940,7 +1969,8 @@ assert self.cpu.get_latest_value_int(0) == 1 assert self.cpu.get_latest_value_int(1) == 42 assert self.cpu.get_latest_value_int(2) == 10 - assert values == [1, 10] + token = self.cpu.get_latest_force_token() + assert values == [1, 10, token] def test_force_operations_returning_float(self): values = [] @@ -1949,6 +1979,7 @@ self.cpu.force(token) values.append(self.cpu.get_latest_value_int(0)) values.append(self.cpu.get_latest_value_int(2)) + values.append(token) return 42.5 FUNC = self.FuncType([lltype.Signed, lltype.Signed], lltype.Float) @@ -1984,7 +2015,8 @@ x = self.cpu.get_latest_value_float(1) assert longlong.getrealfloat(x) == 42.5 assert self.cpu.get_latest_value_int(2) == 10 - assert values == [1, 10] + token = self.cpu.get_latest_force_token() + assert values == [1, 10, token] def test_call_to_c_function(self): from pypy.rlib.libffi import CDLL, types, ArgChain, FUNCFLAG_CDECL @@ -2974,6 +3006,56 @@ res = self.cpu.get_latest_value_int(0) assert res == -10 + def test_compile_asmlen(self): + from pypy.jit.backend.llsupport.llmodel import AbstractLLCPU + if not isinstance(self.cpu, AbstractLLCPU): + py.test.skip("pointless test on non-asm") + from pypy.jit.backend.x86.tool.viewcode import machine_code_dump + import ctypes + ops = """ + [i2] + i0 = same_as(i2) # but forced to be in a register + label(i0, descr=1) + i1 = int_add(i0, i0) + guard_true(i1, descr=faildesr) [i1] + jump(i1, descr=1) + """ + faildescr = BasicFailDescr(2) + loop = parse(ops, self.cpu, namespace=locals()) + faildescr = loop.operations[-2].getdescr() + jumpdescr = 
loop.operations[-1].getdescr() + bridge_ops = """ + [i0] + jump(i0, descr=jumpdescr) + """ + bridge = parse(bridge_ops, self.cpu, namespace=locals()) + looptoken = JitCellToken() + self.cpu.assembler.set_debug(False) + info = self.cpu.compile_loop(loop.inputargs, loop.operations, looptoken) + bridge_info = self.cpu.compile_bridge(faildescr, bridge.inputargs, + bridge.operations, + looptoken) + self.cpu.assembler.set_debug(True) # always on untranslated + assert info.asmlen != 0 + cpuname = autodetect_main_model_and_size() + # XXX we have to check the precise assembler, otherwise + # we don't quite know if borders are correct + + def checkops(mc, ops): + assert len(mc) == len(ops) + for i in range(len(mc)): + assert mc[i].split("\t")[-1].startswith(ops[i]) + + data = ctypes.string_at(info.asmaddr, info.asmlen) + mc = list(machine_code_dump(data, info.asmaddr, cpuname)) + lines = [line for line in mc if line.count('\t') == 2] + checkops(lines, self.add_loop_instructions) + data = ctypes.string_at(bridge_info.asmaddr, bridge_info.asmlen) + mc = list(machine_code_dump(data, bridge_info.asmaddr, cpuname)) + lines = [line for line in mc if line.count('\t') == 2] + checkops(lines, self.bridge_loop_instructions) + + def test_compile_bridge_with_target(self): # This test creates a loopy piece of code in a bridge, and builds another # unrelated loop that ends in a jump directly to this loopy bit of code. diff --git a/pypy/jit/backend/x86/assembler.py b/pypy/jit/backend/x86/assembler.py --- a/pypy/jit/backend/x86/assembler.py +++ b/pypy/jit/backend/x86/assembler.py @@ -7,6 +7,7 @@ from pypy.rpython.lltypesystem import lltype, rffi, rstr, llmemory from pypy.rpython.lltypesystem.lloperation import llop from pypy.rpython.annlowlevel import llhelper +from pypy.rlib.jit import AsmInfo from pypy.jit.backend.model import CompiledLoopToken from pypy.jit.backend.x86.regalloc import (RegAlloc, get_ebp_ofs, _get_scale, gpr_reg_mgr_cls, _valid_addressing_size) @@ -411,6 +412,7 @@ '''adds the following attributes to looptoken: _x86_function_addr (address of the generated func, as an int) _x86_loop_code (debug: addr of the start of the ResOps) + _x86_fullsize (debug: full size including failure) _x86_debug_checksum ''' # XXX this function is too longish and contains some code @@ -476,7 +478,8 @@ name = "Loop # %s: %s" % (looptoken.number, loopname) self.cpu.profile_agent.native_code_written(name, rawstart, full_size) - return ops_offset + return AsmInfo(ops_offset, rawstart + looppos, + size_excluding_failure_stuff - looppos) def assemble_bridge(self, faildescr, inputargs, operations, original_loop_token, log): @@ -485,12 +488,7 @@ assert len(set(inputargs)) == len(inputargs) descr_number = self.cpu.get_fail_descr_number(faildescr) - try: - failure_recovery = self._find_failure_recovery_bytecode(faildescr) - except ValueError: - debug_print("Bridge out of guard", descr_number, - "was already compiled!") - return + failure_recovery = self._find_failure_recovery_bytecode(faildescr) self.setup(original_loop_token) if log: @@ -503,6 +501,7 @@ [loc.assembler() for loc in faildescr._x86_debug_faillocs]) regalloc = RegAlloc(self, self.cpu.translate_support_code) fail_depths = faildescr._x86_current_depths + startpos = self.mc.get_relative_pos() operations = regalloc.prepare_bridge(fail_depths, inputargs, arglocs, operations, self.current_clt.allgcrefs) @@ -537,7 +536,7 @@ name = "Bridge # %s" % (descr_number,) self.cpu.profile_agent.native_code_written(name, rawstart, fullsize) - return ops_offset + return 
AsmInfo(ops_offset, startpos + rawstart, codeendpos - startpos) def write_pending_failure_recoveries(self): # for each pending guard, generate the code of the recovery stub @@ -621,7 +620,10 @@ def _find_failure_recovery_bytecode(self, faildescr): adr_jump_offset = faildescr._x86_adr_jump_offset if adr_jump_offset == 0: - raise ValueError + # This case should be prevented by the logic in compile.py: + # look for CNT_BUSY_FLAG, which disables tracing from a guard + # when another tracing from the same guard is already in progress. + raise BridgeAlreadyCompiled # follow the JMP/Jcond p = rffi.cast(rffi.INTP, adr_jump_offset) adr_target = adr_jump_offset + 4 + rffi.cast(lltype.Signed, p[0]) @@ -810,7 +812,10 @@ target = newlooptoken._x86_function_addr mc = codebuf.MachineCodeBlockWrapper() mc.JMP(imm(target)) - assert mc.get_relative_pos() <= 13 # keep in sync with prepare_loop() + if WORD == 4: # keep in sync with prepare_loop() + assert mc.get_relative_pos() == 5 + else: + assert mc.get_relative_pos() <= 13 mc.copy_to_raw_memory(oldadr) def dump(self, text): @@ -1113,6 +1118,12 @@ for src, dst in singlefloats: self.mc.MOVD(dst, src) # Finally remap the arguments in the main regs + # If x is a register and is in dst_locs, then oups, it needs to + # be moved away: + if x in dst_locs: + src_locs.append(x) + dst_locs.append(r10) + x = r10 remap_frame_layout(self, src_locs, dst_locs, X86_64_SCRATCH_REG) self._regalloc.reserve_param(len(pass_on_stack)) @@ -2037,10 +2048,7 @@ size = sizeloc.value signloc = arglocs[1] - if isinstance(op.getarg(0), Const): - x = imm(op.getarg(0).getint()) - else: - x = arglocs[2] + x = arglocs[2] # the function address if x is eax: tmp = ecx else: @@ -2550,3 +2558,6 @@ def not_implemented(msg): os.write(2, '[x86/asm] %s\n' % msg) raise NotImplementedError(msg) + +class BridgeAlreadyCompiled(Exception): + pass diff --git a/pypy/jit/backend/x86/regalloc.py b/pypy/jit/backend/x86/regalloc.py --- a/pypy/jit/backend/x86/regalloc.py +++ b/pypy/jit/backend/x86/regalloc.py @@ -188,7 +188,10 @@ # note: we need to make a copy of inputargs because possibly_free_vars # is also used on op args, which is a non-resizable list self.possibly_free_vars(list(inputargs)) - self.min_bytes_before_label = 13 + if WORD == 4: # see redirect_call_assembler() + self.min_bytes_before_label = 5 + else: + self.min_bytes_before_label = 13 return operations def prepare_bridge(self, prev_depths, inputargs, arglocs, operations, @@ -741,7 +744,7 @@ self.xrm.possibly_free_var(op.getarg(0)) def consider_cast_int_to_float(self, op): - loc0 = self.rm.loc(op.getarg(0)) + loc0 = self.rm.make_sure_var_in_reg(op.getarg(0)) loc1 = self.xrm.force_allocate_reg(op.result) self.Perform(op, [loc0], loc1) self.rm.possibly_free_var(op.getarg(0)) diff --git a/pypy/jit/backend/x86/runner.py b/pypy/jit/backend/x86/runner.py --- a/pypy/jit/backend/x86/runner.py +++ b/pypy/jit/backend/x86/runner.py @@ -6,7 +6,7 @@ from pypy.jit.codewriter import longlong from pypy.jit.metainterp import history, compile from pypy.jit.backend.x86.assembler import Assembler386 -from pypy.jit.backend.x86.arch import FORCE_INDEX_OFS +from pypy.jit.backend.x86.arch import FORCE_INDEX_OFS, IS_X86_32 from pypy.jit.backend.x86.profagent import ProfileAgent from pypy.jit.backend.llsupport.llmodel import AbstractLLCPU from pypy.jit.backend.x86 import regloc @@ -142,7 +142,9 @@ cast_ptr_to_int._annspecialcase_ = 'specialize:arglltype(0)' cast_ptr_to_int = staticmethod(cast_ptr_to_int) - all_null_registers = lltype.malloc(rffi.LONGP.TO, 24, + 
all_null_registers = lltype.malloc(rffi.LONGP.TO, + IS_X86_32 and (16+8) # 16 + 8 regs + or (16+16), # 16 + 16 regs flavor='raw', zero=True, immortal=True) diff --git a/pypy/jit/backend/x86/test/test_runner.py b/pypy/jit/backend/x86/test/test_runner.py --- a/pypy/jit/backend/x86/test/test_runner.py +++ b/pypy/jit/backend/x86/test/test_runner.py @@ -33,6 +33,13 @@ # for the individual tests see # ====> ../../test/runner_test.py + add_loop_instructions = ['mov', 'add', 'test', 'je', 'jmp'] + if WORD == 4: + bridge_loop_instructions = ['lea', 'jmp'] + else: + # the 'mov' is part of the 'jmp' so far + bridge_loop_instructions = ['lea', 'mov', 'jmp'] + def setup_method(self, meth): self.cpu = CPU(rtyper=None, stats=FakeStats()) self.cpu.setup_once() @@ -416,7 +423,8 @@ ] inputargs = [i0] debug._log = dlog = debug.DebugLog() - ops_offset = self.cpu.compile_loop(inputargs, operations, looptoken) + info = self.cpu.compile_loop(inputargs, operations, looptoken) + ops_offset = info.ops_offset debug._log = None # assert ops_offset is looptoken._x86_ops_offset diff --git a/pypy/jit/backend/x86/tool/viewcode.py b/pypy/jit/backend/x86/tool/viewcode.py --- a/pypy/jit/backend/x86/tool/viewcode.py +++ b/pypy/jit/backend/x86/tool/viewcode.py @@ -39,6 +39,7 @@ def machine_code_dump(data, originaddr, backend_name, label_list=None): objdump_backend_option = { 'x86': 'i386', + 'x86_32': 'i386', 'x86_64': 'x86-64', 'i386': 'i386', } diff --git a/pypy/jit/codewriter/policy.py b/pypy/jit/codewriter/policy.py --- a/pypy/jit/codewriter/policy.py +++ b/pypy/jit/codewriter/policy.py @@ -8,11 +8,15 @@ class JitPolicy(object): - def __init__(self): + def __init__(self, jithookiface=None): self.unsafe_loopy_graphs = set() self.supports_floats = False self.supports_longlong = False self.supports_singlefloats = False + if jithookiface is None: + from pypy.rlib.jit import JitHookInterface + jithookiface = JitHookInterface() + self.jithookiface = jithookiface def set_supports_floats(self, flag): self.supports_floats = flag diff --git a/pypy/jit/metainterp/compile.py b/pypy/jit/metainterp/compile.py --- a/pypy/jit/metainterp/compile.py +++ b/pypy/jit/metainterp/compile.py @@ -5,6 +5,7 @@ from pypy.rlib.objectmodel import we_are_translated from pypy.rlib.debug import debug_start, debug_stop, debug_print from pypy.rlib import rstack +from pypy.rlib.jit import JitDebugInfo from pypy.conftest import option from pypy.tool.sourcetools import func_with_new_name @@ -75,7 +76,7 @@ if descr is not original_jitcell_token: original_jitcell_token.record_jump_to(descr) descr.exported_state = None - op._descr = None # clear reference, mostly for tests + op.cleardescr() # clear reference, mostly for tests elif isinstance(descr, TargetToken): # for a JUMP: record it as a potential jump. 
# (the following test is not enough to prevent more complicated @@ -90,8 +91,8 @@ assert descr.exported_state is None if not we_are_translated(): op._descr_wref = weakref.ref(op._descr) - op._descr = None # clear reference to prevent the history.Stats - # from keeping the loop alive during tests + op.cleardescr() # clear reference to prevent the history.Stats + # from keeping the loop alive during tests # record this looptoken on the QuasiImmut used in the code if loop.quasi_immutable_deps is not None: for qmut in loop.quasi_immutable_deps: @@ -296,8 +297,6 @@ patch_new_loop_to_load_virtualizable_fields(loop, jitdriver_sd) original_jitcell_token = loop.original_jitcell_token - jitdriver_sd.on_compile(metainterp_sd.logger_ops, original_jitcell_token, - loop.operations, type, greenkey) loopname = jitdriver_sd.warmstate.get_location_str(greenkey) globaldata = metainterp_sd.globaldata original_jitcell_token.number = n = globaldata.loopnumbering @@ -307,21 +306,38 @@ show_procedures(metainterp_sd, loop) loop.check_consistency() + if metainterp_sd.warmrunnerdesc is not None: + hooks = metainterp_sd.warmrunnerdesc.hooks + debug_info = JitDebugInfo(jitdriver_sd, metainterp_sd.logger_ops, + original_jitcell_token, loop.operations, + type, greenkey) + hooks.before_compile(debug_info) + else: + debug_info = None + hooks = None operations = get_deep_immutable_oplist(loop.operations) metainterp_sd.profiler.start_backend() debug_start("jit-backend") try: - ops_offset = metainterp_sd.cpu.compile_loop(loop.inputargs, operations, - original_jitcell_token, name=loopname) + asminfo = metainterp_sd.cpu.compile_loop(loop.inputargs, operations, + original_jitcell_token, + name=loopname) finally: debug_stop("jit-backend") metainterp_sd.profiler.end_backend() + if hooks is not None: + debug_info.asminfo = asminfo + hooks.after_compile(debug_info) metainterp_sd.stats.add_new_loop(loop) if not we_are_translated(): metainterp_sd.stats.compiled() metainterp_sd.log("compiled new " + type) # loopname = jitdriver_sd.warmstate.get_location_str(greenkey) + if asminfo is not None: + ops_offset = asminfo.ops_offset + else: + ops_offset = None metainterp_sd.logger_ops.log_loop(loop.inputargs, loop.operations, n, type, ops_offset, name=loopname) @@ -332,25 +348,40 @@ def send_bridge_to_backend(jitdriver_sd, metainterp_sd, faildescr, inputargs, operations, original_loop_token): n = metainterp_sd.cpu.get_fail_descr_number(faildescr) - jitdriver_sd.on_compile_bridge(metainterp_sd.logger_ops, - original_loop_token, operations, n) if not we_are_translated(): show_procedures(metainterp_sd) seen = dict.fromkeys(inputargs) TreeLoop.check_consistency_of_branch(operations, seen) + if metainterp_sd.warmrunnerdesc is not None: + hooks = metainterp_sd.warmrunnerdesc.hooks + debug_info = JitDebugInfo(jitdriver_sd, metainterp_sd.logger_ops, + original_loop_token, operations, 'bridge', + fail_descr_no=n) + hooks.before_compile_bridge(debug_info) + else: + hooks = None + debug_info = None + operations = get_deep_immutable_oplist(operations) metainterp_sd.profiler.start_backend() - operations = get_deep_immutable_oplist(operations) debug_start("jit-backend") try: - ops_offset = metainterp_sd.cpu.compile_bridge(faildescr, inputargs, operations, - original_loop_token) + asminfo = metainterp_sd.cpu.compile_bridge(faildescr, inputargs, + operations, + original_loop_token) finally: debug_stop("jit-backend") metainterp_sd.profiler.end_backend() + if hooks is not None: + debug_info.asminfo = asminfo + hooks.after_compile_bridge(debug_info) if not 
we_are_translated(): metainterp_sd.stats.compiled() metainterp_sd.log("compiled new bridge") # + if asminfo is not None: + ops_offset = asminfo.ops_offset + else: + ops_offset = None metainterp_sd.logger_ops.log_bridge(inputargs, operations, n, ops_offset) # #if metainterp_sd.warmrunnerdesc is not None: # for tests diff --git a/pypy/jit/metainterp/history.py b/pypy/jit/metainterp/history.py --- a/pypy/jit/metainterp/history.py +++ b/pypy/jit/metainterp/history.py @@ -1003,16 +1003,16 @@ return insns def check_simple_loop(self, expected=None, **check): - # Usefull in the simplest case when we have only one trace ending with - # a jump back to itself and possibly a few bridges ending with finnish. - # Only the operations within the loop formed by that single jump will - # be counted. + """ Usefull in the simplest case when we have only one trace ending with + a jump back to itself and possibly a few bridges. + Only the operations within the loop formed by that single jump will + be counted. + """ loops = self.get_all_loops() assert len(loops) == 1 loop = loops[0] jumpop = loop.operations[-1] assert jumpop.getopnum() == rop.JUMP - assert self.check_resops(jump=1) labels = [op for op in loop.operations if op.getopnum() == rop.LABEL] targets = [op._descr_wref() for op in labels] assert None not in targets # TargetToken was freed, give up diff --git a/pypy/jit/metainterp/jitdriver.py b/pypy/jit/metainterp/jitdriver.py --- a/pypy/jit/metainterp/jitdriver.py +++ b/pypy/jit/metainterp/jitdriver.py @@ -21,7 +21,6 @@ # self.portal_finishtoken... pypy.jit.metainterp.pyjitpl # self.index ... pypy.jit.codewriter.call # self.mainjitcode ... pypy.jit.codewriter.call - # self.on_compile ... pypy.jit.metainterp.warmstate # These attributes are read by the backend in CALL_ASSEMBLER: # self.assembler_helper_adr diff --git a/pypy/jit/metainterp/jitprof.py b/pypy/jit/metainterp/jitprof.py --- a/pypy/jit/metainterp/jitprof.py +++ b/pypy/jit/metainterp/jitprof.py @@ -18,8 +18,8 @@ OPT_FORCINGS ABORT_TOO_LONG ABORT_BRIDGE +ABORT_BAD_LOOP ABORT_ESCAPE -ABORT_BAD_LOOP ABORT_FORCE_QUASIIMMUT NVIRTUALS NVHOLES @@ -30,10 +30,13 @@ TOTAL_FREED_BRIDGES """ +counter_names = [] + def _setup(): names = counters.split() for i, name in enumerate(names): globals()[name] = i + counter_names.append(name) global ncounters ncounters = len(names) _setup() diff --git a/pypy/jit/metainterp/optimizeopt/fficall.py b/pypy/jit/metainterp/optimizeopt/fficall.py --- a/pypy/jit/metainterp/optimizeopt/fficall.py +++ b/pypy/jit/metainterp/optimizeopt/fficall.py @@ -234,11 +234,11 @@ # longlongs are treated as floats, see # e.g. 
llsupport/descr.py:getDescrClass is_float = True - elif kind == 'u': + elif kind == 'u' or kind == 's': # they're all False pass else: - assert False, "unsupported ffitype or kind" + raise NotImplementedError("unsupported ffitype or kind: %s" % kind) # fieldsize = rffi.getintfield(ffitype, 'c_size') return self.optimizer.cpu.interiorfielddescrof_dynamic( diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py @@ -117,7 +117,7 @@ def optimize_loop(self, ops, optops, call_pure_results=None): loop = self.parse(ops) - token = JitCellToken() + token = JitCellToken() loop.operations = [ResOperation(rop.LABEL, loop.inputargs, None, descr=TargetToken(token))] + \ loop.operations if loop.operations[-1].getopnum() == rop.JUMP: diff --git a/pypy/jit/metainterp/pyjitpl.py b/pypy/jit/metainterp/pyjitpl.py --- a/pypy/jit/metainterp/pyjitpl.py +++ b/pypy/jit/metainterp/pyjitpl.py @@ -1553,6 +1553,7 @@ class MetaInterp(object): in_recursion = 0 + cancel_count = 0 def __init__(self, staticdata, jitdriver_sd): self.staticdata = staticdata @@ -1793,6 +1794,15 @@ def aborted_tracing(self, reason): self.staticdata.profiler.count(reason) debug_print('~~~ ABORTING TRACING') + jd_sd = self.jitdriver_sd + if not self.current_merge_points: + greenkey = None # we're in the bridge + else: + greenkey = self.current_merge_points[0][0][:jd_sd.num_green_args] + self.staticdata.warmrunnerdesc.hooks.on_abort(reason, + jd_sd.jitdriver, + greenkey, + jd_sd.warmstate.get_location_str(greenkey)) self.staticdata.stats.aborted() def blackhole_if_trace_too_long(self): @@ -1966,9 +1976,14 @@ raise SwitchToBlackhole(ABORT_BAD_LOOP) # For now self.compile_loop(original_boxes, live_arg_boxes, start, resumedescr) # creation of the loop was cancelled! + self.cancel_count += 1 + if self.staticdata.warmrunnerdesc: + memmgr = self.staticdata.warmrunnerdesc.memory_manager + if memmgr: + if self.cancel_count > memmgr.max_unroll_loops: + self.staticdata.log('cancelled too many times!') + raise SwitchToBlackhole(ABORT_BAD_LOOP) self.staticdata.log('cancelled, tracing more...') - #self.staticdata.log('cancelled, stopping tracing') - #raise SwitchToBlackhole(ABORT_BAD_LOOP) # Otherwise, no loop found so far, so continue tracing. 
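The pyjitpl.py hunk just above makes repeated trace cancellations abort with ABORT_BAD_LOOP once the new ``max_unroll_loops`` limit is exceeded. As a minimal illustrative sketch (not taken from this patch; the driver name and the bound of 4 are made up), the limit is tuned through the usual ``set_param`` interface, in the same way as the ``test_max_unroll_loops`` test added just below::

    from pypy.rlib.jit import JitDriver, set_param

    myjitdriver = JitDriver(greens=[], reds=['n', 'i'])   # hypothetical driver

    def interp_loop(n):
        # cap how many times a cancelled trace is retried before the
        # metainterp gives up and switches to the blackhole interpreter
        set_param(myjitdriver, 'max_unroll_loops', 4)
        i = 0
        while i < n:
            myjitdriver.jit_merge_point(n=n, i=i)
            i += 1
        return i
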
start = len(self.history.operations) diff --git a/pypy/jit/metainterp/resoperation.py b/pypy/jit/metainterp/resoperation.py --- a/pypy/jit/metainterp/resoperation.py +++ b/pypy/jit/metainterp/resoperation.py @@ -16,15 +16,15 @@ # debug name = "" pc = 0 + opnum = 0 + + _attrs_ = ('result',) def __init__(self, result): self.result = result - # methods implemented by each concrete class - # ------------------------------------------ - def getopnum(self): - raise NotImplementedError + return self.opnum # methods implemented by the arity mixins # --------------------------------------- @@ -64,6 +64,9 @@ def setdescr(self, descr): raise NotImplementedError + def cleardescr(self): + pass + # common methods # -------------- @@ -196,6 +199,9 @@ self._check_descr(descr) self._descr = descr + def cleardescr(self): + self._descr = None + def _check_descr(self, descr): if not we_are_translated() and getattr(descr, 'I_am_a_descr', False): return # needed for the mock case in oparser_model @@ -590,12 +596,9 @@ baseclass = PlainResOp mixin = arity2mixin.get(arity, N_aryOp) - def getopnum(self): - return opnum - cls_name = '%s_OP' % name bases = (get_base_class(mixin, baseclass),) - dic = {'getopnum': getopnum} + dic = {'opnum': opnum} return type(cls_name, bases, dic) setup(__name__ == '__main__') # print out the table when run directly diff --git a/pypy/jit/metainterp/test/support.py b/pypy/jit/metainterp/test/support.py --- a/pypy/jit/metainterp/test/support.py +++ b/pypy/jit/metainterp/test/support.py @@ -56,8 +56,6 @@ greenfield_info = None result_type = result_kind portal_runner_ptr = "???" - on_compile = lambda *args: None - on_compile_bridge = lambda *args: None stats = history.Stats() cpu = CPUClass(rtyper, stats, None, False) diff --git a/pypy/jit/metainterp/test/test_ajit.py b/pypy/jit/metainterp/test/test_ajit.py --- a/pypy/jit/metainterp/test/test_ajit.py +++ b/pypy/jit/metainterp/test/test_ajit.py @@ -2629,6 +2629,38 @@ self.check_jitcell_token_count(1) self.check_target_token_count(5) + def test_max_unroll_loops(self): + from pypy.jit.metainterp.optimize import InvalidLoop + from pypy.jit.metainterp import optimizeopt + myjitdriver = JitDriver(greens = [], reds = ['n', 'i']) + # + def f(n, limit): + set_param(myjitdriver, 'threshold', 5) + set_param(myjitdriver, 'max_unroll_loops', limit) + i = 0 + while i < n: + myjitdriver.jit_merge_point(n=n, i=i) + print i + i += 1 + return i + # + def my_optimize_trace(*args, **kwds): + raise InvalidLoop + old_optimize_trace = optimizeopt.optimize_trace + optimizeopt.optimize_trace = my_optimize_trace + try: + res = self.meta_interp(f, [23, 4]) + assert res == 23 + self.check_trace_count(0) + self.check_aborted_count(3) + # + res = self.meta_interp(f, [23, 20]) + assert res == 23 + self.check_trace_count(0) + self.check_aborted_count(2) + finally: + optimizeopt.optimize_trace = old_optimize_trace + def test_retrace_limit_with_extra_guards(self): myjitdriver = JitDriver(greens = [], reds = ['n', 'i', 'sa', 'a', 'node']) diff --git a/pypy/jit/metainterp/test/test_compile.py b/pypy/jit/metainterp/test/test_compile.py --- a/pypy/jit/metainterp/test/test_compile.py +++ b/pypy/jit/metainterp/test/test_compile.py @@ -53,8 +53,6 @@ call_pure_results = {} class jitdriver_sd: warmstate = FakeState() - on_compile = staticmethod(lambda *args: None) - on_compile_bridge = staticmethod(lambda *args: None) virtualizable_info = None def test_compile_loop(): diff --git a/pypy/jit/metainterp/test/test_fficall.py b/pypy/jit/metainterp/test/test_fficall.py --- 
a/pypy/jit/metainterp/test/test_fficall.py +++ b/pypy/jit/metainterp/test/test_fficall.py @@ -148,28 +148,38 @@ self.check_resops({'jump': 1, 'int_lt': 2, 'setinteriorfield_raw': 4, 'getinteriorfield_raw': 8, 'int_add': 6, 'guard_true': 2}) - def test_array_getitem_uint8(self): + def _test_getitem_type(self, TYPE, ffitype, COMPUTE_TYPE): + reds = ["n", "i", "s", "data"] + if COMPUTE_TYPE is lltype.Float: + # Move the float var to the back. + reds.remove("s") + reds.append("s") myjitdriver = JitDriver( greens = [], - reds = ["n", "i", "s", "data"], + reds = reds, ) def f(data, n): - i = s = 0 + i = 0 + s = rffi.cast(COMPUTE_TYPE, 0) while i < n: myjitdriver.jit_merge_point(n=n, i=i, s=s, data=data) - s += rffi.cast(lltype.Signed, array_getitem(types.uchar, 1, data, 0, 0)) + s += rffi.cast(COMPUTE_TYPE, array_getitem(ffitype, rffi.sizeof(TYPE), data, 0, 0)) i += 1 return s + def main(n): + with lltype.scoped_alloc(rffi.CArray(TYPE), 1) as data: + data[0] = rffi.cast(TYPE, 200) + return f(data, n) + assert self.meta_interp(main, [10]) == 2000 - def main(n): - with lltype.scoped_alloc(rffi.CArray(rffi.UCHAR), 1) as data: - data[0] = rffi.cast(rffi.UCHAR, 200) - return f(data, n) - - assert self.meta_interp(main, [10]) == 2000 + def test_array_getitem_uint8(self): + self._test_getitem_type(rffi.UCHAR, types.uchar, lltype.Signed) self.check_resops({'jump': 1, 'int_lt': 2, 'getinteriorfield_raw': 2, 'guard_true': 2, 'int_add': 4}) + def test_array_getitem_float(self): + self._test_getitem_type(rffi.FLOAT, types.float, lltype.Float) + class TestFfiCall(FfiCallTests, LLJitMixin): supports_all = False diff --git a/pypy/jit/metainterp/test/test_jitdriver.py b/pypy/jit/metainterp/test/test_jitdriver.py --- a/pypy/jit/metainterp/test/test_jitdriver.py +++ b/pypy/jit/metainterp/test/test_jitdriver.py @@ -10,57 +10,6 @@ def getloc2(g): return "in jitdriver2, with g=%d" % g -class JitDriverTests(object): - def test_on_compile(self): - called = {} - - class MyJitDriver(JitDriver): - def on_compile(self, logger, looptoken, operations, type, n, m): - called[(m, n, type)] = looptoken - - driver = MyJitDriver(greens = ['n', 'm'], reds = ['i']) - - def loop(n, m): - i = 0 - while i < n + m: - driver.can_enter_jit(n=n, m=m, i=i) - driver.jit_merge_point(n=n, m=m, i=i) - i += 1 - - self.meta_interp(loop, [1, 4]) - assert sorted(called.keys()) == [(4, 1, "loop")] - self.meta_interp(loop, [2, 4]) - assert sorted(called.keys()) == [(4, 1, "loop"), - (4, 2, "loop")] - - def test_on_compile_bridge(self): - called = {} - - class MyJitDriver(JitDriver): - def on_compile(self, logger, looptoken, operations, type, n, m): - called[(m, n, type)] = loop - def on_compile_bridge(self, logger, orig_token, operations, n): - assert 'bridge' not in called - called['bridge'] = orig_token - - driver = MyJitDriver(greens = ['n', 'm'], reds = ['i']) - - def loop(n, m): - i = 0 - while i < n + m: - driver.can_enter_jit(n=n, m=m, i=i) - driver.jit_merge_point(n=n, m=m, i=i) - if i >= 4: - i += 2 - i += 1 - - self.meta_interp(loop, [1, 10]) - assert sorted(called.keys()) == ['bridge', (10, 1, "loop")] - - -class TestLLtypeSingle(JitDriverTests, LLJitMixin): - pass - class MultipleJitDriversTests(object): def test_simple(self): diff --git a/pypy/jit/metainterp/test/test_jitiface.py b/pypy/jit/metainterp/test/test_jitiface.py new file mode 100644 --- /dev/null +++ b/pypy/jit/metainterp/test/test_jitiface.py @@ -0,0 +1,148 @@ + +from pypy.rlib.jit import JitDriver, JitHookInterface +from pypy.rlib import jit_hooks +from 
pypy.jit.metainterp.test.support import LLJitMixin +from pypy.jit.codewriter.policy import JitPolicy +from pypy.jit.metainterp.jitprof import ABORT_FORCE_QUASIIMMUT +from pypy.jit.metainterp.resoperation import rop +from pypy.rpython.annlowlevel import hlstr + +class TestJitHookInterface(LLJitMixin): + def test_abort_quasi_immut(self): + reasons = [] + + class MyJitIface(JitHookInterface): + def on_abort(self, reason, jitdriver, greenkey, greenkey_repr): + assert jitdriver is myjitdriver + assert len(greenkey) == 1 + reasons.append(reason) + assert greenkey_repr == 'blah' + + iface = MyJitIface() + + myjitdriver = JitDriver(greens=['foo'], reds=['x', 'total'], + get_printable_location=lambda *args: 'blah') + + class Foo: + _immutable_fields_ = ['a?'] + def __init__(self, a): + self.a = a + def f(a, x): + foo = Foo(a) + total = 0 + while x > 0: + myjitdriver.jit_merge_point(foo=foo, x=x, total=total) + # read a quasi-immutable field out of a Constant + total += foo.a + foo.a += 1 + x -= 1 + return total + # + assert f(100, 7) == 721 + res = self.meta_interp(f, [100, 7], policy=JitPolicy(iface)) + assert res == 721 + assert reasons == [ABORT_FORCE_QUASIIMMUT] * 2 + + def test_on_compile(self): + called = [] + + class MyJitIface(JitHookInterface): + def after_compile(self, di): + called.append(("compile", di.greenkey[1].getint(), + di.greenkey[0].getint(), di.type)) + + def before_compile(self, di): + called.append(("optimize", di.greenkey[1].getint(), + di.greenkey[0].getint(), di.type)) + + #def before_optimize(self, jitdriver, logger, looptoken, oeprations, + # type, greenkey): + # called.append(("trace", greenkey[1].getint(), + # greenkey[0].getint(), type)) + + iface = MyJitIface() + + driver = JitDriver(greens = ['n', 'm'], reds = ['i']) + + def loop(n, m): + i = 0 + while i < n + m: + driver.can_enter_jit(n=n, m=m, i=i) + driver.jit_merge_point(n=n, m=m, i=i) + i += 1 + + self.meta_interp(loop, [1, 4], policy=JitPolicy(iface)) + assert called == [#("trace", 4, 1, "loop"), + ("optimize", 4, 1, "loop"), + ("compile", 4, 1, "loop")] + self.meta_interp(loop, [2, 4], policy=JitPolicy(iface)) + assert called == [#("trace", 4, 1, "loop"), + ("optimize", 4, 1, "loop"), + ("compile", 4, 1, "loop"), + #("trace", 4, 2, "loop"), + ("optimize", 4, 2, "loop"), + ("compile", 4, 2, "loop")] + + def test_on_compile_bridge(self): + called = [] + + class MyJitIface(JitHookInterface): + def after_compile(self, di): + called.append("compile") + + def after_compile_bridge(self, di): + called.append("compile_bridge") + + def before_compile_bridge(self, di): + called.append("before_compile_bridge") + + driver = JitDriver(greens = ['n', 'm'], reds = ['i']) + + def loop(n, m): + i = 0 + while i < n + m: + driver.can_enter_jit(n=n, m=m, i=i) + driver.jit_merge_point(n=n, m=m, i=i) + if i >= 4: + i += 2 + i += 1 + + self.meta_interp(loop, [1, 10], policy=JitPolicy(MyJitIface())) + assert called == ["compile", "before_compile_bridge", "compile_bridge"] + + def test_resop_interface(self): + driver = JitDriver(greens = [], reds = ['i']) + + def loop(i): + while i > 0: + driver.jit_merge_point(i=i) + i -= 1 + + def main(): + loop(1) + op = jit_hooks.resop_new(rop.INT_ADD, + [jit_hooks.boxint_new(3), + jit_hooks.boxint_new(4)], + jit_hooks.boxint_new(1)) + assert hlstr(jit_hooks.resop_getopname(op)) == 'int_add' + assert jit_hooks.resop_getopnum(op) == rop.INT_ADD + box = jit_hooks.resop_getarg(op, 0) + assert jit_hooks.box_getint(box) == 3 + box2 = jit_hooks.box_clone(box) + assert box2 != box + assert 
jit_hooks.box_getint(box2) == 3 + assert not jit_hooks.box_isconst(box2) + box3 = jit_hooks.box_constbox(box) + assert jit_hooks.box_getint(box) == 3 + assert jit_hooks.box_isconst(box3) + box4 = jit_hooks.box_nonconstbox(box) + assert not jit_hooks.box_isconst(box4) + box5 = jit_hooks.boxint_new(18) + jit_hooks.resop_setarg(op, 0, box5) + assert jit_hooks.resop_getarg(op, 0) == box5 + box6 = jit_hooks.resop_getresult(op) + assert jit_hooks.box_getint(box6) == 1 + jit_hooks.resop_setresult(op, box5) + assert jit_hooks.resop_getresult(op) == box5 + + self.meta_interp(main, []) diff --git a/pypy/jit/metainterp/test/test_resoperation.py b/pypy/jit/metainterp/test/test_resoperation.py --- a/pypy/jit/metainterp/test/test_resoperation.py +++ b/pypy/jit/metainterp/test/test_resoperation.py @@ -30,17 +30,17 @@ cls = rop.opclasses[rop.rop.INT_ADD] assert issubclass(cls, rop.PlainResOp) assert issubclass(cls, rop.BinaryOp) - assert cls.getopnum.im_func(None) == rop.rop.INT_ADD + assert cls.getopnum.im_func(cls) == rop.rop.INT_ADD cls = rop.opclasses[rop.rop.CALL] assert issubclass(cls, rop.ResOpWithDescr) assert issubclass(cls, rop.N_aryOp) - assert cls.getopnum.im_func(None) == rop.rop.CALL + assert cls.getopnum.im_func(cls) == rop.rop.CALL cls = rop.opclasses[rop.rop.GUARD_TRUE] assert issubclass(cls, rop.GuardResOp) assert issubclass(cls, rop.UnaryOp) - assert cls.getopnum.im_func(None) == rop.rop.GUARD_TRUE + assert cls.getopnum.im_func(cls) == rop.rop.GUARD_TRUE def test_mixins_in_common_base(): INT_ADD = rop.opclasses[rop.rop.INT_ADD] diff --git a/pypy/jit/metainterp/test/test_virtualstate.py b/pypy/jit/metainterp/test/test_virtualstate.py --- a/pypy/jit/metainterp/test/test_virtualstate.py +++ b/pypy/jit/metainterp/test/test_virtualstate.py @@ -5,7 +5,7 @@ VArrayStateInfo, NotVirtualStateInfo, VirtualState, ShortBoxes from pypy.jit.metainterp.optimizeopt.optimizer import OptValue from pypy.jit.metainterp.history import BoxInt, BoxFloat, BoxPtr, ConstInt, ConstPtr -from pypy.rpython.lltypesystem import lltype +from pypy.rpython.lltypesystem import lltype, llmemory from pypy.jit.metainterp.optimizeopt.test.test_util import LLtypeMixin, BaseTest, \ equaloplists, FakeDescrWithSnapshot from pypy.jit.metainterp.optimizeopt.intutils import IntBound @@ -82,6 +82,13 @@ assert isgeneral(value1, value2) assert not isgeneral(value2, value1) + assert isgeneral(OptValue(ConstInt(7)), OptValue(ConstInt(7))) + S = lltype.GcStruct('S') + foo = lltype.malloc(S) + fooref = lltype.cast_opaque_ptr(llmemory.GCREF, foo) + assert isgeneral(OptValue(ConstPtr(fooref)), + OptValue(ConstPtr(fooref))) + def test_field_matching_generalization(self): const1 = NotVirtualStateInfo(OptValue(ConstInt(1))) const2 = NotVirtualStateInfo(OptValue(ConstInt(2))) diff --git a/pypy/jit/metainterp/test/test_ztranslation.py b/pypy/jit/metainterp/test/test_ztranslation.py --- a/pypy/jit/metainterp/test/test_ztranslation.py +++ b/pypy/jit/metainterp/test/test_ztranslation.py @@ -3,7 +3,9 @@ from pypy.jit.backend.llgraph import runner from pypy.rlib.jit import JitDriver, unroll_parameters, set_param from pypy.rlib.jit import PARAMETERS, dont_look_inside, hint +from pypy.rlib.jit_hooks import boxint_new, resop_new, resop_getopnum from pypy.jit.metainterp.jitprof import Profiler +from pypy.jit.metainterp.resoperation import rop from pypy.rpython.lltypesystem import lltype, llmemory class TranslationTest: @@ -22,6 +24,7 @@ # - jitdriver hooks # - two JITs # - string concatenation, slicing and comparison + # - jit hooks interface class 
Frame(object): _virtualizable2_ = ['l[*]'] @@ -91,7 +94,9 @@ return f.i # def main(i, j): - return f(i) - f2(i+j, i, j) + op = resop_new(rop.INT_ADD, [boxint_new(3), boxint_new(5)], + boxint_new(8)) + return f(i) - f2(i+j, i, j) + resop_getopnum(op) res = ll_meta_interp(main, [40, 5], CPUClass=self.CPUClass, type_system=self.type_system, listops=True) diff --git a/pypy/jit/metainterp/warmspot.py b/pypy/jit/metainterp/warmspot.py --- a/pypy/jit/metainterp/warmspot.py +++ b/pypy/jit/metainterp/warmspot.py @@ -1,4 +1,5 @@ import sys, py +from pypy.tool.sourcetools import func_with_new_name from pypy.rpython.lltypesystem import lltype, llmemory from pypy.rpython.annlowlevel import llhelper, MixLevelHelperAnnotator,\ cast_base_ptr_to_instance, hlstr @@ -112,7 +113,7 @@ return ll_meta_interp(function, args, backendopt=backendopt, translate_support_code=True, **kwds) -def _find_jit_marker(graphs, marker_name): +def _find_jit_marker(graphs, marker_name, check_driver=True): results = [] for graph in graphs: for block in graph.iterblocks(): @@ -120,8 +121,8 @@ op = block.operations[i] if (op.opname == 'jit_marker' and op.args[0].value == marker_name and - (op.args[1].value is None or - op.args[1].value.active)): # the jitdriver + (not check_driver or op.args[1].value is None or + op.args[1].value.active)): # the jitdriver results.append((graph, block, i)) return results @@ -140,6 +141,9 @@ "found several jit_merge_points in the same graph") return results +def find_access_helpers(graphs): + return _find_jit_marker(graphs, 'access_helper', False) + def locate_jit_merge_point(graph): [(graph, block, pos)] = find_jit_merge_points([graph]) return block, pos, block.operations[pos] @@ -206,6 +210,7 @@ vrefinfo = VirtualRefInfo(self) self.codewriter.setup_vrefinfo(vrefinfo) # + self.hooks = policy.jithookiface self.make_virtualizable_infos() self.make_exception_classes() self.make_driverhook_graphs() @@ -213,6 +218,7 @@ self.rewrite_jit_merge_points(policy) verbose = False # not self.cpu.translate_support_code + self.rewrite_access_helpers() self.codewriter.make_jitcodes(verbose=verbose) self.rewrite_can_enter_jits() self.rewrite_set_param() @@ -619,6 +625,24 @@ graph = self.annhelper.getgraph(func, args_s, s_result) return self.annhelper.graph2delayed(graph, FUNC) + def rewrite_access_helpers(self): + ah = find_access_helpers(self.translator.graphs) + for graph, block, index in ah: + op = block.operations[index] + self.rewrite_access_helper(op) + + def rewrite_access_helper(self, op): + ARGS = [arg.concretetype for arg in op.args[2:]] + RESULT = op.result.concretetype + FUNCPTR = lltype.Ptr(lltype.FuncType(ARGS, RESULT)) + # make sure we make a copy of function so it no longer belongs + # to extregistry + func = op.args[1].value + func = func_with_new_name(func, func.func_name + '_compiled') + ptr = self.helper_func(FUNCPTR, func) + op.opname = 'direct_call' + op.args = [Constant(ptr, FUNCPTR)] + op.args[2:] + def rewrite_jit_merge_points(self, policy): for jd in self.jitdrivers_sd: self.rewrite_jit_merge_point(jd, policy) diff --git a/pypy/jit/metainterp/warmstate.py b/pypy/jit/metainterp/warmstate.py --- a/pypy/jit/metainterp/warmstate.py +++ b/pypy/jit/metainterp/warmstate.py @@ -244,6 +244,11 @@ if self.warmrunnerdesc.memory_manager: self.warmrunnerdesc.memory_manager.max_retrace_guards = value + def set_param_max_unroll_loops(self, value): + if self.warmrunnerdesc: + if self.warmrunnerdesc.memory_manager: + self.warmrunnerdesc.memory_manager.max_unroll_loops = value + def 
disable_noninlinable_function(self, greenkey): cell = self.jit_cell_at_key(greenkey) cell.dont_trace_here = True @@ -596,20 +601,6 @@ return fn(*greenargs) self.should_unroll_one_iteration = should_unroll_one_iteration - if hasattr(jd.jitdriver, 'on_compile'): - def on_compile(logger, token, operations, type, greenkey): - greenargs = unwrap_greenkey(greenkey) - return jd.jitdriver.on_compile(logger, token, operations, type, - *greenargs) - def on_compile_bridge(logger, orig_token, operations, n): - return jd.jitdriver.on_compile_bridge(logger, orig_token, - operations, n) - jd.on_compile = on_compile - jd.on_compile_bridge = on_compile_bridge - else: - jd.on_compile = lambda *args: None - jd.on_compile_bridge = lambda *args: None - redargtypes = ''.join([kind[0] for kind in jd.red_args_types]) def get_assembler_token(greenkey): diff --git a/pypy/jit/tool/oparser.py b/pypy/jit/tool/oparser.py --- a/pypy/jit/tool/oparser.py +++ b/pypy/jit/tool/oparser.py @@ -89,11 +89,18 @@ assert typ == 'class' return self.model.ConstObj(ootype.cast_to_object(obj)) - def get_descr(self, poss_descr): + def get_descr(self, poss_descr, allow_invent): if poss_descr.startswith('<'): return None - else: + try: return self._consts[poss_descr] + except KeyError: + if allow_invent: + int(poss_descr) + token = self.model.JitCellToken() + tt = self.model.TargetToken(token) + self._consts[poss_descr] = tt + return tt def box_for_var(self, elem): try: @@ -186,7 +193,8 @@ poss_descr = allargs[-1].strip() if poss_descr.startswith('descr='): - descr = self.get_descr(poss_descr[len('descr='):]) + descr = self.get_descr(poss_descr[len('descr='):], + opname == 'label') allargs = allargs[:-1] for arg in allargs: arg = arg.strip() diff --git a/pypy/jit/tool/oparser_model.py b/pypy/jit/tool/oparser_model.py --- a/pypy/jit/tool/oparser_model.py +++ b/pypy/jit/tool/oparser_model.py @@ -6,7 +6,7 @@ from pypy.jit.metainterp.history import TreeLoop, JitCellToken from pypy.jit.metainterp.history import Box, BoxInt, BoxFloat from pypy.jit.metainterp.history import ConstInt, ConstObj, ConstPtr, ConstFloat - from pypy.jit.metainterp.history import BasicFailDescr + from pypy.jit.metainterp.history import BasicFailDescr, TargetToken from pypy.jit.metainterp.typesystem import llhelper from pypy.jit.metainterp.history import get_const_ptr_for_string @@ -42,6 +42,10 @@ class JitCellToken(object): I_am_a_descr = True + class TargetToken(object): + def __init__(self, jct): + pass + class BasicFailDescr(object): I_am_a_descr = True diff --git a/pypy/jit/tool/pypytrace.vim b/pypy/jit/tool/pypytrace.vim --- a/pypy/jit/tool/pypytrace.vim +++ b/pypy/jit/tool/pypytrace.vim @@ -19,6 +19,7 @@ syn match pypyLoopArgs '^[[].*' syn match pypyLoopStart '^#.*' syn match pypyDebugMergePoint '^debug_merge_point(.\+)' +syn match pypyLogBoundary '[[][0-9a-f]\+[]] \([{].\+\|.\+[}]\)$' hi def link pypyLoopStart Structure "hi def link pypyLoopArgs PreProc @@ -29,3 +30,4 @@ hi def link pypyNumber Number hi def link pypyDescr PreProc hi def link pypyDescrField Label +hi def link pypyLogBoundary Statement diff --git a/pypy/jit/tool/test/test_oparser.py b/pypy/jit/tool/test/test_oparser.py --- a/pypy/jit/tool/test/test_oparser.py +++ b/pypy/jit/tool/test/test_oparser.py @@ -4,7 +4,8 @@ from pypy.jit.tool.oparser import parse, OpParser from pypy.jit.metainterp.resoperation import rop -from pypy.jit.metainterp.history import AbstractDescr, BoxInt, JitCellToken +from pypy.jit.metainterp.history import AbstractDescr, BoxInt, JitCellToken,\ + TargetToken class 
BaseTestOparser(object): @@ -243,6 +244,16 @@ b = loop.getboxes() assert isinstance(b.sum0, BoxInt) + def test_label(self): + x = """ + [i0] + label(i0, descr=1) + jump(i0, descr=1) + """ + loop = self.parse(x) + assert loop.operations[0].getdescr() is loop.operations[1].getdescr() + assert isinstance(loop.operations[0].getdescr(), TargetToken) + class ForbiddenModule(object): def __init__(self, name, old_mod): diff --git a/pypy/module/_codecs/interp_codecs.py b/pypy/module/_codecs/interp_codecs.py --- a/pypy/module/_codecs/interp_codecs.py +++ b/pypy/module/_codecs/interp_codecs.py @@ -108,6 +108,10 @@ w_result = state.codec_search_cache.get(normalized_encoding, None) if w_result is not None: return w_result + return _lookup_codec_loop(space, encoding, normalized_encoding) + +def _lookup_codec_loop(space, encoding, normalized_encoding): + state = space.fromcache(CodecState) if state.codec_need_encodings: w_import = space.getattr(space.builtin, space.wrap("__import__")) # registers new codecs diff --git a/pypy/module/_codecs/test/test_codecs.py b/pypy/module/_codecs/test/test_codecs.py --- a/pypy/module/_codecs/test/test_codecs.py +++ b/pypy/module/_codecs/test/test_codecs.py @@ -588,10 +588,18 @@ raises(UnicodeDecodeError, '+3ADYAA-'.decode, 'utf-7') def test_utf_16_encode_decode(self): - import codecs + import codecs, sys x = u'123abc' - assert codecs.getencoder('utf-16')(x) == ('\xff\xfe1\x002\x003\x00a\x00b\x00c\x00', 6) - assert codecs.getdecoder('utf-16')('\xff\xfe1\x002\x003\x00a\x00b\x00c\x00') == (x, 14) + if sys.byteorder == 'big': + assert codecs.getencoder('utf-16')(x) == ( + '\xfe\xff\x001\x002\x003\x00a\x00b\x00c', 6) + assert codecs.getdecoder('utf-16')( + '\xfe\xff\x001\x002\x003\x00a\x00b\x00c') == (x, 14) + else: + assert codecs.getencoder('utf-16')(x) == ( + '\xff\xfe1\x002\x003\x00a\x00b\x00c\x00', 6) + assert codecs.getdecoder('utf-16')( + '\xff\xfe1\x002\x003\x00a\x00b\x00c\x00') == (x, 14) def test_unicode_escape(self): assert u'\\'.encode('unicode-escape') == '\\\\' diff --git a/pypy/module/_codecs/test/test_ztranslation.py b/pypy/module/_codecs/test/test_ztranslation.py new file mode 100644 --- /dev/null +++ b/pypy/module/_codecs/test/test_ztranslation.py @@ -0,0 +1,5 @@ +from pypy.objspace.fake.checkmodule import checkmodule + + +def test__codecs_translates(): + checkmodule('_codecs') diff --git a/pypy/module/_hashlib/interp_hashlib.py b/pypy/module/_hashlib/interp_hashlib.py --- a/pypy/module/_hashlib/interp_hashlib.py +++ b/pypy/module/_hashlib/interp_hashlib.py @@ -34,8 +34,12 @@ ctx = lltype.malloc(ropenssl.EVP_MD_CTX.TO, flavor='raw') rgc.add_memory_pressure(HASH_MALLOC_SIZE + self.digest_size) - ropenssl.EVP_DigestInit(ctx, digest_type) - self.ctx = ctx + try: + ropenssl.EVP_DigestInit(ctx, digest_type) + self.ctx = ctx + except: + lltype.free(ctx, flavor='raw') + raise def __del__(self): # self.lock.free() diff --git a/pypy/module/_io/interp_fileio.py b/pypy/module/_io/interp_fileio.py --- a/pypy/module/_io/interp_fileio.py +++ b/pypy/module/_io/interp_fileio.py @@ -349,6 +349,8 @@ try: s = os.read(self.fd, size) except OSError, e: + if e.errno == errno.EAGAIN: + return space.w_None raise wrap_oserror(space, e, exception_name='w_IOError') @@ -362,6 +364,8 @@ try: buf = os.read(self.fd, length) except OSError, e: + if e.errno == errno.EAGAIN: + return space.w_None raise wrap_oserror(space, e, exception_name='w_IOError') rwbuffer.setslice(0, buf) diff --git a/pypy/module/_io/test/test_fileio.py b/pypy/module/_io/test/test_fileio.py --- 
a/pypy/module/_io/test/test_fileio.py +++ b/pypy/module/_io/test/test_fileio.py @@ -133,6 +133,19 @@ f.close() assert a == 'a\nbxxxxxxx' + def test_nonblocking_read(self): + import os, fcntl + r_fd, w_fd = os.pipe() + # set nonblocking + fcntl.fcntl(r_fd, fcntl.F_SETFL, os.O_NONBLOCK) + import _io + f = _io.FileIO(r_fd, 'r') + # Read from stream sould return None + assert f.read() is None + assert f.read(10) is None + a = bytearray('x' * 10) + assert f.readinto(a) is None + def test_repr(self): import _io f = _io.FileIO(self.tmpfile, 'r') diff --git a/pypy/module/_lsprof/interp_lsprof.py b/pypy/module/_lsprof/interp_lsprof.py --- a/pypy/module/_lsprof/interp_lsprof.py +++ b/pypy/module/_lsprof/interp_lsprof.py @@ -19,8 +19,9 @@ # cpu affinity settings srcdir = py.path.local(pypydir).join('translator', 'c', 'src') -eci = ExternalCompilationInfo(separate_module_files= - [srcdir.join('profiling.c')]) +eci = ExternalCompilationInfo( + separate_module_files=[srcdir.join('profiling.c')], + export_symbols=['pypy_setup_profiling', 'pypy_teardown_profiling']) c_setup_profiling = rffi.llexternal('pypy_setup_profiling', [], lltype.Void, diff --git a/pypy/module/_socket/interp_socket.py b/pypy/module/_socket/interp_socket.py --- a/pypy/module/_socket/interp_socket.py +++ b/pypy/module/_socket/interp_socket.py @@ -67,9 +67,6 @@ self.connect(self.addr_from_object(space, w_addr)) except SocketError, e: raise converted_error(space, e) - except TypeError, e: - raise OperationError(space.w_TypeError, - space.wrap(str(e))) def connect_ex_w(self, space, w_addr): """connect_ex(address) -> errno diff --git a/pypy/module/_socket/test/test_sock_app.py b/pypy/module/_socket/test/test_sock_app.py --- a/pypy/module/_socket/test/test_sock_app.py +++ b/pypy/module/_socket/test/test_sock_app.py @@ -529,26 +529,31 @@ import _socket, os if not hasattr(_socket, 'AF_UNIX'): skip('AF_UNIX not supported.') - sockpath = os.path.join(self.udir, 'app_test_unix_socket_connect') + oldcwd = os.getcwd() + os.chdir(self.udir) + try: + sockpath = 'app_test_unix_socket_connect' - serversock = _socket.socket(_socket.AF_UNIX) - serversock.bind(sockpath) - serversock.listen(1) + serversock = _socket.socket(_socket.AF_UNIX) + serversock.bind(sockpath) + serversock.listen(1) - clientsock = _socket.socket(_socket.AF_UNIX) - clientsock.connect(sockpath) - s, addr = serversock.accept() - assert not addr + clientsock = _socket.socket(_socket.AF_UNIX) + clientsock.connect(sockpath) + s, addr = serversock.accept() + assert not addr - s.send('X') - data = clientsock.recv(100) - assert data == 'X' - clientsock.send('Y') - data = s.recv(100) - assert data == 'Y' + s.send('X') + data = clientsock.recv(100) + assert data == 'X' + clientsock.send('Y') + data = s.recv(100) + assert data == 'Y' - clientsock.close() - s.close() + clientsock.close() + s.close() + finally: + os.chdir(oldcwd) class AppTestSocketTCP: diff --git a/pypy/module/_ssl/test/test_ssl.py b/pypy/module/_ssl/test/test_ssl.py --- a/pypy/module/_ssl/test/test_ssl.py +++ b/pypy/module/_ssl/test/test_ssl.py @@ -64,8 +64,8 @@ def test_sslwrap(self): import _ssl, _socket, sys, gc - if sys.platform == 'darwin': - skip("hangs indefinitely on OSX (also on CPython)") + if sys.platform == 'darwin' or 'freebsd' in sys.platform: + skip("hangs indefinitely on OSX & FreeBSD (also on CPython)") s = _socket.socket() ss = _ssl.sslwrap(s, 0) exc = raises(_socket.error, ss.do_handshake) diff --git a/pypy/module/cppyy/capi/__init__.py b/pypy/module/cppyy/capi/__init__.py --- 
a/pypy/module/cppyy/capi/__init__.py +++ b/pypy/module/cppyy/capi/__init__.py @@ -4,13 +4,13 @@ import reflex_capi as backend #import cint_capi as backend +_C_OPAQUE_PTR = rffi.VOIDP +_C_OPAQUE_NULL = lltype.nullptr(_C_OPAQUE_PTR.TO) -C_NULL_VOIDP = lltype.nullptr(rffi.VOIDP.TO) - -C_TYPEHANDLE = rffi.LONG -C_NULL_TYPEHANDLE = rffi.cast(C_TYPEHANDLE, C_NULL_VOIDP) -C_OBJECT = rffi.VOIDP -C_NULL_OBJECT = C_NULL_VOIDP +C_TYPEHANDLE = _C_OPAQUE_PTR +C_NULL_TYPEHANDLE = rffi.cast(C_TYPEHANDLE, _C_OPAQUE_NULL) +C_OBJECT = _C_OPAQUE_PTR +C_NULL_OBJECT = rffi.cast(C_OBJECT, _C_OPAQUE_NULL) C_METHPTRGETTER = lltype.FuncType([C_OBJECT], rffi.VOIDP) C_METHPTRGETTER_PTR = lltype.Ptr(C_METHPTRGETTER) @@ -19,7 +19,7 @@ c_get_typehandle = rffi.llexternal( "cppyy_get_typehandle", - [rffi.CCHARP], C_TYPEHANDLE, + [rffi.CCHARP], C_OBJECT, compilation_info=backend.eci) c_get_templatehandle = rffi.llexternal( "cppyy_get_templatehandle", @@ -28,7 +28,7 @@ c_allocate = rffi.llexternal( "cppyy_allocate", - [C_TYPEHANDLE], rffi.VOIDP, + [C_TYPEHANDLE], C_OBJECT, compilation_info=backend.eci) c_deallocate = rffi.llexternal( "cppyy_deallocate", diff --git a/pypy/module/cppyy/converter.py b/pypy/module/cppyy/converter.py --- a/pypy/module/cppyy/converter.py +++ b/pypy/module/cppyy/converter.py @@ -16,13 +16,13 @@ from pypy.module.cppyy.interp_cppyy import W_CPPInstance cppinstance = space.interp_w(W_CPPInstance, w_obj, can_be_None=True) if cppinstance: - assert lltype.typeOf(cppinstance.rawobject) == rffi.VOIDP + assert lltype.typeOf(cppinstance.rawobject) == capi.C_OBJECT return cppinstance.rawobject - return lltype.nullptr(rffi.VOIDP.TO) + return capi.C_NULL_OBJECT def _direct_ptradd(ptr, offset): # TODO: factor out with interp_cppyy.py address = rffi.cast(rffi.CCHARP, ptr) - return rffi.cast(rffi.CCHARP, lltype.direct_ptradd(address, offset)) + return rffi.cast(capi.C_OBJECT, lltype.direct_ptradd(address, offset)) class TypeConverter(object): @@ -545,7 +545,7 @@ _immutable_ = True def from_memory(self, space, w_obj, w_type, offset): - address = rffi.cast(rffi.VOIDP, self._get_raw_address(space, w_obj, offset)) + address = rffi.cast(capi.C_OBJECT, self._get_raw_address(space, w_obj, offset)) from pypy.module.cppyy import interp_cppyy return interp_cppyy.new_instance(space, w_type, self.cpptype, address, False) diff --git a/pypy/module/cppyy/executor.py b/pypy/module/cppyy/executor.py --- a/pypy/module/cppyy/executor.py +++ b/pypy/module/cppyy/executor.py @@ -247,12 +247,12 @@ from pypy.module.cppyy import interp_cppyy long_result = capi.c_call_l( func.cpptype.handle, func.method_index, cppthis, num_args, args) - ptr_result = rffi.cast(rffi.VOIDP, long_result) + ptr_result = rffi.cast(capi.C_OBJECT, long_result) return interp_cppyy.new_instance(space, w_returntype, self.cpptype, ptr_result, False) def execute_libffi(self, space, w_returntype, libffifunc, argchain): from pypy.module.cppyy import interp_cppyy - ptr_result = rffi.cast(rffi.VOIDP, libffifunc.call(argchain, rffi.VOIDP)) + ptr_result = rffi.cast(capi.C_OBJECT, libffifunc.call(argchain, rffi.VOIDP)) return interp_cppyy.new_instance(space, w_returntype, self.cpptype, ptr_result, False) @@ -263,7 +263,7 @@ from pypy.module.cppyy import interp_cppyy long_result = capi.c_call_o( func.cpptype.handle, func.method_index, cppthis, num_args, args, self.cpptype.handle) - ptr_result = rffi.cast(rffi.VOIDP, long_result) + ptr_result = rffi.cast(capi.C_OBJECT, long_result) return interp_cppyy.new_instance(space, w_returntype, self.cpptype, ptr_result, True) def 
execute_libffi(self, space, w_returntype, libffifunc, argchain): diff --git a/pypy/module/cppyy/interp_cppyy.py b/pypy/module/cppyy/interp_cppyy.py --- a/pypy/module/cppyy/interp_cppyy.py +++ b/pypy/module/cppyy/interp_cppyy.py @@ -18,7 +18,7 @@ def _direct_ptradd(ptr, offset): # TODO: factor out with convert.py address = rffi.cast(rffi.CCHARP, ptr) - return rffi.cast(rffi.VOIDP, lltype.direct_ptradd(address, offset)) + return rffi.cast(capi.C_OBJECT, lltype.direct_ptradd(address, offset)) @unwrap_spec(name=str) def load_dictionary(space, name): @@ -135,7 +135,7 @@ if self.arg_converters is None: self._build_converters() jit.promote(self) - funcptr = self.methgetter(rffi.cast(rffi.VOIDP, cppthis)) + funcptr = self.methgetter(rffi.cast(capi.C_OBJECT, cppthis)) libffi_func = self._get_libffi_func(funcptr) if not libffi_func: raise FastCallNotPossible @@ -252,6 +252,7 @@ offset = capi.c_base_offset( cppinstance.cppclass.handle, self.scope_handle, cppinstance.rawobject) cppthis = _direct_ptradd(cppinstance.rawobject, offset) + assert lltype.typeOf(cppthis) == capi.C_OBJECT else: cppthis = capi.C_NULL_OBJECT return cppthis @@ -618,7 +619,7 @@ @unwrap_spec(address=int, owns=bool) def bind_object(space, address, w_type, owns=False): - rawobject = rffi.cast(rffi.VOIDP, address) + rawobject = rffi.cast(capi.C_OBJECT, address) w_cpptype = space.findattr(w_type, space.wrap("_cpp_proxy")) cpptype = space.interp_w(W_CPPType, w_cpptype, can_be_None=False) diff --git a/pypy/module/cppyy/test/test_zjit.py b/pypy/module/cppyy/test/test_zjit.py --- a/pypy/module/cppyy/test/test_zjit.py +++ b/pypy/module/cppyy/test/test_zjit.py @@ -4,7 +4,7 @@ from pypy.rpython.lltypesystem import rffi, lltype from pypy.interpreter.baseobjspace import InternalSpaceCache, W_Root -from pypy.module.cppyy import interp_cppyy +from pypy.module.cppyy import interp_cppyy, capi class FakeBase(W_Root): typename = None @@ -31,7 +31,7 @@ @jit.dont_look_inside def _opaque_direct_ptradd(ptr, offset): address = rffi.cast(rffi.CCHARP, ptr) - return rffi.cast(rffi.VOIDP, lltype.direct_ptradd(address, offset)) + return rffi.cast(capi.C_OBJECT, lltype.direct_ptradd(address, offset)) interp_cppyy._direct_ptradd = _opaque_direct_ptradd class FakeUserDelAction(object): @@ -148,5 +148,4 @@ f() space = FakeSpace() result = self.meta_interp(f, [], listops=True, backendopt=True, listcomp=True) - self.check_loops(call=0, call_release_gil=1) - self.check_loops(getarrayitem_gc_pure=0, everywhere=True) + self.check_jitcell_token_count(1) diff --git a/pypy/module/cpyext/api.py b/pypy/module/cpyext/api.py --- a/pypy/module/cpyext/api.py +++ b/pypy/module/cpyext/api.py @@ -23,6 +23,7 @@ from pypy.interpreter.function import StaticMethod from pypy.objspace.std.sliceobject import W_SliceObject from pypy.module.__builtin__.descriptor import W_Property +from pypy.module.__builtin__.interp_memoryview import W_MemoryView from pypy.rlib.entrypoint import entrypoint from pypy.rlib.unroll import unrolling_iterable from pypy.rlib.objectmodel import specialize @@ -387,6 +388,8 @@ "Float": "space.w_float", "Long": "space.w_long", "Complex": "space.w_complex", + "ByteArray": "space.w_bytearray", + "MemoryView": "space.gettypeobject(W_MemoryView.typedef)", "BaseObject": "space.w_object", 'None': 'space.type(space.w_None)', 'NotImplemented': 'space.type(space.w_NotImplemented)', diff --git a/pypy/module/cpyext/buffer.py b/pypy/module/cpyext/buffer.py --- a/pypy/module/cpyext/buffer.py +++ b/pypy/module/cpyext/buffer.py @@ -1,6 +1,36 @@ +from pypy.interpreter.error import 
OperationError from pypy.rpython.lltypesystem import rffi, lltype from pypy.module.cpyext.api import ( cpython_api, CANNOT_FAIL, Py_buffer) +from pypy.module.cpyext.pyobject import PyObject + + at cpython_api([PyObject], rffi.INT_real, error=CANNOT_FAIL) +def PyObject_CheckBuffer(space, w_obj): + """Return 1 if obj supports the buffer interface otherwise 0.""" + return 0 # the bf_getbuffer field is never filled by cpyext + + at cpython_api([PyObject, lltype.Ptr(Py_buffer), rffi.INT_real], + rffi.INT_real, error=-1) +def PyObject_GetBuffer(space, w_obj, view, flags): + """Export obj into a Py_buffer, view. These arguments must + never be NULL. The flags argument is a bit field indicating what + kind of buffer the caller is prepared to deal with and therefore what + kind of buffer the exporter is allowed to return. The buffer interface + allows for complicated memory sharing possibilities, but some caller may + not be able to handle all the complexity but may want to see if the + exporter will let them take a simpler view to its memory. + + Some exporters may not be able to share memory in every possible way and + may need to raise errors to signal to some consumers that something is + just not possible. These errors should be a BufferError unless + there is another error that is actually causing the problem. The + exporter can use flags information to simplify how much of the + Py_buffer structure is filled in with non-default values and/or + raise an error if the object can't support a simpler view of its memory. + + 0 is returned on success and -1 on error.""" + raise OperationError(space.w_TypeError, space.wrap( + 'PyPy does not yet implement the new buffer interface')) @cpython_api([lltype.Ptr(Py_buffer), lltype.Char], rffi.INT_real, error=CANNOT_FAIL) def PyBuffer_IsContiguous(space, view, fortran): diff --git a/pypy/module/cpyext/include/object.h b/pypy/module/cpyext/include/object.h --- a/pypy/module/cpyext/include/object.h +++ b/pypy/module/cpyext/include/object.h @@ -123,10 +123,6 @@ typedef Py_ssize_t (*segcountproc)(PyObject *, Py_ssize_t *); typedef Py_ssize_t (*charbufferproc)(PyObject *, Py_ssize_t, char **); -typedef int (*objobjproc)(PyObject *, PyObject *); -typedef int (*visitproc)(PyObject *, void *); -typedef int (*traverseproc)(PyObject *, visitproc, void *); - /* Py3k buffer interface */ typedef struct bufferinfo { void *buf; @@ -153,6 +149,41 @@ typedef int (*getbufferproc)(PyObject *, Py_buffer *, int); typedef void (*releasebufferproc)(PyObject *, Py_buffer *); + /* Flags for getting buffers */ +#define PyBUF_SIMPLE 0 +#define PyBUF_WRITABLE 0x0001 +/* we used to include an E, backwards compatible alias */ +#define PyBUF_WRITEABLE PyBUF_WRITABLE +#define PyBUF_FORMAT 0x0004 +#define PyBUF_ND 0x0008 +#define PyBUF_STRIDES (0x0010 | PyBUF_ND) +#define PyBUF_C_CONTIGUOUS (0x0020 | PyBUF_STRIDES) +#define PyBUF_F_CONTIGUOUS (0x0040 | PyBUF_STRIDES) +#define PyBUF_ANY_CONTIGUOUS (0x0080 | PyBUF_STRIDES) +#define PyBUF_INDIRECT (0x0100 | PyBUF_STRIDES) + +#define PyBUF_CONTIG (PyBUF_ND | PyBUF_WRITABLE) +#define PyBUF_CONTIG_RO (PyBUF_ND) + +#define PyBUF_STRIDED (PyBUF_STRIDES | PyBUF_WRITABLE) +#define PyBUF_STRIDED_RO (PyBUF_STRIDES) + +#define PyBUF_RECORDS (PyBUF_STRIDES | PyBUF_WRITABLE | PyBUF_FORMAT) +#define PyBUF_RECORDS_RO (PyBUF_STRIDES | PyBUF_FORMAT) + +#define PyBUF_FULL (PyBUF_INDIRECT | PyBUF_WRITABLE | PyBUF_FORMAT) +#define PyBUF_FULL_RO (PyBUF_INDIRECT | PyBUF_FORMAT) + + +#define PyBUF_READ 0x100 +#define PyBUF_WRITE 0x200 +#define PyBUF_SHADOW 0x400 
+/* end Py3k buffer interface */ + +typedef int (*objobjproc)(PyObject *, PyObject *); +typedef int (*visitproc)(PyObject *, void *); +typedef int (*traverseproc)(PyObject *, visitproc, void *); + typedef struct { /* For numbers without flag bit Py_TPFLAGS_CHECKTYPES set, all arguments are guaranteed to be of the object's type (modulo diff --git a/pypy/module/cpyext/include/patchlevel.h b/pypy/module/cpyext/include/patchlevel.h --- a/pypy/module/cpyext/include/patchlevel.h +++ b/pypy/module/cpyext/include/patchlevel.h @@ -29,7 +29,7 @@ #define PY_VERSION "2.7.1" /* PyPy version as a string */ -#define PYPY_VERSION "1.7.1" +#define PYPY_VERSION "1.8.1" /* Subversion Revision number of this file (not of the repository). * Empty since Mercurial migration. */ diff --git a/pypy/module/cpyext/include/pystate.h b/pypy/module/cpyext/include/pystate.h --- a/pypy/module/cpyext/include/pystate.h +++ b/pypy/module/cpyext/include/pystate.h @@ -5,7 +5,7 @@ struct _is; /* Forward */ typedef struct _is { - int _foo; + struct _is *next; } PyInterpreterState; typedef struct _ts { diff --git a/pypy/module/cpyext/pystate.py b/pypy/module/cpyext/pystate.py --- a/pypy/module/cpyext/pystate.py +++ b/pypy/module/cpyext/pystate.py @@ -2,7 +2,10 @@ cpython_api, generic_cpy_call, CANNOT_FAIL, CConfig, cpython_struct) from pypy.rpython.lltypesystem import rffi, lltype -PyInterpreterState = lltype.Ptr(cpython_struct("PyInterpreterState", ())) +PyInterpreterStateStruct = lltype.ForwardReference() +PyInterpreterState = lltype.Ptr(PyInterpreterStateStruct) +cpython_struct( + "PyInterpreterState", [('next', PyInterpreterState)], PyInterpreterStateStruct) PyThreadState = lltype.Ptr(cpython_struct("PyThreadState", [('interp', PyInterpreterState)])) @cpython_api([], PyThreadState, error=CANNOT_FAIL) @@ -54,7 +57,8 @@ class InterpreterState(object): def __init__(self, space): - self.interpreter_state = lltype.malloc(PyInterpreterState.TO, flavor='raw', immortal=True) + self.interpreter_state = lltype.malloc( + PyInterpreterState.TO, flavor='raw', zero=True, immortal=True) def new_thread_state(self): capsule = ThreadStateCapsule() diff --git a/pypy/module/cpyext/stubs.py b/pypy/module/cpyext/stubs.py --- a/pypy/module/cpyext/stubs.py +++ b/pypy/module/cpyext/stubs.py @@ -34,141 +34,6 @@ @cpython_api([PyObject], rffi.INT_real, error=CANNOT_FAIL) def PyObject_CheckBuffer(space, obj): - """Return 1 if obj supports the buffer interface otherwise 0.""" - raise NotImplementedError - - at cpython_api([PyObject, Py_buffer, rffi.INT_real], rffi.INT_real, error=-1) -def PyObject_GetBuffer(space, obj, view, flags): - """Export obj into a Py_buffer, view. These arguments must - never be NULL. The flags argument is a bit field indicating what - kind of buffer the caller is prepared to deal with and therefore what - kind of buffer the exporter is allowed to return. The buffer interface - allows for complicated memory sharing possibilities, but some caller may - not be able to handle all the complexity but may want to see if the - exporter will let them take a simpler view to its memory. - - Some exporters may not be able to share memory in every possible way and - may need to raise errors to signal to some consumers that something is - just not possible. These errors should be a BufferError unless - there is another error that is actually causing the problem. 
The - exporter can use flags information to simplify how much of the - Py_buffer structure is filled in with non-default values and/or - raise an error if the object can't support a simpler view of its memory. - - 0 is returned on success and -1 on error. - - The following table gives possible values to the flags arguments. - - Flag - - Description - - PyBUF_SIMPLE - - This is the default flag state. The returned - buffer may or may not have writable memory. The - format of the data will be assumed to be unsigned - bytes. This is a "stand-alone" flag constant. It - never needs to be '|'d to the others. The exporter - will raise an error if it cannot provide such a - contiguous buffer of bytes. - - PyBUF_WRITABLE - - The returned buffer must be writable. If it is - not writable, then raise an error. - - PyBUF_STRIDES - - This implies PyBUF_ND. The returned - buffer must provide strides information (i.e. the - strides cannot be NULL). This would be used when - the consumer can handle strided, discontiguous - arrays. Handling strides automatically assumes - you can handle shape. The exporter can raise an - error if a strided representation of the data is - not possible (i.e. without the suboffsets). - - PyBUF_ND - - The returned buffer must provide shape - information. The memory will be assumed C-style - contiguous (last dimension varies the - fastest). The exporter may raise an error if it - cannot provide this kind of contiguous buffer. If - this is not given then shape will be NULL. - - PyBUF_C_CONTIGUOUS - PyBUF_F_CONTIGUOUS - PyBUF_ANY_CONTIGUOUS - - These flags indicate that the contiguity returned - buffer must be respectively, C-contiguous (last - dimension varies the fastest), Fortran contiguous - (first dimension varies the fastest) or either - one. All of these flags imply - PyBUF_STRIDES and guarantee that the - strides buffer info structure will be filled in - correctly. - - PyBUF_INDIRECT - - This flag indicates the returned buffer must have - suboffsets information (which can be NULL if no - suboffsets are needed). This can be used when - the consumer can handle indirect array - referencing implied by these suboffsets. This - implies PyBUF_STRIDES. - - PyBUF_FORMAT - - The returned buffer must have true format - information if this flag is provided. This would - be used when the consumer is going to be checking - for what 'kind' of data is actually stored. An - exporter should always be able to provide this - information if requested. If format is not - explicitly requested then the format must be - returned as NULL (which means 'B', or - unsigned bytes) - - PyBUF_STRIDED - - This is equivalent to (PyBUF_STRIDES | - PyBUF_WRITABLE). - - PyBUF_STRIDED_RO - - This is equivalent to (PyBUF_STRIDES). - - PyBUF_RECORDS - - This is equivalent to (PyBUF_STRIDES | - PyBUF_FORMAT | PyBUF_WRITABLE). - - PyBUF_RECORDS_RO - - This is equivalent to (PyBUF_STRIDES | - PyBUF_FORMAT). - - PyBUF_FULL - - This is equivalent to (PyBUF_INDIRECT | - PyBUF_FORMAT | PyBUF_WRITABLE). - - PyBUF_FULL_RO - - This is equivalent to (PyBUF_INDIRECT | - PyBUF_FORMAT). - - PyBUF_CONTIG - - This is equivalent to (PyBUF_ND | - PyBUF_WRITABLE). 
- - PyBUF_CONTIG_RO - - This is equivalent to (PyBUF_ND).""" raise NotImplementedError @cpython_api([rffi.CCHARP], Py_ssize_t, error=CANNOT_FAIL) diff --git a/pypy/module/cpyext/test/test_pystate.py b/pypy/module/cpyext/test/test_pystate.py --- a/pypy/module/cpyext/test/test_pystate.py +++ b/pypy/module/cpyext/test/test_pystate.py @@ -37,6 +37,7 @@ def test_thread_state_interp(self, space, api): ts = api.PyThreadState_Get() assert ts.c_interp == api.PyInterpreterState_Head() + assert ts.c_interp.c_next == nullptr(PyInterpreterState.TO) def test_basic_threadstate_dance(self, space, api): # Let extension modules call these functions, diff --git a/pypy/module/fcntl/test/test_fcntl.py b/pypy/module/fcntl/test/test_fcntl.py --- a/pypy/module/fcntl/test/test_fcntl.py +++ b/pypy/module/fcntl/test/test_fcntl.py @@ -42,13 +42,9 @@ else: start_len = "qq" - if sys.platform in ('netbsd1', 'netbsd2', 'netbsd3', - 'Darwin1.2', 'darwin', - 'freebsd2', 'freebsd3', 'freebsd4', 'freebsd5', - 'freebsd6', 'freebsd7', 'freebsd8', 'freebsd9', - 'bsdos2', 'bsdos3', 'bsdos4', - 'openbsd', 'openbsd2', 'openbsd3', 'openbsd4', - 'openbsd5'): + if any(substring in sys.platform.lower() + for substring in ('netbsd', 'darwin', 'freebsd', 'bsdos', + 'openbsd')): if struct.calcsize('l') == 8: off_t = 'l' pid_t = 'i' @@ -182,7 +178,8 @@ def test_large_flag(self): import sys - if sys.platform == "darwin" or sys.platform.startswith("openbsd"): + if any(plat in sys.platform + for plat in ('darwin', 'openbsd', 'freebsd')): skip("Mac OS doesn't have any large flag in fcntl.h") import fcntl, sys if sys.maxint == 2147483647: diff --git a/pypy/module/gc/__init__.py b/pypy/module/gc/__init__.py --- a/pypy/module/gc/__init__.py +++ b/pypy/module/gc/__init__.py @@ -1,18 +1,18 @@ from pypy.interpreter.mixedmodule import MixedModule class Module(MixedModule): - appleveldefs = { - 'enable': 'app_gc.enable', - 'disable': 'app_gc.disable', - 'isenabled': 'app_gc.isenabled', - } interpleveldefs = { 'collect': 'interp_gc.collect', + 'enable': 'interp_gc.enable', + 'disable': 'interp_gc.disable', + 'isenabled': 'interp_gc.isenabled', 'enable_finalizers': 'interp_gc.enable_finalizers', 'disable_finalizers': 'interp_gc.disable_finalizers', 'garbage' : 'space.newlist([])', #'dump_heap_stats': 'interp_gc.dump_heap_stats', } + appleveldefs = { + } def __init__(self, space, w_name): if (not space.config.translating or diff --git a/pypy/module/gc/app_gc.py b/pypy/module/gc/app_gc.py deleted file mode 100644 --- a/pypy/module/gc/app_gc.py +++ /dev/null @@ -1,21 +0,0 @@ -# NOT_RPYTHON - -enabled = True - -def isenabled(): - global enabled - return enabled - -def enable(): - global enabled - import gc - if not enabled: - gc.enable_finalizers() - enabled = True - -def disable(): - global enabled - import gc - if enabled: - gc.disable_finalizers() - enabled = False diff --git a/pypy/module/gc/interp_gc.py b/pypy/module/gc/interp_gc.py --- a/pypy/module/gc/interp_gc.py +++ b/pypy/module/gc/interp_gc.py @@ -17,6 +17,26 @@ rgc.collect() return space.wrap(0) +def enable(space): + """Non-recursive version. Enable finalizers now. + If they were already enabled, no-op. + If they were disabled even several times, enable them anyway. + """ + if not space.user_del_action.enabled_at_app_level: + space.user_del_action.enabled_at_app_level = True + enable_finalizers(space) + +def disable(space): + """Non-recursive version. Disable finalizers now. Several calls + to this function are ignored. 
+ """ + if space.user_del_action.enabled_at_app_level: + space.user_del_action.enabled_at_app_level = False + disable_finalizers(space) + +def isenabled(space): + return space.newbool(space.user_del_action.enabled_at_app_level) + def enable_finalizers(space): if space.user_del_action.finalizers_lock_count == 0: raise OperationError(space.w_ValueError, diff --git a/pypy/module/micronumpy/__init__.py b/pypy/module/micronumpy/__init__.py --- a/pypy/module/micronumpy/__init__.py +++ b/pypy/module/micronumpy/__init__.py @@ -9,7 +9,7 @@ appleveldefs = {} class Module(MixedModule): - applevel_name = 'numpypy' + applevel_name = '_numpypy' submodules = { 'pypy': PyPyModule @@ -27,6 +27,8 @@ 'dot': 'interp_numarray.dot', 'fromstring': 'interp_support.fromstring', 'flatiter': 'interp_numarray.W_FlatIterator', + 'isna': 'interp_numarray.isna', + 'concatenate': 'interp_numarray.concatenate', 'True_': 'types.Bool.True', 'False_': 'types.Bool.False', @@ -48,6 +50,7 @@ 'int_': 'interp_boxes.W_LongBox', 'inexact': 'interp_boxes.W_InexactBox', 'floating': 'interp_boxes.W_FloatingBox', + 'float_': 'interp_boxes.W_Float64Box', 'float32': 'interp_boxes.W_Float32Box', 'float64': 'interp_boxes.W_Float64Box', } @@ -69,6 +72,7 @@ ("exp", "exp"), ("fabs", "fabs"), ("floor", "floor"), + ("ceil", "ceil"), ("greater", "greater"), ("greater_equal", "greater_equal"), ("less", "less"), @@ -84,12 +88,13 @@ ("subtract", "subtract"), ('sqrt', 'sqrt'), ("tan", "tan"), + ('bitwise_and', 'bitwise_and'), + ('bitwise_or', 'bitwise_or'), ]: interpleveldefs[exposed] = "interp_ufuncs.get(space).%s" % impl appleveldefs = { 'average': 'app_numpy.average', - 'mean': 'app_numpy.mean', 'sum': 'app_numpy.sum', 'min': 'app_numpy.min', 'identity': 'app_numpy.identity', @@ -98,5 +103,4 @@ 'e': 'app_numpy.e', 'pi': 'app_numpy.pi', 'arange': 'app_numpy.arange', - 'reshape': 'app_numpy.reshape', } diff --git a/pypy/module/micronumpy/app_numpy.py b/pypy/module/micronumpy/app_numpy.py --- a/pypy/module/micronumpy/app_numpy.py +++ b/pypy/module/micronumpy/app_numpy.py @@ -1,6 +1,6 @@ import math -import numpypy +import _numpypy inf = float("inf") @@ -11,34 +11,55 @@ def average(a): # This implements a weighted average, for now we don't implement the # weighting, just the average part! - return mean(a) + if not hasattr(a, "mean"): + a = _numpypy.array(a) + return a.mean() def identity(n, dtype=None): - a = numpypy.zeros((n,n), dtype=dtype) + a = _numpypy.zeros((n, n), dtype=dtype) for i in range(n): a[i][i] = 1 return a -def mean(a): - if not hasattr(a, "mean"): - a = numpypy.array(a) - return a.mean() +def sum(a,axis=None): + '''sum(a, axis=None) + Sum of array elements over a given axis. -def sum(a): + Parameters + ---------- + a : array_like + Elements to sum. + axis : integer, optional + Axis over which the sum is taken. By default `axis` is None, + and all elements are summed. + + Returns + ------- + sum_along_axis : ndarray + An array with the same shape as `a`, with the specified + axis removed. If `a` is a 0-d array, or if `axis` is None, a scalar + is returned. If an output array is specified, a reference to + `out` is returned. + + See Also + -------- + ndarray.sum : Equivalent method. + ''' + # TODO: add to doc (once it's implemented): cumsum : Cumulative sum of array elements. 
if not hasattr(a, "sum"): - a = numpypy.array(a) - return a.sum() + a = _numpypy.array(a) + return a.sum(axis) -def min(a): +def min(a, axis=None): if not hasattr(a, "min"): - a = numpypy.array(a) - return a.min() + a = _numpypy.array(a) + return a.min(axis) -def max(a): +def max(a, axis=None): if not hasattr(a, "max"): - a = numpypy.array(a) - return a.max() - + a = _numpypy.array(a) + return a.max(axis) + def arange(start, stop=None, step=1, dtype=None): '''arange([start], stop[, step], dtype=None) Generate values in the half-interval [start, stop). @@ -47,48 +68,11 @@ stop = start start = 0 if dtype is None: - test = numpypy.array([start, stop, step, 0]) + test = _numpypy.array([start, stop, step, 0]) dtype = test.dtype - arr = numpypy.zeros(int(math.ceil((stop - start) / step)), dtype=dtype) + arr = _numpypy.zeros(int(math.ceil((stop - start) / step)), dtype=dtype) i = start for j in range(arr.size): arr[j] = i i += step return arr - - -def reshape(a, shape): - '''reshape(a, newshape) - Gives a new shape to an array without changing its data. - - Parameters - ---------- - a : array_like - Array to be reshaped. - newshape : int or tuple of ints - The new shape should be compatible with the original shape. If - an integer, then the result will be a 1-D array of that length. - One shape dimension can be -1. In this case, the value is inferred - from the length of the array and remaining dimensions. - - Returns - ------- - reshaped_array : ndarray - This will be a new view object if possible; otherwise, it will - be a copy. - - - See Also - -------- - ndarray.reshape : Equivalent method. - - Notes - ----- - - It is not always possible to change the shape of an array without - copying the data. If you want an error to be raise if the data is copied, - you should assign the new shape to the shape attribute of the array -''' - if not hasattr(a, 'reshape'): - a = numpypy.array(a) - return a.reshape(shape) diff --git a/pypy/module/micronumpy/compile.py b/pypy/module/micronumpy/compile.py --- a/pypy/module/micronumpy/compile.py +++ b/pypy/module/micronumpy/compile.py @@ -372,13 +372,17 @@ def execute(self, interp): if self.name in SINGLE_ARG_FUNCTIONS: - if len(self.args) != 1: + if len(self.args) != 1 and self.name != 'sum': raise ArgumentMismatch arr = self.args[0].execute(interp) if not isinstance(arr, BaseArray): raise ArgumentNotAnArray if self.name == "sum": - w_res = arr.descr_sum(interp.space) + if len(self.args)>1: + w_res = arr.descr_sum(interp.space, + self.args[1].execute(interp)) + else: + w_res = arr.descr_sum(interp.space) elif self.name == "prod": w_res = arr.descr_prod(interp.space) elif self.name == "max": @@ -416,7 +420,7 @@ ('\]', 'array_right'), ('(->)|[\+\-\*\/]', 'operator'), ('=', 'assign'), - (',', 'coma'), + (',', 'comma'), ('\|', 'pipe'), ('\(', 'paren_left'), ('\)', 'paren_right'), @@ -504,7 +508,7 @@ return SliceConstant(start, stop, step) - def parse_expression(self, tokens): + def parse_expression(self, tokens, accept_comma=False): stack = [] while tokens.remaining(): token = tokens.pop() @@ -524,9 +528,13 @@ stack.append(RangeConstant(tokens.pop().v)) end = tokens.pop() assert end.name == 'pipe' + elif accept_comma and token.name == 'comma': + continue else: tokens.push() break + if accept_comma: + return stack stack.reverse() lhs = stack.pop() while stack: @@ -540,7 +548,7 @@ args = [] tokens.pop() # lparen while tokens.get(0).name != 'paren_right': - args.append(self.parse_expression(tokens)) + args += self.parse_expression(tokens, accept_comma=True) return 
FunctionCall(name, args) def parse_array_const(self, tokens): @@ -556,7 +564,7 @@ token = tokens.pop() if token.name == 'array_right': return elems - assert token.name == 'coma' + assert token.name == 'comma' def parse_statement(self, tokens): if (tokens.get(0).name == 'identifier' and diff --git a/pypy/module/micronumpy/interp_boxes.py b/pypy/module/micronumpy/interp_boxes.py --- a/pypy/module/micronumpy/interp_boxes.py +++ b/pypy/module/micronumpy/interp_boxes.py @@ -9,6 +9,7 @@ MIXIN_64 = (int_typedef,) if LONG_BIT == 64 else () +MIXIN_32 = () if LONG_BIT == 64 else (int_typedef,) def new_dtype_getter(name): def get_dtype(space): @@ -78,6 +79,7 @@ descr_sub = _binop_impl("subtract") descr_mul = _binop_impl("multiply") descr_div = _binop_impl("divide") + descr_pow = _binop_impl("power") descr_eq = _binop_impl("equal") descr_ne = _binop_impl("not_equal") descr_lt = _binop_impl("less") @@ -170,6 +172,7 @@ __sub__ = interp2app(W_GenericBox.descr_sub), __mul__ = interp2app(W_GenericBox.descr_mul), __div__ = interp2app(W_GenericBox.descr_div), + __pow__ = interp2app(W_GenericBox.descr_pow), __radd__ = interp2app(W_GenericBox.descr_radd), __rsub__ = interp2app(W_GenericBox.descr_rsub), @@ -229,7 +232,7 @@ __new__ = interp2app(W_UInt16Box.descr__new__.im_func), ) -W_Int32Box.typedef = TypeDef("int32", W_SignedIntegerBox.typedef, +W_Int32Box.typedef = TypeDef("int32", (W_SignedIntegerBox.typedef,) + MIXIN_32, __module__ = "numpypy", __new__ = interp2app(W_Int32Box.descr__new__.im_func), ) @@ -239,23 +242,18 @@ __new__ = interp2app(W_UInt32Box.descr__new__.im_func), ) -if LONG_BIT == 32: - long_name = "int32" -elif LONG_BIT == 64: - long_name = "int64" -W_LongBox.typedef = TypeDef(long_name, (W_SignedIntegerBox.typedef, int_typedef,), - __module__ = "numpypy", -) - -W_ULongBox.typedef = TypeDef("u" + long_name, W_UnsignedIntegerBox.typedef, - __module__ = "numpypy", -) - W_Int64Box.typedef = TypeDef("int64", (W_SignedIntegerBox.typedef,) + MIXIN_64, __module__ = "numpypy", __new__ = interp2app(W_Int64Box.descr__new__.im_func), ) +if LONG_BIT == 32: + W_LongBox = W_Int32Box + W_ULongBox = W_UInt32Box +elif LONG_BIT == 64: + W_LongBox = W_Int64Box + W_ULongBox = W_UInt64Box + W_UInt64Box.typedef = TypeDef("uint64", W_UnsignedIntegerBox.typedef, __module__ = "numpypy", __new__ = interp2app(W_UInt64Box.descr__new__.im_func), diff --git a/pypy/module/micronumpy/interp_dtype.py b/pypy/module/micronumpy/interp_dtype.py --- a/pypy/module/micronumpy/interp_dtype.py +++ b/pypy/module/micronumpy/interp_dtype.py @@ -20,7 +20,7 @@ class W_Dtype(Wrappable): _immutable_fields_ = ["itemtype", "num", "kind"] - def __init__(self, itemtype, num, kind, name, char, w_box_type, alternate_constructors=[]): + def __init__(self, itemtype, num, kind, name, char, w_box_type, alternate_constructors=[], aliases=[]): self.itemtype = itemtype self.num = num self.kind = kind @@ -28,6 +28,7 @@ self.char = char self.w_box_type = w_box_type self.alternate_constructors = alternate_constructors + self.aliases = aliases def malloc(self, length): # XXX find out why test_zjit explodes with tracking of allocations @@ -46,6 +47,10 @@ def getitem(self, storage, i): return self.itemtype.read(storage, self.itemtype.get_element_size(), i, 0) + def getitem_bool(self, storage, i): + isize = self.itemtype.get_element_size() + return self.itemtype.read_bool(storage, isize, i, 0) + def setitem(self, storage, i, box): self.itemtype.store(storage, self.itemtype.get_element_size(), i, 0, box) @@ -62,7 +67,7 @@ elif space.isinstance_w(w_dtype, 
space.w_str): name = space.str_w(w_dtype) for dtype in cache.builtin_dtypes: - if dtype.name == name or dtype.char == name: + if dtype.name == name or dtype.char == name or name in dtype.aliases: return dtype else: for dtype in cache.builtin_dtypes: @@ -84,12 +89,30 @@ def descr_get_shape(self, space): return space.newtuple([]) + def eq(self, space, w_other): + w_other = space.call_function(space.gettypefor(W_Dtype), w_other) + return space.is_w(self, w_other) + + def descr_eq(self, space, w_other): + return space.wrap(self.eq(space, w_other)) + + def descr_ne(self, space, w_other): + return space.wrap(not self.eq(space, w_other)) + + def is_int_type(self): + return self.kind == SIGNEDLTR or self.kind == UNSIGNEDLTR + + def is_bool_type(self): + return self.kind == BOOLLTR + W_Dtype.typedef = TypeDef("dtype", __module__ = "numpypy", __new__ = interp2app(W_Dtype.descr__new__.im_func), __str__= interp2app(W_Dtype.descr_str), __repr__ = interp2app(W_Dtype.descr_repr), + __eq__ = interp2app(W_Dtype.descr_eq), + __ne__ = interp2app(W_Dtype.descr_ne), num = interp_attrproperty("num", cls=W_Dtype), kind = interp_attrproperty("kind", cls=W_Dtype), @@ -107,7 +130,7 @@ kind=BOOLLTR, name="bool", char="?", - w_box_type = space.gettypefor(interp_boxes.W_BoolBox), + w_box_type=space.gettypefor(interp_boxes.W_BoolBox), alternate_constructors=[space.w_bool], ) self.w_int8dtype = W_Dtype( @@ -116,7 +139,7 @@ kind=SIGNEDLTR, name="int8", char="b", - w_box_type = space.gettypefor(interp_boxes.W_Int8Box) + w_box_type=space.gettypefor(interp_boxes.W_Int8Box) ) self.w_uint8dtype = W_Dtype( types.UInt8(), @@ -124,7 +147,7 @@ kind=UNSIGNEDLTR, name="uint8", char="B", - w_box_type = space.gettypefor(interp_boxes.W_UInt8Box), + w_box_type=space.gettypefor(interp_boxes.W_UInt8Box), ) self.w_int16dtype = W_Dtype( types.Int16(), @@ -132,7 +155,7 @@ kind=SIGNEDLTR, name="int16", char="h", - w_box_type = space.gettypefor(interp_boxes.W_Int16Box), + w_box_type=space.gettypefor(interp_boxes.W_Int16Box), ) self.w_uint16dtype = W_Dtype( types.UInt16(), @@ -140,7 +163,7 @@ kind=UNSIGNEDLTR, name="uint16", char="H", - w_box_type = space.gettypefor(interp_boxes.W_UInt16Box), + w_box_type=space.gettypefor(interp_boxes.W_UInt16Box), ) self.w_int32dtype = W_Dtype( types.Int32(), @@ -148,7 +171,7 @@ kind=SIGNEDLTR, name="int32", char="i", - w_box_type = space.gettypefor(interp_boxes.W_Int32Box), + w_box_type=space.gettypefor(interp_boxes.W_Int32Box), ) self.w_uint32dtype = W_Dtype( types.UInt32(), @@ -156,7 +179,7 @@ kind=UNSIGNEDLTR, name="uint32", char="I", - w_box_type = space.gettypefor(interp_boxes.W_UInt32Box), + w_box_type=space.gettypefor(interp_boxes.W_UInt32Box), ) if LONG_BIT == 32: name = "int32" @@ -168,7 +191,7 @@ kind=SIGNEDLTR, name=name, char="l", - w_box_type = space.gettypefor(interp_boxes.W_LongBox), + w_box_type=space.gettypefor(interp_boxes.W_LongBox), alternate_constructors=[space.w_int], ) self.w_ulongdtype = W_Dtype( @@ -177,7 +200,7 @@ kind=UNSIGNEDLTR, name="u" + name, char="L", - w_box_type = space.gettypefor(interp_boxes.W_ULongBox), + w_box_type=space.gettypefor(interp_boxes.W_ULongBox), ) self.w_int64dtype = W_Dtype( types.Int64(), @@ -185,7 +208,7 @@ kind=SIGNEDLTR, name="int64", char="q", - w_box_type = space.gettypefor(interp_boxes.W_Int64Box), + w_box_type=space.gettypefor(interp_boxes.W_Int64Box), alternate_constructors=[space.w_long], ) self.w_uint64dtype = W_Dtype( @@ -194,7 +217,7 @@ kind=UNSIGNEDLTR, name="uint64", char="Q", - w_box_type = space.gettypefor(interp_boxes.W_UInt64Box), + 
w_box_type=space.gettypefor(interp_boxes.W_UInt64Box), ) self.w_float32dtype = W_Dtype( types.Float32(), @@ -202,7 +225,7 @@ kind=FLOATINGLTR, name="float32", char="f", - w_box_type = space.gettypefor(interp_boxes.W_Float32Box), + w_box_type=space.gettypefor(interp_boxes.W_Float32Box), ) self.w_float64dtype = W_Dtype( types.Float64(), @@ -212,6 +235,7 @@ char="d", w_box_type = space.gettypefor(interp_boxes.W_Float64Box), alternate_constructors=[space.w_float], + aliases=["float"], ) self.builtin_dtypes = [ diff --git a/pypy/module/micronumpy/interp_iter.py b/pypy/module/micronumpy/interp_iter.py --- a/pypy/module/micronumpy/interp_iter.py +++ b/pypy/module/micronumpy/interp_iter.py @@ -1,19 +1,37 @@ from pypy.rlib import jit from pypy.rlib.objectmodel import instantiate -from pypy.module.micronumpy.strides import calculate_broadcast_strides +from pypy.module.micronumpy.strides import calculate_broadcast_strides,\ + calculate_slice_strides -# Iterators for arrays -# -------------------- -# all those iterators with the exception of BroadcastIterator iterate over the -# entire array in C order (the last index changes the fastest). This will -# yield all elements. Views iterate over indices and look towards strides and -# backstrides to find the correct position. Notably the offset between -# x[..., i + 1] and x[..., i] will be strides[-1]. Offset between -# x[..., k + 1, 0] and x[..., k, i_max] will be backstrides[-2] etc. +# structures to describe slicing -# BroadcastIterator works like that, but for indexes that don't change source -# in the original array, strides[i] == backstrides[i] == 0 +class Chunk(object): + def __init__(self, start, stop, step, lgt): + self.start = start + self.stop = stop + self.step = step + self.lgt = lgt + + def extend_shape(self, shape): + if self.step != 0: + shape.append(self.lgt) + + def __repr__(self): + return 'Chunk(%d, %d, %d, %d)' % (self.start, self.stop, self.step, + self.lgt) + +class BaseTransform(object): + pass + +class ViewTransform(BaseTransform): + def __init__(self, chunks): + # 4-tuple specifying slicing + self.chunks = chunks + +class BroadcastTransform(BaseTransform): + def __init__(self, res_shape): + self.res_shape = res_shape class BaseIterator(object): def next(self, shapelen): @@ -22,20 +40,40 @@ def done(self): raise NotImplementedError + def apply_transformations(self, arr, transformations): + v = self + for transform in transformations: + v = v.transform(arr, transform) + return v + + def transform(self, arr, t): + raise NotImplementedError + class ArrayIterator(BaseIterator): def __init__(self, size): self.offset = 0 self.size = size def next(self, shapelen): + return self._next(1) + + def _next(self, ofs): arr = instantiate(ArrayIterator) arr.size = self.size - arr.offset = self.offset + 1 + arr.offset = self.offset + ofs return arr + def next_no_increase(self, shapelen): + # a hack to make JIT believe this is always virtual + return self._next(0) + def done(self): return self.offset >= self.size + def transform(self, arr, t): + return ViewIterator(arr.start, arr.strides, arr.backstrides, + arr.shape).transform(arr, t) + class OneDimIterator(BaseIterator): def __init__(self, start, step, stop): self.offset = start @@ -52,26 +90,29 @@ def done(self): return self.offset == self.size -def view_iter_from_arr(arr): - return ViewIterator(arr.start, arr.strides, arr.backstrides, arr.shape) - class ViewIterator(BaseIterator): - def __init__(self, start, strides, backstrides, shape, res_shape=None): + def __init__(self, start, strides, 
backstrides, shape): self.offset = start self._done = False - if res_shape is not None and res_shape != shape: - r = calculate_broadcast_strides(strides, backstrides, - shape, res_shape) - self.strides, self.backstrides = r - self.res_shape = res_shape - else: - self.strides = strides - self.backstrides = backstrides - self.res_shape = shape + self.strides = strides + self.backstrides = backstrides + self.res_shape = shape self.indices = [0] * len(self.res_shape) + def transform(self, arr, t): + if isinstance(t, BroadcastTransform): + r = calculate_broadcast_strides(self.strides, self.backstrides, + self.res_shape, t.res_shape) + return ViewIterator(self.offset, r[0], r[1], t.res_shape) + elif isinstance(t, ViewTransform): + r = calculate_slice_strides(self.res_shape, self.offset, + self.strides, + self.backstrides, t.chunks) + return ViewIterator(r[1], r[2], r[3], r[0]) + @jit.unroll_safe def next(self, shapelen): + shapelen = jit.promote(len(self.res_shape)) offset = self.offset indices = [0] * shapelen for i in range(shapelen): @@ -96,6 +137,13 @@ res._done = done return res + def apply_transformations(self, arr, transformations): + v = BaseIterator.apply_transformations(self, arr, transformations) + if len(arr.shape) == 1: + return OneDimIterator(self.offset, self.strides[0], + self.res_shape[0]) + return v + def done(self): return self._done @@ -103,11 +151,59 @@ def next(self, shapelen): return self + def transform(self, arr, t): + pass + +class AxisIterator(BaseIterator): + def __init__(self, start, dim, shape, strides, backstrides): + self.res_shape = shape[:] + self.strides = strides[:dim] + [0] + strides[dim:] + self.backstrides = backstrides[:dim] + [0] + backstrides[dim:] + self.first_line = True + self.indices = [0] * len(shape) + self._done = False + self.offset = start + self.dim = dim + + @jit.unroll_safe + def next(self, shapelen): + offset = self.offset + first_line = self.first_line + indices = [0] * shapelen + for i in range(shapelen): + indices[i] = self.indices[i] + done = False + for i in range(shapelen - 1, -1, -1): + if indices[i] < self.res_shape[i] - 1: + if i == self.dim: + first_line = False + indices[i] += 1 + offset += self.strides[i] + break + else: + if i == self.dim: + first_line = True + indices[i] = 0 + offset -= self.backstrides[i] + else: + done = True + res = instantiate(AxisIterator) + res.offset = offset + res.indices = indices + res.strides = self.strides + res.backstrides = self.backstrides + res.res_shape = self.res_shape + res._done = done + res.first_line = first_line + res.dim = self.dim + return res + + def done(self): + return self._done + # ------ other iterators that are not part of the computation frame ---------- - -class AxisIterator(object): - """ This object will return offsets of each start of the last stride - """ + +class SkipLastAxisIterator(object): def __init__(self, arr): self.arr = arr self.indices = [0] * (len(arr.shape) - 1) @@ -125,4 +221,3 @@ self.offset -= self.arr.backstrides[i] else: self.done = True - diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -1,39 +1,62 @@ from pypy.interpreter.baseobjspace import Wrappable from pypy.interpreter.error import OperationError, operationerrfmt -from pypy.interpreter.gateway import interp2app, NoneNotWrapped +from pypy.interpreter.gateway import interp2app, NoneNotWrapped, unwrap_spec from pypy.interpreter.typedef import TypeDef, 
GetSetProperty -from pypy.module.micronumpy import interp_ufuncs, interp_dtype, signature +from pypy.module.micronumpy import interp_ufuncs, interp_dtype, signature,\ + interp_boxes from pypy.module.micronumpy.strides import calculate_slice_strides from pypy.rlib import jit from pypy.rpython.lltypesystem import lltype, rffi from pypy.tool.sourcetools import func_with_new_name from pypy.rlib.rstring import StringBuilder -from pypy.module.micronumpy.interp_iter import ArrayIterator,\ - view_iter_from_arr, OneDimIterator, AxisIterator +from pypy.module.micronumpy.interp_iter import ArrayIterator, OneDimIterator,\ + SkipLastAxisIterator, Chunk, ViewIterator numpy_driver = jit.JitDriver( greens=['shapelen', 'sig'], virtualizables=['frame'], reds=['result_size', 'frame', 'ri', 'self', 'result'], get_printable_location=signature.new_printable_location('numpy'), + name='numpy', ) all_driver = jit.JitDriver( greens=['shapelen', 'sig'], virtualizables=['frame'], reds=['frame', 'self', 'dtype'], get_printable_location=signature.new_printable_location('all'), + name='numpy_all', ) any_driver = jit.JitDriver( greens=['shapelen', 'sig'], virtualizables=['frame'], reds=['frame', 'self', 'dtype'], get_printable_location=signature.new_printable_location('any'), + name='numpy_any', ) slice_driver = jit.JitDriver( greens=['shapelen', 'sig'], virtualizables=['frame'], - reds=['self', 'frame', 'source', 'res_iter'], + reds=['self', 'frame', 'arr'], get_printable_location=signature.new_printable_location('slice'), + name='numpy_slice', +) +count_driver = jit.JitDriver( + greens=['shapelen'], + virtualizables=['frame'], + reds=['s', 'frame', 'iter', 'arr'], + name='numpy_count' +) +filter_driver = jit.JitDriver( + greens=['shapelen', 'sig'], + virtualizables=['frame'], + reds=['concr', 'argi', 'ri', 'frame', 'v', 'res', 'self'], + name='numpy_filter', +) +filter_set_driver = jit.JitDriver( + greens=['shapelen', 'sig'], + virtualizables=['frame'], + reds=['idx', 'idxi', 'frame', 'arr'], + name='numpy_filterset', ) def _find_shape_and_elems(space, w_iterable): @@ -151,7 +174,7 @@ # fits the new shape, using those steps. If there is a shape/step mismatch # (meaning that the realignment of elements crosses from one step into another) # return None so that the caller can raise an exception. 
-def calc_new_strides(new_shape, old_shape, old_strides): +def calc_new_strides(new_shape, old_shape, old_strides, order): # Return the proper strides for new_shape, or None if the mapping crosses # stepping boundaries @@ -161,7 +184,7 @@ last_step = 1 oldI = 0 new_strides = [] - if old_strides[0] < old_strides[-1]: + if order == 'F': for i in range(len(old_shape)): steps.append(old_strides[i] / last_step) last_step *= old_shape[i] @@ -178,11 +201,10 @@ n_old_elems_to_use *= old_shape[oldI] if n_new_elems_used == n_old_elems_to_use: oldI += 1 - if oldI >= len(old_shape): - break - cur_step = steps[oldI] - n_old_elems_to_use *= old_shape[oldI] - else: + if oldI < len(old_shape): + cur_step = steps[oldI] + n_old_elems_to_use *= old_shape[oldI] + elif order == 'C': for i in range(len(old_shape) - 1, -1, -1): steps.insert(0, old_strides[i] / last_step) last_step *= old_shape[i] @@ -201,10 +223,10 @@ n_old_elems_to_use *= old_shape[oldI] if n_new_elems_used == n_old_elems_to_use: oldI -= 1 - if oldI < -len(old_shape): - break - cur_step = steps[oldI] - n_old_elems_to_use *= old_shape[oldI] + if oldI >= -len(old_shape): + cur_step = steps[oldI] + n_old_elems_to_use *= old_shape[oldI] + assert len(new_strides) == len(new_shape) return new_strides class BaseArray(Wrappable): @@ -266,6 +288,9 @@ descr_gt = _binop_impl("greater") descr_ge = _binop_impl("greater_equal") + descr_and = _binop_impl("bitwise_and") + descr_or = _binop_impl("bitwise_or") + def _binop_right_impl(ufunc_name): def impl(self, space, w_other): w_other = scalar_w(space, @@ -282,13 +307,17 @@ descr_rpow = _binop_right_impl("power") descr_rmod = _binop_right_impl("mod") - def _reduce_ufunc_impl(ufunc_name): - def impl(self, space): - return getattr(interp_ufuncs.get(space), ufunc_name).reduce(space, self, multidim=True) + def _reduce_ufunc_impl(ufunc_name, promote_to_largest=False): + def impl(self, space, w_axis=None): + if space.is_w(w_axis, space.w_None): + w_axis = space.wrap(-1) + return getattr(interp_ufuncs.get(space), ufunc_name).reduce(space, + self, True, promote_to_largest, w_axis) return func_with_new_name(impl, "reduce_%s_impl" % ufunc_name) descr_sum = _reduce_ufunc_impl("add") - descr_prod = _reduce_ufunc_impl("multiply") + descr_sum_promote = _reduce_ufunc_impl("add", True) + descr_prod = _reduce_ufunc_impl("multiply", True) descr_max = _reduce_ufunc_impl("maximum") descr_min = _reduce_ufunc_impl("minimum") @@ -297,6 +326,7 @@ greens=['shapelen', 'sig'], reds=['result', 'idx', 'frame', 'self', 'cur_best', 'dtype'], get_printable_location=signature.new_printable_location(op_name), + name='numpy_' + op_name, ) def loop(self): sig = self.find_sig() @@ -372,7 +402,7 @@ else: w_res = self.descr_mul(space, w_other) assert isinstance(w_res, BaseArray) - return w_res.descr_sum(space) + return w_res.descr_sum(space, space.wrap(-1)) def get_concrete(self): raise NotImplementedError @@ -400,9 +430,22 @@ def descr_copy(self, space): return self.copy(space) + def descr_flatten(self, space): + return self.flatten(space) + def copy(self, space): return self.get_concrete().copy(space) + def empty_copy(self, space, dtype): + shape = self.shape + size = 1 + for elem in shape: + size *= elem + return W_NDimArray(size, shape[:], dtype, 'C') + + def flatten(self, space): + return self.get_concrete().flatten(space) + def descr_len(self, space): if len(self.shape): return space.wrap(self.shape[0]) @@ -470,11 +513,69 @@ def _prepare_slice_args(self, space, w_idx): if (space.isinstance_w(w_idx, space.w_int) or space.isinstance_w(w_idx, 
space.w_slice)): - return [space.decode_index4(w_idx, self.shape[0])] - return [space.decode_index4(w_item, self.shape[i]) for i, w_item in + return [Chunk(*space.decode_index4(w_idx, self.shape[0]))] + return [Chunk(*space.decode_index4(w_item, self.shape[i])) for i, w_item in enumerate(space.fixedview(w_idx))] + def count_all_true(self, arr): + sig = arr.find_sig() + frame = sig.create_frame(self) + shapelen = len(arr.shape) + s = 0 + iter = None + while not frame.done(): + count_driver.jit_merge_point(arr=arr, frame=frame, iter=iter, s=s, + shapelen=shapelen) + iter = frame.get_final_iter() + s += arr.dtype.getitem_bool(arr.storage, iter.offset) + frame.next(shapelen) + return s + + def getitem_filter(self, space, arr): + concr = arr.get_concrete() + size = self.count_all_true(concr) + res = W_NDimArray(size, [size], self.find_dtype()) + ri = ArrayIterator(size) + shapelen = len(self.shape) + argi = concr.create_iter() + sig = self.find_sig() + frame = sig.create_frame(self) + v = None + while not frame.done(): + filter_driver.jit_merge_point(concr=concr, argi=argi, ri=ri, + frame=frame, v=v, res=res, sig=sig, + shapelen=shapelen, self=self) + if concr.dtype.getitem_bool(concr.storage, argi.offset): + v = sig.eval(frame, self) + res.setitem(ri.offset, v) + ri = ri.next(1) + else: + ri = ri.next_no_increase(1) + argi = argi.next(shapelen) + frame.next(shapelen) + return res + + def setitem_filter(self, space, idx, val): + size = self.count_all_true(idx) + arr = SliceArray([size], self.dtype, self, val) + sig = arr.find_sig() + shapelen = len(self.shape) + frame = sig.create_frame(arr) + idxi = idx.create_iter() + while not frame.done(): + filter_set_driver.jit_merge_point(idx=idx, idxi=idxi, sig=sig, + frame=frame, arr=arr, + shapelen=shapelen) + if idx.dtype.getitem_bool(idx.storage, idxi.offset): + sig.eval(frame, arr) + frame.next_from_second(1) + frame.next_first(shapelen) + idxi = idxi.next(shapelen) + def descr_getitem(self, space, w_idx): + if (isinstance(w_idx, BaseArray) and w_idx.shape == self.shape and + w_idx.find_dtype().is_bool_type()): + return self.getitem_filter(space, w_idx) if self._single_item_result(space, w_idx): concrete = self.get_concrete() item = concrete._index_of_single_item(space, w_idx) @@ -484,6 +585,11 @@ def descr_setitem(self, space, w_idx, w_value): self.invalidated() + if (isinstance(w_idx, BaseArray) and w_idx.shape == self.shape and + w_idx.find_dtype().is_bool_type()): + return self.get_concrete().setitem_filter(space, + w_idx.get_concrete(), + convert_to_array(space, w_value)) if self._single_item_result(space, w_idx): concrete = self.get_concrete() item = concrete._index_of_single_item(space, w_idx) @@ -500,9 +606,8 @@ def create_slice(self, chunks): shape = [] i = -1 - for i, (start_, stop, step, lgt) in enumerate(chunks): - if step != 0: - shape.append(lgt) + for i, chunk in enumerate(chunks): + chunk.extend_shape(shape) s = i + 1 assert s >= 0 shape += self.shape[s:] @@ -533,8 +638,8 @@ concrete = self.get_concrete() new_shape = get_shape_from_iterable(space, concrete.size, w_shape) # Since we got to here, prod(new_shape) == self.size - new_strides = calc_new_strides(new_shape, - concrete.shape, concrete.strides) + new_strides = calc_new_strides(new_shape, concrete.shape, + concrete.strides, concrete.order) if new_strides: # We can create a view, strides somehow match up. 
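            # For instance (per the tests below), a C-contiguous (3, 5, 7)
            # array with strides [35, 7, 1] reshaped to (105, 1) gets strides
            # [1, 1]: the same elements in the same order, so a view over the
            # existing storage is enough and nothing is copied.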
ndims = len(new_shape) @@ -560,8 +665,30 @@ ) return w_result - def descr_mean(self, space): - return space.div(self.descr_sum(space), space.wrap(self.size)) + def descr_mean(self, space, w_axis=None): + if space.is_w(w_axis, space.w_None): + w_axis = space.wrap(-1) + w_denom = space.wrap(self.size) + else: + dim = space.int_w(w_axis) + w_denom = space.wrap(self.shape[dim]) + return space.div(self.descr_sum_promote(space, w_axis), w_denom) + + def descr_var(self, space): + # var = mean((values - mean(values)) ** 2) + w_res = self.descr_sub(space, self.descr_mean(space, space.w_None)) + assert isinstance(w_res, BaseArray) + w_res = w_res.descr_pow(space, space.wrap(2)) + assert isinstance(w_res, BaseArray) + return w_res.descr_mean(space, space.w_None) + + def descr_std(self, space): + # std(v) = sqrt(var(v)) + return interp_ufuncs.get(space).sqrt.call(space, [self.descr_var(space)]) + + def descr_fill(self, space, w_value): + concr = self.get_concrete_or_scalar() + concr.fill(space, w_value) def descr_nonzero(self, space): if self.size > 1: @@ -596,11 +723,12 @@ def getitem(self, item): raise NotImplementedError - def find_sig(self, res_shape=None): + def find_sig(self, res_shape=None, arr=None): """ find a correct signature for the array """ res_shape = res_shape or self.shape - return signature.find_sig(self.create_sig(res_shape), self) + arr = arr or self + return signature.find_sig(self.create_sig(), arr) def descr_array_iface(self, space): if not self.shape: @@ -654,7 +782,15 @@ def copy(self, space): return Scalar(self.dtype, self.value) - def create_sig(self, res_shape): + def flatten(self, space): + array = W_NDimArray(self.size, [self.size], self.dtype) + array.setitem(0, self.value) + return array + + def fill(self, space, w_value): + self.value = self.dtype.coerce(space, w_value) + + def create_sig(self): return signature.ScalarSignature(self.dtype) def get_concrete_or_scalar(self): @@ -672,7 +808,8 @@ self.name = name def _del_sources(self): - # Function for deleting references to source arrays, to allow garbage-collecting them + # Function for deleting references to source arrays, + # to allow garbage-collecting them raise NotImplementedError def compute(self): @@ -688,8 +825,7 @@ frame=frame, ri=ri, self=self, result=result) - result.dtype.setitem(result.storage, ri.offset, - sig.eval(frame, self)) + result.setitem(ri.offset, sig.eval(frame, self)) frame.next(shapelen) ri = ri.next(shapelen) return result @@ -724,11 +860,11 @@ self.size = size VirtualArray.__init__(self, 'slice', shape, child.find_dtype()) - def create_sig(self, res_shape): + def create_sig(self): if self.forced_result is not None: - return self.forced_result.create_sig(res_shape) + return self.forced_result.create_sig() return signature.VirtualSliceSignature( - self.child.create_sig(res_shape)) + self.child.create_sig()) def force_if_needed(self): if self.forced_result is None: @@ -738,6 +874,7 @@ def _del_sources(self): self.child = None + class Call1(VirtualArray): def __init__(self, ufunc, name, shape, res_dtype, values): VirtualArray.__init__(self, name, shape, res_dtype) @@ -748,16 +885,17 @@ def _del_sources(self): self.values = None - def create_sig(self, res_shape): + def create_sig(self): if self.forced_result is not None: - return self.forced_result.create_sig(res_shape) - return signature.Call1(self.ufunc, self.name, - self.values.create_sig(res_shape)) + return self.forced_result.create_sig() + return signature.Call1(self.ufunc, self.name, self.values.create_sig()) class Call2(VirtualArray): """ 
Intermediate class for performing binary operations. """ + _immutable_fields_ = ['left', 'right'] + def __init__(self, ufunc, name, shape, calc_dtype, res_dtype, left, right): VirtualArray.__init__(self, name, shape, res_dtype) self.ufunc = ufunc @@ -772,12 +910,56 @@ self.left = None self.right = None - def create_sig(self, res_shape): + def create_sig(self): if self.forced_result is not None: - return self.forced_result.create_sig(res_shape) + return self.forced_result.create_sig() + if self.shape != self.left.shape and self.shape != self.right.shape: + return signature.BroadcastBoth(self.ufunc, self.name, + self.calc_dtype, + self.left.create_sig(), + self.right.create_sig()) + elif self.shape != self.left.shape: + return signature.BroadcastLeft(self.ufunc, self.name, + self.calc_dtype, + self.left.create_sig(), + self.right.create_sig()) + elif self.shape != self.right.shape: + return signature.BroadcastRight(self.ufunc, self.name, + self.calc_dtype, + self.left.create_sig(), + self.right.create_sig()) return signature.Call2(self.ufunc, self.name, self.calc_dtype, - self.left.create_sig(res_shape), - self.right.create_sig(res_shape)) + self.left.create_sig(), self.right.create_sig()) + +class SliceArray(Call2): + def __init__(self, shape, dtype, left, right, no_broadcast=False): + self.no_broadcast = no_broadcast + Call2.__init__(self, None, 'sliceloop', shape, dtype, dtype, left, + right) + + def create_sig(self): + lsig = self.left.create_sig() + rsig = self.right.create_sig() + if not self.no_broadcast and self.shape != self.right.shape: + return signature.SliceloopBroadcastSignature(self.ufunc, + self.name, + self.calc_dtype, + lsig, rsig) + return signature.SliceloopSignature(self.ufunc, self.name, + self.calc_dtype, + lsig, rsig) + +class AxisReduce(Call2): + """ NOTE: this is only used as a container, you should never + encounter such things in the wild. Remove this comment + when we'll make AxisReduce lazy + """ + _immutable_fields_ = ['left', 'right'] + + def __init__(self, ufunc, name, shape, dtype, left, right, dim): + Call2.__init__(self, ufunc, name, shape, dtype, dtype, + left, right) + self.dim = dim class ConcreteArray(BaseArray): """ An array that have actual storage, whether owned or not @@ -832,11 +1014,6 @@ self.strides = strides self.backstrides = backstrides - def array_sig(self, res_shape): - if res_shape is not None and self.shape != res_shape: - return signature.ViewSignature(self.dtype) - return signature.ArraySignature(self.dtype) - def to_str(self, space, comma, builder, indent=' ', use_ellipsis=False): '''Modifies builder with a representation of the array/slice The items will be seperated by a comma if comma is 1 @@ -850,14 +1027,14 @@ if size < 1: builder.append('[]') return - elif size == 1: - builder.append(dtype.itemtype.str_format(self.getitem(0))) - return if size > 1000: # Once this goes True it does not go back to False for recursive # calls use_ellipsis = True ndims = len(self.shape) + if ndims == 0: + builder.append(dtype.itemtype.str_format(self.getitem(0))) + return i = 0 builder.append('[') if ndims > 1: @@ -869,11 +1046,11 @@ builder.append('\n' + indent) else: builder.append(indent) - view = self.create_slice([(i, 0, 0, 1)]).get_concrete() + view = self.create_slice([Chunk(i, 0, 0, 1)]).get_concrete() view.to_str(space, comma, builder, indent=indent + ' ', use_ellipsis=use_ellipsis) if i < self.shape[0] - 1: - builder.append(ccomma +'\n' + indent + '...' + ncomma) + builder.append(ccomma + '\n' + indent + '...' 
+ ncomma) i = self.shape[0] - 3 else: i += 1 @@ -886,7 +1063,7 @@ builder.append(indent) # create_slice requires len(chunks) > 1 in order to reduce # shape - view = self.create_slice([(i, 0, 0, 1)]).get_concrete() + view = self.create_slice([Chunk(i, 0, 0, 1)]).get_concrete() view.to_str(space, comma, builder, indent=indent + ' ', use_ellipsis=use_ellipsis) i += 1 @@ -951,20 +1128,22 @@ self.dtype is w_value.find_dtype()): self._fast_setslice(space, w_value) else: - self._sliceloop(w_value, res_shape) + arr = SliceArray(self.shape, self.dtype, self, w_value) + self._sliceloop(arr) def _fast_setslice(self, space, w_value): assert isinstance(w_value, ConcreteArray) itemsize = self.dtype.itemtype.get_element_size() - if len(self.shape) == 1: + shapelen = len(self.shape) + if shapelen == 1: rffi.c_memcpy( rffi.ptradd(self.storage, self.start * itemsize), rffi.ptradd(w_value.storage, w_value.start * itemsize), self.size * itemsize ) else: - dest = AxisIterator(self) - source = AxisIterator(w_value) + dest = SkipLastAxisIterator(self) + source = SkipLastAxisIterator(w_value) while not dest.done: rffi.c_memcpy( rffi.ptradd(self.storage, dest.offset * itemsize), @@ -974,30 +1153,37 @@ source.next() dest.next() - def _sliceloop(self, source, res_shape): - sig = source.find_sig(res_shape) - frame = sig.create_frame(source, res_shape) - res_iter = view_iter_from_arr(self) - shapelen = len(res_shape) - while not res_iter.done(): - slice_driver.jit_merge_point(sig=sig, - frame=frame, - shapelen=shapelen, - self=self, source=source, - res_iter=res_iter) - self.setitem(res_iter.offset, sig.eval(frame, source).convert_to( - self.find_dtype())) + def _sliceloop(self, arr): + sig = arr.find_sig() + frame = sig.create_frame(arr) + shapelen = len(self.shape) + while not frame.done(): + slice_driver.jit_merge_point(sig=sig, frame=frame, self=self, + arr=arr, + shapelen=shapelen) + sig.eval(frame, arr) frame.next(shapelen) - res_iter = res_iter.next(shapelen) def copy(self, space): array = W_NDimArray(self.size, self.shape[:], self.dtype, self.order) array.setslice(space, self) return array + def flatten(self, space): + array = W_NDimArray(self.size, [self.size], self.dtype, self.order) + if self.supports_fast_slicing(): + array._fast_setslice(space, self) + else: + arr = SliceArray(array.shape, array.dtype, array, self, no_broadcast=True) + array._sliceloop(arr) + return array + + def fill(self, space, w_value): + self.setslice(space, scalar_w(space, self.dtype, w_value)) + class ViewArray(ConcreteArray): - def create_sig(self, res_shape): + def create_sig(self): return signature.ViewSignature(self.dtype) @@ -1015,6 +1201,10 @@ parent) self.start = start + def create_iter(self): + return ViewIterator(self.start, self.strides, self.backstrides, + self.shape) + def setshape(self, space, new_shape): if len(self.shape) < 1: return @@ -1038,7 +1228,8 @@ self.backstrides = backstrides self.shape = new_shape return - new_strides = calc_new_strides(new_shape, self.shape, self.strides) + new_strides = calc_new_strides(new_shape, self.shape, self.strides, + self.order) if new_strides is None: raise OperationError(space.w_AttributeError, space.wrap( "incompatible shape for a non-contiguous array")) @@ -1061,8 +1252,11 @@ self.shape = new_shape self.calc_strides(new_shape) - def create_sig(self, res_shape): - return self.array_sig(res_shape) + def create_iter(self): + return ArrayIterator(self.size) + + def create_sig(self): + return signature.ArraySignature(self.dtype) def __del__(self): lltype.free(self.storage, 
flavor='raw', track_allocation=False) @@ -1115,6 +1309,7 @@ arr = W_NDimArray(size, shape[:], dtype=dtype, order=order) shapelen = len(shape) arr_iter = ArrayIterator(arr.size) + # XXX we might want to have a jitdriver here for i in range(len(elems_w)): w_elem = elems_w[i] dtype.setitem(arr.storage, arr_iter.offset, @@ -1146,6 +1341,42 @@ return convert_to_array(space, w_obj2).descr_dot(space, w_arr) return w_arr.descr_dot(space, w_obj2) + at unwrap_spec(axis=int) +def concatenate(space, w_args, axis=0): + args_w = space.listview(w_args) + if len(args_w) == 0: + raise OperationError(space.w_ValueError, space.wrap("concatenation of zero-length sequences is impossible")) + args_w = [convert_to_array(space, w_arg) for w_arg in args_w] + dtype = args_w[0].find_dtype() + shape = args_w[0].shape[:] + if len(shape) <= axis: + raise OperationError(space.w_ValueError, + space.wrap("bad axis argument")) + for arr in args_w[1:]: + dtype = interp_ufuncs.find_binop_result_dtype(space, dtype, + arr.find_dtype()) + if len(arr.shape) <= axis: + raise OperationError(space.w_ValueError, + space.wrap("bad axis argument")) + for i, axis_size in enumerate(arr.shape): + if len(arr.shape) != len(shape) or (i != axis and axis_size != shape[i]): + raise OperationError(space.w_ValueError, space.wrap( + "array dimensions must agree except for axis being concatenated")) + elif i == axis: + shape[i] += axis_size + size = 1 + for elem in shape: + size *= elem + res = W_NDimArray(size, shape, dtype, 'C') + chunks = [Chunk(0, i, 1, i) for i in shape] + axis_start = 0 + for arr in args_w: + chunks[axis] = Chunk(axis_start, axis_start + arr.shape[axis], 1, + arr.shape[axis]) + res.create_slice(chunks).setslice(space, arr) + axis_start += arr.shape[axis] + return res + BaseArray.typedef = TypeDef( 'ndarray', __module__ = "numpypy", @@ -1181,6 +1412,9 @@ __gt__ = interp2app(BaseArray.descr_gt), __ge__ = interp2app(BaseArray.descr_ge), + __and__ = interp2app(BaseArray.descr_and), + __or__ = interp2app(BaseArray.descr_or), + __repr__ = interp2app(BaseArray.descr_repr), __str__ = interp2app(BaseArray.descr_str), __array_interface__ = GetSetProperty(BaseArray.descr_array_iface), @@ -1204,8 +1438,13 @@ all = interp2app(BaseArray.descr_all), any = interp2app(BaseArray.descr_any), dot = interp2app(BaseArray.descr_dot), + var = interp2app(BaseArray.descr_var), + std = interp2app(BaseArray.descr_std), + + fill = interp2app(BaseArray.descr_fill), copy = interp2app(BaseArray.descr_copy), + flatten = interp2app(BaseArray.descr_flatten), reshape = interp2app(BaseArray.descr_reshape), tolist = interp2app(BaseArray.descr_tolist), ) @@ -1243,3 +1482,11 @@ __iter__ = interp2app(W_FlatIterator.descr_iter), ) W_FlatIterator.acceptable_as_base_class = False + +def isna(space, w_obj): + if isinstance(w_obj, BaseArray): + arr = w_obj.empty_copy(space, + interp_dtype.get_dtype_cache(space).w_booldtype) + arr.fill(space, space.wrap(False)) + return arr + return space.wrap(False) diff --git a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py --- a/pypy/module/micronumpy/interp_ufuncs.py +++ b/pypy/module/micronumpy/interp_ufuncs.py @@ -3,19 +3,29 @@ from pypy.interpreter.gateway import interp2app from pypy.interpreter.typedef import TypeDef, GetSetProperty, interp_attrproperty from pypy.module.micronumpy import interp_boxes, interp_dtype -from pypy.module.micronumpy.signature import ReduceSignature, ScalarSignature,\ - find_sig, new_printable_location +from pypy.module.micronumpy.signature import ReduceSignature,\ + 
find_sig, new_printable_location, AxisReduceSignature, ScalarSignature from pypy.rlib import jit from pypy.rlib.rarithmetic import LONG_BIT from pypy.tool.sourcetools import func_with_new_name reduce_driver = jit.JitDriver( - greens = ['shapelen', "sig"], - virtualizables = ["frame"], - reds = ["frame", "self", "dtype", "value", "obj"], + greens=['shapelen', "sig"], + virtualizables=["frame"], + reds=["frame", "self", "dtype", "value", "obj"], get_printable_location=new_printable_location('reduce'), + name='numpy_reduce', ) +axisreduce_driver = jit.JitDriver( + greens=['shapelen', 'sig'], + virtualizables=['frame'], + reds=['self','arr', 'identity', 'frame'], + name='numpy_axisreduce', + get_printable_location=new_printable_location('axisreduce'), +) + + class W_Ufunc(Wrappable): _attrs_ = ["name", "promote_to_float", "promote_bools", "identity"] _immutable_fields_ = ["promote_to_float", "promote_bools", "name"] @@ -48,18 +58,72 @@ ) return self.call(space, __args__.arguments_w) - def descr_reduce(self, space, w_obj): - return self.reduce(space, w_obj, multidim=False) + def descr_reduce(self, space, w_obj, w_dim=0): + """reduce(...) + reduce(a, axis=0) - def reduce(self, space, w_obj, multidim): - from pypy.module.micronumpy.interp_numarray import convert_to_array, Scalar - + Reduces `a`'s dimension by one, by applying ufunc along one axis. + + Let :math:`a.shape = (N_0, ..., N_i, ..., N_{M-1})`. Then + :math:`ufunc.reduce(a, axis=i)[k_0, ..,k_{i-1}, k_{i+1}, .., k_{M-1}]` = + the result of iterating `j` over :math:`range(N_i)`, cumulatively applying + ufunc to each :math:`a[k_0, ..,k_{i-1}, j, k_{i+1}, .., k_{M-1}]`. + For a one-dimensional array, reduce produces results equivalent to: + :: + + r = op.identity # op = ufunc + for i in xrange(len(A)): + r = op(r, A[i]) + return r + + For example, add.reduce() is equivalent to sum(). + + Parameters + ---------- + a : array_like + The array to act on. + axis : int, optional + The axis along which to apply the reduction. 
+ + Examples + -------- + >>> np.multiply.reduce([2,3,5]) + 30 + + A multi-dimensional array example: + + >>> X = np.arange(8).reshape((2,2,2)) + >>> X + array([[[0, 1], + [2, 3]], + [[4, 5], + [6, 7]]]) + >>> np.add.reduce(X, 0) + array([[ 4, 6], + [ 8, 10]]) + >>> np.add.reduce(X) # confirm: default axis value is 0 + array([[ 4, 6], + [ 8, 10]]) + >>> np.add.reduce(X, 1) + array([[ 2, 4], + [10, 12]]) + >>> np.add.reduce(X, 2) + array([[ 1, 5], + [ 9, 13]]) + """ + return self.reduce(space, w_obj, False, False, w_dim) + + def reduce(self, space, w_obj, multidim, promote_to_largest, w_dim): + from pypy.module.micronumpy.interp_numarray import convert_to_array, \ + Scalar if self.argcount != 2: raise OperationError(space.w_ValueError, space.wrap("reduce only " "supported for binary functions")) - + dim = space.int_w(w_dim) assert isinstance(self, W_Ufunc2) obj = convert_to_array(space, w_obj) + if dim >= len(obj.shape): + raise OperationError(space.w_ValueError, space.wrap("axis(=%d) out of bounds" % dim)) if isinstance(obj, Scalar): raise OperationError(space.w_TypeError, space.wrap("cannot reduce " "on a scalar")) @@ -67,26 +131,80 @@ size = obj.size dtype = find_unaryop_result_dtype( space, obj.find_dtype(), - promote_to_largest=True + promote_to_float=self.promote_to_float, + promote_to_largest=promote_to_largest, + promote_bools=True ) shapelen = len(obj.shape) + if self.identity is None and size == 0: + raise operationerrfmt(space.w_ValueError, "zero-size array to " + "%s.reduce without identity", self.name) + if shapelen > 1 and dim >= 0: + res = self.do_axis_reduce(obj, dtype, dim) + return space.wrap(res) + scalarsig = ScalarSignature(dtype) sig = find_sig(ReduceSignature(self.func, self.name, dtype, - ScalarSignature(dtype), - obj.create_sig(obj.shape)), obj) + scalarsig, + obj.create_sig()), obj) frame = sig.create_frame(obj) - if shapelen > 1 and not multidim: - raise OperationError(space.w_NotImplementedError, - space.wrap("not implemented yet")) if self.identity is None: - if size == 0: - raise operationerrfmt(space.w_ValueError, "zero-size array to " - "%s.reduce without identity", self.name) value = sig.eval(frame, obj).convert_to(dtype) frame.next(shapelen) else: value = self.identity.convert_to(dtype) return self.reduce_loop(shapelen, sig, frame, value, obj, dtype) + def do_axis_reduce(self, obj, dtype, dim): + from pypy.module.micronumpy.interp_numarray import AxisReduce,\ + W_NDimArray + + shape = obj.shape[0:dim] + obj.shape[dim + 1:len(obj.shape)] + size = 1 + for s in shape: + size *= s + result = W_NDimArray(size, shape, dtype) + rightsig = obj.create_sig() + # note - this is just a wrapper so signature can fetch + # both left and right, nothing more, especially + # this is not a true virtual array, because shapes + # don't quite match + arr = AxisReduce(self.func, self.name, obj.shape, dtype, + result, obj, dim) + scalarsig = ScalarSignature(dtype) + sig = find_sig(AxisReduceSignature(self.func, self.name, dtype, + scalarsig, rightsig), arr) + assert isinstance(sig, AxisReduceSignature) + frame = sig.create_frame(arr) + shapelen = len(obj.shape) + if self.identity is not None: + identity = self.identity.convert_to(dtype) + else: + identity = None + self.reduce_axis_loop(frame, sig, shapelen, arr, identity) + return result + + def reduce_axis_loop(self, frame, sig, shapelen, arr, identity): + # note - we can be advanterous here, depending on the exact field + # layout. 
For now let's say we iterate the original way and + # simply follow the original iteration order + while not frame.done(): + axisreduce_driver.jit_merge_point(frame=frame, self=self, + sig=sig, + identity=identity, + shapelen=shapelen, arr=arr) + iter = frame.get_final_iter() + v = sig.eval(frame, arr).convert_to(sig.calc_dtype) + if iter.first_line: + if identity is not None: + value = self.func(sig.calc_dtype, identity, v) + else: + value = v + else: + cur = arr.left.getitem(iter.offset) + value = self.func(sig.calc_dtype, cur, v) + arr.left.setitem(iter.offset, value) + frame.next(shapelen) + def reduce_loop(self, shapelen, sig, frame, value, obj, dtype): while not frame.done(): reduce_driver.jit_merge_point(sig=sig, @@ -94,10 +212,12 @@ value=value, obj=obj, frame=frame, dtype=dtype) assert isinstance(sig, ReduceSignature) - value = sig.binfunc(dtype, value, sig.eval(frame, obj).convert_to(dtype)) + value = sig.binfunc(dtype, value, + sig.eval(frame, obj).convert_to(dtype)) frame.next(shapelen) return value + class W_Ufunc1(W_Ufunc): argcount = 1 @@ -129,15 +249,16 @@ class W_Ufunc2(W_Ufunc): - _immutable_fields_ = ["comparison_func", "func", "name"] + _immutable_fields_ = ["comparison_func", "func", "name", "int_only"] argcount = 2 def __init__(self, func, name, promote_to_float=False, promote_bools=False, - identity=None, comparison_func=False): + identity=None, comparison_func=False, int_only=False): W_Ufunc.__init__(self, name, promote_to_float, promote_bools, identity) self.func = func self.comparison_func = comparison_func + self.int_only = int_only def call(self, space, args_w): from pypy.module.micronumpy.interp_numarray import (Call2, @@ -148,6 +269,7 @@ w_rhs = convert_to_array(space, w_rhs) calc_dtype = find_binop_result_dtype(space, w_lhs.find_dtype(), w_rhs.find_dtype(), + int_only=self.int_only, promote_to_float=self.promote_to_float, promote_bools=self.promote_bools, ) @@ -182,11 +304,14 @@ reduce = interp2app(W_Ufunc.descr_reduce), ) + def find_binop_result_dtype(space, dt1, dt2, promote_to_float=False, - promote_bools=False): + promote_bools=False, int_only=False): # dt1.num should be <= dt2.num if dt1.num > dt2.num: dt1, dt2 = dt2, dt1 + if int_only and (not dt1.is_int_type() or not dt2.is_int_type()): + raise OperationError(space.w_TypeError, space.wrap("Unsupported types")) # Some operations promote op(bool, bool) to return int8, rather than bool if promote_bools and (dt1.kind == dt2.kind == interp_dtype.BOOLLTR): return interp_dtype.get_dtype_cache(space).w_int8dtype @@ -230,6 +355,7 @@ dtypenum += 3 return interp_dtype.get_dtype_cache(space).builtin_dtypes[dtypenum] + def find_unaryop_result_dtype(space, dt, promote_to_float=False, promote_bools=False, promote_to_largest=False): if promote_bools and (dt.kind == interp_dtype.BOOLLTR): @@ -254,6 +380,7 @@ assert False return dt + def find_dtype_for_scalar(space, w_obj, current_guess=None): bool_dtype = interp_dtype.get_dtype_cache(space).w_booldtype long_dtype = interp_dtype.get_dtype_cache(space).w_longdtype @@ -302,6 +429,10 @@ ("add", "add", 2, {"identity": 0}), ("subtract", "sub", 2), ("multiply", "mul", 2, {"identity": 1}), + ("bitwise_and", "bitwise_and", 2, {"identity": 1, + 'int_only': True}), + ("bitwise_or", "bitwise_or", 2, {"identity": 0, + 'int_only': True}), ("divide", "div", 2, {"promote_bools": True}), ("mod", "mod", 2, {"promote_bools": True}), ("power", "pow", 2, {"promote_bools": True}), @@ -326,6 +457,7 @@ ("fabs", "fabs", 1, {"promote_to_float": True}), ("floor", "floor", 1, {"promote_to_float": 
True}), + ("ceil", "ceil", 1, {"promote_to_float": True}), ("exp", "exp", 1, {"promote_to_float": True}), ('sqrt', 'sqrt', 1, {'promote_to_float': True}), @@ -347,11 +479,12 @@ identity = extra_kwargs.get("identity") if identity is not None: - identity = interp_dtype.get_dtype_cache(space).w_longdtype.box(identity) + identity = \ + interp_dtype.get_dtype_cache(space).w_longdtype.box(identity) extra_kwargs["identity"] = identity func = ufunc_dtype_caller(space, ufunc_name, op_name, argcount, - comparison_func=extra_kwargs.get("comparison_func", False) + comparison_func=extra_kwargs.get("comparison_func", False), ) if argcount == 1: ufunc = W_Ufunc1(func, ufunc_name, **extra_kwargs) diff --git a/pypy/module/micronumpy/signature.py b/pypy/module/micronumpy/signature.py --- a/pypy/module/micronumpy/signature.py +++ b/pypy/module/micronumpy/signature.py @@ -1,10 +1,32 @@ from pypy.rlib.objectmodel import r_dict, compute_identity_hash, compute_hash from pypy.rlib.rarithmetic import intmask from pypy.module.micronumpy.interp_iter import ViewIterator, ArrayIterator, \ - OneDimIterator, ConstantIterator -from pypy.module.micronumpy.strides import calculate_slice_strides + ConstantIterator, AxisIterator, ViewTransform,\ + BroadcastTransform from pypy.rlib.jit import hint, unroll_safe, promote +""" Signature specifies both the numpy expression that has been constructed +and the assembler to be compiled. This is a very important observation - +Two expressions will be using the same assembler if and only if they are +compiled to the same signature. + +This is also a very convinient tool for specializations. For example +a + a and a + b (where a != b) will compile to different assembler because +we specialize on the same array access. + +When evaluating, signatures will create iterators per signature node, +potentially sharing some of them. Iterators depend also on the actual +expression, they're not only dependant on the array itself. For example +a + b where a is dim 2 and b is dim 1 would create a broadcasted iterator for +the array b. + +Such iterator changes are called Transformations. An actual iterator would +be a combination of array and various transformation, like view, broadcast, +dimension swapping etc. 
+ +See interp_iter for transformations +""" + def new_printable_location(driver_name): def get_printable_location(shapelen, sig): return 'numpy ' + sig.debug_repr() + ' [%d dims,%s]' % (shapelen, driver_name) @@ -33,7 +55,8 @@ return sig class NumpyEvalFrame(object): - _virtualizable2_ = ['iterators[*]', 'final_iter', 'arraylist[*]'] + _virtualizable2_ = ['iterators[*]', 'final_iter', 'arraylist[*]', + 'value', 'identity'] @unroll_safe def __init__(self, iterators, arrays): @@ -51,7 +74,7 @@ def done(self): final_iter = promote(self.final_iter) if final_iter < 0: - return False + assert False return self.iterators[final_iter].done() @unroll_safe @@ -59,6 +82,22 @@ for i in range(len(self.iterators)): self.iterators[i] = self.iterators[i].next(shapelen) + @unroll_safe + def next_from_second(self, shapelen): + """ Don't increase the first iterator + """ + for i in range(1, len(self.iterators)): + self.iterators[i] = self.iterators[i].next(shapelen) + + def next_first(self, shapelen): + self.iterators[0] = self.iterators[0].next(shapelen) + + def get_final_iter(self): + final_iter = promote(self.final_iter) + if final_iter < 0: + assert False + return self.iterators[final_iter] + def _add_ptr_to_cache(ptr, cache): i = 0 for p in cache: @@ -70,6 +109,9 @@ cache.append(ptr) return res +def new_cache(): + return r_dict(sigeq_no_numbering, sighash) + class Signature(object): _attrs_ = ['iter_no', 'array_no'] _immutable_fields_ = ['iter_no', 'array_no'] @@ -78,7 +120,7 @@ iter_no = 0 def invent_numbering(self): - cache = r_dict(sigeq_no_numbering, sighash) + cache = new_cache() allnumbers = [] self._invent_numbering(cache, allnumbers) @@ -95,13 +137,13 @@ allnumbers.append(no) self.iter_no = no - def create_frame(self, arr, res_shape=None): - res_shape = res_shape or arr.shape + def create_frame(self, arr): iterlist = [] arraylist = [] - self._create_iter(iterlist, arraylist, arr, res_shape, []) + self._create_iter(iterlist, arraylist, arr, []) return NumpyEvalFrame(iterlist, arraylist) + class ConcreteSignature(Signature): _immutable_fields_ = ['dtype'] @@ -120,16 +162,6 @@ def hash(self): return compute_identity_hash(self.dtype) - def allocate_view_iter(self, arr, res_shape, chunklist): - r = arr.shape, arr.start, arr.strides, arr.backstrides - if chunklist: - for chunkelem in chunklist: - r = calculate_slice_strides(r[0], r[1], r[2], r[3], chunkelem) - shape, start, strides, backstrides = r - if len(res_shape) == 1: - return OneDimIterator(start, strides[0], res_shape[0]) - return ViewIterator(start, strides, backstrides, shape, res_shape) - class ArraySignature(ConcreteSignature): def debug_repr(self): return 'Array' @@ -137,23 +169,25 @@ def _invent_array_numbering(self, arr, cache): from pypy.module.micronumpy.interp_numarray import ConcreteArray concr = arr.get_concrete() + # this get_concrete never forces assembler. 
If we're here and array + # is not of a concrete class it means that we have a _forced_result, + # otherwise the signature would not match assert isinstance(concr, ConcreteArray) + assert concr.dtype is self.dtype self.array_no = _add_ptr_to_cache(concr.storage, cache) - def _create_iter(self, iterlist, arraylist, arr, res_shape, chunklist): + def _create_iter(self, iterlist, arraylist, arr, transforms): from pypy.module.micronumpy.interp_numarray import ConcreteArray concr = arr.get_concrete() assert isinstance(concr, ConcreteArray) storage = concr.storage if self.iter_no >= len(iterlist): - iterlist.append(self.allocate_iter(concr, res_shape, chunklist)) + iterlist.append(self.allocate_iter(concr, transforms)) if self.array_no >= len(arraylist): arraylist.append(storage) - def allocate_iter(self, arr, res_shape, chunklist): - if chunklist: - return self.allocate_view_iter(arr, res_shape, chunklist) - return ArrayIterator(arr.size) + def allocate_iter(self, arr, transforms): + return ArrayIterator(arr.size).apply_transformations(arr, transforms) def eval(self, frame, arr): iter = frame.iterators[self.iter_no] @@ -166,7 +200,7 @@ def _invent_array_numbering(self, arr, cache): pass - def _create_iter(self, iterlist, arraylist, arr, res_shape, chunklist): + def _create_iter(self, iterlist, arraylist, arr, transforms): if self.iter_no >= len(iterlist): iter = ConstantIterator() iterlist.append(iter) @@ -186,8 +220,9 @@ allnumbers.append(no) self.iter_no = no - def allocate_iter(self, arr, res_shape, chunklist): - return self.allocate_view_iter(arr, res_shape, chunklist) + def allocate_iter(self, arr, transforms): + return ViewIterator(arr.start, arr.strides, arr.backstrides, + arr.shape).apply_transformations(arr, transforms) class VirtualSliceSignature(Signature): def __init__(self, child): @@ -198,6 +233,9 @@ assert isinstance(arr, VirtualSlice) self.child._invent_array_numbering(arr.child, cache) + def _invent_numbering(self, cache, allnumbers): + self.child._invent_numbering(new_cache(), allnumbers) + def hash(self): return intmask(self.child.hash() ^ 1234) @@ -207,12 +245,11 @@ assert isinstance(other, VirtualSliceSignature) return self.child.eq(other.child, compare_array_no) - def _create_iter(self, iterlist, arraylist, arr, res_shape, chunklist): + def _create_iter(self, iterlist, arraylist, arr, transforms): from pypy.module.micronumpy.interp_numarray import VirtualSlice assert isinstance(arr, VirtualSlice) - chunklist.append(arr.chunks) - self.child._create_iter(iterlist, arraylist, arr.child, res_shape, - chunklist) + transforms = transforms + [ViewTransform(arr.chunks)] + self.child._create_iter(iterlist, arraylist, arr.child, transforms) def eval(self, frame, arr): from pypy.module.micronumpy.interp_numarray import VirtualSlice @@ -248,11 +285,10 @@ assert isinstance(arr, Call1) self.child._invent_array_numbering(arr.values, cache) - def _create_iter(self, iterlist, arraylist, arr, res_shape, chunklist): + def _create_iter(self, iterlist, arraylist, arr, transforms): from pypy.module.micronumpy.interp_numarray import Call1 assert isinstance(arr, Call1) - self.child._create_iter(iterlist, arraylist, arr.values, res_shape, - chunklist) + self.child._create_iter(iterlist, arraylist, arr.values, transforms) def eval(self, frame, arr): from pypy.module.micronumpy.interp_numarray import Call1 @@ -293,29 +329,68 @@ self.left._invent_numbering(cache, allnumbers) self.right._invent_numbering(cache, allnumbers) - def _create_iter(self, iterlist, arraylist, arr, res_shape, chunklist): + def 
_create_iter(self, iterlist, arraylist, arr, transforms): from pypy.module.micronumpy.interp_numarray import Call2 assert isinstance(arr, Call2) - self.left._create_iter(iterlist, arraylist, arr.left, res_shape, - chunklist) - self.right._create_iter(iterlist, arraylist, arr.right, res_shape, - chunklist) + self.left._create_iter(iterlist, arraylist, arr.left, transforms) + self.right._create_iter(iterlist, arraylist, arr.right, transforms) def eval(self, frame, arr): from pypy.module.micronumpy.interp_numarray import Call2 assert isinstance(arr, Call2) lhs = self.left.eval(frame, arr.left).convert_to(self.calc_dtype) rhs = self.right.eval(frame, arr.right).convert_to(self.calc_dtype) + return self.binfunc(self.calc_dtype, lhs, rhs) def debug_repr(self): return 'Call2(%s, %s, %s)' % (self.name, self.left.debug_repr(), self.right.debug_repr()) +class BroadcastLeft(Call2): + def _invent_numbering(self, cache, allnumbers): + self.left._invent_numbering(new_cache(), allnumbers) + self.right._invent_numbering(cache, allnumbers) + + def _create_iter(self, iterlist, arraylist, arr, transforms): + from pypy.module.micronumpy.interp_numarray import Call2 + + assert isinstance(arr, Call2) + ltransforms = transforms + [BroadcastTransform(arr.shape)] + self.left._create_iter(iterlist, arraylist, arr.left, ltransforms) + self.right._create_iter(iterlist, arraylist, arr.right, transforms) + +class BroadcastRight(Call2): + def _invent_numbering(self, cache, allnumbers): + self.left._invent_numbering(cache, allnumbers) + self.right._invent_numbering(new_cache(), allnumbers) + + def _create_iter(self, iterlist, arraylist, arr, transforms): + from pypy.module.micronumpy.interp_numarray import Call2 + + assert isinstance(arr, Call2) + rtransforms = transforms + [BroadcastTransform(arr.shape)] + self.left._create_iter(iterlist, arraylist, arr.left, transforms) + self.right._create_iter(iterlist, arraylist, arr.right, rtransforms) + +class BroadcastBoth(Call2): + def _invent_numbering(self, cache, allnumbers): + self.left._invent_numbering(new_cache(), allnumbers) + self.right._invent_numbering(new_cache(), allnumbers) + + def _create_iter(self, iterlist, arraylist, arr, transforms): + from pypy.module.micronumpy.interp_numarray import Call2 + + assert isinstance(arr, Call2) + rtransforms = transforms + [BroadcastTransform(arr.shape)] + ltransforms = transforms + [BroadcastTransform(arr.shape)] + self.left._create_iter(iterlist, arraylist, arr.left, ltransforms) + self.right._create_iter(iterlist, arraylist, arr.right, rtransforms) + class ReduceSignature(Call2): - def _create_iter(self, iterlist, arraylist, arr, res_shape, chunklist): - self.right._create_iter(iterlist, arraylist, arr, res_shape, chunklist) + def _create_iter(self, iterlist, arraylist, arr, transforms): + self.right._create_iter(iterlist, arraylist, arr, transforms) def _invent_numbering(self, cache, allnumbers): self.right._invent_numbering(cache, allnumbers) @@ -325,3 +400,63 @@ def eval(self, frame, arr): return self.right.eval(frame, arr) + + def debug_repr(self): + return 'ReduceSig(%s, %s)' % (self.name, self.right.debug_repr()) + +class SliceloopSignature(Call2): + def eval(self, frame, arr): + from pypy.module.micronumpy.interp_numarray import Call2 + + assert isinstance(arr, Call2) + ofs = frame.iterators[0].offset + arr.left.setitem(ofs, self.right.eval(frame, arr.right).convert_to( + self.calc_dtype)) + + def debug_repr(self): + return 'SliceLoop(%s, %s, %s)' % (self.name, self.left.debug_repr(), + self.right.debug_repr()) + +class 
SliceloopBroadcastSignature(SliceloopSignature): + def _invent_numbering(self, cache, allnumbers): + self.left._invent_numbering(new_cache(), allnumbers) + self.right._invent_numbering(cache, allnumbers) + + def _create_iter(self, iterlist, arraylist, arr, transforms): + from pypy.module.micronumpy.interp_numarray import SliceArray + + assert isinstance(arr, SliceArray) + rtransforms = transforms + [BroadcastTransform(arr.shape)] + self.left._create_iter(iterlist, arraylist, arr.left, transforms) + self.right._create_iter(iterlist, arraylist, arr.right, rtransforms) + +class AxisReduceSignature(Call2): + def _create_iter(self, iterlist, arraylist, arr, transforms): + from pypy.module.micronumpy.interp_numarray import AxisReduce,\ + ConcreteArray + + assert isinstance(arr, AxisReduce) + left = arr.left + assert isinstance(left, ConcreteArray) + iterlist.append(AxisIterator(left.start, arr.dim, arr.shape, + left.strides, left.backstrides)) + self.right._create_iter(iterlist, arraylist, arr.right, transforms) + + def _invent_numbering(self, cache, allnumbers): + allnumbers.append(0) + self.right._invent_numbering(cache, allnumbers) + + def _invent_array_numbering(self, arr, cache): + from pypy.module.micronumpy.interp_numarray import AxisReduce + + assert isinstance(arr, AxisReduce) + self.right._invent_array_numbering(arr.right, cache) + + def eval(self, frame, arr): + from pypy.module.micronumpy.interp_numarray import AxisReduce + + assert isinstance(arr, AxisReduce) + return self.right.eval(frame, arr.right).convert_to(self.calc_dtype) + + def debug_repr(self): + return 'AxisReduceSig(%s, %s)' % (self.name, self.right.debug_repr()) diff --git a/pypy/module/micronumpy/strides.py b/pypy/module/micronumpy/strides.py --- a/pypy/module/micronumpy/strides.py +++ b/pypy/module/micronumpy/strides.py @@ -1,4 +1,5 @@ from pypy.rlib import jit +from pypy.interpreter.error import OperationError @jit.look_inside_iff(lambda shape, start, strides, backstrides, chunks: @@ -10,12 +11,12 @@ rstart = start rshape = [] i = -1 - for i, (start_, stop, step, lgt) in enumerate(chunks): - if step != 0: - rstrides.append(strides[i] * step) - rbackstrides.append(strides[i] * (lgt - 1) * step) - rshape.append(lgt) - rstart += strides[i] * start_ + for i, chunk in enumerate(chunks): + if chunk.step != 0: + rstrides.append(strides[i] * chunk.step) + rbackstrides.append(strides[i] * (chunk.lgt - 1) * chunk.step) + rshape.append(chunk.lgt) + rstart += strides[i] * chunk.start # add a reminder s = i + 1 assert s >= 0 @@ -37,3 +38,27 @@ rstrides = [0] * (len(res_shape) - len(orig_shape)) + rstrides rbackstrides = [0] * (len(res_shape) - len(orig_shape)) + rbackstrides return rstrides, rbackstrides + +def to_coords(space, shape, size, order, w_item_or_slice): + '''Returns a start coord, step, and length. 
+ ''' + start = lngth = step = 0 + if not (space.isinstance_w(w_item_or_slice, space.w_int) or + space.isinstance_w(w_item_or_slice, space.w_slice)): + raise OperationError(space.w_IndexError, + space.wrap('unsupported iterator index')) + + start, stop, step, lngth = space.decode_index4(w_item_or_slice, size) + + coords = [0] * len(shape) + i = start + if order == 'C': + for s in range(len(shape) -1, -1, -1): + coords[s] = i % shape[s] + i //= shape[s] + else: + for s in range(len(shape)): + coords[s] = i % shape[s] + i //= shape[s] + + return coords, step, lngth diff --git a/pypy/module/micronumpy/test/test_dtypes.py b/pypy/module/micronumpy/test/test_dtypes.py --- a/pypy/module/micronumpy/test/test_dtypes.py +++ b/pypy/module/micronumpy/test/test_dtypes.py @@ -3,7 +3,7 @@ class AppTestDtypes(BaseNumpyAppTest): def test_dtype(self): - from numpypy import dtype + from _numpypy import dtype d = dtype('?') assert d.num == 0 @@ -13,8 +13,16 @@ assert dtype(None) is dtype(float) raises(TypeError, dtype, 1042) + def test_dtype_eq(self): + from _numpypy import dtype + + assert dtype("int8") == "int8" + assert "int8" == dtype("int8") + raises(TypeError, lambda: dtype("int8") == 3) + assert dtype(bool) == bool + def test_dtype_with_types(self): - from numpypy import dtype + from _numpypy import dtype assert dtype(bool).num == 0 assert dtype(int).num == 7 @@ -22,13 +30,13 @@ assert dtype(float).num == 12 def test_array_dtype_attr(self): - from numpypy import array, dtype + from _numpypy import array, dtype a = array(range(5), long) assert a.dtype is dtype(long) def test_repr_str(self): - from numpypy import dtype + from _numpypy import dtype assert repr(dtype) == "" d = dtype('?') @@ -36,7 +44,7 @@ assert str(d) == "bool" def test_bool_array(self): - from numpypy import array, False_, True_ + from _numpypy import array, False_, True_ a = array([0, 1, 2, 2.5], dtype='?') assert a[0] is False_ @@ -44,7 +52,7 @@ assert a[i] is True_ def test_copy_array_with_dtype(self): - from numpypy import array, False_, True_, int64 + from _numpypy import array, False_, True_, int64 a = array([0, 1, 2, 3], dtype=long) # int on 64-bit, long in 32-bit @@ -58,35 +66,35 @@ assert b[0] is False_ def test_zeros_bool(self): - from numpypy import zeros, False_ + from _numpypy import zeros, False_ a = zeros(10, dtype=bool) for i in range(10): assert a[i] is False_ def test_ones_bool(self): - from numpypy import ones, True_ + from _numpypy import ones, True_ a = ones(10, dtype=bool) for i in range(10): assert a[i] is True_ def test_zeros_long(self): - from numpypy import zeros, int64 + from _numpypy import zeros, int64 a = zeros(10, dtype=long) for i in range(10): assert isinstance(a[i], int64) assert a[1] == 0 def test_ones_long(self): - from numpypy import ones, int64 + from _numpypy import ones, int64 a = ones(10, dtype=long) for i in range(10): assert isinstance(a[i], int64) assert a[1] == 1 def test_overflow(self): - from numpypy import array, dtype + from _numpypy import array, dtype assert array([128], 'b')[0] == -128 assert array([256], 'B')[0] == 0 assert array([32768], 'h')[0] == -32768 @@ -98,7 +106,7 @@ raises(OverflowError, "array([2**64], 'Q')") def test_bool_binop_types(self): - from numpypy import array, dtype + from _numpypy import array, dtype types = [ '?', 'b', 'B', 'h', 'H', 'i', 'I', 'l', 'L', 'q', 'Q', 'f', 'd' ] @@ -107,7 +115,7 @@ assert (a + array([0], t)).dtype is dtype(t) def test_binop_types(self): - from numpypy import array, dtype + from _numpypy import array, dtype tests = [('b','B','h'), 
('b','h','h'), ('b','H','i'), ('b','i','i'), ('b','l','l'), ('b','q','q'), ('b','Q','d'), ('B','h','h'), ('B','H','H'), ('B','i','i'), ('B','I','I'), ('B','l','l'), @@ -129,7 +137,7 @@ assert (array([1], d1) + array([1], d2)).dtype is dtype(dout) def test_add_int8(self): - from numpypy import array, dtype + from _numpypy import array, dtype a = array(range(5), dtype="int8") b = a + a @@ -138,7 +146,7 @@ assert b[i] == i * 2 def test_add_int16(self): - from numpypy import array, dtype + from _numpypy import array, dtype a = array(range(5), dtype="int16") b = a + a @@ -147,7 +155,7 @@ assert b[i] == i * 2 def test_add_uint32(self): - from numpypy import array, dtype + from _numpypy import array, dtype a = array(range(5), dtype="I") b = a + a @@ -156,19 +164,25 @@ assert b[i] == i * 2 def test_shape(self): - from numpypy import dtype + from _numpypy import dtype assert dtype(long).shape == () def test_cant_subclass(self): - from numpypy import dtype + from _numpypy import dtype # You can't subclass dtype raises(TypeError, type, "Foo", (dtype,), {}) + def test_aliases(self): + from _numpypy import dtype + + assert dtype("float") is dtype(float) + + class AppTestTypes(BaseNumpyAppTest): def test_abstract_types(self): - import numpypy as numpy + import _numpypy as numpy raises(TypeError, numpy.generic, 0) raises(TypeError, numpy.number, 0) raises(TypeError, numpy.integer, 0) @@ -180,8 +194,17 @@ raises(TypeError, numpy.floating, 0) raises(TypeError, numpy.inexact, 0) + def test_new(self): + import _numpypy as np + assert np.int_(4) == 4 + assert np.float_(3.4) == 3.4 + + def test_pow(self): + from _numpypy import int_ + assert int_(4) ** 2 == 16 + def test_bool(self): - import numpypy as numpy + import _numpypy as numpy assert numpy.bool_.mro() == [numpy.bool_, numpy.generic, object] assert numpy.bool_(3) is numpy.True_ @@ -196,7 +219,7 @@ assert numpy.bool_("False") is numpy.True_ def test_int8(self): - import numpypy as numpy + import _numpypy as numpy assert numpy.int8.mro() == [numpy.int8, numpy.signedinteger, numpy.integer, numpy.number, numpy.generic, object] @@ -218,7 +241,7 @@ assert numpy.int8('128') == -128 def test_uint8(self): - import numpypy as numpy + import _numpypy as numpy assert numpy.uint8.mro() == [numpy.uint8, numpy.unsignedinteger, numpy.integer, numpy.number, numpy.generic, object] @@ -241,7 +264,7 @@ assert numpy.uint8('256') == 0 def test_int16(self): - import numpypy as numpy + import _numpypy as numpy x = numpy.int16(3) assert x == 3 @@ -251,7 +274,7 @@ assert numpy.int16('32768') == -32768 def test_uint16(self): - import numpypy as numpy + import _numpypy as numpy assert numpy.uint16(65535) == 65535 assert numpy.uint16(65536) == 0 @@ -260,7 +283,7 @@ def test_int32(self): import sys - import numpypy as numpy + import _numpypy as numpy x = numpy.int32(23) assert x == 23 @@ -275,7 +298,7 @@ def test_uint32(self): import sys - import numpypy as numpy + import _numpypy as numpy assert numpy.uint32(10) == 10 @@ -286,14 +309,14 @@ assert numpy.uint32('4294967296') == 0 def test_int_(self): - import numpypy as numpy + import _numpypy as numpy assert numpy.int_ is numpy.dtype(int).type assert numpy.int_.mro() == [numpy.int_, numpy.signedinteger, numpy.integer, numpy.number, numpy.generic, int, object] def test_int64(self): import sys - import numpypy as numpy + import _numpypy as numpy if sys.maxint == 2 ** 63 -1: assert numpy.int64.mro() == [numpy.int64, numpy.signedinteger, numpy.integer, numpy.number, numpy.generic, int, object] @@ -309,13 +332,13 @@ else: 
raises(OverflowError, numpy.int64, 9223372036854775807) raises(OverflowError, numpy.int64, '9223372036854775807') - + raises(OverflowError, numpy.int64, 9223372036854775808) raises(OverflowError, numpy.int64, '9223372036854775808') def test_uint64(self): import sys - import numpypy as numpy + import _numpypy as numpy assert numpy.uint64.mro() == [numpy.uint64, numpy.unsignedinteger, numpy.integer, numpy.number, numpy.generic, object] @@ -330,7 +353,7 @@ raises(OverflowError, numpy.uint64(18446744073709551616)) def test_float32(self): - import numpypy as numpy + import _numpypy as numpy assert numpy.float32.mro() == [numpy.float32, numpy.floating, numpy.inexact, numpy.number, numpy.generic, object] @@ -339,7 +362,7 @@ raises(ValueError, numpy.float32, '23.2df') def test_float64(self): - import numpypy as numpy + import _numpypy as numpy assert numpy.float64.mro() == [numpy.float64, numpy.floating, numpy.inexact, numpy.number, numpy.generic, float, object] @@ -352,7 +375,7 @@ raises(ValueError, numpy.float64, '23.2df') def test_subclass_type(self): - import numpypy as numpy + import _numpypy as numpy class X(numpy.float64): def m(self): @@ -361,3 +384,14 @@ b = X(10) assert type(b) is X assert b.m() == 12 + + def test_int(self): + import sys + from _numpypy import int32, int64, int_ + assert issubclass(int_, int) + if sys.maxint == (1<<31) - 1: + assert issubclass(int32, int) + assert int_ is int32 + else: + assert issubclass(int64, int) + assert int_ is int64 diff --git a/pypy/module/micronumpy/test/test_module.py b/pypy/module/micronumpy/test/test_module.py --- a/pypy/module/micronumpy/test/test_module.py +++ b/pypy/module/micronumpy/test/test_module.py @@ -2,34 +2,29 @@ class AppTestNumPyModule(BaseNumpyAppTest): - def test_mean(self): - from numpypy import array, mean - assert mean(array(range(5))) == 2.0 - assert mean(range(5)) == 2.0 - def test_average(self): - from numpypy import array, average + from _numpypy import array, average assert average(range(10)) == 4.5 assert average(array(range(10))) == 4.5 - + def test_sum(self): - from numpypy import array, sum + from _numpypy import array, sum assert sum(range(10)) == 45 assert sum(array(range(10))) == 45 def test_min(self): - from numpypy import array, min + from _numpypy import array, min assert min(range(10)) == 0 assert min(array(range(10))) == 0 - + def test_max(self): - from numpypy import array, max + from _numpypy import array, max assert max(range(10)) == 9 assert max(array(range(10))) == 9 def test_constants(self): import math - from numpypy import inf, e, pi + from _numpypy import inf, e, pi assert type(inf) is float assert inf == float("inf") assert e == math.e diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -2,6 +2,7 @@ import py from pypy.module.micronumpy.test.test_base import BaseNumpyAppTest from pypy.module.micronumpy.interp_numarray import W_NDimArray, shape_agreement +from pypy.module.micronumpy.interp_iter import Chunk from pypy.module.micronumpy import signature from pypy.interpreter.error import OperationError from pypy.conftest import gettestobjspace @@ -37,53 +38,54 @@ def test_create_slice_f(self): a = W_NDimArray(10 * 5 * 3, [10, 5, 3], MockDtype(), 'F') - s = a.create_slice([(3, 0, 0, 1)]) + s = a.create_slice([Chunk(3, 0, 0, 1)]) assert s.start == 3 assert s.strides == [10, 50] assert s.backstrides == [40, 100] - s = a.create_slice([(1, 9, 2, 4)]) 
+ s = a.create_slice([Chunk(1, 9, 2, 4)]) assert s.start == 1 assert s.strides == [2, 10, 50] assert s.backstrides == [6, 40, 100] - s = a.create_slice([(1, 5, 3, 2), (1, 2, 1, 1), (1, 0, 0, 1)]) + s = a.create_slice([Chunk(1, 5, 3, 2), Chunk(1, 2, 1, 1), Chunk(1, 0, 0, 1)]) assert s.shape == [2, 1] assert s.strides == [3, 10] assert s.backstrides == [3, 0] - s = a.create_slice([(0, 10, 1, 10), (2, 0, 0, 1)]) + s = a.create_slice([Chunk(0, 10, 1, 10), Chunk(2, 0, 0, 1)]) assert s.start == 20 assert s.shape == [10, 3] def test_create_slice_c(self): a = W_NDimArray(10 * 5 * 3, [10, 5, 3], MockDtype(), 'C') - s = a.create_slice([(3, 0, 0, 1)]) + s = a.create_slice([Chunk(3, 0, 0, 1)]) assert s.start == 45 assert s.strides == [3, 1] assert s.backstrides == [12, 2] - s = a.create_slice([(1, 9, 2, 4)]) + s = a.create_slice([Chunk(1, 9, 2, 4)]) assert s.start == 15 assert s.strides == [30, 3, 1] assert s.backstrides == [90, 12, 2] - s = a.create_slice([(1, 5, 3, 2), (1, 2, 1, 1), (1, 0, 0, 1)]) + s = a.create_slice([Chunk(1, 5, 3, 2), Chunk(1, 2, 1, 1), + Chunk(1, 0, 0, 1)]) assert s.start == 19 assert s.shape == [2, 1] assert s.strides == [45, 3] assert s.backstrides == [45, 0] - s = a.create_slice([(0, 10, 1, 10), (2, 0, 0, 1)]) + s = a.create_slice([Chunk(0, 10, 1, 10), Chunk(2, 0, 0, 1)]) assert s.start == 6 assert s.shape == [10, 3] def test_slice_of_slice_f(self): a = W_NDimArray(10 * 5 * 3, [10, 5, 3], MockDtype(), 'F') - s = a.create_slice([(5, 0, 0, 1)]) + s = a.create_slice([Chunk(5, 0, 0, 1)]) assert s.start == 5 - s2 = s.create_slice([(3, 0, 0, 1)]) + s2 = s.create_slice([Chunk(3, 0, 0, 1)]) assert s2.shape == [3] assert s2.strides == [50] assert s2.parent is a assert s2.backstrides == [100] assert s2.start == 35 - s = a.create_slice([(1, 5, 3, 2)]) - s2 = s.create_slice([(0, 2, 1, 2), (2, 0, 0, 1)]) + s = a.create_slice([Chunk(1, 5, 3, 2)]) + s2 = s.create_slice([Chunk(0, 2, 1, 2), Chunk(2, 0, 0, 1)]) assert s2.shape == [2, 3] assert s2.strides == [3, 50] assert s2.backstrides == [3, 100] @@ -91,16 +93,16 @@ def test_slice_of_slice_c(self): a = W_NDimArray(10 * 5 * 3, [10, 5, 3], MockDtype(), order='C') - s = a.create_slice([(5, 0, 0, 1)]) + s = a.create_slice([Chunk(5, 0, 0, 1)]) assert s.start == 15 * 5 - s2 = s.create_slice([(3, 0, 0, 1)]) + s2 = s.create_slice([Chunk(3, 0, 0, 1)]) assert s2.shape == [3] assert s2.strides == [1] assert s2.parent is a assert s2.backstrides == [2] assert s2.start == 5 * 15 + 3 * 3 - s = a.create_slice([(1, 5, 3, 2)]) - s2 = s.create_slice([(0, 2, 1, 2), (2, 0, 0, 1)]) + s = a.create_slice([Chunk(1, 5, 3, 2)]) + s2 = s.create_slice([Chunk(0, 2, 1, 2), Chunk(2, 0, 0, 1)]) assert s2.shape == [2, 3] assert s2.strides == [45, 1] assert s2.backstrides == [45, 2] @@ -108,14 +110,14 @@ def test_negative_step_f(self): a = W_NDimArray(10 * 5 * 3, [10, 5, 3], MockDtype(), 'F') - s = a.create_slice([(9, -1, -2, 5)]) + s = a.create_slice([Chunk(9, -1, -2, 5)]) assert s.start == 9 assert s.strides == [-2, 10, 50] assert s.backstrides == [-8, 40, 100] def test_negative_step_c(self): a = W_NDimArray(10 * 5 * 3, [10, 5, 3], MockDtype(), order='C') - s = a.create_slice([(9, -1, -2, 5)]) + s = a.create_slice([Chunk(9, -1, -2, 5)]) assert s.start == 135 assert s.strides == [-30, 3, 1] assert s.backstrides == [-120, 12, 2] @@ -124,7 +126,7 @@ a = W_NDimArray(10 * 5 * 3, [10, 5, 3], MockDtype(), 'F') r = a._index_of_single_item(self.space, self.newtuple(1, 2, 2)) assert r == 1 + 2 * 10 + 2 * 50 - s = a.create_slice([(0, 10, 1, 10), (2, 0, 0, 1)]) + s = 
a.create_slice([Chunk(0, 10, 1, 10), Chunk(2, 0, 0, 1)]) r = s._index_of_single_item(self.space, self.newtuple(1, 0)) assert r == a._index_of_single_item(self.space, self.newtuple(1, 2, 0)) r = s._index_of_single_item(self.space, self.newtuple(1, 1)) @@ -134,7 +136,7 @@ a = W_NDimArray(10 * 5 * 3, [10, 5, 3], MockDtype(), 'C') r = a._index_of_single_item(self.space, self.newtuple(1, 2, 2)) assert r == 1 * 3 * 5 + 2 * 3 + 2 - s = a.create_slice([(0, 10, 1, 10), (2, 0, 0, 1)]) + s = a.create_slice([Chunk(0, 10, 1, 10), Chunk(2, 0, 0, 1)]) r = s._index_of_single_item(self.space, self.newtuple(1, 0)) assert r == a._index_of_single_item(self.space, self.newtuple(1, 2, 0)) r = s._index_of_single_item(self.space, self.newtuple(1, 1)) @@ -152,16 +154,41 @@ def test_calc_new_strides(self): from pypy.module.micronumpy.interp_numarray import calc_new_strides - assert calc_new_strides([2, 4], [4, 2], [4, 2]) == [8, 2] - assert calc_new_strides([2, 4, 3], [8, 3], [1, 16]) == [1, 2, 16] - assert calc_new_strides([2, 3, 4], [8, 3], [1, 16]) is None - assert calc_new_strides([24], [2, 4, 3], [48, 6, 1]) is None - assert calc_new_strides([24], [2, 4, 3], [24, 6, 2]) == [2] + assert calc_new_strides([2, 4], [4, 2], [4, 2], "C") == [8, 2] + assert calc_new_strides([2, 4, 3], [8, 3], [1, 16], 'F') == [1, 2, 16] + assert calc_new_strides([2, 3, 4], [8, 3], [1, 16], 'F') is None + assert calc_new_strides([24], [2, 4, 3], [48, 6, 1], 'C') is None + assert calc_new_strides([24], [2, 4, 3], [24, 6, 2], 'C') == [2] + assert calc_new_strides([105, 1], [3, 5, 7], [35, 7, 1],'C') == [1, 1] + assert calc_new_strides([1, 105], [3, 5, 7], [35, 7, 1],'C') == [105, 1] + assert calc_new_strides([1, 105], [3, 5, 7], [35, 7, 1],'F') is None + assert calc_new_strides([1, 1, 1, 105, 1], [15, 7], [7, 1],'C') == \ + [105, 105, 105, 1, 1] + assert calc_new_strides([1, 1, 105, 1, 1], [7, 15], [1, 7],'F') == \ + [1, 1, 1, 105, 105] + def test_to_coords(self): + from pypy.module.micronumpy.strides import to_coords + + def _to_coords(index, order): + return to_coords(self.space, [2, 3, 4], 24, order, + self.space.wrap(index))[0] + + assert _to_coords(0, 'C') == [0, 0, 0] + assert _to_coords(1, 'C') == [0, 0, 1] + assert _to_coords(-1, 'C') == [1, 2, 3] + assert _to_coords(5, 'C') == [0, 1, 1] + assert _to_coords(13, 'C') == [1, 0, 1] + assert _to_coords(0, 'F') == [0, 0, 0] + assert _to_coords(1, 'F') == [1, 0, 0] + assert _to_coords(-1, 'F') == [1, 2, 3] + assert _to_coords(5, 'F') == [1, 2, 0] + assert _to_coords(13, 'F') == [1, 0, 2] + class AppTestNumArray(BaseNumpyAppTest): def test_ndarray(self): - from numpypy import ndarray, array, dtype + from _numpypy import ndarray, array, dtype assert type(ndarray) is type assert type(array) is not type @@ -176,12 +203,12 @@ assert a.dtype is dtype(int) def test_type(self): - from numpypy import array + from _numpypy import array ar = array(range(5)) assert type(ar) is type(ar + ar) def test_ndim(self): - from numpypy import array + from _numpypy import array x = array(0.2) assert x.ndim == 0 x = array([1, 2]) @@ -190,12 +217,12 @@ assert x.ndim == 2 x = array([[[1, 2], [3, 4]], [[5, 6], [7, 8]]]) assert x.ndim == 3 - # numpy actually raises an AttributeError, but numpypy raises an + # numpy actually raises an AttributeError, but _numpypy raises an # TypeError raises(TypeError, 'x.ndim = 3') def test_init(self): - from numpypy import zeros + from _numpypy import zeros a = zeros(15) # Check that storage was actually zero'd. 
assert a[10] == 0.0 @@ -204,7 +231,7 @@ assert a[13] == 5.3 def test_size(self): - from numpypy import array + from _numpypy import array assert array(3).size == 1 a = array([1, 2, 3]) assert a.size == 3 @@ -215,13 +242,13 @@ Test that empty() works. """ - from numpypy import empty + from _numpypy import empty a = empty(2) a[1] = 1.0 assert a[1] == 1.0 def test_ones(self): - from numpypy import ones + from _numpypy import ones a = ones(3) assert len(a) == 3 assert a[0] == 1 @@ -230,7 +257,7 @@ assert a[2] == 4 def test_copy(self): - from numpypy import arange, array + from _numpypy import arange, array a = arange(5) b = a.copy() for i in xrange(5): @@ -246,13 +273,17 @@ c = b.copy() assert (c == b).all() + a = arange(15).reshape(5,3) + b = a.copy() + assert (b == a).all() + def test_iterator_init(self): - from numpypy import array + from _numpypy import array a = array(range(5)) assert a[3] == 3 def test_getitem(self): - from numpypy import array + from _numpypy import array a = array(range(5)) raises(IndexError, "a[5]") a = a + a @@ -261,7 +292,7 @@ raises(IndexError, "a[-6]") def test_getitem_tuple(self): - from numpypy import array + from _numpypy import array a = array(range(5)) raises(IndexError, "a[(1,2)]") for i in xrange(5): @@ -271,7 +302,7 @@ assert a[i] == b[i] def test_setitem(self): - from numpypy import array + from _numpypy import array a = array(range(5)) a[-1] = 5.0 assert a[4] == 5.0 @@ -279,7 +310,7 @@ raises(IndexError, "a[-6] = 3.0") def test_setitem_tuple(self): - from numpypy import array + from _numpypy import array a = array(range(5)) raises(IndexError, "a[(1,2)] = [0,1]") for i in xrange(5): @@ -290,7 +321,7 @@ assert a[i] == i def test_setslice_array(self): - from numpypy import array + from _numpypy import array a = array(range(5)) b = array(range(2)) a[1:4:2] = b @@ -301,7 +332,7 @@ assert b[1] == 0. def test_setslice_of_slice_array(self): - from numpypy import array, zeros + from _numpypy import array, zeros a = zeros(5) a[::2] = array([9., 10., 11.]) assert a[0] == 9. @@ -320,7 +351,7 @@ assert a[0] == 3. def test_setslice_list(self): - from numpypy import array + from _numpypy import array a = array(range(5), float) b = [0., 1.] a[1:4:2] = b @@ -328,14 +359,14 @@ assert a[3] == 1. def test_setslice_constant(self): - from numpypy import array + from _numpypy import array a = array(range(5), float) a[1:4:2] = 0. assert a[1] == 0. assert a[3] == 0. 
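
The test hunks above make two mechanical changes: the app-level imports switch from numpypy to _numpypy, and the bare (start, stop, step, length) tuples handed to create_slice() become Chunk objects from interp_iter. A minimal sketch of the tuple-to-Chunk pattern, using a hypothetical stand-in class rather than the real interp_iter.Chunk:

# Hypothetical stand-in: it only bundles the four values that previously
# travelled as an anonymous tuple, so call sites gain named fields.
class Chunk(object):
    def __init__(self, start, stop, step, lgt):
        self.start = start
        self.stop = stop
        self.step = step
        self.lgt = lgt

# old call style:  a.create_slice([(1, 9, 2, 4)])
# new call style:  a.create_slice([Chunk(1, 9, 2, 4)])
chunks = [Chunk(1, 5, 3, 2), Chunk(1, 2, 1, 1), Chunk(1, 0, 0, 1)]
assert (chunks[0].start, chunks[0].stop, chunks[0].step, chunks[0].lgt) == (1, 5, 3, 2)
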
def test_scalar(self): - from numpypy import array, dtype + from _numpypy import array, dtype a = array(3) raises(IndexError, "a[0]") raises(IndexError, "a[0] = 5") @@ -344,13 +375,13 @@ assert a.dtype is dtype(int) def test_len(self): - from numpypy import array + from _numpypy import array a = array(range(5)) assert len(a) == 5 assert len(a + a) == 5 def test_shape(self): - from numpypy import array + from _numpypy import array a = array(range(5)) assert a.shape == (5,) b = a + a @@ -359,7 +390,7 @@ assert c.shape == (3,) def test_set_shape(self): - from numpypy import array, zeros + from _numpypy import array, zeros a = array([]) a.shape = [] a = array(range(12)) @@ -377,9 +408,11 @@ a.shape = () #numpy allows this a.shape = (1,) + a = array(range(6)).reshape(2,3).T + raises(AttributeError, 'a.shape = 6') def test_reshape(self): - from numpypy import array, zeros + from _numpypy import array, zeros a = array(range(12)) exc = raises(ValueError, "b = a.reshape((3, 10))") assert str(exc.value) == "total size of new array must be unchanged" @@ -392,7 +425,7 @@ a.shape = (12, 2) def test_slice_reshape(self): - from numpypy import zeros, arange + from _numpypy import zeros, arange a = zeros((4, 2, 3)) b = a[::2, :, :] b.shape = (2, 6) @@ -428,13 +461,13 @@ raises(ValueError, arange(10).reshape, (5, -1, -1)) def test_reshape_varargs(self): - from numpypy import arange + from _numpypy import arange z = arange(96).reshape(12, -1) y = z.reshape(4, 3, 8) assert y.shape == (4, 3, 8) def test_add(self): - from numpypy import array + from _numpypy import array a = array(range(5)) b = a + a for i in range(5): @@ -447,7 +480,7 @@ assert c[i] == bool(a[i] + b[i]) def test_add_other(self): - from numpypy import array + from _numpypy import array a = array(range(5)) b = array([i for i in reversed(range(5))]) c = a + b @@ -455,20 +488,20 @@ assert c[i] == 4 def test_add_constant(self): - from numpypy import array + from _numpypy import array a = array(range(5)) b = a + 5 for i in range(5): assert b[i] == i + 5 def test_radd(self): - from numpypy import array + from _numpypy import array r = 3 + array(range(3)) for i in range(3): assert r[i] == i + 3 def test_add_list(self): - from numpypy import array, ndarray + from _numpypy import array, ndarray a = array(range(5)) b = list(reversed(range(5))) c = a + b @@ -477,14 +510,14 @@ assert c[i] == 4 def test_subtract(self): - from numpypy import array + from _numpypy import array a = array(range(5)) b = a - a for i in range(5): assert b[i] == 0 def test_subtract_other(self): - from numpypy import array + from _numpypy import array a = array(range(5)) b = array([1, 1, 1, 1, 1]) c = a - b @@ -492,34 +525,34 @@ assert c[i] == i - 1 def test_subtract_constant(self): - from numpypy import array + from _numpypy import array a = array(range(5)) b = a - 5 for i in range(5): assert b[i] == i - 5 def test_scalar_subtract(self): - from numpypy import int32 + from _numpypy import int32 assert int32(2) - 1 == 1 assert 1 - int32(2) == -1 def test_mul(self): - import numpypy + import _numpypy - a = numpypy.array(range(5)) + a = _numpypy.array(range(5)) b = a * a for i in range(5): assert b[i] == i * i - a = numpypy.array(range(5), dtype=bool) + a = _numpypy.array(range(5), dtype=bool) b = a * a - assert b.dtype is numpypy.dtype(bool) - assert b[0] is numpypy.False_ + assert b.dtype is _numpypy.dtype(bool) + assert b[0] is _numpypy.False_ for i in range(1, 5): - assert b[i] is numpypy.True_ + assert b[i] is _numpypy.True_ def test_mul_constant(self): - from numpypy import array 
+ from _numpypy import array a = array(range(5)) b = a * 5 for i in range(5): @@ -527,7 +560,7 @@ def test_div(self): from math import isnan - from numpypy import array, dtype, inf + from _numpypy import array, dtype, inf a = array(range(1, 6)) b = a / a @@ -559,7 +592,7 @@ assert c[2] == -inf def test_div_other(self): - from numpypy import array + from _numpypy import array a = array(range(5)) b = array([2, 2, 2, 2, 2], float) c = a / b @@ -567,14 +600,14 @@ assert c[i] == i / 2.0 def test_div_constant(self): - from numpypy import array + from _numpypy import array a = array(range(5)) b = a / 5.0 for i in range(5): assert b[i] == i / 5.0 def test_pow(self): - from numpypy import array + from _numpypy import array a = array(range(5), float) b = a ** a for i in range(5): @@ -584,7 +617,7 @@ assert (a ** 2 == a * a).all() def test_pow_other(self): - from numpypy import array + from _numpypy import array a = array(range(5), float) b = array([2, 2, 2, 2, 2]) c = a ** b @@ -592,14 +625,14 @@ assert c[i] == i ** 2 def test_pow_constant(self): - from numpypy import array + from _numpypy import array a = array(range(5), float) b = a ** 2 for i in range(5): assert b[i] == i ** 2 def test_mod(self): - from numpypy import array + from _numpypy import array a = array(range(1, 6)) b = a % a for i in range(5): @@ -612,7 +645,7 @@ assert b[i] == 1 def test_mod_other(self): - from numpypy import array + from _numpypy import array a = array(range(5)) b = array([2, 2, 2, 2, 2]) c = a % b @@ -620,14 +653,14 @@ assert c[i] == i % 2 def test_mod_constant(self): - from numpypy import array + from _numpypy import array a = array(range(5)) b = a % 2 for i in range(5): assert b[i] == i % 2 def test_pos(self): - from numpypy import array + from _numpypy import array a = array([1., -2., 3., -4., -5.]) b = +a for i in range(5): @@ -638,7 +671,7 @@ assert a[i] == i def test_neg(self): - from numpypy import array + from _numpypy import array a = array([1., -2., 3., -4., -5.]) b = -a for i in range(5): @@ -649,7 +682,7 @@ assert a[i] == -i def test_abs(self): - from numpypy import array + from _numpypy import array a = array([1., -2., 3., -4., -5.]) b = abs(a) for i in range(5): @@ -660,7 +693,7 @@ assert a[i + 5] == abs(i) def test_auto_force(self): - from numpypy import array + from _numpypy import array a = array(range(5)) b = a - 1 a[2] = 3 @@ -674,7 +707,7 @@ assert c[1] == 4 def test_getslice(self): - from numpypy import array + from _numpypy import array From noreply at buildbot.pypy.org Sat Feb 4 05:39:17 2012 From: noreply at buildbot.pypy.org (wlav) Date: Sat, 4 Feb 2012 05:39:17 +0100 (CET) Subject: [pypy-commit] pypy reflex-support: more casts cleanup and assertions to replace casts Message-ID: <20120204043917.80E1F710770@wyvern.cs.uni-duesseldorf.de> Author: Wim Lavrijsen Branch: reflex-support Changeset: r52078:c0dc3a2e1f02 Date: 2012-02-01 13:50 -0800 http://bitbucket.org/pypy/pypy/changeset/c0dc3a2e1f02/ Log: more casts cleanup and assertions to replace casts diff --git a/pypy/module/cppyy/converter.py b/pypy/module/cppyy/converter.py --- a/pypy/module/cppyy/converter.py +++ b/pypy/module/cppyy/converter.py @@ -21,6 +21,7 @@ return capi.C_NULL_OBJECT def _direct_ptradd(ptr, offset): # TODO: factor out with interp_cppyy.py + assert lltype.typeOf(ptr) == capi.C_OBJECT address = rffi.cast(rffi.CCHARP, ptr) return rffi.cast(capi.C_OBJECT, lltype.direct_ptradd(address, offset)) @@ -37,10 +38,12 @@ @jit.dont_look_inside def _get_raw_address(self, space, w_obj, offset): rawobject = get_rawobject(space, w_obj) + 
assert lltype.typeOf(rawobject) == capi.C_OBJECT if rawobject: + fieldptr = _direct_ptradd(rawobject, offset) else: - fieldptr = rffi.cast(rffi.CCHARP, offset) + fieldptr = rffi.cast(capi.C_OBJECT, offset) return fieldptr def _is_abstract(self, space): @@ -100,7 +103,7 @@ def to_memory(self, space, w_obj, w_value, offset): # copy the full array (uses byte copy for now) - address = self._get_raw_address(space, w_obj, offset) + address = rffi.cast(rffi.CCHARP, self._get_raw_address(space, w_obj, offset)) buf = space.buffer_w(w_value) # TODO: report if too many items given? for i in range(min(self.size*self.typesize, buf.getlength())): @@ -165,13 +168,13 @@ argchain.arg(self._unwrap_object(space, w_obj)) def from_memory(self, space, w_obj, w_type, offset): - address = self._get_raw_address(space, w_obj, offset) + address = rffi.cast(rffi.CCHARP, self._get_raw_address(space, w_obj, offset)) if address[0] == '\x01': return space.w_True return space.w_False def to_memory(self, space, w_obj, w_value, offset): - address = self._get_raw_address(space, w_obj, offset) + address = rffi.cast(rffi.CCHARP, self._get_raw_address(space, w_obj, offset)) arg = self._unwrap_object(space, w_value) if arg: address[0] = '\x01' @@ -207,11 +210,11 @@ argchain.arg(self._unwrap_object(space, w_obj)) def from_memory(self, space, w_obj, w_type, offset): - address = self._get_raw_address(space, w_obj, offset) + address = rffi.cast(rffi.CCHARP, self._get_raw_address(space, w_obj, offset)) return space.wrap(address[0]) def to_memory(self, space, w_obj, w_value, offset): - address = self._get_raw_address(space, w_obj, offset) + address = rffi.cast(rffi.CCHARP, self._get_raw_address(space, w_obj, offset)) address[0] = self._unwrap_object(space, w_value) class IntConverter(TypeConverter): @@ -344,7 +347,8 @@ def convert_argument(self, space, w_obj, address): x = rffi.cast(rffi.FLOATP, address) x[0] = self._unwrap_object(space, w_obj) - typecode = _direct_ptradd(address, capi.c_function_arg_typeoffset()) + typecode = rffi.cast(rffi.CCHARP, + _direct_ptradd(address, capi.c_function_arg_typeoffset())) typecode[0] = 'f' def convert_argument_libffi(self, space, w_obj, argchain): @@ -373,7 +377,8 @@ def convert_argument(self, space, w_obj, address): x = rffi.cast(rffi.DOUBLEP, address) x[0] = self._unwrap_object(space, w_obj) - typecode = _direct_ptradd(address, capi.c_function_arg_typeoffset()) + typecode = rffi.cast(rffi.CCHARP, + _direct_ptradd(address, capi.c_function_arg_typeoffset())) typecode[0] = 'd' def convert_argument_libffi(self, space, w_obj, argchain): @@ -397,7 +402,8 @@ x = rffi.cast(rffi.LONGP, address) arg = space.str_w(w_obj) x[0] = rffi.cast(rffi.LONG, rffi.str2charp(arg)) - typecode = _direct_ptradd(address, capi.c_function_arg_typeoffset()) + typecode = rffi.cast(rffi.CCHARP, + _direct_ptradd(address, capi.c_function_arg_typeoffset())) typecode[0] = 'a' def from_memory(self, space, w_obj, w_type, offset): @@ -414,9 +420,9 @@ def convert_argument(self, space, w_obj, address): x = rffi.cast(rffi.VOIDPP, address) - obj_address = get_rawobject(space, w_obj) - x[0] = obj_address - typecode = _direct_ptradd(address, capi.c_function_arg_typeoffset()) + x[0] = rffi.cast(rffi.VOIDP, get_rawobject(space, w_obj)) + typecode = rffi.cast(rffi.CCHARP, + _direct_ptradd(address, capi.c_function_arg_typeoffset())) typecode[0] = 'a' def convert_argument_libffi(self, space, w_obj, argchain): @@ -428,9 +434,9 @@ def convert_argument(self, space, w_obj, address): x = rffi.cast(rffi.VOIDPP, address) - obj_address = 
get_rawobject(space, w_obj) - x[0] = obj_address - typecode = _direct_ptradd(address, capi.c_function_arg_typeoffset()) + x[0] = rffi.cast(rffi.VOIDP, get_rawobject(space, w_obj)) + typecode = rffi.cast(rffi.CCHARP, + _direct_ptradd(address, capi.c_function_arg_typeoffset())) typecode[0] = 'p' @@ -439,9 +445,9 @@ def convert_argument(self, space, w_obj, address): x = rffi.cast(rffi.VOIDPP, address) - obj_address = get_rawobject(space, w_obj) - x[0] = obj_address - typecode = _direct_ptradd(address, capi.c_function_arg_typeoffset()) + x[0] = rffi.cast(rffi.VOIDP, get_rawobject(space, w_obj)) + typecode = rffi.cast(rffi.CCHARP, + _direct_ptradd(address, capi.c_function_arg_typeoffset())) typecode[0] = 'r' @@ -525,7 +531,7 @@ offset = capi.c_base_offset( obj.cppclass.handle, self.cpptype.handle, obj.rawobject) obj_address = _direct_ptradd(obj.rawobject, offset) - return rffi.cast(rffi.VOIDP, obj_address) + return rffi.cast(capi.C_OBJECT, obj_address) raise OperationError(space.w_TypeError, space.wrap("cannot pass %s as %s" % ( space.type(w_obj).getname(space, "?"), @@ -533,8 +539,10 @@ def convert_argument(self, space, w_obj, address): x = rffi.cast(rffi.VOIDPP, address) - x[0] = self._unwrap_object(space, w_obj) - typecode = _direct_ptradd(address, capi.c_function_arg_typeoffset()) + x[0] = rffi.cast(rffi.VOIDP, self._unwrap_object(space, w_obj)) + address = rffi.cast(capi.C_OBJECT, address) + typecode = rffi.cast(rffi.CCHARP, + _direct_ptradd(address, capi.c_function_arg_typeoffset())) typecode[0] = 'o' def convert_argument_libffi(self, space, w_obj, argchain): diff --git a/pypy/module/cppyy/interp_cppyy.py b/pypy/module/cppyy/interp_cppyy.py --- a/pypy/module/cppyy/interp_cppyy.py +++ b/pypy/module/cppyy/interp_cppyy.py @@ -10,7 +10,7 @@ from pypy.rlib import libffi, rdynload, rweakref from pypy.rlib import jit, debug -from pypy.module.cppyy import converter, executor, helper +from pypy.module.cppyy import converter, executor, helper, capi class FastCallNotPossible(Exception): @@ -183,13 +183,13 @@ w_arg = args_w[i] try: arg_i = lltype.direct_ptradd(rffi.cast(rffi.CCHARP, args), i*stride) - conv.convert_argument(space, w_arg, rffi.cast(rffi.VOIDP, arg_i)) + conv.convert_argument(space, w_arg, rffi.cast(capi.C_OBJECT, arg_i)) except: # fun :-( for j in range(i): conv = self.arg_converters[j] arg_j = lltype.direct_ptradd(rffi.cast(rffi.CCHARP, args), j*stride) - conv.free_argument(rffi.cast(rffi.VOIDP, arg_j)) + conv.free_argument(rffi.cast(capi.C_OBJECT, arg_j)) capi.c_deallocate_function_args(args) raise return args @@ -200,7 +200,7 @@ for i in range(nargs): conv = self.arg_converters[i] arg_i = lltype.direct_ptradd(rffi.cast(rffi.CCHARP, args), i*stride) - conv.free_argument(rffi.cast(rffi.VOIDP, arg_i)) + conv.free_argument(rffi.cast(capi.C_OBJECT, arg_i)) capi.c_deallocate_function_args(args) def __repr__(self): From noreply at buildbot.pypy.org Sat Feb 4 05:39:20 2012 From: noreply at buildbot.pypy.org (wlav) Date: Sat, 4 Feb 2012 05:39:20 +0100 (CET) Subject: [pypy-commit] pypy reflex-support: o) default function values on fast path for integer types Message-ID: <20120204043920.0E586710770@wyvern.cs.uni-duesseldorf.de> Author: Wim Lavrijsen Branch: reflex-support Changeset: r52079:2d3f34563372 Date: 2012-02-03 16:12 -0800 http://bitbucket.org/pypy/pypy/changeset/2d3f34563372/ Log: o) default function values on fast path for integer types diff --git a/pypy/module/cppyy/capi/__init__.py b/pypy/module/cppyy/capi/__init__.py --- a/pypy/module/cppyy/capi/__init__.py +++ 
b/pypy/module/cppyy/capi/__init__.py @@ -168,6 +168,10 @@ "cppyy_method_arg_type", [C_TYPEHANDLE, rffi.INT, rffi.INT], rffi.CCHARP, compilation_info=backend.eci) +c_method_arg_default = rffi.llexternal( + "cppyy_method_arg_default", + [C_TYPEHANDLE, rffi.INT, rffi.INT], rffi.CCHARP, + compilation_info=backend.eci) c_is_constructor = rffi.llexternal( "cppyy_is_constructor", @@ -204,6 +208,11 @@ [C_TYPEHANDLE, rffi.INT], rffi.INT, compilation_info=backend.eci) +c_atoi = rffi.llexternal( + "cppyy_atoi", + [rffi.CCHARP], rffi.INT, + compilation_info=backend.eci) + c_free = rffi.llexternal( "cppyy_free", [rffi.VOIDP], lltype.Void, diff --git a/pypy/module/cppyy/converter.py b/pypy/module/cppyy/converter.py --- a/pypy/module/cppyy/converter.py +++ b/pypy/module/cppyy/converter.py @@ -32,7 +32,7 @@ name = "" - def __init__(self, space, array_size): + def __init__(self, space, extra): pass @jit.dont_look_inside @@ -221,6 +221,9 @@ _immutable_ = True libffitype = libffi.types.sint + def __init__(self, space, default): + self.default = capi.c_atoi(default) + def _unwrap_object(self, space, w_obj): return rffi.cast(rffi.INT, space.c_int_w(w_obj)) @@ -231,6 +234,9 @@ def convert_argument_libffi(self, space, w_obj, argchain): argchain.arg(self._unwrap_object(space, w_obj)) + def default_argument_libffi(self, space, argchain): + argchain.arg(self.default) + def from_memory(self, space, w_obj, w_type, offset): address = self._get_raw_address(space, w_obj, offset) intptr = rffi.cast(rffi.INTP, address) @@ -558,11 +564,13 @@ return interp_cppyy.new_instance(space, w_type, self.cpptype, address, False) -_converters = {} -def get_converter(space, name): +_converters = {} # builtin and custom types +_a_converters = {} # array and ptr versions of above +def get_converter(space, name, default): from pypy.module.cppyy import interp_cppyy # The matching of the name to a converter should follow: # 1) full, exact match + # 1a) const-removed match # 2) match of decorated, unqualified type # 3) accept const ref as by value # 4) accept ref as pointer @@ -573,7 +581,13 @@ # 1) full, exact match try: - return _converters[name](space, -1) + return _converters[name](space, default) + except KeyError, k: + pass + + # 1a) const-removed match + try: + return _converters[helper.remove_const(name)](space, default) except KeyError, k: pass @@ -583,14 +597,14 @@ try: # array_index may be negative to indicate no size or no size found array_size = helper.array_size(name) - return _converters[clean_name+compound](space, array_size) + return _a_converters[clean_name+compound](space, array_size) except KeyError, k: pass # 3) accept const ref as by value if compound and compound[len(compound)-1] == "&": try: - return _converters[clean_name](space, -1) + return _converters[clean_name](space, default) except KeyError: pass @@ -617,42 +631,44 @@ _converters["unsigned char"] = CharConverter _converters["short int"] = ShortConverter _converters["short"] = _converters["short int"] -_converters["short int*"] = ShortPtrConverter -_converters["short*"] = _converters["short int*"] -_converters["short int[]"] = ShortArrayConverter -_converters["short[]"] = _converters["short int[]"] _converters["unsigned short int"] = ShortConverter _converters["unsigned short"] = _converters["unsigned short int"] -_converters["unsigned short int*"] = ShortPtrConverter -_converters["unsigned short*"] = _converters["unsigned short int*"] -_converters["unsigned short int[]"] = ShortArrayConverter -_converters["unsigned short[]"] = _converters["unsigned short 
int[]"] _converters["int"] = IntConverter -_converters["int*"] = IntPtrConverter -_converters["int[]"] = IntArrayConverter _converters["unsigned int"] = UnsignedIntConverter -_converters["unsigned int*"] = UnsignedIntPtrConverter -_converters["unsigned int[]"] = UnsignedIntArrayConverter _converters["long int"] = LongConverter _converters["long"] = _converters["long int"] -_converters["long int*"] = LongPtrConverter -_converters["long*"] = _converters["long int*"] -_converters["long int[]"] = LongArrayConverter -_converters["long[]"] = _converters["long int[]"] _converters["unsigned long int"] = UnsignedLongConverter _converters["unsigned long"] = _converters["unsigned long int"] -_converters["unsigned long int*"] = LongPtrConverter -_converters["unsigned long*"] = _converters["unsigned long int*"] -_converters["unsigned long int[]"] = LongArrayConverter -_converters["unsigned long[]"] = _converters["unsigned long int[]"] _converters["float"] = FloatConverter -_converters["float*"] = FloatPtrConverter -_converters["float[]"] = FloatArrayConverter _converters["double"] = DoubleConverter -_converters["double*"] = DoublePtrConverter -_converters["double[]"] = DoubleArrayConverter _converters["const char*"] = CStringConverter _converters["char*"] = CStringConverter _converters["void*"] = VoidPtrConverter _converters["void**"] = VoidPtrPtrConverter _converters["void*&"] = VoidPtrRefConverter + +# it should be possible to generate these: +_a_converters["short int*"] = ShortPtrConverter +_a_converters["short*"] = _a_converters["short int*"] +_a_converters["short int[]"] = ShortArrayConverter +_a_converters["short[]"] = _a_converters["short int[]"] +_a_converters["unsigned short int*"] = ShortPtrConverter +_a_converters["unsigned short*"] = _a_converters["unsigned short int*"] +_a_converters["unsigned short int[]"] = ShortArrayConverter +_a_converters["unsigned short[]"] = _a_converters["unsigned short int[]"] +_a_converters["int*"] = IntPtrConverter +_a_converters["int[]"] = IntArrayConverter +_a_converters["unsigned int*"] = UnsignedIntPtrConverter +_a_converters["unsigned int[]"] = UnsignedIntArrayConverter +_a_converters["long int*"] = LongPtrConverter +_a_converters["long*"] = _a_converters["long int*"] +_a_converters["long int[]"] = LongArrayConverter +_a_converters["long[]"] = _a_converters["long int[]"] +_a_converters["unsigned long int*"] = LongPtrConverter +_a_converters["unsigned long*"] = _a_converters["unsigned long int*"] +_a_converters["unsigned long int[]"] = LongArrayConverter +_a_converters["unsigned long[]"] = _a_converters["unsigned long int[]"] +_a_converters["float*"] = FloatPtrConverter +_a_converters["float[]"] = FloatArrayConverter +_a_converters["double*"] = DoublePtrConverter +_a_converters["double[]"] = DoubleArrayConverter diff --git a/pypy/module/cppyy/helper.py b/pypy/module/cppyy/helper.py --- a/pypy/module/cppyy/helper.py +++ b/pypy/module/cppyy/helper.py @@ -5,6 +5,9 @@ def _remove_const(name): return "".join(rstring.split(name, "const")) # poor man's replace +def remove_const(name): + return _remove_const(name).strip(' ') + def compound(name): name = _remove_const(name) if name.endswith("]"): # array type? 
diff --git a/pypy/module/cppyy/include/capi.h b/pypy/module/cppyy/include/capi.h --- a/pypy/module/cppyy/include/capi.h +++ b/pypy/module/cppyy/include/capi.h @@ -6,7 +6,7 @@ #ifdef __cplusplus extern "C" { #endif // ifdef __cplusplus - typedef long cppyy_typehandle_t; + typedef void* cppyy_typehandle_t; typedef void* cppyy_object_t; typedef void* (*cppyy_methptrgetter_t)(cppyy_object_t); @@ -55,6 +55,7 @@ int cppyy_method_num_args(cppyy_typehandle_t handle, int method_index); int cppyy_method_req_args(cppyy_typehandle_t handle, int method_index); char* cppyy_method_arg_type(cppyy_typehandle_t handle, int method_index, int index); + char* cppyy_method_arg_default(cppyy_typehandle_t handle, int method_index, int index); /* method properties */ int cppyy_is_constructor(cppyy_typehandle_t handle, int method_index); @@ -70,8 +71,9 @@ int cppyy_is_publicdata(cppyy_typehandle_t handle, int data_member_index); int cppyy_is_staticdata(cppyy_typehandle_t handle, int data_member_index); - /* misc helper */ + /* misc helpers */ void cppyy_free(void* ptr); + int cppyy_atoi(const char* str); #ifdef __cplusplus } diff --git a/pypy/module/cppyy/interp_cppyy.py b/pypy/module/cppyy/interp_cppyy.py --- a/pypy/module/cppyy/interp_cppyy.py +++ b/pypy/module/cppyy/interp_cppyy.py @@ -93,13 +93,13 @@ class CPPMethod(object): """ A concrete function after overloading has been resolved """ _immutable_ = True - _immutable_fields_ = ["arg_types[*]", "arg_converters[*]"] + _immutable_fields_ = ["arg_defs[*]", "arg_converters[*]"] - def __init__(self, cpptype, method_index, result_type, arg_types, args_required): + def __init__(self, cpptype, method_index, result_type, arg_defs, args_required): self.cpptype = cpptype self.space = cpptype.space self.method_index = method_index - self.arg_types = arg_types + self.arg_defs = arg_defs self.args_required = args_required self.executor = executor.get_executor(self.space, result_type) self.arg_converters = None @@ -114,7 +114,7 @@ if self.executor is None: raise OperationError(self.space.w_TypeError, self.space.wrap("return type not handled")) - if len(self.arg_types) < len(args_w) or len(args_w) < self.args_required: + if len(self.arg_defs) < len(args_w) or len(args_w) < self.args_required: raise OperationError(self.space.w_TypeError, self.space.wrap("wrong number of arguments")) if self.methgetter and cppthis: # only for methods @@ -142,10 +142,14 @@ argchain = libffi.ArgChain() argchain.arg(cppthis) + i = len(self.arg_defs) for i in range(len(args_w)): conv = self.arg_converters[i] w_arg = args_w[i] conv.convert_argument_libffi(space, w_arg, argchain) + for j in range(i+1, len(self.arg_defs)): + conv = self.arg_converters[j] + conv.default_argument_libffi(space, argchain) return self.executor.execute_libffi(space, w_type, libffi_func, argchain) @jit.elidable_promote() @@ -167,8 +171,8 @@ return libffifunc def _build_converters(self): - self.arg_converters = [converter.get_converter(self.space, arg_type) - for arg_type in self.arg_types] + self.arg_converters = [converter.get_converter(self.space, arg_type, arg_dflt) + for arg_type, arg_dflt in self.arg_defs] @jit.unroll_safe def prepare_arguments(self, args_w): @@ -205,7 +209,7 @@ def __repr__(self): return "CPPFunction(%s, %s, %r, %s)" % ( - self.cpptype, self.method_index, self.executor, self.arg_types) + self.cpptype, self.method_index, self.executor, self.arg_defs) def _freeze_(self): assert 0, "you should never have a pre-built instance of this!" 
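
In the interp_cppyy changes above, CPPMethod now keeps (type, default) pairs in arg_defs, and the libffi fast path tops up the argument chain from those defaults for any trailing parameters the caller left out (integer defaults are parsed with the new cppyy_atoi helper when the converter is built). A toy model of that fill-the-tail step, with a plain list standing in for the libffi ArgChain:

# Toy model only: arg_defs pairs each parameter with an already-parsed
# default value; the returned list stands in for libffi's ArgChain.
def build_argchain(arg_defs, args_given):
    argchain = list(args_given)                 # arguments the caller supplied
    for _, default in arg_defs[len(args_given):]:
        argchain.append(default)                # fill the tail with defaults
    return argchain

# shaped like example01's intValue(int arg0, int argn=0, int arg1=1, int arg2=2)
arg_defs = [("int", 0), ("int", 0), ("int", 1), ("int", 2)]
assert build_argchain(arg_defs, [11]) == [11, 0, 1, 2]
assert build_argchain(arg_defs, [11, 1, 12]) == [11, 1, 12, 2]
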
@@ -302,7 +306,7 @@ self.space = space assert lltype.typeOf(scope_handle) == capi.C_TYPEHANDLE self.scope_handle = scope_handle - self.converter = converter.get_converter(self.space, type_name) + self.converter = converter.get_converter(self.space, type_name, '') self.offset = offset self._is_static = is_static @@ -415,11 +419,12 @@ result_type = capi.charp2str_free(capi.c_method_result_type(self.handle, method_index)) num_args = capi.c_method_num_args(self.handle, method_index) args_required = capi.c_method_req_args(self.handle, method_index) - argtypes = [] + arg_defs = [] for i in range(num_args): - argtype = capi.charp2str_free(capi.c_method_arg_type(self.handle, method_index, i)) - argtypes.append(argtype) - return CPPFunction(self, method_index, result_type, argtypes, args_required) + arg_type = capi.charp2str_free(capi.c_method_arg_type(self.handle, method_index, i)) + arg_dflt = capi.charp2str_free(capi.c_method_arg_default(self.handle, method_index, i)) + arg_defs.append((arg_type, arg_dflt)) + return CPPFunction(self, method_index, result_type, arg_defs, args_required) def _find_data_members(self): num_data_members = capi.c_num_data_members(self.handle) @@ -460,10 +465,11 @@ result_type = capi.charp2str_free(capi.c_method_result_type(self.handle, method_index)) num_args = capi.c_method_num_args(self.handle, method_index) args_required = capi.c_method_req_args(self.handle, method_index) - argtypes = [] + arg_defs = [] for i in range(num_args): - argtype = capi.charp2str_free(capi.c_method_arg_type(self.handle, method_index, i)) - argtypes.append(argtype) + arg_type = capi.charp2str_free(capi.c_method_arg_type(self.handle, method_index, i)) + arg_dflt = capi.charp2str_free(capi.c_method_arg_default(self.handle, method_index, i)) + arg_defs.append((arg_type, arg_dflt)) if capi.c_is_constructor(self.handle, method_index): result_type = "void" # b/c otherwise CINT v.s. 
Reflex difference cls = CPPConstructor @@ -471,7 +477,7 @@ cls = CPPFunction else: cls = CPPMethod - return cls(self, method_index, result_type, argtypes, args_required) + return cls(self, method_index, result_type, arg_defs, args_required) def _find_data_members(self): num_data_members = capi.c_num_data_members(self.handle) diff --git a/pypy/module/cppyy/src/cintcwrapper.cxx b/pypy/module/cppyy/src/cintcwrapper.cxx --- a/pypy/module/cppyy/src/cintcwrapper.cxx +++ b/pypy/module/cppyy/src/cintcwrapper.cxx @@ -462,6 +462,10 @@ /* misc helpers ----------------------------------------------------------- */ +int cppyy_atoi(const char* str) { + return atoi(str); +} + void cppyy_free(void* ptr) { free(ptr); } diff --git a/pypy/module/cppyy/src/reflexcwrapper.cxx b/pypy/module/cppyy/src/reflexcwrapper.cxx --- a/pypy/module/cppyy/src/reflexcwrapper.cxx +++ b/pypy/module/cppyy/src/reflexcwrapper.cxx @@ -14,6 +14,8 @@ #include #include +#include + /* local helpers ---------------------------------------------------------- */ static inline char* cppstring_to_cstring(const std::string& name) { @@ -296,6 +298,13 @@ return cppstring_to_cstring(name); } +char* cppyy_method_arg_default(cppyy_typehandle_t handle, int method_index, int arg_index) { + Reflex::Scope s = scope_from_handle(handle); + Reflex::Member m = s.FunctionMemberAt(method_index); + std::string dflt = m.FunctionParameterDefaultAt(arg_index); + return cppstring_to_cstring(dflt); +} + int cppyy_is_constructor(cppyy_typehandle_t handle, int method_index) { Reflex::Scope s = scope_from_handle(handle); @@ -350,7 +359,11 @@ } -/* misc helper ------------------------------------------------------------ */ +/* misc helpers ----------------------------------------------------------- */ +int cppyy_atoi(const char* str) { + return atoi(str); +} + void cppyy_free(void* ptr) { free(ptr); } diff --git a/pypy/module/cppyy/test/example01.cxx b/pypy/module/cppyy/test/example01.cxx --- a/pypy/module/cppyy/test/example01.cxx +++ b/pypy/module/cppyy/test/example01.cxx @@ -137,5 +137,42 @@ } +// argument passing +int ArgPasser::intValue(int arg0, int argn, int arg1, int arg2) +{ + switch (argn) { + case 0: + return arg0; + case 1: + return arg1; + case 2: + return arg2; + default: + break; + } + + return -1; +} + +std::string ArgPasser::stringValue(std::string arg0, int argn, std::string arg1) +{ + switch (argn) { + case 0: + return arg0; + case 1: + return arg1; + default: + break; + } + + return "argn invalid"; +} + +std::string ArgPasser::stringRef(const std::string& arg0, int argn, const std::string& arg1) +{ + return stringValue(arg0, argn, arg1); +} + + // special case naming z_& z_::gime_z_(z_& z) { return z; } diff --git a/pypy/module/cppyy/test/example01.h b/pypy/module/cppyy/test/example01.h --- a/pypy/module/cppyy/test/example01.h +++ b/pypy/module/cppyy/test/example01.h @@ -1,3 +1,5 @@ +#include + class payload { public: payload(double d = 0.); @@ -60,6 +62,19 @@ } +// argument passing +class ArgPasser { // use a class for now as methptrgetter not +public: // implemented for global functions + int intValue(int arg0, int argn=0, int arg1=1, int arg2=2); + + std::string stringValue( + std::string arg0, int argn=0, std::string arg1 = "default"); + + std::string stringRef( + const std::string& arg0, int argn=0, const std::string& arg1="default"); +}; + + // special case naming class z_ { public: diff --git a/pypy/module/cppyy/test/example01.xml b/pypy/module/cppyy/test/example01.xml --- a/pypy/module/cppyy/test/example01.xml +++ 
b/pypy/module/cppyy/test/example01.xml @@ -2,6 +2,7 @@ + @@ -9,4 +10,6 @@ + + diff --git a/pypy/module/cppyy/test/test_cppyy.py b/pypy/module/cppyy/test/test_cppyy.py --- a/pypy/module/cppyy/test/test_cppyy.py +++ b/pypy/module/cppyy/test/test_cppyy.py @@ -24,7 +24,7 @@ adddouble = w_cppyyclass.methods["staticAddToDouble"] func, = adddouble.functions assert isinstance(func.executor, executor.DoubleExecutor) - assert func.arg_types == ["double"] + assert func.arg_defs == [("double", "")] class AppTestCPPYY: diff --git a/pypy/module/cppyy/test/test_helper.py b/pypy/module/cppyy/test/test_helper.py --- a/pypy/module/cppyy/test/test_helper.py +++ b/pypy/module/cppyy/test/test_helper.py @@ -1,5 +1,8 @@ from pypy.module.cppyy import helper +def test_remove_const(): + assert helper.remove_const("const int") == "int" + def test_compound(): assert helper.compound("int*") == "*" assert helper.compound("int* const *&") == "**&" diff --git a/pypy/module/cppyy/test/test_pythonify.py b/pypy/module/cppyy/test/test_pythonify.py --- a/pypy/module/cppyy/test/test_pythonify.py +++ b/pypy/module/cppyy/test/test_pythonify.py @@ -37,7 +37,7 @@ cl2 = cppyy.gbl.example01 assert example01_class is cl2 - raises(AttributeError, "cppyy.gbl.nonexistingclass") + raises(AttributeError, 'cppyy.gbl.nonexistingclass') def test03_calling_static_functions(self): """Test calling of static methods.""" @@ -244,7 +244,30 @@ # TODO: need ReferenceError on touching pl_a - def test10_underscore_in_class_name(self): + def test10_default_arguments(self): + """Test propagation of default function arguments""" + + import cppyy + a = cppyy.gbl.ArgPasser() + + # NOTE: when called through the stub, default args are fine + f = a.stringRef + s = cppyy.gbl.std.string + assert f(s("aap"), 0, s("noot")).c_str() == "aap" + assert f(s("noot"), 1).c_str() == "default" + assert f(s("mies")).c_str() == "mies" + + g = a.intValue + raises(TypeError, 'g(1, 2, 3, 4, 6)') + assert g(11, 0, 12, 13) == 11 + assert g(11, 1, 12, 13) == 12 + assert g(11, 1, 12) == 12 + assert g(11, 2, 12) == 2 + assert g(11, 1) == 1 + assert g(11, 2) == 2 + assert g(11) == 11 + + def test12_underscore_in_class_name(self): """Test recognition of '_' as part of a valid class name""" import cppyy @@ -255,4 +278,3 @@ assert hasattr(z, 'myint') assert z.gime_z_(z) - diff --git a/pypy/rlib/libffi.py b/pypy/rlib/libffi.py --- a/pypy/rlib/libffi.py +++ b/pypy/rlib/libffi.py @@ -238,7 +238,7 @@ self = jit.promote(self) if argchain.numargs != len(self.argtypes): raise TypeError, 'Wrong number of arguments: %d expected, got %d' %\ - (argchain.numargs, len(self.argtypes)) + (len(self.argtypes), argchain.numargs) ll_args = self._prepare() i = 0 arg = argchain.first From noreply at buildbot.pypy.org Sat Feb 4 11:44:19 2012 From: noreply at buildbot.pypy.org (fijal) Date: Sat, 4 Feb 2012 11:44:19 +0100 (CET) Subject: [pypy-commit] pypy default: oops, typo Message-ID: <20120204104419.C3504710770@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: Changeset: r52080:3d464cea29ac Date: 2012-02-04 12:43 +0200 http://bitbucket.org/pypy/pypy/changeset/3d464cea29ac/ Log: oops, typo diff --git a/pypy/module/micronumpy/test/test_zjit.py b/pypy/module/micronumpy/test/test_zjit.py --- a/pypy/module/micronumpy/test/test_zjit.py +++ b/pypy/module/micronumpy/test/test_zjit.py @@ -199,7 +199,7 @@ assert result == 1 self.check_simple_loop({"getinteriorfield_raw": 2, "float_add": 1, "int_and": 1, "int_add": 1, - 'convert_float_to_int': 1, + 'cast_float_to_int': 1, "int_ge": 1, "jump": 1, 
"guard_false": 2, 'arraylen_gc': 1}) From noreply at buildbot.pypy.org Sat Feb 4 11:55:46 2012 From: noreply at buildbot.pypy.org (fijal) Date: Sat, 4 Feb 2012 11:55:46 +0100 (CET) Subject: [pypy-commit] benchmarks default: hopefully fix uploading Message-ID: <20120204105546.D838D710770@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: Changeset: r173:bafaa81907cc Date: 2012-02-04 12:55 +0200 http://bitbucket.org/pypy/benchmarks/changeset/bafaa81907cc/ Log: hopefully fix uploading diff --git a/runner.py b/runner.py --- a/runner.py +++ b/runner.py @@ -52,7 +52,7 @@ def get_upload_options(options): - ''' + """ returns a dict with 2 keys: CHANGED, BASELINE. The values are dicts with the keys * 'upload' (boolean) @@ -67,7 +67,7 @@ raises: AssertionError if upload is specified, but not the corresponding executable or revision. - ''' + """ if options.upload_baseline_revision is None: options.upload_baseline_revision = options.upload_revision @@ -281,7 +281,7 @@ urls = upload_options[run]['urls'] project = upload_options[run]['project'] executable = upload_options[run]['executable'] - branch = upload_options[run]['branch'] + branch = upload_options[run]['branch'] or 'default' revision = upload_options[run]['revision'] if upload: From noreply at buildbot.pypy.org Sat Feb 4 13:23:53 2012 From: noreply at buildbot.pypy.org (arigo) Date: Sat, 4 Feb 2012 13:23:53 +0100 (CET) Subject: [pypy-commit] pypy stm-gc: End-of-transaction collections. Message-ID: <20120204122353.25ABB710770@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-gc Changeset: r52081:94c8bc20d9e0 Date: 2012-02-03 18:07 +0100 http://bitbucket.org/pypy/pypy/changeset/94c8bc20d9e0/ Log: End-of-transaction collections. diff --git a/pypy/rpython/memory/gc/stmgc.py b/pypy/rpython/memory/gc/stmgc.py --- a/pypy/rpython/memory/gc/stmgc.py +++ b/pypy/rpython/memory/gc/stmgc.py @@ -3,7 +3,8 @@ from pypy.rpython.lltypesystem.llmemory import raw_malloc_usage from pypy.rpython.memory.gc.base import GCBase from pypy.rlib.rarithmetic import LONG_BIT -from pypy.rlib.debug import ll_assert +from pypy.rlib.debug import ll_assert, debug_start, debug_stop +from pypy.module.thread import ll_thread WORD = LONG_BIT // 8 @@ -37,7 +38,7 @@ malloc_zero_filled = True # xxx? HDR = lltype.Struct('header', ('tid', lltype.Signed), - ('version', lltype.Signed)) + ('version', llmemory.Address)) typeid_is_in_field = 'tid' withhash_flag_is_in_field = 'tid', 'XXX' @@ -45,14 +46,16 @@ ('nursery_top', llmemory.Address), ('nursery_start', llmemory.Address), ('nursery_size', lltype.Signed), - ('malloc_flags', lltype.Signed)) - + ('malloc_flags', lltype.Signed), + ('pending_list', llmemory.Address), + ) def __init__(self, config, stm_operations, max_nursery_size=1024, **kwds): GCBase.__init__(self, config, **kwds) self.stm_operations = stm_operations + self.collector = Collector(self) self.max_nursery_size = max_nursery_size # self.declare_readers() @@ -62,6 +65,7 @@ """Called at run-time to initialize the GC.""" GCBase.setup(self) self.main_thread_tls = self.setup_thread(True) + self.mutex_lock = ll_thread.allocate_ll_lock() def _alloc_nursery(self): nursery = llarena.arena_malloc(self.max_nursery_size, 1) @@ -73,6 +77,8 @@ llarena.arena_free(nursery) def setup_thread(self, in_main_thread): + """Setup a thread. Allocates the thread-local data structures. 
+ Must be called only once per OS-level thread.""" tls = lltype.malloc(self.GCTLS, flavor='raw') self.stm_operations.set_tls(self, llmemory.cast_ptr_to_adr(tls)) tls.nursery_start = self._alloc_nursery() @@ -90,19 +96,25 @@ tls.malloc_flags = 0 return tls + @staticmethod + def reset_nursery(tls): + """Clear and forget all locally allocated objects.""" + size = tls.nursery_free - tls.nursery_start + llarena.arena_reset(tls.nursery_start, size, 2) + tls.nursery_free = tls.nursery_start + def teardown_thread(self): - tls = self.get_tls() + """Teardown a thread. Call this just before the OS-level thread + disappears.""" + tls = self.collector.get_tls() self.stm_operations.set_tls(self, NULL) self._free_nursery(tls.nursery_start) lltype.free(tls, flavor='raw') - @always_inline - def get_tls(self): - tls = self.stm_operations.get_tls() - return llmemory.cast_adr_to_ptr(tls, lltype.Ptr(self.GCTLS)) + # ---------- def allocate_bump_pointer(self, size): - return self._allocate_bump_pointer(self.get_tls(), size) + return self._allocate_bump_pointer(self.collector.get_tls(), size) @always_inline def _allocate_bump_pointer(self, tls, size): @@ -129,7 +141,7 @@ # Check the mode: either in a transactional thread, or in # the main thread. For now we do the same thing in both # modes, but set different flags. - tls = self.get_tls() + tls = self.collector.get_tls() flags = tls.malloc_flags # # Get the memory from the nursery. @@ -145,12 +157,12 @@ return llmemory.cast_adr_to_ptr(obj, llmemory.GCREF) - def _malloc_local_raw(self, size): + def _malloc_local_raw(self, tls, size): # for _stm_write_barrier_global(): a version of malloc that does # no initialization of the malloc'ed object size_gc_header = self.gcheaderbuilder.size_gc_header totalsize = size_gc_header + size - result = self.allocate_bump_pointer(totalsize) + result = self._allocate_bump_pointer(tls, totalsize) llarena.arena_reserve(result, totalsize) obj = result + size_gc_header return obj @@ -229,8 +241,9 @@ # # Here, we need to really make a local copy size = self.get_size(obj) + tls = self.collector.get_tls() try: - localobj = self._malloc_local_raw(size) + localobj = self._malloc_local_raw(tls, size) except MemoryError: # XXX fatalerror("MemoryError in _stm_write_barrier_global -- sorry") @@ -252,7 +265,189 @@ # Remove the GCFLAG_GLOBAL from the copy localhdr.tid &= ~GCFLAG_GLOBAL # + # Set the 'version' field of the local copy to be a pointer + # to the global obj. (The field is called 'version' because + # of its use by the C STM library: on global objects (only), + # it is a version number.) + localhdr.version = obj + # # Register the object as a valid copy stm_operations.tldict_add(obj, localobj) # return localobj + + # ---------- + + def acquire(self, lock): + ll_thread.c_thread_acquirelock(lock, 1) + + def release(self, lock): + ll_thread.c_thread_releaselock(lock) + + +# ------------------------------------------------------------ + + +class Collector(object): + """A separate frozen class. Useful to prevent any buggy concurrent + access to GC data. The methods here use the GCTLS instead for + storing things in a thread-local way.""" + + def __init__(self, gc): + self.gc = gc + self.stm_operations = gc.stm_operations + + def _freeze_(self): + return True + + def get_tls(self): + tls = self.stm_operations.get_tls() + return llmemory.cast_adr_to_ptr(tls, lltype.Ptr(StmGC.GCTLS)) + + def is_in_nursery(self, tls, addr): + ll_assert(llmemory.cast_adr_to_int(addr) & 1 == 0, + "odd-valued (i.e. 
tagged) pointer unexpected here") + return tls.nursery_start <= addr < tls.nursery_top + + def header(self, obj): + return self.gc.header(obj) + + + def start_transaction(self): + """Start a transaction, by clearing and resetting the tls nursery.""" + tls = self.get_tls() + self.gc.reset_nursery(tls) + + + def commit_transaction(self): + """End of a transaction, just before its end. No more GC + operations should occur afterwards! Note that the C code that + does the commit runs afterwards, and may still abort.""" + # + debug_start("gc-collect-commit") + # + tls = self.get_tls() + # + # Do a mark-and-move minor collection out of the tls' nursery + # into the main thread's global area (which is right now also + # called a nursery). To simplify things, we use a global lock + # around the whole mark-and-move. + self.gc.acquire(self.gc.mutex_lock) + # + # We are starting from the tldict's local objects as roots. At + # this point, these objects have GCFLAG_WAS_COPIED, and the other + # local objects don't. We want to move all reachable local objects + # to the global area. + # + # Start from tracing the root objects + self.collect_roots_from_tldict(tls) + # + # Continue iteratively until we have reached all the reachable + # local objects + self.collect_from_pending_list(tls) + # + self.gc.release(self.gc.mutex_lock) + # + # Now, all indirectly reachable local objects have been copied into + # the global area, and all pointers have been fixed to point to the + # global copies, including in the local copy of the roots. What + # remains is only overwriting of the global copy of the roots. + # This is done by the C code. + debug_stop("gc-collect-commit") + + + def collect_roots_from_tldict(self, tls): + tls.pending_list = NULL + # Enumerate the roots, which are the local copies of global objects. + # For each root, trace it. + self.stm_operations.enum_tldict_start() + while self.stm_operations.enum_tldict_find_next(): + globalobj = self.stm_operations.enum_tldict_globalobj() + localobj = self.stm_operations.enum_tldict_localobj() + # + localhdr = self.header(localobj) + ll_assert(localhdr.version == globalobj, + "in a root: localobj.version != globalobj") + ll_assert(localhdr.tid & GCFLAG_GLOBAL == 0, + "in a root: unexpected GCFLAG_GLOBAL") + ll_assert(localhdr.tid & GCFLAG_WAS_COPIED != 0, + "in a root: missing GCFLAG_WAS_COPIED") + # + self.trace_and_drag_out_of_nursery(tls, localobj) + + + def collect_from_pending_list(self, tls): + while tls.pending_list != NULL: + pending_obj = tls.pending_list + pending_hdr = self.header(pending_obj) + # + # 'pending_list' is a chained list of fresh global objects, + # linked together via their 'version' field. The 'version' + # must be replaced with NULL after we pop the object from + # the linked list. + tls.pending_list = pending_hdr.version + pending_hdr.version = NULL + # + # Check the flags of pending_obj: it should be a fresh global + # object, without GCFLAG_WAS_COPIED + ll_assert(pending_hdr.tid & GCFLAG_GLOBAL != 0, + "from pending list: missing GCFLAG_GLOBAL") + ll_assert(pending_hdr.tid & GCFLAG_WAS_COPIED == 0, + "from pending list: unexpected GCFLAG_WAS_COPIED") + # + self.trace_and_drag_out_of_nursery(tls, pending_obj) + + + def trace_and_drag_out_of_nursery(self, tls, obj): + # This is called to fix the references inside 'obj', to ensure that + # they are global. If necessary, the referenced objects are copied + # into the global area first. 
This is called on the *local* copy of + # the roots, and on the fresh *global* copy of all other reached + # objects. + self.gc.trace(obj, self._trace_drag_out, tls) + + def _trace_drag_out(self, root, tls): + obj = root.address[0] + hdr = self.header(obj) + # + # Figure out if the object is GLOBAL or not by looking at its + # address, not at its header --- to avoid cache misses and + # pollution for all global objects + if not self.is_in_nursery(tls, obj): + ll_assert(hdr.tid & GCFLAG_GLOBAL != 0, + "trace_and_mark: non-GLOBAL obj is not in nursery") + return # ignore global objects + # + ll_assert(hdr.tid & GCFLAG_GLOBAL == 0, + "trace_and_mark: GLOBAL obj in nursery") + # + if hdr.tid & GCFLAG_WAS_COPIED != 0: + # this local object is a root or was already marked. Either + # way, its 'version' field should point to the corresponding + # global object. + globalobj = hdr.version + # + else: + # First visit to a local-only 'obj': copy it into the global area + size = self.gc.get_size(obj) + main_tls = self.gc.main_thread_tls + globalobj = self.gc._malloc_local_raw(main_tls, size) + llmemory.raw_memcopy(obj, globalobj, size) + # + # Initialize the header of the 'globalobj' + globalhdr = self.header(globalobj) + globalhdr.tid = hdr.tid | GCFLAG_GLOBAL + # + # Add the flags to 'localobj' to say 'has been copied now' + hdr.tid |= GCFLAG_WAS_COPIED + hdr.version = globalobj + # + # Set a temporary linked list through the globalobj's version + # numbers. This is normally not allowed, but it works here + # because these new globalobjs are not visible to any other + # thread before the commit is really complete. + globalhdr.version = tls.pending_list + tls.pending_list = globalobj + # + # Fix the original root.address[0] to point to the globalobj + root.address[0] = globalobj diff --git a/pypy/rpython/memory/gc/test/test_stmgc.py b/pypy/rpython/memory/gc/test/test_stmgc.py --- a/pypy/rpython/memory/gc/test/test_stmgc.py +++ b/pypy/rpython/memory/gc/test/test_stmgc.py @@ -3,11 +3,23 @@ from pypy.rpython.memory.gc.stmgc import GCFLAG_GLOBAL, GCFLAG_WAS_COPIED -S = lltype.GcStruct('S', ('a', lltype.Signed), ('b', lltype.Signed)) +S = lltype.GcStruct('S', ('a', lltype.Signed), ('b', lltype.Signed), + ('c', lltype.Signed)) ofs_a = llmemory.offsetof(S, 'a') +SR = lltype.GcForwardReference() +SR.become(lltype.GcStruct('SR', ('s1', lltype.Ptr(S)), + ('sr2', lltype.Ptr(SR)), + ('sr3', lltype.Ptr(SR)))) + class FakeStmOperations: + # The point of this class is to make sure about the distinction between + # RPython code in the GC versus C code in translator/stm/src_stm. This + # class contains a fake implementation of what should be in C. So almost + # any use of 'self._gc' is wrong here: it's stmgc.py that should call + # et.c, and not the other way around. + threadnum = 0 # 0 = main thread; 1,2,3... 
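
The collector added in this changeset is essentially a small worklist algorithm: start from the local copies registered in the thread's tldict, copy every reachable nursery object into the main thread's global area, and chain the fresh global copies through their version field until nothing is pending. A condensed sketch of that loop, using ordinary Python objects in place of GC headers and addresses:

# Condensed sketch of the commit-time collection; 'refs' stands in for the
# pointers found by gc.trace(), and 'global_copy' plays the role of the
# local header's 'version' field pointing at the dragged-out copy.
class Obj(object):
    def __init__(self, name, is_global=False):
        self.name = name
        self.is_global = is_global
        self.refs = []
        self.global_copy = None

def commit_collect(local_roots):
    pending = []                           # freshly created global copies

    def drag_out(target):
        if target.is_global:
            return target                  # already global: leave it alone
        if target.global_copy is None:
            g = Obj(target.name + "_g", is_global=True)   # copy into the
            g.refs = list(target.refs)                    # global area
            target.global_copy = g
            pending.append(g)              # its own refs still need fixing
        return target.global_copy

    for root in local_roots:               # roots: local copies from the tldict
        root.refs = [drag_out(t) for t in root.refs]
    while pending:                         # drain the worklist
        g = pending.pop()
        g.refs = [drag_out(t) for t in g.refs]

# tiny demo: one local root pointing at a chain of two local objects
a, b, c = Obj("a"), Obj("b"), Obj("c")
a.refs = [b]
b.refs = [c]
commit_collect([a])
assert a.refs[0].is_global and a.refs[0].refs[0].is_global
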
= transactional threads def set_tls(self, gc, tls): @@ -17,6 +29,7 @@ assert not hasattr(self, '_gc') self._tls_dict = {0: tls} self._tldicts = {0: {}} + self._tldicts_iterators = {} self._gc = gc self._transactional_copies = [] else: @@ -39,6 +52,32 @@ assert obj not in tldict tldict[obj] = localobj + def enum_tldict_start(self): + it = self._tldicts[self.threadnum].iteritems() + self._tldicts_iterators[self.threadnum] = [it, None, None] + + def enum_tldict_find_next(self): + state = self._tldicts_iterators[self.threadnum] + try: + next_key, next_value = state[0].next() + except StopIteration: + state[1] = None + state[2] = None + return False + state[1] = next_key + state[2] = next_value + return True + + def enum_tldict_globalobj(self): + state = self._tldicts_iterators[self.threadnum] + assert state[1] is not None + return state[1] + + def enum_tldict_localobj(self): + state = self._tldicts_iterators[self.threadnum] + assert state[2] is not None + return state[2] + class stm_read_word: def __init__(self, obj, offset): self.obj = obj @@ -67,6 +106,21 @@ else: assert 0 +def fake_trace(obj, callback, arg): + TYPE = obj.ptr._TYPE.TO + if TYPE == S: + ofslist = [] # no pointers in S + elif TYPE == SR: + ofslist = [llmemory.offsetof(SR, 's1'), + llmemory.offsetof(SR, 'sr2'), + llmemory.offsetof(SR, 'sr3')] + else: + assert 0 + for ofs in ofslist: + addr = obj + ofs + if addr.address[0]: + callback(addr, arg) + class TestBasic: GCClass = StmGC @@ -78,6 +132,7 @@ translated_to_c=False) self.gc.DEBUG = True self.gc.get_size = fake_get_size + self.gc.trace = fake_trace self.gc.setup() def teardown_method(self, meth): @@ -97,6 +152,9 @@ self.gc.stm_operations.threadnum = threadnum if threadnum not in self.gc.stm_operations._tls_dict: self.gc.setup_thread(False) + def gcsize(self, S): + return (llmemory.raw_malloc_usage(llmemory.sizeof(self.gc.HDR)) + + llmemory.raw_malloc_usage(llmemory.sizeof(S))) def test_gc_creation_works(self): pass @@ -193,3 +251,94 @@ # u_adr = self.gc.write_barrier(u_adr) # local object assert u_adr == t_adr + + def test_commit_transaction_empty(self): + self.select_thread(1) + s, s_adr = self.malloc(S) + t, t_adr = self.malloc(S) + self.gc.collector.commit_transaction() # no roots + main_tls = self.gc.main_thread_tls + assert main_tls.nursery_free == main_tls.nursery_start # empty + + def test_commit_transaction_no_references(self): + s, s_adr = self.malloc(S) + s.b = 12345 + self.select_thread(1) + t_adr = self.gc.write_barrier(s_adr) # make a local copy + t = llmemory.cast_adr_to_ptr(t_adr, lltype.Ptr(S)) + assert s != t + assert self.gc.header(t_adr).version == s_adr + t.b = 67890 + # + main_tls = self.gc.main_thread_tls + assert main_tls.nursery_free != main_tls.nursery_start # contains s + old_value = main_tls.nursery_free + # + self.gc.collector.commit_transaction() + # + assert main_tls.nursery_free == old_value # no new object + assert s.b == 12345 # not updated by the GC code + assert t.b == 67890 # still valid + + def test_commit_transaction_with_one_reference(self): + sr, sr_adr = self.malloc(SR) + assert sr.s1 == lltype.nullptr(S) + assert sr.sr2 == lltype.nullptr(SR) + self.select_thread(1) + tr_adr = self.gc.write_barrier(sr_adr) # make a local copy + tr = llmemory.cast_adr_to_ptr(tr_adr, lltype.Ptr(SR)) + assert sr != tr + t, t_adr = self.malloc(S) + t.b = 67890 + assert tr.s1 == lltype.nullptr(S) + assert tr.sr2 == lltype.nullptr(SR) + tr.s1 = t + # + main_tls = self.gc.main_thread_tls + old_value = main_tls.nursery_free + # + 
self.gc.collector.commit_transaction() + # + assert main_tls.nursery_free - old_value == self.gcsize(S) + + def test_commit_transaction_with_graph(self): + sr1, sr1_adr = self.malloc(SR) + sr2, sr2_adr = self.malloc(SR) + self.select_thread(1) + tr1_adr = self.gc.write_barrier(sr1_adr) # make a local copy + tr2_adr = self.gc.write_barrier(sr2_adr) # make a local copy + tr1 = llmemory.cast_adr_to_ptr(tr1_adr, lltype.Ptr(SR)) + tr2 = llmemory.cast_adr_to_ptr(tr2_adr, lltype.Ptr(SR)) + tr3, tr3_adr = self.malloc(SR) + tr4, tr4_adr = self.malloc(SR) + t, t_adr = self.malloc(S) + # + tr1.sr2 = tr3; tr1.sr3 = tr1 + tr2.sr2 = tr3; tr2.sr3 = tr3 + tr3.sr2 = tr4; tr3.sr3 = tr2 + tr4.sr2 = tr3; tr4.sr3 = tr3; tr4.s1 = t + # + for i in range(4): + self.malloc(S) # forgotten + # + main_tls = self.gc.main_thread_tls + old_value = main_tls.nursery_free + # + self.gc.collector.commit_transaction() + # + assert main_tls.nursery_free - old_value == ( + self.gcsize(SR) + self.gcsize(SR) + self.gcsize(S)) + # + sr3_adr = self.gc.header(tr3_adr).version + sr4_adr = self.gc.header(tr4_adr).version + s_adr = self.gc.header(t_adr ).version + assert len(set([sr3_adr, sr4_adr, s_adr])) == 3 + # + sr3 = llmemory.cast_adr_to_ptr(sr3_adr, lltype.Ptr(SR)) + sr4 = llmemory.cast_adr_to_ptr(sr4_adr, lltype.Ptr(SR)) + s = llmemory.cast_adr_to_ptr(s_adr, lltype.Ptr(S)) + assert tr1.sr2 == sr3; assert tr1.sr3 == sr1 # roots: local obj + assert tr2.sr2 == sr3; assert tr2.sr3 == sr3 # is modified + assert sr3.sr2 == sr4; assert sr3.sr3 == sr2 # non-roots: global + assert sr4.sr2 == sr3; assert sr4.sr3 == sr3 # obj is modified + assert sr4.s1 == s From noreply at buildbot.pypy.org Sat Feb 4 13:23:55 2012 From: noreply at buildbot.pypy.org (arigo) Date: Sat, 4 Feb 2012 13:23:55 +0100 (CET) Subject: [pypy-commit] pypy stm-gc: Improve test precision. Message-ID: <20120204122355.200EB710770@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-gc Changeset: r52082:7e1dd5f40eab Date: 2012-02-03 18:12 +0100 http://bitbucket.org/pypy/pypy/changeset/7e1dd5f40eab/ Log: Improve test precision. diff --git a/pypy/rpython/memory/gc/test/test_stmgc.py b/pypy/rpython/memory/gc/test/test_stmgc.py --- a/pypy/rpython/memory/gc/test/test_stmgc.py +++ b/pypy/rpython/memory/gc/test/test_stmgc.py @@ -155,6 +155,15 @@ def gcsize(self, S): return (llmemory.raw_malloc_usage(llmemory.sizeof(self.gc.HDR)) + llmemory.raw_malloc_usage(llmemory.sizeof(S))) + def checkflags(self, obj, must_have_global, must_have_was_copied, + must_have_version='?'): + if lltype.typeOf(obj) != llmemory.Address: + obj = llmemory.cast_ptr_to_adr(obj) + hdr = self.gc.header(obj) + assert (hdr.tid & GCFLAG_GLOBAL != 0) == must_have_global + assert (hdr.tid & GCFLAG_WAS_COPIED != 0) == must_have_was_copied + if must_have_version != '?': + assert hdr.version == must_have_version def test_gc_creation_works(self): pass @@ -342,3 +351,9 @@ assert sr3.sr2 == sr4; assert sr3.sr3 == sr2 # non-roots: global assert sr4.sr2 == sr3; assert sr4.sr3 == sr3 # obj is modified assert sr4.s1 == s + # + self.checkflags(sr1, 1, 1) + self.checkflags(sr2, 1, 1) + self.checkflags(sr3, 1, 0, llmemory.NULL) + self.checkflags(sr4, 1, 0, llmemory.NULL) + self.checkflags(s , 1, 0, llmemory.NULL) From noreply at buildbot.pypy.org Sat Feb 4 13:23:56 2012 From: noreply at buildbot.pypy.org (arigo) Date: Sat, 4 Feb 2012 13:23:56 +0100 (CET) Subject: [pypy-commit] pypy string-NUL: Fix. 

From noreply at buildbot.pypy.org Sat Feb 4 13:23:56 2012
From: noreply at buildbot.pypy.org (arigo)
Date: Sat, 4 Feb 2012 13:23:56 +0100 (CET)
Subject: [pypy-commit] pypy string-NUL: Fix.
Message-ID: <20120204122356.AFB81710770@wyvern.cs.uni-duesseldorf.de>

Author: Armin Rigo
Branch: string-NUL
Changeset: r52083:a768ccc1c97a
Date: 2012-02-04 12:13 +0100
http://bitbucket.org/pypy/pypy/changeset/a768ccc1c97a/

Log: Fix.

diff --git a/pypy/rlib/rstring.py b/pypy/rlib/rstring.py
--- a/pypy/rlib/rstring.py
+++ b/pypy/rlib/rstring.py
@@ -216,6 +216,8 @@
     _about_ = assert_str0
 
     def compute_result_annotation(self, s_obj):
+        if s_None.contains(s_obj):
+            return s_obj
         assert isinstance(s_obj, (SomeString, SomeUnicodeString))
         if s_obj.no_nul:
             return s_obj
@@ -237,7 +239,7 @@
     _about_ = check_str0
 
     def compute_result_annotation(self, s_obj):
-        if not isinstance(s_obj, SomeString):
+        if not isinstance(s_obj, (SomeString, SomeUnicodeString)):
             return s_obj
         if not s_obj.no_nul:
             raise ValueError("Value is not no_nul")

From noreply at buildbot.pypy.org Sat Feb 4 13:23:58 2012
From: noreply at buildbot.pypy.org (arigo)
Date: Sat, 4 Feb 2012 13:23:58 +0100 (CET)
Subject: [pypy-commit] pypy string-NUL: Fixes.
Message-ID: <20120204122358.022E4710770@wyvern.cs.uni-duesseldorf.de>

Author: Armin Rigo
Branch: string-NUL
Changeset: r52084:d725dfbdab1c
Date: 2012-02-04 13:23 +0100
http://bitbucket.org/pypy/pypy/changeset/d725dfbdab1c/

Log: Fixes.

diff --git a/pypy/translator/c/test/test_extfunc.py b/pypy/translator/c/test/test_extfunc.py
--- a/pypy/translator/c/test/test_extfunc.py
+++ b/pypy/translator/c/test/test_extfunc.py
@@ -146,7 +146,7 @@
     filename = str(py.path.local(__file__))
     def call_access(path, mode):
         return os.access(path, mode)
-    f = compile(call_access, [str, int])
+    f = compile(call_access, [annmodel.s_Str0, int])
     for mode in os.R_OK, os.W_OK, os.X_OK, (os.R_OK | os.W_OK | os.X_OK):
         assert f(filename, mode) == os.access(filename, mode)
 
@@ -226,7 +226,7 @@
 def test_system():
     def does_stuff(cmd):
         return os.system(cmd)
-    f1 = compile(does_stuff, [str])
+    f1 = compile(does_stuff, [annmodel.s_Str0])
     res = f1("echo hello")
     assert res == 0
 
@@ -312,7 +312,7 @@
 def test_chdir():
     def does_stuff(path):
         os.chdir(path)
-    f1 = compile(does_stuff, [str])
+    f1 = compile(does_stuff, [annmodel.s_Str0])
     curdir = os.getcwd()
     try:
         os.chdir('..')
@@ -629,7 +629,7 @@
             return os.environ[s]
         except KeyError:
             return '--missing--'
-    func = compile(fn, [str])
+    func = compile(fn, [annmodel.s_Str0])
     os.environ.setdefault('USER', 'UNNAMED_USER')
     result = func('USER')
     assert result == os.environ['USER']
@@ -641,7 +641,7 @@
         res = os.environ.get(s)
         if res is None: res = '--missing--'
         return res
-    func = compile(fn, [str])
+    func = compile(fn, [annmodel.s_Str0])
     os.environ.setdefault('USER', 'UNNAMED_USER')
     result = func('USER')
     assert result == os.environ['USER']
@@ -655,7 +655,7 @@
         os.environ[s] = t3
         os.environ[s] = t4
         os.environ[s] = t5
-    func = compile(fn, [str, str, str, str, str, str])
+    func = compile(fn, [annmodel.s_Str0] * 6)
     func('PYPY_TEST_DICTLIKE_ENVIRON', 'a', 'b', 'c', 'FOOBAR', '42',
          expected_extra_mallocs = (2, 3, 4))  # at least two, less than 5
     assert _real_getenv('PYPY_TEST_DICTLIKE_ENVIRON') == '42'
@@ -679,7 +679,7 @@
     else:
         raise Exception("should have raised!")
     # os.environ[s5] stays
-    func = compile(fn, [str, str, str, str, str])
+    func = compile(fn, [annmodel.s_Str0] * 5)
     if hasattr(__import__(os.name), 'unsetenv'):
         expected_extra_mallocs = range(2, 10)
         # at least 2, less than 10: memory for s1, s2, s3, s4 should be freed

From noreply at buildbot.pypy.org Sat Feb 4 13:24:10 2012
From: noreply at buildbot.pypy.org (arigo)
Date: Sat, 4 Feb 2012 13:24:10 +0100 (CET)
Subject: [pypy-commit] pypy string-NUL:
hg merge default Message-ID: <20120204122410.CA76A710770@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: string-NUL Changeset: r52085:49ebd7bffdd3 Date: 2012-02-04 13:23 +0100 http://bitbucket.org/pypy/pypy/changeset/49ebd7bffdd3/ Log: hg merge default diff too long, truncating to 10000 out of 150636 lines diff --git a/lib-python/2.7/BaseHTTPServer.py b/lib-python/2.7/BaseHTTPServer.py --- a/lib-python/2.7/BaseHTTPServer.py +++ b/lib-python/2.7/BaseHTTPServer.py @@ -310,7 +310,13 @@ """ try: - self.raw_requestline = self.rfile.readline() + self.raw_requestline = self.rfile.readline(65537) + if len(self.raw_requestline) > 65536: + self.requestline = '' + self.request_version = '' + self.command = '' + self.send_error(414) + return if not self.raw_requestline: self.close_connection = 1 return diff --git a/lib-python/2.7/ConfigParser.py b/lib-python/2.7/ConfigParser.py --- a/lib-python/2.7/ConfigParser.py +++ b/lib-python/2.7/ConfigParser.py @@ -545,6 +545,38 @@ if isinstance(val, list): options[name] = '\n'.join(val) +import UserDict as _UserDict + +class _Chainmap(_UserDict.DictMixin): + """Combine multiple mappings for successive lookups. + + For example, to emulate Python's normal lookup sequence: + + import __builtin__ + pylookup = _Chainmap(locals(), globals(), vars(__builtin__)) + """ + + def __init__(self, *maps): + self._maps = maps + + def __getitem__(self, key): + for mapping in self._maps: + try: + return mapping[key] + except KeyError: + pass + raise KeyError(key) + + def keys(self): + result = [] + seen = set() + for mapping in self_maps: + for key in mapping: + if key not in seen: + result.append(key) + seen.add(key) + return result + class ConfigParser(RawConfigParser): def get(self, section, option, raw=False, vars=None): @@ -559,16 +591,18 @@ The section DEFAULT is special. """ - d = self._defaults.copy() + sectiondict = {} try: - d.update(self._sections[section]) + sectiondict = self._sections[section] except KeyError: if section != DEFAULTSECT: raise NoSectionError(section) # Update with the entry specific variables + vardict = {} if vars: for key, value in vars.items(): - d[self.optionxform(key)] = value + vardict[self.optionxform(key)] = value + d = _Chainmap(vardict, sectiondict, self._defaults) option = self.optionxform(option) try: value = d[option] diff --git a/lib-python/2.7/Cookie.py b/lib-python/2.7/Cookie.py --- a/lib-python/2.7/Cookie.py +++ b/lib-python/2.7/Cookie.py @@ -258,6 +258,11 @@ '\033' : '\\033', '\034' : '\\034', '\035' : '\\035', '\036' : '\\036', '\037' : '\\037', + # Because of the way browsers really handle cookies (as opposed + # to what the RFC says) we also encode , and ; + + ',' : '\\054', ';' : '\\073', + '"' : '\\"', '\\' : '\\\\', '\177' : '\\177', '\200' : '\\200', '\201' : '\\201', diff --git a/lib-python/2.7/HTMLParser.py b/lib-python/2.7/HTMLParser.py --- a/lib-python/2.7/HTMLParser.py +++ b/lib-python/2.7/HTMLParser.py @@ -26,7 +26,7 @@ tagfind = re.compile('[a-zA-Z][-.a-zA-Z0-9:_]*') attrfind = re.compile( r'\s*([a-zA-Z_][-.:a-zA-Z_0-9]*)(\s*=\s*' - r'(\'[^\']*\'|"[^"]*"|[-a-zA-Z0-9./,:;+*%?!&$\(\)_#=~@]*))?') + r'(\'[^\']*\'|"[^"]*"|[^\s"\'=<>`]*))?') locatestarttagend = re.compile(r""" <[a-zA-Z][-.a-zA-Z0-9:_]* # tag name @@ -99,7 +99,7 @@ markupbase.ParserBase.reset(self) def feed(self, data): - """Feed data to the parser. + r"""Feed data to the parser. Call this as often as you want, with as little or as much text as you want (may include '\n'). 
@@ -367,13 +367,16 @@ return s def replaceEntities(s): s = s.groups()[0] - if s[0] == "#": - s = s[1:] - if s[0] in ['x','X']: - c = int(s[1:], 16) - else: - c = int(s) - return unichr(c) + try: + if s[0] == "#": + s = s[1:] + if s[0] in ['x','X']: + c = int(s[1:], 16) + else: + c = int(s) + return unichr(c) + except ValueError: + return '&#'+s+';' else: # Cannot use name2codepoint directly, because HTMLParser supports apos, # which is not part of HTML 4 diff --git a/lib-python/2.7/SimpleHTTPServer.py b/lib-python/2.7/SimpleHTTPServer.py --- a/lib-python/2.7/SimpleHTTPServer.py +++ b/lib-python/2.7/SimpleHTTPServer.py @@ -15,6 +15,7 @@ import BaseHTTPServer import urllib import cgi +import sys import shutil import mimetypes try: @@ -131,7 +132,8 @@ length = f.tell() f.seek(0) self.send_response(200) - self.send_header("Content-type", "text/html") + encoding = sys.getfilesystemencoding() + self.send_header("Content-type", "text/html; charset=%s" % encoding) self.send_header("Content-Length", str(length)) self.end_headers() return f diff --git a/lib-python/2.7/SimpleXMLRPCServer.py b/lib-python/2.7/SimpleXMLRPCServer.py --- a/lib-python/2.7/SimpleXMLRPCServer.py +++ b/lib-python/2.7/SimpleXMLRPCServer.py @@ -246,7 +246,7 @@ marshalled data. For backwards compatibility, a dispatch function can be provided as an argument (see comment in SimpleXMLRPCRequestHandler.do_POST) but overriding the - existing method through subclassing is the prefered means + existing method through subclassing is the preferred means of changing method dispatch behavior. """ diff --git a/lib-python/2.7/SocketServer.py b/lib-python/2.7/SocketServer.py --- a/lib-python/2.7/SocketServer.py +++ b/lib-python/2.7/SocketServer.py @@ -675,7 +675,7 @@ # A timeout to apply to the request socket, if not None. timeout = None - # Disable nagle algoritm for this socket, if True. + # Disable nagle algorithm for this socket, if True. # Use only when wbufsize != 0, to avoid small packets. disable_nagle_algorithm = False diff --git a/lib-python/2.7/StringIO.py b/lib-python/2.7/StringIO.py --- a/lib-python/2.7/StringIO.py +++ b/lib-python/2.7/StringIO.py @@ -266,6 +266,7 @@ 8th bit) will cause a UnicodeError to be raised when getvalue() is called. """ + _complain_ifclosed(self.closed) if self.buflist: self.buf += ''.join(self.buflist) self.buflist = [] diff --git a/lib-python/2.7/_abcoll.py b/lib-python/2.7/_abcoll.py --- a/lib-python/2.7/_abcoll.py +++ b/lib-python/2.7/_abcoll.py @@ -82,7 +82,7 @@ @classmethod def __subclasshook__(cls, C): if cls is Iterator: - if _hasattr(C, "next"): + if _hasattr(C, "next") and _hasattr(C, "__iter__"): return True return NotImplemented diff --git a/lib-python/2.7/_pyio.py b/lib-python/2.7/_pyio.py --- a/lib-python/2.7/_pyio.py +++ b/lib-python/2.7/_pyio.py @@ -16,6 +16,7 @@ import io from io import (__all__, SEEK_SET, SEEK_CUR, SEEK_END) +from errno import EINTR __metaclass__ = type @@ -559,7 +560,11 @@ if not data: break res += data - return bytes(res) + if res: + return bytes(res) + else: + # b'' or None + return data def readinto(self, b): """Read up to len(b) bytes into b. 
@@ -678,7 +683,7 @@ """ def __init__(self, raw): - self.raw = raw + self._raw = raw ### Positioning ### @@ -722,8 +727,8 @@ if self.raw is None: raise ValueError("raw stream already detached") self.flush() - raw = self.raw - self.raw = None + raw = self._raw + self._raw = None return raw ### Inquiries ### @@ -738,6 +743,10 @@ return self.raw.writable() @property + def raw(self): + return self._raw + + @property def closed(self): return self.raw.closed @@ -933,7 +942,12 @@ current_size = 0 while True: # Read until EOF or until read() would block. - chunk = self.raw.read() + try: + chunk = self.raw.read() + except IOError as e: + if e.errno != EINTR: + raise + continue if chunk in empty_values: nodata_val = chunk break @@ -952,7 +966,12 @@ chunks = [buf[pos:]] wanted = max(self.buffer_size, n) while avail < n: - chunk = self.raw.read(wanted) + try: + chunk = self.raw.read(wanted) + except IOError as e: + if e.errno != EINTR: + raise + continue if chunk in empty_values: nodata_val = chunk break @@ -981,7 +1000,14 @@ have = len(self._read_buf) - self._read_pos if have < want or have <= 0: to_read = self.buffer_size - have - current = self.raw.read(to_read) + while True: + try: + current = self.raw.read(to_read) + except IOError as e: + if e.errno != EINTR: + raise + continue + break if current: self._read_buf = self._read_buf[self._read_pos:] + current self._read_pos = 0 @@ -1088,7 +1114,12 @@ written = 0 try: while self._write_buf: - n = self.raw.write(self._write_buf) + try: + n = self.raw.write(self._write_buf) + except IOError as e: + if e.errno != EINTR: + raise + continue if n > len(self._write_buf) or n < 0: raise IOError("write() returned incorrect number of bytes") del self._write_buf[:n] @@ -1456,7 +1487,7 @@ if not isinstance(errors, basestring): raise ValueError("invalid errors: %r" % errors) - self.buffer = buffer + self._buffer = buffer self._line_buffering = line_buffering self._encoding = encoding self._errors = errors @@ -1511,6 +1542,10 @@ def line_buffering(self): return self._line_buffering + @property + def buffer(self): + return self._buffer + def seekable(self): return self._seekable @@ -1724,8 +1759,8 @@ if self.buffer is None: raise ValueError("buffer is already detached") self.flush() - buffer = self.buffer - self.buffer = None + buffer = self._buffer + self._buffer = None return buffer def seek(self, cookie, whence=0): diff --git a/lib-python/2.7/_weakrefset.py b/lib-python/2.7/_weakrefset.py --- a/lib-python/2.7/_weakrefset.py +++ b/lib-python/2.7/_weakrefset.py @@ -66,7 +66,11 @@ return sum(x() is not None for x in self.data) def __contains__(self, item): - return ref(item) in self.data + try: + wr = ref(item) + except TypeError: + return False + return wr in self.data def __reduce__(self): return (self.__class__, (list(self),), diff --git a/lib-python/2.7/anydbm.py b/lib-python/2.7/anydbm.py --- a/lib-python/2.7/anydbm.py +++ b/lib-python/2.7/anydbm.py @@ -29,17 +29,8 @@ list = d.keys() # return a list of all existing keys (slow!) Future versions may change the order in which implementations are -tested for existence, add interfaces to other dbm-like +tested for existence, and add interfaces to other dbm-like implementations. - -The open function has an optional second argument. This can be 'r', -for read-only access, 'w', for read-write access of an existing -database, 'c' for read-write access to a new or existing database, and -'n' for read-write access to a new database. The default is 'r'. 
- -Note: 'r' and 'w' fail if the database doesn't exist; 'c' creates it -only if it doesn't exist; and 'n' always creates a new database. - """ class error(Exception): @@ -63,7 +54,18 @@ error = tuple(_errors) -def open(file, flag = 'r', mode = 0666): +def open(file, flag='r', mode=0666): + """Open or create database at path given by *file*. + + Optional argument *flag* can be 'r' (default) for read-only access, 'w' + for read-write access of an existing database, 'c' for read-write access + to a new or existing database, and 'n' for read-write access to a new + database. + + Note: 'r' and 'w' fail if the database doesn't exist; 'c' creates it + only if it doesn't exist; and 'n' always creates a new database. + """ + # guess the type of an existing database from whichdb import whichdb result=whichdb(file) diff --git a/lib-python/2.7/argparse.py b/lib-python/2.7/argparse.py --- a/lib-python/2.7/argparse.py +++ b/lib-python/2.7/argparse.py @@ -82,6 +82,7 @@ ] +import collections as _collections import copy as _copy import os as _os import re as _re @@ -1037,7 +1038,7 @@ self._prog_prefix = prog self._parser_class = parser_class - self._name_parser_map = {} + self._name_parser_map = _collections.OrderedDict() self._choices_actions = [] super(_SubParsersAction, self).__init__( @@ -1080,7 +1081,7 @@ parser = self._name_parser_map[parser_name] except KeyError: tup = parser_name, ', '.join(self._name_parser_map) - msg = _('unknown parser %r (choices: %s)' % tup) + msg = _('unknown parser %r (choices: %s)') % tup raise ArgumentError(self, msg) # parse all the remaining options into the namespace @@ -1109,7 +1110,7 @@ the builtin open() function. """ - def __init__(self, mode='r', bufsize=None): + def __init__(self, mode='r', bufsize=-1): self._mode = mode self._bufsize = bufsize @@ -1121,18 +1122,19 @@ elif 'w' in self._mode: return _sys.stdout else: - msg = _('argument "-" with mode %r' % self._mode) + msg = _('argument "-" with mode %r') % self._mode raise ValueError(msg) # all other arguments are used as file names - if self._bufsize: + try: return open(string, self._mode, self._bufsize) - else: - return open(string, self._mode) + except IOError as e: + message = _("can't open '%s': %s") + raise ArgumentTypeError(message % (string, e)) def __repr__(self): - args = [self._mode, self._bufsize] - args_str = ', '.join([repr(arg) for arg in args if arg is not None]) + args = self._mode, self._bufsize + args_str = ', '.join(repr(arg) for arg in args if arg != -1) return '%s(%s)' % (type(self).__name__, args_str) # =========================== @@ -1275,13 +1277,20 @@ # create the action object, and add it to the parser action_class = self._pop_action_class(kwargs) if not _callable(action_class): - raise ValueError('unknown action "%s"' % action_class) + raise ValueError('unknown action "%s"' % (action_class,)) action = action_class(**kwargs) # raise an error if the action type is not callable type_func = self._registry_get('type', action.type, action.type) if not _callable(type_func): - raise ValueError('%r is not callable' % type_func) + raise ValueError('%r is not callable' % (type_func,)) + + # raise an error if the metavar does not match the type + if hasattr(self, "_get_formatter"): + try: + self._get_formatter()._format_args(action, None) + except TypeError: + raise ValueError("length of metavar tuple does not match nargs") return self._add_action(action) @@ -1481,6 +1490,7 @@ self._defaults = container._defaults self._has_negative_number_optionals = \ container._has_negative_number_optionals + 
self._mutually_exclusive_groups = container._mutually_exclusive_groups def _add_action(self, action): action = super(_ArgumentGroup, self)._add_action(action) diff --git a/lib-python/2.7/ast.py b/lib-python/2.7/ast.py --- a/lib-python/2.7/ast.py +++ b/lib-python/2.7/ast.py @@ -29,12 +29,12 @@ from _ast import __version__ -def parse(expr, filename='', mode='exec'): +def parse(source, filename='', mode='exec'): """ - Parse an expression into an AST node. - Equivalent to compile(expr, filename, mode, PyCF_ONLY_AST). + Parse the source into an AST node. + Equivalent to compile(source, filename, mode, PyCF_ONLY_AST). """ - return compile(expr, filename, mode, PyCF_ONLY_AST) + return compile(source, filename, mode, PyCF_ONLY_AST) def literal_eval(node_or_string): @@ -152,8 +152,6 @@ Increment the line number of each node in the tree starting at *node* by *n*. This is useful to "move code" to a different location in a file. """ - if 'lineno' in node._attributes: - node.lineno = getattr(node, 'lineno', 0) + n for child in walk(node): if 'lineno' in child._attributes: child.lineno = getattr(child, 'lineno', 0) + n @@ -204,9 +202,9 @@ def walk(node): """ - Recursively yield all child nodes of *node*, in no specified order. This is - useful if you only want to modify nodes in place and don't care about the - context. + Recursively yield all descendant nodes in the tree starting at *node* + (including *node* itself), in no specified order. This is useful if you + only want to modify nodes in place and don't care about the context. """ from collections import deque todo = deque([node]) diff --git a/lib-python/2.7/asyncore.py b/lib-python/2.7/asyncore.py --- a/lib-python/2.7/asyncore.py +++ b/lib-python/2.7/asyncore.py @@ -54,7 +54,11 @@ import os from errno import EALREADY, EINPROGRESS, EWOULDBLOCK, ECONNRESET, EINVAL, \ - ENOTCONN, ESHUTDOWN, EINTR, EISCONN, EBADF, ECONNABORTED, errorcode + ENOTCONN, ESHUTDOWN, EINTR, EISCONN, EBADF, ECONNABORTED, EPIPE, EAGAIN, \ + errorcode + +_DISCONNECTED = frozenset((ECONNRESET, ENOTCONN, ESHUTDOWN, ECONNABORTED, EPIPE, + EBADF)) try: socket_map @@ -109,7 +113,7 @@ if flags & (select.POLLHUP | select.POLLERR | select.POLLNVAL): obj.handle_close() except socket.error, e: - if e.args[0] not in (EBADF, ECONNRESET, ENOTCONN, ESHUTDOWN, ECONNABORTED): + if e.args[0] not in _DISCONNECTED: obj.handle_error() else: obj.handle_close() @@ -353,7 +357,7 @@ except TypeError: return None except socket.error as why: - if why.args[0] in (EWOULDBLOCK, ECONNABORTED): + if why.args[0] in (EWOULDBLOCK, ECONNABORTED, EAGAIN): return None else: raise @@ -367,7 +371,7 @@ except socket.error, why: if why.args[0] == EWOULDBLOCK: return 0 - elif why.args[0] in (ECONNRESET, ENOTCONN, ESHUTDOWN, ECONNABORTED): + elif why.args[0] in _DISCONNECTED: self.handle_close() return 0 else: @@ -385,7 +389,7 @@ return data except socket.error, why: # winsock sometimes throws ENOTCONN - if why.args[0] in [ECONNRESET, ENOTCONN, ESHUTDOWN, ECONNABORTED]: + if why.args[0] in _DISCONNECTED: self.handle_close() return '' else: diff --git a/lib-python/2.7/bdb.py b/lib-python/2.7/bdb.py --- a/lib-python/2.7/bdb.py +++ b/lib-python/2.7/bdb.py @@ -250,6 +250,12 @@ list.append(lineno) bp = Breakpoint(filename, lineno, temporary, cond, funcname) + def _prune_breaks(self, filename, lineno): + if (filename, lineno) not in Breakpoint.bplist: + self.breaks[filename].remove(lineno) + if not self.breaks[filename]: + del self.breaks[filename] + def clear_break(self, filename, lineno): filename = self.canonic(filename) 
if not filename in self.breaks: @@ -261,10 +267,7 @@ # pair, then remove the breaks entry for bp in Breakpoint.bplist[filename, lineno][:]: bp.deleteMe() - if (filename, lineno) not in Breakpoint.bplist: - self.breaks[filename].remove(lineno) - if not self.breaks[filename]: - del self.breaks[filename] + self._prune_breaks(filename, lineno) def clear_bpbynumber(self, arg): try: @@ -277,7 +280,8 @@ return 'Breakpoint number (%d) out of range' % number if not bp: return 'Breakpoint (%d) already deleted' % number - self.clear_break(bp.file, bp.line) + bp.deleteMe() + self._prune_breaks(bp.file, bp.line) def clear_all_file_breaks(self, filename): filename = self.canonic(filename) diff --git a/lib-python/2.7/collections.py b/lib-python/2.7/collections.py --- a/lib-python/2.7/collections.py +++ b/lib-python/2.7/collections.py @@ -6,59 +6,38 @@ __all__ += _abcoll.__all__ from _collections import deque, defaultdict -from operator import itemgetter as _itemgetter, eq as _eq +from operator import itemgetter as _itemgetter from keyword import iskeyword as _iskeyword import sys as _sys import heapq as _heapq -from itertools import repeat as _repeat, chain as _chain, starmap as _starmap, \ - ifilter as _ifilter, imap as _imap +from itertools import repeat as _repeat, chain as _chain, starmap as _starmap + try: - from thread import get_ident + from thread import get_ident as _get_ident except ImportError: - from dummy_thread import get_ident - -def _recursive_repr(user_function): - 'Decorator to make a repr function return "..." for a recursive call' - repr_running = set() - - def wrapper(self): - key = id(self), get_ident() - if key in repr_running: - return '...' - repr_running.add(key) - try: - result = user_function(self) - finally: - repr_running.discard(key) - return result - - # Can't use functools.wraps() here because of bootstrap issues - wrapper.__module__ = getattr(user_function, '__module__') - wrapper.__doc__ = getattr(user_function, '__doc__') - wrapper.__name__ = getattr(user_function, '__name__') - return wrapper + from dummy_thread import get_ident as _get_ident ################################################################################ ### OrderedDict ################################################################################ -class OrderedDict(dict, MutableMapping): +class OrderedDict(dict): 'Dictionary that remembers insertion order' # An inherited dict maps keys to values. # The inherited dict provides __getitem__, __len__, __contains__, and get. # The remaining methods are order-aware. - # Big-O running times for all methods are the same as for regular dictionaries. + # Big-O running times for all methods are the same as regular dictionaries. - # The internal self.__map dictionary maps keys to links in a doubly linked list. + # The internal self.__map dict maps keys to links in a doubly linked list. # The circular doubly linked list starts and ends with a sentinel element. # The sentinel element never gets deleted (this simplifies the algorithm). # Each link is stored as a list of length three: [PREV, NEXT, KEY]. def __init__(self, *args, **kwds): - '''Initialize an ordered dictionary. Signature is the same as for - regular dictionaries, but keyword arguments are not recommended - because their insertion order is arbitrary. + '''Initialize an ordered dictionary. The signature is the same as + regular dictionaries, but keyword arguments are not recommended because + their insertion order is arbitrary. 
''' if len(args) > 1: @@ -66,17 +45,15 @@ try: self.__root except AttributeError: - self.__root = root = [None, None, None] # sentinel node - PREV = 0 - NEXT = 1 - root[PREV] = root[NEXT] = root + self.__root = root = [] # sentinel node + root[:] = [root, root, None] self.__map = {} - self.update(*args, **kwds) + self.__update(*args, **kwds) def __setitem__(self, key, value, PREV=0, NEXT=1, dict_setitem=dict.__setitem__): 'od.__setitem__(i, y) <==> od[i]=y' - # Setting a new item creates a new link which goes at the end of the linked - # list, and the inherited dictionary is updated with the new key/value pair. + # Setting a new item creates a new link at the end of the linked list, + # and the inherited dictionary is updated with the new key/value pair. if key not in self: root = self.__root last = root[PREV] @@ -85,65 +62,160 @@ def __delitem__(self, key, PREV=0, NEXT=1, dict_delitem=dict.__delitem__): 'od.__delitem__(y) <==> del od[y]' - # Deleting an existing item uses self.__map to find the link which is - # then removed by updating the links in the predecessor and successor nodes. + # Deleting an existing item uses self.__map to find the link which gets + # removed by updating the links in the predecessor and successor nodes. dict_delitem(self, key) - link = self.__map.pop(key) - link_prev = link[PREV] - link_next = link[NEXT] + link_prev, link_next, key = self.__map.pop(key) link_prev[NEXT] = link_next link_next[PREV] = link_prev - def __iter__(self, NEXT=1, KEY=2): + def __iter__(self): 'od.__iter__() <==> iter(od)' # Traverse the linked list in order. + NEXT, KEY = 1, 2 root = self.__root curr = root[NEXT] while curr is not root: yield curr[KEY] curr = curr[NEXT] - def __reversed__(self, PREV=0, KEY=2): + def __reversed__(self): 'od.__reversed__() <==> reversed(od)' # Traverse the linked list in reverse order. + PREV, KEY = 0, 2 root = self.__root curr = root[PREV] while curr is not root: yield curr[KEY] curr = curr[PREV] + def clear(self): + 'od.clear() -> None. Remove all items from od.' + for node in self.__map.itervalues(): + del node[:] + root = self.__root + root[:] = [root, root, None] + self.__map.clear() + dict.clear(self) + + # -- the following methods do not depend on the internal structure -- + + def keys(self): + 'od.keys() -> list of keys in od' + return list(self) + + def values(self): + 'od.values() -> list of values in od' + return [self[key] for key in self] + + def items(self): + 'od.items() -> list of (key, value) pairs in od' + return [(key, self[key]) for key in self] + + def iterkeys(self): + 'od.iterkeys() -> an iterator over the keys in od' + return iter(self) + + def itervalues(self): + 'od.itervalues -> an iterator over the values in od' + for k in self: + yield self[k] + + def iteritems(self): + 'od.iteritems -> an iterator over the (key, value) pairs in od' + for k in self: + yield (k, self[k]) + + update = MutableMapping.update + + __update = update # let subclasses override update without breaking __init__ + + __marker = object() + + def pop(self, key, default=__marker): + '''od.pop(k[,d]) -> v, remove specified key and return the corresponding + value. If key is not found, d is returned if given, otherwise KeyError + is raised. 
+ + ''' + if key in self: + result = self[key] + del self[key] + return result + if default is self.__marker: + raise KeyError(key) + return default + + def setdefault(self, key, default=None): + 'od.setdefault(k[,d]) -> od.get(k,d), also set od[k]=d if k not in od' + if key in self: + return self[key] + self[key] = default + return default + + def popitem(self, last=True): + '''od.popitem() -> (k, v), return and remove a (key, value) pair. + Pairs are returned in LIFO order if last is true or FIFO order if false. + + ''' + if not self: + raise KeyError('dictionary is empty') + key = next(reversed(self) if last else iter(self)) + value = self.pop(key) + return key, value + + def __repr__(self, _repr_running={}): + 'od.__repr__() <==> repr(od)' + call_key = id(self), _get_ident() + if call_key in _repr_running: + return '...' + _repr_running[call_key] = 1 + try: + if not self: + return '%s()' % (self.__class__.__name__,) + return '%s(%r)' % (self.__class__.__name__, self.items()) + finally: + del _repr_running[call_key] + def __reduce__(self): 'Return state information for pickling' items = [[k, self[k]] for k in self] - tmp = self.__map, self.__root - del self.__map, self.__root inst_dict = vars(self).copy() - self.__map, self.__root = tmp + for k in vars(OrderedDict()): + inst_dict.pop(k, None) if inst_dict: return (self.__class__, (items,), inst_dict) return self.__class__, (items,) - def clear(self): - 'od.clear() -> None. Remove all items from od.' - try: - for node in self.__map.itervalues(): - del node[:] - self.__root[:] = [self.__root, self.__root, None] - self.__map.clear() - except AttributeError: - pass - dict.clear(self) + def copy(self): + 'od.copy() -> a shallow copy of od' + return self.__class__(self) - setdefault = MutableMapping.setdefault - update = MutableMapping.update - pop = MutableMapping.pop - keys = MutableMapping.keys - values = MutableMapping.values - items = MutableMapping.items - iterkeys = MutableMapping.iterkeys - itervalues = MutableMapping.itervalues - iteritems = MutableMapping.iteritems - __ne__ = MutableMapping.__ne__ + @classmethod + def fromkeys(cls, iterable, value=None): + '''OD.fromkeys(S[, v]) -> New ordered dictionary with keys from S. + If not specified, the value defaults to None. + + ''' + self = cls() + for key in iterable: + self[key] = value + return self + + def __eq__(self, other): + '''od.__eq__(y) <==> od==y. Comparison to another OD is order-sensitive + while comparison to a regular mapping is order-insensitive. + + ''' + if isinstance(other, OrderedDict): + return len(self)==len(other) and self.items() == other.items() + return dict.__eq__(self, other) + + def __ne__(self, other): + 'od.__ne__(y) <==> od!=y' + return not self == other + + # -- the following methods support python 3.x style dictionary views -- def viewkeys(self): "od.viewkeys() -> a set-like object providing a view on od's keys" @@ -157,49 +229,6 @@ "od.viewitems() -> a set-like object providing a view on od's items" return ItemsView(self) - def popitem(self, last=True): - '''od.popitem() -> (k, v), return and remove a (key, value) pair. - Pairs are returned in LIFO order if last is true or FIFO order if false. 
- - ''' - if not self: - raise KeyError('dictionary is empty') - key = next(reversed(self) if last else iter(self)) - value = self.pop(key) - return key, value - - @_recursive_repr - def __repr__(self): - 'od.__repr__() <==> repr(od)' - if not self: - return '%s()' % (self.__class__.__name__,) - return '%s(%r)' % (self.__class__.__name__, self.items()) - - def copy(self): - 'od.copy() -> a shallow copy of od' - return self.__class__(self) - - @classmethod - def fromkeys(cls, iterable, value=None): - '''OD.fromkeys(S[, v]) -> New ordered dictionary with keys from S - and values equal to v (which defaults to None). - - ''' - d = cls() - for key in iterable: - d[key] = value - return d - - def __eq__(self, other): - '''od.__eq__(y) <==> od==y. Comparison to another OD is order-sensitive - while comparison to a regular mapping is order-insensitive. - - ''' - if isinstance(other, OrderedDict): - return len(self)==len(other) and \ - all(_imap(_eq, self.iteritems(), other.iteritems())) - return dict.__eq__(self, other) - ################################################################################ ### namedtuple @@ -328,16 +357,16 @@ or multiset. Elements are stored as dictionary keys and their counts are stored as dictionary values. - >>> c = Counter('abracadabra') # count elements from a string + >>> c = Counter('abcdeabcdabcaba') # count elements from a string >>> c.most_common(3) # three most common elements - [('a', 5), ('r', 2), ('b', 2)] + [('a', 5), ('b', 4), ('c', 3)] >>> sorted(c) # list all unique elements - ['a', 'b', 'c', 'd', 'r'] + ['a', 'b', 'c', 'd', 'e'] >>> ''.join(sorted(c.elements())) # list elements with repetitions - 'aaaaabbcdrr' + 'aaaaabbbbcccdde' >>> sum(c.values()) # total of all counts - 11 + 15 >>> c['a'] # count of letter 'a' 5 @@ -345,8 +374,8 @@ ... c[elem] += 1 # by adding 1 to each element's count >>> c['a'] # now there are seven 'a' 7 - >>> del c['r'] # remove all 'r' - >>> c['r'] # now there are zero 'r' + >>> del c['b'] # remove all 'b' + >>> c['b'] # now there are zero 'b' 0 >>> d = Counter('simsalabim') # make another counter @@ -385,6 +414,7 @@ >>> c = Counter(a=4, b=2) # a new counter from keyword args ''' + super(Counter, self).__init__() self.update(iterable, **kwds) def __missing__(self, key): @@ -396,8 +426,8 @@ '''List the n most common elements and their counts from the most common to the least. If n is None, then list all element counts. - >>> Counter('abracadabra').most_common(3) - [('a', 5), ('r', 2), ('b', 2)] + >>> Counter('abcdeabcdabcaba').most_common(3) + [('a', 5), ('b', 4), ('c', 3)] ''' # Emulate Bag.sortedByCount from Smalltalk @@ -463,7 +493,7 @@ for elem, count in iterable.iteritems(): self[elem] = self_get(elem, 0) + count else: - dict.update(self, iterable) # fast path when counter is empty + super(Counter, self).update(iterable) # fast path when counter is empty else: self_get = self.get for elem in iterable: @@ -499,13 +529,16 @@ self.subtract(kwds) def copy(self): - 'Like dict.copy() but returns a Counter instance instead of a dict.' - return Counter(self) + 'Return a shallow copy.' + return self.__class__(self) + + def __reduce__(self): + return self.__class__, (dict(self),) def __delitem__(self, elem): 'Like dict.__delitem__() but does not raise KeyError for missing values.' 
if elem in self: - dict.__delitem__(self, elem) + super(Counter, self).__delitem__(elem) def __repr__(self): if not self: @@ -532,10 +565,13 @@ if not isinstance(other, Counter): return NotImplemented result = Counter() - for elem in set(self) | set(other): - newcount = self[elem] + other[elem] + for elem, count in self.items(): + newcount = count + other[elem] if newcount > 0: result[elem] = newcount + for elem, count in other.items(): + if elem not in self and count > 0: + result[elem] = count return result def __sub__(self, other): @@ -548,10 +584,13 @@ if not isinstance(other, Counter): return NotImplemented result = Counter() - for elem in set(self) | set(other): - newcount = self[elem] - other[elem] + for elem, count in self.items(): + newcount = count - other[elem] if newcount > 0: result[elem] = newcount + for elem, count in other.items(): + if elem not in self and count < 0: + result[elem] = 0 - count return result def __or__(self, other): @@ -564,11 +603,14 @@ if not isinstance(other, Counter): return NotImplemented result = Counter() - for elem in set(self) | set(other): - p, q = self[elem], other[elem] - newcount = q if p < q else p + for elem, count in self.items(): + other_count = other[elem] + newcount = other_count if count < other_count else count if newcount > 0: result[elem] = newcount + for elem, count in other.items(): + if elem not in self and count > 0: + result[elem] = count return result def __and__(self, other): @@ -581,11 +623,9 @@ if not isinstance(other, Counter): return NotImplemented result = Counter() - if len(self) < len(other): - self, other = other, self - for elem in _ifilter(self.__contains__, other): - p, q = self[elem], other[elem] - newcount = p if p < q else q + for elem, count in self.items(): + other_count = other[elem] + newcount = count if count < other_count else other_count if newcount > 0: result[elem] = newcount return result diff --git a/lib-python/2.7/compileall.py b/lib-python/2.7/compileall.py --- a/lib-python/2.7/compileall.py +++ b/lib-python/2.7/compileall.py @@ -9,7 +9,6 @@ packages -- for now, you'll have to deal with packages separately.) See module py_compile for details of the actual byte-compilation. - """ import os import sys @@ -31,7 +30,6 @@ directory name that will show up in error messages) force: if 1, force compilation, even if timestamps are up-to-date quiet: if 1, be quiet during compilation - """ if not quiet: print 'Listing', dir, '...' @@ -61,15 +59,16 @@ return success def compile_file(fullname, ddir=None, force=0, rx=None, quiet=0): - """Byte-compile file. - file: the file to byte-compile + """Byte-compile one file. + + Arguments (only fullname is required): + + fullname: the file to byte-compile ddir: if given, purported directory name (this is the directory name that will show up in error messages) force: if 1, force compilation, even if timestamps are up-to-date quiet: if 1, be quiet during compilation - """ - success = 1 name = os.path.basename(fullname) if ddir is not None: @@ -120,7 +119,6 @@ maxlevels: max recursion level (default 0) force: as for compile_dir() (default 0) quiet: as for compile_dir() (default 0) - """ success = 1 for dir in sys.path: diff --git a/lib-python/2.7/csv.py b/lib-python/2.7/csv.py --- a/lib-python/2.7/csv.py +++ b/lib-python/2.7/csv.py @@ -281,7 +281,7 @@ an all or nothing approach, so we allow for small variations in this number. 1) build a table of the frequency of each character on every line. 
- 2) build a table of freqencies of this frequency (meta-frequency?), + 2) build a table of frequencies of this frequency (meta-frequency?), e.g. 'x occurred 5 times in 10 rows, 6 times in 1000 rows, 7 times in 2 rows' 3) use the mode of the meta-frequency to determine the /expected/ diff --git a/lib-python/2.7/ctypes/test/test_arrays.py b/lib-python/2.7/ctypes/test/test_arrays.py --- a/lib-python/2.7/ctypes/test/test_arrays.py +++ b/lib-python/2.7/ctypes/test/test_arrays.py @@ -37,7 +37,7 @@ values = [ia[i] for i in range(len(init))] self.assertEqual(values, [0] * len(init)) - # Too many in itializers should be caught + # Too many initializers should be caught self.assertRaises(IndexError, int_array, *range(alen*2)) CharArray = ARRAY(c_char, 3) diff --git a/lib-python/2.7/ctypes/test/test_as_parameter.py b/lib-python/2.7/ctypes/test/test_as_parameter.py --- a/lib-python/2.7/ctypes/test/test_as_parameter.py +++ b/lib-python/2.7/ctypes/test/test_as_parameter.py @@ -187,6 +187,18 @@ self.assertEqual((s8i.a, s8i.b, s8i.c, s8i.d, s8i.e, s8i.f, s8i.g, s8i.h), (9*2, 8*3, 7*4, 6*5, 5*6, 4*7, 3*8, 2*9)) + def test_recursive_as_param(self): + from ctypes import c_int + + class A(object): + pass + + a = A() + a._as_parameter_ = a + with self.assertRaises(RuntimeError): + c_int.from_param(a) + + #~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ class AsParamWrapper(object): diff --git a/lib-python/2.7/ctypes/test/test_callbacks.py b/lib-python/2.7/ctypes/test/test_callbacks.py --- a/lib-python/2.7/ctypes/test/test_callbacks.py +++ b/lib-python/2.7/ctypes/test/test_callbacks.py @@ -206,6 +206,42 @@ windll.user32.EnumWindows(EnumWindowsCallbackFunc, 0) + def test_callback_register_int(self): + # Issue #8275: buggy handling of callback args under Win64 + # NOTE: should be run on release builds as well + dll = CDLL(_ctypes_test.__file__) + CALLBACK = CFUNCTYPE(c_int, c_int, c_int, c_int, c_int, c_int) + # All this function does is call the callback with its args squared + func = dll._testfunc_cbk_reg_int + func.argtypes = (c_int, c_int, c_int, c_int, c_int, CALLBACK) + func.restype = c_int + + def callback(a, b, c, d, e): + return a + b + c + d + e + + result = func(2, 3, 4, 5, 6, CALLBACK(callback)) + self.assertEqual(result, callback(2*2, 3*3, 4*4, 5*5, 6*6)) + + def test_callback_register_double(self): + # Issue #8275: buggy handling of callback args under Win64 + # NOTE: should be run on release builds as well + dll = CDLL(_ctypes_test.__file__) + CALLBACK = CFUNCTYPE(c_double, c_double, c_double, c_double, + c_double, c_double) + # All this function does is call the callback with its args squared + func = dll._testfunc_cbk_reg_double + func.argtypes = (c_double, c_double, c_double, + c_double, c_double, CALLBACK) + func.restype = c_double + + def callback(a, b, c, d, e): + return a + b + c + d + e + + result = func(1.1, 2.2, 3.3, 4.4, 5.5, CALLBACK(callback)) + self.assertEqual(result, + callback(1.1*1.1, 2.2*2.2, 3.3*3.3, 4.4*4.4, 5.5*5.5)) + + ################################################################ if __name__ == '__main__': diff --git a/lib-python/2.7/ctypes/test/test_functions.py b/lib-python/2.7/ctypes/test/test_functions.py --- a/lib-python/2.7/ctypes/test/test_functions.py +++ b/lib-python/2.7/ctypes/test/test_functions.py @@ -116,7 +116,7 @@ self.assertEqual(result, 21) self.assertEqual(type(result), int) - # You cannot assing character format codes as restype any longer + # You cannot assign character format codes as restype any longer self.assertRaises(TypeError, setattr, f, 
"restype", "i") def test_floatresult(self): diff --git a/lib-python/2.7/ctypes/test/test_init.py b/lib-python/2.7/ctypes/test/test_init.py --- a/lib-python/2.7/ctypes/test/test_init.py +++ b/lib-python/2.7/ctypes/test/test_init.py @@ -27,7 +27,7 @@ self.assertEqual((y.x.a, y.x.b), (0, 0)) self.assertEqual(y.x.new_was_called, False) - # But explicitely creating an X structure calls __new__ and __init__, of course. + # But explicitly creating an X structure calls __new__ and __init__, of course. x = X() self.assertEqual((x.a, x.b), (9, 12)) self.assertEqual(x.new_was_called, True) diff --git a/lib-python/2.7/ctypes/test/test_numbers.py b/lib-python/2.7/ctypes/test/test_numbers.py --- a/lib-python/2.7/ctypes/test/test_numbers.py +++ b/lib-python/2.7/ctypes/test/test_numbers.py @@ -157,7 +157,7 @@ def test_int_from_address(self): from array import array for t in signed_types + unsigned_types: - # the array module doesn't suppport all format codes + # the array module doesn't support all format codes # (no 'q' or 'Q') try: array(t._type_) diff --git a/lib-python/2.7/ctypes/test/test_win32.py b/lib-python/2.7/ctypes/test/test_win32.py --- a/lib-python/2.7/ctypes/test/test_win32.py +++ b/lib-python/2.7/ctypes/test/test_win32.py @@ -17,7 +17,7 @@ # ValueError: Procedure probably called with not enough arguments (4 bytes missing) self.assertRaises(ValueError, IsWindow) - # This one should succeeed... + # This one should succeed... self.assertEqual(0, IsWindow(0)) # ValueError: Procedure probably called with too many arguments (8 bytes in excess) diff --git a/lib-python/2.7/curses/wrapper.py b/lib-python/2.7/curses/wrapper.py --- a/lib-python/2.7/curses/wrapper.py +++ b/lib-python/2.7/curses/wrapper.py @@ -43,7 +43,8 @@ return func(stdscr, *args, **kwds) finally: # Set everything back to normal - stdscr.keypad(0) - curses.echo() - curses.nocbreak() - curses.endwin() + if 'stdscr' in locals(): + stdscr.keypad(0) + curses.echo() + curses.nocbreak() + curses.endwin() diff --git a/lib-python/2.7/decimal.py b/lib-python/2.7/decimal.py --- a/lib-python/2.7/decimal.py +++ b/lib-python/2.7/decimal.py @@ -1068,14 +1068,16 @@ if ans: return ans - if not self: - # -Decimal('0') is Decimal('0'), not Decimal('-0') + if context is None: + context = getcontext() + + if not self and context.rounding != ROUND_FLOOR: + # -Decimal('0') is Decimal('0'), not Decimal('-0'), except + # in ROUND_FLOOR rounding mode. ans = self.copy_abs() else: ans = self.copy_negate() - if context is None: - context = getcontext() return ans._fix(context) def __pos__(self, context=None): @@ -1088,14 +1090,15 @@ if ans: return ans - if not self: - # + (-0) = 0 + if context is None: + context = getcontext() + + if not self and context.rounding != ROUND_FLOOR: + # + (-0) = 0, except in ROUND_FLOOR rounding mode. 
ans = self.copy_abs() else: ans = Decimal(self) - if context is None: - context = getcontext() return ans._fix(context) def __abs__(self, round=True, context=None): @@ -1680,7 +1683,7 @@ self = _dec_from_triple(self._sign, '1', exp_min-1) digits = 0 rounding_method = self._pick_rounding_function[context.rounding] - changed = getattr(self, rounding_method)(digits) + changed = rounding_method(self, digits) coeff = self._int[:digits] or '0' if changed > 0: coeff = str(int(coeff)+1) @@ -1720,8 +1723,6 @@ # here self was representable to begin with; return unchanged return Decimal(self) - _pick_rounding_function = {} - # for each of the rounding functions below: # self is a finite, nonzero Decimal # prec is an integer satisfying 0 <= prec < len(self._int) @@ -1788,6 +1789,17 @@ else: return -self._round_down(prec) + _pick_rounding_function = dict( + ROUND_DOWN = _round_down, + ROUND_UP = _round_up, + ROUND_HALF_UP = _round_half_up, + ROUND_HALF_DOWN = _round_half_down, + ROUND_HALF_EVEN = _round_half_even, + ROUND_CEILING = _round_ceiling, + ROUND_FLOOR = _round_floor, + ROUND_05UP = _round_05up, + ) + def fma(self, other, third, context=None): """Fused multiply-add. @@ -2492,8 +2504,8 @@ if digits < 0: self = _dec_from_triple(self._sign, '1', exp-1) digits = 0 - this_function = getattr(self, self._pick_rounding_function[rounding]) - changed = this_function(digits) + this_function = self._pick_rounding_function[rounding] + changed = this_function(self, digits) coeff = self._int[:digits] or '0' if changed == 1: coeff = str(int(coeff)+1) @@ -3705,18 +3717,6 @@ ##### Context class ####################################################### - -# get rounding method function: -rounding_functions = [name for name in Decimal.__dict__.keys() - if name.startswith('_round_')] -for name in rounding_functions: - # name is like _round_half_even, goes to the global ROUND_HALF_EVEN value. - globalname = name[1:].upper() - val = globals()[globalname] - Decimal._pick_rounding_function[val] = name - -del name, val, globalname, rounding_functions - class _ContextManager(object): """Context manager class to support localcontext(). @@ -5990,7 +5990,7 @@ def _format_align(sign, body, spec): """Given an unpadded, non-aligned numeric string 'body' and sign - string 'sign', add padding and aligment conforming to the given + string 'sign', add padding and alignment conforming to the given format specifier dictionary 'spec' (as produced by parse_format_specifier). 
diff --git a/lib-python/2.7/difflib.py b/lib-python/2.7/difflib.py --- a/lib-python/2.7/difflib.py +++ b/lib-python/2.7/difflib.py @@ -1140,6 +1140,21 @@ return ch in ws +######################################################################## +### Unified Diff +######################################################################## + +def _format_range_unified(start, stop): + 'Convert range to the "ed" format' + # Per the diff spec at http://www.unix.org/single_unix_specification/ + beginning = start + 1 # lines start numbering with one + length = stop - start + if length == 1: + return '{}'.format(beginning) + if not length: + beginning -= 1 # empty ranges begin at line just before the range + return '{},{}'.format(beginning, length) + def unified_diff(a, b, fromfile='', tofile='', fromfiledate='', tofiledate='', n=3, lineterm='\n'): r""" @@ -1184,25 +1199,45 @@ started = False for group in SequenceMatcher(None,a,b).get_grouped_opcodes(n): if not started: - fromdate = '\t%s' % fromfiledate if fromfiledate else '' - todate = '\t%s' % tofiledate if tofiledate else '' - yield '--- %s%s%s' % (fromfile, fromdate, lineterm) - yield '+++ %s%s%s' % (tofile, todate, lineterm) started = True - i1, i2, j1, j2 = group[0][1], group[-1][2], group[0][3], group[-1][4] - yield "@@ -%d,%d +%d,%d @@%s" % (i1+1, i2-i1, j1+1, j2-j1, lineterm) + fromdate = '\t{}'.format(fromfiledate) if fromfiledate else '' + todate = '\t{}'.format(tofiledate) if tofiledate else '' + yield '--- {}{}{}'.format(fromfile, fromdate, lineterm) + yield '+++ {}{}{}'.format(tofile, todate, lineterm) + + first, last = group[0], group[-1] + file1_range = _format_range_unified(first[1], last[2]) + file2_range = _format_range_unified(first[3], last[4]) + yield '@@ -{} +{} @@{}'.format(file1_range, file2_range, lineterm) + for tag, i1, i2, j1, j2 in group: if tag == 'equal': for line in a[i1:i2]: yield ' ' + line continue - if tag == 'replace' or tag == 'delete': + if tag in ('replace', 'delete'): for line in a[i1:i2]: yield '-' + line - if tag == 'replace' or tag == 'insert': + if tag in ('replace', 'insert'): for line in b[j1:j2]: yield '+' + line + +######################################################################## +### Context Diff +######################################################################## + +def _format_range_context(start, stop): + 'Convert range to the "ed" format' + # Per the diff spec at http://www.unix.org/single_unix_specification/ + beginning = start + 1 # lines start numbering with one + length = stop - start + if not length: + beginning -= 1 # empty ranges begin at line just before the range + if length <= 1: + return '{}'.format(beginning) + return '{},{}'.format(beginning, beginning + length - 1) + # See http://www.unix.org/single_unix_specification/ def context_diff(a, b, fromfile='', tofile='', fromfiledate='', tofiledate='', n=3, lineterm='\n'): @@ -1247,38 +1282,36 @@ four """ + prefix = dict(insert='+ ', delete='- ', replace='! ', equal=' ') started = False - prefixmap = {'insert':'+ ', 'delete':'- ', 'replace':'! 
', 'equal':' '} for group in SequenceMatcher(None,a,b).get_grouped_opcodes(n): if not started: - fromdate = '\t%s' % fromfiledate if fromfiledate else '' - todate = '\t%s' % tofiledate if tofiledate else '' - yield '*** %s%s%s' % (fromfile, fromdate, lineterm) - yield '--- %s%s%s' % (tofile, todate, lineterm) started = True + fromdate = '\t{}'.format(fromfiledate) if fromfiledate else '' + todate = '\t{}'.format(tofiledate) if tofiledate else '' + yield '*** {}{}{}'.format(fromfile, fromdate, lineterm) + yield '--- {}{}{}'.format(tofile, todate, lineterm) - yield '***************%s' % (lineterm,) - if group[-1][2] - group[0][1] >= 2: - yield '*** %d,%d ****%s' % (group[0][1]+1, group[-1][2], lineterm) - else: - yield '*** %d ****%s' % (group[-1][2], lineterm) - visiblechanges = [e for e in group if e[0] in ('replace', 'delete')] - if visiblechanges: + first, last = group[0], group[-1] + yield '***************' + lineterm + + file1_range = _format_range_context(first[1], last[2]) + yield '*** {} ****{}'.format(file1_range, lineterm) + + if any(tag in ('replace', 'delete') for tag, _, _, _, _ in group): for tag, i1, i2, _, _ in group: if tag != 'insert': for line in a[i1:i2]: - yield prefixmap[tag] + line + yield prefix[tag] + line - if group[-1][4] - group[0][3] >= 2: - yield '--- %d,%d ----%s' % (group[0][3]+1, group[-1][4], lineterm) - else: - yield '--- %d ----%s' % (group[-1][4], lineterm) - visiblechanges = [e for e in group if e[0] in ('replace', 'insert')] - if visiblechanges: + file2_range = _format_range_context(first[3], last[4]) + yield '--- {} ----{}'.format(file2_range, lineterm) + + if any(tag in ('replace', 'insert') for tag, _, _, _, _ in group): for tag, _, _, j1, j2 in group: if tag != 'delete': for line in b[j1:j2]: - yield prefixmap[tag] + line + yield prefix[tag] + line def ndiff(a, b, linejunk=None, charjunk=IS_CHARACTER_JUNK): r""" @@ -1714,7 +1747,7 @@ line = line.replace(' ','\0') # expand tabs into spaces line = line.expandtabs(self._tabsize) - # relace spaces from expanded tabs back into tab characters + # replace spaces from expanded tabs back into tab characters # (we'll replace them with markup after we do differencing) line = line.replace(' ','\t') return line.replace('\0',' ').rstrip('\n') diff --git a/lib-python/2.7/distutils/__init__.py b/lib-python/2.7/distutils/__init__.py --- a/lib-python/2.7/distutils/__init__.py +++ b/lib-python/2.7/distutils/__init__.py @@ -15,5 +15,5 @@ # Updated automatically by the Python release process. # #--start constants-- -__version__ = "2.7.1" +__version__ = "2.7.2" #--end constants-- diff --git a/lib-python/2.7/distutils/archive_util.py b/lib-python/2.7/distutils/archive_util.py --- a/lib-python/2.7/distutils/archive_util.py +++ b/lib-python/2.7/distutils/archive_util.py @@ -121,7 +121,7 @@ def make_zipfile(base_name, base_dir, verbose=0, dry_run=0): """Create a zip file from all the files under 'base_dir'. - The output zip file will be named 'base_dir' + ".zip". Uses either the + The output zip file will be named 'base_name' + ".zip". Uses either the "zipfile" Python module (if available) or the InfoZIP "zip" utility (if installed and found on the default search path). If neither tool is available, raises DistutilsExecError. 
Returns the name of the output zip diff --git a/lib-python/2.7/distutils/cmd.py b/lib-python/2.7/distutils/cmd.py --- a/lib-python/2.7/distutils/cmd.py +++ b/lib-python/2.7/distutils/cmd.py @@ -377,7 +377,7 @@ dry_run=self.dry_run) def move_file (self, src, dst, level=1): - """Move a file respectin dry-run flag.""" + """Move a file respecting dry-run flag.""" return file_util.move_file(src, dst, dry_run = self.dry_run) def spawn (self, cmd, search_path=1, level=1): diff --git a/lib-python/2.7/distutils/command/build_ext.py b/lib-python/2.7/distutils/command/build_ext.py --- a/lib-python/2.7/distutils/command/build_ext.py +++ b/lib-python/2.7/distutils/command/build_ext.py @@ -207,7 +207,7 @@ elif MSVC_VERSION == 8: self.library_dirs.append(os.path.join(sys.exec_prefix, - 'PC', 'VS8.0', 'win32release')) + 'PC', 'VS8.0')) elif MSVC_VERSION == 7: self.library_dirs.append(os.path.join(sys.exec_prefix, 'PC', 'VS7.1')) diff --git a/lib-python/2.7/distutils/command/sdist.py b/lib-python/2.7/distutils/command/sdist.py --- a/lib-python/2.7/distutils/command/sdist.py +++ b/lib-python/2.7/distutils/command/sdist.py @@ -306,17 +306,20 @@ rstrip_ws=1, collapse_join=1) - while 1: - line = template.readline() - if line is None: # end of file - break + try: + while 1: + line = template.readline() + if line is None: # end of file + break - try: - self.filelist.process_template_line(line) - except DistutilsTemplateError, msg: - self.warn("%s, line %d: %s" % (template.filename, - template.current_line, - msg)) + try: + self.filelist.process_template_line(line) + except DistutilsTemplateError, msg: + self.warn("%s, line %d: %s" % (template.filename, + template.current_line, + msg)) + finally: + template.close() def prune_file_list(self): """Prune off branches that might slip into the file list as created diff --git a/lib-python/2.7/distutils/command/upload.py b/lib-python/2.7/distutils/command/upload.py --- a/lib-python/2.7/distutils/command/upload.py +++ b/lib-python/2.7/distutils/command/upload.py @@ -176,6 +176,9 @@ result = urlopen(request) status = result.getcode() reason = result.msg + if self.show_response: + msg = '\n'.join(('-' * 75, r.read(), '-' * 75)) + self.announce(msg, log.INFO) except socket.error, e: self.announce(str(e), log.ERROR) return @@ -189,6 +192,3 @@ else: self.announce('Upload failed (%s): %s' % (status, reason), log.ERROR) - if self.show_response: - msg = '\n'.join(('-' * 75, r.read(), '-' * 75)) - self.announce(msg, log.INFO) diff --git a/lib-python/2.7/distutils/sysconfig.py b/lib-python/2.7/distutils/sysconfig.py --- a/lib-python/2.7/distutils/sysconfig.py +++ b/lib-python/2.7/distutils/sysconfig.py @@ -389,7 +389,7 @@ cur_target = os.getenv('MACOSX_DEPLOYMENT_TARGET', '') if cur_target == '': cur_target = cfg_target - os.putenv('MACOSX_DEPLOYMENT_TARGET', cfg_target) + os.environ['MACOSX_DEPLOYMENT_TARGET'] = cfg_target elif map(int, cfg_target.split('.')) > map(int, cur_target.split('.')): my_msg = ('$MACOSX_DEPLOYMENT_TARGET mismatch: now "%s" but "%s" during configure' % (cur_target, cfg_target)) diff --git a/lib-python/2.7/distutils/tests/__init__.py b/lib-python/2.7/distutils/tests/__init__.py --- a/lib-python/2.7/distutils/tests/__init__.py +++ b/lib-python/2.7/distutils/tests/__init__.py @@ -15,9 +15,10 @@ import os import sys import unittest +from test.test_support import run_unittest -here = os.path.dirname(__file__) +here = os.path.dirname(__file__) or os.curdir def test_suite(): @@ -32,4 +33,4 @@ if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + 
run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_archive_util.py b/lib-python/2.7/distutils/tests/test_archive_util.py --- a/lib-python/2.7/distutils/tests/test_archive_util.py +++ b/lib-python/2.7/distutils/tests/test_archive_util.py @@ -12,7 +12,7 @@ ARCHIVE_FORMATS) from distutils.spawn import find_executable, spawn from distutils.tests import support -from test.test_support import check_warnings +from test.test_support import check_warnings, run_unittest try: import grp @@ -281,4 +281,4 @@ return unittest.makeSuite(ArchiveUtilTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_bdist_msi.py b/lib-python/2.7/distutils/tests/test_bdist_msi.py --- a/lib-python/2.7/distutils/tests/test_bdist_msi.py +++ b/lib-python/2.7/distutils/tests/test_bdist_msi.py @@ -11,7 +11,7 @@ support.LoggingSilencer, unittest.TestCase): - def test_minial(self): + def test_minimal(self): # minimal test XXX need more tests from distutils.command.bdist_msi import bdist_msi pkg_pth, dist = self.create_dist() diff --git a/lib-python/2.7/distutils/tests/test_build.py b/lib-python/2.7/distutils/tests/test_build.py --- a/lib-python/2.7/distutils/tests/test_build.py +++ b/lib-python/2.7/distutils/tests/test_build.py @@ -2,6 +2,7 @@ import unittest import os import sys +from test.test_support import run_unittest from distutils.command.build import build from distutils.tests import support @@ -51,4 +52,4 @@ return unittest.makeSuite(BuildTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_build_clib.py b/lib-python/2.7/distutils/tests/test_build_clib.py --- a/lib-python/2.7/distutils/tests/test_build_clib.py +++ b/lib-python/2.7/distutils/tests/test_build_clib.py @@ -3,6 +3,8 @@ import os import sys +from test.test_support import run_unittest + from distutils.command.build_clib import build_clib from distutils.errors import DistutilsSetupError from distutils.tests import support @@ -140,4 +142,4 @@ return unittest.makeSuite(BuildCLibTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_build_ext.py b/lib-python/2.7/distutils/tests/test_build_ext.py --- a/lib-python/2.7/distutils/tests/test_build_ext.py +++ b/lib-python/2.7/distutils/tests/test_build_ext.py @@ -3,12 +3,13 @@ import tempfile import shutil from StringIO import StringIO +import textwrap from distutils.core import Extension, Distribution from distutils.command.build_ext import build_ext from distutils import sysconfig from distutils.tests import support -from distutils.errors import DistutilsSetupError +from distutils.errors import DistutilsSetupError, CompileError import unittest from test import test_support @@ -430,6 +431,59 @@ wanted = os.path.join(cmd.build_lib, 'UpdateManager', 'fdsend' + ext) self.assertEqual(ext_path, wanted) + @unittest.skipUnless(sys.platform == 'darwin', 'test only relevant for MacOSX') + def test_deployment_target(self): + self._try_compile_deployment_target() + + orig_environ = os.environ + os.environ = orig_environ.copy() + self.addCleanup(setattr, os, 'environ', orig_environ) + + os.environ['MACOSX_DEPLOYMENT_TARGET']='10.1' + self._try_compile_deployment_target() + + + def _try_compile_deployment_target(self): + deptarget_c = os.path.join(self.tmp_dir, 'deptargetmodule.c') + + with 
open(deptarget_c, 'w') as fp: + fp.write(textwrap.dedent('''\ + #include + + int dummy; + + #if TARGET != MAC_OS_X_VERSION_MIN_REQUIRED + #error "Unexpected target" + #endif + + ''')) + + target = sysconfig.get_config_var('MACOSX_DEPLOYMENT_TARGET') + target = tuple(map(int, target.split('.'))) + target = '%02d%01d0' % target + + deptarget_ext = Extension( + 'deptarget', + [deptarget_c], + extra_compile_args=['-DTARGET=%s'%(target,)], + ) + dist = Distribution({ + 'name': 'deptarget', + 'ext_modules': [deptarget_ext] + }) + dist.package_dir = self.tmp_dir + cmd = build_ext(dist) + cmd.build_lib = self.tmp_dir + cmd.build_temp = self.tmp_dir + + try: + old_stdout = sys.stdout + cmd.ensure_finalized() + cmd.run() + + except CompileError: + self.fail("Wrong deployment target during compilation") + def test_suite(): return unittest.makeSuite(BuildExtTestCase) diff --git a/lib-python/2.7/distutils/tests/test_build_py.py b/lib-python/2.7/distutils/tests/test_build_py.py --- a/lib-python/2.7/distutils/tests/test_build_py.py +++ b/lib-python/2.7/distutils/tests/test_build_py.py @@ -10,13 +10,14 @@ from distutils.errors import DistutilsFileError from distutils.tests import support +from test.test_support import run_unittest class BuildPyTestCase(support.TempdirManager, support.LoggingSilencer, unittest.TestCase): - def _setup_package_data(self): + def test_package_data(self): sources = self.mkdtemp() f = open(os.path.join(sources, "__init__.py"), "w") try: @@ -56,20 +57,15 @@ self.assertEqual(len(cmd.get_outputs()), 3) pkgdest = os.path.join(destination, "pkg") files = os.listdir(pkgdest) - return files + self.assertIn("__init__.py", files) + self.assertIn("README.txt", files) + # XXX even with -O, distutils writes pyc, not pyo; bug? + if sys.dont_write_bytecode: + self.assertNotIn("__init__.pyc", files) + else: + self.assertIn("__init__.pyc", files) - def test_package_data(self): - files = self._setup_package_data() - self.assertTrue("__init__.py" in files) - self.assertTrue("README.txt" in files) - - @unittest.skipIf(sys.flags.optimize >= 2, - "pyc files are not written with -O2 and above") - def test_package_data_pyc(self): - files = self._setup_package_data() - self.assertTrue("__init__.pyc" in files) - - def test_empty_package_dir (self): + def test_empty_package_dir(self): # See SF 1668596/1720897. 
cwd = os.getcwd() @@ -117,10 +113,10 @@ finally: sys.dont_write_bytecode = old_dont_write_bytecode - self.assertTrue('byte-compiling is disabled' in self.logs[0][1]) + self.assertIn('byte-compiling is disabled', self.logs[0][1]) def test_suite(): return unittest.makeSuite(BuildPyTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_build_scripts.py b/lib-python/2.7/distutils/tests/test_build_scripts.py --- a/lib-python/2.7/distutils/tests/test_build_scripts.py +++ b/lib-python/2.7/distutils/tests/test_build_scripts.py @@ -8,6 +8,7 @@ import sysconfig from distutils.tests import support +from test.test_support import run_unittest class BuildScriptsTestCase(support.TempdirManager, @@ -108,4 +109,4 @@ return unittest.makeSuite(BuildScriptsTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_check.py b/lib-python/2.7/distutils/tests/test_check.py --- a/lib-python/2.7/distutils/tests/test_check.py +++ b/lib-python/2.7/distutils/tests/test_check.py @@ -1,5 +1,6 @@ """Tests for distutils.command.check.""" import unittest +from test.test_support import run_unittest from distutils.command.check import check, HAS_DOCUTILS from distutils.tests import support @@ -95,4 +96,4 @@ return unittest.makeSuite(CheckTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_clean.py b/lib-python/2.7/distutils/tests/test_clean.py --- a/lib-python/2.7/distutils/tests/test_clean.py +++ b/lib-python/2.7/distutils/tests/test_clean.py @@ -6,6 +6,7 @@ from distutils.command.clean import clean from distutils.tests import support +from test.test_support import run_unittest class cleanTestCase(support.TempdirManager, support.LoggingSilencer, @@ -38,7 +39,7 @@ self.assertTrue(not os.path.exists(path), '%s was not removed' % path) - # let's run the command again (should spit warnings but suceed) + # let's run the command again (should spit warnings but succeed) cmd.all = 1 cmd.ensure_finalized() cmd.run() @@ -47,4 +48,4 @@ return unittest.makeSuite(cleanTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_cmd.py b/lib-python/2.7/distutils/tests/test_cmd.py --- a/lib-python/2.7/distutils/tests/test_cmd.py +++ b/lib-python/2.7/distutils/tests/test_cmd.py @@ -99,7 +99,7 @@ def test_ensure_dirname(self): cmd = self.cmd - cmd.option1 = os.path.dirname(__file__) + cmd.option1 = os.path.dirname(__file__) or os.curdir cmd.ensure_dirname('option1') cmd.option2 = 'xxx' self.assertRaises(DistutilsOptionError, cmd.ensure_dirname, 'option2') diff --git a/lib-python/2.7/distutils/tests/test_config.py b/lib-python/2.7/distutils/tests/test_config.py --- a/lib-python/2.7/distutils/tests/test_config.py +++ b/lib-python/2.7/distutils/tests/test_config.py @@ -11,6 +11,7 @@ from distutils.log import WARN from distutils.tests import support +from test.test_support import run_unittest PYPIRC = """\ [distutils] @@ -119,4 +120,4 @@ return unittest.makeSuite(PyPIRCCommandTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_config_cmd.py b/lib-python/2.7/distutils/tests/test_config_cmd.py --- a/lib-python/2.7/distutils/tests/test_config_cmd.py 
+++ b/lib-python/2.7/distutils/tests/test_config_cmd.py @@ -2,6 +2,7 @@ import unittest import os import sys +from test.test_support import run_unittest from distutils.command.config import dump_file, config from distutils.tests import support @@ -86,4 +87,4 @@ return unittest.makeSuite(ConfigTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_core.py b/lib-python/2.7/distutils/tests/test_core.py --- a/lib-python/2.7/distutils/tests/test_core.py +++ b/lib-python/2.7/distutils/tests/test_core.py @@ -6,7 +6,7 @@ import shutil import sys import test.test_support -from test.test_support import captured_stdout +from test.test_support import captured_stdout, run_unittest import unittest from distutils.tests import support @@ -105,4 +105,4 @@ return unittest.makeSuite(CoreTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_dep_util.py b/lib-python/2.7/distutils/tests/test_dep_util.py --- a/lib-python/2.7/distutils/tests/test_dep_util.py +++ b/lib-python/2.7/distutils/tests/test_dep_util.py @@ -6,6 +6,7 @@ from distutils.dep_util import newer, newer_pairwise, newer_group from distutils.errors import DistutilsFileError from distutils.tests import support +from test.test_support import run_unittest class DepUtilTestCase(support.TempdirManager, unittest.TestCase): @@ -77,4 +78,4 @@ return unittest.makeSuite(DepUtilTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_dir_util.py b/lib-python/2.7/distutils/tests/test_dir_util.py --- a/lib-python/2.7/distutils/tests/test_dir_util.py +++ b/lib-python/2.7/distutils/tests/test_dir_util.py @@ -10,6 +10,7 @@ from distutils import log from distutils.tests import support +from test.test_support import run_unittest class DirUtilTestCase(support.TempdirManager, unittest.TestCase): @@ -112,4 +113,4 @@ return unittest.makeSuite(DirUtilTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_dist.py b/lib-python/2.7/distutils/tests/test_dist.py --- a/lib-python/2.7/distutils/tests/test_dist.py +++ b/lib-python/2.7/distutils/tests/test_dist.py @@ -11,7 +11,7 @@ from distutils.dist import Distribution, fix_help_options, DistributionMetadata from distutils.cmd import Command import distutils.dist -from test.test_support import TESTFN, captured_stdout +from test.test_support import TESTFN, captured_stdout, run_unittest from distutils.tests import support class test_dist(Command): @@ -433,4 +433,4 @@ return suite if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_file_util.py b/lib-python/2.7/distutils/tests/test_file_util.py --- a/lib-python/2.7/distutils/tests/test_file_util.py +++ b/lib-python/2.7/distutils/tests/test_file_util.py @@ -6,6 +6,7 @@ from distutils.file_util import move_file, write_file, copy_file from distutils import log from distutils.tests import support +from test.test_support import run_unittest class FileUtilTestCase(support.TempdirManager, unittest.TestCase): @@ -77,4 +78,4 @@ return unittest.makeSuite(FileUtilTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git 
a/lib-python/2.7/distutils/tests/test_filelist.py b/lib-python/2.7/distutils/tests/test_filelist.py --- a/lib-python/2.7/distutils/tests/test_filelist.py +++ b/lib-python/2.7/distutils/tests/test_filelist.py @@ -1,7 +1,7 @@ """Tests for distutils.filelist.""" from os.path import join import unittest -from test.test_support import captured_stdout +from test.test_support import captured_stdout, run_unittest from distutils.filelist import glob_to_re, FileList from distutils import debug @@ -82,4 +82,4 @@ return unittest.makeSuite(FileListTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_install.py b/lib-python/2.7/distutils/tests/test_install.py --- a/lib-python/2.7/distutils/tests/test_install.py +++ b/lib-python/2.7/distutils/tests/test_install.py @@ -3,6 +3,8 @@ import os import unittest +from test.test_support import run_unittest + from distutils.command.install import install from distutils.core import Distribution @@ -52,4 +54,4 @@ return unittest.makeSuite(InstallTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_install_data.py b/lib-python/2.7/distutils/tests/test_install_data.py --- a/lib-python/2.7/distutils/tests/test_install_data.py +++ b/lib-python/2.7/distutils/tests/test_install_data.py @@ -6,6 +6,7 @@ from distutils.command.install_data import install_data from distutils.tests import support +from test.test_support import run_unittest class InstallDataTestCase(support.TempdirManager, support.LoggingSilencer, @@ -73,4 +74,4 @@ return unittest.makeSuite(InstallDataTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_install_headers.py b/lib-python/2.7/distutils/tests/test_install_headers.py --- a/lib-python/2.7/distutils/tests/test_install_headers.py +++ b/lib-python/2.7/distutils/tests/test_install_headers.py @@ -6,6 +6,7 @@ from distutils.command.install_headers import install_headers from distutils.tests import support +from test.test_support import run_unittest class InstallHeadersTestCase(support.TempdirManager, support.LoggingSilencer, @@ -37,4 +38,4 @@ return unittest.makeSuite(InstallHeadersTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_install_lib.py b/lib-python/2.7/distutils/tests/test_install_lib.py --- a/lib-python/2.7/distutils/tests/test_install_lib.py +++ b/lib-python/2.7/distutils/tests/test_install_lib.py @@ -7,6 +7,7 @@ from distutils.extension import Extension from distutils.tests import support from distutils.errors import DistutilsOptionError +from test.test_support import run_unittest class InstallLibTestCase(support.TempdirManager, support.LoggingSilencer, @@ -103,4 +104,4 @@ return unittest.makeSuite(InstallLibTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_install_scripts.py b/lib-python/2.7/distutils/tests/test_install_scripts.py --- a/lib-python/2.7/distutils/tests/test_install_scripts.py +++ b/lib-python/2.7/distutils/tests/test_install_scripts.py @@ -7,6 +7,7 @@ from distutils.core import Distribution from distutils.tests import support +from test.test_support import run_unittest class 
InstallScriptsTestCase(support.TempdirManager, @@ -78,4 +79,4 @@ return unittest.makeSuite(InstallScriptsTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_msvc9compiler.py b/lib-python/2.7/distutils/tests/test_msvc9compiler.py --- a/lib-python/2.7/distutils/tests/test_msvc9compiler.py +++ b/lib-python/2.7/distutils/tests/test_msvc9compiler.py @@ -5,6 +5,7 @@ from distutils.errors import DistutilsPlatformError from distutils.tests import support +from test.test_support import run_unittest _MANIFEST = """\ @@ -137,4 +138,4 @@ return unittest.makeSuite(msvc9compilerTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_register.py b/lib-python/2.7/distutils/tests/test_register.py --- a/lib-python/2.7/distutils/tests/test_register.py +++ b/lib-python/2.7/distutils/tests/test_register.py @@ -7,7 +7,7 @@ import urllib2 import warnings -from test.test_support import check_warnings +from test.test_support import check_warnings, run_unittest from distutils.command import register as register_module from distutils.command.register import register @@ -138,7 +138,7 @@ # let's see what the server received : we should # have 2 similar requests - self.assertTrue(self.conn.reqs, 2) + self.assertEqual(len(self.conn.reqs), 2) req1 = dict(self.conn.reqs[0].headers) req2 = dict(self.conn.reqs[1].headers) self.assertEqual(req2['Content-length'], req1['Content-length']) @@ -168,7 +168,7 @@ del register_module.raw_input # we should have send a request - self.assertTrue(self.conn.reqs, 1) + self.assertEqual(len(self.conn.reqs), 1) req = self.conn.reqs[0] headers = dict(req.headers) self.assertEqual(headers['Content-length'], '608') @@ -186,7 +186,7 @@ del register_module.raw_input # we should have send a request - self.assertTrue(self.conn.reqs, 1) + self.assertEqual(len(self.conn.reqs), 1) req = self.conn.reqs[0] headers = dict(req.headers) self.assertEqual(headers['Content-length'], '290') @@ -258,4 +258,4 @@ return unittest.makeSuite(RegisterTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_sdist.py b/lib-python/2.7/distutils/tests/test_sdist.py --- a/lib-python/2.7/distutils/tests/test_sdist.py +++ b/lib-python/2.7/distutils/tests/test_sdist.py @@ -24,11 +24,9 @@ import tempfile import warnings -from test.test_support import check_warnings -from test.test_support import captured_stdout +from test.test_support import captured_stdout, check_warnings, run_unittest -from distutils.command.sdist import sdist -from distutils.command.sdist import show_formats +from distutils.command.sdist import sdist, show_formats from distutils.core import Distribution from distutils.tests.test_config import PyPIRCCommandTestCase from distutils.errors import DistutilsExecError, DistutilsOptionError @@ -372,7 +370,7 @@ # adding a file self.write_file((self.tmp_dir, 'somecode', 'doc2.txt'), '#') - # make sure build_py is reinitinialized, like a fresh run + # make sure build_py is reinitialized, like a fresh run build_py = dist.get_command_obj('build_py') build_py.finalized = False build_py.ensure_finalized() @@ -390,6 +388,7 @@ self.assertEqual(len(manifest2), 6) self.assertIn('doc2.txt', manifest2[-1]) + @unittest.skipUnless(zlib, "requires zlib") def test_manifest_marker(self): # check that autogenerated MANIFESTs have a 
marker dist, cmd = self.get_cmd() @@ -406,6 +405,7 @@ self.assertEqual(manifest[0], '# file GENERATED by distutils, do NOT edit') + @unittest.skipUnless(zlib, "requires zlib") def test_manual_manifest(self): # check that a MANIFEST without a marker is left alone dist, cmd = self.get_cmd() @@ -426,4 +426,4 @@ return unittest.makeSuite(SDistTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_spawn.py b/lib-python/2.7/distutils/tests/test_spawn.py --- a/lib-python/2.7/distutils/tests/test_spawn.py +++ b/lib-python/2.7/distutils/tests/test_spawn.py @@ -2,7 +2,7 @@ import unittest import os import time -from test.test_support import captured_stdout +from test.test_support import captured_stdout, run_unittest from distutils.spawn import _nt_quote_args from distutils.spawn import spawn, find_executable @@ -57,4 +57,4 @@ return unittest.makeSuite(SpawnTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_text_file.py b/lib-python/2.7/distutils/tests/test_text_file.py --- a/lib-python/2.7/distutils/tests/test_text_file.py +++ b/lib-python/2.7/distutils/tests/test_text_file.py @@ -3,6 +3,7 @@ import unittest from distutils.text_file import TextFile from distutils.tests import support +from test.test_support import run_unittest TEST_DATA = """# test file @@ -103,4 +104,4 @@ return unittest.makeSuite(TextFileTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_unixccompiler.py b/lib-python/2.7/distutils/tests/test_unixccompiler.py --- a/lib-python/2.7/distutils/tests/test_unixccompiler.py +++ b/lib-python/2.7/distutils/tests/test_unixccompiler.py @@ -1,6 +1,7 @@ """Tests for distutils.unixccompiler.""" import sys import unittest +from test.test_support import run_unittest from distutils import sysconfig from distutils.unixccompiler import UnixCCompiler @@ -126,4 +127,4 @@ return unittest.makeSuite(UnixCCompilerTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_upload.py b/lib-python/2.7/distutils/tests/test_upload.py --- a/lib-python/2.7/distutils/tests/test_upload.py +++ b/lib-python/2.7/distutils/tests/test_upload.py @@ -1,14 +1,13 @@ +# -*- encoding: utf8 -*- """Tests for distutils.command.upload.""" -# -*- encoding: utf8 -*- -import sys import os import unittest +from test.test_support import run_unittest from distutils.command import upload as upload_mod from distutils.command.upload import upload from distutils.core import Distribution -from distutils.tests import support from distutils.tests.test_config import PYPIRC, PyPIRCCommandTestCase PYPIRC_LONG_PASSWORD = """\ @@ -129,4 +128,4 @@ return unittest.makeSuite(uploadTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_util.py b/lib-python/2.7/distutils/tests/test_util.py --- a/lib-python/2.7/distutils/tests/test_util.py +++ b/lib-python/2.7/distutils/tests/test_util.py @@ -1,6 +1,7 @@ """Tests for distutils.util.""" import sys import unittest +from test.test_support import run_unittest from distutils.errors import DistutilsPlatformError, DistutilsByteCompileError from distutils.util import byte_compile @@ -21,4 +22,4 @@ return 
unittest.makeSuite(UtilTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_version.py b/lib-python/2.7/distutils/tests/test_version.py --- a/lib-python/2.7/distutils/tests/test_version.py +++ b/lib-python/2.7/distutils/tests/test_version.py @@ -2,6 +2,7 @@ import unittest from distutils.version import LooseVersion from distutils.version import StrictVersion +from test.test_support import run_unittest class VersionTestCase(unittest.TestCase): @@ -67,4 +68,4 @@ return unittest.makeSuite(VersionTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_versionpredicate.py b/lib-python/2.7/distutils/tests/test_versionpredicate.py --- a/lib-python/2.7/distutils/tests/test_versionpredicate.py +++ b/lib-python/2.7/distutils/tests/test_versionpredicate.py @@ -4,6 +4,10 @@ import distutils.versionpredicate import doctest +from test.test_support import run_unittest def test_suite(): return doctest.DocTestSuite(distutils.versionpredicate) + +if __name__ == '__main__': + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/util.py b/lib-python/2.7/distutils/util.py --- a/lib-python/2.7/distutils/util.py +++ b/lib-python/2.7/distutils/util.py @@ -97,9 +97,7 @@ from distutils.sysconfig import get_config_vars cfgvars = get_config_vars() - macver = os.environ.get('MACOSX_DEPLOYMENT_TARGET') - if not macver: - macver = cfgvars.get('MACOSX_DEPLOYMENT_TARGET') + macver = cfgvars.get('MACOSX_DEPLOYMENT_TARGET') if 1: # Always calculate the release of the running machine, diff --git a/lib-python/2.7/doctest.py b/lib-python/2.7/doctest.py --- a/lib-python/2.7/doctest.py +++ b/lib-python/2.7/doctest.py @@ -1217,7 +1217,7 @@ # Process each example. for examplenum, example in enumerate(test.examples): - # If REPORT_ONLY_FIRST_FAILURE is set, then supress + # If REPORT_ONLY_FIRST_FAILURE is set, then suppress # reporting after the first failure. quiet = (self.optionflags & REPORT_ONLY_FIRST_FAILURE and failures > 0) @@ -2186,7 +2186,7 @@ caller can catch the errors and initiate post-mortem debugging. The DocTestCase provides a debug method that raises - UnexpectedException errors if there is an unexepcted + UnexpectedException errors if there is an unexpected exception: >>> test = DocTestParser().get_doctest('>>> raise KeyError\n42', diff --git a/lib-python/2.7/email/charset.py b/lib-python/2.7/email/charset.py --- a/lib-python/2.7/email/charset.py +++ b/lib-python/2.7/email/charset.py @@ -209,7 +209,7 @@ input_charset = unicode(input_charset, 'ascii') except UnicodeError: raise errors.CharsetError(input_charset) - input_charset = input_charset.lower() + input_charset = input_charset.lower().encode('ascii') # Set the input charset after filtering through the aliases and/or codecs if not (input_charset in ALIASES or input_charset in CHARSETS): try: diff --git a/lib-python/2.7/email/generator.py b/lib-python/2.7/email/generator.py --- a/lib-python/2.7/email/generator.py +++ b/lib-python/2.7/email/generator.py @@ -202,18 +202,13 @@ g = self.clone(s) g.flatten(part, unixfrom=False) msgtexts.append(s.getvalue()) - # Now make sure the boundary we've selected doesn't appear in any of - # the message texts. - alltext = NL.join(msgtexts) # BAW: What about boundaries that are wrapped in double-quotes? 
- boundary = msg.get_boundary(failobj=_make_boundary(alltext)) - # If we had to calculate a new boundary because the body text - # contained that string, set the new boundary. We don't do it - # unconditionally because, while set_boundary() preserves order, it - # doesn't preserve newlines/continuations in headers. This is no big - # deal in practice, but turns out to be inconvenient for the unittest - # suite. - if msg.get_boundary() != boundary: + boundary = msg.get_boundary() + if not boundary: + # Create a boundary that doesn't appear in any of the + # message texts. + alltext = NL.join(msgtexts) + boundary = _make_boundary(alltext) msg.set_boundary(boundary) # If there's a preamble, write it out, with a trailing CRLF if msg.preamble is not None: @@ -292,7 +287,7 @@ _FMT = '[Non-text (%(type)s) part of message omitted, filename %(filename)s]' class DecodedGenerator(Generator): - """Generator a text representation of a message. + """Generates a text representation of a message. Like the Generator base class, except that non-text parts are substituted with a format string representing the part. diff --git a/lib-python/2.7/email/header.py b/lib-python/2.7/email/header.py --- a/lib-python/2.7/email/header.py +++ b/lib-python/2.7/email/header.py @@ -47,6 +47,10 @@ # For use with .match() fcre = re.compile(r'[\041-\176]+:$') +# Find a header embedded in a putative header value. Used to check for +# header injection attack. +_embeded_header = re.compile(r'\n[^ \t]+:') + # Helpers @@ -403,7 +407,11 @@ newchunks += self._split(s, charset, targetlen, splitchars) lastchunk, lastcharset = newchunks[-1] lastlen = lastcharset.encoded_header_len(lastchunk) - return self._encode_chunks(newchunks, maxlinelen) + value = self._encode_chunks(newchunks, maxlinelen) + if _embeded_header.search(value): + raise HeaderParseError("header value appears to contain " + "an embedded header: {!r}".format(value)) + return value diff --git a/lib-python/2.7/email/message.py b/lib-python/2.7/email/message.py --- a/lib-python/2.7/email/message.py +++ b/lib-python/2.7/email/message.py @@ -38,7 +38,9 @@ def _formatparam(param, value=None, quote=True): """Convenience function to format and return a key=value pair. - This will quote the value if needed or if quote is true. + This will quote the value if needed or if quote is true. If value is a + three tuple (charset, language, value), it will be encoded according + to RFC2231 rules. """ if value is not None and len(value) > 0: # A tuple is used for RFC 2231 encoded parameter values where items @@ -97,7 +99,7 @@ objects, otherwise it is a string. Message objects implement part of the `mapping' interface, which assumes - there is exactly one occurrance of the header per message. Some headers + there is exactly one occurrence of the header per message. Some headers do in fact appear multiple times (e.g. Received) and for those headers, you must use the explicit API to set or get all the headers. Not all of the mapping methods are implemented. @@ -286,7 +288,7 @@ Return None if the header is missing instead of raising an exception. Note that if the header appeared multiple times, exactly which - occurrance gets returned is undefined. Use get_all() to get all + occurrence gets returned is undefined. Use get_all() to get all the values matching a header field name. """ return self.get(name) @@ -389,7 +391,10 @@ name is the header field to add. keyword arguments can be used to set additional parameters for the header field, with underscores converted to dashes. 
Normally the parameter will be added as key="value" unless - value is None, in which case only the key will be added. + value is None, in which case only the key will be added. If a + parameter value contains non-ASCII characters it must be specified as a + three-tuple of (charset, language, value), in which case it will be + encoded according to RFC2231 rules. Example: diff --git a/lib-python/2.7/email/mime/application.py b/lib-python/2.7/email/mime/application.py --- a/lib-python/2.7/email/mime/application.py +++ b/lib-python/2.7/email/mime/application.py @@ -17,7 +17,7 @@ _encoder=encoders.encode_base64, **_params): """Create an application/* type MIME document. - _data is a string containing the raw applicatoin data. + _data is a string containing the raw application data. _subtype is the MIME content type subtype, defaulting to 'octet-stream'. diff --git a/lib-python/2.7/email/test/data/msg_26.txt b/lib-python/2.7/email/test/data/msg_26.txt --- a/lib-python/2.7/email/test/data/msg_26.txt +++ b/lib-python/2.7/email/test/data/msg_26.txt @@ -42,4 +42,4 @@ MzMAAAAACH97tzAAAAALu3c3gAAAAAAL+7tzDABAu7f7cAAAAAAACA+3MA7EQAv/sIAA AAAAAAAIAAAAAAAAAIAAAAAA ---1618492860--2051301190--113853680-- +--1618492860--2051301190--113853680-- \ No newline at end of file diff --git a/lib-python/2.7/email/test/test_email.py b/lib-python/2.7/email/test/test_email.py --- a/lib-python/2.7/email/test/test_email.py +++ b/lib-python/2.7/email/test/test_email.py @@ -179,6 +179,17 @@ self.assertRaises(Errors.HeaderParseError, msg.set_boundary, 'BOUNDARY') + def test_make_boundary(self): + msg = MIMEMultipart('form-data') + # Note that when the boundary gets created is an implementation + # detail and might change. + self.assertEqual(msg.items()[0][1], 'multipart/form-data') + # Trigger creation of boundary + msg.as_string() + self.assertEqual(msg.items()[0][1][:33], + 'multipart/form-data; boundary="==') + # XXX: there ought to be tests of the uniqueness of the boundary, too. + def test_message_rfc822_only(self): # Issue 7970: message/rfc822 not in multipart parsed by # HeaderParser caused an exception when flattened. @@ -542,6 +553,17 @@ msg.set_charset(u'us-ascii') self.assertEqual('us-ascii', msg.get_content_charset()) + # Issue 5871: reject an attempt to embed a header inside a header value + # (header injection attack). 
+ def test_embeded_header_via_Header_rejected(self): + msg = Message() + msg['Dummy'] = Header('dummy\nX-Injected-Header: test') + self.assertRaises(Errors.HeaderParseError, msg.as_string) + + def test_embeded_header_via_string_rejected(self): + msg = Message() + msg['Dummy'] = 'dummy\nX-Injected-Header: test' + self.assertRaises(Errors.HeaderParseError, msg.as_string) # Test the email.Encoders module @@ -3113,6 +3135,28 @@ s = 'Subject: =?EUC-KR?B?CSixpLDtKSC/7Liuvsax4iC6uLmwMcijIKHaILzSwd/H0SC8+LCjwLsgv7W/+Mj3I ?=' raises(Errors.HeaderParseError, decode_header, s) + # Issue 1078919 + def test_ascii_add_header(self): + msg = Message() + msg.add_header('Content-Disposition', 'attachment', + filename='bud.gif') + self.assertEqual('attachment; filename="bud.gif"', + msg['Content-Disposition']) + + def test_nonascii_add_header_via_triple(self): + msg = Message() + msg.add_header('Content-Disposition', 'attachment', + filename=('iso-8859-1', '', 'Fu\xdfballer.ppt')) + self.assertEqual( + 'attachment; filename*="iso-8859-1\'\'Fu%DFballer.ppt"', + msg['Content-Disposition']) + + def test_encode_unaliased_charset(self): + # Issue 1379416: when the charset has no output conversion, + # output was accidentally getting coerced to unicode. + res = Header('abc','iso-8859-2').encode() + self.assertEqual(res, '=?iso-8859-2?q?abc?=') + self.assertIsInstance(res, str) # Test RFC 2231 header parameters (en/de)coding diff --git a/lib-python/2.7/ftplib.py b/lib-python/2.7/ftplib.py --- a/lib-python/2.7/ftplib.py +++ b/lib-python/2.7/ftplib.py @@ -599,7 +599,7 @@ Usage example: >>> from ftplib import FTP_TLS >>> ftps = FTP_TLS('ftp.python.org') - >>> ftps.login() # login anonimously previously securing control channel + >>> ftps.login() # login anonymously previously securing control channel '230 Guest login ok, access restrictions apply.' 
>>> ftps.prot_p() # switch to secure data connection '200 Protection level set to P' diff --git a/lib-python/2.7/functools.py b/lib-python/2.7/functools.py --- a/lib-python/2.7/functools.py +++ b/lib-python/2.7/functools.py @@ -53,17 +53,17 @@ def total_ordering(cls): """Class decorator that fills in missing ordering methods""" convert = { - '__lt__': [('__gt__', lambda self, other: other < self), - ('__le__', lambda self, other: not other < self), + '__lt__': [('__gt__', lambda self, other: not (self < other or self == other)), + ('__le__', lambda self, other: self < other or self == other), ('__ge__', lambda self, other: not self < other)], - '__le__': [('__ge__', lambda self, other: other <= self), - ('__lt__', lambda self, other: not other <= self), + '__le__': [('__ge__', lambda self, other: not self <= other or self == other), + ('__lt__', lambda self, other: self <= other and not self == other), ('__gt__', lambda self, other: not self <= other)], - '__gt__': [('__lt__', lambda self, other: other > self), - ('__ge__', lambda self, other: not other > self), + '__gt__': [('__lt__', lambda self, other: not (self > other or self == other)), + ('__ge__', lambda self, other: self > other or self == other), ('__le__', lambda self, other: not self > other)], - '__ge__': [('__le__', lambda self, other: other >= self), - ('__gt__', lambda self, other: not other >= self), + '__ge__': [('__le__', lambda self, other: (not self >= other) or self == other), + ('__gt__', lambda self, other: self >= other and not self == other), ('__lt__', lambda self, other: not self >= other)] } roots = set(dir(cls)) & set(convert) @@ -80,6 +80,7 @@ def cmp_to_key(mycmp): """Convert a cmp= function into a key= function""" class K(object): + __slots__ = ['obj'] def __init__(self, obj, *args): self.obj = obj def __lt__(self, other): diff --git a/lib-python/2.7/getpass.py b/lib-python/2.7/getpass.py --- a/lib-python/2.7/getpass.py +++ b/lib-python/2.7/getpass.py @@ -62,7 +62,7 @@ try: old = termios.tcgetattr(fd) # a copy to save new = old[:] - new[3] &= ~(termios.ECHO|termios.ISIG) # 3 == 'lflags' + new[3] &= ~termios.ECHO # 3 == 'lflags' tcsetattr_flags = termios.TCSAFLUSH if hasattr(termios, 'TCSASOFT'): tcsetattr_flags |= termios.TCSASOFT diff --git a/lib-python/2.7/gettext.py b/lib-python/2.7/gettext.py --- a/lib-python/2.7/gettext.py +++ b/lib-python/2.7/gettext.py @@ -316,7 +316,7 @@ # Note: we unconditionally convert both msgids and msgstrs to # Unicode using the character encoding specified in the charset # parameter of the Content-Type header. The gettext documentation - # strongly encourages msgids to be us-ascii, but some appliations + # strongly encourages msgids to be us-ascii, but some applications # require alternative encodings (e.g. Zope's ZCML and ZPT). 
For # traditional gettext applications, the msgid conversion will # cause no problems since us-ascii should always be a subset of diff --git a/lib-python/2.7/hashlib.py b/lib-python/2.7/hashlib.py --- a/lib-python/2.7/hashlib.py +++ b/lib-python/2.7/hashlib.py @@ -64,26 +64,29 @@ def __get_builtin_constructor(name): - if name in ('SHA1', 'sha1'): - import _sha - return _sha.new - elif name in ('MD5', 'md5'): - import _md5 - return _md5.new - elif name in ('SHA256', 'sha256', 'SHA224', 'sha224'): - import _sha256 - bs = name[3:] - if bs == '256': - return _sha256.sha256 - elif bs == '224': - return _sha256.sha224 - elif name in ('SHA512', 'sha512', 'SHA384', 'sha384'): - import _sha512 - bs = name[3:] - if bs == '512': - return _sha512.sha512 - elif bs == '384': - return _sha512.sha384 + try: + if name in ('SHA1', 'sha1'): + import _sha + return _sha.new + elif name in ('MD5', 'md5'): + import _md5 + return _md5.new + elif name in ('SHA256', 'sha256', 'SHA224', 'sha224'): + import _sha256 + bs = name[3:] + if bs == '256': + return _sha256.sha256 + elif bs == '224': + return _sha256.sha224 + elif name in ('SHA512', 'sha512', 'SHA384', 'sha384'): + import _sha512 + bs = name[3:] + if bs == '512': + return _sha512.sha512 + elif bs == '384': + return _sha512.sha384 + except ImportError: + pass # no extension module, this hash is unsupported. raise ValueError('unsupported hash type %s' % name) diff --git a/lib-python/2.7/heapq.py b/lib-python/2.7/heapq.py --- a/lib-python/2.7/heapq.py +++ b/lib-python/2.7/heapq.py @@ -133,6 +133,11 @@ from operator import itemgetter import bisect +def cmp_lt(x, y): + # Use __lt__ if available; otherwise, try __le__. + # In Py3.x, only __lt__ will be called. + return (x < y) if hasattr(x, '__lt__') else (not y <= x) + def heappush(heap, item): """Push item onto heap, maintaining the heap invariant.""" heap.append(item) @@ -167,13 +172,13 @@ def heappushpop(heap, item): """Fast version of a heappush followed by a heappop.""" - if heap and heap[0] < item: + if heap and cmp_lt(heap[0], item): item, heap[0] = heap[0], item _siftup(heap, 0) return item def heapify(x): - """Transform list into a heap, in-place, in O(len(heap)) time.""" + """Transform list into a heap, in-place, in O(len(x)) time.""" n = len(x) # Transform bottom-up. The largest index there's any point to looking at # is the largest with a child index in-range, so must have 2*i + 1 < n, @@ -215,11 +220,10 @@ pop = result.pop los = result[-1] # los --> Largest of the nsmallest for elem in it: - if los <= elem: - continue - insort(result, elem) - pop() - los = result[-1] + if cmp_lt(elem, los): + insort(result, elem) + pop() + los = result[-1] return result # An alternative approach manifests the whole iterable in memory but # saves comparisons by heapifying all at once. Also, saves time @@ -240,7 +244,7 @@ while pos > startpos: parentpos = (pos - 1) >> 1 parent = heap[parentpos] - if newitem < parent: + if cmp_lt(newitem, parent): heap[pos] = parent pos = parentpos continue @@ -295,7 +299,7 @@ while childpos < endpos: # Set childpos to index of smaller child. rightpos = childpos + 1 - if rightpos < endpos and not heap[childpos] < heap[rightpos]: + if rightpos < endpos and not cmp_lt(heap[childpos], heap[rightpos]): childpos = rightpos # Move the smaller child up. 
heap[pos] = heap[childpos] @@ -364,7 +368,7 @@ return [min(chain(head, it))] return [min(chain(head, it), key=key)] - # When n>=size, it's faster to use sort() + # When n>=size, it's faster to use sorted() try: size = len(iterable) except (TypeError, AttributeError): @@ -402,7 +406,7 @@ return [max(chain(head, it))] return [max(chain(head, it), key=key)] - # When n>=size, it's faster to use sort() + # When n>=size, it's faster to use sorted() try: size = len(iterable) except (TypeError, AttributeError): diff --git a/lib-python/2.7/httplib.py b/lib-python/2.7/httplib.py --- a/lib-python/2.7/httplib.py +++ b/lib-python/2.7/httplib.py @@ -212,6 +212,9 @@ # maximal amount of data to read at one time in _safe_read MAXAMOUNT = 1048576 +# maximal line length when calling readline(). +_MAXLINE = 65536 + class HTTPMessage(mimetools.Message): def addheader(self, key, value): @@ -274,7 +277,9 @@ except IOError: startofline = tell = None self.seekable = 0 - line = self.fp.readline() + line = self.fp.readline(_MAXLINE + 1) + if len(line) > _MAXLINE: + raise LineTooLong("header line") if not line: self.status = 'EOF in headers' break @@ -404,7 +409,10 @@ break # skip the header from the 100 response while True: - skip = self.fp.readline().strip() + skip = self.fp.readline(_MAXLINE + 1) + if len(skip) > _MAXLINE: + raise LineTooLong("header line") + skip = skip.strip() if not skip: break if self.debuglevel > 0: @@ -563,7 +571,9 @@ value = [] while True: if chunk_left is None: - line = self.fp.readline() + line = self.fp.readline(_MAXLINE + 1) + if len(line) > _MAXLINE: + raise LineTooLong("chunk size") i = line.find(';') if i >= 0: line = line[:i] # strip chunk-extensions @@ -598,7 +608,9 @@ # read and discard trailer up to the CRLF terminator ### note: we shouldn't have any trailers! while True: - line = self.fp.readline() + line = self.fp.readline(_MAXLINE + 1) + if len(line) > _MAXLINE: + raise LineTooLong("trailer line") if not line: # a vanishingly small number of sites EOF without # sending the trailer @@ -730,7 +742,9 @@ raise socket.error("Tunnel connection failed: %d %s" % (code, message.strip())) while True: - line = response.fp.readline() + line = response.fp.readline(_MAXLINE + 1) + if len(line) > _MAXLINE: + raise LineTooLong("header line") if line == '\r\n': break @@ -790,7 +804,7 @@ del self._buffer[:] # If msg and message_body are sent in a single send() call, # it will avoid performance problems caused by the interaction - # between delayed ack and the Nagle algorithim. + # between delayed ack and the Nagle algorithm. 
if isinstance(message_body, str): msg += message_body message_body = None @@ -1233,6 +1247,11 @@ self.args = line, self.line = line +class LineTooLong(HTTPException): + def __init__(self, line_type): + HTTPException.__init__(self, "got more than %d bytes when reading %s" + % (_MAXLINE, line_type)) + # for backwards compatibility error = HTTPException diff --git a/lib-python/2.7/idlelib/Bindings.py b/lib-python/2.7/idlelib/Bindings.py --- a/lib-python/2.7/idlelib/Bindings.py +++ b/lib-python/2.7/idlelib/Bindings.py @@ -98,14 +98,6 @@ # menu del menudefs[-1][1][0:2] - menudefs.insert(0, - ('application', [ - ('About IDLE', '<>'), - None, - ('_Preferences....', '<>'), - ])) - - default_keydefs = idleConf.GetCurrentKeySet() del sys diff --git a/lib-python/2.7/idlelib/EditorWindow.py b/lib-python/2.7/idlelib/EditorWindow.py --- a/lib-python/2.7/idlelib/EditorWindow.py +++ b/lib-python/2.7/idlelib/EditorWindow.py @@ -48,6 +48,21 @@ path = module.__path__ except AttributeError: raise ImportError, 'No source for module ' + module.__name__ + if descr[2] != imp.PY_SOURCE: + # If all of the above fails and didn't raise an exception,fallback + # to a straight import which can find __init__.py in a package. + m = __import__(fullname) + try: + filename = m.__file__ + except AttributeError: + pass + else: + file = None + base, ext = os.path.splitext(filename) + if ext == '.pyc': + ext = '.py' + filename = base + ext + descr = filename, None, imp.PY_SOURCE return file, filename, descr class EditorWindow(object): @@ -102,8 +117,8 @@ self.top = top = WindowList.ListedToplevel(root, menu=self.menubar) if flist: self.tkinter_vars = flist.vars - #self.top.instance_dict makes flist.inversedict avalable to - #configDialog.py so it can access all EditorWindow instaces + #self.top.instance_dict makes flist.inversedict available to + #configDialog.py so it can access all EditorWindow instances self.top.instance_dict = flist.inversedict else: self.tkinter_vars = {} # keys: Tkinter event names @@ -136,6 +151,14 @@ if macosxSupport.runningAsOSXApp(): # Command-W on editorwindows doesn't work without this. text.bind('<>', self.close_event) + # Some OS X systems have only one mouse button, + # so use control-click for pulldown menus there. + # (Note, AquaTk defines <2> as the right button if + # present and the Tk Text widget already binds <2>.) + text.bind("",self.right_menu_event) + else: + # Elsewhere, use right-click for pulldown menus. + text.bind("<3>",self.right_menu_event) text.bind("<>", self.cut) text.bind("<>", self.copy) text.bind("<>", self.paste) @@ -154,7 +177,6 @@ text.bind("<>", self.find_selection_event) text.bind("<>", self.replace_event) text.bind("<>", self.goto_line_event) - text.bind("<3>", self.right_menu_event) text.bind("<>",self.smart_backspace_event) text.bind("<>",self.newline_and_indent_event) text.bind("<>",self.smart_indent_event) @@ -300,13 +322,13 @@ return "break" def home_callback(self, event): - if (event.state & 12) != 0 and event.keysym == "Home": - # state&1==shift, state&4==control, state&8==alt - return # ; fall back to class binding - + if (event.state & 4) != 0 and event.keysym == "Home": + # state&4==Control. If , use the Tk binding. 
+ return if self.text.index("iomark") and \ self.text.compare("iomark", "<=", "insert lineend") and \ self.text.compare("insert linestart", "<=", "iomark"): + # In Shell on input line, go to just after prompt insertpt = int(self.text.index("iomark").split(".")[1]) else: line = self.text.get("insert linestart", "insert lineend") @@ -315,30 +337,27 @@ break else: insertpt=len(line) - lineat = int(self.text.index("insert").split('.')[1]) - if insertpt == lineat: insertpt = 0 - dest = "insert linestart+"+str(insertpt)+"c" - if (event.state&1) == 0: - # shift not pressed + # shift was not pressed self.text.tag_remove("sel", "1.0", "end") else: if not self.text.index("sel.first"): - self.text.mark_set("anchor","insert") - + self.text.mark_set("my_anchor", "insert") # there was no previous selection + else: + if self.text.compare(self.text.index("sel.first"), "<", self.text.index("insert")): + self.text.mark_set("my_anchor", "sel.first") # extend back + else: + self.text.mark_set("my_anchor", "sel.last") # extend forward first = self.text.index(dest) - last = self.text.index("anchor") - + last = self.text.index("my_anchor") if self.text.compare(first,">",last): first,last = last,first - self.text.tag_remove("sel", "1.0", "end") self.text.tag_add("sel", first, last) - self.text.mark_set("insert", dest) self.text.see("insert") return "break" @@ -385,7 +404,7 @@ menudict[name] = menu = Menu(mbar, name=name) mbar.add_cascade(label=label, menu=menu, underline=underline) - if macosxSupport.runningAsOSXApp(): + if macosxSupport.isCarbonAquaTk(self.root): # Insert the application menu menudict['application'] = menu = Menu(mbar, name='apple') mbar.add_cascade(label='IDLE', menu=menu) @@ -445,7 +464,11 @@ def python_docs(self, event=None): if sys.platform[:3] == 'win': - os.startfile(self.help_url) + try: + os.startfile(self.help_url) + except WindowsError as why: + tkMessageBox.showerror(title='Document Start Failure', + message=str(why), parent=self.text) else: webbrowser.open(self.help_url) return "break" @@ -740,9 +763,13 @@ "Create a callback with the helpfile value frozen at definition time" def display_extra_help(helpfile=helpfile): if not helpfile.startswith(('www', 'http')): - url = os.path.normpath(helpfile) + helpfile = os.path.normpath(helpfile) if sys.platform[:3] == 'win': - os.startfile(helpfile) + try: + os.startfile(helpfile) + except WindowsError as why: + tkMessageBox.showerror(title='Document Start Failure', + message=str(why), parent=self.text) else: webbrowser.open(helpfile) return display_extra_help @@ -1526,7 +1553,12 @@ def get_accelerator(keydefs, eventname): keylist = keydefs.get(eventname) - if not keylist: + # issue10940: temporary workaround to prevent hang with OS X Cocoa Tk 8.5 + # if not keylist: + if (not keylist) or (macosxSupport.runningAsOSXApp() and eventname in { + "<>", + "<>", + "<>"}): return "" s = keylist[0] s = re.sub(r"-[a-z]\b", lambda m: m.group().upper(), s) diff --git a/lib-python/2.7/idlelib/FileList.py b/lib-python/2.7/idlelib/FileList.py --- a/lib-python/2.7/idlelib/FileList.py +++ b/lib-python/2.7/idlelib/FileList.py @@ -43,7 +43,7 @@ def new(self, filename=None): return self.EditorWindow(self, filename) - def close_all_callback(self, event): + def close_all_callback(self, *args, **kwds): for edit in self.inversedict.keys(): reply = edit.close() if reply == "cancel": diff --git a/lib-python/2.7/idlelib/FormatParagraph.py b/lib-python/2.7/idlelib/FormatParagraph.py --- a/lib-python/2.7/idlelib/FormatParagraph.py +++ 
b/lib-python/2.7/idlelib/FormatParagraph.py @@ -54,7 +54,7 @@ # If the block ends in a \n, we dont want the comment # prefix inserted after it. (Im not sure it makes sense to # reformat a comment block that isnt made of complete - # lines, but whatever!) Can't think of a clean soltution, + # lines, but whatever!) Can't think of a clean solution, # so we hack away block_suffix = "" if not newdata[-1]: diff --git a/lib-python/2.7/idlelib/HISTORY.txt b/lib-python/2.7/idlelib/HISTORY.txt --- a/lib-python/2.7/idlelib/HISTORY.txt +++ b/lib-python/2.7/idlelib/HISTORY.txt @@ -13,7 +13,7 @@ - New tarball released as a result of the 'revitalisation' of the IDLEfork project. -- This release requires python 2.1 or better. Compatability with earlier +- This release requires python 2.1 or better. Compatibility with earlier versions of python (especially ancient ones like 1.5x) is no longer a priority in IDLEfork development. diff --git a/lib-python/2.7/idlelib/IOBinding.py b/lib-python/2.7/idlelib/IOBinding.py --- a/lib-python/2.7/idlelib/IOBinding.py +++ b/lib-python/2.7/idlelib/IOBinding.py @@ -320,17 +320,20 @@ return "yes" message = "Do you want to save %s before closing?" % ( self.filename or "this untitled document") - m = tkMessageBox.Message( - title="Save On Close", - message=message, - icon=tkMessageBox.QUESTION, - type=tkMessageBox.YESNOCANCEL, - master=self.text) - reply = m.show() - if reply == "yes": + confirm = tkMessageBox.askyesnocancel( + title="Save On Close", + message=message, + default=tkMessageBox.YES, + master=self.text) + if confirm: + reply = "yes" self.save(None) if not self.get_saved(): reply = "cancel" + elif confirm is None: + reply = "cancel" + else: + reply = "no" self.text.focus_set() return reply @@ -339,7 +342,7 @@ self.save_as(event) else: if self.writefile(self.filename): - self.set_saved(1) + self.set_saved(True) try: self.editwin.store_file_breaks() except AttributeError: # may be a PyShell @@ -465,15 +468,12 @@ self.text.insert("end-1c", "\n") def print_window(self, event): - m = tkMessageBox.Message( - title="Print", - message="Print to Default Printer", - icon=tkMessageBox.QUESTION, - type=tkMessageBox.OKCANCEL, - default=tkMessageBox.OK, - master=self.text) - reply = m.show() - if reply != tkMessageBox.OK: + confirm = tkMessageBox.askokcancel( + title="Print", + message="Print to Default Printer", + default=tkMessageBox.OK, + master=self.text) + if not confirm: self.text.focus_set() return "break" tempfilename = None @@ -488,8 +488,8 @@ if not self.writefile(tempfilename): os.unlink(tempfilename) return "break" - platform=os.name - printPlatform=1 + platform = os.name + printPlatform = True if platform == 'posix': #posix platform command = idleConf.GetOption('main','General', 'print-command-posix') @@ -497,7 +497,7 @@ elif platform == 'nt': #win32 platform command = idleConf.GetOption('main','General','print-command-win') else: #no printing for this platform - printPlatform=0 + printPlatform = False if printPlatform: #we can try to print for this platform command = command % filename pipe = os.popen(command, "r") @@ -511,7 +511,7 @@ output = "Printing command: %s\n" % repr(command) + output tkMessageBox.showerror("Print status", output, master=self.text) else: #no printing for this platform - message="Printing is not enabled for this platform: %s" % platform + message = "Printing is not enabled for this platform: %s" % platform tkMessageBox.showinfo("Print status", message, master=self.text) if tempfilename: os.unlink(tempfilename) diff --git 
a/lib-python/2.7/idlelib/NEWS.txt b/lib-python/2.7/idlelib/NEWS.txt --- a/lib-python/2.7/idlelib/NEWS.txt +++ b/lib-python/2.7/idlelib/NEWS.txt @@ -1,3 +1,18 @@ +What's New in IDLE 2.7.2? +======================= + +*Release date: 29-May-2011* + +- Issue #6378: Further adjust idle.bat to start associated Python + +- Issue #11896: Save on Close failed despite selecting "Yes" in dialog. + +- toggle failing on Tk 8.5, causing IDLE exits and strange selection + behavior. Issue 4676. Improve selection extension behaviour. + +- toggle non-functional when NumLock set on Windows. Issue 3851. + + What's New in IDLE 2.7? ======================= @@ -21,7 +36,7 @@ - Tk 8.5 Text widget requires 'wordprocessor' tabstyle attr to handle mixed space/tab properly. Issue 5129, patch by Guilherme Polo. - + - Issue #3549: On MacOS the preferences menu was not present diff --git a/lib-python/2.7/idlelib/PyShell.py b/lib-python/2.7/idlelib/PyShell.py --- a/lib-python/2.7/idlelib/PyShell.py +++ b/lib-python/2.7/idlelib/PyShell.py @@ -1432,6 +1432,13 @@ shell.interp.prepend_syspath(script) shell.interp.execfile(script) + # Check for problematic OS X Tk versions and print a warning message + # in the IDLE shell window; this is less intrusive than always opening + # a separate window. + tkversionwarning = macosxSupport.tkVersionWarning(root) + if tkversionwarning: + shell.interp.runcommand(''.join(("print('", tkversionwarning, "')"))) + root.mainloop() root.destroy() diff --git a/lib-python/2.7/idlelib/ScriptBinding.py b/lib-python/2.7/idlelib/ScriptBinding.py --- a/lib-python/2.7/idlelib/ScriptBinding.py +++ b/lib-python/2.7/idlelib/ScriptBinding.py @@ -26,6 +26,7 @@ from idlelib import PyShell from idlelib.configHandler import idleConf +from idlelib import macosxSupport IDENTCHARS = string.ascii_letters + string.digits + "_" @@ -53,6 +54,9 @@ self.flist = self.editwin.flist self.root = self.editwin.root + if macosxSupport.runningAsOSXApp(): + self.editwin.text_frame.bind('<>', self._run_module_event) + def check_module_event(self, event): filename = self.getfilename() if not filename: @@ -166,6 +170,19 @@ interp.runcode(code) return 'break' + if macosxSupport.runningAsOSXApp(): + # Tk-Cocoa in MacOSX is broken until at least + # Tk 8.5.9, and without this rather + # crude workaround IDLE would hang when a user + # tries to run a module using the keyboard shortcut + # (the menu item works fine). + _run_module_event = run_module_event + + def run_module_event(self, event): + self.editwin.text_frame.after(200, + lambda: self.editwin.text_frame.event_generate('<>')) + return 'break' + def getfilename(self): """Get source filename. If not saved, offer to save (or create) file @@ -184,9 +201,9 @@ if autosave and filename: self.editwin.io.save(None) else: - reply = self.ask_save_dialog() + confirm = self.ask_save_dialog() self.editwin.text.focus_set() - if reply == "ok": + if confirm: self.editwin.io.save(None) filename = self.editwin.io.filename else: @@ -195,13 +212,11 @@ def ask_save_dialog(self): msg = "Source Must Be Saved\n" + 5*' ' + "OK to Save?" - mb = tkMessageBox.Message(title="Save Before Run or Check", - message=msg, - icon=tkMessageBox.QUESTION, - type=tkMessageBox.OKCANCEL, - default=tkMessageBox.OK, - master=self.editwin.text) - return mb.show() + confirm = tkMessageBox.askokcancel(title="Save Before Run or Check", + message=msg, + default=tkMessageBox.OK, + master=self.editwin.text) + return confirm def errorbox(self, title, message): # XXX This should really be a function of EditorWindow... 
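
[Editor's aside, not part of the quoted patch.] The IOBinding.py and ScriptBinding.py hunks above replace hand-built tkMessageBox.Message(...).show() dialogs, which return the pressed button's label as a string, with the askokcancel/askyesnocancel convenience functions, which return True, False, or None. A minimal sketch of that three-way mapping, assuming a running Tk display; the prompt_save name is illustrative and does not appear in the patch:

# Illustrative sketch only -- not code from the changeset above.
# tkMessageBox is the Python 2.7 module (tkinter.messagebox in Python 3).
import Tkinter
import tkMessageBox

def prompt_save(parent, filename):
    # askyesnocancel returns True (Yes), False (No) or None (Cancel),
    # which the patch maps back onto the old "yes"/"no"/"cancel" strings.
    confirm = tkMessageBox.askyesnocancel(
        title="Save On Close",
        message="Do you want to save %s before closing?" % filename,
        default=tkMessageBox.YES,
        master=parent)
    if confirm:              # True -> save
        return "yes"
    elif confirm is None:    # Cancel pressed or dialog dismissed
        return "cancel"
    else:                    # False -> discard changes
        return "no"

if __name__ == '__main__':
    root = Tkinter.Tk()
    root.withdraw()
    print prompt_save(root, "example.py")

The boolean/None return value is why the patched ask_save_dialog() can simply return confirm and let the caller test it, instead of comparing against the "ok" string as before.
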
diff --git a/lib-python/2.7/idlelib/config-keys.def b/lib-python/2.7/idlelib/config-keys.def --- a/lib-python/2.7/idlelib/config-keys.def +++ b/lib-python/2.7/idlelib/config-keys.def @@ -176,7 +176,7 @@ redo = close-window = restart-shell = -save-window-as-file = +save-window-as-file = close-all-windows = view-restart = tabify-region = @@ -208,7 +208,7 @@ open-module = find-selection = python-context-help = -save-copy-of-window-as-file = +save-copy-of-window-as-file = open-window-from-file = python-docs = diff --git a/lib-python/2.7/idlelib/extend.txt b/lib-python/2.7/idlelib/extend.txt --- a/lib-python/2.7/idlelib/extend.txt +++ b/lib-python/2.7/idlelib/extend.txt @@ -18,7 +18,7 @@ An IDLE extension class is instantiated with a single argument, `editwin', an EditorWindow instance. The extension cannot assume much -about this argument, but it is guarateed to have the following instance +about this argument, but it is guaranteed to have the following instance variables: text a Text instance (a widget) diff --git a/lib-python/2.7/idlelib/idle.bat b/lib-python/2.7/idlelib/idle.bat --- a/lib-python/2.7/idlelib/idle.bat +++ b/lib-python/2.7/idlelib/idle.bat @@ -1,4 +1,4 @@ @echo off rem Start IDLE using the appropriate Python interpreter set CURRDIR=%~dp0 -start "%CURRDIR%..\..\pythonw.exe" "%CURRDIR%idle.pyw" %1 %2 %3 %4 %5 %6 %7 %8 %9 +start "IDLE" "%CURRDIR%..\..\pythonw.exe" "%CURRDIR%idle.pyw" %1 %2 %3 %4 %5 %6 %7 %8 %9 diff --git a/lib-python/2.7/idlelib/idlever.py b/lib-python/2.7/idlelib/idlever.py --- a/lib-python/2.7/idlelib/idlever.py +++ b/lib-python/2.7/idlelib/idlever.py @@ -1,1 +1,1 @@ -IDLE_VERSION = "2.7.1" +IDLE_VERSION = "2.7.2" diff --git a/lib-python/2.7/idlelib/macosxSupport.py b/lib-python/2.7/idlelib/macosxSupport.py --- a/lib-python/2.7/idlelib/macosxSupport.py +++ b/lib-python/2.7/idlelib/macosxSupport.py @@ -4,6 +4,7 @@ """ import sys import Tkinter +from os import path _appbundle = None @@ -19,10 +20,41 @@ _appbundle = (sys.platform == 'darwin' and '.app' in sys.executable) return _appbundle +_carbonaquatk = None + +def isCarbonAquaTk(root): + """ + Returns True if IDLE is using a Carbon Aqua Tk (instead of the + newer Cocoa Aqua Tk). + """ + global _carbonaquatk + if _carbonaquatk is None: + _carbonaquatk = (runningAsOSXApp() and + 'aqua' in root.tk.call('tk', 'windowingsystem') and + 'AppKit' not in root.tk.call('winfo', 'server', '.')) + return _carbonaquatk + +def tkVersionWarning(root): + """ + Returns a string warning message if the Tk version in use appears to + be one known to cause problems with IDLE. The Apple Cocoa-based Tk 8.5 + that was shipped with Mac OS X 10.6. + """ + + if (runningAsOSXApp() and + ('AppKit' in root.tk.call('winfo', 'server', '.')) and + (root.tk.call('info', 'patchlevel') == '8.5.7') ): + return (r"WARNING: The version of Tcl/Tk (8.5.7) in use may" + r" be unstable.\n" + r"Visit http://www.python.org/download/mac/tcltk/" + r" for current information.") + else: + return False + def addOpenEventSupport(root, flist): """ - This ensures that the application will respont to open AppleEvents, which - makes is feaseable to use IDLE as the default application for python files. + This ensures that the application will respond to open AppleEvents, which + makes is feasible to use IDLE as the default application for python files. 
""" def doOpenFile(*args): for fn in args: @@ -79,9 +111,6 @@ WindowList.add_windows_to_menu(menu) WindowList.register_callback(postwindowsmenu) - menudict['application'] = menu = Menu(menubar, name='apple') - menubar.add_cascade(label='IDLE', menu=menu) - def about_dialog(event=None): from idlelib import aboutDialog aboutDialog.AboutDialog(root, 'About IDLE') @@ -91,41 +120,45 @@ root.instance_dict = flist.inversedict configDialog.ConfigDialog(root, 'Settings') + def help_dialog(event=None): + from idlelib import textView + fn = path.join(path.abspath(path.dirname(__file__)), 'help.txt') + textView.view_file(root, 'Help', fn) root.bind('<>', about_dialog) root.bind('<>', config_dialog) + root.createcommand('::tk::mac::ShowPreferences', config_dialog) if flist: root.bind('<>', flist.close_all_callback) + # The binding above doesn't reliably work on all versions of Tk + # on MacOSX. Adding command definition below does seem to do the + # right thing for now. + root.createcommand('exit', flist.close_all_callback) - ###check if Tk version >= 8.4.14; if so, use hard-coded showprefs binding - tkversion = root.tk.eval('info patchlevel') - # Note: we cannot check if the string tkversion >= '8.4.14', because - # the string '8.4.7' is greater than the string '8.4.14'. - if tuple(map(int, tkversion.split('.'))) >= (8, 4, 14): - Bindings.menudefs[0] = ('application', [ + if isCarbonAquaTk(root): + # for Carbon AquaTk, replace the default Tk apple menu + menudict['application'] = menu = Menu(menubar, name='apple') + menubar.add_cascade(label='IDLE', menu=menu) + Bindings.menudefs.insert(0, + ('application', [ ('About IDLE', '<>'), - None, - ]) - root.createcommand('::tk::mac::ShowPreferences', config_dialog) + None, + ])) + tkversion = root.tk.eval('info patchlevel') + if tuple(map(int, tkversion.split('.'))) < (8, 4, 14): + # for earlier AquaTk versions, supply a Preferences menu item + Bindings.menudefs[0][1].append( + ('_Preferences....', '<>'), + ) else: - for mname, entrylist in Bindings.menudefs: - menu = menudict.get(mname) - if not menu: - continue - else: - for entry in entrylist: - if not entry: - menu.add_separator() - else: - label, eventname = entry - underline, label = prepstr(label) - accelerator = get_accelerator(Bindings.default_keydefs, - eventname) - def command(text=root, eventname=eventname): - text.event_generate(eventname) - menu.add_command(label=label, underline=underline, - command=command, accelerator=accelerator) + # assume Cocoa AquaTk + # replace default About dialog with About IDLE one + root.createcommand('tkAboutDialog', about_dialog) + # replace default "Help" item in Help menu + root.createcommand('::tk::mac::ShowHelp', help_dialog) + # remove redundant "IDLE Help" from menu + del Bindings.menudefs[-1][1][0] def setupApp(root, flist): """ diff --git a/lib-python/2.7/imaplib.py b/lib-python/2.7/imaplib.py --- a/lib-python/2.7/imaplib.py +++ b/lib-python/2.7/imaplib.py @@ -1158,28 +1158,17 @@ self.port = port self.sock = socket.create_connection((host, port)) self.sslobj = ssl.wrap_socket(self.sock, self.keyfile, self.certfile) + self.file = self.sslobj.makefile('rb') def read(self, size): """Read 'size' bytes from remote.""" - # sslobj.read() sometimes returns < size bytes - chunks = [] - read = 0 - while read < size: - data = self.sslobj.read(min(size-read, 16384)) - read += len(data) - chunks.append(data) - - return ''.join(chunks) + return self.file.read(size) def readline(self): """Read line from remote.""" - line = [] - while 1: - char = self.sslobj.read(1) - 
line.append(char) - if char in ("\n", ""): return ''.join(line) + return self.file.readline() def send(self, data): @@ -1195,6 +1184,7 @@ def shutdown(self): """Close I/O established in "open".""" + self.file.close() self.sock.close() @@ -1321,9 +1311,10 @@ 'Jul': 7, 'Aug': 8, 'Sep': 9, 'Oct': 10, 'Nov': 11, 'Dec': 12} def Internaldate2tuple(resp): - """Convert IMAP4 INTERNALDATE to UT. + """Parse an IMAP4 INTERNALDATE string. - Returns Python time module tuple. + Return corresponding local time. The return value is a + time.struct_time instance or None if the string has wrong format. """ mo = InternalDate.match(resp) @@ -1390,9 +1381,14 @@ def Time2Internaldate(date_time): - """Convert 'date_time' to IMAP4 INTERNALDATE representation. + """Convert date_time to IMAP4 INTERNALDATE representation. - Return string in form: '"DD-Mmm-YYYY HH:MM:SS +HHMM"' + Return string in form: '"DD-Mmm-YYYY HH:MM:SS +HHMM"'. The + date_time argument can be a number (int or float) representing + seconds since epoch (as returned by time.time()), a 9-tuple + representing local time (as returned by time.localtime()), or a + double-quoted string. In the last case, it is assumed to already + be in the correct format. """ if isinstance(date_time, (int, float)): diff --git a/lib-python/2.7/inspect.py b/lib-python/2.7/inspect.py --- a/lib-python/2.7/inspect.py +++ b/lib-python/2.7/inspect.py @@ -943,8 +943,14 @@ f_name, 'at most' if defaults else 'exactly', num_args, 'arguments' if num_args > 1 else 'argument', num_total)) elif num_args == 0 and num_total: - raise TypeError('%s() takes no arguments (%d given)' % - (f_name, num_total)) + if varkw: + if num_pos: + # XXX: We should use num_pos, but Python also uses num_total: + raise TypeError('%s() takes exactly 0 arguments ' + '(%d given)' % (f_name, num_total)) + else: + raise TypeError('%s() takes no arguments (%d given)' % + (f_name, num_total)) for arg in args: if isinstance(arg, str) and arg in named: if is_assigned(arg): diff --git a/lib-python/2.7/json/decoder.py b/lib-python/2.7/json/decoder.py --- a/lib-python/2.7/json/decoder.py +++ b/lib-python/2.7/json/decoder.py @@ -4,7 +4,7 @@ import sys import struct -from json.scanner import make_scanner +from json import scanner try: from _json import scanstring as c_scanstring except ImportError: @@ -161,6 +161,12 @@ nextchar = s[end:end + 1] # Trivial empty object if nextchar == '}': + if object_pairs_hook is not None: + result = object_pairs_hook(pairs) + return result, end + pairs = {} + if object_hook is not None: + pairs = object_hook(pairs) return pairs, end + 1 elif nextchar != '"': raise ValueError(errmsg("Expecting property name", s, end)) @@ -350,7 +356,7 @@ self.parse_object = JSONObject self.parse_array = JSONArray self.parse_string = scanstring - self.scan_once = make_scanner(self) + self.scan_once = scanner.make_scanner(self) def decode(self, s, _w=WHITESPACE.match): """Return the Python representation of ``s`` (a ``str`` or ``unicode`` diff --git a/lib-python/2.7/json/encoder.py b/lib-python/2.7/json/encoder.py --- a/lib-python/2.7/json/encoder.py +++ b/lib-python/2.7/json/encoder.py @@ -251,7 +251,7 @@ if (_one_shot and c_make_encoder is not None - and not self.indent and not self.sort_keys): + and self.indent is None and not self.sort_keys): _iterencode = c_make_encoder( markers, self.default, _encoder, self.indent, self.key_separator, self.item_separator, self.sort_keys, diff --git a/lib-python/2.7/json/tests/__init__.py b/lib-python/2.7/json/tests/__init__.py --- 
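The imaplib hunk above drops the hand-rolled accumulation loop, which was only there because sslobj.read() may return fewer bytes than requested, and instead wraps the SSL socket in a buffered file object whose read() and readline() already deal with short reads. A rough sketch of the same idea against a generic IMAPS connection; the host name is a placeholder, so this only runs where such a server is reachable.

    import socket
    import ssl

    HOST = 'imap.example.org'        # placeholder, not a real server
    sock = socket.create_connection((HOST, 993))
    sslsock = ssl.wrap_socket(sock)

    # Raw sslsock.read(n) may return fewer than n bytes; the buffered file
    # object keeps reading until n bytes (or EOF) have been collected.
    f = sslsock.makefile('rb')
    greeting = f.readline()          # e.g. '* OK ... service ready'
    print(greeting.rstrip())
    f.close()
    sslsock.close()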
a/lib-python/2.7/json/tests/__init__.py +++ b/lib-python/2.7/json/tests/__init__.py @@ -1,7 +1,46 @@ import os import sys +import json +import doctest import unittest -import doctest + +from test import test_support + +# import json with and without accelerations +cjson = test_support.import_fresh_module('json', fresh=['_json']) +pyjson = test_support.import_fresh_module('json', blocked=['_json']) + +# create two base classes that will be used by the other tests +class PyTest(unittest.TestCase): + json = pyjson + loads = staticmethod(pyjson.loads) + dumps = staticmethod(pyjson.dumps) + + at unittest.skipUnless(cjson, 'requires _json') +class CTest(unittest.TestCase): + if cjson is not None: + json = cjson + loads = staticmethod(cjson.loads) + dumps = staticmethod(cjson.dumps) + +# test PyTest and CTest checking if the functions come from the right module +class TestPyTest(PyTest): + def test_pyjson(self): + self.assertEqual(self.json.scanner.make_scanner.__module__, + 'json.scanner') + self.assertEqual(self.json.decoder.scanstring.__module__, + 'json.decoder') + self.assertEqual(self.json.encoder.encode_basestring_ascii.__module__, + 'json.encoder') + +class TestCTest(CTest): + def test_cjson(self): + self.assertEqual(self.json.scanner.make_scanner.__module__, '_json') + self.assertEqual(self.json.decoder.scanstring.__module__, '_json') + self.assertEqual(self.json.encoder.c_make_encoder.__module__, '_json') + self.assertEqual(self.json.encoder.encode_basestring_ascii.__module__, + '_json') + here = os.path.dirname(__file__) @@ -17,12 +56,11 @@ return suite def additional_tests(): - import json - import json.encoder - import json.decoder suite = unittest.TestSuite() for mod in (json, json.encoder, json.decoder): suite.addTest(doctest.DocTestSuite(mod)) + suite.addTest(TestPyTest('test_pyjson')) + suite.addTest(TestCTest('test_cjson')) return suite def main(): diff --git a/lib-python/2.7/json/tests/test_check_circular.py b/lib-python/2.7/json/tests/test_check_circular.py --- a/lib-python/2.7/json/tests/test_check_circular.py +++ b/lib-python/2.7/json/tests/test_check_circular.py @@ -1,30 +1,34 @@ -from unittest import TestCase -import json +from json.tests import PyTest, CTest + def default_iterable(obj): return list(obj) -class TestCheckCircular(TestCase): +class TestCheckCircular(object): def test_circular_dict(self): dct = {} dct['a'] = dct - self.assertRaises(ValueError, json.dumps, dct) + self.assertRaises(ValueError, self.dumps, dct) def test_circular_list(self): lst = [] lst.append(lst) - self.assertRaises(ValueError, json.dumps, lst) + self.assertRaises(ValueError, self.dumps, lst) def test_circular_composite(self): dct2 = {} dct2['a'] = [] dct2['a'].append(dct2) - self.assertRaises(ValueError, json.dumps, dct2) + self.assertRaises(ValueError, self.dumps, dct2) def test_circular_default(self): - json.dumps([set()], default=default_iterable) - self.assertRaises(TypeError, json.dumps, [set()]) + self.dumps([set()], default=default_iterable) + self.assertRaises(TypeError, self.dumps, [set()]) def test_circular_off_default(self): - json.dumps([set()], default=default_iterable, check_circular=False) - self.assertRaises(TypeError, json.dumps, [set()], check_circular=False) + self.dumps([set()], default=default_iterable, check_circular=False) + self.assertRaises(TypeError, self.dumps, [set()], check_circular=False) + + +class TestPyCheckCircular(TestCheckCircular, PyTest): pass +class TestCCheckCircular(TestCheckCircular, CTest): pass diff --git a/lib-python/2.7/json/tests/test_decode.py 
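The rewritten json.tests package above builds every test class twice, once against a pure-Python json (with _json blocked) and once against the accelerated one, via test_support.import_fresh_module. The same pattern in miniature, runnable on its own under Python 2.7; RoundTrip is an invented example class, not one of the stdlib test classes, and on an interpreter that does not provide _json the C variant is simply skipped.

    import unittest
    from test import test_support

    cjson = test_support.import_fresh_module('json', fresh=['_json'])
    pyjson = test_support.import_fresh_module('json', blocked=['_json'])

    class RoundTrip(object):
        def test_roundtrip(self):
            data = {'spam': [1, 2.5, None, u'eggs']}
            self.assertEqual(self.json.loads(self.json.dumps(data)), data)

    class PyRoundTrip(RoundTrip, unittest.TestCase):
        json = pyjson

    @unittest.skipUnless(cjson, 'requires _json')
    class CRoundTrip(RoundTrip, unittest.TestCase):
        json = cjson

    if __name__ == '__main__':
        unittest.main()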
b/lib-python/2.7/json/tests/test_decode.py --- a/lib-python/2.7/json/tests/test_decode.py +++ b/lib-python/2.7/json/tests/test_decode.py @@ -1,18 +1,17 @@ import decimal -from unittest import TestCase from StringIO import StringIO +from collections import OrderedDict +from json.tests import PyTest, CTest -import json -from collections import OrderedDict -class TestDecode(TestCase): +class TestDecode(object): def test_decimal(self): - rval = json.loads('1.1', parse_float=decimal.Decimal) + rval = self.loads('1.1', parse_float=decimal.Decimal) self.assertTrue(isinstance(rval, decimal.Decimal)) self.assertEqual(rval, decimal.Decimal('1.1')) def test_float(self): - rval = json.loads('1', parse_int=float) + rval = self.loads('1', parse_int=float) self.assertTrue(isinstance(rval, float)) self.assertEqual(rval, 1.0) @@ -20,22 +19,32 @@ # Several optimizations were made that skip over calls to # the whitespace regex, so this test is designed to try and # exercise the uncommon cases. The array cases are already covered. - rval = json.loads('{ "key" : "value" , "k":"v" }') + rval = self.loads('{ "key" : "value" , "k":"v" }') self.assertEqual(rval, {"key":"value", "k":"v"}) + def test_empty_objects(self): + self.assertEqual(self.loads('{}'), {}) + self.assertEqual(self.loads('[]'), []) + self.assertEqual(self.loads('""'), u"") + self.assertIsInstance(self.loads('""'), unicode) + def test_object_pairs_hook(self): s = '{"xkd":1, "kcw":2, "art":3, "hxm":4, "qrt":5, "pad":6, "hoy":7}' p = [("xkd", 1), ("kcw", 2), ("art", 3), ("hxm", 4), ("qrt", 5), ("pad", 6), ("hoy", 7)] - self.assertEqual(json.loads(s), eval(s)) - self.assertEqual(json.loads(s, object_pairs_hook=lambda x: x), p) - self.assertEqual(json.load(StringIO(s), - object_pairs_hook=lambda x: x), p) - od = json.loads(s, object_pairs_hook=OrderedDict) + self.assertEqual(self.loads(s), eval(s)) + self.assertEqual(self.loads(s, object_pairs_hook=lambda x: x), p) + self.assertEqual(self.json.load(StringIO(s), + object_pairs_hook=lambda x: x), p) + od = self.loads(s, object_pairs_hook=OrderedDict) self.assertEqual(od, OrderedDict(p)) self.assertEqual(type(od), OrderedDict) # the object_pairs_hook takes priority over the object_hook - self.assertEqual(json.loads(s, + self.assertEqual(self.loads(s, object_pairs_hook=OrderedDict, object_hook=lambda x: None), OrderedDict(p)) + + +class TestPyDecode(TestDecode, PyTest): pass +class TestCDecode(TestDecode, CTest): pass diff --git a/lib-python/2.7/json/tests/test_default.py b/lib-python/2.7/json/tests/test_default.py --- a/lib-python/2.7/json/tests/test_default.py +++ b/lib-python/2.7/json/tests/test_default.py @@ -1,9 +1,12 @@ -from unittest import TestCase +from json.tests import PyTest, CTest -import json -class TestDefault(TestCase): +class TestDefault(object): def test_default(self): self.assertEqual( - json.dumps(type, default=repr), - json.dumps(repr(type))) + self.dumps(type, default=repr), + self.dumps(repr(type))) + + +class TestPyDefault(TestDefault, PyTest): pass +class TestCDefault(TestDefault, CTest): pass diff --git a/lib-python/2.7/json/tests/test_dump.py b/lib-python/2.7/json/tests/test_dump.py --- a/lib-python/2.7/json/tests/test_dump.py +++ b/lib-python/2.7/json/tests/test_dump.py @@ -1,21 +1,23 @@ -from unittest import TestCase from cStringIO import StringIO +from json.tests import PyTest, CTest -import json -class TestDump(TestCase): +class TestDump(object): def test_dump(self): sio = StringIO() - json.dump({}, sio) + self.json.dump({}, sio) self.assertEqual(sio.getvalue(), '{}') def 
test_dumps(self): - self.assertEqual(json.dumps({}), '{}') + self.assertEqual(self.dumps({}), '{}') def test_encode_truefalse(self): - self.assertEqual(json.dumps( + self.assertEqual(self.dumps( {True: False, False: True}, sort_keys=True), '{"false": true, "true": false}') - self.assertEqual(json.dumps( + self.assertEqual(self.dumps( {2: 3.0, 4.0: 5L, False: 1, 6L: True}, sort_keys=True), '{"false": 1, "2": 3.0, "4.0": 5, "6": true}') + +class TestPyDump(TestDump, PyTest): pass +class TestCDump(TestDump, CTest): pass diff --git a/lib-python/2.7/json/tests/test_encode_basestring_ascii.py b/lib-python/2.7/json/tests/test_encode_basestring_ascii.py --- a/lib-python/2.7/json/tests/test_encode_basestring_ascii.py +++ b/lib-python/2.7/json/tests/test_encode_basestring_ascii.py @@ -1,8 +1,6 @@ -from unittest import TestCase +from collections import OrderedDict +from json.tests import PyTest, CTest -import json.encoder -from json import dumps -from collections import OrderedDict CASES = [ (u'/\\"\ucafe\ubabe\uab98\ufcde\ubcda\uef4a\x08\x0c\n\r\t`1~!@#$%^&*()_+-=[]{}|;:\',./<>?', '"/\\\\\\"\\ucafe\\ubabe\\uab98\\ufcde\\ubcda\\uef4a\\b\\f\\n\\r\\t`1~!@#$%^&*()_+-=[]{}|;:\',./<>?"'), @@ -23,19 +21,11 @@ (u'\u0123\u4567\u89ab\ucdef\uabcd\uef4a', '"\\u0123\\u4567\\u89ab\\ucdef\\uabcd\\uef4a"'), ] -class TestEncodeBaseStringAscii(TestCase): - def test_py_encode_basestring_ascii(self): - self._test_encode_basestring_ascii(json.encoder.py_encode_basestring_ascii) - - def test_c_encode_basestring_ascii(self): - if not json.encoder.c_encode_basestring_ascii: - return - self._test_encode_basestring_ascii(json.encoder.c_encode_basestring_ascii) - - def _test_encode_basestring_ascii(self, encode_basestring_ascii): - fname = encode_basestring_ascii.__name__ +class TestEncodeBasestringAscii(object): + def test_encode_basestring_ascii(self): + fname = self.json.encoder.encode_basestring_ascii.__name__ for input_string, expect in CASES: - result = encode_basestring_ascii(input_string) + result = self.json.encoder.encode_basestring_ascii(input_string) self.assertEqual(result, expect, '{0!r} != {1!r} for {2}({3!r})'.format( result, expect, fname, input_string)) @@ -43,5 +33,9 @@ def test_ordered_dict(self): # See issue 6105 items = [('one', 1), ('two', 2), ('three', 3), ('four', 4), ('five', 5)] - s = json.dumps(OrderedDict(items)) + s = self.dumps(OrderedDict(items)) self.assertEqual(s, '{"one": 1, "two": 2, "three": 3, "four": 4, "five": 5}') + + +class TestPyEncodeBasestringAscii(TestEncodeBasestringAscii, PyTest): pass +class TestCEncodeBasestringAscii(TestEncodeBasestringAscii, CTest): pass diff --git a/lib-python/2.7/json/tests/test_fail.py b/lib-python/2.7/json/tests/test_fail.py --- a/lib-python/2.7/json/tests/test_fail.py +++ b/lib-python/2.7/json/tests/test_fail.py @@ -1,6 +1,4 @@ -from unittest import TestCase - -import json +from json.tests import PyTest, CTest # Fri Dec 30 18:57:26 2005 JSONDOCS = [ @@ -61,15 +59,15 @@ 18: "spec doesn't specify any nesting limitations", } -class TestFail(TestCase): +class TestFail(object): def test_failures(self): for idx, doc in enumerate(JSONDOCS): idx = idx + 1 if idx in SKIPS: - json.loads(doc) + self.loads(doc) continue try: - json.loads(doc) + self.loads(doc) except ValueError: pass else: @@ -79,7 +77,11 @@ data = {'a' : 1, (1, 2) : 2} #This is for c encoder - self.assertRaises(TypeError, json.dumps, data) + self.assertRaises(TypeError, self.dumps, data) #This is for python encoder - self.assertRaises(TypeError, json.dumps, data, indent=True) + 
self.assertRaises(TypeError, self.dumps, data, indent=True) + + +class TestPyFail(TestFail, PyTest): pass +class TestCFail(TestFail, CTest): pass diff --git a/lib-python/2.7/json/tests/test_float.py b/lib-python/2.7/json/tests/test_float.py --- a/lib-python/2.7/json/tests/test_float.py +++ b/lib-python/2.7/json/tests/test_float.py @@ -1,19 +1,22 @@ import math -from unittest import TestCase +from json.tests import PyTest, CTest -import json -class TestFloat(TestCase): +class TestFloat(object): def test_floats(self): for num in [1617161771.7650001, math.pi, math.pi**100, math.pi**-100, 3.1]: - self.assertEqual(float(json.dumps(num)), num) - self.assertEqual(json.loads(json.dumps(num)), num) - self.assertEqual(json.loads(unicode(json.dumps(num))), num) + self.assertEqual(float(self.dumps(num)), num) + self.assertEqual(self.loads(self.dumps(num)), num) + self.assertEqual(self.loads(unicode(self.dumps(num))), num) def test_ints(self): for num in [1, 1L, 1<<32, 1<<64]: - self.assertEqual(json.dumps(num), str(num)) - self.assertEqual(int(json.dumps(num)), num) - self.assertEqual(json.loads(json.dumps(num)), num) - self.assertEqual(json.loads(unicode(json.dumps(num))), num) + self.assertEqual(self.dumps(num), str(num)) + self.assertEqual(int(self.dumps(num)), num) + self.assertEqual(self.loads(self.dumps(num)), num) + self.assertEqual(self.loads(unicode(self.dumps(num))), num) + + +class TestPyFloat(TestFloat, PyTest): pass +class TestCFloat(TestFloat, CTest): pass diff --git a/lib-python/2.7/json/tests/test_indent.py b/lib-python/2.7/json/tests/test_indent.py --- a/lib-python/2.7/json/tests/test_indent.py +++ b/lib-python/2.7/json/tests/test_indent.py @@ -1,9 +1,9 @@ -from unittest import TestCase +import textwrap +from StringIO import StringIO +from json.tests import PyTest, CTest -import json -import textwrap -class TestIndent(TestCase): +class TestIndent(object): def test_indent(self): h = [['blorpie'], ['whoops'], [], 'd-shtaeou', 'd-nthiouh', 'i-vhbjkhnth', {'nifty': 87}, {'field': 'yes', 'morefield': False} ] @@ -30,12 +30,31 @@ ]""") - d1 = json.dumps(h) - d2 = json.dumps(h, indent=2, sort_keys=True, separators=(',', ': ')) + d1 = self.dumps(h) + d2 = self.dumps(h, indent=2, sort_keys=True, separators=(',', ': ')) - h1 = json.loads(d1) - h2 = json.loads(d2) + h1 = self.loads(d1) + h2 = self.loads(d2) self.assertEqual(h1, h) self.assertEqual(h2, h) self.assertEqual(d2, expect) + + def test_indent0(self): + h = {3: 1} + def check(indent, expected): + d1 = self.dumps(h, indent=indent) + self.assertEqual(d1, expected) + + sio = StringIO() + self.json.dump(h, sio, indent=indent) + self.assertEqual(sio.getvalue(), expected) + + # indent=0 should emit newlines + check(0, '{\n"3": 1\n}') + # indent=None is more compact + check(None, '{"3": 1}') + + +class TestPyIndent(TestIndent, PyTest): pass +class TestCIndent(TestIndent, CTest): pass diff --git a/lib-python/2.7/json/tests/test_pass1.py b/lib-python/2.7/json/tests/test_pass1.py --- a/lib-python/2.7/json/tests/test_pass1.py +++ b/lib-python/2.7/json/tests/test_pass1.py @@ -1,6 +1,5 @@ -from unittest import TestCase +from json.tests import PyTest, CTest -import json # from http://json.org/JSON_checker/test/pass1.json JSON = r''' @@ -62,15 +61,19 @@ ,"rosebud"] ''' -class TestPass1(TestCase): +class TestPass1(object): def test_parse(self): # test in/out equivalence and parsing - res = json.loads(JSON) - out = json.dumps(res) - self.assertEqual(res, json.loads(out)) + res = self.loads(JSON) + out = self.dumps(res) + self.assertEqual(res, 
self.loads(out)) try: - json.dumps(res, allow_nan=False) + self.dumps(res, allow_nan=False) except ValueError: pass else: self.fail("23456789012E666 should be out of range") + + +class TestPyPass1(TestPass1, PyTest): pass +class TestCPass1(TestPass1, CTest): pass diff --git a/lib-python/2.7/json/tests/test_pass2.py b/lib-python/2.7/json/tests/test_pass2.py --- a/lib-python/2.7/json/tests/test_pass2.py +++ b/lib-python/2.7/json/tests/test_pass2.py @@ -1,14 +1,18 @@ -from unittest import TestCase -import json +from json.tests import PyTest, CTest + # from http://json.org/JSON_checker/test/pass2.json JSON = r''' [[[[[[[[[[[[[[[[[[["Not too deep"]]]]]]]]]]]]]]]]]]] ''' -class TestPass2(TestCase): +class TestPass2(object): def test_parse(self): # test in/out equivalence and parsing - res = json.loads(JSON) - out = json.dumps(res) - self.assertEqual(res, json.loads(out)) + res = self.loads(JSON) + out = self.dumps(res) + self.assertEqual(res, self.loads(out)) + + +class TestPyPass2(TestPass2, PyTest): pass +class TestCPass2(TestPass2, CTest): pass diff --git a/lib-python/2.7/json/tests/test_pass3.py b/lib-python/2.7/json/tests/test_pass3.py --- a/lib-python/2.7/json/tests/test_pass3.py +++ b/lib-python/2.7/json/tests/test_pass3.py @@ -1,6 +1,5 @@ -from unittest import TestCase +from json.tests import PyTest, CTest -import json # from http://json.org/JSON_checker/test/pass3.json JSON = r''' @@ -12,9 +11,14 @@ } ''' -class TestPass3(TestCase): + +class TestPass3(object): def test_parse(self): # test in/out equivalence and parsing - res = json.loads(JSON) - out = json.dumps(res) - self.assertEqual(res, json.loads(out)) + res = self.loads(JSON) + out = self.dumps(res) + self.assertEqual(res, self.loads(out)) + + +class TestPyPass3(TestPass3, PyTest): pass +class TestCPass3(TestPass3, CTest): pass diff --git a/lib-python/2.7/json/tests/test_recursion.py b/lib-python/2.7/json/tests/test_recursion.py --- a/lib-python/2.7/json/tests/test_recursion.py +++ b/lib-python/2.7/json/tests/test_recursion.py @@ -1,28 +1,16 @@ -from unittest import TestCase +from json.tests import PyTest, CTest -import json class JSONTestObject: pass -class RecursiveJSONEncoder(json.JSONEncoder): - recurse = False - def default(self, o): - if o is JSONTestObject: - if self.recurse: - return [JSONTestObject] - else: - return 'JSONTestObject' - return json.JSONEncoder.default(o) - - -class TestRecursion(TestCase): +class TestRecursion(object): def test_listrecursion(self): x = [] x.append(x) try: - json.dumps(x) + self.dumps(x) except ValueError: pass else: @@ -31,7 +19,7 @@ y = [x] x.append(y) try: - json.dumps(x) + self.dumps(x) except ValueError: pass else: @@ -39,13 +27,13 @@ y = [] x = [y, y] # ensure that the marker is cleared - json.dumps(x) + self.dumps(x) def test_dictrecursion(self): x = {} x["test"] = x try: - json.dumps(x) + self.dumps(x) except ValueError: pass else: @@ -53,9 +41,19 @@ x = {} y = {"a": x, "b": x} # ensure that the marker is cleared - json.dumps(x) + self.dumps(x) def test_defaultrecursion(self): + class RecursiveJSONEncoder(self.json.JSONEncoder): + recurse = False + def default(self, o): + if o is JSONTestObject: + if self.recurse: + return [JSONTestObject] + else: + return 'JSONTestObject' + return pyjson.JSONEncoder.default(o) + enc = RecursiveJSONEncoder() self.assertEqual(enc.encode(JSONTestObject), '"JSONTestObject"') enc.recurse = True @@ -65,3 +63,46 @@ pass else: self.fail("didn't raise ValueError on default recursion") + + + def test_highly_nested_objects_decoding(self): + # test that loading 
highly-nested objects doesn't segfault when C + # accelerations are used. See #12017 + # str + with self.assertRaises(RuntimeError): + self.loads('{"a":' * 100000 + '1' + '}' * 100000) + with self.assertRaises(RuntimeError): + self.loads('{"a":' * 100000 + '[1]' + '}' * 100000) + with self.assertRaises(RuntimeError): + self.loads('[' * 100000 + '1' + ']' * 100000) + # unicode + with self.assertRaises(RuntimeError): + self.loads(u'{"a":' * 100000 + u'1' + u'}' * 100000) + with self.assertRaises(RuntimeError): + self.loads(u'{"a":' * 100000 + u'[1]' + u'}' * 100000) + with self.assertRaises(RuntimeError): + self.loads(u'[' * 100000 + u'1' + u']' * 100000) + + def test_highly_nested_objects_encoding(self): + # See #12051 + l, d = [], {} + for x in xrange(100000): + l, d = [l], {'k':d} + with self.assertRaises(RuntimeError): + self.dumps(l) + with self.assertRaises(RuntimeError): + self.dumps(d) + + def test_endless_recursion(self): + # See #12051 + class EndlessJSONEncoder(self.json.JSONEncoder): + def default(self, o): + """If check_circular is False, this will keep adding another list.""" + return [o] + + with self.assertRaises(RuntimeError): + EndlessJSONEncoder(check_circular=False).encode(5j) + + +class TestPyRecursion(TestRecursion, PyTest): pass +class TestCRecursion(TestRecursion, CTest): pass diff --git a/lib-python/2.7/json/tests/test_scanstring.py b/lib-python/2.7/json/tests/test_scanstring.py --- a/lib-python/2.7/json/tests/test_scanstring.py +++ b/lib-python/2.7/json/tests/test_scanstring.py @@ -1,18 +1,10 @@ import sys -import decimal -from unittest import TestCase +from json.tests import PyTest, CTest -import json -import json.decoder -class TestScanString(TestCase): - def test_py_scanstring(self): - self._test_scanstring(json.decoder.py_scanstring) - - def test_c_scanstring(self): - self._test_scanstring(json.decoder.c_scanstring) - - def _test_scanstring(self, scanstring): +class TestScanstring(object): + def test_scanstring(self): + scanstring = self.json.decoder.scanstring self.assertEqual( scanstring('"z\\ud834\\udd20x"', 1, None, True), (u'z\U0001d120x', 16)) @@ -103,10 +95,15 @@ (u'Bad value', 12)) def test_issue3623(self): - self.assertRaises(ValueError, json.decoder.scanstring, b"xxx", 1, + self.assertRaises(ValueError, self.json.decoder.scanstring, b"xxx", 1, "xxx") self.assertRaises(UnicodeDecodeError, - json.encoder.encode_basestring_ascii, b"xx\xff") + self.json.encoder.encode_basestring_ascii, b"xx\xff") def test_overflow(self): - self.assertRaises(OverflowError, json.decoder.scanstring, b"xxx", sys.maxsize+1) + with self.assertRaises(OverflowError): + self.json.decoder.scanstring(b"xxx", sys.maxsize+1) + + +class TestPyScanstring(TestScanstring, PyTest): pass +class TestCScanstring(TestScanstring, CTest): pass diff --git a/lib-python/2.7/json/tests/test_separators.py b/lib-python/2.7/json/tests/test_separators.py --- a/lib-python/2.7/json/tests/test_separators.py +++ b/lib-python/2.7/json/tests/test_separators.py @@ -1,10 +1,8 @@ import textwrap -from unittest import TestCase +from json.tests import PyTest, CTest -import json - -class TestSeparators(TestCase): +class TestSeparators(object): def test_separators(self): h = [['blorpie'], ['whoops'], [], 'd-shtaeou', 'd-nthiouh', 'i-vhbjkhnth', {'nifty': 87}, {'field': 'yes', 'morefield': False} ] @@ -31,12 +29,16 @@ ]""") - d1 = json.dumps(h) - d2 = json.dumps(h, indent=2, sort_keys=True, separators=(' ,', ' : ')) + d1 = self.dumps(h) + d2 = self.dumps(h, indent=2, sort_keys=True, separators=(' ,', ' : ')) - h1 = 
json.loads(d1) - h2 = json.loads(d2) + h1 = self.loads(d1) + h2 = self.loads(d2) self.assertEqual(h1, h) self.assertEqual(h2, h) self.assertEqual(d2, expect) + + +class TestPySeparators(TestSeparators, PyTest): pass +class TestCSeparators(TestSeparators, CTest): pass diff --git a/lib-python/2.7/json/tests/test_speedups.py b/lib-python/2.7/json/tests/test_speedups.py --- a/lib-python/2.7/json/tests/test_speedups.py +++ b/lib-python/2.7/json/tests/test_speedups.py @@ -1,24 +1,23 @@ -import decimal -from unittest import TestCase +from json.tests import CTest -from json import decoder, encoder, scanner -class TestSpeedups(TestCase): +class TestSpeedups(CTest): def test_scanstring(self): - self.assertEqual(decoder.scanstring.__module__, "_json") - self.assertTrue(decoder.scanstring is decoder.c_scanstring) + self.assertEqual(self.json.decoder.scanstring.__module__, "_json") + self.assertIs(self.json.decoder.scanstring, self.json.decoder.c_scanstring) def test_encode_basestring_ascii(self): - self.assertEqual(encoder.encode_basestring_ascii.__module__, "_json") - self.assertTrue(encoder.encode_basestring_ascii is - encoder.c_encode_basestring_ascii) + self.assertEqual(self.json.encoder.encode_basestring_ascii.__module__, + "_json") + self.assertIs(self.json.encoder.encode_basestring_ascii, + self.json.encoder.c_encode_basestring_ascii) -class TestDecode(TestCase): +class TestDecode(CTest): def test_make_scanner(self): - self.assertRaises(AttributeError, scanner.c_make_scanner, 1) + self.assertRaises(AttributeError, self.json.scanner.c_make_scanner, 1) def test_make_encoder(self): - self.assertRaises(TypeError, encoder.c_make_encoder, + self.assertRaises(TypeError, self.json.encoder.c_make_encoder, None, "\xCD\x7D\x3D\x4E\x12\x4C\xF9\x79\xD7\x52\xBA\x82\xF2\x27\x4A\x7D\xA0\xCA\x75", None) diff --git a/lib-python/2.7/json/tests/test_unicode.py b/lib-python/2.7/json/tests/test_unicode.py --- a/lib-python/2.7/json/tests/test_unicode.py +++ b/lib-python/2.7/json/tests/test_unicode.py @@ -1,11 +1,10 @@ -from unittest import TestCase +from collections import OrderedDict +from json.tests import PyTest, CTest -import json -from collections import OrderedDict -class TestUnicode(TestCase): +class TestUnicode(object): def test_encoding1(self): - encoder = json.JSONEncoder(encoding='utf-8') + encoder = self.json.JSONEncoder(encoding='utf-8') u = u'\N{GREEK SMALL LETTER ALPHA}\N{GREEK CAPITAL LETTER OMEGA}' s = u.encode('utf-8') ju = encoder.encode(u) @@ -15,68 +14,72 @@ def test_encoding2(self): u = u'\N{GREEK SMALL LETTER ALPHA}\N{GREEK CAPITAL LETTER OMEGA}' s = u.encode('utf-8') - ju = json.dumps(u, encoding='utf-8') - js = json.dumps(s, encoding='utf-8') + ju = self.dumps(u, encoding='utf-8') + js = self.dumps(s, encoding='utf-8') self.assertEqual(ju, js) def test_encoding3(self): u = u'\N{GREEK SMALL LETTER ALPHA}\N{GREEK CAPITAL LETTER OMEGA}' - j = json.dumps(u) + j = self.dumps(u) self.assertEqual(j, '"\\u03b1\\u03a9"') def test_encoding4(self): u = u'\N{GREEK SMALL LETTER ALPHA}\N{GREEK CAPITAL LETTER OMEGA}' - j = json.dumps([u]) + j = self.dumps([u]) self.assertEqual(j, '["\\u03b1\\u03a9"]') def test_encoding5(self): u = u'\N{GREEK SMALL LETTER ALPHA}\N{GREEK CAPITAL LETTER OMEGA}' - j = json.dumps(u, ensure_ascii=False) + j = self.dumps(u, ensure_ascii=False) self.assertEqual(j, u'"{0}"'.format(u)) def test_encoding6(self): u = u'\N{GREEK SMALL LETTER ALPHA}\N{GREEK CAPITAL LETTER OMEGA}' - j = json.dumps([u], ensure_ascii=False) + j = self.dumps([u], ensure_ascii=False) self.assertEqual(j, 
u'["{0}"]'.format(u)) def test_big_unicode_encode(self): u = u'\U0001d120' - self.assertEqual(json.dumps(u), '"\\ud834\\udd20"') - self.assertEqual(json.dumps(u, ensure_ascii=False), u'"\U0001d120"') + self.assertEqual(self.dumps(u), '"\\ud834\\udd20"') + self.assertEqual(self.dumps(u, ensure_ascii=False), u'"\U0001d120"') def test_big_unicode_decode(self): u = u'z\U0001d120x' - self.assertEqual(json.loads('"' + u + '"'), u) - self.assertEqual(json.loads('"z\\ud834\\udd20x"'), u) + self.assertEqual(self.loads('"' + u + '"'), u) + self.assertEqual(self.loads('"z\\ud834\\udd20x"'), u) def test_unicode_decode(self): for i in range(0, 0xd7ff): u = unichr(i) s = '"\\u{0:04x}"'.format(i) - self.assertEqual(json.loads(s), u) + self.assertEqual(self.loads(s), u) def test_object_pairs_hook_with_unicode(self): s = u'{"xkd":1, "kcw":2, "art":3, "hxm":4, "qrt":5, "pad":6, "hoy":7}' p = [(u"xkd", 1), (u"kcw", 2), (u"art", 3), (u"hxm", 4), (u"qrt", 5), (u"pad", 6), (u"hoy", 7)] - self.assertEqual(json.loads(s), eval(s)) - self.assertEqual(json.loads(s, object_pairs_hook = lambda x: x), p) - od = json.loads(s, object_pairs_hook = OrderedDict) + self.assertEqual(self.loads(s), eval(s)) + self.assertEqual(self.loads(s, object_pairs_hook = lambda x: x), p) + od = self.loads(s, object_pairs_hook = OrderedDict) self.assertEqual(od, OrderedDict(p)) self.assertEqual(type(od), OrderedDict) # the object_pairs_hook takes priority over the object_hook - self.assertEqual(json.loads(s, + self.assertEqual(self.loads(s, object_pairs_hook = OrderedDict, object_hook = lambda x: None), OrderedDict(p)) def test_default_encoding(self): - self.assertEqual(json.loads(u'{"a": "\xe9"}'.encode('utf-8')), + self.assertEqual(self.loads(u'{"a": "\xe9"}'.encode('utf-8')), {'a': u'\xe9'}) def test_unicode_preservation(self): - self.assertEqual(type(json.loads(u'""')), unicode) - self.assertEqual(type(json.loads(u'"a"')), unicode) - self.assertEqual(type(json.loads(u'["a"]')[0]), unicode) + self.assertEqual(type(self.loads(u'""')), unicode) + self.assertEqual(type(self.loads(u'"a"')), unicode) + self.assertEqual(type(self.loads(u'["a"]')[0]), unicode) # Issue 10038. - self.assertEqual(type(json.loads('"foo"')), unicode) + self.assertEqual(type(self.loads('"foo"')), unicode) + + +class TestPyUnicode(TestUnicode, PyTest): pass +class TestCUnicode(TestUnicode, CTest): pass diff --git a/lib-python/2.7/lib-tk/Tix.py b/lib-python/2.7/lib-tk/Tix.py --- a/lib-python/2.7/lib-tk/Tix.py +++ b/lib-python/2.7/lib-tk/Tix.py @@ -163,7 +163,7 @@ extensions) exist, then the image type is chosen according to the depth of the X display: xbm images are chosen on monochrome displays and color images are chosen on color displays. By using - tix_ getimage, you can advoid hard coding the pathnames of the + tix_ getimage, you can avoid hard coding the pathnames of the image files in your application. When successful, this command returns the name of the newly created image, which can be used to configure the -image option of the Tk and Tix widgets. @@ -171,7 +171,7 @@ return self.tk.call('tix', 'getimage', name) def tix_option_get(self, name): - """Gets the options manitained by the Tix + """Gets the options maintained by the Tix scheme mechanism. Available options include: active_bg active_fg bg @@ -576,7 +576,7 @@ class ComboBox(TixWidget): """ComboBox - an Entry field with a dropdown menu. 
The user can select a - choice by either typing in the entry subwdget or selecting from the + choice by either typing in the entry subwidget or selecting from the listbox subwidget. Subwidget Class @@ -869,7 +869,7 @@ """HList - Hierarchy display widget can be used to display any data that have a hierarchical structure, for example, file system directory trees. The list entries are indented and connected by branch lines - according to their places in the hierachy. + according to their places in the hierarchy. Subwidgets - None""" @@ -1520,7 +1520,7 @@ self.tk.call(self._w, 'selection', 'set', first, last) class Tree(TixWidget): - """Tree - The tixTree widget can be used to display hierachical + """Tree - The tixTree widget can be used to display hierarchical data in a tree form. The user can adjust the view of the tree by opening or closing parts of the tree.""" diff --git a/lib-python/2.7/lib-tk/Tkinter.py b/lib-python/2.7/lib-tk/Tkinter.py --- a/lib-python/2.7/lib-tk/Tkinter.py +++ b/lib-python/2.7/lib-tk/Tkinter.py @@ -1660,7 +1660,7 @@ class Tk(Misc, Wm): """Toplevel widget of Tk which represents mostly the main window - of an appliation. It has an associated Tcl interpreter.""" + of an application. It has an associated Tcl interpreter.""" _w = '.' def __init__(self, screenName=None, baseName=None, className='Tk', useTk=1, sync=0, use=None): diff --git a/lib-python/2.7/lib-tk/test/test_ttk/test_functions.py b/lib-python/2.7/lib-tk/test/test_ttk/test_functions.py --- a/lib-python/2.7/lib-tk/test/test_ttk/test_functions.py +++ b/lib-python/2.7/lib-tk/test/test_ttk/test_functions.py @@ -136,7 +136,7 @@ # minimum acceptable for image type self.assertEqual(ttk._format_elemcreate('image', False, 'test'), ("test ", ())) - # specifiyng a state spec + # specifying a state spec self.assertEqual(ttk._format_elemcreate('image', False, 'test', ('', 'a')), ("test {} a", ())) # state spec with multiple states diff --git a/lib-python/2.7/lib-tk/ttk.py b/lib-python/2.7/lib-tk/ttk.py --- a/lib-python/2.7/lib-tk/ttk.py +++ b/lib-python/2.7/lib-tk/ttk.py @@ -707,7 +707,7 @@ textvariable, values, width """ # The "values" option may need special formatting, so leave to - # _format_optdict the responsability to format it + # _format_optdict the responsibility to format it if "values" in kw: kw["values"] = _format_optdict({'v': kw["values"]})[1] @@ -993,7 +993,7 @@ pane is either an integer index or the name of a managed subwindow. If kw is not given, returns a dict of the pane option values. If option is specified then the value for that option is returned. - Otherwise, sets the options to the correspoding values.""" + Otherwise, sets the options to the corresponding values.""" if option is not None: kw[option] = None return _val_or_dict(kw, self.tk.call, self._w, "pane", pane) diff --git a/lib-python/2.7/lib-tk/turtle.py b/lib-python/2.7/lib-tk/turtle.py --- a/lib-python/2.7/lib-tk/turtle.py +++ b/lib-python/2.7/lib-tk/turtle.py @@ -1385,7 +1385,7 @@ Optional argument: picname -- a string, name of a gif-file or "nopic". - If picname is a filename, set the corresponing image as background. + If picname is a filename, set the corresponding image as background. If picname is "nopic", delete backgroundimage, if present. If picname is None, return the filename of the current backgroundimage. 
@@ -1409,7 +1409,7 @@ Optional arguments: canvwidth -- positive integer, new width of canvas in pixels canvheight -- positive integer, new height of canvas in pixels - bg -- colorstring or color-tupel, new backgroundcolor + bg -- colorstring or color-tuple, new backgroundcolor If no arguments are given, return current (canvaswidth, canvasheight) Do not alter the drawing window. To observe hidden parts of @@ -3079,9 +3079,9 @@ fill="", width=ps) # Turtle now at position old, self._position = old - ## if undo is done during crating a polygon, the last vertex - ## will be deleted. if the polygon is entirel deleted, - ## creatigPoly will be set to False. + ## if undo is done during creating a polygon, the last vertex + ## will be deleted. if the polygon is entirely deleted, + ## creatingPoly will be set to False. ## Polygons created before the last one will not be affected by undo() if self._creatingPoly: if len(self._poly) > 0: @@ -3221,7 +3221,7 @@ def dot(self, size=None, *color): """Draw a dot with diameter size, using color. - Optional argumentS: + Optional arguments: size -- an integer >= 1 (if given) color -- a colorstring or a numeric color tuple @@ -3691,7 +3691,7 @@ class Turtle(RawTurtle): - """RawTurtle auto-crating (scrolled) canvas. + """RawTurtle auto-creating (scrolled) canvas. When a Turtle object is created or a function derived from some Turtle method is called a TurtleScreen object is automatically created. @@ -3731,7 +3731,7 @@ filename -- a string, used as filename default value is turtle_docstringdict - Has to be called explicitely, (not used by the turtle-graphics classes) + Has to be called explicitly, (not used by the turtle-graphics classes) The docstring dictionary will be written to the Python script .py It is intended to serve as a template for translation of the docstrings into different languages. 
diff --git a/lib-python/2.7/lib2to3/__main__.py b/lib-python/2.7/lib2to3/__main__.py new file mode 100644 --- /dev/null +++ b/lib-python/2.7/lib2to3/__main__.py @@ -0,0 +1,4 @@ +import sys +from .main import main + +sys.exit(main("lib2to3.fixes")) diff --git a/lib-python/2.7/lib2to3/fixes/fix_itertools.py b/lib-python/2.7/lib2to3/fixes/fix_itertools.py --- a/lib-python/2.7/lib2to3/fixes/fix_itertools.py +++ b/lib-python/2.7/lib2to3/fixes/fix_itertools.py @@ -13,7 +13,7 @@ class FixItertools(fixer_base.BaseFix): BM_compatible = True - it_funcs = "('imap'|'ifilter'|'izip'|'ifilterfalse')" + it_funcs = "('imap'|'ifilter'|'izip'|'izip_longest'|'ifilterfalse')" PATTERN = """ power< it='itertools' trailer< @@ -28,7 +28,8 @@ def transform(self, node, results): prefix = None func = results['func'][0] - if 'it' in results and func.value != u'ifilterfalse': + if ('it' in results and + func.value not in (u'ifilterfalse', u'izip_longest')): dot, it = (results['dot'], results['it']) # Remove the 'itertools' prefix = it.prefix diff --git a/lib-python/2.7/lib2to3/fixes/fix_itertools_imports.py b/lib-python/2.7/lib2to3/fixes/fix_itertools_imports.py --- a/lib-python/2.7/lib2to3/fixes/fix_itertools_imports.py +++ b/lib-python/2.7/lib2to3/fixes/fix_itertools_imports.py @@ -31,9 +31,10 @@ if member_name in (u'imap', u'izip', u'ifilter'): child.value = None child.remove() - elif member_name == u'ifilterfalse': + elif member_name in (u'ifilterfalse', u'izip_longest'): node.changed() - name_node.value = u'filterfalse' + name_node.value = (u'filterfalse' if member_name[1] == u'f' + else u'zip_longest') # Make sure the import statement is still sane children = imports.children[:] or [imports] diff --git a/lib-python/2.7/lib2to3/fixes/fix_metaclass.py b/lib-python/2.7/lib2to3/fixes/fix_metaclass.py --- a/lib-python/2.7/lib2to3/fixes/fix_metaclass.py +++ b/lib-python/2.7/lib2to3/fixes/fix_metaclass.py @@ -48,7 +48,7 @@ """ for node in cls_node.children: if node.type == syms.suite: - # already in the prefered format, do nothing + # already in the preferred format, do nothing return # !%@#! 
oneliners have no suite node, we have to fake one up diff --git a/lib-python/2.7/lib2to3/fixes/fix_urllib.py b/lib-python/2.7/lib2to3/fixes/fix_urllib.py --- a/lib-python/2.7/lib2to3/fixes/fix_urllib.py +++ b/lib-python/2.7/lib2to3/fixes/fix_urllib.py @@ -12,7 +12,7 @@ MAPPING = {"urllib": [ ("urllib.request", - ["URLOpener", "FancyURLOpener", "urlretrieve", + ["URLopener", "FancyURLopener", "urlretrieve", "_urlopener", "urlopen", "urlcleanup", "pathname2url", "url2pathname"]), ("urllib.parse", diff --git a/lib-python/2.7/lib2to3/main.py b/lib-python/2.7/lib2to3/main.py --- a/lib-python/2.7/lib2to3/main.py +++ b/lib-python/2.7/lib2to3/main.py @@ -101,7 +101,7 @@ parser.add_option("-j", "--processes", action="store", default=1, type="int", help="Run 2to3 concurrently") parser.add_option("-x", "--nofix", action="append", default=[], - help="Prevent a fixer from being run.") + help="Prevent a transformation from being run") parser.add_option("-l", "--list-fixes", action="store_true", help="List available transformations") parser.add_option("-p", "--print-function", action="store_true", @@ -113,7 +113,7 @@ parser.add_option("-w", "--write", action="store_true", help="Write back modified files") parser.add_option("-n", "--nobackups", action="store_true", default=False, - help="Don't write backups for modified files.") + help="Don't write backups for modified files") # Parse command line arguments refactor_stdin = False diff --git a/lib-python/2.7/lib2to3/patcomp.py b/lib-python/2.7/lib2to3/patcomp.py --- a/lib-python/2.7/lib2to3/patcomp.py +++ b/lib-python/2.7/lib2to3/patcomp.py @@ -12,6 +12,7 @@ # Python imports import os +import StringIO # Fairly local imports from .pgen2 import driver, literals, token, tokenize, parse, grammar @@ -32,7 +33,7 @@ def tokenize_wrapper(input): """Tokenizes a string suppressing significant whitespace.""" skip = set((token.NEWLINE, token.INDENT, token.DEDENT)) - tokens = tokenize.generate_tokens(driver.generate_lines(input).next) + tokens = tokenize.generate_tokens(StringIO.StringIO(input).readline) for quintuple in tokens: type, value, start, end, line_text = quintuple if type not in skip: diff --git a/lib-python/2.7/lib2to3/pgen2/conv.py b/lib-python/2.7/lib2to3/pgen2/conv.py --- a/lib-python/2.7/lib2to3/pgen2/conv.py +++ b/lib-python/2.7/lib2to3/pgen2/conv.py @@ -51,7 +51,7 @@ self.finish_off() def parse_graminit_h(self, filename): - """Parse the .h file writen by pgen. (Internal) + """Parse the .h file written by pgen. (Internal) This file is a sequence of #define statements defining the nonterminals of the grammar as numbers. We build two tables @@ -82,7 +82,7 @@ return True def parse_graminit_c(self, filename): - """Parse the .c file writen by pgen. (Internal) + """Parse the .c file written by pgen. (Internal) The file looks as follows. 
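Two user-visible effects of the lib2to3 hunks above: the itertools fixers now also rewrite izip_longest to zip_longest, and the new lib2to3/__main__.py makes 'python -m lib2to3' usable directly. A small sketch that drives the same fixers through the library API; the source snippet and the '<example>' name are made up for illustration.

    from lib2to3.refactor import RefactoringTool, get_fixers_from_package

    source = (
        "from itertools import imap, izip_longest\n"
        "pairs = list(izip_longest(imap(str, [1, 2, 3]), [4, 5]))\n"
    )
    tool = RefactoringTool(get_fixers_from_package('lib2to3.fixes'))
    print(tool.refactor_string(source, '<example>'))

From the command line, something like 'python -m lib2to3 example.py' now prints the corresponding rewrite as a diff, which is what the added __main__ module is for.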
The first two lines are always this: diff --git a/lib-python/2.7/lib2to3/pgen2/driver.py b/lib-python/2.7/lib2to3/pgen2/driver.py --- a/lib-python/2.7/lib2to3/pgen2/driver.py +++ b/lib-python/2.7/lib2to3/pgen2/driver.py @@ -19,6 +19,7 @@ import codecs import os import logging +import StringIO import sys # Pgen imports @@ -101,18 +102,10 @@ def parse_string(self, text, debug=False): """Parse a string and return the syntax tree.""" - tokens = tokenize.generate_tokens(generate_lines(text).next) + tokens = tokenize.generate_tokens(StringIO.StringIO(text).readline) return self.parse_tokens(tokens, debug) -def generate_lines(text): - """Generator that behaves like readline without using StringIO.""" - for line in text.splitlines(True): - yield line - while True: - yield "" - - def load_grammar(gt="Grammar.txt", gp=None, save=True, force=False, logger=None): """Load the grammar (maybe from a pickle).""" diff --git a/lib-python/2.7/lib2to3/pytree.py b/lib-python/2.7/lib2to3/pytree.py --- a/lib-python/2.7/lib2to3/pytree.py +++ b/lib-python/2.7/lib2to3/pytree.py @@ -658,8 +658,8 @@ content: optional sequence of subsequences of patterns; if absent, matches one node; if present, each subsequence is an alternative [*] - min: optinal minumum number of times to match, default 0 - max: optional maximum number of times tro match, default HUGE + min: optional minimum number of times to match, default 0 + max: optional maximum number of times to match, default HUGE name: optional name assigned to this match [*] Thus, if content is [[a, b, c], [d, e], [f, g, h]] this is @@ -743,9 +743,11 @@ else: # The reason for this is that hitting the recursion limit usually # results in some ugly messages about how RuntimeErrors are being - # ignored. - save_stderr = sys.stderr - sys.stderr = StringIO() + # ignored. We don't do this on non-CPython implementation because + # they don't have this problem. + if hasattr(sys, "getrefcount"): + save_stderr = sys.stderr + sys.stderr = StringIO() try: for count, r in self._recursive_matches(nodes, 0): if self.name: @@ -759,7 +761,8 @@ r[self.name] = nodes[:count] yield count, r finally: - sys.stderr = save_stderr + if hasattr(sys, "getrefcount"): + sys.stderr = save_stderr def _iterative_matches(self, nodes): """Helper to iteratively yield the matches.""" diff --git a/lib-python/2.7/lib2to3/refactor.py b/lib-python/2.7/lib2to3/refactor.py --- a/lib-python/2.7/lib2to3/refactor.py +++ b/lib-python/2.7/lib2to3/refactor.py @@ -302,13 +302,14 @@ Files and subdirectories starting with '.' are skipped. 
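The patcomp and driver hunks above both rest on the fact that tokenize.generate_tokens() only needs a readline() callable, so a StringIO wrapper can replace the removed generate_lines() helper. A stand-alone illustration under Python 2.7; the source string is arbitrary.

    import StringIO
    import tokenize

    source = "x = 1\ny = x + 2\n"
    readline = StringIO.StringIO(source).readline
    for tok_type, value, start, end, line in tokenize.generate_tokens(readline):
        print('%-10s %r' % (tokenize.tok_name[tok_type], value))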
""" + py_ext = os.extsep + "py" for dirpath, dirnames, filenames in os.walk(dir_name): self.log_debug("Descending into %s", dirpath) dirnames.sort() filenames.sort() for name in filenames: - if not name.startswith(".") and \ - os.path.splitext(name)[1].endswith("py"): + if (not name.startswith(".") and + os.path.splitext(name)[1] == py_ext): fullname = os.path.join(dirpath, name) self.refactor_file(fullname, write, doctests_only) # Modify dirnames in-place to remove subdirs with leading dots diff --git a/lib-python/2.7/lib2to3/tests/data/py2_test_grammar.py b/lib-python/2.7/lib2to3/tests/data/py2_test_grammar.py --- a/lib-python/2.7/lib2to3/tests/data/py2_test_grammar.py +++ b/lib-python/2.7/lib2to3/tests/data/py2_test_grammar.py @@ -316,7 +316,7 @@ ### simple_stmt: small_stmt (';' small_stmt)* [';'] x = 1; pass; del x def foo(): - # verify statments that end with semi-colons + # verify statements that end with semi-colons x = 1; pass; del x; foo() diff --git a/lib-python/2.7/lib2to3/tests/data/py3_test_grammar.py b/lib-python/2.7/lib2to3/tests/data/py3_test_grammar.py --- a/lib-python/2.7/lib2to3/tests/data/py3_test_grammar.py +++ b/lib-python/2.7/lib2to3/tests/data/py3_test_grammar.py @@ -356,7 +356,7 @@ ### simple_stmt: small_stmt (';' small_stmt)* [';'] x = 1; pass; del x def foo(): - # verify statments that end with semi-colons + # verify statements that end with semi-colons x = 1; pass; del x; foo() diff --git a/lib-python/2.7/lib2to3/tests/test_fixers.py b/lib-python/2.7/lib2to3/tests/test_fixers.py --- a/lib-python/2.7/lib2to3/tests/test_fixers.py +++ b/lib-python/2.7/lib2to3/tests/test_fixers.py @@ -3623,16 +3623,24 @@ a = """%s(f, a)""" self.checkall(b, a) - def test_2(self): + def test_qualified(self): b = """itertools.ifilterfalse(a, b)""" a = """itertools.filterfalse(a, b)""" self.check(b, a) - def test_4(self): + b = """itertools.izip_longest(a, b)""" + a = """itertools.zip_longest(a, b)""" + self.check(b, a) + + def test_2(self): b = """ifilterfalse(a, b)""" a = """filterfalse(a, b)""" self.check(b, a) + b = """izip_longest(a, b)""" + a = """zip_longest(a, b)""" + self.check(b, a) + def test_space_1(self): b = """ %s(f, a)""" a = """ %s(f, a)""" @@ -3643,9 +3651,14 @@ a = """ itertools.filterfalse(a, b)""" self.check(b, a) + b = """ itertools.izip_longest(a, b)""" + a = """ itertools.zip_longest(a, b)""" + self.check(b, a) + def test_run_order(self): self.assert_runs_after('map', 'zip', 'filter') + class Test_itertools_imports(FixerTestCase): fixer = 'itertools_imports' @@ -3696,18 +3709,19 @@ s = "from itertools import bar as bang" self.unchanged(s) - def test_ifilter(self): - b = "from itertools import ifilterfalse" - a = "from itertools import filterfalse" - self.check(b, a) - - b = "from itertools import imap, ifilterfalse, foo" - a = "from itertools import filterfalse, foo" - self.check(b, a) - - b = "from itertools import bar, ifilterfalse, foo" - a = "from itertools import bar, filterfalse, foo" - self.check(b, a) + def test_ifilter_and_zip_longest(self): + for name in "filterfalse", "zip_longest": + b = "from itertools import i%s" % (name,) + a = "from itertools import %s" % (name,) + self.check(b, a) + + b = "from itertools import imap, i%s, foo" % (name,) + a = "from itertools import %s, foo" % (name,) + self.check(b, a) + + b = "from itertools import bar, i%s, foo" % (name,) + a = "from itertools import bar, %s, foo" % (name,) + self.check(b, a) def test_import_star(self): s = "from itertools import *" diff --git a/lib-python/2.7/lib2to3/tests/test_parser.py 
b/lib-python/2.7/lib2to3/tests/test_parser.py --- a/lib-python/2.7/lib2to3/tests/test_parser.py +++ b/lib-python/2.7/lib2to3/tests/test_parser.py @@ -19,6 +19,16 @@ # Local imports from lib2to3.pgen2 import tokenize from ..pgen2.parse import ParseError +from lib2to3.pygram import python_symbols as syms + + +class TestDriver(support.TestCase): + + def test_formfeed(self): + s = """print 1\n\x0Cprint 2\n""" + t = driver.parse_string(s) + self.assertEqual(t.children[0].children[0].type, syms.print_stmt) + self.assertEqual(t.children[1].children[0].type, syms.print_stmt) class GrammarTest(support.TestCase): diff --git a/lib-python/2.7/lib2to3/tests/test_refactor.py b/lib-python/2.7/lib2to3/tests/test_refactor.py --- a/lib-python/2.7/lib2to3/tests/test_refactor.py +++ b/lib-python/2.7/lib2to3/tests/test_refactor.py @@ -223,6 +223,7 @@ "hi.py", ".dumb", ".after.py", + "notpy.npy", "sappy"] expected = ["hi.py"] check(tree, expected) diff --git a/lib-python/2.7/lib2to3/tests/test_util.py b/lib-python/2.7/lib2to3/tests/test_util.py --- a/lib-python/2.7/lib2to3/tests/test_util.py +++ b/lib-python/2.7/lib2to3/tests/test_util.py @@ -568,8 +568,8 @@ def test_from_import(self): node = parse('bar()') - fixer_util.touch_import("cgi", "escape", node) - self.assertEqual(str(node), 'from cgi import escape\nbar()\n\n') + fixer_util.touch_import("html", "escape", node) + self.assertEqual(str(node), 'from html import escape\nbar()\n\n') def test_name_import(self): node = parse('bar()') diff --git a/lib-python/2.7/locale.py b/lib-python/2.7/locale.py --- a/lib-python/2.7/locale.py +++ b/lib-python/2.7/locale.py @@ -621,7 +621,7 @@ 'tactis': 'TACTIS', 'euc_jp': 'eucJP', 'euc_kr': 'eucKR', - 'utf_8': 'UTF8', + 'utf_8': 'UTF-8', 'koi8_r': 'KOI8-R', 'koi8_u': 'KOI8-U', # XXX This list is still incomplete. If you know more diff --git a/lib-python/2.7/logging/__init__.py b/lib-python/2.7/logging/__init__.py --- a/lib-python/2.7/logging/__init__.py +++ b/lib-python/2.7/logging/__init__.py @@ -1627,6 +1627,7 @@ h = wr() if h: try: + h.acquire() h.flush() h.close() except (IOError, ValueError): @@ -1635,6 +1636,8 @@ # references to them are still around at # application exit. pass + finally: + h.release() except: if raiseExceptions: raise diff --git a/lib-python/2.7/logging/config.py b/lib-python/2.7/logging/config.py --- a/lib-python/2.7/logging/config.py +++ b/lib-python/2.7/logging/config.py @@ -226,14 +226,14 @@ propagate = 1 logger = logging.getLogger(qn) if qn in existing: - i = existing.index(qn) + i = existing.index(qn) + 1 # start with the entry after qn prefixed = qn + "." 
pflen = len(prefixed) num_existing = len(existing) - i = i + 1 # look at the entry after qn - while (i < num_existing) and (existing[i][:pflen] == prefixed): - child_loggers.append(existing[i]) - i = i + 1 + while i < num_existing: + if existing[i][:pflen] == prefixed: + child_loggers.append(existing[i]) + i += 1 existing.remove(qn) if "level" in opts: level = cp.get(sectname, "level") diff --git a/lib-python/2.7/logging/handlers.py b/lib-python/2.7/logging/handlers.py --- a/lib-python/2.7/logging/handlers.py +++ b/lib-python/2.7/logging/handlers.py @@ -125,6 +125,7 @@ """ if self.stream: self.stream.close() + self.stream = None if self.backupCount > 0: for i in range(self.backupCount - 1, 0, -1): sfn = "%s.%d" % (self.baseFilename, i) @@ -324,6 +325,7 @@ """ if self.stream: self.stream.close() + self.stream = None # get the time that this sequence started at and make it a TimeTuple t = self.rolloverAt - self.interval if self.utc: diff --git a/lib-python/2.7/mailbox.py b/lib-python/2.7/mailbox.py --- a/lib-python/2.7/mailbox.py +++ b/lib-python/2.7/mailbox.py @@ -234,27 +234,35 @@ def __init__(self, dirname, factory=rfc822.Message, create=True): """Initialize a Maildir instance.""" Mailbox.__init__(self, dirname, factory, create) + self._paths = { + 'tmp': os.path.join(self._path, 'tmp'), + 'new': os.path.join(self._path, 'new'), + 'cur': os.path.join(self._path, 'cur'), + } if not os.path.exists(self._path): if create: os.mkdir(self._path, 0700) - os.mkdir(os.path.join(self._path, 'tmp'), 0700) - os.mkdir(os.path.join(self._path, 'new'), 0700) - os.mkdir(os.path.join(self._path, 'cur'), 0700) + for path in self._paths.values(): + os.mkdir(path, 0o700) else: raise NoSuchMailboxError(self._path) self._toc = {} - self._last_read = None # Records last time we read cur/new - # NOTE: we manually invalidate _last_read each time we do any - # modifications ourselves, otherwise we might get tripped up by - # bogus mtime behaviour on some systems (see issue #6896). 
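The reworked Maildir.add() above no longer leaves a stray entry in tmp/ when dumping the message fails: the temporary file is closed and unlinked before the exception propagates. The same clean-up pattern reduced to a generic helper; write_file and its arguments are invented names, and mkstemp stands in for Maildir's own unique-name scheme.

    import os
    import tempfile

    def write_file(directory, data):
        # Create the file first, then fill it; on any failure remove the
        # half-written temporary instead of leaving it behind.
        fd, tmp_path = tempfile.mkstemp(dir=directory)
        tmp_file = os.fdopen(fd, 'wb')
        try:
            tmp_file.write(data)
        except BaseException:
            tmp_file.close()
            os.remove(tmp_path)
            raise
        tmp_file.flush()
        os.fsync(tmp_file.fileno())   # same effect as mailbox._sync_close()
        tmp_file.close()
        return tmp_path

    print(write_file(tempfile.gettempdir(), b'hello\n'))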
+ self._toc_mtimes = {} + for subdir in ('cur', 'new'): + self._toc_mtimes[subdir] = os.path.getmtime(self._paths[subdir]) + self._last_read = time.time() # Records last time we read cur/new + self._skewfactor = 0.1 # Adjust if os/fs clocks are skewing def add(self, message): """Add message and return assigned key.""" tmp_file = self._create_tmp() try: self._dump_message(message, tmp_file) - finally: - _sync_close(tmp_file) + except BaseException: + tmp_file.close() + os.remove(tmp_file.name) + raise + _sync_close(tmp_file) if isinstance(message, MaildirMessage): subdir = message.get_subdir() suffix = self.colon + message.get_info() @@ -280,15 +288,11 @@ raise if isinstance(message, MaildirMessage): os.utime(dest, (os.path.getatime(dest), message.get_date())) - # Invalidate cached toc - self._last_read = None return uniq def remove(self, key): """Remove the keyed message; raise KeyError if it doesn't exist.""" os.remove(os.path.join(self._path, self._lookup(key))) - # Invalidate cached toc (only on success) - self._last_read = None def discard(self, key): """If the keyed message exists, remove it.""" @@ -323,8 +327,6 @@ if isinstance(message, MaildirMessage): os.utime(new_path, (os.path.getatime(new_path), message.get_date())) - # Invalidate cached toc - self._last_read = None def get_message(self, key): """Return a Message representation or raise a KeyError.""" @@ -380,8 +382,8 @@ def flush(self): """Write any pending changes to disk.""" # Maildir changes are always written immediately, so there's nothing - # to do except invalidate our cached toc. - self._last_read = None + # to do. + pass def lock(self): """Lock the mailbox.""" @@ -479,36 +481,39 @@ def _refresh(self): """Update table of contents mapping.""" - if self._last_read is not None: - for subdir in ('new', 'cur'): - mtime = os.path.getmtime(os.path.join(self._path, subdir)) - if mtime > self._last_read: - break - else: + # If it has been less than two seconds since the last _refresh() call, + # we have to unconditionally re-read the mailbox just in case it has + # been modified, because os.path.mtime() has a 2 sec resolution in the + # most common worst case (FAT) and a 1 sec resolution typically. This + # results in a few unnecessary re-reads when _refresh() is called + # multiple times in that interval, but once the clock ticks over, we + # will only re-read as needed. Because the filesystem might be being + # served by an independent system with its own clock, we record and + # compare with the mtimes from the filesystem. Because the other + # system's clock might be skewing relative to our clock, we add an + # extra delta to our wait. The default is one tenth second, but is an + # instance variable and so can be adjusted if dealing with a + # particularly skewed or irregular system. + if time.time() - self._last_read > 2 + self._skewfactor: + refresh = False + for subdir in self._toc_mtimes: + mtime = os.path.getmtime(self._paths[subdir]) + if mtime > self._toc_mtimes[subdir]: + refresh = True + self._toc_mtimes[subdir] = mtime + if not refresh: return - - # We record the current time - 1sec so that, if _refresh() is called - # again in the same second, we will always re-read the mailbox - # just in case it's been modified. (os.path.mtime() only has - # 1sec resolution.) This results in a few unnecessary re-reads - # when _refresh() is called multiple times in the same second, - # but once the clock ticks over, we will only re-read as needed. 
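The long comment above spells out the whole heuristic: within two seconds (plus a small skew allowance) of the previous read the directory mtimes are too coarse to be trusted, so the table of contents is re-read unconditionally; after that, a re-read happens only when a recorded mtime actually moves forward. A compact sketch of that policy for a single directory; TocCache is an invented name, not mailbox.Maildir.

    import os
    import time

    class TocCache(object):
        def __init__(self, path):
            self._path = path
            self._mtime = os.path.getmtime(path)
            self._entries = os.listdir(path)
            self._last_read = time.time()
            self._skewfactor = 0.1    # slack for skewed filesystem clocks

        def entries(self):
            if time.time() - self._last_read > 2 + self._skewfactor:
                # mtime can be trusted by now: skip the re-read unless the
                # directory really changed since last time.
                mtime = os.path.getmtime(self._path)
                if mtime <= self._mtime:
                    return self._entries
                self._mtime = mtime
            # Too soon after the last read, or a real change: re-read.
            self._entries = os.listdir(self._path)
            self._last_read = time.time()
            return self._entries

    cache = TocCache('.')
    print(len(cache.entries()))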
- now = time.time() - 1 - + # Refresh toc self._toc = {} - def update_dir (subdir): - path = os.path.join(self._path, subdir) + for subdir in self._toc_mtimes: + path = self._paths[subdir] for entry in os.listdir(path): p = os.path.join(path, entry) if os.path.isdir(p): continue uniq = entry.split(self.colon)[0] self._toc[uniq] = os.path.join(subdir, entry) - - update_dir('new') - update_dir('cur') - - self._last_read = now + self._last_read = time.time() def _lookup(self, key): """Use TOC to return subpath for given key, or raise a KeyError.""" @@ -551,7 +556,7 @@ f = open(self._path, 'wb+') else: raise NoSuchMailboxError(self._path) - elif e.errno == errno.EACCES: + elif e.errno in (errno.EACCES, errno.EROFS): f = open(self._path, 'rb') else: raise @@ -700,9 +705,14 @@ def _append_message(self, message): """Append message to mailbox and return (start, stop) offsets.""" self._file.seek(0, 2) - self._pre_message_hook(self._file) - offsets = self._install_message(message) - self._post_message_hook(self._file) + before = self._file.tell() + try: + self._pre_message_hook(self._file) + offsets = self._install_message(message) + self._post_message_hook(self._file) + except BaseException: + self._file.truncate(before) + raise self._file.flush() self._file_length = self._file.tell() # Record current length of mailbox return offsets @@ -868,18 +878,29 @@ new_key = max(keys) + 1 new_path = os.path.join(self._path, str(new_key)) f = _create_carefully(new_path) + closed = False try: if self._locked: _lock_file(f) try: - self._dump_message(message, f) + try: + self._dump_message(message, f) + except BaseException: + # Unlock and close so it can be deleted on Windows + if self._locked: + _unlock_file(f) + _sync_close(f) + closed = True + os.remove(new_path) + raise if isinstance(message, MHMessage): self._dump_sequences(message, new_key) finally: if self._locked: _unlock_file(f) finally: - _sync_close(f) + if not closed: + _sync_close(f) return new_key def remove(self, key): @@ -1886,7 +1907,7 @@ try: fcntl.lockf(f, fcntl.LOCK_EX | fcntl.LOCK_NB) except IOError, e: - if e.errno in (errno.EAGAIN, errno.EACCES): + if e.errno in (errno.EAGAIN, errno.EACCES, errno.EROFS): raise ExternalClashError('lockf: lock unavailable: %s' % f.name) else: @@ -1896,7 +1917,7 @@ pre_lock = _create_temporary(f.name + '.lock') pre_lock.close() except IOError, e: - if e.errno == errno.EACCES: + if e.errno in (errno.EACCES, errno.EROFS): return # Without write access, just skip dotlocking. 
else: raise diff --git a/lib-python/2.7/msilib/__init__.py b/lib-python/2.7/msilib/__init__.py --- a/lib-python/2.7/msilib/__init__.py +++ b/lib-python/2.7/msilib/__init__.py @@ -173,11 +173,10 @@ add_data(db, table, getattr(module, table)) def make_id(str): - #str = str.replace(".", "_") # colons are allowed - str = str.replace(" ", "_") - str = str.replace("-", "_") - if str[0] in string.digits: - str = "_"+str + identifier_chars = string.ascii_letters + string.digits + "._" + str = "".join([c if c in identifier_chars else "_" for c in str]) + if str[0] in (string.digits + "."): + str = "_" + str assert re.match("^[A-Za-z_][A-Za-z0-9_.]*$", str), "FILE"+str return str @@ -285,19 +284,28 @@ [(feature.id, component)]) def make_short(self, file): + oldfile = file + file = file.replace('+', '_') + file = ''.join(c for c in file if not c in ' "/\[]:;=,') parts = file.split(".") - if len(parts)>1: + if len(parts) > 1: + prefix = "".join(parts[:-1]).upper() suffix = parts[-1].upper() + if not prefix: + prefix = suffix + suffix = None else: + prefix = file.upper() suffix = None - prefix = parts[0].upper() - if len(prefix) <= 8 and (not suffix or len(suffix)<=3): + if len(parts) < 3 and len(prefix) <= 8 and file == oldfile and ( + not suffix or len(suffix) <= 3): if suffix: file = prefix+"."+suffix else: file = prefix - assert file not in self.short_names else: + file = None + if file is None or file in self.short_names: prefix = prefix[:6] if suffix: suffix = suffix[:3] diff --git a/lib-python/2.7/multiprocessing/__init__.py b/lib-python/2.7/multiprocessing/__init__.py --- a/lib-python/2.7/multiprocessing/__init__.py +++ b/lib-python/2.7/multiprocessing/__init__.py @@ -38,6 +38,7 @@ # HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT # LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY # OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __version__ = '0.70a1' @@ -115,8 +116,11 @@ except (ValueError, KeyError): num = 0 elif 'bsd' in sys.platform or sys.platform == 'darwin': + comm = '/sbin/sysctl -n hw.ncpu' + if sys.platform == 'darwin': + comm = '/usr' + comm try: - with os.popen('sysctl -n hw.ncpu') as p: + with os.popen(comm) as p: num = int(p.read()) except ValueError: num = 0 diff --git a/lib-python/2.7/multiprocessing/connection.py b/lib-python/2.7/multiprocessing/connection.py --- a/lib-python/2.7/multiprocessing/connection.py +++ b/lib-python/2.7/multiprocessing/connection.py @@ -3,7 +3,33 @@ # # multiprocessing/connection.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. 
+# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __all__ = [ 'Client', 'Listener', 'Pipe' ] diff --git a/lib-python/2.7/multiprocessing/dummy/__init__.py b/lib-python/2.7/multiprocessing/dummy/__init__.py --- a/lib-python/2.7/multiprocessing/dummy/__init__.py +++ b/lib-python/2.7/multiprocessing/dummy/__init__.py @@ -3,7 +3,33 @@ # # multiprocessing/dummy/__init__.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __all__ = [ diff --git a/lib-python/2.7/multiprocessing/dummy/connection.py b/lib-python/2.7/multiprocessing/dummy/connection.py --- a/lib-python/2.7/multiprocessing/dummy/connection.py +++ b/lib-python/2.7/multiprocessing/dummy/connection.py @@ -3,7 +3,33 @@ # # multiprocessing/dummy/connection.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. 
Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __all__ = [ 'Client', 'Listener', 'Pipe' ] diff --git a/lib-python/2.7/multiprocessing/forking.py b/lib-python/2.7/multiprocessing/forking.py --- a/lib-python/2.7/multiprocessing/forking.py +++ b/lib-python/2.7/multiprocessing/forking.py @@ -3,7 +3,33 @@ # # multiprocessing/forking.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # import os @@ -172,6 +198,7 @@ TERMINATE = 0x10000 WINEXE = (sys.platform == 'win32' and getattr(sys, 'frozen', False)) + WINSERVICE = sys.executable.lower().endswith("pythonservice.exe") exit = win32.ExitProcess close = win32.CloseHandle @@ -181,7 +208,7 @@ # People embedding Python want to modify it. 
# - if sys.executable.lower().endswith('pythonservice.exe'): + if WINSERVICE: _python_exe = os.path.join(sys.exec_prefix, 'python.exe') else: _python_exe = sys.executable @@ -371,7 +398,7 @@ if _logger is not None: d['log_level'] = _logger.getEffectiveLevel() - if not WINEXE: + if not WINEXE and not WINSERVICE: main_path = getattr(sys.modules['__main__'], '__file__', None) if not main_path and sys.argv[0] not in ('', '-c'): main_path = sys.argv[0] diff --git a/lib-python/2.7/multiprocessing/heap.py b/lib-python/2.7/multiprocessing/heap.py --- a/lib-python/2.7/multiprocessing/heap.py +++ b/lib-python/2.7/multiprocessing/heap.py @@ -3,7 +3,33 @@ # # multiprocessing/heap.py # -# Copyright (c) 2007-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # import bisect diff --git a/lib-python/2.7/multiprocessing/managers.py b/lib-python/2.7/multiprocessing/managers.py --- a/lib-python/2.7/multiprocessing/managers.py +++ b/lib-python/2.7/multiprocessing/managers.py @@ -4,7 +4,33 @@ # # multiprocessing/managers.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. 
+# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __all__ = [ 'BaseManager', 'SyncManager', 'BaseProxy', 'Token' ] diff --git a/lib-python/2.7/multiprocessing/pool.py b/lib-python/2.7/multiprocessing/pool.py --- a/lib-python/2.7/multiprocessing/pool.py +++ b/lib-python/2.7/multiprocessing/pool.py @@ -3,7 +3,33 @@ # # multiprocessing/pool.py # -# Copyright (c) 2007-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __all__ = ['Pool'] @@ -269,6 +295,8 @@ while pool._worker_handler._state == RUN and pool._state == RUN: pool._maintain_pool() time.sleep(0.1) + # send sentinel to stop workers + pool._taskqueue.put(None) debug('worker handler exiting') @staticmethod @@ -387,7 +415,6 @@ if self._state == RUN: self._state = CLOSE self._worker_handler._state = CLOSE - self._taskqueue.put(None) def terminate(self): debug('terminating pool') @@ -421,7 +448,6 @@ worker_handler._state = TERMINATE task_handler._state = TERMINATE - taskqueue.put(None) # sentinel debug('helping task handler/workers to finish') cls._help_stuff_finish(inqueue, task_handler, len(pool)) @@ -431,6 +457,11 @@ result_handler._state = TERMINATE outqueue.put(None) # sentinel + # We must wait for the worker handler to exit before terminating + # workers because we don't want workers to be restarted behind our back. 
+ debug('joining worker handler') + worker_handler.join() + # Terminate workers which haven't already finished. if pool and hasattr(pool[0], 'terminate'): debug('terminating workers') diff --git a/lib-python/2.7/multiprocessing/process.py b/lib-python/2.7/multiprocessing/process.py --- a/lib-python/2.7/multiprocessing/process.py +++ b/lib-python/2.7/multiprocessing/process.py @@ -3,7 +3,33 @@ # # multiprocessing/process.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __all__ = ['Process', 'current_process', 'active_children'] diff --git a/lib-python/2.7/multiprocessing/queues.py b/lib-python/2.7/multiprocessing/queues.py --- a/lib-python/2.7/multiprocessing/queues.py +++ b/lib-python/2.7/multiprocessing/queues.py @@ -3,7 +3,33 @@ # # multiprocessing/queues.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. 
IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __all__ = ['Queue', 'SimpleQueue', 'JoinableQueue'] diff --git a/lib-python/2.7/multiprocessing/reduction.py b/lib-python/2.7/multiprocessing/reduction.py --- a/lib-python/2.7/multiprocessing/reduction.py +++ b/lib-python/2.7/multiprocessing/reduction.py @@ -4,7 +4,33 @@ # # multiprocessing/reduction.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __all__ = [] diff --git a/lib-python/2.7/multiprocessing/sharedctypes.py b/lib-python/2.7/multiprocessing/sharedctypes.py --- a/lib-python/2.7/multiprocessing/sharedctypes.py +++ b/lib-python/2.7/multiprocessing/sharedctypes.py @@ -3,7 +3,33 @@ # # multiprocessing/sharedctypes.py # -# Copyright (c) 2007-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. 
+# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # import sys @@ -52,9 +78,11 @@ Returns a ctypes array allocated from shared memory ''' type_ = typecode_to_type.get(typecode_or_type, typecode_or_type) - if isinstance(size_or_initializer, int): + if isinstance(size_or_initializer, (int, long)): type_ = type_ * size_or_initializer - return _new_value(type_) + obj = _new_value(type_) + ctypes.memset(ctypes.addressof(obj), 0, ctypes.sizeof(obj)) + return obj else: type_ = type_ * len(size_or_initializer) result = _new_value(type_) diff --git a/lib-python/2.7/multiprocessing/synchronize.py b/lib-python/2.7/multiprocessing/synchronize.py --- a/lib-python/2.7/multiprocessing/synchronize.py +++ b/lib-python/2.7/multiprocessing/synchronize.py @@ -3,7 +3,33 @@ # # multiprocessing/synchronize.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __all__ = [ diff --git a/lib-python/2.7/multiprocessing/util.py b/lib-python/2.7/multiprocessing/util.py --- a/lib-python/2.7/multiprocessing/util.py +++ b/lib-python/2.7/multiprocessing/util.py @@ -3,7 +3,33 @@ # # multiprocessing/util.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. 
+# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # import itertools diff --git a/lib-python/2.7/netrc.py b/lib-python/2.7/netrc.py --- a/lib-python/2.7/netrc.py +++ b/lib-python/2.7/netrc.py @@ -34,11 +34,19 @@ def _parse(self, file, fp): lexer = shlex.shlex(fp) lexer.wordchars += r"""!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~""" + lexer.commenters = lexer.commenters.replace('#', '') while 1: # Look for a machine, default, or macdef top-level keyword toplevel = tt = lexer.get_token() if not tt: break + elif tt[0] == '#': + # seek to beginning of comment, in case reading the token put + # us on a new line, and then skip the rest of the line. + pos = len(tt) + 1 + lexer.instream.seek(-pos, 1) + lexer.instream.readline() + continue elif tt == 'machine': entryname = lexer.get_token() elif tt == 'default': @@ -64,8 +72,8 @@ self.hosts[entryname] = {} while 1: tt = lexer.get_token() - if (tt=='' or tt == 'machine' or - tt == 'default' or tt =='macdef'): + if (tt.startswith('#') or + tt in {'', 'machine', 'default', 'macdef'}): if password: self.hosts[entryname] = (login, account, password) lexer.push_token(tt) diff --git a/lib-python/2.7/nntplib.py b/lib-python/2.7/nntplib.py --- a/lib-python/2.7/nntplib.py +++ b/lib-python/2.7/nntplib.py @@ -103,7 +103,7 @@ readermode is sometimes necessary if you are connecting to an NNTP server on the local machine and intend to call - reader-specific comamnds, such as `group'. If you get + reader-specific commands, such as `group'. If you get unexpected NNTPPermanentErrors, you might need to set readermode. """ diff --git a/lib-python/2.7/ntpath.py b/lib-python/2.7/ntpath.py --- a/lib-python/2.7/ntpath.py +++ b/lib-python/2.7/ntpath.py @@ -310,7 +310,7 @@ # - $varname is accepted. # - %varname% is accepted. # - varnames can be made out of letters, digits and the characters '_-' -# (though is not verifed in the ${varname} and %varname% cases) +# (though is not verified in the ${varname} and %varname% cases) # XXX With COMMAND.COM you can use any characters in a variable name, # XXX except '^|<>='. 
diff --git a/lib-python/2.7/nturl2path.py b/lib-python/2.7/nturl2path.py --- a/lib-python/2.7/nturl2path.py +++ b/lib-python/2.7/nturl2path.py @@ -25,11 +25,14 @@ error = 'Bad URL: ' + url raise IOError, error drive = comp[0][-1].upper() + path = drive + ':' components = comp[1].split('/') - path = drive + ':' - for comp in components: + for comp in components: if comp: path = path + '\\' + urllib.unquote(comp) + # Issue #11474: url like '/C|/' should convert into 'C:\\' + if path.endswith(':') and url.endswith('/'): + path += '\\' return path def pathname2url(p): diff --git a/lib-python/2.7/numbers.py b/lib-python/2.7/numbers.py --- a/lib-python/2.7/numbers.py +++ b/lib-python/2.7/numbers.py @@ -63,7 +63,7 @@ @abstractproperty def imag(self): - """Retrieve the real component of this number. + """Retrieve the imaginary component of this number. This should subclass Real. """ diff --git a/lib-python/2.7/optparse.py b/lib-python/2.7/optparse.py --- a/lib-python/2.7/optparse.py +++ b/lib-python/2.7/optparse.py @@ -1131,6 +1131,11 @@ prog : string the name of the current program (to override os.path.basename(sys.argv[0])). + description : string + A paragraph of text giving a brief overview of your program. + optparse reformats this paragraph to fit the current terminal + width and prints it when the user requests help (after usage, + but before the list of options). epilog : string paragraph of help text to print after option help diff --git a/lib-python/2.7/pickletools.py b/lib-python/2.7/pickletools.py --- a/lib-python/2.7/pickletools.py +++ b/lib-python/2.7/pickletools.py @@ -1370,7 +1370,7 @@ proto=0, doc="""Read an object from the memo and push it on the stack. - The index of the memo object to push is given by the newline-teriminated + The index of the memo object to push is given by the newline-terminated decimal string following. BINGET and LONG_BINGET are space-optimized versions. """), diff --git a/lib-python/2.7/pkgutil.py b/lib-python/2.7/pkgutil.py --- a/lib-python/2.7/pkgutil.py +++ b/lib-python/2.7/pkgutil.py @@ -11,7 +11,7 @@ __all__ = [ 'get_importer', 'iter_importers', 'get_loader', 'find_loader', - 'walk_packages', 'iter_modules', + 'walk_packages', 'iter_modules', 'get_data', 'ImpImporter', 'ImpLoader', 'read_code', 'extend_path', ] diff --git a/lib-python/2.7/platform.py b/lib-python/2.7/platform.py --- a/lib-python/2.7/platform.py +++ b/lib-python/2.7/platform.py @@ -503,7 +503,7 @@ info = pipe.read() if pipe.close(): raise os.error,'command failed' - # XXX How can I supress shell errors from being written + # XXX How can I suppress shell errors from being written # to stderr ? except os.error,why: #print 'Command %s failed: %s' % (cmd,why) @@ -1448,9 +1448,10 @@ """ Returns a string identifying the Python implementation. Currently, the following implementations are identified: - 'CPython' (C implementation of Python), - 'IronPython' (.NET implementation of Python), - 'Jython' (Java implementation of Python). + 'CPython' (C implementation of Python), + 'IronPython' (.NET implementation of Python), + 'Jython' (Java implementation of Python), + 'PyPy' (Python implementation of Python). """ return _sys_version()[0] diff --git a/lib-python/2.7/pydoc.py b/lib-python/2.7/pydoc.py --- a/lib-python/2.7/pydoc.py +++ b/lib-python/2.7/pydoc.py @@ -156,7 +156,7 @@ no.append(x) return yes, no -def visiblename(name, all=None): +def visiblename(name, all=None, obj=None): """Decide whether to show documentation on a variable.""" # Certain special names are redundant. 
_hidden_names = ('__builtins__', '__doc__', '__file__', '__path__', @@ -164,6 +164,9 @@ if name in _hidden_names: return 0 # Private names are hidden, but special names are displayed. if name.startswith('__') and name.endswith('__'): return 1 + # Namedtuples have public fields and methods with a single leading underscore + if name.startswith('_') and hasattr(obj, '_fields'): + return 1 if all is not None: # only document that which the programmer exported in __all__ return name in all @@ -475,9 +478,9 @@ def multicolumn(self, list, format, cols=4): """Format a list of items into a multi-column list.""" result = '' - rows = (len(list)+cols-1)/cols + rows = (len(list)+cols-1)//cols for col in range(cols): - result = result + '<td width="%d%%" valign=top>' % (100/cols) + result = result + '<td width="%d%%" valign=top>' % (100//cols) for i in range(rows*col, rows*col+rows): if i < len(list): result = result + format(list[i]) + '<br>
\n' @@ -627,7 +630,7 @@ # if __all__ exists, believe it. Otherwise use old heuristic. if (all is not None or (inspect.getmodule(value) or object) is object): - if visiblename(key, all): + if visiblename(key, all, object): classes.append((key, value)) cdict[key] = cdict[value] = '#' + key for key, value in classes: @@ -643,13 +646,13 @@ # if __all__ exists, believe it. Otherwise use old heuristic. if (all is not None or inspect.isbuiltin(value) or inspect.getmodule(value) is object): - if visiblename(key, all): + if visiblename(key, all, object): funcs.append((key, value)) fdict[key] = '#-' + key if inspect.isfunction(value): fdict[value] = fdict[key] data = [] for key, value in inspect.getmembers(object, isdata): - if visiblename(key, all): + if visiblename(key, all, object): data.append((key, value)) doc = self.markup(getdoc(object), self.preformat, fdict, cdict) @@ -773,7 +776,7 @@ push('\n') return attrs - attrs = filter(lambda data: visiblename(data[0]), + attrs = filter(lambda data: visiblename(data[0], obj=object), classify_class_attrs(object)) mdict = {} for key, kind, homecls, value in attrs: @@ -1042,18 +1045,18 @@ # if __all__ exists, believe it. Otherwise use old heuristic. if (all is not None or (inspect.getmodule(value) or object) is object): - if visiblename(key, all): + if visiblename(key, all, object): classes.append((key, value)) funcs = [] for key, value in inspect.getmembers(object, inspect.isroutine): # if __all__ exists, believe it. Otherwise use old heuristic. if (all is not None or inspect.isbuiltin(value) or inspect.getmodule(value) is object): - if visiblename(key, all): + if visiblename(key, all, object): funcs.append((key, value)) data = [] for key, value in inspect.getmembers(object, isdata): - if visiblename(key, all): + if visiblename(key, all, object): data.append((key, value)) modpkgs = [] @@ -1113,7 +1116,7 @@ result = result + self.section('CREDITS', str(object.__credits__)) return result - def docclass(self, object, name=None, mod=None): + def docclass(self, object, name=None, mod=None, *ignored): """Produce text documentation for a given class object.""" realname = object.__name__ name = name or realname @@ -1186,7 +1189,7 @@ name, mod, maxlen=70, doc=doc) + '\n') return attrs - attrs = filter(lambda data: visiblename(data[0]), + attrs = filter(lambda data: visiblename(data[0], obj=object), classify_class_attrs(object)) while attrs: if mro: @@ -1718,8 +1721,9 @@ return '' return '' - def __call__(self, request=None): - if request is not None: + _GoInteractive = object() + def __call__(self, request=_GoInteractive): + if request is not self._GoInteractive: self.help(request) else: self.intro() diff --git a/lib-python/2.7/pydoc_data/topics.py b/lib-python/2.7/pydoc_data/topics.py --- a/lib-python/2.7/pydoc_data/topics.py +++ b/lib-python/2.7/pydoc_data/topics.py @@ -1,16 +1,16 @@ -# Autogenerated by Sphinx on Sat Jul 3 08:52:04 2010 +# Autogenerated by Sphinx on Sat Jun 11 09:49:30 2011 topics = {'assert': u'\nThe ``assert`` statement\n************************\n\nAssert statements are a convenient way to insert debugging assertions\ninto a program:\n\n assert_stmt ::= "assert" expression ["," expression]\n\nThe simple form, ``assert expression``, is equivalent to\n\n if __debug__:\n if not expression: raise AssertionError\n\nThe extended form, ``assert expression1, expression2``, is equivalent\nto\n\n if __debug__:\n if not expression1: raise AssertionError(expression2)\n\nThese equivalences assume that ``__debug__`` and ``AssertionError``\nrefer to the 
built-in variables with those names. In the current\nimplementation, the built-in variable ``__debug__`` is ``True`` under\nnormal circumstances, ``False`` when optimization is requested\n(command line option -O). The current code generator emits no code\nfor an assert statement when optimization is requested at compile\ntime. Note that it is unnecessary to include the source code for the\nexpression that failed in the error message; it will be displayed as\npart of the stack trace.\n\nAssignments to ``__debug__`` are illegal. The value for the built-in\nvariable is determined when the interpreter starts.\n', - 'assignment': u'\nAssignment statements\n*********************\n\nAssignment statements are used to (re)bind names to values and to\nmodify attributes or items of mutable objects:\n\n assignment_stmt ::= (target_list "=")+ (expression_list | yield_expression)\n target_list ::= target ("," target)* [","]\n target ::= identifier\n | "(" target_list ")"\n | "[" target_list "]"\n | attributeref\n | subscription\n | slicing\n\n(See section *Primaries* for the syntax definitions for the last three\nsymbols.)\n\nAn assignment statement evaluates the expression list (remember that\nthis can be a single expression or a comma-separated list, the latter\nyielding a tuple) and assigns the single resulting object to each of\nthe target lists, from left to right.\n\nAssignment is defined recursively depending on the form of the target\n(list). When a target is part of a mutable object (an attribute\nreference, subscription or slicing), the mutable object must\nultimately perform the assignment and decide about its validity, and\nmay raise an exception if the assignment is unacceptable. The rules\nobserved by various types and the exceptions raised are given with the\ndefinition of the object types (see section *The standard type\nhierarchy*).\n\nAssignment of an object to a target list is recursively defined as\nfollows.\n\n* If the target list is a single target: The object is assigned to\n that target.\n\n* If the target list is a comma-separated list of targets: The object\n must be an iterable with the same number of items as there are\n targets in the target list, and the items are assigned, from left to\n right, to the corresponding targets. (This rule is relaxed as of\n Python 1.5; in earlier versions, the object had to be a tuple.\n Since strings are sequences, an assignment like ``a, b = "xy"`` is\n now legal as long as the string has the right length.)\n\nAssignment of an object to a single target is recursively defined as\nfollows.\n\n* If the target is an identifier (name):\n\n * If the name does not occur in a ``global`` statement in the\n current code block: the name is bound to the object in the current\n local namespace.\n\n * Otherwise: the name is bound to the object in the current global\n namespace.\n\n The name is rebound if it was already bound. This may cause the\n reference count for the object previously bound to the name to reach\n zero, causing the object to be deallocated and its destructor (if it\n has one) to be called.\n\n* If the target is a target list enclosed in parentheses or in square\n brackets: The object must be an iterable with the same number of\n items as there are targets in the target list, and its items are\n assigned, from left to right, to the corresponding targets.\n\n* If the target is an attribute reference: The primary expression in\n the reference is evaluated. 
It should yield an object with\n assignable attributes; if this is not the case, ``TypeError`` is\n raised. That object is then asked to assign the assigned object to\n the given attribute; if it cannot perform the assignment, it raises\n an exception (usually but not necessarily ``AttributeError``).\n\n Note: If the object is a class instance and the attribute reference\n occurs on both sides of the assignment operator, the RHS expression,\n ``a.x`` can access either an instance attribute or (if no instance\n attribute exists) a class attribute. The LHS target ``a.x`` is\n always set as an instance attribute, creating it if necessary.\n Thus, the two occurrences of ``a.x`` do not necessarily refer to the\n same attribute: if the RHS expression refers to a class attribute,\n the LHS creates a new instance attribute as the target of the\n assignment:\n\n class Cls:\n x = 3 # class variable\n inst = Cls()\n inst.x = inst.x + 1 # writes inst.x as 4 leaving Cls.x as 3\n\n This description does not necessarily apply to descriptor\n attributes, such as properties created with ``property()``.\n\n* If the target is a subscription: The primary expression in the\n reference is evaluated. It should yield either a mutable sequence\n object (such as a list) or a mapping object (such as a dictionary).\n Next, the subscript expression is evaluated.\n\n If the primary is a mutable sequence object (such as a list), the\n subscript must yield a plain integer. If it is negative, the\n sequence\'s length is added to it. The resulting value must be a\n nonnegative integer less than the sequence\'s length, and the\n sequence is asked to assign the assigned object to its item with\n that index. If the index is out of range, ``IndexError`` is raised\n (assignment to a subscripted sequence cannot add new items to a\n list).\n\n If the primary is a mapping object (such as a dictionary), the\n subscript must have a type compatible with the mapping\'s key type,\n and the mapping is then asked to create a key/datum pair which maps\n the subscript to the assigned object. This can either replace an\n existing key/value pair with the same key value, or insert a new\n key/value pair (if no key with the same value existed).\n\n* If the target is a slicing: The primary expression in the reference\n is evaluated. It should yield a mutable sequence object (such as a\n list). The assigned object should be a sequence object of the same\n type. Next, the lower and upper bound expressions are evaluated,\n insofar they are present; defaults are zero and the sequence\'s\n length. The bounds should evaluate to (small) integers. If either\n bound is negative, the sequence\'s length is added to it. The\n resulting bounds are clipped to lie between zero and the sequence\'s\n length, inclusive. Finally, the sequence object is asked to replace\n the slice with the items of the assigned sequence. 
The length of\n the slice may be different from the length of the assigned sequence,\n thus changing the length of the target sequence, if the object\n allows it.\n\n**CPython implementation detail:** In the current implementation, the\nsyntax for targets is taken to be the same as for expressions, and\ninvalid syntax is rejected during the code generation phase, causing\nless detailed error messages.\n\nWARNING: Although the definition of assignment implies that overlaps\nbetween the left-hand side and the right-hand side are \'safe\' (for\nexample ``a, b = b, a`` swaps two variables), overlaps *within* the\ncollection of assigned-to variables are not safe! For instance, the\nfollowing program prints ``[0, 2]``:\n\n x = [0, 1]\n i = 0\n i, x[i] = 1, 2\n print x\n\n\nAugmented assignment statements\n===============================\n\nAugmented assignment is the combination, in a single statement, of a\nbinary operation and an assignment statement:\n\n augmented_assignment_stmt ::= augtarget augop (expression_list | yield_expression)\n augtarget ::= identifier | attributeref | subscription | slicing\n augop ::= "+=" | "-=" | "*=" | "/=" | "//=" | "%=" | "**="\n | ">>=" | "<<=" | "&=" | "^=" | "|="\n\n(See section *Primaries* for the syntax definitions for the last three\nsymbols.)\n\nAn augmented assignment evaluates the target (which, unlike normal\nassignment statements, cannot be an unpacking) and the expression\nlist, performs the binary operation specific to the type of assignment\non the two operands, and assigns the result to the original target.\nThe target is only evaluated once.\n\nAn augmented assignment expression like ``x += 1`` can be rewritten as\n``x = x + 1`` to achieve a similar, but not exactly equal effect. In\nthe augmented version, ``x`` is only evaluated once. Also, when\npossible, the actual operation is performed *in-place*, meaning that\nrather than creating a new object and assigning that to the target,\nthe old object is modified instead.\n\nWith the exception of assigning to tuples and multiple targets in a\nsingle statement, the assignment done by augmented assignment\nstatements is handled the same way as normal assignments. Similarly,\nwith the exception of the possible *in-place* behavior, the binary\noperation performed by augmented assignment is the same as the normal\nbinary operations.\n\nFor targets which are attribute references, the same *caveat about\nclass and instance attributes* applies as for regular assignments.\n', + 'assignment': u'\nAssignment statements\n*********************\n\nAssignment statements are used to (re)bind names to values and to\nmodify attributes or items of mutable objects:\n\n assignment_stmt ::= (target_list "=")+ (expression_list | yield_expression)\n target_list ::= target ("," target)* [","]\n target ::= identifier\n | "(" target_list ")"\n | "[" target_list "]"\n | attributeref\n | subscription\n | slicing\n\n(See section *Primaries* for the syntax definitions for the last three\nsymbols.)\n\nAn assignment statement evaluates the expression list (remember that\nthis can be a single expression or a comma-separated list, the latter\nyielding a tuple) and assigns the single resulting object to each of\nthe target lists, from left to right.\n\nAssignment is defined recursively depending on the form of the target\n(list). 
When a target is part of a mutable object (an attribute\nreference, subscription or slicing), the mutable object must\nultimately perform the assignment and decide about its validity, and\nmay raise an exception if the assignment is unacceptable. The rules\nobserved by various types and the exceptions raised are given with the\ndefinition of the object types (see section *The standard type\nhierarchy*).\n\nAssignment of an object to a target list is recursively defined as\nfollows.\n\n* If the target list is a single target: The object is assigned to\n that target.\n\n* If the target list is a comma-separated list of targets: The object\n must be an iterable with the same number of items as there are\n targets in the target list, and the items are assigned, from left to\n right, to the corresponding targets.\n\nAssignment of an object to a single target is recursively defined as\nfollows.\n\n* If the target is an identifier (name):\n\n * If the name does not occur in a ``global`` statement in the\n current code block: the name is bound to the object in the current\n local namespace.\n\n * Otherwise: the name is bound to the object in the current global\n namespace.\n\n The name is rebound if it was already bound. This may cause the\n reference count for the object previously bound to the name to reach\n zero, causing the object to be deallocated and its destructor (if it\n has one) to be called.\n\n* If the target is a target list enclosed in parentheses or in square\n brackets: The object must be an iterable with the same number of\n items as there are targets in the target list, and its items are\n assigned, from left to right, to the corresponding targets.\n\n* If the target is an attribute reference: The primary expression in\n the reference is evaluated. It should yield an object with\n assignable attributes; if this is not the case, ``TypeError`` is\n raised. That object is then asked to assign the assigned object to\n the given attribute; if it cannot perform the assignment, it raises\n an exception (usually but not necessarily ``AttributeError``).\n\n Note: If the object is a class instance and the attribute reference\n occurs on both sides of the assignment operator, the RHS expression,\n ``a.x`` can access either an instance attribute or (if no instance\n attribute exists) a class attribute. The LHS target ``a.x`` is\n always set as an instance attribute, creating it if necessary.\n Thus, the two occurrences of ``a.x`` do not necessarily refer to the\n same attribute: if the RHS expression refers to a class attribute,\n the LHS creates a new instance attribute as the target of the\n assignment:\n\n class Cls:\n x = 3 # class variable\n inst = Cls()\n inst.x = inst.x + 1 # writes inst.x as 4 leaving Cls.x as 3\n\n This description does not necessarily apply to descriptor\n attributes, such as properties created with ``property()``.\n\n* If the target is a subscription: The primary expression in the\n reference is evaluated. It should yield either a mutable sequence\n object (such as a list) or a mapping object (such as a dictionary).\n Next, the subscript expression is evaluated.\n\n If the primary is a mutable sequence object (such as a list), the\n subscript must yield a plain integer. If it is negative, the\n sequence\'s length is added to it. The resulting value must be a\n nonnegative integer less than the sequence\'s length, and the\n sequence is asked to assign the assigned object to its item with\n that index. 
If the index is out of range, ``IndexError`` is raised\n (assignment to a subscripted sequence cannot add new items to a\n list).\n\n If the primary is a mapping object (such as a dictionary), the\n subscript must have a type compatible with the mapping\'s key type,\n and the mapping is then asked to create a key/datum pair which maps\n the subscript to the assigned object. This can either replace an\n existing key/value pair with the same key value, or insert a new\n key/value pair (if no key with the same value existed).\n\n* If the target is a slicing: The primary expression in the reference\n is evaluated. It should yield a mutable sequence object (such as a\n list). The assigned object should be a sequence object of the same\n type. Next, the lower and upper bound expressions are evaluated,\n insofar they are present; defaults are zero and the sequence\'s\n length. The bounds should evaluate to (small) integers. If either\n bound is negative, the sequence\'s length is added to it. The\n resulting bounds are clipped to lie between zero and the sequence\'s\n length, inclusive. Finally, the sequence object is asked to replace\n the slice with the items of the assigned sequence. The length of\n the slice may be different from the length of the assigned sequence,\n thus changing the length of the target sequence, if the object\n allows it.\n\n**CPython implementation detail:** In the current implementation, the\nsyntax for targets is taken to be the same as for expressions, and\ninvalid syntax is rejected during the code generation phase, causing\nless detailed error messages.\n\nWARNING: Although the definition of assignment implies that overlaps\nbetween the left-hand side and the right-hand side are \'safe\' (for\nexample ``a, b = b, a`` swaps two variables), overlaps *within* the\ncollection of assigned-to variables are not safe! For instance, the\nfollowing program prints ``[0, 2]``:\n\n x = [0, 1]\n i = 0\n i, x[i] = 1, 2\n print x\n\n\nAugmented assignment statements\n===============================\n\nAugmented assignment is the combination, in a single statement, of a\nbinary operation and an assignment statement:\n\n augmented_assignment_stmt ::= augtarget augop (expression_list | yield_expression)\n augtarget ::= identifier | attributeref | subscription | slicing\n augop ::= "+=" | "-=" | "*=" | "/=" | "//=" | "%=" | "**="\n | ">>=" | "<<=" | "&=" | "^=" | "|="\n\n(See section *Primaries* for the syntax definitions for the last three\nsymbols.)\n\nAn augmented assignment evaluates the target (which, unlike normal\nassignment statements, cannot be an unpacking) and the expression\nlist, performs the binary operation specific to the type of assignment\non the two operands, and assigns the result to the original target.\nThe target is only evaluated once.\n\nAn augmented assignment expression like ``x += 1`` can be rewritten as\n``x = x + 1`` to achieve a similar, but not exactly equal effect. In\nthe augmented version, ``x`` is only evaluated once. Also, when\npossible, the actual operation is performed *in-place*, meaning that\nrather than creating a new object and assigning that to the target,\nthe old object is modified instead.\n\nWith the exception of assigning to tuples and multiple targets in a\nsingle statement, the assignment done by augmented assignment\nstatements is handled the same way as normal assignments. 
Similarly,\nwith the exception of the possible *in-place* behavior, the binary\noperation performed by augmented assignment is the same as the normal\nbinary operations.\n\nFor targets which are attribute references, the same *caveat about\nclass and instance attributes* applies as for regular assignments.\n', 'atom-identifiers': u'\nIdentifiers (Names)\n*******************\n\nAn identifier occurring as an atom is a name. See section\n*Identifiers and keywords* for lexical definition and section *Naming\nand binding* for documentation of naming and binding.\n\nWhen the name is bound to an object, evaluation of the atom yields\nthat object. When a name is not bound, an attempt to evaluate it\nraises a ``NameError`` exception.\n\n**Private name mangling:** When an identifier that textually occurs in\na class definition begins with two or more underscore characters and\ndoes not end in two or more underscores, it is considered a *private\nname* of that class. Private names are transformed to a longer form\nbefore code is generated for them. The transformation inserts the\nclass name in front of the name, with leading underscores removed, and\na single underscore inserted in front of the class name. For example,\nthe identifier ``__spam`` occurring in a class named ``Ham`` will be\ntransformed to ``_Ham__spam``. This transformation is independent of\nthe syntactical context in which the identifier is used. If the\ntransformed name is extremely long (longer than 255 characters),\nimplementation defined truncation may happen. If the class name\nconsists only of underscores, no transformation is done.\n', 'atom-literals': u"\nLiterals\n********\n\nPython supports string literals and various numeric literals:\n\n literal ::= stringliteral | integer | longinteger\n | floatnumber | imagnumber\n\nEvaluation of a literal yields an object of the given type (string,\ninteger, long integer, floating point number, complex number) with the\ngiven value. The value may be approximated in the case of floating\npoint and imaginary (complex) literals. See section *Literals* for\ndetails.\n\nAll literals correspond to immutable data types, and hence the\nobject's identity is less important than its value. Multiple\nevaluations of literals with the same value (either the same\noccurrence in the program text or a different occurrence) may obtain\nthe same object or a different object with the same value.\n", - 'attribute-access': u'\nCustomizing attribute access\n****************************\n\nThe following methods can be defined to customize the meaning of\nattribute access (use of, assignment to, or deletion of ``x.name``)\nfor class instances.\n\nobject.__getattr__(self, name)\n\n Called when an attribute lookup has not found the attribute in the\n usual places (i.e. it is not an instance attribute nor is it found\n in the class tree for ``self``). ``name`` is the attribute name.\n This method should return the (computed) attribute value or raise\n an ``AttributeError`` exception.\n\n Note that if the attribute is found through the normal mechanism,\n ``__getattr__()`` is not called. (This is an intentional asymmetry\n between ``__getattr__()`` and ``__setattr__()``.) This is done both\n for efficiency reasons and because otherwise ``__getattr__()``\n would have no way to access other attributes of the instance. Note\n that at least for instance variables, you can fake total control by\n not inserting any values in the instance attribute dictionary (but\n instead inserting them in another object). 
See the\n ``__getattribute__()`` method below for a way to actually get total\n control in new-style classes.\n\nobject.__setattr__(self, name, value)\n\n Called when an attribute assignment is attempted. This is called\n instead of the normal mechanism (i.e. store the value in the\n instance dictionary). *name* is the attribute name, *value* is the\n value to be assigned to it.\n\n If ``__setattr__()`` wants to assign to an instance attribute, it\n should not simply execute ``self.name = value`` --- this would\n cause a recursive call to itself. Instead, it should insert the\n value in the dictionary of instance attributes, e.g.,\n ``self.__dict__[name] = value``. For new-style classes, rather\n than accessing the instance dictionary, it should call the base\n class method with the same name, for example,\n ``object.__setattr__(self, name, value)``.\n\nobject.__delattr__(self, name)\n\n Like ``__setattr__()`` but for attribute deletion instead of\n assignment. This should only be implemented if ``del obj.name`` is\n meaningful for the object.\n\n\nMore attribute access for new-style classes\n===========================================\n\nThe following methods only apply to new-style classes.\n\nobject.__getattribute__(self, name)\n\n Called unconditionally to implement attribute accesses for\n instances of the class. If the class also defines\n ``__getattr__()``, the latter will not be called unless\n ``__getattribute__()`` either calls it explicitly or raises an\n ``AttributeError``. This method should return the (computed)\n attribute value or raise an ``AttributeError`` exception. In order\n to avoid infinite recursion in this method, its implementation\n should always call the base class method with the same name to\n access any attributes it needs, for example,\n ``object.__getattribute__(self, name)``.\n\n Note: This method may still be bypassed when looking up special methods\n as the result of implicit invocation via language syntax or\n built-in functions. See *Special method lookup for new-style\n classes*.\n\n\nImplementing Descriptors\n========================\n\nThe following methods only apply when an instance of the class\ncontaining the method (a so-called *descriptor* class) appears in the\nclass dictionary of another new-style class, known as the *owner*\nclass. In the examples below, "the attribute" refers to the attribute\nwhose name is the key of the property in the owner class\'\n``__dict__``. Descriptors can only be implemented as new-style\nclasses themselves.\n\nobject.__get__(self, instance, owner)\n\n Called to get the attribute of the owner class (class attribute\n access) or of an instance of that class (instance attribute\n access). *owner* is always the owner class, while *instance* is the\n instance that the attribute was accessed through, or ``None`` when\n the attribute is accessed through the *owner*. This method should\n return the (computed) attribute value or raise an\n ``AttributeError`` exception.\n\nobject.__set__(self, instance, value)\n\n Called to set the attribute on an instance *instance* of the owner\n class to a new value, *value*.\n\nobject.__delete__(self, instance)\n\n Called to delete the attribute on an instance *instance* of the\n owner class.\n\n\nInvoking Descriptors\n====================\n\nIn general, a descriptor is an object attribute with "binding\nbehavior", one whose attribute access has been overridden by methods\nin the descriptor protocol: ``__get__()``, ``__set__()``, and\n``__delete__()``. 
If any of those methods are defined for an object,\nit is said to be a descriptor.\n\nThe default behavior for attribute access is to get, set, or delete\nthe attribute from an object\'s dictionary. For instance, ``a.x`` has a\nlookup chain starting with ``a.__dict__[\'x\']``, then\n``type(a).__dict__[\'x\']``, and continuing through the base classes of\n``type(a)`` excluding metaclasses.\n\nHowever, if the looked-up value is an object defining one of the\ndescriptor methods, then Python may override the default behavior and\ninvoke the descriptor method instead. Where this occurs in the\nprecedence chain depends on which descriptor methods were defined and\nhow they were called. Note that descriptors are only invoked for new\nstyle objects or classes (ones that subclass ``object()`` or\n``type()``).\n\nThe starting point for descriptor invocation is a binding, ``a.x``.\nHow the arguments are assembled depends on ``a``:\n\nDirect Call\n The simplest and least common call is when user code directly\n invokes a descriptor method: ``x.__get__(a)``.\n\nInstance Binding\n If binding to a new-style object instance, ``a.x`` is transformed\n into the call: ``type(a).__dict__[\'x\'].__get__(a, type(a))``.\n\nClass Binding\n If binding to a new-style class, ``A.x`` is transformed into the\n call: ``A.__dict__[\'x\'].__get__(None, A)``.\n\nSuper Binding\n If ``a`` is an instance of ``super``, then the binding ``super(B,\n obj).m()`` searches ``obj.__class__.__mro__`` for the base class\n ``A`` immediately preceding ``B`` and then invokes the descriptor\n with the call: ``A.__dict__[\'m\'].__get__(obj, A)``.\n\nFor instance bindings, the precedence of descriptor invocation depends\non the which descriptor methods are defined. A descriptor can define\nany combination of ``__get__()``, ``__set__()`` and ``__delete__()``.\nIf it does not define ``__get__()``, then accessing the attribute will\nreturn the descriptor object itself unless there is a value in the\nobject\'s instance dictionary. If the descriptor defines ``__set__()``\nand/or ``__delete__()``, it is a data descriptor; if it defines\nneither, it is a non-data descriptor. Normally, data descriptors\ndefine both ``__get__()`` and ``__set__()``, while non-data\ndescriptors have just the ``__get__()`` method. Data descriptors with\n``__set__()`` and ``__get__()`` defined always override a redefinition\nin an instance dictionary. In contrast, non-data descriptors can be\noverridden by instances.\n\nPython methods (including ``staticmethod()`` and ``classmethod()``)\nare implemented as non-data descriptors. Accordingly, instances can\nredefine and override methods. This allows individual instances to\nacquire behaviors that differ from other instances of the same class.\n\nThe ``property()`` function is implemented as a data descriptor.\nAccordingly, instances cannot override the behavior of a property.\n\n\n__slots__\n=========\n\nBy default, instances of both old and new-style classes have a\ndictionary for attribute storage. This wastes space for objects\nhaving very few instance variables. The space consumption can become\nacute when creating large numbers of instances.\n\nThe default can be overridden by defining *__slots__* in a new-style\nclass definition. The *__slots__* declaration takes a sequence of\ninstance variables and reserves just enough space in each instance to\nhold a value for each variable. 
Space is saved because *__dict__* is\nnot created for each instance.\n\n__slots__\n\n This class variable can be assigned a string, iterable, or sequence\n of strings with variable names used by instances. If defined in a\n new-style class, *__slots__* reserves space for the declared\n variables and prevents the automatic creation of *__dict__* and\n *__weakref__* for each instance.\n\n New in version 2.2.\n\nNotes on using *__slots__*\n\n* When inheriting from a class without *__slots__*, the *__dict__*\n attribute of that class will always be accessible, so a *__slots__*\n definition in the subclass is meaningless.\n\n* Without a *__dict__* variable, instances cannot be assigned new\n variables not listed in the *__slots__* definition. Attempts to\n assign to an unlisted variable name raises ``AttributeError``. If\n dynamic assignment of new variables is desired, then add\n ``\'__dict__\'`` to the sequence of strings in the *__slots__*\n declaration.\n\n Changed in version 2.3: Previously, adding ``\'__dict__\'`` to the\n *__slots__* declaration would not enable the assignment of new\n attributes not specifically listed in the sequence of instance\n variable names.\n\n* Without a *__weakref__* variable for each instance, classes defining\n *__slots__* do not support weak references to its instances. If weak\n reference support is needed, then add ``\'__weakref__\'`` to the\n sequence of strings in the *__slots__* declaration.\n\n Changed in version 2.3: Previously, adding ``\'__weakref__\'`` to the\n *__slots__* declaration would not enable support for weak\n references.\n\n* *__slots__* are implemented at the class level by creating\n descriptors (*Implementing Descriptors*) for each variable name. As\n a result, class attributes cannot be used to set default values for\n instance variables defined by *__slots__*; otherwise, the class\n attribute would overwrite the descriptor assignment.\n\n* The action of a *__slots__* declaration is limited to the class\n where it is defined. As a result, subclasses will have a *__dict__*\n unless they also define *__slots__* (which must only contain names\n of any *additional* slots).\n\n* If a class defines a slot also defined in a base class, the instance\n variable defined by the base class slot is inaccessible (except by\n retrieving its descriptor directly from the base class). This\n renders the meaning of the program undefined. In the future, a\n check may be added to prevent this.\n\n* Nonempty *__slots__* does not work for classes derived from\n "variable-length" built-in types such as ``long``, ``str`` and\n ``tuple``.\n\n* Any non-string iterable may be assigned to *__slots__*. Mappings may\n also be used; however, in the future, special meaning may be\n assigned to the values corresponding to each key.\n\n* *__class__* assignment works only if both classes have the same\n *__slots__*.\n\n Changed in version 2.6: Previously, *__class__* assignment raised an\n error if either new or old class had *__slots__*.\n', + 'attribute-access': u'\nCustomizing attribute access\n****************************\n\nThe following methods can be defined to customize the meaning of\nattribute access (use of, assignment to, or deletion of ``x.name``)\nfor class instances.\n\nobject.__getattr__(self, name)\n\n Called when an attribute lookup has not found the attribute in the\n usual places (i.e. it is not an instance attribute nor is it found\n in the class tree for ``self``). 
``name`` is the attribute name.\n This method should return the (computed) attribute value or raise\n an ``AttributeError`` exception.\n\n Note that if the attribute is found through the normal mechanism,\n ``__getattr__()`` is not called. (This is an intentional asymmetry\n between ``__getattr__()`` and ``__setattr__()``.) This is done both\n for efficiency reasons and because otherwise ``__getattr__()``\n would have no way to access other attributes of the instance. Note\n that at least for instance variables, you can fake total control by\n not inserting any values in the instance attribute dictionary (but\n instead inserting them in another object). See the\n ``__getattribute__()`` method below for a way to actually get total\n control in new-style classes.\n\nobject.__setattr__(self, name, value)\n\n Called when an attribute assignment is attempted. This is called\n instead of the normal mechanism (i.e. store the value in the\n instance dictionary). *name* is the attribute name, *value* is the\n value to be assigned to it.\n\n If ``__setattr__()`` wants to assign to an instance attribute, it\n should not simply execute ``self.name = value`` --- this would\n cause a recursive call to itself. Instead, it should insert the\n value in the dictionary of instance attributes, e.g.,\n ``self.__dict__[name] = value``. For new-style classes, rather\n than accessing the instance dictionary, it should call the base\n class method with the same name, for example,\n ``object.__setattr__(self, name, value)``.\n\nobject.__delattr__(self, name)\n\n Like ``__setattr__()`` but for attribute deletion instead of\n assignment. This should only be implemented if ``del obj.name`` is\n meaningful for the object.\n\n\nMore attribute access for new-style classes\n===========================================\n\nThe following methods only apply to new-style classes.\n\nobject.__getattribute__(self, name)\n\n Called unconditionally to implement attribute accesses for\n instances of the class. If the class also defines\n ``__getattr__()``, the latter will not be called unless\n ``__getattribute__()`` either calls it explicitly or raises an\n ``AttributeError``. This method should return the (computed)\n attribute value or raise an ``AttributeError`` exception. In order\n to avoid infinite recursion in this method, its implementation\n should always call the base class method with the same name to\n access any attributes it needs, for example,\n ``object.__getattribute__(self, name)``.\n\n Note: This method may still be bypassed when looking up special methods\n as the result of implicit invocation via language syntax or\n built-in functions. See *Special method lookup for new-style\n classes*.\n\n\nImplementing Descriptors\n========================\n\nThe following methods only apply when an instance of the class\ncontaining the method (a so-called *descriptor* class) appears in an\n*owner* class (the descriptor must be in either the owner\'s class\ndictionary or in the class dictionary for one of its parents). In the\nexamples below, "the attribute" refers to the attribute whose name is\nthe key of the property in the owner class\' ``__dict__``.\n\nobject.__get__(self, instance, owner)\n\n Called to get the attribute of the owner class (class attribute\n access) or of an instance of that class (instance attribute\n access). *owner* is always the owner class, while *instance* is the\n instance that the attribute was accessed through, or ``None`` when\n the attribute is accessed through the *owner*. 
This method should\n return the (computed) attribute value or raise an\n ``AttributeError`` exception.\n\nobject.__set__(self, instance, value)\n\n Called to set the attribute on an instance *instance* of the owner\n class to a new value, *value*.\n\nobject.__delete__(self, instance)\n\n Called to delete the attribute on an instance *instance* of the\n owner class.\n\n\nInvoking Descriptors\n====================\n\nIn general, a descriptor is an object attribute with "binding\nbehavior", one whose attribute access has been overridden by methods\nin the descriptor protocol: ``__get__()``, ``__set__()``, and\n``__delete__()``. If any of those methods are defined for an object,\nit is said to be a descriptor.\n\nThe default behavior for attribute access is to get, set, or delete\nthe attribute from an object\'s dictionary. For instance, ``a.x`` has a\nlookup chain starting with ``a.__dict__[\'x\']``, then\n``type(a).__dict__[\'x\']``, and continuing through the base classes of\n``type(a)`` excluding metaclasses.\n\nHowever, if the looked-up value is an object defining one of the\ndescriptor methods, then Python may override the default behavior and\ninvoke the descriptor method instead. Where this occurs in the\nprecedence chain depends on which descriptor methods were defined and\nhow they were called. Note that descriptors are only invoked for new\nstyle objects or classes (ones that subclass ``object()`` or\n``type()``).\n\nThe starting point for descriptor invocation is a binding, ``a.x``.\nHow the arguments are assembled depends on ``a``:\n\nDirect Call\n The simplest and least common call is when user code directly\n invokes a descriptor method: ``x.__get__(a)``.\n\nInstance Binding\n If binding to a new-style object instance, ``a.x`` is transformed\n into the call: ``type(a).__dict__[\'x\'].__get__(a, type(a))``.\n\nClass Binding\n If binding to a new-style class, ``A.x`` is transformed into the\n call: ``A.__dict__[\'x\'].__get__(None, A)``.\n\nSuper Binding\n If ``a`` is an instance of ``super``, then the binding ``super(B,\n obj).m()`` searches ``obj.__class__.__mro__`` for the base class\n ``A`` immediately preceding ``B`` and then invokes the descriptor\n with the call: ``A.__dict__[\'m\'].__get__(obj, obj.__class__)``.\n\nFor instance bindings, the precedence of descriptor invocation depends\non the which descriptor methods are defined. A descriptor can define\nany combination of ``__get__()``, ``__set__()`` and ``__delete__()``.\nIf it does not define ``__get__()``, then accessing the attribute will\nreturn the descriptor object itself unless there is a value in the\nobject\'s instance dictionary. If the descriptor defines ``__set__()``\nand/or ``__delete__()``, it is a data descriptor; if it defines\nneither, it is a non-data descriptor. Normally, data descriptors\ndefine both ``__get__()`` and ``__set__()``, while non-data\ndescriptors have just the ``__get__()`` method. Data descriptors with\n``__set__()`` and ``__get__()`` defined always override a redefinition\nin an instance dictionary. In contrast, non-data descriptors can be\noverridden by instances.\n\nPython methods (including ``staticmethod()`` and ``classmethod()``)\nare implemented as non-data descriptors. Accordingly, instances can\nredefine and override methods. 
This allows individual instances to\nacquire behaviors that differ from other instances of the same class.\n\nThe ``property()`` function is implemented as a data descriptor.\nAccordingly, instances cannot override the behavior of a property.\n\n\n__slots__\n=========\n\nBy default, instances of both old and new-style classes have a\ndictionary for attribute storage. This wastes space for objects\nhaving very few instance variables. The space consumption can become\nacute when creating large numbers of instances.\n\nThe default can be overridden by defining *__slots__* in a new-style\nclass definition. The *__slots__* declaration takes a sequence of\ninstance variables and reserves just enough space in each instance to\nhold a value for each variable. Space is saved because *__dict__* is\nnot created for each instance.\n\n__slots__\n\n This class variable can be assigned a string, iterable, or sequence\n of strings with variable names used by instances. If defined in a\n new-style class, *__slots__* reserves space for the declared\n variables and prevents the automatic creation of *__dict__* and\n *__weakref__* for each instance.\n\n New in version 2.2.\n\nNotes on using *__slots__*\n\n* When inheriting from a class without *__slots__*, the *__dict__*\n attribute of that class will always be accessible, so a *__slots__*\n definition in the subclass is meaningless.\n\n* Without a *__dict__* variable, instances cannot be assigned new\n variables not listed in the *__slots__* definition. Attempts to\n assign to an unlisted variable name raises ``AttributeError``. If\n dynamic assignment of new variables is desired, then add\n ``\'__dict__\'`` to the sequence of strings in the *__slots__*\n declaration.\n\n Changed in version 2.3: Previously, adding ``\'__dict__\'`` to the\n *__slots__* declaration would not enable the assignment of new\n attributes not specifically listed in the sequence of instance\n variable names.\n\n* Without a *__weakref__* variable for each instance, classes defining\n *__slots__* do not support weak references to its instances. If weak\n reference support is needed, then add ``\'__weakref__\'`` to the\n sequence of strings in the *__slots__* declaration.\n\n Changed in version 2.3: Previously, adding ``\'__weakref__\'`` to the\n *__slots__* declaration would not enable support for weak\n references.\n\n* *__slots__* are implemented at the class level by creating\n descriptors (*Implementing Descriptors*) for each variable name. As\n a result, class attributes cannot be used to set default values for\n instance variables defined by *__slots__*; otherwise, the class\n attribute would overwrite the descriptor assignment.\n\n* The action of a *__slots__* declaration is limited to the class\n where it is defined. As a result, subclasses will have a *__dict__*\n unless they also define *__slots__* (which must only contain names\n of any *additional* slots).\n\n* If a class defines a slot also defined in a base class, the instance\n variable defined by the base class slot is inaccessible (except by\n retrieving its descriptor directly from the base class). This\n renders the meaning of the program undefined. In the future, a\n check may be added to prevent this.\n\n* Nonempty *__slots__* does not work for classes derived from\n "variable-length" built-in types such as ``long``, ``str`` and\n ``tuple``.\n\n* Any non-string iterable may be assigned to *__slots__*. 
Mappings may\n also be used; however, in the future, special meaning may be\n assigned to the values corresponding to each key.\n\n* *__class__* assignment works only if both classes have the same\n *__slots__*.\n\n Changed in version 2.6: Previously, *__class__* assignment raised an\n error if either new or old class had *__slots__*.\n', 'attribute-references': u'\nAttribute references\n********************\n\nAn attribute reference is a primary followed by a period and a name:\n\n attributeref ::= primary "." identifier\n\nThe primary must evaluate to an object of a type that supports\nattribute references, e.g., a module, list, or an instance. This\nobject is then asked to produce the attribute whose name is the\nidentifier. If this attribute is not available, the exception\n``AttributeError`` is raised. Otherwise, the type and value of the\nobject produced is determined by the object. Multiple evaluations of\nthe same attribute reference may yield different objects.\n', 'augassign': u'\nAugmented assignment statements\n*******************************\n\nAugmented assignment is the combination, in a single statement, of a\nbinary operation and an assignment statement:\n\n augmented_assignment_stmt ::= augtarget augop (expression_list | yield_expression)\n augtarget ::= identifier | attributeref | subscription | slicing\n augop ::= "+=" | "-=" | "*=" | "/=" | "//=" | "%=" | "**="\n | ">>=" | "<<=" | "&=" | "^=" | "|="\n\n(See section *Primaries* for the syntax definitions for the last three\nsymbols.)\n\nAn augmented assignment evaluates the target (which, unlike normal\nassignment statements, cannot be an unpacking) and the expression\nlist, performs the binary operation specific to the type of assignment\non the two operands, and assigns the result to the original target.\nThe target is only evaluated once.\n\nAn augmented assignment expression like ``x += 1`` can be rewritten as\n``x = x + 1`` to achieve a similar, but not exactly equal effect. In\nthe augmented version, ``x`` is only evaluated once. Also, when\npossible, the actual operation is performed *in-place*, meaning that\nrather than creating a new object and assigning that to the target,\nthe old object is modified instead.\n\nWith the exception of assigning to tuples and multiple targets in a\nsingle statement, the assignment done by augmented assignment\nstatements is handled the same way as normal assignments. Similarly,\nwith the exception of the possible *in-place* behavior, the binary\noperation performed by augmented assignment is the same as the normal\nbinary operations.\n\nFor targets which are attribute references, the same *caveat about\nclass and instance attributes* applies as for regular assignments.\n', 'binary': u'\nBinary arithmetic operations\n****************************\n\nThe binary arithmetic operations have the conventional priority\nlevels. Note that some of these operations also apply to certain non-\nnumeric types. Apart from the power operator, there are only two\nlevels, one for multiplicative operators and one for additive\noperators:\n\n m_expr ::= u_expr | m_expr "*" u_expr | m_expr "//" u_expr | m_expr "/" u_expr\n | m_expr "%" u_expr\n a_expr ::= m_expr | a_expr "+" m_expr | a_expr "-" m_expr\n\nThe ``*`` (multiplication) operator yields the product of its\narguments. The arguments must either both be numbers, or one argument\nmust be an integer (plain or long) and the other must be a sequence.\nIn the former case, the numbers are converted to a common type and\nthen multiplied together. 
In the latter case, sequence repetition is\nperformed; a negative repetition factor yields an empty sequence.\n\nThe ``/`` (division) and ``//`` (floor division) operators yield the\nquotient of their arguments. The numeric arguments are first\nconverted to a common type. Plain or long integer division yields an\ninteger of the same type; the result is that of mathematical division\nwith the \'floor\' function applied to the result. Division by zero\nraises the ``ZeroDivisionError`` exception.\n\nThe ``%`` (modulo) operator yields the remainder from the division of\nthe first argument by the second. The numeric arguments are first\nconverted to a common type. A zero right argument raises the\n``ZeroDivisionError`` exception. The arguments may be floating point\nnumbers, e.g., ``3.14%0.7`` equals ``0.34`` (since ``3.14`` equals\n``4*0.7 + 0.34``.) The modulo operator always yields a result with\nthe same sign as its second operand (or zero); the absolute value of\nthe result is strictly smaller than the absolute value of the second\noperand [2].\n\nThe integer division and modulo operators are connected by the\nfollowing identity: ``x == (x/y)*y + (x%y)``. Integer division and\nmodulo are also connected with the built-in function ``divmod()``:\n``divmod(x, y) == (x/y, x%y)``. These identities don\'t hold for\nfloating point numbers; there similar identities hold approximately\nwhere ``x/y`` is replaced by ``floor(x/y)`` or ``floor(x/y) - 1`` [3].\n\nIn addition to performing the modulo operation on numbers, the ``%``\noperator is also overloaded by string and unicode objects to perform\nstring formatting (also known as interpolation). The syntax for string\nformatting is described in the Python Library Reference, section\n*String Formatting Operations*.\n\nDeprecated since version 2.3: The floor division operator, the modulo\noperator, and the ``divmod()`` function are no longer defined for\ncomplex numbers. Instead, convert to a floating point number using\nthe ``abs()`` function if appropriate.\n\nThe ``+`` (addition) operator yields the sum of its arguments. The\narguments must either both be numbers or both sequences of the same\ntype. In the former case, the numbers are converted to a common type\nand then added together. In the latter case, the sequences are\nconcatenated.\n\nThe ``-`` (subtraction) operator yields the difference of its\narguments. The numeric arguments are first converted to a common\ntype.\n', 'bitwise': u'\nBinary bitwise operations\n*************************\n\nEach of the three bitwise operations has a different priority level:\n\n and_expr ::= shift_expr | and_expr "&" shift_expr\n xor_expr ::= and_expr | xor_expr "^" and_expr\n or_expr ::= xor_expr | or_expr "|" xor_expr\n\nThe ``&`` operator yields the bitwise AND of its arguments, which must\nbe plain or long integers. The arguments are converted to a common\ntype.\n\nThe ``^`` operator yields the bitwise XOR (exclusive OR) of its\narguments, which must be plain or long integers. The arguments are\nconverted to a common type.\n\nThe ``|`` operator yields the bitwise (inclusive) OR of its arguments,\nwhich must be plain or long integers. The arguments are converted to\na common type.\n', 'bltin-code-objects': u'\nCode Objects\n************\n\nCode objects are used by the implementation to represent "pseudo-\ncompiled" executable Python code such as a function body. They differ\nfrom function objects because they don\'t contain a reference to their\nglobal execution environment. 
Code objects are returned by the built-\nin ``compile()`` function and can be extracted from function objects\nthrough their ``func_code`` attribute. See also the ``code`` module.\n\nA code object can be executed or evaluated by passing it (instead of a\nsource string) to the ``exec`` statement or the built-in ``eval()``\nfunction.\n\nSee *The standard type hierarchy* for more information.\n', 'bltin-ellipsis-object': u'\nThe Ellipsis Object\n*******************\n\nThis object is used by extended slice notation (see *Slicings*). It\nsupports no special operations. There is exactly one ellipsis object,\nnamed ``Ellipsis`` (a built-in name).\n\nIt is written as ``Ellipsis``.\n', - 'bltin-file-objects': u'\nFile Objects\n************\n\nFile objects are implemented using C\'s ``stdio`` package and can be\ncreated with the built-in ``open()`` function. File objects are also\nreturned by some other built-in functions and methods, such as\n``os.popen()`` and ``os.fdopen()`` and the ``makefile()`` method of\nsocket objects. Temporary files can be created using the ``tempfile``\nmodule, and high-level file operations such as copying, moving, and\ndeleting files and directories can be achieved with the ``shutil``\nmodule.\n\nWhen a file operation fails for an I/O-related reason, the exception\n``IOError`` is raised. This includes situations where the operation\nis not defined for some reason, like ``seek()`` on a tty device or\nwriting a file opened for reading.\n\nFiles have the following methods:\n\nfile.close()\n\n Close the file. A closed file cannot be read or written any more.\n Any operation which requires that the file be open will raise a\n ``ValueError`` after the file has been closed. Calling ``close()``\n more than once is allowed.\n\n As of Python 2.5, you can avoid having to call this method\n explicitly if you use the ``with`` statement. For example, the\n following code will automatically close *f* when the ``with`` block\n is exited:\n\n from __future__ import with_statement # This isn\'t required in Python 2.6\n\n with open("hello.txt") as f:\n for line in f:\n print line\n\n In older versions of Python, you would have needed to do this to\n get the same effect:\n\n f = open("hello.txt")\n try:\n for line in f:\n print line\n finally:\n f.close()\n\n Note: Not all "file-like" types in Python support use as a context\n manager for the ``with`` statement. If your code is intended to\n work with any file-like object, you can use the function\n ``contextlib.closing()`` instead of using the object directly.\n\nfile.flush()\n\n Flush the internal buffer, like ``stdio``\'s ``fflush()``. 
This may\n be a no-op on some file-like objects.\n\n Note: ``flush()`` does not necessarily write the file\'s data to disk.\n Use ``flush()`` followed by ``os.fsync()`` to ensure this\n behavior.\n\nfile.fileno()\n\n Return the integer "file descriptor" that is used by the underlying\n implementation to request I/O operations from the operating system.\n This can be useful for other, lower level interfaces that use file\n descriptors, such as the ``fcntl`` module or ``os.read()`` and\n friends.\n\n Note: File-like objects which do not have a real file descriptor should\n *not* provide this method!\n\nfile.isatty()\n\n Return ``True`` if the file is connected to a tty(-like) device,\n else ``False``.\n\n Note: If a file-like object is not associated with a real file, this\n method should *not* be implemented.\n\nfile.next()\n\n A file object is its own iterator, for example ``iter(f)`` returns\n *f* (unless *f* is closed). When a file is used as an iterator,\n typically in a ``for`` loop (for example, ``for line in f: print\n line``), the ``next()`` method is called repeatedly. This method\n returns the next input line, or raises ``StopIteration`` when EOF\n is hit when the file is open for reading (behavior is undefined\n when the file is open for writing). In order to make a ``for``\n loop the most efficient way of looping over the lines of a file (a\n very common operation), the ``next()`` method uses a hidden read-\n ahead buffer. As a consequence of using a read-ahead buffer,\n combining ``next()`` with other file methods (like ``readline()``)\n does not work right. However, using ``seek()`` to reposition the\n file to an absolute position will flush the read-ahead buffer.\n\n New in version 2.3.\n\nfile.read([size])\n\n Read at most *size* bytes from the file (less if the read hits EOF\n before obtaining *size* bytes). If the *size* argument is negative\n or omitted, read all data until EOF is reached. The bytes are\n returned as a string object. An empty string is returned when EOF\n is encountered immediately. (For certain files, like ttys, it\n makes sense to continue reading after an EOF is hit.) Note that\n this method may call the underlying C function ``fread()`` more\n than once in an effort to acquire as close to *size* bytes as\n possible. Also note that when in non-blocking mode, less data than\n was requested may be returned, even if no *size* parameter was\n given.\n\n Note: This function is simply a wrapper for the underlying ``fread()``\n C function, and will behave the same in corner cases, such as\n whether the EOF value is cached.\n\nfile.readline([size])\n\n Read one entire line from the file. A trailing newline character\n is kept in the string (but may be absent when a file ends with an\n incomplete line). [5] If the *size* argument is present and non-\n negative, it is a maximum byte count (including the trailing\n newline) and an incomplete line may be returned. An empty string is\n returned *only* when EOF is encountered immediately.\n\n Note: Unlike ``stdio``\'s ``fgets()``, the returned string contains null\n characters (``\'\\0\'``) if they occurred in the input.\n\nfile.readlines([sizehint])\n\n Read until EOF using ``readline()`` and return a list containing\n the lines thus read. If the optional *sizehint* argument is\n present, instead of reading up to EOF, whole lines totalling\n approximately *sizehint* bytes (possibly after rounding up to an\n internal buffer size) are read. 
Objects implementing a file-like\n interface may choose to ignore *sizehint* if it cannot be\n implemented, or cannot be implemented efficiently.\n\nfile.xreadlines()\n\n This method returns the same thing as ``iter(f)``.\n\n New in version 2.1.\n\n Deprecated since version 2.3: Use ``for line in file`` instead.\n\nfile.seek(offset[, whence])\n\n Set the file\'s current position, like ``stdio``\'s ``fseek()``. The\n *whence* argument is optional and defaults to ``os.SEEK_SET`` or\n ``0`` (absolute file positioning); other values are ``os.SEEK_CUR``\n or ``1`` (seek relative to the current position) and\n ``os.SEEK_END`` or ``2`` (seek relative to the file\'s end). There\n is no return value.\n\n For example, ``f.seek(2, os.SEEK_CUR)`` advances the position by\n two and ``f.seek(-3, os.SEEK_END)`` sets the position to the third\n to last.\n\n Note that if the file is opened for appending (mode ``\'a\'`` or\n ``\'a+\'``), any ``seek()`` operations will be undone at the next\n write. If the file is only opened for writing in append mode (mode\n ``\'a\'``), this method is essentially a no-op, but it remains useful\n for files opened in append mode with reading enabled (mode\n ``\'a+\'``). If the file is opened in text mode (without ``\'b\'``),\n only offsets returned by ``tell()`` are legal. Use of other\n offsets causes undefined behavior.\n\n Note that not all file objects are seekable.\n\n Changed in version 2.6: Passing float values as offset has been\n deprecated.\n\nfile.tell()\n\n Return the file\'s current position, like ``stdio``\'s ``ftell()``.\n\n Note: On Windows, ``tell()`` can return illegal values (after an\n ``fgets()``) when reading files with Unix-style line-endings. Use\n binary mode (``\'rb\'``) to circumvent this problem.\n\nfile.truncate([size])\n\n Truncate the file\'s size. If the optional *size* argument is\n present, the file is truncated to (at most) that size. The size\n defaults to the current position. The current file position is not\n changed. Note that if a specified size exceeds the file\'s current\n size, the result is platform-dependent: possibilities include that\n the file may remain unchanged, increase to the specified size as if\n zero-filled, or increase to the specified size with undefined new\n content. Availability: Windows, many Unix variants.\n\nfile.write(str)\n\n Write a string to the file. There is no return value. Due to\n buffering, the string may not actually show up in the file until\n the ``flush()`` or ``close()`` method is called.\n\nfile.writelines(sequence)\n\n Write a sequence of strings to the file. The sequence can be any\n iterable object producing strings, typically a list of strings.\n There is no return value. (The name is intended to match\n ``readlines()``; ``writelines()`` does not add line separators.)\n\nFiles support the iterator protocol. Each iteration returns the same\nresult as ``file.readline()``, and iteration ends when the\n``readline()`` method returns an empty string.\n\nFile objects also offer a number of other interesting attributes.\nThese are not required for file-like objects, but should be\nimplemented if they make sense for the particular object.\n\nfile.closed\n\n bool indicating the current state of the file object. This is a\n read-only attribute; the ``close()`` method changes the value. It\n may not be available on all file-like objects.\n\nfile.encoding\n\n The encoding that this file uses. When Unicode strings are written\n to a file, they will be converted to byte strings using this\n encoding. 
In addition, when the file is connected to a terminal,\n the attribute gives the encoding that the terminal is likely to use\n (that information might be incorrect if the user has misconfigured\n the terminal). The attribute is read-only and may not be present\n on all file-like objects. It may also be ``None``, in which case\n the file uses the system default encoding for converting Unicode\n strings.\n\n New in version 2.3.\n\nfile.errors\n\n The Unicode error handler used along with the encoding.\n\n New in version 2.6.\n\nfile.mode\n\n The I/O mode for the file. If the file was created using the\n ``open()`` built-in function, this will be the value of the *mode*\n parameter. This is a read-only attribute and may not be present on\n all file-like objects.\n\nfile.name\n\n If the file object was created using ``open()``, the name of the\n file. Otherwise, some string that indicates the source of the file\n object, of the form ``<...>``. This is a read-only attribute and\n may not be present on all file-like objects.\n\nfile.newlines\n\n If Python was built with the *--with-universal-newlines* option to\n **configure** (the default) this read-only attribute exists, and\n for files opened in universal newline read mode it keeps track of\n the types of newlines encountered while reading the file. The\n values it can take are ``\'\\r\'``, ``\'\\n\'``, ``\'\\r\\n\'``, ``None``\n (unknown, no newlines read yet) or a tuple containing all the\n newline types seen, to indicate that multiple newline conventions\n were encountered. For files not opened in universal newline read\n mode the value of this attribute will be ``None``.\n\nfile.softspace\n\n Boolean that indicates whether a space character needs to be\n printed before another value when using the ``print`` statement.\n Classes that are trying to simulate a file object should also have\n a writable ``softspace`` attribute, which should be initialized to\n zero. This will be automatic for most classes implemented in\n Python (care may be needed for objects that override attribute\n access); types implemented in C will have to provide a writable\n ``softspace`` attribute.\n\n Note: This attribute is not used to control the ``print`` statement,\n but to allow the implementation of ``print`` to keep track of its\n internal state.\n', + 'bltin-file-objects': u'\nFile Objects\n************\n\nFile objects are implemented using C\'s ``stdio`` package and can be\ncreated with the built-in ``open()`` function. File objects are also\nreturned by some other built-in functions and methods, such as\n``os.popen()`` and ``os.fdopen()`` and the ``makefile()`` method of\nsocket objects. Temporary files can be created using the ``tempfile``\nmodule, and high-level file operations such as copying, moving, and\ndeleting files and directories can be achieved with the ``shutil``\nmodule.\n\nWhen a file operation fails for an I/O-related reason, the exception\n``IOError`` is raised. This includes situations where the operation\nis not defined for some reason, like ``seek()`` on a tty device or\nwriting a file opened for reading.\n\nFiles have the following methods:\n\nfile.close()\n\n Close the file. A closed file cannot be read or written any more.\n Any operation which requires that the file be open will raise a\n ``ValueError`` after the file has been closed. Calling ``close()``\n more than once is allowed.\n\n As of Python 2.5, you can avoid having to call this method\n explicitly if you use the ``with`` statement. 
For example, the\n following code will automatically close *f* when the ``with`` block\n is exited:\n\n from __future__ import with_statement # This isn\'t required in Python 2.6\n\n with open("hello.txt") as f:\n for line in f:\n print line\n\n In older versions of Python, you would have needed to do this to\n get the same effect:\n\n f = open("hello.txt")\n try:\n for line in f:\n print line\n finally:\n f.close()\n\n Note: Not all "file-like" types in Python support use as a context\n manager for the ``with`` statement. If your code is intended to\n work with any file-like object, you can use the function\n ``contextlib.closing()`` instead of using the object directly.\n\nfile.flush()\n\n Flush the internal buffer, like ``stdio``\'s ``fflush()``. This may\n be a no-op on some file-like objects.\n\n Note: ``flush()`` does not necessarily write the file\'s data to disk.\n Use ``flush()`` followed by ``os.fsync()`` to ensure this\n behavior.\n\nfile.fileno()\n\n Return the integer "file descriptor" that is used by the underlying\n implementation to request I/O operations from the operating system.\n This can be useful for other, lower level interfaces that use file\n descriptors, such as the ``fcntl`` module or ``os.read()`` and\n friends.\n\n Note: File-like objects which do not have a real file descriptor should\n *not* provide this method!\n\nfile.isatty()\n\n Return ``True`` if the file is connected to a tty(-like) device,\n else ``False``.\n\n Note: If a file-like object is not associated with a real file, this\n method should *not* be implemented.\n\nfile.next()\n\n A file object is its own iterator, for example ``iter(f)`` returns\n *f* (unless *f* is closed). When a file is used as an iterator,\n typically in a ``for`` loop (for example, ``for line in f: print\n line``), the ``next()`` method is called repeatedly. This method\n returns the next input line, or raises ``StopIteration`` when EOF\n is hit when the file is open for reading (behavior is undefined\n when the file is open for writing). In order to make a ``for``\n loop the most efficient way of looping over the lines of a file (a\n very common operation), the ``next()`` method uses a hidden read-\n ahead buffer. As a consequence of using a read-ahead buffer,\n combining ``next()`` with other file methods (like ``readline()``)\n does not work right. However, using ``seek()`` to reposition the\n file to an absolute position will flush the read-ahead buffer.\n\n New in version 2.3.\n\nfile.read([size])\n\n Read at most *size* bytes from the file (less if the read hits EOF\n before obtaining *size* bytes). If the *size* argument is negative\n or omitted, read all data until EOF is reached. The bytes are\n returned as a string object. An empty string is returned when EOF\n is encountered immediately. (For certain files, like ttys, it\n makes sense to continue reading after an EOF is hit.) Note that\n this method may call the underlying C function ``fread()`` more\n than once in an effort to acquire as close to *size* bytes as\n possible. Also note that when in non-blocking mode, less data than\n was requested may be returned, even if no *size* parameter was\n given.\n\n Note: This function is simply a wrapper for the underlying ``fread()``\n C function, and will behave the same in corner cases, such as\n whether the EOF value is cached.\n\nfile.readline([size])\n\n Read one entire line from the file. A trailing newline character\n is kept in the string (but may be absent when a file ends with an\n incomplete line). 
[5] If the *size* argument is present and non-\n negative, it is a maximum byte count (including the trailing\n newline) and an incomplete line may be returned. When *size* is not\n 0, an empty string is returned *only* when EOF is encountered\n immediately.\n\n Note: Unlike ``stdio``\'s ``fgets()``, the returned string contains null\n characters (``\'\\0\'``) if they occurred in the input.\n\nfile.readlines([sizehint])\n\n Read until EOF using ``readline()`` and return a list containing\n the lines thus read. If the optional *sizehint* argument is\n present, instead of reading up to EOF, whole lines totalling\n approximately *sizehint* bytes (possibly after rounding up to an\n internal buffer size) are read. Objects implementing a file-like\n interface may choose to ignore *sizehint* if it cannot be\n implemented, or cannot be implemented efficiently.\n\nfile.xreadlines()\n\n This method returns the same thing as ``iter(f)``.\n\n New in version 2.1.\n\n Deprecated since version 2.3: Use ``for line in file`` instead.\n\nfile.seek(offset[, whence])\n\n Set the file\'s current position, like ``stdio``\'s ``fseek()``. The\n *whence* argument is optional and defaults to ``os.SEEK_SET`` or\n ``0`` (absolute file positioning); other values are ``os.SEEK_CUR``\n or ``1`` (seek relative to the current position) and\n ``os.SEEK_END`` or ``2`` (seek relative to the file\'s end). There\n is no return value.\n\n For example, ``f.seek(2, os.SEEK_CUR)`` advances the position by\n two and ``f.seek(-3, os.SEEK_END)`` sets the position to the third\n to last.\n\n Note that if the file is opened for appending (mode ``\'a\'`` or\n ``\'a+\'``), any ``seek()`` operations will be undone at the next\n write. If the file is only opened for writing in append mode (mode\n ``\'a\'``), this method is essentially a no-op, but it remains useful\n for files opened in append mode with reading enabled (mode\n ``\'a+\'``). If the file is opened in text mode (without ``\'b\'``),\n only offsets returned by ``tell()`` are legal. Use of other\n offsets causes undefined behavior.\n\n Note that not all file objects are seekable.\n\n Changed in version 2.6: Passing float values as offset has been\n deprecated.\n\nfile.tell()\n\n Return the file\'s current position, like ``stdio``\'s ``ftell()``.\n\n Note: On Windows, ``tell()`` can return illegal values (after an\n ``fgets()``) when reading files with Unix-style line-endings. Use\n binary mode (``\'rb\'``) to circumvent this problem.\n\nfile.truncate([size])\n\n Truncate the file\'s size. If the optional *size* argument is\n present, the file is truncated to (at most) that size. The size\n defaults to the current position. The current file position is not\n changed. Note that if a specified size exceeds the file\'s current\n size, the result is platform-dependent: possibilities include that\n the file may remain unchanged, increase to the specified size as if\n zero-filled, or increase to the specified size with undefined new\n content. Availability: Windows, many Unix variants.\n\nfile.write(str)\n\n Write a string to the file. There is no return value. Due to\n buffering, the string may not actually show up in the file until\n the ``flush()`` or ``close()`` method is called.\n\nfile.writelines(sequence)\n\n Write a sequence of strings to the file. The sequence can be any\n iterable object producing strings, typically a list of strings.\n There is no return value. 
(The name is intended to match\n ``readlines()``; ``writelines()`` does not add line separators.)\n\nFiles support the iterator protocol. Each iteration returns the same\nresult as ``file.readline()``, and iteration ends when the\n``readline()`` method returns an empty string.\n\nFile objects also offer a number of other interesting attributes.\nThese are not required for file-like objects, but should be\nimplemented if they make sense for the particular object.\n\nfile.closed\n\n bool indicating the current state of the file object. This is a\n read-only attribute; the ``close()`` method changes the value. It\n may not be available on all file-like objects.\n\nfile.encoding\n\n The encoding that this file uses. When Unicode strings are written\n to a file, they will be converted to byte strings using this\n encoding. In addition, when the file is connected to a terminal,\n the attribute gives the encoding that the terminal is likely to use\n (that information might be incorrect if the user has misconfigured\n the terminal). The attribute is read-only and may not be present\n on all file-like objects. It may also be ``None``, in which case\n the file uses the system default encoding for converting Unicode\n strings.\n\n New in version 2.3.\n\nfile.errors\n\n The Unicode error handler used along with the encoding.\n\n New in version 2.6.\n\nfile.mode\n\n The I/O mode for the file. If the file was created using the\n ``open()`` built-in function, this will be the value of the *mode*\n parameter. This is a read-only attribute and may not be present on\n all file-like objects.\n\nfile.name\n\n If the file object was created using ``open()``, the name of the\n file. Otherwise, some string that indicates the source of the file\n object, of the form ``<...>``. This is a read-only attribute and\n may not be present on all file-like objects.\n\nfile.newlines\n\n If Python was built with universal newlines enabled (the default)\n this read-only attribute exists, and for files opened in universal\n newline read mode it keeps track of the types of newlines\n encountered while reading the file. The values it can take are\n ``\'\\r\'``, ``\'\\n\'``, ``\'\\r\\n\'``, ``None`` (unknown, no newlines read\n yet) or a tuple containing all the newline types seen, to indicate\n that multiple newline conventions were encountered. For files not\n opened in universal newline read mode the value of this attribute\n will be ``None``.\n\nfile.softspace\n\n Boolean that indicates whether a space character needs to be\n printed before another value when using the ``print`` statement.\n Classes that are trying to simulate a file object should also have\n a writable ``softspace`` attribute, which should be initialized to\n zero. This will be automatic for most classes implemented in\n Python (care may be needed for objects that override attribute\n access); types implemented in C will have to provide a writable\n ``softspace`` attribute.\n\n Note: This attribute is not used to control the ``print`` statement,\n but to allow the implementation of ``print`` to keep track of its\n internal state.\n', 'bltin-null-object': u"\nThe Null Object\n***************\n\nThis object is returned by functions that don't explicitly return a\nvalue. It supports no special operations. There is exactly one null\nobject, named ``None`` (a built-in name).\n\nIt is written as ``None``.\n", 'bltin-type-objects': u"\nType Objects\n************\n\nType objects represent the various object types. 
An object's type is\naccessed by the built-in function ``type()``. There are no special\noperations on types. The standard module ``types`` defines names for\nall standard built-in types.\n\nTypes are written like this: ````.\n", 'booleans': u'\nBoolean operations\n******************\n\n or_test ::= and_test | or_test "or" and_test\n and_test ::= not_test | and_test "and" not_test\n not_test ::= comparison | "not" not_test\n\nIn the context of Boolean operations, and also when expressions are\nused by control flow statements, the following values are interpreted\nas false: ``False``, ``None``, numeric zero of all types, and empty\nstrings and containers (including strings, tuples, lists,\ndictionaries, sets and frozensets). All other values are interpreted\nas true. (See the ``__nonzero__()`` special method for a way to\nchange this.)\n\nThe operator ``not`` yields ``True`` if its argument is false,\n``False`` otherwise.\n\nThe expression ``x and y`` first evaluates *x*; if *x* is false, its\nvalue is returned; otherwise, *y* is evaluated and the resulting value\nis returned.\n\nThe expression ``x or y`` first evaluates *x*; if *x* is true, its\nvalue is returned; otherwise, *y* is evaluated and the resulting value\nis returned.\n\n(Note that neither ``and`` nor ``or`` restrict the value and type they\nreturn to ``False`` and ``True``, but rather return the last evaluated\nargument. This is sometimes useful, e.g., if ``s`` is a string that\nshould be replaced by a default value if it is empty, the expression\n``s or \'foo\'`` yields the desired value. Because ``not`` has to\ninvent a value anyway, it does not bother to return a value of the\nsame type as its argument, so e.g., ``not \'foo\'`` yields ``False``,\nnot ``\'\'``.)\n', @@ -20,39 +20,39 @@ 'class': u'\nClass definitions\n*****************\n\nA class definition defines a class object (see section *The standard\ntype hierarchy*):\n\n classdef ::= "class" classname [inheritance] ":" suite\n inheritance ::= "(" [expression_list] ")"\n classname ::= identifier\n\nA class definition is an executable statement. It first evaluates the\ninheritance list, if present. Each item in the inheritance list\nshould evaluate to a class object or class type which allows\nsubclassing. The class\'s suite is then executed in a new execution\nframe (see section *Naming and binding*), using a newly created local\nnamespace and the original global namespace. (Usually, the suite\ncontains only function definitions.) When the class\'s suite finishes\nexecution, its execution frame is discarded but its local namespace is\nsaved. [4] A class object is then created using the inheritance list\nfor the base classes and the saved local namespace for the attribute\ndictionary. The class name is bound to this class object in the\noriginal local namespace.\n\n**Programmer\'s note:** Variables defined in the class definition are\nclass variables; they are shared by all instances. To create instance\nvariables, they can be set in a method with ``self.name = value``.\nBoth class and instance variables are accessible through the notation\n"``self.name``", and an instance variable hides a class variable with\nthe same name when accessed in this way. Class variables can be used\nas defaults for instance variables, but using mutable values there can\nlead to unexpected results. 
For *new-style class*es, descriptors can\nbe used to create instance variables with different implementation\ndetails.\n\nClass definitions, like function definitions, may be wrapped by one or\nmore *decorator* expressions. The evaluation rules for the decorator\nexpressions are the same as for functions. The result must be a class\nobject, which is then bound to the class name.\n\n-[ Footnotes ]-\n\n[1] The exception is propagated to the invocation stack only if there\n is no ``finally`` clause that negates the exception.\n\n[2] Currently, control "flows off the end" except in the case of an\n exception or the execution of a ``return``, ``continue``, or\n ``break`` statement.\n\n[3] A string literal appearing as the first statement in the function\n body is transformed into the function\'s ``__doc__`` attribute and\n therefore the function\'s *docstring*.\n\n[4] A string literal appearing as the first statement in the class\n body is transformed into the namespace\'s ``__doc__`` item and\n therefore the class\'s *docstring*.\n', 'coercion-rules': u"\nCoercion rules\n**************\n\nThis section used to document the rules for coercion. As the language\nhas evolved, the coercion rules have become hard to document\nprecisely; documenting what one version of one particular\nimplementation does is undesirable. Instead, here are some informal\nguidelines regarding coercion. In Python 3.0, coercion will not be\nsupported.\n\n* If the left operand of a % operator is a string or Unicode object,\n no coercion takes place and the string formatting operation is\n invoked instead.\n\n* It is no longer recommended to define a coercion operation. Mixed-\n mode operations on types that don't define coercion pass the\n original arguments to the operation.\n\n* New-style classes (those derived from ``object``) never invoke the\n ``__coerce__()`` method in response to a binary operator; the only\n time ``__coerce__()`` is invoked is when the built-in function\n ``coerce()`` is called.\n\n* For most intents and purposes, an operator that returns\n ``NotImplemented`` is treated the same as one that is not\n implemented at all.\n\n* Below, ``__op__()`` and ``__rop__()`` are used to signify the\n generic method names corresponding to an operator; ``__iop__()`` is\n used for the corresponding in-place operator. For example, for the\n operator '``+``', ``__add__()`` and ``__radd__()`` are used for the\n left and right variant of the binary operator, and ``__iadd__()``\n for the in-place variant.\n\n* For objects *x* and *y*, first ``x.__op__(y)`` is tried. If this is\n not implemented or returns ``NotImplemented``, ``y.__rop__(x)`` is\n tried. If this is also not implemented or returns\n ``NotImplemented``, a ``TypeError`` exception is raised. But see\n the following exception:\n\n* Exception to the previous item: if the left operand is an instance\n of a built-in type or a new-style class, and the right operand is an\n instance of a proper subclass of that type or class and overrides\n the base's ``__rop__()`` method, the right operand's ``__rop__()``\n method is tried *before* the left operand's ``__op__()`` method.\n\n This is done so that a subclass can completely override binary\n operators. 
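A short, hypothetical sketch of the subclass exception described above:

    class Base(object):
        def __add__(self, other):
            return "Base.__add__"

    class Derived(Base):
        def __radd__(self, other):
            return "Derived.__radd__"

    # The right operand is an instance of a proper subclass that overrides
    # the reflected method, so it is tried before Base.__add__.
    print Base() + Derived()           # Derived.__radd__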
Otherwise, the left operand's ``__op__()`` method would\n always accept the right operand: when an instance of a given class\n is expected, an instance of a subclass of that class is always\n acceptable.\n\n* When either operand type defines a coercion, this coercion is called\n before that type's ``__op__()`` or ``__rop__()`` method is called,\n but no sooner. If the coercion returns an object of a different\n type for the operand whose coercion is invoked, part of the process\n is redone using the new object.\n\n* When an in-place operator (like '``+=``') is used, if the left\n operand implements ``__iop__()``, it is invoked without any\n coercion. When the operation falls back to ``__op__()`` and/or\n ``__rop__()``, the normal coercion rules apply.\n\n* In ``x + y``, if *x* is a sequence that implements sequence\n concatenation, sequence concatenation is invoked.\n\n* In ``x * y``, if one operator is a sequence that implements sequence\n repetition, and the other is an integer (``int`` or ``long``),\n sequence repetition is invoked.\n\n* Rich comparisons (implemented by methods ``__eq__()`` and so on)\n never use coercion. Three-way comparison (implemented by\n ``__cmp__()``) does use coercion under the same conditions as other\n binary operations use it.\n\n* In the current implementation, the built-in numeric types ``int``,\n ``long``, ``float``, and ``complex`` do not use coercion. All these\n types implement a ``__coerce__()`` method, for use by the built-in\n ``coerce()`` function.\n\n Changed in version 2.7.\n", 'comparisons': u'\nComparisons\n***********\n\nUnlike C, all comparison operations in Python have the same priority,\nwhich is lower than that of any arithmetic, shifting or bitwise\noperation. Also unlike C, expressions like ``a < b < c`` have the\ninterpretation that is conventional in mathematics:\n\n comparison ::= or_expr ( comp_operator or_expr )*\n comp_operator ::= "<" | ">" | "==" | ">=" | "<=" | "<>" | "!="\n | "is" ["not"] | ["not"] "in"\n\nComparisons yield boolean values: ``True`` or ``False``.\n\nComparisons can be chained arbitrarily, e.g., ``x < y <= z`` is\nequivalent to ``x < y and y <= z``, except that ``y`` is evaluated\nonly once (but in both cases ``z`` is not evaluated at all when ``x <\ny`` is found to be false).\n\nFormally, if *a*, *b*, *c*, ..., *y*, *z* are expressions and *op1*,\n*op2*, ..., *opN* are comparison operators, then ``a op1 b op2 c ... y\nopN z`` is equivalent to ``a op1 b and b op2 c and ... y opN z``,\nexcept that each expression is evaluated at most once.\n\nNote that ``a op1 b op2 c`` doesn\'t imply any kind of comparison\nbetween *a* and *c*, so that, e.g., ``x < y > z`` is perfectly legal\n(though perhaps not pretty).\n\nThe forms ``<>`` and ``!=`` are equivalent; for consistency with C,\n``!=`` is preferred; where ``!=`` is mentioned below ``<>`` is also\naccepted. The ``<>`` spelling is considered obsolescent.\n\nThe operators ``<``, ``>``, ``==``, ``>=``, ``<=``, and ``!=`` compare\nthe values of two objects. The objects need not have the same type.\nIf both are numbers, they are converted to a common type. Otherwise,\nobjects of different types *always* compare unequal, and are ordered\nconsistently but arbitrarily. 
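The chaining rule described above can be illustrated with a small sketch (the helper function is made up for the example):

    def middle():
        print "evaluating middle()"
        return 5

    # Behaves like 1 < middle() and middle() <= 10, except that middle()
    # is evaluated only once.
    print 1 < middle() <= 10           # prints the message once, then True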
You can control comparison behavior of\nobjects of non-built-in types by defining a ``__cmp__`` method or rich\ncomparison methods like ``__gt__``, described in section *Special\nmethod names*.\n\n(This unusual definition of comparison was used to simplify the\ndefinition of operations like sorting and the ``in`` and ``not in``\noperators. In the future, the comparison rules for objects of\ndifferent types are likely to change.)\n\nComparison of objects of the same type depends on the type:\n\n* Numbers are compared arithmetically.\n\n* Strings are compared lexicographically using the numeric equivalents\n (the result of the built-in function ``ord()``) of their characters.\n Unicode and 8-bit strings are fully interoperable in this behavior.\n [4]\n\n* Tuples and lists are compared lexicographically using comparison of\n corresponding elements. This means that to compare equal, each\n element must compare equal and the two sequences must be of the same\n type and have the same length.\n\n If not equal, the sequences are ordered the same as their first\n differing elements. For example, ``cmp([1,2,x], [1,2,y])`` returns\n the same as ``cmp(x,y)``. If the corresponding element does not\n exist, the shorter sequence is ordered first (for example, ``[1,2] <\n [1,2,3]``).\n\n* Mappings (dictionaries) compare equal if and only if their sorted\n (key, value) lists compare equal. [5] Outcomes other than equality\n are resolved consistently, but are not otherwise defined. [6]\n\n* Most other objects of built-in types compare unequal unless they are\n the same object; the choice whether one object is considered smaller\n or larger than another one is made arbitrarily but consistently\n within one execution of a program.\n\nThe operators ``in`` and ``not in`` test for collection membership.\n``x in s`` evaluates to true if *x* is a member of the collection *s*,\nand false otherwise. ``x not in s`` returns the negation of ``x in\ns``. The collection membership test has traditionally been bound to\nsequences; an object is a member of a collection if the collection is\na sequence and contains an element equal to that object. However, it\nmake sense for many other object types to support membership tests\nwithout being a sequence. In particular, dictionaries (for keys) and\nsets support membership testing.\n\nFor the list and tuple types, ``x in y`` is true if and only if there\nexists an index *i* such that ``x == y[i]`` is true.\n\nFor the Unicode and string types, ``x in y`` is true if and only if\n*x* is a substring of *y*. An equivalent test is ``y.find(x) != -1``.\nNote, *x* and *y* need not be the same type; consequently, ``u\'ab\' in\n\'abc\'`` will return ``True``. Empty strings are always considered to\nbe a substring of any other string, so ``"" in "abc"`` will return\n``True``.\n\nChanged in version 2.3: Previously, *x* was required to be a string of\nlength ``1``.\n\nFor user-defined classes which define the ``__contains__()`` method,\n``x in y`` is true if and only if ``y.__contains__(x)`` is true.\n\nFor user-defined classes which do not define ``__contains__()`` but do\ndefine ``__iter__()``, ``x in y`` is true if some value ``z`` with ``x\n== z`` is produced while iterating over ``y``. 
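A minimal sketch of the ``__contains__()`` and ``__iter__()`` hooks just described (class names invented for illustration):

    class EvenNumbers(object):
        def __contains__(self, x):
            return x % 2 == 0          # consulted first by "in"

    class Digits(object):
        def __iter__(self):
            return iter(range(10))     # "in" falls back to iteration

    print 4 in EvenNumbers()           # True, via __contains__
    print 7 in Digits()                # True, via __iter__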
If an exception is\nraised during the iteration, it is as if ``in`` raised that exception.\n\nLastly, the old-style iteration protocol is tried: if a class defines\n``__getitem__()``, ``x in y`` is true if and only if there is a non-\nnegative integer index *i* such that ``x == y[i]``, and all lower\ninteger indices do not raise ``IndexError`` exception. (If any other\nexception is raised, it is as if ``in`` raised that exception).\n\nThe operator ``not in`` is defined to have the inverse true value of\n``in``.\n\nThe operators ``is`` and ``is not`` test for object identity: ``x is\ny`` is true if and only if *x* and *y* are the same object. ``x is\nnot y`` yields the inverse truth value. [7]\n', - 'compound': u'\nCompound statements\n*******************\n\nCompound statements contain (groups of) other statements; they affect\nor control the execution of those other statements in some way. In\ngeneral, compound statements span multiple lines, although in simple\nincarnations a whole compound statement may be contained in one line.\n\nThe ``if``, ``while`` and ``for`` statements implement traditional\ncontrol flow constructs. ``try`` specifies exception handlers and/or\ncleanup code for a group of statements. Function and class\ndefinitions are also syntactically compound statements.\n\nCompound statements consist of one or more \'clauses.\' A clause\nconsists of a header and a \'suite.\' The clause headers of a\nparticular compound statement are all at the same indentation level.\nEach clause header begins with a uniquely identifying keyword and ends\nwith a colon. A suite is a group of statements controlled by a\nclause. A suite can be one or more semicolon-separated simple\nstatements on the same line as the header, following the header\'s\ncolon, or it can be one or more indented statements on subsequent\nlines. Only the latter form of suite can contain nested compound\nstatements; the following is illegal, mostly because it wouldn\'t be\nclear to which ``if`` clause a following ``else`` clause would belong:\n\n if test1: if test2: print x\n\nAlso note that the semicolon binds tighter than the colon in this\ncontext, so that in the following example, either all or none of the\n``print`` statements are executed:\n\n if x < y < z: print x; print y; print z\n\nSummarizing:\n\n compound_stmt ::= if_stmt\n | while_stmt\n | for_stmt\n | try_stmt\n | with_stmt\n | funcdef\n | classdef\n | decorated\n suite ::= stmt_list NEWLINE | NEWLINE INDENT statement+ DEDENT\n statement ::= stmt_list NEWLINE | compound_stmt\n stmt_list ::= simple_stmt (";" simple_stmt)* [";"]\n\nNote that statements always end in a ``NEWLINE`` possibly followed by\na ``DEDENT``. 
Also note that optional continuation clauses always\nbegin with a keyword that cannot start a statement, thus there are no\nambiguities (the \'dangling ``else``\' problem is solved in Python by\nrequiring nested ``if`` statements to be indented).\n\nThe formatting of the grammar rules in the following sections places\neach clause on a separate line for clarity.\n\n\nThe ``if`` statement\n====================\n\nThe ``if`` statement is used for conditional execution:\n\n if_stmt ::= "if" expression ":" suite\n ( "elif" expression ":" suite )*\n ["else" ":" suite]\n\nIt selects exactly one of the suites by evaluating the expressions one\nby one until one is found to be true (see section *Boolean operations*\nfor the definition of true and false); then that suite is executed\n(and no other part of the ``if`` statement is executed or evaluated).\nIf all expressions are false, the suite of the ``else`` clause, if\npresent, is executed.\n\n\nThe ``while`` statement\n=======================\n\nThe ``while`` statement is used for repeated execution as long as an\nexpression is true:\n\n while_stmt ::= "while" expression ":" suite\n ["else" ":" suite]\n\nThis repeatedly tests the expression and, if it is true, executes the\nfirst suite; if the expression is false (which may be the first time\nit is tested) the suite of the ``else`` clause, if present, is\nexecuted and the loop terminates.\n\nA ``break`` statement executed in the first suite terminates the loop\nwithout executing the ``else`` clause\'s suite. A ``continue``\nstatement executed in the first suite skips the rest of the suite and\ngoes back to testing the expression.\n\n\nThe ``for`` statement\n=====================\n\nThe ``for`` statement is used to iterate over the elements of a\nsequence (such as a string, tuple or list) or other iterable object:\n\n for_stmt ::= "for" target_list "in" expression_list ":" suite\n ["else" ":" suite]\n\nThe expression list is evaluated once; it should yield an iterable\nobject. An iterator is created for the result of the\n``expression_list``. The suite is then executed once for each item\nprovided by the iterator, in the order of ascending indices. Each\nitem in turn is assigned to the target list using the standard rules\nfor assignments, and then the suite is executed. When the items are\nexhausted (which is immediately when the sequence is empty), the suite\nin the ``else`` clause, if present, is executed, and the loop\nterminates.\n\nA ``break`` statement executed in the first suite terminates the loop\nwithout executing the ``else`` clause\'s suite. A ``continue``\nstatement executed in the first suite skips the rest of the suite and\ncontinues with the next item, or with the ``else`` clause if there was\nno next item.\n\nThe suite may assign to the variable(s) in the target list; this does\nnot affect the next item assigned to it.\n\nThe target list is not deleted when the loop is finished, but if the\nsequence is empty, it will not have been assigned to at all by the\nloop. Hint: the built-in function ``range()`` returns a sequence of\nintegers suitable to emulate the effect of Pascal\'s ``for i := a to b\ndo``; e.g., ``range(3)`` returns the list ``[0, 1, 2]``.\n\nNote: There is a subtlety when the sequence is being modified by the loop\n (this can only occur for mutable sequences, i.e. lists). An internal\n counter is used to keep track of which item is used next, and this\n is incremented on each iteration. When this counter has reached the\n length of the sequence the loop terminates. 
This means that if the\n suite deletes the current (or a previous) item from the sequence,\n the next item will be skipped (since it gets the index of the\n current item which has already been treated). Likewise, if the\n suite inserts an item in the sequence before the current item, the\n current item will be treated again the next time through the loop.\n This can lead to nasty bugs that can be avoided by making a\n temporary copy using a slice of the whole sequence, e.g.,\n\n for x in a[:]:\n if x < 0: a.remove(x)\n\n\nThe ``try`` statement\n=====================\n\nThe ``try`` statement specifies exception handlers and/or cleanup code\nfor a group of statements:\n\n try_stmt ::= try1_stmt | try2_stmt\n try1_stmt ::= "try" ":" suite\n ("except" [expression [("as" | ",") target]] ":" suite)+\n ["else" ":" suite]\n ["finally" ":" suite]\n try2_stmt ::= "try" ":" suite\n "finally" ":" suite\n\nChanged in version 2.5: In previous versions of Python,\n``try``...``except``...``finally`` did not work. ``try``...``except``\nhad to be nested in ``try``...``finally``.\n\nThe ``except`` clause(s) specify one or more exception handlers. When\nno exception occurs in the ``try`` clause, no exception handler is\nexecuted. When an exception occurs in the ``try`` suite, a search for\nan exception handler is started. This search inspects the except\nclauses in turn until one is found that matches the exception. An\nexpression-less except clause, if present, must be last; it matches\nany exception. For an except clause with an expression, that\nexpression is evaluated, and the clause matches the exception if the\nresulting object is "compatible" with the exception. An object is\ncompatible with an exception if it is the class or a base class of the\nexception object, a tuple containing an item compatible with the\nexception, or, in the (deprecated) case of string exceptions, is the\nraised string itself (note that the object identities must match, i.e.\nit must be the same string object, not just a string with the same\nvalue).\n\nIf no except clause matches the exception, the search for an exception\nhandler continues in the surrounding code and on the invocation stack.\n[1]\n\nIf the evaluation of an expression in the header of an except clause\nraises an exception, the original search for a handler is canceled and\na search starts for the new exception in the surrounding code and on\nthe call stack (it is treated as if the entire ``try`` statement\nraised the exception).\n\nWhen a matching except clause is found, the exception is assigned to\nthe target specified in that except clause, if present, and the except\nclause\'s suite is executed. All except clauses must have an\nexecutable block. When the end of this block is reached, execution\ncontinues normally after the entire try statement. (This means that\nif two nested handlers exist for the same exception, and the exception\noccurs in the try clause of the inner handler, the outer handler will\nnot handle the exception.)\n\nBefore an except clause\'s suite is executed, details about the\nexception are assigned to three variables in the ``sys`` module:\n``sys.exc_type`` receives the object identifying the exception;\n``sys.exc_value`` receives the exception\'s parameter;\n``sys.exc_traceback`` receives a traceback object (see section *The\nstandard type hierarchy*) identifying the point in the program where\nthe exception occurred. 
These details are also available through the\n``sys.exc_info()`` function, which returns a tuple ``(exc_type,\nexc_value, exc_traceback)``. Use of the corresponding variables is\ndeprecated in favor of this function, since their use is unsafe in a\nthreaded program. As of Python 1.5, the variables are restored to\ntheir previous values (before the call) when returning from a function\nthat handled an exception.\n\nThe optional ``else`` clause is executed if and when control flows off\nthe end of the ``try`` clause. [2] Exceptions in the ``else`` clause\nare not handled by the preceding ``except`` clauses.\n\nIf ``finally`` is present, it specifies a \'cleanup\' handler. The\n``try`` clause is executed, including any ``except`` and ``else``\nclauses. If an exception occurs in any of the clauses and is not\nhandled, the exception is temporarily saved. The ``finally`` clause is\nexecuted. If there is a saved exception, it is re-raised at the end\nof the ``finally`` clause. If the ``finally`` clause raises another\nexception or executes a ``return`` or ``break`` statement, the saved\nexception is lost. The exception information is not available to the\nprogram during execution of the ``finally`` clause.\n\nWhen a ``return``, ``break`` or ``continue`` statement is executed in\nthe ``try`` suite of a ``try``...``finally`` statement, the\n``finally`` clause is also executed \'on the way out.\' A ``continue``\nstatement is illegal in the ``finally`` clause. (The reason is a\nproblem with the current implementation --- this restriction may be\nlifted in the future).\n\nAdditional information on exceptions can be found in section\n*Exceptions*, and information on using the ``raise`` statement to\ngenerate exceptions may be found in section *The raise statement*.\n\n\nThe ``with`` statement\n======================\n\nNew in version 2.5.\n\nThe ``with`` statement is used to wrap the execution of a block with\nmethods defined by a context manager (see section *With Statement\nContext Managers*). This allows common\n``try``...``except``...``finally`` usage patterns to be encapsulated\nfor convenient reuse.\n\n with_stmt ::= "with" with_item ("," with_item)* ":" suite\n with_item ::= expression ["as" target]\n\nThe execution of the ``with`` statement with one "item" proceeds as\nfollows:\n\n1. The context expression is evaluated to obtain a context manager.\n\n2. The context manager\'s ``__exit__()`` is loaded for later use.\n\n3. The context manager\'s ``__enter__()`` method is invoked.\n\n4. If a target was included in the ``with`` statement, the return\n value from ``__enter__()`` is assigned to it.\n\n Note: The ``with`` statement guarantees that if the ``__enter__()``\n method returns without an error, then ``__exit__()`` will always\n be called. Thus, if an error occurs during the assignment to the\n target list, it will be treated the same as an error occurring\n within the suite would be. See step 6 below.\n\n5. The suite is executed.\n\n6. The context manager\'s ``__exit__()`` method is invoked. If an\n exception caused the suite to be exited, its type, value, and\n traceback are passed as arguments to ``__exit__()``. Otherwise,\n three ``None`` arguments are supplied.\n\n If the suite was exited due to an exception, and the return value\n from the ``__exit__()`` method was false, the exception is\n reraised. 
If the return value was true, the exception is\n suppressed, and execution continues with the statement following\n the ``with`` statement.\n\n If the suite was exited for any reason other than an exception, the\n return value from ``__exit__()`` is ignored, and execution proceeds\n at the normal location for the kind of exit that was taken.\n\nWith more than one item, the context managers are processed as if\nmultiple ``with`` statements were nested:\n\n with A() as a, B() as b:\n suite\n\nis equivalent to\n\n with A() as a:\n with B() as b:\n suite\n\nNote: In Python 2.5, the ``with`` statement is only allowed when the\n ``with_statement`` feature has been enabled. It is always enabled\n in Python 2.6.\n\nChanged in version 2.7: Support for multiple context expressions.\n\nSee also:\n\n **PEP 0343** - The "with" statement\n The specification, background, and examples for the Python\n ``with`` statement.\n\n\nFunction definitions\n====================\n\nA function definition defines a user-defined function object (see\nsection *The standard type hierarchy*):\n\n decorated ::= decorators (classdef | funcdef)\n decorators ::= decorator+\n decorator ::= "@" dotted_name ["(" [argument_list [","]] ")"] NEWLINE\n funcdef ::= "def" funcname "(" [parameter_list] ")" ":" suite\n dotted_name ::= identifier ("." identifier)*\n parameter_list ::= (defparameter ",")*\n ( "*" identifier [, "**" identifier]\n | "**" identifier\n | defparameter [","] )\n defparameter ::= parameter ["=" expression]\n sublist ::= parameter ("," parameter)* [","]\n parameter ::= identifier | "(" sublist ")"\n funcname ::= identifier\n\nA function definition is an executable statement. Its execution binds\nthe function name in the current local namespace to a function object\n(a wrapper around the executable code for the function). This\nfunction object contains a reference to the current global namespace\nas the global namespace to be used when the function is called.\n\nThe function definition does not execute the function body; this gets\nexecuted only when the function is called. [3]\n\nA function definition may be wrapped by one or more *decorator*\nexpressions. Decorator expressions are evaluated when the function is\ndefined, in the scope that contains the function definition. The\nresult must be a callable, which is invoked with the function object\nas the only argument. The returned value is bound to the function name\ninstead of the function object. Multiple decorators are applied in\nnested fashion. For example, the following code:\n\n @f1(arg)\n @f2\n def func(): pass\n\nis equivalent to:\n\n def func(): pass\n func = f1(arg)(f2(func))\n\nWhen one or more top-level parameters have the form *parameter* ``=``\n*expression*, the function is said to have "default parameter values."\nFor a parameter with a default value, the corresponding argument may\nbe omitted from a call, in which case the parameter\'s default value is\nsubstituted. If a parameter has a default value, all following\nparameters must also have a default value --- this is a syntactic\nrestriction that is not expressed by the grammar.\n\n**Default parameter values are evaluated when the function definition\nis executed.** This means that the expression is evaluated once, when\nthe function is defined, and that that same "pre-computed" value is\nused for each call. This is especially important to understand when a\ndefault parameter is a mutable object, such as a list or a dictionary:\nif the function modifies the object (e.g. 
by appending an item to a\nlist), the default value is in effect modified. This is generally not\nwhat was intended. A way around this is to use ``None`` as the\ndefault, and explicitly test for it in the body of the function, e.g.:\n\n def whats_on_the_telly(penguin=None):\n if penguin is None:\n penguin = []\n penguin.append("property of the zoo")\n return penguin\n\nFunction call semantics are described in more detail in section\n*Calls*. A function call always assigns values to all parameters\nmentioned in the parameter list, either from position arguments, from\nkeyword arguments, or from default values. If the form\n"``*identifier``" is present, it is initialized to a tuple receiving\nany excess positional parameters, defaulting to the empty tuple. If\nthe form "``**identifier``" is present, it is initialized to a new\ndictionary receiving any excess keyword arguments, defaulting to a new\nempty dictionary.\n\nIt is also possible to create anonymous functions (functions not bound\nto a name), for immediate use in expressions. This uses lambda forms,\ndescribed in section *Lambdas*. Note that the lambda form is merely a\nshorthand for a simplified function definition; a function defined in\na "``def``" statement can be passed around or assigned to another name\njust like a function defined by a lambda form. The "``def``" form is\nactually more powerful since it allows the execution of multiple\nstatements.\n\n**Programmer\'s note:** Functions are first-class objects. A "``def``"\nform executed inside a function definition defines a local function\nthat can be returned or passed around. Free variables used in the\nnested function can access the local variables of the function\ncontaining the def. See section *Naming and binding* for details.\n\n\nClass definitions\n=================\n\nA class definition defines a class object (see section *The standard\ntype hierarchy*):\n\n classdef ::= "class" classname [inheritance] ":" suite\n inheritance ::= "(" [expression_list] ")"\n classname ::= identifier\n\nA class definition is an executable statement. It first evaluates the\ninheritance list, if present. Each item in the inheritance list\nshould evaluate to a class object or class type which allows\nsubclassing. The class\'s suite is then executed in a new execution\nframe (see section *Naming and binding*), using a newly created local\nnamespace and the original global namespace. (Usually, the suite\ncontains only function definitions.) When the class\'s suite finishes\nexecution, its execution frame is discarded but its local namespace is\nsaved. [4] A class object is then created using the inheritance list\nfor the base classes and the saved local namespace for the attribute\ndictionary. The class name is bound to this class object in the\noriginal local namespace.\n\n**Programmer\'s note:** Variables defined in the class definition are\nclass variables; they are shared by all instances. To create instance\nvariables, they can be set in a method with ``self.name = value``.\nBoth class and instance variables are accessible through the notation\n"``self.name``", and an instance variable hides a class variable with\nthe same name when accessed in this way. Class variables can be used\nas defaults for instance variables, but using mutable values there can\nlead to unexpected results. 
For *new-style class*es, descriptors can\nbe used to create instance variables with different implementation\ndetails.\n\nClass definitions, like function definitions, may be wrapped by one or\nmore *decorator* expressions. The evaluation rules for the decorator\nexpressions are the same as for functions. The result must be a class\nobject, which is then bound to the class name.\n\n-[ Footnotes ]-\n\n[1] The exception is propagated to the invocation stack only if there\n is no ``finally`` clause that negates the exception.\n\n[2] Currently, control "flows off the end" except in the case of an\n exception or the execution of a ``return``, ``continue``, or\n ``break`` statement.\n\n[3] A string literal appearing as the first statement in the function\n body is transformed into the function\'s ``__doc__`` attribute and\n therefore the function\'s *docstring*.\n\n[4] A string literal appearing as the first statement in the class\n body is transformed into the namespace\'s ``__doc__`` item and\n therefore the class\'s *docstring*.\n', + 'compound': u'\nCompound statements\n*******************\n\nCompound statements contain (groups of) other statements; they affect\nor control the execution of those other statements in some way. In\ngeneral, compound statements span multiple lines, although in simple\nincarnations a whole compound statement may be contained in one line.\n\nThe ``if``, ``while`` and ``for`` statements implement traditional\ncontrol flow constructs. ``try`` specifies exception handlers and/or\ncleanup code for a group of statements. Function and class\ndefinitions are also syntactically compound statements.\n\nCompound statements consist of one or more \'clauses.\' A clause\nconsists of a header and a \'suite.\' The clause headers of a\nparticular compound statement are all at the same indentation level.\nEach clause header begins with a uniquely identifying keyword and ends\nwith a colon. A suite is a group of statements controlled by a\nclause. A suite can be one or more semicolon-separated simple\nstatements on the same line as the header, following the header\'s\ncolon, or it can be one or more indented statements on subsequent\nlines. Only the latter form of suite can contain nested compound\nstatements; the following is illegal, mostly because it wouldn\'t be\nclear to which ``if`` clause a following ``else`` clause would belong:\n\n if test1: if test2: print x\n\nAlso note that the semicolon binds tighter than the colon in this\ncontext, so that in the following example, either all or none of the\n``print`` statements are executed:\n\n if x < y < z: print x; print y; print z\n\nSummarizing:\n\n compound_stmt ::= if_stmt\n | while_stmt\n | for_stmt\n | try_stmt\n | with_stmt\n | funcdef\n | classdef\n | decorated\n suite ::= stmt_list NEWLINE | NEWLINE INDENT statement+ DEDENT\n statement ::= stmt_list NEWLINE | compound_stmt\n stmt_list ::= simple_stmt (";" simple_stmt)* [";"]\n\nNote that statements always end in a ``NEWLINE`` possibly followed by\na ``DEDENT``. 
Also note that optional continuation clauses always\nbegin with a keyword that cannot start a statement, thus there are no\nambiguities (the \'dangling ``else``\' problem is solved in Python by\nrequiring nested ``if`` statements to be indented).\n\nThe formatting of the grammar rules in the following sections places\neach clause on a separate line for clarity.\n\n\nThe ``if`` statement\n====================\n\nThe ``if`` statement is used for conditional execution:\n\n if_stmt ::= "if" expression ":" suite\n ( "elif" expression ":" suite )*\n ["else" ":" suite]\n\nIt selects exactly one of the suites by evaluating the expressions one\nby one until one is found to be true (see section *Boolean operations*\nfor the definition of true and false); then that suite is executed\n(and no other part of the ``if`` statement is executed or evaluated).\nIf all expressions are false, the suite of the ``else`` clause, if\npresent, is executed.\n\n\nThe ``while`` statement\n=======================\n\nThe ``while`` statement is used for repeated execution as long as an\nexpression is true:\n\n while_stmt ::= "while" expression ":" suite\n ["else" ":" suite]\n\nThis repeatedly tests the expression and, if it is true, executes the\nfirst suite; if the expression is false (which may be the first time\nit is tested) the suite of the ``else`` clause, if present, is\nexecuted and the loop terminates.\n\nA ``break`` statement executed in the first suite terminates the loop\nwithout executing the ``else`` clause\'s suite. A ``continue``\nstatement executed in the first suite skips the rest of the suite and\ngoes back to testing the expression.\n\n\nThe ``for`` statement\n=====================\n\nThe ``for`` statement is used to iterate over the elements of a\nsequence (such as a string, tuple or list) or other iterable object:\n\n for_stmt ::= "for" target_list "in" expression_list ":" suite\n ["else" ":" suite]\n\nThe expression list is evaluated once; it should yield an iterable\nobject. An iterator is created for the result of the\n``expression_list``. The suite is then executed once for each item\nprovided by the iterator, in the order of ascending indices. Each\nitem in turn is assigned to the target list using the standard rules\nfor assignments, and then the suite is executed. When the items are\nexhausted (which is immediately when the sequence is empty), the suite\nin the ``else`` clause, if present, is executed, and the loop\nterminates.\n\nA ``break`` statement executed in the first suite terminates the loop\nwithout executing the ``else`` clause\'s suite. A ``continue``\nstatement executed in the first suite skips the rest of the suite and\ncontinues with the next item, or with the ``else`` clause if there was\nno next item.\n\nThe suite may assign to the variable(s) in the target list; this does\nnot affect the next item assigned to it.\n\nThe target list is not deleted when the loop is finished, but if the\nsequence is empty, it will not have been assigned to at all by the\nloop. Hint: the built-in function ``range()`` returns a sequence of\nintegers suitable to emulate the effect of Pascal\'s ``for i := a to b\ndo``; e.g., ``range(3)`` returns the list ``[0, 1, 2]``.\n\nNote: There is a subtlety when the sequence is being modified by the loop\n (this can only occur for mutable sequences, i.e. lists). An internal\n counter is used to keep track of which item is used next, and this\n is incremented on each iteration. When this counter has reached the\n length of the sequence the loop terminates. 
This means that if the\n suite deletes the current (or a previous) item from the sequence,\n the next item will be skipped (since it gets the index of the\n current item which has already been treated). Likewise, if the\n suite inserts an item in the sequence before the current item, the\n current item will be treated again the next time through the loop.\n This can lead to nasty bugs that can be avoided by making a\n temporary copy using a slice of the whole sequence, e.g.,\n\n for x in a[:]:\n if x < 0: a.remove(x)\n\n\nThe ``try`` statement\n=====================\n\nThe ``try`` statement specifies exception handlers and/or cleanup code\nfor a group of statements:\n\n try_stmt ::= try1_stmt | try2_stmt\n try1_stmt ::= "try" ":" suite\n ("except" [expression [("as" | ",") target]] ":" suite)+\n ["else" ":" suite]\n ["finally" ":" suite]\n try2_stmt ::= "try" ":" suite\n "finally" ":" suite\n\nChanged in version 2.5: In previous versions of Python,\n``try``...``except``...``finally`` did not work. ``try``...``except``\nhad to be nested in ``try``...``finally``.\n\nThe ``except`` clause(s) specify one or more exception handlers. When\nno exception occurs in the ``try`` clause, no exception handler is\nexecuted. When an exception occurs in the ``try`` suite, a search for\nan exception handler is started. This search inspects the except\nclauses in turn until one is found that matches the exception. An\nexpression-less except clause, if present, must be last; it matches\nany exception. For an except clause with an expression, that\nexpression is evaluated, and the clause matches the exception if the\nresulting object is "compatible" with the exception. An object is\ncompatible with an exception if it is the class or a base class of the\nexception object, a tuple containing an item compatible with the\nexception, or, in the (deprecated) case of string exceptions, is the\nraised string itself (note that the object identities must match, i.e.\nit must be the same string object, not just a string with the same\nvalue).\n\nIf no except clause matches the exception, the search for an exception\nhandler continues in the surrounding code and on the invocation stack.\n[1]\n\nIf the evaluation of an expression in the header of an except clause\nraises an exception, the original search for a handler is canceled and\na search starts for the new exception in the surrounding code and on\nthe call stack (it is treated as if the entire ``try`` statement\nraised the exception).\n\nWhen a matching except clause is found, the exception is assigned to\nthe target specified in that except clause, if present, and the except\nclause\'s suite is executed. All except clauses must have an\nexecutable block. When the end of this block is reached, execution\ncontinues normally after the entire try statement. (This means that\nif two nested handlers exist for the same exception, and the exception\noccurs in the try clause of the inner handler, the outer handler will\nnot handle the exception.)\n\nBefore an except clause\'s suite is executed, details about the\nexception are assigned to three variables in the ``sys`` module:\n``sys.exc_type`` receives the object identifying the exception;\n``sys.exc_value`` receives the exception\'s parameter;\n``sys.exc_traceback`` receives a traceback object (see section *The\nstandard type hierarchy*) identifying the point in the program where\nthe exception occurred. 
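A compact sketch of the handler search and the optional ``else`` and ``finally`` clauses (the function is invented for the example):

    def parse(text):
        try:
            value = int(text)
        except ValueError as exc:      # the exception is bound to the target
            print "not an integer:", exc
            return None
        else:
            print "parsed", value      # runs only when no exception occurred
            return value
        finally:
            print "cleanup runs either way"

    parse("42")
    parse("spam")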
These details are also available through the\n``sys.exc_info()`` function, which returns a tuple ``(exc_type,\nexc_value, exc_traceback)``. Use of the corresponding variables is\ndeprecated in favor of this function, since their use is unsafe in a\nthreaded program. As of Python 1.5, the variables are restored to\ntheir previous values (before the call) when returning from a function\nthat handled an exception.\n\nThe optional ``else`` clause is executed if and when control flows off\nthe end of the ``try`` clause. [2] Exceptions in the ``else`` clause\nare not handled by the preceding ``except`` clauses.\n\nIf ``finally`` is present, it specifies a \'cleanup\' handler. The\n``try`` clause is executed, including any ``except`` and ``else``\nclauses. If an exception occurs in any of the clauses and is not\nhandled, the exception is temporarily saved. The ``finally`` clause is\nexecuted. If there is a saved exception, it is re-raised at the end\nof the ``finally`` clause. If the ``finally`` clause raises another\nexception or executes a ``return`` or ``break`` statement, the saved\nexception is lost. The exception information is not available to the\nprogram during execution of the ``finally`` clause.\n\nWhen a ``return``, ``break`` or ``continue`` statement is executed in\nthe ``try`` suite of a ``try``...``finally`` statement, the\n``finally`` clause is also executed \'on the way out.\' A ``continue``\nstatement is illegal in the ``finally`` clause. (The reason is a\nproblem with the current implementation --- this restriction may be\nlifted in the future).\n\nAdditional information on exceptions can be found in section\n*Exceptions*, and information on using the ``raise`` statement to\ngenerate exceptions may be found in section *The raise statement*.\n\n\nThe ``with`` statement\n======================\n\nNew in version 2.5.\n\nThe ``with`` statement is used to wrap the execution of a block with\nmethods defined by a context manager (see section *With Statement\nContext Managers*). This allows common\n``try``...``except``...``finally`` usage patterns to be encapsulated\nfor convenient reuse.\n\n with_stmt ::= "with" with_item ("," with_item)* ":" suite\n with_item ::= expression ["as" target]\n\nThe execution of the ``with`` statement with one "item" proceeds as\nfollows:\n\n1. The context expression (the expression given in the **with_item**)\n is evaluated to obtain a context manager.\n\n2. The context manager\'s ``__exit__()`` is loaded for later use.\n\n3. The context manager\'s ``__enter__()`` method is invoked.\n\n4. If a target was included in the ``with`` statement, the return\n value from ``__enter__()`` is assigned to it.\n\n Note: The ``with`` statement guarantees that if the ``__enter__()``\n method returns without an error, then ``__exit__()`` will always\n be called. Thus, if an error occurs during the assignment to the\n target list, it will be treated the same as an error occurring\n within the suite would be. See step 6 below.\n\n5. The suite is executed.\n\n6. The context manager\'s ``__exit__()`` method is invoked. If an\n exception caused the suite to be exited, its type, value, and\n traceback are passed as arguments to ``__exit__()``. Otherwise,\n three ``None`` arguments are supplied.\n\n If the suite was exited due to an exception, and the return value\n from the ``__exit__()`` method was false, the exception is\n reraised. 
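The six steps above can be observed with a tiny, made-up context manager:

    class Managed(object):
        def __enter__(self):
            print "__enter__"
            return self                # bound to the "as" target, if any

        def __exit__(self, exc_type, exc_value, traceback):
            print "__exit__", exc_type
            return False               # a false value does not suppress exceptions

    with Managed() as m:
        print "inside the suite"       # printed between __enter__ and __exit__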
If the return value was true, the exception is\n suppressed, and execution continues with the statement following\n the ``with`` statement.\n\n If the suite was exited for any reason other than an exception, the\n return value from ``__exit__()`` is ignored, and execution proceeds\n at the normal location for the kind of exit that was taken.\n\nWith more than one item, the context managers are processed as if\nmultiple ``with`` statements were nested:\n\n with A() as a, B() as b:\n suite\n\nis equivalent to\n\n with A() as a:\n with B() as b:\n suite\n\nNote: In Python 2.5, the ``with`` statement is only allowed when the\n ``with_statement`` feature has been enabled. It is always enabled\n in Python 2.6.\n\nChanged in version 2.7: Support for multiple context expressions.\n\nSee also:\n\n **PEP 0343** - The "with" statement\n The specification, background, and examples for the Python\n ``with`` statement.\n\n\nFunction definitions\n====================\n\nA function definition defines a user-defined function object (see\nsection *The standard type hierarchy*):\n\n decorated ::= decorators (classdef | funcdef)\n decorators ::= decorator+\n decorator ::= "@" dotted_name ["(" [argument_list [","]] ")"] NEWLINE\n funcdef ::= "def" funcname "(" [parameter_list] ")" ":" suite\n dotted_name ::= identifier ("." identifier)*\n parameter_list ::= (defparameter ",")*\n ( "*" identifier [, "**" identifier]\n | "**" identifier\n | defparameter [","] )\n defparameter ::= parameter ["=" expression]\n sublist ::= parameter ("," parameter)* [","]\n parameter ::= identifier | "(" sublist ")"\n funcname ::= identifier\n\nA function definition is an executable statement. Its execution binds\nthe function name in the current local namespace to a function object\n(a wrapper around the executable code for the function). This\nfunction object contains a reference to the current global namespace\nas the global namespace to be used when the function is called.\n\nThe function definition does not execute the function body; this gets\nexecuted only when the function is called. [3]\n\nA function definition may be wrapped by one or more *decorator*\nexpressions. Decorator expressions are evaluated when the function is\ndefined, in the scope that contains the function definition. The\nresult must be a callable, which is invoked with the function object\nas the only argument. The returned value is bound to the function name\ninstead of the function object. Multiple decorators are applied in\nnested fashion. For example, the following code:\n\n @f1(arg)\n @f2\n def func(): pass\n\nis equivalent to:\n\n def func(): pass\n func = f1(arg)(f2(func))\n\nWhen one or more top-level parameters have the form *parameter* ``=``\n*expression*, the function is said to have "default parameter values."\nFor a parameter with a default value, the corresponding argument may\nbe omitted from a call, in which case the parameter\'s default value is\nsubstituted. If a parameter has a default value, all following\nparameters must also have a default value --- this is a syntactic\nrestriction that is not expressed by the grammar.\n\n**Default parameter values are evaluated when the function definition\nis executed.** This means that the expression is evaluated once, when\nthe function is defined, and that that same "pre-computed" value is\nused for each call. This is especially important to understand when a\ndefault parameter is a mutable object, such as a list or a dictionary:\nif the function modifies the object (e.g. 
by appending an item to a\nlist), the default value is in effect modified. This is generally not\nwhat was intended. A way around this is to use ``None`` as the\ndefault, and explicitly test for it in the body of the function, e.g.:\n\n def whats_on_the_telly(penguin=None):\n if penguin is None:\n penguin = []\n penguin.append("property of the zoo")\n return penguin\n\nFunction call semantics are described in more detail in section\n*Calls*. A function call always assigns values to all parameters\nmentioned in the parameter list, either from position arguments, from\nkeyword arguments, or from default values. If the form\n"``*identifier``" is present, it is initialized to a tuple receiving\nany excess positional parameters, defaulting to the empty tuple. If\nthe form "``**identifier``" is present, it is initialized to a new\ndictionary receiving any excess keyword arguments, defaulting to a new\nempty dictionary.\n\nIt is also possible to create anonymous functions (functions not bound\nto a name), for immediate use in expressions. This uses lambda forms,\ndescribed in section *Lambdas*. Note that the lambda form is merely a\nshorthand for a simplified function definition; a function defined in\na "``def``" statement can be passed around or assigned to another name\njust like a function defined by a lambda form. The "``def``" form is\nactually more powerful since it allows the execution of multiple\nstatements.\n\n**Programmer\'s note:** Functions are first-class objects. A "``def``"\nform executed inside a function definition defines a local function\nthat can be returned or passed around. Free variables used in the\nnested function can access the local variables of the function\ncontaining the def. See section *Naming and binding* for details.\n\n\nClass definitions\n=================\n\nA class definition defines a class object (see section *The standard\ntype hierarchy*):\n\n classdef ::= "class" classname [inheritance] ":" suite\n inheritance ::= "(" [expression_list] ")"\n classname ::= identifier\n\nA class definition is an executable statement. It first evaluates the\ninheritance list, if present. Each item in the inheritance list\nshould evaluate to a class object or class type which allows\nsubclassing. The class\'s suite is then executed in a new execution\nframe (see section *Naming and binding*), using a newly created local\nnamespace and the original global namespace. (Usually, the suite\ncontains only function definitions.) When the class\'s suite finishes\nexecution, its execution frame is discarded but its local namespace is\nsaved. [4] A class object is then created using the inheritance list\nfor the base classes and the saved local namespace for the attribute\ndictionary. The class name is bound to this class object in the\noriginal local namespace.\n\n**Programmer\'s note:** Variables defined in the class definition are\nclass variables; they are shared by all instances. To create instance\nvariables, they can be set in a method with ``self.name = value``.\nBoth class and instance variables are accessible through the notation\n"``self.name``", and an instance variable hides a class variable with\nthe same name when accessed in this way. Class variables can be used\nas defaults for instance variables, but using mutable values there can\nlead to unexpected results. 
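The ``*identifier`` and ``**identifier`` forms described above collect excess arguments; a brief sketch:

    def f(a, b=2, *args, **kwargs):
        return a, b, args, kwargs

    print f(1)                         # (1, 2, (), {})
    print f(1, 3, 4, 5, x=6)           # (1, 3, (4, 5), {'x': 6})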
For *new-style class*es, descriptors can\nbe used to create instance variables with different implementation\ndetails.\n\nClass definitions, like function definitions, may be wrapped by one or\nmore *decorator* expressions. The evaluation rules for the decorator\nexpressions are the same as for functions. The result must be a class\nobject, which is then bound to the class name.\n\n-[ Footnotes ]-\n\n[1] The exception is propagated to the invocation stack only if there\n is no ``finally`` clause that negates the exception.\n\n[2] Currently, control "flows off the end" except in the case of an\n exception or the execution of a ``return``, ``continue``, or\n ``break`` statement.\n\n[3] A string literal appearing as the first statement in the function\n body is transformed into the function\'s ``__doc__`` attribute and\n therefore the function\'s *docstring*.\n\n[4] A string literal appearing as the first statement in the class\n body is transformed into the namespace\'s ``__doc__`` item and\n therefore the class\'s *docstring*.\n', 'context-managers': u'\nWith Statement Context Managers\n*******************************\n\nNew in version 2.5.\n\nA *context manager* is an object that defines the runtime context to\nbe established when executing a ``with`` statement. The context\nmanager handles the entry into, and the exit from, the desired runtime\ncontext for the execution of the block of code. Context managers are\nnormally invoked using the ``with`` statement (described in section\n*The with statement*), but can also be used by directly invoking their\nmethods.\n\nTypical uses of context managers include saving and restoring various\nkinds of global state, locking and unlocking resources, closing opened\nfiles, etc.\n\nFor more information on context managers, see *Context Manager Types*.\n\nobject.__enter__(self)\n\n Enter the runtime context related to this object. The ``with``\n statement will bind this method\'s return value to the target(s)\n specified in the ``as`` clause of the statement, if any.\n\nobject.__exit__(self, exc_type, exc_value, traceback)\n\n Exit the runtime context related to this object. The parameters\n describe the exception that caused the context to be exited. If the\n context was exited without an exception, all three arguments will\n be ``None``.\n\n If an exception is supplied, and the method wishes to suppress the\n exception (i.e., prevent it from being propagated), it should\n return a true value. Otherwise, the exception will be processed\n normally upon exit from this method.\n\n Note that ``__exit__()`` methods should not reraise the passed-in\n exception; this is the caller\'s responsibility.\n\nSee also:\n\n **PEP 0343** - The "with" statement\n The specification, background, and examples for the Python\n ``with`` statement.\n', 'continue': u'\nThe ``continue`` statement\n**************************\n\n continue_stmt ::= "continue"\n\n``continue`` may only occur syntactically nested in a ``for`` or\n``while`` loop, but not nested in a function or class definition or\n``finally`` clause within that loop. 
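An illustrative sketch of ``__exit__()`` suppressing an exception by returning a true value, as described above (the class name is invented):

    class IgnoreKeyError(object):
        def __enter__(self):
            return self

        def __exit__(self, exc_type, exc_value, traceback):
            # A true return value tells the with statement to swallow the error.
            return exc_type is KeyError

    with IgnoreKeyError():
        {}["missing"]                  # raises KeyError inside the suite...
    print "still running"              # ...but execution continues afterwards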
It continues with the next cycle\nof the nearest enclosing loop.\n\nWhen ``continue`` passes control out of a ``try`` statement with a\n``finally`` clause, that ``finally`` clause is executed before really\nstarting the next loop cycle.\n', 'conversions': u'\nArithmetic conversions\n**********************\n\nWhen a description of an arithmetic operator below uses the phrase\n"the numeric arguments are converted to a common type," the arguments\nare coerced using the coercion rules listed at *Coercion rules*. If\nboth arguments are standard numeric types, the following coercions are\napplied:\n\n* If either argument is a complex number, the other is converted to\n complex;\n\n* otherwise, if either argument is a floating point number, the other\n is converted to floating point;\n\n* otherwise, if either argument is a long integer, the other is\n converted to long integer;\n\n* otherwise, both must be plain integers and no conversion is\n necessary.\n\nSome additional rules apply for certain operators (e.g., a string left\nargument to the \'%\' operator). Extensions can define their own\ncoercions.\n', 'customization': u'\nBasic customization\n*******************\n\nobject.__new__(cls[, ...])\n\n Called to create a new instance of class *cls*. ``__new__()`` is a\n static method (special-cased so you need not declare it as such)\n that takes the class of which an instance was requested as its\n first argument. The remaining arguments are those passed to the\n object constructor expression (the call to the class). The return\n value of ``__new__()`` should be the new object instance (usually\n an instance of *cls*).\n\n Typical implementations create a new instance of the class by\n invoking the superclass\'s ``__new__()`` method using\n ``super(currentclass, cls).__new__(cls[, ...])`` with appropriate\n arguments and then modifying the newly-created instance as\n necessary before returning it.\n\n If ``__new__()`` returns an instance of *cls*, then the new\n instance\'s ``__init__()`` method will be invoked like\n ``__init__(self[, ...])``, where *self* is the new instance and the\n remaining arguments are the same as were passed to ``__new__()``.\n\n If ``__new__()`` does not return an instance of *cls*, then the new\n instance\'s ``__init__()`` method will not be invoked.\n\n ``__new__()`` is intended mainly to allow subclasses of immutable\n types (like int, str, or tuple) to customize instance creation. It\n is also commonly overridden in custom metaclasses in order to\n customize class creation.\n\nobject.__init__(self[, ...])\n\n Called when the instance is created. The arguments are those\n passed to the class constructor expression. If a base class has an\n ``__init__()`` method, the derived class\'s ``__init__()`` method,\n if any, must explicitly call it to ensure proper initialization of\n the base class part of the instance; for example:\n ``BaseClass.__init__(self, [args...])``. As a special constraint\n on constructors, no value may be returned; doing so will cause a\n ``TypeError`` to be raised at runtime.\n\nobject.__del__(self)\n\n Called when the instance is about to be destroyed. This is also\n called a destructor. If a base class has a ``__del__()`` method,\n the derived class\'s ``__del__()`` method, if any, must explicitly\n call it to ensure proper deletion of the base class part of the\n instance. Note that it is possible (though not recommended!) for\n the ``__del__()`` method to postpone destruction of the instance by\n creating a new reference to it. 
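Because ``__new__()`` runs before the instance exists, it is where subclasses of immutable types customize creation, as noted above; an illustrative sketch:

    class UpperStr(str):
        def __new__(cls, value):
            # The contents of a str must be chosen at creation time;
            # __init__ would be too late to change them.
            return super(UpperStr, cls).__new__(cls, value.upper())

    print UpperStr("hello")            # HELLO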
It may then be called at a later\n time when this new reference is deleted. It is not guaranteed that\n ``__del__()`` methods are called for objects that still exist when\n the interpreter exits.\n\n Note: ``del x`` doesn\'t directly call ``x.__del__()`` --- the former\n decrements the reference count for ``x`` by one, and the latter\n is only called when ``x``\'s reference count reaches zero. Some\n common situations that may prevent the reference count of an\n object from going to zero include: circular references between\n objects (e.g., a doubly-linked list or a tree data structure with\n parent and child pointers); a reference to the object on the\n stack frame of a function that caught an exception (the traceback\n stored in ``sys.exc_traceback`` keeps the stack frame alive); or\n a reference to the object on the stack frame that raised an\n unhandled exception in interactive mode (the traceback stored in\n ``sys.last_traceback`` keeps the stack frame alive). The first\n situation can only be remedied by explicitly breaking the cycles;\n the latter two situations can be resolved by storing ``None`` in\n ``sys.exc_traceback`` or ``sys.last_traceback``. Circular\n references which are garbage are detected when the option cycle\n detector is enabled (it\'s on by default), but can only be cleaned\n up if there are no Python-level ``__del__()`` methods involved.\n Refer to the documentation for the ``gc`` module for more\n information about how ``__del__()`` methods are handled by the\n cycle detector, particularly the description of the ``garbage``\n value.\n\n Warning: Due to the precarious circumstances under which ``__del__()``\n methods are invoked, exceptions that occur during their execution\n are ignored, and a warning is printed to ``sys.stderr`` instead.\n Also, when ``__del__()`` is invoked in response to a module being\n deleted (e.g., when execution of the program is done), other\n globals referenced by the ``__del__()`` method may already have\n been deleted or in the process of being torn down (e.g. the\n import machinery shutting down). For this reason, ``__del__()``\n methods should do the absolute minimum needed to maintain\n external invariants. Starting with version 1.5, Python\n guarantees that globals whose name begins with a single\n underscore are deleted from their module before other globals are\n deleted; if no other references to such globals exist, this may\n help in assuring that imported modules are still available at the\n time when the ``__del__()`` method is called.\n\nobject.__repr__(self)\n\n Called by the ``repr()`` built-in function and by string\n conversions (reverse quotes) to compute the "official" string\n representation of an object. If at all possible, this should look\n like a valid Python expression that could be used to recreate an\n object with the same value (given an appropriate environment). If\n this is not possible, a string of the form ``<...some useful\n description...>`` should be returned. The return value must be a\n string object. If a class defines ``__repr__()`` but not\n ``__str__()``, then ``__repr__()`` is also used when an "informal"\n string representation of instances of that class is required.\n\n This is typically used for debugging, so it is important that the\n representation is information-rich and unambiguous.\n\nobject.__str__(self)\n\n Called by the ``str()`` built-in function and by the ``print``\n statement to compute the "informal" string representation of an\n object. 
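A small, hypothetical class showing the two string hooks described here:

    class Point(object):
        def __init__(self, x, y):
            self.x, self.y = x, y

        def __repr__(self):
            # Ideally looks like an expression that recreates the object.
            return "Point(%r, %r)" % (self.x, self.y)

        def __str__(self):
            return "(%s, %s)" % (self.x, self.y)

    p = Point(1, 2)
    print repr(p)                      # Point(1, 2)
    print p                            # (1, 2) -- print uses __str__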
This differs from ``__repr__()`` in that it does not have\n to be a valid Python expression: a more convenient or concise\n representation may be used instead. The return value must be a\n string object.\n\nobject.__lt__(self, other)\nobject.__le__(self, other)\nobject.__eq__(self, other)\nobject.__ne__(self, other)\nobject.__gt__(self, other)\nobject.__ge__(self, other)\n\n New in version 2.1.\n\n These are the so-called "rich comparison" methods, and are called\n for comparison operators in preference to ``__cmp__()`` below. The\n correspondence between operator symbols and method names is as\n follows: ``x<y`` calls ``x.__lt__(y)``, ``x<=y`` calls ``x.__le__(y)``,\n ``x==y`` calls ``x.__eq__(y)``, ``x!=y`` and ``x<>y`` call\n ``x.__ne__(y)``, ``x>y`` calls ``x.__gt__(y)``, and\n ``x>=y`` calls ``x.__ge__(y)``.\n\n A rich comparison method may return the singleton\n ``NotImplemented`` if it does not implement the operation for a\n given pair of arguments. By convention, ``False`` and ``True`` are\n returned for a successful comparison. However, these methods can\n return any value, so if the comparison operator is used in a\n Boolean context (e.g., in the condition of an ``if`` statement),\n Python will call ``bool()`` on the value to determine if the result\n is true or false.\n\n There are no implied relationships among the comparison operators.\n The truth of ``x==y`` does not imply that ``x!=y`` is false.\n Accordingly, when defining ``__eq__()``, one should also define\n ``__ne__()`` so that the operators will behave as expected. See\n the paragraph on ``__hash__()`` for some important notes on\n creating *hashable* objects which support custom comparison\n operations and are usable as dictionary keys.\n\n There are no swapped-argument versions of these methods (to be used\n when the left argument does not support the operation but the right\n argument does); rather, ``__lt__()`` and ``__gt__()`` are each\n other\'s reflection, ``__le__()`` and ``__ge__()`` are each other\'s\n reflection, and ``__eq__()`` and ``__ne__()`` are their own\n reflection.\n\n Arguments to rich comparison methods are never coerced.\n\n To automatically generate ordering operations from a single root\n operation, see ``functools.total_ordering()``.\n\nobject.__cmp__(self, other)\n\n Called by comparison operations if rich comparison (see above) is\n not defined. Should return a negative integer if ``self < other``,\n zero if ``self == other``, a positive integer if ``self > other``.\n If no ``__cmp__()``, ``__eq__()`` or ``__ne__()`` operation is\n defined, class instances are compared by object identity\n ("address"). See also the description of ``__hash__()`` for some\n important notes on creating *hashable* objects which support custom\n comparison operations and are usable as dictionary keys. (Note: the\n restriction that exceptions are not propagated by ``__cmp__()`` has\n been removed since Python 1.5.)\n\nobject.__rcmp__(self, other)\n\n Changed in version 2.1: No longer supported.\n\nobject.__hash__(self)\n\n Called by built-in function ``hash()`` and for operations on\n members of hashed collections including ``set``, ``frozenset``, and\n ``dict``. ``__hash__()`` should return an integer. The only\n required property is that objects which compare equal have the same\n hash value; it is advised to somehow mix together (e.g. 
using\n exclusive or) the hash values for the components of the object that\n also play a part in comparison of objects.\n\n If a class does not define a ``__cmp__()`` or ``__eq__()`` method\n it should not define a ``__hash__()`` operation either; if it\n defines ``__cmp__()`` or ``__eq__()`` but not ``__hash__()``, its\n instances will not be usable in hashed collections. If a class\n defines mutable objects and implements a ``__cmp__()`` or\n ``__eq__()`` method, it should not implement ``__hash__()``, since\n hashable collection implementations require that a object\'s hash\n value is immutable (if the object\'s hash value changes, it will be\n in the wrong hash bucket).\n\n User-defined classes have ``__cmp__()`` and ``__hash__()`` methods\n by default; with them, all objects compare unequal (except with\n themselves) and ``x.__hash__()`` returns ``id(x)``.\n\n Classes which inherit a ``__hash__()`` method from a parent class\n but change the meaning of ``__cmp__()`` or ``__eq__()`` such that\n the hash value returned is no longer appropriate (e.g. by switching\n to a value-based concept of equality instead of the default\n identity based equality) can explicitly flag themselves as being\n unhashable by setting ``__hash__ = None`` in the class definition.\n Doing so means that not only will instances of the class raise an\n appropriate ``TypeError`` when a program attempts to retrieve their\n hash value, but they will also be correctly identified as\n unhashable when checking ``isinstance(obj, collections.Hashable)``\n (unlike classes which define their own ``__hash__()`` to explicitly\n raise ``TypeError``).\n\n Changed in version 2.5: ``__hash__()`` may now also return a long\n integer object; the 32-bit integer is then derived from the hash of\n that object.\n\n Changed in version 2.6: ``__hash__`` may now be set to ``None`` to\n explicitly flag instances of a class as unhashable.\n\nobject.__nonzero__(self)\n\n Called to implement truth value testing and the built-in operation\n ``bool()``; should return ``False`` or ``True``, or their integer\n equivalents ``0`` or ``1``. When this method is not defined,\n ``__len__()`` is called, if it is defined, and the object is\n considered true if its result is nonzero. If a class defines\n neither ``__len__()`` nor ``__nonzero__()``, all its instances are\n considered true.\n\nobject.__unicode__(self)\n\n Called to implement ``unicode()`` built-in; should return a Unicode\n object. When this method is not defined, string conversion is\n attempted, and the result of string conversion is converted to\n Unicode using the system default encoding.\n', - 'debugger': u'\n``pdb`` --- The Python Debugger\n*******************************\n\nThe module ``pdb`` defines an interactive source code debugger for\nPython programs. It supports setting (conditional) breakpoints and\nsingle stepping at the source line level, inspection of stack frames,\nsource code listing, and evaluation of arbitrary Python code in the\ncontext of any stack frame. It also supports post-mortem debugging\nand can be called under program control.\n\nThe debugger is extensible --- it is actually defined as the class\n``Pdb``. This is currently undocumented but easily understood by\nreading the source. The extension interface uses the modules ``bdb``\nand ``cmd``.\n\nThe debugger\'s prompt is ``(Pdb)``. 
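Returning to the comparison and hashing rules above, a minimal sketch (hypothetical class) of keeping ``__eq__()``, ``__ne__()`` and ``__hash__()`` consistent so instances remain usable as dictionary keys:

    class Account(object):
        def __init__(self, number):
            self.number = number
        def __eq__(self, other):
            return isinstance(other, Account) and self.number == other.number
        def __ne__(self, other):
            # defined alongside __eq__(), as recommended above
            return not self == other
        def __hash__(self):
            # objects that compare equal must hash equal
            return hash(self.number)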
Typical usage to run a program\nunder control of the debugger is:\n\n >>> import pdb\n >>> import mymodule\n >>> pdb.run(\'mymodule.test()\')\n > (0)?()\n (Pdb) continue\n > (1)?()\n (Pdb) continue\n NameError: \'spam\'\n > (1)?()\n (Pdb)\n\n``pdb.py`` can also be invoked as a script to debug other scripts.\nFor example:\n\n python -m pdb myscript.py\n\nWhen invoked as a script, pdb will automatically enter post-mortem\ndebugging if the program being debugged exits abnormally. After post-\nmortem debugging (or after normal exit of the program), pdb will\nrestart the program. Automatic restarting preserves pdb\'s state (such\nas breakpoints) and in most cases is more useful than quitting the\ndebugger upon program\'s exit.\n\nNew in version 2.4: Restarting post-mortem behavior added.\n\nThe typical usage to break into the debugger from a running program is\nto insert\n\n import pdb; pdb.set_trace()\n\nat the location you want to break into the debugger. You can then\nstep through the code following this statement, and continue running\nwithout the debugger using the ``c`` command.\n\nThe typical usage to inspect a crashed program is:\n\n >>> import pdb\n >>> import mymodule\n >>> mymodule.test()\n Traceback (most recent call last):\n File "", line 1, in ?\n File "./mymodule.py", line 4, in test\n test2()\n File "./mymodule.py", line 3, in test2\n print spam\n NameError: spam\n >>> pdb.pm()\n > ./mymodule.py(3)test2()\n -> print spam\n (Pdb)\n\nThe module defines the following functions; each enters the debugger\nin a slightly different way:\n\npdb.run(statement[, globals[, locals]])\n\n Execute the *statement* (given as a string) under debugger control.\n The debugger prompt appears before any code is executed; you can\n set breakpoints and type ``continue``, or you can step through the\n statement using ``step`` or ``next`` (all these commands are\n explained below). The optional *globals* and *locals* arguments\n specify the environment in which the code is executed; by default\n the dictionary of the module ``__main__`` is used. (See the\n explanation of the ``exec`` statement or the ``eval()`` built-in\n function.)\n\npdb.runeval(expression[, globals[, locals]])\n\n Evaluate the *expression* (given as a string) under debugger\n control. When ``runeval()`` returns, it returns the value of the\n expression. Otherwise this function is similar to ``run()``.\n\npdb.runcall(function[, argument, ...])\n\n Call the *function* (a function or method object, not a string)\n with the given arguments. When ``runcall()`` returns, it returns\n whatever the function call returned. The debugger prompt appears\n as soon as the function is entered.\n\npdb.set_trace()\n\n Enter the debugger at the calling stack frame. This is useful to\n hard-code a breakpoint at a given point in a program, even if the\n code is not otherwise being debugged (e.g. when an assertion\n fails).\n\npdb.post_mortem([traceback])\n\n Enter post-mortem debugging of the given *traceback* object. If no\n *traceback* is given, it uses the one of the exception that is\n currently being handled (an exception must be being handled if the\n default is to be used).\n\npdb.pm()\n\n Enter post-mortem debugging of the traceback found in\n ``sys.last_traceback``.\n\nThe ``run_*`` functions and ``set_trace()`` are aliases for\ninstantiating the ``Pdb`` class and calling the method of the same\nname. 
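Since the ``run_*`` functions are described as aliases for instantiating ``Pdb`` and calling the method of the same name, the two calls below should be interchangeable (``mymodule`` is the placeholder used in the examples above):

    import pdb
    pdb.run('mymodule.test()')        # module-level convenience function
    pdb.Pdb().run('mymodule.test()')  # explicit Pdb instance, same effect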
If you want to access further features, you have to do this\nyourself:\n\nclass class pdb.Pdb(completekey=\'tab\', stdin=None, stdout=None, skip=None)\n\n ``Pdb`` is the debugger class.\n\n The *completekey*, *stdin* and *stdout* arguments are passed to the\n underlying ``cmd.Cmd`` class; see the description there.\n\n The *skip* argument, if given, must be an iterable of glob-style\n module name patterns. The debugger will not step into frames that\n originate in a module that matches one of these patterns. [1]\n\n Example call to enable tracing with *skip*:\n\n import pdb; pdb.Pdb(skip=[\'django.*\']).set_trace()\n\n New in version 2.7: The *skip* argument.\n\n run(statement[, globals[, locals]])\n runeval(expression[, globals[, locals]])\n runcall(function[, argument, ...])\n set_trace()\n\n See the documentation for the functions explained above.\n', + 'debugger': u'\n``pdb`` --- The Python Debugger\n*******************************\n\nThe module ``pdb`` defines an interactive source code debugger for\nPython programs. It supports setting (conditional) breakpoints and\nsingle stepping at the source line level, inspection of stack frames,\nsource code listing, and evaluation of arbitrary Python code in the\ncontext of any stack frame. It also supports post-mortem debugging\nand can be called under program control.\n\nThe debugger is extensible --- it is actually defined as the class\n``Pdb``. This is currently undocumented but easily understood by\nreading the source. The extension interface uses the modules ``bdb``\nand ``cmd``.\n\nThe debugger\'s prompt is ``(Pdb)``. Typical usage to run a program\nunder control of the debugger is:\n\n >>> import pdb\n >>> import mymodule\n >>> pdb.run(\'mymodule.test()\')\n > (0)?()\n (Pdb) continue\n > (1)?()\n (Pdb) continue\n NameError: \'spam\'\n > (1)?()\n (Pdb)\n\n``pdb.py`` can also be invoked as a script to debug other scripts.\nFor example:\n\n python -m pdb myscript.py\n\nWhen invoked as a script, pdb will automatically enter post-mortem\ndebugging if the program being debugged exits abnormally. After post-\nmortem debugging (or after normal exit of the program), pdb will\nrestart the program. Automatic restarting preserves pdb\'s state (such\nas breakpoints) and in most cases is more useful than quitting the\ndebugger upon program\'s exit.\n\nNew in version 2.4: Restarting post-mortem behavior added.\n\nThe typical usage to break into the debugger from a running program is\nto insert\n\n import pdb; pdb.set_trace()\n\nat the location you want to break into the debugger. You can then\nstep through the code following this statement, and continue running\nwithout the debugger using the ``c`` command.\n\nThe typical usage to inspect a crashed program is:\n\n >>> import pdb\n >>> import mymodule\n >>> mymodule.test()\n Traceback (most recent call last):\n File "", line 1, in ?\n File "./mymodule.py", line 4, in test\n test2()\n File "./mymodule.py", line 3, in test2\n print spam\n NameError: spam\n >>> pdb.pm()\n > ./mymodule.py(3)test2()\n -> print spam\n (Pdb)\n\nThe module defines the following functions; each enters the debugger\nin a slightly different way:\n\npdb.run(statement[, globals[, locals]])\n\n Execute the *statement* (given as a string) under debugger control.\n The debugger prompt appears before any code is executed; you can\n set breakpoints and type ``continue``, or you can step through the\n statement using ``step`` or ``next`` (all these commands are\n explained below). 
The optional *globals* and *locals* arguments\n specify the environment in which the code is executed; by default\n the dictionary of the module ``__main__`` is used. (See the\n explanation of the ``exec`` statement or the ``eval()`` built-in\n function.)\n\npdb.runeval(expression[, globals[, locals]])\n\n Evaluate the *expression* (given as a string) under debugger\n control. When ``runeval()`` returns, it returns the value of the\n expression. Otherwise this function is similar to ``run()``.\n\npdb.runcall(function[, argument, ...])\n\n Call the *function* (a function or method object, not a string)\n with the given arguments. When ``runcall()`` returns, it returns\n whatever the function call returned. The debugger prompt appears\n as soon as the function is entered.\n\npdb.set_trace()\n\n Enter the debugger at the calling stack frame. This is useful to\n hard-code a breakpoint at a given point in a program, even if the\n code is not otherwise being debugged (e.g. when an assertion\n fails).\n\npdb.post_mortem([traceback])\n\n Enter post-mortem debugging of the given *traceback* object. If no\n *traceback* is given, it uses the one of the exception that is\n currently being handled (an exception must be being handled if the\n default is to be used).\n\npdb.pm()\n\n Enter post-mortem debugging of the traceback found in\n ``sys.last_traceback``.\n\nThe ``run*`` functions and ``set_trace()`` are aliases for\ninstantiating the ``Pdb`` class and calling the method of the same\nname. If you want to access further features, you have to do this\nyourself:\n\nclass class pdb.Pdb(completekey=\'tab\', stdin=None, stdout=None, skip=None)\n\n ``Pdb`` is the debugger class.\n\n The *completekey*, *stdin* and *stdout* arguments are passed to the\n underlying ``cmd.Cmd`` class; see the description there.\n\n The *skip* argument, if given, must be an iterable of glob-style\n module name patterns. The debugger will not step into frames that\n originate in a module that matches one of these patterns. [1]\n\n Example call to enable tracing with *skip*:\n\n import pdb; pdb.Pdb(skip=[\'django.*\']).set_trace()\n\n New in version 2.7: The *skip* argument.\n\n run(statement[, globals[, locals]])\n runeval(expression[, globals[, locals]])\n runcall(function[, argument, ...])\n set_trace()\n\n See the documentation for the functions explained above.\n', 'del': u'\nThe ``del`` statement\n*********************\n\n del_stmt ::= "del" target_list\n\nDeletion is recursively defined very similar to the way assignment is\ndefined. Rather that spelling it out in full details, here are some\nhints.\n\nDeletion of a target list recursively deletes each target, from left\nto right.\n\nDeletion of a name removes the binding of that name from the local or\nglobal namespace, depending on whether the name occurs in a ``global``\nstatement in the same code block. 
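A minimal illustration of the deletion rule just stated (the unbound-name case is spelled out immediately below):

    x = 1
    del x     # removes the binding of the name 'x' from the namespace
    # 'x' is now unbound; referring to it again raises NameError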
If the name is unbound, a\n``NameError`` exception will be raised.\n\nIt is illegal to delete a name from the local namespace if it occurs\nas a free variable in a nested block.\n\nDeletion of attribute references, subscriptions and slicings is passed\nto the primary object involved; deletion of a slicing is in general\nequivalent to assignment of an empty slice of the right type (but even\nthis is determined by the sliced object).\n', 'dict': u'\nDictionary displays\n*******************\n\nA dictionary display is a possibly empty series of key/datum pairs\nenclosed in curly braces:\n\n dict_display ::= "{" [key_datum_list | dict_comprehension] "}"\n key_datum_list ::= key_datum ("," key_datum)* [","]\n key_datum ::= expression ":" expression\n dict_comprehension ::= expression ":" expression comp_for\n\nA dictionary display yields a new dictionary object.\n\nIf a comma-separated sequence of key/datum pairs is given, they are\nevaluated from left to right to define the entries of the dictionary:\neach key object is used as a key into the dictionary to store the\ncorresponding datum. This means that you can specify the same key\nmultiple times in the key/datum list, and the final dictionary\'s value\nfor that key will be the last one given.\n\nA dict comprehension, in contrast to list and set comprehensions,\nneeds two expressions separated with a colon followed by the usual\n"for" and "if" clauses. When the comprehension is run, the resulting\nkey and value elements are inserted in the new dictionary in the order\nthey are produced.\n\nRestrictions on the types of the key values are listed earlier in\nsection *The standard type hierarchy*. (To summarize, the key type\nshould be *hashable*, which excludes all mutable objects.) Clashes\nbetween duplicate keys are not detected; the last datum (textually\nrightmost in the display) stored for a given key value prevails.\n', 'dynamic-features': u'\nInteraction with dynamic features\n*********************************\n\nThere are several cases where Python statements are illegal when used\nin conjunction with nested scopes that contain free variables.\n\nIf a variable is referenced in an enclosing scope, it is illegal to\ndelete the name. An error will be reported at compile time.\n\nIf the wild card form of import --- ``import *`` --- is used in a\nfunction and the function contains or is a nested block with free\nvariables, the compiler will raise a ``SyntaxError``.\n\nIf ``exec`` is used in a function and the function contains or is a\nnested block with free variables, the compiler will raise a\n``SyntaxError`` unless the exec explicitly specifies the local\nnamespace for the ``exec``. (In other words, ``exec obj`` would be\nillegal, but ``exec obj in ns`` would be legal.)\n\nThe ``eval()``, ``execfile()``, and ``input()`` functions and the\n``exec`` statement do not have access to the full environment for\nresolving names. Names may be resolved in the local and global\nnamespaces of the caller. Free variables are not resolved in the\nnearest enclosing namespace, but in the global namespace. 
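A short, hypothetical illustration of the dictionary display rules above (duplicate keys and a dict comprehension):

    # duplicate keys: the last (textually rightmost) datum prevails
    d = {'answer': 1, 'answer': 42}         # == {'answer': 42}
    # dict comprehension: "key: value" followed by the usual for clause
    squares = {x: x * x for x in range(4)}  # == {0: 0, 1: 1, 2: 4, 3: 9}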
[1] The\n``exec`` statement and the ``eval()`` and ``execfile()`` functions\nhave optional arguments to override the global and local namespace.\nIf only one namespace is specified, it is used for both.\n', 'else': u'\nThe ``if`` statement\n********************\n\nThe ``if`` statement is used for conditional execution:\n\n if_stmt ::= "if" expression ":" suite\n ( "elif" expression ":" suite )*\n ["else" ":" suite]\n\nIt selects exactly one of the suites by evaluating the expressions one\nby one until one is found to be true (see section *Boolean operations*\nfor the definition of true and false); then that suite is executed\n(and no other part of the ``if`` statement is executed or evaluated).\nIf all expressions are false, the suite of the ``else`` clause, if\npresent, is executed.\n', 'exceptions': u'\nExceptions\n**********\n\nExceptions are a means of breaking out of the normal flow of control\nof a code block in order to handle errors or other exceptional\nconditions. An exception is *raised* at the point where the error is\ndetected; it may be *handled* by the surrounding code block or by any\ncode block that directly or indirectly invoked the code block where\nthe error occurred.\n\nThe Python interpreter raises an exception when it detects a run-time\nerror (such as division by zero). A Python program can also\nexplicitly raise an exception with the ``raise`` statement. Exception\nhandlers are specified with the ``try`` ... ``except`` statement. The\n``finally`` clause of such a statement can be used to specify cleanup\ncode which does not handle the exception, but is executed whether an\nexception occurred or not in the preceding code.\n\nPython uses the "termination" model of error handling: an exception\nhandler can find out what happened and continue execution at an outer\nlevel, but it cannot repair the cause of the error and retry the\nfailing operation (except by re-entering the offending piece of code\nfrom the top).\n\nWhen an exception is not handled at all, the interpreter terminates\nexecution of the program, or returns to its interactive main loop. In\neither case, it prints a stack backtrace, except when the exception is\n``SystemExit``.\n\nExceptions are identified by class instances. The ``except`` clause\nis selected depending on the class of the instance: it must reference\nthe class of the instance or a base class thereof. The instance can\nbe received by the handler and can carry additional information about\nthe exceptional condition.\n\nExceptions can also be identified by strings, in which case the\n``except`` clause is selected by object identity. An arbitrary value\ncan be raised along with the identifying string which can be passed to\nthe handler.\n\nNote: Messages to exceptions are not part of the Python API. Their\n contents may change from one version of Python to the next without\n warning and should not be relied on by code which will run under\n multiple versions of the interpreter.\n\nSee also the description of the ``try`` statement in section *The try\nstatement* and ``raise`` statement in section *The raise statement*.\n\n-[ Footnotes ]-\n\n[1] This limitation occurs because the code that is executed by these\n operations is not available at the time the module is compiled.\n', 'exec': u'\nThe ``exec`` statement\n**********************\n\n exec_stmt ::= "exec" or_expr ["in" expression ["," expression]]\n\nThis statement supports dynamic execution of Python code. 
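A minimal sketch of the ``try`` ... ``except`` ... ``finally`` behaviour described in the exceptions section above (values are illustrative):

    try:
        result = 1 / 0
    except ZeroDivisionError:
        # selected because the instance's class matches the clause
        result = None
    finally:
        # cleanup code; runs whether or not an exception occurred
        pass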
The first\nexpression should evaluate to either a string, an open file object, or\na code object. If it is a string, the string is parsed as a suite of\nPython statements which is then executed (unless a syntax error\noccurs). [1] If it is an open file, the file is parsed until EOF and\nexecuted. If it is a code object, it is simply executed. In all\ncases, the code that\'s executed is expected to be valid as file input\n(see section *File input*). Be aware that the ``return`` and\n``yield`` statements may not be used outside of function definitions\neven within the context of code passed to the ``exec`` statement.\n\nIn all cases, if the optional parts are omitted, the code is executed\nin the current scope. If only the first expression after ``in`` is\nspecified, it should be a dictionary, which will be used for both the\nglobal and the local variables. If two expressions are given, they\nare used for the global and local variables, respectively. If\nprovided, *locals* can be any mapping object.\n\nChanged in version 2.4: Formerly, *locals* was required to be a\ndictionary.\n\nAs a side effect, an implementation may insert additional keys into\nthe dictionaries given besides those corresponding to variable names\nset by the executed code. For example, the current implementation may\nadd a reference to the dictionary of the built-in module\n``__builtin__`` under the key ``__builtins__`` (!).\n\n**Programmer\'s hints:** dynamic evaluation of expressions is supported\nby the built-in function ``eval()``. The built-in functions\n``globals()`` and ``locals()`` return the current global and local\ndictionary, respectively, which may be useful to pass around for use\nby ``exec``.\n\n-[ Footnotes ]-\n\n[1] Note that the parser only accepts the Unix-style end of line\n convention. If you are reading the code from a file, make sure to\n use universal newline mode to convert Windows or Mac-style\n newlines.\n', - 'execmodel': u'\nExecution model\n***************\n\n\nNaming and binding\n==================\n\n*Names* refer to objects. Names are introduced by name binding\noperations. Each occurrence of a name in the program text refers to\nthe *binding* of that name established in the innermost function block\ncontaining the use.\n\nA *block* is a piece of Python program text that is executed as a\nunit. The following are blocks: a module, a function body, and a class\ndefinition. Each command typed interactively is a block. A script\nfile (a file given as standard input to the interpreter or specified\non the interpreter command line the first argument) is a code block.\nA script command (a command specified on the interpreter command line\nwith the \'**-c**\' option) is a code block. The file read by the\nbuilt-in function ``execfile()`` is a code block. The string argument\npassed to the built-in function ``eval()`` and to the ``exec``\nstatement is a code block. The expression read and evaluated by the\nbuilt-in function ``input()`` is a code block.\n\nA code block is executed in an *execution frame*. A frame contains\nsome administrative information (used for debugging) and determines\nwhere and how execution continues after the code block\'s execution has\ncompleted.\n\nA *scope* defines the visibility of a name within a block. If a local\nvariable is defined in a block, its scope includes that block. If the\ndefinition occurs in a function block, the scope extends to any blocks\ncontained within the defining one, unless a contained block introduces\na different binding for the name. 
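A small sketch of the single-namespace form of ``exec`` described above (names and values are illustrative):

    ns = {}
    exec "x = 6 * 7" in ns   # 'ns' serves as both globals and locals
    assert ns['x'] == 42     # the executed code bound 'x' in that mapping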
The scope of names defined in a\nclass block is limited to the class block; it does not extend to the\ncode blocks of methods -- this includes generator expressions since\nthey are implemented using a function scope. This means that the\nfollowing will fail:\n\n class A:\n a = 42\n b = list(a + i for i in range(10))\n\nWhen a name is used in a code block, it is resolved using the nearest\nenclosing scope. The set of all such scopes visible to a code block\nis called the block\'s *environment*.\n\nIf a name is bound in a block, it is a local variable of that block.\nIf a name is bound at the module level, it is a global variable. (The\nvariables of the module code block are local and global.) If a\nvariable is used in a code block but not defined there, it is a *free\nvariable*.\n\nWhen a name is not found at all, a ``NameError`` exception is raised.\nIf the name refers to a local variable that has not been bound, a\n``UnboundLocalError`` exception is raised. ``UnboundLocalError`` is a\nsubclass of ``NameError``.\n\nThe following constructs bind names: formal parameters to functions,\n``import`` statements, class and function definitions (these bind the\nclass or function name in the defining block), and targets that are\nidentifiers if occurring in an assignment, ``for`` loop header, in the\nsecond position of an ``except`` clause header or after ``as`` in a\n``with`` statement. The ``import`` statement of the form ``from ...\nimport *`` binds all names defined in the imported module, except\nthose beginning with an underscore. This form may only be used at the\nmodule level.\n\nA target occurring in a ``del`` statement is also considered bound for\nthis purpose (though the actual semantics are to unbind the name). It\nis illegal to unbind a name that is referenced by an enclosing scope;\nthe compiler will report a ``SyntaxError``.\n\nEach assignment or import statement occurs within a block defined by a\nclass or function definition or at the module level (the top-level\ncode block).\n\nIf a name binding operation occurs anywhere within a code block, all\nuses of the name within the block are treated as references to the\ncurrent block. This can lead to errors when a name is used within a\nblock before it is bound. This rule is subtle. Python lacks\ndeclarations and allows name binding operations to occur anywhere\nwithin a code block. The local variables of a code block can be\ndetermined by scanning the entire text of the block for name binding\noperations.\n\nIf the global statement occurs within a block, all uses of the name\nspecified in the statement refer to the binding of that name in the\ntop-level namespace. Names are resolved in the top-level namespace by\nsearching the global namespace, i.e. the namespace of the module\ncontaining the code block, and the builtins namespace, the namespace\nof the module ``__builtin__``. The global namespace is searched\nfirst. If the name is not found there, the builtins namespace is\nsearched. The global statement must precede all uses of the name.\n\nThe builtins namespace associated with the execution of a code block\nis actually found by looking up the name ``__builtins__`` in its\nglobal namespace; this should be a dictionary or a module (in the\nlatter case the module\'s dictionary is used). By default, when in the\n``__main__`` module, ``__builtins__`` is the built-in module\n``__builtin__`` (note: no \'s\'); when in any other module,\n``__builtins__`` is an alias for the dictionary of the ``__builtin__``\nmodule itself. 
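A minimal example of the ``global`` statement rule described above (names are hypothetical):

    counter = 0
    def bump():
        global counter        # 'counter' now refers to the module-level binding
        counter = counter + 1
    bump()
    assert counter == 1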
``__builtins__`` can be set to a user-created\ndictionary to create a weak form of restricted execution.\n\n**CPython implementation detail:** Users should not touch\n``__builtins__``; it is strictly an implementation detail. Users\nwanting to override values in the builtins namespace should ``import``\nthe ``__builtin__`` (no \'s\') module and modify its attributes\nappropriately.\n\nThe namespace for a module is automatically created the first time a\nmodule is imported. The main module for a script is always called\n``__main__``.\n\nThe global statement has the same scope as a name binding operation in\nthe same block. If the nearest enclosing scope for a free variable\ncontains a global statement, the free variable is treated as a global.\n\nA class definition is an executable statement that may use and define\nnames. These references follow the normal rules for name resolution.\nThe namespace of the class definition becomes the attribute dictionary\nof the class. Names defined at the class scope are not visible in\nmethods.\n\n\nInteraction with dynamic features\n---------------------------------\n\nThere are several cases where Python statements are illegal when used\nin conjunction with nested scopes that contain free variables.\n\nIf a variable is referenced in an enclosing scope, it is illegal to\ndelete the name. An error will be reported at compile time.\n\nIf the wild card form of import --- ``import *`` --- is used in a\nfunction and the function contains or is a nested block with free\nvariables, the compiler will raise a ``SyntaxError``.\n\nIf ``exec`` is used in a function and the function contains or is a\nnested block with free variables, the compiler will raise a\n``SyntaxError`` unless the exec explicitly specifies the local\nnamespace for the ``exec``. (In other words, ``exec obj`` would be\nillegal, but ``exec obj in ns`` would be legal.)\n\nThe ``eval()``, ``execfile()``, and ``input()`` functions and the\n``exec`` statement do not have access to the full environment for\nresolving names. Names may be resolved in the local and global\nnamespaces of the caller. Free variables are not resolved in the\nnearest enclosing namespace, but in the global namespace. [1] The\n``exec`` statement and the ``eval()`` and ``execfile()`` functions\nhave optional arguments to override the global and local namespace.\nIf only one namespace is specified, it is used for both.\n\n\nExceptions\n==========\n\nExceptions are a means of breaking out of the normal flow of control\nof a code block in order to handle errors or other exceptional\nconditions. An exception is *raised* at the point where the error is\ndetected; it may be *handled* by the surrounding code block or by any\ncode block that directly or indirectly invoked the code block where\nthe error occurred.\n\nThe Python interpreter raises an exception when it detects a run-time\nerror (such as division by zero). A Python program can also\nexplicitly raise an exception with the ``raise`` statement. Exception\nhandlers are specified with the ``try`` ... ``except`` statement. 
The\n``finally`` clause of such a statement can be used to specify cleanup\ncode which does not handle the exception, but is executed whether an\nexception occurred or not in the preceding code.\n\nPython uses the "termination" model of error handling: an exception\nhandler can find out what happened and continue execution at an outer\nlevel, but it cannot repair the cause of the error and retry the\nfailing operation (except by re-entering the offending piece of code\nfrom the top).\n\nWhen an exception is not handled at all, the interpreter terminates\nexecution of the program, or returns to its interactive main loop. In\neither case, it prints a stack backtrace, except when the exception is\n``SystemExit``.\n\nExceptions are identified by class instances. The ``except`` clause\nis selected depending on the class of the instance: it must reference\nthe class of the instance or a base class thereof. The instance can\nbe received by the handler and can carry additional information about\nthe exceptional condition.\n\nExceptions can also be identified by strings, in which case the\n``except`` clause is selected by object identity. An arbitrary value\ncan be raised along with the identifying string which can be passed to\nthe handler.\n\nNote: Messages to exceptions are not part of the Python API. Their\n contents may change from one version of Python to the next without\n warning and should not be relied on by code which will run under\n multiple versions of the interpreter.\n\nSee also the description of the ``try`` statement in section *The try\nstatement* and ``raise`` statement in section *The raise statement*.\n\n-[ Footnotes ]-\n\n[1] This limitation occurs because the code that is executed by these\n operations is not available at the time the module is compiled.\n', + 'execmodel': u'\nExecution model\n***************\n\n\nNaming and binding\n==================\n\n*Names* refer to objects. Names are introduced by name binding\noperations. Each occurrence of a name in the program text refers to\nthe *binding* of that name established in the innermost function block\ncontaining the use.\n\nA *block* is a piece of Python program text that is executed as a\nunit. The following are blocks: a module, a function body, and a class\ndefinition. Each command typed interactively is a block. A script\nfile (a file given as standard input to the interpreter or specified\non the interpreter command line the first argument) is a code block.\nA script command (a command specified on the interpreter command line\nwith the \'**-c**\' option) is a code block. The file read by the\nbuilt-in function ``execfile()`` is a code block. The string argument\npassed to the built-in function ``eval()`` and to the ``exec``\nstatement is a code block. The expression read and evaluated by the\nbuilt-in function ``input()`` is a code block.\n\nA code block is executed in an *execution frame*. A frame contains\nsome administrative information (used for debugging) and determines\nwhere and how execution continues after the code block\'s execution has\ncompleted.\n\nA *scope* defines the visibility of a name within a block. If a local\nvariable is defined in a block, its scope includes that block. If the\ndefinition occurs in a function block, the scope extends to any blocks\ncontained within the defining one, unless a contained block introduces\na different binding for the name. 
The scope of names defined in a\nclass block is limited to the class block; it does not extend to the\ncode blocks of methods -- this includes generator expressions since\nthey are implemented using a function scope. This means that the\nfollowing will fail:\n\n class A:\n a = 42\n b = list(a + i for i in range(10))\n\nWhen a name is used in a code block, it is resolved using the nearest\nenclosing scope. The set of all such scopes visible to a code block\nis called the block\'s *environment*.\n\nIf a name is bound in a block, it is a local variable of that block.\nIf a name is bound at the module level, it is a global variable. (The\nvariables of the module code block are local and global.) If a\nvariable is used in a code block but not defined there, it is a *free\nvariable*.\n\nWhen a name is not found at all, a ``NameError`` exception is raised.\nIf the name refers to a local variable that has not been bound, a\n``UnboundLocalError`` exception is raised. ``UnboundLocalError`` is a\nsubclass of ``NameError``.\n\nThe following constructs bind names: formal parameters to functions,\n``import`` statements, class and function definitions (these bind the\nclass or function name in the defining block), and targets that are\nidentifiers if occurring in an assignment, ``for`` loop header, in the\nsecond position of an ``except`` clause header or after ``as`` in a\n``with`` statement. The ``import`` statement of the form ``from ...\nimport *`` binds all names defined in the imported module, except\nthose beginning with an underscore. This form may only be used at the\nmodule level.\n\nA target occurring in a ``del`` statement is also considered bound for\nthis purpose (though the actual semantics are to unbind the name). It\nis illegal to unbind a name that is referenced by an enclosing scope;\nthe compiler will report a ``SyntaxError``.\n\nEach assignment or import statement occurs within a block defined by a\nclass or function definition or at the module level (the top-level\ncode block).\n\nIf a name binding operation occurs anywhere within a code block, all\nuses of the name within the block are treated as references to the\ncurrent block. This can lead to errors when a name is used within a\nblock before it is bound. This rule is subtle. Python lacks\ndeclarations and allows name binding operations to occur anywhere\nwithin a code block. The local variables of a code block can be\ndetermined by scanning the entire text of the block for name binding\noperations.\n\nIf the global statement occurs within a block, all uses of the name\nspecified in the statement refer to the binding of that name in the\ntop-level namespace. Names are resolved in the top-level namespace by\nsearching the global namespace, i.e. the namespace of the module\ncontaining the code block, and the builtins namespace, the namespace\nof the module ``__builtin__``. The global namespace is searched\nfirst. If the name is not found there, the builtins namespace is\nsearched. The global statement must precede all uses of the name.\n\nThe builtins namespace associated with the execution of a code block\nis actually found by looking up the name ``__builtins__`` in its\nglobal namespace; this should be a dictionary or a module (in the\nlatter case the module\'s dictionary is used). By default, when in the\n``__main__`` module, ``__builtins__`` is the built-in module\n``__builtin__`` (note: no \'s\'); when in any other module,\n``__builtins__`` is an alias for the dictionary of the ``__builtin__``\nmodule itself. 
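A short sketch of the use-before-binding error described above (hypothetical names): because the assignment makes ``x`` local to the whole block, the earlier read fails instead of seeing the global.

    x = 10
    def f():
        y = x    # raises UnboundLocalError when f() is called: the
        x = 20   # assignment below makes 'x' local to the entire block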
``__builtins__`` can be set to a user-created\ndictionary to create a weak form of restricted execution.\n\n**CPython implementation detail:** Users should not touch\n``__builtins__``; it is strictly an implementation detail. Users\nwanting to override values in the builtins namespace should ``import``\nthe ``__builtin__`` (no \'s\') module and modify its attributes\nappropriately.\n\nThe namespace for a module is automatically created the first time a\nmodule is imported. The main module for a script is always called\n``__main__``.\n\nThe ``global`` statement has the same scope as a name binding\noperation in the same block. If the nearest enclosing scope for a\nfree variable contains a global statement, the free variable is\ntreated as a global.\n\nA class definition is an executable statement that may use and define\nnames. These references follow the normal rules for name resolution.\nThe namespace of the class definition becomes the attribute dictionary\nof the class. Names defined at the class scope are not visible in\nmethods.\n\n\nInteraction with dynamic features\n---------------------------------\n\nThere are several cases where Python statements are illegal when used\nin conjunction with nested scopes that contain free variables.\n\nIf a variable is referenced in an enclosing scope, it is illegal to\ndelete the name. An error will be reported at compile time.\n\nIf the wild card form of import --- ``import *`` --- is used in a\nfunction and the function contains or is a nested block with free\nvariables, the compiler will raise a ``SyntaxError``.\n\nIf ``exec`` is used in a function and the function contains or is a\nnested block with free variables, the compiler will raise a\n``SyntaxError`` unless the exec explicitly specifies the local\nnamespace for the ``exec``. (In other words, ``exec obj`` would be\nillegal, but ``exec obj in ns`` would be legal.)\n\nThe ``eval()``, ``execfile()``, and ``input()`` functions and the\n``exec`` statement do not have access to the full environment for\nresolving names. Names may be resolved in the local and global\nnamespaces of the caller. Free variables are not resolved in the\nnearest enclosing namespace, but in the global namespace. [1] The\n``exec`` statement and the ``eval()`` and ``execfile()`` functions\nhave optional arguments to override the global and local namespace.\nIf only one namespace is specified, it is used for both.\n\n\nExceptions\n==========\n\nExceptions are a means of breaking out of the normal flow of control\nof a code block in order to handle errors or other exceptional\nconditions. An exception is *raised* at the point where the error is\ndetected; it may be *handled* by the surrounding code block or by any\ncode block that directly or indirectly invoked the code block where\nthe error occurred.\n\nThe Python interpreter raises an exception when it detects a run-time\nerror (such as division by zero). A Python program can also\nexplicitly raise an exception with the ``raise`` statement. Exception\nhandlers are specified with the ``try`` ... ``except`` statement. 
The\n``finally`` clause of such a statement can be used to specify cleanup\ncode which does not handle the exception, but is executed whether an\nexception occurred or not in the preceding code.\n\nPython uses the "termination" model of error handling: an exception\nhandler can find out what happened and continue execution at an outer\nlevel, but it cannot repair the cause of the error and retry the\nfailing operation (except by re-entering the offending piece of code\nfrom the top).\n\nWhen an exception is not handled at all, the interpreter terminates\nexecution of the program, or returns to its interactive main loop. In\neither case, it prints a stack backtrace, except when the exception is\n``SystemExit``.\n\nExceptions are identified by class instances. The ``except`` clause\nis selected depending on the class of the instance: it must reference\nthe class of the instance or a base class thereof. The instance can\nbe received by the handler and can carry additional information about\nthe exceptional condition.\n\nExceptions can also be identified by strings, in which case the\n``except`` clause is selected by object identity. An arbitrary value\ncan be raised along with the identifying string which can be passed to\nthe handler.\n\nNote: Messages to exceptions are not part of the Python API. Their\n contents may change from one version of Python to the next without\n warning and should not be relied on by code which will run under\n multiple versions of the interpreter.\n\nSee also the description of the ``try`` statement in section *The try\nstatement* and ``raise`` statement in section *The raise statement*.\n\n-[ Footnotes ]-\n\n[1] This limitation occurs because the code that is executed by these\n operations is not available at the time the module is compiled.\n', 'exprlists': u'\nExpression lists\n****************\n\n expression_list ::= expression ( "," expression )* [","]\n\nAn expression list containing at least one comma yields a tuple. The\nlength of the tuple is the number of expressions in the list. The\nexpressions are evaluated from left to right.\n\nThe trailing comma is required only to create a single tuple (a.k.a. a\n*singleton*); it is optional in all other cases. A single expression\nwithout a trailing comma doesn\'t create a tuple, but rather yields the\nvalue of that expression. (To create an empty tuple, use an empty pair\nof parentheses: ``()``.)\n', 'floating': u'\nFloating point literals\n***********************\n\nFloating point literals are described by the following lexical\ndefinitions:\n\n floatnumber ::= pointfloat | exponentfloat\n pointfloat ::= [intpart] fraction | intpart "."\n exponentfloat ::= (intpart | pointfloat) exponent\n intpart ::= digit+\n fraction ::= "." digit+\n exponent ::= ("e" | "E") ["+" | "-"] digit+\n\nNote that the integer and exponent parts of floating point numbers can\nlook like octal integers, but are interpreted using radix 10. For\nexample, ``077e010`` is legal, and denotes the same number as\n``77e10``. The allowed range of floating point literals is\nimplementation-dependent. Some examples of floating point literals:\n\n 3.14 10. 
.001 1e100 3.14e-10 0e0\n\nNote that numeric literals do not include a sign; a phrase like ``-1``\nis actually an expression composed of the unary operator ``-`` and the\nliteral ``1``.\n', 'for': u'\nThe ``for`` statement\n*********************\n\nThe ``for`` statement is used to iterate over the elements of a\nsequence (such as a string, tuple or list) or other iterable object:\n\n for_stmt ::= "for" target_list "in" expression_list ":" suite\n ["else" ":" suite]\n\nThe expression list is evaluated once; it should yield an iterable\nobject. An iterator is created for the result of the\n``expression_list``. The suite is then executed once for each item\nprovided by the iterator, in the order of ascending indices. Each\nitem in turn is assigned to the target list using the standard rules\nfor assignments, and then the suite is executed. When the items are\nexhausted (which is immediately when the sequence is empty), the suite\nin the ``else`` clause, if present, is executed, and the loop\nterminates.\n\nA ``break`` statement executed in the first suite terminates the loop\nwithout executing the ``else`` clause\'s suite. A ``continue``\nstatement executed in the first suite skips the rest of the suite and\ncontinues with the next item, or with the ``else`` clause if there was\nno next item.\n\nThe suite may assign to the variable(s) in the target list; this does\nnot affect the next item assigned to it.\n\nThe target list is not deleted when the loop is finished, but if the\nsequence is empty, it will not have been assigned to at all by the\nloop. Hint: the built-in function ``range()`` returns a sequence of\nintegers suitable to emulate the effect of Pascal\'s ``for i := a to b\ndo``; e.g., ``range(3)`` returns the list ``[0, 1, 2]``.\n\nNote: There is a subtlety when the sequence is being modified by the loop\n (this can only occur for mutable sequences, i.e. lists). An internal\n counter is used to keep track of which item is used next, and this\n is incremented on each iteration. When this counter has reached the\n length of the sequence the loop terminates. This means that if the\n suite deletes the current (or a previous) item from the sequence,\n the next item will be skipped (since it gets the index of the\n current item which has already been treated). Likewise, if the\n suite inserts an item in the sequence before the current item, the\n current item will be treated again the next time through the loop.\n This can lead to nasty bugs that can be avoided by making a\n temporary copy using a slice of the whole sequence, e.g.,\n\n for x in a[:]:\n if x < 0: a.remove(x)\n', - 'formatstrings': u'\nFormat String Syntax\n********************\n\nThe ``str.format()`` method and the ``Formatter`` class share the same\nsyntax for format strings (although in the case of ``Formatter``,\nsubclasses can define their own format string syntax).\n\nFormat strings contain "replacement fields" surrounded by curly braces\n``{}``. Anything that is not contained in braces is considered literal\ntext, which is copied unchanged to the output. If you need to include\na brace character in the literal text, it can be escaped by doubling:\n``{{`` and ``}}``.\n\nThe grammar for a replacement field is as follows:\n\n replacement_field ::= "{" [field_name] ["!" conversion] [":" format_spec] "}"\n field_name ::= arg_name ("." 
attribute_name | "[" element_index "]")*\n arg_name ::= [identifier | integer]\n attribute_name ::= identifier\n element_index ::= integer | index_string\n index_string ::= +\n conversion ::= "r" | "s"\n format_spec ::= \n\nIn less formal terms, the replacement field can start with a\n*field_name* that specifies the object whose value is to be formatted\nand inserted into the output instead of the replacement field. The\n*field_name* is optionally followed by a *conversion* field, which is\npreceded by an exclamation point ``\'!\'``, and a *format_spec*, which\nis preceded by a colon ``\':\'``. These specify a non-default format\nfor the replacement value.\n\nSee also the *Format Specification Mini-Language* section.\n\nThe *field_name* itself begins with an *arg_name* that is either\neither a number or a keyword. If it\'s a number, it refers to a\npositional argument, and if it\'s a keyword, it refers to a named\nkeyword argument. If the numerical arg_names in a format string are\n0, 1, 2, ... in sequence, they can all be omitted (not just some) and\nthe numbers 0, 1, 2, ... will be automatically inserted in that order.\nThe *arg_name* can be followed by any number of index or attribute\nexpressions. An expression of the form ``\'.name\'`` selects the named\nattribute using ``getattr()``, while an expression of the form\n``\'[index]\'`` does an index lookup using ``__getitem__()``.\n\nChanged in version 2.7: The positional argument specifiers can be\nomitted, so ``\'{} {}\'`` is equivalent to ``\'{0} {1}\'``.\n\nSome simple format string examples:\n\n "First, thou shalt count to {0}" # References first positional argument\n "Bring me a {}" # Implicitly references the first positional argument\n "From {} to {}" # Same as "From {0} to {1}"\n "My quest is {name}" # References keyword argument \'name\'\n "Weight in tons {0.weight}" # \'weight\' attribute of first positional arg\n "Units destroyed: {players[0]}" # First element of keyword argument \'players\'.\n\nThe *conversion* field causes a type coercion before formatting.\nNormally, the job of formatting a value is done by the\n``__format__()`` method of the value itself. However, in some cases\nit is desirable to force a type to be formatted as a string,\noverriding its own definition of formatting. By converting the value\nto a string before calling ``__format__()``, the normal formatting\nlogic is bypassed.\n\nTwo conversion flags are currently supported: ``\'!s\'`` which calls\n``str()`` on the value, and ``\'!r\'`` which calls ``repr()``.\n\nSome examples:\n\n "Harold\'s a clever {0!s}" # Calls str() on the argument first\n "Bring out the holy {name!r}" # Calls repr() on the argument first\n\nThe *format_spec* field contains a specification of how the value\nshould be presented, including such details as field width, alignment,\npadding, decimal precision and so on. Each value type can define its\nown "formatting mini-language" or interpretation of the *format_spec*.\n\nMost built-in types support a common formatting mini-language, which\nis described in the next section.\n\nA *format_spec* field can also include nested replacement fields\nwithin it. These nested replacement fields can contain only a field\nname; conversion flags and format specifications are not allowed. The\nreplacement fields within the format_spec are substituted before the\n*format_spec* string is interpreted. 
This allows the formatting of a\nvalue to be dynamically specified.\n\nSee the *Format examples* section for some examples.\n\n\nFormat Specification Mini-Language\n==================================\n\n"Format specifications" are used within replacement fields contained\nwithin a format string to define how individual values are presented\n(see *Format String Syntax*). They can also be passed directly to the\nbuilt-in ``format()`` function. Each formattable type may define how\nthe format specification is to be interpreted.\n\nMost built-in types implement the following options for format\nspecifications, although some of the formatting options are only\nsupported by the numeric types.\n\nA general convention is that an empty format string (``""``) produces\nthe same result as if you had called ``str()`` on the value. A non-\nempty format string typically modifies the result.\n\nThe general form of a *standard format specifier* is:\n\n format_spec ::= [[fill]align][sign][#][0][width][,][.precision][type]\n fill ::=
\n align ::= "<" | ">" | "=" | "^"\n sign ::= "+" | "-" | " "\n width ::= integer\n precision ::= integer\n type ::= "b" | "c" | "d" | "e" | "E" | "f" | "F" | "g" | "G" | "n" | "o" | "s" | "x" | "X" | "%"\n\nThe *fill* character can be any character other than \'}\' (which\nsignifies the end of the field). The presence of a fill character is\nsignaled by the *next* character, which must be one of the alignment\noptions. If the second character of *format_spec* is not a valid\nalignment option, then it is assumed that both the fill character and\nthe alignment option are absent.\n\nThe meaning of the various alignment options is as follows:\n\n +-----------+------------------------------------------------------------+\n | Option | Meaning |\n +===========+============================================================+\n | ``\'<\'`` | Forces the field to be left-aligned within the available |\n | | space (this is the default). |\n +-----------+------------------------------------------------------------+\n | ``\'>\'`` | Forces the field to be right-aligned within the available |\n | | space. |\n +-----------+------------------------------------------------------------+\n | ``\'=\'`` | Forces the padding to be placed after the sign (if any) |\n | | but before the digits. This is used for printing fields |\n | | in the form \'+000000120\'. This alignment option is only |\n | | valid for numeric types. |\n +-----------+------------------------------------------------------------+\n | ``\'^\'`` | Forces the field to be centered within the available |\n | | space. |\n +-----------+------------------------------------------------------------+\n\nNote that unless a minimum field width is defined, the field width\nwill always be the same size as the data to fill it, so that the\nalignment option has no meaning in this case.\n\nThe *sign* option is only valid for number types, and can be one of\nthe following:\n\n +-----------+------------------------------------------------------------+\n | Option | Meaning |\n +===========+============================================================+\n | ``\'+\'`` | indicates that a sign should be used for both positive as |\n | | well as negative numbers. |\n +-----------+------------------------------------------------------------+\n | ``\'-\'`` | indicates that a sign should be used only for negative |\n | | numbers (this is the default behavior). |\n +-----------+------------------------------------------------------------+\n | space | indicates that a leading space should be used on positive |\n | | numbers, and a minus sign on negative numbers. |\n +-----------+------------------------------------------------------------+\n\nThe ``\'#\'`` option is only valid for integers, and only for binary,\noctal, or hexadecimal output. If present, it specifies that the\noutput will be prefixed by ``\'0b\'``, ``\'0o\'``, or ``\'0x\'``,\nrespectively.\n\nThe ``\',\'`` option signals the use of a comma for a thousands\nseparator. For a locale aware separator, use the ``\'n\'`` integer\npresentation type instead.\n\nChanged in version 2.7: Added the ``\',\'`` option (see also **PEP\n378**).\n\n*width* is a decimal integer defining the minimum field width. If not\nspecified, then the field width will be determined by the content.\n\nIf the *width* field is preceded by a zero (``\'0\'``) character, this\nenables zero-padding. 
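A few hypothetical calls illustrating the fill/align, sign and zero-padding options just described:

    '{:*^11}'.format('pypy')    # '***pypy****'  fill '*', centred with '^'
    '{:+d}'.format(42)          # '+42'          '+' forces a sign on positives
    '{:08.3f}'.format(3.14159)  # '0003.142'     leading '0' enables zero-padding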
This is equivalent to an *alignment* type of\n``\'=\'`` and a *fill* character of ``\'0\'``.\n\nThe *precision* is a decimal number indicating how many digits should\nbe displayed after the decimal point for a floating point value\nformatted with ``\'f\'`` and ``\'F\'``, or before and after the decimal\npoint for a floating point value formatted with ``\'g\'`` or ``\'G\'``.\nFor non-number types the field indicates the maximum field size - in\nother words, how many characters will be used from the field content.\nThe *precision* is not allowed for integer values.\n\nFinally, the *type* determines how the data should be presented.\n\nThe available string presentation types are:\n\n +-----------+------------------------------------------------------------+\n | Type | Meaning |\n +===========+============================================================+\n | ``\'s\'`` | String format. This is the default type for strings and |\n | | may be omitted. |\n +-----------+------------------------------------------------------------+\n | None | The same as ``\'s\'``. |\n +-----------+------------------------------------------------------------+\n\nThe available integer presentation types are:\n\n +-----------+------------------------------------------------------------+\n | Type | Meaning |\n +===========+============================================================+\n | ``\'b\'`` | Binary format. Outputs the number in base 2. |\n +-----------+------------------------------------------------------------+\n | ``\'c\'`` | Character. Converts the integer to the corresponding |\n | | unicode character before printing. |\n +-----------+------------------------------------------------------------+\n | ``\'d\'`` | Decimal Integer. Outputs the number in base 10. |\n +-----------+------------------------------------------------------------+\n | ``\'o\'`` | Octal format. Outputs the number in base 8. |\n +-----------+------------------------------------------------------------+\n | ``\'x\'`` | Hex format. Outputs the number in base 16, using lower- |\n | | case letters for the digits above 9. |\n +-----------+------------------------------------------------------------+\n | ``\'X\'`` | Hex format. Outputs the number in base 16, using upper- |\n | | case letters for the digits above 9. |\n +-----------+------------------------------------------------------------+\n | ``\'n\'`` | Number. This is the same as ``\'d\'``, except that it uses |\n | | the current locale setting to insert the appropriate |\n | | number separator characters. |\n +-----------+------------------------------------------------------------+\n | None | The same as ``\'d\'``. |\n +-----------+------------------------------------------------------------+\n\nIn addition to the above presentation types, integers can be formatted\nwith the floating point presentation types listed below (except\n``\'n\'`` and None). When doing so, ``float()`` is used to convert the\ninteger to a floating point number before formatting.\n\nThe available presentation types for floating point and decimal values\nare:\n\n +-----------+------------------------------------------------------------+\n | Type | Meaning |\n +===========+============================================================+\n | ``\'e\'`` | Exponent notation. Prints the number in scientific |\n | | notation using the letter \'e\' to indicate the exponent. |\n +-----------+------------------------------------------------------------+\n | ``\'E\'`` | Exponent notation. 
Same as ``\'e\'`` except it uses an upper |\n | | case \'E\' as the separator character. |\n +-----------+------------------------------------------------------------+\n | ``\'f\'`` | Fixed point. Displays the number as a fixed-point number. |\n +-----------+------------------------------------------------------------+\n | ``\'F\'`` | Fixed point. Same as ``\'f\'``. |\n +-----------+------------------------------------------------------------+\n | ``\'g\'`` | General format. For a given precision ``p >= 1``, this |\n | | rounds the number to ``p`` significant digits and then |\n | | formats the result in either fixed-point format or in |\n | | scientific notation, depending on its magnitude. The |\n | | precise rules are as follows: suppose that the result |\n | | formatted with presentation type ``\'e\'`` and precision |\n | | ``p-1`` would have exponent ``exp``. Then if ``-4 <= exp |\n | | < p``, the number is formatted with presentation type |\n | | ``\'f\'`` and precision ``p-1-exp``. Otherwise, the number |\n | | is formatted with presentation type ``\'e\'`` and precision |\n | | ``p-1``. In both cases insignificant trailing zeros are |\n | | removed from the significand, and the decimal point is |\n | | also removed if there are no remaining digits following |\n | | it. Postive and negative infinity, positive and negative |\n | | zero, and nans, are formatted as ``inf``, ``-inf``, ``0``, |\n | | ``-0`` and ``nan`` respectively, regardless of the |\n | | precision. A precision of ``0`` is treated as equivalent |\n | | to a precision of ``1``. |\n +-----------+------------------------------------------------------------+\n | ``\'G\'`` | General format. Same as ``\'g\'`` except switches to ``\'E\'`` |\n | | if the number gets too large. The representations of |\n | | infinity and NaN are uppercased, too. |\n +-----------+------------------------------------------------------------+\n | ``\'n\'`` | Number. This is the same as ``\'g\'``, except that it uses |\n | | the current locale setting to insert the appropriate |\n | | number separator characters. |\n +-----------+------------------------------------------------------------+\n | ``\'%\'`` | Percentage. Multiplies the number by 100 and displays in |\n | | fixed (``\'f\'``) format, followed by a percent sign. |\n +-----------+------------------------------------------------------------+\n | None | The same as ``\'g\'``. |\n +-----------+------------------------------------------------------------+\n\n\nFormat examples\n===============\n\nThis section contains examples of the new format syntax and comparison\nwith the old ``%``-formatting.\n\nIn most of the cases the syntax is similar to the old\n``%``-formatting, with the addition of the ``{}`` and with ``:`` used\ninstead of ``%``. 
For example, ``\'%03.2f\'`` can be translated to\n``\'{:03.2f}\'``.\n\nThe new format syntax also supports new and different options, shown\nin the follow examples.\n\nAccessing arguments by position:\n\n >>> \'{0}, {1}, {2}\'.format(\'a\', \'b\', \'c\')\n \'a, b, c\'\n >>> \'{}, {}, {}\'.format(\'a\', \'b\', \'c\') # 2.7+ only\n \'a, b, c\'\n >>> \'{2}, {1}, {0}\'.format(\'a\', \'b\', \'c\')\n \'c, b, a\'\n >>> \'{2}, {1}, {0}\'.format(*\'abc\') # unpacking argument sequence\n \'c, b, a\'\n >>> \'{0}{1}{0}\'.format(\'abra\', \'cad\') # arguments\' indices can be repeated\n \'abracadabra\'\n\nAccessing arguments by name:\n\n >>> \'Coordinates: {latitude}, {longitude}\'.format(latitude=\'37.24N\', longitude=\'-115.81W\')\n \'Coordinates: 37.24N, -115.81W\'\n >>> coord = {\'latitude\': \'37.24N\', \'longitude\': \'-115.81W\'}\n >>> \'Coordinates: {latitude}, {longitude}\'.format(**coord)\n \'Coordinates: 37.24N, -115.81W\'\n\nAccessing arguments\' attributes:\n\n >>> c = 3-5j\n >>> (\'The complex number {0} is formed from the real part {0.real} \'\n ... \'and the imaginary part {0.imag}.\').format(c)\n \'The complex number (3-5j) is formed from the real part 3.0 and the imaginary part -5.0.\'\n >>> class Point(object):\n ... def __init__(self, x, y):\n ... self.x, self.y = x, y\n ... def __str__(self):\n ... return \'Point({self.x}, {self.y})\'.format(self=self)\n ...\n >>> str(Point(4, 2))\n \'Point(4, 2)\'\n\nAccessing arguments\' items:\n\n >>> coord = (3, 5)\n >>> \'X: {0[0]}; Y: {0[1]}\'.format(coord)\n \'X: 3; Y: 5\'\n\nReplacing ``%s`` and ``%r``:\n\n >>> "repr() shows quotes: {!r}; str() doesn\'t: {!s}".format(\'test1\', \'test2\')\n "repr() shows quotes: \'test1\'; str() doesn\'t: test2"\n\nAligning the text and specifying a width:\n\n >>> \'{:<30}\'.format(\'left aligned\')\n \'left aligned \'\n >>> \'{:>30}\'.format(\'right aligned\')\n \' right aligned\'\n >>> \'{:^30}\'.format(\'centered\')\n \' centered \'\n >>> \'{:*^30}\'.format(\'centered\') # use \'*\' as a fill char\n \'***********centered***********\'\n\nReplacing ``%+f``, ``%-f``, and ``% f`` and specifying a sign:\n\n >>> \'{:+f}; {:+f}\'.format(3.14, -3.14) # show it always\n \'+3.140000; -3.140000\'\n >>> \'{: f}; {: f}\'.format(3.14, -3.14) # show a space for positive numbers\n \' 3.140000; -3.140000\'\n >>> \'{:-f}; {:-f}\'.format(3.14, -3.14) # show only the minus -- same as \'{:f}; {:f}\'\n \'3.140000; -3.140000\'\n\nReplacing ``%x`` and ``%o`` and converting the value to different\nbases:\n\n >>> # format also supports binary numbers\n >>> "int: {0:d}; hex: {0:x}; oct: {0:o}; bin: {0:b}".format(42)\n \'int: 42; hex: 2a; oct: 52; bin: 101010\'\n >>> # with 0x, 0o, or 0b as prefix:\n >>> "int: {0:d}; hex: {0:#x}; oct: {0:#o}; bin: {0:#b}".format(42)\n \'int: 42; hex: 0x2a; oct: 0o52; bin: 0b101010\'\n\nUsing the comma as a thousands separator:\n\n >>> \'{:,}\'.format(1234567890)\n \'1,234,567,890\'\n\nExpressing a percentage:\n\n >>> points = 19.5\n >>> total = 22\n >>> \'Correct answers: {:.2%}.\'.format(points/total)\n \'Correct answers: 88.64%\'\n\nUsing type-specific formatting:\n\n >>> import datetime\n >>> d = datetime.datetime(2010, 7, 4, 12, 15, 58)\n >>> \'{:%Y-%m-%d %H:%M:%S}\'.format(d)\n \'2010-07-04 12:15:58\'\n\nNesting arguments and more complex examples:\n\n >>> for align, text in zip(\'<^>\', [\'left\', \'center\', \'right\']):\n ... 
\'{0:{align}{fill}16}\'.format(text, fill=align, align=align)\n ...\n \'left<<<<<<<<<<<<\'\n \'^^^^^center^^^^^\'\n \'>>>>>>>>>>>right\'\n >>>\n >>> octets = [192, 168, 0, 1]\n >>> \'{:02X}{:02X}{:02X}{:02X}\'.format(*octets)\n \'C0A80001\'\n >>> int(_, 16)\n 3232235521\n >>>\n >>> width = 5\n >>> for num in range(5,12):\n ... for base in \'dXob\':\n ... print \'{0:{width}{base}}\'.format(num, base=base, width=width),\n ... print\n ...\n 5 5 5 101\n 6 6 6 110\n 7 7 7 111\n 8 8 10 1000\n 9 9 11 1001\n 10 A 12 1010\n 11 B 13 1011\n', + 'formatstrings': u'\nFormat String Syntax\n********************\n\nThe ``str.format()`` method and the ``Formatter`` class share the same\nsyntax for format strings (although in the case of ``Formatter``,\nsubclasses can define their own format string syntax).\n\nFormat strings contain "replacement fields" surrounded by curly braces\n``{}``. Anything that is not contained in braces is considered literal\ntext, which is copied unchanged to the output. If you need to include\na brace character in the literal text, it can be escaped by doubling:\n``{{`` and ``}}``.\n\nThe grammar for a replacement field is as follows:\n\n replacement_field ::= "{" [field_name] ["!" conversion] [":" format_spec] "}"\n field_name ::= arg_name ("." attribute_name | "[" element_index "]")*\n arg_name ::= [identifier | integer]\n attribute_name ::= identifier\n element_index ::= integer | index_string\n index_string ::= +\n conversion ::= "r" | "s"\n format_spec ::= \n\nIn less formal terms, the replacement field can start with a\n*field_name* that specifies the object whose value is to be formatted\nand inserted into the output instead of the replacement field. The\n*field_name* is optionally followed by a *conversion* field, which is\npreceded by an exclamation point ``\'!\'``, and a *format_spec*, which\nis preceded by a colon ``\':\'``. These specify a non-default format\nfor the replacement value.\n\nSee also the *Format Specification Mini-Language* section.\n\nThe *field_name* itself begins with an *arg_name* that is either\neither a number or a keyword. If it\'s a number, it refers to a\npositional argument, and if it\'s a keyword, it refers to a named\nkeyword argument. If the numerical arg_names in a format string are\n0, 1, 2, ... in sequence, they can all be omitted (not just some) and\nthe numbers 0, 1, 2, ... will be automatically inserted in that order.\nThe *arg_name* can be followed by any number of index or attribute\nexpressions. An expression of the form ``\'.name\'`` selects the named\nattribute using ``getattr()``, while an expression of the form\n``\'[index]\'`` does an index lookup using ``__getitem__()``.\n\nChanged in version 2.7: The positional argument specifiers can be\nomitted, so ``\'{} {}\'`` is equivalent to ``\'{0} {1}\'``.\n\nSome simple format string examples:\n\n "First, thou shalt count to {0}" # References first positional argument\n "Bring me a {}" # Implicitly references the first positional argument\n "From {} to {}" # Same as "From {0} to {1}"\n "My quest is {name}" # References keyword argument \'name\'\n "Weight in tons {0.weight}" # \'weight\' attribute of first positional arg\n "Units destroyed: {players[0]}" # First element of keyword argument \'players\'.\n\nThe *conversion* field causes a type coercion before formatting.\nNormally, the job of formatting a value is done by the\n``__format__()`` method of the value itself. 
However, in some cases\nit is desirable to force a type to be formatted as a string,\noverriding its own definition of formatting. By converting the value\nto a string before calling ``__format__()``, the normal formatting\nlogic is bypassed.\n\nTwo conversion flags are currently supported: ``\'!s\'`` which calls\n``str()`` on the value, and ``\'!r\'`` which calls ``repr()``.\n\nSome examples:\n\n "Harold\'s a clever {0!s}" # Calls str() on the argument first\n "Bring out the holy {name!r}" # Calls repr() on the argument first\n\nThe *format_spec* field contains a specification of how the value\nshould be presented, including such details as field width, alignment,\npadding, decimal precision and so on. Each value type can define its\nown "formatting mini-language" or interpretation of the *format_spec*.\n\nMost built-in types support a common formatting mini-language, which\nis described in the next section.\n\nA *format_spec* field can also include nested replacement fields\nwithin it. These nested replacement fields can contain only a field\nname; conversion flags and format specifications are not allowed. The\nreplacement fields within the format_spec are substituted before the\n*format_spec* string is interpreted. This allows the formatting of a\nvalue to be dynamically specified.\n\nSee the *Format examples* section for some examples.\n\n\nFormat Specification Mini-Language\n==================================\n\n"Format specifications" are used within replacement fields contained\nwithin a format string to define how individual values are presented\n(see *Format String Syntax*). They can also be passed directly to the\nbuilt-in ``format()`` function. Each formattable type may define how\nthe format specification is to be interpreted.\n\nMost built-in types implement the following options for format\nspecifications, although some of the formatting options are only\nsupported by the numeric types.\n\nA general convention is that an empty format string (``""``) produces\nthe same result as if you had called ``str()`` on the value. A non-\nempty format string typically modifies the result.\n\nThe general form of a *standard format specifier* is:\n\n format_spec ::= [[fill]align][sign][#][0][width][,][.precision][type]\n fill ::= \n align ::= "<" | ">" | "=" | "^"\n sign ::= "+" | "-" | " "\n width ::= integer\n precision ::= integer\n type ::= "b" | "c" | "d" | "e" | "E" | "f" | "F" | "g" | "G" | "n" | "o" | "s" | "x" | "X" | "%"\n\nThe *fill* character can be any character other than \'{\' or \'}\'. The\npresence of a fill character is signaled by the character following\nit, which must be one of the alignment options. If the second\ncharacter of *format_spec* is not a valid alignment option, then it is\nassumed that both the fill character and the alignment option are\nabsent.\n\nThe meaning of the various alignment options is as follows:\n\n +-----------+------------------------------------------------------------+\n | Option | Meaning |\n +===========+============================================================+\n | ``\'<\'`` | Forces the field to be left-aligned within the available |\n | | space (this is the default for most objects). |\n +-----------+------------------------------------------------------------+\n | ``\'>\'`` | Forces the field to be right-aligned within the available |\n | | space (this is the default for numbers). 
|\n +-----------+------------------------------------------------------------+\n | ``\'=\'`` | Forces the padding to be placed after the sign (if any) |\n | | but before the digits. This is used for printing fields |\n | | in the form \'+000000120\'. This alignment option is only |\n | | valid for numeric types. |\n +-----------+------------------------------------------------------------+\n | ``\'^\'`` | Forces the field to be centered within the available |\n | | space. |\n +-----------+------------------------------------------------------------+\n\nNote that unless a minimum field width is defined, the field width\nwill always be the same size as the data to fill it, so that the\nalignment option has no meaning in this case.\n\nThe *sign* option is only valid for number types, and can be one of\nthe following:\n\n +-----------+------------------------------------------------------------+\n | Option | Meaning |\n +===========+============================================================+\n | ``\'+\'`` | indicates that a sign should be used for both positive as |\n | | well as negative numbers. |\n +-----------+------------------------------------------------------------+\n | ``\'-\'`` | indicates that a sign should be used only for negative |\n | | numbers (this is the default behavior). |\n +-----------+------------------------------------------------------------+\n | space | indicates that a leading space should be used on positive |\n | | numbers, and a minus sign on negative numbers. |\n +-----------+------------------------------------------------------------+\n\nThe ``\'#\'`` option is only valid for integers, and only for binary,\noctal, or hexadecimal output. If present, it specifies that the\noutput will be prefixed by ``\'0b\'``, ``\'0o\'``, or ``\'0x\'``,\nrespectively.\n\nThe ``\',\'`` option signals the use of a comma for a thousands\nseparator. For a locale aware separator, use the ``\'n\'`` integer\npresentation type instead.\n\nChanged in version 2.7: Added the ``\',\'`` option (see also **PEP\n378**).\n\n*width* is a decimal integer defining the minimum field width. If not\nspecified, then the field width will be determined by the content.\n\nIf the *width* field is preceded by a zero (``\'0\'``) character, this\nenables zero-padding. This is equivalent to an *alignment* type of\n``\'=\'`` and a *fill* character of ``\'0\'``.\n\nThe *precision* is a decimal number indicating how many digits should\nbe displayed after the decimal point for a floating point value\nformatted with ``\'f\'`` and ``\'F\'``, or before and after the decimal\npoint for a floating point value formatted with ``\'g\'`` or ``\'G\'``.\nFor non-number types the field indicates the maximum field size - in\nother words, how many characters will be used from the field content.\nThe *precision* is not allowed for integer values.\n\nFinally, the *type* determines how the data should be presented.\n\nThe available string presentation types are:\n\n +-----------+------------------------------------------------------------+\n | Type | Meaning |\n +===========+============================================================+\n | ``\'s\'`` | String format. This is the default type for strings and |\n | | may be omitted. |\n +-----------+------------------------------------------------------------+\n | None | The same as ``\'s\'``. 
|\n +-----------+------------------------------------------------------------+\n\nThe available integer presentation types are:\n\n +-----------+------------------------------------------------------------+\n | Type | Meaning |\n +===========+============================================================+\n | ``\'b\'`` | Binary format. Outputs the number in base 2. |\n +-----------+------------------------------------------------------------+\n | ``\'c\'`` | Character. Converts the integer to the corresponding |\n | | unicode character before printing. |\n +-----------+------------------------------------------------------------+\n | ``\'d\'`` | Decimal Integer. Outputs the number in base 10. |\n +-----------+------------------------------------------------------------+\n | ``\'o\'`` | Octal format. Outputs the number in base 8. |\n +-----------+------------------------------------------------------------+\n | ``\'x\'`` | Hex format. Outputs the number in base 16, using lower- |\n | | case letters for the digits above 9. |\n +-----------+------------------------------------------------------------+\n | ``\'X\'`` | Hex format. Outputs the number in base 16, using upper- |\n | | case letters for the digits above 9. |\n +-----------+------------------------------------------------------------+\n | ``\'n\'`` | Number. This is the same as ``\'d\'``, except that it uses |\n | | the current locale setting to insert the appropriate |\n | | number separator characters. |\n +-----------+------------------------------------------------------------+\n | None | The same as ``\'d\'``. |\n +-----------+------------------------------------------------------------+\n\nIn addition to the above presentation types, integers can be formatted\nwith the floating point presentation types listed below (except\n``\'n\'`` and None). When doing so, ``float()`` is used to convert the\ninteger to a floating point number before formatting.\n\nThe available presentation types for floating point and decimal values\nare:\n\n +-----------+------------------------------------------------------------+\n | Type | Meaning |\n +===========+============================================================+\n | ``\'e\'`` | Exponent notation. Prints the number in scientific |\n | | notation using the letter \'e\' to indicate the exponent. |\n +-----------+------------------------------------------------------------+\n | ``\'E\'`` | Exponent notation. Same as ``\'e\'`` except it uses an upper |\n | | case \'E\' as the separator character. |\n +-----------+------------------------------------------------------------+\n | ``\'f\'`` | Fixed point. Displays the number as a fixed-point number. |\n +-----------+------------------------------------------------------------+\n | ``\'F\'`` | Fixed point. Same as ``\'f\'``. |\n +-----------+------------------------------------------------------------+\n | ``\'g\'`` | General format. For a given precision ``p >= 1``, this |\n | | rounds the number to ``p`` significant digits and then |\n | | formats the result in either fixed-point format or in |\n | | scientific notation, depending on its magnitude. The |\n | | precise rules are as follows: suppose that the result |\n | | formatted with presentation type ``\'e\'`` and precision |\n | | ``p-1`` would have exponent ``exp``. Then if ``-4 <= exp |\n | | < p``, the number is formatted with presentation type |\n | | ``\'f\'`` and precision ``p-1-exp``. 
Otherwise, the number |\n | | is formatted with presentation type ``\'e\'`` and precision |\n | | ``p-1``. In both cases insignificant trailing zeros are |\n | | removed from the significand, and the decimal point is |\n | | also removed if there are no remaining digits following |\n | | it. Positive and negative infinity, positive and negative |\n | | zero, and nans, are formatted as ``inf``, ``-inf``, ``0``, |\n | | ``-0`` and ``nan`` respectively, regardless of the |\n | | precision. A precision of ``0`` is treated as equivalent |\n | | to a precision of ``1``. |\n +-----------+------------------------------------------------------------+\n | ``\'G\'`` | General format. Same as ``\'g\'`` except switches to ``\'E\'`` |\n | | if the number gets too large. The representations of |\n | | infinity and NaN are uppercased, too. |\n +-----------+------------------------------------------------------------+\n | ``\'n\'`` | Number. This is the same as ``\'g\'``, except that it uses |\n | | the current locale setting to insert the appropriate |\n | | number separator characters. |\n +-----------+------------------------------------------------------------+\n | ``\'%\'`` | Percentage. Multiplies the number by 100 and displays in |\n | | fixed (``\'f\'``) format, followed by a percent sign. |\n +-----------+------------------------------------------------------------+\n | None | The same as ``\'g\'``. |\n +-----------+------------------------------------------------------------+\n\n\nFormat examples\n===============\n\nThis section contains examples of the new format syntax and comparison\nwith the old ``%``-formatting.\n\nIn most of the cases the syntax is similar to the old\n``%``-formatting, with the addition of the ``{}`` and with ``:`` used\ninstead of ``%``. For example, ``\'%03.2f\'`` can be translated to\n``\'{:03.2f}\'``.\n\nThe new format syntax also supports new and different options, shown\nin the follow examples.\n\nAccessing arguments by position:\n\n >>> \'{0}, {1}, {2}\'.format(\'a\', \'b\', \'c\')\n \'a, b, c\'\n >>> \'{}, {}, {}\'.format(\'a\', \'b\', \'c\') # 2.7+ only\n \'a, b, c\'\n >>> \'{2}, {1}, {0}\'.format(\'a\', \'b\', \'c\')\n \'c, b, a\'\n >>> \'{2}, {1}, {0}\'.format(*\'abc\') # unpacking argument sequence\n \'c, b, a\'\n >>> \'{0}{1}{0}\'.format(\'abra\', \'cad\') # arguments\' indices can be repeated\n \'abracadabra\'\n\nAccessing arguments by name:\n\n >>> \'Coordinates: {latitude}, {longitude}\'.format(latitude=\'37.24N\', longitude=\'-115.81W\')\n \'Coordinates: 37.24N, -115.81W\'\n >>> coord = {\'latitude\': \'37.24N\', \'longitude\': \'-115.81W\'}\n >>> \'Coordinates: {latitude}, {longitude}\'.format(**coord)\n \'Coordinates: 37.24N, -115.81W\'\n\nAccessing arguments\' attributes:\n\n >>> c = 3-5j\n >>> (\'The complex number {0} is formed from the real part {0.real} \'\n ... \'and the imaginary part {0.imag}.\').format(c)\n \'The complex number (3-5j) is formed from the real part 3.0 and the imaginary part -5.0.\'\n >>> class Point(object):\n ... def __init__(self, x, y):\n ... self.x, self.y = x, y\n ... def __str__(self):\n ... 
return \'Point({self.x}, {self.y})\'.format(self=self)\n ...\n >>> str(Point(4, 2))\n \'Point(4, 2)\'\n\nAccessing arguments\' items:\n\n >>> coord = (3, 5)\n >>> \'X: {0[0]}; Y: {0[1]}\'.format(coord)\n \'X: 3; Y: 5\'\n\nReplacing ``%s`` and ``%r``:\n\n >>> "repr() shows quotes: {!r}; str() doesn\'t: {!s}".format(\'test1\', \'test2\')\n "repr() shows quotes: \'test1\'; str() doesn\'t: test2"\n\nAligning the text and specifying a width:\n\n >>> \'{:<30}\'.format(\'left aligned\')\n \'left aligned \'\n >>> \'{:>30}\'.format(\'right aligned\')\n \' right aligned\'\n >>> \'{:^30}\'.format(\'centered\')\n \' centered \'\n >>> \'{:*^30}\'.format(\'centered\') # use \'*\' as a fill char\n \'***********centered***********\'\n\nReplacing ``%+f``, ``%-f``, and ``% f`` and specifying a sign:\n\n >>> \'{:+f}; {:+f}\'.format(3.14, -3.14) # show it always\n \'+3.140000; -3.140000\'\n >>> \'{: f}; {: f}\'.format(3.14, -3.14) # show a space for positive numbers\n \' 3.140000; -3.140000\'\n >>> \'{:-f}; {:-f}\'.format(3.14, -3.14) # show only the minus -- same as \'{:f}; {:f}\'\n \'3.140000; -3.140000\'\n\nReplacing ``%x`` and ``%o`` and converting the value to different\nbases:\n\n >>> # format also supports binary numbers\n >>> "int: {0:d}; hex: {0:x}; oct: {0:o}; bin: {0:b}".format(42)\n \'int: 42; hex: 2a; oct: 52; bin: 101010\'\n >>> # with 0x, 0o, or 0b as prefix:\n >>> "int: {0:d}; hex: {0:#x}; oct: {0:#o}; bin: {0:#b}".format(42)\n \'int: 42; hex: 0x2a; oct: 0o52; bin: 0b101010\'\n\nUsing the comma as a thousands separator:\n\n >>> \'{:,}\'.format(1234567890)\n \'1,234,567,890\'\n\nExpressing a percentage:\n\n >>> points = 19.5\n >>> total = 22\n >>> \'Correct answers: {:.2%}.\'.format(points/total)\n \'Correct answers: 88.64%\'\n\nUsing type-specific formatting:\n\n >>> import datetime\n >>> d = datetime.datetime(2010, 7, 4, 12, 15, 58)\n >>> \'{:%Y-%m-%d %H:%M:%S}\'.format(d)\n \'2010-07-04 12:15:58\'\n\nNesting arguments and more complex examples:\n\n >>> for align, text in zip(\'<^>\', [\'left\', \'center\', \'right\']):\n ... \'{0:{fill}{align}16}\'.format(text, fill=align, align=align)\n ...\n \'left<<<<<<<<<<<<\'\n \'^^^^^center^^^^^\'\n \'>>>>>>>>>>>right\'\n >>>\n >>> octets = [192, 168, 0, 1]\n >>> \'{:02X}{:02X}{:02X}{:02X}\'.format(*octets)\n \'C0A80001\'\n >>> int(_, 16)\n 3232235521\n >>>\n >>> width = 5\n >>> for num in range(5,12):\n ... for base in \'dXob\':\n ... print \'{0:{width}{base}}\'.format(num, base=base, width=width),\n ... print\n ...\n 5 5 5 101\n 6 6 6 110\n 7 7 7 111\n 8 8 10 1000\n 9 9 11 1001\n 10 A 12 1010\n 11 B 13 1011\n', 'function': u'\nFunction definitions\n********************\n\nA function definition defines a user-defined function object (see\nsection *The standard type hierarchy*):\n\n decorated ::= decorators (classdef | funcdef)\n decorators ::= decorator+\n decorator ::= "@" dotted_name ["(" [argument_list [","]] ")"] NEWLINE\n funcdef ::= "def" funcname "(" [parameter_list] ")" ":" suite\n dotted_name ::= identifier ("." identifier)*\n parameter_list ::= (defparameter ",")*\n ( "*" identifier [, "**" identifier]\n | "**" identifier\n | defparameter [","] )\n defparameter ::= parameter ["=" expression]\n sublist ::= parameter ("," parameter)* [","]\n parameter ::= identifier | "(" sublist ")"\n funcname ::= identifier\n\nA function definition is an executable statement. Its execution binds\nthe function name in the current local namespace to a function object\n(a wrapper around the executable code for the function). 
This\nfunction object contains a reference to the current global namespace\nas the global namespace to be used when the function is called.\n\nThe function definition does not execute the function body; this gets\nexecuted only when the function is called. [3]\n\nA function definition may be wrapped by one or more *decorator*\nexpressions. Decorator expressions are evaluated when the function is\ndefined, in the scope that contains the function definition. The\nresult must be a callable, which is invoked with the function object\nas the only argument. The returned value is bound to the function name\ninstead of the function object. Multiple decorators are applied in\nnested fashion. For example, the following code:\n\n @f1(arg)\n @f2\n def func(): pass\n\nis equivalent to:\n\n def func(): pass\n func = f1(arg)(f2(func))\n\nWhen one or more top-level parameters have the form *parameter* ``=``\n*expression*, the function is said to have "default parameter values."\nFor a parameter with a default value, the corresponding argument may\nbe omitted from a call, in which case the parameter\'s default value is\nsubstituted. If a parameter has a default value, all following\nparameters must also have a default value --- this is a syntactic\nrestriction that is not expressed by the grammar.\n\n**Default parameter values are evaluated when the function definition\nis executed.** This means that the expression is evaluated once, when\nthe function is defined, and that that same "pre-computed" value is\nused for each call. This is especially important to understand when a\ndefault parameter is a mutable object, such as a list or a dictionary:\nif the function modifies the object (e.g. by appending an item to a\nlist), the default value is in effect modified. This is generally not\nwhat was intended. A way around this is to use ``None`` as the\ndefault, and explicitly test for it in the body of the function, e.g.:\n\n def whats_on_the_telly(penguin=None):\n if penguin is None:\n penguin = []\n penguin.append("property of the zoo")\n return penguin\n\nFunction call semantics are described in more detail in section\n*Calls*. A function call always assigns values to all parameters\nmentioned in the parameter list, either from position arguments, from\nkeyword arguments, or from default values. If the form\n"``*identifier``" is present, it is initialized to a tuple receiving\nany excess positional parameters, defaulting to the empty tuple. If\nthe form "``**identifier``" is present, it is initialized to a new\ndictionary receiving any excess keyword arguments, defaulting to a new\nempty dictionary.\n\nIt is also possible to create anonymous functions (functions not bound\nto a name), for immediate use in expressions. This uses lambda forms,\ndescribed in section *Lambdas*. Note that the lambda form is merely a\nshorthand for a simplified function definition; a function defined in\na "``def``" statement can be passed around or assigned to another name\njust like a function defined by a lambda form. The "``def``" form is\nactually more powerful since it allows the execution of multiple\nstatements.\n\n**Programmer\'s note:** Functions are first-class objects. A "``def``"\nform executed inside a function definition defines a local function\nthat can be returned or passed around. Free variables used in the\nnested function can access the local variables of the function\ncontaining the def. 
See section *Naming and binding* for details.\n', 'global': u'\nThe ``global`` statement\n************************\n\n global_stmt ::= "global" identifier ("," identifier)*\n\nThe ``global`` statement is a declaration which holds for the entire\ncurrent code block. It means that the listed identifiers are to be\ninterpreted as globals. It would be impossible to assign to a global\nvariable without ``global``, although free variables may refer to\nglobals without being declared global.\n\nNames listed in a ``global`` statement must not be used in the same\ncode block textually preceding that ``global`` statement.\n\nNames listed in a ``global`` statement must not be defined as formal\nparameters or in a ``for`` loop control target, ``class`` definition,\nfunction definition, or ``import`` statement.\n\n**CPython implementation detail:** The current implementation does not\nenforce the latter two restrictions, but programs should not abuse\nthis freedom, as future implementations may enforce them or silently\nchange the meaning of the program.\n\n**Programmer\'s note:** the ``global`` is a directive to the parser.\nIt applies only to code parsed at the same time as the ``global``\nstatement. In particular, a ``global`` statement contained in an\n``exec`` statement does not affect the code block *containing* the\n``exec`` statement, and code contained in an ``exec`` statement is\nunaffected by ``global`` statements in the code containing the\n``exec`` statement. The same applies to the ``eval()``,\n``execfile()`` and ``compile()`` functions.\n', - 'id-classes': u'\nReserved classes of identifiers\n*******************************\n\nCertain classes of identifiers (besides keywords) have special\nmeanings. These classes are identified by the patterns of leading and\ntrailing underscore characters:\n\n``_*``\n Not imported by ``from module import *``. The special identifier\n ``_`` is used in the interactive interpreter to store the result of\n the last evaluation; it is stored in the ``__builtin__`` module.\n When not in interactive mode, ``_`` has no special meaning and is\n not defined. See section *The import statement*.\n\n Note: The name ``_`` is often used in conjunction with\n internationalization; refer to the documentation for the\n ``gettext`` module for more information on this convention.\n\n``__*__``\n System-defined names. These names are defined by the interpreter\n and its implementation (including the standard library);\n applications should not expect to define additional names using\n this convention. The set of names of this class defined by Python\n may be extended in future versions. See section *Special method\n names*.\n\n``__*``\n Class-private names. Names in this category, when used within the\n context of a class definition, are re-written to use a mangled form\n to help avoid name clashes between "private" attributes of base and\n derived classes. See section *Identifiers (Names)*.\n', - 'identifiers': u'\nIdentifiers and keywords\n************************\n\nIdentifiers (also referred to as *names*) are described by the\nfollowing lexical definitions:\n\n identifier ::= (letter|"_") (letter | digit | "_")*\n letter ::= lowercase | uppercase\n lowercase ::= "a"..."z"\n uppercase ::= "A"..."Z"\n digit ::= "0"..."9"\n\nIdentifiers are unlimited in length. Case is significant.\n\n\nKeywords\n========\n\nThe following identifiers are used as reserved words, or *keywords* of\nthe language, and cannot be used as ordinary identifiers. 
They must\nbe spelled exactly as written here:\n\n and del from not while\n as elif global or with\n assert else if pass yield\n break except import print\n class exec in raise\n continue finally is return\n def for lambda try\n\nChanged in version 2.4: ``None`` became a constant and is now\nrecognized by the compiler as a name for the built-in object ``None``.\nAlthough it is not a keyword, you cannot assign a different object to\nit.\n\nChanged in version 2.5: Both ``as`` and ``with`` are only recognized\nwhen the ``with_statement`` future feature has been enabled. It will\nalways be enabled in Python 2.6. See section *The with statement* for\ndetails. Note that using ``as`` and ``with`` as identifiers will\nalways issue a warning, even when the ``with_statement`` future\ndirective is not in effect.\n\n\nReserved classes of identifiers\n===============================\n\nCertain classes of identifiers (besides keywords) have special\nmeanings. These classes are identified by the patterns of leading and\ntrailing underscore characters:\n\n``_*``\n Not imported by ``from module import *``. The special identifier\n ``_`` is used in the interactive interpreter to store the result of\n the last evaluation; it is stored in the ``__builtin__`` module.\n When not in interactive mode, ``_`` has no special meaning and is\n not defined. See section *The import statement*.\n\n Note: The name ``_`` is often used in conjunction with\n internationalization; refer to the documentation for the\n ``gettext`` module for more information on this convention.\n\n``__*__``\n System-defined names. These names are defined by the interpreter\n and its implementation (including the standard library);\n applications should not expect to define additional names using\n this convention. The set of names of this class defined by Python\n may be extended in future versions. See section *Special method\n names*.\n\n``__*``\n Class-private names. Names in this category, when used within the\n context of a class definition, are re-written to use a mangled form\n to help avoid name clashes between "private" attributes of base and\n derived classes. See section *Identifiers (Names)*.\n', + 'id-classes': u'\nReserved classes of identifiers\n*******************************\n\nCertain classes of identifiers (besides keywords) have special\nmeanings. These classes are identified by the patterns of leading and\ntrailing underscore characters:\n\n``_*``\n Not imported by ``from module import *``. The special identifier\n ``_`` is used in the interactive interpreter to store the result of\n the last evaluation; it is stored in the ``__builtin__`` module.\n When not in interactive mode, ``_`` has no special meaning and is\n not defined. See section *The import statement*.\n\n Note: The name ``_`` is often used in conjunction with\n internationalization; refer to the documentation for the\n ``gettext`` module for more information on this convention.\n\n``__*__``\n System-defined names. These names are defined by the interpreter\n and its implementation (including the standard library). Current\n system names are discussed in the *Special method names* section\n and elsewhere. More will likely be defined in future versions of\n Python. *Any* use of ``__*__`` names, in any context, that does\n not follow explicitly documented use, is subject to breakage\n without warning.\n\n``__*``\n Class-private names. 
Names in this category, when used within the\n context of a class definition, are re-written to use a mangled form\n to help avoid name clashes between "private" attributes of base and\n derived classes. See section *Identifiers (Names)*.\n', + 'identifiers': u'\nIdentifiers and keywords\n************************\n\nIdentifiers (also referred to as *names*) are described by the\nfollowing lexical definitions:\n\n identifier ::= (letter|"_") (letter | digit | "_")*\n letter ::= lowercase | uppercase\n lowercase ::= "a"..."z"\n uppercase ::= "A"..."Z"\n digit ::= "0"..."9"\n\nIdentifiers are unlimited in length. Case is significant.\n\n\nKeywords\n========\n\nThe following identifiers are used as reserved words, or *keywords* of\nthe language, and cannot be used as ordinary identifiers. They must\nbe spelled exactly as written here:\n\n and del from not while\n as elif global or with\n assert else if pass yield\n break except import print\n class exec in raise\n continue finally is return\n def for lambda try\n\nChanged in version 2.4: ``None`` became a constant and is now\nrecognized by the compiler as a name for the built-in object ``None``.\nAlthough it is not a keyword, you cannot assign a different object to\nit.\n\nChanged in version 2.5: Both ``as`` and ``with`` are only recognized\nwhen the ``with_statement`` future feature has been enabled. It will\nalways be enabled in Python 2.6. See section *The with statement* for\ndetails. Note that using ``as`` and ``with`` as identifiers will\nalways issue a warning, even when the ``with_statement`` future\ndirective is not in effect.\n\n\nReserved classes of identifiers\n===============================\n\nCertain classes of identifiers (besides keywords) have special\nmeanings. These classes are identified by the patterns of leading and\ntrailing underscore characters:\n\n``_*``\n Not imported by ``from module import *``. The special identifier\n ``_`` is used in the interactive interpreter to store the result of\n the last evaluation; it is stored in the ``__builtin__`` module.\n When not in interactive mode, ``_`` has no special meaning and is\n not defined. See section *The import statement*.\n\n Note: The name ``_`` is often used in conjunction with\n internationalization; refer to the documentation for the\n ``gettext`` module for more information on this convention.\n\n``__*__``\n System-defined names. These names are defined by the interpreter\n and its implementation (including the standard library). Current\n system names are discussed in the *Special method names* section\n and elsewhere. More will likely be defined in future versions of\n Python. *Any* use of ``__*__`` names, in any context, that does\n not follow explicitly documented use, is subject to breakage\n without warning.\n\n``__*``\n Class-private names. Names in this category, when used within the\n context of a class definition, are re-written to use a mangled form\n to help avoid name clashes between "private" attributes of base and\n derived classes. 
See section *Identifiers (Names)*.\n', 'if': u'\nThe ``if`` statement\n********************\n\nThe ``if`` statement is used for conditional execution:\n\n if_stmt ::= "if" expression ":" suite\n ( "elif" expression ":" suite )*\n ["else" ":" suite]\n\nIt selects exactly one of the suites by evaluating the expressions one\nby one until one is found to be true (see section *Boolean operations*\nfor the definition of true and false); then that suite is executed\n(and no other part of the ``if`` statement is executed or evaluated).\nIf all expressions are false, the suite of the ``else`` clause, if\npresent, is executed.\n', 'imaginary': u'\nImaginary literals\n******************\n\nImaginary literals are described by the following lexical definitions:\n\n imagnumber ::= (floatnumber | intpart) ("j" | "J")\n\nAn imaginary literal yields a complex number with a real part of 0.0.\nComplex numbers are represented as a pair of floating point numbers\nand have the same restrictions on their range. To create a complex\nnumber with a nonzero real part, add a floating point number to it,\ne.g., ``(3+4j)``. Some examples of imaginary literals:\n\n 3.14j 10.j 10j .001j 1e100j 3.14e-10j\n', - 'import': u'\nThe ``import`` statement\n************************\n\n import_stmt ::= "import" module ["as" name] ( "," module ["as" name] )*\n | "from" relative_module "import" identifier ["as" name]\n ( "," identifier ["as" name] )*\n | "from" relative_module "import" "(" identifier ["as" name]\n ( "," identifier ["as" name] )* [","] ")"\n | "from" module "import" "*"\n module ::= (identifier ".")* identifier\n relative_module ::= "."* module | "."+\n name ::= identifier\n\nImport statements are executed in two steps: (1) find a module, and\ninitialize it if necessary; (2) define a name or names in the local\nnamespace (of the scope where the ``import`` statement occurs). The\nstatement comes in two forms differing on whether it uses the ``from``\nkeyword. The first form (without ``from``) repeats these steps for\neach identifier in the list. The form with ``from`` performs step (1)\nonce, and then performs step (2) repeatedly.\n\nTo understand how step (1) occurs, one must first understand how\nPython handles hierarchical naming of modules. To help organize\nmodules and provide a hierarchy in naming, Python has a concept of\npackages. A package can contain other packages and modules while\nmodules cannot contain other modules or packages. From a file system\nperspective, packages are directories and modules are files. The\noriginal specification for packages is still available to read,\nalthough minor details have changed since the writing of that\ndocument.\n\nOnce the name of the module is known (unless otherwise specified, the\nterm "module" will refer to both packages and modules), searching for\nthe module or package can begin. The first place checked is\n``sys.modules``, the cache of all modules that have been imported\npreviously. If the module is found there then it is used in step (2)\nof import.\n\nIf the module is not found in the cache, then ``sys.meta_path`` is\nsearched (the specification for ``sys.meta_path`` can be found in\n**PEP 302**). The object is a list of *finder* objects which are\nqueried in order as to whether they know how to load the module by\ncalling their ``find_module()`` method with the name of the module. 
If\nthe module happens to be contained within a package (as denoted by the\nexistence of a dot in the name), then a second argument to\n``find_module()`` is given as the value of the ``__path__`` attribute\nfrom the parent package (everything up to the last dot in the name of\nthe module being imported). If a finder can find the module it returns\na *loader* (discussed later) or returns ``None``.\n\nIf none of the finders on ``sys.meta_path`` are able to find the\nmodule then some implicitly defined finders are queried.\nImplementations of Python vary in what implicit meta path finders are\ndefined. The one they all do define, though, is one that handles\n``sys.path_hooks``, ``sys.path_importer_cache``, and ``sys.path``.\n\nThe implicit finder searches for the requested module in the "paths"\nspecified in one of two places ("paths" do not have to be file system\npaths). If the module being imported is supposed to be contained\nwithin a package then the second argument passed to ``find_module()``,\n``__path__`` on the parent package, is used as the source of paths. If\nthe module is not contained in a package then ``sys.path`` is used as\nthe source of paths.\n\nOnce the source of paths is chosen it is iterated over to find a\nfinder that can handle that path. The dict at\n``sys.path_importer_cache`` caches finders for paths and is checked\nfor a finder. If the path does not have a finder cached then\n``sys.path_hooks`` is searched by calling each object in the list with\na single argument of the path, returning a finder or raises\n``ImportError``. If a finder is returned then it is cached in\n``sys.path_importer_cache`` and then used for that path entry. If no\nfinder can be found but the path exists then a value of ``None`` is\nstored in ``sys.path_importer_cache`` to signify that an implicit,\nfile-based finder that handles modules stored as individual files\nshould be used for that path. If the path does not exist then a finder\nwhich always returns ``None`` is placed in the cache for the path.\n\nIf no finder can find the module then ``ImportError`` is raised.\nOtherwise some finder returned a loader whose ``load_module()`` method\nis called with the name of the module to load (see **PEP 302** for the\noriginal definition of loaders). A loader has several responsibilities\nto perform on a module it loads. First, if the module already exists\nin ``sys.modules`` (a possibility if the loader is called outside of\nthe import machinery) then it is to use that module for initialization\nand not a new module. But if the module does not exist in\n``sys.modules`` then it is to be added to that dict before\ninitialization begins. If an error occurs during loading of the module\nand it was added to ``sys.modules`` it is to be removed from the dict.\nIf an error occurs but the module was already in ``sys.modules`` it is\nleft in the dict.\n\nThe loader must set several attributes on the module. ``__name__`` is\nto be set to the name of the module. ``__file__`` is to be the "path"\nto the file unless the module is built-in (and thus listed in\n``sys.builtin_module_names``) in which case the attribute is not set.\nIf what is being imported is a package then ``__path__`` is to be set\nto a list of paths to be searched when looking for modules and\npackages contained within the package being imported. ``__package__``\nis optional but should be set to the name of package that contains the\nmodule or package (the empty string is used for module not contained\nin a package). 
``__loader__`` is also optional but should be set to\nthe loader object that is loading the module.\n\nIf an error occurs during loading then the loader raises\n``ImportError`` if some other exception is not already being\npropagated. Otherwise the loader returns the module that was loaded\nand initialized.\n\nWhen step (1) finishes without raising an exception, step (2) can\nbegin.\n\nThe first form of ``import`` statement binds the module name in the\nlocal namespace to the module object, and then goes on to import the\nnext identifier, if any. If the module name is followed by ``as``,\nthe name following ``as`` is used as the local name for the module.\n\nThe ``from`` form does not bind the module name: it goes through the\nlist of identifiers, looks each one of them up in the module found in\nstep (1), and binds the name in the local namespace to the object thus\nfound. As with the first form of ``import``, an alternate local name\ncan be supplied by specifying "``as`` localname". If a name is not\nfound, ``ImportError`` is raised. If the list of identifiers is\nreplaced by a star (``\'*\'``), all public names defined in the module\nare bound in the local namespace of the ``import`` statement..\n\nThe *public names* defined by a module are determined by checking the\nmodule\'s namespace for a variable named ``__all__``; if defined, it\nmust be a sequence of strings which are names defined or imported by\nthat module. The names given in ``__all__`` are all considered public\nand are required to exist. If ``__all__`` is not defined, the set of\npublic names includes all names found in the module\'s namespace which\ndo not begin with an underscore character (``\'_\'``). ``__all__``\nshould contain the entire public API. It is intended to avoid\naccidentally exporting items that are not part of the API (such as\nlibrary modules which were imported and used within the module).\n\nThe ``from`` form with ``*`` may only occur in a module scope. If the\nwild card form of import --- ``import *`` --- is used in a function\nand the function contains or is a nested block with free variables,\nthe compiler will raise a ``SyntaxError``.\n\nWhen specifying what module to import you do not have to specify the\nabsolute name of the module. When a module or package is contained\nwithin another package it is possible to make a relative import within\nthe same top package without having to mention the package name. By\nusing leading dots in the specified module or package after ``from``\nyou can specify how high to traverse up the current package hierarchy\nwithout specifying exact names. One leading dot means the current\npackage where the module making the import exists. Two dots means up\none package level. Three dots is up two levels, etc. So if you execute\n``from . import mod`` from a module in the ``pkg`` package then you\nwill end up importing ``pkg.mod``. If you execute ``from ..subpkg2\nimprt mod`` from within ``pkg.subpkg1`` you will import\n``pkg.subpkg2.mod``. The specification for relative imports is\ncontained within **PEP 328**.\n\n``importlib.import_module()`` is provided to support applications that\ndetermine which modules need to be loaded dynamically.\n\n\nFuture statements\n=================\n\nA *future statement* is a directive to the compiler that a particular\nmodule should be compiled using syntax or semantics that will be\navailable in a specified future release of Python. 
The future\nstatement is intended to ease migration to future versions of Python\nthat introduce incompatible changes to the language. It allows use of\nthe new features on a per-module basis before the release in which the\nfeature becomes standard.\n\n future_statement ::= "from" "__future__" "import" feature ["as" name]\n ("," feature ["as" name])*\n | "from" "__future__" "import" "(" feature ["as" name]\n ("," feature ["as" name])* [","] ")"\n feature ::= identifier\n name ::= identifier\n\nA future statement must appear near the top of the module. The only\nlines that can appear before a future statement are:\n\n* the module docstring (if any),\n\n* comments,\n\n* blank lines, and\n\n* other future statements.\n\nThe features recognized by Python 2.6 are ``unicode_literals``,\n``print_function``, ``absolute_import``, ``division``, ``generators``,\n``nested_scopes`` and ``with_statement``. ``generators``,\n``with_statement``, ``nested_scopes`` are redundant in Python version\n2.6 and above because they are always enabled.\n\nA future statement is recognized and treated specially at compile\ntime: Changes to the semantics of core constructs are often\nimplemented by generating different code. It may even be the case\nthat a new feature introduces new incompatible syntax (such as a new\nreserved word), in which case the compiler may need to parse the\nmodule differently. Such decisions cannot be pushed off until\nruntime.\n\nFor any given release, the compiler knows which feature names have\nbeen defined, and raises a compile-time error if a future statement\ncontains a feature not known to it.\n\nThe direct runtime semantics are the same as for any import statement:\nthere is a standard module ``__future__``, described later, and it\nwill be imported in the usual way at the time the future statement is\nexecuted.\n\nThe interesting runtime semantics depend on the specific feature\nenabled by the future statement.\n\nNote that there is nothing special about the statement:\n\n import __future__ [as name]\n\nThat is not a future statement; it\'s an ordinary import statement with\nno special semantics or syntax restrictions.\n\nCode compiled by an ``exec`` statement or calls to the built-in\nfunctions ``compile()`` and ``execfile()`` that occur in a module\n``M`` containing a future statement will, by default, use the new\nsyntax or semantics associated with the future statement. This can,\nstarting with Python 2.2 be controlled by optional arguments to\n``compile()`` --- see the documentation of that function for details.\n\nA future statement typed at an interactive interpreter prompt will\ntake effect for the rest of the interpreter session. 
If an\ninterpreter is started with the *-i* option, is passed a script name\nto execute, and the script includes a future statement, it will be in\neffect in the interactive session started after the script is\nexecuted.\n\nSee also:\n\n **PEP 236** - Back to the __future__\n The original proposal for the __future__ mechanism.\n', + 'import': u'\nThe ``import`` statement\n************************\n\n import_stmt ::= "import" module ["as" name] ( "," module ["as" name] )*\n | "from" relative_module "import" identifier ["as" name]\n ( "," identifier ["as" name] )*\n | "from" relative_module "import" "(" identifier ["as" name]\n ( "," identifier ["as" name] )* [","] ")"\n | "from" module "import" "*"\n module ::= (identifier ".")* identifier\n relative_module ::= "."* module | "."+\n name ::= identifier\n\nImport statements are executed in two steps: (1) find a module, and\ninitialize it if necessary; (2) define a name or names in the local\nnamespace (of the scope where the ``import`` statement occurs). The\nstatement comes in two forms differing on whether it uses the ``from``\nkeyword. The first form (without ``from``) repeats these steps for\neach identifier in the list. The form with ``from`` performs step (1)\nonce, and then performs step (2) repeatedly.\n\nTo understand how step (1) occurs, one must first understand how\nPython handles hierarchical naming of modules. To help organize\nmodules and provide a hierarchy in naming, Python has a concept of\npackages. A package can contain other packages and modules while\nmodules cannot contain other modules or packages. From a file system\nperspective, packages are directories and modules are files. The\noriginal specification for packages is still available to read,\nalthough minor details have changed since the writing of that\ndocument.\n\nOnce the name of the module is known (unless otherwise specified, the\nterm "module" will refer to both packages and modules), searching for\nthe module or package can begin. The first place checked is\n``sys.modules``, the cache of all modules that have been imported\npreviously. If the module is found there then it is used in step (2)\nof import.\n\nIf the module is not found in the cache, then ``sys.meta_path`` is\nsearched (the specification for ``sys.meta_path`` can be found in\n**PEP 302**). The object is a list of *finder* objects which are\nqueried in order as to whether they know how to load the module by\ncalling their ``find_module()`` method with the name of the module. If\nthe module happens to be contained within a package (as denoted by the\nexistence of a dot in the name), then a second argument to\n``find_module()`` is given as the value of the ``__path__`` attribute\nfrom the parent package (everything up to the last dot in the name of\nthe module being imported). If a finder can find the module it returns\na *loader* (discussed later) or returns ``None``.\n\nIf none of the finders on ``sys.meta_path`` are able to find the\nmodule then some implicitly defined finders are queried.\nImplementations of Python vary in what implicit meta path finders are\ndefined. The one they all do define, though, is one that handles\n``sys.path_hooks``, ``sys.path_importer_cache``, and ``sys.path``.\n\nThe implicit finder searches for the requested module in the "paths"\nspecified in one of two places ("paths" do not have to be file system\npaths). 
If the module being imported is supposed to be contained\nwithin a package then the second argument passed to ``find_module()``,\n``__path__`` on the parent package, is used as the source of paths. If\nthe module is not contained in a package then ``sys.path`` is used as\nthe source of paths.\n\nOnce the source of paths is chosen it is iterated over to find a\nfinder that can handle that path. The dict at\n``sys.path_importer_cache`` caches finders for paths and is checked\nfor a finder. If the path does not have a finder cached then\n``sys.path_hooks`` is searched by calling each object in the list with\na single argument of the path, returning a finder or raises\n``ImportError``. If a finder is returned then it is cached in\n``sys.path_importer_cache`` and then used for that path entry. If no\nfinder can be found but the path exists then a value of ``None`` is\nstored in ``sys.path_importer_cache`` to signify that an implicit,\nfile-based finder that handles modules stored as individual files\nshould be used for that path. If the path does not exist then a finder\nwhich always returns ``None`` is placed in the cache for the path.\n\nIf no finder can find the module then ``ImportError`` is raised.\nOtherwise some finder returned a loader whose ``load_module()`` method\nis called with the name of the module to load (see **PEP 302** for the\noriginal definition of loaders). A loader has several responsibilities\nto perform on a module it loads. First, if the module already exists\nin ``sys.modules`` (a possibility if the loader is called outside of\nthe import machinery) then it is to use that module for initialization\nand not a new module. But if the module does not exist in\n``sys.modules`` then it is to be added to that dict before\ninitialization begins. If an error occurs during loading of the module\nand it was added to ``sys.modules`` it is to be removed from the dict.\nIf an error occurs but the module was already in ``sys.modules`` it is\nleft in the dict.\n\nThe loader must set several attributes on the module. ``__name__`` is\nto be set to the name of the module. ``__file__`` is to be the "path"\nto the file unless the module is built-in (and thus listed in\n``sys.builtin_module_names``) in which case the attribute is not set.\nIf what is being imported is a package then ``__path__`` is to be set\nto a list of paths to be searched when looking for modules and\npackages contained within the package being imported. ``__package__``\nis optional but should be set to the name of package that contains the\nmodule or package (the empty string is used for module not contained\nin a package). ``__loader__`` is also optional but should be set to\nthe loader object that is loading the module.\n\nIf an error occurs during loading then the loader raises\n``ImportError`` if some other exception is not already being\npropagated. Otherwise the loader returns the module that was loaded\nand initialized.\n\nWhen step (1) finishes without raising an exception, step (2) can\nbegin.\n\nThe first form of ``import`` statement binds the module name in the\nlocal namespace to the module object, and then goes on to import the\nnext identifier, if any. If the module name is followed by ``as``,\nthe name following ``as`` is used as the local name for the module.\n\nThe ``from`` form does not bind the module name: it goes through the\nlist of identifiers, looks each one of them up in the module found in\nstep (1), and binds the name in the local namespace to the object thus\nfound. 
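
The difference between the two binding forms of step (2) can be checked directly. A small sketch, assuming the standard ``os`` package is available and the code runs in a fresh module:

    import os.path as osp        # binds only the name 'osp'
    from os import path, getcwd  # binds 'path' and 'getcwd'; 'os' itself is not bound
    import sys

    print osp is path            # True  -- both names refer to the same module object
    print 'os' in dir()          # False -- no local name 'os' was created
    print 'os' in sys.modules    # True  -- the package was still found, initialized and cached

Either form causes the ``os`` package to be found and initialized (step (1)); they differ only in which names step (2) binds in the local namespace.
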
As with the first form of ``import``, an alternate local name\ncan be supplied by specifying "``as`` localname". If a name is not\nfound, ``ImportError`` is raised. If the list of identifiers is\nreplaced by a star (``\'*\'``), all public names defined in the module\nare bound in the local namespace of the ``import`` statement..\n\nThe *public names* defined by a module are determined by checking the\nmodule\'s namespace for a variable named ``__all__``; if defined, it\nmust be a sequence of strings which are names defined or imported by\nthat module. The names given in ``__all__`` are all considered public\nand are required to exist. If ``__all__`` is not defined, the set of\npublic names includes all names found in the module\'s namespace which\ndo not begin with an underscore character (``\'_\'``). ``__all__``\nshould contain the entire public API. It is intended to avoid\naccidentally exporting items that are not part of the API (such as\nlibrary modules which were imported and used within the module).\n\nThe ``from`` form with ``*`` may only occur in a module scope. If the\nwild card form of import --- ``import *`` --- is used in a function\nand the function contains or is a nested block with free variables,\nthe compiler will raise a ``SyntaxError``.\n\nWhen specifying what module to import you do not have to specify the\nabsolute name of the module. When a module or package is contained\nwithin another package it is possible to make a relative import within\nthe same top package without having to mention the package name. By\nusing leading dots in the specified module or package after ``from``\nyou can specify how high to traverse up the current package hierarchy\nwithout specifying exact names. One leading dot means the current\npackage where the module making the import exists. Two dots means up\none package level. Three dots is up two levels, etc. So if you execute\n``from . import mod`` from a module in the ``pkg`` package then you\nwill end up importing ``pkg.mod``. If you execute ``from ..subpkg2\nimport mod`` from within ``pkg.subpkg1`` you will import\n``pkg.subpkg2.mod``. The specification for relative imports is\ncontained within **PEP 328**.\n\n``importlib.import_module()`` is provided to support applications that\ndetermine which modules need to be loaded dynamically.\n\n\nFuture statements\n=================\n\nA *future statement* is a directive to the compiler that a particular\nmodule should be compiled using syntax or semantics that will be\navailable in a specified future release of Python. The future\nstatement is intended to ease migration to future versions of Python\nthat introduce incompatible changes to the language. It allows use of\nthe new features on a per-module basis before the release in which the\nfeature becomes standard.\n\n future_statement ::= "from" "__future__" "import" feature ["as" name]\n ("," feature ["as" name])*\n | "from" "__future__" "import" "(" feature ["as" name]\n ("," feature ["as" name])* [","] ")"\n feature ::= identifier\n name ::= identifier\n\nA future statement must appear near the top of the module. The only\nlines that can appear before a future statement are:\n\n* the module docstring (if any),\n\n* comments,\n\n* blank lines, and\n\n* other future statements.\n\nThe features recognized by Python 2.6 are ``unicode_literals``,\n``print_function``, ``absolute_import``, ``division``, ``generators``,\n``nested_scopes`` and ``with_statement``. 
``generators``,\n``with_statement``, ``nested_scopes`` are redundant in Python version\n2.6 and above because they are always enabled.\n\nA future statement is recognized and treated specially at compile\ntime: Changes to the semantics of core constructs are often\nimplemented by generating different code. It may even be the case\nthat a new feature introduces new incompatible syntax (such as a new\nreserved word), in which case the compiler may need to parse the\nmodule differently. Such decisions cannot be pushed off until\nruntime.\n\nFor any given release, the compiler knows which feature names have\nbeen defined, and raises a compile-time error if a future statement\ncontains a feature not known to it.\n\nThe direct runtime semantics are the same as for any import statement:\nthere is a standard module ``__future__``, described later, and it\nwill be imported in the usual way at the time the future statement is\nexecuted.\n\nThe interesting runtime semantics depend on the specific feature\nenabled by the future statement.\n\nNote that there is nothing special about the statement:\n\n import __future__ [as name]\n\nThat is not a future statement; it\'s an ordinary import statement with\nno special semantics or syntax restrictions.\n\nCode compiled by an ``exec`` statement or calls to the built-in\nfunctions ``compile()`` and ``execfile()`` that occur in a module\n``M`` containing a future statement will, by default, use the new\nsyntax or semantics associated with the future statement. This can,\nstarting with Python 2.2 be controlled by optional arguments to\n``compile()`` --- see the documentation of that function for details.\n\nA future statement typed at an interactive interpreter prompt will\ntake effect for the rest of the interpreter session. If an\ninterpreter is started with the *-i* option, is passed a script name\nto execute, and the script includes a future statement, it will be in\neffect in the interactive session started after the script is\nexecuted.\n\nSee also:\n\n **PEP 236** - Back to the __future__\n The original proposal for the __future__ mechanism.\n', 'in': u'\nComparisons\n***********\n\nUnlike C, all comparison operations in Python have the same priority,\nwhich is lower than that of any arithmetic, shifting or bitwise\noperation. Also unlike C, expressions like ``a < b < c`` have the\ninterpretation that is conventional in mathematics:\n\n comparison ::= or_expr ( comp_operator or_expr )*\n comp_operator ::= "<" | ">" | "==" | ">=" | "<=" | "<>" | "!="\n | "is" ["not"] | ["not"] "in"\n\nComparisons yield boolean values: ``True`` or ``False``.\n\nComparisons can be chained arbitrarily, e.g., ``x < y <= z`` is\nequivalent to ``x < y and y <= z``, except that ``y`` is evaluated\nonly once (but in both cases ``z`` is not evaluated at all when ``x <\ny`` is found to be false).\n\nFormally, if *a*, *b*, *c*, ..., *y*, *z* are expressions and *op1*,\n*op2*, ..., *opN* are comparison operators, then ``a op1 b op2 c ... y\nopN z`` is equivalent to ``a op1 b and b op2 c and ... y opN z``,\nexcept that each expression is evaluated at most once.\n\nNote that ``a op1 b op2 c`` doesn\'t imply any kind of comparison\nbetween *a* and *c*, so that, e.g., ``x < y > z`` is perfectly legal\n(though perhaps not pretty).\n\nThe forms ``<>`` and ``!=`` are equivalent; for consistency with C,\n``!=`` is preferred; where ``!=`` is mentioned below ``<>`` is also\naccepted. 
The ``<>`` spelling is considered obsolescent.\n\nThe operators ``<``, ``>``, ``==``, ``>=``, ``<=``, and ``!=`` compare\nthe values of two objects. The objects need not have the same type.\nIf both are numbers, they are converted to a common type. Otherwise,\nobjects of different types *always* compare unequal, and are ordered\nconsistently but arbitrarily. You can control comparison behavior of\nobjects of non-built-in types by defining a ``__cmp__`` method or rich\ncomparison methods like ``__gt__``, described in section *Special\nmethod names*.\n\n(This unusual definition of comparison was used to simplify the\ndefinition of operations like sorting and the ``in`` and ``not in``\noperators. In the future, the comparison rules for objects of\ndifferent types are likely to change.)\n\nComparison of objects of the same type depends on the type:\n\n* Numbers are compared arithmetically.\n\n* Strings are compared lexicographically using the numeric equivalents\n (the result of the built-in function ``ord()``) of their characters.\n Unicode and 8-bit strings are fully interoperable in this behavior.\n [4]\n\n* Tuples and lists are compared lexicographically using comparison of\n corresponding elements. This means that to compare equal, each\n element must compare equal and the two sequences must be of the same\n type and have the same length.\n\n If not equal, the sequences are ordered the same as their first\n differing elements. For example, ``cmp([1,2,x], [1,2,y])`` returns\n the same as ``cmp(x,y)``. If the corresponding element does not\n exist, the shorter sequence is ordered first (for example, ``[1,2] <\n [1,2,3]``).\n\n* Mappings (dictionaries) compare equal if and only if their sorted\n (key, value) lists compare equal. [5] Outcomes other than equality\n are resolved consistently, but are not otherwise defined. [6]\n\n* Most other objects of built-in types compare unequal unless they are\n the same object; the choice whether one object is considered smaller\n or larger than another one is made arbitrarily but consistently\n within one execution of a program.\n\nThe operators ``in`` and ``not in`` test for collection membership.\n``x in s`` evaluates to true if *x* is a member of the collection *s*,\nand false otherwise. ``x not in s`` returns the negation of ``x in\ns``. The collection membership test has traditionally been bound to\nsequences; an object is a member of a collection if the collection is\na sequence and contains an element equal to that object. However, it\nmake sense for many other object types to support membership tests\nwithout being a sequence. In particular, dictionaries (for keys) and\nsets support membership testing.\n\nFor the list and tuple types, ``x in y`` is true if and only if there\nexists an index *i* such that ``x == y[i]`` is true.\n\nFor the Unicode and string types, ``x in y`` is true if and only if\n*x* is a substring of *y*. An equivalent test is ``y.find(x) != -1``.\nNote, *x* and *y* need not be the same type; consequently, ``u\'ab\' in\n\'abc\'`` will return ``True``. 
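
Membership tests are not limited to sequences; any object can support them through the protocol described just below. A small sketch, assuming Python 2.x (the class name ``EvenNumbers`` is invented for illustration):

    d = {'spam': 1}
    print 'spam' in d            # True  -- dictionaries test their keys
    print u'ab' in 'abc'         # True  -- substring test; str and unicode interoperate

    class EvenNumbers(object):
        # membership via __contains__(), without being a sequence
        def __contains__(self, n):
            return isinstance(n, (int, long)) and n % 2 == 0

    print 4 in EvenNumbers()     # True
    print 7 not in EvenNumbers() # True -- 'not in' is simply the negation of 'in'
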
Empty strings are always considered to\nbe a substring of any other string, so ``"" in "abc"`` will return\n``True``.\n\nChanged in version 2.3: Previously, *x* was required to be a string of\nlength ``1``.\n\nFor user-defined classes which define the ``__contains__()`` method,\n``x in y`` is true if and only if ``y.__contains__(x)`` is true.\n\nFor user-defined classes which do not define ``__contains__()`` but do\ndefine ``__iter__()``, ``x in y`` is true if some value ``z`` with ``x\n== z`` is produced while iterating over ``y``. If an exception is\nraised during the iteration, it is as if ``in`` raised that exception.\n\nLastly, the old-style iteration protocol is tried: if a class defines\n``__getitem__()``, ``x in y`` is true if and only if there is a non-\nnegative integer index *i* such that ``x == y[i]``, and all lower\ninteger indices do not raise ``IndexError`` exception. (If any other\nexception is raised, it is as if ``in`` raised that exception).\n\nThe operator ``not in`` is defined to have the inverse true value of\n``in``.\n\nThe operators ``is`` and ``is not`` test for object identity: ``x is\ny`` is true if and only if *x* and *y* are the same object. ``x is\nnot y`` yields the inverse truth value. [7]\n', 'integers': u'\nInteger and long integer literals\n*********************************\n\nInteger and long integer literals are described by the following\nlexical definitions:\n\n longinteger ::= integer ("l" | "L")\n integer ::= decimalinteger | octinteger | hexinteger | bininteger\n decimalinteger ::= nonzerodigit digit* | "0"\n octinteger ::= "0" ("o" | "O") octdigit+ | "0" octdigit+\n hexinteger ::= "0" ("x" | "X") hexdigit+\n bininteger ::= "0" ("b" | "B") bindigit+\n nonzerodigit ::= "1"..."9"\n octdigit ::= "0"..."7"\n bindigit ::= "0" | "1"\n hexdigit ::= digit | "a"..."f" | "A"..."F"\n\nAlthough both lower case ``\'l\'`` and upper case ``\'L\'`` are allowed as\nsuffix for long integers, it is strongly recommended to always use\n``\'L\'``, since the letter ``\'l\'`` looks too much like the digit\n``\'1\'``.\n\nPlain integer literals that are above the largest representable plain\ninteger (e.g., 2147483647 when using 32-bit arithmetic) are accepted\nas if they were long integers instead. [1] There is no limit for long\ninteger literals apart from what can be stored in available memory.\n\nSome examples of plain integer literals (first row) and long integer\nliterals (second and third rows):\n\n 7 2147483647 0177\n 3L 79228162514264337593543950336L 0377L 0x100000000L\n 79228162514264337593543950336 0xdeadbeef\n', 'lambda': u'\nLambdas\n*******\n\n lambda_form ::= "lambda" [parameter_list]: expression\n old_lambda_form ::= "lambda" [parameter_list]: old_expression\n\nLambda forms (lambda expressions) have the same syntactic position as\nexpressions. 
They are a shorthand to create anonymous functions; the\nexpression ``lambda arguments: expression`` yields a function object.\nThe unnamed object behaves like a function object defined with\n\n def name(arguments):\n return expression\n\nSee section *Function definitions* for the syntax of parameter lists.\nNote that functions created with lambda forms cannot contain\nstatements.\n', 'lists': u'\nList displays\n*************\n\nA list display is a possibly empty series of expressions enclosed in\nsquare brackets:\n\n list_display ::= "[" [expression_list | list_comprehension] "]"\n list_comprehension ::= expression list_for\n list_for ::= "for" target_list "in" old_expression_list [list_iter]\n old_expression_list ::= old_expression [("," old_expression)+ [","]]\n old_expression ::= or_test | old_lambda_form\n list_iter ::= list_for | list_if\n list_if ::= "if" old_expression [list_iter]\n\nA list display yields a new list object. Its contents are specified\nby providing either a list of expressions or a list comprehension.\nWhen a comma-separated list of expressions is supplied, its elements\nare evaluated from left to right and placed into the list object in\nthat order. When a list comprehension is supplied, it consists of a\nsingle expression followed by at least one ``for`` clause and zero or\nmore ``for`` or ``if`` clauses. In this case, the elements of the new\nlist are those that would be produced by considering each of the\n``for`` or ``if`` clauses a block, nesting from left to right, and\nevaluating the expression to produce a list element each time the\ninnermost block is reached [1].\n', - 'naming': u"\nNaming and binding\n******************\n\n*Names* refer to objects. Names are introduced by name binding\noperations. Each occurrence of a name in the program text refers to\nthe *binding* of that name established in the innermost function block\ncontaining the use.\n\nA *block* is a piece of Python program text that is executed as a\nunit. The following are blocks: a module, a function body, and a class\ndefinition. Each command typed interactively is a block. A script\nfile (a file given as standard input to the interpreter or specified\non the interpreter command line the first argument) is a code block.\nA script command (a command specified on the interpreter command line\nwith the '**-c**' option) is a code block. The file read by the\nbuilt-in function ``execfile()`` is a code block. The string argument\npassed to the built-in function ``eval()`` and to the ``exec``\nstatement is a code block. The expression read and evaluated by the\nbuilt-in function ``input()`` is a code block.\n\nA code block is executed in an *execution frame*. A frame contains\nsome administrative information (used for debugging) and determines\nwhere and how execution continues after the code block's execution has\ncompleted.\n\nA *scope* defines the visibility of a name within a block. If a local\nvariable is defined in a block, its scope includes that block. If the\ndefinition occurs in a function block, the scope extends to any blocks\ncontained within the defining one, unless a contained block introduces\na different binding for the name. The scope of names defined in a\nclass block is limited to the class block; it does not extend to the\ncode blocks of methods -- this includes generator expressions since\nthey are implemented using a function scope. 
This means that the\nfollowing will fail:\n\n class A:\n a = 42\n b = list(a + i for i in range(10))\n\nWhen a name is used in a code block, it is resolved using the nearest\nenclosing scope. The set of all such scopes visible to a code block\nis called the block's *environment*.\n\nIf a name is bound in a block, it is a local variable of that block.\nIf a name is bound at the module level, it is a global variable. (The\nvariables of the module code block are local and global.) If a\nvariable is used in a code block but not defined there, it is a *free\nvariable*.\n\nWhen a name is not found at all, a ``NameError`` exception is raised.\nIf the name refers to a local variable that has not been bound, a\n``UnboundLocalError`` exception is raised. ``UnboundLocalError`` is a\nsubclass of ``NameError``.\n\nThe following constructs bind names: formal parameters to functions,\n``import`` statements, class and function definitions (these bind the\nclass or function name in the defining block), and targets that are\nidentifiers if occurring in an assignment, ``for`` loop header, in the\nsecond position of an ``except`` clause header or after ``as`` in a\n``with`` statement. The ``import`` statement of the form ``from ...\nimport *`` binds all names defined in the imported module, except\nthose beginning with an underscore. This form may only be used at the\nmodule level.\n\nA target occurring in a ``del`` statement is also considered bound for\nthis purpose (though the actual semantics are to unbind the name). It\nis illegal to unbind a name that is referenced by an enclosing scope;\nthe compiler will report a ``SyntaxError``.\n\nEach assignment or import statement occurs within a block defined by a\nclass or function definition or at the module level (the top-level\ncode block).\n\nIf a name binding operation occurs anywhere within a code block, all\nuses of the name within the block are treated as references to the\ncurrent block. This can lead to errors when a name is used within a\nblock before it is bound. This rule is subtle. Python lacks\ndeclarations and allows name binding operations to occur anywhere\nwithin a code block. The local variables of a code block can be\ndetermined by scanning the entire text of the block for name binding\noperations.\n\nIf the global statement occurs within a block, all uses of the name\nspecified in the statement refer to the binding of that name in the\ntop-level namespace. Names are resolved in the top-level namespace by\nsearching the global namespace, i.e. the namespace of the module\ncontaining the code block, and the builtins namespace, the namespace\nof the module ``__builtin__``. The global namespace is searched\nfirst. If the name is not found there, the builtins namespace is\nsearched. The global statement must precede all uses of the name.\n\nThe builtins namespace associated with the execution of a code block\nis actually found by looking up the name ``__builtins__`` in its\nglobal namespace; this should be a dictionary or a module (in the\nlatter case the module's dictionary is used). By default, when in the\n``__main__`` module, ``__builtins__`` is the built-in module\n``__builtin__`` (note: no 's'); when in any other module,\n``__builtins__`` is an alias for the dictionary of the ``__builtin__``\nmodule itself. ``__builtins__`` can be set to a user-created\ndictionary to create a weak form of restricted execution.\n\n**CPython implementation detail:** Users should not touch\n``__builtins__``; it is strictly an implementation detail. 
Users\nwanting to override values in the builtins namespace should ``import``\nthe ``__builtin__`` (no 's') module and modify its attributes\nappropriately.\n\nThe namespace for a module is automatically created the first time a\nmodule is imported. The main module for a script is always called\n``__main__``.\n\nThe global statement has the same scope as a name binding operation in\nthe same block. If the nearest enclosing scope for a free variable\ncontains a global statement, the free variable is treated as a global.\n\nA class definition is an executable statement that may use and define\nnames. These references follow the normal rules for name resolution.\nThe namespace of the class definition becomes the attribute dictionary\nof the class. Names defined at the class scope are not visible in\nmethods.\n\n\nInteraction with dynamic features\n=================================\n\nThere are several cases where Python statements are illegal when used\nin conjunction with nested scopes that contain free variables.\n\nIf a variable is referenced in an enclosing scope, it is illegal to\ndelete the name. An error will be reported at compile time.\n\nIf the wild card form of import --- ``import *`` --- is used in a\nfunction and the function contains or is a nested block with free\nvariables, the compiler will raise a ``SyntaxError``.\n\nIf ``exec`` is used in a function and the function contains or is a\nnested block with free variables, the compiler will raise a\n``SyntaxError`` unless the exec explicitly specifies the local\nnamespace for the ``exec``. (In other words, ``exec obj`` would be\nillegal, but ``exec obj in ns`` would be legal.)\n\nThe ``eval()``, ``execfile()``, and ``input()`` functions and the\n``exec`` statement do not have access to the full environment for\nresolving names. Names may be resolved in the local and global\nnamespaces of the caller. Free variables are not resolved in the\nnearest enclosing namespace, but in the global namespace. [1] The\n``exec`` statement and the ``eval()`` and ``execfile()`` functions\nhave optional arguments to override the global and local namespace.\nIf only one namespace is specified, it is used for both.\n", + 'naming': u"\nNaming and binding\n******************\n\n*Names* refer to objects. Names are introduced by name binding\noperations. Each occurrence of a name in the program text refers to\nthe *binding* of that name established in the innermost function block\ncontaining the use.\n\nA *block* is a piece of Python program text that is executed as a\nunit. The following are blocks: a module, a function body, and a class\ndefinition. Each command typed interactively is a block. A script\nfile (a file given as standard input to the interpreter or specified\non the interpreter command line the first argument) is a code block.\nA script command (a command specified on the interpreter command line\nwith the '**-c**' option) is a code block. The file read by the\nbuilt-in function ``execfile()`` is a code block. The string argument\npassed to the built-in function ``eval()`` and to the ``exec``\nstatement is a code block. The expression read and evaluated by the\nbuilt-in function ``input()`` is a code block.\n\nA code block is executed in an *execution frame*. A frame contains\nsome administrative information (used for debugging) and determines\nwhere and how execution continues after the code block's execution has\ncompleted.\n\nA *scope* defines the visibility of a name within a block. 
If a local\nvariable is defined in a block, its scope includes that block. If the\ndefinition occurs in a function block, the scope extends to any blocks\ncontained within the defining one, unless a contained block introduces\na different binding for the name. The scope of names defined in a\nclass block is limited to the class block; it does not extend to the\ncode blocks of methods -- this includes generator expressions since\nthey are implemented using a function scope. This means that the\nfollowing will fail:\n\n class A:\n a = 42\n b = list(a + i for i in range(10))\n\nWhen a name is used in a code block, it is resolved using the nearest\nenclosing scope. The set of all such scopes visible to a code block\nis called the block's *environment*.\n\nIf a name is bound in a block, it is a local variable of that block.\nIf a name is bound at the module level, it is a global variable. (The\nvariables of the module code block are local and global.) If a\nvariable is used in a code block but not defined there, it is a *free\nvariable*.\n\nWhen a name is not found at all, a ``NameError`` exception is raised.\nIf the name refers to a local variable that has not been bound, a\n``UnboundLocalError`` exception is raised. ``UnboundLocalError`` is a\nsubclass of ``NameError``.\n\nThe following constructs bind names: formal parameters to functions,\n``import`` statements, class and function definitions (these bind the\nclass or function name in the defining block), and targets that are\nidentifiers if occurring in an assignment, ``for`` loop header, in the\nsecond position of an ``except`` clause header or after ``as`` in a\n``with`` statement. The ``import`` statement of the form ``from ...\nimport *`` binds all names defined in the imported module, except\nthose beginning with an underscore. This form may only be used at the\nmodule level.\n\nA target occurring in a ``del`` statement is also considered bound for\nthis purpose (though the actual semantics are to unbind the name). It\nis illegal to unbind a name that is referenced by an enclosing scope;\nthe compiler will report a ``SyntaxError``.\n\nEach assignment or import statement occurs within a block defined by a\nclass or function definition or at the module level (the top-level\ncode block).\n\nIf a name binding operation occurs anywhere within a code block, all\nuses of the name within the block are treated as references to the\ncurrent block. This can lead to errors when a name is used within a\nblock before it is bound. This rule is subtle. Python lacks\ndeclarations and allows name binding operations to occur anywhere\nwithin a code block. The local variables of a code block can be\ndetermined by scanning the entire text of the block for name binding\noperations.\n\nIf the global statement occurs within a block, all uses of the name\nspecified in the statement refer to the binding of that name in the\ntop-level namespace. Names are resolved in the top-level namespace by\nsearching the global namespace, i.e. the namespace of the module\ncontaining the code block, and the builtins namespace, the namespace\nof the module ``__builtin__``. The global namespace is searched\nfirst. If the name is not found there, the builtins namespace is\nsearched. 
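
The effect of the global statement on name resolution can be seen in a short sketch (assumes the code runs as a module or script; the names ``counter`` and ``bump`` are invented):

    counter = 0

    def bump():
        global counter      # 'counter' now refers to the top-level binding
        counter += 1        # without the global statement this line would raise
                            # UnboundLocalError, because 'counter' would be local

    bump()
    bump()
    print counter           # 2
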
The global statement must precede all uses of the name.\n\nThe builtins namespace associated with the execution of a code block\nis actually found by looking up the name ``__builtins__`` in its\nglobal namespace; this should be a dictionary or a module (in the\nlatter case the module's dictionary is used). By default, when in the\n``__main__`` module, ``__builtins__`` is the built-in module\n``__builtin__`` (note: no 's'); when in any other module,\n``__builtins__`` is an alias for the dictionary of the ``__builtin__``\nmodule itself. ``__builtins__`` can be set to a user-created\ndictionary to create a weak form of restricted execution.\n\n**CPython implementation detail:** Users should not touch\n``__builtins__``; it is strictly an implementation detail. Users\nwanting to override values in the builtins namespace should ``import``\nthe ``__builtin__`` (no 's') module and modify its attributes\nappropriately.\n\nThe namespace for a module is automatically created the first time a\nmodule is imported. The main module for a script is always called\n``__main__``.\n\nThe ``global`` statement has the same scope as a name binding\noperation in the same block. If the nearest enclosing scope for a\nfree variable contains a global statement, the free variable is\ntreated as a global.\n\nA class definition is an executable statement that may use and define\nnames. These references follow the normal rules for name resolution.\nThe namespace of the class definition becomes the attribute dictionary\nof the class. Names defined at the class scope are not visible in\nmethods.\n\n\nInteraction with dynamic features\n=================================\n\nThere are several cases where Python statements are illegal when used\nin conjunction with nested scopes that contain free variables.\n\nIf a variable is referenced in an enclosing scope, it is illegal to\ndelete the name. An error will be reported at compile time.\n\nIf the wild card form of import --- ``import *`` --- is used in a\nfunction and the function contains or is a nested block with free\nvariables, the compiler will raise a ``SyntaxError``.\n\nIf ``exec`` is used in a function and the function contains or is a\nnested block with free variables, the compiler will raise a\n``SyntaxError`` unless the exec explicitly specifies the local\nnamespace for the ``exec``. (In other words, ``exec obj`` would be\nillegal, but ``exec obj in ns`` would be legal.)\n\nThe ``eval()``, ``execfile()``, and ``input()`` functions and the\n``exec`` statement do not have access to the full environment for\nresolving names. Names may be resolved in the local and global\nnamespaces of the caller. Free variables are not resolved in the\nnearest enclosing namespace, but in the global namespace. [1] The\n``exec`` statement and the ``eval()`` and ``execfile()`` functions\nhave optional arguments to override the global and local namespace.\nIf only one namespace is specified, it is used for both.\n", 'numbers': u"\nNumeric literals\n****************\n\nThere are four types of numeric literals: plain integers, long\nintegers, floating point numbers, and imaginary numbers. 
There are no\ncomplex literals (complex numbers can be formed by adding a real\nnumber and an imaginary number).\n\nNote that numeric literals do not include a sign; a phrase like ``-1``\nis actually an expression composed of the unary operator '``-``' and\nthe literal ``1``.\n", 'numeric-types': u'\nEmulating numeric types\n***********************\n\nThe following methods can be defined to emulate numeric objects.\nMethods corresponding to operations that are not supported by the\nparticular kind of number implemented (e.g., bitwise operations for\nnon-integral numbers) should be left undefined.\n\nobject.__add__(self, other)\nobject.__sub__(self, other)\nobject.__mul__(self, other)\nobject.__floordiv__(self, other)\nobject.__mod__(self, other)\nobject.__divmod__(self, other)\nobject.__pow__(self, other[, modulo])\nobject.__lshift__(self, other)\nobject.__rshift__(self, other)\nobject.__and__(self, other)\nobject.__xor__(self, other)\nobject.__or__(self, other)\n\n These methods are called to implement the binary arithmetic\n operations (``+``, ``-``, ``*``, ``//``, ``%``, ``divmod()``,\n ``pow()``, ``**``, ``<<``, ``>>``, ``&``, ``^``, ``|``). For\n instance, to evaluate the expression ``x + y``, where *x* is an\n instance of a class that has an ``__add__()`` method,\n ``x.__add__(y)`` is called. The ``__divmod__()`` method should be\n the equivalent to using ``__floordiv__()`` and ``__mod__()``; it\n should not be related to ``__truediv__()`` (described below). Note\n that ``__pow__()`` should be defined to accept an optional third\n argument if the ternary version of the built-in ``pow()`` function\n is to be supported.\n\n If one of those methods does not support the operation with the\n supplied arguments, it should return ``NotImplemented``.\n\nobject.__div__(self, other)\nobject.__truediv__(self, other)\n\n The division operator (``/``) is implemented by these methods. The\n ``__truediv__()`` method is used when ``__future__.division`` is in\n effect, otherwise ``__div__()`` is used. If only one of these two\n methods is defined, the object will not support division in the\n alternate context; ``TypeError`` will be raised instead.\n\nobject.__radd__(self, other)\nobject.__rsub__(self, other)\nobject.__rmul__(self, other)\nobject.__rdiv__(self, other)\nobject.__rtruediv__(self, other)\nobject.__rfloordiv__(self, other)\nobject.__rmod__(self, other)\nobject.__rdivmod__(self, other)\nobject.__rpow__(self, other)\nobject.__rlshift__(self, other)\nobject.__rrshift__(self, other)\nobject.__rand__(self, other)\nobject.__rxor__(self, other)\nobject.__ror__(self, other)\n\n These methods are called to implement the binary arithmetic\n operations (``+``, ``-``, ``*``, ``/``, ``%``, ``divmod()``,\n ``pow()``, ``**``, ``<<``, ``>>``, ``&``, ``^``, ``|``) with\n reflected (swapped) operands. These functions are only called if\n the left operand does not support the corresponding operation and\n the operands are of different types. [2] For instance, to evaluate\n the expression ``x - y``, where *y* is an instance of a class that\n has an ``__rsub__()`` method, ``y.__rsub__(x)`` is called if\n ``x.__sub__(y)`` returns *NotImplemented*.\n\n Note that ternary ``pow()`` will not try calling ``__rpow__()``\n (the coercion rules would become too complicated).\n\n Note: If the right operand\'s type is a subclass of the left operand\'s\n type and that subclass provides the reflected method for the\n operation, this method will be called before the left operand\'s\n non-reflected method. 
This behavior allows subclasses to\n override their ancestors\' operations.\n\nobject.__iadd__(self, other)\nobject.__isub__(self, other)\nobject.__imul__(self, other)\nobject.__idiv__(self, other)\nobject.__itruediv__(self, other)\nobject.__ifloordiv__(self, other)\nobject.__imod__(self, other)\nobject.__ipow__(self, other[, modulo])\nobject.__ilshift__(self, other)\nobject.__irshift__(self, other)\nobject.__iand__(self, other)\nobject.__ixor__(self, other)\nobject.__ior__(self, other)\n\n These methods are called to implement the augmented arithmetic\n assignments (``+=``, ``-=``, ``*=``, ``/=``, ``//=``, ``%=``,\n ``**=``, ``<<=``, ``>>=``, ``&=``, ``^=``, ``|=``). These methods\n should attempt to do the operation in-place (modifying *self*) and\n return the result (which could be, but does not have to be,\n *self*). If a specific method is not defined, the augmented\n assignment falls back to the normal methods. For instance, to\n execute the statement ``x += y``, where *x* is an instance of a\n class that has an ``__iadd__()`` method, ``x.__iadd__(y)`` is\n called. If *x* is an instance of a class that does not define a\n ``__iadd__()`` method, ``x.__add__(y)`` and ``y.__radd__(x)`` are\n considered, as with the evaluation of ``x + y``.\n\nobject.__neg__(self)\nobject.__pos__(self)\nobject.__abs__(self)\nobject.__invert__(self)\n\n Called to implement the unary arithmetic operations (``-``, ``+``,\n ``abs()`` and ``~``).\n\nobject.__complex__(self)\nobject.__int__(self)\nobject.__long__(self)\nobject.__float__(self)\n\n Called to implement the built-in functions ``complex()``,\n ``int()``, ``long()``, and ``float()``. Should return a value of\n the appropriate type.\n\nobject.__oct__(self)\nobject.__hex__(self)\n\n Called to implement the built-in functions ``oct()`` and ``hex()``.\n Should return a string value.\n\nobject.__index__(self)\n\n Called to implement ``operator.index()``. Also called whenever\n Python needs an integer object (such as in slicing). Must return\n an integer (int or long).\n\n New in version 2.5.\n\nobject.__coerce__(self, other)\n\n Called to implement "mixed-mode" numeric arithmetic. Should either\n return a 2-tuple containing *self* and *other* converted to a\n common numeric type, or ``None`` if conversion is impossible. When\n the common type would be the type of ``other``, it is sufficient to\n return ``None``, since the interpreter will also ask the other\n object to attempt a coercion (but sometimes, if the implementation\n of the other type cannot be changed, it is useful to do the\n conversion to the other type here). A return value of\n ``NotImplemented`` is equivalent to returning ``None``.\n', - 'objects': u'\nObjects, values and types\n*************************\n\n*Objects* are Python\'s abstraction for data. All data in a Python\nprogram is represented by objects or by relations between objects. (In\na sense, and in conformance to Von Neumann\'s model of a "stored\nprogram computer," code is also represented by objects.)\n\nEvery object has an identity, a type and a value. An object\'s\n*identity* never changes once it has been created; you may think of it\nas the object\'s address in memory. The \'``is``\' operator compares the\nidentity of two objects; the ``id()`` function returns an integer\nrepresenting its identity (currently implemented as its address). An\nobject\'s *type* is also unchangeable. 
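
Returning to the numeric emulation methods listed above, here is a minimal sketch of the binary, reflected and in-place hooks working together (the class name ``Meters`` is invented; Python 2.x assumed):

    class Meters(object):
        def __init__(self, value):
            self.value = value
        def __add__(self, other):
            if isinstance(other, Meters):
                return Meters(self.value + other.value)
            return NotImplemented       # lets Python try other.__radd__(self)
        __radd__ = __add__              # addition is symmetric here, reuse the method
        def __iadd__(self, other):      # used by '+='; modifies self and returns it
            if not isinstance(other, Meters):
                return NotImplemented
            self.value += other.value
            return self
        def __repr__(self):
            return "Meters(%r)" % self.value

    a = Meters(2)
    a += Meters(3)
    print a                      # Meters(5)
    print Meters(1) + Meters(4)  # Meters(5)

If ``__iadd__()`` were not defined, ``a += Meters(3)`` would fall back to ``__add__()`` and rebind ``a`` to a new object, as described above.
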
[1] An object\'s type determines\nthe operations that the object supports (e.g., "does it have a\nlength?") and also defines the possible values for objects of that\ntype. The ``type()`` function returns an object\'s type (which is an\nobject itself). The *value* of some objects can change. Objects\nwhose value can change are said to be *mutable*; objects whose value\nis unchangeable once they are created are called *immutable*. (The\nvalue of an immutable container object that contains a reference to a\nmutable object can change when the latter\'s value is changed; however\nthe container is still considered immutable, because the collection of\nobjects it contains cannot be changed. So, immutability is not\nstrictly the same as having an unchangeable value, it is more subtle.)\nAn object\'s mutability is determined by its type; for instance,\nnumbers, strings and tuples are immutable, while dictionaries and\nlists are mutable.\n\nObjects are never explicitly destroyed; however, when they become\nunreachable they may be garbage-collected. An implementation is\nallowed to postpone garbage collection or omit it altogether --- it is\na matter of implementation quality how garbage collection is\nimplemented, as long as no objects are collected that are still\nreachable.\n\n**CPython implementation detail:** CPython currently uses a reference-\ncounting scheme with (optional) delayed detection of cyclically linked\ngarbage, which collects most objects as soon as they become\nunreachable, but is not guaranteed to collect garbage containing\ncircular references. See the documentation of the ``gc`` module for\ninformation on controlling the collection of cyclic garbage. Other\nimplementations act differently and CPython may change.\n\nNote that the use of the implementation\'s tracing or debugging\nfacilities may keep objects alive that would normally be collectable.\nAlso note that catching an exception with a \'``try``...``except``\'\nstatement may keep objects alive.\n\nSome objects contain references to "external" resources such as open\nfiles or windows. It is understood that these resources are freed\nwhen the object is garbage-collected, but since garbage collection is\nnot guaranteed to happen, such objects also provide an explicit way to\nrelease the external resource, usually a ``close()`` method. Programs\nare strongly recommended to explicitly close such objects. The\n\'``try``...``finally``\' statement provides a convenient way to do\nthis.\n\nSome objects contain references to other objects; these are called\n*containers*. Examples of containers are tuples, lists and\ndictionaries. The references are part of a container\'s value. In\nmost cases, when we talk about the value of a container, we imply the\nvalues, not the identities of the contained objects; however, when we\ntalk about the mutability of a container, only the identities of the\nimmediately contained objects are implied. So, if an immutable\ncontainer (like a tuple) contains a reference to a mutable object, its\nvalue changes if that mutable object is changed.\n\nTypes affect almost all aspects of object behavior. Even the\nimportance of object identity is affected in some sense: for immutable\ntypes, operations that compute new values may actually return a\nreference to any existing object with the same type and value, while\nfor mutable objects this is not allowed. 
E.g., after ``a = 1; b =\n1``, ``a`` and ``b`` may or may not refer to the same object with the\nvalue one, depending on the implementation, but after ``c = []; d =\n[]``, ``c`` and ``d`` are guaranteed to refer to two different,\nunique, newly created empty lists. (Note that ``c = d = []`` assigns\nthe same object to both ``c`` and ``d``.)\n', - 'operator-summary': u'\nSummary\n*******\n\nThe following table summarizes the operator precedences in Python,\nfrom lowest precedence (least binding) to highest precedence (most\nbinding). Operators in the same box have the same precedence. Unless\nthe syntax is explicitly given, operators are binary. Operators in\nthe same box group left to right (except for comparisons, including\ntests, which all have the same precedence and chain from left to right\n--- see section *Comparisons* --- and exponentiation, which groups\nfrom right to left).\n\n+-------------------------------------------------+---------------------------------------+\n| Operator | Description |\n+=================================================+=======================================+\n| ``lambda`` | Lambda expression |\n+-------------------------------------------------+---------------------------------------+\n| ``if`` -- ``else`` | Conditional expression |\n+-------------------------------------------------+---------------------------------------+\n| ``or`` | Boolean OR |\n+-------------------------------------------------+---------------------------------------+\n| ``and`` | Boolean AND |\n+-------------------------------------------------+---------------------------------------+\n| ``not`` *x* | Boolean NOT |\n+-------------------------------------------------+---------------------------------------+\n| ``in``, ``not`` ``in``, ``is``, ``is not``, | Comparisons, including membership |\n| ``<``, ``<=``, ``>``, ``>=``, ``<>``, ``!=``, | tests and identity tests, |\n| ``==`` | |\n+-------------------------------------------------+---------------------------------------+\n| ``|`` | Bitwise OR |\n+-------------------------------------------------+---------------------------------------+\n| ``^`` | Bitwise XOR |\n+-------------------------------------------------+---------------------------------------+\n| ``&`` | Bitwise AND |\n+-------------------------------------------------+---------------------------------------+\n| ``<<``, ``>>`` | Shifts |\n+-------------------------------------------------+---------------------------------------+\n| ``+``, ``-`` | Addition and subtraction |\n+-------------------------------------------------+---------------------------------------+\n| ``*``, ``/``, ``//``, ``%`` | Multiplication, division, remainder |\n+-------------------------------------------------+---------------------------------------+\n| ``+x``, ``-x``, ``~x`` | Positive, negative, bitwise NOT |\n+-------------------------------------------------+---------------------------------------+\n| ``**`` | Exponentiation [8] |\n+-------------------------------------------------+---------------------------------------+\n| ``x[index]``, ``x[index:index]``, | Subscription, slicing, call, |\n| ``x(arguments...)``, ``x.attribute`` | attribute reference |\n+-------------------------------------------------+---------------------------------------+\n| ``(expressions...)``, ``[expressions...]``, | Binding or tuple display, list |\n| ``{key:datum...}``, ```expressions...``` | display, dictionary display, string |\n| | conversion 
|\n+-------------------------------------------------+---------------------------------------+\n\n-[ Footnotes ]-\n\n[1] In Python 2.3 and later releases, a list comprehension "leaks" the\n control variables of each ``for`` it contains into the containing\n scope. However, this behavior is deprecated, and relying on it\n will not work in Python 3.0\n\n[2] While ``abs(x%y) < abs(y)`` is true mathematically, for floats it\n may not be true numerically due to roundoff. For example, and\n assuming a platform on which a Python float is an IEEE 754 double-\n precision number, in order that ``-1e-100 % 1e100`` have the same\n sign as ``1e100``, the computed result is ``-1e-100 + 1e100``,\n which is numerically exactly equal to ``1e100``. Function\n ``fmod()`` in the ``math`` module returns a result whose sign\n matches the sign of the first argument instead, and so returns\n ``-1e-100`` in this case. Which approach is more appropriate\n depends on the application.\n\n[3] If x is very close to an exact integer multiple of y, it\'s\n possible for ``floor(x/y)`` to be one larger than ``(x-x%y)/y``\n due to rounding. In such cases, Python returns the latter result,\n in order to preserve that ``divmod(x,y)[0] * y + x % y`` be very\n close to ``x``.\n\n[4] While comparisons between unicode strings make sense at the byte\n level, they may be counter-intuitive to users. For example, the\n strings ``u"\\u00C7"`` and ``u"\\u0043\\u0327"`` compare differently,\n even though they both represent the same unicode character (LATIN\n CAPITAL LETTER C WITH CEDILLA). To compare strings in a human\n recognizable way, compare using ``unicodedata.normalize()``.\n\n[5] The implementation computes this efficiently, without constructing\n lists or sorting.\n\n[6] Earlier versions of Python used lexicographic comparison of the\n sorted (key, value) lists, but this was very expensive for the\n common case of comparing for equality. An even earlier version of\n Python compared dictionaries by identity only, but this caused\n surprises because people expected to be able to test a dictionary\n for emptiness by comparing it to ``{}``.\n\n[7] Due to automatic garbage-collection, free lists, and the dynamic\n nature of descriptors, you may notice seemingly unusual behaviour\n in certain uses of the ``is`` operator, like those involving\n comparisons between instance methods, or constants. Check their\n documentation for more info.\n\n[8] The power operator ``**`` binds less tightly than an arithmetic or\n bitwise unary operator on its right, that is, ``2**-1`` is\n ``0.5``.\n', + 'objects': u'\nObjects, values and types\n*************************\n\n*Objects* are Python\'s abstraction for data. All data in a Python\nprogram is represented by objects or by relations between objects. (In\na sense, and in conformance to Von Neumann\'s model of a "stored\nprogram computer," code is also represented by objects.)\n\nEvery object has an identity, a type and a value. An object\'s\n*identity* never changes once it has been created; you may think of it\nas the object\'s address in memory. The \'``is``\' operator compares the\nidentity of two objects; the ``id()`` function returns an integer\nrepresenting its identity (currently implemented as its address). An\nobject\'s *type* is also unchangeable. [1] An object\'s type determines\nthe operations that the object supports (e.g., "does it have a\nlength?") and also defines the possible values for objects of that\ntype. 
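
The identity / type / value distinction drawn here can be checked directly. A small sketch, assuming any Python 2.x implementation:

    a = []
    b = []
    print a == b            # True  -- equal values
    print a is b            # False -- two distinct objects ...
    print id(a) == id(b)    # False -- ... with different identities
    print type(a) is list   # True  -- the type is itself an object

    a.append(1)             # lists are mutable: the value of 'a' changed,
    print a, b              # [1] [] -- its identity and type did not
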
The ``type()`` function returns an object\'s type (which is an\nobject itself). The *value* of some objects can change. Objects\nwhose value can change are said to be *mutable*; objects whose value\nis unchangeable once they are created are called *immutable*. (The\nvalue of an immutable container object that contains a reference to a\nmutable object can change when the latter\'s value is changed; however\nthe container is still considered immutable, because the collection of\nobjects it contains cannot be changed. So, immutability is not\nstrictly the same as having an unchangeable value, it is more subtle.)\nAn object\'s mutability is determined by its type; for instance,\nnumbers, strings and tuples are immutable, while dictionaries and\nlists are mutable.\n\nObjects are never explicitly destroyed; however, when they become\nunreachable they may be garbage-collected. An implementation is\nallowed to postpone garbage collection or omit it altogether --- it is\na matter of implementation quality how garbage collection is\nimplemented, as long as no objects are collected that are still\nreachable.\n\n**CPython implementation detail:** CPython currently uses a reference-\ncounting scheme with (optional) delayed detection of cyclically linked\ngarbage, which collects most objects as soon as they become\nunreachable, but is not guaranteed to collect garbage containing\ncircular references. See the documentation of the ``gc`` module for\ninformation on controlling the collection of cyclic garbage. Other\nimplementations act differently and CPython may change. Do not depend\non immediate finalization of objects when they become unreachable (ex:\nalways close files).\n\nNote that the use of the implementation\'s tracing or debugging\nfacilities may keep objects alive that would normally be collectable.\nAlso note that catching an exception with a \'``try``...``except``\'\nstatement may keep objects alive.\n\nSome objects contain references to "external" resources such as open\nfiles or windows. It is understood that these resources are freed\nwhen the object is garbage-collected, but since garbage collection is\nnot guaranteed to happen, such objects also provide an explicit way to\nrelease the external resource, usually a ``close()`` method. Programs\nare strongly recommended to explicitly close such objects. The\n\'``try``...``finally``\' statement provides a convenient way to do\nthis.\n\nSome objects contain references to other objects; these are called\n*containers*. Examples of containers are tuples, lists and\ndictionaries. The references are part of a container\'s value. In\nmost cases, when we talk about the value of a container, we imply the\nvalues, not the identities of the contained objects; however, when we\ntalk about the mutability of a container, only the identities of the\nimmediately contained objects are implied. So, if an immutable\ncontainer (like a tuple) contains a reference to a mutable object, its\nvalue changes if that mutable object is changed.\n\nTypes affect almost all aspects of object behavior. Even the\nimportance of object identity is affected in some sense: for immutable\ntypes, operations that compute new values may actually return a\nreference to any existing object with the same type and value, while\nfor mutable objects this is not allowed. 
E.g., after ``a = 1; b =\n1``, ``a`` and ``b`` may or may not refer to the same object with the\nvalue one, depending on the implementation, but after ``c = []; d =\n[]``, ``c`` and ``d`` are guaranteed to refer to two different,\nunique, newly created empty lists. (Note that ``c = d = []`` assigns\nthe same object to both ``c`` and ``d``.)\n', + 'operator-summary': u'\nSummary\n*******\n\nThe following table summarizes the operator precedences in Python,\nfrom lowest precedence (least binding) to highest precedence (most\nbinding). Operators in the same box have the same precedence. Unless\nthe syntax is explicitly given, operators are binary. Operators in\nthe same box group left to right (except for comparisons, including\ntests, which all have the same precedence and chain from left to right\n--- see section *Comparisons* --- and exponentiation, which groups\nfrom right to left).\n\n+-------------------------------------------------+---------------------------------------+\n| Operator | Description |\n+=================================================+=======================================+\n| ``lambda`` | Lambda expression |\n+-------------------------------------------------+---------------------------------------+\n| ``if`` -- ``else`` | Conditional expression |\n+-------------------------------------------------+---------------------------------------+\n| ``or`` | Boolean OR |\n+-------------------------------------------------+---------------------------------------+\n| ``and`` | Boolean AND |\n+-------------------------------------------------+---------------------------------------+\n| ``not`` *x* | Boolean NOT |\n+-------------------------------------------------+---------------------------------------+\n| ``in``, ``not`` ``in``, ``is``, ``is not``, | Comparisons, including membership |\n| ``<``, ``<=``, ``>``, ``>=``, ``<>``, ``!=``, | tests and identity tests, |\n| ``==`` | |\n+-------------------------------------------------+---------------------------------------+\n| ``|`` | Bitwise OR |\n+-------------------------------------------------+---------------------------------------+\n| ``^`` | Bitwise XOR |\n+-------------------------------------------------+---------------------------------------+\n| ``&`` | Bitwise AND |\n+-------------------------------------------------+---------------------------------------+\n| ``<<``, ``>>`` | Shifts |\n+-------------------------------------------------+---------------------------------------+\n| ``+``, ``-`` | Addition and subtraction |\n+-------------------------------------------------+---------------------------------------+\n| ``*``, ``/``, ``//``, ``%`` | Multiplication, division, remainder |\n| | [8] |\n+-------------------------------------------------+---------------------------------------+\n| ``+x``, ``-x``, ``~x`` | Positive, negative, bitwise NOT |\n+-------------------------------------------------+---------------------------------------+\n| ``**`` | Exponentiation [9] |\n+-------------------------------------------------+---------------------------------------+\n| ``x[index]``, ``x[index:index]``, | Subscription, slicing, call, |\n| ``x(arguments...)``, ``x.attribute`` | attribute reference |\n+-------------------------------------------------+---------------------------------------+\n| ``(expressions...)``, ``[expressions...]``, | Binding or tuple display, list |\n| ``{key:datum...}``, ```expressions...``` | display, dictionary display, string |\n| | conversion 
|\n+-------------------------------------------------+---------------------------------------+\n\n-[ Footnotes ]-\n\n[1] In Python 2.3 and later releases, a list comprehension "leaks" the\n control variables of each ``for`` it contains into the containing\n scope. However, this behavior is deprecated, and relying on it\n will not work in Python 3.0\n\n[2] While ``abs(x%y) < abs(y)`` is true mathematically, for floats it\n may not be true numerically due to roundoff. For example, and\n assuming a platform on which a Python float is an IEEE 754 double-\n precision number, in order that ``-1e-100 % 1e100`` have the same\n sign as ``1e100``, the computed result is ``-1e-100 + 1e100``,\n which is numerically exactly equal to ``1e100``. The function\n ``math.fmod()`` returns a result whose sign matches the sign of\n the first argument instead, and so returns ``-1e-100`` in this\n case. Which approach is more appropriate depends on the\n application.\n\n[3] If x is very close to an exact integer multiple of y, it\'s\n possible for ``floor(x/y)`` to be one larger than ``(x-x%y)/y``\n due to rounding. In such cases, Python returns the latter result,\n in order to preserve that ``divmod(x,y)[0] * y + x % y`` be very\n close to ``x``.\n\n[4] While comparisons between unicode strings make sense at the byte\n level, they may be counter-intuitive to users. For example, the\n strings ``u"\\u00C7"`` and ``u"\\u0043\\u0327"`` compare differently,\n even though they both represent the same unicode character (LATIN\n CAPITAL LETTER C WITH CEDILLA). To compare strings in a human\n recognizable way, compare using ``unicodedata.normalize()``.\n\n[5] The implementation computes this efficiently, without constructing\n lists or sorting.\n\n[6] Earlier versions of Python used lexicographic comparison of the\n sorted (key, value) lists, but this was very expensive for the\n common case of comparing for equality. An even earlier version of\n Python compared dictionaries by identity only, but this caused\n surprises because people expected to be able to test a dictionary\n for emptiness by comparing it to ``{}``.\n\n[7] Due to automatic garbage-collection, free lists, and the dynamic\n nature of descriptors, you may notice seemingly unusual behaviour\n in certain uses of the ``is`` operator, like those involving\n comparisons between instance methods, or constants. Check their\n documentation for more info.\n\n[8] The ``%`` operator is also used for string formatting; the same\n precedence applies.\n\n[9] The power operator ``**`` binds less tightly than an arithmetic or\n bitwise unary operator on its right, that is, ``2**-1`` is\n ``0.5``.\n', 'pass': u'\nThe ``pass`` statement\n**********************\n\n pass_stmt ::= "pass"\n\n``pass`` is a null operation --- when it is executed, nothing happens.\nIt is useful as a placeholder when a statement is required\nsyntactically, but no code needs to be executed, for example:\n\n def f(arg): pass # a function that does nothing (yet)\n\n class C: pass # a class with no methods (yet)\n', 'power': u'\nThe power operator\n******************\n\nThe power operator binds more tightly than unary operators on its\nleft; it binds less tightly than unary operators on its right. 
The\nsyntax is:\n\n power ::= primary ["**" u_expr]\n\nThus, in an unparenthesized sequence of power and unary operators, the\noperators are evaluated from right to left (this does not constrain\nthe evaluation order for the operands): ``-1**2`` results in ``-1``.\n\nThe power operator has the same semantics as the built-in ``pow()``\nfunction, when called with two arguments: it yields its left argument\nraised to the power of its right argument. The numeric arguments are\nfirst converted to a common type. The result type is that of the\narguments after coercion.\n\nWith mixed operand types, the coercion rules for binary arithmetic\noperators apply. For int and long int operands, the result has the\nsame type as the operands (after coercion) unless the second argument\nis negative; in that case, all arguments are converted to float and a\nfloat result is delivered. For example, ``10**2`` returns ``100``, but\n``10**-2`` returns ``0.01``. (This last feature was added in Python\n2.2. In Python 2.1 and before, if both arguments were of integer types\nand the second argument was negative, an exception was raised).\n\nRaising ``0.0`` to a negative power results in a\n``ZeroDivisionError``. Raising a negative number to a fractional power\nresults in a ``ValueError``.\n', 'print': u'\nThe ``print`` statement\n***********************\n\n print_stmt ::= "print" ([expression ("," expression)* [","]]\n | ">>" expression [("," expression)+ [","]])\n\n``print`` evaluates each expression in turn and writes the resulting\nobject to standard output (see below). If an object is not a string,\nit is first converted to a string using the rules for string\nconversions. The (resulting or original) string is then written. A\nspace is written before each object is (converted and) written, unless\nthe output system believes it is positioned at the beginning of a\nline. This is the case (1) when no characters have yet been written\nto standard output, (2) when the last character written to standard\noutput is a whitespace character except ``\' \'``, or (3) when the last\nwrite operation on standard output was not a ``print`` statement. (In\nsome cases it may be functional to write an empty string to standard\noutput for this reason.)\n\nNote: Objects which act like file objects but which are not the built-in\n file objects often do not properly emulate this aspect of the file\n object\'s behavior, so it is best not to rely on this.\n\nA ``\'\\n\'`` character is written at the end, unless the ``print``\nstatement ends with a comma. This is the only action if the statement\ncontains just the keyword ``print``.\n\nStandard output is defined as the file object named ``stdout`` in the\nbuilt-in module ``sys``. If no such object exists, or if it does not\nhave a ``write()`` method, a ``RuntimeError`` exception is raised.\n\n``print`` also has an extended form, defined by the second portion of\nthe syntax described above. This form is sometimes referred to as\n"``print`` chevron." In this form, the first expression after the\n``>>`` must evaluate to a "file-like" object, specifically an object\nthat has a ``write()`` method as described above. With this extended\nform, the subsequent expressions are printed to this file object. 
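A brief illustrative sketch of the extended ("print chevron") form just described; the file name below is hypothetical, and any object with a ``write()`` method works:

   import sys

   log = open('/tmp/example.log', 'w')       # hypothetical path
   print >> log, 'warning:', 42              # written to log, not to sys.stdout
   print >> sys.stderr, 'something failed'   # sys.stderr is another common target
   log.close()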
If\nthe first expression evaluates to ``None``, then ``sys.stdout`` is\nused as the file for output.\n', @@ -63,21 +63,21 @@ 'shifting': u'\nShifting operations\n*******************\n\nThe shifting operations have lower priority than the arithmetic\noperations:\n\n shift_expr ::= a_expr | shift_expr ( "<<" | ">>" ) a_expr\n\nThese operators accept plain or long integers as arguments. The\narguments are converted to a common type. They shift the first\nargument to the left or right by the number of bits given by the\nsecond argument.\n\nA right shift by *n* bits is defined as division by ``pow(2, n)``. A\nleft shift by *n* bits is defined as multiplication with ``pow(2,\nn)``. Negative shift counts raise a ``ValueError`` exception.\n\nNote: In the current implementation, the right-hand operand is required to\n be at most ``sys.maxsize``. If the right-hand operand is larger\n than ``sys.maxsize`` an ``OverflowError`` exception is raised.\n', 'slicings': u'\nSlicings\n********\n\nA slicing selects a range of items in a sequence object (e.g., a\nstring, tuple or list). Slicings may be used as expressions or as\ntargets in assignment or ``del`` statements. The syntax for a\nslicing:\n\n slicing ::= simple_slicing | extended_slicing\n simple_slicing ::= primary "[" short_slice "]"\n extended_slicing ::= primary "[" slice_list "]"\n slice_list ::= slice_item ("," slice_item)* [","]\n slice_item ::= expression | proper_slice | ellipsis\n proper_slice ::= short_slice | long_slice\n short_slice ::= [lower_bound] ":" [upper_bound]\n long_slice ::= short_slice ":" [stride]\n lower_bound ::= expression\n upper_bound ::= expression\n stride ::= expression\n ellipsis ::= "..."\n\nThere is ambiguity in the formal syntax here: anything that looks like\nan expression list also looks like a slice list, so any subscription\ncan be interpreted as a slicing. Rather than further complicating the\nsyntax, this is disambiguated by defining that in this case the\ninterpretation as a subscription takes priority over the\ninterpretation as a slicing (this is the case if the slice list\ncontains no proper slice nor ellipses). Similarly, when the slice\nlist has exactly one short slice and no trailing comma, the\ninterpretation as a simple slicing takes priority over that as an\nextended slicing.\n\nThe semantics for a simple slicing are as follows. The primary must\nevaluate to a sequence object. The lower and upper bound expressions,\nif present, must evaluate to plain integers; defaults are zero and the\n``sys.maxint``, respectively. If either bound is negative, the\nsequence\'s length is added to it. The slicing now selects all items\nwith index *k* such that ``i <= k < j`` where *i* and *j* are the\nspecified lower and upper bounds. This may be an empty sequence. It\nis not an error if *i* or *j* lie outside the range of valid indexes\n(such items don\'t exist so they aren\'t selected).\n\nThe semantics for an extended slicing are as follows. The primary\nmust evaluate to a mapping object, and it is indexed with a key that\nis constructed from the slice list, as follows. If the slice list\ncontains at least one comma, the key is a tuple containing the\nconversion of the slice items; otherwise, the conversion of the lone\nslice item is the key. The conversion of a slice item that is an\nexpression is that expression. The conversion of an ellipsis slice\nitem is the built-in ``Ellipsis`` object. 
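An illustrative interactive sketch (the class name is made up) of how a slice list is turned into the key passed to the primary's ``__getitem__()``:

   >>> class Show(object):
   ...     def __getitem__(self, key):
   ...         return key
   ...
   >>> Show()[1:2, ..., 5]     # slice list with a comma: the key is a tuple
   (slice(1, 2, None), Ellipsis, 5)
   >>> Show()[...]             # a lone ellipsis slice item
   Ellipsis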
The conversion of a proper\nslice is a slice object (see section *The standard type hierarchy*)\nwhose ``start``, ``stop`` and ``step`` attributes are the values of\nthe expressions given as lower bound, upper bound and stride,\nrespectively, substituting ``None`` for missing expressions.\n', 'specialattrs': u"\nSpecial Attributes\n******************\n\nThe implementation adds a few special read-only attributes to several\nobject types, where they are relevant. Some of these are not reported\nby the ``dir()`` built-in function.\n\nobject.__dict__\n\n A dictionary or other mapping object used to store an object's\n (writable) attributes.\n\nobject.__methods__\n\n Deprecated since version 2.2: Use the built-in function ``dir()``\n to get a list of an object's attributes. This attribute is no\n longer available.\n\nobject.__members__\n\n Deprecated since version 2.2: Use the built-in function ``dir()``\n to get a list of an object's attributes. This attribute is no\n longer available.\n\ninstance.__class__\n\n The class to which a class instance belongs.\n\nclass.__bases__\n\n The tuple of base classes of a class object.\n\nclass.__name__\n\n The name of the class or type.\n\nThe following attributes are only supported by *new-style class*es.\n\nclass.__mro__\n\n This attribute is a tuple of classes that are considered when\n looking for base classes during method resolution.\n\nclass.mro()\n\n This method can be overridden by a metaclass to customize the\n method resolution order for its instances. It is called at class\n instantiation, and its result is stored in ``__mro__``.\n\nclass.__subclasses__()\n\n Each new-style class keeps a list of weak references to its\n immediate subclasses. This method returns a list of all those\n references still alive. Example:\n\n >>> int.__subclasses__()\n []\n\n-[ Footnotes ]-\n\n[1] Additional information on these special methods may be found in\n the Python Reference Manual (*Basic customization*).\n\n[2] As a consequence, the list ``[1, 2]`` is considered equal to\n ``[1.0, 2.0]``, and similarly for tuples.\n\n[3] They must have since the parser can't tell the type of the\n operands.\n\n[4] To format only a tuple you should therefore provide a singleton\n tuple whose only element is the tuple to be formatted.\n\n[5] The advantage of leaving the newline on is that returning an empty\n string is then an unambiguous EOF indication. It is also possible\n (in cases where it might matter, for example, if you want to make\n an exact copy of a file while scanning its lines) to tell whether\n the last line of a file ended in a newline or not (yes this\n happens!).\n", - 'specialnames': u'\nSpecial method names\n********************\n\nA class can implement certain operations that are invoked by special\nsyntax (such as arithmetic operations or subscripting and slicing) by\ndefining methods with special names. This is Python\'s approach to\n*operator overloading*, allowing classes to define their own behavior\nwith respect to language operators. For instance, if a class defines\na method named ``__getitem__()``, and ``x`` is an instance of this\nclass, then ``x[i]`` is roughly equivalent to ``x.__getitem__(i)`` for\nold-style classes and ``type(x).__getitem__(x, i)`` for new-style\nclasses. 
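A short interactive sketch (classes invented for illustration) showing several of these special attributes on new-style classes:

   >>> class A(object): pass
   ...
   >>> class B(A): pass
   ...
   >>> B.__name__
   'B'
   >>> B.__bases__
   (<class '__main__.A'>,)
   >>> B.__mro__
   (<class '__main__.B'>, <class '__main__.A'>, <type 'object'>)
   >>> A.__subclasses__()
   [<class '__main__.B'>]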
Except where mentioned, attempts to execute an operation\nraise an exception when no appropriate method is defined (typically\n``AttributeError`` or ``TypeError``).\n\nWhen implementing a class that emulates any built-in type, it is\nimportant that the emulation only be implemented to the degree that it\nmakes sense for the object being modelled. For example, some\nsequences may work well with retrieval of individual elements, but\nextracting a slice may not make sense. (One example of this is the\n``NodeList`` interface in the W3C\'s Document Object Model.)\n\n\nBasic customization\n===================\n\nobject.__new__(cls[, ...])\n\n Called to create a new instance of class *cls*. ``__new__()`` is a\n static method (special-cased so you need not declare it as such)\n that takes the class of which an instance was requested as its\n first argument. The remaining arguments are those passed to the\n object constructor expression (the call to the class). The return\n value of ``__new__()`` should be the new object instance (usually\n an instance of *cls*).\n\n Typical implementations create a new instance of the class by\n invoking the superclass\'s ``__new__()`` method using\n ``super(currentclass, cls).__new__(cls[, ...])`` with appropriate\n arguments and then modifying the newly-created instance as\n necessary before returning it.\n\n If ``__new__()`` returns an instance of *cls*, then the new\n instance\'s ``__init__()`` method will be invoked like\n ``__init__(self[, ...])``, where *self* is the new instance and the\n remaining arguments are the same as were passed to ``__new__()``.\n\n If ``__new__()`` does not return an instance of *cls*, then the new\n instance\'s ``__init__()`` method will not be invoked.\n\n ``__new__()`` is intended mainly to allow subclasses of immutable\n types (like int, str, or tuple) to customize instance creation. It\n is also commonly overridden in custom metaclasses in order to\n customize class creation.\n\nobject.__init__(self[, ...])\n\n Called when the instance is created. The arguments are those\n passed to the class constructor expression. If a base class has an\n ``__init__()`` method, the derived class\'s ``__init__()`` method,\n if any, must explicitly call it to ensure proper initialization of\n the base class part of the instance; for example:\n ``BaseClass.__init__(self, [args...])``. As a special constraint\n on constructors, no value may be returned; doing so will cause a\n ``TypeError`` to be raised at runtime.\n\nobject.__del__(self)\n\n Called when the instance is about to be destroyed. This is also\n called a destructor. If a base class has a ``__del__()`` method,\n the derived class\'s ``__del__()`` method, if any, must explicitly\n call it to ensure proper deletion of the base class part of the\n instance. Note that it is possible (though not recommended!) for\n the ``__del__()`` method to postpone destruction of the instance by\n creating a new reference to it. It may then be called at a later\n time when this new reference is deleted. It is not guaranteed that\n ``__del__()`` methods are called for objects that still exist when\n the interpreter exits.\n\n Note: ``del x`` doesn\'t directly call ``x.__del__()`` --- the former\n decrements the reference count for ``x`` by one, and the latter\n is only called when ``x``\'s reference count reaches zero. 
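A minimal sketch of the reference-counting behaviour just described, assuming CPython's immediate collection (other implementations may delay or omit the call; the class name is invented):

   class Tracked:
       def __del__(self):
           print 'collected'

   x = Tracked()
   y = x          # two references to the same instance
   del x          # nothing printed: the count only drops from 2 to 1
   del y          # 'collected' is printed once the count reaches zero (CPython)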
Some\n common situations that may prevent the reference count of an\n object from going to zero include: circular references between\n objects (e.g., a doubly-linked list or a tree data structure with\n parent and child pointers); a reference to the object on the\n stack frame of a function that caught an exception (the traceback\n stored in ``sys.exc_traceback`` keeps the stack frame alive); or\n a reference to the object on the stack frame that raised an\n unhandled exception in interactive mode (the traceback stored in\n ``sys.last_traceback`` keeps the stack frame alive). The first\n situation can only be remedied by explicitly breaking the cycles;\n the latter two situations can be resolved by storing ``None`` in\n ``sys.exc_traceback`` or ``sys.last_traceback``. Circular\n references which are garbage are detected when the option cycle\n detector is enabled (it\'s on by default), but can only be cleaned\n up if there are no Python-level ``__del__()`` methods involved.\n Refer to the documentation for the ``gc`` module for more\n information about how ``__del__()`` methods are handled by the\n cycle detector, particularly the description of the ``garbage``\n value.\n\n Warning: Due to the precarious circumstances under which ``__del__()``\n methods are invoked, exceptions that occur during their execution\n are ignored, and a warning is printed to ``sys.stderr`` instead.\n Also, when ``__del__()`` is invoked in response to a module being\n deleted (e.g., when execution of the program is done), other\n globals referenced by the ``__del__()`` method may already have\n been deleted or in the process of being torn down (e.g. the\n import machinery shutting down). For this reason, ``__del__()``\n methods should do the absolute minimum needed to maintain\n external invariants. Starting with version 1.5, Python\n guarantees that globals whose name begins with a single\n underscore are deleted from their module before other globals are\n deleted; if no other references to such globals exist, this may\n help in assuring that imported modules are still available at the\n time when the ``__del__()`` method is called.\n\nobject.__repr__(self)\n\n Called by the ``repr()`` built-in function and by string\n conversions (reverse quotes) to compute the "official" string\n representation of an object. If at all possible, this should look\n like a valid Python expression that could be used to recreate an\n object with the same value (given an appropriate environment). If\n this is not possible, a string of the form ``<...some useful\n description...>`` should be returned. The return value must be a\n string object. If a class defines ``__repr__()`` but not\n ``__str__()``, then ``__repr__()`` is also used when an "informal"\n string representation of instances of that class is required.\n\n This is typically used for debugging, so it is important that the\n representation is information-rich and unambiguous.\n\nobject.__str__(self)\n\n Called by the ``str()`` built-in function and by the ``print``\n statement to compute the "informal" string representation of an\n object. This differs from ``__repr__()`` in that it does not have\n to be a valid Python expression: a more convenient or concise\n representation may be used instead. 
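An illustrative sketch (hypothetical class) contrasting the "official" and "informal" representations described above:

   class Point:
       def __init__(self, x, y):
           self.x, self.y = x, y
       def __repr__(self):
           # "official": aims to look like an expression that recreates the object
           return 'Point(%r, %r)' % (self.x, self.y)
       def __str__(self):
           # "informal": a more readable form used by print and str()
           return '(%s, %s)' % (self.x, self.y)

   p = Point(1, 2)
   print repr(p)    # Point(1, 2)
   print p          # (1, 2)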
The return value must be a\n string object.\n\nobject.__lt__(self, other)\nobject.__le__(self, other)\nobject.__eq__(self, other)\nobject.__ne__(self, other)\nobject.__gt__(self, other)\nobject.__ge__(self, other)\n\n New in version 2.1.\n\n These are the so-called "rich comparison" methods, and are called\n for comparison operators in preference to ``__cmp__()`` below. The\n correspondence between operator symbols and method names is as\n follows: ``xy`` call ``x.__ne__(y)``, ``x>y`` calls ``x.__gt__(y)``, and\n ``x>=y`` calls ``x.__ge__(y)``.\n\n A rich comparison method may return the singleton\n ``NotImplemented`` if it does not implement the operation for a\n given pair of arguments. By convention, ``False`` and ``True`` are\n returned for a successful comparison. However, these methods can\n return any value, so if the comparison operator is used in a\n Boolean context (e.g., in the condition of an ``if`` statement),\n Python will call ``bool()`` on the value to determine if the result\n is true or false.\n\n There are no implied relationships among the comparison operators.\n The truth of ``x==y`` does not imply that ``x!=y`` is false.\n Accordingly, when defining ``__eq__()``, one should also define\n ``__ne__()`` so that the operators will behave as expected. See\n the paragraph on ``__hash__()`` for some important notes on\n creating *hashable* objects which support custom comparison\n operations and are usable as dictionary keys.\n\n There are no swapped-argument versions of these methods (to be used\n when the left argument does not support the operation but the right\n argument does); rather, ``__lt__()`` and ``__gt__()`` are each\n other\'s reflection, ``__le__()`` and ``__ge__()`` are each other\'s\n reflection, and ``__eq__()`` and ``__ne__()`` are their own\n reflection.\n\n Arguments to rich comparison methods are never coerced.\n\n To automatically generate ordering operations from a single root\n operation, see ``functools.total_ordering()``.\n\nobject.__cmp__(self, other)\n\n Called by comparison operations if rich comparison (see above) is\n not defined. Should return a negative integer if ``self < other``,\n zero if ``self == other``, a positive integer if ``self > other``.\n If no ``__cmp__()``, ``__eq__()`` or ``__ne__()`` operation is\n defined, class instances are compared by object identity\n ("address"). See also the description of ``__hash__()`` for some\n important notes on creating *hashable* objects which support custom\n comparison operations and are usable as dictionary keys. (Note: the\n restriction that exceptions are not propagated by ``__cmp__()`` has\n been removed since Python 1.5.)\n\nobject.__rcmp__(self, other)\n\n Changed in version 2.1: No longer supported.\n\nobject.__hash__(self)\n\n Called by built-in function ``hash()`` and for operations on\n members of hashed collections including ``set``, ``frozenset``, and\n ``dict``. ``__hash__()`` should return an integer. The only\n required property is that objects which compare equal have the same\n hash value; it is advised to somehow mix together (e.g. using\n exclusive or) the hash values for the components of the object that\n also play a part in comparison of objects.\n\n If a class does not define a ``__cmp__()`` or ``__eq__()`` method\n it should not define a ``__hash__()`` operation either; if it\n defines ``__cmp__()`` or ``__eq__()`` but not ``__hash__()``, its\n instances will not be usable in hashed collections. 
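A sketch (invented class) of the convention discussed here: defining ``__eq__()`` together with ``__ne__()`` and a consistent ``__hash__()`` so that instances remain usable as dictionary keys:

   class Account(object):
       def __init__(self, number):
           self.number = number
       def __eq__(self, other):
           if not isinstance(other, Account):
               return NotImplemented
           return self.number == other.number
       def __ne__(self, other):
           result = self.__eq__(other)
           return result if result is NotImplemented else not result
       def __hash__(self):
           # objects that compare equal must have equal hash values
           return hash(self.number)

   d = {Account(7): 'savings'}
   print d[Account(7)]      # savings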
If a class\n defines mutable objects and implements a ``__cmp__()`` or\n ``__eq__()`` method, it should not implement ``__hash__()``, since\n hashable collection implementations require that a object\'s hash\n value is immutable (if the object\'s hash value changes, it will be\n in the wrong hash bucket).\n\n User-defined classes have ``__cmp__()`` and ``__hash__()`` methods\n by default; with them, all objects compare unequal (except with\n themselves) and ``x.__hash__()`` returns ``id(x)``.\n\n Classes which inherit a ``__hash__()`` method from a parent class\n but change the meaning of ``__cmp__()`` or ``__eq__()`` such that\n the hash value returned is no longer appropriate (e.g. by switching\n to a value-based concept of equality instead of the default\n identity based equality) can explicitly flag themselves as being\n unhashable by setting ``__hash__ = None`` in the class definition.\n Doing so means that not only will instances of the class raise an\n appropriate ``TypeError`` when a program attempts to retrieve their\n hash value, but they will also be correctly identified as\n unhashable when checking ``isinstance(obj, collections.Hashable)``\n (unlike classes which define their own ``__hash__()`` to explicitly\n raise ``TypeError``).\n\n Changed in version 2.5: ``__hash__()`` may now also return a long\n integer object; the 32-bit integer is then derived from the hash of\n that object.\n\n Changed in version 2.6: ``__hash__`` may now be set to ``None`` to\n explicitly flag instances of a class as unhashable.\n\nobject.__nonzero__(self)\n\n Called to implement truth value testing and the built-in operation\n ``bool()``; should return ``False`` or ``True``, or their integer\n equivalents ``0`` or ``1``. When this method is not defined,\n ``__len__()`` is called, if it is defined, and the object is\n considered true if its result is nonzero. If a class defines\n neither ``__len__()`` nor ``__nonzero__()``, all its instances are\n considered true.\n\nobject.__unicode__(self)\n\n Called to implement ``unicode()`` built-in; should return a Unicode\n object. When this method is not defined, string conversion is\n attempted, and the result of string conversion is converted to\n Unicode using the system default encoding.\n\n\nCustomizing attribute access\n============================\n\nThe following methods can be defined to customize the meaning of\nattribute access (use of, assignment to, or deletion of ``x.name``)\nfor class instances.\n\nobject.__getattr__(self, name)\n\n Called when an attribute lookup has not found the attribute in the\n usual places (i.e. it is not an instance attribute nor is it found\n in the class tree for ``self``). ``name`` is the attribute name.\n This method should return the (computed) attribute value or raise\n an ``AttributeError`` exception.\n\n Note that if the attribute is found through the normal mechanism,\n ``__getattr__()`` is not called. (This is an intentional asymmetry\n between ``__getattr__()`` and ``__setattr__()``.) This is done both\n for efficiency reasons and because otherwise ``__getattr__()``\n would have no way to access other attributes of the instance. Note\n that at least for instance variables, you can fake total control by\n not inserting any values in the instance attribute dictionary (but\n instead inserting them in another object). 
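An illustrative delegation sketch (class name invented) of ``__getattr__()`` being called only when normal lookup fails:

   class Wrapper(object):
       def __init__(self, wrapped):
           self.wrapped = wrapped
       def __getattr__(self, name):
           # only reached when normal lookup on the Wrapper instance fails
           return getattr(self.wrapped, name)

   w = Wrapper([1, 2, 3])
   print w.wrapped        # found normally; __getattr__() is not called
   w.append(4)            # not found normally, so __getattr__() forwards to the list
   print len(w.wrapped)   # 4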
See the\n ``__getattribute__()`` method below for a way to actually get total\n control in new-style classes.\n\nobject.__setattr__(self, name, value)\n\n Called when an attribute assignment is attempted. This is called\n instead of the normal mechanism (i.e. store the value in the\n instance dictionary). *name* is the attribute name, *value* is the\n value to be assigned to it.\n\n If ``__setattr__()`` wants to assign to an instance attribute, it\n should not simply execute ``self.name = value`` --- this would\n cause a recursive call to itself. Instead, it should insert the\n value in the dictionary of instance attributes, e.g.,\n ``self.__dict__[name] = value``. For new-style classes, rather\n than accessing the instance dictionary, it should call the base\n class method with the same name, for example,\n ``object.__setattr__(self, name, value)``.\n\nobject.__delattr__(self, name)\n\n Like ``__setattr__()`` but for attribute deletion instead of\n assignment. This should only be implemented if ``del obj.name`` is\n meaningful for the object.\n\n\nMore attribute access for new-style classes\n-------------------------------------------\n\nThe following methods only apply to new-style classes.\n\nobject.__getattribute__(self, name)\n\n Called unconditionally to implement attribute accesses for\n instances of the class. If the class also defines\n ``__getattr__()``, the latter will not be called unless\n ``__getattribute__()`` either calls it explicitly or raises an\n ``AttributeError``. This method should return the (computed)\n attribute value or raise an ``AttributeError`` exception. In order\n to avoid infinite recursion in this method, its implementation\n should always call the base class method with the same name to\n access any attributes it needs, for example,\n ``object.__getattribute__(self, name)``.\n\n Note: This method may still be bypassed when looking up special methods\n as the result of implicit invocation via language syntax or\n built-in functions. See *Special method lookup for new-style\n classes*.\n\n\nImplementing Descriptors\n------------------------\n\nThe following methods only apply when an instance of the class\ncontaining the method (a so-called *descriptor* class) appears in the\nclass dictionary of another new-style class, known as the *owner*\nclass. In the examples below, "the attribute" refers to the attribute\nwhose name is the key of the property in the owner class\'\n``__dict__``. Descriptors can only be implemented as new-style\nclasses themselves.\n\nobject.__get__(self, instance, owner)\n\n Called to get the attribute of the owner class (class attribute\n access) or of an instance of that class (instance attribute\n access). *owner* is always the owner class, while *instance* is the\n instance that the attribute was accessed through, or ``None`` when\n the attribute is accessed through the *owner*. This method should\n return the (computed) attribute value or raise an\n ``AttributeError`` exception.\n\nobject.__set__(self, instance, value)\n\n Called to set the attribute on an instance *instance* of the owner\n class to a new value, *value*.\n\nobject.__delete__(self, instance)\n\n Called to delete the attribute on an instance *instance* of the\n owner class.\n\n\nInvoking Descriptors\n--------------------\n\nIn general, a descriptor is an object attribute with "binding\nbehavior", one whose attribute access has been overridden by methods\nin the descriptor protocol: ``__get__()``, ``__set__()``, and\n``__delete__()``. 
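A minimal data-descriptor sketch (names invented) implementing the protocol just listed:

   class Typed(object):
       """Illustrative data descriptor: type-checks assignments to one attribute."""
       def __init__(self, name, kind):
           self.name, self.kind = name, kind
       def __get__(self, instance, owner):
           if instance is None:
               return self                      # accessed on the owner class itself
           return instance.__dict__.get(self.name)
       def __set__(self, instance, value):
           if not isinstance(value, self.kind):
               raise TypeError('expected %s' % self.kind.__name__)
           instance.__dict__[self.name] = value

   class Order(object):                         # the "owner" class
       quantity = Typed('quantity', int)

   o = Order()
   o.quantity = 3         # routed through Typed.__set__()
   print o.quantity       # 3, via Typed.__get__()
   # o.quantity = 'many'  # would raise TypeError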
If any of those methods are defined for an object,\nit is said to be a descriptor.\n\nThe default behavior for attribute access is to get, set, or delete\nthe attribute from an object\'s dictionary. For instance, ``a.x`` has a\nlookup chain starting with ``a.__dict__[\'x\']``, then\n``type(a).__dict__[\'x\']``, and continuing through the base classes of\n``type(a)`` excluding metaclasses.\n\nHowever, if the looked-up value is an object defining one of the\ndescriptor methods, then Python may override the default behavior and\ninvoke the descriptor method instead. Where this occurs in the\nprecedence chain depends on which descriptor methods were defined and\nhow they were called. Note that descriptors are only invoked for new\nstyle objects or classes (ones that subclass ``object()`` or\n``type()``).\n\nThe starting point for descriptor invocation is a binding, ``a.x``.\nHow the arguments are assembled depends on ``a``:\n\nDirect Call\n The simplest and least common call is when user code directly\n invokes a descriptor method: ``x.__get__(a)``.\n\nInstance Binding\n If binding to a new-style object instance, ``a.x`` is transformed\n into the call: ``type(a).__dict__[\'x\'].__get__(a, type(a))``.\n\nClass Binding\n If binding to a new-style class, ``A.x`` is transformed into the\n call: ``A.__dict__[\'x\'].__get__(None, A)``.\n\nSuper Binding\n If ``a`` is an instance of ``super``, then the binding ``super(B,\n obj).m()`` searches ``obj.__class__.__mro__`` for the base class\n ``A`` immediately preceding ``B`` and then invokes the descriptor\n with the call: ``A.__dict__[\'m\'].__get__(obj, A)``.\n\nFor instance bindings, the precedence of descriptor invocation depends\non the which descriptor methods are defined. A descriptor can define\nany combination of ``__get__()``, ``__set__()`` and ``__delete__()``.\nIf it does not define ``__get__()``, then accessing the attribute will\nreturn the descriptor object itself unless there is a value in the\nobject\'s instance dictionary. If the descriptor defines ``__set__()``\nand/or ``__delete__()``, it is a data descriptor; if it defines\nneither, it is a non-data descriptor. Normally, data descriptors\ndefine both ``__get__()`` and ``__set__()``, while non-data\ndescriptors have just the ``__get__()`` method. Data descriptors with\n``__set__()`` and ``__get__()`` defined always override a redefinition\nin an instance dictionary. In contrast, non-data descriptors can be\noverridden by instances.\n\nPython methods (including ``staticmethod()`` and ``classmethod()``)\nare implemented as non-data descriptors. Accordingly, instances can\nredefine and override methods. This allows individual instances to\nacquire behaviors that differ from other instances of the same class.\n\nThe ``property()`` function is implemented as a data descriptor.\nAccordingly, instances cannot override the behavior of a property.\n\n\n__slots__\n---------\n\nBy default, instances of both old and new-style classes have a\ndictionary for attribute storage. This wastes space for objects\nhaving very few instance variables. The space consumption can become\nacute when creating large numbers of instances.\n\nThe default can be overridden by defining *__slots__* in a new-style\nclass definition. The *__slots__* declaration takes a sequence of\ninstance variables and reserves just enough space in each instance to\nhold a value for each variable. 
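A brief sketch (hypothetical class) of a *__slots__* declaration as introduced here:

   class Vector(object):
       __slots__ = ('x', 'y')     # no per-instance __dict__ is created

       def __init__(self, x, y):
           self.x = x
           self.y = y

   v = Vector(1.0, 2.0)
   v.x = 3.0        # fine: 'x' is a declared slot
   # v.z = 4.0      # would raise AttributeError, since there is no __dict__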
Space is saved because *__dict__* is\nnot created for each instance.\n\n__slots__\n\n This class variable can be assigned a string, iterable, or sequence\n of strings with variable names used by instances. If defined in a\n new-style class, *__slots__* reserves space for the declared\n variables and prevents the automatic creation of *__dict__* and\n *__weakref__* for each instance.\n\n New in version 2.2.\n\nNotes on using *__slots__*\n\n* When inheriting from a class without *__slots__*, the *__dict__*\n attribute of that class will always be accessible, so a *__slots__*\n definition in the subclass is meaningless.\n\n* Without a *__dict__* variable, instances cannot be assigned new\n variables not listed in the *__slots__* definition. Attempts to\n assign to an unlisted variable name raises ``AttributeError``. If\n dynamic assignment of new variables is desired, then add\n ``\'__dict__\'`` to the sequence of strings in the *__slots__*\n declaration.\n\n Changed in version 2.3: Previously, adding ``\'__dict__\'`` to the\n *__slots__* declaration would not enable the assignment of new\n attributes not specifically listed in the sequence of instance\n variable names.\n\n* Without a *__weakref__* variable for each instance, classes defining\n *__slots__* do not support weak references to its instances. If weak\n reference support is needed, then add ``\'__weakref__\'`` to the\n sequence of strings in the *__slots__* declaration.\n\n Changed in version 2.3: Previously, adding ``\'__weakref__\'`` to the\n *__slots__* declaration would not enable support for weak\n references.\n\n* *__slots__* are implemented at the class level by creating\n descriptors (*Implementing Descriptors*) for each variable name. As\n a result, class attributes cannot be used to set default values for\n instance variables defined by *__slots__*; otherwise, the class\n attribute would overwrite the descriptor assignment.\n\n* The action of a *__slots__* declaration is limited to the class\n where it is defined. As a result, subclasses will have a *__dict__*\n unless they also define *__slots__* (which must only contain names\n of any *additional* slots).\n\n* If a class defines a slot also defined in a base class, the instance\n variable defined by the base class slot is inaccessible (except by\n retrieving its descriptor directly from the base class). This\n renders the meaning of the program undefined. In the future, a\n check may be added to prevent this.\n\n* Nonempty *__slots__* does not work for classes derived from\n "variable-length" built-in types such as ``long``, ``str`` and\n ``tuple``.\n\n* Any non-string iterable may be assigned to *__slots__*. Mappings may\n also be used; however, in the future, special meaning may be\n assigned to the values corresponding to each key.\n\n* *__class__* assignment works only if both classes have the same\n *__slots__*.\n\n Changed in version 2.6: Previously, *__class__* assignment raised an\n error if either new or old class had *__slots__*.\n\n\nCustomizing class creation\n==========================\n\nBy default, new-style classes are constructed using ``type()``. A\nclass definition is read into a separate namespace and the value of\nclass name is bound to the result of ``type(name, bases, dict)``.\n\nWhen the class definition is read, if *__metaclass__* is defined then\nthe callable assigned to it will be called instead of ``type()``. 
This\nallows classes or functions to be written which monitor or alter the\nclass creation process:\n\n* Modifying the class dictionary prior to the class being created.\n\n* Returning an instance of another class -- essentially performing the\n role of a factory function.\n\nThese steps will have to be performed in the metaclass\'s ``__new__()``\nmethod -- ``type.__new__()`` can then be called from this method to\ncreate a class with different properties. This example adds a new\nelement to the class dictionary before creating the class:\n\n class metacls(type):\n def __new__(mcs, name, bases, dict):\n dict[\'foo\'] = \'metacls was here\'\n return type.__new__(mcs, name, bases, dict)\n\nYou can of course also override other class methods (or add new\nmethods); for example defining a custom ``__call__()`` method in the\nmetaclass allows custom behavior when the class is called, e.g. not\nalways creating a new instance.\n\n__metaclass__\n\n This variable can be any callable accepting arguments for ``name``,\n ``bases``, and ``dict``. Upon class creation, the callable is used\n instead of the built-in ``type()``.\n\n New in version 2.2.\n\nThe appropriate metaclass is determined by the following precedence\nrules:\n\n* If ``dict[\'__metaclass__\']`` exists, it is used.\n\n* Otherwise, if there is at least one base class, its metaclass is\n used (this looks for a *__class__* attribute first and if not found,\n uses its type).\n\n* Otherwise, if a global variable named __metaclass__ exists, it is\n used.\n\n* Otherwise, the old-style, classic metaclass (types.ClassType) is\n used.\n\nThe potential uses for metaclasses are boundless. Some ideas that have\nbeen explored including logging, interface checking, automatic\ndelegation, automatic property creation, proxies, frameworks, and\nautomatic resource locking/synchronization.\n\n\nCustomizing instance and subclass checks\n========================================\n\nNew in version 2.6.\n\nThe following methods are used to override the default behavior of the\n``isinstance()`` and ``issubclass()`` built-in functions.\n\nIn particular, the metaclass ``abc.ABCMeta`` implements these methods\nin order to allow the addition of Abstract Base Classes (ABCs) as\n"virtual base classes" to any class or type (including built-in\ntypes), including other ABCs.\n\nclass.__instancecheck__(self, instance)\n\n Return true if *instance* should be considered a (direct or\n indirect) instance of *class*. If defined, called to implement\n ``isinstance(instance, class)``.\n\nclass.__subclasscheck__(self, subclass)\n\n Return true if *subclass* should be considered a (direct or\n indirect) subclass of *class*. If defined, called to implement\n ``issubclass(subclass, class)``.\n\nNote that these methods are looked up on the type (metaclass) of a\nclass. 
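An illustrative sketch (made-up metaclass) of these hooks; as noted, they are looked up on the metaclass, not defined in the class itself:

   class EvenMeta(type):
       def __instancecheck__(cls, instance):
           return isinstance(instance, int) and instance % 2 == 0
       def __subclasscheck__(cls, subclass):
           return issubclass(subclass, int)

   class Even(object):
       __metaclass__ = EvenMeta

   print isinstance(4, Even)      # True
   print isinstance(3, Even)      # False
   print issubclass(bool, Even)   # True: bool is a subclass of int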
They cannot be defined as class methods in the actual class.\nThis is consistent with the lookup of special methods that are called\non instances, only in this case the instance is itself a class.\n\nSee also:\n\n **PEP 3119** - Introducing Abstract Base Classes\n Includes the specification for customizing ``isinstance()`` and\n ``issubclass()`` behavior through ``__instancecheck__()`` and\n ``__subclasscheck__()``, with motivation for this functionality\n in the context of adding Abstract Base Classes (see the ``abc``\n module) to the language.\n\n\nEmulating callable objects\n==========================\n\nobject.__call__(self[, args...])\n\n Called when the instance is "called" as a function; if this method\n is defined, ``x(arg1, arg2, ...)`` is a shorthand for\n ``x.__call__(arg1, arg2, ...)``.\n\n\nEmulating container types\n=========================\n\nThe following methods can be defined to implement container objects.\nContainers usually are sequences (such as lists or tuples) or mappings\n(like dictionaries), but can represent other containers as well. The\nfirst set of methods is used either to emulate a sequence or to\nemulate a mapping; the difference is that for a sequence, the\nallowable keys should be the integers *k* for which ``0 <= k < N``\nwhere *N* is the length of the sequence, or slice objects, which\ndefine a range of items. (For backwards compatibility, the method\n``__getslice__()`` (see below) can also be defined to handle simple,\nbut not extended slices.) It is also recommended that mappings provide\nthe methods ``keys()``, ``values()``, ``items()``, ``has_key()``,\n``get()``, ``clear()``, ``setdefault()``, ``iterkeys()``,\n``itervalues()``, ``iteritems()``, ``pop()``, ``popitem()``,\n``copy()``, and ``update()`` behaving similar to those for Python\'s\nstandard dictionary objects. The ``UserDict`` module provides a\n``DictMixin`` class to help create those methods from a base set of\n``__getitem__()``, ``__setitem__()``, ``__delitem__()``, and\n``keys()``. Mutable sequences should provide methods ``append()``,\n``count()``, ``index()``, ``extend()``, ``insert()``, ``pop()``,\n``remove()``, ``reverse()`` and ``sort()``, like Python standard list\nobjects. Finally, sequence types should implement addition (meaning\nconcatenation) and multiplication (meaning repetition) by defining the\nmethods ``__add__()``, ``__radd__()``, ``__iadd__()``, ``__mul__()``,\n``__rmul__()`` and ``__imul__()`` described below; they should not\ndefine ``__coerce__()`` or other numerical operators. It is\nrecommended that both mappings and sequences implement the\n``__contains__()`` method to allow efficient use of the ``in``\noperator; for mappings, ``in`` should be equivalent of ``has_key()``;\nfor sequences, it should search through the values. It is further\nrecommended that both mappings and sequences implement the\n``__iter__()`` method to allow efficient iteration through the\ncontainer; for mappings, ``__iter__()`` should be the same as\n``iterkeys()``; for sequences, it should iterate through the values.\n\nobject.__len__(self)\n\n Called to implement the built-in function ``len()``. Should return\n the length of the object, an integer ``>=`` 0. Also, an object\n that doesn\'t define a ``__nonzero__()`` method and whose\n ``__len__()`` method returns zero is considered to be false in a\n Boolean context.\n\nobject.__getitem__(self, key)\n\n Called to implement evaluation of ``self[key]``. 
For sequence\n types, the accepted keys should be integers and slice objects.\n Note that the special interpretation of negative indexes (if the\n class wishes to emulate a sequence type) is up to the\n ``__getitem__()`` method. If *key* is of an inappropriate type,\n ``TypeError`` may be raised; if of a value outside the set of\n indexes for the sequence (after any special interpretation of\n negative values), ``IndexError`` should be raised. For mapping\n types, if *key* is missing (not in the container), ``KeyError``\n should be raised.\n\n Note: ``for`` loops expect that an ``IndexError`` will be raised for\n illegal indexes to allow proper detection of the end of the\n sequence.\n\nobject.__setitem__(self, key, value)\n\n Called to implement assignment to ``self[key]``. Same note as for\n ``__getitem__()``. This should only be implemented for mappings if\n the objects support changes to the values for keys, or if new keys\n can be added, or for sequences if elements can be replaced. The\n same exceptions should be raised for improper *key* values as for\n the ``__getitem__()`` method.\n\nobject.__delitem__(self, key)\n\n Called to implement deletion of ``self[key]``. Same note as for\n ``__getitem__()``. This should only be implemented for mappings if\n the objects support removal of keys, or for sequences if elements\n can be removed from the sequence. The same exceptions should be\n raised for improper *key* values as for the ``__getitem__()``\n method.\n\nobject.__iter__(self)\n\n This method is called when an iterator is required for a container.\n This method should return a new iterator object that can iterate\n over all the objects in the container. For mappings, it should\n iterate over the keys of the container, and should also be made\n available as the method ``iterkeys()``.\n\n Iterator objects also need to implement this method; they are\n required to return themselves. For more information on iterator\n objects, see *Iterator Types*.\n\nobject.__reversed__(self)\n\n Called (if present) by the ``reversed()`` built-in to implement\n reverse iteration. It should return a new iterator object that\n iterates over all the objects in the container in reverse order.\n\n If the ``__reversed__()`` method is not provided, the\n ``reversed()`` built-in will fall back to using the sequence\n protocol (``__len__()`` and ``__getitem__()``). Objects that\n support the sequence protocol should only provide\n ``__reversed__()`` if they can provide an implementation that is\n more efficient than the one provided by ``reversed()``.\n\n New in version 2.6.\n\nThe membership test operators (``in`` and ``not in``) are normally\nimplemented as an iteration through a sequence. However, container\nobjects can supply the following special method with a more efficient\nimplementation, which also does not require the object be a sequence.\n\nobject.__contains__(self, item)\n\n Called to implement membership test operators. Should return true\n if *item* is in *self*, false otherwise. 
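A compact sketch (invented class) tying several of these container methods together for a read-only sequence; negative-index and slice handling are deliberately omitted:

   class Squares(object):
       """Illustrative read-only sequence of the first n squares."""
       def __init__(self, n):
           self.n = n
       def __len__(self):
           return self.n
       def __getitem__(self, index):
           if not 0 <= index < self.n:
               raise IndexError(index)    # lets 'for' loops detect the end
           return index * index
       def __iter__(self):
           return iter([i * i for i in range(self.n)])
       def __contains__(self, item):
           return any(i * i == item for i in range(self.n))

   s = Squares(5)
   print len(s), s[3], 9 in s     # 5 9 True
   print list(s)                  # [0, 1, 4, 9, 16]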
For mapping objects, this\n should consider the keys of the mapping rather than the values or\n the key-item pairs.\n\n For objects that don\'t define ``__contains__()``, the membership\n test first tries iteration via ``__iter__()``, then the old\n sequence iteration protocol via ``__getitem__()``, see *this\n section in the language reference*.\n\n\nAdditional methods for emulation of sequence types\n==================================================\n\nThe following optional methods can be defined to further emulate\nsequence objects. Immutable sequences methods should at most only\ndefine ``__getslice__()``; mutable sequences might define all three\nmethods.\n\nobject.__getslice__(self, i, j)\n\n Deprecated since version 2.0: Support slice objects as parameters\n to the ``__getitem__()`` method. (However, built-in types in\n CPython currently still implement ``__getslice__()``. Therefore,\n you have to override it in derived classes when implementing\n slicing.)\n\n Called to implement evaluation of ``self[i:j]``. The returned\n object should be of the same type as *self*. Note that missing *i*\n or *j* in the slice expression are replaced by zero or\n ``sys.maxint``, respectively. If negative indexes are used in the\n slice, the length of the sequence is added to that index. If the\n instance does not implement the ``__len__()`` method, an\n ``AttributeError`` is raised. No guarantee is made that indexes\n adjusted this way are not still negative. Indexes which are\n greater than the length of the sequence are not modified. If no\n ``__getslice__()`` is found, a slice object is created instead, and\n passed to ``__getitem__()`` instead.\n\nobject.__setslice__(self, i, j, sequence)\n\n Called to implement assignment to ``self[i:j]``. Same notes for *i*\n and *j* as for ``__getslice__()``.\n\n This method is deprecated. If no ``__setslice__()`` is found, or\n for extended slicing of the form ``self[i:j:k]``, a slice object is\n created, and passed to ``__setitem__()``, instead of\n ``__setslice__()`` being called.\n\nobject.__delslice__(self, i, j)\n\n Called to implement deletion of ``self[i:j]``. Same notes for *i*\n and *j* as for ``__getslice__()``. This method is deprecated. If no\n ``__delslice__()`` is found, or for extended slicing of the form\n ``self[i:j:k]``, a slice object is created, and passed to\n ``__delitem__()``, instead of ``__delslice__()`` being called.\n\nNotice that these methods are only invoked when a single slice with a\nsingle colon is used, and the slice method is available. 
For slice\noperations involving extended slice notation, or in absence of the\nslice methods, ``__getitem__()``, ``__setitem__()`` or\n``__delitem__()`` is called with a slice object as argument.\n\nThe following example demonstrate how to make your program or module\ncompatible with earlier versions of Python (assuming that methods\n``__getitem__()``, ``__setitem__()`` and ``__delitem__()`` support\nslice objects as arguments):\n\n class MyClass:\n ...\n def __getitem__(self, index):\n ...\n def __setitem__(self, index, value):\n ...\n def __delitem__(self, index):\n ...\n\n if sys.version_info < (2, 0):\n # They won\'t be defined if version is at least 2.0 final\n\n def __getslice__(self, i, j):\n return self[max(0, i):max(0, j):]\n def __setslice__(self, i, j, seq):\n self[max(0, i):max(0, j):] = seq\n def __delslice__(self, i, j):\n del self[max(0, i):max(0, j):]\n ...\n\nNote the calls to ``max()``; these are necessary because of the\nhandling of negative indices before the ``__*slice__()`` methods are\ncalled. When negative indexes are used, the ``__*item__()`` methods\nreceive them as provided, but the ``__*slice__()`` methods get a\n"cooked" form of the index values. For each negative index value, the\nlength of the sequence is added to the index before calling the method\n(which may still result in a negative index); this is the customary\nhandling of negative indexes by the built-in sequence types, and the\n``__*item__()`` methods are expected to do this as well. However,\nsince they should already be doing that, negative indexes cannot be\npassed in; they must be constrained to the bounds of the sequence\nbefore being passed to the ``__*item__()`` methods. Calling ``max(0,\ni)`` conveniently returns the proper value.\n\n\nEmulating numeric types\n=======================\n\nThe following methods can be defined to emulate numeric objects.\nMethods corresponding to operations that are not supported by the\nparticular kind of number implemented (e.g., bitwise operations for\nnon-integral numbers) should be left undefined.\n\nobject.__add__(self, other)\nobject.__sub__(self, other)\nobject.__mul__(self, other)\nobject.__floordiv__(self, other)\nobject.__mod__(self, other)\nobject.__divmod__(self, other)\nobject.__pow__(self, other[, modulo])\nobject.__lshift__(self, other)\nobject.__rshift__(self, other)\nobject.__and__(self, other)\nobject.__xor__(self, other)\nobject.__or__(self, other)\n\n These methods are called to implement the binary arithmetic\n operations (``+``, ``-``, ``*``, ``//``, ``%``, ``divmod()``,\n ``pow()``, ``**``, ``<<``, ``>>``, ``&``, ``^``, ``|``). For\n instance, to evaluate the expression ``x + y``, where *x* is an\n instance of a class that has an ``__add__()`` method,\n ``x.__add__(y)`` is called. The ``__divmod__()`` method should be\n the equivalent to using ``__floordiv__()`` and ``__mod__()``; it\n should not be related to ``__truediv__()`` (described below). Note\n that ``__pow__()`` should be defined to accept an optional third\n argument if the ternary version of the built-in ``pow()`` function\n is to be supported.\n\n If one of those methods does not support the operation with the\n supplied arguments, it should return ``NotImplemented``.\n\nobject.__div__(self, other)\nobject.__truediv__(self, other)\n\n The division operator (``/``) is implemented by these methods. The\n ``__truediv__()`` method is used when ``__future__.division`` is in\n effect, otherwise ``__div__()`` is used. 
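An illustrative sketch (hypothetical class) of the ``NotImplemented`` convention for the binary arithmetic methods described above and the reflected methods described just below:

   class Money(object):
       def __init__(self, cents):
           self.cents = cents
       def __add__(self, other):
           if isinstance(other, Money):
               return Money(self.cents + other.cents)
           return NotImplemented       # let Python try other.__radd__() or raise TypeError
       def __radd__(self, other):
           if other == 0:              # so that sum() starting from 0 works
               return self
           return NotImplemented

   print (Money(150) + Money(250)).cents             # 400
   print sum([Money(1), Money(2), Money(3)]).cents   # 6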
If only one of these two\n methods is defined, the object will not support division in the\n alternate context; ``TypeError`` will be raised instead.\n\nobject.__radd__(self, other)\nobject.__rsub__(self, other)\nobject.__rmul__(self, other)\nobject.__rdiv__(self, other)\nobject.__rtruediv__(self, other)\nobject.__rfloordiv__(self, other)\nobject.__rmod__(self, other)\nobject.__rdivmod__(self, other)\nobject.__rpow__(self, other)\nobject.__rlshift__(self, other)\nobject.__rrshift__(self, other)\nobject.__rand__(self, other)\nobject.__rxor__(self, other)\nobject.__ror__(self, other)\n\n These methods are called to implement the binary arithmetic\n operations (``+``, ``-``, ``*``, ``/``, ``%``, ``divmod()``,\n ``pow()``, ``**``, ``<<``, ``>>``, ``&``, ``^``, ``|``) with\n reflected (swapped) operands. These functions are only called if\n the left operand does not support the corresponding operation and\n the operands are of different types. [2] For instance, to evaluate\n the expression ``x - y``, where *y* is an instance of a class that\n has an ``__rsub__()`` method, ``y.__rsub__(x)`` is called if\n ``x.__sub__(y)`` returns *NotImplemented*.\n\n Note that ternary ``pow()`` will not try calling ``__rpow__()``\n (the coercion rules would become too complicated).\n\n Note: If the right operand\'s type is a subclass of the left operand\'s\n type and that subclass provides the reflected method for the\n operation, this method will be called before the left operand\'s\n non-reflected method. This behavior allows subclasses to\n override their ancestors\' operations.\n\nobject.__iadd__(self, other)\nobject.__isub__(self, other)\nobject.__imul__(self, other)\nobject.__idiv__(self, other)\nobject.__itruediv__(self, other)\nobject.__ifloordiv__(self, other)\nobject.__imod__(self, other)\nobject.__ipow__(self, other[, modulo])\nobject.__ilshift__(self, other)\nobject.__irshift__(self, other)\nobject.__iand__(self, other)\nobject.__ixor__(self, other)\nobject.__ior__(self, other)\n\n These methods are called to implement the augmented arithmetic\n assignments (``+=``, ``-=``, ``*=``, ``/=``, ``//=``, ``%=``,\n ``**=``, ``<<=``, ``>>=``, ``&=``, ``^=``, ``|=``). These methods\n should attempt to do the operation in-place (modifying *self*) and\n return the result (which could be, but does not have to be,\n *self*). If a specific method is not defined, the augmented\n assignment falls back to the normal methods. For instance, to\n execute the statement ``x += y``, where *x* is an instance of a\n class that has an ``__iadd__()`` method, ``x.__iadd__(y)`` is\n called. If *x* is an instance of a class that does not define a\n ``__iadd__()`` method, ``x.__add__(y)`` and ``y.__radd__(x)`` are\n considered, as with the evaluation of ``x + y``.\n\nobject.__neg__(self)\nobject.__pos__(self)\nobject.__abs__(self)\nobject.__invert__(self)\n\n Called to implement the unary arithmetic operations (``-``, ``+``,\n ``abs()`` and ``~``).\n\nobject.__complex__(self)\nobject.__int__(self)\nobject.__long__(self)\nobject.__float__(self)\n\n Called to implement the built-in functions ``complex()``,\n ``int()``, ``long()``, and ``float()``. Should return a value of\n the appropriate type.\n\nobject.__oct__(self)\nobject.__hex__(self)\n\n Called to implement the built-in functions ``oct()`` and ``hex()``.\n Should return a string value.\n\nobject.__index__(self)\n\n Called to implement ``operator.index()``. Also called whenever\n Python needs an integer object (such as in slicing). 
Must return\n an integer (int or long).\n\n New in version 2.5.\n\nobject.__coerce__(self, other)\n\n Called to implement "mixed-mode" numeric arithmetic. Should either\n return a 2-tuple containing *self* and *other* converted to a\n common numeric type, or ``None`` if conversion is impossible. When\n the common type would be the type of ``other``, it is sufficient to\n return ``None``, since the interpreter will also ask the other\n object to attempt a coercion (but sometimes, if the implementation\n of the other type cannot be changed, it is useful to do the\n conversion to the other type here). A return value of\n ``NotImplemented`` is equivalent to returning ``None``.\n\n\nCoercion rules\n==============\n\nThis section used to document the rules for coercion. As the language\nhas evolved, the coercion rules have become hard to document\nprecisely; documenting what one version of one particular\nimplementation does is undesirable. Instead, here are some informal\nguidelines regarding coercion. In Python 3.0, coercion will not be\nsupported.\n\n* If the left operand of a % operator is a string or Unicode object,\n no coercion takes place and the string formatting operation is\n invoked instead.\n\n* It is no longer recommended to define a coercion operation. Mixed-\n mode operations on types that don\'t define coercion pass the\n original arguments to the operation.\n\n* New-style classes (those derived from ``object``) never invoke the\n ``__coerce__()`` method in response to a binary operator; the only\n time ``__coerce__()`` is invoked is when the built-in function\n ``coerce()`` is called.\n\n* For most intents and purposes, an operator that returns\n ``NotImplemented`` is treated the same as one that is not\n implemented at all.\n\n* Below, ``__op__()`` and ``__rop__()`` are used to signify the\n generic method names corresponding to an operator; ``__iop__()`` is\n used for the corresponding in-place operator. For example, for the\n operator \'``+``\', ``__add__()`` and ``__radd__()`` are used for the\n left and right variant of the binary operator, and ``__iadd__()``\n for the in-place variant.\n\n* For objects *x* and *y*, first ``x.__op__(y)`` is tried. If this is\n not implemented or returns ``NotImplemented``, ``y.__rop__(x)`` is\n tried. If this is also not implemented or returns\n ``NotImplemented``, a ``TypeError`` exception is raised. But see\n the following exception:\n\n* Exception to the previous item: if the left operand is an instance\n of a built-in type or a new-style class, and the right operand is an\n instance of a proper subclass of that type or class and overrides\n the base\'s ``__rop__()`` method, the right operand\'s ``__rop__()``\n method is tried *before* the left operand\'s ``__op__()`` method.\n\n This is done so that a subclass can completely override binary\n operators. Otherwise, the left operand\'s ``__op__()`` method would\n always accept the right operand: when an instance of a given class\n is expected, an instance of a subclass of that class is always\n acceptable.\n\n* When either operand type defines a coercion, this coercion is called\n before that type\'s ``__op__()`` or ``__rop__()`` method is called,\n but no sooner. If the coercion returns an object of a different\n type for the operand whose coercion is invoked, part of the process\n is redone using the new object.\n\n* When an in-place operator (like \'``+=``\') is used, if the left\n operand implements ``__iop__()``, it is invoked without any\n coercion. 
When the operation falls back to ``__op__()`` and/or\n ``__rop__()``, the normal coercion rules apply.\n\n* In ``x + y``, if *x* is a sequence that implements sequence\n concatenation, sequence concatenation is invoked.\n\n* In ``x * y``, if one operator is a sequence that implements sequence\n repetition, and the other is an integer (``int`` or ``long``),\n sequence repetition is invoked.\n\n* Rich comparisons (implemented by methods ``__eq__()`` and so on)\n never use coercion. Three-way comparison (implemented by\n ``__cmp__()``) does use coercion under the same conditions as other\n binary operations use it.\n\n* In the current implementation, the built-in numeric types ``int``,\n ``long``, ``float``, and ``complex`` do not use coercion. All these\n types implement a ``__coerce__()`` method, for use by the built-in\n ``coerce()`` function.\n\n Changed in version 2.7.\n\n\nWith Statement Context Managers\n===============================\n\nNew in version 2.5.\n\nA *context manager* is an object that defines the runtime context to\nbe established when executing a ``with`` statement. The context\nmanager handles the entry into, and the exit from, the desired runtime\ncontext for the execution of the block of code. Context managers are\nnormally invoked using the ``with`` statement (described in section\n*The with statement*), but can also be used by directly invoking their\nmethods.\n\nTypical uses of context managers include saving and restoring various\nkinds of global state, locking and unlocking resources, closing opened\nfiles, etc.\n\nFor more information on context managers, see *Context Manager Types*.\n\nobject.__enter__(self)\n\n Enter the runtime context related to this object. The ``with``\n statement will bind this method\'s return value to the target(s)\n specified in the ``as`` clause of the statement, if any.\n\nobject.__exit__(self, exc_type, exc_value, traceback)\n\n Exit the runtime context related to this object. The parameters\n describe the exception that caused the context to be exited. If the\n context was exited without an exception, all three arguments will\n be ``None``.\n\n If an exception is supplied, and the method wishes to suppress the\n exception (i.e., prevent it from being propagated), it should\n return a true value. Otherwise, the exception will be processed\n normally upon exit from this method.\n\n Note that ``__exit__()`` methods should not reraise the passed-in\n exception; this is the caller\'s responsibility.\n\nSee also:\n\n **PEP 0343** - The "with" statement\n The specification, background, and examples for the Python\n ``with`` statement.\n\n\nSpecial method lookup for old-style classes\n===========================================\n\nFor old-style classes, special methods are always looked up in exactly\nthe same way as any other method or attribute. This is the case\nregardless of whether the method is being looked up explicitly as in\n``x.__getitem__(i)`` or implicitly as in ``x[i]``.\n\nThis behaviour means that special methods may exhibit different\nbehaviour for different instances of a single old-style class if the\nappropriate special attributes are set differently:\n\n >>> class C:\n ... 
pass\n ...\n >>> c1 = C()\n >>> c2 = C()\n >>> c1.__len__ = lambda: 5\n >>> c2.__len__ = lambda: 9\n >>> len(c1)\n 5\n >>> len(c2)\n 9\n\n\nSpecial method lookup for new-style classes\n===========================================\n\nFor new-style classes, implicit invocations of special methods are\nonly guaranteed to work correctly if defined on an object\'s type, not\nin the object\'s instance dictionary. That behaviour is the reason why\nthe following code raises an exception (unlike the equivalent example\nwith old-style classes):\n\n >>> class C(object):\n ... pass\n ...\n >>> c = C()\n >>> c.__len__ = lambda: 5\n >>> len(c)\n Traceback (most recent call last):\n File "<stdin>", line 1, in <module>\n TypeError: object of type \'C\' has no len()\n\nThe rationale behind this behaviour lies with a number of special\nmethods such as ``__hash__()`` and ``__repr__()`` that are implemented\nby all objects, including type objects. If the implicit lookup of\nthese methods used the conventional lookup process, they would fail\nwhen invoked on the type object itself:\n\n >>> 1 .__hash__() == hash(1)\n True\n >>> int.__hash__() == hash(int)\n Traceback (most recent call last):\n File "<stdin>", line 1, in <module>\n TypeError: descriptor \'__hash__\' of \'int\' object needs an argument\n\nIncorrectly attempting to invoke an unbound method of a class in this\nway is sometimes referred to as \'metaclass confusion\', and is avoided\nby bypassing the instance when looking up special methods:\n\n >>> type(1).__hash__(1) == hash(1)\n True\n >>> type(int).__hash__(int) == hash(int)\n True\n\nIn addition to bypassing any instance attributes in the interest of\ncorrectness, implicit special method lookup generally also bypasses\nthe ``__getattribute__()`` method even of the object\'s metaclass:\n\n >>> class Meta(type):\n ... def __getattribute__(*args):\n ... print "Metaclass getattribute invoked"\n ... return type.__getattribute__(*args)\n ...\n >>> class C(object):\n ... __metaclass__ = Meta\n ... def __len__(self):\n ... return 10\n ... def __getattribute__(*args):\n ... print "Class getattribute invoked"\n ... return object.__getattribute__(*args)\n ...\n >>> c = C()\n >>> c.__len__() # Explicit lookup via instance\n Class getattribute invoked\n 10\n >>> type(c).__len__(c) # Explicit lookup via type\n Metaclass getattribute invoked\n 10\n >>> len(c) # Implicit lookup\n 10\n\nBypassing the ``__getattribute__()`` machinery in this fashion\nprovides significant scope for speed optimisations within the\ninterpreter, at the cost of some flexibility in the handling of\nspecial methods (the special method *must* be set on the class object\nitself in order to be consistently invoked by the interpreter).\n\n-[ Footnotes ]-\n\n[1] It *is* possible in some cases to change an object\'s type, under\n certain controlled conditions. It generally isn\'t a good idea\n though, since it can lead to some very strange behaviour if it is\n handled incorrectly.\n\n[2] For operands of the same type, it is assumed that if the non-\n reflected method (such as ``__add__()``) fails the operation is\n not supported, which is why the reflected method is not called.\n', + 'specialnames': u'\nSpecial method names\n********************\n\nA class can implement certain operations that are invoked by special\nsyntax (such as arithmetic operations or subscripting and slicing) by\ndefining methods with special names. This is Python\'s approach to\n*operator overloading*, allowing classes to define their own behavior\nwith respect to language operators. 
For instance, if a class defines\na method named ``__getitem__()``, and ``x`` is an instance of this\nclass, then ``x[i]`` is roughly equivalent to ``x.__getitem__(i)`` for\nold-style classes and ``type(x).__getitem__(x, i)`` for new-style\nclasses. Except where mentioned, attempts to execute an operation\nraise an exception when no appropriate method is defined (typically\n``AttributeError`` or ``TypeError``).\n\nWhen implementing a class that emulates any built-in type, it is\nimportant that the emulation only be implemented to the degree that it\nmakes sense for the object being modelled. For example, some\nsequences may work well with retrieval of individual elements, but\nextracting a slice may not make sense. (One example of this is the\n``NodeList`` interface in the W3C\'s Document Object Model.)\n\n\nBasic customization\n===================\n\nobject.__new__(cls[, ...])\n\n Called to create a new instance of class *cls*. ``__new__()`` is a\n static method (special-cased so you need not declare it as such)\n that takes the class of which an instance was requested as its\n first argument. The remaining arguments are those passed to the\n object constructor expression (the call to the class). The return\n value of ``__new__()`` should be the new object instance (usually\n an instance of *cls*).\n\n Typical implementations create a new instance of the class by\n invoking the superclass\'s ``__new__()`` method using\n ``super(currentclass, cls).__new__(cls[, ...])`` with appropriate\n arguments and then modifying the newly-created instance as\n necessary before returning it.\n\n If ``__new__()`` returns an instance of *cls*, then the new\n instance\'s ``__init__()`` method will be invoked like\n ``__init__(self[, ...])``, where *self* is the new instance and the\n remaining arguments are the same as were passed to ``__new__()``.\n\n If ``__new__()`` does not return an instance of *cls*, then the new\n instance\'s ``__init__()`` method will not be invoked.\n\n ``__new__()`` is intended mainly to allow subclasses of immutable\n types (like int, str, or tuple) to customize instance creation. It\n is also commonly overridden in custom metaclasses in order to\n customize class creation.\n\nobject.__init__(self[, ...])\n\n Called when the instance is created. The arguments are those\n passed to the class constructor expression. If a base class has an\n ``__init__()`` method, the derived class\'s ``__init__()`` method,\n if any, must explicitly call it to ensure proper initialization of\n the base class part of the instance; for example:\n ``BaseClass.__init__(self, [args...])``. As a special constraint\n on constructors, no value may be returned; doing so will cause a\n ``TypeError`` to be raised at runtime.\n\nobject.__del__(self)\n\n Called when the instance is about to be destroyed. This is also\n called a destructor. If a base class has a ``__del__()`` method,\n the derived class\'s ``__del__()`` method, if any, must explicitly\n call it to ensure proper deletion of the base class part of the\n instance. Note that it is possible (though not recommended!) for\n the ``__del__()`` method to postpone destruction of the instance by\n creating a new reference to it. It may then be called at a later\n time when this new reference is deleted. 
It is not guaranteed that\n ``__del__()`` methods are called for objects that still exist when\n the interpreter exits.\n\n Note: ``del x`` doesn\'t directly call ``x.__del__()`` --- the former\n decrements the reference count for ``x`` by one, and the latter\n is only called when ``x``\'s reference count reaches zero. Some\n common situations that may prevent the reference count of an\n object from going to zero include: circular references between\n objects (e.g., a doubly-linked list or a tree data structure with\n parent and child pointers); a reference to the object on the\n stack frame of a function that caught an exception (the traceback\n stored in ``sys.exc_traceback`` keeps the stack frame alive); or\n a reference to the object on the stack frame that raised an\n unhandled exception in interactive mode (the traceback stored in\n ``sys.last_traceback`` keeps the stack frame alive). The first\n situation can only be remedied by explicitly breaking the cycles;\n the latter two situations can be resolved by storing ``None`` in\n ``sys.exc_traceback`` or ``sys.last_traceback``. Circular\n references which are garbage are detected when the option cycle\n detector is enabled (it\'s on by default), but can only be cleaned\n up if there are no Python-level ``__del__()`` methods involved.\n Refer to the documentation for the ``gc`` module for more\n information about how ``__del__()`` methods are handled by the\n cycle detector, particularly the description of the ``garbage``\n value.\n\n Warning: Due to the precarious circumstances under which ``__del__()``\n methods are invoked, exceptions that occur during their execution\n are ignored, and a warning is printed to ``sys.stderr`` instead.\n Also, when ``__del__()`` is invoked in response to a module being\n deleted (e.g., when execution of the program is done), other\n globals referenced by the ``__del__()`` method may already have\n been deleted or in the process of being torn down (e.g. the\n import machinery shutting down). For this reason, ``__del__()``\n methods should do the absolute minimum needed to maintain\n external invariants. Starting with version 1.5, Python\n guarantees that globals whose name begins with a single\n underscore are deleted from their module before other globals are\n deleted; if no other references to such globals exist, this may\n help in assuring that imported modules are still available at the\n time when the ``__del__()`` method is called.\n\nobject.__repr__(self)\n\n Called by the ``repr()`` built-in function and by string\n conversions (reverse quotes) to compute the "official" string\n representation of an object. If at all possible, this should look\n like a valid Python expression that could be used to recreate an\n object with the same value (given an appropriate environment). If\n this is not possible, a string of the form ``<...some useful\n description...>`` should be returned. The return value must be a\n string object. If a class defines ``__repr__()`` but not\n ``__str__()``, then ``__repr__()`` is also used when an "informal"\n string representation of instances of that class is required.\n\n This is typically used for debugging, so it is important that the\n representation is information-rich and unambiguous.\n\nobject.__str__(self)\n\n Called by the ``str()`` built-in function and by the ``print``\n statement to compute the "informal" string representation of an\n object. 
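   Purely as a sketch (the ``Point`` class is hypothetical, not part of
   any standard library), the two methods often look like this:

      class Point(object):
          # purely illustrative class, not a real API
          def __init__(self, x, y):
              self.x, self.y = x, y
          def __repr__(self):
              # unambiguous form, ideally a valid Python expression
              return 'Point(%r, %r)' % (self.x, self.y)
          def __str__(self):
              # friendlier form used by str() and print
              return '(%s, %s)' % (self.x, self.y)

      p = Point(2, 3)
      print repr(p)   # Point(2, 3)
      print p         # (2, 3)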
This differs from ``__repr__()`` in that it does not have\n to be a valid Python expression: a more convenient or concise\n representation may be used instead. The return value must be a\n string object.\n\nobject.__lt__(self, other)\nobject.__le__(self, other)\nobject.__eq__(self, other)\nobject.__ne__(self, other)\nobject.__gt__(self, other)\nobject.__ge__(self, other)\n\n New in version 2.1.\n\n These are the so-called "rich comparison" methods, and are called\n for comparison operators in preference to ``__cmp__()`` below. The\n correspondence between operator symbols and method names is as\n follows: ``x<y`` calls ``x.__lt__(y)``, ``x<=y`` calls\n ``x.__le__(y)``, ``x==y`` calls ``x.__eq__(y)``, ``x!=y`` and\n ``x<>y`` call ``x.__ne__(y)``, ``x>y`` calls ``x.__gt__(y)``, and\n ``x>=y`` calls ``x.__ge__(y)``.\n\n A rich comparison method may return the singleton\n ``NotImplemented`` if it does not implement the operation for a\n given pair of arguments. By convention, ``False`` and ``True`` are\n returned for a successful comparison. However, these methods can\n return any value, so if the comparison operator is used in a\n Boolean context (e.g., in the condition of an ``if`` statement),\n Python will call ``bool()`` on the value to determine if the result\n is true or false.\n\n There are no implied relationships among the comparison operators.\n The truth of ``x==y`` does not imply that ``x!=y`` is false.\n Accordingly, when defining ``__eq__()``, one should also define\n ``__ne__()`` so that the operators will behave as expected. See\n the paragraph on ``__hash__()`` for some important notes on\n creating *hashable* objects which support custom comparison\n operations and are usable as dictionary keys.\n\n There are no swapped-argument versions of these methods (to be used\n when the left argument does not support the operation but the right\n argument does); rather, ``__lt__()`` and ``__gt__()`` are each\n other\'s reflection, ``__le__()`` and ``__ge__()`` are each other\'s\n reflection, and ``__eq__()`` and ``__ne__()`` are their own\n reflection.\n\n Arguments to rich comparison methods are never coerced.\n\n To automatically generate ordering operations from a single root\n operation, see ``functools.total_ordering()``.\n\nobject.__cmp__(self, other)\n\n Called by comparison operations if rich comparison (see above) is\n not defined. Should return a negative integer if ``self < other``,\n zero if ``self == other``, a positive integer if ``self > other``.\n If no ``__cmp__()``, ``__eq__()`` or ``__ne__()`` operation is\n defined, class instances are compared by object identity\n ("address"). See also the description of ``__hash__()`` for some\n important notes on creating *hashable* objects which support custom\n comparison operations and are usable as dictionary keys. (Note: the\n restriction that exceptions are not propagated by ``__cmp__()`` has\n been removed since Python 1.5.)\n\nobject.__rcmp__(self, other)\n\n Changed in version 2.1: No longer supported.\n\nobject.__hash__(self)\n\n Called by built-in function ``hash()`` and for operations on\n members of hashed collections including ``set``, ``frozenset``, and\n ``dict``. ``__hash__()`` should return an integer. The only\n required property is that objects which compare equal have the same\n hash value; it is advised to somehow mix together (e.g. 
using\n exclusive or) the hash values for the components of the object that\n also play a part in comparison of objects.\n\n If a class does not define a ``__cmp__()`` or ``__eq__()`` method\n it should not define a ``__hash__()`` operation either; if it\n defines ``__cmp__()`` or ``__eq__()`` but not ``__hash__()``, its\n instances will not be usable in hashed collections. If a class\n defines mutable objects and implements a ``__cmp__()`` or\n ``__eq__()`` method, it should not implement ``__hash__()``, since\n hashable collection implementations require that a object\'s hash\n value is immutable (if the object\'s hash value changes, it will be\n in the wrong hash bucket).\n\n User-defined classes have ``__cmp__()`` and ``__hash__()`` methods\n by default; with them, all objects compare unequal (except with\n themselves) and ``x.__hash__()`` returns ``id(x)``.\n\n Classes which inherit a ``__hash__()`` method from a parent class\n but change the meaning of ``__cmp__()`` or ``__eq__()`` such that\n the hash value returned is no longer appropriate (e.g. by switching\n to a value-based concept of equality instead of the default\n identity based equality) can explicitly flag themselves as being\n unhashable by setting ``__hash__ = None`` in the class definition.\n Doing so means that not only will instances of the class raise an\n appropriate ``TypeError`` when a program attempts to retrieve their\n hash value, but they will also be correctly identified as\n unhashable when checking ``isinstance(obj, collections.Hashable)``\n (unlike classes which define their own ``__hash__()`` to explicitly\n raise ``TypeError``).\n\n Changed in version 2.5: ``__hash__()`` may now also return a long\n integer object; the 32-bit integer is then derived from the hash of\n that object.\n\n Changed in version 2.6: ``__hash__`` may now be set to ``None`` to\n explicitly flag instances of a class as unhashable.\n\nobject.__nonzero__(self)\n\n Called to implement truth value testing and the built-in operation\n ``bool()``; should return ``False`` or ``True``, or their integer\n equivalents ``0`` or ``1``. When this method is not defined,\n ``__len__()`` is called, if it is defined, and the object is\n considered true if its result is nonzero. If a class defines\n neither ``__len__()`` nor ``__nonzero__()``, all its instances are\n considered true.\n\nobject.__unicode__(self)\n\n Called to implement ``unicode()`` built-in; should return a Unicode\n object. When this method is not defined, string conversion is\n attempted, and the result of string conversion is converted to\n Unicode using the system default encoding.\n\n\nCustomizing attribute access\n============================\n\nThe following methods can be defined to customize the meaning of\nattribute access (use of, assignment to, or deletion of ``x.name``)\nfor class instances.\n\nobject.__getattr__(self, name)\n\n Called when an attribute lookup has not found the attribute in the\n usual places (i.e. it is not an instance attribute nor is it found\n in the class tree for ``self``). ``name`` is the attribute name.\n This method should return the (computed) attribute value or raise\n an ``AttributeError`` exception.\n\n Note that if the attribute is found through the normal mechanism,\n ``__getattr__()`` is not called. (This is an intentional asymmetry\n between ``__getattr__()`` and ``__setattr__()``.) This is done both\n for efficiency reasons and because otherwise ``__getattr__()``\n would have no way to access other attributes of the instance. 
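   A minimal sketch, assuming an invented ``Config`` class whose missing
   attributes are looked up in a plain dictionary:

      class Config(object):
          # purely illustrative class, not a real API
          def __init__(self, defaults):
              self.defaults = dict(defaults)
          def __getattr__(self, name):
              # only reached after normal lookup has already failed
              try:
                  return self.defaults[name]
              except KeyError:
                  raise AttributeError(name)

      cfg = Config({'host': 'localhost', 'port': 8080})
      print cfg.host        # 'localhost', via __getattr__
      print cfg.defaults    # found normally; __getattr__ is not called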
Note\n that at least for instance variables, you can fake total control by\n not inserting any values in the instance attribute dictionary (but\n instead inserting them in another object). See the\n ``__getattribute__()`` method below for a way to actually get total\n control in new-style classes.\n\nobject.__setattr__(self, name, value)\n\n Called when an attribute assignment is attempted. This is called\n instead of the normal mechanism (i.e. store the value in the\n instance dictionary). *name* is the attribute name, *value* is the\n value to be assigned to it.\n\n If ``__setattr__()`` wants to assign to an instance attribute, it\n should not simply execute ``self.name = value`` --- this would\n cause a recursive call to itself. Instead, it should insert the\n value in the dictionary of instance attributes, e.g.,\n ``self.__dict__[name] = value``. For new-style classes, rather\n than accessing the instance dictionary, it should call the base\n class method with the same name, for example,\n ``object.__setattr__(self, name, value)``.\n\nobject.__delattr__(self, name)\n\n Like ``__setattr__()`` but for attribute deletion instead of\n assignment. This should only be implemented if ``del obj.name`` is\n meaningful for the object.\n\n\nMore attribute access for new-style classes\n-------------------------------------------\n\nThe following methods only apply to new-style classes.\n\nobject.__getattribute__(self, name)\n\n Called unconditionally to implement attribute accesses for\n instances of the class. If the class also defines\n ``__getattr__()``, the latter will not be called unless\n ``__getattribute__()`` either calls it explicitly or raises an\n ``AttributeError``. This method should return the (computed)\n attribute value or raise an ``AttributeError`` exception. In order\n to avoid infinite recursion in this method, its implementation\n should always call the base class method with the same name to\n access any attributes it needs, for example,\n ``object.__getattribute__(self, name)``.\n\n Note: This method may still be bypassed when looking up special methods\n as the result of implicit invocation via language syntax or\n built-in functions. See *Special method lookup for new-style\n classes*.\n\n\nImplementing Descriptors\n------------------------\n\nThe following methods only apply when an instance of the class\ncontaining the method (a so-called *descriptor* class) appears in an\n*owner* class (the descriptor must be in either the owner\'s class\ndictionary or in the class dictionary for one of its parents). In the\nexamples below, "the attribute" refers to the attribute whose name is\nthe key of the property in the owner class\' ``__dict__``.\n\nobject.__get__(self, instance, owner)\n\n Called to get the attribute of the owner class (class attribute\n access) or of an instance of that class (instance attribute\n access). *owner* is always the owner class, while *instance* is the\n instance that the attribute was accessed through, or ``None`` when\n the attribute is accessed through the *owner*. 
This method should\n return the (computed) attribute value or raise an\n ``AttributeError`` exception.\n\nobject.__set__(self, instance, value)\n\n Called to set the attribute on an instance *instance* of the owner\n class to a new value, *value*.\n\nobject.__delete__(self, instance)\n\n Called to delete the attribute on an instance *instance* of the\n owner class.\n\n\nInvoking Descriptors\n--------------------\n\nIn general, a descriptor is an object attribute with "binding\nbehavior", one whose attribute access has been overridden by methods\nin the descriptor protocol: ``__get__()``, ``__set__()``, and\n``__delete__()``. If any of those methods are defined for an object,\nit is said to be a descriptor.\n\nThe default behavior for attribute access is to get, set, or delete\nthe attribute from an object\'s dictionary. For instance, ``a.x`` has a\nlookup chain starting with ``a.__dict__[\'x\']``, then\n``type(a).__dict__[\'x\']``, and continuing through the base classes of\n``type(a)`` excluding metaclasses.\n\nHowever, if the looked-up value is an object defining one of the\ndescriptor methods, then Python may override the default behavior and\ninvoke the descriptor method instead. Where this occurs in the\nprecedence chain depends on which descriptor methods were defined and\nhow they were called. Note that descriptors are only invoked for new\nstyle objects or classes (ones that subclass ``object()`` or\n``type()``).\n\nThe starting point for descriptor invocation is a binding, ``a.x``.\nHow the arguments are assembled depends on ``a``:\n\nDirect Call\n The simplest and least common call is when user code directly\n invokes a descriptor method: ``x.__get__(a)``.\n\nInstance Binding\n If binding to a new-style object instance, ``a.x`` is transformed\n into the call: ``type(a).__dict__[\'x\'].__get__(a, type(a))``.\n\nClass Binding\n If binding to a new-style class, ``A.x`` is transformed into the\n call: ``A.__dict__[\'x\'].__get__(None, A)``.\n\nSuper Binding\n If ``a`` is an instance of ``super``, then the binding ``super(B,\n obj).m()`` searches ``obj.__class__.__mro__`` for the base class\n ``A`` immediately preceding ``B`` and then invokes the descriptor\n with the call: ``A.__dict__[\'m\'].__get__(obj, obj.__class__)``.\n\nFor instance bindings, the precedence of descriptor invocation depends\non the which descriptor methods are defined. A descriptor can define\nany combination of ``__get__()``, ``__set__()`` and ``__delete__()``.\nIf it does not define ``__get__()``, then accessing the attribute will\nreturn the descriptor object itself unless there is a value in the\nobject\'s instance dictionary. If the descriptor defines ``__set__()``\nand/or ``__delete__()``, it is a data descriptor; if it defines\nneither, it is a non-data descriptor. Normally, data descriptors\ndefine both ``__get__()`` and ``__set__()``, while non-data\ndescriptors have just the ``__get__()`` method. Data descriptors with\n``__set__()`` and ``__get__()`` defined always override a redefinition\nin an instance dictionary. In contrast, non-data descriptors can be\noverridden by instances.\n\nPython methods (including ``staticmethod()`` and ``classmethod()``)\nare implemented as non-data descriptors. Accordingly, instances can\nredefine and override methods. 
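   The distinction is easiest to see with a small, invented data
   descriptor (nothing below is a standard API; it is only a sketch):

      class Positive(object):
          # defines both __get__ and __set__, so it is a data descriptor
          def __get__(self, instance, owner):
              if instance is None:
                  return self
              return instance.__dict__.get('_value', 0)
          def __set__(self, instance, value):
              if value < 0:
                  raise ValueError('value must be >= 0')
              instance.__dict__['_value'] = value

      class Account(object):
          balance = Positive()    # illustrative owner class

      a = Account()
      a.balance = 10     # routed through Positive.__set__
      print a.balance    # 10, via Positive.__get__

   Because ``balance`` is a data descriptor, it cannot be shadowed by an
   entry in the instance dictionary, whereas an ordinary method, being a
   non-data descriptor, can be.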
This allows individual instances to\nacquire behaviors that differ from other instances of the same class.\n\nThe ``property()`` function is implemented as a data descriptor.\nAccordingly, instances cannot override the behavior of a property.\n\n\n__slots__\n---------\n\nBy default, instances of both old and new-style classes have a\ndictionary for attribute storage. This wastes space for objects\nhaving very few instance variables. The space consumption can become\nacute when creating large numbers of instances.\n\nThe default can be overridden by defining *__slots__* in a new-style\nclass definition. The *__slots__* declaration takes a sequence of\ninstance variables and reserves just enough space in each instance to\nhold a value for each variable. Space is saved because *__dict__* is\nnot created for each instance.\n\n__slots__\n\n This class variable can be assigned a string, iterable, or sequence\n of strings with variable names used by instances. If defined in a\n new-style class, *__slots__* reserves space for the declared\n variables and prevents the automatic creation of *__dict__* and\n *__weakref__* for each instance.\n\n New in version 2.2.\n\nNotes on using *__slots__*\n\n* When inheriting from a class without *__slots__*, the *__dict__*\n attribute of that class will always be accessible, so a *__slots__*\n definition in the subclass is meaningless.\n\n* Without a *__dict__* variable, instances cannot be assigned new\n variables not listed in the *__slots__* definition. Attempts to\n assign to an unlisted variable name raises ``AttributeError``. If\n dynamic assignment of new variables is desired, then add\n ``\'__dict__\'`` to the sequence of strings in the *__slots__*\n declaration.\n\n Changed in version 2.3: Previously, adding ``\'__dict__\'`` to the\n *__slots__* declaration would not enable the assignment of new\n attributes not specifically listed in the sequence of instance\n variable names.\n\n* Without a *__weakref__* variable for each instance, classes defining\n *__slots__* do not support weak references to its instances. If weak\n reference support is needed, then add ``\'__weakref__\'`` to the\n sequence of strings in the *__slots__* declaration.\n\n Changed in version 2.3: Previously, adding ``\'__weakref__\'`` to the\n *__slots__* declaration would not enable support for weak\n references.\n\n* *__slots__* are implemented at the class level by creating\n descriptors (*Implementing Descriptors*) for each variable name. As\n a result, class attributes cannot be used to set default values for\n instance variables defined by *__slots__*; otherwise, the class\n attribute would overwrite the descriptor assignment.\n\n* The action of a *__slots__* declaration is limited to the class\n where it is defined. As a result, subclasses will have a *__dict__*\n unless they also define *__slots__* (which must only contain names\n of any *additional* slots).\n\n* If a class defines a slot also defined in a base class, the instance\n variable defined by the base class slot is inaccessible (except by\n retrieving its descriptor directly from the base class). This\n renders the meaning of the program undefined. In the future, a\n check may be added to prevent this.\n\n* Nonempty *__slots__* does not work for classes derived from\n "variable-length" built-in types such as ``long``, ``str`` and\n ``tuple``.\n\n* Any non-string iterable may be assigned to *__slots__*. 
Mappings may\n also be used; however, in the future, special meaning may be\n assigned to the values corresponding to each key.\n\n* *__class__* assignment works only if both classes have the same\n *__slots__*.\n\n Changed in version 2.6: Previously, *__class__* assignment raised an\n error if either new or old class had *__slots__*.\n\n\nCustomizing class creation\n==========================\n\nBy default, new-style classes are constructed using ``type()``. A\nclass definition is read into a separate namespace and the value of\nclass name is bound to the result of ``type(name, bases, dict)``.\n\nWhen the class definition is read, if *__metaclass__* is defined then\nthe callable assigned to it will be called instead of ``type()``. This\nallows classes or functions to be written which monitor or alter the\nclass creation process:\n\n* Modifying the class dictionary prior to the class being created.\n\n* Returning an instance of another class -- essentially performing the\n role of a factory function.\n\nThese steps will have to be performed in the metaclass\'s ``__new__()``\nmethod -- ``type.__new__()`` can then be called from this method to\ncreate a class with different properties. This example adds a new\nelement to the class dictionary before creating the class:\n\n class metacls(type):\n def __new__(mcs, name, bases, dict):\n dict[\'foo\'] = \'metacls was here\'\n return type.__new__(mcs, name, bases, dict)\n\nYou can of course also override other class methods (or add new\nmethods); for example defining a custom ``__call__()`` method in the\nmetaclass allows custom behavior when the class is called, e.g. not\nalways creating a new instance.\n\n__metaclass__\n\n This variable can be any callable accepting arguments for ``name``,\n ``bases``, and ``dict``. Upon class creation, the callable is used\n instead of the built-in ``type()``.\n\n New in version 2.2.\n\nThe appropriate metaclass is determined by the following precedence\nrules:\n\n* If ``dict[\'__metaclass__\']`` exists, it is used.\n\n* Otherwise, if there is at least one base class, its metaclass is\n used (this looks for a *__class__* attribute first and if not found,\n uses its type).\n\n* Otherwise, if a global variable named __metaclass__ exists, it is\n used.\n\n* Otherwise, the old-style, classic metaclass (types.ClassType) is\n used.\n\nThe potential uses for metaclasses are boundless. Some ideas that have\nbeen explored including logging, interface checking, automatic\ndelegation, automatic property creation, proxies, frameworks, and\nautomatic resource locking/synchronization.\n\n\nCustomizing instance and subclass checks\n========================================\n\nNew in version 2.6.\n\nThe following methods are used to override the default behavior of the\n``isinstance()`` and ``issubclass()`` built-in functions.\n\nIn particular, the metaclass ``abc.ABCMeta`` implements these methods\nin order to allow the addition of Abstract Base Classes (ABCs) as\n"virtual base classes" to any class or type (including built-in\ntypes), including other ABCs.\n\nclass.__instancecheck__(self, instance)\n\n Return true if *instance* should be considered a (direct or\n indirect) instance of *class*. If defined, called to implement\n ``isinstance(instance, class)``.\n\nclass.__subclasscheck__(self, subclass)\n\n Return true if *subclass* should be considered a (direct or\n indirect) subclass of *class*. 
If defined, called to implement\n ``issubclass(subclass, class)``.\n\nNote that these methods are looked up on the type (metaclass) of a\nclass. They cannot be defined as class methods in the actual class.\nThis is consistent with the lookup of special methods that are called\non instances, only in this case the instance is itself a class.\n\nSee also:\n\n **PEP 3119** - Introducing Abstract Base Classes\n Includes the specification for customizing ``isinstance()`` and\n ``issubclass()`` behavior through ``__instancecheck__()`` and\n ``__subclasscheck__()``, with motivation for this functionality\n in the context of adding Abstract Base Classes (see the ``abc``\n module) to the language.\n\n\nEmulating callable objects\n==========================\n\nobject.__call__(self[, args...])\n\n Called when the instance is "called" as a function; if this method\n is defined, ``x(arg1, arg2, ...)`` is a shorthand for\n ``x.__call__(arg1, arg2, ...)``.\n\n\nEmulating container types\n=========================\n\nThe following methods can be defined to implement container objects.\nContainers usually are sequences (such as lists or tuples) or mappings\n(like dictionaries), but can represent other containers as well. The\nfirst set of methods is used either to emulate a sequence or to\nemulate a mapping; the difference is that for a sequence, the\nallowable keys should be the integers *k* for which ``0 <= k < N``\nwhere *N* is the length of the sequence, or slice objects, which\ndefine a range of items. (For backwards compatibility, the method\n``__getslice__()`` (see below) can also be defined to handle simple,\nbut not extended slices.) It is also recommended that mappings provide\nthe methods ``keys()``, ``values()``, ``items()``, ``has_key()``,\n``get()``, ``clear()``, ``setdefault()``, ``iterkeys()``,\n``itervalues()``, ``iteritems()``, ``pop()``, ``popitem()``,\n``copy()``, and ``update()`` behaving similar to those for Python\'s\nstandard dictionary objects. The ``UserDict`` module provides a\n``DictMixin`` class to help create those methods from a base set of\n``__getitem__()``, ``__setitem__()``, ``__delitem__()``, and\n``keys()``. Mutable sequences should provide methods ``append()``,\n``count()``, ``index()``, ``extend()``, ``insert()``, ``pop()``,\n``remove()``, ``reverse()`` and ``sort()``, like Python standard list\nobjects. Finally, sequence types should implement addition (meaning\nconcatenation) and multiplication (meaning repetition) by defining the\nmethods ``__add__()``, ``__radd__()``, ``__iadd__()``, ``__mul__()``,\n``__rmul__()`` and ``__imul__()`` described below; they should not\ndefine ``__coerce__()`` or other numerical operators. It is\nrecommended that both mappings and sequences implement the\n``__contains__()`` method to allow efficient use of the ``in``\noperator; for mappings, ``in`` should be equivalent of ``has_key()``;\nfor sequences, it should search through the values. It is further\nrecommended that both mappings and sequences implement the\n``__iter__()`` method to allow efficient iteration through the\ncontainer; for mappings, ``__iter__()`` should be the same as\n``iterkeys()``; for sequences, it should iterate through the values.\n\nobject.__len__(self)\n\n Called to implement the built-in function ``len()``. Should return\n the length of the object, an integer ``>=`` 0. 
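   For example, a toy sequence wrapper (the ``Deck`` class is made up for
   illustration) needs very little to cooperate with ``len()``, indexing
   and the ``in`` operator:

      class Deck(object):
          # purely illustrative class, not a real API
          def __init__(self, cards):
              self._cards = list(cards)
          def __len__(self):
              return len(self._cards)
          def __getitem__(self, index):
              # also enables iteration and ``in`` via the old protocol
              return self._cards[index]

      d = Deck(['A', 'K', 'Q'])
      print len(d)       # 3
      print d[0]         # A
      print 'Q' in d     # True, through the __getitem__ fallback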
Also, an object\n that doesn\'t define a ``__nonzero__()`` method and whose\n ``__len__()`` method returns zero is considered to be false in a\n Boolean context.\n\nobject.__getitem__(self, key)\n\n Called to implement evaluation of ``self[key]``. For sequence\n types, the accepted keys should be integers and slice objects.\n Note that the special interpretation of negative indexes (if the\n class wishes to emulate a sequence type) is up to the\n ``__getitem__()`` method. If *key* is of an inappropriate type,\n ``TypeError`` may be raised; if of a value outside the set of\n indexes for the sequence (after any special interpretation of\n negative values), ``IndexError`` should be raised. For mapping\n types, if *key* is missing (not in the container), ``KeyError``\n should be raised.\n\n Note: ``for`` loops expect that an ``IndexError`` will be raised for\n illegal indexes to allow proper detection of the end of the\n sequence.\n\nobject.__setitem__(self, key, value)\n\n Called to implement assignment to ``self[key]``. Same note as for\n ``__getitem__()``. This should only be implemented for mappings if\n the objects support changes to the values for keys, or if new keys\n can be added, or for sequences if elements can be replaced. The\n same exceptions should be raised for improper *key* values as for\n the ``__getitem__()`` method.\n\nobject.__delitem__(self, key)\n\n Called to implement deletion of ``self[key]``. Same note as for\n ``__getitem__()``. This should only be implemented for mappings if\n the objects support removal of keys, or for sequences if elements\n can be removed from the sequence. The same exceptions should be\n raised for improper *key* values as for the ``__getitem__()``\n method.\n\nobject.__iter__(self)\n\n This method is called when an iterator is required for a container.\n This method should return a new iterator object that can iterate\n over all the objects in the container. For mappings, it should\n iterate over the keys of the container, and should also be made\n available as the method ``iterkeys()``.\n\n Iterator objects also need to implement this method; they are\n required to return themselves. For more information on iterator\n objects, see *Iterator Types*.\n\nobject.__reversed__(self)\n\n Called (if present) by the ``reversed()`` built-in to implement\n reverse iteration. It should return a new iterator object that\n iterates over all the objects in the container in reverse order.\n\n If the ``__reversed__()`` method is not provided, the\n ``reversed()`` built-in will fall back to using the sequence\n protocol (``__len__()`` and ``__getitem__()``). Objects that\n support the sequence protocol should only provide\n ``__reversed__()`` if they can provide an implementation that is\n more efficient than the one provided by ``reversed()``.\n\n New in version 2.6.\n\nThe membership test operators (``in`` and ``not in``) are normally\nimplemented as an iteration through a sequence. However, container\nobjects can supply the following special method with a more efficient\nimplementation, which also does not require the object be a sequence.\n\nobject.__contains__(self, item)\n\n Called to implement membership test operators. Should return true\n if *item* is in *self*, false otherwise. 
For mapping objects, this\n should consider the keys of the mapping rather than the values or\n the key-item pairs.\n\n For objects that don\'t define ``__contains__()``, the membership\n test first tries iteration via ``__iter__()``, then the old\n sequence iteration protocol via ``__getitem__()``, see *this\n section in the language reference*.\n\n\nAdditional methods for emulation of sequence types\n==================================================\n\nThe following optional methods can be defined to further emulate\nsequence objects. Immutable sequences methods should at most only\ndefine ``__getslice__()``; mutable sequences might define all three\nmethods.\n\nobject.__getslice__(self, i, j)\n\n Deprecated since version 2.0: Support slice objects as parameters\n to the ``__getitem__()`` method. (However, built-in types in\n CPython currently still implement ``__getslice__()``. Therefore,\n you have to override it in derived classes when implementing\n slicing.)\n\n Called to implement evaluation of ``self[i:j]``. The returned\n object should be of the same type as *self*. Note that missing *i*\n or *j* in the slice expression are replaced by zero or\n ``sys.maxint``, respectively. If negative indexes are used in the\n slice, the length of the sequence is added to that index. If the\n instance does not implement the ``__len__()`` method, an\n ``AttributeError`` is raised. No guarantee is made that indexes\n adjusted this way are not still negative. Indexes which are\n greater than the length of the sequence are not modified. If no\n ``__getslice__()`` is found, a slice object is created instead, and\n passed to ``__getitem__()`` instead.\n\nobject.__setslice__(self, i, j, sequence)\n\n Called to implement assignment to ``self[i:j]``. Same notes for *i*\n and *j* as for ``__getslice__()``.\n\n This method is deprecated. If no ``__setslice__()`` is found, or\n for extended slicing of the form ``self[i:j:k]``, a slice object is\n created, and passed to ``__setitem__()``, instead of\n ``__setslice__()`` being called.\n\nobject.__delslice__(self, i, j)\n\n Called to implement deletion of ``self[i:j]``. Same notes for *i*\n and *j* as for ``__getslice__()``. This method is deprecated. If no\n ``__delslice__()`` is found, or for extended slicing of the form\n ``self[i:j:k]``, a slice object is created, and passed to\n ``__delitem__()``, instead of ``__delslice__()`` being called.\n\nNotice that these methods are only invoked when a single slice with a\nsingle colon is used, and the slice method is available. 
For slice\noperations involving extended slice notation, or in absence of the\nslice methods, ``__getitem__()``, ``__setitem__()`` or\n``__delitem__()`` is called with a slice object as argument.\n\nThe following example demonstrate how to make your program or module\ncompatible with earlier versions of Python (assuming that methods\n``__getitem__()``, ``__setitem__()`` and ``__delitem__()`` support\nslice objects as arguments):\n\n class MyClass:\n ...\n def __getitem__(self, index):\n ...\n def __setitem__(self, index, value):\n ...\n def __delitem__(self, index):\n ...\n\n if sys.version_info < (2, 0):\n # They won\'t be defined if version is at least 2.0 final\n\n def __getslice__(self, i, j):\n return self[max(0, i):max(0, j):]\n def __setslice__(self, i, j, seq):\n self[max(0, i):max(0, j):] = seq\n def __delslice__(self, i, j):\n del self[max(0, i):max(0, j):]\n ...\n\nNote the calls to ``max()``; these are necessary because of the\nhandling of negative indices before the ``__*slice__()`` methods are\ncalled. When negative indexes are used, the ``__*item__()`` methods\nreceive them as provided, but the ``__*slice__()`` methods get a\n"cooked" form of the index values. For each negative index value, the\nlength of the sequence is added to the index before calling the method\n(which may still result in a negative index); this is the customary\nhandling of negative indexes by the built-in sequence types, and the\n``__*item__()`` methods are expected to do this as well. However,\nsince they should already be doing that, negative indexes cannot be\npassed in; they must be constrained to the bounds of the sequence\nbefore being passed to the ``__*item__()`` methods. Calling ``max(0,\ni)`` conveniently returns the proper value.\n\n\nEmulating numeric types\n=======================\n\nThe following methods can be defined to emulate numeric objects.\nMethods corresponding to operations that are not supported by the\nparticular kind of number implemented (e.g., bitwise operations for\nnon-integral numbers) should be left undefined.\n\nobject.__add__(self, other)\nobject.__sub__(self, other)\nobject.__mul__(self, other)\nobject.__floordiv__(self, other)\nobject.__mod__(self, other)\nobject.__divmod__(self, other)\nobject.__pow__(self, other[, modulo])\nobject.__lshift__(self, other)\nobject.__rshift__(self, other)\nobject.__and__(self, other)\nobject.__xor__(self, other)\nobject.__or__(self, other)\n\n These methods are called to implement the binary arithmetic\n operations (``+``, ``-``, ``*``, ``//``, ``%``, ``divmod()``,\n ``pow()``, ``**``, ``<<``, ``>>``, ``&``, ``^``, ``|``). For\n instance, to evaluate the expression ``x + y``, where *x* is an\n instance of a class that has an ``__add__()`` method,\n ``x.__add__(y)`` is called. The ``__divmod__()`` method should be\n the equivalent to using ``__floordiv__()`` and ``__mod__()``; it\n should not be related to ``__truediv__()`` (described below). Note\n that ``__pow__()`` should be defined to accept an optional third\n argument if the ternary version of the built-in ``pow()`` function\n is to be supported.\n\n If one of those methods does not support the operation with the\n supplied arguments, it should return ``NotImplemented``.\n\nobject.__div__(self, other)\nobject.__truediv__(self, other)\n\n The division operator (``/``) is implemented by these methods. The\n ``__truediv__()`` method is used when ``__future__.division`` is in\n effect, otherwise ``__div__()`` is used. 
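   A class that should divide the same way whether or not ``from
   __future__ import division`` is active can simply alias the two
   methods; this is only an illustrative sketch (``Ratio`` is invented):

      class Ratio(object):
          # purely illustrative class, not a real API
          def __init__(self, num, den):
              self.num, self.den = num, den
          def __div__(self, other):
              return Ratio(self.num, self.den * other)
          # same behaviour under true division
          __truediv__ = __div__

      r = Ratio(1, 3) / 2    # a Ratio(1, 6) in either division mode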
If only one of these two\n methods is defined, the object will not support division in the\n alternate context; ``TypeError`` will be raised instead.\n\nobject.__radd__(self, other)\nobject.__rsub__(self, other)\nobject.__rmul__(self, other)\nobject.__rdiv__(self, other)\nobject.__rtruediv__(self, other)\nobject.__rfloordiv__(self, other)\nobject.__rmod__(self, other)\nobject.__rdivmod__(self, other)\nobject.__rpow__(self, other)\nobject.__rlshift__(self, other)\nobject.__rrshift__(self, other)\nobject.__rand__(self, other)\nobject.__rxor__(self, other)\nobject.__ror__(self, other)\n\n These methods are called to implement the binary arithmetic\n operations (``+``, ``-``, ``*``, ``/``, ``%``, ``divmod()``,\n ``pow()``, ``**``, ``<<``, ``>>``, ``&``, ``^``, ``|``) with\n reflected (swapped) operands. These functions are only called if\n the left operand does not support the corresponding operation and\n the operands are of different types. [2] For instance, to evaluate\n the expression ``x - y``, where *y* is an instance of a class that\n has an ``__rsub__()`` method, ``y.__rsub__(x)`` is called if\n ``x.__sub__(y)`` returns *NotImplemented*.\n\n Note that ternary ``pow()`` will not try calling ``__rpow__()``\n (the coercion rules would become too complicated).\n\n Note: If the right operand\'s type is a subclass of the left operand\'s\n type and that subclass provides the reflected method for the\n operation, this method will be called before the left operand\'s\n non-reflected method. This behavior allows subclasses to\n override their ancestors\' operations.\n\nobject.__iadd__(self, other)\nobject.__isub__(self, other)\nobject.__imul__(self, other)\nobject.__idiv__(self, other)\nobject.__itruediv__(self, other)\nobject.__ifloordiv__(self, other)\nobject.__imod__(self, other)\nobject.__ipow__(self, other[, modulo])\nobject.__ilshift__(self, other)\nobject.__irshift__(self, other)\nobject.__iand__(self, other)\nobject.__ixor__(self, other)\nobject.__ior__(self, other)\n\n These methods are called to implement the augmented arithmetic\n assignments (``+=``, ``-=``, ``*=``, ``/=``, ``//=``, ``%=``,\n ``**=``, ``<<=``, ``>>=``, ``&=``, ``^=``, ``|=``). These methods\n should attempt to do the operation in-place (modifying *self*) and\n return the result (which could be, but does not have to be,\n *self*). If a specific method is not defined, the augmented\n assignment falls back to the normal methods. For instance, to\n execute the statement ``x += y``, where *x* is an instance of a\n class that has an ``__iadd__()`` method, ``x.__iadd__(y)`` is\n called. If *x* is an instance of a class that does not define a\n ``__iadd__()`` method, ``x.__add__(y)`` and ``y.__radd__(x)`` are\n considered, as with the evaluation of ``x + y``.\n\nobject.__neg__(self)\nobject.__pos__(self)\nobject.__abs__(self)\nobject.__invert__(self)\n\n Called to implement the unary arithmetic operations (``-``, ``+``,\n ``abs()`` and ``~``).\n\nobject.__complex__(self)\nobject.__int__(self)\nobject.__long__(self)\nobject.__float__(self)\n\n Called to implement the built-in functions ``complex()``,\n ``int()``, ``long()``, and ``float()``. Should return a value of\n the appropriate type.\n\nobject.__oct__(self)\nobject.__hex__(self)\n\n Called to implement the built-in functions ``oct()`` and ``hex()``.\n Should return a string value.\n\nobject.__index__(self)\n\n Called to implement ``operator.index()``. Also called whenever\n Python needs an integer object (such as in slicing). 
Must return\n an integer (int or long).\n\n New in version 2.5.\n\nobject.__coerce__(self, other)\n\n Called to implement "mixed-mode" numeric arithmetic. Should either\n return a 2-tuple containing *self* and *other* converted to a\n common numeric type, or ``None`` if conversion is impossible. When\n the common type would be the type of ``other``, it is sufficient to\n return ``None``, since the interpreter will also ask the other\n object to attempt a coercion (but sometimes, if the implementation\n of the other type cannot be changed, it is useful to do the\n conversion to the other type here). A return value of\n ``NotImplemented`` is equivalent to returning ``None``.\n\n\nCoercion rules\n==============\n\nThis section used to document the rules for coercion. As the language\nhas evolved, the coercion rules have become hard to document\nprecisely; documenting what one version of one particular\nimplementation does is undesirable. Instead, here are some informal\nguidelines regarding coercion. In Python 3.0, coercion will not be\nsupported.\n\n* If the left operand of a % operator is a string or Unicode object,\n no coercion takes place and the string formatting operation is\n invoked instead.\n\n* It is no longer recommended to define a coercion operation. Mixed-\n mode operations on types that don\'t define coercion pass the\n original arguments to the operation.\n\n* New-style classes (those derived from ``object``) never invoke the\n ``__coerce__()`` method in response to a binary operator; the only\n time ``__coerce__()`` is invoked is when the built-in function\n ``coerce()`` is called.\n\n* For most intents and purposes, an operator that returns\n ``NotImplemented`` is treated the same as one that is not\n implemented at all.\n\n* Below, ``__op__()`` and ``__rop__()`` are used to signify the\n generic method names corresponding to an operator; ``__iop__()`` is\n used for the corresponding in-place operator. For example, for the\n operator \'``+``\', ``__add__()`` and ``__radd__()`` are used for the\n left and right variant of the binary operator, and ``__iadd__()``\n for the in-place variant.\n\n* For objects *x* and *y*, first ``x.__op__(y)`` is tried. If this is\n not implemented or returns ``NotImplemented``, ``y.__rop__(x)`` is\n tried. If this is also not implemented or returns\n ``NotImplemented``, a ``TypeError`` exception is raised. But see\n the following exception:\n\n* Exception to the previous item: if the left operand is an instance\n of a built-in type or a new-style class, and the right operand is an\n instance of a proper subclass of that type or class and overrides\n the base\'s ``__rop__()`` method, the right operand\'s ``__rop__()``\n method is tried *before* the left operand\'s ``__op__()`` method.\n\n This is done so that a subclass can completely override binary\n operators. Otherwise, the left operand\'s ``__op__()`` method would\n always accept the right operand: when an instance of a given class\n is expected, an instance of a subclass of that class is always\n acceptable.\n\n* When either operand type defines a coercion, this coercion is called\n before that type\'s ``__op__()`` or ``__rop__()`` method is called,\n but no sooner. If the coercion returns an object of a different\n type for the operand whose coercion is invoked, part of the process\n is redone using the new object.\n\n* When an in-place operator (like \'``+=``\') is used, if the left\n operand implements ``__iop__()``, it is invoked without any\n coercion. 
When the operation falls back to ``__op__()`` and/or\n ``__rop__()``, the normal coercion rules apply.\n\n* In ``x + y``, if *x* is a sequence that implements sequence\n concatenation, sequence concatenation is invoked.\n\n* In ``x * y``, if one operator is a sequence that implements sequence\n repetition, and the other is an integer (``int`` or ``long``),\n sequence repetition is invoked.\n\n* Rich comparisons (implemented by methods ``__eq__()`` and so on)\n never use coercion. Three-way comparison (implemented by\n ``__cmp__()``) does use coercion under the same conditions as other\n binary operations use it.\n\n* In the current implementation, the built-in numeric types ``int``,\n ``long``, ``float``, and ``complex`` do not use coercion. All these\n types implement a ``__coerce__()`` method, for use by the built-in\n ``coerce()`` function.\n\n Changed in version 2.7.\n\n\nWith Statement Context Managers\n===============================\n\nNew in version 2.5.\n\nA *context manager* is an object that defines the runtime context to\nbe established when executing a ``with`` statement. The context\nmanager handles the entry into, and the exit from, the desired runtime\ncontext for the execution of the block of code. Context managers are\nnormally invoked using the ``with`` statement (described in section\n*The with statement*), but can also be used by directly invoking their\nmethods.\n\nTypical uses of context managers include saving and restoring various\nkinds of global state, locking and unlocking resources, closing opened\nfiles, etc.\n\nFor more information on context managers, see *Context Manager Types*.\n\nobject.__enter__(self)\n\n Enter the runtime context related to this object. The ``with``\n statement will bind this method\'s return value to the target(s)\n specified in the ``as`` clause of the statement, if any.\n\nobject.__exit__(self, exc_type, exc_value, traceback)\n\n Exit the runtime context related to this object. The parameters\n describe the exception that caused the context to be exited. If the\n context was exited without an exception, all three arguments will\n be ``None``.\n\n If an exception is supplied, and the method wishes to suppress the\n exception (i.e., prevent it from being propagated), it should\n return a true value. Otherwise, the exception will be processed\n normally upon exit from this method.\n\n Note that ``__exit__()`` methods should not reraise the passed-in\n exception; this is the caller\'s responsibility.\n\nSee also:\n\n **PEP 0343** - The "with" statement\n The specification, background, and examples for the Python\n ``with`` statement.\n\n\nSpecial method lookup for old-style classes\n===========================================\n\nFor old-style classes, special methods are always looked up in exactly\nthe same way as any other method or attribute. This is the case\nregardless of whether the method is being looked up explicitly as in\n``x.__getitem__(i)`` or implicitly as in ``x[i]``.\n\nThis behaviour means that special methods may exhibit different\nbehaviour for different instances of a single old-style class if the\nappropriate special attributes are set differently:\n\n >>> class C:\n ... 
pass\n ...\n >>> c1 = C()\n >>> c2 = C()\n >>> c1.__len__ = lambda: 5\n >>> c2.__len__ = lambda: 9\n >>> len(c1)\n 5\n >>> len(c2)\n 9\n\n\nSpecial method lookup for new-style classes\n===========================================\n\nFor new-style classes, implicit invocations of special methods are\nonly guaranteed to work correctly if defined on an object\'s type, not\nin the object\'s instance dictionary. That behaviour is the reason why\nthe following code raises an exception (unlike the equivalent example\nwith old-style classes):\n\n >>> class C(object):\n ... pass\n ...\n >>> c = C()\n >>> c.__len__ = lambda: 5\n >>> len(c)\n Traceback (most recent call last):\n File "<stdin>", line 1, in <module>\n TypeError: object of type \'C\' has no len()\n\nThe rationale behind this behaviour lies with a number of special\nmethods such as ``__hash__()`` and ``__repr__()`` that are implemented\nby all objects, including type objects. If the implicit lookup of\nthese methods used the conventional lookup process, they would fail\nwhen invoked on the type object itself:\n\n >>> 1 .__hash__() == hash(1)\n True\n >>> int.__hash__() == hash(int)\n Traceback (most recent call last):\n File "<stdin>", line 1, in <module>\n TypeError: descriptor \'__hash__\' of \'int\' object needs an argument\n\nIncorrectly attempting to invoke an unbound method of a class in this\nway is sometimes referred to as \'metaclass confusion\', and is avoided\nby bypassing the instance when looking up special methods:\n\n >>> type(1).__hash__(1) == hash(1)\n True\n >>> type(int).__hash__(int) == hash(int)\n True\n\nIn addition to bypassing any instance attributes in the interest of\ncorrectness, implicit special method lookup generally also bypasses\nthe ``__getattribute__()`` method even of the object\'s metaclass:\n\n >>> class Meta(type):\n ... def __getattribute__(*args):\n ... print "Metaclass getattribute invoked"\n ... return type.__getattribute__(*args)\n ...\n >>> class C(object):\n ... __metaclass__ = Meta\n ... def __len__(self):\n ... return 10\n ... def __getattribute__(*args):\n ... print "Class getattribute invoked"\n ... return object.__getattribute__(*args)\n ...\n >>> c = C()\n >>> c.__len__() # Explicit lookup via instance\n Class getattribute invoked\n 10\n >>> type(c).__len__(c) # Explicit lookup via type\n Metaclass getattribute invoked\n 10\n >>> len(c) # Implicit lookup\n 10\n\nBypassing the ``__getattribute__()`` machinery in this fashion\nprovides significant scope for speed optimisations within the\ninterpreter, at the cost of some flexibility in the handling of\nspecial methods (the special method *must* be set on the class object\nitself in order to be consistently invoked by the interpreter).\n\n-[ Footnotes ]-\n\n[1] It *is* possible in some cases to change an object\'s type, under\n certain controlled conditions. 
It generally isn\'t a good idea\n though, since it can lead to some very strange behaviour if it is\n handled incorrectly.\n\n[2] For operands of the same type, it is assumed that if the non-\n reflected method (such as ``__add__()``) fails the operation is\n not supported, which is why the reflected method is not called.\n', 'string-conversions': u'\nString conversions\n******************\n\nA string conversion is an expression list enclosed in reverse (a.k.a.\nbackward) quotes:\n\n string_conversion ::= "\'" expression_list "\'"\n\nA string conversion evaluates the contained expression list and\nconverts the resulting object into a string according to rules\nspecific to its type.\n\nIf the object is a string, a number, ``None``, or a tuple, list or\ndictionary containing only objects whose type is one of these, the\nresulting string is a valid Python expression which can be passed to\nthe built-in function ``eval()`` to yield an expression with the same\nvalue (or an approximation, if floating point numbers are involved).\n\n(In particular, converting a string adds quotes around it and converts\n"funny" characters to escape sequences that are safe to print.)\n\nRecursive objects (for example, lists or dictionaries that contain a\nreference to themselves, directly or indirectly) use ``...`` to\nindicate a recursive reference, and the result cannot be passed to\n``eval()`` to get an equal value (``SyntaxError`` will be raised\ninstead).\n\nThe built-in function ``repr()`` performs exactly the same conversion\nin its argument as enclosing it in parentheses and reverse quotes\ndoes. The built-in function ``str()`` performs a similar but more\nuser-friendly conversion.\n', - 'string-methods': u'\nString Methods\n**************\n\nBelow are listed the string methods which both 8-bit strings and\nUnicode objects support.\n\nIn addition, Python\'s strings support the sequence type methods\ndescribed in the *Sequence Types --- str, unicode, list, tuple,\nbuffer, xrange* section. To output formatted strings use template\nstrings or the ``%`` operator described in the *String Formatting\nOperations* section. Also, see the ``re`` module for string functions\nbased on regular expressions.\n\nstr.capitalize()\n\n Return a copy of the string with only its first character\n capitalized.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.center(width[, fillchar])\n\n Return centered in a string of length *width*. Padding is done\n using the specified *fillchar* (default is a space).\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.count(sub[, start[, end]])\n\n Return the number of non-overlapping occurrences of substring *sub*\n in the range [*start*, *end*]. Optional arguments *start* and\n *end* are interpreted as in slice notation.\n\nstr.decode([encoding[, errors]])\n\n Decodes the string using the codec registered for *encoding*.\n *encoding* defaults to the default string encoding. *errors* may\n be given to set a different error handling scheme. The default is\n ``\'strict\'``, meaning that encoding errors raise ``UnicodeError``.\n Other possible values are ``\'ignore\'``, ``\'replace\'`` and any other\n name registered via ``codecs.register_error()``, see section *Codec\n Base Classes*.\n\n New in version 2.2.\n\n Changed in version 2.3: Support for other error handling schemes\n added.\n\n Changed in version 2.7: Support for keyword arguments added.\n\nstr.encode([encoding[, errors]])\n\n Return an encoded version of the string. 
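[Editor's note: a short interactive sketch of the string conversion rules described above; ``repr()`` is expected to give the same result as the reverse-quote form.]

    >>> x = [1, 'two', None]
    >>> `x`                          # string conversion via reverse quotes
    "[1, 'two', None]"
    >>> repr(x) == `x`
    True
    >>> eval(`x`) == x
    True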
Default encoding is the\n current default string encoding. *errors* may be given to set a\n different error handling scheme. The default for *errors* is\n ``\'strict\'``, meaning that encoding errors raise a\n ``UnicodeError``. Other possible values are ``\'ignore\'``,\n ``\'replace\'``, ``\'xmlcharrefreplace\'``, ``\'backslashreplace\'`` and\n any other name registered via ``codecs.register_error()``, see\n section *Codec Base Classes*. For a list of possible encodings, see\n section *Standard Encodings*.\n\n New in version 2.0.\n\n Changed in version 2.3: Support for ``\'xmlcharrefreplace\'`` and\n ``\'backslashreplace\'`` and other error handling schemes added.\n\n Changed in version 2.7: Support for keyword arguments added.\n\nstr.endswith(suffix[, start[, end]])\n\n Return ``True`` if the string ends with the specified *suffix*,\n otherwise return ``False``. *suffix* can also be a tuple of\n suffixes to look for. With optional *start*, test beginning at\n that position. With optional *end*, stop comparing at that\n position.\n\n Changed in version 2.5: Accept tuples as *suffix*.\n\nstr.expandtabs([tabsize])\n\n Return a copy of the string where all tab characters are replaced\n by one or more spaces, depending on the current column and the\n given tab size. The column number is reset to zero after each\n newline occurring in the string. If *tabsize* is not given, a tab\n size of ``8`` characters is assumed. This doesn\'t understand other\n non-printing characters or escape sequences.\n\nstr.find(sub[, start[, end]])\n\n Return the lowest index in the string where substring *sub* is\n found, such that *sub* is contained in the slice ``s[start:end]``.\n Optional arguments *start* and *end* are interpreted as in slice\n notation. Return ``-1`` if *sub* is not found.\n\nstr.format(*args, **kwargs)\n\n Perform a string formatting operation. The string on which this\n method is called can contain literal text or replacement fields\n delimited by braces ``{}``. Each replacement field contains either\n the numeric index of a positional argument, or the name of a\n keyword argument. 
Returns a copy of the string where each\n replacement field is replaced with the string value of the\n corresponding argument.\n\n >>> "The sum of 1 + 2 is {0}".format(1+2)\n \'The sum of 1 + 2 is 3\'\n\n See *Format String Syntax* for a description of the various\n formatting options that can be specified in format strings.\n\n This method of string formatting is the new standard in Python 3.0,\n and should be preferred to the ``%`` formatting described in\n *String Formatting Operations* in new code.\n\n New in version 2.6.\n\nstr.index(sub[, start[, end]])\n\n Like ``find()``, but raise ``ValueError`` when the substring is not\n found.\n\nstr.isalnum()\n\n Return true if all characters in the string are alphanumeric and\n there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isalpha()\n\n Return true if all characters in the string are alphabetic and\n there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isdigit()\n\n Return true if all characters in the string are digits and there is\n at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.islower()\n\n Return true if all cased characters in the string are lowercase and\n there is at least one cased character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isspace()\n\n Return true if there are only whitespace characters in the string\n and there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.istitle()\n\n Return true if the string is a titlecased string and there is at\n least one character, for example uppercase characters may only\n follow uncased characters and lowercase characters only cased ones.\n Return false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isupper()\n\n Return true if all cased characters in the string are uppercase and\n there is at least one cased character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.join(iterable)\n\n Return a string which is the concatenation of the strings in the\n *iterable* *iterable*. The separator between elements is the\n string providing this method.\n\nstr.ljust(width[, fillchar])\n\n Return the string left justified in a string of length *width*.\n Padding is done using the specified *fillchar* (default is a\n space). The original string is returned if *width* is less than\n ``len(s)``.\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.lower()\n\n Return a copy of the string converted to lowercase.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.lstrip([chars])\n\n Return a copy of the string with leading characters removed. The\n *chars* argument is a string specifying the set of characters to be\n removed. If omitted or ``None``, the *chars* argument defaults to\n removing whitespace. The *chars* argument is not a prefix; rather,\n all combinations of its values are stripped:\n\n >>> \' spacious \'.lstrip()\n \'spacious \'\n >>> \'www.example.com\'.lstrip(\'cmowz.\')\n \'example.com\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.partition(sep)\n\n Split the string at the first occurrence of *sep*, and return a\n 3-tuple containing the part before the separator, the separator\n itself, and the part after the separator. 
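[Editor's note: a brief illustration of ``center()``, ``ljust()`` and ``count()`` as described above, with the outputs CPython 2.7 would be expected to print.]

    >>> 'py'.center(10, '-')
    '----py----'
    >>> 'py'.ljust(10, '.')
    'py........'
    >>> 'banana'.count('an')
    2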
If the separator is not\n found, return a 3-tuple containing the string itself, followed by\n two empty strings.\n\n New in version 2.5.\n\nstr.replace(old, new[, count])\n\n Return a copy of the string with all occurrences of substring *old*\n replaced by *new*. If the optional argument *count* is given, only\n the first *count* occurrences are replaced.\n\nstr.rfind(sub[, start[, end]])\n\n Return the highest index in the string where substring *sub* is\n found, such that *sub* is contained within ``s[start:end]``.\n Optional arguments *start* and *end* are interpreted as in slice\n notation. Return ``-1`` on failure.\n\nstr.rindex(sub[, start[, end]])\n\n Like ``rfind()`` but raises ``ValueError`` when the substring *sub*\n is not found.\n\nstr.rjust(width[, fillchar])\n\n Return the string right justified in a string of length *width*.\n Padding is done using the specified *fillchar* (default is a\n space). The original string is returned if *width* is less than\n ``len(s)``.\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.rpartition(sep)\n\n Split the string at the last occurrence of *sep*, and return a\n 3-tuple containing the part before the separator, the separator\n itself, and the part after the separator. If the separator is not\n found, return a 3-tuple containing two empty strings, followed by\n the string itself.\n\n New in version 2.5.\n\nstr.rsplit([sep[, maxsplit]])\n\n Return a list of the words in the string, using *sep* as the\n delimiter string. If *maxsplit* is given, at most *maxsplit* splits\n are done, the *rightmost* ones. If *sep* is not specified or\n ``None``, any whitespace string is a separator. Except for\n splitting from the right, ``rsplit()`` behaves like ``split()``\n which is described in detail below.\n\n New in version 2.4.\n\nstr.rstrip([chars])\n\n Return a copy of the string with trailing characters removed. The\n *chars* argument is a string specifying the set of characters to be\n removed. If omitted or ``None``, the *chars* argument defaults to\n removing whitespace. The *chars* argument is not a suffix; rather,\n all combinations of its values are stripped:\n\n >>> \' spacious \'.rstrip()\n \' spacious\'\n >>> \'mississippi\'.rstrip(\'ipz\')\n \'mississ\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.split([sep[, maxsplit]])\n\n Return a list of the words in the string, using *sep* as the\n delimiter string. If *maxsplit* is given, at most *maxsplit*\n splits are done (thus, the list will have at most ``maxsplit+1``\n elements). If *maxsplit* is not specified, then there is no limit\n on the number of splits (all possible splits are made).\n\n If *sep* is given, consecutive delimiters are not grouped together\n and are deemed to delimit empty strings (for example,\n ``\'1,,2\'.split(\',\')`` returns ``[\'1\', \'\', \'2\']``). The *sep*\n argument may consist of multiple characters (for example,\n ``\'1<>2<>3\'.split(\'<>\')`` returns ``[\'1\', \'2\', \'3\']``). Splitting\n an empty string with a specified separator returns ``[\'\']``.\n\n If *sep* is not specified or is ``None``, a different splitting\n algorithm is applied: runs of consecutive whitespace are regarded\n as a single separator, and the result will contain no empty strings\n at the start or end if the string has leading or trailing\n whitespace. 
Consequently, splitting an empty string or a string\n consisting of just whitespace with a ``None`` separator returns\n ``[]``.\n\n For example, ``\' 1 2 3 \'.split()`` returns ``[\'1\', \'2\', \'3\']``,\n and ``\' 1 2 3 \'.split(None, 1)`` returns ``[\'1\', \'2 3 \']``.\n\nstr.splitlines([keepends])\n\n Return a list of the lines in the string, breaking at line\n boundaries. Line breaks are not included in the resulting list\n unless *keepends* is given and true.\n\nstr.startswith(prefix[, start[, end]])\n\n Return ``True`` if string starts with the *prefix*, otherwise\n return ``False``. *prefix* can also be a tuple of prefixes to look\n for. With optional *start*, test string beginning at that\n position. With optional *end*, stop comparing string at that\n position.\n\n Changed in version 2.5: Accept tuples as *prefix*.\n\nstr.strip([chars])\n\n Return a copy of the string with the leading and trailing\n characters removed. The *chars* argument is a string specifying the\n set of characters to be removed. If omitted or ``None``, the\n *chars* argument defaults to removing whitespace. The *chars*\n argument is not a prefix or suffix; rather, all combinations of its\n values are stripped:\n\n >>> \' spacious \'.strip()\n \'spacious\'\n >>> \'www.example.com\'.strip(\'cmowz.\')\n \'example\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.swapcase()\n\n Return a copy of the string with uppercase characters converted to\n lowercase and vice versa.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.title()\n\n Return a titlecased version of the string where words start with an\n uppercase character and the remaining characters are lowercase.\n\n The algorithm uses a simple language-independent definition of a\n word as groups of consecutive letters. The definition works in\n many contexts but it means that apostrophes in contractions and\n possessives form word boundaries, which may not be the desired\n result:\n\n >>> "they\'re bill\'s friends from the UK".title()\n "They\'Re Bill\'S Friends From The Uk"\n\n A workaround for apostrophes can be constructed using regular\n expressions:\n\n >>> import re\n >>> def titlecase(s):\n return re.sub(r"[A-Za-z]+(\'[A-Za-z]+)?",\n lambda mo: mo.group(0)[0].upper() +\n mo.group(0)[1:].lower(),\n s)\n\n >>> titlecase("they\'re bill\'s friends.")\n "They\'re Bill\'s Friends."\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.translate(table[, deletechars])\n\n Return a copy of the string where all characters occurring in the\n optional argument *deletechars* are removed, and the remaining\n characters have been mapped through the given translation table,\n which must be a string of length 256.\n\n You can use the ``maketrans()`` helper function in the ``string``\n module to create a translation table. For string objects, set the\n *table* argument to ``None`` for translations that only delete\n characters:\n\n >>> \'read this short text\'.translate(None, \'aeiou\')\n \'rd ths shrt txt\'\n\n New in version 2.6: Support for a ``None`` *table* argument.\n\n For Unicode objects, the ``translate()`` method does not accept the\n optional *deletechars* argument. Instead, it returns a copy of the\n *s* where all characters have been mapped through the given\n translation table which must be a mapping of Unicode ordinals to\n Unicode ordinals, Unicode strings or ``None``. Unmapped characters\n are left untouched. 
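[Editor's note: a brief illustration of ``swapcase()`` and ``expandtabs()`` as described above.]

    >>> 'Hello World'.swapcase()
    'hELLO wORLD'
    >>> '01\t012\t0123\t01234'.expandtabs(4)
    '01  012 0123    01234'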
Characters mapped to ``None`` are deleted.\n Note, a more flexible approach is to create a custom character\n mapping codec using the ``codecs`` module (see ``encodings.cp1251``\n for an example).\n\nstr.upper()\n\n Return a copy of the string converted to uppercase.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.zfill(width)\n\n Return the numeric string left filled with zeros in a string of\n length *width*. A sign prefix is handled correctly. The original\n string is returned if *width* is less than ``len(s)``.\n\n New in version 2.2.2.\n\nThe following methods are present only on unicode objects:\n\nunicode.isnumeric()\n\n Return ``True`` if there are only numeric characters in S,\n ``False`` otherwise. Numeric characters include digit characters,\n and all characters that have the Unicode numeric value property,\n e.g. U+2155, VULGAR FRACTION ONE FIFTH.\n\nunicode.isdecimal()\n\n Return ``True`` if there are only decimal characters in S,\n ``False`` otherwise. Decimal characters include digit characters,\n and all characters that that can be used to form decimal-radix\n numbers, e.g. U+0660, ARABIC-INDIC DIGIT ZERO.\n', - 'strings': u'\nString literals\n***************\n\nString literals are described by the following lexical definitions:\n\n stringliteral ::= [stringprefix](shortstring | longstring)\n stringprefix ::= "r" | "u" | "ur" | "R" | "U" | "UR" | "Ur" | "uR"\n shortstring ::= "\'" shortstringitem* "\'" | \'"\' shortstringitem* \'"\'\n longstring ::= "\'\'\'" longstringitem* "\'\'\'"\n | \'"""\' longstringitem* \'"""\'\n shortstringitem ::= shortstringchar | escapeseq\n longstringitem ::= longstringchar | escapeseq\n shortstringchar ::= \n longstringchar ::= \n escapeseq ::= "\\" \n\nOne syntactic restriction not indicated by these productions is that\nwhitespace is not allowed between the **stringprefix** and the rest of\nthe string literal. The source character set is defined by the\nencoding declaration; it is ASCII if no encoding declaration is given\nin the source file; see section *Encoding declarations*.\n\nIn plain English: String literals can be enclosed in matching single\nquotes (``\'``) or double quotes (``"``). They can also be enclosed in\nmatching groups of three single or double quotes (these are generally\nreferred to as *triple-quoted strings*). The backslash (``\\``)\ncharacter is used to escape characters that otherwise have a special\nmeaning, such as newline, backslash itself, or the quote character.\nString literals may optionally be prefixed with a letter ``\'r\'`` or\n``\'R\'``; such strings are called *raw strings* and use different rules\nfor interpreting backslash escape sequences. A prefix of ``\'u\'`` or\n``\'U\'`` makes the string a Unicode string. Unicode strings use the\nUnicode character set as defined by the Unicode Consortium and ISO\n10646. Some additional escape sequences, described below, are\navailable in Unicode strings. The two prefix characters may be\ncombined; in this case, ``\'u\'`` must appear before ``\'r\'``.\n\nIn triple-quoted strings, unescaped newlines and quotes are allowed\n(and are retained), except that three unescaped quotes in a row\nterminate the string. (A "quote" is the character used to open the\nstring, i.e. either ``\'`` or ``"``.)\n\nUnless an ``\'r\'`` or ``\'R\'`` prefix is present, escape sequences in\nstrings are interpreted according to rules similar to those used by\nStandard C. 
The recognized escape sequences are:\n\n+-------------------+-----------------------------------+---------+\n| Escape Sequence | Meaning | Notes |\n+===================+===================================+=========+\n| ``\\newline`` | Ignored | |\n+-------------------+-----------------------------------+---------+\n| ``\\\\`` | Backslash (``\\``) | |\n+-------------------+-----------------------------------+---------+\n| ``\\\'`` | Single quote (``\'``) | |\n+-------------------+-----------------------------------+---------+\n| ``\\"`` | Double quote (``"``) | |\n+-------------------+-----------------------------------+---------+\n| ``\\a`` | ASCII Bell (BEL) | |\n+-------------------+-----------------------------------+---------+\n| ``\\b`` | ASCII Backspace (BS) | |\n+-------------------+-----------------------------------+---------+\n| ``\\f`` | ASCII Formfeed (FF) | |\n+-------------------+-----------------------------------+---------+\n| ``\\n`` | ASCII Linefeed (LF) | |\n+-------------------+-----------------------------------+---------+\n| ``\\N{name}`` | Character named *name* in the | |\n| | Unicode database (Unicode only) | |\n+-------------------+-----------------------------------+---------+\n| ``\\r`` | ASCII Carriage Return (CR) | |\n+-------------------+-----------------------------------+---------+\n| ``\\t`` | ASCII Horizontal Tab (TAB) | |\n+-------------------+-----------------------------------+---------+\n| ``\\uxxxx`` | Character with 16-bit hex value | (1) |\n| | *xxxx* (Unicode only) | |\n+-------------------+-----------------------------------+---------+\n| ``\\Uxxxxxxxx`` | Character with 32-bit hex value | (2) |\n| | *xxxxxxxx* (Unicode only) | |\n+-------------------+-----------------------------------+---------+\n| ``\\v`` | ASCII Vertical Tab (VT) | |\n+-------------------+-----------------------------------+---------+\n| ``\\ooo`` | Character with octal value *ooo* | (3,5) |\n+-------------------+-----------------------------------+---------+\n| ``\\xhh`` | Character with hex value *hh* | (4,5) |\n+-------------------+-----------------------------------+---------+\n\nNotes:\n\n1. Individual code units which form parts of a surrogate pair can be\n encoded using this escape sequence.\n\n2. Any Unicode character can be encoded this way, but characters\n outside the Basic Multilingual Plane (BMP) will be encoded using a\n surrogate pair if Python is compiled to use 16-bit code units (the\n default). Individual code units which form parts of a surrogate\n pair can be encoded using this escape sequence.\n\n3. As in Standard C, up to three octal digits are accepted.\n\n4. Unlike in Standard C, exactly two hex digits are required.\n\n5. In a string literal, hexadecimal and octal escapes denote the byte\n with the given value; it is not necessary that the byte encodes a\n character in the source character set. In a Unicode literal, these\n escapes denote a Unicode character with the given value.\n\nUnlike Standard C, all unrecognized escape sequences are left in the\nstring unchanged, i.e., *the backslash is left in the string*. (This\nbehavior is useful when debugging: if an escape sequence is mistyped,\nthe resulting output is more easily recognized as broken.) 
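[Editor's note: the treatment of unrecognized escapes and raw strings described here can be checked interactively; a short sketch.]

    >>> len('\d')                    # unrecognized escape: the backslash stays
    2
    >>> r'\n' == '\\n'
    True
    >>> len(r'\n')
    2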
It is also\nimportant to note that the escape sequences marked as "(Unicode only)"\nin the table above fall into the category of unrecognized escapes for\nnon-Unicode string literals.\n\nWhen an ``\'r\'`` or ``\'R\'`` prefix is present, a character following a\nbackslash is included in the string without change, and *all\nbackslashes are left in the string*. For example, the string literal\n``r"\\n"`` consists of two characters: a backslash and a lowercase\n``\'n\'``. String quotes can be escaped with a backslash, but the\nbackslash remains in the string; for example, ``r"\\""`` is a valid\nstring literal consisting of two characters: a backslash and a double\nquote; ``r"\\"`` is not a valid string literal (even a raw string\ncannot end in an odd number of backslashes). Specifically, *a raw\nstring cannot end in a single backslash* (since the backslash would\nescape the following quote character). Note also that a single\nbackslash followed by a newline is interpreted as those two characters\nas part of the string, *not* as a line continuation.\n\nWhen an ``\'r\'`` or ``\'R\'`` prefix is used in conjunction with a\n``\'u\'`` or ``\'U\'`` prefix, then the ``\\uXXXX`` and ``\\UXXXXXXXX``\nescape sequences are processed while *all other backslashes are left\nin the string*. For example, the string literal ``ur"\\u0062\\n"``\nconsists of three Unicode characters: \'LATIN SMALL LETTER B\', \'REVERSE\nSOLIDUS\', and \'LATIN SMALL LETTER N\'. Backslashes can be escaped with\na preceding backslash; however, both remain in the string. As a\nresult, ``\\uXXXX`` escape sequences are only recognized when there are\nan odd number of backslashes.\n', + 'string-methods': u'\nString Methods\n**************\n\nBelow are listed the string methods which both 8-bit strings and\nUnicode objects support. Some of them are also available on\n``bytearray`` objects.\n\nIn addition, Python\'s strings support the sequence type methods\ndescribed in the *Sequence Types --- str, unicode, list, tuple,\nbytearray, buffer, xrange* section. To output formatted strings use\ntemplate strings or the ``%`` operator described in the *String\nFormatting Operations* section. Also, see the ``re`` module for string\nfunctions based on regular expressions.\n\nstr.capitalize()\n\n Return a copy of the string with its first character capitalized\n and the rest lowercased.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.center(width[, fillchar])\n\n Return centered in a string of length *width*. Padding is done\n using the specified *fillchar* (default is a space).\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.count(sub[, start[, end]])\n\n Return the number of non-overlapping occurrences of substring *sub*\n in the range [*start*, *end*]. Optional arguments *start* and\n *end* are interpreted as in slice notation.\n\nstr.decode([encoding[, errors]])\n\n Decodes the string using the codec registered for *encoding*.\n *encoding* defaults to the default string encoding. *errors* may\n be given to set a different error handling scheme. 
The default is\n ``\'strict\'``, meaning that encoding errors raise ``UnicodeError``.\n Other possible values are ``\'ignore\'``, ``\'replace\'`` and any other\n name registered via ``codecs.register_error()``, see section *Codec\n Base Classes*.\n\n New in version 2.2.\n\n Changed in version 2.3: Support for other error handling schemes\n added.\n\n Changed in version 2.7: Support for keyword arguments added.\n\nstr.encode([encoding[, errors]])\n\n Return an encoded version of the string. Default encoding is the\n current default string encoding. *errors* may be given to set a\n different error handling scheme. The default for *errors* is\n ``\'strict\'``, meaning that encoding errors raise a\n ``UnicodeError``. Other possible values are ``\'ignore\'``,\n ``\'replace\'``, ``\'xmlcharrefreplace\'``, ``\'backslashreplace\'`` and\n any other name registered via ``codecs.register_error()``, see\n section *Codec Base Classes*. For a list of possible encodings, see\n section *Standard Encodings*.\n\n New in version 2.0.\n\n Changed in version 2.3: Support for ``\'xmlcharrefreplace\'`` and\n ``\'backslashreplace\'`` and other error handling schemes added.\n\n Changed in version 2.7: Support for keyword arguments added.\n\nstr.endswith(suffix[, start[, end]])\n\n Return ``True`` if the string ends with the specified *suffix*,\n otherwise return ``False``. *suffix* can also be a tuple of\n suffixes to look for. With optional *start*, test beginning at\n that position. With optional *end*, stop comparing at that\n position.\n\n Changed in version 2.5: Accept tuples as *suffix*.\n\nstr.expandtabs([tabsize])\n\n Return a copy of the string where all tab characters are replaced\n by one or more spaces, depending on the current column and the\n given tab size. The column number is reset to zero after each\n newline occurring in the string. If *tabsize* is not given, a tab\n size of ``8`` characters is assumed. This doesn\'t understand other\n non-printing characters or escape sequences.\n\nstr.find(sub[, start[, end]])\n\n Return the lowest index in the string where substring *sub* is\n found, such that *sub* is contained in the slice ``s[start:end]``.\n Optional arguments *start* and *end* are interpreted as in slice\n notation. Return ``-1`` if *sub* is not found.\n\n Note: The ``find()`` method should be used only if you need to know the\n position of *sub*. To check if *sub* is a substring or not, use\n the ``in`` operator:\n\n >>> \'Py\' in \'Python\'\n True\n\nstr.format(*args, **kwargs)\n\n Perform a string formatting operation. The string on which this\n method is called can contain literal text or replacement fields\n delimited by braces ``{}``. Each replacement field contains either\n the numeric index of a positional argument, or the name of a\n keyword argument. 
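[Editor's note: a short interactive sketch of ``encode()``/``decode()`` round-tripping and of passing a tuple to ``endswith()``, as described above.]

    >>> u = u'caf\xe9'
    >>> data = u.encode('utf-8')     # unicode -> byte string
    >>> data
    'caf\xc3\xa9'
    >>> data.decode('utf-8') == u
    True
    >>> 'photo.jpeg'.endswith(('.jpg', '.jpeg'))
    True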
Returns a copy of the string where each\n replacement field is replaced with the string value of the\n corresponding argument.\n\n >>> "The sum of 1 + 2 is {0}".format(1+2)\n \'The sum of 1 + 2 is 3\'\n\n See *Format String Syntax* for a description of the various\n formatting options that can be specified in format strings.\n\n This method of string formatting is the new standard in Python 3.0,\n and should be preferred to the ``%`` formatting described in\n *String Formatting Operations* in new code.\n\n New in version 2.6.\n\nstr.index(sub[, start[, end]])\n\n Like ``find()``, but raise ``ValueError`` when the substring is not\n found.\n\nstr.isalnum()\n\n Return true if all characters in the string are alphanumeric and\n there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isalpha()\n\n Return true if all characters in the string are alphabetic and\n there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isdigit()\n\n Return true if all characters in the string are digits and there is\n at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.islower()\n\n Return true if all cased characters in the string are lowercase and\n there is at least one cased character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isspace()\n\n Return true if there are only whitespace characters in the string\n and there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.istitle()\n\n Return true if the string is a titlecased string and there is at\n least one character, for example uppercase characters may only\n follow uncased characters and lowercase characters only cased ones.\n Return false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isupper()\n\n Return true if all cased characters in the string are uppercase and\n there is at least one cased character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.join(iterable)\n\n Return a string which is the concatenation of the strings in the\n *iterable* *iterable*. The separator between elements is the\n string providing this method.\n\nstr.ljust(width[, fillchar])\n\n Return the string left justified in a string of length *width*.\n Padding is done using the specified *fillchar* (default is a\n space). The original string is returned if *width* is less than\n ``len(s)``.\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.lower()\n\n Return a copy of the string converted to lowercase.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.lstrip([chars])\n\n Return a copy of the string with leading characters removed. The\n *chars* argument is a string specifying the set of characters to be\n removed. If omitted or ``None``, the *chars* argument defaults to\n removing whitespace. The *chars* argument is not a prefix; rather,\n all combinations of its values are stripped:\n\n >>> \' spacious \'.lstrip()\n \'spacious \'\n >>> \'www.example.com\'.lstrip(\'cmowz.\')\n \'example.com\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.partition(sep)\n\n Split the string at the first occurrence of *sep*, and return a\n 3-tuple containing the part before the separator, the separator\n itself, and the part after the separator. 
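[Editor's note: ``str.join()`` as described above has no inline example; a quick illustration.]

    >>> '-'.join(['2012', '02', '01'])
    '2012-02-01'
    >>> ', '.join('abc')             # any iterable of strings works
    'a, b, c'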
If the separator is not\n found, return a 3-tuple containing the string itself, followed by\n two empty strings.\n\n New in version 2.5.\n\nstr.replace(old, new[, count])\n\n Return a copy of the string with all occurrences of substring *old*\n replaced by *new*. If the optional argument *count* is given, only\n the first *count* occurrences are replaced.\n\nstr.rfind(sub[, start[, end]])\n\n Return the highest index in the string where substring *sub* is\n found, such that *sub* is contained within ``s[start:end]``.\n Optional arguments *start* and *end* are interpreted as in slice\n notation. Return ``-1`` on failure.\n\nstr.rindex(sub[, start[, end]])\n\n Like ``rfind()`` but raises ``ValueError`` when the substring *sub*\n is not found.\n\nstr.rjust(width[, fillchar])\n\n Return the string right justified in a string of length *width*.\n Padding is done using the specified *fillchar* (default is a\n space). The original string is returned if *width* is less than\n ``len(s)``.\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.rpartition(sep)\n\n Split the string at the last occurrence of *sep*, and return a\n 3-tuple containing the part before the separator, the separator\n itself, and the part after the separator. If the separator is not\n found, return a 3-tuple containing two empty strings, followed by\n the string itself.\n\n New in version 2.5.\n\nstr.rsplit([sep[, maxsplit]])\n\n Return a list of the words in the string, using *sep* as the\n delimiter string. If *maxsplit* is given, at most *maxsplit* splits\n are done, the *rightmost* ones. If *sep* is not specified or\n ``None``, any whitespace string is a separator. Except for\n splitting from the right, ``rsplit()`` behaves like ``split()``\n which is described in detail below.\n\n New in version 2.4.\n\nstr.rstrip([chars])\n\n Return a copy of the string with trailing characters removed. The\n *chars* argument is a string specifying the set of characters to be\n removed. If omitted or ``None``, the *chars* argument defaults to\n removing whitespace. The *chars* argument is not a suffix; rather,\n all combinations of its values are stripped:\n\n >>> \' spacious \'.rstrip()\n \' spacious\'\n >>> \'mississippi\'.rstrip(\'ipz\')\n \'mississ\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.split([sep[, maxsplit]])\n\n Return a list of the words in the string, using *sep* as the\n delimiter string. If *maxsplit* is given, at most *maxsplit*\n splits are done (thus, the list will have at most ``maxsplit+1``\n elements). If *maxsplit* is not specified, then there is no limit\n on the number of splits (all possible splits are made).\n\n If *sep* is given, consecutive delimiters are not grouped together\n and are deemed to delimit empty strings (for example,\n ``\'1,,2\'.split(\',\')`` returns ``[\'1\', \'\', \'2\']``). The *sep*\n argument may consist of multiple characters (for example,\n ``\'1<>2<>3\'.split(\'<>\')`` returns ``[\'1\', \'2\', \'3\']``). Splitting\n an empty string with a specified separator returns ``[\'\']``.\n\n If *sep* is not specified or is ``None``, a different splitting\n algorithm is applied: runs of consecutive whitespace are regarded\n as a single separator, and the result will contain no empty strings\n at the start or end if the string has leading or trailing\n whitespace. 
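[Editor's note: a brief illustration of ``rpartition()`` and right-limited ``rsplit()`` as described above.]

    >>> 'www.example.com'.rpartition('.')
    ('www.example', '.', 'com')
    >>> 'a b c d'.rsplit(None, 1)
    ['a b c', 'd']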
Consequently, splitting an empty string or a string\n consisting of just whitespace with a ``None`` separator returns\n ``[]``.\n\n For example, ``\' 1 2 3 \'.split()`` returns ``[\'1\', \'2\', \'3\']``,\n and ``\' 1 2 3 \'.split(None, 1)`` returns ``[\'1\', \'2 3 \']``.\n\nstr.splitlines([keepends])\n\n Return a list of the lines in the string, breaking at line\n boundaries. Line breaks are not included in the resulting list\n unless *keepends* is given and true.\n\nstr.startswith(prefix[, start[, end]])\n\n Return ``True`` if string starts with the *prefix*, otherwise\n return ``False``. *prefix* can also be a tuple of prefixes to look\n for. With optional *start*, test string beginning at that\n position. With optional *end*, stop comparing string at that\n position.\n\n Changed in version 2.5: Accept tuples as *prefix*.\n\nstr.strip([chars])\n\n Return a copy of the string with the leading and trailing\n characters removed. The *chars* argument is a string specifying the\n set of characters to be removed. If omitted or ``None``, the\n *chars* argument defaults to removing whitespace. The *chars*\n argument is not a prefix or suffix; rather, all combinations of its\n values are stripped:\n\n >>> \' spacious \'.strip()\n \'spacious\'\n >>> \'www.example.com\'.strip(\'cmowz.\')\n \'example\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.swapcase()\n\n Return a copy of the string with uppercase characters converted to\n lowercase and vice versa.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.title()\n\n Return a titlecased version of the string where words start with an\n uppercase character and the remaining characters are lowercase.\n\n The algorithm uses a simple language-independent definition of a\n word as groups of consecutive letters. The definition works in\n many contexts but it means that apostrophes in contractions and\n possessives form word boundaries, which may not be the desired\n result:\n\n >>> "they\'re bill\'s friends from the UK".title()\n "They\'Re Bill\'S Friends From The Uk"\n\n A workaround for apostrophes can be constructed using regular\n expressions:\n\n >>> import re\n >>> def titlecase(s):\n return re.sub(r"[A-Za-z]+(\'[A-Za-z]+)?",\n lambda mo: mo.group(0)[0].upper() +\n mo.group(0)[1:].lower(),\n s)\n\n >>> titlecase("they\'re bill\'s friends.")\n "They\'re Bill\'s Friends."\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.translate(table[, deletechars])\n\n Return a copy of the string where all characters occurring in the\n optional argument *deletechars* are removed, and the remaining\n characters have been mapped through the given translation table,\n which must be a string of length 256.\n\n You can use the ``maketrans()`` helper function in the ``string``\n module to create a translation table. For string objects, set the\n *table* argument to ``None`` for translations that only delete\n characters:\n\n >>> \'read this short text\'.translate(None, \'aeiou\')\n \'rd ths shrt txt\'\n\n New in version 2.6: Support for a ``None`` *table* argument.\n\n For Unicode objects, the ``translate()`` method does not accept the\n optional *deletechars* argument. Instead, it returns a copy of the\n *s* where all characters have been mapped through the given\n translation table which must be a mapping of Unicode ordinals to\n Unicode ordinals, Unicode strings or ``None``. Unmapped characters\n are left untouched. 
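[Editor's note: a brief illustration of ``splitlines()`` and of passing a tuple of prefixes to ``startswith()``, as described above.]

    >>> 'ab c\n\nde fg\rkl\r\n'.splitlines()
    ['ab c', '', 'de fg', 'kl']
    >>> 'Python'.startswith(('Py', 'Ja'))
    True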
Characters mapped to ``None`` are deleted.\n Note, a more flexible approach is to create a custom character\n mapping codec using the ``codecs`` module (see ``encodings.cp1251``\n for an example).\n\nstr.upper()\n\n Return a copy of the string converted to uppercase.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.zfill(width)\n\n Return the numeric string left filled with zeros in a string of\n length *width*. A sign prefix is handled correctly. The original\n string is returned if *width* is less than ``len(s)``.\n\n New in version 2.2.2.\n\nThe following methods are present only on unicode objects:\n\nunicode.isnumeric()\n\n Return ``True`` if there are only numeric characters in S,\n ``False`` otherwise. Numeric characters include digit characters,\n and all characters that have the Unicode numeric value property,\n e.g. U+2155, VULGAR FRACTION ONE FIFTH.\n\nunicode.isdecimal()\n\n Return ``True`` if there are only decimal characters in S,\n ``False`` otherwise. Decimal characters include digit characters,\n and all characters that that can be used to form decimal-radix\n numbers, e.g. U+0660, ARABIC-INDIC DIGIT ZERO.\n', + 'strings': u'\nString literals\n***************\n\nString literals are described by the following lexical definitions:\n\n stringliteral ::= [stringprefix](shortstring | longstring)\n stringprefix ::= "r" | "u" | "ur" | "R" | "U" | "UR" | "Ur" | "uR"\n | "b" | "B" | "br" | "Br" | "bR" | "BR"\n shortstring ::= "\'" shortstringitem* "\'" | \'"\' shortstringitem* \'"\'\n longstring ::= "\'\'\'" longstringitem* "\'\'\'"\n | \'"""\' longstringitem* \'"""\'\n shortstringitem ::= shortstringchar | escapeseq\n longstringitem ::= longstringchar | escapeseq\n shortstringchar ::= \n longstringchar ::= \n escapeseq ::= "\\" \n\nOne syntactic restriction not indicated by these productions is that\nwhitespace is not allowed between the **stringprefix** and the rest of\nthe string literal. The source character set is defined by the\nencoding declaration; it is ASCII if no encoding declaration is given\nin the source file; see section *Encoding declarations*.\n\nIn plain English: String literals can be enclosed in matching single\nquotes (``\'``) or double quotes (``"``). They can also be enclosed in\nmatching groups of three single or double quotes (these are generally\nreferred to as *triple-quoted strings*). The backslash (``\\``)\ncharacter is used to escape characters that otherwise have a special\nmeaning, such as newline, backslash itself, or the quote character.\nString literals may optionally be prefixed with a letter ``\'r\'`` or\n``\'R\'``; such strings are called *raw strings* and use different rules\nfor interpreting backslash escape sequences. A prefix of ``\'u\'`` or\n``\'U\'`` makes the string a Unicode string. Unicode strings use the\nUnicode character set as defined by the Unicode Consortium and ISO\n10646. Some additional escape sequences, described below, are\navailable in Unicode strings. A prefix of ``\'b\'`` or ``\'B\'`` is\nignored in Python 2; it indicates that the literal should become a\nbytes literal in Python 3 (e.g. when code is automatically converted\nwith 2to3). A ``\'u\'`` or ``\'b\'`` prefix may be followed by an ``\'r\'``\nprefix.\n\nIn triple-quoted strings, unescaped newlines and quotes are allowed\n(and are retained), except that three unescaped quotes in a row\nterminate the string. (A "quote" is the character used to open the\nstring, i.e. 
either ``\'`` or ``"``.)\n\nUnless an ``\'r\'`` or ``\'R\'`` prefix is present, escape sequences in\nstrings are interpreted according to rules similar to those used by\nStandard C. The recognized escape sequences are:\n\n+-------------------+-----------------------------------+---------+\n| Escape Sequence | Meaning | Notes |\n+===================+===================================+=========+\n| ``\\newline`` | Ignored | |\n+-------------------+-----------------------------------+---------+\n| ``\\\\`` | Backslash (``\\``) | |\n+-------------------+-----------------------------------+---------+\n| ``\\\'`` | Single quote (``\'``) | |\n+-------------------+-----------------------------------+---------+\n| ``\\"`` | Double quote (``"``) | |\n+-------------------+-----------------------------------+---------+\n| ``\\a`` | ASCII Bell (BEL) | |\n+-------------------+-----------------------------------+---------+\n| ``\\b`` | ASCII Backspace (BS) | |\n+-------------------+-----------------------------------+---------+\n| ``\\f`` | ASCII Formfeed (FF) | |\n+-------------------+-----------------------------------+---------+\n| ``\\n`` | ASCII Linefeed (LF) | |\n+-------------------+-----------------------------------+---------+\n| ``\\N{name}`` | Character named *name* in the | |\n| | Unicode database (Unicode only) | |\n+-------------------+-----------------------------------+---------+\n| ``\\r`` | ASCII Carriage Return (CR) | |\n+-------------------+-----------------------------------+---------+\n| ``\\t`` | ASCII Horizontal Tab (TAB) | |\n+-------------------+-----------------------------------+---------+\n| ``\\uxxxx`` | Character with 16-bit hex value | (1) |\n| | *xxxx* (Unicode only) | |\n+-------------------+-----------------------------------+---------+\n| ``\\Uxxxxxxxx`` | Character with 32-bit hex value | (2) |\n| | *xxxxxxxx* (Unicode only) | |\n+-------------------+-----------------------------------+---------+\n| ``\\v`` | ASCII Vertical Tab (VT) | |\n+-------------------+-----------------------------------+---------+\n| ``\\ooo`` | Character with octal value *ooo* | (3,5) |\n+-------------------+-----------------------------------+---------+\n| ``\\xhh`` | Character with hex value *hh* | (4,5) |\n+-------------------+-----------------------------------+---------+\n\nNotes:\n\n1. Individual code units which form parts of a surrogate pair can be\n encoded using this escape sequence.\n\n2. Any Unicode character can be encoded this way, but characters\n outside the Basic Multilingual Plane (BMP) will be encoded using a\n surrogate pair if Python is compiled to use 16-bit code units (the\n default). Individual code units which form parts of a surrogate\n pair can be encoded using this escape sequence.\n\n3. As in Standard C, up to three octal digits are accepted.\n\n4. Unlike in Standard C, exactly two hex digits are required.\n\n5. In a string literal, hexadecimal and octal escapes denote the byte\n with the given value; it is not necessary that the byte encodes a\n character in the source character set. In a Unicode literal, these\n escapes denote a Unicode character with the given value.\n\nUnlike Standard C, all unrecognized escape sequences are left in the\nstring unchanged, i.e., *the backslash is left in the string*. (This\nbehavior is useful when debugging: if an escape sequence is mistyped,\nthe resulting output is more easily recognized as broken.) 
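[Editor's note: a short interactive sketch of the escape sequences tabulated above and of the ``'b'`` prefix, which Python 2 accepts but ignores.]

    >>> '\x41\101\n' == 'A' + 'A' + chr(10)
    True
    >>> u'\N{LATIN SMALL LETTER A}'
    u'a'
    >>> len(b'bytes')                # the 'b' prefix is accepted but ignored in 2.x
    5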
It is also\nimportant to note that the escape sequences marked as "(Unicode only)"\nin the table above fall into the category of unrecognized escapes for\nnon-Unicode string literals.\n\nWhen an ``\'r\'`` or ``\'R\'`` prefix is present, a character following a\nbackslash is included in the string without change, and *all\nbackslashes are left in the string*. For example, the string literal\n``r"\\n"`` consists of two characters: a backslash and a lowercase\n``\'n\'``. String quotes can be escaped with a backslash, but the\nbackslash remains in the string; for example, ``r"\\""`` is a valid\nstring literal consisting of two characters: a backslash and a double\nquote; ``r"\\"`` is not a valid string literal (even a raw string\ncannot end in an odd number of backslashes). Specifically, *a raw\nstring cannot end in a single backslash* (since the backslash would\nescape the following quote character). Note also that a single\nbackslash followed by a newline is interpreted as those two characters\nas part of the string, *not* as a line continuation.\n\nWhen an ``\'r\'`` or ``\'R\'`` prefix is used in conjunction with a\n``\'u\'`` or ``\'U\'`` prefix, then the ``\\uXXXX`` and ``\\UXXXXXXXX``\nescape sequences are processed while *all other backslashes are left\nin the string*. For example, the string literal ``ur"\\u0062\\n"``\nconsists of three Unicode characters: \'LATIN SMALL LETTER B\', \'REVERSE\nSOLIDUS\', and \'LATIN SMALL LETTER N\'. Backslashes can be escaped with\na preceding backslash; however, both remain in the string. As a\nresult, ``\\uXXXX`` escape sequences are only recognized when there are\nan odd number of backslashes.\n', 'subscriptions': u'\nSubscriptions\n*************\n\nA subscription selects an item of a sequence (string, tuple or list)\nor mapping (dictionary) object:\n\n subscription ::= primary "[" expression_list "]"\n\nThe primary must evaluate to an object of a sequence or mapping type.\n\nIf the primary is a mapping, the expression list must evaluate to an\nobject whose value is one of the keys of the mapping, and the\nsubscription selects the value in the mapping that corresponds to that\nkey. (The expression list is a tuple except if it has exactly one\nitem.)\n\nIf the primary is a sequence, the expression (list) must evaluate to a\nplain integer. If this value is negative, the length of the sequence\nis added to it (so that, e.g., ``x[-1]`` selects the last item of\n``x``.) The resulting value must be a nonnegative integer less than\nthe number of items in the sequence, and the subscription selects the\nitem whose index is that value (counting from zero).\n\nA string\'s items are characters. A character is not a separate data\ntype but a string of exactly one character.\n', 'truth': u"\nTruth Value Testing\n*******************\n\nAny object can be tested for truth value, for use in an ``if`` or\n``while`` condition or as operand of the Boolean operations below. The\nfollowing values are considered false:\n\n* ``None``\n\n* ``False``\n\n* zero of any numeric type, for example, ``0``, ``0L``, ``0.0``,\n ``0j``.\n\n* any empty sequence, for example, ``''``, ``()``, ``[]``.\n\n* any empty mapping, for example, ``{}``.\n\n* instances of user-defined classes, if the class defines a\n ``__nonzero__()`` or ``__len__()`` method, when that method returns\n the integer zero or ``bool`` value ``False``. 
[1]\n\nAll other values are considered true --- so objects of many types are\nalways true.\n\nOperations and built-in functions that have a Boolean result always\nreturn ``0`` or ``False`` for false and ``1`` or ``True`` for true,\nunless otherwise stated. (Important exception: the Boolean operations\n``or`` and ``and`` always return one of their operands.)\n", 'try': u'\nThe ``try`` statement\n*********************\n\nThe ``try`` statement specifies exception handlers and/or cleanup code\nfor a group of statements:\n\n try_stmt ::= try1_stmt | try2_stmt\n try1_stmt ::= "try" ":" suite\n ("except" [expression [("as" | ",") target]] ":" suite)+\n ["else" ":" suite]\n ["finally" ":" suite]\n try2_stmt ::= "try" ":" suite\n "finally" ":" suite\n\nChanged in version 2.5: In previous versions of Python,\n``try``...``except``...``finally`` did not work. ``try``...``except``\nhad to be nested in ``try``...``finally``.\n\nThe ``except`` clause(s) specify one or more exception handlers. When\nno exception occurs in the ``try`` clause, no exception handler is\nexecuted. When an exception occurs in the ``try`` suite, a search for\nan exception handler is started. This search inspects the except\nclauses in turn until one is found that matches the exception. An\nexpression-less except clause, if present, must be last; it matches\nany exception. For an except clause with an expression, that\nexpression is evaluated, and the clause matches the exception if the\nresulting object is "compatible" with the exception. An object is\ncompatible with an exception if it is the class or a base class of the\nexception object, a tuple containing an item compatible with the\nexception, or, in the (deprecated) case of string exceptions, is the\nraised string itself (note that the object identities must match, i.e.\nit must be the same string object, not just a string with the same\nvalue).\n\nIf no except clause matches the exception, the search for an exception\nhandler continues in the surrounding code and on the invocation stack.\n[1]\n\nIf the evaluation of an expression in the header of an except clause\nraises an exception, the original search for a handler is canceled and\na search starts for the new exception in the surrounding code and on\nthe call stack (it is treated as if the entire ``try`` statement\nraised the exception).\n\nWhen a matching except clause is found, the exception is assigned to\nthe target specified in that except clause, if present, and the except\nclause\'s suite is executed. All except clauses must have an\nexecutable block. When the end of this block is reached, execution\ncontinues normally after the entire try statement. (This means that\nif two nested handlers exist for the same exception, and the exception\noccurs in the try clause of the inner handler, the outer handler will\nnot handle the exception.)\n\nBefore an except clause\'s suite is executed, details about the\nexception are assigned to three variables in the ``sys`` module:\n``sys.exc_type`` receives the object identifying the exception;\n``sys.exc_value`` receives the exception\'s parameter;\n``sys.exc_traceback`` receives a traceback object (see section *The\nstandard type hierarchy*) identifying the point in the program where\nthe exception occurred. These details are also available through the\n``sys.exc_info()`` function, which returns a tuple ``(exc_type,\nexc_value, exc_traceback)``. Use of the corresponding variables is\ndeprecated in favor of this function, since their use is unsafe in a\nthreaded program. 
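[Editor's note: a minimal sketch of the ``except``/``else``/``finally`` clauses described above; the output shown is what CPython 2.7 would be expected to print.]

    >>> try:
    ...     1 / 0
    ... except ZeroDivisionError as exc:
    ...     print 'handled:', exc
    ... else:
    ...     print 'no exception was raised'
    ... finally:
    ...     print 'cleanup always runs'
    ...
    handled: integer division or modulo by zero
    cleanup always runs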
As of Python 1.5, the variables are restored to\ntheir previous values (before the call) when returning from a function\nthat handled an exception.\n\nThe optional ``else`` clause is executed if and when control flows off\nthe end of the ``try`` clause. [2] Exceptions in the ``else`` clause\nare not handled by the preceding ``except`` clauses.\n\nIf ``finally`` is present, it specifies a \'cleanup\' handler. The\n``try`` clause is executed, including any ``except`` and ``else``\nclauses. If an exception occurs in any of the clauses and is not\nhandled, the exception is temporarily saved. The ``finally`` clause is\nexecuted. If there is a saved exception, it is re-raised at the end\nof the ``finally`` clause. If the ``finally`` clause raises another\nexception or executes a ``return`` or ``break`` statement, the saved\nexception is lost. The exception information is not available to the\nprogram during execution of the ``finally`` clause.\n\nWhen a ``return``, ``break`` or ``continue`` statement is executed in\nthe ``try`` suite of a ``try``...``finally`` statement, the\n``finally`` clause is also executed \'on the way out.\' A ``continue``\nstatement is illegal in the ``finally`` clause. (The reason is a\nproblem with the current implementation --- this restriction may be\nlifted in the future).\n\nAdditional information on exceptions can be found in section\n*Exceptions*, and information on using the ``raise`` statement to\ngenerate exceptions may be found in section *The raise statement*.\n', - 'types': u'\nThe standard type hierarchy\n***************************\n\nBelow is a list of the types that are built into Python. Extension\nmodules (written in C, Java, or other languages, depending on the\nimplementation) can define additional types. Future versions of\nPython may add types to the type hierarchy (e.g., rational numbers,\nefficiently stored arrays of integers, etc.).\n\nSome of the type descriptions below contain a paragraph listing\n\'special attributes.\' These are attributes that provide access to the\nimplementation and are not intended for general use. Their definition\nmay change in the future.\n\nNone\n This type has a single value. There is a single object with this\n value. This object is accessed through the built-in name ``None``.\n It is used to signify the absence of a value in many situations,\n e.g., it is returned from functions that don\'t explicitly return\n anything. Its truth value is false.\n\nNotImplemented\n This type has a single value. There is a single object with this\n value. This object is accessed through the built-in name\n ``NotImplemented``. Numeric methods and rich comparison methods may\n return this value if they do not implement the operation for the\n operands provided. (The interpreter will then try the reflected\n operation, or some other fallback, depending on the operator.) Its\n truth value is true.\n\nEllipsis\n This type has a single value. There is a single object with this\n value. This object is accessed through the built-in name\n ``Ellipsis``. It is used to indicate the presence of the ``...``\n syntax in a slice. Its truth value is true.\n\n``numbers.Number``\n These are created by numeric literals and returned as results by\n arithmetic operators and arithmetic built-in functions. 
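[Editor's note: a short sketch of the point above that a ``return`` in the ``finally`` clause discards a pending exception; the function name ``f`` is invented for illustration.]

    >>> def f():
    ...     try:
    ...         1 / 0
    ...     finally:
    ...         return 'finally wins'    # the pending ZeroDivisionError is lost
    ...
    >>> f()
    'finally wins'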
Numeric\n objects are immutable; once created their value never changes.\n Python numbers are of course strongly related to mathematical\n numbers, but subject to the limitations of numerical representation\n in computers.\n\n Python distinguishes between integers, floating point numbers, and\n complex numbers:\n\n ``numbers.Integral``\n These represent elements from the mathematical set of integers\n (positive and negative).\n\n There are three types of integers:\n\n Plain integers\n These represent numbers in the range -2147483648 through\n 2147483647. (The range may be larger on machines with a\n larger natural word size, but not smaller.) When the result\n of an operation would fall outside this range, the result is\n normally returned as a long integer (in some cases, the\n exception ``OverflowError`` is raised instead). For the\n purpose of shift and mask operations, integers are assumed to\n have a binary, 2\'s complement notation using 32 or more bits,\n and hiding no bits from the user (i.e., all 4294967296\n different bit patterns correspond to different values).\n\n Long integers\n These represent numbers in an unlimited range, subject to\n available (virtual) memory only. For the purpose of shift\n and mask operations, a binary representation is assumed, and\n negative numbers are represented in a variant of 2\'s\n complement which gives the illusion of an infinite string of\n sign bits extending to the left.\n\n Booleans\n These represent the truth values False and True. The two\n objects representing the values False and True are the only\n Boolean objects. The Boolean type is a subtype of plain\n integers, and Boolean values behave like the values 0 and 1,\n respectively, in almost all contexts, the exception being\n that when converted to a string, the strings ``"False"`` or\n ``"True"`` are returned, respectively.\n\n The rules for integer representation are intended to give the\n most meaningful interpretation of shift and mask operations\n involving negative integers and the least surprises when\n switching between the plain and long integer domains. Any\n operation, if it yields a result in the plain integer domain,\n will yield the same result in the long integer domain or when\n using mixed operands. The switch between domains is transparent\n to the programmer.\n\n ``numbers.Real`` (``float``)\n These represent machine-level double precision floating point\n numbers. You are at the mercy of the underlying machine\n architecture (and C or Java implementation) for the accepted\n range and handling of overflow. Python does not support single-\n precision floating point numbers; the savings in processor and\n memory usage that are usually the reason for using these is\n dwarfed by the overhead of using objects in Python, so there is\n no reason to complicate the language with two kinds of floating\n point numbers.\n\n ``numbers.Complex``\n These represent complex numbers as a pair of machine-level\n double precision floating point numbers. The same caveats apply\n as for floating point numbers. The real and imaginary parts of a\n complex number ``z`` can be retrieved through the read-only\n attributes ``z.real`` and ``z.imag``.\n\nSequences\n These represent finite ordered sets indexed by non-negative\n numbers. The built-in function ``len()`` returns the number of\n items of a sequence. When the length of a sequence is *n*, the\n index set contains the numbers 0, 1, ..., *n*-1. 
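[Editor's note: a brief interactive sketch of the plain integer, long integer and Boolean behaviour described above.]

    >>> import sys
    >>> type(sys.maxint)
    <type 'int'>
    >>> type(sys.maxint + 1)         # results outside the plain int range become longs
    <type 'long'>
    >>> isinstance(True, int)        # bool is a subtype of plain integers
    True
    >>> True + True
    2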
Item *i* of\n sequence *a* is selected by ``a[i]``.\n\n Sequences also support slicing: ``a[i:j]`` selects all items with\n index *k* such that *i* ``<=`` *k* ``<`` *j*. When used as an\n expression, a slice is a sequence of the same type. This implies\n that the index set is renumbered so that it starts at 0.\n\n Some sequences also support "extended slicing" with a third "step"\n parameter: ``a[i:j:k]`` selects all items of *a* with index *x*\n where ``x = i + n*k``, *n* ``>=`` ``0`` and *i* ``<=`` *x* ``<``\n *j*.\n\n Sequences are distinguished according to their mutability:\n\n Immutable sequences\n An object of an immutable sequence type cannot change once it is\n created. (If the object contains references to other objects,\n these other objects may be mutable and may be changed; however,\n the collection of objects directly referenced by an immutable\n object cannot change.)\n\n The following types are immutable sequences:\n\n Strings\n The items of a string are characters. There is no separate\n character type; a character is represented by a string of one\n item. Characters represent (at least) 8-bit bytes. The\n built-in functions ``chr()`` and ``ord()`` convert between\n characters and nonnegative integers representing the byte\n values. Bytes with the values 0-127 usually represent the\n corresponding ASCII values, but the interpretation of values\n is up to the program. The string data type is also used to\n represent arrays of bytes, e.g., to hold data read from a\n file.\n\n (On systems whose native character set is not ASCII, strings\n may use EBCDIC in their internal representation, provided the\n functions ``chr()`` and ``ord()`` implement a mapping between\n ASCII and EBCDIC, and string comparison preserves the ASCII\n order. Or perhaps someone can propose a better rule?)\n\n Unicode\n The items of a Unicode object are Unicode code units. A\n Unicode code unit is represented by a Unicode object of one\n item and can hold either a 16-bit or 32-bit value\n representing a Unicode ordinal (the maximum value for the\n ordinal is given in ``sys.maxunicode``, and depends on how\n Python is configured at compile time). Surrogate pairs may\n be present in the Unicode object, and will be reported as two\n separate items. The built-in functions ``unichr()`` and\n ``ord()`` convert between code units and nonnegative integers\n representing the Unicode ordinals as defined in the Unicode\n Standard 3.0. Conversion from and to other encodings are\n possible through the Unicode method ``encode()`` and the\n built-in function ``unicode()``.\n\n Tuples\n The items of a tuple are arbitrary Python objects. Tuples of\n two or more items are formed by comma-separated lists of\n expressions. A tuple of one item (a \'singleton\') can be\n formed by affixing a comma to an expression (an expression by\n itself does not create a tuple, since parentheses must be\n usable for grouping of expressions). An empty tuple can be\n formed by an empty pair of parentheses.\n\n Mutable sequences\n Mutable sequences can be changed after they are created. The\n subscription and slicing notations can be used as the target of\n assignment and ``del`` (delete) statements.\n\n There are currently two intrinsic mutable sequence types:\n\n Lists\n The items of a list are arbitrary Python objects. Lists are\n formed by placing a comma-separated list of expressions in\n square brackets. 
(Note that there are no special cases needed\n to form lists of length 0 or 1.)\n\n Byte Arrays\n A bytearray object is a mutable array. They are created by\n the built-in ``bytearray()`` constructor. Aside from being\n mutable (and hence unhashable), byte arrays otherwise provide\n the same interface and functionality as immutable bytes\n objects.\n\n The extension module ``array`` provides an additional example of\n a mutable sequence type.\n\nSet types\n These represent unordered, finite sets of unique, immutable\n objects. As such, they cannot be indexed by any subscript. However,\n they can be iterated over, and the built-in function ``len()``\n returns the number of items in a set. Common uses for sets are fast\n membership testing, removing duplicates from a sequence, and\n computing mathematical operations such as intersection, union,\n difference, and symmetric difference.\n\n For set elements, the same immutability rules apply as for\n dictionary keys. Note that numeric types obey the normal rules for\n numeric comparison: if two numbers compare equal (e.g., ``1`` and\n ``1.0``), only one of them can be contained in a set.\n\n There are currently two intrinsic set types:\n\n Sets\n These represent a mutable set. They are created by the built-in\n ``set()`` constructor and can be modified afterwards by several\n methods, such as ``add()``.\n\n Frozen sets\n These represent an immutable set. They are created by the\n built-in ``frozenset()`` constructor. As a frozenset is\n immutable and *hashable*, it can be used again as an element of\n another set, or as a dictionary key.\n\nMappings\n These represent finite sets of objects indexed by arbitrary index\n sets. The subscript notation ``a[k]`` selects the item indexed by\n ``k`` from the mapping ``a``; this can be used in expressions and\n as the target of assignments or ``del`` statements. The built-in\n function ``len()`` returns the number of items in a mapping.\n\n There is currently a single intrinsic mapping type:\n\n Dictionaries\n These represent finite sets of objects indexed by nearly\n arbitrary values. The only types of values not acceptable as\n keys are values containing lists or dictionaries or other\n mutable types that are compared by value rather than by object\n identity, the reason being that the efficient implementation of\n dictionaries requires a key\'s hash value to remain constant.\n Numeric types used for keys obey the normal rules for numeric\n comparison: if two numbers compare equal (e.g., ``1`` and\n ``1.0``) then they can be used interchangeably to index the same\n dictionary entry.\n\n Dictionaries are mutable; they can be created by the ``{...}``\n notation (see section *Dictionary displays*).\n\n The extension modules ``dbm``, ``gdbm``, and ``bsddb`` provide\n additional examples of mapping types.\n\nCallable types\n These are the types to which the function call operation (see\n section *Calls*) can be applied:\n\n User-defined functions\n A user-defined function object is created by a function\n definition (see section *Function definitions*). 
It should be\n called with an argument list containing the same number of items\n as the function\'s formal parameter list.\n\n Special attributes:\n\n +-------------------------+---------------------------------+-------------+\n | Attribute | Meaning | |\n +=========================+=================================+=============+\n | ``func_doc`` | The function\'s documentation | Writable |\n | | string, or ``None`` if | |\n | | unavailable | |\n +-------------------------+---------------------------------+-------------+\n | ``__doc__`` | Another way of spelling | Writable |\n | | ``func_doc`` | |\n +-------------------------+---------------------------------+-------------+\n | ``func_name`` | The function\'s name | Writable |\n +-------------------------+---------------------------------+-------------+\n | ``__name__`` | Another way of spelling | Writable |\n | | ``func_name`` | |\n +-------------------------+---------------------------------+-------------+\n | ``__module__`` | The name of the module the | Writable |\n | | function was defined in, or | |\n | | ``None`` if unavailable. | |\n +-------------------------+---------------------------------+-------------+\n | ``func_defaults`` | A tuple containing default | Writable |\n | | argument values for those | |\n | | arguments that have defaults, | |\n | | or ``None`` if no arguments | |\n | | have a default value | |\n +-------------------------+---------------------------------+-------------+\n | ``func_code`` | The code object representing | Writable |\n | | the compiled function body. | |\n +-------------------------+---------------------------------+-------------+\n | ``func_globals`` | A reference to the dictionary | Read-only |\n | | that holds the function\'s | |\n | | global variables --- the global | |\n | | namespace of the module in | |\n | | which the function was defined. | |\n +-------------------------+---------------------------------+-------------+\n | ``func_dict`` | The namespace supporting | Writable |\n | | arbitrary function attributes. | |\n +-------------------------+---------------------------------+-------------+\n | ``func_closure`` | ``None`` or a tuple of cells | Read-only |\n | | that contain bindings for the | |\n | | function\'s free variables. | |\n +-------------------------+---------------------------------+-------------+\n\n Most of the attributes labelled "Writable" check the type of the\n assigned value.\n\n Changed in version 2.4: ``func_name`` is now writable.\n\n Function objects also support getting and setting arbitrary\n attributes, which can be used, for example, to attach metadata\n to functions. Regular attribute dot-notation is used to get and\n set such attributes. *Note that the current implementation only\n supports function attributes on user-defined functions. 
Function\n attributes on built-in functions may be supported in the\n future.*\n\n Additional information about a function\'s definition can be\n retrieved from its code object; see the description of internal\n types below.\n\n User-defined methods\n A user-defined method object combines a class, a class instance\n (or ``None``) and any callable object (normally a user-defined\n function).\n\n Special read-only attributes: ``im_self`` is the class instance\n object, ``im_func`` is the function object; ``im_class`` is the\n class of ``im_self`` for bound methods or the class that asked\n for the method for unbound methods; ``__doc__`` is the method\'s\n documentation (same as ``im_func.__doc__``); ``__name__`` is the\n method name (same as ``im_func.__name__``); ``__module__`` is\n the name of the module the method was defined in, or ``None`` if\n unavailable.\n\n Changed in version 2.2: ``im_self`` used to refer to the class\n that defined the method.\n\n Changed in version 2.6: For 3.0 forward-compatibility,\n ``im_func`` is also available as ``__func__``, and ``im_self``\n as ``__self__``.\n\n Methods also support accessing (but not setting) the arbitrary\n function attributes on the underlying function object.\n\n User-defined method objects may be created when getting an\n attribute of a class (perhaps via an instance of that class), if\n that attribute is a user-defined function object, an unbound\n user-defined method object, or a class method object. When the\n attribute is a user-defined method object, a new method object\n is only created if the class from which it is being retrieved is\n the same as, or a derived class of, the class stored in the\n original method object; otherwise, the original method object is\n used as it is.\n\n When a user-defined method object is created by retrieving a\n user-defined function object from a class, its ``im_self``\n attribute is ``None`` and the method object is said to be\n unbound. When one is created by retrieving a user-defined\n function object from a class via one of its instances, its\n ``im_self`` attribute is the instance, and the method object is\n said to be bound. In either case, the new method\'s ``im_class``\n attribute is the class from which the retrieval takes place, and\n its ``im_func`` attribute is the original function object.\n\n When a user-defined method object is created by retrieving\n another method object from a class or instance, the behaviour is\n the same as for a function object, except that the ``im_func``\n attribute of the new instance is not the original method object\n but its ``im_func`` attribute.\n\n When a user-defined method object is created by retrieving a\n class method object from a class or instance, its ``im_self``\n attribute is the class itself (the same as the ``im_class``\n attribute), and its ``im_func`` attribute is the function object\n underlying the class method.\n\n When an unbound user-defined method object is called, the\n underlying function (``im_func``) is called, with the\n restriction that the first argument must be an instance of the\n proper class (``im_class``) or of a derived class thereof.\n\n When a bound user-defined method object is called, the\n underlying function (``im_func``) is called, inserting the class\n instance (``im_self``) in front of the argument list. 
For\n instance, when ``C`` is a class which contains a definition for\n a function ``f()``, and ``x`` is an instance of ``C``, calling\n ``x.f(1)`` is equivalent to calling ``C.f(x, 1)``.\n\n When a user-defined method object is derived from a class method\n object, the "class instance" stored in ``im_self`` will actually\n be the class itself, so that calling either ``x.f(1)`` or\n ``C.f(1)`` is equivalent to calling ``f(C,1)`` where ``f`` is\n the underlying function.\n\n Note that the transformation from function object to (unbound or\n bound) method object happens each time the attribute is\n retrieved from the class or instance. In some cases, a fruitful\n optimization is to assign the attribute to a local variable and\n call that local variable. Also notice that this transformation\n only happens for user-defined functions; other callable objects\n (and all non-callable objects) are retrieved without\n transformation. It is also important to note that user-defined\n functions which are attributes of a class instance are not\n converted to bound methods; this *only* happens when the\n function is an attribute of the class.\n\n Generator functions\n A function or method which uses the ``yield`` statement (see\n section *The yield statement*) is called a *generator function*.\n Such a function, when called, always returns an iterator object\n which can be used to execute the body of the function: calling\n the iterator\'s ``next()`` method will cause the function to\n execute until it provides a value using the ``yield`` statement.\n When the function executes a ``return`` statement or falls off\n the end, a ``StopIteration`` exception is raised and the\n iterator will have reached the end of the set of values to be\n returned.\n\n Built-in functions\n A built-in function object is a wrapper around a C function.\n Examples of built-in functions are ``len()`` and ``math.sin()``\n (``math`` is a standard built-in module). The number and type of\n the arguments are determined by the C function. Special read-\n only attributes: ``__doc__`` is the function\'s documentation\n string, or ``None`` if unavailable; ``__name__`` is the\n function\'s name; ``__self__`` is set to ``None`` (but see the\n next item); ``__module__`` is the name of the module the\n function was defined in or ``None`` if unavailable.\n\n Built-in methods\n This is really a different disguise of a built-in function, this\n time containing an object passed to the C function as an\n implicit extra argument. An example of a built-in method is\n ``alist.append()``, assuming *alist* is a list object. In this\n case, the special read-only attribute ``__self__`` is set to the\n object denoted by *list*.\n\n Class Types\n Class types, or "new-style classes," are callable. These\n objects normally act as factories for new instances of\n themselves, but variations are possible for class types that\n override ``__new__()``. The arguments of the call are passed to\n ``__new__()`` and, in the typical case, to ``__init__()`` to\n initialize the new instance.\n\n Classic Classes\n Class objects are described below. When a class object is\n called, a new class instance (also described below) is created\n and returned. This implies a call to the class\'s ``__init__()``\n method if it has one. Any arguments are passed on to the\n ``__init__()`` method. If there is no ``__init__()`` method,\n the class must be called without arguments.\n\n Class instances\n Class instances are described below. 
Class instances are\n callable only when the class has a ``__call__()`` method;\n ``x(arguments)`` is a shorthand for ``x.__call__(arguments)``.\n\nModules\n Modules are imported by the ``import`` statement (see section *The\n import statement*). A module object has a namespace implemented by\n a dictionary object (this is the dictionary referenced by the\n func_globals attribute of functions defined in the module).\n Attribute references are translated to lookups in this dictionary,\n e.g., ``m.x`` is equivalent to ``m.__dict__["x"]``. A module object\n does not contain the code object used to initialize the module\n (since it isn\'t needed once the initialization is done).\n\n Attribute assignment updates the module\'s namespace dictionary,\n e.g., ``m.x = 1`` is equivalent to ``m.__dict__["x"] = 1``.\n\n Special read-only attribute: ``__dict__`` is the module\'s namespace\n as a dictionary object.\n\n Predefined (writable) attributes: ``__name__`` is the module\'s\n name; ``__doc__`` is the module\'s documentation string, or ``None``\n if unavailable; ``__file__`` is the pathname of the file from which\n the module was loaded, if it was loaded from a file. The\n ``__file__`` attribute is not present for C modules that are\n statically linked into the interpreter; for extension modules\n loaded dynamically from a shared library, it is the pathname of the\n shared library file.\n\nClasses\n Both class types (new-style classes) and class objects (old-\n style/classic classes) are typically created by class definitions\n (see section *Class definitions*). A class has a namespace\n implemented by a dictionary object. Class attribute references are\n translated to lookups in this dictionary, e.g., ``C.x`` is\n translated to ``C.__dict__["x"]`` (although for new-style classes\n in particular there are a number of hooks which allow for other\n means of locating attributes). When the attribute name is not found\n there, the attribute search continues in the base classes. For\n old-style classes, the search is depth-first, left-to-right in the\n order of occurrence in the base class list. New-style classes use\n the more complex C3 method resolution order which behaves correctly\n even in the presence of \'diamond\' inheritance structures where\n there are multiple inheritance paths leading back to a common\n ancestor. Additional details on the C3 MRO used by new-style\n classes can be found in the documentation accompanying the 2.3\n release at http://www.python.org/download/releases/2.3/mro/.\n\n When a class attribute reference (for class ``C``, say) would yield\n a user-defined function object or an unbound user-defined method\n object whose associated class is either ``C`` or one of its base\n classes, it is transformed into an unbound user-defined method\n object whose ``im_class`` attribute is ``C``. When it would yield a\n class method object, it is transformed into a bound user-defined\n method object whose ``im_class`` and ``im_self`` attributes are\n both ``C``. 
When it would yield a static method object, it is\n transformed into the object wrapped by the static method object.\n See section *Implementing Descriptors* for another way in which\n attributes retrieved from a class may differ from those actually\n contained in its ``__dict__`` (note that only new-style classes\n support descriptors).\n\n Class attribute assignments update the class\'s dictionary, never\n the dictionary of a base class.\n\n A class object can be called (see above) to yield a class instance\n (see below).\n\n Special attributes: ``__name__`` is the class name; ``__module__``\n is the module name in which the class was defined; ``__dict__`` is\n the dictionary containing the class\'s namespace; ``__bases__`` is a\n tuple (possibly empty or a singleton) containing the base classes,\n in the order of their occurrence in the base class list;\n ``__doc__`` is the class\'s documentation string, or None if\n undefined.\n\nClass instances\n A class instance is created by calling a class object (see above).\n A class instance has a namespace implemented as a dictionary which\n is the first place in which attribute references are searched.\n When an attribute is not found there, and the instance\'s class has\n an attribute by that name, the search continues with the class\n attributes. If a class attribute is found that is a user-defined\n function object or an unbound user-defined method object whose\n associated class is the class (call it ``C``) of the instance for\n which the attribute reference was initiated or one of its bases, it\n is transformed into a bound user-defined method object whose\n ``im_class`` attribute is ``C`` and whose ``im_self`` attribute is\n the instance. Static method and class method objects are also\n transformed, as if they had been retrieved from class ``C``; see\n above under "Classes". See section *Implementing Descriptors* for\n another way in which attributes of a class retrieved via its\n instances may differ from the objects actually stored in the\n class\'s ``__dict__``. If no class attribute is found, and the\n object\'s class has a ``__getattr__()`` method, that is called to\n satisfy the lookup.\n\n Attribute assignments and deletions update the instance\'s\n dictionary, never a class\'s dictionary. If the class has a\n ``__setattr__()`` or ``__delattr__()`` method, this is called\n instead of updating the instance dictionary directly.\n\n Class instances can pretend to be numbers, sequences, or mappings\n if they have methods with certain special names. See section\n *Special method names*.\n\n Special attributes: ``__dict__`` is the attribute dictionary;\n ``__class__`` is the instance\'s class.\n\nFiles\n A file object represents an open file. File objects are created by\n the ``open()`` built-in function, and also by ``os.popen()``,\n ``os.fdopen()``, and the ``makefile()`` method of socket objects\n (and perhaps by other functions or methods provided by extension\n modules). The objects ``sys.stdin``, ``sys.stdout`` and\n ``sys.stderr`` are initialized to file objects corresponding to the\n interpreter\'s standard input, output and error streams. See *File\n Objects* for complete documentation of file objects.\n\nInternal types\n A few types used internally by the interpreter are exposed to the\n user. Their definitions may change with future versions of the\n interpreter, but they are mentioned here for completeness.\n\n Code objects\n Code objects represent *byte-compiled* executable Python code,\n or *bytecode*. 
The difference between a code object and a\n function object is that the function object contains an explicit\n reference to the function\'s globals (the module in which it was\n defined), while a code object contains no context; also the\n default argument values are stored in the function object, not\n in the code object (because they represent values calculated at\n run-time). Unlike function objects, code objects are immutable\n and contain no references (directly or indirectly) to mutable\n objects.\n\n Special read-only attributes: ``co_name`` gives the function\n name; ``co_argcount`` is the number of positional arguments\n (including arguments with default values); ``co_nlocals`` is the\n number of local variables used by the function (including\n arguments); ``co_varnames`` is a tuple containing the names of\n the local variables (starting with the argument names);\n ``co_cellvars`` is a tuple containing the names of local\n variables that are referenced by nested functions;\n ``co_freevars`` is a tuple containing the names of free\n variables; ``co_code`` is a string representing the sequence of\n bytecode instructions; ``co_consts`` is a tuple containing the\n literals used by the bytecode; ``co_names`` is a tuple\n containing the names used by the bytecode; ``co_filename`` is\n the filename from which the code was compiled;\n ``co_firstlineno`` is the first line number of the function;\n ``co_lnotab`` is a string encoding the mapping from bytecode\n offsets to line numbers (for details see the source code of the\n interpreter); ``co_stacksize`` is the required stack size\n (including local variables); ``co_flags`` is an integer encoding\n a number of flags for the interpreter.\n\n The following flag bits are defined for ``co_flags``: bit\n ``0x04`` is set if the function uses the ``*arguments`` syntax\n to accept an arbitrary number of positional arguments; bit\n ``0x08`` is set if the function uses the ``**keywords`` syntax\n to accept arbitrary keyword arguments; bit ``0x20`` is set if\n the function is a generator.\n\n Future feature declarations (``from __future__ import\n division``) also use bits in ``co_flags`` to indicate whether a\n code object was compiled with a particular feature enabled: bit\n ``0x2000`` is set if the function was compiled with future\n division enabled; bits ``0x10`` and ``0x1000`` were used in\n earlier versions of Python.\n\n Other bits in ``co_flags`` are reserved for internal use.\n\n If a code object represents a function, the first item in\n ``co_consts`` is the documentation string of the function, or\n ``None`` if undefined.\n\n Frame objects\n Frame objects represent execution frames. 
They may occur in\n traceback objects (see below).\n\n Special read-only attributes: ``f_back`` is to the previous\n stack frame (towards the caller), or ``None`` if this is the\n bottom stack frame; ``f_code`` is the code object being executed\n in this frame; ``f_locals`` is the dictionary used to look up\n local variables; ``f_globals`` is used for global variables;\n ``f_builtins`` is used for built-in (intrinsic) names;\n ``f_restricted`` is a flag indicating whether the function is\n executing in restricted execution mode; ``f_lasti`` gives the\n precise instruction (this is an index into the bytecode string\n of the code object).\n\n Special writable attributes: ``f_trace``, if not ``None``, is a\n function called at the start of each source code line (this is\n used by the debugger); ``f_exc_type``, ``f_exc_value``,\n ``f_exc_traceback`` represent the last exception raised in the\n parent frame provided another exception was ever raised in the\n current frame (in all other cases they are None); ``f_lineno``\n is the current line number of the frame --- writing to this from\n within a trace function jumps to the given line (only for the\n bottom-most frame). A debugger can implement a Jump command\n (aka Set Next Statement) by writing to f_lineno.\n\n Traceback objects\n Traceback objects represent a stack trace of an exception. A\n traceback object is created when an exception occurs. When the\n search for an exception handler unwinds the execution stack, at\n each unwound level a traceback object is inserted in front of\n the current traceback. When an exception handler is entered,\n the stack trace is made available to the program. (See section\n *The try statement*.) It is accessible as ``sys.exc_traceback``,\n and also as the third item of the tuple returned by\n ``sys.exc_info()``. The latter is the preferred interface,\n since it works correctly when the program is using multiple\n threads. When the program contains no suitable handler, the\n stack trace is written (nicely formatted) to the standard error\n stream; if the interpreter is interactive, it is also made\n available to the user as ``sys.last_traceback``.\n\n Special read-only attributes: ``tb_next`` is the next level in\n the stack trace (towards the frame where the exception\n occurred), or ``None`` if there is no next level; ``tb_frame``\n points to the execution frame of the current level;\n ``tb_lineno`` gives the line number where the exception\n occurred; ``tb_lasti`` indicates the precise instruction. The\n line number and last instruction in the traceback may differ\n from the line number of its frame object if the exception\n occurred in a ``try`` statement with no matching except clause\n or with a finally clause.\n\n Slice objects\n Slice objects are used to represent slices when *extended slice\n syntax* is used. This is a slice using two colons, or multiple\n slices or ellipses separated by commas, e.g., ``a[i:j:step]``,\n ``a[i:j, k:l]``, or ``a[..., i:j]``. They are also created by\n the built-in ``slice()`` function.\n\n Special read-only attributes: ``start`` is the lower bound;\n ``stop`` is the upper bound; ``step`` is the step value; each is\n ``None`` if omitted. These attributes can have any type.\n\n Slice objects support one method:\n\n slice.indices(self, length)\n\n This method takes a single integer argument *length* and\n computes information about the extended slice that the slice\n object would describe if applied to a sequence of *length*\n items. 
It returns a tuple of three integers; respectively\n these are the *start* and *stop* indices and the *step* or\n stride length of the slice. Missing or out-of-bounds indices\n are handled in a manner consistent with regular slices.\n\n New in version 2.3.\n\n Static method objects\n Static method objects provide a way of defeating the\n transformation of function objects to method objects described\n above. A static method object is a wrapper around any other\n object, usually a user-defined method object. When a static\n method object is retrieved from a class or a class instance, the\n object actually returned is the wrapped object, which is not\n subject to any further transformation. Static method objects are\n not themselves callable, although the objects they wrap usually\n are. Static method objects are created by the built-in\n ``staticmethod()`` constructor.\n\n Class method objects\n A class method object, like a static method object, is a wrapper\n around another object that alters the way in which that object\n is retrieved from classes and class instances. The behaviour of\n class method objects upon such retrieval is described above,\n under "User-defined methods". Class method objects are created\n by the built-in ``classmethod()`` constructor.\n', + 'types': u'\nThe standard type hierarchy\n***************************\n\nBelow is a list of the types that are built into Python. Extension\nmodules (written in C, Java, or other languages, depending on the\nimplementation) can define additional types. Future versions of\nPython may add types to the type hierarchy (e.g., rational numbers,\nefficiently stored arrays of integers, etc.).\n\nSome of the type descriptions below contain a paragraph listing\n\'special attributes.\' These are attributes that provide access to the\nimplementation and are not intended for general use. Their definition\nmay change in the future.\n\nNone\n This type has a single value. There is a single object with this\n value. This object is accessed through the built-in name ``None``.\n It is used to signify the absence of a value in many situations,\n e.g., it is returned from functions that don\'t explicitly return\n anything. Its truth value is false.\n\nNotImplemented\n This type has a single value. There is a single object with this\n value. This object is accessed through the built-in name\n ``NotImplemented``. Numeric methods and rich comparison methods may\n return this value if they do not implement the operation for the\n operands provided. (The interpreter will then try the reflected\n operation, or some other fallback, depending on the operator.) Its\n truth value is true.\n\nEllipsis\n This type has a single value. There is a single object with this\n value. This object is accessed through the built-in name\n ``Ellipsis``. It is used to indicate the presence of the ``...``\n syntax in a slice. Its truth value is true.\n\n``numbers.Number``\n These are created by numeric literals and returned as results by\n arithmetic operators and arithmetic built-in functions. 
Numeric\n objects are immutable; once created their value never changes.\n Python numbers are of course strongly related to mathematical\n numbers, but subject to the limitations of numerical representation\n in computers.\n\n Python distinguishes between integers, floating point numbers, and\n complex numbers:\n\n ``numbers.Integral``\n These represent elements from the mathematical set of integers\n (positive and negative).\n\n There are three types of integers:\n\n Plain integers\n These represent numbers in the range -2147483648 through\n 2147483647. (The range may be larger on machines with a\n larger natural word size, but not smaller.) When the result\n of an operation would fall outside this range, the result is\n normally returned as a long integer (in some cases, the\n exception ``OverflowError`` is raised instead). For the\n purpose of shift and mask operations, integers are assumed to\n have a binary, 2\'s complement notation using 32 or more bits,\n and hiding no bits from the user (i.e., all 4294967296\n different bit patterns correspond to different values).\n\n Long integers\n These represent numbers in an unlimited range, subject to\n available (virtual) memory only. For the purpose of shift\n and mask operations, a binary representation is assumed, and\n negative numbers are represented in a variant of 2\'s\n complement which gives the illusion of an infinite string of\n sign bits extending to the left.\n\n Booleans\n These represent the truth values False and True. The two\n objects representing the values False and True are the only\n Boolean objects. The Boolean type is a subtype of plain\n integers, and Boolean values behave like the values 0 and 1,\n respectively, in almost all contexts, the exception being\n that when converted to a string, the strings ``"False"`` or\n ``"True"`` are returned, respectively.\n\n The rules for integer representation are intended to give the\n most meaningful interpretation of shift and mask operations\n involving negative integers and the least surprises when\n switching between the plain and long integer domains. Any\n operation, if it yields a result in the plain integer domain,\n will yield the same result in the long integer domain or when\n using mixed operands. The switch between domains is transparent\n to the programmer.\n\n ``numbers.Real`` (``float``)\n These represent machine-level double precision floating point\n numbers. You are at the mercy of the underlying machine\n architecture (and C or Java implementation) for the accepted\n range and handling of overflow. Python does not support single-\n precision floating point numbers; the savings in processor and\n memory usage that are usually the reason for using these is\n dwarfed by the overhead of using objects in Python, so there is\n no reason to complicate the language with two kinds of floating\n point numbers.\n\n ``numbers.Complex``\n These represent complex numbers as a pair of machine-level\n double precision floating point numbers. The same caveats apply\n as for floating point numbers. The real and imaginary parts of a\n complex number ``z`` can be retrieved through the read-only\n attributes ``z.real`` and ``z.imag``.\n\nSequences\n These represent finite ordered sets indexed by non-negative\n numbers. The built-in function ``len()`` returns the number of\n items of a sequence. When the length of a sequence is *n*, the\n index set contains the numbers 0, 1, ..., *n*-1. 
Item *i* of\n sequence *a* is selected by ``a[i]``.\n\n Sequences also support slicing: ``a[i:j]`` selects all items with\n index *k* such that *i* ``<=`` *k* ``<`` *j*. When used as an\n expression, a slice is a sequence of the same type. This implies\n that the index set is renumbered so that it starts at 0.\n\n Some sequences also support "extended slicing" with a third "step"\n parameter: ``a[i:j:k]`` selects all items of *a* with index *x*\n where ``x = i + n*k``, *n* ``>=`` ``0`` and *i* ``<=`` *x* ``<``\n *j*.\n\n Sequences are distinguished according to their mutability:\n\n Immutable sequences\n An object of an immutable sequence type cannot change once it is\n created. (If the object contains references to other objects,\n these other objects may be mutable and may be changed; however,\n the collection of objects directly referenced by an immutable\n object cannot change.)\n\n The following types are immutable sequences:\n\n Strings\n The items of a string are characters. There is no separate\n character type; a character is represented by a string of one\n item. Characters represent (at least) 8-bit bytes. The\n built-in functions ``chr()`` and ``ord()`` convert between\n characters and nonnegative integers representing the byte\n values. Bytes with the values 0-127 usually represent the\n corresponding ASCII values, but the interpretation of values\n is up to the program. The string data type is also used to\n represent arrays of bytes, e.g., to hold data read from a\n file.\n\n (On systems whose native character set is not ASCII, strings\n may use EBCDIC in their internal representation, provided the\n functions ``chr()`` and ``ord()`` implement a mapping between\n ASCII and EBCDIC, and string comparison preserves the ASCII\n order. Or perhaps someone can propose a better rule?)\n\n Unicode\n The items of a Unicode object are Unicode code units. A\n Unicode code unit is represented by a Unicode object of one\n item and can hold either a 16-bit or 32-bit value\n representing a Unicode ordinal (the maximum value for the\n ordinal is given in ``sys.maxunicode``, and depends on how\n Python is configured at compile time). Surrogate pairs may\n be present in the Unicode object, and will be reported as two\n separate items. The built-in functions ``unichr()`` and\n ``ord()`` convert between code units and nonnegative integers\n representing the Unicode ordinals as defined in the Unicode\n Standard 3.0. Conversion from and to other encodings are\n possible through the Unicode method ``encode()`` and the\n built-in function ``unicode()``.\n\n Tuples\n The items of a tuple are arbitrary Python objects. Tuples of\n two or more items are formed by comma-separated lists of\n expressions. A tuple of one item (a \'singleton\') can be\n formed by affixing a comma to an expression (an expression by\n itself does not create a tuple, since parentheses must be\n usable for grouping of expressions). An empty tuple can be\n formed by an empty pair of parentheses.\n\n Mutable sequences\n Mutable sequences can be changed after they are created. The\n subscription and slicing notations can be used as the target of\n assignment and ``del`` (delete) statements.\n\n There are currently two intrinsic mutable sequence types:\n\n Lists\n The items of a list are arbitrary Python objects. Lists are\n formed by placing a comma-separated list of expressions in\n square brackets. 
(Note that there are no special cases needed\n to form lists of length 0 or 1.)\n\n Byte Arrays\n A bytearray object is a mutable array. They are created by\n the built-in ``bytearray()`` constructor. Aside from being\n mutable (and hence unhashable), byte arrays otherwise provide\n the same interface and functionality as immutable bytes\n objects.\n\n The extension module ``array`` provides an additional example of\n a mutable sequence type.\n\nSet types\n These represent unordered, finite sets of unique, immutable\n objects. As such, they cannot be indexed by any subscript. However,\n they can be iterated over, and the built-in function ``len()``\n returns the number of items in a set. Common uses for sets are fast\n membership testing, removing duplicates from a sequence, and\n computing mathematical operations such as intersection, union,\n difference, and symmetric difference.\n\n For set elements, the same immutability rules apply as for\n dictionary keys. Note that numeric types obey the normal rules for\n numeric comparison: if two numbers compare equal (e.g., ``1`` and\n ``1.0``), only one of them can be contained in a set.\n\n There are currently two intrinsic set types:\n\n Sets\n These represent a mutable set. They are created by the built-in\n ``set()`` constructor and can be modified afterwards by several\n methods, such as ``add()``.\n\n Frozen sets\n These represent an immutable set. They are created by the\n built-in ``frozenset()`` constructor. As a frozenset is\n immutable and *hashable*, it can be used again as an element of\n another set, or as a dictionary key.\n\nMappings\n These represent finite sets of objects indexed by arbitrary index\n sets. The subscript notation ``a[k]`` selects the item indexed by\n ``k`` from the mapping ``a``; this can be used in expressions and\n as the target of assignments or ``del`` statements. The built-in\n function ``len()`` returns the number of items in a mapping.\n\n There is currently a single intrinsic mapping type:\n\n Dictionaries\n These represent finite sets of objects indexed by nearly\n arbitrary values. The only types of values not acceptable as\n keys are values containing lists or dictionaries or other\n mutable types that are compared by value rather than by object\n identity, the reason being that the efficient implementation of\n dictionaries requires a key\'s hash value to remain constant.\n Numeric types used for keys obey the normal rules for numeric\n comparison: if two numbers compare equal (e.g., ``1`` and\n ``1.0``) then they can be used interchangeably to index the same\n dictionary entry.\n\n Dictionaries are mutable; they can be created by the ``{...}``\n notation (see section *Dictionary displays*).\n\n The extension modules ``dbm``, ``gdbm``, and ``bsddb`` provide\n additional examples of mapping types.\n\nCallable types\n These are the types to which the function call operation (see\n section *Calls*) can be applied:\n\n User-defined functions\n A user-defined function object is created by a function\n definition (see section *Function definitions*). 
It should be\n called with an argument list containing the same number of items\n as the function\'s formal parameter list.\n\n Special attributes:\n\n +-------------------------+---------------------------------+-------------+\n | Attribute | Meaning | |\n +=========================+=================================+=============+\n | ``func_doc`` | The function\'s documentation | Writable |\n | | string, or ``None`` if | |\n | | unavailable | |\n +-------------------------+---------------------------------+-------------+\n | ``__doc__`` | Another way of spelling | Writable |\n | | ``func_doc`` | |\n +-------------------------+---------------------------------+-------------+\n | ``func_name`` | The function\'s name | Writable |\n +-------------------------+---------------------------------+-------------+\n | ``__name__`` | Another way of spelling | Writable |\n | | ``func_name`` | |\n +-------------------------+---------------------------------+-------------+\n | ``__module__`` | The name of the module the | Writable |\n | | function was defined in, or | |\n | | ``None`` if unavailable. | |\n +-------------------------+---------------------------------+-------------+\n | ``func_defaults`` | A tuple containing default | Writable |\n | | argument values for those | |\n | | arguments that have defaults, | |\n | | or ``None`` if no arguments | |\n | | have a default value | |\n +-------------------------+---------------------------------+-------------+\n | ``func_code`` | The code object representing | Writable |\n | | the compiled function body. | |\n +-------------------------+---------------------------------+-------------+\n | ``func_globals`` | A reference to the dictionary | Read-only |\n | | that holds the function\'s | |\n | | global variables --- the global | |\n | | namespace of the module in | |\n | | which the function was defined. | |\n +-------------------------+---------------------------------+-------------+\n | ``func_dict`` | The namespace supporting | Writable |\n | | arbitrary function attributes. | |\n +-------------------------+---------------------------------+-------------+\n | ``func_closure`` | ``None`` or a tuple of cells | Read-only |\n | | that contain bindings for the | |\n | | function\'s free variables. | |\n +-------------------------+---------------------------------+-------------+\n\n Most of the attributes labelled "Writable" check the type of the\n assigned value.\n\n Changed in version 2.4: ``func_name`` is now writable.\n\n Function objects also support getting and setting arbitrary\n attributes, which can be used, for example, to attach metadata\n to functions. Regular attribute dot-notation is used to get and\n set such attributes. *Note that the current implementation only\n supports function attributes on user-defined functions. 
Function\n attributes on built-in functions may be supported in the\n future.*\n\n Additional information about a function\'s definition can be\n retrieved from its code object; see the description of internal\n types below.\n\n User-defined methods\n A user-defined method object combines a class, a class instance\n (or ``None``) and any callable object (normally a user-defined\n function).\n\n Special read-only attributes: ``im_self`` is the class instance\n object, ``im_func`` is the function object; ``im_class`` is the\n class of ``im_self`` for bound methods or the class that asked\n for the method for unbound methods; ``__doc__`` is the method\'s\n documentation (same as ``im_func.__doc__``); ``__name__`` is the\n method name (same as ``im_func.__name__``); ``__module__`` is\n the name of the module the method was defined in, or ``None`` if\n unavailable.\n\n Changed in version 2.2: ``im_self`` used to refer to the class\n that defined the method.\n\n Changed in version 2.6: For 3.0 forward-compatibility,\n ``im_func`` is also available as ``__func__``, and ``im_self``\n as ``__self__``.\n\n Methods also support accessing (but not setting) the arbitrary\n function attributes on the underlying function object.\n\n User-defined method objects may be created when getting an\n attribute of a class (perhaps via an instance of that class), if\n that attribute is a user-defined function object, an unbound\n user-defined method object, or a class method object. When the\n attribute is a user-defined method object, a new method object\n is only created if the class from which it is being retrieved is\n the same as, or a derived class of, the class stored in the\n original method object; otherwise, the original method object is\n used as it is.\n\n When a user-defined method object is created by retrieving a\n user-defined function object from a class, its ``im_self``\n attribute is ``None`` and the method object is said to be\n unbound. When one is created by retrieving a user-defined\n function object from a class via one of its instances, its\n ``im_self`` attribute is the instance, and the method object is\n said to be bound. In either case, the new method\'s ``im_class``\n attribute is the class from which the retrieval takes place, and\n its ``im_func`` attribute is the original function object.\n\n When a user-defined method object is created by retrieving\n another method object from a class or instance, the behaviour is\n the same as for a function object, except that the ``im_func``\n attribute of the new instance is not the original method object\n but its ``im_func`` attribute.\n\n When a user-defined method object is created by retrieving a\n class method object from a class or instance, its ``im_self``\n attribute is the class itself (the same as the ``im_class``\n attribute), and its ``im_func`` attribute is the function object\n underlying the class method.\n\n When an unbound user-defined method object is called, the\n underlying function (``im_func``) is called, with the\n restriction that the first argument must be an instance of the\n proper class (``im_class``) or of a derived class thereof.\n\n When a bound user-defined method object is called, the\n underlying function (``im_func``) is called, inserting the class\n instance (``im_self``) in front of the argument list. 
For\n instance, when ``C`` is a class which contains a definition for\n a function ``f()``, and ``x`` is an instance of ``C``, calling\n ``x.f(1)`` is equivalent to calling ``C.f(x, 1)``.\n\n When a user-defined method object is derived from a class method\n object, the "class instance" stored in ``im_self`` will actually\n be the class itself, so that calling either ``x.f(1)`` or\n ``C.f(1)`` is equivalent to calling ``f(C,1)`` where ``f`` is\n the underlying function.\n\n Note that the transformation from function object to (unbound or\n bound) method object happens each time the attribute is\n retrieved from the class or instance. In some cases, a fruitful\n optimization is to assign the attribute to a local variable and\n call that local variable. Also notice that this transformation\n only happens for user-defined functions; other callable objects\n (and all non-callable objects) are retrieved without\n transformation. It is also important to note that user-defined\n functions which are attributes of a class instance are not\n converted to bound methods; this *only* happens when the\n function is an attribute of the class.\n\n Generator functions\n A function or method which uses the ``yield`` statement (see\n section *The yield statement*) is called a *generator function*.\n Such a function, when called, always returns an iterator object\n which can be used to execute the body of the function: calling\n the iterator\'s ``next()`` method will cause the function to\n execute until it provides a value using the ``yield`` statement.\n When the function executes a ``return`` statement or falls off\n the end, a ``StopIteration`` exception is raised and the\n iterator will have reached the end of the set of values to be\n returned.\n\n Built-in functions\n A built-in function object is a wrapper around a C function.\n Examples of built-in functions are ``len()`` and ``math.sin()``\n (``math`` is a standard built-in module). The number and type of\n the arguments are determined by the C function. Special read-\n only attributes: ``__doc__`` is the function\'s documentation\n string, or ``None`` if unavailable; ``__name__`` is the\n function\'s name; ``__self__`` is set to ``None`` (but see the\n next item); ``__module__`` is the name of the module the\n function was defined in or ``None`` if unavailable.\n\n Built-in methods\n This is really a different disguise of a built-in function, this\n time containing an object passed to the C function as an\n implicit extra argument. An example of a built-in method is\n ``alist.append()``, assuming *alist* is a list object. In this\n case, the special read-only attribute ``__self__`` is set to the\n object denoted by *alist*.\n\n Class Types\n Class types, or "new-style classes," are callable. These\n objects normally act as factories for new instances of\n themselves, but variations are possible for class types that\n override ``__new__()``. The arguments of the call are passed to\n ``__new__()`` and, in the typical case, to ``__init__()`` to\n initialize the new instance.\n\n Classic Classes\n Class objects are described below. When a class object is\n called, a new class instance (also described below) is created\n and returned. This implies a call to the class\'s ``__init__()``\n method if it has one. Any arguments are passed on to the\n ``__init__()`` method. If there is no ``__init__()`` method,\n the class must be called without arguments.\n\n Class instances\n Class instances are described below. 
Class instances are\n callable only when the class has a ``__call__()`` method;\n ``x(arguments)`` is a shorthand for ``x.__call__(arguments)``.\n\nModules\n Modules are imported by the ``import`` statement (see section *The\n import statement*). A module object has a namespace implemented by\n a dictionary object (this is the dictionary referenced by the\n func_globals attribute of functions defined in the module).\n Attribute references are translated to lookups in this dictionary,\n e.g., ``m.x`` is equivalent to ``m.__dict__["x"]``. A module object\n does not contain the code object used to initialize the module\n (since it isn\'t needed once the initialization is done).\n\n Attribute assignment updates the module\'s namespace dictionary,\n e.g., ``m.x = 1`` is equivalent to ``m.__dict__["x"] = 1``.\n\n Special read-only attribute: ``__dict__`` is the module\'s namespace\n as a dictionary object.\n\n **CPython implementation detail:** Because of the way CPython\n clears module dictionaries, the module dictionary will be cleared\n when the module falls out of scope even if the dictionary still has\n live references. To avoid this, copy the dictionary or keep the\n module around while using its dictionary directly.\n\n Predefined (writable) attributes: ``__name__`` is the module\'s\n name; ``__doc__`` is the module\'s documentation string, or ``None``\n if unavailable; ``__file__`` is the pathname of the file from which\n the module was loaded, if it was loaded from a file. The\n ``__file__`` attribute is not present for C modules that are\n statically linked into the interpreter; for extension modules\n loaded dynamically from a shared library, it is the pathname of the\n shared library file.\n\nClasses\n Both class types (new-style classes) and class objects (old-\n style/classic classes) are typically created by class definitions\n (see section *Class definitions*). A class has a namespace\n implemented by a dictionary object. Class attribute references are\n translated to lookups in this dictionary, e.g., ``C.x`` is\n translated to ``C.__dict__["x"]`` (although for new-style classes\n in particular there are a number of hooks which allow for other\n means of locating attributes). When the attribute name is not found\n there, the attribute search continues in the base classes. For\n old-style classes, the search is depth-first, left-to-right in the\n order of occurrence in the base class list. New-style classes use\n the more complex C3 method resolution order which behaves correctly\n even in the presence of \'diamond\' inheritance structures where\n there are multiple inheritance paths leading back to a common\n ancestor. Additional details on the C3 MRO used by new-style\n classes can be found in the documentation accompanying the 2.3\n release at http://www.python.org/download/releases/2.3/mro/.\n\n When a class attribute reference (for class ``C``, say) would yield\n a user-defined function object or an unbound user-defined method\n object whose associated class is either ``C`` or one of its base\n classes, it is transformed into an unbound user-defined method\n object whose ``im_class`` attribute is ``C``. When it would yield a\n class method object, it is transformed into a bound user-defined\n method object whose ``im_class`` and ``im_self`` attributes are\n both ``C``. 
When it would yield a static method object, it is\n transformed into the object wrapped by the static method object.\n See section *Implementing Descriptors* for another way in which\n attributes retrieved from a class may differ from those actually\n contained in its ``__dict__`` (note that only new-style classes\n support descriptors).\n\n Class attribute assignments update the class\'s dictionary, never\n the dictionary of a base class.\n\n A class object can be called (see above) to yield a class instance\n (see below).\n\n Special attributes: ``__name__`` is the class name; ``__module__``\n is the module name in which the class was defined; ``__dict__`` is\n the dictionary containing the class\'s namespace; ``__bases__`` is a\n tuple (possibly empty or a singleton) containing the base classes,\n in the order of their occurrence in the base class list;\n ``__doc__`` is the class\'s documentation string, or None if\n undefined.\n\nClass instances\n A class instance is created by calling a class object (see above).\n A class instance has a namespace implemented as a dictionary which\n is the first place in which attribute references are searched.\n When an attribute is not found there, and the instance\'s class has\n an attribute by that name, the search continues with the class\n attributes. If a class attribute is found that is a user-defined\n function object or an unbound user-defined method object whose\n associated class is the class (call it ``C``) of the instance for\n which the attribute reference was initiated or one of its bases, it\n is transformed into a bound user-defined method object whose\n ``im_class`` attribute is ``C`` and whose ``im_self`` attribute is\n the instance. Static method and class method objects are also\n transformed, as if they had been retrieved from class ``C``; see\n above under "Classes". See section *Implementing Descriptors* for\n another way in which attributes of a class retrieved via its\n instances may differ from the objects actually stored in the\n class\'s ``__dict__``. If no class attribute is found, and the\n object\'s class has a ``__getattr__()`` method, that is called to\n satisfy the lookup.\n\n Attribute assignments and deletions update the instance\'s\n dictionary, never a class\'s dictionary. If the class has a\n ``__setattr__()`` or ``__delattr__()`` method, this is called\n instead of updating the instance dictionary directly.\n\n Class instances can pretend to be numbers, sequences, or mappings\n if they have methods with certain special names. See section\n *Special method names*.\n\n Special attributes: ``__dict__`` is the attribute dictionary;\n ``__class__`` is the instance\'s class.\n\nFiles\n A file object represents an open file. File objects are created by\n the ``open()`` built-in function, and also by ``os.popen()``,\n ``os.fdopen()``, and the ``makefile()`` method of socket objects\n (and perhaps by other functions or methods provided by extension\n modules). The objects ``sys.stdin``, ``sys.stdout`` and\n ``sys.stderr`` are initialized to file objects corresponding to the\n interpreter\'s standard input, output and error streams. See *File\n Objects* for complete documentation of file objects.\n\nInternal types\n A few types used internally by the interpreter are exposed to the\n user. Their definitions may change with future versions of the\n interpreter, but they are mentioned here for completeness.\n\n Code objects\n Code objects represent *byte-compiled* executable Python code,\n or *bytecode*. 
The difference between a code object and a\n function object is that the function object contains an explicit\n reference to the function\'s globals (the module in which it was\n defined), while a code object contains no context; also the\n default argument values are stored in the function object, not\n in the code object (because they represent values calculated at\n run-time). Unlike function objects, code objects are immutable\n and contain no references (directly or indirectly) to mutable\n objects.\n\n Special read-only attributes: ``co_name`` gives the function\n name; ``co_argcount`` is the number of positional arguments\n (including arguments with default values); ``co_nlocals`` is the\n number of local variables used by the function (including\n arguments); ``co_varnames`` is a tuple containing the names of\n the local variables (starting with the argument names);\n ``co_cellvars`` is a tuple containing the names of local\n variables that are referenced by nested functions;\n ``co_freevars`` is a tuple containing the names of free\n variables; ``co_code`` is a string representing the sequence of\n bytecode instructions; ``co_consts`` is a tuple containing the\n literals used by the bytecode; ``co_names`` is a tuple\n containing the names used by the bytecode; ``co_filename`` is\n the filename from which the code was compiled;\n ``co_firstlineno`` is the first line number of the function;\n ``co_lnotab`` is a string encoding the mapping from bytecode\n offsets to line numbers (for details see the source code of the\n interpreter); ``co_stacksize`` is the required stack size\n (including local variables); ``co_flags`` is an integer encoding\n a number of flags for the interpreter.\n\n The following flag bits are defined for ``co_flags``: bit\n ``0x04`` is set if the function uses the ``*arguments`` syntax\n to accept an arbitrary number of positional arguments; bit\n ``0x08`` is set if the function uses the ``**keywords`` syntax\n to accept arbitrary keyword arguments; bit ``0x20`` is set if\n the function is a generator.\n\n Future feature declarations (``from __future__ import\n division``) also use bits in ``co_flags`` to indicate whether a\n code object was compiled with a particular feature enabled: bit\n ``0x2000`` is set if the function was compiled with future\n division enabled; bits ``0x10`` and ``0x1000`` were used in\n earlier versions of Python.\n\n Other bits in ``co_flags`` are reserved for internal use.\n\n If a code object represents a function, the first item in\n ``co_consts`` is the documentation string of the function, or\n ``None`` if undefined.\n\n Frame objects\n Frame objects represent execution frames. 
They may occur in\n traceback objects (see below).\n\n Special read-only attributes: ``f_back`` is to the previous\n stack frame (towards the caller), or ``None`` if this is the\n bottom stack frame; ``f_code`` is the code object being executed\n in this frame; ``f_locals`` is the dictionary used to look up\n local variables; ``f_globals`` is used for global variables;\n ``f_builtins`` is used for built-in (intrinsic) names;\n ``f_restricted`` is a flag indicating whether the function is\n executing in restricted execution mode; ``f_lasti`` gives the\n precise instruction (this is an index into the bytecode string\n of the code object).\n\n Special writable attributes: ``f_trace``, if not ``None``, is a\n function called at the start of each source code line (this is\n used by the debugger); ``f_exc_type``, ``f_exc_value``,\n ``f_exc_traceback`` represent the last exception raised in the\n parent frame provided another exception was ever raised in the\n current frame (in all other cases they are None); ``f_lineno``\n is the current line number of the frame --- writing to this from\n within a trace function jumps to the given line (only for the\n bottom-most frame). A debugger can implement a Jump command\n (aka Set Next Statement) by writing to f_lineno.\n\n Traceback objects\n Traceback objects represent a stack trace of an exception. A\n traceback object is created when an exception occurs. When the\n search for an exception handler unwinds the execution stack, at\n each unwound level a traceback object is inserted in front of\n the current traceback. When an exception handler is entered,\n the stack trace is made available to the program. (See section\n *The try statement*.) It is accessible as ``sys.exc_traceback``,\n and also as the third item of the tuple returned by\n ``sys.exc_info()``. The latter is the preferred interface,\n since it works correctly when the program is using multiple\n threads. When the program contains no suitable handler, the\n stack trace is written (nicely formatted) to the standard error\n stream; if the interpreter is interactive, it is also made\n available to the user as ``sys.last_traceback``.\n\n Special read-only attributes: ``tb_next`` is the next level in\n the stack trace (towards the frame where the exception\n occurred), or ``None`` if there is no next level; ``tb_frame``\n points to the execution frame of the current level;\n ``tb_lineno`` gives the line number where the exception\n occurred; ``tb_lasti`` indicates the precise instruction. The\n line number and last instruction in the traceback may differ\n from the line number of its frame object if the exception\n occurred in a ``try`` statement with no matching except clause\n or with a finally clause.\n\n Slice objects\n Slice objects are used to represent slices when *extended slice\n syntax* is used. This is a slice using two colons, or multiple\n slices or ellipses separated by commas, e.g., ``a[i:j:step]``,\n ``a[i:j, k:l]``, or ``a[..., i:j]``. They are also created by\n the built-in ``slice()`` function.\n\n Special read-only attributes: ``start`` is the lower bound;\n ``stop`` is the upper bound; ``step`` is the step value; each is\n ``None`` if omitted. These attributes can have any type.\n\n Slice objects support one method:\n\n slice.indices(self, length)\n\n This method takes a single integer argument *length* and\n computes information about the extended slice that the slice\n object would describe if applied to a sequence of *length*\n items. 
It returns a tuple of three integers; respectively\n these are the *start* and *stop* indices and the *step* or\n stride length of the slice. Missing or out-of-bounds indices\n are handled in a manner consistent with regular slices.\n\n New in version 2.3.\n\n Static method objects\n Static method objects provide a way of defeating the\n transformation of function objects to method objects described\n above. A static method object is a wrapper around any other\n object, usually a user-defined method object. When a static\n method object is retrieved from a class or a class instance, the\n object actually returned is the wrapped object, which is not\n subject to any further transformation. Static method objects are\n not themselves callable, although the objects they wrap usually\n are. Static method objects are created by the built-in\n ``staticmethod()`` constructor.\n\n Class method objects\n A class method object, like a static method object, is a wrapper\n around another object that alters the way in which that object\n is retrieved from classes and class instances. The behaviour of\n class method objects upon such retrieval is described above,\n under "User-defined methods". Class method objects are created\n by the built-in ``classmethod()`` constructor.\n', 'typesfunctions': u'\nFunctions\n*********\n\nFunction objects are created by function definitions. The only\noperation on a function object is to call it: ``func(argument-list)``.\n\nThere are really two flavors of function objects: built-in functions\nand user-defined functions. Both support the same operation (to call\nthe function), but the implementation is different, hence the\ndifferent object types.\n\nSee *Function definitions* for more information.\n', - 'typesmapping': u'\nMapping Types --- ``dict``\n**************************\n\nA *mapping* object maps *hashable* values to arbitrary objects.\nMappings are mutable objects. There is currently only one standard\nmapping type, the *dictionary*. (For other containers see the built\nin ``list``, ``set``, and ``tuple`` classes, and the ``collections``\nmodule.)\n\nA dictionary\'s keys are *almost* arbitrary values. Values that are\nnot *hashable*, that is, values containing lists, dictionaries or\nother mutable types (that are compared by value rather than by object\nidentity) may not be used as keys. Numeric types used for keys obey\nthe normal rules for numeric comparison: if two numbers compare equal\n(such as ``1`` and ``1.0``) then they can be used interchangeably to\nindex the same dictionary entry. (Note however, that since computers\nstore floating-point numbers as approximations it is usually unwise to\nuse them as dictionary keys.)\n\nDictionaries can be created by placing a comma-separated list of\n``key: value`` pairs within braces, for example: ``{\'jack\': 4098,\n\'sjoerd\': 4127}`` or ``{4098: \'jack\', 4127: \'sjoerd\'}``, or by the\n``dict`` constructor.\n\nclass class dict([arg])\n\n Return a new dictionary initialized from an optional positional\n argument or from a set of keyword arguments. If no arguments are\n given, return a new empty dictionary. If the positional argument\n *arg* is a mapping object, return a dictionary mapping the same\n keys to the same values as does the mapping object. Otherwise the\n positional argument must be a sequence, a container that supports\n iteration, or an iterator object. The elements of the argument\n must each also be of one of those kinds, and each must in turn\n contain exactly two objects. 
The first is used as a key in the new\n dictionary, and the second as the key\'s value. If a given key is\n seen more than once, the last value associated with it is retained\n in the new dictionary.\n\n If keyword arguments are given, the keywords themselves with their\n associated values are added as items to the dictionary. If a key is\n specified both in the positional argument and as a keyword\n argument, the value associated with the keyword is retained in the\n dictionary. For example, these all return a dictionary equal to\n ``{"one": 2, "two": 3}``:\n\n * ``dict(one=2, two=3)``\n\n * ``dict({\'one\': 2, \'two\': 3})``\n\n * ``dict(zip((\'one\', \'two\'), (2, 3)))``\n\n * ``dict([[\'two\', 3], [\'one\', 2]])``\n\n The first example only works for keys that are valid Python\n identifiers; the others work with any valid keys.\n\n New in version 2.2.\n\n Changed in version 2.3: Support for building a dictionary from\n keyword arguments added.\n\n These are the operations that dictionaries support (and therefore,\n custom mapping types should support too):\n\n len(d)\n\n Return the number of items in the dictionary *d*.\n\n d[key]\n\n Return the item of *d* with key *key*. Raises a ``KeyError`` if\n *key* is not in the map.\n\n New in version 2.5: If a subclass of dict defines a method\n ``__missing__()``, if the key *key* is not present, the\n ``d[key]`` operation calls that method with the key *key* as\n argument. The ``d[key]`` operation then returns or raises\n whatever is returned or raised by the ``__missing__(key)`` call\n if the key is not present. No other operations or methods invoke\n ``__missing__()``. If ``__missing__()`` is not defined,\n ``KeyError`` is raised. ``__missing__()`` must be a method; it\n cannot be an instance variable. For an example, see\n ``collections.defaultdict``.\n\n d[key] = value\n\n Set ``d[key]`` to *value*.\n\n del d[key]\n\n Remove ``d[key]`` from *d*. Raises a ``KeyError`` if *key* is\n not in the map.\n\n key in d\n\n Return ``True`` if *d* has a key *key*, else ``False``.\n\n New in version 2.2.\n\n key not in d\n\n Equivalent to ``not key in d``.\n\n New in version 2.2.\n\n iter(d)\n\n Return an iterator over the keys of the dictionary. This is a\n shortcut for ``iterkeys()``.\n\n clear()\n\n Remove all items from the dictionary.\n\n copy()\n\n Return a shallow copy of the dictionary.\n\n fromkeys(seq[, value])\n\n Create a new dictionary with keys from *seq* and values set to\n *value*.\n\n ``fromkeys()`` is a class method that returns a new dictionary.\n *value* defaults to ``None``.\n\n New in version 2.3.\n\n get(key[, default])\n\n Return the value for *key* if *key* is in the dictionary, else\n *default*. If *default* is not given, it defaults to ``None``,\n so that this method never raises a ``KeyError``.\n\n has_key(key)\n\n Test for the presence of *key* in the dictionary. ``has_key()``\n is deprecated in favor of ``key in d``.\n\n items()\n\n Return a copy of the dictionary\'s list of ``(key, value)``\n pairs.\n\n **CPython implementation detail:** Keys and values are listed in\n an arbitrary order which is non-random, varies across Python\n implementations, and depends on the dictionary\'s history of\n insertions and deletions.\n\n If ``items()``, ``keys()``, ``values()``, ``iteritems()``,\n ``iterkeys()``, and ``itervalues()`` are called with no\n intervening modifications to the dictionary, the lists will\n directly correspond. 
This allows the creation of ``(value,\n key)`` pairs using ``zip()``: ``pairs = zip(d.values(),\n d.keys())``. The same relationship holds for the ``iterkeys()``\n and ``itervalues()`` methods: ``pairs = zip(d.itervalues(),\n d.iterkeys())`` provides the same value for ``pairs``. Another\n way to create the same list is ``pairs = [(v, k) for (k, v) in\n d.iteritems()]``.\n\n iteritems()\n\n Return an iterator over the dictionary\'s ``(key, value)`` pairs.\n See the note for ``dict.items()``.\n\n Using ``iteritems()`` while adding or deleting entries in the\n dictionary may raise a ``RuntimeError`` or fail to iterate over\n all entries.\n\n New in version 2.2.\n\n iterkeys()\n\n Return an iterator over the dictionary\'s keys. See the note for\n ``dict.items()``.\n\n Using ``iterkeys()`` while adding or deleting entries in the\n dictionary may raise a ``RuntimeError`` or fail to iterate over\n all entries.\n\n New in version 2.2.\n\n itervalues()\n\n Return an iterator over the dictionary\'s values. See the note\n for ``dict.items()``.\n\n Using ``itervalues()`` while adding or deleting entries in the\n dictionary may raise a ``RuntimeError`` or fail to iterate over\n all entries.\n\n New in version 2.2.\n\n keys()\n\n Return a copy of the dictionary\'s list of keys. See the note\n for ``dict.items()``.\n\n pop(key[, default])\n\n If *key* is in the dictionary, remove it and return its value,\n else return *default*. If *default* is not given and *key* is\n not in the dictionary, a ``KeyError`` is raised.\n\n New in version 2.3.\n\n popitem()\n\n Remove and return an arbitrary ``(key, value)`` pair from the\n dictionary.\n\n ``popitem()`` is useful to destructively iterate over a\n dictionary, as often used in set algorithms. If the dictionary\n is empty, calling ``popitem()`` raises a ``KeyError``.\n\n setdefault(key[, default])\n\n If *key* is in the dictionary, return its value. If not, insert\n *key* with a value of *default* and return *default*. *default*\n defaults to ``None``.\n\n update([other])\n\n Update the dictionary with the key/value pairs from *other*,\n overwriting existing keys. Return ``None``.\n\n ``update()`` accepts either another dictionary object or an\n iterable of key/value pairs (as a tuple or other iterable of\n length two). If keyword arguments are specified, the dictionary\n is then updated with those key/value pairs: ``d.update(red=1,\n blue=2)``.\n\n Changed in version 2.4: Allowed the argument to be an iterable\n of key/value pairs and allowed keyword arguments.\n\n values()\n\n Return a copy of the dictionary\'s list of values. See the note\n for ``dict.items()``.\n\n viewitems()\n\n Return a new view of the dictionary\'s items (``(key, value)``\n pairs). See below for documentation of view objects.\n\n New in version 2.7.\n\n viewkeys()\n\n Return a new view of the dictionary\'s keys. See below for\n documentation of view objects.\n\n New in version 2.7.\n\n viewvalues()\n\n Return a new view of the dictionary\'s values. See below for\n documentation of view objects.\n\n New in version 2.7.\n\n\nDictionary view objects\n=======================\n\nThe objects returned by ``dict.viewkeys()``, ``dict.viewvalues()`` and\n``dict.viewitems()`` are *view objects*. 
They provide a dynamic view\non the dictionary\'s entries, which means that when the dictionary\nchanges, the view reflects these changes.\n\nDictionary views can be iterated over to yield their respective data,\nand support membership tests:\n\nlen(dictview)\n\n Return the number of entries in the dictionary.\n\niter(dictview)\n\n Return an iterator over the keys, values or items (represented as\n tuples of ``(key, value)``) in the dictionary.\n\n Keys and values are iterated over in an arbitrary order which is\n non-random, varies across Python implementations, and depends on\n the dictionary\'s history of insertions and deletions. If keys,\n values and items views are iterated over with no intervening\n modifications to the dictionary, the order of items will directly\n correspond. This allows the creation of ``(value, key)`` pairs\n using ``zip()``: ``pairs = zip(d.values(), d.keys())``. Another\n way to create the same list is ``pairs = [(v, k) for (k, v) in\n d.items()]``.\n\n Iterating views while adding or deleting entries in the dictionary\n may raise a ``RuntimeError`` or fail to iterate over all entries.\n\nx in dictview\n\n Return ``True`` if *x* is in the underlying dictionary\'s keys,\n values or items (in the latter case, *x* should be a ``(key,\n value)`` tuple).\n\nKeys views are set-like since their entries are unique and hashable.\nIf all values are hashable, so that (key, value) pairs are unique and\nhashable, then the items view is also set-like. (Values views are not\ntreated as set-like since the entries are generally not unique.) Then\nthese set operations are available ("other" refers either to another\nview or a set):\n\ndictview & other\n\n Return the intersection of the dictview and the other object as a\n new set.\n\ndictview | other\n\n Return the union of the dictview and the other object as a new set.\n\ndictview - other\n\n Return the difference between the dictview and the other object\n (all elements in *dictview* that aren\'t in *other*) as a new set.\n\ndictview ^ other\n\n Return the symmetric difference (all elements either in *dictview*\n or *other*, but not in both) of the dictview and the other object\n as a new set.\n\nAn example of dictionary view usage:\n\n >>> dishes = {\'eggs\': 2, \'sausage\': 1, \'bacon\': 1, \'spam\': 500}\n >>> keys = dishes.viewkeys()\n >>> values = dishes.viewvalues()\n\n >>> # iteration\n >>> n = 0\n >>> for val in values:\n ... n += val\n >>> print(n)\n 504\n\n >>> # keys and values are iterated over in the same order\n >>> list(keys)\n [\'eggs\', \'bacon\', \'sausage\', \'spam\']\n >>> list(values)\n [2, 1, 1, 500]\n\n >>> # view objects are dynamic and reflect dict changes\n >>> del dishes[\'eggs\']\n >>> del dishes[\'sausage\']\n >>> list(keys)\n [\'spam\', \'bacon\']\n\n >>> # set operations\n >>> keys & {\'eggs\', \'bacon\', \'salad\'}\n {\'bacon\'}\n', + 'typesmapping': u'\nMapping Types --- ``dict``\n**************************\n\nA *mapping* object maps *hashable* values to arbitrary objects.\nMappings are mutable objects. There is currently only one standard\nmapping type, the *dictionary*. (For other containers see the built\nin ``list``, ``set``, and ``tuple`` classes, and the ``collections``\nmodule.)\n\nA dictionary\'s keys are *almost* arbitrary values. Values that are\nnot *hashable*, that is, values containing lists, dictionaries or\nother mutable types (that are compared by value rather than by object\nidentity) may not be used as keys. 
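    A small sketch of the dictionary key rules, assuming a Python 2.7
    interpreter: an unhashable (mutable) value is rejected as a key,
    while numbers that compare equal select the same entry:

       >>> d = {}
       >>> d[[1, 2]] = 'x'              # lists are mutable, hence unhashable
       Traceback (most recent call last):
         ...
       TypeError: unhashable type: 'list'
       >>> d[1] = 'one'
       >>> d[1.0]                       # 1 and 1.0 compare equal
       'one'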
Numeric types used for keys obey\nthe normal rules for numeric comparison: if two numbers compare equal\n(such as ``1`` and ``1.0``) then they can be used interchangeably to\nindex the same dictionary entry. (Note however, that since computers\nstore floating-point numbers as approximations it is usually unwise to\nuse them as dictionary keys.)\n\nDictionaries can be created by placing a comma-separated list of\n``key: value`` pairs within braces, for example: ``{\'jack\': 4098,\n\'sjoerd\': 4127}`` or ``{4098: \'jack\', 4127: \'sjoerd\'}``, or by the\n``dict`` constructor.\n\nclass class dict([arg])\n\n Return a new dictionary initialized from an optional positional\n argument or from a set of keyword arguments. If no arguments are\n given, return a new empty dictionary. If the positional argument\n *arg* is a mapping object, return a dictionary mapping the same\n keys to the same values as does the mapping object. Otherwise the\n positional argument must be a sequence, a container that supports\n iteration, or an iterator object. The elements of the argument\n must each also be of one of those kinds, and each must in turn\n contain exactly two objects. The first is used as a key in the new\n dictionary, and the second as the key\'s value. If a given key is\n seen more than once, the last value associated with it is retained\n in the new dictionary.\n\n If keyword arguments are given, the keywords themselves with their\n associated values are added as items to the dictionary. If a key is\n specified both in the positional argument and as a keyword\n argument, the value associated with the keyword is retained in the\n dictionary. For example, these all return a dictionary equal to\n ``{"one": 1, "two": 2}``:\n\n * ``dict(one=1, two=2)``\n\n * ``dict({\'one\': 1, \'two\': 2})``\n\n * ``dict(zip((\'one\', \'two\'), (1, 2)))``\n\n * ``dict([[\'two\', 2], [\'one\', 1]])``\n\n The first example only works for keys that are valid Python\n identifiers; the others work with any valid keys.\n\n New in version 2.2.\n\n Changed in version 2.3: Support for building a dictionary from\n keyword arguments added.\n\n These are the operations that dictionaries support (and therefore,\n custom mapping types should support too):\n\n len(d)\n\n Return the number of items in the dictionary *d*.\n\n d[key]\n\n Return the item of *d* with key *key*. Raises a ``KeyError`` if\n *key* is not in the map.\n\n New in version 2.5: If a subclass of dict defines a method\n ``__missing__()``, if the key *key* is not present, the\n ``d[key]`` operation calls that method with the key *key* as\n argument. The ``d[key]`` operation then returns or raises\n whatever is returned or raised by the ``__missing__(key)`` call\n if the key is not present. No other operations or methods invoke\n ``__missing__()``. If ``__missing__()`` is not defined,\n ``KeyError`` is raised. ``__missing__()`` must be a method; it\n cannot be an instance variable. For an example, see\n ``collections.defaultdict``.\n\n d[key] = value\n\n Set ``d[key]`` to *value*.\n\n del d[key]\n\n Remove ``d[key]`` from *d*. Raises a ``KeyError`` if *key* is\n not in the map.\n\n key in d\n\n Return ``True`` if *d* has a key *key*, else ``False``.\n\n New in version 2.2.\n\n key not in d\n\n Equivalent to ``not key in d``.\n\n New in version 2.2.\n\n iter(d)\n\n Return an iterator over the keys of the dictionary. 
This is a\n shortcut for ``iterkeys()``.\n\n clear()\n\n Remove all items from the dictionary.\n\n copy()\n\n Return a shallow copy of the dictionary.\n\n fromkeys(seq[, value])\n\n Create a new dictionary with keys from *seq* and values set to\n *value*.\n\n ``fromkeys()`` is a class method that returns a new dictionary.\n *value* defaults to ``None``.\n\n New in version 2.3.\n\n get(key[, default])\n\n Return the value for *key* if *key* is in the dictionary, else\n *default*. If *default* is not given, it defaults to ``None``,\n so that this method never raises a ``KeyError``.\n\n has_key(key)\n\n Test for the presence of *key* in the dictionary. ``has_key()``\n is deprecated in favor of ``key in d``.\n\n items()\n\n Return a copy of the dictionary\'s list of ``(key, value)``\n pairs.\n\n **CPython implementation detail:** Keys and values are listed in\n an arbitrary order which is non-random, varies across Python\n implementations, and depends on the dictionary\'s history of\n insertions and deletions.\n\n If ``items()``, ``keys()``, ``values()``, ``iteritems()``,\n ``iterkeys()``, and ``itervalues()`` are called with no\n intervening modifications to the dictionary, the lists will\n directly correspond. This allows the creation of ``(value,\n key)`` pairs using ``zip()``: ``pairs = zip(d.values(),\n d.keys())``. The same relationship holds for the ``iterkeys()``\n and ``itervalues()`` methods: ``pairs = zip(d.itervalues(),\n d.iterkeys())`` provides the same value for ``pairs``. Another\n way to create the same list is ``pairs = [(v, k) for (k, v) in\n d.iteritems()]``.\n\n iteritems()\n\n Return an iterator over the dictionary\'s ``(key, value)`` pairs.\n See the note for ``dict.items()``.\n\n Using ``iteritems()`` while adding or deleting entries in the\n dictionary may raise a ``RuntimeError`` or fail to iterate over\n all entries.\n\n New in version 2.2.\n\n iterkeys()\n\n Return an iterator over the dictionary\'s keys. See the note for\n ``dict.items()``.\n\n Using ``iterkeys()`` while adding or deleting entries in the\n dictionary may raise a ``RuntimeError`` or fail to iterate over\n all entries.\n\n New in version 2.2.\n\n itervalues()\n\n Return an iterator over the dictionary\'s values. See the note\n for ``dict.items()``.\n\n Using ``itervalues()`` while adding or deleting entries in the\n dictionary may raise a ``RuntimeError`` or fail to iterate over\n all entries.\n\n New in version 2.2.\n\n keys()\n\n Return a copy of the dictionary\'s list of keys. See the note\n for ``dict.items()``.\n\n pop(key[, default])\n\n If *key* is in the dictionary, remove it and return its value,\n else return *default*. If *default* is not given and *key* is\n not in the dictionary, a ``KeyError`` is raised.\n\n New in version 2.3.\n\n popitem()\n\n Remove and return an arbitrary ``(key, value)`` pair from the\n dictionary.\n\n ``popitem()`` is useful to destructively iterate over a\n dictionary, as often used in set algorithms. If the dictionary\n is empty, calling ``popitem()`` raises a ``KeyError``.\n\n setdefault(key[, default])\n\n If *key* is in the dictionary, return its value. If not, insert\n *key* with a value of *default* and return *default*. *default*\n defaults to ``None``.\n\n update([other])\n\n Update the dictionary with the key/value pairs from *other*,\n overwriting existing keys. Return ``None``.\n\n ``update()`` accepts either another dictionary object or an\n iterable of key/value pairs (as tuples or other iterables of\n length two). 
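    A brief sketch of a few of these methods in action, assuming a
    Python 2.7 interpreter:

       >>> d = {'red': 1}
       >>> d.get('blue', 0)             # missing key, no KeyError
       0
       >>> d.setdefault('blue', 2)      # inserts and returns the default
       2
       >>> d.pop('red')
       1
       >>> d.update({'green': 3})       # from another dictionary
       >>> d.update([('red', 1)])       # ... or from an iterable of pairs
       >>> sorted(d.items())
       [('blue', 2), ('green', 3), ('red', 1)]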
If keyword arguments are specified, the dictionary\n is then updated with those key/value pairs: ``d.update(red=1,\n blue=2)``.\n\n Changed in version 2.4: Allowed the argument to be an iterable\n of key/value pairs and allowed keyword arguments.\n\n values()\n\n Return a copy of the dictionary\'s list of values. See the note\n for ``dict.items()``.\n\n viewitems()\n\n Return a new view of the dictionary\'s items (``(key, value)``\n pairs). See below for documentation of view objects.\n\n New in version 2.7.\n\n viewkeys()\n\n Return a new view of the dictionary\'s keys. See below for\n documentation of view objects.\n\n New in version 2.7.\n\n viewvalues()\n\n Return a new view of the dictionary\'s values. See below for\n documentation of view objects.\n\n New in version 2.7.\n\n\nDictionary view objects\n=======================\n\nThe objects returned by ``dict.viewkeys()``, ``dict.viewvalues()`` and\n``dict.viewitems()`` are *view objects*. They provide a dynamic view\non the dictionary\'s entries, which means that when the dictionary\nchanges, the view reflects these changes.\n\nDictionary views can be iterated over to yield their respective data,\nand support membership tests:\n\nlen(dictview)\n\n Return the number of entries in the dictionary.\n\niter(dictview)\n\n Return an iterator over the keys, values or items (represented as\n tuples of ``(key, value)``) in the dictionary.\n\n Keys and values are iterated over in an arbitrary order which is\n non-random, varies across Python implementations, and depends on\n the dictionary\'s history of insertions and deletions. If keys,\n values and items views are iterated over with no intervening\n modifications to the dictionary, the order of items will directly\n correspond. This allows the creation of ``(value, key)`` pairs\n using ``zip()``: ``pairs = zip(d.values(), d.keys())``. Another\n way to create the same list is ``pairs = [(v, k) for (k, v) in\n d.items()]``.\n\n Iterating views while adding or deleting entries in the dictionary\n may raise a ``RuntimeError`` or fail to iterate over all entries.\n\nx in dictview\n\n Return ``True`` if *x* is in the underlying dictionary\'s keys,\n values or items (in the latter case, *x* should be a ``(key,\n value)`` tuple).\n\nKeys views are set-like since their entries are unique and hashable.\nIf all values are hashable, so that (key, value) pairs are unique and\nhashable, then the items view is also set-like. (Values views are not\ntreated as set-like since the entries are generally not unique.) Then\nthese set operations are available ("other" refers either to another\nview or a set):\n\ndictview & other\n\n Return the intersection of the dictview and the other object as a\n new set.\n\ndictview | other\n\n Return the union of the dictview and the other object as a new set.\n\ndictview - other\n\n Return the difference between the dictview and the other object\n (all elements in *dictview* that aren\'t in *other*) as a new set.\n\ndictview ^ other\n\n Return the symmetric difference (all elements either in *dictview*\n or *other*, but not in both) of the dictview and the other object\n as a new set.\n\nAn example of dictionary view usage:\n\n >>> dishes = {\'eggs\': 2, \'sausage\': 1, \'bacon\': 1, \'spam\': 500}\n >>> keys = dishes.viewkeys()\n >>> values = dishes.viewvalues()\n\n >>> # iteration\n >>> n = 0\n >>> for val in values:\n ... 
n += val\n >>> print(n)\n 504\n\n >>> # keys and values are iterated over in the same order\n >>> list(keys)\n [\'eggs\', \'bacon\', \'sausage\', \'spam\']\n >>> list(values)\n [2, 1, 1, 500]\n\n >>> # view objects are dynamic and reflect dict changes\n >>> del dishes[\'eggs\']\n >>> del dishes[\'sausage\']\n >>> list(keys)\n [\'spam\', \'bacon\']\n\n >>> # set operations\n >>> keys & {\'eggs\', \'bacon\', \'salad\'}\n {\'bacon\'}\n', 'typesmethods': u"\nMethods\n*******\n\nMethods are functions that are called using the attribute notation.\nThere are two flavors: built-in methods (such as ``append()`` on\nlists) and class instance methods. Built-in methods are described\nwith the types that support them.\n\nThe implementation adds two special read-only attributes to class\ninstance methods: ``m.im_self`` is the object on which the method\noperates, and ``m.im_func`` is the function implementing the method.\nCalling ``m(arg-1, arg-2, ..., arg-n)`` is completely equivalent to\ncalling ``m.im_func(m.im_self, arg-1, arg-2, ..., arg-n)``.\n\nClass instance methods are either *bound* or *unbound*, referring to\nwhether the method was accessed through an instance or a class,\nrespectively. When a method is unbound, its ``im_self`` attribute\nwill be ``None`` and if called, an explicit ``self`` object must be\npassed as the first argument. In this case, ``self`` must be an\ninstance of the unbound method's class (or a subclass of that class),\notherwise a ``TypeError`` is raised.\n\nLike function objects, methods objects support getting arbitrary\nattributes. However, since method attributes are actually stored on\nthe underlying function object (``meth.im_func``), setting method\nattributes on either bound or unbound methods is disallowed.\nAttempting to set a method attribute results in a ``TypeError`` being\nraised. In order to set a method attribute, you need to explicitly\nset it on the underlying function object:\n\n class C:\n def method(self):\n pass\n\n c = C()\n c.method.im_func.whoami = 'my name is c'\n\nSee *The standard type hierarchy* for more information.\n", 'typesmodules': u"\nModules\n*******\n\nThe only special operation on a module is attribute access:\n``m.name``, where *m* is a module and *name* accesses a name defined\nin *m*'s symbol table. Module attributes can be assigned to. (Note\nthat the ``import`` statement is not, strictly speaking, an operation\non a module object; ``import foo`` does not require a module object\nnamed *foo* to exist, rather it requires an (external) *definition*\nfor a module named *foo* somewhere.)\n\nA special member of every module is ``__dict__``. This is the\ndictionary containing the module's symbol table. Modifying this\ndictionary will actually change the module's symbol table, but direct\nassignment to the ``__dict__`` attribute is not possible (you can\nwrite ``m.__dict__['a'] = 1``, which defines ``m.a`` to be ``1``, but\nyou can't write ``m.__dict__ = {}``). Modifying ``__dict__`` directly\nis not recommended.\n\nModules built into the interpreter are written like this: ````. 
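    A small sketch of the relationship between module attributes and
    ``__dict__``, assuming a Python 2.7 interpreter (the module ``m`` is
    created here only for illustration):

       >>> import types
       >>> m = types.ModuleType('m')
       >>> m.__dict__['a'] = 1          # same effect as ``m.a = 1``
       >>> m.a
       1
       >>> m.a == m.__dict__['a']
       True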
If loaded from a file, they are written as\n````.\n", - 'typesseq': u'\nSequence Types --- ``str``, ``unicode``, ``list``, ``tuple``, ``buffer``, ``xrange``\n************************************************************************************\n\nThere are six sequence types: strings, Unicode strings, lists, tuples,\nbuffers, and xrange objects.\n\nFor other containers see the built in ``dict`` and ``set`` classes,\nand the ``collections`` module.\n\nString literals are written in single or double quotes: ``\'xyzzy\'``,\n``"frobozz"``. See *String literals* for more about string literals.\nUnicode strings are much like strings, but are specified in the syntax\nusing a preceding ``\'u\'`` character: ``u\'abc\'``, ``u"def"``. In\naddition to the functionality described here, there are also string-\nspecific methods described in the *String Methods* section. Lists are\nconstructed with square brackets, separating items with commas: ``[a,\nb, c]``. Tuples are constructed by the comma operator (not within\nsquare brackets), with or without enclosing parentheses, but an empty\ntuple must have the enclosing parentheses, such as ``a, b, c`` or\n``()``. A single item tuple must have a trailing comma, such as\n``(d,)``.\n\nBuffer objects are not directly supported by Python syntax, but can be\ncreated by calling the built-in function ``buffer()``. They don\'t\nsupport concatenation or repetition.\n\nObjects of type xrange are similar to buffers in that there is no\nspecific syntax to create them, but they are created using the\n``xrange()`` function. They don\'t support slicing, concatenation or\nrepetition, and using ``in``, ``not in``, ``min()`` or ``max()`` on\nthem is inefficient.\n\nMost sequence types support the following operations. The ``in`` and\n``not in`` operations have the same priorities as the comparison\noperations. The ``+`` and ``*`` operations have the same priority as\nthe corresponding numeric operations. [3] Additional methods are\nprovided for *Mutable Sequence Types*.\n\nThis table lists the sequence operations sorted in ascending priority\n(operations in the same box have the same priority). 
In the table,\n*s* and *t* are sequences of the same type; *n*, *i* and *j* are\nintegers:\n\n+--------------------+----------------------------------+------------+\n| Operation | Result | Notes |\n+====================+==================================+============+\n| ``x in s`` | ``True`` if an item of *s* is | (1) |\n| | equal to *x*, else ``False`` | |\n+--------------------+----------------------------------+------------+\n| ``x not in s`` | ``False`` if an item of *s* is | (1) |\n| | equal to *x*, else ``True`` | |\n+--------------------+----------------------------------+------------+\n| ``s + t`` | the concatenation of *s* and *t* | (6) |\n+--------------------+----------------------------------+------------+\n| ``s * n, n * s`` | *n* shallow copies of *s* | (2) |\n| | concatenated | |\n+--------------------+----------------------------------+------------+\n| ``s[i]`` | *i*\'th item of *s*, origin 0 | (3) |\n+--------------------+----------------------------------+------------+\n| ``s[i:j]`` | slice of *s* from *i* to *j* | (3)(4) |\n+--------------------+----------------------------------+------------+\n| ``s[i:j:k]`` | slice of *s* from *i* to *j* | (3)(5) |\n| | with step *k* | |\n+--------------------+----------------------------------+------------+\n| ``len(s)`` | length of *s* | |\n+--------------------+----------------------------------+------------+\n| ``min(s)`` | smallest item of *s* | |\n+--------------------+----------------------------------+------------+\n| ``max(s)`` | largest item of *s* | |\n+--------------------+----------------------------------+------------+\n\nSequence types also support comparisons. In particular, tuples and\nlists are compared lexicographically by comparing corresponding\nelements. This means that to compare equal, every element must compare\nequal and the two sequences must be of the same type and have the same\nlength. (For full details see *Comparisons* in the language\nreference.)\n\nNotes:\n\n1. When *s* is a string or Unicode string object the ``in`` and ``not\n in`` operations act like a substring test. In Python versions\n before 2.3, *x* had to be a string of length 1. In Python 2.3 and\n beyond, *x* may be a string of any length.\n\n2. Values of *n* less than ``0`` are treated as ``0`` (which yields an\n empty sequence of the same type as *s*). Note also that the copies\n are shallow; nested structures are not copied. This often haunts\n new Python programmers; consider:\n\n >>> lists = [[]] * 3\n >>> lists\n [[], [], []]\n >>> lists[0].append(3)\n >>> lists\n [[3], [3], [3]]\n\n What has happened is that ``[[]]`` is a one-element list containing\n an empty list, so all three elements of ``[[]] * 3`` are (pointers\n to) this single empty list. Modifying any of the elements of\n ``lists`` modifies this single list. You can create a list of\n different lists this way:\n\n >>> lists = [[] for i in range(3)]\n >>> lists[0].append(3)\n >>> lists[1].append(5)\n >>> lists[2].append(7)\n >>> lists\n [[3], [5], [7]]\n\n3. If *i* or *j* is negative, the index is relative to the end of the\n string: ``len(s) + i`` or ``len(s) + j`` is substituted. But note\n that ``-0`` is still ``0``.\n\n4. The slice of *s* from *i* to *j* is defined as the sequence of\n items with index *k* such that ``i <= k < j``. If *i* or *j* is\n greater than ``len(s)``, use ``len(s)``. If *i* is omitted or\n ``None``, use ``0``. If *j* is omitted or ``None``, use\n ``len(s)``. If *i* is greater than or equal to *j*, the slice is\n empty.\n\n5. 
The slice of *s* from *i* to *j* with step *k* is defined as the\n sequence of items with index ``x = i + n*k`` such that ``0 <= n <\n (j-i)/k``. In other words, the indices are ``i``, ``i+k``,\n ``i+2*k``, ``i+3*k`` and so on, stopping when *j* is reached (but\n never including *j*). If *i* or *j* is greater than ``len(s)``,\n use ``len(s)``. If *i* or *j* are omitted or ``None``, they become\n "end" values (which end depends on the sign of *k*). Note, *k*\n cannot be zero. If *k* is ``None``, it is treated like ``1``.\n\n6. **CPython implementation detail:** If *s* and *t* are both strings,\n some Python implementations such as CPython can usually perform an\n in-place optimization for assignments of the form ``s = s + t`` or\n ``s += t``. When applicable, this optimization makes quadratic\n run-time much less likely. This optimization is both version and\n implementation dependent. For performance sensitive code, it is\n preferable to use the ``str.join()`` method which assures\n consistent linear concatenation performance across versions and\n implementations.\n\n Changed in version 2.4: Formerly, string concatenation never\n occurred in-place.\n\n\nString Methods\n==============\n\nBelow are listed the string methods which both 8-bit strings and\nUnicode objects support.\n\nIn addition, Python\'s strings support the sequence type methods\ndescribed in the *Sequence Types --- str, unicode, list, tuple,\nbuffer, xrange* section. To output formatted strings use template\nstrings or the ``%`` operator described in the *String Formatting\nOperations* section. Also, see the ``re`` module for string functions\nbased on regular expressions.\n\nstr.capitalize()\n\n Return a copy of the string with only its first character\n capitalized.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.center(width[, fillchar])\n\n Return centered in a string of length *width*. Padding is done\n using the specified *fillchar* (default is a space).\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.count(sub[, start[, end]])\n\n Return the number of non-overlapping occurrences of substring *sub*\n in the range [*start*, *end*]. Optional arguments *start* and\n *end* are interpreted as in slice notation.\n\nstr.decode([encoding[, errors]])\n\n Decodes the string using the codec registered for *encoding*.\n *encoding* defaults to the default string encoding. *errors* may\n be given to set a different error handling scheme. The default is\n ``\'strict\'``, meaning that encoding errors raise ``UnicodeError``.\n Other possible values are ``\'ignore\'``, ``\'replace\'`` and any other\n name registered via ``codecs.register_error()``, see section *Codec\n Base Classes*.\n\n New in version 2.2.\n\n Changed in version 2.3: Support for other error handling schemes\n added.\n\n Changed in version 2.7: Support for keyword arguments added.\n\nstr.encode([encoding[, errors]])\n\n Return an encoded version of the string. Default encoding is the\n current default string encoding. *errors* may be given to set a\n different error handling scheme. The default for *errors* is\n ``\'strict\'``, meaning that encoding errors raise a\n ``UnicodeError``. Other possible values are ``\'ignore\'``,\n ``\'replace\'``, ``\'xmlcharrefreplace\'``, ``\'backslashreplace\'`` and\n any other name registered via ``codecs.register_error()``, see\n section *Codec Base Classes*. 
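    A short sketch of ``decode()``/``encode()`` round-tripping between
    byte strings and unicode, assuming a Python 2.7 interpreter:

       >>> s = 'caf\xc3\xa9'            # UTF-8 encoded byte string
       >>> u = s.decode('utf-8')
       >>> u
       u'caf\xe9'
       >>> u.encode('utf-8') == s
       True
       >>> u.encode('ascii', 'replace')
       'caf?'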
For a list of possible encodings, see\n section *Standard Encodings*.\n\n New in version 2.0.\n\n Changed in version 2.3: Support for ``\'xmlcharrefreplace\'`` and\n ``\'backslashreplace\'`` and other error handling schemes added.\n\n Changed in version 2.7: Support for keyword arguments added.\n\nstr.endswith(suffix[, start[, end]])\n\n Return ``True`` if the string ends with the specified *suffix*,\n otherwise return ``False``. *suffix* can also be a tuple of\n suffixes to look for. With optional *start*, test beginning at\n that position. With optional *end*, stop comparing at that\n position.\n\n Changed in version 2.5: Accept tuples as *suffix*.\n\nstr.expandtabs([tabsize])\n\n Return a copy of the string where all tab characters are replaced\n by one or more spaces, depending on the current column and the\n given tab size. The column number is reset to zero after each\n newline occurring in the string. If *tabsize* is not given, a tab\n size of ``8`` characters is assumed. This doesn\'t understand other\n non-printing characters or escape sequences.\n\nstr.find(sub[, start[, end]])\n\n Return the lowest index in the string where substring *sub* is\n found, such that *sub* is contained in the slice ``s[start:end]``.\n Optional arguments *start* and *end* are interpreted as in slice\n notation. Return ``-1`` if *sub* is not found.\n\nstr.format(*args, **kwargs)\n\n Perform a string formatting operation. The string on which this\n method is called can contain literal text or replacement fields\n delimited by braces ``{}``. Each replacement field contains either\n the numeric index of a positional argument, or the name of a\n keyword argument. Returns a copy of the string where each\n replacement field is replaced with the string value of the\n corresponding argument.\n\n >>> "The sum of 1 + 2 is {0}".format(1+2)\n \'The sum of 1 + 2 is 3\'\n\n See *Format String Syntax* for a description of the various\n formatting options that can be specified in format strings.\n\n This method of string formatting is the new standard in Python 3.0,\n and should be preferred to the ``%`` formatting described in\n *String Formatting Operations* in new code.\n\n New in version 2.6.\n\nstr.index(sub[, start[, end]])\n\n Like ``find()``, but raise ``ValueError`` when the substring is not\n found.\n\nstr.isalnum()\n\n Return true if all characters in the string are alphanumeric and\n there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isalpha()\n\n Return true if all characters in the string are alphabetic and\n there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isdigit()\n\n Return true if all characters in the string are digits and there is\n at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.islower()\n\n Return true if all cased characters in the string are lowercase and\n there is at least one cased character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isspace()\n\n Return true if there are only whitespace characters in the string\n and there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.istitle()\n\n Return true if the string is a titlecased string and there is at\n least one character, for example uppercase characters may only\n follow uncased characters and lowercase characters only cased ones.\n Return false otherwise.\n\n For 
8-bit strings, this method is locale-dependent.\n\nstr.isupper()\n\n Return true if all cased characters in the string are uppercase and\n there is at least one cased character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.join(iterable)\n\n Return a string which is the concatenation of the strings in the\n *iterable* *iterable*. The separator between elements is the\n string providing this method.\n\nstr.ljust(width[, fillchar])\n\n Return the string left justified in a string of length *width*.\n Padding is done using the specified *fillchar* (default is a\n space). The original string is returned if *width* is less than\n ``len(s)``.\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.lower()\n\n Return a copy of the string converted to lowercase.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.lstrip([chars])\n\n Return a copy of the string with leading characters removed. The\n *chars* argument is a string specifying the set of characters to be\n removed. If omitted or ``None``, the *chars* argument defaults to\n removing whitespace. The *chars* argument is not a prefix; rather,\n all combinations of its values are stripped:\n\n >>> \' spacious \'.lstrip()\n \'spacious \'\n >>> \'www.example.com\'.lstrip(\'cmowz.\')\n \'example.com\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.partition(sep)\n\n Split the string at the first occurrence of *sep*, and return a\n 3-tuple containing the part before the separator, the separator\n itself, and the part after the separator. If the separator is not\n found, return a 3-tuple containing the string itself, followed by\n two empty strings.\n\n New in version 2.5.\n\nstr.replace(old, new[, count])\n\n Return a copy of the string with all occurrences of substring *old*\n replaced by *new*. If the optional argument *count* is given, only\n the first *count* occurrences are replaced.\n\nstr.rfind(sub[, start[, end]])\n\n Return the highest index in the string where substring *sub* is\n found, such that *sub* is contained within ``s[start:end]``.\n Optional arguments *start* and *end* are interpreted as in slice\n notation. Return ``-1`` on failure.\n\nstr.rindex(sub[, start[, end]])\n\n Like ``rfind()`` but raises ``ValueError`` when the substring *sub*\n is not found.\n\nstr.rjust(width[, fillchar])\n\n Return the string right justified in a string of length *width*.\n Padding is done using the specified *fillchar* (default is a\n space). The original string is returned if *width* is less than\n ``len(s)``.\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.rpartition(sep)\n\n Split the string at the last occurrence of *sep*, and return a\n 3-tuple containing the part before the separator, the separator\n itself, and the part after the separator. If the separator is not\n found, return a 3-tuple containing two empty strings, followed by\n the string itself.\n\n New in version 2.5.\n\nstr.rsplit([sep[, maxsplit]])\n\n Return a list of the words in the string, using *sep* as the\n delimiter string. If *maxsplit* is given, at most *maxsplit* splits\n are done, the *rightmost* ones. If *sep* is not specified or\n ``None``, any whitespace string is a separator. Except for\n splitting from the right, ``rsplit()`` behaves like ``split()``\n which is described in detail below.\n\n New in version 2.4.\n\nstr.rstrip([chars])\n\n Return a copy of the string with trailing characters removed. 
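    A quick sketch (assuming a Python 2.7 interpreter) contrasting
    ``rsplit()`` with ``split()``, and showing ``rstrip()`` with and
    without an explicit set of characters:

       >>> 'a,b,c'.rsplit(',', 1)       # rightmost split first
       ['a,b', 'c']
       >>> 'a,b,c'.split(',', 1)
       ['a', 'b,c']
       >>> 'www.example.com  '.rstrip() # default: trailing whitespace
       'www.example.com'
       >>> 'www.example.com'.rstrip('moc.')
       'www.example'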
The\n *chars* argument is a string specifying the set of characters to be\n removed. If omitted or ``None``, the *chars* argument defaults to\n removing whitespace. The *chars* argument is not a suffix; rather,\n all combinations of its values are stripped:\n\n >>> \' spacious \'.rstrip()\n \' spacious\'\n >>> \'mississippi\'.rstrip(\'ipz\')\n \'mississ\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.split([sep[, maxsplit]])\n\n Return a list of the words in the string, using *sep* as the\n delimiter string. If *maxsplit* is given, at most *maxsplit*\n splits are done (thus, the list will have at most ``maxsplit+1``\n elements). If *maxsplit* is not specified, then there is no limit\n on the number of splits (all possible splits are made).\n\n If *sep* is given, consecutive delimiters are not grouped together\n and are deemed to delimit empty strings (for example,\n ``\'1,,2\'.split(\',\')`` returns ``[\'1\', \'\', \'2\']``). The *sep*\n argument may consist of multiple characters (for example,\n ``\'1<>2<>3\'.split(\'<>\')`` returns ``[\'1\', \'2\', \'3\']``). Splitting\n an empty string with a specified separator returns ``[\'\']``.\n\n If *sep* is not specified or is ``None``, a different splitting\n algorithm is applied: runs of consecutive whitespace are regarded\n as a single separator, and the result will contain no empty strings\n at the start or end if the string has leading or trailing\n whitespace. Consequently, splitting an empty string or a string\n consisting of just whitespace with a ``None`` separator returns\n ``[]``.\n\n For example, ``\' 1 2 3 \'.split()`` returns ``[\'1\', \'2\', \'3\']``,\n and ``\' 1 2 3 \'.split(None, 1)`` returns ``[\'1\', \'2 3 \']``.\n\nstr.splitlines([keepends])\n\n Return a list of the lines in the string, breaking at line\n boundaries. Line breaks are not included in the resulting list\n unless *keepends* is given and true.\n\nstr.startswith(prefix[, start[, end]])\n\n Return ``True`` if string starts with the *prefix*, otherwise\n return ``False``. *prefix* can also be a tuple of prefixes to look\n for. With optional *start*, test string beginning at that\n position. With optional *end*, stop comparing string at that\n position.\n\n Changed in version 2.5: Accept tuples as *prefix*.\n\nstr.strip([chars])\n\n Return a copy of the string with the leading and trailing\n characters removed. The *chars* argument is a string specifying the\n set of characters to be removed. If omitted or ``None``, the\n *chars* argument defaults to removing whitespace. The *chars*\n argument is not a prefix or suffix; rather, all combinations of its\n values are stripped:\n\n >>> \' spacious \'.strip()\n \'spacious\'\n >>> \'www.example.com\'.strip(\'cmowz.\')\n \'example\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.swapcase()\n\n Return a copy of the string with uppercase characters converted to\n lowercase and vice versa.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.title()\n\n Return a titlecased version of the string where words start with an\n uppercase character and the remaining characters are lowercase.\n\n The algorithm uses a simple language-independent definition of a\n word as groups of consecutive letters. 
The definition works in\n many contexts but it means that apostrophes in contractions and\n possessives form word boundaries, which may not be the desired\n result:\n\n >>> "they\'re bill\'s friends from the UK".title()\n "They\'Re Bill\'S Friends From The Uk"\n\n A workaround for apostrophes can be constructed using regular\n expressions:\n\n >>> import re\n >>> def titlecase(s):\n return re.sub(r"[A-Za-z]+(\'[A-Za-z]+)?",\n lambda mo: mo.group(0)[0].upper() +\n mo.group(0)[1:].lower(),\n s)\n\n >>> titlecase("they\'re bill\'s friends.")\n "They\'re Bill\'s Friends."\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.translate(table[, deletechars])\n\n Return a copy of the string where all characters occurring in the\n optional argument *deletechars* are removed, and the remaining\n characters have been mapped through the given translation table,\n which must be a string of length 256.\n\n You can use the ``maketrans()`` helper function in the ``string``\n module to create a translation table. For string objects, set the\n *table* argument to ``None`` for translations that only delete\n characters:\n\n >>> \'read this short text\'.translate(None, \'aeiou\')\n \'rd ths shrt txt\'\n\n New in version 2.6: Support for a ``None`` *table* argument.\n\n For Unicode objects, the ``translate()`` method does not accept the\n optional *deletechars* argument. Instead, it returns a copy of the\n *s* where all characters have been mapped through the given\n translation table which must be a mapping of Unicode ordinals to\n Unicode ordinals, Unicode strings or ``None``. Unmapped characters\n are left untouched. Characters mapped to ``None`` are deleted.\n Note, a more flexible approach is to create a custom character\n mapping codec using the ``codecs`` module (see ``encodings.cp1251``\n for an example).\n\nstr.upper()\n\n Return a copy of the string converted to uppercase.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.zfill(width)\n\n Return the numeric string left filled with zeros in a string of\n length *width*. A sign prefix is handled correctly. The original\n string is returned if *width* is less than ``len(s)``.\n\n New in version 2.2.2.\n\nThe following methods are present only on unicode objects:\n\nunicode.isnumeric()\n\n Return ``True`` if there are only numeric characters in S,\n ``False`` otherwise. Numeric characters include digit characters,\n and all characters that have the Unicode numeric value property,\n e.g. U+2155, VULGAR FRACTION ONE FIFTH.\n\nunicode.isdecimal()\n\n Return ``True`` if there are only decimal characters in S,\n ``False`` otherwise. Decimal characters include digit characters,\n and all characters that that can be used to form decimal-radix\n numbers, e.g. U+0660, ARABIC-INDIC DIGIT ZERO.\n\n\nString Formatting Operations\n============================\n\nString and Unicode objects have one unique built-in operation: the\n``%`` operator (modulo). This is also known as the string\n*formatting* or *interpolation* operator. Given ``format % values``\n(where *format* is a string or Unicode object), ``%`` conversion\nspecifications in *format* are replaced with zero or more elements of\n*values*. The effect is similar to the using ``sprintf()`` in the C\nlanguage. If *format* is a Unicode object, or if any of the objects\nbeing converted using the ``%s`` conversion are Unicode objects, the\nresult will also be a Unicode object.\n\nIf *format* requires a single argument, *values* may be a single non-\ntuple object. 
[4] Otherwise, *values* must be a tuple with exactly\nthe number of items specified by the format string, or a single\nmapping object (for example, a dictionary).\n\nA conversion specifier contains two or more characters and has the\nfollowing components, which must occur in this order:\n\n1. The ``\'%\'`` character, which marks the start of the specifier.\n\n2. Mapping key (optional), consisting of a parenthesised sequence of\n characters (for example, ``(somename)``).\n\n3. Conversion flags (optional), which affect the result of some\n conversion types.\n\n4. Minimum field width (optional). If specified as an ``\'*\'``\n (asterisk), the actual width is read from the next element of the\n tuple in *values*, and the object to convert comes after the\n minimum field width and optional precision.\n\n5. Precision (optional), given as a ``\'.\'`` (dot) followed by the\n precision. If specified as ``\'*\'`` (an asterisk), the actual width\n is read from the next element of the tuple in *values*, and the\n value to convert comes after the precision.\n\n6. Length modifier (optional).\n\n7. Conversion type.\n\nWhen the right argument is a dictionary (or other mapping type), then\nthe formats in the string *must* include a parenthesised mapping key\ninto that dictionary inserted immediately after the ``\'%\'`` character.\nThe mapping key selects the value to be formatted from the mapping.\nFor example:\n\n>>> print \'%(language)s has %(#)03d quote types.\' % \\\n... {\'language\': "Python", "#": 2}\nPython has 002 quote types.\n\nIn this case no ``*`` specifiers may occur in a format (since they\nrequire a sequential parameter list).\n\nThe conversion flag characters are:\n\n+-----------+-----------------------------------------------------------------------+\n| Flag | Meaning |\n+===========+=======================================================================+\n| ``\'#\'`` | The value conversion will use the "alternate form" (where defined |\n| | below). |\n+-----------+-----------------------------------------------------------------------+\n| ``\'0\'`` | The conversion will be zero padded for numeric values. |\n+-----------+-----------------------------------------------------------------------+\n| ``\'-\'`` | The converted value is left adjusted (overrides the ``\'0\'`` |\n| | conversion if both are given). |\n+-----------+-----------------------------------------------------------------------+\n| ``\' \'`` | (a space) A blank should be left before a positive number (or empty |\n| | string) produced by a signed conversion. |\n+-----------+-----------------------------------------------------------------------+\n| ``\'+\'`` | A sign character (``\'+\'`` or ``\'-\'``) will precede the conversion |\n| | (overrides a "space" flag). |\n+-----------+-----------------------------------------------------------------------+\n\nA length modifier (``h``, ``l``, or ``L``) may be present, but is\nignored as it is not necessary for Python -- so e.g. ``%ld`` is\nidentical to ``%d``.\n\nThe conversion types are:\n\n+--------------+-------------------------------------------------------+---------+\n| Conversion | Meaning | Notes |\n+==============+=======================================================+=========+\n| ``\'d\'`` | Signed integer decimal. | |\n+--------------+-------------------------------------------------------+---------+\n| ``\'i\'`` | Signed integer decimal. | |\n+--------------+-------------------------------------------------------+---------+\n| ``\'o\'`` | Signed octal value. 
| (1) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'u\'`` | Obsolete type -- it is identical to ``\'d\'``. | (7) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'x\'`` | Signed hexadecimal (lowercase). | (2) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'X\'`` | Signed hexadecimal (uppercase). | (2) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'e\'`` | Floating point exponential format (lowercase). | (3) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'E\'`` | Floating point exponential format (uppercase). | (3) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'f\'`` | Floating point decimal format. | (3) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'F\'`` | Floating point decimal format. | (3) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'g\'`` | Floating point format. Uses lowercase exponential | (4) |\n| | format if exponent is less than -4 or not less than | |\n| | precision, decimal format otherwise. | |\n+--------------+-------------------------------------------------------+---------+\n| ``\'G\'`` | Floating point format. Uses uppercase exponential | (4) |\n| | format if exponent is less than -4 or not less than | |\n| | precision, decimal format otherwise. | |\n+--------------+-------------------------------------------------------+---------+\n| ``\'c\'`` | Single character (accepts integer or single character | |\n| | string). | |\n+--------------+-------------------------------------------------------+---------+\n| ``\'r\'`` | String (converts any Python object using ``repr()``). | (5) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'s\'`` | String (converts any Python object using ``str()``). | (6) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'%\'`` | No argument is converted, results in a ``\'%\'`` | |\n| | character in the result. | |\n+--------------+-------------------------------------------------------+---------+\n\nNotes:\n\n1. The alternate form causes a leading zero (``\'0\'``) to be inserted\n between left-hand padding and the formatting of the number if the\n leading character of the result is not already a zero.\n\n2. The alternate form causes a leading ``\'0x\'`` or ``\'0X\'`` (depending\n on whether the ``\'x\'`` or ``\'X\'`` format was used) to be inserted\n between left-hand padding and the formatting of the number if the\n leading character of the result is not already a zero.\n\n3. The alternate form causes the result to always contain a decimal\n point, even if no digits follow it.\n\n The precision determines the number of digits after the decimal\n point and defaults to 6.\n\n4. The alternate form causes the result to always contain a decimal\n point, and trailing zeroes are not removed as they would otherwise\n be.\n\n The precision determines the number of significant digits before\n and after the decimal point and defaults to 6.\n\n5. The ``%r`` conversion was added in Python 2.0.\n\n The precision determines the maximal number of characters used.\n\n6. 
If the object or format provided is a ``unicode`` string, the\n resulting string will also be ``unicode``.\n\n The precision determines the maximal number of characters used.\n\n7. See **PEP 237**.\n\nSince Python strings have an explicit length, ``%s`` conversions do\nnot assume that ``\'\\0\'`` is the end of the string.\n\nChanged in version 2.7: ``%f`` conversions for numbers whose absolute\nvalue is over 1e50 are no longer replaced by ``%g`` conversions.\n\nAdditional string operations are defined in standard modules\n``string`` and ``re``.\n\n\nXRange Type\n===========\n\nThe ``xrange`` type is an immutable sequence which is commonly used\nfor looping. The advantage of the ``xrange`` type is that an\n``xrange`` object will always take the same amount of memory, no\nmatter the size of the range it represents. There are no consistent\nperformance advantages.\n\nXRange objects have very little behavior: they only support indexing,\niteration, and the ``len()`` function.\n\n\nMutable Sequence Types\n======================\n\nList objects support additional operations that allow in-place\nmodification of the object. Other mutable sequence types (when added\nto the language) should also support these operations. Strings and\ntuples are immutable sequence types: such objects cannot be modified\nonce created. The following operations are defined on mutable sequence\ntypes (where *x* is an arbitrary object):\n\n+--------------------------------+----------------------------------+-----------------------+\n| Operation | Result | Notes |\n+================================+==================================+=======================+\n| ``s[i] = x`` | item *i* of *s* is replaced by | |\n| | *x* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s[i:j] = t`` | slice of *s* from *i* to *j* is | |\n| | replaced by the contents of the | |\n| | iterable *t* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``del s[i:j]`` | same as ``s[i:j] = []`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s[i:j:k] = t`` | the elements of ``s[i:j:k]`` are | (1) |\n| | replaced by those of *t* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``del s[i:j:k]`` | removes the elements of | |\n| | ``s[i:j:k]`` from the list | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.append(x)`` | same as ``s[len(s):len(s)] = | (2) |\n| | [x]`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.extend(x)`` | same as ``s[len(s):len(s)] = x`` | (3) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.count(x)`` | return number of *i*\'s for which | |\n| | ``s[i] == x`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.index(x[, i[, j]])`` | return smallest *k* such that | (4) |\n| | ``s[k] == x`` and ``i <= k < j`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.insert(i, x)`` | same as ``s[i:i] = [x]`` | (5) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.pop([i])`` | same as ``x = s[i]; del s[i]; | (6) |\n| | return x`` | 
|\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.remove(x)`` | same as ``del s[s.index(x)]`` | (4) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.reverse()`` | reverses the items of *s* in | (7) |\n| | place | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.sort([cmp[, key[, | sort the items of *s* in place | (7)(8)(9)(10) |\n| reverse]]])`` | | |\n+--------------------------------+----------------------------------+-----------------------+\n\nNotes:\n\n1. *t* must have the same length as the slice it is replacing.\n\n2. The C implementation of Python has historically accepted multiple\n parameters and implicitly joined them into a tuple; this no longer\n works in Python 2.0. Use of this misfeature has been deprecated\n since Python 1.4.\n\n3. *x* can be any iterable object.\n\n4. Raises ``ValueError`` when *x* is not found in *s*. When a negative\n index is passed as the second or third parameter to the ``index()``\n method, the list length is added, as for slice indices. If it is\n still negative, it is truncated to zero, as for slice indices.\n\n Changed in version 2.3: Previously, ``index()`` didn\'t have\n arguments for specifying start and stop positions.\n\n5. When a negative index is passed as the first parameter to the\n ``insert()`` method, the list length is added, as for slice\n indices. If it is still negative, it is truncated to zero, as for\n slice indices.\n\n Changed in version 2.3: Previously, all negative indices were\n truncated to zero.\n\n6. The ``pop()`` method is only supported by the list and array types.\n The optional argument *i* defaults to ``-1``, so that by default\n the last item is removed and returned.\n\n7. The ``sort()`` and ``reverse()`` methods modify the list in place\n for economy of space when sorting or reversing a large list. To\n remind you that they operate by side effect, they don\'t return the\n sorted or reversed list.\n\n8. The ``sort()`` method takes optional arguments for controlling the\n comparisons.\n\n *cmp* specifies a custom comparison function of two arguments (list\n items) which should return a negative, zero or positive number\n depending on whether the first argument is considered smaller than,\n equal to, or larger than the second argument: ``cmp=lambda x,y:\n cmp(x.lower(), y.lower())``. The default value is ``None``.\n\n *key* specifies a function of one argument that is used to extract\n a comparison key from each list element: ``key=str.lower``. The\n default value is ``None``.\n\n *reverse* is a boolean value. If set to ``True``, then the list\n elements are sorted as if each comparison were reversed.\n\n In general, the *key* and *reverse* conversion processes are much\n faster than specifying an equivalent *cmp* function. This is\n because *cmp* is called multiple times for each list element while\n *key* and *reverse* touch each element only once. Use\n ``functools.cmp_to_key()`` to convert an old-style *cmp* function\n to a *key* function.\n\n Changed in version 2.3: Support for ``None`` as an equivalent to\n omitting *cmp* was added.\n\n Changed in version 2.4: Support for *key* and *reverse* was added.\n\n9. Starting with Python 2.3, the ``sort()`` method is guaranteed to be\n stable. 
A sort is stable if it guarantees not to change the\n relative order of elements that compare equal --- this is helpful\n for sorting in multiple passes (for example, sort by department,\n then by salary grade).\n\n10. **CPython implementation detail:** While a list is being sorted,\n the effect of attempting to mutate, or even inspect, the list is\n undefined. The C implementation of Python 2.3 and newer makes the\n list appear empty for the duration, and raises ``ValueError`` if\n it can detect that the list has been mutated during a sort.\n', - 'typesseq-mutable': u"\nMutable Sequence Types\n**********************\n\nList objects support additional operations that allow in-place\nmodification of the object. Other mutable sequence types (when added\nto the language) should also support these operations. Strings and\ntuples are immutable sequence types: such objects cannot be modified\nonce created. The following operations are defined on mutable sequence\ntypes (where *x* is an arbitrary object):\n\n+--------------------------------+----------------------------------+-----------------------+\n| Operation | Result | Notes |\n+================================+==================================+=======================+\n| ``s[i] = x`` | item *i* of *s* is replaced by | |\n| | *x* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s[i:j] = t`` | slice of *s* from *i* to *j* is | |\n| | replaced by the contents of the | |\n| | iterable *t* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``del s[i:j]`` | same as ``s[i:j] = []`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s[i:j:k] = t`` | the elements of ``s[i:j:k]`` are | (1) |\n| | replaced by those of *t* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``del s[i:j:k]`` | removes the elements of | |\n| | ``s[i:j:k]`` from the list | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.append(x)`` | same as ``s[len(s):len(s)] = | (2) |\n| | [x]`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.extend(x)`` | same as ``s[len(s):len(s)] = x`` | (3) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.count(x)`` | return number of *i*'s for which | |\n| | ``s[i] == x`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.index(x[, i[, j]])`` | return smallest *k* such that | (4) |\n| | ``s[k] == x`` and ``i <= k < j`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.insert(i, x)`` | same as ``s[i:i] = [x]`` | (5) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.pop([i])`` | same as ``x = s[i]; del s[i]; | (6) |\n| | return x`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.remove(x)`` | same as ``del s[s.index(x)]`` | (4) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.reverse()`` | reverses the items of *s* in | (7) |\n| | place | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.sort([cmp[, key[, | sort the items of *s* in place | (7)(8)(9)(10) 
|\n| reverse]]])`` | | |\n+--------------------------------+----------------------------------+-----------------------+\n\nNotes:\n\n1. *t* must have the same length as the slice it is replacing.\n\n2. The C implementation of Python has historically accepted multiple\n parameters and implicitly joined them into a tuple; this no longer\n works in Python 2.0. Use of this misfeature has been deprecated\n since Python 1.4.\n\n3. *x* can be any iterable object.\n\n4. Raises ``ValueError`` when *x* is not found in *s*. When a negative\n index is passed as the second or third parameter to the ``index()``\n method, the list length is added, as for slice indices. If it is\n still negative, it is truncated to zero, as for slice indices.\n\n Changed in version 2.3: Previously, ``index()`` didn't have\n arguments for specifying start and stop positions.\n\n5. When a negative index is passed as the first parameter to the\n ``insert()`` method, the list length is added, as for slice\n indices. If it is still negative, it is truncated to zero, as for\n slice indices.\n\n Changed in version 2.3: Previously, all negative indices were\n truncated to zero.\n\n6. The ``pop()`` method is only supported by the list and array types.\n The optional argument *i* defaults to ``-1``, so that by default\n the last item is removed and returned.\n\n7. The ``sort()`` and ``reverse()`` methods modify the list in place\n for economy of space when sorting or reversing a large list. To\n remind you that they operate by side effect, they don't return the\n sorted or reversed list.\n\n8. The ``sort()`` method takes optional arguments for controlling the\n comparisons.\n\n *cmp* specifies a custom comparison function of two arguments (list\n items) which should return a negative, zero or positive number\n depending on whether the first argument is considered smaller than,\n equal to, or larger than the second argument: ``cmp=lambda x,y:\n cmp(x.lower(), y.lower())``. The default value is ``None``.\n\n *key* specifies a function of one argument that is used to extract\n a comparison key from each list element: ``key=str.lower``. The\n default value is ``None``.\n\n *reverse* is a boolean value. If set to ``True``, then the list\n elements are sorted as if each comparison were reversed.\n\n In general, the *key* and *reverse* conversion processes are much\n faster than specifying an equivalent *cmp* function. This is\n because *cmp* is called multiple times for each list element while\n *key* and *reverse* touch each element only once. Use\n ``functools.cmp_to_key()`` to convert an old-style *cmp* function\n to a *key* function.\n\n Changed in version 2.3: Support for ``None`` as an equivalent to\n omitting *cmp* was added.\n\n Changed in version 2.4: Support for *key* and *reverse* was added.\n\n9. Starting with Python 2.3, the ``sort()`` method is guaranteed to be\n stable. A sort is stable if it guarantees not to change the\n relative order of elements that compare equal --- this is helpful\n for sorting in multiple passes (for example, sort by department,\n then by salary grade).\n\n10. **CPython implementation detail:** While a list is being sorted,\n the effect of attempting to mutate, or even inspect, the list is\n undefined. 
The C implementation of Python 2.3 and newer makes the\n list appear empty for the duration, and raises ``ValueError`` if\n it can detect that the list has been mutated during a sort.\n", + 'typesseq': u'\nSequence Types --- ``str``, ``unicode``, ``list``, ``tuple``, ``bytearray``, ``buffer``, ``xrange``\n***************************************************************************************************\n\nThere are seven sequence types: strings, Unicode strings, lists,\ntuples, bytearrays, buffers, and xrange objects.\n\nFor other containers see the built in ``dict`` and ``set`` classes,\nand the ``collections`` module.\n\nString literals are written in single or double quotes: ``\'xyzzy\'``,\n``"frobozz"``. See *String literals* for more about string literals.\nUnicode strings are much like strings, but are specified in the syntax\nusing a preceding ``\'u\'`` character: ``u\'abc\'``, ``u"def"``. In\naddition to the functionality described here, there are also string-\nspecific methods described in the *String Methods* section. Lists are\nconstructed with square brackets, separating items with commas: ``[a,\nb, c]``. Tuples are constructed by the comma operator (not within\nsquare brackets), with or without enclosing parentheses, but an empty\ntuple must have the enclosing parentheses, such as ``a, b, c`` or\n``()``. A single item tuple must have a trailing comma, such as\n``(d,)``.\n\nBytearray objects are created with the built-in function\n``bytearray()``.\n\nBuffer objects are not directly supported by Python syntax, but can be\ncreated by calling the built-in function ``buffer()``. They don\'t\nsupport concatenation or repetition.\n\nObjects of type xrange are similar to buffers in that there is no\nspecific syntax to create them, but they are created using the\n``xrange()`` function. They don\'t support slicing, concatenation or\nrepetition, and using ``in``, ``not in``, ``min()`` or ``max()`` on\nthem is inefficient.\n\nMost sequence types support the following operations. The ``in`` and\n``not in`` operations have the same priorities as the comparison\noperations. The ``+`` and ``*`` operations have the same priority as\nthe corresponding numeric operations. [3] Additional methods are\nprovided for *Mutable Sequence Types*.\n\nThis table lists the sequence operations sorted in ascending priority\n(operations in the same box have the same priority). 
In the table,\n*s* and *t* are sequences of the same type; *n*, *i* and *j* are\nintegers:\n\n+--------------------+----------------------------------+------------+\n| Operation | Result | Notes |\n+====================+==================================+============+\n| ``x in s`` | ``True`` if an item of *s* is | (1) |\n| | equal to *x*, else ``False`` | |\n+--------------------+----------------------------------+------------+\n| ``x not in s`` | ``False`` if an item of *s* is | (1) |\n| | equal to *x*, else ``True`` | |\n+--------------------+----------------------------------+------------+\n| ``s + t`` | the concatenation of *s* and *t* | (6) |\n+--------------------+----------------------------------+------------+\n| ``s * n, n * s`` | *n* shallow copies of *s* | (2) |\n| | concatenated | |\n+--------------------+----------------------------------+------------+\n| ``s[i]`` | *i*\'th item of *s*, origin 0 | (3) |\n+--------------------+----------------------------------+------------+\n| ``s[i:j]`` | slice of *s* from *i* to *j* | (3)(4) |\n+--------------------+----------------------------------+------------+\n| ``s[i:j:k]`` | slice of *s* from *i* to *j* | (3)(5) |\n| | with step *k* | |\n+--------------------+----------------------------------+------------+\n| ``len(s)`` | length of *s* | |\n+--------------------+----------------------------------+------------+\n| ``min(s)`` | smallest item of *s* | |\n+--------------------+----------------------------------+------------+\n| ``max(s)`` | largest item of *s* | |\n+--------------------+----------------------------------+------------+\n| ``s.index(i)`` | index of the first occurence of | |\n| | *i* in *s* | |\n+--------------------+----------------------------------+------------+\n| ``s.count(i)`` | total number of occurences of | |\n| | *i* in *s* | |\n+--------------------+----------------------------------+------------+\n\nSequence types also support comparisons. In particular, tuples and\nlists are compared lexicographically by comparing corresponding\nelements. This means that to compare equal, every element must compare\nequal and the two sequences must be of the same type and have the same\nlength. (For full details see *Comparisons* in the language\nreference.)\n\nNotes:\n\n1. When *s* is a string or Unicode string object the ``in`` and ``not\n in`` operations act like a substring test. In Python versions\n before 2.3, *x* had to be a string of length 1. In Python 2.3 and\n beyond, *x* may be a string of any length.\n\n2. Values of *n* less than ``0`` are treated as ``0`` (which yields an\n empty sequence of the same type as *s*). Note also that the copies\n are shallow; nested structures are not copied. This often haunts\n new Python programmers; consider:\n\n >>> lists = [[]] * 3\n >>> lists\n [[], [], []]\n >>> lists[0].append(3)\n >>> lists\n [[3], [3], [3]]\n\n What has happened is that ``[[]]`` is a one-element list containing\n an empty list, so all three elements of ``[[]] * 3`` are (pointers\n to) this single empty list. Modifying any of the elements of\n ``lists`` modifies this single list. You can create a list of\n different lists this way:\n\n >>> lists = [[] for i in range(3)]\n >>> lists[0].append(3)\n >>> lists[1].append(5)\n >>> lists[2].append(7)\n >>> lists\n [[3], [5], [7]]\n\n3. If *i* or *j* is negative, the index is relative to the end of the\n string: ``len(s) + i`` or ``len(s) + j`` is substituted. But note\n that ``-0`` is still ``0``.\n\n4. 
The slice of *s* from *i* to *j* is defined as the sequence of\n items with index *k* such that ``i <= k < j``. If *i* or *j* is\n greater than ``len(s)``, use ``len(s)``. If *i* is omitted or\n ``None``, use ``0``. If *j* is omitted or ``None``, use\n ``len(s)``. If *i* is greater than or equal to *j*, the slice is\n empty.\n\n5. The slice of *s* from *i* to *j* with step *k* is defined as the\n sequence of items with index ``x = i + n*k`` such that ``0 <= n <\n (j-i)/k``. In other words, the indices are ``i``, ``i+k``,\n ``i+2*k``, ``i+3*k`` and so on, stopping when *j* is reached (but\n never including *j*). If *i* or *j* is greater than ``len(s)``,\n use ``len(s)``. If *i* or *j* are omitted or ``None``, they become\n "end" values (which end depends on the sign of *k*). Note, *k*\n cannot be zero. If *k* is ``None``, it is treated like ``1``.\n\n6. **CPython implementation detail:** If *s* and *t* are both strings,\n some Python implementations such as CPython can usually perform an\n in-place optimization for assignments of the form ``s = s + t`` or\n ``s += t``. When applicable, this optimization makes quadratic\n run-time much less likely. This optimization is both version and\n implementation dependent. For performance sensitive code, it is\n preferable to use the ``str.join()`` method which assures\n consistent linear concatenation performance across versions and\n implementations.\n\n Changed in version 2.4: Formerly, string concatenation never\n occurred in-place.\n\n\nString Methods\n==============\n\nBelow are listed the string methods which both 8-bit strings and\nUnicode objects support. Some of them are also available on\n``bytearray`` objects.\n\nIn addition, Python\'s strings support the sequence type methods\ndescribed in the *Sequence Types --- str, unicode, list, tuple,\nbytearray, buffer, xrange* section. To output formatted strings use\ntemplate strings or the ``%`` operator described in the *String\nFormatting Operations* section. Also, see the ``re`` module for string\nfunctions based on regular expressions.\n\nstr.capitalize()\n\n Return a copy of the string with its first character capitalized\n and the rest lowercased.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.center(width[, fillchar])\n\n Return centered in a string of length *width*. Padding is done\n using the specified *fillchar* (default is a space).\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.count(sub[, start[, end]])\n\n Return the number of non-overlapping occurrences of substring *sub*\n in the range [*start*, *end*]. Optional arguments *start* and\n *end* are interpreted as in slice notation.\n\nstr.decode([encoding[, errors]])\n\n Decodes the string using the codec registered for *encoding*.\n *encoding* defaults to the default string encoding. *errors* may\n be given to set a different error handling scheme. The default is\n ``\'strict\'``, meaning that encoding errors raise ``UnicodeError``.\n Other possible values are ``\'ignore\'``, ``\'replace\'`` and any other\n name registered via ``codecs.register_error()``, see section *Codec\n Base Classes*.\n\n New in version 2.2.\n\n Changed in version 2.3: Support for other error handling schemes\n added.\n\n Changed in version 2.7: Support for keyword arguments added.\n\nstr.encode([encoding[, errors]])\n\n Return an encoded version of the string. Default encoding is the\n current default string encoding. *errors* may be given to set a\n different error handling scheme. 
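(Editorial aside, not part of the quoted ``topics.py`` data: a doctest-style sketch of ``decode()``/``encode()`` round-tripping and of the ``'replace'`` and ``'ignore'`` error handlers mentioned above; the sample text is arbitrary.)

   >>> u'caf\xe9'.encode('utf-8')             # unicode -> 8-bit string
   'caf\xc3\xa9'
   >>> 'caf\xc3\xa9'.decode('utf-8') == u'caf\xe9'
   True
   >>> u'caf\xe9'.encode('ascii', 'replace')  # unencodable character replaced
   'caf?'
   >>> u'caf\xe9'.encode('ascii', 'ignore')   # unencodable character dropped
   'caf'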
The default for *errors* is\n ``\'strict\'``, meaning that encoding errors raise a\n ``UnicodeError``. Other possible values are ``\'ignore\'``,\n ``\'replace\'``, ``\'xmlcharrefreplace\'``, ``\'backslashreplace\'`` and\n any other name registered via ``codecs.register_error()``, see\n section *Codec Base Classes*. For a list of possible encodings, see\n section *Standard Encodings*.\n\n New in version 2.0.\n\n Changed in version 2.3: Support for ``\'xmlcharrefreplace\'`` and\n ``\'backslashreplace\'`` and other error handling schemes added.\n\n Changed in version 2.7: Support for keyword arguments added.\n\nstr.endswith(suffix[, start[, end]])\n\n Return ``True`` if the string ends with the specified *suffix*,\n otherwise return ``False``. *suffix* can also be a tuple of\n suffixes to look for. With optional *start*, test beginning at\n that position. With optional *end*, stop comparing at that\n position.\n\n Changed in version 2.5: Accept tuples as *suffix*.\n\nstr.expandtabs([tabsize])\n\n Return a copy of the string where all tab characters are replaced\n by one or more spaces, depending on the current column and the\n given tab size. The column number is reset to zero after each\n newline occurring in the string. If *tabsize* is not given, a tab\n size of ``8`` characters is assumed. This doesn\'t understand other\n non-printing characters or escape sequences.\n\nstr.find(sub[, start[, end]])\n\n Return the lowest index in the string where substring *sub* is\n found, such that *sub* is contained in the slice ``s[start:end]``.\n Optional arguments *start* and *end* are interpreted as in slice\n notation. Return ``-1`` if *sub* is not found.\n\n Note: The ``find()`` method should be used only if you need to know the\n position of *sub*. To check if *sub* is a substring or not, use\n the ``in`` operator:\n\n >>> \'Py\' in \'Python\'\n True\n\nstr.format(*args, **kwargs)\n\n Perform a string formatting operation. The string on which this\n method is called can contain literal text or replacement fields\n delimited by braces ``{}``. Each replacement field contains either\n the numeric index of a positional argument, or the name of a\n keyword argument. 
Returns a copy of the string where each\n replacement field is replaced with the string value of the\n corresponding argument.\n\n >>> "The sum of 1 + 2 is {0}".format(1+2)\n \'The sum of 1 + 2 is 3\'\n\n See *Format String Syntax* for a description of the various\n formatting options that can be specified in format strings.\n\n This method of string formatting is the new standard in Python 3.0,\n and should be preferred to the ``%`` formatting described in\n *String Formatting Operations* in new code.\n\n New in version 2.6.\n\nstr.index(sub[, start[, end]])\n\n Like ``find()``, but raise ``ValueError`` when the substring is not\n found.\n\nstr.isalnum()\n\n Return true if all characters in the string are alphanumeric and\n there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isalpha()\n\n Return true if all characters in the string are alphabetic and\n there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isdigit()\n\n Return true if all characters in the string are digits and there is\n at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.islower()\n\n Return true if all cased characters in the string are lowercase and\n there is at least one cased character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isspace()\n\n Return true if there are only whitespace characters in the string\n and there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.istitle()\n\n Return true if the string is a titlecased string and there is at\n least one character, for example uppercase characters may only\n follow uncased characters and lowercase characters only cased ones.\n Return false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isupper()\n\n Return true if all cased characters in the string are uppercase and\n there is at least one cased character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.join(iterable)\n\n Return a string which is the concatenation of the strings in the\n *iterable* *iterable*. The separator between elements is the\n string providing this method.\n\nstr.ljust(width[, fillchar])\n\n Return the string left justified in a string of length *width*.\n Padding is done using the specified *fillchar* (default is a\n space). The original string is returned if *width* is less than\n ``len(s)``.\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.lower()\n\n Return a copy of the string converted to lowercase.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.lstrip([chars])\n\n Return a copy of the string with leading characters removed. The\n *chars* argument is a string specifying the set of characters to be\n removed. If omitted or ``None``, the *chars* argument defaults to\n removing whitespace. The *chars* argument is not a prefix; rather,\n all combinations of its values are stripped:\n\n >>> \' spacious \'.lstrip()\n \'spacious \'\n >>> \'www.example.com\'.lstrip(\'cmowz.\')\n \'example.com\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.partition(sep)\n\n Split the string at the first occurrence of *sep*, and return a\n 3-tuple containing the part before the separator, the separator\n itself, and the part after the separator. 
If the separator is not\n found, return a 3-tuple containing the string itself, followed by\n two empty strings.\n\n New in version 2.5.\n\nstr.replace(old, new[, count])\n\n Return a copy of the string with all occurrences of substring *old*\n replaced by *new*. If the optional argument *count* is given, only\n the first *count* occurrences are replaced.\n\nstr.rfind(sub[, start[, end]])\n\n Return the highest index in the string where substring *sub* is\n found, such that *sub* is contained within ``s[start:end]``.\n Optional arguments *start* and *end* are interpreted as in slice\n notation. Return ``-1`` on failure.\n\nstr.rindex(sub[, start[, end]])\n\n Like ``rfind()`` but raises ``ValueError`` when the substring *sub*\n is not found.\n\nstr.rjust(width[, fillchar])\n\n Return the string right justified in a string of length *width*.\n Padding is done using the specified *fillchar* (default is a\n space). The original string is returned if *width* is less than\n ``len(s)``.\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.rpartition(sep)\n\n Split the string at the last occurrence of *sep*, and return a\n 3-tuple containing the part before the separator, the separator\n itself, and the part after the separator. If the separator is not\n found, return a 3-tuple containing two empty strings, followed by\n the string itself.\n\n New in version 2.5.\n\nstr.rsplit([sep[, maxsplit]])\n\n Return a list of the words in the string, using *sep* as the\n delimiter string. If *maxsplit* is given, at most *maxsplit* splits\n are done, the *rightmost* ones. If *sep* is not specified or\n ``None``, any whitespace string is a separator. Except for\n splitting from the right, ``rsplit()`` behaves like ``split()``\n which is described in detail below.\n\n New in version 2.4.\n\nstr.rstrip([chars])\n\n Return a copy of the string with trailing characters removed. The\n *chars* argument is a string specifying the set of characters to be\n removed. If omitted or ``None``, the *chars* argument defaults to\n removing whitespace. The *chars* argument is not a suffix; rather,\n all combinations of its values are stripped:\n\n >>> \' spacious \'.rstrip()\n \' spacious\'\n >>> \'mississippi\'.rstrip(\'ipz\')\n \'mississ\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.split([sep[, maxsplit]])\n\n Return a list of the words in the string, using *sep* as the\n delimiter string. If *maxsplit* is given, at most *maxsplit*\n splits are done (thus, the list will have at most ``maxsplit+1``\n elements). If *maxsplit* is not specified, then there is no limit\n on the number of splits (all possible splits are made).\n\n If *sep* is given, consecutive delimiters are not grouped together\n and are deemed to delimit empty strings (for example,\n ``\'1,,2\'.split(\',\')`` returns ``[\'1\', \'\', \'2\']``). The *sep*\n argument may consist of multiple characters (for example,\n ``\'1<>2<>3\'.split(\'<>\')`` returns ``[\'1\', \'2\', \'3\']``). Splitting\n an empty string with a specified separator returns ``[\'\']``.\n\n If *sep* is not specified or is ``None``, a different splitting\n algorithm is applied: runs of consecutive whitespace are regarded\n as a single separator, and the result will contain no empty strings\n at the start or end if the string has leading or trailing\n whitespace. 
Consequently, splitting an empty string or a string\n consisting of just whitespace with a ``None`` separator returns\n ``[]``.\n\n For example, ``\' 1 2 3 \'.split()`` returns ``[\'1\', \'2\', \'3\']``,\n and ``\' 1 2 3 \'.split(None, 1)`` returns ``[\'1\', \'2 3 \']``.\n\nstr.splitlines([keepends])\n\n Return a list of the lines in the string, breaking at line\n boundaries. Line breaks are not included in the resulting list\n unless *keepends* is given and true.\n\nstr.startswith(prefix[, start[, end]])\n\n Return ``True`` if string starts with the *prefix*, otherwise\n return ``False``. *prefix* can also be a tuple of prefixes to look\n for. With optional *start*, test string beginning at that\n position. With optional *end*, stop comparing string at that\n position.\n\n Changed in version 2.5: Accept tuples as *prefix*.\n\nstr.strip([chars])\n\n Return a copy of the string with the leading and trailing\n characters removed. The *chars* argument is a string specifying the\n set of characters to be removed. If omitted or ``None``, the\n *chars* argument defaults to removing whitespace. The *chars*\n argument is not a prefix or suffix; rather, all combinations of its\n values are stripped:\n\n >>> \' spacious \'.strip()\n \'spacious\'\n >>> \'www.example.com\'.strip(\'cmowz.\')\n \'example\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.swapcase()\n\n Return a copy of the string with uppercase characters converted to\n lowercase and vice versa.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.title()\n\n Return a titlecased version of the string where words start with an\n uppercase character and the remaining characters are lowercase.\n\n The algorithm uses a simple language-independent definition of a\n word as groups of consecutive letters. The definition works in\n many contexts but it means that apostrophes in contractions and\n possessives form word boundaries, which may not be the desired\n result:\n\n >>> "they\'re bill\'s friends from the UK".title()\n "They\'Re Bill\'S Friends From The Uk"\n\n A workaround for apostrophes can be constructed using regular\n expressions:\n\n >>> import re\n >>> def titlecase(s):\n return re.sub(r"[A-Za-z]+(\'[A-Za-z]+)?",\n lambda mo: mo.group(0)[0].upper() +\n mo.group(0)[1:].lower(),\n s)\n\n >>> titlecase("they\'re bill\'s friends.")\n "They\'re Bill\'s Friends."\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.translate(table[, deletechars])\n\n Return a copy of the string where all characters occurring in the\n optional argument *deletechars* are removed, and the remaining\n characters have been mapped through the given translation table,\n which must be a string of length 256.\n\n You can use the ``maketrans()`` helper function in the ``string``\n module to create a translation table. For string objects, set the\n *table* argument to ``None`` for translations that only delete\n characters:\n\n >>> \'read this short text\'.translate(None, \'aeiou\')\n \'rd ths shrt txt\'\n\n New in version 2.6: Support for a ``None`` *table* argument.\n\n For Unicode objects, the ``translate()`` method does not accept the\n optional *deletechars* argument. Instead, it returns a copy of the\n *s* where all characters have been mapped through the given\n translation table which must be a mapping of Unicode ordinals to\n Unicode ordinals, Unicode strings or ``None``. Unmapped characters\n are left untouched. 
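(Editorial aside, not part of the quoted ``topics.py`` data: a doctest-style illustration of the unicode form of ``translate()``, using a mapping of Unicode ordinals as described above; the sample word is arbitrary, and unmapped characters pass through unchanged.)

   >>> u'appleseed'.translate({ord(u'a'): u'4', ord(u'e'): u'3'})
   u'4ppl3s33d'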
Characters mapped to ``None`` are deleted.\n Note, a more flexible approach is to create a custom character\n mapping codec using the ``codecs`` module (see ``encodings.cp1251``\n for an example).\n\nstr.upper()\n\n Return a copy of the string converted to uppercase.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.zfill(width)\n\n Return the numeric string left filled with zeros in a string of\n length *width*. A sign prefix is handled correctly. The original\n string is returned if *width* is less than ``len(s)``.\n\n New in version 2.2.2.\n\nThe following methods are present only on unicode objects:\n\nunicode.isnumeric()\n\n Return ``True`` if there are only numeric characters in S,\n ``False`` otherwise. Numeric characters include digit characters,\n and all characters that have the Unicode numeric value property,\n e.g. U+2155, VULGAR FRACTION ONE FIFTH.\n\nunicode.isdecimal()\n\n Return ``True`` if there are only decimal characters in S,\n ``False`` otherwise. Decimal characters include digit characters,\n and all characters that that can be used to form decimal-radix\n numbers, e.g. U+0660, ARABIC-INDIC DIGIT ZERO.\n\n\nString Formatting Operations\n============================\n\nString and Unicode objects have one unique built-in operation: the\n``%`` operator (modulo). This is also known as the string\n*formatting* or *interpolation* operator. Given ``format % values``\n(where *format* is a string or Unicode object), ``%`` conversion\nspecifications in *format* are replaced with zero or more elements of\n*values*. The effect is similar to the using ``sprintf()`` in the C\nlanguage. If *format* is a Unicode object, or if any of the objects\nbeing converted using the ``%s`` conversion are Unicode objects, the\nresult will also be a Unicode object.\n\nIf *format* requires a single argument, *values* may be a single non-\ntuple object. [4] Otherwise, *values* must be a tuple with exactly\nthe number of items specified by the format string, or a single\nmapping object (for example, a dictionary).\n\nA conversion specifier contains two or more characters and has the\nfollowing components, which must occur in this order:\n\n1. The ``\'%\'`` character, which marks the start of the specifier.\n\n2. Mapping key (optional), consisting of a parenthesised sequence of\n characters (for example, ``(somename)``).\n\n3. Conversion flags (optional), which affect the result of some\n conversion types.\n\n4. Minimum field width (optional). If specified as an ``\'*\'``\n (asterisk), the actual width is read from the next element of the\n tuple in *values*, and the object to convert comes after the\n minimum field width and optional precision.\n\n5. Precision (optional), given as a ``\'.\'`` (dot) followed by the\n precision. If specified as ``\'*\'`` (an asterisk), the actual width\n is read from the next element of the tuple in *values*, and the\n value to convert comes after the precision.\n\n6. Length modifier (optional).\n\n7. Conversion type.\n\nWhen the right argument is a dictionary (or other mapping type), then\nthe formats in the string *must* include a parenthesised mapping key\ninto that dictionary inserted immediately after the ``\'%\'`` character.\nThe mapping key selects the value to be formatted from the mapping.\nFor example:\n\n>>> print \'%(language)s has %(number)03d quote types.\' % \\\n... 
{"language": "Python", "number": 2}\nPython has 002 quote types.\n\nIn this case no ``*`` specifiers may occur in a format (since they\nrequire a sequential parameter list).\n\nThe conversion flag characters are:\n\n+-----------+-----------------------------------------------------------------------+\n| Flag | Meaning |\n+===========+=======================================================================+\n| ``\'#\'`` | The value conversion will use the "alternate form" (where defined |\n| | below). |\n+-----------+-----------------------------------------------------------------------+\n| ``\'0\'`` | The conversion will be zero padded for numeric values. |\n+-----------+-----------------------------------------------------------------------+\n| ``\'-\'`` | The converted value is left adjusted (overrides the ``\'0\'`` |\n| | conversion if both are given). |\n+-----------+-----------------------------------------------------------------------+\n| ``\' \'`` | (a space) A blank should be left before a positive number (or empty |\n| | string) produced by a signed conversion. |\n+-----------+-----------------------------------------------------------------------+\n| ``\'+\'`` | A sign character (``\'+\'`` or ``\'-\'``) will precede the conversion |\n| | (overrides a "space" flag). |\n+-----------+-----------------------------------------------------------------------+\n\nA length modifier (``h``, ``l``, or ``L``) may be present, but is\nignored as it is not necessary for Python -- so e.g. ``%ld`` is\nidentical to ``%d``.\n\nThe conversion types are:\n\n+--------------+-------------------------------------------------------+---------+\n| Conversion | Meaning | Notes |\n+==============+=======================================================+=========+\n| ``\'d\'`` | Signed integer decimal. | |\n+--------------+-------------------------------------------------------+---------+\n| ``\'i\'`` | Signed integer decimal. | |\n+--------------+-------------------------------------------------------+---------+\n| ``\'o\'`` | Signed octal value. | (1) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'u\'`` | Obsolete type -- it is identical to ``\'d\'``. | (7) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'x\'`` | Signed hexadecimal (lowercase). | (2) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'X\'`` | Signed hexadecimal (uppercase). | (2) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'e\'`` | Floating point exponential format (lowercase). | (3) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'E\'`` | Floating point exponential format (uppercase). | (3) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'f\'`` | Floating point decimal format. | (3) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'F\'`` | Floating point decimal format. | (3) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'g\'`` | Floating point format. Uses lowercase exponential | (4) |\n| | format if exponent is less than -4 or not less than | |\n| | precision, decimal format otherwise. | |\n+--------------+-------------------------------------------------------+---------+\n| ``\'G\'`` | Floating point format. 
Uses uppercase exponential | (4) |\n| | format if exponent is less than -4 or not less than | |\n| | precision, decimal format otherwise. | |\n+--------------+-------------------------------------------------------+---------+\n| ``\'c\'`` | Single character (accepts integer or single character | |\n| | string). | |\n+--------------+-------------------------------------------------------+---------+\n| ``\'r\'`` | String (converts any Python object using ``repr()``). | (5) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'s\'`` | String (converts any Python object using ``str()``). | (6) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'%\'`` | No argument is converted, results in a ``\'%\'`` | |\n| | character in the result. | |\n+--------------+-------------------------------------------------------+---------+\n\nNotes:\n\n1. The alternate form causes a leading zero (``\'0\'``) to be inserted\n between left-hand padding and the formatting of the number if the\n leading character of the result is not already a zero.\n\n2. The alternate form causes a leading ``\'0x\'`` or ``\'0X\'`` (depending\n on whether the ``\'x\'`` or ``\'X\'`` format was used) to be inserted\n between left-hand padding and the formatting of the number if the\n leading character of the result is not already a zero.\n\n3. The alternate form causes the result to always contain a decimal\n point, even if no digits follow it.\n\n The precision determines the number of digits after the decimal\n point and defaults to 6.\n\n4. The alternate form causes the result to always contain a decimal\n point, and trailing zeroes are not removed as they would otherwise\n be.\n\n The precision determines the number of significant digits before\n and after the decimal point and defaults to 6.\n\n5. The ``%r`` conversion was added in Python 2.0.\n\n The precision determines the maximal number of characters used.\n\n6. If the object or format provided is a ``unicode`` string, the\n resulting string will also be ``unicode``.\n\n The precision determines the maximal number of characters used.\n\n7. See **PEP 237**.\n\nSince Python strings have an explicit length, ``%s`` conversions do\nnot assume that ``\'\\0\'`` is the end of the string.\n\nChanged in version 2.7: ``%f`` conversions for numbers whose absolute\nvalue is over 1e50 are no longer replaced by ``%g`` conversions.\n\nAdditional string operations are defined in standard modules\n``string`` and ``re``.\n\n\nXRange Type\n===========\n\nThe ``xrange`` type is an immutable sequence which is commonly used\nfor looping. The advantage of the ``xrange`` type is that an\n``xrange`` object will always take the same amount of memory, no\nmatter the size of the range it represents. There are no consistent\nperformance advantages.\n\nXRange objects have very little behavior: they only support indexing,\niteration, and the ``len()`` function.\n\n\nMutable Sequence Types\n======================\n\nList and ``bytearray`` objects support additional operations that\nallow in-place modification of the object. Other mutable sequence\ntypes (when added to the language) should also support these\noperations. Strings and tuples are immutable sequence types: such\nobjects cannot be modified once created. 
The following operations are\ndefined on mutable sequence types (where *x* is an arbitrary object):\n\n+--------------------------------+----------------------------------+-----------------------+\n| Operation | Result | Notes |\n+================================+==================================+=======================+\n| ``s[i] = x`` | item *i* of *s* is replaced by | |\n| | *x* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s[i:j] = t`` | slice of *s* from *i* to *j* is | |\n| | replaced by the contents of the | |\n| | iterable *t* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``del s[i:j]`` | same as ``s[i:j] = []`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s[i:j:k] = t`` | the elements of ``s[i:j:k]`` are | (1) |\n| | replaced by those of *t* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``del s[i:j:k]`` | removes the elements of | |\n| | ``s[i:j:k]`` from the list | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.append(x)`` | same as ``s[len(s):len(s)] = | (2) |\n| | [x]`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.extend(x)`` | same as ``s[len(s):len(s)] = x`` | (3) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.count(x)`` | return number of *i*\'s for which | |\n| | ``s[i] == x`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.index(x[, i[, j]])`` | return smallest *k* such that | (4) |\n| | ``s[k] == x`` and ``i <= k < j`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.insert(i, x)`` | same as ``s[i:i] = [x]`` | (5) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.pop([i])`` | same as ``x = s[i]; del s[i]; | (6) |\n| | return x`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.remove(x)`` | same as ``del s[s.index(x)]`` | (4) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.reverse()`` | reverses the items of *s* in | (7) |\n| | place | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.sort([cmp[, key[, | sort the items of *s* in place | (7)(8)(9)(10) |\n| reverse]]])`` | | |\n+--------------------------------+----------------------------------+-----------------------+\n\nNotes:\n\n1. *t* must have the same length as the slice it is replacing.\n\n2. The C implementation of Python has historically accepted multiple\n parameters and implicitly joined them into a tuple; this no longer\n works in Python 2.0. Use of this misfeature has been deprecated\n since Python 1.4.\n\n3. *x* can be any iterable object.\n\n4. Raises ``ValueError`` when *x* is not found in *s*. When a negative\n index is passed as the second or third parameter to the ``index()``\n method, the list length is added, as for slice indices. If it is\n still negative, it is truncated to zero, as for slice indices.\n\n Changed in version 2.3: Previously, ``index()`` didn\'t have\n arguments for specifying start and stop positions.\n\n5. 
When a negative index is passed as the first parameter to the\n ``insert()`` method, the list length is added, as for slice\n indices. If it is still negative, it is truncated to zero, as for\n slice indices.\n\n Changed in version 2.3: Previously, all negative indices were\n truncated to zero.\n\n6. The ``pop()`` method is only supported by the list and array types.\n The optional argument *i* defaults to ``-1``, so that by default\n the last item is removed and returned.\n\n7. The ``sort()`` and ``reverse()`` methods modify the list in place\n for economy of space when sorting or reversing a large list. To\n remind you that they operate by side effect, they don\'t return the\n sorted or reversed list.\n\n8. The ``sort()`` method takes optional arguments for controlling the\n comparisons.\n\n *cmp* specifies a custom comparison function of two arguments (list\n items) which should return a negative, zero or positive number\n depending on whether the first argument is considered smaller than,\n equal to, or larger than the second argument: ``cmp=lambda x,y:\n cmp(x.lower(), y.lower())``. The default value is ``None``.\n\n *key* specifies a function of one argument that is used to extract\n a comparison key from each list element: ``key=str.lower``. The\n default value is ``None``.\n\n *reverse* is a boolean value. If set to ``True``, then the list\n elements are sorted as if each comparison were reversed.\n\n In general, the *key* and *reverse* conversion processes are much\n faster than specifying an equivalent *cmp* function. This is\n because *cmp* is called multiple times for each list element while\n *key* and *reverse* touch each element only once. Use\n ``functools.cmp_to_key()`` to convert an old-style *cmp* function\n to a *key* function.\n\n Changed in version 2.3: Support for ``None`` as an equivalent to\n omitting *cmp* was added.\n\n Changed in version 2.4: Support for *key* and *reverse* was added.\n\n9. Starting with Python 2.3, the ``sort()`` method is guaranteed to be\n stable. A sort is stable if it guarantees not to change the\n relative order of elements that compare equal --- this is helpful\n for sorting in multiple passes (for example, sort by department,\n then by salary grade).\n\n10. **CPython implementation detail:** While a list is being sorted,\n the effect of attempting to mutate, or even inspect, the list is\n undefined. The C implementation of Python 2.3 and newer makes the\n list appear empty for the duration, and raises ``ValueError`` if\n it can detect that the list has been mutated during a sort.\n', + 'typesseq-mutable': u"\nMutable Sequence Types\n**********************\n\nList and ``bytearray`` objects support additional operations that\nallow in-place modification of the object. Other mutable sequence\ntypes (when added to the language) should also support these\noperations. Strings and tuples are immutable sequence types: such\nobjects cannot be modified once created. 
The following operations are\ndefined on mutable sequence types (where *x* is an arbitrary object):\n\n+--------------------------------+----------------------------------+-----------------------+\n| Operation | Result | Notes |\n+================================+==================================+=======================+\n| ``s[i] = x`` | item *i* of *s* is replaced by | |\n| | *x* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s[i:j] = t`` | slice of *s* from *i* to *j* is | |\n| | replaced by the contents of the | |\n| | iterable *t* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``del s[i:j]`` | same as ``s[i:j] = []`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s[i:j:k] = t`` | the elements of ``s[i:j:k]`` are | (1) |\n| | replaced by those of *t* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``del s[i:j:k]`` | removes the elements of | |\n| | ``s[i:j:k]`` from the list | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.append(x)`` | same as ``s[len(s):len(s)] = | (2) |\n| | [x]`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.extend(x)`` | same as ``s[len(s):len(s)] = x`` | (3) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.count(x)`` | return number of *i*'s for which | |\n| | ``s[i] == x`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.index(x[, i[, j]])`` | return smallest *k* such that | (4) |\n| | ``s[k] == x`` and ``i <= k < j`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.insert(i, x)`` | same as ``s[i:i] = [x]`` | (5) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.pop([i])`` | same as ``x = s[i]; del s[i]; | (6) |\n| | return x`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.remove(x)`` | same as ``del s[s.index(x)]`` | (4) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.reverse()`` | reverses the items of *s* in | (7) |\n| | place | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.sort([cmp[, key[, | sort the items of *s* in place | (7)(8)(9)(10) |\n| reverse]]])`` | | |\n+--------------------------------+----------------------------------+-----------------------+\n\nNotes:\n\n1. *t* must have the same length as the slice it is replacing.\n\n2. The C implementation of Python has historically accepted multiple\n parameters and implicitly joined them into a tuple; this no longer\n works in Python 2.0. Use of this misfeature has been deprecated\n since Python 1.4.\n\n3. *x* can be any iterable object.\n\n4. Raises ``ValueError`` when *x* is not found in *s*. When a negative\n index is passed as the second or third parameter to the ``index()``\n method, the list length is added, as for slice indices. If it is\n still negative, it is truncated to zero, as for slice indices.\n\n Changed in version 2.3: Previously, ``index()`` didn't have\n arguments for specifying start and stop positions.\n\n5. 
When a negative index is passed as the first parameter to the\n ``insert()`` method, the list length is added, as for slice\n indices. If it is still negative, it is truncated to zero, as for\n slice indices.\n\n Changed in version 2.3: Previously, all negative indices were\n truncated to zero.\n\n6. The ``pop()`` method is only supported by the list and array types.\n The optional argument *i* defaults to ``-1``, so that by default\n the last item is removed and returned.\n\n7. The ``sort()`` and ``reverse()`` methods modify the list in place\n for economy of space when sorting or reversing a large list. To\n remind you that they operate by side effect, they don't return the\n sorted or reversed list.\n\n8. The ``sort()`` method takes optional arguments for controlling the\n comparisons.\n\n *cmp* specifies a custom comparison function of two arguments (list\n items) which should return a negative, zero or positive number\n depending on whether the first argument is considered smaller than,\n equal to, or larger than the second argument: ``cmp=lambda x,y:\n cmp(x.lower(), y.lower())``. The default value is ``None``.\n\n *key* specifies a function of one argument that is used to extract\n a comparison key from each list element: ``key=str.lower``. The\n default value is ``None``.\n\n *reverse* is a boolean value. If set to ``True``, then the list\n elements are sorted as if each comparison were reversed.\n\n In general, the *key* and *reverse* conversion processes are much\n faster than specifying an equivalent *cmp* function. This is\n because *cmp* is called multiple times for each list element while\n *key* and *reverse* touch each element only once. Use\n ``functools.cmp_to_key()`` to convert an old-style *cmp* function\n to a *key* function.\n\n Changed in version 2.3: Support for ``None`` as an equivalent to\n omitting *cmp* was added.\n\n Changed in version 2.4: Support for *key* and *reverse* was added.\n\n9. Starting with Python 2.3, the ``sort()`` method is guaranteed to be\n stable. A sort is stable if it guarantees not to change the\n relative order of elements that compare equal --- this is helpful\n for sorting in multiple passes (for example, sort by department,\n then by salary grade).\n\n10. **CPython implementation detail:** While a list is being sorted,\n the effect of attempting to mutate, or even inspect, the list is\n undefined. The C implementation of Python 2.3 and newer makes the\n list appear empty for the duration, and raises ``ValueError`` if\n it can detect that the list has been mutated during a sort.\n", 'unary': u'\nUnary arithmetic and bitwise operations\n***************************************\n\nAll unary arithmetic and bitwise operations have the same priority:\n\n u_expr ::= power | "-" u_expr | "+" u_expr | "~" u_expr\n\nThe unary ``-`` (minus) operator yields the negation of its numeric\nargument.\n\nThe unary ``+`` (plus) operator yields its numeric argument unchanged.\n\nThe unary ``~`` (invert) operator yields the bitwise inversion of its\nplain or long integer argument. The bitwise inversion of ``x`` is\ndefined as ``-(x+1)``. 
It only applies to integral numbers.\n\nIn all three cases, if the argument does not have the proper type, a\n``TypeError`` exception is raised.\n', 'while': u'\nThe ``while`` statement\n***********************\n\nThe ``while`` statement is used for repeated execution as long as an\nexpression is true:\n\n while_stmt ::= "while" expression ":" suite\n ["else" ":" suite]\n\nThis repeatedly tests the expression and, if it is true, executes the\nfirst suite; if the expression is false (which may be the first time\nit is tested) the suite of the ``else`` clause, if present, is\nexecuted and the loop terminates.\n\nA ``break`` statement executed in the first suite terminates the loop\nwithout executing the ``else`` clause\'s suite. A ``continue``\nstatement executed in the first suite skips the rest of the suite and\ngoes back to testing the expression.\n', - 'with': u'\nThe ``with`` statement\n**********************\n\nNew in version 2.5.\n\nThe ``with`` statement is used to wrap the execution of a block with\nmethods defined by a context manager (see section *With Statement\nContext Managers*). This allows common\n``try``...``except``...``finally`` usage patterns to be encapsulated\nfor convenient reuse.\n\n with_stmt ::= "with" with_item ("," with_item)* ":" suite\n with_item ::= expression ["as" target]\n\nThe execution of the ``with`` statement with one "item" proceeds as\nfollows:\n\n1. The context expression is evaluated to obtain a context manager.\n\n2. The context manager\'s ``__exit__()`` is loaded for later use.\n\n3. The context manager\'s ``__enter__()`` method is invoked.\n\n4. If a target was included in the ``with`` statement, the return\n value from ``__enter__()`` is assigned to it.\n\n Note: The ``with`` statement guarantees that if the ``__enter__()``\n method returns without an error, then ``__exit__()`` will always\n be called. Thus, if an error occurs during the assignment to the\n target list, it will be treated the same as an error occurring\n within the suite would be. See step 6 below.\n\n5. The suite is executed.\n\n6. The context manager\'s ``__exit__()`` method is invoked. If an\n exception caused the suite to be exited, its type, value, and\n traceback are passed as arguments to ``__exit__()``. Otherwise,\n three ``None`` arguments are supplied.\n\n If the suite was exited due to an exception, and the return value\n from the ``__exit__()`` method was false, the exception is\n reraised. If the return value was true, the exception is\n suppressed, and execution continues with the statement following\n the ``with`` statement.\n\n If the suite was exited for any reason other than an exception, the\n return value from ``__exit__()`` is ignored, and execution proceeds\n at the normal location for the kind of exit that was taken.\n\nWith more than one item, the context managers are processed as if\nmultiple ``with`` statements were nested:\n\n with A() as a, B() as b:\n suite\n\nis equivalent to\n\n with A() as a:\n with B() as b:\n suite\n\nNote: In Python 2.5, the ``with`` statement is only allowed when the\n ``with_statement`` feature has been enabled. 
It is always enabled\n in Python 2.6.\n\nChanged in version 2.7: Support for multiple context expressions.\n\nSee also:\n\n **PEP 0343** - The "with" statement\n The specification, background, and examples for the Python\n ``with`` statement.\n', + 'with': u'\nThe ``with`` statement\n**********************\n\nNew in version 2.5.\n\nThe ``with`` statement is used to wrap the execution of a block with\nmethods defined by a context manager (see section *With Statement\nContext Managers*). This allows common\n``try``...``except``...``finally`` usage patterns to be encapsulated\nfor convenient reuse.\n\n with_stmt ::= "with" with_item ("," with_item)* ":" suite\n with_item ::= expression ["as" target]\n\nThe execution of the ``with`` statement with one "item" proceeds as\nfollows:\n\n1. The context expression (the expression given in the **with_item**)\n is evaluated to obtain a context manager.\n\n2. The context manager\'s ``__exit__()`` is loaded for later use.\n\n3. The context manager\'s ``__enter__()`` method is invoked.\n\n4. If a target was included in the ``with`` statement, the return\n value from ``__enter__()`` is assigned to it.\n\n Note: The ``with`` statement guarantees that if the ``__enter__()``\n method returns without an error, then ``__exit__()`` will always\n be called. Thus, if an error occurs during the assignment to the\n target list, it will be treated the same as an error occurring\n within the suite would be. See step 6 below.\n\n5. The suite is executed.\n\n6. The context manager\'s ``__exit__()`` method is invoked. If an\n exception caused the suite to be exited, its type, value, and\n traceback are passed as arguments to ``__exit__()``. Otherwise,\n three ``None`` arguments are supplied.\n\n If the suite was exited due to an exception, and the return value\n from the ``__exit__()`` method was false, the exception is\n reraised. If the return value was true, the exception is\n suppressed, and execution continues with the statement following\n the ``with`` statement.\n\n If the suite was exited for any reason other than an exception, the\n return value from ``__exit__()`` is ignored, and execution proceeds\n at the normal location for the kind of exit that was taken.\n\nWith more than one item, the context managers are processed as if\nmultiple ``with`` statements were nested:\n\n with A() as a, B() as b:\n suite\n\nis equivalent to\n\n with A() as a:\n with B() as b:\n suite\n\nNote: In Python 2.5, the ``with`` statement is only allowed when the\n ``with_statement`` feature has been enabled. It is always enabled\n in Python 2.6.\n\nChanged in version 2.7: Support for multiple context expressions.\n\nSee also:\n\n **PEP 0343** - The "with" statement\n The specification, background, and examples for the Python\n ``with`` statement.\n', 'yield': u'\nThe ``yield`` statement\n***********************\n\n yield_stmt ::= yield_expression\n\nThe ``yield`` statement is only used when defining a generator\nfunction, and is only used in the body of the generator function.\nUsing a ``yield`` statement in a function definition is sufficient to\ncause that definition to create a generator function instead of a\nnormal function.\n\nWhen a generator function is called, it returns an iterator known as a\ngenerator iterator, or more commonly, a generator. 
The body of the\ngenerator function is executed by calling the generator\'s ``next()``\nmethod repeatedly until it raises an exception.\n\nWhen a ``yield`` statement is executed, the state of the generator is\nfrozen and the value of **expression_list** is returned to\n``next()``\'s caller. By "frozen" we mean that all local state is\nretained, including the current bindings of local variables, the\ninstruction pointer, and the internal evaluation stack: enough\ninformation is saved so that the next time ``next()`` is invoked, the\nfunction can proceed exactly as if the ``yield`` statement were just\nanother external call.\n\nAs of Python version 2.5, the ``yield`` statement is now allowed in\nthe ``try`` clause of a ``try`` ... ``finally`` construct. If the\ngenerator is not resumed before it is finalized (by reaching a zero\nreference count or by being garbage collected), the generator-\niterator\'s ``close()`` method will be called, allowing any pending\n``finally`` clauses to execute.\n\nNote: In Python 2.2, the ``yield`` statement was only allowed when the\n ``generators`` feature has been enabled. This ``__future__`` import\n statement was used to enable the feature:\n\n from __future__ import generators\n\nSee also:\n\n **PEP 0255** - Simple Generators\n The proposal for adding generators and the ``yield`` statement\n to Python.\n\n **PEP 0342** - Coroutines via Enhanced Generators\n The proposal that, among other generator enhancements, proposed\n allowing ``yield`` to appear inside a ``try`` ... ``finally``\n block.\n'} diff --git a/lib-python/2.7/random.py b/lib-python/2.7/random.py --- a/lib-python/2.7/random.py +++ b/lib-python/2.7/random.py @@ -317,7 +317,7 @@ n = len(population) if not 0 <= k <= n: - raise ValueError, "sample larger than population" + raise ValueError("sample larger than population") random = self.random _int = int result = [None] * k @@ -490,6 +490,12 @@ Conditions on the parameters are alpha > 0 and beta > 0. + The probability distribution function is: + + x ** (alpha - 1) * math.exp(-x / beta) + pdf(x) = -------------------------------------- + math.gamma(alpha) * beta ** alpha + """ # alpha > 0, beta > 0, mean is alpha*beta, variance is alpha*beta**2 @@ -592,7 +598,7 @@ ## -------------------- beta -------------------- ## See -## http://sourceforge.net/bugs/?func=detailbug&bug_id=130030&group_id=5470 +## http://mail.python.org/pipermail/python-bugs-list/2001-January/003752.html ## for Ivan Frohne's insightful analysis of why the original implementation: ## ## def betavariate(self, alpha, beta): diff --git a/lib-python/2.7/re.py b/lib-python/2.7/re.py --- a/lib-python/2.7/re.py +++ b/lib-python/2.7/re.py @@ -207,8 +207,7 @@ "Escape all non-alphanumeric characters in pattern." s = list(pattern) alphanum = _alphanum - for i in range(len(pattern)): - c = pattern[i] + for i, c in enumerate(pattern): if c not in alphanum: if c == "\000": s[i] = "\\000" diff --git a/lib-python/2.7/shutil.py b/lib-python/2.7/shutil.py --- a/lib-python/2.7/shutil.py +++ b/lib-python/2.7/shutil.py @@ -277,6 +277,12 @@ """ real_dst = dst if os.path.isdir(dst): + if _samefile(src, dst): + # We might be on a case insensitive filesystem, + # perform the rename anyway. + os.rename(src, dst) + return + real_dst = os.path.join(dst, _basename(src)) if os.path.exists(real_dst): raise Error, "Destination path '%s' already exists" % real_dst @@ -336,7 +342,7 @@ archive that is being built. If not provided, the current owner and group will be used. 
- The output tar file will be named 'base_dir' + ".tar", possibly plus + The output tar file will be named 'base_name' + ".tar", possibly plus the appropriate compression extension (".gz", or ".bz2"). Returns the output filename. @@ -406,7 +412,7 @@ def _make_zipfile(base_name, base_dir, verbose=0, dry_run=0, logger=None): """Create a zip file from all the files under 'base_dir'. - The output zip file will be named 'base_dir' + ".zip". Uses either the + The output zip file will be named 'base_name' + ".zip". Uses either the "zipfile" Python module (if available) or the InfoZIP "zip" utility (if installed and found on the default search path). If neither tool is available, raises ExecError. Returns the name of the output zip diff --git a/lib-python/2.7/site.py b/lib-python/2.7/site.py --- a/lib-python/2.7/site.py +++ b/lib-python/2.7/site.py @@ -61,6 +61,7 @@ import sys import os import __builtin__ +import traceback # Prefixes for site-packages; add additional prefixes like /usr/local here PREFIXES = [sys.prefix, sys.exec_prefix] @@ -155,17 +156,26 @@ except IOError: return with f: - for line in f: + for n, line in enumerate(f): if line.startswith("#"): continue - if line.startswith(("import ", "import\t")): - exec line - continue - line = line.rstrip() - dir, dircase = makepath(sitedir, line) - if not dircase in known_paths and os.path.exists(dir): - sys.path.append(dir) - known_paths.add(dircase) + try: + if line.startswith(("import ", "import\t")): + exec line + continue + line = line.rstrip() + dir, dircase = makepath(sitedir, line) + if not dircase in known_paths and os.path.exists(dir): + sys.path.append(dir) + known_paths.add(dircase) + except Exception as err: + print >>sys.stderr, "Error processing line {:d} of {}:\n".format( + n+1, fullname) + for record in traceback.format_exception(*sys.exc_info()): + for line in record.splitlines(): + print >>sys.stderr, ' '+line + print >>sys.stderr, "\nRemainder of file ignored" + break if reset: known_paths = None return known_paths diff --git a/lib-python/2.7/smtplib.py b/lib-python/2.7/smtplib.py --- a/lib-python/2.7/smtplib.py +++ b/lib-python/2.7/smtplib.py @@ -49,17 +49,18 @@ from email.base64mime import encode as encode_base64 from sys import stderr -__all__ = ["SMTPException","SMTPServerDisconnected","SMTPResponseException", - "SMTPSenderRefused","SMTPRecipientsRefused","SMTPDataError", - "SMTPConnectError","SMTPHeloError","SMTPAuthenticationError", - "quoteaddr","quotedata","SMTP"] +__all__ = ["SMTPException", "SMTPServerDisconnected", "SMTPResponseException", + "SMTPSenderRefused", "SMTPRecipientsRefused", "SMTPDataError", + "SMTPConnectError", "SMTPHeloError", "SMTPAuthenticationError", + "quoteaddr", "quotedata", "SMTP"] SMTP_PORT = 25 SMTP_SSL_PORT = 465 -CRLF="\r\n" +CRLF = "\r\n" OLDSTYLE_AUTH = re.compile(r"auth=(.*)", re.I) + # Exception classes used by this module. class SMTPException(Exception): """Base class for all exceptions raised by this module.""" @@ -109,7 +110,7 @@ def __init__(self, recipients): self.recipients = recipients - self.args = ( recipients,) + self.args = (recipients,) class SMTPDataError(SMTPResponseException): @@ -128,6 +129,7 @@ combination provided. """ + def quoteaddr(addr): """Quote a subset of the email addresses defined by RFC 821. @@ -138,7 +140,7 @@ m = email.utils.parseaddr(addr)[1] except AttributeError: pass - if m == (None, None): # Indicates parse failure or AttributeError + if m == (None, None): # Indicates parse failure or AttributeError # something weird here.. 
punt -ddm return "<%s>" % addr elif m is None: @@ -175,7 +177,8 @@ chr = None while chr != "\n": chr = self.sslobj.read(1) - if not chr: break + if not chr: + break str += chr return str @@ -219,6 +222,7 @@ ehlo_msg = "ehlo" ehlo_resp = None does_esmtp = 0 + default_port = SMTP_PORT def __init__(self, host='', port=0, local_hostname=None, timeout=socket._GLOBAL_DEFAULT_TIMEOUT): @@ -234,7 +238,6 @@ """ self.timeout = timeout self.esmtp_features = {} - self.default_port = SMTP_PORT if host: (code, msg) = self.connect(host, port) if code != 220: @@ -269,10 +272,11 @@ def _get_socket(self, port, host, timeout): # This makes it simpler for SMTP_SSL to use the SMTP connect code # and just alter the socket connection bit. - if self.debuglevel > 0: print>>stderr, 'connect:', (host, port) + if self.debuglevel > 0: + print>>stderr, 'connect:', (host, port) return socket.create_connection((port, host), timeout) - def connect(self, host='localhost', port = 0): + def connect(self, host='localhost', port=0): """Connect to a host on a given port. If the hostname ends with a colon (`:') followed by a number, and @@ -286,20 +290,25 @@ if not port and (host.find(':') == host.rfind(':')): i = host.rfind(':') if i >= 0: - host, port = host[:i], host[i+1:] - try: port = int(port) + host, port = host[:i], host[i + 1:] + try: + port = int(port) except ValueError: raise socket.error, "nonnumeric port" - if not port: port = self.default_port - if self.debuglevel > 0: print>>stderr, 'connect:', (host, port) + if not port: + port = self.default_port + if self.debuglevel > 0: + print>>stderr, 'connect:', (host, port) self.sock = self._get_socket(host, port, self.timeout) (code, msg) = self.getreply() - if self.debuglevel > 0: print>>stderr, "connect:", msg + if self.debuglevel > 0: + print>>stderr, "connect:", msg return (code, msg) def send(self, str): """Send `str' to the server.""" - if self.debuglevel > 0: print>>stderr, 'send:', repr(str) + if self.debuglevel > 0: + print>>stderr, 'send:', repr(str) if hasattr(self, 'sock') and self.sock: try: self.sock.sendall(str) @@ -330,7 +339,7 @@ Raises SMTPServerDisconnected if end-of-file is reached. """ - resp=[] + resp = [] if self.file is None: self.file = self.sock.makefile('rb') while 1: @@ -341,9 +350,10 @@ if line == '': self.close() raise SMTPServerDisconnected("Connection unexpectedly closed") - if self.debuglevel > 0: print>>stderr, 'reply:', repr(line) + if self.debuglevel > 0: + print>>stderr, 'reply:', repr(line) resp.append(line[4:].strip()) - code=line[:3] + code = line[:3] # Check that the error code is syntactically correct. # Don't attempt to read a continuation line if it is broken. try: @@ -352,17 +362,17 @@ errcode = -1 break # Check if multiline response. - if line[3:4]!="-": + if line[3:4] != "-": break errmsg = "\n".join(resp) if self.debuglevel > 0: - print>>stderr, 'reply: retcode (%s); Msg: %s' % (errcode,errmsg) + print>>stderr, 'reply: retcode (%s); Msg: %s' % (errcode, errmsg) return errcode, errmsg def docmd(self, cmd, args=""): """Send a command, and return its response code.""" - self.putcmd(cmd,args) + self.putcmd(cmd, args) return self.getreply() # std smtp commands @@ -372,9 +382,9 @@ host. """ self.putcmd("helo", name or self.local_hostname) - (code,msg)=self.getreply() - self.helo_resp=msg - return (code,msg) + (code, msg) = self.getreply() + self.helo_resp = msg + return (code, msg) def ehlo(self, name=''): """ SMTP 'ehlo' command. 
@@ -383,19 +393,19 @@ """ self.esmtp_features = {} self.putcmd(self.ehlo_msg, name or self.local_hostname) - (code,msg)=self.getreply() + (code, msg) = self.getreply() # According to RFC1869 some (badly written) # MTA's will disconnect on an ehlo. Toss an exception if # that happens -ddm if code == -1 and len(msg) == 0: self.close() raise SMTPServerDisconnected("Server not connected") - self.ehlo_resp=msg + self.ehlo_resp = msg if code != 250: - return (code,msg) - self.does_esmtp=1 + return (code, msg) + self.does_esmtp = 1 #parse the ehlo response -ddm - resp=self.ehlo_resp.split('\n') + resp = self.ehlo_resp.split('\n') del resp[0] for each in resp: # To be able to communicate with as many SMTP servers as possible, @@ -415,16 +425,16 @@ # It's actually stricter, in that only spaces are allowed between # parameters, but were not going to check for that here. Note # that the space isn't present if there are no parameters. - m=re.match(r'(?P[A-Za-z0-9][A-Za-z0-9\-]*) ?',each) + m = re.match(r'(?P[A-Za-z0-9][A-Za-z0-9\-]*) ?', each) if m: - feature=m.group("feature").lower() - params=m.string[m.end("feature"):].strip() + feature = m.group("feature").lower() + params = m.string[m.end("feature"):].strip() if feature == "auth": self.esmtp_features[feature] = self.esmtp_features.get(feature, "") \ + " " + params else: - self.esmtp_features[feature]=params - return (code,msg) + self.esmtp_features[feature] = params + return (code, msg) def has_extn(self, opt): """Does the server support a given SMTP service extension?""" @@ -444,23 +454,23 @@ """SMTP 'noop' command -- doesn't do anything :>""" return self.docmd("noop") - def mail(self,sender,options=[]): + def mail(self, sender, options=[]): """SMTP 'mail' command -- begins mail xfer session.""" optionlist = '' if options and self.does_esmtp: optionlist = ' ' + ' '.join(options) - self.putcmd("mail", "FROM:%s%s" % (quoteaddr(sender) ,optionlist)) + self.putcmd("mail", "FROM:%s%s" % (quoteaddr(sender), optionlist)) return self.getreply() - def rcpt(self,recip,options=[]): + def rcpt(self, recip, options=[]): """SMTP 'rcpt' command -- indicates 1 recipient for this mail.""" optionlist = '' if options and self.does_esmtp: optionlist = ' ' + ' '.join(options) - self.putcmd("rcpt","TO:%s%s" % (quoteaddr(recip),optionlist)) + self.putcmd("rcpt", "TO:%s%s" % (quoteaddr(recip), optionlist)) return self.getreply() - def data(self,msg): + def data(self, msg): """SMTP 'DATA' command -- sends message data to server. Automatically quotes lines beginning with a period per rfc821. @@ -469,26 +479,28 @@ response code received when the all data is sent. """ self.putcmd("data") - (code,repl)=self.getreply() - if self.debuglevel >0 : print>>stderr, "data:", (code,repl) + (code, repl) = self.getreply() + if self.debuglevel > 0: + print>>stderr, "data:", (code, repl) if code != 354: - raise SMTPDataError(code,repl) + raise SMTPDataError(code, repl) else: q = quotedata(msg) if q[-2:] != CRLF: q = q + CRLF q = q + "." + CRLF self.send(q) - (code,msg)=self.getreply() - if self.debuglevel >0 : print>>stderr, "data:", (code,msg) - return (code,msg) + (code, msg) = self.getreply() + if self.debuglevel > 0: + print>>stderr, "data:", (code, msg) + return (code, msg) def verify(self, address): """SMTP 'verify' command -- checks for address validity.""" self.putcmd("vrfy", quoteaddr(address)) return self.getreply() # a.k.a. 
- vrfy=verify + vrfy = verify def expn(self, address): """SMTP 'expn' command -- expands a mailing list.""" @@ -592,7 +604,7 @@ raise SMTPAuthenticationError(code, resp) return (code, resp) - def starttls(self, keyfile = None, certfile = None): + def starttls(self, keyfile=None, certfile=None): """Puts the connection to the SMTP server into TLS mode. If there has been no previous EHLO or HELO command this session, this @@ -695,22 +707,22 @@ for option in mail_options: esmtp_opts.append(option) - (code,resp) = self.mail(from_addr, esmtp_opts) + (code, resp) = self.mail(from_addr, esmtp_opts) if code != 250: self.rset() raise SMTPSenderRefused(code, resp, from_addr) - senderrs={} + senderrs = {} if isinstance(to_addrs, basestring): to_addrs = [to_addrs] for each in to_addrs: - (code,resp)=self.rcpt(each, rcpt_options) + (code, resp) = self.rcpt(each, rcpt_options) if (code != 250) and (code != 251): - senderrs[each]=(code,resp) - if len(senderrs)==len(to_addrs): + senderrs[each] = (code, resp) + if len(senderrs) == len(to_addrs): # the server refused all our recipients self.rset() raise SMTPRecipientsRefused(senderrs) - (code,resp) = self.data(msg) + (code, resp) = self.data(msg) if code != 250: self.rset() raise SMTPDataError(code, resp) @@ -744,16 +756,19 @@ are also optional - they can contain a PEM formatted private key and certificate chain file for the SSL connection. """ + + default_port = SMTP_SSL_PORT + def __init__(self, host='', port=0, local_hostname=None, keyfile=None, certfile=None, timeout=socket._GLOBAL_DEFAULT_TIMEOUT): self.keyfile = keyfile self.certfile = certfile SMTP.__init__(self, host, port, local_hostname, timeout) - self.default_port = SMTP_SSL_PORT def _get_socket(self, host, port, timeout): - if self.debuglevel > 0: print>>stderr, 'connect:', (host, port) + if self.debuglevel > 0: + print>>stderr, 'connect:', (host, port) new_socket = socket.create_connection((host, port), timeout) new_socket = ssl.wrap_socket(new_socket, self.keyfile, self.certfile) self.file = SSLFakeFile(new_socket) @@ -781,11 +796,11 @@ ehlo_msg = "lhlo" - def __init__(self, host = '', port = LMTP_PORT, local_hostname = None): + def __init__(self, host='', port=LMTP_PORT, local_hostname=None): """Initialize a new instance.""" SMTP.__init__(self, host, port, local_hostname) - def connect(self, host = 'localhost', port = 0): + def connect(self, host='localhost', port=0): """Connect to the LMTP daemon, on either a Unix or a TCP socket.""" if host[0] != '/': return SMTP.connect(self, host, port) @@ -795,13 +810,15 @@ self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) self.sock.connect(host) except socket.error, msg: - if self.debuglevel > 0: print>>stderr, 'connect fail:', host + if self.debuglevel > 0: + print>>stderr, 'connect fail:', host if self.sock: self.sock.close() self.sock = None raise socket.error, msg (code, msg) = self.getreply() - if self.debuglevel > 0: print>>stderr, "connect:", msg + if self.debuglevel > 0: + print>>stderr, "connect:", msg return (code, msg) @@ -815,7 +832,7 @@ return sys.stdin.readline().strip() fromaddr = prompt("From") - toaddrs = prompt("To").split(',') + toaddrs = prompt("To").split(',') print "Enter message, end with ^D:" msg = '' while 1: diff --git a/lib-python/2.7/ssl.py b/lib-python/2.7/ssl.py --- a/lib-python/2.7/ssl.py +++ b/lib-python/2.7/ssl.py @@ -121,9 +121,11 @@ if e.errno != errno.ENOTCONN: raise # no, no connection yet + self._connected = False self._sslobj = None else: # yes, create the SSL object + self._connected = True 
self._sslobj = _ssl.sslwrap(self._sock, server_side, keyfile, certfile, cert_reqs, ssl_version, ca_certs, @@ -293,21 +295,36 @@ self._sslobj.do_handshake() - def connect(self, addr): - - """Connects to remote ADDR, and then wraps the connection in - an SSL channel.""" - + def _real_connect(self, addr, return_errno): # Here we assume that the socket is client-side, and not # connected at the time of the call. We connect it, then wrap it. - if self._sslobj: + if self._connected: raise ValueError("attempt to connect already-connected SSLSocket!") - socket.connect(self, addr) self._sslobj = _ssl.sslwrap(self._sock, False, self.keyfile, self.certfile, self.cert_reqs, self.ssl_version, self.ca_certs, self.ciphers) - if self.do_handshake_on_connect: - self.do_handshake() + try: + socket.connect(self, addr) + if self.do_handshake_on_connect: + self.do_handshake() + except socket_error as e: + if return_errno: + return e.errno + else: + self._sslobj = None + raise e + self._connected = True + return 0 + + def connect(self, addr): + """Connects to remote ADDR, and then wraps the connection in + an SSL channel.""" + self._real_connect(addr, False) + + def connect_ex(self, addr): + """Connects to remote ADDR, and then wraps the connection in + an SSL channel.""" + return self._real_connect(addr, True) def accept(self): diff --git a/lib-python/2.7/subprocess.py b/lib-python/2.7/subprocess.py --- a/lib-python/2.7/subprocess.py +++ b/lib-python/2.7/subprocess.py @@ -396,6 +396,7 @@ import traceback import gc import signal +import errno # Exception classes used by this module. class CalledProcessError(Exception): @@ -427,7 +428,6 @@ else: import select _has_poll = hasattr(select, 'poll') - import errno import fcntl import pickle @@ -441,8 +441,15 @@ "check_output", "CalledProcessError"] if mswindows: - from _subprocess import CREATE_NEW_CONSOLE, CREATE_NEW_PROCESS_GROUP - __all__.extend(["CREATE_NEW_CONSOLE", "CREATE_NEW_PROCESS_GROUP"]) + from _subprocess import (CREATE_NEW_CONSOLE, CREATE_NEW_PROCESS_GROUP, + STD_INPUT_HANDLE, STD_OUTPUT_HANDLE, + STD_ERROR_HANDLE, SW_HIDE, + STARTF_USESTDHANDLES, STARTF_USESHOWWINDOW) + + __all__.extend(["CREATE_NEW_CONSOLE", "CREATE_NEW_PROCESS_GROUP", + "STD_INPUT_HANDLE", "STD_OUTPUT_HANDLE", + "STD_ERROR_HANDLE", "SW_HIDE", + "STARTF_USESTDHANDLES", "STARTF_USESHOWWINDOW"]) try: MAXFD = os.sysconf("SC_OPEN_MAX") except: @@ -726,7 +733,11 @@ stderr = None if self.stdin: if input: - self.stdin.write(input) + try: + self.stdin.write(input) + except IOError as e: + if e.errno != errno.EPIPE and e.errno != errno.EINVAL: + raise self.stdin.close() elif self.stdout: stdout = self.stdout.read() @@ -883,7 +894,7 @@ except pywintypes.error, e: # Translate pywintypes.error to WindowsError, which is # a subclass of OSError. FIXME: We should really - # translate errno using _sys_errlist (or simliar), but + # translate errno using _sys_errlist (or similar), but # how can this be done from Python? 
raise WindowsError(*e.args) finally: @@ -956,7 +967,11 @@ if self.stdin: if input is not None: - self.stdin.write(input) + try: + self.stdin.write(input) + except IOError as e: + if e.errno != errno.EPIPE: + raise self.stdin.close() if self.stdout: @@ -1051,14 +1066,17 @@ errread, errwrite) - def _set_cloexec_flag(self, fd): + def _set_cloexec_flag(self, fd, cloexec=True): try: cloexec_flag = fcntl.FD_CLOEXEC except AttributeError: cloexec_flag = 1 old = fcntl.fcntl(fd, fcntl.F_GETFD) - fcntl.fcntl(fd, fcntl.F_SETFD, old | cloexec_flag) + if cloexec: + fcntl.fcntl(fd, fcntl.F_SETFD, old | cloexec_flag) + else: + fcntl.fcntl(fd, fcntl.F_SETFD, old & ~cloexec_flag) def _close_fds(self, but): @@ -1128,21 +1146,25 @@ os.close(errpipe_read) # Dup fds for child - if p2cread is not None: - os.dup2(p2cread, 0) - if c2pwrite is not None: - os.dup2(c2pwrite, 1) - if errwrite is not None: - os.dup2(errwrite, 2) + def _dup2(a, b): + # dup2() removes the CLOEXEC flag but + # we must do it ourselves if dup2() + # would be a no-op (issue #10806). + if a == b: + self._set_cloexec_flag(a, False) + elif a is not None: + os.dup2(a, b) + _dup2(p2cread, 0) + _dup2(c2pwrite, 1) + _dup2(errwrite, 2) - # Close pipe fds. Make sure we don't close the same - # fd more than once, or standard fds. - if p2cread is not None and p2cread not in (0,): - os.close(p2cread) - if c2pwrite is not None and c2pwrite not in (p2cread, 1): - os.close(c2pwrite) - if errwrite is not None and errwrite not in (p2cread, c2pwrite, 2): - os.close(errwrite) + # Close pipe fds. Make sure we don't close the + # same fd more than once, or standard fds. + closed = { None } + for fd in [p2cread, c2pwrite, errwrite]: + if fd not in closed and fd > 2: + os.close(fd) + closed.add(fd) # Close all other fds, if asked for if close_fds: @@ -1194,7 +1216,11 @@ os.close(errpipe_read) if data != "": - _eintr_retry_call(os.waitpid, self.pid, 0) + try: + _eintr_retry_call(os.waitpid, self.pid, 0) + except OSError as e: + if e.errno != errno.ECHILD: + raise child_exception = pickle.loads(data) for fd in (p2cwrite, c2pread, errread): if fd is not None: @@ -1240,7 +1266,15 @@ """Wait for child process to terminate. Returns returncode attribute.""" if self.returncode is None: - pid, sts = _eintr_retry_call(os.waitpid, self.pid, 0) + try: + pid, sts = _eintr_retry_call(os.waitpid, self.pid, 0) + except OSError as e: + if e.errno != errno.ECHILD: + raise + # This happens if SIGCLD is set to be ignored or waiting + # for child processes has otherwise been disabled for our + # process. This child is dead, we can't get the status. 
+ sts = 0 self._handle_exitstatus(sts) return self.returncode @@ -1317,9 +1351,16 @@ for fd, mode in ready: if mode & select.POLLOUT: chunk = input[input_offset : input_offset + _PIPE_BUF] - input_offset += os.write(fd, chunk) - if input_offset >= len(input): - close_unregister_and_remove(fd) + try: + input_offset += os.write(fd, chunk) + except OSError as e: + if e.errno == errno.EPIPE: + close_unregister_and_remove(fd) + else: + raise + else: + if input_offset >= len(input): + close_unregister_and_remove(fd) elif mode & select_POLLIN_POLLPRI: data = os.read(fd, 4096) if not data: @@ -1358,11 +1399,19 @@ if self.stdin in wlist: chunk = input[input_offset : input_offset + _PIPE_BUF] - bytes_written = os.write(self.stdin.fileno(), chunk) - input_offset += bytes_written - if input_offset >= len(input): - self.stdin.close() - write_set.remove(self.stdin) + try: + bytes_written = os.write(self.stdin.fileno(), chunk) + except OSError as e: + if e.errno == errno.EPIPE: + self.stdin.close() + write_set.remove(self.stdin) + else: + raise + else: + input_offset += bytes_written + if input_offset >= len(input): + self.stdin.close() + write_set.remove(self.stdin) if self.stdout in rlist: data = os.read(self.stdout.fileno(), 1024) diff --git a/lib-python/2.7/symbol.py b/lib-python/2.7/symbol.py --- a/lib-python/2.7/symbol.py +++ b/lib-python/2.7/symbol.py @@ -82,20 +82,19 @@ sliceop = 325 exprlist = 326 testlist = 327 -dictmaker = 328 -dictorsetmaker = 329 -classdef = 330 -arglist = 331 -argument = 332 -list_iter = 333 -list_for = 334 -list_if = 335 -comp_iter = 336 -comp_for = 337 -comp_if = 338 -testlist1 = 339 -encoding_decl = 340 -yield_expr = 341 +dictorsetmaker = 328 +classdef = 329 +arglist = 330 +argument = 331 +list_iter = 332 +list_for = 333 +list_if = 334 +comp_iter = 335 +comp_for = 336 +comp_if = 337 +testlist1 = 338 +encoding_decl = 339 +yield_expr = 340 #--end constants-- sym_name = {} diff --git a/lib-python/2.7/sysconfig.py b/lib-python/2.7/sysconfig.py --- a/lib-python/2.7/sysconfig.py +++ b/lib-python/2.7/sysconfig.py @@ -271,7 +271,7 @@ def _get_makefile_filename(): if _PYTHON_BUILD: return os.path.join(_PROJECT_BASE, "Makefile") - return os.path.join(get_path('stdlib'), "config", "Makefile") + return os.path.join(get_path('platstdlib'), "config", "Makefile") def _init_posix(vars): @@ -297,21 +297,6 @@ msg = msg + " (%s)" % e.strerror raise IOError(msg) - # On MacOSX we need to check the setting of the environment variable - # MACOSX_DEPLOYMENT_TARGET: configure bases some choices on it so - # it needs to be compatible. - # If it isn't set we set it to the configure-time value - if sys.platform == 'darwin' and 'MACOSX_DEPLOYMENT_TARGET' in vars: - cfg_target = vars['MACOSX_DEPLOYMENT_TARGET'] - cur_target = os.getenv('MACOSX_DEPLOYMENT_TARGET', '') - if cur_target == '': - cur_target = cfg_target - os.putenv('MACOSX_DEPLOYMENT_TARGET', cfg_target) - elif map(int, cfg_target.split('.')) > map(int, cur_target.split('.')): - msg = ('$MACOSX_DEPLOYMENT_TARGET mismatch: now "%s" but "%s" ' - 'during configure' % (cur_target, cfg_target)) - raise IOError(msg) - # On AIX, there are wrong paths to the linker scripts in the Makefile # -- these paths are relative to the Python source, but when installed # the scripts are in another directory. @@ -616,9 +601,7 @@ # machine is going to compile and link as if it were # MACOSX_DEPLOYMENT_TARGET. 
cfgvars = get_config_vars() - macver = os.environ.get('MACOSX_DEPLOYMENT_TARGET') - if not macver: - macver = cfgvars.get('MACOSX_DEPLOYMENT_TARGET') + macver = cfgvars.get('MACOSX_DEPLOYMENT_TARGET') if 1: # Always calculate the release of the running machine, @@ -639,7 +622,6 @@ m = re.search( r'ProductUserVisibleVersion\s*' + r'(.*?)', f.read()) - f.close() if m is not None: macrelease = '.'.join(m.group(1).split('.')[:2]) # else: fall back to the default behaviour diff --git a/lib-python/2.7/tarfile.py b/lib-python/2.7/tarfile.py --- a/lib-python/2.7/tarfile.py +++ b/lib-python/2.7/tarfile.py @@ -2239,10 +2239,14 @@ if hasattr(os, "symlink") and hasattr(os, "link"): # For systems that support symbolic and hard links. if tarinfo.issym(): + if os.path.lexists(targetpath): + os.unlink(targetpath) os.symlink(tarinfo.linkname, targetpath) else: # See extract(). if os.path.exists(tarinfo._link_target): + if os.path.lexists(targetpath): + os.unlink(targetpath) os.link(tarinfo._link_target, targetpath) else: self._extract_member(self._find_link_target(tarinfo), targetpath) diff --git a/lib-python/2.7/telnetlib.py b/lib-python/2.7/telnetlib.py --- a/lib-python/2.7/telnetlib.py +++ b/lib-python/2.7/telnetlib.py @@ -236,7 +236,7 @@ """ if self.debuglevel > 0: - print 'Telnet(%s,%d):' % (self.host, self.port), + print 'Telnet(%s,%s):' % (self.host, self.port), if args: print msg % args else: diff --git a/lib-python/2.7/test/cjkencodings/big5-utf8.txt b/lib-python/2.7/test/cjkencodings/big5-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/big5-utf8.txt @@ -0,0 +1,9 @@ +如何在 Python 中使用既有的 C library? + 在資訊科技快速發展的今天, 開發及測試軟體的速度是不容忽視的 +課題. 為加快開發及測試的速度, 我們便常希望能利用一些已開發好的 +library, 並有一個 fast prototyping 的 programming language 可 +供使用. 目前有許許多多的 library 是以 C 寫成, 而 Python 是一個 +fast prototyping 的 programming language. 故我們希望能將既有的 +C library 拿到 Python 的環境中測試及整合. 其中最主要也是我們所 +要討論的問題就是: + diff --git a/lib-python/2.7/test/cjkencodings/big5.txt b/lib-python/2.7/test/cjkencodings/big5.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/big5.txt @@ -0,0 +1,9 @@ +�p��b Python ���ϥάJ���� C library? +�@�b��T��ާֳt�o�i������, �}�o�δ��ճn�骺�t�׬O���e������ +���D. ���[�ֶ}�o�δ��ժ��t��, �ڭ̫K�`�Ʊ��Q�Τ@�Ǥw�}�o�n�� +library, �æ��@�� fast prototyping �� programming language �i +�Ѩϥ�. �ثe���\�\�h�h�� library �O�H C �g��, �� Python �O�@�� +fast prototyping �� programming language. �G�ڭ̧Ʊ��N�J���� +C library ���� Python �����Ҥ����դξ�X. �䤤�̥D�n�]�O�ڭ̩� +�n�Q�ת����D�N�O: + diff --git a/lib-python/2.7/test/cjkencodings/big5hkscs-utf8.txt b/lib-python/2.7/test/cjkencodings/big5hkscs-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/big5hkscs-utf8.txt @@ -0,0 +1,2 @@ +𠄌Ě鵮罓洆 +ÊÊ̄ê êê̄ diff --git a/lib-python/2.7/test/cjkencodings/big5hkscs.txt b/lib-python/2.7/test/cjkencodings/big5hkscs.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/big5hkscs.txt @@ -0,0 +1,2 @@ +�E�\�s�ڍ� +�f�b�� ���� diff --git a/lib-python/2.7/test/cjkencodings/cp949-utf8.txt b/lib-python/2.7/test/cjkencodings/cp949-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/cp949-utf8.txt @@ -0,0 +1,9 @@ +똠방각하 펲시콜라 + +㉯㉯납!! 因九月패믤릔궈 ⓡⓖ훀¿¿¿ 긍뒙 ⓔ뎨 ㉯. . +亞영ⓔ능횹 . . . . 서울뤄 뎐학乙 家훀 ! ! !ㅠ.ㅠ +흐흐흐 ㄱㄱㄱ☆ㅠ_ㅠ 어릨 탸콰긐 뎌응 칑九들乙 ㉯드긐 +설릌 家훀 . . . . 굴애쉌 ⓔ궈 ⓡ릘㉱긐 因仁川女中까즼 +와쒀훀 ! ! 亞영ⓔ 家능궈 ☆上관 없능궈능 亞능뒈훀 글애듴 +ⓡ려듀九 싀풔숴훀 어릨 因仁川女中싁⑨들앜!! 
㉯㉯납♡ ⌒⌒* + diff --git a/lib-python/2.7/test/cjkencodings/cp949.txt b/lib-python/2.7/test/cjkencodings/cp949.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/cp949.txt @@ -0,0 +1,9 @@ +�c�氢�� �����ݶ� + +������!! �������В�p�� �ި��R������ ���� �ѵ� ��. . +䬿��Ѵ��� . . . . ����� ������ ʫ�R ! ! !��.�� +������ �������٤�_�� � ����O ���� �h������ ����O +���j ʫ�R . . . . ���֚f �ѱ� �ސt�ƒO ���������� +�;��R ! ! 䬿��� ʫ�ɱ� ��߾�� ���ɱŴ� 䬴ɵ��R �۾֊� +�޷����� ��Ǵ���R � ����������Ĩ���!! �������� �ҡ�* + diff --git a/lib-python/2.7/test/cjkencodings/euc_jisx0213-utf8.txt b/lib-python/2.7/test/cjkencodings/euc_jisx0213-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/euc_jisx0213-utf8.txt @@ -0,0 +1,8 @@ +Python の開発は、1990 年ごろから開始されています。 +開発者の Guido van Rossum は教育用のプログラミング言語「ABC」の開発に参加していましたが、ABC は実用上の目的にはあまり適していませんでした。 +このため、Guido はより実用的なプログラミング言語の開発を開始し、英国 BBS 放送のコメディ番組「モンティ パイソン」のファンである Guido はこの言語を「Python」と名づけました。 +このような背景から生まれた Python の言語設計は、「シンプル」で「習得が容易」という目標に重点が置かれています。 +多くのスクリプト系言語ではユーザの目先の利便性を優先して色々な機能を言語要素として取り入れる場合が多いのですが、Python ではそういった小細工が追加されることはあまりありません。 +言語自体の機能は最小限に押さえ、必要な機能は拡張モジュールとして追加する、というのが Python のポリシーです。 + +ノか゚ ト゚ トキ喝塀 𡚴𪎌 麀齁𩛰 diff --git a/lib-python/2.7/test/cjkencodings/euc_jisx0213.txt b/lib-python/2.7/test/cjkencodings/euc_jisx0213.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/euc_jisx0213.txt @@ -0,0 +1,8 @@ +Python �γ�ȯ�ϡ�1990 ǯ�����鳫�Ϥ���Ƥ��ޤ��� +��ȯ�Ԥ� Guido van Rossum �϶����ѤΥץ���ߥ󥰸����ABC�פγ�ȯ�˻��ä��Ƥ��ޤ�������ABC �ϼ��Ѿ����Ū�ˤϤ��ޤ�Ŭ���Ƥ��ޤ���Ǥ����� +���Τ��ᡢGuido �Ϥ�����Ū�ʥץ���ߥ󥰸���γ�ȯ�򳫻Ϥ����ѹ� BBS �����Υ���ǥ����ȡ֥��ƥ� �ѥ�����פΥե���Ǥ��� Guido �Ϥ��θ�����Python�פ�̾�Ť��ޤ����� +���Τ褦���طʤ������ޤ줿 Python �θ����߷פϡ��֥���ץ�פǡֽ������ưספȤ�����ɸ�˽������֤���Ƥ��ޤ��� +¿���Υ�����ץȷϸ���Ǥϥ桼�����������������ͥ�褷�ƿ����ʵ�ǽ��������ǤȤ��Ƽ��������礬¿���ΤǤ�����Python �ǤϤ������ä����ٹ����ɲä���뤳�ȤϤ��ޤꤢ��ޤ��� +���켫�Τε�ǽ�ϺǾ��¤˲�������ɬ�פʵ�ǽ�ϳ�ĥ�⥸�塼��Ȥ����ɲä��롢�Ȥ����Τ� Python �Υݥꥷ���Ǥ��� + +�Τ� �� �ȥ����� ���� ��ԏ���� diff --git a/lib-python/2.7/test/cjkencodings/euc_jp-utf8.txt b/lib-python/2.7/test/cjkencodings/euc_jp-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/euc_jp-utf8.txt @@ -0,0 +1,7 @@ +Python の開発は、1990 年ごろから開始されています。 +開発者の Guido van Rossum は教育用のプログラミング言語「ABC」の開発に参加していましたが、ABC は実用上の目的にはあまり適していませんでした。 +このため、Guido はより実用的なプログラミング言語の開発を開始し、英国 BBS 放送のコメディ番組「モンティ パイソン」のファンである Guido はこの言語を「Python」と名づけました。 +このような背景から生まれた Python の言語設計は、「シンプル」で「習得が容易」という目標に重点が置かれています。 +多くのスクリプト系言語ではユーザの目先の利便性を優先して色々な機能を言語要素として取り入れる場合が多いのですが、Python ではそういった小細工が追加されることはあまりありません。 +言語自体の機能は最小限に押さえ、必要な機能は拡張モジュールとして追加する、というのが Python のポリシーです。 + diff --git a/lib-python/2.7/test/cjkencodings/euc_jp.txt b/lib-python/2.7/test/cjkencodings/euc_jp.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/euc_jp.txt @@ -0,0 +1,7 @@ +Python �γ�ȯ�ϡ�1990 ǯ�����鳫�Ϥ���Ƥ��ޤ��� +��ȯ�Ԥ� Guido van Rossum �϶����ѤΥץ���ߥ󥰸����ABC�פγ�ȯ�˻��ä��Ƥ��ޤ�������ABC �ϼ��Ѿ����Ū�ˤϤ��ޤ�Ŭ���Ƥ��ޤ���Ǥ����� +���Τ��ᡢGuido �Ϥ�����Ū�ʥץ���ߥ󥰸���γ�ȯ�򳫻Ϥ����ѹ� BBS �����Υ���ǥ����ȡ֥��ƥ� �ѥ�����פΥե���Ǥ��� Guido �Ϥ��θ�����Python�פ�̾�Ť��ޤ����� +���Τ褦���طʤ������ޤ줿 Python �θ����߷פϡ��֥���ץ�פǡֽ������ưספȤ�����ɸ�˽������֤���Ƥ��ޤ��� +¿���Υ�����ץȷϸ���Ǥϥ桼�����������������ͥ�褷�ƿ����ʵ�ǽ��������ǤȤ��Ƽ��������礬¿���ΤǤ�����Python �ǤϤ������ä����ٹ����ɲä���뤳�ȤϤ��ޤꤢ��ޤ��� +���켫�Τε�ǽ�ϺǾ��¤˲�������ɬ�פʵ�ǽ�ϳ�ĥ�⥸�塼��Ȥ����ɲä��롢�Ȥ����Τ� Python �Υݥꥷ���Ǥ��� + diff --git a/lib-python/2.7/test/cjkencodings/euc_kr-utf8.txt 
b/lib-python/2.7/test/cjkencodings/euc_kr-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/euc_kr-utf8.txt @@ -0,0 +1,7 @@ +◎ 파이썬(Python)은 배우기 쉽고, 강력한 프로그래밍 언어입니다. 파이썬은 +효율적인 고수준 데이터 구조와 간단하지만 효율적인 객체지향프로그래밍을 +지원합니다. 파이썬의 우아(優雅)한 문법과 동적 타이핑, 그리고 인터프리팅 +환경은 파이썬을 스크립팅과 여러 분야에서와 대부분의 플랫폼에서의 빠른 +애플리케이션 개발을 할 수 있는 이상적인 언어로 만들어줍니다. + +☆첫가끝: 날아라 쓔쓔쓩~ 닁큼! 뜽금없이 전홥니다. 뷁. 그런거 읎다. diff --git a/lib-python/2.7/test/cjkencodings/euc_kr.txt b/lib-python/2.7/test/cjkencodings/euc_kr.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/euc_kr.txt @@ -0,0 +1,7 @@ +�� ���̽�(Python)�� ���� ����, ������ ���α׷��� ����Դϴ�. ���̽��� +ȿ������ ����� ������ ������ ���������� ȿ������ ��ü�������α׷����� +�����մϴ�. ���̽��� ���(���)�� ������ ���� Ÿ����, �׸��� ���������� +ȯ���� ���̽��� ��ũ���ð� ���� �о߿����� ��κ��� �÷��������� ���� +���ø����̼� ������ �� �� �ִ� �̻����� ���� ������ݴϴ�. + +��ù����: ���ƶ� �Ԥ��ФԤԤ��ФԾ�~ �Ԥ��Ҥ�ŭ! �Ԥ��Ѥ��ݾ��� ���Ԥ��Ȥ��ϴ�. �Ԥ��Τ�. �׷��� �Ԥ��Ѥ���. diff --git a/lib-python/2.7/test/cjkencodings/gb18030-utf8.txt b/lib-python/2.7/test/cjkencodings/gb18030-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/gb18030-utf8.txt @@ -0,0 +1,15 @@ +Python(派森)语言是一种功能强大而完善的通用型计算机程序设计语言, +已经具有十多年的发展历史,成熟且稳定。这种语言具有非常简捷而清晰 +的语法特点,适合完成各种高层任务,几乎可以在所有的操作系统中 +运行。这种语言简单而强大,适合各种人士学习使用。目前,基于这 +种语言的相关技术正在飞速的发展,用户数量急剧扩大,相关的资源非常多。 +如何在 Python 中使用既有的 C library? + 在資訊科技快速發展的今天, 開發及測試軟體的速度是不容忽視的 +課題. 為加快開發及測試的速度, 我們便常希望能利用一些已開發好的 +library, 並有一個 fast prototyping 的 programming language 可 +供使用. 目前有許許多多的 library 是以 C 寫成, 而 Python 是一個 +fast prototyping 的 programming language. 故我們希望能將既有的 +C library 拿到 Python 的環境中測試及整合. 其中最主要也是我們所 +要討論的問題就是: +파이썬은 강력한 기능을 지닌 범용 컴퓨터 프로그래밍 언어다. + diff --git a/lib-python/2.7/test/cjkencodings/gb18030.txt b/lib-python/2.7/test/cjkencodings/gb18030.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/gb18030.txt @@ -0,0 +1,15 @@ +Python����ɭ��������һ�ֹ���ǿ������Ƶ�ͨ���ͼ��������������ԣ� +�Ѿ�����ʮ����ķ�չ��ʷ���������ȶ����������Ծ��зdz���ݶ����� +���﷨�ص㣬�ʺ���ɸ��ָ߲����񣬼������������еIJ���ϵͳ�� +���С��������Լ򵥶�ǿ���ʺϸ�����ʿѧϰʹ�á�Ŀǰ�������� +�����Ե���ؼ������ڷ��ٵķ�չ���û���������������ص���Դ�dz��ࡣ +����� Python ��ʹ�ü��е� C library? +�����YӍ�Ƽ����ٰlչ�Ľ���, �_�l���yԇܛ�w���ٶ��Dz��ݺ�ҕ�� +�n�}. ��ӿ��_�l���yԇ���ٶ�, �҂��㳣ϣ��������һЩ���_�l�õ� +library, �K��һ�� fast prototyping �� programming language �� +��ʹ��. Ŀǰ���S�S���� library ���� C ����, �� Python ��һ�� +fast prototyping �� programming language. ���҂�ϣ���܌����е� +C library �õ� Python �ĭh���Мyԇ������. ��������ҪҲ���҂��� +ҪӑՓ�Ć��}����: +�5�1�3�3�2�1�3�1 �7�6�0�4�6�3 �8�5�8�6�3�5 �3�1�9�5 �0�9�3�0 �4�3�5�7�5�5 �5�5�0�9�8�9�9�3�0�4 �2�9�2�5�9�9. 
+ diff --git a/lib-python/2.7/test/cjkencodings/gb2312-utf8.txt b/lib-python/2.7/test/cjkencodings/gb2312-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/gb2312-utf8.txt @@ -0,0 +1,6 @@ +Python(派森)语言是一种功能强大而完善的通用型计算机程序设计语言, +已经具有十多年的发展历史,成熟且稳定。这种语言具有非常简捷而清晰 +的语法特点,适合完成各种高层任务,几乎可以在所有的操作系统中 +运行。这种语言简单而强大,适合各种人士学习使用。目前,基于这 +种语言的相关技术正在飞速的发展,用户数量急剧扩大,相关的资源非常多。 + diff --git a/lib-python/2.7/test/cjkencodings/gb2312.txt b/lib-python/2.7/test/cjkencodings/gb2312.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/gb2312.txt @@ -0,0 +1,6 @@ +Python����ɭ��������һ�ֹ���ǿ������Ƶ�ͨ���ͼ��������������ԣ� +�Ѿ�����ʮ����ķ�չ��ʷ���������ȶ����������Ծ��зdz���ݶ����� +���﷨�ص㣬�ʺ���ɸ��ָ߲����񣬼������������еIJ���ϵͳ�� +���С��������Լ򵥶�ǿ���ʺϸ�����ʿѧϰʹ�á�Ŀǰ�������� +�����Ե���ؼ������ڷ��ٵķ�չ���û���������������ص���Դ�dz��ࡣ + diff --git a/lib-python/2.7/test/cjkencodings/gbk-utf8.txt b/lib-python/2.7/test/cjkencodings/gbk-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/gbk-utf8.txt @@ -0,0 +1,14 @@ +Python(派森)语言是一种功能强大而完善的通用型计算机程序设计语言, +已经具有十多年的发展历史,成熟且稳定。这种语言具有非常简捷而清晰 +的语法特点,适合完成各种高层任务,几乎可以在所有的操作系统中 +运行。这种语言简单而强大,适合各种人士学习使用。目前,基于这 +种语言的相关技术正在飞速的发展,用户数量急剧扩大,相关的资源非常多。 +如何在 Python 中使用既有的 C library? + 在資訊科技快速發展的今天, 開發及測試軟體的速度是不容忽視的 +課題. 為加快開發及測試的速度, 我們便常希望能利用一些已開發好的 +library, 並有一個 fast prototyping 的 programming language 可 +供使用. 目前有許許多多的 library 是以 C 寫成, 而 Python 是一個 +fast prototyping 的 programming language. 故我們希望能將既有的 +C library 拿到 Python 的環境中測試及整合. 其中最主要也是我們所 +要討論的問題就是: + diff --git a/lib-python/2.7/test/cjkencodings/gbk.txt b/lib-python/2.7/test/cjkencodings/gbk.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/gbk.txt @@ -0,0 +1,14 @@ +Python����ɭ��������һ�ֹ���ǿ������Ƶ�ͨ���ͼ��������������ԣ� +�Ѿ�����ʮ����ķ�չ��ʷ���������ȶ����������Ծ��зdz���ݶ����� +���﷨�ص㣬�ʺ���ɸ��ָ߲����񣬼������������еIJ���ϵͳ�� +���С��������Լ򵥶�ǿ���ʺϸ�����ʿѧϰʹ�á�Ŀǰ�������� +�����Ե���ؼ������ڷ��ٵķ�չ���û���������������ص���Դ�dz��ࡣ +����� Python ��ʹ�ü��е� C library? +�����YӍ�Ƽ����ٰlչ�Ľ���, �_�l���yԇܛ�w���ٶ��Dz��ݺ�ҕ�� +�n�}. ��ӿ��_�l���yԇ���ٶ�, �҂��㳣ϣ��������һЩ���_�l�õ� +library, �K��һ�� fast prototyping �� programming language �� +��ʹ��. Ŀǰ���S�S���� library ���� C ����, �� Python ��һ�� +fast prototyping �� programming language. ���҂�ϣ���܌����е� +C library �õ� Python �ĭh���Мyԇ������. ��������ҪҲ���҂��� +ҪӑՓ�Ć��}����: + diff --git a/lib-python/2.7/test/cjkencodings/hz-utf8.txt b/lib-python/2.7/test/cjkencodings/hz-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/hz-utf8.txt @@ -0,0 +1,2 @@ +This sentence is in ASCII. +The next sentence is in GB.己所不欲,勿施於人。Bye. diff --git a/lib-python/2.7/test/cjkencodings/hz.txt b/lib-python/2.7/test/cjkencodings/hz.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/hz.txt @@ -0,0 +1,2 @@ +This sentence is in ASCII. +The next sentence is in GB.~{<:Ky2;S{#,NpJ)l6HK!#~}Bye. diff --git a/lib-python/2.7/test/cjkencodings/johab-utf8.txt b/lib-python/2.7/test/cjkencodings/johab-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/johab-utf8.txt @@ -0,0 +1,9 @@ +똠방각하 펲시콜라 + +㉯㉯납!! 因九月패믤릔궈 ⓡⓖ훀¿¿¿ 긍뒙 ⓔ뎨 ㉯. . +亞영ⓔ능횹 . . . . 서울뤄 뎐학乙 家훀 ! ! !ㅠ.ㅠ +흐흐흐 ㄱㄱㄱ☆ㅠ_ㅠ 어릨 탸콰긐 뎌응 칑九들乙 ㉯드긐 +설릌 家훀 . . . . 굴애쉌 ⓔ궈 ⓡ릘㉱긐 因仁川女中까즼 +와쒀훀 ! ! 亞영ⓔ 家능궈 ☆上관 없능궈능 亞능뒈훀 글애듴 +ⓡ려듀九 싀풔숴훀 어릨 因仁川女中싁⑨들앜!! 
㉯㉯납♡ ⌒⌒* + diff --git a/lib-python/2.7/test/cjkencodings/johab.txt b/lib-python/2.7/test/cjkencodings/johab.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/johab.txt @@ -0,0 +1,9 @@ +���w�b�a �\��ũ�a + +�����s!! �g��Ú������ �����zٯٯٯ �w�� �ѕ� ��. . +�<�w�ѓw�s . . . . �ᶉ�� �e�b�� �;�z ! ! !�A.�A +�a�a�a �A�A�A�i�A_�A �៚ ȡ���z �a�w ×✗i�� ���a�z +��z �;�z . . . . ������ �ъ� �ޟ��‹z �g�b�I����a�� +�����z ! ! �<�w�� �;�w�� �i꾉� ���w���w �<�w���z �i���z +�ޝa�A� ��Ρ���z �៚ �g�b�I���鯂��i�z!! �����sٽ �b�b* + diff --git a/lib-python/2.7/test/cjkencodings/shift_jis-utf8.txt b/lib-python/2.7/test/cjkencodings/shift_jis-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/shift_jis-utf8.txt @@ -0,0 +1,7 @@ +Python の開発は、1990 年ごろから開始されています。 +開発者の Guido van Rossum は教育用のプログラミング言語「ABC」の開発に参加していましたが、ABC は実用上の目的にはあまり適していませんでした。 +このため、Guido はより実用的なプログラミング言語の開発を開始し、英国 BBS 放送のコメディ番組「モンティ パイソン」のファンである Guido はこの言語を「Python」と名づけました。 +このような背景から生まれた Python の言語設計は、「シンプル」で「習得が容易」という目標に重点が置かれています。 +多くのスクリプト系言語ではユーザの目先の利便性を優先して色々な機能を言語要素として取り入れる場合が多いのですが、Python ではそういった小細工が追加されることはあまりありません。 +言語自体の機能は最小限に押さえ、必要な機能は拡張モジュールとして追加する、というのが Python のポリシーです。 + diff --git a/lib-python/2.7/test/cjkencodings/shift_jis.txt b/lib-python/2.7/test/cjkencodings/shift_jis.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/shift_jis.txt @@ -0,0 +1,7 @@ +Python �̊J���́A1990 �N���납��J�n����Ă��܂��B +�J���҂� Guido van Rossum �͋���p�̃v���O���~���O����uABC�v�̊J���ɎQ�����Ă��܂������AABC �͎��p��̖ړI�ɂ͂��܂�K���Ă��܂���ł����B +���̂��߁AGuido �͂����p�I�ȃv���O���~���O����̊J�����J�n���A�p�� BBS �����̃R���f�B�ԑg�u�����e�B �p�C�\���v�̃t�@���ł��� Guido �͂��̌�����uPython�v�Ɩ��Â��܂����B +���̂悤�Ȕw�i���琶�܂ꂽ Python �̌���݌v�́A�u�V���v���v�Łu�K�����e�Ձv�Ƃ����ڕW�ɏd�_���u����Ă��܂��B +�����̃X�N���v�g�n����ł̓��[�U�̖ڐ�̗��֐���D�悵�ĐF�X�ȋ@�\������v�f�Ƃ��Ď������ꍇ�������̂ł����APython �ł͂������������׍H���lj�����邱�Ƃ͂��܂肠��܂���B +���ꎩ�̂̋@�\�͍ŏ����ɉ������A�K�v�ȋ@�\�͊g�����W���[���Ƃ��Ēlj�����A�Ƃ����̂� Python �̃|���V�[�ł��B + diff --git a/lib-python/2.7/test/cjkencodings/shift_jisx0213-utf8.txt b/lib-python/2.7/test/cjkencodings/shift_jisx0213-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/shift_jisx0213-utf8.txt @@ -0,0 +1,8 @@ +Python の開発は、1990 年ごろから開始されています。 +開発者の Guido van Rossum は教育用のプログラミング言語「ABC」の開発に参加していましたが、ABC は実用上の目的にはあまり適していませんでした。 +このため、Guido はより実用的なプログラミング言語の開発を開始し、英国 BBS 放送のコメディ番組「モンティ パイソン」のファンである Guido はこの言語を「Python」と名づけました。 +このような背景から生まれた Python の言語設計は、「シンプル」で「習得が容易」という目標に重点が置かれています。 +多くのスクリプト系言語ではユーザの目先の利便性を優先して色々な機能を言語要素として取り入れる場合が多いのですが、Python ではそういった小細工が追加されることはあまりありません。 +言語自体の機能は最小限に押さえ、必要な機能は拡張モジュールとして追加する、というのが Python のポリシーです。 + +ノか゚ ト゚ トキ喝塀 𡚴𪎌 麀齁𩛰 diff --git a/lib-python/2.7/test/cjkencodings/shift_jisx0213.txt b/lib-python/2.7/test/cjkencodings/shift_jisx0213.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/shift_jisx0213.txt @@ -0,0 +1,8 @@ +Python �̊J���́A1990 �N���납��J�n����Ă��܂��B +�J���҂� Guido van Rossum �͋���p�̃v���O���~���O����uABC�v�̊J���ɎQ�����Ă��܂������AABC �͎��p��̖ړI�ɂ͂��܂�K���Ă��܂���ł����B +���̂��߁AGuido �͂����p�I�ȃv���O���~���O����̊J�����J�n���A�p�� BBS �����̃R���f�B�ԑg�u�����e�B �p�C�\���v�̃t�@���ł��� Guido �͂��̌�����uPython�v�Ɩ��Â��܂����B +���̂悤�Ȕw�i���琶�܂ꂽ Python �̌���݌v�́A�u�V���v���v�Łu�K�����e�Ձv�Ƃ����ڕW�ɏd�_���u����Ă��܂��B +�����̃X�N���v�g�n����ł̓��[�U�̖ڐ�̗��֐���D�悵�ĐF�X�ȋ@�\������v�f�Ƃ��Ď������ꍇ�������̂ł����APython �ł͂������������׍H���lj�����邱�Ƃ͂��܂肠��܂���B 
+���ꎩ�̂̋@�\�͍ŏ����ɉ������A�K�v�ȋ@�\�͊g�����W���[���Ƃ��Ēlj�����A�Ƃ����̂� Python �̃|���V�[�ł��B + +�m�� �� �g�L�K�y ���� ������ diff --git a/lib-python/2.7/test/cjkencodings_test.py b/lib-python/2.7/test/cjkencodings_test.py deleted file mode 100644 --- a/lib-python/2.7/test/cjkencodings_test.py +++ /dev/null @@ -1,1019 +0,0 @@ -teststring = { -'big5': ( -"\xa6\x70\xa6\xf3\xa6\x62\x20\x50\x79\x74\x68\x6f\x6e\x20\xa4\xa4" -"\xa8\xcf\xa5\xce\xac\x4a\xa6\xb3\xaa\xba\x20\x43\x20\x6c\x69\x62" -"\x72\x61\x72\x79\x3f\x0a\xa1\x40\xa6\x62\xb8\xea\xb0\x54\xac\xec" -"\xa7\xde\xa7\xd6\xb3\x74\xb5\x6f\xae\x69\xaa\xba\xa4\xb5\xa4\xd1" -"\x2c\x20\xb6\x7d\xb5\x6f\xa4\xce\xb4\xfa\xb8\xd5\xb3\x6e\xc5\xe9" -"\xaa\xba\xb3\x74\xab\xd7\xac\x4f\xa4\xa3\xae\x65\xa9\xbf\xb5\xf8" -"\xaa\xba\x0a\xbd\xd2\xc3\x44\x2e\x20\xac\xb0\xa5\x5b\xa7\xd6\xb6" -"\x7d\xb5\x6f\xa4\xce\xb4\xfa\xb8\xd5\xaa\xba\xb3\x74\xab\xd7\x2c" -"\x20\xa7\xda\xad\xcc\xab\x4b\xb1\x60\xa7\xc6\xb1\xe6\xaf\xe0\xa7" -"\x51\xa5\xce\xa4\x40\xa8\xc7\xa4\x77\xb6\x7d\xb5\x6f\xa6\x6e\xaa" -"\xba\x0a\x6c\x69\x62\x72\x61\x72\x79\x2c\x20\xa8\xc3\xa6\xb3\xa4" -"\x40\xad\xd3\x20\x66\x61\x73\x74\x20\x70\x72\x6f\x74\x6f\x74\x79" -"\x70\x69\x6e\x67\x20\xaa\xba\x20\x70\x72\x6f\x67\x72\x61\x6d\x6d" -"\x69\x6e\x67\x20\x6c\x61\x6e\x67\x75\x61\x67\x65\x20\xa5\x69\x0a" -"\xa8\xd1\xa8\xcf\xa5\xce\x2e\x20\xa5\xd8\xab\x65\xa6\xb3\xb3\x5c" -"\xb3\x5c\xa6\x68\xa6\x68\xaa\xba\x20\x6c\x69\x62\x72\x61\x72\x79" -"\x20\xac\x4f\xa5\x48\x20\x43\x20\xbc\x67\xa6\xa8\x2c\x20\xa6\xd3" -"\x20\x50\x79\x74\x68\x6f\x6e\x20\xac\x4f\xa4\x40\xad\xd3\x0a\x66" -"\x61\x73\x74\x20\x70\x72\x6f\x74\x6f\x74\x79\x70\x69\x6e\x67\x20" -"\xaa\xba\x20\x70\x72\x6f\x67\x72\x61\x6d\x6d\x69\x6e\x67\x20\x6c" -"\x61\x6e\x67\x75\x61\x67\x65\x2e\x20\xac\x47\xa7\xda\xad\xcc\xa7" -"\xc6\xb1\xe6\xaf\xe0\xb1\x4e\xac\x4a\xa6\xb3\xaa\xba\x0a\x43\x20" -"\x6c\x69\x62\x72\x61\x72\x79\x20\xae\xb3\xa8\xec\x20\x50\x79\x74" -"\x68\x6f\x6e\x20\xaa\xba\xc0\xf4\xb9\xd2\xa4\xa4\xb4\xfa\xb8\xd5" -"\xa4\xce\xbe\xe3\xa6\x58\x2e\x20\xa8\xe4\xa4\xa4\xb3\xcc\xa5\x44" -"\xad\x6e\xa4\x5d\xac\x4f\xa7\xda\xad\xcc\xa9\xd2\x0a\xad\x6e\xb0" -"\x51\xbd\xd7\xaa\xba\xb0\xdd\xc3\x44\xb4\x4e\xac\x4f\x3a\x0a\x0a", -"\xe5\xa6\x82\xe4\xbd\x95\xe5\x9c\xa8\x20\x50\x79\x74\x68\x6f\x6e" -"\x20\xe4\xb8\xad\xe4\xbd\xbf\xe7\x94\xa8\xe6\x97\xa2\xe6\x9c\x89" -"\xe7\x9a\x84\x20\x43\x20\x6c\x69\x62\x72\x61\x72\x79\x3f\x0a\xe3" -"\x80\x80\xe5\x9c\xa8\xe8\xb3\x87\xe8\xa8\x8a\xe7\xa7\x91\xe6\x8a" -"\x80\xe5\xbf\xab\xe9\x80\x9f\xe7\x99\xbc\xe5\xb1\x95\xe7\x9a\x84" -"\xe4\xbb\x8a\xe5\xa4\xa9\x2c\x20\xe9\x96\x8b\xe7\x99\xbc\xe5\x8f" -"\x8a\xe6\xb8\xac\xe8\xa9\xa6\xe8\xbb\x9f\xe9\xab\x94\xe7\x9a\x84" -"\xe9\x80\x9f\xe5\xba\xa6\xe6\x98\xaf\xe4\xb8\x8d\xe5\xae\xb9\xe5" -"\xbf\xbd\xe8\xa6\x96\xe7\x9a\x84\x0a\xe8\xaa\xb2\xe9\xa1\x8c\x2e" -"\x20\xe7\x82\xba\xe5\x8a\xa0\xe5\xbf\xab\xe9\x96\x8b\xe7\x99\xbc" -"\xe5\x8f\x8a\xe6\xb8\xac\xe8\xa9\xa6\xe7\x9a\x84\xe9\x80\x9f\xe5" -"\xba\xa6\x2c\x20\xe6\x88\x91\xe5\x80\x91\xe4\xbe\xbf\xe5\xb8\xb8" -"\xe5\xb8\x8c\xe6\x9c\x9b\xe8\x83\xbd\xe5\x88\xa9\xe7\x94\xa8\xe4" -"\xb8\x80\xe4\xba\x9b\xe5\xb7\xb2\xe9\x96\x8b\xe7\x99\xbc\xe5\xa5" -"\xbd\xe7\x9a\x84\x0a\x6c\x69\x62\x72\x61\x72\x79\x2c\x20\xe4\xb8" -"\xa6\xe6\x9c\x89\xe4\xb8\x80\xe5\x80\x8b\x20\x66\x61\x73\x74\x20" -"\x70\x72\x6f\x74\x6f\x74\x79\x70\x69\x6e\x67\x20\xe7\x9a\x84\x20" -"\x70\x72\x6f\x67\x72\x61\x6d\x6d\x69\x6e\x67\x20\x6c\x61\x6e\x67" -"\x75\x61\x67\x65\x20\xe5\x8f\xaf\x0a\xe4\xbe\x9b\xe4\xbd\xbf\xe7" -"\x94\xa8\x2e\x20\xe7\x9b\xae\xe5\x89\x8d\xe6\x9c\x89\xe8\xa8\xb1" 
-"\xe8\xa8\xb1\xe5\xa4\x9a\xe5\xa4\x9a\xe7\x9a\x84\x20\x6c\x69\x62" -"\x72\x61\x72\x79\x20\xe6\x98\xaf\xe4\xbb\xa5\x20\x43\x20\xe5\xaf" -"\xab\xe6\x88\x90\x2c\x20\xe8\x80\x8c\x20\x50\x79\x74\x68\x6f\x6e" -"\x20\xe6\x98\xaf\xe4\xb8\x80\xe5\x80\x8b\x0a\x66\x61\x73\x74\x20" -"\x70\x72\x6f\x74\x6f\x74\x79\x70\x69\x6e\x67\x20\xe7\x9a\x84\x20" -"\x70\x72\x6f\x67\x72\x61\x6d\x6d\x69\x6e\x67\x20\x6c\x61\x6e\x67" -"\x75\x61\x67\x65\x2e\x20\xe6\x95\x85\xe6\x88\x91\xe5\x80\x91\xe5" -"\xb8\x8c\xe6\x9c\x9b\xe8\x83\xbd\xe5\xb0\x87\xe6\x97\xa2\xe6\x9c" -"\x89\xe7\x9a\x84\x0a\x43\x20\x6c\x69\x62\x72\x61\x72\x79\x20\xe6" -"\x8b\xbf\xe5\x88\xb0\x20\x50\x79\x74\x68\x6f\x6e\x20\xe7\x9a\x84" -"\xe7\x92\xb0\xe5\xa2\x83\xe4\xb8\xad\xe6\xb8\xac\xe8\xa9\xa6\xe5" -"\x8f\x8a\xe6\x95\xb4\xe5\x90\x88\x2e\x20\xe5\x85\xb6\xe4\xb8\xad" -"\xe6\x9c\x80\xe4\xb8\xbb\xe8\xa6\x81\xe4\xb9\x9f\xe6\x98\xaf\xe6" -"\x88\x91\xe5\x80\x91\xe6\x89\x80\x0a\xe8\xa6\x81\xe8\xa8\x8e\xe8" -"\xab\x96\xe7\x9a\x84\xe5\x95\x8f\xe9\xa1\x8c\xe5\xb0\xb1\xe6\x98" -"\xaf\x3a\x0a\x0a"), -'big5hkscs': ( -"\x88\x45\x88\x5c\x8a\x73\x8b\xda\x8d\xd8\x0a\x88\x66\x88\x62\x88" -"\xa7\x20\x88\xa7\x88\xa3\x0a", -"\xf0\xa0\x84\x8c\xc4\x9a\xe9\xb5\xae\xe7\xbd\x93\xe6\xb4\x86\x0a" -"\xc3\x8a\xc3\x8a\xcc\x84\xc3\xaa\x20\xc3\xaa\xc3\xaa\xcc\x84\x0a"), -'cp949': ( -"\x8c\x63\xb9\xe6\xb0\xa2\xc7\xcf\x20\xbc\x84\xbd\xc3\xc4\xdd\xb6" -"\xf3\x0a\x0a\xa8\xc0\xa8\xc0\xb3\xb3\x21\x21\x20\xec\xd7\xce\xfa" -"\xea\xc5\xc6\xd0\x92\xe6\x90\x70\xb1\xc5\x20\xa8\xde\xa8\xd3\xc4" -"\x52\xa2\xaf\xa2\xaf\xa2\xaf\x20\xb1\xe0\x8a\x96\x20\xa8\xd1\xb5" -"\xb3\x20\xa8\xc0\x2e\x20\x2e\x0a\xe4\xac\xbf\xb5\xa8\xd1\xb4\xc9" -"\xc8\xc2\x20\x2e\x20\x2e\x20\x2e\x20\x2e\x20\xbc\xad\xbf\xef\xb7" -"\xef\x20\xb5\xaf\xc7\xd0\xeb\xe0\x20\xca\xab\xc4\x52\x20\x21\x20" -"\x21\x20\x21\xa4\xd0\x2e\xa4\xd0\x0a\xc8\xe5\xc8\xe5\xc8\xe5\x20" -"\xa4\xa1\xa4\xa1\xa4\xa1\xa1\xd9\xa4\xd0\x5f\xa4\xd0\x20\xbe\xee" -"\x90\x8a\x20\xc5\xcb\xc4\xe2\x83\x4f\x20\xb5\xae\xc0\xc0\x20\xaf" -"\x68\xce\xfa\xb5\xe9\xeb\xe0\x20\xa8\xc0\xb5\xe5\x83\x4f\x0a\xbc" -"\xb3\x90\x6a\x20\xca\xab\xc4\x52\x20\x2e\x20\x2e\x20\x2e\x20\x2e" -"\x20\xb1\xbc\xbe\xd6\x9a\x66\x20\xa8\xd1\xb1\xc5\x20\xa8\xde\x90" -"\x74\xa8\xc2\x83\x4f\x20\xec\xd7\xec\xd2\xf4\xb9\xe5\xfc\xf1\xe9" -"\xb1\xee\xa3\x8e\x0a\xbf\xcd\xbe\xac\xc4\x52\x20\x21\x20\x21\x20" -"\xe4\xac\xbf\xb5\xa8\xd1\x20\xca\xab\xb4\xc9\xb1\xc5\x20\xa1\xd9" -"\xdf\xbe\xb0\xfc\x20\xbe\xf8\xb4\xc9\xb1\xc5\xb4\xc9\x20\xe4\xac" -"\xb4\xc9\xb5\xd8\xc4\x52\x20\xb1\xdb\xbe\xd6\x8a\xdb\x0a\xa8\xde" -"\xb7\xc1\xb5\xe0\xce\xfa\x20\x9a\xc3\xc7\xb4\xbd\xa4\xc4\x52\x20" -"\xbe\xee\x90\x8a\x20\xec\xd7\xec\xd2\xf4\xb9\xe5\xfc\xf1\xe9\x9a" -"\xc4\xa8\xef\xb5\xe9\x9d\xda\x21\x21\x20\xa8\xc0\xa8\xc0\xb3\xb3" -"\xa2\xbd\x20\xa1\xd2\xa1\xd2\x2a\x0a\x0a", -"\xeb\x98\xa0\xeb\xb0\xa9\xea\xb0\x81\xed\x95\x98\x20\xed\x8e\xb2" -"\xec\x8b\x9c\xec\xbd\x9c\xeb\x9d\xbc\x0a\x0a\xe3\x89\xaf\xe3\x89" -"\xaf\xeb\x82\xa9\x21\x21\x20\xe5\x9b\xa0\xe4\xb9\x9d\xe6\x9c\x88" -"\xed\x8c\xa8\xeb\xaf\xa4\xeb\xa6\x94\xea\xb6\x88\x20\xe2\x93\xa1" -"\xe2\x93\x96\xed\x9b\x80\xc2\xbf\xc2\xbf\xc2\xbf\x20\xea\xb8\x8d" -"\xeb\x92\x99\x20\xe2\x93\x94\xeb\x8e\xa8\x20\xe3\x89\xaf\x2e\x20" -"\x2e\x0a\xe4\xba\x9e\xec\x98\x81\xe2\x93\x94\xeb\x8a\xa5\xed\x9a" -"\xb9\x20\x2e\x20\x2e\x20\x2e\x20\x2e\x20\xec\x84\x9c\xec\x9a\xb8" -"\xeb\xa4\x84\x20\xeb\x8e\x90\xed\x95\x99\xe4\xb9\x99\x20\xe5\xae" -"\xb6\xed\x9b\x80\x20\x21\x20\x21\x20\x21\xe3\x85\xa0\x2e\xe3\x85" -"\xa0\x0a\xed\x9d\x90\xed\x9d\x90\xed\x9d\x90\x20\xe3\x84\xb1\xe3" 
-"\x84\xb1\xe3\x84\xb1\xe2\x98\x86\xe3\x85\xa0\x5f\xe3\x85\xa0\x20" -"\xec\x96\xb4\xeb\xa6\xa8\x20\xed\x83\xb8\xec\xbd\xb0\xea\xb8\x90" -"\x20\xeb\x8e\x8c\xec\x9d\x91\x20\xec\xb9\x91\xe4\xb9\x9d\xeb\x93" -"\xa4\xe4\xb9\x99\x20\xe3\x89\xaf\xeb\x93\x9c\xea\xb8\x90\x0a\xec" -"\x84\xa4\xeb\xa6\x8c\x20\xe5\xae\xb6\xed\x9b\x80\x20\x2e\x20\x2e" -"\x20\x2e\x20\x2e\x20\xea\xb5\xb4\xec\x95\xa0\xec\x89\x8c\x20\xe2" -"\x93\x94\xea\xb6\x88\x20\xe2\x93\xa1\xeb\xa6\x98\xe3\x89\xb1\xea" -"\xb8\x90\x20\xe5\x9b\xa0\xe4\xbb\x81\xe5\xb7\x9d\xef\xa6\x81\xe4" -"\xb8\xad\xea\xb9\x8c\xec\xa6\xbc\x0a\xec\x99\x80\xec\x92\x80\xed" -"\x9b\x80\x20\x21\x20\x21\x20\xe4\xba\x9e\xec\x98\x81\xe2\x93\x94" -"\x20\xe5\xae\xb6\xeb\x8a\xa5\xea\xb6\x88\x20\xe2\x98\x86\xe4\xb8" -"\x8a\xea\xb4\x80\x20\xec\x97\x86\xeb\x8a\xa5\xea\xb6\x88\xeb\x8a" -"\xa5\x20\xe4\xba\x9e\xeb\x8a\xa5\xeb\x92\x88\xed\x9b\x80\x20\xea" -"\xb8\x80\xec\x95\xa0\xeb\x93\xb4\x0a\xe2\x93\xa1\xeb\xa0\xa4\xeb" -"\x93\x80\xe4\xb9\x9d\x20\xec\x8b\x80\xed\x92\x94\xec\x88\xb4\xed" -"\x9b\x80\x20\xec\x96\xb4\xeb\xa6\xa8\x20\xe5\x9b\xa0\xe4\xbb\x81" -"\xe5\xb7\x9d\xef\xa6\x81\xe4\xb8\xad\xec\x8b\x81\xe2\x91\xa8\xeb" -"\x93\xa4\xec\x95\x9c\x21\x21\x20\xe3\x89\xaf\xe3\x89\xaf\xeb\x82" -"\xa9\xe2\x99\xa1\x20\xe2\x8c\x92\xe2\x8c\x92\x2a\x0a\x0a"), -'euc_jisx0213': ( -"\x50\x79\x74\x68\x6f\x6e\x20\xa4\xce\xb3\xab\xc8\xaf\xa4\xcf\xa1" -"\xa2\x31\x39\x39\x30\x20\xc7\xaf\xa4\xb4\xa4\xed\xa4\xab\xa4\xe9" -"\xb3\xab\xbb\xcf\xa4\xb5\xa4\xec\xa4\xc6\xa4\xa4\xa4\xde\xa4\xb9" -"\xa1\xa3\x0a\xb3\xab\xc8\xaf\xbc\xd4\xa4\xce\x20\x47\x75\x69\x64" -"\x6f\x20\x76\x61\x6e\x20\x52\x6f\x73\x73\x75\x6d\x20\xa4\xcf\xb6" -"\xb5\xb0\xe9\xcd\xd1\xa4\xce\xa5\xd7\xa5\xed\xa5\xb0\xa5\xe9\xa5" -"\xdf\xa5\xf3\xa5\xb0\xb8\xc0\xb8\xec\xa1\xd6\x41\x42\x43\xa1\xd7" -"\xa4\xce\xb3\xab\xc8\xaf\xa4\xcb\xbb\xb2\xb2\xc3\xa4\xb7\xa4\xc6" -"\xa4\xa4\xa4\xde\xa4\xb7\xa4\xbf\xa4\xac\xa1\xa2\x41\x42\x43\x20" -"\xa4\xcf\xbc\xc2\xcd\xd1\xbe\xe5\xa4\xce\xcc\xdc\xc5\xaa\xa4\xcb" -"\xa4\xcf\xa4\xa2\xa4\xde\xa4\xea\xc5\xac\xa4\xb7\xa4\xc6\xa4\xa4" -"\xa4\xde\xa4\xbb\xa4\xf3\xa4\xc7\xa4\xb7\xa4\xbf\xa1\xa3\x0a\xa4" -"\xb3\xa4\xce\xa4\xbf\xa4\xe1\xa1\xa2\x47\x75\x69\x64\x6f\x20\xa4" -"\xcf\xa4\xe8\xa4\xea\xbc\xc2\xcd\xd1\xc5\xaa\xa4\xca\xa5\xd7\xa5" -"\xed\xa5\xb0\xa5\xe9\xa5\xdf\xa5\xf3\xa5\xb0\xb8\xc0\xb8\xec\xa4" -"\xce\xb3\xab\xc8\xaf\xa4\xf2\xb3\xab\xbb\xcf\xa4\xb7\xa1\xa2\xb1" -"\xd1\xb9\xf1\x20\x42\x42\x53\x20\xca\xfc\xc1\xf7\xa4\xce\xa5\xb3" -"\xa5\xe1\xa5\xc7\xa5\xa3\xc8\xd6\xc1\xc8\xa1\xd6\xa5\xe2\xa5\xf3" -"\xa5\xc6\xa5\xa3\x20\xa5\xd1\xa5\xa4\xa5\xbd\xa5\xf3\xa1\xd7\xa4" -"\xce\xa5\xd5\xa5\xa1\xa5\xf3\xa4\xc7\xa4\xa2\xa4\xeb\x20\x47\x75" -"\x69\x64\x6f\x20\xa4\xcf\xa4\xb3\xa4\xce\xb8\xc0\xb8\xec\xa4\xf2" -"\xa1\xd6\x50\x79\x74\x68\x6f\x6e\xa1\xd7\xa4\xc8\xcc\xbe\xa4\xc5" -"\xa4\xb1\xa4\xde\xa4\xb7\xa4\xbf\xa1\xa3\x0a\xa4\xb3\xa4\xce\xa4" -"\xe8\xa4\xa6\xa4\xca\xc7\xd8\xb7\xca\xa4\xab\xa4\xe9\xc0\xb8\xa4" -"\xde\xa4\xec\xa4\xbf\x20\x50\x79\x74\x68\x6f\x6e\x20\xa4\xce\xb8" -"\xc0\xb8\xec\xc0\xdf\xb7\xd7\xa4\xcf\xa1\xa2\xa1\xd6\xa5\xb7\xa5" -"\xf3\xa5\xd7\xa5\xeb\xa1\xd7\xa4\xc7\xa1\xd6\xbd\xac\xc6\xc0\xa4" -"\xac\xcd\xc6\xb0\xd7\xa1\xd7\xa4\xc8\xa4\xa4\xa4\xa6\xcc\xdc\xc9" -"\xb8\xa4\xcb\xbd\xc5\xc5\xc0\xa4\xac\xc3\xd6\xa4\xab\xa4\xec\xa4" -"\xc6\xa4\xa4\xa4\xde\xa4\xb9\xa1\xa3\x0a\xc2\xbf\xa4\xaf\xa4\xce" -"\xa5\xb9\xa5\xaf\xa5\xea\xa5\xd7\xa5\xc8\xb7\xcf\xb8\xc0\xb8\xec" -"\xa4\xc7\xa4\xcf\xa5\xe6\xa1\xbc\xa5\xb6\xa4\xce\xcc\xdc\xc0\xe8" -"\xa4\xce\xcd\xf8\xca\xd8\xc0\xad\xa4\xf2\xcd\xa5\xc0\xe8\xa4\xb7" 
-"\xa4\xc6\xbf\xa7\xa1\xb9\xa4\xca\xb5\xa1\xc7\xbd\xa4\xf2\xb8\xc0" -"\xb8\xec\xcd\xd7\xc1\xc7\xa4\xc8\xa4\xb7\xa4\xc6\xbc\xe8\xa4\xea" -"\xc6\xfe\xa4\xec\xa4\xeb\xbe\xec\xb9\xe7\xa4\xac\xc2\xbf\xa4\xa4" -"\xa4\xce\xa4\xc7\xa4\xb9\xa4\xac\xa1\xa2\x50\x79\x74\x68\x6f\x6e" -"\x20\xa4\xc7\xa4\xcf\xa4\xbd\xa4\xa6\xa4\xa4\xa4\xc3\xa4\xbf\xbe" -"\xae\xba\xd9\xb9\xa9\xa4\xac\xc4\xc9\xb2\xc3\xa4\xb5\xa4\xec\xa4" -"\xeb\xa4\xb3\xa4\xc8\xa4\xcf\xa4\xa2\xa4\xde\xa4\xea\xa4\xa2\xa4" -"\xea\xa4\xde\xa4\xbb\xa4\xf3\xa1\xa3\x0a\xb8\xc0\xb8\xec\xbc\xab" -"\xc2\xce\xa4\xce\xb5\xa1\xc7\xbd\xa4\xcf\xba\xc7\xbe\xae\xb8\xc2" -"\xa4\xcb\xb2\xa1\xa4\xb5\xa4\xa8\xa1\xa2\xc9\xac\xcd\xd7\xa4\xca" -"\xb5\xa1\xc7\xbd\xa4\xcf\xb3\xc8\xc4\xa5\xa5\xe2\xa5\xb8\xa5\xe5" -"\xa1\xbc\xa5\xeb\xa4\xc8\xa4\xb7\xa4\xc6\xc4\xc9\xb2\xc3\xa4\xb9" -"\xa4\xeb\xa1\xa2\xa4\xc8\xa4\xa4\xa4\xa6\xa4\xce\xa4\xac\x20\x50" -"\x79\x74\x68\x6f\x6e\x20\xa4\xce\xa5\xdd\xa5\xea\xa5\xb7\xa1\xbc" -"\xa4\xc7\xa4\xb9\xa1\xa3\x0a\x0a\xa5\xce\xa4\xf7\x20\xa5\xfe\x20" -"\xa5\xc8\xa5\xad\xaf\xac\xaf\xda\x20\xcf\xe3\x8f\xfe\xd8\x20\x8f" -"\xfe\xd4\x8f\xfe\xe8\x8f\xfc\xd6\x0a", -"\x50\x79\x74\x68\x6f\x6e\x20\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba" -"\xe3\x81\xaf\xe3\x80\x81\x31\x39\x39\x30\x20\xe5\xb9\xb4\xe3\x81" -"\x94\xe3\x82\x8d\xe3\x81\x8b\xe3\x82\x89\xe9\x96\x8b\xe5\xa7\x8b" -"\xe3\x81\x95\xe3\x82\x8c\xe3\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3" -"\x81\x99\xe3\x80\x82\x0a\xe9\x96\x8b\xe7\x99\xba\xe8\x80\x85\xe3" -"\x81\xae\x20\x47\x75\x69\x64\x6f\x20\x76\x61\x6e\x20\x52\x6f\x73" -"\x73\x75\x6d\x20\xe3\x81\xaf\xe6\x95\x99\xe8\x82\xb2\xe7\x94\xa8" -"\xe3\x81\xae\xe3\x83\x97\xe3\x83\xad\xe3\x82\xb0\xe3\x83\xa9\xe3" -"\x83\x9f\xe3\x83\xb3\xe3\x82\xb0\xe8\xa8\x80\xe8\xaa\x9e\xe3\x80" -"\x8c\x41\x42\x43\xe3\x80\x8d\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba" -"\xe3\x81\xab\xe5\x8f\x82\xe5\x8a\xa0\xe3\x81\x97\xe3\x81\xa6\xe3" -"\x81\x84\xe3\x81\xbe\xe3\x81\x97\xe3\x81\x9f\xe3\x81\x8c\xe3\x80" -"\x81\x41\x42\x43\x20\xe3\x81\xaf\xe5\xae\x9f\xe7\x94\xa8\xe4\xb8" -"\x8a\xe3\x81\xae\xe7\x9b\xae\xe7\x9a\x84\xe3\x81\xab\xe3\x81\xaf" -"\xe3\x81\x82\xe3\x81\xbe\xe3\x82\x8a\xe9\x81\xa9\xe3\x81\x97\xe3" -"\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3\x81\x9b\xe3\x82\x93\xe3\x81" -"\xa7\xe3\x81\x97\xe3\x81\x9f\xe3\x80\x82\x0a\xe3\x81\x93\xe3\x81" -"\xae\xe3\x81\x9f\xe3\x82\x81\xe3\x80\x81\x47\x75\x69\x64\x6f\x20" -"\xe3\x81\xaf\xe3\x82\x88\xe3\x82\x8a\xe5\xae\x9f\xe7\x94\xa8\xe7" -"\x9a\x84\xe3\x81\xaa\xe3\x83\x97\xe3\x83\xad\xe3\x82\xb0\xe3\x83" -"\xa9\xe3\x83\x9f\xe3\x83\xb3\xe3\x82\xb0\xe8\xa8\x80\xe8\xaa\x9e" -"\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba\xe3\x82\x92\xe9\x96\x8b\xe5" -"\xa7\x8b\xe3\x81\x97\xe3\x80\x81\xe8\x8b\xb1\xe5\x9b\xbd\x20\x42" -"\x42\x53\x20\xe6\x94\xbe\xe9\x80\x81\xe3\x81\xae\xe3\x82\xb3\xe3" -"\x83\xa1\xe3\x83\x87\xe3\x82\xa3\xe7\x95\xaa\xe7\xb5\x84\xe3\x80" -"\x8c\xe3\x83\xa2\xe3\x83\xb3\xe3\x83\x86\xe3\x82\xa3\x20\xe3\x83" -"\x91\xe3\x82\xa4\xe3\x82\xbd\xe3\x83\xb3\xe3\x80\x8d\xe3\x81\xae" -"\xe3\x83\x95\xe3\x82\xa1\xe3\x83\xb3\xe3\x81\xa7\xe3\x81\x82\xe3" -"\x82\x8b\x20\x47\x75\x69\x64\x6f\x20\xe3\x81\xaf\xe3\x81\x93\xe3" -"\x81\xae\xe8\xa8\x80\xe8\xaa\x9e\xe3\x82\x92\xe3\x80\x8c\x50\x79" -"\x74\x68\x6f\x6e\xe3\x80\x8d\xe3\x81\xa8\xe5\x90\x8d\xe3\x81\xa5" -"\xe3\x81\x91\xe3\x81\xbe\xe3\x81\x97\xe3\x81\x9f\xe3\x80\x82\x0a" -"\xe3\x81\x93\xe3\x81\xae\xe3\x82\x88\xe3\x81\x86\xe3\x81\xaa\xe8" -"\x83\x8c\xe6\x99\xaf\xe3\x81\x8b\xe3\x82\x89\xe7\x94\x9f\xe3\x81" -"\xbe\xe3\x82\x8c\xe3\x81\x9f\x20\x50\x79\x74\x68\x6f\x6e\x20\xe3" 
-"\x81\xae\xe8\xa8\x80\xe8\xaa\x9e\xe8\xa8\xad\xe8\xa8\x88\xe3\x81" -"\xaf\xe3\x80\x81\xe3\x80\x8c\xe3\x82\xb7\xe3\x83\xb3\xe3\x83\x97" -"\xe3\x83\xab\xe3\x80\x8d\xe3\x81\xa7\xe3\x80\x8c\xe7\xbf\x92\xe5" -"\xbe\x97\xe3\x81\x8c\xe5\xae\xb9\xe6\x98\x93\xe3\x80\x8d\xe3\x81" -"\xa8\xe3\x81\x84\xe3\x81\x86\xe7\x9b\xae\xe6\xa8\x99\xe3\x81\xab" -"\xe9\x87\x8d\xe7\x82\xb9\xe3\x81\x8c\xe7\xbd\xae\xe3\x81\x8b\xe3" -"\x82\x8c\xe3\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3\x81\x99\xe3\x80" -"\x82\x0a\xe5\xa4\x9a\xe3\x81\x8f\xe3\x81\xae\xe3\x82\xb9\xe3\x82" -"\xaf\xe3\x83\xaa\xe3\x83\x97\xe3\x83\x88\xe7\xb3\xbb\xe8\xa8\x80" -"\xe8\xaa\x9e\xe3\x81\xa7\xe3\x81\xaf\xe3\x83\xa6\xe3\x83\xbc\xe3" -"\x82\xb6\xe3\x81\xae\xe7\x9b\xae\xe5\x85\x88\xe3\x81\xae\xe5\x88" -"\xa9\xe4\xbe\xbf\xe6\x80\xa7\xe3\x82\x92\xe5\x84\xaa\xe5\x85\x88" -"\xe3\x81\x97\xe3\x81\xa6\xe8\x89\xb2\xe3\x80\x85\xe3\x81\xaa\xe6" -"\xa9\x9f\xe8\x83\xbd\xe3\x82\x92\xe8\xa8\x80\xe8\xaa\x9e\xe8\xa6" -"\x81\xe7\xb4\xa0\xe3\x81\xa8\xe3\x81\x97\xe3\x81\xa6\xe5\x8f\x96" -"\xe3\x82\x8a\xe5\x85\xa5\xe3\x82\x8c\xe3\x82\x8b\xe5\xa0\xb4\xe5" -"\x90\x88\xe3\x81\x8c\xe5\xa4\x9a\xe3\x81\x84\xe3\x81\xae\xe3\x81" -"\xa7\xe3\x81\x99\xe3\x81\x8c\xe3\x80\x81\x50\x79\x74\x68\x6f\x6e" -"\x20\xe3\x81\xa7\xe3\x81\xaf\xe3\x81\x9d\xe3\x81\x86\xe3\x81\x84" -"\xe3\x81\xa3\xe3\x81\x9f\xe5\xb0\x8f\xe7\xb4\xb0\xe5\xb7\xa5\xe3" -"\x81\x8c\xe8\xbf\xbd\xe5\x8a\xa0\xe3\x81\x95\xe3\x82\x8c\xe3\x82" -"\x8b\xe3\x81\x93\xe3\x81\xa8\xe3\x81\xaf\xe3\x81\x82\xe3\x81\xbe" -"\xe3\x82\x8a\xe3\x81\x82\xe3\x82\x8a\xe3\x81\xbe\xe3\x81\x9b\xe3" -"\x82\x93\xe3\x80\x82\x0a\xe8\xa8\x80\xe8\xaa\x9e\xe8\x87\xaa\xe4" -"\xbd\x93\xe3\x81\xae\xe6\xa9\x9f\xe8\x83\xbd\xe3\x81\xaf\xe6\x9c" -"\x80\xe5\xb0\x8f\xe9\x99\x90\xe3\x81\xab\xe6\x8a\xbc\xe3\x81\x95" -"\xe3\x81\x88\xe3\x80\x81\xe5\xbf\x85\xe8\xa6\x81\xe3\x81\xaa\xe6" -"\xa9\x9f\xe8\x83\xbd\xe3\x81\xaf\xe6\x8b\xa1\xe5\xbc\xb5\xe3\x83" -"\xa2\xe3\x82\xb8\xe3\x83\xa5\xe3\x83\xbc\xe3\x83\xab\xe3\x81\xa8" -"\xe3\x81\x97\xe3\x81\xa6\xe8\xbf\xbd\xe5\x8a\xa0\xe3\x81\x99\xe3" -"\x82\x8b\xe3\x80\x81\xe3\x81\xa8\xe3\x81\x84\xe3\x81\x86\xe3\x81" -"\xae\xe3\x81\x8c\x20\x50\x79\x74\x68\x6f\x6e\x20\xe3\x81\xae\xe3" -"\x83\x9d\xe3\x83\xaa\xe3\x82\xb7\xe3\x83\xbc\xe3\x81\xa7\xe3\x81" -"\x99\xe3\x80\x82\x0a\x0a\xe3\x83\x8e\xe3\x81\x8b\xe3\x82\x9a\x20" -"\xe3\x83\x88\xe3\x82\x9a\x20\xe3\x83\x88\xe3\x82\xad\xef\xa8\xb6" -"\xef\xa8\xb9\x20\xf0\xa1\x9a\xb4\xf0\xaa\x8e\x8c\x20\xe9\xba\x80" -"\xe9\xbd\x81\xf0\xa9\x9b\xb0\x0a"), -'euc_jp': ( -"\x50\x79\x74\x68\x6f\x6e\x20\xa4\xce\xb3\xab\xc8\xaf\xa4\xcf\xa1" -"\xa2\x31\x39\x39\x30\x20\xc7\xaf\xa4\xb4\xa4\xed\xa4\xab\xa4\xe9" -"\xb3\xab\xbb\xcf\xa4\xb5\xa4\xec\xa4\xc6\xa4\xa4\xa4\xde\xa4\xb9" -"\xa1\xa3\x0a\xb3\xab\xc8\xaf\xbc\xd4\xa4\xce\x20\x47\x75\x69\x64" -"\x6f\x20\x76\x61\x6e\x20\x52\x6f\x73\x73\x75\x6d\x20\xa4\xcf\xb6" -"\xb5\xb0\xe9\xcd\xd1\xa4\xce\xa5\xd7\xa5\xed\xa5\xb0\xa5\xe9\xa5" -"\xdf\xa5\xf3\xa5\xb0\xb8\xc0\xb8\xec\xa1\xd6\x41\x42\x43\xa1\xd7" -"\xa4\xce\xb3\xab\xc8\xaf\xa4\xcb\xbb\xb2\xb2\xc3\xa4\xb7\xa4\xc6" -"\xa4\xa4\xa4\xde\xa4\xb7\xa4\xbf\xa4\xac\xa1\xa2\x41\x42\x43\x20" -"\xa4\xcf\xbc\xc2\xcd\xd1\xbe\xe5\xa4\xce\xcc\xdc\xc5\xaa\xa4\xcb" -"\xa4\xcf\xa4\xa2\xa4\xde\xa4\xea\xc5\xac\xa4\xb7\xa4\xc6\xa4\xa4" -"\xa4\xde\xa4\xbb\xa4\xf3\xa4\xc7\xa4\xb7\xa4\xbf\xa1\xa3\x0a\xa4" -"\xb3\xa4\xce\xa4\xbf\xa4\xe1\xa1\xa2\x47\x75\x69\x64\x6f\x20\xa4" -"\xcf\xa4\xe8\xa4\xea\xbc\xc2\xcd\xd1\xc5\xaa\xa4\xca\xa5\xd7\xa5" -"\xed\xa5\xb0\xa5\xe9\xa5\xdf\xa5\xf3\xa5\xb0\xb8\xc0\xb8\xec\xa4" 
-"\xce\xb3\xab\xc8\xaf\xa4\xf2\xb3\xab\xbb\xcf\xa4\xb7\xa1\xa2\xb1" -"\xd1\xb9\xf1\x20\x42\x42\x53\x20\xca\xfc\xc1\xf7\xa4\xce\xa5\xb3" -"\xa5\xe1\xa5\xc7\xa5\xa3\xc8\xd6\xc1\xc8\xa1\xd6\xa5\xe2\xa5\xf3" -"\xa5\xc6\xa5\xa3\x20\xa5\xd1\xa5\xa4\xa5\xbd\xa5\xf3\xa1\xd7\xa4" -"\xce\xa5\xd5\xa5\xa1\xa5\xf3\xa4\xc7\xa4\xa2\xa4\xeb\x20\x47\x75" -"\x69\x64\x6f\x20\xa4\xcf\xa4\xb3\xa4\xce\xb8\xc0\xb8\xec\xa4\xf2" -"\xa1\xd6\x50\x79\x74\x68\x6f\x6e\xa1\xd7\xa4\xc8\xcc\xbe\xa4\xc5" -"\xa4\xb1\xa4\xde\xa4\xb7\xa4\xbf\xa1\xa3\x0a\xa4\xb3\xa4\xce\xa4" -"\xe8\xa4\xa6\xa4\xca\xc7\xd8\xb7\xca\xa4\xab\xa4\xe9\xc0\xb8\xa4" -"\xde\xa4\xec\xa4\xbf\x20\x50\x79\x74\x68\x6f\x6e\x20\xa4\xce\xb8" -"\xc0\xb8\xec\xc0\xdf\xb7\xd7\xa4\xcf\xa1\xa2\xa1\xd6\xa5\xb7\xa5" -"\xf3\xa5\xd7\xa5\xeb\xa1\xd7\xa4\xc7\xa1\xd6\xbd\xac\xc6\xc0\xa4" -"\xac\xcd\xc6\xb0\xd7\xa1\xd7\xa4\xc8\xa4\xa4\xa4\xa6\xcc\xdc\xc9" -"\xb8\xa4\xcb\xbd\xc5\xc5\xc0\xa4\xac\xc3\xd6\xa4\xab\xa4\xec\xa4" -"\xc6\xa4\xa4\xa4\xde\xa4\xb9\xa1\xa3\x0a\xc2\xbf\xa4\xaf\xa4\xce" -"\xa5\xb9\xa5\xaf\xa5\xea\xa5\xd7\xa5\xc8\xb7\xcf\xb8\xc0\xb8\xec" -"\xa4\xc7\xa4\xcf\xa5\xe6\xa1\xbc\xa5\xb6\xa4\xce\xcc\xdc\xc0\xe8" -"\xa4\xce\xcd\xf8\xca\xd8\xc0\xad\xa4\xf2\xcd\xa5\xc0\xe8\xa4\xb7" -"\xa4\xc6\xbf\xa7\xa1\xb9\xa4\xca\xb5\xa1\xc7\xbd\xa4\xf2\xb8\xc0" -"\xb8\xec\xcd\xd7\xc1\xc7\xa4\xc8\xa4\xb7\xa4\xc6\xbc\xe8\xa4\xea" -"\xc6\xfe\xa4\xec\xa4\xeb\xbe\xec\xb9\xe7\xa4\xac\xc2\xbf\xa4\xa4" -"\xa4\xce\xa4\xc7\xa4\xb9\xa4\xac\xa1\xa2\x50\x79\x74\x68\x6f\x6e" -"\x20\xa4\xc7\xa4\xcf\xa4\xbd\xa4\xa6\xa4\xa4\xa4\xc3\xa4\xbf\xbe" -"\xae\xba\xd9\xb9\xa9\xa4\xac\xc4\xc9\xb2\xc3\xa4\xb5\xa4\xec\xa4" -"\xeb\xa4\xb3\xa4\xc8\xa4\xcf\xa4\xa2\xa4\xde\xa4\xea\xa4\xa2\xa4" -"\xea\xa4\xde\xa4\xbb\xa4\xf3\xa1\xa3\x0a\xb8\xc0\xb8\xec\xbc\xab" -"\xc2\xce\xa4\xce\xb5\xa1\xc7\xbd\xa4\xcf\xba\xc7\xbe\xae\xb8\xc2" -"\xa4\xcb\xb2\xa1\xa4\xb5\xa4\xa8\xa1\xa2\xc9\xac\xcd\xd7\xa4\xca" -"\xb5\xa1\xc7\xbd\xa4\xcf\xb3\xc8\xc4\xa5\xa5\xe2\xa5\xb8\xa5\xe5" -"\xa1\xbc\xa5\xeb\xa4\xc8\xa4\xb7\xa4\xc6\xc4\xc9\xb2\xc3\xa4\xb9" -"\xa4\xeb\xa1\xa2\xa4\xc8\xa4\xa4\xa4\xa6\xa4\xce\xa4\xac\x20\x50" -"\x79\x74\x68\x6f\x6e\x20\xa4\xce\xa5\xdd\xa5\xea\xa5\xb7\xa1\xbc" -"\xa4\xc7\xa4\xb9\xa1\xa3\x0a\x0a", -"\x50\x79\x74\x68\x6f\x6e\x20\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba" -"\xe3\x81\xaf\xe3\x80\x81\x31\x39\x39\x30\x20\xe5\xb9\xb4\xe3\x81" -"\x94\xe3\x82\x8d\xe3\x81\x8b\xe3\x82\x89\xe9\x96\x8b\xe5\xa7\x8b" -"\xe3\x81\x95\xe3\x82\x8c\xe3\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3" -"\x81\x99\xe3\x80\x82\x0a\xe9\x96\x8b\xe7\x99\xba\xe8\x80\x85\xe3" -"\x81\xae\x20\x47\x75\x69\x64\x6f\x20\x76\x61\x6e\x20\x52\x6f\x73" -"\x73\x75\x6d\x20\xe3\x81\xaf\xe6\x95\x99\xe8\x82\xb2\xe7\x94\xa8" -"\xe3\x81\xae\xe3\x83\x97\xe3\x83\xad\xe3\x82\xb0\xe3\x83\xa9\xe3" -"\x83\x9f\xe3\x83\xb3\xe3\x82\xb0\xe8\xa8\x80\xe8\xaa\x9e\xe3\x80" -"\x8c\x41\x42\x43\xe3\x80\x8d\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba" -"\xe3\x81\xab\xe5\x8f\x82\xe5\x8a\xa0\xe3\x81\x97\xe3\x81\xa6\xe3" -"\x81\x84\xe3\x81\xbe\xe3\x81\x97\xe3\x81\x9f\xe3\x81\x8c\xe3\x80" -"\x81\x41\x42\x43\x20\xe3\x81\xaf\xe5\xae\x9f\xe7\x94\xa8\xe4\xb8" -"\x8a\xe3\x81\xae\xe7\x9b\xae\xe7\x9a\x84\xe3\x81\xab\xe3\x81\xaf" -"\xe3\x81\x82\xe3\x81\xbe\xe3\x82\x8a\xe9\x81\xa9\xe3\x81\x97\xe3" -"\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3\x81\x9b\xe3\x82\x93\xe3\x81" -"\xa7\xe3\x81\x97\xe3\x81\x9f\xe3\x80\x82\x0a\xe3\x81\x93\xe3\x81" -"\xae\xe3\x81\x9f\xe3\x82\x81\xe3\x80\x81\x47\x75\x69\x64\x6f\x20" -"\xe3\x81\xaf\xe3\x82\x88\xe3\x82\x8a\xe5\xae\x9f\xe7\x94\xa8\xe7" 
-"\x9a\x84\xe3\x81\xaa\xe3\x83\x97\xe3\x83\xad\xe3\x82\xb0\xe3\x83" -"\xa9\xe3\x83\x9f\xe3\x83\xb3\xe3\x82\xb0\xe8\xa8\x80\xe8\xaa\x9e" -"\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba\xe3\x82\x92\xe9\x96\x8b\xe5" -"\xa7\x8b\xe3\x81\x97\xe3\x80\x81\xe8\x8b\xb1\xe5\x9b\xbd\x20\x42" -"\x42\x53\x20\xe6\x94\xbe\xe9\x80\x81\xe3\x81\xae\xe3\x82\xb3\xe3" -"\x83\xa1\xe3\x83\x87\xe3\x82\xa3\xe7\x95\xaa\xe7\xb5\x84\xe3\x80" -"\x8c\xe3\x83\xa2\xe3\x83\xb3\xe3\x83\x86\xe3\x82\xa3\x20\xe3\x83" -"\x91\xe3\x82\xa4\xe3\x82\xbd\xe3\x83\xb3\xe3\x80\x8d\xe3\x81\xae" -"\xe3\x83\x95\xe3\x82\xa1\xe3\x83\xb3\xe3\x81\xa7\xe3\x81\x82\xe3" -"\x82\x8b\x20\x47\x75\x69\x64\x6f\x20\xe3\x81\xaf\xe3\x81\x93\xe3" -"\x81\xae\xe8\xa8\x80\xe8\xaa\x9e\xe3\x82\x92\xe3\x80\x8c\x50\x79" -"\x74\x68\x6f\x6e\xe3\x80\x8d\xe3\x81\xa8\xe5\x90\x8d\xe3\x81\xa5" -"\xe3\x81\x91\xe3\x81\xbe\xe3\x81\x97\xe3\x81\x9f\xe3\x80\x82\x0a" -"\xe3\x81\x93\xe3\x81\xae\xe3\x82\x88\xe3\x81\x86\xe3\x81\xaa\xe8" -"\x83\x8c\xe6\x99\xaf\xe3\x81\x8b\xe3\x82\x89\xe7\x94\x9f\xe3\x81" -"\xbe\xe3\x82\x8c\xe3\x81\x9f\x20\x50\x79\x74\x68\x6f\x6e\x20\xe3" -"\x81\xae\xe8\xa8\x80\xe8\xaa\x9e\xe8\xa8\xad\xe8\xa8\x88\xe3\x81" -"\xaf\xe3\x80\x81\xe3\x80\x8c\xe3\x82\xb7\xe3\x83\xb3\xe3\x83\x97" -"\xe3\x83\xab\xe3\x80\x8d\xe3\x81\xa7\xe3\x80\x8c\xe7\xbf\x92\xe5" -"\xbe\x97\xe3\x81\x8c\xe5\xae\xb9\xe6\x98\x93\xe3\x80\x8d\xe3\x81" -"\xa8\xe3\x81\x84\xe3\x81\x86\xe7\x9b\xae\xe6\xa8\x99\xe3\x81\xab" -"\xe9\x87\x8d\xe7\x82\xb9\xe3\x81\x8c\xe7\xbd\xae\xe3\x81\x8b\xe3" -"\x82\x8c\xe3\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3\x81\x99\xe3\x80" -"\x82\x0a\xe5\xa4\x9a\xe3\x81\x8f\xe3\x81\xae\xe3\x82\xb9\xe3\x82" -"\xaf\xe3\x83\xaa\xe3\x83\x97\xe3\x83\x88\xe7\xb3\xbb\xe8\xa8\x80" -"\xe8\xaa\x9e\xe3\x81\xa7\xe3\x81\xaf\xe3\x83\xa6\xe3\x83\xbc\xe3" -"\x82\xb6\xe3\x81\xae\xe7\x9b\xae\xe5\x85\x88\xe3\x81\xae\xe5\x88" -"\xa9\xe4\xbe\xbf\xe6\x80\xa7\xe3\x82\x92\xe5\x84\xaa\xe5\x85\x88" -"\xe3\x81\x97\xe3\x81\xa6\xe8\x89\xb2\xe3\x80\x85\xe3\x81\xaa\xe6" -"\xa9\x9f\xe8\x83\xbd\xe3\x82\x92\xe8\xa8\x80\xe8\xaa\x9e\xe8\xa6" -"\x81\xe7\xb4\xa0\xe3\x81\xa8\xe3\x81\x97\xe3\x81\xa6\xe5\x8f\x96" -"\xe3\x82\x8a\xe5\x85\xa5\xe3\x82\x8c\xe3\x82\x8b\xe5\xa0\xb4\xe5" -"\x90\x88\xe3\x81\x8c\xe5\xa4\x9a\xe3\x81\x84\xe3\x81\xae\xe3\x81" -"\xa7\xe3\x81\x99\xe3\x81\x8c\xe3\x80\x81\x50\x79\x74\x68\x6f\x6e" -"\x20\xe3\x81\xa7\xe3\x81\xaf\xe3\x81\x9d\xe3\x81\x86\xe3\x81\x84" -"\xe3\x81\xa3\xe3\x81\x9f\xe5\xb0\x8f\xe7\xb4\xb0\xe5\xb7\xa5\xe3" -"\x81\x8c\xe8\xbf\xbd\xe5\x8a\xa0\xe3\x81\x95\xe3\x82\x8c\xe3\x82" -"\x8b\xe3\x81\x93\xe3\x81\xa8\xe3\x81\xaf\xe3\x81\x82\xe3\x81\xbe" -"\xe3\x82\x8a\xe3\x81\x82\xe3\x82\x8a\xe3\x81\xbe\xe3\x81\x9b\xe3" -"\x82\x93\xe3\x80\x82\x0a\xe8\xa8\x80\xe8\xaa\x9e\xe8\x87\xaa\xe4" -"\xbd\x93\xe3\x81\xae\xe6\xa9\x9f\xe8\x83\xbd\xe3\x81\xaf\xe6\x9c" -"\x80\xe5\xb0\x8f\xe9\x99\x90\xe3\x81\xab\xe6\x8a\xbc\xe3\x81\x95" -"\xe3\x81\x88\xe3\x80\x81\xe5\xbf\x85\xe8\xa6\x81\xe3\x81\xaa\xe6" -"\xa9\x9f\xe8\x83\xbd\xe3\x81\xaf\xe6\x8b\xa1\xe5\xbc\xb5\xe3\x83" -"\xa2\xe3\x82\xb8\xe3\x83\xa5\xe3\x83\xbc\xe3\x83\xab\xe3\x81\xa8" -"\xe3\x81\x97\xe3\x81\xa6\xe8\xbf\xbd\xe5\x8a\xa0\xe3\x81\x99\xe3" -"\x82\x8b\xe3\x80\x81\xe3\x81\xa8\xe3\x81\x84\xe3\x81\x86\xe3\x81" -"\xae\xe3\x81\x8c\x20\x50\x79\x74\x68\x6f\x6e\x20\xe3\x81\xae\xe3" -"\x83\x9d\xe3\x83\xaa\xe3\x82\xb7\xe3\x83\xbc\xe3\x81\xa7\xe3\x81" -"\x99\xe3\x80\x82\x0a\x0a"), -'euc_kr': ( -"\xa1\xdd\x20\xc6\xc4\xc0\xcc\xbd\xe3\x28\x50\x79\x74\x68\x6f\x6e" -"\x29\xc0\xba\x20\xb9\xe8\xbf\xec\xb1\xe2\x20\xbd\xb1\xb0\xed\x2c" 
-"\x20\xb0\xad\xb7\xc2\xc7\xd1\x20\xc7\xc1\xb7\xce\xb1\xd7\xb7\xa1" -"\xb9\xd6\x20\xbe\xf0\xbe\xee\xc0\xd4\xb4\xcf\xb4\xd9\x2e\x20\xc6" -"\xc4\xc0\xcc\xbd\xe3\xc0\xba\x0a\xc8\xbf\xc0\xb2\xc0\xfb\xc0\xce" -"\x20\xb0\xed\xbc\xf6\xc1\xd8\x20\xb5\xa5\xc0\xcc\xc5\xcd\x20\xb1" -"\xb8\xc1\xb6\xbf\xcd\x20\xb0\xa3\xb4\xdc\xc7\xcf\xc1\xf6\xb8\xb8" -"\x20\xc8\xbf\xc0\xb2\xc0\xfb\xc0\xce\x20\xb0\xb4\xc3\xbc\xc1\xf6" -"\xc7\xe2\xc7\xc1\xb7\xce\xb1\xd7\xb7\xa1\xb9\xd6\xc0\xbb\x0a\xc1" -"\xf6\xbf\xf8\xc7\xd5\xb4\xcf\xb4\xd9\x2e\x20\xc6\xc4\xc0\xcc\xbd" -"\xe3\xc0\xc7\x20\xbf\xec\xbe\xc6\x28\xe9\xd0\xe4\xba\x29\xc7\xd1" -"\x20\xb9\xae\xb9\xfd\xb0\xfa\x20\xb5\xbf\xc0\xfb\x20\xc5\xb8\xc0" -"\xcc\xc7\xce\x2c\x20\xb1\xd7\xb8\xae\xb0\xed\x20\xc0\xce\xc5\xcd" -"\xc7\xc1\xb8\xae\xc6\xc3\x0a\xc8\xaf\xb0\xe6\xc0\xba\x20\xc6\xc4" -"\xc0\xcc\xbd\xe3\xc0\xbb\x20\xbd\xba\xc5\xa9\xb8\xb3\xc6\xc3\xb0" -"\xfa\x20\xbf\xa9\xb7\xaf\x20\xba\xd0\xbe\xdf\xbf\xa1\xbc\xad\xbf" -"\xcd\x20\xb4\xeb\xba\xce\xba\xd0\xc0\xc7\x20\xc7\xc3\xb7\xa7\xc6" -"\xfb\xbf\xa1\xbc\xad\xc0\xc7\x20\xba\xfc\xb8\xa5\x0a\xbe\xd6\xc7" -"\xc3\xb8\xae\xc4\xc9\xc0\xcc\xbc\xc7\x20\xb0\xb3\xb9\xdf\xc0\xbb" -"\x20\xc7\xd2\x20\xbc\xf6\x20\xc0\xd6\xb4\xc2\x20\xc0\xcc\xbb\xf3" -"\xc0\xfb\xc0\xce\x20\xbe\xf0\xbe\xee\xb7\xce\x20\xb8\xb8\xb5\xe9" -"\xbe\xee\xc1\xdd\xb4\xcf\xb4\xd9\x2e\x0a\x0a\xa1\xd9\xc3\xb9\xb0" -"\xa1\xb3\xa1\x3a\x20\xb3\xaf\xbe\xc6\xb6\xf3\x20\xa4\xd4\xa4\xb6" -"\xa4\xd0\xa4\xd4\xa4\xd4\xa4\xb6\xa4\xd0\xa4\xd4\xbe\xb1\x7e\x20" -"\xa4\xd4\xa4\xa4\xa4\xd2\xa4\xb7\xc5\xad\x21\x20\xa4\xd4\xa4\xa8" -"\xa4\xd1\xa4\xb7\xb1\xdd\xbe\xf8\xc0\xcc\x20\xc0\xfc\xa4\xd4\xa4" -"\xbe\xa4\xc8\xa4\xb2\xb4\xcf\xb4\xd9\x2e\x20\xa4\xd4\xa4\xb2\xa4" -"\xce\xa4\xaa\x2e\x20\xb1\xd7\xb7\xb1\xb0\xc5\x20\xa4\xd4\xa4\xb7" -"\xa4\xd1\xa4\xb4\xb4\xd9\x2e\x0a", -"\xe2\x97\x8e\x20\xed\x8c\x8c\xec\x9d\xb4\xec\x8d\xac\x28\x50\x79" -"\x74\x68\x6f\x6e\x29\xec\x9d\x80\x20\xeb\xb0\xb0\xec\x9a\xb0\xea" -"\xb8\xb0\x20\xec\x89\xbd\xea\xb3\xa0\x2c\x20\xea\xb0\x95\xeb\xa0" -"\xa5\xed\x95\x9c\x20\xed\x94\x84\xeb\xa1\x9c\xea\xb7\xb8\xeb\x9e" -"\x98\xeb\xb0\x8d\x20\xec\x96\xb8\xec\x96\xb4\xec\x9e\x85\xeb\x8b" -"\x88\xeb\x8b\xa4\x2e\x20\xed\x8c\x8c\xec\x9d\xb4\xec\x8d\xac\xec" -"\x9d\x80\x0a\xed\x9a\xa8\xec\x9c\xa8\xec\xa0\x81\xec\x9d\xb8\x20" -"\xea\xb3\xa0\xec\x88\x98\xec\xa4\x80\x20\xeb\x8d\xb0\xec\x9d\xb4" -"\xed\x84\xb0\x20\xea\xb5\xac\xec\xa1\xb0\xec\x99\x80\x20\xea\xb0" -"\x84\xeb\x8b\xa8\xed\x95\x98\xec\xa7\x80\xeb\xa7\x8c\x20\xed\x9a" -"\xa8\xec\x9c\xa8\xec\xa0\x81\xec\x9d\xb8\x20\xea\xb0\x9d\xec\xb2" -"\xb4\xec\xa7\x80\xed\x96\xa5\xed\x94\x84\xeb\xa1\x9c\xea\xb7\xb8" -"\xeb\x9e\x98\xeb\xb0\x8d\xec\x9d\x84\x0a\xec\xa7\x80\xec\x9b\x90" -"\xed\x95\xa9\xeb\x8b\x88\xeb\x8b\xa4\x2e\x20\xed\x8c\x8c\xec\x9d" -"\xb4\xec\x8d\xac\xec\x9d\x98\x20\xec\x9a\xb0\xec\x95\x84\x28\xe5" -"\x84\xaa\xe9\x9b\x85\x29\xed\x95\x9c\x20\xeb\xac\xb8\xeb\xb2\x95" -"\xea\xb3\xbc\x20\xeb\x8f\x99\xec\xa0\x81\x20\xed\x83\x80\xec\x9d" -"\xb4\xed\x95\x91\x2c\x20\xea\xb7\xb8\xeb\xa6\xac\xea\xb3\xa0\x20" -"\xec\x9d\xb8\xed\x84\xb0\xed\x94\x84\xeb\xa6\xac\xed\x8c\x85\x0a" -"\xed\x99\x98\xea\xb2\xbd\xec\x9d\x80\x20\xed\x8c\x8c\xec\x9d\xb4" -"\xec\x8d\xac\xec\x9d\x84\x20\xec\x8a\xa4\xed\x81\xac\xeb\xa6\xbd" -"\xed\x8c\x85\xea\xb3\xbc\x20\xec\x97\xac\xeb\x9f\xac\x20\xeb\xb6" -"\x84\xec\x95\xbc\xec\x97\x90\xec\x84\x9c\xec\x99\x80\x20\xeb\x8c" -"\x80\xeb\xb6\x80\xeb\xb6\x84\xec\x9d\x98\x20\xed\x94\x8c\xeb\x9e" -"\xab\xed\x8f\xbc\xec\x97\x90\xec\x84\x9c\xec\x9d\x98\x20\xeb\xb9" 
-"\xa0\xeb\xa5\xb8\x0a\xec\x95\xa0\xed\x94\x8c\xeb\xa6\xac\xec\xbc" -"\x80\xec\x9d\xb4\xec\x85\x98\x20\xea\xb0\x9c\xeb\xb0\x9c\xec\x9d" -"\x84\x20\xed\x95\xa0\x20\xec\x88\x98\x20\xec\x9e\x88\xeb\x8a\x94" -"\x20\xec\x9d\xb4\xec\x83\x81\xec\xa0\x81\xec\x9d\xb8\x20\xec\x96" -"\xb8\xec\x96\xb4\xeb\xa1\x9c\x20\xeb\xa7\x8c\xeb\x93\xa4\xec\x96" -"\xb4\xec\xa4\x8d\xeb\x8b\x88\xeb\x8b\xa4\x2e\x0a\x0a\xe2\x98\x86" -"\xec\xb2\xab\xea\xb0\x80\xeb\x81\x9d\x3a\x20\xeb\x82\xa0\xec\x95" -"\x84\xeb\x9d\xbc\x20\xec\x93\x94\xec\x93\x94\xec\x93\xa9\x7e\x20" -"\xeb\x8b\x81\xed\x81\xbc\x21\x20\xeb\x9c\xbd\xea\xb8\x88\xec\x97" -"\x86\xec\x9d\xb4\x20\xec\xa0\x84\xed\x99\xa5\xeb\x8b\x88\xeb\x8b" -"\xa4\x2e\x20\xeb\xb7\x81\x2e\x20\xea\xb7\xb8\xeb\x9f\xb0\xea\xb1" -"\xb0\x20\xec\x9d\x8e\xeb\x8b\xa4\x2e\x0a"), -'gb18030': ( -"\x50\x79\x74\x68\x6f\x6e\xa3\xa8\xc5\xc9\xc9\xad\xa3\xa9\xd3\xef" -"\xd1\xd4\xca\xc7\xd2\xbb\xd6\xd6\xb9\xa6\xc4\xdc\xc7\xbf\xb4\xf3" -"\xb6\xf8\xcd\xea\xc9\xc6\xb5\xc4\xcd\xa8\xd3\xc3\xd0\xcd\xbc\xc6" -"\xcb\xe3\xbb\xfa\xb3\xcc\xd0\xf2\xc9\xe8\xbc\xc6\xd3\xef\xd1\xd4" -"\xa3\xac\x0a\xd2\xd1\xbe\xad\xbe\xdf\xd3\xd0\xca\xae\xb6\xe0\xc4" -"\xea\xb5\xc4\xb7\xa2\xd5\xb9\xc0\xfa\xca\xb7\xa3\xac\xb3\xc9\xca" -"\xec\xc7\xd2\xce\xc8\xb6\xa8\xa1\xa3\xd5\xe2\xd6\xd6\xd3\xef\xd1" -"\xd4\xbe\xdf\xd3\xd0\xb7\xc7\xb3\xa3\xbc\xf2\xbd\xdd\xb6\xf8\xc7" -"\xe5\xce\xfa\x0a\xb5\xc4\xd3\xef\xb7\xa8\xcc\xd8\xb5\xe3\xa3\xac" -"\xca\xca\xba\xcf\xcd\xea\xb3\xc9\xb8\xf7\xd6\xd6\xb8\xdf\xb2\xe3" -"\xc8\xce\xce\xf1\xa3\xac\xbc\xb8\xba\xf5\xbf\xc9\xd2\xd4\xd4\xda" -"\xcb\xf9\xd3\xd0\xb5\xc4\xb2\xd9\xd7\xf7\xcf\xb5\xcd\xb3\xd6\xd0" -"\x0a\xd4\xcb\xd0\xd0\xa1\xa3\xd5\xe2\xd6\xd6\xd3\xef\xd1\xd4\xbc" -"\xf2\xb5\xa5\xb6\xf8\xc7\xbf\xb4\xf3\xa3\xac\xca\xca\xba\xcf\xb8" -"\xf7\xd6\xd6\xc8\xcb\xca\xbf\xd1\xa7\xcf\xb0\xca\xb9\xd3\xc3\xa1" -"\xa3\xc4\xbf\xc7\xb0\xa3\xac\xbb\xf9\xd3\xda\xd5\xe2\x0a\xd6\xd6" -"\xd3\xef\xd1\xd4\xb5\xc4\xcf\xe0\xb9\xd8\xbc\xbc\xca\xf5\xd5\xfd" -"\xd4\xda\xb7\xc9\xcb\xd9\xb5\xc4\xb7\xa2\xd5\xb9\xa3\xac\xd3\xc3" -"\xbb\xa7\xca\xfd\xc1\xbf\xbc\xb1\xbe\xe7\xc0\xa9\xb4\xf3\xa3\xac" -"\xcf\xe0\xb9\xd8\xb5\xc4\xd7\xca\xd4\xb4\xb7\xc7\xb3\xa3\xb6\xe0" -"\xa1\xa3\x0a\xc8\xe7\xba\xce\xd4\xda\x20\x50\x79\x74\x68\x6f\x6e" -"\x20\xd6\xd0\xca\xb9\xd3\xc3\xbc\xc8\xd3\xd0\xb5\xc4\x20\x43\x20" -"\x6c\x69\x62\x72\x61\x72\x79\x3f\x0a\xa1\xa1\xd4\xda\xd9\x59\xd3" -"\x8d\xbf\xc6\xbc\xbc\xbf\xec\xcb\xd9\xb0\x6c\xd5\xb9\xb5\xc4\xbd" -"\xf1\xcc\xec\x2c\x20\xe9\x5f\xb0\x6c\xbc\xb0\x9c\x79\xd4\x87\xdc" -"\x9b\xf3\x77\xb5\xc4\xcb\xd9\xb6\xc8\xca\xc7\xb2\xbb\xc8\xdd\xba" -"\xf6\xd2\x95\xb5\xc4\x0a\xd5\x6e\xee\x7d\x2e\x20\x9e\xe9\xbc\xd3" -"\xbf\xec\xe9\x5f\xb0\x6c\xbc\xb0\x9c\x79\xd4\x87\xb5\xc4\xcb\xd9" -"\xb6\xc8\x2c\x20\xce\xd2\x82\x83\xb1\xe3\xb3\xa3\xcf\xa3\xcd\xfb" -"\xc4\xdc\xc0\xfb\xd3\xc3\xd2\xbb\xd0\xa9\xd2\xd1\xe9\x5f\xb0\x6c" -"\xba\xc3\xb5\xc4\x0a\x6c\x69\x62\x72\x61\x72\x79\x2c\x20\x81\x4b" -"\xd3\xd0\xd2\xbb\x82\x80\x20\x66\x61\x73\x74\x20\x70\x72\x6f\x74" -"\x6f\x74\x79\x70\x69\x6e\x67\x20\xb5\xc4\x20\x70\x72\x6f\x67\x72" -"\x61\x6d\x6d\x69\x6e\x67\x20\x6c\x61\x6e\x67\x75\x61\x67\x65\x20" -"\xbf\xc9\x0a\xb9\xa9\xca\xb9\xd3\xc3\x2e\x20\xc4\xbf\xc7\xb0\xd3" -"\xd0\xd4\x53\xd4\x53\xb6\xe0\xb6\xe0\xb5\xc4\x20\x6c\x69\x62\x72" -"\x61\x72\x79\x20\xca\xc7\xd2\xd4\x20\x43\x20\x8c\x91\xb3\xc9\x2c" -"\x20\xb6\xf8\x20\x50\x79\x74\x68\x6f\x6e\x20\xca\xc7\xd2\xbb\x82" -"\x80\x0a\x66\x61\x73\x74\x20\x70\x72\x6f\x74\x6f\x74\x79\x70\x69" -"\x6e\x67\x20\xb5\xc4\x20\x70\x72\x6f\x67\x72\x61\x6d\x6d\x69\x6e" 
-"\x67\x20\x6c\x61\x6e\x67\x75\x61\x67\x65\x2e\x20\xb9\xca\xce\xd2" -"\x82\x83\xcf\xa3\xcd\xfb\xc4\xdc\x8c\xa2\xbc\xc8\xd3\xd0\xb5\xc4" -"\x0a\x43\x20\x6c\x69\x62\x72\x61\x72\x79\x20\xc4\xc3\xb5\xbd\x20" -"\x50\x79\x74\x68\x6f\x6e\x20\xb5\xc4\xad\x68\xbe\xb3\xd6\xd0\x9c" -"\x79\xd4\x87\xbc\xb0\xd5\xfb\xba\xcf\x2e\x20\xc6\xe4\xd6\xd0\xd7" -"\xee\xd6\xf7\xd2\xaa\xd2\xb2\xca\xc7\xce\xd2\x82\x83\xcb\xf9\x0a" -"\xd2\xaa\xd3\x91\xd5\x93\xb5\xc4\x86\x96\xee\x7d\xbe\xcd\xca\xc7" -"\x3a\x0a\x83\x35\xc7\x31\x83\x33\x9a\x33\x83\x32\xb1\x31\x83\x33" -"\x95\x31\x20\x82\x37\xd1\x36\x83\x30\x8c\x34\x83\x36\x84\x33\x20" -"\x82\x38\x89\x35\x82\x38\xfb\x36\x83\x33\x95\x35\x20\x83\x33\xd5" -"\x31\x82\x39\x81\x35\x20\x83\x30\xfd\x39\x83\x33\x86\x30\x20\x83" -"\x34\xdc\x33\x83\x35\xf6\x37\x83\x35\x97\x35\x20\x83\x35\xf9\x35" -"\x83\x30\x91\x39\x82\x38\x83\x39\x82\x39\xfc\x33\x83\x30\xf0\x34" -"\x20\x83\x32\xeb\x39\x83\x32\xeb\x35\x82\x39\x83\x39\x2e\x0a\x0a", -"\x50\x79\x74\x68\x6f\x6e\xef\xbc\x88\xe6\xb4\xbe\xe6\xa3\xae\xef" -"\xbc\x89\xe8\xaf\xad\xe8\xa8\x80\xe6\x98\xaf\xe4\xb8\x80\xe7\xa7" -"\x8d\xe5\x8a\x9f\xe8\x83\xbd\xe5\xbc\xba\xe5\xa4\xa7\xe8\x80\x8c" -"\xe5\xae\x8c\xe5\x96\x84\xe7\x9a\x84\xe9\x80\x9a\xe7\x94\xa8\xe5" -"\x9e\x8b\xe8\xae\xa1\xe7\xae\x97\xe6\x9c\xba\xe7\xa8\x8b\xe5\xba" -"\x8f\xe8\xae\xbe\xe8\xae\xa1\xe8\xaf\xad\xe8\xa8\x80\xef\xbc\x8c" -"\x0a\xe5\xb7\xb2\xe7\xbb\x8f\xe5\x85\xb7\xe6\x9c\x89\xe5\x8d\x81" -"\xe5\xa4\x9a\xe5\xb9\xb4\xe7\x9a\x84\xe5\x8f\x91\xe5\xb1\x95\xe5" -"\x8e\x86\xe5\x8f\xb2\xef\xbc\x8c\xe6\x88\x90\xe7\x86\x9f\xe4\xb8" -"\x94\xe7\xa8\xb3\xe5\xae\x9a\xe3\x80\x82\xe8\xbf\x99\xe7\xa7\x8d" -"\xe8\xaf\xad\xe8\xa8\x80\xe5\x85\xb7\xe6\x9c\x89\xe9\x9d\x9e\xe5" -"\xb8\xb8\xe7\xae\x80\xe6\x8d\xb7\xe8\x80\x8c\xe6\xb8\x85\xe6\x99" -"\xb0\x0a\xe7\x9a\x84\xe8\xaf\xad\xe6\xb3\x95\xe7\x89\xb9\xe7\x82" -"\xb9\xef\xbc\x8c\xe9\x80\x82\xe5\x90\x88\xe5\xae\x8c\xe6\x88\x90" -"\xe5\x90\x84\xe7\xa7\x8d\xe9\xab\x98\xe5\xb1\x82\xe4\xbb\xbb\xe5" -"\x8a\xa1\xef\xbc\x8c\xe5\x87\xa0\xe4\xb9\x8e\xe5\x8f\xaf\xe4\xbb" -"\xa5\xe5\x9c\xa8\xe6\x89\x80\xe6\x9c\x89\xe7\x9a\x84\xe6\x93\x8d" -"\xe4\xbd\x9c\xe7\xb3\xbb\xe7\xbb\x9f\xe4\xb8\xad\x0a\xe8\xbf\x90" -"\xe8\xa1\x8c\xe3\x80\x82\xe8\xbf\x99\xe7\xa7\x8d\xe8\xaf\xad\xe8" -"\xa8\x80\xe7\xae\x80\xe5\x8d\x95\xe8\x80\x8c\xe5\xbc\xba\xe5\xa4" -"\xa7\xef\xbc\x8c\xe9\x80\x82\xe5\x90\x88\xe5\x90\x84\xe7\xa7\x8d" -"\xe4\xba\xba\xe5\xa3\xab\xe5\xad\xa6\xe4\xb9\xa0\xe4\xbd\xbf\xe7" -"\x94\xa8\xe3\x80\x82\xe7\x9b\xae\xe5\x89\x8d\xef\xbc\x8c\xe5\x9f" -"\xba\xe4\xba\x8e\xe8\xbf\x99\x0a\xe7\xa7\x8d\xe8\xaf\xad\xe8\xa8" -"\x80\xe7\x9a\x84\xe7\x9b\xb8\xe5\x85\xb3\xe6\x8a\x80\xe6\x9c\xaf" -"\xe6\xad\xa3\xe5\x9c\xa8\xe9\xa3\x9e\xe9\x80\x9f\xe7\x9a\x84\xe5" -"\x8f\x91\xe5\xb1\x95\xef\xbc\x8c\xe7\x94\xa8\xe6\x88\xb7\xe6\x95" -"\xb0\xe9\x87\x8f\xe6\x80\xa5\xe5\x89\xa7\xe6\x89\xa9\xe5\xa4\xa7" -"\xef\xbc\x8c\xe7\x9b\xb8\xe5\x85\xb3\xe7\x9a\x84\xe8\xb5\x84\xe6" -"\xba\x90\xe9\x9d\x9e\xe5\xb8\xb8\xe5\xa4\x9a\xe3\x80\x82\x0a\xe5" -"\xa6\x82\xe4\xbd\x95\xe5\x9c\xa8\x20\x50\x79\x74\x68\x6f\x6e\x20" -"\xe4\xb8\xad\xe4\xbd\xbf\xe7\x94\xa8\xe6\x97\xa2\xe6\x9c\x89\xe7" -"\x9a\x84\x20\x43\x20\x6c\x69\x62\x72\x61\x72\x79\x3f\x0a\xe3\x80" -"\x80\xe5\x9c\xa8\xe8\xb3\x87\xe8\xa8\x8a\xe7\xa7\x91\xe6\x8a\x80" -"\xe5\xbf\xab\xe9\x80\x9f\xe7\x99\xbc\xe5\xb1\x95\xe7\x9a\x84\xe4" -"\xbb\x8a\xe5\xa4\xa9\x2c\x20\xe9\x96\x8b\xe7\x99\xbc\xe5\x8f\x8a" -"\xe6\xb8\xac\xe8\xa9\xa6\xe8\xbb\x9f\xe9\xab\x94\xe7\x9a\x84\xe9" -"\x80\x9f\xe5\xba\xa6\xe6\x98\xaf\xe4\xb8\x8d\xe5\xae\xb9\xe5\xbf" 
-"\xbd\xe8\xa6\x96\xe7\x9a\x84\x0a\xe8\xaa\xb2\xe9\xa1\x8c\x2e\x20" -"\xe7\x82\xba\xe5\x8a\xa0\xe5\xbf\xab\xe9\x96\x8b\xe7\x99\xbc\xe5" -"\x8f\x8a\xe6\xb8\xac\xe8\xa9\xa6\xe7\x9a\x84\xe9\x80\x9f\xe5\xba" -"\xa6\x2c\x20\xe6\x88\x91\xe5\x80\x91\xe4\xbe\xbf\xe5\xb8\xb8\xe5" -"\xb8\x8c\xe6\x9c\x9b\xe8\x83\xbd\xe5\x88\xa9\xe7\x94\xa8\xe4\xb8" -"\x80\xe4\xba\x9b\xe5\xb7\xb2\xe9\x96\x8b\xe7\x99\xbc\xe5\xa5\xbd" -"\xe7\x9a\x84\x0a\x6c\x69\x62\x72\x61\x72\x79\x2c\x20\xe4\xb8\xa6" -"\xe6\x9c\x89\xe4\xb8\x80\xe5\x80\x8b\x20\x66\x61\x73\x74\x20\x70" -"\x72\x6f\x74\x6f\x74\x79\x70\x69\x6e\x67\x20\xe7\x9a\x84\x20\x70" -"\x72\x6f\x67\x72\x61\x6d\x6d\x69\x6e\x67\x20\x6c\x61\x6e\x67\x75" -"\x61\x67\x65\x20\xe5\x8f\xaf\x0a\xe4\xbe\x9b\xe4\xbd\xbf\xe7\x94" -"\xa8\x2e\x20\xe7\x9b\xae\xe5\x89\x8d\xe6\x9c\x89\xe8\xa8\xb1\xe8" -"\xa8\xb1\xe5\xa4\x9a\xe5\xa4\x9a\xe7\x9a\x84\x20\x6c\x69\x62\x72" -"\x61\x72\x79\x20\xe6\x98\xaf\xe4\xbb\xa5\x20\x43\x20\xe5\xaf\xab" -"\xe6\x88\x90\x2c\x20\xe8\x80\x8c\x20\x50\x79\x74\x68\x6f\x6e\x20" -"\xe6\x98\xaf\xe4\xb8\x80\xe5\x80\x8b\x0a\x66\x61\x73\x74\x20\x70" -"\x72\x6f\x74\x6f\x74\x79\x70\x69\x6e\x67\x20\xe7\x9a\x84\x20\x70" -"\x72\x6f\x67\x72\x61\x6d\x6d\x69\x6e\x67\x20\x6c\x61\x6e\x67\x75" -"\x61\x67\x65\x2e\x20\xe6\x95\x85\xe6\x88\x91\xe5\x80\x91\xe5\xb8" -"\x8c\xe6\x9c\x9b\xe8\x83\xbd\xe5\xb0\x87\xe6\x97\xa2\xe6\x9c\x89" -"\xe7\x9a\x84\x0a\x43\x20\x6c\x69\x62\x72\x61\x72\x79\x20\xe6\x8b" -"\xbf\xe5\x88\xb0\x20\x50\x79\x74\x68\x6f\x6e\x20\xe7\x9a\x84\xe7" -"\x92\xb0\xe5\xa2\x83\xe4\xb8\xad\xe6\xb8\xac\xe8\xa9\xa6\xe5\x8f" -"\x8a\xe6\x95\xb4\xe5\x90\x88\x2e\x20\xe5\x85\xb6\xe4\xb8\xad\xe6" -"\x9c\x80\xe4\xb8\xbb\xe8\xa6\x81\xe4\xb9\x9f\xe6\x98\xaf\xe6\x88" -"\x91\xe5\x80\x91\xe6\x89\x80\x0a\xe8\xa6\x81\xe8\xa8\x8e\xe8\xab" -"\x96\xe7\x9a\x84\xe5\x95\x8f\xe9\xa1\x8c\xe5\xb0\xb1\xe6\x98\xaf" -"\x3a\x0a\xed\x8c\x8c\xec\x9d\xb4\xec\x8d\xac\xec\x9d\x80\x20\xea" -"\xb0\x95\xeb\xa0\xa5\xed\x95\x9c\x20\xea\xb8\xb0\xeb\x8a\xa5\xec" -"\x9d\x84\x20\xec\xa7\x80\xeb\x8b\x8c\x20\xeb\xb2\x94\xec\x9a\xa9" -"\x20\xec\xbb\xb4\xed\x93\xa8\xed\x84\xb0\x20\xed\x94\x84\xeb\xa1" -"\x9c\xea\xb7\xb8\xeb\x9e\x98\xeb\xb0\x8d\x20\xec\x96\xb8\xec\x96" -"\xb4\xeb\x8b\xa4\x2e\x0a\x0a"), -'gb2312': ( -"\x50\x79\x74\x68\x6f\x6e\xa3\xa8\xc5\xc9\xc9\xad\xa3\xa9\xd3\xef" -"\xd1\xd4\xca\xc7\xd2\xbb\xd6\xd6\xb9\xa6\xc4\xdc\xc7\xbf\xb4\xf3" -"\xb6\xf8\xcd\xea\xc9\xc6\xb5\xc4\xcd\xa8\xd3\xc3\xd0\xcd\xbc\xc6" -"\xcb\xe3\xbb\xfa\xb3\xcc\xd0\xf2\xc9\xe8\xbc\xc6\xd3\xef\xd1\xd4" -"\xa3\xac\x0a\xd2\xd1\xbe\xad\xbe\xdf\xd3\xd0\xca\xae\xb6\xe0\xc4" -"\xea\xb5\xc4\xb7\xa2\xd5\xb9\xc0\xfa\xca\xb7\xa3\xac\xb3\xc9\xca" -"\xec\xc7\xd2\xce\xc8\xb6\xa8\xa1\xa3\xd5\xe2\xd6\xd6\xd3\xef\xd1" -"\xd4\xbe\xdf\xd3\xd0\xb7\xc7\xb3\xa3\xbc\xf2\xbd\xdd\xb6\xf8\xc7" -"\xe5\xce\xfa\x0a\xb5\xc4\xd3\xef\xb7\xa8\xcc\xd8\xb5\xe3\xa3\xac" -"\xca\xca\xba\xcf\xcd\xea\xb3\xc9\xb8\xf7\xd6\xd6\xb8\xdf\xb2\xe3" -"\xc8\xce\xce\xf1\xa3\xac\xbc\xb8\xba\xf5\xbf\xc9\xd2\xd4\xd4\xda" -"\xcb\xf9\xd3\xd0\xb5\xc4\xb2\xd9\xd7\xf7\xcf\xb5\xcd\xb3\xd6\xd0" -"\x0a\xd4\xcb\xd0\xd0\xa1\xa3\xd5\xe2\xd6\xd6\xd3\xef\xd1\xd4\xbc" -"\xf2\xb5\xa5\xb6\xf8\xc7\xbf\xb4\xf3\xa3\xac\xca\xca\xba\xcf\xb8" -"\xf7\xd6\xd6\xc8\xcb\xca\xbf\xd1\xa7\xcf\xb0\xca\xb9\xd3\xc3\xa1" -"\xa3\xc4\xbf\xc7\xb0\xa3\xac\xbb\xf9\xd3\xda\xd5\xe2\x0a\xd6\xd6" -"\xd3\xef\xd1\xd4\xb5\xc4\xcf\xe0\xb9\xd8\xbc\xbc\xca\xf5\xd5\xfd" -"\xd4\xda\xb7\xc9\xcb\xd9\xb5\xc4\xb7\xa2\xd5\xb9\xa3\xac\xd3\xc3" -"\xbb\xa7\xca\xfd\xc1\xbf\xbc\xb1\xbe\xe7\xc0\xa9\xb4\xf3\xa3\xac" 
-"\xcf\xe0\xb9\xd8\xb5\xc4\xd7\xca\xd4\xb4\xb7\xc7\xb3\xa3\xb6\xe0" -"\xa1\xa3\x0a\x0a", -"\x50\x79\x74\x68\x6f\x6e\xef\xbc\x88\xe6\xb4\xbe\xe6\xa3\xae\xef" -"\xbc\x89\xe8\xaf\xad\xe8\xa8\x80\xe6\x98\xaf\xe4\xb8\x80\xe7\xa7" -"\x8d\xe5\x8a\x9f\xe8\x83\xbd\xe5\xbc\xba\xe5\xa4\xa7\xe8\x80\x8c" -"\xe5\xae\x8c\xe5\x96\x84\xe7\x9a\x84\xe9\x80\x9a\xe7\x94\xa8\xe5" -"\x9e\x8b\xe8\xae\xa1\xe7\xae\x97\xe6\x9c\xba\xe7\xa8\x8b\xe5\xba" -"\x8f\xe8\xae\xbe\xe8\xae\xa1\xe8\xaf\xad\xe8\xa8\x80\xef\xbc\x8c" -"\x0a\xe5\xb7\xb2\xe7\xbb\x8f\xe5\x85\xb7\xe6\x9c\x89\xe5\x8d\x81" -"\xe5\xa4\x9a\xe5\xb9\xb4\xe7\x9a\x84\xe5\x8f\x91\xe5\xb1\x95\xe5" -"\x8e\x86\xe5\x8f\xb2\xef\xbc\x8c\xe6\x88\x90\xe7\x86\x9f\xe4\xb8" -"\x94\xe7\xa8\xb3\xe5\xae\x9a\xe3\x80\x82\xe8\xbf\x99\xe7\xa7\x8d" -"\xe8\xaf\xad\xe8\xa8\x80\xe5\x85\xb7\xe6\x9c\x89\xe9\x9d\x9e\xe5" -"\xb8\xb8\xe7\xae\x80\xe6\x8d\xb7\xe8\x80\x8c\xe6\xb8\x85\xe6\x99" -"\xb0\x0a\xe7\x9a\x84\xe8\xaf\xad\xe6\xb3\x95\xe7\x89\xb9\xe7\x82" -"\xb9\xef\xbc\x8c\xe9\x80\x82\xe5\x90\x88\xe5\xae\x8c\xe6\x88\x90" -"\xe5\x90\x84\xe7\xa7\x8d\xe9\xab\x98\xe5\xb1\x82\xe4\xbb\xbb\xe5" -"\x8a\xa1\xef\xbc\x8c\xe5\x87\xa0\xe4\xb9\x8e\xe5\x8f\xaf\xe4\xbb" -"\xa5\xe5\x9c\xa8\xe6\x89\x80\xe6\x9c\x89\xe7\x9a\x84\xe6\x93\x8d" -"\xe4\xbd\x9c\xe7\xb3\xbb\xe7\xbb\x9f\xe4\xb8\xad\x0a\xe8\xbf\x90" -"\xe8\xa1\x8c\xe3\x80\x82\xe8\xbf\x99\xe7\xa7\x8d\xe8\xaf\xad\xe8" -"\xa8\x80\xe7\xae\x80\xe5\x8d\x95\xe8\x80\x8c\xe5\xbc\xba\xe5\xa4" -"\xa7\xef\xbc\x8c\xe9\x80\x82\xe5\x90\x88\xe5\x90\x84\xe7\xa7\x8d" -"\xe4\xba\xba\xe5\xa3\xab\xe5\xad\xa6\xe4\xb9\xa0\xe4\xbd\xbf\xe7" -"\x94\xa8\xe3\x80\x82\xe7\x9b\xae\xe5\x89\x8d\xef\xbc\x8c\xe5\x9f" -"\xba\xe4\xba\x8e\xe8\xbf\x99\x0a\xe7\xa7\x8d\xe8\xaf\xad\xe8\xa8" -"\x80\xe7\x9a\x84\xe7\x9b\xb8\xe5\x85\xb3\xe6\x8a\x80\xe6\x9c\xaf" -"\xe6\xad\xa3\xe5\x9c\xa8\xe9\xa3\x9e\xe9\x80\x9f\xe7\x9a\x84\xe5" -"\x8f\x91\xe5\xb1\x95\xef\xbc\x8c\xe7\x94\xa8\xe6\x88\xb7\xe6\x95" -"\xb0\xe9\x87\x8f\xe6\x80\xa5\xe5\x89\xa7\xe6\x89\xa9\xe5\xa4\xa7" -"\xef\xbc\x8c\xe7\x9b\xb8\xe5\x85\xb3\xe7\x9a\x84\xe8\xb5\x84\xe6" -"\xba\x90\xe9\x9d\x9e\xe5\xb8\xb8\xe5\xa4\x9a\xe3\x80\x82\x0a\x0a"), -'gbk': ( -"\x50\x79\x74\x68\x6f\x6e\xa3\xa8\xc5\xc9\xc9\xad\xa3\xa9\xd3\xef" -"\xd1\xd4\xca\xc7\xd2\xbb\xd6\xd6\xb9\xa6\xc4\xdc\xc7\xbf\xb4\xf3" -"\xb6\xf8\xcd\xea\xc9\xc6\xb5\xc4\xcd\xa8\xd3\xc3\xd0\xcd\xbc\xc6" -"\xcb\xe3\xbb\xfa\xb3\xcc\xd0\xf2\xc9\xe8\xbc\xc6\xd3\xef\xd1\xd4" -"\xa3\xac\x0a\xd2\xd1\xbe\xad\xbe\xdf\xd3\xd0\xca\xae\xb6\xe0\xc4" -"\xea\xb5\xc4\xb7\xa2\xd5\xb9\xc0\xfa\xca\xb7\xa3\xac\xb3\xc9\xca" -"\xec\xc7\xd2\xce\xc8\xb6\xa8\xa1\xa3\xd5\xe2\xd6\xd6\xd3\xef\xd1" -"\xd4\xbe\xdf\xd3\xd0\xb7\xc7\xb3\xa3\xbc\xf2\xbd\xdd\xb6\xf8\xc7" -"\xe5\xce\xfa\x0a\xb5\xc4\xd3\xef\xb7\xa8\xcc\xd8\xb5\xe3\xa3\xac" -"\xca\xca\xba\xcf\xcd\xea\xb3\xc9\xb8\xf7\xd6\xd6\xb8\xdf\xb2\xe3" -"\xc8\xce\xce\xf1\xa3\xac\xbc\xb8\xba\xf5\xbf\xc9\xd2\xd4\xd4\xda" -"\xcb\xf9\xd3\xd0\xb5\xc4\xb2\xd9\xd7\xf7\xcf\xb5\xcd\xb3\xd6\xd0" -"\x0a\xd4\xcb\xd0\xd0\xa1\xa3\xd5\xe2\xd6\xd6\xd3\xef\xd1\xd4\xbc" -"\xf2\xb5\xa5\xb6\xf8\xc7\xbf\xb4\xf3\xa3\xac\xca\xca\xba\xcf\xb8" -"\xf7\xd6\xd6\xc8\xcb\xca\xbf\xd1\xa7\xcf\xb0\xca\xb9\xd3\xc3\xa1" -"\xa3\xc4\xbf\xc7\xb0\xa3\xac\xbb\xf9\xd3\xda\xd5\xe2\x0a\xd6\xd6" -"\xd3\xef\xd1\xd4\xb5\xc4\xcf\xe0\xb9\xd8\xbc\xbc\xca\xf5\xd5\xfd" -"\xd4\xda\xb7\xc9\xcb\xd9\xb5\xc4\xb7\xa2\xd5\xb9\xa3\xac\xd3\xc3" -"\xbb\xa7\xca\xfd\xc1\xbf\xbc\xb1\xbe\xe7\xc0\xa9\xb4\xf3\xa3\xac" -"\xcf\xe0\xb9\xd8\xb5\xc4\xd7\xca\xd4\xb4\xb7\xc7\xb3\xa3\xb6\xe0" 
-"\xa1\xa3\x0a\xc8\xe7\xba\xce\xd4\xda\x20\x50\x79\x74\x68\x6f\x6e" -"\x20\xd6\xd0\xca\xb9\xd3\xc3\xbc\xc8\xd3\xd0\xb5\xc4\x20\x43\x20" -"\x6c\x69\x62\x72\x61\x72\x79\x3f\x0a\xa1\xa1\xd4\xda\xd9\x59\xd3" -"\x8d\xbf\xc6\xbc\xbc\xbf\xec\xcb\xd9\xb0\x6c\xd5\xb9\xb5\xc4\xbd" -"\xf1\xcc\xec\x2c\x20\xe9\x5f\xb0\x6c\xbc\xb0\x9c\x79\xd4\x87\xdc" -"\x9b\xf3\x77\xb5\xc4\xcb\xd9\xb6\xc8\xca\xc7\xb2\xbb\xc8\xdd\xba" -"\xf6\xd2\x95\xb5\xc4\x0a\xd5\x6e\xee\x7d\x2e\x20\x9e\xe9\xbc\xd3" -"\xbf\xec\xe9\x5f\xb0\x6c\xbc\xb0\x9c\x79\xd4\x87\xb5\xc4\xcb\xd9" -"\xb6\xc8\x2c\x20\xce\xd2\x82\x83\xb1\xe3\xb3\xa3\xcf\xa3\xcd\xfb" -"\xc4\xdc\xc0\xfb\xd3\xc3\xd2\xbb\xd0\xa9\xd2\xd1\xe9\x5f\xb0\x6c" -"\xba\xc3\xb5\xc4\x0a\x6c\x69\x62\x72\x61\x72\x79\x2c\x20\x81\x4b" -"\xd3\xd0\xd2\xbb\x82\x80\x20\x66\x61\x73\x74\x20\x70\x72\x6f\x74" -"\x6f\x74\x79\x70\x69\x6e\x67\x20\xb5\xc4\x20\x70\x72\x6f\x67\x72" -"\x61\x6d\x6d\x69\x6e\x67\x20\x6c\x61\x6e\x67\x75\x61\x67\x65\x20" -"\xbf\xc9\x0a\xb9\xa9\xca\xb9\xd3\xc3\x2e\x20\xc4\xbf\xc7\xb0\xd3" -"\xd0\xd4\x53\xd4\x53\xb6\xe0\xb6\xe0\xb5\xc4\x20\x6c\x69\x62\x72" -"\x61\x72\x79\x20\xca\xc7\xd2\xd4\x20\x43\x20\x8c\x91\xb3\xc9\x2c" -"\x20\xb6\xf8\x20\x50\x79\x74\x68\x6f\x6e\x20\xca\xc7\xd2\xbb\x82" -"\x80\x0a\x66\x61\x73\x74\x20\x70\x72\x6f\x74\x6f\x74\x79\x70\x69" -"\x6e\x67\x20\xb5\xc4\x20\x70\x72\x6f\x67\x72\x61\x6d\x6d\x69\x6e" -"\x67\x20\x6c\x61\x6e\x67\x75\x61\x67\x65\x2e\x20\xb9\xca\xce\xd2" -"\x82\x83\xcf\xa3\xcd\xfb\xc4\xdc\x8c\xa2\xbc\xc8\xd3\xd0\xb5\xc4" -"\x0a\x43\x20\x6c\x69\x62\x72\x61\x72\x79\x20\xc4\xc3\xb5\xbd\x20" -"\x50\x79\x74\x68\x6f\x6e\x20\xb5\xc4\xad\x68\xbe\xb3\xd6\xd0\x9c" -"\x79\xd4\x87\xbc\xb0\xd5\xfb\xba\xcf\x2e\x20\xc6\xe4\xd6\xd0\xd7" -"\xee\xd6\xf7\xd2\xaa\xd2\xb2\xca\xc7\xce\xd2\x82\x83\xcb\xf9\x0a" -"\xd2\xaa\xd3\x91\xd5\x93\xb5\xc4\x86\x96\xee\x7d\xbe\xcd\xca\xc7" -"\x3a\x0a\x0a", -"\x50\x79\x74\x68\x6f\x6e\xef\xbc\x88\xe6\xb4\xbe\xe6\xa3\xae\xef" -"\xbc\x89\xe8\xaf\xad\xe8\xa8\x80\xe6\x98\xaf\xe4\xb8\x80\xe7\xa7" -"\x8d\xe5\x8a\x9f\xe8\x83\xbd\xe5\xbc\xba\xe5\xa4\xa7\xe8\x80\x8c" -"\xe5\xae\x8c\xe5\x96\x84\xe7\x9a\x84\xe9\x80\x9a\xe7\x94\xa8\xe5" -"\x9e\x8b\xe8\xae\xa1\xe7\xae\x97\xe6\x9c\xba\xe7\xa8\x8b\xe5\xba" -"\x8f\xe8\xae\xbe\xe8\xae\xa1\xe8\xaf\xad\xe8\xa8\x80\xef\xbc\x8c" -"\x0a\xe5\xb7\xb2\xe7\xbb\x8f\xe5\x85\xb7\xe6\x9c\x89\xe5\x8d\x81" -"\xe5\xa4\x9a\xe5\xb9\xb4\xe7\x9a\x84\xe5\x8f\x91\xe5\xb1\x95\xe5" -"\x8e\x86\xe5\x8f\xb2\xef\xbc\x8c\xe6\x88\x90\xe7\x86\x9f\xe4\xb8" -"\x94\xe7\xa8\xb3\xe5\xae\x9a\xe3\x80\x82\xe8\xbf\x99\xe7\xa7\x8d" -"\xe8\xaf\xad\xe8\xa8\x80\xe5\x85\xb7\xe6\x9c\x89\xe9\x9d\x9e\xe5" -"\xb8\xb8\xe7\xae\x80\xe6\x8d\xb7\xe8\x80\x8c\xe6\xb8\x85\xe6\x99" -"\xb0\x0a\xe7\x9a\x84\xe8\xaf\xad\xe6\xb3\x95\xe7\x89\xb9\xe7\x82" -"\xb9\xef\xbc\x8c\xe9\x80\x82\xe5\x90\x88\xe5\xae\x8c\xe6\x88\x90" -"\xe5\x90\x84\xe7\xa7\x8d\xe9\xab\x98\xe5\xb1\x82\xe4\xbb\xbb\xe5" -"\x8a\xa1\xef\xbc\x8c\xe5\x87\xa0\xe4\xb9\x8e\xe5\x8f\xaf\xe4\xbb" -"\xa5\xe5\x9c\xa8\xe6\x89\x80\xe6\x9c\x89\xe7\x9a\x84\xe6\x93\x8d" -"\xe4\xbd\x9c\xe7\xb3\xbb\xe7\xbb\x9f\xe4\xb8\xad\x0a\xe8\xbf\x90" -"\xe8\xa1\x8c\xe3\x80\x82\xe8\xbf\x99\xe7\xa7\x8d\xe8\xaf\xad\xe8" -"\xa8\x80\xe7\xae\x80\xe5\x8d\x95\xe8\x80\x8c\xe5\xbc\xba\xe5\xa4" -"\xa7\xef\xbc\x8c\xe9\x80\x82\xe5\x90\x88\xe5\x90\x84\xe7\xa7\x8d" -"\xe4\xba\xba\xe5\xa3\xab\xe5\xad\xa6\xe4\xb9\xa0\xe4\xbd\xbf\xe7" -"\x94\xa8\xe3\x80\x82\xe7\x9b\xae\xe5\x89\x8d\xef\xbc\x8c\xe5\x9f" -"\xba\xe4\xba\x8e\xe8\xbf\x99\x0a\xe7\xa7\x8d\xe8\xaf\xad\xe8\xa8" -"\x80\xe7\x9a\x84\xe7\x9b\xb8\xe5\x85\xb3\xe6\x8a\x80\xe6\x9c\xaf" 
-"\xe6\xad\xa3\xe5\x9c\xa8\xe9\xa3\x9e\xe9\x80\x9f\xe7\x9a\x84\xe5" -"\x8f\x91\xe5\xb1\x95\xef\xbc\x8c\xe7\x94\xa8\xe6\x88\xb7\xe6\x95" -"\xb0\xe9\x87\x8f\xe6\x80\xa5\xe5\x89\xa7\xe6\x89\xa9\xe5\xa4\xa7" -"\xef\xbc\x8c\xe7\x9b\xb8\xe5\x85\xb3\xe7\x9a\x84\xe8\xb5\x84\xe6" -"\xba\x90\xe9\x9d\x9e\xe5\xb8\xb8\xe5\xa4\x9a\xe3\x80\x82\x0a\xe5" -"\xa6\x82\xe4\xbd\x95\xe5\x9c\xa8\x20\x50\x79\x74\x68\x6f\x6e\x20" -"\xe4\xb8\xad\xe4\xbd\xbf\xe7\x94\xa8\xe6\x97\xa2\xe6\x9c\x89\xe7" -"\x9a\x84\x20\x43\x20\x6c\x69\x62\x72\x61\x72\x79\x3f\x0a\xe3\x80" -"\x80\xe5\x9c\xa8\xe8\xb3\x87\xe8\xa8\x8a\xe7\xa7\x91\xe6\x8a\x80" -"\xe5\xbf\xab\xe9\x80\x9f\xe7\x99\xbc\xe5\xb1\x95\xe7\x9a\x84\xe4" -"\xbb\x8a\xe5\xa4\xa9\x2c\x20\xe9\x96\x8b\xe7\x99\xbc\xe5\x8f\x8a" -"\xe6\xb8\xac\xe8\xa9\xa6\xe8\xbb\x9f\xe9\xab\x94\xe7\x9a\x84\xe9" -"\x80\x9f\xe5\xba\xa6\xe6\x98\xaf\xe4\xb8\x8d\xe5\xae\xb9\xe5\xbf" -"\xbd\xe8\xa6\x96\xe7\x9a\x84\x0a\xe8\xaa\xb2\xe9\xa1\x8c\x2e\x20" -"\xe7\x82\xba\xe5\x8a\xa0\xe5\xbf\xab\xe9\x96\x8b\xe7\x99\xbc\xe5" -"\x8f\x8a\xe6\xb8\xac\xe8\xa9\xa6\xe7\x9a\x84\xe9\x80\x9f\xe5\xba" -"\xa6\x2c\x20\xe6\x88\x91\xe5\x80\x91\xe4\xbe\xbf\xe5\xb8\xb8\xe5" -"\xb8\x8c\xe6\x9c\x9b\xe8\x83\xbd\xe5\x88\xa9\xe7\x94\xa8\xe4\xb8" -"\x80\xe4\xba\x9b\xe5\xb7\xb2\xe9\x96\x8b\xe7\x99\xbc\xe5\xa5\xbd" -"\xe7\x9a\x84\x0a\x6c\x69\x62\x72\x61\x72\x79\x2c\x20\xe4\xb8\xa6" -"\xe6\x9c\x89\xe4\xb8\x80\xe5\x80\x8b\x20\x66\x61\x73\x74\x20\x70" -"\x72\x6f\x74\x6f\x74\x79\x70\x69\x6e\x67\x20\xe7\x9a\x84\x20\x70" -"\x72\x6f\x67\x72\x61\x6d\x6d\x69\x6e\x67\x20\x6c\x61\x6e\x67\x75" -"\x61\x67\x65\x20\xe5\x8f\xaf\x0a\xe4\xbe\x9b\xe4\xbd\xbf\xe7\x94" -"\xa8\x2e\x20\xe7\x9b\xae\xe5\x89\x8d\xe6\x9c\x89\xe8\xa8\xb1\xe8" -"\xa8\xb1\xe5\xa4\x9a\xe5\xa4\x9a\xe7\x9a\x84\x20\x6c\x69\x62\x72" -"\x61\x72\x79\x20\xe6\x98\xaf\xe4\xbb\xa5\x20\x43\x20\xe5\xaf\xab" -"\xe6\x88\x90\x2c\x20\xe8\x80\x8c\x20\x50\x79\x74\x68\x6f\x6e\x20" -"\xe6\x98\xaf\xe4\xb8\x80\xe5\x80\x8b\x0a\x66\x61\x73\x74\x20\x70" -"\x72\x6f\x74\x6f\x74\x79\x70\x69\x6e\x67\x20\xe7\x9a\x84\x20\x70" -"\x72\x6f\x67\x72\x61\x6d\x6d\x69\x6e\x67\x20\x6c\x61\x6e\x67\x75" -"\x61\x67\x65\x2e\x20\xe6\x95\x85\xe6\x88\x91\xe5\x80\x91\xe5\xb8" -"\x8c\xe6\x9c\x9b\xe8\x83\xbd\xe5\xb0\x87\xe6\x97\xa2\xe6\x9c\x89" -"\xe7\x9a\x84\x0a\x43\x20\x6c\x69\x62\x72\x61\x72\x79\x20\xe6\x8b" -"\xbf\xe5\x88\xb0\x20\x50\x79\x74\x68\x6f\x6e\x20\xe7\x9a\x84\xe7" -"\x92\xb0\xe5\xa2\x83\xe4\xb8\xad\xe6\xb8\xac\xe8\xa9\xa6\xe5\x8f" -"\x8a\xe6\x95\xb4\xe5\x90\x88\x2e\x20\xe5\x85\xb6\xe4\xb8\xad\xe6" -"\x9c\x80\xe4\xb8\xbb\xe8\xa6\x81\xe4\xb9\x9f\xe6\x98\xaf\xe6\x88" -"\x91\xe5\x80\x91\xe6\x89\x80\x0a\xe8\xa6\x81\xe8\xa8\x8e\xe8\xab" -"\x96\xe7\x9a\x84\xe5\x95\x8f\xe9\xa1\x8c\xe5\xb0\xb1\xe6\x98\xaf" -"\x3a\x0a\x0a"), -'johab': ( -"\x99\xb1\xa4\x77\x88\x62\xd0\x61\x20\xcd\x5c\xaf\xa1\xc5\xa9\x9c" -"\x61\x0a\x0a\xdc\xc0\xdc\xc0\x90\x73\x21\x21\x20\xf1\x67\xe2\x9c" -"\xf0\x55\xcc\x81\xa3\x89\x9f\x85\x8a\xa1\x20\xdc\xde\xdc\xd3\xd2" -"\x7a\xd9\xaf\xd9\xaf\xd9\xaf\x20\x8b\x77\x96\xd3\x20\xdc\xd1\x95" -"\x81\x20\xdc\xc0\x2e\x20\x2e\x0a\xed\x3c\xb5\x77\xdc\xd1\x93\x77" -"\xd2\x73\x20\x2e\x20\x2e\x20\x2e\x20\x2e\x20\xac\xe1\xb6\x89\x9e" -"\xa1\x20\x95\x65\xd0\x62\xf0\xe0\x20\xe0\x3b\xd2\x7a\x20\x21\x20" -"\x21\x20\x21\x87\x41\x2e\x87\x41\x0a\xd3\x61\xd3\x61\xd3\x61\x20" -"\x88\x41\x88\x41\x88\x41\xd9\x69\x87\x41\x5f\x87\x41\x20\xb4\xe1" -"\x9f\x9a\x20\xc8\xa1\xc5\xc1\x8b\x7a\x20\x95\x61\xb7\x77\x20\xc3" -"\x97\xe2\x9c\x97\x69\xf0\xe0\x20\xdc\xc0\x97\x61\x8b\x7a\x0a\xac" 
-"\xe9\x9f\x7a\x20\xe0\x3b\xd2\x7a\x20\x2e\x20\x2e\x20\x2e\x20\x2e" -"\x20\x8a\x89\xb4\x81\xae\xba\x20\xdc\xd1\x8a\xa1\x20\xdc\xde\x9f" -"\x89\xdc\xc2\x8b\x7a\x20\xf1\x67\xf1\x62\xf5\x49\xed\xfc\xf3\xe9" -"\x8c\x61\xbb\x9a\x0a\xb5\xc1\xb2\xa1\xd2\x7a\x20\x21\x20\x21\x20" -"\xed\x3c\xb5\x77\xdc\xd1\x20\xe0\x3b\x93\x77\x8a\xa1\x20\xd9\x69" -"\xea\xbe\x89\xc5\x20\xb4\xf4\x93\x77\x8a\xa1\x93\x77\x20\xed\x3c" -"\x93\x77\x96\xc1\xd2\x7a\x20\x8b\x69\xb4\x81\x97\x7a\x0a\xdc\xde" -"\x9d\x61\x97\x41\xe2\x9c\x20\xaf\x81\xce\xa1\xae\xa1\xd2\x7a\x20" -"\xb4\xe1\x9f\x9a\x20\xf1\x67\xf1\x62\xf5\x49\xed\xfc\xf3\xe9\xaf" -"\x82\xdc\xef\x97\x69\xb4\x7a\x21\x21\x20\xdc\xc0\xdc\xc0\x90\x73" -"\xd9\xbd\x20\xd9\x62\xd9\x62\x2a\x0a\x0a", -"\xeb\x98\xa0\xeb\xb0\xa9\xea\xb0\x81\xed\x95\x98\x20\xed\x8e\xb2" -"\xec\x8b\x9c\xec\xbd\x9c\xeb\x9d\xbc\x0a\x0a\xe3\x89\xaf\xe3\x89" -"\xaf\xeb\x82\xa9\x21\x21\x20\xe5\x9b\xa0\xe4\xb9\x9d\xe6\x9c\x88" -"\xed\x8c\xa8\xeb\xaf\xa4\xeb\xa6\x94\xea\xb6\x88\x20\xe2\x93\xa1" -"\xe2\x93\x96\xed\x9b\x80\xc2\xbf\xc2\xbf\xc2\xbf\x20\xea\xb8\x8d" -"\xeb\x92\x99\x20\xe2\x93\x94\xeb\x8e\xa8\x20\xe3\x89\xaf\x2e\x20" -"\x2e\x0a\xe4\xba\x9e\xec\x98\x81\xe2\x93\x94\xeb\x8a\xa5\xed\x9a" -"\xb9\x20\x2e\x20\x2e\x20\x2e\x20\x2e\x20\xec\x84\x9c\xec\x9a\xb8" -"\xeb\xa4\x84\x20\xeb\x8e\x90\xed\x95\x99\xe4\xb9\x99\x20\xe5\xae" -"\xb6\xed\x9b\x80\x20\x21\x20\x21\x20\x21\xe3\x85\xa0\x2e\xe3\x85" -"\xa0\x0a\xed\x9d\x90\xed\x9d\x90\xed\x9d\x90\x20\xe3\x84\xb1\xe3" -"\x84\xb1\xe3\x84\xb1\xe2\x98\x86\xe3\x85\xa0\x5f\xe3\x85\xa0\x20" -"\xec\x96\xb4\xeb\xa6\xa8\x20\xed\x83\xb8\xec\xbd\xb0\xea\xb8\x90" -"\x20\xeb\x8e\x8c\xec\x9d\x91\x20\xec\xb9\x91\xe4\xb9\x9d\xeb\x93" -"\xa4\xe4\xb9\x99\x20\xe3\x89\xaf\xeb\x93\x9c\xea\xb8\x90\x0a\xec" -"\x84\xa4\xeb\xa6\x8c\x20\xe5\xae\xb6\xed\x9b\x80\x20\x2e\x20\x2e" -"\x20\x2e\x20\x2e\x20\xea\xb5\xb4\xec\x95\xa0\xec\x89\x8c\x20\xe2" -"\x93\x94\xea\xb6\x88\x20\xe2\x93\xa1\xeb\xa6\x98\xe3\x89\xb1\xea" -"\xb8\x90\x20\xe5\x9b\xa0\xe4\xbb\x81\xe5\xb7\x9d\xef\xa6\x81\xe4" -"\xb8\xad\xea\xb9\x8c\xec\xa6\xbc\x0a\xec\x99\x80\xec\x92\x80\xed" -"\x9b\x80\x20\x21\x20\x21\x20\xe4\xba\x9e\xec\x98\x81\xe2\x93\x94" -"\x20\xe5\xae\xb6\xeb\x8a\xa5\xea\xb6\x88\x20\xe2\x98\x86\xe4\xb8" -"\x8a\xea\xb4\x80\x20\xec\x97\x86\xeb\x8a\xa5\xea\xb6\x88\xeb\x8a" -"\xa5\x20\xe4\xba\x9e\xeb\x8a\xa5\xeb\x92\x88\xed\x9b\x80\x20\xea" -"\xb8\x80\xec\x95\xa0\xeb\x93\xb4\x0a\xe2\x93\xa1\xeb\xa0\xa4\xeb" -"\x93\x80\xe4\xb9\x9d\x20\xec\x8b\x80\xed\x92\x94\xec\x88\xb4\xed" -"\x9b\x80\x20\xec\x96\xb4\xeb\xa6\xa8\x20\xe5\x9b\xa0\xe4\xbb\x81" -"\xe5\xb7\x9d\xef\xa6\x81\xe4\xb8\xad\xec\x8b\x81\xe2\x91\xa8\xeb" -"\x93\xa4\xec\x95\x9c\x21\x21\x20\xe3\x89\xaf\xe3\x89\xaf\xeb\x82" -"\xa9\xe2\x99\xa1\x20\xe2\x8c\x92\xe2\x8c\x92\x2a\x0a\x0a"), -'shift_jis': ( -"\x50\x79\x74\x68\x6f\x6e\x20\x82\xcc\x8a\x4a\x94\xad\x82\xcd\x81" -"\x41\x31\x39\x39\x30\x20\x94\x4e\x82\xb2\x82\xeb\x82\xa9\x82\xe7" -"\x8a\x4a\x8e\x6e\x82\xb3\x82\xea\x82\xc4\x82\xa2\x82\xdc\x82\xb7" -"\x81\x42\x0a\x8a\x4a\x94\xad\x8e\xd2\x82\xcc\x20\x47\x75\x69\x64" -"\x6f\x20\x76\x61\x6e\x20\x52\x6f\x73\x73\x75\x6d\x20\x82\xcd\x8b" -"\xb3\x88\xe7\x97\x70\x82\xcc\x83\x76\x83\x8d\x83\x4f\x83\x89\x83" -"\x7e\x83\x93\x83\x4f\x8c\xbe\x8c\xea\x81\x75\x41\x42\x43\x81\x76" -"\x82\xcc\x8a\x4a\x94\xad\x82\xc9\x8e\x51\x89\xc1\x82\xb5\x82\xc4" -"\x82\xa2\x82\xdc\x82\xb5\x82\xbd\x82\xaa\x81\x41\x41\x42\x43\x20" -"\x82\xcd\x8e\xc0\x97\x70\x8f\xe3\x82\xcc\x96\xda\x93\x49\x82\xc9" -"\x82\xcd\x82\xa0\x82\xdc\x82\xe8\x93\x4b\x82\xb5\x82\xc4\x82\xa2" 
-"\x82\xdc\x82\xb9\x82\xf1\x82\xc5\x82\xb5\x82\xbd\x81\x42\x0a\x82" -"\xb1\x82\xcc\x82\xbd\x82\xdf\x81\x41\x47\x75\x69\x64\x6f\x20\x82" -"\xcd\x82\xe6\x82\xe8\x8e\xc0\x97\x70\x93\x49\x82\xc8\x83\x76\x83" -"\x8d\x83\x4f\x83\x89\x83\x7e\x83\x93\x83\x4f\x8c\xbe\x8c\xea\x82" -"\xcc\x8a\x4a\x94\xad\x82\xf0\x8a\x4a\x8e\x6e\x82\xb5\x81\x41\x89" -"\x70\x8d\x91\x20\x42\x42\x53\x20\x95\xfa\x91\x97\x82\xcc\x83\x52" -"\x83\x81\x83\x66\x83\x42\x94\xd4\x91\x67\x81\x75\x83\x82\x83\x93" -"\x83\x65\x83\x42\x20\x83\x70\x83\x43\x83\x5c\x83\x93\x81\x76\x82" -"\xcc\x83\x74\x83\x40\x83\x93\x82\xc5\x82\xa0\x82\xe9\x20\x47\x75" -"\x69\x64\x6f\x20\x82\xcd\x82\xb1\x82\xcc\x8c\xbe\x8c\xea\x82\xf0" -"\x81\x75\x50\x79\x74\x68\x6f\x6e\x81\x76\x82\xc6\x96\xbc\x82\xc3" -"\x82\xaf\x82\xdc\x82\xb5\x82\xbd\x81\x42\x0a\x82\xb1\x82\xcc\x82" -"\xe6\x82\xa4\x82\xc8\x94\x77\x8c\x69\x82\xa9\x82\xe7\x90\xb6\x82" -"\xdc\x82\xea\x82\xbd\x20\x50\x79\x74\x68\x6f\x6e\x20\x82\xcc\x8c" -"\xbe\x8c\xea\x90\xdd\x8c\x76\x82\xcd\x81\x41\x81\x75\x83\x56\x83" -"\x93\x83\x76\x83\x8b\x81\x76\x82\xc5\x81\x75\x8f\x4b\x93\xbe\x82" -"\xaa\x97\x65\x88\xd5\x81\x76\x82\xc6\x82\xa2\x82\xa4\x96\xda\x95" -"\x57\x82\xc9\x8f\x64\x93\x5f\x82\xaa\x92\x75\x82\xa9\x82\xea\x82" -"\xc4\x82\xa2\x82\xdc\x82\xb7\x81\x42\x0a\x91\xbd\x82\xad\x82\xcc" -"\x83\x58\x83\x4e\x83\x8a\x83\x76\x83\x67\x8c\x6e\x8c\xbe\x8c\xea" -"\x82\xc5\x82\xcd\x83\x86\x81\x5b\x83\x55\x82\xcc\x96\xda\x90\xe6" -"\x82\xcc\x97\x98\x95\xd6\x90\xab\x82\xf0\x97\x44\x90\xe6\x82\xb5" -"\x82\xc4\x90\x46\x81\x58\x82\xc8\x8b\x40\x94\x5c\x82\xf0\x8c\xbe" -"\x8c\xea\x97\x76\x91\x66\x82\xc6\x82\xb5\x82\xc4\x8e\xe6\x82\xe8" -"\x93\xfc\x82\xea\x82\xe9\x8f\xea\x8d\x87\x82\xaa\x91\xbd\x82\xa2" -"\x82\xcc\x82\xc5\x82\xb7\x82\xaa\x81\x41\x50\x79\x74\x68\x6f\x6e" -"\x20\x82\xc5\x82\xcd\x82\xbb\x82\xa4\x82\xa2\x82\xc1\x82\xbd\x8f" -"\xac\x8d\xd7\x8d\x48\x82\xaa\x92\xc7\x89\xc1\x82\xb3\x82\xea\x82" -"\xe9\x82\xb1\x82\xc6\x82\xcd\x82\xa0\x82\xdc\x82\xe8\x82\xa0\x82" -"\xe8\x82\xdc\x82\xb9\x82\xf1\x81\x42\x0a\x8c\xbe\x8c\xea\x8e\xa9" -"\x91\xcc\x82\xcc\x8b\x40\x94\x5c\x82\xcd\x8d\xc5\x8f\xac\x8c\xc0" -"\x82\xc9\x89\x9f\x82\xb3\x82\xa6\x81\x41\x95\x4b\x97\x76\x82\xc8" -"\x8b\x40\x94\x5c\x82\xcd\x8a\x67\x92\xa3\x83\x82\x83\x57\x83\x85" -"\x81\x5b\x83\x8b\x82\xc6\x82\xb5\x82\xc4\x92\xc7\x89\xc1\x82\xb7" -"\x82\xe9\x81\x41\x82\xc6\x82\xa2\x82\xa4\x82\xcc\x82\xaa\x20\x50" -"\x79\x74\x68\x6f\x6e\x20\x82\xcc\x83\x7c\x83\x8a\x83\x56\x81\x5b" -"\x82\xc5\x82\xb7\x81\x42\x0a\x0a", -"\x50\x79\x74\x68\x6f\x6e\x20\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba" -"\xe3\x81\xaf\xe3\x80\x81\x31\x39\x39\x30\x20\xe5\xb9\xb4\xe3\x81" -"\x94\xe3\x82\x8d\xe3\x81\x8b\xe3\x82\x89\xe9\x96\x8b\xe5\xa7\x8b" -"\xe3\x81\x95\xe3\x82\x8c\xe3\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3" -"\x81\x99\xe3\x80\x82\x0a\xe9\x96\x8b\xe7\x99\xba\xe8\x80\x85\xe3" -"\x81\xae\x20\x47\x75\x69\x64\x6f\x20\x76\x61\x6e\x20\x52\x6f\x73" -"\x73\x75\x6d\x20\xe3\x81\xaf\xe6\x95\x99\xe8\x82\xb2\xe7\x94\xa8" -"\xe3\x81\xae\xe3\x83\x97\xe3\x83\xad\xe3\x82\xb0\xe3\x83\xa9\xe3" -"\x83\x9f\xe3\x83\xb3\xe3\x82\xb0\xe8\xa8\x80\xe8\xaa\x9e\xe3\x80" -"\x8c\x41\x42\x43\xe3\x80\x8d\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba" -"\xe3\x81\xab\xe5\x8f\x82\xe5\x8a\xa0\xe3\x81\x97\xe3\x81\xa6\xe3" -"\x81\x84\xe3\x81\xbe\xe3\x81\x97\xe3\x81\x9f\xe3\x81\x8c\xe3\x80" -"\x81\x41\x42\x43\x20\xe3\x81\xaf\xe5\xae\x9f\xe7\x94\xa8\xe4\xb8" -"\x8a\xe3\x81\xae\xe7\x9b\xae\xe7\x9a\x84\xe3\x81\xab\xe3\x81\xaf" -"\xe3\x81\x82\xe3\x81\xbe\xe3\x82\x8a\xe9\x81\xa9\xe3\x81\x97\xe3" 
-"\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3\x81\x9b\xe3\x82\x93\xe3\x81" -"\xa7\xe3\x81\x97\xe3\x81\x9f\xe3\x80\x82\x0a\xe3\x81\x93\xe3\x81" -"\xae\xe3\x81\x9f\xe3\x82\x81\xe3\x80\x81\x47\x75\x69\x64\x6f\x20" -"\xe3\x81\xaf\xe3\x82\x88\xe3\x82\x8a\xe5\xae\x9f\xe7\x94\xa8\xe7" -"\x9a\x84\xe3\x81\xaa\xe3\x83\x97\xe3\x83\xad\xe3\x82\xb0\xe3\x83" -"\xa9\xe3\x83\x9f\xe3\x83\xb3\xe3\x82\xb0\xe8\xa8\x80\xe8\xaa\x9e" -"\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba\xe3\x82\x92\xe9\x96\x8b\xe5" -"\xa7\x8b\xe3\x81\x97\xe3\x80\x81\xe8\x8b\xb1\xe5\x9b\xbd\x20\x42" -"\x42\x53\x20\xe6\x94\xbe\xe9\x80\x81\xe3\x81\xae\xe3\x82\xb3\xe3" -"\x83\xa1\xe3\x83\x87\xe3\x82\xa3\xe7\x95\xaa\xe7\xb5\x84\xe3\x80" -"\x8c\xe3\x83\xa2\xe3\x83\xb3\xe3\x83\x86\xe3\x82\xa3\x20\xe3\x83" -"\x91\xe3\x82\xa4\xe3\x82\xbd\xe3\x83\xb3\xe3\x80\x8d\xe3\x81\xae" -"\xe3\x83\x95\xe3\x82\xa1\xe3\x83\xb3\xe3\x81\xa7\xe3\x81\x82\xe3" -"\x82\x8b\x20\x47\x75\x69\x64\x6f\x20\xe3\x81\xaf\xe3\x81\x93\xe3" -"\x81\xae\xe8\xa8\x80\xe8\xaa\x9e\xe3\x82\x92\xe3\x80\x8c\x50\x79" -"\x74\x68\x6f\x6e\xe3\x80\x8d\xe3\x81\xa8\xe5\x90\x8d\xe3\x81\xa5" -"\xe3\x81\x91\xe3\x81\xbe\xe3\x81\x97\xe3\x81\x9f\xe3\x80\x82\x0a" -"\xe3\x81\x93\xe3\x81\xae\xe3\x82\x88\xe3\x81\x86\xe3\x81\xaa\xe8" -"\x83\x8c\xe6\x99\xaf\xe3\x81\x8b\xe3\x82\x89\xe7\x94\x9f\xe3\x81" -"\xbe\xe3\x82\x8c\xe3\x81\x9f\x20\x50\x79\x74\x68\x6f\x6e\x20\xe3" -"\x81\xae\xe8\xa8\x80\xe8\xaa\x9e\xe8\xa8\xad\xe8\xa8\x88\xe3\x81" -"\xaf\xe3\x80\x81\xe3\x80\x8c\xe3\x82\xb7\xe3\x83\xb3\xe3\x83\x97" -"\xe3\x83\xab\xe3\x80\x8d\xe3\x81\xa7\xe3\x80\x8c\xe7\xbf\x92\xe5" -"\xbe\x97\xe3\x81\x8c\xe5\xae\xb9\xe6\x98\x93\xe3\x80\x8d\xe3\x81" -"\xa8\xe3\x81\x84\xe3\x81\x86\xe7\x9b\xae\xe6\xa8\x99\xe3\x81\xab" -"\xe9\x87\x8d\xe7\x82\xb9\xe3\x81\x8c\xe7\xbd\xae\xe3\x81\x8b\xe3" -"\x82\x8c\xe3\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3\x81\x99\xe3\x80" -"\x82\x0a\xe5\xa4\x9a\xe3\x81\x8f\xe3\x81\xae\xe3\x82\xb9\xe3\x82" -"\xaf\xe3\x83\xaa\xe3\x83\x97\xe3\x83\x88\xe7\xb3\xbb\xe8\xa8\x80" -"\xe8\xaa\x9e\xe3\x81\xa7\xe3\x81\xaf\xe3\x83\xa6\xe3\x83\xbc\xe3" -"\x82\xb6\xe3\x81\xae\xe7\x9b\xae\xe5\x85\x88\xe3\x81\xae\xe5\x88" -"\xa9\xe4\xbe\xbf\xe6\x80\xa7\xe3\x82\x92\xe5\x84\xaa\xe5\x85\x88" -"\xe3\x81\x97\xe3\x81\xa6\xe8\x89\xb2\xe3\x80\x85\xe3\x81\xaa\xe6" -"\xa9\x9f\xe8\x83\xbd\xe3\x82\x92\xe8\xa8\x80\xe8\xaa\x9e\xe8\xa6" -"\x81\xe7\xb4\xa0\xe3\x81\xa8\xe3\x81\x97\xe3\x81\xa6\xe5\x8f\x96" -"\xe3\x82\x8a\xe5\x85\xa5\xe3\x82\x8c\xe3\x82\x8b\xe5\xa0\xb4\xe5" -"\x90\x88\xe3\x81\x8c\xe5\xa4\x9a\xe3\x81\x84\xe3\x81\xae\xe3\x81" -"\xa7\xe3\x81\x99\xe3\x81\x8c\xe3\x80\x81\x50\x79\x74\x68\x6f\x6e" -"\x20\xe3\x81\xa7\xe3\x81\xaf\xe3\x81\x9d\xe3\x81\x86\xe3\x81\x84" -"\xe3\x81\xa3\xe3\x81\x9f\xe5\xb0\x8f\xe7\xb4\xb0\xe5\xb7\xa5\xe3" -"\x81\x8c\xe8\xbf\xbd\xe5\x8a\xa0\xe3\x81\x95\xe3\x82\x8c\xe3\x82" -"\x8b\xe3\x81\x93\xe3\x81\xa8\xe3\x81\xaf\xe3\x81\x82\xe3\x81\xbe" -"\xe3\x82\x8a\xe3\x81\x82\xe3\x82\x8a\xe3\x81\xbe\xe3\x81\x9b\xe3" -"\x82\x93\xe3\x80\x82\x0a\xe8\xa8\x80\xe8\xaa\x9e\xe8\x87\xaa\xe4" -"\xbd\x93\xe3\x81\xae\xe6\xa9\x9f\xe8\x83\xbd\xe3\x81\xaf\xe6\x9c" -"\x80\xe5\xb0\x8f\xe9\x99\x90\xe3\x81\xab\xe6\x8a\xbc\xe3\x81\x95" -"\xe3\x81\x88\xe3\x80\x81\xe5\xbf\x85\xe8\xa6\x81\xe3\x81\xaa\xe6" -"\xa9\x9f\xe8\x83\xbd\xe3\x81\xaf\xe6\x8b\xa1\xe5\xbc\xb5\xe3\x83" -"\xa2\xe3\x82\xb8\xe3\x83\xa5\xe3\x83\xbc\xe3\x83\xab\xe3\x81\xa8" -"\xe3\x81\x97\xe3\x81\xa6\xe8\xbf\xbd\xe5\x8a\xa0\xe3\x81\x99\xe3" -"\x82\x8b\xe3\x80\x81\xe3\x81\xa8\xe3\x81\x84\xe3\x81\x86\xe3\x81" -"\xae\xe3\x81\x8c\x20\x50\x79\x74\x68\x6f\x6e\x20\xe3\x81\xae\xe3" 
-"\x83\x9d\xe3\x83\xaa\xe3\x82\xb7\xe3\x83\xbc\xe3\x81\xa7\xe3\x81" -"\x99\xe3\x80\x82\x0a\x0a"), -'shift_jisx0213': ( -"\x50\x79\x74\x68\x6f\x6e\x20\x82\xcc\x8a\x4a\x94\xad\x82\xcd\x81" -"\x41\x31\x39\x39\x30\x20\x94\x4e\x82\xb2\x82\xeb\x82\xa9\x82\xe7" -"\x8a\x4a\x8e\x6e\x82\xb3\x82\xea\x82\xc4\x82\xa2\x82\xdc\x82\xb7" -"\x81\x42\x0a\x8a\x4a\x94\xad\x8e\xd2\x82\xcc\x20\x47\x75\x69\x64" -"\x6f\x20\x76\x61\x6e\x20\x52\x6f\x73\x73\x75\x6d\x20\x82\xcd\x8b" -"\xb3\x88\xe7\x97\x70\x82\xcc\x83\x76\x83\x8d\x83\x4f\x83\x89\x83" -"\x7e\x83\x93\x83\x4f\x8c\xbe\x8c\xea\x81\x75\x41\x42\x43\x81\x76" -"\x82\xcc\x8a\x4a\x94\xad\x82\xc9\x8e\x51\x89\xc1\x82\xb5\x82\xc4" -"\x82\xa2\x82\xdc\x82\xb5\x82\xbd\x82\xaa\x81\x41\x41\x42\x43\x20" -"\x82\xcd\x8e\xc0\x97\x70\x8f\xe3\x82\xcc\x96\xda\x93\x49\x82\xc9" -"\x82\xcd\x82\xa0\x82\xdc\x82\xe8\x93\x4b\x82\xb5\x82\xc4\x82\xa2" -"\x82\xdc\x82\xb9\x82\xf1\x82\xc5\x82\xb5\x82\xbd\x81\x42\x0a\x82" -"\xb1\x82\xcc\x82\xbd\x82\xdf\x81\x41\x47\x75\x69\x64\x6f\x20\x82" -"\xcd\x82\xe6\x82\xe8\x8e\xc0\x97\x70\x93\x49\x82\xc8\x83\x76\x83" -"\x8d\x83\x4f\x83\x89\x83\x7e\x83\x93\x83\x4f\x8c\xbe\x8c\xea\x82" -"\xcc\x8a\x4a\x94\xad\x82\xf0\x8a\x4a\x8e\x6e\x82\xb5\x81\x41\x89" -"\x70\x8d\x91\x20\x42\x42\x53\x20\x95\xfa\x91\x97\x82\xcc\x83\x52" -"\x83\x81\x83\x66\x83\x42\x94\xd4\x91\x67\x81\x75\x83\x82\x83\x93" -"\x83\x65\x83\x42\x20\x83\x70\x83\x43\x83\x5c\x83\x93\x81\x76\x82" -"\xcc\x83\x74\x83\x40\x83\x93\x82\xc5\x82\xa0\x82\xe9\x20\x47\x75" -"\x69\x64\x6f\x20\x82\xcd\x82\xb1\x82\xcc\x8c\xbe\x8c\xea\x82\xf0" -"\x81\x75\x50\x79\x74\x68\x6f\x6e\x81\x76\x82\xc6\x96\xbc\x82\xc3" -"\x82\xaf\x82\xdc\x82\xb5\x82\xbd\x81\x42\x0a\x82\xb1\x82\xcc\x82" -"\xe6\x82\xa4\x82\xc8\x94\x77\x8c\x69\x82\xa9\x82\xe7\x90\xb6\x82" -"\xdc\x82\xea\x82\xbd\x20\x50\x79\x74\x68\x6f\x6e\x20\x82\xcc\x8c" -"\xbe\x8c\xea\x90\xdd\x8c\x76\x82\xcd\x81\x41\x81\x75\x83\x56\x83" -"\x93\x83\x76\x83\x8b\x81\x76\x82\xc5\x81\x75\x8f\x4b\x93\xbe\x82" -"\xaa\x97\x65\x88\xd5\x81\x76\x82\xc6\x82\xa2\x82\xa4\x96\xda\x95" -"\x57\x82\xc9\x8f\x64\x93\x5f\x82\xaa\x92\x75\x82\xa9\x82\xea\x82" -"\xc4\x82\xa2\x82\xdc\x82\xb7\x81\x42\x0a\x91\xbd\x82\xad\x82\xcc" -"\x83\x58\x83\x4e\x83\x8a\x83\x76\x83\x67\x8c\x6e\x8c\xbe\x8c\xea" -"\x82\xc5\x82\xcd\x83\x86\x81\x5b\x83\x55\x82\xcc\x96\xda\x90\xe6" -"\x82\xcc\x97\x98\x95\xd6\x90\xab\x82\xf0\x97\x44\x90\xe6\x82\xb5" -"\x82\xc4\x90\x46\x81\x58\x82\xc8\x8b\x40\x94\x5c\x82\xf0\x8c\xbe" -"\x8c\xea\x97\x76\x91\x66\x82\xc6\x82\xb5\x82\xc4\x8e\xe6\x82\xe8" -"\x93\xfc\x82\xea\x82\xe9\x8f\xea\x8d\x87\x82\xaa\x91\xbd\x82\xa2" -"\x82\xcc\x82\xc5\x82\xb7\x82\xaa\x81\x41\x50\x79\x74\x68\x6f\x6e" -"\x20\x82\xc5\x82\xcd\x82\xbb\x82\xa4\x82\xa2\x82\xc1\x82\xbd\x8f" -"\xac\x8d\xd7\x8d\x48\x82\xaa\x92\xc7\x89\xc1\x82\xb3\x82\xea\x82" -"\xe9\x82\xb1\x82\xc6\x82\xcd\x82\xa0\x82\xdc\x82\xe8\x82\xa0\x82" -"\xe8\x82\xdc\x82\xb9\x82\xf1\x81\x42\x0a\x8c\xbe\x8c\xea\x8e\xa9" -"\x91\xcc\x82\xcc\x8b\x40\x94\x5c\x82\xcd\x8d\xc5\x8f\xac\x8c\xc0" -"\x82\xc9\x89\x9f\x82\xb3\x82\xa6\x81\x41\x95\x4b\x97\x76\x82\xc8" -"\x8b\x40\x94\x5c\x82\xcd\x8a\x67\x92\xa3\x83\x82\x83\x57\x83\x85" -"\x81\x5b\x83\x8b\x82\xc6\x82\xb5\x82\xc4\x92\xc7\x89\xc1\x82\xb7" -"\x82\xe9\x81\x41\x82\xc6\x82\xa2\x82\xa4\x82\xcc\x82\xaa\x20\x50" -"\x79\x74\x68\x6f\x6e\x20\x82\xcc\x83\x7c\x83\x8a\x83\x56\x81\x5b" -"\x82\xc5\x82\xb7\x81\x42\x0a\x0a\x83\x6d\x82\xf5\x20\x83\x9e\x20" -"\x83\x67\x83\x4c\x88\x4b\x88\x79\x20\x98\x83\xfc\xd6\x20\xfc\xd2" -"\xfc\xe6\xfb\xd4\x0a", -"\x50\x79\x74\x68\x6f\x6e\x20\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba" 
-"\xe3\x81\xaf\xe3\x80\x81\x31\x39\x39\x30\x20\xe5\xb9\xb4\xe3\x81" -"\x94\xe3\x82\x8d\xe3\x81\x8b\xe3\x82\x89\xe9\x96\x8b\xe5\xa7\x8b" -"\xe3\x81\x95\xe3\x82\x8c\xe3\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3" -"\x81\x99\xe3\x80\x82\x0a\xe9\x96\x8b\xe7\x99\xba\xe8\x80\x85\xe3" -"\x81\xae\x20\x47\x75\x69\x64\x6f\x20\x76\x61\x6e\x20\x52\x6f\x73" -"\x73\x75\x6d\x20\xe3\x81\xaf\xe6\x95\x99\xe8\x82\xb2\xe7\x94\xa8" -"\xe3\x81\xae\xe3\x83\x97\xe3\x83\xad\xe3\x82\xb0\xe3\x83\xa9\xe3" -"\x83\x9f\xe3\x83\xb3\xe3\x82\xb0\xe8\xa8\x80\xe8\xaa\x9e\xe3\x80" -"\x8c\x41\x42\x43\xe3\x80\x8d\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba" -"\xe3\x81\xab\xe5\x8f\x82\xe5\x8a\xa0\xe3\x81\x97\xe3\x81\xa6\xe3" -"\x81\x84\xe3\x81\xbe\xe3\x81\x97\xe3\x81\x9f\xe3\x81\x8c\xe3\x80" -"\x81\x41\x42\x43\x20\xe3\x81\xaf\xe5\xae\x9f\xe7\x94\xa8\xe4\xb8" -"\x8a\xe3\x81\xae\xe7\x9b\xae\xe7\x9a\x84\xe3\x81\xab\xe3\x81\xaf" -"\xe3\x81\x82\xe3\x81\xbe\xe3\x82\x8a\xe9\x81\xa9\xe3\x81\x97\xe3" -"\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3\x81\x9b\xe3\x82\x93\xe3\x81" -"\xa7\xe3\x81\x97\xe3\x81\x9f\xe3\x80\x82\x0a\xe3\x81\x93\xe3\x81" -"\xae\xe3\x81\x9f\xe3\x82\x81\xe3\x80\x81\x47\x75\x69\x64\x6f\x20" -"\xe3\x81\xaf\xe3\x82\x88\xe3\x82\x8a\xe5\xae\x9f\xe7\x94\xa8\xe7" -"\x9a\x84\xe3\x81\xaa\xe3\x83\x97\xe3\x83\xad\xe3\x82\xb0\xe3\x83" -"\xa9\xe3\x83\x9f\xe3\x83\xb3\xe3\x82\xb0\xe8\xa8\x80\xe8\xaa\x9e" -"\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba\xe3\x82\x92\xe9\x96\x8b\xe5" -"\xa7\x8b\xe3\x81\x97\xe3\x80\x81\xe8\x8b\xb1\xe5\x9b\xbd\x20\x42" -"\x42\x53\x20\xe6\x94\xbe\xe9\x80\x81\xe3\x81\xae\xe3\x82\xb3\xe3" -"\x83\xa1\xe3\x83\x87\xe3\x82\xa3\xe7\x95\xaa\xe7\xb5\x84\xe3\x80" -"\x8c\xe3\x83\xa2\xe3\x83\xb3\xe3\x83\x86\xe3\x82\xa3\x20\xe3\x83" -"\x91\xe3\x82\xa4\xe3\x82\xbd\xe3\x83\xb3\xe3\x80\x8d\xe3\x81\xae" -"\xe3\x83\x95\xe3\x82\xa1\xe3\x83\xb3\xe3\x81\xa7\xe3\x81\x82\xe3" -"\x82\x8b\x20\x47\x75\x69\x64\x6f\x20\xe3\x81\xaf\xe3\x81\x93\xe3" -"\x81\xae\xe8\xa8\x80\xe8\xaa\x9e\xe3\x82\x92\xe3\x80\x8c\x50\x79" -"\x74\x68\x6f\x6e\xe3\x80\x8d\xe3\x81\xa8\xe5\x90\x8d\xe3\x81\xa5" -"\xe3\x81\x91\xe3\x81\xbe\xe3\x81\x97\xe3\x81\x9f\xe3\x80\x82\x0a" -"\xe3\x81\x93\xe3\x81\xae\xe3\x82\x88\xe3\x81\x86\xe3\x81\xaa\xe8" -"\x83\x8c\xe6\x99\xaf\xe3\x81\x8b\xe3\x82\x89\xe7\x94\x9f\xe3\x81" -"\xbe\xe3\x82\x8c\xe3\x81\x9f\x20\x50\x79\x74\x68\x6f\x6e\x20\xe3" -"\x81\xae\xe8\xa8\x80\xe8\xaa\x9e\xe8\xa8\xad\xe8\xa8\x88\xe3\x81" -"\xaf\xe3\x80\x81\xe3\x80\x8c\xe3\x82\xb7\xe3\x83\xb3\xe3\x83\x97" -"\xe3\x83\xab\xe3\x80\x8d\xe3\x81\xa7\xe3\x80\x8c\xe7\xbf\x92\xe5" -"\xbe\x97\xe3\x81\x8c\xe5\xae\xb9\xe6\x98\x93\xe3\x80\x8d\xe3\x81" -"\xa8\xe3\x81\x84\xe3\x81\x86\xe7\x9b\xae\xe6\xa8\x99\xe3\x81\xab" -"\xe9\x87\x8d\xe7\x82\xb9\xe3\x81\x8c\xe7\xbd\xae\xe3\x81\x8b\xe3" -"\x82\x8c\xe3\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3\x81\x99\xe3\x80" -"\x82\x0a\xe5\xa4\x9a\xe3\x81\x8f\xe3\x81\xae\xe3\x82\xb9\xe3\x82" -"\xaf\xe3\x83\xaa\xe3\x83\x97\xe3\x83\x88\xe7\xb3\xbb\xe8\xa8\x80" -"\xe8\xaa\x9e\xe3\x81\xa7\xe3\x81\xaf\xe3\x83\xa6\xe3\x83\xbc\xe3" -"\x82\xb6\xe3\x81\xae\xe7\x9b\xae\xe5\x85\x88\xe3\x81\xae\xe5\x88" -"\xa9\xe4\xbe\xbf\xe6\x80\xa7\xe3\x82\x92\xe5\x84\xaa\xe5\x85\x88" -"\xe3\x81\x97\xe3\x81\xa6\xe8\x89\xb2\xe3\x80\x85\xe3\x81\xaa\xe6" -"\xa9\x9f\xe8\x83\xbd\xe3\x82\x92\xe8\xa8\x80\xe8\xaa\x9e\xe8\xa6" -"\x81\xe7\xb4\xa0\xe3\x81\xa8\xe3\x81\x97\xe3\x81\xa6\xe5\x8f\x96" -"\xe3\x82\x8a\xe5\x85\xa5\xe3\x82\x8c\xe3\x82\x8b\xe5\xa0\xb4\xe5" -"\x90\x88\xe3\x81\x8c\xe5\xa4\x9a\xe3\x81\x84\xe3\x81\xae\xe3\x81" -"\xa7\xe3\x81\x99\xe3\x81\x8c\xe3\x80\x81\x50\x79\x74\x68\x6f\x6e" 
-"\x20\xe3\x81\xa7\xe3\x81\xaf\xe3\x81\x9d\xe3\x81\x86\xe3\x81\x84" -"\xe3\x81\xa3\xe3\x81\x9f\xe5\xb0\x8f\xe7\xb4\xb0\xe5\xb7\xa5\xe3" -"\x81\x8c\xe8\xbf\xbd\xe5\x8a\xa0\xe3\x81\x95\xe3\x82\x8c\xe3\x82" -"\x8b\xe3\x81\x93\xe3\x81\xa8\xe3\x81\xaf\xe3\x81\x82\xe3\x81\xbe" -"\xe3\x82\x8a\xe3\x81\x82\xe3\x82\x8a\xe3\x81\xbe\xe3\x81\x9b\xe3" -"\x82\x93\xe3\x80\x82\x0a\xe8\xa8\x80\xe8\xaa\x9e\xe8\x87\xaa\xe4" -"\xbd\x93\xe3\x81\xae\xe6\xa9\x9f\xe8\x83\xbd\xe3\x81\xaf\xe6\x9c" -"\x80\xe5\xb0\x8f\xe9\x99\x90\xe3\x81\xab\xe6\x8a\xbc\xe3\x81\x95" -"\xe3\x81\x88\xe3\x80\x81\xe5\xbf\x85\xe8\xa6\x81\xe3\x81\xaa\xe6" -"\xa9\x9f\xe8\x83\xbd\xe3\x81\xaf\xe6\x8b\xa1\xe5\xbc\xb5\xe3\x83" -"\xa2\xe3\x82\xb8\xe3\x83\xa5\xe3\x83\xbc\xe3\x83\xab\xe3\x81\xa8" -"\xe3\x81\x97\xe3\x81\xa6\xe8\xbf\xbd\xe5\x8a\xa0\xe3\x81\x99\xe3" -"\x82\x8b\xe3\x80\x81\xe3\x81\xa8\xe3\x81\x84\xe3\x81\x86\xe3\x81" -"\xae\xe3\x81\x8c\x20\x50\x79\x74\x68\x6f\x6e\x20\xe3\x81\xae\xe3" -"\x83\x9d\xe3\x83\xaa\xe3\x82\xb7\xe3\x83\xbc\xe3\x81\xa7\xe3\x81" -"\x99\xe3\x80\x82\x0a\x0a\xe3\x83\x8e\xe3\x81\x8b\xe3\x82\x9a\x20" -"\xe3\x83\x88\xe3\x82\x9a\x20\xe3\x83\x88\xe3\x82\xad\xef\xa8\xb6" -"\xef\xa8\xb9\x20\xf0\xa1\x9a\xb4\xf0\xaa\x8e\x8c\x20\xe9\xba\x80" -"\xe9\xbd\x81\xf0\xa9\x9b\xb0\x0a"), -} diff --git a/lib-python/2.7/test/crashers/README b/lib-python/2.7/test/crashers/README --- a/lib-python/2.7/test/crashers/README +++ b/lib-python/2.7/test/crashers/README @@ -1,20 +1,16 @@ -This directory only contains tests for outstanding bugs that cause -the interpreter to segfault. Ideally this directory should always -be empty. Sometimes it may not be easy to fix the underlying cause. +This directory only contains tests for outstanding bugs that cause the +interpreter to segfault. Ideally this directory should always be empty, but +sometimes it may not be easy to fix the underlying cause and the bug is deemed +too obscure to invest the effort. Each test should fail when run from the command line: ./python Lib/test/crashers/weakref_in_del.py -Each test should have a link to the bug report: +Put as much info into a docstring or comments to help determine the cause of the +failure, as well as a bugs.python.org issue number if it exists. Particularly +note if the cause is system or environment dependent and what the variables are. - # http://python.org/sf/BUG# - -Put as much info into a docstring or comments to help determine -the cause of the failure. Particularly note if the cause is -system or environment dependent and what the variables are. - -Once the crash is fixed, the test case should be moved into an appropriate -test (even if it was originally from the test suite). This ensures the -regression doesn't happen again. And if it does, it should be easier -to track down. +Once the crash is fixed, the test case should be moved into an appropriate test +(even if it was originally from the test suite). This ensures the regression +doesn't happen again. And if it does, it should be easier to track down. diff --git a/lib-python/2.7/test/crashers/recursion_limit_too_high.py b/lib-python/2.7/test/crashers/recursion_limit_too_high.py --- a/lib-python/2.7/test/crashers/recursion_limit_too_high.py +++ b/lib-python/2.7/test/crashers/recursion_limit_too_high.py @@ -5,7 +5,7 @@ # file handles. # The point of this example is to show that sys.setrecursionlimit() is a -# hack, and not a robust solution. This example simply exercices a path +# hack, and not a robust solution. 
This example simply exercises a path # where it takes many C-level recursions, consuming a lot of stack # space, for each Python-level recursion. So 1000 times this amount of # stack space may be too much for standard platforms already. diff --git a/lib-python/2.7/test/decimaltestdata/and.decTest b/lib-python/2.7/test/decimaltestdata/and.decTest --- a/lib-python/2.7/test/decimaltestdata/and.decTest +++ b/lib-python/2.7/test/decimaltestdata/and.decTest @@ -1,338 +1,338 @@ ------------------------------------------------------------------------- --- and.decTest -- digitwise logical AND -- --- Copyright (c) IBM Corporation, 1981, 2008. All rights reserved. -- ------------------------------------------------------------------------- --- Please see the document "General Decimal Arithmetic Testcases" -- --- at http://www2.hursley.ibm.com/decimal for the description of -- --- these testcases. -- --- -- --- These testcases are experimental ('beta' versions), and they -- --- may contain errors. They are offered on an as-is basis. In -- --- particular, achieving the same results as the tests here is not -- --- a guarantee that an implementation complies with any Standard -- --- or specification. The tests are not exhaustive. -- --- -- --- Please send comments, suggestions, and corrections to the author: -- --- Mike Cowlishaw, IBM Fellow -- --- IBM UK, PO Box 31, Birmingham Road, Warwick CV34 5JL, UK -- --- mfc at uk.ibm.com -- ------------------------------------------------------------------------- -version: 2.59 - -extended: 1 -precision: 9 -rounding: half_up -maxExponent: 999 -minExponent: -999 - --- Sanity check (truth table) -andx001 and 0 0 -> 0 -andx002 and 0 1 -> 0 -andx003 and 1 0 -> 0 -andx004 and 1 1 -> 1 -andx005 and 1100 1010 -> 1000 -andx006 and 1111 10 -> 10 -andx007 and 1111 1010 -> 1010 - --- and at msd and msd-1 -andx010 and 000000000 000000000 -> 0 -andx011 and 000000000 100000000 -> 0 -andx012 and 100000000 000000000 -> 0 -andx013 and 100000000 100000000 -> 100000000 -andx014 and 000000000 000000000 -> 0 -andx015 and 000000000 010000000 -> 0 -andx016 and 010000000 000000000 -> 0 -andx017 and 010000000 010000000 -> 10000000 - --- Various lengths --- 123456789 123456789 123456789 -andx021 and 111111111 111111111 -> 111111111 -andx022 and 111111111111 111111111 -> 111111111 -andx023 and 111111111111 11111111 -> 11111111 -andx024 and 111111111 11111111 -> 11111111 -andx025 and 111111111 1111111 -> 1111111 -andx026 and 111111111111 111111 -> 111111 -andx027 and 111111111111 11111 -> 11111 -andx028 and 111111111111 1111 -> 1111 -andx029 and 111111111111 111 -> 111 -andx031 and 111111111111 11 -> 11 -andx032 and 111111111111 1 -> 1 -andx033 and 111111111111 1111111111 -> 111111111 -andx034 and 11111111111 11111111111 -> 111111111 -andx035 and 1111111111 111111111111 -> 111111111 -andx036 and 111111111 1111111111111 -> 111111111 - -andx040 and 111111111 111111111111 -> 111111111 -andx041 and 11111111 111111111111 -> 11111111 -andx042 and 11111111 111111111 -> 11111111 -andx043 and 1111111 111111111 -> 1111111 -andx044 and 111111 111111111 -> 111111 -andx045 and 11111 111111111 -> 11111 -andx046 and 1111 111111111 -> 1111 -andx047 and 111 111111111 -> 111 -andx048 and 11 111111111 -> 11 -andx049 and 1 111111111 -> 1 - -andx050 and 1111111111 1 -> 1 -andx051 and 111111111 1 -> 1 -andx052 and 11111111 1 -> 1 -andx053 and 1111111 1 -> 1 -andx054 and 111111 1 -> 1 -andx055 and 11111 1 -> 1 -andx056 and 1111 1 -> 1 -andx057 and 111 1 -> 1 -andx058 and 11 1 -> 1 -andx059 and 1 1 -> 1 - -andx060 
and 1111111111 0 -> 0 -andx061 and 111111111 0 -> 0 -andx062 and 11111111 0 -> 0 -andx063 and 1111111 0 -> 0 -andx064 and 111111 0 -> 0 -andx065 and 11111 0 -> 0 -andx066 and 1111 0 -> 0 -andx067 and 111 0 -> 0 -andx068 and 11 0 -> 0 -andx069 and 1 0 -> 0 - -andx070 and 1 1111111111 -> 1 -andx071 and 1 111111111 -> 1 -andx072 and 1 11111111 -> 1 -andx073 and 1 1111111 -> 1 -andx074 and 1 111111 -> 1 -andx075 and 1 11111 -> 1 -andx076 and 1 1111 -> 1 -andx077 and 1 111 -> 1 -andx078 and 1 11 -> 1 -andx079 and 1 1 -> 1 - -andx080 and 0 1111111111 -> 0 -andx081 and 0 111111111 -> 0 -andx082 and 0 11111111 -> 0 -andx083 and 0 1111111 -> 0 -andx084 and 0 111111 -> 0 -andx085 and 0 11111 -> 0 -andx086 and 0 1111 -> 0 -andx087 and 0 111 -> 0 -andx088 and 0 11 -> 0 -andx089 and 0 1 -> 0 - -andx090 and 011111111 111111111 -> 11111111 -andx091 and 101111111 111111111 -> 101111111 -andx092 and 110111111 111111111 -> 110111111 -andx093 and 111011111 111111111 -> 111011111 -andx094 and 111101111 111111111 -> 111101111 -andx095 and 111110111 111111111 -> 111110111 -andx096 and 111111011 111111111 -> 111111011 -andx097 and 111111101 111111111 -> 111111101 -andx098 and 111111110 111111111 -> 111111110 - -andx100 and 111111111 011111111 -> 11111111 -andx101 and 111111111 101111111 -> 101111111 -andx102 and 111111111 110111111 -> 110111111 -andx103 and 111111111 111011111 -> 111011111 -andx104 and 111111111 111101111 -> 111101111 -andx105 and 111111111 111110111 -> 111110111 -andx106 and 111111111 111111011 -> 111111011 -andx107 and 111111111 111111101 -> 111111101 -andx108 and 111111111 111111110 -> 111111110 - --- non-0/1 should not be accepted, nor should signs -andx220 and 111111112 111111111 -> NaN Invalid_operation -andx221 and 333333333 333333333 -> NaN Invalid_operation -andx222 and 555555555 555555555 -> NaN Invalid_operation -andx223 and 777777777 777777777 -> NaN Invalid_operation -andx224 and 999999999 999999999 -> NaN Invalid_operation -andx225 and 222222222 999999999 -> NaN Invalid_operation -andx226 and 444444444 999999999 -> NaN Invalid_operation -andx227 and 666666666 999999999 -> NaN Invalid_operation -andx228 and 888888888 999999999 -> NaN Invalid_operation -andx229 and 999999999 222222222 -> NaN Invalid_operation -andx230 and 999999999 444444444 -> NaN Invalid_operation -andx231 and 999999999 666666666 -> NaN Invalid_operation -andx232 and 999999999 888888888 -> NaN Invalid_operation --- a few randoms -andx240 and 567468689 -934981942 -> NaN Invalid_operation -andx241 and 567367689 934981942 -> NaN Invalid_operation -andx242 and -631917772 -706014634 -> NaN Invalid_operation -andx243 and -756253257 138579234 -> NaN Invalid_operation -andx244 and 835590149 567435400 -> NaN Invalid_operation --- test MSD -andx250 and 200000000 100000000 -> NaN Invalid_operation -andx251 and 700000000 100000000 -> NaN Invalid_operation -andx252 and 800000000 100000000 -> NaN Invalid_operation -andx253 and 900000000 100000000 -> NaN Invalid_operation -andx254 and 200000000 000000000 -> NaN Invalid_operation -andx255 and 700000000 000000000 -> NaN Invalid_operation -andx256 and 800000000 000000000 -> NaN Invalid_operation -andx257 and 900000000 000000000 -> NaN Invalid_operation -andx258 and 100000000 200000000 -> NaN Invalid_operation -andx259 and 100000000 700000000 -> NaN Invalid_operation -andx260 and 100000000 800000000 -> NaN Invalid_operation -andx261 and 100000000 900000000 -> NaN Invalid_operation -andx262 and 000000000 200000000 -> NaN Invalid_operation -andx263 and 000000000 700000000 -> NaN 
Invalid_operation -andx264 and 000000000 800000000 -> NaN Invalid_operation -andx265 and 000000000 900000000 -> NaN Invalid_operation --- test MSD-1 -andx270 and 020000000 100000000 -> NaN Invalid_operation -andx271 and 070100000 100000000 -> NaN Invalid_operation -andx272 and 080010000 100000001 -> NaN Invalid_operation -andx273 and 090001000 100000010 -> NaN Invalid_operation -andx274 and 100000100 020010100 -> NaN Invalid_operation -andx275 and 100000000 070001000 -> NaN Invalid_operation -andx276 and 100000010 080010100 -> NaN Invalid_operation -andx277 and 100000000 090000010 -> NaN Invalid_operation --- test LSD -andx280 and 001000002 100000000 -> NaN Invalid_operation -andx281 and 000000007 100000000 -> NaN Invalid_operation -andx282 and 000000008 100000000 -> NaN Invalid_operation -andx283 and 000000009 100000000 -> NaN Invalid_operation -andx284 and 100000000 000100002 -> NaN Invalid_operation -andx285 and 100100000 001000007 -> NaN Invalid_operation -andx286 and 100010000 010000008 -> NaN Invalid_operation -andx287 and 100001000 100000009 -> NaN Invalid_operation --- test Middie -andx288 and 001020000 100000000 -> NaN Invalid_operation -andx289 and 000070001 100000000 -> NaN Invalid_operation -andx290 and 000080000 100010000 -> NaN Invalid_operation -andx291 and 000090000 100001000 -> NaN Invalid_operation -andx292 and 100000010 000020100 -> NaN Invalid_operation -andx293 and 100100000 000070010 -> NaN Invalid_operation -andx294 and 100010100 000080001 -> NaN Invalid_operation -andx295 and 100001000 000090000 -> NaN Invalid_operation --- signs -andx296 and -100001000 -000000000 -> NaN Invalid_operation -andx297 and -100001000 000010000 -> NaN Invalid_operation -andx298 and 100001000 -000000000 -> NaN Invalid_operation -andx299 and 100001000 000011000 -> 1000 - --- Nmax, Nmin, Ntiny -andx331 and 2 9.99999999E+999 -> NaN Invalid_operation -andx332 and 3 1E-999 -> NaN Invalid_operation -andx333 and 4 1.00000000E-999 -> NaN Invalid_operation -andx334 and 5 1E-1007 -> NaN Invalid_operation -andx335 and 6 -1E-1007 -> NaN Invalid_operation -andx336 and 7 -1.00000000E-999 -> NaN Invalid_operation -andx337 and 8 -1E-999 -> NaN Invalid_operation -andx338 and 9 -9.99999999E+999 -> NaN Invalid_operation -andx341 and 9.99999999E+999 -18 -> NaN Invalid_operation -andx342 and 1E-999 01 -> NaN Invalid_operation -andx343 and 1.00000000E-999 -18 -> NaN Invalid_operation -andx344 and 1E-1007 18 -> NaN Invalid_operation -andx345 and -1E-1007 -10 -> NaN Invalid_operation -andx346 and -1.00000000E-999 18 -> NaN Invalid_operation -andx347 and -1E-999 10 -> NaN Invalid_operation -andx348 and -9.99999999E+999 -18 -> NaN Invalid_operation - --- A few other non-integers -andx361 and 1.0 1 -> NaN Invalid_operation -andx362 and 1E+1 1 -> NaN Invalid_operation -andx363 and 0.0 1 -> NaN Invalid_operation -andx364 and 0E+1 1 -> NaN Invalid_operation -andx365 and 9.9 1 -> NaN Invalid_operation -andx366 and 9E+1 1 -> NaN Invalid_operation -andx371 and 0 1.0 -> NaN Invalid_operation -andx372 and 0 1E+1 -> NaN Invalid_operation -andx373 and 0 0.0 -> NaN Invalid_operation -andx374 and 0 0E+1 -> NaN Invalid_operation -andx375 and 0 9.9 -> NaN Invalid_operation -andx376 and 0 9E+1 -> NaN Invalid_operation - --- All Specials are in error -andx780 and -Inf -Inf -> NaN Invalid_operation -andx781 and -Inf -1000 -> NaN Invalid_operation -andx782 and -Inf -1 -> NaN Invalid_operation -andx783 and -Inf -0 -> NaN Invalid_operation -andx784 and -Inf 0 -> NaN Invalid_operation -andx785 and -Inf 1 -> NaN Invalid_operation 
-andx786 and -Inf 1000 -> NaN Invalid_operation -andx787 and -1000 -Inf -> NaN Invalid_operation -andx788 and -Inf -Inf -> NaN Invalid_operation -andx789 and -1 -Inf -> NaN Invalid_operation -andx790 and -0 -Inf -> NaN Invalid_operation -andx791 and 0 -Inf -> NaN Invalid_operation -andx792 and 1 -Inf -> NaN Invalid_operation -andx793 and 1000 -Inf -> NaN Invalid_operation -andx794 and Inf -Inf -> NaN Invalid_operation - -andx800 and Inf -Inf -> NaN Invalid_operation -andx801 and Inf -1000 -> NaN Invalid_operation -andx802 and Inf -1 -> NaN Invalid_operation -andx803 and Inf -0 -> NaN Invalid_operation -andx804 and Inf 0 -> NaN Invalid_operation -andx805 and Inf 1 -> NaN Invalid_operation -andx806 and Inf 1000 -> NaN Invalid_operation -andx807 and Inf Inf -> NaN Invalid_operation -andx808 and -1000 Inf -> NaN Invalid_operation -andx809 and -Inf Inf -> NaN Invalid_operation -andx810 and -1 Inf -> NaN Invalid_operation -andx811 and -0 Inf -> NaN Invalid_operation -andx812 and 0 Inf -> NaN Invalid_operation -andx813 and 1 Inf -> NaN Invalid_operation -andx814 and 1000 Inf -> NaN Invalid_operation -andx815 and Inf Inf -> NaN Invalid_operation - -andx821 and NaN -Inf -> NaN Invalid_operation -andx822 and NaN -1000 -> NaN Invalid_operation -andx823 and NaN -1 -> NaN Invalid_operation -andx824 and NaN -0 -> NaN Invalid_operation -andx825 and NaN 0 -> NaN Invalid_operation -andx826 and NaN 1 -> NaN Invalid_operation -andx827 and NaN 1000 -> NaN Invalid_operation -andx828 and NaN Inf -> NaN Invalid_operation -andx829 and NaN NaN -> NaN Invalid_operation -andx830 and -Inf NaN -> NaN Invalid_operation -andx831 and -1000 NaN -> NaN Invalid_operation -andx832 and -1 NaN -> NaN Invalid_operation -andx833 and -0 NaN -> NaN Invalid_operation -andx834 and 0 NaN -> NaN Invalid_operation -andx835 and 1 NaN -> NaN Invalid_operation -andx836 and 1000 NaN -> NaN Invalid_operation -andx837 and Inf NaN -> NaN Invalid_operation - -andx841 and sNaN -Inf -> NaN Invalid_operation -andx842 and sNaN -1000 -> NaN Invalid_operation -andx843 and sNaN -1 -> NaN Invalid_operation -andx844 and sNaN -0 -> NaN Invalid_operation -andx845 and sNaN 0 -> NaN Invalid_operation -andx846 and sNaN 1 -> NaN Invalid_operation -andx847 and sNaN 1000 -> NaN Invalid_operation -andx848 and sNaN NaN -> NaN Invalid_operation -andx849 and sNaN sNaN -> NaN Invalid_operation -andx850 and NaN sNaN -> NaN Invalid_operation -andx851 and -Inf sNaN -> NaN Invalid_operation -andx852 and -1000 sNaN -> NaN Invalid_operation -andx853 and -1 sNaN -> NaN Invalid_operation -andx854 and -0 sNaN -> NaN Invalid_operation -andx855 and 0 sNaN -> NaN Invalid_operation -andx856 and 1 sNaN -> NaN Invalid_operation -andx857 and 1000 sNaN -> NaN Invalid_operation -andx858 and Inf sNaN -> NaN Invalid_operation -andx859 and NaN sNaN -> NaN Invalid_operation - --- propagating NaNs -andx861 and NaN1 -Inf -> NaN Invalid_operation -andx862 and +NaN2 -1000 -> NaN Invalid_operation -andx863 and NaN3 1000 -> NaN Invalid_operation -andx864 and NaN4 Inf -> NaN Invalid_operation -andx865 and NaN5 +NaN6 -> NaN Invalid_operation -andx866 and -Inf NaN7 -> NaN Invalid_operation -andx867 and -1000 NaN8 -> NaN Invalid_operation -andx868 and 1000 NaN9 -> NaN Invalid_operation -andx869 and Inf +NaN10 -> NaN Invalid_operation -andx871 and sNaN11 -Inf -> NaN Invalid_operation -andx872 and sNaN12 -1000 -> NaN Invalid_operation -andx873 and sNaN13 1000 -> NaN Invalid_operation -andx874 and sNaN14 NaN17 -> NaN Invalid_operation -andx875 and sNaN15 sNaN18 -> NaN Invalid_operation -andx876 and 
NaN16 sNaN19 -> NaN Invalid_operation -andx877 and -Inf +sNaN20 -> NaN Invalid_operation -andx878 and -1000 sNaN21 -> NaN Invalid_operation -andx879 and 1000 sNaN22 -> NaN Invalid_operation -andx880 and Inf sNaN23 -> NaN Invalid_operation -andx881 and +NaN25 +sNaN24 -> NaN Invalid_operation -andx882 and -NaN26 NaN28 -> NaN Invalid_operation -andx883 and -sNaN27 sNaN29 -> NaN Invalid_operation -andx884 and 1000 -NaN30 -> NaN Invalid_operation -andx885 and 1000 -sNaN31 -> NaN Invalid_operation +------------------------------------------------------------------------ +-- and.decTest -- digitwise logical AND -- +-- Copyright (c) IBM Corporation, 1981, 2008. All rights reserved. -- +------------------------------------------------------------------------ +-- Please see the document "General Decimal Arithmetic Testcases" -- +-- at http://www2.hursley.ibm.com/decimal for the description of -- +-- these testcases. -- +-- -- +-- These testcases are experimental ('beta' versions), and they -- +-- may contain errors. They are offered on an as-is basis. In -- +-- particular, achieving the same results as the tests here is not -- +-- a guarantee that an implementation complies with any Standard -- +-- or specification. The tests are not exhaustive. -- +-- -- +-- Please send comments, suggestions, and corrections to the author: -- +-- Mike Cowlishaw, IBM Fellow -- +-- IBM UK, PO Box 31, Birmingham Road, Warwick CV34 5JL, UK -- +-- mfc at uk.ibm.com -- +------------------------------------------------------------------------ +version: 2.59 + +extended: 1 +precision: 9 +rounding: half_up +maxExponent: 999 +minExponent: -999 + +-- Sanity check (truth table) +andx001 and 0 0 -> 0 +andx002 and 0 1 -> 0 +andx003 and 1 0 -> 0 +andx004 and 1 1 -> 1 +andx005 and 1100 1010 -> 1000 +andx006 and 1111 10 -> 10 +andx007 and 1111 1010 -> 1010 + +-- and at msd and msd-1 +andx010 and 000000000 000000000 -> 0 +andx011 and 000000000 100000000 -> 0 +andx012 and 100000000 000000000 -> 0 +andx013 and 100000000 100000000 -> 100000000 +andx014 and 000000000 000000000 -> 0 +andx015 and 000000000 010000000 -> 0 +andx016 and 010000000 000000000 -> 0 +andx017 and 010000000 010000000 -> 10000000 + +-- Various lengths +-- 123456789 123456789 123456789 +andx021 and 111111111 111111111 -> 111111111 +andx022 and 111111111111 111111111 -> 111111111 +andx023 and 111111111111 11111111 -> 11111111 +andx024 and 111111111 11111111 -> 11111111 +andx025 and 111111111 1111111 -> 1111111 +andx026 and 111111111111 111111 -> 111111 +andx027 and 111111111111 11111 -> 11111 +andx028 and 111111111111 1111 -> 1111 +andx029 and 111111111111 111 -> 111 +andx031 and 111111111111 11 -> 11 +andx032 and 111111111111 1 -> 1 +andx033 and 111111111111 1111111111 -> 111111111 +andx034 and 11111111111 11111111111 -> 111111111 +andx035 and 1111111111 111111111111 -> 111111111 +andx036 and 111111111 1111111111111 -> 111111111 + +andx040 and 111111111 111111111111 -> 111111111 +andx041 and 11111111 111111111111 -> 11111111 +andx042 and 11111111 111111111 -> 11111111 +andx043 and 1111111 111111111 -> 1111111 +andx044 and 111111 111111111 -> 111111 +andx045 and 11111 111111111 -> 11111 +andx046 and 1111 111111111 -> 1111 +andx047 and 111 111111111 -> 111 +andx048 and 11 111111111 -> 11 +andx049 and 1 111111111 -> 1 + +andx050 and 1111111111 1 -> 1 +andx051 and 111111111 1 -> 1 +andx052 and 11111111 1 -> 1 +andx053 and 1111111 1 -> 1 +andx054 and 111111 1 -> 1 +andx055 and 11111 1 -> 1 +andx056 and 1111 1 -> 1 +andx057 and 111 1 -> 1 +andx058 and 11 1 -> 1 +andx059 
and 1 1 -> 1 + +andx060 and 1111111111 0 -> 0 +andx061 and 111111111 0 -> 0 +andx062 and 11111111 0 -> 0 +andx063 and 1111111 0 -> 0 +andx064 and 111111 0 -> 0 +andx065 and 11111 0 -> 0 +andx066 and 1111 0 -> 0 +andx067 and 111 0 -> 0 +andx068 and 11 0 -> 0 +andx069 and 1 0 -> 0 + +andx070 and 1 1111111111 -> 1 +andx071 and 1 111111111 -> 1 +andx072 and 1 11111111 -> 1 +andx073 and 1 1111111 -> 1 +andx074 and 1 111111 -> 1 +andx075 and 1 11111 -> 1 +andx076 and 1 1111 -> 1 +andx077 and 1 111 -> 1 +andx078 and 1 11 -> 1 +andx079 and 1 1 -> 1 + +andx080 and 0 1111111111 -> 0 +andx081 and 0 111111111 -> 0 +andx082 and 0 11111111 -> 0 +andx083 and 0 1111111 -> 0 +andx084 and 0 111111 -> 0 +andx085 and 0 11111 -> 0 +andx086 and 0 1111 -> 0 +andx087 and 0 111 -> 0 +andx088 and 0 11 -> 0 +andx089 and 0 1 -> 0 + +andx090 and 011111111 111111111 -> 11111111 +andx091 and 101111111 111111111 -> 101111111 +andx092 and 110111111 111111111 -> 110111111 +andx093 and 111011111 111111111 -> 111011111 +andx094 and 111101111 111111111 -> 111101111 +andx095 and 111110111 111111111 -> 111110111 +andx096 and 111111011 111111111 -> 111111011 +andx097 and 111111101 111111111 -> 111111101 +andx098 and 111111110 111111111 -> 111111110 + +andx100 and 111111111 011111111 -> 11111111 +andx101 and 111111111 101111111 -> 101111111 +andx102 and 111111111 110111111 -> 110111111 +andx103 and 111111111 111011111 -> 111011111 +andx104 and 111111111 111101111 -> 111101111 +andx105 and 111111111 111110111 -> 111110111 +andx106 and 111111111 111111011 -> 111111011 +andx107 and 111111111 111111101 -> 111111101 +andx108 and 111111111 111111110 -> 111111110 + +-- non-0/1 should not be accepted, nor should signs +andx220 and 111111112 111111111 -> NaN Invalid_operation +andx221 and 333333333 333333333 -> NaN Invalid_operation +andx222 and 555555555 555555555 -> NaN Invalid_operation +andx223 and 777777777 777777777 -> NaN Invalid_operation +andx224 and 999999999 999999999 -> NaN Invalid_operation +andx225 and 222222222 999999999 -> NaN Invalid_operation +andx226 and 444444444 999999999 -> NaN Invalid_operation +andx227 and 666666666 999999999 -> NaN Invalid_operation +andx228 and 888888888 999999999 -> NaN Invalid_operation +andx229 and 999999999 222222222 -> NaN Invalid_operation +andx230 and 999999999 444444444 -> NaN Invalid_operation +andx231 and 999999999 666666666 -> NaN Invalid_operation +andx232 and 999999999 888888888 -> NaN Invalid_operation +-- a few randoms +andx240 and 567468689 -934981942 -> NaN Invalid_operation +andx241 and 567367689 934981942 -> NaN Invalid_operation +andx242 and -631917772 -706014634 -> NaN Invalid_operation +andx243 and -756253257 138579234 -> NaN Invalid_operation +andx244 and 835590149 567435400 -> NaN Invalid_operation +-- test MSD +andx250 and 200000000 100000000 -> NaN Invalid_operation +andx251 and 700000000 100000000 -> NaN Invalid_operation +andx252 and 800000000 100000000 -> NaN Invalid_operation +andx253 and 900000000 100000000 -> NaN Invalid_operation +andx254 and 200000000 000000000 -> NaN Invalid_operation +andx255 and 700000000 000000000 -> NaN Invalid_operation +andx256 and 800000000 000000000 -> NaN Invalid_operation +andx257 and 900000000 000000000 -> NaN Invalid_operation +andx258 and 100000000 200000000 -> NaN Invalid_operation +andx259 and 100000000 700000000 -> NaN Invalid_operation +andx260 and 100000000 800000000 -> NaN Invalid_operation +andx261 and 100000000 900000000 -> NaN Invalid_operation +andx262 and 000000000 200000000 -> NaN Invalid_operation +andx263 and 000000000 
700000000 -> NaN Invalid_operation +andx264 and 000000000 800000000 -> NaN Invalid_operation +andx265 and 000000000 900000000 -> NaN Invalid_operation +-- test MSD-1 +andx270 and 020000000 100000000 -> NaN Invalid_operation +andx271 and 070100000 100000000 -> NaN Invalid_operation +andx272 and 080010000 100000001 -> NaN Invalid_operation +andx273 and 090001000 100000010 -> NaN Invalid_operation +andx274 and 100000100 020010100 -> NaN Invalid_operation +andx275 and 100000000 070001000 -> NaN Invalid_operation +andx276 and 100000010 080010100 -> NaN Invalid_operation +andx277 and 100000000 090000010 -> NaN Invalid_operation +-- test LSD +andx280 and 001000002 100000000 -> NaN Invalid_operation +andx281 and 000000007 100000000 -> NaN Invalid_operation +andx282 and 000000008 100000000 -> NaN Invalid_operation +andx283 and 000000009 100000000 -> NaN Invalid_operation +andx284 and 100000000 000100002 -> NaN Invalid_operation +andx285 and 100100000 001000007 -> NaN Invalid_operation +andx286 and 100010000 010000008 -> NaN Invalid_operation +andx287 and 100001000 100000009 -> NaN Invalid_operation +-- test Middie +andx288 and 001020000 100000000 -> NaN Invalid_operation +andx289 and 000070001 100000000 -> NaN Invalid_operation +andx290 and 000080000 100010000 -> NaN Invalid_operation +andx291 and 000090000 100001000 -> NaN Invalid_operation +andx292 and 100000010 000020100 -> NaN Invalid_operation +andx293 and 100100000 000070010 -> NaN Invalid_operation +andx294 and 100010100 000080001 -> NaN Invalid_operation +andx295 and 100001000 000090000 -> NaN Invalid_operation +-- signs +andx296 and -100001000 -000000000 -> NaN Invalid_operation +andx297 and -100001000 000010000 -> NaN Invalid_operation +andx298 and 100001000 -000000000 -> NaN Invalid_operation +andx299 and 100001000 000011000 -> 1000 + +-- Nmax, Nmin, Ntiny +andx331 and 2 9.99999999E+999 -> NaN Invalid_operation +andx332 and 3 1E-999 -> NaN Invalid_operation +andx333 and 4 1.00000000E-999 -> NaN Invalid_operation +andx334 and 5 1E-1007 -> NaN Invalid_operation +andx335 and 6 -1E-1007 -> NaN Invalid_operation +andx336 and 7 -1.00000000E-999 -> NaN Invalid_operation +andx337 and 8 -1E-999 -> NaN Invalid_operation +andx338 and 9 -9.99999999E+999 -> NaN Invalid_operation +andx341 and 9.99999999E+999 -18 -> NaN Invalid_operation +andx342 and 1E-999 01 -> NaN Invalid_operation +andx343 and 1.00000000E-999 -18 -> NaN Invalid_operation +andx344 and 1E-1007 18 -> NaN Invalid_operation +andx345 and -1E-1007 -10 -> NaN Invalid_operation +andx346 and -1.00000000E-999 18 -> NaN Invalid_operation +andx347 and -1E-999 10 -> NaN Invalid_operation +andx348 and -9.99999999E+999 -18 -> NaN Invalid_operation + +-- A few other non-integers +andx361 and 1.0 1 -> NaN Invalid_operation +andx362 and 1E+1 1 -> NaN Invalid_operation +andx363 and 0.0 1 -> NaN Invalid_operation +andx364 and 0E+1 1 -> NaN Invalid_operation +andx365 and 9.9 1 -> NaN Invalid_operation +andx366 and 9E+1 1 -> NaN Invalid_operation +andx371 and 0 1.0 -> NaN Invalid_operation +andx372 and 0 1E+1 -> NaN Invalid_operation +andx373 and 0 0.0 -> NaN Invalid_operation +andx374 and 0 0E+1 -> NaN Invalid_operation +andx375 and 0 9.9 -> NaN Invalid_operation +andx376 and 0 9E+1 -> NaN Invalid_operation + +-- All Specials are in error +andx780 and -Inf -Inf -> NaN Invalid_operation +andx781 and -Inf -1000 -> NaN Invalid_operation +andx782 and -Inf -1 -> NaN Invalid_operation +andx783 and -Inf -0 -> NaN Invalid_operation +andx784 and -Inf 0 -> NaN Invalid_operation +andx785 and -Inf 1 -> NaN 
Invalid_operation +andx786 and -Inf 1000 -> NaN Invalid_operation +andx787 and -1000 -Inf -> NaN Invalid_operation +andx788 and -Inf -Inf -> NaN Invalid_operation +andx789 and -1 -Inf -> NaN Invalid_operation +andx790 and -0 -Inf -> NaN Invalid_operation +andx791 and 0 -Inf -> NaN Invalid_operation +andx792 and 1 -Inf -> NaN Invalid_operation +andx793 and 1000 -Inf -> NaN Invalid_operation +andx794 and Inf -Inf -> NaN Invalid_operation + +andx800 and Inf -Inf -> NaN Invalid_operation +andx801 and Inf -1000 -> NaN Invalid_operation +andx802 and Inf -1 -> NaN Invalid_operation +andx803 and Inf -0 -> NaN Invalid_operation +andx804 and Inf 0 -> NaN Invalid_operation +andx805 and Inf 1 -> NaN Invalid_operation +andx806 and Inf 1000 -> NaN Invalid_operation +andx807 and Inf Inf -> NaN Invalid_operation +andx808 and -1000 Inf -> NaN Invalid_operation +andx809 and -Inf Inf -> NaN Invalid_operation +andx810 and -1 Inf -> NaN Invalid_operation +andx811 and -0 Inf -> NaN Invalid_operation +andx812 and 0 Inf -> NaN Invalid_operation +andx813 and 1 Inf -> NaN Invalid_operation +andx814 and 1000 Inf -> NaN Invalid_operation +andx815 and Inf Inf -> NaN Invalid_operation + +andx821 and NaN -Inf -> NaN Invalid_operation +andx822 and NaN -1000 -> NaN Invalid_operation +andx823 and NaN -1 -> NaN Invalid_operation +andx824 and NaN -0 -> NaN Invalid_operation +andx825 and NaN 0 -> NaN Invalid_operation +andx826 and NaN 1 -> NaN Invalid_operation +andx827 and NaN 1000 -> NaN Invalid_operation +andx828 and NaN Inf -> NaN Invalid_operation +andx829 and NaN NaN -> NaN Invalid_operation +andx830 and -Inf NaN -> NaN Invalid_operation +andx831 and -1000 NaN -> NaN Invalid_operation +andx832 and -1 NaN -> NaN Invalid_operation +andx833 and -0 NaN -> NaN Invalid_operation +andx834 and 0 NaN -> NaN Invalid_operation +andx835 and 1 NaN -> NaN Invalid_operation +andx836 and 1000 NaN -> NaN Invalid_operation +andx837 and Inf NaN -> NaN Invalid_operation + +andx841 and sNaN -Inf -> NaN Invalid_operation +andx842 and sNaN -1000 -> NaN Invalid_operation +andx843 and sNaN -1 -> NaN Invalid_operation +andx844 and sNaN -0 -> NaN Invalid_operation +andx845 and sNaN 0 -> NaN Invalid_operation +andx846 and sNaN 1 -> NaN Invalid_operation +andx847 and sNaN 1000 -> NaN Invalid_operation +andx848 and sNaN NaN -> NaN Invalid_operation +andx849 and sNaN sNaN -> NaN Invalid_operation +andx850 and NaN sNaN -> NaN Invalid_operation +andx851 and -Inf sNaN -> NaN Invalid_operation +andx852 and -1000 sNaN -> NaN Invalid_operation +andx853 and -1 sNaN -> NaN Invalid_operation +andx854 and -0 sNaN -> NaN Invalid_operation +andx855 and 0 sNaN -> NaN Invalid_operation +andx856 and 1 sNaN -> NaN Invalid_operation +andx857 and 1000 sNaN -> NaN Invalid_operation +andx858 and Inf sNaN -> NaN Invalid_operation +andx859 and NaN sNaN -> NaN Invalid_operation + +-- propagating NaNs +andx861 and NaN1 -Inf -> NaN Invalid_operation +andx862 and +NaN2 -1000 -> NaN Invalid_operation +andx863 and NaN3 1000 -> NaN Invalid_operation +andx864 and NaN4 Inf -> NaN Invalid_operation +andx865 and NaN5 +NaN6 -> NaN Invalid_operation +andx866 and -Inf NaN7 -> NaN Invalid_operation +andx867 and -1000 NaN8 -> NaN Invalid_operation +andx868 and 1000 NaN9 -> NaN Invalid_operation +andx869 and Inf +NaN10 -> NaN Invalid_operation +andx871 and sNaN11 -Inf -> NaN Invalid_operation +andx872 and sNaN12 -1000 -> NaN Invalid_operation +andx873 and sNaN13 1000 -> NaN Invalid_operation +andx874 and sNaN14 NaN17 -> NaN Invalid_operation +andx875 and sNaN15 sNaN18 -> NaN 
Invalid_operation +andx876 and NaN16 sNaN19 -> NaN Invalid_operation +andx877 and -Inf +sNaN20 -> NaN Invalid_operation +andx878 and -1000 sNaN21 -> NaN Invalid_operation +andx879 and 1000 sNaN22 -> NaN Invalid_operation +andx880 and Inf sNaN23 -> NaN Invalid_operation +andx881 and +NaN25 +sNaN24 -> NaN Invalid_operation +andx882 and -NaN26 NaN28 -> NaN Invalid_operation +andx883 and -sNaN27 sNaN29 -> NaN Invalid_operation +andx884 and 1000 -NaN30 -> NaN Invalid_operation +andx885 and 1000 -sNaN31 -> NaN Invalid_operation diff --git a/lib-python/2.7/test/decimaltestdata/class.decTest b/lib-python/2.7/test/decimaltestdata/class.decTest --- a/lib-python/2.7/test/decimaltestdata/class.decTest +++ b/lib-python/2.7/test/decimaltestdata/class.decTest @@ -1,131 +1,131 @@ ------------------------------------------------------------------------- --- class.decTest -- Class operations -- --- Copyright (c) IBM Corporation, 1981, 2008. All rights reserved. -- ------------------------------------------------------------------------- --- Please see the document "General Decimal Arithmetic Testcases" -- --- at http://www2.hursley.ibm.com/decimal for the description of -- --- these testcases. -- --- -- --- These testcases are experimental ('beta' versions), and they -- --- may contain errors. They are offered on an as-is basis. In -- --- particular, achieving the same results as the tests here is not -- --- a guarantee that an implementation complies with any Standard -- --- or specification. The tests are not exhaustive. -- --- -- --- Please send comments, suggestions, and corrections to the author: -- --- Mike Cowlishaw, IBM Fellow -- --- IBM UK, PO Box 31, Birmingham Road, Warwick CV34 5JL, UK -- --- mfc at uk.ibm.com -- ------------------------------------------------------------------------- -version: 2.59 - --- [New 2006.11.27] - -precision: 9 -maxExponent: 999 -minExponent: -999 -extended: 1 -clamp: 1 -rounding: half_even - -clasx001 class 0 -> +Zero -clasx002 class 0.00 -> +Zero -clasx003 class 0E+5 -> +Zero -clasx004 class 1E-1007 -> +Subnormal -clasx005 class 0.1E-999 -> +Subnormal -clasx006 class 0.99999999E-999 -> +Subnormal -clasx007 class 1.00000000E-999 -> +Normal -clasx008 class 1E-999 -> +Normal -clasx009 class 1E-100 -> +Normal -clasx010 class 1E-10 -> +Normal -clasx012 class 1E-1 -> +Normal -clasx013 class 1 -> +Normal -clasx014 class 2.50 -> +Normal -clasx015 class 100.100 -> +Normal -clasx016 class 1E+30 -> +Normal -clasx017 class 1E+999 -> +Normal -clasx018 class 9.99999999E+999 -> +Normal -clasx019 class Inf -> +Infinity - -clasx021 class -0 -> -Zero -clasx022 class -0.00 -> -Zero -clasx023 class -0E+5 -> -Zero -clasx024 class -1E-1007 -> -Subnormal -clasx025 class -0.1E-999 -> -Subnormal -clasx026 class -0.99999999E-999 -> -Subnormal -clasx027 class -1.00000000E-999 -> -Normal -clasx028 class -1E-999 -> -Normal -clasx029 class -1E-100 -> -Normal -clasx030 class -1E-10 -> -Normal -clasx032 class -1E-1 -> -Normal -clasx033 class -1 -> -Normal -clasx034 class -2.50 -> -Normal -clasx035 class -100.100 -> -Normal -clasx036 class -1E+30 -> -Normal -clasx037 class -1E+999 -> -Normal -clasx038 class -9.99999999E+999 -> -Normal -clasx039 class -Inf -> -Infinity - -clasx041 class NaN -> NaN -clasx042 class -NaN -> NaN -clasx043 class +NaN12345 -> NaN -clasx044 class sNaN -> sNaN -clasx045 class -sNaN -> sNaN -clasx046 class +sNaN12345 -> sNaN - - --- decimal64 bounds - -precision: 16 -maxExponent: 384 -minExponent: -383 -clamp: 1 -rounding: half_even - -clasx201 class 0 -> +Zero -clasx202 
class 0.00 -> +Zero -clasx203 class 0E+5 -> +Zero -clasx204 class 1E-396 -> +Subnormal -clasx205 class 0.1E-383 -> +Subnormal -clasx206 class 0.999999999999999E-383 -> +Subnormal -clasx207 class 1.000000000000000E-383 -> +Normal -clasx208 class 1E-383 -> +Normal -clasx209 class 1E-100 -> +Normal -clasx210 class 1E-10 -> +Normal -clasx212 class 1E-1 -> +Normal -clasx213 class 1 -> +Normal -clasx214 class 2.50 -> +Normal -clasx215 class 100.100 -> +Normal -clasx216 class 1E+30 -> +Normal -clasx217 class 1E+384 -> +Normal -clasx218 class 9.999999999999999E+384 -> +Normal -clasx219 class Inf -> +Infinity - -clasx221 class -0 -> -Zero -clasx222 class -0.00 -> -Zero -clasx223 class -0E+5 -> -Zero -clasx224 class -1E-396 -> -Subnormal -clasx225 class -0.1E-383 -> -Subnormal -clasx226 class -0.999999999999999E-383 -> -Subnormal -clasx227 class -1.000000000000000E-383 -> -Normal -clasx228 class -1E-383 -> -Normal -clasx229 class -1E-100 -> -Normal -clasx230 class -1E-10 -> -Normal -clasx232 class -1E-1 -> -Normal -clasx233 class -1 -> -Normal -clasx234 class -2.50 -> -Normal -clasx235 class -100.100 -> -Normal -clasx236 class -1E+30 -> -Normal -clasx237 class -1E+384 -> -Normal -clasx238 class -9.999999999999999E+384 -> -Normal -clasx239 class -Inf -> -Infinity - -clasx241 class NaN -> NaN -clasx242 class -NaN -> NaN -clasx243 class +NaN12345 -> NaN -clasx244 class sNaN -> sNaN -clasx245 class -sNaN -> sNaN -clasx246 class +sNaN12345 -> sNaN - - - +------------------------------------------------------------------------ +-- class.decTest -- Class operations -- +-- Copyright (c) IBM Corporation, 1981, 2008. All rights reserved. -- +------------------------------------------------------------------------ +-- Please see the document "General Decimal Arithmetic Testcases" -- +-- at http://www2.hursley.ibm.com/decimal for the description of -- +-- these testcases. -- +-- -- +-- These testcases are experimental ('beta' versions), and they -- +-- may contain errors. They are offered on an as-is basis. In -- +-- particular, achieving the same results as the tests here is not -- +-- a guarantee that an implementation complies with any Standard -- +-- or specification. The tests are not exhaustive. 
-- +-- -- +-- Please send comments, suggestions, and corrections to the author: -- +-- Mike Cowlishaw, IBM Fellow -- +-- IBM UK, PO Box 31, Birmingham Road, Warwick CV34 5JL, UK -- +-- mfc at uk.ibm.com -- +------------------------------------------------------------------------ +version: 2.59 + +-- [New 2006.11.27] + +precision: 9 +maxExponent: 999 +minExponent: -999 +extended: 1 +clamp: 1 +rounding: half_even + +clasx001 class 0 -> +Zero +clasx002 class 0.00 -> +Zero +clasx003 class 0E+5 -> +Zero +clasx004 class 1E-1007 -> +Subnormal +clasx005 class 0.1E-999 -> +Subnormal +clasx006 class 0.99999999E-999 -> +Subnormal +clasx007 class 1.00000000E-999 -> +Normal +clasx008 class 1E-999 -> +Normal +clasx009 class 1E-100 -> +Normal +clasx010 class 1E-10 -> +Normal +clasx012 class 1E-1 -> +Normal +clasx013 class 1 -> +Normal +clasx014 class 2.50 -> +Normal +clasx015 class 100.100 -> +Normal +clasx016 class 1E+30 -> +Normal +clasx017 class 1E+999 -> +Normal +clasx018 class 9.99999999E+999 -> +Normal +clasx019 class Inf -> +Infinity + +clasx021 class -0 -> -Zero +clasx022 class -0.00 -> -Zero +clasx023 class -0E+5 -> -Zero +clasx024 class -1E-1007 -> -Subnormal +clasx025 class -0.1E-999 -> -Subnormal +clasx026 class -0.99999999E-999 -> -Subnormal +clasx027 class -1.00000000E-999 -> -Normal +clasx028 class -1E-999 -> -Normal +clasx029 class -1E-100 -> -Normal +clasx030 class -1E-10 -> -Normal +clasx032 class -1E-1 -> -Normal +clasx033 class -1 -> -Normal +clasx034 class -2.50 -> -Normal +clasx035 class -100.100 -> -Normal +clasx036 class -1E+30 -> -Normal +clasx037 class -1E+999 -> -Normal +clasx038 class -9.99999999E+999 -> -Normal +clasx039 class -Inf -> -Infinity + +clasx041 class NaN -> NaN +clasx042 class -NaN -> NaN +clasx043 class +NaN12345 -> NaN +clasx044 class sNaN -> sNaN +clasx045 class -sNaN -> sNaN +clasx046 class +sNaN12345 -> sNaN + + +-- decimal64 bounds + +precision: 16 +maxExponent: 384 +minExponent: -383 +clamp: 1 +rounding: half_even + +clasx201 class 0 -> +Zero +clasx202 class 0.00 -> +Zero +clasx203 class 0E+5 -> +Zero +clasx204 class 1E-396 -> +Subnormal +clasx205 class 0.1E-383 -> +Subnormal +clasx206 class 0.999999999999999E-383 -> +Subnormal +clasx207 class 1.000000000000000E-383 -> +Normal +clasx208 class 1E-383 -> +Normal +clasx209 class 1E-100 -> +Normal +clasx210 class 1E-10 -> +Normal +clasx212 class 1E-1 -> +Normal +clasx213 class 1 -> +Normal +clasx214 class 2.50 -> +Normal +clasx215 class 100.100 -> +Normal +clasx216 class 1E+30 -> +Normal +clasx217 class 1E+384 -> +Normal +clasx218 class 9.999999999999999E+384 -> +Normal +clasx219 class Inf -> +Infinity + +clasx221 class -0 -> -Zero +clasx222 class -0.00 -> -Zero +clasx223 class -0E+5 -> -Zero +clasx224 class -1E-396 -> -Subnormal +clasx225 class -0.1E-383 -> -Subnormal +clasx226 class -0.999999999999999E-383 -> -Subnormal +clasx227 class -1.000000000000000E-383 -> -Normal +clasx228 class -1E-383 -> -Normal +clasx229 class -1E-100 -> -Normal +clasx230 class -1E-10 -> -Normal +clasx232 class -1E-1 -> -Normal +clasx233 class -1 -> -Normal +clasx234 class -2.50 -> -Normal +clasx235 class -100.100 -> -Normal +clasx236 class -1E+30 -> -Normal +clasx237 class -1E+384 -> -Normal +clasx238 class -9.999999999999999E+384 -> -Normal +clasx239 class -Inf -> -Infinity + +clasx241 class NaN -> NaN +clasx242 class -NaN -> NaN +clasx243 class +NaN12345 -> NaN +clasx244 class sNaN -> sNaN +clasx245 class -sNaN -> sNaN +clasx246 class +sNaN12345 -> sNaN + + + diff --git a/lib-python/2.7/test/decimaltestdata/comparetotal.decTest 
b/lib-python/2.7/test/decimaltestdata/comparetotal.decTest --- a/lib-python/2.7/test/decimaltestdata/comparetotal.decTest +++ b/lib-python/2.7/test/decimaltestdata/comparetotal.decTest @@ -1,798 +1,798 @@ ------------------------------------------------------------------------- --- comparetotal.decTest -- decimal comparison using total ordering -- --- Copyright (c) IBM Corporation, 1981, 2008. All rights reserved. -- ------------------------------------------------------------------------- --- Please see the document "General Decimal Arithmetic Testcases" -- --- at http://www2.hursley.ibm.com/decimal for the description of -- --- these testcases. -- --- -- --- These testcases are experimental ('beta' versions), and they -- --- may contain errors. They are offered on an as-is basis. In -- --- particular, achieving the same results as the tests here is not -- --- a guarantee that an implementation complies with any Standard -- --- or specification. The tests are not exhaustive. -- --- -- --- Please send comments, suggestions, and corrections to the author: -- --- Mike Cowlishaw, IBM Fellow -- --- IBM UK, PO Box 31, Birmingham Road, Warwick CV34 5JL, UK -- --- mfc at uk.ibm.com -- ------------------------------------------------------------------------- -version: 2.59 - --- Note that we cannot assume add/subtract tests cover paths adequately, --- here, because the code might be quite different (comparison cannot --- overflow or underflow, so actual subtractions are not necessary). --- Similarly, comparetotal will have some radically different paths --- than compare. - -extended: 1 -precision: 16 -rounding: half_up -maxExponent: 384 -minExponent: -383 - --- sanity checks -cotx001 comparetotal -2 -2 -> 0 -cotx002 comparetotal -2 -1 -> -1 -cotx003 comparetotal -2 0 -> -1 -cotx004 comparetotal -2 1 -> -1 -cotx005 comparetotal -2 2 -> -1 -cotx006 comparetotal -1 -2 -> 1 -cotx007 comparetotal -1 -1 -> 0 -cotx008 comparetotal -1 0 -> -1 -cotx009 comparetotal -1 1 -> -1 -cotx010 comparetotal -1 2 -> -1 -cotx011 comparetotal 0 -2 -> 1 -cotx012 comparetotal 0 -1 -> 1 -cotx013 comparetotal 0 0 -> 0 -cotx014 comparetotal 0 1 -> -1 -cotx015 comparetotal 0 2 -> -1 -cotx016 comparetotal 1 -2 -> 1 -cotx017 comparetotal 1 -1 -> 1 -cotx018 comparetotal 1 0 -> 1 -cotx019 comparetotal 1 1 -> 0 -cotx020 comparetotal 1 2 -> -1 -cotx021 comparetotal 2 -2 -> 1 -cotx022 comparetotal 2 -1 -> 1 -cotx023 comparetotal 2 0 -> 1 -cotx025 comparetotal 2 1 -> 1 -cotx026 comparetotal 2 2 -> 0 - -cotx031 comparetotal -20 -20 -> 0 -cotx032 comparetotal -20 -10 -> -1 -cotx033 comparetotal -20 00 -> -1 -cotx034 comparetotal -20 10 -> -1 -cotx035 comparetotal -20 20 -> -1 -cotx036 comparetotal -10 -20 -> 1 -cotx037 comparetotal -10 -10 -> 0 -cotx038 comparetotal -10 00 -> -1 -cotx039 comparetotal -10 10 -> -1 -cotx040 comparetotal -10 20 -> -1 -cotx041 comparetotal 00 -20 -> 1 -cotx042 comparetotal 00 -10 -> 1 -cotx043 comparetotal 00 00 -> 0 -cotx044 comparetotal 00 10 -> -1 -cotx045 comparetotal 00 20 -> -1 -cotx046 comparetotal 10 -20 -> 1 -cotx047 comparetotal 10 -10 -> 1 -cotx048 comparetotal 10 00 -> 1 -cotx049 comparetotal 10 10 -> 0 -cotx050 comparetotal 10 20 -> -1 -cotx051 comparetotal 20 -20 -> 1 -cotx052 comparetotal 20 -10 -> 1 -cotx053 comparetotal 20 00 -> 1 -cotx055 comparetotal 20 10 -> 1 -cotx056 comparetotal 20 20 -> 0 - -cotx061 comparetotal -2.0 -2.0 -> 0 -cotx062 comparetotal -2.0 -1.0 -> -1 -cotx063 comparetotal -2.0 0.0 -> -1 -cotx064 comparetotal -2.0 1.0 -> -1 -cotx065 comparetotal -2.0 2.0 -> -1 -cotx066 
comparetotal -1.0 -2.0 -> 1 -cotx067 comparetotal -1.0 -1.0 -> 0 -cotx068 comparetotal -1.0 0.0 -> -1 -cotx069 comparetotal -1.0 1.0 -> -1 -cotx070 comparetotal -1.0 2.0 -> -1 -cotx071 comparetotal 0.0 -2.0 -> 1 -cotx072 comparetotal 0.0 -1.0 -> 1 -cotx073 comparetotal 0.0 0.0 -> 0 -cotx074 comparetotal 0.0 1.0 -> -1 -cotx075 comparetotal 0.0 2.0 -> -1 -cotx076 comparetotal 1.0 -2.0 -> 1 -cotx077 comparetotal 1.0 -1.0 -> 1 -cotx078 comparetotal 1.0 0.0 -> 1 -cotx079 comparetotal 1.0 1.0 -> 0 -cotx080 comparetotal 1.0 2.0 -> -1 -cotx081 comparetotal 2.0 -2.0 -> 1 -cotx082 comparetotal 2.0 -1.0 -> 1 -cotx083 comparetotal 2.0 0.0 -> 1 -cotx085 comparetotal 2.0 1.0 -> 1 -cotx086 comparetotal 2.0 2.0 -> 0 - --- now some cases which might overflow if subtract were used -maxexponent: 999999999 -minexponent: -999999999 -cotx090 comparetotal 9.99999999E+999999999 9.99999999E+999999999 -> 0 -cotx091 comparetotal -9.99999999E+999999999 9.99999999E+999999999 -> -1 -cotx092 comparetotal 9.99999999E+999999999 -9.99999999E+999999999 -> 1 -cotx093 comparetotal -9.99999999E+999999999 -9.99999999E+999999999 -> 0 - --- Examples -cotx094 comparetotal 12.73 127.9 -> -1 -cotx095 comparetotal -127 12 -> -1 -cotx096 comparetotal 12.30 12.3 -> -1 -cotx097 comparetotal 12.30 12.30 -> 0 -cotx098 comparetotal 12.3 12.300 -> 1 -cotx099 comparetotal 12.3 NaN -> -1 - --- some differing length/exponent cases --- in this first group, compare would compare all equal -cotx100 comparetotal 7.0 7.0 -> 0 -cotx101 comparetotal 7.0 7 -> -1 -cotx102 comparetotal 7 7.0 -> 1 -cotx103 comparetotal 7E+0 7.0 -> 1 -cotx104 comparetotal 70E-1 7.0 -> 0 -cotx105 comparetotal 0.7E+1 7 -> 0 -cotx106 comparetotal 70E-1 7 -> -1 -cotx107 comparetotal 7.0 7E+0 -> -1 -cotx108 comparetotal 7.0 70E-1 -> 0 -cotx109 comparetotal 7 0.7E+1 -> 0 -cotx110 comparetotal 7 70E-1 -> 1 - -cotx120 comparetotal 8.0 7.0 -> 1 -cotx121 comparetotal 8.0 7 -> 1 -cotx122 comparetotal 8 7.0 -> 1 -cotx123 comparetotal 8E+0 7.0 -> 1 -cotx124 comparetotal 80E-1 7.0 -> 1 -cotx125 comparetotal 0.8E+1 7 -> 1 -cotx126 comparetotal 80E-1 7 -> 1 -cotx127 comparetotal 8.0 7E+0 -> 1 -cotx128 comparetotal 8.0 70E-1 -> 1 -cotx129 comparetotal 8 0.7E+1 -> 1 -cotx130 comparetotal 8 70E-1 -> 1 - -cotx140 comparetotal 8.0 9.0 -> -1 -cotx141 comparetotal 8.0 9 -> -1 -cotx142 comparetotal 8 9.0 -> -1 -cotx143 comparetotal 8E+0 9.0 -> -1 -cotx144 comparetotal 80E-1 9.0 -> -1 -cotx145 comparetotal 0.8E+1 9 -> -1 -cotx146 comparetotal 80E-1 9 -> -1 -cotx147 comparetotal 8.0 9E+0 -> -1 -cotx148 comparetotal 8.0 90E-1 -> -1 -cotx149 comparetotal 8 0.9E+1 -> -1 -cotx150 comparetotal 8 90E-1 -> -1 - --- and again, with sign changes -+ .. 
-cotx200 comparetotal -7.0 7.0 -> -1 -cotx201 comparetotal -7.0 7 -> -1 -cotx202 comparetotal -7 7.0 -> -1 -cotx203 comparetotal -7E+0 7.0 -> -1 -cotx204 comparetotal -70E-1 7.0 -> -1 -cotx205 comparetotal -0.7E+1 7 -> -1 -cotx206 comparetotal -70E-1 7 -> -1 -cotx207 comparetotal -7.0 7E+0 -> -1 -cotx208 comparetotal -7.0 70E-1 -> -1 -cotx209 comparetotal -7 0.7E+1 -> -1 -cotx210 comparetotal -7 70E-1 -> -1 - -cotx220 comparetotal -8.0 7.0 -> -1 -cotx221 comparetotal -8.0 7 -> -1 -cotx222 comparetotal -8 7.0 -> -1 -cotx223 comparetotal -8E+0 7.0 -> -1 -cotx224 comparetotal -80E-1 7.0 -> -1 -cotx225 comparetotal -0.8E+1 7 -> -1 -cotx226 comparetotal -80E-1 7 -> -1 -cotx227 comparetotal -8.0 7E+0 -> -1 -cotx228 comparetotal -8.0 70E-1 -> -1 -cotx229 comparetotal -8 0.7E+1 -> -1 -cotx230 comparetotal -8 70E-1 -> -1 - -cotx240 comparetotal -8.0 9.0 -> -1 -cotx241 comparetotal -8.0 9 -> -1 -cotx242 comparetotal -8 9.0 -> -1 -cotx243 comparetotal -8E+0 9.0 -> -1 -cotx244 comparetotal -80E-1 9.0 -> -1 -cotx245 comparetotal -0.8E+1 9 -> -1 -cotx246 comparetotal -80E-1 9 -> -1 -cotx247 comparetotal -8.0 9E+0 -> -1 -cotx248 comparetotal -8.0 90E-1 -> -1 -cotx249 comparetotal -8 0.9E+1 -> -1 -cotx250 comparetotal -8 90E-1 -> -1 - --- and again, with sign changes +- .. -cotx300 comparetotal 7.0 -7.0 -> 1 -cotx301 comparetotal 7.0 -7 -> 1 -cotx302 comparetotal 7 -7.0 -> 1 -cotx303 comparetotal 7E+0 -7.0 -> 1 -cotx304 comparetotal 70E-1 -7.0 -> 1 -cotx305 comparetotal .7E+1 -7 -> 1 -cotx306 comparetotal 70E-1 -7 -> 1 -cotx307 comparetotal 7.0 -7E+0 -> 1 -cotx308 comparetotal 7.0 -70E-1 -> 1 -cotx309 comparetotal 7 -.7E+1 -> 1 -cotx310 comparetotal 7 -70E-1 -> 1 - -cotx320 comparetotal 8.0 -7.0 -> 1 -cotx321 comparetotal 8.0 -7 -> 1 -cotx322 comparetotal 8 -7.0 -> 1 -cotx323 comparetotal 8E+0 -7.0 -> 1 -cotx324 comparetotal 80E-1 -7.0 -> 1 -cotx325 comparetotal .8E+1 -7 -> 1 -cotx326 comparetotal 80E-1 -7 -> 1 -cotx327 comparetotal 8.0 -7E+0 -> 1 -cotx328 comparetotal 8.0 -70E-1 -> 1 -cotx329 comparetotal 8 -.7E+1 -> 1 -cotx330 comparetotal 8 -70E-1 -> 1 - -cotx340 comparetotal 8.0 -9.0 -> 1 -cotx341 comparetotal 8.0 -9 -> 1 -cotx342 comparetotal 8 -9.0 -> 1 -cotx343 comparetotal 8E+0 -9.0 -> 1 -cotx344 comparetotal 80E-1 -9.0 -> 1 -cotx345 comparetotal .8E+1 -9 -> 1 -cotx346 comparetotal 80E-1 -9 -> 1 -cotx347 comparetotal 8.0 -9E+0 -> 1 -cotx348 comparetotal 8.0 -90E-1 -> 1 -cotx349 comparetotal 8 -.9E+1 -> 1 -cotx350 comparetotal 8 -90E-1 -> 1 - --- and again, with sign changes -- .. -cotx400 comparetotal -7.0 -7.0 -> 0 -cotx401 comparetotal -7.0 -7 -> 1 -cotx402 comparetotal -7 -7.0 -> -1 -cotx403 comparetotal -7E+0 -7.0 -> -1 -cotx404 comparetotal -70E-1 -7.0 -> 0 -cotx405 comparetotal -.7E+1 -7 -> 0 -cotx406 comparetotal -70E-1 -7 -> 1 -cotx407 comparetotal -7.0 -7E+0 -> 1 -cotx408 comparetotal -7.0 -70E-1 -> 0 -cotx409 comparetotal -7 -.7E+1 -> 0 From noreply at buildbot.pypy.org Sat Feb 4 14:17:30 2012 From: noreply at buildbot.pypy.org (arigo) Date: Sat, 4 Feb 2012 14:17:30 +0100 (CET) Subject: [pypy-commit] pypy default: Merge the string-NUL branch by amaury. Adds to the annotator the Message-ID: <20120204131730.0B7CB710770@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r52086:382248c1c015 Date: 2012-02-04 14:17 +0100 http://bitbucket.org/pypy/pypy/changeset/382248c1c015/ Log: Merge the string-NUL branch by amaury. Adds to the annotator the constraint that some external functions, like open(), must be called with strings that don't contain \x00 characters. 
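To make the effect of this merge concrete, here is a minimal sketch of how the new no_nul annotation can be observed. It is modelled on the tests added in pypy/annotation/test/test_annrpython.py in the diff below; the standalone RPythonAnnotator usage and the g()/f() helpers are illustrative assumptions, not part of the changeset itself.

    from pypy.annotation.annrpython import RPythonAnnotator
    from pypy.annotation import model as annmodel

    def g(n):
        if n:
            return "test string"     # a string constant without '\x00' is annotated no_nul=True

    def f(n):
        if n:
            return g(n).split(' ')   # split() propagates no_nul to the list items

    a = RPythonAnnotator()
    s = a.build_types(f, [int])
    assert isinstance(s, annmodel.SomeList)
    assert s.listdef.listitem.s_value.no_nul   # such strings are acceptable for open() and friends

At interpreter level the corresponding entry point is space.str0_w() (see the baseobjspace.py hunk below): it raises an app-level TypeError for strings containing a NUL byte and returns rstring.assert_str0(result), so the annotator sees a no-NUL string from that point on.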
diff --git a/pypy/annotation/binaryop.py b/pypy/annotation/binaryop.py --- a/pypy/annotation/binaryop.py +++ b/pypy/annotation/binaryop.py @@ -434,11 +434,13 @@ class __extend__(pairtype(SomeString, SomeString)): def union((str1, str2)): - return SomeString(can_be_None=str1.can_be_None or str2.can_be_None) + can_be_None = str1.can_be_None or str2.can_be_None + no_nul = str1.no_nul and str2.no_nul + return SomeString(can_be_None=can_be_None, no_nul=no_nul) def add((str1, str2)): # propagate const-ness to help getattr(obj, 'prefix' + const_name) - result = SomeString() + result = SomeString(no_nul=str1.no_nul and str2.no_nul) if str1.is_immutable_constant() and str2.is_immutable_constant(): result.const = str1.const + str2.const return result @@ -475,7 +477,16 @@ raise NotImplementedError( "string formatting mixing strings and unicode not supported") getbookkeeper().count('strformat', str, s_tuple) - return SomeString() + no_nul = str.no_nul + for s_item in s_tuple.items: + if isinstance(s_item, SomeFloat): + pass # or s_item is a subclass, like SomeInteger + elif isinstance(s_item, SomeString) and s_item.no_nul: + pass + else: + no_nul = False + break + return SomeString(no_nul=no_nul) class __extend__(pairtype(SomeString, SomeObject)): @@ -828,7 +839,7 @@ exec source.compile() in glob _make_none_union('SomeInstance', 'classdef=obj.classdef, can_be_None=True') -_make_none_union('SomeString', 'can_be_None=True') +_make_none_union('SomeString', 'no_nul=obj.no_nul, can_be_None=True') _make_none_union('SomeUnicodeString', 'can_be_None=True') _make_none_union('SomeList', 'obj.listdef') _make_none_union('SomeDict', 'obj.dictdef') diff --git a/pypy/annotation/bookkeeper.py b/pypy/annotation/bookkeeper.py --- a/pypy/annotation/bookkeeper.py +++ b/pypy/annotation/bookkeeper.py @@ -342,10 +342,11 @@ else: raise Exception("seeing a prebuilt long (value %s)" % hex(x)) elif issubclass(tp, str): # py.lib uses annotated str subclasses + no_nul = not '\x00' in x if len(x) == 1: - result = SomeChar() + result = SomeChar(no_nul=no_nul) else: - result = SomeString() + result = SomeString(no_nul=no_nul) elif tp is unicode: if len(x) == 1: result = SomeUnicodeCodePoint() diff --git a/pypy/annotation/listdef.py b/pypy/annotation/listdef.py --- a/pypy/annotation/listdef.py +++ b/pypy/annotation/listdef.py @@ -86,18 +86,19 @@ read_locations = self.read_locations.copy() other_read_locations = other.read_locations.copy() self.read_locations.update(other.read_locations) - self.patch() # which should patch all refs to 'other' s_value = self.s_value s_other_value = other.s_value s_new_value = unionof(s_value, s_other_value) + if s_new_value != s_value: + if self.dont_change_any_more: + raise TooLateForChange if isdegenerated(s_new_value): if self.bookkeeper: self.bookkeeper.ondegenerated(self, s_new_value) elif other.bookkeeper: other.bookkeeper.ondegenerated(other, s_new_value) + self.patch() # which should patch all refs to 'other' if s_new_value != s_value: - if self.dont_change_any_more: - raise TooLateForChange self.s_value = s_new_value # reflow from reading points for position_key in read_locations: @@ -222,4 +223,5 @@ MOST_GENERAL_LISTDEF = ListDef(None, SomeObject()) -s_list_of_strings = SomeList(ListDef(None, SomeString(), resized = True)) +s_list_of_strings = SomeList(ListDef(None, SomeString(no_nul=True), + resized = True)) diff --git a/pypy/annotation/model.py b/pypy/annotation/model.py --- a/pypy/annotation/model.py +++ b/pypy/annotation/model.py @@ -229,21 +229,33 @@ "Stands for an object which is known 
to be a string." knowntype = str immutable = True - def __init__(self, can_be_None=False): - self.can_be_None = can_be_None + can_be_None=False + no_nul = False # No NUL character in the string. + + def __init__(self, can_be_None=False, no_nul=False): + if can_be_None: + self.can_be_None = True + if no_nul: + self.no_nul = True def can_be_none(self): return self.can_be_None def nonnoneify(self): - return SomeString(can_be_None=False) + return SomeString(can_be_None=False, no_nul=self.no_nul) class SomeUnicodeString(SomeObject): "Stands for an object which is known to be an unicode string" knowntype = unicode immutable = True - def __init__(self, can_be_None=False): - self.can_be_None = can_be_None + can_be_None=False + no_nul = False + + def __init__(self, can_be_None=False, no_nul=False): + if can_be_None: + self.can_be_None = True + if no_nul: + self.no_nul = True def can_be_none(self): return self.can_be_None @@ -254,14 +266,16 @@ class SomeChar(SomeString): "Stands for an object known to be a string of length 1." can_be_None = False - def __init__(self): # no 'can_be_None' argument here - pass + def __init__(self, no_nul=False): # no 'can_be_None' argument here + if no_nul: + self.no_nul = True class SomeUnicodeCodePoint(SomeUnicodeString): "Stands for an object known to be a unicode codepoint." can_be_None = False - def __init__(self): # no 'can_be_None' argument here - pass + def __init__(self, no_nul=False): # no 'can_be_None' argument here + if no_nul: + self.no_nul = True SomeString.basestringclass = SomeString SomeString.basecharclass = SomeChar @@ -502,6 +516,7 @@ s_None = SomePBC([], can_be_None=True) s_Bool = SomeBool() s_ImpossibleValue = SomeImpossibleValue() +s_Str0 = SomeString(no_nul=True) # ____________________________________________________________ # weakrefs @@ -716,8 +731,7 @@ def not_const(s_obj): if s_obj.is_constant(): - new_s_obj = SomeObject() - new_s_obj.__class__ = s_obj.__class__ + new_s_obj = SomeObject.__new__(s_obj.__class__) dic = new_s_obj.__dict__ = s_obj.__dict__.copy() if 'const' in dic: del new_s_obj.const diff --git a/pypy/annotation/test/test_annrpython.py b/pypy/annotation/test/test_annrpython.py --- a/pypy/annotation/test/test_annrpython.py +++ b/pypy/annotation/test/test_annrpython.py @@ -456,6 +456,20 @@ return ''.join(g(n)) s = a.build_types(f, [int]) assert s.knowntype == str + assert s.no_nul + + def test_str_split(self): + a = self.RPythonAnnotator() + def g(n): + if n: + return "test string" + def f(n): + if n: + return g(n).split(' ') + s = a.build_types(f, [int]) + assert isinstance(s, annmodel.SomeList) + s_item = s.listdef.listitem.s_value + assert s_item.no_nul def test_str_splitlines(self): a = self.RPythonAnnotator() @@ -1841,7 +1855,7 @@ return obj.indirect() a = self.RPythonAnnotator() s = a.build_types(f, [bool]) - assert s == annmodel.SomeString(can_be_None=True) + assert annmodel.SomeString(can_be_None=True).contains(s) def test_dont_see_AttributeError_clause(self): class Stuff: @@ -2018,6 +2032,37 @@ s = a.build_types(g, [int]) assert not s.can_be_None + def test_string_noNUL_canbeNone(self): + def f(a): + if a: + return "abc" + else: + return None + a = self.RPythonAnnotator() + s = a.build_types(f, [int]) + assert s.can_be_None + assert s.no_nul + + def test_str_or_None(self): + def f(a): + if a: + return "abc" + else: + return None + def g(a): + x = f(a) + #assert x is not None + if x is None: + return "abcd" + return x + if isinstance(x, str): + return x + return "impossible" + a = self.RPythonAnnotator() + s = 
a.build_types(f, [int]) + assert s.can_be_None + assert s.no_nul + def test_emulated_pbc_call_simple(self): def f(a,b): return a + b @@ -2071,6 +2116,19 @@ assert isinstance(s, annmodel.SomeIterator) assert s.variant == ('items',) + def test_iteritems_str0(self): + def it(d): + return d.iteritems() + def f(): + d0 = {'1a': '2a', '3': '4'} + for item in it(d0): + return "%s=%s" % item + raise ValueError + a = self.RPythonAnnotator() + s = a.build_types(f, []) + assert isinstance(s, annmodel.SomeString) + assert s.no_nul + def test_non_none_and_none_with_isinstance(self): class A(object): pass diff --git a/pypy/annotation/unaryop.py b/pypy/annotation/unaryop.py --- a/pypy/annotation/unaryop.py +++ b/pypy/annotation/unaryop.py @@ -497,7 +497,8 @@ if isinstance(str, SomeUnicodeString): return immutablevalue(u"") return immutablevalue("") - return str.basestringclass() + no_nul = str.no_nul and s_item.no_nul + return str.basestringclass(no_nul=no_nul) def iter(str): return SomeIterator(str) @@ -508,18 +509,21 @@ def method_split(str, patt, max=-1): getbookkeeper().count("str_split", str, patt) - return getbookkeeper().newlist(str.basestringclass()) + s_item = str.basestringclass(no_nul=str.no_nul) + return getbookkeeper().newlist(s_item) def method_rsplit(str, patt, max=-1): getbookkeeper().count("str_rsplit", str, patt) - return getbookkeeper().newlist(str.basestringclass()) + s_item = str.basestringclass(no_nul=str.no_nul) + return getbookkeeper().newlist(s_item) def method_replace(str, s1, s2): return str.basestringclass() def getslice(str, s_start, s_stop): check_negative_slice(s_start, s_stop) - return str.basestringclass() + result = str.basestringclass(no_nul=str.no_nul) + return result class __extend__(SomeUnicodeString): def method_encode(uni, s_enc): diff --git a/pypy/interpreter/baseobjspace.py b/pypy/interpreter/baseobjspace.py --- a/pypy/interpreter/baseobjspace.py +++ b/pypy/interpreter/baseobjspace.py @@ -1312,6 +1312,15 @@ def str_w(self, w_obj): return w_obj.str_w(self) + def str0_w(self, w_obj): + "Like str_w, but rejects strings with NUL bytes." 
+ from pypy.rlib import rstring + result = w_obj.str_w(self) + if '\x00' in result: + raise OperationError(self.w_TypeError, self.wrap( + 'argument must be a string without NUL characters')) + return rstring.assert_str0(result) + def int_w(self, w_obj): return w_obj.int_w(self) diff --git a/pypy/interpreter/gateway.py b/pypy/interpreter/gateway.py --- a/pypy/interpreter/gateway.py +++ b/pypy/interpreter/gateway.py @@ -130,6 +130,9 @@ def visit_str_or_None(self, el, app_sig): self.checked_space_method(el, app_sig) + def visit_str0(self, el, app_sig): + self.checked_space_method(el, app_sig) + def visit_nonnegint(self, el, app_sig): self.checked_space_method(el, app_sig) @@ -249,6 +252,9 @@ def visit_str_or_None(self, typ): self.run_args.append("space.str_or_None_w(%s)" % (self.scopenext(),)) + def visit_str0(self, typ): + self.run_args.append("space.str0_w(%s)" % (self.scopenext(),)) + def visit_nonnegint(self, typ): self.run_args.append("space.gateway_nonnegint_w(%s)" % ( self.scopenext(),)) @@ -383,6 +389,9 @@ def visit_str_or_None(self, typ): self.unwrap.append("space.str_or_None_w(%s)" % (self.nextarg(),)) + def visit_str0(self, typ): + self.unwrap.append("space.str0_w(%s)" % (self.nextarg(),)) + def visit_nonnegint(self, typ): self.unwrap.append("space.gateway_nonnegint_w(%s)" % (self.nextarg(),)) diff --git a/pypy/interpreter/mixedmodule.py b/pypy/interpreter/mixedmodule.py --- a/pypy/interpreter/mixedmodule.py +++ b/pypy/interpreter/mixedmodule.py @@ -50,7 +50,7 @@ space.call_method(self.w_dict, 'update', self.w_initialdict) for w_submodule in self.submodules_w: - name = space.str_w(w_submodule.w_name) + name = space.str0_w(w_submodule.w_name) space.setitem(self.w_dict, space.wrap(name.split(".")[-1]), w_submodule) space.getbuiltinmodule(name) diff --git a/pypy/interpreter/module.py b/pypy/interpreter/module.py --- a/pypy/interpreter/module.py +++ b/pypy/interpreter/module.py @@ -31,7 +31,8 @@ def install(self): """NOT_RPYTHON: installs this module into space.builtin_modules""" w_mod = self.space.wrap(self) - self.space.builtin_modules[self.space.unwrap(self.w_name)] = w_mod + modulename = self.space.str0_w(self.w_name) + self.space.builtin_modules[modulename] = w_mod def setup_after_space_initialization(self): """NOT_RPYTHON: to allow built-in modules to do some more setup diff --git a/pypy/jit/metainterp/test/test_ajit.py b/pypy/jit/metainterp/test/test_ajit.py --- a/pypy/jit/metainterp/test/test_ajit.py +++ b/pypy/jit/metainterp/test/test_ajit.py @@ -322,6 +322,17 @@ res = self.interp_operations(f, [42]) assert res == ord(u"?") + def test_char_in_constant_string(self): + def g(string): + return '\x00' in string + def f(): + if g('abcdef'): return -60 + if not g('abc\x00ef'): return -61 + return 42 + res = self.interp_operations(f, []) + assert res == 42 + self.check_operations_history({'finish': 1}) # nothing else + def test_residual_call(self): @dont_look_inside def externfn(x, y): diff --git a/pypy/module/bz2/interp_bz2.py b/pypy/module/bz2/interp_bz2.py --- a/pypy/module/bz2/interp_bz2.py +++ b/pypy/module/bz2/interp_bz2.py @@ -328,7 +328,7 @@ if basemode == "a": raise OperationError(space.w_ValueError, space.wrap("cannot append to bz2 file")) - stream = open_path_helper(space.str_w(w_path), os_flags, False) + stream = open_path_helper(space.str0_w(w_path), os_flags, False) if reading: bz2stream = ReadBZ2Filter(space, stream, buffering) buffering = 0 # by construction, the ReadBZ2Filter acts like diff --git a/pypy/module/gc/interp_gc.py b/pypy/module/gc/interp_gc.py --- 
a/pypy/module/gc/interp_gc.py +++ b/pypy/module/gc/interp_gc.py @@ -49,7 +49,7 @@ # ____________________________________________________________ - at unwrap_spec(filename=str) + at unwrap_spec(filename='str0') def dump_heap_stats(space, filename): tb = rgc._heap_stats() if not tb: diff --git a/pypy/module/imp/importing.py b/pypy/module/imp/importing.py --- a/pypy/module/imp/importing.py +++ b/pypy/module/imp/importing.py @@ -138,7 +138,7 @@ ctxt_package = None if ctxt_w_package is not None and ctxt_w_package is not space.w_None: try: - ctxt_package = space.str_w(ctxt_w_package) + ctxt_package = space.str0_w(ctxt_w_package) except OperationError, e: if not e.match(space, space.w_TypeError): raise @@ -187,7 +187,7 @@ ctxt_name = None if ctxt_w_name is not None: try: - ctxt_name = space.str_w(ctxt_w_name) + ctxt_name = space.str0_w(ctxt_w_name) except OperationError, e: if not e.match(space, space.w_TypeError): raise @@ -230,7 +230,7 @@ return rel_modulename, rel_level - at unwrap_spec(name=str, level=int) + at unwrap_spec(name='str0', level=int) def importhook(space, name, w_globals=None, w_locals=None, w_fromlist=None, level=-1): modulename = name @@ -377,8 +377,8 @@ fromlist_w = space.fixedview(w_all) for w_name in fromlist_w: if try_getattr(space, w_mod, w_name) is None: - load_part(space, w_path, prefix, space.str_w(w_name), w_mod, - tentative=1) + load_part(space, w_path, prefix, space.str0_w(w_name), + w_mod, tentative=1) return w_mod else: return first @@ -432,7 +432,7 @@ def __init__(self, space): pass - @unwrap_spec(path=str) + @unwrap_spec(path='str0') def descr_init(self, space, path): if not path: raise OperationError(space.w_ImportError, space.wrap( @@ -513,7 +513,7 @@ if w_loader: return FindInfo.fromLoader(w_loader) - path = space.str_w(w_pathitem) + path = space.str0_w(w_pathitem) filepart = os.path.join(path, partname) if os.path.isdir(filepart) and case_ok(filepart): initfile = os.path.join(filepart, '__init__') @@ -671,7 +671,7 @@ space.wrap("reload() argument must be module")) w_modulename = space.getattr(w_module, space.wrap("__name__")) - modulename = space.str_w(w_modulename) + modulename = space.str0_w(w_modulename) if not space.is_w(check_sys_modules(space, w_modulename), w_module): raise operationerrfmt( space.w_ImportError, diff --git a/pypy/module/imp/interp_imp.py b/pypy/module/imp/interp_imp.py --- a/pypy/module/imp/interp_imp.py +++ b/pypy/module/imp/interp_imp.py @@ -44,7 +44,7 @@ return space.interp_w(W_File, w_file).stream def find_module(space, w_name, w_path=None): - name = space.str_w(w_name) + name = space.str0_w(w_name) if space.is_w(w_path, space.w_None): w_path = None @@ -75,7 +75,7 @@ def load_module(space, w_name, w_file, w_filename, w_info): w_suffix, w_filemode, w_modtype = space.unpackiterable(w_info) - filename = space.str_w(w_filename) + filename = space.str0_w(w_filename) filemode = space.str_w(w_filemode) if space.is_w(w_file, space.w_None): stream = None @@ -92,7 +92,7 @@ space, w_name, find_info, reuse=True) def load_source(space, w_modulename, w_filename, w_file=None): - filename = space.str_w(w_filename) + filename = space.str0_w(w_filename) stream = get_file(space, w_file, filename, 'U') @@ -105,7 +105,7 @@ stream.close() return w_mod - at unwrap_spec(filename=str) + at unwrap_spec(filename='str0') def _run_compiled_module(space, w_modulename, filename, w_file, w_module): # the function 'imp._run_compiled_module' is a pypy-only extension stream = get_file(space, w_file, filename, 'rb') @@ -119,7 +119,7 @@ if space.is_w(w_file, 
space.w_None): stream.close() - at unwrap_spec(filename=str) + at unwrap_spec(filename='str0') def load_compiled(space, w_modulename, filename, w_file=None): w_mod = space.wrap(Module(space, w_modulename)) importing._prepare_module(space, w_mod, filename, None) @@ -138,7 +138,7 @@ return space.wrap(Module(space, w_name, add_package=False)) def init_builtin(space, w_name): - name = space.str_w(w_name) + name = space.str0_w(w_name) if name not in space.builtin_modules: return if space.finditem(space.sys.get('modules'), w_name) is not None: @@ -151,7 +151,7 @@ return None def is_builtin(space, w_name): - name = space.str_w(w_name) + name = space.str0_w(w_name) if name not in space.builtin_modules: return space.wrap(0) if space.finditem(space.sys.get('modules'), w_name) is not None: diff --git a/pypy/module/posix/interp_posix.py b/pypy/module/posix/interp_posix.py --- a/pypy/module/posix/interp_posix.py +++ b/pypy/module/posix/interp_posix.py @@ -37,7 +37,7 @@ if space.isinstance_w(w_obj, space.w_unicode): w_obj = space.call_method(w_obj, 'encode', getfilesystemencoding(space)) - return space.str_w(w_obj) + return space.str0_w(w_obj) class FileEncoder(object): def __init__(self, space, w_obj): @@ -56,7 +56,7 @@ self.w_obj = w_obj def as_bytes(self): - return self.space.str_w(self.w_obj) + return self.space.str0_w(self.w_obj) def as_unicode(self): space = self.space @@ -71,7 +71,7 @@ fname = FileEncoder(space, w_fname) return func(fname, *args) else: - fname = space.str_w(w_fname) + fname = space.str0_w(w_fname) return func(fname, *args) return dispatch @@ -369,7 +369,7 @@ space.wrap(times[3]), space.wrap(times[4])]) - at unwrap_spec(cmd=str) + at unwrap_spec(cmd='str0') def system(space, cmd): """Execute the command (a string) in a subshell.""" try: @@ -401,7 +401,7 @@ fullpath = rposix._getfullpathname(path) w_fullpath = space.wrap(fullpath) else: - path = space.str_w(w_path) + path = space.str0_w(w_path) fullpath = rposix._getfullpathname(path) w_fullpath = space.wrap(fullpath) except OSError, e: @@ -512,7 +512,7 @@ for key, value in os.environ.items(): space.setitem(w_env, space.wrap(key), space.wrap(value)) - at unwrap_spec(name=str, value=str) + at unwrap_spec(name='str0', value='str0') def putenv(space, name, value): """Change or add an environment variable.""" try: @@ -520,7 +520,7 @@ except OSError, e: raise wrap_oserror(space, e) - at unwrap_spec(name=str) + at unwrap_spec(name='str0') def unsetenv(space, name): """Delete an environment variable.""" try: @@ -548,7 +548,7 @@ for s in result ] else: - dirname = space.str_w(w_dirname) + dirname = space.str0_w(w_dirname) result = rposix.listdir(dirname) result_w = [space.wrap(s) for s in result] except OSError, e: @@ -635,7 +635,7 @@ import signal os.kill(os.getpid(), signal.SIGABRT) - at unwrap_spec(src=str, dst=str) + at unwrap_spec(src='str0', dst='str0') def link(space, src, dst): "Create a hard link to a file." try: @@ -650,7 +650,7 @@ except OSError, e: raise wrap_oserror(space, e) - at unwrap_spec(path=str) + at unwrap_spec(path='str0') def readlink(space, path): "Return a string representing the path to which the symbolic link points." 
try: @@ -765,7 +765,7 @@ w_keys = space.call_method(w_env, 'keys') for w_key in space.unpackiterable(w_keys): w_value = space.getitem(w_env, w_key) - env[space.str_w(w_key)] = space.str_w(w_value) + env[space.str0_w(w_key)] = space.str0_w(w_value) return env def execve(space, w_command, w_args, w_env): @@ -785,18 +785,18 @@ except OSError, e: raise wrap_oserror(space, e) - at unwrap_spec(mode=int, path=str) + at unwrap_spec(mode=int, path='str0') def spawnv(space, mode, path, w_args): - args = [space.str_w(w_arg) for w_arg in space.unpackiterable(w_args)] + args = [space.str0_w(w_arg) for w_arg in space.unpackiterable(w_args)] try: ret = os.spawnv(mode, path, args) except OSError, e: raise wrap_oserror(space, e) return space.wrap(ret) - at unwrap_spec(mode=int, path=str) + at unwrap_spec(mode=int, path='str0') def spawnve(space, mode, path, w_args, w_env): - args = [space.str_w(w_arg) for w_arg in space.unpackiterable(w_args)] + args = [space.str0_w(w_arg) for w_arg in space.unpackiterable(w_args)] env = _env2interp(space, w_env) try: ret = os.spawnve(mode, path, args, env) @@ -914,7 +914,7 @@ raise wrap_oserror(space, e) return space.w_None - at unwrap_spec(path=str) + at unwrap_spec(path='str0') def chroot(space, path): """ chroot(path) @@ -1103,7 +1103,7 @@ except OSError, e: raise wrap_oserror(space, e) - at unwrap_spec(path=str, uid=c_uid_t, gid=c_gid_t) + at unwrap_spec(path='str0', uid=c_uid_t, gid=c_gid_t) def chown(space, path, uid, gid): check_uid_range(space, uid) check_uid_range(space, gid) @@ -1113,7 +1113,7 @@ raise wrap_oserror(space, e, path) return space.w_None - at unwrap_spec(path=str, uid=c_uid_t, gid=c_gid_t) + at unwrap_spec(path='str0', uid=c_uid_t, gid=c_gid_t) def lchown(space, path, uid, gid): check_uid_range(space, uid) check_uid_range(space, gid) diff --git a/pypy/module/sys/state.py b/pypy/module/sys/state.py --- a/pypy/module/sys/state.py +++ b/pypy/module/sys/state.py @@ -74,7 +74,7 @@ # return importlist - at unwrap_spec(srcdir=str) + at unwrap_spec(srcdir='str0') def pypy_initial_path(space, srcdir): try: path = getinitialpath(get(space), srcdir) diff --git a/pypy/module/zipimport/interp_zipimport.py b/pypy/module/zipimport/interp_zipimport.py --- a/pypy/module/zipimport/interp_zipimport.py +++ b/pypy/module/zipimport/interp_zipimport.py @@ -342,7 +342,7 @@ space = self.space return space.wrap(self.filename) - at unwrap_spec(name=str) + at unwrap_spec(name='str0') def descr_new_zipimporter(space, w_type, name): w = space.wrap ok = False diff --git a/pypy/rlib/rstring.py b/pypy/rlib/rstring.py --- a/pypy/rlib/rstring.py +++ b/pypy/rlib/rstring.py @@ -205,3 +205,45 @@ assert p.const is None return SomeUnicodeBuilder(can_be_None=True) +#___________________________________________________________________ +# Support functions for SomeString.no_nul + +def assert_str0(fname): + assert '\x00' not in fname, "NUL byte in string" + return fname + +class Entry(ExtRegistryEntry): + _about_ = assert_str0 + + def compute_result_annotation(self, s_obj): + if s_None.contains(s_obj): + return s_obj + assert isinstance(s_obj, (SomeString, SomeUnicodeString)) + if s_obj.no_nul: + return s_obj + new_s_obj = SomeObject.__new__(s_obj.__class__) + new_s_obj.__dict__ = s_obj.__dict__.copy() + new_s_obj.no_nul = True + return new_s_obj + + def specialize_call(self, hop): + hop.exception_cannot_occur() + return hop.inputarg(hop.args_r[0], arg=0) + +def check_str0(fname): + """A 'probe' to trigger a failure at translation time, if the + string was not proved to not contain NUL 
characters.""" + assert '\x00' not in fname, "NUL byte in string" + +class Entry(ExtRegistryEntry): + _about_ = check_str0 + + def compute_result_annotation(self, s_obj): + if not isinstance(s_obj, (SomeString, SomeUnicodeString)): + return s_obj + if not s_obj.no_nul: + raise ValueError("Value is not no_nul") + + def specialize_call(self, hop): + pass + diff --git a/pypy/rlib/test/test_rmarshal.py b/pypy/rlib/test/test_rmarshal.py --- a/pypy/rlib/test/test_rmarshal.py +++ b/pypy/rlib/test/test_rmarshal.py @@ -169,7 +169,7 @@ assert st2.st_mode == st.st_mode assert st2[9] == st[9] return buf - fn = compile(f, [str]) + fn = compile(f, [annmodel.s_Str0]) res = fn('.') st = os.stat('.') sttuple = marshal.loads(res) diff --git a/pypy/rpython/extfuncregistry.py b/pypy/rpython/extfuncregistry.py --- a/pypy/rpython/extfuncregistry.py +++ b/pypy/rpython/extfuncregistry.py @@ -85,7 +85,8 @@ # llinterpreter path_functions = [ - ('join', [str, str], str), + ('join', [ll_os.str0, ll_os.str0], ll_os.str0), + ('dirname', [ll_os.str0], ll_os.str0), ] for name, args, res in path_functions: diff --git a/pypy/rpython/lltypesystem/rffi.py b/pypy/rpython/lltypesystem/rffi.py --- a/pypy/rpython/lltypesystem/rffi.py +++ b/pypy/rpython/lltypesystem/rffi.py @@ -15,7 +15,7 @@ from pypy.translator.tool.cbuild import ExternalCompilationInfo from pypy.rpython.annlowlevel import llhelper from pypy.rlib.objectmodel import we_are_translated -from pypy.rlib.rstring import StringBuilder, UnicodeBuilder +from pypy.rlib.rstring import StringBuilder, UnicodeBuilder, assert_str0 from pypy.rlib import jit from pypy.rpython.lltypesystem import llmemory import os, sys @@ -698,7 +698,7 @@ while cp[i] != lastchar: b.append(cp[i]) i += 1 - return b.build() + return assert_str0(b.build()) # str -> char* # Can't inline this because of the raw address manipulation. 
@@ -804,7 +804,7 @@ while i < maxlen and cp[i] != lastchar: b.append(cp[i]) i += 1 - return b.build() + return assert_str0(b.build()) # char* and size -> str (which can contain null bytes) def charpsize2str(cp, size): @@ -842,6 +842,7 @@ array[i] = str2charp(l[i]) array[len(l)] = lltype.nullptr(CCHARP.TO) return array +liststr2charpp._annenforceargs_ = [[annmodel.s_Str0]] # List of strings def free_charpp(ref): """ frees list of char**, NULL terminated diff --git a/pypy/rpython/module/ll_os.py b/pypy/rpython/module/ll_os.py --- a/pypy/rpython/module/ll_os.py +++ b/pypy/rpython/module/ll_os.py @@ -31,6 +31,10 @@ from pypy.rlib import rgc from pypy.rlib.objectmodel import specialize +str0 = SomeString(no_nul=True) +unicode0 = SomeUnicodeString(no_nul=True) + + def monkeypatch_rposix(posixfunc, unicodefunc, signature): func_name = posixfunc.__name__ @@ -68,6 +72,7 @@ class StringTraits: str = str + str0 = str0 CHAR = rffi.CHAR CCHARP = rffi.CCHARP charp2str = staticmethod(rffi.charp2str) @@ -85,6 +90,7 @@ class UnicodeTraits: str = unicode + str0 = unicode0 CHAR = rffi.WCHAR_T CCHARP = rffi.CWCHARP charp2str = staticmethod(rffi.wcharp2unicode) @@ -301,7 +307,7 @@ rffi.free_charpp(l_args) raise OSError(rposix.get_errno(), "execv failed") - return extdef([str, [str]], s_ImpossibleValue, llimpl=execv_llimpl, + return extdef([str0, [str0]], s_ImpossibleValue, llimpl=execv_llimpl, export_name="ll_os.ll_os_execv") @@ -319,7 +325,8 @@ # appropriate envstrs = [] for item in env.iteritems(): - envstrs.append("%s=%s" % item) + envstr = "%s=%s" % item + envstrs.append(envstr) l_args = rffi.liststr2charpp(args) l_env = rffi.liststr2charpp(envstrs) @@ -332,7 +339,7 @@ raise OSError(rposix.get_errno(), "execve failed") return extdef( - [str, [str], {str: str}], + [str0, [str0], {str0: str0}], s_ImpossibleValue, llimpl=execve_llimpl, export_name="ll_os.ll_os_execve") @@ -353,7 +360,7 @@ raise OSError(rposix.get_errno(), "os_spawnv failed") return rffi.cast(lltype.Signed, childpid) - return extdef([int, str, [str]], int, llimpl=spawnv_llimpl, + return extdef([int, str0, [str0]], int, llimpl=spawnv_llimpl, export_name="ll_os.ll_os_spawnv") @registering_if(os, 'spawnve') @@ -378,7 +385,7 @@ raise OSError(rposix.get_errno(), "os_spawnve failed") return rffi.cast(lltype.Signed, childpid) - return extdef([int, str, [str], {str: str}], int, + return extdef([int, str0, [str0], {str0: str0}], int, llimpl=spawnve_llimpl, export_name="ll_os.ll_os_spawnve") @@ -517,7 +524,7 @@ else: raise Exception("os.utime() arg 2 must be None or a tuple of " "2 floats, got %s" % (s_times,)) - os_utime_normalize_args._default_signature_ = [traits.str, None] + os_utime_normalize_args._default_signature_ = [traits.str0, None] return extdef(os_utime_normalize_args, s_None, "ll_os.ll_os_utime", @@ -612,7 +619,7 @@ if result == -1: raise OSError(rposix.get_errno(), "os_chroot failed") - return extdef([str], None, export_name="ll_os.ll_os_chroot", + return extdef([str0], None, export_name="ll_os.ll_os_chroot", llimpl=chroot_llimpl) @registering_if(os, 'uname') @@ -816,7 +823,7 @@ def os_open_oofakeimpl(path, flags, mode): return os.open(OOSupport.from_rstr(path), flags, mode) - return extdef([traits.str, int, int], int, traits.ll_os_name('open'), + return extdef([str0, int, int], int, traits.ll_os_name('open'), llimpl=os_open_llimpl, oofakeimpl=os_open_oofakeimpl) @registering_if(os, 'getloadavg') @@ -1050,7 +1057,7 @@ def os_access_oofakeimpl(path, mode): return os.access(OOSupport.from_rstr(path), mode) - return extdef([traits.str, int], 
s_Bool, llimpl=access_llimpl, + return extdef([traits.str0, int], s_Bool, llimpl=access_llimpl, export_name=traits.ll_os_name("access"), oofakeimpl=os_access_oofakeimpl) @@ -1062,8 +1069,8 @@ from pypy.rpython.module.ll_win32file import make_getfullpathname_impl getfullpathname_llimpl = make_getfullpathname_impl(traits) - return extdef([traits.str], # a single argument which is a str - traits.str, # returns a string + return extdef([traits.str0], # a single argument which is a str + traits.str0, # returns a string traits.ll_os_name('_getfullpathname'), llimpl=getfullpathname_llimpl) @@ -1174,8 +1181,8 @@ raise OSError(error, "os_readdir failed") return result - return extdef([traits.str], # a single argument which is a str - [traits.str], # returns a list of strings + return extdef([traits.str0], # a single argument which is a str + [traits.str0], # returns a list of strings traits.ll_os_name('listdir'), llimpl=os_listdir_llimpl) @@ -1241,7 +1248,7 @@ if res == -1: raise OSError(rposix.get_errno(), "os_chown failed") - return extdef([str, int, int], None, "ll_os.ll_os_chown", + return extdef([str0, int, int], None, "ll_os.ll_os_chown", llimpl=os_chown_llimpl) @registering_if(os, 'lchown') @@ -1254,7 +1261,7 @@ if res == -1: raise OSError(rposix.get_errno(), "os_lchown failed") - return extdef([str, int, int], None, "ll_os.ll_os_lchown", + return extdef([str0, int, int], None, "ll_os.ll_os_lchown", llimpl=os_lchown_llimpl) @registering_if(os, 'readlink') @@ -1283,12 +1290,11 @@ lltype.free(buf, flavor='raw') bufsize *= 4 # convert the result to a string - l = [buf[i] for i in range(res)] - result = ''.join(l) + result = rffi.charp2strn(buf, res) lltype.free(buf, flavor='raw') return result - return extdef([str], str, + return extdef([str0], str0, "ll_os.ll_os_readlink", llimpl=os_readlink_llimpl) @@ -1361,7 +1367,7 @@ res = os_system(command) return rffi.cast(lltype.Signed, res) - return extdef([str], int, llimpl=system_llimpl, + return extdef([str0], int, llimpl=system_llimpl, export_name="ll_os.ll_os_system") @registering_str_unicode(os.unlink) @@ -1383,7 +1389,7 @@ if not win32traits.DeleteFile(path): raise rwin32.lastWindowsError() - return extdef([traits.str], s_None, llimpl=unlink_llimpl, + return extdef([traits.str0], s_None, llimpl=unlink_llimpl, export_name=traits.ll_os_name('unlink')) @registering_str_unicode(os.chdir) @@ -1401,7 +1407,7 @@ from pypy.rpython.module.ll_win32file import make_chdir_impl os_chdir_llimpl = make_chdir_impl(traits) - return extdef([traits.str], s_None, llimpl=os_chdir_llimpl, + return extdef([traits.str0], s_None, llimpl=os_chdir_llimpl, export_name=traits.ll_os_name('chdir')) @registering_str_unicode(os.mkdir) @@ -1424,7 +1430,7 @@ if res < 0: raise OSError(rposix.get_errno(), "os_mkdir failed") - return extdef([traits.str, int], s_None, llimpl=os_mkdir_llimpl, + return extdef([traits.str0, int], s_None, llimpl=os_mkdir_llimpl, export_name=traits.ll_os_name('mkdir')) @registering_str_unicode(os.rmdir) @@ -1437,7 +1443,7 @@ if res < 0: raise OSError(rposix.get_errno(), "os_rmdir failed") - return extdef([traits.str], s_None, llimpl=rmdir_llimpl, + return extdef([traits.str0], s_None, llimpl=rmdir_llimpl, export_name=traits.ll_os_name('rmdir')) @registering_str_unicode(os.chmod) @@ -1454,7 +1460,7 @@ from pypy.rpython.module.ll_win32file import make_chmod_impl chmod_llimpl = make_chmod_impl(traits) - return extdef([traits.str, int], s_None, llimpl=chmod_llimpl, + return extdef([traits.str0, int], s_None, llimpl=chmod_llimpl, 
export_name=traits.ll_os_name('chmod')) @registering_str_unicode(os.rename) @@ -1476,7 +1482,7 @@ if not win32traits.MoveFile(oldpath, newpath): raise rwin32.lastWindowsError() - return extdef([traits.str, traits.str], s_None, llimpl=rename_llimpl, + return extdef([traits.str0, traits.str0], s_None, llimpl=rename_llimpl, export_name=traits.ll_os_name('rename')) @registering_str_unicode(getattr(os, 'mkfifo', None)) @@ -1489,7 +1495,7 @@ if res < 0: raise OSError(rposix.get_errno(), "os_mkfifo failed") - return extdef([traits.str, int], s_None, llimpl=mkfifo_llimpl, + return extdef([traits.str0, int], s_None, llimpl=mkfifo_llimpl, export_name=traits.ll_os_name('mkfifo')) @registering_str_unicode(getattr(os, 'mknod', None)) @@ -1503,7 +1509,7 @@ if res < 0: raise OSError(rposix.get_errno(), "os_mknod failed") - return extdef([traits.str, int, int], s_None, llimpl=mknod_llimpl, + return extdef([traits.str0, int, int], s_None, llimpl=mknod_llimpl, export_name=traits.ll_os_name('mknod')) @registering(os.umask) @@ -1555,7 +1561,7 @@ if res < 0: raise OSError(rposix.get_errno(), "os_link failed") - return extdef([str, str], s_None, llimpl=link_llimpl, + return extdef([str0, str0], s_None, llimpl=link_llimpl, export_name="ll_os.ll_os_link") @registering_if(os, 'symlink') @@ -1568,7 +1574,7 @@ if res < 0: raise OSError(rposix.get_errno(), "os_symlink failed") - return extdef([str, str], s_None, llimpl=symlink_llimpl, + return extdef([str0, str0], s_None, llimpl=symlink_llimpl, export_name="ll_os.ll_os_symlink") @registering_if(os, 'fork') diff --git a/pypy/rpython/module/ll_os_environ.py b/pypy/rpython/module/ll_os_environ.py --- a/pypy/rpython/module/ll_os_environ.py +++ b/pypy/rpython/module/ll_os_environ.py @@ -5,6 +5,8 @@ from pypy.rpython.lltypesystem import rffi, lltype from pypy.rlib import rposix +str0 = annmodel.s_Str0 + # ____________________________________________________________ # # Annotation support to control access to 'os.environ' in the RPython program @@ -64,7 +66,7 @@ rffi.free_charp(l_name) return result -register_external(r_getenv, [str], annmodel.SomeString(can_be_None=True), +register_external(r_getenv, [str0], annmodel.SomeString(can_be_None=True), export_name='ll_os.ll_os_getenv', llimpl=getenv_llimpl) @@ -93,7 +95,7 @@ if l_oldstring: rffi.free_charp(l_oldstring) -register_external(r_putenv, [str, str], annmodel.s_None, +register_external(r_putenv, [str0, str0], annmodel.s_None, export_name='ll_os.ll_os_putenv', llimpl=putenv_llimpl) @@ -128,7 +130,7 @@ del envkeepalive.byname[name] rffi.free_charp(l_oldstring) - register_external(r_unsetenv, [str], annmodel.s_None, + register_external(r_unsetenv, [str0], annmodel.s_None, export_name='ll_os.ll_os_unsetenv', llimpl=unsetenv_llimpl) @@ -172,7 +174,7 @@ i += 1 return result -register_external(r_envkeys, [], [str], # returns a list of strings +register_external(r_envkeys, [], [str0], # returns a list of strings export_name='ll_os.ll_os_envkeys', llimpl=envkeys_llimpl) @@ -193,6 +195,6 @@ i += 1 return result -register_external(r_envitems, [], [(str, str)], +register_external(r_envitems, [], [(str0, str0)], export_name='ll_os.ll_os_envitems', llimpl=envitems_llimpl) diff --git a/pypy/rpython/module/ll_os_stat.py b/pypy/rpython/module/ll_os_stat.py --- a/pypy/rpython/module/ll_os_stat.py +++ b/pypy/rpython/module/ll_os_stat.py @@ -236,7 +236,7 @@ def register_stat_variant(name, traits): if name != 'fstat': arg_is_path = True - s_arg = traits.str + s_arg = traits.str0 ARG1 = traits.CCHARP else: arg_is_path = False @@ -251,8 
+251,6 @@ [s_arg], s_StatResult, traits.ll_os_name(name), llimpl=posix_stat_llimpl) - assert traits.str is str - if sys.platform.startswith('linux'): # because we always use _FILE_OFFSET_BITS 64 - this helps things work that are not a c compiler _functions = {'stat': 'stat64', @@ -283,7 +281,7 @@ @func_renamer('os_%s_fake' % (name,)) def posix_fakeimpl(arg): - if s_arg == str: + if s_arg == traits.str0: arg = hlstr(arg) st = getattr(os, name)(arg) fields = [TYPE for fieldname, TYPE in STAT_FIELDS] diff --git a/pypy/rpython/ootypesystem/test/test_ooann.py b/pypy/rpython/ootypesystem/test/test_ooann.py --- a/pypy/rpython/ootypesystem/test/test_ooann.py +++ b/pypy/rpython/ootypesystem/test/test_ooann.py @@ -231,7 +231,7 @@ a = RPythonAnnotator() s = a.build_types(oof, [bool]) - assert s == annmodel.SomeString(can_be_None=True) + assert annmodel.SomeString(can_be_None=True).contains(s) def test_oostring(): def oof(): diff --git a/pypy/translator/c/test/test_extfunc.py b/pypy/translator/c/test/test_extfunc.py --- a/pypy/translator/c/test/test_extfunc.py +++ b/pypy/translator/c/test/test_extfunc.py @@ -3,6 +3,7 @@ import os, time, sys from pypy.tool.udir import udir from pypy.rlib.rarithmetic import r_longlong +from pypy.annotation import model as annmodel from pypy.translator.c.test.test_genc import compile from pypy.translator.c.test.test_standalone import StandaloneTests posix = __import__(os.name) @@ -145,7 +146,7 @@ filename = str(py.path.local(__file__)) def call_access(path, mode): return os.access(path, mode) - f = compile(call_access, [str, int]) + f = compile(call_access, [annmodel.s_Str0, int]) for mode in os.R_OK, os.W_OK, os.X_OK, (os.R_OK | os.W_OK | os.X_OK): assert f(filename, mode) == os.access(filename, mode) @@ -225,7 +226,7 @@ def test_system(): def does_stuff(cmd): return os.system(cmd) - f1 = compile(does_stuff, [str]) + f1 = compile(does_stuff, [annmodel.s_Str0]) res = f1("echo hello") assert res == 0 @@ -311,7 +312,7 @@ def test_chdir(): def does_stuff(path): os.chdir(path) - f1 = compile(does_stuff, [str]) + f1 = compile(does_stuff, [annmodel.s_Str0]) curdir = os.getcwd() try: os.chdir('..') @@ -325,7 +326,7 @@ os.rmdir(path) else: os.mkdir(path, 0777) - f1 = compile(does_stuff, [str, bool]) + f1 = compile(does_stuff, [annmodel.s_Str0, bool]) dirname = str(udir.join('test_mkdir_rmdir')) f1(dirname, False) assert os.path.exists(dirname) and os.path.isdir(dirname) @@ -628,7 +629,7 @@ return os.environ[s] except KeyError: return '--missing--' - func = compile(fn, [str]) + func = compile(fn, [annmodel.s_Str0]) os.environ.setdefault('USER', 'UNNAMED_USER') result = func('USER') assert result == os.environ['USER'] @@ -640,7 +641,7 @@ res = os.environ.get(s) if res is None: res = '--missing--' return res - func = compile(fn, [str]) + func = compile(fn, [annmodel.s_Str0]) os.environ.setdefault('USER', 'UNNAMED_USER') result = func('USER') assert result == os.environ['USER'] @@ -654,7 +655,7 @@ os.environ[s] = t3 os.environ[s] = t4 os.environ[s] = t5 - func = compile(fn, [str, str, str, str, str, str]) + func = compile(fn, [annmodel.s_Str0] * 6) func('PYPY_TEST_DICTLIKE_ENVIRON', 'a', 'b', 'c', 'FOOBAR', '42', expected_extra_mallocs = (2, 3, 4)) # at least two, less than 5 assert _real_getenv('PYPY_TEST_DICTLIKE_ENVIRON') == '42' @@ -678,7 +679,7 @@ else: raise Exception("should have raised!") # os.environ[s5] stays - func = compile(fn, [str, str, str, str, str]) + func = compile(fn, [annmodel.s_Str0] * 5) if hasattr(__import__(os.name), 'unsetenv'): expected_extra_mallocs = 
range(2, 10) # at least 2, less than 10: memory for s1, s2, s3, s4 should be freed @@ -743,7 +744,7 @@ raise AssertionError("should have failed!") result = os.listdir(s) return '/'.join(result) - func = compile(mylistdir, [str]) + func = compile(mylistdir, [annmodel.s_Str0]) for testdir in [str(udir), os.curdir]: result = func(testdir) result = result.split('/') diff --git a/pypy/translator/cli/test/runtest.py b/pypy/translator/cli/test/runtest.py --- a/pypy/translator/cli/test/runtest.py +++ b/pypy/translator/cli/test/runtest.py @@ -276,7 +276,7 @@ def get_annotation(x): if isinstance(x, basestring) and len(x) > 1: - return SomeString() + return SomeString(no_nul='\x00' not in x) else: return lltype_to_annotation(typeOf(x)) diff --git a/pypy/translator/driver.py b/pypy/translator/driver.py --- a/pypy/translator/driver.py +++ b/pypy/translator/driver.py @@ -184,6 +184,7 @@ self.standalone = standalone if standalone: + # the 'argv' parameter inputtypes = [s_list_of_strings] self.inputtypes = inputtypes diff --git a/pypy/translator/goal/nanos.py b/pypy/translator/goal/nanos.py --- a/pypy/translator/goal/nanos.py +++ b/pypy/translator/goal/nanos.py @@ -266,7 +266,7 @@ raise NotImplementedError("os.name == %r" % (os.name,)) def getenv(space, w_name): - name = space.str_w(w_name) + name = space.str0_w(w_name) return space.wrap(os.environ.get(name)) getenv_w = interp2app(getenv) From noreply at buildbot.pypy.org Sat Feb 4 14:18:08 2012 From: noreply at buildbot.pypy.org (arigo) Date: Sat, 4 Feb 2012 14:18:08 +0100 (CET) Subject: [pypy-commit] pypy string-NUL: close merged branch Message-ID: <20120204131808.A1977710770@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: string-NUL Changeset: r52087:43148440cc3f Date: 2012-02-04 14:17 +0100 http://bitbucket.org/pypy/pypy/changeset/43148440cc3f/ Log: close merged branch From noreply at buildbot.pypy.org Sat Feb 4 15:01:35 2012 From: noreply at buildbot.pypy.org (arigo) Date: Sat, 4 Feb 2012 15:01:35 +0100 (CET) Subject: [pypy-commit] pypy default: Hackish fix for issue978: make sure the virtualizable stays alive Message-ID: <20120204140135.4AD81710770@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r52088:4033391a3442 Date: 2012-02-04 14:42 +0100 http://bitbucket.org/pypy/pypy/changeset/4033391a3442/ Log: Hackish fix for issue978: make sure the virtualizable stays alive across the CALL_ASSEMBLER, by giving it as a useless argument to the GUARD_NOT_FORCED that follows. (Testless checkin, to see if it works...) 
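
For context, a hypothetical sketch of the kind of code where this matters (the names below are made up, and this is not the actual issue978 reproducer, which is not quoted in this archive): once f() has been compiled, the recursive call turns into a CALL_ASSEMBLER in the caller's trace, and 'frame', the caller's virtualizable, has to stay alive across that call.

    from pypy.rlib.jit import JitDriver

    class Frame(object):
        _virtualizable2_ = ['n']
        def __init__(self, n):
            self.n = n

    driver = JitDriver(greens=[], reds=['frame'], virtualizables=['frame'])

    def f(n):
        frame = Frame(n)
        while frame.n > 0:
            driver.can_enter_jit(frame=frame)
            driver.jit_merge_point(frame=frame)
            if frame.n % 7 == 0:
                f(frame.n - 1)       # recursive call -> CALL_ASSEMBLER
            frame.n -= 1
        return 0
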
diff --git a/pypy/jit/backend/llgraph/llimpl.py b/pypy/jit/backend/llgraph/llimpl.py --- a/pypy/jit/backend/llgraph/llimpl.py +++ b/pypy/jit/backend/llgraph/llimpl.py @@ -174,7 +174,7 @@ 'debug_merge_point': (('ref', 'int'), None), 'force_token' : ((), 'int'), 'call_may_force' : (('int', 'varargs'), 'intorptr'), - 'guard_not_forced': ((), None), + 'guard_not_forced': (('ref',), None), } # ____________________________________________________________ @@ -1053,7 +1053,7 @@ finally: self._may_force = -1 - def op_guard_not_forced(self, descr): + def op_guard_not_forced(self, descr, vable_ignored): forced = self._forced self._forced = False if forced: diff --git a/pypy/jit/metainterp/pyjitpl.py b/pypy/jit/metainterp/pyjitpl.py --- a/pypy/jit/metainterp/pyjitpl.py +++ b/pypy/jit/metainterp/pyjitpl.py @@ -1346,12 +1346,15 @@ resbox = self.metainterp.execute_and_record_varargs( rop.CALL_MAY_FORCE, allboxes, descr=descr) self.metainterp.vrefs_after_residual_call() + vablebox = None if assembler_call: - self.metainterp.direct_assembler_call(assembler_call_jd) + vablebox = self.metainterp.direct_assembler_call( + assembler_call_jd) if resbox is not None: self.make_result_of_lastop(resbox) self.metainterp.vable_after_residual_call() - self.generate_guard(rop.GUARD_NOT_FORCED, None) + self.generate_guard(rop.GUARD_NOT_FORCED, + extraargs=[vablebox or history.CONST_NULL]) self.metainterp.handle_possible_exception() return resbox else: @@ -2478,6 +2481,15 @@ token = warmrunnerstate.get_assembler_token(greenargs) op = op.copy_and_change(rop.CALL_ASSEMBLER, args=args, descr=token) self.history.operations.append(op) + # + # To fix an obscure issue, make sure the vable stays alive + # longer than the CALL_ASSEMBLER operation. We stick it + # randomly to the GUARD_NOT_FORCED that follows. + jd = token.outermost_jitdriver_sd + if jd.index_of_virtualizable >= 0: + return args[jd.index_of_virtualizable] + else: + return None # ____________________________________________________________ diff --git a/pypy/jit/metainterp/resoperation.py b/pypy/jit/metainterp/resoperation.py --- a/pypy/jit/metainterp/resoperation.py +++ b/pypy/jit/metainterp/resoperation.py @@ -391,7 +391,7 @@ 'GUARD_EXCEPTION/1d', # may be called with an exception currently set 'GUARD_NO_OVERFLOW/0d', 'GUARD_OVERFLOW/0d', - 'GUARD_NOT_FORCED/0d', # may be called with an exception currently set + 'GUARD_NOT_FORCED/1d', # may be called with an exception currently set 'GUARD_NOT_INVALIDATED/0d', '_GUARD_LAST', # ----- end of guard operations ----- From noreply at buildbot.pypy.org Sat Feb 4 15:32:16 2012 From: noreply at buildbot.pypy.org (arigo) Date: Sat, 4 Feb 2012 15:32:16 +0100 (CET) Subject: [pypy-commit] pypy default: Backed out changeset 4033391a3442 Message-ID: <20120204143216.D1A68710770@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r52089:28c54434dea0 Date: 2012-02-04 15:31 +0100 http://bitbucket.org/pypy/pypy/changeset/28c54434dea0/ Log: Backed out changeset 4033391a3442 ...no, it does not work, because the CALL_ASSEMBLER/GUARD_NOT_FORCED are paired in the backend and the virtualizable still doesn't survive past that pair... 
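
To make the constraint concrete, a sketch with placeholder boxes (the None descrs stand in for the loop token and the guard's resume descr): the backend consumes the CALL_ASSEMBLER/GUARD_NOT_FORCED pair as a single unit, so only an operation placed after the pair can keep the virtualizable alive long enough -- which is what changeset 458e381ff84d further down in this thread does, by recording an explicit KEEPALIVE.

    from pypy.jit.metainterp.history import BoxInt, BoxPtr
    from pypy.jit.metainterp.resoperation import ResOperation, rop

    box_frame = BoxPtr()        # placeholder for the virtualizable
    box_result = BoxInt()
    operations = [
        ResOperation(rop.CALL_ASSEMBLER, [box_frame], box_result, descr=None),
        ResOperation(rop.GUARD_NOT_FORCED, [], None, descr=None),
        # rop.KEEPALIVE only exists once 458e381ff84d (below) is applied:
        ResOperation(rop.KEEPALIVE, [box_frame], None),
    ]
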
diff --git a/pypy/jit/backend/llgraph/llimpl.py b/pypy/jit/backend/llgraph/llimpl.py --- a/pypy/jit/backend/llgraph/llimpl.py +++ b/pypy/jit/backend/llgraph/llimpl.py @@ -174,7 +174,7 @@ 'debug_merge_point': (('ref', 'int'), None), 'force_token' : ((), 'int'), 'call_may_force' : (('int', 'varargs'), 'intorptr'), - 'guard_not_forced': (('ref',), None), + 'guard_not_forced': ((), None), } # ____________________________________________________________ @@ -1053,7 +1053,7 @@ finally: self._may_force = -1 - def op_guard_not_forced(self, descr, vable_ignored): + def op_guard_not_forced(self, descr): forced = self._forced self._forced = False if forced: diff --git a/pypy/jit/metainterp/pyjitpl.py b/pypy/jit/metainterp/pyjitpl.py --- a/pypy/jit/metainterp/pyjitpl.py +++ b/pypy/jit/metainterp/pyjitpl.py @@ -1346,15 +1346,12 @@ resbox = self.metainterp.execute_and_record_varargs( rop.CALL_MAY_FORCE, allboxes, descr=descr) self.metainterp.vrefs_after_residual_call() - vablebox = None if assembler_call: - vablebox = self.metainterp.direct_assembler_call( - assembler_call_jd) + self.metainterp.direct_assembler_call(assembler_call_jd) if resbox is not None: self.make_result_of_lastop(resbox) self.metainterp.vable_after_residual_call() - self.generate_guard(rop.GUARD_NOT_FORCED, - extraargs=[vablebox or history.CONST_NULL]) + self.generate_guard(rop.GUARD_NOT_FORCED, None) self.metainterp.handle_possible_exception() return resbox else: @@ -2481,15 +2478,6 @@ token = warmrunnerstate.get_assembler_token(greenargs) op = op.copy_and_change(rop.CALL_ASSEMBLER, args=args, descr=token) self.history.operations.append(op) - # - # To fix an obscure issue, make sure the vable stays alive - # longer than the CALL_ASSEMBLER operation. We stick it - # randomly to the GUARD_NOT_FORCED that follows. 
- jd = token.outermost_jitdriver_sd - if jd.index_of_virtualizable >= 0: - return args[jd.index_of_virtualizable] - else: - return None # ____________________________________________________________ diff --git a/pypy/jit/metainterp/resoperation.py b/pypy/jit/metainterp/resoperation.py --- a/pypy/jit/metainterp/resoperation.py +++ b/pypy/jit/metainterp/resoperation.py @@ -391,7 +391,7 @@ 'GUARD_EXCEPTION/1d', # may be called with an exception currently set 'GUARD_NO_OVERFLOW/0d', 'GUARD_OVERFLOW/0d', - 'GUARD_NOT_FORCED/1d', # may be called with an exception currently set + 'GUARD_NOT_FORCED/0d', # may be called with an exception currently set 'GUARD_NOT_INVALIDATED/0d', '_GUARD_LAST', # ----- end of guard operations ----- From noreply at buildbot.pypy.org Sat Feb 4 15:53:21 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Sat, 4 Feb 2012 15:53:21 +0100 (CET) Subject: [pypy-commit] pypy default: cpyext: add Py_DebugFlag Message-ID: <20120204145321.8F6C7710770@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: Changeset: r52090:f1ec2fb12b14 Date: 2012-02-02 00:09 +0100 http://bitbucket.org/pypy/pypy/changeset/f1ec2fb12b14/ Log: cpyext: add Py_DebugFlag diff --git a/pypy/module/cpyext/include/pythonrun.h b/pypy/module/cpyext/include/pythonrun.h --- a/pypy/module/cpyext/include/pythonrun.h +++ b/pypy/module/cpyext/include/pythonrun.h @@ -13,6 +13,7 @@ #define Py_FrozenFlag 0 #define Py_VerboseFlag 0 +#define Py_DebugFlag 1 typedef struct { int cf_flags; /* bitmask of CO_xxx flags relevant to future */ From noreply at buildbot.pypy.org Sat Feb 4 15:53:22 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Sat, 4 Feb 2012 15:53:22 +0100 (CET) Subject: [pypy-commit] pypy default: str.strip() preserves the no_nul-ness of a string. Message-ID: <20120204145322.E0846710770@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: Changeset: r52091:e0a0dfed7e85 Date: 2012-02-04 15:52 +0100 http://bitbucket.org/pypy/pypy/changeset/e0a0dfed7e85/ Log: str.strip() preserves the no_nul-ness of a string. 
diff --git a/pypy/annotation/test/test_annrpython.py b/pypy/annotation/test/test_annrpython.py --- a/pypy/annotation/test/test_annrpython.py +++ b/pypy/annotation/test/test_annrpython.py @@ -479,6 +479,18 @@ assert isinstance(s, annmodel.SomeList) assert s.listdef.listitem.resized + def test_str_strip(self): + a = self.RPythonAnnotator() + def f(n, a_str): + if n == 0: + return a_str.strip(' ') + elif n == 1: + return a_str.rstrip(' ') + else: + return a_str.lstrip(' ') + s = a.build_types(f, [int, annmodel.SomeString(no_nul=True)]) + assert s.no_nul + def test_str_mul(self): a = self.RPythonAnnotator() def f(a_str): diff --git a/pypy/annotation/unaryop.py b/pypy/annotation/unaryop.py --- a/pypy/annotation/unaryop.py +++ b/pypy/annotation/unaryop.py @@ -480,13 +480,13 @@ return SomeInteger(nonneg=True) def method_strip(str, chr): - return str.basestringclass() + return str.basestringclass(no_nul=str.no_nul) def method_lstrip(str, chr): - return str.basestringclass() + return str.basestringclass(no_nul=str.no_nul) def method_rstrip(str, chr): - return str.basestringclass() + return str.basestringclass(no_nul=str.no_nul) def method_join(str, s_list): if s_None.contains(s_list): From noreply at buildbot.pypy.org Sat Feb 4 16:04:00 2012 From: noreply at buildbot.pypy.org (arigo) Date: Sat, 4 Feb 2012 16:04:00 +0100 (CET) Subject: [pypy-commit] pypy default: Hackish fix for issue978, second attempt: insert an explicit Message-ID: <20120204150400.9D060710770@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r52092:458e381ff84d Date: 2012-02-04 15:42 +0100 http://bitbucket.org/pypy/pypy/changeset/458e381ff84d/ Log: Hackish fix for issue978, second attempt: insert an explicit KEEPALIVE operation. diff --git a/pypy/jit/backend/llgraph/llimpl.py b/pypy/jit/backend/llgraph/llimpl.py --- a/pypy/jit/backend/llgraph/llimpl.py +++ b/pypy/jit/backend/llgraph/llimpl.py @@ -780,6 +780,9 @@ self.overflow_flag = ovf return z + def op_keepalive(self, _, x): + pass + # ---------- # delegating to the builtins do_xxx() (done automatically for simple cases) diff --git a/pypy/jit/backend/x86/regalloc.py b/pypy/jit/backend/x86/regalloc.py --- a/pypy/jit/backend/x86/regalloc.py +++ b/pypy/jit/backend/x86/regalloc.py @@ -1463,6 +1463,9 @@ if jump_op is not None and jump_op.getdescr() is descr: self._compute_hint_frame_locations_from_descr(descr) + def consider_keepalive(self, op): + pass + def not_implemented_op(self, op): not_implemented("not implemented operation: %s" % op.getopname()) diff --git a/pypy/jit/metainterp/executor.py b/pypy/jit/metainterp/executor.py --- a/pypy/jit/metainterp/executor.py +++ b/pypy/jit/metainterp/executor.py @@ -254,6 +254,9 @@ assert isinstance(x, r_longlong) # 32-bit return BoxFloat(x) +def do_keepalive(cpu, x): + pass + # ____________________________________________________________ ##def do_force_token(cpu): diff --git a/pypy/jit/metainterp/pyjitpl.py b/pypy/jit/metainterp/pyjitpl.py --- a/pypy/jit/metainterp/pyjitpl.py +++ b/pypy/jit/metainterp/pyjitpl.py @@ -1346,12 +1346,16 @@ resbox = self.metainterp.execute_and_record_varargs( rop.CALL_MAY_FORCE, allboxes, descr=descr) self.metainterp.vrefs_after_residual_call() + vablebox = None if assembler_call: - self.metainterp.direct_assembler_call(assembler_call_jd) + vablebox = self.metainterp.direct_assembler_call( + assembler_call_jd) if resbox is not None: self.make_result_of_lastop(resbox) self.metainterp.vable_after_residual_call() self.generate_guard(rop.GUARD_NOT_FORCED, None) + if vablebox is not None: + 
self.metainterp.history.record(rop.KEEPALIVE, [vablebox], None) self.metainterp.handle_possible_exception() return resbox else: @@ -2478,6 +2482,15 @@ token = warmrunnerstate.get_assembler_token(greenargs) op = op.copy_and_change(rop.CALL_ASSEMBLER, args=args, descr=token) self.history.operations.append(op) + # + # To fix an obscure issue, make sure the vable stays alive + # longer than the CALL_ASSEMBLER operation. We do it by + # inserting explicitly an extra KEEPALIVE operation. + jd = token.outermost_jitdriver_sd + if jd.index_of_virtualizable >= 0: + return args[jd.index_of_virtualizable] + else: + return None # ____________________________________________________________ diff --git a/pypy/jit/metainterp/resoperation.py b/pypy/jit/metainterp/resoperation.py --- a/pypy/jit/metainterp/resoperation.py +++ b/pypy/jit/metainterp/resoperation.py @@ -503,6 +503,7 @@ 'COPYUNICODECONTENT/5', 'QUASIIMMUT_FIELD/1d', # [objptr], descr=SlowMutateDescr 'RECORD_KNOWN_CLASS/2', # [objptr, clsptr] + 'KEEPALIVE/1', '_CANRAISE_FIRST', # ----- start of can_raise operations ----- '_CALL_FIRST', From noreply at buildbot.pypy.org Sat Feb 4 16:04:03 2012 From: noreply at buildbot.pypy.org (arigo) Date: Sat, 4 Feb 2012 16:04:03 +0100 (CET) Subject: [pypy-commit] pypy default: Fix typo Message-ID: <20120204150403.1951E710770@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r52093:fb832b92d207 Date: 2012-02-04 16:03 +0100 http://bitbucket.org/pypy/pypy/changeset/fb832b92d207/ Log: Fix typo diff --git a/pypy/jit/metainterp/executor.py b/pypy/jit/metainterp/executor.py --- a/pypy/jit/metainterp/executor.py +++ b/pypy/jit/metainterp/executor.py @@ -254,7 +254,7 @@ assert isinstance(x, r_longlong) # 32-bit return BoxFloat(x) -def do_keepalive(cpu, x): +def do_keepalive(cpu, _, x): pass # ____________________________________________________________ From noreply at buildbot.pypy.org Sat Feb 4 16:04:05 2012 From: noreply at buildbot.pypy.org (arigo) Date: Sat, 4 Feb 2012 16:04:05 +0100 (CET) Subject: [pypy-commit] pypy default: merge heads Message-ID: <20120204150405.9051B710770@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r52094:f32cbb5713dd Date: 2012-02-04 16:03 +0100 http://bitbucket.org/pypy/pypy/changeset/f32cbb5713dd/ Log: merge heads diff --git a/pypy/annotation/test/test_annrpython.py b/pypy/annotation/test/test_annrpython.py --- a/pypy/annotation/test/test_annrpython.py +++ b/pypy/annotation/test/test_annrpython.py @@ -479,6 +479,18 @@ assert isinstance(s, annmodel.SomeList) assert s.listdef.listitem.resized + def test_str_strip(self): + a = self.RPythonAnnotator() + def f(n, a_str): + if n == 0: + return a_str.strip(' ') + elif n == 1: + return a_str.rstrip(' ') + else: + return a_str.lstrip(' ') + s = a.build_types(f, [int, annmodel.SomeString(no_nul=True)]) + assert s.no_nul + def test_str_mul(self): a = self.RPythonAnnotator() def f(a_str): diff --git a/pypy/annotation/unaryop.py b/pypy/annotation/unaryop.py --- a/pypy/annotation/unaryop.py +++ b/pypy/annotation/unaryop.py @@ -480,13 +480,13 @@ return SomeInteger(nonneg=True) def method_strip(str, chr): - return str.basestringclass() + return str.basestringclass(no_nul=str.no_nul) def method_lstrip(str, chr): - return str.basestringclass() + return str.basestringclass(no_nul=str.no_nul) def method_rstrip(str, chr): - return str.basestringclass() + return str.basestringclass(no_nul=str.no_nul) def method_join(str, s_list): if s_None.contains(s_list): diff --git a/pypy/module/cpyext/include/pythonrun.h 
b/pypy/module/cpyext/include/pythonrun.h --- a/pypy/module/cpyext/include/pythonrun.h +++ b/pypy/module/cpyext/include/pythonrun.h @@ -13,6 +13,7 @@ #define Py_FrozenFlag 0 #define Py_VerboseFlag 0 +#define Py_DebugFlag 1 typedef struct { int cf_flags; /* bitmask of CO_xxx flags relevant to future */ From noreply at buildbot.pypy.org Sat Feb 4 18:08:10 2012 From: noreply at buildbot.pypy.org (arigo) Date: Sat, 4 Feb 2012 18:08:10 +0100 (CET) Subject: [pypy-commit] pypy stm-gc: In-progress. Starting work on the GC/src_stm interface. Message-ID: <20120204170810.8C6A3710770@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-gc Changeset: r52095:ca1bda35d3ab Date: 2012-02-04 18:07 +0100 http://bitbucket.org/pypy/pypy/changeset/ca1bda35d3ab/ Log: In-progress. Starting work on the GC/src_stm interface. diff --git a/pypy/config/translationoption.py b/pypy/config/translationoption.py --- a/pypy/config/translationoption.py +++ b/pypy/config/translationoption.py @@ -58,7 +58,8 @@ # gc ChoiceOption("gc", "Garbage Collection Strategy", ["boehm", "ref", "marksweep", "semispace", "statistics", - "generation", "hybrid", "markcompact", "minimark", "none"], + "generation", "hybrid", "markcompact", "minimark", "none", + "stmgc"], "ref", requires={ "ref": [("translation.rweakref", False), # XXX ("translation.gctransformer", "ref")], @@ -73,6 +74,8 @@ ("translation.gctransformer", "boehm")], "markcompact": [("translation.gctransformer", "framework")], "minimark": [("translation.gctransformer", "framework")], + "stmgc": [("translation.gctransformer", "framework"), + ("translation.gcrootfinder", "none")], # XXX }, cmdline="--gc"), ChoiceOption("gctransformer", "GC transformer that is used - internal", @@ -90,7 +93,7 @@ default=IS_64_BITS, cmdline="--gcremovetypeptr"), ChoiceOption("gcrootfinder", "Strategy for finding GC Roots (framework GCs only)", - ["n/a", "shadowstack", "asmgcc"], + ["n/a", "shadowstack", "asmgcc", "none"], "shadowstack", cmdline="--gcrootfinder", requires={ @@ -103,7 +106,8 @@ BoolOption("thread", "enable use of threading primitives", default=False, cmdline="--thread"), BoolOption("stm", "enable use of Software Transactional Memory", - default=False, cmdline="--stm"), + default=False, cmdline="--stm", + requires=[("translation.gc", "stmgc")]), BoolOption("sandbox", "Produce a fully-sandboxed executable", default=False, cmdline="--sandbox", requires=[("translation.thread", False)], diff --git a/pypy/rpython/memory/gc/base.py b/pypy/rpython/memory/gc/base.py --- a/pypy/rpython/memory/gc/base.py +++ b/pypy/rpython/memory/gc/base.py @@ -440,6 +440,7 @@ "hybrid": "hybrid.HybridGC", "markcompact" : "markcompact.MarkCompactGC", "minimark" : "minimark.MiniMarkGC", + "stmgc": "stmgc.StmGC", } try: modulename, classname = classes[config.translation.gc].split('.') diff --git a/pypy/rpython/memory/gc/stmgc.py b/pypy/rpython/memory/gc/stmgc.py --- a/pypy/rpython/memory/gc/stmgc.py +++ b/pypy/rpython/memory/gc/stmgc.py @@ -24,11 +24,6 @@ return fn -class StmOperations(object): - def _freeze_(self): - return True - - class StmGC(GCBase): _alloc_flavor_ = "raw" inline_simple_malloc = True @@ -50,10 +45,23 @@ ('pending_list', llmemory.Address), ) - def __init__(self, config, stm_operations, + TRANSLATION_PARAMS = { + 'stm_operations': 'use_real_one', + 'max_nursery_size': 400*1024*1024, # XXX 400MB + } + + def __init__(self, config, stm_operations='use_emulator', max_nursery_size=1024, **kwds): GCBase.__init__(self, config, **kwds) + # + if isinstance(stm_operations, str): + assert stm_operations 
== 'use_real_one', ( + "XXX not provided so far: stm_operations == %r" % ( + stm_operations,)) + from pypy.translator.stm.stmgcintf import StmOperations + stm_operations = StmOperations() + # self.stm_operations = stm_operations self.collector = Collector(self) self.max_nursery_size = max_nursery_size @@ -80,7 +88,7 @@ """Setup a thread. Allocates the thread-local data structures. Must be called only once per OS-level thread.""" tls = lltype.malloc(self.GCTLS, flavor='raw') - self.stm_operations.set_tls(self, llmemory.cast_ptr_to_adr(tls)) + self.stm_operations.set_tls(llmemory.cast_ptr_to_adr(tls)) tls.nursery_start = self._alloc_nursery() tls.nursery_size = self.max_nursery_size tls.nursery_free = tls.nursery_start @@ -107,7 +115,7 @@ """Teardown a thread. Call this just before the OS-level thread disappears.""" tls = self.collector.get_tls() - self.stm_operations.set_tls(self, NULL) + self.stm_operations.del_tls() self._free_nursery(tls.nursery_start) lltype.free(tls, flavor='raw') @@ -157,6 +165,11 @@ return llmemory.cast_adr_to_ptr(obj, llmemory.GCREF) + def malloc_varsize_clear(self, typeid, length, size, itemsize, + offset_to_length): + raise NotImplementedError + + def _malloc_local_raw(self, tls, size): # for _stm_write_barrier_global(): a version of malloc that does # no initialization of the malloc'ed object @@ -168,6 +181,10 @@ return obj + def collect(self, gen=0): + raise NotImplementedError + + @always_inline def combine(self, typeid16, flags): return llop.combine_ushort(lltype.Signed, typeid16, flags) diff --git a/pypy/rpython/memory/gc/test/test_stmgc.py b/pypy/rpython/memory/gc/test/test_stmgc.py --- a/pypy/rpython/memory/gc/test/test_stmgc.py +++ b/pypy/rpython/memory/gc/test/test_stmgc.py @@ -22,24 +22,26 @@ threadnum = 0 # 0 = main thread; 1,2,3... 
= transactional threads - def set_tls(self, gc, tls): + def set_tls(self, tls): assert lltype.typeOf(tls) == llmemory.Address + assert tls if self.threadnum == 0: assert not hasattr(self, '_tls_dict') - assert not hasattr(self, '_gc') self._tls_dict = {0: tls} self._tldicts = {0: {}} self._tldicts_iterators = {} - self._gc = gc self._transactional_copies = [] else: - assert self._gc is gc self._tls_dict[self.threadnum] = tls self._tldicts[self.threadnum] = {} def get_tls(self): return self._tls_dict[self.threadnum] + def del_tls(self): + del self._tls_dict[self.threadnum] + del self._tldicts[self.threadnum] + def tldict_lookup(self, obj): assert lltype.typeOf(obj) == llmemory.Address assert obj @@ -48,6 +50,7 @@ def tldict_add(self, obj, localobj): assert lltype.typeOf(obj) == llmemory.Address + assert lltype.typeOf(localobj) == llmemory.Address tldict = self._tldicts[self.threadnum] assert obj not in tldict tldict[obj] = localobj @@ -63,6 +66,7 @@ except StopIteration: state[1] = None state[2] = None + del self._tldicts_iterators[self.threadnum] return False state[1] = next_key state[2] = next_value @@ -130,6 +134,7 @@ config = get_pypy_config(translating=True).translation self.gc = self.GCClass(config, FakeStmOperations(), translated_to_c=False) + self.gc.stm_operations._gc = self.gc self.gc.DEBUG = True self.gc.get_size = fake_get_size self.gc.trace = fake_trace diff --git a/pypy/translator/stm/src_stm/et.c b/pypy/translator/stm/src_stm/et.c --- a/pypy/translator/stm/src_stm/et.c +++ b/pypy/translator/stm/src_stm/et.c @@ -62,6 +62,7 @@ #define OTHERINEV_REASONS 5 struct tx_descriptor { + void *rpython_tls_object; jmp_buf *setjmp_buf; owner_version_t start_time; owner_version_t end_time; @@ -481,7 +482,7 @@ } -void stm_descriptor_init(void) +static struct tx_descriptor *descriptor_init(void) { assert(thread_descriptor == NULL_TX); if (1) /* for hg diff */ @@ -507,10 +508,11 @@ (long)pthread_self()); PYPY_DEBUG_STOP("stm-init"); #endif + return d; } } -void stm_descriptor_done(void) +static void descriptor_done(void) { struct tx_descriptor *d = thread_descriptor; assert(d != NULL_TX); @@ -840,4 +842,20 @@ return d->my_lock_word; } + +void stm_set_tls(void *newtls) +{ + descriptor_init()->rpython_tls_object = newtls; +} + +void *stm_get_tls(void) +{ + return thread_descriptor->rpython_tls_object; +} + +void stm_del_tls(void) +{ + descriptor_done(); +} + #endif /* PYPY_NOT_MAIN_FILE */ diff --git a/pypy/translator/stm/src_stm/et.h b/pypy/translator/stm/src_stm/et.h --- a/pypy/translator/stm/src_stm/et.h +++ b/pypy/translator/stm/src_stm/et.h @@ -11,6 +11,13 @@ #include #include "src/commondefs.h" + +void stm_set_tls(void *); +void *stm_get_tls(void); +void stm_del_tls(void); + + + #ifdef RPY_STM_ASSERT # define STM_CCHARP1(arg) char* arg # define STM_EXPLAIN1(info) info @@ -20,8 +27,6 @@ #endif -void stm_descriptor_init(void); -void stm_descriptor_done(void); void* stm_perform_transaction(void*(*)(void*, long), void*); long stm_read_word(long* addr); void stm_write_word(long* addr, long val); diff --git a/pypy/translator/stm/stmgcintf.py b/pypy/translator/stm/stmgcintf.py new file mode 100644 --- /dev/null +++ b/pypy/translator/stm/stmgcintf.py @@ -0,0 +1,39 @@ +from pypy.rpython.lltypesystem import lltype, llmemory +from pypy.translator.stm import _rffi_stm + + +def smexternal(name, args, result): + return staticmethod(_rffi_stm.llexternal(name, args, result)) + + +class StmOperations(object): + + def _freeze_(self): + return True + + set_tls = smexternal('stm_set_tls', [llmemory.Address], 
lltype.Void) + get_tls = smexternal('stm_get_tls', [], llmemory.Address) + del_tls = smexternal('stm_del_tls', [], lltype.Void) + + tldict_lookup = smexternal('stm_tldict_lookup', [llmemory.Address], + llmemory.Address) + tldict_add = smexternal('stm_tldict_add', [llmemory.Address] * 2, + lltype.Void) + + enum_tldict_start = smexternal('stm_enum_tldict_start', [], lltype.Void) + enum_tldict_find_next = smexternal('stm_enum_tldict_find_next', [], + lltype.Signed) + enum_tldict_globalobj = smexternal('stm_enum_tldict_globalobj', [], + llmemory.Address) + enum_tldict_localobj = smexternal('stm_enum_tldict_localobj', [], + llmemory.Address) + + stm_read_word = smexternal('stm_read_word', + [llmemory.Address, lltype.Signed], + lltype.Signed) + + stm_copy_transactional_to_raw = smexternal('stm_copy_transactional_to_raw', + [llmemory.Address, + llmemory.Address, + lltype.Signed], + lltype.Void) diff --git a/pypy/translator/stm/test/test_stmgcintf.py b/pypy/translator/stm/test/test_stmgcintf.py new file mode 100644 --- /dev/null +++ b/pypy/translator/stm/test/test_stmgcintf.py @@ -0,0 +1,14 @@ +from pypy.rpython.lltypesystem import lltype, llmemory +from pypy.translator.stm.stmgcintf import StmOperations + +stm_operations = StmOperations() + + +def test_set_get_del(): + # assume that they are really thread-local; not checked here + s = lltype.malloc(lltype.Struct('S'), flavor='raw') + a = llmemory.cast_ptr_to_adr(s) + stm_operations.set_tls(a) + assert stm_operations.get_tls() == a + stm_operations.del_tls() + lltype.free(s, flavor='raw') From noreply at buildbot.pypy.org Sat Feb 4 18:38:13 2012 From: noreply at buildbot.pypy.org (arigo) Date: Sat, 4 Feb 2012 18:38:13 +0100 (CET) Subject: [pypy-commit] pypy stm-gc: Implement and test stm_tldict_{lookup, add}. Message-ID: <20120204173813.AD784710770@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-gc Changeset: r52096:92d4a43c2004 Date: 2012-02-04 18:37 +0100 http://bitbucket.org/pypy/pypy/changeset/92d4a43c2004/ Log: Implement and test stm_tldict_{lookup,add}. diff --git a/pypy/translator/stm/src_stm/et.c b/pypy/translator/stm/src_stm/et.c --- a/pypy/translator/stm/src_stm/et.c +++ b/pypy/translator/stm/src_stm/et.c @@ -147,6 +147,8 @@ /*** run the redo log to commit a transaction, and release the locks */ static void tx_redo(struct tx_descriptor *d) { + abort(); +#if 0 owner_version_t newver = d->end_time; wlog_t *item; /* loop in "forward" order: in this order, if there are duplicate orecs @@ -163,6 +165,7 @@ *o = newver; } } REDOLOG_LOOP_END; +#endif } /*** on abort, release locks and restore the old version number. 
*/ @@ -858,4 +861,21 @@ descriptor_done(); } +void *stm_tldict_lookup(void *key) +{ + struct tx_descriptor *d = thread_descriptor; + wlog_t* found; + REDOLOG_FIND(d->redolog, key, found, goto not_found); + return found->val; + + not_found: + return NULL; +} + +void stm_tldict_add(void *key, void *value) +{ + struct tx_descriptor *d = thread_descriptor; + redolog_insert(&d->redolog, key, value); +} + #endif /* PYPY_NOT_MAIN_FILE */ diff --git a/pypy/translator/stm/src_stm/et.h b/pypy/translator/stm/src_stm/et.h --- a/pypy/translator/stm/src_stm/et.h +++ b/pypy/translator/stm/src_stm/et.h @@ -16,6 +16,9 @@ void *stm_get_tls(void); void stm_del_tls(void); +void *stm_tldict_lookup(void *); +void stm_tldict_add(void *, void *); + #ifdef RPY_STM_ASSERT diff --git a/pypy/translator/stm/src_stm/lists.c b/pypy/translator/stm/src_stm/lists.c --- a/pypy/translator/stm/src_stm/lists.c +++ b/pypy/translator/stm/src_stm/lists.c @@ -21,8 +21,8 @@ #define TREE_MASK ((TREE_ARITY - 1) * sizeof(void*)) typedef struct { - long* addr; - long val; + void* addr; + void *val; owner_version_t p; // the previous version number (if locked) } wlog_t; @@ -120,7 +120,7 @@ return (wlog_t *)entry; /* may be NULL */ } -static void redolog_insert(struct RedoLog *redolog, long* addr, long val); +static void redolog_insert(struct RedoLog *redolog, void* addr, void *val); static void _redolog_grow(struct RedoLog *redolog, long extra) { @@ -156,7 +156,7 @@ return result; } -static void redolog_insert(struct RedoLog *redolog, long* addr, long val) +static void redolog_insert(struct RedoLog *redolog, void* addr, void *val) { retry:; wlog_t *wlog; @@ -164,6 +164,7 @@ int shift = 0; char *p = (char *)(redolog->toplevel.items); char *entry; + assert((key & (sizeof(void*)-1)) == 0); /* only for aligned keys */ while (1) { p += (key >> shift) & TREE_MASK; @@ -178,12 +179,8 @@ else { wlog_t *wlog1 = (wlog_t *)entry; - if (wlog1->addr == addr) - { - /* overwrite and that's it */ - wlog1->val = val; - return; - } + /* the key must not already be present */ + assert(wlog1->addr != addr); /* collision: there is already a different wlog here */ wlog_node_t *node = (wlog_node_t *) _redolog_grab(redolog, sizeof(wlog_node_t)); diff --git a/pypy/translator/stm/test/test_stmgcintf.py b/pypy/translator/stm/test/test_stmgcintf.py --- a/pypy/translator/stm/test/test_stmgcintf.py +++ b/pypy/translator/stm/test/test_stmgcintf.py @@ -1,8 +1,11 @@ -from pypy.rpython.lltypesystem import lltype, llmemory +import random +from pypy.rpython.lltypesystem import lltype, llmemory, rffi from pypy.translator.stm.stmgcintf import StmOperations stm_operations = StmOperations() +DEFAULT_TLS = lltype.Struct('DEFAULT_TLS') + def test_set_get_del(): # assume that they are really thread-local; not checked here @@ -12,3 +15,50 @@ assert stm_operations.get_tls() == a stm_operations.del_tls() lltype.free(s, flavor='raw') + + +class TestStmGcIntf: + + def setup_method(self, meth): + TLS = getattr(meth, 'TLS', DEFAULT_TLS) + s = lltype.malloc(TLS, flavor='raw', immortal=True) + self.tls = s + a = llmemory.cast_ptr_to_adr(s) + stm_operations.set_tls(a) + + def teardown_method(self, meth): + stm_operations.del_tls() + + def test_set_get_del(self): + a = llmemory.cast_ptr_to_adr(self.tls) + assert stm_operations.get_tls() == a + + def test_tldict(self): + a1 = rffi.cast(llmemory.Address, 0x4020) + a2 = rffi.cast(llmemory.Address, 10002) + a3 = rffi.cast(llmemory.Address, 0x4028) + a4 = rffi.cast(llmemory.Address, 10004) + # + assert stm_operations.tldict_lookup(a1) == 
llmemory.NULL + stm_operations.tldict_add(a1, a2) + assert stm_operations.tldict_lookup(a1) == a2 + # + assert stm_operations.tldict_lookup(a3) == llmemory.NULL + stm_operations.tldict_add(a3, a4) + assert stm_operations.tldict_lookup(a3) == a4 + assert stm_operations.tldict_lookup(a1) == a2 + + def test_tldict_large(self): + content = {} + WORD = rffi.sizeof(lltype.Signed) + for i in range(12000): + key = random.randrange(1000, 2000) * WORD + a1 = rffi.cast(llmemory.Address, key) + a2 = stm_operations.tldict_lookup(a1) + if key in content: + assert a2 == content[key] + else: + assert a2 == llmemory.NULL + a2 = rffi.cast(llmemory.Address, random.randrange(2000, 9999)) + stm_operations.tldict_add(a1, a2) + content[key] = a2 From noreply at buildbot.pypy.org Sat Feb 4 20:46:59 2012 From: noreply at buildbot.pypy.org (arigo) Date: Sat, 4 Feb 2012 20:46:59 +0100 (CET) Subject: [pypy-commit] pypy stm-gc: Enum, with a callback. Message-ID: <20120204194659.52963710770@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-gc Changeset: r52097:5d548e00c813 Date: 2012-02-04 18:48 +0100 http://bitbucket.org/pypy/pypy/changeset/5d548e00c813/ Log: Enum, with a callback. diff --git a/pypy/translator/stm/src_stm/et.c b/pypy/translator/stm/src_stm/et.c --- a/pypy/translator/stm/src_stm/et.c +++ b/pypy/translator/stm/src_stm/et.c @@ -878,4 +878,15 @@ redolog_insert(&d->redolog, key, value); } +void stm_tldict_enum(void(*callback)(void*, void*)) +{ + struct tx_descriptor *d = thread_descriptor; + wlog_t *item; + + REDOLOG_LOOP_FORWARD(d->redolog, item) + { + callback(item->addr, item->val); + } REDOLOG_LOOP_END; +} + #endif /* PYPY_NOT_MAIN_FILE */ diff --git a/pypy/translator/stm/src_stm/et.h b/pypy/translator/stm/src_stm/et.h --- a/pypy/translator/stm/src_stm/et.h +++ b/pypy/translator/stm/src_stm/et.h @@ -18,6 +18,7 @@ void *stm_tldict_lookup(void *); void stm_tldict_add(void *, void *); +void stm_tlidct_enum(void(*)(void*, void*)); diff --git a/pypy/translator/stm/stmgcintf.py b/pypy/translator/stm/stmgcintf.py --- a/pypy/translator/stm/stmgcintf.py +++ b/pypy/translator/stm/stmgcintf.py @@ -5,6 +5,8 @@ def smexternal(name, args, result): return staticmethod(_rffi_stm.llexternal(name, args, result)) +CALLBACK = lltype.Ptr(lltype.FuncType([llmemory.Address] * 2, lltype.Void)) + class StmOperations(object): @@ -19,14 +21,7 @@ llmemory.Address) tldict_add = smexternal('stm_tldict_add', [llmemory.Address] * 2, lltype.Void) - - enum_tldict_start = smexternal('stm_enum_tldict_start', [], lltype.Void) - enum_tldict_find_next = smexternal('stm_enum_tldict_find_next', [], - lltype.Signed) - enum_tldict_globalobj = smexternal('stm_enum_tldict_globalobj', [], - llmemory.Address) - enum_tldict_localobj = smexternal('stm_enum_tldict_localobj', [], - llmemory.Address) + tldict_enum = smexternal('stm_tldict_enum', [CALLBACK], lltype.Void) stm_read_word = smexternal('stm_read_word', [llmemory.Address, lltype.Signed], diff --git a/pypy/translator/stm/test/test_stmgcintf.py b/pypy/translator/stm/test/test_stmgcintf.py --- a/pypy/translator/stm/test/test_stmgcintf.py +++ b/pypy/translator/stm/test/test_stmgcintf.py @@ -1,6 +1,7 @@ import random from pypy.rpython.lltypesystem import lltype, llmemory, rffi -from pypy.translator.stm.stmgcintf import StmOperations +from pypy.rpython.annlowlevel import llhelper +from pypy.translator.stm.stmgcintf import StmOperations, CALLBACK stm_operations = StmOperations() @@ -62,3 +63,16 @@ a2 = rffi.cast(llmemory.Address, random.randrange(2000, 9999)) stm_operations.tldict_add(a1, 
a2) content[key] = a2 + return content + + def get_callback(self): + def callback(key, value): + seen.append((key, value)) + seen = [] + p_callback = llhelper(CALLBACK, callback) + return p_callback, seen + + def test_enum_tldict_empty(self): + p_callback, seen = self.get_callback() + stm_operations.tldict_enum(p_callback) + assert seen == [] From noreply at buildbot.pypy.org Sat Feb 4 20:47:01 2012 From: noreply at buildbot.pypy.org (arigo) Date: Sat, 4 Feb 2012 20:47:01 +0100 (CET) Subject: [pypy-commit] pypy stm-gc: Hack at et.c until it starts to make sense in the new world Message-ID: <20120204194701.8AFE1710771@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-gc Changeset: r52098:cd56dce052b4 Date: 2012-02-04 20:43 +0100 http://bitbucket.org/pypy/pypy/changeset/cd56dce052b4/ Log: Hack at et.c until it starts to make sense in the new world diff --git a/pypy/rpython/memory/gc/stmgc.py b/pypy/rpython/memory/gc/stmgc.py --- a/pypy/rpython/memory/gc/stmgc.py +++ b/pypy/rpython/memory/gc/stmgc.py @@ -12,8 +12,8 @@ first_gcflag = 1 << (LONG_BIT//2) -GCFLAG_GLOBAL = first_gcflag << 0 -GCFLAG_WAS_COPIED = first_gcflag << 1 +GCFLAG_GLOBAL = first_gcflag << 0 # keep in sync with et.c +GCFLAG_WAS_COPIED = first_gcflag << 1 # keep in sync with et.c def always_inline(fn): @@ -199,29 +199,30 @@ def declare_readers(self): # Reading functions. Defined here to avoid the extra burden of # passing 'self' explicitly. - stm_operations = self.stm_operations + stm_read_word = self.stm_operations.stm_read_word # @always_inline def read_signed(obj, offset): if self.header(obj).tid & GCFLAG_GLOBAL == 0: return (obj + offset).signed[0] # local obj: read directly else: - return _read_word_global(obj, offset) # else: call a helper + return stm_read_word(obj, offset) # else: call a helper self.read_signed = read_signed # - @dont_inline - def _read_word_global(obj, offset): - hdr = self.header(obj) - if hdr.tid & GCFLAG_WAS_COPIED != 0: - # - # Look up in the thread-local dictionary. - localobj = stm_operations.tldict_lookup(obj) - if localobj: - ll_assert(self.header(localobj).tid & GCFLAG_GLOBAL == 0, - "stm_read: tldict_lookup() -> GLOBAL obj") - return (localobj + offset).signed[0] - # - return stm_operations.stm_read_word(obj, offset) + # the following logic was moved to et.c to avoid a double call +## @dont_inline +## def _read_word_global(obj, offset): +## hdr = self.header(obj) +## if hdr.tid & GCFLAG_WAS_COPIED != 0: +## # +## # Look up in the thread-local dictionary. 
+## localobj = stm_operations.tldict_lookup(obj) +## if localobj: +## ll_assert(self.header(localobj).tid & GCFLAG_GLOBAL == 0, +## "stm_read: tldict_lookup() -> GLOBAL obj") +## return (localobj + offset).signed[0] +## # +## return stm_operations.stm_read_word(obj, offset) def declare_write_barrier(self): diff --git a/pypy/translator/stm/src_stm/et.c b/pypy/translator/stm/src_stm/et.c --- a/pypy/translator/stm/src_stm/et.c +++ b/pypy/translator/stm/src_stm/et.c @@ -29,29 +29,29 @@ /************************************************************/ +/* This is the same as the object header structure HDR + * declared in stmgc.py, and the same two flags */ + +typedef struct { + long tid; + long version; +} orec_t; + +enum { + first_gcflag = 1 << (PYPY_LONG_BIT / 2), + GCFLAG_GLOBAL = first_gcflag << 0, + GCFLAG_WAS_COPIED = first_gcflag << 1 +}; + +/************************************************************/ + #define IS_LOCKED(num) ((num) < 0) #define IS_LOCKED_OR_NEWER(num, max_age) \ __builtin_expect(((unsigned long)(num)) > ((unsigned long)(max_age)), 0) + typedef long owner_version_t; -typedef volatile owner_version_t orec_t; - -/*** Specify the number of orecs in the global array. */ -#define NUM_STRIPES 1048576 - -/*** declare the table of orecs */ -static char orecs[NUM_STRIPES * sizeof(orec_t)]; - -/*** map addresses to orec table entries */ -inline static orec_t *get_orec(void* addr) -{ - unsigned long index = (unsigned long)addr; -#ifdef RPY_STM_ASSERT - assert(!(index & (sizeof(orec_t)-1))); -#endif - char *p = orecs + (index & ((NUM_STRIPES-1) * sizeof(orec_t))); - return (orec_t *)p; -} +#define get_orec(addr) ((volatile orec_t *)(addr)) #include "src_stm/lists.c" @@ -59,40 +59,33 @@ #define ABORT_REASONS 8 #define SPINLOOP_REASONS 10 -#define OTHERINEV_REASONS 5 struct tx_descriptor { void *rpython_tls_object; + long (*rpython_get_size)(void*); jmp_buf *setjmp_buf; owner_version_t start_time; owner_version_t end_time; - unsigned long last_known_global_timestamp; + /*unsigned long last_known_global_timestamp;*/ struct OrecList reads; unsigned num_commits; unsigned num_aborts[ABORT_REASONS]; unsigned num_spinloops[SPINLOOP_REASONS]; - unsigned int spinloop_counter; - int transaction_active; + /*unsigned int spinloop_counter;*/ owner_version_t my_lock_word; struct RedoLog redolog; /* last item, because it's the biggest one */ }; -static const struct tx_descriptor null_tx = { - .transaction_active = 0, - .my_lock_word = 0 -}; -#define NULL_TX ((struct tx_descriptor *)(&null_tx)) - /* global_timestamp contains in its lowest bit a flag equal to 1 if there is an inevitable transaction running */ static volatile unsigned long global_timestamp = 2; -static __thread struct tx_descriptor *thread_descriptor = NULL_TX; +static __thread struct tx_descriptor *thread_descriptor = NULL; /************************************************************/ static unsigned long get_global_timestamp(struct tx_descriptor *d) { - return (d->last_known_global_timestamp = global_timestamp); + return (/*d->last_known_global_timestamp =*/ global_timestamp); } static _Bool change_global_timestamp(struct tx_descriptor *d, @@ -101,7 +94,7 @@ { if (bool_cas(&global_timestamp, old, new)) { - d->last_known_global_timestamp = new; + /*d->last_known_global_timestamp = new;*/ return 1; } return 0; @@ -110,7 +103,7 @@ static void set_global_timestamp(struct tx_descriptor *d, unsigned long new) { global_timestamp = new; - d->last_known_global_timestamp = new; + /*d->last_known_global_timestamp = new;*/ } static void 
tx_abort(int); @@ -123,7 +116,8 @@ d->num_spinloops[num]++; //printf("tx_spinloop(%d)\n", num); - + +#if 0 c = d->spinloop_counter; d->spinloop_counter = c * 9; i = c & 0xff0000; @@ -131,41 +125,41 @@ spinloop(); i -= 0x10000; } -} - -static _Bool is_inevitable_or_inactive(struct tx_descriptor *d) -{ - return d->setjmp_buf == NULL; +#else + spinloop(); +#endif } static _Bool is_inevitable(struct tx_descriptor *d) { - assert(d->transaction_active); - return is_inevitable_or_inactive(d); + return d->setjmp_buf == NULL; } /*** run the redo log to commit a transaction, and release the locks */ static void tx_redo(struct tx_descriptor *d) { - abort(); -#if 0 owner_version_t newver = d->end_time; wlog_t *item; /* loop in "forward" order: in this order, if there are duplicate orecs then only the last one has p != -1. */ REDOLOG_LOOP_FORWARD(d->redolog, item) { - *item->addr = item->val; + void *globalobj = item->addr; + void *localobj = item->val; + owner_version_t p = item->p; + long size = d->rpython_get_size(localobj); + memcpy(((char *)globalobj) + sizeof(orec_t), + ((char *)localobj) + sizeof(orec_t), + size - sizeof(orec_t)); /* but we must only unlock the orec if it's the last time it appears in the redolog list. If it's not, then p == -1. */ - if (item->p != -1) + if (p != -1) { - orec_t* o = get_orec(item->addr); + volatile orec_t* o = get_orec(globalobj); CFENCE; - *o = newver; + o->version = newver; } } REDOLOG_LOOP_END; -#endif } /*** on abort, release locks and restore the old version number. */ @@ -176,8 +170,8 @@ { if (item->p != -1) { - orec_t* o = get_orec(item->addr); - *o = item->p; + volatile orec_t* o = get_orec(item->addr); + o->version = item->p; } } REDOLOG_LOOP_END; } @@ -190,8 +184,8 @@ { if (item->p != -1) { - orec_t* o = get_orec(item->addr); - *o = item->p; + volatile orec_t* o = get_orec(item->addr); + o->version = item->p; item->p = -1; } } REDOLOG_LOOP_END; @@ -205,11 +199,11 @@ REDOLOG_LOOP_BACKWARD(d->redolog, item) { // get orec, read its version# - orec_t* o = get_orec(item->addr); + volatile orec_t* o = get_orec(item->addr); owner_version_t ovt; retry: - ovt = *o; + ovt = o->version; // if orec not locked, lock it // @@ -217,7 +211,7 @@ // reads. Since most writes are also reads, we'll just abort under this // condition. This can introduce false conflicts if (!IS_LOCKED_OR_NEWER(ovt, d->start_time)) { - if (!bool_cas(o, ovt, d->my_lock_word)) + if (!bool_cas(&o->version, ovt, d->my_lock_word)) goto retry; // save old version to item->p. Now we hold the lock. // in case of duplicate orecs, only the last one has p != -1. @@ -247,9 +241,6 @@ { d->reads.size = 0; redolog_clear(&d->redolog); - assert(d->transaction_active); - d->transaction_active = 0; - d->setjmp_buf = NULL; } static void tx_cleanup(struct tx_descriptor *d) @@ -262,10 +253,9 @@ static void tx_restart(struct tx_descriptor *d) { - jmp_buf *env = d->setjmp_buf; tx_cleanup(d); tx_spinloop(0); - longjmp(*env, 1); + longjmp(*d->setjmp_buf, 1); } /*** increase the abort count and restart the transaction */ @@ -295,7 +285,7 @@ for (i=0; ireads.size; i++) { retry: - ovt = *(d->reads.items[i]); + ovt = d->reads.items[i]->version; if (IS_LOCKED_OR_NEWER(ovt, d->start_time)) { // If locked, we wait until it becomes unlocked. 
The chances are @@ -325,7 +315,7 @@ assert(!is_inevitable(d)); for (i=0; ireads.size; i++) { - ovt = *(d->reads.items[i]); // read this orec + ovt = d->reads.items[i]->version; // read this orec if (IS_LOCKED_OR_NEWER(ovt, d->start_time)) { if (!IS_LOCKED(ovt)) @@ -421,28 +411,29 @@ } /* lazy/lazy read instrumentation */ -long stm_read_word(long* addr) +long stm_read_word(void* addr, long offset) { struct tx_descriptor *d = thread_descriptor; + volatile orec_t *o = get_orec(addr); + owner_version_t ovt; + + if ((o->tid & GCFLAG_WAS_COPIED) != 0) + { + /* Look up in the thread-local dictionary. */ + wlog_t *found; + REDOLOG_FIND(d->redolog, addr, found, goto not_found); + orec_t *localobj = (orec_t *)found->val; #ifdef RPY_STM_ASSERT - assert((((long)addr) & (sizeof(void*)-1)) == 0); + assert((localobj->tid & GCFLAG_GLOBAL) == 0); #endif - if (!d->transaction_active) - return *addr; + return *(long *)(((char *)localobj) + offset); - // check writeset first - wlog_t* found; - REDOLOG_FIND(d->redolog, addr, found, goto not_found); - return found->val; - - not_found:; - // get the orec addr - orec_t* o = get_orec((void*)addr); - owner_version_t ovt; + not_found:; + } retry: // read the orec BEFORE we read anything else - ovt = *o; + ovt = o->version; CFENCE; // this tx doesn't hold any locks, so if the lock for this addr is held, @@ -461,33 +452,22 @@ } // orec is unlocked, with ts <= start_time. read the location - long tmp = *addr; + long tmp = *(long *)(((char *)addr) + offset); // postvalidate AFTER reading addr: CFENCE; - if (*o != ovt) + if (__builtin_expect(o->version != ovt, 0)) goto retry; /* oups, try again */ - oreclist_insert(&d->reads, o); + oreclist_insert(&d->reads, (orec_t*)o); return tmp; } -void stm_write_word(long* addr, long val) -{ - struct tx_descriptor *d = thread_descriptor; - assert((((long)addr) & (sizeof(void*)-1)) == 0); - if (!d->transaction_active) { - *addr = val; - return; - } - redolog_insert(&d->redolog, addr, val); -} - static struct tx_descriptor *descriptor_init(void) { - assert(thread_descriptor == NULL_TX); + assert(thread_descriptor == NULL); if (1) /* for hg diff */ { struct tx_descriptor *d = malloc(sizeof(struct tx_descriptor)); @@ -502,7 +482,7 @@ if (!IS_LOCKED(d->my_lock_word)) d->my_lock_word = ~d->my_lock_word; assert(IS_LOCKED(d->my_lock_word)); - d->spinloop_counter = (unsigned int)(d->my_lock_word | 1); + /*d->spinloop_counter = (unsigned int)(d->my_lock_word | 1);*/ thread_descriptor = d; @@ -518,9 +498,9 @@ static void descriptor_done(void) { struct tx_descriptor *d = thread_descriptor; - assert(d != NULL_TX); + assert(d != NULL); - thread_descriptor = NULL_TX; + thread_descriptor = NULL; #ifdef RPY_STM_DEBUG_PRINT PYPY_DEBUG_START("stm-done"); @@ -559,10 +539,11 @@ static void begin_transaction(jmp_buf* buf) { struct tx_descriptor *d = thread_descriptor; - assert(!d->transaction_active); - d->transaction_active = 1; + /* you need to call descriptor_init() before calling + stm_perform_transaction() */ + assert(d != NULL); d->setjmp_buf = buf; - d->start_time = d->last_known_global_timestamp & ~1; + d->start_time = (/*d->last_known_global_timestamp*/ global_timestamp) & ~1; } static long commit_transaction(void) @@ -635,9 +616,6 @@ jmp_buf _jmpbuf; volatile long v_counter = 0; long counter; - /* you need to call descriptor_init() before calling - stm_perform_transaction() */ - assert(thread_descriptor != NULL_TX); setjmp(_jmpbuf); begin_transaction(&_jmpbuf); counter = v_counter; @@ -647,6 +625,7 @@ return result; } +#if 0 void 
stm_try_inevitable(STM_CCHARP1(why)) { /* when a transaction is inevitable, its start_time is equal to @@ -703,135 +682,17 @@ PYPY_DEBUG_STOP("stm-inevitable"); #endif } +#endif void stm_abort_and_retry(void) { tx_abort(7); /* manual abort */ } -// XXX little-endian only! -#define READ_PARTIAL_WORD(T, fieldsize, addr) \ - int misalignment = ((long)addr) & (sizeof(void*)-1); \ - long *p = (long*)(((char *)addr) - misalignment); \ - unsigned long word = stm_read_word(p); \ - assert(sizeof(T) == fieldsize); \ - return (T)(word >> (misalignment * 8)); - -unsigned char stm_read_partial_1(void *addr) { - READ_PARTIAL_WORD(unsigned char, 1, addr) -} -unsigned short stm_read_partial_2(void *addr) { - READ_PARTIAL_WORD(unsigned short, 2, addr) -} -#if PYPY_LONG_BIT == 64 -unsigned int stm_read_partial_4(void *addr) { - READ_PARTIAL_WORD(unsigned int, 4, addr) -} -#endif - -// XXX little-endian only! -#define WRITE_PARTIAL_WORD(fieldsize, addr, nval) \ - int misalignment = ((long)addr) & (sizeof(void*)-1); \ - long *p = (long*)(((char *)addr) - misalignment); \ - long val = ((long)nval) << (misalignment * 8); \ - long word = stm_read_word(p); \ - long mask = ((1L << (fieldsize * 8)) - 1) << (misalignment * 8); \ - val = (val & mask) | (word & ~mask); \ - stm_write_word(p, val); - -void stm_write_partial_1(void *addr, unsigned char nval) { - WRITE_PARTIAL_WORD(1, addr, nval) -} -void stm_write_partial_2(void *addr, unsigned short nval) { - WRITE_PARTIAL_WORD(2, addr, nval) -} -#if PYPY_LONG_BIT == 64 -void stm_write_partial_4(void *addr, unsigned int nval) { - WRITE_PARTIAL_WORD(4, addr, nval) -} -#endif - - -#if PYPY_LONG_BIT == 32 -long long stm_read_doubleword(long *addr) -{ - /* 32-bit only */ - unsigned long res0 = (unsigned long)stm_read_word(addr); - unsigned long res1 = (unsigned long)stm_read_word(addr + 1); - return (((unsigned long long)res1) << 32) | res0; -} - -void stm_write_doubleword(long *addr, long long val) -{ - /* 32-bit only */ - stm_write_word(addr, (long)val); - stm_write_word(addr + 1, (long)(val >> 32)); -} -#endif - -double stm_read_double(long *addr) -{ - long long x; - double dd; -#if PYPY_LONG_BIT == 32 - x = stm_read_doubleword(addr); /* 32 bits */ -#else - x = stm_read_word(addr); /* 64 bits */ -#endif - assert(sizeof(double) == 8 && sizeof(long long) == 8); - memcpy(&dd, &x, 8); - return dd; -} - -void stm_write_double(long *addr, double val) -{ - long long ll; - assert(sizeof(double) == 8 && sizeof(long long) == 8); - memcpy(&ll, &val, 8); -#if PYPY_LONG_BIT == 32 - stm_write_doubleword(addr, ll); /* 32 bits */ -#else - stm_write_word(addr, ll); /* 64 bits */ -#endif -} - -float stm_read_float(long *addr) -{ - unsigned int x; - float ff; -#if PYPY_LONG_BIT == 32 - x = stm_read_word(addr); /* 32 bits */ -#else - if (((long)(char*)addr) & 7) { - addr = (long *)(((char *)addr) - 4); - x = (unsigned int)(stm_read_word(addr) >> 32); /* 64 bits, unaligned */ - } - else - x = (unsigned int)stm_read_word(addr); /* 64 bits, aligned */ -#endif - assert(sizeof(float) == 4 && sizeof(unsigned int) == 4); - memcpy(&ff, &x, 4); - return ff; -} - -void stm_write_float(long *addr, float val) -{ - unsigned int ii; - assert(sizeof(float) == 4 && sizeof(unsigned int) == 4); - memcpy(&ii, &val, 4); -#if PYPY_LONG_BIT == 32 - stm_write_word(addr, ii); /* 32 bits */ -#else - stm_write_partial_4(addr, ii); /* 64 bits */ -#endif -} - long stm_debug_get_state(void) { struct tx_descriptor *d = thread_descriptor; - if (d == NULL_TX) - return -1; - if (!d->transaction_active) + if (d == NULL) 
return 0; if (!is_inevitable(d)) return 1; @@ -846,9 +707,11 @@ } -void stm_set_tls(void *newtls) +void stm_set_tls(void *newtls, long (*getsize)(void*)) { - descriptor_init()->rpython_tls_object = newtls; + struct tx_descriptor *d = descriptor_init(); + d->rpython_tls_object = newtls; + d->rpython_get_size = getsize; } void *stm_get_tls(void) diff --git a/pypy/translator/stm/src_stm/et.h b/pypy/translator/stm/src_stm/et.h --- a/pypy/translator/stm/src_stm/et.h +++ b/pypy/translator/stm/src_stm/et.h @@ -12,7 +12,7 @@ #include "src/commondefs.h" -void stm_set_tls(void *); +void stm_set_tls(void *, long(*)(void*)); void *stm_get_tls(void); void stm_del_tls(void); @@ -20,8 +20,11 @@ void stm_tldict_add(void *, void *); void stm_tlidct_enum(void(*)(void*, void*)); +long stm_read_word(void *, long); +#if 0 + #ifdef RPY_STM_ASSERT # define STM_CCHARP1(arg) char* arg # define STM_EXPLAIN1(info) info @@ -69,5 +72,7 @@ void stm_write_doubleword(long *addr, long long val); #endif +#endif /* 0 */ + #endif /* _ET_H */ diff --git a/pypy/translator/stm/stmgcintf.py b/pypy/translator/stm/stmgcintf.py --- a/pypy/translator/stm/stmgcintf.py +++ b/pypy/translator/stm/stmgcintf.py @@ -6,6 +6,7 @@ return staticmethod(_rffi_stm.llexternal(name, args, result)) CALLBACK = lltype.Ptr(lltype.FuncType([llmemory.Address] * 2, lltype.Void)) +GETSIZE = lltype.Ptr(lltype.FuncType([llmemory.Address], lltype.Signed)) class StmOperations(object): @@ -13,7 +14,8 @@ def _freeze_(self): return True - set_tls = smexternal('stm_set_tls', [llmemory.Address], lltype.Void) + set_tls = smexternal('stm_set_tls', [llmemory.Address, GETSIZE], + lltype.Void) get_tls = smexternal('stm_get_tls', [], llmemory.Address) del_tls = smexternal('stm_del_tls', [], lltype.Void) diff --git a/pypy/translator/stm/test/test_stmgcintf.py b/pypy/translator/stm/test/test_stmgcintf.py --- a/pypy/translator/stm/test/test_stmgcintf.py +++ b/pypy/translator/stm/test/test_stmgcintf.py @@ -1,7 +1,7 @@ import random from pypy.rpython.lltypesystem import lltype, llmemory, rffi from pypy.rpython.annlowlevel import llhelper -from pypy.translator.stm.stmgcintf import StmOperations, CALLBACK +from pypy.translator.stm.stmgcintf import StmOperations, CALLBACK, GETSIZE stm_operations = StmOperations() @@ -12,7 +12,7 @@ # assume that they are really thread-local; not checked here s = lltype.malloc(lltype.Struct('S'), flavor='raw') a = llmemory.cast_ptr_to_adr(s) - stm_operations.set_tls(a) + stm_operations.set_tls(a, lltype.nullptr(GETSIZE.TO)) assert stm_operations.get_tls() == a stm_operations.del_tls() lltype.free(s, flavor='raw') @@ -25,11 +25,15 @@ s = lltype.malloc(TLS, flavor='raw', immortal=True) self.tls = s a = llmemory.cast_ptr_to_adr(s) - stm_operations.set_tls(a) + getsize = llhelper(GETSIZE, self.getsize) + stm_operations.set_tls(a, getsize) def teardown_method(self, meth): stm_operations.del_tls() + def getsize(self, obj): + xxx + def test_set_get_del(self): a = llmemory.cast_ptr_to_adr(self.tls) assert stm_operations.get_tls() == a From noreply at buildbot.pypy.org Sat Feb 4 20:47:03 2012 From: noreply at buildbot.pypy.org (arigo) Date: Sat, 4 Feb 2012 20:47:03 +0100 (CET) Subject: [pypy-commit] pypy stm-gc: Fix this test by moving the commented-out logic from stmgc.py here. 
Message-ID: <20120204194703.71700710770@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-gc Changeset: r52099:7b136b21cb51 Date: 2012-02-04 20:46 +0100 http://bitbucket.org/pypy/pypy/changeset/7b136b21cb51/ Log: Fix this test by moving the commented-out logic from stmgc.py here. diff --git a/pypy/rpython/memory/gc/test/test_stmgc.py b/pypy/rpython/memory/gc/test/test_stmgc.py --- a/pypy/rpython/memory/gc/test/test_stmgc.py +++ b/pypy/rpython/memory/gc/test/test_stmgc.py @@ -82,17 +82,14 @@ assert state[2] is not None return state[2] - class stm_read_word: - def __init__(self, obj, offset): - self.obj = obj - self.offset = offset - def __repr__(self): - return 'stm_read_word(%r, %r)' % (self.obj, self.offset) - def __eq__(self, other): - return (type(self) is type(other) and - self.__dict__ == other.__dict__) - def __ne__(self, other): - return not (self == other) + def stm_read_word(self, obj, offset): + hdr = self._gc.header(obj) + if hdr.tid & GCFLAG_WAS_COPIED != 0: + localobj = self.tldict_lookup(obj) + if localobj: + assert self._gc.header(localobj).tid & GCFLAG_GLOBAL == 0 + return (localobj + offset).signed[0] + return 'stm_ll_read_word(%r, %r)' % (obj, offset) def stm_copy_transactional_to_raw(self, srcobj, dstobj, size): sizehdr = self._gc.gcheaderbuilder.size_gc_header @@ -205,7 +202,7 @@ assert self.gc.header(s_adr).tid & GCFLAG_GLOBAL != 0 s.a = 42 value = self.gc.read_signed(s_adr, ofs_a) - assert value == FakeStmOperations.stm_read_word(s_adr, ofs_a) + assert value == 'stm_ll_read_word(%r, %r)' % (s_adr, ofs_a) # self.select_thread(1) s, s_adr = self.malloc(S) From noreply at buildbot.pypy.org Sat Feb 4 20:53:24 2012 From: noreply at buildbot.pypy.org (fijal) Date: Sat, 4 Feb 2012 20:53:24 +0100 (CET) Subject: [pypy-commit] extradoc extradoc: add stuff Message-ID: <20120204195325.00A19710770@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: extradoc Changeset: r4073:ac8c392d4e01 Date: 2012-02-04 21:52 +0200 http://bitbucket.org/pypy/extradoc/changeset/ac8c392d4e01/ Log: add stuff diff --git a/talk/pycon2012/tutorial/outline.rst b/talk/pycon2012/tutorial/outline.rst --- a/talk/pycon2012/tutorial/outline.rst +++ b/talk/pycon2012/tutorial/outline.rst @@ -1,9 +1,18 @@ How to get the most out of PyPy =============================== +* Why would you use PyPy - a quick look: + * performance + * memory consumption + * numpy (soon) + * sandbox +* Why you would not use PyPy (yet) + * embedded + * some extensions don't work (lxml) * How PyPy Works * Bytecode VM * GC + * not refcounting * Generational * Implications (building large objects) * JIT From noreply at buildbot.pypy.org Sat Feb 4 20:53:26 2012 From: noreply at buildbot.pypy.org (fijal) Date: Sat, 4 Feb 2012 20:53:26 +0100 (CET) Subject: [pypy-commit] extradoc extradoc: more stuff Message-ID: <20120204195326.3699C710770@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: extradoc Changeset: r4074:539aa13ca942 Date: 2012-02-04 21:52 +0200 http://bitbucket.org/pypy/extradoc/changeset/539aa13ca942/ Log: more stuff diff --git a/talk/pycon2012/tutorial/outline.rst b/talk/pycon2012/tutorial/outline.rst --- a/talk/pycon2012/tutorial/outline.rst +++ b/talk/pycon2012/tutorial/outline.rst @@ -9,6 +9,7 @@ * Why you would not use PyPy (yet) * embedded * some extensions don't work (lxml) + * but there are ways around it! 
* How PyPy Works * Bytecode VM * GC From noreply at buildbot.pypy.org Sat Feb 4 21:02:15 2012 From: noreply at buildbot.pypy.org (fijal) Date: Sat, 4 Feb 2012 21:02:15 +0100 (CET) Subject: [pypy-commit] extradoc extradoc: minor stuff Message-ID: <20120204200215.68491710770@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: extradoc Changeset: r4075:0c764efc4254 Date: 2012-02-04 22:01 +0200 http://bitbucket.org/pypy/extradoc/changeset/0c764efc4254/ Log: minor stuff diff --git a/talk/pycon2012/tutorial/outline.rst b/talk/pycon2012/tutorial/outline.rst --- a/talk/pycon2012/tutorial/outline.rst +++ b/talk/pycon2012/tutorial/outline.rst @@ -24,7 +24,8 @@ * resops * optimizations * A case study + * Examples, examples, examples * Open source application (TBD) - * Tracebin or jitviewer + * Jitviewer * Putting it to work * Workshop style From noreply at buildbot.pypy.org Sat Feb 4 21:11:33 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Sat, 4 Feb 2012 21:11:33 +0100 (CET) Subject: [pypy-commit] pypy default: The "str0" check is now optional, and controlled by the option Message-ID: <20120204201133.089F1710770@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: Changeset: r52100:a0016c989748 Date: 2012-02-04 21:07 +0100 http://bitbucket.org/pypy/pypy/changeset/a0016c989748/ Log: The "str0" check is now optional, and controlled by the option config.translation.check_str_without_nul. diff --git a/pypy/annotation/model.py b/pypy/annotation/model.py --- a/pypy/annotation/model.py +++ b/pypy/annotation/model.py @@ -740,6 +740,15 @@ s_obj = new_s_obj return s_obj +def remove_no_nul(s_obj): + if not getattr(s_obj, 'no_nul', False): + return s_obj + new_s_obj = SomeObject.__new__(s_obj.__class__) + new_s_obj.__dict__ = s_obj.__dict__.copy() + del new_s_obj.no_nul + return new_s_obj + + # ____________________________________________________________ # internal diff --git a/pypy/config/translationoption.py b/pypy/config/translationoption.py --- a/pypy/config/translationoption.py +++ b/pypy/config/translationoption.py @@ -123,6 +123,9 @@ default="off"), # jit_ffi is automatically turned on by withmod-_ffi (which is enabled by default) BoolOption("jit_ffi", "optimize libffi calls", default=False, cmdline=None), + BoolOption("check_str_without_nul", + "Forbid NUL chars in strings in some external function calls", + default=False, cmdline=None), # misc BoolOption("verbose", "Print extra information", default=False), diff --git a/pypy/rpython/extfunc.py b/pypy/rpython/extfunc.py --- a/pypy/rpython/extfunc.py +++ b/pypy/rpython/extfunc.py @@ -2,7 +2,7 @@ from pypy.rpython.extregistry import ExtRegistryEntry from pypy.rpython.lltypesystem.lltype import typeOf from pypy.objspace.flow.model import Constant -from pypy.annotation.model import unionof +from pypy.annotation import model as annmodel from pypy.annotation.signature import annotation import py, sys @@ -138,7 +138,6 @@ # we defer a bit annotation here def compute_result_annotation(self): - from pypy.annotation import model as annmodel return annmodel.SomeGenericCallable([annotation(i, self.bookkeeper) for i in self.instance.args], annotation(self.instance.result, self.bookkeeper)) @@ -152,8 +151,17 @@ signature_args = [annotation(arg, None) for arg in args] assert len(args_s) == len(signature_args),\ "Argument number mismatch" + + check_no_nul = False + if hasattr(self, 'bookkeeper'): + config = self.bookkeeper.annotator.translator.config + if config.translation.check_str_without_nul: + check_no_nul = True + for i, 
expected in enumerate(signature_args): - arg = unionof(args_s[i], expected) + if not check_no_nul: + expected = annmodel.remove_no_nul(expected) + arg = annmodel.unionof(args_s[i], expected) if not expected.contains(arg): name = getattr(self, 'name', None) if not name: diff --git a/pypy/rpython/module/ll_os_environ.py b/pypy/rpython/module/ll_os_environ.py --- a/pypy/rpython/module/ll_os_environ.py +++ b/pypy/rpython/module/ll_os_environ.py @@ -3,9 +3,10 @@ from pypy.rpython.controllerentry import Controller from pypy.rpython.extfunc import register_external from pypy.rpython.lltypesystem import rffi, lltype +from pypy.rpython.module import ll_os from pypy.rlib import rposix -str0 = annmodel.s_Str0 +str0 = ll_os.str0 # ____________________________________________________________ # diff --git a/pypy/rpython/test/test_extfunc.py b/pypy/rpython/test/test_extfunc.py --- a/pypy/rpython/test/test_extfunc.py +++ b/pypy/rpython/test/test_extfunc.py @@ -167,3 +167,23 @@ a = RPythonAnnotator(policy=policy) s = a.build_types(f, []) assert isinstance(s, annmodel.SomeString) + + def test_str0(self): + str0 = annmodel.SomeString(no_nul=True) + def os_open(s): + pass + register_external(os_open, [str0], None) + def f(s): + return os_open(s) + policy = AnnotatorPolicy() + policy.allow_someobjects = False + a = RPythonAnnotator(policy=policy) + a.build_types(f, [str]) # Does not raise + assert a.translator.config.translation.check_str_without_nul == False + # Now enable the str0 check, and try again with a similar function + a.translator.config.translation.check_str_without_nul=True + def g(s): + return os_open(s) + raises(Exception, a.build_types, g, [str]) + a.build_types(g, [str0]) # Does not raise + From noreply at buildbot.pypy.org Sat Feb 4 21:11:34 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Sat, 4 Feb 2012 21:11:34 +0100 (CET) Subject: [pypy-commit] pypy default: Enable check for strings with NUL bytes in pypy translation Message-ID: <20120204201134.5331E710770@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: Changeset: r52101:9faed9cf5289 Date: 2012-02-04 21:08 +0100 http://bitbucket.org/pypy/pypy/changeset/9faed9cf5289/ Log: Enable check for strings with NUL bytes in pypy translation diff --git a/pypy/translator/goal/targetpypystandalone.py b/pypy/translator/goal/targetpypystandalone.py --- a/pypy/translator/goal/targetpypystandalone.py +++ b/pypy/translator/goal/targetpypystandalone.py @@ -159,6 +159,8 @@ ## if config.translation.type_system == 'ootype': ## config.objspace.usemodules.suggest(rbench=True) + config.translation.suggest(check_str_without_nul=True) + if config.translation.thread: config.objspace.usemodules.thread = True elif config.objspace.usemodules.thread: From noreply at buildbot.pypy.org Sat Feb 4 21:21:53 2012 From: noreply at buildbot.pypy.org (fijal) Date: Sat, 4 Feb 2012 21:21:53 +0100 (CET) Subject: [pypy-commit] extradoc extradoc: start writing examples Message-ID: <20120204202153.DF446710770@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: extradoc Changeset: r4076:2368ff076bfd Date: 2012-02-04 22:21 +0200 http://bitbucket.org/pypy/extradoc/changeset/2368ff076bfd/ Log: start writing examples diff --git a/talk/pycon2012/tutorial/examples/01_refcount.py b/talk/pycon2012/tutorial/examples/01_refcount.py new file mode 100644 --- /dev/null +++ b/talk/pycon2012/tutorial/examples/01_refcount.py @@ -0,0 +1,9 @@ + +def wrong(): + open("file").write("data") + assert open("file").read() == "data" + +def right(): + with open("file") as 
f:
+        f.write("data")
+        # contents *will be there* by now
diff --git a/talk/pycon2012/tutorial/examples/02_speedup.py b/talk/pycon2012/tutorial/examples/02_speedup.py
new file mode 100644
--- /dev/null
+++ b/talk/pycon2012/tutorial/examples/02_speedup.py
@@ -0,0 +1,11 @@
+
+def f():
+    s = 0
+    for i in xrange(100000000):
+        s += 1
+    return s
+
+if __name__ == '__main__':
+    import dis
+    dis.dis(f)
+    f()

From noreply at buildbot.pypy.org Sat Feb 4 21:33:35 2012
From: noreply at buildbot.pypy.org (fijal)
Date: Sat, 4 Feb 2012 21:33:35 +0100 (CET)
Subject: [pypy-commit] extradoc extradoc: add another example
Message-ID: <20120204203335.40302710770@wyvern.cs.uni-duesseldorf.de>

Author: Maciej Fijalkowski
Branch: extradoc
Changeset: r4077:1b2e064d275a
Date: 2012-02-04 22:33 +0200
http://bitbucket.org/pypy/extradoc/changeset/1b2e064d275a/

Log:	add another example

diff --git a/talk/pycon2012/tutorial/examples.rst b/talk/pycon2012/tutorial/examples.rst
new file mode 100644
--- /dev/null
+++ b/talk/pycon2012/tutorial/examples.rst
@@ -0,0 +1,6 @@
+
+* Refcount example, where it won't work
+
+* A simple speedup example and show performance
+
+* Show memory consumption how it grows for tight instances
diff --git a/talk/pycon2012/tutorial/examples/03_memory.py b/talk/pycon2012/tutorial/examples/03_memory.py
new file mode 100644
--- /dev/null
+++ b/talk/pycon2012/tutorial/examples/03_memory.py
@@ -0,0 +1,42 @@
+
+import time, os, re, gc
+
+class A(object):
+    def __init__(self, a, b, c):
+        self.a = a
+        self.b = b
+        self.c = c
+
+def read_smaps():
+    with open("/proc/%d/smaps" % os.getpid()) as f:
+        mark = False
+        for line in f:
+            if mark:
+                assert line.startswith('Size:')
+                m = re.search('(\d+).*', line)
+                return m.group(0), int(m.group(1))
+            if 'heap' in line:
+                mark = True
+
+def main():
+    l = []
+    count = 0
+    for k in range(100):
+        t0 = time.time()
+        for i in range(100000):
+            l.append(A(1, 2, i))
+            for j in range(4):
+                A(1, 1, 2)
+            count += i
+        print time.time() - t0
+        usage, kb = read_smaps()
+        print usage, kb * 1024 / count, "per instance"
+        gc.collect()
+        usage, kb = read_smaps()
+        print "after collect", usage, kb * 1024 / count, "per instance"
+        #import pdb
+        #pdb.set_trace()
+        time.sleep(1)
+
+if __name__ == '__main__':
+    main()

From noreply at buildbot.pypy.org Sat Feb 4 21:36:34 2012
From: noreply at buildbot.pypy.org (fijal)
Date: Sat, 4 Feb 2012 21:36:34 +0100 (CET)
Subject: [pypy-commit] extradoc extradoc: this is useless kill
Message-ID: <20120204203634.E27E2710770@wyvern.cs.uni-duesseldorf.de>

Author: Maciej Fijalkowski
Branch: extradoc
Changeset: r4078:83df573f4d14
Date: 2012-02-04 22:35 +0200
http://bitbucket.org/pypy/extradoc/changeset/83df573f4d14/

Log:	this is useless kill

diff --git a/talk/pycon2012/tutorial/examples/03_memory.py b/talk/pycon2012/tutorial/examples/03_memory.py
--- a/talk/pycon2012/tutorial/examples/03_memory.py
+++ b/talk/pycon2012/tutorial/examples/03_memory.py
@@ -31,11 +31,6 @@
         print time.time() - t0
         usage, kb = read_smaps()
         print usage, kb * 1024 / count, "per instance"
-        gc.collect()
-        usage, kb = read_smaps()
-        print "after collect", usage, kb * 1024 / count, "per instance"
-        #import pdb
-        #pdb.set_trace()
         time.sleep(1)

 if __name__ == '__main__':
     main()

From noreply at buildbot.pypy.org Sat Feb 4 21:40:43 2012
From: noreply at buildbot.pypy.org (alex_gaynor)
Date: Sat, 4 Feb 2012 21:40:43 +0100 (CET)
Subject: [pypy-commit] extradoc extradoc: start slides
Message-ID: <20120204204043.067C6710770@wyvern.cs.uni-duesseldorf.de>

Author: Alex Gaynor
Branch: extradoc
Changeset: 
r4079:09b3cfb13742
Date: 2012-02-04 15:38 -0500
http://bitbucket.org/pypy/extradoc/changeset/09b3cfb13742/

Log:	start slides

diff --git a/talk/pycon2012/tutorial/slides.rst b/talk/pycon2012/tutorial/slides.rst
new file mode 100644
--- /dev/null
+++ b/talk/pycon2012/tutorial/slides.rst
@@ -0,0 +1,29 @@
+Why PyPy?
+=========
+
+* Performance
+* Memory
+* Sandbox
+
+Performance
+===========
+
+* Sweetspot?
+  * CPython's sweetspot: stuff written in C
+  * PyPy's sweetspot: lots of stuff written in Python
+* http://speed.pypy.org
+* How do you hit the sweetspot?
+  * Be in this room for the next 3 hours.
+
+Memory
+======
+
+* PyPy memory usage is difficult to estimate.
+* Very program dependent.
+* Learn to predict!
+
+Sandbox
+=======
+
+* We're not going to talk about it here.
+* Run untrusted code.

From noreply at buildbot.pypy.org Sat Feb 4 21:40:44 2012
From: noreply at buildbot.pypy.org (alex_gaynor)
Date: Sat, 4 Feb 2012 21:40:44 +0100 (CET)
Subject: [pypy-commit] extradoc extradoc: merged upstream
Message-ID: <20120204204044.44D62710770@wyvern.cs.uni-duesseldorf.de>

Author: Alex Gaynor
Branch: extradoc
Changeset: r4080:257cc228351a
Date: 2012-02-04 15:40 -0500
http://bitbucket.org/pypy/extradoc/changeset/257cc228351a/

Log:	merged upstream

diff --git a/talk/pycon2012/tutorial/examples/03_memory.py b/talk/pycon2012/tutorial/examples/03_memory.py
--- a/talk/pycon2012/tutorial/examples/03_memory.py
+++ b/talk/pycon2012/tutorial/examples/03_memory.py
@@ -31,11 +31,6 @@
         print time.time() - t0
         usage, kb = read_smaps()
         print usage, kb * 1024 / count, "per instance"
-        gc.collect()
-        usage, kb = read_smaps()
-        print "after collect", usage, kb * 1024 / count, "per instance"
-        #import pdb
-        #pdb.set_trace()
         time.sleep(1)

 if __name__ == '__main__':
     main()

From noreply at buildbot.pypy.org Sat Feb 4 21:51:58 2012
From: noreply at buildbot.pypy.org (arigo)
Date: Sat, 4 Feb 2012 21:51:58 +0100 (CET)
Subject: [pypy-commit] pypy stm-gc: Fix signature of set_tls.
Message-ID: <20120204205158.4A22D710770@wyvern.cs.uni-duesseldorf.de>

Author: Armin Rigo
Branch: stm-gc
Changeset: r52102:470ded236eeb
Date: 2012-02-04 20:52 +0100
http://bitbucket.org/pypy/pypy/changeset/470ded236eeb/

Log:	Fix signature of set_tls.

diff --git a/pypy/rpython/memory/gc/stmgc.py b/pypy/rpython/memory/gc/stmgc.py
--- a/pypy/rpython/memory/gc/stmgc.py
+++ b/pypy/rpython/memory/gc/stmgc.py
@@ -2,6 +2,7 @@
 from pypy.rpython.lltypesystem.lloperation import llop
 from pypy.rpython.lltypesystem.llmemory import raw_malloc_usage
 from pypy.rpython.memory.gc.base import GCBase
+from pypy.rpython.annlowlevel import llhelper
 from pypy.rlib.rarithmetic import LONG_BIT
 from pypy.rlib.debug import ll_assert, debug_start, debug_stop
 from pypy.module.thread import ll_thread
@@ -66,6 +67,11 @@
         self.collector = Collector(self)
         self.max_nursery_size = max_nursery_size
         #
+        def _do_get_size(obj):     # indirection to hide 'self'
+            return self.get_size(obj)
+        GETSIZE = lltype.Ptr(lltype.FuncType([llmemory.Address],lltype.Signed))
+        self._do_get_size = llhelper(GETSIZE, _do_get_size)
+        #
         self.declare_readers()
         self.declare_write_barrier()

@@ -88,7 +94,8 @@
         """Setup a thread.  Allocates the thread-local data structures.
Must be called only once per OS-level thread."""
         tls = lltype.malloc(self.GCTLS, flavor='raw')
-        self.stm_operations.set_tls(llmemory.cast_ptr_to_adr(tls))
+        self.stm_operations.set_tls(llmemory.cast_ptr_to_adr(tls),
+                                    self._do_get_size)
         tls.nursery_start = self._alloc_nursery()
         tls.nursery_size = self.max_nursery_size
         tls.nursery_free = tls.nursery_start
diff --git a/pypy/rpython/memory/gc/test/test_stmgc.py b/pypy/rpython/memory/gc/test/test_stmgc.py
--- a/pypy/rpython/memory/gc/test/test_stmgc.py
+++ b/pypy/rpython/memory/gc/test/test_stmgc.py
@@ -22,7 +22,7 @@
     threadnum = 0    # 0 = main thread; 1,2,3... = transactional threads

-    def set_tls(self, tls):
+    def set_tls(self, tls, getsize_fn):
         assert lltype.typeOf(tls) == llmemory.Address
         assert tls
         if self.threadnum == 0:
@@ -30,6 +30,7 @@
             self._tls_dict = {0: tls}
             self._tldicts = {0: {}}
             self._tldicts_iterators = {}
+            self._getsize_fn = getsize_fn
             self._transactional_copies = []
         else:
             self._tls_dict[self.threadnum] = tls
             self._tldicts[self.threadnum] = {}
@@ -359,3 +360,8 @@
         self.checkflags(sr3, 1, 0, llmemory.NULL)
         self.checkflags(sr4, 1, 0, llmemory.NULL)
         self.checkflags(s , 1, 0, llmemory.NULL)
+
+    def test_do_get_size(self):
+        s1, s1_adr = self.malloc(S)
+        assert (repr(self.gc.stm_operations._getsize_fn(s1_adr)) ==
+                repr(fake_get_size(s1_adr)))

From noreply at buildbot.pypy.org Sat Feb 4 21:51:59 2012
From: noreply at buildbot.pypy.org (arigo)
Date: Sat, 4 Feb 2012 21:51:59 +0100 (CET)
Subject: [pypy-commit] pypy stm-gc: Fix and test stm_read_word(). Separate getsize_fn from stm_set_tls().
Message-ID: <20120204205159.95949710770@wyvern.cs.uni-duesseldorf.de>

Author: Armin Rigo
Branch: stm-gc
Changeset: r52103:0f73f6e99efd
Date: 2012-02-04 21:51 +0100
http://bitbucket.org/pypy/pypy/changeset/0f73f6e99efd/

Log:	Fix and test stm_read_word(). Separate getsize_fn from stm_set_tls().

diff --git a/pypy/rpython/memory/gc/stmgc.py b/pypy/rpython/memory/gc/stmgc.py
--- a/pypy/rpython/memory/gc/stmgc.py
+++ b/pypy/rpython/memory/gc/stmgc.py
@@ -67,10 +67,9 @@
         self.collector = Collector(self)
         self.max_nursery_size = max_nursery_size
         #
-        def _do_get_size(obj):     # indirection to hide 'self'
+        def _get_size(obj):     # indirection to hide 'self'
             return self.get_size(obj)
-        GETSIZE = lltype.Ptr(lltype.FuncType([llmemory.Address],lltype.Signed))
-        self._do_get_size = llhelper(GETSIZE, _do_get_size)
+        self._getsize_fn = _get_size
         #
         self.declare_readers()
         self.declare_write_barrier()
@@ -78,6 +77,9 @@
     def setup(self):
         """Called at run-time to initialize the GC."""
         GCBase.setup(self)
+        GETSIZE = lltype.Ptr(lltype.FuncType([llmemory.Address],lltype.Signed))
+        self.stm_operations.setup_size_getter(
+            llhelper(GETSIZE, self._getsize_fn))
         self.main_thread_tls = self.setup_thread(True)
         self.mutex_lock = ll_thread.allocate_ll_lock()
@@ -95,7 +97,7 @@
         Must be called only once per OS-level thread."""
         tls = lltype.malloc(self.GCTLS, flavor='raw')
         self.stm_operations.set_tls(llmemory.cast_ptr_to_adr(tls),
-                                    self._do_get_size)
+                                    int(in_main_thread))
         tls.nursery_start = self._alloc_nursery()
         tls.nursery_size = self.max_nursery_size
         tls.nursery_free = tls.nursery_start
diff --git a/pypy/rpython/memory/gc/test/test_stmgc.py b/pypy/rpython/memory/gc/test/test_stmgc.py
--- a/pypy/rpython/memory/gc/test/test_stmgc.py
+++ b/pypy/rpython/memory/gc/test/test_stmgc.py
@@ -22,17 +22,21 @@
     threadnum = 0    # 0 = main thread; 1,2,3...
= transactional threads - def set_tls(self, tls, getsize_fn): + def setup_size_getter(self, getsize_fn): + self._getsize_fn = getsize_fn + + def set_tls(self, tls, in_main_thread): assert lltype.typeOf(tls) == llmemory.Address assert tls if self.threadnum == 0: + assert in_main_thread == 1 assert not hasattr(self, '_tls_dict') self._tls_dict = {0: tls} self._tldicts = {0: {}} self._tldicts_iterators = {} - self._getsize_fn = getsize_fn self._transactional_copies = [] else: + assert in_main_thread == 0 self._tls_dict[self.threadnum] = tls self._tldicts[self.threadnum] = {} diff --git a/pypy/translator/stm/src_stm/et.c b/pypy/translator/stm/src_stm/et.c --- a/pypy/translator/stm/src_stm/et.c +++ b/pypy/translator/stm/src_stm/et.c @@ -25,6 +25,8 @@ #ifdef PYPY_STANDALONE /* obscure: cannot include debug_print.h if compiled */ # define RPY_STM_DEBUG_PRINT /* via ll2ctypes; only include it in normal builds */ # include "src/debug_print.h" +#else +# define RPY_STM_ASSERT 1 #endif /************************************************************/ @@ -62,17 +64,16 @@ struct tx_descriptor { void *rpython_tls_object; - long (*rpython_get_size)(void*); jmp_buf *setjmp_buf; owner_version_t start_time; owner_version_t end_time; /*unsigned long last_known_global_timestamp;*/ + owner_version_t my_lock_word; struct OrecList reads; unsigned num_commits; unsigned num_aborts[ABORT_REASONS]; unsigned num_spinloops[SPINLOOP_REASONS]; /*unsigned int spinloop_counter;*/ - owner_version_t my_lock_word; struct RedoLog redolog; /* last item, because it's the biggest one */ }; @@ -80,6 +81,7 @@ if there is an inevitable transaction running */ static volatile unsigned long global_timestamp = 2; static __thread struct tx_descriptor *thread_descriptor = NULL; +static long (*rpython_get_size)(void*); /************************************************************/ @@ -130,6 +132,11 @@ #endif } +static _Bool is_main_thread(struct tx_descriptor *d) +{ + return d->my_lock_word == 0; +} + static _Bool is_inevitable(struct tx_descriptor *d) { return d->setjmp_buf == NULL; @@ -147,7 +154,7 @@ void *globalobj = item->addr; void *localobj = item->val; owner_version_t p = item->p; - long size = d->rpython_get_size(localobj); + long size = rpython_get_size(localobj); memcpy(((char *)globalobj) + sizeof(orec_t), ((char *)localobj) + sizeof(orec_t), size - sizeof(orec_t)); @@ -339,20 +346,26 @@ static void mutex_lock(void) { unsigned long pself = (unsigned long)pthread_self(); +#ifdef RPY_STM_DEBUG_PRINT if (PYPY_HAVE_DEBUG_PRINTS) fprintf(PYPY_DEBUG_FILE, "%lx: mutex inev locking...\n", pself); +#endif assert(locked_by != pself); pthread_mutex_lock(&mutex_inevitable); locked_by = pself; +#ifdef RPY_STM_DEBUG_PRINT if (PYPY_HAVE_DEBUG_PRINTS) fprintf(PYPY_DEBUG_FILE, "%lx: mutex inev locked\n", pself); +#endif } static void mutex_unlock(void) { unsigned long pself = (unsigned long)pthread_self(); locked_by = 0; +#ifdef RPY_STM_DEBUG_PRINT if (PYPY_HAVE_DEBUG_PRINTS) fprintf(PYPY_DEBUG_FILE, "%lx: mutex inev unlocked\n", pself); +#endif pthread_mutex_unlock(&mutex_inevitable); } # else @@ -431,6 +444,10 @@ not_found:; } + // XXX try to remove this check from the main path + if (is_main_thread(d)) + return *(long *)(((char *)addr) + offset); + retry: // read the orec BEFORE we read anything else ovt = o->version; @@ -465,7 +482,7 @@ } -static struct tx_descriptor *descriptor_init(void) +static struct tx_descriptor *descriptor_init(_Bool is_main_thread) { assert(thread_descriptor == NULL); if (1) /* for hg diff */ @@ -477,11 +494,18 @@ 
PYPY_DEBUG_START("stm-init"); #endif - /* initialize 'my_lock_word' to be a unique negative number */ - d->my_lock_word = (owner_version_t)d; - if (!IS_LOCKED(d->my_lock_word)) - d->my_lock_word = ~d->my_lock_word; - assert(IS_LOCKED(d->my_lock_word)); + if (is_main_thread) + { + d->my_lock_word = 0; + } + else + { + /* initialize 'my_lock_word' to be a unique negative number */ + d->my_lock_word = (owner_version_t)d; + if (!IS_LOCKED(d->my_lock_word)) + d->my_lock_word = ~d->my_lock_word; + assert(IS_LOCKED(d->my_lock_word)); + } /*d->spinloop_counter = (unsigned int)(d->my_lock_word | 1);*/ thread_descriptor = d; @@ -549,6 +573,7 @@ static long commit_transaction(void) { struct tx_descriptor *d = thread_descriptor; + assert(!is_main_thread(d)); // if I don't have writes, I'm committed if (!redolog_any_entry(&d->redolog)) @@ -707,11 +732,10 @@ } -void stm_set_tls(void *newtls, long (*getsize)(void*)) +void stm_set_tls(void *newtls, long is_main_thread) { - struct tx_descriptor *d = descriptor_init(); + struct tx_descriptor *d = descriptor_init(is_main_thread); d->rpython_tls_object = newtls; - d->rpython_get_size = getsize; } void *stm_get_tls(void) @@ -752,4 +776,9 @@ } REDOLOG_LOOP_END; } +void stm_setup_size_getter(long(*getsize_fn)(void*)) +{ + rpython_get_size = getsize_fn; +} + #endif /* PYPY_NOT_MAIN_FILE */ diff --git a/pypy/translator/stm/src_stm/et.h b/pypy/translator/stm/src_stm/et.h --- a/pypy/translator/stm/src_stm/et.h +++ b/pypy/translator/stm/src_stm/et.h @@ -12,7 +12,9 @@ #include "src/commondefs.h" -void stm_set_tls(void *, long(*)(void*)); +void stm_setup_size_getter(long(*)(void*)); + +void stm_set_tls(void *, long); void *stm_get_tls(void); void stm_del_tls(void); diff --git a/pypy/translator/stm/stmgcintf.py b/pypy/translator/stm/stmgcintf.py --- a/pypy/translator/stm/stmgcintf.py +++ b/pypy/translator/stm/stmgcintf.py @@ -14,7 +14,10 @@ def _freeze_(self): return True - set_tls = smexternal('stm_set_tls', [llmemory.Address, GETSIZE], + setup_size_getter = smexternal('stm_setup_size_getter', [GETSIZE], + lltype.Void) + + set_tls = smexternal('stm_set_tls', [llmemory.Address, lltype.Signed], lltype.Void) get_tls = smexternal('stm_get_tls', [], llmemory.Address) del_tls = smexternal('stm_del_tls', [], lltype.Void) diff --git a/pypy/translator/stm/test/test_stmgcintf.py b/pypy/translator/stm/test/test_stmgcintf.py --- a/pypy/translator/stm/test/test_stmgcintf.py +++ b/pypy/translator/stm/test/test_stmgcintf.py @@ -2,17 +2,22 @@ from pypy.rpython.lltypesystem import lltype, llmemory, rffi from pypy.rpython.annlowlevel import llhelper from pypy.translator.stm.stmgcintf import StmOperations, CALLBACK, GETSIZE +from pypy.rpython.memory.gc import stmgc stm_operations = StmOperations() DEFAULT_TLS = lltype.Struct('DEFAULT_TLS') +S1 = lltype.Struct('S1', ('hdr', stmgc.StmGC.HDR), + ('x', lltype.Signed), + ('y', lltype.Signed)) + def test_set_get_del(): # assume that they are really thread-local; not checked here s = lltype.malloc(lltype.Struct('S'), flavor='raw') a = llmemory.cast_ptr_to_adr(s) - stm_operations.set_tls(a, lltype.nullptr(GETSIZE.TO)) + stm_operations.set_tls(a, 1) assert stm_operations.get_tls() == a stm_operations.del_tls() lltype.free(s, flavor='raw') @@ -25,15 +30,12 @@ s = lltype.malloc(TLS, flavor='raw', immortal=True) self.tls = s a = llmemory.cast_ptr_to_adr(s) - getsize = llhelper(GETSIZE, self.getsize) - stm_operations.set_tls(a, getsize) + in_main_thread = getattr(meth, 'in_main_thread', True) + stm_operations.set_tls(a, int(in_main_thread)) def 
teardown_method(self, meth): stm_operations.del_tls() - def getsize(self, obj): - xxx - def test_set_get_del(self): a = llmemory.cast_ptr_to_adr(self.tls) assert stm_operations.get_tls() == a @@ -80,3 +82,46 @@ p_callback, seen = self.get_callback() stm_operations.tldict_enum(p_callback) assert seen == [] + + def stm_read_case(self, flags, copied=False): + # doesn't test STM behavior, but just that it appears to work + s1 = lltype.malloc(S1, flavor='raw') + s1.hdr.tid = stmgc.GCFLAG_GLOBAL | flags + s1.hdr.version = llmemory.NULL + s1.x = 42042 + if copied: + s2 = lltype.malloc(S1, flavor='raw') + s2.hdr.tid = stmgc.GCFLAG_WAS_COPIED + s2.hdr.version = llmemory.NULL + s2.x = 84084 + a1 = llmemory.cast_ptr_to_adr(s1) + a2 = llmemory.cast_ptr_to_adr(s2) + stm_operations.tldict_add(a1, a2) + res = stm_operations.stm_read_word(llmemory.cast_ptr_to_adr(s1), + rffi.sizeof(S1.hdr)) # 'x' + lltype.free(s1, flavor='raw') + if copied: + lltype.free(s2, flavor='raw') + return res + + def test_stm_read_word_main_thread(self): + res = self.stm_read_case(0) # not copied + assert res == 42042 + res = self.stm_read_case(stmgc.GCFLAG_WAS_COPIED) # but ignored + assert res == 42042 + + def test_stm_read_word_transactional_thread(self): + res = self.stm_read_case(0) # not copied + assert res == 42042 + res = self.stm_read_case(stmgc.GCFLAG_WAS_COPIED) # but ignored + assert res == 42042 + res = self.stm_read_case(stmgc.GCFLAG_WAS_COPIED, copied=True) + assert res == 84084 + test_stm_read_word_transactional_thread.in_main_thread = False + + def test_stm_size_getter(self): + def getsize(addr): + xxx + getter = llhelper(GETSIZE, getsize) + stm_operations.setup_size_getter(getter) + # just tests that the function is really defined From noreply at buildbot.pypy.org Sat Feb 4 21:54:42 2012 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Sat, 4 Feb 2012 21:54:42 +0100 (CET) Subject: [pypy-commit] extradoc extradoc: email for numpy Message-ID: <20120204205442.CFC3D710770@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: extradoc Changeset: r4081:110cc76492e1 Date: 2012-02-04 15:54 -0500 http://bitbucket.org/pypy/extradoc/changeset/110cc76492e1/ Log: email for numpy diff --git a/talk/pycon2012/tutorial/emails/01_numpy.rst b/talk/pycon2012/tutorial/emails/01_numpy.rst new file mode 100644 --- /dev/null +++ b/talk/pycon2012/tutorial/emails/01_numpy.rst @@ -0,0 +1,17 @@ +PyPy Tutorial - NumPy? +====================== + +Hi, + +We're very excited for our upcoming tutorial at PyCon, and we can't wait to +see you there. As we prepare our material, we're interested in what material +will be most interesting and helpful to you, so we're going to have a lot of +questions for you. Right now we'd like to know, are you interested in hearing +about NumPy for PyPy? As you may know PyPy is working on it's own +implementation of NumPy, with rapidly increasing compatibility, and +performance. However, it does have a pretty different performance profile from +CPython's NumPy. If you're interested in hearing about this in our tutorial, +please let us know. 
+ +Thanks, +Alex Gaynor, Maciej Fijalkoski, Armin Rigo From noreply at buildbot.pypy.org Sat Feb 4 21:57:54 2012 From: noreply at buildbot.pypy.org (mattip) Date: Sat, 4 Feb 2012 21:57:54 +0100 (CET) Subject: [pypy-commit] pypy win32-cleanup: (amaury_) fix compile-time error Message-ID: <20120204205754.9AB38710770@wyvern.cs.uni-duesseldorf.de> Author: mattip Branch: win32-cleanup Changeset: r52104:9945f9583021 Date: 2012-02-04 22:55 +0200 http://bitbucket.org/pypy/pypy/changeset/9945f9583021/ Log: (amaury_) fix compile-time error diff --git a/pypy/rlib/ropenssl.py b/pypy/rlib/ropenssl.py --- a/pypy/rlib/ropenssl.py +++ b/pypy/rlib/ropenssl.py @@ -227,7 +227,7 @@ ssl_external('ASN1_item_d2i', [rffi.VOIDP, rffi.CCHARPP, rffi.LONG, ASN1_ITEM], rffi.VOIDP) if OPENSSL_EXPORT_VAR_AS_FUNCTION: - ssl_external('ASN1_ITEM_ptr', [ASN1_ITEM_EXP], ASN1_ITEM, macro=True) + ssl_external('ASN1_ITEM_ptr', [lltype.Ptr(lltype.FuncType([], ASN1_ITEM))], ASN1_ITEM, macro=True) else: ssl_external('ASN1_ITEM_ptr', [rffi.VOIDP], ASN1_ITEM, macro=True) From noreply at buildbot.pypy.org Sat Feb 4 21:59:33 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Sat, 4 Feb 2012 21:59:33 +0100 (CET) Subject: [pypy-commit] pypy default: When the no_nul check is disabled, correctly transform the signature Message-ID: <20120204205933.6AE52710770@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: Changeset: r52105:1c332c518153 Date: 2012-02-04 21:59 +0100 http://bitbucket.org/pypy/pypy/changeset/1c332c518153/ Log: When the no_nul check is disabled, correctly transform the signature when the function takes a list of strings. diff --git a/pypy/annotation/model.py b/pypy/annotation/model.py --- a/pypy/annotation/model.py +++ b/pypy/annotation/model.py @@ -741,6 +741,14 @@ return s_obj def remove_no_nul(s_obj): + if isinstance(s_obj, SomeList): + s_item = s_obj.listdef.read_item() + new_s_item = remove_no_nul(s_item) + from pypy.annotation.listdef import ListDef + if s_item is not new_s_item: + return SomeList(ListDef(None, new_s_item, + resized=True)) + if not getattr(s_obj, 'no_nul', False): return s_obj new_s_obj = SomeObject.__new__(s_obj.__class__) diff --git a/pypy/rpython/test/test_extfunc.py b/pypy/rpython/test/test_extfunc.py --- a/pypy/rpython/test/test_extfunc.py +++ b/pypy/rpython/test/test_extfunc.py @@ -187,3 +187,23 @@ raises(Exception, a.build_types, g, [str]) a.build_types(g, [str0]) # Does not raise + def test_list_of_str0(self): + str0 = annmodel.SomeString(no_nul=True) + def os_execve(l): + pass + register_external(os_execve, [[str0]], None) + def f(l): + return os_execve(l) + policy = AnnotatorPolicy() + policy.allow_someobjects = False + a = RPythonAnnotator(policy=policy) + a.build_types(f, [[str]]) # Does not raise + assert a.translator.config.translation.check_str_without_nul == False + # Now enable the str0 check, and try again with a similar function + a.translator.config.translation.check_str_without_nul=True + def g(l): + return os_execve(l) + raises(Exception, a.build_types, g, [[str]]) + a.build_types(g, [[str0]]) # Does not raise + + From noreply at buildbot.pypy.org Sat Feb 4 22:09:42 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Sat, 4 Feb 2012 22:09:42 +0100 (CET) Subject: [pypy-commit] pypy default: Fish the list item directly, read_item() cannot be called during rtyping Message-ID: <20120204210942.8CA9B710770@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: Changeset: r52106:85f2c70631d9 Date: 2012-02-04 22:09 +0100 
http://bitbucket.org/pypy/pypy/changeset/85f2c70631d9/ Log: Fish the list item directly, read_item() cannot be called during rtyping diff --git a/pypy/annotation/model.py b/pypy/annotation/model.py --- a/pypy/annotation/model.py +++ b/pypy/annotation/model.py @@ -742,7 +742,7 @@ def remove_no_nul(s_obj): if isinstance(s_obj, SomeList): - s_item = s_obj.listdef.read_item() + s_item = s_obj.listdef.listitem.s_value new_s_item = remove_no_nul(s_item) from pypy.annotation.listdef import ListDef if s_item is not new_s_item: From noreply at buildbot.pypy.org Sat Feb 4 23:11:09 2012 From: noreply at buildbot.pypy.org (mattip) Date: Sat, 4 Feb 2012 23:11:09 +0100 (CET) Subject: [pypy-commit] pypy win32-cleanup: declare arguments for c functions, use os.path.sep Message-ID: <20120204221109.7D493710770@wyvern.cs.uni-duesseldorf.de> Author: mattip Branch: win32-cleanup Changeset: r52107:e16f0468532b Date: 2012-02-05 00:06 +0200 http://bitbucket.org/pypy/pypy/changeset/e16f0468532b/ Log: declare arguments for c functions, use os.path.sep diff --git a/py/_io/terminalwriter.py b/py/_io/terminalwriter.py --- a/py/_io/terminalwriter.py +++ b/py/_io/terminalwriter.py @@ -271,16 +271,24 @@ ('srWindow', SMALL_RECT), ('dwMaximumWindowSize', COORD)] + _GetStdHandle = ctypes.windll.kernel32.GetStdHandle + _GetStdHandle.argtypes = [wintypes.DWORD] + _GetStdHandle.restype = wintypes.HANDLE def GetStdHandle(kind): - return ctypes.windll.kernel32.GetStdHandle(kind) + return _GetStdHandle(kind) - SetConsoleTextAttribute = \ - ctypes.windll.kernel32.SetConsoleTextAttribute - + SetConsoleTextAttribute = ctypes.windll.kernel32.SetConsoleTextAttribute + SetConsoleTextAttribute.argtypes = [wintypes.HANDLE, wintypes.WORD] + SetConsoleTextAttribute.restype = wintypes.BOOL + + _GetConsoleScreenBufferInfo = \ + ctypes.windll.kernel32.GetConsoleScreenBufferInfo + _GetConsoleScreenBufferInfo.argtypes = [wintypes.HANDLE, + ctypes.POINTER(CONSOLE_SCREEN_BUFFER_INFO)] + _GetConsoleScreenBufferInfo.restype = wintypes.BOOL def GetConsoleInfo(handle): info = CONSOLE_SCREEN_BUFFER_INFO() - ctypes.windll.kernel32.GetConsoleScreenBufferInfo(\ - handle, ctypes.byref(info)) + _GetConsoleScreenBufferInfo(handle, ctypes.byref(info)) return info def _getdimensions(): diff --git a/pypy/module/zipimport/interp_zipimport.py b/pypy/module/zipimport/interp_zipimport.py --- a/pypy/module/zipimport/interp_zipimport.py +++ b/pypy/module/zipimport/interp_zipimport.py @@ -123,7 +123,9 @@ self.prefix = prefix def getprefix(self, space): - return space.wrap(self.prefix) + if ZIPSEP == os.path.sep: + return space.wrap(self.prefix) + return space.wrap(self.prefix.replace(ZIPSEP, os.path.sep)) def _find_relative_path(self, filename): if filename.startswith(self.filename): diff --git a/pypy/module/zipimport/test/test_zipimport.py b/pypy/module/zipimport/test/test_zipimport.py --- a/pypy/module/zipimport/test/test_zipimport.py +++ b/pypy/module/zipimport/test/test_zipimport.py @@ -15,7 +15,7 @@ cpy's regression tests """ compression = ZIP_STORED - pathsep = '/' + pathsep = os.path.sep def make_pyc(cls, space, co, mtime): data = marshal.dumps(co) @@ -129,7 +129,7 @@ self.writefile('sub/__init__.py', '') self.writefile('sub/yy.py', '') from zipimport import _zip_directory_cache, zipimporter - sub_importer = zipimporter(self.zipfile + '/sub') + sub_importer = zipimporter(self.zipfile + os.path.sep + 'sub') main_importer = zipimporter(self.zipfile) assert main_importer is not sub_importer From noreply at buildbot.pypy.org Sat Feb 4 23:46:41 2012 From: noreply 
at buildbot.pypy.org (mattip) Date: Sat, 4 Feb 2012 23:46:41 +0100 (CET) Subject: [pypy-commit] pypy win32-cleanup: more os.path.sep fun Message-ID: <20120204224641.36A34710770@wyvern.cs.uni-duesseldorf.de> Author: mattip Branch: win32-cleanup Changeset: r52108:7d3ad3fd6a7e Date: 2012-02-05 00:45 +0200 http://bitbucket.org/pypy/pypy/changeset/7d3ad3fd6a7e/ Log: more os.path.sep fun diff --git a/pypy/module/zipimport/interp_zipimport.py b/pypy/module/zipimport/interp_zipimport.py --- a/pypy/module/zipimport/interp_zipimport.py +++ b/pypy/module/zipimport/interp_zipimport.py @@ -383,7 +383,7 @@ prefix = name[len(filename):] if prefix.startswith(os.path.sep) or prefix.startswith(ZIPSEP): prefix = prefix[1:] - if prefix and not prefix.endswith(ZIPSEP): + if prefix and not prefix.endswith(ZIPSEP) and not prefix.endswith(os.path.sep): prefix += ZIPSEP w_result = space.wrap(W_ZipImporter(space, name, filename, zip_file, prefix)) zip_cache.set(filename, w_result) diff --git a/pypy/module/zipimport/test/test_undocumented.py b/pypy/module/zipimport/test/test_undocumented.py --- a/pypy/module/zipimport/test/test_undocumented.py +++ b/pypy/module/zipimport/test/test_undocumented.py @@ -119,7 +119,7 @@ zip_importer = zipimport.zipimporter(path) assert isinstance(zip_importer, zipimport.zipimporter) assert zip_importer.archive == zip_path - assert zip_importer.prefix == prefix + assert zip_importer.prefix == prefix.replace('/', os.path.sep) assert zip_path in zipimport._zip_directory_cache finally: self.cleanup_zipfile(self.created_paths) From noreply at buildbot.pypy.org Sun Feb 5 11:16:37 2012 From: noreply at buildbot.pypy.org (fijal) Date: Sun, 5 Feb 2012 11:16:37 +0100 (CET) Subject: [pypy-commit] extradoc extradoc: slides Message-ID: <20120205101637.BDDEF820D0@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: extradoc Changeset: r4082:9812358309f2 Date: 2012-02-05 12:16 +0200 http://bitbucket.org/pypy/extradoc/changeset/9812358309f2/ Log: slides diff --git a/talk/pycon2012/tutorial/examples.rst b/talk/pycon2012/tutorial/examples.rst --- a/talk/pycon2012/tutorial/examples.rst +++ b/talk/pycon2012/tutorial/examples.rst @@ -4,3 +4,12 @@ * A simple speedup example and show performance * Show memory consumption how it grows for tight instances + +* Some numpy example (?) + +* An example how to use execnet + +* An example how to use matplotlib + +* Large object performance problem (that might go away some time?) + diff --git a/talk/pycon2012/tutorial/examples/03_memory.py b/talk/pycon2012/tutorial/examples/03_memory.py --- a/talk/pycon2012/tutorial/examples/03_memory.py +++ b/talk/pycon2012/tutorial/examples/03_memory.py @@ -1,5 +1,5 @@ -import time, os, re, gc +import time, os, re class A(object): def __init__(self, a, b, c): diff --git a/talk/pycon2012/tutorial/slides.rst b/talk/pycon2012/tutorial/slides.rst --- a/talk/pycon2012/tutorial/slides.rst +++ b/talk/pycon2012/tutorial/slides.rst @@ -1,18 +1,111 @@ Why PyPy? ========= -* Performance -* Memory -* Sandbox +* performance + +* memory + +* sandbox + +Why not PyPy (yet)? 
+=================== + +* embedded python interpreter + +* embedded systems + +* not x86-based systems + +* extensions, extensions, extensions Performance =========== +* the main thing we'll concentrate on today + +* PyPy is an interpreter + a JIT + +* compiling Python to assembler via magic (we'll talk about it later) + +* very different performance characteristics from CPython + +Performance sweetspots +====================== + +* every VM has it's sweetspot + +* we try hard to make it wider and wider + +CPython's sweetspot +=================== + +* moving computations to C, example:: + + map(operator.... ) # XXX some obscure example + +PyPy's sweetpot +=============== + +* **simple** python + +* if you can't understand it, JIT won't either + +How PyPy runs your program, involved parts +========================================== + +* a simple bytecode compiler (just like CPython) + +* an interpreter loop written in RPython + +* a JIT written in RPython + +* an assembler backend + +Bytecode interpreter +==================== + +* executing one bytecode at a time + +* add opcode for example + +* .... goes on and on + +* XXX example 1 + +Tracing JIT +=========== + +* once the loop gets hot, it's starting tracing (1039 runs, or 1619 function + calls) + +* generating operations following how the interpreter would execute them + +* optimizing them + +* compiling to assembler (x86 only for now) + +PyPy's specific features +======================== + +* JIT complete by design, as long as the interpreter is correct + +* Only **one** language description, in a high level language + +* Decent tools for inspecting the generated code + +XXXXXXXXXXXXXXXXXXXXXXXXXXXX + + * Sweetspot? + * CPython's sweetspot: stuff written in C + * PyPy's sweetspot: lots of stuff written in Python + * http://speed.pypy.org + * How do you hit the sweetspot? + * Be in this room for the next 3 hours. Memory From noreply at buildbot.pypy.org Sun Feb 5 11:23:17 2012 From: noreply at buildbot.pypy.org (arigo) Date: Sun, 5 Feb 2012 11:23:17 +0100 (CET) Subject: [pypy-commit] extradoc extradoc: Improve. Message-ID: <20120205102317.1D53C820D0@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: extradoc Changeset: r4083:d25dafc60318 Date: 2012-02-01 11:38 +0100 http://bitbucket.org/pypy/extradoc/changeset/d25dafc60318/ Log: Improve. diff --git a/planning/stm.txt b/planning/stm.txt --- a/planning/stm.txt +++ b/planning/stm.txt @@ -264,11 +264,19 @@ is called, we can try to do such a collection, but what about the pinned objects? -<< NOW: let this mode be rather slow. To implement this mode, we would -have only global objects, and have the stm_write barrier of 'obj' return -'obj'. Do only global collections (one we have them; at first, don't -collect at all). Allocation would allocate immediately a global object, -without being able to benefit from bump-pointer allocation. >> +<< NOW: let this mode be rather slow. Two solutions are considered: + + 1. we would have only global objects, and have the stm_write barrier + of 'obj' return 'obj'. Do only global collections (once we have + them; at first, don't collect at all). Allocation would allocate + immediately a global object, without being able to benefit from + bump-pointer allocation. + + 2. allocate in a nursery, never collected for now; but just do an + end-of-transaction collection when transaction.run() is first + called. 
+ +>> Pointer equality From noreply at buildbot.pypy.org Sun Feb 5 11:23:18 2012 From: noreply at buildbot.pypy.org (arigo) Date: Sun, 5 Feb 2012 11:23:18 +0100 (CET) Subject: [pypy-commit] extradoc extradoc: add a comment Message-ID: <20120205102318.A7572820D0@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: extradoc Changeset: r4084:3e2e98a4aee4 Date: 2012-02-05 11:22 +0100 http://bitbucket.org/pypy/extradoc/changeset/3e2e98a4aee4/ Log: add a comment diff --git a/planning/stm.txt b/planning/stm.txt --- a/planning/stm.txt +++ b/planning/stm.txt @@ -126,6 +126,9 @@ << NOW: straightforward >> +TODO: how do we handle MemoryErrors when making a local copy?? +Maybe force the transaction to abort, and then re-raise MemoryError + End-of-transaction collections ------------------------------ From noreply at buildbot.pypy.org Sun Feb 5 11:23:20 2012 From: noreply at buildbot.pypy.org (arigo) Date: Sun, 5 Feb 2012 11:23:20 +0100 (CET) Subject: [pypy-commit] extradoc extradoc: merge heads Message-ID: <20120205102320.00A92820D0@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: extradoc Changeset: r4085:c6a6febee72c Date: 2012-02-05 11:23 +0100 http://bitbucket.org/pypy/extradoc/changeset/c6a6febee72c/ Log: merge heads diff --git a/talk/pycon2012/tutorial/emails/01_numpy.rst b/talk/pycon2012/tutorial/emails/01_numpy.rst new file mode 100644 --- /dev/null +++ b/talk/pycon2012/tutorial/emails/01_numpy.rst @@ -0,0 +1,17 @@ +PyPy Tutorial - NumPy? +====================== + +Hi, + +We're very excited for our upcoming tutorial at PyCon, and we can't wait to +see you there. As we prepare our material, we're interested in what material +will be most interesting and helpful to you, so we're going to have a lot of +questions for you. Right now we'd like to know, are you interested in hearing +about NumPy for PyPy? As you may know PyPy is working on it's own +implementation of NumPy, with rapidly increasing compatibility, and +performance. However, it does have a pretty different performance profile from +CPython's NumPy. If you're interested in hearing about this in our tutorial, +please let us know. + +Thanks, +Alex Gaynor, Maciej Fijalkoski, Armin Rigo diff --git a/talk/pycon2012/tutorial/examples.rst b/talk/pycon2012/tutorial/examples.rst new file mode 100644 --- /dev/null +++ b/talk/pycon2012/tutorial/examples.rst @@ -0,0 +1,15 @@ + +* Refcount example, where it won't work + +* A simple speedup example and show performance + +* Show memory consumption how it grows for tight instances + +* Some numpy example (?) + +* An example how to use execnet + +* An example how to use matplotlib + +* Large object performance problem (that might go away some time?) 
+ diff --git a/talk/pycon2012/tutorial/examples/01_refcount.py b/talk/pycon2012/tutorial/examples/01_refcount.py new file mode 100644 --- /dev/null +++ b/talk/pycon2012/tutorial/examples/01_refcount.py @@ -0,0 +1,9 @@ + +def wrong(): + open("file").write("data") + assert open("file").read() == "data" + +def right(): + with open("file") as f: + f.write("data") + # contents *will be there* by now diff --git a/talk/pycon2012/tutorial/examples/02_speedup.py b/talk/pycon2012/tutorial/examples/02_speedup.py new file mode 100644 --- /dev/null +++ b/talk/pycon2012/tutorial/examples/02_speedup.py @@ -0,0 +1,11 @@ + +def f(): + s = 0 + for i in xrange(100000000): + s += 1 + return s + +if __name__ == '__main__': + import dis + dis.dis(f) + f() diff --git a/talk/pycon2012/tutorial/examples/03_memory.py b/talk/pycon2012/tutorial/examples/03_memory.py new file mode 100644 --- /dev/null +++ b/talk/pycon2012/tutorial/examples/03_memory.py @@ -0,0 +1,37 @@ + +import time, os, re + +class A(object): + def __init__(self, a, b, c): + self.a = a + self.b = b + self.c = c + +def read_smaps(): + with open("/proc/%d/smaps" % os.getpid()) as f: + mark = False + for line in f: + if mark: + assert line.startswith('Size:') + m = re.search('(\d+).*', line) + return m.group(0), int(m.group(1)) + if 'heap' in line: + mark = True + +def main(): + l = [] + count = 0 + for k in range(100): + t0 = time.time() + for i in range(100000): + l.append(A(1, 2, i)) + for j in range(4): + A(1, 1, 2) + count += i + print time.time() - t0 + usage, kb = read_smaps() + print usage, kb * 1024 / count, "per instance" + time.sleep(1) + +if __name__ == '__main__': + main() diff --git a/talk/pycon2012/tutorial/outline.rst b/talk/pycon2012/tutorial/outline.rst new file mode 100644 --- /dev/null +++ b/talk/pycon2012/tutorial/outline.rst @@ -0,0 +1,31 @@ +How to get the most out of PyPy +=============================== + +* Why would you use PyPy - a quick look: + * performance + * memory consumption + * numpy (soon) + * sandbox +* Why you would not use PyPy (yet) + * embedded + * some extensions don't work (lxml) + * but there are ways around it! +* How PyPy Works + * Bytecode VM + * GC + * not refcounting + * Generational + * Implications (building large objects) + * JIT + * JIT + Python + * mapdict + * globals/builtins + * tracing + * resops + * optimizations +* A case study + * Examples, examples, examples + * Open source application (TBD) + * Jitviewer +* Putting it to work + * Workshop style diff --git a/talk/pycon2012/tutorial/slides.rst b/talk/pycon2012/tutorial/slides.rst new file mode 100644 --- /dev/null +++ b/talk/pycon2012/tutorial/slides.rst @@ -0,0 +1,122 @@ +Why PyPy? +========= + +* performance + +* memory + +* sandbox + +Why not PyPy (yet)? +=================== + +* embedded python interpreter + +* embedded systems + +* not x86-based systems + +* extensions, extensions, extensions + +Performance +=========== + +* the main thing we'll concentrate on today + +* PyPy is an interpreter + a JIT + +* compiling Python to assembler via magic (we'll talk about it later) + +* very different performance characteristics from CPython + +Performance sweetspots +====================== + +* every VM has it's sweetspot + +* we try hard to make it wider and wider + +CPython's sweetspot +=================== + +* moving computations to C, example:: + + map(operator.... 
) # XXX some obscure example + +PyPy's sweetpot +=============== + +* **simple** python + +* if you can't understand it, JIT won't either + +How PyPy runs your program, involved parts +========================================== + +* a simple bytecode compiler (just like CPython) + +* an interpreter loop written in RPython + +* a JIT written in RPython + +* an assembler backend + +Bytecode interpreter +==================== + +* executing one bytecode at a time + +* add opcode for example + +* .... goes on and on + +* XXX example 1 + +Tracing JIT +=========== + +* once the loop gets hot, it's starting tracing (1039 runs, or 1619 function + calls) + +* generating operations following how the interpreter would execute them + +* optimizing them + +* compiling to assembler (x86 only for now) + +PyPy's specific features +======================== + +* JIT complete by design, as long as the interpreter is correct + +* Only **one** language description, in a high level language + +* Decent tools for inspecting the generated code + +XXXXXXXXXXXXXXXXXXXXXXXXXXXX + + +* Sweetspot? + + * CPython's sweetspot: stuff written in C + + * PyPy's sweetspot: lots of stuff written in Python + +* http://speed.pypy.org + +* How do you hit the sweetspot? + + * Be in this room for the next 3 hours. + +Memory +====== + +* PyPy memory usage is difficult to estimate. +* Very program dependent. +* Learn to predict! + +Sandbox +======= + +* We're not going to talk about it here. +* Run untrusted code. From noreply at buildbot.pypy.org Sun Feb 5 13:20:45 2012 From: noreply at buildbot.pypy.org (arigo) Date: Sun, 5 Feb 2012 13:20:45 +0100 (CET) Subject: [pypy-commit] pypy default: Fix: always use self.bookkeeper, instead of sometimes having it and sometimes not. Message-ID: <20120205122045.5AFFB820D0@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r52109:68cacc899ee9 Date: 2012-02-05 12:20 +0000 http://bitbucket.org/pypy/pypy/changeset/68cacc899ee9/ Log: Fix: always use self.bookkeeper, instead of sometimes having it and sometimes not. This is needed to translate pypy. diff --git a/pypy/rpython/extfunc.py b/pypy/rpython/extfunc.py --- a/pypy/rpython/extfunc.py +++ b/pypy/rpython/extfunc.py @@ -153,11 +153,10 @@ "Argument number mismatch" check_no_nul = False - if hasattr(self, 'bookkeeper'): - config = self.bookkeeper.annotator.translator.config - if config.translation.check_str_without_nul: - check_no_nul = True - + config = self.bookkeeper.annotator.translator.config + if config.translation.check_str_without_nul: + check_no_nul = True + for i, expected in enumerate(signature_args): if not check_no_nul: expected = annmodel.remove_no_nul(expected) @@ -181,6 +180,7 @@ def specialize_call(self, hop): rtyper = hop.rtyper + self.bookkeeper = rtyper.annotator.bookkeeper signature_args = self.normalize_args(*hop.args_s) args_r = [rtyper.getrepr(s_arg) for s_arg in signature_args] args_ll = [r_arg.lowleveltype for r_arg in args_r] From noreply at buildbot.pypy.org Sun Feb 5 14:58:28 2012 From: noreply at buildbot.pypy.org (arigo) Date: Sun, 5 Feb 2012 14:58:28 +0100 (CET) Subject: [pypy-commit] pypy default: Fix: even if there are libraries listed, fall back to look in Message-ID: <20120205135828.2876E820D0@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r52110:ba6857447ed7 Date: 2012-02-05 14:58 +0100 http://bitbucket.org/pypy/pypy/changeset/ba6857447ed7/ Log: Fix: even if there are libraries listed, fall back to look in the standard C library if the symbol is not found elsewhere. 
diff --git a/pypy/rpython/lltypesystem/ll2ctypes.py b/pypy/rpython/lltypesystem/ll2ctypes.py --- a/pypy/rpython/lltypesystem/ll2ctypes.py +++ b/pypy/rpython/lltypesystem/ll2ctypes.py @@ -1036,13 +1036,8 @@ libraries = eci.testonly_libraries + eci.libraries + eci.frameworks FUNCTYPE = lltype.typeOf(funcptr).TO - if not libraries: - cfunc = get_on_lib(standard_c_lib, funcname) - # XXX magic: on Windows try to load the function from 'kernel32' too - if cfunc is None and hasattr(ctypes, 'windll'): - cfunc = get_on_lib(ctypes.windll.kernel32, funcname) - else: - cfunc = None + cfunc = None + if libraries: not_found = [] for libname in libraries: libpath = None @@ -1075,6 +1070,12 @@ not_found.append(libname) if cfunc is None: + cfunc = get_on_lib(standard_c_lib, funcname) + # XXX magic: on Windows try to load the function from 'kernel32' too + if cfunc is None and hasattr(ctypes, 'windll'): + cfunc = get_on_lib(ctypes.windll.kernel32, funcname) + + if cfunc is None: # function name not found in any of the libraries if not libraries: place = 'the standard C library (missing libraries=...?)' From noreply at buildbot.pypy.org Sun Feb 5 18:42:47 2012 From: noreply at buildbot.pypy.org (arigo) Date: Sun, 5 Feb 2012 18:42:47 +0100 (CET) Subject: [pypy-commit] pyrepl default: Ignore UnicodeDecodeErrors here instead of crashing. Useful Message-ID: <20120205174247.5C7B1820D0@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r162:60e7c5ca06b5 Date: 2012-02-05 18:42 +0100 http://bitbucket.org/pypy/pyrepl/changeset/60e7c5ca06b5/ Log: Ignore UnicodeDecodeErrors here instead of crashing. Useful because otherwise typing by mistake an accented letter during a Pdb++ session crashes the program with UnicodeDecodeError. diff --git a/pyrepl/unix_console.py b/pyrepl/unix_console.py --- a/pyrepl/unix_console.py +++ b/pyrepl/unix_console.py @@ -411,7 +411,9 @@ e.args[4] == 'unexpected end of data': pass else: - raise + #raise -- but better to ignore UnicodeDecodeErrors here... + self.partial_char = '' + return else: self.partial_char = '' self.event_queue.push(c) From noreply at buildbot.pypy.org Sun Feb 5 18:44:54 2012 From: noreply at buildbot.pypy.org (arigo) Date: Sun, 5 Feb 2012 18:44:54 +0100 (CET) Subject: [pypy-commit] pyrepl default: Sorry. Use the same version as pypy's pyrepl. Message-ID: <20120205174454.C590A820D0@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r163:64f1c2e6d88e Date: 2012-02-05 18:44 +0100 http://bitbucket.org/pypy/pyrepl/changeset/64f1c2e6d88e/ Log: Sorry. Use the same version as pypy's pyrepl. diff --git a/pyrepl/unix_console.py b/pyrepl/unix_console.py --- a/pyrepl/unix_console.py +++ b/pyrepl/unix_console.py @@ -411,9 +411,12 @@ e.args[4] == 'unexpected end of data': pass else: - #raise -- but better to ignore UnicodeDecodeErrors here... + # was: "raise". But it crashes pyrepl, and by extension the + # pypy currently running, in which we are e.g. in the middle + # of some debugging session. Argh. Instead just print an + # error message to stderr and continue running, for now. 
self.partial_char = '' - return + sys.stderr.write('\n%s: %s\n' % (e.__class__.__name__, e)) else: self.partial_char = '' self.event_queue.push(c) From noreply at buildbot.pypy.org Sun Feb 5 19:11:40 2012 From: noreply at buildbot.pypy.org (arigo) Date: Sun, 5 Feb 2012 19:11:40 +0100 (CET) Subject: [pypy-commit] pypy default: Fix for "translator/c/test/test_extfunc.py -k exec": Message-ID: <20120205181140.64BC2820D0@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r52111:5cc21b20455b Date: 2012-02-05 19:11 +0100 http://bitbucket.org/pypy/pypy/changeset/5cc21b20455b/ Log: Fix for "translator/c/test/test_extfunc.py -k exec": don't use remove_no_nul(), which doesn't know what to return if a SomeList is passed; instead, temporarily disable checking for 'no_nul' when comparing SomeStrings. diff --git a/pypy/annotation/model.py b/pypy/annotation/model.py --- a/pypy/annotation/model.py +++ b/pypy/annotation/model.py @@ -225,9 +225,7 @@ def __init__(self): pass -class SomeString(SomeObject): - "Stands for an object which is known to be a string." - knowntype = str +class SomeStringOrUnicode(SomeObject): immutable = True can_be_None=False no_nul = False # No NUL character in the string. @@ -241,27 +239,29 @@ def can_be_none(self): return self.can_be_None + def __eq__(self, other): + if self.__class__ is not other.__class__: + return False + d1 = self.__dict__ + d2 = other.__dict__ + if getattr(TLS, 'ignore_no_nul', False): + d1 = d1.copy(); d1['no_nul'] = 0 + d2 = d2.copy(); d2['no_nul'] = 0 + return d1 == d2 + +class SomeString(SomeStringOrUnicode): + "Stands for an object which is known to be a string." + knowntype = str + def nonnoneify(self): return SomeString(can_be_None=False, no_nul=self.no_nul) -class SomeUnicodeString(SomeObject): +class SomeUnicodeString(SomeStringOrUnicode): "Stands for an object which is known to be an unicode string" knowntype = unicode - immutable = True - can_be_None=False - no_nul = False - - def __init__(self, can_be_None=False, no_nul=False): - if can_be_None: - self.can_be_None = True - if no_nul: - self.no_nul = True - - def can_be_none(self): - return self.can_be_None def nonnoneify(self): - return SomeUnicodeString(can_be_None=False) + return SomeUnicodeString(can_be_None=False, no_nul=self.no_nul) class SomeChar(SomeString): "Stands for an object known to be a string of length 1." 
@@ -740,23 +740,6 @@ s_obj = new_s_obj return s_obj -def remove_no_nul(s_obj): - if isinstance(s_obj, SomeList): - s_item = s_obj.listdef.listitem.s_value - new_s_item = remove_no_nul(s_item) - from pypy.annotation.listdef import ListDef - if s_item is not new_s_item: - return SomeList(ListDef(None, new_s_item, - resized=True)) - - if not getattr(s_obj, 'no_nul', False): - return s_obj - new_s_obj = SomeObject.__new__(s_obj.__class__) - new_s_obj.__dict__ = s_obj.__dict__.copy() - del new_s_obj.no_nul - return new_s_obj - - # ____________________________________________________________ # internal diff --git a/pypy/rpython/extfunc.py b/pypy/rpython/extfunc.py --- a/pypy/rpython/extfunc.py +++ b/pypy/rpython/extfunc.py @@ -152,26 +152,25 @@ assert len(args_s) == len(signature_args),\ "Argument number mismatch" - check_no_nul = False config = self.bookkeeper.annotator.translator.config - if config.translation.check_str_without_nul: - check_no_nul = True - - for i, expected in enumerate(signature_args): - if not check_no_nul: - expected = annmodel.remove_no_nul(expected) - arg = annmodel.unionof(args_s[i], expected) - if not expected.contains(arg): - name = getattr(self, 'name', None) - if not name: - try: - name = self.instance.__name__ - except AttributeError: - name = '?' - raise Exception("In call to external function %r:\n" - "arg %d must be %s,\n" - " got %s" % ( - name, i+1, expected, args_s[i])) + if not config.translation.check_str_without_nul: + annmodel.TLS.ignore_no_nul = True + try: + for i, expected in enumerate(signature_args): + arg = annmodel.unionof(args_s[i], expected) + if not expected.contains(arg): + name = getattr(self, 'name', None) + if not name: + try: + name = self.instance.__name__ + except AttributeError: + name = '?' + raise Exception("In call to external function %r:\n" + "arg %d must be %s,\n" + " got %s" % ( + name, i+1, expected, args_s[i])) + finally: + annmodel.TLS.ignore_no_nul = False return signature_args def compute_result_annotation(self, *args_s): From noreply at buildbot.pypy.org Sun Feb 5 19:26:26 2012 From: noreply at buildbot.pypy.org (arigo) Date: Sun, 5 Feb 2012 19:26:26 +0100 (CET) Subject: [pypy-commit] pypy default: Hack more, by moving the check_str_without_nul at some global level. Message-ID: <20120205182626.18F5B820D0@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r52112:cebcd354652f Date: 2012-02-05 19:26 +0100 http://bitbucket.org/pypy/pypy/changeset/cebcd354652f/ Log: Hack more, by moving the check_str_without_nul at some global level. Now Converge is happy. diff --git a/pypy/annotation/annrpython.py b/pypy/annotation/annrpython.py --- a/pypy/annotation/annrpython.py +++ b/pypy/annotation/annrpython.py @@ -93,6 +93,10 @@ # make input arguments and set their type args_s = [self.typeannotation(t) for t in input_arg_types] + # XXX hack + annmodel.TLS.check_str_without_nul = ( + self.translator.config.translation.check_str_without_nul) + flowgraph, inputcells = self.get_call_parameters(function, args_s, policy) if not isinstance(flowgraph, FunctionGraph): assert isinstance(flowgraph, annmodel.SomeObject) diff --git a/pypy/annotation/model.py b/pypy/annotation/model.py --- a/pypy/annotation/model.py +++ b/pypy/annotation/model.py @@ -39,7 +39,9 @@ DEBUG = False # set to False to disable recording of debugging information class State(object): - pass + # A global attribute :-( Patch it with 'True' to enable checking of + # the no_nul attribute... 
+ check_str_without_nul = False TLS = State() class SomeObject(object): @@ -244,9 +246,9 @@ return False d1 = self.__dict__ d2 = other.__dict__ - if getattr(TLS, 'ignore_no_nul', False): - d1 = d1.copy(); d1['no_nul'] = 0 - d2 = d2.copy(); d2['no_nul'] = 0 + if not TLS.check_str_without_nul: + d1 = d1.copy(); d1['no_nul'] = 0 # ignored + d2 = d2.copy(); d2['no_nul'] = 0 # ignored return d1 == d2 class SomeString(SomeStringOrUnicode): diff --git a/pypy/rpython/extfunc.py b/pypy/rpython/extfunc.py --- a/pypy/rpython/extfunc.py +++ b/pypy/rpython/extfunc.py @@ -152,25 +152,19 @@ assert len(args_s) == len(signature_args),\ "Argument number mismatch" - config = self.bookkeeper.annotator.translator.config - if not config.translation.check_str_without_nul: - annmodel.TLS.ignore_no_nul = True - try: - for i, expected in enumerate(signature_args): - arg = annmodel.unionof(args_s[i], expected) - if not expected.contains(arg): - name = getattr(self, 'name', None) - if not name: - try: - name = self.instance.__name__ - except AttributeError: - name = '?' - raise Exception("In call to external function %r:\n" - "arg %d must be %s,\n" - " got %s" % ( - name, i+1, expected, args_s[i])) - finally: - annmodel.TLS.ignore_no_nul = False + for i, expected in enumerate(signature_args): + arg = annmodel.unionof(args_s[i], expected) + if not expected.contains(arg): + name = getattr(self, 'name', None) + if not name: + try: + name = self.instance.__name__ + except AttributeError: + name = '?' + raise Exception("In call to external function %r:\n" + "arg %d must be %s,\n" + " got %s" % ( + name, i+1, expected, args_s[i])) return signature_args def compute_result_annotation(self, *args_s): @@ -179,7 +173,6 @@ def specialize_call(self, hop): rtyper = hop.rtyper - self.bookkeeper = rtyper.annotator.bookkeeper signature_args = self.normalize_args(*hop.args_s) args_r = [rtyper.getrepr(s_arg) for s_arg in signature_args] args_ll = [r_arg.lowleveltype for r_arg in args_r] From noreply at buildbot.pypy.org Sun Feb 5 21:09:24 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Sun, 5 Feb 2012 21:09:24 +0100 (CET) Subject: [pypy-commit] pypy default: Fix ll_os module on windows. Message-ID: <20120205200924.94D06820D0@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: Changeset: r52113:380432600a53 Date: 2012-02-05 21:08 +0100 http://bitbucket.org/pypy/pypy/changeset/380432600a53/ Log: Fix ll_os module on windows. 
diff --git a/pypy/rpython/module/ll_os.py b/pypy/rpython/module/ll_os.py --- a/pypy/rpython/module/ll_os.py +++ b/pypy/rpython/module/ll_os.py @@ -48,7 +48,10 @@ args = ', '.join(arglist) transformed_args = ', '.join(transformed_arglist) - main_arg = 'arg%d' % (signature.index(unicode),) + try: + main_arg = 'arg%d' % (signature.index(unicode0),) + except ValueError: + main_arg = 'arg%d' % (signature.index(unicode),) source = py.code.Source(""" def %(func_name)s(%(args)s): @@ -823,7 +826,7 @@ def os_open_oofakeimpl(path, flags, mode): return os.open(OOSupport.from_rstr(path), flags, mode) - return extdef([str0, int, int], int, traits.ll_os_name('open'), + return extdef([traits.str0, int, int], int, traits.ll_os_name('open'), llimpl=os_open_llimpl, oofakeimpl=os_open_oofakeimpl) @registering_if(os, 'getloadavg') From noreply at buildbot.pypy.org Sun Feb 5 21:57:53 2012 From: noreply at buildbot.pypy.org (mattip) Date: Sun, 5 Feb 2012 21:57:53 +0100 (CET) Subject: [pypy-commit] pypy win32-cleanup: merge with default Message-ID: <20120205205753.A53FF820D0@wyvern.cs.uni-duesseldorf.de> Author: mattip Branch: win32-cleanup Changeset: r52114:d11247cc0e08 Date: 2012-02-05 22:56 +0200 http://bitbucket.org/pypy/pypy/changeset/d11247cc0e08/ Log: merge with default diff --git a/pypy/annotation/annrpython.py b/pypy/annotation/annrpython.py --- a/pypy/annotation/annrpython.py +++ b/pypy/annotation/annrpython.py @@ -93,6 +93,10 @@ # make input arguments and set their type args_s = [self.typeannotation(t) for t in input_arg_types] + # XXX hack + annmodel.TLS.check_str_without_nul = ( + self.translator.config.translation.check_str_without_nul) + flowgraph, inputcells = self.get_call_parameters(function, args_s, policy) if not isinstance(flowgraph, FunctionGraph): assert isinstance(flowgraph, annmodel.SomeObject) diff --git a/pypy/annotation/binaryop.py b/pypy/annotation/binaryop.py --- a/pypy/annotation/binaryop.py +++ b/pypy/annotation/binaryop.py @@ -434,11 +434,13 @@ class __extend__(pairtype(SomeString, SomeString)): def union((str1, str2)): - return SomeString(can_be_None=str1.can_be_None or str2.can_be_None) + can_be_None = str1.can_be_None or str2.can_be_None + no_nul = str1.no_nul and str2.no_nul + return SomeString(can_be_None=can_be_None, no_nul=no_nul) def add((str1, str2)): # propagate const-ness to help getattr(obj, 'prefix' + const_name) - result = SomeString() + result = SomeString(no_nul=str1.no_nul and str2.no_nul) if str1.is_immutable_constant() and str2.is_immutable_constant(): result.const = str1.const + str2.const return result @@ -475,7 +477,16 @@ raise NotImplementedError( "string formatting mixing strings and unicode not supported") getbookkeeper().count('strformat', str, s_tuple) - return SomeString() + no_nul = str.no_nul + for s_item in s_tuple.items: + if isinstance(s_item, SomeFloat): + pass # or s_item is a subclass, like SomeInteger + elif isinstance(s_item, SomeString) and s_item.no_nul: + pass + else: + no_nul = False + break + return SomeString(no_nul=no_nul) class __extend__(pairtype(SomeString, SomeObject)): @@ -828,7 +839,7 @@ exec source.compile() in glob _make_none_union('SomeInstance', 'classdef=obj.classdef, can_be_None=True') -_make_none_union('SomeString', 'can_be_None=True') +_make_none_union('SomeString', 'no_nul=obj.no_nul, can_be_None=True') _make_none_union('SomeUnicodeString', 'can_be_None=True') _make_none_union('SomeList', 'obj.listdef') _make_none_union('SomeDict', 'obj.dictdef') diff --git a/pypy/annotation/bookkeeper.py b/pypy/annotation/bookkeeper.py 
--- a/pypy/annotation/bookkeeper.py +++ b/pypy/annotation/bookkeeper.py @@ -342,10 +342,11 @@ else: raise Exception("seeing a prebuilt long (value %s)" % hex(x)) elif issubclass(tp, str): # py.lib uses annotated str subclasses + no_nul = not '\x00' in x if len(x) == 1: - result = SomeChar() + result = SomeChar(no_nul=no_nul) else: - result = SomeString() + result = SomeString(no_nul=no_nul) elif tp is unicode: if len(x) == 1: result = SomeUnicodeCodePoint() diff --git a/pypy/annotation/listdef.py b/pypy/annotation/listdef.py --- a/pypy/annotation/listdef.py +++ b/pypy/annotation/listdef.py @@ -86,18 +86,19 @@ read_locations = self.read_locations.copy() other_read_locations = other.read_locations.copy() self.read_locations.update(other.read_locations) - self.patch() # which should patch all refs to 'other' s_value = self.s_value s_other_value = other.s_value s_new_value = unionof(s_value, s_other_value) + if s_new_value != s_value: + if self.dont_change_any_more: + raise TooLateForChange if isdegenerated(s_new_value): if self.bookkeeper: self.bookkeeper.ondegenerated(self, s_new_value) elif other.bookkeeper: other.bookkeeper.ondegenerated(other, s_new_value) + self.patch() # which should patch all refs to 'other' if s_new_value != s_value: - if self.dont_change_any_more: - raise TooLateForChange self.s_value = s_new_value # reflow from reading points for position_key in read_locations: @@ -222,4 +223,5 @@ MOST_GENERAL_LISTDEF = ListDef(None, SomeObject()) -s_list_of_strings = SomeList(ListDef(None, SomeString(), resized = True)) +s_list_of_strings = SomeList(ListDef(None, SomeString(no_nul=True), + resized = True)) diff --git a/pypy/annotation/model.py b/pypy/annotation/model.py --- a/pypy/annotation/model.py +++ b/pypy/annotation/model.py @@ -39,7 +39,9 @@ DEBUG = False # set to False to disable recording of debugging information class State(object): - pass + # A global attribute :-( Patch it with 'True' to enable checking of + # the no_nul attribute... + check_str_without_nul = False TLS = State() class SomeObject(object): @@ -225,43 +227,57 @@ def __init__(self): pass -class SomeString(SomeObject): - "Stands for an object which is known to be a string." - knowntype = str +class SomeStringOrUnicode(SomeObject): immutable = True - def __init__(self, can_be_None=False): - self.can_be_None = can_be_None + can_be_None=False + no_nul = False # No NUL character in the string. + + def __init__(self, can_be_None=False, no_nul=False): + if can_be_None: + self.can_be_None = True + if no_nul: + self.no_nul = True def can_be_none(self): return self.can_be_None + def __eq__(self, other): + if self.__class__ is not other.__class__: + return False + d1 = self.__dict__ + d2 = other.__dict__ + if not TLS.check_str_without_nul: + d1 = d1.copy(); d1['no_nul'] = 0 # ignored + d2 = d2.copy(); d2['no_nul'] = 0 # ignored + return d1 == d2 + +class SomeString(SomeStringOrUnicode): + "Stands for an object which is known to be a string." 
+ knowntype = str + def nonnoneify(self): - return SomeString(can_be_None=False) + return SomeString(can_be_None=False, no_nul=self.no_nul) -class SomeUnicodeString(SomeObject): +class SomeUnicodeString(SomeStringOrUnicode): "Stands for an object which is known to be an unicode string" knowntype = unicode - immutable = True - def __init__(self, can_be_None=False): - self.can_be_None = can_be_None - - def can_be_none(self): - return self.can_be_None def nonnoneify(self): - return SomeUnicodeString(can_be_None=False) + return SomeUnicodeString(can_be_None=False, no_nul=self.no_nul) class SomeChar(SomeString): "Stands for an object known to be a string of length 1." can_be_None = False - def __init__(self): # no 'can_be_None' argument here - pass + def __init__(self, no_nul=False): # no 'can_be_None' argument here + if no_nul: + self.no_nul = True class SomeUnicodeCodePoint(SomeUnicodeString): "Stands for an object known to be a unicode codepoint." can_be_None = False - def __init__(self): # no 'can_be_None' argument here - pass + def __init__(self, no_nul=False): # no 'can_be_None' argument here + if no_nul: + self.no_nul = True SomeString.basestringclass = SomeString SomeString.basecharclass = SomeChar @@ -502,6 +518,7 @@ s_None = SomePBC([], can_be_None=True) s_Bool = SomeBool() s_ImpossibleValue = SomeImpossibleValue() +s_Str0 = SomeString(no_nul=True) # ____________________________________________________________ # weakrefs @@ -716,8 +733,7 @@ def not_const(s_obj): if s_obj.is_constant(): - new_s_obj = SomeObject() - new_s_obj.__class__ = s_obj.__class__ + new_s_obj = SomeObject.__new__(s_obj.__class__) dic = new_s_obj.__dict__ = s_obj.__dict__.copy() if 'const' in dic: del new_s_obj.const diff --git a/pypy/annotation/test/test_annrpython.py b/pypy/annotation/test/test_annrpython.py --- a/pypy/annotation/test/test_annrpython.py +++ b/pypy/annotation/test/test_annrpython.py @@ -456,6 +456,20 @@ return ''.join(g(n)) s = a.build_types(f, [int]) assert s.knowntype == str + assert s.no_nul + + def test_str_split(self): + a = self.RPythonAnnotator() + def g(n): + if n: + return "test string" + def f(n): + if n: + return g(n).split(' ') + s = a.build_types(f, [int]) + assert isinstance(s, annmodel.SomeList) + s_item = s.listdef.listitem.s_value + assert s_item.no_nul def test_str_splitlines(self): a = self.RPythonAnnotator() @@ -465,6 +479,18 @@ assert isinstance(s, annmodel.SomeList) assert s.listdef.listitem.resized + def test_str_strip(self): + a = self.RPythonAnnotator() + def f(n, a_str): + if n == 0: + return a_str.strip(' ') + elif n == 1: + return a_str.rstrip(' ') + else: + return a_str.lstrip(' ') + s = a.build_types(f, [int, annmodel.SomeString(no_nul=True)]) + assert s.no_nul + def test_str_mul(self): a = self.RPythonAnnotator() def f(a_str): @@ -1841,7 +1867,7 @@ return obj.indirect() a = self.RPythonAnnotator() s = a.build_types(f, [bool]) - assert s == annmodel.SomeString(can_be_None=True) + assert annmodel.SomeString(can_be_None=True).contains(s) def test_dont_see_AttributeError_clause(self): class Stuff: @@ -2018,6 +2044,37 @@ s = a.build_types(g, [int]) assert not s.can_be_None + def test_string_noNUL_canbeNone(self): + def f(a): + if a: + return "abc" + else: + return None + a = self.RPythonAnnotator() + s = a.build_types(f, [int]) + assert s.can_be_None + assert s.no_nul + + def test_str_or_None(self): + def f(a): + if a: + return "abc" + else: + return None + def g(a): + x = f(a) + #assert x is not None + if x is None: + return "abcd" + return x + if isinstance(x, str): + 
return x + return "impossible" + a = self.RPythonAnnotator() + s = a.build_types(f, [int]) + assert s.can_be_None + assert s.no_nul + def test_emulated_pbc_call_simple(self): def f(a,b): return a + b @@ -2071,6 +2128,19 @@ assert isinstance(s, annmodel.SomeIterator) assert s.variant == ('items',) + def test_iteritems_str0(self): + def it(d): + return d.iteritems() + def f(): + d0 = {'1a': '2a', '3': '4'} + for item in it(d0): + return "%s=%s" % item + raise ValueError + a = self.RPythonAnnotator() + s = a.build_types(f, []) + assert isinstance(s, annmodel.SomeString) + assert s.no_nul + def test_non_none_and_none_with_isinstance(self): class A(object): pass diff --git a/pypy/annotation/unaryop.py b/pypy/annotation/unaryop.py --- a/pypy/annotation/unaryop.py +++ b/pypy/annotation/unaryop.py @@ -480,13 +480,13 @@ return SomeInteger(nonneg=True) def method_strip(str, chr): - return str.basestringclass() + return str.basestringclass(no_nul=str.no_nul) def method_lstrip(str, chr): - return str.basestringclass() + return str.basestringclass(no_nul=str.no_nul) def method_rstrip(str, chr): - return str.basestringclass() + return str.basestringclass(no_nul=str.no_nul) def method_join(str, s_list): if s_None.contains(s_list): @@ -497,7 +497,8 @@ if isinstance(str, SomeUnicodeString): return immutablevalue(u"") return immutablevalue("") - return str.basestringclass() + no_nul = str.no_nul and s_item.no_nul + return str.basestringclass(no_nul=no_nul) def iter(str): return SomeIterator(str) @@ -508,18 +509,21 @@ def method_split(str, patt, max=-1): getbookkeeper().count("str_split", str, patt) - return getbookkeeper().newlist(str.basestringclass()) + s_item = str.basestringclass(no_nul=str.no_nul) + return getbookkeeper().newlist(s_item) def method_rsplit(str, patt, max=-1): getbookkeeper().count("str_rsplit", str, patt) - return getbookkeeper().newlist(str.basestringclass()) + s_item = str.basestringclass(no_nul=str.no_nul) + return getbookkeeper().newlist(s_item) def method_replace(str, s1, s2): return str.basestringclass() def getslice(str, s_start, s_stop): check_negative_slice(s_start, s_stop) - return str.basestringclass() + result = str.basestringclass(no_nul=str.no_nul) + return result class __extend__(SomeUnicodeString): def method_encode(uni, s_enc): diff --git a/pypy/config/translationoption.py b/pypy/config/translationoption.py --- a/pypy/config/translationoption.py +++ b/pypy/config/translationoption.py @@ -123,6 +123,9 @@ default="off"), # jit_ffi is automatically turned on by withmod-_ffi (which is enabled by default) BoolOption("jit_ffi", "optimize libffi calls", default=False, cmdline=None), + BoolOption("check_str_without_nul", + "Forbid NUL chars in strings in some external function calls", + default=False, cmdline=None), # misc BoolOption("verbose", "Print extra information", default=False), diff --git a/pypy/interpreter/baseobjspace.py b/pypy/interpreter/baseobjspace.py --- a/pypy/interpreter/baseobjspace.py +++ b/pypy/interpreter/baseobjspace.py @@ -1312,6 +1312,15 @@ def str_w(self, w_obj): return w_obj.str_w(self) + def str0_w(self, w_obj): + "Like str_w, but rejects strings with NUL bytes." 
+ from pypy.rlib import rstring + result = w_obj.str_w(self) + if '\x00' in result: + raise OperationError(self.w_TypeError, self.wrap( + 'argument must be a string without NUL characters')) + return rstring.assert_str0(result) + def int_w(self, w_obj): return w_obj.int_w(self) diff --git a/pypy/interpreter/gateway.py b/pypy/interpreter/gateway.py --- a/pypy/interpreter/gateway.py +++ b/pypy/interpreter/gateway.py @@ -130,6 +130,9 @@ def visit_str_or_None(self, el, app_sig): self.checked_space_method(el, app_sig) + def visit_str0(self, el, app_sig): + self.checked_space_method(el, app_sig) + def visit_nonnegint(self, el, app_sig): self.checked_space_method(el, app_sig) @@ -249,6 +252,9 @@ def visit_str_or_None(self, typ): self.run_args.append("space.str_or_None_w(%s)" % (self.scopenext(),)) + def visit_str0(self, typ): + self.run_args.append("space.str0_w(%s)" % (self.scopenext(),)) + def visit_nonnegint(self, typ): self.run_args.append("space.gateway_nonnegint_w(%s)" % ( self.scopenext(),)) @@ -383,6 +389,9 @@ def visit_str_or_None(self, typ): self.unwrap.append("space.str_or_None_w(%s)" % (self.nextarg(),)) + def visit_str0(self, typ): + self.unwrap.append("space.str0_w(%s)" % (self.nextarg(),)) + def visit_nonnegint(self, typ): self.unwrap.append("space.gateway_nonnegint_w(%s)" % (self.nextarg(),)) diff --git a/pypy/interpreter/mixedmodule.py b/pypy/interpreter/mixedmodule.py --- a/pypy/interpreter/mixedmodule.py +++ b/pypy/interpreter/mixedmodule.py @@ -50,7 +50,7 @@ space.call_method(self.w_dict, 'update', self.w_initialdict) for w_submodule in self.submodules_w: - name = space.str_w(w_submodule.w_name) + name = space.str0_w(w_submodule.w_name) space.setitem(self.w_dict, space.wrap(name.split(".")[-1]), w_submodule) space.getbuiltinmodule(name) diff --git a/pypy/interpreter/module.py b/pypy/interpreter/module.py --- a/pypy/interpreter/module.py +++ b/pypy/interpreter/module.py @@ -31,7 +31,8 @@ def install(self): """NOT_RPYTHON: installs this module into space.builtin_modules""" w_mod = self.space.wrap(self) - self.space.builtin_modules[self.space.unwrap(self.w_name)] = w_mod + modulename = self.space.str0_w(self.w_name) + self.space.builtin_modules[modulename] = w_mod def setup_after_space_initialization(self): """NOT_RPYTHON: to allow built-in modules to do some more setup diff --git a/pypy/jit/backend/llgraph/llimpl.py b/pypy/jit/backend/llgraph/llimpl.py --- a/pypy/jit/backend/llgraph/llimpl.py +++ b/pypy/jit/backend/llgraph/llimpl.py @@ -780,6 +780,9 @@ self.overflow_flag = ovf return z + def op_keepalive(self, _, x): + pass + # ---------- # delegating to the builtins do_xxx() (done automatically for simple cases) diff --git a/pypy/jit/backend/x86/regalloc.py b/pypy/jit/backend/x86/regalloc.py --- a/pypy/jit/backend/x86/regalloc.py +++ b/pypy/jit/backend/x86/regalloc.py @@ -1463,6 +1463,9 @@ if jump_op is not None and jump_op.getdescr() is descr: self._compute_hint_frame_locations_from_descr(descr) + def consider_keepalive(self, op): + pass + def not_implemented_op(self, op): not_implemented("not implemented operation: %s" % op.getopname()) diff --git a/pypy/jit/metainterp/executor.py b/pypy/jit/metainterp/executor.py --- a/pypy/jit/metainterp/executor.py +++ b/pypy/jit/metainterp/executor.py @@ -254,6 +254,9 @@ assert isinstance(x, r_longlong) # 32-bit return BoxFloat(x) +def do_keepalive(cpu, _, x): + pass + # ____________________________________________________________ ##def do_force_token(cpu): diff --git a/pypy/jit/metainterp/pyjitpl.py b/pypy/jit/metainterp/pyjitpl.py --- 
a/pypy/jit/metainterp/pyjitpl.py +++ b/pypy/jit/metainterp/pyjitpl.py @@ -974,13 +974,13 @@ any_operation = len(self.metainterp.history.operations) > 0 jitdriver_sd = self.metainterp.staticdata.jitdrivers_sd[jdindex] self.verify_green_args(jitdriver_sd, greenboxes) - self.debug_merge_point(jitdriver_sd, jdindex, self.metainterp.in_recursion, + self.debug_merge_point(jitdriver_sd, jdindex, self.metainterp.portal_call_depth, greenboxes) if self.metainterp.seen_loop_header_for_jdindex < 0: if not any_operation: return - if self.metainterp.in_recursion or not self.metainterp.get_procedure_token(greenboxes, True): + if self.metainterp.portal_call_depth or not self.metainterp.get_procedure_token(greenboxes, True): if not jitdriver_sd.no_loop_header: return # automatically add a loop_header if there is none @@ -992,7 +992,7 @@ self.metainterp.seen_loop_header_for_jdindex = -1 # - if not self.metainterp.in_recursion: + if not self.metainterp.portal_call_depth: assert jitdriver_sd is self.metainterp.jitdriver_sd # Set self.pc to point to jit_merge_point instead of just after: # if reached_loop_header() raises SwitchToBlackhole, then the @@ -1028,11 +1028,11 @@ assembler_call=True) raise ChangeFrame - def debug_merge_point(self, jitdriver_sd, jd_index, in_recursion, greenkey): + def debug_merge_point(self, jitdriver_sd, jd_index, portal_call_depth, greenkey): # debugging: produce a DEBUG_MERGE_POINT operation loc = jitdriver_sd.warmstate.get_location_str(greenkey) debug_print(loc) - args = [ConstInt(jd_index), ConstInt(in_recursion)] + greenkey + args = [ConstInt(jd_index), ConstInt(portal_call_depth)] + greenkey self.metainterp.history.record(rop.DEBUG_MERGE_POINT, args, None) @arguments("box", "label") @@ -1346,12 +1346,16 @@ resbox = self.metainterp.execute_and_record_varargs( rop.CALL_MAY_FORCE, allboxes, descr=descr) self.metainterp.vrefs_after_residual_call() + vablebox = None if assembler_call: - self.metainterp.direct_assembler_call(assembler_call_jd) + vablebox = self.metainterp.direct_assembler_call( + assembler_call_jd) if resbox is not None: self.make_result_of_lastop(resbox) self.metainterp.vable_after_residual_call() self.generate_guard(rop.GUARD_NOT_FORCED, None) + if vablebox is not None: + self.metainterp.history.record(rop.KEEPALIVE, [vablebox], None) self.metainterp.handle_possible_exception() return resbox else: @@ -1552,7 +1556,7 @@ # ____________________________________________________________ class MetaInterp(object): - in_recursion = 0 + portal_call_depth = 0 cancel_count = 0 def __init__(self, staticdata, jitdriver_sd): @@ -1587,7 +1591,7 @@ def newframe(self, jitcode, greenkey=None): if jitcode.is_portal: - self.in_recursion += 1 + self.portal_call_depth += 1 if greenkey is not None and self.is_main_jitcode(jitcode): self.portal_trace_positions.append( (greenkey, len(self.history.operations))) @@ -1603,7 +1607,7 @@ frame = self.framestack.pop() jitcode = frame.jitcode if jitcode.is_portal: - self.in_recursion -= 1 + self.portal_call_depth -= 1 if frame.greenkey is not None and self.is_main_jitcode(jitcode): self.portal_trace_positions.append( (None, len(self.history.operations))) @@ -1662,17 +1666,17 @@ raise self.staticdata.ExitFrameWithExceptionRef(self.cpu, excvaluebox.getref_base()) def check_recursion_invariant(self): - in_recursion = -1 + portal_call_depth = -1 for frame in self.framestack: jitcode = frame.jitcode assert jitcode.is_portal == len([ jd for jd in self.staticdata.jitdrivers_sd if jd.mainjitcode is jitcode]) if jitcode.is_portal: - in_recursion += 1 - if 
in_recursion != self.in_recursion: - print "in_recursion problem!!!" - print in_recursion, self.in_recursion + portal_call_depth += 1 + if portal_call_depth != self.portal_call_depth: + print "portal_call_depth problem!!!" + print portal_call_depth, self.portal_call_depth for frame in self.framestack: jitcode = frame.jitcode if jitcode.is_portal: @@ -2183,11 +2187,11 @@ def initialize_state_from_start(self, original_boxes): # ----- make a new frame ----- - self.in_recursion = -1 # always one portal around + self.portal_call_depth = -1 # always one portal around self.framestack = [] f = self.newframe(self.jitdriver_sd.mainjitcode) f.setup_call(original_boxes) - assert self.in_recursion == 0 + assert self.portal_call_depth == 0 self.virtualref_boxes = [] self.initialize_withgreenfields(original_boxes) self.initialize_virtualizable(original_boxes) @@ -2198,7 +2202,7 @@ # otherwise the jit_virtual_refs are left in a dangling state. rstack._stack_criticalcode_start() try: - self.in_recursion = -1 # always one portal around + self.portal_call_depth = -1 # always one portal around self.history = history.History() inputargs_and_holes = self.rebuild_state_after_failure(resumedescr) self.history.inputargs = [box for box in inputargs_and_holes if box] @@ -2478,6 +2482,15 @@ token = warmrunnerstate.get_assembler_token(greenargs) op = op.copy_and_change(rop.CALL_ASSEMBLER, args=args, descr=token) self.history.operations.append(op) + # + # To fix an obscure issue, make sure the vable stays alive + # longer than the CALL_ASSEMBLER operation. We do it by + # inserting explicitly an extra KEEPALIVE operation. + jd = token.outermost_jitdriver_sd + if jd.index_of_virtualizable >= 0: + return args[jd.index_of_virtualizable] + else: + return None # ____________________________________________________________ diff --git a/pypy/jit/metainterp/resoperation.py b/pypy/jit/metainterp/resoperation.py --- a/pypy/jit/metainterp/resoperation.py +++ b/pypy/jit/metainterp/resoperation.py @@ -503,6 +503,7 @@ 'COPYUNICODECONTENT/5', 'QUASIIMMUT_FIELD/1d', # [objptr], descr=SlowMutateDescr 'RECORD_KNOWN_CLASS/2', # [objptr, clsptr] + 'KEEPALIVE/1', '_CANRAISE_FIRST', # ----- start of can_raise operations ----- '_CALL_FIRST', diff --git a/pypy/jit/metainterp/test/test_ajit.py b/pypy/jit/metainterp/test/test_ajit.py --- a/pypy/jit/metainterp/test/test_ajit.py +++ b/pypy/jit/metainterp/test/test_ajit.py @@ -322,6 +322,17 @@ res = self.interp_operations(f, [42]) assert res == ord(u"?") + def test_char_in_constant_string(self): + def g(string): + return '\x00' in string + def f(): + if g('abcdef'): return -60 + if not g('abc\x00ef'): return -61 + return 42 + res = self.interp_operations(f, []) + assert res == 42 + self.check_operations_history({'finish': 1}) # nothing else + def test_residual_call(self): @dont_look_inside def externfn(x, y): diff --git a/pypy/module/bz2/interp_bz2.py b/pypy/module/bz2/interp_bz2.py --- a/pypy/module/bz2/interp_bz2.py +++ b/pypy/module/bz2/interp_bz2.py @@ -328,7 +328,7 @@ if basemode == "a": raise OperationError(space.w_ValueError, space.wrap("cannot append to bz2 file")) - stream = open_path_helper(space.str_w(w_path), os_flags, False) + stream = open_path_helper(space.str0_w(w_path), os_flags, False) if reading: bz2stream = ReadBZ2Filter(space, stream, buffering) buffering = 0 # by construction, the ReadBZ2Filter acts like diff --git a/pypy/module/cpyext/include/pythonrun.h b/pypy/module/cpyext/include/pythonrun.h --- a/pypy/module/cpyext/include/pythonrun.h +++ 
b/pypy/module/cpyext/include/pythonrun.h @@ -13,6 +13,7 @@ #define Py_FrozenFlag 0 #define Py_VerboseFlag 0 +#define Py_DebugFlag 1 typedef struct { int cf_flags; /* bitmask of CO_xxx flags relevant to future */ diff --git a/pypy/module/gc/interp_gc.py b/pypy/module/gc/interp_gc.py --- a/pypy/module/gc/interp_gc.py +++ b/pypy/module/gc/interp_gc.py @@ -49,7 +49,7 @@ # ____________________________________________________________ - at unwrap_spec(filename=str) + at unwrap_spec(filename='str0') def dump_heap_stats(space, filename): tb = rgc._heap_stats() if not tb: diff --git a/pypy/module/imp/importing.py b/pypy/module/imp/importing.py --- a/pypy/module/imp/importing.py +++ b/pypy/module/imp/importing.py @@ -138,7 +138,7 @@ ctxt_package = None if ctxt_w_package is not None and ctxt_w_package is not space.w_None: try: - ctxt_package = space.str_w(ctxt_w_package) + ctxt_package = space.str0_w(ctxt_w_package) except OperationError, e: if not e.match(space, space.w_TypeError): raise @@ -187,7 +187,7 @@ ctxt_name = None if ctxt_w_name is not None: try: - ctxt_name = space.str_w(ctxt_w_name) + ctxt_name = space.str0_w(ctxt_w_name) except OperationError, e: if not e.match(space, space.w_TypeError): raise @@ -230,7 +230,7 @@ return rel_modulename, rel_level - at unwrap_spec(name=str, level=int) + at unwrap_spec(name='str0', level=int) def importhook(space, name, w_globals=None, w_locals=None, w_fromlist=None, level=-1): modulename = name @@ -377,8 +377,8 @@ fromlist_w = space.fixedview(w_all) for w_name in fromlist_w: if try_getattr(space, w_mod, w_name) is None: - load_part(space, w_path, prefix, space.str_w(w_name), w_mod, - tentative=1) + load_part(space, w_path, prefix, space.str0_w(w_name), + w_mod, tentative=1) return w_mod else: return first @@ -432,7 +432,7 @@ def __init__(self, space): pass - @unwrap_spec(path=str) + @unwrap_spec(path='str0') def descr_init(self, space, path): if not path: raise OperationError(space.w_ImportError, space.wrap( @@ -513,7 +513,7 @@ if w_loader: return FindInfo.fromLoader(w_loader) - path = space.str_w(w_pathitem) + path = space.str0_w(w_pathitem) filepart = os.path.join(path, partname) if os.path.isdir(filepart) and case_ok(filepart): initfile = os.path.join(filepart, '__init__') @@ -671,7 +671,7 @@ space.wrap("reload() argument must be module")) w_modulename = space.getattr(w_module, space.wrap("__name__")) - modulename = space.str_w(w_modulename) + modulename = space.str0_w(w_modulename) if not space.is_w(check_sys_modules(space, w_modulename), w_module): raise operationerrfmt( space.w_ImportError, diff --git a/pypy/module/imp/interp_imp.py b/pypy/module/imp/interp_imp.py --- a/pypy/module/imp/interp_imp.py +++ b/pypy/module/imp/interp_imp.py @@ -44,7 +44,7 @@ return space.interp_w(W_File, w_file).stream def find_module(space, w_name, w_path=None): - name = space.str_w(w_name) + name = space.str0_w(w_name) if space.is_w(w_path, space.w_None): w_path = None @@ -75,7 +75,7 @@ def load_module(space, w_name, w_file, w_filename, w_info): w_suffix, w_filemode, w_modtype = space.unpackiterable(w_info) - filename = space.str_w(w_filename) + filename = space.str0_w(w_filename) filemode = space.str_w(w_filemode) if space.is_w(w_file, space.w_None): stream = None @@ -92,7 +92,7 @@ space, w_name, find_info, reuse=True) def load_source(space, w_modulename, w_filename, w_file=None): - filename = space.str_w(w_filename) + filename = space.str0_w(w_filename) stream = get_file(space, w_file, filename, 'U') @@ -105,7 +105,7 @@ stream.close() return w_mod - at 
unwrap_spec(filename=str) + at unwrap_spec(filename='str0') def _run_compiled_module(space, w_modulename, filename, w_file, w_module): # the function 'imp._run_compiled_module' is a pypy-only extension stream = get_file(space, w_file, filename, 'rb') @@ -119,7 +119,7 @@ if space.is_w(w_file, space.w_None): stream.close() - at unwrap_spec(filename=str) + at unwrap_spec(filename='str0') def load_compiled(space, w_modulename, filename, w_file=None): w_mod = space.wrap(Module(space, w_modulename)) importing._prepare_module(space, w_mod, filename, None) @@ -138,7 +138,7 @@ return space.wrap(Module(space, w_name, add_package=False)) def init_builtin(space, w_name): - name = space.str_w(w_name) + name = space.str0_w(w_name) if name not in space.builtin_modules: return if space.finditem(space.sys.get('modules'), w_name) is not None: @@ -151,7 +151,7 @@ return None def is_builtin(space, w_name): - name = space.str_w(w_name) + name = space.str0_w(w_name) if name not in space.builtin_modules: return space.wrap(0) if space.finditem(space.sys.get('modules'), w_name) is not None: diff --git a/pypy/module/micronumpy/__init__.py b/pypy/module/micronumpy/__init__.py --- a/pypy/module/micronumpy/__init__.py +++ b/pypy/module/micronumpy/__init__.py @@ -98,6 +98,10 @@ ('bitwise_not', 'invert'), ('isnan', 'isnan'), ('isinf', 'isinf'), + ('logical_and', 'logical_and'), + ('logical_xor', 'logical_xor'), + ('logical_not', 'logical_not'), + ('logical_or', 'logical_or'), ]: interpleveldefs[exposed] = "interp_ufuncs.get(space).%s" % impl diff --git a/pypy/module/micronumpy/interp_iter.py b/pypy/module/micronumpy/interp_iter.py --- a/pypy/module/micronumpy/interp_iter.py +++ b/pypy/module/micronumpy/interp_iter.py @@ -86,8 +86,9 @@ def apply_transformations(self, arr, transformations): v = self - for transform in transformations: - v = v.transform(arr, transform) + if transformations is not None: + for transform in transformations: + v = v.transform(arr, transform) return v def transform(self, arr, t): diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -3,7 +3,7 @@ from pypy.interpreter.gateway import interp2app, unwrap_spec from pypy.interpreter.typedef import TypeDef, GetSetProperty from pypy.module.micronumpy import (interp_ufuncs, interp_dtype, interp_boxes, - signature, support) + signature, support, loop) from pypy.module.micronumpy.strides import (calculate_slice_strides, shape_agreement, find_shape_and_elems, get_shape_from_iterable, calc_new_strides, to_coords) @@ -12,39 +12,11 @@ from pypy.rpython.lltypesystem import lltype, rffi from pypy.tool.sourcetools import func_with_new_name from pypy.rlib.rstring import StringBuilder -from pypy.module.micronumpy.interp_iter import (ArrayIterator, OneDimIterator, +from pypy.module.micronumpy.interp_iter import (ArrayIterator, SkipLastAxisIterator, Chunk, ViewIterator) from pypy.module.micronumpy.appbridge import get_appbridge_cache -numpy_driver = jit.JitDriver( - greens=['shapelen', 'sig'], - virtualizables=['frame'], - reds=['result_size', 'frame', 'ri', 'self', 'result'], - get_printable_location=signature.new_printable_location('numpy'), - name='numpy', -) -all_driver = jit.JitDriver( - greens=['shapelen', 'sig'], - virtualizables=['frame'], - reds=['frame', 'self', 'dtype'], - get_printable_location=signature.new_printable_location('all'), - name='numpy_all', -) -any_driver = jit.JitDriver( - greens=['shapelen', 
'sig'], - virtualizables=['frame'], - reds=['frame', 'self', 'dtype'], - get_printable_location=signature.new_printable_location('any'), - name='numpy_any', -) -slice_driver = jit.JitDriver( - greens=['shapelen', 'sig'], - virtualizables=['frame'], - reds=['self', 'frame', 'arr'], - get_printable_location=signature.new_printable_location('slice'), - name='numpy_slice', -) count_driver = jit.JitDriver( greens=['shapelen'], virtualizables=['frame'], @@ -173,6 +145,8 @@ descr_prod = _reduce_ufunc_impl("multiply", True) descr_max = _reduce_ufunc_impl("maximum") descr_min = _reduce_ufunc_impl("minimum") + descr_all = _reduce_ufunc_impl('logical_and') + descr_any = _reduce_ufunc_impl('logical_or') def _reduce_argmax_argmin_impl(op_name): reduce_driver = jit.JitDriver( @@ -212,40 +186,6 @@ return space.wrap(loop(self)) return func_with_new_name(impl, "reduce_arg%s_impl" % op_name) - def _all(self): - dtype = self.find_dtype() - sig = self.find_sig() - frame = sig.create_frame(self) - shapelen = len(self.shape) - while not frame.done(): - all_driver.jit_merge_point(sig=sig, - shapelen=shapelen, self=self, - dtype=dtype, frame=frame) - if not dtype.itemtype.bool(sig.eval(frame, self)): - return False - frame.next(shapelen) - return True - - def descr_all(self, space): - return space.wrap(self._all()) - - def _any(self): - dtype = self.find_dtype() - sig = self.find_sig() - frame = sig.create_frame(self) - shapelen = len(self.shape) - while not frame.done(): - any_driver.jit_merge_point(sig=sig, frame=frame, - shapelen=shapelen, self=self, - dtype=dtype) - if dtype.itemtype.bool(sig.eval(frame, self)): - return True - frame.next(shapelen) - return False - - def descr_any(self, space): - return space.wrap(self._any()) - descr_argmax = _reduce_argmax_argmin_impl("max") descr_argmin = _reduce_argmax_argmin_impl("min") @@ -267,7 +207,7 @@ out_size = support.product(out_shape) result = W_NDimArray(out_size, out_shape, dtype) # This is the place to add fpypy and blas - return multidim_dot(space, self.get_concrete(), + return multidim_dot(space, self.get_concrete(), other.get_concrete(), result, dtype, other_critical_dim) @@ -280,6 +220,12 @@ def descr_get_ndim(self, space): return space.wrap(len(self.shape)) + def descr_get_itemsize(self, space): + return space.wrap(self.find_dtype().itemtype.get_element_size()) + + def descr_get_nbytes(self, space): + return space.wrap(self.size * self.find_dtype().itemtype.get_element_size()) + @jit.unroll_safe def descr_get_shape(self, space): return space.newtuple([space.wrap(i) for i in self.shape]) @@ -507,7 +453,7 @@ w_shape = space.newtuple(args_w) new_shape = get_shape_from_iterable(space, self.size, w_shape) return self.reshape(space, new_shape) - + def reshape(self, space, new_shape): concrete = self.get_concrete() # Since we got to here, prod(new_shape) == self.size @@ -679,6 +625,9 @@ raise OperationError(space.w_NotImplementedError, space.wrap( "non-int arg not supported")) + def compute_first_step(self, sig, frame): + pass + def convert_to_array(space, w_obj): if isinstance(w_obj, BaseArray): return w_obj @@ -744,22 +693,9 @@ raise NotImplementedError def compute(self): - result = W_NDimArray(self.size, self.shape, self.find_dtype()) - shapelen = len(self.shape) - sig = self.find_sig() - frame = sig.create_frame(self) - ri = ArrayIterator(self.size) - while not ri.done(): - numpy_driver.jit_merge_point(sig=sig, - shapelen=shapelen, - result_size=self.size, - frame=frame, - ri=ri, - self=self, result=result) - result.setitem(ri.offset, sig.eval(frame, self)) 
- frame.next(shapelen) - ri = ri.next(shapelen) - return result + ra = ResultArray(self, self.size, self.shape, self.res_dtype) + loop.compute(ra) + return ra.left def force_if_needed(self): if self.forced_result is None: @@ -817,7 +753,8 @@ def create_sig(self): if self.forced_result is not None: return self.forced_result.create_sig() - return signature.Call1(self.ufunc, self.name, self.values.create_sig()) + return signature.Call1(self.ufunc, self.name, self.calc_dtype, + self.values.create_sig()) class Call2(VirtualArray): """ @@ -858,6 +795,66 @@ return signature.Call2(self.ufunc, self.name, self.calc_dtype, self.left.create_sig(), self.right.create_sig()) +class ResultArray(Call2): + def __init__(self, child, size, shape, dtype, res=None, order='C'): + if res is None: + res = W_NDimArray(size, shape, dtype, order) + Call2.__init__(self, None, 'assign', shape, dtype, dtype, res, child) + + def create_sig(self): + return signature.ResultSignature(self.res_dtype, self.left.create_sig(), + self.right.create_sig()) + +def done_if_true(dtype, val): + return dtype.itemtype.bool(val) + +def done_if_false(dtype, val): + return not dtype.itemtype.bool(val) + +class ReduceArray(Call2): + def __init__(self, func, name, identity, child, dtype): + self.identity = identity + Call2.__init__(self, func, name, [1], dtype, dtype, None, child) + + def compute_first_step(self, sig, frame): + assert isinstance(sig, signature.ReduceSignature) + if self.identity is None: + frame.cur_value = sig.right.eval(frame, self.right).convert_to( + self.calc_dtype) + frame.next(len(self.right.shape)) + else: + frame.cur_value = self.identity.convert_to(self.calc_dtype) + + def create_sig(self): + if self.name == 'logical_and': + done_func = done_if_false + elif self.name == 'logical_or': + done_func = done_if_true + else: + done_func = None + return signature.ReduceSignature(self.ufunc, self.name, self.res_dtype, + signature.ScalarSignature(self.res_dtype), + self.right.create_sig(), done_func) + +class AxisReduce(Call2): + _immutable_fields_ = ['left', 'right'] + + def __init__(self, ufunc, name, identity, shape, dtype, left, right, dim): + Call2.__init__(self, ufunc, name, shape, dtype, dtype, + left, right) + self.dim = dim + self.identity = identity + + def compute_first_step(self, sig, frame): + if self.identity is not None: + frame.identity = self.identity.convert_to(self.calc_dtype) + + def create_sig(self): + return signature.AxisReduceSignature(self.ufunc, self.name, + self.res_dtype, + signature.ScalarSignature(self.res_dtype), + self.right.create_sig()) + class SliceArray(Call2): def __init__(self, shape, dtype, left, right, no_broadcast=False): self.no_broadcast = no_broadcast @@ -876,18 +873,6 @@ self.calc_dtype, lsig, rsig) -class AxisReduce(Call2): - """ NOTE: this is only used as a container, you should never - encounter such things in the wild. 
Remove this comment - when we'll make AxisReduce lazy - """ - _immutable_fields_ = ['left', 'right'] - - def __init__(self, ufunc, name, shape, dtype, left, right, dim): - Call2.__init__(self, ufunc, name, shape, dtype, dtype, - left, right) - self.dim = dim - class ConcreteArray(BaseArray): """ An array that have actual storage, whether owned or not """ @@ -973,7 +958,7 @@ self._fast_setslice(space, w_value) else: arr = SliceArray(self.shape, self.dtype, self, w_value) - self._sliceloop(arr) + loop.compute(arr) def _fast_setslice(self, space, w_value): assert isinstance(w_value, ConcreteArray) @@ -997,17 +982,6 @@ source.next() dest.next() - def _sliceloop(self, arr): - sig = arr.find_sig() - frame = sig.create_frame(arr) - shapelen = len(self.shape) - while not frame.done(): - slice_driver.jit_merge_point(sig=sig, frame=frame, self=self, - arr=arr, - shapelen=shapelen) - sig.eval(frame, arr) - frame.next(shapelen) - def copy(self, space): array = W_NDimArray(self.size, self.shape[:], self.dtype, self.order) array.setslice(space, self) @@ -1033,9 +1007,9 @@ parent.order, parent) self.start = start - def create_iter(self): + def create_iter(self, transforms=None): return ViewIterator(self.start, self.strides, self.backstrides, - self.shape) + self.shape).apply_transformations(self, transforms) def setshape(self, space, new_shape): if len(self.shape) < 1: @@ -1084,8 +1058,8 @@ self.shape = new_shape self.calc_strides(new_shape) - def create_iter(self): - return ArrayIterator(self.size) + def create_iter(self, transforms=None): + return ArrayIterator(self.size).apply_transformations(self, transforms) def create_sig(self): return signature.ArraySignature(self.dtype) @@ -1289,11 +1263,13 @@ BaseArray.descr_set_shape), size = GetSetProperty(BaseArray.descr_get_size), ndim = GetSetProperty(BaseArray.descr_get_ndim), - item = interp2app(BaseArray.descr_item), + itemsize = GetSetProperty(BaseArray.descr_get_itemsize), + nbytes = GetSetProperty(BaseArray.descr_get_nbytes), T = GetSetProperty(BaseArray.descr_get_transpose), flat = GetSetProperty(BaseArray.descr_get_flatiter), ravel = interp2app(BaseArray.descr_ravel), + item = interp2app(BaseArray.descr_item), mean = interp2app(BaseArray.descr_mean), sum = interp2app(BaseArray.descr_sum), @@ -1345,12 +1321,15 @@ def descr_iter(self): return self + def descr_len(self, space): + return space.wrap(self.size) + def descr_index(self, space): return space.wrap(self.index) def descr_coords(self, space): - coords, step, lngth = to_coords(space, self.base.shape, - self.base.size, self.base.order, + coords, step, lngth = to_coords(space, self.base.shape, + self.base.size, self.base.order, space.wrap(self.index)) return space.newtuple([space.wrap(c) for c in coords]) @@ -1380,7 +1359,7 @@ step=step, res=res, ri=ri, - ) + ) w_val = base.getitem(basei.offset) res.setitem(ri.offset,w_val) basei = basei.next_skip_x(shapelen, step) @@ -1408,7 +1387,7 @@ arr=arr, ai=ai, lngth=lngth, - ) + ) v = arr.getitem(ai).convert_to(base.dtype) base.setitem(basei.offset, v) # need to repeat input values until all assignments are done @@ -1419,22 +1398,29 @@ def create_sig(self): return signature.FlatSignature(self.base.dtype) + def create_iter(self, transforms=None): + return ViewIterator(self.base.start, self.base.strides, + self.base.backstrides, + self.base.shape).apply_transformations(self.base, + transforms) + def descr_base(self, space): return space.wrap(self.base) W_FlatIterator.typedef = TypeDef( 'flatiter', - #__array__ = #MISSING __iter__ = 
interp2app(W_FlatIterator.descr_iter), + __len__ = interp2app(W_FlatIterator.descr_len), __getitem__ = interp2app(W_FlatIterator.descr_getitem), __setitem__ = interp2app(W_FlatIterator.descr_setitem), + __eq__ = interp2app(BaseArray.descr_eq), __ne__ = interp2app(BaseArray.descr_ne), __lt__ = interp2app(BaseArray.descr_lt), __le__ = interp2app(BaseArray.descr_le), __gt__ = interp2app(BaseArray.descr_gt), __ge__ = interp2app(BaseArray.descr_ge), - #__sizeof__ #MISSING + base = GetSetProperty(W_FlatIterator.descr_base), index = GetSetProperty(W_FlatIterator.descr_index), coords = GetSetProperty(W_FlatIterator.descr_coords), diff --git a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py --- a/pypy/module/micronumpy/interp_ufuncs.py +++ b/pypy/module/micronumpy/interp_ufuncs.py @@ -2,31 +2,10 @@ from pypy.interpreter.error import OperationError, operationerrfmt from pypy.interpreter.gateway import interp2app, unwrap_spec, NoneNotWrapped from pypy.interpreter.typedef import TypeDef, GetSetProperty, interp_attrproperty -from pypy.module.micronumpy import interp_boxes, interp_dtype, support -from pypy.module.micronumpy.signature import (ReduceSignature, find_sig, - new_printable_location, AxisReduceSignature, ScalarSignature) -from pypy.rlib import jit +from pypy.module.micronumpy import interp_boxes, interp_dtype, support, loop from pypy.rlib.rarithmetic import LONG_BIT from pypy.tool.sourcetools import func_with_new_name - -reduce_driver = jit.JitDriver( - greens=['shapelen', "sig"], - virtualizables=["frame"], - reds=["frame", "self", "dtype", "value", "obj"], - get_printable_location=new_printable_location('reduce'), - name='numpy_reduce', -) - -axisreduce_driver = jit.JitDriver( - greens=['shapelen', 'sig'], - virtualizables=['frame'], - reds=['self','arr', 'identity', 'frame'], - name='numpy_axisreduce', - get_printable_location=new_printable_location('axisreduce'), -) - - class W_Ufunc(Wrappable): _attrs_ = ["name", "promote_to_float", "promote_bools", "identity"] _immutable_fields_ = ["promote_to_float", "promote_bools", "name"] @@ -140,7 +119,7 @@ def reduce(self, space, w_obj, multidim, promote_to_largest, dim, keepdims=False): from pypy.module.micronumpy.interp_numarray import convert_to_array, \ - Scalar + Scalar, ReduceArray if self.argcount != 2: raise OperationError(space.w_ValueError, space.wrap("reduce only " "supported for binary functions")) @@ -151,96 +130,37 @@ if isinstance(obj, Scalar): raise OperationError(space.w_TypeError, space.wrap("cannot reduce " "on a scalar")) - size = obj.size - dtype = find_unaryop_result_dtype( - space, obj.find_dtype(), - promote_to_float=self.promote_to_float, - promote_to_largest=promote_to_largest, - promote_bools=True - ) + if self.comparison_func: + dtype = interp_dtype.get_dtype_cache(space).w_booldtype + else: + dtype = find_unaryop_result_dtype( + space, obj.find_dtype(), + promote_to_float=self.promote_to_float, + promote_to_largest=promote_to_largest, + promote_bools=True + ) shapelen = len(obj.shape) if self.identity is None and size == 0: raise operationerrfmt(space.w_ValueError, "zero-size array to " "%s.reduce without identity", self.name) if shapelen > 1 and dim >= 0: - res = self.do_axis_reduce(obj, dtype, dim, keepdims) - return space.wrap(res) - scalarsig = ScalarSignature(dtype) - sig = find_sig(ReduceSignature(self.func, self.name, dtype, - scalarsig, - obj.create_sig()), obj) - frame = sig.create_frame(obj) - if self.identity is None: - value = sig.eval(frame, obj).convert_to(dtype) - 
frame.next(shapelen) - else: - value = self.identity.convert_to(dtype) - return self.reduce_loop(shapelen, sig, frame, value, obj, dtype) + return self.do_axis_reduce(obj, dtype, dim, keepdims) + arr = ReduceArray(self.func, self.name, self.identity, obj, dtype) + return loop.compute(arr) def do_axis_reduce(self, obj, dtype, dim, keepdims): from pypy.module.micronumpy.interp_numarray import AxisReduce,\ W_NDimArray - if keepdims: shape = obj.shape[:dim] + [1] + obj.shape[dim + 1:] else: shape = obj.shape[:dim] + obj.shape[dim + 1:] result = W_NDimArray(support.product(shape), shape, dtype) - rightsig = obj.create_sig() - # note - this is just a wrapper so signature can fetch - # both left and right, nothing more, especially - # this is not a true virtual array, because shapes - # don't quite match - arr = AxisReduce(self.func, self.name, obj.shape, dtype, + arr = AxisReduce(self.func, self.name, self.identity, obj.shape, dtype, result, obj, dim) - scalarsig = ScalarSignature(dtype) - sig = find_sig(AxisReduceSignature(self.func, self.name, dtype, - scalarsig, rightsig), arr) - assert isinstance(sig, AxisReduceSignature) - frame = sig.create_frame(arr) - shapelen = len(obj.shape) - if self.identity is not None: - identity = self.identity.convert_to(dtype) - else: - identity = None - self.reduce_axis_loop(frame, sig, shapelen, arr, identity) - return result - - def reduce_axis_loop(self, frame, sig, shapelen, arr, identity): - # note - we can be advanterous here, depending on the exact field - # layout. For now let's say we iterate the original way and - # simply follow the original iteration order - while not frame.done(): - axisreduce_driver.jit_merge_point(frame=frame, self=self, - sig=sig, - identity=identity, - shapelen=shapelen, arr=arr) - iterator = frame.get_final_iter() - v = sig.eval(frame, arr).convert_to(sig.calc_dtype) - if iterator.first_line: - if identity is not None: - value = self.func(sig.calc_dtype, identity, v) - else: - value = v - else: - cur = arr.left.getitem(iterator.offset) - value = self.func(sig.calc_dtype, cur, v) - arr.left.setitem(iterator.offset, value) - frame.next(shapelen) - - def reduce_loop(self, shapelen, sig, frame, value, obj, dtype): - while not frame.done(): - reduce_driver.jit_merge_point(sig=sig, - shapelen=shapelen, self=self, - value=value, obj=obj, frame=frame, - dtype=dtype) - assert isinstance(sig, ReduceSignature) - value = sig.binfunc(dtype, value, - sig.eval(frame, obj).convert_to(dtype)) - frame.next(shapelen) - return value - + loop.compute(arr) + return arr.left class W_Ufunc1(W_Ufunc): argcount = 1 @@ -312,7 +232,6 @@ w_lhs.value.convert_to(calc_dtype), w_rhs.value.convert_to(calc_dtype) )) - new_shape = shape_agreement(space, w_lhs.shape, w_rhs.shape) w_res = Call2(self.func, self.name, new_shape, calc_dtype, @@ -482,6 +401,13 @@ ("isnan", "isnan", 1, {"bool_result": True}), ("isinf", "isinf", 1, {"bool_result": True}), + ('logical_and', 'logical_and', 2, {'comparison_func': True, + 'identity': 1}), + ('logical_or', 'logical_or', 2, {'comparison_func': True, + 'identity': 0}), + ('logical_xor', 'logical_xor', 2, {'comparison_func': True}), + ('logical_not', 'logical_not', 1, {'bool_result': True}), + ("maximum", "max", 2), ("minimum", "min", 2), diff --git a/pypy/module/micronumpy/loop.py b/pypy/module/micronumpy/loop.py new file mode 100644 --- /dev/null +++ b/pypy/module/micronumpy/loop.py @@ -0,0 +1,83 @@ + +""" This file is the main run loop as well as evaluation loops for various +signatures +""" + +from pypy.rlib.jit import 
JitDriver, hint, unroll_safe, promote +from pypy.module.micronumpy.interp_iter import ConstantIterator + +class NumpyEvalFrame(object): + _virtualizable2_ = ['iterators[*]', 'final_iter', 'arraylist[*]', + 'value', 'identity', 'cur_value'] + + @unroll_safe + def __init__(self, iterators, arrays): + self = hint(self, access_directly=True, fresh_virtualizable=True) + self.iterators = iterators[:] + self.arrays = arrays[:] + for i in range(len(self.iterators)): + iter = self.iterators[i] + if not isinstance(iter, ConstantIterator): + self.final_iter = i + break + else: + self.final_iter = -1 + self.cur_value = None + self.identity = None + + def done(self): + final_iter = promote(self.final_iter) + if final_iter < 0: + assert False + return self.iterators[final_iter].done() + + @unroll_safe + def next(self, shapelen): + for i in range(len(self.iterators)): + self.iterators[i] = self.iterators[i].next(shapelen) + + @unroll_safe + def next_from_second(self, shapelen): + """ Don't increase the first iterator + """ + for i in range(1, len(self.iterators)): + self.iterators[i] = self.iterators[i].next(shapelen) + + def next_first(self, shapelen): + self.iterators[0] = self.iterators[0].next(shapelen) + + def get_final_iter(self): + final_iter = promote(self.final_iter) + if final_iter < 0: + assert False + return self.iterators[final_iter] + +def get_printable_location(shapelen, sig): + return 'numpy ' + sig.debug_repr() + ' [%d dims]' % (shapelen,) + +numpy_driver = JitDriver( + greens=['shapelen', 'sig'], + virtualizables=['frame'], + reds=['frame', 'arr'], + get_printable_location=get_printable_location, + name='numpy', +) + +class ComputationDone(Exception): + def __init__(self, value): + self.value = value + +def compute(arr): + sig = arr.find_sig() + shapelen = len(arr.shape) + frame = sig.create_frame(arr) + try: + while not frame.done(): + numpy_driver.jit_merge_point(sig=sig, + shapelen=shapelen, + frame=frame, arr=arr) + sig.eval(frame, arr) + frame.next(shapelen) + return frame.cur_value + except ComputationDone, e: + return e.value diff --git a/pypy/module/micronumpy/signature.py b/pypy/module/micronumpy/signature.py --- a/pypy/module/micronumpy/signature.py +++ b/pypy/module/micronumpy/signature.py @@ -1,9 +1,9 @@ from pypy.rlib.objectmodel import r_dict, compute_identity_hash, compute_hash from pypy.rlib.rarithmetic import intmask -from pypy.module.micronumpy.interp_iter import ViewIterator, ArrayIterator, \ - ConstantIterator, AxisIterator, ViewTransform,\ - BroadcastTransform -from pypy.rlib.jit import hint, unroll_safe, promote +from pypy.module.micronumpy.interp_iter import ConstantIterator, AxisIterator,\ + ViewTransform, BroadcastTransform +from pypy.tool.pairtype import extendabletype +from pypy.module.micronumpy.loop import ComputationDone """ Signature specifies both the numpy expression that has been constructed and the assembler to be compiled. 
This is a very important observation - @@ -54,50 +54,6 @@ known_sigs[sig] = sig return sig -class NumpyEvalFrame(object): - _virtualizable2_ = ['iterators[*]', 'final_iter', 'arraylist[*]', - 'value', 'identity'] - - @unroll_safe - def __init__(self, iterators, arrays): - self = hint(self, access_directly=True, fresh_virtualizable=True) - self.iterators = iterators[:] - self.arrays = arrays[:] - for i in range(len(self.iterators)): - iter = self.iterators[i] - if not isinstance(iter, ConstantIterator): - self.final_iter = i - break - else: - self.final_iter = -1 - - def done(self): - final_iter = promote(self.final_iter) - if final_iter < 0: - assert False - return self.iterators[final_iter].done() - - @unroll_safe - def next(self, shapelen): - for i in range(len(self.iterators)): - self.iterators[i] = self.iterators[i].next(shapelen) - - @unroll_safe - def next_from_second(self, shapelen): - """ Don't increase the first iterator - """ - for i in range(1, len(self.iterators)): - self.iterators[i] = self.iterators[i].next(shapelen) - - def next_first(self, shapelen): - self.iterators[0] = self.iterators[0].next(shapelen) - - def get_final_iter(self): - final_iter = promote(self.final_iter) - if final_iter < 0: - assert False - return self.iterators[final_iter] - def _add_ptr_to_cache(ptr, cache): i = 0 for p in cache: @@ -113,6 +69,8 @@ return r_dict(sigeq_no_numbering, sighash) class Signature(object): + __metaclass_ = extendabletype + _attrs_ = ['iter_no', 'array_no'] _immutable_fields_ = ['iter_no', 'array_no'] @@ -138,11 +96,15 @@ self.iter_no = no def create_frame(self, arr): + from pypy.module.micronumpy.loop import NumpyEvalFrame + iterlist = [] arraylist = [] self._create_iter(iterlist, arraylist, arr, []) - return NumpyEvalFrame(iterlist, arraylist) - + f = NumpyEvalFrame(iterlist, arraylist) + # hook for cur_value being used by reduce + arr.compute_first_step(self, f) + return f class ConcreteSignature(Signature): _immutable_fields_ = ['dtype'] @@ -182,13 +144,10 @@ assert isinstance(concr, ConcreteArray) storage = concr.storage if self.iter_no >= len(iterlist): - iterlist.append(self.allocate_iter(concr, transforms)) + iterlist.append(concr.create_iter(transforms)) if self.array_no >= len(arraylist): arraylist.append(storage) - def allocate_iter(self, arr, transforms): - return ArrayIterator(arr.size).apply_transformations(arr, transforms) - def eval(self, frame, arr): iter = frame.iterators[self.iter_no] return self.dtype.getitem(frame.arrays[self.array_no], iter.offset) @@ -220,22 +179,10 @@ allnumbers.append(no) self.iter_no = no - def allocate_iter(self, arr, transforms): - return ViewIterator(arr.start, arr.strides, arr.backstrides, - arr.shape).apply_transformations(arr, transforms) - class FlatSignature(ViewSignature): def debug_repr(self): return 'Flat' - def allocate_iter(self, arr, transforms): - from pypy.module.micronumpy.interp_numarray import W_FlatIterator - assert isinstance(arr, W_FlatIterator) - return ViewIterator(arr.base.start, arr.base.strides, - arr.base.backstrides, - arr.base.shape).apply_transformations(arr.base, - transforms) - class VirtualSliceSignature(Signature): def __init__(self, child): self.child = child @@ -269,12 +216,13 @@ return self.child.eval(frame, arr.child) class Call1(Signature): - _immutable_fields_ = ['unfunc', 'name', 'child'] + _immutable_fields_ = ['unfunc', 'name', 'child', 'dtype'] - def __init__(self, func, name, child): + def __init__(self, func, name, dtype, child): self.unfunc = func self.child = child self.name = name + 
self.dtype = dtype def hash(self): return compute_hash(self.name) ^ intmask(self.child.hash() << 1) @@ -359,6 +307,17 @@ return 'Call2(%s, %s, %s)' % (self.name, self.left.debug_repr(), self.right.debug_repr()) +class ResultSignature(Call2): + def __init__(self, dtype, left, right): + Call2.__init__(self, None, 'assign', dtype, left, right) + + def eval(self, frame, arr): + from pypy.module.micronumpy.interp_numarray import ResultArray + + assert isinstance(arr, ResultArray) + offset = frame.get_final_iter().offset + arr.left.setitem(offset, self.right.eval(frame, arr.right)) + class BroadcastLeft(Call2): def _invent_numbering(self, cache, allnumbers): self.left._invent_numbering(new_cache(), allnumbers) @@ -400,20 +359,24 @@ self.right._create_iter(iterlist, arraylist, arr.right, rtransforms) class ReduceSignature(Call2): - def _create_iter(self, iterlist, arraylist, arr, transforms): - self.right._create_iter(iterlist, arraylist, arr, transforms) - - def _invent_numbering(self, cache, allnumbers): - self.right._invent_numbering(cache, allnumbers) - - def _invent_array_numbering(self, arr, cache): - self.right._invent_array_numbering(arr, cache) - + _immutable_fields_ = ['binfunc', 'name', 'calc_dtype', + 'left', 'right', 'done_func'] + + def __init__(self, func, name, calc_dtype, left, right, + done_func): + Call2.__init__(self, func, name, calc_dtype, left, right) + self.done_func = done_func + def eval(self, frame, arr): - return self.right.eval(frame, arr) + from pypy.module.micronumpy.interp_numarray import ReduceArray + assert isinstance(arr, ReduceArray) + rval = self.right.eval(frame, arr.right).convert_to(self.calc_dtype) + if self.done_func is not None and self.done_func(self.calc_dtype, rval): + raise ComputationDone(rval) + frame.cur_value = self.binfunc(self.calc_dtype, frame.cur_value, rval) def debug_repr(self): - return 'ReduceSig(%s, %s)' % (self.name, self.right.debug_repr()) + return 'ReduceSig(%s)' % (self.name, self.right.debug_repr()) class SliceloopSignature(Call2): def eval(self, frame, arr): @@ -467,7 +430,17 @@ from pypy.module.micronumpy.interp_numarray import AxisReduce assert isinstance(arr, AxisReduce) - return self.right.eval(frame, arr.right).convert_to(self.calc_dtype) + iterator = frame.get_final_iter() + v = self.right.eval(frame, arr.right).convert_to(self.calc_dtype) + if iterator.first_line: + if frame.identity is not None: + value = self.binfunc(self.calc_dtype, frame.identity, v) + else: + value = v + else: + cur = arr.left.getitem(iterator.offset) + value = self.binfunc(self.calc_dtype, cur, v) + arr.left.setitem(iterator.offset, value) def debug_repr(self): return 'AxisReduceSig(%s, %s)' % (self.name, self.right.debug_repr()) diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -1,11 +1,13 @@ import py + +from pypy.conftest import gettestobjspace, option +from pypy.interpreter.error import OperationError +from pypy.module.micronumpy import signature +from pypy.module.micronumpy.appbridge import get_appbridge_cache +from pypy.module.micronumpy.interp_iter import Chunk +from pypy.module.micronumpy.interp_numarray import W_NDimArray, shape_agreement from pypy.module.micronumpy.test.test_base import BaseNumpyAppTest -from pypy.module.micronumpy.interp_numarray import W_NDimArray, shape_agreement -from pypy.module.micronumpy.interp_iter import Chunk -from pypy.module.micronumpy import signature 
-from pypy.interpreter.error import OperationError -from pypy.conftest import gettestobjspace class MockDtype(object): @@ -173,7 +175,7 @@ def _to_coords(index, order): return to_coords(self.space, [2, 3, 4], 24, order, self.space.wrap(index))[0] - + assert _to_coords(0, 'C') == [0, 0, 0] assert _to_coords(1, 'C') == [0, 0, 1] assert _to_coords(-1, 'C') == [1, 2, 3] @@ -306,7 +308,7 @@ from _numpypy import arange a = arange(15).reshape(3, 5) assert a[1, 3] == 8 - assert a.T[1, 2] == 11 + assert a.T[1, 2] == 11 def test_setitem(self): from _numpypy import array @@ -936,10 +938,9 @@ [[86, 302, 518], [110, 390, 670], [134, 478, 822]]]).all() c = dot(a, b[:, 2]) assert (c == [[62, 214, 366], [518, 670, 822]]).all() - a = arange(3*4*5*6).reshape((3,4,5,6)) - b = arange(3*4*5*6)[::-1].reshape((5,4,6,3)) - assert dot(a, b)[2,3,2,1,2,2] == 499128 - assert sum(a[2,3,2,:] * b[1,2,:,2]) == 499128 + a = arange(3*2*6).reshape((3,2,6)) + b = arange(3*2*6)[::-1].reshape((2,6,3)) + assert dot(a, b)[2,0,1,2] == 1140 def test_dot_constant(self): from _numpypy import array, dot @@ -1121,14 +1122,14 @@ f1 = array([0,1]) f = concatenate((f1, [2], f1, [7])) assert (f == [0,1,2,0,1,7]).all() - + bad_axis = raises(ValueError, concatenate, (a1,a2), axis=1) assert str(bad_axis.value) == "bad axis argument" - + concat_zero = raises(ValueError, concatenate, ()) assert str(concat_zero.value) == \ "concatenation of zero-length sequences is impossible" - + dims_disagree = raises(ValueError, concatenate, (a1, b1), axis=0) assert str(dims_disagree.value) == \ "array dimensions must agree except for axis being concatenated" @@ -1163,6 +1164,25 @@ a = array([[1, 2], [3, 4]]) assert (a.T.flatten() == [1, 3, 2, 4]).all() + def test_itemsize(self): + from _numpypy import ones, dtype, array + + for obj in [float, bool, int]: + assert ones(1, dtype=obj).itemsize == dtype(obj).itemsize + assert (ones(1) + ones(1)).itemsize == 8 + assert array(1.0).itemsize == 8 + assert ones(1)[:].itemsize == 8 + + def test_nbytes(self): + from _numpypy import array, ones + + assert ones(1).nbytes == 8 + assert ones((2, 2)).nbytes == 32 + assert ones((2, 2))[1:,].nbytes == 16 + assert (ones(1) + ones(1)).nbytes == 8 + assert array(3.0).nbytes == 8 + + class AppTestMultiDim(BaseNumpyAppTest): def test_init(self): import _numpypy @@ -1458,35 +1478,37 @@ b = a.T.flat assert (b == [0, 4, 8, 1, 5, 9, 2, 6, 10, 3, 7, 11]).all() assert not (b != [0, 4, 8, 1, 5, 9, 2, 6, 10, 3, 7, 11]).any() - assert ((b >= range(12)) == [True, True, True,False, True, True, + assert ((b >= range(12)) == [True, True, True,False, True, True, False, False, True, False, False, True]).all() - assert ((b < range(12)) != [True, True, True,False, True, True, + assert ((b < range(12)) != [True, True, True,False, True, True, False, False, True, False, False, True]).all() - assert ((b <= range(12)) != [False, True, True,False, True, True, + assert ((b <= range(12)) != [False, True, True,False, True, True, False, False, True, False, False, False]).all() - assert ((b > range(12)) == [False, True, True,False, True, True, + assert ((b > range(12)) == [False, True, True,False, True, True, False, False, True, False, False, False]).all() def test_flatiter_view(self): from _numpypy import arange a = arange(10).reshape(5, 2) - #no == yet. 
- # a[::2].flat == [0, 1, 4, 5, 8, 9] - isequal = True - for y,z in zip(a[::2].flat, [0, 1, 4, 5, 8, 9]): - if y != z: - isequal = False - assert isequal == True + assert (a[::2].flat == [0, 1, 4, 5, 8, 9]).all() def test_flatiter_transpose(self): from _numpypy import arange - a = arange(10).reshape(2,5).T + a = arange(10).reshape(2, 5).T b = a.flat assert (b[:5] == [0, 5, 1, 6, 2]).all() b.next() b.next() b.next() assert b.index == 3 - assert b.coords == (1,1) + assert b.coords == (1, 1) + + def test_flatiter_len(self): + from _numpypy import arange + + assert len(arange(10).flat) == 10 + assert len(arange(10).reshape(2, 5).flat) == 10 + assert len(arange(10)[:2].flat) == 2 + assert len((arange(2) + arange(2)).flat) == 2 def test_slice_copy(self): from _numpypy import zeros @@ -1740,10 +1762,11 @@ assert len(a) == 8 assert arange(False, True, True).dtype is dtype(int) -from pypy.module.micronumpy.appbridge import get_appbridge_cache class AppTestRepr(BaseNumpyAppTest): def setup_class(cls): + if option.runappdirect: + py.test.skip("Can't be run directly.") BaseNumpyAppTest.setup_class.im_func(cls) cache = get_appbridge_cache(cls.space) cls.old_array_repr = cache.w_array_repr @@ -1757,6 +1780,8 @@ assert str(array([1, 2, 3])) == 'array([1, 2, 3])' def teardown_class(cls): + if option.runappdirect: + return cache = get_appbridge_cache(cls.space) cache.w_array_repr = cls.old_array_repr cache.w_array_str = cls.old_array_str diff --git a/pypy/module/micronumpy/test/test_ufuncs.py b/pypy/module/micronumpy/test/test_ufuncs.py --- a/pypy/module/micronumpy/test/test_ufuncs.py +++ b/pypy/module/micronumpy/test/test_ufuncs.py @@ -347,8 +347,9 @@ raises((ValueError, TypeError), add.reduce, 1) def test_reduce_1d(self): - from _numpypy import add, maximum + from _numpypy import add, maximum, less + assert less.reduce([5, 4, 3, 2, 1]) assert add.reduce([1, 2, 3]) == 6 assert maximum.reduce([1]) == 1 assert maximum.reduce([1, 2, 3]) == 3 @@ -433,3 +434,14 @@ assert (isnan(array([0.2, float('inf'), float('nan')])) == [False, False, True]).all() assert (isinf(array([0.2, float('inf'), float('nan')])) == [False, True, False]).all() assert isinf(array([0.2])).dtype.kind == 'b' + + def test_logical_ops(self): + from _numpypy import logical_and, logical_or, logical_xor, logical_not + + assert (logical_and([True, False , True, True], [1, 1, 3, 0]) + == [True, False, True, False]).all() + assert (logical_or([True, False, True, False], [1, 2, 0, 0]) + == [True, True, True, False]).all() + assert (logical_xor([True, False, True, False], [1, 2, 0, 0]) + == [False, True, True, False]).all() + assert (logical_not([True, False]) == [False, True]).all() diff --git a/pypy/module/micronumpy/test/test_zjit.py b/pypy/module/micronumpy/test/test_zjit.py --- a/pypy/module/micronumpy/test/test_zjit.py +++ b/pypy/module/micronumpy/test/test_zjit.py @@ -84,7 +84,7 @@ def test_add(self): result = self.run("add") self.check_simple_loop({'getinteriorfield_raw': 2, 'float_add': 1, - 'setinteriorfield_raw': 1, 'int_add': 2, + 'setinteriorfield_raw': 1, 'int_add': 1, 'int_ge': 1, 'guard_false': 1, 'jump': 1, 'arraylen_gc': 1}) assert result == 3 + 3 @@ -99,7 +99,7 @@ result = self.run("float_add") assert result == 3 + 3 self.check_simple_loop({"getinteriorfield_raw": 1, "float_add": 1, - "setinteriorfield_raw": 1, "int_add": 2, + "setinteriorfield_raw": 1, "int_add": 1, "int_ge": 1, "guard_false": 1, "jump": 1, 'arraylen_gc': 1}) @@ -198,7 +198,8 @@ result = self.run("any") assert result == 1 
self.check_simple_loop({"getinteriorfield_raw": 2, "float_add": 1, - "float_ne": 1, "int_add": 1, + "int_and": 1, "int_add": 1, + 'cast_float_to_int': 1, "int_ge": 1, "jump": 1, "guard_false": 2, 'arraylen_gc': 1}) @@ -239,7 +240,7 @@ assert result == -6 self.check_simple_loop({"getinteriorfield_raw": 2, "float_add": 1, "float_neg": 1, - "setinteriorfield_raw": 1, "int_add": 2, + "setinteriorfield_raw": 1, "int_add": 1, "int_ge": 1, "guard_false": 1, "jump": 1, 'arraylen_gc': 1}) @@ -321,7 +322,7 @@ # int_add might be 1 here if we try slightly harder with # reusing indexes or some optimization self.check_simple_loop({'float_add': 1, 'getinteriorfield_raw': 2, - 'guard_false': 1, 'int_add': 2, 'int_ge': 1, + 'guard_false': 1, 'int_add': 1, 'int_ge': 1, 'jump': 1, 'setinteriorfield_raw': 1, 'arraylen_gc': 1}) @@ -387,7 +388,7 @@ assert result == 4 self.check_trace_count(1) self.check_simple_loop({'getinteriorfield_raw': 2, 'float_add': 1, - 'setinteriorfield_raw': 1, 'int_add': 2, + 'setinteriorfield_raw': 1, 'int_add': 1, 'int_ge': 1, 'guard_false': 1, 'jump': 1, 'arraylen_gc': 1}) def define_flat_iter(): @@ -403,7 +404,7 @@ assert result == 6 self.check_trace_count(1) self.check_simple_loop({'getinteriorfield_raw': 2, 'float_add': 1, - 'setinteriorfield_raw': 1, 'int_add': 3, + 'setinteriorfield_raw': 1, 'int_add': 2, 'int_ge': 1, 'guard_false': 1, 'arraylen_gc': 1, 'jump': 1}) diff --git a/pypy/module/micronumpy/types.py b/pypy/module/micronumpy/types.py --- a/pypy/module/micronumpy/types.py +++ b/pypy/module/micronumpy/types.py @@ -181,6 +181,22 @@ def ge(self, v1, v2): return v1 >= v2 + @raw_binary_op + def logical_and(self, v1, v2): + return bool(v1) and bool(v2) + + @raw_binary_op + def logical_or(self, v1, v2): + return bool(v1) or bool(v2) + + @raw_unary_op + def logical_not(self, v): + return not bool(v) + + @raw_binary_op + def logical_xor(self, v1, v2): + return bool(v1) ^ bool(v2) + def bool(self, v): return bool(self.for_computation(self.unbox(v))) diff --git a/pypy/module/posix/interp_posix.py b/pypy/module/posix/interp_posix.py --- a/pypy/module/posix/interp_posix.py +++ b/pypy/module/posix/interp_posix.py @@ -37,7 +37,7 @@ if space.isinstance_w(w_obj, space.w_unicode): w_obj = space.call_method(w_obj, 'encode', getfilesystemencoding(space)) - return space.str_w(w_obj) + return space.str0_w(w_obj) class FileEncoder(object): def __init__(self, space, w_obj): @@ -56,7 +56,7 @@ self.w_obj = w_obj def as_bytes(self): - return self.space.str_w(self.w_obj) + return self.space.str0_w(self.w_obj) def as_unicode(self): space = self.space @@ -71,7 +71,7 @@ fname = FileEncoder(space, w_fname) return func(fname, *args) else: - fname = space.str_w(w_fname) + fname = space.str0_w(w_fname) return func(fname, *args) return dispatch @@ -369,7 +369,7 @@ space.wrap(times[3]), space.wrap(times[4])]) - at unwrap_spec(cmd=str) + at unwrap_spec(cmd='str0') def system(space, cmd): """Execute the command (a string) in a subshell.""" try: @@ -401,7 +401,7 @@ fullpath = rposix._getfullpathname(path) w_fullpath = space.wrap(fullpath) else: - path = space.str_w(w_path) + path = space.str0_w(w_path) fullpath = rposix._getfullpathname(path) w_fullpath = space.wrap(fullpath) except OSError, e: @@ -512,7 +512,7 @@ for key, value in os.environ.items(): space.setitem(w_env, space.wrap(key), space.wrap(value)) - at unwrap_spec(name=str, value=str) + at unwrap_spec(name='str0', value='str0') def putenv(space, name, value): """Change or add an environment variable.""" try: @@ -520,7 +520,7 @@ except OSError, e: 
raise wrap_oserror(space, e) - at unwrap_spec(name=str) + at unwrap_spec(name='str0') def unsetenv(space, name): """Delete an environment variable.""" try: @@ -548,7 +548,7 @@ for s in result ] else: - dirname = space.str_w(w_dirname) + dirname = space.str0_w(w_dirname) result = rposix.listdir(dirname) result_w = [space.wrap(s) for s in result] except OSError, e: @@ -635,7 +635,7 @@ import signal os.kill(os.getpid(), signal.SIGABRT) - at unwrap_spec(src=str, dst=str) + at unwrap_spec(src='str0', dst='str0') def link(space, src, dst): "Create a hard link to a file." try: @@ -650,7 +650,7 @@ except OSError, e: raise wrap_oserror(space, e) - at unwrap_spec(path=str) + at unwrap_spec(path='str0') def readlink(space, path): "Return a string representing the path to which the symbolic link points." try: @@ -765,7 +765,7 @@ w_keys = space.call_method(w_env, 'keys') for w_key in space.unpackiterable(w_keys): w_value = space.getitem(w_env, w_key) - env[space.str_w(w_key)] = space.str_w(w_value) + env[space.str0_w(w_key)] = space.str0_w(w_value) return env def execve(space, w_command, w_args, w_env): @@ -785,18 +785,18 @@ except OSError, e: raise wrap_oserror(space, e) - at unwrap_spec(mode=int, path=str) + at unwrap_spec(mode=int, path='str0') def spawnv(space, mode, path, w_args): - args = [space.str_w(w_arg) for w_arg in space.unpackiterable(w_args)] + args = [space.str0_w(w_arg) for w_arg in space.unpackiterable(w_args)] try: ret = os.spawnv(mode, path, args) except OSError, e: raise wrap_oserror(space, e) return space.wrap(ret) - at unwrap_spec(mode=int, path=str) + at unwrap_spec(mode=int, path='str0') def spawnve(space, mode, path, w_args, w_env): - args = [space.str_w(w_arg) for w_arg in space.unpackiterable(w_args)] + args = [space.str0_w(w_arg) for w_arg in space.unpackiterable(w_args)] env = _env2interp(space, w_env) try: ret = os.spawnve(mode, path, args, env) @@ -914,7 +914,7 @@ raise wrap_oserror(space, e) return space.w_None - at unwrap_spec(path=str) + at unwrap_spec(path='str0') def chroot(space, path): """ chroot(path) @@ -1103,7 +1103,7 @@ except OSError, e: raise wrap_oserror(space, e) - at unwrap_spec(path=str, uid=c_uid_t, gid=c_gid_t) + at unwrap_spec(path='str0', uid=c_uid_t, gid=c_gid_t) def chown(space, path, uid, gid): check_uid_range(space, uid) check_uid_range(space, gid) @@ -1113,7 +1113,7 @@ raise wrap_oserror(space, e, path) return space.w_None - at unwrap_spec(path=str, uid=c_uid_t, gid=c_gid_t) + at unwrap_spec(path='str0', uid=c_uid_t, gid=c_gid_t) def lchown(space, path, uid, gid): check_uid_range(space, uid) check_uid_range(space, gid) diff --git a/pypy/module/pypyjit/interp_resop.py b/pypy/module/pypyjit/interp_resop.py --- a/pypy/module/pypyjit/interp_resop.py +++ b/pypy/module/pypyjit/interp_resop.py @@ -127,6 +127,7 @@ l_w.append(DebugMergePoint(space, jit_hooks._cast_to_gcref(op), logops.repr_of_resop(op), jd_sd.jitdriver.name, + op.getarg(1).getint(), w_greenkey)) else: l_w.append(WrappedOp(jit_hooks._cast_to_gcref(op), ofs, @@ -163,14 +164,14 @@ llres = res.llbox return WrappedOp(jit_hooks.resop_new(num, args, llres), offset, repr) - at unwrap_spec(repr=str, jd_name=str) -def descr_new_dmp(space, w_tp, w_args, repr, jd_name, w_greenkey): + at unwrap_spec(repr=str, jd_name=str, call_depth=int) +def descr_new_dmp(space, w_tp, w_args, repr, jd_name, call_depth, w_greenkey): args = [space.interp_w(WrappedBox, w_arg).llbox for w_arg in space.listview(w_args)] num = rop.DEBUG_MERGE_POINT return DebugMergePoint(space, jit_hooks.resop_new(num, args, 
jit_hooks.emptyval()), - repr, jd_name, w_greenkey) + repr, jd_name, call_depth, w_greenkey) class WrappedOp(Wrappable): """ A class representing a single ResOperation, wrapped nicely @@ -205,10 +206,11 @@ jit_hooks.resop_setresult(self.op, box.llbox) class DebugMergePoint(WrappedOp): - def __init__(self, space, op, repr_of_resop, jd_name, w_greenkey): + def __init__(self, space, op, repr_of_resop, jd_name, call_depth, w_greenkey): WrappedOp.__init__(self, op, -1, repr_of_resop) + self.jd_name = jd_name + self.call_depth = call_depth self.w_greenkey = w_greenkey - self.jd_name = jd_name def get_pycode(self, space): if self.jd_name == pypyjitdriver.name: @@ -243,6 +245,7 @@ greenkey = interp_attrproperty_w("w_greenkey", cls=DebugMergePoint), pycode = GetSetProperty(DebugMergePoint.get_pycode), bytecode_no = GetSetProperty(DebugMergePoint.get_bytecode_no), + call_depth = interp_attrproperty("call_depth", cls=DebugMergePoint), jitdriver_name = GetSetProperty(DebugMergePoint.get_jitdriver_name), ) DebugMergePoint.acceptable_as_base_class = False diff --git a/pypy/module/pypyjit/test/test_jit_hook.py b/pypy/module/pypyjit/test/test_jit_hook.py --- a/pypy/module/pypyjit/test/test_jit_hook.py +++ b/pypy/module/pypyjit/test/test_jit_hook.py @@ -122,7 +122,8 @@ assert isinstance(dmp, pypyjit.DebugMergePoint) assert dmp.pycode is self.f.func_code assert dmp.greenkey == (self.f.func_code, 0, False) - #assert int_add.name == 'int_add' + assert dmp.call_depth == 0 + assert int_add.name == 'int_add' assert int_add.num == self.int_add_num self.on_compile_bridge() assert len(all) == 2 @@ -223,11 +224,13 @@ def f(): pass - op = DebugMergePoint([Box(0)], 'repr', 'pypyjit', (f.func_code, 0, 0)) + op = DebugMergePoint([Box(0)], 'repr', 'pypyjit', 2, (f.func_code, 0, 0)) assert op.bytecode_no == 0 assert op.pycode is f.func_code assert repr(op) == 'repr' assert op.jitdriver_name == 'pypyjit' assert op.num == self.dmp_num - op = DebugMergePoint([Box(0)], 'repr', 'notmain', ('str',)) + assert op.call_depth == 2 + op = DebugMergePoint([Box(0)], 'repr', 'notmain', 5, ('str',)) raises(AttributeError, 'op.pycode') + assert op.call_depth == 5 diff --git a/pypy/module/sys/state.py b/pypy/module/sys/state.py --- a/pypy/module/sys/state.py +++ b/pypy/module/sys/state.py @@ -74,7 +74,7 @@ # return importlist - at unwrap_spec(srcdir=str) + at unwrap_spec(srcdir='str0') def pypy_initial_path(space, srcdir): try: path = getinitialpath(get(space), srcdir) diff --git a/pypy/module/zipimport/interp_zipimport.py b/pypy/module/zipimport/interp_zipimport.py --- a/pypy/module/zipimport/interp_zipimport.py +++ b/pypy/module/zipimport/interp_zipimport.py @@ -344,7 +344,7 @@ space = self.space return space.wrap(self.filename) - at unwrap_spec(name=str) + at unwrap_spec(name='str0') def descr_new_zipimporter(space, w_type, name): w = space.wrap ok = False diff --git a/pypy/rlib/rstring.py b/pypy/rlib/rstring.py --- a/pypy/rlib/rstring.py +++ b/pypy/rlib/rstring.py @@ -205,3 +205,45 @@ assert p.const is None return SomeUnicodeBuilder(can_be_None=True) +#___________________________________________________________________ +# Support functions for SomeString.no_nul + +def assert_str0(fname): + assert '\x00' not in fname, "NUL byte in string" + return fname + +class Entry(ExtRegistryEntry): + _about_ = assert_str0 + + def compute_result_annotation(self, s_obj): + if s_None.contains(s_obj): + return s_obj + assert isinstance(s_obj, (SomeString, SomeUnicodeString)) + if s_obj.no_nul: + return s_obj + new_s_obj = 
SomeObject.__new__(s_obj.__class__) + new_s_obj.__dict__ = s_obj.__dict__.copy() + new_s_obj.no_nul = True + return new_s_obj + + def specialize_call(self, hop): + hop.exception_cannot_occur() + return hop.inputarg(hop.args_r[0], arg=0) + +def check_str0(fname): + """A 'probe' to trigger a failure at translation time, if the + string was not proved to not contain NUL characters.""" + assert '\x00' not in fname, "NUL byte in string" + +class Entry(ExtRegistryEntry): + _about_ = check_str0 + + def compute_result_annotation(self, s_obj): + if not isinstance(s_obj, (SomeString, SomeUnicodeString)): + return s_obj + if not s_obj.no_nul: + raise ValueError("Value is not no_nul") + + def specialize_call(self, hop): + pass + diff --git a/pypy/rlib/test/test_rmarshal.py b/pypy/rlib/test/test_rmarshal.py --- a/pypy/rlib/test/test_rmarshal.py +++ b/pypy/rlib/test/test_rmarshal.py @@ -169,7 +169,7 @@ assert st2.st_mode == st.st_mode assert st2[9] == st[9] return buf - fn = compile(f, [str]) + fn = compile(f, [annmodel.s_Str0]) res = fn('.') st = os.stat('.') sttuple = marshal.loads(res) diff --git a/pypy/rpython/extfunc.py b/pypy/rpython/extfunc.py --- a/pypy/rpython/extfunc.py +++ b/pypy/rpython/extfunc.py @@ -2,7 +2,7 @@ from pypy.rpython.extregistry import ExtRegistryEntry from pypy.rpython.lltypesystem.lltype import typeOf from pypy.objspace.flow.model import Constant -from pypy.annotation.model import unionof +from pypy.annotation import model as annmodel from pypy.annotation.signature import annotation import py, sys @@ -138,7 +138,6 @@ # we defer a bit annotation here def compute_result_annotation(self): - from pypy.annotation import model as annmodel return annmodel.SomeGenericCallable([annotation(i, self.bookkeeper) for i in self.instance.args], annotation(self.instance.result, self.bookkeeper)) @@ -152,8 +151,9 @@ signature_args = [annotation(arg, None) for arg in args] assert len(args_s) == len(signature_args),\ "Argument number mismatch" + for i, expected in enumerate(signature_args): - arg = unionof(args_s[i], expected) + arg = annmodel.unionof(args_s[i], expected) if not expected.contains(arg): name = getattr(self, 'name', None) if not name: diff --git a/pypy/rpython/extfuncregistry.py b/pypy/rpython/extfuncregistry.py --- a/pypy/rpython/extfuncregistry.py +++ b/pypy/rpython/extfuncregistry.py @@ -85,7 +85,8 @@ # llinterpreter path_functions = [ - ('join', [str, str], str), + ('join', [ll_os.str0, ll_os.str0], ll_os.str0), + ('dirname', [ll_os.str0], ll_os.str0), ] for name, args, res in path_functions: diff --git a/pypy/rpython/lltypesystem/ll2ctypes.py b/pypy/rpython/lltypesystem/ll2ctypes.py --- a/pypy/rpython/lltypesystem/ll2ctypes.py +++ b/pypy/rpython/lltypesystem/ll2ctypes.py @@ -1036,13 +1036,8 @@ libraries = eci.testonly_libraries + eci.libraries + eci.frameworks FUNCTYPE = lltype.typeOf(funcptr).TO - if not libraries: - cfunc = get_on_lib(standard_c_lib, funcname) - # XXX magic: on Windows try to load the function from 'kernel32' too - if cfunc is None and hasattr(ctypes, 'windll'): - cfunc = get_on_lib(ctypes.windll.kernel32, funcname) - else: - cfunc = None + cfunc = None + if libraries: not_found = [] for libname in libraries: libpath = None @@ -1075,6 +1070,12 @@ not_found.append(libname) if cfunc is None: + cfunc = get_on_lib(standard_c_lib, funcname) + # XXX magic: on Windows try to load the function from 'kernel32' too + if cfunc is None and hasattr(ctypes, 'windll'): + cfunc = get_on_lib(ctypes.windll.kernel32, funcname) + + if cfunc is None: # function name not found 
in any of the libraries if not libraries: place = 'the standard C library (missing libraries=...?)' diff --git a/pypy/rpython/lltypesystem/rffi.py b/pypy/rpython/lltypesystem/rffi.py --- a/pypy/rpython/lltypesystem/rffi.py +++ b/pypy/rpython/lltypesystem/rffi.py @@ -15,7 +15,7 @@ from pypy.translator.tool.cbuild import ExternalCompilationInfo from pypy.rpython.annlowlevel import llhelper from pypy.rlib.objectmodel import we_are_translated -from pypy.rlib.rstring import StringBuilder, UnicodeBuilder +from pypy.rlib.rstring import StringBuilder, UnicodeBuilder, assert_str0 from pypy.rlib import jit from pypy.rpython.lltypesystem import llmemory import os, sys @@ -698,7 +698,7 @@ while cp[i] != lastchar: b.append(cp[i]) i += 1 - return b.build() + return assert_str0(b.build()) # str -> char* # Can't inline this because of the raw address manipulation. @@ -804,7 +804,7 @@ while i < maxlen and cp[i] != lastchar: b.append(cp[i]) i += 1 - return b.build() + return assert_str0(b.build()) # char* and size -> str (which can contain null bytes) def charpsize2str(cp, size): @@ -842,6 +842,7 @@ array[i] = str2charp(l[i]) array[len(l)] = lltype.nullptr(CCHARP.TO) return array +liststr2charpp._annenforceargs_ = [[annmodel.s_Str0]] # List of strings def free_charpp(ref): """ frees list of char**, NULL terminated diff --git a/pypy/rpython/module/ll_os.py b/pypy/rpython/module/ll_os.py --- a/pypy/rpython/module/ll_os.py +++ b/pypy/rpython/module/ll_os.py @@ -31,6 +31,10 @@ from pypy.rlib import rgc from pypy.rlib.objectmodel import specialize +str0 = SomeString(no_nul=True) +unicode0 = SomeUnicodeString(no_nul=True) + + def monkeypatch_rposix(posixfunc, unicodefunc, signature): func_name = posixfunc.__name__ @@ -44,7 +48,10 @@ args = ', '.join(arglist) transformed_args = ', '.join(transformed_arglist) - main_arg = 'arg%d' % (signature.index(unicode),) + try: + main_arg = 'arg%d' % (signature.index(unicode0),) + except ValueError: + main_arg = 'arg%d' % (signature.index(unicode),) source = py.code.Source(""" def %(func_name)s(%(args)s): @@ -68,6 +75,7 @@ class StringTraits: str = str + str0 = str0 CHAR = rffi.CHAR CCHARP = rffi.CCHARP charp2str = staticmethod(rffi.charp2str) @@ -85,6 +93,7 @@ class UnicodeTraits: str = unicode + str0 = unicode0 CHAR = rffi.WCHAR_T CCHARP = rffi.CWCHARP charp2str = staticmethod(rffi.wcharp2unicode) @@ -301,7 +310,7 @@ rffi.free_charpp(l_args) raise OSError(rposix.get_errno(), "execv failed") - return extdef([str, [str]], s_ImpossibleValue, llimpl=execv_llimpl, + return extdef([str0, [str0]], s_ImpossibleValue, llimpl=execv_llimpl, export_name="ll_os.ll_os_execv") @@ -319,7 +328,8 @@ # appropriate envstrs = [] for item in env.iteritems(): - envstrs.append("%s=%s" % item) + envstr = "%s=%s" % item + envstrs.append(envstr) l_args = rffi.liststr2charpp(args) l_env = rffi.liststr2charpp(envstrs) @@ -332,7 +342,7 @@ raise OSError(rposix.get_errno(), "execve failed") return extdef( - [str, [str], {str: str}], + [str0, [str0], {str0: str0}], s_ImpossibleValue, llimpl=execve_llimpl, export_name="ll_os.ll_os_execve") @@ -353,7 +363,7 @@ raise OSError(rposix.get_errno(), "os_spawnv failed") return rffi.cast(lltype.Signed, childpid) - return extdef([int, str, [str]], int, llimpl=spawnv_llimpl, + return extdef([int, str0, [str0]], int, llimpl=spawnv_llimpl, export_name="ll_os.ll_os_spawnv") @registering_if(os, 'spawnve') @@ -378,7 +388,7 @@ raise OSError(rposix.get_errno(), "os_spawnve failed") return rffi.cast(lltype.Signed, childpid) - return extdef([int, str, [str], {str: str}], int, + 
return extdef([int, str0, [str0], {str0: str0}], int, llimpl=spawnve_llimpl, export_name="ll_os.ll_os_spawnve") @@ -517,7 +527,7 @@ else: raise Exception("os.utime() arg 2 must be None or a tuple of " "2 floats, got %s" % (s_times,)) - os_utime_normalize_args._default_signature_ = [traits.str, None] + os_utime_normalize_args._default_signature_ = [traits.str0, None] return extdef(os_utime_normalize_args, s_None, "ll_os.ll_os_utime", @@ -612,7 +622,7 @@ if result == -1: raise OSError(rposix.get_errno(), "os_chroot failed") - return extdef([str], None, export_name="ll_os.ll_os_chroot", + return extdef([str0], None, export_name="ll_os.ll_os_chroot", llimpl=chroot_llimpl) @registering_if(os, 'uname') @@ -816,7 +826,7 @@ def os_open_oofakeimpl(path, flags, mode): return os.open(OOSupport.from_rstr(path), flags, mode) - return extdef([traits.str, int, int], int, traits.ll_os_name('open'), + return extdef([traits.str0, int, int], int, traits.ll_os_name('open'), llimpl=os_open_llimpl, oofakeimpl=os_open_oofakeimpl) @registering_if(os, 'getloadavg') @@ -1050,7 +1060,7 @@ def os_access_oofakeimpl(path, mode): return os.access(OOSupport.from_rstr(path), mode) - return extdef([traits.str, int], s_Bool, llimpl=access_llimpl, + return extdef([traits.str0, int], s_Bool, llimpl=access_llimpl, export_name=traits.ll_os_name("access"), oofakeimpl=os_access_oofakeimpl) @@ -1062,8 +1072,8 @@ from pypy.rpython.module.ll_win32file import make_getfullpathname_impl getfullpathname_llimpl = make_getfullpathname_impl(traits) - return extdef([traits.str], # a single argument which is a str - traits.str, # returns a string + return extdef([traits.str0], # a single argument which is a str + traits.str0, # returns a string traits.ll_os_name('_getfullpathname'), llimpl=getfullpathname_llimpl) @@ -1174,8 +1184,8 @@ raise OSError(error, "os_readdir failed") return result - return extdef([traits.str], # a single argument which is a str - [traits.str], # returns a list of strings + return extdef([traits.str0], # a single argument which is a str + [traits.str0], # returns a list of strings traits.ll_os_name('listdir'), llimpl=os_listdir_llimpl) @@ -1241,7 +1251,7 @@ if res == -1: raise OSError(rposix.get_errno(), "os_chown failed") - return extdef([str, int, int], None, "ll_os.ll_os_chown", + return extdef([str0, int, int], None, "ll_os.ll_os_chown", llimpl=os_chown_llimpl) @registering_if(os, 'lchown') @@ -1254,7 +1264,7 @@ if res == -1: raise OSError(rposix.get_errno(), "os_lchown failed") - return extdef([str, int, int], None, "ll_os.ll_os_lchown", + return extdef([str0, int, int], None, "ll_os.ll_os_lchown", llimpl=os_lchown_llimpl) @registering_if(os, 'readlink') @@ -1283,12 +1293,11 @@ lltype.free(buf, flavor='raw') bufsize *= 4 # convert the result to a string - l = [buf[i] for i in range(res)] - result = ''.join(l) + result = rffi.charp2strn(buf, res) lltype.free(buf, flavor='raw') return result - return extdef([str], str, + return extdef([str0], str0, "ll_os.ll_os_readlink", llimpl=os_readlink_llimpl) @@ -1361,7 +1370,7 @@ res = os_system(command) return rffi.cast(lltype.Signed, res) - return extdef([str], int, llimpl=system_llimpl, + return extdef([str0], int, llimpl=system_llimpl, export_name="ll_os.ll_os_system") @registering_str_unicode(os.unlink) @@ -1383,7 +1392,7 @@ if not win32traits.DeleteFile(path): raise rwin32.lastWindowsError() - return extdef([traits.str], s_None, llimpl=unlink_llimpl, + return extdef([traits.str0], s_None, llimpl=unlink_llimpl, export_name=traits.ll_os_name('unlink')) 
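
The str -> str0 conversions in these ll_os hunks all hinge on the no_nul string annotation and on the assert_str0()/check_str0() helpers added to pypy/rlib/rstring.py earlier in this patch series. The sketch below is not part of the changeset; it only mirrors the untranslated behaviour of those two helpers on plain Python 2 to illustrate what a str0 argument demands from callers of external functions such as os.unlink() or os.chdir(). The safe_chdir() wrapper is a made-up example, not PyPy code.

    import os

    def assert_str0(fname):
        # Same body as the helper added to pypy/rlib/rstring.py: reject NUL
        # bytes at run time.  Under the annotator the real helper additionally
        # marks the result as SomeString(no_nul=True).
        assert '\x00' not in fname, "NUL byte in string"
        return fname

    def check_str0(fname):
        # The 'probe' variant: after translation it becomes a static check
        # that the string was proved NUL-free; untranslated it is a plain
        # runtime assert.
        assert '\x00' not in fname, "NUL byte in string"

    def safe_chdir(path):
        # Hypothetical wrapper: external functions like os.chdir() are now
        # registered with a str0 argument, so a caller has to establish the
        # no_nul property before the call.
        check_str0(path)
        os.chdir(assert_str0(path))

    if __name__ == '__main__':
        safe_chdir('.')                    # accepted
        try:
            safe_chdir('.\x00oops')        # rejected before reaching chdir()
        except AssertionError, e:
            print 'rejected:', e

At interpreter level the same policy surfaces as unwrap_spec(...='str0') and space.str0_w(), which raise TypeError for application-level strings containing a NUL byte.
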
@registering_str_unicode(os.chdir) @@ -1401,7 +1410,7 @@ from pypy.rpython.module.ll_win32file import make_chdir_impl os_chdir_llimpl = make_chdir_impl(traits) - return extdef([traits.str], s_None, llimpl=os_chdir_llimpl, + return extdef([traits.str0], s_None, llimpl=os_chdir_llimpl, export_name=traits.ll_os_name('chdir')) @registering_str_unicode(os.mkdir) @@ -1424,7 +1433,7 @@ if res < 0: raise OSError(rposix.get_errno(), "os_mkdir failed") - return extdef([traits.str, int], s_None, llimpl=os_mkdir_llimpl, + return extdef([traits.str0, int], s_None, llimpl=os_mkdir_llimpl, export_name=traits.ll_os_name('mkdir')) @registering_str_unicode(os.rmdir) @@ -1437,7 +1446,7 @@ if res < 0: raise OSError(rposix.get_errno(), "os_rmdir failed") - return extdef([traits.str], s_None, llimpl=rmdir_llimpl, + return extdef([traits.str0], s_None, llimpl=rmdir_llimpl, export_name=traits.ll_os_name('rmdir')) @registering_str_unicode(os.chmod) @@ -1454,7 +1463,7 @@ from pypy.rpython.module.ll_win32file import make_chmod_impl chmod_llimpl = make_chmod_impl(traits) - return extdef([traits.str, int], s_None, llimpl=chmod_llimpl, + return extdef([traits.str0, int], s_None, llimpl=chmod_llimpl, export_name=traits.ll_os_name('chmod')) @registering_str_unicode(os.rename) @@ -1476,7 +1485,7 @@ if not win32traits.MoveFile(oldpath, newpath): raise rwin32.lastWindowsError() - return extdef([traits.str, traits.str], s_None, llimpl=rename_llimpl, + return extdef([traits.str0, traits.str0], s_None, llimpl=rename_llimpl, export_name=traits.ll_os_name('rename')) @registering_str_unicode(getattr(os, 'mkfifo', None)) @@ -1489,7 +1498,7 @@ if res < 0: raise OSError(rposix.get_errno(), "os_mkfifo failed") - return extdef([traits.str, int], s_None, llimpl=mkfifo_llimpl, + return extdef([traits.str0, int], s_None, llimpl=mkfifo_llimpl, export_name=traits.ll_os_name('mkfifo')) @registering_str_unicode(getattr(os, 'mknod', None)) @@ -1503,7 +1512,7 @@ if res < 0: raise OSError(rposix.get_errno(), "os_mknod failed") - return extdef([traits.str, int, int], s_None, llimpl=mknod_llimpl, + return extdef([traits.str0, int, int], s_None, llimpl=mknod_llimpl, export_name=traits.ll_os_name('mknod')) @registering(os.umask) @@ -1555,7 +1564,7 @@ if res < 0: raise OSError(rposix.get_errno(), "os_link failed") - return extdef([str, str], s_None, llimpl=link_llimpl, + return extdef([str0, str0], s_None, llimpl=link_llimpl, export_name="ll_os.ll_os_link") @registering_if(os, 'symlink') @@ -1568,7 +1577,7 @@ if res < 0: raise OSError(rposix.get_errno(), "os_symlink failed") - return extdef([str, str], s_None, llimpl=symlink_llimpl, + return extdef([str0, str0], s_None, llimpl=symlink_llimpl, export_name="ll_os.ll_os_symlink") @registering_if(os, 'fork') diff --git a/pypy/rpython/module/ll_os_environ.py b/pypy/rpython/module/ll_os_environ.py --- a/pypy/rpython/module/ll_os_environ.py +++ b/pypy/rpython/module/ll_os_environ.py @@ -3,8 +3,11 @@ from pypy.rpython.controllerentry import Controller from pypy.rpython.extfunc import register_external from pypy.rpython.lltypesystem import rffi, lltype +from pypy.rpython.module import ll_os from pypy.rlib import rposix +str0 = ll_os.str0 + # ____________________________________________________________ # # Annotation support to control access to 'os.environ' in the RPython program @@ -64,7 +67,7 @@ rffi.free_charp(l_name) return result -register_external(r_getenv, [str], annmodel.SomeString(can_be_None=True), +register_external(r_getenv, [str0], annmodel.SomeString(can_be_None=True), 
export_name='ll_os.ll_os_getenv', llimpl=getenv_llimpl) @@ -93,7 +96,7 @@ if l_oldstring: rffi.free_charp(l_oldstring) -register_external(r_putenv, [str, str], annmodel.s_None, +register_external(r_putenv, [str0, str0], annmodel.s_None, export_name='ll_os.ll_os_putenv', llimpl=putenv_llimpl) @@ -128,7 +131,7 @@ del envkeepalive.byname[name] rffi.free_charp(l_oldstring) - register_external(r_unsetenv, [str], annmodel.s_None, + register_external(r_unsetenv, [str0], annmodel.s_None, export_name='ll_os.ll_os_unsetenv', llimpl=unsetenv_llimpl) @@ -172,7 +175,7 @@ i += 1 return result -register_external(r_envkeys, [], [str], # returns a list of strings +register_external(r_envkeys, [], [str0], # returns a list of strings export_name='ll_os.ll_os_envkeys', llimpl=envkeys_llimpl) @@ -193,6 +196,6 @@ i += 1 return result -register_external(r_envitems, [], [(str, str)], +register_external(r_envitems, [], [(str0, str0)], export_name='ll_os.ll_os_envitems', llimpl=envitems_llimpl) diff --git a/pypy/rpython/module/ll_os_stat.py b/pypy/rpython/module/ll_os_stat.py --- a/pypy/rpython/module/ll_os_stat.py +++ b/pypy/rpython/module/ll_os_stat.py @@ -236,7 +236,7 @@ def register_stat_variant(name, traits): if name != 'fstat': arg_is_path = True - s_arg = traits.str + s_arg = traits.str0 ARG1 = traits.CCHARP else: arg_is_path = False @@ -251,8 +251,6 @@ [s_arg], s_StatResult, traits.ll_os_name(name), llimpl=posix_stat_llimpl) - assert traits.str is str - if sys.platform.startswith('linux'): # because we always use _FILE_OFFSET_BITS 64 - this helps things work that are not a c compiler _functions = {'stat': 'stat64', @@ -283,7 +281,7 @@ @func_renamer('os_%s_fake' % (name,)) def posix_fakeimpl(arg): - if s_arg == str: + if s_arg == traits.str0: arg = hlstr(arg) st = getattr(os, name)(arg) fields = [TYPE for fieldname, TYPE in STAT_FIELDS] diff --git a/pypy/rpython/ootypesystem/test/test_ooann.py b/pypy/rpython/ootypesystem/test/test_ooann.py --- a/pypy/rpython/ootypesystem/test/test_ooann.py +++ b/pypy/rpython/ootypesystem/test/test_ooann.py @@ -231,7 +231,7 @@ a = RPythonAnnotator() s = a.build_types(oof, [bool]) - assert s == annmodel.SomeString(can_be_None=True) + assert annmodel.SomeString(can_be_None=True).contains(s) def test_oostring(): def oof(): diff --git a/pypy/rpython/test/test_extfunc.py b/pypy/rpython/test/test_extfunc.py --- a/pypy/rpython/test/test_extfunc.py +++ b/pypy/rpython/test/test_extfunc.py @@ -167,3 +167,43 @@ a = RPythonAnnotator(policy=policy) s = a.build_types(f, []) assert isinstance(s, annmodel.SomeString) + + def test_str0(self): + str0 = annmodel.SomeString(no_nul=True) + def os_open(s): + pass + register_external(os_open, [str0], None) + def f(s): + return os_open(s) + policy = AnnotatorPolicy() + policy.allow_someobjects = False + a = RPythonAnnotator(policy=policy) + a.build_types(f, [str]) # Does not raise + assert a.translator.config.translation.check_str_without_nul == False + # Now enable the str0 check, and try again with a similar function + a.translator.config.translation.check_str_without_nul=True + def g(s): + return os_open(s) + raises(Exception, a.build_types, g, [str]) + a.build_types(g, [str0]) # Does not raise + + def test_list_of_str0(self): + str0 = annmodel.SomeString(no_nul=True) + def os_execve(l): + pass + register_external(os_execve, [[str0]], None) + def f(l): + return os_execve(l) + policy = AnnotatorPolicy() + policy.allow_someobjects = False + a = RPythonAnnotator(policy=policy) + a.build_types(f, [[str]]) # Does not raise + assert 
a.translator.config.translation.check_str_without_nul == False + # Now enable the str0 check, and try again with a similar function + a.translator.config.translation.check_str_without_nul=True + def g(l): + return os_execve(l) + raises(Exception, a.build_types, g, [[str]]) + a.build_types(g, [[str0]]) # Does not raise + + diff --git a/pypy/translator/c/test/test_extfunc.py b/pypy/translator/c/test/test_extfunc.py --- a/pypy/translator/c/test/test_extfunc.py +++ b/pypy/translator/c/test/test_extfunc.py @@ -3,6 +3,7 @@ import os, time, sys from pypy.tool.udir import udir from pypy.rlib.rarithmetic import r_longlong +from pypy.annotation import model as annmodel from pypy.translator.c.test.test_genc import compile from pypy.translator.c.test.test_standalone import StandaloneTests posix = __import__(os.name) @@ -145,7 +146,7 @@ filename = str(py.path.local(__file__)) def call_access(path, mode): return os.access(path, mode) - f = compile(call_access, [str, int]) + f = compile(call_access, [annmodel.s_Str0, int]) for mode in os.R_OK, os.W_OK, os.X_OK, (os.R_OK | os.W_OK | os.X_OK): assert f(filename, mode) == os.access(filename, mode) @@ -225,7 +226,7 @@ def test_system(): def does_stuff(cmd): return os.system(cmd) - f1 = compile(does_stuff, [str]) + f1 = compile(does_stuff, [annmodel.s_Str0]) res = f1("echo hello") assert res == 0 @@ -311,7 +312,7 @@ def test_chdir(): def does_stuff(path): os.chdir(path) - f1 = compile(does_stuff, [str]) + f1 = compile(does_stuff, [annmodel.s_Str0]) curdir = os.getcwd() try: os.chdir('..') @@ -325,7 +326,7 @@ os.rmdir(path) else: os.mkdir(path, 0777) - f1 = compile(does_stuff, [str, bool]) + f1 = compile(does_stuff, [annmodel.s_Str0, bool]) dirname = str(udir.join('test_mkdir_rmdir')) f1(dirname, False) assert os.path.exists(dirname) and os.path.isdir(dirname) @@ -628,7 +629,7 @@ return os.environ[s] except KeyError: return '--missing--' - func = compile(fn, [str]) + func = compile(fn, [annmodel.s_Str0]) os.environ.setdefault('USER', 'UNNAMED_USER') result = func('USER') assert result == os.environ['USER'] @@ -640,7 +641,7 @@ res = os.environ.get(s) if res is None: res = '--missing--' return res - func = compile(fn, [str]) + func = compile(fn, [annmodel.s_Str0]) os.environ.setdefault('USER', 'UNNAMED_USER') result = func('USER') assert result == os.environ['USER'] @@ -654,7 +655,7 @@ os.environ[s] = t3 os.environ[s] = t4 os.environ[s] = t5 - func = compile(fn, [str, str, str, str, str, str]) + func = compile(fn, [annmodel.s_Str0] * 6) func('PYPY_TEST_DICTLIKE_ENVIRON', 'a', 'b', 'c', 'FOOBAR', '42', expected_extra_mallocs = (2, 3, 4)) # at least two, less than 5 assert _real_getenv('PYPY_TEST_DICTLIKE_ENVIRON') == '42' @@ -678,7 +679,7 @@ else: raise Exception("should have raised!") # os.environ[s5] stays - func = compile(fn, [str, str, str, str, str]) + func = compile(fn, [annmodel.s_Str0] * 5) if hasattr(__import__(os.name), 'unsetenv'): expected_extra_mallocs = range(2, 10) # at least 2, less than 10: memory for s1, s2, s3, s4 should be freed @@ -743,7 +744,7 @@ raise AssertionError("should have failed!") result = os.listdir(s) return '/'.join(result) - func = compile(mylistdir, [str]) + func = compile(mylistdir, [annmodel.s_Str0]) for testdir in [str(udir), os.curdir]: result = func(testdir) result = result.split('/') diff --git a/pypy/translator/cli/test/runtest.py b/pypy/translator/cli/test/runtest.py --- a/pypy/translator/cli/test/runtest.py +++ b/pypy/translator/cli/test/runtest.py @@ -276,7 +276,7 @@ def get_annotation(x): if isinstance(x, basestring) 
and len(x) > 1: - return SomeString() + return SomeString(no_nul='\x00' not in x) else: return lltype_to_annotation(typeOf(x)) diff --git a/pypy/translator/driver.py b/pypy/translator/driver.py --- a/pypy/translator/driver.py +++ b/pypy/translator/driver.py @@ -184,6 +184,7 @@ self.standalone = standalone if standalone: + # the 'argv' parameter inputtypes = [s_list_of_strings] self.inputtypes = inputtypes diff --git a/pypy/translator/goal/nanos.py b/pypy/translator/goal/nanos.py --- a/pypy/translator/goal/nanos.py +++ b/pypy/translator/goal/nanos.py @@ -266,7 +266,7 @@ raise NotImplementedError("os.name == %r" % (os.name,)) def getenv(space, w_name): - name = space.str_w(w_name) + name = space.str0_w(w_name) return space.wrap(os.environ.get(name)) getenv_w = interp2app(getenv) diff --git a/pypy/translator/goal/targetpypystandalone.py b/pypy/translator/goal/targetpypystandalone.py --- a/pypy/translator/goal/targetpypystandalone.py +++ b/pypy/translator/goal/targetpypystandalone.py @@ -159,6 +159,8 @@ ## if config.translation.type_system == 'ootype': ## config.objspace.usemodules.suggest(rbench=True) + config.translation.suggest(check_str_without_nul=True) + if config.translation.thread: config.objspace.usemodules.thread = True elif config.objspace.usemodules.thread: From noreply at buildbot.pypy.org Sun Feb 5 23:32:27 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Sun, 5 Feb 2012 23:32:27 +0100 (CET) Subject: [pypy-commit] pypy py3k: Fix test after merge Message-ID: <20120205223227.48F61820D0@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: py3k Changeset: r52115:30d9837842b5 Date: 2012-02-03 19:59 +0100 http://bitbucket.org/pypy/pypy/changeset/30d9837842b5/ Log: Fix test after merge diff --git a/pypy/interpreter/astcompiler/test/test_compiler.py b/pypy/interpreter/astcompiler/test/test_compiler.py --- a/pypy/interpreter/astcompiler/test/test_compiler.py +++ b/pypy/interpreter/astcompiler/test/test_compiler.py @@ -876,28 +876,19 @@ def test_const_fold_unicode_subscr(self): source = """def f(): - return u"abc"[0] + return "abc"[0] """ counts = self.count_instructions(source) assert counts == {ops.LOAD_CONST: 1, ops.RETURN_VALUE: 1} # getitem outside of the BMP should not be optimized source = """def f(): - return u"\U00012345"[0] + return "\U00012345"[0] """ counts = self.count_instructions(source) assert counts == {ops.LOAD_CONST: 2, ops.BINARY_SUBSCR: 1, ops.RETURN_VALUE: 1} - # getslice is not yet optimized. - # Still, check a case which yields the empty string. 
- source = """def f(): - return u"abc"[:0] - """ - counts = self.count_instructions(source) - assert counts == {ops.LOAD_CONST: 2, ops.SLICE+2: 1, - ops.RETURN_VALUE: 1} - def test_remove_dead_code(self): source = """def f(x): return 5 From noreply at buildbot.pypy.org Sun Feb 5 23:32:30 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Sun, 5 Feb 2012 23:32:30 +0100 (CET) Subject: [pypy-commit] pypy py3k: hg merge default Message-ID: <20120205223230.6E66082E0F@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: py3k Changeset: r52116:abf983c4a829 Date: 2012-02-04 23:13 +0100 http://bitbucket.org/pypy/pypy/changeset/abf983c4a829/ Log: hg merge default diff --git a/pypy/annotation/binaryop.py b/pypy/annotation/binaryop.py --- a/pypy/annotation/binaryop.py +++ b/pypy/annotation/binaryop.py @@ -434,11 +434,13 @@ class __extend__(pairtype(SomeString, SomeString)): def union((str1, str2)): - return SomeString(can_be_None=str1.can_be_None or str2.can_be_None) + can_be_None = str1.can_be_None or str2.can_be_None + no_nul = str1.no_nul and str2.no_nul + return SomeString(can_be_None=can_be_None, no_nul=no_nul) def add((str1, str2)): # propagate const-ness to help getattr(obj, 'prefix' + const_name) - result = SomeString() + result = SomeString(no_nul=str1.no_nul and str2.no_nul) if str1.is_immutable_constant() and str2.is_immutable_constant(): result.const = str1.const + str2.const return result @@ -475,7 +477,16 @@ raise NotImplementedError( "string formatting mixing strings and unicode not supported") getbookkeeper().count('strformat', str, s_tuple) - return SomeString() + no_nul = str.no_nul + for s_item in s_tuple.items: + if isinstance(s_item, SomeFloat): + pass # or s_item is a subclass, like SomeInteger + elif isinstance(s_item, SomeString) and s_item.no_nul: + pass + else: + no_nul = False + break + return SomeString(no_nul=no_nul) class __extend__(pairtype(SomeString, SomeObject)): @@ -828,7 +839,7 @@ exec source.compile() in glob _make_none_union('SomeInstance', 'classdef=obj.classdef, can_be_None=True') -_make_none_union('SomeString', 'can_be_None=True') +_make_none_union('SomeString', 'no_nul=obj.no_nul, can_be_None=True') _make_none_union('SomeUnicodeString', 'can_be_None=True') _make_none_union('SomeList', 'obj.listdef') _make_none_union('SomeDict', 'obj.dictdef') diff --git a/pypy/annotation/bookkeeper.py b/pypy/annotation/bookkeeper.py --- a/pypy/annotation/bookkeeper.py +++ b/pypy/annotation/bookkeeper.py @@ -342,10 +342,11 @@ else: raise Exception("seeing a prebuilt long (value %s)" % hex(x)) elif issubclass(tp, str): # py.lib uses annotated str subclasses + no_nul = not '\x00' in x if len(x) == 1: - result = SomeChar() + result = SomeChar(no_nul=no_nul) else: - result = SomeString() + result = SomeString(no_nul=no_nul) elif tp is unicode: if len(x) == 1: result = SomeUnicodeCodePoint() diff --git a/pypy/annotation/listdef.py b/pypy/annotation/listdef.py --- a/pypy/annotation/listdef.py +++ b/pypy/annotation/listdef.py @@ -86,18 +86,19 @@ read_locations = self.read_locations.copy() other_read_locations = other.read_locations.copy() self.read_locations.update(other.read_locations) - self.patch() # which should patch all refs to 'other' s_value = self.s_value s_other_value = other.s_value s_new_value = unionof(s_value, s_other_value) + if s_new_value != s_value: + if self.dont_change_any_more: + raise TooLateForChange if isdegenerated(s_new_value): if self.bookkeeper: self.bookkeeper.ondegenerated(self, s_new_value) elif other.bookkeeper: 
other.bookkeeper.ondegenerated(other, s_new_value) + self.patch() # which should patch all refs to 'other' if s_new_value != s_value: - if self.dont_change_any_more: - raise TooLateForChange self.s_value = s_new_value # reflow from reading points for position_key in read_locations: @@ -222,4 +223,5 @@ MOST_GENERAL_LISTDEF = ListDef(None, SomeObject()) -s_list_of_strings = SomeList(ListDef(None, SomeString(), resized = True)) +s_list_of_strings = SomeList(ListDef(None, SomeString(no_nul=True), + resized = True)) diff --git a/pypy/annotation/model.py b/pypy/annotation/model.py --- a/pypy/annotation/model.py +++ b/pypy/annotation/model.py @@ -229,21 +229,33 @@ "Stands for an object which is known to be a string." knowntype = str immutable = True - def __init__(self, can_be_None=False): - self.can_be_None = can_be_None + can_be_None=False + no_nul = False # No NUL character in the string. + + def __init__(self, can_be_None=False, no_nul=False): + if can_be_None: + self.can_be_None = True + if no_nul: + self.no_nul = True def can_be_none(self): return self.can_be_None def nonnoneify(self): - return SomeString(can_be_None=False) + return SomeString(can_be_None=False, no_nul=self.no_nul) class SomeUnicodeString(SomeObject): "Stands for an object which is known to be an unicode string" knowntype = unicode immutable = True - def __init__(self, can_be_None=False): - self.can_be_None = can_be_None + can_be_None=False + no_nul = False + + def __init__(self, can_be_None=False, no_nul=False): + if can_be_None: + self.can_be_None = True + if no_nul: + self.no_nul = True def can_be_none(self): return self.can_be_None @@ -254,14 +266,16 @@ class SomeChar(SomeString): "Stands for an object known to be a string of length 1." can_be_None = False - def __init__(self): # no 'can_be_None' argument here - pass + def __init__(self, no_nul=False): # no 'can_be_None' argument here + if no_nul: + self.no_nul = True class SomeUnicodeCodePoint(SomeUnicodeString): "Stands for an object known to be a unicode codepoint." 
can_be_None = False - def __init__(self): # no 'can_be_None' argument here - pass + def __init__(self, no_nul=False): # no 'can_be_None' argument here + if no_nul: + self.no_nul = True SomeString.basestringclass = SomeString SomeString.basecharclass = SomeChar @@ -502,6 +516,7 @@ s_None = SomePBC([], can_be_None=True) s_Bool = SomeBool() s_ImpossibleValue = SomeImpossibleValue() +s_Str0 = SomeString(no_nul=True) # ____________________________________________________________ # weakrefs @@ -716,8 +731,7 @@ def not_const(s_obj): if s_obj.is_constant(): - new_s_obj = SomeObject() - new_s_obj.__class__ = s_obj.__class__ + new_s_obj = SomeObject.__new__(s_obj.__class__) dic = new_s_obj.__dict__ = s_obj.__dict__.copy() if 'const' in dic: del new_s_obj.const @@ -726,6 +740,15 @@ s_obj = new_s_obj return s_obj +def remove_no_nul(s_obj): + if not getattr(s_obj, 'no_nul', False): + return s_obj + new_s_obj = SomeObject.__new__(s_obj.__class__) + new_s_obj.__dict__ = s_obj.__dict__.copy() + del new_s_obj.no_nul + return new_s_obj + + # ____________________________________________________________ # internal diff --git a/pypy/annotation/test/test_annrpython.py b/pypy/annotation/test/test_annrpython.py --- a/pypy/annotation/test/test_annrpython.py +++ b/pypy/annotation/test/test_annrpython.py @@ -456,6 +456,20 @@ return ''.join(g(n)) s = a.build_types(f, [int]) assert s.knowntype == str + assert s.no_nul + + def test_str_split(self): + a = self.RPythonAnnotator() + def g(n): + if n: + return "test string" + def f(n): + if n: + return g(n).split(' ') + s = a.build_types(f, [int]) + assert isinstance(s, annmodel.SomeList) + s_item = s.listdef.listitem.s_value + assert s_item.no_nul def test_str_splitlines(self): a = self.RPythonAnnotator() @@ -465,6 +479,18 @@ assert isinstance(s, annmodel.SomeList) assert s.listdef.listitem.resized + def test_str_strip(self): + a = self.RPythonAnnotator() + def f(n, a_str): + if n == 0: + return a_str.strip(' ') + elif n == 1: + return a_str.rstrip(' ') + else: + return a_str.lstrip(' ') + s = a.build_types(f, [int, annmodel.SomeString(no_nul=True)]) + assert s.no_nul + def test_str_mul(self): a = self.RPythonAnnotator() def f(a_str): @@ -1841,7 +1867,7 @@ return obj.indirect() a = self.RPythonAnnotator() s = a.build_types(f, [bool]) - assert s == annmodel.SomeString(can_be_None=True) + assert annmodel.SomeString(can_be_None=True).contains(s) def test_dont_see_AttributeError_clause(self): class Stuff: @@ -2018,6 +2044,37 @@ s = a.build_types(g, [int]) assert not s.can_be_None + def test_string_noNUL_canbeNone(self): + def f(a): + if a: + return "abc" + else: + return None + a = self.RPythonAnnotator() + s = a.build_types(f, [int]) + assert s.can_be_None + assert s.no_nul + + def test_str_or_None(self): + def f(a): + if a: + return "abc" + else: + return None + def g(a): + x = f(a) + #assert x is not None + if x is None: + return "abcd" + return x + if isinstance(x, str): + return x + return "impossible" + a = self.RPythonAnnotator() + s = a.build_types(f, [int]) + assert s.can_be_None + assert s.no_nul + def test_emulated_pbc_call_simple(self): def f(a,b): return a + b @@ -2071,6 +2128,19 @@ assert isinstance(s, annmodel.SomeIterator) assert s.variant == ('items',) + def test_iteritems_str0(self): + def it(d): + return d.iteritems() + def f(): + d0 = {'1a': '2a', '3': '4'} + for item in it(d0): + return "%s=%s" % item + raise ValueError + a = self.RPythonAnnotator() + s = a.build_types(f, []) + assert isinstance(s, annmodel.SomeString) + assert s.no_nul + def 
test_non_none_and_none_with_isinstance(self): class A(object): pass diff --git a/pypy/annotation/unaryop.py b/pypy/annotation/unaryop.py --- a/pypy/annotation/unaryop.py +++ b/pypy/annotation/unaryop.py @@ -481,13 +481,13 @@ return SomeInteger(nonneg=True) def method_strip(str, chr): - return str.basestringclass() + return str.basestringclass(no_nul=str.no_nul) def method_lstrip(str, chr): - return str.basestringclass() + return str.basestringclass(no_nul=str.no_nul) def method_rstrip(str, chr): - return str.basestringclass() + return str.basestringclass(no_nul=str.no_nul) def method_join(str, s_list): if s_None.contains(s_list): @@ -498,7 +498,8 @@ if isinstance(str, SomeUnicodeString): return immutablevalue(u"") return immutablevalue("") - return str.basestringclass() + no_nul = str.no_nul and s_item.no_nul + return str.basestringclass(no_nul=no_nul) def iter(str): return SomeIterator(str) @@ -509,18 +510,21 @@ def method_split(str, patt, max=-1): getbookkeeper().count("str_split", str, patt) - return getbookkeeper().newlist(str.basestringclass()) + s_item = str.basestringclass(no_nul=str.no_nul) + return getbookkeeper().newlist(s_item) def method_rsplit(str, patt, max=-1): getbookkeeper().count("str_rsplit", str, patt) - return getbookkeeper().newlist(str.basestringclass()) + s_item = str.basestringclass(no_nul=str.no_nul) + return getbookkeeper().newlist(s_item) def method_replace(str, s1, s2): return str.basestringclass() def getslice(str, s_start, s_stop): check_negative_slice(s_start, s_stop) - return str.basestringclass() + result = str.basestringclass(no_nul=str.no_nul) + return result class __extend__(SomeUnicodeString): def method_encode(uni, s_enc): diff --git a/pypy/config/translationoption.py b/pypy/config/translationoption.py --- a/pypy/config/translationoption.py +++ b/pypy/config/translationoption.py @@ -123,6 +123,9 @@ default="off"), # jit_ffi is automatically turned on by withmod-_ffi (which is enabled by default) BoolOption("jit_ffi", "optimize libffi calls", default=False, cmdline=None), + BoolOption("check_str_without_nul", + "Forbid NUL chars in strings in some external function calls", + default=False, cmdline=None), # misc BoolOption("verbose", "Print extra information", default=False), diff --git a/pypy/interpreter/baseobjspace.py b/pypy/interpreter/baseobjspace.py --- a/pypy/interpreter/baseobjspace.py +++ b/pypy/interpreter/baseobjspace.py @@ -1292,6 +1292,24 @@ def bytes_w(self, w_obj): return w_obj.bytes_w(self) + def str0_w(self, w_obj): + "Like str_w, but rejects strings with NUL bytes." + from pypy.rlib import rstring + result = self.str_w(w_obj) + if '\x00' in result: + raise OperationError(self.w_TypeError, self.wrap( + 'argument must be a string without NUL characters')) + return rstring.assert_str0(result) + + def bytes0_w(self, w_obj): + "Like bytes_w, but rejects strings with NUL bytes." 
+ from pypy.rlib import rstring + result = self.bytes_w(w_obj) + if '\x00' in result: + raise OperationError(self.w_TypeError, self.wrap( + 'argument must be a string without NUL characters')) + return rstring.assert_str0(result) + def int_w(self, w_obj): return w_obj.int_w(self) diff --git a/pypy/interpreter/gateway.py b/pypy/interpreter/gateway.py --- a/pypy/interpreter/gateway.py +++ b/pypy/interpreter/gateway.py @@ -130,6 +130,9 @@ def visit_str_or_None(self, el, app_sig): self.checked_space_method(el, app_sig) + def visit_str0(self, el, app_sig): + self.checked_space_method(el, app_sig) + def visit_nonnegint(self, el, app_sig): self.checked_space_method(el, app_sig) @@ -249,6 +252,9 @@ def visit_str_or_None(self, typ): self.run_args.append("space.str_or_None_w(%s)" % (self.scopenext(),)) + def visit_str0(self, typ): + self.run_args.append("space.str0_w(%s)" % (self.scopenext(),)) + def visit_nonnegint(self, typ): self.run_args.append("space.gateway_nonnegint_w(%s)" % ( self.scopenext(),)) @@ -383,6 +389,9 @@ def visit_str_or_None(self, typ): self.unwrap.append("space.str_or_None_w(%s)" % (self.nextarg(),)) + def visit_str0(self, typ): + self.unwrap.append("space.str0_w(%s)" % (self.nextarg(),)) + def visit_nonnegint(self, typ): self.unwrap.append("space.gateway_nonnegint_w(%s)" % (self.nextarg(),)) diff --git a/pypy/interpreter/mixedmodule.py b/pypy/interpreter/mixedmodule.py --- a/pypy/interpreter/mixedmodule.py +++ b/pypy/interpreter/mixedmodule.py @@ -50,7 +50,7 @@ space.call_method(self.w_dict, 'update', self.w_initialdict) for w_submodule in self.submodules_w: - name = space.str_w(w_submodule.w_name) + name = space.str0_w(w_submodule.w_name) space.setitem(self.w_dict, space.wrap(name.split(".")[-1]), w_submodule) space.getbuiltinmodule(name) diff --git a/pypy/interpreter/module.py b/pypy/interpreter/module.py --- a/pypy/interpreter/module.py +++ b/pypy/interpreter/module.py @@ -31,7 +31,8 @@ def install(self): """NOT_RPYTHON: installs this module into space.builtin_modules""" w_mod = self.space.wrap(self) - self.space.builtin_modules[self.space.str_w(self.w_name)] = w_mod + modulename = self.space.str0_w(self.w_name) + self.space.builtin_modules[modulename] = w_mod def setup_after_space_initialization(self): """NOT_RPYTHON: to allow built-in modules to do some more setup diff --git a/pypy/jit/backend/llgraph/llimpl.py b/pypy/jit/backend/llgraph/llimpl.py --- a/pypy/jit/backend/llgraph/llimpl.py +++ b/pypy/jit/backend/llgraph/llimpl.py @@ -780,6 +780,9 @@ self.overflow_flag = ovf return z + def op_keepalive(self, _, x): + pass + # ---------- # delegating to the builtins do_xxx() (done automatically for simple cases) diff --git a/pypy/jit/backend/x86/regalloc.py b/pypy/jit/backend/x86/regalloc.py --- a/pypy/jit/backend/x86/regalloc.py +++ b/pypy/jit/backend/x86/regalloc.py @@ -1463,6 +1463,9 @@ if jump_op is not None and jump_op.getdescr() is descr: self._compute_hint_frame_locations_from_descr(descr) + def consider_keepalive(self, op): + pass + def not_implemented_op(self, op): not_implemented("not implemented operation: %s" % op.getopname()) diff --git a/pypy/jit/metainterp/executor.py b/pypy/jit/metainterp/executor.py --- a/pypy/jit/metainterp/executor.py +++ b/pypy/jit/metainterp/executor.py @@ -254,6 +254,9 @@ assert isinstance(x, r_longlong) # 32-bit return BoxFloat(x) +def do_keepalive(cpu, _, x): + pass + # ____________________________________________________________ ##def do_force_token(cpu): diff --git a/pypy/jit/metainterp/pyjitpl.py b/pypy/jit/metainterp/pyjitpl.py --- 
a/pypy/jit/metainterp/pyjitpl.py +++ b/pypy/jit/metainterp/pyjitpl.py @@ -1346,12 +1346,16 @@ resbox = self.metainterp.execute_and_record_varargs( rop.CALL_MAY_FORCE, allboxes, descr=descr) self.metainterp.vrefs_after_residual_call() + vablebox = None if assembler_call: - self.metainterp.direct_assembler_call(assembler_call_jd) + vablebox = self.metainterp.direct_assembler_call( + assembler_call_jd) if resbox is not None: self.make_result_of_lastop(resbox) self.metainterp.vable_after_residual_call() self.generate_guard(rop.GUARD_NOT_FORCED, None) + if vablebox is not None: + self.metainterp.history.record(rop.KEEPALIVE, [vablebox], None) self.metainterp.handle_possible_exception() return resbox else: @@ -2478,6 +2482,15 @@ token = warmrunnerstate.get_assembler_token(greenargs) op = op.copy_and_change(rop.CALL_ASSEMBLER, args=args, descr=token) self.history.operations.append(op) + # + # To fix an obscure issue, make sure the vable stays alive + # longer than the CALL_ASSEMBLER operation. We do it by + # inserting explicitly an extra KEEPALIVE operation. + jd = token.outermost_jitdriver_sd + if jd.index_of_virtualizable >= 0: + return args[jd.index_of_virtualizable] + else: + return None # ____________________________________________________________ diff --git a/pypy/jit/metainterp/resoperation.py b/pypy/jit/metainterp/resoperation.py --- a/pypy/jit/metainterp/resoperation.py +++ b/pypy/jit/metainterp/resoperation.py @@ -503,6 +503,7 @@ 'COPYUNICODECONTENT/5', 'QUASIIMMUT_FIELD/1d', # [objptr], descr=SlowMutateDescr 'RECORD_KNOWN_CLASS/2', # [objptr, clsptr] + 'KEEPALIVE/1', '_CANRAISE_FIRST', # ----- start of can_raise operations ----- '_CALL_FIRST', diff --git a/pypy/jit/metainterp/test/test_ajit.py b/pypy/jit/metainterp/test/test_ajit.py --- a/pypy/jit/metainterp/test/test_ajit.py +++ b/pypy/jit/metainterp/test/test_ajit.py @@ -322,6 +322,17 @@ res = self.interp_operations(f, [42]) assert res == ord(u"?") + def test_char_in_constant_string(self): + def g(string): + return '\x00' in string + def f(): + if g('abcdef'): return -60 + if not g('abc\x00ef'): return -61 + return 42 + res = self.interp_operations(f, []) + assert res == 42 + self.check_operations_history({'finish': 1}) # nothing else + def test_residual_call(self): @dont_look_inside def externfn(x, y): diff --git a/pypy/module/cpyext/include/pythonrun.h b/pypy/module/cpyext/include/pythonrun.h --- a/pypy/module/cpyext/include/pythonrun.h +++ b/pypy/module/cpyext/include/pythonrun.h @@ -13,6 +13,7 @@ #define Py_FrozenFlag 0 #define Py_VerboseFlag 0 +#define Py_DebugFlag 1 typedef struct { int cf_flags; /* bitmask of CO_xxx flags relevant to future */ diff --git a/pypy/module/gc/interp_gc.py b/pypy/module/gc/interp_gc.py --- a/pypy/module/gc/interp_gc.py +++ b/pypy/module/gc/interp_gc.py @@ -49,7 +49,7 @@ # ____________________________________________________________ - at unwrap_spec(filename=str) + at unwrap_spec(filename='str0') def dump_heap_stats(space, filename): tb = rgc._heap_stats() if not tb: diff --git a/pypy/module/imp/importing.py b/pypy/module/imp/importing.py --- a/pypy/module/imp/importing.py +++ b/pypy/module/imp/importing.py @@ -137,7 +137,7 @@ ctxt_package = None if ctxt_w_package is not None and ctxt_w_package is not space.w_None: try: - ctxt_package = space.str_w(ctxt_w_package) + ctxt_package = space.str0_w(ctxt_w_package) except OperationError, e: if not e.match(space, space.w_TypeError): raise @@ -186,7 +186,7 @@ ctxt_name = None if ctxt_w_name is not None: try: - ctxt_name = space.str_w(ctxt_w_name) + 
ctxt_name = space.str0_w(ctxt_w_name) except OperationError, e: if not e.match(space, space.w_TypeError): raise @@ -229,7 +229,7 @@ return rel_modulename, rel_level - at unwrap_spec(name=str, level=int) + at unwrap_spec(name='str0', level=int) def importhook(space, name, w_globals=None, w_locals=None, w_fromlist=None, level=-1): modulename = name @@ -376,8 +376,8 @@ fromlist_w = space.fixedview(w_all) for w_name in fromlist_w: if try_getattr(space, w_mod, w_name) is None: - load_part(space, w_path, prefix, space.str_w(w_name), w_mod, - tentative=1) + load_part(space, w_path, prefix, space.str0_w(w_name), + w_mod, tentative=1) return w_mod else: return first @@ -431,7 +431,7 @@ def __init__(self, space): pass - @unwrap_spec(path=str) + @unwrap_spec(path='str0') def descr_init(self, space, path): if not path: raise OperationError(space.w_ImportError, space.wrap( @@ -512,7 +512,7 @@ if w_loader: return FindInfo.fromLoader(w_loader) - path = space.str_w(w_pathitem) + path = space.str0_w(w_pathitem) filepart = os.path.join(path, partname) if os.path.isdir(filepart) and case_ok(filepart): initfile = os.path.join(filepart, '__init__') @@ -670,7 +670,7 @@ space.wrap("reload() argument must be module")) w_modulename = space.getattr(w_module, space.wrap("__name__")) - modulename = space.str_w(w_modulename) + modulename = space.str0_w(w_modulename) if not space.is_w(check_sys_modules(space, w_modulename), w_module): raise operationerrfmt( space.w_ImportError, diff --git a/pypy/module/imp/interp_imp.py b/pypy/module/imp/interp_imp.py --- a/pypy/module/imp/interp_imp.py +++ b/pypy/module/imp/interp_imp.py @@ -44,7 +44,7 @@ return space.interp_w(W_File, w_file).stream def find_module(space, w_name, w_path=None): - name = space.str_w(w_name) + name = space.str0_w(w_name) if space.is_w(w_path, space.w_None): w_path = None @@ -75,7 +75,7 @@ def load_module(space, w_name, w_file, w_filename, w_info): w_suffix, w_filemode, w_modtype = space.unpackiterable(w_info) - filename = space.str_w(w_filename) + filename = space.str0_w(w_filename) filemode = space.str_w(w_filemode) if space.is_w(w_file, space.w_None): stream = None @@ -92,7 +92,7 @@ space, w_name, find_info, reuse=True) def load_source(space, w_modulename, w_filename, w_file=None): - filename = space.str_w(w_filename) + filename = space.str0_w(w_filename) stream = get_file(space, w_file, filename, 'U') @@ -105,7 +105,7 @@ stream.close() return w_mod - at unwrap_spec(filename=str) + at unwrap_spec(filename='str0') def _run_compiled_module(space, w_modulename, filename, w_file, w_module): # the function 'imp._run_compiled_module' is a pypy-only extension stream = get_file(space, w_file, filename, 'rb') @@ -119,7 +119,7 @@ if space.is_w(w_file, space.w_None): stream.close() - at unwrap_spec(filename=str) + at unwrap_spec(filename='str0') def load_compiled(space, w_modulename, filename, w_file=None): w_mod = space.wrap(Module(space, w_modulename)) importing._prepare_module(space, w_mod, filename, None) @@ -138,7 +138,7 @@ return space.wrap(Module(space, w_name, add_package=False)) def init_builtin(space, w_name): - name = space.str_w(w_name) + name = space.str0_w(w_name) if name not in space.builtin_modules: return if space.finditem(space.sys.get('modules'), w_name) is not None: @@ -151,7 +151,7 @@ return None def is_builtin(space, w_name): - name = space.str_w(w_name) + name = space.str0_w(w_name) if name not in space.builtin_modules: return space.wrap(0) if space.finditem(space.sys.get('modules'), w_name) is not None: diff --git 
a/pypy/module/micronumpy/__init__.py b/pypy/module/micronumpy/__init__.py --- a/pypy/module/micronumpy/__init__.py +++ b/pypy/module/micronumpy/__init__.py @@ -98,6 +98,10 @@ ('bitwise_not', 'invert'), ('isnan', 'isnan'), ('isinf', 'isinf'), + ('logical_and', 'logical_and'), + ('logical_xor', 'logical_xor'), + ('logical_not', 'logical_not'), + ('logical_or', 'logical_or'), ]: interpleveldefs[exposed] = "interp_ufuncs.get(space).%s" % impl diff --git a/pypy/module/micronumpy/interp_iter.py b/pypy/module/micronumpy/interp_iter.py --- a/pypy/module/micronumpy/interp_iter.py +++ b/pypy/module/micronumpy/interp_iter.py @@ -86,8 +86,9 @@ def apply_transformations(self, arr, transformations): v = self - for transform in transformations: - v = v.transform(arr, transform) + if transformations is not None: + for transform in transformations: + v = v.transform(arr, transform) return v def transform(self, arr, t): diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -3,7 +3,7 @@ from pypy.interpreter.gateway import interp2app, unwrap_spec from pypy.interpreter.typedef import TypeDef, GetSetProperty from pypy.module.micronumpy import (interp_ufuncs, interp_dtype, interp_boxes, - signature, support) + signature, support, loop) from pypy.module.micronumpy.strides import (calculate_slice_strides, shape_agreement, find_shape_and_elems, get_shape_from_iterable, calc_new_strides, to_coords) @@ -12,39 +12,11 @@ from pypy.rpython.lltypesystem import lltype, rffi from pypy.tool.sourcetools import func_with_new_name from pypy.rlib.rstring import StringBuilder -from pypy.module.micronumpy.interp_iter import (ArrayIterator, OneDimIterator, +from pypy.module.micronumpy.interp_iter import (ArrayIterator, SkipLastAxisIterator, Chunk, ViewIterator) from pypy.module.micronumpy.appbridge import get_appbridge_cache -numpy_driver = jit.JitDriver( - greens=['shapelen', 'sig'], - virtualizables=['frame'], - reds=['result_size', 'frame', 'ri', 'self', 'result'], - get_printable_location=signature.new_printable_location('numpy'), - name='numpy', -) -all_driver = jit.JitDriver( - greens=['shapelen', 'sig'], - virtualizables=['frame'], - reds=['frame', 'self', 'dtype'], - get_printable_location=signature.new_printable_location('all'), - name='numpy_all', -) -any_driver = jit.JitDriver( - greens=['shapelen', 'sig'], - virtualizables=['frame'], - reds=['frame', 'self', 'dtype'], - get_printable_location=signature.new_printable_location('any'), - name='numpy_any', -) -slice_driver = jit.JitDriver( - greens=['shapelen', 'sig'], - virtualizables=['frame'], - reds=['self', 'frame', 'arr'], - get_printable_location=signature.new_printable_location('slice'), - name='numpy_slice', -) count_driver = jit.JitDriver( greens=['shapelen'], virtualizables=['frame'], @@ -173,6 +145,8 @@ descr_prod = _reduce_ufunc_impl("multiply", True) descr_max = _reduce_ufunc_impl("maximum") descr_min = _reduce_ufunc_impl("minimum") + descr_all = _reduce_ufunc_impl('logical_and') + descr_any = _reduce_ufunc_impl('logical_or') def _reduce_argmax_argmin_impl(op_name): reduce_driver = jit.JitDriver( @@ -212,40 +186,6 @@ return space.wrap(loop(self)) return func_with_new_name(impl, "reduce_arg%s_impl" % op_name) - def _all(self): - dtype = self.find_dtype() - sig = self.find_sig() - frame = sig.create_frame(self) - shapelen = len(self.shape) - while not frame.done(): - all_driver.jit_merge_point(sig=sig, - 
shapelen=shapelen, self=self, - dtype=dtype, frame=frame) - if not dtype.itemtype.bool(sig.eval(frame, self)): - return False - frame.next(shapelen) - return True - - def descr_all(self, space): - return space.wrap(self._all()) - - def _any(self): - dtype = self.find_dtype() - sig = self.find_sig() - frame = sig.create_frame(self) - shapelen = len(self.shape) - while not frame.done(): - any_driver.jit_merge_point(sig=sig, frame=frame, - shapelen=shapelen, self=self, - dtype=dtype) - if dtype.itemtype.bool(sig.eval(frame, self)): - return True - frame.next(shapelen) - return False - - def descr_any(self, space): - return space.wrap(self._any()) - descr_argmax = _reduce_argmax_argmin_impl("max") descr_argmin = _reduce_argmax_argmin_impl("min") @@ -685,6 +625,9 @@ raise OperationError(space.w_NotImplementedError, space.wrap( "non-int arg not supported")) + def compute_first_step(self, sig, frame): + pass + def convert_to_array(space, w_obj): if isinstance(w_obj, BaseArray): return w_obj @@ -750,22 +693,9 @@ raise NotImplementedError def compute(self): - result = W_NDimArray(self.size, self.shape, self.find_dtype()) - shapelen = len(self.shape) - sig = self.find_sig() - frame = sig.create_frame(self) - ri = ArrayIterator(self.size) - while not ri.done(): - numpy_driver.jit_merge_point(sig=sig, - shapelen=shapelen, - result_size=self.size, - frame=frame, - ri=ri, - self=self, result=result) - result.setitem(ri.offset, sig.eval(frame, self)) - frame.next(shapelen) - ri = ri.next(shapelen) - return result + ra = ResultArray(self, self.size, self.shape, self.res_dtype) + loop.compute(ra) + return ra.left def force_if_needed(self): if self.forced_result is None: @@ -823,7 +753,8 @@ def create_sig(self): if self.forced_result is not None: return self.forced_result.create_sig() - return signature.Call1(self.ufunc, self.name, self.values.create_sig()) + return signature.Call1(self.ufunc, self.name, self.calc_dtype, + self.values.create_sig()) class Call2(VirtualArray): """ @@ -864,6 +795,66 @@ return signature.Call2(self.ufunc, self.name, self.calc_dtype, self.left.create_sig(), self.right.create_sig()) +class ResultArray(Call2): + def __init__(self, child, size, shape, dtype, res=None, order='C'): + if res is None: + res = W_NDimArray(size, shape, dtype, order) + Call2.__init__(self, None, 'assign', shape, dtype, dtype, res, child) + + def create_sig(self): + return signature.ResultSignature(self.res_dtype, self.left.create_sig(), + self.right.create_sig()) + +def done_if_true(dtype, val): + return dtype.itemtype.bool(val) + +def done_if_false(dtype, val): + return not dtype.itemtype.bool(val) + +class ReduceArray(Call2): + def __init__(self, func, name, identity, child, dtype): + self.identity = identity + Call2.__init__(self, func, name, [1], dtype, dtype, None, child) + + def compute_first_step(self, sig, frame): + assert isinstance(sig, signature.ReduceSignature) + if self.identity is None: + frame.cur_value = sig.right.eval(frame, self.right).convert_to( + self.calc_dtype) + frame.next(len(self.right.shape)) + else: + frame.cur_value = self.identity.convert_to(self.calc_dtype) + + def create_sig(self): + if self.name == 'logical_and': + done_func = done_if_false + elif self.name == 'logical_or': + done_func = done_if_true + else: + done_func = None + return signature.ReduceSignature(self.ufunc, self.name, self.res_dtype, + signature.ScalarSignature(self.res_dtype), + self.right.create_sig(), done_func) + +class AxisReduce(Call2): + _immutable_fields_ = ['left', 'right'] + + def __init__(self, 
ufunc, name, identity, shape, dtype, left, right, dim): + Call2.__init__(self, ufunc, name, shape, dtype, dtype, + left, right) + self.dim = dim + self.identity = identity + + def compute_first_step(self, sig, frame): + if self.identity is not None: + frame.identity = self.identity.convert_to(self.calc_dtype) + + def create_sig(self): + return signature.AxisReduceSignature(self.ufunc, self.name, + self.res_dtype, + signature.ScalarSignature(self.res_dtype), + self.right.create_sig()) + class SliceArray(Call2): def __init__(self, shape, dtype, left, right, no_broadcast=False): self.no_broadcast = no_broadcast @@ -882,18 +873,6 @@ self.calc_dtype, lsig, rsig) -class AxisReduce(Call2): - """ NOTE: this is only used as a container, you should never - encounter such things in the wild. Remove this comment - when we'll make AxisReduce lazy - """ - _immutable_fields_ = ['left', 'right'] - - def __init__(self, ufunc, name, shape, dtype, left, right, dim): - Call2.__init__(self, ufunc, name, shape, dtype, dtype, - left, right) - self.dim = dim - class ConcreteArray(BaseArray): """ An array that have actual storage, whether owned or not """ @@ -979,7 +958,7 @@ self._fast_setslice(space, w_value) else: arr = SliceArray(self.shape, self.dtype, self, w_value) - self._sliceloop(arr) + loop.compute(arr) def _fast_setslice(self, space, w_value): assert isinstance(w_value, ConcreteArray) @@ -1003,17 +982,6 @@ source.next() dest.next() - def _sliceloop(self, arr): - sig = arr.find_sig() - frame = sig.create_frame(arr) - shapelen = len(self.shape) - while not frame.done(): - slice_driver.jit_merge_point(sig=sig, frame=frame, self=self, - arr=arr, - shapelen=shapelen) - sig.eval(frame, arr) - frame.next(shapelen) - def copy(self, space): array = W_NDimArray(self.size, self.shape[:], self.dtype, self.order) array.setslice(space, self) @@ -1039,9 +1007,9 @@ parent.order, parent) self.start = start - def create_iter(self): + def create_iter(self, transforms=None): return ViewIterator(self.start, self.strides, self.backstrides, - self.shape) + self.shape).apply_transformations(self, transforms) def setshape(self, space, new_shape): if len(self.shape) < 1: @@ -1090,8 +1058,8 @@ self.shape = new_shape self.calc_strides(new_shape) - def create_iter(self): - return ArrayIterator(self.size) + def create_iter(self, transforms=None): + return ArrayIterator(self.size).apply_transformations(self, transforms) def create_sig(self): return signature.ArraySignature(self.dtype) @@ -1353,6 +1321,9 @@ def descr_iter(self): return self + def descr_len(self, space): + return space.wrap(self.size) + def descr_index(self, space): return space.wrap(self.index) @@ -1427,20 +1398,29 @@ def create_sig(self): return signature.FlatSignature(self.base.dtype) + def create_iter(self, transforms=None): + return ViewIterator(self.base.start, self.base.strides, + self.base.backstrides, + self.base.shape).apply_transformations(self.base, + transforms) + def descr_base(self, space): return space.wrap(self.base) W_FlatIterator.typedef = TypeDef( 'flatiter', __iter__ = interp2app(W_FlatIterator.descr_iter), + __len__ = interp2app(W_FlatIterator.descr_len), __getitem__ = interp2app(W_FlatIterator.descr_getitem), __setitem__ = interp2app(W_FlatIterator.descr_setitem), + __eq__ = interp2app(BaseArray.descr_eq), __ne__ = interp2app(BaseArray.descr_ne), __lt__ = interp2app(BaseArray.descr_lt), __le__ = interp2app(BaseArray.descr_le), __gt__ = interp2app(BaseArray.descr_gt), __ge__ = interp2app(BaseArray.descr_ge), + base = 
GetSetProperty(W_FlatIterator.descr_base), index = GetSetProperty(W_FlatIterator.descr_index), coords = GetSetProperty(W_FlatIterator.descr_coords), diff --git a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py --- a/pypy/module/micronumpy/interp_ufuncs.py +++ b/pypy/module/micronumpy/interp_ufuncs.py @@ -2,31 +2,10 @@ from pypy.interpreter.error import OperationError, operationerrfmt from pypy.interpreter.gateway import interp2app, unwrap_spec, NoneNotWrapped from pypy.interpreter.typedef import TypeDef, GetSetProperty, interp_attrproperty -from pypy.module.micronumpy import interp_boxes, interp_dtype, support -from pypy.module.micronumpy.signature import (ReduceSignature, find_sig, - new_printable_location, AxisReduceSignature, ScalarSignature) -from pypy.rlib import jit +from pypy.module.micronumpy import interp_boxes, interp_dtype, support, loop from pypy.rlib.rarithmetic import LONG_BIT from pypy.tool.sourcetools import func_with_new_name - -reduce_driver = jit.JitDriver( - greens=['shapelen', "sig"], - virtualizables=["frame"], - reds=["frame", "self", "dtype", "value", "obj"], - get_printable_location=new_printable_location('reduce'), - name='numpy_reduce', -) - -axisreduce_driver = jit.JitDriver( - greens=['shapelen', 'sig'], - virtualizables=['frame'], - reds=['self','arr', 'identity', 'frame'], - name='numpy_axisreduce', - get_printable_location=new_printable_location('axisreduce'), -) - - class W_Ufunc(Wrappable): _attrs_ = ["name", "promote_to_float", "promote_bools", "identity"] _immutable_fields_ = ["promote_to_float", "promote_bools", "name"] @@ -140,7 +119,7 @@ def reduce(self, space, w_obj, multidim, promote_to_largest, dim, keepdims=False): from pypy.module.micronumpy.interp_numarray import convert_to_array, \ - Scalar + Scalar, ReduceArray if self.argcount != 2: raise OperationError(space.w_ValueError, space.wrap("reduce only " "supported for binary functions")) @@ -151,96 +130,37 @@ if isinstance(obj, Scalar): raise OperationError(space.w_TypeError, space.wrap("cannot reduce " "on a scalar")) - size = obj.size - dtype = find_unaryop_result_dtype( - space, obj.find_dtype(), - promote_to_float=self.promote_to_float, - promote_to_largest=promote_to_largest, - promote_bools=True - ) + if self.comparison_func: + dtype = interp_dtype.get_dtype_cache(space).w_booldtype + else: + dtype = find_unaryop_result_dtype( + space, obj.find_dtype(), + promote_to_float=self.promote_to_float, + promote_to_largest=promote_to_largest, + promote_bools=True + ) shapelen = len(obj.shape) if self.identity is None and size == 0: raise operationerrfmt(space.w_ValueError, "zero-size array to " "%s.reduce without identity", self.name) if shapelen > 1 and dim >= 0: - res = self.do_axis_reduce(obj, dtype, dim, keepdims) - return space.wrap(res) - scalarsig = ScalarSignature(dtype) - sig = find_sig(ReduceSignature(self.func, self.name, dtype, - scalarsig, - obj.create_sig()), obj) - frame = sig.create_frame(obj) - if self.identity is None: - value = sig.eval(frame, obj).convert_to(dtype) - frame.next(shapelen) - else: - value = self.identity.convert_to(dtype) - return self.reduce_loop(shapelen, sig, frame, value, obj, dtype) + return self.do_axis_reduce(obj, dtype, dim, keepdims) + arr = ReduceArray(self.func, self.name, self.identity, obj, dtype) + return loop.compute(arr) def do_axis_reduce(self, obj, dtype, dim, keepdims): from pypy.module.micronumpy.interp_numarray import AxisReduce,\ W_NDimArray - if keepdims: shape = obj.shape[:dim] + [1] + obj.shape[dim + 1:] 
else: shape = obj.shape[:dim] + obj.shape[dim + 1:] result = W_NDimArray(support.product(shape), shape, dtype) - rightsig = obj.create_sig() - # note - this is just a wrapper so signature can fetch - # both left and right, nothing more, especially - # this is not a true virtual array, because shapes - # don't quite match - arr = AxisReduce(self.func, self.name, obj.shape, dtype, + arr = AxisReduce(self.func, self.name, self.identity, obj.shape, dtype, result, obj, dim) - scalarsig = ScalarSignature(dtype) - sig = find_sig(AxisReduceSignature(self.func, self.name, dtype, - scalarsig, rightsig), arr) - assert isinstance(sig, AxisReduceSignature) - frame = sig.create_frame(arr) - shapelen = len(obj.shape) - if self.identity is not None: - identity = self.identity.convert_to(dtype) - else: - identity = None - self.reduce_axis_loop(frame, sig, shapelen, arr, identity) - return result - - def reduce_axis_loop(self, frame, sig, shapelen, arr, identity): - # note - we can be advanterous here, depending on the exact field - # layout. For now let's say we iterate the original way and - # simply follow the original iteration order - while not frame.done(): - axisreduce_driver.jit_merge_point(frame=frame, self=self, - sig=sig, - identity=identity, - shapelen=shapelen, arr=arr) - iterator = frame.get_final_iter() - v = sig.eval(frame, arr).convert_to(sig.calc_dtype) - if iterator.first_line: - if identity is not None: - value = self.func(sig.calc_dtype, identity, v) - else: - value = v - else: - cur = arr.left.getitem(iterator.offset) - value = self.func(sig.calc_dtype, cur, v) - arr.left.setitem(iterator.offset, value) - frame.next(shapelen) - - def reduce_loop(self, shapelen, sig, frame, value, obj, dtype): - while not frame.done(): - reduce_driver.jit_merge_point(sig=sig, - shapelen=shapelen, self=self, - value=value, obj=obj, frame=frame, - dtype=dtype) - assert isinstance(sig, ReduceSignature) - value = sig.binfunc(dtype, value, - sig.eval(frame, obj).convert_to(dtype)) - frame.next(shapelen) - return value - + loop.compute(arr) + return arr.left class W_Ufunc1(W_Ufunc): argcount = 1 @@ -312,7 +232,6 @@ w_lhs.value.convert_to(calc_dtype), w_rhs.value.convert_to(calc_dtype) )) - new_shape = shape_agreement(space, w_lhs.shape, w_rhs.shape) w_res = Call2(self.func, self.name, new_shape, calc_dtype, @@ -477,6 +396,13 @@ ("isnan", "isnan", 1, {"bool_result": True}), ("isinf", "isinf", 1, {"bool_result": True}), + ('logical_and', 'logical_and', 2, {'comparison_func': True, + 'identity': 1}), + ('logical_or', 'logical_or', 2, {'comparison_func': True, + 'identity': 0}), + ('logical_xor', 'logical_xor', 2, {'comparison_func': True}), + ('logical_not', 'logical_not', 1, {'bool_result': True}), + ("maximum", "max", 2), ("minimum", "min", 2), diff --git a/pypy/module/micronumpy/loop.py b/pypy/module/micronumpy/loop.py new file mode 100644 --- /dev/null +++ b/pypy/module/micronumpy/loop.py @@ -0,0 +1,83 @@ + +""" This file is the main run loop as well as evaluation loops for various +signatures +""" + +from pypy.rlib.jit import JitDriver, hint, unroll_safe, promote +from pypy.module.micronumpy.interp_iter import ConstantIterator + +class NumpyEvalFrame(object): + _virtualizable2_ = ['iterators[*]', 'final_iter', 'arraylist[*]', + 'value', 'identity', 'cur_value'] + + @unroll_safe + def __init__(self, iterators, arrays): + self = hint(self, access_directly=True, fresh_virtualizable=True) + self.iterators = iterators[:] + self.arrays = arrays[:] + for i in range(len(self.iterators)): + iter = self.iterators[i] 
+ if not isinstance(iter, ConstantIterator): + self.final_iter = i + break + else: + self.final_iter = -1 + self.cur_value = None + self.identity = None + + def done(self): + final_iter = promote(self.final_iter) + if final_iter < 0: + assert False + return self.iterators[final_iter].done() + + @unroll_safe + def next(self, shapelen): + for i in range(len(self.iterators)): + self.iterators[i] = self.iterators[i].next(shapelen) + + @unroll_safe + def next_from_second(self, shapelen): + """ Don't increase the first iterator + """ + for i in range(1, len(self.iterators)): + self.iterators[i] = self.iterators[i].next(shapelen) + + def next_first(self, shapelen): + self.iterators[0] = self.iterators[0].next(shapelen) + + def get_final_iter(self): + final_iter = promote(self.final_iter) + if final_iter < 0: + assert False + return self.iterators[final_iter] + +def get_printable_location(shapelen, sig): + return 'numpy ' + sig.debug_repr() + ' [%d dims]' % (shapelen,) + +numpy_driver = JitDriver( + greens=['shapelen', 'sig'], + virtualizables=['frame'], + reds=['frame', 'arr'], + get_printable_location=get_printable_location, + name='numpy', +) + +class ComputationDone(Exception): + def __init__(self, value): + self.value = value + +def compute(arr): + sig = arr.find_sig() + shapelen = len(arr.shape) + frame = sig.create_frame(arr) + try: + while not frame.done(): + numpy_driver.jit_merge_point(sig=sig, + shapelen=shapelen, + frame=frame, arr=arr) + sig.eval(frame, arr) + frame.next(shapelen) + return frame.cur_value + except ComputationDone, e: + return e.value diff --git a/pypy/module/micronumpy/signature.py b/pypy/module/micronumpy/signature.py --- a/pypy/module/micronumpy/signature.py +++ b/pypy/module/micronumpy/signature.py @@ -1,9 +1,9 @@ from pypy.rlib.objectmodel import r_dict, compute_identity_hash, compute_hash from pypy.rlib.rarithmetic import intmask -from pypy.module.micronumpy.interp_iter import ViewIterator, ArrayIterator, \ - ConstantIterator, AxisIterator, ViewTransform,\ - BroadcastTransform -from pypy.rlib.jit import hint, unroll_safe, promote +from pypy.module.micronumpy.interp_iter import ConstantIterator, AxisIterator,\ + ViewTransform, BroadcastTransform +from pypy.tool.pairtype import extendabletype +from pypy.module.micronumpy.loop import ComputationDone """ Signature specifies both the numpy expression that has been constructed and the assembler to be compiled. 
This is a very important observation - @@ -54,50 +54,6 @@ known_sigs[sig] = sig return sig -class NumpyEvalFrame(object): - _virtualizable2_ = ['iterators[*]', 'final_iter', 'arraylist[*]', - 'value', 'identity'] - - @unroll_safe - def __init__(self, iterators, arrays): - self = hint(self, access_directly=True, fresh_virtualizable=True) - self.iterators = iterators[:] - self.arrays = arrays[:] - for i in range(len(self.iterators)): - iter = self.iterators[i] - if not isinstance(iter, ConstantIterator): - self.final_iter = i - break - else: - self.final_iter = -1 - - def done(self): - final_iter = promote(self.final_iter) - if final_iter < 0: - assert False - return self.iterators[final_iter].done() - - @unroll_safe - def next(self, shapelen): - for i in range(len(self.iterators)): - self.iterators[i] = self.iterators[i].next(shapelen) - - @unroll_safe - def next_from_second(self, shapelen): - """ Don't increase the first iterator - """ - for i in range(1, len(self.iterators)): - self.iterators[i] = self.iterators[i].next(shapelen) - - def next_first(self, shapelen): - self.iterators[0] = self.iterators[0].next(shapelen) - - def get_final_iter(self): - final_iter = promote(self.final_iter) - if final_iter < 0: - assert False - return self.iterators[final_iter] - def _add_ptr_to_cache(ptr, cache): i = 0 for p in cache: @@ -113,6 +69,8 @@ return r_dict(sigeq_no_numbering, sighash) class Signature(object): + __metaclass_ = extendabletype + _attrs_ = ['iter_no', 'array_no'] _immutable_fields_ = ['iter_no', 'array_no'] @@ -138,11 +96,15 @@ self.iter_no = no def create_frame(self, arr): + from pypy.module.micronumpy.loop import NumpyEvalFrame + iterlist = [] arraylist = [] self._create_iter(iterlist, arraylist, arr, []) - return NumpyEvalFrame(iterlist, arraylist) - + f = NumpyEvalFrame(iterlist, arraylist) + # hook for cur_value being used by reduce + arr.compute_first_step(self, f) + return f class ConcreteSignature(Signature): _immutable_fields_ = ['dtype'] @@ -182,13 +144,10 @@ assert isinstance(concr, ConcreteArray) storage = concr.storage if self.iter_no >= len(iterlist): - iterlist.append(self.allocate_iter(concr, transforms)) + iterlist.append(concr.create_iter(transforms)) if self.array_no >= len(arraylist): arraylist.append(storage) - def allocate_iter(self, arr, transforms): - return ArrayIterator(arr.size).apply_transformations(arr, transforms) - def eval(self, frame, arr): iter = frame.iterators[self.iter_no] return self.dtype.getitem(frame.arrays[self.array_no], iter.offset) @@ -220,22 +179,10 @@ allnumbers.append(no) self.iter_no = no - def allocate_iter(self, arr, transforms): - return ViewIterator(arr.start, arr.strides, arr.backstrides, - arr.shape).apply_transformations(arr, transforms) - class FlatSignature(ViewSignature): def debug_repr(self): return 'Flat' - def allocate_iter(self, arr, transforms): - from pypy.module.micronumpy.interp_numarray import W_FlatIterator - assert isinstance(arr, W_FlatIterator) - return ViewIterator(arr.base.start, arr.base.strides, - arr.base.backstrides, - arr.base.shape).apply_transformations(arr.base, - transforms) - class VirtualSliceSignature(Signature): def __init__(self, child): self.child = child @@ -269,12 +216,13 @@ return self.child.eval(frame, arr.child) class Call1(Signature): - _immutable_fields_ = ['unfunc', 'name', 'child'] + _immutable_fields_ = ['unfunc', 'name', 'child', 'dtype'] - def __init__(self, func, name, child): + def __init__(self, func, name, dtype, child): self.unfunc = func self.child = child self.name = name + 
self.dtype = dtype def hash(self): return compute_hash(self.name) ^ intmask(self.child.hash() << 1) @@ -359,6 +307,17 @@ return 'Call2(%s, %s, %s)' % (self.name, self.left.debug_repr(), self.right.debug_repr()) +class ResultSignature(Call2): + def __init__(self, dtype, left, right): + Call2.__init__(self, None, 'assign', dtype, left, right) + + def eval(self, frame, arr): + from pypy.module.micronumpy.interp_numarray import ResultArray + + assert isinstance(arr, ResultArray) + offset = frame.get_final_iter().offset + arr.left.setitem(offset, self.right.eval(frame, arr.right)) + class BroadcastLeft(Call2): def _invent_numbering(self, cache, allnumbers): self.left._invent_numbering(new_cache(), allnumbers) @@ -400,20 +359,24 @@ self.right._create_iter(iterlist, arraylist, arr.right, rtransforms) class ReduceSignature(Call2): - def _create_iter(self, iterlist, arraylist, arr, transforms): - self.right._create_iter(iterlist, arraylist, arr, transforms) - - def _invent_numbering(self, cache, allnumbers): - self.right._invent_numbering(cache, allnumbers) - - def _invent_array_numbering(self, arr, cache): - self.right._invent_array_numbering(arr, cache) - + _immutable_fields_ = ['binfunc', 'name', 'calc_dtype', + 'left', 'right', 'done_func'] + + def __init__(self, func, name, calc_dtype, left, right, + done_func): + Call2.__init__(self, func, name, calc_dtype, left, right) + self.done_func = done_func + def eval(self, frame, arr): - return self.right.eval(frame, arr) + from pypy.module.micronumpy.interp_numarray import ReduceArray + assert isinstance(arr, ReduceArray) + rval = self.right.eval(frame, arr.right).convert_to(self.calc_dtype) + if self.done_func is not None and self.done_func(self.calc_dtype, rval): + raise ComputationDone(rval) + frame.cur_value = self.binfunc(self.calc_dtype, frame.cur_value, rval) def debug_repr(self): - return 'ReduceSig(%s, %s)' % (self.name, self.right.debug_repr()) + return 'ReduceSig(%s)' % (self.name, self.right.debug_repr()) class SliceloopSignature(Call2): def eval(self, frame, arr): @@ -467,7 +430,17 @@ from pypy.module.micronumpy.interp_numarray import AxisReduce assert isinstance(arr, AxisReduce) - return self.right.eval(frame, arr.right).convert_to(self.calc_dtype) + iterator = frame.get_final_iter() + v = self.right.eval(frame, arr.right).convert_to(self.calc_dtype) + if iterator.first_line: + if frame.identity is not None: + value = self.binfunc(self.calc_dtype, frame.identity, v) + else: + value = v + else: + cur = arr.left.getitem(iterator.offset) + value = self.binfunc(self.calc_dtype, cur, v) + arr.left.setitem(iterator.offset, value) def debug_repr(self): return 'AxisReduceSig(%s, %s)' % (self.name, self.right.debug_repr()) diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -1,11 +1,13 @@ import py + +from pypy.conftest import gettestobjspace, option +from pypy.interpreter.error import OperationError +from pypy.module.micronumpy import signature +from pypy.module.micronumpy.appbridge import get_appbridge_cache +from pypy.module.micronumpy.interp_iter import Chunk +from pypy.module.micronumpy.interp_numarray import W_NDimArray, shape_agreement from pypy.module.micronumpy.test.test_base import BaseNumpyAppTest -from pypy.module.micronumpy.interp_numarray import W_NDimArray, shape_agreement -from pypy.module.micronumpy.interp_iter import Chunk -from pypy.module.micronumpy import signature 
-from pypy.interpreter.error import OperationError -from pypy.conftest import gettestobjspace class MockDtype(object): @@ -936,10 +938,9 @@ [[86, 302, 518], [110, 390, 670], [134, 478, 822]]]).all() c = dot(a, b[:, 2]) assert (c == [[62, 214, 366], [518, 670, 822]]).all() - a = arange(3*4*5*6).reshape((3,4,5,6)) - b = arange(3*4*5*6)[::-1].reshape((5,4,6,3)) - assert dot(a, b)[2,3,2,1,2,2] == 499128 - assert sum(a[2,3,2,:] * b[1,2,:,2]) == 499128 + a = arange(3*2*6).reshape((3,2,6)) + b = arange(3*2*6)[::-1].reshape((2,6,3)) + assert dot(a, b)[2,0,1,2] == 1140 def test_dot_constant(self): from _numpypy import array, dot @@ -1488,24 +1489,26 @@ def test_flatiter_view(self): from _numpypy import arange a = arange(10).reshape(5, 2) - #no == yet. - # a[::2].flat == [0, 1, 4, 5, 8, 9] - isequal = True - for y,z in zip(a[::2].flat, [0, 1, 4, 5, 8, 9]): - if y != z: - isequal = False - assert isequal == True + assert (a[::2].flat == [0, 1, 4, 5, 8, 9]).all() def test_flatiter_transpose(self): from _numpypy import arange - a = arange(10).reshape(2,5).T + a = arange(10).reshape(2, 5).T b = a.flat assert (b[:5] == [0, 5, 1, 6, 2]).all() b.next() b.next() b.next() assert b.index == 3 - assert b.coords == (1,1) + assert b.coords == (1, 1) + + def test_flatiter_len(self): + from _numpypy import arange + + assert len(arange(10).flat) == 10 + assert len(arange(10).reshape(2, 5).flat) == 10 + assert len(arange(10)[:2].flat) == 2 + assert len((arange(2) + arange(2)).flat) == 2 def test_slice_copy(self): from _numpypy import zeros @@ -1759,10 +1762,11 @@ assert len(a) == 8 assert arange(False, True, True).dtype is dtype(int) -from pypy.module.micronumpy.appbridge import get_appbridge_cache class AppTestRepr(BaseNumpyAppTest): def setup_class(cls): + if option.runappdirect: + py.test.skip("Can't be run directly.") BaseNumpyAppTest.setup_class.im_func(cls) cache = get_appbridge_cache(cls.space) cls.old_array_repr = cache.w_array_repr @@ -1776,6 +1780,8 @@ assert str(array([1, 2, 3])) == 'array([1, 2, 3])' def teardown_class(cls): + if option.runappdirect: + return cache = get_appbridge_cache(cls.space) cache.w_array_repr = cls.old_array_repr cache.w_array_str = cls.old_array_str diff --git a/pypy/module/micronumpy/test/test_ufuncs.py b/pypy/module/micronumpy/test/test_ufuncs.py --- a/pypy/module/micronumpy/test/test_ufuncs.py +++ b/pypy/module/micronumpy/test/test_ufuncs.py @@ -347,8 +347,9 @@ raises((ValueError, TypeError), add.reduce, 1) def test_reduce_1d(self): - from _numpypy import add, maximum + from _numpypy import add, maximum, less + assert less.reduce([5, 4, 3, 2, 1]) assert add.reduce([1, 2, 3]) == 6 assert maximum.reduce([1]) == 1 assert maximum.reduce([1, 2, 3]) == 3 @@ -433,3 +434,14 @@ assert (isnan(array([0.2, float('inf'), float('nan')])) == [False, False, True]).all() assert (isinf(array([0.2, float('inf'), float('nan')])) == [False, True, False]).all() assert isinf(array([0.2])).dtype.kind == 'b' + + def test_logical_ops(self): + from _numpypy import logical_and, logical_or, logical_xor, logical_not + + assert (logical_and([True, False , True, True], [1, 1, 3, 0]) + == [True, False, True, False]).all() + assert (logical_or([True, False, True, False], [1, 2, 0, 0]) + == [True, True, True, False]).all() + assert (logical_xor([True, False, True, False], [1, 2, 0, 0]) + == [False, True, True, False]).all() + assert (logical_not([True, False]) == [False, True]).all() diff --git a/pypy/module/micronumpy/test/test_zjit.py b/pypy/module/micronumpy/test/test_zjit.py --- 
a/pypy/module/micronumpy/test/test_zjit.py +++ b/pypy/module/micronumpy/test/test_zjit.py @@ -84,7 +84,7 @@ def test_add(self): result = self.run("add") self.check_simple_loop({'getinteriorfield_raw': 2, 'float_add': 1, - 'setinteriorfield_raw': 1, 'int_add': 2, + 'setinteriorfield_raw': 1, 'int_add': 1, 'int_ge': 1, 'guard_false': 1, 'jump': 1, 'arraylen_gc': 1}) assert result == 3 + 3 @@ -99,7 +99,7 @@ result = self.run("float_add") assert result == 3 + 3 self.check_simple_loop({"getinteriorfield_raw": 1, "float_add": 1, - "setinteriorfield_raw": 1, "int_add": 2, + "setinteriorfield_raw": 1, "int_add": 1, "int_ge": 1, "guard_false": 1, "jump": 1, 'arraylen_gc': 1}) @@ -198,7 +198,8 @@ result = self.run("any") assert result == 1 self.check_simple_loop({"getinteriorfield_raw": 2, "float_add": 1, - "float_ne": 1, "int_add": 1, + "int_and": 1, "int_add": 1, + 'cast_float_to_int': 1, "int_ge": 1, "jump": 1, "guard_false": 2, 'arraylen_gc': 1}) @@ -239,7 +240,7 @@ assert result == -6 self.check_simple_loop({"getinteriorfield_raw": 2, "float_add": 1, "float_neg": 1, - "setinteriorfield_raw": 1, "int_add": 2, + "setinteriorfield_raw": 1, "int_add": 1, "int_ge": 1, "guard_false": 1, "jump": 1, 'arraylen_gc': 1}) @@ -321,7 +322,7 @@ # int_add might be 1 here if we try slightly harder with # reusing indexes or some optimization self.check_simple_loop({'float_add': 1, 'getinteriorfield_raw': 2, - 'guard_false': 1, 'int_add': 2, 'int_ge': 1, + 'guard_false': 1, 'int_add': 1, 'int_ge': 1, 'jump': 1, 'setinteriorfield_raw': 1, 'arraylen_gc': 1}) @@ -387,7 +388,7 @@ assert result == 4 self.check_trace_count(1) self.check_simple_loop({'getinteriorfield_raw': 2, 'float_add': 1, - 'setinteriorfield_raw': 1, 'int_add': 2, + 'setinteriorfield_raw': 1, 'int_add': 1, 'int_ge': 1, 'guard_false': 1, 'jump': 1, 'arraylen_gc': 1}) def define_flat_iter(): @@ -403,7 +404,7 @@ assert result == 6 self.check_trace_count(1) self.check_simple_loop({'getinteriorfield_raw': 2, 'float_add': 1, - 'setinteriorfield_raw': 1, 'int_add': 3, + 'setinteriorfield_raw': 1, 'int_add': 2, 'int_ge': 1, 'guard_false': 1, 'arraylen_gc': 1, 'jump': 1}) diff --git a/pypy/module/micronumpy/types.py b/pypy/module/micronumpy/types.py --- a/pypy/module/micronumpy/types.py +++ b/pypy/module/micronumpy/types.py @@ -181,6 +181,22 @@ def ge(self, v1, v2): return v1 >= v2 + @raw_binary_op + def logical_and(self, v1, v2): + return bool(v1) and bool(v2) + + @raw_binary_op + def logical_or(self, v1, v2): + return bool(v1) or bool(v2) + + @raw_unary_op + def logical_not(self, v): + return not bool(v) + + @raw_binary_op + def logical_xor(self, v1, v2): + return bool(v1) ^ bool(v2) + def bool(self, v): return bool(self.for_computation(self.unbox(v))) diff --git a/pypy/module/posix/interp_posix.py b/pypy/module/posix/interp_posix.py --- a/pypy/module/posix/interp_posix.py +++ b/pypy/module/posix/interp_posix.py @@ -37,7 +37,7 @@ if space.isinstance_w(w_obj, space.w_unicode): w_obj = space.call_method(w_obj, 'encode', getfilesystemencoding(space)) - return space.bytes_w(w_obj) + return space.bytes0_w(w_obj) class FileEncoder(object): def __init__(self, space, w_obj): @@ -56,7 +56,7 @@ self.w_obj = w_obj def as_bytes(self): - return self.space.bytes_w(self.w_obj) + return self.space.bytes0_w(self.w_obj) def as_unicode(self): space = self.space @@ -71,7 +71,7 @@ fname = FileEncoder(space, w_fname) return func(fname, *args) else: - fname = space.bytes_w(w_fname) + fname = space.bytes0_w(w_fname) return func(fname, *args) return dispatch @@ -369,7 +369,7 @@ 
space.wrap(times[3]), space.wrap(times[4])]) - at unwrap_spec(cmd=str) + at unwrap_spec(cmd='str0') def system(space, cmd): """Execute the command (a string) in a subshell.""" try: @@ -401,7 +401,7 @@ fullpath = rposix._getfullpathname(path) w_fullpath = space.wrap(fullpath) else: - path = space.str_w(w_path) + path = space.str0_w(w_path) fullpath = rposix._getfullpathname(path) w_fullpath = space.wrap(fullpath) except OSError, e: @@ -546,7 +546,7 @@ for s in result ] else: - dirname = space.str_w(w_dirname) + dirname = space.str0_w(w_dirname) result = rposix.listdir(dirname) result_w = [space.wrapbytes(s) for s in result] except OSError, e: @@ -633,7 +633,7 @@ import signal os.kill(os.getpid(), signal.SIGABRT) - at unwrap_spec(src=str, dst=str) + at unwrap_spec(src='str0', dst='str0') def link(space, src, dst): "Create a hard link to a file." try: @@ -648,7 +648,7 @@ except OSError, e: raise wrap_oserror(space, e) - at unwrap_spec(path=str) + at unwrap_spec(path='str0') def readlink(space, path): "Return a string representing the path to which the symbolic link points." try: @@ -763,7 +763,7 @@ w_keys = space.call_method(w_env, 'keys') for w_key in space.unpackiterable(w_keys): w_value = space.getitem(w_env, w_key) - env[space.str_w(w_key)] = space.str_w(w_value) + env[space.str0_w(w_key)] = space.str0_w(w_value) return env def execve(space, w_command, w_args, w_env): @@ -783,18 +783,18 @@ except OSError, e: raise wrap_oserror(space, e) - at unwrap_spec(mode=int, path=str) + at unwrap_spec(mode=int, path='str0') def spawnv(space, mode, path, w_args): - args = [space.str_w(w_arg) for w_arg in space.unpackiterable(w_args)] + args = [space.str0_w(w_arg) for w_arg in space.unpackiterable(w_args)] try: ret = os.spawnv(mode, path, args) except OSError, e: raise wrap_oserror(space, e) return space.wrap(ret) - at unwrap_spec(mode=int, path=str) + at unwrap_spec(mode=int, path='str0') def spawnve(space, mode, path, w_args, w_env): - args = [space.str_w(w_arg) for w_arg in space.unpackiterable(w_args)] + args = [space.str0_w(w_arg) for w_arg in space.unpackiterable(w_args)] env = _env2interp(space, w_env) try: ret = os.spawnve(mode, path, args, env) @@ -912,7 +912,7 @@ raise wrap_oserror(space, e) return space.w_None - at unwrap_spec(path=str) + at unwrap_spec(path='str0') def chroot(space, path): """ chroot(path) @@ -1101,7 +1101,7 @@ except OSError, e: raise wrap_oserror(space, e) - at unwrap_spec(path=str, uid=c_uid_t, gid=c_gid_t) + at unwrap_spec(path='str0', uid=c_uid_t, gid=c_gid_t) def chown(space, path, uid, gid): check_uid_range(space, uid) check_uid_range(space, gid) @@ -1111,7 +1111,7 @@ raise wrap_oserror(space, e, path) return space.w_None - at unwrap_spec(path=str, uid=c_uid_t, gid=c_gid_t) + at unwrap_spec(path='str0', uid=c_uid_t, gid=c_gid_t) def lchown(space, path, uid, gid): check_uid_range(space, uid) check_uid_range(space, gid) diff --git a/pypy/module/sys/state.py b/pypy/module/sys/state.py --- a/pypy/module/sys/state.py +++ b/pypy/module/sys/state.py @@ -74,7 +74,7 @@ # return importlist - at unwrap_spec(srcdir=str) + at unwrap_spec(srcdir='str0') def pypy_initial_path(space, srcdir): try: path = getinitialpath(get(space), srcdir) diff --git a/pypy/module/zipimport/interp_zipimport.py b/pypy/module/zipimport/interp_zipimport.py --- a/pypy/module/zipimport/interp_zipimport.py +++ b/pypy/module/zipimport/interp_zipimport.py @@ -342,7 +342,7 @@ space = self.space return space.wrap(self.filename) - at unwrap_spec(name=str) + at unwrap_spec(name='str0') def 
descr_new_zipimporter(space, w_type, name): w = space.wrap ok = False diff --git a/pypy/rlib/rstring.py b/pypy/rlib/rstring.py --- a/pypy/rlib/rstring.py +++ b/pypy/rlib/rstring.py @@ -205,3 +205,45 @@ assert p.const is None return SomeUnicodeBuilder(can_be_None=True) +#___________________________________________________________________ +# Support functions for SomeString.no_nul + +def assert_str0(fname): + assert '\x00' not in fname, "NUL byte in string" + return fname + +class Entry(ExtRegistryEntry): + _about_ = assert_str0 + + def compute_result_annotation(self, s_obj): + if s_None.contains(s_obj): + return s_obj + assert isinstance(s_obj, (SomeString, SomeUnicodeString)) + if s_obj.no_nul: + return s_obj + new_s_obj = SomeObject.__new__(s_obj.__class__) + new_s_obj.__dict__ = s_obj.__dict__.copy() + new_s_obj.no_nul = True + return new_s_obj + + def specialize_call(self, hop): + hop.exception_cannot_occur() + return hop.inputarg(hop.args_r[0], arg=0) + +def check_str0(fname): + """A 'probe' to trigger a failure at translation time, if the + string was not proved to not contain NUL characters.""" + assert '\x00' not in fname, "NUL byte in string" + +class Entry(ExtRegistryEntry): + _about_ = check_str0 + + def compute_result_annotation(self, s_obj): + if not isinstance(s_obj, (SomeString, SomeUnicodeString)): + return s_obj + if not s_obj.no_nul: + raise ValueError("Value is not no_nul") + + def specialize_call(self, hop): + pass + diff --git a/pypy/rlib/test/test_rmarshal.py b/pypy/rlib/test/test_rmarshal.py --- a/pypy/rlib/test/test_rmarshal.py +++ b/pypy/rlib/test/test_rmarshal.py @@ -169,7 +169,7 @@ assert st2.st_mode == st.st_mode assert st2[9] == st[9] return buf - fn = compile(f, [str]) + fn = compile(f, [annmodel.s_Str0]) res = fn('.') st = os.stat('.') sttuple = marshal.loads(res) diff --git a/pypy/rpython/extfunc.py b/pypy/rpython/extfunc.py --- a/pypy/rpython/extfunc.py +++ b/pypy/rpython/extfunc.py @@ -2,7 +2,7 @@ from pypy.rpython.extregistry import ExtRegistryEntry from pypy.rpython.lltypesystem.lltype import typeOf from pypy.objspace.flow.model import Constant -from pypy.annotation.model import unionof +from pypy.annotation import model as annmodel from pypy.annotation.signature import annotation import py, sys @@ -138,7 +138,6 @@ # we defer a bit annotation here def compute_result_annotation(self): - from pypy.annotation import model as annmodel return annmodel.SomeGenericCallable([annotation(i, self.bookkeeper) for i in self.instance.args], annotation(self.instance.result, self.bookkeeper)) @@ -152,8 +151,17 @@ signature_args = [annotation(arg, None) for arg in args] assert len(args_s) == len(signature_args),\ "Argument number mismatch" + + check_no_nul = False + if hasattr(self, 'bookkeeper'): + config = self.bookkeeper.annotator.translator.config + if config.translation.check_str_without_nul: + check_no_nul = True + for i, expected in enumerate(signature_args): - arg = unionof(args_s[i], expected) + if not check_no_nul: + expected = annmodel.remove_no_nul(expected) + arg = annmodel.unionof(args_s[i], expected) if not expected.contains(arg): name = getattr(self, 'name', None) if not name: diff --git a/pypy/rpython/extfuncregistry.py b/pypy/rpython/extfuncregistry.py --- a/pypy/rpython/extfuncregistry.py +++ b/pypy/rpython/extfuncregistry.py @@ -85,7 +85,8 @@ # llinterpreter path_functions = [ - ('join', [str, str], str), + ('join', [ll_os.str0, ll_os.str0], ll_os.str0), + ('dirname', [ll_os.str0], ll_os.str0), ] for name, args, res in path_functions: diff --git 
a/pypy/rpython/lltypesystem/rffi.py b/pypy/rpython/lltypesystem/rffi.py --- a/pypy/rpython/lltypesystem/rffi.py +++ b/pypy/rpython/lltypesystem/rffi.py @@ -15,7 +15,7 @@ from pypy.translator.tool.cbuild import ExternalCompilationInfo from pypy.rpython.annlowlevel import llhelper from pypy.rlib.objectmodel import we_are_translated -from pypy.rlib.rstring import StringBuilder, UnicodeBuilder +from pypy.rlib.rstring import StringBuilder, UnicodeBuilder, assert_str0 from pypy.rlib import jit from pypy.rpython.lltypesystem import llmemory import os, sys @@ -698,7 +698,7 @@ while cp[i] != lastchar: b.append(cp[i]) i += 1 - return b.build() + return assert_str0(b.build()) # str -> char* # Can't inline this because of the raw address manipulation. @@ -804,7 +804,7 @@ while i < maxlen and cp[i] != lastchar: b.append(cp[i]) i += 1 - return b.build() + return assert_str0(b.build()) # char* and size -> str (which can contain null bytes) def charpsize2str(cp, size): @@ -842,6 +842,7 @@ array[i] = str2charp(l[i]) array[len(l)] = lltype.nullptr(CCHARP.TO) return array +liststr2charpp._annenforceargs_ = [[annmodel.s_Str0]] # List of strings def free_charpp(ref): """ frees list of char**, NULL terminated diff --git a/pypy/rpython/module/ll_os.py b/pypy/rpython/module/ll_os.py --- a/pypy/rpython/module/ll_os.py +++ b/pypy/rpython/module/ll_os.py @@ -31,6 +31,10 @@ from pypy.rlib import rgc from pypy.rlib.objectmodel import specialize +str0 = SomeString(no_nul=True) +unicode0 = SomeUnicodeString(no_nul=True) + + def monkeypatch_rposix(posixfunc, unicodefunc, signature): func_name = posixfunc.__name__ @@ -68,6 +72,7 @@ class StringTraits: str = str + str0 = str0 CHAR = rffi.CHAR CCHARP = rffi.CCHARP charp2str = staticmethod(rffi.charp2str) @@ -85,6 +90,7 @@ class UnicodeTraits: str = unicode + str0 = unicode0 CHAR = rffi.WCHAR_T CCHARP = rffi.CWCHARP charp2str = staticmethod(rffi.wcharp2unicode) @@ -301,7 +307,7 @@ rffi.free_charpp(l_args) raise OSError(rposix.get_errno(), "execv failed") - return extdef([str, [str]], s_ImpossibleValue, llimpl=execv_llimpl, + return extdef([str0, [str0]], s_ImpossibleValue, llimpl=execv_llimpl, export_name="ll_os.ll_os_execv") @@ -319,7 +325,8 @@ # appropriate envstrs = [] for item in env.iteritems(): - envstrs.append("%s=%s" % item) + envstr = "%s=%s" % item + envstrs.append(envstr) l_args = rffi.liststr2charpp(args) l_env = rffi.liststr2charpp(envstrs) @@ -332,7 +339,7 @@ raise OSError(rposix.get_errno(), "execve failed") return extdef( - [str, [str], {str: str}], + [str0, [str0], {str0: str0}], s_ImpossibleValue, llimpl=execve_llimpl, export_name="ll_os.ll_os_execve") @@ -353,7 +360,7 @@ raise OSError(rposix.get_errno(), "os_spawnv failed") return rffi.cast(lltype.Signed, childpid) - return extdef([int, str, [str]], int, llimpl=spawnv_llimpl, + return extdef([int, str0, [str0]], int, llimpl=spawnv_llimpl, export_name="ll_os.ll_os_spawnv") @registering_if(os, 'spawnve') @@ -378,7 +385,7 @@ raise OSError(rposix.get_errno(), "os_spawnve failed") return rffi.cast(lltype.Signed, childpid) - return extdef([int, str, [str], {str: str}], int, + return extdef([int, str0, [str0], {str0: str0}], int, llimpl=spawnve_llimpl, export_name="ll_os.ll_os_spawnve") @@ -517,7 +524,7 @@ else: raise Exception("os.utime() arg 2 must be None or a tuple of " "2 floats, got %s" % (s_times,)) - os_utime_normalize_args._default_signature_ = [traits.str, None] + os_utime_normalize_args._default_signature_ = [traits.str0, None] return extdef(os_utime_normalize_args, s_None, "ll_os.ll_os_utime", @@ 
-612,7 +619,7 @@ if result == -1: raise OSError(rposix.get_errno(), "os_chroot failed") - return extdef([str], None, export_name="ll_os.ll_os_chroot", + return extdef([str0], None, export_name="ll_os.ll_os_chroot", llimpl=chroot_llimpl) @registering_if(os, 'uname') @@ -816,7 +823,7 @@ def os_open_oofakeimpl(path, flags, mode): return os.open(OOSupport.from_rstr(path), flags, mode) - return extdef([traits.str, int, int], int, traits.ll_os_name('open'), + return extdef([str0, int, int], int, traits.ll_os_name('open'), llimpl=os_open_llimpl, oofakeimpl=os_open_oofakeimpl) @registering_if(os, 'getloadavg') @@ -1050,7 +1057,7 @@ def os_access_oofakeimpl(path, mode): return os.access(OOSupport.from_rstr(path), mode) - return extdef([traits.str, int], s_Bool, llimpl=access_llimpl, + return extdef([traits.str0, int], s_Bool, llimpl=access_llimpl, export_name=traits.ll_os_name("access"), oofakeimpl=os_access_oofakeimpl) @@ -1062,8 +1069,8 @@ from pypy.rpython.module.ll_win32file import make_getfullpathname_impl getfullpathname_llimpl = make_getfullpathname_impl(traits) - return extdef([traits.str], # a single argument which is a str - traits.str, # returns a string + return extdef([traits.str0], # a single argument which is a str + traits.str0, # returns a string traits.ll_os_name('_getfullpathname'), llimpl=getfullpathname_llimpl) @@ -1174,8 +1181,8 @@ raise OSError(error, "os_readdir failed") return result - return extdef([traits.str], # a single argument which is a str - [traits.str], # returns a list of strings + return extdef([traits.str0], # a single argument which is a str + [traits.str0], # returns a list of strings traits.ll_os_name('listdir'), llimpl=os_listdir_llimpl) @@ -1241,7 +1248,7 @@ if res == -1: raise OSError(rposix.get_errno(), "os_chown failed") - return extdef([str, int, int], None, "ll_os.ll_os_chown", + return extdef([str0, int, int], None, "ll_os.ll_os_chown", llimpl=os_chown_llimpl) @registering_if(os, 'lchown') @@ -1254,7 +1261,7 @@ if res == -1: raise OSError(rposix.get_errno(), "os_lchown failed") - return extdef([str, int, int], None, "ll_os.ll_os_lchown", + return extdef([str0, int, int], None, "ll_os.ll_os_lchown", llimpl=os_lchown_llimpl) @registering_if(os, 'readlink') @@ -1283,12 +1290,11 @@ lltype.free(buf, flavor='raw') bufsize *= 4 # convert the result to a string - l = [buf[i] for i in range(res)] - result = ''.join(l) + result = rffi.charp2strn(buf, res) lltype.free(buf, flavor='raw') return result - return extdef([str], str, + return extdef([str0], str0, "ll_os.ll_os_readlink", llimpl=os_readlink_llimpl) @@ -1361,7 +1367,7 @@ res = os_system(command) return rffi.cast(lltype.Signed, res) - return extdef([str], int, llimpl=system_llimpl, + return extdef([str0], int, llimpl=system_llimpl, export_name="ll_os.ll_os_system") @registering_str_unicode(os.unlink) @@ -1383,7 +1389,7 @@ if not win32traits.DeleteFile(path): raise rwin32.lastWindowsError() - return extdef([traits.str], s_None, llimpl=unlink_llimpl, + return extdef([traits.str0], s_None, llimpl=unlink_llimpl, export_name=traits.ll_os_name('unlink')) @registering_str_unicode(os.chdir) @@ -1401,7 +1407,7 @@ from pypy.rpython.module.ll_win32file import make_chdir_impl os_chdir_llimpl = make_chdir_impl(traits) - return extdef([traits.str], s_None, llimpl=os_chdir_llimpl, + return extdef([traits.str0], s_None, llimpl=os_chdir_llimpl, export_name=traits.ll_os_name('chdir')) @registering_str_unicode(os.mkdir) @@ -1424,7 +1430,7 @@ if res < 0: raise OSError(rposix.get_errno(), "os_mkdir failed") - return 
extdef([traits.str, int], s_None, llimpl=os_mkdir_llimpl, + return extdef([traits.str0, int], s_None, llimpl=os_mkdir_llimpl, export_name=traits.ll_os_name('mkdir')) @registering_str_unicode(os.rmdir) @@ -1437,7 +1443,7 @@ if res < 0: raise OSError(rposix.get_errno(), "os_rmdir failed") - return extdef([traits.str], s_None, llimpl=rmdir_llimpl, + return extdef([traits.str0], s_None, llimpl=rmdir_llimpl, export_name=traits.ll_os_name('rmdir')) @registering_str_unicode(os.chmod) @@ -1454,7 +1460,7 @@ from pypy.rpython.module.ll_win32file import make_chmod_impl chmod_llimpl = make_chmod_impl(traits) - return extdef([traits.str, int], s_None, llimpl=chmod_llimpl, + return extdef([traits.str0, int], s_None, llimpl=chmod_llimpl, export_name=traits.ll_os_name('chmod')) @registering_str_unicode(os.rename) @@ -1476,7 +1482,7 @@ if not win32traits.MoveFile(oldpath, newpath): raise rwin32.lastWindowsError() - return extdef([traits.str, traits.str], s_None, llimpl=rename_llimpl, + return extdef([traits.str0, traits.str0], s_None, llimpl=rename_llimpl, export_name=traits.ll_os_name('rename')) @registering_str_unicode(getattr(os, 'mkfifo', None)) @@ -1489,7 +1495,7 @@ if res < 0: raise OSError(rposix.get_errno(), "os_mkfifo failed") - return extdef([traits.str, int], s_None, llimpl=mkfifo_llimpl, + return extdef([traits.str0, int], s_None, llimpl=mkfifo_llimpl, export_name=traits.ll_os_name('mkfifo')) @registering_str_unicode(getattr(os, 'mknod', None)) @@ -1503,7 +1509,7 @@ if res < 0: raise OSError(rposix.get_errno(), "os_mknod failed") - return extdef([traits.str, int, int], s_None, llimpl=mknod_llimpl, + return extdef([traits.str0, int, int], s_None, llimpl=mknod_llimpl, export_name=traits.ll_os_name('mknod')) @registering(os.umask) @@ -1555,7 +1561,7 @@ if res < 0: raise OSError(rposix.get_errno(), "os_link failed") - return extdef([str, str], s_None, llimpl=link_llimpl, + return extdef([str0, str0], s_None, llimpl=link_llimpl, export_name="ll_os.ll_os_link") @registering_if(os, 'symlink') @@ -1568,7 +1574,7 @@ if res < 0: raise OSError(rposix.get_errno(), "os_symlink failed") - return extdef([str, str], s_None, llimpl=symlink_llimpl, + return extdef([str0, str0], s_None, llimpl=symlink_llimpl, export_name="ll_os.ll_os_symlink") @registering_if(os, 'fork') diff --git a/pypy/rpython/module/ll_os_environ.py b/pypy/rpython/module/ll_os_environ.py --- a/pypy/rpython/module/ll_os_environ.py +++ b/pypy/rpython/module/ll_os_environ.py @@ -3,8 +3,11 @@ from pypy.rpython.controllerentry import Controller from pypy.rpython.extfunc import register_external from pypy.rpython.lltypesystem import rffi, lltype +from pypy.rpython.module import ll_os from pypy.rlib import rposix +str0 = ll_os.str0 + # ____________________________________________________________ # # Annotation support to control access to 'os.environ' in the RPython program @@ -64,7 +67,7 @@ rffi.free_charp(l_name) return result -register_external(r_getenv, [str], annmodel.SomeString(can_be_None=True), +register_external(r_getenv, [str0], annmodel.SomeString(can_be_None=True), export_name='ll_os.ll_os_getenv', llimpl=getenv_llimpl) @@ -93,7 +96,7 @@ if l_oldstring: rffi.free_charp(l_oldstring) -register_external(r_putenv, [str, str], annmodel.s_None, +register_external(r_putenv, [str0, str0], annmodel.s_None, export_name='ll_os.ll_os_putenv', llimpl=putenv_llimpl) @@ -128,7 +131,7 @@ del envkeepalive.byname[name] rffi.free_charp(l_oldstring) - register_external(r_unsetenv, [str], annmodel.s_None, + register_external(r_unsetenv, [str0], 
annmodel.s_None, export_name='ll_os.ll_os_unsetenv', llimpl=unsetenv_llimpl) @@ -172,7 +175,7 @@ i += 1 return result -register_external(r_envkeys, [], [str], # returns a list of strings +register_external(r_envkeys, [], [str0], # returns a list of strings export_name='ll_os.ll_os_envkeys', llimpl=envkeys_llimpl) @@ -193,6 +196,6 @@ i += 1 return result -register_external(r_envitems, [], [(str, str)], +register_external(r_envitems, [], [(str0, str0)], export_name='ll_os.ll_os_envitems', llimpl=envitems_llimpl) diff --git a/pypy/rpython/module/ll_os_stat.py b/pypy/rpython/module/ll_os_stat.py --- a/pypy/rpython/module/ll_os_stat.py +++ b/pypy/rpython/module/ll_os_stat.py @@ -236,7 +236,7 @@ def register_stat_variant(name, traits): if name != 'fstat': arg_is_path = True - s_arg = traits.str + s_arg = traits.str0 ARG1 = traits.CCHARP else: arg_is_path = False @@ -251,8 +251,6 @@ [s_arg], s_StatResult, traits.ll_os_name(name), llimpl=posix_stat_llimpl) - assert traits.str is str - if sys.platform.startswith('linux'): # because we always use _FILE_OFFSET_BITS 64 - this helps things work that are not a c compiler _functions = {'stat': 'stat64', @@ -283,7 +281,7 @@ @func_renamer('os_%s_fake' % (name,)) def posix_fakeimpl(arg): - if s_arg == str: + if s_arg == traits.str0: arg = hlstr(arg) st = getattr(os, name)(arg) fields = [TYPE for fieldname, TYPE in STAT_FIELDS] diff --git a/pypy/rpython/ootypesystem/test/test_ooann.py b/pypy/rpython/ootypesystem/test/test_ooann.py --- a/pypy/rpython/ootypesystem/test/test_ooann.py +++ b/pypy/rpython/ootypesystem/test/test_ooann.py @@ -231,7 +231,7 @@ a = RPythonAnnotator() s = a.build_types(oof, [bool]) - assert s == annmodel.SomeString(can_be_None=True) + assert annmodel.SomeString(can_be_None=True).contains(s) def test_oostring(): def oof(): diff --git a/pypy/rpython/test/test_extfunc.py b/pypy/rpython/test/test_extfunc.py --- a/pypy/rpython/test/test_extfunc.py +++ b/pypy/rpython/test/test_extfunc.py @@ -167,3 +167,23 @@ a = RPythonAnnotator(policy=policy) s = a.build_types(f, []) assert isinstance(s, annmodel.SomeString) + + def test_str0(self): + str0 = annmodel.SomeString(no_nul=True) + def os_open(s): + pass + register_external(os_open, [str0], None) + def f(s): + return os_open(s) + policy = AnnotatorPolicy() + policy.allow_someobjects = False + a = RPythonAnnotator(policy=policy) + a.build_types(f, [str]) # Does not raise + assert a.translator.config.translation.check_str_without_nul == False + # Now enable the str0 check, and try again with a similar function + a.translator.config.translation.check_str_without_nul=True + def g(s): + return os_open(s) + raises(Exception, a.build_types, g, [str]) + a.build_types(g, [str0]) # Does not raise + diff --git a/pypy/translator/c/test/test_extfunc.py b/pypy/translator/c/test/test_extfunc.py --- a/pypy/translator/c/test/test_extfunc.py +++ b/pypy/translator/c/test/test_extfunc.py @@ -3,6 +3,7 @@ import os, time, sys from pypy.tool.udir import udir from pypy.rlib.rarithmetic import r_longlong +from pypy.annotation import model as annmodel from pypy.translator.c.test.test_genc import compile from pypy.translator.c.test.test_standalone import StandaloneTests posix = __import__(os.name) @@ -145,7 +146,7 @@ filename = str(py.path.local(__file__)) def call_access(path, mode): return os.access(path, mode) - f = compile(call_access, [str, int]) + f = compile(call_access, [annmodel.s_Str0, int]) for mode in os.R_OK, os.W_OK, os.X_OK, (os.R_OK | os.W_OK | os.X_OK): assert f(filename, mode) == os.access(filename, mode) 
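Many of the hunks above only switch external function signatures from str to str0. What the diff does not show is how an RPython call site that starts from a plain string can satisfy those signatures. Below is a minimal sketch, not part of the changeset, built on the assert_str0 and check_str0 helpers added to pypy/rlib/rstring.py earlier in this diff; the function name and the choice of open() flags are illustrative only:

    import os
    from pypy.rlib.rstring import assert_str0, check_str0

    def open_for_read(path):
        # assert_str0 fails at run time if 'path' contains a NUL byte and
        # tells the annotator that the result is a no_nul string
        path = assert_str0(path)
        # check_str0 is a translation-time probe only: annotation fails
        # if 'path' was not proved to be NUL-free
        check_str0(path)
        # os.open is registered above with a str0 first argument, so this
        # call is accepted even when check_str_without_nul is enabled
        return os.open(path, os.O_RDONLY, 0)

As registered in the rstring.py hunk, both helpers lower to nothing after translation; their whole job is to adjust or verify the annotation (plus a plain assert when running untranslated).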
@@ -225,7 +226,7 @@ def test_system(): def does_stuff(cmd): return os.system(cmd) - f1 = compile(does_stuff, [str]) + f1 = compile(does_stuff, [annmodel.s_Str0]) res = f1("echo hello") assert res == 0 @@ -311,7 +312,7 @@ def test_chdir(): def does_stuff(path): os.chdir(path) - f1 = compile(does_stuff, [str]) + f1 = compile(does_stuff, [annmodel.s_Str0]) curdir = os.getcwd() try: os.chdir('..') @@ -325,7 +326,7 @@ os.rmdir(path) else: os.mkdir(path, 0777) - f1 = compile(does_stuff, [str, bool]) + f1 = compile(does_stuff, [annmodel.s_Str0, bool]) dirname = str(udir.join('test_mkdir_rmdir')) f1(dirname, False) assert os.path.exists(dirname) and os.path.isdir(dirname) @@ -628,7 +629,7 @@ return os.environ[s] except KeyError: return '--missing--' - func = compile(fn, [str]) + func = compile(fn, [annmodel.s_Str0]) os.environ.setdefault('USER', 'UNNAMED_USER') result = func('USER') assert result == os.environ['USER'] @@ -640,7 +641,7 @@ res = os.environ.get(s) if res is None: res = '--missing--' return res - func = compile(fn, [str]) + func = compile(fn, [annmodel.s_Str0]) os.environ.setdefault('USER', 'UNNAMED_USER') result = func('USER') assert result == os.environ['USER'] @@ -654,7 +655,7 @@ os.environ[s] = t3 os.environ[s] = t4 os.environ[s] = t5 - func = compile(fn, [str, str, str, str, str, str]) + func = compile(fn, [annmodel.s_Str0] * 6) func('PYPY_TEST_DICTLIKE_ENVIRON', 'a', 'b', 'c', 'FOOBAR', '42', expected_extra_mallocs = (2, 3, 4)) # at least two, less than 5 assert _real_getenv('PYPY_TEST_DICTLIKE_ENVIRON') == '42' @@ -678,7 +679,7 @@ else: raise Exception("should have raised!") # os.environ[s5] stays - func = compile(fn, [str, str, str, str, str]) + func = compile(fn, [annmodel.s_Str0] * 5) if hasattr(__import__(os.name), 'unsetenv'): expected_extra_mallocs = range(2, 10) # at least 2, less than 10: memory for s1, s2, s3, s4 should be freed @@ -743,7 +744,7 @@ raise AssertionError("should have failed!") result = os.listdir(s) return '/'.join(result) - func = compile(mylistdir, [str]) + func = compile(mylistdir, [annmodel.s_Str0]) for testdir in [str(udir), os.curdir]: result = func(testdir) result = result.split('/') diff --git a/pypy/translator/cli/test/runtest.py b/pypy/translator/cli/test/runtest.py --- a/pypy/translator/cli/test/runtest.py +++ b/pypy/translator/cli/test/runtest.py @@ -276,7 +276,7 @@ def get_annotation(x): if isinstance(x, basestring) and len(x) > 1: - return SomeString() + return SomeString(no_nul='\x00' not in x) else: return lltype_to_annotation(typeOf(x)) diff --git a/pypy/translator/driver.py b/pypy/translator/driver.py --- a/pypy/translator/driver.py +++ b/pypy/translator/driver.py @@ -184,6 +184,7 @@ self.standalone = standalone if standalone: + # the 'argv' parameter inputtypes = [s_list_of_strings] self.inputtypes = inputtypes diff --git a/pypy/translator/goal/nanos.py b/pypy/translator/goal/nanos.py --- a/pypy/translator/goal/nanos.py +++ b/pypy/translator/goal/nanos.py @@ -266,7 +266,7 @@ raise NotImplementedError("os.name == %r" % (os.name,)) def getenv(space, w_name): - name = space.str_w(w_name) + name = space.str0_w(w_name) return space.wrap(os.environ.get(name)) getenv_w = interp2app(getenv) diff --git a/pypy/translator/goal/targetpypystandalone.py b/pypy/translator/goal/targetpypystandalone.py --- a/pypy/translator/goal/targetpypystandalone.py +++ b/pypy/translator/goal/targetpypystandalone.py @@ -159,6 +159,8 @@ ## if config.translation.type_system == 'ootype': ## config.objspace.usemodules.suggest(rbench=True) + 
config.translation.suggest(check_str_without_nul=True) + if config.translation.thread: config.objspace.usemodules.thread = True elif config.objspace.usemodules.thread: From noreply at buildbot.pypy.org Sun Feb 5 23:32:33 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Sun, 5 Feb 2012 23:32:33 +0100 (CET) Subject: [pypy-commit] pypy py3k: Fix no_nul-ness of importing.make_compiled_pathname() Message-ID: <20120205223233.08F86820D0@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: py3k Changeset: r52117:4e8b55a8b643 Date: 2012-02-05 19:51 +0100 http://bitbucket.org/pypy/pypy/changeset/4e8b55a8b643/ Log: Fix no_nul-ness of importing.make_compiled_pathname() diff --git a/pypy/annotation/binaryop.py b/pypy/annotation/binaryop.py --- a/pypy/annotation/binaryop.py +++ b/pypy/annotation/binaryop.py @@ -448,7 +448,8 @@ class __extend__(pairtype(SomeChar, SomeChar)): def union((chr1, chr2)): - return SomeChar() + no_nul = chr1.no_nul and chr2.no_nul + return SomeChar(no_nul=no_nul) class __extend__(pairtype(SomeChar, SomeUnicodeCodePoint), @@ -645,14 +646,14 @@ def getitem((str1, int2)): getbookkeeper().count("str_getitem", int2) - return SomeChar() + return SomeChar(no_nul=str1.no_nul) getitem.can_only_throw = [] getitem_key = getitem def getitem_idx((str1, int2)): getbookkeeper().count("str_getitem", int2) - return SomeChar() + return SomeChar(no_nul=str1.no_nul) getitem_idx.can_only_throw = [IndexError] getitem_idx_key = getitem_idx diff --git a/pypy/module/imp/interp_imp.py b/pypy/module/imp/interp_imp.py --- a/pypy/module/imp/interp_imp.py +++ b/pypy/module/imp/interp_imp.py @@ -181,7 +181,7 @@ if space.config.objspace.usemodules.thread: importing.getimportlock(space).reinit_lock() - at unwrap_spec(pathname=str) + at unwrap_spec(pathname='str0') def cache_from_source(space, pathname, w_debug_override=None): """cache_from_source(path, [debug_override]) -> path Given the path to a .py file, return the path to its .pyc/.pyo file. @@ -194,7 +194,7 @@ the value of __debug__ instead.""" return space.wrap(importing.make_compiled_pathname(pathname)) - at unwrap_spec(pathname=str) + at unwrap_spec(pathname='str0') def source_from_cache(space, pathname): """source_from_cache(path) -> path Given the path to a .pyc./.pyo file, return the path to its .py file. 
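The change to pypy/annotation/binaryop.py above is the part that carries this fix: indexing a NUL-free string now yields a NUL-free char, and the union of two such chars keeps the flag, which is what the no_nul bit of make_compiled_pathname's result ultimately rests on. A rough way to check that property directly against the annotator, assuming nothing beyond what the hunks show (the helper f is made up for the illustration; the test added to test_import.py below does the same for make_compiled_pathname itself):

    from pypy.annotation.annrpython import RPythonAnnotator
    from pypy.annotation import model as annmodel

    def f(s, i):
        # every character taken out of a NUL-free string is itself NUL-free
        return s[i]

    a = RPythonAnnotator()
    s_char = a.build_types(f, [annmodel.SomeString(no_nul=True), int])
    assert isinstance(s_char, annmodel.SomeChar)
    assert s_char.no_nul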
diff --git a/pypy/module/imp/test/test_import.py b/pypy/module/imp/test/test_import.py --- a/pypy/module/imp/test/test_import.py +++ b/pypy/module/imp/test/test_import.py @@ -980,6 +980,16 @@ finally: stream.close() + def test_annotation(self): + from pypy.annotation.annrpython import RPythonAnnotator + from pypy.annotation import model as annmodel + def f(): + return importing.make_compiled_pathname('abc/foo.py') + a = RPythonAnnotator() + s = a.build_types(f, []) + assert isinstance(s, annmodel.SomeString) + assert s.no_nul + def test_PYTHONPATH_takes_precedence(space): if sys.platform == "win32": From noreply at buildbot.pypy.org Sun Feb 5 23:32:35 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Sun, 5 Feb 2012 23:32:35 +0100 (CET) Subject: [pypy-commit] pypy py3k: hg merge default Message-ID: <20120205223235.929A8820D0@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: py3k Changeset: r52118:4c6488c2ca14 Date: 2012-02-05 19:52 +0100 http://bitbucket.org/pypy/pypy/changeset/4c6488c2ca14/ Log: hg merge default diff --git a/pypy/annotation/annrpython.py b/pypy/annotation/annrpython.py --- a/pypy/annotation/annrpython.py +++ b/pypy/annotation/annrpython.py @@ -93,6 +93,10 @@ # make input arguments and set their type args_s = [self.typeannotation(t) for t in input_arg_types] + # XXX hack + annmodel.TLS.check_str_without_nul = ( + self.translator.config.translation.check_str_without_nul) + flowgraph, inputcells = self.get_call_parameters(function, args_s, policy) if not isinstance(flowgraph, FunctionGraph): assert isinstance(flowgraph, annmodel.SomeObject) diff --git a/pypy/annotation/model.py b/pypy/annotation/model.py --- a/pypy/annotation/model.py +++ b/pypy/annotation/model.py @@ -39,7 +39,9 @@ DEBUG = False # set to False to disable recording of debugging information class State(object): - pass + # A global attribute :-( Patch it with 'True' to enable checking of + # the no_nul attribute... + check_str_without_nul = False TLS = State() class SomeObject(object): @@ -225,9 +227,7 @@ def __init__(self): pass -class SomeString(SomeObject): - "Stands for an object which is known to be a string." - knowntype = str +class SomeStringOrUnicode(SomeObject): immutable = True can_be_None=False no_nul = False # No NUL character in the string. @@ -241,27 +241,29 @@ def can_be_none(self): return self.can_be_None + def __eq__(self, other): + if self.__class__ is not other.__class__: + return False + d1 = self.__dict__ + d2 = other.__dict__ + if not TLS.check_str_without_nul: + d1 = d1.copy(); d1['no_nul'] = 0 # ignored + d2 = d2.copy(); d2['no_nul'] = 0 # ignored + return d1 == d2 + +class SomeString(SomeStringOrUnicode): + "Stands for an object which is known to be a string." + knowntype = str + def nonnoneify(self): return SomeString(can_be_None=False, no_nul=self.no_nul) -class SomeUnicodeString(SomeObject): +class SomeUnicodeString(SomeStringOrUnicode): "Stands for an object which is known to be an unicode string" knowntype = unicode - immutable = True - can_be_None=False - no_nul = False - - def __init__(self, can_be_None=False, no_nul=False): - if can_be_None: - self.can_be_None = True - if no_nul: - self.no_nul = True - - def can_be_none(self): - return self.can_be_None def nonnoneify(self): - return SomeUnicodeString(can_be_None=False) + return SomeUnicodeString(can_be_None=False, no_nul=self.no_nul) class SomeChar(SomeString): "Stands for an object known to be a string of length 1." 
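The __eq__ added to SomeStringOrUnicode above means that two string annotations differing only in no_nul compare equal unless the global check is switched on. A small illustration of that behaviour, restating the hunk rather than adding anything new:

    from pypy.annotation import model as annmodel

    s_plain = annmodel.SomeString()
    s_nonul = annmodel.SomeString(no_nul=True)

    # by default the no_nul bit is ignored when comparing annotations
    assert s_plain == s_nonul

    annmodel.TLS.check_str_without_nul = True
    try:
        # with the check enabled, the bit takes part in the comparison
        assert s_plain != s_nonul
    finally:
        annmodel.TLS.check_str_without_nul = False

This is presumably also why the explicit remove_no_nul() helper and the per-call config check in extfunc.py can be dropped in the next hunks: whether no_nul is honoured or ignored is now decided in one place.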
@@ -740,15 +742,6 @@ s_obj = new_s_obj return s_obj -def remove_no_nul(s_obj): - if not getattr(s_obj, 'no_nul', False): - return s_obj - new_s_obj = SomeObject.__new__(s_obj.__class__) - new_s_obj.__dict__ = s_obj.__dict__.copy() - del new_s_obj.no_nul - return new_s_obj - - # ____________________________________________________________ # internal diff --git a/pypy/rpython/extfunc.py b/pypy/rpython/extfunc.py --- a/pypy/rpython/extfunc.py +++ b/pypy/rpython/extfunc.py @@ -152,15 +152,7 @@ assert len(args_s) == len(signature_args),\ "Argument number mismatch" - check_no_nul = False - if hasattr(self, 'bookkeeper'): - config = self.bookkeeper.annotator.translator.config - if config.translation.check_str_without_nul: - check_no_nul = True - for i, expected in enumerate(signature_args): - if not check_no_nul: - expected = annmodel.remove_no_nul(expected) arg = annmodel.unionof(args_s[i], expected) if not expected.contains(arg): name = getattr(self, 'name', None) diff --git a/pypy/rpython/lltypesystem/ll2ctypes.py b/pypy/rpython/lltypesystem/ll2ctypes.py --- a/pypy/rpython/lltypesystem/ll2ctypes.py +++ b/pypy/rpython/lltypesystem/ll2ctypes.py @@ -1036,13 +1036,8 @@ libraries = eci.testonly_libraries + eci.libraries + eci.frameworks FUNCTYPE = lltype.typeOf(funcptr).TO - if not libraries: - cfunc = get_on_lib(standard_c_lib, funcname) - # XXX magic: on Windows try to load the function from 'kernel32' too - if cfunc is None and hasattr(ctypes, 'windll'): - cfunc = get_on_lib(ctypes.windll.kernel32, funcname) - else: - cfunc = None + cfunc = None + if libraries: not_found = [] for libname in libraries: libpath = None @@ -1075,6 +1070,12 @@ not_found.append(libname) if cfunc is None: + cfunc = get_on_lib(standard_c_lib, funcname) + # XXX magic: on Windows try to load the function from 'kernel32' too + if cfunc is None and hasattr(ctypes, 'windll'): + cfunc = get_on_lib(ctypes.windll.kernel32, funcname) + + if cfunc is None: # function name not found in any of the libraries if not libraries: place = 'the standard C library (missing libraries=...?)' diff --git a/pypy/rpython/test/test_extfunc.py b/pypy/rpython/test/test_extfunc.py --- a/pypy/rpython/test/test_extfunc.py +++ b/pypy/rpython/test/test_extfunc.py @@ -187,3 +187,23 @@ raises(Exception, a.build_types, g, [str]) a.build_types(g, [str0]) # Does not raise + def test_list_of_str0(self): + str0 = annmodel.SomeString(no_nul=True) + def os_execve(l): + pass + register_external(os_execve, [[str0]], None) + def f(l): + return os_execve(l) + policy = AnnotatorPolicy() + policy.allow_someobjects = False + a = RPythonAnnotator(policy=policy) + a.build_types(f, [[str]]) # Does not raise + assert a.translator.config.translation.check_str_without_nul == False + # Now enable the str0 check, and try again with a similar function + a.translator.config.translation.check_str_without_nul=True + def g(l): + return os_execve(l) + raises(Exception, a.build_types, g, [[str]]) + a.build_types(g, [[str0]]) # Does not raise + + From notifications-noreply at bitbucket.org Mon Feb 6 01:25:44 2012 From: notifications-noreply at bitbucket.org (Bitbucket) Date: Mon, 06 Feb 2012 00:25:44 -0000 Subject: [pypy-commit] Notification: pypy Message-ID: <20120206002544.6955.46690@bitbucket05.managed.contegix.com> You have received a notification from Justin Weeks. Hi, I forked pypy. My fork is at https://bitbucket.org/semyazza/pypy. 
-- Disable notifications at https://bitbucket.org/account/notifications/ From noreply at buildbot.pypy.org Mon Feb 6 05:47:58 2012 From: noreply at buildbot.pypy.org (mattip) Date: Mon, 6 Feb 2012 05:47:58 +0100 (CET) Subject: [pypy-commit] pypy win32-cleanup: (amaury_) redefine ASN1_ITEM_EXP Message-ID: <20120206044759.00C7B710711@wyvern.cs.uni-duesseldorf.de> Author: mattip Branch: win32-cleanup Changeset: r52119:fbfac731b543 Date: 2012-02-03 16:56 +0200 http://bitbucket.org/pypy/pypy/changeset/fbfac731b543/ Log: (amaury_) redefine ASN1_ITEM_EXP diff --git a/pypy/rlib/ropenssl.py b/pypy/rlib/ropenssl.py --- a/pypy/rlib/ropenssl.py +++ b/pypy/rlib/ropenssl.py @@ -54,7 +54,7 @@ ASN1_STRING = lltype.Ptr(lltype.ForwardReference()) ASN1_ITEM = rffi.COpaquePtr('ASN1_ITEM') -ASN1_ITEM_EXP = lltype.FuncType([], ASN1_ITEM) +ASN1_ITEM_EXP = lltype.Ptr(lltype.FuncType([], ASN1_ITEM)) X509_NAME = rffi.COpaquePtr('X509_NAME') class CConfig: @@ -106,7 +106,7 @@ rffi.VOIDP) v3_ext_method = rffi_platform.Struct( 'struct v3_ext_method', - [('it', lltype.Ptr(ASN1_ITEM_EXP)), + [('it', ASN1_ITEM_EXP), ('d2i', lltype.Ptr(X509V3_EXT_D2I))]) GENERAL_NAME_st = rffi_platform.Struct( 'struct GENERAL_NAME_st', @@ -227,7 +227,7 @@ ssl_external('ASN1_item_d2i', [rffi.VOIDP, rffi.CCHARPP, rffi.LONG, ASN1_ITEM], rffi.VOIDP) if OPENSSL_EXPORT_VAR_AS_FUNCTION: - ssl_external('ASN1_ITEM_ptr', [lltype.Ptr(lltype.FuncType([], ASN1_ITEM))], ASN1_ITEM, macro=True) + ssl_external('ASN1_ITEM_ptr', [ASN1_ITEM_EXP], ASN1_ITEM, macro=True) else: ssl_external('ASN1_ITEM_ptr', [rffi.VOIDP], ASN1_ITEM, macro=True) From noreply at buildbot.pypy.org Mon Feb 6 09:04:38 2012 From: noreply at buildbot.pypy.org (fijal) Date: Mon, 6 Feb 2012 09:04:38 +0100 (CET) Subject: [pypy-commit] benchmarks default: hopefully this is the last special case for errors Message-ID: <20120206080438.119147107EC@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: Changeset: r174:7a39bbeb0f05 Date: 2012-02-06 10:04 +0200 http://bitbucket.org/pypy/benchmarks/changeset/7a39bbeb0f05/ Log: hopefully this is the last special case for errors diff --git a/saveresults.py b/saveresults.py --- a/saveresults.py +++ b/saveresults.py @@ -121,7 +121,8 @@ response += ' Reason: ' + str(e.reason) elif hasattr(e, 'code'): response = '\n The server couldn\'t fulfill the request' - response = "".join([response] + e.readlines()) + if hasattr(e, 'readlines'): + response = "".join([response] + e.readlines()) print response with open('error.html', 'w') as error_file: error_file.write(response) From noreply at buildbot.pypy.org Mon Feb 6 09:38:35 2012 From: noreply at buildbot.pypy.org (arigo) Date: Mon, 6 Feb 2012 09:38:35 +0100 (CET) Subject: [pypy-commit] pypy default: Add documentation file. Message-ID: <20120206083835.9D27F7107EC@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r52120:507a7fafd337 Date: 2012-02-06 09:38 +0100 http://bitbucket.org/pypy/pypy/changeset/507a7fafd337/ Log: Add documentation file. diff --git a/pypy/doc/config/translation.check_str_without_nul.txt b/pypy/doc/config/translation.check_str_without_nul.txt new file mode 100644 --- /dev/null +++ b/pypy/doc/config/translation.check_str_without_nul.txt @@ -0,0 +1,5 @@ +If turned on, the annotator will keep track of which strings can +potentially contain NUL characters, and complain if one such string +is passed to some external functions --- e.g. if it is used as a +filename in os.open(). 
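The file added below documents the translation option that the earlier changesets in this series depend on. For orientation, a minimal sketch of how it gets enabled, using only calls that appear elsewhere in this series (the suggest() call is the one in targetpypystandalone.py; the direct assignment is what the annotator tests do):

    from pypy.annotation.annrpython import RPythonAnnotator

    a = RPythonAnnotator()
    # off by default on a fresh translation config ...
    assert a.translator.config.translation.check_str_without_nul == False
    # ... a target can suggest it, as targetpypystandalone.py now does:
    #     config.translation.suggest(check_str_without_nul=True)
    # and tests simply set it before calling build_types():
    a.translator.config.translation.check_str_without_nul = True
    # from here on build_types() propagates the flag, and annotation fails
    # when a possibly NUL-containing string reaches an external like os.open()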
Defaults to False because it is usually more +pain than benefit, but turned on by targetpypystandalone. From noreply at buildbot.pypy.org Mon Feb 6 10:09:08 2012 From: noreply at buildbot.pypy.org (hager) Date: Mon, 6 Feb 2012 10:09:08 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: remove dead code Message-ID: <20120206090908.D8A4C7107EC@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r52121:340bb8d09ed7 Date: 2012-02-06 10:08 +0100 http://bitbucket.org/pypy/pypy/changeset/340bb8d09ed7/ Log: remove dead code diff --git a/pypy/jit/backend/ppc/instruction.py b/pypy/jit/backend/ppc/instruction.py deleted file mode 100644 --- a/pypy/jit/backend/ppc/instruction.py +++ /dev/null @@ -1,842 +0,0 @@ -r0, r1, r2, r3, r4, r5, r6, r7, r8, r9, r10, r11, r12, \ - r13, r14, r15, r16, r17, r18, r19, r20, r21, r22, \ - r23, r24, r25, r26, r27, r28, r29, r30, r31 = range(32) -rSCRATCH = r0 -rSP = r1 -rFP = r2 # the ABI doesn't specify a frame pointer. however, we want one - -class AllocationSlot(object): - offset = 0 - number = 0 - def __init__(self): - # The field alloc points to a singleton used by the register - # allocator to detect conflicts. No two AllocationSlot - # instances with the same value in self.alloc can be used at - # once. - self.alloc = self - - def make_loc(self): - """ When we assign a variable to one of these registers, we - call make_loc() to get the actual location instance; that - instance will have its alloc field set to self. For - everything but condition registers, this is self.""" - return self - -class _StackSlot(AllocationSlot): - is_register = False - def __init__(self, offset): - AllocationSlot.__init__(self) - self.offset = offset - def __repr__(self): - return "stack@%s"%(self.offset,) - -_stack_slot_cache = {} -def stack_slot(offset): - # because stack slots are put into dictionaries which compare by - # identity, it is important that there's a unique _StackSlot - # object for each offset, at least per function generated or - # something. doing the caching here is easier, though. - if offset in _stack_slot_cache: - return _stack_slot_cache[offset] - _stack_slot_cache[offset] = res = _StackSlot(offset) - return res - -NO_REGISTER = -1 -GP_REGISTER = 0 -FP_REGISTER = 1 -CR_FIELD = 2 -CT_REGISTER = 3 - -class Register(AllocationSlot): - is_register = True - def __init__(self): - AllocationSlot.__init__(self) - -class GPR(Register): - regclass = GP_REGISTER - def __init__(self, number): - Register.__init__(self) - self.number = number - def __repr__(self): - return 'r' + str(self.number) -gprs = map(GPR, range(32)) - -class FPR(Register): - regclass = FP_REGISTER - def __init__(self, number): - Register.__init__(self) - self.number = number - -fprs = map(FPR, range(32)) - -class BaseCRF(Register): - """ These represent condition registers; however, we never actually - use these as the location of something in the register allocator. - Instead, we place it in an instance of CRF which indicates which - bits are required to extract the value. Note that CRF().alloc will - always be an instance of this. 
""" - regclass = CR_FIELD - def __init__(self, number): - self.number = number - self.alloc = self - def make_loc(self): - return CRF(self) - -crfs = map(BaseCRF, range(8)) - -class CRF(Register): - regclass = CR_FIELD - def __init__(self, crf): - Register.__init__(self) - self.alloc = crf - self.number = crf.number - self.info = (-1,-1) # (bit, negated) - def set_info(self, info): - assert len(info) == 2 - self.info = info - def make_loc(self): - # should never call this on a CRF, only a BaseCRF - raise NotImplementedError - def move_to_gpr(self, gpr): - bit, negated = self.info - return _CRF2GPR(gpr, self.alloc.number*4 + bit, negated) - def move_from_gpr(self, gpr): - # cmp2info['ne'] - self.set_info((2, 1)) - return _GPR2CRF(self, gpr) - def __repr__(self): - return 'crf' + str(self.number) + str(self.info) - -class CTR(Register): - regclass = CT_REGISTER - def move_from_gpr(self, gpr): - return _GPR2CTR(gpr) - -ctr = CTR() - -_insn_index = [0] - -class Insn(object): - ''' - result is the Var instance that holds the result, or None - result_regclass is the class of the register the result goes into - - reg_args is the vars that need to have registers allocated for them - reg_arg_regclasses is the type of register that needs to be allocated - ''' - def __init__(self): - self._magic_index = _insn_index[0] - _insn_index[0] += 1 - def __repr__(self): - return "<%s %d>" % (self.__class__.__name__, self._magic_index) - def emit(self, asm): - pass - -class Insn_GPR__GPR_GPR(Insn): - def __init__(self, methptr, result, args): - Insn.__init__(self) - self.methptr = methptr - - self.result = result - self.result_regclass = GP_REGISTER - self.reg_args = args - self.reg_arg_regclasses = [GP_REGISTER, GP_REGISTER] - - self.result_reg = None - self.arg_reg1 = None - self.arg_reg2 = None - - def allocate(self, allocator): - self.result_reg = allocator.loc_of(self.result) - self.arg_reg1 = allocator.loc_of(self.reg_args[0]) - self.arg_reg2 = allocator.loc_of(self.reg_args[1]) - - def __repr__(self): - if self.result_reg: - r = "%s@%s"%(self.result, self.result_reg) - else: - r = str(self.result) - if self.arg_reg1: - a1 = "%s@%s"%(self.reg_args[0], self.arg_reg1) - else: - a1 = str(self.reg_args[0]) - if self.arg_reg2: - a2 = "%s@%s"%(self.reg_args[1], self.arg_reg2) - else: - a2 = str(self.reg_args[1]) - return "<%s-%s %s %s, %s, %s>" % (self.__class__.__name__, self._magic_index, - self.methptr.im_func.func_name, - r, a1, a2) - - def emit(self, asm): - self.methptr(asm, - self.result_reg.number, - self.arg_reg1.number, - self.arg_reg2.number) - -class Insn_GPR__GPR_IMM(Insn): - def __init__(self, methptr, result, args): - Insn.__init__(self) - self.methptr = methptr - self.imm = args[1] - - self.result = result - self.result_regclass = GP_REGISTER - self.reg_args = [args[0]] - self.reg_arg_regclasses = [GP_REGISTER] - self.result_reg = None - self.arg_reg = None - def allocate(self, allocator): - self.result_reg = allocator.loc_of(self.result) - self.arg_reg = allocator.loc_of(self.reg_args[0]) - def __repr__(self): - if self.result_reg: - r = "%s@%s"%(self.result, self.result_reg) - else: - r = str(self.result) - if self.arg_reg: - a = "%s@%s"%(self.reg_args[0], self.arg_reg) - else: - a = str(self.reg_args[0]) - return "<%s-%d %s %s, %s, (%s)>" % (self.__class__.__name__, self._magic_index, - self.methptr.im_func.func_name, - r, a, self.imm.value) - - def emit(self, asm): - self.methptr(asm, - self.result_reg.number, - self.arg_reg.number, - self.imm.value) - -class Insn_GPR__GPR(Insn): - def 
__init__(self, methptr, result, arg): - Insn.__init__(self) - self.methptr = methptr - - self.result = result - self.result_regclass = GP_REGISTER - self.reg_args = [arg] - self.reg_arg_regclasses = [GP_REGISTER] - - self.result_reg = None - self.arg_reg = None - def allocate(self, allocator): - self.result_reg = allocator.loc_of(self.result) - self.arg_reg = allocator.loc_of(self.reg_args[0]) - def __repr__(self): - if self.result_reg: - r = "%s@%s"%(self.result, self.result_reg) - else: - r = str(self.result) - if self.arg_reg: - a = "%s@%s"%(self.reg_args[0], self.arg_reg) - else: - a = str(self.reg_args[0]) - return "<%s-%d %s %s, %s>" % (self.__class__.__name__, self._magic_index, - self.methptr.im_func.func_name, r, a) - def emit(self, asm): - self.methptr(asm, - self.result_reg.number, - self.arg_reg.number) - - -class Insn_GPR(Insn): - def __init__(self, methptr, result): - Insn.__init__(self) - self.methptr = methptr - - self.result = result - self.result_regclass = GP_REGISTER - self.reg_args = [] - self.reg_arg_regclasses = [] - self.result_reg = None - def allocate(self, allocator): - self.result_reg = allocator.loc_of(self.result) - def __repr__(self): - if self.result_reg: - r = "%s@%s"%(self.result, self.result_reg) - else: - r = str(self.result) - return "<%s-%d %s %s>" % (self.__class__.__name__, self._magic_index, - self.methptr.im_func.func_name, r) - def emit(self, asm): - self.methptr(asm, - self.result_reg.number) - -class Insn_GPR__IMM(Insn): - def __init__(self, methptr, result, args): - Insn.__init__(self) - self.methptr = methptr - self.imm = args[0] - - self.result = result - self.result_regclass = GP_REGISTER - self.reg_args = [] - self.reg_arg_regclasses = [] - self.result_reg = None - def allocate(self, allocator): - self.result_reg = allocator.loc_of(self.result) - def __repr__(self): - if self.result_reg: - r = "%s@%s"%(self.result, self.result_reg) - else: - r = str(self.result) - return "<%s-%d %s %s, (%s)>" % (self.__class__.__name__, self._magic_index, - self.methptr.im_func.func_name, r, - self.imm.value) - def emit(self, asm): - self.methptr(asm, - self.result_reg.number, - self.imm.value) - -class MoveCRB2GPR(Insn): - def __init__(self, result, gv_condition): - Insn.__init__(self) - self.result = result - self.result_regclass = GP_REGISTER - self.reg_args = [gv_condition] - self.reg_arg_regclasses = [CR_FIELD] - def allocate(self, allocator): - self.targetreg = allocator.loc_of(self.result) - self.crf = allocator.loc_of(self.reg_args[0]) - def emit(self, asm): - assert isinstance(self.crf, CRF) - bit, negated = self.crf.info - asm.mfcr(self.targetreg.number) - asm.extrwi(self.targetreg.number, self.targetreg.number, 1, self.crf.number*4+bit) - if negated: - asm.xori(self.targetreg.number, self.targetreg.number, 1) - -class Insn_None__GPR_GPR_IMM(Insn): - def __init__(self, methptr, args): - Insn.__init__(self) - self.methptr = methptr - self.imm = args[2] - - self.result = None - self.result_regclass = NO_REGISTER - self.reg_args = args[:2] - self.reg_arg_regclasses = [GP_REGISTER, GP_REGISTER] - def allocate(self, allocator): - self.reg1 = allocator.loc_of(self.reg_args[0]) - self.reg2 = allocator.loc_of(self.reg_args[1]) - def __repr__(self): - return "<%s %s %d>" % (self.__class__.__name__, self.methptr.im_func.func_name, self._magic_index) - - def emit(self, asm): - self.methptr(asm, - self.reg1.number, - self.reg2.number, - self.imm.value) - -class Insn_None__GPR_GPR_GPR(Insn): - def __init__(self, methptr, args): - Insn.__init__(self) - 
self.methptr = methptr - - self.result = None - self.result_regclass = NO_REGISTER - self.reg_args = args - self.reg_arg_regclasses = [GP_REGISTER, GP_REGISTER, GP_REGISTER] - def allocate(self, allocator): - self.reg1 = allocator.loc_of(self.reg_args[0]) - self.reg2 = allocator.loc_of(self.reg_args[1]) - self.reg3 = allocator.loc_of(self.reg_args[2]) - def __repr__(self): - return "<%s %s %d>" % (self.__class__.__name__, self.methptr.im_func.func_name, self._magic_index) - - def emit(self, asm): - self.methptr(asm, - self.reg1.number, - self.reg2.number, - self.reg3.number) - -class Extrwi(Insn): - def __init__(self, result, source, size, bit): - Insn.__init__(self) - - self.result = result - self.result_regclass = GP_REGISTER - self.reg_args = [source] - self.reg_arg_regclasses = [GP_REGISTER] - self.result_reg = None - self.arg_reg = None - - self.size = size - self.bit = bit - def allocate(self, allocator): - self.result_reg = allocator.loc_of(self.result) - self.arg_reg = allocator.loc_of(self.reg_args[0]) - def __repr__(self): - if self.result_reg: - r = "%s@%s"%(self.result, self.result_reg) - else: - r = str(self.result) - if self.arg_reg: - a = "%s@%s"%(self.reg_args[0], self.arg_reg) - else: - a = str(self.reg_args[0]) - return "<%s-%d extrwi %s, %s, %s, %s>" % (self.__class__.__name__, self._magic_index, - r, a, self.size, self.bit) - - def emit(self, asm): - asm.extrwi(self.result_reg.number, - self.arg_reg.number, - self.size, self.bit) - - -class CMPInsn(Insn): - def __init__(self, info, result): - Insn.__init__(self) - self.info = info - self.result = result - self.result_reg = None - - def allocate(self, allocator): - self.result_reg = allocator.loc_of(self.result) - assert isinstance(self.result_reg, CRF) - self.result_reg.set_info(self.info) - -class CMPW(CMPInsn): - def __init__(self, info, result, args): - CMPInsn.__init__(self, info, result) - self.result_regclass = CR_FIELD - self.reg_args = args - self.reg_arg_regclasses = [GP_REGISTER, GP_REGISTER] - self.arg_reg1 = None - self.arg_reg2 = None - - def allocate(self, allocator): - CMPInsn.allocate(self, allocator) - self.arg_reg1 = allocator.loc_of(self.reg_args[0]) - self.arg_reg2 = allocator.loc_of(self.reg_args[1]) - - def __repr__(self): - if self.result_reg: - r = "%s@%s"%(self.result, self.result_reg) - else: - r = str(self.result) - if self.arg_reg1: - a1 = "%s@%s"%(self.reg_args[0], self.arg_reg1) - else: - a1 = str(self.reg_args[0]) - if self.arg_reg2: - a2 = "%s@%s"%(self.reg_args[1], self.arg_reg2) - else: - a2 = str(self.reg_args[1]) - return "<%s-%d %s %s, %s, %s>"%(self.__class__.__name__, self._magic_index, - self.__class__.__name__.lower(), - r, a1, a2) - - def emit(self, asm): - asm.cmpw(self.result_reg.number, self.arg_reg1.number, self.arg_reg2.number) - -class CMPWL(CMPW): - def emit(self, asm): - asm.cmplw(self.result_reg.number, self.arg_reg1.number, self.arg_reg2.number) - -class CMPWI(CMPInsn): - def __init__(self, info, result, args): - CMPInsn.__init__(self, info, result) - self.imm = args[1] - self.result_regclass = CR_FIELD - self.reg_args = [args[0]] - self.reg_arg_regclasses = [GP_REGISTER] - self.arg_reg = None - - def allocate(self, allocator): - CMPInsn.allocate(self, allocator) - self.arg_reg = allocator.loc_of(self.reg_args[0]) - - def __repr__(self): - if self.result_reg: - r = "%s@%s"%(self.result, self.result_reg) - else: - r = str(self.result) - if self.arg_reg: - a = "%s@%s"%(self.reg_args[0], self.arg_reg) - else: - a = str(self.reg_args[0]) - return "<%s-%d %s %s, %s, 
(%s)>"%(self.__class__.__name__, self._magic_index, - self.__class__.__name__.lower(), - r, a, self.imm.value) - def emit(self, asm): - #print "CMPWI", asm.mc.tell() - asm.cmpwi(self.result_reg.number, self.arg_reg.number, self.imm.value) - -class CMPWLI(CMPWI): - def emit(self, asm): - asm.cmplwi(self.result_reg.number, self.arg_reg.number, self.imm.value) - - -## class MTCTR(Insn): -## def __init__(self, result, args): -## Insn.__init__(self) -## self.result = result -## self.result_regclass = CT_REGISTER - -## self.reg_args = args -## self.reg_arg_regclasses = [GP_REGISTER] - -## def allocate(self, allocator): -## self.arg_reg = allocator.loc_of(self.reg_args[0]) - -## def emit(self, asm): -## asm.mtctr(self.arg_reg.number) - -class Jump(Insn): - def __init__(self, gv_cond, targetbuilder, jump_if_true, jump_args_gv): - Insn.__init__(self) - self.gv_cond = gv_cond - self.jump_if_true = jump_if_true - - self.result = None - self.result_regclass = NO_REGISTER - self.reg_args = [gv_cond] - self.reg_arg_regclasses = [CR_FIELD] - self.crf = None - - self.jump_args_gv = jump_args_gv - self.targetbuilder = targetbuilder - def allocate(self, allocator): - self.crf = allocator.loc_of(self.reg_args[0]) - assert self.crf.info[0] != -1 - - assert self.targetbuilder.initial_var2loc is None - self.targetbuilder.initial_var2loc = {} - from pypy.jit.codegen.ppc.rgenop import Var - for gv_arg in self.jump_args_gv: - if isinstance(gv_arg, Var): - self.targetbuilder.initial_var2loc[gv_arg] = allocator.var2loc[gv_arg] - allocator.builders_to_tell_spill_offset_to.append(self.targetbuilder) - def __repr__(self): - if self.jump_if_true: - op = 'if_true' - else: - op = 'if_false' - if self.crf: - a = '%s@%s'%(self.reg_args[0], self.crf) - else: - a = self.reg_args[0] - return '<%s-%d %s %s>'%(self.__class__.__name__, self._magic_index, - op, a) - def emit(self, asm): - if self.targetbuilder.start: - asm.load_word(rSCRATCH, self.targetbuilder.start) - else: - self.targetbuilder.patch_start_here = asm.mc.tell() - asm.load_word(rSCRATCH, 0) - asm.mtctr(rSCRATCH) - bit, negated = self.crf.info - assert bit != -1 - if negated ^ self.jump_if_true: - BO = 12 # jump if relavent bit is set in the CR - else: - BO = 4 # jump if relavent bit is NOT set in the CR - asm.bcctr(BO, self.crf.number*4 + bit) - -class Label(Insn): - def __init__(self, label): - Insn.__init__(self) - self.reg_args = [] - self.reg_arg_regclasses = [] - self.result_regclass = NO_REGISTER - self.result = None - self.label = label - def allocate(self, allocator): - for gv in self.label.args_gv: - loc = allocator.loc_of(gv) - if isinstance(loc, CRF): - allocator.forget(gv, loc) - allocator.lru.remove(gv) - allocator.freeregs[loc.regclass].append(loc.alloc) - new_loc = allocator._allocate_reg(GP_REGISTER, gv) - allocator.lru.append(gv) - allocator.insns.append(loc.move_to_gpr(new_loc.number)) - loc = new_loc - self.label.arg_locations = [] - for gv in self.label.args_gv: - loc = allocator.loc_of(gv) - self.label.arg_locations.append(loc) - allocator.labels_to_tell_spill_offset_to.append(self.label) - def __repr__(self): - if hasattr(self.label, 'arg_locations'): - arg_locations = '[' + ', '.join( - ['%s@%s'%(gv, loc) for gv, loc in - zip(self.label.args_gv, self.label.arg_locations)]) + ']' - else: - arg_locations = str(self.label.args_gv) - return ''%(self._magic_index, - arg_locations) - def emit(self, asm): - self.label.startaddr = asm.mc.tell() - -class LoadFramePointer(Insn): - def __init__(self, result): - Insn.__init__(self) - self.reg_args = [] 
- self.reg_arg_regclasses = [] - self.result = result - self.result_regclass = GP_REGISTER - def allocate(self, allocator): - self.result_reg = allocator.loc_of(self.result) - def emit(self, asm): - asm.mr(self.result_reg.number, rFP) - -class CopyIntoStack(Insn): - def __init__(self, place, v): - Insn.__init__(self) - self.reg_args = [v] - self.reg_arg_regclasses = [GP_REGISTER] - self.result = None - self.result_regclass = NO_REGISTER - self.place = place - def allocate(self, allocator): - self.arg_reg = allocator.loc_of(self.reg_args[0]) - self.target_slot = allocator.spill_slot() - self.place.offset = self.target_slot.offset - def emit(self, asm): - asm.stw(self.arg_reg.number, rFP, self.target_slot.offset) - -class CopyOffStack(Insn): - def __init__(self, v, place): - Insn.__init__(self) - self.reg_args = [] - self.reg_arg_regclasses = [] - self.result = v - self.result_regclass = GP_REGISTER - self.place = place - def allocate(self, allocator): - self.result_reg = allocator.loc_of(self.result) - allocator.free_stack_slots.append(stack_slot(self.place.offset)) - def emit(self, asm): - asm.lwz(self.result_reg.number, rFP, self.place.offset) - -class SpillCalleeSaves(Insn): - def __init__(self): - Insn.__init__(self) - self.reg_args = [] - self.reg_arg_regclasses = [] - self.result = None - self.result_regclass = NO_REGISTER - def allocate(self, allocator): - # cough cough cough - callersave = gprs[3:13] - for v in allocator.var2loc: - loc = allocator.loc_of(v) - if loc in callersave: - allocator.spill(loc, v) - allocator.freeregs[GP_REGISTER].append(loc) - def emit(self, asm): - pass - -class LoadArg(Insn): - def __init__(self, argnumber, arg): - Insn.__init__(self) - self.reg_args = [] - self.reg_arg_regclasses = [] - self.result = None - self.result_regclass = NO_REGISTER - - self.argnumber = argnumber - self.arg = arg - def allocate(self, allocator): - from pypy.jit.codegen.ppc.rgenop import Var - if isinstance(self.arg, Var): - self.loc = allocator.loc_of(self.arg) - else: - self.loc = None - def emit(self, asm): - if self.argnumber < 8: # magic numbers 'r' us - targetreg = 3+self.argnumber - if self.loc is None: - self.arg.load_now(asm, gprs[targetreg]) - elif self.loc.is_register: - asm.mr(targetreg, self.loc.number) - else: - asm.lwz(targetreg, rFP, self.loc.offset) - else: - targetoffset = 24+self.argnumber*4 - if self.loc is None: - self.arg.load_now(asm, gprs[0]) - asm.stw(r0, r1, targetoffset) - elif self.loc.is_register: - asm.stw(self.loc.number, r1, targetoffset) - else: - asm.lwz(r0, rFP, self.loc.offset) - asm.stw(r0, r1, targetoffset) - -class CALL(Insn): - def __init__(self, result, target): - Insn.__init__(self) - from pypy.jit.codegen.ppc.rgenop import Var - if isinstance(target, Var): - self.reg_args = [target] - self.reg_arg_regclasses = [CT_REGISTER] - else: - self.reg_args = [] - self.reg_arg_regclasses = [] - self.target = target - self.result = result - self.result_regclass = GP_REGISTER - def allocate(self, allocator): - if self.reg_args: - assert allocator.loc_of(self.reg_args[0]) is ctr - self.resultreg = allocator.loc_of(self.result) - def emit(self, asm): - if not self.reg_args: - self.target.load_now(asm, gprs[0]) - asm.mtctr(0) - asm.bctrl() - asm.lwz(rFP, rSP, 0) - if self.resultreg != gprs[3]: - asm.mr(self.resultreg.number, 3) - - -class AllocTimeInsn(Insn): - def __init__(self): - Insn.__init__(self) - self.reg_args = [] - self.reg_arg_regclasses = [] - self.result_regclass = NO_REGISTER - self.result = None - -class Move(AllocTimeInsn): - def 
__init__(self, dest, src): - AllocTimeInsn.__init__(self) - self.dest = dest - self.src = src - def emit(self, asm): - asm.mr(self.dest.number, self.src.number) - -class Load(AllocTimeInsn): - def __init__(self, dest, const): - AllocTimeInsn.__init__(self) - self.dest = dest - self.const = const - def __repr__(self): - return ""%(self._magic_index, self.dest, self.const) - def emit(self, asm): - self.const.load_now(asm, self.dest) - -class Unspill(AllocTimeInsn): - """ A special instruction inserted by our register "allocator." It - indicates that we need to load a value from the stack into a register - because we spilled a particular value. """ - def __init__(self, var, reg, stack): - """ - var --- the var we spilled (a Var) - reg --- the reg we spilled it from (an integer) - offset --- the offset on the stack we spilled it to (an integer) - """ - AllocTimeInsn.__init__(self) - self.var = var - self.reg = reg - self.stack = stack - if not isinstance(self.reg, GPR): - assert isinstance(self.reg, CRF) or isinstance(self.reg, CTR) - self.moveinsn = self.reg.move_from_gpr(0) - else: - self.moveinsn = None - def __repr__(self): - return ''%(self._magic_index, self.var, self.reg, self.stack) - def emit(self, asm): - if isinstance(self.reg, GPR): - r = self.reg.number - else: - r = 0 - asm.lwz(r, rFP, self.stack.offset) - if self.moveinsn: - self.moveinsn.emit(asm) - -class Spill(AllocTimeInsn): - """ A special instruction inserted by our register "allocator." - It indicates that we need to store a value from the register into - the stack because we spilled a particular value.""" - def __init__(self, var, reg, stack): - """ - var --- the var we are spilling (a Var) - reg --- the reg we are spilling it from (an integer) - offset --- the offset on the stack we are spilling it to (an integer) - """ - AllocTimeInsn.__init__(self) - self.var = var - self.reg = reg - self.stack = stack - def __repr__(self): - return ''%(self._magic_index, self.var, self.stack, self.reg) - def emit(self, asm): - if isinstance(self.reg, GPR): - r = self.reg.number - else: - assert isinstance(self.reg, CRF) - self.reg.move_to_gpr(0).emit(asm) - r = 0 - #print 'spilling to', self.stack.offset - asm.stw(r, rFP, self.stack.offset) - -class _CRF2GPR(AllocTimeInsn): - def __init__(self, targetreg, bit, negated): - AllocTimeInsn.__init__(self) - self.targetreg = targetreg - self.bit = bit - self.negated = negated - def __repr__(self): - number = self.bit // 4 - bit = self.bit % 4 - return '' % ( - self._magic_index, self.targetreg, number, bit, self.negated) - def emit(self, asm): - asm.mfcr(self.targetreg) - asm.extrwi(self.targetreg, self.targetreg, 1, self.bit) - if self.negated: - asm.xori(self.targetreg, self.targetreg, 1) - -class _GPR2CRF(AllocTimeInsn): - def __init__(self, targetreg, fromreg): - AllocTimeInsn.__init__(self) - self.targetreg = targetreg - self.fromreg = fromreg - def __repr__(self): - return '' % ( - self._magic_index, self.targetreg, self.fromreg) - def emit(self, asm): - asm.cmpwi(self.targetreg.number, self.fromreg, 0) - -class _GPR2CTR(AllocTimeInsn): - def __init__(self, fromreg): - AllocTimeInsn.__init__(self) - self.fromreg = fromreg - def emit(self, asm): - asm.mtctr(self.fromreg) - -class Return(Insn): - """ Ensures the return value is in r3 """ - def __init__(self, var): - Insn.__init__(self) - self.var = var - self.reg_args = [self.var] - self.reg_arg_regclasses = [GP_REGISTER] - self.result = None - self.result_regclass = NO_REGISTER - self.reg = None - def allocate(self, allocator): - 
self.reg = allocator.loc_of(self.reg_args[0]) - def emit(self, asm): - if self.reg.number != 3: - asm.mr(r3, self.reg.number) - -class FakeUse(Insn): - """ A fake use of a var to get it into a register. And reserving - a condition register field.""" - def __init__(self, rvar, var): - Insn.__init__(self) - self.var = var - self.reg_args = [self.var] - self.reg_arg_regclasses = [GP_REGISTER] - self.result = rvar - self.result_regclass = CR_FIELD - def allocate(self, allocator): - pass - def emit(self, asm): - pass diff --git a/pypy/jit/backend/ppc/rgenop.py b/pypy/jit/backend/ppc/rgenop.py deleted file mode 100644 --- a/pypy/jit/backend/ppc/rgenop.py +++ /dev/null @@ -1,1427 +0,0 @@ -import py -from pypy.jit.codegen.model import AbstractRGenOp, GenLabel, GenBuilder -from pypy.jit.codegen.model import GenVar, GenConst, CodeGenSwitch -from pypy.jit.codegen.model import ReplayBuilder, dummy_var -from pypy.rpython.lltypesystem import lltype, llmemory -from pypy.rpython.lltypesystem import lloperation -from pypy.rpython.extfunc import register_external -from pypy.rlib.objectmodel import specialize, we_are_translated -from pypy.jit.codegen.conftest import option -from ctypes import POINTER, cast, c_void_p, c_int, CFUNCTYPE - -from pypy.jit.codegen.ppc import codebuf -from pypy.jit.codegen.ppc.instruction import rSP, rFP, rSCRATCH, gprs -from pypy.jit.codegen.ppc import instruction as insn -from pypy.jit.codegen.ppc.regalloc import RegisterAllocation -from pypy.jit.codegen.emit_moves import emit_moves, emit_moves_safe - -from pypy.jit.codegen.ppc.ppcgen.rassemblermaker import make_rassembler -from pypy.jit.codegen.ppc.ppcgen.ppc_assembler import MyPPCAssembler - -from pypy.jit.codegen.i386.rgenop import gc_malloc_fnaddr -from pypy.rpython.annlowlevel import llhelper - -class RPPCAssembler(make_rassembler(MyPPCAssembler)): - def emit(self, value): - self.mc.write(value) - -_PPC = RPPCAssembler - - -_flush_icache = None -def flush_icache(base, size): - global _flush_icache - if _flush_icache == None: - cpath = py.magic.autopath().dirpath().join('_flush_icache.c') - _flush_icache = cpath._getpymodule()._flush_icache - _flush_icache(base, size) -register_external(flush_icache, [int, int], None, "LL_flush_icache") - - -NSAVEDREGISTERS = 19 - -DEBUG_TRAP = option.trap -DEBUG_PRINT = option.debug_print - -_var_index = [0] -class Var(GenVar): - conditional = False - def __init__(self): - self.__magic_index = _var_index[0] - _var_index[0] += 1 - def __repr__(self): - return "v%d" % self.__magic_index - def fits_in_uimm(self): - return False - def fits_in_simm(self): - return False - -class ConditionVar(Var): - """ Used for vars that originated as the result of a conditional - operation, like a == b """ - conditional = True - -class IntConst(GenConst): - - def __init__(self, value): - self.value = value - - def __repr__(self): - return 'IntConst(%d)'%self.value - - @specialize.arg(1) - def revealconst(self, T): - if isinstance(T, lltype.Ptr): - return lltype.cast_int_to_ptr(T, self.value) - elif T is llmemory.Address: - return llmemory.cast_int_to_adr(self.value) - else: - return lltype.cast_primitive(T, self.value) - - def load(self, insns, var): - insns.append( - insn.Insn_GPR__IMM(_PPC.load_word, - var, [self])) - - def load_now(self, asm, loc): - if loc.is_register: - assert isinstance(loc, insn.GPR) - asm.load_word(loc.number, self.value) - else: - #print 'load_now to', loc.offset - asm.load_word(rSCRATCH, self.value) - asm.stw(rSCRATCH, rFP, loc.offset) - - def fits_in_simm(self): - return 
abs(self.value) < 2**15 - - def fits_in_uimm(self): - return 0 <= self.value < 2**16 - -class AddrConst(GenConst): - - def __init__(self, addr): - self.addr = addr - - @specialize.arg(1) - def revealconst(self, T): - if T is llmemory.Address: - return self.addr - elif isinstance(T, lltype.Ptr): - return llmemory.cast_adr_to_ptr(self.addr, T) - elif T is lltype.Signed: - return llmemory.cast_adr_to_int(self.addr) - else: - assert 0, "XXX not implemented" - - def fits_in_simm(self): - return False - - def fits_in_uimm(self): - return False - - def load(self, insns, var): - i = IntConst(llmemory.cast_adr_to_int(self.addr)) - insns.append( - insn.Insn_GPR__IMM(RPPCAssembler.load_word, - var, [i])) - - def load_now(self, asm, loc): - value = llmemory.cast_adr_to_int(self.addr) - if loc.is_register: - assert isinstance(loc, insn.GPR) - asm.load_word(loc.number, value) - else: - #print 'load_now to', loc.offset - asm.load_word(rSCRATCH, value) - asm.stw(rSCRATCH, rFP, loc.offset) - - -class JumpPatchupGenerator(object): - - def __init__(self, insns, allocator): - self.insns = insns - self.allocator = allocator - - def emit_move(self, tarloc, srcloc): - srcvar = None - if DEBUG_PRINT: - for v, loc in self.allocator.var2loc.iteritems(): - if loc is srcloc: - srcvar = v - break - emit = self.insns.append - if tarloc == srcloc: - return - if tarloc.is_register and srcloc.is_register: - assert isinstance(tarloc, insn.GPR) - if isinstance(srcloc, insn.GPR): - emit(insn.Move(tarloc, srcloc)) - else: - assert isinstance(srcloc, insn.CRF) - emit(srcloc.move_to_gpr(tarloc.number)) - elif tarloc.is_register and not srcloc.is_register: - emit(insn.Unspill(srcvar, tarloc, srcloc)) - elif not tarloc.is_register and srcloc.is_register: - emit(insn.Spill(srcvar, srcloc, tarloc)) - elif not tarloc.is_register and not srcloc.is_register: - emit(insn.Unspill(srcvar, insn.gprs[0], srcloc)) - emit(insn.Spill(srcvar, insn.gprs[0], tarloc)) - - def create_fresh_location(self): - return self.allocator.spill_slot() - -class StackInfo(Var): - # not really a Var at all, but needs to be mixable with Consts.... - # offset will be assigned later - offset = 0 - pass - -def prepare_for_jump(insns, sourcevars, src2loc, target, allocator): - - tar2src = {} # tar var -> src var - tar2loc = {} - - # construct mapping of targets to sources; note that "target vars" - # and "target locs" are the same thing right now - targetlocs = target.arg_locations - tarvars = [] - -## if DEBUG_PRINT: -## print targetlocs -## print allocator.var2loc - - for i in range(len(targetlocs)): - tloc = targetlocs[i] - src = sourcevars[i] - if isinstance(src, Var): - tar2loc[tloc] = tloc - tar2src[tloc] = src - tarvars.append(tloc) - if not tloc.is_register: - if tloc in allocator.free_stack_slots: - allocator.free_stack_slots.remove(tloc) - - gen = JumpPatchupGenerator(insns, allocator) - emit_moves(gen, tarvars, tar2src, tar2loc, src2loc) - - for i in range(len(targetlocs)): - tloc = targetlocs[i] - src = sourcevars[i] - if not isinstance(src, Var): - insns.append(insn.Load(tloc, src)) - -class Label(GenLabel): - - def __init__(self, args_gv): - self.args_gv = args_gv - #self.startaddr = startaddr - #self.arg_locations = arg_locations - self.min_stack_offset = 1 - -# our approach to stack layout: - -# on function entry, the stack looks like this: - -# .... 
-# | parameter area | -# | linkage area | <- rSP points to the last word of the linkage area -# +----------------+ - -# we set things up like so: - -# | parameter area | -# | linkage area | <- rFP points to where the rSP was -# | saved registers | -# | local variables | -# +-----------------+ <- rSP points here, and moves around between basic blocks - -# points of note (as of 2006-11-09 anyway :-): -# 1. we currently never spill to the parameter area (should fix?) -# 2. we always save all callee-save registers -# 3. as each basic block can move the SP around as it sees fit, we index -# into the local variables area from the FP (frame pointer; it is not -# usual on the PPC to have a frame pointer, but there's no reason we -# can't have one :-) - - -class Builder(GenBuilder): - - def __init__(self, rgenop): - self.rgenop = rgenop - self.asm = RPPCAssembler() - self.asm.mc = None - self.insns = [] - self.initial_spill_offset = 0 - self.initial_var2loc = None - self.max_param_space = -1 - self.final_jump_addr = 0 - - self.start = 0 - self.closed = True - self.patch_start_here = 0 - - # ---------------------------------------------------------------- - # the public Builder interface: - - def end(self): - pass - - @specialize.arg(1) - def genop1(self, opname, gv_arg): - #print opname, 'on', id(self) - genmethod = getattr(self, 'op_' + opname) - r = genmethod(gv_arg) - #print '->', id(r) - return r - - @specialize.arg(1) - def genop2(self, opname, gv_arg1, gv_arg2): - #print opname, 'on', id(self) - genmethod = getattr(self, 'op_' + opname) - r = genmethod(gv_arg1, gv_arg2) - #print '->', id(r) - return r - - @specialize.arg(1) - def genraisingop2(self, opname, gv_arg1, gv_arg2): - genmethod = getattr(self, 'raisingop_' + opname) - r = genmethod(gv_arg1, gv_arg2) - return r - - @specialize.arg(1) - def genraisingop1(self, opname, gv_arg): - genmethod = getattr(self, 'raisingop_' + opname) - r = genmethod(gv_arg) - return r - - def genop_call(self, sigtoken, gv_fnptr, args_gv): - self.insns.append(insn.SpillCalleeSaves()) - for i in range(len(args_gv)): - self.insns.append(insn.LoadArg(i, args_gv[i])) - gv_result = Var() - self.max_param_space = max(self.max_param_space, len(args_gv)*4) - self.insns.append(insn.CALL(gv_result, gv_fnptr)) - return gv_result - - def genop_getfield(self, fieldtoken, gv_ptr): - fieldoffset, fieldsize = fieldtoken - opcode = {1:_PPC.lbz, 2:_PPC.lhz, 4:_PPC.lwz}[fieldsize] - return self._arg_simm_op(gv_ptr, IntConst(fieldoffset), opcode) - - def genop_setfield(self, fieldtoken, gv_ptr, gv_value): - gv_result = Var() - fieldoffset, fieldsize = fieldtoken - opcode = {1:_PPC.stb, 2:_PPC.sth, 4:_PPC.stw}[fieldsize] - self.insns.append( - insn.Insn_None__GPR_GPR_IMM(opcode, - [gv_value, gv_ptr, IntConst(fieldoffset)])) - return gv_result - - def genop_getsubstruct(self, fieldtoken, gv_ptr): - return self._arg_simm_op(gv_ptr, IntConst(fieldtoken[0]), _PPC.addi) - - def genop_getarrayitem(self, arraytoken, gv_ptr, gv_index): - _, _, itemsize = arraytoken - opcode = {1:_PPC.lbzx, - 2:_PPC.lhzx, - 4:_PPC.lwzx}[itemsize] - opcodei = {1:_PPC.lbz, - 2:_PPC.lhz, - 4:_PPC.lwz}[itemsize] - gv_itemoffset = self.itemoffset(arraytoken, gv_index) - return self._arg_arg_op_with_simm(gv_ptr, gv_itemoffset, opcode, opcodei) - - def genop_getarraysubstruct(self, arraytoken, gv_ptr, gv_index): - _, _, itemsize = arraytoken - assert itemsize == 4 - gv_itemoffset = self.itemoffset(arraytoken, gv_index) - return self._arg_arg_op_with_simm(gv_ptr, gv_itemoffset, _PPC.add, _PPC.addi, - 
commutative=True) - - def genop_getarraysize(self, arraytoken, gv_ptr): - lengthoffset, _, _ = arraytoken - return self._arg_simm_op(gv_ptr, IntConst(lengthoffset), _PPC.lwz) - - def genop_setarrayitem(self, arraytoken, gv_ptr, gv_index, gv_value): - _, _, itemsize = arraytoken - gv_itemoffset = self.itemoffset(arraytoken, gv_index) - gv_result = Var() - if gv_itemoffset.fits_in_simm(): - opcode = {1:_PPC.stb, - 2:_PPC.sth, - 4:_PPC.stw}[itemsize] - self.insns.append( - insn.Insn_None__GPR_GPR_IMM(opcode, - [gv_value, gv_ptr, gv_itemoffset])) - else: - opcode = {1:_PPC.stbx, - 2:_PPC.sthx, - 4:_PPC.stwx}[itemsize] - self.insns.append( - insn.Insn_None__GPR_GPR_GPR(opcode, - [gv_value, gv_ptr, gv_itemoffset])) - - def genop_malloc_fixedsize(self, alloctoken): - return self.genop_call(1, # COUGH - IntConst(gc_malloc_fnaddr()), - [IntConst(alloctoken)]) - - def genop_malloc_varsize(self, varsizealloctoken, gv_size): - gv_itemoffset = self.itemoffset(varsizealloctoken, gv_size) - gv_result = self.genop_call(1, # COUGH - IntConst(gc_malloc_fnaddr()), - [gv_itemoffset]) - lengthoffset, _, _ = varsizealloctoken - self.insns.append( - insn.Insn_None__GPR_GPR_IMM(_PPC.stw, - [gv_size, gv_result, IntConst(lengthoffset)])) - return gv_result - - def genop_same_as(self, gv_arg): - if not isinstance(gv_arg, Var): - gv_result = Var() - gv_arg.load(self.insns, gv_result) - return gv_result - else: - return gv_arg - - def genop_cast_int_to_ptr(self, kind, gv_int): - return gv_int - -## def genop_debug_pdb(self): # may take an args_gv later - - def genop_get_frame_base(self): - gv_result = Var() - self.insns.append( - insn.LoadFramePointer(gv_result)) - return gv_result - - def get_frame_info(self, vars_gv): - result = [] - for v in vars_gv: - if isinstance(v, Var): - place = StackInfo() - self.insns.append(insn.CopyIntoStack(place, v)) - result.append(place) - else: - result.append(None) - return result - - def alloc_frame_place(self, kind, gv_initial_value=None): - place = StackInfo() - if gv_initial_value is None: - gv_initial_value = AddrConst(llmemory.NULL) - self.insns.append(insn.CopyIntoStack(place, gv_initial_value)) - return place - - def genop_absorb_place(self, place): - var = Var() - self.insns.append(insn.CopyOffStack(var, place)) - return var - - def enter_next_block(self, args_gv): - if DEBUG_PRINT: - print 'enter_next_block1', args_gv - seen = {} - for i in range(len(args_gv)): - gv = args_gv[i] - if isinstance(gv, Var): - if gv in seen: - new_gv = self._arg_op(gv, _PPC.mr) - args_gv[i] = new_gv - seen[gv] = True - else: - new_gv = Var() - gv.load(self.insns, new_gv) - args_gv[i] = new_gv - - if DEBUG_PRINT: - print 'enter_next_block2', args_gv - - r = Label(args_gv) - self.insns.append(insn.Label(r)) - return r - - def jump_if_false(self, gv_condition, args_gv): - return self._jump(gv_condition, False, args_gv) - - def jump_if_true(self, gv_condition, args_gv): - return self._jump(gv_condition, True, args_gv) - - def finish_and_return(self, sigtoken, gv_returnvar): - self.insns.append(insn.Return(gv_returnvar)) - self.allocate_and_emit([]) - - # standard epilogue: - - # restore old SP - self.asm.lwz(rSP, rSP, 0) - # restore all callee-save GPRs - self.asm.lmw(gprs[32-NSAVEDREGISTERS].number, rSP, -4*(NSAVEDREGISTERS+1)) - # restore Condition Register - self.asm.lwz(rSCRATCH, rSP, 4) - self.asm.mtcr(rSCRATCH) - # restore Link Register and jump to it - self.asm.lwz(rSCRATCH, rSP, 8) - self.asm.mtlr(rSCRATCH) - self.asm.blr() - - self._close() - - def finish_and_goto(self, outputargs_gv, 
target): - if target.min_stack_offset == 1: - self.pause_writing(outputargs_gv) - self.start_writing() - allocator = self.allocate(outputargs_gv) - if DEBUG_PRINT: - before_moves = len(self.insns) - print outputargs_gv - print target.args_gv - allocator.spill_offset = min(allocator.spill_offset, target.min_stack_offset) - prepare_for_jump( - self.insns, outputargs_gv, allocator.var2loc, target, allocator) - if DEBUG_PRINT: - print 'moves:' - for i in self.insns[before_moves:]: - print ' ', i - self.emit(allocator) - here_size = self._stack_size(allocator.spill_offset) - there_size = self._stack_size(target.min_stack_offset) - if here_size != there_size: - self.emit_stack_adjustment(there_size) - if self.rgenop.DEBUG_SCRIBBLE: - if here_size > there_size: - offsets = range(there_size, here_size, 4) - else: - offsets = range(here_size, there_size, 4) - for offset in offsets: - self.asm.load_word(rSCRATCH, 0x23456789) - self.asm.stw(rSCRATCH, rSP, -offset) - self.asm.load_word(rSCRATCH, target.startaddr) - self.asm.mtctr(rSCRATCH) - self.asm.bctr() - self._close() - - def flexswitch(self, gv_exitswitch, args_gv): - # make sure the exitswitch ends the block in a register: - crresult = Var() - self.insns.append(insn.FakeUse(crresult, gv_exitswitch)) - allocator = self.allocate_and_emit(args_gv) - switch_mc = self.asm.mc.reserve(7 * 5 + 4) - self._close() - result = FlexSwitch(self.rgenop, switch_mc, - allocator.loc_of(gv_exitswitch), - allocator.loc_of(crresult), - allocator.var2loc, - allocator.spill_offset) - return result, result.add_default() - - def start_writing(self): - if not self.closed: - return self - assert self.asm.mc is None - if self.final_jump_addr != 0: - mc = self.rgenop.open_mc() - target = mc.tell() - if target == self.final_jump_addr + 16: - mc.setpos(mc.getpos()-4) - else: - self.asm.mc = self.rgenop.ExistingCodeBlock( - self.final_jump_addr, self.final_jump_addr+8) - self.asm.load_word(rSCRATCH, target) - flush_icache(self.final_jump_addr, 8) - self._code_start = mc.tell() - self.asm.mc = mc - self.final_jump_addr = 0 - self.closed = False - return self - else: - self._open() - self.maybe_patch_start_here() - return self - - def maybe_patch_start_here(self): - if self.patch_start_here: - mc = self.asm.mc - self.asm.mc = self.rgenop.ExistingCodeBlock( - self.patch_start_here, self.patch_start_here+8) - self.asm.load_word(rSCRATCH, mc.tell()) - flush_icache(self.patch_start_here, 8) - self.asm.mc = mc - self.patch_start_here = 0 - - def pause_writing(self, args_gv): - allocator = self.allocate_and_emit(args_gv) - self.initial_var2loc = allocator.var2loc - self.initial_spill_offset = allocator.spill_offset - self.insns = [] - self.max_param_space = -1 - self.final_jump_addr = self.asm.mc.tell() - self.closed = True - self.asm.nop() - self.asm.nop() - self.asm.mtctr(rSCRATCH) - self.asm.bctr() - self._close() - return self - - # ---------------------------------------------------------------- - # ppc-specific interface: - - def itemoffset(self, arraytoken, gv_index): - # if gv_index is constant, this can return a constant... 
- lengthoffset, startoffset, itemsize = arraytoken - - gv_offset = Var() - self.insns.append( - insn.Insn_GPR__GPR_IMM(RPPCAssembler.mulli, - gv_offset, [gv_index, IntConst(itemsize)])) - gv_itemoffset = Var() - self.insns.append( - insn.Insn_GPR__GPR_IMM(RPPCAssembler.addi, - gv_itemoffset, [gv_offset, IntConst(startoffset)])) - return gv_itemoffset - - def _write_prologue(self, sigtoken): - numargs = sigtoken # for now - if DEBUG_TRAP: - self.asm.trap() - inputargs = [Var() for i in range(numargs)] - assert self.initial_var2loc is None - self.initial_var2loc = {} - for arg in inputargs[:8]: - self.initial_var2loc[arg] = gprs[3+len(self.initial_var2loc)] - if len(inputargs) > 8: - for i in range(8, len(inputargs)): - arg = inputargs[i] - self.initial_var2loc[arg] = insn.stack_slot(24 + 4 * len(self.initial_var2loc)) - self.initial_spill_offset = self._var_offset(0) - - # Standard prologue: - - # Minimum stack space = 24+params+lv+4*GPRSAVE+8*FPRSAVE - # params = stack space for parameters for functions we call - # lv = stack space for local variables - # GPRSAVE = the number of callee-save GPRs we save, currently - # NSAVEDREGISTERS which is 19, i.e. all of them - # FPRSAVE = the number of callee-save FPRs we save, currently 0 - # Initially, we set params == lv == 0 and allow each basic block to - # ensure it has enough space to continue. - - minspace = self._stack_size(self._var_offset(0)) - # save Link Register - self.asm.mflr(rSCRATCH) - self.asm.stw(rSCRATCH, rSP, 8) - # save Condition Register - self.asm.mfcr(rSCRATCH) - self.asm.stw(rSCRATCH, rSP, 4) - # save the callee-save GPRs - self.asm.stmw(gprs[32-NSAVEDREGISTERS].number, rSP, -4*(NSAVEDREGISTERS + 1)) - # set up frame pointer - self.asm.mr(rFP, rSP) - # save stack pointer into linkage area and set stack pointer for us. - self.asm.stwu(rSP, rSP, -minspace) - - if self.rgenop.DEBUG_SCRIBBLE: - # write junk into all non-argument, non rFP or rSP registers - self.asm.load_word(rSCRATCH, 0x12345678) - for i in range(min(11, 3+len(self.initial_var2loc)), 32): - self.asm.load_word(i, 0x12345678) - # scribble the part of the stack between - # self._var_offset(0) and minspace - for offset in range(self._var_offset(0), -minspace, -4): - self.asm.stw(rSCRATCH, rFP, offset) - # and then a bit more - for offset in range(-minspace-4, -minspace-200, -4): - self.asm.stw(rSCRATCH, rFP, offset) - - return inputargs - - def _var_offset(self, v): - """v represents an offset into the local variable area in bytes; - this returns the offset relative to rFP""" - return -(4*NSAVEDREGISTERS+4+v) - - def _stack_size(self, lv): - """ Returns the required stack size to store all data, assuming - that there are 'param' bytes of parameters for callee functions and - 'lv' is the largest (wrt to abs() :) rFP-relative byte offset of - any variable on the stack. 
Plus 4 because the rFP actually points - into our caller's linkage area.""" - assert lv <= 0 - if self.max_param_space >= 0: - param = max(self.max_param_space, 32) + 24 - else: - param = 0 - return ((4 + param - lv + 15) & ~15) - - def _open(self): - self.asm.mc = self.rgenop.open_mc() - self._code_start = self.asm.mc.tell() - self.closed = False - - def _close(self): - _code_stop = self.asm.mc.tell() - code_size = _code_stop - self._code_start - flush_icache(self._code_start, code_size) - self.rgenop.close_mc(self.asm.mc) - self.asm.mc = None - - def allocate_and_emit(self, live_vars_gv): - allocator = self.allocate(live_vars_gv) - return self.emit(allocator) - - def allocate(self, live_vars_gv): - assert self.initial_var2loc is not None - allocator = RegisterAllocation( - self.rgenop.freeregs, - self.initial_var2loc, - self.initial_spill_offset) - self.insns = allocator.allocate_for_insns(self.insns) - return allocator - - def emit(self, allocator): - in_size = self._stack_size(self.initial_spill_offset) - our_size = self._stack_size(allocator.spill_offset) - if in_size != our_size: - assert our_size > in_size - self.emit_stack_adjustment(our_size) - if self.rgenop.DEBUG_SCRIBBLE: - for offset in range(in_size, our_size, 4): - self.asm.load_word(rSCRATCH, 0x23456789) - self.asm.stw(rSCRATCH, rSP, -offset) - if self.rgenop.DEBUG_SCRIBBLE: - locs = {} - for _, loc in self.initial_var2loc.iteritems(): - locs[loc] = True - regs = insn.gprs[3:] - for reg in regs: - if reg not in locs: - self.asm.load_word(reg.number, 0x3456789) - self.asm.load_word(0, 0x3456789) - for offset in range(self._var_offset(0), - self.initial_spill_offset, - -4): - if insn.stack_slot(offset) not in locs: - self.asm.stw(0, rFP, offset) - for insn_ in self.insns: - insn_.emit(self.asm) - for label in allocator.labels_to_tell_spill_offset_to: - label.min_stack_offset = allocator.spill_offset - for builder in allocator.builders_to_tell_spill_offset_to: - builder.initial_spill_offset = allocator.spill_offset - return allocator - - def emit_stack_adjustment(self, newsize): - # the ABI requires that at all times that r1 is valid, in the - # sense that it must point to the bottom of the stack and that - # executing SP <- *(SP) repeatedly walks the stack. - # this code satisfies this, although there is a 1-instruction - # window where such walking would find a strange intermediate - # "frame" - self.asm.addi(rSCRATCH, rFP, -newsize) - self.asm.sub(rSCRATCH, rSCRATCH, rSP) - - # this is a pure debugging check that we avoid the situation - # where *(r1) == r1 which would violates the ABI rules listed - # above. 
after a while it can be removed or maybe made - # conditional on some --option passed to py.test - self.asm.tweqi(rSCRATCH, 0) - - self.asm.stwux(rSP, rSP, rSCRATCH) - self.asm.stw(rFP, rSP, 0) - - def _arg_op(self, gv_arg, opcode): - gv_result = Var() - self.insns.append( - insn.Insn_GPR__GPR(opcode, gv_result, gv_arg)) - return gv_result - - def _arg_arg_op(self, gv_x, gv_y, opcode): - gv_result = Var() - self.insns.append( - insn.Insn_GPR__GPR_GPR(opcode, gv_result, [gv_x, gv_y])) - return gv_result - - def _arg_simm_op(self, gv_x, gv_imm, opcode): - assert gv_imm.fits_in_simm() - gv_result = Var() - self.insns.append( - insn.Insn_GPR__GPR_IMM(opcode, gv_result, [gv_x, gv_imm])) - return gv_result - - def _arg_uimm_op(self, gv_x, gv_imm, opcode): - assert gv_imm.fits_in_uimm() - gv_result = Var() - self.insns.append( - insn.Insn_GPR__GPR_IMM(opcode, gv_result, [gv_x, gv_imm])) - return gv_result - - def _arg_arg_op_with_simm(self, gv_x, gv_y, opcode, opcodei, - commutative=False): - if gv_y.fits_in_simm(): - return self._arg_simm_op(gv_x, gv_y, opcodei) - elif gv_x.fits_in_simm() and commutative: - return self._arg_simm_op(gv_y, gv_x, opcodei) - else: - return self._arg_arg_op(gv_x, gv_y, opcode) - - def _arg_arg_op_with_uimm(self, gv_x, gv_y, opcode, opcodei, - commutative=False): - if gv_y.fits_in_uimm(): - return self._arg_uimm_op(gv_x, gv_y, opcodei) - elif gv_x.fits_in_uimm() and commutative: - return self._arg_uimm_op(gv_y, gv_x, opcodei) - else: - return self._arg_arg_op(gv_x, gv_y, opcode) - - def _identity(self, gv_arg): - return gv_arg - - cmp2info = { - # bit-in-crf negated - 'gt': ( 1, 0 ), - 'lt': ( 0, 0 ), - 'le': ( 1, 1 ), - 'ge': ( 0, 1 ), - 'eq': ( 2, 0 ), - 'ne': ( 2, 1 ), - } - - cmp2info_flipped = { - # bit-in-crf negated - 'gt': ( 0, 0 ), - 'lt': ( 1, 0 ), - 'le': ( 0, 1 ), - 'ge': ( 1, 1 ), - 'eq': ( 2, 0 ), - 'ne': ( 2, 1 ), - } - - def _compare(self, op, gv_x, gv_y): - #print "op", op - gv_result = ConditionVar() - if gv_y.fits_in_simm(): - self.insns.append( - insn.CMPWI(self.cmp2info[op], gv_result, [gv_x, gv_y])) - elif gv_x.fits_in_simm(): - self.insns.append( - insn.CMPWI(self.cmp2info_flipped[op], gv_result, [gv_y, gv_x])) - else: - self.insns.append( - insn.CMPW(self.cmp2info[op], gv_result, [gv_x, gv_y])) - return gv_result - - def _compare_u(self, op, gv_x, gv_y): - gv_result = ConditionVar() - if gv_y.fits_in_uimm(): - self.insns.append( - insn.CMPWLI(self.cmp2info[op], gv_result, [gv_x, gv_y])) - elif gv_x.fits_in_uimm(): - self.insns.append( - insn.CMPWLI(self.cmp2info_flipped[op], gv_result, [gv_y, gv_x])) - else: - self.insns.append( - insn.CMPWL(self.cmp2info[op], gv_result, [gv_x, gv_y])) - return gv_result - - def _jump(self, gv_condition, if_true, args_gv): - targetbuilder = self.rgenop.newbuilder() - - self.insns.append( - insn.Jump(gv_condition, targetbuilder, if_true, args_gv)) - - return targetbuilder - - def _ov(self): - # mfxer rFOO - # extrwi rBAR, rFOO, 1, 1 - gv_xer = Var() - self.insns.append( - insn.Insn_GPR(_PPC.mfxer, gv_xer)) - gv_ov = Var() - self.insns.append(insn.Extrwi(gv_ov, gv_xer, 1, 1)) - return gv_ov - - def op_bool_not(self, gv_arg): - return self._arg_uimm_op(gv_arg, self.rgenop.genconst(1), RPPCAssembler.xori) - - def op_int_is_true(self, gv_arg): - return self._compare('ne', gv_arg, self.rgenop.genconst(0)) - - def op_int_neg(self, gv_arg): - return self._arg_op(gv_arg, _PPC.neg) - - def raisingop_int_neg_ovf(self, gv_arg): - gv_result = self._arg_op(gv_arg, _PPC.nego) - gv_ov = self._ov() - return (gv_result, gv_ov) - 
- def op_int_abs(self, gv_arg): - gv_sign = self._arg_uimm_op(gv_arg, self.rgenop.genconst(31), _PPC.srawi) - gv_maybe_inverted = self._arg_arg_op(gv_arg, gv_sign, _PPC.xor) - return self._arg_arg_op(gv_sign, gv_maybe_inverted, _PPC.subf) - - def raisingop_int_abs_ovf(self, gv_arg): - gv_sign = self._arg_uimm_op(gv_arg, self.rgenop.genconst(31), _PPC.srawi) - gv_maybe_inverted = self._arg_arg_op(gv_arg, gv_sign, _PPC.xor) - gv_result = self._arg_arg_op(gv_sign, gv_maybe_inverted, _PPC.subfo) - return (gv_result, self._ov()) - - def op_int_invert(self, gv_arg): - return self._arg_op(gv_arg, _PPC.not_) - - def op_int_add(self, gv_x, gv_y): - return self._arg_arg_op_with_simm(gv_x, gv_y, _PPC.add, _PPC.addi, - commutative=True) - - def raisingop_int_add_ovf(self, gv_x, gv_y): - gv_result = self._arg_arg_op(gv_x, gv_y, _PPC.addo) - gv_ov = self._ov() - return (gv_result, gv_ov) - - def op_int_sub(self, gv_x, gv_y): - return self._arg_arg_op_with_simm(gv_x, gv_y, _PPC.sub, _PPC.subi) - - def raisingop_int_sub_ovf(self, gv_x, gv_y): - gv_result = self._arg_arg_op(gv_x, gv_y, _PPC.subo) - gv_ov = self._ov() - return (gv_result, gv_ov) - - def op_int_mul(self, gv_x, gv_y): - return self._arg_arg_op_with_simm(gv_x, gv_y, _PPC.mullw, _PPC.mulli, - commutative=True) - - def raisingop_int_mul_ovf(self, gv_x, gv_y): - gv_result = self._arg_arg_op(gv_x, gv_y, _PPC.mullwo) - gv_ov = self._ov() - return (gv_result, gv_ov) - - def op_int_floordiv(self, gv_x, gv_y): - return self._arg_arg_op(gv_x, gv_y, _PPC.divw) - - ## def op_int_floordiv_zer(self, gv_x, gv_y): - - def op_int_mod(self, gv_x, gv_y): - gv_dividend = self.op_int_floordiv(gv_x, gv_y) - gv_z = self.op_int_mul(gv_dividend, gv_y) - return self.op_int_sub(gv_x, gv_z) - - ## def op_int_mod_zer(self, gv_x, gv_y): - - def op_int_lt(self, gv_x, gv_y): - return self._compare('lt', gv_x, gv_y) - - def op_int_le(self, gv_x, gv_y): - return self._compare('le', gv_x, gv_y) - - def op_int_eq(self, gv_x, gv_y): - return self._compare('eq', gv_x, gv_y) - - def op_int_ne(self, gv_x, gv_y): - return self._compare('ne', gv_x, gv_y) - - def op_int_gt(self, gv_x, gv_y): - return self._compare('gt', gv_x, gv_y) - - def op_int_ge(self, gv_x, gv_y): - return self._compare('ge', gv_x, gv_y) - - op_char_lt = op_int_lt - op_char_le = op_int_le - op_char_eq = op_int_eq - op_char_ne = op_int_ne - op_char_gt = op_int_gt - op_char_ge = op_int_ge - - op_unichar_eq = op_int_eq - op_unichar_ne = op_int_ne - - def op_int_and(self, gv_x, gv_y): - return self._arg_arg_op(gv_x, gv_y, _PPC.and_) - - def op_int_or(self, gv_x, gv_y): - return self._arg_arg_op_with_uimm(gv_x, gv_y, _PPC.or_, _PPC.ori, - commutative=True) - - def op_int_lshift(self, gv_x, gv_y): - if gv_y.fits_in_simm(): - if abs(gv_y.value) >= 32: - return self.rgenop.genconst(0) - else: - return self._arg_uimm_op(gv_x, gv_y, _PPC.slwi) - # computing x << y when you don't know y is <=32 - # (we can assume y >= 0 though) - # here's the plan: - # - # z = nltu(y, 32) (as per cwg) - # w = x << y - # r = w&z - gv_a = self._arg_simm_op(gv_y, self.rgenop.genconst(32), _PPC.subfic) - gv_b = self._arg_op(gv_y, _PPC.addze) - gv_z = self._arg_arg_op(gv_b, gv_y, _PPC.subf) - gv_w = self._arg_arg_op(gv_x, gv_y, _PPC.slw) - return self._arg_arg_op(gv_z, gv_w, _PPC.and_) - - ## def op_int_lshift_val(self, gv_x, gv_y): - - def op_int_rshift(self, gv_x, gv_y): - if gv_y.fits_in_simm(): - if abs(gv_y.value) >= 32: - gv_y = self.rgenop.genconst(31) - return self._arg_simm_op(gv_x, gv_y, _PPC.srawi) - # computing x >> y when you don't 
know y is <=32 - # (we can assume y >= 0 though) - # here's the plan: - # - # ntlu_y_32 = nltu(y, 32) (as per cwg) - # o = srawi(x, 31) & ~ntlu_y_32 - # w = (x >> y) & ntlu_y_32 - # r = w|o - gv_a = self._arg_uimm_op(gv_y, self.rgenop.genconst(32), _PPC.subfic) - gv_b = self._arg_op(gv_y, _PPC.addze) - gv_ntlu_y_32 = self._arg_arg_op(gv_b, gv_y, _PPC.subf) - - gv_c = self._arg_uimm_op(gv_x, self.rgenop.genconst(31), _PPC.srawi) - gv_o = self._arg_arg_op(gv_c, gv_ntlu_y_32, _PPC.andc_) - - gv_e = self._arg_arg_op(gv_x, gv_y, _PPC.sraw) - gv_w = self._arg_arg_op(gv_e, gv_ntlu_y_32, _PPC.and_) - - return self._arg_arg_op(gv_o, gv_w, _PPC.or_) - - ## def op_int_rshift_val(self, gv_x, gv_y): - - def op_int_xor(self, gv_x, gv_y): - return self._arg_arg_op_with_uimm(gv_x, gv_y, _PPC.xor, _PPC.xori, - commutative=True) - - ## various int_*_ovfs - - op_uint_is_true = op_int_is_true - op_uint_invert = op_int_invert - - op_uint_add = op_int_add - op_uint_sub = op_int_sub - op_uint_mul = op_int_mul - - def op_uint_floordiv(self, gv_x, gv_y): - return self._arg_arg_op(gv_x, gv_y, _PPC.divwu) - - ## def op_uint_floordiv_zer(self, gv_x, gv_y): - - def op_uint_mod(self, gv_x, gv_y): - gv_dividend = self.op_uint_floordiv(gv_x, gv_y) - gv_z = self.op_uint_mul(gv_dividend, gv_y) - return self.op_uint_sub(gv_x, gv_z) - - ## def op_uint_mod_zer(self, gv_x, gv_y): - - def op_uint_lt(self, gv_x, gv_y): - return self._compare_u('lt', gv_x, gv_y) - - def op_uint_le(self, gv_x, gv_y): - return self._compare_u('le', gv_x, gv_y) - - def op_uint_eq(self, gv_x, gv_y): - return self._compare_u('eq', gv_x, gv_y) - - def op_uint_ne(self, gv_x, gv_y): - return self._compare_u('ne', gv_x, gv_y) - - def op_uint_gt(self, gv_x, gv_y): - return self._compare_u('gt', gv_x, gv_y) - - def op_uint_ge(self, gv_x, gv_y): - return self._compare_u('ge', gv_x, gv_y) - - op_uint_and = op_int_and - op_uint_or = op_int_or - - op_uint_lshift = op_int_lshift - - ## def op_uint_lshift_val(self, gv_x, gv_y): - - def op_uint_rshift(self, gv_x, gv_y): - if gv_y.fits_in_simm(): - if abs(gv_y.value) >= 32: - return self.rgenop.genconst(0) - else: - return self._arg_simm_op(gv_x, gv_y, _PPC.srwi) - # computing x << y when you don't know y is <=32 - # (we can assume y >=0 though, i think) - # here's the plan: - # - # z = ngeu(y, 32) (as per cwg) - # w = x >> y - # r = w&z - gv_a = self._arg_simm_op(gv_y, self.rgenop.genconst(32), _PPC.subfic) - gv_b = self._arg_op(gv_y, _PPC.addze) - gv_z = self._arg_arg_op(gv_b, gv_y, _PPC.subf) - gv_w = self._arg_arg_op(gv_x, gv_y, _PPC.srw) - return self._arg_arg_op(gv_z, gv_w, _PPC.and_) - ## def op_uint_rshift_val(self, gv_x, gv_y): - - op_uint_xor = op_int_xor - - # ... floats ... - - # ... llongs, ullongs ... - - # here we assume that booleans are always 1 or 0 and chars are - # always zero-padded. 
- - op_cast_bool_to_int = _identity - op_cast_bool_to_uint = _identity - ## def op_cast_bool_to_float(self, gv_arg): - op_cast_char_to_int = _identity - op_cast_unichar_to_int = _identity - op_cast_int_to_char = _identity - - op_cast_int_to_unichar = _identity - op_cast_int_to_uint = _identity - ## def op_cast_int_to_float(self, gv_arg): - ## def op_cast_int_to_longlong(self, gv_arg): - op_cast_uint_to_int = _identity - ## def op_cast_uint_to_float(self, gv_arg): - ## def op_cast_float_to_int(self, gv_arg): - ## def op_cast_float_to_uint(self, gv_arg): - ## def op_truncate_longlong_to_int(self, gv_arg): - - # many pointer operations are genop_* special cases above - - op_ptr_eq = op_int_eq - op_ptr_ne = op_int_ne - - op_ptr_nonzero = op_int_is_true - op_ptr_ne = op_int_ne - op_ptr_eq = op_int_eq - - def op_ptr_iszero(self, gv_arg): - return self._compare('eq', gv_arg, self.rgenop.genconst(0)) - - op_cast_ptr_to_int = _identity - - # ... address operations ... - - at specialize.arg(0) -def cast_int_to_whatever(T, value): - if isinstance(T, lltype.Ptr): - return lltype.cast_int_to_ptr(T, value) - elif T is llmemory.Address: - return llmemory.cast_int_to_adr(value) - else: - return lltype.cast_primitive(T, value) - - at specialize.arg(0) -def cast_whatever_to_int(T, value): - if isinstance(T, lltype.Ptr): - return lltype.cast_ptr_to_int(value) - elif T is llmemory.Address: - return llmemory.cast_adr_to_int(value) - else: - return lltype.cast_primitive(lltype.Signed, value) - -class RPPCGenOp(AbstractRGenOp): - - # the set of registers we consider available for allocation - # we can artifically restrict it for testing purposes - freeregs = { - insn.GP_REGISTER:insn.gprs[3:], - insn.FP_REGISTER:insn.fprs, - insn.CR_FIELD:insn.crfs, - insn.CT_REGISTER:[insn.ctr]} - DEBUG_SCRIBBLE = option.debug_scribble - MC_SIZE = 65536 - - def __init__(self): - self.mcs = [] # machine code blocks where no-one is currently writing - self.keepalive_gc_refs = [] - - # ---------------------------------------------------------------- - # the public RGenOp interface - - def newgraph(self, sigtoken, name): - numargs = sigtoken # for now - builder = self.newbuilder() - builder._open() - entrypoint = builder.asm.mc.tell() - inputargs_gv = builder._write_prologue(sigtoken) - return builder, IntConst(entrypoint), inputargs_gv - - @specialize.genconst(1) - def genconst(self, llvalue): - T = lltype.typeOf(llvalue) - if T is llmemory.Address: - return AddrConst(llvalue) - elif isinstance(T, lltype.Primitive): - return IntConst(lltype.cast_primitive(lltype.Signed, llvalue)) - elif isinstance(T, lltype.Ptr): - lladdr = llmemory.cast_ptr_to_adr(llvalue) - if T.TO._gckind == 'gc': - self.keepalive_gc_refs.append(lltype.cast_opaque_ptr(llmemory.GCREF, llvalue)) - return AddrConst(lladdr) - else: - assert 0, "XXX not implemented" - -## @staticmethod -## @specialize.genconst(0) -## def constPrebuiltGlobal(llvalue): - - @staticmethod - def genzeroconst(kind): - return zero_const - - def replay(self, label): - return ReplayBuilder(self), [dummy_var] * len(label.args_gv) - - @staticmethod - def erasedType(T): - if T is llmemory.Address: - return llmemory.Address - if isinstance(T, lltype.Primitive): - return lltype.Signed - elif isinstance(T, lltype.Ptr): - return llmemory.GCREF - else: - assert 0, "XXX not implemented" - - @staticmethod - @specialize.memo() - def fieldToken(T, name): - FIELD = getattr(T, name) - if isinstance(FIELD, lltype.ContainerType): - fieldsize = 0 # not useful for getsubstruct - else: - fieldsize = 
llmemory.sizeof(FIELD) - return (llmemory.offsetof(T, name), fieldsize) - - @staticmethod - @specialize.memo() - def allocToken(T): - return llmemory.sizeof(T) - - @staticmethod - @specialize.memo() - def varsizeAllocToken(T): - if isinstance(T, lltype.Array): - return RPPCGenOp.arrayToken(T) - else: - # var-sized structs - arrayfield = T._arrayfld - ARRAYFIELD = getattr(T, arrayfield) - arraytoken = RPPCGenOp.arrayToken(ARRAYFIELD) - length_offset, items_offset, item_size = arraytoken - arrayfield_offset = llmemory.offsetof(T, arrayfield) - return (arrayfield_offset+length_offset, - arrayfield_offset+items_offset, - item_size) - - @staticmethod - @specialize.memo() - def arrayToken(A): - return (llmemory.ArrayLengthOffset(A), - llmemory.ArrayItemsOffset(A), - llmemory.ItemOffset(A.OF)) - - @staticmethod - @specialize.memo() - def kindToken(T): - if T is lltype.Float: - py.test.skip("not implemented: floats in the i386^WPPC back-end") - return None # for now - - @staticmethod - @specialize.memo() - def sigToken(FUNCTYPE): - return len(FUNCTYPE.ARGS) # for now - - @staticmethod - @specialize.arg(0) - def read_frame_var(T, base, info, index): - """Read from the stack frame of a caller. The 'base' is the - frame stack pointer captured by the operation generated by - genop_get_frame_base(). The 'info' is the object returned by - get_frame_info(); we are looking for the index-th variable - in the list passed to get_frame_info().""" - place = info[index] - if isinstance(place, StackInfo): - #print '!!!', base, place.offset - #print '???', [peek_word_at(base + place.offset + i) - # for i in range(-64, 65, 4)] - assert place.offset != 0 - value = peek_word_at(base + place.offset) - return cast_int_to_whatever(T, value) - else: - assert isinstance(place, GenConst) - return place.revealconst(T) - - @staticmethod - @specialize.arg(0) - def genconst_from_frame_var(kind, base, info, index): - place = info[index] - if isinstance(place, StackInfo): - #print '!!!', base, place.offset - #print '???', [peek_word_at(base + place.offset + i) - # for i in range(-64, 65, 4)] - assert place.offset != 0 - value = peek_word_at(base + place.offset) - return IntConst(value) - else: - assert isinstance(place, GenConst) - return place - - - @staticmethod - @specialize.arg(0) - def write_frame_place(T, base, place, value): - assert place.offset != 0 - value = cast_whatever_to_int(T, value) - poke_word_into(base + place.offset, value) - - @staticmethod - @specialize.arg(0) - def read_frame_place(T, base, place): - value = peek_word_at(base + place.offset) - return cast_int_to_whatever(T, value) - - def check_no_open_mc(self): - pass - - # ---------------------------------------------------------------- - # ppc-specific interface: - - MachineCodeBlock = codebuf.OwningMachineCodeBlock - ExistingCodeBlock = codebuf.ExistingCodeBlock - - def open_mc(self): - if self.mcs: - return self.mcs.pop() - else: - return self.MachineCodeBlock(self.MC_SIZE) # XXX supposed infinite for now - - def close_mc(self, mc): -## from pypy.jit.codegen.ppc.ppcgen.asmfunc import get_ppcgen -## print '!!!!', cast(mc._data, c_void_p).value -## print '!!!!', mc._data.contents[0] -## get_ppcgen().flush2(cast(mc._data, c_void_p).value, -## mc._size*4) - self.mcs.append(mc) - - def newbuilder(self): - return Builder(self) - -# a switch can take 7 instructions: - -# load_word rSCRATCH, gv_case.value (really two instructions) -# cmpw crf, rSWITCH, rSCRATCH -# load_word rSCRATCH, targetaddr (again two instructions) -# mtctr rSCRATCH -# beqctr crf - -# yay 
RISC :/ - -class FlexSwitch(CodeGenSwitch): - - # a fair part of this code could likely be shared with the i386 - # backend. - - def __init__(self, rgenop, mc, switch_reg, crf, var2loc, initial_spill_offset): - self.rgenop = rgenop - self.crf = crf - self.switch_reg = switch_reg - self.var2loc = var2loc - self.initial_spill_offset = initial_spill_offset - self.asm = RPPCAssembler() - self.asm.mc = mc - self.default_target_addr = 0 - - def add_case(self, gv_case): - targetbuilder = self.rgenop.newbuilder() - targetbuilder._open() - targetbuilder.initial_var2loc = self.var2loc - targetbuilder.initial_spill_offset = self.initial_spill_offset - target_addr = targetbuilder.asm.mc.tell() - p = self.asm.mc.getpos() - # that this works depends a bit on the fixed length of the - # instruction sequences we use to jump around. if the code is - # ever updated to use the branch-relative instructions (a good - # idea, btw) this will need to be thought about again - try: - self._add_case(gv_case, target_addr) - except codebuf.CodeBlockOverflow: - self.asm.mc.setpos(p) - base = self.asm.mc.tell() - mc = self.rgenop.open_mc() - newmc = mc.reserve(7 * 5 + 4) - self.rgenop.close_mc(mc) - new_addr = newmc.tell() - self.asm.load_word(rSCRATCH, new_addr) - self.asm.mtctr(rSCRATCH) - self.asm.bctr() - size = self.asm.mc.tell() - base - flush_icache(base, size) - self.asm.mc = newmc - self._add_case(gv_case, target_addr) - return targetbuilder - - def _add_case(self, gv_case, target_addr): - asm = self.asm - base = self.asm.mc.tell() - assert isinstance(gv_case, GenConst) - gv_case.load_now(asm, insn.gprs[0]) - asm.cmpw(self.crf.number, rSCRATCH, self.switch_reg.number) - asm.load_word(rSCRATCH, target_addr) - asm.mtctr(rSCRATCH) - asm.bcctr(12, self.crf.number*4 + 2) - if self.default_target_addr: - self._write_default() - size = self.asm.mc.tell() - base - flush_icache(base, size) - - def add_default(self): - targetbuilder = self.rgenop.newbuilder() - targetbuilder._open() - targetbuilder.initial_var2loc = self.var2loc - targetbuilder.initial_spill_offset = self.initial_spill_offset - base = self.asm.mc.tell() - self.default_target_addr = targetbuilder.asm.mc.tell() - self._write_default() - size = self.asm.mc.tell() - base - flush_icache(base, size) - return targetbuilder - - def _write_default(self): - pos = self.asm.mc.getpos() - self.asm.load_word(rSCRATCH, self.default_target_addr) - self.asm.mtctr(rSCRATCH) - self.asm.bctr() - self.asm.mc.setpos(pos) - -global_rgenop = RPPCGenOp() -RPPCGenOp.constPrebuiltGlobal = global_rgenop.genconst - -def peek_word_at(addr): - # now the Very Obscure Bit: when translated, 'addr' is an - # address. When not, it's an integer. It just happens to - # make the test pass, but that's probably going to change. - if we_are_translated(): - return addr.signed[0] - else: - from ctypes import cast, c_void_p, c_int, POINTER - p = cast(c_void_p(addr), POINTER(c_int)) - return p[0] - -def poke_word_into(addr, value): - # now the Very Obscure Bit: when translated, 'addr' is an - # address. When not, it's an integer. It just happens to - # make the test pass, but that's probably going to change. 
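The untranslated branch of peek_word_at above (and of poke_word_into, whose body continues just below) relies on a ctypes pointer cast to touch a raw address. A self-contained sketch of the same trick, using a throwaway ctypes buffer in place of a real stack frame (illustrative only):

    # Illustrative only: exercising the same ctypes cast the code above uses
    # when running untranslated on top of CPython.
    import ctypes
    buf = (ctypes.c_int * 1)(42)
    addr = ctypes.addressof(buf)
    p = ctypes.cast(ctypes.c_void_p(addr), ctypes.POINTER(ctypes.c_int))
    assert p[0] == 42      # what peek_word_at(addr) returns here
    p[0] = 7               # what poke_word_into(addr, 7) does here
    assert buf[0] == 7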
- if we_are_translated(): - addr.signed[0] = value - else: - from ctypes import cast, c_void_p, c_int, POINTER - p = cast(c_void_p(addr), POINTER(c_int)) - p[0] = value - -zero_const = AddrConst(llmemory.NULL) From noreply at buildbot.pypy.org Mon Feb 6 10:13:05 2012 From: noreply at buildbot.pypy.org (fijal) Date: Mon, 6 Feb 2012 10:13:05 +0100 (CET) Subject: [pypy-commit] pypy release-1.8.x: merge default Message-ID: <20120206091305.04AB77107EC@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: release-1.8.x Changeset: r52122:f77e2af16e6d Date: 2012-02-06 11:12 +0200 http://bitbucket.org/pypy/pypy/changeset/f77e2af16e6d/ Log: merge default diff too long, truncating to 10000 out of 156612 lines diff --git a/lib-python/2.7/BaseHTTPServer.py b/lib-python/2.7/BaseHTTPServer.py --- a/lib-python/2.7/BaseHTTPServer.py +++ b/lib-python/2.7/BaseHTTPServer.py @@ -310,7 +310,13 @@ """ try: - self.raw_requestline = self.rfile.readline() + self.raw_requestline = self.rfile.readline(65537) + if len(self.raw_requestline) > 65536: + self.requestline = '' + self.request_version = '' + self.command = '' + self.send_error(414) + return if not self.raw_requestline: self.close_connection = 1 return diff --git a/lib-python/2.7/ConfigParser.py b/lib-python/2.7/ConfigParser.py --- a/lib-python/2.7/ConfigParser.py +++ b/lib-python/2.7/ConfigParser.py @@ -545,6 +545,38 @@ if isinstance(val, list): options[name] = '\n'.join(val) +import UserDict as _UserDict + +class _Chainmap(_UserDict.DictMixin): + """Combine multiple mappings for successive lookups. + + For example, to emulate Python's normal lookup sequence: + + import __builtin__ + pylookup = _Chainmap(locals(), globals(), vars(__builtin__)) + """ + + def __init__(self, *maps): + self._maps = maps + + def __getitem__(self, key): + for mapping in self._maps: + try: + return mapping[key] + except KeyError: + pass + raise KeyError(key) + + def keys(self): + result = [] + seen = set() + for mapping in self_maps: + for key in mapping: + if key not in seen: + result.append(key) + seen.add(key) + return result + class ConfigParser(RawConfigParser): def get(self, section, option, raw=False, vars=None): @@ -559,16 +591,18 @@ The section DEFAULT is special. 
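As a rough illustration of the lookup order that ConfigParser.get() gains from the _Chainmap helper added above, with plain dicts standing in for the vars/section/defaults mappings (the keys and values are made up):

    # Illustrative only: vars are consulted first, then the section,
    # then the defaults -- the first mapping that has the key wins.
    vars_map = {'opt': 'from-vars'}
    section_map = {'opt': 'from-section', 'other': 'x'}
    defaults_map = {'other': 'ignored', 'fallback': 'from-defaults'}

    def chained_lookup(key, maps=(vars_map, section_map, defaults_map)):
        for mapping in maps:
            if key in mapping:
                return mapping[key]
        raise KeyError(key)

    assert chained_lookup('opt') == 'from-vars'
    assert chained_lookup('other') == 'x'
    assert chained_lookup('fallback') == 'from-defaults'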
""" - d = self._defaults.copy() + sectiondict = {} try: - d.update(self._sections[section]) + sectiondict = self._sections[section] except KeyError: if section != DEFAULTSECT: raise NoSectionError(section) # Update with the entry specific variables + vardict = {} if vars: for key, value in vars.items(): - d[self.optionxform(key)] = value + vardict[self.optionxform(key)] = value + d = _Chainmap(vardict, sectiondict, self._defaults) option = self.optionxform(option) try: value = d[option] diff --git a/lib-python/2.7/Cookie.py b/lib-python/2.7/Cookie.py --- a/lib-python/2.7/Cookie.py +++ b/lib-python/2.7/Cookie.py @@ -258,6 +258,11 @@ '\033' : '\\033', '\034' : '\\034', '\035' : '\\035', '\036' : '\\036', '\037' : '\\037', + # Because of the way browsers really handle cookies (as opposed + # to what the RFC says) we also encode , and ; + + ',' : '\\054', ';' : '\\073', + '"' : '\\"', '\\' : '\\\\', '\177' : '\\177', '\200' : '\\200', '\201' : '\\201', diff --git a/lib-python/2.7/HTMLParser.py b/lib-python/2.7/HTMLParser.py --- a/lib-python/2.7/HTMLParser.py +++ b/lib-python/2.7/HTMLParser.py @@ -26,7 +26,7 @@ tagfind = re.compile('[a-zA-Z][-.a-zA-Z0-9:_]*') attrfind = re.compile( r'\s*([a-zA-Z_][-.:a-zA-Z_0-9]*)(\s*=\s*' - r'(\'[^\']*\'|"[^"]*"|[-a-zA-Z0-9./,:;+*%?!&$\(\)_#=~@]*))?') + r'(\'[^\']*\'|"[^"]*"|[^\s"\'=<>`]*))?') locatestarttagend = re.compile(r""" <[a-zA-Z][-.a-zA-Z0-9:_]* # tag name @@ -99,7 +99,7 @@ markupbase.ParserBase.reset(self) def feed(self, data): - """Feed data to the parser. + r"""Feed data to the parser. Call this as often as you want, with as little or as much text as you want (may include '\n'). @@ -367,13 +367,16 @@ return s def replaceEntities(s): s = s.groups()[0] - if s[0] == "#": - s = s[1:] - if s[0] in ['x','X']: - c = int(s[1:], 16) - else: - c = int(s) - return unichr(c) + try: + if s[0] == "#": + s = s[1:] + if s[0] in ['x','X']: + c = int(s[1:], 16) + else: + c = int(s) + return unichr(c) + except ValueError: + return '&#'+s+';' else: # Cannot use name2codepoint directly, because HTMLParser supports apos, # which is not part of HTML 4 diff --git a/lib-python/2.7/SimpleHTTPServer.py b/lib-python/2.7/SimpleHTTPServer.py --- a/lib-python/2.7/SimpleHTTPServer.py +++ b/lib-python/2.7/SimpleHTTPServer.py @@ -15,6 +15,7 @@ import BaseHTTPServer import urllib import cgi +import sys import shutil import mimetypes try: @@ -131,7 +132,8 @@ length = f.tell() f.seek(0) self.send_response(200) - self.send_header("Content-type", "text/html") + encoding = sys.getfilesystemencoding() + self.send_header("Content-type", "text/html; charset=%s" % encoding) self.send_header("Content-Length", str(length)) self.end_headers() return f diff --git a/lib-python/2.7/SimpleXMLRPCServer.py b/lib-python/2.7/SimpleXMLRPCServer.py --- a/lib-python/2.7/SimpleXMLRPCServer.py +++ b/lib-python/2.7/SimpleXMLRPCServer.py @@ -246,7 +246,7 @@ marshalled data. For backwards compatibility, a dispatch function can be provided as an argument (see comment in SimpleXMLRPCRequestHandler.do_POST) but overriding the - existing method through subclassing is the prefered means + existing method through subclassing is the preferred means of changing method dispatch behavior. """ diff --git a/lib-python/2.7/SocketServer.py b/lib-python/2.7/SocketServer.py --- a/lib-python/2.7/SocketServer.py +++ b/lib-python/2.7/SocketServer.py @@ -675,7 +675,7 @@ # A timeout to apply to the request socket, if not None. timeout = None - # Disable nagle algoritm for this socket, if True. 
+ # Disable nagle algorithm for this socket, if True. # Use only when wbufsize != 0, to avoid small packets. disable_nagle_algorithm = False diff --git a/lib-python/2.7/StringIO.py b/lib-python/2.7/StringIO.py --- a/lib-python/2.7/StringIO.py +++ b/lib-python/2.7/StringIO.py @@ -266,6 +266,7 @@ 8th bit) will cause a UnicodeError to be raised when getvalue() is called. """ + _complain_ifclosed(self.closed) if self.buflist: self.buf += ''.join(self.buflist) self.buflist = [] diff --git a/lib-python/2.7/_abcoll.py b/lib-python/2.7/_abcoll.py --- a/lib-python/2.7/_abcoll.py +++ b/lib-python/2.7/_abcoll.py @@ -82,7 +82,7 @@ @classmethod def __subclasshook__(cls, C): if cls is Iterator: - if _hasattr(C, "next"): + if _hasattr(C, "next") and _hasattr(C, "__iter__"): return True return NotImplemented diff --git a/lib-python/2.7/_pyio.py b/lib-python/2.7/_pyio.py --- a/lib-python/2.7/_pyio.py +++ b/lib-python/2.7/_pyio.py @@ -16,6 +16,7 @@ import io from io import (__all__, SEEK_SET, SEEK_CUR, SEEK_END) +from errno import EINTR __metaclass__ = type @@ -559,7 +560,11 @@ if not data: break res += data - return bytes(res) + if res: + return bytes(res) + else: + # b'' or None + return data def readinto(self, b): """Read up to len(b) bytes into b. @@ -678,7 +683,7 @@ """ def __init__(self, raw): - self.raw = raw + self._raw = raw ### Positioning ### @@ -722,8 +727,8 @@ if self.raw is None: raise ValueError("raw stream already detached") self.flush() - raw = self.raw - self.raw = None + raw = self._raw + self._raw = None return raw ### Inquiries ### @@ -738,6 +743,10 @@ return self.raw.writable() @property + def raw(self): + return self._raw + + @property def closed(self): return self.raw.closed @@ -933,7 +942,12 @@ current_size = 0 while True: # Read until EOF or until read() would block. 
- chunk = self.raw.read() + try: + chunk = self.raw.read() + except IOError as e: + if e.errno != EINTR: + raise + continue if chunk in empty_values: nodata_val = chunk break @@ -952,7 +966,12 @@ chunks = [buf[pos:]] wanted = max(self.buffer_size, n) while avail < n: - chunk = self.raw.read(wanted) + try: + chunk = self.raw.read(wanted) + except IOError as e: + if e.errno != EINTR: + raise + continue if chunk in empty_values: nodata_val = chunk break @@ -981,7 +1000,14 @@ have = len(self._read_buf) - self._read_pos if have < want or have <= 0: to_read = self.buffer_size - have - current = self.raw.read(to_read) + while True: + try: + current = self.raw.read(to_read) + except IOError as e: + if e.errno != EINTR: + raise + continue + break if current: self._read_buf = self._read_buf[self._read_pos:] + current self._read_pos = 0 @@ -1088,7 +1114,12 @@ written = 0 try: while self._write_buf: - n = self.raw.write(self._write_buf) + try: + n = self.raw.write(self._write_buf) + except IOError as e: + if e.errno != EINTR: + raise + continue if n > len(self._write_buf) or n < 0: raise IOError("write() returned incorrect number of bytes") del self._write_buf[:n] @@ -1456,7 +1487,7 @@ if not isinstance(errors, basestring): raise ValueError("invalid errors: %r" % errors) - self.buffer = buffer + self._buffer = buffer self._line_buffering = line_buffering self._encoding = encoding self._errors = errors @@ -1511,6 +1542,10 @@ def line_buffering(self): return self._line_buffering + @property + def buffer(self): + return self._buffer + def seekable(self): return self._seekable @@ -1724,8 +1759,8 @@ if self.buffer is None: raise ValueError("buffer is already detached") self.flush() - buffer = self.buffer - self.buffer = None + buffer = self._buffer + self._buffer = None return buffer def seek(self, cookie, whence=0): diff --git a/lib-python/2.7/_weakrefset.py b/lib-python/2.7/_weakrefset.py --- a/lib-python/2.7/_weakrefset.py +++ b/lib-python/2.7/_weakrefset.py @@ -66,7 +66,11 @@ return sum(x() is not None for x in self.data) def __contains__(self, item): - return ref(item) in self.data + try: + wr = ref(item) + except TypeError: + return False + return wr in self.data def __reduce__(self): return (self.__class__, (list(self),), diff --git a/lib-python/2.7/anydbm.py b/lib-python/2.7/anydbm.py --- a/lib-python/2.7/anydbm.py +++ b/lib-python/2.7/anydbm.py @@ -29,17 +29,8 @@ list = d.keys() # return a list of all existing keys (slow!) Future versions may change the order in which implementations are -tested for existence, add interfaces to other dbm-like +tested for existence, and add interfaces to other dbm-like implementations. - -The open function has an optional second argument. This can be 'r', -for read-only access, 'w', for read-write access of an existing -database, 'c' for read-write access to a new or existing database, and -'n' for read-write access to a new database. The default is 'r'. - -Note: 'r' and 'w' fail if the database doesn't exist; 'c' creates it -only if it doesn't exist; and 'n' always creates a new database. - """ class error(Exception): @@ -63,7 +54,18 @@ error = tuple(_errors) -def open(file, flag = 'r', mode = 0666): +def open(file, flag='r', mode=0666): + """Open or create database at path given by *file*. + + Optional argument *flag* can be 'r' (default) for read-only access, 'w' + for read-write access of an existing database, 'c' for read-write access + to a new or existing database, and 'n' for read-write access to a new + database. 
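The _pyio hunks above wrap raw.read() and raw.write() in retry loops so that an EINTR raised by a signal no longer bubbles out of the buffered layer. The pattern, shown standalone (the helper name is illustrative, not part of the changeset):

    # Illustrative only: retry-on-EINTR, as applied around raw.read()/write()
    # in the buffered I/O code above.
    from errno import EINTR

    def read_retrying(raw, n):
        while True:
            try:
                return raw.read(n)
            except IOError as e:
                if e.errno != EINTR:
                    raise
                # interrupted by a signal before any data arrived: retry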
+ + Note: 'r' and 'w' fail if the database doesn't exist; 'c' creates it + only if it doesn't exist; and 'n' always creates a new database. + """ + # guess the type of an existing database from whichdb import whichdb result=whichdb(file) diff --git a/lib-python/2.7/argparse.py b/lib-python/2.7/argparse.py --- a/lib-python/2.7/argparse.py +++ b/lib-python/2.7/argparse.py @@ -82,6 +82,7 @@ ] +import collections as _collections import copy as _copy import os as _os import re as _re @@ -1037,7 +1038,7 @@ self._prog_prefix = prog self._parser_class = parser_class - self._name_parser_map = {} + self._name_parser_map = _collections.OrderedDict() self._choices_actions = [] super(_SubParsersAction, self).__init__( @@ -1080,7 +1081,7 @@ parser = self._name_parser_map[parser_name] except KeyError: tup = parser_name, ', '.join(self._name_parser_map) - msg = _('unknown parser %r (choices: %s)' % tup) + msg = _('unknown parser %r (choices: %s)') % tup raise ArgumentError(self, msg) # parse all the remaining options into the namespace @@ -1109,7 +1110,7 @@ the builtin open() function. """ - def __init__(self, mode='r', bufsize=None): + def __init__(self, mode='r', bufsize=-1): self._mode = mode self._bufsize = bufsize @@ -1121,18 +1122,19 @@ elif 'w' in self._mode: return _sys.stdout else: - msg = _('argument "-" with mode %r' % self._mode) + msg = _('argument "-" with mode %r') % self._mode raise ValueError(msg) # all other arguments are used as file names - if self._bufsize: + try: return open(string, self._mode, self._bufsize) - else: - return open(string, self._mode) + except IOError as e: + message = _("can't open '%s': %s") + raise ArgumentTypeError(message % (string, e)) def __repr__(self): - args = [self._mode, self._bufsize] - args_str = ', '.join([repr(arg) for arg in args if arg is not None]) + args = self._mode, self._bufsize + args_str = ', '.join(repr(arg) for arg in args if arg != -1) return '%s(%s)' % (type(self).__name__, args_str) # =========================== @@ -1275,13 +1277,20 @@ # create the action object, and add it to the parser action_class = self._pop_action_class(kwargs) if not _callable(action_class): - raise ValueError('unknown action "%s"' % action_class) + raise ValueError('unknown action "%s"' % (action_class,)) action = action_class(**kwargs) # raise an error if the action type is not callable type_func = self._registry_get('type', action.type, action.type) if not _callable(type_func): - raise ValueError('%r is not callable' % type_func) + raise ValueError('%r is not callable' % (type_func,)) + + # raise an error if the metavar does not match the type + if hasattr(self, "_get_formatter"): + try: + self._get_formatter()._format_args(action, None) + except TypeError: + raise ValueError("length of metavar tuple does not match nargs") return self._add_action(action) @@ -1481,6 +1490,7 @@ self._defaults = container._defaults self._has_negative_number_optionals = \ container._has_negative_number_optionals + self._mutually_exclusive_groups = container._mutually_exclusive_groups def _add_action(self, action): action = super(_ArgumentGroup, self)._add_action(action) diff --git a/lib-python/2.7/ast.py b/lib-python/2.7/ast.py --- a/lib-python/2.7/ast.py +++ b/lib-python/2.7/ast.py @@ -29,12 +29,12 @@ from _ast import __version__ -def parse(expr, filename='', mode='exec'): +def parse(source, filename='', mode='exec'): """ - Parse an expression into an AST node. - Equivalent to compile(expr, filename, mode, PyCF_ONLY_AST). + Parse the source into an AST node. 
+ Equivalent to compile(source, filename, mode, PyCF_ONLY_AST). """ - return compile(expr, filename, mode, PyCF_ONLY_AST) + return compile(source, filename, mode, PyCF_ONLY_AST) def literal_eval(node_or_string): @@ -152,8 +152,6 @@ Increment the line number of each node in the tree starting at *node* by *n*. This is useful to "move code" to a different location in a file. """ - if 'lineno' in node._attributes: - node.lineno = getattr(node, 'lineno', 0) + n for child in walk(node): if 'lineno' in child._attributes: child.lineno = getattr(child, 'lineno', 0) + n @@ -204,9 +202,9 @@ def walk(node): """ - Recursively yield all child nodes of *node*, in no specified order. This is - useful if you only want to modify nodes in place and don't care about the - context. + Recursively yield all descendant nodes in the tree starting at *node* + (including *node* itself), in no specified order. This is useful if you + only want to modify nodes in place and don't care about the context. """ from collections import deque todo = deque([node]) diff --git a/lib-python/2.7/asyncore.py b/lib-python/2.7/asyncore.py --- a/lib-python/2.7/asyncore.py +++ b/lib-python/2.7/asyncore.py @@ -54,7 +54,11 @@ import os from errno import EALREADY, EINPROGRESS, EWOULDBLOCK, ECONNRESET, EINVAL, \ - ENOTCONN, ESHUTDOWN, EINTR, EISCONN, EBADF, ECONNABORTED, errorcode + ENOTCONN, ESHUTDOWN, EINTR, EISCONN, EBADF, ECONNABORTED, EPIPE, EAGAIN, \ + errorcode + +_DISCONNECTED = frozenset((ECONNRESET, ENOTCONN, ESHUTDOWN, ECONNABORTED, EPIPE, + EBADF)) try: socket_map @@ -109,7 +113,7 @@ if flags & (select.POLLHUP | select.POLLERR | select.POLLNVAL): obj.handle_close() except socket.error, e: - if e.args[0] not in (EBADF, ECONNRESET, ENOTCONN, ESHUTDOWN, ECONNABORTED): + if e.args[0] not in _DISCONNECTED: obj.handle_error() else: obj.handle_close() @@ -353,7 +357,7 @@ except TypeError: return None except socket.error as why: - if why.args[0] in (EWOULDBLOCK, ECONNABORTED): + if why.args[0] in (EWOULDBLOCK, ECONNABORTED, EAGAIN): return None else: raise @@ -367,7 +371,7 @@ except socket.error, why: if why.args[0] == EWOULDBLOCK: return 0 - elif why.args[0] in (ECONNRESET, ENOTCONN, ESHUTDOWN, ECONNABORTED): + elif why.args[0] in _DISCONNECTED: self.handle_close() return 0 else: @@ -385,7 +389,7 @@ return data except socket.error, why: # winsock sometimes throws ENOTCONN - if why.args[0] in [ECONNRESET, ENOTCONN, ESHUTDOWN, ECONNABORTED]: + if why.args[0] in _DISCONNECTED: self.handle_close() return '' else: diff --git a/lib-python/2.7/bdb.py b/lib-python/2.7/bdb.py --- a/lib-python/2.7/bdb.py +++ b/lib-python/2.7/bdb.py @@ -250,6 +250,12 @@ list.append(lineno) bp = Breakpoint(filename, lineno, temporary, cond, funcname) + def _prune_breaks(self, filename, lineno): + if (filename, lineno) not in Breakpoint.bplist: + self.breaks[filename].remove(lineno) + if not self.breaks[filename]: + del self.breaks[filename] + def clear_break(self, filename, lineno): filename = self.canonic(filename) if not filename in self.breaks: @@ -261,10 +267,7 @@ # pair, then remove the breaks entry for bp in Breakpoint.bplist[filename, lineno][:]: bp.deleteMe() - if (filename, lineno) not in Breakpoint.bplist: - self.breaks[filename].remove(lineno) - if not self.breaks[filename]: - del self.breaks[filename] + self._prune_breaks(filename, lineno) def clear_bpbynumber(self, arg): try: @@ -277,7 +280,8 @@ return 'Breakpoint number (%d) out of range' % number if not bp: return 'Breakpoint (%d) already deleted' % number - self.clear_break(bp.file, bp.line) + 
bp.deleteMe() + self._prune_breaks(bp.file, bp.line) def clear_all_file_breaks(self, filename): filename = self.canonic(filename) diff --git a/lib-python/2.7/collections.py b/lib-python/2.7/collections.py --- a/lib-python/2.7/collections.py +++ b/lib-python/2.7/collections.py @@ -6,59 +6,38 @@ __all__ += _abcoll.__all__ from _collections import deque, defaultdict -from operator import itemgetter as _itemgetter, eq as _eq +from operator import itemgetter as _itemgetter from keyword import iskeyword as _iskeyword import sys as _sys import heapq as _heapq -from itertools import repeat as _repeat, chain as _chain, starmap as _starmap, \ - ifilter as _ifilter, imap as _imap +from itertools import repeat as _repeat, chain as _chain, starmap as _starmap + try: - from thread import get_ident + from thread import get_ident as _get_ident except ImportError: - from dummy_thread import get_ident - -def _recursive_repr(user_function): - 'Decorator to make a repr function return "..." for a recursive call' - repr_running = set() - - def wrapper(self): - key = id(self), get_ident() - if key in repr_running: - return '...' - repr_running.add(key) - try: - result = user_function(self) - finally: - repr_running.discard(key) - return result - - # Can't use functools.wraps() here because of bootstrap issues - wrapper.__module__ = getattr(user_function, '__module__') - wrapper.__doc__ = getattr(user_function, '__doc__') - wrapper.__name__ = getattr(user_function, '__name__') - return wrapper + from dummy_thread import get_ident as _get_ident ################################################################################ ### OrderedDict ################################################################################ -class OrderedDict(dict, MutableMapping): +class OrderedDict(dict): 'Dictionary that remembers insertion order' # An inherited dict maps keys to values. # The inherited dict provides __getitem__, __len__, __contains__, and get. # The remaining methods are order-aware. - # Big-O running times for all methods are the same as for regular dictionaries. + # Big-O running times for all methods are the same as regular dictionaries. - # The internal self.__map dictionary maps keys to links in a doubly linked list. + # The internal self.__map dict maps keys to links in a doubly linked list. # The circular doubly linked list starts and ends with a sentinel element. # The sentinel element never gets deleted (this simplifies the algorithm). # Each link is stored as a list of length three: [PREV, NEXT, KEY]. def __init__(self, *args, **kwds): - '''Initialize an ordered dictionary. Signature is the same as for - regular dictionaries, but keyword arguments are not recommended - because their insertion order is arbitrary. + '''Initialize an ordered dictionary. The signature is the same as + regular dictionaries, but keyword arguments are not recommended because + their insertion order is arbitrary. ''' if len(args) > 1: @@ -66,17 +45,15 @@ try: self.__root except AttributeError: - self.__root = root = [None, None, None] # sentinel node - PREV = 0 - NEXT = 1 - root[PREV] = root[NEXT] = root + self.__root = root = [] # sentinel node + root[:] = [root, root, None] self.__map = {} - self.update(*args, **kwds) + self.__update(*args, **kwds) def __setitem__(self, key, value, PREV=0, NEXT=1, dict_setitem=dict.__setitem__): 'od.__setitem__(i, y) <==> od[i]=y' - # Setting a new item creates a new link which goes at the end of the linked - # list, and the inherited dictionary is updated with the new key/value pair. 
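The OrderedDict rewrite above keeps each key in a link of a circular doubly linked list, [PREV, NEXT, KEY], hanging off a self-referential sentinel root. A minimal standalone sketch of just that structure (not the class itself):

    # Illustrative only: the core data structure behind the OrderedDict
    # rewrite -- a circular doubly linked list with a sentinel root node.
    PREV, NEXT, KEY = 0, 1, 2
    root = []
    root[:] = [root, root, None]       # empty dict: sentinel points at itself

    def link_append(key):              # what __setitem__ does for a new key
        last = root[PREV]
        link = [last, root, key]
        last[NEXT] = root[PREV] = link
        return link

    def link_remove(link):             # what __delitem__ does
        link[PREV][NEXT] = link[NEXT]
        link[NEXT][PREV] = link[PREV]

    first = link_append('a')
    link_append('b')
    link_remove(first)
    assert root[NEXT][KEY] == 'b' and root[PREV][KEY] == 'b'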
+ # Setting a new item creates a new link at the end of the linked list, + # and the inherited dictionary is updated with the new key/value pair. if key not in self: root = self.__root last = root[PREV] @@ -85,65 +62,160 @@ def __delitem__(self, key, PREV=0, NEXT=1, dict_delitem=dict.__delitem__): 'od.__delitem__(y) <==> del od[y]' - # Deleting an existing item uses self.__map to find the link which is - # then removed by updating the links in the predecessor and successor nodes. + # Deleting an existing item uses self.__map to find the link which gets + # removed by updating the links in the predecessor and successor nodes. dict_delitem(self, key) - link = self.__map.pop(key) - link_prev = link[PREV] - link_next = link[NEXT] + link_prev, link_next, key = self.__map.pop(key) link_prev[NEXT] = link_next link_next[PREV] = link_prev - def __iter__(self, NEXT=1, KEY=2): + def __iter__(self): 'od.__iter__() <==> iter(od)' # Traverse the linked list in order. + NEXT, KEY = 1, 2 root = self.__root curr = root[NEXT] while curr is not root: yield curr[KEY] curr = curr[NEXT] - def __reversed__(self, PREV=0, KEY=2): + def __reversed__(self): 'od.__reversed__() <==> reversed(od)' # Traverse the linked list in reverse order. + PREV, KEY = 0, 2 root = self.__root curr = root[PREV] while curr is not root: yield curr[KEY] curr = curr[PREV] + def clear(self): + 'od.clear() -> None. Remove all items from od.' + for node in self.__map.itervalues(): + del node[:] + root = self.__root + root[:] = [root, root, None] + self.__map.clear() + dict.clear(self) + + # -- the following methods do not depend on the internal structure -- + + def keys(self): + 'od.keys() -> list of keys in od' + return list(self) + + def values(self): + 'od.values() -> list of values in od' + return [self[key] for key in self] + + def items(self): + 'od.items() -> list of (key, value) pairs in od' + return [(key, self[key]) for key in self] + + def iterkeys(self): + 'od.iterkeys() -> an iterator over the keys in od' + return iter(self) + + def itervalues(self): + 'od.itervalues -> an iterator over the values in od' + for k in self: + yield self[k] + + def iteritems(self): + 'od.iteritems -> an iterator over the (key, value) pairs in od' + for k in self: + yield (k, self[k]) + + update = MutableMapping.update + + __update = update # let subclasses override update without breaking __init__ + + __marker = object() + + def pop(self, key, default=__marker): + '''od.pop(k[,d]) -> v, remove specified key and return the corresponding + value. If key is not found, d is returned if given, otherwise KeyError + is raised. + + ''' + if key in self: + result = self[key] + del self[key] + return result + if default is self.__marker: + raise KeyError(key) + return default + + def setdefault(self, key, default=None): + 'od.setdefault(k[,d]) -> od.get(k,d), also set od[k]=d if k not in od' + if key in self: + return self[key] + self[key] = default + return default + + def popitem(self, last=True): + '''od.popitem() -> (k, v), return and remove a (key, value) pair. + Pairs are returned in LIFO order if last is true or FIFO order if false. + + ''' + if not self: + raise KeyError('dictionary is empty') + key = next(reversed(self) if last else iter(self)) + value = self.pop(key) + return key, value + + def __repr__(self, _repr_running={}): + 'od.__repr__() <==> repr(od)' + call_key = id(self), _get_ident() + if call_key in _repr_running: + return '...' 
+ _repr_running[call_key] = 1 + try: + if not self: + return '%s()' % (self.__class__.__name__,) + return '%s(%r)' % (self.__class__.__name__, self.items()) + finally: + del _repr_running[call_key] + def __reduce__(self): 'Return state information for pickling' items = [[k, self[k]] for k in self] - tmp = self.__map, self.__root - del self.__map, self.__root inst_dict = vars(self).copy() - self.__map, self.__root = tmp + for k in vars(OrderedDict()): + inst_dict.pop(k, None) if inst_dict: return (self.__class__, (items,), inst_dict) return self.__class__, (items,) - def clear(self): - 'od.clear() -> None. Remove all items from od.' - try: - for node in self.__map.itervalues(): - del node[:] - self.__root[:] = [self.__root, self.__root, None] - self.__map.clear() - except AttributeError: - pass - dict.clear(self) + def copy(self): + 'od.copy() -> a shallow copy of od' + return self.__class__(self) - setdefault = MutableMapping.setdefault - update = MutableMapping.update - pop = MutableMapping.pop - keys = MutableMapping.keys - values = MutableMapping.values - items = MutableMapping.items - iterkeys = MutableMapping.iterkeys - itervalues = MutableMapping.itervalues - iteritems = MutableMapping.iteritems - __ne__ = MutableMapping.__ne__ + @classmethod + def fromkeys(cls, iterable, value=None): + '''OD.fromkeys(S[, v]) -> New ordered dictionary with keys from S. + If not specified, the value defaults to None. + + ''' + self = cls() + for key in iterable: + self[key] = value + return self + + def __eq__(self, other): + '''od.__eq__(y) <==> od==y. Comparison to another OD is order-sensitive + while comparison to a regular mapping is order-insensitive. + + ''' + if isinstance(other, OrderedDict): + return len(self)==len(other) and self.items() == other.items() + return dict.__eq__(self, other) + + def __ne__(self, other): + 'od.__ne__(y) <==> od!=y' + return not self == other + + # -- the following methods support python 3.x style dictionary views -- def viewkeys(self): "od.viewkeys() -> a set-like object providing a view on od's keys" @@ -157,49 +229,6 @@ "od.viewitems() -> a set-like object providing a view on od's items" return ItemsView(self) - def popitem(self, last=True): - '''od.popitem() -> (k, v), return and remove a (key, value) pair. - Pairs are returned in LIFO order if last is true or FIFO order if false. - - ''' - if not self: - raise KeyError('dictionary is empty') - key = next(reversed(self) if last else iter(self)) - value = self.pop(key) - return key, value - - @_recursive_repr - def __repr__(self): - 'od.__repr__() <==> repr(od)' - if not self: - return '%s()' % (self.__class__.__name__,) - return '%s(%r)' % (self.__class__.__name__, self.items()) - - def copy(self): - 'od.copy() -> a shallow copy of od' - return self.__class__(self) - - @classmethod - def fromkeys(cls, iterable, value=None): - '''OD.fromkeys(S[, v]) -> New ordered dictionary with keys from S - and values equal to v (which defaults to None). - - ''' - d = cls() - for key in iterable: - d[key] = value - return d - - def __eq__(self, other): - '''od.__eq__(y) <==> od==y. Comparison to another OD is order-sensitive - while comparison to a regular mapping is order-insensitive. - - ''' - if isinstance(other, OrderedDict): - return len(self)==len(other) and \ - all(_imap(_eq, self.iteritems(), other.iteritems())) - return dict.__eq__(self, other) - ################################################################################ ### namedtuple @@ -328,16 +357,16 @@ or multiset. 
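Since the docstring above calls Counter a bag or multiset, a quick worked example of the multiset arithmetic touched further down in this diff (the results assume the 2.7 Counter semantics, where non-positive counts are dropped from the result):

    # Illustrative only: Counter arithmetic keeps only positive counts.
    from collections import Counter
    c = Counter('aab')                             # {'a': 2, 'b': 1}
    d = Counter('abb')                             # {'a': 1, 'b': 2}
    assert c + d == Counter({'a': 3, 'b': 3})
    assert c - d == Counter({'a': 1})              # negative counts dropped
    assert (c | d) == Counter({'a': 2, 'b': 2})    # max of counts
    assert (c & d) == Counter({'a': 1, 'b': 1})    # min of counts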
Elements are stored as dictionary keys and their counts are stored as dictionary values. - >>> c = Counter('abracadabra') # count elements from a string + >>> c = Counter('abcdeabcdabcaba') # count elements from a string >>> c.most_common(3) # three most common elements - [('a', 5), ('r', 2), ('b', 2)] + [('a', 5), ('b', 4), ('c', 3)] >>> sorted(c) # list all unique elements - ['a', 'b', 'c', 'd', 'r'] + ['a', 'b', 'c', 'd', 'e'] >>> ''.join(sorted(c.elements())) # list elements with repetitions - 'aaaaabbcdrr' + 'aaaaabbbbcccdde' >>> sum(c.values()) # total of all counts - 11 + 15 >>> c['a'] # count of letter 'a' 5 @@ -345,8 +374,8 @@ ... c[elem] += 1 # by adding 1 to each element's count >>> c['a'] # now there are seven 'a' 7 - >>> del c['r'] # remove all 'r' - >>> c['r'] # now there are zero 'r' + >>> del c['b'] # remove all 'b' + >>> c['b'] # now there are zero 'b' 0 >>> d = Counter('simsalabim') # make another counter @@ -385,6 +414,7 @@ >>> c = Counter(a=4, b=2) # a new counter from keyword args ''' + super(Counter, self).__init__() self.update(iterable, **kwds) def __missing__(self, key): @@ -396,8 +426,8 @@ '''List the n most common elements and their counts from the most common to the least. If n is None, then list all element counts. - >>> Counter('abracadabra').most_common(3) - [('a', 5), ('r', 2), ('b', 2)] + >>> Counter('abcdeabcdabcaba').most_common(3) + [('a', 5), ('b', 4), ('c', 3)] ''' # Emulate Bag.sortedByCount from Smalltalk @@ -463,7 +493,7 @@ for elem, count in iterable.iteritems(): self[elem] = self_get(elem, 0) + count else: - dict.update(self, iterable) # fast path when counter is empty + super(Counter, self).update(iterable) # fast path when counter is empty else: self_get = self.get for elem in iterable: @@ -499,13 +529,16 @@ self.subtract(kwds) def copy(self): - 'Like dict.copy() but returns a Counter instance instead of a dict.' - return Counter(self) + 'Return a shallow copy.' + return self.__class__(self) + + def __reduce__(self): + return self.__class__, (dict(self),) def __delitem__(self, elem): 'Like dict.__delitem__() but does not raise KeyError for missing values.' 
if elem in self: - dict.__delitem__(self, elem) + super(Counter, self).__delitem__(elem) def __repr__(self): if not self: @@ -532,10 +565,13 @@ if not isinstance(other, Counter): return NotImplemented result = Counter() - for elem in set(self) | set(other): - newcount = self[elem] + other[elem] + for elem, count in self.items(): + newcount = count + other[elem] if newcount > 0: result[elem] = newcount + for elem, count in other.items(): + if elem not in self and count > 0: + result[elem] = count return result def __sub__(self, other): @@ -548,10 +584,13 @@ if not isinstance(other, Counter): return NotImplemented result = Counter() - for elem in set(self) | set(other): - newcount = self[elem] - other[elem] + for elem, count in self.items(): + newcount = count - other[elem] if newcount > 0: result[elem] = newcount + for elem, count in other.items(): + if elem not in self and count < 0: + result[elem] = 0 - count return result def __or__(self, other): @@ -564,11 +603,14 @@ if not isinstance(other, Counter): return NotImplemented result = Counter() - for elem in set(self) | set(other): - p, q = self[elem], other[elem] - newcount = q if p < q else p + for elem, count in self.items(): + other_count = other[elem] + newcount = other_count if count < other_count else count if newcount > 0: result[elem] = newcount + for elem, count in other.items(): + if elem not in self and count > 0: + result[elem] = count return result def __and__(self, other): @@ -581,11 +623,9 @@ if not isinstance(other, Counter): return NotImplemented result = Counter() - if len(self) < len(other): - self, other = other, self - for elem in _ifilter(self.__contains__, other): - p, q = self[elem], other[elem] - newcount = p if p < q else q + for elem, count in self.items(): + other_count = other[elem] + newcount = count if count < other_count else other_count if newcount > 0: result[elem] = newcount return result diff --git a/lib-python/2.7/compileall.py b/lib-python/2.7/compileall.py --- a/lib-python/2.7/compileall.py +++ b/lib-python/2.7/compileall.py @@ -9,7 +9,6 @@ packages -- for now, you'll have to deal with packages separately.) See module py_compile for details of the actual byte-compilation. - """ import os import sys @@ -31,7 +30,6 @@ directory name that will show up in error messages) force: if 1, force compilation, even if timestamps are up-to-date quiet: if 1, be quiet during compilation - """ if not quiet: print 'Listing', dir, '...' @@ -61,15 +59,16 @@ return success def compile_file(fullname, ddir=None, force=0, rx=None, quiet=0): - """Byte-compile file. - file: the file to byte-compile + """Byte-compile one file. + + Arguments (only fullname is required): + + fullname: the file to byte-compile ddir: if given, purported directory name (this is the directory name that will show up in error messages) force: if 1, force compilation, even if timestamps are up-to-date quiet: if 1, be quiet during compilation - """ - success = 1 name = os.path.basename(fullname) if ddir is not None: @@ -120,7 +119,6 @@ maxlevels: max recursion level (default 0) force: as for compile_dir() (default 0) quiet: as for compile_dir() (default 0) - """ success = 1 for dir in sys.path: diff --git a/lib-python/2.7/csv.py b/lib-python/2.7/csv.py --- a/lib-python/2.7/csv.py +++ b/lib-python/2.7/csv.py @@ -281,7 +281,7 @@ an all or nothing approach, so we allow for small variations in this number. 1) build a table of the frequency of each character on every line. 
- 2) build a table of freqencies of this frequency (meta-frequency?), + 2) build a table of frequencies of this frequency (meta-frequency?), e.g. 'x occurred 5 times in 10 rows, 6 times in 1000 rows, 7 times in 2 rows' 3) use the mode of the meta-frequency to determine the /expected/ diff --git a/lib-python/2.7/ctypes/test/test_arrays.py b/lib-python/2.7/ctypes/test/test_arrays.py --- a/lib-python/2.7/ctypes/test/test_arrays.py +++ b/lib-python/2.7/ctypes/test/test_arrays.py @@ -37,7 +37,7 @@ values = [ia[i] for i in range(len(init))] self.assertEqual(values, [0] * len(init)) - # Too many in itializers should be caught + # Too many initializers should be caught self.assertRaises(IndexError, int_array, *range(alen*2)) CharArray = ARRAY(c_char, 3) diff --git a/lib-python/2.7/ctypes/test/test_as_parameter.py b/lib-python/2.7/ctypes/test/test_as_parameter.py --- a/lib-python/2.7/ctypes/test/test_as_parameter.py +++ b/lib-python/2.7/ctypes/test/test_as_parameter.py @@ -187,6 +187,18 @@ self.assertEqual((s8i.a, s8i.b, s8i.c, s8i.d, s8i.e, s8i.f, s8i.g, s8i.h), (9*2, 8*3, 7*4, 6*5, 5*6, 4*7, 3*8, 2*9)) + def test_recursive_as_param(self): + from ctypes import c_int + + class A(object): + pass + + a = A() + a._as_parameter_ = a + with self.assertRaises(RuntimeError): + c_int.from_param(a) + + #~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ class AsParamWrapper(object): diff --git a/lib-python/2.7/ctypes/test/test_callbacks.py b/lib-python/2.7/ctypes/test/test_callbacks.py --- a/lib-python/2.7/ctypes/test/test_callbacks.py +++ b/lib-python/2.7/ctypes/test/test_callbacks.py @@ -206,6 +206,42 @@ windll.user32.EnumWindows(EnumWindowsCallbackFunc, 0) + def test_callback_register_int(self): + # Issue #8275: buggy handling of callback args under Win64 + # NOTE: should be run on release builds as well + dll = CDLL(_ctypes_test.__file__) + CALLBACK = CFUNCTYPE(c_int, c_int, c_int, c_int, c_int, c_int) + # All this function does is call the callback with its args squared + func = dll._testfunc_cbk_reg_int + func.argtypes = (c_int, c_int, c_int, c_int, c_int, CALLBACK) + func.restype = c_int + + def callback(a, b, c, d, e): + return a + b + c + d + e + + result = func(2, 3, 4, 5, 6, CALLBACK(callback)) + self.assertEqual(result, callback(2*2, 3*3, 4*4, 5*5, 6*6)) + + def test_callback_register_double(self): + # Issue #8275: buggy handling of callback args under Win64 + # NOTE: should be run on release builds as well + dll = CDLL(_ctypes_test.__file__) + CALLBACK = CFUNCTYPE(c_double, c_double, c_double, c_double, + c_double, c_double) + # All this function does is call the callback with its args squared + func = dll._testfunc_cbk_reg_double + func.argtypes = (c_double, c_double, c_double, + c_double, c_double, CALLBACK) + func.restype = c_double + + def callback(a, b, c, d, e): + return a + b + c + d + e + + result = func(1.1, 2.2, 3.3, 4.4, 5.5, CALLBACK(callback)) + self.assertEqual(result, + callback(1.1*1.1, 2.2*2.2, 3.3*3.3, 4.4*4.4, 5.5*5.5)) + + ################################################################ if __name__ == '__main__': diff --git a/lib-python/2.7/ctypes/test/test_functions.py b/lib-python/2.7/ctypes/test/test_functions.py --- a/lib-python/2.7/ctypes/test/test_functions.py +++ b/lib-python/2.7/ctypes/test/test_functions.py @@ -116,7 +116,7 @@ self.assertEqual(result, 21) self.assertEqual(type(result), int) - # You cannot assing character format codes as restype any longer + # You cannot assign character format codes as restype any longer self.assertRaises(TypeError, setattr, f, 
"restype", "i") def test_floatresult(self): diff --git a/lib-python/2.7/ctypes/test/test_init.py b/lib-python/2.7/ctypes/test/test_init.py --- a/lib-python/2.7/ctypes/test/test_init.py +++ b/lib-python/2.7/ctypes/test/test_init.py @@ -27,7 +27,7 @@ self.assertEqual((y.x.a, y.x.b), (0, 0)) self.assertEqual(y.x.new_was_called, False) - # But explicitely creating an X structure calls __new__ and __init__, of course. + # But explicitly creating an X structure calls __new__ and __init__, of course. x = X() self.assertEqual((x.a, x.b), (9, 12)) self.assertEqual(x.new_was_called, True) diff --git a/lib-python/2.7/ctypes/test/test_numbers.py b/lib-python/2.7/ctypes/test/test_numbers.py --- a/lib-python/2.7/ctypes/test/test_numbers.py +++ b/lib-python/2.7/ctypes/test/test_numbers.py @@ -157,7 +157,7 @@ def test_int_from_address(self): from array import array for t in signed_types + unsigned_types: - # the array module doesn't suppport all format codes + # the array module doesn't support all format codes # (no 'q' or 'Q') try: array(t._type_) diff --git a/lib-python/2.7/ctypes/test/test_win32.py b/lib-python/2.7/ctypes/test/test_win32.py --- a/lib-python/2.7/ctypes/test/test_win32.py +++ b/lib-python/2.7/ctypes/test/test_win32.py @@ -17,7 +17,7 @@ # ValueError: Procedure probably called with not enough arguments (4 bytes missing) self.assertRaises(ValueError, IsWindow) - # This one should succeeed... + # This one should succeed... self.assertEqual(0, IsWindow(0)) # ValueError: Procedure probably called with too many arguments (8 bytes in excess) diff --git a/lib-python/2.7/curses/wrapper.py b/lib-python/2.7/curses/wrapper.py --- a/lib-python/2.7/curses/wrapper.py +++ b/lib-python/2.7/curses/wrapper.py @@ -43,7 +43,8 @@ return func(stdscr, *args, **kwds) finally: # Set everything back to normal - stdscr.keypad(0) - curses.echo() - curses.nocbreak() - curses.endwin() + if 'stdscr' in locals(): + stdscr.keypad(0) + curses.echo() + curses.nocbreak() + curses.endwin() diff --git a/lib-python/2.7/decimal.py b/lib-python/2.7/decimal.py --- a/lib-python/2.7/decimal.py +++ b/lib-python/2.7/decimal.py @@ -1068,14 +1068,16 @@ if ans: return ans - if not self: - # -Decimal('0') is Decimal('0'), not Decimal('-0') + if context is None: + context = getcontext() + + if not self and context.rounding != ROUND_FLOOR: + # -Decimal('0') is Decimal('0'), not Decimal('-0'), except + # in ROUND_FLOOR rounding mode. ans = self.copy_abs() else: ans = self.copy_negate() - if context is None: - context = getcontext() return ans._fix(context) def __pos__(self, context=None): @@ -1088,14 +1090,15 @@ if ans: return ans - if not self: - # + (-0) = 0 + if context is None: + context = getcontext() + + if not self and context.rounding != ROUND_FLOOR: + # + (-0) = 0, except in ROUND_FLOOR rounding mode. 
ans = self.copy_abs() else: ans = Decimal(self) - if context is None: - context = getcontext() return ans._fix(context) def __abs__(self, round=True, context=None): @@ -1680,7 +1683,7 @@ self = _dec_from_triple(self._sign, '1', exp_min-1) digits = 0 rounding_method = self._pick_rounding_function[context.rounding] - changed = getattr(self, rounding_method)(digits) + changed = rounding_method(self, digits) coeff = self._int[:digits] or '0' if changed > 0: coeff = str(int(coeff)+1) @@ -1720,8 +1723,6 @@ # here self was representable to begin with; return unchanged return Decimal(self) - _pick_rounding_function = {} - # for each of the rounding functions below: # self is a finite, nonzero Decimal # prec is an integer satisfying 0 <= prec < len(self._int) @@ -1788,6 +1789,17 @@ else: return -self._round_down(prec) + _pick_rounding_function = dict( + ROUND_DOWN = _round_down, + ROUND_UP = _round_up, + ROUND_HALF_UP = _round_half_up, + ROUND_HALF_DOWN = _round_half_down, + ROUND_HALF_EVEN = _round_half_even, + ROUND_CEILING = _round_ceiling, + ROUND_FLOOR = _round_floor, + ROUND_05UP = _round_05up, + ) + def fma(self, other, third, context=None): """Fused multiply-add. @@ -2492,8 +2504,8 @@ if digits < 0: self = _dec_from_triple(self._sign, '1', exp-1) digits = 0 - this_function = getattr(self, self._pick_rounding_function[rounding]) - changed = this_function(digits) + this_function = self._pick_rounding_function[rounding] + changed = this_function(self, digits) coeff = self._int[:digits] or '0' if changed == 1: coeff = str(int(coeff)+1) @@ -3705,18 +3717,6 @@ ##### Context class ####################################################### - -# get rounding method function: -rounding_functions = [name for name in Decimal.__dict__.keys() - if name.startswith('_round_')] -for name in rounding_functions: - # name is like _round_half_even, goes to the global ROUND_HALF_EVEN value. - globalname = name[1:].upper() - val = globals()[globalname] - Decimal._pick_rounding_function[val] = name - -del name, val, globalname, rounding_functions - class _ContextManager(object): """Context manager class to support localcontext(). @@ -5990,7 +5990,7 @@ def _format_align(sign, body, spec): """Given an unpadded, non-aligned numeric string 'body' and sign - string 'sign', add padding and aligment conforming to the given + string 'sign', add padding and alignment conforming to the given format specifier dictionary 'spec' (as produced by parse_format_specifier). 
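A worked example of the signed-zero behaviour the Decimal.__neg__/__pos__ hunks above introduce; the expected outputs in the comments assume a decimal module that already contains this change:

    # Illustrative only: negating or "plussing" a zero gives a positive zero,
    # except under ROUND_FLOOR, where the negative sign survives.
    from decimal import Decimal, Context, ROUND_FLOOR

    default_ctx = Context()                     # default rounding
    floor_ctx = Context(rounding=ROUND_FLOOR)

    print(default_ctx.minus(Decimal('0')))      # 0
    print(floor_ctx.minus(Decimal('0')))        # -0
    print(default_ctx.plus(Decimal('-0')))      # 0
    print(floor_ctx.plus(Decimal('-0')))        # -0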
diff --git a/lib-python/2.7/difflib.py b/lib-python/2.7/difflib.py --- a/lib-python/2.7/difflib.py +++ b/lib-python/2.7/difflib.py @@ -1140,6 +1140,21 @@ return ch in ws +######################################################################## +### Unified Diff +######################################################################## + +def _format_range_unified(start, stop): + 'Convert range to the "ed" format' + # Per the diff spec at http://www.unix.org/single_unix_specification/ + beginning = start + 1 # lines start numbering with one + length = stop - start + if length == 1: + return '{}'.format(beginning) + if not length: + beginning -= 1 # empty ranges begin at line just before the range + return '{},{}'.format(beginning, length) + def unified_diff(a, b, fromfile='', tofile='', fromfiledate='', tofiledate='', n=3, lineterm='\n'): r""" @@ -1184,25 +1199,45 @@ started = False for group in SequenceMatcher(None,a,b).get_grouped_opcodes(n): if not started: - fromdate = '\t%s' % fromfiledate if fromfiledate else '' - todate = '\t%s' % tofiledate if tofiledate else '' - yield '--- %s%s%s' % (fromfile, fromdate, lineterm) - yield '+++ %s%s%s' % (tofile, todate, lineterm) started = True - i1, i2, j1, j2 = group[0][1], group[-1][2], group[0][3], group[-1][4] - yield "@@ -%d,%d +%d,%d @@%s" % (i1+1, i2-i1, j1+1, j2-j1, lineterm) + fromdate = '\t{}'.format(fromfiledate) if fromfiledate else '' + todate = '\t{}'.format(tofiledate) if tofiledate else '' + yield '--- {}{}{}'.format(fromfile, fromdate, lineterm) + yield '+++ {}{}{}'.format(tofile, todate, lineterm) + + first, last = group[0], group[-1] + file1_range = _format_range_unified(first[1], last[2]) + file2_range = _format_range_unified(first[3], last[4]) + yield '@@ -{} +{} @@{}'.format(file1_range, file2_range, lineterm) + for tag, i1, i2, j1, j2 in group: if tag == 'equal': for line in a[i1:i2]: yield ' ' + line continue - if tag == 'replace' or tag == 'delete': + if tag in ('replace', 'delete'): for line in a[i1:i2]: yield '-' + line - if tag == 'replace' or tag == 'insert': + if tag in ('replace', 'insert'): for line in b[j1:j2]: yield '+' + line + +######################################################################## +### Context Diff +######################################################################## + +def _format_range_context(start, stop): + 'Convert range to the "ed" format' + # Per the diff spec at http://www.unix.org/single_unix_specification/ + beginning = start + 1 # lines start numbering with one + length = stop - start + if not length: + beginning -= 1 # empty ranges begin at line just before the range + if length <= 1: + return '{}'.format(beginning) + return '{},{}'.format(beginning, beginning + length - 1) + # See http://www.unix.org/single_unix_specification/ def context_diff(a, b, fromfile='', tofile='', fromfiledate='', tofiledate='', n=3, lineterm='\n'): @@ -1247,38 +1282,36 @@ four """ + prefix = dict(insert='+ ', delete='- ', replace='! ', equal=' ') started = False - prefixmap = {'insert':'+ ', 'delete':'- ', 'replace':'! 
', 'equal':' '} for group in SequenceMatcher(None,a,b).get_grouped_opcodes(n): if not started: - fromdate = '\t%s' % fromfiledate if fromfiledate else '' - todate = '\t%s' % tofiledate if tofiledate else '' - yield '*** %s%s%s' % (fromfile, fromdate, lineterm) - yield '--- %s%s%s' % (tofile, todate, lineterm) started = True + fromdate = '\t{}'.format(fromfiledate) if fromfiledate else '' + todate = '\t{}'.format(tofiledate) if tofiledate else '' + yield '*** {}{}{}'.format(fromfile, fromdate, lineterm) + yield '--- {}{}{}'.format(tofile, todate, lineterm) - yield '***************%s' % (lineterm,) - if group[-1][2] - group[0][1] >= 2: - yield '*** %d,%d ****%s' % (group[0][1]+1, group[-1][2], lineterm) - else: - yield '*** %d ****%s' % (group[-1][2], lineterm) - visiblechanges = [e for e in group if e[0] in ('replace', 'delete')] - if visiblechanges: + first, last = group[0], group[-1] + yield '***************' + lineterm + + file1_range = _format_range_context(first[1], last[2]) + yield '*** {} ****{}'.format(file1_range, lineterm) + + if any(tag in ('replace', 'delete') for tag, _, _, _, _ in group): for tag, i1, i2, _, _ in group: if tag != 'insert': for line in a[i1:i2]: - yield prefixmap[tag] + line + yield prefix[tag] + line - if group[-1][4] - group[0][3] >= 2: - yield '--- %d,%d ----%s' % (group[0][3]+1, group[-1][4], lineterm) - else: - yield '--- %d ----%s' % (group[-1][4], lineterm) - visiblechanges = [e for e in group if e[0] in ('replace', 'insert')] - if visiblechanges: + file2_range = _format_range_context(first[3], last[4]) + yield '--- {} ----{}'.format(file2_range, lineterm) + + if any(tag in ('replace', 'insert') for tag, _, _, _, _ in group): for tag, _, _, j1, j2 in group: if tag != 'delete': for line in b[j1:j2]: - yield prefixmap[tag] + line + yield prefix[tag] + line def ndiff(a, b, linejunk=None, charjunk=IS_CHARACTER_JUNK): r""" @@ -1714,7 +1747,7 @@ line = line.replace(' ','\0') # expand tabs into spaces line = line.expandtabs(self._tabsize) - # relace spaces from expanded tabs back into tab characters + # replace spaces from expanded tabs back into tab characters # (we'll replace them with markup after we do differencing) line = line.replace(' ','\t') return line.replace('\0',' ').rstrip('\n') diff --git a/lib-python/2.7/distutils/__init__.py b/lib-python/2.7/distutils/__init__.py --- a/lib-python/2.7/distutils/__init__.py +++ b/lib-python/2.7/distutils/__init__.py @@ -15,5 +15,5 @@ # Updated automatically by the Python release process. # #--start constants-- -__version__ = "2.7.1" +__version__ = "2.7.2" #--end constants-- diff --git a/lib-python/2.7/distutils/archive_util.py b/lib-python/2.7/distutils/archive_util.py --- a/lib-python/2.7/distutils/archive_util.py +++ b/lib-python/2.7/distutils/archive_util.py @@ -121,7 +121,7 @@ def make_zipfile(base_name, base_dir, verbose=0, dry_run=0): """Create a zip file from all the files under 'base_dir'. - The output zip file will be named 'base_dir' + ".zip". Uses either the + The output zip file will be named 'base_name' + ".zip". Uses either the "zipfile" Python module (if available) or the InfoZIP "zip" utility (if installed and found on the default search path). If neither tool is available, raises DistutilsExecError. 
Returns the name of the output zip diff --git a/lib-python/2.7/distutils/cmd.py b/lib-python/2.7/distutils/cmd.py --- a/lib-python/2.7/distutils/cmd.py +++ b/lib-python/2.7/distutils/cmd.py @@ -377,7 +377,7 @@ dry_run=self.dry_run) def move_file (self, src, dst, level=1): - """Move a file respectin dry-run flag.""" + """Move a file respecting dry-run flag.""" return file_util.move_file(src, dst, dry_run = self.dry_run) def spawn (self, cmd, search_path=1, level=1): diff --git a/lib-python/2.7/distutils/command/build_ext.py b/lib-python/2.7/distutils/command/build_ext.py --- a/lib-python/2.7/distutils/command/build_ext.py +++ b/lib-python/2.7/distutils/command/build_ext.py @@ -207,7 +207,7 @@ elif MSVC_VERSION == 8: self.library_dirs.append(os.path.join(sys.exec_prefix, - 'PC', 'VS8.0', 'win32release')) + 'PC', 'VS8.0')) elif MSVC_VERSION == 7: self.library_dirs.append(os.path.join(sys.exec_prefix, 'PC', 'VS7.1')) diff --git a/lib-python/2.7/distutils/command/sdist.py b/lib-python/2.7/distutils/command/sdist.py --- a/lib-python/2.7/distutils/command/sdist.py +++ b/lib-python/2.7/distutils/command/sdist.py @@ -306,17 +306,20 @@ rstrip_ws=1, collapse_join=1) - while 1: - line = template.readline() - if line is None: # end of file - break + try: + while 1: + line = template.readline() + if line is None: # end of file + break - try: - self.filelist.process_template_line(line) - except DistutilsTemplateError, msg: - self.warn("%s, line %d: %s" % (template.filename, - template.current_line, - msg)) + try: + self.filelist.process_template_line(line) + except DistutilsTemplateError, msg: + self.warn("%s, line %d: %s" % (template.filename, + template.current_line, + msg)) + finally: + template.close() def prune_file_list(self): """Prune off branches that might slip into the file list as created diff --git a/lib-python/2.7/distutils/command/upload.py b/lib-python/2.7/distutils/command/upload.py --- a/lib-python/2.7/distutils/command/upload.py +++ b/lib-python/2.7/distutils/command/upload.py @@ -176,6 +176,9 @@ result = urlopen(request) status = result.getcode() reason = result.msg + if self.show_response: + msg = '\n'.join(('-' * 75, r.read(), '-' * 75)) + self.announce(msg, log.INFO) except socket.error, e: self.announce(str(e), log.ERROR) return @@ -189,6 +192,3 @@ else: self.announce('Upload failed (%s): %s' % (status, reason), log.ERROR) - if self.show_response: - msg = '\n'.join(('-' * 75, r.read(), '-' * 75)) - self.announce(msg, log.INFO) diff --git a/lib-python/2.7/distutils/sysconfig.py b/lib-python/2.7/distutils/sysconfig.py --- a/lib-python/2.7/distutils/sysconfig.py +++ b/lib-python/2.7/distutils/sysconfig.py @@ -389,7 +389,7 @@ cur_target = os.getenv('MACOSX_DEPLOYMENT_TARGET', '') if cur_target == '': cur_target = cfg_target - os.putenv('MACOSX_DEPLOYMENT_TARGET', cfg_target) + os.environ['MACOSX_DEPLOYMENT_TARGET'] = cfg_target elif map(int, cfg_target.split('.')) > map(int, cur_target.split('.')): my_msg = ('$MACOSX_DEPLOYMENT_TARGET mismatch: now "%s" but "%s" during configure' % (cur_target, cfg_target)) diff --git a/lib-python/2.7/distutils/tests/__init__.py b/lib-python/2.7/distutils/tests/__init__.py --- a/lib-python/2.7/distutils/tests/__init__.py +++ b/lib-python/2.7/distutils/tests/__init__.py @@ -15,9 +15,10 @@ import os import sys import unittest +from test.test_support import run_unittest -here = os.path.dirname(__file__) +here = os.path.dirname(__file__) or os.curdir def test_suite(): @@ -32,4 +33,4 @@ if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + 
run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_archive_util.py b/lib-python/2.7/distutils/tests/test_archive_util.py --- a/lib-python/2.7/distutils/tests/test_archive_util.py +++ b/lib-python/2.7/distutils/tests/test_archive_util.py @@ -12,7 +12,7 @@ ARCHIVE_FORMATS) from distutils.spawn import find_executable, spawn from distutils.tests import support -from test.test_support import check_warnings +from test.test_support import check_warnings, run_unittest try: import grp @@ -281,4 +281,4 @@ return unittest.makeSuite(ArchiveUtilTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_bdist_msi.py b/lib-python/2.7/distutils/tests/test_bdist_msi.py --- a/lib-python/2.7/distutils/tests/test_bdist_msi.py +++ b/lib-python/2.7/distutils/tests/test_bdist_msi.py @@ -11,7 +11,7 @@ support.LoggingSilencer, unittest.TestCase): - def test_minial(self): + def test_minimal(self): # minimal test XXX need more tests from distutils.command.bdist_msi import bdist_msi pkg_pth, dist = self.create_dist() diff --git a/lib-python/2.7/distutils/tests/test_build.py b/lib-python/2.7/distutils/tests/test_build.py --- a/lib-python/2.7/distutils/tests/test_build.py +++ b/lib-python/2.7/distutils/tests/test_build.py @@ -2,6 +2,7 @@ import unittest import os import sys +from test.test_support import run_unittest from distutils.command.build import build from distutils.tests import support @@ -51,4 +52,4 @@ return unittest.makeSuite(BuildTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_build_clib.py b/lib-python/2.7/distutils/tests/test_build_clib.py --- a/lib-python/2.7/distutils/tests/test_build_clib.py +++ b/lib-python/2.7/distutils/tests/test_build_clib.py @@ -3,6 +3,8 @@ import os import sys +from test.test_support import run_unittest + from distutils.command.build_clib import build_clib from distutils.errors import DistutilsSetupError from distutils.tests import support @@ -140,4 +142,4 @@ return unittest.makeSuite(BuildCLibTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_build_ext.py b/lib-python/2.7/distutils/tests/test_build_ext.py --- a/lib-python/2.7/distutils/tests/test_build_ext.py +++ b/lib-python/2.7/distutils/tests/test_build_ext.py @@ -3,12 +3,13 @@ import tempfile import shutil from StringIO import StringIO +import textwrap from distutils.core import Extension, Distribution from distutils.command.build_ext import build_ext from distutils import sysconfig from distutils.tests import support -from distutils.errors import DistutilsSetupError +from distutils.errors import DistutilsSetupError, CompileError import unittest from test import test_support @@ -430,6 +431,59 @@ wanted = os.path.join(cmd.build_lib, 'UpdateManager', 'fdsend' + ext) self.assertEqual(ext_path, wanted) + @unittest.skipUnless(sys.platform == 'darwin', 'test only relevant for MacOSX') + def test_deployment_target(self): + self._try_compile_deployment_target() + + orig_environ = os.environ + os.environ = orig_environ.copy() + self.addCleanup(setattr, os, 'environ', orig_environ) + + os.environ['MACOSX_DEPLOYMENT_TARGET']='10.1' + self._try_compile_deployment_target() + + + def _try_compile_deployment_target(self): + deptarget_c = os.path.join(self.tmp_dir, 'deptargetmodule.c') + + with 
open(deptarget_c, 'w') as fp: + fp.write(textwrap.dedent('''\ + #include + + int dummy; + + #if TARGET != MAC_OS_X_VERSION_MIN_REQUIRED + #error "Unexpected target" + #endif + + ''')) + + target = sysconfig.get_config_var('MACOSX_DEPLOYMENT_TARGET') + target = tuple(map(int, target.split('.'))) + target = '%02d%01d0' % target + + deptarget_ext = Extension( + 'deptarget', + [deptarget_c], + extra_compile_args=['-DTARGET=%s'%(target,)], + ) + dist = Distribution({ + 'name': 'deptarget', + 'ext_modules': [deptarget_ext] + }) + dist.package_dir = self.tmp_dir + cmd = build_ext(dist) + cmd.build_lib = self.tmp_dir + cmd.build_temp = self.tmp_dir + + try: + old_stdout = sys.stdout + cmd.ensure_finalized() + cmd.run() + + except CompileError: + self.fail("Wrong deployment target during compilation") + def test_suite(): return unittest.makeSuite(BuildExtTestCase) diff --git a/lib-python/2.7/distutils/tests/test_build_py.py b/lib-python/2.7/distutils/tests/test_build_py.py --- a/lib-python/2.7/distutils/tests/test_build_py.py +++ b/lib-python/2.7/distutils/tests/test_build_py.py @@ -10,13 +10,14 @@ from distutils.errors import DistutilsFileError from distutils.tests import support +from test.test_support import run_unittest class BuildPyTestCase(support.TempdirManager, support.LoggingSilencer, unittest.TestCase): - def _setup_package_data(self): + def test_package_data(self): sources = self.mkdtemp() f = open(os.path.join(sources, "__init__.py"), "w") try: @@ -56,20 +57,15 @@ self.assertEqual(len(cmd.get_outputs()), 3) pkgdest = os.path.join(destination, "pkg") files = os.listdir(pkgdest) - return files + self.assertIn("__init__.py", files) + self.assertIn("README.txt", files) + # XXX even with -O, distutils writes pyc, not pyo; bug? + if sys.dont_write_bytecode: + self.assertNotIn("__init__.pyc", files) + else: + self.assertIn("__init__.pyc", files) - def test_package_data(self): - files = self._setup_package_data() - self.assertTrue("__init__.py" in files) - self.assertTrue("README.txt" in files) - - @unittest.skipIf(sys.flags.optimize >= 2, - "pyc files are not written with -O2 and above") - def test_package_data_pyc(self): - files = self._setup_package_data() - self.assertTrue("__init__.pyc" in files) - - def test_empty_package_dir (self): + def test_empty_package_dir(self): # See SF 1668596/1720897. 
cwd = os.getcwd() @@ -117,10 +113,10 @@ finally: sys.dont_write_bytecode = old_dont_write_bytecode - self.assertTrue('byte-compiling is disabled' in self.logs[0][1]) + self.assertIn('byte-compiling is disabled', self.logs[0][1]) def test_suite(): return unittest.makeSuite(BuildPyTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_build_scripts.py b/lib-python/2.7/distutils/tests/test_build_scripts.py --- a/lib-python/2.7/distutils/tests/test_build_scripts.py +++ b/lib-python/2.7/distutils/tests/test_build_scripts.py @@ -8,6 +8,7 @@ import sysconfig from distutils.tests import support +from test.test_support import run_unittest class BuildScriptsTestCase(support.TempdirManager, @@ -108,4 +109,4 @@ return unittest.makeSuite(BuildScriptsTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_check.py b/lib-python/2.7/distutils/tests/test_check.py --- a/lib-python/2.7/distutils/tests/test_check.py +++ b/lib-python/2.7/distutils/tests/test_check.py @@ -1,5 +1,6 @@ """Tests for distutils.command.check.""" import unittest +from test.test_support import run_unittest from distutils.command.check import check, HAS_DOCUTILS from distutils.tests import support @@ -95,4 +96,4 @@ return unittest.makeSuite(CheckTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_clean.py b/lib-python/2.7/distutils/tests/test_clean.py --- a/lib-python/2.7/distutils/tests/test_clean.py +++ b/lib-python/2.7/distutils/tests/test_clean.py @@ -6,6 +6,7 @@ from distutils.command.clean import clean from distutils.tests import support +from test.test_support import run_unittest class cleanTestCase(support.TempdirManager, support.LoggingSilencer, @@ -38,7 +39,7 @@ self.assertTrue(not os.path.exists(path), '%s was not removed' % path) - # let's run the command again (should spit warnings but suceed) + # let's run the command again (should spit warnings but succeed) cmd.all = 1 cmd.ensure_finalized() cmd.run() @@ -47,4 +48,4 @@ return unittest.makeSuite(cleanTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_cmd.py b/lib-python/2.7/distutils/tests/test_cmd.py --- a/lib-python/2.7/distutils/tests/test_cmd.py +++ b/lib-python/2.7/distutils/tests/test_cmd.py @@ -99,7 +99,7 @@ def test_ensure_dirname(self): cmd = self.cmd - cmd.option1 = os.path.dirname(__file__) + cmd.option1 = os.path.dirname(__file__) or os.curdir cmd.ensure_dirname('option1') cmd.option2 = 'xxx' self.assertRaises(DistutilsOptionError, cmd.ensure_dirname, 'option2') diff --git a/lib-python/2.7/distutils/tests/test_config.py b/lib-python/2.7/distutils/tests/test_config.py --- a/lib-python/2.7/distutils/tests/test_config.py +++ b/lib-python/2.7/distutils/tests/test_config.py @@ -11,6 +11,7 @@ from distutils.log import WARN from distutils.tests import support +from test.test_support import run_unittest PYPIRC = """\ [distutils] @@ -119,4 +120,4 @@ return unittest.makeSuite(PyPIRCCommandTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_config_cmd.py b/lib-python/2.7/distutils/tests/test_config_cmd.py --- a/lib-python/2.7/distutils/tests/test_config_cmd.py 
+++ b/lib-python/2.7/distutils/tests/test_config_cmd.py @@ -2,6 +2,7 @@ import unittest import os import sys +from test.test_support import run_unittest from distutils.command.config import dump_file, config from distutils.tests import support @@ -86,4 +87,4 @@ return unittest.makeSuite(ConfigTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_core.py b/lib-python/2.7/distutils/tests/test_core.py --- a/lib-python/2.7/distutils/tests/test_core.py +++ b/lib-python/2.7/distutils/tests/test_core.py @@ -6,7 +6,7 @@ import shutil import sys import test.test_support -from test.test_support import captured_stdout +from test.test_support import captured_stdout, run_unittest import unittest from distutils.tests import support @@ -105,4 +105,4 @@ return unittest.makeSuite(CoreTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_dep_util.py b/lib-python/2.7/distutils/tests/test_dep_util.py --- a/lib-python/2.7/distutils/tests/test_dep_util.py +++ b/lib-python/2.7/distutils/tests/test_dep_util.py @@ -6,6 +6,7 @@ from distutils.dep_util import newer, newer_pairwise, newer_group from distutils.errors import DistutilsFileError from distutils.tests import support +from test.test_support import run_unittest class DepUtilTestCase(support.TempdirManager, unittest.TestCase): @@ -77,4 +78,4 @@ return unittest.makeSuite(DepUtilTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_dir_util.py b/lib-python/2.7/distutils/tests/test_dir_util.py --- a/lib-python/2.7/distutils/tests/test_dir_util.py +++ b/lib-python/2.7/distutils/tests/test_dir_util.py @@ -10,6 +10,7 @@ from distutils import log from distutils.tests import support +from test.test_support import run_unittest class DirUtilTestCase(support.TempdirManager, unittest.TestCase): @@ -112,4 +113,4 @@ return unittest.makeSuite(DirUtilTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_dist.py b/lib-python/2.7/distutils/tests/test_dist.py --- a/lib-python/2.7/distutils/tests/test_dist.py +++ b/lib-python/2.7/distutils/tests/test_dist.py @@ -11,7 +11,7 @@ from distutils.dist import Distribution, fix_help_options, DistributionMetadata from distutils.cmd import Command import distutils.dist -from test.test_support import TESTFN, captured_stdout +from test.test_support import TESTFN, captured_stdout, run_unittest from distutils.tests import support class test_dist(Command): @@ -433,4 +433,4 @@ return suite if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_file_util.py b/lib-python/2.7/distutils/tests/test_file_util.py --- a/lib-python/2.7/distutils/tests/test_file_util.py +++ b/lib-python/2.7/distutils/tests/test_file_util.py @@ -6,6 +6,7 @@ from distutils.file_util import move_file, write_file, copy_file from distutils import log from distutils.tests import support +from test.test_support import run_unittest class FileUtilTestCase(support.TempdirManager, unittest.TestCase): @@ -77,4 +78,4 @@ return unittest.makeSuite(FileUtilTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git 
a/lib-python/2.7/distutils/tests/test_filelist.py b/lib-python/2.7/distutils/tests/test_filelist.py --- a/lib-python/2.7/distutils/tests/test_filelist.py +++ b/lib-python/2.7/distutils/tests/test_filelist.py @@ -1,7 +1,7 @@ """Tests for distutils.filelist.""" from os.path import join import unittest -from test.test_support import captured_stdout +from test.test_support import captured_stdout, run_unittest from distutils.filelist import glob_to_re, FileList from distutils import debug @@ -82,4 +82,4 @@ return unittest.makeSuite(FileListTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_install.py b/lib-python/2.7/distutils/tests/test_install.py --- a/lib-python/2.7/distutils/tests/test_install.py +++ b/lib-python/2.7/distutils/tests/test_install.py @@ -3,6 +3,8 @@ import os import unittest +from test.test_support import run_unittest + from distutils.command.install import install from distutils.core import Distribution @@ -52,4 +54,4 @@ return unittest.makeSuite(InstallTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_install_data.py b/lib-python/2.7/distutils/tests/test_install_data.py --- a/lib-python/2.7/distutils/tests/test_install_data.py +++ b/lib-python/2.7/distutils/tests/test_install_data.py @@ -6,6 +6,7 @@ from distutils.command.install_data import install_data from distutils.tests import support +from test.test_support import run_unittest class InstallDataTestCase(support.TempdirManager, support.LoggingSilencer, @@ -73,4 +74,4 @@ return unittest.makeSuite(InstallDataTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_install_headers.py b/lib-python/2.7/distutils/tests/test_install_headers.py --- a/lib-python/2.7/distutils/tests/test_install_headers.py +++ b/lib-python/2.7/distutils/tests/test_install_headers.py @@ -6,6 +6,7 @@ from distutils.command.install_headers import install_headers from distutils.tests import support +from test.test_support import run_unittest class InstallHeadersTestCase(support.TempdirManager, support.LoggingSilencer, @@ -37,4 +38,4 @@ return unittest.makeSuite(InstallHeadersTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_install_lib.py b/lib-python/2.7/distutils/tests/test_install_lib.py --- a/lib-python/2.7/distutils/tests/test_install_lib.py +++ b/lib-python/2.7/distutils/tests/test_install_lib.py @@ -7,6 +7,7 @@ from distutils.extension import Extension from distutils.tests import support from distutils.errors import DistutilsOptionError +from test.test_support import run_unittest class InstallLibTestCase(support.TempdirManager, support.LoggingSilencer, @@ -103,4 +104,4 @@ return unittest.makeSuite(InstallLibTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_install_scripts.py b/lib-python/2.7/distutils/tests/test_install_scripts.py --- a/lib-python/2.7/distutils/tests/test_install_scripts.py +++ b/lib-python/2.7/distutils/tests/test_install_scripts.py @@ -7,6 +7,7 @@ from distutils.core import Distribution from distutils.tests import support +from test.test_support import run_unittest class 
InstallScriptsTestCase(support.TempdirManager, @@ -78,4 +79,4 @@ return unittest.makeSuite(InstallScriptsTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_msvc9compiler.py b/lib-python/2.7/distutils/tests/test_msvc9compiler.py --- a/lib-python/2.7/distutils/tests/test_msvc9compiler.py +++ b/lib-python/2.7/distutils/tests/test_msvc9compiler.py @@ -5,6 +5,7 @@ from distutils.errors import DistutilsPlatformError from distutils.tests import support +from test.test_support import run_unittest _MANIFEST = """\ @@ -137,4 +138,4 @@ return unittest.makeSuite(msvc9compilerTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_register.py b/lib-python/2.7/distutils/tests/test_register.py --- a/lib-python/2.7/distutils/tests/test_register.py +++ b/lib-python/2.7/distutils/tests/test_register.py @@ -7,7 +7,7 @@ import urllib2 import warnings -from test.test_support import check_warnings +from test.test_support import check_warnings, run_unittest from distutils.command import register as register_module from distutils.command.register import register @@ -138,7 +138,7 @@ # let's see what the server received : we should # have 2 similar requests - self.assertTrue(self.conn.reqs, 2) + self.assertEqual(len(self.conn.reqs), 2) req1 = dict(self.conn.reqs[0].headers) req2 = dict(self.conn.reqs[1].headers) self.assertEqual(req2['Content-length'], req1['Content-length']) @@ -168,7 +168,7 @@ del register_module.raw_input # we should have send a request - self.assertTrue(self.conn.reqs, 1) + self.assertEqual(len(self.conn.reqs), 1) req = self.conn.reqs[0] headers = dict(req.headers) self.assertEqual(headers['Content-length'], '608') @@ -186,7 +186,7 @@ del register_module.raw_input # we should have send a request - self.assertTrue(self.conn.reqs, 1) + self.assertEqual(len(self.conn.reqs), 1) req = self.conn.reqs[0] headers = dict(req.headers) self.assertEqual(headers['Content-length'], '290') @@ -258,4 +258,4 @@ return unittest.makeSuite(RegisterTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_sdist.py b/lib-python/2.7/distutils/tests/test_sdist.py --- a/lib-python/2.7/distutils/tests/test_sdist.py +++ b/lib-python/2.7/distutils/tests/test_sdist.py @@ -24,11 +24,9 @@ import tempfile import warnings -from test.test_support import check_warnings -from test.test_support import captured_stdout +from test.test_support import captured_stdout, check_warnings, run_unittest -from distutils.command.sdist import sdist -from distutils.command.sdist import show_formats +from distutils.command.sdist import sdist, show_formats from distutils.core import Distribution from distutils.tests.test_config import PyPIRCCommandTestCase from distutils.errors import DistutilsExecError, DistutilsOptionError @@ -372,7 +370,7 @@ # adding a file self.write_file((self.tmp_dir, 'somecode', 'doc2.txt'), '#') - # make sure build_py is reinitinialized, like a fresh run + # make sure build_py is reinitialized, like a fresh run build_py = dist.get_command_obj('build_py') build_py.finalized = False build_py.ensure_finalized() @@ -390,6 +388,7 @@ self.assertEqual(len(manifest2), 6) self.assertIn('doc2.txt', manifest2[-1]) + @unittest.skipUnless(zlib, "requires zlib") def test_manifest_marker(self): # check that autogenerated MANIFESTs have a 
marker dist, cmd = self.get_cmd() @@ -406,6 +405,7 @@ self.assertEqual(manifest[0], '# file GENERATED by distutils, do NOT edit') + @unittest.skipUnless(zlib, "requires zlib") def test_manual_manifest(self): # check that a MANIFEST without a marker is left alone dist, cmd = self.get_cmd() @@ -426,4 +426,4 @@ return unittest.makeSuite(SDistTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_spawn.py b/lib-python/2.7/distutils/tests/test_spawn.py --- a/lib-python/2.7/distutils/tests/test_spawn.py +++ b/lib-python/2.7/distutils/tests/test_spawn.py @@ -2,7 +2,7 @@ import unittest import os import time -from test.test_support import captured_stdout +from test.test_support import captured_stdout, run_unittest from distutils.spawn import _nt_quote_args from distutils.spawn import spawn, find_executable @@ -57,4 +57,4 @@ return unittest.makeSuite(SpawnTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_text_file.py b/lib-python/2.7/distutils/tests/test_text_file.py --- a/lib-python/2.7/distutils/tests/test_text_file.py +++ b/lib-python/2.7/distutils/tests/test_text_file.py @@ -3,6 +3,7 @@ import unittest from distutils.text_file import TextFile from distutils.tests import support +from test.test_support import run_unittest TEST_DATA = """# test file @@ -103,4 +104,4 @@ return unittest.makeSuite(TextFileTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_unixccompiler.py b/lib-python/2.7/distutils/tests/test_unixccompiler.py --- a/lib-python/2.7/distutils/tests/test_unixccompiler.py +++ b/lib-python/2.7/distutils/tests/test_unixccompiler.py @@ -1,6 +1,7 @@ """Tests for distutils.unixccompiler.""" import sys import unittest +from test.test_support import run_unittest from distutils import sysconfig from distutils.unixccompiler import UnixCCompiler @@ -126,4 +127,4 @@ return unittest.makeSuite(UnixCCompilerTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_upload.py b/lib-python/2.7/distutils/tests/test_upload.py --- a/lib-python/2.7/distutils/tests/test_upload.py +++ b/lib-python/2.7/distutils/tests/test_upload.py @@ -1,14 +1,13 @@ +# -*- encoding: utf8 -*- """Tests for distutils.command.upload.""" -# -*- encoding: utf8 -*- -import sys import os import unittest +from test.test_support import run_unittest from distutils.command import upload as upload_mod from distutils.command.upload import upload from distutils.core import Distribution -from distutils.tests import support from distutils.tests.test_config import PYPIRC, PyPIRCCommandTestCase PYPIRC_LONG_PASSWORD = """\ @@ -129,4 +128,4 @@ return unittest.makeSuite(uploadTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_util.py b/lib-python/2.7/distutils/tests/test_util.py --- a/lib-python/2.7/distutils/tests/test_util.py +++ b/lib-python/2.7/distutils/tests/test_util.py @@ -1,6 +1,7 @@ """Tests for distutils.util.""" import sys import unittest +from test.test_support import run_unittest from distutils.errors import DistutilsPlatformError, DistutilsByteCompileError from distutils.util import byte_compile @@ -21,4 +22,4 @@ return 
unittest.makeSuite(UtilTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_version.py b/lib-python/2.7/distutils/tests/test_version.py --- a/lib-python/2.7/distutils/tests/test_version.py +++ b/lib-python/2.7/distutils/tests/test_version.py @@ -2,6 +2,7 @@ import unittest from distutils.version import LooseVersion from distutils.version import StrictVersion +from test.test_support import run_unittest class VersionTestCase(unittest.TestCase): @@ -67,4 +68,4 @@ return unittest.makeSuite(VersionTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_versionpredicate.py b/lib-python/2.7/distutils/tests/test_versionpredicate.py --- a/lib-python/2.7/distutils/tests/test_versionpredicate.py +++ b/lib-python/2.7/distutils/tests/test_versionpredicate.py @@ -4,6 +4,10 @@ import distutils.versionpredicate import doctest +from test.test_support import run_unittest def test_suite(): return doctest.DocTestSuite(distutils.versionpredicate) + +if __name__ == '__main__': + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/util.py b/lib-python/2.7/distutils/util.py --- a/lib-python/2.7/distutils/util.py +++ b/lib-python/2.7/distutils/util.py @@ -97,9 +97,7 @@ from distutils.sysconfig import get_config_vars cfgvars = get_config_vars() - macver = os.environ.get('MACOSX_DEPLOYMENT_TARGET') - if not macver: - macver = cfgvars.get('MACOSX_DEPLOYMENT_TARGET') + macver = cfgvars.get('MACOSX_DEPLOYMENT_TARGET') if 1: # Always calculate the release of the running machine, diff --git a/lib-python/2.7/doctest.py b/lib-python/2.7/doctest.py --- a/lib-python/2.7/doctest.py +++ b/lib-python/2.7/doctest.py @@ -1217,7 +1217,7 @@ # Process each example. for examplenum, example in enumerate(test.examples): - # If REPORT_ONLY_FIRST_FAILURE is set, then supress + # If REPORT_ONLY_FIRST_FAILURE is set, then suppress # reporting after the first failure. quiet = (self.optionflags & REPORT_ONLY_FIRST_FAILURE and failures > 0) @@ -2186,7 +2186,7 @@ caller can catch the errors and initiate post-mortem debugging. The DocTestCase provides a debug method that raises - UnexpectedException errors if there is an unexepcted + UnexpectedException errors if there is an unexpected exception: >>> test = DocTestParser().get_doctest('>>> raise KeyError\n42', diff --git a/lib-python/2.7/email/charset.py b/lib-python/2.7/email/charset.py --- a/lib-python/2.7/email/charset.py +++ b/lib-python/2.7/email/charset.py @@ -209,7 +209,7 @@ input_charset = unicode(input_charset, 'ascii') except UnicodeError: raise errors.CharsetError(input_charset) - input_charset = input_charset.lower() + input_charset = input_charset.lower().encode('ascii') # Set the input charset after filtering through the aliases and/or codecs if not (input_charset in ALIASES or input_charset in CHARSETS): try: diff --git a/lib-python/2.7/email/generator.py b/lib-python/2.7/email/generator.py --- a/lib-python/2.7/email/generator.py +++ b/lib-python/2.7/email/generator.py @@ -202,18 +202,13 @@ g = self.clone(s) g.flatten(part, unixfrom=False) msgtexts.append(s.getvalue()) - # Now make sure the boundary we've selected doesn't appear in any of - # the message texts. - alltext = NL.join(msgtexts) # BAW: What about boundaries that are wrapped in double-quotes? 
- boundary = msg.get_boundary(failobj=_make_boundary(alltext)) - # If we had to calculate a new boundary because the body text - # contained that string, set the new boundary. We don't do it - # unconditionally because, while set_boundary() preserves order, it - # doesn't preserve newlines/continuations in headers. This is no big - # deal in practice, but turns out to be inconvenient for the unittest - # suite. - if msg.get_boundary() != boundary: + boundary = msg.get_boundary() + if not boundary: + # Create a boundary that doesn't appear in any of the + # message texts. + alltext = NL.join(msgtexts) + boundary = _make_boundary(alltext) msg.set_boundary(boundary) # If there's a preamble, write it out, with a trailing CRLF if msg.preamble is not None: @@ -292,7 +287,7 @@ _FMT = '[Non-text (%(type)s) part of message omitted, filename %(filename)s]' class DecodedGenerator(Generator): - """Generator a text representation of a message. + """Generates a text representation of a message. Like the Generator base class, except that non-text parts are substituted with a format string representing the part. diff --git a/lib-python/2.7/email/header.py b/lib-python/2.7/email/header.py --- a/lib-python/2.7/email/header.py +++ b/lib-python/2.7/email/header.py @@ -47,6 +47,10 @@ # For use with .match() fcre = re.compile(r'[\041-\176]+:$') +# Find a header embedded in a putative header value. Used to check for +# header injection attack. +_embeded_header = re.compile(r'\n[^ \t]+:') + # Helpers @@ -403,7 +407,11 @@ newchunks += self._split(s, charset, targetlen, splitchars) lastchunk, lastcharset = newchunks[-1] lastlen = lastcharset.encoded_header_len(lastchunk) - return self._encode_chunks(newchunks, maxlinelen) + value = self._encode_chunks(newchunks, maxlinelen) + if _embeded_header.search(value): + raise HeaderParseError("header value appears to contain " + "an embedded header: {!r}".format(value)) + return value diff --git a/lib-python/2.7/email/message.py b/lib-python/2.7/email/message.py --- a/lib-python/2.7/email/message.py +++ b/lib-python/2.7/email/message.py @@ -38,7 +38,9 @@ def _formatparam(param, value=None, quote=True): """Convenience function to format and return a key=value pair. - This will quote the value if needed or if quote is true. + This will quote the value if needed or if quote is true. If value is a + three tuple (charset, language, value), it will be encoded according + to RFC2231 rules. """ if value is not None and len(value) > 0: # A tuple is used for RFC 2231 encoded parameter values where items @@ -97,7 +99,7 @@ objects, otherwise it is a string. Message objects implement part of the `mapping' interface, which assumes - there is exactly one occurrance of the header per message. Some headers + there is exactly one occurrence of the header per message. Some headers do in fact appear multiple times (e.g. Received) and for those headers, you must use the explicit API to set or get all the headers. Not all of the mapping methods are implemented. @@ -286,7 +288,7 @@ Return None if the header is missing instead of raising an exception. Note that if the header appeared multiple times, exactly which - occurrance gets returned is undefined. Use get_all() to get all + occurrence gets returned is undefined. Use get_all() to get all the values matching a header field name. """ return self.get(name) @@ -389,7 +391,10 @@ name is the header field to add. keyword arguments can be used to set additional parameters for the header field, with underscores converted to dashes. 
Normally the parameter will be added as key="value" unless - value is None, in which case only the key will be added. + value is None, in which case only the key will be added. If a + parameter value contains non-ASCII characters it must be specified as a + three-tuple of (charset, language, value), in which case it will be + encoded according to RFC2231 rules. Example: diff --git a/lib-python/2.7/email/mime/application.py b/lib-python/2.7/email/mime/application.py --- a/lib-python/2.7/email/mime/application.py +++ b/lib-python/2.7/email/mime/application.py @@ -17,7 +17,7 @@ _encoder=encoders.encode_base64, **_params): """Create an application/* type MIME document. - _data is a string containing the raw applicatoin data. + _data is a string containing the raw application data. _subtype is the MIME content type subtype, defaulting to 'octet-stream'. diff --git a/lib-python/2.7/email/test/data/msg_26.txt b/lib-python/2.7/email/test/data/msg_26.txt --- a/lib-python/2.7/email/test/data/msg_26.txt +++ b/lib-python/2.7/email/test/data/msg_26.txt @@ -42,4 +42,4 @@ MzMAAAAACH97tzAAAAALu3c3gAAAAAAL+7tzDABAu7f7cAAAAAAACA+3MA7EQAv/sIAA AAAAAAAIAAAAAAAAAIAAAAAA ---1618492860--2051301190--113853680-- +--1618492860--2051301190--113853680-- \ No newline at end of file diff --git a/lib-python/2.7/email/test/test_email.py b/lib-python/2.7/email/test/test_email.py --- a/lib-python/2.7/email/test/test_email.py +++ b/lib-python/2.7/email/test/test_email.py @@ -179,6 +179,17 @@ self.assertRaises(Errors.HeaderParseError, msg.set_boundary, 'BOUNDARY') + def test_make_boundary(self): + msg = MIMEMultipart('form-data') + # Note that when the boundary gets created is an implementation + # detail and might change. + self.assertEqual(msg.items()[0][1], 'multipart/form-data') + # Trigger creation of boundary + msg.as_string() + self.assertEqual(msg.items()[0][1][:33], + 'multipart/form-data; boundary="==') + # XXX: there ought to be tests of the uniqueness of the boundary, too. + def test_message_rfc822_only(self): # Issue 7970: message/rfc822 not in multipart parsed by # HeaderParser caused an exception when flattened. @@ -542,6 +553,17 @@ msg.set_charset(u'us-ascii') self.assertEqual('us-ascii', msg.get_content_charset()) + # Issue 5871: reject an attempt to embed a header inside a header value + # (header injection attack). 
+ def test_embeded_header_via_Header_rejected(self): + msg = Message() + msg['Dummy'] = Header('dummy\nX-Injected-Header: test') + self.assertRaises(Errors.HeaderParseError, msg.as_string) + + def test_embeded_header_via_string_rejected(self): + msg = Message() + msg['Dummy'] = 'dummy\nX-Injected-Header: test' + self.assertRaises(Errors.HeaderParseError, msg.as_string) # Test the email.Encoders module @@ -3113,6 +3135,28 @@ s = 'Subject: =?EUC-KR?B?CSixpLDtKSC/7Liuvsax4iC6uLmwMcijIKHaILzSwd/H0SC8+LCjwLsgv7W/+Mj3I ?=' raises(Errors.HeaderParseError, decode_header, s) + # Issue 1078919 + def test_ascii_add_header(self): + msg = Message() + msg.add_header('Content-Disposition', 'attachment', + filename='bud.gif') + self.assertEqual('attachment; filename="bud.gif"', + msg['Content-Disposition']) + + def test_nonascii_add_header_via_triple(self): + msg = Message() + msg.add_header('Content-Disposition', 'attachment', + filename=('iso-8859-1', '', 'Fu\xdfballer.ppt')) + self.assertEqual( + 'attachment; filename*="iso-8859-1\'\'Fu%DFballer.ppt"', + msg['Content-Disposition']) + + def test_encode_unaliased_charset(self): + # Issue 1379416: when the charset has no output conversion, + # output was accidentally getting coerced to unicode. + res = Header('abc','iso-8859-2').encode() + self.assertEqual(res, '=?iso-8859-2?q?abc?=') + self.assertIsInstance(res, str) # Test RFC 2231 header parameters (en/de)coding diff --git a/lib-python/2.7/ftplib.py b/lib-python/2.7/ftplib.py --- a/lib-python/2.7/ftplib.py +++ b/lib-python/2.7/ftplib.py @@ -599,7 +599,7 @@ Usage example: >>> from ftplib import FTP_TLS >>> ftps = FTP_TLS('ftp.python.org') - >>> ftps.login() # login anonimously previously securing control channel + >>> ftps.login() # login anonymously previously securing control channel '230 Guest login ok, access restrictions apply.' 
>>> ftps.prot_p() # switch to secure data connection '200 Protection level set to P' diff --git a/lib-python/2.7/functools.py b/lib-python/2.7/functools.py --- a/lib-python/2.7/functools.py +++ b/lib-python/2.7/functools.py @@ -53,17 +53,17 @@ def total_ordering(cls): """Class decorator that fills in missing ordering methods""" convert = { - '__lt__': [('__gt__', lambda self, other: other < self), - ('__le__', lambda self, other: not other < self), + '__lt__': [('__gt__', lambda self, other: not (self < other or self == other)), + ('__le__', lambda self, other: self < other or self == other), ('__ge__', lambda self, other: not self < other)], - '__le__': [('__ge__', lambda self, other: other <= self), - ('__lt__', lambda self, other: not other <= self), + '__le__': [('__ge__', lambda self, other: not self <= other or self == other), + ('__lt__', lambda self, other: self <= other and not self == other), ('__gt__', lambda self, other: not self <= other)], - '__gt__': [('__lt__', lambda self, other: other > self), - ('__ge__', lambda self, other: not other > self), + '__gt__': [('__lt__', lambda self, other: not (self > other or self == other)), + ('__ge__', lambda self, other: self > other or self == other), ('__le__', lambda self, other: not self > other)], - '__ge__': [('__le__', lambda self, other: other >= self), - ('__gt__', lambda self, other: not other >= self), + '__ge__': [('__le__', lambda self, other: (not self >= other) or self == other), + ('__gt__', lambda self, other: self >= other and not self == other), ('__lt__', lambda self, other: not self >= other)] } roots = set(dir(cls)) & set(convert) @@ -80,6 +80,7 @@ def cmp_to_key(mycmp): """Convert a cmp= function into a key= function""" class K(object): + __slots__ = ['obj'] def __init__(self, obj, *args): self.obj = obj def __lt__(self, other): diff --git a/lib-python/2.7/getpass.py b/lib-python/2.7/getpass.py --- a/lib-python/2.7/getpass.py +++ b/lib-python/2.7/getpass.py @@ -62,7 +62,7 @@ try: old = termios.tcgetattr(fd) # a copy to save new = old[:] - new[3] &= ~(termios.ECHO|termios.ISIG) # 3 == 'lflags' + new[3] &= ~termios.ECHO # 3 == 'lflags' tcsetattr_flags = termios.TCSAFLUSH if hasattr(termios, 'TCSASOFT'): tcsetattr_flags |= termios.TCSASOFT diff --git a/lib-python/2.7/gettext.py b/lib-python/2.7/gettext.py --- a/lib-python/2.7/gettext.py +++ b/lib-python/2.7/gettext.py @@ -316,7 +316,7 @@ # Note: we unconditionally convert both msgids and msgstrs to # Unicode using the character encoding specified in the charset # parameter of the Content-Type header. The gettext documentation - # strongly encourages msgids to be us-ascii, but some appliations + # strongly encourages msgids to be us-ascii, but some applications # require alternative encodings (e.g. Zope's ZCML and ZPT). 
For # traditional gettext applications, the msgid conversion will # cause no problems since us-ascii should always be a subset of diff --git a/lib-python/2.7/hashlib.py b/lib-python/2.7/hashlib.py --- a/lib-python/2.7/hashlib.py +++ b/lib-python/2.7/hashlib.py @@ -64,26 +64,29 @@ def __get_builtin_constructor(name): - if name in ('SHA1', 'sha1'): - import _sha - return _sha.new - elif name in ('MD5', 'md5'): - import _md5 - return _md5.new - elif name in ('SHA256', 'sha256', 'SHA224', 'sha224'): - import _sha256 - bs = name[3:] - if bs == '256': - return _sha256.sha256 - elif bs == '224': - return _sha256.sha224 - elif name in ('SHA512', 'sha512', 'SHA384', 'sha384'): - import _sha512 - bs = name[3:] - if bs == '512': - return _sha512.sha512 - elif bs == '384': - return _sha512.sha384 + try: + if name in ('SHA1', 'sha1'): + import _sha + return _sha.new + elif name in ('MD5', 'md5'): + import _md5 + return _md5.new + elif name in ('SHA256', 'sha256', 'SHA224', 'sha224'): + import _sha256 + bs = name[3:] + if bs == '256': + return _sha256.sha256 + elif bs == '224': + return _sha256.sha224 + elif name in ('SHA512', 'sha512', 'SHA384', 'sha384'): + import _sha512 + bs = name[3:] + if bs == '512': + return _sha512.sha512 + elif bs == '384': + return _sha512.sha384 + except ImportError: + pass # no extension module, this hash is unsupported. raise ValueError('unsupported hash type %s' % name) diff --git a/lib-python/2.7/heapq.py b/lib-python/2.7/heapq.py --- a/lib-python/2.7/heapq.py +++ b/lib-python/2.7/heapq.py @@ -133,6 +133,11 @@ from operator import itemgetter import bisect +def cmp_lt(x, y): + # Use __lt__ if available; otherwise, try __le__. + # In Py3.x, only __lt__ will be called. + return (x < y) if hasattr(x, '__lt__') else (not y <= x) + def heappush(heap, item): """Push item onto heap, maintaining the heap invariant.""" heap.append(item) @@ -167,13 +172,13 @@ def heappushpop(heap, item): """Fast version of a heappush followed by a heappop.""" - if heap and heap[0] < item: + if heap and cmp_lt(heap[0], item): item, heap[0] = heap[0], item _siftup(heap, 0) return item def heapify(x): - """Transform list into a heap, in-place, in O(len(heap)) time.""" + """Transform list into a heap, in-place, in O(len(x)) time.""" n = len(x) # Transform bottom-up. The largest index there's any point to looking at # is the largest with a child index in-range, so must have 2*i + 1 < n, @@ -215,11 +220,10 @@ pop = result.pop los = result[-1] # los --> Largest of the nsmallest for elem in it: - if los <= elem: - continue - insort(result, elem) - pop() - los = result[-1] + if cmp_lt(elem, los): + insort(result, elem) + pop() + los = result[-1] return result # An alternative approach manifests the whole iterable in memory but # saves comparisons by heapifying all at once. Also, saves time @@ -240,7 +244,7 @@ while pos > startpos: parentpos = (pos - 1) >> 1 parent = heap[parentpos] - if newitem < parent: + if cmp_lt(newitem, parent): heap[pos] = parent pos = parentpos continue @@ -295,7 +299,7 @@ while childpos < endpos: # Set childpos to index of smaller child. rightpos = childpos + 1 - if rightpos < endpos and not heap[childpos] < heap[rightpos]: + if rightpos < endpos and not cmp_lt(heap[childpos], heap[rightpos]): childpos = rightpos # Move the smaller child up. 
heap[pos] = heap[childpos] @@ -364,7 +368,7 @@ return [min(chain(head, it))] return [min(chain(head, it), key=key)] - # When n>=size, it's faster to use sort() + # When n>=size, it's faster to use sorted() try: size = len(iterable) except (TypeError, AttributeError): @@ -402,7 +406,7 @@ return [max(chain(head, it))] return [max(chain(head, it), key=key)] - # When n>=size, it's faster to use sort() + # When n>=size, it's faster to use sorted() try: size = len(iterable) except (TypeError, AttributeError): diff --git a/lib-python/2.7/httplib.py b/lib-python/2.7/httplib.py --- a/lib-python/2.7/httplib.py +++ b/lib-python/2.7/httplib.py @@ -212,6 +212,9 @@ # maximal amount of data to read at one time in _safe_read MAXAMOUNT = 1048576 +# maximal line length when calling readline(). +_MAXLINE = 65536 + class HTTPMessage(mimetools.Message): def addheader(self, key, value): @@ -274,7 +277,9 @@ except IOError: startofline = tell = None self.seekable = 0 - line = self.fp.readline() + line = self.fp.readline(_MAXLINE + 1) + if len(line) > _MAXLINE: + raise LineTooLong("header line") if not line: self.status = 'EOF in headers' break @@ -404,7 +409,10 @@ break # skip the header from the 100 response while True: - skip = self.fp.readline().strip() + skip = self.fp.readline(_MAXLINE + 1) + if len(skip) > _MAXLINE: + raise LineTooLong("header line") + skip = skip.strip() if not skip: break if self.debuglevel > 0: @@ -563,7 +571,9 @@ value = [] while True: if chunk_left is None: - line = self.fp.readline() + line = self.fp.readline(_MAXLINE + 1) + if len(line) > _MAXLINE: + raise LineTooLong("chunk size") i = line.find(';') if i >= 0: line = line[:i] # strip chunk-extensions @@ -598,7 +608,9 @@ # read and discard trailer up to the CRLF terminator ### note: we shouldn't have any trailers! while True: - line = self.fp.readline() + line = self.fp.readline(_MAXLINE + 1) + if len(line) > _MAXLINE: + raise LineTooLong("trailer line") if not line: # a vanishingly small number of sites EOF without # sending the trailer @@ -730,7 +742,9 @@ raise socket.error("Tunnel connection failed: %d %s" % (code, message.strip())) while True: - line = response.fp.readline() + line = response.fp.readline(_MAXLINE + 1) + if len(line) > _MAXLINE: + raise LineTooLong("header line") if line == '\r\n': break @@ -790,7 +804,7 @@ del self._buffer[:] # If msg and message_body are sent in a single send() call, # it will avoid performance problems caused by the interaction - # between delayed ack and the Nagle algorithim. + # between delayed ack and the Nagle algorithm. 
if isinstance(message_body, str): msg += message_body message_body = None @@ -1233,6 +1247,11 @@ self.args = line, self.line = line +class LineTooLong(HTTPException): + def __init__(self, line_type): + HTTPException.__init__(self, "got more than %d bytes when reading %s" + % (_MAXLINE, line_type)) + # for backwards compatibility error = HTTPException diff --git a/lib-python/2.7/idlelib/Bindings.py b/lib-python/2.7/idlelib/Bindings.py --- a/lib-python/2.7/idlelib/Bindings.py +++ b/lib-python/2.7/idlelib/Bindings.py @@ -98,14 +98,6 @@ # menu del menudefs[-1][1][0:2] - menudefs.insert(0, - ('application', [ - ('About IDLE', '<>'), - None, - ('_Preferences....', '<>'), - ])) - - default_keydefs = idleConf.GetCurrentKeySet() del sys diff --git a/lib-python/2.7/idlelib/EditorWindow.py b/lib-python/2.7/idlelib/EditorWindow.py --- a/lib-python/2.7/idlelib/EditorWindow.py +++ b/lib-python/2.7/idlelib/EditorWindow.py @@ -48,6 +48,21 @@ path = module.__path__ except AttributeError: raise ImportError, 'No source for module ' + module.__name__ + if descr[2] != imp.PY_SOURCE: + # If all of the above fails and didn't raise an exception,fallback + # to a straight import which can find __init__.py in a package. + m = __import__(fullname) + try: + filename = m.__file__ + except AttributeError: + pass + else: + file = None + base, ext = os.path.splitext(filename) + if ext == '.pyc': + ext = '.py' + filename = base + ext + descr = filename, None, imp.PY_SOURCE return file, filename, descr class EditorWindow(object): @@ -102,8 +117,8 @@ self.top = top = WindowList.ListedToplevel(root, menu=self.menubar) if flist: self.tkinter_vars = flist.vars - #self.top.instance_dict makes flist.inversedict avalable to - #configDialog.py so it can access all EditorWindow instaces + #self.top.instance_dict makes flist.inversedict available to + #configDialog.py so it can access all EditorWindow instances self.top.instance_dict = flist.inversedict else: self.tkinter_vars = {} # keys: Tkinter event names @@ -136,6 +151,14 @@ if macosxSupport.runningAsOSXApp(): # Command-W on editorwindows doesn't work without this. text.bind('<>', self.close_event) + # Some OS X systems have only one mouse button, + # so use control-click for pulldown menus there. + # (Note, AquaTk defines <2> as the right button if + # present and the Tk Text widget already binds <2>.) + text.bind("",self.right_menu_event) + else: + # Elsewhere, use right-click for pulldown menus. + text.bind("<3>",self.right_menu_event) text.bind("<>", self.cut) text.bind("<>", self.copy) text.bind("<>", self.paste) @@ -154,7 +177,6 @@ text.bind("<>", self.find_selection_event) text.bind("<>", self.replace_event) text.bind("<>", self.goto_line_event) - text.bind("<3>", self.right_menu_event) text.bind("<>",self.smart_backspace_event) text.bind("<>",self.newline_and_indent_event) text.bind("<>",self.smart_indent_event) @@ -300,13 +322,13 @@ return "break" def home_callback(self, event): - if (event.state & 12) != 0 and event.keysym == "Home": - # state&1==shift, state&4==control, state&8==alt - return # ; fall back to class binding - + if (event.state & 4) != 0 and event.keysym == "Home": + # state&4==Control. If , use the Tk binding. 
+ return if self.text.index("iomark") and \ self.text.compare("iomark", "<=", "insert lineend") and \ self.text.compare("insert linestart", "<=", "iomark"): + # In Shell on input line, go to just after prompt insertpt = int(self.text.index("iomark").split(".")[1]) else: line = self.text.get("insert linestart", "insert lineend") @@ -315,30 +337,27 @@ break else: insertpt=len(line) - lineat = int(self.text.index("insert").split('.')[1]) - if insertpt == lineat: insertpt = 0 - dest = "insert linestart+"+str(insertpt)+"c" - if (event.state&1) == 0: - # shift not pressed + # shift was not pressed self.text.tag_remove("sel", "1.0", "end") else: if not self.text.index("sel.first"): - self.text.mark_set("anchor","insert") - + self.text.mark_set("my_anchor", "insert") # there was no previous selection + else: + if self.text.compare(self.text.index("sel.first"), "<", self.text.index("insert")): + self.text.mark_set("my_anchor", "sel.first") # extend back + else: + self.text.mark_set("my_anchor", "sel.last") # extend forward first = self.text.index(dest) - last = self.text.index("anchor") - + last = self.text.index("my_anchor") if self.text.compare(first,">",last): first,last = last,first - self.text.tag_remove("sel", "1.0", "end") self.text.tag_add("sel", first, last) - self.text.mark_set("insert", dest) self.text.see("insert") return "break" @@ -385,7 +404,7 @@ menudict[name] = menu = Menu(mbar, name=name) mbar.add_cascade(label=label, menu=menu, underline=underline) - if macosxSupport.runningAsOSXApp(): + if macosxSupport.isCarbonAquaTk(self.root): # Insert the application menu menudict['application'] = menu = Menu(mbar, name='apple') mbar.add_cascade(label='IDLE', menu=menu) @@ -445,7 +464,11 @@ def python_docs(self, event=None): if sys.platform[:3] == 'win': - os.startfile(self.help_url) + try: + os.startfile(self.help_url) + except WindowsError as why: + tkMessageBox.showerror(title='Document Start Failure', + message=str(why), parent=self.text) else: webbrowser.open(self.help_url) return "break" @@ -740,9 +763,13 @@ "Create a callback with the helpfile value frozen at definition time" def display_extra_help(helpfile=helpfile): if not helpfile.startswith(('www', 'http')): - url = os.path.normpath(helpfile) + helpfile = os.path.normpath(helpfile) if sys.platform[:3] == 'win': - os.startfile(helpfile) + try: + os.startfile(helpfile) + except WindowsError as why: + tkMessageBox.showerror(title='Document Start Failure', + message=str(why), parent=self.text) else: webbrowser.open(helpfile) return display_extra_help @@ -1526,7 +1553,12 @@ def get_accelerator(keydefs, eventname): keylist = keydefs.get(eventname) - if not keylist: + # issue10940: temporary workaround to prevent hang with OS X Cocoa Tk 8.5 + # if not keylist: + if (not keylist) or (macosxSupport.runningAsOSXApp() and eventname in { + "<>", + "<>", + "<>"}): return "" s = keylist[0] s = re.sub(r"-[a-z]\b", lambda m: m.group().upper(), s) diff --git a/lib-python/2.7/idlelib/FileList.py b/lib-python/2.7/idlelib/FileList.py --- a/lib-python/2.7/idlelib/FileList.py +++ b/lib-python/2.7/idlelib/FileList.py @@ -43,7 +43,7 @@ def new(self, filename=None): return self.EditorWindow(self, filename) - def close_all_callback(self, event): + def close_all_callback(self, *args, **kwds): for edit in self.inversedict.keys(): reply = edit.close() if reply == "cancel": diff --git a/lib-python/2.7/idlelib/FormatParagraph.py b/lib-python/2.7/idlelib/FormatParagraph.py --- a/lib-python/2.7/idlelib/FormatParagraph.py +++ 
b/lib-python/2.7/idlelib/FormatParagraph.py @@ -54,7 +54,7 @@ # If the block ends in a \n, we dont want the comment # prefix inserted after it. (Im not sure it makes sense to # reformat a comment block that isnt made of complete - # lines, but whatever!) Can't think of a clean soltution, + # lines, but whatever!) Can't think of a clean solution, # so we hack away block_suffix = "" if not newdata[-1]: diff --git a/lib-python/2.7/idlelib/HISTORY.txt b/lib-python/2.7/idlelib/HISTORY.txt --- a/lib-python/2.7/idlelib/HISTORY.txt +++ b/lib-python/2.7/idlelib/HISTORY.txt @@ -13,7 +13,7 @@ - New tarball released as a result of the 'revitalisation' of the IDLEfork project. -- This release requires python 2.1 or better. Compatability with earlier +- This release requires python 2.1 or better. Compatibility with earlier versions of python (especially ancient ones like 1.5x) is no longer a priority in IDLEfork development. diff --git a/lib-python/2.7/idlelib/IOBinding.py b/lib-python/2.7/idlelib/IOBinding.py --- a/lib-python/2.7/idlelib/IOBinding.py +++ b/lib-python/2.7/idlelib/IOBinding.py @@ -320,17 +320,20 @@ return "yes" message = "Do you want to save %s before closing?" % ( self.filename or "this untitled document") - m = tkMessageBox.Message( - title="Save On Close", - message=message, - icon=tkMessageBox.QUESTION, - type=tkMessageBox.YESNOCANCEL, - master=self.text) - reply = m.show() - if reply == "yes": + confirm = tkMessageBox.askyesnocancel( + title="Save On Close", + message=message, + default=tkMessageBox.YES, + master=self.text) + if confirm: + reply = "yes" self.save(None) if not self.get_saved(): reply = "cancel" + elif confirm is None: + reply = "cancel" + else: + reply = "no" self.text.focus_set() return reply @@ -339,7 +342,7 @@ self.save_as(event) else: if self.writefile(self.filename): - self.set_saved(1) + self.set_saved(True) try: self.editwin.store_file_breaks() except AttributeError: # may be a PyShell @@ -465,15 +468,12 @@ self.text.insert("end-1c", "\n") def print_window(self, event): - m = tkMessageBox.Message( - title="Print", - message="Print to Default Printer", - icon=tkMessageBox.QUESTION, - type=tkMessageBox.OKCANCEL, - default=tkMessageBox.OK, - master=self.text) - reply = m.show() - if reply != tkMessageBox.OK: + confirm = tkMessageBox.askokcancel( + title="Print", + message="Print to Default Printer", + default=tkMessageBox.OK, + master=self.text) + if not confirm: self.text.focus_set() return "break" tempfilename = None @@ -488,8 +488,8 @@ if not self.writefile(tempfilename): os.unlink(tempfilename) return "break" - platform=os.name - printPlatform=1 + platform = os.name + printPlatform = True if platform == 'posix': #posix platform command = idleConf.GetOption('main','General', 'print-command-posix') @@ -497,7 +497,7 @@ elif platform == 'nt': #win32 platform command = idleConf.GetOption('main','General','print-command-win') else: #no printing for this platform - printPlatform=0 + printPlatform = False if printPlatform: #we can try to print for this platform command = command % filename pipe = os.popen(command, "r") @@ -511,7 +511,7 @@ output = "Printing command: %s\n" % repr(command) + output tkMessageBox.showerror("Print status", output, master=self.text) else: #no printing for this platform - message="Printing is not enabled for this platform: %s" % platform + message = "Printing is not enabled for this platform: %s" % platform tkMessageBox.showinfo("Print status", message, master=self.text) if tempfilename: os.unlink(tempfilename) diff --git 
a/lib-python/2.7/idlelib/NEWS.txt b/lib-python/2.7/idlelib/NEWS.txt --- a/lib-python/2.7/idlelib/NEWS.txt +++ b/lib-python/2.7/idlelib/NEWS.txt @@ -1,3 +1,18 @@ +What's New in IDLE 2.7.2? +======================= + +*Release date: 29-May-2011* + +- Issue #6378: Further adjust idle.bat to start associated Python + +- Issue #11896: Save on Close failed despite selecting "Yes" in dialog. + +- toggle failing on Tk 8.5, causing IDLE exits and strange selection + behavior. Issue 4676. Improve selection extension behaviour. + +- toggle non-functional when NumLock set on Windows. Issue 3851. + + What's New in IDLE 2.7? ======================= @@ -21,7 +36,7 @@ - Tk 8.5 Text widget requires 'wordprocessor' tabstyle attr to handle mixed space/tab properly. Issue 5129, patch by Guilherme Polo. - + - Issue #3549: On MacOS the preferences menu was not present diff --git a/lib-python/2.7/idlelib/PyShell.py b/lib-python/2.7/idlelib/PyShell.py --- a/lib-python/2.7/idlelib/PyShell.py +++ b/lib-python/2.7/idlelib/PyShell.py @@ -1432,6 +1432,13 @@ shell.interp.prepend_syspath(script) shell.interp.execfile(script) + # Check for problematic OS X Tk versions and print a warning message + # in the IDLE shell window; this is less intrusive than always opening + # a separate window. + tkversionwarning = macosxSupport.tkVersionWarning(root) + if tkversionwarning: + shell.interp.runcommand(''.join(("print('", tkversionwarning, "')"))) + root.mainloop() root.destroy() diff --git a/lib-python/2.7/idlelib/ScriptBinding.py b/lib-python/2.7/idlelib/ScriptBinding.py --- a/lib-python/2.7/idlelib/ScriptBinding.py +++ b/lib-python/2.7/idlelib/ScriptBinding.py @@ -26,6 +26,7 @@ from idlelib import PyShell from idlelib.configHandler import idleConf +from idlelib import macosxSupport IDENTCHARS = string.ascii_letters + string.digits + "_" @@ -53,6 +54,9 @@ self.flist = self.editwin.flist self.root = self.editwin.root + if macosxSupport.runningAsOSXApp(): + self.editwin.text_frame.bind('<>', self._run_module_event) + def check_module_event(self, event): filename = self.getfilename() if not filename: @@ -166,6 +170,19 @@ interp.runcode(code) return 'break' + if macosxSupport.runningAsOSXApp(): + # Tk-Cocoa in MacOSX is broken until at least + # Tk 8.5.9, and without this rather + # crude workaround IDLE would hang when a user + # tries to run a module using the keyboard shortcut + # (the menu item works fine). + _run_module_event = run_module_event + + def run_module_event(self, event): + self.editwin.text_frame.after(200, + lambda: self.editwin.text_frame.event_generate('<>')) + return 'break' + def getfilename(self): """Get source filename. If not saved, offer to save (or create) file @@ -184,9 +201,9 @@ if autosave and filename: self.editwin.io.save(None) else: - reply = self.ask_save_dialog() + confirm = self.ask_save_dialog() self.editwin.text.focus_set() - if reply == "ok": + if confirm: self.editwin.io.save(None) filename = self.editwin.io.filename else: @@ -195,13 +212,11 @@ def ask_save_dialog(self): msg = "Source Must Be Saved\n" + 5*' ' + "OK to Save?" - mb = tkMessageBox.Message(title="Save Before Run or Check", - message=msg, - icon=tkMessageBox.QUESTION, - type=tkMessageBox.OKCANCEL, - default=tkMessageBox.OK, - master=self.editwin.text) - return mb.show() + confirm = tkMessageBox.askokcancel(title="Save Before Run or Check", + message=msg, + default=tkMessageBox.OK, + master=self.editwin.text) + return confirm def errorbox(self, title, message): # XXX This should really be a function of EditorWindow... 
diff --git a/lib-python/2.7/idlelib/config-keys.def b/lib-python/2.7/idlelib/config-keys.def --- a/lib-python/2.7/idlelib/config-keys.def +++ b/lib-python/2.7/idlelib/config-keys.def @@ -176,7 +176,7 @@ redo = close-window = restart-shell = -save-window-as-file = +save-window-as-file = close-all-windows = view-restart = tabify-region = @@ -208,7 +208,7 @@ open-module = find-selection = python-context-help = -save-copy-of-window-as-file = +save-copy-of-window-as-file = open-window-from-file = python-docs = diff --git a/lib-python/2.7/idlelib/extend.txt b/lib-python/2.7/idlelib/extend.txt --- a/lib-python/2.7/idlelib/extend.txt +++ b/lib-python/2.7/idlelib/extend.txt @@ -18,7 +18,7 @@ An IDLE extension class is instantiated with a single argument, `editwin', an EditorWindow instance. The extension cannot assume much -about this argument, but it is guarateed to have the following instance +about this argument, but it is guaranteed to have the following instance variables: text a Text instance (a widget) diff --git a/lib-python/2.7/idlelib/idle.bat b/lib-python/2.7/idlelib/idle.bat --- a/lib-python/2.7/idlelib/idle.bat +++ b/lib-python/2.7/idlelib/idle.bat @@ -1,4 +1,4 @@ @echo off rem Start IDLE using the appropriate Python interpreter set CURRDIR=%~dp0 -start "%CURRDIR%..\..\pythonw.exe" "%CURRDIR%idle.pyw" %1 %2 %3 %4 %5 %6 %7 %8 %9 +start "IDLE" "%CURRDIR%..\..\pythonw.exe" "%CURRDIR%idle.pyw" %1 %2 %3 %4 %5 %6 %7 %8 %9 diff --git a/lib-python/2.7/idlelib/idlever.py b/lib-python/2.7/idlelib/idlever.py --- a/lib-python/2.7/idlelib/idlever.py +++ b/lib-python/2.7/idlelib/idlever.py @@ -1,1 +1,1 @@ -IDLE_VERSION = "2.7.1" +IDLE_VERSION = "2.7.2" diff --git a/lib-python/2.7/idlelib/macosxSupport.py b/lib-python/2.7/idlelib/macosxSupport.py --- a/lib-python/2.7/idlelib/macosxSupport.py +++ b/lib-python/2.7/idlelib/macosxSupport.py @@ -4,6 +4,7 @@ """ import sys import Tkinter +from os import path _appbundle = None @@ -19,10 +20,41 @@ _appbundle = (sys.platform == 'darwin' and '.app' in sys.executable) return _appbundle +_carbonaquatk = None + +def isCarbonAquaTk(root): + """ + Returns True if IDLE is using a Carbon Aqua Tk (instead of the + newer Cocoa Aqua Tk). + """ + global _carbonaquatk + if _carbonaquatk is None: + _carbonaquatk = (runningAsOSXApp() and + 'aqua' in root.tk.call('tk', 'windowingsystem') and + 'AppKit' not in root.tk.call('winfo', 'server', '.')) + return _carbonaquatk + +def tkVersionWarning(root): + """ + Returns a string warning message if the Tk version in use appears to + be one known to cause problems with IDLE. The Apple Cocoa-based Tk 8.5 + that was shipped with Mac OS X 10.6. + """ + + if (runningAsOSXApp() and + ('AppKit' in root.tk.call('winfo', 'server', '.')) and + (root.tk.call('info', 'patchlevel') == '8.5.7') ): + return (r"WARNING: The version of Tcl/Tk (8.5.7) in use may" + r" be unstable.\n" + r"Visit http://www.python.org/download/mac/tcltk/" + r" for current information.") + else: + return False + def addOpenEventSupport(root, flist): """ - This ensures that the application will respont to open AppleEvents, which - makes is feaseable to use IDLE as the default application for python files. + This ensures that the application will respond to open AppleEvents, which + makes is feasible to use IDLE as the default application for python files. 
""" def doOpenFile(*args): for fn in args: @@ -79,9 +111,6 @@ WindowList.add_windows_to_menu(menu) WindowList.register_callback(postwindowsmenu) - menudict['application'] = menu = Menu(menubar, name='apple') - menubar.add_cascade(label='IDLE', menu=menu) - def about_dialog(event=None): from idlelib import aboutDialog aboutDialog.AboutDialog(root, 'About IDLE') @@ -91,41 +120,45 @@ root.instance_dict = flist.inversedict configDialog.ConfigDialog(root, 'Settings') + def help_dialog(event=None): + from idlelib import textView + fn = path.join(path.abspath(path.dirname(__file__)), 'help.txt') + textView.view_file(root, 'Help', fn) root.bind('<>', about_dialog) root.bind('<>', config_dialog) + root.createcommand('::tk::mac::ShowPreferences', config_dialog) if flist: root.bind('<>', flist.close_all_callback) + # The binding above doesn't reliably work on all versions of Tk + # on MacOSX. Adding command definition below does seem to do the + # right thing for now. + root.createcommand('exit', flist.close_all_callback) - ###check if Tk version >= 8.4.14; if so, use hard-coded showprefs binding - tkversion = root.tk.eval('info patchlevel') - # Note: we cannot check if the string tkversion >= '8.4.14', because - # the string '8.4.7' is greater than the string '8.4.14'. - if tuple(map(int, tkversion.split('.'))) >= (8, 4, 14): - Bindings.menudefs[0] = ('application', [ + if isCarbonAquaTk(root): + # for Carbon AquaTk, replace the default Tk apple menu + menudict['application'] = menu = Menu(menubar, name='apple') + menubar.add_cascade(label='IDLE', menu=menu) + Bindings.menudefs.insert(0, + ('application', [ ('About IDLE', '<>'), - None, - ]) - root.createcommand('::tk::mac::ShowPreferences', config_dialog) + None, + ])) + tkversion = root.tk.eval('info patchlevel') + if tuple(map(int, tkversion.split('.'))) < (8, 4, 14): + # for earlier AquaTk versions, supply a Preferences menu item + Bindings.menudefs[0][1].append( + ('_Preferences....', '<>'), + ) else: - for mname, entrylist in Bindings.menudefs: - menu = menudict.get(mname) - if not menu: - continue - else: - for entry in entrylist: - if not entry: - menu.add_separator() - else: - label, eventname = entry - underline, label = prepstr(label) - accelerator = get_accelerator(Bindings.default_keydefs, - eventname) - def command(text=root, eventname=eventname): - text.event_generate(eventname) - menu.add_command(label=label, underline=underline, - command=command, accelerator=accelerator) + # assume Cocoa AquaTk + # replace default About dialog with About IDLE one + root.createcommand('tkAboutDialog', about_dialog) + # replace default "Help" item in Help menu + root.createcommand('::tk::mac::ShowHelp', help_dialog) + # remove redundant "IDLE Help" from menu + del Bindings.menudefs[-1][1][0] def setupApp(root, flist): """ diff --git a/lib-python/2.7/imaplib.py b/lib-python/2.7/imaplib.py --- a/lib-python/2.7/imaplib.py +++ b/lib-python/2.7/imaplib.py @@ -1158,28 +1158,17 @@ self.port = port self.sock = socket.create_connection((host, port)) self.sslobj = ssl.wrap_socket(self.sock, self.keyfile, self.certfile) + self.file = self.sslobj.makefile('rb') def read(self, size): """Read 'size' bytes from remote.""" - # sslobj.read() sometimes returns < size bytes - chunks = [] - read = 0 - while read < size: - data = self.sslobj.read(min(size-read, 16384)) - read += len(data) - chunks.append(data) - - return ''.join(chunks) + return self.file.read(size) def readline(self): """Read line from remote.""" - line = [] - while 1: - char = self.sslobj.read(1) - 
line.append(char) - if char in ("\n", ""): return ''.join(line) + return self.file.readline() def send(self, data): @@ -1195,6 +1184,7 @@ def shutdown(self): """Close I/O established in "open".""" + self.file.close() self.sock.close() @@ -1321,9 +1311,10 @@ 'Jul': 7, 'Aug': 8, 'Sep': 9, 'Oct': 10, 'Nov': 11, 'Dec': 12} def Internaldate2tuple(resp): - """Convert IMAP4 INTERNALDATE to UT. + """Parse an IMAP4 INTERNALDATE string. - Returns Python time module tuple. + Return corresponding local time. The return value is a + time.struct_time instance or None if the string has wrong format. """ mo = InternalDate.match(resp) @@ -1390,9 +1381,14 @@ def Time2Internaldate(date_time): - """Convert 'date_time' to IMAP4 INTERNALDATE representation. + """Convert date_time to IMAP4 INTERNALDATE representation. - Return string in form: '"DD-Mmm-YYYY HH:MM:SS +HHMM"' + Return string in form: '"DD-Mmm-YYYY HH:MM:SS +HHMM"'. The + date_time argument can be a number (int or float) representing + seconds since epoch (as returned by time.time()), a 9-tuple + representing local time (as returned by time.localtime()), or a + double-quoted string. In the last case, it is assumed to already + be in the correct format. """ if isinstance(date_time, (int, float)): diff --git a/lib-python/2.7/inspect.py b/lib-python/2.7/inspect.py --- a/lib-python/2.7/inspect.py +++ b/lib-python/2.7/inspect.py @@ -943,8 +943,14 @@ f_name, 'at most' if defaults else 'exactly', num_args, 'arguments' if num_args > 1 else 'argument', num_total)) elif num_args == 0 and num_total: - raise TypeError('%s() takes no arguments (%d given)' % - (f_name, num_total)) + if varkw: + if num_pos: + # XXX: We should use num_pos, but Python also uses num_total: + raise TypeError('%s() takes exactly 0 arguments ' + '(%d given)' % (f_name, num_total)) + else: + raise TypeError('%s() takes no arguments (%d given)' % + (f_name, num_total)) for arg in args: if isinstance(arg, str) and arg in named: if is_assigned(arg): diff --git a/lib-python/2.7/json/decoder.py b/lib-python/2.7/json/decoder.py --- a/lib-python/2.7/json/decoder.py +++ b/lib-python/2.7/json/decoder.py @@ -4,7 +4,7 @@ import sys import struct -from json.scanner import make_scanner +from json import scanner try: from _json import scanstring as c_scanstring except ImportError: @@ -161,6 +161,12 @@ nextchar = s[end:end + 1] # Trivial empty object if nextchar == '}': + if object_pairs_hook is not None: + result = object_pairs_hook(pairs) + return result, end + pairs = {} + if object_hook is not None: + pairs = object_hook(pairs) return pairs, end + 1 elif nextchar != '"': raise ValueError(errmsg("Expecting property name", s, end)) @@ -350,7 +356,7 @@ self.parse_object = JSONObject self.parse_array = JSONArray self.parse_string = scanstring - self.scan_once = make_scanner(self) + self.scan_once = scanner.make_scanner(self) def decode(self, s, _w=WHITESPACE.match): """Return the Python representation of ``s`` (a ``str`` or ``unicode`` diff --git a/lib-python/2.7/json/encoder.py b/lib-python/2.7/json/encoder.py --- a/lib-python/2.7/json/encoder.py +++ b/lib-python/2.7/json/encoder.py @@ -251,7 +251,7 @@ if (_one_shot and c_make_encoder is not None - and not self.indent and not self.sort_keys): + and self.indent is None and not self.sort_keys): _iterencode = c_make_encoder( markers, self.default, _encoder, self.indent, self.key_separator, self.item_separator, self.sort_keys, diff --git a/lib-python/2.7/json/tests/__init__.py b/lib-python/2.7/json/tests/__init__.py --- 
a/lib-python/2.7/json/tests/__init__.py +++ b/lib-python/2.7/json/tests/__init__.py @@ -1,7 +1,46 @@ import os import sys +import json +import doctest import unittest -import doctest + +from test import test_support + +# import json with and without accelerations +cjson = test_support.import_fresh_module('json', fresh=['_json']) +pyjson = test_support.import_fresh_module('json', blocked=['_json']) + +# create two base classes that will be used by the other tests +class PyTest(unittest.TestCase): + json = pyjson + loads = staticmethod(pyjson.loads) + dumps = staticmethod(pyjson.dumps) + + at unittest.skipUnless(cjson, 'requires _json') +class CTest(unittest.TestCase): + if cjson is not None: + json = cjson + loads = staticmethod(cjson.loads) + dumps = staticmethod(cjson.dumps) + +# test PyTest and CTest checking if the functions come from the right module +class TestPyTest(PyTest): + def test_pyjson(self): + self.assertEqual(self.json.scanner.make_scanner.__module__, + 'json.scanner') + self.assertEqual(self.json.decoder.scanstring.__module__, + 'json.decoder') + self.assertEqual(self.json.encoder.encode_basestring_ascii.__module__, + 'json.encoder') + +class TestCTest(CTest): + def test_cjson(self): + self.assertEqual(self.json.scanner.make_scanner.__module__, '_json') + self.assertEqual(self.json.decoder.scanstring.__module__, '_json') + self.assertEqual(self.json.encoder.c_make_encoder.__module__, '_json') + self.assertEqual(self.json.encoder.encode_basestring_ascii.__module__, + '_json') + here = os.path.dirname(__file__) @@ -17,12 +56,11 @@ return suite def additional_tests(): - import json - import json.encoder - import json.decoder suite = unittest.TestSuite() for mod in (json, json.encoder, json.decoder): suite.addTest(doctest.DocTestSuite(mod)) + suite.addTest(TestPyTest('test_pyjson')) + suite.addTest(TestCTest('test_cjson')) return suite def main(): diff --git a/lib-python/2.7/json/tests/test_check_circular.py b/lib-python/2.7/json/tests/test_check_circular.py --- a/lib-python/2.7/json/tests/test_check_circular.py +++ b/lib-python/2.7/json/tests/test_check_circular.py @@ -1,30 +1,34 @@ -from unittest import TestCase -import json +from json.tests import PyTest, CTest + def default_iterable(obj): return list(obj) -class TestCheckCircular(TestCase): +class TestCheckCircular(object): def test_circular_dict(self): dct = {} dct['a'] = dct - self.assertRaises(ValueError, json.dumps, dct) + self.assertRaises(ValueError, self.dumps, dct) def test_circular_list(self): lst = [] lst.append(lst) - self.assertRaises(ValueError, json.dumps, lst) + self.assertRaises(ValueError, self.dumps, lst) def test_circular_composite(self): dct2 = {} dct2['a'] = [] dct2['a'].append(dct2) - self.assertRaises(ValueError, json.dumps, dct2) + self.assertRaises(ValueError, self.dumps, dct2) def test_circular_default(self): - json.dumps([set()], default=default_iterable) - self.assertRaises(TypeError, json.dumps, [set()]) + self.dumps([set()], default=default_iterable) + self.assertRaises(TypeError, self.dumps, [set()]) def test_circular_off_default(self): - json.dumps([set()], default=default_iterable, check_circular=False) - self.assertRaises(TypeError, json.dumps, [set()], check_circular=False) + self.dumps([set()], default=default_iterable, check_circular=False) + self.assertRaises(TypeError, self.dumps, [set()], check_circular=False) + + +class TestPyCheckCircular(TestCheckCircular, PyTest): pass +class TestCCheckCircular(TestCheckCircular, CTest): pass diff --git a/lib-python/2.7/json/tests/test_decode.py 
b/lib-python/2.7/json/tests/test_decode.py --- a/lib-python/2.7/json/tests/test_decode.py +++ b/lib-python/2.7/json/tests/test_decode.py @@ -1,18 +1,17 @@ import decimal -from unittest import TestCase from StringIO import StringIO +from collections import OrderedDict +from json.tests import PyTest, CTest -import json -from collections import OrderedDict -class TestDecode(TestCase): +class TestDecode(object): def test_decimal(self): - rval = json.loads('1.1', parse_float=decimal.Decimal) + rval = self.loads('1.1', parse_float=decimal.Decimal) self.assertTrue(isinstance(rval, decimal.Decimal)) self.assertEqual(rval, decimal.Decimal('1.1')) def test_float(self): - rval = json.loads('1', parse_int=float) + rval = self.loads('1', parse_int=float) self.assertTrue(isinstance(rval, float)) self.assertEqual(rval, 1.0) @@ -20,22 +19,32 @@ # Several optimizations were made that skip over calls to # the whitespace regex, so this test is designed to try and # exercise the uncommon cases. The array cases are already covered. - rval = json.loads('{ "key" : "value" , "k":"v" }') + rval = self.loads('{ "key" : "value" , "k":"v" }') self.assertEqual(rval, {"key":"value", "k":"v"}) + def test_empty_objects(self): + self.assertEqual(self.loads('{}'), {}) + self.assertEqual(self.loads('[]'), []) + self.assertEqual(self.loads('""'), u"") + self.assertIsInstance(self.loads('""'), unicode) + def test_object_pairs_hook(self): s = '{"xkd":1, "kcw":2, "art":3, "hxm":4, "qrt":5, "pad":6, "hoy":7}' p = [("xkd", 1), ("kcw", 2), ("art", 3), ("hxm", 4), ("qrt", 5), ("pad", 6), ("hoy", 7)] - self.assertEqual(json.loads(s), eval(s)) - self.assertEqual(json.loads(s, object_pairs_hook=lambda x: x), p) - self.assertEqual(json.load(StringIO(s), - object_pairs_hook=lambda x: x), p) - od = json.loads(s, object_pairs_hook=OrderedDict) + self.assertEqual(self.loads(s), eval(s)) + self.assertEqual(self.loads(s, object_pairs_hook=lambda x: x), p) + self.assertEqual(self.json.load(StringIO(s), + object_pairs_hook=lambda x: x), p) + od = self.loads(s, object_pairs_hook=OrderedDict) self.assertEqual(od, OrderedDict(p)) self.assertEqual(type(od), OrderedDict) # the object_pairs_hook takes priority over the object_hook - self.assertEqual(json.loads(s, + self.assertEqual(self.loads(s, object_pairs_hook=OrderedDict, object_hook=lambda x: None), OrderedDict(p)) + + +class TestPyDecode(TestDecode, PyTest): pass +class TestCDecode(TestDecode, CTest): pass diff --git a/lib-python/2.7/json/tests/test_default.py b/lib-python/2.7/json/tests/test_default.py --- a/lib-python/2.7/json/tests/test_default.py +++ b/lib-python/2.7/json/tests/test_default.py @@ -1,9 +1,12 @@ -from unittest import TestCase +from json.tests import PyTest, CTest -import json -class TestDefault(TestCase): +class TestDefault(object): def test_default(self): self.assertEqual( - json.dumps(type, default=repr), - json.dumps(repr(type))) + self.dumps(type, default=repr), + self.dumps(repr(type))) + + +class TestPyDefault(TestDefault, PyTest): pass +class TestCDefault(TestDefault, CTest): pass diff --git a/lib-python/2.7/json/tests/test_dump.py b/lib-python/2.7/json/tests/test_dump.py --- a/lib-python/2.7/json/tests/test_dump.py +++ b/lib-python/2.7/json/tests/test_dump.py @@ -1,21 +1,23 @@ -from unittest import TestCase from cStringIO import StringIO +from json.tests import PyTest, CTest -import json -class TestDump(TestCase): +class TestDump(object): def test_dump(self): sio = StringIO() - json.dump({}, sio) + self.json.dump({}, sio) self.assertEqual(sio.getvalue(), '{}') def 
test_dumps(self): - self.assertEqual(json.dumps({}), '{}') + self.assertEqual(self.dumps({}), '{}') def test_encode_truefalse(self): - self.assertEqual(json.dumps( + self.assertEqual(self.dumps( {True: False, False: True}, sort_keys=True), '{"false": true, "true": false}') - self.assertEqual(json.dumps( + self.assertEqual(self.dumps( {2: 3.0, 4.0: 5L, False: 1, 6L: True}, sort_keys=True), '{"false": 1, "2": 3.0, "4.0": 5, "6": true}') + +class TestPyDump(TestDump, PyTest): pass +class TestCDump(TestDump, CTest): pass diff --git a/lib-python/2.7/json/tests/test_encode_basestring_ascii.py b/lib-python/2.7/json/tests/test_encode_basestring_ascii.py --- a/lib-python/2.7/json/tests/test_encode_basestring_ascii.py +++ b/lib-python/2.7/json/tests/test_encode_basestring_ascii.py @@ -1,8 +1,6 @@ -from unittest import TestCase +from collections import OrderedDict +from json.tests import PyTest, CTest -import json.encoder -from json import dumps -from collections import OrderedDict CASES = [ (u'/\\"\ucafe\ubabe\uab98\ufcde\ubcda\uef4a\x08\x0c\n\r\t`1~!@#$%^&*()_+-=[]{}|;:\',./<>?', '"/\\\\\\"\\ucafe\\ubabe\\uab98\\ufcde\\ubcda\\uef4a\\b\\f\\n\\r\\t`1~!@#$%^&*()_+-=[]{}|;:\',./<>?"'), @@ -23,19 +21,11 @@ (u'\u0123\u4567\u89ab\ucdef\uabcd\uef4a', '"\\u0123\\u4567\\u89ab\\ucdef\\uabcd\\uef4a"'), ] -class TestEncodeBaseStringAscii(TestCase): - def test_py_encode_basestring_ascii(self): - self._test_encode_basestring_ascii(json.encoder.py_encode_basestring_ascii) - - def test_c_encode_basestring_ascii(self): - if not json.encoder.c_encode_basestring_ascii: - return - self._test_encode_basestring_ascii(json.encoder.c_encode_basestring_ascii) - - def _test_encode_basestring_ascii(self, encode_basestring_ascii): - fname = encode_basestring_ascii.__name__ +class TestEncodeBasestringAscii(object): + def test_encode_basestring_ascii(self): + fname = self.json.encoder.encode_basestring_ascii.__name__ for input_string, expect in CASES: - result = encode_basestring_ascii(input_string) + result = self.json.encoder.encode_basestring_ascii(input_string) self.assertEqual(result, expect, '{0!r} != {1!r} for {2}({3!r})'.format( result, expect, fname, input_string)) @@ -43,5 +33,9 @@ def test_ordered_dict(self): # See issue 6105 items = [('one', 1), ('two', 2), ('three', 3), ('four', 4), ('five', 5)] - s = json.dumps(OrderedDict(items)) + s = self.dumps(OrderedDict(items)) self.assertEqual(s, '{"one": 1, "two": 2, "three": 3, "four": 4, "five": 5}') + + +class TestPyEncodeBasestringAscii(TestEncodeBasestringAscii, PyTest): pass +class TestCEncodeBasestringAscii(TestEncodeBasestringAscii, CTest): pass diff --git a/lib-python/2.7/json/tests/test_fail.py b/lib-python/2.7/json/tests/test_fail.py --- a/lib-python/2.7/json/tests/test_fail.py +++ b/lib-python/2.7/json/tests/test_fail.py @@ -1,6 +1,4 @@ -from unittest import TestCase - -import json +from json.tests import PyTest, CTest # Fri Dec 30 18:57:26 2005 JSONDOCS = [ @@ -61,15 +59,15 @@ 18: "spec doesn't specify any nesting limitations", } -class TestFail(TestCase): +class TestFail(object): def test_failures(self): for idx, doc in enumerate(JSONDOCS): idx = idx + 1 if idx in SKIPS: - json.loads(doc) + self.loads(doc) continue try: - json.loads(doc) + self.loads(doc) except ValueError: pass else: @@ -79,7 +77,11 @@ data = {'a' : 1, (1, 2) : 2} #This is for c encoder - self.assertRaises(TypeError, json.dumps, data) + self.assertRaises(TypeError, self.dumps, data) #This is for python encoder - self.assertRaises(TypeError, json.dumps, data, indent=True) + 
self.assertRaises(TypeError, self.dumps, data, indent=True) + + +class TestPyFail(TestFail, PyTest): pass +class TestCFail(TestFail, CTest): pass diff --git a/lib-python/2.7/json/tests/test_float.py b/lib-python/2.7/json/tests/test_float.py --- a/lib-python/2.7/json/tests/test_float.py +++ b/lib-python/2.7/json/tests/test_float.py @@ -1,19 +1,22 @@ import math -from unittest import TestCase +from json.tests import PyTest, CTest -import json -class TestFloat(TestCase): +class TestFloat(object): def test_floats(self): for num in [1617161771.7650001, math.pi, math.pi**100, math.pi**-100, 3.1]: - self.assertEqual(float(json.dumps(num)), num) - self.assertEqual(json.loads(json.dumps(num)), num) - self.assertEqual(json.loads(unicode(json.dumps(num))), num) + self.assertEqual(float(self.dumps(num)), num) + self.assertEqual(self.loads(self.dumps(num)), num) + self.assertEqual(self.loads(unicode(self.dumps(num))), num) def test_ints(self): for num in [1, 1L, 1<<32, 1<<64]: - self.assertEqual(json.dumps(num), str(num)) - self.assertEqual(int(json.dumps(num)), num) - self.assertEqual(json.loads(json.dumps(num)), num) - self.assertEqual(json.loads(unicode(json.dumps(num))), num) + self.assertEqual(self.dumps(num), str(num)) + self.assertEqual(int(self.dumps(num)), num) + self.assertEqual(self.loads(self.dumps(num)), num) + self.assertEqual(self.loads(unicode(self.dumps(num))), num) + + +class TestPyFloat(TestFloat, PyTest): pass +class TestCFloat(TestFloat, CTest): pass diff --git a/lib-python/2.7/json/tests/test_indent.py b/lib-python/2.7/json/tests/test_indent.py --- a/lib-python/2.7/json/tests/test_indent.py +++ b/lib-python/2.7/json/tests/test_indent.py @@ -1,9 +1,9 @@ -from unittest import TestCase +import textwrap +from StringIO import StringIO +from json.tests import PyTest, CTest -import json -import textwrap -class TestIndent(TestCase): +class TestIndent(object): def test_indent(self): h = [['blorpie'], ['whoops'], [], 'd-shtaeou', 'd-nthiouh', 'i-vhbjkhnth', {'nifty': 87}, {'field': 'yes', 'morefield': False} ] @@ -30,12 +30,31 @@ ]""") - d1 = json.dumps(h) - d2 = json.dumps(h, indent=2, sort_keys=True, separators=(',', ': ')) + d1 = self.dumps(h) + d2 = self.dumps(h, indent=2, sort_keys=True, separators=(',', ': ')) - h1 = json.loads(d1) - h2 = json.loads(d2) + h1 = self.loads(d1) + h2 = self.loads(d2) self.assertEqual(h1, h) self.assertEqual(h2, h) self.assertEqual(d2, expect) + + def test_indent0(self): + h = {3: 1} + def check(indent, expected): + d1 = self.dumps(h, indent=indent) + self.assertEqual(d1, expected) + + sio = StringIO() + self.json.dump(h, sio, indent=indent) + self.assertEqual(sio.getvalue(), expected) + + # indent=0 should emit newlines + check(0, '{\n"3": 1\n}') + # indent=None is more compact + check(None, '{"3": 1}') + + +class TestPyIndent(TestIndent, PyTest): pass +class TestCIndent(TestIndent, CTest): pass diff --git a/lib-python/2.7/json/tests/test_pass1.py b/lib-python/2.7/json/tests/test_pass1.py --- a/lib-python/2.7/json/tests/test_pass1.py +++ b/lib-python/2.7/json/tests/test_pass1.py @@ -1,6 +1,5 @@ -from unittest import TestCase +from json.tests import PyTest, CTest -import json # from http://json.org/JSON_checker/test/pass1.json JSON = r''' @@ -62,15 +61,19 @@ ,"rosebud"] ''' -class TestPass1(TestCase): +class TestPass1(object): def test_parse(self): # test in/out equivalence and parsing - res = json.loads(JSON) - out = json.dumps(res) - self.assertEqual(res, json.loads(out)) + res = self.loads(JSON) + out = self.dumps(res) + self.assertEqual(res, 
self.loads(out)) try: - json.dumps(res, allow_nan=False) + self.dumps(res, allow_nan=False) except ValueError: pass else: self.fail("23456789012E666 should be out of range") + + +class TestPyPass1(TestPass1, PyTest): pass +class TestCPass1(TestPass1, CTest): pass diff --git a/lib-python/2.7/json/tests/test_pass2.py b/lib-python/2.7/json/tests/test_pass2.py --- a/lib-python/2.7/json/tests/test_pass2.py +++ b/lib-python/2.7/json/tests/test_pass2.py @@ -1,14 +1,18 @@ -from unittest import TestCase -import json +from json.tests import PyTest, CTest + # from http://json.org/JSON_checker/test/pass2.json JSON = r''' [[[[[[[[[[[[[[[[[[["Not too deep"]]]]]]]]]]]]]]]]]]] ''' -class TestPass2(TestCase): +class TestPass2(object): def test_parse(self): # test in/out equivalence and parsing - res = json.loads(JSON) - out = json.dumps(res) - self.assertEqual(res, json.loads(out)) + res = self.loads(JSON) + out = self.dumps(res) + self.assertEqual(res, self.loads(out)) + + +class TestPyPass2(TestPass2, PyTest): pass +class TestCPass2(TestPass2, CTest): pass diff --git a/lib-python/2.7/json/tests/test_pass3.py b/lib-python/2.7/json/tests/test_pass3.py --- a/lib-python/2.7/json/tests/test_pass3.py +++ b/lib-python/2.7/json/tests/test_pass3.py @@ -1,6 +1,5 @@ -from unittest import TestCase +from json.tests import PyTest, CTest -import json # from http://json.org/JSON_checker/test/pass3.json JSON = r''' @@ -12,9 +11,14 @@ } ''' -class TestPass3(TestCase): + +class TestPass3(object): def test_parse(self): # test in/out equivalence and parsing - res = json.loads(JSON) - out = json.dumps(res) - self.assertEqual(res, json.loads(out)) + res = self.loads(JSON) + out = self.dumps(res) + self.assertEqual(res, self.loads(out)) + + +class TestPyPass3(TestPass3, PyTest): pass +class TestCPass3(TestPass3, CTest): pass diff --git a/lib-python/2.7/json/tests/test_recursion.py b/lib-python/2.7/json/tests/test_recursion.py --- a/lib-python/2.7/json/tests/test_recursion.py +++ b/lib-python/2.7/json/tests/test_recursion.py @@ -1,28 +1,16 @@ -from unittest import TestCase +from json.tests import PyTest, CTest -import json class JSONTestObject: pass -class RecursiveJSONEncoder(json.JSONEncoder): - recurse = False - def default(self, o): - if o is JSONTestObject: - if self.recurse: - return [JSONTestObject] - else: - return 'JSONTestObject' - return json.JSONEncoder.default(o) - - -class TestRecursion(TestCase): +class TestRecursion(object): def test_listrecursion(self): x = [] x.append(x) try: - json.dumps(x) + self.dumps(x) except ValueError: pass else: @@ -31,7 +19,7 @@ y = [x] x.append(y) try: - json.dumps(x) + self.dumps(x) except ValueError: pass else: @@ -39,13 +27,13 @@ y = [] x = [y, y] # ensure that the marker is cleared - json.dumps(x) + self.dumps(x) def test_dictrecursion(self): x = {} x["test"] = x try: - json.dumps(x) + self.dumps(x) except ValueError: pass else: @@ -53,9 +41,19 @@ x = {} y = {"a": x, "b": x} # ensure that the marker is cleared - json.dumps(x) + self.dumps(x) def test_defaultrecursion(self): + class RecursiveJSONEncoder(self.json.JSONEncoder): + recurse = False + def default(self, o): + if o is JSONTestObject: + if self.recurse: + return [JSONTestObject] + else: + return 'JSONTestObject' + return pyjson.JSONEncoder.default(o) + enc = RecursiveJSONEncoder() self.assertEqual(enc.encode(JSONTestObject), '"JSONTestObject"') enc.recurse = True @@ -65,3 +63,46 @@ pass else: self.fail("didn't raise ValueError on default recursion") + + + def test_highly_nested_objects_decoding(self): + # test that loading 
highly-nested objects doesn't segfault when C + # accelerations are used. See #12017 + # str + with self.assertRaises(RuntimeError): + self.loads('{"a":' * 100000 + '1' + '}' * 100000) + with self.assertRaises(RuntimeError): + self.loads('{"a":' * 100000 + '[1]' + '}' * 100000) + with self.assertRaises(RuntimeError): + self.loads('[' * 100000 + '1' + ']' * 100000) + # unicode + with self.assertRaises(RuntimeError): + self.loads(u'{"a":' * 100000 + u'1' + u'}' * 100000) + with self.assertRaises(RuntimeError): + self.loads(u'{"a":' * 100000 + u'[1]' + u'}' * 100000) + with self.assertRaises(RuntimeError): + self.loads(u'[' * 100000 + u'1' + u']' * 100000) + + def test_highly_nested_objects_encoding(self): + # See #12051 + l, d = [], {} + for x in xrange(100000): + l, d = [l], {'k':d} + with self.assertRaises(RuntimeError): + self.dumps(l) + with self.assertRaises(RuntimeError): + self.dumps(d) + + def test_endless_recursion(self): + # See #12051 + class EndlessJSONEncoder(self.json.JSONEncoder): + def default(self, o): + """If check_circular is False, this will keep adding another list.""" + return [o] + + with self.assertRaises(RuntimeError): + EndlessJSONEncoder(check_circular=False).encode(5j) + + +class TestPyRecursion(TestRecursion, PyTest): pass +class TestCRecursion(TestRecursion, CTest): pass diff --git a/lib-python/2.7/json/tests/test_scanstring.py b/lib-python/2.7/json/tests/test_scanstring.py --- a/lib-python/2.7/json/tests/test_scanstring.py +++ b/lib-python/2.7/json/tests/test_scanstring.py @@ -1,18 +1,10 @@ import sys -import decimal -from unittest import TestCase +from json.tests import PyTest, CTest -import json -import json.decoder -class TestScanString(TestCase): - def test_py_scanstring(self): - self._test_scanstring(json.decoder.py_scanstring) - - def test_c_scanstring(self): - self._test_scanstring(json.decoder.c_scanstring) - - def _test_scanstring(self, scanstring): +class TestScanstring(object): + def test_scanstring(self): + scanstring = self.json.decoder.scanstring self.assertEqual( scanstring('"z\\ud834\\udd20x"', 1, None, True), (u'z\U0001d120x', 16)) @@ -103,10 +95,15 @@ (u'Bad value', 12)) def test_issue3623(self): - self.assertRaises(ValueError, json.decoder.scanstring, b"xxx", 1, + self.assertRaises(ValueError, self.json.decoder.scanstring, b"xxx", 1, "xxx") self.assertRaises(UnicodeDecodeError, - json.encoder.encode_basestring_ascii, b"xx\xff") + self.json.encoder.encode_basestring_ascii, b"xx\xff") def test_overflow(self): - self.assertRaises(OverflowError, json.decoder.scanstring, b"xxx", sys.maxsize+1) + with self.assertRaises(OverflowError): + self.json.decoder.scanstring(b"xxx", sys.maxsize+1) + + +class TestPyScanstring(TestScanstring, PyTest): pass +class TestCScanstring(TestScanstring, CTest): pass diff --git a/lib-python/2.7/json/tests/test_separators.py b/lib-python/2.7/json/tests/test_separators.py --- a/lib-python/2.7/json/tests/test_separators.py +++ b/lib-python/2.7/json/tests/test_separators.py @@ -1,10 +1,8 @@ import textwrap -from unittest import TestCase +from json.tests import PyTest, CTest -import json - -class TestSeparators(TestCase): +class TestSeparators(object): def test_separators(self): h = [['blorpie'], ['whoops'], [], 'd-shtaeou', 'd-nthiouh', 'i-vhbjkhnth', {'nifty': 87}, {'field': 'yes', 'morefield': False} ] @@ -31,12 +29,16 @@ ]""") - d1 = json.dumps(h) - d2 = json.dumps(h, indent=2, sort_keys=True, separators=(' ,', ' : ')) + d1 = self.dumps(h) + d2 = self.dumps(h, indent=2, sort_keys=True, separators=(' ,', ' : ')) - h1 = 
json.loads(d1) - h2 = json.loads(d2) + h1 = self.loads(d1) + h2 = self.loads(d2) self.assertEqual(h1, h) self.assertEqual(h2, h) self.assertEqual(d2, expect) + + +class TestPySeparators(TestSeparators, PyTest): pass +class TestCSeparators(TestSeparators, CTest): pass diff --git a/lib-python/2.7/json/tests/test_speedups.py b/lib-python/2.7/json/tests/test_speedups.py --- a/lib-python/2.7/json/tests/test_speedups.py +++ b/lib-python/2.7/json/tests/test_speedups.py @@ -1,24 +1,23 @@ -import decimal -from unittest import TestCase +from json.tests import CTest -from json import decoder, encoder, scanner -class TestSpeedups(TestCase): +class TestSpeedups(CTest): def test_scanstring(self): - self.assertEqual(decoder.scanstring.__module__, "_json") - self.assertTrue(decoder.scanstring is decoder.c_scanstring) + self.assertEqual(self.json.decoder.scanstring.__module__, "_json") + self.assertIs(self.json.decoder.scanstring, self.json.decoder.c_scanstring) def test_encode_basestring_ascii(self): - self.assertEqual(encoder.encode_basestring_ascii.__module__, "_json") - self.assertTrue(encoder.encode_basestring_ascii is - encoder.c_encode_basestring_ascii) + self.assertEqual(self.json.encoder.encode_basestring_ascii.__module__, + "_json") + self.assertIs(self.json.encoder.encode_basestring_ascii, + self.json.encoder.c_encode_basestring_ascii) -class TestDecode(TestCase): +class TestDecode(CTest): def test_make_scanner(self): - self.assertRaises(AttributeError, scanner.c_make_scanner, 1) + self.assertRaises(AttributeError, self.json.scanner.c_make_scanner, 1) def test_make_encoder(self): - self.assertRaises(TypeError, encoder.c_make_encoder, + self.assertRaises(TypeError, self.json.encoder.c_make_encoder, None, "\xCD\x7D\x3D\x4E\x12\x4C\xF9\x79\xD7\x52\xBA\x82\xF2\x27\x4A\x7D\xA0\xCA\x75", None) diff --git a/lib-python/2.7/json/tests/test_unicode.py b/lib-python/2.7/json/tests/test_unicode.py --- a/lib-python/2.7/json/tests/test_unicode.py +++ b/lib-python/2.7/json/tests/test_unicode.py @@ -1,11 +1,10 @@ -from unittest import TestCase +from collections import OrderedDict +from json.tests import PyTest, CTest -import json -from collections import OrderedDict -class TestUnicode(TestCase): +class TestUnicode(object): def test_encoding1(self): - encoder = json.JSONEncoder(encoding='utf-8') + encoder = self.json.JSONEncoder(encoding='utf-8') u = u'\N{GREEK SMALL LETTER ALPHA}\N{GREEK CAPITAL LETTER OMEGA}' s = u.encode('utf-8') ju = encoder.encode(u) @@ -15,68 +14,72 @@ def test_encoding2(self): u = u'\N{GREEK SMALL LETTER ALPHA}\N{GREEK CAPITAL LETTER OMEGA}' s = u.encode('utf-8') - ju = json.dumps(u, encoding='utf-8') - js = json.dumps(s, encoding='utf-8') + ju = self.dumps(u, encoding='utf-8') + js = self.dumps(s, encoding='utf-8') self.assertEqual(ju, js) def test_encoding3(self): u = u'\N{GREEK SMALL LETTER ALPHA}\N{GREEK CAPITAL LETTER OMEGA}' - j = json.dumps(u) + j = self.dumps(u) self.assertEqual(j, '"\\u03b1\\u03a9"') def test_encoding4(self): u = u'\N{GREEK SMALL LETTER ALPHA}\N{GREEK CAPITAL LETTER OMEGA}' - j = json.dumps([u]) + j = self.dumps([u]) self.assertEqual(j, '["\\u03b1\\u03a9"]') def test_encoding5(self): u = u'\N{GREEK SMALL LETTER ALPHA}\N{GREEK CAPITAL LETTER OMEGA}' - j = json.dumps(u, ensure_ascii=False) + j = self.dumps(u, ensure_ascii=False) self.assertEqual(j, u'"{0}"'.format(u)) def test_encoding6(self): u = u'\N{GREEK SMALL LETTER ALPHA}\N{GREEK CAPITAL LETTER OMEGA}' - j = json.dumps([u], ensure_ascii=False) + j = self.dumps([u], ensure_ascii=False) self.assertEqual(j, 
u'["{0}"]'.format(u)) def test_big_unicode_encode(self): u = u'\U0001d120' - self.assertEqual(json.dumps(u), '"\\ud834\\udd20"') - self.assertEqual(json.dumps(u, ensure_ascii=False), u'"\U0001d120"') + self.assertEqual(self.dumps(u), '"\\ud834\\udd20"') + self.assertEqual(self.dumps(u, ensure_ascii=False), u'"\U0001d120"') def test_big_unicode_decode(self): u = u'z\U0001d120x' - self.assertEqual(json.loads('"' + u + '"'), u) - self.assertEqual(json.loads('"z\\ud834\\udd20x"'), u) + self.assertEqual(self.loads('"' + u + '"'), u) + self.assertEqual(self.loads('"z\\ud834\\udd20x"'), u) def test_unicode_decode(self): for i in range(0, 0xd7ff): u = unichr(i) s = '"\\u{0:04x}"'.format(i) - self.assertEqual(json.loads(s), u) + self.assertEqual(self.loads(s), u) def test_object_pairs_hook_with_unicode(self): s = u'{"xkd":1, "kcw":2, "art":3, "hxm":4, "qrt":5, "pad":6, "hoy":7}' p = [(u"xkd", 1), (u"kcw", 2), (u"art", 3), (u"hxm", 4), (u"qrt", 5), (u"pad", 6), (u"hoy", 7)] - self.assertEqual(json.loads(s), eval(s)) - self.assertEqual(json.loads(s, object_pairs_hook = lambda x: x), p) - od = json.loads(s, object_pairs_hook = OrderedDict) + self.assertEqual(self.loads(s), eval(s)) + self.assertEqual(self.loads(s, object_pairs_hook = lambda x: x), p) + od = self.loads(s, object_pairs_hook = OrderedDict) self.assertEqual(od, OrderedDict(p)) self.assertEqual(type(od), OrderedDict) # the object_pairs_hook takes priority over the object_hook - self.assertEqual(json.loads(s, + self.assertEqual(self.loads(s, object_pairs_hook = OrderedDict, object_hook = lambda x: None), OrderedDict(p)) def test_default_encoding(self): - self.assertEqual(json.loads(u'{"a": "\xe9"}'.encode('utf-8')), + self.assertEqual(self.loads(u'{"a": "\xe9"}'.encode('utf-8')), {'a': u'\xe9'}) def test_unicode_preservation(self): - self.assertEqual(type(json.loads(u'""')), unicode) - self.assertEqual(type(json.loads(u'"a"')), unicode) - self.assertEqual(type(json.loads(u'["a"]')[0]), unicode) + self.assertEqual(type(self.loads(u'""')), unicode) + self.assertEqual(type(self.loads(u'"a"')), unicode) + self.assertEqual(type(self.loads(u'["a"]')[0]), unicode) # Issue 10038. - self.assertEqual(type(json.loads('"foo"')), unicode) + self.assertEqual(type(self.loads('"foo"')), unicode) + + +class TestPyUnicode(TestUnicode, PyTest): pass +class TestCUnicode(TestUnicode, CTest): pass diff --git a/lib-python/2.7/lib-tk/Tix.py b/lib-python/2.7/lib-tk/Tix.py --- a/lib-python/2.7/lib-tk/Tix.py +++ b/lib-python/2.7/lib-tk/Tix.py @@ -163,7 +163,7 @@ extensions) exist, then the image type is chosen according to the depth of the X display: xbm images are chosen on monochrome displays and color images are chosen on color displays. By using - tix_ getimage, you can advoid hard coding the pathnames of the + tix_ getimage, you can avoid hard coding the pathnames of the image files in your application. When successful, this command returns the name of the newly created image, which can be used to configure the -image option of the Tk and Tix widgets. @@ -171,7 +171,7 @@ return self.tk.call('tix', 'getimage', name) def tix_option_get(self, name): - """Gets the options manitained by the Tix + """Gets the options maintained by the Tix scheme mechanism. Available options include: active_bg active_fg bg @@ -576,7 +576,7 @@ class ComboBox(TixWidget): """ComboBox - an Entry field with a dropdown menu. 
The user can select a - choice by either typing in the entry subwdget or selecting from the + choice by either typing in the entry subwidget or selecting from the listbox subwidget. Subwidget Class @@ -869,7 +869,7 @@ """HList - Hierarchy display widget can be used to display any data that have a hierarchical structure, for example, file system directory trees. The list entries are indented and connected by branch lines - according to their places in the hierachy. + according to their places in the hierarchy. Subwidgets - None""" @@ -1520,7 +1520,7 @@ self.tk.call(self._w, 'selection', 'set', first, last) class Tree(TixWidget): - """Tree - The tixTree widget can be used to display hierachical + """Tree - The tixTree widget can be used to display hierarchical data in a tree form. The user can adjust the view of the tree by opening or closing parts of the tree.""" diff --git a/lib-python/2.7/lib-tk/Tkinter.py b/lib-python/2.7/lib-tk/Tkinter.py --- a/lib-python/2.7/lib-tk/Tkinter.py +++ b/lib-python/2.7/lib-tk/Tkinter.py @@ -1660,7 +1660,7 @@ class Tk(Misc, Wm): """Toplevel widget of Tk which represents mostly the main window - of an appliation. It has an associated Tcl interpreter.""" + of an application. It has an associated Tcl interpreter.""" _w = '.' def __init__(self, screenName=None, baseName=None, className='Tk', useTk=1, sync=0, use=None): diff --git a/lib-python/2.7/lib-tk/test/test_ttk/test_functions.py b/lib-python/2.7/lib-tk/test/test_ttk/test_functions.py --- a/lib-python/2.7/lib-tk/test/test_ttk/test_functions.py +++ b/lib-python/2.7/lib-tk/test/test_ttk/test_functions.py @@ -136,7 +136,7 @@ # minimum acceptable for image type self.assertEqual(ttk._format_elemcreate('image', False, 'test'), ("test ", ())) - # specifiyng a state spec + # specifying a state spec self.assertEqual(ttk._format_elemcreate('image', False, 'test', ('', 'a')), ("test {} a", ())) # state spec with multiple states diff --git a/lib-python/2.7/lib-tk/ttk.py b/lib-python/2.7/lib-tk/ttk.py --- a/lib-python/2.7/lib-tk/ttk.py +++ b/lib-python/2.7/lib-tk/ttk.py @@ -707,7 +707,7 @@ textvariable, values, width """ # The "values" option may need special formatting, so leave to - # _format_optdict the responsability to format it + # _format_optdict the responsibility to format it if "values" in kw: kw["values"] = _format_optdict({'v': kw["values"]})[1] @@ -993,7 +993,7 @@ pane is either an integer index or the name of a managed subwindow. If kw is not given, returns a dict of the pane option values. If option is specified then the value for that option is returned. - Otherwise, sets the options to the correspoding values.""" + Otherwise, sets the options to the corresponding values.""" if option is not None: kw[option] = None return _val_or_dict(kw, self.tk.call, self._w, "pane", pane) diff --git a/lib-python/2.7/lib-tk/turtle.py b/lib-python/2.7/lib-tk/turtle.py --- a/lib-python/2.7/lib-tk/turtle.py +++ b/lib-python/2.7/lib-tk/turtle.py @@ -1385,7 +1385,7 @@ Optional argument: picname -- a string, name of a gif-file or "nopic". - If picname is a filename, set the corresponing image as background. + If picname is a filename, set the corresponding image as background. If picname is "nopic", delete backgroundimage, if present. If picname is None, return the filename of the current backgroundimage. 
@@ -1409,7 +1409,7 @@ Optional arguments: canvwidth -- positive integer, new width of canvas in pixels canvheight -- positive integer, new height of canvas in pixels - bg -- colorstring or color-tupel, new backgroundcolor + bg -- colorstring or color-tuple, new backgroundcolor If no arguments are given, return current (canvaswidth, canvasheight) Do not alter the drawing window. To observe hidden parts of @@ -3079,9 +3079,9 @@ fill="", width=ps) # Turtle now at position old, self._position = old - ## if undo is done during crating a polygon, the last vertex - ## will be deleted. if the polygon is entirel deleted, - ## creatigPoly will be set to False. + ## if undo is done during creating a polygon, the last vertex + ## will be deleted. if the polygon is entirely deleted, + ## creatingPoly will be set to False. ## Polygons created before the last one will not be affected by undo() if self._creatingPoly: if len(self._poly) > 0: @@ -3221,7 +3221,7 @@ def dot(self, size=None, *color): """Draw a dot with diameter size, using color. - Optional argumentS: + Optional arguments: size -- an integer >= 1 (if given) color -- a colorstring or a numeric color tuple @@ -3691,7 +3691,7 @@ class Turtle(RawTurtle): - """RawTurtle auto-crating (scrolled) canvas. + """RawTurtle auto-creating (scrolled) canvas. When a Turtle object is created or a function derived from some Turtle method is called a TurtleScreen object is automatically created. @@ -3731,7 +3731,7 @@ filename -- a string, used as filename default value is turtle_docstringdict - Has to be called explicitely, (not used by the turtle-graphics classes) + Has to be called explicitly, (not used by the turtle-graphics classes) The docstring dictionary will be written to the Python script .py It is intended to serve as a template for translation of the docstrings into different languages. 
diff --git a/lib-python/2.7/lib2to3/__main__.py b/lib-python/2.7/lib2to3/__main__.py new file mode 100644 --- /dev/null +++ b/lib-python/2.7/lib2to3/__main__.py @@ -0,0 +1,4 @@ +import sys +from .main import main + +sys.exit(main("lib2to3.fixes")) diff --git a/lib-python/2.7/lib2to3/fixes/fix_itertools.py b/lib-python/2.7/lib2to3/fixes/fix_itertools.py --- a/lib-python/2.7/lib2to3/fixes/fix_itertools.py +++ b/lib-python/2.7/lib2to3/fixes/fix_itertools.py @@ -13,7 +13,7 @@ class FixItertools(fixer_base.BaseFix): BM_compatible = True - it_funcs = "('imap'|'ifilter'|'izip'|'ifilterfalse')" + it_funcs = "('imap'|'ifilter'|'izip'|'izip_longest'|'ifilterfalse')" PATTERN = """ power< it='itertools' trailer< @@ -28,7 +28,8 @@ def transform(self, node, results): prefix = None func = results['func'][0] - if 'it' in results and func.value != u'ifilterfalse': + if ('it' in results and + func.value not in (u'ifilterfalse', u'izip_longest')): dot, it = (results['dot'], results['it']) # Remove the 'itertools' prefix = it.prefix diff --git a/lib-python/2.7/lib2to3/fixes/fix_itertools_imports.py b/lib-python/2.7/lib2to3/fixes/fix_itertools_imports.py --- a/lib-python/2.7/lib2to3/fixes/fix_itertools_imports.py +++ b/lib-python/2.7/lib2to3/fixes/fix_itertools_imports.py @@ -31,9 +31,10 @@ if member_name in (u'imap', u'izip', u'ifilter'): child.value = None child.remove() - elif member_name == u'ifilterfalse': + elif member_name in (u'ifilterfalse', u'izip_longest'): node.changed() - name_node.value = u'filterfalse' + name_node.value = (u'filterfalse' if member_name[1] == u'f' + else u'zip_longest') # Make sure the import statement is still sane children = imports.children[:] or [imports] diff --git a/lib-python/2.7/lib2to3/fixes/fix_metaclass.py b/lib-python/2.7/lib2to3/fixes/fix_metaclass.py --- a/lib-python/2.7/lib2to3/fixes/fix_metaclass.py +++ b/lib-python/2.7/lib2to3/fixes/fix_metaclass.py @@ -48,7 +48,7 @@ """ for node in cls_node.children: if node.type == syms.suite: - # already in the prefered format, do nothing + # already in the preferred format, do nothing return # !%@#! 
oneliners have no suite node, we have to fake one up diff --git a/lib-python/2.7/lib2to3/fixes/fix_urllib.py b/lib-python/2.7/lib2to3/fixes/fix_urllib.py --- a/lib-python/2.7/lib2to3/fixes/fix_urllib.py +++ b/lib-python/2.7/lib2to3/fixes/fix_urllib.py @@ -12,7 +12,7 @@ MAPPING = {"urllib": [ ("urllib.request", - ["URLOpener", "FancyURLOpener", "urlretrieve", + ["URLopener", "FancyURLopener", "urlretrieve", "_urlopener", "urlopen", "urlcleanup", "pathname2url", "url2pathname"]), ("urllib.parse", diff --git a/lib-python/2.7/lib2to3/main.py b/lib-python/2.7/lib2to3/main.py --- a/lib-python/2.7/lib2to3/main.py +++ b/lib-python/2.7/lib2to3/main.py @@ -101,7 +101,7 @@ parser.add_option("-j", "--processes", action="store", default=1, type="int", help="Run 2to3 concurrently") parser.add_option("-x", "--nofix", action="append", default=[], - help="Prevent a fixer from being run.") + help="Prevent a transformation from being run") parser.add_option("-l", "--list-fixes", action="store_true", help="List available transformations") parser.add_option("-p", "--print-function", action="store_true", @@ -113,7 +113,7 @@ parser.add_option("-w", "--write", action="store_true", help="Write back modified files") parser.add_option("-n", "--nobackups", action="store_true", default=False, - help="Don't write backups for modified files.") + help="Don't write backups for modified files") # Parse command line arguments refactor_stdin = False diff --git a/lib-python/2.7/lib2to3/patcomp.py b/lib-python/2.7/lib2to3/patcomp.py --- a/lib-python/2.7/lib2to3/patcomp.py +++ b/lib-python/2.7/lib2to3/patcomp.py @@ -12,6 +12,7 @@ # Python imports import os +import StringIO # Fairly local imports from .pgen2 import driver, literals, token, tokenize, parse, grammar @@ -32,7 +33,7 @@ def tokenize_wrapper(input): """Tokenizes a string suppressing significant whitespace.""" skip = set((token.NEWLINE, token.INDENT, token.DEDENT)) - tokens = tokenize.generate_tokens(driver.generate_lines(input).next) + tokens = tokenize.generate_tokens(StringIO.StringIO(input).readline) for quintuple in tokens: type, value, start, end, line_text = quintuple if type not in skip: diff --git a/lib-python/2.7/lib2to3/pgen2/conv.py b/lib-python/2.7/lib2to3/pgen2/conv.py --- a/lib-python/2.7/lib2to3/pgen2/conv.py +++ b/lib-python/2.7/lib2to3/pgen2/conv.py @@ -51,7 +51,7 @@ self.finish_off() def parse_graminit_h(self, filename): - """Parse the .h file writen by pgen. (Internal) + """Parse the .h file written by pgen. (Internal) This file is a sequence of #define statements defining the nonterminals of the grammar as numbers. We build two tables @@ -82,7 +82,7 @@ return True def parse_graminit_c(self, filename): - """Parse the .c file writen by pgen. (Internal) + """Parse the .c file written by pgen. (Internal) The file looks as follows. 
The first two lines are always this: diff --git a/lib-python/2.7/lib2to3/pgen2/driver.py b/lib-python/2.7/lib2to3/pgen2/driver.py --- a/lib-python/2.7/lib2to3/pgen2/driver.py +++ b/lib-python/2.7/lib2to3/pgen2/driver.py @@ -19,6 +19,7 @@ import codecs import os import logging +import StringIO import sys # Pgen imports @@ -101,18 +102,10 @@ def parse_string(self, text, debug=False): """Parse a string and return the syntax tree.""" - tokens = tokenize.generate_tokens(generate_lines(text).next) + tokens = tokenize.generate_tokens(StringIO.StringIO(text).readline) return self.parse_tokens(tokens, debug) -def generate_lines(text): - """Generator that behaves like readline without using StringIO.""" - for line in text.splitlines(True): - yield line - while True: - yield "" - - def load_grammar(gt="Grammar.txt", gp=None, save=True, force=False, logger=None): """Load the grammar (maybe from a pickle).""" diff --git a/lib-python/2.7/lib2to3/pytree.py b/lib-python/2.7/lib2to3/pytree.py --- a/lib-python/2.7/lib2to3/pytree.py +++ b/lib-python/2.7/lib2to3/pytree.py @@ -658,8 +658,8 @@ content: optional sequence of subsequences of patterns; if absent, matches one node; if present, each subsequence is an alternative [*] - min: optinal minumum number of times to match, default 0 - max: optional maximum number of times tro match, default HUGE + min: optional minimum number of times to match, default 0 + max: optional maximum number of times to match, default HUGE name: optional name assigned to this match [*] Thus, if content is [[a, b, c], [d, e], [f, g, h]] this is @@ -743,9 +743,11 @@ else: # The reason for this is that hitting the recursion limit usually # results in some ugly messages about how RuntimeErrors are being - # ignored. - save_stderr = sys.stderr - sys.stderr = StringIO() + # ignored. We don't do this on non-CPython implementation because + # they don't have this problem. + if hasattr(sys, "getrefcount"): + save_stderr = sys.stderr + sys.stderr = StringIO() try: for count, r in self._recursive_matches(nodes, 0): if self.name: @@ -759,7 +761,8 @@ r[self.name] = nodes[:count] yield count, r finally: - sys.stderr = save_stderr + if hasattr(sys, "getrefcount"): + sys.stderr = save_stderr def _iterative_matches(self, nodes): """Helper to iteratively yield the matches.""" diff --git a/lib-python/2.7/lib2to3/refactor.py b/lib-python/2.7/lib2to3/refactor.py --- a/lib-python/2.7/lib2to3/refactor.py +++ b/lib-python/2.7/lib2to3/refactor.py @@ -302,13 +302,14 @@ Files and subdirectories starting with '.' are skipped. 
""" + py_ext = os.extsep + "py" for dirpath, dirnames, filenames in os.walk(dir_name): self.log_debug("Descending into %s", dirpath) dirnames.sort() filenames.sort() for name in filenames: - if not name.startswith(".") and \ - os.path.splitext(name)[1].endswith("py"): + if (not name.startswith(".") and + os.path.splitext(name)[1] == py_ext): fullname = os.path.join(dirpath, name) self.refactor_file(fullname, write, doctests_only) # Modify dirnames in-place to remove subdirs with leading dots diff --git a/lib-python/2.7/lib2to3/tests/data/py2_test_grammar.py b/lib-python/2.7/lib2to3/tests/data/py2_test_grammar.py --- a/lib-python/2.7/lib2to3/tests/data/py2_test_grammar.py +++ b/lib-python/2.7/lib2to3/tests/data/py2_test_grammar.py @@ -316,7 +316,7 @@ ### simple_stmt: small_stmt (';' small_stmt)* [';'] x = 1; pass; del x def foo(): - # verify statments that end with semi-colons + # verify statements that end with semi-colons x = 1; pass; del x; foo() diff --git a/lib-python/2.7/lib2to3/tests/data/py3_test_grammar.py b/lib-python/2.7/lib2to3/tests/data/py3_test_grammar.py --- a/lib-python/2.7/lib2to3/tests/data/py3_test_grammar.py +++ b/lib-python/2.7/lib2to3/tests/data/py3_test_grammar.py @@ -356,7 +356,7 @@ ### simple_stmt: small_stmt (';' small_stmt)* [';'] x = 1; pass; del x def foo(): - # verify statments that end with semi-colons + # verify statements that end with semi-colons x = 1; pass; del x; foo() diff --git a/lib-python/2.7/lib2to3/tests/test_fixers.py b/lib-python/2.7/lib2to3/tests/test_fixers.py --- a/lib-python/2.7/lib2to3/tests/test_fixers.py +++ b/lib-python/2.7/lib2to3/tests/test_fixers.py @@ -3623,16 +3623,24 @@ a = """%s(f, a)""" self.checkall(b, a) - def test_2(self): + def test_qualified(self): b = """itertools.ifilterfalse(a, b)""" a = """itertools.filterfalse(a, b)""" self.check(b, a) - def test_4(self): + b = """itertools.izip_longest(a, b)""" + a = """itertools.zip_longest(a, b)""" + self.check(b, a) + + def test_2(self): b = """ifilterfalse(a, b)""" a = """filterfalse(a, b)""" self.check(b, a) + b = """izip_longest(a, b)""" + a = """zip_longest(a, b)""" + self.check(b, a) + def test_space_1(self): b = """ %s(f, a)""" a = """ %s(f, a)""" @@ -3643,9 +3651,14 @@ a = """ itertools.filterfalse(a, b)""" self.check(b, a) + b = """ itertools.izip_longest(a, b)""" + a = """ itertools.zip_longest(a, b)""" + self.check(b, a) + def test_run_order(self): self.assert_runs_after('map', 'zip', 'filter') + class Test_itertools_imports(FixerTestCase): fixer = 'itertools_imports' @@ -3696,18 +3709,19 @@ s = "from itertools import bar as bang" self.unchanged(s) - def test_ifilter(self): - b = "from itertools import ifilterfalse" - a = "from itertools import filterfalse" - self.check(b, a) - - b = "from itertools import imap, ifilterfalse, foo" - a = "from itertools import filterfalse, foo" - self.check(b, a) - - b = "from itertools import bar, ifilterfalse, foo" - a = "from itertools import bar, filterfalse, foo" - self.check(b, a) + def test_ifilter_and_zip_longest(self): + for name in "filterfalse", "zip_longest": + b = "from itertools import i%s" % (name,) + a = "from itertools import %s" % (name,) + self.check(b, a) + + b = "from itertools import imap, i%s, foo" % (name,) + a = "from itertools import %s, foo" % (name,) + self.check(b, a) + + b = "from itertools import bar, i%s, foo" % (name,) + a = "from itertools import bar, %s, foo" % (name,) + self.check(b, a) def test_import_star(self): s = "from itertools import *" diff --git a/lib-python/2.7/lib2to3/tests/test_parser.py 
b/lib-python/2.7/lib2to3/tests/test_parser.py --- a/lib-python/2.7/lib2to3/tests/test_parser.py +++ b/lib-python/2.7/lib2to3/tests/test_parser.py @@ -19,6 +19,16 @@ # Local imports from lib2to3.pgen2 import tokenize from ..pgen2.parse import ParseError +from lib2to3.pygram import python_symbols as syms + + +class TestDriver(support.TestCase): + + def test_formfeed(self): + s = """print 1\n\x0Cprint 2\n""" + t = driver.parse_string(s) + self.assertEqual(t.children[0].children[0].type, syms.print_stmt) + self.assertEqual(t.children[1].children[0].type, syms.print_stmt) class GrammarTest(support.TestCase): diff --git a/lib-python/2.7/lib2to3/tests/test_refactor.py b/lib-python/2.7/lib2to3/tests/test_refactor.py --- a/lib-python/2.7/lib2to3/tests/test_refactor.py +++ b/lib-python/2.7/lib2to3/tests/test_refactor.py @@ -223,6 +223,7 @@ "hi.py", ".dumb", ".after.py", + "notpy.npy", "sappy"] expected = ["hi.py"] check(tree, expected) diff --git a/lib-python/2.7/lib2to3/tests/test_util.py b/lib-python/2.7/lib2to3/tests/test_util.py --- a/lib-python/2.7/lib2to3/tests/test_util.py +++ b/lib-python/2.7/lib2to3/tests/test_util.py @@ -568,8 +568,8 @@ def test_from_import(self): node = parse('bar()') - fixer_util.touch_import("cgi", "escape", node) - self.assertEqual(str(node), 'from cgi import escape\nbar()\n\n') + fixer_util.touch_import("html", "escape", node) + self.assertEqual(str(node), 'from html import escape\nbar()\n\n') def test_name_import(self): node = parse('bar()') diff --git a/lib-python/2.7/locale.py b/lib-python/2.7/locale.py --- a/lib-python/2.7/locale.py +++ b/lib-python/2.7/locale.py @@ -621,7 +621,7 @@ 'tactis': 'TACTIS', 'euc_jp': 'eucJP', 'euc_kr': 'eucKR', - 'utf_8': 'UTF8', + 'utf_8': 'UTF-8', 'koi8_r': 'KOI8-R', 'koi8_u': 'KOI8-U', # XXX This list is still incomplete. If you know more diff --git a/lib-python/2.7/logging/__init__.py b/lib-python/2.7/logging/__init__.py --- a/lib-python/2.7/logging/__init__.py +++ b/lib-python/2.7/logging/__init__.py @@ -1627,6 +1627,7 @@ h = wr() if h: try: + h.acquire() h.flush() h.close() except (IOError, ValueError): @@ -1635,6 +1636,8 @@ # references to them are still around at # application exit. pass + finally: + h.release() except: if raiseExceptions: raise diff --git a/lib-python/2.7/logging/config.py b/lib-python/2.7/logging/config.py --- a/lib-python/2.7/logging/config.py +++ b/lib-python/2.7/logging/config.py @@ -226,14 +226,14 @@ propagate = 1 logger = logging.getLogger(qn) if qn in existing: - i = existing.index(qn) + i = existing.index(qn) + 1 # start with the entry after qn prefixed = qn + "." 
pflen = len(prefixed) num_existing = len(existing) - i = i + 1 # look at the entry after qn - while (i < num_existing) and (existing[i][:pflen] == prefixed): - child_loggers.append(existing[i]) - i = i + 1 + while i < num_existing: + if existing[i][:pflen] == prefixed: + child_loggers.append(existing[i]) + i += 1 existing.remove(qn) if "level" in opts: level = cp.get(sectname, "level") diff --git a/lib-python/2.7/logging/handlers.py b/lib-python/2.7/logging/handlers.py --- a/lib-python/2.7/logging/handlers.py +++ b/lib-python/2.7/logging/handlers.py @@ -125,6 +125,7 @@ """ if self.stream: self.stream.close() + self.stream = None if self.backupCount > 0: for i in range(self.backupCount - 1, 0, -1): sfn = "%s.%d" % (self.baseFilename, i) @@ -324,6 +325,7 @@ """ if self.stream: self.stream.close() + self.stream = None # get the time that this sequence started at and make it a TimeTuple t = self.rolloverAt - self.interval if self.utc: diff --git a/lib-python/2.7/mailbox.py b/lib-python/2.7/mailbox.py --- a/lib-python/2.7/mailbox.py +++ b/lib-python/2.7/mailbox.py @@ -234,27 +234,35 @@ def __init__(self, dirname, factory=rfc822.Message, create=True): """Initialize a Maildir instance.""" Mailbox.__init__(self, dirname, factory, create) + self._paths = { + 'tmp': os.path.join(self._path, 'tmp'), + 'new': os.path.join(self._path, 'new'), + 'cur': os.path.join(self._path, 'cur'), + } if not os.path.exists(self._path): if create: os.mkdir(self._path, 0700) - os.mkdir(os.path.join(self._path, 'tmp'), 0700) - os.mkdir(os.path.join(self._path, 'new'), 0700) - os.mkdir(os.path.join(self._path, 'cur'), 0700) + for path in self._paths.values(): + os.mkdir(path, 0o700) else: raise NoSuchMailboxError(self._path) self._toc = {} - self._last_read = None # Records last time we read cur/new - # NOTE: we manually invalidate _last_read each time we do any - # modifications ourselves, otherwise we might get tripped up by - # bogus mtime behaviour on some systems (see issue #6896). 
+ self._toc_mtimes = {} + for subdir in ('cur', 'new'): + self._toc_mtimes[subdir] = os.path.getmtime(self._paths[subdir]) + self._last_read = time.time() # Records last time we read cur/new + self._skewfactor = 0.1 # Adjust if os/fs clocks are skewing def add(self, message): """Add message and return assigned key.""" tmp_file = self._create_tmp() try: self._dump_message(message, tmp_file) - finally: - _sync_close(tmp_file) + except BaseException: + tmp_file.close() + os.remove(tmp_file.name) + raise + _sync_close(tmp_file) if isinstance(message, MaildirMessage): subdir = message.get_subdir() suffix = self.colon + message.get_info() @@ -280,15 +288,11 @@ raise if isinstance(message, MaildirMessage): os.utime(dest, (os.path.getatime(dest), message.get_date())) - # Invalidate cached toc - self._last_read = None return uniq def remove(self, key): """Remove the keyed message; raise KeyError if it doesn't exist.""" os.remove(os.path.join(self._path, self._lookup(key))) - # Invalidate cached toc (only on success) - self._last_read = None def discard(self, key): """If the keyed message exists, remove it.""" @@ -323,8 +327,6 @@ if isinstance(message, MaildirMessage): os.utime(new_path, (os.path.getatime(new_path), message.get_date())) - # Invalidate cached toc - self._last_read = None def get_message(self, key): """Return a Message representation or raise a KeyError.""" @@ -380,8 +382,8 @@ def flush(self): """Write any pending changes to disk.""" # Maildir changes are always written immediately, so there's nothing - # to do except invalidate our cached toc. - self._last_read = None + # to do. + pass def lock(self): """Lock the mailbox.""" @@ -479,36 +481,39 @@ def _refresh(self): """Update table of contents mapping.""" - if self._last_read is not None: - for subdir in ('new', 'cur'): - mtime = os.path.getmtime(os.path.join(self._path, subdir)) - if mtime > self._last_read: - break - else: + # If it has been less than two seconds since the last _refresh() call, + # we have to unconditionally re-read the mailbox just in case it has + # been modified, because os.path.mtime() has a 2 sec resolution in the + # most common worst case (FAT) and a 1 sec resolution typically. This + # results in a few unnecessary re-reads when _refresh() is called + # multiple times in that interval, but once the clock ticks over, we + # will only re-read as needed. Because the filesystem might be being + # served by an independent system with its own clock, we record and + # compare with the mtimes from the filesystem. Because the other + # system's clock might be skewing relative to our clock, we add an + # extra delta to our wait. The default is one tenth second, but is an + # instance variable and so can be adjusted if dealing with a + # particularly skewed or irregular system. + if time.time() - self._last_read > 2 + self._skewfactor: + refresh = False + for subdir in self._toc_mtimes: + mtime = os.path.getmtime(self._paths[subdir]) + if mtime > self._toc_mtimes[subdir]: + refresh = True + self._toc_mtimes[subdir] = mtime + if not refresh: return - - # We record the current time - 1sec so that, if _refresh() is called - # again in the same second, we will always re-read the mailbox - # just in case it's been modified. (os.path.mtime() only has - # 1sec resolution.) This results in a few unnecessary re-reads - # when _refresh() is called multiple times in the same second, - # but once the clock ticks over, we will only re-read as needed. 
- now = time.time() - 1 - + # Refresh toc self._toc = {} - def update_dir (subdir): - path = os.path.join(self._path, subdir) + for subdir in self._toc_mtimes: + path = self._paths[subdir] for entry in os.listdir(path): p = os.path.join(path, entry) if os.path.isdir(p): continue uniq = entry.split(self.colon)[0] self._toc[uniq] = os.path.join(subdir, entry) - - update_dir('new') - update_dir('cur') - - self._last_read = now + self._last_read = time.time() def _lookup(self, key): """Use TOC to return subpath for given key, or raise a KeyError.""" @@ -551,7 +556,7 @@ f = open(self._path, 'wb+') else: raise NoSuchMailboxError(self._path) - elif e.errno == errno.EACCES: + elif e.errno in (errno.EACCES, errno.EROFS): f = open(self._path, 'rb') else: raise @@ -700,9 +705,14 @@ def _append_message(self, message): """Append message to mailbox and return (start, stop) offsets.""" self._file.seek(0, 2) - self._pre_message_hook(self._file) - offsets = self._install_message(message) - self._post_message_hook(self._file) + before = self._file.tell() + try: + self._pre_message_hook(self._file) + offsets = self._install_message(message) + self._post_message_hook(self._file) + except BaseException: + self._file.truncate(before) + raise self._file.flush() self._file_length = self._file.tell() # Record current length of mailbox return offsets @@ -868,18 +878,29 @@ new_key = max(keys) + 1 new_path = os.path.join(self._path, str(new_key)) f = _create_carefully(new_path) + closed = False try: if self._locked: _lock_file(f) try: - self._dump_message(message, f) + try: + self._dump_message(message, f) + except BaseException: + # Unlock and close so it can be deleted on Windows + if self._locked: + _unlock_file(f) + _sync_close(f) + closed = True + os.remove(new_path) + raise if isinstance(message, MHMessage): self._dump_sequences(message, new_key) finally: if self._locked: _unlock_file(f) finally: - _sync_close(f) + if not closed: + _sync_close(f) return new_key def remove(self, key): @@ -1886,7 +1907,7 @@ try: fcntl.lockf(f, fcntl.LOCK_EX | fcntl.LOCK_NB) except IOError, e: - if e.errno in (errno.EAGAIN, errno.EACCES): + if e.errno in (errno.EAGAIN, errno.EACCES, errno.EROFS): raise ExternalClashError('lockf: lock unavailable: %s' % f.name) else: @@ -1896,7 +1917,7 @@ pre_lock = _create_temporary(f.name + '.lock') pre_lock.close() except IOError, e: - if e.errno == errno.EACCES: + if e.errno in (errno.EACCES, errno.EROFS): return # Without write access, just skip dotlocking. 
else: raise diff --git a/lib-python/2.7/msilib/__init__.py b/lib-python/2.7/msilib/__init__.py --- a/lib-python/2.7/msilib/__init__.py +++ b/lib-python/2.7/msilib/__init__.py @@ -173,11 +173,10 @@ add_data(db, table, getattr(module, table)) def make_id(str): - #str = str.replace(".", "_") # colons are allowed - str = str.replace(" ", "_") - str = str.replace("-", "_") - if str[0] in string.digits: - str = "_"+str + identifier_chars = string.ascii_letters + string.digits + "._" + str = "".join([c if c in identifier_chars else "_" for c in str]) + if str[0] in (string.digits + "."): + str = "_" + str assert re.match("^[A-Za-z_][A-Za-z0-9_.]*$", str), "FILE"+str return str @@ -285,19 +284,28 @@ [(feature.id, component)]) def make_short(self, file): + oldfile = file + file = file.replace('+', '_') + file = ''.join(c for c in file if not c in ' "/\[]:;=,') parts = file.split(".") - if len(parts)>1: + if len(parts) > 1: + prefix = "".join(parts[:-1]).upper() suffix = parts[-1].upper() + if not prefix: + prefix = suffix + suffix = None else: + prefix = file.upper() suffix = None - prefix = parts[0].upper() - if len(prefix) <= 8 and (not suffix or len(suffix)<=3): + if len(parts) < 3 and len(prefix) <= 8 and file == oldfile and ( + not suffix or len(suffix) <= 3): if suffix: file = prefix+"."+suffix else: file = prefix - assert file not in self.short_names else: + file = None + if file is None or file in self.short_names: prefix = prefix[:6] if suffix: suffix = suffix[:3] diff --git a/lib-python/2.7/multiprocessing/__init__.py b/lib-python/2.7/multiprocessing/__init__.py --- a/lib-python/2.7/multiprocessing/__init__.py +++ b/lib-python/2.7/multiprocessing/__init__.py @@ -38,6 +38,7 @@ # HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT # LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY # OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __version__ = '0.70a1' @@ -115,8 +116,11 @@ except (ValueError, KeyError): num = 0 elif 'bsd' in sys.platform or sys.platform == 'darwin': + comm = '/sbin/sysctl -n hw.ncpu' + if sys.platform == 'darwin': + comm = '/usr' + comm try: - with os.popen('sysctl -n hw.ncpu') as p: + with os.popen(comm) as p: num = int(p.read()) except ValueError: num = 0 diff --git a/lib-python/2.7/multiprocessing/connection.py b/lib-python/2.7/multiprocessing/connection.py --- a/lib-python/2.7/multiprocessing/connection.py +++ b/lib-python/2.7/multiprocessing/connection.py @@ -3,7 +3,33 @@ # # multiprocessing/connection.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. 
+# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __all__ = [ 'Client', 'Listener', 'Pipe' ] diff --git a/lib-python/2.7/multiprocessing/dummy/__init__.py b/lib-python/2.7/multiprocessing/dummy/__init__.py --- a/lib-python/2.7/multiprocessing/dummy/__init__.py +++ b/lib-python/2.7/multiprocessing/dummy/__init__.py @@ -3,7 +3,33 @@ # # multiprocessing/dummy/__init__.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __all__ = [ diff --git a/lib-python/2.7/multiprocessing/dummy/connection.py b/lib-python/2.7/multiprocessing/dummy/connection.py --- a/lib-python/2.7/multiprocessing/dummy/connection.py +++ b/lib-python/2.7/multiprocessing/dummy/connection.py @@ -3,7 +3,33 @@ # # multiprocessing/dummy/connection.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. 
Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __all__ = [ 'Client', 'Listener', 'Pipe' ] diff --git a/lib-python/2.7/multiprocessing/forking.py b/lib-python/2.7/multiprocessing/forking.py --- a/lib-python/2.7/multiprocessing/forking.py +++ b/lib-python/2.7/multiprocessing/forking.py @@ -3,7 +3,33 @@ # # multiprocessing/forking.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # import os @@ -172,6 +198,7 @@ TERMINATE = 0x10000 WINEXE = (sys.platform == 'win32' and getattr(sys, 'frozen', False)) + WINSERVICE = sys.executable.lower().endswith("pythonservice.exe") exit = win32.ExitProcess close = win32.CloseHandle @@ -181,7 +208,7 @@ # People embedding Python want to modify it. 
# - if sys.executable.lower().endswith('pythonservice.exe'): + if WINSERVICE: _python_exe = os.path.join(sys.exec_prefix, 'python.exe') else: _python_exe = sys.executable @@ -371,7 +398,7 @@ if _logger is not None: d['log_level'] = _logger.getEffectiveLevel() - if not WINEXE: + if not WINEXE and not WINSERVICE: main_path = getattr(sys.modules['__main__'], '__file__', None) if not main_path and sys.argv[0] not in ('', '-c'): main_path = sys.argv[0] diff --git a/lib-python/2.7/multiprocessing/heap.py b/lib-python/2.7/multiprocessing/heap.py --- a/lib-python/2.7/multiprocessing/heap.py +++ b/lib-python/2.7/multiprocessing/heap.py @@ -3,7 +3,33 @@ # # multiprocessing/heap.py # -# Copyright (c) 2007-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # import bisect diff --git a/lib-python/2.7/multiprocessing/managers.py b/lib-python/2.7/multiprocessing/managers.py --- a/lib-python/2.7/multiprocessing/managers.py +++ b/lib-python/2.7/multiprocessing/managers.py @@ -4,7 +4,33 @@ # # multiprocessing/managers.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. 
+# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __all__ = [ 'BaseManager', 'SyncManager', 'BaseProxy', 'Token' ] diff --git a/lib-python/2.7/multiprocessing/pool.py b/lib-python/2.7/multiprocessing/pool.py --- a/lib-python/2.7/multiprocessing/pool.py +++ b/lib-python/2.7/multiprocessing/pool.py @@ -3,7 +3,33 @@ # # multiprocessing/pool.py # -# Copyright (c) 2007-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __all__ = ['Pool'] @@ -269,6 +295,8 @@ while pool._worker_handler._state == RUN and pool._state == RUN: pool._maintain_pool() time.sleep(0.1) + # send sentinel to stop workers + pool._taskqueue.put(None) debug('worker handler exiting') @staticmethod @@ -387,7 +415,6 @@ if self._state == RUN: self._state = CLOSE self._worker_handler._state = CLOSE - self._taskqueue.put(None) def terminate(self): debug('terminating pool') @@ -421,7 +448,6 @@ worker_handler._state = TERMINATE task_handler._state = TERMINATE - taskqueue.put(None) # sentinel debug('helping task handler/workers to finish') cls._help_stuff_finish(inqueue, task_handler, len(pool)) @@ -431,6 +457,11 @@ result_handler._state = TERMINATE outqueue.put(None) # sentinel + # We must wait for the worker handler to exit before terminating + # workers because we don't want workers to be restarted behind our back. 
+ debug('joining worker handler') + worker_handler.join() + # Terminate workers which haven't already finished. if pool and hasattr(pool[0], 'terminate'): debug('terminating workers') diff --git a/lib-python/2.7/multiprocessing/process.py b/lib-python/2.7/multiprocessing/process.py --- a/lib-python/2.7/multiprocessing/process.py +++ b/lib-python/2.7/multiprocessing/process.py @@ -3,7 +3,33 @@ # # multiprocessing/process.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __all__ = ['Process', 'current_process', 'active_children'] diff --git a/lib-python/2.7/multiprocessing/queues.py b/lib-python/2.7/multiprocessing/queues.py --- a/lib-python/2.7/multiprocessing/queues.py +++ b/lib-python/2.7/multiprocessing/queues.py @@ -3,7 +3,33 @@ # # multiprocessing/queues.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. 
IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __all__ = ['Queue', 'SimpleQueue', 'JoinableQueue'] diff --git a/lib-python/2.7/multiprocessing/reduction.py b/lib-python/2.7/multiprocessing/reduction.py --- a/lib-python/2.7/multiprocessing/reduction.py +++ b/lib-python/2.7/multiprocessing/reduction.py @@ -4,7 +4,33 @@ # # multiprocessing/reduction.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __all__ = [] diff --git a/lib-python/2.7/multiprocessing/sharedctypes.py b/lib-python/2.7/multiprocessing/sharedctypes.py --- a/lib-python/2.7/multiprocessing/sharedctypes.py +++ b/lib-python/2.7/multiprocessing/sharedctypes.py @@ -3,7 +3,33 @@ # # multiprocessing/sharedctypes.py # -# Copyright (c) 2007-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. 
+# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # import sys @@ -52,9 +78,11 @@ Returns a ctypes array allocated from shared memory ''' type_ = typecode_to_type.get(typecode_or_type, typecode_or_type) - if isinstance(size_or_initializer, int): + if isinstance(size_or_initializer, (int, long)): type_ = type_ * size_or_initializer - return _new_value(type_) + obj = _new_value(type_) + ctypes.memset(ctypes.addressof(obj), 0, ctypes.sizeof(obj)) + return obj else: type_ = type_ * len(size_or_initializer) result = _new_value(type_) diff --git a/lib-python/2.7/multiprocessing/synchronize.py b/lib-python/2.7/multiprocessing/synchronize.py --- a/lib-python/2.7/multiprocessing/synchronize.py +++ b/lib-python/2.7/multiprocessing/synchronize.py @@ -3,7 +3,33 @@ # # multiprocessing/synchronize.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __all__ = [ diff --git a/lib-python/2.7/multiprocessing/util.py b/lib-python/2.7/multiprocessing/util.py --- a/lib-python/2.7/multiprocessing/util.py +++ b/lib-python/2.7/multiprocessing/util.py @@ -3,7 +3,33 @@ # # multiprocessing/util.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. 
+# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # import itertools diff --git a/lib-python/2.7/netrc.py b/lib-python/2.7/netrc.py --- a/lib-python/2.7/netrc.py +++ b/lib-python/2.7/netrc.py @@ -34,11 +34,19 @@ def _parse(self, file, fp): lexer = shlex.shlex(fp) lexer.wordchars += r"""!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~""" + lexer.commenters = lexer.commenters.replace('#', '') while 1: # Look for a machine, default, or macdef top-level keyword toplevel = tt = lexer.get_token() if not tt: break + elif tt[0] == '#': + # seek to beginning of comment, in case reading the token put + # us on a new line, and then skip the rest of the line. + pos = len(tt) + 1 + lexer.instream.seek(-pos, 1) + lexer.instream.readline() + continue elif tt == 'machine': entryname = lexer.get_token() elif tt == 'default': @@ -64,8 +72,8 @@ self.hosts[entryname] = {} while 1: tt = lexer.get_token() - if (tt=='' or tt == 'machine' or - tt == 'default' or tt =='macdef'): + if (tt.startswith('#') or + tt in {'', 'machine', 'default', 'macdef'}): if password: self.hosts[entryname] = (login, account, password) lexer.push_token(tt) diff --git a/lib-python/2.7/nntplib.py b/lib-python/2.7/nntplib.py --- a/lib-python/2.7/nntplib.py +++ b/lib-python/2.7/nntplib.py @@ -103,7 +103,7 @@ readermode is sometimes necessary if you are connecting to an NNTP server on the local machine and intend to call - reader-specific comamnds, such as `group'. If you get + reader-specific commands, such as `group'. If you get unexpected NNTPPermanentErrors, you might need to set readermode. """ diff --git a/lib-python/2.7/ntpath.py b/lib-python/2.7/ntpath.py --- a/lib-python/2.7/ntpath.py +++ b/lib-python/2.7/ntpath.py @@ -310,7 +310,7 @@ # - $varname is accepted. # - %varname% is accepted. # - varnames can be made out of letters, digits and the characters '_-' -# (though is not verifed in the ${varname} and %varname% cases) +# (though is not verified in the ${varname} and %varname% cases) # XXX With COMMAND.COM you can use any characters in a variable name, # XXX except '^|<>='. 
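A minimal sketch (not part of the patch) of what the netrc.py hunk above changes, assuming Python 2.7's shlex semantics: while '#' stays in shlex.commenters, a '#' inside a token such as a password silently swallows the rest of the line, whereas dropping it from commenters lets netrc see the whole token and decide per token whether something starting with '#' is a comment. The machine/login/password values and the tokens() helper name below are made up for illustration.

import shlex

TEXT = 'machine example.com login joe password se#cret\n'
PUNCT = r"""!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~"""

def tokens(strip_hash_comments):
    # Build a lexer roughly the way netrc._parse does: widen wordchars so
    # punctuation (including '#') can appear inside tokens.
    lexer = shlex.shlex(TEXT)
    lexer.wordchars += PUNCT
    if strip_hash_comments:
        # The patched behaviour: '#' no longer starts a shlex-level comment.
        lexer.commenters = lexer.commenters.replace('#', '')
    out = []
    while True:
        tok = lexer.get_token()
        if not tok:           # '' signals end of input in non-posix mode
            break
        out.append(tok)
    return out

print(tokens(False))
# ['machine', 'example.com', 'login', 'joe', 'password', 'se']
# -> with '#' as a comment character, shlex drops '#cret' and the
#    password is silently truncated.

print(tokens(True))
# ['machine', 'example.com', 'login', 'joe', 'password', 'se#cret']
# -> the full token survives; deciding whether a token that begins with
#    '#' opens a comment is now done by netrc itself, as in the hunk above.
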
diff --git a/lib-python/2.7/nturl2path.py b/lib-python/2.7/nturl2path.py --- a/lib-python/2.7/nturl2path.py +++ b/lib-python/2.7/nturl2path.py @@ -25,11 +25,14 @@ error = 'Bad URL: ' + url raise IOError, error drive = comp[0][-1].upper() + path = drive + ':' components = comp[1].split('/') - path = drive + ':' - for comp in components: + for comp in components: if comp: path = path + '\\' + urllib.unquote(comp) + # Issue #11474: url like '/C|/' should convert into 'C:\\' + if path.endswith(':') and url.endswith('/'): + path += '\\' return path def pathname2url(p): diff --git a/lib-python/2.7/numbers.py b/lib-python/2.7/numbers.py --- a/lib-python/2.7/numbers.py +++ b/lib-python/2.7/numbers.py @@ -63,7 +63,7 @@ @abstractproperty def imag(self): - """Retrieve the real component of this number. + """Retrieve the imaginary component of this number. This should subclass Real. """ diff --git a/lib-python/2.7/optparse.py b/lib-python/2.7/optparse.py --- a/lib-python/2.7/optparse.py +++ b/lib-python/2.7/optparse.py @@ -1131,6 +1131,11 @@ prog : string the name of the current program (to override os.path.basename(sys.argv[0])). + description : string + A paragraph of text giving a brief overview of your program. + optparse reformats this paragraph to fit the current terminal + width and prints it when the user requests help (after usage, + but before the list of options). epilog : string paragraph of help text to print after option help diff --git a/lib-python/2.7/pickletools.py b/lib-python/2.7/pickletools.py --- a/lib-python/2.7/pickletools.py +++ b/lib-python/2.7/pickletools.py @@ -1370,7 +1370,7 @@ proto=0, doc="""Read an object from the memo and push it on the stack. - The index of the memo object to push is given by the newline-teriminated + The index of the memo object to push is given by the newline-terminated decimal string following. BINGET and LONG_BINGET are space-optimized versions. """), diff --git a/lib-python/2.7/pkgutil.py b/lib-python/2.7/pkgutil.py --- a/lib-python/2.7/pkgutil.py +++ b/lib-python/2.7/pkgutil.py @@ -11,7 +11,7 @@ __all__ = [ 'get_importer', 'iter_importers', 'get_loader', 'find_loader', - 'walk_packages', 'iter_modules', + 'walk_packages', 'iter_modules', 'get_data', 'ImpImporter', 'ImpLoader', 'read_code', 'extend_path', ] diff --git a/lib-python/2.7/platform.py b/lib-python/2.7/platform.py --- a/lib-python/2.7/platform.py +++ b/lib-python/2.7/platform.py @@ -503,7 +503,7 @@ info = pipe.read() if pipe.close(): raise os.error,'command failed' - # XXX How can I supress shell errors from being written + # XXX How can I suppress shell errors from being written # to stderr ? except os.error,why: #print 'Command %s failed: %s' % (cmd,why) @@ -1448,9 +1448,10 @@ """ Returns a string identifying the Python implementation. Currently, the following implementations are identified: - 'CPython' (C implementation of Python), - 'IronPython' (.NET implementation of Python), - 'Jython' (Java implementation of Python). + 'CPython' (C implementation of Python), + 'IronPython' (.NET implementation of Python), + 'Jython' (Java implementation of Python), + 'PyPy' (Python implementation of Python). """ return _sys_version()[0] diff --git a/lib-python/2.7/pydoc.py b/lib-python/2.7/pydoc.py --- a/lib-python/2.7/pydoc.py +++ b/lib-python/2.7/pydoc.py @@ -156,7 +156,7 @@ no.append(x) return yes, no -def visiblename(name, all=None): +def visiblename(name, all=None, obj=None): """Decide whether to show documentation on a variable.""" # Certain special names are redundant. 
_hidden_names = ('__builtins__', '__doc__', '__file__', '__path__', @@ -164,6 +164,9 @@ if name in _hidden_names: return 0 # Private names are hidden, but special names are displayed. if name.startswith('__') and name.endswith('__'): return 1 + # Namedtuples have public fields and methods with a single leading underscore + if name.startswith('_') and hasattr(obj, '_fields'): + return 1 if all is not None: # only document that which the programmer exported in __all__ return name in all @@ -475,9 +478,9 @@ def multicolumn(self, list, format, cols=4): """Format a list of items into a multi-column list.""" result = '' - rows = (len(list)+cols-1)/cols + rows = (len(list)+cols-1)//cols for col in range(cols): - result = result + '' % (100/cols) + result = result + '' % (100//cols) for i in range(rows*col, rows*col+rows): if i < len(list): result = result + format(list[i]) + '
\n' @@ -627,7 +630,7 @@ # if __all__ exists, believe it. Otherwise use old heuristic. if (all is not None or (inspect.getmodule(value) or object) is object): - if visiblename(key, all): + if visiblename(key, all, object): classes.append((key, value)) cdict[key] = cdict[value] = '#' + key for key, value in classes: @@ -643,13 +646,13 @@ # if __all__ exists, believe it. Otherwise use old heuristic. if (all is not None or inspect.isbuiltin(value) or inspect.getmodule(value) is object): - if visiblename(key, all): + if visiblename(key, all, object): funcs.append((key, value)) fdict[key] = '#-' + key if inspect.isfunction(value): fdict[value] = fdict[key] data = [] for key, value in inspect.getmembers(object, isdata): - if visiblename(key, all): + if visiblename(key, all, object): data.append((key, value)) doc = self.markup(getdoc(object), self.preformat, fdict, cdict) @@ -773,7 +776,7 @@ push('\n') return attrs - attrs = filter(lambda data: visiblename(data[0]), + attrs = filter(lambda data: visiblename(data[0], obj=object), classify_class_attrs(object)) mdict = {} for key, kind, homecls, value in attrs: @@ -1042,18 +1045,18 @@ # if __all__ exists, believe it. Otherwise use old heuristic. if (all is not None or (inspect.getmodule(value) or object) is object): - if visiblename(key, all): + if visiblename(key, all, object): classes.append((key, value)) funcs = [] for key, value in inspect.getmembers(object, inspect.isroutine): # if __all__ exists, believe it. Otherwise use old heuristic. if (all is not None or inspect.isbuiltin(value) or inspect.getmodule(value) is object): - if visiblename(key, all): + if visiblename(key, all, object): funcs.append((key, value)) data = [] for key, value in inspect.getmembers(object, isdata): - if visiblename(key, all): + if visiblename(key, all, object): data.append((key, value)) modpkgs = [] @@ -1113,7 +1116,7 @@ result = result + self.section('CREDITS', str(object.__credits__)) return result - def docclass(self, object, name=None, mod=None): + def docclass(self, object, name=None, mod=None, *ignored): """Produce text documentation for a given class object.""" realname = object.__name__ name = name or realname @@ -1186,7 +1189,7 @@ name, mod, maxlen=70, doc=doc) + '\n') return attrs - attrs = filter(lambda data: visiblename(data[0]), + attrs = filter(lambda data: visiblename(data[0], obj=object), classify_class_attrs(object)) while attrs: if mro: @@ -1718,8 +1721,9 @@ return '' return '' - def __call__(self, request=None): - if request is not None: + _GoInteractive = object() + def __call__(self, request=_GoInteractive): + if request is not self._GoInteractive: self.help(request) else: self.intro() diff --git a/lib-python/2.7/pydoc_data/topics.py b/lib-python/2.7/pydoc_data/topics.py --- a/lib-python/2.7/pydoc_data/topics.py +++ b/lib-python/2.7/pydoc_data/topics.py @@ -1,16 +1,16 @@ -# Autogenerated by Sphinx on Sat Jul 3 08:52:04 2010 +# Autogenerated by Sphinx on Sat Jun 11 09:49:30 2011 topics = {'assert': u'\nThe ``assert`` statement\n************************\n\nAssert statements are a convenient way to insert debugging assertions\ninto a program:\n\n assert_stmt ::= "assert" expression ["," expression]\n\nThe simple form, ``assert expression``, is equivalent to\n\n if __debug__:\n if not expression: raise AssertionError\n\nThe extended form, ``assert expression1, expression2``, is equivalent\nto\n\n if __debug__:\n if not expression1: raise AssertionError(expression2)\n\nThese equivalences assume that ``__debug__`` and ``AssertionError``\nrefer to the 
built-in variables with those names. In the current\nimplementation, the built-in variable ``__debug__`` is ``True`` under\nnormal circumstances, ``False`` when optimization is requested\n(command line option -O). The current code generator emits no code\nfor an assert statement when optimization is requested at compile\ntime. Note that it is unnecessary to include the source code for the\nexpression that failed in the error message; it will be displayed as\npart of the stack trace.\n\nAssignments to ``__debug__`` are illegal. The value for the built-in\nvariable is determined when the interpreter starts.\n', - 'assignment': u'\nAssignment statements\n*********************\n\nAssignment statements are used to (re)bind names to values and to\nmodify attributes or items of mutable objects:\n\n assignment_stmt ::= (target_list "=")+ (expression_list | yield_expression)\n target_list ::= target ("," target)* [","]\n target ::= identifier\n | "(" target_list ")"\n | "[" target_list "]"\n | attributeref\n | subscription\n | slicing\n\n(See section *Primaries* for the syntax definitions for the last three\nsymbols.)\n\nAn assignment statement evaluates the expression list (remember that\nthis can be a single expression or a comma-separated list, the latter\nyielding a tuple) and assigns the single resulting object to each of\nthe target lists, from left to right.\n\nAssignment is defined recursively depending on the form of the target\n(list). When a target is part of a mutable object (an attribute\nreference, subscription or slicing), the mutable object must\nultimately perform the assignment and decide about its validity, and\nmay raise an exception if the assignment is unacceptable. The rules\nobserved by various types and the exceptions raised are given with the\ndefinition of the object types (see section *The standard type\nhierarchy*).\n\nAssignment of an object to a target list is recursively defined as\nfollows.\n\n* If the target list is a single target: The object is assigned to\n that target.\n\n* If the target list is a comma-separated list of targets: The object\n must be an iterable with the same number of items as there are\n targets in the target list, and the items are assigned, from left to\n right, to the corresponding targets. (This rule is relaxed as of\n Python 1.5; in earlier versions, the object had to be a tuple.\n Since strings are sequences, an assignment like ``a, b = "xy"`` is\n now legal as long as the string has the right length.)\n\nAssignment of an object to a single target is recursively defined as\nfollows.\n\n* If the target is an identifier (name):\n\n * If the name does not occur in a ``global`` statement in the\n current code block: the name is bound to the object in the current\n local namespace.\n\n * Otherwise: the name is bound to the object in the current global\n namespace.\n\n The name is rebound if it was already bound. This may cause the\n reference count for the object previously bound to the name to reach\n zero, causing the object to be deallocated and its destructor (if it\n has one) to be called.\n\n* If the target is a target list enclosed in parentheses or in square\n brackets: The object must be an iterable with the same number of\n items as there are targets in the target list, and its items are\n assigned, from left to right, to the corresponding targets.\n\n* If the target is an attribute reference: The primary expression in\n the reference is evaluated. 
It should yield an object with\n assignable attributes; if this is not the case, ``TypeError`` is\n raised. That object is then asked to assign the assigned object to\n the given attribute; if it cannot perform the assignment, it raises\n an exception (usually but not necessarily ``AttributeError``).\n\n Note: If the object is a class instance and the attribute reference\n occurs on both sides of the assignment operator, the RHS expression,\n ``a.x`` can access either an instance attribute or (if no instance\n attribute exists) a class attribute. The LHS target ``a.x`` is\n always set as an instance attribute, creating it if necessary.\n Thus, the two occurrences of ``a.x`` do not necessarily refer to the\n same attribute: if the RHS expression refers to a class attribute,\n the LHS creates a new instance attribute as the target of the\n assignment:\n\n class Cls:\n x = 3 # class variable\n inst = Cls()\n inst.x = inst.x + 1 # writes inst.x as 4 leaving Cls.x as 3\n\n This description does not necessarily apply to descriptor\n attributes, such as properties created with ``property()``.\n\n* If the target is a subscription: The primary expression in the\n reference is evaluated. It should yield either a mutable sequence\n object (such as a list) or a mapping object (such as a dictionary).\n Next, the subscript expression is evaluated.\n\n If the primary is a mutable sequence object (such as a list), the\n subscript must yield a plain integer. If it is negative, the\n sequence\'s length is added to it. The resulting value must be a\n nonnegative integer less than the sequence\'s length, and the\n sequence is asked to assign the assigned object to its item with\n that index. If the index is out of range, ``IndexError`` is raised\n (assignment to a subscripted sequence cannot add new items to a\n list).\n\n If the primary is a mapping object (such as a dictionary), the\n subscript must have a type compatible with the mapping\'s key type,\n and the mapping is then asked to create a key/datum pair which maps\n the subscript to the assigned object. This can either replace an\n existing key/value pair with the same key value, or insert a new\n key/value pair (if no key with the same value existed).\n\n* If the target is a slicing: The primary expression in the reference\n is evaluated. It should yield a mutable sequence object (such as a\n list). The assigned object should be a sequence object of the same\n type. Next, the lower and upper bound expressions are evaluated,\n insofar they are present; defaults are zero and the sequence\'s\n length. The bounds should evaluate to (small) integers. If either\n bound is negative, the sequence\'s length is added to it. The\n resulting bounds are clipped to lie between zero and the sequence\'s\n length, inclusive. Finally, the sequence object is asked to replace\n the slice with the items of the assigned sequence. 
The length of\n the slice may be different from the length of the assigned sequence,\n thus changing the length of the target sequence, if the object\n allows it.\n\n**CPython implementation detail:** In the current implementation, the\nsyntax for targets is taken to be the same as for expressions, and\ninvalid syntax is rejected during the code generation phase, causing\nless detailed error messages.\n\nWARNING: Although the definition of assignment implies that overlaps\nbetween the left-hand side and the right-hand side are \'safe\' (for\nexample ``a, b = b, a`` swaps two variables), overlaps *within* the\ncollection of assigned-to variables are not safe! For instance, the\nfollowing program prints ``[0, 2]``:\n\n x = [0, 1]\n i = 0\n i, x[i] = 1, 2\n print x\n\n\nAugmented assignment statements\n===============================\n\nAugmented assignment is the combination, in a single statement, of a\nbinary operation and an assignment statement:\n\n augmented_assignment_stmt ::= augtarget augop (expression_list | yield_expression)\n augtarget ::= identifier | attributeref | subscription | slicing\n augop ::= "+=" | "-=" | "*=" | "/=" | "//=" | "%=" | "**="\n | ">>=" | "<<=" | "&=" | "^=" | "|="\n\n(See section *Primaries* for the syntax definitions for the last three\nsymbols.)\n\nAn augmented assignment evaluates the target (which, unlike normal\nassignment statements, cannot be an unpacking) and the expression\nlist, performs the binary operation specific to the type of assignment\non the two operands, and assigns the result to the original target.\nThe target is only evaluated once.\n\nAn augmented assignment expression like ``x += 1`` can be rewritten as\n``x = x + 1`` to achieve a similar, but not exactly equal effect. In\nthe augmented version, ``x`` is only evaluated once. Also, when\npossible, the actual operation is performed *in-place*, meaning that\nrather than creating a new object and assigning that to the target,\nthe old object is modified instead.\n\nWith the exception of assigning to tuples and multiple targets in a\nsingle statement, the assignment done by augmented assignment\nstatements is handled the same way as normal assignments. Similarly,\nwith the exception of the possible *in-place* behavior, the binary\noperation performed by augmented assignment is the same as the normal\nbinary operations.\n\nFor targets which are attribute references, the same *caveat about\nclass and instance attributes* applies as for regular assignments.\n', + 'assignment': u'\nAssignment statements\n*********************\n\nAssignment statements are used to (re)bind names to values and to\nmodify attributes or items of mutable objects:\n\n assignment_stmt ::= (target_list "=")+ (expression_list | yield_expression)\n target_list ::= target ("," target)* [","]\n target ::= identifier\n | "(" target_list ")"\n | "[" target_list "]"\n | attributeref\n | subscription\n | slicing\n\n(See section *Primaries* for the syntax definitions for the last three\nsymbols.)\n\nAn assignment statement evaluates the expression list (remember that\nthis can be a single expression or a comma-separated list, the latter\nyielding a tuple) and assigns the single resulting object to each of\nthe target lists, from left to right.\n\nAssignment is defined recursively depending on the form of the target\n(list). 
When a target is part of a mutable object (an attribute\nreference, subscription or slicing), the mutable object must\nultimately perform the assignment and decide about its validity, and\nmay raise an exception if the assignment is unacceptable. The rules\nobserved by various types and the exceptions raised are given with the\ndefinition of the object types (see section *The standard type\nhierarchy*).\n\nAssignment of an object to a target list is recursively defined as\nfollows.\n\n* If the target list is a single target: The object is assigned to\n that target.\n\n* If the target list is a comma-separated list of targets: The object\n must be an iterable with the same number of items as there are\n targets in the target list, and the items are assigned, from left to\n right, to the corresponding targets.\n\nAssignment of an object to a single target is recursively defined as\nfollows.\n\n* If the target is an identifier (name):\n\n * If the name does not occur in a ``global`` statement in the\n current code block: the name is bound to the object in the current\n local namespace.\n\n * Otherwise: the name is bound to the object in the current global\n namespace.\n\n The name is rebound if it was already bound. This may cause the\n reference count for the object previously bound to the name to reach\n zero, causing the object to be deallocated and its destructor (if it\n has one) to be called.\n\n* If the target is a target list enclosed in parentheses or in square\n brackets: The object must be an iterable with the same number of\n items as there are targets in the target list, and its items are\n assigned, from left to right, to the corresponding targets.\n\n* If the target is an attribute reference: The primary expression in\n the reference is evaluated. It should yield an object with\n assignable attributes; if this is not the case, ``TypeError`` is\n raised. That object is then asked to assign the assigned object to\n the given attribute; if it cannot perform the assignment, it raises\n an exception (usually but not necessarily ``AttributeError``).\n\n Note: If the object is a class instance and the attribute reference\n occurs on both sides of the assignment operator, the RHS expression,\n ``a.x`` can access either an instance attribute or (if no instance\n attribute exists) a class attribute. The LHS target ``a.x`` is\n always set as an instance attribute, creating it if necessary.\n Thus, the two occurrences of ``a.x`` do not necessarily refer to the\n same attribute: if the RHS expression refers to a class attribute,\n the LHS creates a new instance attribute as the target of the\n assignment:\n\n class Cls:\n x = 3 # class variable\n inst = Cls()\n inst.x = inst.x + 1 # writes inst.x as 4 leaving Cls.x as 3\n\n This description does not necessarily apply to descriptor\n attributes, such as properties created with ``property()``.\n\n* If the target is a subscription: The primary expression in the\n reference is evaluated. It should yield either a mutable sequence\n object (such as a list) or a mapping object (such as a dictionary).\n Next, the subscript expression is evaluated.\n\n If the primary is a mutable sequence object (such as a list), the\n subscript must yield a plain integer. If it is negative, the\n sequence\'s length is added to it. The resulting value must be a\n nonnegative integer less than the sequence\'s length, and the\n sequence is asked to assign the assigned object to its item with\n that index. 
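# Sketch of subscription targets as described above: a negative index has the
# sequence's length added to it before the item is assigned.
lst = [10, 20, 30]
lst[-1] = 99                    # len(lst) == 3, so this assigns item 2
assert lst == [10, 20, 99]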
If the index is out of range, ``IndexError`` is raised\n (assignment to a subscripted sequence cannot add new items to a\n list).\n\n If the primary is a mapping object (such as a dictionary), the\n subscript must have a type compatible with the mapping\'s key type,\n and the mapping is then asked to create a key/datum pair which maps\n the subscript to the assigned object. This can either replace an\n existing key/value pair with the same key value, or insert a new\n key/value pair (if no key with the same value existed).\n\n* If the target is a slicing: The primary expression in the reference\n is evaluated. It should yield a mutable sequence object (such as a\n list). The assigned object should be a sequence object of the same\n type. Next, the lower and upper bound expressions are evaluated,\n insofar they are present; defaults are zero and the sequence\'s\n length. The bounds should evaluate to (small) integers. If either\n bound is negative, the sequence\'s length is added to it. The\n resulting bounds are clipped to lie between zero and the sequence\'s\n length, inclusive. Finally, the sequence object is asked to replace\n the slice with the items of the assigned sequence. The length of\n the slice may be different from the length of the assigned sequence,\n thus changing the length of the target sequence, if the object\n allows it.\n\n**CPython implementation detail:** In the current implementation, the\nsyntax for targets is taken to be the same as for expressions, and\ninvalid syntax is rejected during the code generation phase, causing\nless detailed error messages.\n\nWARNING: Although the definition of assignment implies that overlaps\nbetween the left-hand side and the right-hand side are \'safe\' (for\nexample ``a, b = b, a`` swaps two variables), overlaps *within* the\ncollection of assigned-to variables are not safe! For instance, the\nfollowing program prints ``[0, 2]``:\n\n x = [0, 1]\n i = 0\n i, x[i] = 1, 2\n print x\n\n\nAugmented assignment statements\n===============================\n\nAugmented assignment is the combination, in a single statement, of a\nbinary operation and an assignment statement:\n\n augmented_assignment_stmt ::= augtarget augop (expression_list | yield_expression)\n augtarget ::= identifier | attributeref | subscription | slicing\n augop ::= "+=" | "-=" | "*=" | "/=" | "//=" | "%=" | "**="\n | ">>=" | "<<=" | "&=" | "^=" | "|="\n\n(See section *Primaries* for the syntax definitions for the last three\nsymbols.)\n\nAn augmented assignment evaluates the target (which, unlike normal\nassignment statements, cannot be an unpacking) and the expression\nlist, performs the binary operation specific to the type of assignment\non the two operands, and assigns the result to the original target.\nThe target is only evaluated once.\n\nAn augmented assignment expression like ``x += 1`` can be rewritten as\n``x = x + 1`` to achieve a similar, but not exactly equal effect. In\nthe augmented version, ``x`` is only evaluated once. Also, when\npossible, the actual operation is performed *in-place*, meaning that\nrather than creating a new object and assigning that to the target,\nthe old object is modified instead.\n\nWith the exception of assigning to tuples and multiple targets in a\nsingle statement, the assignment done by augmented assignment\nstatements is handled the same way as normal assignments. 
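# Sketch of the in-place behaviour of augmented assignment described above:
# for a mutable object, ``+=`` modifies the object itself, while the spelled-
# out form rebinds the name to a new object.
x = y = [1, 2]
x += [3]                        # list.__iadd__ mutates the shared list
assert y == [1, 2, 3]
x = x + [4]                     # builds a new list and rebinds x only
assert y == [1, 2, 3] and x == [1, 2, 3, 4]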
Similarly,\nwith the exception of the possible *in-place* behavior, the binary\noperation performed by augmented assignment is the same as the normal\nbinary operations.\n\nFor targets which are attribute references, the same *caveat about\nclass and instance attributes* applies as for regular assignments.\n', 'atom-identifiers': u'\nIdentifiers (Names)\n*******************\n\nAn identifier occurring as an atom is a name. See section\n*Identifiers and keywords* for lexical definition and section *Naming\nand binding* for documentation of naming and binding.\n\nWhen the name is bound to an object, evaluation of the atom yields\nthat object. When a name is not bound, an attempt to evaluate it\nraises a ``NameError`` exception.\n\n**Private name mangling:** When an identifier that textually occurs in\na class definition begins with two or more underscore characters and\ndoes not end in two or more underscores, it is considered a *private\nname* of that class. Private names are transformed to a longer form\nbefore code is generated for them. The transformation inserts the\nclass name in front of the name, with leading underscores removed, and\na single underscore inserted in front of the class name. For example,\nthe identifier ``__spam`` occurring in a class named ``Ham`` will be\ntransformed to ``_Ham__spam``. This transformation is independent of\nthe syntactical context in which the identifier is used. If the\ntransformed name is extremely long (longer than 255 characters),\nimplementation defined truncation may happen. If the class name\nconsists only of underscores, no transformation is done.\n', 'atom-literals': u"\nLiterals\n********\n\nPython supports string literals and various numeric literals:\n\n literal ::= stringliteral | integer | longinteger\n | floatnumber | imagnumber\n\nEvaluation of a literal yields an object of the given type (string,\ninteger, long integer, floating point number, complex number) with the\ngiven value. The value may be approximated in the case of floating\npoint and imaginary (complex) literals. See section *Literals* for\ndetails.\n\nAll literals correspond to immutable data types, and hence the\nobject's identity is less important than its value. Multiple\nevaluations of literals with the same value (either the same\noccurrence in the program text or a different occurrence) may obtain\nthe same object or a different object with the same value.\n", - 'attribute-access': u'\nCustomizing attribute access\n****************************\n\nThe following methods can be defined to customize the meaning of\nattribute access (use of, assignment to, or deletion of ``x.name``)\nfor class instances.\n\nobject.__getattr__(self, name)\n\n Called when an attribute lookup has not found the attribute in the\n usual places (i.e. it is not an instance attribute nor is it found\n in the class tree for ``self``). ``name`` is the attribute name.\n This method should return the (computed) attribute value or raise\n an ``AttributeError`` exception.\n\n Note that if the attribute is found through the normal mechanism,\n ``__getattr__()`` is not called. (This is an intentional asymmetry\n between ``__getattr__()`` and ``__setattr__()``.) This is done both\n for efficiency reasons and because otherwise ``__getattr__()``\n would have no way to access other attributes of the instance. Note\n that at least for instance variables, you can fake total control by\n not inserting any values in the instance attribute dictionary (but\n instead inserting them in another object). 
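# Sketch of the ``__getattr__()`` behaviour described above: it is only called
# when normal lookup fails, so existing attributes never reach it.
class Defaults(object):
    color = 'red'
    def __getattr__(self, name):
        return 'missing:' + name
d = Defaults()
assert d.color == 'red'             # found normally, __getattr__ not called
assert d.size == 'missing:size'     # lookup failed, __getattr__ supplies it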
See the\n ``__getattribute__()`` method below for a way to actually get total\n control in new-style classes.\n\nobject.__setattr__(self, name, value)\n\n Called when an attribute assignment is attempted. This is called\n instead of the normal mechanism (i.e. store the value in the\n instance dictionary). *name* is the attribute name, *value* is the\n value to be assigned to it.\n\n If ``__setattr__()`` wants to assign to an instance attribute, it\n should not simply execute ``self.name = value`` --- this would\n cause a recursive call to itself. Instead, it should insert the\n value in the dictionary of instance attributes, e.g.,\n ``self.__dict__[name] = value``. For new-style classes, rather\n than accessing the instance dictionary, it should call the base\n class method with the same name, for example,\n ``object.__setattr__(self, name, value)``.\n\nobject.__delattr__(self, name)\n\n Like ``__setattr__()`` but for attribute deletion instead of\n assignment. This should only be implemented if ``del obj.name`` is\n meaningful for the object.\n\n\nMore attribute access for new-style classes\n===========================================\n\nThe following methods only apply to new-style classes.\n\nobject.__getattribute__(self, name)\n\n Called unconditionally to implement attribute accesses for\n instances of the class. If the class also defines\n ``__getattr__()``, the latter will not be called unless\n ``__getattribute__()`` either calls it explicitly or raises an\n ``AttributeError``. This method should return the (computed)\n attribute value or raise an ``AttributeError`` exception. In order\n to avoid infinite recursion in this method, its implementation\n should always call the base class method with the same name to\n access any attributes it needs, for example,\n ``object.__getattribute__(self, name)``.\n\n Note: This method may still be bypassed when looking up special methods\n as the result of implicit invocation via language syntax or\n built-in functions. See *Special method lookup for new-style\n classes*.\n\n\nImplementing Descriptors\n========================\n\nThe following methods only apply when an instance of the class\ncontaining the method (a so-called *descriptor* class) appears in the\nclass dictionary of another new-style class, known as the *owner*\nclass. In the examples below, "the attribute" refers to the attribute\nwhose name is the key of the property in the owner class\'\n``__dict__``. Descriptors can only be implemented as new-style\nclasses themselves.\n\nobject.__get__(self, instance, owner)\n\n Called to get the attribute of the owner class (class attribute\n access) or of an instance of that class (instance attribute\n access). *owner* is always the owner class, while *instance* is the\n instance that the attribute was accessed through, or ``None`` when\n the attribute is accessed through the *owner*. This method should\n return the (computed) attribute value or raise an\n ``AttributeError`` exception.\n\nobject.__set__(self, instance, value)\n\n Called to set the attribute on an instance *instance* of the owner\n class to a new value, *value*.\n\nobject.__delete__(self, instance)\n\n Called to delete the attribute on an instance *instance* of the\n owner class.\n\n\nInvoking Descriptors\n====================\n\nIn general, a descriptor is an object attribute with "binding\nbehavior", one whose attribute access has been overridden by methods\nin the descriptor protocol: ``__get__()``, ``__set__()``, and\n``__delete__()``. 
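# Sketch of the ``__setattr__()`` advice above: assign through the base class
# (or self.__dict__) rather than ``self.name = value``, which would recurse.
class Point(object):
    def __setattr__(self, name, value):
        object.__setattr__(self, name, float(value))
p = Point()
p.x = 3                             # stored as 3.0, no infinite recursion
assert p.x == 3.0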
If any of those methods are defined for an object,\nit is said to be a descriptor.\n\nThe default behavior for attribute access is to get, set, or delete\nthe attribute from an object\'s dictionary. For instance, ``a.x`` has a\nlookup chain starting with ``a.__dict__[\'x\']``, then\n``type(a).__dict__[\'x\']``, and continuing through the base classes of\n``type(a)`` excluding metaclasses.\n\nHowever, if the looked-up value is an object defining one of the\ndescriptor methods, then Python may override the default behavior and\ninvoke the descriptor method instead. Where this occurs in the\nprecedence chain depends on which descriptor methods were defined and\nhow they were called. Note that descriptors are only invoked for new\nstyle objects or classes (ones that subclass ``object()`` or\n``type()``).\n\nThe starting point for descriptor invocation is a binding, ``a.x``.\nHow the arguments are assembled depends on ``a``:\n\nDirect Call\n The simplest and least common call is when user code directly\n invokes a descriptor method: ``x.__get__(a)``.\n\nInstance Binding\n If binding to a new-style object instance, ``a.x`` is transformed\n into the call: ``type(a).__dict__[\'x\'].__get__(a, type(a))``.\n\nClass Binding\n If binding to a new-style class, ``A.x`` is transformed into the\n call: ``A.__dict__[\'x\'].__get__(None, A)``.\n\nSuper Binding\n If ``a`` is an instance of ``super``, then the binding ``super(B,\n obj).m()`` searches ``obj.__class__.__mro__`` for the base class\n ``A`` immediately preceding ``B`` and then invokes the descriptor\n with the call: ``A.__dict__[\'m\'].__get__(obj, A)``.\n\nFor instance bindings, the precedence of descriptor invocation depends\non the which descriptor methods are defined. A descriptor can define\nany combination of ``__get__()``, ``__set__()`` and ``__delete__()``.\nIf it does not define ``__get__()``, then accessing the attribute will\nreturn the descriptor object itself unless there is a value in the\nobject\'s instance dictionary. If the descriptor defines ``__set__()``\nand/or ``__delete__()``, it is a data descriptor; if it defines\nneither, it is a non-data descriptor. Normally, data descriptors\ndefine both ``__get__()`` and ``__set__()``, while non-data\ndescriptors have just the ``__get__()`` method. Data descriptors with\n``__set__()`` and ``__get__()`` defined always override a redefinition\nin an instance dictionary. In contrast, non-data descriptors can be\noverridden by instances.\n\nPython methods (including ``staticmethod()`` and ``classmethod()``)\nare implemented as non-data descriptors. Accordingly, instances can\nredefine and override methods. This allows individual instances to\nacquire behaviors that differ from other instances of the same class.\n\nThe ``property()`` function is implemented as a data descriptor.\nAccordingly, instances cannot override the behavior of a property.\n\n\n__slots__\n=========\n\nBy default, instances of both old and new-style classes have a\ndictionary for attribute storage. This wastes space for objects\nhaving very few instance variables. The space consumption can become\nacute when creating large numbers of instances.\n\nThe default can be overridden by defining *__slots__* in a new-style\nclass definition. The *__slots__* declaration takes a sequence of\ninstance variables and reserves just enough space in each instance to\nhold a value for each variable. 
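# Sketch of the data-descriptor precedence described above: a descriptor class
# (the name ``Constant`` is illustrative only) defining both __get__ and
# __set__ overrides a value planted in the instance dictionary.
class Constant(object):
    def __get__(self, instance, owner):
        return 42
    def __set__(self, instance, value):
        raise AttributeError("read-only")
class C(object):
    x = Constant()
c = C()
c.__dict__['x'] = 'shadowed'
assert c.x == 42                    # the data descriptor wins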
Space is saved because *__dict__* is\nnot created for each instance.\n\n__slots__\n\n This class variable can be assigned a string, iterable, or sequence\n of strings with variable names used by instances. If defined in a\n new-style class, *__slots__* reserves space for the declared\n variables and prevents the automatic creation of *__dict__* and\n *__weakref__* for each instance.\n\n New in version 2.2.\n\nNotes on using *__slots__*\n\n* When inheriting from a class without *__slots__*, the *__dict__*\n attribute of that class will always be accessible, so a *__slots__*\n definition in the subclass is meaningless.\n\n* Without a *__dict__* variable, instances cannot be assigned new\n variables not listed in the *__slots__* definition. Attempts to\n assign to an unlisted variable name raises ``AttributeError``. If\n dynamic assignment of new variables is desired, then add\n ``\'__dict__\'`` to the sequence of strings in the *__slots__*\n declaration.\n\n Changed in version 2.3: Previously, adding ``\'__dict__\'`` to the\n *__slots__* declaration would not enable the assignment of new\n attributes not specifically listed in the sequence of instance\n variable names.\n\n* Without a *__weakref__* variable for each instance, classes defining\n *__slots__* do not support weak references to its instances. If weak\n reference support is needed, then add ``\'__weakref__\'`` to the\n sequence of strings in the *__slots__* declaration.\n\n Changed in version 2.3: Previously, adding ``\'__weakref__\'`` to the\n *__slots__* declaration would not enable support for weak\n references.\n\n* *__slots__* are implemented at the class level by creating\n descriptors (*Implementing Descriptors*) for each variable name. As\n a result, class attributes cannot be used to set default values for\n instance variables defined by *__slots__*; otherwise, the class\n attribute would overwrite the descriptor assignment.\n\n* The action of a *__slots__* declaration is limited to the class\n where it is defined. As a result, subclasses will have a *__dict__*\n unless they also define *__slots__* (which must only contain names\n of any *additional* slots).\n\n* If a class defines a slot also defined in a base class, the instance\n variable defined by the base class slot is inaccessible (except by\n retrieving its descriptor directly from the base class). This\n renders the meaning of the program undefined. In the future, a\n check may be added to prevent this.\n\n* Nonempty *__slots__* does not work for classes derived from\n "variable-length" built-in types such as ``long``, ``str`` and\n ``tuple``.\n\n* Any non-string iterable may be assigned to *__slots__*. Mappings may\n also be used; however, in the future, special meaning may be\n assigned to the values corresponding to each key.\n\n* *__class__* assignment works only if both classes have the same\n *__slots__*.\n\n Changed in version 2.6: Previously, *__class__* assignment raised an\n error if either new or old class had *__slots__*.\n', + 'attribute-access': u'\nCustomizing attribute access\n****************************\n\nThe following methods can be defined to customize the meaning of\nattribute access (use of, assignment to, or deletion of ``x.name``)\nfor class instances.\n\nobject.__getattr__(self, name)\n\n Called when an attribute lookup has not found the attribute in the\n usual places (i.e. it is not an instance attribute nor is it found\n in the class tree for ``self``). 
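# Sketch of the *__slots__* notes above: slotted instances have no __dict__,
# and assigning an undeclared name raises AttributeError.
class Pair(object):
    __slots__ = ('x', 'y')
p = Pair()
p.x = 1
try:
    p.z = 3                         # 'z' is not listed in __slots__
except AttributeError:
    pass
assert not hasattr(p, '__dict__')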
``name`` is the attribute name.\n This method should return the (computed) attribute value or raise\n an ``AttributeError`` exception.\n\n Note that if the attribute is found through the normal mechanism,\n ``__getattr__()`` is not called. (This is an intentional asymmetry\n between ``__getattr__()`` and ``__setattr__()``.) This is done both\n for efficiency reasons and because otherwise ``__getattr__()``\n would have no way to access other attributes of the instance. Note\n that at least for instance variables, you can fake total control by\n not inserting any values in the instance attribute dictionary (but\n instead inserting them in another object). See the\n ``__getattribute__()`` method below for a way to actually get total\n control in new-style classes.\n\nobject.__setattr__(self, name, value)\n\n Called when an attribute assignment is attempted. This is called\n instead of the normal mechanism (i.e. store the value in the\n instance dictionary). *name* is the attribute name, *value* is the\n value to be assigned to it.\n\n If ``__setattr__()`` wants to assign to an instance attribute, it\n should not simply execute ``self.name = value`` --- this would\n cause a recursive call to itself. Instead, it should insert the\n value in the dictionary of instance attributes, e.g.,\n ``self.__dict__[name] = value``. For new-style classes, rather\n than accessing the instance dictionary, it should call the base\n class method with the same name, for example,\n ``object.__setattr__(self, name, value)``.\n\nobject.__delattr__(self, name)\n\n Like ``__setattr__()`` but for attribute deletion instead of\n assignment. This should only be implemented if ``del obj.name`` is\n meaningful for the object.\n\n\nMore attribute access for new-style classes\n===========================================\n\nThe following methods only apply to new-style classes.\n\nobject.__getattribute__(self, name)\n\n Called unconditionally to implement attribute accesses for\n instances of the class. If the class also defines\n ``__getattr__()``, the latter will not be called unless\n ``__getattribute__()`` either calls it explicitly or raises an\n ``AttributeError``. This method should return the (computed)\n attribute value or raise an ``AttributeError`` exception. In order\n to avoid infinite recursion in this method, its implementation\n should always call the base class method with the same name to\n access any attributes it needs, for example,\n ``object.__getattribute__(self, name)``.\n\n Note: This method may still be bypassed when looking up special methods\n as the result of implicit invocation via language syntax or\n built-in functions. See *Special method lookup for new-style\n classes*.\n\n\nImplementing Descriptors\n========================\n\nThe following methods only apply when an instance of the class\ncontaining the method (a so-called *descriptor* class) appears in an\n*owner* class (the descriptor must be in either the owner\'s class\ndictionary or in the class dictionary for one of its parents). In the\nexamples below, "the attribute" refers to the attribute whose name is\nthe key of the property in the owner class\' ``__dict__``.\n\nobject.__get__(self, instance, owner)\n\n Called to get the attribute of the owner class (class attribute\n access) or of an instance of that class (instance attribute\n access). *owner* is always the owner class, while *instance* is the\n instance that the attribute was accessed through, or ``None`` when\n the attribute is accessed through the *owner*. 
This method should\n return the (computed) attribute value or raise an\n ``AttributeError`` exception.\n\nobject.__set__(self, instance, value)\n\n Called to set the attribute on an instance *instance* of the owner\n class to a new value, *value*.\n\nobject.__delete__(self, instance)\n\n Called to delete the attribute on an instance *instance* of the\n owner class.\n\n\nInvoking Descriptors\n====================\n\nIn general, a descriptor is an object attribute with "binding\nbehavior", one whose attribute access has been overridden by methods\nin the descriptor protocol: ``__get__()``, ``__set__()``, and\n``__delete__()``. If any of those methods are defined for an object,\nit is said to be a descriptor.\n\nThe default behavior for attribute access is to get, set, or delete\nthe attribute from an object\'s dictionary. For instance, ``a.x`` has a\nlookup chain starting with ``a.__dict__[\'x\']``, then\n``type(a).__dict__[\'x\']``, and continuing through the base classes of\n``type(a)`` excluding metaclasses.\n\nHowever, if the looked-up value is an object defining one of the\ndescriptor methods, then Python may override the default behavior and\ninvoke the descriptor method instead. Where this occurs in the\nprecedence chain depends on which descriptor methods were defined and\nhow they were called. Note that descriptors are only invoked for new\nstyle objects or classes (ones that subclass ``object()`` or\n``type()``).\n\nThe starting point for descriptor invocation is a binding, ``a.x``.\nHow the arguments are assembled depends on ``a``:\n\nDirect Call\n The simplest and least common call is when user code directly\n invokes a descriptor method: ``x.__get__(a)``.\n\nInstance Binding\n If binding to a new-style object instance, ``a.x`` is transformed\n into the call: ``type(a).__dict__[\'x\'].__get__(a, type(a))``.\n\nClass Binding\n If binding to a new-style class, ``A.x`` is transformed into the\n call: ``A.__dict__[\'x\'].__get__(None, A)``.\n\nSuper Binding\n If ``a`` is an instance of ``super``, then the binding ``super(B,\n obj).m()`` searches ``obj.__class__.__mro__`` for the base class\n ``A`` immediately preceding ``B`` and then invokes the descriptor\n with the call: ``A.__dict__[\'m\'].__get__(obj, obj.__class__)``.\n\nFor instance bindings, the precedence of descriptor invocation depends\non the which descriptor methods are defined. A descriptor can define\nany combination of ``__get__()``, ``__set__()`` and ``__delete__()``.\nIf it does not define ``__get__()``, then accessing the attribute will\nreturn the descriptor object itself unless there is a value in the\nobject\'s instance dictionary. If the descriptor defines ``__set__()``\nand/or ``__delete__()``, it is a data descriptor; if it defines\nneither, it is a non-data descriptor. Normally, data descriptors\ndefine both ``__get__()`` and ``__set__()``, while non-data\ndescriptors have just the ``__get__()`` method. Data descriptors with\n``__set__()`` and ``__get__()`` defined always override a redefinition\nin an instance dictionary. In contrast, non-data descriptors can be\noverridden by instances.\n\nPython methods (including ``staticmethod()`` and ``classmethod()``)\nare implemented as non-data descriptors. Accordingly, instances can\nredefine and override methods. 
This allows individual instances to\nacquire behaviors that differ from other instances of the same class.\n\nThe ``property()`` function is implemented as a data descriptor.\nAccordingly, instances cannot override the behavior of a property.\n\n\n__slots__\n=========\n\nBy default, instances of both old and new-style classes have a\ndictionary for attribute storage. This wastes space for objects\nhaving very few instance variables. The space consumption can become\nacute when creating large numbers of instances.\n\nThe default can be overridden by defining *__slots__* in a new-style\nclass definition. The *__slots__* declaration takes a sequence of\ninstance variables and reserves just enough space in each instance to\nhold a value for each variable. Space is saved because *__dict__* is\nnot created for each instance.\n\n__slots__\n\n This class variable can be assigned a string, iterable, or sequence\n of strings with variable names used by instances. If defined in a\n new-style class, *__slots__* reserves space for the declared\n variables and prevents the automatic creation of *__dict__* and\n *__weakref__* for each instance.\n\n New in version 2.2.\n\nNotes on using *__slots__*\n\n* When inheriting from a class without *__slots__*, the *__dict__*\n attribute of that class will always be accessible, so a *__slots__*\n definition in the subclass is meaningless.\n\n* Without a *__dict__* variable, instances cannot be assigned new\n variables not listed in the *__slots__* definition. Attempts to\n assign to an unlisted variable name raises ``AttributeError``. If\n dynamic assignment of new variables is desired, then add\n ``\'__dict__\'`` to the sequence of strings in the *__slots__*\n declaration.\n\n Changed in version 2.3: Previously, adding ``\'__dict__\'`` to the\n *__slots__* declaration would not enable the assignment of new\n attributes not specifically listed in the sequence of instance\n variable names.\n\n* Without a *__weakref__* variable for each instance, classes defining\n *__slots__* do not support weak references to its instances. If weak\n reference support is needed, then add ``\'__weakref__\'`` to the\n sequence of strings in the *__slots__* declaration.\n\n Changed in version 2.3: Previously, adding ``\'__weakref__\'`` to the\n *__slots__* declaration would not enable support for weak\n references.\n\n* *__slots__* are implemented at the class level by creating\n descriptors (*Implementing Descriptors*) for each variable name. As\n a result, class attributes cannot be used to set default values for\n instance variables defined by *__slots__*; otherwise, the class\n attribute would overwrite the descriptor assignment.\n\n* The action of a *__slots__* declaration is limited to the class\n where it is defined. As a result, subclasses will have a *__dict__*\n unless they also define *__slots__* (which must only contain names\n of any *additional* slots).\n\n* If a class defines a slot also defined in a base class, the instance\n variable defined by the base class slot is inaccessible (except by\n retrieving its descriptor directly from the base class). This\n renders the meaning of the program undefined. In the future, a\n check may be added to prevent this.\n\n* Nonempty *__slots__* does not work for classes derived from\n "variable-length" built-in types such as ``long``, ``str`` and\n ``tuple``.\n\n* Any non-string iterable may be assigned to *__slots__*. 
Mappings may\n also be used; however, in the future, special meaning may be\n assigned to the values corresponding to each key.\n\n* *__class__* assignment works only if both classes have the same\n *__slots__*.\n\n Changed in version 2.6: Previously, *__class__* assignment raised an\n error if either new or old class had *__slots__*.\n', 'attribute-references': u'\nAttribute references\n********************\n\nAn attribute reference is a primary followed by a period and a name:\n\n attributeref ::= primary "." identifier\n\nThe primary must evaluate to an object of a type that supports\nattribute references, e.g., a module, list, or an instance. This\nobject is then asked to produce the attribute whose name is the\nidentifier. If this attribute is not available, the exception\n``AttributeError`` is raised. Otherwise, the type and value of the\nobject produced is determined by the object. Multiple evaluations of\nthe same attribute reference may yield different objects.\n', 'augassign': u'\nAugmented assignment statements\n*******************************\n\nAugmented assignment is the combination, in a single statement, of a\nbinary operation and an assignment statement:\n\n augmented_assignment_stmt ::= augtarget augop (expression_list | yield_expression)\n augtarget ::= identifier | attributeref | subscription | slicing\n augop ::= "+=" | "-=" | "*=" | "/=" | "//=" | "%=" | "**="\n | ">>=" | "<<=" | "&=" | "^=" | "|="\n\n(See section *Primaries* for the syntax definitions for the last three\nsymbols.)\n\nAn augmented assignment evaluates the target (which, unlike normal\nassignment statements, cannot be an unpacking) and the expression\nlist, performs the binary operation specific to the type of assignment\non the two operands, and assigns the result to the original target.\nThe target is only evaluated once.\n\nAn augmented assignment expression like ``x += 1`` can be rewritten as\n``x = x + 1`` to achieve a similar, but not exactly equal effect. In\nthe augmented version, ``x`` is only evaluated once. Also, when\npossible, the actual operation is performed *in-place*, meaning that\nrather than creating a new object and assigning that to the target,\nthe old object is modified instead.\n\nWith the exception of assigning to tuples and multiple targets in a\nsingle statement, the assignment done by augmented assignment\nstatements is handled the same way as normal assignments. Similarly,\nwith the exception of the possible *in-place* behavior, the binary\noperation performed by augmented assignment is the same as the normal\nbinary operations.\n\nFor targets which are attribute references, the same *caveat about\nclass and instance attributes* applies as for regular assignments.\n', 'binary': u'\nBinary arithmetic operations\n****************************\n\nThe binary arithmetic operations have the conventional priority\nlevels. Note that some of these operations also apply to certain non-\nnumeric types. Apart from the power operator, there are only two\nlevels, one for multiplicative operators and one for additive\noperators:\n\n m_expr ::= u_expr | m_expr "*" u_expr | m_expr "//" u_expr | m_expr "/" u_expr\n | m_expr "%" u_expr\n a_expr ::= m_expr | a_expr "+" m_expr | a_expr "-" m_expr\n\nThe ``*`` (multiplication) operator yields the product of its\narguments. The arguments must either both be numbers, or one argument\nmust be an integer (plain or long) and the other must be a sequence.\nIn the former case, the numbers are converted to a common type and\nthen multiplied together. 
In the latter case, sequence repetition is\nperformed; a negative repetition factor yields an empty sequence.\n\nThe ``/`` (division) and ``//`` (floor division) operators yield the\nquotient of their arguments. The numeric arguments are first\nconverted to a common type. Plain or long integer division yields an\ninteger of the same type; the result is that of mathematical division\nwith the \'floor\' function applied to the result. Division by zero\nraises the ``ZeroDivisionError`` exception.\n\nThe ``%`` (modulo) operator yields the remainder from the division of\nthe first argument by the second. The numeric arguments are first\nconverted to a common type. A zero right argument raises the\n``ZeroDivisionError`` exception. The arguments may be floating point\nnumbers, e.g., ``3.14%0.7`` equals ``0.34`` (since ``3.14`` equals\n``4*0.7 + 0.34``.) The modulo operator always yields a result with\nthe same sign as its second operand (or zero); the absolute value of\nthe result is strictly smaller than the absolute value of the second\noperand [2].\n\nThe integer division and modulo operators are connected by the\nfollowing identity: ``x == (x/y)*y + (x%y)``. Integer division and\nmodulo are also connected with the built-in function ``divmod()``:\n``divmod(x, y) == (x/y, x%y)``. These identities don\'t hold for\nfloating point numbers; there similar identities hold approximately\nwhere ``x/y`` is replaced by ``floor(x/y)`` or ``floor(x/y) - 1`` [3].\n\nIn addition to performing the modulo operation on numbers, the ``%``\noperator is also overloaded by string and unicode objects to perform\nstring formatting (also known as interpolation). The syntax for string\nformatting is described in the Python Library Reference, section\n*String Formatting Operations*.\n\nDeprecated since version 2.3: The floor division operator, the modulo\noperator, and the ``divmod()`` function are no longer defined for\ncomplex numbers. Instead, convert to a floating point number using\nthe ``abs()`` function if appropriate.\n\nThe ``+`` (addition) operator yields the sum of its arguments. The\narguments must either both be numbers or both sequences of the same\ntype. In the former case, the numbers are converted to a common type\nand then added together. In the latter case, the sequences are\nconcatenated.\n\nThe ``-`` (subtraction) operator yields the difference of its\narguments. The numeric arguments are first converted to a common\ntype.\n', 'bitwise': u'\nBinary bitwise operations\n*************************\n\nEach of the three bitwise operations has a different priority level:\n\n and_expr ::= shift_expr | and_expr "&" shift_expr\n xor_expr ::= and_expr | xor_expr "^" and_expr\n or_expr ::= xor_expr | or_expr "|" xor_expr\n\nThe ``&`` operator yields the bitwise AND of its arguments, which must\nbe plain or long integers. The arguments are converted to a common\ntype.\n\nThe ``^`` operator yields the bitwise XOR (exclusive OR) of its\narguments, which must be plain or long integers. The arguments are\nconverted to a common type.\n\nThe ``|`` operator yields the bitwise (inclusive) OR of its arguments,\nwhich must be plain or long integers. The arguments are converted to\na common type.\n', 'bltin-code-objects': u'\nCode Objects\n************\n\nCode objects are used by the implementation to represent "pseudo-\ncompiled" executable Python code such as a function body. They differ\nfrom function objects because they don\'t contain a reference to their\nglobal execution environment. 
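# Sketch of the integer division/modulo identities quoted above (Python 2
# semantics: ``/`` on plain integers floors, and ``%`` takes the sign of the
# second operand), plus the three bitwise operators.
x, y = 7, -3
assert x / y == -3 and x % y == -2
assert x == (x / y) * y + (x % y)
assert divmod(x, y) == (x / y, x % y)
assert (6 & 3, 6 ^ 3, 6 | 3) == (2, 5, 7)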
Code objects are returned by the built-\nin ``compile()`` function and can be extracted from function objects\nthrough their ``func_code`` attribute. See also the ``code`` module.\n\nA code object can be executed or evaluated by passing it (instead of a\nsource string) to the ``exec`` statement or the built-in ``eval()``\nfunction.\n\nSee *The standard type hierarchy* for more information.\n', 'bltin-ellipsis-object': u'\nThe Ellipsis Object\n*******************\n\nThis object is used by extended slice notation (see *Slicings*). It\nsupports no special operations. There is exactly one ellipsis object,\nnamed ``Ellipsis`` (a built-in name).\n\nIt is written as ``Ellipsis``.\n', - 'bltin-file-objects': u'\nFile Objects\n************\n\nFile objects are implemented using C\'s ``stdio`` package and can be\ncreated with the built-in ``open()`` function. File objects are also\nreturned by some other built-in functions and methods, such as\n``os.popen()`` and ``os.fdopen()`` and the ``makefile()`` method of\nsocket objects. Temporary files can be created using the ``tempfile``\nmodule, and high-level file operations such as copying, moving, and\ndeleting files and directories can be achieved with the ``shutil``\nmodule.\n\nWhen a file operation fails for an I/O-related reason, the exception\n``IOError`` is raised. This includes situations where the operation\nis not defined for some reason, like ``seek()`` on a tty device or\nwriting a file opened for reading.\n\nFiles have the following methods:\n\nfile.close()\n\n Close the file. A closed file cannot be read or written any more.\n Any operation which requires that the file be open will raise a\n ``ValueError`` after the file has been closed. Calling ``close()``\n more than once is allowed.\n\n As of Python 2.5, you can avoid having to call this method\n explicitly if you use the ``with`` statement. For example, the\n following code will automatically close *f* when the ``with`` block\n is exited:\n\n from __future__ import with_statement # This isn\'t required in Python 2.6\n\n with open("hello.txt") as f:\n for line in f:\n print line\n\n In older versions of Python, you would have needed to do this to\n get the same effect:\n\n f = open("hello.txt")\n try:\n for line in f:\n print line\n finally:\n f.close()\n\n Note: Not all "file-like" types in Python support use as a context\n manager for the ``with`` statement. If your code is intended to\n work with any file-like object, you can use the function\n ``contextlib.closing()`` instead of using the object directly.\n\nfile.flush()\n\n Flush the internal buffer, like ``stdio``\'s ``fflush()``. 
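# Sketch of the code-object round trip described above, using the Python 2
# ``exec`` statement and the ``func_code`` attribute.
code = compile('result = 6 * 7', '<sketch>', 'exec')
namespace = {}
exec code in namespace
assert namespace['result'] == 42
def f():
    pass
assert f.func_code.co_name == 'f'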
This may\n be a no-op on some file-like objects.\n\n Note: ``flush()`` does not necessarily write the file\'s data to disk.\n Use ``flush()`` followed by ``os.fsync()`` to ensure this\n behavior.\n\nfile.fileno()\n\n Return the integer "file descriptor" that is used by the underlying\n implementation to request I/O operations from the operating system.\n This can be useful for other, lower level interfaces that use file\n descriptors, such as the ``fcntl`` module or ``os.read()`` and\n friends.\n\n Note: File-like objects which do not have a real file descriptor should\n *not* provide this method!\n\nfile.isatty()\n\n Return ``True`` if the file is connected to a tty(-like) device,\n else ``False``.\n\n Note: If a file-like object is not associated with a real file, this\n method should *not* be implemented.\n\nfile.next()\n\n A file object is its own iterator, for example ``iter(f)`` returns\n *f* (unless *f* is closed). When a file is used as an iterator,\n typically in a ``for`` loop (for example, ``for line in f: print\n line``), the ``next()`` method is called repeatedly. This method\n returns the next input line, or raises ``StopIteration`` when EOF\n is hit when the file is open for reading (behavior is undefined\n when the file is open for writing). In order to make a ``for``\n loop the most efficient way of looping over the lines of a file (a\n very common operation), the ``next()`` method uses a hidden read-\n ahead buffer. As a consequence of using a read-ahead buffer,\n combining ``next()`` with other file methods (like ``readline()``)\n does not work right. However, using ``seek()`` to reposition the\n file to an absolute position will flush the read-ahead buffer.\n\n New in version 2.3.\n\nfile.read([size])\n\n Read at most *size* bytes from the file (less if the read hits EOF\n before obtaining *size* bytes). If the *size* argument is negative\n or omitted, read all data until EOF is reached. The bytes are\n returned as a string object. An empty string is returned when EOF\n is encountered immediately. (For certain files, like ttys, it\n makes sense to continue reading after an EOF is hit.) Note that\n this method may call the underlying C function ``fread()`` more\n than once in an effort to acquire as close to *size* bytes as\n possible. Also note that when in non-blocking mode, less data than\n was requested may be returned, even if no *size* parameter was\n given.\n\n Note: This function is simply a wrapper for the underlying ``fread()``\n C function, and will behave the same in corner cases, such as\n whether the EOF value is cached.\n\nfile.readline([size])\n\n Read one entire line from the file. A trailing newline character\n is kept in the string (but may be absent when a file ends with an\n incomplete line). [5] If the *size* argument is present and non-\n negative, it is a maximum byte count (including the trailing\n newline) and an incomplete line may be returned. An empty string is\n returned *only* when EOF is encountered immediately.\n\n Note: Unlike ``stdio``\'s ``fgets()``, the returned string contains null\n characters (``\'\\0\'``) if they occurred in the input.\n\nfile.readlines([sizehint])\n\n Read until EOF using ``readline()`` and return a list containing\n the lines thus read. If the optional *sizehint* argument is\n present, instead of reading up to EOF, whole lines totalling\n approximately *sizehint* bytes (possibly after rounding up to an\n internal buffer size) are read. 
Objects implementing a file-like\n interface may choose to ignore *sizehint* if it cannot be\n implemented, or cannot be implemented efficiently.\n\nfile.xreadlines()\n\n This method returns the same thing as ``iter(f)``.\n\n New in version 2.1.\n\n Deprecated since version 2.3: Use ``for line in file`` instead.\n\nfile.seek(offset[, whence])\n\n Set the file\'s current position, like ``stdio``\'s ``fseek()``. The\n *whence* argument is optional and defaults to ``os.SEEK_SET`` or\n ``0`` (absolute file positioning); other values are ``os.SEEK_CUR``\n or ``1`` (seek relative to the current position) and\n ``os.SEEK_END`` or ``2`` (seek relative to the file\'s end). There\n is no return value.\n\n For example, ``f.seek(2, os.SEEK_CUR)`` advances the position by\n two and ``f.seek(-3, os.SEEK_END)`` sets the position to the third\n to last.\n\n Note that if the file is opened for appending (mode ``\'a\'`` or\n ``\'a+\'``), any ``seek()`` operations will be undone at the next\n write. If the file is only opened for writing in append mode (mode\n ``\'a\'``), this method is essentially a no-op, but it remains useful\n for files opened in append mode with reading enabled (mode\n ``\'a+\'``). If the file is opened in text mode (without ``\'b\'``),\n only offsets returned by ``tell()`` are legal. Use of other\n offsets causes undefined behavior.\n\n Note that not all file objects are seekable.\n\n Changed in version 2.6: Passing float values as offset has been\n deprecated.\n\nfile.tell()\n\n Return the file\'s current position, like ``stdio``\'s ``ftell()``.\n\n Note: On Windows, ``tell()`` can return illegal values (after an\n ``fgets()``) when reading files with Unix-style line-endings. Use\n binary mode (``\'rb\'``) to circumvent this problem.\n\nfile.truncate([size])\n\n Truncate the file\'s size. If the optional *size* argument is\n present, the file is truncated to (at most) that size. The size\n defaults to the current position. The current file position is not\n changed. Note that if a specified size exceeds the file\'s current\n size, the result is platform-dependent: possibilities include that\n the file may remain unchanged, increase to the specified size as if\n zero-filled, or increase to the specified size with undefined new\n content. Availability: Windows, many Unix variants.\n\nfile.write(str)\n\n Write a string to the file. There is no return value. Due to\n buffering, the string may not actually show up in the file until\n the ``flush()`` or ``close()`` method is called.\n\nfile.writelines(sequence)\n\n Write a sequence of strings to the file. The sequence can be any\n iterable object producing strings, typically a list of strings.\n There is no return value. (The name is intended to match\n ``readlines()``; ``writelines()`` does not add line separators.)\n\nFiles support the iterator protocol. Each iteration returns the same\nresult as ``file.readline()``, and iteration ends when the\n``readline()`` method returns an empty string.\n\nFile objects also offer a number of other interesting attributes.\nThese are not required for file-like objects, but should be\nimplemented if they make sense for the particular object.\n\nfile.closed\n\n bool indicating the current state of the file object. This is a\n read-only attribute; the ``close()`` method changes the value. It\n may not be available on all file-like objects.\n\nfile.encoding\n\n The encoding that this file uses. When Unicode strings are written\n to a file, they will be converted to byte strings using this\n encoding. 
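# Sketch of seek()/tell() as described above, on a throwaway temporary file
# (opened in 'w+b' mode, so offsets are plain byte positions).
import os, tempfile
f = tempfile.TemporaryFile()
f.write('hello world')
f.seek(-5, os.SEEK_END)             # five bytes before the end
assert f.read() == 'world'
assert f.tell() == 11
f.close()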
In addition, when the file is connected to a terminal,\n the attribute gives the encoding that the terminal is likely to use\n (that information might be incorrect if the user has misconfigured\n the terminal). The attribute is read-only and may not be present\n on all file-like objects. It may also be ``None``, in which case\n the file uses the system default encoding for converting Unicode\n strings.\n\n New in version 2.3.\n\nfile.errors\n\n The Unicode error handler used along with the encoding.\n\n New in version 2.6.\n\nfile.mode\n\n The I/O mode for the file. If the file was created using the\n ``open()`` built-in function, this will be the value of the *mode*\n parameter. This is a read-only attribute and may not be present on\n all file-like objects.\n\nfile.name\n\n If the file object was created using ``open()``, the name of the\n file. Otherwise, some string that indicates the source of the file\n object, of the form ``<...>``. This is a read-only attribute and\n may not be present on all file-like objects.\n\nfile.newlines\n\n If Python was built with the *--with-universal-newlines* option to\n **configure** (the default) this read-only attribute exists, and\n for files opened in universal newline read mode it keeps track of\n the types of newlines encountered while reading the file. The\n values it can take are ``\'\\r\'``, ``\'\\n\'``, ``\'\\r\\n\'``, ``None``\n (unknown, no newlines read yet) or a tuple containing all the\n newline types seen, to indicate that multiple newline conventions\n were encountered. For files not opened in universal newline read\n mode the value of this attribute will be ``None``.\n\nfile.softspace\n\n Boolean that indicates whether a space character needs to be\n printed before another value when using the ``print`` statement.\n Classes that are trying to simulate a file object should also have\n a writable ``softspace`` attribute, which should be initialized to\n zero. This will be automatic for most classes implemented in\n Python (care may be needed for objects that override attribute\n access); types implemented in C will have to provide a writable\n ``softspace`` attribute.\n\n Note: This attribute is not used to control the ``print`` statement,\n but to allow the implementation of ``print`` to keep track of its\n internal state.\n', + 'bltin-file-objects': u'\nFile Objects\n************\n\nFile objects are implemented using C\'s ``stdio`` package and can be\ncreated with the built-in ``open()`` function. File objects are also\nreturned by some other built-in functions and methods, such as\n``os.popen()`` and ``os.fdopen()`` and the ``makefile()`` method of\nsocket objects. Temporary files can be created using the ``tempfile``\nmodule, and high-level file operations such as copying, moving, and\ndeleting files and directories can be achieved with the ``shutil``\nmodule.\n\nWhen a file operation fails for an I/O-related reason, the exception\n``IOError`` is raised. This includes situations where the operation\nis not defined for some reason, like ``seek()`` on a tty device or\nwriting a file opened for reading.\n\nFiles have the following methods:\n\nfile.close()\n\n Close the file. A closed file cannot be read or written any more.\n Any operation which requires that the file be open will raise a\n ``ValueError`` after the file has been closed. Calling ``close()``\n more than once is allowed.\n\n As of Python 2.5, you can avoid having to call this method\n explicitly if you use the ``with`` statement. 
For example, the\n following code will automatically close *f* when the ``with`` block\n is exited:\n\n from __future__ import with_statement # This isn\'t required in Python 2.6\n\n with open("hello.txt") as f:\n for line in f:\n print line\n\n In older versions of Python, you would have needed to do this to\n get the same effect:\n\n f = open("hello.txt")\n try:\n for line in f:\n print line\n finally:\n f.close()\n\n Note: Not all "file-like" types in Python support use as a context\n manager for the ``with`` statement. If your code is intended to\n work with any file-like object, you can use the function\n ``contextlib.closing()`` instead of using the object directly.\n\nfile.flush()\n\n Flush the internal buffer, like ``stdio``\'s ``fflush()``. This may\n be a no-op on some file-like objects.\n\n Note: ``flush()`` does not necessarily write the file\'s data to disk.\n Use ``flush()`` followed by ``os.fsync()`` to ensure this\n behavior.\n\nfile.fileno()\n\n Return the integer "file descriptor" that is used by the underlying\n implementation to request I/O operations from the operating system.\n This can be useful for other, lower level interfaces that use file\n descriptors, such as the ``fcntl`` module or ``os.read()`` and\n friends.\n\n Note: File-like objects which do not have a real file descriptor should\n *not* provide this method!\n\nfile.isatty()\n\n Return ``True`` if the file is connected to a tty(-like) device,\n else ``False``.\n\n Note: If a file-like object is not associated with a real file, this\n method should *not* be implemented.\n\nfile.next()\n\n A file object is its own iterator, for example ``iter(f)`` returns\n *f* (unless *f* is closed). When a file is used as an iterator,\n typically in a ``for`` loop (for example, ``for line in f: print\n line``), the ``next()`` method is called repeatedly. This method\n returns the next input line, or raises ``StopIteration`` when EOF\n is hit when the file is open for reading (behavior is undefined\n when the file is open for writing). In order to make a ``for``\n loop the most efficient way of looping over the lines of a file (a\n very common operation), the ``next()`` method uses a hidden read-\n ahead buffer. As a consequence of using a read-ahead buffer,\n combining ``next()`` with other file methods (like ``readline()``)\n does not work right. However, using ``seek()`` to reposition the\n file to an absolute position will flush the read-ahead buffer.\n\n New in version 2.3.\n\nfile.read([size])\n\n Read at most *size* bytes from the file (less if the read hits EOF\n before obtaining *size* bytes). If the *size* argument is negative\n or omitted, read all data until EOF is reached. The bytes are\n returned as a string object. An empty string is returned when EOF\n is encountered immediately. (For certain files, like ttys, it\n makes sense to continue reading after an EOF is hit.) Note that\n this method may call the underlying C function ``fread()`` more\n than once in an effort to acquire as close to *size* bytes as\n possible. Also note that when in non-blocking mode, less data than\n was requested may be returned, even if no *size* parameter was\n given.\n\n Note: This function is simply a wrapper for the underlying ``fread()``\n C function, and will behave the same in corner cases, such as\n whether the EOF value is cached.\n\nfile.readline([size])\n\n Read one entire line from the file. A trailing newline character\n is kept in the string (but may be absent when a file ends with an\n incomplete line). 
[5] If the *size* argument is present and non-\n negative, it is a maximum byte count (including the trailing\n newline) and an incomplete line may be returned. When *size* is not\n 0, an empty string is returned *only* when EOF is encountered\n immediately.\n\n Note: Unlike ``stdio``\'s ``fgets()``, the returned string contains null\n characters (``\'\\0\'``) if they occurred in the input.\n\nfile.readlines([sizehint])\n\n Read until EOF using ``readline()`` and return a list containing\n the lines thus read. If the optional *sizehint* argument is\n present, instead of reading up to EOF, whole lines totalling\n approximately *sizehint* bytes (possibly after rounding up to an\n internal buffer size) are read. Objects implementing a file-like\n interface may choose to ignore *sizehint* if it cannot be\n implemented, or cannot be implemented efficiently.\n\nfile.xreadlines()\n\n This method returns the same thing as ``iter(f)``.\n\n New in version 2.1.\n\n Deprecated since version 2.3: Use ``for line in file`` instead.\n\nfile.seek(offset[, whence])\n\n Set the file\'s current position, like ``stdio``\'s ``fseek()``. The\n *whence* argument is optional and defaults to ``os.SEEK_SET`` or\n ``0`` (absolute file positioning); other values are ``os.SEEK_CUR``\n or ``1`` (seek relative to the current position) and\n ``os.SEEK_END`` or ``2`` (seek relative to the file\'s end). There\n is no return value.\n\n For example, ``f.seek(2, os.SEEK_CUR)`` advances the position by\n two and ``f.seek(-3, os.SEEK_END)`` sets the position to the third\n to last.\n\n Note that if the file is opened for appending (mode ``\'a\'`` or\n ``\'a+\'``), any ``seek()`` operations will be undone at the next\n write. If the file is only opened for writing in append mode (mode\n ``\'a\'``), this method is essentially a no-op, but it remains useful\n for files opened in append mode with reading enabled (mode\n ``\'a+\'``). If the file is opened in text mode (without ``\'b\'``),\n only offsets returned by ``tell()`` are legal. Use of other\n offsets causes undefined behavior.\n\n Note that not all file objects are seekable.\n\n Changed in version 2.6: Passing float values as offset has been\n deprecated.\n\nfile.tell()\n\n Return the file\'s current position, like ``stdio``\'s ``ftell()``.\n\n Note: On Windows, ``tell()`` can return illegal values (after an\n ``fgets()``) when reading files with Unix-style line-endings. Use\n binary mode (``\'rb\'``) to circumvent this problem.\n\nfile.truncate([size])\n\n Truncate the file\'s size. If the optional *size* argument is\n present, the file is truncated to (at most) that size. The size\n defaults to the current position. The current file position is not\n changed. Note that if a specified size exceeds the file\'s current\n size, the result is platform-dependent: possibilities include that\n the file may remain unchanged, increase to the specified size as if\n zero-filled, or increase to the specified size with undefined new\n content. Availability: Windows, many Unix variants.\n\nfile.write(str)\n\n Write a string to the file. There is no return value. Due to\n buffering, the string may not actually show up in the file until\n the ``flush()`` or ``close()`` method is called.\n\nfile.writelines(sequence)\n\n Write a sequence of strings to the file. The sequence can be any\n iterable object producing strings, typically a list of strings.\n There is no return value. 
(The name is intended to match\n ``readlines()``; ``writelines()`` does not add line separators.)\n\nFiles support the iterator protocol. Each iteration returns the same\nresult as ``file.readline()``, and iteration ends when the\n``readline()`` method returns an empty string.\n\nFile objects also offer a number of other interesting attributes.\nThese are not required for file-like objects, but should be\nimplemented if they make sense for the particular object.\n\nfile.closed\n\n bool indicating the current state of the file object. This is a\n read-only attribute; the ``close()`` method changes the value. It\n may not be available on all file-like objects.\n\nfile.encoding\n\n The encoding that this file uses. When Unicode strings are written\n to a file, they will be converted to byte strings using this\n encoding. In addition, when the file is connected to a terminal,\n the attribute gives the encoding that the terminal is likely to use\n (that information might be incorrect if the user has misconfigured\n the terminal). The attribute is read-only and may not be present\n on all file-like objects. It may also be ``None``, in which case\n the file uses the system default encoding for converting Unicode\n strings.\n\n New in version 2.3.\n\nfile.errors\n\n The Unicode error handler used along with the encoding.\n\n New in version 2.6.\n\nfile.mode\n\n The I/O mode for the file. If the file was created using the\n ``open()`` built-in function, this will be the value of the *mode*\n parameter. This is a read-only attribute and may not be present on\n all file-like objects.\n\nfile.name\n\n If the file object was created using ``open()``, the name of the\n file. Otherwise, some string that indicates the source of the file\n object, of the form ``<...>``. This is a read-only attribute and\n may not be present on all file-like objects.\n\nfile.newlines\n\n If Python was built with universal newlines enabled (the default)\n this read-only attribute exists, and for files opened in universal\n newline read mode it keeps track of the types of newlines\n encountered while reading the file. The values it can take are\n ``\'\\r\'``, ``\'\\n\'``, ``\'\\r\\n\'``, ``None`` (unknown, no newlines read\n yet) or a tuple containing all the newline types seen, to indicate\n that multiple newline conventions were encountered. For files not\n opened in universal newline read mode the value of this attribute\n will be ``None``.\n\nfile.softspace\n\n Boolean that indicates whether a space character needs to be\n printed before another value when using the ``print`` statement.\n Classes that are trying to simulate a file object should also have\n a writable ``softspace`` attribute, which should be initialized to\n zero. This will be automatic for most classes implemented in\n Python (care may be needed for objects that override attribute\n access); types implemented in C will have to provide a writable\n ``softspace`` attribute.\n\n Note: This attribute is not used to control the ``print`` statement,\n but to allow the implementation of ``print`` to keep track of its\n internal state.\n', 'bltin-null-object': u"\nThe Null Object\n***************\n\nThis object is returned by functions that don't explicitly return a\nvalue. It supports no special operations. There is exactly one null\nobject, named ``None`` (a built-in name).\n\nIt is written as ``None``.\n", 'bltin-type-objects': u"\nType Objects\n************\n\nType objects represent the various object types. 
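As a small illustration of type objects, assuming nothing beyond the built-ins and the standard ``types`` module:

   import types

   print type(42) is int                            # True; type() returns the type object
   print type(None)                                 # <type 'NoneType'>
   print types.FunctionType is type(lambda: None)   # True; ``types`` names the built-in types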
An object's type is\naccessed by the built-in function ``type()``. There are no special\noperations on types. The standard module ``types`` defines names for\nall standard built-in types.\n\nTypes are written like this: ````.\n", 'booleans': u'\nBoolean operations\n******************\n\n or_test ::= and_test | or_test "or" and_test\n and_test ::= not_test | and_test "and" not_test\n not_test ::= comparison | "not" not_test\n\nIn the context of Boolean operations, and also when expressions are\nused by control flow statements, the following values are interpreted\nas false: ``False``, ``None``, numeric zero of all types, and empty\nstrings and containers (including strings, tuples, lists,\ndictionaries, sets and frozensets). All other values are interpreted\nas true. (See the ``__nonzero__()`` special method for a way to\nchange this.)\n\nThe operator ``not`` yields ``True`` if its argument is false,\n``False`` otherwise.\n\nThe expression ``x and y`` first evaluates *x*; if *x* is false, its\nvalue is returned; otherwise, *y* is evaluated and the resulting value\nis returned.\n\nThe expression ``x or y`` first evaluates *x*; if *x* is true, its\nvalue is returned; otherwise, *y* is evaluated and the resulting value\nis returned.\n\n(Note that neither ``and`` nor ``or`` restrict the value and type they\nreturn to ``False`` and ``True``, but rather return the last evaluated\nargument. This is sometimes useful, e.g., if ``s`` is a string that\nshould be replaced by a default value if it is empty, the expression\n``s or \'foo\'`` yields the desired value. Because ``not`` has to\ninvent a value anyway, it does not bother to return a value of the\nsame type as its argument, so e.g., ``not \'foo\'`` yields ``False``,\nnot ``\'\'``.)\n', @@ -20,39 +20,39 @@ 'class': u'\nClass definitions\n*****************\n\nA class definition defines a class object (see section *The standard\ntype hierarchy*):\n\n classdef ::= "class" classname [inheritance] ":" suite\n inheritance ::= "(" [expression_list] ")"\n classname ::= identifier\n\nA class definition is an executable statement. It first evaluates the\ninheritance list, if present. Each item in the inheritance list\nshould evaluate to a class object or class type which allows\nsubclassing. The class\'s suite is then executed in a new execution\nframe (see section *Naming and binding*), using a newly created local\nnamespace and the original global namespace. (Usually, the suite\ncontains only function definitions.) When the class\'s suite finishes\nexecution, its execution frame is discarded but its local namespace is\nsaved. [4] A class object is then created using the inheritance list\nfor the base classes and the saved local namespace for the attribute\ndictionary. The class name is bound to this class object in the\noriginal local namespace.\n\n**Programmer\'s note:** Variables defined in the class definition are\nclass variables; they are shared by all instances. To create instance\nvariables, they can be set in a method with ``self.name = value``.\nBoth class and instance variables are accessible through the notation\n"``self.name``", and an instance variable hides a class variable with\nthe same name when accessed in this way. Class variables can be used\nas defaults for instance variables, but using mutable values there can\nlead to unexpected results. 
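A minimal sketch of the class-variable pitfall just mentioned; the class names are purely illustrative:

   class Shared(object):
       items = []                   # class variable: one list shared by all instances

   a = Shared()
   b = Shared()
   a.items.append(1)                # mutates the shared class attribute
   print b.items                    # prints [1]; probably not what was intended

   class PerInstance(object):
       def __init__(self):
           self.items = []          # instance variable: a fresh list per instance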
For *new-style class*es, descriptors can\nbe used to create instance variables with different implementation\ndetails.\n\nClass definitions, like function definitions, may be wrapped by one or\nmore *decorator* expressions. The evaluation rules for the decorator\nexpressions are the same as for functions. The result must be a class\nobject, which is then bound to the class name.\n\n-[ Footnotes ]-\n\n[1] The exception is propagated to the invocation stack only if there\n is no ``finally`` clause that negates the exception.\n\n[2] Currently, control "flows off the end" except in the case of an\n exception or the execution of a ``return``, ``continue``, or\n ``break`` statement.\n\n[3] A string literal appearing as the first statement in the function\n body is transformed into the function\'s ``__doc__`` attribute and\n therefore the function\'s *docstring*.\n\n[4] A string literal appearing as the first statement in the class\n body is transformed into the namespace\'s ``__doc__`` item and\n therefore the class\'s *docstring*.\n', 'coercion-rules': u"\nCoercion rules\n**************\n\nThis section used to document the rules for coercion. As the language\nhas evolved, the coercion rules have become hard to document\nprecisely; documenting what one version of one particular\nimplementation does is undesirable. Instead, here are some informal\nguidelines regarding coercion. In Python 3.0, coercion will not be\nsupported.\n\n* If the left operand of a % operator is a string or Unicode object,\n no coercion takes place and the string formatting operation is\n invoked instead.\n\n* It is no longer recommended to define a coercion operation. Mixed-\n mode operations on types that don't define coercion pass the\n original arguments to the operation.\n\n* New-style classes (those derived from ``object``) never invoke the\n ``__coerce__()`` method in response to a binary operator; the only\n time ``__coerce__()`` is invoked is when the built-in function\n ``coerce()`` is called.\n\n* For most intents and purposes, an operator that returns\n ``NotImplemented`` is treated the same as one that is not\n implemented at all.\n\n* Below, ``__op__()`` and ``__rop__()`` are used to signify the\n generic method names corresponding to an operator; ``__iop__()`` is\n used for the corresponding in-place operator. For example, for the\n operator '``+``', ``__add__()`` and ``__radd__()`` are used for the\n left and right variant of the binary operator, and ``__iadd__()``\n for the in-place variant.\n\n* For objects *x* and *y*, first ``x.__op__(y)`` is tried. If this is\n not implemented or returns ``NotImplemented``, ``y.__rop__(x)`` is\n tried. If this is also not implemented or returns\n ``NotImplemented``, a ``TypeError`` exception is raised. But see\n the following exception:\n\n* Exception to the previous item: if the left operand is an instance\n of a built-in type or a new-style class, and the right operand is an\n instance of a proper subclass of that type or class and overrides\n the base's ``__rop__()`` method, the right operand's ``__rop__()``\n method is tried *before* the left operand's ``__op__()`` method.\n\n This is done so that a subclass can completely override binary\n operators. 
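A sketch of the subclass exception described above, using invented class names; with new-style classes the right operand's reflected method is tried first:

   class Base(object):
       def __add__(self, other):
           return "Base.__add__"
       def __radd__(self, other):
           return "Base.__radd__"

   class Derived(Base):
       def __radd__(self, other):       # overrides the reflected method
           return "Derived.__radd__"

   # The right operand is an instance of a proper subclass that overrides
   # __radd__, so its __radd__() is tried before Base.__add__().
   print Base() + Derived()             # Derived.__radd__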
Otherwise, the left operand's ``__op__()`` method would\n always accept the right operand: when an instance of a given class\n is expected, an instance of a subclass of that class is always\n acceptable.\n\n* When either operand type defines a coercion, this coercion is called\n before that type's ``__op__()`` or ``__rop__()`` method is called,\n but no sooner. If the coercion returns an object of a different\n type for the operand whose coercion is invoked, part of the process\n is redone using the new object.\n\n* When an in-place operator (like '``+=``') is used, if the left\n operand implements ``__iop__()``, it is invoked without any\n coercion. When the operation falls back to ``__op__()`` and/or\n ``__rop__()``, the normal coercion rules apply.\n\n* In ``x + y``, if *x* is a sequence that implements sequence\n concatenation, sequence concatenation is invoked.\n\n* In ``x * y``, if one operator is a sequence that implements sequence\n repetition, and the other is an integer (``int`` or ``long``),\n sequence repetition is invoked.\n\n* Rich comparisons (implemented by methods ``__eq__()`` and so on)\n never use coercion. Three-way comparison (implemented by\n ``__cmp__()``) does use coercion under the same conditions as other\n binary operations use it.\n\n* In the current implementation, the built-in numeric types ``int``,\n ``long``, ``float``, and ``complex`` do not use coercion. All these\n types implement a ``__coerce__()`` method, for use by the built-in\n ``coerce()`` function.\n\n Changed in version 2.7.\n", 'comparisons': u'\nComparisons\n***********\n\nUnlike C, all comparison operations in Python have the same priority,\nwhich is lower than that of any arithmetic, shifting or bitwise\noperation. Also unlike C, expressions like ``a < b < c`` have the\ninterpretation that is conventional in mathematics:\n\n comparison ::= or_expr ( comp_operator or_expr )*\n comp_operator ::= "<" | ">" | "==" | ">=" | "<=" | "<>" | "!="\n | "is" ["not"] | ["not"] "in"\n\nComparisons yield boolean values: ``True`` or ``False``.\n\nComparisons can be chained arbitrarily, e.g., ``x < y <= z`` is\nequivalent to ``x < y and y <= z``, except that ``y`` is evaluated\nonly once (but in both cases ``z`` is not evaluated at all when ``x <\ny`` is found to be false).\n\nFormally, if *a*, *b*, *c*, ..., *y*, *z* are expressions and *op1*,\n*op2*, ..., *opN* are comparison operators, then ``a op1 b op2 c ... y\nopN z`` is equivalent to ``a op1 b and b op2 c and ... y opN z``,\nexcept that each expression is evaluated at most once.\n\nNote that ``a op1 b op2 c`` doesn\'t imply any kind of comparison\nbetween *a* and *c*, so that, e.g., ``x < y > z`` is perfectly legal\n(though perhaps not pretty).\n\nThe forms ``<>`` and ``!=`` are equivalent; for consistency with C,\n``!=`` is preferred; where ``!=`` is mentioned below ``<>`` is also\naccepted. The ``<>`` spelling is considered obsolescent.\n\nThe operators ``<``, ``>``, ``==``, ``>=``, ``<=``, and ``!=`` compare\nthe values of two objects. The objects need not have the same type.\nIf both are numbers, they are converted to a common type. Otherwise,\nobjects of different types *always* compare unequal, and are ordered\nconsistently but arbitrarily. 
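The chaining rules described earlier can be checked directly; a short sketch:

   x, y, z = 1, 2, 3
   print x < y <= z                 # True: equivalent to (x < y) and (y <= z)
   print 1 < 3 > 2                  # True: chaining never compares 1 with 2

   def middle():
       print "evaluated once"
       return 2

   print 0 < middle() < 10          # the middle expression is evaluated only once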
You can control comparison behavior of\nobjects of non-built-in types by defining a ``__cmp__`` method or rich\ncomparison methods like ``__gt__``, described in section *Special\nmethod names*.\n\n(This unusual definition of comparison was used to simplify the\ndefinition of operations like sorting and the ``in`` and ``not in``\noperators. In the future, the comparison rules for objects of\ndifferent types are likely to change.)\n\nComparison of objects of the same type depends on the type:\n\n* Numbers are compared arithmetically.\n\n* Strings are compared lexicographically using the numeric equivalents\n (the result of the built-in function ``ord()``) of their characters.\n Unicode and 8-bit strings are fully interoperable in this behavior.\n [4]\n\n* Tuples and lists are compared lexicographically using comparison of\n corresponding elements. This means that to compare equal, each\n element must compare equal and the two sequences must be of the same\n type and have the same length.\n\n If not equal, the sequences are ordered the same as their first\n differing elements. For example, ``cmp([1,2,x], [1,2,y])`` returns\n the same as ``cmp(x,y)``. If the corresponding element does not\n exist, the shorter sequence is ordered first (for example, ``[1,2] <\n [1,2,3]``).\n\n* Mappings (dictionaries) compare equal if and only if their sorted\n (key, value) lists compare equal. [5] Outcomes other than equality\n are resolved consistently, but are not otherwise defined. [6]\n\n* Most other objects of built-in types compare unequal unless they are\n the same object; the choice whether one object is considered smaller\n or larger than another one is made arbitrarily but consistently\n within one execution of a program.\n\nThe operators ``in`` and ``not in`` test for collection membership.\n``x in s`` evaluates to true if *x* is a member of the collection *s*,\nand false otherwise. ``x not in s`` returns the negation of ``x in\ns``. The collection membership test has traditionally been bound to\nsequences; an object is a member of a collection if the collection is\na sequence and contains an element equal to that object. However, it\nmake sense for many other object types to support membership tests\nwithout being a sequence. In particular, dictionaries (for keys) and\nsets support membership testing.\n\nFor the list and tuple types, ``x in y`` is true if and only if there\nexists an index *i* such that ``x == y[i]`` is true.\n\nFor the Unicode and string types, ``x in y`` is true if and only if\n*x* is a substring of *y*. An equivalent test is ``y.find(x) != -1``.\nNote, *x* and *y* need not be the same type; consequently, ``u\'ab\' in\n\'abc\'`` will return ``True``. Empty strings are always considered to\nbe a substring of any other string, so ``"" in "abc"`` will return\n``True``.\n\nChanged in version 2.3: Previously, *x* was required to be a string of\nlength ``1``.\n\nFor user-defined classes which define the ``__contains__()`` method,\n``x in y`` is true if and only if ``y.__contains__(x)`` is true.\n\nFor user-defined classes which do not define ``__contains__()`` but do\ndefine ``__iter__()``, ``x in y`` is true if some value ``z`` with ``x\n== z`` is produced while iterating over ``y``. 
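A minimal sketch of the ``__contains__()`` hook and the iteration fallback described above, with invented class names:

   class EvenNumbers(object):
       def __contains__(self, n):
           return n % 2 == 0

   print 4 in EvenNumbers()         # True:  EvenNumbers().__contains__(4)
   print 3 not in EvenNumbers()     # True:  ``not in`` is the inverse of ``in``

   class Counted(object):           # no __contains__; ``in`` falls back to iteration
       def __iter__(self):
           return iter([1, 2, 3])

   print 2 in Counted()             # True: some value z == 2 is produced while iterating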
If an exception is\nraised during the iteration, it is as if ``in`` raised that exception.\n\nLastly, the old-style iteration protocol is tried: if a class defines\n``__getitem__()``, ``x in y`` is true if and only if there is a non-\nnegative integer index *i* such that ``x == y[i]``, and all lower\ninteger indices do not raise ``IndexError`` exception. (If any other\nexception is raised, it is as if ``in`` raised that exception).\n\nThe operator ``not in`` is defined to have the inverse true value of\n``in``.\n\nThe operators ``is`` and ``is not`` test for object identity: ``x is\ny`` is true if and only if *x* and *y* are the same object. ``x is\nnot y`` yields the inverse truth value. [7]\n', - 'compound': u'\nCompound statements\n*******************\n\nCompound statements contain (groups of) other statements; they affect\nor control the execution of those other statements in some way. In\ngeneral, compound statements span multiple lines, although in simple\nincarnations a whole compound statement may be contained in one line.\n\nThe ``if``, ``while`` and ``for`` statements implement traditional\ncontrol flow constructs. ``try`` specifies exception handlers and/or\ncleanup code for a group of statements. Function and class\ndefinitions are also syntactically compound statements.\n\nCompound statements consist of one or more \'clauses.\' A clause\nconsists of a header and a \'suite.\' The clause headers of a\nparticular compound statement are all at the same indentation level.\nEach clause header begins with a uniquely identifying keyword and ends\nwith a colon. A suite is a group of statements controlled by a\nclause. A suite can be one or more semicolon-separated simple\nstatements on the same line as the header, following the header\'s\ncolon, or it can be one or more indented statements on subsequent\nlines. Only the latter form of suite can contain nested compound\nstatements; the following is illegal, mostly because it wouldn\'t be\nclear to which ``if`` clause a following ``else`` clause would belong:\n\n if test1: if test2: print x\n\nAlso note that the semicolon binds tighter than the colon in this\ncontext, so that in the following example, either all or none of the\n``print`` statements are executed:\n\n if x < y < z: print x; print y; print z\n\nSummarizing:\n\n compound_stmt ::= if_stmt\n | while_stmt\n | for_stmt\n | try_stmt\n | with_stmt\n | funcdef\n | classdef\n | decorated\n suite ::= stmt_list NEWLINE | NEWLINE INDENT statement+ DEDENT\n statement ::= stmt_list NEWLINE | compound_stmt\n stmt_list ::= simple_stmt (";" simple_stmt)* [";"]\n\nNote that statements always end in a ``NEWLINE`` possibly followed by\na ``DEDENT``. 
Also note that optional continuation clauses always\nbegin with a keyword that cannot start a statement, thus there are no\nambiguities (the \'dangling ``else``\' problem is solved in Python by\nrequiring nested ``if`` statements to be indented).\n\nThe formatting of the grammar rules in the following sections places\neach clause on a separate line for clarity.\n\n\nThe ``if`` statement\n====================\n\nThe ``if`` statement is used for conditional execution:\n\n if_stmt ::= "if" expression ":" suite\n ( "elif" expression ":" suite )*\n ["else" ":" suite]\n\nIt selects exactly one of the suites by evaluating the expressions one\nby one until one is found to be true (see section *Boolean operations*\nfor the definition of true and false); then that suite is executed\n(and no other part of the ``if`` statement is executed or evaluated).\nIf all expressions are false, the suite of the ``else`` clause, if\npresent, is executed.\n\n\nThe ``while`` statement\n=======================\n\nThe ``while`` statement is used for repeated execution as long as an\nexpression is true:\n\n while_stmt ::= "while" expression ":" suite\n ["else" ":" suite]\n\nThis repeatedly tests the expression and, if it is true, executes the\nfirst suite; if the expression is false (which may be the first time\nit is tested) the suite of the ``else`` clause, if present, is\nexecuted and the loop terminates.\n\nA ``break`` statement executed in the first suite terminates the loop\nwithout executing the ``else`` clause\'s suite. A ``continue``\nstatement executed in the first suite skips the rest of the suite and\ngoes back to testing the expression.\n\n\nThe ``for`` statement\n=====================\n\nThe ``for`` statement is used to iterate over the elements of a\nsequence (such as a string, tuple or list) or other iterable object:\n\n for_stmt ::= "for" target_list "in" expression_list ":" suite\n ["else" ":" suite]\n\nThe expression list is evaluated once; it should yield an iterable\nobject. An iterator is created for the result of the\n``expression_list``. The suite is then executed once for each item\nprovided by the iterator, in the order of ascending indices. Each\nitem in turn is assigned to the target list using the standard rules\nfor assignments, and then the suite is executed. When the items are\nexhausted (which is immediately when the sequence is empty), the suite\nin the ``else`` clause, if present, is executed, and the loop\nterminates.\n\nA ``break`` statement executed in the first suite terminates the loop\nwithout executing the ``else`` clause\'s suite. A ``continue``\nstatement executed in the first suite skips the rest of the suite and\ncontinues with the next item, or with the ``else`` clause if there was\nno next item.\n\nThe suite may assign to the variable(s) in the target list; this does\nnot affect the next item assigned to it.\n\nThe target list is not deleted when the loop is finished, but if the\nsequence is empty, it will not have been assigned to at all by the\nloop. Hint: the built-in function ``range()`` returns a sequence of\nintegers suitable to emulate the effect of Pascal\'s ``for i := a to b\ndo``; e.g., ``range(3)`` returns the list ``[0, 1, 2]``.\n\nNote: There is a subtlety when the sequence is being modified by the loop\n (this can only occur for mutable sequences, i.e. lists). An internal\n counter is used to keep track of which item is used next, and this\n is incremented on each iteration. When this counter has reached the\n length of the sequence the loop terminates. 
This means that if the\n suite deletes the current (or a previous) item from the sequence,\n the next item will be skipped (since it gets the index of the\n current item which has already been treated). Likewise, if the\n suite inserts an item in the sequence before the current item, the\n current item will be treated again the next time through the loop.\n This can lead to nasty bugs that can be avoided by making a\n temporary copy using a slice of the whole sequence, e.g.,\n\n for x in a[:]:\n if x < 0: a.remove(x)\n\n\nThe ``try`` statement\n=====================\n\nThe ``try`` statement specifies exception handlers and/or cleanup code\nfor a group of statements:\n\n try_stmt ::= try1_stmt | try2_stmt\n try1_stmt ::= "try" ":" suite\n ("except" [expression [("as" | ",") target]] ":" suite)+\n ["else" ":" suite]\n ["finally" ":" suite]\n try2_stmt ::= "try" ":" suite\n "finally" ":" suite\n\nChanged in version 2.5: In previous versions of Python,\n``try``...``except``...``finally`` did not work. ``try``...``except``\nhad to be nested in ``try``...``finally``.\n\nThe ``except`` clause(s) specify one or more exception handlers. When\nno exception occurs in the ``try`` clause, no exception handler is\nexecuted. When an exception occurs in the ``try`` suite, a search for\nan exception handler is started. This search inspects the except\nclauses in turn until one is found that matches the exception. An\nexpression-less except clause, if present, must be last; it matches\nany exception. For an except clause with an expression, that\nexpression is evaluated, and the clause matches the exception if the\nresulting object is "compatible" with the exception. An object is\ncompatible with an exception if it is the class or a base class of the\nexception object, a tuple containing an item compatible with the\nexception, or, in the (deprecated) case of string exceptions, is the\nraised string itself (note that the object identities must match, i.e.\nit must be the same string object, not just a string with the same\nvalue).\n\nIf no except clause matches the exception, the search for an exception\nhandler continues in the surrounding code and on the invocation stack.\n[1]\n\nIf the evaluation of an expression in the header of an except clause\nraises an exception, the original search for a handler is canceled and\na search starts for the new exception in the surrounding code and on\nthe call stack (it is treated as if the entire ``try`` statement\nraised the exception).\n\nWhen a matching except clause is found, the exception is assigned to\nthe target specified in that except clause, if present, and the except\nclause\'s suite is executed. All except clauses must have an\nexecutable block. When the end of this block is reached, execution\ncontinues normally after the entire try statement. (This means that\nif two nested handlers exist for the same exception, and the exception\noccurs in the try clause of the inner handler, the outer handler will\nnot handle the exception.)\n\nBefore an except clause\'s suite is executed, details about the\nexception are assigned to three variables in the ``sys`` module:\n``sys.exc_type`` receives the object identifying the exception;\n``sys.exc_value`` receives the exception\'s parameter;\n``sys.exc_traceback`` receives a traceback object (see section *The\nstandard type hierarchy*) identifying the point in the program where\nthe exception occurred. 
These details are also available through the\n``sys.exc_info()`` function, which returns a tuple ``(exc_type,\nexc_value, exc_traceback)``. Use of the corresponding variables is\ndeprecated in favor of this function, since their use is unsafe in a\nthreaded program. As of Python 1.5, the variables are restored to\ntheir previous values (before the call) when returning from a function\nthat handled an exception.\n\nThe optional ``else`` clause is executed if and when control flows off\nthe end of the ``try`` clause. [2] Exceptions in the ``else`` clause\nare not handled by the preceding ``except`` clauses.\n\nIf ``finally`` is present, it specifies a \'cleanup\' handler. The\n``try`` clause is executed, including any ``except`` and ``else``\nclauses. If an exception occurs in any of the clauses and is not\nhandled, the exception is temporarily saved. The ``finally`` clause is\nexecuted. If there is a saved exception, it is re-raised at the end\nof the ``finally`` clause. If the ``finally`` clause raises another\nexception or executes a ``return`` or ``break`` statement, the saved\nexception is lost. The exception information is not available to the\nprogram during execution of the ``finally`` clause.\n\nWhen a ``return``, ``break`` or ``continue`` statement is executed in\nthe ``try`` suite of a ``try``...``finally`` statement, the\n``finally`` clause is also executed \'on the way out.\' A ``continue``\nstatement is illegal in the ``finally`` clause. (The reason is a\nproblem with the current implementation --- this restriction may be\nlifted in the future).\n\nAdditional information on exceptions can be found in section\n*Exceptions*, and information on using the ``raise`` statement to\ngenerate exceptions may be found in section *The raise statement*.\n\n\nThe ``with`` statement\n======================\n\nNew in version 2.5.\n\nThe ``with`` statement is used to wrap the execution of a block with\nmethods defined by a context manager (see section *With Statement\nContext Managers*). This allows common\n``try``...``except``...``finally`` usage patterns to be encapsulated\nfor convenient reuse.\n\n with_stmt ::= "with" with_item ("," with_item)* ":" suite\n with_item ::= expression ["as" target]\n\nThe execution of the ``with`` statement with one "item" proceeds as\nfollows:\n\n1. The context expression is evaluated to obtain a context manager.\n\n2. The context manager\'s ``__exit__()`` is loaded for later use.\n\n3. The context manager\'s ``__enter__()`` method is invoked.\n\n4. If a target was included in the ``with`` statement, the return\n value from ``__enter__()`` is assigned to it.\n\n Note: The ``with`` statement guarantees that if the ``__enter__()``\n method returns without an error, then ``__exit__()`` will always\n be called. Thus, if an error occurs during the assignment to the\n target list, it will be treated the same as an error occurring\n within the suite would be. See step 6 below.\n\n5. The suite is executed.\n\n6. The context manager\'s ``__exit__()`` method is invoked. If an\n exception caused the suite to be exited, its type, value, and\n traceback are passed as arguments to ``__exit__()``. Otherwise,\n three ``None`` arguments are supplied.\n\n If the suite was exited due to an exception, and the return value\n from the ``__exit__()`` method was false, the exception is\n reraised. 
If the return value was true, the exception is\n suppressed, and execution continues with the statement following\n the ``with`` statement.\n\n If the suite was exited for any reason other than an exception, the\n return value from ``__exit__()`` is ignored, and execution proceeds\n at the normal location for the kind of exit that was taken.\n\nWith more than one item, the context managers are processed as if\nmultiple ``with`` statements were nested:\n\n with A() as a, B() as b:\n suite\n\nis equivalent to\n\n with A() as a:\n with B() as b:\n suite\n\nNote: In Python 2.5, the ``with`` statement is only allowed when the\n ``with_statement`` feature has been enabled. It is always enabled\n in Python 2.6.\n\nChanged in version 2.7: Support for multiple context expressions.\n\nSee also:\n\n **PEP 0343** - The "with" statement\n The specification, background, and examples for the Python\n ``with`` statement.\n\n\nFunction definitions\n====================\n\nA function definition defines a user-defined function object (see\nsection *The standard type hierarchy*):\n\n decorated ::= decorators (classdef | funcdef)\n decorators ::= decorator+\n decorator ::= "@" dotted_name ["(" [argument_list [","]] ")"] NEWLINE\n funcdef ::= "def" funcname "(" [parameter_list] ")" ":" suite\n dotted_name ::= identifier ("." identifier)*\n parameter_list ::= (defparameter ",")*\n ( "*" identifier [, "**" identifier]\n | "**" identifier\n | defparameter [","] )\n defparameter ::= parameter ["=" expression]\n sublist ::= parameter ("," parameter)* [","]\n parameter ::= identifier | "(" sublist ")"\n funcname ::= identifier\n\nA function definition is an executable statement. Its execution binds\nthe function name in the current local namespace to a function object\n(a wrapper around the executable code for the function). This\nfunction object contains a reference to the current global namespace\nas the global namespace to be used when the function is called.\n\nThe function definition does not execute the function body; this gets\nexecuted only when the function is called. [3]\n\nA function definition may be wrapped by one or more *decorator*\nexpressions. Decorator expressions are evaluated when the function is\ndefined, in the scope that contains the function definition. The\nresult must be a callable, which is invoked with the function object\nas the only argument. The returned value is bound to the function name\ninstead of the function object. Multiple decorators are applied in\nnested fashion. For example, the following code:\n\n @f1(arg)\n @f2\n def func(): pass\n\nis equivalent to:\n\n def func(): pass\n func = f1(arg)(f2(func))\n\nWhen one or more top-level parameters have the form *parameter* ``=``\n*expression*, the function is said to have "default parameter values."\nFor a parameter with a default value, the corresponding argument may\nbe omitted from a call, in which case the parameter\'s default value is\nsubstituted. If a parameter has a default value, all following\nparameters must also have a default value --- this is a syntactic\nrestriction that is not expressed by the grammar.\n\n**Default parameter values are evaluated when the function definition\nis executed.** This means that the expression is evaluated once, when\nthe function is defined, and that that same "pre-computed" value is\nused for each call. This is especially important to understand when a\ndefault parameter is a mutable object, such as a list or a dictionary:\nif the function modifies the object (e.g. 
by appending an item to a\nlist), the default value is in effect modified. This is generally not\nwhat was intended. A way around this is to use ``None`` as the\ndefault, and explicitly test for it in the body of the function, e.g.:\n\n def whats_on_the_telly(penguin=None):\n if penguin is None:\n penguin = []\n penguin.append("property of the zoo")\n return penguin\n\nFunction call semantics are described in more detail in section\n*Calls*. A function call always assigns values to all parameters\nmentioned in the parameter list, either from position arguments, from\nkeyword arguments, or from default values. If the form\n"``*identifier``" is present, it is initialized to a tuple receiving\nany excess positional parameters, defaulting to the empty tuple. If\nthe form "``**identifier``" is present, it is initialized to a new\ndictionary receiving any excess keyword arguments, defaulting to a new\nempty dictionary.\n\nIt is also possible to create anonymous functions (functions not bound\nto a name), for immediate use in expressions. This uses lambda forms,\ndescribed in section *Lambdas*. Note that the lambda form is merely a\nshorthand for a simplified function definition; a function defined in\na "``def``" statement can be passed around or assigned to another name\njust like a function defined by a lambda form. The "``def``" form is\nactually more powerful since it allows the execution of multiple\nstatements.\n\n**Programmer\'s note:** Functions are first-class objects. A "``def``"\nform executed inside a function definition defines a local function\nthat can be returned or passed around. Free variables used in the\nnested function can access the local variables of the function\ncontaining the def. See section *Naming and binding* for details.\n\n\nClass definitions\n=================\n\nA class definition defines a class object (see section *The standard\ntype hierarchy*):\n\n classdef ::= "class" classname [inheritance] ":" suite\n inheritance ::= "(" [expression_list] ")"\n classname ::= identifier\n\nA class definition is an executable statement. It first evaluates the\ninheritance list, if present. Each item in the inheritance list\nshould evaluate to a class object or class type which allows\nsubclassing. The class\'s suite is then executed in a new execution\nframe (see section *Naming and binding*), using a newly created local\nnamespace and the original global namespace. (Usually, the suite\ncontains only function definitions.) When the class\'s suite finishes\nexecution, its execution frame is discarded but its local namespace is\nsaved. [4] A class object is then created using the inheritance list\nfor the base classes and the saved local namespace for the attribute\ndictionary. The class name is bound to this class object in the\noriginal local namespace.\n\n**Programmer\'s note:** Variables defined in the class definition are\nclass variables; they are shared by all instances. To create instance\nvariables, they can be set in a method with ``self.name = value``.\nBoth class and instance variables are accessible through the notation\n"``self.name``", and an instance variable hides a class variable with\nthe same name when accessed in this way. Class variables can be used\nas defaults for instance variables, but using mutable values there can\nlead to unexpected results. 
For *new-style class*es, descriptors can\nbe used to create instance variables with different implementation\ndetails.\n\nClass definitions, like function definitions, may be wrapped by one or\nmore *decorator* expressions. The evaluation rules for the decorator\nexpressions are the same as for functions. The result must be a class\nobject, which is then bound to the class name.\n\n-[ Footnotes ]-\n\n[1] The exception is propagated to the invocation stack only if there\n is no ``finally`` clause that negates the exception.\n\n[2] Currently, control "flows off the end" except in the case of an\n exception or the execution of a ``return``, ``continue``, or\n ``break`` statement.\n\n[3] A string literal appearing as the first statement in the function\n body is transformed into the function\'s ``__doc__`` attribute and\n therefore the function\'s *docstring*.\n\n[4] A string literal appearing as the first statement in the class\n body is transformed into the namespace\'s ``__doc__`` item and\n therefore the class\'s *docstring*.\n', + 'compound': u'\nCompound statements\n*******************\n\nCompound statements contain (groups of) other statements; they affect\nor control the execution of those other statements in some way. In\ngeneral, compound statements span multiple lines, although in simple\nincarnations a whole compound statement may be contained in one line.\n\nThe ``if``, ``while`` and ``for`` statements implement traditional\ncontrol flow constructs. ``try`` specifies exception handlers and/or\ncleanup code for a group of statements. Function and class\ndefinitions are also syntactically compound statements.\n\nCompound statements consist of one or more \'clauses.\' A clause\nconsists of a header and a \'suite.\' The clause headers of a\nparticular compound statement are all at the same indentation level.\nEach clause header begins with a uniquely identifying keyword and ends\nwith a colon. A suite is a group of statements controlled by a\nclause. A suite can be one or more semicolon-separated simple\nstatements on the same line as the header, following the header\'s\ncolon, or it can be one or more indented statements on subsequent\nlines. Only the latter form of suite can contain nested compound\nstatements; the following is illegal, mostly because it wouldn\'t be\nclear to which ``if`` clause a following ``else`` clause would belong:\n\n if test1: if test2: print x\n\nAlso note that the semicolon binds tighter than the colon in this\ncontext, so that in the following example, either all or none of the\n``print`` statements are executed:\n\n if x < y < z: print x; print y; print z\n\nSummarizing:\n\n compound_stmt ::= if_stmt\n | while_stmt\n | for_stmt\n | try_stmt\n | with_stmt\n | funcdef\n | classdef\n | decorated\n suite ::= stmt_list NEWLINE | NEWLINE INDENT statement+ DEDENT\n statement ::= stmt_list NEWLINE | compound_stmt\n stmt_list ::= simple_stmt (";" simple_stmt)* [";"]\n\nNote that statements always end in a ``NEWLINE`` possibly followed by\na ``DEDENT``. 
Also note that optional continuation clauses always\nbegin with a keyword that cannot start a statement, thus there are no\nambiguities (the \'dangling ``else``\' problem is solved in Python by\nrequiring nested ``if`` statements to be indented).\n\nThe formatting of the grammar rules in the following sections places\neach clause on a separate line for clarity.\n\n\nThe ``if`` statement\n====================\n\nThe ``if`` statement is used for conditional execution:\n\n if_stmt ::= "if" expression ":" suite\n ( "elif" expression ":" suite )*\n ["else" ":" suite]\n\nIt selects exactly one of the suites by evaluating the expressions one\nby one until one is found to be true (see section *Boolean operations*\nfor the definition of true and false); then that suite is executed\n(and no other part of the ``if`` statement is executed or evaluated).\nIf all expressions are false, the suite of the ``else`` clause, if\npresent, is executed.\n\n\nThe ``while`` statement\n=======================\n\nThe ``while`` statement is used for repeated execution as long as an\nexpression is true:\n\n while_stmt ::= "while" expression ":" suite\n ["else" ":" suite]\n\nThis repeatedly tests the expression and, if it is true, executes the\nfirst suite; if the expression is false (which may be the first time\nit is tested) the suite of the ``else`` clause, if present, is\nexecuted and the loop terminates.\n\nA ``break`` statement executed in the first suite terminates the loop\nwithout executing the ``else`` clause\'s suite. A ``continue``\nstatement executed in the first suite skips the rest of the suite and\ngoes back to testing the expression.\n\n\nThe ``for`` statement\n=====================\n\nThe ``for`` statement is used to iterate over the elements of a\nsequence (such as a string, tuple or list) or other iterable object:\n\n for_stmt ::= "for" target_list "in" expression_list ":" suite\n ["else" ":" suite]\n\nThe expression list is evaluated once; it should yield an iterable\nobject. An iterator is created for the result of the\n``expression_list``. The suite is then executed once for each item\nprovided by the iterator, in the order of ascending indices. Each\nitem in turn is assigned to the target list using the standard rules\nfor assignments, and then the suite is executed. When the items are\nexhausted (which is immediately when the sequence is empty), the suite\nin the ``else`` clause, if present, is executed, and the loop\nterminates.\n\nA ``break`` statement executed in the first suite terminates the loop\nwithout executing the ``else`` clause\'s suite. A ``continue``\nstatement executed in the first suite skips the rest of the suite and\ncontinues with the next item, or with the ``else`` clause if there was\nno next item.\n\nThe suite may assign to the variable(s) in the target list; this does\nnot affect the next item assigned to it.\n\nThe target list is not deleted when the loop is finished, but if the\nsequence is empty, it will not have been assigned to at all by the\nloop. Hint: the built-in function ``range()`` returns a sequence of\nintegers suitable to emulate the effect of Pascal\'s ``for i := a to b\ndo``; e.g., ``range(3)`` returns the list ``[0, 1, 2]``.\n\nNote: There is a subtlety when the sequence is being modified by the loop\n (this can only occur for mutable sequences, i.e. lists). An internal\n counter is used to keep track of which item is used next, and this\n is incremented on each iteration. When this counter has reached the\n length of the sequence the loop terminates. 
This means that if the\n suite deletes the current (or a previous) item from the sequence,\n the next item will be skipped (since it gets the index of the\n current item which has already been treated). Likewise, if the\n suite inserts an item in the sequence before the current item, the\n current item will be treated again the next time through the loop.\n This can lead to nasty bugs that can be avoided by making a\n temporary copy using a slice of the whole sequence, e.g.,\n\n for x in a[:]:\n if x < 0: a.remove(x)\n\n\nThe ``try`` statement\n=====================\n\nThe ``try`` statement specifies exception handlers and/or cleanup code\nfor a group of statements:\n\n try_stmt ::= try1_stmt | try2_stmt\n try1_stmt ::= "try" ":" suite\n ("except" [expression [("as" | ",") target]] ":" suite)+\n ["else" ":" suite]\n ["finally" ":" suite]\n try2_stmt ::= "try" ":" suite\n "finally" ":" suite\n\nChanged in version 2.5: In previous versions of Python,\n``try``...``except``...``finally`` did not work. ``try``...``except``\nhad to be nested in ``try``...``finally``.\n\nThe ``except`` clause(s) specify one or more exception handlers. When\nno exception occurs in the ``try`` clause, no exception handler is\nexecuted. When an exception occurs in the ``try`` suite, a search for\nan exception handler is started. This search inspects the except\nclauses in turn until one is found that matches the exception. An\nexpression-less except clause, if present, must be last; it matches\nany exception. For an except clause with an expression, that\nexpression is evaluated, and the clause matches the exception if the\nresulting object is "compatible" with the exception. An object is\ncompatible with an exception if it is the class or a base class of the\nexception object, a tuple containing an item compatible with the\nexception, or, in the (deprecated) case of string exceptions, is the\nraised string itself (note that the object identities must match, i.e.\nit must be the same string object, not just a string with the same\nvalue).\n\nIf no except clause matches the exception, the search for an exception\nhandler continues in the surrounding code and on the invocation stack.\n[1]\n\nIf the evaluation of an expression in the header of an except clause\nraises an exception, the original search for a handler is canceled and\na search starts for the new exception in the surrounding code and on\nthe call stack (it is treated as if the entire ``try`` statement\nraised the exception).\n\nWhen a matching except clause is found, the exception is assigned to\nthe target specified in that except clause, if present, and the except\nclause\'s suite is executed. All except clauses must have an\nexecutable block. When the end of this block is reached, execution\ncontinues normally after the entire try statement. (This means that\nif two nested handlers exist for the same exception, and the exception\noccurs in the try clause of the inner handler, the outer handler will\nnot handle the exception.)\n\nBefore an except clause\'s suite is executed, details about the\nexception are assigned to three variables in the ``sys`` module:\n``sys.exc_type`` receives the object identifying the exception;\n``sys.exc_value`` receives the exception\'s parameter;\n``sys.exc_traceback`` receives a traceback object (see section *The\nstandard type hierarchy*) identifying the point in the program where\nthe exception occurred. 
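A small sketch of the clause order in the ``try`` statement described above; the exception type and messages are illustrative only:

   def parse(text):
       try:
           value = int(text)
       except ValueError, exc:          # "except ValueError as exc:" also works in 2.6+
           print "not a number:", exc
           return None
       else:
           print "parsed cleanly"       # runs only if no exception occurred
           return value
       finally:
           print "always runs"          # executed on the way out in every case

   parse("42")      # parsed cleanly / always runs
   parse("spam")    # not a number: ... / always runs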
These details are also available through the\n``sys.exc_info()`` function, which returns a tuple ``(exc_type,\nexc_value, exc_traceback)``. Use of the corresponding variables is\ndeprecated in favor of this function, since their use is unsafe in a\nthreaded program. As of Python 1.5, the variables are restored to\ntheir previous values (before the call) when returning from a function\nthat handled an exception.\n\nThe optional ``else`` clause is executed if and when control flows off\nthe end of the ``try`` clause. [2] Exceptions in the ``else`` clause\nare not handled by the preceding ``except`` clauses.\n\nIf ``finally`` is present, it specifies a \'cleanup\' handler. The\n``try`` clause is executed, including any ``except`` and ``else``\nclauses. If an exception occurs in any of the clauses and is not\nhandled, the exception is temporarily saved. The ``finally`` clause is\nexecuted. If there is a saved exception, it is re-raised at the end\nof the ``finally`` clause. If the ``finally`` clause raises another\nexception or executes a ``return`` or ``break`` statement, the saved\nexception is lost. The exception information is not available to the\nprogram during execution of the ``finally`` clause.\n\nWhen a ``return``, ``break`` or ``continue`` statement is executed in\nthe ``try`` suite of a ``try``...``finally`` statement, the\n``finally`` clause is also executed \'on the way out.\' A ``continue``\nstatement is illegal in the ``finally`` clause. (The reason is a\nproblem with the current implementation --- this restriction may be\nlifted in the future).\n\nAdditional information on exceptions can be found in section\n*Exceptions*, and information on using the ``raise`` statement to\ngenerate exceptions may be found in section *The raise statement*.\n\n\nThe ``with`` statement\n======================\n\nNew in version 2.5.\n\nThe ``with`` statement is used to wrap the execution of a block with\nmethods defined by a context manager (see section *With Statement\nContext Managers*). This allows common\n``try``...``except``...``finally`` usage patterns to be encapsulated\nfor convenient reuse.\n\n with_stmt ::= "with" with_item ("," with_item)* ":" suite\n with_item ::= expression ["as" target]\n\nThe execution of the ``with`` statement with one "item" proceeds as\nfollows:\n\n1. The context expression (the expression given in the **with_item**)\n is evaluated to obtain a context manager.\n\n2. The context manager\'s ``__exit__()`` is loaded for later use.\n\n3. The context manager\'s ``__enter__()`` method is invoked.\n\n4. If a target was included in the ``with`` statement, the return\n value from ``__enter__()`` is assigned to it.\n\n Note: The ``with`` statement guarantees that if the ``__enter__()``\n method returns without an error, then ``__exit__()`` will always\n be called. Thus, if an error occurs during the assignment to the\n target list, it will be treated the same as an error occurring\n within the suite would be. See step 6 below.\n\n5. The suite is executed.\n\n6. The context manager\'s ``__exit__()`` method is invoked. If an\n exception caused the suite to be exited, its type, value, and\n traceback are passed as arguments to ``__exit__()``. Otherwise,\n three ``None`` arguments are supplied.\n\n If the suite was exited due to an exception, and the return value\n from the ``__exit__()`` method was false, the exception is\n reraised. 
If the return value was true, the exception is\n suppressed, and execution continues with the statement following\n the ``with`` statement.\n\n If the suite was exited for any reason other than an exception, the\n return value from ``__exit__()`` is ignored, and execution proceeds\n at the normal location for the kind of exit that was taken.\n\nWith more than one item, the context managers are processed as if\nmultiple ``with`` statements were nested:\n\n with A() as a, B() as b:\n suite\n\nis equivalent to\n\n with A() as a:\n with B() as b:\n suite\n\nNote: In Python 2.5, the ``with`` statement is only allowed when the\n ``with_statement`` feature has been enabled. It is always enabled\n in Python 2.6.\n\nChanged in version 2.7: Support for multiple context expressions.\n\nSee also:\n\n **PEP 0343** - The "with" statement\n The specification, background, and examples for the Python\n ``with`` statement.\n\n\nFunction definitions\n====================\n\nA function definition defines a user-defined function object (see\nsection *The standard type hierarchy*):\n\n decorated ::= decorators (classdef | funcdef)\n decorators ::= decorator+\n decorator ::= "@" dotted_name ["(" [argument_list [","]] ")"] NEWLINE\n funcdef ::= "def" funcname "(" [parameter_list] ")" ":" suite\n dotted_name ::= identifier ("." identifier)*\n parameter_list ::= (defparameter ",")*\n ( "*" identifier [, "**" identifier]\n | "**" identifier\n | defparameter [","] )\n defparameter ::= parameter ["=" expression]\n sublist ::= parameter ("," parameter)* [","]\n parameter ::= identifier | "(" sublist ")"\n funcname ::= identifier\n\nA function definition is an executable statement. Its execution binds\nthe function name in the current local namespace to a function object\n(a wrapper around the executable code for the function). This\nfunction object contains a reference to the current global namespace\nas the global namespace to be used when the function is called.\n\nThe function definition does not execute the function body; this gets\nexecuted only when the function is called. [3]\n\nA function definition may be wrapped by one or more *decorator*\nexpressions. Decorator expressions are evaluated when the function is\ndefined, in the scope that contains the function definition. The\nresult must be a callable, which is invoked with the function object\nas the only argument. The returned value is bound to the function name\ninstead of the function object. Multiple decorators are applied in\nnested fashion. For example, the following code:\n\n @f1(arg)\n @f2\n def func(): pass\n\nis equivalent to:\n\n def func(): pass\n func = f1(arg)(f2(func))\n\nWhen one or more top-level parameters have the form *parameter* ``=``\n*expression*, the function is said to have "default parameter values."\nFor a parameter with a default value, the corresponding argument may\nbe omitted from a call, in which case the parameter\'s default value is\nsubstituted. If a parameter has a default value, all following\nparameters must also have a default value --- this is a syntactic\nrestriction that is not expressed by the grammar.\n\n**Default parameter values are evaluated when the function definition\nis executed.** This means that the expression is evaluated once, when\nthe function is defined, and that that same "pre-computed" value is\nused for each call. This is especially important to understand when a\ndefault parameter is a mutable object, such as a list or a dictionary:\nif the function modifies the object (e.g. 
by appending an item to a\nlist), the default value is in effect modified. This is generally not\nwhat was intended. A way around this is to use ``None`` as the\ndefault, and explicitly test for it in the body of the function, e.g.:\n\n def whats_on_the_telly(penguin=None):\n if penguin is None:\n penguin = []\n penguin.append("property of the zoo")\n return penguin\n\nFunction call semantics are described in more detail in section\n*Calls*. A function call always assigns values to all parameters\nmentioned in the parameter list, either from position arguments, from\nkeyword arguments, or from default values. If the form\n"``*identifier``" is present, it is initialized to a tuple receiving\nany excess positional parameters, defaulting to the empty tuple. If\nthe form "``**identifier``" is present, it is initialized to a new\ndictionary receiving any excess keyword arguments, defaulting to a new\nempty dictionary.\n\nIt is also possible to create anonymous functions (functions not bound\nto a name), for immediate use in expressions. This uses lambda forms,\ndescribed in section *Lambdas*. Note that the lambda form is merely a\nshorthand for a simplified function definition; a function defined in\na "``def``" statement can be passed around or assigned to another name\njust like a function defined by a lambda form. The "``def``" form is\nactually more powerful since it allows the execution of multiple\nstatements.\n\n**Programmer\'s note:** Functions are first-class objects. A "``def``"\nform executed inside a function definition defines a local function\nthat can be returned or passed around. Free variables used in the\nnested function can access the local variables of the function\ncontaining the def. See section *Naming and binding* for details.\n\n\nClass definitions\n=================\n\nA class definition defines a class object (see section *The standard\ntype hierarchy*):\n\n classdef ::= "class" classname [inheritance] ":" suite\n inheritance ::= "(" [expression_list] ")"\n classname ::= identifier\n\nA class definition is an executable statement. It first evaluates the\ninheritance list, if present. Each item in the inheritance list\nshould evaluate to a class object or class type which allows\nsubclassing. The class\'s suite is then executed in a new execution\nframe (see section *Naming and binding*), using a newly created local\nnamespace and the original global namespace. (Usually, the suite\ncontains only function definitions.) When the class\'s suite finishes\nexecution, its execution frame is discarded but its local namespace is\nsaved. [4] A class object is then created using the inheritance list\nfor the base classes and the saved local namespace for the attribute\ndictionary. The class name is bound to this class object in the\noriginal local namespace.\n\n**Programmer\'s note:** Variables defined in the class definition are\nclass variables; they are shared by all instances. To create instance\nvariables, they can be set in a method with ``self.name = value``.\nBoth class and instance variables are accessible through the notation\n"``self.name``", and an instance variable hides a class variable with\nthe same name when accessed in this way. Class variables can be used\nas defaults for instance variables, but using mutable values there can\nlead to unexpected results. 
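The ``*identifier`` and ``**identifier`` forms mentioned above collect excess arguments; a brief sketch with invented names:

   def report(first, *rest, **options):
       print first                      # the first positional argument
       print rest                       # excess positional arguments as a tuple
       print options                    # excess keyword arguments as a dict

   report(1)                            # 1 / () / {}
   report(1, 2, 3, verbose=True)        # 1 / (2, 3) / {'verbose': True}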
For *new-style class*es, descriptors can\nbe used to create instance variables with different implementation\ndetails.\n\nClass definitions, like function definitions, may be wrapped by one or\nmore *decorator* expressions. The evaluation rules for the decorator\nexpressions are the same as for functions. The result must be a class\nobject, which is then bound to the class name.\n\n-[ Footnotes ]-\n\n[1] The exception is propagated to the invocation stack only if there\n is no ``finally`` clause that negates the exception.\n\n[2] Currently, control "flows off the end" except in the case of an\n exception or the execution of a ``return``, ``continue``, or\n ``break`` statement.\n\n[3] A string literal appearing as the first statement in the function\n body is transformed into the function\'s ``__doc__`` attribute and\n therefore the function\'s *docstring*.\n\n[4] A string literal appearing as the first statement in the class\n body is transformed into the namespace\'s ``__doc__`` item and\n therefore the class\'s *docstring*.\n', 'context-managers': u'\nWith Statement Context Managers\n*******************************\n\nNew in version 2.5.\n\nA *context manager* is an object that defines the runtime context to\nbe established when executing a ``with`` statement. The context\nmanager handles the entry into, and the exit from, the desired runtime\ncontext for the execution of the block of code. Context managers are\nnormally invoked using the ``with`` statement (described in section\n*The with statement*), but can also be used by directly invoking their\nmethods.\n\nTypical uses of context managers include saving and restoring various\nkinds of global state, locking and unlocking resources, closing opened\nfiles, etc.\n\nFor more information on context managers, see *Context Manager Types*.\n\nobject.__enter__(self)\n\n Enter the runtime context related to this object. The ``with``\n statement will bind this method\'s return value to the target(s)\n specified in the ``as`` clause of the statement, if any.\n\nobject.__exit__(self, exc_type, exc_value, traceback)\n\n Exit the runtime context related to this object. The parameters\n describe the exception that caused the context to be exited. If the\n context was exited without an exception, all three arguments will\n be ``None``.\n\n If an exception is supplied, and the method wishes to suppress the\n exception (i.e., prevent it from being propagated), it should\n return a true value. Otherwise, the exception will be processed\n normally upon exit from this method.\n\n Note that ``__exit__()`` methods should not reraise the passed-in\n exception; this is the caller\'s responsibility.\n\nSee also:\n\n **PEP 0343** - The "with" statement\n The specification, background, and examples for the Python\n ``with`` statement.\n', 'continue': u'\nThe ``continue`` statement\n**************************\n\n continue_stmt ::= "continue"\n\n``continue`` may only occur syntactically nested in a ``for`` or\n``while`` loop, but not nested in a function or class definition or\n``finally`` clause within that loop. 
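A minimal hand-written context manager along the lines described above; ``timed`` is an invented example, not a standard name:

   import time

   class timed(object):
       def __enter__(self):
           self.start = time.time()
           return self                      # bound to the ``as`` target, if any
       def __exit__(self, exc_type, exc_value, traceback):
           print "took %.3fs" % (time.time() - self.start)
           return False                     # do not suppress exceptions

   with timed() as t:
       sum(range(1000000))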
It continues with the next cycle\nof the nearest enclosing loop.\n\nWhen ``continue`` passes control out of a ``try`` statement with a\n``finally`` clause, that ``finally`` clause is executed before really\nstarting the next loop cycle.\n', 'conversions': u'\nArithmetic conversions\n**********************\n\nWhen a description of an arithmetic operator below uses the phrase\n"the numeric arguments are converted to a common type," the arguments\nare coerced using the coercion rules listed at *Coercion rules*. If\nboth arguments are standard numeric types, the following coercions are\napplied:\n\n* If either argument is a complex number, the other is converted to\n complex;\n\n* otherwise, if either argument is a floating point number, the other\n is converted to floating point;\n\n* otherwise, if either argument is a long integer, the other is\n converted to long integer;\n\n* otherwise, both must be plain integers and no conversion is\n necessary.\n\nSome additional rules apply for certain operators (e.g., a string left\nargument to the \'%\' operator). Extensions can define their own\ncoercions.\n', 'customization': u'\nBasic customization\n*******************\n\nobject.__new__(cls[, ...])\n\n Called to create a new instance of class *cls*. ``__new__()`` is a\n static method (special-cased so you need not declare it as such)\n that takes the class of which an instance was requested as its\n first argument. The remaining arguments are those passed to the\n object constructor expression (the call to the class). The return\n value of ``__new__()`` should be the new object instance (usually\n an instance of *cls*).\n\n Typical implementations create a new instance of the class by\n invoking the superclass\'s ``__new__()`` method using\n ``super(currentclass, cls).__new__(cls[, ...])`` with appropriate\n arguments and then modifying the newly-created instance as\n necessary before returning it.\n\n If ``__new__()`` returns an instance of *cls*, then the new\n instance\'s ``__init__()`` method will be invoked like\n ``__init__(self[, ...])``, where *self* is the new instance and the\n remaining arguments are the same as were passed to ``__new__()``.\n\n If ``__new__()`` does not return an instance of *cls*, then the new\n instance\'s ``__init__()`` method will not be invoked.\n\n ``__new__()`` is intended mainly to allow subclasses of immutable\n types (like int, str, or tuple) to customize instance creation. It\n is also commonly overridden in custom metaclasses in order to\n customize class creation.\n\nobject.__init__(self[, ...])\n\n Called when the instance is created. The arguments are those\n passed to the class constructor expression. If a base class has an\n ``__init__()`` method, the derived class\'s ``__init__()`` method,\n if any, must explicitly call it to ensure proper initialization of\n the base class part of the instance; for example:\n ``BaseClass.__init__(self, [args...])``. As a special constraint\n on constructors, no value may be returned; doing so will cause a\n ``TypeError`` to be raised at runtime.\n\nobject.__del__(self)\n\n Called when the instance is about to be destroyed. This is also\n called a destructor. If a base class has a ``__del__()`` method,\n the derived class\'s ``__del__()`` method, if any, must explicitly\n call it to ensure proper deletion of the base class part of the\n instance. Note that it is possible (though not recommended!) for\n the ``__del__()`` method to postpone destruction of the instance by\n creating a new reference to it. 
It may then be called at a later\n time when this new reference is deleted. It is not guaranteed that\n ``__del__()`` methods are called for objects that still exist when\n the interpreter exits.\n\n Note: ``del x`` doesn\'t directly call ``x.__del__()`` --- the former\n decrements the reference count for ``x`` by one, and the latter\n is only called when ``x``\'s reference count reaches zero. Some\n common situations that may prevent the reference count of an\n object from going to zero include: circular references between\n objects (e.g., a doubly-linked list or a tree data structure with\n parent and child pointers); a reference to the object on the\n stack frame of a function that caught an exception (the traceback\n stored in ``sys.exc_traceback`` keeps the stack frame alive); or\n a reference to the object on the stack frame that raised an\n unhandled exception in interactive mode (the traceback stored in\n ``sys.last_traceback`` keeps the stack frame alive). The first\n situation can only be remedied by explicitly breaking the cycles;\n the latter two situations can be resolved by storing ``None`` in\n ``sys.exc_traceback`` or ``sys.last_traceback``. Circular\n references which are garbage are detected when the option cycle\n detector is enabled (it\'s on by default), but can only be cleaned\n up if there are no Python-level ``__del__()`` methods involved.\n Refer to the documentation for the ``gc`` module for more\n information about how ``__del__()`` methods are handled by the\n cycle detector, particularly the description of the ``garbage``\n value.\n\n Warning: Due to the precarious circumstances under which ``__del__()``\n methods are invoked, exceptions that occur during their execution\n are ignored, and a warning is printed to ``sys.stderr`` instead.\n Also, when ``__del__()`` is invoked in response to a module being\n deleted (e.g., when execution of the program is done), other\n globals referenced by the ``__del__()`` method may already have\n been deleted or in the process of being torn down (e.g. the\n import machinery shutting down). For this reason, ``__del__()``\n methods should do the absolute minimum needed to maintain\n external invariants. Starting with version 1.5, Python\n guarantees that globals whose name begins with a single\n underscore are deleted from their module before other globals are\n deleted; if no other references to such globals exist, this may\n help in assuring that imported modules are still available at the\n time when the ``__del__()`` method is called.\n\nobject.__repr__(self)\n\n Called by the ``repr()`` built-in function and by string\n conversions (reverse quotes) to compute the "official" string\n representation of an object. If at all possible, this should look\n like a valid Python expression that could be used to recreate an\n object with the same value (given an appropriate environment). If\n this is not possible, a string of the form ``<...some useful\n description...>`` should be returned. The return value must be a\n string object. If a class defines ``__repr__()`` but not\n ``__str__()``, then ``__repr__()`` is also used when an "informal"\n string representation of instances of that class is required.\n\n This is typically used for debugging, so it is important that the\n representation is information-rich and unambiguous.\n\nobject.__str__(self)\n\n Called by the ``str()`` built-in function and by the ``print``\n statement to compute the "informal" string representation of an\n object. 
This differs from ``__repr__()`` in that it does not have\n to be a valid Python expression: a more convenient or concise\n representation may be used instead. The return value must be a\n string object.\n\nobject.__lt__(self, other)\nobject.__le__(self, other)\nobject.__eq__(self, other)\nobject.__ne__(self, other)\nobject.__gt__(self, other)\nobject.__ge__(self, other)\n\n New in version 2.1.\n\n These are the so-called "rich comparison" methods, and are called\n for comparison operators in preference to ``__cmp__()`` below. The\n correspondence between operator symbols and method names is as\n follows: ``x<y`` calls ``x.__lt__(y)``, ``x<=y`` calls\n ``x.__le__(y)``, ``x==y`` calls ``x.__eq__(y)``, ``x!=y`` and\n ``x<>y`` call ``x.__ne__(y)``, ``x>y`` calls ``x.__gt__(y)``, and\n ``x>=y`` calls ``x.__ge__(y)``.\n\n A rich comparison method may return the singleton\n ``NotImplemented`` if it does not implement the operation for a\n given pair of arguments. By convention, ``False`` and ``True`` are\n returned for a successful comparison. However, these methods can\n return any value, so if the comparison operator is used in a\n Boolean context (e.g., in the condition of an ``if`` statement),\n Python will call ``bool()`` on the value to determine if the result\n is true or false.\n\n There are no implied relationships among the comparison operators.\n The truth of ``x==y`` does not imply that ``x!=y`` is false.\n Accordingly, when defining ``__eq__()``, one should also define\n ``__ne__()`` so that the operators will behave as expected. See\n the paragraph on ``__hash__()`` for some important notes on\n creating *hashable* objects which support custom comparison\n operations and are usable as dictionary keys.\n\n There are no swapped-argument versions of these methods (to be used\n when the left argument does not support the operation but the right\n argument does); rather, ``__lt__()`` and ``__gt__()`` are each\n other\'s reflection, ``__le__()`` and ``__ge__()`` are each other\'s\n reflection, and ``__eq__()`` and ``__ne__()`` are their own\n reflection.\n\n Arguments to rich comparison methods are never coerced.\n\n To automatically generate ordering operations from a single root\n operation, see ``functools.total_ordering()``.\n\nobject.__cmp__(self, other)\n\n Called by comparison operations if rich comparison (see above) is\n not defined. Should return a negative integer if ``self < other``,\n zero if ``self == other``, a positive integer if ``self > other``.\n If no ``__cmp__()``, ``__eq__()`` or ``__ne__()`` operation is\n defined, class instances are compared by object identity\n ("address"). See also the description of ``__hash__()`` for some\n important notes on creating *hashable* objects which support custom\n comparison operations and are usable as dictionary keys. (Note: the\n restriction that exceptions are not propagated by ``__cmp__()`` has\n been removed since Python 1.5.)\n\nobject.__rcmp__(self, other)\n\n Changed in version 2.1: No longer supported.\n\nobject.__hash__(self)\n\n Called by built-in function ``hash()`` and for operations on\n members of hashed collections including ``set``, ``frozenset``, and\n ``dict``. ``__hash__()`` should return an integer. The only\n required property is that objects which compare equal have the same\n hash value; it is advised to somehow mix together (e.g. 
using\n exclusive or) the hash values for the components of the object that\n also play a part in comparison of objects.\n\n If a class does not define a ``__cmp__()`` or ``__eq__()`` method\n it should not define a ``__hash__()`` operation either; if it\n defines ``__cmp__()`` or ``__eq__()`` but not ``__hash__()``, its\n instances will not be usable in hashed collections. If a class\n defines mutable objects and implements a ``__cmp__()`` or\n ``__eq__()`` method, it should not implement ``__hash__()``, since\n hashable collection implementations require that a object\'s hash\n value is immutable (if the object\'s hash value changes, it will be\n in the wrong hash bucket).\n\n User-defined classes have ``__cmp__()`` and ``__hash__()`` methods\n by default; with them, all objects compare unequal (except with\n themselves) and ``x.__hash__()`` returns ``id(x)``.\n\n Classes which inherit a ``__hash__()`` method from a parent class\n but change the meaning of ``__cmp__()`` or ``__eq__()`` such that\n the hash value returned is no longer appropriate (e.g. by switching\n to a value-based concept of equality instead of the default\n identity based equality) can explicitly flag themselves as being\n unhashable by setting ``__hash__ = None`` in the class definition.\n Doing so means that not only will instances of the class raise an\n appropriate ``TypeError`` when a program attempts to retrieve their\n hash value, but they will also be correctly identified as\n unhashable when checking ``isinstance(obj, collections.Hashable)``\n (unlike classes which define their own ``__hash__()`` to explicitly\n raise ``TypeError``).\n\n Changed in version 2.5: ``__hash__()`` may now also return a long\n integer object; the 32-bit integer is then derived from the hash of\n that object.\n\n Changed in version 2.6: ``__hash__`` may now be set to ``None`` to\n explicitly flag instances of a class as unhashable.\n\nobject.__nonzero__(self)\n\n Called to implement truth value testing and the built-in operation\n ``bool()``; should return ``False`` or ``True``, or their integer\n equivalents ``0`` or ``1``. When this method is not defined,\n ``__len__()`` is called, if it is defined, and the object is\n considered true if its result is nonzero. If a class defines\n neither ``__len__()`` nor ``__nonzero__()``, all its instances are\n considered true.\n\nobject.__unicode__(self)\n\n Called to implement ``unicode()`` built-in; should return a Unicode\n object. When this method is not defined, string conversion is\n attempted, and the result of string conversion is converted to\n Unicode using the system default encoding.\n', - 'debugger': u'\n``pdb`` --- The Python Debugger\n*******************************\n\nThe module ``pdb`` defines an interactive source code debugger for\nPython programs. It supports setting (conditional) breakpoints and\nsingle stepping at the source line level, inspection of stack frames,\nsource code listing, and evaluation of arbitrary Python code in the\ncontext of any stack frame. It also supports post-mortem debugging\nand can be called under program control.\n\nThe debugger is extensible --- it is actually defined as the class\n``Pdb``. This is currently undocumented but easily understood by\nreading the source. The extension interface uses the modules ``bdb``\nand ``cmd``.\n\nThe debugger\'s prompt is ``(Pdb)``. 
Typical usage to run a program\nunder control of the debugger is:\n\n >>> import pdb\n >>> import mymodule\n >>> pdb.run(\'mymodule.test()\')\n > <string>(0)?()\n (Pdb) continue\n > <string>(1)?()\n (Pdb) continue\n NameError: \'spam\'\n > <string>(1)?()\n (Pdb)\n\n``pdb.py`` can also be invoked as a script to debug other scripts.\nFor example:\n\n python -m pdb myscript.py\n\nWhen invoked as a script, pdb will automatically enter post-mortem\ndebugging if the program being debugged exits abnormally. After post-\nmortem debugging (or after normal exit of the program), pdb will\nrestart the program. Automatic restarting preserves pdb\'s state (such\nas breakpoints) and in most cases is more useful than quitting the\ndebugger upon program\'s exit.\n\nNew in version 2.4: Restarting post-mortem behavior added.\n\nThe typical usage to break into the debugger from a running program is\nto insert\n\n import pdb; pdb.set_trace()\n\nat the location you want to break into the debugger. You can then\nstep through the code following this statement, and continue running\nwithout the debugger using the ``c`` command.\n\nThe typical usage to inspect a crashed program is:\n\n >>> import pdb\n >>> import mymodule\n >>> mymodule.test()\n Traceback (most recent call last):\n File "<stdin>", line 1, in ?\n File "./mymodule.py", line 4, in test\n test2()\n File "./mymodule.py", line 3, in test2\n print spam\n NameError: spam\n >>> pdb.pm()\n > ./mymodule.py(3)test2()\n -> print spam\n (Pdb)\n\nThe module defines the following functions; each enters the debugger\nin a slightly different way:\n\npdb.run(statement[, globals[, locals]])\n\n Execute the *statement* (given as a string) under debugger control.\n The debugger prompt appears before any code is executed; you can\n set breakpoints and type ``continue``, or you can step through the\n statement using ``step`` or ``next`` (all these commands are\n explained below). The optional *globals* and *locals* arguments\n specify the environment in which the code is executed; by default\n the dictionary of the module ``__main__`` is used. (See the\n explanation of the ``exec`` statement or the ``eval()`` built-in\n function.)\n\npdb.runeval(expression[, globals[, locals]])\n\n Evaluate the *expression* (given as a string) under debugger\n control. When ``runeval()`` returns, it returns the value of the\n expression. Otherwise this function is similar to ``run()``.\n\npdb.runcall(function[, argument, ...])\n\n Call the *function* (a function or method object, not a string)\n with the given arguments. When ``runcall()`` returns, it returns\n whatever the function call returned. The debugger prompt appears\n as soon as the function is entered.\n\npdb.set_trace()\n\n Enter the debugger at the calling stack frame. This is useful to\n hard-code a breakpoint at a given point in a program, even if the\n code is not otherwise being debugged (e.g. when an assertion\n fails).\n\npdb.post_mortem([traceback])\n\n Enter post-mortem debugging of the given *traceback* object. If no\n *traceback* is given, it uses the one of the exception that is\n currently being handled (an exception must be being handled if the\n default is to be used).\n\npdb.pm()\n\n Enter post-mortem debugging of the traceback found in\n ``sys.last_traceback``.\n\nThe ``run_*`` functions and ``set_trace()`` are aliases for\ninstantiating the ``Pdb`` class and calling the method of the same\nname. 
If you want to access further features, you have to do this\nyourself:\n\nclass class pdb.Pdb(completekey=\'tab\', stdin=None, stdout=None, skip=None)\n\n ``Pdb`` is the debugger class.\n\n The *completekey*, *stdin* and *stdout* arguments are passed to the\n underlying ``cmd.Cmd`` class; see the description there.\n\n The *skip* argument, if given, must be an iterable of glob-style\n module name patterns. The debugger will not step into frames that\n originate in a module that matches one of these patterns. [1]\n\n Example call to enable tracing with *skip*:\n\n import pdb; pdb.Pdb(skip=[\'django.*\']).set_trace()\n\n New in version 2.7: The *skip* argument.\n\n run(statement[, globals[, locals]])\n runeval(expression[, globals[, locals]])\n runcall(function[, argument, ...])\n set_trace()\n\n See the documentation for the functions explained above.\n', + 'debugger': u'\n``pdb`` --- The Python Debugger\n*******************************\n\nThe module ``pdb`` defines an interactive source code debugger for\nPython programs. It supports setting (conditional) breakpoints and\nsingle stepping at the source line level, inspection of stack frames,\nsource code listing, and evaluation of arbitrary Python code in the\ncontext of any stack frame. It also supports post-mortem debugging\nand can be called under program control.\n\nThe debugger is extensible --- it is actually defined as the class\n``Pdb``. This is currently undocumented but easily understood by\nreading the source. The extension interface uses the modules ``bdb``\nand ``cmd``.\n\nThe debugger\'s prompt is ``(Pdb)``. Typical usage to run a program\nunder control of the debugger is:\n\n >>> import pdb\n >>> import mymodule\n >>> pdb.run(\'mymodule.test()\')\n > <string>(0)?()\n (Pdb) continue\n > <string>(1)?()\n (Pdb) continue\n NameError: \'spam\'\n > <string>(1)?()\n (Pdb)\n\n``pdb.py`` can also be invoked as a script to debug other scripts.\nFor example:\n\n python -m pdb myscript.py\n\nWhen invoked as a script, pdb will automatically enter post-mortem\ndebugging if the program being debugged exits abnormally. After post-\nmortem debugging (or after normal exit of the program), pdb will\nrestart the program. Automatic restarting preserves pdb\'s state (such\nas breakpoints) and in most cases is more useful than quitting the\ndebugger upon program\'s exit.\n\nNew in version 2.4: Restarting post-mortem behavior added.\n\nThe typical usage to break into the debugger from a running program is\nto insert\n\n import pdb; pdb.set_trace()\n\nat the location you want to break into the debugger. You can then\nstep through the code following this statement, and continue running\nwithout the debugger using the ``c`` command.\n\nThe typical usage to inspect a crashed program is:\n\n >>> import pdb\n >>> import mymodule\n >>> mymodule.test()\n Traceback (most recent call last):\n File "<stdin>", line 1, in ?\n File "./mymodule.py", line 4, in test\n test2()\n File "./mymodule.py", line 3, in test2\n print spam\n NameError: spam\n >>> pdb.pm()\n > ./mymodule.py(3)test2()\n -> print spam\n (Pdb)\n\nThe module defines the following functions; each enters the debugger\nin a slightly different way:\n\npdb.run(statement[, globals[, locals]])\n\n Execute the *statement* (given as a string) under debugger control.\n The debugger prompt appears before any code is executed; you can\n set breakpoints and type ``continue``, or you can step through the\n statement using ``step`` or ``next`` (all these commands are\n explained below). 
The optional *globals* and *locals* arguments\n specify the environment in which the code is executed; by default\n the dictionary of the module ``__main__`` is used. (See the\n explanation of the ``exec`` statement or the ``eval()`` built-in\n function.)\n\npdb.runeval(expression[, globals[, locals]])\n\n Evaluate the *expression* (given as a string) under debugger\n control. When ``runeval()`` returns, it returns the value of the\n expression. Otherwise this function is similar to ``run()``.\n\npdb.runcall(function[, argument, ...])\n\n Call the *function* (a function or method object, not a string)\n with the given arguments. When ``runcall()`` returns, it returns\n whatever the function call returned. The debugger prompt appears\n as soon as the function is entered.\n\npdb.set_trace()\n\n Enter the debugger at the calling stack frame. This is useful to\n hard-code a breakpoint at a given point in a program, even if the\n code is not otherwise being debugged (e.g. when an assertion\n fails).\n\npdb.post_mortem([traceback])\n\n Enter post-mortem debugging of the given *traceback* object. If no\n *traceback* is given, it uses the one of the exception that is\n currently being handled (an exception must be being handled if the\n default is to be used).\n\npdb.pm()\n\n Enter post-mortem debugging of the traceback found in\n ``sys.last_traceback``.\n\nThe ``run*`` functions and ``set_trace()`` are aliases for\ninstantiating the ``Pdb`` class and calling the method of the same\nname. If you want to access further features, you have to do this\nyourself:\n\nclass class pdb.Pdb(completekey=\'tab\', stdin=None, stdout=None, skip=None)\n\n ``Pdb`` is the debugger class.\n\n The *completekey*, *stdin* and *stdout* arguments are passed to the\n underlying ``cmd.Cmd`` class; see the description there.\n\n The *skip* argument, if given, must be an iterable of glob-style\n module name patterns. The debugger will not step into frames that\n originate in a module that matches one of these patterns. [1]\n\n Example call to enable tracing with *skip*:\n\n import pdb; pdb.Pdb(skip=[\'django.*\']).set_trace()\n\n New in version 2.7: The *skip* argument.\n\n run(statement[, globals[, locals]])\n runeval(expression[, globals[, locals]])\n runcall(function[, argument, ...])\n set_trace()\n\n See the documentation for the functions explained above.\n', 'del': u'\nThe ``del`` statement\n*********************\n\n del_stmt ::= "del" target_list\n\nDeletion is recursively defined very similar to the way assignment is\ndefined. Rather that spelling it out in full details, here are some\nhints.\n\nDeletion of a target list recursively deletes each target, from left\nto right.\n\nDeletion of a name removes the binding of that name from the local or\nglobal namespace, depending on whether the name occurs in a ``global``\nstatement in the same code block. 
If the name is unbound, a\n``NameError`` exception will be raised.\n\nIt is illegal to delete a name from the local namespace if it occurs\nas a free variable in a nested block.\n\nDeletion of attribute references, subscriptions and slicings is passed\nto the primary object involved; deletion of a slicing is in general\nequivalent to assignment of an empty slice of the right type (but even\nthis is determined by the sliced object).\n', 'dict': u'\nDictionary displays\n*******************\n\nA dictionary display is a possibly empty series of key/datum pairs\nenclosed in curly braces:\n\n dict_display ::= "{" [key_datum_list | dict_comprehension] "}"\n key_datum_list ::= key_datum ("," key_datum)* [","]\n key_datum ::= expression ":" expression\n dict_comprehension ::= expression ":" expression comp_for\n\nA dictionary display yields a new dictionary object.\n\nIf a comma-separated sequence of key/datum pairs is given, they are\nevaluated from left to right to define the entries of the dictionary:\neach key object is used as a key into the dictionary to store the\ncorresponding datum. This means that you can specify the same key\nmultiple times in the key/datum list, and the final dictionary\'s value\nfor that key will be the last one given.\n\nA dict comprehension, in contrast to list and set comprehensions,\nneeds two expressions separated with a colon followed by the usual\n"for" and "if" clauses. When the comprehension is run, the resulting\nkey and value elements are inserted in the new dictionary in the order\nthey are produced.\n\nRestrictions on the types of the key values are listed earlier in\nsection *The standard type hierarchy*. (To summarize, the key type\nshould be *hashable*, which excludes all mutable objects.) Clashes\nbetween duplicate keys are not detected; the last datum (textually\nrightmost in the display) stored for a given key value prevails.\n', 'dynamic-features': u'\nInteraction with dynamic features\n*********************************\n\nThere are several cases where Python statements are illegal when used\nin conjunction with nested scopes that contain free variables.\n\nIf a variable is referenced in an enclosing scope, it is illegal to\ndelete the name. An error will be reported at compile time.\n\nIf the wild card form of import --- ``import *`` --- is used in a\nfunction and the function contains or is a nested block with free\nvariables, the compiler will raise a ``SyntaxError``.\n\nIf ``exec`` is used in a function and the function contains or is a\nnested block with free variables, the compiler will raise a\n``SyntaxError`` unless the exec explicitly specifies the local\nnamespace for the ``exec``. (In other words, ``exec obj`` would be\nillegal, but ``exec obj in ns`` would be legal.)\n\nThe ``eval()``, ``execfile()``, and ``input()`` functions and the\n``exec`` statement do not have access to the full environment for\nresolving names. Names may be resolved in the local and global\nnamespaces of the caller. Free variables are not resolved in the\nnearest enclosing namespace, but in the global namespace. 
[1] The\n``exec`` statement and the ``eval()`` and ``execfile()`` functions\nhave optional arguments to override the global and local namespace.\nIf only one namespace is specified, it is used for both.\n', 'else': u'\nThe ``if`` statement\n********************\n\nThe ``if`` statement is used for conditional execution:\n\n if_stmt ::= "if" expression ":" suite\n ( "elif" expression ":" suite )*\n ["else" ":" suite]\n\nIt selects exactly one of the suites by evaluating the expressions one\nby one until one is found to be true (see section *Boolean operations*\nfor the definition of true and false); then that suite is executed\n(and no other part of the ``if`` statement is executed or evaluated).\nIf all expressions are false, the suite of the ``else`` clause, if\npresent, is executed.\n', 'exceptions': u'\nExceptions\n**********\n\nExceptions are a means of breaking out of the normal flow of control\nof a code block in order to handle errors or other exceptional\nconditions. An exception is *raised* at the point where the error is\ndetected; it may be *handled* by the surrounding code block or by any\ncode block that directly or indirectly invoked the code block where\nthe error occurred.\n\nThe Python interpreter raises an exception when it detects a run-time\nerror (such as division by zero). A Python program can also\nexplicitly raise an exception with the ``raise`` statement. Exception\nhandlers are specified with the ``try`` ... ``except`` statement. The\n``finally`` clause of such a statement can be used to specify cleanup\ncode which does not handle the exception, but is executed whether an\nexception occurred or not in the preceding code.\n\nPython uses the "termination" model of error handling: an exception\nhandler can find out what happened and continue execution at an outer\nlevel, but it cannot repair the cause of the error and retry the\nfailing operation (except by re-entering the offending piece of code\nfrom the top).\n\nWhen an exception is not handled at all, the interpreter terminates\nexecution of the program, or returns to its interactive main loop. In\neither case, it prints a stack backtrace, except when the exception is\n``SystemExit``.\n\nExceptions are identified by class instances. The ``except`` clause\nis selected depending on the class of the instance: it must reference\nthe class of the instance or a base class thereof. The instance can\nbe received by the handler and can carry additional information about\nthe exceptional condition.\n\nExceptions can also be identified by strings, in which case the\n``except`` clause is selected by object identity. An arbitrary value\ncan be raised along with the identifying string which can be passed to\nthe handler.\n\nNote: Messages to exceptions are not part of the Python API. Their\n contents may change from one version of Python to the next without\n warning and should not be relied on by code which will run under\n multiple versions of the interpreter.\n\nSee also the description of the ``try`` statement in section *The try\nstatement* and ``raise`` statement in section *The raise statement*.\n\n-[ Footnotes ]-\n\n[1] This limitation occurs because the code that is executed by these\n operations is not available at the time the module is compiled.\n', 'exec': u'\nThe ``exec`` statement\n**********************\n\n exec_stmt ::= "exec" or_expr ["in" expression ["," expression]]\n\nThis statement supports dynamic execution of Python code. 
The first\nexpression should evaluate to either a string, an open file object, or\na code object. If it is a string, the string is parsed as a suite of\nPython statements which is then executed (unless a syntax error\noccurs). [1] If it is an open file, the file is parsed until EOF and\nexecuted. If it is a code object, it is simply executed. In all\ncases, the code that\'s executed is expected to be valid as file input\n(see section *File input*). Be aware that the ``return`` and\n``yield`` statements may not be used outside of function definitions\neven within the context of code passed to the ``exec`` statement.\n\nIn all cases, if the optional parts are omitted, the code is executed\nin the current scope. If only the first expression after ``in`` is\nspecified, it should be a dictionary, which will be used for both the\nglobal and the local variables. If two expressions are given, they\nare used for the global and local variables, respectively. If\nprovided, *locals* can be any mapping object.\n\nChanged in version 2.4: Formerly, *locals* was required to be a\ndictionary.\n\nAs a side effect, an implementation may insert additional keys into\nthe dictionaries given besides those corresponding to variable names\nset by the executed code. For example, the current implementation may\nadd a reference to the dictionary of the built-in module\n``__builtin__`` under the key ``__builtins__`` (!).\n\n**Programmer\'s hints:** dynamic evaluation of expressions is supported\nby the built-in function ``eval()``. The built-in functions\n``globals()`` and ``locals()`` return the current global and local\ndictionary, respectively, which may be useful to pass around for use\nby ``exec``.\n\n-[ Footnotes ]-\n\n[1] Note that the parser only accepts the Unix-style end of line\n convention. If you are reading the code from a file, make sure to\n use universal newline mode to convert Windows or Mac-style\n newlines.\n', - 'execmodel': u'\nExecution model\n***************\n\n\nNaming and binding\n==================\n\n*Names* refer to objects. Names are introduced by name binding\noperations. Each occurrence of a name in the program text refers to\nthe *binding* of that name established in the innermost function block\ncontaining the use.\n\nA *block* is a piece of Python program text that is executed as a\nunit. The following are blocks: a module, a function body, and a class\ndefinition. Each command typed interactively is a block. A script\nfile (a file given as standard input to the interpreter or specified\non the interpreter command line the first argument) is a code block.\nA script command (a command specified on the interpreter command line\nwith the \'**-c**\' option) is a code block. The file read by the\nbuilt-in function ``execfile()`` is a code block. The string argument\npassed to the built-in function ``eval()`` and to the ``exec``\nstatement is a code block. The expression read and evaluated by the\nbuilt-in function ``input()`` is a code block.\n\nA code block is executed in an *execution frame*. A frame contains\nsome administrative information (used for debugging) and determines\nwhere and how execution continues after the code block\'s execution has\ncompleted.\n\nA *scope* defines the visibility of a name within a block. If a local\nvariable is defined in a block, its scope includes that block. If the\ndefinition occurs in a function block, the scope extends to any blocks\ncontained within the defining one, unless a contained block introduces\na different binding for the name. 
The scope of names defined in a\nclass block is limited to the class block; it does not extend to the\ncode blocks of methods -- this includes generator expressions since\nthey are implemented using a function scope. This means that the\nfollowing will fail:\n\n class A:\n a = 42\n b = list(a + i for i in range(10))\n\nWhen a name is used in a code block, it is resolved using the nearest\nenclosing scope. The set of all such scopes visible to a code block\nis called the block\'s *environment*.\n\nIf a name is bound in a block, it is a local variable of that block.\nIf a name is bound at the module level, it is a global variable. (The\nvariables of the module code block are local and global.) If a\nvariable is used in a code block but not defined there, it is a *free\nvariable*.\n\nWhen a name is not found at all, a ``NameError`` exception is raised.\nIf the name refers to a local variable that has not been bound, a\n``UnboundLocalError`` exception is raised. ``UnboundLocalError`` is a\nsubclass of ``NameError``.\n\nThe following constructs bind names: formal parameters to functions,\n``import`` statements, class and function definitions (these bind the\nclass or function name in the defining block), and targets that are\nidentifiers if occurring in an assignment, ``for`` loop header, in the\nsecond position of an ``except`` clause header or after ``as`` in a\n``with`` statement. The ``import`` statement of the form ``from ...\nimport *`` binds all names defined in the imported module, except\nthose beginning with an underscore. This form may only be used at the\nmodule level.\n\nA target occurring in a ``del`` statement is also considered bound for\nthis purpose (though the actual semantics are to unbind the name). It\nis illegal to unbind a name that is referenced by an enclosing scope;\nthe compiler will report a ``SyntaxError``.\n\nEach assignment or import statement occurs within a block defined by a\nclass or function definition or at the module level (the top-level\ncode block).\n\nIf a name binding operation occurs anywhere within a code block, all\nuses of the name within the block are treated as references to the\ncurrent block. This can lead to errors when a name is used within a\nblock before it is bound. This rule is subtle. Python lacks\ndeclarations and allows name binding operations to occur anywhere\nwithin a code block. The local variables of a code block can be\ndetermined by scanning the entire text of the block for name binding\noperations.\n\nIf the global statement occurs within a block, all uses of the name\nspecified in the statement refer to the binding of that name in the\ntop-level namespace. Names are resolved in the top-level namespace by\nsearching the global namespace, i.e. the namespace of the module\ncontaining the code block, and the builtins namespace, the namespace\nof the module ``__builtin__``. The global namespace is searched\nfirst. If the name is not found there, the builtins namespace is\nsearched. The global statement must precede all uses of the name.\n\nThe builtins namespace associated with the execution of a code block\nis actually found by looking up the name ``__builtins__`` in its\nglobal namespace; this should be a dictionary or a module (in the\nlatter case the module\'s dictionary is used). By default, when in the\n``__main__`` module, ``__builtins__`` is the built-in module\n``__builtin__`` (note: no \'s\'); when in any other module,\n``__builtins__`` is an alias for the dictionary of the ``__builtin__``\nmodule itself. 
``__builtins__`` can be set to a user-created\ndictionary to create a weak form of restricted execution.\n\n**CPython implementation detail:** Users should not touch\n``__builtins__``; it is strictly an implementation detail. Users\nwanting to override values in the builtins namespace should ``import``\nthe ``__builtin__`` (no \'s\') module and modify its attributes\nappropriately.\n\nThe namespace for a module is automatically created the first time a\nmodule is imported. The main module for a script is always called\n``__main__``.\n\nThe global statement has the same scope as a name binding operation in\nthe same block. If the nearest enclosing scope for a free variable\ncontains a global statement, the free variable is treated as a global.\n\nA class definition is an executable statement that may use and define\nnames. These references follow the normal rules for name resolution.\nThe namespace of the class definition becomes the attribute dictionary\nof the class. Names defined at the class scope are not visible in\nmethods.\n\n\nInteraction with dynamic features\n---------------------------------\n\nThere are several cases where Python statements are illegal when used\nin conjunction with nested scopes that contain free variables.\n\nIf a variable is referenced in an enclosing scope, it is illegal to\ndelete the name. An error will be reported at compile time.\n\nIf the wild card form of import --- ``import *`` --- is used in a\nfunction and the function contains or is a nested block with free\nvariables, the compiler will raise a ``SyntaxError``.\n\nIf ``exec`` is used in a function and the function contains or is a\nnested block with free variables, the compiler will raise a\n``SyntaxError`` unless the exec explicitly specifies the local\nnamespace for the ``exec``. (In other words, ``exec obj`` would be\nillegal, but ``exec obj in ns`` would be legal.)\n\nThe ``eval()``, ``execfile()``, and ``input()`` functions and the\n``exec`` statement do not have access to the full environment for\nresolving names. Names may be resolved in the local and global\nnamespaces of the caller. Free variables are not resolved in the\nnearest enclosing namespace, but in the global namespace. [1] The\n``exec`` statement and the ``eval()`` and ``execfile()`` functions\nhave optional arguments to override the global and local namespace.\nIf only one namespace is specified, it is used for both.\n\n\nExceptions\n==========\n\nExceptions are a means of breaking out of the normal flow of control\nof a code block in order to handle errors or other exceptional\nconditions. An exception is *raised* at the point where the error is\ndetected; it may be *handled* by the surrounding code block or by any\ncode block that directly or indirectly invoked the code block where\nthe error occurred.\n\nThe Python interpreter raises an exception when it detects a run-time\nerror (such as division by zero). A Python program can also\nexplicitly raise an exception with the ``raise`` statement. Exception\nhandlers are specified with the ``try`` ... ``except`` statement. 
The\n``finally`` clause of such a statement can be used to specify cleanup\ncode which does not handle the exception, but is executed whether an\nexception occurred or not in the preceding code.\n\nPython uses the "termination" model of error handling: an exception\nhandler can find out what happened and continue execution at an outer\nlevel, but it cannot repair the cause of the error and retry the\nfailing operation (except by re-entering the offending piece of code\nfrom the top).\n\nWhen an exception is not handled at all, the interpreter terminates\nexecution of the program, or returns to its interactive main loop. In\neither case, it prints a stack backtrace, except when the exception is\n``SystemExit``.\n\nExceptions are identified by class instances. The ``except`` clause\nis selected depending on the class of the instance: it must reference\nthe class of the instance or a base class thereof. The instance can\nbe received by the handler and can carry additional information about\nthe exceptional condition.\n\nExceptions can also be identified by strings, in which case the\n``except`` clause is selected by object identity. An arbitrary value\ncan be raised along with the identifying string which can be passed to\nthe handler.\n\nNote: Messages to exceptions are not part of the Python API. Their\n contents may change from one version of Python to the next without\n warning and should not be relied on by code which will run under\n multiple versions of the interpreter.\n\nSee also the description of the ``try`` statement in section *The try\nstatement* and ``raise`` statement in section *The raise statement*.\n\n-[ Footnotes ]-\n\n[1] This limitation occurs because the code that is executed by these\n operations is not available at the time the module is compiled.\n', + 'execmodel': u'\nExecution model\n***************\n\n\nNaming and binding\n==================\n\n*Names* refer to objects. Names are introduced by name binding\noperations. Each occurrence of a name in the program text refers to\nthe *binding* of that name established in the innermost function block\ncontaining the use.\n\nA *block* is a piece of Python program text that is executed as a\nunit. The following are blocks: a module, a function body, and a class\ndefinition. Each command typed interactively is a block. A script\nfile (a file given as standard input to the interpreter or specified\non the interpreter command line the first argument) is a code block.\nA script command (a command specified on the interpreter command line\nwith the \'**-c**\' option) is a code block. The file read by the\nbuilt-in function ``execfile()`` is a code block. The string argument\npassed to the built-in function ``eval()`` and to the ``exec``\nstatement is a code block. The expression read and evaluated by the\nbuilt-in function ``input()`` is a code block.\n\nA code block is executed in an *execution frame*. A frame contains\nsome administrative information (used for debugging) and determines\nwhere and how execution continues after the code block\'s execution has\ncompleted.\n\nA *scope* defines the visibility of a name within a block. If a local\nvariable is defined in a block, its scope includes that block. If the\ndefinition occurs in a function block, the scope extends to any blocks\ncontained within the defining one, unless a contained block introduces\na different binding for the name. 
The scope of names defined in a\nclass block is limited to the class block; it does not extend to the\ncode blocks of methods -- this includes generator expressions since\nthey are implemented using a function scope. This means that the\nfollowing will fail:\n\n class A:\n a = 42\n b = list(a + i for i in range(10))\n\nWhen a name is used in a code block, it is resolved using the nearest\nenclosing scope. The set of all such scopes visible to a code block\nis called the block\'s *environment*.\n\nIf a name is bound in a block, it is a local variable of that block.\nIf a name is bound at the module level, it is a global variable. (The\nvariables of the module code block are local and global.) If a\nvariable is used in a code block but not defined there, it is a *free\nvariable*.\n\nWhen a name is not found at all, a ``NameError`` exception is raised.\nIf the name refers to a local variable that has not been bound, a\n``UnboundLocalError`` exception is raised. ``UnboundLocalError`` is a\nsubclass of ``NameError``.\n\nThe following constructs bind names: formal parameters to functions,\n``import`` statements, class and function definitions (these bind the\nclass or function name in the defining block), and targets that are\nidentifiers if occurring in an assignment, ``for`` loop header, in the\nsecond position of an ``except`` clause header or after ``as`` in a\n``with`` statement. The ``import`` statement of the form ``from ...\nimport *`` binds all names defined in the imported module, except\nthose beginning with an underscore. This form may only be used at the\nmodule level.\n\nA target occurring in a ``del`` statement is also considered bound for\nthis purpose (though the actual semantics are to unbind the name). It\nis illegal to unbind a name that is referenced by an enclosing scope;\nthe compiler will report a ``SyntaxError``.\n\nEach assignment or import statement occurs within a block defined by a\nclass or function definition or at the module level (the top-level\ncode block).\n\nIf a name binding operation occurs anywhere within a code block, all\nuses of the name within the block are treated as references to the\ncurrent block. This can lead to errors when a name is used within a\nblock before it is bound. This rule is subtle. Python lacks\ndeclarations and allows name binding operations to occur anywhere\nwithin a code block. The local variables of a code block can be\ndetermined by scanning the entire text of the block for name binding\noperations.\n\nIf the global statement occurs within a block, all uses of the name\nspecified in the statement refer to the binding of that name in the\ntop-level namespace. Names are resolved in the top-level namespace by\nsearching the global namespace, i.e. the namespace of the module\ncontaining the code block, and the builtins namespace, the namespace\nof the module ``__builtin__``. The global namespace is searched\nfirst. If the name is not found there, the builtins namespace is\nsearched. The global statement must precede all uses of the name.\n\nThe builtins namespace associated with the execution of a code block\nis actually found by looking up the name ``__builtins__`` in its\nglobal namespace; this should be a dictionary or a module (in the\nlatter case the module\'s dictionary is used). By default, when in the\n``__main__`` module, ``__builtins__`` is the built-in module\n``__builtin__`` (note: no \'s\'); when in any other module,\n``__builtins__`` is an alias for the dictionary of the ``__builtin__``\nmodule itself. 
``__builtins__`` can be set to a user-created\ndictionary to create a weak form of restricted execution.\n\n**CPython implementation detail:** Users should not touch\n``__builtins__``; it is strictly an implementation detail. Users\nwanting to override values in the builtins namespace should ``import``\nthe ``__builtin__`` (no \'s\') module and modify its attributes\nappropriately.\n\nThe namespace for a module is automatically created the first time a\nmodule is imported. The main module for a script is always called\n``__main__``.\n\nThe ``global`` statement has the same scope as a name binding\noperation in the same block. If the nearest enclosing scope for a\nfree variable contains a global statement, the free variable is\ntreated as a global.\n\nA class definition is an executable statement that may use and define\nnames. These references follow the normal rules for name resolution.\nThe namespace of the class definition becomes the attribute dictionary\nof the class. Names defined at the class scope are not visible in\nmethods.\n\n\nInteraction with dynamic features\n---------------------------------\n\nThere are several cases where Python statements are illegal when used\nin conjunction with nested scopes that contain free variables.\n\nIf a variable is referenced in an enclosing scope, it is illegal to\ndelete the name. An error will be reported at compile time.\n\nIf the wild card form of import --- ``import *`` --- is used in a\nfunction and the function contains or is a nested block with free\nvariables, the compiler will raise a ``SyntaxError``.\n\nIf ``exec`` is used in a function and the function contains or is a\nnested block with free variables, the compiler will raise a\n``SyntaxError`` unless the exec explicitly specifies the local\nnamespace for the ``exec``. (In other words, ``exec obj`` would be\nillegal, but ``exec obj in ns`` would be legal.)\n\nThe ``eval()``, ``execfile()``, and ``input()`` functions and the\n``exec`` statement do not have access to the full environment for\nresolving names. Names may be resolved in the local and global\nnamespaces of the caller. Free variables are not resolved in the\nnearest enclosing namespace, but in the global namespace. [1] The\n``exec`` statement and the ``eval()`` and ``execfile()`` functions\nhave optional arguments to override the global and local namespace.\nIf only one namespace is specified, it is used for both.\n\n\nExceptions\n==========\n\nExceptions are a means of breaking out of the normal flow of control\nof a code block in order to handle errors or other exceptional\nconditions. An exception is *raised* at the point where the error is\ndetected; it may be *handled* by the surrounding code block or by any\ncode block that directly or indirectly invoked the code block where\nthe error occurred.\n\nThe Python interpreter raises an exception when it detects a run-time\nerror (such as division by zero). A Python program can also\nexplicitly raise an exception with the ``raise`` statement. Exception\nhandlers are specified with the ``try`` ... ``except`` statement. 
The\n``finally`` clause of such a statement can be used to specify cleanup\ncode which does not handle the exception, but is executed whether an\nexception occurred or not in the preceding code.\n\nPython uses the "termination" model of error handling: an exception\nhandler can find out what happened and continue execution at an outer\nlevel, but it cannot repair the cause of the error and retry the\nfailing operation (except by re-entering the offending piece of code\nfrom the top).\n\nWhen an exception is not handled at all, the interpreter terminates\nexecution of the program, or returns to its interactive main loop. In\neither case, it prints a stack backtrace, except when the exception is\n``SystemExit``.\n\nExceptions are identified by class instances. The ``except`` clause\nis selected depending on the class of the instance: it must reference\nthe class of the instance or a base class thereof. The instance can\nbe received by the handler and can carry additional information about\nthe exceptional condition.\n\nExceptions can also be identified by strings, in which case the\n``except`` clause is selected by object identity. An arbitrary value\ncan be raised along with the identifying string which can be passed to\nthe handler.\n\nNote: Messages to exceptions are not part of the Python API. Their\n contents may change from one version of Python to the next without\n warning and should not be relied on by code which will run under\n multiple versions of the interpreter.\n\nSee also the description of the ``try`` statement in section *The try\nstatement* and ``raise`` statement in section *The raise statement*.\n\n-[ Footnotes ]-\n\n[1] This limitation occurs because the code that is executed by these\n operations is not available at the time the module is compiled.\n', 'exprlists': u'\nExpression lists\n****************\n\n expression_list ::= expression ( "," expression )* [","]\n\nAn expression list containing at least one comma yields a tuple. The\nlength of the tuple is the number of expressions in the list. The\nexpressions are evaluated from left to right.\n\nThe trailing comma is required only to create a single tuple (a.k.a. a\n*singleton*); it is optional in all other cases. A single expression\nwithout a trailing comma doesn\'t create a tuple, but rather yields the\nvalue of that expression. (To create an empty tuple, use an empty pair\nof parentheses: ``()``.)\n', 'floating': u'\nFloating point literals\n***********************\n\nFloating point literals are described by the following lexical\ndefinitions:\n\n floatnumber ::= pointfloat | exponentfloat\n pointfloat ::= [intpart] fraction | intpart "."\n exponentfloat ::= (intpart | pointfloat) exponent\n intpart ::= digit+\n fraction ::= "." digit+\n exponent ::= ("e" | "E") ["+" | "-"] digit+\n\nNote that the integer and exponent parts of floating point numbers can\nlook like octal integers, but are interpreted using radix 10. For\nexample, ``077e010`` is legal, and denotes the same number as\n``77e10``. The allowed range of floating point literals is\nimplementation-dependent. Some examples of floating point literals:\n\n 3.14 10. 
.001 1e100 3.14e-10 0e0\n\nNote that numeric literals do not include a sign; a phrase like ``-1``\nis actually an expression composed of the unary operator ``-`` and the\nliteral ``1``.\n', 'for': u'\nThe ``for`` statement\n*********************\n\nThe ``for`` statement is used to iterate over the elements of a\nsequence (such as a string, tuple or list) or other iterable object:\n\n for_stmt ::= "for" target_list "in" expression_list ":" suite\n ["else" ":" suite]\n\nThe expression list is evaluated once; it should yield an iterable\nobject. An iterator is created for the result of the\n``expression_list``. The suite is then executed once for each item\nprovided by the iterator, in the order of ascending indices. Each\nitem in turn is assigned to the target list using the standard rules\nfor assignments, and then the suite is executed. When the items are\nexhausted (which is immediately when the sequence is empty), the suite\nin the ``else`` clause, if present, is executed, and the loop\nterminates.\n\nA ``break`` statement executed in the first suite terminates the loop\nwithout executing the ``else`` clause\'s suite. A ``continue``\nstatement executed in the first suite skips the rest of the suite and\ncontinues with the next item, or with the ``else`` clause if there was\nno next item.\n\nThe suite may assign to the variable(s) in the target list; this does\nnot affect the next item assigned to it.\n\nThe target list is not deleted when the loop is finished, but if the\nsequence is empty, it will not have been assigned to at all by the\nloop. Hint: the built-in function ``range()`` returns a sequence of\nintegers suitable to emulate the effect of Pascal\'s ``for i := a to b\ndo``; e.g., ``range(3)`` returns the list ``[0, 1, 2]``.\n\nNote: There is a subtlety when the sequence is being modified by the loop\n (this can only occur for mutable sequences, i.e. lists). An internal\n counter is used to keep track of which item is used next, and this\n is incremented on each iteration. When this counter has reached the\n length of the sequence the loop terminates. This means that if the\n suite deletes the current (or a previous) item from the sequence,\n the next item will be skipped (since it gets the index of the\n current item which has already been treated). Likewise, if the\n suite inserts an item in the sequence before the current item, the\n current item will be treated again the next time through the loop.\n This can lead to nasty bugs that can be avoided by making a\n temporary copy using a slice of the whole sequence, e.g.,\n\n for x in a[:]:\n if x < 0: a.remove(x)\n', - 'formatstrings': u'\nFormat String Syntax\n********************\n\nThe ``str.format()`` method and the ``Formatter`` class share the same\nsyntax for format strings (although in the case of ``Formatter``,\nsubclasses can define their own format string syntax).\n\nFormat strings contain "replacement fields" surrounded by curly braces\n``{}``. Anything that is not contained in braces is considered literal\ntext, which is copied unchanged to the output. If you need to include\na brace character in the literal text, it can be escaped by doubling:\n``{{`` and ``}}``.\n\nThe grammar for a replacement field is as follows:\n\n replacement_field ::= "{" [field_name] ["!" conversion] [":" format_spec] "}"\n field_name ::= arg_name ("." 
attribute_name | "[" element_index "]")*\n arg_name ::= [identifier | integer]\n attribute_name ::= identifier\n element_index ::= integer | index_string\n index_string ::= <any source character except "]"> +\n conversion ::= "r" | "s"\n format_spec ::= <described in the next section>\n\nIn less formal terms, the replacement field can start with a\n*field_name* that specifies the object whose value is to be formatted\nand inserted into the output instead of the replacement field. The\n*field_name* is optionally followed by a *conversion* field, which is\npreceded by an exclamation point ``\'!\'``, and a *format_spec*, which\nis preceded by a colon ``\':\'``. These specify a non-default format\nfor the replacement value.\n\nSee also the *Format Specification Mini-Language* section.\n\nThe *field_name* itself begins with an *arg_name* that is either a\nnumber or a keyword. If it\'s a number, it refers to a\npositional argument, and if it\'s a keyword, it refers to a named\nkeyword argument. If the numerical arg_names in a format string are\n0, 1, 2, ... in sequence, they can all be omitted (not just some) and\nthe numbers 0, 1, 2, ... will be automatically inserted in that order.\nThe *arg_name* can be followed by any number of index or attribute\nexpressions. An expression of the form ``\'.name\'`` selects the named\nattribute using ``getattr()``, while an expression of the form\n``\'[index]\'`` does an index lookup using ``__getitem__()``.\n\nChanged in version 2.7: The positional argument specifiers can be\nomitted, so ``\'{} {}\'`` is equivalent to ``\'{0} {1}\'``.\n\nSome simple format string examples:\n\n "First, thou shalt count to {0}" # References first positional argument\n "Bring me a {}" # Implicitly references the first positional argument\n "From {} to {}" # Same as "From {0} to {1}"\n "My quest is {name}" # References keyword argument \'name\'\n "Weight in tons {0.weight}" # \'weight\' attribute of first positional arg\n "Units destroyed: {players[0]}" # First element of keyword argument \'players\'.\n\nThe *conversion* field causes a type coercion before formatting.\nNormally, the job of formatting a value is done by the\n``__format__()`` method of the value itself. However, in some cases\nit is desirable to force a type to be formatted as a string,\noverriding its own definition of formatting. By converting the value\nto a string before calling ``__format__()``, the normal formatting\nlogic is bypassed.\n\nTwo conversion flags are currently supported: ``\'!s\'`` which calls\n``str()`` on the value, and ``\'!r\'`` which calls ``repr()``.\n\nSome examples:\n\n "Harold\'s a clever {0!s}" # Calls str() on the argument first\n "Bring out the holy {name!r}" # Calls repr() on the argument first\n\nThe *format_spec* field contains a specification of how the value\nshould be presented, including such details as field width, alignment,\npadding, decimal precision and so on. Each value type can define its\nown "formatting mini-language" or interpretation of the *format_spec*.\n\nMost built-in types support a common formatting mini-language, which\nis described in the next section.\n\nA *format_spec* field can also include nested replacement fields\nwithin it. These nested replacement fields can contain only a field\nname; conversion flags and format specifications are not allowed. The\nreplacement fields within the format_spec are substituted before the\n*format_spec* string is interpreted. 
This allows the formatting of a\nvalue to be dynamically specified.\n\nSee the *Format examples* section for some examples.\n\n\nFormat Specification Mini-Language\n==================================\n\n"Format specifications" are used within replacement fields contained\nwithin a format string to define how individual values are presented\n(see *Format String Syntax*). They can also be passed directly to the\nbuilt-in ``format()`` function. Each formattable type may define how\nthe format specification is to be interpreted.\n\nMost built-in types implement the following options for format\nspecifications, although some of the formatting options are only\nsupported by the numeric types.\n\nA general convention is that an empty format string (``""``) produces\nthe same result as if you had called ``str()`` on the value. A non-\nempty format string typically modifies the result.\n\nThe general form of a *standard format specifier* is:\n\n format_spec ::= [[fill]align][sign][#][0][width][,][.precision][type]\n fill ::=
\n align ::= "<" | ">" | "=" | "^"\n sign ::= "+" | "-" | " "\n width ::= integer\n precision ::= integer\n type ::= "b" | "c" | "d" | "e" | "E" | "f" | "F" | "g" | "G" | "n" | "o" | "s" | "x" | "X" | "%"\n\nThe *fill* character can be any character other than \'}\' (which\nsignifies the end of the field). The presence of a fill character is\nsignaled by the *next* character, which must be one of the alignment\noptions. If the second character of *format_spec* is not a valid\nalignment option, then it is assumed that both the fill character and\nthe alignment option are absent.\n\nThe meaning of the various alignment options is as follows:\n\n +-----------+------------------------------------------------------------+\n | Option | Meaning |\n +===========+============================================================+\n | ``\'<\'`` | Forces the field to be left-aligned within the available |\n | | space (this is the default). |\n +-----------+------------------------------------------------------------+\n | ``\'>\'`` | Forces the field to be right-aligned within the available |\n | | space. |\n +-----------+------------------------------------------------------------+\n | ``\'=\'`` | Forces the padding to be placed after the sign (if any) |\n | | but before the digits. This is used for printing fields |\n | | in the form \'+000000120\'. This alignment option is only |\n | | valid for numeric types. |\n +-----------+------------------------------------------------------------+\n | ``\'^\'`` | Forces the field to be centered within the available |\n | | space. |\n +-----------+------------------------------------------------------------+\n\nNote that unless a minimum field width is defined, the field width\nwill always be the same size as the data to fill it, so that the\nalignment option has no meaning in this case.\n\nThe *sign* option is only valid for number types, and can be one of\nthe following:\n\n +-----------+------------------------------------------------------------+\n | Option | Meaning |\n +===========+============================================================+\n | ``\'+\'`` | indicates that a sign should be used for both positive as |\n | | well as negative numbers. |\n +-----------+------------------------------------------------------------+\n | ``\'-\'`` | indicates that a sign should be used only for negative |\n | | numbers (this is the default behavior). |\n +-----------+------------------------------------------------------------+\n | space | indicates that a leading space should be used on positive |\n | | numbers, and a minus sign on negative numbers. |\n +-----------+------------------------------------------------------------+\n\nThe ``\'#\'`` option is only valid for integers, and only for binary,\noctal, or hexadecimal output. If present, it specifies that the\noutput will be prefixed by ``\'0b\'``, ``\'0o\'``, or ``\'0x\'``,\nrespectively.\n\nThe ``\',\'`` option signals the use of a comma for a thousands\nseparator. For a locale aware separator, use the ``\'n\'`` integer\npresentation type instead.\n\nChanged in version 2.7: Added the ``\',\'`` option (see also **PEP\n378**).\n\n*width* is a decimal integer defining the minimum field width. If not\nspecified, then the field width will be determined by the content.\n\nIf the *width* field is preceded by a zero (``\'0\'``) character, this\nenables zero-padding. 
This is equivalent to an *alignment* type of\n``\'=\'`` and a *fill* character of ``\'0\'``.\n\nThe *precision* is a decimal number indicating how many digits should\nbe displayed after the decimal point for a floating point value\nformatted with ``\'f\'`` and ``\'F\'``, or before and after the decimal\npoint for a floating point value formatted with ``\'g\'`` or ``\'G\'``.\nFor non-number types the field indicates the maximum field size - in\nother words, how many characters will be used from the field content.\nThe *precision* is not allowed for integer values.\n\nFinally, the *type* determines how the data should be presented.\n\nThe available string presentation types are:\n\n +-----------+------------------------------------------------------------+\n | Type | Meaning |\n +===========+============================================================+\n | ``\'s\'`` | String format. This is the default type for strings and |\n | | may be omitted. |\n +-----------+------------------------------------------------------------+\n | None | The same as ``\'s\'``. |\n +-----------+------------------------------------------------------------+\n\nThe available integer presentation types are:\n\n +-----------+------------------------------------------------------------+\n | Type | Meaning |\n +===========+============================================================+\n | ``\'b\'`` | Binary format. Outputs the number in base 2. |\n +-----------+------------------------------------------------------------+\n | ``\'c\'`` | Character. Converts the integer to the corresponding |\n | | unicode character before printing. |\n +-----------+------------------------------------------------------------+\n | ``\'d\'`` | Decimal Integer. Outputs the number in base 10. |\n +-----------+------------------------------------------------------------+\n | ``\'o\'`` | Octal format. Outputs the number in base 8. |\n +-----------+------------------------------------------------------------+\n | ``\'x\'`` | Hex format. Outputs the number in base 16, using lower- |\n | | case letters for the digits above 9. |\n +-----------+------------------------------------------------------------+\n | ``\'X\'`` | Hex format. Outputs the number in base 16, using upper- |\n | | case letters for the digits above 9. |\n +-----------+------------------------------------------------------------+\n | ``\'n\'`` | Number. This is the same as ``\'d\'``, except that it uses |\n | | the current locale setting to insert the appropriate |\n | | number separator characters. |\n +-----------+------------------------------------------------------------+\n | None | The same as ``\'d\'``. |\n +-----------+------------------------------------------------------------+\n\nIn addition to the above presentation types, integers can be formatted\nwith the floating point presentation types listed below (except\n``\'n\'`` and None). When doing so, ``float()`` is used to convert the\ninteger to a floating point number before formatting.\n\nThe available presentation types for floating point and decimal values\nare:\n\n +-----------+------------------------------------------------------------+\n | Type | Meaning |\n +===========+============================================================+\n | ``\'e\'`` | Exponent notation. Prints the number in scientific |\n | | notation using the letter \'e\' to indicate the exponent. |\n +-----------+------------------------------------------------------------+\n | ``\'E\'`` | Exponent notation. 
Same as ``\'e\'`` except it uses an upper |\n | | case \'E\' as the separator character. |\n +-----------+------------------------------------------------------------+\n | ``\'f\'`` | Fixed point. Displays the number as a fixed-point number. |\n +-----------+------------------------------------------------------------+\n | ``\'F\'`` | Fixed point. Same as ``\'f\'``. |\n +-----------+------------------------------------------------------------+\n | ``\'g\'`` | General format. For a given precision ``p >= 1``, this |\n | | rounds the number to ``p`` significant digits and then |\n | | formats the result in either fixed-point format or in |\n | | scientific notation, depending on its magnitude. The |\n | | precise rules are as follows: suppose that the result |\n | | formatted with presentation type ``\'e\'`` and precision |\n | | ``p-1`` would have exponent ``exp``. Then if ``-4 <= exp |\n | | < p``, the number is formatted with presentation type |\n | | ``\'f\'`` and precision ``p-1-exp``. Otherwise, the number |\n | | is formatted with presentation type ``\'e\'`` and precision |\n | | ``p-1``. In both cases insignificant trailing zeros are |\n | | removed from the significand, and the decimal point is |\n | | also removed if there are no remaining digits following |\n | | it. Postive and negative infinity, positive and negative |\n | | zero, and nans, are formatted as ``inf``, ``-inf``, ``0``, |\n | | ``-0`` and ``nan`` respectively, regardless of the |\n | | precision. A precision of ``0`` is treated as equivalent |\n | | to a precision of ``1``. |\n +-----------+------------------------------------------------------------+\n | ``\'G\'`` | General format. Same as ``\'g\'`` except switches to ``\'E\'`` |\n | | if the number gets too large. The representations of |\n | | infinity and NaN are uppercased, too. |\n +-----------+------------------------------------------------------------+\n | ``\'n\'`` | Number. This is the same as ``\'g\'``, except that it uses |\n | | the current locale setting to insert the appropriate |\n | | number separator characters. |\n +-----------+------------------------------------------------------------+\n | ``\'%\'`` | Percentage. Multiplies the number by 100 and displays in |\n | | fixed (``\'f\'``) format, followed by a percent sign. |\n +-----------+------------------------------------------------------------+\n | None | The same as ``\'g\'``. |\n +-----------+------------------------------------------------------------+\n\n\nFormat examples\n===============\n\nThis section contains examples of the new format syntax and comparison\nwith the old ``%``-formatting.\n\nIn most of the cases the syntax is similar to the old\n``%``-formatting, with the addition of the ``{}`` and with ``:`` used\ninstead of ``%``. 
For example, ``\'%03.2f\'`` can be translated to\n``\'{:03.2f}\'``.\n\nThe new format syntax also supports new and different options, shown\nin the follow examples.\n\nAccessing arguments by position:\n\n >>> \'{0}, {1}, {2}\'.format(\'a\', \'b\', \'c\')\n \'a, b, c\'\n >>> \'{}, {}, {}\'.format(\'a\', \'b\', \'c\') # 2.7+ only\n \'a, b, c\'\n >>> \'{2}, {1}, {0}\'.format(\'a\', \'b\', \'c\')\n \'c, b, a\'\n >>> \'{2}, {1}, {0}\'.format(*\'abc\') # unpacking argument sequence\n \'c, b, a\'\n >>> \'{0}{1}{0}\'.format(\'abra\', \'cad\') # arguments\' indices can be repeated\n \'abracadabra\'\n\nAccessing arguments by name:\n\n >>> \'Coordinates: {latitude}, {longitude}\'.format(latitude=\'37.24N\', longitude=\'-115.81W\')\n \'Coordinates: 37.24N, -115.81W\'\n >>> coord = {\'latitude\': \'37.24N\', \'longitude\': \'-115.81W\'}\n >>> \'Coordinates: {latitude}, {longitude}\'.format(**coord)\n \'Coordinates: 37.24N, -115.81W\'\n\nAccessing arguments\' attributes:\n\n >>> c = 3-5j\n >>> (\'The complex number {0} is formed from the real part {0.real} \'\n ... \'and the imaginary part {0.imag}.\').format(c)\n \'The complex number (3-5j) is formed from the real part 3.0 and the imaginary part -5.0.\'\n >>> class Point(object):\n ... def __init__(self, x, y):\n ... self.x, self.y = x, y\n ... def __str__(self):\n ... return \'Point({self.x}, {self.y})\'.format(self=self)\n ...\n >>> str(Point(4, 2))\n \'Point(4, 2)\'\n\nAccessing arguments\' items:\n\n >>> coord = (3, 5)\n >>> \'X: {0[0]}; Y: {0[1]}\'.format(coord)\n \'X: 3; Y: 5\'\n\nReplacing ``%s`` and ``%r``:\n\n >>> "repr() shows quotes: {!r}; str() doesn\'t: {!s}".format(\'test1\', \'test2\')\n "repr() shows quotes: \'test1\'; str() doesn\'t: test2"\n\nAligning the text and specifying a width:\n\n >>> \'{:<30}\'.format(\'left aligned\')\n \'left aligned \'\n >>> \'{:>30}\'.format(\'right aligned\')\n \' right aligned\'\n >>> \'{:^30}\'.format(\'centered\')\n \' centered \'\n >>> \'{:*^30}\'.format(\'centered\') # use \'*\' as a fill char\n \'***********centered***********\'\n\nReplacing ``%+f``, ``%-f``, and ``% f`` and specifying a sign:\n\n >>> \'{:+f}; {:+f}\'.format(3.14, -3.14) # show it always\n \'+3.140000; -3.140000\'\n >>> \'{: f}; {: f}\'.format(3.14, -3.14) # show a space for positive numbers\n \' 3.140000; -3.140000\'\n >>> \'{:-f}; {:-f}\'.format(3.14, -3.14) # show only the minus -- same as \'{:f}; {:f}\'\n \'3.140000; -3.140000\'\n\nReplacing ``%x`` and ``%o`` and converting the value to different\nbases:\n\n >>> # format also supports binary numbers\n >>> "int: {0:d}; hex: {0:x}; oct: {0:o}; bin: {0:b}".format(42)\n \'int: 42; hex: 2a; oct: 52; bin: 101010\'\n >>> # with 0x, 0o, or 0b as prefix:\n >>> "int: {0:d}; hex: {0:#x}; oct: {0:#o}; bin: {0:#b}".format(42)\n \'int: 42; hex: 0x2a; oct: 0o52; bin: 0b101010\'\n\nUsing the comma as a thousands separator:\n\n >>> \'{:,}\'.format(1234567890)\n \'1,234,567,890\'\n\nExpressing a percentage:\n\n >>> points = 19.5\n >>> total = 22\n >>> \'Correct answers: {:.2%}.\'.format(points/total)\n \'Correct answers: 88.64%\'\n\nUsing type-specific formatting:\n\n >>> import datetime\n >>> d = datetime.datetime(2010, 7, 4, 12, 15, 58)\n >>> \'{:%Y-%m-%d %H:%M:%S}\'.format(d)\n \'2010-07-04 12:15:58\'\n\nNesting arguments and more complex examples:\n\n >>> for align, text in zip(\'<^>\', [\'left\', \'center\', \'right\']):\n ... 
\'{0:{align}{fill}16}\'.format(text, fill=align, align=align)\n ...\n \'left<<<<<<<<<<<<\'\n \'^^^^^center^^^^^\'\n \'>>>>>>>>>>>right\'\n >>>\n >>> octets = [192, 168, 0, 1]\n >>> \'{:02X}{:02X}{:02X}{:02X}\'.format(*octets)\n \'C0A80001\'\n >>> int(_, 16)\n 3232235521\n >>>\n >>> width = 5\n >>> for num in range(5,12):\n ... for base in \'dXob\':\n ... print \'{0:{width}{base}}\'.format(num, base=base, width=width),\n ... print\n ...\n 5 5 5 101\n 6 6 6 110\n 7 7 7 111\n 8 8 10 1000\n 9 9 11 1001\n 10 A 12 1010\n 11 B 13 1011\n', + 'formatstrings': u'\nFormat String Syntax\n********************\n\nThe ``str.format()`` method and the ``Formatter`` class share the same\nsyntax for format strings (although in the case of ``Formatter``,\nsubclasses can define their own format string syntax).\n\nFormat strings contain "replacement fields" surrounded by curly braces\n``{}``. Anything that is not contained in braces is considered literal\ntext, which is copied unchanged to the output. If you need to include\na brace character in the literal text, it can be escaped by doubling:\n``{{`` and ``}}``.\n\nThe grammar for a replacement field is as follows:\n\n replacement_field ::= "{" [field_name] ["!" conversion] [":" format_spec] "}"\n field_name ::= arg_name ("." attribute_name | "[" element_index "]")*\n arg_name ::= [identifier | integer]\n attribute_name ::= identifier\n element_index ::= integer | index_string\n index_string ::= +\n conversion ::= "r" | "s"\n format_spec ::= \n\nIn less formal terms, the replacement field can start with a\n*field_name* that specifies the object whose value is to be formatted\nand inserted into the output instead of the replacement field. The\n*field_name* is optionally followed by a *conversion* field, which is\npreceded by an exclamation point ``\'!\'``, and a *format_spec*, which\nis preceded by a colon ``\':\'``. These specify a non-default format\nfor the replacement value.\n\nSee also the *Format Specification Mini-Language* section.\n\nThe *field_name* itself begins with an *arg_name* that is either\neither a number or a keyword. If it\'s a number, it refers to a\npositional argument, and if it\'s a keyword, it refers to a named\nkeyword argument. If the numerical arg_names in a format string are\n0, 1, 2, ... in sequence, they can all be omitted (not just some) and\nthe numbers 0, 1, 2, ... will be automatically inserted in that order.\nThe *arg_name* can be followed by any number of index or attribute\nexpressions. An expression of the form ``\'.name\'`` selects the named\nattribute using ``getattr()``, while an expression of the form\n``\'[index]\'`` does an index lookup using ``__getitem__()``.\n\nChanged in version 2.7: The positional argument specifiers can be\nomitted, so ``\'{} {}\'`` is equivalent to ``\'{0} {1}\'``.\n\nSome simple format string examples:\n\n "First, thou shalt count to {0}" # References first positional argument\n "Bring me a {}" # Implicitly references the first positional argument\n "From {} to {}" # Same as "From {0} to {1}"\n "My quest is {name}" # References keyword argument \'name\'\n "Weight in tons {0.weight}" # \'weight\' attribute of first positional arg\n "Units destroyed: {players[0]}" # First element of keyword argument \'players\'.\n\nThe *conversion* field causes a type coercion before formatting.\nNormally, the job of formatting a value is done by the\n``__format__()`` method of the value itself. 
However, in some cases\nit is desirable to force a type to be formatted as a string,\noverriding its own definition of formatting. By converting the value\nto a string before calling ``__format__()``, the normal formatting\nlogic is bypassed.\n\nTwo conversion flags are currently supported: ``\'!s\'`` which calls\n``str()`` on the value, and ``\'!r\'`` which calls ``repr()``.\n\nSome examples:\n\n "Harold\'s a clever {0!s}" # Calls str() on the argument first\n "Bring out the holy {name!r}" # Calls repr() on the argument first\n\nThe *format_spec* field contains a specification of how the value\nshould be presented, including such details as field width, alignment,\npadding, decimal precision and so on. Each value type can define its\nown "formatting mini-language" or interpretation of the *format_spec*.\n\nMost built-in types support a common formatting mini-language, which\nis described in the next section.\n\nA *format_spec* field can also include nested replacement fields\nwithin it. These nested replacement fields can contain only a field\nname; conversion flags and format specifications are not allowed. The\nreplacement fields within the format_spec are substituted before the\n*format_spec* string is interpreted. This allows the formatting of a\nvalue to be dynamically specified.\n\nSee the *Format examples* section for some examples.\n\n\nFormat Specification Mini-Language\n==================================\n\n"Format specifications" are used within replacement fields contained\nwithin a format string to define how individual values are presented\n(see *Format String Syntax*). They can also be passed directly to the\nbuilt-in ``format()`` function. Each formattable type may define how\nthe format specification is to be interpreted.\n\nMost built-in types implement the following options for format\nspecifications, although some of the formatting options are only\nsupported by the numeric types.\n\nA general convention is that an empty format string (``""``) produces\nthe same result as if you had called ``str()`` on the value. A non-\nempty format string typically modifies the result.\n\nThe general form of a *standard format specifier* is:\n\n format_spec ::= [[fill]align][sign][#][0][width][,][.precision][type]\n fill ::= \n align ::= "<" | ">" | "=" | "^"\n sign ::= "+" | "-" | " "\n width ::= integer\n precision ::= integer\n type ::= "b" | "c" | "d" | "e" | "E" | "f" | "F" | "g" | "G" | "n" | "o" | "s" | "x" | "X" | "%"\n\nThe *fill* character can be any character other than \'{\' or \'}\'. The\npresence of a fill character is signaled by the character following\nit, which must be one of the alignment options. If the second\ncharacter of *format_spec* is not a valid alignment option, then it is\nassumed that both the fill character and the alignment option are\nabsent.\n\nThe meaning of the various alignment options is as follows:\n\n +-----------+------------------------------------------------------------+\n | Option | Meaning |\n +===========+============================================================+\n | ``\'<\'`` | Forces the field to be left-aligned within the available |\n | | space (this is the default for most objects). |\n +-----------+------------------------------------------------------------+\n | ``\'>\'`` | Forces the field to be right-aligned within the available |\n | | space (this is the default for numbers). 
|\n +-----------+------------------------------------------------------------+\n | ``\'=\'`` | Forces the padding to be placed after the sign (if any) |\n | | but before the digits. This is used for printing fields |\n | | in the form \'+000000120\'. This alignment option is only |\n | | valid for numeric types. |\n +-----------+------------------------------------------------------------+\n | ``\'^\'`` | Forces the field to be centered within the available |\n | | space. |\n +-----------+------------------------------------------------------------+\n\nNote that unless a minimum field width is defined, the field width\nwill always be the same size as the data to fill it, so that the\nalignment option has no meaning in this case.\n\nThe *sign* option is only valid for number types, and can be one of\nthe following:\n\n +-----------+------------------------------------------------------------+\n | Option | Meaning |\n +===========+============================================================+\n | ``\'+\'`` | indicates that a sign should be used for both positive as |\n | | well as negative numbers. |\n +-----------+------------------------------------------------------------+\n | ``\'-\'`` | indicates that a sign should be used only for negative |\n | | numbers (this is the default behavior). |\n +-----------+------------------------------------------------------------+\n | space | indicates that a leading space should be used on positive |\n | | numbers, and a minus sign on negative numbers. |\n +-----------+------------------------------------------------------------+\n\nThe ``\'#\'`` option is only valid for integers, and only for binary,\noctal, or hexadecimal output. If present, it specifies that the\noutput will be prefixed by ``\'0b\'``, ``\'0o\'``, or ``\'0x\'``,\nrespectively.\n\nThe ``\',\'`` option signals the use of a comma for a thousands\nseparator. For a locale aware separator, use the ``\'n\'`` integer\npresentation type instead.\n\nChanged in version 2.7: Added the ``\',\'`` option (see also **PEP\n378**).\n\n*width* is a decimal integer defining the minimum field width. If not\nspecified, then the field width will be determined by the content.\n\nIf the *width* field is preceded by a zero (``\'0\'``) character, this\nenables zero-padding. This is equivalent to an *alignment* type of\n``\'=\'`` and a *fill* character of ``\'0\'``.\n\nThe *precision* is a decimal number indicating how many digits should\nbe displayed after the decimal point for a floating point value\nformatted with ``\'f\'`` and ``\'F\'``, or before and after the decimal\npoint for a floating point value formatted with ``\'g\'`` or ``\'G\'``.\nFor non-number types the field indicates the maximum field size - in\nother words, how many characters will be used from the field content.\nThe *precision* is not allowed for integer values.\n\nFinally, the *type* determines how the data should be presented.\n\nThe available string presentation types are:\n\n +-----------+------------------------------------------------------------+\n | Type | Meaning |\n +===========+============================================================+\n | ``\'s\'`` | String format. This is the default type for strings and |\n | | may be omitted. |\n +-----------+------------------------------------------------------------+\n | None | The same as ``\'s\'``. 
|\n +-----------+------------------------------------------------------------+\n\nThe available integer presentation types are:\n\n +-----------+------------------------------------------------------------+\n | Type | Meaning |\n +===========+============================================================+\n | ``\'b\'`` | Binary format. Outputs the number in base 2. |\n +-----------+------------------------------------------------------------+\n | ``\'c\'`` | Character. Converts the integer to the corresponding |\n | | unicode character before printing. |\n +-----------+------------------------------------------------------------+\n | ``\'d\'`` | Decimal Integer. Outputs the number in base 10. |\n +-----------+------------------------------------------------------------+\n | ``\'o\'`` | Octal format. Outputs the number in base 8. |\n +-----------+------------------------------------------------------------+\n | ``\'x\'`` | Hex format. Outputs the number in base 16, using lower- |\n | | case letters for the digits above 9. |\n +-----------+------------------------------------------------------------+\n | ``\'X\'`` | Hex format. Outputs the number in base 16, using upper- |\n | | case letters for the digits above 9. |\n +-----------+------------------------------------------------------------+\n | ``\'n\'`` | Number. This is the same as ``\'d\'``, except that it uses |\n | | the current locale setting to insert the appropriate |\n | | number separator characters. |\n +-----------+------------------------------------------------------------+\n | None | The same as ``\'d\'``. |\n +-----------+------------------------------------------------------------+\n\nIn addition to the above presentation types, integers can be formatted\nwith the floating point presentation types listed below (except\n``\'n\'`` and None). When doing so, ``float()`` is used to convert the\ninteger to a floating point number before formatting.\n\nThe available presentation types for floating point and decimal values\nare:\n\n +-----------+------------------------------------------------------------+\n | Type | Meaning |\n +===========+============================================================+\n | ``\'e\'`` | Exponent notation. Prints the number in scientific |\n | | notation using the letter \'e\' to indicate the exponent. |\n +-----------+------------------------------------------------------------+\n | ``\'E\'`` | Exponent notation. Same as ``\'e\'`` except it uses an upper |\n | | case \'E\' as the separator character. |\n +-----------+------------------------------------------------------------+\n | ``\'f\'`` | Fixed point. Displays the number as a fixed-point number. |\n +-----------+------------------------------------------------------------+\n | ``\'F\'`` | Fixed point. Same as ``\'f\'``. |\n +-----------+------------------------------------------------------------+\n | ``\'g\'`` | General format. For a given precision ``p >= 1``, this |\n | | rounds the number to ``p`` significant digits and then |\n | | formats the result in either fixed-point format or in |\n | | scientific notation, depending on its magnitude. The |\n | | precise rules are as follows: suppose that the result |\n | | formatted with presentation type ``\'e\'`` and precision |\n | | ``p-1`` would have exponent ``exp``. Then if ``-4 <= exp |\n | | < p``, the number is formatted with presentation type |\n | | ``\'f\'`` and precision ``p-1-exp``. 
Otherwise, the number |\n | | is formatted with presentation type ``\'e\'`` and precision |\n | | ``p-1``. In both cases insignificant trailing zeros are |\n | | removed from the significand, and the decimal point is |\n | | also removed if there are no remaining digits following |\n | | it. Positive and negative infinity, positive and negative |\n | | zero, and nans, are formatted as ``inf``, ``-inf``, ``0``, |\n | | ``-0`` and ``nan`` respectively, regardless of the |\n | | precision. A precision of ``0`` is treated as equivalent |\n | | to a precision of ``1``. |\n +-----------+------------------------------------------------------------+\n | ``\'G\'`` | General format. Same as ``\'g\'`` except switches to ``\'E\'`` |\n | | if the number gets too large. The representations of |\n | | infinity and NaN are uppercased, too. |\n +-----------+------------------------------------------------------------+\n | ``\'n\'`` | Number. This is the same as ``\'g\'``, except that it uses |\n | | the current locale setting to insert the appropriate |\n | | number separator characters. |\n +-----------+------------------------------------------------------------+\n | ``\'%\'`` | Percentage. Multiplies the number by 100 and displays in |\n | | fixed (``\'f\'``) format, followed by a percent sign. |\n +-----------+------------------------------------------------------------+\n | None | The same as ``\'g\'``. |\n +-----------+------------------------------------------------------------+\n\n\nFormat examples\n===============\n\nThis section contains examples of the new format syntax and comparison\nwith the old ``%``-formatting.\n\nIn most of the cases the syntax is similar to the old\n``%``-formatting, with the addition of the ``{}`` and with ``:`` used\ninstead of ``%``. For example, ``\'%03.2f\'`` can be translated to\n``\'{:03.2f}\'``.\n\nThe new format syntax also supports new and different options, shown\nin the follow examples.\n\nAccessing arguments by position:\n\n >>> \'{0}, {1}, {2}\'.format(\'a\', \'b\', \'c\')\n \'a, b, c\'\n >>> \'{}, {}, {}\'.format(\'a\', \'b\', \'c\') # 2.7+ only\n \'a, b, c\'\n >>> \'{2}, {1}, {0}\'.format(\'a\', \'b\', \'c\')\n \'c, b, a\'\n >>> \'{2}, {1}, {0}\'.format(*\'abc\') # unpacking argument sequence\n \'c, b, a\'\n >>> \'{0}{1}{0}\'.format(\'abra\', \'cad\') # arguments\' indices can be repeated\n \'abracadabra\'\n\nAccessing arguments by name:\n\n >>> \'Coordinates: {latitude}, {longitude}\'.format(latitude=\'37.24N\', longitude=\'-115.81W\')\n \'Coordinates: 37.24N, -115.81W\'\n >>> coord = {\'latitude\': \'37.24N\', \'longitude\': \'-115.81W\'}\n >>> \'Coordinates: {latitude}, {longitude}\'.format(**coord)\n \'Coordinates: 37.24N, -115.81W\'\n\nAccessing arguments\' attributes:\n\n >>> c = 3-5j\n >>> (\'The complex number {0} is formed from the real part {0.real} \'\n ... \'and the imaginary part {0.imag}.\').format(c)\n \'The complex number (3-5j) is formed from the real part 3.0 and the imaginary part -5.0.\'\n >>> class Point(object):\n ... def __init__(self, x, y):\n ... self.x, self.y = x, y\n ... def __str__(self):\n ... 
return \'Point({self.x}, {self.y})\'.format(self=self)\n ...\n >>> str(Point(4, 2))\n \'Point(4, 2)\'\n\nAccessing arguments\' items:\n\n >>> coord = (3, 5)\n >>> \'X: {0[0]}; Y: {0[1]}\'.format(coord)\n \'X: 3; Y: 5\'\n\nReplacing ``%s`` and ``%r``:\n\n >>> "repr() shows quotes: {!r}; str() doesn\'t: {!s}".format(\'test1\', \'test2\')\n "repr() shows quotes: \'test1\'; str() doesn\'t: test2"\n\nAligning the text and specifying a width:\n\n >>> \'{:<30}\'.format(\'left aligned\')\n \'left aligned \'\n >>> \'{:>30}\'.format(\'right aligned\')\n \' right aligned\'\n >>> \'{:^30}\'.format(\'centered\')\n \' centered \'\n >>> \'{:*^30}\'.format(\'centered\') # use \'*\' as a fill char\n \'***********centered***********\'\n\nReplacing ``%+f``, ``%-f``, and ``% f`` and specifying a sign:\n\n >>> \'{:+f}; {:+f}\'.format(3.14, -3.14) # show it always\n \'+3.140000; -3.140000\'\n >>> \'{: f}; {: f}\'.format(3.14, -3.14) # show a space for positive numbers\n \' 3.140000; -3.140000\'\n >>> \'{:-f}; {:-f}\'.format(3.14, -3.14) # show only the minus -- same as \'{:f}; {:f}\'\n \'3.140000; -3.140000\'\n\nReplacing ``%x`` and ``%o`` and converting the value to different\nbases:\n\n >>> # format also supports binary numbers\n >>> "int: {0:d}; hex: {0:x}; oct: {0:o}; bin: {0:b}".format(42)\n \'int: 42; hex: 2a; oct: 52; bin: 101010\'\n >>> # with 0x, 0o, or 0b as prefix:\n >>> "int: {0:d}; hex: {0:#x}; oct: {0:#o}; bin: {0:#b}".format(42)\n \'int: 42; hex: 0x2a; oct: 0o52; bin: 0b101010\'\n\nUsing the comma as a thousands separator:\n\n >>> \'{:,}\'.format(1234567890)\n \'1,234,567,890\'\n\nExpressing a percentage:\n\n >>> points = 19.5\n >>> total = 22\n >>> \'Correct answers: {:.2%}.\'.format(points/total)\n \'Correct answers: 88.64%\'\n\nUsing type-specific formatting:\n\n >>> import datetime\n >>> d = datetime.datetime(2010, 7, 4, 12, 15, 58)\n >>> \'{:%Y-%m-%d %H:%M:%S}\'.format(d)\n \'2010-07-04 12:15:58\'\n\nNesting arguments and more complex examples:\n\n >>> for align, text in zip(\'<^>\', [\'left\', \'center\', \'right\']):\n ... \'{0:{fill}{align}16}\'.format(text, fill=align, align=align)\n ...\n \'left<<<<<<<<<<<<\'\n \'^^^^^center^^^^^\'\n \'>>>>>>>>>>>right\'\n >>>\n >>> octets = [192, 168, 0, 1]\n >>> \'{:02X}{:02X}{:02X}{:02X}\'.format(*octets)\n \'C0A80001\'\n >>> int(_, 16)\n 3232235521\n >>>\n >>> width = 5\n >>> for num in range(5,12):\n ... for base in \'dXob\':\n ... print \'{0:{width}{base}}\'.format(num, base=base, width=width),\n ... print\n ...\n 5 5 5 101\n 6 6 6 110\n 7 7 7 111\n 8 8 10 1000\n 9 9 11 1001\n 10 A 12 1010\n 11 B 13 1011\n', 'function': u'\nFunction definitions\n********************\n\nA function definition defines a user-defined function object (see\nsection *The standard type hierarchy*):\n\n decorated ::= decorators (classdef | funcdef)\n decorators ::= decorator+\n decorator ::= "@" dotted_name ["(" [argument_list [","]] ")"] NEWLINE\n funcdef ::= "def" funcname "(" [parameter_list] ")" ":" suite\n dotted_name ::= identifier ("." identifier)*\n parameter_list ::= (defparameter ",")*\n ( "*" identifier [, "**" identifier]\n | "**" identifier\n | defparameter [","] )\n defparameter ::= parameter ["=" expression]\n sublist ::= parameter ("," parameter)* [","]\n parameter ::= identifier | "(" sublist ")"\n funcname ::= identifier\n\nA function definition is an executable statement. Its execution binds\nthe function name in the current local namespace to a function object\n(a wrapper around the executable code for the function). 
This\nfunction object contains a reference to the current global namespace\nas the global namespace to be used when the function is called.\n\nThe function definition does not execute the function body; this gets\nexecuted only when the function is called. [3]\n\nA function definition may be wrapped by one or more *decorator*\nexpressions. Decorator expressions are evaluated when the function is\ndefined, in the scope that contains the function definition. The\nresult must be a callable, which is invoked with the function object\nas the only argument. The returned value is bound to the function name\ninstead of the function object. Multiple decorators are applied in\nnested fashion. For example, the following code:\n\n @f1(arg)\n @f2\n def func(): pass\n\nis equivalent to:\n\n def func(): pass\n func = f1(arg)(f2(func))\n\nWhen one or more top-level parameters have the form *parameter* ``=``\n*expression*, the function is said to have "default parameter values."\nFor a parameter with a default value, the corresponding argument may\nbe omitted from a call, in which case the parameter\'s default value is\nsubstituted. If a parameter has a default value, all following\nparameters must also have a default value --- this is a syntactic\nrestriction that is not expressed by the grammar.\n\n**Default parameter values are evaluated when the function definition\nis executed.** This means that the expression is evaluated once, when\nthe function is defined, and that that same "pre-computed" value is\nused for each call. This is especially important to understand when a\ndefault parameter is a mutable object, such as a list or a dictionary:\nif the function modifies the object (e.g. by appending an item to a\nlist), the default value is in effect modified. This is generally not\nwhat was intended. A way around this is to use ``None`` as the\ndefault, and explicitly test for it in the body of the function, e.g.:\n\n def whats_on_the_telly(penguin=None):\n if penguin is None:\n penguin = []\n penguin.append("property of the zoo")\n return penguin\n\nFunction call semantics are described in more detail in section\n*Calls*. A function call always assigns values to all parameters\nmentioned in the parameter list, either from position arguments, from\nkeyword arguments, or from default values. If the form\n"``*identifier``" is present, it is initialized to a tuple receiving\nany excess positional parameters, defaulting to the empty tuple. If\nthe form "``**identifier``" is present, it is initialized to a new\ndictionary receiving any excess keyword arguments, defaulting to a new\nempty dictionary.\n\nIt is also possible to create anonymous functions (functions not bound\nto a name), for immediate use in expressions. This uses lambda forms,\ndescribed in section *Lambdas*. Note that the lambda form is merely a\nshorthand for a simplified function definition; a function defined in\na "``def``" statement can be passed around or assigned to another name\njust like a function defined by a lambda form. The "``def``" form is\nactually more powerful since it allows the execution of multiple\nstatements.\n\n**Programmer\'s note:** Functions are first-class objects. A "``def``"\nform executed inside a function definition defines a local function\nthat can be returned or passed around. Free variables used in the\nnested function can access the local variables of the function\ncontaining the def. 
See section *Naming and binding* for details.\n', 'global': u'\nThe ``global`` statement\n************************\n\n global_stmt ::= "global" identifier ("," identifier)*\n\nThe ``global`` statement is a declaration which holds for the entire\ncurrent code block. It means that the listed identifiers are to be\ninterpreted as globals. It would be impossible to assign to a global\nvariable without ``global``, although free variables may refer to\nglobals without being declared global.\n\nNames listed in a ``global`` statement must not be used in the same\ncode block textually preceding that ``global`` statement.\n\nNames listed in a ``global`` statement must not be defined as formal\nparameters or in a ``for`` loop control target, ``class`` definition,\nfunction definition, or ``import`` statement.\n\n**CPython implementation detail:** The current implementation does not\nenforce the latter two restrictions, but programs should not abuse\nthis freedom, as future implementations may enforce them or silently\nchange the meaning of the program.\n\n**Programmer\'s note:** the ``global`` is a directive to the parser.\nIt applies only to code parsed at the same time as the ``global``\nstatement. In particular, a ``global`` statement contained in an\n``exec`` statement does not affect the code block *containing* the\n``exec`` statement, and code contained in an ``exec`` statement is\nunaffected by ``global`` statements in the code containing the\n``exec`` statement. The same applies to the ``eval()``,\n``execfile()`` and ``compile()`` functions.\n', - 'id-classes': u'\nReserved classes of identifiers\n*******************************\n\nCertain classes of identifiers (besides keywords) have special\nmeanings. These classes are identified by the patterns of leading and\ntrailing underscore characters:\n\n``_*``\n Not imported by ``from module import *``. The special identifier\n ``_`` is used in the interactive interpreter to store the result of\n the last evaluation; it is stored in the ``__builtin__`` module.\n When not in interactive mode, ``_`` has no special meaning and is\n not defined. See section *The import statement*.\n\n Note: The name ``_`` is often used in conjunction with\n internationalization; refer to the documentation for the\n ``gettext`` module for more information on this convention.\n\n``__*__``\n System-defined names. These names are defined by the interpreter\n and its implementation (including the standard library);\n applications should not expect to define additional names using\n this convention. The set of names of this class defined by Python\n may be extended in future versions. See section *Special method\n names*.\n\n``__*``\n Class-private names. Names in this category, when used within the\n context of a class definition, are re-written to use a mangled form\n to help avoid name clashes between "private" attributes of base and\n derived classes. See section *Identifiers (Names)*.\n', - 'identifiers': u'\nIdentifiers and keywords\n************************\n\nIdentifiers (also referred to as *names*) are described by the\nfollowing lexical definitions:\n\n identifier ::= (letter|"_") (letter | digit | "_")*\n letter ::= lowercase | uppercase\n lowercase ::= "a"..."z"\n uppercase ::= "A"..."Z"\n digit ::= "0"..."9"\n\nIdentifiers are unlimited in length. Case is significant.\n\n\nKeywords\n========\n\nThe following identifiers are used as reserved words, or *keywords* of\nthe language, and cannot be used as ordinary identifiers. 
They must\nbe spelled exactly as written here:\n\n and del from not while\n as elif global or with\n assert else if pass yield\n break except import print\n class exec in raise\n continue finally is return\n def for lambda try\n\nChanged in version 2.4: ``None`` became a constant and is now\nrecognized by the compiler as a name for the built-in object ``None``.\nAlthough it is not a keyword, you cannot assign a different object to\nit.\n\nChanged in version 2.5: Both ``as`` and ``with`` are only recognized\nwhen the ``with_statement`` future feature has been enabled. It will\nalways be enabled in Python 2.6. See section *The with statement* for\ndetails. Note that using ``as`` and ``with`` as identifiers will\nalways issue a warning, even when the ``with_statement`` future\ndirective is not in effect.\n\n\nReserved classes of identifiers\n===============================\n\nCertain classes of identifiers (besides keywords) have special\nmeanings. These classes are identified by the patterns of leading and\ntrailing underscore characters:\n\n``_*``\n Not imported by ``from module import *``. The special identifier\n ``_`` is used in the interactive interpreter to store the result of\n the last evaluation; it is stored in the ``__builtin__`` module.\n When not in interactive mode, ``_`` has no special meaning and is\n not defined. See section *The import statement*.\n\n Note: The name ``_`` is often used in conjunction with\n internationalization; refer to the documentation for the\n ``gettext`` module for more information on this convention.\n\n``__*__``\n System-defined names. These names are defined by the interpreter\n and its implementation (including the standard library);\n applications should not expect to define additional names using\n this convention. The set of names of this class defined by Python\n may be extended in future versions. See section *Special method\n names*.\n\n``__*``\n Class-private names. Names in this category, when used within the\n context of a class definition, are re-written to use a mangled form\n to help avoid name clashes between "private" attributes of base and\n derived classes. See section *Identifiers (Names)*.\n', + 'id-classes': u'\nReserved classes of identifiers\n*******************************\n\nCertain classes of identifiers (besides keywords) have special\nmeanings. These classes are identified by the patterns of leading and\ntrailing underscore characters:\n\n``_*``\n Not imported by ``from module import *``. The special identifier\n ``_`` is used in the interactive interpreter to store the result of\n the last evaluation; it is stored in the ``__builtin__`` module.\n When not in interactive mode, ``_`` has no special meaning and is\n not defined. See section *The import statement*.\n\n Note: The name ``_`` is often used in conjunction with\n internationalization; refer to the documentation for the\n ``gettext`` module for more information on this convention.\n\n``__*__``\n System-defined names. These names are defined by the interpreter\n and its implementation (including the standard library). Current\n system names are discussed in the *Special method names* section\n and elsewhere. More will likely be defined in future versions of\n Python. *Any* use of ``__*__`` names, in any context, that does\n not follow explicitly documented use, is subject to breakage\n without warning.\n\n``__*``\n Class-private names. 
Names in this category, when used within the\n context of a class definition, are re-written to use a mangled form\n to help avoid name clashes between "private" attributes of base and\n derived classes. See section *Identifiers (Names)*.\n', + 'identifiers': u'\nIdentifiers and keywords\n************************\n\nIdentifiers (also referred to as *names*) are described by the\nfollowing lexical definitions:\n\n identifier ::= (letter|"_") (letter | digit | "_")*\n letter ::= lowercase | uppercase\n lowercase ::= "a"..."z"\n uppercase ::= "A"..."Z"\n digit ::= "0"..."9"\n\nIdentifiers are unlimited in length. Case is significant.\n\n\nKeywords\n========\n\nThe following identifiers are used as reserved words, or *keywords* of\nthe language, and cannot be used as ordinary identifiers. They must\nbe spelled exactly as written here:\n\n and del from not while\n as elif global or with\n assert else if pass yield\n break except import print\n class exec in raise\n continue finally is return\n def for lambda try\n\nChanged in version 2.4: ``None`` became a constant and is now\nrecognized by the compiler as a name for the built-in object ``None``.\nAlthough it is not a keyword, you cannot assign a different object to\nit.\n\nChanged in version 2.5: Both ``as`` and ``with`` are only recognized\nwhen the ``with_statement`` future feature has been enabled. It will\nalways be enabled in Python 2.6. See section *The with statement* for\ndetails. Note that using ``as`` and ``with`` as identifiers will\nalways issue a warning, even when the ``with_statement`` future\ndirective is not in effect.\n\n\nReserved classes of identifiers\n===============================\n\nCertain classes of identifiers (besides keywords) have special\nmeanings. These classes are identified by the patterns of leading and\ntrailing underscore characters:\n\n``_*``\n Not imported by ``from module import *``. The special identifier\n ``_`` is used in the interactive interpreter to store the result of\n the last evaluation; it is stored in the ``__builtin__`` module.\n When not in interactive mode, ``_`` has no special meaning and is\n not defined. See section *The import statement*.\n\n Note: The name ``_`` is often used in conjunction with\n internationalization; refer to the documentation for the\n ``gettext`` module for more information on this convention.\n\n``__*__``\n System-defined names. These names are defined by the interpreter\n and its implementation (including the standard library). Current\n system names are discussed in the *Special method names* section\n and elsewhere. More will likely be defined in future versions of\n Python. *Any* use of ``__*__`` names, in any context, that does\n not follow explicitly documented use, is subject to breakage\n without warning.\n\n``__*``\n Class-private names. Names in this category, when used within the\n context of a class definition, are re-written to use a mangled form\n to help avoid name clashes between "private" attributes of base and\n derived classes. 
See section *Identifiers (Names)*.\n', 'if': u'\nThe ``if`` statement\n********************\n\nThe ``if`` statement is used for conditional execution:\n\n if_stmt ::= "if" expression ":" suite\n ( "elif" expression ":" suite )*\n ["else" ":" suite]\n\nIt selects exactly one of the suites by evaluating the expressions one\nby one until one is found to be true (see section *Boolean operations*\nfor the definition of true and false); then that suite is executed\n(and no other part of the ``if`` statement is executed or evaluated).\nIf all expressions are false, the suite of the ``else`` clause, if\npresent, is executed.\n', 'imaginary': u'\nImaginary literals\n******************\n\nImaginary literals are described by the following lexical definitions:\n\n imagnumber ::= (floatnumber | intpart) ("j" | "J")\n\nAn imaginary literal yields a complex number with a real part of 0.0.\nComplex numbers are represented as a pair of floating point numbers\nand have the same restrictions on their range. To create a complex\nnumber with a nonzero real part, add a floating point number to it,\ne.g., ``(3+4j)``. Some examples of imaginary literals:\n\n 3.14j 10.j 10j .001j 1e100j 3.14e-10j\n', - 'import': u'\nThe ``import`` statement\n************************\n\n import_stmt ::= "import" module ["as" name] ( "," module ["as" name] )*\n | "from" relative_module "import" identifier ["as" name]\n ( "," identifier ["as" name] )*\n | "from" relative_module "import" "(" identifier ["as" name]\n ( "," identifier ["as" name] )* [","] ")"\n | "from" module "import" "*"\n module ::= (identifier ".")* identifier\n relative_module ::= "."* module | "."+\n name ::= identifier\n\nImport statements are executed in two steps: (1) find a module, and\ninitialize it if necessary; (2) define a name or names in the local\nnamespace (of the scope where the ``import`` statement occurs). The\nstatement comes in two forms differing on whether it uses the ``from``\nkeyword. The first form (without ``from``) repeats these steps for\neach identifier in the list. The form with ``from`` performs step (1)\nonce, and then performs step (2) repeatedly.\n\nTo understand how step (1) occurs, one must first understand how\nPython handles hierarchical naming of modules. To help organize\nmodules and provide a hierarchy in naming, Python has a concept of\npackages. A package can contain other packages and modules while\nmodules cannot contain other modules or packages. From a file system\nperspective, packages are directories and modules are files. The\noriginal specification for packages is still available to read,\nalthough minor details have changed since the writing of that\ndocument.\n\nOnce the name of the module is known (unless otherwise specified, the\nterm "module" will refer to both packages and modules), searching for\nthe module or package can begin. The first place checked is\n``sys.modules``, the cache of all modules that have been imported\npreviously. If the module is found there then it is used in step (2)\nof import.\n\nIf the module is not found in the cache, then ``sys.meta_path`` is\nsearched (the specification for ``sys.meta_path`` can be found in\n**PEP 302**). The object is a list of *finder* objects which are\nqueried in order as to whether they know how to load the module by\ncalling their ``find_module()`` method with the name of the module. 
If\nthe module happens to be contained within a package (as denoted by the\nexistence of a dot in the name), then a second argument to\n``find_module()`` is given as the value of the ``__path__`` attribute\nfrom the parent package (everything up to the last dot in the name of\nthe module being imported). If a finder can find the module it returns\na *loader* (discussed later) or returns ``None``.\n\nIf none of the finders on ``sys.meta_path`` are able to find the\nmodule then some implicitly defined finders are queried.\nImplementations of Python vary in what implicit meta path finders are\ndefined. The one they all do define, though, is one that handles\n``sys.path_hooks``, ``sys.path_importer_cache``, and ``sys.path``.\n\nThe implicit finder searches for the requested module in the "paths"\nspecified in one of two places ("paths" do not have to be file system\npaths). If the module being imported is supposed to be contained\nwithin a package then the second argument passed to ``find_module()``,\n``__path__`` on the parent package, is used as the source of paths. If\nthe module is not contained in a package then ``sys.path`` is used as\nthe source of paths.\n\nOnce the source of paths is chosen it is iterated over to find a\nfinder that can handle that path. The dict at\n``sys.path_importer_cache`` caches finders for paths and is checked\nfor a finder. If the path does not have a finder cached then\n``sys.path_hooks`` is searched by calling each object in the list with\na single argument of the path, returning a finder or raises\n``ImportError``. If a finder is returned then it is cached in\n``sys.path_importer_cache`` and then used for that path entry. If no\nfinder can be found but the path exists then a value of ``None`` is\nstored in ``sys.path_importer_cache`` to signify that an implicit,\nfile-based finder that handles modules stored as individual files\nshould be used for that path. If the path does not exist then a finder\nwhich always returns ``None`` is placed in the cache for the path.\n\nIf no finder can find the module then ``ImportError`` is raised.\nOtherwise some finder returned a loader whose ``load_module()`` method\nis called with the name of the module to load (see **PEP 302** for the\noriginal definition of loaders). A loader has several responsibilities\nto perform on a module it loads. First, if the module already exists\nin ``sys.modules`` (a possibility if the loader is called outside of\nthe import machinery) then it is to use that module for initialization\nand not a new module. But if the module does not exist in\n``sys.modules`` then it is to be added to that dict before\ninitialization begins. If an error occurs during loading of the module\nand it was added to ``sys.modules`` it is to be removed from the dict.\nIf an error occurs but the module was already in ``sys.modules`` it is\nleft in the dict.\n\nThe loader must set several attributes on the module. ``__name__`` is\nto be set to the name of the module. ``__file__`` is to be the "path"\nto the file unless the module is built-in (and thus listed in\n``sys.builtin_module_names``) in which case the attribute is not set.\nIf what is being imported is a package then ``__path__`` is to be set\nto a list of paths to be searched when looking for modules and\npackages contained within the package being imported. ``__package__``\nis optional but should be set to the name of package that contains the\nmodule or package (the empty string is used for module not contained\nin a package). 
``__loader__`` is also optional but should be set to\nthe loader object that is loading the module.\n\nIf an error occurs during loading then the loader raises\n``ImportError`` if some other exception is not already being\npropagated. Otherwise the loader returns the module that was loaded\nand initialized.\n\nWhen step (1) finishes without raising an exception, step (2) can\nbegin.\n\nThe first form of ``import`` statement binds the module name in the\nlocal namespace to the module object, and then goes on to import the\nnext identifier, if any. If the module name is followed by ``as``,\nthe name following ``as`` is used as the local name for the module.\n\nThe ``from`` form does not bind the module name: it goes through the\nlist of identifiers, looks each one of them up in the module found in\nstep (1), and binds the name in the local namespace to the object thus\nfound. As with the first form of ``import``, an alternate local name\ncan be supplied by specifying "``as`` localname". If a name is not\nfound, ``ImportError`` is raised. If the list of identifiers is\nreplaced by a star (``\'*\'``), all public names defined in the module\nare bound in the local namespace of the ``import`` statement..\n\nThe *public names* defined by a module are determined by checking the\nmodule\'s namespace for a variable named ``__all__``; if defined, it\nmust be a sequence of strings which are names defined or imported by\nthat module. The names given in ``__all__`` are all considered public\nand are required to exist. If ``__all__`` is not defined, the set of\npublic names includes all names found in the module\'s namespace which\ndo not begin with an underscore character (``\'_\'``). ``__all__``\nshould contain the entire public API. It is intended to avoid\naccidentally exporting items that are not part of the API (such as\nlibrary modules which were imported and used within the module).\n\nThe ``from`` form with ``*`` may only occur in a module scope. If the\nwild card form of import --- ``import *`` --- is used in a function\nand the function contains or is a nested block with free variables,\nthe compiler will raise a ``SyntaxError``.\n\nWhen specifying what module to import you do not have to specify the\nabsolute name of the module. When a module or package is contained\nwithin another package it is possible to make a relative import within\nthe same top package without having to mention the package name. By\nusing leading dots in the specified module or package after ``from``\nyou can specify how high to traverse up the current package hierarchy\nwithout specifying exact names. One leading dot means the current\npackage where the module making the import exists. Two dots means up\none package level. Three dots is up two levels, etc. So if you execute\n``from . import mod`` from a module in the ``pkg`` package then you\nwill end up importing ``pkg.mod``. If you execute ``from ..subpkg2\nimprt mod`` from within ``pkg.subpkg1`` you will import\n``pkg.subpkg2.mod``. The specification for relative imports is\ncontained within **PEP 328**.\n\n``importlib.import_module()`` is provided to support applications that\ndetermine which modules need to be loaded dynamically.\n\n\nFuture statements\n=================\n\nA *future statement* is a directive to the compiler that a particular\nmodule should be compiled using syntax or semantics that will be\navailable in a specified future release of Python. 
The future\nstatement is intended to ease migration to future versions of Python\nthat introduce incompatible changes to the language. It allows use of\nthe new features on a per-module basis before the release in which the\nfeature becomes standard.\n\n future_statement ::= "from" "__future__" "import" feature ["as" name]\n ("," feature ["as" name])*\n | "from" "__future__" "import" "(" feature ["as" name]\n ("," feature ["as" name])* [","] ")"\n feature ::= identifier\n name ::= identifier\n\nA future statement must appear near the top of the module. The only\nlines that can appear before a future statement are:\n\n* the module docstring (if any),\n\n* comments,\n\n* blank lines, and\n\n* other future statements.\n\nThe features recognized by Python 2.6 are ``unicode_literals``,\n``print_function``, ``absolute_import``, ``division``, ``generators``,\n``nested_scopes`` and ``with_statement``. ``generators``,\n``with_statement``, ``nested_scopes`` are redundant in Python version\n2.6 and above because they are always enabled.\n\nA future statement is recognized and treated specially at compile\ntime: Changes to the semantics of core constructs are often\nimplemented by generating different code. It may even be the case\nthat a new feature introduces new incompatible syntax (such as a new\nreserved word), in which case the compiler may need to parse the\nmodule differently. Such decisions cannot be pushed off until\nruntime.\n\nFor any given release, the compiler knows which feature names have\nbeen defined, and raises a compile-time error if a future statement\ncontains a feature not known to it.\n\nThe direct runtime semantics are the same as for any import statement:\nthere is a standard module ``__future__``, described later, and it\nwill be imported in the usual way at the time the future statement is\nexecuted.\n\nThe interesting runtime semantics depend on the specific feature\nenabled by the future statement.\n\nNote that there is nothing special about the statement:\n\n import __future__ [as name]\n\nThat is not a future statement; it\'s an ordinary import statement with\nno special semantics or syntax restrictions.\n\nCode compiled by an ``exec`` statement or calls to the built-in\nfunctions ``compile()`` and ``execfile()`` that occur in a module\n``M`` containing a future statement will, by default, use the new\nsyntax or semantics associated with the future statement. This can,\nstarting with Python 2.2 be controlled by optional arguments to\n``compile()`` --- see the documentation of that function for details.\n\nA future statement typed at an interactive interpreter prompt will\ntake effect for the rest of the interpreter session. 
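For example, a module (or an interactive session) targeting Python 2.6 or later can opt in to the new division and print semantics near the top of the file; a minimal sketch:

    from __future__ import division, print_function

    print(1 / 2)    # prints 0.5 rather than 0 once "division" is in effect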
If an\ninterpreter is started with the *-i* option, is passed a script name\nto execute, and the script includes a future statement, it will be in\neffect in the interactive session started after the script is\nexecuted.\n\nSee also:\n\n **PEP 236** - Back to the __future__\n The original proposal for the __future__ mechanism.\n', + 'import': u'\nThe ``import`` statement\n************************\n\n import_stmt ::= "import" module ["as" name] ( "," module ["as" name] )*\n | "from" relative_module "import" identifier ["as" name]\n ( "," identifier ["as" name] )*\n | "from" relative_module "import" "(" identifier ["as" name]\n ( "," identifier ["as" name] )* [","] ")"\n | "from" module "import" "*"\n module ::= (identifier ".")* identifier\n relative_module ::= "."* module | "."+\n name ::= identifier\n\nImport statements are executed in two steps: (1) find a module, and\ninitialize it if necessary; (2) define a name or names in the local\nnamespace (of the scope where the ``import`` statement occurs). The\nstatement comes in two forms differing on whether it uses the ``from``\nkeyword. The first form (without ``from``) repeats these steps for\neach identifier in the list. The form with ``from`` performs step (1)\nonce, and then performs step (2) repeatedly.\n\nTo understand how step (1) occurs, one must first understand how\nPython handles hierarchical naming of modules. To help organize\nmodules and provide a hierarchy in naming, Python has a concept of\npackages. A package can contain other packages and modules while\nmodules cannot contain other modules or packages. From a file system\nperspective, packages are directories and modules are files. The\noriginal specification for packages is still available to read,\nalthough minor details have changed since the writing of that\ndocument.\n\nOnce the name of the module is known (unless otherwise specified, the\nterm "module" will refer to both packages and modules), searching for\nthe module or package can begin. The first place checked is\n``sys.modules``, the cache of all modules that have been imported\npreviously. If the module is found there then it is used in step (2)\nof import.\n\nIf the module is not found in the cache, then ``sys.meta_path`` is\nsearched (the specification for ``sys.meta_path`` can be found in\n**PEP 302**). The object is a list of *finder* objects which are\nqueried in order as to whether they know how to load the module by\ncalling their ``find_module()`` method with the name of the module. If\nthe module happens to be contained within a package (as denoted by the\nexistence of a dot in the name), then a second argument to\n``find_module()`` is given as the value of the ``__path__`` attribute\nfrom the parent package (everything up to the last dot in the name of\nthe module being imported). If a finder can find the module it returns\na *loader* (discussed later) or returns ``None``.\n\nIf none of the finders on ``sys.meta_path`` are able to find the\nmodule then some implicitly defined finders are queried.\nImplementations of Python vary in what implicit meta path finders are\ndefined. The one they all do define, though, is one that handles\n``sys.path_hooks``, ``sys.path_importer_cache``, and ``sys.path``.\n\nThe implicit finder searches for the requested module in the "paths"\nspecified in one of two places ("paths" do not have to be file system\npaths). 
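A quick way to observe the ``sys.modules`` cache mentioned earlier (``string`` is used here only as a convenient stand-in for any module that has already been imported):

    import sys
    import string

    # A later "import string" simply reuses this cached module object.
    assert sys.modules["string"] is string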
If the module being imported is supposed to be contained\nwithin a package then the second argument passed to ``find_module()``,\n``__path__`` on the parent package, is used as the source of paths. If\nthe module is not contained in a package then ``sys.path`` is used as\nthe source of paths.\n\nOnce the source of paths is chosen it is iterated over to find a\nfinder that can handle that path. The dict at\n``sys.path_importer_cache`` caches finders for paths and is checked\nfor a finder. If the path does not have a finder cached then\n``sys.path_hooks`` is searched by calling each object in the list with\na single argument of the path, returning a finder or raises\n``ImportError``. If a finder is returned then it is cached in\n``sys.path_importer_cache`` and then used for that path entry. If no\nfinder can be found but the path exists then a value of ``None`` is\nstored in ``sys.path_importer_cache`` to signify that an implicit,\nfile-based finder that handles modules stored as individual files\nshould be used for that path. If the path does not exist then a finder\nwhich always returns ``None`` is placed in the cache for the path.\n\nIf no finder can find the module then ``ImportError`` is raised.\nOtherwise some finder returned a loader whose ``load_module()`` method\nis called with the name of the module to load (see **PEP 302** for the\noriginal definition of loaders). A loader has several responsibilities\nto perform on a module it loads. First, if the module already exists\nin ``sys.modules`` (a possibility if the loader is called outside of\nthe import machinery) then it is to use that module for initialization\nand not a new module. But if the module does not exist in\n``sys.modules`` then it is to be added to that dict before\ninitialization begins. If an error occurs during loading of the module\nand it was added to ``sys.modules`` it is to be removed from the dict.\nIf an error occurs but the module was already in ``sys.modules`` it is\nleft in the dict.\n\nThe loader must set several attributes on the module. ``__name__`` is\nto be set to the name of the module. ``__file__`` is to be the "path"\nto the file unless the module is built-in (and thus listed in\n``sys.builtin_module_names``) in which case the attribute is not set.\nIf what is being imported is a package then ``__path__`` is to be set\nto a list of paths to be searched when looking for modules and\npackages contained within the package being imported. ``__package__``\nis optional but should be set to the name of package that contains the\nmodule or package (the empty string is used for module not contained\nin a package). ``__loader__`` is also optional but should be set to\nthe loader object that is loading the module.\n\nIf an error occurs during loading then the loader raises\n``ImportError`` if some other exception is not already being\npropagated. Otherwise the loader returns the module that was loaded\nand initialized.\n\nWhen step (1) finishes without raising an exception, step (2) can\nbegin.\n\nThe first form of ``import`` statement binds the module name in the\nlocal namespace to the module object, and then goes on to import the\nnext identifier, if any. If the module name is followed by ``as``,\nthe name following ``as`` is used as the local name for the module.\n\nThe ``from`` form does not bind the module name: it goes through the\nlist of identifiers, looks each one of them up in the module found in\nstep (1), and binds the name in the local namespace to the object thus\nfound. 
As with the first form of ``import``, an alternate local name\ncan be supplied by specifying "``as`` localname". If a name is not\nfound, ``ImportError`` is raised. If the list of identifiers is\nreplaced by a star (``\'*\'``), all public names defined in the module\nare bound in the local namespace of the ``import`` statement..\n\nThe *public names* defined by a module are determined by checking the\nmodule\'s namespace for a variable named ``__all__``; if defined, it\nmust be a sequence of strings which are names defined or imported by\nthat module. The names given in ``__all__`` are all considered public\nand are required to exist. If ``__all__`` is not defined, the set of\npublic names includes all names found in the module\'s namespace which\ndo not begin with an underscore character (``\'_\'``). ``__all__``\nshould contain the entire public API. It is intended to avoid\naccidentally exporting items that are not part of the API (such as\nlibrary modules which were imported and used within the module).\n\nThe ``from`` form with ``*`` may only occur in a module scope. If the\nwild card form of import --- ``import *`` --- is used in a function\nand the function contains or is a nested block with free variables,\nthe compiler will raise a ``SyntaxError``.\n\nWhen specifying what module to import you do not have to specify the\nabsolute name of the module. When a module or package is contained\nwithin another package it is possible to make a relative import within\nthe same top package without having to mention the package name. By\nusing leading dots in the specified module or package after ``from``\nyou can specify how high to traverse up the current package hierarchy\nwithout specifying exact names. One leading dot means the current\npackage where the module making the import exists. Two dots means up\none package level. Three dots is up two levels, etc. So if you execute\n``from . import mod`` from a module in the ``pkg`` package then you\nwill end up importing ``pkg.mod``. If you execute ``from ..subpkg2\nimport mod`` from within ``pkg.subpkg1`` you will import\n``pkg.subpkg2.mod``. The specification for relative imports is\ncontained within **PEP 328**.\n\n``importlib.import_module()`` is provided to support applications that\ndetermine which modules need to be loaded dynamically.\n\n\nFuture statements\n=================\n\nA *future statement* is a directive to the compiler that a particular\nmodule should be compiled using syntax or semantics that will be\navailable in a specified future release of Python. The future\nstatement is intended to ease migration to future versions of Python\nthat introduce incompatible changes to the language. It allows use of\nthe new features on a per-module basis before the release in which the\nfeature becomes standard.\n\n future_statement ::= "from" "__future__" "import" feature ["as" name]\n ("," feature ["as" name])*\n | "from" "__future__" "import" "(" feature ["as" name]\n ("," feature ["as" name])* [","] ")"\n feature ::= identifier\n name ::= identifier\n\nA future statement must appear near the top of the module. The only\nlines that can appear before a future statement are:\n\n* the module docstring (if any),\n\n* comments,\n\n* blank lines, and\n\n* other future statements.\n\nThe features recognized by Python 2.6 are ``unicode_literals``,\n``print_function``, ``absolute_import``, ``division``, ``generators``,\n``nested_scopes`` and ``with_statement``. 
``generators``,\n``with_statement``, ``nested_scopes`` are redundant in Python version\n2.6 and above because they are always enabled.\n\nA future statement is recognized and treated specially at compile\ntime: Changes to the semantics of core constructs are often\nimplemented by generating different code. It may even be the case\nthat a new feature introduces new incompatible syntax (such as a new\nreserved word), in which case the compiler may need to parse the\nmodule differently. Such decisions cannot be pushed off until\nruntime.\n\nFor any given release, the compiler knows which feature names have\nbeen defined, and raises a compile-time error if a future statement\ncontains a feature not known to it.\n\nThe direct runtime semantics are the same as for any import statement:\nthere is a standard module ``__future__``, described later, and it\nwill be imported in the usual way at the time the future statement is\nexecuted.\n\nThe interesting runtime semantics depend on the specific feature\nenabled by the future statement.\n\nNote that there is nothing special about the statement:\n\n import __future__ [as name]\n\nThat is not a future statement; it\'s an ordinary import statement with\nno special semantics or syntax restrictions.\n\nCode compiled by an ``exec`` statement or calls to the built-in\nfunctions ``compile()`` and ``execfile()`` that occur in a module\n``M`` containing a future statement will, by default, use the new\nsyntax or semantics associated with the future statement. This can,\nstarting with Python 2.2 be controlled by optional arguments to\n``compile()`` --- see the documentation of that function for details.\n\nA future statement typed at an interactive interpreter prompt will\ntake effect for the rest of the interpreter session. If an\ninterpreter is started with the *-i* option, is passed a script name\nto execute, and the script includes a future statement, it will be in\neffect in the interactive session started after the script is\nexecuted.\n\nSee also:\n\n **PEP 236** - Back to the __future__\n The original proposal for the __future__ mechanism.\n', 'in': u'\nComparisons\n***********\n\nUnlike C, all comparison operations in Python have the same priority,\nwhich is lower than that of any arithmetic, shifting or bitwise\noperation. Also unlike C, expressions like ``a < b < c`` have the\ninterpretation that is conventional in mathematics:\n\n comparison ::= or_expr ( comp_operator or_expr )*\n comp_operator ::= "<" | ">" | "==" | ">=" | "<=" | "<>" | "!="\n | "is" ["not"] | ["not"] "in"\n\nComparisons yield boolean values: ``True`` or ``False``.\n\nComparisons can be chained arbitrarily, e.g., ``x < y <= z`` is\nequivalent to ``x < y and y <= z``, except that ``y`` is evaluated\nonly once (but in both cases ``z`` is not evaluated at all when ``x <\ny`` is found to be false).\n\nFormally, if *a*, *b*, *c*, ..., *y*, *z* are expressions and *op1*,\n*op2*, ..., *opN* are comparison operators, then ``a op1 b op2 c ... y\nopN z`` is equivalent to ``a op1 b and b op2 c and ... y opN z``,\nexcept that each expression is evaluated at most once.\n\nNote that ``a op1 b op2 c`` doesn\'t imply any kind of comparison\nbetween *a* and *c*, so that, e.g., ``x < y > z`` is perfectly legal\n(though perhaps not pretty).\n\nThe forms ``<>`` and ``!=`` are equivalent; for consistency with C,\n``!=`` is preferred; where ``!=`` is mentioned below ``<>`` is also\naccepted. 
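A small sketch of the chaining behaviour described above (the values are arbitrary):

    x = 5
    assert (1 < x < 10) == ((1 < x) and (x < 10))   # chaining behaves like "and"
    assert not (10 < x < 20)                        # fails at "10 < x"; "x < 20" is never evaluated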
The ``<>`` spelling is considered obsolescent.\n\nThe operators ``<``, ``>``, ``==``, ``>=``, ``<=``, and ``!=`` compare\nthe values of two objects. The objects need not have the same type.\nIf both are numbers, they are converted to a common type. Otherwise,\nobjects of different types *always* compare unequal, and are ordered\nconsistently but arbitrarily. You can control comparison behavior of\nobjects of non-built-in types by defining a ``__cmp__`` method or rich\ncomparison methods like ``__gt__``, described in section *Special\nmethod names*.\n\n(This unusual definition of comparison was used to simplify the\ndefinition of operations like sorting and the ``in`` and ``not in``\noperators. In the future, the comparison rules for objects of\ndifferent types are likely to change.)\n\nComparison of objects of the same type depends on the type:\n\n* Numbers are compared arithmetically.\n\n* Strings are compared lexicographically using the numeric equivalents\n (the result of the built-in function ``ord()``) of their characters.\n Unicode and 8-bit strings are fully interoperable in this behavior.\n [4]\n\n* Tuples and lists are compared lexicographically using comparison of\n corresponding elements. This means that to compare equal, each\n element must compare equal and the two sequences must be of the same\n type and have the same length.\n\n If not equal, the sequences are ordered the same as their first\n differing elements. For example, ``cmp([1,2,x], [1,2,y])`` returns\n the same as ``cmp(x,y)``. If the corresponding element does not\n exist, the shorter sequence is ordered first (for example, ``[1,2] <\n [1,2,3]``).\n\n* Mappings (dictionaries) compare equal if and only if their sorted\n (key, value) lists compare equal. [5] Outcomes other than equality\n are resolved consistently, but are not otherwise defined. [6]\n\n* Most other objects of built-in types compare unequal unless they are\n the same object; the choice whether one object is considered smaller\n or larger than another one is made arbitrarily but consistently\n within one execution of a program.\n\nThe operators ``in`` and ``not in`` test for collection membership.\n``x in s`` evaluates to true if *x* is a member of the collection *s*,\nand false otherwise. ``x not in s`` returns the negation of ``x in\ns``. The collection membership test has traditionally been bound to\nsequences; an object is a member of a collection if the collection is\na sequence and contains an element equal to that object. However, it\nmakes sense for many other object types to support membership tests\nwithout being a sequence. In particular, dictionaries (for keys) and\nsets support membership testing.\n\nFor the list and tuple types, ``x in y`` is true if and only if there\nexists an index *i* such that ``x == y[i]`` is true.\n\nFor the Unicode and string types, ``x in y`` is true if and only if\n*x* is a substring of *y*. An equivalent test is ``y.find(x) != -1``.\nNote, *x* and *y* need not be the same type; consequently, ``u\'ab\' in\n\'abc\'`` will return ``True``. 
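A few membership tests of the kinds discussed here:

    assert 3 in [1, 2, 3]          # sequence membership
    assert "b" in "abc"            # substring test for strings
    assert "key" in {"key": 1}     # dictionaries test their keys
    assert 4 not in (1, 2, 3)      # "not in" negates the result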
Empty strings are always considered to\nbe a substring of any other string, so ``"" in "abc"`` will return\n``True``.\n\nChanged in version 2.3: Previously, *x* was required to be a string of\nlength ``1``.\n\nFor user-defined classes which define the ``__contains__()`` method,\n``x in y`` is true if and only if ``y.__contains__(x)`` is true.\n\nFor user-defined classes which do not define ``__contains__()`` but do\ndefine ``__iter__()``, ``x in y`` is true if some value ``z`` with ``x\n== z`` is produced while iterating over ``y``. If an exception is\nraised during the iteration, it is as if ``in`` raised that exception.\n\nLastly, the old-style iteration protocol is tried: if a class defines\n``__getitem__()``, ``x in y`` is true if and only if there is a non-\nnegative integer index *i* such that ``x == y[i]``, and all lower\ninteger indices do not raise ``IndexError`` exception. (If any other\nexception is raised, it is as if ``in`` raised that exception).\n\nThe operator ``not in`` is defined to have the inverse true value of\n``in``.\n\nThe operators ``is`` and ``is not`` test for object identity: ``x is\ny`` is true if and only if *x* and *y* are the same object. ``x is\nnot y`` yields the inverse truth value. [7]\n', 'integers': u'\nInteger and long integer literals\n*********************************\n\nInteger and long integer literals are described by the following\nlexical definitions:\n\n longinteger ::= integer ("l" | "L")\n integer ::= decimalinteger | octinteger | hexinteger | bininteger\n decimalinteger ::= nonzerodigit digit* | "0"\n octinteger ::= "0" ("o" | "O") octdigit+ | "0" octdigit+\n hexinteger ::= "0" ("x" | "X") hexdigit+\n bininteger ::= "0" ("b" | "B") bindigit+\n nonzerodigit ::= "1"..."9"\n octdigit ::= "0"..."7"\n bindigit ::= "0" | "1"\n hexdigit ::= digit | "a"..."f" | "A"..."F"\n\nAlthough both lower case ``\'l\'`` and upper case ``\'L\'`` are allowed as\nsuffix for long integers, it is strongly recommended to always use\n``\'L\'``, since the letter ``\'l\'`` looks too much like the digit\n``\'1\'``.\n\nPlain integer literals that are above the largest representable plain\ninteger (e.g., 2147483647 when using 32-bit arithmetic) are accepted\nas if they were long integers instead. [1] There is no limit for long\ninteger literals apart from what can be stored in available memory.\n\nSome examples of plain integer literals (first row) and long integer\nliterals (second and third rows):\n\n 7 2147483647 0177\n 3L 79228162514264337593543950336L 0377L 0x100000000L\n 79228162514264337593543950336 0xdeadbeef\n', 'lambda': u'\nLambdas\n*******\n\n lambda_form ::= "lambda" [parameter_list]: expression\n old_lambda_form ::= "lambda" [parameter_list]: old_expression\n\nLambda forms (lambda expressions) have the same syntactic position as\nexpressions. 
They are a shorthand to create anonymous functions; the\nexpression ``lambda arguments: expression`` yields a function object.\nThe unnamed object behaves like a function object defined with\n\n def name(arguments):\n return expression\n\nSee section *Function definitions* for the syntax of parameter lists.\nNote that functions created with lambda forms cannot contain\nstatements.\n', 'lists': u'\nList displays\n*************\n\nA list display is a possibly empty series of expressions enclosed in\nsquare brackets:\n\n list_display ::= "[" [expression_list | list_comprehension] "]"\n list_comprehension ::= expression list_for\n list_for ::= "for" target_list "in" old_expression_list [list_iter]\n old_expression_list ::= old_expression [("," old_expression)+ [","]]\n old_expression ::= or_test | old_lambda_form\n list_iter ::= list_for | list_if\n list_if ::= "if" old_expression [list_iter]\n\nA list display yields a new list object. Its contents are specified\nby providing either a list of expressions or a list comprehension.\nWhen a comma-separated list of expressions is supplied, its elements\nare evaluated from left to right and placed into the list object in\nthat order. When a list comprehension is supplied, it consists of a\nsingle expression followed by at least one ``for`` clause and zero or\nmore ``for`` or ``if`` clauses. In this case, the elements of the new\nlist are those that would be produced by considering each of the\n``for`` or ``if`` clauses a block, nesting from left to right, and\nevaluating the expression to produce a list element each time the\ninnermost block is reached [1].\n', - 'naming': u"\nNaming and binding\n******************\n\n*Names* refer to objects. Names are introduced by name binding\noperations. Each occurrence of a name in the program text refers to\nthe *binding* of that name established in the innermost function block\ncontaining the use.\n\nA *block* is a piece of Python program text that is executed as a\nunit. The following are blocks: a module, a function body, and a class\ndefinition. Each command typed interactively is a block. A script\nfile (a file given as standard input to the interpreter or specified\non the interpreter command line the first argument) is a code block.\nA script command (a command specified on the interpreter command line\nwith the '**-c**' option) is a code block. The file read by the\nbuilt-in function ``execfile()`` is a code block. The string argument\npassed to the built-in function ``eval()`` and to the ``exec``\nstatement is a code block. The expression read and evaluated by the\nbuilt-in function ``input()`` is a code block.\n\nA code block is executed in an *execution frame*. A frame contains\nsome administrative information (used for debugging) and determines\nwhere and how execution continues after the code block's execution has\ncompleted.\n\nA *scope* defines the visibility of a name within a block. If a local\nvariable is defined in a block, its scope includes that block. If the\ndefinition occurs in a function block, the scope extends to any blocks\ncontained within the defining one, unless a contained block introduces\na different binding for the name. The scope of names defined in a\nclass block is limited to the class block; it does not extend to the\ncode blocks of methods -- this includes generator expressions since\nthey are implemented using a function scope. 
This means that the\nfollowing will fail:\n\n class A:\n a = 42\n b = list(a + i for i in range(10))\n\nWhen a name is used in a code block, it is resolved using the nearest\nenclosing scope. The set of all such scopes visible to a code block\nis called the block's *environment*.\n\nIf a name is bound in a block, it is a local variable of that block.\nIf a name is bound at the module level, it is a global variable. (The\nvariables of the module code block are local and global.) If a\nvariable is used in a code block but not defined there, it is a *free\nvariable*.\n\nWhen a name is not found at all, a ``NameError`` exception is raised.\nIf the name refers to a local variable that has not been bound, a\n``UnboundLocalError`` exception is raised. ``UnboundLocalError`` is a\nsubclass of ``NameError``.\n\nThe following constructs bind names: formal parameters to functions,\n``import`` statements, class and function definitions (these bind the\nclass or function name in the defining block), and targets that are\nidentifiers if occurring in an assignment, ``for`` loop header, in the\nsecond position of an ``except`` clause header or after ``as`` in a\n``with`` statement. The ``import`` statement of the form ``from ...\nimport *`` binds all names defined in the imported module, except\nthose beginning with an underscore. This form may only be used at the\nmodule level.\n\nA target occurring in a ``del`` statement is also considered bound for\nthis purpose (though the actual semantics are to unbind the name). It\nis illegal to unbind a name that is referenced by an enclosing scope;\nthe compiler will report a ``SyntaxError``.\n\nEach assignment or import statement occurs within a block defined by a\nclass or function definition or at the module level (the top-level\ncode block).\n\nIf a name binding operation occurs anywhere within a code block, all\nuses of the name within the block are treated as references to the\ncurrent block. This can lead to errors when a name is used within a\nblock before it is bound. This rule is subtle. Python lacks\ndeclarations and allows name binding operations to occur anywhere\nwithin a code block. The local variables of a code block can be\ndetermined by scanning the entire text of the block for name binding\noperations.\n\nIf the global statement occurs within a block, all uses of the name\nspecified in the statement refer to the binding of that name in the\ntop-level namespace. Names are resolved in the top-level namespace by\nsearching the global namespace, i.e. the namespace of the module\ncontaining the code block, and the builtins namespace, the namespace\nof the module ``__builtin__``. The global namespace is searched\nfirst. If the name is not found there, the builtins namespace is\nsearched. The global statement must precede all uses of the name.\n\nThe builtins namespace associated with the execution of a code block\nis actually found by looking up the name ``__builtins__`` in its\nglobal namespace; this should be a dictionary or a module (in the\nlatter case the module's dictionary is used). By default, when in the\n``__main__`` module, ``__builtins__`` is the built-in module\n``__builtin__`` (note: no 's'); when in any other module,\n``__builtins__`` is an alias for the dictionary of the ``__builtin__``\nmodule itself. ``__builtins__`` can be set to a user-created\ndictionary to create a weak form of restricted execution.\n\n**CPython implementation detail:** Users should not touch\n``__builtins__``; it is strictly an implementation detail. 
Users\nwanting to override values in the builtins namespace should ``import``\nthe ``__builtin__`` (no 's') module and modify its attributes\nappropriately.\n\nThe namespace for a module is automatically created the first time a\nmodule is imported. The main module for a script is always called\n``__main__``.\n\nThe global statement has the same scope as a name binding operation in\nthe same block. If the nearest enclosing scope for a free variable\ncontains a global statement, the free variable is treated as a global.\n\nA class definition is an executable statement that may use and define\nnames. These references follow the normal rules for name resolution.\nThe namespace of the class definition becomes the attribute dictionary\nof the class. Names defined at the class scope are not visible in\nmethods.\n\n\nInteraction with dynamic features\n=================================\n\nThere are several cases where Python statements are illegal when used\nin conjunction with nested scopes that contain free variables.\n\nIf a variable is referenced in an enclosing scope, it is illegal to\ndelete the name. An error will be reported at compile time.\n\nIf the wild card form of import --- ``import *`` --- is used in a\nfunction and the function contains or is a nested block with free\nvariables, the compiler will raise a ``SyntaxError``.\n\nIf ``exec`` is used in a function and the function contains or is a\nnested block with free variables, the compiler will raise a\n``SyntaxError`` unless the exec explicitly specifies the local\nnamespace for the ``exec``. (In other words, ``exec obj`` would be\nillegal, but ``exec obj in ns`` would be legal.)\n\nThe ``eval()``, ``execfile()``, and ``input()`` functions and the\n``exec`` statement do not have access to the full environment for\nresolving names. Names may be resolved in the local and global\nnamespaces of the caller. Free variables are not resolved in the\nnearest enclosing namespace, but in the global namespace. [1] The\n``exec`` statement and the ``eval()`` and ``execfile()`` functions\nhave optional arguments to override the global and local namespace.\nIf only one namespace is specified, it is used for both.\n", + 'naming': u"\nNaming and binding\n******************\n\n*Names* refer to objects. Names are introduced by name binding\noperations. Each occurrence of a name in the program text refers to\nthe *binding* of that name established in the innermost function block\ncontaining the use.\n\nA *block* is a piece of Python program text that is executed as a\nunit. The following are blocks: a module, a function body, and a class\ndefinition. Each command typed interactively is a block. A script\nfile (a file given as standard input to the interpreter or specified\non the interpreter command line the first argument) is a code block.\nA script command (a command specified on the interpreter command line\nwith the '**-c**' option) is a code block. The file read by the\nbuilt-in function ``execfile()`` is a code block. The string argument\npassed to the built-in function ``eval()`` and to the ``exec``\nstatement is a code block. The expression read and evaluated by the\nbuilt-in function ``input()`` is a code block.\n\nA code block is executed in an *execution frame*. A frame contains\nsome administrative information (used for debugging) and determines\nwhere and how execution continues after the code block's execution has\ncompleted.\n\nA *scope* defines the visibility of a name within a block. 
If a local\nvariable is defined in a block, its scope includes that block. If the\ndefinition occurs in a function block, the scope extends to any blocks\ncontained within the defining one, unless a contained block introduces\na different binding for the name. The scope of names defined in a\nclass block is limited to the class block; it does not extend to the\ncode blocks of methods -- this includes generator expressions since\nthey are implemented using a function scope. This means that the\nfollowing will fail:\n\n class A:\n a = 42\n b = list(a + i for i in range(10))\n\nWhen a name is used in a code block, it is resolved using the nearest\nenclosing scope. The set of all such scopes visible to a code block\nis called the block's *environment*.\n\nIf a name is bound in a block, it is a local variable of that block.\nIf a name is bound at the module level, it is a global variable. (The\nvariables of the module code block are local and global.) If a\nvariable is used in a code block but not defined there, it is a *free\nvariable*.\n\nWhen a name is not found at all, a ``NameError`` exception is raised.\nIf the name refers to a local variable that has not been bound, a\n``UnboundLocalError`` exception is raised. ``UnboundLocalError`` is a\nsubclass of ``NameError``.\n\nThe following constructs bind names: formal parameters to functions,\n``import`` statements, class and function definitions (these bind the\nclass or function name in the defining block), and targets that are\nidentifiers if occurring in an assignment, ``for`` loop header, in the\nsecond position of an ``except`` clause header or after ``as`` in a\n``with`` statement. The ``import`` statement of the form ``from ...\nimport *`` binds all names defined in the imported module, except\nthose beginning with an underscore. This form may only be used at the\nmodule level.\n\nA target occurring in a ``del`` statement is also considered bound for\nthis purpose (though the actual semantics are to unbind the name). It\nis illegal to unbind a name that is referenced by an enclosing scope;\nthe compiler will report a ``SyntaxError``.\n\nEach assignment or import statement occurs within a block defined by a\nclass or function definition or at the module level (the top-level\ncode block).\n\nIf a name binding operation occurs anywhere within a code block, all\nuses of the name within the block are treated as references to the\ncurrent block. This can lead to errors when a name is used within a\nblock before it is bound. This rule is subtle. Python lacks\ndeclarations and allows name binding operations to occur anywhere\nwithin a code block. The local variables of a code block can be\ndetermined by scanning the entire text of the block for name binding\noperations.\n\nIf the global statement occurs within a block, all uses of the name\nspecified in the statement refer to the binding of that name in the\ntop-level namespace. Names are resolved in the top-level namespace by\nsearching the global namespace, i.e. the namespace of the module\ncontaining the code block, and the builtins namespace, the namespace\nof the module ``__builtin__``. The global namespace is searched\nfirst. If the name is not found there, the builtins namespace is\nsearched. 
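A short sketch of these binding rules (the function and variable names are arbitrary):

    x = "global"

    def fails():
        print x        # UnboundLocalError when called: the assignment below
        x = "local"    # makes x local throughout the whole function block

    def rebinds():
        global x
        x = "rebound"  # rebinds the module-level x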
The global statement must precede all uses of the name.\n\nThe builtins namespace associated with the execution of a code block\nis actually found by looking up the name ``__builtins__`` in its\nglobal namespace; this should be a dictionary or a module (in the\nlatter case the module's dictionary is used). By default, when in the\n``__main__`` module, ``__builtins__`` is the built-in module\n``__builtin__`` (note: no 's'); when in any other module,\n``__builtins__`` is an alias for the dictionary of the ``__builtin__``\nmodule itself. ``__builtins__`` can be set to a user-created\ndictionary to create a weak form of restricted execution.\n\n**CPython implementation detail:** Users should not touch\n``__builtins__``; it is strictly an implementation detail. Users\nwanting to override values in the builtins namespace should ``import``\nthe ``__builtin__`` (no 's') module and modify its attributes\nappropriately.\n\nThe namespace for a module is automatically created the first time a\nmodule is imported. The main module for a script is always called\n``__main__``.\n\nThe ``global`` statement has the same scope as a name binding\noperation in the same block. If the nearest enclosing scope for a\nfree variable contains a global statement, the free variable is\ntreated as a global.\n\nA class definition is an executable statement that may use and define\nnames. These references follow the normal rules for name resolution.\nThe namespace of the class definition becomes the attribute dictionary\nof the class. Names defined at the class scope are not visible in\nmethods.\n\n\nInteraction with dynamic features\n=================================\n\nThere are several cases where Python statements are illegal when used\nin conjunction with nested scopes that contain free variables.\n\nIf a variable is referenced in an enclosing scope, it is illegal to\ndelete the name. An error will be reported at compile time.\n\nIf the wild card form of import --- ``import *`` --- is used in a\nfunction and the function contains or is a nested block with free\nvariables, the compiler will raise a ``SyntaxError``.\n\nIf ``exec`` is used in a function and the function contains or is a\nnested block with free variables, the compiler will raise a\n``SyntaxError`` unless the exec explicitly specifies the local\nnamespace for the ``exec``. (In other words, ``exec obj`` would be\nillegal, but ``exec obj in ns`` would be legal.)\n\nThe ``eval()``, ``execfile()``, and ``input()`` functions and the\n``exec`` statement do not have access to the full environment for\nresolving names. Names may be resolved in the local and global\nnamespaces of the caller. Free variables are not resolved in the\nnearest enclosing namespace, but in the global namespace. [1] The\n``exec`` statement and the ``eval()`` and ``execfile()`` functions\nhave optional arguments to override the global and local namespace.\nIf only one namespace is specified, it is used for both.\n", 'numbers': u"\nNumeric literals\n****************\n\nThere are four types of numeric literals: plain integers, long\nintegers, floating point numbers, and imaginary numbers. 
There are no\ncomplex literals (complex numbers can be formed by adding a real\nnumber and an imaginary number).\n\nNote that numeric literals do not include a sign; a phrase like ``-1``\nis actually an expression composed of the unary operator '``-``' and\nthe literal ``1``.\n", 'numeric-types': u'\nEmulating numeric types\n***********************\n\nThe following methods can be defined to emulate numeric objects.\nMethods corresponding to operations that are not supported by the\nparticular kind of number implemented (e.g., bitwise operations for\nnon-integral numbers) should be left undefined.\n\nobject.__add__(self, other)\nobject.__sub__(self, other)\nobject.__mul__(self, other)\nobject.__floordiv__(self, other)\nobject.__mod__(self, other)\nobject.__divmod__(self, other)\nobject.__pow__(self, other[, modulo])\nobject.__lshift__(self, other)\nobject.__rshift__(self, other)\nobject.__and__(self, other)\nobject.__xor__(self, other)\nobject.__or__(self, other)\n\n These methods are called to implement the binary arithmetic\n operations (``+``, ``-``, ``*``, ``//``, ``%``, ``divmod()``,\n ``pow()``, ``**``, ``<<``, ``>>``, ``&``, ``^``, ``|``). For\n instance, to evaluate the expression ``x + y``, where *x* is an\n instance of a class that has an ``__add__()`` method,\n ``x.__add__(y)`` is called. The ``__divmod__()`` method should be\n the equivalent to using ``__floordiv__()`` and ``__mod__()``; it\n should not be related to ``__truediv__()`` (described below). Note\n that ``__pow__()`` should be defined to accept an optional third\n argument if the ternary version of the built-in ``pow()`` function\n is to be supported.\n\n If one of those methods does not support the operation with the\n supplied arguments, it should return ``NotImplemented``.\n\nobject.__div__(self, other)\nobject.__truediv__(self, other)\n\n The division operator (``/``) is implemented by these methods. The\n ``__truediv__()`` method is used when ``__future__.division`` is in\n effect, otherwise ``__div__()`` is used. If only one of these two\n methods is defined, the object will not support division in the\n alternate context; ``TypeError`` will be raised instead.\n\nobject.__radd__(self, other)\nobject.__rsub__(self, other)\nobject.__rmul__(self, other)\nobject.__rdiv__(self, other)\nobject.__rtruediv__(self, other)\nobject.__rfloordiv__(self, other)\nobject.__rmod__(self, other)\nobject.__rdivmod__(self, other)\nobject.__rpow__(self, other)\nobject.__rlshift__(self, other)\nobject.__rrshift__(self, other)\nobject.__rand__(self, other)\nobject.__rxor__(self, other)\nobject.__ror__(self, other)\n\n These methods are called to implement the binary arithmetic\n operations (``+``, ``-``, ``*``, ``/``, ``%``, ``divmod()``,\n ``pow()``, ``**``, ``<<``, ``>>``, ``&``, ``^``, ``|``) with\n reflected (swapped) operands. These functions are only called if\n the left operand does not support the corresponding operation and\n the operands are of different types. [2] For instance, to evaluate\n the expression ``x - y``, where *y* is an instance of a class that\n has an ``__rsub__()`` method, ``y.__rsub__(x)`` is called if\n ``x.__sub__(y)`` returns *NotImplemented*.\n\n Note that ternary ``pow()`` will not try calling ``__rpow__()``\n (the coercion rules would become too complicated).\n\n Note: If the right operand\'s type is a subclass of the left operand\'s\n type and that subclass provides the reflected method for the\n operation, this method will be called before the left operand\'s\n non-reflected method. 
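A compact sketch of a reflected method being picked up (the ``Seconds`` class is invented for illustration):

    class Seconds(object):
        def __init__(self, value):
            self.value = value
        def __add__(self, other):
            return NotImplemented              # decline; let the other operand try
        def __radd__(self, other):
            return Seconds(other + self.value)

    total = 10 + Seconds(5)    # int.__add__ returns NotImplemented, so Seconds.__radd__ runs
    assert total.value == 15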
This behavior allows subclasses to\n override their ancestors\' operations.\n\nobject.__iadd__(self, other)\nobject.__isub__(self, other)\nobject.__imul__(self, other)\nobject.__idiv__(self, other)\nobject.__itruediv__(self, other)\nobject.__ifloordiv__(self, other)\nobject.__imod__(self, other)\nobject.__ipow__(self, other[, modulo])\nobject.__ilshift__(self, other)\nobject.__irshift__(self, other)\nobject.__iand__(self, other)\nobject.__ixor__(self, other)\nobject.__ior__(self, other)\n\n These methods are called to implement the augmented arithmetic\n assignments (``+=``, ``-=``, ``*=``, ``/=``, ``//=``, ``%=``,\n ``**=``, ``<<=``, ``>>=``, ``&=``, ``^=``, ``|=``). These methods\n should attempt to do the operation in-place (modifying *self*) and\n return the result (which could be, but does not have to be,\n *self*). If a specific method is not defined, the augmented\n assignment falls back to the normal methods. For instance, to\n execute the statement ``x += y``, where *x* is an instance of a\n class that has an ``__iadd__()`` method, ``x.__iadd__(y)`` is\n called. If *x* is an instance of a class that does not define a\n ``__iadd__()`` method, ``x.__add__(y)`` and ``y.__radd__(x)`` are\n considered, as with the evaluation of ``x + y``.\n\nobject.__neg__(self)\nobject.__pos__(self)\nobject.__abs__(self)\nobject.__invert__(self)\n\n Called to implement the unary arithmetic operations (``-``, ``+``,\n ``abs()`` and ``~``).\n\nobject.__complex__(self)\nobject.__int__(self)\nobject.__long__(self)\nobject.__float__(self)\n\n Called to implement the built-in functions ``complex()``,\n ``int()``, ``long()``, and ``float()``. Should return a value of\n the appropriate type.\n\nobject.__oct__(self)\nobject.__hex__(self)\n\n Called to implement the built-in functions ``oct()`` and ``hex()``.\n Should return a string value.\n\nobject.__index__(self)\n\n Called to implement ``operator.index()``. Also called whenever\n Python needs an integer object (such as in slicing). Must return\n an integer (int or long).\n\n New in version 2.5.\n\nobject.__coerce__(self, other)\n\n Called to implement "mixed-mode" numeric arithmetic. Should either\n return a 2-tuple containing *self* and *other* converted to a\n common numeric type, or ``None`` if conversion is impossible. When\n the common type would be the type of ``other``, it is sufficient to\n return ``None``, since the interpreter will also ask the other\n object to attempt a coercion (but sometimes, if the implementation\n of the other type cannot be changed, it is useful to do the\n conversion to the other type here). A return value of\n ``NotImplemented`` is equivalent to returning ``None``.\n', - 'objects': u'\nObjects, values and types\n*************************\n\n*Objects* are Python\'s abstraction for data. All data in a Python\nprogram is represented by objects or by relations between objects. (In\na sense, and in conformance to Von Neumann\'s model of a "stored\nprogram computer," code is also represented by objects.)\n\nEvery object has an identity, a type and a value. An object\'s\n*identity* never changes once it has been created; you may think of it\nas the object\'s address in memory. The \'``is``\' operator compares the\nidentity of two objects; the ``id()`` function returns an integer\nrepresenting its identity (currently implemented as its address). An\nobject\'s *type* is also unchangeable. 
[1] An object\'s type determines\nthe operations that the object supports (e.g., "does it have a\nlength?") and also defines the possible values for objects of that\ntype. The ``type()`` function returns an object\'s type (which is an\nobject itself). The *value* of some objects can change. Objects\nwhose value can change are said to be *mutable*; objects whose value\nis unchangeable once they are created are called *immutable*. (The\nvalue of an immutable container object that contains a reference to a\nmutable object can change when the latter\'s value is changed; however\nthe container is still considered immutable, because the collection of\nobjects it contains cannot be changed. So, immutability is not\nstrictly the same as having an unchangeable value, it is more subtle.)\nAn object\'s mutability is determined by its type; for instance,\nnumbers, strings and tuples are immutable, while dictionaries and\nlists are mutable.\n\nObjects are never explicitly destroyed; however, when they become\nunreachable they may be garbage-collected. An implementation is\nallowed to postpone garbage collection or omit it altogether --- it is\na matter of implementation quality how garbage collection is\nimplemented, as long as no objects are collected that are still\nreachable.\n\n**CPython implementation detail:** CPython currently uses a reference-\ncounting scheme with (optional) delayed detection of cyclically linked\ngarbage, which collects most objects as soon as they become\nunreachable, but is not guaranteed to collect garbage containing\ncircular references. See the documentation of the ``gc`` module for\ninformation on controlling the collection of cyclic garbage. Other\nimplementations act differently and CPython may change.\n\nNote that the use of the implementation\'s tracing or debugging\nfacilities may keep objects alive that would normally be collectable.\nAlso note that catching an exception with a \'``try``...``except``\'\nstatement may keep objects alive.\n\nSome objects contain references to "external" resources such as open\nfiles or windows. It is understood that these resources are freed\nwhen the object is garbage-collected, but since garbage collection is\nnot guaranteed to happen, such objects also provide an explicit way to\nrelease the external resource, usually a ``close()`` method. Programs\nare strongly recommended to explicitly close such objects. The\n\'``try``...``finally``\' statement provides a convenient way to do\nthis.\n\nSome objects contain references to other objects; these are called\n*containers*. Examples of containers are tuples, lists and\ndictionaries. The references are part of a container\'s value. In\nmost cases, when we talk about the value of a container, we imply the\nvalues, not the identities of the contained objects; however, when we\ntalk about the mutability of a container, only the identities of the\nimmediately contained objects are implied. So, if an immutable\ncontainer (like a tuple) contains a reference to a mutable object, its\nvalue changes if that mutable object is changed.\n\nTypes affect almost all aspects of object behavior. Even the\nimportance of object identity is affected in some sense: for immutable\ntypes, operations that compute new values may actually return a\nreference to any existing object with the same type and value, while\nfor mutable objects this is not allowed. 
E.g., after ``a = 1; b =\n1``, ``a`` and ``b`` may or may not refer to the same object with the\nvalue one, depending on the implementation, but after ``c = []; d =\n[]``, ``c`` and ``d`` are guaranteed to refer to two different,\nunique, newly created empty lists. (Note that ``c = d = []`` assigns\nthe same object to both ``c`` and ``d``.)\n', - 'operator-summary': u'\nSummary\n*******\n\nThe following table summarizes the operator precedences in Python,\nfrom lowest precedence (least binding) to highest precedence (most\nbinding). Operators in the same box have the same precedence. Unless\nthe syntax is explicitly given, operators are binary. Operators in\nthe same box group left to right (except for comparisons, including\ntests, which all have the same precedence and chain from left to right\n--- see section *Comparisons* --- and exponentiation, which groups\nfrom right to left).\n\n+-------------------------------------------------+---------------------------------------+\n| Operator | Description |\n+=================================================+=======================================+\n| ``lambda`` | Lambda expression |\n+-------------------------------------------------+---------------------------------------+\n| ``if`` -- ``else`` | Conditional expression |\n+-------------------------------------------------+---------------------------------------+\n| ``or`` | Boolean OR |\n+-------------------------------------------------+---------------------------------------+\n| ``and`` | Boolean AND |\n+-------------------------------------------------+---------------------------------------+\n| ``not`` *x* | Boolean NOT |\n+-------------------------------------------------+---------------------------------------+\n| ``in``, ``not`` ``in``, ``is``, ``is not``, | Comparisons, including membership |\n| ``<``, ``<=``, ``>``, ``>=``, ``<>``, ``!=``, | tests and identity tests, |\n| ``==`` | |\n+-------------------------------------------------+---------------------------------------+\n| ``|`` | Bitwise OR |\n+-------------------------------------------------+---------------------------------------+\n| ``^`` | Bitwise XOR |\n+-------------------------------------------------+---------------------------------------+\n| ``&`` | Bitwise AND |\n+-------------------------------------------------+---------------------------------------+\n| ``<<``, ``>>`` | Shifts |\n+-------------------------------------------------+---------------------------------------+\n| ``+``, ``-`` | Addition and subtraction |\n+-------------------------------------------------+---------------------------------------+\n| ``*``, ``/``, ``//``, ``%`` | Multiplication, division, remainder |\n+-------------------------------------------------+---------------------------------------+\n| ``+x``, ``-x``, ``~x`` | Positive, negative, bitwise NOT |\n+-------------------------------------------------+---------------------------------------+\n| ``**`` | Exponentiation [8] |\n+-------------------------------------------------+---------------------------------------+\n| ``x[index]``, ``x[index:index]``, | Subscription, slicing, call, |\n| ``x(arguments...)``, ``x.attribute`` | attribute reference |\n+-------------------------------------------------+---------------------------------------+\n| ``(expressions...)``, ``[expressions...]``, | Binding or tuple display, list |\n| ``{key:datum...}``, ```expressions...``` | display, dictionary display, string |\n| | conversion 
|\n+-------------------------------------------------+---------------------------------------+\n\n-[ Footnotes ]-\n\n[1] In Python 2.3 and later releases, a list comprehension "leaks" the\n control variables of each ``for`` it contains into the containing\n scope. However, this behavior is deprecated, and relying on it\n will not work in Python 3.0\n\n[2] While ``abs(x%y) < abs(y)`` is true mathematically, for floats it\n may not be true numerically due to roundoff. For example, and\n assuming a platform on which a Python float is an IEEE 754 double-\n precision number, in order that ``-1e-100 % 1e100`` have the same\n sign as ``1e100``, the computed result is ``-1e-100 + 1e100``,\n which is numerically exactly equal to ``1e100``. Function\n ``fmod()`` in the ``math`` module returns a result whose sign\n matches the sign of the first argument instead, and so returns\n ``-1e-100`` in this case. Which approach is more appropriate\n depends on the application.\n\n[3] If x is very close to an exact integer multiple of y, it\'s\n possible for ``floor(x/y)`` to be one larger than ``(x-x%y)/y``\n due to rounding. In such cases, Python returns the latter result,\n in order to preserve that ``divmod(x,y)[0] * y + x % y`` be very\n close to ``x``.\n\n[4] While comparisons between unicode strings make sense at the byte\n level, they may be counter-intuitive to users. For example, the\n strings ``u"\\u00C7"`` and ``u"\\u0043\\u0327"`` compare differently,\n even though they both represent the same unicode character (LATIN\n CAPITAL LETTER C WITH CEDILLA). To compare strings in a human\n recognizable way, compare using ``unicodedata.normalize()``.\n\n[5] The implementation computes this efficiently, without constructing\n lists or sorting.\n\n[6] Earlier versions of Python used lexicographic comparison of the\n sorted (key, value) lists, but this was very expensive for the\n common case of comparing for equality. An even earlier version of\n Python compared dictionaries by identity only, but this caused\n surprises because people expected to be able to test a dictionary\n for emptiness by comparing it to ``{}``.\n\n[7] Due to automatic garbage-collection, free lists, and the dynamic\n nature of descriptors, you may notice seemingly unusual behaviour\n in certain uses of the ``is`` operator, like those involving\n comparisons between instance methods, or constants. Check their\n documentation for more info.\n\n[8] The power operator ``**`` binds less tightly than an arithmetic or\n bitwise unary operator on its right, that is, ``2**-1`` is\n ``0.5``.\n', + 'objects': u'\nObjects, values and types\n*************************\n\n*Objects* are Python\'s abstraction for data. All data in a Python\nprogram is represented by objects or by relations between objects. (In\na sense, and in conformance to Von Neumann\'s model of a "stored\nprogram computer," code is also represented by objects.)\n\nEvery object has an identity, a type and a value. An object\'s\n*identity* never changes once it has been created; you may think of it\nas the object\'s address in memory. The \'``is``\' operator compares the\nidentity of two objects; the ``id()`` function returns an integer\nrepresenting its identity (currently implemented as its address). An\nobject\'s *type* is also unchangeable. [1] An object\'s type determines\nthe operations that the object supports (e.g., "does it have a\nlength?") and also defines the possible values for objects of that\ntype. 
The ``type()`` function returns an object\'s type (which is an\nobject itself). The *value* of some objects can change. Objects\nwhose value can change are said to be *mutable*; objects whose value\nis unchangeable once they are created are called *immutable*. (The\nvalue of an immutable container object that contains a reference to a\nmutable object can change when the latter\'s value is changed; however\nthe container is still considered immutable, because the collection of\nobjects it contains cannot be changed. So, immutability is not\nstrictly the same as having an unchangeable value, it is more subtle.)\nAn object\'s mutability is determined by its type; for instance,\nnumbers, strings and tuples are immutable, while dictionaries and\nlists are mutable.\n\nObjects are never explicitly destroyed; however, when they become\nunreachable they may be garbage-collected. An implementation is\nallowed to postpone garbage collection or omit it altogether --- it is\na matter of implementation quality how garbage collection is\nimplemented, as long as no objects are collected that are still\nreachable.\n\n**CPython implementation detail:** CPython currently uses a reference-\ncounting scheme with (optional) delayed detection of cyclically linked\ngarbage, which collects most objects as soon as they become\nunreachable, but is not guaranteed to collect garbage containing\ncircular references. See the documentation of the ``gc`` module for\ninformation on controlling the collection of cyclic garbage. Other\nimplementations act differently and CPython may change. Do not depend\non immediate finalization of objects when they become unreachable (ex:\nalways close files).\n\nNote that the use of the implementation\'s tracing or debugging\nfacilities may keep objects alive that would normally be collectable.\nAlso note that catching an exception with a \'``try``...``except``\'\nstatement may keep objects alive.\n\nSome objects contain references to "external" resources such as open\nfiles or windows. It is understood that these resources are freed\nwhen the object is garbage-collected, but since garbage collection is\nnot guaranteed to happen, such objects also provide an explicit way to\nrelease the external resource, usually a ``close()`` method. Programs\nare strongly recommended to explicitly close such objects. The\n\'``try``...``finally``\' statement provides a convenient way to do\nthis.\n\nSome objects contain references to other objects; these are called\n*containers*. Examples of containers are tuples, lists and\ndictionaries. The references are part of a container\'s value. In\nmost cases, when we talk about the value of a container, we imply the\nvalues, not the identities of the contained objects; however, when we\ntalk about the mutability of a container, only the identities of the\nimmediately contained objects are implied. So, if an immutable\ncontainer (like a tuple) contains a reference to a mutable object, its\nvalue changes if that mutable object is changed.\n\nTypes affect almost all aspects of object behavior. Even the\nimportance of object identity is affected in some sense: for immutable\ntypes, operations that compute new values may actually return a\nreference to any existing object with the same type and value, while\nfor mutable objects this is not allowed. 
E.g., after ``a = 1; b =\n1``, ``a`` and ``b`` may or may not refer to the same object with the\nvalue one, depending on the implementation, but after ``c = []; d =\n[]``, ``c`` and ``d`` are guaranteed to refer to two different,\nunique, newly created empty lists. (Note that ``c = d = []`` assigns\nthe same object to both ``c`` and ``d``.)\n', + 'operator-summary': u'\nSummary\n*******\n\nThe following table summarizes the operator precedences in Python,\nfrom lowest precedence (least binding) to highest precedence (most\nbinding). Operators in the same box have the same precedence. Unless\nthe syntax is explicitly given, operators are binary. Operators in\nthe same box group left to right (except for comparisons, including\ntests, which all have the same precedence and chain from left to right\n--- see section *Comparisons* --- and exponentiation, which groups\nfrom right to left).\n\n+-------------------------------------------------+---------------------------------------+\n| Operator | Description |\n+=================================================+=======================================+\n| ``lambda`` | Lambda expression |\n+-------------------------------------------------+---------------------------------------+\n| ``if`` -- ``else`` | Conditional expression |\n+-------------------------------------------------+---------------------------------------+\n| ``or`` | Boolean OR |\n+-------------------------------------------------+---------------------------------------+\n| ``and`` | Boolean AND |\n+-------------------------------------------------+---------------------------------------+\n| ``not`` *x* | Boolean NOT |\n+-------------------------------------------------+---------------------------------------+\n| ``in``, ``not`` ``in``, ``is``, ``is not``, | Comparisons, including membership |\n| ``<``, ``<=``, ``>``, ``>=``, ``<>``, ``!=``, | tests and identity tests, |\n| ``==`` | |\n+-------------------------------------------------+---------------------------------------+\n| ``|`` | Bitwise OR |\n+-------------------------------------------------+---------------------------------------+\n| ``^`` | Bitwise XOR |\n+-------------------------------------------------+---------------------------------------+\n| ``&`` | Bitwise AND |\n+-------------------------------------------------+---------------------------------------+\n| ``<<``, ``>>`` | Shifts |\n+-------------------------------------------------+---------------------------------------+\n| ``+``, ``-`` | Addition and subtraction |\n+-------------------------------------------------+---------------------------------------+\n| ``*``, ``/``, ``//``, ``%`` | Multiplication, division, remainder |\n| | [8] |\n+-------------------------------------------------+---------------------------------------+\n| ``+x``, ``-x``, ``~x`` | Positive, negative, bitwise NOT |\n+-------------------------------------------------+---------------------------------------+\n| ``**`` | Exponentiation [9] |\n+-------------------------------------------------+---------------------------------------+\n| ``x[index]``, ``x[index:index]``, | Subscription, slicing, call, |\n| ``x(arguments...)``, ``x.attribute`` | attribute reference |\n+-------------------------------------------------+---------------------------------------+\n| ``(expressions...)``, ``[expressions...]``, | Binding or tuple display, list |\n| ``{key:datum...}``, ```expressions...``` | display, dictionary display, string |\n| | conversion 
|\n+-------------------------------------------------+---------------------------------------+\n\n-[ Footnotes ]-\n\n[1] In Python 2.3 and later releases, a list comprehension "leaks" the\n control variables of each ``for`` it contains into the containing\n scope. However, this behavior is deprecated, and relying on it\n will not work in Python 3.0\n\n[2] While ``abs(x%y) < abs(y)`` is true mathematically, for floats it\n may not be true numerically due to roundoff. For example, and\n assuming a platform on which a Python float is an IEEE 754 double-\n precision number, in order that ``-1e-100 % 1e100`` have the same\n sign as ``1e100``, the computed result is ``-1e-100 + 1e100``,\n which is numerically exactly equal to ``1e100``. The function\n ``math.fmod()`` returns a result whose sign matches the sign of\n the first argument instead, and so returns ``-1e-100`` in this\n case. Which approach is more appropriate depends on the\n application.\n\n[3] If x is very close to an exact integer multiple of y, it\'s\n possible for ``floor(x/y)`` to be one larger than ``(x-x%y)/y``\n due to rounding. In such cases, Python returns the latter result,\n in order to preserve that ``divmod(x,y)[0] * y + x % y`` be very\n close to ``x``.\n\n[4] While comparisons between unicode strings make sense at the byte\n level, they may be counter-intuitive to users. For example, the\n strings ``u"\\u00C7"`` and ``u"\\u0043\\u0327"`` compare differently,\n even though they both represent the same unicode character (LATIN\n CAPITAL LETTER C WITH CEDILLA). To compare strings in a human\n recognizable way, compare using ``unicodedata.normalize()``.\n\n[5] The implementation computes this efficiently, without constructing\n lists or sorting.\n\n[6] Earlier versions of Python used lexicographic comparison of the\n sorted (key, value) lists, but this was very expensive for the\n common case of comparing for equality. An even earlier version of\n Python compared dictionaries by identity only, but this caused\n surprises because people expected to be able to test a dictionary\n for emptiness by comparing it to ``{}``.\n\n[7] Due to automatic garbage-collection, free lists, and the dynamic\n nature of descriptors, you may notice seemingly unusual behaviour\n in certain uses of the ``is`` operator, like those involving\n comparisons between instance methods, or constants. Check their\n documentation for more info.\n\n[8] The ``%`` operator is also used for string formatting; the same\n precedence applies.\n\n[9] The power operator ``**`` binds less tightly than an arithmetic or\n bitwise unary operator on its right, that is, ``2**-1`` is\n ``0.5``.\n', 'pass': u'\nThe ``pass`` statement\n**********************\n\n pass_stmt ::= "pass"\n\n``pass`` is a null operation --- when it is executed, nothing happens.\nIt is useful as a placeholder when a statement is required\nsyntactically, but no code needs to be executed, for example:\n\n def f(arg): pass # a function that does nothing (yet)\n\n class C: pass # a class with no methods (yet)\n', 'power': u'\nThe power operator\n******************\n\nThe power operator binds more tightly than unary operators on its\nleft; it binds less tightly than unary operators on its right. 
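# Quick check (illustrative only) of the binding rules just stated for the
# power operator in CPython 2.x.
print 2 ** 3 ** 2     # 512: ** groups right-to-left, i.e. 2 ** (3 ** 2)
print -2 ** 2         # -4: ** binds tighter than a unary minus on its left
print 2 ** -1         # 0.5: but looser than a unary minus on its right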
The\nsyntax is:\n\n power ::= primary ["**" u_expr]\n\nThus, in an unparenthesized sequence of power and unary operators, the\noperators are evaluated from right to left (this does not constrain\nthe evaluation order for the operands): ``-1**2`` results in ``-1``.\n\nThe power operator has the same semantics as the built-in ``pow()``\nfunction, when called with two arguments: it yields its left argument\nraised to the power of its right argument. The numeric arguments are\nfirst converted to a common type. The result type is that of the\narguments after coercion.\n\nWith mixed operand types, the coercion rules for binary arithmetic\noperators apply. For int and long int operands, the result has the\nsame type as the operands (after coercion) unless the second argument\nis negative; in that case, all arguments are converted to float and a\nfloat result is delivered. For example, ``10**2`` returns ``100``, but\n``10**-2`` returns ``0.01``. (This last feature was added in Python\n2.2. In Python 2.1 and before, if both arguments were of integer types\nand the second argument was negative, an exception was raised).\n\nRaising ``0.0`` to a negative power results in a\n``ZeroDivisionError``. Raising a negative number to a fractional power\nresults in a ``ValueError``.\n', 'print': u'\nThe ``print`` statement\n***********************\n\n print_stmt ::= "print" ([expression ("," expression)* [","]]\n | ">>" expression [("," expression)+ [","]])\n\n``print`` evaluates each expression in turn and writes the resulting\nobject to standard output (see below). If an object is not a string,\nit is first converted to a string using the rules for string\nconversions. The (resulting or original) string is then written. A\nspace is written before each object is (converted and) written, unless\nthe output system believes it is positioned at the beginning of a\nline. This is the case (1) when no characters have yet been written\nto standard output, (2) when the last character written to standard\noutput is a whitespace character except ``\' \'``, or (3) when the last\nwrite operation on standard output was not a ``print`` statement. (In\nsome cases it may be functional to write an empty string to standard\noutput for this reason.)\n\nNote: Objects which act like file objects but which are not the built-in\n file objects often do not properly emulate this aspect of the file\n object\'s behavior, so it is best not to rely on this.\n\nA ``\'\\n\'`` character is written at the end, unless the ``print``\nstatement ends with a comma. This is the only action if the statement\ncontains just the keyword ``print``.\n\nStandard output is defined as the file object named ``stdout`` in the\nbuilt-in module ``sys``. If no such object exists, or if it does not\nhave a ``write()`` method, a ``RuntimeError`` exception is raised.\n\n``print`` also has an extended form, defined by the second portion of\nthe syntax described above. This form is sometimes referred to as\n"``print`` chevron." In this form, the first expression after the\n``>>`` must evaluate to a "file-like" object, specifically an object\nthat has a ``write()`` method as described above. With this extended\nform, the subsequent expressions are printed to this file object. 
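# Hedged sketch of the extended "print chevron" form described above; StringIO
# stands in for any object with a write() method.
import sys
from StringIO import StringIO
buf = StringIO()
print >> buf, 'redirected line'         # written to buf, not to sys.stdout
print >> sys.stderr, 'goes to standard error'
print buf.getvalue(),                   # 'redirected line' plus a newline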
If\nthe first expression evaluates to ``None``, then ``sys.stdout`` is\nused as the file for output.\n', @@ -63,21 +63,21 @@ 'shifting': u'\nShifting operations\n*******************\n\nThe shifting operations have lower priority than the arithmetic\noperations:\n\n shift_expr ::= a_expr | shift_expr ( "<<" | ">>" ) a_expr\n\nThese operators accept plain or long integers as arguments. The\narguments are converted to a common type. They shift the first\nargument to the left or right by the number of bits given by the\nsecond argument.\n\nA right shift by *n* bits is defined as division by ``pow(2, n)``. A\nleft shift by *n* bits is defined as multiplication with ``pow(2,\nn)``. Negative shift counts raise a ``ValueError`` exception.\n\nNote: In the current implementation, the right-hand operand is required to\n be at most ``sys.maxsize``. If the right-hand operand is larger\n than ``sys.maxsize`` an ``OverflowError`` exception is raised.\n', 'slicings': u'\nSlicings\n********\n\nA slicing selects a range of items in a sequence object (e.g., a\nstring, tuple or list). Slicings may be used as expressions or as\ntargets in assignment or ``del`` statements. The syntax for a\nslicing:\n\n slicing ::= simple_slicing | extended_slicing\n simple_slicing ::= primary "[" short_slice "]"\n extended_slicing ::= primary "[" slice_list "]"\n slice_list ::= slice_item ("," slice_item)* [","]\n slice_item ::= expression | proper_slice | ellipsis\n proper_slice ::= short_slice | long_slice\n short_slice ::= [lower_bound] ":" [upper_bound]\n long_slice ::= short_slice ":" [stride]\n lower_bound ::= expression\n upper_bound ::= expression\n stride ::= expression\n ellipsis ::= "..."\n\nThere is ambiguity in the formal syntax here: anything that looks like\nan expression list also looks like a slice list, so any subscription\ncan be interpreted as a slicing. Rather than further complicating the\nsyntax, this is disambiguated by defining that in this case the\ninterpretation as a subscription takes priority over the\ninterpretation as a slicing (this is the case if the slice list\ncontains no proper slice nor ellipses). Similarly, when the slice\nlist has exactly one short slice and no trailing comma, the\ninterpretation as a simple slicing takes priority over that as an\nextended slicing.\n\nThe semantics for a simple slicing are as follows. The primary must\nevaluate to a sequence object. The lower and upper bound expressions,\nif present, must evaluate to plain integers; defaults are zero and the\n``sys.maxint``, respectively. If either bound is negative, the\nsequence\'s length is added to it. The slicing now selects all items\nwith index *k* such that ``i <= k < j`` where *i* and *j* are the\nspecified lower and upper bounds. This may be an empty sequence. It\nis not an error if *i* or *j* lie outside the range of valid indexes\n(such items don\'t exist so they aren\'t selected).\n\nThe semantics for an extended slicing are as follows. The primary\nmust evaluate to a mapping object, and it is indexed with a key that\nis constructed from the slice list, as follows. If the slice list\ncontains at least one comma, the key is a tuple containing the\nconversion of the slice items; otherwise, the conversion of the lone\nslice item is the key. The conversion of a slice item that is an\nexpression is that expression. The conversion of an ellipsis slice\nitem is the built-in ``Ellipsis`` object. 
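# Small illustration of the simple-slicing semantics above: negative bounds
# have the sequence length added, and out-of-range bounds are not an error.
s = 'abcdef'
print s[2:4]       # 'cd'
print s[-4:-1]     # 'cde', i.e. s[2:5]
print s[4:100]     # 'ef': items that don't exist simply aren't selected
print s[3:1]       # '': an empty sequence when the lower bound >= upper bound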
The conversion of a proper\nslice is a slice object (see section *The standard type hierarchy*)\nwhose ``start``, ``stop`` and ``step`` attributes are the values of\nthe expressions given as lower bound, upper bound and stride,\nrespectively, substituting ``None`` for missing expressions.\n', 'specialattrs': u"\nSpecial Attributes\n******************\n\nThe implementation adds a few special read-only attributes to several\nobject types, where they are relevant. Some of these are not reported\nby the ``dir()`` built-in function.\n\nobject.__dict__\n\n A dictionary or other mapping object used to store an object's\n (writable) attributes.\n\nobject.__methods__\n\n Deprecated since version 2.2: Use the built-in function ``dir()``\n to get a list of an object's attributes. This attribute is no\n longer available.\n\nobject.__members__\n\n Deprecated since version 2.2: Use the built-in function ``dir()``\n to get a list of an object's attributes. This attribute is no\n longer available.\n\ninstance.__class__\n\n The class to which a class instance belongs.\n\nclass.__bases__\n\n The tuple of base classes of a class object.\n\nclass.__name__\n\n The name of the class or type.\n\nThe following attributes are only supported by *new-style class*es.\n\nclass.__mro__\n\n This attribute is a tuple of classes that are considered when\n looking for base classes during method resolution.\n\nclass.mro()\n\n This method can be overridden by a metaclass to customize the\n method resolution order for its instances. It is called at class\n instantiation, and its result is stored in ``__mro__``.\n\nclass.__subclasses__()\n\n Each new-style class keeps a list of weak references to its\n immediate subclasses. This method returns a list of all those\n references still alive. Example:\n\n >>> int.__subclasses__()\n []\n\n-[ Footnotes ]-\n\n[1] Additional information on these special methods may be found in\n the Python Reference Manual (*Basic customization*).\n\n[2] As a consequence, the list ``[1, 2]`` is considered equal to\n ``[1.0, 2.0]``, and similarly for tuples.\n\n[3] They must have since the parser can't tell the type of the\n operands.\n\n[4] To format only a tuple you should therefore provide a singleton\n tuple whose only element is the tuple to be formatted.\n\n[5] The advantage of leaving the newline on is that returning an empty\n string is then an unambiguous EOF indication. It is also possible\n (in cases where it might matter, for example, if you want to make\n an exact copy of a file while scanning its lines) to tell whether\n the last line of a file ended in a newline or not (yes this\n happens!).\n", - 'specialnames': u'\nSpecial method names\n********************\n\nA class can implement certain operations that are invoked by special\nsyntax (such as arithmetic operations or subscripting and slicing) by\ndefining methods with special names. This is Python\'s approach to\n*operator overloading*, allowing classes to define their own behavior\nwith respect to language operators. For instance, if a class defines\na method named ``__getitem__()``, and ``x`` is an instance of this\nclass, then ``x[i]`` is roughly equivalent to ``x.__getitem__(i)`` for\nold-style classes and ``type(x).__getitem__(x, i)`` for new-style\nclasses. 
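# Quick check (editorial example) of the equivalence stated above for a
# new-style class: the implicit x[i] goes through the special method on the
# type, not on the instance.
class Squares(object):
    def __getitem__(self, i):
        return i * i

x = Squares()
print x[4]                        # 16
print type(x).__getitem__(x, 4)   # 16: the explicit new-style spelling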
Except where mentioned, attempts to execute an operation\nraise an exception when no appropriate method is defined (typically\n``AttributeError`` or ``TypeError``).\n\nWhen implementing a class that emulates any built-in type, it is\nimportant that the emulation only be implemented to the degree that it\nmakes sense for the object being modelled. For example, some\nsequences may work well with retrieval of individual elements, but\nextracting a slice may not make sense. (One example of this is the\n``NodeList`` interface in the W3C\'s Document Object Model.)\n\n\nBasic customization\n===================\n\nobject.__new__(cls[, ...])\n\n Called to create a new instance of class *cls*. ``__new__()`` is a\n static method (special-cased so you need not declare it as such)\n that takes the class of which an instance was requested as its\n first argument. The remaining arguments are those passed to the\n object constructor expression (the call to the class). The return\n value of ``__new__()`` should be the new object instance (usually\n an instance of *cls*).\n\n Typical implementations create a new instance of the class by\n invoking the superclass\'s ``__new__()`` method using\n ``super(currentclass, cls).__new__(cls[, ...])`` with appropriate\n arguments and then modifying the newly-created instance as\n necessary before returning it.\n\n If ``__new__()`` returns an instance of *cls*, then the new\n instance\'s ``__init__()`` method will be invoked like\n ``__init__(self[, ...])``, where *self* is the new instance and the\n remaining arguments are the same as were passed to ``__new__()``.\n\n If ``__new__()`` does not return an instance of *cls*, then the new\n instance\'s ``__init__()`` method will not be invoked.\n\n ``__new__()`` is intended mainly to allow subclasses of immutable\n types (like int, str, or tuple) to customize instance creation. It\n is also commonly overridden in custom metaclasses in order to\n customize class creation.\n\nobject.__init__(self[, ...])\n\n Called when the instance is created. The arguments are those\n passed to the class constructor expression. If a base class has an\n ``__init__()`` method, the derived class\'s ``__init__()`` method,\n if any, must explicitly call it to ensure proper initialization of\n the base class part of the instance; for example:\n ``BaseClass.__init__(self, [args...])``. As a special constraint\n on constructors, no value may be returned; doing so will cause a\n ``TypeError`` to be raised at runtime.\n\nobject.__del__(self)\n\n Called when the instance is about to be destroyed. This is also\n called a destructor. If a base class has a ``__del__()`` method,\n the derived class\'s ``__del__()`` method, if any, must explicitly\n call it to ensure proper deletion of the base class part of the\n instance. Note that it is possible (though not recommended!) for\n the ``__del__()`` method to postpone destruction of the instance by\n creating a new reference to it. It may then be called at a later\n time when this new reference is deleted. It is not guaranteed that\n ``__del__()`` methods are called for objects that still exist when\n the interpreter exits.\n\n Note: ``del x`` doesn\'t directly call ``x.__del__()`` --- the former\n decrements the reference count for ``x`` by one, and the latter\n is only called when ``x``\'s reference count reaches zero. 
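# A minimal sketch, assuming only the __new__/__init__ description earlier in
# this section: __new__ builds the (immutable) instance, __init__ then
# attaches extra state.
class Celsius(float):
    def __new__(cls, degrees):
        # float is immutable, so the value must be chosen in __new__
        return super(Celsius, cls).__new__(cls, degrees)
    def __init__(self, degrees):
        self.unit = 'C'

t = Celsius(21.5)
print t, t.unit              # 21.5 C
print isinstance(t, float)   # True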
Some\n common situations that may prevent the reference count of an\n object from going to zero include: circular references between\n objects (e.g., a doubly-linked list or a tree data structure with\n parent and child pointers); a reference to the object on the\n stack frame of a function that caught an exception (the traceback\n stored in ``sys.exc_traceback`` keeps the stack frame alive); or\n a reference to the object on the stack frame that raised an\n unhandled exception in interactive mode (the traceback stored in\n ``sys.last_traceback`` keeps the stack frame alive). The first\n situation can only be remedied by explicitly breaking the cycles;\n the latter two situations can be resolved by storing ``None`` in\n ``sys.exc_traceback`` or ``sys.last_traceback``. Circular\n references which are garbage are detected when the option cycle\n detector is enabled (it\'s on by default), but can only be cleaned\n up if there are no Python-level ``__del__()`` methods involved.\n Refer to the documentation for the ``gc`` module for more\n information about how ``__del__()`` methods are handled by the\n cycle detector, particularly the description of the ``garbage``\n value.\n\n Warning: Due to the precarious circumstances under which ``__del__()``\n methods are invoked, exceptions that occur during their execution\n are ignored, and a warning is printed to ``sys.stderr`` instead.\n Also, when ``__del__()`` is invoked in response to a module being\n deleted (e.g., when execution of the program is done), other\n globals referenced by the ``__del__()`` method may already have\n been deleted or in the process of being torn down (e.g. the\n import machinery shutting down). For this reason, ``__del__()``\n methods should do the absolute minimum needed to maintain\n external invariants. Starting with version 1.5, Python\n guarantees that globals whose name begins with a single\n underscore are deleted from their module before other globals are\n deleted; if no other references to such globals exist, this may\n help in assuring that imported modules are still available at the\n time when the ``__del__()`` method is called.\n\nobject.__repr__(self)\n\n Called by the ``repr()`` built-in function and by string\n conversions (reverse quotes) to compute the "official" string\n representation of an object. If at all possible, this should look\n like a valid Python expression that could be used to recreate an\n object with the same value (given an appropriate environment). If\n this is not possible, a string of the form ``<...some useful\n description...>`` should be returned. The return value must be a\n string object. If a class defines ``__repr__()`` but not\n ``__str__()``, then ``__repr__()`` is also used when an "informal"\n string representation of instances of that class is required.\n\n This is typically used for debugging, so it is important that the\n representation is information-rich and unambiguous.\n\nobject.__str__(self)\n\n Called by the ``str()`` built-in function and by the ``print``\n statement to compute the "informal" string representation of an\n object. This differs from ``__repr__()`` in that it does not have\n to be a valid Python expression: a more convenient or concise\n representation may be used instead. 
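# Illustrative only: __repr__ aims to be unambiguous (ideally a valid
# expression), __str__ merely readable; the print statement uses __str__.
class Point(object):
    def __init__(self, x, y):
        self.x, self.y = x, y
    def __repr__(self):
        return 'Point(%r, %r)' % (self.x, self.y)
    def __str__(self):
        return '(%s, %s)' % (self.x, self.y)

p = Point(1, 2)
print repr(p)    # Point(1, 2)
print p          # (1, 2)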
The return value must be a\n string object.\n\nobject.__lt__(self, other)\nobject.__le__(self, other)\nobject.__eq__(self, other)\nobject.__ne__(self, other)\nobject.__gt__(self, other)\nobject.__ge__(self, other)\n\n New in version 2.1.\n\n These are the so-called "rich comparison" methods, and are called\n for comparison operators in preference to ``__cmp__()`` below. The\n correspondence between operator symbols and method names is as\n follows: ``x<y`` calls ``x.__lt__(y)``, ``x<=y`` calls\n ``x.__le__(y)``, ``x==y`` calls ``x.__eq__(y)``, ``x!=y`` and ``x<>y``\n call ``x.__ne__(y)``, ``x>y`` calls ``x.__gt__(y)``, and\n ``x>=y`` calls ``x.__ge__(y)``.\n\n A rich comparison method may return the singleton\n ``NotImplemented`` if it does not implement the operation for a\n given pair of arguments. By convention, ``False`` and ``True`` are\n returned for a successful comparison. However, these methods can\n return any value, so if the comparison operator is used in a\n Boolean context (e.g., in the condition of an ``if`` statement),\n Python will call ``bool()`` on the value to determine if the result\n is true or false.\n\n There are no implied relationships among the comparison operators.\n The truth of ``x==y`` does not imply that ``x!=y`` is false.\n Accordingly, when defining ``__eq__()``, one should also define\n ``__ne__()`` so that the operators will behave as expected. See\n the paragraph on ``__hash__()`` for some important notes on\n creating *hashable* objects which support custom comparison\n operations and are usable as dictionary keys.\n\n There are no swapped-argument versions of these methods (to be used\n when the left argument does not support the operation but the right\n argument does); rather, ``__lt__()`` and ``__gt__()`` are each\n other\'s reflection, ``__le__()`` and ``__ge__()`` are each other\'s\n reflection, and ``__eq__()`` and ``__ne__()`` are their own\n reflection.\n\n Arguments to rich comparison methods are never coerced.\n\n To automatically generate ordering operations from a single root\n operation, see ``functools.total_ordering()``.\n\nobject.__cmp__(self, other)\n\n Called by comparison operations if rich comparison (see above) is\n not defined. Should return a negative integer if ``self < other``,\n zero if ``self == other``, a positive integer if ``self > other``.\n If no ``__cmp__()``, ``__eq__()`` or ``__ne__()`` operation is\n defined, class instances are compared by object identity\n ("address"). See also the description of ``__hash__()`` for some\n important notes on creating *hashable* objects which support custom\n comparison operations and are usable as dictionary keys. (Note: the\n restriction that exceptions are not propagated by ``__cmp__()`` has\n been removed since Python 1.5.)\n\nobject.__rcmp__(self, other)\n\n Changed in version 2.1: No longer supported.\n\nobject.__hash__(self)\n\n Called by built-in function ``hash()`` and for operations on\n members of hashed collections including ``set``, ``frozenset``, and\n ``dict``. ``__hash__()`` should return an integer. The only\n required property is that objects which compare equal have the same\n hash value; it is advised to somehow mix together (e.g. using\n exclusive or) the hash values for the components of the object that\n also play a part in comparison of objects.\n\n If a class does not define a ``__cmp__()`` or ``__eq__()`` method\n it should not define a ``__hash__()`` operation either; if it\n defines ``__cmp__()`` or ``__eq__()`` but not ``__hash__()``, its\n instances will not be usable in hashed collections. 
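# Sketch (editorial addition) of the __eq__/__ne__/__hash__ contract described
# above: objects that compare equal hash equal, and __ne__ mirrors __eq__.
class Account(object):
    def __init__(self, number):
        self.number = number
    def __eq__(self, other):
        if not isinstance(other, Account):
            return NotImplemented
        return self.number == other.number
    def __ne__(self, other):
        result = self.__eq__(other)
        if result is NotImplemented:
            return result
        return not result
    def __hash__(self):
        return hash(self.number)

a, b = Account(7), Account(7)
print a == b, a != b      # True False
print len(set([a, b]))    # 1: instances are usable as set/dict keys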
If a class\n defines mutable objects and implements a ``__cmp__()`` or\n ``__eq__()`` method, it should not implement ``__hash__()``, since\n hashable collection implementations require that a object\'s hash\n value is immutable (if the object\'s hash value changes, it will be\n in the wrong hash bucket).\n\n User-defined classes have ``__cmp__()`` and ``__hash__()`` methods\n by default; with them, all objects compare unequal (except with\n themselves) and ``x.__hash__()`` returns ``id(x)``.\n\n Classes which inherit a ``__hash__()`` method from a parent class\n but change the meaning of ``__cmp__()`` or ``__eq__()`` such that\n the hash value returned is no longer appropriate (e.g. by switching\n to a value-based concept of equality instead of the default\n identity based equality) can explicitly flag themselves as being\n unhashable by setting ``__hash__ = None`` in the class definition.\n Doing so means that not only will instances of the class raise an\n appropriate ``TypeError`` when a program attempts to retrieve their\n hash value, but they will also be correctly identified as\n unhashable when checking ``isinstance(obj, collections.Hashable)``\n (unlike classes which define their own ``__hash__()`` to explicitly\n raise ``TypeError``).\n\n Changed in version 2.5: ``__hash__()`` may now also return a long\n integer object; the 32-bit integer is then derived from the hash of\n that object.\n\n Changed in version 2.6: ``__hash__`` may now be set to ``None`` to\n explicitly flag instances of a class as unhashable.\n\nobject.__nonzero__(self)\n\n Called to implement truth value testing and the built-in operation\n ``bool()``; should return ``False`` or ``True``, or their integer\n equivalents ``0`` or ``1``. When this method is not defined,\n ``__len__()`` is called, if it is defined, and the object is\n considered true if its result is nonzero. If a class defines\n neither ``__len__()`` nor ``__nonzero__()``, all its instances are\n considered true.\n\nobject.__unicode__(self)\n\n Called to implement ``unicode()`` built-in; should return a Unicode\n object. When this method is not defined, string conversion is\n attempted, and the result of string conversion is converted to\n Unicode using the system default encoding.\n\n\nCustomizing attribute access\n============================\n\nThe following methods can be defined to customize the meaning of\nattribute access (use of, assignment to, or deletion of ``x.name``)\nfor class instances.\n\nobject.__getattr__(self, name)\n\n Called when an attribute lookup has not found the attribute in the\n usual places (i.e. it is not an instance attribute nor is it found\n in the class tree for ``self``). ``name`` is the attribute name.\n This method should return the (computed) attribute value or raise\n an ``AttributeError`` exception.\n\n Note that if the attribute is found through the normal mechanism,\n ``__getattr__()`` is not called. (This is an intentional asymmetry\n between ``__getattr__()`` and ``__setattr__()``.) This is done both\n for efficiency reasons and because otherwise ``__getattr__()``\n would have no way to access other attributes of the instance. Note\n that at least for instance variables, you can fake total control by\n not inserting any values in the instance attribute dictionary (but\n instead inserting them in another object). 
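# Sketch of __getattr__ as described above: it is called only when normal
# lookup fails, so it can transparently delegate to a wrapped object (the
# names here are invented for illustration).
class Proxy(object):
    def __init__(self, wrapped):
        self._wrapped = wrapped
    def __getattr__(self, name):
        # not reached for attributes found normally, e.g. self._wrapped
        return getattr(self._wrapped, name)

p = Proxy([1, 2, 3])
p.append(4)           # resolved via __getattr__ -> list.append
print p._wrapped      # [1, 2, 3, 4]
print p.count(2)      # 1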
See the\n ``__getattribute__()`` method below for a way to actually get total\n control in new-style classes.\n\nobject.__setattr__(self, name, value)\n\n Called when an attribute assignment is attempted. This is called\n instead of the normal mechanism (i.e. store the value in the\n instance dictionary). *name* is the attribute name, *value* is the\n value to be assigned to it.\n\n If ``__setattr__()`` wants to assign to an instance attribute, it\n should not simply execute ``self.name = value`` --- this would\n cause a recursive call to itself. Instead, it should insert the\n value in the dictionary of instance attributes, e.g.,\n ``self.__dict__[name] = value``. For new-style classes, rather\n than accessing the instance dictionary, it should call the base\n class method with the same name, for example,\n ``object.__setattr__(self, name, value)``.\n\nobject.__delattr__(self, name)\n\n Like ``__setattr__()`` but for attribute deletion instead of\n assignment. This should only be implemented if ``del obj.name`` is\n meaningful for the object.\n\n\nMore attribute access for new-style classes\n-------------------------------------------\n\nThe following methods only apply to new-style classes.\n\nobject.__getattribute__(self, name)\n\n Called unconditionally to implement attribute accesses for\n instances of the class. If the class also defines\n ``__getattr__()``, the latter will not be called unless\n ``__getattribute__()`` either calls it explicitly or raises an\n ``AttributeError``. This method should return the (computed)\n attribute value or raise an ``AttributeError`` exception. In order\n to avoid infinite recursion in this method, its implementation\n should always call the base class method with the same name to\n access any attributes it needs, for example,\n ``object.__getattribute__(self, name)``.\n\n Note: This method may still be bypassed when looking up special methods\n as the result of implicit invocation via language syntax or\n built-in functions. See *Special method lookup for new-style\n classes*.\n\n\nImplementing Descriptors\n------------------------\n\nThe following methods only apply when an instance of the class\ncontaining the method (a so-called *descriptor* class) appears in the\nclass dictionary of another new-style class, known as the *owner*\nclass. In the examples below, "the attribute" refers to the attribute\nwhose name is the key of the property in the owner class\'\n``__dict__``. Descriptors can only be implemented as new-style\nclasses themselves.\n\nobject.__get__(self, instance, owner)\n\n Called to get the attribute of the owner class (class attribute\n access) or of an instance of that class (instance attribute\n access). *owner* is always the owner class, while *instance* is the\n instance that the attribute was accessed through, or ``None`` when\n the attribute is accessed through the *owner*. This method should\n return the (computed) attribute value or raise an\n ``AttributeError`` exception.\n\nobject.__set__(self, instance, value)\n\n Called to set the attribute on an instance *instance* of the owner\n class to a new value, *value*.\n\nobject.__delete__(self, instance)\n\n Called to delete the attribute on an instance *instance* of the\n owner class.\n\n\nInvoking Descriptors\n--------------------\n\nIn general, a descriptor is an object attribute with "binding\nbehavior", one whose attribute access has been overridden by methods\nin the descriptor protocol: ``__get__()``, ``__set__()``, and\n``__delete__()``. 
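# A small data-descriptor sketch matching the protocol above; the validation
# rule is invented for illustration.
class Positive(object):
    def __init__(self, name):
        self.name = name
    def __get__(self, instance, owner):
        if instance is None:
            return self
        return instance.__dict__[self.name]
    def __set__(self, instance, value):
        if value <= 0:
            raise ValueError('%s must be positive' % self.name)
        instance.__dict__[self.name] = value   # direct dict write, no recursion

class Order(object):
    quantity = Positive('quantity')

o = Order()
o.quantity = 3
print o.quantity      # 3, via Positive.__get__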
If any of those methods are defined for an object,\nit is said to be a descriptor.\n\nThe default behavior for attribute access is to get, set, or delete\nthe attribute from an object\'s dictionary. For instance, ``a.x`` has a\nlookup chain starting with ``a.__dict__[\'x\']``, then\n``type(a).__dict__[\'x\']``, and continuing through the base classes of\n``type(a)`` excluding metaclasses.\n\nHowever, if the looked-up value is an object defining one of the\ndescriptor methods, then Python may override the default behavior and\ninvoke the descriptor method instead. Where this occurs in the\nprecedence chain depends on which descriptor methods were defined and\nhow they were called. Note that descriptors are only invoked for new\nstyle objects or classes (ones that subclass ``object()`` or\n``type()``).\n\nThe starting point for descriptor invocation is a binding, ``a.x``.\nHow the arguments are assembled depends on ``a``:\n\nDirect Call\n The simplest and least common call is when user code directly\n invokes a descriptor method: ``x.__get__(a)``.\n\nInstance Binding\n If binding to a new-style object instance, ``a.x`` is transformed\n into the call: ``type(a).__dict__[\'x\'].__get__(a, type(a))``.\n\nClass Binding\n If binding to a new-style class, ``A.x`` is transformed into the\n call: ``A.__dict__[\'x\'].__get__(None, A)``.\n\nSuper Binding\n If ``a`` is an instance of ``super``, then the binding ``super(B,\n obj).m()`` searches ``obj.__class__.__mro__`` for the base class\n ``A`` immediately preceding ``B`` and then invokes the descriptor\n with the call: ``A.__dict__[\'m\'].__get__(obj, A)``.\n\nFor instance bindings, the precedence of descriptor invocation depends\non the which descriptor methods are defined. A descriptor can define\nany combination of ``__get__()``, ``__set__()`` and ``__delete__()``.\nIf it does not define ``__get__()``, then accessing the attribute will\nreturn the descriptor object itself unless there is a value in the\nobject\'s instance dictionary. If the descriptor defines ``__set__()``\nand/or ``__delete__()``, it is a data descriptor; if it defines\nneither, it is a non-data descriptor. Normally, data descriptors\ndefine both ``__get__()`` and ``__set__()``, while non-data\ndescriptors have just the ``__get__()`` method. Data descriptors with\n``__set__()`` and ``__get__()`` defined always override a redefinition\nin an instance dictionary. In contrast, non-data descriptors can be\noverridden by instances.\n\nPython methods (including ``staticmethod()`` and ``classmethod()``)\nare implemented as non-data descriptors. Accordingly, instances can\nredefine and override methods. This allows individual instances to\nacquire behaviors that differ from other instances of the same class.\n\nThe ``property()`` function is implemented as a data descriptor.\nAccordingly, instances cannot override the behavior of a property.\n\n\n__slots__\n---------\n\nBy default, instances of both old and new-style classes have a\ndictionary for attribute storage. This wastes space for objects\nhaving very few instance variables. The space consumption can become\nacute when creating large numbers of instances.\n\nThe default can be overridden by defining *__slots__* in a new-style\nclass definition. The *__slots__* declaration takes a sequence of\ninstance variables and reserves just enough space in each instance to\nhold a value for each variable. 
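# Illustrative __slots__ usage following the description above: instances get
# no __dict__, so only the declared names can be assigned.
class SlottedPoint(object):
    __slots__ = ('x', 'y')
    def __init__(self, x, y):
        self.x = x
        self.y = y

p = SlottedPoint(1, 2)
p.x = 10                  # fine: 'x' is a declared slot
try:
    p.z = 3               # not listed in __slots__
except AttributeError:
    print 'no __dict__, so new attribute names are rejected'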
Space is saved because *__dict__* is\nnot created for each instance.\n\n__slots__\n\n This class variable can be assigned a string, iterable, or sequence\n of strings with variable names used by instances. If defined in a\n new-style class, *__slots__* reserves space for the declared\n variables and prevents the automatic creation of *__dict__* and\n *__weakref__* for each instance.\n\n New in version 2.2.\n\nNotes on using *__slots__*\n\n* When inheriting from a class without *__slots__*, the *__dict__*\n attribute of that class will always be accessible, so a *__slots__*\n definition in the subclass is meaningless.\n\n* Without a *__dict__* variable, instances cannot be assigned new\n variables not listed in the *__slots__* definition. Attempts to\n assign to an unlisted variable name raises ``AttributeError``. If\n dynamic assignment of new variables is desired, then add\n ``\'__dict__\'`` to the sequence of strings in the *__slots__*\n declaration.\n\n Changed in version 2.3: Previously, adding ``\'__dict__\'`` to the\n *__slots__* declaration would not enable the assignment of new\n attributes not specifically listed in the sequence of instance\n variable names.\n\n* Without a *__weakref__* variable for each instance, classes defining\n *__slots__* do not support weak references to its instances. If weak\n reference support is needed, then add ``\'__weakref__\'`` to the\n sequence of strings in the *__slots__* declaration.\n\n Changed in version 2.3: Previously, adding ``\'__weakref__\'`` to the\n *__slots__* declaration would not enable support for weak\n references.\n\n* *__slots__* are implemented at the class level by creating\n descriptors (*Implementing Descriptors*) for each variable name. As\n a result, class attributes cannot be used to set default values for\n instance variables defined by *__slots__*; otherwise, the class\n attribute would overwrite the descriptor assignment.\n\n* The action of a *__slots__* declaration is limited to the class\n where it is defined. As a result, subclasses will have a *__dict__*\n unless they also define *__slots__* (which must only contain names\n of any *additional* slots).\n\n* If a class defines a slot also defined in a base class, the instance\n variable defined by the base class slot is inaccessible (except by\n retrieving its descriptor directly from the base class). This\n renders the meaning of the program undefined. In the future, a\n check may be added to prevent this.\n\n* Nonempty *__slots__* does not work for classes derived from\n "variable-length" built-in types such as ``long``, ``str`` and\n ``tuple``.\n\n* Any non-string iterable may be assigned to *__slots__*. Mappings may\n also be used; however, in the future, special meaning may be\n assigned to the values corresponding to each key.\n\n* *__class__* assignment works only if both classes have the same\n *__slots__*.\n\n Changed in version 2.6: Previously, *__class__* assignment raised an\n error if either new or old class had *__slots__*.\n\n\nCustomizing class creation\n==========================\n\nBy default, new-style classes are constructed using ``type()``. A\nclass definition is read into a separate namespace and the value of\nclass name is bound to the result of ``type(name, bases, dict)``.\n\nWhen the class definition is read, if *__metaclass__* is defined then\nthe callable assigned to it will be called instead of ``type()``. 
This\nallows classes or functions to be written which monitor or alter the\nclass creation process:\n\n* Modifying the class dictionary prior to the class being created.\n\n* Returning an instance of another class -- essentially performing the\n role of a factory function.\n\nThese steps will have to be performed in the metaclass\'s ``__new__()``\nmethod -- ``type.__new__()`` can then be called from this method to\ncreate a class with different properties. This example adds a new\nelement to the class dictionary before creating the class:\n\n class metacls(type):\n def __new__(mcs, name, bases, dict):\n dict[\'foo\'] = \'metacls was here\'\n return type.__new__(mcs, name, bases, dict)\n\nYou can of course also override other class methods (or add new\nmethods); for example defining a custom ``__call__()`` method in the\nmetaclass allows custom behavior when the class is called, e.g. not\nalways creating a new instance.\n\n__metaclass__\n\n This variable can be any callable accepting arguments for ``name``,\n ``bases``, and ``dict``. Upon class creation, the callable is used\n instead of the built-in ``type()``.\n\n New in version 2.2.\n\nThe appropriate metaclass is determined by the following precedence\nrules:\n\n* If ``dict[\'__metaclass__\']`` exists, it is used.\n\n* Otherwise, if there is at least one base class, its metaclass is\n used (this looks for a *__class__* attribute first and if not found,\n uses its type).\n\n* Otherwise, if a global variable named __metaclass__ exists, it is\n used.\n\n* Otherwise, the old-style, classic metaclass (types.ClassType) is\n used.\n\nThe potential uses for metaclasses are boundless. Some ideas that have\nbeen explored including logging, interface checking, automatic\ndelegation, automatic property creation, proxies, frameworks, and\nautomatic resource locking/synchronization.\n\n\nCustomizing instance and subclass checks\n========================================\n\nNew in version 2.6.\n\nThe following methods are used to override the default behavior of the\n``isinstance()`` and ``issubclass()`` built-in functions.\n\nIn particular, the metaclass ``abc.ABCMeta`` implements these methods\nin order to allow the addition of Abstract Base Classes (ABCs) as\n"virtual base classes" to any class or type (including built-in\ntypes), including other ABCs.\n\nclass.__instancecheck__(self, instance)\n\n Return true if *instance* should be considered a (direct or\n indirect) instance of *class*. If defined, called to implement\n ``isinstance(instance, class)``.\n\nclass.__subclasscheck__(self, subclass)\n\n Return true if *subclass* should be considered a (direct or\n indirect) subclass of *class*. If defined, called to implement\n ``issubclass(subclass, class)``.\n\nNote that these methods are looked up on the type (metaclass) of a\nclass. 
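# Hedged sketch of __instancecheck__/__subclasscheck__ on a metaclass, as
# described above; the duck-typed "has a write() method" rule is invented for
# illustration.
import sys

class HasWriteMeta(type):
    def __instancecheck__(cls, instance):
        return hasattr(instance, 'write')
    def __subclasscheck__(cls, subclass):
        return hasattr(subclass, 'write')

class Writable(object):
    __metaclass__ = HasWriteMeta

print isinstance(sys.stdout, Writable)   # True: it has a write() method
print isinstance(42, Writable)           # False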
They cannot be defined as class methods in the actual class.\nThis is consistent with the lookup of special methods that are called\non instances, only in this case the instance is itself a class.\n\nSee also:\n\n **PEP 3119** - Introducing Abstract Base Classes\n Includes the specification for customizing ``isinstance()`` and\n ``issubclass()`` behavior through ``__instancecheck__()`` and\n ``__subclasscheck__()``, with motivation for this functionality\n in the context of adding Abstract Base Classes (see the ``abc``\n module) to the language.\n\n\nEmulating callable objects\n==========================\n\nobject.__call__(self[, args...])\n\n Called when the instance is "called" as a function; if this method\n is defined, ``x(arg1, arg2, ...)`` is a shorthand for\n ``x.__call__(arg1, arg2, ...)``.\n\n\nEmulating container types\n=========================\n\nThe following methods can be defined to implement container objects.\nContainers usually are sequences (such as lists or tuples) or mappings\n(like dictionaries), but can represent other containers as well. The\nfirst set of methods is used either to emulate a sequence or to\nemulate a mapping; the difference is that for a sequence, the\nallowable keys should be the integers *k* for which ``0 <= k < N``\nwhere *N* is the length of the sequence, or slice objects, which\ndefine a range of items. (For backwards compatibility, the method\n``__getslice__()`` (see below) can also be defined to handle simple,\nbut not extended slices.) It is also recommended that mappings provide\nthe methods ``keys()``, ``values()``, ``items()``, ``has_key()``,\n``get()``, ``clear()``, ``setdefault()``, ``iterkeys()``,\n``itervalues()``, ``iteritems()``, ``pop()``, ``popitem()``,\n``copy()``, and ``update()`` behaving similar to those for Python\'s\nstandard dictionary objects. The ``UserDict`` module provides a\n``DictMixin`` class to help create those methods from a base set of\n``__getitem__()``, ``__setitem__()``, ``__delitem__()``, and\n``keys()``. Mutable sequences should provide methods ``append()``,\n``count()``, ``index()``, ``extend()``, ``insert()``, ``pop()``,\n``remove()``, ``reverse()`` and ``sort()``, like Python standard list\nobjects. Finally, sequence types should implement addition (meaning\nconcatenation) and multiplication (meaning repetition) by defining the\nmethods ``__add__()``, ``__radd__()``, ``__iadd__()``, ``__mul__()``,\n``__rmul__()`` and ``__imul__()`` described below; they should not\ndefine ``__coerce__()`` or other numerical operators. It is\nrecommended that both mappings and sequences implement the\n``__contains__()`` method to allow efficient use of the ``in``\noperator; for mappings, ``in`` should be equivalent of ``has_key()``;\nfor sequences, it should search through the values. It is further\nrecommended that both mappings and sequences implement the\n``__iter__()`` method to allow efficient iteration through the\ncontainer; for mappings, ``__iter__()`` should be the same as\n``iterkeys()``; for sequences, it should iterate through the values.\n\nobject.__len__(self)\n\n Called to implement the built-in function ``len()``. Should return\n the length of the object, an integer ``>=`` 0. Also, an object\n that doesn\'t define a ``__nonzero__()`` method and whose\n ``__len__()`` method returns zero is considered to be false in a\n Boolean context.\n\nobject.__getitem__(self, key)\n\n Called to implement evaluation of ``self[key]``. 
For sequence\n types, the accepted keys should be integers and slice objects.\n Note that the special interpretation of negative indexes (if the\n class wishes to emulate a sequence type) is up to the\n ``__getitem__()`` method. If *key* is of an inappropriate type,\n ``TypeError`` may be raised; if of a value outside the set of\n indexes for the sequence (after any special interpretation of\n negative values), ``IndexError`` should be raised. For mapping\n types, if *key* is missing (not in the container), ``KeyError``\n should be raised.\n\n Note: ``for`` loops expect that an ``IndexError`` will be raised for\n illegal indexes to allow proper detection of the end of the\n sequence.\n\nobject.__setitem__(self, key, value)\n\n Called to implement assignment to ``self[key]``. Same note as for\n ``__getitem__()``. This should only be implemented for mappings if\n the objects support changes to the values for keys, or if new keys\n can be added, or for sequences if elements can be replaced. The\n same exceptions should be raised for improper *key* values as for\n the ``__getitem__()`` method.\n\nobject.__delitem__(self, key)\n\n Called to implement deletion of ``self[key]``. Same note as for\n ``__getitem__()``. This should only be implemented for mappings if\n the objects support removal of keys, or for sequences if elements\n can be removed from the sequence. The same exceptions should be\n raised for improper *key* values as for the ``__getitem__()``\n method.\n\nobject.__iter__(self)\n\n This method is called when an iterator is required for a container.\n This method should return a new iterator object that can iterate\n over all the objects in the container. For mappings, it should\n iterate over the keys of the container, and should also be made\n available as the method ``iterkeys()``.\n\n Iterator objects also need to implement this method; they are\n required to return themselves. For more information on iterator\n objects, see *Iterator Types*.\n\nobject.__reversed__(self)\n\n Called (if present) by the ``reversed()`` built-in to implement\n reverse iteration. It should return a new iterator object that\n iterates over all the objects in the container in reverse order.\n\n If the ``__reversed__()`` method is not provided, the\n ``reversed()`` built-in will fall back to using the sequence\n protocol (``__len__()`` and ``__getitem__()``). Objects that\n support the sequence protocol should only provide\n ``__reversed__()`` if they can provide an implementation that is\n more efficient than the one provided by ``reversed()``.\n\n New in version 2.6.\n\nThe membership test operators (``in`` and ``not in``) are normally\nimplemented as an iteration through a sequence. However, container\nobjects can supply the following special method with a more efficient\nimplementation, which also does not require the object be a sequence.\n\nobject.__contains__(self, item)\n\n Called to implement membership test operators. Should return true\n if *item* is in *self*, false otherwise. 
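# A compact sequence-like class (names invented for illustration) exercising
# the container protocol methods described above.
class Deck(object):
    def __init__(self, cards):
        self._cards = list(cards)
    def __len__(self):
        return len(self._cards)
    def __getitem__(self, index):
        return self._cards[index]      # the list also handles negative indexes
    def __iter__(self):
        return iter(self._cards)
    def __contains__(self, card):
        return card in self._cards

d = Deck(['7H', '8S', 'QD'])
print len(d), d[0], d[-1]     # 3 7H QD
print 'QD' in d               # True, via __contains__
print [c for c in d]          # iteration via __iter__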
For mapping objects, this\n should consider the keys of the mapping rather than the values or\n the key-item pairs.\n\n For objects that don\'t define ``__contains__()``, the membership\n test first tries iteration via ``__iter__()``, then the old\n sequence iteration protocol via ``__getitem__()``, see *this\n section in the language reference*.\n\n\nAdditional methods for emulation of sequence types\n==================================================\n\nThe following optional methods can be defined to further emulate\nsequence objects. Immutable sequences methods should at most only\ndefine ``__getslice__()``; mutable sequences might define all three\nmethods.\n\nobject.__getslice__(self, i, j)\n\n Deprecated since version 2.0: Support slice objects as parameters\n to the ``__getitem__()`` method. (However, built-in types in\n CPython currently still implement ``__getslice__()``. Therefore,\n you have to override it in derived classes when implementing\n slicing.)\n\n Called to implement evaluation of ``self[i:j]``. The returned\n object should be of the same type as *self*. Note that missing *i*\n or *j* in the slice expression are replaced by zero or\n ``sys.maxint``, respectively. If negative indexes are used in the\n slice, the length of the sequence is added to that index. If the\n instance does not implement the ``__len__()`` method, an\n ``AttributeError`` is raised. No guarantee is made that indexes\n adjusted this way are not still negative. Indexes which are\n greater than the length of the sequence are not modified. If no\n ``__getslice__()`` is found, a slice object is created instead, and\n passed to ``__getitem__()`` instead.\n\nobject.__setslice__(self, i, j, sequence)\n\n Called to implement assignment to ``self[i:j]``. Same notes for *i*\n and *j* as for ``__getslice__()``.\n\n This method is deprecated. If no ``__setslice__()`` is found, or\n for extended slicing of the form ``self[i:j:k]``, a slice object is\n created, and passed to ``__setitem__()``, instead of\n ``__setslice__()`` being called.\n\nobject.__delslice__(self, i, j)\n\n Called to implement deletion of ``self[i:j]``. Same notes for *i*\n and *j* as for ``__getslice__()``. This method is deprecated. If no\n ``__delslice__()`` is found, or for extended slicing of the form\n ``self[i:j:k]``, a slice object is created, and passed to\n ``__delitem__()``, instead of ``__delslice__()`` being called.\n\nNotice that these methods are only invoked when a single slice with a\nsingle colon is used, and the slice method is available. 
For slice\noperations involving extended slice notation, or in absence of the\nslice methods, ``__getitem__()``, ``__setitem__()`` or\n``__delitem__()`` is called with a slice object as argument.\n\nThe following example demonstrate how to make your program or module\ncompatible with earlier versions of Python (assuming that methods\n``__getitem__()``, ``__setitem__()`` and ``__delitem__()`` support\nslice objects as arguments):\n\n class MyClass:\n ...\n def __getitem__(self, index):\n ...\n def __setitem__(self, index, value):\n ...\n def __delitem__(self, index):\n ...\n\n if sys.version_info < (2, 0):\n # They won\'t be defined if version is at least 2.0 final\n\n def __getslice__(self, i, j):\n return self[max(0, i):max(0, j):]\n def __setslice__(self, i, j, seq):\n self[max(0, i):max(0, j):] = seq\n def __delslice__(self, i, j):\n del self[max(0, i):max(0, j):]\n ...\n\nNote the calls to ``max()``; these are necessary because of the\nhandling of negative indices before the ``__*slice__()`` methods are\ncalled. When negative indexes are used, the ``__*item__()`` methods\nreceive them as provided, but the ``__*slice__()`` methods get a\n"cooked" form of the index values. For each negative index value, the\nlength of the sequence is added to the index before calling the method\n(which may still result in a negative index); this is the customary\nhandling of negative indexes by the built-in sequence types, and the\n``__*item__()`` methods are expected to do this as well. However,\nsince they should already be doing that, negative indexes cannot be\npassed in; they must be constrained to the bounds of the sequence\nbefore being passed to the ``__*item__()`` methods. Calling ``max(0,\ni)`` conveniently returns the proper value.\n\n\nEmulating numeric types\n=======================\n\nThe following methods can be defined to emulate numeric objects.\nMethods corresponding to operations that are not supported by the\nparticular kind of number implemented (e.g., bitwise operations for\nnon-integral numbers) should be left undefined.\n\nobject.__add__(self, other)\nobject.__sub__(self, other)\nobject.__mul__(self, other)\nobject.__floordiv__(self, other)\nobject.__mod__(self, other)\nobject.__divmod__(self, other)\nobject.__pow__(self, other[, modulo])\nobject.__lshift__(self, other)\nobject.__rshift__(self, other)\nobject.__and__(self, other)\nobject.__xor__(self, other)\nobject.__or__(self, other)\n\n These methods are called to implement the binary arithmetic\n operations (``+``, ``-``, ``*``, ``//``, ``%``, ``divmod()``,\n ``pow()``, ``**``, ``<<``, ``>>``, ``&``, ``^``, ``|``). For\n instance, to evaluate the expression ``x + y``, where *x* is an\n instance of a class that has an ``__add__()`` method,\n ``x.__add__(y)`` is called. The ``__divmod__()`` method should be\n the equivalent to using ``__floordiv__()`` and ``__mod__()``; it\n should not be related to ``__truediv__()`` (described below). Note\n that ``__pow__()`` should be defined to accept an optional third\n argument if the ternary version of the built-in ``pow()`` function\n is to be supported.\n\n If one of those methods does not support the operation with the\n supplied arguments, it should return ``NotImplemented``.\n\nobject.__div__(self, other)\nobject.__truediv__(self, other)\n\n The division operator (``/``) is implemented by these methods. The\n ``__truediv__()`` method is used when ``__future__.division`` is in\n effect, otherwise ``__div__()`` is used. 
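# Minimal sketch of the binary protocol summarised above, together with the
# reflected __radd__ covered just below: returning NotImplemented lets Python
# try the other operand. The unit type is invented for illustration.
class Metres(object):
    def __init__(self, value):
        self.value = value
    def __add__(self, other):
        if isinstance(other, Metres):
            return Metres(self.value + other.value)
        return NotImplemented
    def __radd__(self, other):
        if other == 0:                 # lets sum() start from its default 0
            return Metres(self.value)
        return NotImplemented
    def __repr__(self):
        return 'Metres(%r)' % self.value

print Metres(2) + Metres(3)            # Metres(5)
print sum([Metres(1), Metres(2)])      # Metres(3): 0 + Metres(1) uses __radd__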
If only one of these two\n methods is defined, the object will not support division in the\n alternate context; ``TypeError`` will be raised instead.\n\nobject.__radd__(self, other)\nobject.__rsub__(self, other)\nobject.__rmul__(self, other)\nobject.__rdiv__(self, other)\nobject.__rtruediv__(self, other)\nobject.__rfloordiv__(self, other)\nobject.__rmod__(self, other)\nobject.__rdivmod__(self, other)\nobject.__rpow__(self, other)\nobject.__rlshift__(self, other)\nobject.__rrshift__(self, other)\nobject.__rand__(self, other)\nobject.__rxor__(self, other)\nobject.__ror__(self, other)\n\n These methods are called to implement the binary arithmetic\n operations (``+``, ``-``, ``*``, ``/``, ``%``, ``divmod()``,\n ``pow()``, ``**``, ``<<``, ``>>``, ``&``, ``^``, ``|``) with\n reflected (swapped) operands. These functions are only called if\n the left operand does not support the corresponding operation and\n the operands are of different types. [2] For instance, to evaluate\n the expression ``x - y``, where *y* is an instance of a class that\n has an ``__rsub__()`` method, ``y.__rsub__(x)`` is called if\n ``x.__sub__(y)`` returns *NotImplemented*.\n\n Note that ternary ``pow()`` will not try calling ``__rpow__()``\n (the coercion rules would become too complicated).\n\n Note: If the right operand\'s type is a subclass of the left operand\'s\n type and that subclass provides the reflected method for the\n operation, this method will be called before the left operand\'s\n non-reflected method. This behavior allows subclasses to\n override their ancestors\' operations.\n\nobject.__iadd__(self, other)\nobject.__isub__(self, other)\nobject.__imul__(self, other)\nobject.__idiv__(self, other)\nobject.__itruediv__(self, other)\nobject.__ifloordiv__(self, other)\nobject.__imod__(self, other)\nobject.__ipow__(self, other[, modulo])\nobject.__ilshift__(self, other)\nobject.__irshift__(self, other)\nobject.__iand__(self, other)\nobject.__ixor__(self, other)\nobject.__ior__(self, other)\n\n These methods are called to implement the augmented arithmetic\n assignments (``+=``, ``-=``, ``*=``, ``/=``, ``//=``, ``%=``,\n ``**=``, ``<<=``, ``>>=``, ``&=``, ``^=``, ``|=``). These methods\n should attempt to do the operation in-place (modifying *self*) and\n return the result (which could be, but does not have to be,\n *self*). If a specific method is not defined, the augmented\n assignment falls back to the normal methods. For instance, to\n execute the statement ``x += y``, where *x* is an instance of a\n class that has an ``__iadd__()`` method, ``x.__iadd__(y)`` is\n called. If *x* is an instance of a class that does not define a\n ``__iadd__()`` method, ``x.__add__(y)`` and ``y.__radd__(x)`` are\n considered, as with the evaluation of ``x + y``.\n\nobject.__neg__(self)\nobject.__pos__(self)\nobject.__abs__(self)\nobject.__invert__(self)\n\n Called to implement the unary arithmetic operations (``-``, ``+``,\n ``abs()`` and ``~``).\n\nobject.__complex__(self)\nobject.__int__(self)\nobject.__long__(self)\nobject.__float__(self)\n\n Called to implement the built-in functions ``complex()``,\n ``int()``, ``long()``, and ``float()``. Should return a value of\n the appropriate type.\n\nobject.__oct__(self)\nobject.__hex__(self)\n\n Called to implement the built-in functions ``oct()`` and ``hex()``.\n Should return a string value.\n\nobject.__index__(self)\n\n Called to implement ``operator.index()``. Also called whenever\n Python needs an integer object (such as in slicing). 
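# Hedged sketch of __index__ as described above: it lets an object stand in
# wherever Python needs a genuine integer, such as indexing and slicing.
import operator

class Channel(object):
    def __init__(self, number):
        self.number = number
    def __index__(self):
        return self.number

letters = ['a', 'b', 'c', 'd']
ch = Channel(2)
print operator.index(ch)   # 2
print letters[ch]          # 'c'
print letters[:ch]         # ['a', 'b']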
Must return\n an integer (int or long).\n\n New in version 2.5.\n\nobject.__coerce__(self, other)\n\n Called to implement "mixed-mode" numeric arithmetic. Should either\n return a 2-tuple containing *self* and *other* converted to a\n common numeric type, or ``None`` if conversion is impossible. When\n the common type would be the type of ``other``, it is sufficient to\n return ``None``, since the interpreter will also ask the other\n object to attempt a coercion (but sometimes, if the implementation\n of the other type cannot be changed, it is useful to do the\n conversion to the other type here). A return value of\n ``NotImplemented`` is equivalent to returning ``None``.\n\n\nCoercion rules\n==============\n\nThis section used to document the rules for coercion. As the language\nhas evolved, the coercion rules have become hard to document\nprecisely; documenting what one version of one particular\nimplementation does is undesirable. Instead, here are some informal\nguidelines regarding coercion. In Python 3.0, coercion will not be\nsupported.\n\n* If the left operand of a % operator is a string or Unicode object,\n no coercion takes place and the string formatting operation is\n invoked instead.\n\n* It is no longer recommended to define a coercion operation. Mixed-\n mode operations on types that don\'t define coercion pass the\n original arguments to the operation.\n\n* New-style classes (those derived from ``object``) never invoke the\n ``__coerce__()`` method in response to a binary operator; the only\n time ``__coerce__()`` is invoked is when the built-in function\n ``coerce()`` is called.\n\n* For most intents and purposes, an operator that returns\n ``NotImplemented`` is treated the same as one that is not\n implemented at all.\n\n* Below, ``__op__()`` and ``__rop__()`` are used to signify the\n generic method names corresponding to an operator; ``__iop__()`` is\n used for the corresponding in-place operator. For example, for the\n operator \'``+``\', ``__add__()`` and ``__radd__()`` are used for the\n left and right variant of the binary operator, and ``__iadd__()``\n for the in-place variant.\n\n* For objects *x* and *y*, first ``x.__op__(y)`` is tried. If this is\n not implemented or returns ``NotImplemented``, ``y.__rop__(x)`` is\n tried. If this is also not implemented or returns\n ``NotImplemented``, a ``TypeError`` exception is raised. But see\n the following exception:\n\n* Exception to the previous item: if the left operand is an instance\n of a built-in type or a new-style class, and the right operand is an\n instance of a proper subclass of that type or class and overrides\n the base\'s ``__rop__()`` method, the right operand\'s ``__rop__()``\n method is tried *before* the left operand\'s ``__op__()`` method.\n\n This is done so that a subclass can completely override binary\n operators. Otherwise, the left operand\'s ``__op__()`` method would\n always accept the right operand: when an instance of a given class\n is expected, an instance of a subclass of that class is always\n acceptable.\n\n* When either operand type defines a coercion, this coercion is called\n before that type\'s ``__op__()`` or ``__rop__()`` method is called,\n but no sooner. If the coercion returns an object of a different\n type for the operand whose coercion is invoked, part of the process\n is redone using the new object.\n\n* When an in-place operator (like \'``+=``\') is used, if the left\n operand implements ``__iop__()``, it is invoked without any\n coercion. 
When the operation falls back to ``__op__()`` and/or\n ``__rop__()``, the normal coercion rules apply.\n\n* In ``x + y``, if *x* is a sequence that implements sequence\n concatenation, sequence concatenation is invoked.\n\n* In ``x * y``, if one operator is a sequence that implements sequence\n repetition, and the other is an integer (``int`` or ``long``),\n sequence repetition is invoked.\n\n* Rich comparisons (implemented by methods ``__eq__()`` and so on)\n never use coercion. Three-way comparison (implemented by\n ``__cmp__()``) does use coercion under the same conditions as other\n binary operations use it.\n\n* In the current implementation, the built-in numeric types ``int``,\n ``long``, ``float``, and ``complex`` do not use coercion. All these\n types implement a ``__coerce__()`` method, for use by the built-in\n ``coerce()`` function.\n\n Changed in version 2.7.\n\n\nWith Statement Context Managers\n===============================\n\nNew in version 2.5.\n\nA *context manager* is an object that defines the runtime context to\nbe established when executing a ``with`` statement. The context\nmanager handles the entry into, and the exit from, the desired runtime\ncontext for the execution of the block of code. Context managers are\nnormally invoked using the ``with`` statement (described in section\n*The with statement*), but can also be used by directly invoking their\nmethods.\n\nTypical uses of context managers include saving and restoring various\nkinds of global state, locking and unlocking resources, closing opened\nfiles, etc.\n\nFor more information on context managers, see *Context Manager Types*.\n\nobject.__enter__(self)\n\n Enter the runtime context related to this object. The ``with``\n statement will bind this method\'s return value to the target(s)\n specified in the ``as`` clause of the statement, if any.\n\nobject.__exit__(self, exc_type, exc_value, traceback)\n\n Exit the runtime context related to this object. The parameters\n describe the exception that caused the context to be exited. If the\n context was exited without an exception, all three arguments will\n be ``None``.\n\n If an exception is supplied, and the method wishes to suppress the\n exception (i.e., prevent it from being propagated), it should\n return a true value. Otherwise, the exception will be processed\n normally upon exit from this method.\n\n Note that ``__exit__()`` methods should not reraise the passed-in\n exception; this is the caller\'s responsibility.\n\nSee also:\n\n **PEP 0343** - The "with" statement\n The specification, background, and examples for the Python\n ``with`` statement.\n\n\nSpecial method lookup for old-style classes\n===========================================\n\nFor old-style classes, special methods are always looked up in exactly\nthe same way as any other method or attribute. This is the case\nregardless of whether the method is being looked up explicitly as in\n``x.__getitem__(i)`` or implicitly as in ``x[i]``.\n\nThis behaviour means that special methods may exhibit different\nbehaviour for different instances of a single old-style class if the\nappropriate special attributes are set differently:\n\n >>> class C:\n ... 
pass\n ...\n >>> c1 = C()\n >>> c2 = C()\n >>> c1.__len__ = lambda: 5\n >>> c2.__len__ = lambda: 9\n >>> len(c1)\n 5\n >>> len(c2)\n 9\n\n\nSpecial method lookup for new-style classes\n===========================================\n\nFor new-style classes, implicit invocations of special methods are\nonly guaranteed to work correctly if defined on an object\'s type, not\nin the object\'s instance dictionary. That behaviour is the reason why\nthe following code raises an exception (unlike the equivalent example\nwith old-style classes):\n\n >>> class C(object):\n ... pass\n ...\n >>> c = C()\n >>> c.__len__ = lambda: 5\n >>> len(c)\n Traceback (most recent call last):\n File "", line 1, in \n TypeError: object of type \'C\' has no len()\n\nThe rationale behind this behaviour lies with a number of special\nmethods such as ``__hash__()`` and ``__repr__()`` that are implemented\nby all objects, including type objects. If the implicit lookup of\nthese methods used the conventional lookup process, they would fail\nwhen invoked on the type object itself:\n\n >>> 1 .__hash__() == hash(1)\n True\n >>> int.__hash__() == hash(int)\n Traceback (most recent call last):\n File "", line 1, in \n TypeError: descriptor \'__hash__\' of \'int\' object needs an argument\n\nIncorrectly attempting to invoke an unbound method of a class in this\nway is sometimes referred to as \'metaclass confusion\', and is avoided\nby bypassing the instance when looking up special methods:\n\n >>> type(1).__hash__(1) == hash(1)\n True\n >>> type(int).__hash__(int) == hash(int)\n True\n\nIn addition to bypassing any instance attributes in the interest of\ncorrectness, implicit special method lookup generally also bypasses\nthe ``__getattribute__()`` method even of the object\'s metaclass:\n\n >>> class Meta(type):\n ... def __getattribute__(*args):\n ... print "Metaclass getattribute invoked"\n ... return type.__getattribute__(*args)\n ...\n >>> class C(object):\n ... __metaclass__ = Meta\n ... def __len__(self):\n ... return 10\n ... def __getattribute__(*args):\n ... print "Class getattribute invoked"\n ... return object.__getattribute__(*args)\n ...\n >>> c = C()\n >>> c.__len__() # Explicit lookup via instance\n Class getattribute invoked\n 10\n >>> type(c).__len__(c) # Explicit lookup via type\n Metaclass getattribute invoked\n 10\n >>> len(c) # Implicit lookup\n 10\n\nBypassing the ``__getattribute__()`` machinery in this fashion\nprovides significant scope for speed optimisations within the\ninterpreter, at the cost of some flexibility in the handling of\nspecial methods (the special method *must* be set on the class object\nitself in order to be consistently invoked by the interpreter).\n\n-[ Footnotes ]-\n\n[1] It *is* possible in some cases to change an object\'s type, under\n certain controlled conditions. It generally isn\'t a good idea\n though, since it can lead to some very strange behaviour if it is\n handled incorrectly.\n\n[2] For operands of the same type, it is assumed that if the non-\n reflected method (such as ``__add__()``) fails the operation is\n not supported, which is why the reflected method is not called.\n', + 'specialnames': u'\nSpecial method names\n********************\n\nA class can implement certain operations that are invoked by special\nsyntax (such as arithmetic operations or subscripting and slicing) by\ndefining methods with special names. This is Python\'s approach to\n*operator overloading*, allowing classes to define their own behavior\nwith respect to language operators. 
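To illustrate the ``__enter__()``/``__exit__()`` protocol documented in the With Statement Context Managers section above, here is a hypothetical ``Suppress`` context manager (the name and behaviour are invented for this sketch):

   class Suppress(object):
       """Suppress exceptions of a given type inside a with-block."""
       def __init__(self, exc_type):
           self.exc_type = exc_type
       def __enter__(self):
           return self               # bound to the 'as' target, if any
       def __exit__(self, exc_type, exc_value, traceback):
           # A true return value tells the interpreter to swallow the
           # exception that was raised inside the block.
           return exc_type is not None and issubclass(exc_type, self.exc_type)

   with Suppress(KeyError):
       {}['missing']                 # raised, then suppressed by __exit__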
For instance, if a class defines\na method named ``__getitem__()``, and ``x`` is an instance of this\nclass, then ``x[i]`` is roughly equivalent to ``x.__getitem__(i)`` for\nold-style classes and ``type(x).__getitem__(x, i)`` for new-style\nclasses. Except where mentioned, attempts to execute an operation\nraise an exception when no appropriate method is defined (typically\n``AttributeError`` or ``TypeError``).\n\nWhen implementing a class that emulates any built-in type, it is\nimportant that the emulation only be implemented to the degree that it\nmakes sense for the object being modelled. For example, some\nsequences may work well with retrieval of individual elements, but\nextracting a slice may not make sense. (One example of this is the\n``NodeList`` interface in the W3C\'s Document Object Model.)\n\n\nBasic customization\n===================\n\nobject.__new__(cls[, ...])\n\n Called to create a new instance of class *cls*. ``__new__()`` is a\n static method (special-cased so you need not declare it as such)\n that takes the class of which an instance was requested as its\n first argument. The remaining arguments are those passed to the\n object constructor expression (the call to the class). The return\n value of ``__new__()`` should be the new object instance (usually\n an instance of *cls*).\n\n Typical implementations create a new instance of the class by\n invoking the superclass\'s ``__new__()`` method using\n ``super(currentclass, cls).__new__(cls[, ...])`` with appropriate\n arguments and then modifying the newly-created instance as\n necessary before returning it.\n\n If ``__new__()`` returns an instance of *cls*, then the new\n instance\'s ``__init__()`` method will be invoked like\n ``__init__(self[, ...])``, where *self* is the new instance and the\n remaining arguments are the same as were passed to ``__new__()``.\n\n If ``__new__()`` does not return an instance of *cls*, then the new\n instance\'s ``__init__()`` method will not be invoked.\n\n ``__new__()`` is intended mainly to allow subclasses of immutable\n types (like int, str, or tuple) to customize instance creation. It\n is also commonly overridden in custom metaclasses in order to\n customize class creation.\n\nobject.__init__(self[, ...])\n\n Called when the instance is created. The arguments are those\n passed to the class constructor expression. If a base class has an\n ``__init__()`` method, the derived class\'s ``__init__()`` method,\n if any, must explicitly call it to ensure proper initialization of\n the base class part of the instance; for example:\n ``BaseClass.__init__(self, [args...])``. As a special constraint\n on constructors, no value may be returned; doing so will cause a\n ``TypeError`` to be raised at runtime.\n\nobject.__del__(self)\n\n Called when the instance is about to be destroyed. This is also\n called a destructor. If a base class has a ``__del__()`` method,\n the derived class\'s ``__del__()`` method, if any, must explicitly\n call it to ensure proper deletion of the base class part of the\n instance. Note that it is possible (though not recommended!) for\n the ``__del__()`` method to postpone destruction of the instance by\n creating a new reference to it. It may then be called at a later\n time when this new reference is deleted. 
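A small sketch of the ``__new__()``/``__init__()`` split described above, using an invented ``Inches`` subclass of the immutable ``float`` type (illustrative only):

   class Inches(float):
       def __new__(cls, value):
           # The superclass __new__() actually creates the immutable instance.
           return super(Inches, cls).__new__(cls, float(value))
       def __init__(self, value):
           # __init__() is then called with the same arguments.
           self.unit = 'in'

   i = Inches(3)
   assert float(i) == 3.0 and i.unit == 'in'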
It is not guaranteed that\n ``__del__()`` methods are called for objects that still exist when\n the interpreter exits.\n\n Note: ``del x`` doesn\'t directly call ``x.__del__()`` --- the former\n decrements the reference count for ``x`` by one, and the latter\n is only called when ``x``\'s reference count reaches zero. Some\n common situations that may prevent the reference count of an\n object from going to zero include: circular references between\n objects (e.g., a doubly-linked list or a tree data structure with\n parent and child pointers); a reference to the object on the\n stack frame of a function that caught an exception (the traceback\n stored in ``sys.exc_traceback`` keeps the stack frame alive); or\n a reference to the object on the stack frame that raised an\n unhandled exception in interactive mode (the traceback stored in\n ``sys.last_traceback`` keeps the stack frame alive). The first\n situation can only be remedied by explicitly breaking the cycles;\n the latter two situations can be resolved by storing ``None`` in\n ``sys.exc_traceback`` or ``sys.last_traceback``. Circular\n references which are garbage are detected when the option cycle\n detector is enabled (it\'s on by default), but can only be cleaned\n up if there are no Python-level ``__del__()`` methods involved.\n Refer to the documentation for the ``gc`` module for more\n information about how ``__del__()`` methods are handled by the\n cycle detector, particularly the description of the ``garbage``\n value.\n\n Warning: Due to the precarious circumstances under which ``__del__()``\n methods are invoked, exceptions that occur during their execution\n are ignored, and a warning is printed to ``sys.stderr`` instead.\n Also, when ``__del__()`` is invoked in response to a module being\n deleted (e.g., when execution of the program is done), other\n globals referenced by the ``__del__()`` method may already have\n been deleted or in the process of being torn down (e.g. the\n import machinery shutting down). For this reason, ``__del__()``\n methods should do the absolute minimum needed to maintain\n external invariants. Starting with version 1.5, Python\n guarantees that globals whose name begins with a single\n underscore are deleted from their module before other globals are\n deleted; if no other references to such globals exist, this may\n help in assuring that imported modules are still available at the\n time when the ``__del__()`` method is called.\n\nobject.__repr__(self)\n\n Called by the ``repr()`` built-in function and by string\n conversions (reverse quotes) to compute the "official" string\n representation of an object. If at all possible, this should look\n like a valid Python expression that could be used to recreate an\n object with the same value (given an appropriate environment). If\n this is not possible, a string of the form ``<...some useful\n description...>`` should be returned. The return value must be a\n string object. If a class defines ``__repr__()`` but not\n ``__str__()``, then ``__repr__()`` is also used when an "informal"\n string representation of instances of that class is required.\n\n This is typically used for debugging, so it is important that the\n representation is information-rich and unambiguous.\n\nobject.__str__(self)\n\n Called by the ``str()`` built-in function and by the ``print``\n statement to compute the "informal" string representation of an\n object. 
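As a rough example of the ``__repr__()``/``__str__()`` distinction just described (the ``Point`` class is invented for this sketch):

   class Point(object):
       def __init__(self, x, y):
           self.x, self.y = x, y
       def __repr__(self):
           # Aims to look like an expression that recreates the object.
           return 'Point(%r, %r)' % (self.x, self.y)
       def __str__(self):
           # The "informal", more readable form used by str() and print.
           return '(%s, %s)' % (self.x, self.y)

   p = Point(1, 2)
   assert repr(p) == 'Point(1, 2)'
   assert str(p) == '(1, 2)'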
This differs from ``__repr__()`` in that it does not have\n to be a valid Python expression: a more convenient or concise\n representation may be used instead. The return value must be a\n string object.\n\nobject.__lt__(self, other)\nobject.__le__(self, other)\nobject.__eq__(self, other)\nobject.__ne__(self, other)\nobject.__gt__(self, other)\nobject.__ge__(self, other)\n\n New in version 2.1.\n\n These are the so-called "rich comparison" methods, and are called\n for comparison operators in preference to ``__cmp__()`` below. The\n correspondence between operator symbols and method names is as\n follows: ``xy`` call ``x.__ne__(y)``, ``x>y`` calls ``x.__gt__(y)``, and\n ``x>=y`` calls ``x.__ge__(y)``.\n\n A rich comparison method may return the singleton\n ``NotImplemented`` if it does not implement the operation for a\n given pair of arguments. By convention, ``False`` and ``True`` are\n returned for a successful comparison. However, these methods can\n return any value, so if the comparison operator is used in a\n Boolean context (e.g., in the condition of an ``if`` statement),\n Python will call ``bool()`` on the value to determine if the result\n is true or false.\n\n There are no implied relationships among the comparison operators.\n The truth of ``x==y`` does not imply that ``x!=y`` is false.\n Accordingly, when defining ``__eq__()``, one should also define\n ``__ne__()`` so that the operators will behave as expected. See\n the paragraph on ``__hash__()`` for some important notes on\n creating *hashable* objects which support custom comparison\n operations and are usable as dictionary keys.\n\n There are no swapped-argument versions of these methods (to be used\n when the left argument does not support the operation but the right\n argument does); rather, ``__lt__()`` and ``__gt__()`` are each\n other\'s reflection, ``__le__()`` and ``__ge__()`` are each other\'s\n reflection, and ``__eq__()`` and ``__ne__()`` are their own\n reflection.\n\n Arguments to rich comparison methods are never coerced.\n\n To automatically generate ordering operations from a single root\n operation, see ``functools.total_ordering()``.\n\nobject.__cmp__(self, other)\n\n Called by comparison operations if rich comparison (see above) is\n not defined. Should return a negative integer if ``self < other``,\n zero if ``self == other``, a positive integer if ``self > other``.\n If no ``__cmp__()``, ``__eq__()`` or ``__ne__()`` operation is\n defined, class instances are compared by object identity\n ("address"). See also the description of ``__hash__()`` for some\n important notes on creating *hashable* objects which support custom\n comparison operations and are usable as dictionary keys. (Note: the\n restriction that exceptions are not propagated by ``__cmp__()`` has\n been removed since Python 1.5.)\n\nobject.__rcmp__(self, other)\n\n Changed in version 2.1: No longer supported.\n\nobject.__hash__(self)\n\n Called by built-in function ``hash()`` and for operations on\n members of hashed collections including ``set``, ``frozenset``, and\n ``dict``. ``__hash__()`` should return an integer. The only\n required property is that objects which compare equal have the same\n hash value; it is advised to somehow mix together (e.g. 
using\n exclusive or) the hash values for the components of the object that\n also play a part in comparison of objects.\n\n If a class does not define a ``__cmp__()`` or ``__eq__()`` method\n it should not define a ``__hash__()`` operation either; if it\n defines ``__cmp__()`` or ``__eq__()`` but not ``__hash__()``, its\n instances will not be usable in hashed collections. If a class\n defines mutable objects and implements a ``__cmp__()`` or\n ``__eq__()`` method, it should not implement ``__hash__()``, since\n hashable collection implementations require that a object\'s hash\n value is immutable (if the object\'s hash value changes, it will be\n in the wrong hash bucket).\n\n User-defined classes have ``__cmp__()`` and ``__hash__()`` methods\n by default; with them, all objects compare unequal (except with\n themselves) and ``x.__hash__()`` returns ``id(x)``.\n\n Classes which inherit a ``__hash__()`` method from a parent class\n but change the meaning of ``__cmp__()`` or ``__eq__()`` such that\n the hash value returned is no longer appropriate (e.g. by switching\n to a value-based concept of equality instead of the default\n identity based equality) can explicitly flag themselves as being\n unhashable by setting ``__hash__ = None`` in the class definition.\n Doing so means that not only will instances of the class raise an\n appropriate ``TypeError`` when a program attempts to retrieve their\n hash value, but they will also be correctly identified as\n unhashable when checking ``isinstance(obj, collections.Hashable)``\n (unlike classes which define their own ``__hash__()`` to explicitly\n raise ``TypeError``).\n\n Changed in version 2.5: ``__hash__()`` may now also return a long\n integer object; the 32-bit integer is then derived from the hash of\n that object.\n\n Changed in version 2.6: ``__hash__`` may now be set to ``None`` to\n explicitly flag instances of a class as unhashable.\n\nobject.__nonzero__(self)\n\n Called to implement truth value testing and the built-in operation\n ``bool()``; should return ``False`` or ``True``, or their integer\n equivalents ``0`` or ``1``. When this method is not defined,\n ``__len__()`` is called, if it is defined, and the object is\n considered true if its result is nonzero. If a class defines\n neither ``__len__()`` nor ``__nonzero__()``, all its instances are\n considered true.\n\nobject.__unicode__(self)\n\n Called to implement ``unicode()`` built-in; should return a Unicode\n object. When this method is not defined, string conversion is\n attempted, and the result of string conversion is converted to\n Unicode using the system default encoding.\n\n\nCustomizing attribute access\n============================\n\nThe following methods can be defined to customize the meaning of\nattribute access (use of, assignment to, or deletion of ``x.name``)\nfor class instances.\n\nobject.__getattr__(self, name)\n\n Called when an attribute lookup has not found the attribute in the\n usual places (i.e. it is not an instance attribute nor is it found\n in the class tree for ``self``). ``name`` is the attribute name.\n This method should return the (computed) attribute value or raise\n an ``AttributeError`` exception.\n\n Note that if the attribute is found through the normal mechanism,\n ``__getattr__()`` is not called. (This is an intentional asymmetry\n between ``__getattr__()`` and ``__setattr__()``.) This is done both\n for efficiency reasons and because otherwise ``__getattr__()``\n would have no way to access other attributes of the instance. 
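A sketch of the ``__eq__()``/``__ne__()``/``__hash__()`` pairing recommended above; the ``Color`` class is invented, and the example assumes Python 2.7 or later for the set literal:

   class Color(object):
       def __init__(self, name):
           self.name = name
       def __eq__(self, other):
           if not isinstance(other, Color):
               return NotImplemented
           return self.name == other.name
       def __ne__(self, other):
           result = self.__eq__(other)
           return result if result is NotImplemented else not result
       def __hash__(self):
           # Hash the same component that __eq__() compares, so that
           # objects which compare equal also hash equal.
           return hash(self.name)

   assert Color('red') == Color('red')
   assert len({Color('red'), Color('red')}) == 1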
Note\n that at least for instance variables, you can fake total control by\n not inserting any values in the instance attribute dictionary (but\n instead inserting them in another object). See the\n ``__getattribute__()`` method below for a way to actually get total\n control in new-style classes.\n\nobject.__setattr__(self, name, value)\n\n Called when an attribute assignment is attempted. This is called\n instead of the normal mechanism (i.e. store the value in the\n instance dictionary). *name* is the attribute name, *value* is the\n value to be assigned to it.\n\n If ``__setattr__()`` wants to assign to an instance attribute, it\n should not simply execute ``self.name = value`` --- this would\n cause a recursive call to itself. Instead, it should insert the\n value in the dictionary of instance attributes, e.g.,\n ``self.__dict__[name] = value``. For new-style classes, rather\n than accessing the instance dictionary, it should call the base\n class method with the same name, for example,\n ``object.__setattr__(self, name, value)``.\n\nobject.__delattr__(self, name)\n\n Like ``__setattr__()`` but for attribute deletion instead of\n assignment. This should only be implemented if ``del obj.name`` is\n meaningful for the object.\n\n\nMore attribute access for new-style classes\n-------------------------------------------\n\nThe following methods only apply to new-style classes.\n\nobject.__getattribute__(self, name)\n\n Called unconditionally to implement attribute accesses for\n instances of the class. If the class also defines\n ``__getattr__()``, the latter will not be called unless\n ``__getattribute__()`` either calls it explicitly or raises an\n ``AttributeError``. This method should return the (computed)\n attribute value or raise an ``AttributeError`` exception. In order\n to avoid infinite recursion in this method, its implementation\n should always call the base class method with the same name to\n access any attributes it needs, for example,\n ``object.__getattribute__(self, name)``.\n\n Note: This method may still be bypassed when looking up special methods\n as the result of implicit invocation via language syntax or\n built-in functions. See *Special method lookup for new-style\n classes*.\n\n\nImplementing Descriptors\n------------------------\n\nThe following methods only apply when an instance of the class\ncontaining the method (a so-called *descriptor* class) appears in an\n*owner* class (the descriptor must be in either the owner\'s class\ndictionary or in the class dictionary for one of its parents). In the\nexamples below, "the attribute" refers to the attribute whose name is\nthe key of the property in the owner class\' ``__dict__``.\n\nobject.__get__(self, instance, owner)\n\n Called to get the attribute of the owner class (class attribute\n access) or of an instance of that class (instance attribute\n access). *owner* is always the owner class, while *instance* is the\n instance that the attribute was accessed through, or ``None`` when\n the attribute is accessed through the *owner*. 
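A minimal sketch of a new-style ``__setattr__()`` that delegates to ``object.__setattr__()`` as recommended above (the ``Validated`` class and its rule are invented):

   class Validated(object):
       def __setattr__(self, name, value):
           if isinstance(value, (int, float)) and value < 0:
               raise ValueError('%s must be non-negative' % name)
           # Delegate to the base class rather than assigning through
           # self.name, which would recurse back into this method.
           object.__setattr__(self, name, value)

   v = Validated()
   v.size = 3
   assert v.size == 3
   try:
       v.size = -1
   except ValueError:
       pass                          # rejected as expected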
This method should\n return the (computed) attribute value or raise an\n ``AttributeError`` exception.\n\nobject.__set__(self, instance, value)\n\n Called to set the attribute on an instance *instance* of the owner\n class to a new value, *value*.\n\nobject.__delete__(self, instance)\n\n Called to delete the attribute on an instance *instance* of the\n owner class.\n\n\nInvoking Descriptors\n--------------------\n\nIn general, a descriptor is an object attribute with "binding\nbehavior", one whose attribute access has been overridden by methods\nin the descriptor protocol: ``__get__()``, ``__set__()``, and\n``__delete__()``. If any of those methods are defined for an object,\nit is said to be a descriptor.\n\nThe default behavior for attribute access is to get, set, or delete\nthe attribute from an object\'s dictionary. For instance, ``a.x`` has a\nlookup chain starting with ``a.__dict__[\'x\']``, then\n``type(a).__dict__[\'x\']``, and continuing through the base classes of\n``type(a)`` excluding metaclasses.\n\nHowever, if the looked-up value is an object defining one of the\ndescriptor methods, then Python may override the default behavior and\ninvoke the descriptor method instead. Where this occurs in the\nprecedence chain depends on which descriptor methods were defined and\nhow they were called. Note that descriptors are only invoked for new\nstyle objects or classes (ones that subclass ``object()`` or\n``type()``).\n\nThe starting point for descriptor invocation is a binding, ``a.x``.\nHow the arguments are assembled depends on ``a``:\n\nDirect Call\n The simplest and least common call is when user code directly\n invokes a descriptor method: ``x.__get__(a)``.\n\nInstance Binding\n If binding to a new-style object instance, ``a.x`` is transformed\n into the call: ``type(a).__dict__[\'x\'].__get__(a, type(a))``.\n\nClass Binding\n If binding to a new-style class, ``A.x`` is transformed into the\n call: ``A.__dict__[\'x\'].__get__(None, A)``.\n\nSuper Binding\n If ``a`` is an instance of ``super``, then the binding ``super(B,\n obj).m()`` searches ``obj.__class__.__mro__`` for the base class\n ``A`` immediately preceding ``B`` and then invokes the descriptor\n with the call: ``A.__dict__[\'m\'].__get__(obj, obj.__class__)``.\n\nFor instance bindings, the precedence of descriptor invocation depends\non the which descriptor methods are defined. A descriptor can define\nany combination of ``__get__()``, ``__set__()`` and ``__delete__()``.\nIf it does not define ``__get__()``, then accessing the attribute will\nreturn the descriptor object itself unless there is a value in the\nobject\'s instance dictionary. If the descriptor defines ``__set__()``\nand/or ``__delete__()``, it is a data descriptor; if it defines\nneither, it is a non-data descriptor. Normally, data descriptors\ndefine both ``__get__()`` and ``__set__()``, while non-data\ndescriptors have just the ``__get__()`` method. Data descriptors with\n``__set__()`` and ``__get__()`` defined always override a redefinition\nin an instance dictionary. In contrast, non-data descriptors can be\noverridden by instances.\n\nPython methods (including ``staticmethod()`` and ``classmethod()``)\nare implemented as non-data descriptors. Accordingly, instances can\nredefine and override methods. 
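A small data descriptor along the lines described above; ``Positive`` and ``Square`` are invented names, and the value is stored in the instance ``__dict__`` under the descriptor's key:

   class Positive(object):
       """Data descriptor: defines both __get__() and __set__()."""
       def __init__(self, name):
           self.name = name
       def __get__(self, instance, owner):
           if instance is None:
               return self           # accessed on the owner class itself
           return instance.__dict__[self.name]
       def __set__(self, instance, value):
           if value <= 0:
               raise ValueError('%s must be positive' % self.name)
           instance.__dict__[self.name] = value

   class Square(object):
       side = Positive('side')       # the descriptor lives on the class

   s = Square()
   s.side = 4                        # Positive.__set__() runs
   assert s.side == 4                # Positive.__get__() runs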
This allows individual instances to\nacquire behaviors that differ from other instances of the same class.\n\nThe ``property()`` function is implemented as a data descriptor.\nAccordingly, instances cannot override the behavior of a property.\n\n\n__slots__\n---------\n\nBy default, instances of both old and new-style classes have a\ndictionary for attribute storage. This wastes space for objects\nhaving very few instance variables. The space consumption can become\nacute when creating large numbers of instances.\n\nThe default can be overridden by defining *__slots__* in a new-style\nclass definition. The *__slots__* declaration takes a sequence of\ninstance variables and reserves just enough space in each instance to\nhold a value for each variable. Space is saved because *__dict__* is\nnot created for each instance.\n\n__slots__\n\n This class variable can be assigned a string, iterable, or sequence\n of strings with variable names used by instances. If defined in a\n new-style class, *__slots__* reserves space for the declared\n variables and prevents the automatic creation of *__dict__* and\n *__weakref__* for each instance.\n\n New in version 2.2.\n\nNotes on using *__slots__*\n\n* When inheriting from a class without *__slots__*, the *__dict__*\n attribute of that class will always be accessible, so a *__slots__*\n definition in the subclass is meaningless.\n\n* Without a *__dict__* variable, instances cannot be assigned new\n variables not listed in the *__slots__* definition. Attempts to\n assign to an unlisted variable name raises ``AttributeError``. If\n dynamic assignment of new variables is desired, then add\n ``\'__dict__\'`` to the sequence of strings in the *__slots__*\n declaration.\n\n Changed in version 2.3: Previously, adding ``\'__dict__\'`` to the\n *__slots__* declaration would not enable the assignment of new\n attributes not specifically listed in the sequence of instance\n variable names.\n\n* Without a *__weakref__* variable for each instance, classes defining\n *__slots__* do not support weak references to its instances. If weak\n reference support is needed, then add ``\'__weakref__\'`` to the\n sequence of strings in the *__slots__* declaration.\n\n Changed in version 2.3: Previously, adding ``\'__weakref__\'`` to the\n *__slots__* declaration would not enable support for weak\n references.\n\n* *__slots__* are implemented at the class level by creating\n descriptors (*Implementing Descriptors*) for each variable name. As\n a result, class attributes cannot be used to set default values for\n instance variables defined by *__slots__*; otherwise, the class\n attribute would overwrite the descriptor assignment.\n\n* The action of a *__slots__* declaration is limited to the class\n where it is defined. As a result, subclasses will have a *__dict__*\n unless they also define *__slots__* (which must only contain names\n of any *additional* slots).\n\n* If a class defines a slot also defined in a base class, the instance\n variable defined by the base class slot is inaccessible (except by\n retrieving its descriptor directly from the base class). This\n renders the meaning of the program undefined. In the future, a\n check may be added to prevent this.\n\n* Nonempty *__slots__* does not work for classes derived from\n "variable-length" built-in types such as ``long``, ``str`` and\n ``tuple``.\n\n* Any non-string iterable may be assigned to *__slots__*. 
Mappings may\n also be used; however, in the future, special meaning may be\n assigned to the values corresponding to each key.\n\n* *__class__* assignment works only if both classes have the same\n *__slots__*.\n\n Changed in version 2.6: Previously, *__class__* assignment raised an\n error if either new or old class had *__slots__*.\n\n\nCustomizing class creation\n==========================\n\nBy default, new-style classes are constructed using ``type()``. A\nclass definition is read into a separate namespace and the value of\nclass name is bound to the result of ``type(name, bases, dict)``.\n\nWhen the class definition is read, if *__metaclass__* is defined then\nthe callable assigned to it will be called instead of ``type()``. This\nallows classes or functions to be written which monitor or alter the\nclass creation process:\n\n* Modifying the class dictionary prior to the class being created.\n\n* Returning an instance of another class -- essentially performing the\n role of a factory function.\n\nThese steps will have to be performed in the metaclass\'s ``__new__()``\nmethod -- ``type.__new__()`` can then be called from this method to\ncreate a class with different properties. This example adds a new\nelement to the class dictionary before creating the class:\n\n class metacls(type):\n def __new__(mcs, name, bases, dict):\n dict[\'foo\'] = \'metacls was here\'\n return type.__new__(mcs, name, bases, dict)\n\nYou can of course also override other class methods (or add new\nmethods); for example defining a custom ``__call__()`` method in the\nmetaclass allows custom behavior when the class is called, e.g. not\nalways creating a new instance.\n\n__metaclass__\n\n This variable can be any callable accepting arguments for ``name``,\n ``bases``, and ``dict``. Upon class creation, the callable is used\n instead of the built-in ``type()``.\n\n New in version 2.2.\n\nThe appropriate metaclass is determined by the following precedence\nrules:\n\n* If ``dict[\'__metaclass__\']`` exists, it is used.\n\n* Otherwise, if there is at least one base class, its metaclass is\n used (this looks for a *__class__* attribute first and if not found,\n uses its type).\n\n* Otherwise, if a global variable named __metaclass__ exists, it is\n used.\n\n* Otherwise, the old-style, classic metaclass (types.ClassType) is\n used.\n\nThe potential uses for metaclasses are boundless. Some ideas that have\nbeen explored including logging, interface checking, automatic\ndelegation, automatic property creation, proxies, frameworks, and\nautomatic resource locking/synchronization.\n\n\nCustomizing instance and subclass checks\n========================================\n\nNew in version 2.6.\n\nThe following methods are used to override the default behavior of the\n``isinstance()`` and ``issubclass()`` built-in functions.\n\nIn particular, the metaclass ``abc.ABCMeta`` implements these methods\nin order to allow the addition of Abstract Base Classes (ABCs) as\n"virtual base classes" to any class or type (including built-in\ntypes), including other ABCs.\n\nclass.__instancecheck__(self, instance)\n\n Return true if *instance* should be considered a (direct or\n indirect) instance of *class*. If defined, called to implement\n ``isinstance(instance, class)``.\n\nclass.__subclasscheck__(self, subclass)\n\n Return true if *subclass* should be considered a (direct or\n indirect) subclass of *class*. 
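Following the metaclass example in the text, a hypothetical ``TaggingMeta`` could be hooked in through the Python 2 ``__metaclass__`` attribute like this (names invented for the sketch):

   class TaggingMeta(type):
       def __new__(mcs, name, bases, namespace):
           # Modify the class dictionary before the class is created.
           namespace['created_by'] = 'TaggingMeta'
           return type.__new__(mcs, name, bases, namespace)

   class Plugin(object):
       __metaclass__ = TaggingMeta   # Python 2 spelling, as described above

   assert Plugin.created_by == 'TaggingMeta'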
If defined, called to implement\n ``issubclass(subclass, class)``.\n\nNote that these methods are looked up on the type (metaclass) of a\nclass. They cannot be defined as class methods in the actual class.\nThis is consistent with the lookup of special methods that are called\non instances, only in this case the instance is itself a class.\n\nSee also:\n\n **PEP 3119** - Introducing Abstract Base Classes\n Includes the specification for customizing ``isinstance()`` and\n ``issubclass()`` behavior through ``__instancecheck__()`` and\n ``__subclasscheck__()``, with motivation for this functionality\n in the context of adding Abstract Base Classes (see the ``abc``\n module) to the language.\n\n\nEmulating callable objects\n==========================\n\nobject.__call__(self[, args...])\n\n Called when the instance is "called" as a function; if this method\n is defined, ``x(arg1, arg2, ...)`` is a shorthand for\n ``x.__call__(arg1, arg2, ...)``.\n\n\nEmulating container types\n=========================\n\nThe following methods can be defined to implement container objects.\nContainers usually are sequences (such as lists or tuples) or mappings\n(like dictionaries), but can represent other containers as well. The\nfirst set of methods is used either to emulate a sequence or to\nemulate a mapping; the difference is that for a sequence, the\nallowable keys should be the integers *k* for which ``0 <= k < N``\nwhere *N* is the length of the sequence, or slice objects, which\ndefine a range of items. (For backwards compatibility, the method\n``__getslice__()`` (see below) can also be defined to handle simple,\nbut not extended slices.) It is also recommended that mappings provide\nthe methods ``keys()``, ``values()``, ``items()``, ``has_key()``,\n``get()``, ``clear()``, ``setdefault()``, ``iterkeys()``,\n``itervalues()``, ``iteritems()``, ``pop()``, ``popitem()``,\n``copy()``, and ``update()`` behaving similar to those for Python\'s\nstandard dictionary objects. The ``UserDict`` module provides a\n``DictMixin`` class to help create those methods from a base set of\n``__getitem__()``, ``__setitem__()``, ``__delitem__()``, and\n``keys()``. Mutable sequences should provide methods ``append()``,\n``count()``, ``index()``, ``extend()``, ``insert()``, ``pop()``,\n``remove()``, ``reverse()`` and ``sort()``, like Python standard list\nobjects. Finally, sequence types should implement addition (meaning\nconcatenation) and multiplication (meaning repetition) by defining the\nmethods ``__add__()``, ``__radd__()``, ``__iadd__()``, ``__mul__()``,\n``__rmul__()`` and ``__imul__()`` described below; they should not\ndefine ``__coerce__()`` or other numerical operators. It is\nrecommended that both mappings and sequences implement the\n``__contains__()`` method to allow efficient use of the ``in``\noperator; for mappings, ``in`` should be equivalent of ``has_key()``;\nfor sequences, it should search through the values. It is further\nrecommended that both mappings and sequences implement the\n``__iter__()`` method to allow efficient iteration through the\ncontainer; for mappings, ``__iter__()`` should be the same as\n``iterkeys()``; for sequences, it should iterate through the values.\n\nobject.__len__(self)\n\n Called to implement the built-in function ``len()``. Should return\n the length of the object, an integer ``>=`` 0. 
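The ``__call__()`` shorthand described above can be sketched with an invented ``Adder`` class:

   class Adder(object):
       def __init__(self, amount):
           self.amount = amount
       def __call__(self, value):
           # add3(4) is shorthand for add3.__call__(4)
           return value + self.amount

   add3 = Adder(3)
   assert add3(4) == 7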
Also, an object\n that doesn\'t define a ``__nonzero__()`` method and whose\n ``__len__()`` method returns zero is considered to be false in a\n Boolean context.\n\nobject.__getitem__(self, key)\n\n Called to implement evaluation of ``self[key]``. For sequence\n types, the accepted keys should be integers and slice objects.\n Note that the special interpretation of negative indexes (if the\n class wishes to emulate a sequence type) is up to the\n ``__getitem__()`` method. If *key* is of an inappropriate type,\n ``TypeError`` may be raised; if of a value outside the set of\n indexes for the sequence (after any special interpretation of\n negative values), ``IndexError`` should be raised. For mapping\n types, if *key* is missing (not in the container), ``KeyError``\n should be raised.\n\n Note: ``for`` loops expect that an ``IndexError`` will be raised for\n illegal indexes to allow proper detection of the end of the\n sequence.\n\nobject.__setitem__(self, key, value)\n\n Called to implement assignment to ``self[key]``. Same note as for\n ``__getitem__()``. This should only be implemented for mappings if\n the objects support changes to the values for keys, or if new keys\n can be added, or for sequences if elements can be replaced. The\n same exceptions should be raised for improper *key* values as for\n the ``__getitem__()`` method.\n\nobject.__delitem__(self, key)\n\n Called to implement deletion of ``self[key]``. Same note as for\n ``__getitem__()``. This should only be implemented for mappings if\n the objects support removal of keys, or for sequences if elements\n can be removed from the sequence. The same exceptions should be\n raised for improper *key* values as for the ``__getitem__()``\n method.\n\nobject.__iter__(self)\n\n This method is called when an iterator is required for a container.\n This method should return a new iterator object that can iterate\n over all the objects in the container. For mappings, it should\n iterate over the keys of the container, and should also be made\n available as the method ``iterkeys()``.\n\n Iterator objects also need to implement this method; they are\n required to return themselves. For more information on iterator\n objects, see *Iterator Types*.\n\nobject.__reversed__(self)\n\n Called (if present) by the ``reversed()`` built-in to implement\n reverse iteration. It should return a new iterator object that\n iterates over all the objects in the container in reverse order.\n\n If the ``__reversed__()`` method is not provided, the\n ``reversed()`` built-in will fall back to using the sequence\n protocol (``__len__()`` and ``__getitem__()``). Objects that\n support the sequence protocol should only provide\n ``__reversed__()`` if they can provide an implementation that is\n more efficient than the one provided by ``reversed()``.\n\n New in version 2.6.\n\nThe membership test operators (``in`` and ``not in``) are normally\nimplemented as an iteration through a sequence. However, container\nobjects can supply the following special method with a more efficient\nimplementation, which also does not require the object be a sequence.\n\nobject.__contains__(self, item)\n\n Called to implement membership test operators. Should return true\n if *item* is in *self*, false otherwise. 
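A read-only sequence sketch tying together ``__len__()``, ``__getitem__()`` and the ``IndexError`` convention mentioned above (``Squares`` is an invented example class):

   class Squares(object):
       def __init__(self, n):
           self.n = n
       def __len__(self):
           return self.n
       def __getitem__(self, index):
           if index < 0:
               index += self.n       # customary negative-index handling
           if not 0 <= index < self.n:
               raise IndexError(index)   # lets for-loops detect the end
           return index * index

   sq = Squares(4)
   assert len(sq) == 4
   assert list(sq) == [0, 1, 4, 9]   # iteration via the sequence protocol
   assert 9 in sq                    # 'in' falls back to iteration too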
For mapping objects, this\n should consider the keys of the mapping rather than the values or\n the key-item pairs.\n\n For objects that don\'t define ``__contains__()``, the membership\n test first tries iteration via ``__iter__()``, then the old\n sequence iteration protocol via ``__getitem__()``, see *this\n section in the language reference*.\n\n\nAdditional methods for emulation of sequence types\n==================================================\n\nThe following optional methods can be defined to further emulate\nsequence objects. Immutable sequences methods should at most only\ndefine ``__getslice__()``; mutable sequences might define all three\nmethods.\n\nobject.__getslice__(self, i, j)\n\n Deprecated since version 2.0: Support slice objects as parameters\n to the ``__getitem__()`` method. (However, built-in types in\n CPython currently still implement ``__getslice__()``. Therefore,\n you have to override it in derived classes when implementing\n slicing.)\n\n Called to implement evaluation of ``self[i:j]``. The returned\n object should be of the same type as *self*. Note that missing *i*\n or *j* in the slice expression are replaced by zero or\n ``sys.maxint``, respectively. If negative indexes are used in the\n slice, the length of the sequence is added to that index. If the\n instance does not implement the ``__len__()`` method, an\n ``AttributeError`` is raised. No guarantee is made that indexes\n adjusted this way are not still negative. Indexes which are\n greater than the length of the sequence are not modified. If no\n ``__getslice__()`` is found, a slice object is created instead, and\n passed to ``__getitem__()`` instead.\n\nobject.__setslice__(self, i, j, sequence)\n\n Called to implement assignment to ``self[i:j]``. Same notes for *i*\n and *j* as for ``__getslice__()``.\n\n This method is deprecated. If no ``__setslice__()`` is found, or\n for extended slicing of the form ``self[i:j:k]``, a slice object is\n created, and passed to ``__setitem__()``, instead of\n ``__setslice__()`` being called.\n\nobject.__delslice__(self, i, j)\n\n Called to implement deletion of ``self[i:j]``. Same notes for *i*\n and *j* as for ``__getslice__()``. This method is deprecated. If no\n ``__delslice__()`` is found, or for extended slicing of the form\n ``self[i:j:k]``, a slice object is created, and passed to\n ``__delitem__()``, instead of ``__delslice__()`` being called.\n\nNotice that these methods are only invoked when a single slice with a\nsingle colon is used, and the slice method is available. 
For slice\noperations involving extended slice notation, or in absence of the\nslice methods, ``__getitem__()``, ``__setitem__()`` or\n``__delitem__()`` is called with a slice object as argument.\n\nThe following example demonstrate how to make your program or module\ncompatible with earlier versions of Python (assuming that methods\n``__getitem__()``, ``__setitem__()`` and ``__delitem__()`` support\nslice objects as arguments):\n\n class MyClass:\n ...\n def __getitem__(self, index):\n ...\n def __setitem__(self, index, value):\n ...\n def __delitem__(self, index):\n ...\n\n if sys.version_info < (2, 0):\n # They won\'t be defined if version is at least 2.0 final\n\n def __getslice__(self, i, j):\n return self[max(0, i):max(0, j):]\n def __setslice__(self, i, j, seq):\n self[max(0, i):max(0, j):] = seq\n def __delslice__(self, i, j):\n del self[max(0, i):max(0, j):]\n ...\n\nNote the calls to ``max()``; these are necessary because of the\nhandling of negative indices before the ``__*slice__()`` methods are\ncalled. When negative indexes are used, the ``__*item__()`` methods\nreceive them as provided, but the ``__*slice__()`` methods get a\n"cooked" form of the index values. For each negative index value, the\nlength of the sequence is added to the index before calling the method\n(which may still result in a negative index); this is the customary\nhandling of negative indexes by the built-in sequence types, and the\n``__*item__()`` methods are expected to do this as well. However,\nsince they should already be doing that, negative indexes cannot be\npassed in; they must be constrained to the bounds of the sequence\nbefore being passed to the ``__*item__()`` methods. Calling ``max(0,\ni)`` conveniently returns the proper value.\n\n\nEmulating numeric types\n=======================\n\nThe following methods can be defined to emulate numeric objects.\nMethods corresponding to operations that are not supported by the\nparticular kind of number implemented (e.g., bitwise operations for\nnon-integral numbers) should be left undefined.\n\nobject.__add__(self, other)\nobject.__sub__(self, other)\nobject.__mul__(self, other)\nobject.__floordiv__(self, other)\nobject.__mod__(self, other)\nobject.__divmod__(self, other)\nobject.__pow__(self, other[, modulo])\nobject.__lshift__(self, other)\nobject.__rshift__(self, other)\nobject.__and__(self, other)\nobject.__xor__(self, other)\nobject.__or__(self, other)\n\n These methods are called to implement the binary arithmetic\n operations (``+``, ``-``, ``*``, ``//``, ``%``, ``divmod()``,\n ``pow()``, ``**``, ``<<``, ``>>``, ``&``, ``^``, ``|``). For\n instance, to evaluate the expression ``x + y``, where *x* is an\n instance of a class that has an ``__add__()`` method,\n ``x.__add__(y)`` is called. The ``__divmod__()`` method should be\n the equivalent to using ``__floordiv__()`` and ``__mod__()``; it\n should not be related to ``__truediv__()`` (described below). Note\n that ``__pow__()`` should be defined to accept an optional third\n argument if the ternary version of the built-in ``pow()`` function\n is to be supported.\n\n If one of those methods does not support the operation with the\n supplied arguments, it should return ``NotImplemented``.\n\nobject.__div__(self, other)\nobject.__truediv__(self, other)\n\n The division operator (``/``) is implemented by these methods. The\n ``__truediv__()`` method is used when ``__future__.division`` is in\n effect, otherwise ``__div__()`` is used. 
If only one of these two\n methods is defined, the object will not support division in the\n alternate context; ``TypeError`` will be raised instead.\n\nobject.__radd__(self, other)\nobject.__rsub__(self, other)\nobject.__rmul__(self, other)\nobject.__rdiv__(self, other)\nobject.__rtruediv__(self, other)\nobject.__rfloordiv__(self, other)\nobject.__rmod__(self, other)\nobject.__rdivmod__(self, other)\nobject.__rpow__(self, other)\nobject.__rlshift__(self, other)\nobject.__rrshift__(self, other)\nobject.__rand__(self, other)\nobject.__rxor__(self, other)\nobject.__ror__(self, other)\n\n These methods are called to implement the binary arithmetic\n operations (``+``, ``-``, ``*``, ``/``, ``%``, ``divmod()``,\n ``pow()``, ``**``, ``<<``, ``>>``, ``&``, ``^``, ``|``) with\n reflected (swapped) operands. These functions are only called if\n the left operand does not support the corresponding operation and\n the operands are of different types. [2] For instance, to evaluate\n the expression ``x - y``, where *y* is an instance of a class that\n has an ``__rsub__()`` method, ``y.__rsub__(x)`` is called if\n ``x.__sub__(y)`` returns *NotImplemented*.\n\n Note that ternary ``pow()`` will not try calling ``__rpow__()``\n (the coercion rules would become too complicated).\n\n Note: If the right operand\'s type is a subclass of the left operand\'s\n type and that subclass provides the reflected method for the\n operation, this method will be called before the left operand\'s\n non-reflected method. This behavior allows subclasses to\n override their ancestors\' operations.\n\nobject.__iadd__(self, other)\nobject.__isub__(self, other)\nobject.__imul__(self, other)\nobject.__idiv__(self, other)\nobject.__itruediv__(self, other)\nobject.__ifloordiv__(self, other)\nobject.__imod__(self, other)\nobject.__ipow__(self, other[, modulo])\nobject.__ilshift__(self, other)\nobject.__irshift__(self, other)\nobject.__iand__(self, other)\nobject.__ixor__(self, other)\nobject.__ior__(self, other)\n\n These methods are called to implement the augmented arithmetic\n assignments (``+=``, ``-=``, ``*=``, ``/=``, ``//=``, ``%=``,\n ``**=``, ``<<=``, ``>>=``, ``&=``, ``^=``, ``|=``). These methods\n should attempt to do the operation in-place (modifying *self*) and\n return the result (which could be, but does not have to be,\n *self*). If a specific method is not defined, the augmented\n assignment falls back to the normal methods. For instance, to\n execute the statement ``x += y``, where *x* is an instance of a\n class that has an ``__iadd__()`` method, ``x.__iadd__(y)`` is\n called. If *x* is an instance of a class that does not define a\n ``__iadd__()`` method, ``x.__add__(y)`` and ``y.__radd__(x)`` are\n considered, as with the evaluation of ``x + y``.\n\nobject.__neg__(self)\nobject.__pos__(self)\nobject.__abs__(self)\nobject.__invert__(self)\n\n Called to implement the unary arithmetic operations (``-``, ``+``,\n ``abs()`` and ``~``).\n\nobject.__complex__(self)\nobject.__int__(self)\nobject.__long__(self)\nobject.__float__(self)\n\n Called to implement the built-in functions ``complex()``,\n ``int()``, ``long()``, and ``float()``. Should return a value of\n the appropriate type.\n\nobject.__oct__(self)\nobject.__hex__(self)\n\n Called to implement the built-in functions ``oct()`` and ``hex()``.\n Should return a string value.\n\nobject.__index__(self)\n\n Called to implement ``operator.index()``. Also called whenever\n Python needs an integer object (such as in slicing). 
Must return\n an integer (int or long).\n\n New in version 2.5.\n\nobject.__coerce__(self, other)\n\n Called to implement "mixed-mode" numeric arithmetic. Should either\n return a 2-tuple containing *self* and *other* converted to a\n common numeric type, or ``None`` if conversion is impossible. When\n the common type would be the type of ``other``, it is sufficient to\n return ``None``, since the interpreter will also ask the other\n object to attempt a coercion (but sometimes, if the implementation\n of the other type cannot be changed, it is useful to do the\n conversion to the other type here). A return value of\n ``NotImplemented`` is equivalent to returning ``None``.\n\n\nCoercion rules\n==============\n\nThis section used to document the rules for coercion. As the language\nhas evolved, the coercion rules have become hard to document\nprecisely; documenting what one version of one particular\nimplementation does is undesirable. Instead, here are some informal\nguidelines regarding coercion. In Python 3.0, coercion will not be\nsupported.\n\n* If the left operand of a % operator is a string or Unicode object,\n no coercion takes place and the string formatting operation is\n invoked instead.\n\n* It is no longer recommended to define a coercion operation. Mixed-\n mode operations on types that don\'t define coercion pass the\n original arguments to the operation.\n\n* New-style classes (those derived from ``object``) never invoke the\n ``__coerce__()`` method in response to a binary operator; the only\n time ``__coerce__()`` is invoked is when the built-in function\n ``coerce()`` is called.\n\n* For most intents and purposes, an operator that returns\n ``NotImplemented`` is treated the same as one that is not\n implemented at all.\n\n* Below, ``__op__()`` and ``__rop__()`` are used to signify the\n generic method names corresponding to an operator; ``__iop__()`` is\n used for the corresponding in-place operator. For example, for the\n operator \'``+``\', ``__add__()`` and ``__radd__()`` are used for the\n left and right variant of the binary operator, and ``__iadd__()``\n for the in-place variant.\n\n* For objects *x* and *y*, first ``x.__op__(y)`` is tried. If this is\n not implemented or returns ``NotImplemented``, ``y.__rop__(x)`` is\n tried. If this is also not implemented or returns\n ``NotImplemented``, a ``TypeError`` exception is raised. But see\n the following exception:\n\n* Exception to the previous item: if the left operand is an instance\n of a built-in type or a new-style class, and the right operand is an\n instance of a proper subclass of that type or class and overrides\n the base\'s ``__rop__()`` method, the right operand\'s ``__rop__()``\n method is tried *before* the left operand\'s ``__op__()`` method.\n\n This is done so that a subclass can completely override binary\n operators. Otherwise, the left operand\'s ``__op__()`` method would\n always accept the right operand: when an instance of a given class\n is expected, an instance of a subclass of that class is always\n acceptable.\n\n* When either operand type defines a coercion, this coercion is called\n before that type\'s ``__op__()`` or ``__rop__()`` method is called,\n but no sooner. If the coercion returns an object of a different\n type for the operand whose coercion is invoked, part of the process\n is redone using the new object.\n\n* When an in-place operator (like \'``+=``\') is used, if the left\n operand implements ``__iop__()``, it is invoked without any\n coercion. 
When the operation falls back to ``__op__()`` and/or\n ``__rop__()``, the normal coercion rules apply.\n\n* In ``x + y``, if *x* is a sequence that implements sequence\n concatenation, sequence concatenation is invoked.\n\n* In ``x * y``, if one operator is a sequence that implements sequence\n repetition, and the other is an integer (``int`` or ``long``),\n sequence repetition is invoked.\n\n* Rich comparisons (implemented by methods ``__eq__()`` and so on)\n never use coercion. Three-way comparison (implemented by\n ``__cmp__()``) does use coercion under the same conditions as other\n binary operations use it.\n\n* In the current implementation, the built-in numeric types ``int``,\n ``long``, ``float``, and ``complex`` do not use coercion. All these\n types implement a ``__coerce__()`` method, for use by the built-in\n ``coerce()`` function.\n\n Changed in version 2.7.\n\n\nWith Statement Context Managers\n===============================\n\nNew in version 2.5.\n\nA *context manager* is an object that defines the runtime context to\nbe established when executing a ``with`` statement. The context\nmanager handles the entry into, and the exit from, the desired runtime\ncontext for the execution of the block of code. Context managers are\nnormally invoked using the ``with`` statement (described in section\n*The with statement*), but can also be used by directly invoking their\nmethods.\n\nTypical uses of context managers include saving and restoring various\nkinds of global state, locking and unlocking resources, closing opened\nfiles, etc.\n\nFor more information on context managers, see *Context Manager Types*.\n\nobject.__enter__(self)\n\n Enter the runtime context related to this object. The ``with``\n statement will bind this method\'s return value to the target(s)\n specified in the ``as`` clause of the statement, if any.\n\nobject.__exit__(self, exc_type, exc_value, traceback)\n\n Exit the runtime context related to this object. The parameters\n describe the exception that caused the context to be exited. If the\n context was exited without an exception, all three arguments will\n be ``None``.\n\n If an exception is supplied, and the method wishes to suppress the\n exception (i.e., prevent it from being propagated), it should\n return a true value. Otherwise, the exception will be processed\n normally upon exit from this method.\n\n Note that ``__exit__()`` methods should not reraise the passed-in\n exception; this is the caller\'s responsibility.\n\nSee also:\n\n **PEP 0343** - The "with" statement\n The specification, background, and examples for the Python\n ``with`` statement.\n\n\nSpecial method lookup for old-style classes\n===========================================\n\nFor old-style classes, special methods are always looked up in exactly\nthe same way as any other method or attribute. This is the case\nregardless of whether the method is being looked up explicitly as in\n``x.__getitem__(i)`` or implicitly as in ``x[i]``.\n\nThis behaviour means that special methods may exhibit different\nbehaviour for different instances of a single old-style class if the\nappropriate special attributes are set differently:\n\n >>> class C:\n ... 
pass\n ...\n >>> c1 = C()\n >>> c2 = C()\n >>> c1.__len__ = lambda: 5\n >>> c2.__len__ = lambda: 9\n >>> len(c1)\n 5\n >>> len(c2)\n 9\n\n\nSpecial method lookup for new-style classes\n===========================================\n\nFor new-style classes, implicit invocations of special methods are\nonly guaranteed to work correctly if defined on an object\'s type, not\nin the object\'s instance dictionary. That behaviour is the reason why\nthe following code raises an exception (unlike the equivalent example\nwith old-style classes):\n\n >>> class C(object):\n ... pass\n ...\n >>> c = C()\n >>> c.__len__ = lambda: 5\n >>> len(c)\n Traceback (most recent call last):\n File "", line 1, in \n TypeError: object of type \'C\' has no len()\n\nThe rationale behind this behaviour lies with a number of special\nmethods such as ``__hash__()`` and ``__repr__()`` that are implemented\nby all objects, including type objects. If the implicit lookup of\nthese methods used the conventional lookup process, they would fail\nwhen invoked on the type object itself:\n\n >>> 1 .__hash__() == hash(1)\n True\n >>> int.__hash__() == hash(int)\n Traceback (most recent call last):\n File "", line 1, in \n TypeError: descriptor \'__hash__\' of \'int\' object needs an argument\n\nIncorrectly attempting to invoke an unbound method of a class in this\nway is sometimes referred to as \'metaclass confusion\', and is avoided\nby bypassing the instance when looking up special methods:\n\n >>> type(1).__hash__(1) == hash(1)\n True\n >>> type(int).__hash__(int) == hash(int)\n True\n\nIn addition to bypassing any instance attributes in the interest of\ncorrectness, implicit special method lookup generally also bypasses\nthe ``__getattribute__()`` method even of the object\'s metaclass:\n\n >>> class Meta(type):\n ... def __getattribute__(*args):\n ... print "Metaclass getattribute invoked"\n ... return type.__getattribute__(*args)\n ...\n >>> class C(object):\n ... __metaclass__ = Meta\n ... def __len__(self):\n ... return 10\n ... def __getattribute__(*args):\n ... print "Class getattribute invoked"\n ... return object.__getattribute__(*args)\n ...\n >>> c = C()\n >>> c.__len__() # Explicit lookup via instance\n Class getattribute invoked\n 10\n >>> type(c).__len__(c) # Explicit lookup via type\n Metaclass getattribute invoked\n 10\n >>> len(c) # Implicit lookup\n 10\n\nBypassing the ``__getattribute__()`` machinery in this fashion\nprovides significant scope for speed optimisations within the\ninterpreter, at the cost of some flexibility in the handling of\nspecial methods (the special method *must* be set on the class object\nitself in order to be consistently invoked by the interpreter).\n\n-[ Footnotes ]-\n\n[1] It *is* possible in some cases to change an object\'s type, under\n certain controlled conditions. 
It generally isn\'t a good idea\n though, since it can lead to some very strange behaviour if it is\n handled incorrectly.\n\n[2] For operands of the same type, it is assumed that if the non-\n reflected method (such as ``__add__()``) fails the operation is\n not supported, which is why the reflected method is not called.\n', 'string-conversions': u'\nString conversions\n******************\n\nA string conversion is an expression list enclosed in reverse (a.k.a.\nbackward) quotes:\n\n string_conversion ::= "\'" expression_list "\'"\n\nA string conversion evaluates the contained expression list and\nconverts the resulting object into a string according to rules\nspecific to its type.\n\nIf the object is a string, a number, ``None``, or a tuple, list or\ndictionary containing only objects whose type is one of these, the\nresulting string is a valid Python expression which can be passed to\nthe built-in function ``eval()`` to yield an expression with the same\nvalue (or an approximation, if floating point numbers are involved).\n\n(In particular, converting a string adds quotes around it and converts\n"funny" characters to escape sequences that are safe to print.)\n\nRecursive objects (for example, lists or dictionaries that contain a\nreference to themselves, directly or indirectly) use ``...`` to\nindicate a recursive reference, and the result cannot be passed to\n``eval()`` to get an equal value (``SyntaxError`` will be raised\ninstead).\n\nThe built-in function ``repr()`` performs exactly the same conversion\nin its argument as enclosing it in parentheses and reverse quotes\ndoes. The built-in function ``str()`` performs a similar but more\nuser-friendly conversion.\n', - 'string-methods': u'\nString Methods\n**************\n\nBelow are listed the string methods which both 8-bit strings and\nUnicode objects support.\n\nIn addition, Python\'s strings support the sequence type methods\ndescribed in the *Sequence Types --- str, unicode, list, tuple,\nbuffer, xrange* section. To output formatted strings use template\nstrings or the ``%`` operator described in the *String Formatting\nOperations* section. Also, see the ``re`` module for string functions\nbased on regular expressions.\n\nstr.capitalize()\n\n Return a copy of the string with only its first character\n capitalized.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.center(width[, fillchar])\n\n Return centered in a string of length *width*. Padding is done\n using the specified *fillchar* (default is a space).\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.count(sub[, start[, end]])\n\n Return the number of non-overlapping occurrences of substring *sub*\n in the range [*start*, *end*]. Optional arguments *start* and\n *end* are interpreted as in slice notation.\n\nstr.decode([encoding[, errors]])\n\n Decodes the string using the codec registered for *encoding*.\n *encoding* defaults to the default string encoding. *errors* may\n be given to set a different error handling scheme. The default is\n ``\'strict\'``, meaning that encoding errors raise ``UnicodeError``.\n Other possible values are ``\'ignore\'``, ``\'replace\'`` and any other\n name registered via ``codecs.register_error()``, see section *Codec\n Base Classes*.\n\n New in version 2.2.\n\n Changed in version 2.3: Support for other error handling schemes\n added.\n\n Changed in version 2.7: Support for keyword arguments added.\n\nstr.encode([encoding[, errors]])\n\n Return an encoded version of the string. 
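(Editorial aside, not part of the quoted topics.py diff: a short Python 2.7 session sketching the encode()/decode() round trip and the *errors* argument just described; the sample text is invented.)

    >>> u'caf\xe9'.encode('utf-8')             # unicode -> UTF-8 byte string
    'caf\xc3\xa9'
    >>> 'caf\xc3\xa9'.decode('utf-8')          # byte string -> unicode
    u'caf\xe9'
    >>> u'caf\xe9'.encode('ascii', 'replace')  # non-'strict' error handling
    'caf?'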
Default encoding is the\n current default string encoding. *errors* may be given to set a\n different error handling scheme. The default for *errors* is\n ``\'strict\'``, meaning that encoding errors raise a\n ``UnicodeError``. Other possible values are ``\'ignore\'``,\n ``\'replace\'``, ``\'xmlcharrefreplace\'``, ``\'backslashreplace\'`` and\n any other name registered via ``codecs.register_error()``, see\n section *Codec Base Classes*. For a list of possible encodings, see\n section *Standard Encodings*.\n\n New in version 2.0.\n\n Changed in version 2.3: Support for ``\'xmlcharrefreplace\'`` and\n ``\'backslashreplace\'`` and other error handling schemes added.\n\n Changed in version 2.7: Support for keyword arguments added.\n\nstr.endswith(suffix[, start[, end]])\n\n Return ``True`` if the string ends with the specified *suffix*,\n otherwise return ``False``. *suffix* can also be a tuple of\n suffixes to look for. With optional *start*, test beginning at\n that position. With optional *end*, stop comparing at that\n position.\n\n Changed in version 2.5: Accept tuples as *suffix*.\n\nstr.expandtabs([tabsize])\n\n Return a copy of the string where all tab characters are replaced\n by one or more spaces, depending on the current column and the\n given tab size. The column number is reset to zero after each\n newline occurring in the string. If *tabsize* is not given, a tab\n size of ``8`` characters is assumed. This doesn\'t understand other\n non-printing characters or escape sequences.\n\nstr.find(sub[, start[, end]])\n\n Return the lowest index in the string where substring *sub* is\n found, such that *sub* is contained in the slice ``s[start:end]``.\n Optional arguments *start* and *end* are interpreted as in slice\n notation. Return ``-1`` if *sub* is not found.\n\nstr.format(*args, **kwargs)\n\n Perform a string formatting operation. The string on which this\n method is called can contain literal text or replacement fields\n delimited by braces ``{}``. Each replacement field contains either\n the numeric index of a positional argument, or the name of a\n keyword argument. 
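(Editorial aside, added for this archive rather than taken from the diff: a minimal format() example combining a positional index and a keyword name, as described above; the field names are made up.)

    >>> '{0} is {adj}'.format('PyPy', adj='fast')
    'PyPy is fast'
    >>> '{1}, {0}'.format('world', 'hello')    # fields may be reordered
    'hello, world'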
Returns a copy of the string where each\n replacement field is replaced with the string value of the\n corresponding argument.\n\n >>> "The sum of 1 + 2 is {0}".format(1+2)\n \'The sum of 1 + 2 is 3\'\n\n See *Format String Syntax* for a description of the various\n formatting options that can be specified in format strings.\n\n This method of string formatting is the new standard in Python 3.0,\n and should be preferred to the ``%`` formatting described in\n *String Formatting Operations* in new code.\n\n New in version 2.6.\n\nstr.index(sub[, start[, end]])\n\n Like ``find()``, but raise ``ValueError`` when the substring is not\n found.\n\nstr.isalnum()\n\n Return true if all characters in the string are alphanumeric and\n there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isalpha()\n\n Return true if all characters in the string are alphabetic and\n there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isdigit()\n\n Return true if all characters in the string are digits and there is\n at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.islower()\n\n Return true if all cased characters in the string are lowercase and\n there is at least one cased character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isspace()\n\n Return true if there are only whitespace characters in the string\n and there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.istitle()\n\n Return true if the string is a titlecased string and there is at\n least one character, for example uppercase characters may only\n follow uncased characters and lowercase characters only cased ones.\n Return false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isupper()\n\n Return true if all cased characters in the string are uppercase and\n there is at least one cased character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.join(iterable)\n\n Return a string which is the concatenation of the strings in the\n *iterable* *iterable*. The separator between elements is the\n string providing this method.\n\nstr.ljust(width[, fillchar])\n\n Return the string left justified in a string of length *width*.\n Padding is done using the specified *fillchar* (default is a\n space). The original string is returned if *width* is less than\n ``len(s)``.\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.lower()\n\n Return a copy of the string converted to lowercase.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.lstrip([chars])\n\n Return a copy of the string with leading characters removed. The\n *chars* argument is a string specifying the set of characters to be\n removed. If omitted or ``None``, the *chars* argument defaults to\n removing whitespace. The *chars* argument is not a prefix; rather,\n all combinations of its values are stripped:\n\n >>> \' spacious \'.lstrip()\n \'spacious \'\n >>> \'www.example.com\'.lstrip(\'cmowz.\')\n \'example.com\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.partition(sep)\n\n Split the string at the first occurrence of *sep*, and return a\n 3-tuple containing the part before the separator, the separator\n itself, and the part after the separator. 
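(Editorial aside, not part of the quoted diff: a quick sketch of partition() and its rpartition() counterpart on an invented sample string.)

    >>> 'key=value=more'.partition('=')        # split at the first '='
    ('key', '=', 'value=more')
    >>> 'key=value=more'.rpartition('=')       # split at the last '='
    ('key=value', '=', 'more')
    >>> 'nosep'.partition('=')                 # separator absent
    ('nosep', '', '')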
If the separator is not\n found, return a 3-tuple containing the string itself, followed by\n two empty strings.\n\n New in version 2.5.\n\nstr.replace(old, new[, count])\n\n Return a copy of the string with all occurrences of substring *old*\n replaced by *new*. If the optional argument *count* is given, only\n the first *count* occurrences are replaced.\n\nstr.rfind(sub[, start[, end]])\n\n Return the highest index in the string where substring *sub* is\n found, such that *sub* is contained within ``s[start:end]``.\n Optional arguments *start* and *end* are interpreted as in slice\n notation. Return ``-1`` on failure.\n\nstr.rindex(sub[, start[, end]])\n\n Like ``rfind()`` but raises ``ValueError`` when the substring *sub*\n is not found.\n\nstr.rjust(width[, fillchar])\n\n Return the string right justified in a string of length *width*.\n Padding is done using the specified *fillchar* (default is a\n space). The original string is returned if *width* is less than\n ``len(s)``.\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.rpartition(sep)\n\n Split the string at the last occurrence of *sep*, and return a\n 3-tuple containing the part before the separator, the separator\n itself, and the part after the separator. If the separator is not\n found, return a 3-tuple containing two empty strings, followed by\n the string itself.\n\n New in version 2.5.\n\nstr.rsplit([sep[, maxsplit]])\n\n Return a list of the words in the string, using *sep* as the\n delimiter string. If *maxsplit* is given, at most *maxsplit* splits\n are done, the *rightmost* ones. If *sep* is not specified or\n ``None``, any whitespace string is a separator. Except for\n splitting from the right, ``rsplit()`` behaves like ``split()``\n which is described in detail below.\n\n New in version 2.4.\n\nstr.rstrip([chars])\n\n Return a copy of the string with trailing characters removed. The\n *chars* argument is a string specifying the set of characters to be\n removed. If omitted or ``None``, the *chars* argument defaults to\n removing whitespace. The *chars* argument is not a suffix; rather,\n all combinations of its values are stripped:\n\n >>> \' spacious \'.rstrip()\n \' spacious\'\n >>> \'mississippi\'.rstrip(\'ipz\')\n \'mississ\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.split([sep[, maxsplit]])\n\n Return a list of the words in the string, using *sep* as the\n delimiter string. If *maxsplit* is given, at most *maxsplit*\n splits are done (thus, the list will have at most ``maxsplit+1``\n elements). If *maxsplit* is not specified, then there is no limit\n on the number of splits (all possible splits are made).\n\n If *sep* is given, consecutive delimiters are not grouped together\n and are deemed to delimit empty strings (for example,\n ``\'1,,2\'.split(\',\')`` returns ``[\'1\', \'\', \'2\']``). The *sep*\n argument may consist of multiple characters (for example,\n ``\'1<>2<>3\'.split(\'<>\')`` returns ``[\'1\', \'2\', \'3\']``). Splitting\n an empty string with a specified separator returns ``[\'\']``.\n\n If *sep* is not specified or is ``None``, a different splitting\n algorithm is applied: runs of consecutive whitespace are regarded\n as a single separator, and the result will contain no empty strings\n at the start or end if the string has leading or trailing\n whitespace. 
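(Editorial aside, not from the diff itself: a short session contrasting split() with and without an explicit separator, matching the two algorithms described here.)

    >>> '1,,2'.split(',')          # explicit separator keeps empty strings
    ['1', '', '2']
    >>> ' a  b  c '.split()        # whitespace runs collapse, ends trimmed
    ['a', 'b', 'c']
    >>> ''.split()
    []
    >>> ''.split(',')
    ['']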
Consequently, splitting an empty string or a string\n consisting of just whitespace with a ``None`` separator returns\n ``[]``.\n\n For example, ``\' 1 2 3 \'.split()`` returns ``[\'1\', \'2\', \'3\']``,\n and ``\' 1 2 3 \'.split(None, 1)`` returns ``[\'1\', \'2 3 \']``.\n\nstr.splitlines([keepends])\n\n Return a list of the lines in the string, breaking at line\n boundaries. Line breaks are not included in the resulting list\n unless *keepends* is given and true.\n\nstr.startswith(prefix[, start[, end]])\n\n Return ``True`` if string starts with the *prefix*, otherwise\n return ``False``. *prefix* can also be a tuple of prefixes to look\n for. With optional *start*, test string beginning at that\n position. With optional *end*, stop comparing string at that\n position.\n\n Changed in version 2.5: Accept tuples as *prefix*.\n\nstr.strip([chars])\n\n Return a copy of the string with the leading and trailing\n characters removed. The *chars* argument is a string specifying the\n set of characters to be removed. If omitted or ``None``, the\n *chars* argument defaults to removing whitespace. The *chars*\n argument is not a prefix or suffix; rather, all combinations of its\n values are stripped:\n\n >>> \' spacious \'.strip()\n \'spacious\'\n >>> \'www.example.com\'.strip(\'cmowz.\')\n \'example\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.swapcase()\n\n Return a copy of the string with uppercase characters converted to\n lowercase and vice versa.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.title()\n\n Return a titlecased version of the string where words start with an\n uppercase character and the remaining characters are lowercase.\n\n The algorithm uses a simple language-independent definition of a\n word as groups of consecutive letters. The definition works in\n many contexts but it means that apostrophes in contractions and\n possessives form word boundaries, which may not be the desired\n result:\n\n >>> "they\'re bill\'s friends from the UK".title()\n "They\'Re Bill\'S Friends From The Uk"\n\n A workaround for apostrophes can be constructed using regular\n expressions:\n\n >>> import re\n >>> def titlecase(s):\n return re.sub(r"[A-Za-z]+(\'[A-Za-z]+)?",\n lambda mo: mo.group(0)[0].upper() +\n mo.group(0)[1:].lower(),\n s)\n\n >>> titlecase("they\'re bill\'s friends.")\n "They\'re Bill\'s Friends."\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.translate(table[, deletechars])\n\n Return a copy of the string where all characters occurring in the\n optional argument *deletechars* are removed, and the remaining\n characters have been mapped through the given translation table,\n which must be a string of length 256.\n\n You can use the ``maketrans()`` helper function in the ``string``\n module to create a translation table. For string objects, set the\n *table* argument to ``None`` for translations that only delete\n characters:\n\n >>> \'read this short text\'.translate(None, \'aeiou\')\n \'rd ths shrt txt\'\n\n New in version 2.6: Support for a ``None`` *table* argument.\n\n For Unicode objects, the ``translate()`` method does not accept the\n optional *deletechars* argument. Instead, it returns a copy of the\n *s* where all characters have been mapped through the given\n translation table which must be a mapping of Unicode ordinals to\n Unicode ordinals, Unicode strings or ``None``. Unmapped characters\n are left untouched. 
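(Editorial aside, added for illustration only: translate() on a byte string via string.maketrans(), deletion with a None table, and the mapping-based Unicode variant; the sample text is invented.)

    >>> from string import maketrans
    >>> 'read this short text'.translate(maketrans('t', 'T'))
    'read This shorT TexT'
    >>> 'read this short text'.translate(None, 'aeiou')   # delete only
    'rd ths shrt txt'
    >>> u'read this'.translate({ord(u't'): None, ord(u'i'): u'1'})
    u'read h1s'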
Characters mapped to ``None`` are deleted.\n Note, a more flexible approach is to create a custom character\n mapping codec using the ``codecs`` module (see ``encodings.cp1251``\n for an example).\n\nstr.upper()\n\n Return a copy of the string converted to uppercase.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.zfill(width)\n\n Return the numeric string left filled with zeros in a string of\n length *width*. A sign prefix is handled correctly. The original\n string is returned if *width* is less than ``len(s)``.\n\n New in version 2.2.2.\n\nThe following methods are present only on unicode objects:\n\nunicode.isnumeric()\n\n Return ``True`` if there are only numeric characters in S,\n ``False`` otherwise. Numeric characters include digit characters,\n and all characters that have the Unicode numeric value property,\n e.g. U+2155, VULGAR FRACTION ONE FIFTH.\n\nunicode.isdecimal()\n\n Return ``True`` if there are only decimal characters in S,\n ``False`` otherwise. Decimal characters include digit characters,\n and all characters that that can be used to form decimal-radix\n numbers, e.g. U+0660, ARABIC-INDIC DIGIT ZERO.\n', - 'strings': u'\nString literals\n***************\n\nString literals are described by the following lexical definitions:\n\n stringliteral ::= [stringprefix](shortstring | longstring)\n stringprefix ::= "r" | "u" | "ur" | "R" | "U" | "UR" | "Ur" | "uR"\n shortstring ::= "\'" shortstringitem* "\'" | \'"\' shortstringitem* \'"\'\n longstring ::= "\'\'\'" longstringitem* "\'\'\'"\n | \'"""\' longstringitem* \'"""\'\n shortstringitem ::= shortstringchar | escapeseq\n longstringitem ::= longstringchar | escapeseq\n shortstringchar ::= \n longstringchar ::= \n escapeseq ::= "\\" \n\nOne syntactic restriction not indicated by these productions is that\nwhitespace is not allowed between the **stringprefix** and the rest of\nthe string literal. The source character set is defined by the\nencoding declaration; it is ASCII if no encoding declaration is given\nin the source file; see section *Encoding declarations*.\n\nIn plain English: String literals can be enclosed in matching single\nquotes (``\'``) or double quotes (``"``). They can also be enclosed in\nmatching groups of three single or double quotes (these are generally\nreferred to as *triple-quoted strings*). The backslash (``\\``)\ncharacter is used to escape characters that otherwise have a special\nmeaning, such as newline, backslash itself, or the quote character.\nString literals may optionally be prefixed with a letter ``\'r\'`` or\n``\'R\'``; such strings are called *raw strings* and use different rules\nfor interpreting backslash escape sequences. A prefix of ``\'u\'`` or\n``\'U\'`` makes the string a Unicode string. Unicode strings use the\nUnicode character set as defined by the Unicode Consortium and ISO\n10646. Some additional escape sequences, described below, are\navailable in Unicode strings. The two prefix characters may be\ncombined; in this case, ``\'u\'`` must appear before ``\'r\'``.\n\nIn triple-quoted strings, unescaped newlines and quotes are allowed\n(and are retained), except that three unescaped quotes in a row\nterminate the string. (A "quote" is the character used to open the\nstring, i.e. either ``\'`` or ``"``.)\n\nUnless an ``\'r\'`` or ``\'R\'`` prefix is present, escape sequences in\nstrings are interpreted according to rules similar to those used by\nStandard C. 
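(Editorial aside, not part of the quoted entry: a few of the escape sequences tabulated below, tried out in a Python 2.7 session.)

    >>> '\x41', '\101', '\t'            # hex, octal and tab escapes
    ('A', 'A', '\t')
    >>> u'\u0041', u'\N{BULLET}'        # Unicode-only escapes
    (u'A', u'\u2022')
    >>> len('\n'), len(r'\n')           # raw strings keep the backslash
    (1, 2)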
The recognized escape sequences are:\n\n+-------------------+-----------------------------------+---------+\n| Escape Sequence | Meaning | Notes |\n+===================+===================================+=========+\n| ``\\newline`` | Ignored | |\n+-------------------+-----------------------------------+---------+\n| ``\\\\`` | Backslash (``\\``) | |\n+-------------------+-----------------------------------+---------+\n| ``\\\'`` | Single quote (``\'``) | |\n+-------------------+-----------------------------------+---------+\n| ``\\"`` | Double quote (``"``) | |\n+-------------------+-----------------------------------+---------+\n| ``\\a`` | ASCII Bell (BEL) | |\n+-------------------+-----------------------------------+---------+\n| ``\\b`` | ASCII Backspace (BS) | |\n+-------------------+-----------------------------------+---------+\n| ``\\f`` | ASCII Formfeed (FF) | |\n+-------------------+-----------------------------------+---------+\n| ``\\n`` | ASCII Linefeed (LF) | |\n+-------------------+-----------------------------------+---------+\n| ``\\N{name}`` | Character named *name* in the | |\n| | Unicode database (Unicode only) | |\n+-------------------+-----------------------------------+---------+\n| ``\\r`` | ASCII Carriage Return (CR) | |\n+-------------------+-----------------------------------+---------+\n| ``\\t`` | ASCII Horizontal Tab (TAB) | |\n+-------------------+-----------------------------------+---------+\n| ``\\uxxxx`` | Character with 16-bit hex value | (1) |\n| | *xxxx* (Unicode only) | |\n+-------------------+-----------------------------------+---------+\n| ``\\Uxxxxxxxx`` | Character with 32-bit hex value | (2) |\n| | *xxxxxxxx* (Unicode only) | |\n+-------------------+-----------------------------------+---------+\n| ``\\v`` | ASCII Vertical Tab (VT) | |\n+-------------------+-----------------------------------+---------+\n| ``\\ooo`` | Character with octal value *ooo* | (3,5) |\n+-------------------+-----------------------------------+---------+\n| ``\\xhh`` | Character with hex value *hh* | (4,5) |\n+-------------------+-----------------------------------+---------+\n\nNotes:\n\n1. Individual code units which form parts of a surrogate pair can be\n encoded using this escape sequence.\n\n2. Any Unicode character can be encoded this way, but characters\n outside the Basic Multilingual Plane (BMP) will be encoded using a\n surrogate pair if Python is compiled to use 16-bit code units (the\n default). Individual code units which form parts of a surrogate\n pair can be encoded using this escape sequence.\n\n3. As in Standard C, up to three octal digits are accepted.\n\n4. Unlike in Standard C, exactly two hex digits are required.\n\n5. In a string literal, hexadecimal and octal escapes denote the byte\n with the given value; it is not necessary that the byte encodes a\n character in the source character set. In a Unicode literal, these\n escapes denote a Unicode character with the given value.\n\nUnlike Standard C, all unrecognized escape sequences are left in the\nstring unchanged, i.e., *the backslash is left in the string*. (This\nbehavior is useful when debugging: if an escape sequence is mistyped,\nthe resulting output is more easily recognized as broken.) 
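(Editorial aside, added to illustrate the preceding paragraph: unrecognized escapes keep their backslash, and the "(Unicode only)" escapes count as unrecognized in plain string literals.)

    >>> '\p'             # no such escape, so the backslash stays
    '\\p'
    >>> len('\u0041')    # six characters in a byte string literal
    6
    >>> u'\u0041'        # one character in a unicode literal
    u'A'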
It is also\nimportant to note that the escape sequences marked as "(Unicode only)"\nin the table above fall into the category of unrecognized escapes for\nnon-Unicode string literals.\n\nWhen an ``\'r\'`` or ``\'R\'`` prefix is present, a character following a\nbackslash is included in the string without change, and *all\nbackslashes are left in the string*. For example, the string literal\n``r"\\n"`` consists of two characters: a backslash and a lowercase\n``\'n\'``. String quotes can be escaped with a backslash, but the\nbackslash remains in the string; for example, ``r"\\""`` is a valid\nstring literal consisting of two characters: a backslash and a double\nquote; ``r"\\"`` is not a valid string literal (even a raw string\ncannot end in an odd number of backslashes). Specifically, *a raw\nstring cannot end in a single backslash* (since the backslash would\nescape the following quote character). Note also that a single\nbackslash followed by a newline is interpreted as those two characters\nas part of the string, *not* as a line continuation.\n\nWhen an ``\'r\'`` or ``\'R\'`` prefix is used in conjunction with a\n``\'u\'`` or ``\'U\'`` prefix, then the ``\\uXXXX`` and ``\\UXXXXXXXX``\nescape sequences are processed while *all other backslashes are left\nin the string*. For example, the string literal ``ur"\\u0062\\n"``\nconsists of three Unicode characters: \'LATIN SMALL LETTER B\', \'REVERSE\nSOLIDUS\', and \'LATIN SMALL LETTER N\'. Backslashes can be escaped with\na preceding backslash; however, both remain in the string. As a\nresult, ``\\uXXXX`` escape sequences are only recognized when there are\nan odd number of backslashes.\n', + 'string-methods': u'\nString Methods\n**************\n\nBelow are listed the string methods which both 8-bit strings and\nUnicode objects support. Some of them are also available on\n``bytearray`` objects.\n\nIn addition, Python\'s strings support the sequence type methods\ndescribed in the *Sequence Types --- str, unicode, list, tuple,\nbytearray, buffer, xrange* section. To output formatted strings use\ntemplate strings or the ``%`` operator described in the *String\nFormatting Operations* section. Also, see the ``re`` module for string\nfunctions based on regular expressions.\n\nstr.capitalize()\n\n Return a copy of the string with its first character capitalized\n and the rest lowercased.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.center(width[, fillchar])\n\n Return centered in a string of length *width*. Padding is done\n using the specified *fillchar* (default is a space).\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.count(sub[, start[, end]])\n\n Return the number of non-overlapping occurrences of substring *sub*\n in the range [*start*, *end*]. Optional arguments *start* and\n *end* are interpreted as in slice notation.\n\nstr.decode([encoding[, errors]])\n\n Decodes the string using the codec registered for *encoding*.\n *encoding* defaults to the default string encoding. *errors* may\n be given to set a different error handling scheme. 
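(Editorial aside on the updated entry above, which notes that some of these methods also exist on bytearray objects; a tiny invented example, not taken from the diff.)

    >>> bytearray('mississippi').rstrip('ipz')
    bytearray(b'mississ')
    >>> bytearray('pypy').upper()
    bytearray(b'PYPY')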
The default is\n ``\'strict\'``, meaning that encoding errors raise ``UnicodeError``.\n Other possible values are ``\'ignore\'``, ``\'replace\'`` and any other\n name registered via ``codecs.register_error()``, see section *Codec\n Base Classes*.\n\n New in version 2.2.\n\n Changed in version 2.3: Support for other error handling schemes\n added.\n\n Changed in version 2.7: Support for keyword arguments added.\n\nstr.encode([encoding[, errors]])\n\n Return an encoded version of the string. Default encoding is the\n current default string encoding. *errors* may be given to set a\n different error handling scheme. The default for *errors* is\n ``\'strict\'``, meaning that encoding errors raise a\n ``UnicodeError``. Other possible values are ``\'ignore\'``,\n ``\'replace\'``, ``\'xmlcharrefreplace\'``, ``\'backslashreplace\'`` and\n any other name registered via ``codecs.register_error()``, see\n section *Codec Base Classes*. For a list of possible encodings, see\n section *Standard Encodings*.\n\n New in version 2.0.\n\n Changed in version 2.3: Support for ``\'xmlcharrefreplace\'`` and\n ``\'backslashreplace\'`` and other error handling schemes added.\n\n Changed in version 2.7: Support for keyword arguments added.\n\nstr.endswith(suffix[, start[, end]])\n\n Return ``True`` if the string ends with the specified *suffix*,\n otherwise return ``False``. *suffix* can also be a tuple of\n suffixes to look for. With optional *start*, test beginning at\n that position. With optional *end*, stop comparing at that\n position.\n\n Changed in version 2.5: Accept tuples as *suffix*.\n\nstr.expandtabs([tabsize])\n\n Return a copy of the string where all tab characters are replaced\n by one or more spaces, depending on the current column and the\n given tab size. The column number is reset to zero after each\n newline occurring in the string. If *tabsize* is not given, a tab\n size of ``8`` characters is assumed. This doesn\'t understand other\n non-printing characters or escape sequences.\n\nstr.find(sub[, start[, end]])\n\n Return the lowest index in the string where substring *sub* is\n found, such that *sub* is contained in the slice ``s[start:end]``.\n Optional arguments *start* and *end* are interpreted as in slice\n notation. Return ``-1`` if *sub* is not found.\n\n Note: The ``find()`` method should be used only if you need to know the\n position of *sub*. To check if *sub* is a substring or not, use\n the ``in`` operator:\n\n >>> \'Py\' in \'Python\'\n True\n\nstr.format(*args, **kwargs)\n\n Perform a string formatting operation. The string on which this\n method is called can contain literal text or replacement fields\n delimited by braces ``{}``. Each replacement field contains either\n the numeric index of a positional argument, or the name of a\n keyword argument. 
Returns a copy of the string where each\n replacement field is replaced with the string value of the\n corresponding argument.\n\n >>> "The sum of 1 + 2 is {0}".format(1+2)\n \'The sum of 1 + 2 is 3\'\n\n See *Format String Syntax* for a description of the various\n formatting options that can be specified in format strings.\n\n This method of string formatting is the new standard in Python 3.0,\n and should be preferred to the ``%`` formatting described in\n *String Formatting Operations* in new code.\n\n New in version 2.6.\n\nstr.index(sub[, start[, end]])\n\n Like ``find()``, but raise ``ValueError`` when the substring is not\n found.\n\nstr.isalnum()\n\n Return true if all characters in the string are alphanumeric and\n there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isalpha()\n\n Return true if all characters in the string are alphabetic and\n there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isdigit()\n\n Return true if all characters in the string are digits and there is\n at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.islower()\n\n Return true if all cased characters in the string are lowercase and\n there is at least one cased character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isspace()\n\n Return true if there are only whitespace characters in the string\n and there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.istitle()\n\n Return true if the string is a titlecased string and there is at\n least one character, for example uppercase characters may only\n follow uncased characters and lowercase characters only cased ones.\n Return false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isupper()\n\n Return true if all cased characters in the string are uppercase and\n there is at least one cased character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.join(iterable)\n\n Return a string which is the concatenation of the strings in the\n *iterable* *iterable*. The separator between elements is the\n string providing this method.\n\nstr.ljust(width[, fillchar])\n\n Return the string left justified in a string of length *width*.\n Padding is done using the specified *fillchar* (default is a\n space). The original string is returned if *width* is less than\n ``len(s)``.\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.lower()\n\n Return a copy of the string converted to lowercase.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.lstrip([chars])\n\n Return a copy of the string with leading characters removed. The\n *chars* argument is a string specifying the set of characters to be\n removed. If omitted or ``None``, the *chars* argument defaults to\n removing whitespace. The *chars* argument is not a prefix; rather,\n all combinations of its values are stripped:\n\n >>> \' spacious \'.lstrip()\n \'spacious \'\n >>> \'www.example.com\'.lstrip(\'cmowz.\')\n \'example.com\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.partition(sep)\n\n Split the string at the first occurrence of *sep*, and return a\n 3-tuple containing the part before the separator, the separator\n itself, and the part after the separator. 
If the separator is not\n found, return a 3-tuple containing the string itself, followed by\n two empty strings.\n\n New in version 2.5.\n\nstr.replace(old, new[, count])\n\n Return a copy of the string with all occurrences of substring *old*\n replaced by *new*. If the optional argument *count* is given, only\n the first *count* occurrences are replaced.\n\nstr.rfind(sub[, start[, end]])\n\n Return the highest index in the string where substring *sub* is\n found, such that *sub* is contained within ``s[start:end]``.\n Optional arguments *start* and *end* are interpreted as in slice\n notation. Return ``-1`` on failure.\n\nstr.rindex(sub[, start[, end]])\n\n Like ``rfind()`` but raises ``ValueError`` when the substring *sub*\n is not found.\n\nstr.rjust(width[, fillchar])\n\n Return the string right justified in a string of length *width*.\n Padding is done using the specified *fillchar* (default is a\n space). The original string is returned if *width* is less than\n ``len(s)``.\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.rpartition(sep)\n\n Split the string at the last occurrence of *sep*, and return a\n 3-tuple containing the part before the separator, the separator\n itself, and the part after the separator. If the separator is not\n found, return a 3-tuple containing two empty strings, followed by\n the string itself.\n\n New in version 2.5.\n\nstr.rsplit([sep[, maxsplit]])\n\n Return a list of the words in the string, using *sep* as the\n delimiter string. If *maxsplit* is given, at most *maxsplit* splits\n are done, the *rightmost* ones. If *sep* is not specified or\n ``None``, any whitespace string is a separator. Except for\n splitting from the right, ``rsplit()`` behaves like ``split()``\n which is described in detail below.\n\n New in version 2.4.\n\nstr.rstrip([chars])\n\n Return a copy of the string with trailing characters removed. The\n *chars* argument is a string specifying the set of characters to be\n removed. If omitted or ``None``, the *chars* argument defaults to\n removing whitespace. The *chars* argument is not a suffix; rather,\n all combinations of its values are stripped:\n\n >>> \' spacious \'.rstrip()\n \' spacious\'\n >>> \'mississippi\'.rstrip(\'ipz\')\n \'mississ\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.split([sep[, maxsplit]])\n\n Return a list of the words in the string, using *sep* as the\n delimiter string. If *maxsplit* is given, at most *maxsplit*\n splits are done (thus, the list will have at most ``maxsplit+1``\n elements). If *maxsplit* is not specified, then there is no limit\n on the number of splits (all possible splits are made).\n\n If *sep* is given, consecutive delimiters are not grouped together\n and are deemed to delimit empty strings (for example,\n ``\'1,,2\'.split(\',\')`` returns ``[\'1\', \'\', \'2\']``). The *sep*\n argument may consist of multiple characters (for example,\n ``\'1<>2<>3\'.split(\'<>\')`` returns ``[\'1\', \'2\', \'3\']``). Splitting\n an empty string with a specified separator returns ``[\'\']``.\n\n If *sep* is not specified or is ``None``, a different splitting\n algorithm is applied: runs of consecutive whitespace are regarded\n as a single separator, and the result will contain no empty strings\n at the start or end if the string has leading or trailing\n whitespace. 
Consequently, splitting an empty string or a string\n consisting of just whitespace with a ``None`` separator returns\n ``[]``.\n\n For example, ``\' 1 2 3 \'.split()`` returns ``[\'1\', \'2\', \'3\']``,\n and ``\' 1 2 3 \'.split(None, 1)`` returns ``[\'1\', \'2 3 \']``.\n\nstr.splitlines([keepends])\n\n Return a list of the lines in the string, breaking at line\n boundaries. Line breaks are not included in the resulting list\n unless *keepends* is given and true.\n\nstr.startswith(prefix[, start[, end]])\n\n Return ``True`` if string starts with the *prefix*, otherwise\n return ``False``. *prefix* can also be a tuple of prefixes to look\n for. With optional *start*, test string beginning at that\n position. With optional *end*, stop comparing string at that\n position.\n\n Changed in version 2.5: Accept tuples as *prefix*.\n\nstr.strip([chars])\n\n Return a copy of the string with the leading and trailing\n characters removed. The *chars* argument is a string specifying the\n set of characters to be removed. If omitted or ``None``, the\n *chars* argument defaults to removing whitespace. The *chars*\n argument is not a prefix or suffix; rather, all combinations of its\n values are stripped:\n\n >>> \' spacious \'.strip()\n \'spacious\'\n >>> \'www.example.com\'.strip(\'cmowz.\')\n \'example\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.swapcase()\n\n Return a copy of the string with uppercase characters converted to\n lowercase and vice versa.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.title()\n\n Return a titlecased version of the string where words start with an\n uppercase character and the remaining characters are lowercase.\n\n The algorithm uses a simple language-independent definition of a\n word as groups of consecutive letters. The definition works in\n many contexts but it means that apostrophes in contractions and\n possessives form word boundaries, which may not be the desired\n result:\n\n >>> "they\'re bill\'s friends from the UK".title()\n "They\'Re Bill\'S Friends From The Uk"\n\n A workaround for apostrophes can be constructed using regular\n expressions:\n\n >>> import re\n >>> def titlecase(s):\n return re.sub(r"[A-Za-z]+(\'[A-Za-z]+)?",\n lambda mo: mo.group(0)[0].upper() +\n mo.group(0)[1:].lower(),\n s)\n\n >>> titlecase("they\'re bill\'s friends.")\n "They\'re Bill\'s Friends."\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.translate(table[, deletechars])\n\n Return a copy of the string where all characters occurring in the\n optional argument *deletechars* are removed, and the remaining\n characters have been mapped through the given translation table,\n which must be a string of length 256.\n\n You can use the ``maketrans()`` helper function in the ``string``\n module to create a translation table. For string objects, set the\n *table* argument to ``None`` for translations that only delete\n characters:\n\n >>> \'read this short text\'.translate(None, \'aeiou\')\n \'rd ths shrt txt\'\n\n New in version 2.6: Support for a ``None`` *table* argument.\n\n For Unicode objects, the ``translate()`` method does not accept the\n optional *deletechars* argument. Instead, it returns a copy of the\n *s* where all characters have been mapped through the given\n translation table which must be a mapping of Unicode ordinals to\n Unicode ordinals, Unicode strings or ``None``. Unmapped characters\n are left untouched. 
Characters mapped to ``None`` are deleted.\n Note, a more flexible approach is to create a custom character\n mapping codec using the ``codecs`` module (see ``encodings.cp1251``\n for an example).\n\nstr.upper()\n\n Return a copy of the string converted to uppercase.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.zfill(width)\n\n Return the numeric string left filled with zeros in a string of\n length *width*. A sign prefix is handled correctly. The original\n string is returned if *width* is less than ``len(s)``.\n\n New in version 2.2.2.\n\nThe following methods are present only on unicode objects:\n\nunicode.isnumeric()\n\n Return ``True`` if there are only numeric characters in S,\n ``False`` otherwise. Numeric characters include digit characters,\n and all characters that have the Unicode numeric value property,\n e.g. U+2155, VULGAR FRACTION ONE FIFTH.\n\nunicode.isdecimal()\n\n Return ``True`` if there are only decimal characters in S,\n ``False`` otherwise. Decimal characters include digit characters,\n and all characters that that can be used to form decimal-radix\n numbers, e.g. U+0660, ARABIC-INDIC DIGIT ZERO.\n', + 'strings': u'\nString literals\n***************\n\nString literals are described by the following lexical definitions:\n\n stringliteral ::= [stringprefix](shortstring | longstring)\n stringprefix ::= "r" | "u" | "ur" | "R" | "U" | "UR" | "Ur" | "uR"\n | "b" | "B" | "br" | "Br" | "bR" | "BR"\n shortstring ::= "\'" shortstringitem* "\'" | \'"\' shortstringitem* \'"\'\n longstring ::= "\'\'\'" longstringitem* "\'\'\'"\n | \'"""\' longstringitem* \'"""\'\n shortstringitem ::= shortstringchar | escapeseq\n longstringitem ::= longstringchar | escapeseq\n shortstringchar ::= \n longstringchar ::= \n escapeseq ::= "\\" \n\nOne syntactic restriction not indicated by these productions is that\nwhitespace is not allowed between the **stringprefix** and the rest of\nthe string literal. The source character set is defined by the\nencoding declaration; it is ASCII if no encoding declaration is given\nin the source file; see section *Encoding declarations*.\n\nIn plain English: String literals can be enclosed in matching single\nquotes (``\'``) or double quotes (``"``). They can also be enclosed in\nmatching groups of three single or double quotes (these are generally\nreferred to as *triple-quoted strings*). The backslash (``\\``)\ncharacter is used to escape characters that otherwise have a special\nmeaning, such as newline, backslash itself, or the quote character.\nString literals may optionally be prefixed with a letter ``\'r\'`` or\n``\'R\'``; such strings are called *raw strings* and use different rules\nfor interpreting backslash escape sequences. A prefix of ``\'u\'`` or\n``\'U\'`` makes the string a Unicode string. Unicode strings use the\nUnicode character set as defined by the Unicode Consortium and ISO\n10646. Some additional escape sequences, described below, are\navailable in Unicode strings. A prefix of ``\'b\'`` or ``\'B\'`` is\nignored in Python 2; it indicates that the literal should become a\nbytes literal in Python 3 (e.g. when code is automatically converted\nwith 2to3). A ``\'u\'`` or ``\'b\'`` prefix may be followed by an ``\'r\'``\nprefix.\n\nIn triple-quoted strings, unescaped newlines and quotes are allowed\n(and are retained), except that three unescaped quotes in a row\nterminate the string. (A "quote" is the character used to open the\nstring, i.e. 
either ``\'`` or ``"``.)\n\nUnless an ``\'r\'`` or ``\'R\'`` prefix is present, escape sequences in\nstrings are interpreted according to rules similar to those used by\nStandard C. The recognized escape sequences are:\n\n+-------------------+-----------------------------------+---------+\n| Escape Sequence | Meaning | Notes |\n+===================+===================================+=========+\n| ``\\newline`` | Ignored | |\n+-------------------+-----------------------------------+---------+\n| ``\\\\`` | Backslash (``\\``) | |\n+-------------------+-----------------------------------+---------+\n| ``\\\'`` | Single quote (``\'``) | |\n+-------------------+-----------------------------------+---------+\n| ``\\"`` | Double quote (``"``) | |\n+-------------------+-----------------------------------+---------+\n| ``\\a`` | ASCII Bell (BEL) | |\n+-------------------+-----------------------------------+---------+\n| ``\\b`` | ASCII Backspace (BS) | |\n+-------------------+-----------------------------------+---------+\n| ``\\f`` | ASCII Formfeed (FF) | |\n+-------------------+-----------------------------------+---------+\n| ``\\n`` | ASCII Linefeed (LF) | |\n+-------------------+-----------------------------------+---------+\n| ``\\N{name}`` | Character named *name* in the | |\n| | Unicode database (Unicode only) | |\n+-------------------+-----------------------------------+---------+\n| ``\\r`` | ASCII Carriage Return (CR) | |\n+-------------------+-----------------------------------+---------+\n| ``\\t`` | ASCII Horizontal Tab (TAB) | |\n+-------------------+-----------------------------------+---------+\n| ``\\uxxxx`` | Character with 16-bit hex value | (1) |\n| | *xxxx* (Unicode only) | |\n+-------------------+-----------------------------------+---------+\n| ``\\Uxxxxxxxx`` | Character with 32-bit hex value | (2) |\n| | *xxxxxxxx* (Unicode only) | |\n+-------------------+-----------------------------------+---------+\n| ``\\v`` | ASCII Vertical Tab (VT) | |\n+-------------------+-----------------------------------+---------+\n| ``\\ooo`` | Character with octal value *ooo* | (3,5) |\n+-------------------+-----------------------------------+---------+\n| ``\\xhh`` | Character with hex value *hh* | (4,5) |\n+-------------------+-----------------------------------+---------+\n\nNotes:\n\n1. Individual code units which form parts of a surrogate pair can be\n encoded using this escape sequence.\n\n2. Any Unicode character can be encoded this way, but characters\n outside the Basic Multilingual Plane (BMP) will be encoded using a\n surrogate pair if Python is compiled to use 16-bit code units (the\n default). Individual code units which form parts of a surrogate\n pair can be encoded using this escape sequence.\n\n3. As in Standard C, up to three octal digits are accepted.\n\n4. Unlike in Standard C, exactly two hex digits are required.\n\n5. In a string literal, hexadecimal and octal escapes denote the byte\n with the given value; it is not necessary that the byte encodes a\n character in the source character set. In a Unicode literal, these\n escapes denote a Unicode character with the given value.\n\nUnlike Standard C, all unrecognized escape sequences are left in the\nstring unchanged, i.e., *the backslash is left in the string*. (This\nbehavior is useful when debugging: if an escape sequence is mistyped,\nthe resulting output is more easily recognized as broken.) 
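(Editorial aside, illustrating the new 'b'/'B' prefix described earlier in this updated entry; invented literals, Python 2.7 behaviour.)

    >>> b'abc' == 'abc'      # the b prefix is ignored in Python 2
    True
    >>> br'\n'               # b may combine with r, in that order
    '\\n'
    >>> len(br'\n')
    2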
It is also\nimportant to note that the escape sequences marked as "(Unicode only)"\nin the table above fall into the category of unrecognized escapes for\nnon-Unicode string literals.\n\nWhen an ``\'r\'`` or ``\'R\'`` prefix is present, a character following a\nbackslash is included in the string without change, and *all\nbackslashes are left in the string*. For example, the string literal\n``r"\\n"`` consists of two characters: a backslash and a lowercase\n``\'n\'``. String quotes can be escaped with a backslash, but the\nbackslash remains in the string; for example, ``r"\\""`` is a valid\nstring literal consisting of two characters: a backslash and a double\nquote; ``r"\\"`` is not a valid string literal (even a raw string\ncannot end in an odd number of backslashes). Specifically, *a raw\nstring cannot end in a single backslash* (since the backslash would\nescape the following quote character). Note also that a single\nbackslash followed by a newline is interpreted as those two characters\nas part of the string, *not* as a line continuation.\n\nWhen an ``\'r\'`` or ``\'R\'`` prefix is used in conjunction with a\n``\'u\'`` or ``\'U\'`` prefix, then the ``\\uXXXX`` and ``\\UXXXXXXXX``\nescape sequences are processed while *all other backslashes are left\nin the string*. For example, the string literal ``ur"\\u0062\\n"``\nconsists of three Unicode characters: \'LATIN SMALL LETTER B\', \'REVERSE\nSOLIDUS\', and \'LATIN SMALL LETTER N\'. Backslashes can be escaped with\na preceding backslash; however, both remain in the string. As a\nresult, ``\\uXXXX`` escape sequences are only recognized when there are\nan odd number of backslashes.\n', 'subscriptions': u'\nSubscriptions\n*************\n\nA subscription selects an item of a sequence (string, tuple or list)\nor mapping (dictionary) object:\n\n subscription ::= primary "[" expression_list "]"\n\nThe primary must evaluate to an object of a sequence or mapping type.\n\nIf the primary is a mapping, the expression list must evaluate to an\nobject whose value is one of the keys of the mapping, and the\nsubscription selects the value in the mapping that corresponds to that\nkey. (The expression list is a tuple except if it has exactly one\nitem.)\n\nIf the primary is a sequence, the expression (list) must evaluate to a\nplain integer. If this value is negative, the length of the sequence\nis added to it (so that, e.g., ``x[-1]`` selects the last item of\n``x``.) The resulting value must be a nonnegative integer less than\nthe number of items in the sequence, and the subscription selects the\nitem whose index is that value (counting from zero).\n\nA string\'s items are characters. A character is not a separate data\ntype but a string of exactly one character.\n', 'truth': u"\nTruth Value Testing\n*******************\n\nAny object can be tested for truth value, for use in an ``if`` or\n``while`` condition or as operand of the Boolean operations below. The\nfollowing values are considered false:\n\n* ``None``\n\n* ``False``\n\n* zero of any numeric type, for example, ``0``, ``0L``, ``0.0``,\n ``0j``.\n\n* any empty sequence, for example, ``''``, ``()``, ``[]``.\n\n* any empty mapping, for example, ``{}``.\n\n* instances of user-defined classes, if the class defines a\n ``__nonzero__()`` or ``__len__()`` method, when that method returns\n the integer zero or ``bool`` value ``False``. 
[1]\n\nAll other values are considered true --- so objects of many types are\nalways true.\n\nOperations and built-in functions that have a Boolean result always\nreturn ``0`` or ``False`` for false and ``1`` or ``True`` for true,\nunless otherwise stated. (Important exception: the Boolean operations\n``or`` and ``and`` always return one of their operands.)\n", 'try': u'\nThe ``try`` statement\n*********************\n\nThe ``try`` statement specifies exception handlers and/or cleanup code\nfor a group of statements:\n\n try_stmt ::= try1_stmt | try2_stmt\n try1_stmt ::= "try" ":" suite\n ("except" [expression [("as" | ",") target]] ":" suite)+\n ["else" ":" suite]\n ["finally" ":" suite]\n try2_stmt ::= "try" ":" suite\n "finally" ":" suite\n\nChanged in version 2.5: In previous versions of Python,\n``try``...``except``...``finally`` did not work. ``try``...``except``\nhad to be nested in ``try``...``finally``.\n\nThe ``except`` clause(s) specify one or more exception handlers. When\nno exception occurs in the ``try`` clause, no exception handler is\nexecuted. When an exception occurs in the ``try`` suite, a search for\nan exception handler is started. This search inspects the except\nclauses in turn until one is found that matches the exception. An\nexpression-less except clause, if present, must be last; it matches\nany exception. For an except clause with an expression, that\nexpression is evaluated, and the clause matches the exception if the\nresulting object is "compatible" with the exception. An object is\ncompatible with an exception if it is the class or a base class of the\nexception object, a tuple containing an item compatible with the\nexception, or, in the (deprecated) case of string exceptions, is the\nraised string itself (note that the object identities must match, i.e.\nit must be the same string object, not just a string with the same\nvalue).\n\nIf no except clause matches the exception, the search for an exception\nhandler continues in the surrounding code and on the invocation stack.\n[1]\n\nIf the evaluation of an expression in the header of an except clause\nraises an exception, the original search for a handler is canceled and\na search starts for the new exception in the surrounding code and on\nthe call stack (it is treated as if the entire ``try`` statement\nraised the exception).\n\nWhen a matching except clause is found, the exception is assigned to\nthe target specified in that except clause, if present, and the except\nclause\'s suite is executed. All except clauses must have an\nexecutable block. When the end of this block is reached, execution\ncontinues normally after the entire try statement. (This means that\nif two nested handlers exist for the same exception, and the exception\noccurs in the try clause of the inner handler, the outer handler will\nnot handle the exception.)\n\nBefore an except clause\'s suite is executed, details about the\nexception are assigned to three variables in the ``sys`` module:\n``sys.exc_type`` receives the object identifying the exception;\n``sys.exc_value`` receives the exception\'s parameter;\n``sys.exc_traceback`` receives a traceback object (see section *The\nstandard type hierarchy*) identifying the point in the program where\nthe exception occurred. These details are also available through the\n``sys.exc_info()`` function, which returns a tuple ``(exc_type,\nexc_value, exc_traceback)``. Use of the corresponding variables is\ndeprecated in favor of this function, since their use is unsafe in a\nthreaded program. 
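(Editorial aside, not part of the quoted documentation: a compact try/except/else/finally session showing which clauses run; the zero division is deliberate.)

    >>> try:
    ...     result = 10 / 0
    ... except ZeroDivisionError as exc:
    ...     print "caught:", exc
    ... else:
    ...     print "no exception, result is", result
    ... finally:
    ...     print "cleanup runs either way"
    ...
    caught: integer division or modulo by zero
    cleanup runs either way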
As of Python 1.5, the variables are restored to\ntheir previous values (before the call) when returning from a function\nthat handled an exception.\n\nThe optional ``else`` clause is executed if and when control flows off\nthe end of the ``try`` clause. [2] Exceptions in the ``else`` clause\nare not handled by the preceding ``except`` clauses.\n\nIf ``finally`` is present, it specifies a \'cleanup\' handler. The\n``try`` clause is executed, including any ``except`` and ``else``\nclauses. If an exception occurs in any of the clauses and is not\nhandled, the exception is temporarily saved. The ``finally`` clause is\nexecuted. If there is a saved exception, it is re-raised at the end\nof the ``finally`` clause. If the ``finally`` clause raises another\nexception or executes a ``return`` or ``break`` statement, the saved\nexception is lost. The exception information is not available to the\nprogram during execution of the ``finally`` clause.\n\nWhen a ``return``, ``break`` or ``continue`` statement is executed in\nthe ``try`` suite of a ``try``...``finally`` statement, the\n``finally`` clause is also executed \'on the way out.\' A ``continue``\nstatement is illegal in the ``finally`` clause. (The reason is a\nproblem with the current implementation --- this restriction may be\nlifted in the future).\n\nAdditional information on exceptions can be found in section\n*Exceptions*, and information on using the ``raise`` statement to\ngenerate exceptions may be found in section *The raise statement*.\n', - 'types': u'\nThe standard type hierarchy\n***************************\n\nBelow is a list of the types that are built into Python. Extension\nmodules (written in C, Java, or other languages, depending on the\nimplementation) can define additional types. Future versions of\nPython may add types to the type hierarchy (e.g., rational numbers,\nefficiently stored arrays of integers, etc.).\n\nSome of the type descriptions below contain a paragraph listing\n\'special attributes.\' These are attributes that provide access to the\nimplementation and are not intended for general use. Their definition\nmay change in the future.\n\nNone\n This type has a single value. There is a single object with this\n value. This object is accessed through the built-in name ``None``.\n It is used to signify the absence of a value in many situations,\n e.g., it is returned from functions that don\'t explicitly return\n anything. Its truth value is false.\n\nNotImplemented\n This type has a single value. There is a single object with this\n value. This object is accessed through the built-in name\n ``NotImplemented``. Numeric methods and rich comparison methods may\n return this value if they do not implement the operation for the\n operands provided. (The interpreter will then try the reflected\n operation, or some other fallback, depending on the operator.) Its\n truth value is true.\n\nEllipsis\n This type has a single value. There is a single object with this\n value. This object is accessed through the built-in name\n ``Ellipsis``. It is used to indicate the presence of the ``...``\n syntax in a slice. Its truth value is true.\n\n``numbers.Number``\n These are created by numeric literals and returned as results by\n arithmetic operators and arithmetic built-in functions. 
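(Editorial aside, added for illustration: the abstract classes in the numbers module line up with the hierarchy described here.)

    >>> import numbers
    >>> isinstance(3, numbers.Integral)
    True
    >>> isinstance(3.0, numbers.Integral), isinstance(3.0, numbers.Real)
    (False, True)
    >>> isinstance(2j, numbers.Complex)
    True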
Numeric\n objects are immutable; once created their value never changes.\n Python numbers are of course strongly related to mathematical\n numbers, but subject to the limitations of numerical representation\n in computers.\n\n Python distinguishes between integers, floating point numbers, and\n complex numbers:\n\n ``numbers.Integral``\n These represent elements from the mathematical set of integers\n (positive and negative).\n\n There are three types of integers:\n\n Plain integers\n These represent numbers in the range -2147483648 through\n 2147483647. (The range may be larger on machines with a\n larger natural word size, but not smaller.) When the result\n of an operation would fall outside this range, the result is\n normally returned as a long integer (in some cases, the\n exception ``OverflowError`` is raised instead). For the\n purpose of shift and mask operations, integers are assumed to\n have a binary, 2\'s complement notation using 32 or more bits,\n and hiding no bits from the user (i.e., all 4294967296\n different bit patterns correspond to different values).\n\n Long integers\n These represent numbers in an unlimited range, subject to\n available (virtual) memory only. For the purpose of shift\n and mask operations, a binary representation is assumed, and\n negative numbers are represented in a variant of 2\'s\n complement which gives the illusion of an infinite string of\n sign bits extending to the left.\n\n Booleans\n These represent the truth values False and True. The two\n objects representing the values False and True are the only\n Boolean objects. The Boolean type is a subtype of plain\n integers, and Boolean values behave like the values 0 and 1,\n respectively, in almost all contexts, the exception being\n that when converted to a string, the strings ``"False"`` or\n ``"True"`` are returned, respectively.\n\n The rules for integer representation are intended to give the\n most meaningful interpretation of shift and mask operations\n involving negative integers and the least surprises when\n switching between the plain and long integer domains. Any\n operation, if it yields a result in the plain integer domain,\n will yield the same result in the long integer domain or when\n using mixed operands. The switch between domains is transparent\n to the programmer.\n\n ``numbers.Real`` (``float``)\n These represent machine-level double precision floating point\n numbers. You are at the mercy of the underlying machine\n architecture (and C or Java implementation) for the accepted\n range and handling of overflow. Python does not support single-\n precision floating point numbers; the savings in processor and\n memory usage that are usually the reason for using these is\n dwarfed by the overhead of using objects in Python, so there is\n no reason to complicate the language with two kinds of floating\n point numbers.\n\n ``numbers.Complex``\n These represent complex numbers as a pair of machine-level\n double precision floating point numbers. The same caveats apply\n as for floating point numbers. The real and imaginary parts of a\n complex number ``z`` can be retrieved through the read-only\n attributes ``z.real`` and ``z.imag``.\n\nSequences\n These represent finite ordered sets indexed by non-negative\n numbers. The built-in function ``len()`` returns the number of\n items of a sequence. When the length of a sequence is *n*, the\n index set contains the numbers 0, 1, ..., *n*-1. 
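As a brief sketch of the plain/long integer promotion and the Boolean subtype described above (Python 2 semantics; the concrete limit assumes a 32-bit build)::

    import sys

    n = sys.maxint           # largest plain integer (2147483647 on 32-bit builds)
    type(n)                  # -> <type 'int'>
    type(n + 1)              # -> <type 'long'>; the result overflows into a long

    isinstance(True, int)    # -> True: bool is a subtype of plain integers
    True + True              # -> 2
    str(True)                # -> 'True' (string conversion is the one exception)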
Item *i* of\n sequence *a* is selected by ``a[i]``.\n\n Sequences also support slicing: ``a[i:j]`` selects all items with\n index *k* such that *i* ``<=`` *k* ``<`` *j*. When used as an\n expression, a slice is a sequence of the same type. This implies\n that the index set is renumbered so that it starts at 0.\n\n Some sequences also support "extended slicing" with a third "step"\n parameter: ``a[i:j:k]`` selects all items of *a* with index *x*\n where ``x = i + n*k``, *n* ``>=`` ``0`` and *i* ``<=`` *x* ``<``\n *j*.\n\n Sequences are distinguished according to their mutability:\n\n Immutable sequences\n An object of an immutable sequence type cannot change once it is\n created. (If the object contains references to other objects,\n these other objects may be mutable and may be changed; however,\n the collection of objects directly referenced by an immutable\n object cannot change.)\n\n The following types are immutable sequences:\n\n Strings\n The items of a string are characters. There is no separate\n character type; a character is represented by a string of one\n item. Characters represent (at least) 8-bit bytes. The\n built-in functions ``chr()`` and ``ord()`` convert between\n characters and nonnegative integers representing the byte\n values. Bytes with the values 0-127 usually represent the\n corresponding ASCII values, but the interpretation of values\n is up to the program. The string data type is also used to\n represent arrays of bytes, e.g., to hold data read from a\n file.\n\n (On systems whose native character set is not ASCII, strings\n may use EBCDIC in their internal representation, provided the\n functions ``chr()`` and ``ord()`` implement a mapping between\n ASCII and EBCDIC, and string comparison preserves the ASCII\n order. Or perhaps someone can propose a better rule?)\n\n Unicode\n The items of a Unicode object are Unicode code units. A\n Unicode code unit is represented by a Unicode object of one\n item and can hold either a 16-bit or 32-bit value\n representing a Unicode ordinal (the maximum value for the\n ordinal is given in ``sys.maxunicode``, and depends on how\n Python is configured at compile time). Surrogate pairs may\n be present in the Unicode object, and will be reported as two\n separate items. The built-in functions ``unichr()`` and\n ``ord()`` convert between code units and nonnegative integers\n representing the Unicode ordinals as defined in the Unicode\n Standard 3.0. Conversion from and to other encodings are\n possible through the Unicode method ``encode()`` and the\n built-in function ``unicode()``.\n\n Tuples\n The items of a tuple are arbitrary Python objects. Tuples of\n two or more items are formed by comma-separated lists of\n expressions. A tuple of one item (a \'singleton\') can be\n formed by affixing a comma to an expression (an expression by\n itself does not create a tuple, since parentheses must be\n usable for grouping of expressions). An empty tuple can be\n formed by an empty pair of parentheses.\n\n Mutable sequences\n Mutable sequences can be changed after they are created. The\n subscription and slicing notations can be used as the target of\n assignment and ``del`` (delete) statements.\n\n There are currently two intrinsic mutable sequence types:\n\n Lists\n The items of a list are arbitrary Python objects. Lists are\n formed by placing a comma-separated list of expressions in\n square brackets. 
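A short sketch of the slicing and tuple-forming rules described above::

    a = [0, 1, 2, 3, 4, 5]
    a[1:4]       # -> [1, 2, 3]    items with index k such that 1 <= k < 4
    a[0:6:2]     # -> [0, 2, 4]    extended slicing with step 2

    t = (42,)    # a one-item tuple needs the trailing comma
    u = (42)     # just a parenthesized expression: the integer 42
    e = ()       # the empty tuple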
(Note that there are no special cases needed\n to form lists of length 0 or 1.)\n\n Byte Arrays\n A bytearray object is a mutable array. They are created by\n the built-in ``bytearray()`` constructor. Aside from being\n mutable (and hence unhashable), byte arrays otherwise provide\n the same interface and functionality as immutable bytes\n objects.\n\n The extension module ``array`` provides an additional example of\n a mutable sequence type.\n\nSet types\n These represent unordered, finite sets of unique, immutable\n objects. As such, they cannot be indexed by any subscript. However,\n they can be iterated over, and the built-in function ``len()``\n returns the number of items in a set. Common uses for sets are fast\n membership testing, removing duplicates from a sequence, and\n computing mathematical operations such as intersection, union,\n difference, and symmetric difference.\n\n For set elements, the same immutability rules apply as for\n dictionary keys. Note that numeric types obey the normal rules for\n numeric comparison: if two numbers compare equal (e.g., ``1`` and\n ``1.0``), only one of them can be contained in a set.\n\n There are currently two intrinsic set types:\n\n Sets\n These represent a mutable set. They are created by the built-in\n ``set()`` constructor and can be modified afterwards by several\n methods, such as ``add()``.\n\n Frozen sets\n These represent an immutable set. They are created by the\n built-in ``frozenset()`` constructor. As a frozenset is\n immutable and *hashable*, it can be used again as an element of\n another set, or as a dictionary key.\n\nMappings\n These represent finite sets of objects indexed by arbitrary index\n sets. The subscript notation ``a[k]`` selects the item indexed by\n ``k`` from the mapping ``a``; this can be used in expressions and\n as the target of assignments or ``del`` statements. The built-in\n function ``len()`` returns the number of items in a mapping.\n\n There is currently a single intrinsic mapping type:\n\n Dictionaries\n These represent finite sets of objects indexed by nearly\n arbitrary values. The only types of values not acceptable as\n keys are values containing lists or dictionaries or other\n mutable types that are compared by value rather than by object\n identity, the reason being that the efficient implementation of\n dictionaries requires a key\'s hash value to remain constant.\n Numeric types used for keys obey the normal rules for numeric\n comparison: if two numbers compare equal (e.g., ``1`` and\n ``1.0``) then they can be used interchangeably to index the same\n dictionary entry.\n\n Dictionaries are mutable; they can be created by the ``{...}``\n notation (see section *Dictionary displays*).\n\n The extension modules ``dbm``, ``gdbm``, and ``bsddb`` provide\n additional examples of mapping types.\n\nCallable types\n These are the types to which the function call operation (see\n section *Calls*) can be applied:\n\n User-defined functions\n A user-defined function object is created by a function\n definition (see section *Function definitions*). 
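Stepping back to the set and mapping key rules described above, a small sketch of how numerically equal keys share one entry::

    s = set([1, 2.0, 3])
    1.0 in s                  # -> True: 1 and 1.0 compare equal
    len(set([1, 1.0]))        # -> 1: only one of the two is kept

    d = {1: 'one'}
    d[1.0]                    # -> 'one': equal numbers index the same entry
    frozenset([1, 2]) in {frozenset([1, 2]): 'pair'}   # -> True: frozensets are hashable
    # d2 = {[1, 2]: 'no'}     # would raise TypeError: lists are not usable as keys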
It should be\n called with an argument list containing the same number of items\n as the function\'s formal parameter list.\n\n Special attributes:\n\n +-------------------------+---------------------------------+-------------+\n | Attribute | Meaning | |\n +=========================+=================================+=============+\n | ``func_doc`` | The function\'s documentation | Writable |\n | | string, or ``None`` if | |\n | | unavailable | |\n +-------------------------+---------------------------------+-------------+\n | ``__doc__`` | Another way of spelling | Writable |\n | | ``func_doc`` | |\n +-------------------------+---------------------------------+-------------+\n | ``func_name`` | The function\'s name | Writable |\n +-------------------------+---------------------------------+-------------+\n | ``__name__`` | Another way of spelling | Writable |\n | | ``func_name`` | |\n +-------------------------+---------------------------------+-------------+\n | ``__module__`` | The name of the module the | Writable |\n | | function was defined in, or | |\n | | ``None`` if unavailable. | |\n +-------------------------+---------------------------------+-------------+\n | ``func_defaults`` | A tuple containing default | Writable |\n | | argument values for those | |\n | | arguments that have defaults, | |\n | | or ``None`` if no arguments | |\n | | have a default value | |\n +-------------------------+---------------------------------+-------------+\n | ``func_code`` | The code object representing | Writable |\n | | the compiled function body. | |\n +-------------------------+---------------------------------+-------------+\n | ``func_globals`` | A reference to the dictionary | Read-only |\n | | that holds the function\'s | |\n | | global variables --- the global | |\n | | namespace of the module in | |\n | | which the function was defined. | |\n +-------------------------+---------------------------------+-------------+\n | ``func_dict`` | The namespace supporting | Writable |\n | | arbitrary function attributes. | |\n +-------------------------+---------------------------------+-------------+\n | ``func_closure`` | ``None`` or a tuple of cells | Read-only |\n | | that contain bindings for the | |\n | | function\'s free variables. | |\n +-------------------------+---------------------------------+-------------+\n\n Most of the attributes labelled "Writable" check the type of the\n assigned value.\n\n Changed in version 2.4: ``func_name`` is now writable.\n\n Function objects also support getting and setting arbitrary\n attributes, which can be used, for example, to attach metadata\n to functions. Regular attribute dot-notation is used to get and\n set such attributes. *Note that the current implementation only\n supports function attributes on user-defined functions. 
Function\n attributes on built-in functions may be supported in the\n future.*\n\n Additional information about a function\'s definition can be\n retrieved from its code object; see the description of internal\n types below.\n\n User-defined methods\n A user-defined method object combines a class, a class instance\n (or ``None``) and any callable object (normally a user-defined\n function).\n\n Special read-only attributes: ``im_self`` is the class instance\n object, ``im_func`` is the function object; ``im_class`` is the\n class of ``im_self`` for bound methods or the class that asked\n for the method for unbound methods; ``__doc__`` is the method\'s\n documentation (same as ``im_func.__doc__``); ``__name__`` is the\n method name (same as ``im_func.__name__``); ``__module__`` is\n the name of the module the method was defined in, or ``None`` if\n unavailable.\n\n Changed in version 2.2: ``im_self`` used to refer to the class\n that defined the method.\n\n Changed in version 2.6: For 3.0 forward-compatibility,\n ``im_func`` is also available as ``__func__``, and ``im_self``\n as ``__self__``.\n\n Methods also support accessing (but not setting) the arbitrary\n function attributes on the underlying function object.\n\n User-defined method objects may be created when getting an\n attribute of a class (perhaps via an instance of that class), if\n that attribute is a user-defined function object, an unbound\n user-defined method object, or a class method object. When the\n attribute is a user-defined method object, a new method object\n is only created if the class from which it is being retrieved is\n the same as, or a derived class of, the class stored in the\n original method object; otherwise, the original method object is\n used as it is.\n\n When a user-defined method object is created by retrieving a\n user-defined function object from a class, its ``im_self``\n attribute is ``None`` and the method object is said to be\n unbound. When one is created by retrieving a user-defined\n function object from a class via one of its instances, its\n ``im_self`` attribute is the instance, and the method object is\n said to be bound. In either case, the new method\'s ``im_class``\n attribute is the class from which the retrieval takes place, and\n its ``im_func`` attribute is the original function object.\n\n When a user-defined method object is created by retrieving\n another method object from a class or instance, the behaviour is\n the same as for a function object, except that the ``im_func``\n attribute of the new instance is not the original method object\n but its ``im_func`` attribute.\n\n When a user-defined method object is created by retrieving a\n class method object from a class or instance, its ``im_self``\n attribute is the class itself (the same as the ``im_class``\n attribute), and its ``im_func`` attribute is the function object\n underlying the class method.\n\n When an unbound user-defined method object is called, the\n underlying function (``im_func``) is called, with the\n restriction that the first argument must be an instance of the\n proper class (``im_class``) or of a derived class thereof.\n\n When a bound user-defined method object is called, the\n underlying function (``im_func``) is called, inserting the class\n instance (``im_self``) in front of the argument list. 
For\n instance, when ``C`` is a class which contains a definition for\n a function ``f()``, and ``x`` is an instance of ``C``, calling\n ``x.f(1)`` is equivalent to calling ``C.f(x, 1)``.\n\n When a user-defined method object is derived from a class method\n object, the "class instance" stored in ``im_self`` will actually\n be the class itself, so that calling either ``x.f(1)`` or\n ``C.f(1)`` is equivalent to calling ``f(C,1)`` where ``f`` is\n the underlying function.\n\n Note that the transformation from function object to (unbound or\n bound) method object happens each time the attribute is\n retrieved from the class or instance. In some cases, a fruitful\n optimization is to assign the attribute to a local variable and\n call that local variable. Also notice that this transformation\n only happens for user-defined functions; other callable objects\n (and all non-callable objects) are retrieved without\n transformation. It is also important to note that user-defined\n functions which are attributes of a class instance are not\n converted to bound methods; this *only* happens when the\n function is an attribute of the class.\n\n Generator functions\n A function or method which uses the ``yield`` statement (see\n section *The yield statement*) is called a *generator function*.\n Such a function, when called, always returns an iterator object\n which can be used to execute the body of the function: calling\n the iterator\'s ``next()`` method will cause the function to\n execute until it provides a value using the ``yield`` statement.\n When the function executes a ``return`` statement or falls off\n the end, a ``StopIteration`` exception is raised and the\n iterator will have reached the end of the set of values to be\n returned.\n\n Built-in functions\n A built-in function object is a wrapper around a C function.\n Examples of built-in functions are ``len()`` and ``math.sin()``\n (``math`` is a standard built-in module). The number and type of\n the arguments are determined by the C function. Special read-\n only attributes: ``__doc__`` is the function\'s documentation\n string, or ``None`` if unavailable; ``__name__`` is the\n function\'s name; ``__self__`` is set to ``None`` (but see the\n next item); ``__module__`` is the name of the module the\n function was defined in or ``None`` if unavailable.\n\n Built-in methods\n This is really a different disguise of a built-in function, this\n time containing an object passed to the C function as an\n implicit extra argument. An example of a built-in method is\n ``alist.append()``, assuming *alist* is a list object. In this\n case, the special read-only attribute ``__self__`` is set to the\n object denoted by *list*.\n\n Class Types\n Class types, or "new-style classes," are callable. These\n objects normally act as factories for new instances of\n themselves, but variations are possible for class types that\n override ``__new__()``. The arguments of the call are passed to\n ``__new__()`` and, in the typical case, to ``__init__()`` to\n initialize the new instance.\n\n Classic Classes\n Class objects are described below. When a class object is\n called, a new class instance (also described below) is created\n and returned. This implies a call to the class\'s ``__init__()``\n method if it has one. Any arguments are passed on to the\n ``__init__()`` method. If there is no ``__init__()`` method,\n the class must be called without arguments.\n\n Class instances\n Class instances are described below. 
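The per-access method creation mentioned above is why hoisting the lookup into a local variable can pay off; a small sketch (``Accumulator`` is a name invented for the example)::

    class Accumulator(object):
        def __init__(self):
            self.items = []
        def add(self, x):
            self.items.append(x)

    acc = Accumulator()
    acc.add is acc.add        # -> False: each lookup builds a fresh bound method

    add = acc.add             # bind once, outside the loop
    for i in range(1000):
        add(i)                # avoids re-creating the method object every iteration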
Class instances are\n callable only when the class has a ``__call__()`` method;\n ``x(arguments)`` is a shorthand for ``x.__call__(arguments)``.\n\nModules\n Modules are imported by the ``import`` statement (see section *The\n import statement*). A module object has a namespace implemented by\n a dictionary object (this is the dictionary referenced by the\n func_globals attribute of functions defined in the module).\n Attribute references are translated to lookups in this dictionary,\n e.g., ``m.x`` is equivalent to ``m.__dict__["x"]``. A module object\n does not contain the code object used to initialize the module\n (since it isn\'t needed once the initialization is done).\n\n Attribute assignment updates the module\'s namespace dictionary,\n e.g., ``m.x = 1`` is equivalent to ``m.__dict__["x"] = 1``.\n\n Special read-only attribute: ``__dict__`` is the module\'s namespace\n as a dictionary object.\n\n Predefined (writable) attributes: ``__name__`` is the module\'s\n name; ``__doc__`` is the module\'s documentation string, or ``None``\n if unavailable; ``__file__`` is the pathname of the file from which\n the module was loaded, if it was loaded from a file. The\n ``__file__`` attribute is not present for C modules that are\n statically linked into the interpreter; for extension modules\n loaded dynamically from a shared library, it is the pathname of the\n shared library file.\n\nClasses\n Both class types (new-style classes) and class objects (old-\n style/classic classes) are typically created by class definitions\n (see section *Class definitions*). A class has a namespace\n implemented by a dictionary object. Class attribute references are\n translated to lookups in this dictionary, e.g., ``C.x`` is\n translated to ``C.__dict__["x"]`` (although for new-style classes\n in particular there are a number of hooks which allow for other\n means of locating attributes). When the attribute name is not found\n there, the attribute search continues in the base classes. For\n old-style classes, the search is depth-first, left-to-right in the\n order of occurrence in the base class list. New-style classes use\n the more complex C3 method resolution order which behaves correctly\n even in the presence of \'diamond\' inheritance structures where\n there are multiple inheritance paths leading back to a common\n ancestor. Additional details on the C3 MRO used by new-style\n classes can be found in the documentation accompanying the 2.3\n release at http://www.python.org/download/releases/2.3/mro/.\n\n When a class attribute reference (for class ``C``, say) would yield\n a user-defined function object or an unbound user-defined method\n object whose associated class is either ``C`` or one of its base\n classes, it is transformed into an unbound user-defined method\n object whose ``im_class`` attribute is ``C``. When it would yield a\n class method object, it is transformed into a bound user-defined\n method object whose ``im_class`` and ``im_self`` attributes are\n both ``C``. 
When it would yield a static method object, it is\n transformed into the object wrapped by the static method object.\n See section *Implementing Descriptors* for another way in which\n attributes retrieved from a class may differ from those actually\n contained in its ``__dict__`` (note that only new-style classes\n support descriptors).\n\n Class attribute assignments update the class\'s dictionary, never\n the dictionary of a base class.\n\n A class object can be called (see above) to yield a class instance\n (see below).\n\n Special attributes: ``__name__`` is the class name; ``__module__``\n is the module name in which the class was defined; ``__dict__`` is\n the dictionary containing the class\'s namespace; ``__bases__`` is a\n tuple (possibly empty or a singleton) containing the base classes,\n in the order of their occurrence in the base class list;\n ``__doc__`` is the class\'s documentation string, or None if\n undefined.\n\nClass instances\n A class instance is created by calling a class object (see above).\n A class instance has a namespace implemented as a dictionary which\n is the first place in which attribute references are searched.\n When an attribute is not found there, and the instance\'s class has\n an attribute by that name, the search continues with the class\n attributes. If a class attribute is found that is a user-defined\n function object or an unbound user-defined method object whose\n associated class is the class (call it ``C``) of the instance for\n which the attribute reference was initiated or one of its bases, it\n is transformed into a bound user-defined method object whose\n ``im_class`` attribute is ``C`` and whose ``im_self`` attribute is\n the instance. Static method and class method objects are also\n transformed, as if they had been retrieved from class ``C``; see\n above under "Classes". See section *Implementing Descriptors* for\n another way in which attributes of a class retrieved via its\n instances may differ from the objects actually stored in the\n class\'s ``__dict__``. If no class attribute is found, and the\n object\'s class has a ``__getattr__()`` method, that is called to\n satisfy the lookup.\n\n Attribute assignments and deletions update the instance\'s\n dictionary, never a class\'s dictionary. If the class has a\n ``__setattr__()`` or ``__delattr__()`` method, this is called\n instead of updating the instance dictionary directly.\n\n Class instances can pretend to be numbers, sequences, or mappings\n if they have methods with certain special names. See section\n *Special method names*.\n\n Special attributes: ``__dict__`` is the attribute dictionary;\n ``__class__`` is the instance\'s class.\n\nFiles\n A file object represents an open file. File objects are created by\n the ``open()`` built-in function, and also by ``os.popen()``,\n ``os.fdopen()``, and the ``makefile()`` method of socket objects\n (and perhaps by other functions or methods provided by extension\n modules). The objects ``sys.stdin``, ``sys.stdout`` and\n ``sys.stderr`` are initialized to file objects corresponding to the\n interpreter\'s standard input, output and error streams. See *File\n Objects* for complete documentation of file objects.\n\nInternal types\n A few types used internally by the interpreter are exposed to the\n user. Their definitions may change with future versions of the\n interpreter, but they are mentioned here for completeness.\n\n Code objects\n Code objects represent *byte-compiled* executable Python code,\n or *bytecode*. 
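Returning to the class-instance attribute hooks described above, a minimal sketch of ``__getattr__()`` and ``__setattr__()`` (``Record`` is an invented example class)::

    class Record(object):
        def __getattr__(self, name):
            # called only when the normal instance/class lookup fails
            return 'missing:' + name
        def __setattr__(self, name, value):
            # called for every assignment instead of touching __dict__ directly
            self.__dict__[name.lower()] = value

    r = Record()
    r.Colour = 'red'
    r.__dict__                # -> {'colour': 'red'}
    r.colour                  # -> 'red'          (found in the instance dictionary)
    r.size                    # -> 'missing:size' (falls back to __getattr__)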
The difference between a code object and a\n function object is that the function object contains an explicit\n reference to the function\'s globals (the module in which it was\n defined), while a code object contains no context; also the\n default argument values are stored in the function object, not\n in the code object (because they represent values calculated at\n run-time). Unlike function objects, code objects are immutable\n and contain no references (directly or indirectly) to mutable\n objects.\n\n Special read-only attributes: ``co_name`` gives the function\n name; ``co_argcount`` is the number of positional arguments\n (including arguments with default values); ``co_nlocals`` is the\n number of local variables used by the function (including\n arguments); ``co_varnames`` is a tuple containing the names of\n the local variables (starting with the argument names);\n ``co_cellvars`` is a tuple containing the names of local\n variables that are referenced by nested functions;\n ``co_freevars`` is a tuple containing the names of free\n variables; ``co_code`` is a string representing the sequence of\n bytecode instructions; ``co_consts`` is a tuple containing the\n literals used by the bytecode; ``co_names`` is a tuple\n containing the names used by the bytecode; ``co_filename`` is\n the filename from which the code was compiled;\n ``co_firstlineno`` is the first line number of the function;\n ``co_lnotab`` is a string encoding the mapping from bytecode\n offsets to line numbers (for details see the source code of the\n interpreter); ``co_stacksize`` is the required stack size\n (including local variables); ``co_flags`` is an integer encoding\n a number of flags for the interpreter.\n\n The following flag bits are defined for ``co_flags``: bit\n ``0x04`` is set if the function uses the ``*arguments`` syntax\n to accept an arbitrary number of positional arguments; bit\n ``0x08`` is set if the function uses the ``**keywords`` syntax\n to accept arbitrary keyword arguments; bit ``0x20`` is set if\n the function is a generator.\n\n Future feature declarations (``from __future__ import\n division``) also use bits in ``co_flags`` to indicate whether a\n code object was compiled with a particular feature enabled: bit\n ``0x2000`` is set if the function was compiled with future\n division enabled; bits ``0x10`` and ``0x1000`` were used in\n earlier versions of Python.\n\n Other bits in ``co_flags`` are reserved for internal use.\n\n If a code object represents a function, the first item in\n ``co_consts`` is the documentation string of the function, or\n ``None`` if undefined.\n\n Frame objects\n Frame objects represent execution frames. 
They may occur in\n traceback objects (see below).\n\n Special read-only attributes: ``f_back`` is to the previous\n stack frame (towards the caller), or ``None`` if this is the\n bottom stack frame; ``f_code`` is the code object being executed\n in this frame; ``f_locals`` is the dictionary used to look up\n local variables; ``f_globals`` is used for global variables;\n ``f_builtins`` is used for built-in (intrinsic) names;\n ``f_restricted`` is a flag indicating whether the function is\n executing in restricted execution mode; ``f_lasti`` gives the\n precise instruction (this is an index into the bytecode string\n of the code object).\n\n Special writable attributes: ``f_trace``, if not ``None``, is a\n function called at the start of each source code line (this is\n used by the debugger); ``f_exc_type``, ``f_exc_value``,\n ``f_exc_traceback`` represent the last exception raised in the\n parent frame provided another exception was ever raised in the\n current frame (in all other cases they are None); ``f_lineno``\n is the current line number of the frame --- writing to this from\n within a trace function jumps to the given line (only for the\n bottom-most frame). A debugger can implement a Jump command\n (aka Set Next Statement) by writing to f_lineno.\n\n Traceback objects\n Traceback objects represent a stack trace of an exception. A\n traceback object is created when an exception occurs. When the\n search for an exception handler unwinds the execution stack, at\n each unwound level a traceback object is inserted in front of\n the current traceback. When an exception handler is entered,\n the stack trace is made available to the program. (See section\n *The try statement*.) It is accessible as ``sys.exc_traceback``,\n and also as the third item of the tuple returned by\n ``sys.exc_info()``. The latter is the preferred interface,\n since it works correctly when the program is using multiple\n threads. When the program contains no suitable handler, the\n stack trace is written (nicely formatted) to the standard error\n stream; if the interpreter is interactive, it is also made\n available to the user as ``sys.last_traceback``.\n\n Special read-only attributes: ``tb_next`` is the next level in\n the stack trace (towards the frame where the exception\n occurred), or ``None`` if there is no next level; ``tb_frame``\n points to the execution frame of the current level;\n ``tb_lineno`` gives the line number where the exception\n occurred; ``tb_lasti`` indicates the precise instruction. The\n line number and last instruction in the traceback may differ\n from the line number of its frame object if the exception\n occurred in a ``try`` statement with no matching except clause\n or with a finally clause.\n\n Slice objects\n Slice objects are used to represent slices when *extended slice\n syntax* is used. This is a slice using two colons, or multiple\n slices or ellipses separated by commas, e.g., ``a[i:j:step]``,\n ``a[i:j, k:l]``, or ``a[..., i:j]``. They are also created by\n the built-in ``slice()`` function.\n\n Special read-only attributes: ``start`` is the lower bound;\n ``stop`` is the upper bound; ``step`` is the step value; each is\n ``None`` if omitted. These attributes can have any type.\n\n Slice objects support one method:\n\n slice.indices(self, length)\n\n This method takes a single integer argument *length* and\n computes information about the extended slice that the slice\n object would describe if applied to a sequence of *length*\n items. 
It returns a tuple of three integers; respectively\n these are the *start* and *stop* indices and the *step* or\n stride length of the slice. Missing or out-of-bounds indices\n are handled in a manner consistent with regular slices.\n\n New in version 2.3.\n\n Static method objects\n Static method objects provide a way of defeating the\n transformation of function objects to method objects described\n above. A static method object is a wrapper around any other\n object, usually a user-defined method object. When a static\n method object is retrieved from a class or a class instance, the\n object actually returned is the wrapped object, which is not\n subject to any further transformation. Static method objects are\n not themselves callable, although the objects they wrap usually\n are. Static method objects are created by the built-in\n ``staticmethod()`` constructor.\n\n Class method objects\n A class method object, like a static method object, is a wrapper\n around another object that alters the way in which that object\n is retrieved from classes and class instances. The behaviour of\n class method objects upon such retrieval is described above,\n under "User-defined methods". Class method objects are created\n by the built-in ``classmethod()`` constructor.\n', + 'types': u'\nThe standard type hierarchy\n***************************\n\nBelow is a list of the types that are built into Python. Extension\nmodules (written in C, Java, or other languages, depending on the\nimplementation) can define additional types. Future versions of\nPython may add types to the type hierarchy (e.g., rational numbers,\nefficiently stored arrays of integers, etc.).\n\nSome of the type descriptions below contain a paragraph listing\n\'special attributes.\' These are attributes that provide access to the\nimplementation and are not intended for general use. Their definition\nmay change in the future.\n\nNone\n This type has a single value. There is a single object with this\n value. This object is accessed through the built-in name ``None``.\n It is used to signify the absence of a value in many situations,\n e.g., it is returned from functions that don\'t explicitly return\n anything. Its truth value is false.\n\nNotImplemented\n This type has a single value. There is a single object with this\n value. This object is accessed through the built-in name\n ``NotImplemented``. Numeric methods and rich comparison methods may\n return this value if they do not implement the operation for the\n operands provided. (The interpreter will then try the reflected\n operation, or some other fallback, depending on the operator.) Its\n truth value is true.\n\nEllipsis\n This type has a single value. There is a single object with this\n value. This object is accessed through the built-in name\n ``Ellipsis``. It is used to indicate the presence of the ``...``\n syntax in a slice. Its truth value is true.\n\n``numbers.Number``\n These are created by numeric literals and returned as results by\n arithmetic operators and arithmetic built-in functions. 
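As a sketch of the static method and class method wrappers described just above (``Shape`` is an invented example class)::

    class Shape(object):
        @staticmethod
        def unit():             # retrieving it yields the plain wrapped function
            return 1.0

        @classmethod
        def describe(cls):      # receives the class itself as the first argument
            return cls.__name__

    Shape.unit()                # -> 1.0: no instance or class is passed implicitly
    Shape().describe()          # -> 'Shape'
    Shape.describe()            # -> 'Shape': the bound im_self is the class itself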
Numeric\n objects are immutable; once created their value never changes.\n Python numbers are of course strongly related to mathematical\n numbers, but subject to the limitations of numerical representation\n in computers.\n\n Python distinguishes between integers, floating point numbers, and\n complex numbers:\n\n ``numbers.Integral``\n These represent elements from the mathematical set of integers\n (positive and negative).\n\n There are three types of integers:\n\n Plain integers\n These represent numbers in the range -2147483648 through\n 2147483647. (The range may be larger on machines with a\n larger natural word size, but not smaller.) When the result\n of an operation would fall outside this range, the result is\n normally returned as a long integer (in some cases, the\n exception ``OverflowError`` is raised instead). For the\n purpose of shift and mask operations, integers are assumed to\n have a binary, 2\'s complement notation using 32 or more bits,\n and hiding no bits from the user (i.e., all 4294967296\n different bit patterns correspond to different values).\n\n Long integers\n These represent numbers in an unlimited range, subject to\n available (virtual) memory only. For the purpose of shift\n and mask operations, a binary representation is assumed, and\n negative numbers are represented in a variant of 2\'s\n complement which gives the illusion of an infinite string of\n sign bits extending to the left.\n\n Booleans\n These represent the truth values False and True. The two\n objects representing the values False and True are the only\n Boolean objects. The Boolean type is a subtype of plain\n integers, and Boolean values behave like the values 0 and 1,\n respectively, in almost all contexts, the exception being\n that when converted to a string, the strings ``"False"`` or\n ``"True"`` are returned, respectively.\n\n The rules for integer representation are intended to give the\n most meaningful interpretation of shift and mask operations\n involving negative integers and the least surprises when\n switching between the plain and long integer domains. Any\n operation, if it yields a result in the plain integer domain,\n will yield the same result in the long integer domain or when\n using mixed operands. The switch between domains is transparent\n to the programmer.\n\n ``numbers.Real`` (``float``)\n These represent machine-level double precision floating point\n numbers. You are at the mercy of the underlying machine\n architecture (and C or Java implementation) for the accepted\n range and handling of overflow. Python does not support single-\n precision floating point numbers; the savings in processor and\n memory usage that are usually the reason for using these is\n dwarfed by the overhead of using objects in Python, so there is\n no reason to complicate the language with two kinds of floating\n point numbers.\n\n ``numbers.Complex``\n These represent complex numbers as a pair of machine-level\n double precision floating point numbers. The same caveats apply\n as for floating point numbers. The real and imaginary parts of a\n complex number ``z`` can be retrieved through the read-only\n attributes ``z.real`` and ``z.imag``.\n\nSequences\n These represent finite ordered sets indexed by non-negative\n numbers. The built-in function ``len()`` returns the number of\n items of a sequence. When the length of a sequence is *n*, the\n index set contains the numbers 0, 1, ..., *n*-1. 
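A brief sketch of the floating point and complex behaviour described above::

    z = 3.0 - 4.0j
    z.real                  # -> 3.0
    z.imag                  # -> -4.0
    abs(z)                  # -> 5.0

    0.1 + 0.2               # not exactly 0.3: machine-level double precision rounding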
Item *i* of\n sequence *a* is selected by ``a[i]``.\n\n Sequences also support slicing: ``a[i:j]`` selects all items with\n index *k* such that *i* ``<=`` *k* ``<`` *j*. When used as an\n expression, a slice is a sequence of the same type. This implies\n that the index set is renumbered so that it starts at 0.\n\n Some sequences also support "extended slicing" with a third "step"\n parameter: ``a[i:j:k]`` selects all items of *a* with index *x*\n where ``x = i + n*k``, *n* ``>=`` ``0`` and *i* ``<=`` *x* ``<``\n *j*.\n\n Sequences are distinguished according to their mutability:\n\n Immutable sequences\n An object of an immutable sequence type cannot change once it is\n created. (If the object contains references to other objects,\n these other objects may be mutable and may be changed; however,\n the collection of objects directly referenced by an immutable\n object cannot change.)\n\n The following types are immutable sequences:\n\n Strings\n The items of a string are characters. There is no separate\n character type; a character is represented by a string of one\n item. Characters represent (at least) 8-bit bytes. The\n built-in functions ``chr()`` and ``ord()`` convert between\n characters and nonnegative integers representing the byte\n values. Bytes with the values 0-127 usually represent the\n corresponding ASCII values, but the interpretation of values\n is up to the program. The string data type is also used to\n represent arrays of bytes, e.g., to hold data read from a\n file.\n\n (On systems whose native character set is not ASCII, strings\n may use EBCDIC in their internal representation, provided the\n functions ``chr()`` and ``ord()`` implement a mapping between\n ASCII and EBCDIC, and string comparison preserves the ASCII\n order. Or perhaps someone can propose a better rule?)\n\n Unicode\n The items of a Unicode object are Unicode code units. A\n Unicode code unit is represented by a Unicode object of one\n item and can hold either a 16-bit or 32-bit value\n representing a Unicode ordinal (the maximum value for the\n ordinal is given in ``sys.maxunicode``, and depends on how\n Python is configured at compile time). Surrogate pairs may\n be present in the Unicode object, and will be reported as two\n separate items. The built-in functions ``unichr()`` and\n ``ord()`` convert between code units and nonnegative integers\n representing the Unicode ordinals as defined in the Unicode\n Standard 3.0. Conversion from and to other encodings are\n possible through the Unicode method ``encode()`` and the\n built-in function ``unicode()``.\n\n Tuples\n The items of a tuple are arbitrary Python objects. Tuples of\n two or more items are formed by comma-separated lists of\n expressions. A tuple of one item (a \'singleton\') can be\n formed by affixing a comma to an expression (an expression by\n itself does not create a tuple, since parentheses must be\n usable for grouping of expressions). An empty tuple can be\n formed by an empty pair of parentheses.\n\n Mutable sequences\n Mutable sequences can be changed after they are created. The\n subscription and slicing notations can be used as the target of\n assignment and ``del`` (delete) statements.\n\n There are currently two intrinsic mutable sequence types:\n\n Lists\n The items of a list are arbitrary Python objects. Lists are\n formed by placing a comma-separated list of expressions in\n square brackets. 
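A short sketch of the string and Unicode conversions described above (Python 2 spellings)::

    ord('A')                 # -> 65
    chr(65)                  # -> 'A'            a byte string of one item

    u = unichr(0x20AC)       # -> u'\u20ac'      a one-item Unicode object
    ord(u)                   # -> 8364
    u.encode('utf-8')        # -> '\xe2\x82\xac' byte string in the chosen encoding
    unicode('abc')           # -> u'abc'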
(Note that there are no special cases needed\n to form lists of length 0 or 1.)\n\n Byte Arrays\n A bytearray object is a mutable array. They are created by\n the built-in ``bytearray()`` constructor. Aside from being\n mutable (and hence unhashable), byte arrays otherwise provide\n the same interface and functionality as immutable bytes\n objects.\n\n The extension module ``array`` provides an additional example of\n a mutable sequence type.\n\nSet types\n These represent unordered, finite sets of unique, immutable\n objects. As such, they cannot be indexed by any subscript. However,\n they can be iterated over, and the built-in function ``len()``\n returns the number of items in a set. Common uses for sets are fast\n membership testing, removing duplicates from a sequence, and\n computing mathematical operations such as intersection, union,\n difference, and symmetric difference.\n\n For set elements, the same immutability rules apply as for\n dictionary keys. Note that numeric types obey the normal rules for\n numeric comparison: if two numbers compare equal (e.g., ``1`` and\n ``1.0``), only one of them can be contained in a set.\n\n There are currently two intrinsic set types:\n\n Sets\n These represent a mutable set. They are created by the built-in\n ``set()`` constructor and can be modified afterwards by several\n methods, such as ``add()``.\n\n Frozen sets\n These represent an immutable set. They are created by the\n built-in ``frozenset()`` constructor. As a frozenset is\n immutable and *hashable*, it can be used again as an element of\n another set, or as a dictionary key.\n\nMappings\n These represent finite sets of objects indexed by arbitrary index\n sets. The subscript notation ``a[k]`` selects the item indexed by\n ``k`` from the mapping ``a``; this can be used in expressions and\n as the target of assignments or ``del`` statements. The built-in\n function ``len()`` returns the number of items in a mapping.\n\n There is currently a single intrinsic mapping type:\n\n Dictionaries\n These represent finite sets of objects indexed by nearly\n arbitrary values. The only types of values not acceptable as\n keys are values containing lists or dictionaries or other\n mutable types that are compared by value rather than by object\n identity, the reason being that the efficient implementation of\n dictionaries requires a key\'s hash value to remain constant.\n Numeric types used for keys obey the normal rules for numeric\n comparison: if two numbers compare equal (e.g., ``1`` and\n ``1.0``) then they can be used interchangeably to index the same\n dictionary entry.\n\n Dictionaries are mutable; they can be created by the ``{...}``\n notation (see section *Dictionary displays*).\n\n The extension modules ``dbm``, ``gdbm``, and ``bsddb`` provide\n additional examples of mapping types.\n\nCallable types\n These are the types to which the function call operation (see\n section *Calls*) can be applied:\n\n User-defined functions\n A user-defined function object is created by a function\n definition (see section *Function definitions*). 
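For instance, a function defined with ``def`` is itself an object, and the call must supply a matching argument list (``add`` is an invented example)::

    def add(a, b):
        return a + b

    add            # the function object itself
    add(2, 3)      # -> 5
    # add(2) would raise TypeError: the argument list must match the parameter list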
It should be\n called with an argument list containing the same number of items\n as the function\'s formal parameter list.\n\n Special attributes:\n\n +-------------------------+---------------------------------+-------------+\n | Attribute | Meaning | |\n +=========================+=================================+=============+\n | ``func_doc`` | The function\'s documentation | Writable |\n | | string, or ``None`` if | |\n | | unavailable | |\n +-------------------------+---------------------------------+-------------+\n | ``__doc__`` | Another way of spelling | Writable |\n | | ``func_doc`` | |\n +-------------------------+---------------------------------+-------------+\n | ``func_name`` | The function\'s name | Writable |\n +-------------------------+---------------------------------+-------------+\n | ``__name__`` | Another way of spelling | Writable |\n | | ``func_name`` | |\n +-------------------------+---------------------------------+-------------+\n | ``__module__`` | The name of the module the | Writable |\n | | function was defined in, or | |\n | | ``None`` if unavailable. | |\n +-------------------------+---------------------------------+-------------+\n | ``func_defaults`` | A tuple containing default | Writable |\n | | argument values for those | |\n | | arguments that have defaults, | |\n | | or ``None`` if no arguments | |\n | | have a default value | |\n +-------------------------+---------------------------------+-------------+\n | ``func_code`` | The code object representing | Writable |\n | | the compiled function body. | |\n +-------------------------+---------------------------------+-------------+\n | ``func_globals`` | A reference to the dictionary | Read-only |\n | | that holds the function\'s | |\n | | global variables --- the global | |\n | | namespace of the module in | |\n | | which the function was defined. | |\n +-------------------------+---------------------------------+-------------+\n | ``func_dict`` | The namespace supporting | Writable |\n | | arbitrary function attributes. | |\n +-------------------------+---------------------------------+-------------+\n | ``func_closure`` | ``None`` or a tuple of cells | Read-only |\n | | that contain bindings for the | |\n | | function\'s free variables. | |\n +-------------------------+---------------------------------+-------------+\n\n Most of the attributes labelled "Writable" check the type of the\n assigned value.\n\n Changed in version 2.4: ``func_name`` is now writable.\n\n Function objects also support getting and setting arbitrary\n attributes, which can be used, for example, to attach metadata\n to functions. Regular attribute dot-notation is used to get and\n set such attributes. *Note that the current implementation only\n supports function attributes on user-defined functions. 
Function\n attributes on built-in functions may be supported in the\n future.*\n\n Additional information about a function\'s definition can be\n retrieved from its code object; see the description of internal\n types below.\n\n User-defined methods\n A user-defined method object combines a class, a class instance\n (or ``None``) and any callable object (normally a user-defined\n function).\n\n Special read-only attributes: ``im_self`` is the class instance\n object, ``im_func`` is the function object; ``im_class`` is the\n class of ``im_self`` for bound methods or the class that asked\n for the method for unbound methods; ``__doc__`` is the method\'s\n documentation (same as ``im_func.__doc__``); ``__name__`` is the\n method name (same as ``im_func.__name__``); ``__module__`` is\n the name of the module the method was defined in, or ``None`` if\n unavailable.\n\n Changed in version 2.2: ``im_self`` used to refer to the class\n that defined the method.\n\n Changed in version 2.6: For 3.0 forward-compatibility,\n ``im_func`` is also available as ``__func__``, and ``im_self``\n as ``__self__``.\n\n Methods also support accessing (but not setting) the arbitrary\n function attributes on the underlying function object.\n\n User-defined method objects may be created when getting an\n attribute of a class (perhaps via an instance of that class), if\n that attribute is a user-defined function object, an unbound\n user-defined method object, or a class method object. When the\n attribute is a user-defined method object, a new method object\n is only created if the class from which it is being retrieved is\n the same as, or a derived class of, the class stored in the\n original method object; otherwise, the original method object is\n used as it is.\n\n When a user-defined method object is created by retrieving a\n user-defined function object from a class, its ``im_self``\n attribute is ``None`` and the method object is said to be\n unbound. When one is created by retrieving a user-defined\n function object from a class via one of its instances, its\n ``im_self`` attribute is the instance, and the method object is\n said to be bound. In either case, the new method\'s ``im_class``\n attribute is the class from which the retrieval takes place, and\n its ``im_func`` attribute is the original function object.\n\n When a user-defined method object is created by retrieving\n another method object from a class or instance, the behaviour is\n the same as for a function object, except that the ``im_func``\n attribute of the new instance is not the original method object\n but its ``im_func`` attribute.\n\n When a user-defined method object is created by retrieving a\n class method object from a class or instance, its ``im_self``\n attribute is the class itself (the same as the ``im_class``\n attribute), and its ``im_func`` attribute is the function object\n underlying the class method.\n\n When an unbound user-defined method object is called, the\n underlying function (``im_func``) is called, with the\n restriction that the first argument must be an instance of the\n proper class (``im_class``) or of a derived class thereof.\n\n When a bound user-defined method object is called, the\n underlying function (``im_func``) is called, inserting the class\n instance (``im_self``) in front of the argument list. 
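A sketch of the function and method attributes described above (Python 2 spellings such as ``im_self`` and ``func_defaults``; ``Greeter`` is an invented example class)::

    class Greeter(object):
        def greet(self, name='world'):
            """Return a greeting."""
            return 'hello ' + name

    g = Greeter()
    bound = g.greet
    bound.im_self is g                 # -> True: the instance it is bound to
    bound.im_class is Greeter          # -> True: the class the method came from
    bound.im_func.func_defaults        # -> ('world',)
    bound.__doc__                      # -> 'Return a greeting.'

    Greeter.greet.im_self is None      # -> True: an unbound method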
For\n instance, when ``C`` is a class which contains a definition for\n a function ``f()``, and ``x`` is an instance of ``C``, calling\n ``x.f(1)`` is equivalent to calling ``C.f(x, 1)``.\n\n When a user-defined method object is derived from a class method\n object, the "class instance" stored in ``im_self`` will actually\n be the class itself, so that calling either ``x.f(1)`` or\n ``C.f(1)`` is equivalent to calling ``f(C,1)`` where ``f`` is\n the underlying function.\n\n Note that the transformation from function object to (unbound or\n bound) method object happens each time the attribute is\n retrieved from the class or instance. In some cases, a fruitful\n optimization is to assign the attribute to a local variable and\n call that local variable. Also notice that this transformation\n only happens for user-defined functions; other callable objects\n (and all non-callable objects) are retrieved without\n transformation. It is also important to note that user-defined\n functions which are attributes of a class instance are not\n converted to bound methods; this *only* happens when the\n function is an attribute of the class.\n\n Generator functions\n A function or method which uses the ``yield`` statement (see\n section *The yield statement*) is called a *generator function*.\n Such a function, when called, always returns an iterator object\n which can be used to execute the body of the function: calling\n the iterator\'s ``next()`` method will cause the function to\n execute until it provides a value using the ``yield`` statement.\n When the function executes a ``return`` statement or falls off\n the end, a ``StopIteration`` exception is raised and the\n iterator will have reached the end of the set of values to be\n returned.\n\n Built-in functions\n A built-in function object is a wrapper around a C function.\n Examples of built-in functions are ``len()`` and ``math.sin()``\n (``math`` is a standard built-in module). The number and type of\n the arguments are determined by the C function. Special read-\n only attributes: ``__doc__`` is the function\'s documentation\n string, or ``None`` if unavailable; ``__name__`` is the\n function\'s name; ``__self__`` is set to ``None`` (but see the\n next item); ``__module__`` is the name of the module the\n function was defined in or ``None`` if unavailable.\n\n Built-in methods\n This is really a different disguise of a built-in function, this\n time containing an object passed to the C function as an\n implicit extra argument. An example of a built-in method is\n ``alist.append()``, assuming *alist* is a list object. In this\n case, the special read-only attribute ``__self__`` is set to the\n object denoted by *alist*.\n\n Class Types\n Class types, or "new-style classes," are callable. These\n objects normally act as factories for new instances of\n themselves, but variations are possible for class types that\n override ``__new__()``. The arguments of the call are passed to\n ``__new__()`` and, in the typical case, to ``__init__()`` to\n initialize the new instance.\n\n Classic Classes\n Class objects are described below. When a class object is\n called, a new class instance (also described below) is created\n and returned. This implies a call to the class\'s ``__init__()``\n method if it has one. Any arguments are passed on to the\n ``__init__()`` method. If there is no ``__init__()`` method,\n the class must be called without arguments.\n\n Class instances\n Class instances are described below. 
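A sketch of the generator-function behaviour described above (``countdown`` is an invented example)::

    def countdown(n):
        while n > 0:
            yield n           # suspends here and hands n to the caller
            n -= 1

    it = countdown(3)         # calling the function only builds an iterator
    it.next()                 # -> 3
    it.next()                 # -> 2
    it.next()                 # -> 1
    it.next()                 # raises StopIteration: the body fell off the end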
Class instances are\n callable only when the class has a ``__call__()`` method;\n ``x(arguments)`` is a shorthand for ``x.__call__(arguments)``.\n\nModules\n Modules are imported by the ``import`` statement (see section *The\n import statement*). A module object has a namespace implemented by\n a dictionary object (this is the dictionary referenced by the\n func_globals attribute of functions defined in the module).\n Attribute references are translated to lookups in this dictionary,\n e.g., ``m.x`` is equivalent to ``m.__dict__["x"]``. A module object\n does not contain the code object used to initialize the module\n (since it isn\'t needed once the initialization is done).\n\n Attribute assignment updates the module\'s namespace dictionary,\n e.g., ``m.x = 1`` is equivalent to ``m.__dict__["x"] = 1``.\n\n Special read-only attribute: ``__dict__`` is the module\'s namespace\n as a dictionary object.\n\n **CPython implementation detail:** Because of the way CPython\n clears module dictionaries, the module dictionary will be cleared\n when the module falls out of scope even if the dictionary still has\n live references. To avoid this, copy the dictionary or keep the\n module around while using its dictionary directly.\n\n Predefined (writable) attributes: ``__name__`` is the module\'s\n name; ``__doc__`` is the module\'s documentation string, or ``None``\n if unavailable; ``__file__`` is the pathname of the file from which\n the module was loaded, if it was loaded from a file. The\n ``__file__`` attribute is not present for C modules that are\n statically linked into the interpreter; for extension modules\n loaded dynamically from a shared library, it is the pathname of the\n shared library file.\n\nClasses\n Both class types (new-style classes) and class objects (old-\n style/classic classes) are typically created by class definitions\n (see section *Class definitions*). A class has a namespace\n implemented by a dictionary object. Class attribute references are\n translated to lookups in this dictionary, e.g., ``C.x`` is\n translated to ``C.__dict__["x"]`` (although for new-style classes\n in particular there are a number of hooks which allow for other\n means of locating attributes). When the attribute name is not found\n there, the attribute search continues in the base classes. For\n old-style classes, the search is depth-first, left-to-right in the\n order of occurrence in the base class list. New-style classes use\n the more complex C3 method resolution order which behaves correctly\n even in the presence of \'diamond\' inheritance structures where\n there are multiple inheritance paths leading back to a common\n ancestor. Additional details on the C3 MRO used by new-style\n classes can be found in the documentation accompanying the 2.3\n release at http://www.python.org/download/releases/2.3/mro/.\n\n When a class attribute reference (for class ``C``, say) would yield\n a user-defined function object or an unbound user-defined method\n object whose associated class is either ``C`` or one of its base\n classes, it is transformed into an unbound user-defined method\n object whose ``im_class`` attribute is ``C``. When it would yield a\n class method object, it is transformed into a bound user-defined\n method object whose ``im_class`` and ``im_self`` attributes are\n both ``C``. 
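A sketch of the C3 method resolution order mentioned above, for a 'diamond' hierarchy of new-style classes::

    class A(object):
        def who(self):
            return 'A'

    class B(A):
        pass

    class C(A):
        def who(self):
            return 'C'

    class D(B, C):
        pass

    [cls.__name__ for cls in D.__mro__]   # -> ['D', 'B', 'C', 'A', 'object']
    D().who()                             # -> 'C': C is searched before A
    # an old-style depth-first search would have reached A.who() via B first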
When it would yield a static method object, it is\n transformed into the object wrapped by the static method object.\n See section *Implementing Descriptors* for another way in which\n attributes retrieved from a class may differ from those actually\n contained in its ``__dict__`` (note that only new-style classes\n support descriptors).\n\n Class attribute assignments update the class\'s dictionary, never\n the dictionary of a base class.\n\n A class object can be called (see above) to yield a class instance\n (see below).\n\n Special attributes: ``__name__`` is the class name; ``__module__``\n is the module name in which the class was defined; ``__dict__`` is\n the dictionary containing the class\'s namespace; ``__bases__`` is a\n tuple (possibly empty or a singleton) containing the base classes,\n in the order of their occurrence in the base class list;\n ``__doc__`` is the class\'s documentation string, or None if\n undefined.\n\nClass instances\n A class instance is created by calling a class object (see above).\n A class instance has a namespace implemented as a dictionary which\n is the first place in which attribute references are searched.\n When an attribute is not found there, and the instance\'s class has\n an attribute by that name, the search continues with the class\n attributes. If a class attribute is found that is a user-defined\n function object or an unbound user-defined method object whose\n associated class is the class (call it ``C``) of the instance for\n which the attribute reference was initiated or one of its bases, it\n is transformed into a bound user-defined method object whose\n ``im_class`` attribute is ``C`` and whose ``im_self`` attribute is\n the instance. Static method and class method objects are also\n transformed, as if they had been retrieved from class ``C``; see\n above under "Classes". See section *Implementing Descriptors* for\n another way in which attributes of a class retrieved via its\n instances may differ from the objects actually stored in the\n class\'s ``__dict__``. If no class attribute is found, and the\n object\'s class has a ``__getattr__()`` method, that is called to\n satisfy the lookup.\n\n Attribute assignments and deletions update the instance\'s\n dictionary, never a class\'s dictionary. If the class has a\n ``__setattr__()`` or ``__delattr__()`` method, this is called\n instead of updating the instance dictionary directly.\n\n Class instances can pretend to be numbers, sequences, or mappings\n if they have methods with certain special names. See section\n *Special method names*.\n\n Special attributes: ``__dict__`` is the attribute dictionary;\n ``__class__`` is the instance\'s class.\n\nFiles\n A file object represents an open file. File objects are created by\n the ``open()`` built-in function, and also by ``os.popen()``,\n ``os.fdopen()``, and the ``makefile()`` method of socket objects\n (and perhaps by other functions or methods provided by extension\n modules). The objects ``sys.stdin``, ``sys.stdout`` and\n ``sys.stderr`` are initialized to file objects corresponding to the\n interpreter\'s standard input, output and error streams. See *File\n Objects* for complete documentation of file objects.\n\nInternal types\n A few types used internally by the interpreter are exposed to the\n user. Their definitions may change with future versions of the\n interpreter, but they are mentioned here for completeness.\n\n Code objects\n Code objects represent *byte-compiled* executable Python code,\n or *bytecode*. 
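A minimal sketch of instance attribute lookup and the ``__getattr__()`` fallback described above (Python 2.x; the class name ``Record`` is hypothetical):

    class Record(object):
        def __init__(self):
            self.x = 1                       # lands in the instance __dict__
        def __getattr__(self, name):
            # called only when normal lookup (instance, then class) fails
            return 'missing:' + name

    r = Record()
    assert r.__dict__ == {'x': 1}
    assert r.x == 1                          # found in the instance dictionary
    assert r.y == 'missing:y'                # falls back to __getattr__()

    r.y = 2                                  # assignment updates the instance dict,
    assert r.__dict__ == {'x': 1, 'y': 2}    # never the class's dictionary
    assert 'y' not in Record.__dict__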
The difference between a code object and a\n function object is that the function object contains an explicit\n reference to the function\'s globals (the module in which it was\n defined), while a code object contains no context; also the\n default argument values are stored in the function object, not\n in the code object (because they represent values calculated at\n run-time). Unlike function objects, code objects are immutable\n and contain no references (directly or indirectly) to mutable\n objects.\n\n Special read-only attributes: ``co_name`` gives the function\n name; ``co_argcount`` is the number of positional arguments\n (including arguments with default values); ``co_nlocals`` is the\n number of local variables used by the function (including\n arguments); ``co_varnames`` is a tuple containing the names of\n the local variables (starting with the argument names);\n ``co_cellvars`` is a tuple containing the names of local\n variables that are referenced by nested functions;\n ``co_freevars`` is a tuple containing the names of free\n variables; ``co_code`` is a string representing the sequence of\n bytecode instructions; ``co_consts`` is a tuple containing the\n literals used by the bytecode; ``co_names`` is a tuple\n containing the names used by the bytecode; ``co_filename`` is\n the filename from which the code was compiled;\n ``co_firstlineno`` is the first line number of the function;\n ``co_lnotab`` is a string encoding the mapping from bytecode\n offsets to line numbers (for details see the source code of the\n interpreter); ``co_stacksize`` is the required stack size\n (including local variables); ``co_flags`` is an integer encoding\n a number of flags for the interpreter.\n\n The following flag bits are defined for ``co_flags``: bit\n ``0x04`` is set if the function uses the ``*arguments`` syntax\n to accept an arbitrary number of positional arguments; bit\n ``0x08`` is set if the function uses the ``**keywords`` syntax\n to accept arbitrary keyword arguments; bit ``0x20`` is set if\n the function is a generator.\n\n Future feature declarations (``from __future__ import\n division``) also use bits in ``co_flags`` to indicate whether a\n code object was compiled with a particular feature enabled: bit\n ``0x2000`` is set if the function was compiled with future\n division enabled; bits ``0x10`` and ``0x1000`` were used in\n earlier versions of Python.\n\n Other bits in ``co_flags`` are reserved for internal use.\n\n If a code object represents a function, the first item in\n ``co_consts`` is the documentation string of the function, or\n ``None`` if undefined.\n\n Frame objects\n Frame objects represent execution frames. 
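A short sketch of inspecting the code-object attributes and ``co_flags`` bits listed above, assuming Python 2.x (the function ``f`` is only an example):

    def f(a, b=2, *args, **kwargs):
        yield a + b

    code = f.func_code                 # the code object behind the function
    assert code.co_name == 'f'
    assert code.co_argcount == 2       # positional arguments, including defaults
    assert code.co_varnames[:2] == ('a', 'b')
    assert code.co_flags & 0x04        # uses the *arguments syntax
    assert code.co_flags & 0x08        # uses the **keywords syntax
    assert code.co_flags & 0x20        # compiled as a generator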
They may occur in\n traceback objects (see below).\n\n Special read-only attributes: ``f_back`` is to the previous\n stack frame (towards the caller), or ``None`` if this is the\n bottom stack frame; ``f_code`` is the code object being executed\n in this frame; ``f_locals`` is the dictionary used to look up\n local variables; ``f_globals`` is used for global variables;\n ``f_builtins`` is used for built-in (intrinsic) names;\n ``f_restricted`` is a flag indicating whether the function is\n executing in restricted execution mode; ``f_lasti`` gives the\n precise instruction (this is an index into the bytecode string\n of the code object).\n\n Special writable attributes: ``f_trace``, if not ``None``, is a\n function called at the start of each source code line (this is\n used by the debugger); ``f_exc_type``, ``f_exc_value``,\n ``f_exc_traceback`` represent the last exception raised in the\n parent frame provided another exception was ever raised in the\n current frame (in all other cases they are None); ``f_lineno``\n is the current line number of the frame --- writing to this from\n within a trace function jumps to the given line (only for the\n bottom-most frame). A debugger can implement a Jump command\n (aka Set Next Statement) by writing to f_lineno.\n\n Traceback objects\n Traceback objects represent a stack trace of an exception. A\n traceback object is created when an exception occurs. When the\n search for an exception handler unwinds the execution stack, at\n each unwound level a traceback object is inserted in front of\n the current traceback. When an exception handler is entered,\n the stack trace is made available to the program. (See section\n *The try statement*.) It is accessible as ``sys.exc_traceback``,\n and also as the third item of the tuple returned by\n ``sys.exc_info()``. The latter is the preferred interface,\n since it works correctly when the program is using multiple\n threads. When the program contains no suitable handler, the\n stack trace is written (nicely formatted) to the standard error\n stream; if the interpreter is interactive, it is also made\n available to the user as ``sys.last_traceback``.\n\n Special read-only attributes: ``tb_next`` is the next level in\n the stack trace (towards the frame where the exception\n occurred), or ``None`` if there is no next level; ``tb_frame``\n points to the execution frame of the current level;\n ``tb_lineno`` gives the line number where the exception\n occurred; ``tb_lasti`` indicates the precise instruction. The\n line number and last instruction in the traceback may differ\n from the line number of its frame object if the exception\n occurred in a ``try`` statement with no matching except clause\n or with a finally clause.\n\n Slice objects\n Slice objects are used to represent slices when *extended slice\n syntax* is used. This is a slice using two colons, or multiple\n slices or ellipses separated by commas, e.g., ``a[i:j:step]``,\n ``a[i:j, k:l]``, or ``a[..., i:j]``. They are also created by\n the built-in ``slice()`` function.\n\n Special read-only attributes: ``start`` is the lower bound;\n ``stop`` is the upper bound; ``step`` is the step value; each is\n ``None`` if omitted. These attributes can have any type.\n\n Slice objects support one method:\n\n slice.indices(self, length)\n\n This method takes a single integer argument *length* and\n computes information about the extended slice that the slice\n object would describe if applied to a sequence of *length*\n items. 
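Before the slice-object details continue, a small sketch of the frame and traceback attributes described above (Python 2.x; ``boom`` and ``caller`` are hypothetical names):

    import sys

    def boom():
        raise ValueError('oops')

    def caller():
        try:
            boom()
        except ValueError:
            return sys.exc_info()[2]          # preferred over sys.exc_traceback

    tb = caller()
    # the first traceback level is the frame that caught the exception ...
    assert tb.tb_frame.f_code.co_name == 'caller'
    # ... and tb_next walks towards the frame where it was actually raised
    assert tb.tb_next.tb_frame.f_code.co_name == 'boom'
    assert tb.tb_next.tb_next is None         # end of the chain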
It returns a tuple of three integers; respectively\n these are the *start* and *stop* indices and the *step* or\n stride length of the slice. Missing or out-of-bounds indices\n are handled in a manner consistent with regular slices.\n\n New in version 2.3.\n\n Static method objects\n Static method objects provide a way of defeating the\n transformation of function objects to method objects described\n above. A static method object is a wrapper around any other\n object, usually a user-defined method object. When a static\n method object is retrieved from a class or a class instance, the\n object actually returned is the wrapped object, which is not\n subject to any further transformation. Static method objects are\n not themselves callable, although the objects they wrap usually\n are. Static method objects are created by the built-in\n ``staticmethod()`` constructor.\n\n Class method objects\n A class method object, like a static method object, is a wrapper\n around another object that alters the way in which that object\n is retrieved from classes and class instances. The behaviour of\n class method objects upon such retrieval is described above,\n under "User-defined methods". Class method objects are created\n by the built-in ``classmethod()`` constructor.\n', 'typesfunctions': u'\nFunctions\n*********\n\nFunction objects are created by function definitions. The only\noperation on a function object is to call it: ``func(argument-list)``.\n\nThere are really two flavors of function objects: built-in functions\nand user-defined functions. Both support the same operation (to call\nthe function), but the implementation is different, hence the\ndifferent object types.\n\nSee *Function definitions* for more information.\n', - 'typesmapping': u'\nMapping Types --- ``dict``\n**************************\n\nA *mapping* object maps *hashable* values to arbitrary objects.\nMappings are mutable objects. There is currently only one standard\nmapping type, the *dictionary*. (For other containers see the built\nin ``list``, ``set``, and ``tuple`` classes, and the ``collections``\nmodule.)\n\nA dictionary\'s keys are *almost* arbitrary values. Values that are\nnot *hashable*, that is, values containing lists, dictionaries or\nother mutable types (that are compared by value rather than by object\nidentity) may not be used as keys. Numeric types used for keys obey\nthe normal rules for numeric comparison: if two numbers compare equal\n(such as ``1`` and ``1.0``) then they can be used interchangeably to\nindex the same dictionary entry. (Note however, that since computers\nstore floating-point numbers as approximations it is usually unwise to\nuse them as dictionary keys.)\n\nDictionaries can be created by placing a comma-separated list of\n``key: value`` pairs within braces, for example: ``{\'jack\': 4098,\n\'sjoerd\': 4127}`` or ``{4098: \'jack\', 4127: \'sjoerd\'}``, or by the\n``dict`` constructor.\n\nclass class dict([arg])\n\n Return a new dictionary initialized from an optional positional\n argument or from a set of keyword arguments. If no arguments are\n given, return a new empty dictionary. If the positional argument\n *arg* is a mapping object, return a dictionary mapping the same\n keys to the same values as does the mapping object. Otherwise the\n positional argument must be a sequence, a container that supports\n iteration, or an iterator object. The elements of the argument\n must each also be of one of those kinds, and each must in turn\n contain exactly two objects. 
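As a brief aside, a sketch of building a dictionary from a sequence of two-item elements, as just described (Python 2.x; the data is illustrative):

    pairs = [('spam', 1), ('eggs', 2), ('spam', 3)]   # each element has exactly two items
    d = dict(pairs)
    assert d == {'spam': 3, 'eggs': 2}                # the last value for a repeated key wins

    # any iterable of two-item iterables works, e.g. the result of zip()
    assert dict(zip('ab', [1, 2])) == {'a': 1, 'b': 2}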
The first is used as a key in the new\n dictionary, and the second as the key\'s value. If a given key is\n seen more than once, the last value associated with it is retained\n in the new dictionary.\n\n If keyword arguments are given, the keywords themselves with their\n associated values are added as items to the dictionary. If a key is\n specified both in the positional argument and as a keyword\n argument, the value associated with the keyword is retained in the\n dictionary. For example, these all return a dictionary equal to\n ``{"one": 2, "two": 3}``:\n\n * ``dict(one=2, two=3)``\n\n * ``dict({\'one\': 2, \'two\': 3})``\n\n * ``dict(zip((\'one\', \'two\'), (2, 3)))``\n\n * ``dict([[\'two\', 3], [\'one\', 2]])``\n\n The first example only works for keys that are valid Python\n identifiers; the others work with any valid keys.\n\n New in version 2.2.\n\n Changed in version 2.3: Support for building a dictionary from\n keyword arguments added.\n\n These are the operations that dictionaries support (and therefore,\n custom mapping types should support too):\n\n len(d)\n\n Return the number of items in the dictionary *d*.\n\n d[key]\n\n Return the item of *d* with key *key*. Raises a ``KeyError`` if\n *key* is not in the map.\n\n New in version 2.5: If a subclass of dict defines a method\n ``__missing__()``, if the key *key* is not present, the\n ``d[key]`` operation calls that method with the key *key* as\n argument. The ``d[key]`` operation then returns or raises\n whatever is returned or raised by the ``__missing__(key)`` call\n if the key is not present. No other operations or methods invoke\n ``__missing__()``. If ``__missing__()`` is not defined,\n ``KeyError`` is raised. ``__missing__()`` must be a method; it\n cannot be an instance variable. For an example, see\n ``collections.defaultdict``.\n\n d[key] = value\n\n Set ``d[key]`` to *value*.\n\n del d[key]\n\n Remove ``d[key]`` from *d*. Raises a ``KeyError`` if *key* is\n not in the map.\n\n key in d\n\n Return ``True`` if *d* has a key *key*, else ``False``.\n\n New in version 2.2.\n\n key not in d\n\n Equivalent to ``not key in d``.\n\n New in version 2.2.\n\n iter(d)\n\n Return an iterator over the keys of the dictionary. This is a\n shortcut for ``iterkeys()``.\n\n clear()\n\n Remove all items from the dictionary.\n\n copy()\n\n Return a shallow copy of the dictionary.\n\n fromkeys(seq[, value])\n\n Create a new dictionary with keys from *seq* and values set to\n *value*.\n\n ``fromkeys()`` is a class method that returns a new dictionary.\n *value* defaults to ``None``.\n\n New in version 2.3.\n\n get(key[, default])\n\n Return the value for *key* if *key* is in the dictionary, else\n *default*. If *default* is not given, it defaults to ``None``,\n so that this method never raises a ``KeyError``.\n\n has_key(key)\n\n Test for the presence of *key* in the dictionary. ``has_key()``\n is deprecated in favor of ``key in d``.\n\n items()\n\n Return a copy of the dictionary\'s list of ``(key, value)``\n pairs.\n\n **CPython implementation detail:** Keys and values are listed in\n an arbitrary order which is non-random, varies across Python\n implementations, and depends on the dictionary\'s history of\n insertions and deletions.\n\n If ``items()``, ``keys()``, ``values()``, ``iteritems()``,\n ``iterkeys()``, and ``itervalues()`` are called with no\n intervening modifications to the dictionary, the lists will\n directly correspond. 
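A minimal sketch of the ``__missing__()`` hook described above (Python 2.5+; the subclass name ``Tally`` is hypothetical):

    class Tally(dict):
        def __missing__(self, key):
            # invoked by d[key] only when the key is absent; nothing is stored
            return 0

    t = Tally()
    assert t['anything'] == 0        # no KeyError: __missing__() supplies the value
    t['x'] = t['x'] + 1
    assert t.items() == [('x', 1)]
    assert t.get('y', 42) == 42      # get() never raises KeyError either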
This allows the creation of ``(value,\n key)`` pairs using ``zip()``: ``pairs = zip(d.values(),\n d.keys())``. The same relationship holds for the ``iterkeys()``\n and ``itervalues()`` methods: ``pairs = zip(d.itervalues(),\n d.iterkeys())`` provides the same value for ``pairs``. Another\n way to create the same list is ``pairs = [(v, k) for (k, v) in\n d.iteritems()]``.\n\n iteritems()\n\n Return an iterator over the dictionary\'s ``(key, value)`` pairs.\n See the note for ``dict.items()``.\n\n Using ``iteritems()`` while adding or deleting entries in the\n dictionary may raise a ``RuntimeError`` or fail to iterate over\n all entries.\n\n New in version 2.2.\n\n iterkeys()\n\n Return an iterator over the dictionary\'s keys. See the note for\n ``dict.items()``.\n\n Using ``iterkeys()`` while adding or deleting entries in the\n dictionary may raise a ``RuntimeError`` or fail to iterate over\n all entries.\n\n New in version 2.2.\n\n itervalues()\n\n Return an iterator over the dictionary\'s values. See the note\n for ``dict.items()``.\n\n Using ``itervalues()`` while adding or deleting entries in the\n dictionary may raise a ``RuntimeError`` or fail to iterate over\n all entries.\n\n New in version 2.2.\n\n keys()\n\n Return a copy of the dictionary\'s list of keys. See the note\n for ``dict.items()``.\n\n pop(key[, default])\n\n If *key* is in the dictionary, remove it and return its value,\n else return *default*. If *default* is not given and *key* is\n not in the dictionary, a ``KeyError`` is raised.\n\n New in version 2.3.\n\n popitem()\n\n Remove and return an arbitrary ``(key, value)`` pair from the\n dictionary.\n\n ``popitem()`` is useful to destructively iterate over a\n dictionary, as often used in set algorithms. If the dictionary\n is empty, calling ``popitem()`` raises a ``KeyError``.\n\n setdefault(key[, default])\n\n If *key* is in the dictionary, return its value. If not, insert\n *key* with a value of *default* and return *default*. *default*\n defaults to ``None``.\n\n update([other])\n\n Update the dictionary with the key/value pairs from *other*,\n overwriting existing keys. Return ``None``.\n\n ``update()`` accepts either another dictionary object or an\n iterable of key/value pairs (as a tuple or other iterable of\n length two). If keyword arguments are specified, the dictionary\n is then updated with those key/value pairs: ``d.update(red=1,\n blue=2)``.\n\n Changed in version 2.4: Allowed the argument to be an iterable\n of key/value pairs and allowed keyword arguments.\n\n values()\n\n Return a copy of the dictionary\'s list of values. See the note\n for ``dict.items()``.\n\n viewitems()\n\n Return a new view of the dictionary\'s items (``(key, value)``\n pairs). See below for documentation of view objects.\n\n New in version 2.7.\n\n viewkeys()\n\n Return a new view of the dictionary\'s keys. See below for\n documentation of view objects.\n\n New in version 2.7.\n\n viewvalues()\n\n Return a new view of the dictionary\'s values. See below for\n documentation of view objects.\n\n New in version 2.7.\n\n\nDictionary view objects\n=======================\n\nThe objects returned by ``dict.viewkeys()``, ``dict.viewvalues()`` and\n``dict.viewitems()`` are *view objects*. 
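A small sketch of ``setdefault()``, ``pop()`` and ``popitem()`` as described above, assuming Python 2.x:

    index = {}
    # setdefault() inserts the default only when the key is missing
    index.setdefault('vowels', []).append('a')
    index.setdefault('vowels', []).append('e')
    assert index == {'vowels': ['a', 'e']}

    assert index.pop('vowels') == ['a', 'e']   # remove and return the value
    assert index.pop('vowels', None) is None   # a default avoids the KeyError

    d = {'only': 1}
    assert d.popitem() == ('only', 1)          # destructive iteration
    assert d == {}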
They provide a dynamic view\non the dictionary\'s entries, which means that when the dictionary\nchanges, the view reflects these changes.\n\nDictionary views can be iterated over to yield their respective data,\nand support membership tests:\n\nlen(dictview)\n\n Return the number of entries in the dictionary.\n\niter(dictview)\n\n Return an iterator over the keys, values or items (represented as\n tuples of ``(key, value)``) in the dictionary.\n\n Keys and values are iterated over in an arbitrary order which is\n non-random, varies across Python implementations, and depends on\n the dictionary\'s history of insertions and deletions. If keys,\n values and items views are iterated over with no intervening\n modifications to the dictionary, the order of items will directly\n correspond. This allows the creation of ``(value, key)`` pairs\n using ``zip()``: ``pairs = zip(d.values(), d.keys())``. Another\n way to create the same list is ``pairs = [(v, k) for (k, v) in\n d.items()]``.\n\n Iterating views while adding or deleting entries in the dictionary\n may raise a ``RuntimeError`` or fail to iterate over all entries.\n\nx in dictview\n\n Return ``True`` if *x* is in the underlying dictionary\'s keys,\n values or items (in the latter case, *x* should be a ``(key,\n value)`` tuple).\n\nKeys views are set-like since their entries are unique and hashable.\nIf all values are hashable, so that (key, value) pairs are unique and\nhashable, then the items view is also set-like. (Values views are not\ntreated as set-like since the entries are generally not unique.) Then\nthese set operations are available ("other" refers either to another\nview or a set):\n\ndictview & other\n\n Return the intersection of the dictview and the other object as a\n new set.\n\ndictview | other\n\n Return the union of the dictview and the other object as a new set.\n\ndictview - other\n\n Return the difference between the dictview and the other object\n (all elements in *dictview* that aren\'t in *other*) as a new set.\n\ndictview ^ other\n\n Return the symmetric difference (all elements either in *dictview*\n or *other*, but not in both) of the dictview and the other object\n as a new set.\n\nAn example of dictionary view usage:\n\n >>> dishes = {\'eggs\': 2, \'sausage\': 1, \'bacon\': 1, \'spam\': 500}\n >>> keys = dishes.viewkeys()\n >>> values = dishes.viewvalues()\n\n >>> # iteration\n >>> n = 0\n >>> for val in values:\n ... n += val\n >>> print(n)\n 504\n\n >>> # keys and values are iterated over in the same order\n >>> list(keys)\n [\'eggs\', \'bacon\', \'sausage\', \'spam\']\n >>> list(values)\n [2, 1, 1, 500]\n\n >>> # view objects are dynamic and reflect dict changes\n >>> del dishes[\'eggs\']\n >>> del dishes[\'sausage\']\n >>> list(keys)\n [\'spam\', \'bacon\']\n\n >>> # set operations\n >>> keys & {\'eggs\', \'bacon\', \'salad\'}\n {\'bacon\'}\n', + 'typesmapping': u'\nMapping Types --- ``dict``\n**************************\n\nA *mapping* object maps *hashable* values to arbitrary objects.\nMappings are mutable objects. There is currently only one standard\nmapping type, the *dictionary*. (For other containers see the built\nin ``list``, ``set``, and ``tuple`` classes, and the ``collections``\nmodule.)\n\nA dictionary\'s keys are *almost* arbitrary values. Values that are\nnot *hashable*, that is, values containing lists, dictionaries or\nother mutable types (that are compared by value rather than by object\nidentity) may not be used as keys. 
Numeric types used for keys obey\nthe normal rules for numeric comparison: if two numbers compare equal\n(such as ``1`` and ``1.0``) then they can be used interchangeably to\nindex the same dictionary entry. (Note however, that since computers\nstore floating-point numbers as approximations it is usually unwise to\nuse them as dictionary keys.)\n\nDictionaries can be created by placing a comma-separated list of\n``key: value`` pairs within braces, for example: ``{\'jack\': 4098,\n\'sjoerd\': 4127}`` or ``{4098: \'jack\', 4127: \'sjoerd\'}``, or by the\n``dict`` constructor.\n\nclass class dict([arg])\n\n Return a new dictionary initialized from an optional positional\n argument or from a set of keyword arguments. If no arguments are\n given, return a new empty dictionary. If the positional argument\n *arg* is a mapping object, return a dictionary mapping the same\n keys to the same values as does the mapping object. Otherwise the\n positional argument must be a sequence, a container that supports\n iteration, or an iterator object. The elements of the argument\n must each also be of one of those kinds, and each must in turn\n contain exactly two objects. The first is used as a key in the new\n dictionary, and the second as the key\'s value. If a given key is\n seen more than once, the last value associated with it is retained\n in the new dictionary.\n\n If keyword arguments are given, the keywords themselves with their\n associated values are added as items to the dictionary. If a key is\n specified both in the positional argument and as a keyword\n argument, the value associated with the keyword is retained in the\n dictionary. For example, these all return a dictionary equal to\n ``{"one": 1, "two": 2}``:\n\n * ``dict(one=1, two=2)``\n\n * ``dict({\'one\': 1, \'two\': 2})``\n\n * ``dict(zip((\'one\', \'two\'), (1, 2)))``\n\n * ``dict([[\'two\', 2], [\'one\', 1]])``\n\n The first example only works for keys that are valid Python\n identifiers; the others work with any valid keys.\n\n New in version 2.2.\n\n Changed in version 2.3: Support for building a dictionary from\n keyword arguments added.\n\n These are the operations that dictionaries support (and therefore,\n custom mapping types should support too):\n\n len(d)\n\n Return the number of items in the dictionary *d*.\n\n d[key]\n\n Return the item of *d* with key *key*. Raises a ``KeyError`` if\n *key* is not in the map.\n\n New in version 2.5: If a subclass of dict defines a method\n ``__missing__()``, if the key *key* is not present, the\n ``d[key]`` operation calls that method with the key *key* as\n argument. The ``d[key]`` operation then returns or raises\n whatever is returned or raised by the ``__missing__(key)`` call\n if the key is not present. No other operations or methods invoke\n ``__missing__()``. If ``__missing__()`` is not defined,\n ``KeyError`` is raised. ``__missing__()`` must be a method; it\n cannot be an instance variable. For an example, see\n ``collections.defaultdict``.\n\n d[key] = value\n\n Set ``d[key]`` to *value*.\n\n del d[key]\n\n Remove ``d[key]`` from *d*. Raises a ``KeyError`` if *key* is\n not in the map.\n\n key in d\n\n Return ``True`` if *d* has a key *key*, else ``False``.\n\n New in version 2.2.\n\n key not in d\n\n Equivalent to ``not key in d``.\n\n New in version 2.2.\n\n iter(d)\n\n Return an iterator over the keys of the dictionary. 
This is a\n shortcut for ``iterkeys()``.\n\n clear()\n\n Remove all items from the dictionary.\n\n copy()\n\n Return a shallow copy of the dictionary.\n\n fromkeys(seq[, value])\n\n Create a new dictionary with keys from *seq* and values set to\n *value*.\n\n ``fromkeys()`` is a class method that returns a new dictionary.\n *value* defaults to ``None``.\n\n New in version 2.3.\n\n get(key[, default])\n\n Return the value for *key* if *key* is in the dictionary, else\n *default*. If *default* is not given, it defaults to ``None``,\n so that this method never raises a ``KeyError``.\n\n has_key(key)\n\n Test for the presence of *key* in the dictionary. ``has_key()``\n is deprecated in favor of ``key in d``.\n\n items()\n\n Return a copy of the dictionary\'s list of ``(key, value)``\n pairs.\n\n **CPython implementation detail:** Keys and values are listed in\n an arbitrary order which is non-random, varies across Python\n implementations, and depends on the dictionary\'s history of\n insertions and deletions.\n\n If ``items()``, ``keys()``, ``values()``, ``iteritems()``,\n ``iterkeys()``, and ``itervalues()`` are called with no\n intervening modifications to the dictionary, the lists will\n directly correspond. This allows the creation of ``(value,\n key)`` pairs using ``zip()``: ``pairs = zip(d.values(),\n d.keys())``. The same relationship holds for the ``iterkeys()``\n and ``itervalues()`` methods: ``pairs = zip(d.itervalues(),\n d.iterkeys())`` provides the same value for ``pairs``. Another\n way to create the same list is ``pairs = [(v, k) for (k, v) in\n d.iteritems()]``.\n\n iteritems()\n\n Return an iterator over the dictionary\'s ``(key, value)`` pairs.\n See the note for ``dict.items()``.\n\n Using ``iteritems()`` while adding or deleting entries in the\n dictionary may raise a ``RuntimeError`` or fail to iterate over\n all entries.\n\n New in version 2.2.\n\n iterkeys()\n\n Return an iterator over the dictionary\'s keys. See the note for\n ``dict.items()``.\n\n Using ``iterkeys()`` while adding or deleting entries in the\n dictionary may raise a ``RuntimeError`` or fail to iterate over\n all entries.\n\n New in version 2.2.\n\n itervalues()\n\n Return an iterator over the dictionary\'s values. See the note\n for ``dict.items()``.\n\n Using ``itervalues()`` while adding or deleting entries in the\n dictionary may raise a ``RuntimeError`` or fail to iterate over\n all entries.\n\n New in version 2.2.\n\n keys()\n\n Return a copy of the dictionary\'s list of keys. See the note\n for ``dict.items()``.\n\n pop(key[, default])\n\n If *key* is in the dictionary, remove it and return its value,\n else return *default*. If *default* is not given and *key* is\n not in the dictionary, a ``KeyError`` is raised.\n\n New in version 2.3.\n\n popitem()\n\n Remove and return an arbitrary ``(key, value)`` pair from the\n dictionary.\n\n ``popitem()`` is useful to destructively iterate over a\n dictionary, as often used in set algorithms. If the dictionary\n is empty, calling ``popitem()`` raises a ``KeyError``.\n\n setdefault(key[, default])\n\n If *key* is in the dictionary, return its value. If not, insert\n *key* with a value of *default* and return *default*. *default*\n defaults to ``None``.\n\n update([other])\n\n Update the dictionary with the key/value pairs from *other*,\n overwriting existing keys. Return ``None``.\n\n ``update()`` accepts either another dictionary object or an\n iterable of key/value pairs (as tuples or other iterables of\n length two). 
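A sketch of the three ``update()`` call forms just described (Python 2.4+; the keys are illustrative):

    d = {'red': 0}
    assert d.update({'red': 1}) is None        # mutates in place and returns None
    d.update([('blue', 2), ('green', 3)])      # an iterable of key/value pairs
    d.update(white=4)                          # keyword arguments
    assert d == {'red': 1, 'blue': 2, 'green': 3, 'white': 4}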
If keyword arguments are specified, the dictionary\n is then updated with those key/value pairs: ``d.update(red=1,\n blue=2)``.\n\n Changed in version 2.4: Allowed the argument to be an iterable\n of key/value pairs and allowed keyword arguments.\n\n values()\n\n Return a copy of the dictionary\'s list of values. See the note\n for ``dict.items()``.\n\n viewitems()\n\n Return a new view of the dictionary\'s items (``(key, value)``\n pairs). See below for documentation of view objects.\n\n New in version 2.7.\n\n viewkeys()\n\n Return a new view of the dictionary\'s keys. See below for\n documentation of view objects.\n\n New in version 2.7.\n\n viewvalues()\n\n Return a new view of the dictionary\'s values. See below for\n documentation of view objects.\n\n New in version 2.7.\n\n\nDictionary view objects\n=======================\n\nThe objects returned by ``dict.viewkeys()``, ``dict.viewvalues()`` and\n``dict.viewitems()`` are *view objects*. They provide a dynamic view\non the dictionary\'s entries, which means that when the dictionary\nchanges, the view reflects these changes.\n\nDictionary views can be iterated over to yield their respective data,\nand support membership tests:\n\nlen(dictview)\n\n Return the number of entries in the dictionary.\n\niter(dictview)\n\n Return an iterator over the keys, values or items (represented as\n tuples of ``(key, value)``) in the dictionary.\n\n Keys and values are iterated over in an arbitrary order which is\n non-random, varies across Python implementations, and depends on\n the dictionary\'s history of insertions and deletions. If keys,\n values and items views are iterated over with no intervening\n modifications to the dictionary, the order of items will directly\n correspond. This allows the creation of ``(value, key)`` pairs\n using ``zip()``: ``pairs = zip(d.values(), d.keys())``. Another\n way to create the same list is ``pairs = [(v, k) for (k, v) in\n d.items()]``.\n\n Iterating views while adding or deleting entries in the dictionary\n may raise a ``RuntimeError`` or fail to iterate over all entries.\n\nx in dictview\n\n Return ``True`` if *x* is in the underlying dictionary\'s keys,\n values or items (in the latter case, *x* should be a ``(key,\n value)`` tuple).\n\nKeys views are set-like since their entries are unique and hashable.\nIf all values are hashable, so that (key, value) pairs are unique and\nhashable, then the items view is also set-like. (Values views are not\ntreated as set-like since the entries are generally not unique.) Then\nthese set operations are available ("other" refers either to another\nview or a set):\n\ndictview & other\n\n Return the intersection of the dictview and the other object as a\n new set.\n\ndictview | other\n\n Return the union of the dictview and the other object as a new set.\n\ndictview - other\n\n Return the difference between the dictview and the other object\n (all elements in *dictview* that aren\'t in *other*) as a new set.\n\ndictview ^ other\n\n Return the symmetric difference (all elements either in *dictview*\n or *other*, but not in both) of the dictview and the other object\n as a new set.\n\nAn example of dictionary view usage:\n\n >>> dishes = {\'eggs\': 2, \'sausage\': 1, \'bacon\': 1, \'spam\': 500}\n >>> keys = dishes.viewkeys()\n >>> values = dishes.viewvalues()\n\n >>> # iteration\n >>> n = 0\n >>> for val in values:\n ... 
n += val\n >>> print(n)\n 504\n\n >>> # keys and values are iterated over in the same order\n >>> list(keys)\n [\'eggs\', \'bacon\', \'sausage\', \'spam\']\n >>> list(values)\n [2, 1, 1, 500]\n\n >>> # view objects are dynamic and reflect dict changes\n >>> del dishes[\'eggs\']\n >>> del dishes[\'sausage\']\n >>> list(keys)\n [\'spam\', \'bacon\']\n\n >>> # set operations\n >>> keys & {\'eggs\', \'bacon\', \'salad\'}\n {\'bacon\'}\n', 'typesmethods': u"\nMethods\n*******\n\nMethods are functions that are called using the attribute notation.\nThere are two flavors: built-in methods (such as ``append()`` on\nlists) and class instance methods. Built-in methods are described\nwith the types that support them.\n\nThe implementation adds two special read-only attributes to class\ninstance methods: ``m.im_self`` is the object on which the method\noperates, and ``m.im_func`` is the function implementing the method.\nCalling ``m(arg-1, arg-2, ..., arg-n)`` is completely equivalent to\ncalling ``m.im_func(m.im_self, arg-1, arg-2, ..., arg-n)``.\n\nClass instance methods are either *bound* or *unbound*, referring to\nwhether the method was accessed through an instance or a class,\nrespectively. When a method is unbound, its ``im_self`` attribute\nwill be ``None`` and if called, an explicit ``self`` object must be\npassed as the first argument. In this case, ``self`` must be an\ninstance of the unbound method's class (or a subclass of that class),\notherwise a ``TypeError`` is raised.\n\nLike function objects, methods objects support getting arbitrary\nattributes. However, since method attributes are actually stored on\nthe underlying function object (``meth.im_func``), setting method\nattributes on either bound or unbound methods is disallowed.\nAttempting to set a method attribute results in a ``TypeError`` being\nraised. In order to set a method attribute, you need to explicitly\nset it on the underlying function object:\n\n class C:\n def method(self):\n pass\n\n c = C()\n c.method.im_func.whoami = 'my name is c'\n\nSee *The standard type hierarchy* for more information.\n", 'typesmodules': u"\nModules\n*******\n\nThe only special operation on a module is attribute access:\n``m.name``, where *m* is a module and *name* accesses a name defined\nin *m*'s symbol table. Module attributes can be assigned to. (Note\nthat the ``import`` statement is not, strictly speaking, an operation\non a module object; ``import foo`` does not require a module object\nnamed *foo* to exist, rather it requires an (external) *definition*\nfor a module named *foo* somewhere.)\n\nA special member of every module is ``__dict__``. This is the\ndictionary containing the module's symbol table. Modifying this\ndictionary will actually change the module's symbol table, but direct\nassignment to the ``__dict__`` attribute is not possible (you can\nwrite ``m.__dict__['a'] = 1``, which defines ``m.a`` to be ``1``, but\nyou can't write ``m.__dict__ = {}``). Modifying ``__dict__`` directly\nis not recommended.\n\nModules built into the interpreter are written like this: ````. 
If loaded from a file, they are written as\n````.\n", - 'typesseq': u'\nSequence Types --- ``str``, ``unicode``, ``list``, ``tuple``, ``buffer``, ``xrange``\n************************************************************************************\n\nThere are six sequence types: strings, Unicode strings, lists, tuples,\nbuffers, and xrange objects.\n\nFor other containers see the built in ``dict`` and ``set`` classes,\nand the ``collections`` module.\n\nString literals are written in single or double quotes: ``\'xyzzy\'``,\n``"frobozz"``. See *String literals* for more about string literals.\nUnicode strings are much like strings, but are specified in the syntax\nusing a preceding ``\'u\'`` character: ``u\'abc\'``, ``u"def"``. In\naddition to the functionality described here, there are also string-\nspecific methods described in the *String Methods* section. Lists are\nconstructed with square brackets, separating items with commas: ``[a,\nb, c]``. Tuples are constructed by the comma operator (not within\nsquare brackets), with or without enclosing parentheses, but an empty\ntuple must have the enclosing parentheses, such as ``a, b, c`` or\n``()``. A single item tuple must have a trailing comma, such as\n``(d,)``.\n\nBuffer objects are not directly supported by Python syntax, but can be\ncreated by calling the built-in function ``buffer()``. They don\'t\nsupport concatenation or repetition.\n\nObjects of type xrange are similar to buffers in that there is no\nspecific syntax to create them, but they are created using the\n``xrange()`` function. They don\'t support slicing, concatenation or\nrepetition, and using ``in``, ``not in``, ``min()`` or ``max()`` on\nthem is inefficient.\n\nMost sequence types support the following operations. The ``in`` and\n``not in`` operations have the same priorities as the comparison\noperations. The ``+`` and ``*`` operations have the same priority as\nthe corresponding numeric operations. [3] Additional methods are\nprovided for *Mutable Sequence Types*.\n\nThis table lists the sequence operations sorted in ascending priority\n(operations in the same box have the same priority). 
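As an aside, a minimal sketch of the tuple syntax and of the common sequence operations summarized in the table that follows (Python 2.x):

    t = 'a', 'b', 'c'              # the comma, not the parentheses, builds the tuple
    single = ('d',)                # a one-item tuple needs the trailing comma
    assert t + single == ('a', 'b', 'c', 'd')     # concatenation
    assert single * 3 == ('d', 'd', 'd')          # repetition
    assert 'ab' in 'xabc'          # for strings, ``in`` is a substring test
    assert len(xrange(10)) == 10   # xrange supports len() and indexing ...
    assert xrange(10)[3] == 3      # ... but not slicing or concatenation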
In the table,\n*s* and *t* are sequences of the same type; *n*, *i* and *j* are\nintegers:\n\n+--------------------+----------------------------------+------------+\n| Operation | Result | Notes |\n+====================+==================================+============+\n| ``x in s`` | ``True`` if an item of *s* is | (1) |\n| | equal to *x*, else ``False`` | |\n+--------------------+----------------------------------+------------+\n| ``x not in s`` | ``False`` if an item of *s* is | (1) |\n| | equal to *x*, else ``True`` | |\n+--------------------+----------------------------------+------------+\n| ``s + t`` | the concatenation of *s* and *t* | (6) |\n+--------------------+----------------------------------+------------+\n| ``s * n, n * s`` | *n* shallow copies of *s* | (2) |\n| | concatenated | |\n+--------------------+----------------------------------+------------+\n| ``s[i]`` | *i*\'th item of *s*, origin 0 | (3) |\n+--------------------+----------------------------------+------------+\n| ``s[i:j]`` | slice of *s* from *i* to *j* | (3)(4) |\n+--------------------+----------------------------------+------------+\n| ``s[i:j:k]`` | slice of *s* from *i* to *j* | (3)(5) |\n| | with step *k* | |\n+--------------------+----------------------------------+------------+\n| ``len(s)`` | length of *s* | |\n+--------------------+----------------------------------+------------+\n| ``min(s)`` | smallest item of *s* | |\n+--------------------+----------------------------------+------------+\n| ``max(s)`` | largest item of *s* | |\n+--------------------+----------------------------------+------------+\n\nSequence types also support comparisons. In particular, tuples and\nlists are compared lexicographically by comparing corresponding\nelements. This means that to compare equal, every element must compare\nequal and the two sequences must be of the same type and have the same\nlength. (For full details see *Comparisons* in the language\nreference.)\n\nNotes:\n\n1. When *s* is a string or Unicode string object the ``in`` and ``not\n in`` operations act like a substring test. In Python versions\n before 2.3, *x* had to be a string of length 1. In Python 2.3 and\n beyond, *x* may be a string of any length.\n\n2. Values of *n* less than ``0`` are treated as ``0`` (which yields an\n empty sequence of the same type as *s*). Note also that the copies\n are shallow; nested structures are not copied. This often haunts\n new Python programmers; consider:\n\n >>> lists = [[]] * 3\n >>> lists\n [[], [], []]\n >>> lists[0].append(3)\n >>> lists\n [[3], [3], [3]]\n\n What has happened is that ``[[]]`` is a one-element list containing\n an empty list, so all three elements of ``[[]] * 3`` are (pointers\n to) this single empty list. Modifying any of the elements of\n ``lists`` modifies this single list. You can create a list of\n different lists this way:\n\n >>> lists = [[] for i in range(3)]\n >>> lists[0].append(3)\n >>> lists[1].append(5)\n >>> lists[2].append(7)\n >>> lists\n [[3], [5], [7]]\n\n3. If *i* or *j* is negative, the index is relative to the end of the\n string: ``len(s) + i`` or ``len(s) + j`` is substituted. But note\n that ``-0`` is still ``0``.\n\n4. The slice of *s* from *i* to *j* is defined as the sequence of\n items with index *k* such that ``i <= k < j``. If *i* or *j* is\n greater than ``len(s)``, use ``len(s)``. If *i* is omitted or\n ``None``, use ``0``. If *j* is omitted or ``None``, use\n ``len(s)``. If *i* is greater than or equal to *j*, the slice is\n empty.\n\n5. 
The slice of *s* from *i* to *j* with step *k* is defined as the\n sequence of items with index ``x = i + n*k`` such that ``0 <= n <\n (j-i)/k``. In other words, the indices are ``i``, ``i+k``,\n ``i+2*k``, ``i+3*k`` and so on, stopping when *j* is reached (but\n never including *j*). If *i* or *j* is greater than ``len(s)``,\n use ``len(s)``. If *i* or *j* are omitted or ``None``, they become\n "end" values (which end depends on the sign of *k*). Note, *k*\n cannot be zero. If *k* is ``None``, it is treated like ``1``.\n\n6. **CPython implementation detail:** If *s* and *t* are both strings,\n some Python implementations such as CPython can usually perform an\n in-place optimization for assignments of the form ``s = s + t`` or\n ``s += t``. When applicable, this optimization makes quadratic\n run-time much less likely. This optimization is both version and\n implementation dependent. For performance sensitive code, it is\n preferable to use the ``str.join()`` method which assures\n consistent linear concatenation performance across versions and\n implementations.\n\n Changed in version 2.4: Formerly, string concatenation never\n occurred in-place.\n\n\nString Methods\n==============\n\nBelow are listed the string methods which both 8-bit strings and\nUnicode objects support.\n\nIn addition, Python\'s strings support the sequence type methods\ndescribed in the *Sequence Types --- str, unicode, list, tuple,\nbuffer, xrange* section. To output formatted strings use template\nstrings or the ``%`` operator described in the *String Formatting\nOperations* section. Also, see the ``re`` module for string functions\nbased on regular expressions.\n\nstr.capitalize()\n\n Return a copy of the string with only its first character\n capitalized.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.center(width[, fillchar])\n\n Return centered in a string of length *width*. Padding is done\n using the specified *fillchar* (default is a space).\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.count(sub[, start[, end]])\n\n Return the number of non-overlapping occurrences of substring *sub*\n in the range [*start*, *end*]. Optional arguments *start* and\n *end* are interpreted as in slice notation.\n\nstr.decode([encoding[, errors]])\n\n Decodes the string using the codec registered for *encoding*.\n *encoding* defaults to the default string encoding. *errors* may\n be given to set a different error handling scheme. The default is\n ``\'strict\'``, meaning that encoding errors raise ``UnicodeError``.\n Other possible values are ``\'ignore\'``, ``\'replace\'`` and any other\n name registered via ``codecs.register_error()``, see section *Codec\n Base Classes*.\n\n New in version 2.2.\n\n Changed in version 2.3: Support for other error handling schemes\n added.\n\n Changed in version 2.7: Support for keyword arguments added.\n\nstr.encode([encoding[, errors]])\n\n Return an encoded version of the string. Default encoding is the\n current default string encoding. *errors* may be given to set a\n different error handling scheme. The default for *errors* is\n ``\'strict\'``, meaning that encoding errors raise a\n ``UnicodeError``. Other possible values are ``\'ignore\'``,\n ``\'replace\'``, ``\'xmlcharrefreplace\'``, ``\'backslashreplace\'`` and\n any other name registered via ``codecs.register_error()``, see\n section *Codec Base Classes*. 
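A short sketch of the ``encode()``/``decode()`` round trip and error handlers described above (Python 2.x):

    u = u'caf\xe9'                      # a unicode string containing an accented character
    s = u.encode('utf-8')               # -> 8-bit str: 'caf\xc3\xa9'
    assert isinstance(s, str)
    assert s.decode('utf-8') == u       # decoding restores the unicode object

    # the error handler controls what happens to unencodable characters
    assert u.encode('ascii', 'ignore') == 'caf'
    assert u.encode('ascii', 'replace') == 'caf?'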
For a list of possible encodings, see\n section *Standard Encodings*.\n\n New in version 2.0.\n\n Changed in version 2.3: Support for ``\'xmlcharrefreplace\'`` and\n ``\'backslashreplace\'`` and other error handling schemes added.\n\n Changed in version 2.7: Support for keyword arguments added.\n\nstr.endswith(suffix[, start[, end]])\n\n Return ``True`` if the string ends with the specified *suffix*,\n otherwise return ``False``. *suffix* can also be a tuple of\n suffixes to look for. With optional *start*, test beginning at\n that position. With optional *end*, stop comparing at that\n position.\n\n Changed in version 2.5: Accept tuples as *suffix*.\n\nstr.expandtabs([tabsize])\n\n Return a copy of the string where all tab characters are replaced\n by one or more spaces, depending on the current column and the\n given tab size. The column number is reset to zero after each\n newline occurring in the string. If *tabsize* is not given, a tab\n size of ``8`` characters is assumed. This doesn\'t understand other\n non-printing characters or escape sequences.\n\nstr.find(sub[, start[, end]])\n\n Return the lowest index in the string where substring *sub* is\n found, such that *sub* is contained in the slice ``s[start:end]``.\n Optional arguments *start* and *end* are interpreted as in slice\n notation. Return ``-1`` if *sub* is not found.\n\nstr.format(*args, **kwargs)\n\n Perform a string formatting operation. The string on which this\n method is called can contain literal text or replacement fields\n delimited by braces ``{}``. Each replacement field contains either\n the numeric index of a positional argument, or the name of a\n keyword argument. Returns a copy of the string where each\n replacement field is replaced with the string value of the\n corresponding argument.\n\n >>> "The sum of 1 + 2 is {0}".format(1+2)\n \'The sum of 1 + 2 is 3\'\n\n See *Format String Syntax* for a description of the various\n formatting options that can be specified in format strings.\n\n This method of string formatting is the new standard in Python 3.0,\n and should be preferred to the ``%`` formatting described in\n *String Formatting Operations* in new code.\n\n New in version 2.6.\n\nstr.index(sub[, start[, end]])\n\n Like ``find()``, but raise ``ValueError`` when the substring is not\n found.\n\nstr.isalnum()\n\n Return true if all characters in the string are alphanumeric and\n there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isalpha()\n\n Return true if all characters in the string are alphabetic and\n there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isdigit()\n\n Return true if all characters in the string are digits and there is\n at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.islower()\n\n Return true if all cased characters in the string are lowercase and\n there is at least one cased character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isspace()\n\n Return true if there are only whitespace characters in the string\n and there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.istitle()\n\n Return true if the string is a titlecased string and there is at\n least one character, for example uppercase characters may only\n follow uncased characters and lowercase characters only cased ones.\n Return false otherwise.\n\n For 
8-bit strings, this method is locale-dependent.\n\nstr.isupper()\n\n Return true if all cased characters in the string are uppercase and\n there is at least one cased character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.join(iterable)\n\n Return a string which is the concatenation of the strings in the\n *iterable* *iterable*. The separator between elements is the\n string providing this method.\n\nstr.ljust(width[, fillchar])\n\n Return the string left justified in a string of length *width*.\n Padding is done using the specified *fillchar* (default is a\n space). The original string is returned if *width* is less than\n ``len(s)``.\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.lower()\n\n Return a copy of the string converted to lowercase.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.lstrip([chars])\n\n Return a copy of the string with leading characters removed. The\n *chars* argument is a string specifying the set of characters to be\n removed. If omitted or ``None``, the *chars* argument defaults to\n removing whitespace. The *chars* argument is not a prefix; rather,\n all combinations of its values are stripped:\n\n >>> \' spacious \'.lstrip()\n \'spacious \'\n >>> \'www.example.com\'.lstrip(\'cmowz.\')\n \'example.com\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.partition(sep)\n\n Split the string at the first occurrence of *sep*, and return a\n 3-tuple containing the part before the separator, the separator\n itself, and the part after the separator. If the separator is not\n found, return a 3-tuple containing the string itself, followed by\n two empty strings.\n\n New in version 2.5.\n\nstr.replace(old, new[, count])\n\n Return a copy of the string with all occurrences of substring *old*\n replaced by *new*. If the optional argument *count* is given, only\n the first *count* occurrences are replaced.\n\nstr.rfind(sub[, start[, end]])\n\n Return the highest index in the string where substring *sub* is\n found, such that *sub* is contained within ``s[start:end]``.\n Optional arguments *start* and *end* are interpreted as in slice\n notation. Return ``-1`` on failure.\n\nstr.rindex(sub[, start[, end]])\n\n Like ``rfind()`` but raises ``ValueError`` when the substring *sub*\n is not found.\n\nstr.rjust(width[, fillchar])\n\n Return the string right justified in a string of length *width*.\n Padding is done using the specified *fillchar* (default is a\n space). The original string is returned if *width* is less than\n ``len(s)``.\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.rpartition(sep)\n\n Split the string at the last occurrence of *sep*, and return a\n 3-tuple containing the part before the separator, the separator\n itself, and the part after the separator. If the separator is not\n found, return a 3-tuple containing two empty strings, followed by\n the string itself.\n\n New in version 2.5.\n\nstr.rsplit([sep[, maxsplit]])\n\n Return a list of the words in the string, using *sep* as the\n delimiter string. If *maxsplit* is given, at most *maxsplit* splits\n are done, the *rightmost* ones. If *sep* is not specified or\n ``None``, any whitespace string is a separator. Except for\n splitting from the right, ``rsplit()`` behaves like ``split()``\n which is described in detail below.\n\n New in version 2.4.\n\nstr.rstrip([chars])\n\n Return a copy of the string with trailing characters removed. 
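A minimal sketch of ``partition()``/``rpartition()`` and of the *chars* handling of the strip family described above (Python 2.5+; the sample strings are arbitrary):

    assert 'key=value=more'.partition('=') == ('key', '=', 'value=more')
    assert 'key=value=more'.rpartition('=') == ('key=value', '=', 'more')
    assert 'no-sep'.partition('=') == ('no-sep', '', '')    # separator not found

    # lstrip()/rstrip() remove *any* of the given characters, not a prefix or suffix
    assert 'www.example.com'.rstrip('moc.') == 'www.example'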
The\n *chars* argument is a string specifying the set of characters to be\n removed. If omitted or ``None``, the *chars* argument defaults to\n removing whitespace. The *chars* argument is not a suffix; rather,\n all combinations of its values are stripped:\n\n >>> \' spacious \'.rstrip()\n \' spacious\'\n >>> \'mississippi\'.rstrip(\'ipz\')\n \'mississ\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.split([sep[, maxsplit]])\n\n Return a list of the words in the string, using *sep* as the\n delimiter string. If *maxsplit* is given, at most *maxsplit*\n splits are done (thus, the list will have at most ``maxsplit+1``\n elements). If *maxsplit* is not specified, then there is no limit\n on the number of splits (all possible splits are made).\n\n If *sep* is given, consecutive delimiters are not grouped together\n and are deemed to delimit empty strings (for example,\n ``\'1,,2\'.split(\',\')`` returns ``[\'1\', \'\', \'2\']``). The *sep*\n argument may consist of multiple characters (for example,\n ``\'1<>2<>3\'.split(\'<>\')`` returns ``[\'1\', \'2\', \'3\']``). Splitting\n an empty string with a specified separator returns ``[\'\']``.\n\n If *sep* is not specified or is ``None``, a different splitting\n algorithm is applied: runs of consecutive whitespace are regarded\n as a single separator, and the result will contain no empty strings\n at the start or end if the string has leading or trailing\n whitespace. Consequently, splitting an empty string or a string\n consisting of just whitespace with a ``None`` separator returns\n ``[]``.\n\n For example, ``\' 1 2 3 \'.split()`` returns ``[\'1\', \'2\', \'3\']``,\n and ``\' 1 2 3 \'.split(None, 1)`` returns ``[\'1\', \'2 3 \']``.\n\nstr.splitlines([keepends])\n\n Return a list of the lines in the string, breaking at line\n boundaries. Line breaks are not included in the resulting list\n unless *keepends* is given and true.\n\nstr.startswith(prefix[, start[, end]])\n\n Return ``True`` if string starts with the *prefix*, otherwise\n return ``False``. *prefix* can also be a tuple of prefixes to look\n for. With optional *start*, test string beginning at that\n position. With optional *end*, stop comparing string at that\n position.\n\n Changed in version 2.5: Accept tuples as *prefix*.\n\nstr.strip([chars])\n\n Return a copy of the string with the leading and trailing\n characters removed. The *chars* argument is a string specifying the\n set of characters to be removed. If omitted or ``None``, the\n *chars* argument defaults to removing whitespace. The *chars*\n argument is not a prefix or suffix; rather, all combinations of its\n values are stripped:\n\n >>> \' spacious \'.strip()\n \'spacious\'\n >>> \'www.example.com\'.strip(\'cmowz.\')\n \'example\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.swapcase()\n\n Return a copy of the string with uppercase characters converted to\n lowercase and vice versa.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.title()\n\n Return a titlecased version of the string where words start with an\n uppercase character and the remaining characters are lowercase.\n\n The algorithm uses a simple language-independent definition of a\n word as groups of consecutive letters. 
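A small sketch contrasting the two splitting algorithms described above (Python 2.x):

    # with an explicit separator, consecutive delimiters produce empty strings
    assert '1,,2'.split(',') == ['1', '', '2']
    assert ''.split(',') == ['']

    # with no separator (or None), runs of whitespace collapse and edges are dropped
    assert ' 1  2   3 '.split() == ['1', '2', '3']
    assert '   '.split() == []
    assert ' 1  2   3 '.split(None, 1) == ['1', '2   3 ']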
The definition works in\n many contexts but it means that apostrophes in contractions and\n possessives form word boundaries, which may not be the desired\n result:\n\n >>> "they\'re bill\'s friends from the UK".title()\n "They\'Re Bill\'S Friends From The Uk"\n\n A workaround for apostrophes can be constructed using regular\n expressions:\n\n >>> import re\n >>> def titlecase(s):\n return re.sub(r"[A-Za-z]+(\'[A-Za-z]+)?",\n lambda mo: mo.group(0)[0].upper() +\n mo.group(0)[1:].lower(),\n s)\n\n >>> titlecase("they\'re bill\'s friends.")\n "They\'re Bill\'s Friends."\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.translate(table[, deletechars])\n\n Return a copy of the string where all characters occurring in the\n optional argument *deletechars* are removed, and the remaining\n characters have been mapped through the given translation table,\n which must be a string of length 256.\n\n You can use the ``maketrans()`` helper function in the ``string``\n module to create a translation table. For string objects, set the\n *table* argument to ``None`` for translations that only delete\n characters:\n\n >>> \'read this short text\'.translate(None, \'aeiou\')\n \'rd ths shrt txt\'\n\n New in version 2.6: Support for a ``None`` *table* argument.\n\n For Unicode objects, the ``translate()`` method does not accept the\n optional *deletechars* argument. Instead, it returns a copy of the\n *s* where all characters have been mapped through the given\n translation table which must be a mapping of Unicode ordinals to\n Unicode ordinals, Unicode strings or ``None``. Unmapped characters\n are left untouched. Characters mapped to ``None`` are deleted.\n Note, a more flexible approach is to create a custom character\n mapping codec using the ``codecs`` module (see ``encodings.cp1251``\n for an example).\n\nstr.upper()\n\n Return a copy of the string converted to uppercase.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.zfill(width)\n\n Return the numeric string left filled with zeros in a string of\n length *width*. A sign prefix is handled correctly. The original\n string is returned if *width* is less than ``len(s)``.\n\n New in version 2.2.2.\n\nThe following methods are present only on unicode objects:\n\nunicode.isnumeric()\n\n Return ``True`` if there are only numeric characters in S,\n ``False`` otherwise. Numeric characters include digit characters,\n and all characters that have the Unicode numeric value property,\n e.g. U+2155, VULGAR FRACTION ONE FIFTH.\n\nunicode.isdecimal()\n\n Return ``True`` if there are only decimal characters in S,\n ``False`` otherwise. Decimal characters include digit characters,\n and all characters that that can be used to form decimal-radix\n numbers, e.g. U+0660, ARABIC-INDIC DIGIT ZERO.\n\n\nString Formatting Operations\n============================\n\nString and Unicode objects have one unique built-in operation: the\n``%`` operator (modulo). This is also known as the string\n*formatting* or *interpolation* operator. Given ``format % values``\n(where *format* is a string or Unicode object), ``%`` conversion\nspecifications in *format* are replaced with zero or more elements of\n*values*. The effect is similar to the using ``sprintf()`` in the C\nlanguage. If *format* is a Unicode object, or if any of the objects\nbeing converted using the ``%s`` conversion are Unicode objects, the\nresult will also be a Unicode object.\n\nIf *format* requires a single argument, *values* may be a single non-\ntuple object. 
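As an illustrative aside before the specifier details, a sketch of the single-value, tuple and mapping forms of the ``%`` operator described above (Python 2.x):

    assert '%d item(s)' % 5 == '5 item(s)'            # a single non-tuple value
    assert '%s costs %.2f' % ('tea', 1.5) == 'tea costs 1.50'
    assert '%(n)04d' % {'n': 7} == '0007'             # mapping key plus zero padding
    assert '%*d' % (6, 42) == '    42'                # '*' reads the width from the values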
[4] Otherwise, *values* must be a tuple with exactly\nthe number of items specified by the format string, or a single\nmapping object (for example, a dictionary).\n\nA conversion specifier contains two or more characters and has the\nfollowing components, which must occur in this order:\n\n1. The ``\'%\'`` character, which marks the start of the specifier.\n\n2. Mapping key (optional), consisting of a parenthesised sequence of\n characters (for example, ``(somename)``).\n\n3. Conversion flags (optional), which affect the result of some\n conversion types.\n\n4. Minimum field width (optional). If specified as an ``\'*\'``\n (asterisk), the actual width is read from the next element of the\n tuple in *values*, and the object to convert comes after the\n minimum field width and optional precision.\n\n5. Precision (optional), given as a ``\'.\'`` (dot) followed by the\n precision. If specified as ``\'*\'`` (an asterisk), the actual width\n is read from the next element of the tuple in *values*, and the\n value to convert comes after the precision.\n\n6. Length modifier (optional).\n\n7. Conversion type.\n\nWhen the right argument is a dictionary (or other mapping type), then\nthe formats in the string *must* include a parenthesised mapping key\ninto that dictionary inserted immediately after the ``\'%\'`` character.\nThe mapping key selects the value to be formatted from the mapping.\nFor example:\n\n>>> print \'%(language)s has %(#)03d quote types.\' % \\\n... {\'language\': "Python", "#": 2}\nPython has 002 quote types.\n\nIn this case no ``*`` specifiers may occur in a format (since they\nrequire a sequential parameter list).\n\nThe conversion flag characters are:\n\n+-----------+-----------------------------------------------------------------------+\n| Flag | Meaning |\n+===========+=======================================================================+\n| ``\'#\'`` | The value conversion will use the "alternate form" (where defined |\n| | below). |\n+-----------+-----------------------------------------------------------------------+\n| ``\'0\'`` | The conversion will be zero padded for numeric values. |\n+-----------+-----------------------------------------------------------------------+\n| ``\'-\'`` | The converted value is left adjusted (overrides the ``\'0\'`` |\n| | conversion if both are given). |\n+-----------+-----------------------------------------------------------------------+\n| ``\' \'`` | (a space) A blank should be left before a positive number (or empty |\n| | string) produced by a signed conversion. |\n+-----------+-----------------------------------------------------------------------+\n| ``\'+\'`` | A sign character (``\'+\'`` or ``\'-\'``) will precede the conversion |\n| | (overrides a "space" flag). |\n+-----------+-----------------------------------------------------------------------+\n\nA length modifier (``h``, ``l``, or ``L``) may be present, but is\nignored as it is not necessary for Python -- so e.g. ``%ld`` is\nidentical to ``%d``.\n\nThe conversion types are:\n\n+--------------+-------------------------------------------------------+---------+\n| Conversion | Meaning | Notes |\n+==============+=======================================================+=========+\n| ``\'d\'`` | Signed integer decimal. | |\n+--------------+-------------------------------------------------------+---------+\n| ``\'i\'`` | Signed integer decimal. | |\n+--------------+-------------------------------------------------------+---------+\n| ``\'o\'`` | Signed octal value. 
| (1) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'u\'`` | Obsolete type -- it is identical to ``\'d\'``. | (7) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'x\'`` | Signed hexadecimal (lowercase). | (2) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'X\'`` | Signed hexadecimal (uppercase). | (2) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'e\'`` | Floating point exponential format (lowercase). | (3) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'E\'`` | Floating point exponential format (uppercase). | (3) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'f\'`` | Floating point decimal format. | (3) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'F\'`` | Floating point decimal format. | (3) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'g\'`` | Floating point format. Uses lowercase exponential | (4) |\n| | format if exponent is less than -4 or not less than | |\n| | precision, decimal format otherwise. | |\n+--------------+-------------------------------------------------------+---------+\n| ``\'G\'`` | Floating point format. Uses uppercase exponential | (4) |\n| | format if exponent is less than -4 or not less than | |\n| | precision, decimal format otherwise. | |\n+--------------+-------------------------------------------------------+---------+\n| ``\'c\'`` | Single character (accepts integer or single character | |\n| | string). | |\n+--------------+-------------------------------------------------------+---------+\n| ``\'r\'`` | String (converts any Python object using ``repr()``). | (5) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'s\'`` | String (converts any Python object using ``str()``). | (6) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'%\'`` | No argument is converted, results in a ``\'%\'`` | |\n| | character in the result. | |\n+--------------+-------------------------------------------------------+---------+\n\nNotes:\n\n1. The alternate form causes a leading zero (``\'0\'``) to be inserted\n between left-hand padding and the formatting of the number if the\n leading character of the result is not already a zero.\n\n2. The alternate form causes a leading ``\'0x\'`` or ``\'0X\'`` (depending\n on whether the ``\'x\'`` or ``\'X\'`` format was used) to be inserted\n between left-hand padding and the formatting of the number if the\n leading character of the result is not already a zero.\n\n3. The alternate form causes the result to always contain a decimal\n point, even if no digits follow it.\n\n The precision determines the number of digits after the decimal\n point and defaults to 6.\n\n4. The alternate form causes the result to always contain a decimal\n point, and trailing zeroes are not removed as they would otherwise\n be.\n\n The precision determines the number of significant digits before\n and after the decimal point and defaults to 6.\n\n5. The ``%r`` conversion was added in Python 2.0.\n\n The precision determines the maximal number of characters used.\n\n6. 
If the object or format provided is a ``unicode`` string, the\n resulting string will also be ``unicode``.\n\n The precision determines the maximal number of characters used.\n\n7. See **PEP 237**.\n\nSince Python strings have an explicit length, ``%s`` conversions do\nnot assume that ``\'\\0\'`` is the end of the string.\n\nChanged in version 2.7: ``%f`` conversions for numbers whose absolute\nvalue is over 1e50 are no longer replaced by ``%g`` conversions.\n\nAdditional string operations are defined in standard modules\n``string`` and ``re``.\n\n\nXRange Type\n===========\n\nThe ``xrange`` type is an immutable sequence which is commonly used\nfor looping. The advantage of the ``xrange`` type is that an\n``xrange`` object will always take the same amount of memory, no\nmatter the size of the range it represents. There are no consistent\nperformance advantages.\n\nXRange objects have very little behavior: they only support indexing,\niteration, and the ``len()`` function.\n\n\nMutable Sequence Types\n======================\n\nList objects support additional operations that allow in-place\nmodification of the object. Other mutable sequence types (when added\nto the language) should also support these operations. Strings and\ntuples are immutable sequence types: such objects cannot be modified\nonce created. The following operations are defined on mutable sequence\ntypes (where *x* is an arbitrary object):\n\n+--------------------------------+----------------------------------+-----------------------+\n| Operation | Result | Notes |\n+================================+==================================+=======================+\n| ``s[i] = x`` | item *i* of *s* is replaced by | |\n| | *x* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s[i:j] = t`` | slice of *s* from *i* to *j* is | |\n| | replaced by the contents of the | |\n| | iterable *t* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``del s[i:j]`` | same as ``s[i:j] = []`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s[i:j:k] = t`` | the elements of ``s[i:j:k]`` are | (1) |\n| | replaced by those of *t* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``del s[i:j:k]`` | removes the elements of | |\n| | ``s[i:j:k]`` from the list | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.append(x)`` | same as ``s[len(s):len(s)] = | (2) |\n| | [x]`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.extend(x)`` | same as ``s[len(s):len(s)] = x`` | (3) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.count(x)`` | return number of *i*\'s for which | |\n| | ``s[i] == x`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.index(x[, i[, j]])`` | return smallest *k* such that | (4) |\n| | ``s[k] == x`` and ``i <= k < j`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.insert(i, x)`` | same as ``s[i:i] = [x]`` | (5) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.pop([i])`` | same as ``x = s[i]; del s[i]; | (6) |\n| | return x`` | 
|\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.remove(x)`` | same as ``del s[s.index(x)]`` | (4) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.reverse()`` | reverses the items of *s* in | (7) |\n| | place | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.sort([cmp[, key[, | sort the items of *s* in place | (7)(8)(9)(10) |\n| reverse]]])`` | | |\n+--------------------------------+----------------------------------+-----------------------+\n\nNotes:\n\n1. *t* must have the same length as the slice it is replacing.\n\n2. The C implementation of Python has historically accepted multiple\n parameters and implicitly joined them into a tuple; this no longer\n works in Python 2.0. Use of this misfeature has been deprecated\n since Python 1.4.\n\n3. *x* can be any iterable object.\n\n4. Raises ``ValueError`` when *x* is not found in *s*. When a negative\n index is passed as the second or third parameter to the ``index()``\n method, the list length is added, as for slice indices. If it is\n still negative, it is truncated to zero, as for slice indices.\n\n Changed in version 2.3: Previously, ``index()`` didn\'t have\n arguments for specifying start and stop positions.\n\n5. When a negative index is passed as the first parameter to the\n ``insert()`` method, the list length is added, as for slice\n indices. If it is still negative, it is truncated to zero, as for\n slice indices.\n\n Changed in version 2.3: Previously, all negative indices were\n truncated to zero.\n\n6. The ``pop()`` method is only supported by the list and array types.\n The optional argument *i* defaults to ``-1``, so that by default\n the last item is removed and returned.\n\n7. The ``sort()`` and ``reverse()`` methods modify the list in place\n for economy of space when sorting or reversing a large list. To\n remind you that they operate by side effect, they don\'t return the\n sorted or reversed list.\n\n8. The ``sort()`` method takes optional arguments for controlling the\n comparisons.\n\n *cmp* specifies a custom comparison function of two arguments (list\n items) which should return a negative, zero or positive number\n depending on whether the first argument is considered smaller than,\n equal to, or larger than the second argument: ``cmp=lambda x,y:\n cmp(x.lower(), y.lower())``. The default value is ``None``.\n\n *key* specifies a function of one argument that is used to extract\n a comparison key from each list element: ``key=str.lower``. The\n default value is ``None``.\n\n *reverse* is a boolean value. If set to ``True``, then the list\n elements are sorted as if each comparison were reversed.\n\n In general, the *key* and *reverse* conversion processes are much\n faster than specifying an equivalent *cmp* function. This is\n because *cmp* is called multiple times for each list element while\n *key* and *reverse* touch each element only once. Use\n ``functools.cmp_to_key()`` to convert an old-style *cmp* function\n to a *key* function.\n\n Changed in version 2.3: Support for ``None`` as an equivalent to\n omitting *cmp* was added.\n\n Changed in version 2.4: Support for *key* and *reverse* was added.\n\n9. Starting with Python 2.3, the ``sort()`` method is guaranteed to be\n stable. 
A sort is stable if it guarantees not to change the\n relative order of elements that compare equal --- this is helpful\n for sorting in multiple passes (for example, sort by department,\n then by salary grade).\n\n10. **CPython implementation detail:** While a list is being sorted,\n the effect of attempting to mutate, or even inspect, the list is\n undefined. The C implementation of Python 2.3 and newer makes the\n list appear empty for the duration, and raises ``ValueError`` if\n it can detect that the list has been mutated during a sort.\n', - 'typesseq-mutable': u"\nMutable Sequence Types\n**********************\n\nList objects support additional operations that allow in-place\nmodification of the object. Other mutable sequence types (when added\nto the language) should also support these operations. Strings and\ntuples are immutable sequence types: such objects cannot be modified\nonce created. The following operations are defined on mutable sequence\ntypes (where *x* is an arbitrary object):\n\n+--------------------------------+----------------------------------+-----------------------+\n| Operation | Result | Notes |\n+================================+==================================+=======================+\n| ``s[i] = x`` | item *i* of *s* is replaced by | |\n| | *x* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s[i:j] = t`` | slice of *s* from *i* to *j* is | |\n| | replaced by the contents of the | |\n| | iterable *t* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``del s[i:j]`` | same as ``s[i:j] = []`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s[i:j:k] = t`` | the elements of ``s[i:j:k]`` are | (1) |\n| | replaced by those of *t* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``del s[i:j:k]`` | removes the elements of | |\n| | ``s[i:j:k]`` from the list | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.append(x)`` | same as ``s[len(s):len(s)] = | (2) |\n| | [x]`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.extend(x)`` | same as ``s[len(s):len(s)] = x`` | (3) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.count(x)`` | return number of *i*'s for which | |\n| | ``s[i] == x`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.index(x[, i[, j]])`` | return smallest *k* such that | (4) |\n| | ``s[k] == x`` and ``i <= k < j`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.insert(i, x)`` | same as ``s[i:i] = [x]`` | (5) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.pop([i])`` | same as ``x = s[i]; del s[i]; | (6) |\n| | return x`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.remove(x)`` | same as ``del s[s.index(x)]`` | (4) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.reverse()`` | reverses the items of *s* in | (7) |\n| | place | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.sort([cmp[, key[, | sort the items of *s* in place | (7)(8)(9)(10) 
|\n| reverse]]])`` | | |\n+--------------------------------+----------------------------------+-----------------------+\n\nNotes:\n\n1. *t* must have the same length as the slice it is replacing.\n\n2. The C implementation of Python has historically accepted multiple\n parameters and implicitly joined them into a tuple; this no longer\n works in Python 2.0. Use of this misfeature has been deprecated\n since Python 1.4.\n\n3. *x* can be any iterable object.\n\n4. Raises ``ValueError`` when *x* is not found in *s*. When a negative\n index is passed as the second or third parameter to the ``index()``\n method, the list length is added, as for slice indices. If it is\n still negative, it is truncated to zero, as for slice indices.\n\n Changed in version 2.3: Previously, ``index()`` didn't have\n arguments for specifying start and stop positions.\n\n5. When a negative index is passed as the first parameter to the\n ``insert()`` method, the list length is added, as for slice\n indices. If it is still negative, it is truncated to zero, as for\n slice indices.\n\n Changed in version 2.3: Previously, all negative indices were\n truncated to zero.\n\n6. The ``pop()`` method is only supported by the list and array types.\n The optional argument *i* defaults to ``-1``, so that by default\n the last item is removed and returned.\n\n7. The ``sort()`` and ``reverse()`` methods modify the list in place\n for economy of space when sorting or reversing a large list. To\n remind you that they operate by side effect, they don't return the\n sorted or reversed list.\n\n8. The ``sort()`` method takes optional arguments for controlling the\n comparisons.\n\n *cmp* specifies a custom comparison function of two arguments (list\n items) which should return a negative, zero or positive number\n depending on whether the first argument is considered smaller than,\n equal to, or larger than the second argument: ``cmp=lambda x,y:\n cmp(x.lower(), y.lower())``. The default value is ``None``.\n\n *key* specifies a function of one argument that is used to extract\n a comparison key from each list element: ``key=str.lower``. The\n default value is ``None``.\n\n *reverse* is a boolean value. If set to ``True``, then the list\n elements are sorted as if each comparison were reversed.\n\n In general, the *key* and *reverse* conversion processes are much\n faster than specifying an equivalent *cmp* function. This is\n because *cmp* is called multiple times for each list element while\n *key* and *reverse* touch each element only once. Use\n ``functools.cmp_to_key()`` to convert an old-style *cmp* function\n to a *key* function.\n\n Changed in version 2.3: Support for ``None`` as an equivalent to\n omitting *cmp* was added.\n\n Changed in version 2.4: Support for *key* and *reverse* was added.\n\n9. Starting with Python 2.3, the ``sort()`` method is guaranteed to be\n stable. A sort is stable if it guarantees not to change the\n relative order of elements that compare equal --- this is helpful\n for sorting in multiple passes (for example, sort by department,\n then by salary grade).\n\n10. **CPython implementation detail:** While a list is being sorted,\n the effect of attempting to mutate, or even inspect, the list is\n undefined. 
The C implementation of Python 2.3 and newer makes the\n list appear empty for the duration, and raises ``ValueError`` if\n it can detect that the list has been mutated during a sort.\n", + 'typesseq': u'\nSequence Types --- ``str``, ``unicode``, ``list``, ``tuple``, ``bytearray``, ``buffer``, ``xrange``\n***************************************************************************************************\n\nThere are seven sequence types: strings, Unicode strings, lists,\ntuples, bytearrays, buffers, and xrange objects.\n\nFor other containers see the built in ``dict`` and ``set`` classes,\nand the ``collections`` module.\n\nString literals are written in single or double quotes: ``\'xyzzy\'``,\n``"frobozz"``. See *String literals* for more about string literals.\nUnicode strings are much like strings, but are specified in the syntax\nusing a preceding ``\'u\'`` character: ``u\'abc\'``, ``u"def"``. In\naddition to the functionality described here, there are also string-\nspecific methods described in the *String Methods* section. Lists are\nconstructed with square brackets, separating items with commas: ``[a,\nb, c]``. Tuples are constructed by the comma operator (not within\nsquare brackets), with or without enclosing parentheses, but an empty\ntuple must have the enclosing parentheses, such as ``a, b, c`` or\n``()``. A single item tuple must have a trailing comma, such as\n``(d,)``.\n\nBytearray objects are created with the built-in function\n``bytearray()``.\n\nBuffer objects are not directly supported by Python syntax, but can be\ncreated by calling the built-in function ``buffer()``. They don\'t\nsupport concatenation or repetition.\n\nObjects of type xrange are similar to buffers in that there is no\nspecific syntax to create them, but they are created using the\n``xrange()`` function. They don\'t support slicing, concatenation or\nrepetition, and using ``in``, ``not in``, ``min()`` or ``max()`` on\nthem is inefficient.\n\nMost sequence types support the following operations. The ``in`` and\n``not in`` operations have the same priorities as the comparison\noperations. The ``+`` and ``*`` operations have the same priority as\nthe corresponding numeric operations. [3] Additional methods are\nprovided for *Mutable Sequence Types*.\n\nThis table lists the sequence operations sorted in ascending priority\n(operations in the same box have the same priority). 
In the table,\n*s* and *t* are sequences of the same type; *n*, *i* and *j* are\nintegers:\n\n+--------------------+----------------------------------+------------+\n| Operation | Result | Notes |\n+====================+==================================+============+\n| ``x in s`` | ``True`` if an item of *s* is | (1) |\n| | equal to *x*, else ``False`` | |\n+--------------------+----------------------------------+------------+\n| ``x not in s`` | ``False`` if an item of *s* is | (1) |\n| | equal to *x*, else ``True`` | |\n+--------------------+----------------------------------+------------+\n| ``s + t`` | the concatenation of *s* and *t* | (6) |\n+--------------------+----------------------------------+------------+\n| ``s * n, n * s`` | *n* shallow copies of *s* | (2) |\n| | concatenated | |\n+--------------------+----------------------------------+------------+\n| ``s[i]`` | *i*\'th item of *s*, origin 0 | (3) |\n+--------------------+----------------------------------+------------+\n| ``s[i:j]`` | slice of *s* from *i* to *j* | (3)(4) |\n+--------------------+----------------------------------+------------+\n| ``s[i:j:k]`` | slice of *s* from *i* to *j* | (3)(5) |\n| | with step *k* | |\n+--------------------+----------------------------------+------------+\n| ``len(s)`` | length of *s* | |\n+--------------------+----------------------------------+------------+\n| ``min(s)`` | smallest item of *s* | |\n+--------------------+----------------------------------+------------+\n| ``max(s)`` | largest item of *s* | |\n+--------------------+----------------------------------+------------+\n| ``s.index(i)`` | index of the first occurence of | |\n| | *i* in *s* | |\n+--------------------+----------------------------------+------------+\n| ``s.count(i)`` | total number of occurences of | |\n| | *i* in *s* | |\n+--------------------+----------------------------------+------------+\n\nSequence types also support comparisons. In particular, tuples and\nlists are compared lexicographically by comparing corresponding\nelements. This means that to compare equal, every element must compare\nequal and the two sequences must be of the same type and have the same\nlength. (For full details see *Comparisons* in the language\nreference.)\n\nNotes:\n\n1. When *s* is a string or Unicode string object the ``in`` and ``not\n in`` operations act like a substring test. In Python versions\n before 2.3, *x* had to be a string of length 1. In Python 2.3 and\n beyond, *x* may be a string of any length.\n\n2. Values of *n* less than ``0`` are treated as ``0`` (which yields an\n empty sequence of the same type as *s*). Note also that the copies\n are shallow; nested structures are not copied. This often haunts\n new Python programmers; consider:\n\n >>> lists = [[]] * 3\n >>> lists\n [[], [], []]\n >>> lists[0].append(3)\n >>> lists\n [[3], [3], [3]]\n\n What has happened is that ``[[]]`` is a one-element list containing\n an empty list, so all three elements of ``[[]] * 3`` are (pointers\n to) this single empty list. Modifying any of the elements of\n ``lists`` modifies this single list. You can create a list of\n different lists this way:\n\n >>> lists = [[] for i in range(3)]\n >>> lists[0].append(3)\n >>> lists[1].append(5)\n >>> lists[2].append(7)\n >>> lists\n [[3], [5], [7]]\n\n3. If *i* or *j* is negative, the index is relative to the end of the\n string: ``len(s) + i`` or ``len(s) + j`` is substituted. But note\n that ``-0`` is still ``0``.\n\n4. 
The slice of *s* from *i* to *j* is defined as the sequence of\n items with index *k* such that ``i <= k < j``. If *i* or *j* is\n greater than ``len(s)``, use ``len(s)``. If *i* is omitted or\n ``None``, use ``0``. If *j* is omitted or ``None``, use\n ``len(s)``. If *i* is greater than or equal to *j*, the slice is\n empty.\n\n5. The slice of *s* from *i* to *j* with step *k* is defined as the\n sequence of items with index ``x = i + n*k`` such that ``0 <= n <\n (j-i)/k``. In other words, the indices are ``i``, ``i+k``,\n ``i+2*k``, ``i+3*k`` and so on, stopping when *j* is reached (but\n never including *j*). If *i* or *j* is greater than ``len(s)``,\n use ``len(s)``. If *i* or *j* are omitted or ``None``, they become\n "end" values (which end depends on the sign of *k*). Note, *k*\n cannot be zero. If *k* is ``None``, it is treated like ``1``.\n\n6. **CPython implementation detail:** If *s* and *t* are both strings,\n some Python implementations such as CPython can usually perform an\n in-place optimization for assignments of the form ``s = s + t`` or\n ``s += t``. When applicable, this optimization makes quadratic\n run-time much less likely. This optimization is both version and\n implementation dependent. For performance sensitive code, it is\n preferable to use the ``str.join()`` method which assures\n consistent linear concatenation performance across versions and\n implementations.\n\n Changed in version 2.4: Formerly, string concatenation never\n occurred in-place.\n\n\nString Methods\n==============\n\nBelow are listed the string methods which both 8-bit strings and\nUnicode objects support. Some of them are also available on\n``bytearray`` objects.\n\nIn addition, Python\'s strings support the sequence type methods\ndescribed in the *Sequence Types --- str, unicode, list, tuple,\nbytearray, buffer, xrange* section. To output formatted strings use\ntemplate strings or the ``%`` operator described in the *String\nFormatting Operations* section. Also, see the ``re`` module for string\nfunctions based on regular expressions.\n\nstr.capitalize()\n\n Return a copy of the string with its first character capitalized\n and the rest lowercased.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.center(width[, fillchar])\n\n Return centered in a string of length *width*. Padding is done\n using the specified *fillchar* (default is a space).\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.count(sub[, start[, end]])\n\n Return the number of non-overlapping occurrences of substring *sub*\n in the range [*start*, *end*]. Optional arguments *start* and\n *end* are interpreted as in slice notation.\n\nstr.decode([encoding[, errors]])\n\n Decodes the string using the codec registered for *encoding*.\n *encoding* defaults to the default string encoding. *errors* may\n be given to set a different error handling scheme. The default is\n ``\'strict\'``, meaning that encoding errors raise ``UnicodeError``.\n Other possible values are ``\'ignore\'``, ``\'replace\'`` and any other\n name registered via ``codecs.register_error()``, see section *Codec\n Base Classes*.\n\n New in version 2.2.\n\n Changed in version 2.3: Support for other error handling schemes\n added.\n\n Changed in version 2.7: Support for keyword arguments added.\n\nstr.encode([encoding[, errors]])\n\n Return an encoded version of the string. Default encoding is the\n current default string encoding. *errors* may be given to set a\n different error handling scheme. 
The default for *errors* is\n ``\'strict\'``, meaning that encoding errors raise a\n ``UnicodeError``. Other possible values are ``\'ignore\'``,\n ``\'replace\'``, ``\'xmlcharrefreplace\'``, ``\'backslashreplace\'`` and\n any other name registered via ``codecs.register_error()``, see\n section *Codec Base Classes*. For a list of possible encodings, see\n section *Standard Encodings*.\n\n New in version 2.0.\n\n Changed in version 2.3: Support for ``\'xmlcharrefreplace\'`` and\n ``\'backslashreplace\'`` and other error handling schemes added.\n\n Changed in version 2.7: Support for keyword arguments added.\n\nstr.endswith(suffix[, start[, end]])\n\n Return ``True`` if the string ends with the specified *suffix*,\n otherwise return ``False``. *suffix* can also be a tuple of\n suffixes to look for. With optional *start*, test beginning at\n that position. With optional *end*, stop comparing at that\n position.\n\n Changed in version 2.5: Accept tuples as *suffix*.\n\nstr.expandtabs([tabsize])\n\n Return a copy of the string where all tab characters are replaced\n by one or more spaces, depending on the current column and the\n given tab size. The column number is reset to zero after each\n newline occurring in the string. If *tabsize* is not given, a tab\n size of ``8`` characters is assumed. This doesn\'t understand other\n non-printing characters or escape sequences.\n\nstr.find(sub[, start[, end]])\n\n Return the lowest index in the string where substring *sub* is\n found, such that *sub* is contained in the slice ``s[start:end]``.\n Optional arguments *start* and *end* are interpreted as in slice\n notation. Return ``-1`` if *sub* is not found.\n\n Note: The ``find()`` method should be used only if you need to know the\n position of *sub*. To check if *sub* is a substring or not, use\n the ``in`` operator:\n\n >>> \'Py\' in \'Python\'\n True\n\nstr.format(*args, **kwargs)\n\n Perform a string formatting operation. The string on which this\n method is called can contain literal text or replacement fields\n delimited by braces ``{}``. Each replacement field contains either\n the numeric index of a positional argument, or the name of a\n keyword argument. 
Returns a copy of the string where each\n replacement field is replaced with the string value of the\n corresponding argument.\n\n >>> "The sum of 1 + 2 is {0}".format(1+2)\n \'The sum of 1 + 2 is 3\'\n\n See *Format String Syntax* for a description of the various\n formatting options that can be specified in format strings.\n\n This method of string formatting is the new standard in Python 3.0,\n and should be preferred to the ``%`` formatting described in\n *String Formatting Operations* in new code.\n\n New in version 2.6.\n\nstr.index(sub[, start[, end]])\n\n Like ``find()``, but raise ``ValueError`` when the substring is not\n found.\n\nstr.isalnum()\n\n Return true if all characters in the string are alphanumeric and\n there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isalpha()\n\n Return true if all characters in the string are alphabetic and\n there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isdigit()\n\n Return true if all characters in the string are digits and there is\n at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.islower()\n\n Return true if all cased characters in the string are lowercase and\n there is at least one cased character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isspace()\n\n Return true if there are only whitespace characters in the string\n and there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.istitle()\n\n Return true if the string is a titlecased string and there is at\n least one character, for example uppercase characters may only\n follow uncased characters and lowercase characters only cased ones.\n Return false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isupper()\n\n Return true if all cased characters in the string are uppercase and\n there is at least one cased character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.join(iterable)\n\n Return a string which is the concatenation of the strings in the\n *iterable* *iterable*. The separator between elements is the\n string providing this method.\n\nstr.ljust(width[, fillchar])\n\n Return the string left justified in a string of length *width*.\n Padding is done using the specified *fillchar* (default is a\n space). The original string is returned if *width* is less than\n ``len(s)``.\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.lower()\n\n Return a copy of the string converted to lowercase.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.lstrip([chars])\n\n Return a copy of the string with leading characters removed. The\n *chars* argument is a string specifying the set of characters to be\n removed. If omitted or ``None``, the *chars* argument defaults to\n removing whitespace. The *chars* argument is not a prefix; rather,\n all combinations of its values are stripped:\n\n >>> \' spacious \'.lstrip()\n \'spacious \'\n >>> \'www.example.com\'.lstrip(\'cmowz.\')\n \'example.com\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.partition(sep)\n\n Split the string at the first occurrence of *sep*, and return a\n 3-tuple containing the part before the separator, the separator\n itself, and the part after the separator. 
If the separator is not\n found, return a 3-tuple containing the string itself, followed by\n two empty strings.\n\n New in version 2.5.\n\nstr.replace(old, new[, count])\n\n Return a copy of the string with all occurrences of substring *old*\n replaced by *new*. If the optional argument *count* is given, only\n the first *count* occurrences are replaced.\n\nstr.rfind(sub[, start[, end]])\n\n Return the highest index in the string where substring *sub* is\n found, such that *sub* is contained within ``s[start:end]``.\n Optional arguments *start* and *end* are interpreted as in slice\n notation. Return ``-1`` on failure.\n\nstr.rindex(sub[, start[, end]])\n\n Like ``rfind()`` but raises ``ValueError`` when the substring *sub*\n is not found.\n\nstr.rjust(width[, fillchar])\n\n Return the string right justified in a string of length *width*.\n Padding is done using the specified *fillchar* (default is a\n space). The original string is returned if *width* is less than\n ``len(s)``.\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.rpartition(sep)\n\n Split the string at the last occurrence of *sep*, and return a\n 3-tuple containing the part before the separator, the separator\n itself, and the part after the separator. If the separator is not\n found, return a 3-tuple containing two empty strings, followed by\n the string itself.\n\n New in version 2.5.\n\nstr.rsplit([sep[, maxsplit]])\n\n Return a list of the words in the string, using *sep* as the\n delimiter string. If *maxsplit* is given, at most *maxsplit* splits\n are done, the *rightmost* ones. If *sep* is not specified or\n ``None``, any whitespace string is a separator. Except for\n splitting from the right, ``rsplit()`` behaves like ``split()``\n which is described in detail below.\n\n New in version 2.4.\n\nstr.rstrip([chars])\n\n Return a copy of the string with trailing characters removed. The\n *chars* argument is a string specifying the set of characters to be\n removed. If omitted or ``None``, the *chars* argument defaults to\n removing whitespace. The *chars* argument is not a suffix; rather,\n all combinations of its values are stripped:\n\n >>> \' spacious \'.rstrip()\n \' spacious\'\n >>> \'mississippi\'.rstrip(\'ipz\')\n \'mississ\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.split([sep[, maxsplit]])\n\n Return a list of the words in the string, using *sep* as the\n delimiter string. If *maxsplit* is given, at most *maxsplit*\n splits are done (thus, the list will have at most ``maxsplit+1``\n elements). If *maxsplit* is not specified, then there is no limit\n on the number of splits (all possible splits are made).\n\n If *sep* is given, consecutive delimiters are not grouped together\n and are deemed to delimit empty strings (for example,\n ``\'1,,2\'.split(\',\')`` returns ``[\'1\', \'\', \'2\']``). The *sep*\n argument may consist of multiple characters (for example,\n ``\'1<>2<>3\'.split(\'<>\')`` returns ``[\'1\', \'2\', \'3\']``). Splitting\n an empty string with a specified separator returns ``[\'\']``.\n\n If *sep* is not specified or is ``None``, a different splitting\n algorithm is applied: runs of consecutive whitespace are regarded\n as a single separator, and the result will contain no empty strings\n at the start or end if the string has leading or trailing\n whitespace. 
Consequently, splitting an empty string or a string\n consisting of just whitespace with a ``None`` separator returns\n ``[]``.\n\n For example, ``\' 1 2 3 \'.split()`` returns ``[\'1\', \'2\', \'3\']``,\n and ``\' 1 2 3 \'.split(None, 1)`` returns ``[\'1\', \'2 3 \']``.\n\nstr.splitlines([keepends])\n\n Return a list of the lines in the string, breaking at line\n boundaries. Line breaks are not included in the resulting list\n unless *keepends* is given and true.\n\nstr.startswith(prefix[, start[, end]])\n\n Return ``True`` if string starts with the *prefix*, otherwise\n return ``False``. *prefix* can also be a tuple of prefixes to look\n for. With optional *start*, test string beginning at that\n position. With optional *end*, stop comparing string at that\n position.\n\n Changed in version 2.5: Accept tuples as *prefix*.\n\nstr.strip([chars])\n\n Return a copy of the string with the leading and trailing\n characters removed. The *chars* argument is a string specifying the\n set of characters to be removed. If omitted or ``None``, the\n *chars* argument defaults to removing whitespace. The *chars*\n argument is not a prefix or suffix; rather, all combinations of its\n values are stripped:\n\n >>> \' spacious \'.strip()\n \'spacious\'\n >>> \'www.example.com\'.strip(\'cmowz.\')\n \'example\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.swapcase()\n\n Return a copy of the string with uppercase characters converted to\n lowercase and vice versa.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.title()\n\n Return a titlecased version of the string where words start with an\n uppercase character and the remaining characters are lowercase.\n\n The algorithm uses a simple language-independent definition of a\n word as groups of consecutive letters. The definition works in\n many contexts but it means that apostrophes in contractions and\n possessives form word boundaries, which may not be the desired\n result:\n\n >>> "they\'re bill\'s friends from the UK".title()\n "They\'Re Bill\'S Friends From The Uk"\n\n A workaround for apostrophes can be constructed using regular\n expressions:\n\n >>> import re\n >>> def titlecase(s):\n return re.sub(r"[A-Za-z]+(\'[A-Za-z]+)?",\n lambda mo: mo.group(0)[0].upper() +\n mo.group(0)[1:].lower(),\n s)\n\n >>> titlecase("they\'re bill\'s friends.")\n "They\'re Bill\'s Friends."\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.translate(table[, deletechars])\n\n Return a copy of the string where all characters occurring in the\n optional argument *deletechars* are removed, and the remaining\n characters have been mapped through the given translation table,\n which must be a string of length 256.\n\n You can use the ``maketrans()`` helper function in the ``string``\n module to create a translation table. For string objects, set the\n *table* argument to ``None`` for translations that only delete\n characters:\n\n >>> \'read this short text\'.translate(None, \'aeiou\')\n \'rd ths shrt txt\'\n\n New in version 2.6: Support for a ``None`` *table* argument.\n\n For Unicode objects, the ``translate()`` method does not accept the\n optional *deletechars* argument. Instead, it returns a copy of the\n *s* where all characters have been mapped through the given\n translation table which must be a mapping of Unicode ordinals to\n Unicode ordinals, Unicode strings or ``None``. Unmapped characters\n are left untouched. 
Characters mapped to ``None`` are deleted.\n Note, a more flexible approach is to create a custom character\n mapping codec using the ``codecs`` module (see ``encodings.cp1251``\n for an example).\n\nstr.upper()\n\n Return a copy of the string converted to uppercase.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.zfill(width)\n\n Return the numeric string left filled with zeros in a string of\n length *width*. A sign prefix is handled correctly. The original\n string is returned if *width* is less than ``len(s)``.\n\n New in version 2.2.2.\n\nThe following methods are present only on unicode objects:\n\nunicode.isnumeric()\n\n Return ``True`` if there are only numeric characters in S,\n ``False`` otherwise. Numeric characters include digit characters,\n and all characters that have the Unicode numeric value property,\n e.g. U+2155, VULGAR FRACTION ONE FIFTH.\n\nunicode.isdecimal()\n\n Return ``True`` if there are only decimal characters in S,\n ``False`` otherwise. Decimal characters include digit characters,\n and all characters that that can be used to form decimal-radix\n numbers, e.g. U+0660, ARABIC-INDIC DIGIT ZERO.\n\n\nString Formatting Operations\n============================\n\nString and Unicode objects have one unique built-in operation: the\n``%`` operator (modulo). This is also known as the string\n*formatting* or *interpolation* operator. Given ``format % values``\n(where *format* is a string or Unicode object), ``%`` conversion\nspecifications in *format* are replaced with zero or more elements of\n*values*. The effect is similar to the using ``sprintf()`` in the C\nlanguage. If *format* is a Unicode object, or if any of the objects\nbeing converted using the ``%s`` conversion are Unicode objects, the\nresult will also be a Unicode object.\n\nIf *format* requires a single argument, *values* may be a single non-\ntuple object. [4] Otherwise, *values* must be a tuple with exactly\nthe number of items specified by the format string, or a single\nmapping object (for example, a dictionary).\n\nA conversion specifier contains two or more characters and has the\nfollowing components, which must occur in this order:\n\n1. The ``\'%\'`` character, which marks the start of the specifier.\n\n2. Mapping key (optional), consisting of a parenthesised sequence of\n characters (for example, ``(somename)``).\n\n3. Conversion flags (optional), which affect the result of some\n conversion types.\n\n4. Minimum field width (optional). If specified as an ``\'*\'``\n (asterisk), the actual width is read from the next element of the\n tuple in *values*, and the object to convert comes after the\n minimum field width and optional precision.\n\n5. Precision (optional), given as a ``\'.\'`` (dot) followed by the\n precision. If specified as ``\'*\'`` (an asterisk), the actual width\n is read from the next element of the tuple in *values*, and the\n value to convert comes after the precision.\n\n6. Length modifier (optional).\n\n7. Conversion type.\n\nWhen the right argument is a dictionary (or other mapping type), then\nthe formats in the string *must* include a parenthesised mapping key\ninto that dictionary inserted immediately after the ``\'%\'`` character.\nThe mapping key selects the value to be formatted from the mapping.\nFor example:\n\n>>> print \'%(language)s has %(number)03d quote types.\' % \\\n... 
{"language": "Python", "number": 2}\nPython has 002 quote types.\n\nIn this case no ``*`` specifiers may occur in a format (since they\nrequire a sequential parameter list).\n\nThe conversion flag characters are:\n\n+-----------+-----------------------------------------------------------------------+\n| Flag | Meaning |\n+===========+=======================================================================+\n| ``\'#\'`` | The value conversion will use the "alternate form" (where defined |\n| | below). |\n+-----------+-----------------------------------------------------------------------+\n| ``\'0\'`` | The conversion will be zero padded for numeric values. |\n+-----------+-----------------------------------------------------------------------+\n| ``\'-\'`` | The converted value is left adjusted (overrides the ``\'0\'`` |\n| | conversion if both are given). |\n+-----------+-----------------------------------------------------------------------+\n| ``\' \'`` | (a space) A blank should be left before a positive number (or empty |\n| | string) produced by a signed conversion. |\n+-----------+-----------------------------------------------------------------------+\n| ``\'+\'`` | A sign character (``\'+\'`` or ``\'-\'``) will precede the conversion |\n| | (overrides a "space" flag). |\n+-----------+-----------------------------------------------------------------------+\n\nA length modifier (``h``, ``l``, or ``L``) may be present, but is\nignored as it is not necessary for Python -- so e.g. ``%ld`` is\nidentical to ``%d``.\n\nThe conversion types are:\n\n+--------------+-------------------------------------------------------+---------+\n| Conversion | Meaning | Notes |\n+==============+=======================================================+=========+\n| ``\'d\'`` | Signed integer decimal. | |\n+--------------+-------------------------------------------------------+---------+\n| ``\'i\'`` | Signed integer decimal. | |\n+--------------+-------------------------------------------------------+---------+\n| ``\'o\'`` | Signed octal value. | (1) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'u\'`` | Obsolete type -- it is identical to ``\'d\'``. | (7) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'x\'`` | Signed hexadecimal (lowercase). | (2) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'X\'`` | Signed hexadecimal (uppercase). | (2) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'e\'`` | Floating point exponential format (lowercase). | (3) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'E\'`` | Floating point exponential format (uppercase). | (3) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'f\'`` | Floating point decimal format. | (3) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'F\'`` | Floating point decimal format. | (3) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'g\'`` | Floating point format. Uses lowercase exponential | (4) |\n| | format if exponent is less than -4 or not less than | |\n| | precision, decimal format otherwise. | |\n+--------------+-------------------------------------------------------+---------+\n| ``\'G\'`` | Floating point format. 
Uses uppercase exponential | (4) |\n| | format if exponent is less than -4 or not less than | |\n| | precision, decimal format otherwise. | |\n+--------------+-------------------------------------------------------+---------+\n| ``\'c\'`` | Single character (accepts integer or single character | |\n| | string). | |\n+--------------+-------------------------------------------------------+---------+\n| ``\'r\'`` | String (converts any Python object using ``repr()``). | (5) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'s\'`` | String (converts any Python object using ``str()``). | (6) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'%\'`` | No argument is converted, results in a ``\'%\'`` | |\n| | character in the result. | |\n+--------------+-------------------------------------------------------+---------+\n\nNotes:\n\n1. The alternate form causes a leading zero (``\'0\'``) to be inserted\n between left-hand padding and the formatting of the number if the\n leading character of the result is not already a zero.\n\n2. The alternate form causes a leading ``\'0x\'`` or ``\'0X\'`` (depending\n on whether the ``\'x\'`` or ``\'X\'`` format was used) to be inserted\n between left-hand padding and the formatting of the number if the\n leading character of the result is not already a zero.\n\n3. The alternate form causes the result to always contain a decimal\n point, even if no digits follow it.\n\n The precision determines the number of digits after the decimal\n point and defaults to 6.\n\n4. The alternate form causes the result to always contain a decimal\n point, and trailing zeroes are not removed as they would otherwise\n be.\n\n The precision determines the number of significant digits before\n and after the decimal point and defaults to 6.\n\n5. The ``%r`` conversion was added in Python 2.0.\n\n The precision determines the maximal number of characters used.\n\n6. If the object or format provided is a ``unicode`` string, the\n resulting string will also be ``unicode``.\n\n The precision determines the maximal number of characters used.\n\n7. See **PEP 237**.\n\nSince Python strings have an explicit length, ``%s`` conversions do\nnot assume that ``\'\\0\'`` is the end of the string.\n\nChanged in version 2.7: ``%f`` conversions for numbers whose absolute\nvalue is over 1e50 are no longer replaced by ``%g`` conversions.\n\nAdditional string operations are defined in standard modules\n``string`` and ``re``.\n\n\nXRange Type\n===========\n\nThe ``xrange`` type is an immutable sequence which is commonly used\nfor looping. The advantage of the ``xrange`` type is that an\n``xrange`` object will always take the same amount of memory, no\nmatter the size of the range it represents. There are no consistent\nperformance advantages.\n\nXRange objects have very little behavior: they only support indexing,\niteration, and the ``len()`` function.\n\n\nMutable Sequence Types\n======================\n\nList and ``bytearray`` objects support additional operations that\nallow in-place modification of the object. Other mutable sequence\ntypes (when added to the language) should also support these\noperations. Strings and tuples are immutable sequence types: such\nobjects cannot be modified once created. 
The following operations are\ndefined on mutable sequence types (where *x* is an arbitrary object):\n\n+--------------------------------+----------------------------------+-----------------------+\n| Operation | Result | Notes |\n+================================+==================================+=======================+\n| ``s[i] = x`` | item *i* of *s* is replaced by | |\n| | *x* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s[i:j] = t`` | slice of *s* from *i* to *j* is | |\n| | replaced by the contents of the | |\n| | iterable *t* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``del s[i:j]`` | same as ``s[i:j] = []`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s[i:j:k] = t`` | the elements of ``s[i:j:k]`` are | (1) |\n| | replaced by those of *t* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``del s[i:j:k]`` | removes the elements of | |\n| | ``s[i:j:k]`` from the list | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.append(x)`` | same as ``s[len(s):len(s)] = | (2) |\n| | [x]`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.extend(x)`` | same as ``s[len(s):len(s)] = x`` | (3) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.count(x)`` | return number of *i*\'s for which | |\n| | ``s[i] == x`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.index(x[, i[, j]])`` | return smallest *k* such that | (4) |\n| | ``s[k] == x`` and ``i <= k < j`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.insert(i, x)`` | same as ``s[i:i] = [x]`` | (5) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.pop([i])`` | same as ``x = s[i]; del s[i]; | (6) |\n| | return x`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.remove(x)`` | same as ``del s[s.index(x)]`` | (4) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.reverse()`` | reverses the items of *s* in | (7) |\n| | place | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.sort([cmp[, key[, | sort the items of *s* in place | (7)(8)(9)(10) |\n| reverse]]])`` | | |\n+--------------------------------+----------------------------------+-----------------------+\n\nNotes:\n\n1. *t* must have the same length as the slice it is replacing.\n\n2. The C implementation of Python has historically accepted multiple\n parameters and implicitly joined them into a tuple; this no longer\n works in Python 2.0. Use of this misfeature has been deprecated\n since Python 1.4.\n\n3. *x* can be any iterable object.\n\n4. Raises ``ValueError`` when *x* is not found in *s*. When a negative\n index is passed as the second or third parameter to the ``index()``\n method, the list length is added, as for slice indices. If it is\n still negative, it is truncated to zero, as for slice indices.\n\n Changed in version 2.3: Previously, ``index()`` didn\'t have\n arguments for specifying start and stop positions.\n\n5. 
When a negative index is passed as the first parameter to the\n ``insert()`` method, the list length is added, as for slice\n indices. If it is still negative, it is truncated to zero, as for\n slice indices.\n\n Changed in version 2.3: Previously, all negative indices were\n truncated to zero.\n\n6. The ``pop()`` method is only supported by the list and array types.\n The optional argument *i* defaults to ``-1``, so that by default\n the last item is removed and returned.\n\n7. The ``sort()`` and ``reverse()`` methods modify the list in place\n for economy of space when sorting or reversing a large list. To\n remind you that they operate by side effect, they don\'t return the\n sorted or reversed list.\n\n8. The ``sort()`` method takes optional arguments for controlling the\n comparisons.\n\n *cmp* specifies a custom comparison function of two arguments (list\n items) which should return a negative, zero or positive number\n depending on whether the first argument is considered smaller than,\n equal to, or larger than the second argument: ``cmp=lambda x,y:\n cmp(x.lower(), y.lower())``. The default value is ``None``.\n\n *key* specifies a function of one argument that is used to extract\n a comparison key from each list element: ``key=str.lower``. The\n default value is ``None``.\n\n *reverse* is a boolean value. If set to ``True``, then the list\n elements are sorted as if each comparison were reversed.\n\n In general, the *key* and *reverse* conversion processes are much\n faster than specifying an equivalent *cmp* function. This is\n because *cmp* is called multiple times for each list element while\n *key* and *reverse* touch each element only once. Use\n ``functools.cmp_to_key()`` to convert an old-style *cmp* function\n to a *key* function.\n\n Changed in version 2.3: Support for ``None`` as an equivalent to\n omitting *cmp* was added.\n\n Changed in version 2.4: Support for *key* and *reverse* was added.\n\n9. Starting with Python 2.3, the ``sort()`` method is guaranteed to be\n stable. A sort is stable if it guarantees not to change the\n relative order of elements that compare equal --- this is helpful\n for sorting in multiple passes (for example, sort by department,\n then by salary grade).\n\n10. **CPython implementation detail:** While a list is being sorted,\n the effect of attempting to mutate, or even inspect, the list is\n undefined. The C implementation of Python 2.3 and newer makes the\n list appear empty for the duration, and raises ``ValueError`` if\n it can detect that the list has been mutated during a sort.\n', + 'typesseq-mutable': u"\nMutable Sequence Types\n**********************\n\nList and ``bytearray`` objects support additional operations that\nallow in-place modification of the object. Other mutable sequence\ntypes (when added to the language) should also support these\noperations. Strings and tuples are immutable sequence types: such\nobjects cannot be modified once created. 
The following operations are\ndefined on mutable sequence types (where *x* is an arbitrary object):\n\n+--------------------------------+----------------------------------+-----------------------+\n| Operation | Result | Notes |\n+================================+==================================+=======================+\n| ``s[i] = x`` | item *i* of *s* is replaced by | |\n| | *x* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s[i:j] = t`` | slice of *s* from *i* to *j* is | |\n| | replaced by the contents of the | |\n| | iterable *t* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``del s[i:j]`` | same as ``s[i:j] = []`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s[i:j:k] = t`` | the elements of ``s[i:j:k]`` are | (1) |\n| | replaced by those of *t* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``del s[i:j:k]`` | removes the elements of | |\n| | ``s[i:j:k]`` from the list | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.append(x)`` | same as ``s[len(s):len(s)] = | (2) |\n| | [x]`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.extend(x)`` | same as ``s[len(s):len(s)] = x`` | (3) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.count(x)`` | return number of *i*'s for which | |\n| | ``s[i] == x`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.index(x[, i[, j]])`` | return smallest *k* such that | (4) |\n| | ``s[k] == x`` and ``i <= k < j`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.insert(i, x)`` | same as ``s[i:i] = [x]`` | (5) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.pop([i])`` | same as ``x = s[i]; del s[i]; | (6) |\n| | return x`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.remove(x)`` | same as ``del s[s.index(x)]`` | (4) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.reverse()`` | reverses the items of *s* in | (7) |\n| | place | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.sort([cmp[, key[, | sort the items of *s* in place | (7)(8)(9)(10) |\n| reverse]]])`` | | |\n+--------------------------------+----------------------------------+-----------------------+\n\nNotes:\n\n1. *t* must have the same length as the slice it is replacing.\n\n2. The C implementation of Python has historically accepted multiple\n parameters and implicitly joined them into a tuple; this no longer\n works in Python 2.0. Use of this misfeature has been deprecated\n since Python 1.4.\n\n3. *x* can be any iterable object.\n\n4. Raises ``ValueError`` when *x* is not found in *s*. When a negative\n index is passed as the second or third parameter to the ``index()``\n method, the list length is added, as for slice indices. If it is\n still negative, it is truncated to zero, as for slice indices.\n\n Changed in version 2.3: Previously, ``index()`` didn't have\n arguments for specifying start and stop positions.\n\n5. 
When a negative index is passed as the first parameter to the\n ``insert()`` method, the list length is added, as for slice\n indices. If it is still negative, it is truncated to zero, as for\n slice indices.\n\n Changed in version 2.3: Previously, all negative indices were\n truncated to zero.\n\n6. The ``pop()`` method is only supported by the list and array types.\n The optional argument *i* defaults to ``-1``, so that by default\n the last item is removed and returned.\n\n7. The ``sort()`` and ``reverse()`` methods modify the list in place\n for economy of space when sorting or reversing a large list. To\n remind you that they operate by side effect, they don't return the\n sorted or reversed list.\n\n8. The ``sort()`` method takes optional arguments for controlling the\n comparisons.\n\n *cmp* specifies a custom comparison function of two arguments (list\n items) which should return a negative, zero or positive number\n depending on whether the first argument is considered smaller than,\n equal to, or larger than the second argument: ``cmp=lambda x,y:\n cmp(x.lower(), y.lower())``. The default value is ``None``.\n\n *key* specifies a function of one argument that is used to extract\n a comparison key from each list element: ``key=str.lower``. The\n default value is ``None``.\n\n *reverse* is a boolean value. If set to ``True``, then the list\n elements are sorted as if each comparison were reversed.\n\n In general, the *key* and *reverse* conversion processes are much\n faster than specifying an equivalent *cmp* function. This is\n because *cmp* is called multiple times for each list element while\n *key* and *reverse* touch each element only once. Use\n ``functools.cmp_to_key()`` to convert an old-style *cmp* function\n to a *key* function.\n\n Changed in version 2.3: Support for ``None`` as an equivalent to\n omitting *cmp* was added.\n\n Changed in version 2.4: Support for *key* and *reverse* was added.\n\n9. Starting with Python 2.3, the ``sort()`` method is guaranteed to be\n stable. A sort is stable if it guarantees not to change the\n relative order of elements that compare equal --- this is helpful\n for sorting in multiple passes (for example, sort by department,\n then by salary grade).\n\n10. **CPython implementation detail:** While a list is being sorted,\n the effect of attempting to mutate, or even inspect, the list is\n undefined. The C implementation of Python 2.3 and newer makes the\n list appear empty for the duration, and raises ``ValueError`` if\n it can detect that the list has been mutated during a sort.\n", 'unary': u'\nUnary arithmetic and bitwise operations\n***************************************\n\nAll unary arithmetic and bitwise operations have the same priority:\n\n u_expr ::= power | "-" u_expr | "+" u_expr | "~" u_expr\n\nThe unary ``-`` (minus) operator yields the negation of its numeric\nargument.\n\nThe unary ``+`` (plus) operator yields its numeric argument unchanged.\n\nThe unary ``~`` (invert) operator yields the bitwise inversion of its\nplain or long integer argument. The bitwise inversion of ``x`` is\ndefined as ``-(x+1)``. 
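The *cmp*/*key* distinction spelled out in the ``sort()`` notes above is easiest to see in a short sketch (illustrative only; the variable names are made up):

    import functools

    words = ['banana', 'Apple', 'cherry']

    # key= calls the extraction function once per element
    print sorted(words, key=str.lower)

    # an old-style cmp function compares pairs; cmp_to_key() adapts it to a key
    lower_cmp = lambda a, b: cmp(a.lower(), b.lower())
    print sorted(words, cmp=lower_cmp)
    print sorted(words, key=functools.cmp_to_key(lower_cmp))

All three calls yield ``['Apple', 'banana', 'cherry']``; the first *key* form lowercases each element once, while the *cmp*-based forms lowercase both operands at every comparison.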
It only applies to integral numbers.\n\nIn all three cases, if the argument does not have the proper type, a\n``TypeError`` exception is raised.\n', 'while': u'\nThe ``while`` statement\n***********************\n\nThe ``while`` statement is used for repeated execution as long as an\nexpression is true:\n\n while_stmt ::= "while" expression ":" suite\n ["else" ":" suite]\n\nThis repeatedly tests the expression and, if it is true, executes the\nfirst suite; if the expression is false (which may be the first time\nit is tested) the suite of the ``else`` clause, if present, is\nexecuted and the loop terminates.\n\nA ``break`` statement executed in the first suite terminates the loop\nwithout executing the ``else`` clause\'s suite. A ``continue``\nstatement executed in the first suite skips the rest of the suite and\ngoes back to testing the expression.\n', - 'with': u'\nThe ``with`` statement\n**********************\n\nNew in version 2.5.\n\nThe ``with`` statement is used to wrap the execution of a block with\nmethods defined by a context manager (see section *With Statement\nContext Managers*). This allows common\n``try``...``except``...``finally`` usage patterns to be encapsulated\nfor convenient reuse.\n\n with_stmt ::= "with" with_item ("," with_item)* ":" suite\n with_item ::= expression ["as" target]\n\nThe execution of the ``with`` statement with one "item" proceeds as\nfollows:\n\n1. The context expression is evaluated to obtain a context manager.\n\n2. The context manager\'s ``__exit__()`` is loaded for later use.\n\n3. The context manager\'s ``__enter__()`` method is invoked.\n\n4. If a target was included in the ``with`` statement, the return\n value from ``__enter__()`` is assigned to it.\n\n Note: The ``with`` statement guarantees that if the ``__enter__()``\n method returns without an error, then ``__exit__()`` will always\n be called. Thus, if an error occurs during the assignment to the\n target list, it will be treated the same as an error occurring\n within the suite would be. See step 6 below.\n\n5. The suite is executed.\n\n6. The context manager\'s ``__exit__()`` method is invoked. If an\n exception caused the suite to be exited, its type, value, and\n traceback are passed as arguments to ``__exit__()``. Otherwise,\n three ``None`` arguments are supplied.\n\n If the suite was exited due to an exception, and the return value\n from the ``__exit__()`` method was false, the exception is\n reraised. If the return value was true, the exception is\n suppressed, and execution continues with the statement following\n the ``with`` statement.\n\n If the suite was exited for any reason other than an exception, the\n return value from ``__exit__()`` is ignored, and execution proceeds\n at the normal location for the kind of exit that was taken.\n\nWith more than one item, the context managers are processed as if\nmultiple ``with`` statements were nested:\n\n with A() as a, B() as b:\n suite\n\nis equivalent to\n\n with A() as a:\n with B() as b:\n suite\n\nNote: In Python 2.5, the ``with`` statement is only allowed when the\n ``with_statement`` feature has been enabled. 
It is always enabled\n in Python 2.6.\n\nChanged in version 2.7: Support for multiple context expressions.\n\nSee also:\n\n **PEP 0343** - The "with" statement\n The specification, background, and examples for the Python\n ``with`` statement.\n', + 'with': u'\nThe ``with`` statement\n**********************\n\nNew in version 2.5.\n\nThe ``with`` statement is used to wrap the execution of a block with\nmethods defined by a context manager (see section *With Statement\nContext Managers*). This allows common\n``try``...``except``...``finally`` usage patterns to be encapsulated\nfor convenient reuse.\n\n with_stmt ::= "with" with_item ("," with_item)* ":" suite\n with_item ::= expression ["as" target]\n\nThe execution of the ``with`` statement with one "item" proceeds as\nfollows:\n\n1. The context expression (the expression given in the **with_item**)\n is evaluated to obtain a context manager.\n\n2. The context manager\'s ``__exit__()`` is loaded for later use.\n\n3. The context manager\'s ``__enter__()`` method is invoked.\n\n4. If a target was included in the ``with`` statement, the return\n value from ``__enter__()`` is assigned to it.\n\n Note: The ``with`` statement guarantees that if the ``__enter__()``\n method returns without an error, then ``__exit__()`` will always\n be called. Thus, if an error occurs during the assignment to the\n target list, it will be treated the same as an error occurring\n within the suite would be. See step 6 below.\n\n5. The suite is executed.\n\n6. The context manager\'s ``__exit__()`` method is invoked. If an\n exception caused the suite to be exited, its type, value, and\n traceback are passed as arguments to ``__exit__()``. Otherwise,\n three ``None`` arguments are supplied.\n\n If the suite was exited due to an exception, and the return value\n from the ``__exit__()`` method was false, the exception is\n reraised. If the return value was true, the exception is\n suppressed, and execution continues with the statement following\n the ``with`` statement.\n\n If the suite was exited for any reason other than an exception, the\n return value from ``__exit__()`` is ignored, and execution proceeds\n at the normal location for the kind of exit that was taken.\n\nWith more than one item, the context managers are processed as if\nmultiple ``with`` statements were nested:\n\n with A() as a, B() as b:\n suite\n\nis equivalent to\n\n with A() as a:\n with B() as b:\n suite\n\nNote: In Python 2.5, the ``with`` statement is only allowed when the\n ``with_statement`` feature has been enabled. It is always enabled\n in Python 2.6.\n\nChanged in version 2.7: Support for multiple context expressions.\n\nSee also:\n\n **PEP 0343** - The "with" statement\n The specification, background, and examples for the Python\n ``with`` statement.\n', 'yield': u'\nThe ``yield`` statement\n***********************\n\n yield_stmt ::= yield_expression\n\nThe ``yield`` statement is only used when defining a generator\nfunction, and is only used in the body of the generator function.\nUsing a ``yield`` statement in a function definition is sufficient to\ncause that definition to create a generator function instead of a\nnormal function.\n\nWhen a generator function is called, it returns an iterator known as a\ngenerator iterator, or more commonly, a generator. 
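The six execution steps listed above for a single ``with`` item can be traced with a tiny hand-written context manager (a sketch, not part of the patch):

    class Traced(object):
        def __enter__(self):
            print '__enter__'            # step 3; the return value is bound by "as" (step 4)
            return self

        def __exit__(self, exc_type, exc_value, tb):
            print '__exit__', exc_type   # step 6; returning True would suppress an exception
            return False

    with Traced() as t:                  # steps 1-2: evaluate the expression, load __exit__()
        print 'suite'                    # step 5

Running it prints ``__enter__``, ``suite``, then ``__exit__ None``; raising inside the suite would instead pass the exception triple to ``__exit__()``.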
The body of the\ngenerator function is executed by calling the generator\'s ``next()``\nmethod repeatedly until it raises an exception.\n\nWhen a ``yield`` statement is executed, the state of the generator is\nfrozen and the value of **expression_list** is returned to\n``next()``\'s caller. By "frozen" we mean that all local state is\nretained, including the current bindings of local variables, the\ninstruction pointer, and the internal evaluation stack: enough\ninformation is saved so that the next time ``next()`` is invoked, the\nfunction can proceed exactly as if the ``yield`` statement were just\nanother external call.\n\nAs of Python version 2.5, the ``yield`` statement is now allowed in\nthe ``try`` clause of a ``try`` ... ``finally`` construct. If the\ngenerator is not resumed before it is finalized (by reaching a zero\nreference count or by being garbage collected), the generator-\niterator\'s ``close()`` method will be called, allowing any pending\n``finally`` clauses to execute.\n\nNote: In Python 2.2, the ``yield`` statement was only allowed when the\n ``generators`` feature has been enabled. This ``__future__`` import\n statement was used to enable the feature:\n\n from __future__ import generators\n\nSee also:\n\n **PEP 0255** - Simple Generators\n The proposal for adding generators and the ``yield`` statement\n to Python.\n\n **PEP 0342** - Coroutines via Enhanced Generators\n The proposal that, among other generator enhancements, proposed\n allowing ``yield`` to appear inside a ``try`` ... ``finally``\n block.\n'} diff --git a/lib-python/2.7/random.py b/lib-python/2.7/random.py --- a/lib-python/2.7/random.py +++ b/lib-python/2.7/random.py @@ -317,7 +317,7 @@ n = len(population) if not 0 <= k <= n: - raise ValueError, "sample larger than population" + raise ValueError("sample larger than population") random = self.random _int = int result = [None] * k @@ -490,6 +490,12 @@ Conditions on the parameters are alpha > 0 and beta > 0. + The probability distribution function is: + + x ** (alpha - 1) * math.exp(-x / beta) + pdf(x) = -------------------------------------- + math.gamma(alpha) * beta ** alpha + """ # alpha > 0, beta > 0, mean is alpha*beta, variance is alpha*beta**2 @@ -592,7 +598,7 @@ ## -------------------- beta -------------------- ## See -## http://sourceforge.net/bugs/?func=detailbug&bug_id=130030&group_id=5470 +## http://mail.python.org/pipermail/python-bugs-list/2001-January/003752.html ## for Ivan Frohne's insightful analysis of why the original implementation: ## ## def betavariate(self, alpha, beta): diff --git a/lib-python/2.7/re.py b/lib-python/2.7/re.py --- a/lib-python/2.7/re.py +++ b/lib-python/2.7/re.py @@ -207,8 +207,7 @@ "Escape all non-alphanumeric characters in pattern." s = list(pattern) alphanum = _alphanum - for i in range(len(pattern)): - c = pattern[i] + for i, c in enumerate(pattern): if c not in alphanum: if c == "\000": s[i] = "\\000" diff --git a/lib-python/2.7/shutil.py b/lib-python/2.7/shutil.py --- a/lib-python/2.7/shutil.py +++ b/lib-python/2.7/shutil.py @@ -277,6 +277,12 @@ """ real_dst = dst if os.path.isdir(dst): + if _samefile(src, dst): + # We might be on a case insensitive filesystem, + # perform the rename anyway. + os.rename(src, dst) + return + real_dst = os.path.join(dst, _basename(src)) if os.path.exists(real_dst): raise Error, "Destination path '%s' already exists" % real_dst @@ -336,7 +342,7 @@ archive that is being built. If not provided, the current owner and group will be used. 
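The ``shutil.move()`` hunk above falls back to a plain ``os.rename()`` when the destination is a directory but is really the same file as the source, which is what a case-only rename looks like on a case-insensitive filesystem. Roughly (paths invented for illustration):

    import os, shutil

    os.mkdir('Example')
    # On a case-insensitive filesystem 'EXAMPLE' names the same directory,
    # so the patched move() renames it instead of trying to move the
    # directory into itself.
    shutil.move('Example', 'EXAMPLE')
    print os.listdir('.')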
- The output tar file will be named 'base_dir' + ".tar", possibly plus + The output tar file will be named 'base_name' + ".tar", possibly plus the appropriate compression extension (".gz", or ".bz2"). Returns the output filename. @@ -406,7 +412,7 @@ def _make_zipfile(base_name, base_dir, verbose=0, dry_run=0, logger=None): """Create a zip file from all the files under 'base_dir'. - The output zip file will be named 'base_dir' + ".zip". Uses either the + The output zip file will be named 'base_name' + ".zip". Uses either the "zipfile" Python module (if available) or the InfoZIP "zip" utility (if installed and found on the default search path). If neither tool is available, raises ExecError. Returns the name of the output zip diff --git a/lib-python/2.7/site.py b/lib-python/2.7/site.py --- a/lib-python/2.7/site.py +++ b/lib-python/2.7/site.py @@ -61,6 +61,7 @@ import sys import os import __builtin__ +import traceback # Prefixes for site-packages; add additional prefixes like /usr/local here PREFIXES = [sys.prefix, sys.exec_prefix] @@ -155,17 +156,26 @@ except IOError: return with f: - for line in f: + for n, line in enumerate(f): if line.startswith("#"): continue - if line.startswith(("import ", "import\t")): - exec line - continue - line = line.rstrip() - dir, dircase = makepath(sitedir, line) - if not dircase in known_paths and os.path.exists(dir): - sys.path.append(dir) - known_paths.add(dircase) + try: + if line.startswith(("import ", "import\t")): + exec line + continue + line = line.rstrip() + dir, dircase = makepath(sitedir, line) + if not dircase in known_paths and os.path.exists(dir): + sys.path.append(dir) + known_paths.add(dircase) + except Exception as err: + print >>sys.stderr, "Error processing line {:d} of {}:\n".format( + n+1, fullname) + for record in traceback.format_exception(*sys.exc_info()): + for line in record.splitlines(): + print >>sys.stderr, ' '+line + print >>sys.stderr, "\nRemainder of file ignored" + break if reset: known_paths = None return known_paths diff --git a/lib-python/2.7/smtplib.py b/lib-python/2.7/smtplib.py --- a/lib-python/2.7/smtplib.py +++ b/lib-python/2.7/smtplib.py @@ -49,17 +49,18 @@ from email.base64mime import encode as encode_base64 from sys import stderr -__all__ = ["SMTPException","SMTPServerDisconnected","SMTPResponseException", - "SMTPSenderRefused","SMTPRecipientsRefused","SMTPDataError", - "SMTPConnectError","SMTPHeloError","SMTPAuthenticationError", - "quoteaddr","quotedata","SMTP"] +__all__ = ["SMTPException", "SMTPServerDisconnected", "SMTPResponseException", + "SMTPSenderRefused", "SMTPRecipientsRefused", "SMTPDataError", + "SMTPConnectError", "SMTPHeloError", "SMTPAuthenticationError", + "quoteaddr", "quotedata", "SMTP"] SMTP_PORT = 25 SMTP_SSL_PORT = 465 -CRLF="\r\n" +CRLF = "\r\n" OLDSTYLE_AUTH = re.compile(r"auth=(.*)", re.I) + # Exception classes used by this module. class SMTPException(Exception): """Base class for all exceptions raised by this module.""" @@ -109,7 +110,7 @@ def __init__(self, recipients): self.recipients = recipients - self.args = ( recipients,) + self.args = (recipients,) class SMTPDataError(SMTPResponseException): @@ -128,6 +129,7 @@ combination provided. """ + def quoteaddr(addr): """Quote a subset of the email addresses defined by RFC 821. @@ -138,7 +140,7 @@ m = email.utils.parseaddr(addr)[1] except AttributeError: pass - if m == (None, None): # Indicates parse failure or AttributeError + if m == (None, None): # Indicates parse failure or AttributeError # something weird here.. 
punt -ddm return "<%s>" % addr elif m is None: @@ -175,7 +177,8 @@ chr = None while chr != "\n": chr = self.sslobj.read(1) - if not chr: break + if not chr: + break str += chr return str @@ -219,6 +222,7 @@ ehlo_msg = "ehlo" ehlo_resp = None does_esmtp = 0 + default_port = SMTP_PORT def __init__(self, host='', port=0, local_hostname=None, timeout=socket._GLOBAL_DEFAULT_TIMEOUT): @@ -234,7 +238,6 @@ """ self.timeout = timeout self.esmtp_features = {} - self.default_port = SMTP_PORT if host: (code, msg) = self.connect(host, port) if code != 220: @@ -269,10 +272,11 @@ def _get_socket(self, port, host, timeout): # This makes it simpler for SMTP_SSL to use the SMTP connect code # and just alter the socket connection bit. - if self.debuglevel > 0: print>>stderr, 'connect:', (host, port) + if self.debuglevel > 0: + print>>stderr, 'connect:', (host, port) return socket.create_connection((port, host), timeout) - def connect(self, host='localhost', port = 0): + def connect(self, host='localhost', port=0): """Connect to a host on a given port. If the hostname ends with a colon (`:') followed by a number, and @@ -286,20 +290,25 @@ if not port and (host.find(':') == host.rfind(':')): i = host.rfind(':') if i >= 0: - host, port = host[:i], host[i+1:] - try: port = int(port) + host, port = host[:i], host[i + 1:] + try: + port = int(port) except ValueError: raise socket.error, "nonnumeric port" - if not port: port = self.default_port - if self.debuglevel > 0: print>>stderr, 'connect:', (host, port) + if not port: + port = self.default_port + if self.debuglevel > 0: + print>>stderr, 'connect:', (host, port) self.sock = self._get_socket(host, port, self.timeout) (code, msg) = self.getreply() - if self.debuglevel > 0: print>>stderr, "connect:", msg + if self.debuglevel > 0: + print>>stderr, "connect:", msg return (code, msg) def send(self, str): """Send `str' to the server.""" - if self.debuglevel > 0: print>>stderr, 'send:', repr(str) + if self.debuglevel > 0: + print>>stderr, 'send:', repr(str) if hasattr(self, 'sock') and self.sock: try: self.sock.sendall(str) @@ -330,7 +339,7 @@ Raises SMTPServerDisconnected if end-of-file is reached. """ - resp=[] + resp = [] if self.file is None: self.file = self.sock.makefile('rb') while 1: @@ -341,9 +350,10 @@ if line == '': self.close() raise SMTPServerDisconnected("Connection unexpectedly closed") - if self.debuglevel > 0: print>>stderr, 'reply:', repr(line) + if self.debuglevel > 0: + print>>stderr, 'reply:', repr(line) resp.append(line[4:].strip()) - code=line[:3] + code = line[:3] # Check that the error code is syntactically correct. # Don't attempt to read a continuation line if it is broken. try: @@ -352,17 +362,17 @@ errcode = -1 break # Check if multiline response. - if line[3:4]!="-": + if line[3:4] != "-": break errmsg = "\n".join(resp) if self.debuglevel > 0: - print>>stderr, 'reply: retcode (%s); Msg: %s' % (errcode,errmsg) + print>>stderr, 'reply: retcode (%s); Msg: %s' % (errcode, errmsg) return errcode, errmsg def docmd(self, cmd, args=""): """Send a command, and return its response code.""" - self.putcmd(cmd,args) + self.putcmd(cmd, args) return self.getreply() # std smtp commands @@ -372,9 +382,9 @@ host. """ self.putcmd("helo", name or self.local_hostname) - (code,msg)=self.getreply() - self.helo_resp=msg - return (code,msg) + (code, msg) = self.getreply() + self.helo_resp = msg + return (code, msg) def ehlo(self, name=''): """ SMTP 'ehlo' command. 
@@ -383,19 +393,19 @@ """ self.esmtp_features = {} self.putcmd(self.ehlo_msg, name or self.local_hostname) - (code,msg)=self.getreply() + (code, msg) = self.getreply() # According to RFC1869 some (badly written) # MTA's will disconnect on an ehlo. Toss an exception if # that happens -ddm if code == -1 and len(msg) == 0: self.close() raise SMTPServerDisconnected("Server not connected") - self.ehlo_resp=msg + self.ehlo_resp = msg if code != 250: - return (code,msg) - self.does_esmtp=1 + return (code, msg) + self.does_esmtp = 1 #parse the ehlo response -ddm - resp=self.ehlo_resp.split('\n') + resp = self.ehlo_resp.split('\n') del resp[0] for each in resp: # To be able to communicate with as many SMTP servers as possible, @@ -415,16 +425,16 @@ # It's actually stricter, in that only spaces are allowed between # parameters, but were not going to check for that here. Note # that the space isn't present if there are no parameters. - m=re.match(r'(?P[A-Za-z0-9][A-Za-z0-9\-]*) ?',each) + m = re.match(r'(?P[A-Za-z0-9][A-Za-z0-9\-]*) ?', each) if m: - feature=m.group("feature").lower() - params=m.string[m.end("feature"):].strip() + feature = m.group("feature").lower() + params = m.string[m.end("feature"):].strip() if feature == "auth": self.esmtp_features[feature] = self.esmtp_features.get(feature, "") \ + " " + params else: - self.esmtp_features[feature]=params - return (code,msg) + self.esmtp_features[feature] = params + return (code, msg) def has_extn(self, opt): """Does the server support a given SMTP service extension?""" @@ -444,23 +454,23 @@ """SMTP 'noop' command -- doesn't do anything :>""" return self.docmd("noop") - def mail(self,sender,options=[]): + def mail(self, sender, options=[]): """SMTP 'mail' command -- begins mail xfer session.""" optionlist = '' if options and self.does_esmtp: optionlist = ' ' + ' '.join(options) - self.putcmd("mail", "FROM:%s%s" % (quoteaddr(sender) ,optionlist)) + self.putcmd("mail", "FROM:%s%s" % (quoteaddr(sender), optionlist)) return self.getreply() - def rcpt(self,recip,options=[]): + def rcpt(self, recip, options=[]): """SMTP 'rcpt' command -- indicates 1 recipient for this mail.""" optionlist = '' if options and self.does_esmtp: optionlist = ' ' + ' '.join(options) - self.putcmd("rcpt","TO:%s%s" % (quoteaddr(recip),optionlist)) + self.putcmd("rcpt", "TO:%s%s" % (quoteaddr(recip), optionlist)) return self.getreply() - def data(self,msg): + def data(self, msg): """SMTP 'DATA' command -- sends message data to server. Automatically quotes lines beginning with a period per rfc821. @@ -469,26 +479,28 @@ response code received when the all data is sent. """ self.putcmd("data") - (code,repl)=self.getreply() - if self.debuglevel >0 : print>>stderr, "data:", (code,repl) + (code, repl) = self.getreply() + if self.debuglevel > 0: + print>>stderr, "data:", (code, repl) if code != 354: - raise SMTPDataError(code,repl) + raise SMTPDataError(code, repl) else: q = quotedata(msg) if q[-2:] != CRLF: q = q + CRLF q = q + "." + CRLF self.send(q) - (code,msg)=self.getreply() - if self.debuglevel >0 : print>>stderr, "data:", (code,msg) - return (code,msg) + (code, msg) = self.getreply() + if self.debuglevel > 0: + print>>stderr, "data:", (code, msg) + return (code, msg) def verify(self, address): """SMTP 'verify' command -- checks for address validity.""" self.putcmd("vrfy", quoteaddr(address)) return self.getreply() # a.k.a. 
- vrfy=verify + vrfy = verify def expn(self, address): """SMTP 'expn' command -- expands a mailing list.""" @@ -592,7 +604,7 @@ raise SMTPAuthenticationError(code, resp) return (code, resp) - def starttls(self, keyfile = None, certfile = None): + def starttls(self, keyfile=None, certfile=None): """Puts the connection to the SMTP server into TLS mode. If there has been no previous EHLO or HELO command this session, this @@ -695,22 +707,22 @@ for option in mail_options: esmtp_opts.append(option) - (code,resp) = self.mail(from_addr, esmtp_opts) + (code, resp) = self.mail(from_addr, esmtp_opts) if code != 250: self.rset() raise SMTPSenderRefused(code, resp, from_addr) - senderrs={} + senderrs = {} if isinstance(to_addrs, basestring): to_addrs = [to_addrs] for each in to_addrs: - (code,resp)=self.rcpt(each, rcpt_options) + (code, resp) = self.rcpt(each, rcpt_options) if (code != 250) and (code != 251): - senderrs[each]=(code,resp) - if len(senderrs)==len(to_addrs): + senderrs[each] = (code, resp) + if len(senderrs) == len(to_addrs): # the server refused all our recipients self.rset() raise SMTPRecipientsRefused(senderrs) - (code,resp) = self.data(msg) + (code, resp) = self.data(msg) if code != 250: self.rset() raise SMTPDataError(code, resp) @@ -744,16 +756,19 @@ are also optional - they can contain a PEM formatted private key and certificate chain file for the SSL connection. """ + + default_port = SMTP_SSL_PORT + def __init__(self, host='', port=0, local_hostname=None, keyfile=None, certfile=None, timeout=socket._GLOBAL_DEFAULT_TIMEOUT): self.keyfile = keyfile self.certfile = certfile SMTP.__init__(self, host, port, local_hostname, timeout) - self.default_port = SMTP_SSL_PORT def _get_socket(self, host, port, timeout): - if self.debuglevel > 0: print>>stderr, 'connect:', (host, port) + if self.debuglevel > 0: + print>>stderr, 'connect:', (host, port) new_socket = socket.create_connection((host, port), timeout) new_socket = ssl.wrap_socket(new_socket, self.keyfile, self.certfile) self.file = SSLFakeFile(new_socket) @@ -781,11 +796,11 @@ ehlo_msg = "lhlo" - def __init__(self, host = '', port = LMTP_PORT, local_hostname = None): + def __init__(self, host='', port=LMTP_PORT, local_hostname=None): """Initialize a new instance.""" SMTP.__init__(self, host, port, local_hostname) - def connect(self, host = 'localhost', port = 0): + def connect(self, host='localhost', port=0): """Connect to the LMTP daemon, on either a Unix or a TCP socket.""" if host[0] != '/': return SMTP.connect(self, host, port) @@ -795,13 +810,15 @@ self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) self.sock.connect(host) except socket.error, msg: - if self.debuglevel > 0: print>>stderr, 'connect fail:', host + if self.debuglevel > 0: + print>>stderr, 'connect fail:', host if self.sock: self.sock.close() self.sock = None raise socket.error, msg (code, msg) = self.getreply() - if self.debuglevel > 0: print>>stderr, "connect:", msg + if self.debuglevel > 0: + print>>stderr, "connect:", msg return (code, msg) @@ -815,7 +832,7 @@ return sys.stdin.readline().strip() fromaddr = prompt("From") - toaddrs = prompt("To").split(',') + toaddrs = prompt("To").split(',') print "Enter message, end with ^D:" msg = '' while 1: diff --git a/lib-python/2.7/ssl.py b/lib-python/2.7/ssl.py --- a/lib-python/2.7/ssl.py +++ b/lib-python/2.7/ssl.py @@ -121,9 +121,11 @@ if e.errno != errno.ENOTCONN: raise # no, no connection yet + self._connected = False self._sslobj = None else: # yes, create the SSL object + self._connected = True 
self._sslobj = _ssl.sslwrap(self._sock, server_side, keyfile, certfile, cert_reqs, ssl_version, ca_certs, @@ -293,21 +295,36 @@ self._sslobj.do_handshake() - def connect(self, addr): - - """Connects to remote ADDR, and then wraps the connection in - an SSL channel.""" - + def _real_connect(self, addr, return_errno): # Here we assume that the socket is client-side, and not # connected at the time of the call. We connect it, then wrap it. - if self._sslobj: + if self._connected: raise ValueError("attempt to connect already-connected SSLSocket!") - socket.connect(self, addr) self._sslobj = _ssl.sslwrap(self._sock, False, self.keyfile, self.certfile, self.cert_reqs, self.ssl_version, self.ca_certs, self.ciphers) - if self.do_handshake_on_connect: - self.do_handshake() + try: + socket.connect(self, addr) + if self.do_handshake_on_connect: + self.do_handshake() + except socket_error as e: + if return_errno: + return e.errno + else: + self._sslobj = None + raise e + self._connected = True + return 0 + + def connect(self, addr): + """Connects to remote ADDR, and then wraps the connection in + an SSL channel.""" + self._real_connect(addr, False) + + def connect_ex(self, addr): + """Connects to remote ADDR, and then wraps the connection in + an SSL channel.""" + return self._real_connect(addr, True) def accept(self): diff --git a/lib-python/2.7/subprocess.py b/lib-python/2.7/subprocess.py --- a/lib-python/2.7/subprocess.py +++ b/lib-python/2.7/subprocess.py @@ -396,6 +396,7 @@ import traceback import gc import signal +import errno # Exception classes used by this module. class CalledProcessError(Exception): @@ -427,7 +428,6 @@ else: import select _has_poll = hasattr(select, 'poll') - import errno import fcntl import pickle @@ -441,8 +441,15 @@ "check_output", "CalledProcessError"] if mswindows: - from _subprocess import CREATE_NEW_CONSOLE, CREATE_NEW_PROCESS_GROUP - __all__.extend(["CREATE_NEW_CONSOLE", "CREATE_NEW_PROCESS_GROUP"]) + from _subprocess import (CREATE_NEW_CONSOLE, CREATE_NEW_PROCESS_GROUP, + STD_INPUT_HANDLE, STD_OUTPUT_HANDLE, + STD_ERROR_HANDLE, SW_HIDE, + STARTF_USESTDHANDLES, STARTF_USESHOWWINDOW) + + __all__.extend(["CREATE_NEW_CONSOLE", "CREATE_NEW_PROCESS_GROUP", + "STD_INPUT_HANDLE", "STD_OUTPUT_HANDLE", + "STD_ERROR_HANDLE", "SW_HIDE", + "STARTF_USESTDHANDLES", "STARTF_USESHOWWINDOW"]) try: MAXFD = os.sysconf("SC_OPEN_MAX") except: @@ -726,7 +733,11 @@ stderr = None if self.stdin: if input: - self.stdin.write(input) + try: + self.stdin.write(input) + except IOError as e: + if e.errno != errno.EPIPE and e.errno != errno.EINVAL: + raise self.stdin.close() elif self.stdout: stdout = self.stdout.read() @@ -883,7 +894,7 @@ except pywintypes.error, e: # Translate pywintypes.error to WindowsError, which is # a subclass of OSError. FIXME: We should really - # translate errno using _sys_errlist (or simliar), but + # translate errno using _sys_errlist (or similar), but # how can this be done from Python? 
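The ``ssl.py`` hunk above factors ``connect()`` into ``_real_connect()`` and adds ``connect_ex()``, so the wrapped socket mirrors ``socket.connect_ex()`` and reports failure as an errno instead of raising. A minimal sketch (host and port are illustrative):

    import socket, ssl

    s = ssl.wrap_socket(socket.socket(socket.AF_INET, socket.SOCK_STREAM))
    err = s.connect_ex(('localhost', 4433))   # 0 on success, an errno on failure
    print err or 'connected'
    s.close()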
raise WindowsError(*e.args) finally: @@ -956,7 +967,11 @@ if self.stdin: if input is not None: - self.stdin.write(input) + try: + self.stdin.write(input) + except IOError as e: + if e.errno != errno.EPIPE: + raise self.stdin.close() if self.stdout: @@ -1051,14 +1066,17 @@ errread, errwrite) - def _set_cloexec_flag(self, fd): + def _set_cloexec_flag(self, fd, cloexec=True): try: cloexec_flag = fcntl.FD_CLOEXEC except AttributeError: cloexec_flag = 1 old = fcntl.fcntl(fd, fcntl.F_GETFD) - fcntl.fcntl(fd, fcntl.F_SETFD, old | cloexec_flag) + if cloexec: + fcntl.fcntl(fd, fcntl.F_SETFD, old | cloexec_flag) + else: + fcntl.fcntl(fd, fcntl.F_SETFD, old & ~cloexec_flag) def _close_fds(self, but): @@ -1128,21 +1146,25 @@ os.close(errpipe_read) # Dup fds for child - if p2cread is not None: - os.dup2(p2cread, 0) - if c2pwrite is not None: - os.dup2(c2pwrite, 1) - if errwrite is not None: - os.dup2(errwrite, 2) + def _dup2(a, b): + # dup2() removes the CLOEXEC flag but + # we must do it ourselves if dup2() + # would be a no-op (issue #10806). + if a == b: + self._set_cloexec_flag(a, False) + elif a is not None: + os.dup2(a, b) + _dup2(p2cread, 0) + _dup2(c2pwrite, 1) + _dup2(errwrite, 2) - # Close pipe fds. Make sure we don't close the same - # fd more than once, or standard fds. - if p2cread is not None and p2cread not in (0,): - os.close(p2cread) - if c2pwrite is not None and c2pwrite not in (p2cread, 1): - os.close(c2pwrite) - if errwrite is not None and errwrite not in (p2cread, c2pwrite, 2): - os.close(errwrite) + # Close pipe fds. Make sure we don't close the + # same fd more than once, or standard fds. + closed = { None } + for fd in [p2cread, c2pwrite, errwrite]: + if fd not in closed and fd > 2: + os.close(fd) + closed.add(fd) # Close all other fds, if asked for if close_fds: @@ -1194,7 +1216,11 @@ os.close(errpipe_read) if data != "": - _eintr_retry_call(os.waitpid, self.pid, 0) + try: + _eintr_retry_call(os.waitpid, self.pid, 0) + except OSError as e: + if e.errno != errno.ECHILD: + raise child_exception = pickle.loads(data) for fd in (p2cwrite, c2pread, errread): if fd is not None: @@ -1240,7 +1266,15 @@ """Wait for child process to terminate. Returns returncode attribute.""" if self.returncode is None: - pid, sts = _eintr_retry_call(os.waitpid, self.pid, 0) + try: + pid, sts = _eintr_retry_call(os.waitpid, self.pid, 0) + except OSError as e: + if e.errno != errno.ECHILD: + raise + # This happens if SIGCLD is set to be ignored or waiting + # for child processes has otherwise been disabled for our + # process. This child is dead, we can't get the status. 
+ sts = 0 self._handle_exitstatus(sts) return self.returncode @@ -1317,9 +1351,16 @@ for fd, mode in ready: if mode & select.POLLOUT: chunk = input[input_offset : input_offset + _PIPE_BUF] - input_offset += os.write(fd, chunk) - if input_offset >= len(input): - close_unregister_and_remove(fd) + try: + input_offset += os.write(fd, chunk) + except OSError as e: + if e.errno == errno.EPIPE: + close_unregister_and_remove(fd) + else: + raise + else: + if input_offset >= len(input): + close_unregister_and_remove(fd) elif mode & select_POLLIN_POLLPRI: data = os.read(fd, 4096) if not data: @@ -1358,11 +1399,19 @@ if self.stdin in wlist: chunk = input[input_offset : input_offset + _PIPE_BUF] - bytes_written = os.write(self.stdin.fileno(), chunk) - input_offset += bytes_written - if input_offset >= len(input): - self.stdin.close() - write_set.remove(self.stdin) + try: + bytes_written = os.write(self.stdin.fileno(), chunk) + except OSError as e: + if e.errno == errno.EPIPE: + self.stdin.close() + write_set.remove(self.stdin) + else: + raise + else: + input_offset += bytes_written + if input_offset >= len(input): + self.stdin.close() + write_set.remove(self.stdin) if self.stdout in rlist: data = os.read(self.stdout.fileno(), 1024) diff --git a/lib-python/2.7/symbol.py b/lib-python/2.7/symbol.py --- a/lib-python/2.7/symbol.py +++ b/lib-python/2.7/symbol.py @@ -82,20 +82,19 @@ sliceop = 325 exprlist = 326 testlist = 327 -dictmaker = 328 -dictorsetmaker = 329 -classdef = 330 -arglist = 331 -argument = 332 -list_iter = 333 -list_for = 334 -list_if = 335 -comp_iter = 336 -comp_for = 337 -comp_if = 338 -testlist1 = 339 -encoding_decl = 340 -yield_expr = 341 +dictorsetmaker = 328 +classdef = 329 +arglist = 330 +argument = 331 +list_iter = 332 +list_for = 333 +list_if = 334 +comp_iter = 335 +comp_for = 336 +comp_if = 337 +testlist1 = 338 +encoding_decl = 339 +yield_expr = 340 #--end constants-- sym_name = {} diff --git a/lib-python/2.7/sysconfig.py b/lib-python/2.7/sysconfig.py --- a/lib-python/2.7/sysconfig.py +++ b/lib-python/2.7/sysconfig.py @@ -271,7 +271,7 @@ def _get_makefile_filename(): if _PYTHON_BUILD: return os.path.join(_PROJECT_BASE, "Makefile") - return os.path.join(get_path('stdlib'), "config", "Makefile") + return os.path.join(get_path('platstdlib'), "config", "Makefile") def _init_posix(vars): @@ -297,21 +297,6 @@ msg = msg + " (%s)" % e.strerror raise IOError(msg) - # On MacOSX we need to check the setting of the environment variable - # MACOSX_DEPLOYMENT_TARGET: configure bases some choices on it so - # it needs to be compatible. - # If it isn't set we set it to the configure-time value - if sys.platform == 'darwin' and 'MACOSX_DEPLOYMENT_TARGET' in vars: - cfg_target = vars['MACOSX_DEPLOYMENT_TARGET'] - cur_target = os.getenv('MACOSX_DEPLOYMENT_TARGET', '') - if cur_target == '': - cur_target = cfg_target - os.putenv('MACOSX_DEPLOYMENT_TARGET', cfg_target) - elif map(int, cfg_target.split('.')) > map(int, cur_target.split('.')): - msg = ('$MACOSX_DEPLOYMENT_TARGET mismatch: now "%s" but "%s" ' - 'during configure' % (cur_target, cfg_target)) - raise IOError(msg) - # On AIX, there are wrong paths to the linker scripts in the Makefile # -- these paths are relative to the Python source, but when installed # the scripts are in another directory. @@ -616,9 +601,7 @@ # machine is going to compile and link as if it were # MACOSX_DEPLOYMENT_TARGET. 
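The ``subprocess`` hunks above make ``communicate()`` tolerate ``EPIPE`` when the child exits before reading all of its stdin, and ``ECHILD`` when the interpreter cannot wait on its children (for example when ``SIGCLD`` is ignored). A sketch of the first case on a POSIX system (the payload size is arbitrary):

    import subprocess

    # 'true' exits without reading stdin; with the patch the oversized write
    # no longer aborts communicate() with EPIPE.
    p = subprocess.Popen(['true'], stdin=subprocess.PIPE, stdout=subprocess.PIPE)
    out, _ = p.communicate('x' * (1 << 20))
    print p.returncode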
cfgvars = get_config_vars() - macver = os.environ.get('MACOSX_DEPLOYMENT_TARGET') - if not macver: - macver = cfgvars.get('MACOSX_DEPLOYMENT_TARGET') + macver = cfgvars.get('MACOSX_DEPLOYMENT_TARGET') if 1: # Always calculate the release of the running machine, @@ -639,7 +622,6 @@ m = re.search( r'ProductUserVisibleVersion\s*' + r'(.*?)', f.read()) - f.close() if m is not None: macrelease = '.'.join(m.group(1).split('.')[:2]) # else: fall back to the default behaviour diff --git a/lib-python/2.7/tarfile.py b/lib-python/2.7/tarfile.py --- a/lib-python/2.7/tarfile.py +++ b/lib-python/2.7/tarfile.py @@ -2239,10 +2239,14 @@ if hasattr(os, "symlink") and hasattr(os, "link"): # For systems that support symbolic and hard links. if tarinfo.issym(): + if os.path.lexists(targetpath): + os.unlink(targetpath) os.symlink(tarinfo.linkname, targetpath) else: # See extract(). if os.path.exists(tarinfo._link_target): + if os.path.lexists(targetpath): + os.unlink(targetpath) os.link(tarinfo._link_target, targetpath) else: self._extract_member(self._find_link_target(tarinfo), targetpath) diff --git a/lib-python/2.7/telnetlib.py b/lib-python/2.7/telnetlib.py --- a/lib-python/2.7/telnetlib.py +++ b/lib-python/2.7/telnetlib.py @@ -236,7 +236,7 @@ """ if self.debuglevel > 0: - print 'Telnet(%s,%d):' % (self.host, self.port), + print 'Telnet(%s,%s):' % (self.host, self.port), if args: print msg % args else: diff --git a/lib-python/2.7/test/cjkencodings/big5-utf8.txt b/lib-python/2.7/test/cjkencodings/big5-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/big5-utf8.txt @@ -0,0 +1,9 @@ +如何在 Python 中使用既有的 C library? + 在資訊科技快速發展的今天, 開發及測試軟體的速度是不容忽視的 +課題. 為加快開發及測試的速度, 我們便常希望能利用一些已開發好的 +library, 並有一個 fast prototyping 的 programming language 可 +供使用. 目前有許許多多的 library 是以 C 寫成, 而 Python 是一個 +fast prototyping 的 programming language. 故我們希望能將既有的 +C library 拿到 Python 的環境中測試及整合. 其中最主要也是我們所 +要討論的問題就是: + diff --git a/lib-python/2.7/test/cjkencodings/big5.txt b/lib-python/2.7/test/cjkencodings/big5.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/big5.txt @@ -0,0 +1,9 @@ +�p��b Python ���ϥάJ���� C library? +�@�b��T��ާֳt�o�i������, �}�o�δ��ճn�骺�t�׬O���e������ +���D. ���[�ֶ}�o�δ��ժ��t��, �ڭ̫K�`�Ʊ��Q�Τ@�Ǥw�}�o�n�� +library, �æ��@�� fast prototyping �� programming language �i +�Ѩϥ�. �ثe���\�\�h�h�� library �O�H C �g��, �� Python �O�@�� +fast prototyping �� programming language. �G�ڭ̧Ʊ��N�J���� +C library ���� Python �����Ҥ����դξ�X. �䤤�̥D�n�]�O�ڭ̩� +�n�Q�ת����D�N�O: + diff --git a/lib-python/2.7/test/cjkencodings/big5hkscs-utf8.txt b/lib-python/2.7/test/cjkencodings/big5hkscs-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/big5hkscs-utf8.txt @@ -0,0 +1,2 @@ +𠄌Ě鵮罓洆 +ÊÊ̄ê êê̄ diff --git a/lib-python/2.7/test/cjkencodings/big5hkscs.txt b/lib-python/2.7/test/cjkencodings/big5hkscs.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/big5hkscs.txt @@ -0,0 +1,2 @@ +�E�\�s�ڍ� +�f�b�� ���� diff --git a/lib-python/2.7/test/cjkencodings/cp949-utf8.txt b/lib-python/2.7/test/cjkencodings/cp949-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/cp949-utf8.txt @@ -0,0 +1,9 @@ +똠방각하 펲시콜라 + +㉯㉯납!! 因九月패믤릔궈 ⓡⓖ훀¿¿¿ 긍뒙 ⓔ뎨 ㉯. . +亞영ⓔ능횹 . . . . 서울뤄 뎐학乙 家훀 ! ! !ㅠ.ㅠ +흐흐흐 ㄱㄱㄱ☆ㅠ_ㅠ 어릨 탸콰긐 뎌응 칑九들乙 ㉯드긐 +설릌 家훀 . . . . 굴애쉌 ⓔ궈 ⓡ릘㉱긐 因仁川女中까즼 +와쒀훀 ! ! 亞영ⓔ 家능궈 ☆上관 없능궈능 亞능뒈훀 글애듴 +ⓡ려듀九 싀풔숴훀 어릨 因仁川女中싁⑨들앜!! 
㉯㉯납♡ ⌒⌒* + diff --git a/lib-python/2.7/test/cjkencodings/cp949.txt b/lib-python/2.7/test/cjkencodings/cp949.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/cp949.txt @@ -0,0 +1,9 @@ +�c�氢�� �����ݶ� + +������!! �������В�p�� �ި��R������ ���� �ѵ� ��. . +䬿��Ѵ��� . . . . ����� ������ ʫ�R ! ! !��.�� +������ �������٤�_�� � ����O ���� �h������ ����O +���j ʫ�R . . . . ���֚f �ѱ� �ސt�ƒO ���������� +�;��R ! ! 䬿��� ʫ�ɱ� ��߾�� ���ɱŴ� 䬴ɵ��R �۾֊� +�޷����� ��Ǵ���R � ����������Ĩ���!! �������� �ҡ�* + diff --git a/lib-python/2.7/test/cjkencodings/euc_jisx0213-utf8.txt b/lib-python/2.7/test/cjkencodings/euc_jisx0213-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/euc_jisx0213-utf8.txt @@ -0,0 +1,8 @@ +Python の開発は、1990 年ごろから開始されています。 +開発者の Guido van Rossum は教育用のプログラミング言語「ABC」の開発に参加していましたが、ABC は実用上の目的にはあまり適していませんでした。 +このため、Guido はより実用的なプログラミング言語の開発を開始し、英国 BBS 放送のコメディ番組「モンティ パイソン」のファンである Guido はこの言語を「Python」と名づけました。 +このような背景から生まれた Python の言語設計は、「シンプル」で「習得が容易」という目標に重点が置かれています。 +多くのスクリプト系言語ではユーザの目先の利便性を優先して色々な機能を言語要素として取り入れる場合が多いのですが、Python ではそういった小細工が追加されることはあまりありません。 +言語自体の機能は最小限に押さえ、必要な機能は拡張モジュールとして追加する、というのが Python のポリシーです。 + +ノか゚ ト゚ トキ喝塀 𡚴𪎌 麀齁𩛰 diff --git a/lib-python/2.7/test/cjkencodings/euc_jisx0213.txt b/lib-python/2.7/test/cjkencodings/euc_jisx0213.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/euc_jisx0213.txt @@ -0,0 +1,8 @@ +Python �γ�ȯ�ϡ�1990 ǯ�����鳫�Ϥ���Ƥ��ޤ��� +��ȯ�Ԥ� Guido van Rossum �϶����ѤΥץ���ߥ󥰸����ABC�פγ�ȯ�˻��ä��Ƥ��ޤ�������ABC �ϼ��Ѿ����Ū�ˤϤ��ޤ�Ŭ���Ƥ��ޤ���Ǥ����� +���Τ��ᡢGuido �Ϥ�����Ū�ʥץ���ߥ󥰸���γ�ȯ�򳫻Ϥ����ѹ� BBS �����Υ���ǥ����ȡ֥��ƥ� �ѥ�����פΥե���Ǥ��� Guido �Ϥ��θ�����Python�פ�̾�Ť��ޤ����� +���Τ褦���طʤ������ޤ줿 Python �θ����߷פϡ��֥���ץ�פǡֽ������ưספȤ�����ɸ�˽������֤���Ƥ��ޤ��� +¿���Υ�����ץȷϸ���Ǥϥ桼�����������������ͥ�褷�ƿ����ʵ�ǽ��������ǤȤ��Ƽ��������礬¿���ΤǤ�����Python �ǤϤ������ä����ٹ����ɲä���뤳�ȤϤ��ޤꤢ��ޤ��� +���켫�Τε�ǽ�ϺǾ��¤˲�������ɬ�פʵ�ǽ�ϳ�ĥ�⥸�塼��Ȥ����ɲä��롢�Ȥ����Τ� Python �Υݥꥷ���Ǥ��� + +�Τ� �� �ȥ����� ���� ��ԏ���� diff --git a/lib-python/2.7/test/cjkencodings/euc_jp-utf8.txt b/lib-python/2.7/test/cjkencodings/euc_jp-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/euc_jp-utf8.txt @@ -0,0 +1,7 @@ +Python の開発は、1990 年ごろから開始されています。 +開発者の Guido van Rossum は教育用のプログラミング言語「ABC」の開発に参加していましたが、ABC は実用上の目的にはあまり適していませんでした。 +このため、Guido はより実用的なプログラミング言語の開発を開始し、英国 BBS 放送のコメディ番組「モンティ パイソン」のファンである Guido はこの言語を「Python」と名づけました。 +このような背景から生まれた Python の言語設計は、「シンプル」で「習得が容易」という目標に重点が置かれています。 +多くのスクリプト系言語ではユーザの目先の利便性を優先して色々な機能を言語要素として取り入れる場合が多いのですが、Python ではそういった小細工が追加されることはあまりありません。 +言語自体の機能は最小限に押さえ、必要な機能は拡張モジュールとして追加する、というのが Python のポリシーです。 + diff --git a/lib-python/2.7/test/cjkencodings/euc_jp.txt b/lib-python/2.7/test/cjkencodings/euc_jp.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/euc_jp.txt @@ -0,0 +1,7 @@ +Python �γ�ȯ�ϡ�1990 ǯ�����鳫�Ϥ���Ƥ��ޤ��� +��ȯ�Ԥ� Guido van Rossum �϶����ѤΥץ���ߥ󥰸����ABC�פγ�ȯ�˻��ä��Ƥ��ޤ�������ABC �ϼ��Ѿ����Ū�ˤϤ��ޤ�Ŭ���Ƥ��ޤ���Ǥ����� +���Τ��ᡢGuido �Ϥ�����Ū�ʥץ���ߥ󥰸���γ�ȯ�򳫻Ϥ����ѹ� BBS �����Υ���ǥ����ȡ֥��ƥ� �ѥ�����פΥե���Ǥ��� Guido �Ϥ��θ�����Python�פ�̾�Ť��ޤ����� +���Τ褦���طʤ������ޤ줿 Python �θ����߷פϡ��֥���ץ�פǡֽ������ưספȤ�����ɸ�˽������֤���Ƥ��ޤ��� +¿���Υ�����ץȷϸ���Ǥϥ桼�����������������ͥ�褷�ƿ����ʵ�ǽ��������ǤȤ��Ƽ��������礬¿���ΤǤ�����Python �ǤϤ������ä����ٹ����ɲä���뤳�ȤϤ��ޤꤢ��ޤ��� +���켫�Τε�ǽ�ϺǾ��¤˲�������ɬ�פʵ�ǽ�ϳ�ĥ�⥸�塼��Ȥ����ɲä��롢�Ȥ����Τ� Python �Υݥꥷ���Ǥ��� + diff --git a/lib-python/2.7/test/cjkencodings/euc_kr-utf8.txt 
b/lib-python/2.7/test/cjkencodings/euc_kr-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/euc_kr-utf8.txt @@ -0,0 +1,7 @@ +◎ 파이썬(Python)은 배우기 쉽고, 강력한 프로그래밍 언어입니다. 파이썬은 +효율적인 고수준 데이터 구조와 간단하지만 효율적인 객체지향프로그래밍을 +지원합니다. 파이썬의 우아(優雅)한 문법과 동적 타이핑, 그리고 인터프리팅 +환경은 파이썬을 스크립팅과 여러 분야에서와 대부분의 플랫폼에서의 빠른 +애플리케이션 개발을 할 수 있는 이상적인 언어로 만들어줍니다. + +☆첫가끝: 날아라 쓔쓔쓩~ 닁큼! 뜽금없이 전홥니다. 뷁. 그런거 읎다. diff --git a/lib-python/2.7/test/cjkencodings/euc_kr.txt b/lib-python/2.7/test/cjkencodings/euc_kr.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/euc_kr.txt @@ -0,0 +1,7 @@ +�� ���̽�(Python)�� ���� ����, ������ ���α׷��� ����Դϴ�. ���̽��� +ȿ������ ����� ������ ������ ���������� ȿ������ ��ü�������α׷����� +�����մϴ�. ���̽��� ���(���)�� ������ ���� Ÿ����, �׸��� ���������� +ȯ���� ���̽��� ��ũ���ð� ���� �о߿����� ��κ��� �÷��������� ���� +���ø����̼� ������ �� �� �ִ� �̻����� ���� ������ݴϴ�. + +��ù����: ���ƶ� �Ԥ��ФԤԤ��ФԾ�~ �Ԥ��Ҥ�ŭ! �Ԥ��Ѥ��ݾ��� ���Ԥ��Ȥ��ϴ�. �Ԥ��Τ�. �׷��� �Ԥ��Ѥ���. diff --git a/lib-python/2.7/test/cjkencodings/gb18030-utf8.txt b/lib-python/2.7/test/cjkencodings/gb18030-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/gb18030-utf8.txt @@ -0,0 +1,15 @@ +Python(派森)语言是一种功能强大而完善的通用型计算机程序设计语言, +已经具有十多年的发展历史,成熟且稳定。这种语言具有非常简捷而清晰 +的语法特点,适合完成各种高层任务,几乎可以在所有的操作系统中 +运行。这种语言简单而强大,适合各种人士学习使用。目前,基于这 +种语言的相关技术正在飞速的发展,用户数量急剧扩大,相关的资源非常多。 +如何在 Python 中使用既有的 C library? + 在資訊科技快速發展的今天, 開發及測試軟體的速度是不容忽視的 +課題. 為加快開發及測試的速度, 我們便常希望能利用一些已開發好的 +library, 並有一個 fast prototyping 的 programming language 可 +供使用. 目前有許許多多的 library 是以 C 寫成, 而 Python 是一個 +fast prototyping 的 programming language. 故我們希望能將既有的 +C library 拿到 Python 的環境中測試及整合. 其中最主要也是我們所 +要討論的問題就是: +파이썬은 강력한 기능을 지닌 범용 컴퓨터 프로그래밍 언어다. + diff --git a/lib-python/2.7/test/cjkencodings/gb18030.txt b/lib-python/2.7/test/cjkencodings/gb18030.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/gb18030.txt @@ -0,0 +1,15 @@ +Python����ɭ��������һ�ֹ���ǿ������Ƶ�ͨ���ͼ��������������ԣ� +�Ѿ�����ʮ����ķ�չ��ʷ���������ȶ����������Ծ��зdz���ݶ����� +���﷨�ص㣬�ʺ���ɸ��ָ߲����񣬼������������еIJ���ϵͳ�� +���С��������Լ򵥶�ǿ���ʺϸ�����ʿѧϰʹ�á�Ŀǰ�������� +�����Ե���ؼ������ڷ��ٵķ�չ���û���������������ص���Դ�dz��ࡣ +����� Python ��ʹ�ü��е� C library? +�����YӍ�Ƽ����ٰlչ�Ľ���, �_�l���yԇܛ�w���ٶ��Dz��ݺ�ҕ�� +�n�}. ��ӿ��_�l���yԇ���ٶ�, �҂��㳣ϣ��������һЩ���_�l�õ� +library, �K��һ�� fast prototyping �� programming language �� +��ʹ��. Ŀǰ���S�S���� library ���� C ����, �� Python ��һ�� +fast prototyping �� programming language. ���҂�ϣ���܌����е� +C library �õ� Python �ĭh���Мyԇ������. ��������ҪҲ���҂��� +ҪӑՓ�Ć��}����: +�5�1�3�3�2�1�3�1 �7�6�0�4�6�3 �8�5�8�6�3�5 �3�1�9�5 �0�9�3�0 �4�3�5�7�5�5 �5�5�0�9�8�9�9�3�0�4 �2�9�2�5�9�9. 
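Each legacy-encoded fixture added here has a ``*-utf8.txt`` companion holding the same text, so the CJK codec tests can compare decodings directly; roughly, assuming the current directory is ``lib-python/2.7/test/cjkencodings``:

    # sketch of the round-trip these fixture pairs are meant to support
    legacy = open('gb18030.txt').read().decode('gb18030')
    utf8 = open('gb18030-utf8.txt').read().decode('utf-8')
    print legacy == utf8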
+ diff --git a/lib-python/2.7/test/cjkencodings/gb2312-utf8.txt b/lib-python/2.7/test/cjkencodings/gb2312-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/gb2312-utf8.txt @@ -0,0 +1,6 @@ +Python(派森)语言是一种功能强大而完善的通用型计算机程序设计语言, +已经具有十多年的发展历史,成熟且稳定。这种语言具有非常简捷而清晰 +的语法特点,适合完成各种高层任务,几乎可以在所有的操作系统中 +运行。这种语言简单而强大,适合各种人士学习使用。目前,基于这 +种语言的相关技术正在飞速的发展,用户数量急剧扩大,相关的资源非常多。 + diff --git a/lib-python/2.7/test/cjkencodings/gb2312.txt b/lib-python/2.7/test/cjkencodings/gb2312.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/gb2312.txt @@ -0,0 +1,6 @@ +Python����ɭ��������һ�ֹ���ǿ������Ƶ�ͨ���ͼ��������������ԣ� +�Ѿ�����ʮ����ķ�չ��ʷ���������ȶ����������Ծ��зdz���ݶ����� +���﷨�ص㣬�ʺ���ɸ��ָ߲����񣬼������������еIJ���ϵͳ�� +���С��������Լ򵥶�ǿ���ʺϸ�����ʿѧϰʹ�á�Ŀǰ�������� +�����Ե���ؼ������ڷ��ٵķ�չ���û���������������ص���Դ�dz��ࡣ + diff --git a/lib-python/2.7/test/cjkencodings/gbk-utf8.txt b/lib-python/2.7/test/cjkencodings/gbk-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/gbk-utf8.txt @@ -0,0 +1,14 @@ +Python(派森)语言是一种功能强大而完善的通用型计算机程序设计语言, +已经具有十多年的发展历史,成熟且稳定。这种语言具有非常简捷而清晰 +的语法特点,适合完成各种高层任务,几乎可以在所有的操作系统中 +运行。这种语言简单而强大,适合各种人士学习使用。目前,基于这 +种语言的相关技术正在飞速的发展,用户数量急剧扩大,相关的资源非常多。 +如何在 Python 中使用既有的 C library? + 在資訊科技快速發展的今天, 開發及測試軟體的速度是不容忽視的 +課題. 為加快開發及測試的速度, 我們便常希望能利用一些已開發好的 +library, 並有一個 fast prototyping 的 programming language 可 +供使用. 目前有許許多多的 library 是以 C 寫成, 而 Python 是一個 +fast prototyping 的 programming language. 故我們希望能將既有的 +C library 拿到 Python 的環境中測試及整合. 其中最主要也是我們所 +要討論的問題就是: + diff --git a/lib-python/2.7/test/cjkencodings/gbk.txt b/lib-python/2.7/test/cjkencodings/gbk.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/gbk.txt @@ -0,0 +1,14 @@ +Python����ɭ��������һ�ֹ���ǿ������Ƶ�ͨ���ͼ��������������ԣ� +�Ѿ�����ʮ����ķ�չ��ʷ���������ȶ����������Ծ��зdz���ݶ����� +���﷨�ص㣬�ʺ���ɸ��ָ߲����񣬼������������еIJ���ϵͳ�� +���С��������Լ򵥶�ǿ���ʺϸ�����ʿѧϰʹ�á�Ŀǰ�������� +�����Ե���ؼ������ڷ��ٵķ�չ���û���������������ص���Դ�dz��ࡣ +����� Python ��ʹ�ü��е� C library? +�����YӍ�Ƽ����ٰlչ�Ľ���, �_�l���yԇܛ�w���ٶ��Dz��ݺ�ҕ�� +�n�}. ��ӿ��_�l���yԇ���ٶ�, �҂��㳣ϣ��������һЩ���_�l�õ� +library, �K��һ�� fast prototyping �� programming language �� +��ʹ��. Ŀǰ���S�S���� library ���� C ����, �� Python ��һ�� +fast prototyping �� programming language. ���҂�ϣ���܌����е� +C library �õ� Python �ĭh���Мyԇ������. ��������ҪҲ���҂��� +ҪӑՓ�Ć��}����: + diff --git a/lib-python/2.7/test/cjkencodings/hz-utf8.txt b/lib-python/2.7/test/cjkencodings/hz-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/hz-utf8.txt @@ -0,0 +1,2 @@ +This sentence is in ASCII. +The next sentence is in GB.己所不欲,勿施於人。Bye. diff --git a/lib-python/2.7/test/cjkencodings/hz.txt b/lib-python/2.7/test/cjkencodings/hz.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/hz.txt @@ -0,0 +1,2 @@ +This sentence is in ASCII. +The next sentence is in GB.~{<:Ky2;S{#,NpJ)l6HK!#~}Bye. diff --git a/lib-python/2.7/test/cjkencodings/johab-utf8.txt b/lib-python/2.7/test/cjkencodings/johab-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/johab-utf8.txt @@ -0,0 +1,9 @@ +똠방각하 펲시콜라 + +㉯㉯납!! 因九月패믤릔궈 ⓡⓖ훀¿¿¿ 긍뒙 ⓔ뎨 ㉯. . +亞영ⓔ능횹 . . . . 서울뤄 뎐학乙 家훀 ! ! !ㅠ.ㅠ +흐흐흐 ㄱㄱㄱ☆ㅠ_ㅠ 어릨 탸콰긐 뎌응 칑九들乙 ㉯드긐 +설릌 家훀 . . . . 굴애쉌 ⓔ궈 ⓡ릘㉱긐 因仁川女中까즼 +와쒀훀 ! ! 亞영ⓔ 家능궈 ☆上관 없능궈능 亞능뒈훀 글애듴 +ⓡ려듀九 싀풔숴훀 어릨 因仁川女中싁⑨들앜!! 
㉯㉯납♡ ⌒⌒* + diff --git a/lib-python/2.7/test/cjkencodings/johab.txt b/lib-python/2.7/test/cjkencodings/johab.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/johab.txt @@ -0,0 +1,9 @@ +���w�b�a �\��ũ�a + +�����s!! �g��Ú������ �����zٯٯٯ �w�� �ѕ� ��. . +�<�w�ѓw�s . . . . �ᶉ�� �e�b�� �;�z ! ! !�A.�A +�a�a�a �A�A�A�i�A_�A �៚ ȡ���z �a�w ×✗i�� ���a�z +��z �;�z . . . . ������ �ъ� �ޟ��‹z �g�b�I����a�� +�����z ! ! �<�w�� �;�w�� �i꾉� ���w���w �<�w���z �i���z +�ޝa�A� ��Ρ���z �៚ �g�b�I���鯂��i�z!! �����sٽ �b�b* + diff --git a/lib-python/2.7/test/cjkencodings/shift_jis-utf8.txt b/lib-python/2.7/test/cjkencodings/shift_jis-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/shift_jis-utf8.txt @@ -0,0 +1,7 @@ +Python の開発は、1990 年ごろから開始されています。 +開発者の Guido van Rossum は教育用のプログラミング言語「ABC」の開発に参加していましたが、ABC は実用上の目的にはあまり適していませんでした。 +このため、Guido はより実用的なプログラミング言語の開発を開始し、英国 BBS 放送のコメディ番組「モンティ パイソン」のファンである Guido はこの言語を「Python」と名づけました。 +このような背景から生まれた Python の言語設計は、「シンプル」で「習得が容易」という目標に重点が置かれています。 +多くのスクリプト系言語ではユーザの目先の利便性を優先して色々な機能を言語要素として取り入れる場合が多いのですが、Python ではそういった小細工が追加されることはあまりありません。 +言語自体の機能は最小限に押さえ、必要な機能は拡張モジュールとして追加する、というのが Python のポリシーです。 + diff --git a/lib-python/2.7/test/cjkencodings/shift_jis.txt b/lib-python/2.7/test/cjkencodings/shift_jis.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/shift_jis.txt @@ -0,0 +1,7 @@ +Python �̊J���́A1990 �N���납��J�n����Ă��܂��B +�J���҂� Guido van Rossum �͋���p�̃v���O���~���O����uABC�v�̊J���ɎQ�����Ă��܂������AABC �͎��p��̖ړI�ɂ͂��܂�K���Ă��܂���ł����B +���̂��߁AGuido �͂����p�I�ȃv���O���~���O����̊J�����J�n���A�p�� BBS �����̃R���f�B�ԑg�u�����e�B �p�C�\���v�̃t�@���ł��� Guido �͂��̌�����uPython�v�Ɩ��Â��܂����B +���̂悤�Ȕw�i���琶�܂ꂽ Python �̌���݌v�́A�u�V���v���v�Łu�K�����e�Ձv�Ƃ����ڕW�ɏd�_���u����Ă��܂��B +�����̃X�N���v�g�n����ł̓��[�U�̖ڐ�̗��֐���D�悵�ĐF�X�ȋ@�\������v�f�Ƃ��Ď������ꍇ�������̂ł����APython �ł͂������������׍H���lj�����邱�Ƃ͂��܂肠��܂���B +���ꎩ�̂̋@�\�͍ŏ����ɉ������A�K�v�ȋ@�\�͊g�����W���[���Ƃ��Ēlj�����A�Ƃ����̂� Python �̃|���V�[�ł��B + diff --git a/lib-python/2.7/test/cjkencodings/shift_jisx0213-utf8.txt b/lib-python/2.7/test/cjkencodings/shift_jisx0213-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/shift_jisx0213-utf8.txt @@ -0,0 +1,8 @@ +Python の開発は、1990 年ごろから開始されています。 +開発者の Guido van Rossum は教育用のプログラミング言語「ABC」の開発に参加していましたが、ABC は実用上の目的にはあまり適していませんでした。 +このため、Guido はより実用的なプログラミング言語の開発を開始し、英国 BBS 放送のコメディ番組「モンティ パイソン」のファンである Guido はこの言語を「Python」と名づけました。 +このような背景から生まれた Python の言語設計は、「シンプル」で「習得が容易」という目標に重点が置かれています。 +多くのスクリプト系言語ではユーザの目先の利便性を優先して色々な機能を言語要素として取り入れる場合が多いのですが、Python ではそういった小細工が追加されることはあまりありません。 +言語自体の機能は最小限に押さえ、必要な機能は拡張モジュールとして追加する、というのが Python のポリシーです。 + +ノか゚ ト゚ トキ喝塀 𡚴𪎌 麀齁𩛰 diff --git a/lib-python/2.7/test/cjkencodings/shift_jisx0213.txt b/lib-python/2.7/test/cjkencodings/shift_jisx0213.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/shift_jisx0213.txt @@ -0,0 +1,8 @@ +Python �̊J���́A1990 �N���납��J�n����Ă��܂��B +�J���҂� Guido van Rossum �͋���p�̃v���O���~���O����uABC�v�̊J���ɎQ�����Ă��܂������AABC �͎��p��̖ړI�ɂ͂��܂�K���Ă��܂���ł����B +���̂��߁AGuido �͂����p�I�ȃv���O���~���O����̊J�����J�n���A�p�� BBS �����̃R���f�B�ԑg�u�����e�B �p�C�\���v�̃t�@���ł��� Guido �͂��̌�����uPython�v�Ɩ��Â��܂����B +���̂悤�Ȕw�i���琶�܂ꂽ Python �̌���݌v�́A�u�V���v���v�Łu�K�����e�Ձv�Ƃ����ڕW�ɏd�_���u����Ă��܂��B +�����̃X�N���v�g�n����ł̓��[�U�̖ڐ�̗��֐���D�悵�ĐF�X�ȋ@�\������v�f�Ƃ��Ď������ꍇ�������̂ł����APython �ł͂������������׍H���lj�����邱�Ƃ͂��܂肠��܂���B 
+���ꎩ�̂̋@�\�͍ŏ����ɉ������A�K�v�ȋ@�\�͊g�����W���[���Ƃ��Ēlj�����A�Ƃ����̂� Python �̃|���V�[�ł��B + +�m�� �� �g�L�K�y ���� ������ diff --git a/lib-python/2.7/test/cjkencodings_test.py b/lib-python/2.7/test/cjkencodings_test.py deleted file mode 100644 --- a/lib-python/2.7/test/cjkencodings_test.py +++ /dev/null @@ -1,1019 +0,0 @@ -teststring = { -'big5': ( -"\xa6\x70\xa6\xf3\xa6\x62\x20\x50\x79\x74\x68\x6f\x6e\x20\xa4\xa4" -"\xa8\xcf\xa5\xce\xac\x4a\xa6\xb3\xaa\xba\x20\x43\x20\x6c\x69\x62" -"\x72\x61\x72\x79\x3f\x0a\xa1\x40\xa6\x62\xb8\xea\xb0\x54\xac\xec" -"\xa7\xde\xa7\xd6\xb3\x74\xb5\x6f\xae\x69\xaa\xba\xa4\xb5\xa4\xd1" -"\x2c\x20\xb6\x7d\xb5\x6f\xa4\xce\xb4\xfa\xb8\xd5\xb3\x6e\xc5\xe9" -"\xaa\xba\xb3\x74\xab\xd7\xac\x4f\xa4\xa3\xae\x65\xa9\xbf\xb5\xf8" -"\xaa\xba\x0a\xbd\xd2\xc3\x44\x2e\x20\xac\xb0\xa5\x5b\xa7\xd6\xb6" -"\x7d\xb5\x6f\xa4\xce\xb4\xfa\xb8\xd5\xaa\xba\xb3\x74\xab\xd7\x2c" -"\x20\xa7\xda\xad\xcc\xab\x4b\xb1\x60\xa7\xc6\xb1\xe6\xaf\xe0\xa7" -"\x51\xa5\xce\xa4\x40\xa8\xc7\xa4\x77\xb6\x7d\xb5\x6f\xa6\x6e\xaa" -"\xba\x0a\x6c\x69\x62\x72\x61\x72\x79\x2c\x20\xa8\xc3\xa6\xb3\xa4" -"\x40\xad\xd3\x20\x66\x61\x73\x74\x20\x70\x72\x6f\x74\x6f\x74\x79" -"\x70\x69\x6e\x67\x20\xaa\xba\x20\x70\x72\x6f\x67\x72\x61\x6d\x6d" -"\x69\x6e\x67\x20\x6c\x61\x6e\x67\x75\x61\x67\x65\x20\xa5\x69\x0a" -"\xa8\xd1\xa8\xcf\xa5\xce\x2e\x20\xa5\xd8\xab\x65\xa6\xb3\xb3\x5c" -"\xb3\x5c\xa6\x68\xa6\x68\xaa\xba\x20\x6c\x69\x62\x72\x61\x72\x79" -"\x20\xac\x4f\xa5\x48\x20\x43\x20\xbc\x67\xa6\xa8\x2c\x20\xa6\xd3" -"\x20\x50\x79\x74\x68\x6f\x6e\x20\xac\x4f\xa4\x40\xad\xd3\x0a\x66" -"\x61\x73\x74\x20\x70\x72\x6f\x74\x6f\x74\x79\x70\x69\x6e\x67\x20" -"\xaa\xba\x20\x70\x72\x6f\x67\x72\x61\x6d\x6d\x69\x6e\x67\x20\x6c" -"\x61\x6e\x67\x75\x61\x67\x65\x2e\x20\xac\x47\xa7\xda\xad\xcc\xa7" -"\xc6\xb1\xe6\xaf\xe0\xb1\x4e\xac\x4a\xa6\xb3\xaa\xba\x0a\x43\x20" -"\x6c\x69\x62\x72\x61\x72\x79\x20\xae\xb3\xa8\xec\x20\x50\x79\x74" -"\x68\x6f\x6e\x20\xaa\xba\xc0\xf4\xb9\xd2\xa4\xa4\xb4\xfa\xb8\xd5" -"\xa4\xce\xbe\xe3\xa6\x58\x2e\x20\xa8\xe4\xa4\xa4\xb3\xcc\xa5\x44" -"\xad\x6e\xa4\x5d\xac\x4f\xa7\xda\xad\xcc\xa9\xd2\x0a\xad\x6e\xb0" -"\x51\xbd\xd7\xaa\xba\xb0\xdd\xc3\x44\xb4\x4e\xac\x4f\x3a\x0a\x0a", -"\xe5\xa6\x82\xe4\xbd\x95\xe5\x9c\xa8\x20\x50\x79\x74\x68\x6f\x6e" -"\x20\xe4\xb8\xad\xe4\xbd\xbf\xe7\x94\xa8\xe6\x97\xa2\xe6\x9c\x89" -"\xe7\x9a\x84\x20\x43\x20\x6c\x69\x62\x72\x61\x72\x79\x3f\x0a\xe3" -"\x80\x80\xe5\x9c\xa8\xe8\xb3\x87\xe8\xa8\x8a\xe7\xa7\x91\xe6\x8a" -"\x80\xe5\xbf\xab\xe9\x80\x9f\xe7\x99\xbc\xe5\xb1\x95\xe7\x9a\x84" -"\xe4\xbb\x8a\xe5\xa4\xa9\x2c\x20\xe9\x96\x8b\xe7\x99\xbc\xe5\x8f" -"\x8a\xe6\xb8\xac\xe8\xa9\xa6\xe8\xbb\x9f\xe9\xab\x94\xe7\x9a\x84" -"\xe9\x80\x9f\xe5\xba\xa6\xe6\x98\xaf\xe4\xb8\x8d\xe5\xae\xb9\xe5" -"\xbf\xbd\xe8\xa6\x96\xe7\x9a\x84\x0a\xe8\xaa\xb2\xe9\xa1\x8c\x2e" -"\x20\xe7\x82\xba\xe5\x8a\xa0\xe5\xbf\xab\xe9\x96\x8b\xe7\x99\xbc" -"\xe5\x8f\x8a\xe6\xb8\xac\xe8\xa9\xa6\xe7\x9a\x84\xe9\x80\x9f\xe5" -"\xba\xa6\x2c\x20\xe6\x88\x91\xe5\x80\x91\xe4\xbe\xbf\xe5\xb8\xb8" -"\xe5\xb8\x8c\xe6\x9c\x9b\xe8\x83\xbd\xe5\x88\xa9\xe7\x94\xa8\xe4" -"\xb8\x80\xe4\xba\x9b\xe5\xb7\xb2\xe9\x96\x8b\xe7\x99\xbc\xe5\xa5" -"\xbd\xe7\x9a\x84\x0a\x6c\x69\x62\x72\x61\x72\x79\x2c\x20\xe4\xb8" -"\xa6\xe6\x9c\x89\xe4\xb8\x80\xe5\x80\x8b\x20\x66\x61\x73\x74\x20" -"\x70\x72\x6f\x74\x6f\x74\x79\x70\x69\x6e\x67\x20\xe7\x9a\x84\x20" -"\x70\x72\x6f\x67\x72\x61\x6d\x6d\x69\x6e\x67\x20\x6c\x61\x6e\x67" -"\x75\x61\x67\x65\x20\xe5\x8f\xaf\x0a\xe4\xbe\x9b\xe4\xbd\xbf\xe7" -"\x94\xa8\x2e\x20\xe7\x9b\xae\xe5\x89\x8d\xe6\x9c\x89\xe8\xa8\xb1" 
-"\xe8\xa8\xb1\xe5\xa4\x9a\xe5\xa4\x9a\xe7\x9a\x84\x20\x6c\x69\x62" -"\x72\x61\x72\x79\x20\xe6\x98\xaf\xe4\xbb\xa5\x20\x43\x20\xe5\xaf" -"\xab\xe6\x88\x90\x2c\x20\xe8\x80\x8c\x20\x50\x79\x74\x68\x6f\x6e" -"\x20\xe6\x98\xaf\xe4\xb8\x80\xe5\x80\x8b\x0a\x66\x61\x73\x74\x20" -"\x70\x72\x6f\x74\x6f\x74\x79\x70\x69\x6e\x67\x20\xe7\x9a\x84\x20" -"\x70\x72\x6f\x67\x72\x61\x6d\x6d\x69\x6e\x67\x20\x6c\x61\x6e\x67" -"\x75\x61\x67\x65\x2e\x20\xe6\x95\x85\xe6\x88\x91\xe5\x80\x91\xe5" -"\xb8\x8c\xe6\x9c\x9b\xe8\x83\xbd\xe5\xb0\x87\xe6\x97\xa2\xe6\x9c" -"\x89\xe7\x9a\x84\x0a\x43\x20\x6c\x69\x62\x72\x61\x72\x79\x20\xe6" -"\x8b\xbf\xe5\x88\xb0\x20\x50\x79\x74\x68\x6f\x6e\x20\xe7\x9a\x84" -"\xe7\x92\xb0\xe5\xa2\x83\xe4\xb8\xad\xe6\xb8\xac\xe8\xa9\xa6\xe5" -"\x8f\x8a\xe6\x95\xb4\xe5\x90\x88\x2e\x20\xe5\x85\xb6\xe4\xb8\xad" -"\xe6\x9c\x80\xe4\xb8\xbb\xe8\xa6\x81\xe4\xb9\x9f\xe6\x98\xaf\xe6" -"\x88\x91\xe5\x80\x91\xe6\x89\x80\x0a\xe8\xa6\x81\xe8\xa8\x8e\xe8" -"\xab\x96\xe7\x9a\x84\xe5\x95\x8f\xe9\xa1\x8c\xe5\xb0\xb1\xe6\x98" -"\xaf\x3a\x0a\x0a"), -'big5hkscs': ( -"\x88\x45\x88\x5c\x8a\x73\x8b\xda\x8d\xd8\x0a\x88\x66\x88\x62\x88" -"\xa7\x20\x88\xa7\x88\xa3\x0a", -"\xf0\xa0\x84\x8c\xc4\x9a\xe9\xb5\xae\xe7\xbd\x93\xe6\xb4\x86\x0a" -"\xc3\x8a\xc3\x8a\xcc\x84\xc3\xaa\x20\xc3\xaa\xc3\xaa\xcc\x84\x0a"), -'cp949': ( -"\x8c\x63\xb9\xe6\xb0\xa2\xc7\xcf\x20\xbc\x84\xbd\xc3\xc4\xdd\xb6" -"\xf3\x0a\x0a\xa8\xc0\xa8\xc0\xb3\xb3\x21\x21\x20\xec\xd7\xce\xfa" -"\xea\xc5\xc6\xd0\x92\xe6\x90\x70\xb1\xc5\x20\xa8\xde\xa8\xd3\xc4" -"\x52\xa2\xaf\xa2\xaf\xa2\xaf\x20\xb1\xe0\x8a\x96\x20\xa8\xd1\xb5" -"\xb3\x20\xa8\xc0\x2e\x20\x2e\x0a\xe4\xac\xbf\xb5\xa8\xd1\xb4\xc9" -"\xc8\xc2\x20\x2e\x20\x2e\x20\x2e\x20\x2e\x20\xbc\xad\xbf\xef\xb7" -"\xef\x20\xb5\xaf\xc7\xd0\xeb\xe0\x20\xca\xab\xc4\x52\x20\x21\x20" -"\x21\x20\x21\xa4\xd0\x2e\xa4\xd0\x0a\xc8\xe5\xc8\xe5\xc8\xe5\x20" -"\xa4\xa1\xa4\xa1\xa4\xa1\xa1\xd9\xa4\xd0\x5f\xa4\xd0\x20\xbe\xee" -"\x90\x8a\x20\xc5\xcb\xc4\xe2\x83\x4f\x20\xb5\xae\xc0\xc0\x20\xaf" -"\x68\xce\xfa\xb5\xe9\xeb\xe0\x20\xa8\xc0\xb5\xe5\x83\x4f\x0a\xbc" -"\xb3\x90\x6a\x20\xca\xab\xc4\x52\x20\x2e\x20\x2e\x20\x2e\x20\x2e" -"\x20\xb1\xbc\xbe\xd6\x9a\x66\x20\xa8\xd1\xb1\xc5\x20\xa8\xde\x90" -"\x74\xa8\xc2\x83\x4f\x20\xec\xd7\xec\xd2\xf4\xb9\xe5\xfc\xf1\xe9" -"\xb1\xee\xa3\x8e\x0a\xbf\xcd\xbe\xac\xc4\x52\x20\x21\x20\x21\x20" -"\xe4\xac\xbf\xb5\xa8\xd1\x20\xca\xab\xb4\xc9\xb1\xc5\x20\xa1\xd9" -"\xdf\xbe\xb0\xfc\x20\xbe\xf8\xb4\xc9\xb1\xc5\xb4\xc9\x20\xe4\xac" -"\xb4\xc9\xb5\xd8\xc4\x52\x20\xb1\xdb\xbe\xd6\x8a\xdb\x0a\xa8\xde" -"\xb7\xc1\xb5\xe0\xce\xfa\x20\x9a\xc3\xc7\xb4\xbd\xa4\xc4\x52\x20" -"\xbe\xee\x90\x8a\x20\xec\xd7\xec\xd2\xf4\xb9\xe5\xfc\xf1\xe9\x9a" -"\xc4\xa8\xef\xb5\xe9\x9d\xda\x21\x21\x20\xa8\xc0\xa8\xc0\xb3\xb3" -"\xa2\xbd\x20\xa1\xd2\xa1\xd2\x2a\x0a\x0a", -"\xeb\x98\xa0\xeb\xb0\xa9\xea\xb0\x81\xed\x95\x98\x20\xed\x8e\xb2" -"\xec\x8b\x9c\xec\xbd\x9c\xeb\x9d\xbc\x0a\x0a\xe3\x89\xaf\xe3\x89" -"\xaf\xeb\x82\xa9\x21\x21\x20\xe5\x9b\xa0\xe4\xb9\x9d\xe6\x9c\x88" -"\xed\x8c\xa8\xeb\xaf\xa4\xeb\xa6\x94\xea\xb6\x88\x20\xe2\x93\xa1" -"\xe2\x93\x96\xed\x9b\x80\xc2\xbf\xc2\xbf\xc2\xbf\x20\xea\xb8\x8d" -"\xeb\x92\x99\x20\xe2\x93\x94\xeb\x8e\xa8\x20\xe3\x89\xaf\x2e\x20" -"\x2e\x0a\xe4\xba\x9e\xec\x98\x81\xe2\x93\x94\xeb\x8a\xa5\xed\x9a" -"\xb9\x20\x2e\x20\x2e\x20\x2e\x20\x2e\x20\xec\x84\x9c\xec\x9a\xb8" -"\xeb\xa4\x84\x20\xeb\x8e\x90\xed\x95\x99\xe4\xb9\x99\x20\xe5\xae" -"\xb6\xed\x9b\x80\x20\x21\x20\x21\x20\x21\xe3\x85\xa0\x2e\xe3\x85" -"\xa0\x0a\xed\x9d\x90\xed\x9d\x90\xed\x9d\x90\x20\xe3\x84\xb1\xe3" 
-"\x84\xb1\xe3\x84\xb1\xe2\x98\x86\xe3\x85\xa0\x5f\xe3\x85\xa0\x20" -"\xec\x96\xb4\xeb\xa6\xa8\x20\xed\x83\xb8\xec\xbd\xb0\xea\xb8\x90" -"\x20\xeb\x8e\x8c\xec\x9d\x91\x20\xec\xb9\x91\xe4\xb9\x9d\xeb\x93" -"\xa4\xe4\xb9\x99\x20\xe3\x89\xaf\xeb\x93\x9c\xea\xb8\x90\x0a\xec" -"\x84\xa4\xeb\xa6\x8c\x20\xe5\xae\xb6\xed\x9b\x80\x20\x2e\x20\x2e" -"\x20\x2e\x20\x2e\x20\xea\xb5\xb4\xec\x95\xa0\xec\x89\x8c\x20\xe2" -"\x93\x94\xea\xb6\x88\x20\xe2\x93\xa1\xeb\xa6\x98\xe3\x89\xb1\xea" -"\xb8\x90\x20\xe5\x9b\xa0\xe4\xbb\x81\xe5\xb7\x9d\xef\xa6\x81\xe4" -"\xb8\xad\xea\xb9\x8c\xec\xa6\xbc\x0a\xec\x99\x80\xec\x92\x80\xed" -"\x9b\x80\x20\x21\x20\x21\x20\xe4\xba\x9e\xec\x98\x81\xe2\x93\x94" -"\x20\xe5\xae\xb6\xeb\x8a\xa5\xea\xb6\x88\x20\xe2\x98\x86\xe4\xb8" -"\x8a\xea\xb4\x80\x20\xec\x97\x86\xeb\x8a\xa5\xea\xb6\x88\xeb\x8a" -"\xa5\x20\xe4\xba\x9e\xeb\x8a\xa5\xeb\x92\x88\xed\x9b\x80\x20\xea" -"\xb8\x80\xec\x95\xa0\xeb\x93\xb4\x0a\xe2\x93\xa1\xeb\xa0\xa4\xeb" -"\x93\x80\xe4\xb9\x9d\x20\xec\x8b\x80\xed\x92\x94\xec\x88\xb4\xed" -"\x9b\x80\x20\xec\x96\xb4\xeb\xa6\xa8\x20\xe5\x9b\xa0\xe4\xbb\x81" -"\xe5\xb7\x9d\xef\xa6\x81\xe4\xb8\xad\xec\x8b\x81\xe2\x91\xa8\xeb" -"\x93\xa4\xec\x95\x9c\x21\x21\x20\xe3\x89\xaf\xe3\x89\xaf\xeb\x82" -"\xa9\xe2\x99\xa1\x20\xe2\x8c\x92\xe2\x8c\x92\x2a\x0a\x0a"), -'euc_jisx0213': ( -"\x50\x79\x74\x68\x6f\x6e\x20\xa4\xce\xb3\xab\xc8\xaf\xa4\xcf\xa1" -"\xa2\x31\x39\x39\x30\x20\xc7\xaf\xa4\xb4\xa4\xed\xa4\xab\xa4\xe9" -"\xb3\xab\xbb\xcf\xa4\xb5\xa4\xec\xa4\xc6\xa4\xa4\xa4\xde\xa4\xb9" -"\xa1\xa3\x0a\xb3\xab\xc8\xaf\xbc\xd4\xa4\xce\x20\x47\x75\x69\x64" -"\x6f\x20\x76\x61\x6e\x20\x52\x6f\x73\x73\x75\x6d\x20\xa4\xcf\xb6" -"\xb5\xb0\xe9\xcd\xd1\xa4\xce\xa5\xd7\xa5\xed\xa5\xb0\xa5\xe9\xa5" -"\xdf\xa5\xf3\xa5\xb0\xb8\xc0\xb8\xec\xa1\xd6\x41\x42\x43\xa1\xd7" -"\xa4\xce\xb3\xab\xc8\xaf\xa4\xcb\xbb\xb2\xb2\xc3\xa4\xb7\xa4\xc6" -"\xa4\xa4\xa4\xde\xa4\xb7\xa4\xbf\xa4\xac\xa1\xa2\x41\x42\x43\x20" -"\xa4\xcf\xbc\xc2\xcd\xd1\xbe\xe5\xa4\xce\xcc\xdc\xc5\xaa\xa4\xcb" -"\xa4\xcf\xa4\xa2\xa4\xde\xa4\xea\xc5\xac\xa4\xb7\xa4\xc6\xa4\xa4" -"\xa4\xde\xa4\xbb\xa4\xf3\xa4\xc7\xa4\xb7\xa4\xbf\xa1\xa3\x0a\xa4" -"\xb3\xa4\xce\xa4\xbf\xa4\xe1\xa1\xa2\x47\x75\x69\x64\x6f\x20\xa4" -"\xcf\xa4\xe8\xa4\xea\xbc\xc2\xcd\xd1\xc5\xaa\xa4\xca\xa5\xd7\xa5" -"\xed\xa5\xb0\xa5\xe9\xa5\xdf\xa5\xf3\xa5\xb0\xb8\xc0\xb8\xec\xa4" -"\xce\xb3\xab\xc8\xaf\xa4\xf2\xb3\xab\xbb\xcf\xa4\xb7\xa1\xa2\xb1" -"\xd1\xb9\xf1\x20\x42\x42\x53\x20\xca\xfc\xc1\xf7\xa4\xce\xa5\xb3" -"\xa5\xe1\xa5\xc7\xa5\xa3\xc8\xd6\xc1\xc8\xa1\xd6\xa5\xe2\xa5\xf3" -"\xa5\xc6\xa5\xa3\x20\xa5\xd1\xa5\xa4\xa5\xbd\xa5\xf3\xa1\xd7\xa4" -"\xce\xa5\xd5\xa5\xa1\xa5\xf3\xa4\xc7\xa4\xa2\xa4\xeb\x20\x47\x75" -"\x69\x64\x6f\x20\xa4\xcf\xa4\xb3\xa4\xce\xb8\xc0\xb8\xec\xa4\xf2" -"\xa1\xd6\x50\x79\x74\x68\x6f\x6e\xa1\xd7\xa4\xc8\xcc\xbe\xa4\xc5" -"\xa4\xb1\xa4\xde\xa4\xb7\xa4\xbf\xa1\xa3\x0a\xa4\xb3\xa4\xce\xa4" -"\xe8\xa4\xa6\xa4\xca\xc7\xd8\xb7\xca\xa4\xab\xa4\xe9\xc0\xb8\xa4" -"\xde\xa4\xec\xa4\xbf\x20\x50\x79\x74\x68\x6f\x6e\x20\xa4\xce\xb8" -"\xc0\xb8\xec\xc0\xdf\xb7\xd7\xa4\xcf\xa1\xa2\xa1\xd6\xa5\xb7\xa5" -"\xf3\xa5\xd7\xa5\xeb\xa1\xd7\xa4\xc7\xa1\xd6\xbd\xac\xc6\xc0\xa4" -"\xac\xcd\xc6\xb0\xd7\xa1\xd7\xa4\xc8\xa4\xa4\xa4\xa6\xcc\xdc\xc9" -"\xb8\xa4\xcb\xbd\xc5\xc5\xc0\xa4\xac\xc3\xd6\xa4\xab\xa4\xec\xa4" -"\xc6\xa4\xa4\xa4\xde\xa4\xb9\xa1\xa3\x0a\xc2\xbf\xa4\xaf\xa4\xce" -"\xa5\xb9\xa5\xaf\xa5\xea\xa5\xd7\xa5\xc8\xb7\xcf\xb8\xc0\xb8\xec" -"\xa4\xc7\xa4\xcf\xa5\xe6\xa1\xbc\xa5\xb6\xa4\xce\xcc\xdc\xc0\xe8" -"\xa4\xce\xcd\xf8\xca\xd8\xc0\xad\xa4\xf2\xcd\xa5\xc0\xe8\xa4\xb7" 
-"\xa4\xc6\xbf\xa7\xa1\xb9\xa4\xca\xb5\xa1\xc7\xbd\xa4\xf2\xb8\xc0" -"\xb8\xec\xcd\xd7\xc1\xc7\xa4\xc8\xa4\xb7\xa4\xc6\xbc\xe8\xa4\xea" -"\xc6\xfe\xa4\xec\xa4\xeb\xbe\xec\xb9\xe7\xa4\xac\xc2\xbf\xa4\xa4" -"\xa4\xce\xa4\xc7\xa4\xb9\xa4\xac\xa1\xa2\x50\x79\x74\x68\x6f\x6e" -"\x20\xa4\xc7\xa4\xcf\xa4\xbd\xa4\xa6\xa4\xa4\xa4\xc3\xa4\xbf\xbe" -"\xae\xba\xd9\xb9\xa9\xa4\xac\xc4\xc9\xb2\xc3\xa4\xb5\xa4\xec\xa4" -"\xeb\xa4\xb3\xa4\xc8\xa4\xcf\xa4\xa2\xa4\xde\xa4\xea\xa4\xa2\xa4" -"\xea\xa4\xde\xa4\xbb\xa4\xf3\xa1\xa3\x0a\xb8\xc0\xb8\xec\xbc\xab" -"\xc2\xce\xa4\xce\xb5\xa1\xc7\xbd\xa4\xcf\xba\xc7\xbe\xae\xb8\xc2" -"\xa4\xcb\xb2\xa1\xa4\xb5\xa4\xa8\xa1\xa2\xc9\xac\xcd\xd7\xa4\xca" -"\xb5\xa1\xc7\xbd\xa4\xcf\xb3\xc8\xc4\xa5\xa5\xe2\xa5\xb8\xa5\xe5" -"\xa1\xbc\xa5\xeb\xa4\xc8\xa4\xb7\xa4\xc6\xc4\xc9\xb2\xc3\xa4\xb9" -"\xa4\xeb\xa1\xa2\xa4\xc8\xa4\xa4\xa4\xa6\xa4\xce\xa4\xac\x20\x50" -"\x79\x74\x68\x6f\x6e\x20\xa4\xce\xa5\xdd\xa5\xea\xa5\xb7\xa1\xbc" -"\xa4\xc7\xa4\xb9\xa1\xa3\x0a\x0a\xa5\xce\xa4\xf7\x20\xa5\xfe\x20" -"\xa5\xc8\xa5\xad\xaf\xac\xaf\xda\x20\xcf\xe3\x8f\xfe\xd8\x20\x8f" -"\xfe\xd4\x8f\xfe\xe8\x8f\xfc\xd6\x0a", -"\x50\x79\x74\x68\x6f\x6e\x20\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba" -"\xe3\x81\xaf\xe3\x80\x81\x31\x39\x39\x30\x20\xe5\xb9\xb4\xe3\x81" -"\x94\xe3\x82\x8d\xe3\x81\x8b\xe3\x82\x89\xe9\x96\x8b\xe5\xa7\x8b" -"\xe3\x81\x95\xe3\x82\x8c\xe3\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3" -"\x81\x99\xe3\x80\x82\x0a\xe9\x96\x8b\xe7\x99\xba\xe8\x80\x85\xe3" -"\x81\xae\x20\x47\x75\x69\x64\x6f\x20\x76\x61\x6e\x20\x52\x6f\x73" -"\x73\x75\x6d\x20\xe3\x81\xaf\xe6\x95\x99\xe8\x82\xb2\xe7\x94\xa8" -"\xe3\x81\xae\xe3\x83\x97\xe3\x83\xad\xe3\x82\xb0\xe3\x83\xa9\xe3" -"\x83\x9f\xe3\x83\xb3\xe3\x82\xb0\xe8\xa8\x80\xe8\xaa\x9e\xe3\x80" -"\x8c\x41\x42\x43\xe3\x80\x8d\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba" -"\xe3\x81\xab\xe5\x8f\x82\xe5\x8a\xa0\xe3\x81\x97\xe3\x81\xa6\xe3" -"\x81\x84\xe3\x81\xbe\xe3\x81\x97\xe3\x81\x9f\xe3\x81\x8c\xe3\x80" -"\x81\x41\x42\x43\x20\xe3\x81\xaf\xe5\xae\x9f\xe7\x94\xa8\xe4\xb8" -"\x8a\xe3\x81\xae\xe7\x9b\xae\xe7\x9a\x84\xe3\x81\xab\xe3\x81\xaf" -"\xe3\x81\x82\xe3\x81\xbe\xe3\x82\x8a\xe9\x81\xa9\xe3\x81\x97\xe3" -"\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3\x81\x9b\xe3\x82\x93\xe3\x81" -"\xa7\xe3\x81\x97\xe3\x81\x9f\xe3\x80\x82\x0a\xe3\x81\x93\xe3\x81" -"\xae\xe3\x81\x9f\xe3\x82\x81\xe3\x80\x81\x47\x75\x69\x64\x6f\x20" -"\xe3\x81\xaf\xe3\x82\x88\xe3\x82\x8a\xe5\xae\x9f\xe7\x94\xa8\xe7" -"\x9a\x84\xe3\x81\xaa\xe3\x83\x97\xe3\x83\xad\xe3\x82\xb0\xe3\x83" -"\xa9\xe3\x83\x9f\xe3\x83\xb3\xe3\x82\xb0\xe8\xa8\x80\xe8\xaa\x9e" -"\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba\xe3\x82\x92\xe9\x96\x8b\xe5" -"\xa7\x8b\xe3\x81\x97\xe3\x80\x81\xe8\x8b\xb1\xe5\x9b\xbd\x20\x42" -"\x42\x53\x20\xe6\x94\xbe\xe9\x80\x81\xe3\x81\xae\xe3\x82\xb3\xe3" -"\x83\xa1\xe3\x83\x87\xe3\x82\xa3\xe7\x95\xaa\xe7\xb5\x84\xe3\x80" -"\x8c\xe3\x83\xa2\xe3\x83\xb3\xe3\x83\x86\xe3\x82\xa3\x20\xe3\x83" -"\x91\xe3\x82\xa4\xe3\x82\xbd\xe3\x83\xb3\xe3\x80\x8d\xe3\x81\xae" -"\xe3\x83\x95\xe3\x82\xa1\xe3\x83\xb3\xe3\x81\xa7\xe3\x81\x82\xe3" -"\x82\x8b\x20\x47\x75\x69\x64\x6f\x20\xe3\x81\xaf\xe3\x81\x93\xe3" -"\x81\xae\xe8\xa8\x80\xe8\xaa\x9e\xe3\x82\x92\xe3\x80\x8c\x50\x79" -"\x74\x68\x6f\x6e\xe3\x80\x8d\xe3\x81\xa8\xe5\x90\x8d\xe3\x81\xa5" -"\xe3\x81\x91\xe3\x81\xbe\xe3\x81\x97\xe3\x81\x9f\xe3\x80\x82\x0a" -"\xe3\x81\x93\xe3\x81\xae\xe3\x82\x88\xe3\x81\x86\xe3\x81\xaa\xe8" -"\x83\x8c\xe6\x99\xaf\xe3\x81\x8b\xe3\x82\x89\xe7\x94\x9f\xe3\x81" -"\xbe\xe3\x82\x8c\xe3\x81\x9f\x20\x50\x79\x74\x68\x6f\x6e\x20\xe3" 
-"\x81\xae\xe8\xa8\x80\xe8\xaa\x9e\xe8\xa8\xad\xe8\xa8\x88\xe3\x81" -"\xaf\xe3\x80\x81\xe3\x80\x8c\xe3\x82\xb7\xe3\x83\xb3\xe3\x83\x97" -"\xe3\x83\xab\xe3\x80\x8d\xe3\x81\xa7\xe3\x80\x8c\xe7\xbf\x92\xe5" -"\xbe\x97\xe3\x81\x8c\xe5\xae\xb9\xe6\x98\x93\xe3\x80\x8d\xe3\x81" -"\xa8\xe3\x81\x84\xe3\x81\x86\xe7\x9b\xae\xe6\xa8\x99\xe3\x81\xab" -"\xe9\x87\x8d\xe7\x82\xb9\xe3\x81\x8c\xe7\xbd\xae\xe3\x81\x8b\xe3" -"\x82\x8c\xe3\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3\x81\x99\xe3\x80" -"\x82\x0a\xe5\xa4\x9a\xe3\x81\x8f\xe3\x81\xae\xe3\x82\xb9\xe3\x82" -"\xaf\xe3\x83\xaa\xe3\x83\x97\xe3\x83\x88\xe7\xb3\xbb\xe8\xa8\x80" -"\xe8\xaa\x9e\xe3\x81\xa7\xe3\x81\xaf\xe3\x83\xa6\xe3\x83\xbc\xe3" -"\x82\xb6\xe3\x81\xae\xe7\x9b\xae\xe5\x85\x88\xe3\x81\xae\xe5\x88" -"\xa9\xe4\xbe\xbf\xe6\x80\xa7\xe3\x82\x92\xe5\x84\xaa\xe5\x85\x88" -"\xe3\x81\x97\xe3\x81\xa6\xe8\x89\xb2\xe3\x80\x85\xe3\x81\xaa\xe6" -"\xa9\x9f\xe8\x83\xbd\xe3\x82\x92\xe8\xa8\x80\xe8\xaa\x9e\xe8\xa6" -"\x81\xe7\xb4\xa0\xe3\x81\xa8\xe3\x81\x97\xe3\x81\xa6\xe5\x8f\x96" -"\xe3\x82\x8a\xe5\x85\xa5\xe3\x82\x8c\xe3\x82\x8b\xe5\xa0\xb4\xe5" -"\x90\x88\xe3\x81\x8c\xe5\xa4\x9a\xe3\x81\x84\xe3\x81\xae\xe3\x81" -"\xa7\xe3\x81\x99\xe3\x81\x8c\xe3\x80\x81\x50\x79\x74\x68\x6f\x6e" -"\x20\xe3\x81\xa7\xe3\x81\xaf\xe3\x81\x9d\xe3\x81\x86\xe3\x81\x84" -"\xe3\x81\xa3\xe3\x81\x9f\xe5\xb0\x8f\xe7\xb4\xb0\xe5\xb7\xa5\xe3" -"\x81\x8c\xe8\xbf\xbd\xe5\x8a\xa0\xe3\x81\x95\xe3\x82\x8c\xe3\x82" -"\x8b\xe3\x81\x93\xe3\x81\xa8\xe3\x81\xaf\xe3\x81\x82\xe3\x81\xbe" -"\xe3\x82\x8a\xe3\x81\x82\xe3\x82\x8a\xe3\x81\xbe\xe3\x81\x9b\xe3" -"\x82\x93\xe3\x80\x82\x0a\xe8\xa8\x80\xe8\xaa\x9e\xe8\x87\xaa\xe4" -"\xbd\x93\xe3\x81\xae\xe6\xa9\x9f\xe8\x83\xbd\xe3\x81\xaf\xe6\x9c" -"\x80\xe5\xb0\x8f\xe9\x99\x90\xe3\x81\xab\xe6\x8a\xbc\xe3\x81\x95" -"\xe3\x81\x88\xe3\x80\x81\xe5\xbf\x85\xe8\xa6\x81\xe3\x81\xaa\xe6" -"\xa9\x9f\xe8\x83\xbd\xe3\x81\xaf\xe6\x8b\xa1\xe5\xbc\xb5\xe3\x83" -"\xa2\xe3\x82\xb8\xe3\x83\xa5\xe3\x83\xbc\xe3\x83\xab\xe3\x81\xa8" -"\xe3\x81\x97\xe3\x81\xa6\xe8\xbf\xbd\xe5\x8a\xa0\xe3\x81\x99\xe3" -"\x82\x8b\xe3\x80\x81\xe3\x81\xa8\xe3\x81\x84\xe3\x81\x86\xe3\x81" -"\xae\xe3\x81\x8c\x20\x50\x79\x74\x68\x6f\x6e\x20\xe3\x81\xae\xe3" -"\x83\x9d\xe3\x83\xaa\xe3\x82\xb7\xe3\x83\xbc\xe3\x81\xa7\xe3\x81" -"\x99\xe3\x80\x82\x0a\x0a\xe3\x83\x8e\xe3\x81\x8b\xe3\x82\x9a\x20" -"\xe3\x83\x88\xe3\x82\x9a\x20\xe3\x83\x88\xe3\x82\xad\xef\xa8\xb6" -"\xef\xa8\xb9\x20\xf0\xa1\x9a\xb4\xf0\xaa\x8e\x8c\x20\xe9\xba\x80" -"\xe9\xbd\x81\xf0\xa9\x9b\xb0\x0a"), -'euc_jp': ( -"\x50\x79\x74\x68\x6f\x6e\x20\xa4\xce\xb3\xab\xc8\xaf\xa4\xcf\xa1" -"\xa2\x31\x39\x39\x30\x20\xc7\xaf\xa4\xb4\xa4\xed\xa4\xab\xa4\xe9" -"\xb3\xab\xbb\xcf\xa4\xb5\xa4\xec\xa4\xc6\xa4\xa4\xa4\xde\xa4\xb9" -"\xa1\xa3\x0a\xb3\xab\xc8\xaf\xbc\xd4\xa4\xce\x20\x47\x75\x69\x64" -"\x6f\x20\x76\x61\x6e\x20\x52\x6f\x73\x73\x75\x6d\x20\xa4\xcf\xb6" -"\xb5\xb0\xe9\xcd\xd1\xa4\xce\xa5\xd7\xa5\xed\xa5\xb0\xa5\xe9\xa5" -"\xdf\xa5\xf3\xa5\xb0\xb8\xc0\xb8\xec\xa1\xd6\x41\x42\x43\xa1\xd7" -"\xa4\xce\xb3\xab\xc8\xaf\xa4\xcb\xbb\xb2\xb2\xc3\xa4\xb7\xa4\xc6" -"\xa4\xa4\xa4\xde\xa4\xb7\xa4\xbf\xa4\xac\xa1\xa2\x41\x42\x43\x20" -"\xa4\xcf\xbc\xc2\xcd\xd1\xbe\xe5\xa4\xce\xcc\xdc\xc5\xaa\xa4\xcb" -"\xa4\xcf\xa4\xa2\xa4\xde\xa4\xea\xc5\xac\xa4\xb7\xa4\xc6\xa4\xa4" -"\xa4\xde\xa4\xbb\xa4\xf3\xa4\xc7\xa4\xb7\xa4\xbf\xa1\xa3\x0a\xa4" -"\xb3\xa4\xce\xa4\xbf\xa4\xe1\xa1\xa2\x47\x75\x69\x64\x6f\x20\xa4" -"\xcf\xa4\xe8\xa4\xea\xbc\xc2\xcd\xd1\xc5\xaa\xa4\xca\xa5\xd7\xa5" -"\xed\xa5\xb0\xa5\xe9\xa5\xdf\xa5\xf3\xa5\xb0\xb8\xc0\xb8\xec\xa4" 
-"\xce\xb3\xab\xc8\xaf\xa4\xf2\xb3\xab\xbb\xcf\xa4\xb7\xa1\xa2\xb1" -"\xd1\xb9\xf1\x20\x42\x42\x53\x20\xca\xfc\xc1\xf7\xa4\xce\xa5\xb3" -"\xa5\xe1\xa5\xc7\xa5\xa3\xc8\xd6\xc1\xc8\xa1\xd6\xa5\xe2\xa5\xf3" -"\xa5\xc6\xa5\xa3\x20\xa5\xd1\xa5\xa4\xa5\xbd\xa5\xf3\xa1\xd7\xa4" -"\xce\xa5\xd5\xa5\xa1\xa5\xf3\xa4\xc7\xa4\xa2\xa4\xeb\x20\x47\x75" -"\x69\x64\x6f\x20\xa4\xcf\xa4\xb3\xa4\xce\xb8\xc0\xb8\xec\xa4\xf2" -"\xa1\xd6\x50\x79\x74\x68\x6f\x6e\xa1\xd7\xa4\xc8\xcc\xbe\xa4\xc5" -"\xa4\xb1\xa4\xde\xa4\xb7\xa4\xbf\xa1\xa3\x0a\xa4\xb3\xa4\xce\xa4" -"\xe8\xa4\xa6\xa4\xca\xc7\xd8\xb7\xca\xa4\xab\xa4\xe9\xc0\xb8\xa4" -"\xde\xa4\xec\xa4\xbf\x20\x50\x79\x74\x68\x6f\x6e\x20\xa4\xce\xb8" -"\xc0\xb8\xec\xc0\xdf\xb7\xd7\xa4\xcf\xa1\xa2\xa1\xd6\xa5\xb7\xa5" -"\xf3\xa5\xd7\xa5\xeb\xa1\xd7\xa4\xc7\xa1\xd6\xbd\xac\xc6\xc0\xa4" -"\xac\xcd\xc6\xb0\xd7\xa1\xd7\xa4\xc8\xa4\xa4\xa4\xa6\xcc\xdc\xc9" -"\xb8\xa4\xcb\xbd\xc5\xc5\xc0\xa4\xac\xc3\xd6\xa4\xab\xa4\xec\xa4" -"\xc6\xa4\xa4\xa4\xde\xa4\xb9\xa1\xa3\x0a\xc2\xbf\xa4\xaf\xa4\xce" -"\xa5\xb9\xa5\xaf\xa5\xea\xa5\xd7\xa5\xc8\xb7\xcf\xb8\xc0\xb8\xec" -"\xa4\xc7\xa4\xcf\xa5\xe6\xa1\xbc\xa5\xb6\xa4\xce\xcc\xdc\xc0\xe8" -"\xa4\xce\xcd\xf8\xca\xd8\xc0\xad\xa4\xf2\xcd\xa5\xc0\xe8\xa4\xb7" -"\xa4\xc6\xbf\xa7\xa1\xb9\xa4\xca\xb5\xa1\xc7\xbd\xa4\xf2\xb8\xc0" -"\xb8\xec\xcd\xd7\xc1\xc7\xa4\xc8\xa4\xb7\xa4\xc6\xbc\xe8\xa4\xea" -"\xc6\xfe\xa4\xec\xa4\xeb\xbe\xec\xb9\xe7\xa4\xac\xc2\xbf\xa4\xa4" -"\xa4\xce\xa4\xc7\xa4\xb9\xa4\xac\xa1\xa2\x50\x79\x74\x68\x6f\x6e" -"\x20\xa4\xc7\xa4\xcf\xa4\xbd\xa4\xa6\xa4\xa4\xa4\xc3\xa4\xbf\xbe" -"\xae\xba\xd9\xb9\xa9\xa4\xac\xc4\xc9\xb2\xc3\xa4\xb5\xa4\xec\xa4" -"\xeb\xa4\xb3\xa4\xc8\xa4\xcf\xa4\xa2\xa4\xde\xa4\xea\xa4\xa2\xa4" -"\xea\xa4\xde\xa4\xbb\xa4\xf3\xa1\xa3\x0a\xb8\xc0\xb8\xec\xbc\xab" -"\xc2\xce\xa4\xce\xb5\xa1\xc7\xbd\xa4\xcf\xba\xc7\xbe\xae\xb8\xc2" -"\xa4\xcb\xb2\xa1\xa4\xb5\xa4\xa8\xa1\xa2\xc9\xac\xcd\xd7\xa4\xca" -"\xb5\xa1\xc7\xbd\xa4\xcf\xb3\xc8\xc4\xa5\xa5\xe2\xa5\xb8\xa5\xe5" -"\xa1\xbc\xa5\xeb\xa4\xc8\xa4\xb7\xa4\xc6\xc4\xc9\xb2\xc3\xa4\xb9" -"\xa4\xeb\xa1\xa2\xa4\xc8\xa4\xa4\xa4\xa6\xa4\xce\xa4\xac\x20\x50" -"\x79\x74\x68\x6f\x6e\x20\xa4\xce\xa5\xdd\xa5\xea\xa5\xb7\xa1\xbc" -"\xa4\xc7\xa4\xb9\xa1\xa3\x0a\x0a", -"\x50\x79\x74\x68\x6f\x6e\x20\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba" -"\xe3\x81\xaf\xe3\x80\x81\x31\x39\x39\x30\x20\xe5\xb9\xb4\xe3\x81" -"\x94\xe3\x82\x8d\xe3\x81\x8b\xe3\x82\x89\xe9\x96\x8b\xe5\xa7\x8b" -"\xe3\x81\x95\xe3\x82\x8c\xe3\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3" -"\x81\x99\xe3\x80\x82\x0a\xe9\x96\x8b\xe7\x99\xba\xe8\x80\x85\xe3" -"\x81\xae\x20\x47\x75\x69\x64\x6f\x20\x76\x61\x6e\x20\x52\x6f\x73" -"\x73\x75\x6d\x20\xe3\x81\xaf\xe6\x95\x99\xe8\x82\xb2\xe7\x94\xa8" -"\xe3\x81\xae\xe3\x83\x97\xe3\x83\xad\xe3\x82\xb0\xe3\x83\xa9\xe3" -"\x83\x9f\xe3\x83\xb3\xe3\x82\xb0\xe8\xa8\x80\xe8\xaa\x9e\xe3\x80" -"\x8c\x41\x42\x43\xe3\x80\x8d\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba" -"\xe3\x81\xab\xe5\x8f\x82\xe5\x8a\xa0\xe3\x81\x97\xe3\x81\xa6\xe3" -"\x81\x84\xe3\x81\xbe\xe3\x81\x97\xe3\x81\x9f\xe3\x81\x8c\xe3\x80" -"\x81\x41\x42\x43\x20\xe3\x81\xaf\xe5\xae\x9f\xe7\x94\xa8\xe4\xb8" -"\x8a\xe3\x81\xae\xe7\x9b\xae\xe7\x9a\x84\xe3\x81\xab\xe3\x81\xaf" -"\xe3\x81\x82\xe3\x81\xbe\xe3\x82\x8a\xe9\x81\xa9\xe3\x81\x97\xe3" -"\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3\x81\x9b\xe3\x82\x93\xe3\x81" -"\xa7\xe3\x81\x97\xe3\x81\x9f\xe3\x80\x82\x0a\xe3\x81\x93\xe3\x81" -"\xae\xe3\x81\x9f\xe3\x82\x81\xe3\x80\x81\x47\x75\x69\x64\x6f\x20" -"\xe3\x81\xaf\xe3\x82\x88\xe3\x82\x8a\xe5\xae\x9f\xe7\x94\xa8\xe7" 
-"\x9a\x84\xe3\x81\xaa\xe3\x83\x97\xe3\x83\xad\xe3\x82\xb0\xe3\x83" -"\xa9\xe3\x83\x9f\xe3\x83\xb3\xe3\x82\xb0\xe8\xa8\x80\xe8\xaa\x9e" -"\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba\xe3\x82\x92\xe9\x96\x8b\xe5" -"\xa7\x8b\xe3\x81\x97\xe3\x80\x81\xe8\x8b\xb1\xe5\x9b\xbd\x20\x42" -"\x42\x53\x20\xe6\x94\xbe\xe9\x80\x81\xe3\x81\xae\xe3\x82\xb3\xe3" -"\x83\xa1\xe3\x83\x87\xe3\x82\xa3\xe7\x95\xaa\xe7\xb5\x84\xe3\x80" -"\x8c\xe3\x83\xa2\xe3\x83\xb3\xe3\x83\x86\xe3\x82\xa3\x20\xe3\x83" -"\x91\xe3\x82\xa4\xe3\x82\xbd\xe3\x83\xb3\xe3\x80\x8d\xe3\x81\xae" -"\xe3\x83\x95\xe3\x82\xa1\xe3\x83\xb3\xe3\x81\xa7\xe3\x81\x82\xe3" -"\x82\x8b\x20\x47\x75\x69\x64\x6f\x20\xe3\x81\xaf\xe3\x81\x93\xe3" -"\x81\xae\xe8\xa8\x80\xe8\xaa\x9e\xe3\x82\x92\xe3\x80\x8c\x50\x79" -"\x74\x68\x6f\x6e\xe3\x80\x8d\xe3\x81\xa8\xe5\x90\x8d\xe3\x81\xa5" -"\xe3\x81\x91\xe3\x81\xbe\xe3\x81\x97\xe3\x81\x9f\xe3\x80\x82\x0a" -"\xe3\x81\x93\xe3\x81\xae\xe3\x82\x88\xe3\x81\x86\xe3\x81\xaa\xe8" -"\x83\x8c\xe6\x99\xaf\xe3\x81\x8b\xe3\x82\x89\xe7\x94\x9f\xe3\x81" -"\xbe\xe3\x82\x8c\xe3\x81\x9f\x20\x50\x79\x74\x68\x6f\x6e\x20\xe3" -"\x81\xae\xe8\xa8\x80\xe8\xaa\x9e\xe8\xa8\xad\xe8\xa8\x88\xe3\x81" -"\xaf\xe3\x80\x81\xe3\x80\x8c\xe3\x82\xb7\xe3\x83\xb3\xe3\x83\x97" -"\xe3\x83\xab\xe3\x80\x8d\xe3\x81\xa7\xe3\x80\x8c\xe7\xbf\x92\xe5" -"\xbe\x97\xe3\x81\x8c\xe5\xae\xb9\xe6\x98\x93\xe3\x80\x8d\xe3\x81" -"\xa8\xe3\x81\x84\xe3\x81\x86\xe7\x9b\xae\xe6\xa8\x99\xe3\x81\xab" -"\xe9\x87\x8d\xe7\x82\xb9\xe3\x81\x8c\xe7\xbd\xae\xe3\x81\x8b\xe3" -"\x82\x8c\xe3\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3\x81\x99\xe3\x80" -"\x82\x0a\xe5\xa4\x9a\xe3\x81\x8f\xe3\x81\xae\xe3\x82\xb9\xe3\x82" -"\xaf\xe3\x83\xaa\xe3\x83\x97\xe3\x83\x88\xe7\xb3\xbb\xe8\xa8\x80" -"\xe8\xaa\x9e\xe3\x81\xa7\xe3\x81\xaf\xe3\x83\xa6\xe3\x83\xbc\xe3" -"\x82\xb6\xe3\x81\xae\xe7\x9b\xae\xe5\x85\x88\xe3\x81\xae\xe5\x88" -"\xa9\xe4\xbe\xbf\xe6\x80\xa7\xe3\x82\x92\xe5\x84\xaa\xe5\x85\x88" -"\xe3\x81\x97\xe3\x81\xa6\xe8\x89\xb2\xe3\x80\x85\xe3\x81\xaa\xe6" -"\xa9\x9f\xe8\x83\xbd\xe3\x82\x92\xe8\xa8\x80\xe8\xaa\x9e\xe8\xa6" -"\x81\xe7\xb4\xa0\xe3\x81\xa8\xe3\x81\x97\xe3\x81\xa6\xe5\x8f\x96" -"\xe3\x82\x8a\xe5\x85\xa5\xe3\x82\x8c\xe3\x82\x8b\xe5\xa0\xb4\xe5" -"\x90\x88\xe3\x81\x8c\xe5\xa4\x9a\xe3\x81\x84\xe3\x81\xae\xe3\x81" -"\xa7\xe3\x81\x99\xe3\x81\x8c\xe3\x80\x81\x50\x79\x74\x68\x6f\x6e" -"\x20\xe3\x81\xa7\xe3\x81\xaf\xe3\x81\x9d\xe3\x81\x86\xe3\x81\x84" -"\xe3\x81\xa3\xe3\x81\x9f\xe5\xb0\x8f\xe7\xb4\xb0\xe5\xb7\xa5\xe3" -"\x81\x8c\xe8\xbf\xbd\xe5\x8a\xa0\xe3\x81\x95\xe3\x82\x8c\xe3\x82" -"\x8b\xe3\x81\x93\xe3\x81\xa8\xe3\x81\xaf\xe3\x81\x82\xe3\x81\xbe" -"\xe3\x82\x8a\xe3\x81\x82\xe3\x82\x8a\xe3\x81\xbe\xe3\x81\x9b\xe3" -"\x82\x93\xe3\x80\x82\x0a\xe8\xa8\x80\xe8\xaa\x9e\xe8\x87\xaa\xe4" -"\xbd\x93\xe3\x81\xae\xe6\xa9\x9f\xe8\x83\xbd\xe3\x81\xaf\xe6\x9c" -"\x80\xe5\xb0\x8f\xe9\x99\x90\xe3\x81\xab\xe6\x8a\xbc\xe3\x81\x95" -"\xe3\x81\x88\xe3\x80\x81\xe5\xbf\x85\xe8\xa6\x81\xe3\x81\xaa\xe6" -"\xa9\x9f\xe8\x83\xbd\xe3\x81\xaf\xe6\x8b\xa1\xe5\xbc\xb5\xe3\x83" -"\xa2\xe3\x82\xb8\xe3\x83\xa5\xe3\x83\xbc\xe3\x83\xab\xe3\x81\xa8" -"\xe3\x81\x97\xe3\x81\xa6\xe8\xbf\xbd\xe5\x8a\xa0\xe3\x81\x99\xe3" -"\x82\x8b\xe3\x80\x81\xe3\x81\xa8\xe3\x81\x84\xe3\x81\x86\xe3\x81" -"\xae\xe3\x81\x8c\x20\x50\x79\x74\x68\x6f\x6e\x20\xe3\x81\xae\xe3" -"\x83\x9d\xe3\x83\xaa\xe3\x82\xb7\xe3\x83\xbc\xe3\x81\xa7\xe3\x81" -"\x99\xe3\x80\x82\x0a\x0a"), -'euc_kr': ( -"\xa1\xdd\x20\xc6\xc4\xc0\xcc\xbd\xe3\x28\x50\x79\x74\x68\x6f\x6e" -"\x29\xc0\xba\x20\xb9\xe8\xbf\xec\xb1\xe2\x20\xbd\xb1\xb0\xed\x2c" 
-"\x20\xb0\xad\xb7\xc2\xc7\xd1\x20\xc7\xc1\xb7\xce\xb1\xd7\xb7\xa1" -"\xb9\xd6\x20\xbe\xf0\xbe\xee\xc0\xd4\xb4\xcf\xb4\xd9\x2e\x20\xc6" -"\xc4\xc0\xcc\xbd\xe3\xc0\xba\x0a\xc8\xbf\xc0\xb2\xc0\xfb\xc0\xce" -"\x20\xb0\xed\xbc\xf6\xc1\xd8\x20\xb5\xa5\xc0\xcc\xc5\xcd\x20\xb1" -"\xb8\xc1\xb6\xbf\xcd\x20\xb0\xa3\xb4\xdc\xc7\xcf\xc1\xf6\xb8\xb8" -"\x20\xc8\xbf\xc0\xb2\xc0\xfb\xc0\xce\x20\xb0\xb4\xc3\xbc\xc1\xf6" -"\xc7\xe2\xc7\xc1\xb7\xce\xb1\xd7\xb7\xa1\xb9\xd6\xc0\xbb\x0a\xc1" -"\xf6\xbf\xf8\xc7\xd5\xb4\xcf\xb4\xd9\x2e\x20\xc6\xc4\xc0\xcc\xbd" -"\xe3\xc0\xc7\x20\xbf\xec\xbe\xc6\x28\xe9\xd0\xe4\xba\x29\xc7\xd1" -"\x20\xb9\xae\xb9\xfd\xb0\xfa\x20\xb5\xbf\xc0\xfb\x20\xc5\xb8\xc0" -"\xcc\xc7\xce\x2c\x20\xb1\xd7\xb8\xae\xb0\xed\x20\xc0\xce\xc5\xcd" -"\xc7\xc1\xb8\xae\xc6\xc3\x0a\xc8\xaf\xb0\xe6\xc0\xba\x20\xc6\xc4" -"\xc0\xcc\xbd\xe3\xc0\xbb\x20\xbd\xba\xc5\xa9\xb8\xb3\xc6\xc3\xb0" -"\xfa\x20\xbf\xa9\xb7\xaf\x20\xba\xd0\xbe\xdf\xbf\xa1\xbc\xad\xbf" -"\xcd\x20\xb4\xeb\xba\xce\xba\xd0\xc0\xc7\x20\xc7\xc3\xb7\xa7\xc6" -"\xfb\xbf\xa1\xbc\xad\xc0\xc7\x20\xba\xfc\xb8\xa5\x0a\xbe\xd6\xc7" -"\xc3\xb8\xae\xc4\xc9\xc0\xcc\xbc\xc7\x20\xb0\xb3\xb9\xdf\xc0\xbb" -"\x20\xc7\xd2\x20\xbc\xf6\x20\xc0\xd6\xb4\xc2\x20\xc0\xcc\xbb\xf3" -"\xc0\xfb\xc0\xce\x20\xbe\xf0\xbe\xee\xb7\xce\x20\xb8\xb8\xb5\xe9" -"\xbe\xee\xc1\xdd\xb4\xcf\xb4\xd9\x2e\x0a\x0a\xa1\xd9\xc3\xb9\xb0" -"\xa1\xb3\xa1\x3a\x20\xb3\xaf\xbe\xc6\xb6\xf3\x20\xa4\xd4\xa4\xb6" -"\xa4\xd0\xa4\xd4\xa4\xd4\xa4\xb6\xa4\xd0\xa4\xd4\xbe\xb1\x7e\x20" -"\xa4\xd4\xa4\xa4\xa4\xd2\xa4\xb7\xc5\xad\x21\x20\xa4\xd4\xa4\xa8" -"\xa4\xd1\xa4\xb7\xb1\xdd\xbe\xf8\xc0\xcc\x20\xc0\xfc\xa4\xd4\xa4" -"\xbe\xa4\xc8\xa4\xb2\xb4\xcf\xb4\xd9\x2e\x20\xa4\xd4\xa4\xb2\xa4" -"\xce\xa4\xaa\x2e\x20\xb1\xd7\xb7\xb1\xb0\xc5\x20\xa4\xd4\xa4\xb7" -"\xa4\xd1\xa4\xb4\xb4\xd9\x2e\x0a", -"\xe2\x97\x8e\x20\xed\x8c\x8c\xec\x9d\xb4\xec\x8d\xac\x28\x50\x79" -"\x74\x68\x6f\x6e\x29\xec\x9d\x80\x20\xeb\xb0\xb0\xec\x9a\xb0\xea" -"\xb8\xb0\x20\xec\x89\xbd\xea\xb3\xa0\x2c\x20\xea\xb0\x95\xeb\xa0" -"\xa5\xed\x95\x9c\x20\xed\x94\x84\xeb\xa1\x9c\xea\xb7\xb8\xeb\x9e" -"\x98\xeb\xb0\x8d\x20\xec\x96\xb8\xec\x96\xb4\xec\x9e\x85\xeb\x8b" -"\x88\xeb\x8b\xa4\x2e\x20\xed\x8c\x8c\xec\x9d\xb4\xec\x8d\xac\xec" -"\x9d\x80\x0a\xed\x9a\xa8\xec\x9c\xa8\xec\xa0\x81\xec\x9d\xb8\x20" -"\xea\xb3\xa0\xec\x88\x98\xec\xa4\x80\x20\xeb\x8d\xb0\xec\x9d\xb4" -"\xed\x84\xb0\x20\xea\xb5\xac\xec\xa1\xb0\xec\x99\x80\x20\xea\xb0" -"\x84\xeb\x8b\xa8\xed\x95\x98\xec\xa7\x80\xeb\xa7\x8c\x20\xed\x9a" -"\xa8\xec\x9c\xa8\xec\xa0\x81\xec\x9d\xb8\x20\xea\xb0\x9d\xec\xb2" -"\xb4\xec\xa7\x80\xed\x96\xa5\xed\x94\x84\xeb\xa1\x9c\xea\xb7\xb8" -"\xeb\x9e\x98\xeb\xb0\x8d\xec\x9d\x84\x0a\xec\xa7\x80\xec\x9b\x90" -"\xed\x95\xa9\xeb\x8b\x88\xeb\x8b\xa4\x2e\x20\xed\x8c\x8c\xec\x9d" -"\xb4\xec\x8d\xac\xec\x9d\x98\x20\xec\x9a\xb0\xec\x95\x84\x28\xe5" -"\x84\xaa\xe9\x9b\x85\x29\xed\x95\x9c\x20\xeb\xac\xb8\xeb\xb2\x95" -"\xea\xb3\xbc\x20\xeb\x8f\x99\xec\xa0\x81\x20\xed\x83\x80\xec\x9d" -"\xb4\xed\x95\x91\x2c\x20\xea\xb7\xb8\xeb\xa6\xac\xea\xb3\xa0\x20" -"\xec\x9d\xb8\xed\x84\xb0\xed\x94\x84\xeb\xa6\xac\xed\x8c\x85\x0a" -"\xed\x99\x98\xea\xb2\xbd\xec\x9d\x80\x20\xed\x8c\x8c\xec\x9d\xb4" -"\xec\x8d\xac\xec\x9d\x84\x20\xec\x8a\xa4\xed\x81\xac\xeb\xa6\xbd" -"\xed\x8c\x85\xea\xb3\xbc\x20\xec\x97\xac\xeb\x9f\xac\x20\xeb\xb6" -"\x84\xec\x95\xbc\xec\x97\x90\xec\x84\x9c\xec\x99\x80\x20\xeb\x8c" -"\x80\xeb\xb6\x80\xeb\xb6\x84\xec\x9d\x98\x20\xed\x94\x8c\xeb\x9e" -"\xab\xed\x8f\xbc\xec\x97\x90\xec\x84\x9c\xec\x9d\x98\x20\xeb\xb9" 
-"\xa0\xeb\xa5\xb8\x0a\xec\x95\xa0\xed\x94\x8c\xeb\xa6\xac\xec\xbc" -"\x80\xec\x9d\xb4\xec\x85\x98\x20\xea\xb0\x9c\xeb\xb0\x9c\xec\x9d" -"\x84\x20\xed\x95\xa0\x20\xec\x88\x98\x20\xec\x9e\x88\xeb\x8a\x94" -"\x20\xec\x9d\xb4\xec\x83\x81\xec\xa0\x81\xec\x9d\xb8\x20\xec\x96" -"\xb8\xec\x96\xb4\xeb\xa1\x9c\x20\xeb\xa7\x8c\xeb\x93\xa4\xec\x96" -"\xb4\xec\xa4\x8d\xeb\x8b\x88\xeb\x8b\xa4\x2e\x0a\x0a\xe2\x98\x86" -"\xec\xb2\xab\xea\xb0\x80\xeb\x81\x9d\x3a\x20\xeb\x82\xa0\xec\x95" -"\x84\xeb\x9d\xbc\x20\xec\x93\x94\xec\x93\x94\xec\x93\xa9\x7e\x20" -"\xeb\x8b\x81\xed\x81\xbc\x21\x20\xeb\x9c\xbd\xea\xb8\x88\xec\x97" -"\x86\xec\x9d\xb4\x20\xec\xa0\x84\xed\x99\xa5\xeb\x8b\x88\xeb\x8b" -"\xa4\x2e\x20\xeb\xb7\x81\x2e\x20\xea\xb7\xb8\xeb\x9f\xb0\xea\xb1" -"\xb0\x20\xec\x9d\x8e\xeb\x8b\xa4\x2e\x0a"), -'gb18030': ( -"\x50\x79\x74\x68\x6f\x6e\xa3\xa8\xc5\xc9\xc9\xad\xa3\xa9\xd3\xef" -"\xd1\xd4\xca\xc7\xd2\xbb\xd6\xd6\xb9\xa6\xc4\xdc\xc7\xbf\xb4\xf3" -"\xb6\xf8\xcd\xea\xc9\xc6\xb5\xc4\xcd\xa8\xd3\xc3\xd0\xcd\xbc\xc6" -"\xcb\xe3\xbb\xfa\xb3\xcc\xd0\xf2\xc9\xe8\xbc\xc6\xd3\xef\xd1\xd4" -"\xa3\xac\x0a\xd2\xd1\xbe\xad\xbe\xdf\xd3\xd0\xca\xae\xb6\xe0\xc4" -"\xea\xb5\xc4\xb7\xa2\xd5\xb9\xc0\xfa\xca\xb7\xa3\xac\xb3\xc9\xca" -"\xec\xc7\xd2\xce\xc8\xb6\xa8\xa1\xa3\xd5\xe2\xd6\xd6\xd3\xef\xd1" -"\xd4\xbe\xdf\xd3\xd0\xb7\xc7\xb3\xa3\xbc\xf2\xbd\xdd\xb6\xf8\xc7" -"\xe5\xce\xfa\x0a\xb5\xc4\xd3\xef\xb7\xa8\xcc\xd8\xb5\xe3\xa3\xac" -"\xca\xca\xba\xcf\xcd\xea\xb3\xc9\xb8\xf7\xd6\xd6\xb8\xdf\xb2\xe3" -"\xc8\xce\xce\xf1\xa3\xac\xbc\xb8\xba\xf5\xbf\xc9\xd2\xd4\xd4\xda" -"\xcb\xf9\xd3\xd0\xb5\xc4\xb2\xd9\xd7\xf7\xcf\xb5\xcd\xb3\xd6\xd0" -"\x0a\xd4\xcb\xd0\xd0\xa1\xa3\xd5\xe2\xd6\xd6\xd3\xef\xd1\xd4\xbc" -"\xf2\xb5\xa5\xb6\xf8\xc7\xbf\xb4\xf3\xa3\xac\xca\xca\xba\xcf\xb8" -"\xf7\xd6\xd6\xc8\xcb\xca\xbf\xd1\xa7\xcf\xb0\xca\xb9\xd3\xc3\xa1" -"\xa3\xc4\xbf\xc7\xb0\xa3\xac\xbb\xf9\xd3\xda\xd5\xe2\x0a\xd6\xd6" -"\xd3\xef\xd1\xd4\xb5\xc4\xcf\xe0\xb9\xd8\xbc\xbc\xca\xf5\xd5\xfd" -"\xd4\xda\xb7\xc9\xcb\xd9\xb5\xc4\xb7\xa2\xd5\xb9\xa3\xac\xd3\xc3" -"\xbb\xa7\xca\xfd\xc1\xbf\xbc\xb1\xbe\xe7\xc0\xa9\xb4\xf3\xa3\xac" -"\xcf\xe0\xb9\xd8\xb5\xc4\xd7\xca\xd4\xb4\xb7\xc7\xb3\xa3\xb6\xe0" -"\xa1\xa3\x0a\xc8\xe7\xba\xce\xd4\xda\x20\x50\x79\x74\x68\x6f\x6e" -"\x20\xd6\xd0\xca\xb9\xd3\xc3\xbc\xc8\xd3\xd0\xb5\xc4\x20\x43\x20" -"\x6c\x69\x62\x72\x61\x72\x79\x3f\x0a\xa1\xa1\xd4\xda\xd9\x59\xd3" -"\x8d\xbf\xc6\xbc\xbc\xbf\xec\xcb\xd9\xb0\x6c\xd5\xb9\xb5\xc4\xbd" -"\xf1\xcc\xec\x2c\x20\xe9\x5f\xb0\x6c\xbc\xb0\x9c\x79\xd4\x87\xdc" -"\x9b\xf3\x77\xb5\xc4\xcb\xd9\xb6\xc8\xca\xc7\xb2\xbb\xc8\xdd\xba" -"\xf6\xd2\x95\xb5\xc4\x0a\xd5\x6e\xee\x7d\x2e\x20\x9e\xe9\xbc\xd3" -"\xbf\xec\xe9\x5f\xb0\x6c\xbc\xb0\x9c\x79\xd4\x87\xb5\xc4\xcb\xd9" -"\xb6\xc8\x2c\x20\xce\xd2\x82\x83\xb1\xe3\xb3\xa3\xcf\xa3\xcd\xfb" -"\xc4\xdc\xc0\xfb\xd3\xc3\xd2\xbb\xd0\xa9\xd2\xd1\xe9\x5f\xb0\x6c" -"\xba\xc3\xb5\xc4\x0a\x6c\x69\x62\x72\x61\x72\x79\x2c\x20\x81\x4b" -"\xd3\xd0\xd2\xbb\x82\x80\x20\x66\x61\x73\x74\x20\x70\x72\x6f\x74" -"\x6f\x74\x79\x70\x69\x6e\x67\x20\xb5\xc4\x20\x70\x72\x6f\x67\x72" -"\x61\x6d\x6d\x69\x6e\x67\x20\x6c\x61\x6e\x67\x75\x61\x67\x65\x20" -"\xbf\xc9\x0a\xb9\xa9\xca\xb9\xd3\xc3\x2e\x20\xc4\xbf\xc7\xb0\xd3" -"\xd0\xd4\x53\xd4\x53\xb6\xe0\xb6\xe0\xb5\xc4\x20\x6c\x69\x62\x72" -"\x61\x72\x79\x20\xca\xc7\xd2\xd4\x20\x43\x20\x8c\x91\xb3\xc9\x2c" -"\x20\xb6\xf8\x20\x50\x79\x74\x68\x6f\x6e\x20\xca\xc7\xd2\xbb\x82" -"\x80\x0a\x66\x61\x73\x74\x20\x70\x72\x6f\x74\x6f\x74\x79\x70\x69" -"\x6e\x67\x20\xb5\xc4\x20\x70\x72\x6f\x67\x72\x61\x6d\x6d\x69\x6e" 
-"\x67\x20\x6c\x61\x6e\x67\x75\x61\x67\x65\x2e\x20\xb9\xca\xce\xd2" -"\x82\x83\xcf\xa3\xcd\xfb\xc4\xdc\x8c\xa2\xbc\xc8\xd3\xd0\xb5\xc4" -"\x0a\x43\x20\x6c\x69\x62\x72\x61\x72\x79\x20\xc4\xc3\xb5\xbd\x20" -"\x50\x79\x74\x68\x6f\x6e\x20\xb5\xc4\xad\x68\xbe\xb3\xd6\xd0\x9c" -"\x79\xd4\x87\xbc\xb0\xd5\xfb\xba\xcf\x2e\x20\xc6\xe4\xd6\xd0\xd7" -"\xee\xd6\xf7\xd2\xaa\xd2\xb2\xca\xc7\xce\xd2\x82\x83\xcb\xf9\x0a" -"\xd2\xaa\xd3\x91\xd5\x93\xb5\xc4\x86\x96\xee\x7d\xbe\xcd\xca\xc7" -"\x3a\x0a\x83\x35\xc7\x31\x83\x33\x9a\x33\x83\x32\xb1\x31\x83\x33" -"\x95\x31\x20\x82\x37\xd1\x36\x83\x30\x8c\x34\x83\x36\x84\x33\x20" -"\x82\x38\x89\x35\x82\x38\xfb\x36\x83\x33\x95\x35\x20\x83\x33\xd5" -"\x31\x82\x39\x81\x35\x20\x83\x30\xfd\x39\x83\x33\x86\x30\x20\x83" -"\x34\xdc\x33\x83\x35\xf6\x37\x83\x35\x97\x35\x20\x83\x35\xf9\x35" -"\x83\x30\x91\x39\x82\x38\x83\x39\x82\x39\xfc\x33\x83\x30\xf0\x34" -"\x20\x83\x32\xeb\x39\x83\x32\xeb\x35\x82\x39\x83\x39\x2e\x0a\x0a", -"\x50\x79\x74\x68\x6f\x6e\xef\xbc\x88\xe6\xb4\xbe\xe6\xa3\xae\xef" -"\xbc\x89\xe8\xaf\xad\xe8\xa8\x80\xe6\x98\xaf\xe4\xb8\x80\xe7\xa7" -"\x8d\xe5\x8a\x9f\xe8\x83\xbd\xe5\xbc\xba\xe5\xa4\xa7\xe8\x80\x8c" -"\xe5\xae\x8c\xe5\x96\x84\xe7\x9a\x84\xe9\x80\x9a\xe7\x94\xa8\xe5" -"\x9e\x8b\xe8\xae\xa1\xe7\xae\x97\xe6\x9c\xba\xe7\xa8\x8b\xe5\xba" -"\x8f\xe8\xae\xbe\xe8\xae\xa1\xe8\xaf\xad\xe8\xa8\x80\xef\xbc\x8c" -"\x0a\xe5\xb7\xb2\xe7\xbb\x8f\xe5\x85\xb7\xe6\x9c\x89\xe5\x8d\x81" -"\xe5\xa4\x9a\xe5\xb9\xb4\xe7\x9a\x84\xe5\x8f\x91\xe5\xb1\x95\xe5" -"\x8e\x86\xe5\x8f\xb2\xef\xbc\x8c\xe6\x88\x90\xe7\x86\x9f\xe4\xb8" -"\x94\xe7\xa8\xb3\xe5\xae\x9a\xe3\x80\x82\xe8\xbf\x99\xe7\xa7\x8d" -"\xe8\xaf\xad\xe8\xa8\x80\xe5\x85\xb7\xe6\x9c\x89\xe9\x9d\x9e\xe5" -"\xb8\xb8\xe7\xae\x80\xe6\x8d\xb7\xe8\x80\x8c\xe6\xb8\x85\xe6\x99" -"\xb0\x0a\xe7\x9a\x84\xe8\xaf\xad\xe6\xb3\x95\xe7\x89\xb9\xe7\x82" -"\xb9\xef\xbc\x8c\xe9\x80\x82\xe5\x90\x88\xe5\xae\x8c\xe6\x88\x90" -"\xe5\x90\x84\xe7\xa7\x8d\xe9\xab\x98\xe5\xb1\x82\xe4\xbb\xbb\xe5" -"\x8a\xa1\xef\xbc\x8c\xe5\x87\xa0\xe4\xb9\x8e\xe5\x8f\xaf\xe4\xbb" -"\xa5\xe5\x9c\xa8\xe6\x89\x80\xe6\x9c\x89\xe7\x9a\x84\xe6\x93\x8d" -"\xe4\xbd\x9c\xe7\xb3\xbb\xe7\xbb\x9f\xe4\xb8\xad\x0a\xe8\xbf\x90" -"\xe8\xa1\x8c\xe3\x80\x82\xe8\xbf\x99\xe7\xa7\x8d\xe8\xaf\xad\xe8" -"\xa8\x80\xe7\xae\x80\xe5\x8d\x95\xe8\x80\x8c\xe5\xbc\xba\xe5\xa4" -"\xa7\xef\xbc\x8c\xe9\x80\x82\xe5\x90\x88\xe5\x90\x84\xe7\xa7\x8d" -"\xe4\xba\xba\xe5\xa3\xab\xe5\xad\xa6\xe4\xb9\xa0\xe4\xbd\xbf\xe7" -"\x94\xa8\xe3\x80\x82\xe7\x9b\xae\xe5\x89\x8d\xef\xbc\x8c\xe5\x9f" -"\xba\xe4\xba\x8e\xe8\xbf\x99\x0a\xe7\xa7\x8d\xe8\xaf\xad\xe8\xa8" -"\x80\xe7\x9a\x84\xe7\x9b\xb8\xe5\x85\xb3\xe6\x8a\x80\xe6\x9c\xaf" -"\xe6\xad\xa3\xe5\x9c\xa8\xe9\xa3\x9e\xe9\x80\x9f\xe7\x9a\x84\xe5" -"\x8f\x91\xe5\xb1\x95\xef\xbc\x8c\xe7\x94\xa8\xe6\x88\xb7\xe6\x95" -"\xb0\xe9\x87\x8f\xe6\x80\xa5\xe5\x89\xa7\xe6\x89\xa9\xe5\xa4\xa7" -"\xef\xbc\x8c\xe7\x9b\xb8\xe5\x85\xb3\xe7\x9a\x84\xe8\xb5\x84\xe6" -"\xba\x90\xe9\x9d\x9e\xe5\xb8\xb8\xe5\xa4\x9a\xe3\x80\x82\x0a\xe5" -"\xa6\x82\xe4\xbd\x95\xe5\x9c\xa8\x20\x50\x79\x74\x68\x6f\x6e\x20" -"\xe4\xb8\xad\xe4\xbd\xbf\xe7\x94\xa8\xe6\x97\xa2\xe6\x9c\x89\xe7" -"\x9a\x84\x20\x43\x20\x6c\x69\x62\x72\x61\x72\x79\x3f\x0a\xe3\x80" -"\x80\xe5\x9c\xa8\xe8\xb3\x87\xe8\xa8\x8a\xe7\xa7\x91\xe6\x8a\x80" -"\xe5\xbf\xab\xe9\x80\x9f\xe7\x99\xbc\xe5\xb1\x95\xe7\x9a\x84\xe4" -"\xbb\x8a\xe5\xa4\xa9\x2c\x20\xe9\x96\x8b\xe7\x99\xbc\xe5\x8f\x8a" -"\xe6\xb8\xac\xe8\xa9\xa6\xe8\xbb\x9f\xe9\xab\x94\xe7\x9a\x84\xe9" -"\x80\x9f\xe5\xba\xa6\xe6\x98\xaf\xe4\xb8\x8d\xe5\xae\xb9\xe5\xbf" 
-"\xbd\xe8\xa6\x96\xe7\x9a\x84\x0a\xe8\xaa\xb2\xe9\xa1\x8c\x2e\x20" -"\xe7\x82\xba\xe5\x8a\xa0\xe5\xbf\xab\xe9\x96\x8b\xe7\x99\xbc\xe5" -"\x8f\x8a\xe6\xb8\xac\xe8\xa9\xa6\xe7\x9a\x84\xe9\x80\x9f\xe5\xba" -"\xa6\x2c\x20\xe6\x88\x91\xe5\x80\x91\xe4\xbe\xbf\xe5\xb8\xb8\xe5" -"\xb8\x8c\xe6\x9c\x9b\xe8\x83\xbd\xe5\x88\xa9\xe7\x94\xa8\xe4\xb8" -"\x80\xe4\xba\x9b\xe5\xb7\xb2\xe9\x96\x8b\xe7\x99\xbc\xe5\xa5\xbd" -"\xe7\x9a\x84\x0a\x6c\x69\x62\x72\x61\x72\x79\x2c\x20\xe4\xb8\xa6" -"\xe6\x9c\x89\xe4\xb8\x80\xe5\x80\x8b\x20\x66\x61\x73\x74\x20\x70" -"\x72\x6f\x74\x6f\x74\x79\x70\x69\x6e\x67\x20\xe7\x9a\x84\x20\x70" -"\x72\x6f\x67\x72\x61\x6d\x6d\x69\x6e\x67\x20\x6c\x61\x6e\x67\x75" -"\x61\x67\x65\x20\xe5\x8f\xaf\x0a\xe4\xbe\x9b\xe4\xbd\xbf\xe7\x94" -"\xa8\x2e\x20\xe7\x9b\xae\xe5\x89\x8d\xe6\x9c\x89\xe8\xa8\xb1\xe8" -"\xa8\xb1\xe5\xa4\x9a\xe5\xa4\x9a\xe7\x9a\x84\x20\x6c\x69\x62\x72" -"\x61\x72\x79\x20\xe6\x98\xaf\xe4\xbb\xa5\x20\x43\x20\xe5\xaf\xab" -"\xe6\x88\x90\x2c\x20\xe8\x80\x8c\x20\x50\x79\x74\x68\x6f\x6e\x20" -"\xe6\x98\xaf\xe4\xb8\x80\xe5\x80\x8b\x0a\x66\x61\x73\x74\x20\x70" -"\x72\x6f\x74\x6f\x74\x79\x70\x69\x6e\x67\x20\xe7\x9a\x84\x20\x70" -"\x72\x6f\x67\x72\x61\x6d\x6d\x69\x6e\x67\x20\x6c\x61\x6e\x67\x75" -"\x61\x67\x65\x2e\x20\xe6\x95\x85\xe6\x88\x91\xe5\x80\x91\xe5\xb8" -"\x8c\xe6\x9c\x9b\xe8\x83\xbd\xe5\xb0\x87\xe6\x97\xa2\xe6\x9c\x89" -"\xe7\x9a\x84\x0a\x43\x20\x6c\x69\x62\x72\x61\x72\x79\x20\xe6\x8b" -"\xbf\xe5\x88\xb0\x20\x50\x79\x74\x68\x6f\x6e\x20\xe7\x9a\x84\xe7" -"\x92\xb0\xe5\xa2\x83\xe4\xb8\xad\xe6\xb8\xac\xe8\xa9\xa6\xe5\x8f" -"\x8a\xe6\x95\xb4\xe5\x90\x88\x2e\x20\xe5\x85\xb6\xe4\xb8\xad\xe6" -"\x9c\x80\xe4\xb8\xbb\xe8\xa6\x81\xe4\xb9\x9f\xe6\x98\xaf\xe6\x88" -"\x91\xe5\x80\x91\xe6\x89\x80\x0a\xe8\xa6\x81\xe8\xa8\x8e\xe8\xab" -"\x96\xe7\x9a\x84\xe5\x95\x8f\xe9\xa1\x8c\xe5\xb0\xb1\xe6\x98\xaf" -"\x3a\x0a\xed\x8c\x8c\xec\x9d\xb4\xec\x8d\xac\xec\x9d\x80\x20\xea" -"\xb0\x95\xeb\xa0\xa5\xed\x95\x9c\x20\xea\xb8\xb0\xeb\x8a\xa5\xec" -"\x9d\x84\x20\xec\xa7\x80\xeb\x8b\x8c\x20\xeb\xb2\x94\xec\x9a\xa9" -"\x20\xec\xbb\xb4\xed\x93\xa8\xed\x84\xb0\x20\xed\x94\x84\xeb\xa1" -"\x9c\xea\xb7\xb8\xeb\x9e\x98\xeb\xb0\x8d\x20\xec\x96\xb8\xec\x96" -"\xb4\xeb\x8b\xa4\x2e\x0a\x0a"), -'gb2312': ( -"\x50\x79\x74\x68\x6f\x6e\xa3\xa8\xc5\xc9\xc9\xad\xa3\xa9\xd3\xef" -"\xd1\xd4\xca\xc7\xd2\xbb\xd6\xd6\xb9\xa6\xc4\xdc\xc7\xbf\xb4\xf3" -"\xb6\xf8\xcd\xea\xc9\xc6\xb5\xc4\xcd\xa8\xd3\xc3\xd0\xcd\xbc\xc6" -"\xcb\xe3\xbb\xfa\xb3\xcc\xd0\xf2\xc9\xe8\xbc\xc6\xd3\xef\xd1\xd4" -"\xa3\xac\x0a\xd2\xd1\xbe\xad\xbe\xdf\xd3\xd0\xca\xae\xb6\xe0\xc4" -"\xea\xb5\xc4\xb7\xa2\xd5\xb9\xc0\xfa\xca\xb7\xa3\xac\xb3\xc9\xca" -"\xec\xc7\xd2\xce\xc8\xb6\xa8\xa1\xa3\xd5\xe2\xd6\xd6\xd3\xef\xd1" -"\xd4\xbe\xdf\xd3\xd0\xb7\xc7\xb3\xa3\xbc\xf2\xbd\xdd\xb6\xf8\xc7" -"\xe5\xce\xfa\x0a\xb5\xc4\xd3\xef\xb7\xa8\xcc\xd8\xb5\xe3\xa3\xac" -"\xca\xca\xba\xcf\xcd\xea\xb3\xc9\xb8\xf7\xd6\xd6\xb8\xdf\xb2\xe3" -"\xc8\xce\xce\xf1\xa3\xac\xbc\xb8\xba\xf5\xbf\xc9\xd2\xd4\xd4\xda" -"\xcb\xf9\xd3\xd0\xb5\xc4\xb2\xd9\xd7\xf7\xcf\xb5\xcd\xb3\xd6\xd0" -"\x0a\xd4\xcb\xd0\xd0\xa1\xa3\xd5\xe2\xd6\xd6\xd3\xef\xd1\xd4\xbc" -"\xf2\xb5\xa5\xb6\xf8\xc7\xbf\xb4\xf3\xa3\xac\xca\xca\xba\xcf\xb8" -"\xf7\xd6\xd6\xc8\xcb\xca\xbf\xd1\xa7\xcf\xb0\xca\xb9\xd3\xc3\xa1" -"\xa3\xc4\xbf\xc7\xb0\xa3\xac\xbb\xf9\xd3\xda\xd5\xe2\x0a\xd6\xd6" -"\xd3\xef\xd1\xd4\xb5\xc4\xcf\xe0\xb9\xd8\xbc\xbc\xca\xf5\xd5\xfd" -"\xd4\xda\xb7\xc9\xcb\xd9\xb5\xc4\xb7\xa2\xd5\xb9\xa3\xac\xd3\xc3" -"\xbb\xa7\xca\xfd\xc1\xbf\xbc\xb1\xbe\xe7\xc0\xa9\xb4\xf3\xa3\xac" 
-"\xcf\xe0\xb9\xd8\xb5\xc4\xd7\xca\xd4\xb4\xb7\xc7\xb3\xa3\xb6\xe0" -"\xa1\xa3\x0a\x0a", -"\x50\x79\x74\x68\x6f\x6e\xef\xbc\x88\xe6\xb4\xbe\xe6\xa3\xae\xef" -"\xbc\x89\xe8\xaf\xad\xe8\xa8\x80\xe6\x98\xaf\xe4\xb8\x80\xe7\xa7" -"\x8d\xe5\x8a\x9f\xe8\x83\xbd\xe5\xbc\xba\xe5\xa4\xa7\xe8\x80\x8c" -"\xe5\xae\x8c\xe5\x96\x84\xe7\x9a\x84\xe9\x80\x9a\xe7\x94\xa8\xe5" -"\x9e\x8b\xe8\xae\xa1\xe7\xae\x97\xe6\x9c\xba\xe7\xa8\x8b\xe5\xba" -"\x8f\xe8\xae\xbe\xe8\xae\xa1\xe8\xaf\xad\xe8\xa8\x80\xef\xbc\x8c" -"\x0a\xe5\xb7\xb2\xe7\xbb\x8f\xe5\x85\xb7\xe6\x9c\x89\xe5\x8d\x81" -"\xe5\xa4\x9a\xe5\xb9\xb4\xe7\x9a\x84\xe5\x8f\x91\xe5\xb1\x95\xe5" -"\x8e\x86\xe5\x8f\xb2\xef\xbc\x8c\xe6\x88\x90\xe7\x86\x9f\xe4\xb8" -"\x94\xe7\xa8\xb3\xe5\xae\x9a\xe3\x80\x82\xe8\xbf\x99\xe7\xa7\x8d" -"\xe8\xaf\xad\xe8\xa8\x80\xe5\x85\xb7\xe6\x9c\x89\xe9\x9d\x9e\xe5" -"\xb8\xb8\xe7\xae\x80\xe6\x8d\xb7\xe8\x80\x8c\xe6\xb8\x85\xe6\x99" -"\xb0\x0a\xe7\x9a\x84\xe8\xaf\xad\xe6\xb3\x95\xe7\x89\xb9\xe7\x82" -"\xb9\xef\xbc\x8c\xe9\x80\x82\xe5\x90\x88\xe5\xae\x8c\xe6\x88\x90" -"\xe5\x90\x84\xe7\xa7\x8d\xe9\xab\x98\xe5\xb1\x82\xe4\xbb\xbb\xe5" -"\x8a\xa1\xef\xbc\x8c\xe5\x87\xa0\xe4\xb9\x8e\xe5\x8f\xaf\xe4\xbb" -"\xa5\xe5\x9c\xa8\xe6\x89\x80\xe6\x9c\x89\xe7\x9a\x84\xe6\x93\x8d" -"\xe4\xbd\x9c\xe7\xb3\xbb\xe7\xbb\x9f\xe4\xb8\xad\x0a\xe8\xbf\x90" -"\xe8\xa1\x8c\xe3\x80\x82\xe8\xbf\x99\xe7\xa7\x8d\xe8\xaf\xad\xe8" -"\xa8\x80\xe7\xae\x80\xe5\x8d\x95\xe8\x80\x8c\xe5\xbc\xba\xe5\xa4" -"\xa7\xef\xbc\x8c\xe9\x80\x82\xe5\x90\x88\xe5\x90\x84\xe7\xa7\x8d" -"\xe4\xba\xba\xe5\xa3\xab\xe5\xad\xa6\xe4\xb9\xa0\xe4\xbd\xbf\xe7" -"\x94\xa8\xe3\x80\x82\xe7\x9b\xae\xe5\x89\x8d\xef\xbc\x8c\xe5\x9f" -"\xba\xe4\xba\x8e\xe8\xbf\x99\x0a\xe7\xa7\x8d\xe8\xaf\xad\xe8\xa8" -"\x80\xe7\x9a\x84\xe7\x9b\xb8\xe5\x85\xb3\xe6\x8a\x80\xe6\x9c\xaf" -"\xe6\xad\xa3\xe5\x9c\xa8\xe9\xa3\x9e\xe9\x80\x9f\xe7\x9a\x84\xe5" -"\x8f\x91\xe5\xb1\x95\xef\xbc\x8c\xe7\x94\xa8\xe6\x88\xb7\xe6\x95" -"\xb0\xe9\x87\x8f\xe6\x80\xa5\xe5\x89\xa7\xe6\x89\xa9\xe5\xa4\xa7" -"\xef\xbc\x8c\xe7\x9b\xb8\xe5\x85\xb3\xe7\x9a\x84\xe8\xb5\x84\xe6" -"\xba\x90\xe9\x9d\x9e\xe5\xb8\xb8\xe5\xa4\x9a\xe3\x80\x82\x0a\x0a"), -'gbk': ( -"\x50\x79\x74\x68\x6f\x6e\xa3\xa8\xc5\xc9\xc9\xad\xa3\xa9\xd3\xef" -"\xd1\xd4\xca\xc7\xd2\xbb\xd6\xd6\xb9\xa6\xc4\xdc\xc7\xbf\xb4\xf3" -"\xb6\xf8\xcd\xea\xc9\xc6\xb5\xc4\xcd\xa8\xd3\xc3\xd0\xcd\xbc\xc6" -"\xcb\xe3\xbb\xfa\xb3\xcc\xd0\xf2\xc9\xe8\xbc\xc6\xd3\xef\xd1\xd4" -"\xa3\xac\x0a\xd2\xd1\xbe\xad\xbe\xdf\xd3\xd0\xca\xae\xb6\xe0\xc4" -"\xea\xb5\xc4\xb7\xa2\xd5\xb9\xc0\xfa\xca\xb7\xa3\xac\xb3\xc9\xca" -"\xec\xc7\xd2\xce\xc8\xb6\xa8\xa1\xa3\xd5\xe2\xd6\xd6\xd3\xef\xd1" -"\xd4\xbe\xdf\xd3\xd0\xb7\xc7\xb3\xa3\xbc\xf2\xbd\xdd\xb6\xf8\xc7" -"\xe5\xce\xfa\x0a\xb5\xc4\xd3\xef\xb7\xa8\xcc\xd8\xb5\xe3\xa3\xac" -"\xca\xca\xba\xcf\xcd\xea\xb3\xc9\xb8\xf7\xd6\xd6\xb8\xdf\xb2\xe3" -"\xc8\xce\xce\xf1\xa3\xac\xbc\xb8\xba\xf5\xbf\xc9\xd2\xd4\xd4\xda" -"\xcb\xf9\xd3\xd0\xb5\xc4\xb2\xd9\xd7\xf7\xcf\xb5\xcd\xb3\xd6\xd0" -"\x0a\xd4\xcb\xd0\xd0\xa1\xa3\xd5\xe2\xd6\xd6\xd3\xef\xd1\xd4\xbc" -"\xf2\xb5\xa5\xb6\xf8\xc7\xbf\xb4\xf3\xa3\xac\xca\xca\xba\xcf\xb8" -"\xf7\xd6\xd6\xc8\xcb\xca\xbf\xd1\xa7\xcf\xb0\xca\xb9\xd3\xc3\xa1" -"\xa3\xc4\xbf\xc7\xb0\xa3\xac\xbb\xf9\xd3\xda\xd5\xe2\x0a\xd6\xd6" -"\xd3\xef\xd1\xd4\xb5\xc4\xcf\xe0\xb9\xd8\xbc\xbc\xca\xf5\xd5\xfd" -"\xd4\xda\xb7\xc9\xcb\xd9\xb5\xc4\xb7\xa2\xd5\xb9\xa3\xac\xd3\xc3" -"\xbb\xa7\xca\xfd\xc1\xbf\xbc\xb1\xbe\xe7\xc0\xa9\xb4\xf3\xa3\xac" -"\xcf\xe0\xb9\xd8\xb5\xc4\xd7\xca\xd4\xb4\xb7\xc7\xb3\xa3\xb6\xe0" 
-"\xa1\xa3\x0a\xc8\xe7\xba\xce\xd4\xda\x20\x50\x79\x74\x68\x6f\x6e" -"\x20\xd6\xd0\xca\xb9\xd3\xc3\xbc\xc8\xd3\xd0\xb5\xc4\x20\x43\x20" -"\x6c\x69\x62\x72\x61\x72\x79\x3f\x0a\xa1\xa1\xd4\xda\xd9\x59\xd3" -"\x8d\xbf\xc6\xbc\xbc\xbf\xec\xcb\xd9\xb0\x6c\xd5\xb9\xb5\xc4\xbd" -"\xf1\xcc\xec\x2c\x20\xe9\x5f\xb0\x6c\xbc\xb0\x9c\x79\xd4\x87\xdc" -"\x9b\xf3\x77\xb5\xc4\xcb\xd9\xb6\xc8\xca\xc7\xb2\xbb\xc8\xdd\xba" -"\xf6\xd2\x95\xb5\xc4\x0a\xd5\x6e\xee\x7d\x2e\x20\x9e\xe9\xbc\xd3" -"\xbf\xec\xe9\x5f\xb0\x6c\xbc\xb0\x9c\x79\xd4\x87\xb5\xc4\xcb\xd9" -"\xb6\xc8\x2c\x20\xce\xd2\x82\x83\xb1\xe3\xb3\xa3\xcf\xa3\xcd\xfb" -"\xc4\xdc\xc0\xfb\xd3\xc3\xd2\xbb\xd0\xa9\xd2\xd1\xe9\x5f\xb0\x6c" -"\xba\xc3\xb5\xc4\x0a\x6c\x69\x62\x72\x61\x72\x79\x2c\x20\x81\x4b" -"\xd3\xd0\xd2\xbb\x82\x80\x20\x66\x61\x73\x74\x20\x70\x72\x6f\x74" -"\x6f\x74\x79\x70\x69\x6e\x67\x20\xb5\xc4\x20\x70\x72\x6f\x67\x72" -"\x61\x6d\x6d\x69\x6e\x67\x20\x6c\x61\x6e\x67\x75\x61\x67\x65\x20" -"\xbf\xc9\x0a\xb9\xa9\xca\xb9\xd3\xc3\x2e\x20\xc4\xbf\xc7\xb0\xd3" -"\xd0\xd4\x53\xd4\x53\xb6\xe0\xb6\xe0\xb5\xc4\x20\x6c\x69\x62\x72" -"\x61\x72\x79\x20\xca\xc7\xd2\xd4\x20\x43\x20\x8c\x91\xb3\xc9\x2c" -"\x20\xb6\xf8\x20\x50\x79\x74\x68\x6f\x6e\x20\xca\xc7\xd2\xbb\x82" -"\x80\x0a\x66\x61\x73\x74\x20\x70\x72\x6f\x74\x6f\x74\x79\x70\x69" -"\x6e\x67\x20\xb5\xc4\x20\x70\x72\x6f\x67\x72\x61\x6d\x6d\x69\x6e" -"\x67\x20\x6c\x61\x6e\x67\x75\x61\x67\x65\x2e\x20\xb9\xca\xce\xd2" -"\x82\x83\xcf\xa3\xcd\xfb\xc4\xdc\x8c\xa2\xbc\xc8\xd3\xd0\xb5\xc4" -"\x0a\x43\x20\x6c\x69\x62\x72\x61\x72\x79\x20\xc4\xc3\xb5\xbd\x20" -"\x50\x79\x74\x68\x6f\x6e\x20\xb5\xc4\xad\x68\xbe\xb3\xd6\xd0\x9c" -"\x79\xd4\x87\xbc\xb0\xd5\xfb\xba\xcf\x2e\x20\xc6\xe4\xd6\xd0\xd7" -"\xee\xd6\xf7\xd2\xaa\xd2\xb2\xca\xc7\xce\xd2\x82\x83\xcb\xf9\x0a" -"\xd2\xaa\xd3\x91\xd5\x93\xb5\xc4\x86\x96\xee\x7d\xbe\xcd\xca\xc7" -"\x3a\x0a\x0a", -"\x50\x79\x74\x68\x6f\x6e\xef\xbc\x88\xe6\xb4\xbe\xe6\xa3\xae\xef" -"\xbc\x89\xe8\xaf\xad\xe8\xa8\x80\xe6\x98\xaf\xe4\xb8\x80\xe7\xa7" -"\x8d\xe5\x8a\x9f\xe8\x83\xbd\xe5\xbc\xba\xe5\xa4\xa7\xe8\x80\x8c" -"\xe5\xae\x8c\xe5\x96\x84\xe7\x9a\x84\xe9\x80\x9a\xe7\x94\xa8\xe5" -"\x9e\x8b\xe8\xae\xa1\xe7\xae\x97\xe6\x9c\xba\xe7\xa8\x8b\xe5\xba" -"\x8f\xe8\xae\xbe\xe8\xae\xa1\xe8\xaf\xad\xe8\xa8\x80\xef\xbc\x8c" -"\x0a\xe5\xb7\xb2\xe7\xbb\x8f\xe5\x85\xb7\xe6\x9c\x89\xe5\x8d\x81" -"\xe5\xa4\x9a\xe5\xb9\xb4\xe7\x9a\x84\xe5\x8f\x91\xe5\xb1\x95\xe5" -"\x8e\x86\xe5\x8f\xb2\xef\xbc\x8c\xe6\x88\x90\xe7\x86\x9f\xe4\xb8" -"\x94\xe7\xa8\xb3\xe5\xae\x9a\xe3\x80\x82\xe8\xbf\x99\xe7\xa7\x8d" -"\xe8\xaf\xad\xe8\xa8\x80\xe5\x85\xb7\xe6\x9c\x89\xe9\x9d\x9e\xe5" -"\xb8\xb8\xe7\xae\x80\xe6\x8d\xb7\xe8\x80\x8c\xe6\xb8\x85\xe6\x99" -"\xb0\x0a\xe7\x9a\x84\xe8\xaf\xad\xe6\xb3\x95\xe7\x89\xb9\xe7\x82" -"\xb9\xef\xbc\x8c\xe9\x80\x82\xe5\x90\x88\xe5\xae\x8c\xe6\x88\x90" -"\xe5\x90\x84\xe7\xa7\x8d\xe9\xab\x98\xe5\xb1\x82\xe4\xbb\xbb\xe5" -"\x8a\xa1\xef\xbc\x8c\xe5\x87\xa0\xe4\xb9\x8e\xe5\x8f\xaf\xe4\xbb" -"\xa5\xe5\x9c\xa8\xe6\x89\x80\xe6\x9c\x89\xe7\x9a\x84\xe6\x93\x8d" -"\xe4\xbd\x9c\xe7\xb3\xbb\xe7\xbb\x9f\xe4\xb8\xad\x0a\xe8\xbf\x90" -"\xe8\xa1\x8c\xe3\x80\x82\xe8\xbf\x99\xe7\xa7\x8d\xe8\xaf\xad\xe8" -"\xa8\x80\xe7\xae\x80\xe5\x8d\x95\xe8\x80\x8c\xe5\xbc\xba\xe5\xa4" -"\xa7\xef\xbc\x8c\xe9\x80\x82\xe5\x90\x88\xe5\x90\x84\xe7\xa7\x8d" -"\xe4\xba\xba\xe5\xa3\xab\xe5\xad\xa6\xe4\xb9\xa0\xe4\xbd\xbf\xe7" -"\x94\xa8\xe3\x80\x82\xe7\x9b\xae\xe5\x89\x8d\xef\xbc\x8c\xe5\x9f" -"\xba\xe4\xba\x8e\xe8\xbf\x99\x0a\xe7\xa7\x8d\xe8\xaf\xad\xe8\xa8" -"\x80\xe7\x9a\x84\xe7\x9b\xb8\xe5\x85\xb3\xe6\x8a\x80\xe6\x9c\xaf" 
-"\xe6\xad\xa3\xe5\x9c\xa8\xe9\xa3\x9e\xe9\x80\x9f\xe7\x9a\x84\xe5" -"\x8f\x91\xe5\xb1\x95\xef\xbc\x8c\xe7\x94\xa8\xe6\x88\xb7\xe6\x95" -"\xb0\xe9\x87\x8f\xe6\x80\xa5\xe5\x89\xa7\xe6\x89\xa9\xe5\xa4\xa7" -"\xef\xbc\x8c\xe7\x9b\xb8\xe5\x85\xb3\xe7\x9a\x84\xe8\xb5\x84\xe6" -"\xba\x90\xe9\x9d\x9e\xe5\xb8\xb8\xe5\xa4\x9a\xe3\x80\x82\x0a\xe5" -"\xa6\x82\xe4\xbd\x95\xe5\x9c\xa8\x20\x50\x79\x74\x68\x6f\x6e\x20" -"\xe4\xb8\xad\xe4\xbd\xbf\xe7\x94\xa8\xe6\x97\xa2\xe6\x9c\x89\xe7" -"\x9a\x84\x20\x43\x20\x6c\x69\x62\x72\x61\x72\x79\x3f\x0a\xe3\x80" -"\x80\xe5\x9c\xa8\xe8\xb3\x87\xe8\xa8\x8a\xe7\xa7\x91\xe6\x8a\x80" -"\xe5\xbf\xab\xe9\x80\x9f\xe7\x99\xbc\xe5\xb1\x95\xe7\x9a\x84\xe4" -"\xbb\x8a\xe5\xa4\xa9\x2c\x20\xe9\x96\x8b\xe7\x99\xbc\xe5\x8f\x8a" -"\xe6\xb8\xac\xe8\xa9\xa6\xe8\xbb\x9f\xe9\xab\x94\xe7\x9a\x84\xe9" -"\x80\x9f\xe5\xba\xa6\xe6\x98\xaf\xe4\xb8\x8d\xe5\xae\xb9\xe5\xbf" -"\xbd\xe8\xa6\x96\xe7\x9a\x84\x0a\xe8\xaa\xb2\xe9\xa1\x8c\x2e\x20" -"\xe7\x82\xba\xe5\x8a\xa0\xe5\xbf\xab\xe9\x96\x8b\xe7\x99\xbc\xe5" -"\x8f\x8a\xe6\xb8\xac\xe8\xa9\xa6\xe7\x9a\x84\xe9\x80\x9f\xe5\xba" -"\xa6\x2c\x20\xe6\x88\x91\xe5\x80\x91\xe4\xbe\xbf\xe5\xb8\xb8\xe5" -"\xb8\x8c\xe6\x9c\x9b\xe8\x83\xbd\xe5\x88\xa9\xe7\x94\xa8\xe4\xb8" -"\x80\xe4\xba\x9b\xe5\xb7\xb2\xe9\x96\x8b\xe7\x99\xbc\xe5\xa5\xbd" -"\xe7\x9a\x84\x0a\x6c\x69\x62\x72\x61\x72\x79\x2c\x20\xe4\xb8\xa6" -"\xe6\x9c\x89\xe4\xb8\x80\xe5\x80\x8b\x20\x66\x61\x73\x74\x20\x70" -"\x72\x6f\x74\x6f\x74\x79\x70\x69\x6e\x67\x20\xe7\x9a\x84\x20\x70" -"\x72\x6f\x67\x72\x61\x6d\x6d\x69\x6e\x67\x20\x6c\x61\x6e\x67\x75" -"\x61\x67\x65\x20\xe5\x8f\xaf\x0a\xe4\xbe\x9b\xe4\xbd\xbf\xe7\x94" -"\xa8\x2e\x20\xe7\x9b\xae\xe5\x89\x8d\xe6\x9c\x89\xe8\xa8\xb1\xe8" -"\xa8\xb1\xe5\xa4\x9a\xe5\xa4\x9a\xe7\x9a\x84\x20\x6c\x69\x62\x72" -"\x61\x72\x79\x20\xe6\x98\xaf\xe4\xbb\xa5\x20\x43\x20\xe5\xaf\xab" -"\xe6\x88\x90\x2c\x20\xe8\x80\x8c\x20\x50\x79\x74\x68\x6f\x6e\x20" -"\xe6\x98\xaf\xe4\xb8\x80\xe5\x80\x8b\x0a\x66\x61\x73\x74\x20\x70" -"\x72\x6f\x74\x6f\x74\x79\x70\x69\x6e\x67\x20\xe7\x9a\x84\x20\x70" -"\x72\x6f\x67\x72\x61\x6d\x6d\x69\x6e\x67\x20\x6c\x61\x6e\x67\x75" -"\x61\x67\x65\x2e\x20\xe6\x95\x85\xe6\x88\x91\xe5\x80\x91\xe5\xb8" -"\x8c\xe6\x9c\x9b\xe8\x83\xbd\xe5\xb0\x87\xe6\x97\xa2\xe6\x9c\x89" -"\xe7\x9a\x84\x0a\x43\x20\x6c\x69\x62\x72\x61\x72\x79\x20\xe6\x8b" -"\xbf\xe5\x88\xb0\x20\x50\x79\x74\x68\x6f\x6e\x20\xe7\x9a\x84\xe7" -"\x92\xb0\xe5\xa2\x83\xe4\xb8\xad\xe6\xb8\xac\xe8\xa9\xa6\xe5\x8f" -"\x8a\xe6\x95\xb4\xe5\x90\x88\x2e\x20\xe5\x85\xb6\xe4\xb8\xad\xe6" -"\x9c\x80\xe4\xb8\xbb\xe8\xa6\x81\xe4\xb9\x9f\xe6\x98\xaf\xe6\x88" -"\x91\xe5\x80\x91\xe6\x89\x80\x0a\xe8\xa6\x81\xe8\xa8\x8e\xe8\xab" -"\x96\xe7\x9a\x84\xe5\x95\x8f\xe9\xa1\x8c\xe5\xb0\xb1\xe6\x98\xaf" -"\x3a\x0a\x0a"), -'johab': ( -"\x99\xb1\xa4\x77\x88\x62\xd0\x61\x20\xcd\x5c\xaf\xa1\xc5\xa9\x9c" -"\x61\x0a\x0a\xdc\xc0\xdc\xc0\x90\x73\x21\x21\x20\xf1\x67\xe2\x9c" -"\xf0\x55\xcc\x81\xa3\x89\x9f\x85\x8a\xa1\x20\xdc\xde\xdc\xd3\xd2" -"\x7a\xd9\xaf\xd9\xaf\xd9\xaf\x20\x8b\x77\x96\xd3\x20\xdc\xd1\x95" -"\x81\x20\xdc\xc0\x2e\x20\x2e\x0a\xed\x3c\xb5\x77\xdc\xd1\x93\x77" -"\xd2\x73\x20\x2e\x20\x2e\x20\x2e\x20\x2e\x20\xac\xe1\xb6\x89\x9e" -"\xa1\x20\x95\x65\xd0\x62\xf0\xe0\x20\xe0\x3b\xd2\x7a\x20\x21\x20" -"\x21\x20\x21\x87\x41\x2e\x87\x41\x0a\xd3\x61\xd3\x61\xd3\x61\x20" -"\x88\x41\x88\x41\x88\x41\xd9\x69\x87\x41\x5f\x87\x41\x20\xb4\xe1" -"\x9f\x9a\x20\xc8\xa1\xc5\xc1\x8b\x7a\x20\x95\x61\xb7\x77\x20\xc3" -"\x97\xe2\x9c\x97\x69\xf0\xe0\x20\xdc\xc0\x97\x61\x8b\x7a\x0a\xac" 
-"\xe9\x9f\x7a\x20\xe0\x3b\xd2\x7a\x20\x2e\x20\x2e\x20\x2e\x20\x2e" -"\x20\x8a\x89\xb4\x81\xae\xba\x20\xdc\xd1\x8a\xa1\x20\xdc\xde\x9f" -"\x89\xdc\xc2\x8b\x7a\x20\xf1\x67\xf1\x62\xf5\x49\xed\xfc\xf3\xe9" -"\x8c\x61\xbb\x9a\x0a\xb5\xc1\xb2\xa1\xd2\x7a\x20\x21\x20\x21\x20" -"\xed\x3c\xb5\x77\xdc\xd1\x20\xe0\x3b\x93\x77\x8a\xa1\x20\xd9\x69" -"\xea\xbe\x89\xc5\x20\xb4\xf4\x93\x77\x8a\xa1\x93\x77\x20\xed\x3c" -"\x93\x77\x96\xc1\xd2\x7a\x20\x8b\x69\xb4\x81\x97\x7a\x0a\xdc\xde" -"\x9d\x61\x97\x41\xe2\x9c\x20\xaf\x81\xce\xa1\xae\xa1\xd2\x7a\x20" -"\xb4\xe1\x9f\x9a\x20\xf1\x67\xf1\x62\xf5\x49\xed\xfc\xf3\xe9\xaf" -"\x82\xdc\xef\x97\x69\xb4\x7a\x21\x21\x20\xdc\xc0\xdc\xc0\x90\x73" -"\xd9\xbd\x20\xd9\x62\xd9\x62\x2a\x0a\x0a", -"\xeb\x98\xa0\xeb\xb0\xa9\xea\xb0\x81\xed\x95\x98\x20\xed\x8e\xb2" -"\xec\x8b\x9c\xec\xbd\x9c\xeb\x9d\xbc\x0a\x0a\xe3\x89\xaf\xe3\x89" -"\xaf\xeb\x82\xa9\x21\x21\x20\xe5\x9b\xa0\xe4\xb9\x9d\xe6\x9c\x88" -"\xed\x8c\xa8\xeb\xaf\xa4\xeb\xa6\x94\xea\xb6\x88\x20\xe2\x93\xa1" -"\xe2\x93\x96\xed\x9b\x80\xc2\xbf\xc2\xbf\xc2\xbf\x20\xea\xb8\x8d" -"\xeb\x92\x99\x20\xe2\x93\x94\xeb\x8e\xa8\x20\xe3\x89\xaf\x2e\x20" -"\x2e\x0a\xe4\xba\x9e\xec\x98\x81\xe2\x93\x94\xeb\x8a\xa5\xed\x9a" -"\xb9\x20\x2e\x20\x2e\x20\x2e\x20\x2e\x20\xec\x84\x9c\xec\x9a\xb8" -"\xeb\xa4\x84\x20\xeb\x8e\x90\xed\x95\x99\xe4\xb9\x99\x20\xe5\xae" -"\xb6\xed\x9b\x80\x20\x21\x20\x21\x20\x21\xe3\x85\xa0\x2e\xe3\x85" -"\xa0\x0a\xed\x9d\x90\xed\x9d\x90\xed\x9d\x90\x20\xe3\x84\xb1\xe3" -"\x84\xb1\xe3\x84\xb1\xe2\x98\x86\xe3\x85\xa0\x5f\xe3\x85\xa0\x20" -"\xec\x96\xb4\xeb\xa6\xa8\x20\xed\x83\xb8\xec\xbd\xb0\xea\xb8\x90" -"\x20\xeb\x8e\x8c\xec\x9d\x91\x20\xec\xb9\x91\xe4\xb9\x9d\xeb\x93" -"\xa4\xe4\xb9\x99\x20\xe3\x89\xaf\xeb\x93\x9c\xea\xb8\x90\x0a\xec" -"\x84\xa4\xeb\xa6\x8c\x20\xe5\xae\xb6\xed\x9b\x80\x20\x2e\x20\x2e" -"\x20\x2e\x20\x2e\x20\xea\xb5\xb4\xec\x95\xa0\xec\x89\x8c\x20\xe2" -"\x93\x94\xea\xb6\x88\x20\xe2\x93\xa1\xeb\xa6\x98\xe3\x89\xb1\xea" -"\xb8\x90\x20\xe5\x9b\xa0\xe4\xbb\x81\xe5\xb7\x9d\xef\xa6\x81\xe4" -"\xb8\xad\xea\xb9\x8c\xec\xa6\xbc\x0a\xec\x99\x80\xec\x92\x80\xed" -"\x9b\x80\x20\x21\x20\x21\x20\xe4\xba\x9e\xec\x98\x81\xe2\x93\x94" -"\x20\xe5\xae\xb6\xeb\x8a\xa5\xea\xb6\x88\x20\xe2\x98\x86\xe4\xb8" -"\x8a\xea\xb4\x80\x20\xec\x97\x86\xeb\x8a\xa5\xea\xb6\x88\xeb\x8a" -"\xa5\x20\xe4\xba\x9e\xeb\x8a\xa5\xeb\x92\x88\xed\x9b\x80\x20\xea" -"\xb8\x80\xec\x95\xa0\xeb\x93\xb4\x0a\xe2\x93\xa1\xeb\xa0\xa4\xeb" -"\x93\x80\xe4\xb9\x9d\x20\xec\x8b\x80\xed\x92\x94\xec\x88\xb4\xed" -"\x9b\x80\x20\xec\x96\xb4\xeb\xa6\xa8\x20\xe5\x9b\xa0\xe4\xbb\x81" -"\xe5\xb7\x9d\xef\xa6\x81\xe4\xb8\xad\xec\x8b\x81\xe2\x91\xa8\xeb" -"\x93\xa4\xec\x95\x9c\x21\x21\x20\xe3\x89\xaf\xe3\x89\xaf\xeb\x82" -"\xa9\xe2\x99\xa1\x20\xe2\x8c\x92\xe2\x8c\x92\x2a\x0a\x0a"), -'shift_jis': ( -"\x50\x79\x74\x68\x6f\x6e\x20\x82\xcc\x8a\x4a\x94\xad\x82\xcd\x81" -"\x41\x31\x39\x39\x30\x20\x94\x4e\x82\xb2\x82\xeb\x82\xa9\x82\xe7" -"\x8a\x4a\x8e\x6e\x82\xb3\x82\xea\x82\xc4\x82\xa2\x82\xdc\x82\xb7" -"\x81\x42\x0a\x8a\x4a\x94\xad\x8e\xd2\x82\xcc\x20\x47\x75\x69\x64" -"\x6f\x20\x76\x61\x6e\x20\x52\x6f\x73\x73\x75\x6d\x20\x82\xcd\x8b" -"\xb3\x88\xe7\x97\x70\x82\xcc\x83\x76\x83\x8d\x83\x4f\x83\x89\x83" -"\x7e\x83\x93\x83\x4f\x8c\xbe\x8c\xea\x81\x75\x41\x42\x43\x81\x76" -"\x82\xcc\x8a\x4a\x94\xad\x82\xc9\x8e\x51\x89\xc1\x82\xb5\x82\xc4" -"\x82\xa2\x82\xdc\x82\xb5\x82\xbd\x82\xaa\x81\x41\x41\x42\x43\x20" -"\x82\xcd\x8e\xc0\x97\x70\x8f\xe3\x82\xcc\x96\xda\x93\x49\x82\xc9" -"\x82\xcd\x82\xa0\x82\xdc\x82\xe8\x93\x4b\x82\xb5\x82\xc4\x82\xa2" 
-"\x82\xdc\x82\xb9\x82\xf1\x82\xc5\x82\xb5\x82\xbd\x81\x42\x0a\x82" -"\xb1\x82\xcc\x82\xbd\x82\xdf\x81\x41\x47\x75\x69\x64\x6f\x20\x82" -"\xcd\x82\xe6\x82\xe8\x8e\xc0\x97\x70\x93\x49\x82\xc8\x83\x76\x83" -"\x8d\x83\x4f\x83\x89\x83\x7e\x83\x93\x83\x4f\x8c\xbe\x8c\xea\x82" -"\xcc\x8a\x4a\x94\xad\x82\xf0\x8a\x4a\x8e\x6e\x82\xb5\x81\x41\x89" -"\x70\x8d\x91\x20\x42\x42\x53\x20\x95\xfa\x91\x97\x82\xcc\x83\x52" -"\x83\x81\x83\x66\x83\x42\x94\xd4\x91\x67\x81\x75\x83\x82\x83\x93" -"\x83\x65\x83\x42\x20\x83\x70\x83\x43\x83\x5c\x83\x93\x81\x76\x82" -"\xcc\x83\x74\x83\x40\x83\x93\x82\xc5\x82\xa0\x82\xe9\x20\x47\x75" -"\x69\x64\x6f\x20\x82\xcd\x82\xb1\x82\xcc\x8c\xbe\x8c\xea\x82\xf0" -"\x81\x75\x50\x79\x74\x68\x6f\x6e\x81\x76\x82\xc6\x96\xbc\x82\xc3" -"\x82\xaf\x82\xdc\x82\xb5\x82\xbd\x81\x42\x0a\x82\xb1\x82\xcc\x82" -"\xe6\x82\xa4\x82\xc8\x94\x77\x8c\x69\x82\xa9\x82\xe7\x90\xb6\x82" -"\xdc\x82\xea\x82\xbd\x20\x50\x79\x74\x68\x6f\x6e\x20\x82\xcc\x8c" -"\xbe\x8c\xea\x90\xdd\x8c\x76\x82\xcd\x81\x41\x81\x75\x83\x56\x83" -"\x93\x83\x76\x83\x8b\x81\x76\x82\xc5\x81\x75\x8f\x4b\x93\xbe\x82" -"\xaa\x97\x65\x88\xd5\x81\x76\x82\xc6\x82\xa2\x82\xa4\x96\xda\x95" -"\x57\x82\xc9\x8f\x64\x93\x5f\x82\xaa\x92\x75\x82\xa9\x82\xea\x82" -"\xc4\x82\xa2\x82\xdc\x82\xb7\x81\x42\x0a\x91\xbd\x82\xad\x82\xcc" -"\x83\x58\x83\x4e\x83\x8a\x83\x76\x83\x67\x8c\x6e\x8c\xbe\x8c\xea" -"\x82\xc5\x82\xcd\x83\x86\x81\x5b\x83\x55\x82\xcc\x96\xda\x90\xe6" -"\x82\xcc\x97\x98\x95\xd6\x90\xab\x82\xf0\x97\x44\x90\xe6\x82\xb5" -"\x82\xc4\x90\x46\x81\x58\x82\xc8\x8b\x40\x94\x5c\x82\xf0\x8c\xbe" -"\x8c\xea\x97\x76\x91\x66\x82\xc6\x82\xb5\x82\xc4\x8e\xe6\x82\xe8" -"\x93\xfc\x82\xea\x82\xe9\x8f\xea\x8d\x87\x82\xaa\x91\xbd\x82\xa2" -"\x82\xcc\x82\xc5\x82\xb7\x82\xaa\x81\x41\x50\x79\x74\x68\x6f\x6e" -"\x20\x82\xc5\x82\xcd\x82\xbb\x82\xa4\x82\xa2\x82\xc1\x82\xbd\x8f" -"\xac\x8d\xd7\x8d\x48\x82\xaa\x92\xc7\x89\xc1\x82\xb3\x82\xea\x82" -"\xe9\x82\xb1\x82\xc6\x82\xcd\x82\xa0\x82\xdc\x82\xe8\x82\xa0\x82" -"\xe8\x82\xdc\x82\xb9\x82\xf1\x81\x42\x0a\x8c\xbe\x8c\xea\x8e\xa9" -"\x91\xcc\x82\xcc\x8b\x40\x94\x5c\x82\xcd\x8d\xc5\x8f\xac\x8c\xc0" -"\x82\xc9\x89\x9f\x82\xb3\x82\xa6\x81\x41\x95\x4b\x97\x76\x82\xc8" -"\x8b\x40\x94\x5c\x82\xcd\x8a\x67\x92\xa3\x83\x82\x83\x57\x83\x85" -"\x81\x5b\x83\x8b\x82\xc6\x82\xb5\x82\xc4\x92\xc7\x89\xc1\x82\xb7" -"\x82\xe9\x81\x41\x82\xc6\x82\xa2\x82\xa4\x82\xcc\x82\xaa\x20\x50" -"\x79\x74\x68\x6f\x6e\x20\x82\xcc\x83\x7c\x83\x8a\x83\x56\x81\x5b" -"\x82\xc5\x82\xb7\x81\x42\x0a\x0a", -"\x50\x79\x74\x68\x6f\x6e\x20\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba" -"\xe3\x81\xaf\xe3\x80\x81\x31\x39\x39\x30\x20\xe5\xb9\xb4\xe3\x81" -"\x94\xe3\x82\x8d\xe3\x81\x8b\xe3\x82\x89\xe9\x96\x8b\xe5\xa7\x8b" -"\xe3\x81\x95\xe3\x82\x8c\xe3\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3" -"\x81\x99\xe3\x80\x82\x0a\xe9\x96\x8b\xe7\x99\xba\xe8\x80\x85\xe3" -"\x81\xae\x20\x47\x75\x69\x64\x6f\x20\x76\x61\x6e\x20\x52\x6f\x73" -"\x73\x75\x6d\x20\xe3\x81\xaf\xe6\x95\x99\xe8\x82\xb2\xe7\x94\xa8" -"\xe3\x81\xae\xe3\x83\x97\xe3\x83\xad\xe3\x82\xb0\xe3\x83\xa9\xe3" -"\x83\x9f\xe3\x83\xb3\xe3\x82\xb0\xe8\xa8\x80\xe8\xaa\x9e\xe3\x80" -"\x8c\x41\x42\x43\xe3\x80\x8d\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba" -"\xe3\x81\xab\xe5\x8f\x82\xe5\x8a\xa0\xe3\x81\x97\xe3\x81\xa6\xe3" -"\x81\x84\xe3\x81\xbe\xe3\x81\x97\xe3\x81\x9f\xe3\x81\x8c\xe3\x80" -"\x81\x41\x42\x43\x20\xe3\x81\xaf\xe5\xae\x9f\xe7\x94\xa8\xe4\xb8" -"\x8a\xe3\x81\xae\xe7\x9b\xae\xe7\x9a\x84\xe3\x81\xab\xe3\x81\xaf" -"\xe3\x81\x82\xe3\x81\xbe\xe3\x82\x8a\xe9\x81\xa9\xe3\x81\x97\xe3" 
-"\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3\x81\x9b\xe3\x82\x93\xe3\x81" -"\xa7\xe3\x81\x97\xe3\x81\x9f\xe3\x80\x82\x0a\xe3\x81\x93\xe3\x81" -"\xae\xe3\x81\x9f\xe3\x82\x81\xe3\x80\x81\x47\x75\x69\x64\x6f\x20" -"\xe3\x81\xaf\xe3\x82\x88\xe3\x82\x8a\xe5\xae\x9f\xe7\x94\xa8\xe7" -"\x9a\x84\xe3\x81\xaa\xe3\x83\x97\xe3\x83\xad\xe3\x82\xb0\xe3\x83" -"\xa9\xe3\x83\x9f\xe3\x83\xb3\xe3\x82\xb0\xe8\xa8\x80\xe8\xaa\x9e" -"\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba\xe3\x82\x92\xe9\x96\x8b\xe5" -"\xa7\x8b\xe3\x81\x97\xe3\x80\x81\xe8\x8b\xb1\xe5\x9b\xbd\x20\x42" -"\x42\x53\x20\xe6\x94\xbe\xe9\x80\x81\xe3\x81\xae\xe3\x82\xb3\xe3" -"\x83\xa1\xe3\x83\x87\xe3\x82\xa3\xe7\x95\xaa\xe7\xb5\x84\xe3\x80" -"\x8c\xe3\x83\xa2\xe3\x83\xb3\xe3\x83\x86\xe3\x82\xa3\x20\xe3\x83" -"\x91\xe3\x82\xa4\xe3\x82\xbd\xe3\x83\xb3\xe3\x80\x8d\xe3\x81\xae" -"\xe3\x83\x95\xe3\x82\xa1\xe3\x83\xb3\xe3\x81\xa7\xe3\x81\x82\xe3" -"\x82\x8b\x20\x47\x75\x69\x64\x6f\x20\xe3\x81\xaf\xe3\x81\x93\xe3" -"\x81\xae\xe8\xa8\x80\xe8\xaa\x9e\xe3\x82\x92\xe3\x80\x8c\x50\x79" -"\x74\x68\x6f\x6e\xe3\x80\x8d\xe3\x81\xa8\xe5\x90\x8d\xe3\x81\xa5" -"\xe3\x81\x91\xe3\x81\xbe\xe3\x81\x97\xe3\x81\x9f\xe3\x80\x82\x0a" -"\xe3\x81\x93\xe3\x81\xae\xe3\x82\x88\xe3\x81\x86\xe3\x81\xaa\xe8" -"\x83\x8c\xe6\x99\xaf\xe3\x81\x8b\xe3\x82\x89\xe7\x94\x9f\xe3\x81" -"\xbe\xe3\x82\x8c\xe3\x81\x9f\x20\x50\x79\x74\x68\x6f\x6e\x20\xe3" -"\x81\xae\xe8\xa8\x80\xe8\xaa\x9e\xe8\xa8\xad\xe8\xa8\x88\xe3\x81" -"\xaf\xe3\x80\x81\xe3\x80\x8c\xe3\x82\xb7\xe3\x83\xb3\xe3\x83\x97" -"\xe3\x83\xab\xe3\x80\x8d\xe3\x81\xa7\xe3\x80\x8c\xe7\xbf\x92\xe5" -"\xbe\x97\xe3\x81\x8c\xe5\xae\xb9\xe6\x98\x93\xe3\x80\x8d\xe3\x81" -"\xa8\xe3\x81\x84\xe3\x81\x86\xe7\x9b\xae\xe6\xa8\x99\xe3\x81\xab" -"\xe9\x87\x8d\xe7\x82\xb9\xe3\x81\x8c\xe7\xbd\xae\xe3\x81\x8b\xe3" -"\x82\x8c\xe3\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3\x81\x99\xe3\x80" -"\x82\x0a\xe5\xa4\x9a\xe3\x81\x8f\xe3\x81\xae\xe3\x82\xb9\xe3\x82" -"\xaf\xe3\x83\xaa\xe3\x83\x97\xe3\x83\x88\xe7\xb3\xbb\xe8\xa8\x80" -"\xe8\xaa\x9e\xe3\x81\xa7\xe3\x81\xaf\xe3\x83\xa6\xe3\x83\xbc\xe3" -"\x82\xb6\xe3\x81\xae\xe7\x9b\xae\xe5\x85\x88\xe3\x81\xae\xe5\x88" -"\xa9\xe4\xbe\xbf\xe6\x80\xa7\xe3\x82\x92\xe5\x84\xaa\xe5\x85\x88" -"\xe3\x81\x97\xe3\x81\xa6\xe8\x89\xb2\xe3\x80\x85\xe3\x81\xaa\xe6" -"\xa9\x9f\xe8\x83\xbd\xe3\x82\x92\xe8\xa8\x80\xe8\xaa\x9e\xe8\xa6" -"\x81\xe7\xb4\xa0\xe3\x81\xa8\xe3\x81\x97\xe3\x81\xa6\xe5\x8f\x96" -"\xe3\x82\x8a\xe5\x85\xa5\xe3\x82\x8c\xe3\x82\x8b\xe5\xa0\xb4\xe5" -"\x90\x88\xe3\x81\x8c\xe5\xa4\x9a\xe3\x81\x84\xe3\x81\xae\xe3\x81" -"\xa7\xe3\x81\x99\xe3\x81\x8c\xe3\x80\x81\x50\x79\x74\x68\x6f\x6e" -"\x20\xe3\x81\xa7\xe3\x81\xaf\xe3\x81\x9d\xe3\x81\x86\xe3\x81\x84" -"\xe3\x81\xa3\xe3\x81\x9f\xe5\xb0\x8f\xe7\xb4\xb0\xe5\xb7\xa5\xe3" -"\x81\x8c\xe8\xbf\xbd\xe5\x8a\xa0\xe3\x81\x95\xe3\x82\x8c\xe3\x82" -"\x8b\xe3\x81\x93\xe3\x81\xa8\xe3\x81\xaf\xe3\x81\x82\xe3\x81\xbe" -"\xe3\x82\x8a\xe3\x81\x82\xe3\x82\x8a\xe3\x81\xbe\xe3\x81\x9b\xe3" -"\x82\x93\xe3\x80\x82\x0a\xe8\xa8\x80\xe8\xaa\x9e\xe8\x87\xaa\xe4" -"\xbd\x93\xe3\x81\xae\xe6\xa9\x9f\xe8\x83\xbd\xe3\x81\xaf\xe6\x9c" -"\x80\xe5\xb0\x8f\xe9\x99\x90\xe3\x81\xab\xe6\x8a\xbc\xe3\x81\x95" -"\xe3\x81\x88\xe3\x80\x81\xe5\xbf\x85\xe8\xa6\x81\xe3\x81\xaa\xe6" -"\xa9\x9f\xe8\x83\xbd\xe3\x81\xaf\xe6\x8b\xa1\xe5\xbc\xb5\xe3\x83" -"\xa2\xe3\x82\xb8\xe3\x83\xa5\xe3\x83\xbc\xe3\x83\xab\xe3\x81\xa8" -"\xe3\x81\x97\xe3\x81\xa6\xe8\xbf\xbd\xe5\x8a\xa0\xe3\x81\x99\xe3" -"\x82\x8b\xe3\x80\x81\xe3\x81\xa8\xe3\x81\x84\xe3\x81\x86\xe3\x81" -"\xae\xe3\x81\x8c\x20\x50\x79\x74\x68\x6f\x6e\x20\xe3\x81\xae\xe3" 
-"\x83\x9d\xe3\x83\xaa\xe3\x82\xb7\xe3\x83\xbc\xe3\x81\xa7\xe3\x81" -"\x99\xe3\x80\x82\x0a\x0a"), -'shift_jisx0213': ( -"\x50\x79\x74\x68\x6f\x6e\x20\x82\xcc\x8a\x4a\x94\xad\x82\xcd\x81" -"\x41\x31\x39\x39\x30\x20\x94\x4e\x82\xb2\x82\xeb\x82\xa9\x82\xe7" -"\x8a\x4a\x8e\x6e\x82\xb3\x82\xea\x82\xc4\x82\xa2\x82\xdc\x82\xb7" -"\x81\x42\x0a\x8a\x4a\x94\xad\x8e\xd2\x82\xcc\x20\x47\x75\x69\x64" -"\x6f\x20\x76\x61\x6e\x20\x52\x6f\x73\x73\x75\x6d\x20\x82\xcd\x8b" -"\xb3\x88\xe7\x97\x70\x82\xcc\x83\x76\x83\x8d\x83\x4f\x83\x89\x83" -"\x7e\x83\x93\x83\x4f\x8c\xbe\x8c\xea\x81\x75\x41\x42\x43\x81\x76" -"\x82\xcc\x8a\x4a\x94\xad\x82\xc9\x8e\x51\x89\xc1\x82\xb5\x82\xc4" -"\x82\xa2\x82\xdc\x82\xb5\x82\xbd\x82\xaa\x81\x41\x41\x42\x43\x20" -"\x82\xcd\x8e\xc0\x97\x70\x8f\xe3\x82\xcc\x96\xda\x93\x49\x82\xc9" -"\x82\xcd\x82\xa0\x82\xdc\x82\xe8\x93\x4b\x82\xb5\x82\xc4\x82\xa2" -"\x82\xdc\x82\xb9\x82\xf1\x82\xc5\x82\xb5\x82\xbd\x81\x42\x0a\x82" -"\xb1\x82\xcc\x82\xbd\x82\xdf\x81\x41\x47\x75\x69\x64\x6f\x20\x82" -"\xcd\x82\xe6\x82\xe8\x8e\xc0\x97\x70\x93\x49\x82\xc8\x83\x76\x83" -"\x8d\x83\x4f\x83\x89\x83\x7e\x83\x93\x83\x4f\x8c\xbe\x8c\xea\x82" -"\xcc\x8a\x4a\x94\xad\x82\xf0\x8a\x4a\x8e\x6e\x82\xb5\x81\x41\x89" -"\x70\x8d\x91\x20\x42\x42\x53\x20\x95\xfa\x91\x97\x82\xcc\x83\x52" -"\x83\x81\x83\x66\x83\x42\x94\xd4\x91\x67\x81\x75\x83\x82\x83\x93" -"\x83\x65\x83\x42\x20\x83\x70\x83\x43\x83\x5c\x83\x93\x81\x76\x82" -"\xcc\x83\x74\x83\x40\x83\x93\x82\xc5\x82\xa0\x82\xe9\x20\x47\x75" -"\x69\x64\x6f\x20\x82\xcd\x82\xb1\x82\xcc\x8c\xbe\x8c\xea\x82\xf0" -"\x81\x75\x50\x79\x74\x68\x6f\x6e\x81\x76\x82\xc6\x96\xbc\x82\xc3" -"\x82\xaf\x82\xdc\x82\xb5\x82\xbd\x81\x42\x0a\x82\xb1\x82\xcc\x82" -"\xe6\x82\xa4\x82\xc8\x94\x77\x8c\x69\x82\xa9\x82\xe7\x90\xb6\x82" -"\xdc\x82\xea\x82\xbd\x20\x50\x79\x74\x68\x6f\x6e\x20\x82\xcc\x8c" -"\xbe\x8c\xea\x90\xdd\x8c\x76\x82\xcd\x81\x41\x81\x75\x83\x56\x83" -"\x93\x83\x76\x83\x8b\x81\x76\x82\xc5\x81\x75\x8f\x4b\x93\xbe\x82" -"\xaa\x97\x65\x88\xd5\x81\x76\x82\xc6\x82\xa2\x82\xa4\x96\xda\x95" -"\x57\x82\xc9\x8f\x64\x93\x5f\x82\xaa\x92\x75\x82\xa9\x82\xea\x82" -"\xc4\x82\xa2\x82\xdc\x82\xb7\x81\x42\x0a\x91\xbd\x82\xad\x82\xcc" -"\x83\x58\x83\x4e\x83\x8a\x83\x76\x83\x67\x8c\x6e\x8c\xbe\x8c\xea" -"\x82\xc5\x82\xcd\x83\x86\x81\x5b\x83\x55\x82\xcc\x96\xda\x90\xe6" -"\x82\xcc\x97\x98\x95\xd6\x90\xab\x82\xf0\x97\x44\x90\xe6\x82\xb5" -"\x82\xc4\x90\x46\x81\x58\x82\xc8\x8b\x40\x94\x5c\x82\xf0\x8c\xbe" -"\x8c\xea\x97\x76\x91\x66\x82\xc6\x82\xb5\x82\xc4\x8e\xe6\x82\xe8" -"\x93\xfc\x82\xea\x82\xe9\x8f\xea\x8d\x87\x82\xaa\x91\xbd\x82\xa2" -"\x82\xcc\x82\xc5\x82\xb7\x82\xaa\x81\x41\x50\x79\x74\x68\x6f\x6e" -"\x20\x82\xc5\x82\xcd\x82\xbb\x82\xa4\x82\xa2\x82\xc1\x82\xbd\x8f" -"\xac\x8d\xd7\x8d\x48\x82\xaa\x92\xc7\x89\xc1\x82\xb3\x82\xea\x82" -"\xe9\x82\xb1\x82\xc6\x82\xcd\x82\xa0\x82\xdc\x82\xe8\x82\xa0\x82" -"\xe8\x82\xdc\x82\xb9\x82\xf1\x81\x42\x0a\x8c\xbe\x8c\xea\x8e\xa9" -"\x91\xcc\x82\xcc\x8b\x40\x94\x5c\x82\xcd\x8d\xc5\x8f\xac\x8c\xc0" -"\x82\xc9\x89\x9f\x82\xb3\x82\xa6\x81\x41\x95\x4b\x97\x76\x82\xc8" -"\x8b\x40\x94\x5c\x82\xcd\x8a\x67\x92\xa3\x83\x82\x83\x57\x83\x85" -"\x81\x5b\x83\x8b\x82\xc6\x82\xb5\x82\xc4\x92\xc7\x89\xc1\x82\xb7" -"\x82\xe9\x81\x41\x82\xc6\x82\xa2\x82\xa4\x82\xcc\x82\xaa\x20\x50" -"\x79\x74\x68\x6f\x6e\x20\x82\xcc\x83\x7c\x83\x8a\x83\x56\x81\x5b" -"\x82\xc5\x82\xb7\x81\x42\x0a\x0a\x83\x6d\x82\xf5\x20\x83\x9e\x20" -"\x83\x67\x83\x4c\x88\x4b\x88\x79\x20\x98\x83\xfc\xd6\x20\xfc\xd2" -"\xfc\xe6\xfb\xd4\x0a", -"\x50\x79\x74\x68\x6f\x6e\x20\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba" 
-"\xe3\x81\xaf\xe3\x80\x81\x31\x39\x39\x30\x20\xe5\xb9\xb4\xe3\x81" -"\x94\xe3\x82\x8d\xe3\x81\x8b\xe3\x82\x89\xe9\x96\x8b\xe5\xa7\x8b" -"\xe3\x81\x95\xe3\x82\x8c\xe3\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3" -"\x81\x99\xe3\x80\x82\x0a\xe9\x96\x8b\xe7\x99\xba\xe8\x80\x85\xe3" -"\x81\xae\x20\x47\x75\x69\x64\x6f\x20\x76\x61\x6e\x20\x52\x6f\x73" -"\x73\x75\x6d\x20\xe3\x81\xaf\xe6\x95\x99\xe8\x82\xb2\xe7\x94\xa8" -"\xe3\x81\xae\xe3\x83\x97\xe3\x83\xad\xe3\x82\xb0\xe3\x83\xa9\xe3" -"\x83\x9f\xe3\x83\xb3\xe3\x82\xb0\xe8\xa8\x80\xe8\xaa\x9e\xe3\x80" -"\x8c\x41\x42\x43\xe3\x80\x8d\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba" -"\xe3\x81\xab\xe5\x8f\x82\xe5\x8a\xa0\xe3\x81\x97\xe3\x81\xa6\xe3" -"\x81\x84\xe3\x81\xbe\xe3\x81\x97\xe3\x81\x9f\xe3\x81\x8c\xe3\x80" -"\x81\x41\x42\x43\x20\xe3\x81\xaf\xe5\xae\x9f\xe7\x94\xa8\xe4\xb8" -"\x8a\xe3\x81\xae\xe7\x9b\xae\xe7\x9a\x84\xe3\x81\xab\xe3\x81\xaf" -"\xe3\x81\x82\xe3\x81\xbe\xe3\x82\x8a\xe9\x81\xa9\xe3\x81\x97\xe3" -"\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3\x81\x9b\xe3\x82\x93\xe3\x81" -"\xa7\xe3\x81\x97\xe3\x81\x9f\xe3\x80\x82\x0a\xe3\x81\x93\xe3\x81" -"\xae\xe3\x81\x9f\xe3\x82\x81\xe3\x80\x81\x47\x75\x69\x64\x6f\x20" -"\xe3\x81\xaf\xe3\x82\x88\xe3\x82\x8a\xe5\xae\x9f\xe7\x94\xa8\xe7" -"\x9a\x84\xe3\x81\xaa\xe3\x83\x97\xe3\x83\xad\xe3\x82\xb0\xe3\x83" -"\xa9\xe3\x83\x9f\xe3\x83\xb3\xe3\x82\xb0\xe8\xa8\x80\xe8\xaa\x9e" -"\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba\xe3\x82\x92\xe9\x96\x8b\xe5" -"\xa7\x8b\xe3\x81\x97\xe3\x80\x81\xe8\x8b\xb1\xe5\x9b\xbd\x20\x42" -"\x42\x53\x20\xe6\x94\xbe\xe9\x80\x81\xe3\x81\xae\xe3\x82\xb3\xe3" -"\x83\xa1\xe3\x83\x87\xe3\x82\xa3\xe7\x95\xaa\xe7\xb5\x84\xe3\x80" -"\x8c\xe3\x83\xa2\xe3\x83\xb3\xe3\x83\x86\xe3\x82\xa3\x20\xe3\x83" -"\x91\xe3\x82\xa4\xe3\x82\xbd\xe3\x83\xb3\xe3\x80\x8d\xe3\x81\xae" -"\xe3\x83\x95\xe3\x82\xa1\xe3\x83\xb3\xe3\x81\xa7\xe3\x81\x82\xe3" -"\x82\x8b\x20\x47\x75\x69\x64\x6f\x20\xe3\x81\xaf\xe3\x81\x93\xe3" -"\x81\xae\xe8\xa8\x80\xe8\xaa\x9e\xe3\x82\x92\xe3\x80\x8c\x50\x79" -"\x74\x68\x6f\x6e\xe3\x80\x8d\xe3\x81\xa8\xe5\x90\x8d\xe3\x81\xa5" -"\xe3\x81\x91\xe3\x81\xbe\xe3\x81\x97\xe3\x81\x9f\xe3\x80\x82\x0a" -"\xe3\x81\x93\xe3\x81\xae\xe3\x82\x88\xe3\x81\x86\xe3\x81\xaa\xe8" -"\x83\x8c\xe6\x99\xaf\xe3\x81\x8b\xe3\x82\x89\xe7\x94\x9f\xe3\x81" -"\xbe\xe3\x82\x8c\xe3\x81\x9f\x20\x50\x79\x74\x68\x6f\x6e\x20\xe3" -"\x81\xae\xe8\xa8\x80\xe8\xaa\x9e\xe8\xa8\xad\xe8\xa8\x88\xe3\x81" -"\xaf\xe3\x80\x81\xe3\x80\x8c\xe3\x82\xb7\xe3\x83\xb3\xe3\x83\x97" -"\xe3\x83\xab\xe3\x80\x8d\xe3\x81\xa7\xe3\x80\x8c\xe7\xbf\x92\xe5" -"\xbe\x97\xe3\x81\x8c\xe5\xae\xb9\xe6\x98\x93\xe3\x80\x8d\xe3\x81" -"\xa8\xe3\x81\x84\xe3\x81\x86\xe7\x9b\xae\xe6\xa8\x99\xe3\x81\xab" -"\xe9\x87\x8d\xe7\x82\xb9\xe3\x81\x8c\xe7\xbd\xae\xe3\x81\x8b\xe3" -"\x82\x8c\xe3\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3\x81\x99\xe3\x80" -"\x82\x0a\xe5\xa4\x9a\xe3\x81\x8f\xe3\x81\xae\xe3\x82\xb9\xe3\x82" -"\xaf\xe3\x83\xaa\xe3\x83\x97\xe3\x83\x88\xe7\xb3\xbb\xe8\xa8\x80" -"\xe8\xaa\x9e\xe3\x81\xa7\xe3\x81\xaf\xe3\x83\xa6\xe3\x83\xbc\xe3" -"\x82\xb6\xe3\x81\xae\xe7\x9b\xae\xe5\x85\x88\xe3\x81\xae\xe5\x88" -"\xa9\xe4\xbe\xbf\xe6\x80\xa7\xe3\x82\x92\xe5\x84\xaa\xe5\x85\x88" -"\xe3\x81\x97\xe3\x81\xa6\xe8\x89\xb2\xe3\x80\x85\xe3\x81\xaa\xe6" -"\xa9\x9f\xe8\x83\xbd\xe3\x82\x92\xe8\xa8\x80\xe8\xaa\x9e\xe8\xa6" -"\x81\xe7\xb4\xa0\xe3\x81\xa8\xe3\x81\x97\xe3\x81\xa6\xe5\x8f\x96" -"\xe3\x82\x8a\xe5\x85\xa5\xe3\x82\x8c\xe3\x82\x8b\xe5\xa0\xb4\xe5" -"\x90\x88\xe3\x81\x8c\xe5\xa4\x9a\xe3\x81\x84\xe3\x81\xae\xe3\x81" -"\xa7\xe3\x81\x99\xe3\x81\x8c\xe3\x80\x81\x50\x79\x74\x68\x6f\x6e" 
-"\x20\xe3\x81\xa7\xe3\x81\xaf\xe3\x81\x9d\xe3\x81\x86\xe3\x81\x84" -"\xe3\x81\xa3\xe3\x81\x9f\xe5\xb0\x8f\xe7\xb4\xb0\xe5\xb7\xa5\xe3" -"\x81\x8c\xe8\xbf\xbd\xe5\x8a\xa0\xe3\x81\x95\xe3\x82\x8c\xe3\x82" -"\x8b\xe3\x81\x93\xe3\x81\xa8\xe3\x81\xaf\xe3\x81\x82\xe3\x81\xbe" -"\xe3\x82\x8a\xe3\x81\x82\xe3\x82\x8a\xe3\x81\xbe\xe3\x81\x9b\xe3" -"\x82\x93\xe3\x80\x82\x0a\xe8\xa8\x80\xe8\xaa\x9e\xe8\x87\xaa\xe4" -"\xbd\x93\xe3\x81\xae\xe6\xa9\x9f\xe8\x83\xbd\xe3\x81\xaf\xe6\x9c" -"\x80\xe5\xb0\x8f\xe9\x99\x90\xe3\x81\xab\xe6\x8a\xbc\xe3\x81\x95" -"\xe3\x81\x88\xe3\x80\x81\xe5\xbf\x85\xe8\xa6\x81\xe3\x81\xaa\xe6" -"\xa9\x9f\xe8\x83\xbd\xe3\x81\xaf\xe6\x8b\xa1\xe5\xbc\xb5\xe3\x83" -"\xa2\xe3\x82\xb8\xe3\x83\xa5\xe3\x83\xbc\xe3\x83\xab\xe3\x81\xa8" -"\xe3\x81\x97\xe3\x81\xa6\xe8\xbf\xbd\xe5\x8a\xa0\xe3\x81\x99\xe3" -"\x82\x8b\xe3\x80\x81\xe3\x81\xa8\xe3\x81\x84\xe3\x81\x86\xe3\x81" -"\xae\xe3\x81\x8c\x20\x50\x79\x74\x68\x6f\x6e\x20\xe3\x81\xae\xe3" -"\x83\x9d\xe3\x83\xaa\xe3\x82\xb7\xe3\x83\xbc\xe3\x81\xa7\xe3\x81" -"\x99\xe3\x80\x82\x0a\x0a\xe3\x83\x8e\xe3\x81\x8b\xe3\x82\x9a\x20" -"\xe3\x83\x88\xe3\x82\x9a\x20\xe3\x83\x88\xe3\x82\xad\xef\xa8\xb6" -"\xef\xa8\xb9\x20\xf0\xa1\x9a\xb4\xf0\xaa\x8e\x8c\x20\xe9\xba\x80" -"\xe9\xbd\x81\xf0\xa9\x9b\xb0\x0a"), -} diff --git a/lib-python/2.7/test/crashers/README b/lib-python/2.7/test/crashers/README --- a/lib-python/2.7/test/crashers/README +++ b/lib-python/2.7/test/crashers/README @@ -1,20 +1,16 @@ -This directory only contains tests for outstanding bugs that cause -the interpreter to segfault. Ideally this directory should always -be empty. Sometimes it may not be easy to fix the underlying cause. +This directory only contains tests for outstanding bugs that cause the +interpreter to segfault. Ideally this directory should always be empty, but +sometimes it may not be easy to fix the underlying cause and the bug is deemed +too obscure to invest the effort. Each test should fail when run from the command line: ./python Lib/test/crashers/weakref_in_del.py -Each test should have a link to the bug report: +Put as much info into a docstring or comments to help determine the cause of the +failure, as well as a bugs.python.org issue number if it exists. Particularly +note if the cause is system or environment dependent and what the variables are. - # http://python.org/sf/BUG# - -Put as much info into a docstring or comments to help determine -the cause of the failure. Particularly note if the cause is -system or environment dependent and what the variables are. - -Once the crash is fixed, the test case should be moved into an appropriate -test (even if it was originally from the test suite). This ensures the -regression doesn't happen again. And if it does, it should be easier -to track down. +Once the crash is fixed, the test case should be moved into an appropriate test +(even if it was originally from the test suite). This ensures the regression +doesn't happen again. And if it does, it should be easier to track down. diff --git a/lib-python/2.7/test/crashers/recursion_limit_too_high.py b/lib-python/2.7/test/crashers/recursion_limit_too_high.py --- a/lib-python/2.7/test/crashers/recursion_limit_too_high.py +++ b/lib-python/2.7/test/crashers/recursion_limit_too_high.py @@ -5,7 +5,7 @@ # file handles. # The point of this example is to show that sys.setrecursionlimit() is a -# hack, and not a robust solution. This example simply exercices a path +# hack, and not a robust solution. 
This example simply exercises a path # where it takes many C-level recursions, consuming a lot of stack # space, for each Python-level recursion. So 1000 times this amount of # stack space may be too much for standard platforms already. diff --git a/lib-python/2.7/test/decimaltestdata/and.decTest b/lib-python/2.7/test/decimaltestdata/and.decTest --- a/lib-python/2.7/test/decimaltestdata/and.decTest +++ b/lib-python/2.7/test/decimaltestdata/and.decTest @@ -1,338 +1,338 @@ ------------------------------------------------------------------------- --- and.decTest -- digitwise logical AND -- --- Copyright (c) IBM Corporation, 1981, 2008. All rights reserved. -- ------------------------------------------------------------------------- --- Please see the document "General Decimal Arithmetic Testcases" -- --- at http://www2.hursley.ibm.com/decimal for the description of -- --- these testcases. -- --- -- --- These testcases are experimental ('beta' versions), and they -- --- may contain errors. They are offered on an as-is basis. In -- --- particular, achieving the same results as the tests here is not -- --- a guarantee that an implementation complies with any Standard -- --- or specification. The tests are not exhaustive. -- --- -- --- Please send comments, suggestions, and corrections to the author: -- --- Mike Cowlishaw, IBM Fellow -- --- IBM UK, PO Box 31, Birmingham Road, Warwick CV34 5JL, UK -- --- mfc at uk.ibm.com -- ------------------------------------------------------------------------- -version: 2.59 - -extended: 1 -precision: 9 -rounding: half_up -maxExponent: 999 -minExponent: -999 - --- Sanity check (truth table) -andx001 and 0 0 -> 0 -andx002 and 0 1 -> 0 -andx003 and 1 0 -> 0 -andx004 and 1 1 -> 1 -andx005 and 1100 1010 -> 1000 -andx006 and 1111 10 -> 10 -andx007 and 1111 1010 -> 1010 - --- and at msd and msd-1 -andx010 and 000000000 000000000 -> 0 -andx011 and 000000000 100000000 -> 0 -andx012 and 100000000 000000000 -> 0 -andx013 and 100000000 100000000 -> 100000000 -andx014 and 000000000 000000000 -> 0 -andx015 and 000000000 010000000 -> 0 -andx016 and 010000000 000000000 -> 0 -andx017 and 010000000 010000000 -> 10000000 - --- Various lengths --- 123456789 123456789 123456789 -andx021 and 111111111 111111111 -> 111111111 -andx022 and 111111111111 111111111 -> 111111111 -andx023 and 111111111111 11111111 -> 11111111 -andx024 and 111111111 11111111 -> 11111111 -andx025 and 111111111 1111111 -> 1111111 -andx026 and 111111111111 111111 -> 111111 -andx027 and 111111111111 11111 -> 11111 -andx028 and 111111111111 1111 -> 1111 -andx029 and 111111111111 111 -> 111 -andx031 and 111111111111 11 -> 11 -andx032 and 111111111111 1 -> 1 -andx033 and 111111111111 1111111111 -> 111111111 -andx034 and 11111111111 11111111111 -> 111111111 -andx035 and 1111111111 111111111111 -> 111111111 -andx036 and 111111111 1111111111111 -> 111111111 - -andx040 and 111111111 111111111111 -> 111111111 -andx041 and 11111111 111111111111 -> 11111111 -andx042 and 11111111 111111111 -> 11111111 -andx043 and 1111111 111111111 -> 1111111 -andx044 and 111111 111111111 -> 111111 -andx045 and 11111 111111111 -> 11111 -andx046 and 1111 111111111 -> 1111 -andx047 and 111 111111111 -> 111 -andx048 and 11 111111111 -> 11 -andx049 and 1 111111111 -> 1 - -andx050 and 1111111111 1 -> 1 -andx051 and 111111111 1 -> 1 -andx052 and 11111111 1 -> 1 -andx053 and 1111111 1 -> 1 -andx054 and 111111 1 -> 1 -andx055 and 11111 1 -> 1 -andx056 and 1111 1 -> 1 -andx057 and 111 1 -> 1 -andx058 and 11 1 -> 1 -andx059 and 1 1 -> 1 - -andx060 
and 1111111111 0 -> 0 -andx061 and 111111111 0 -> 0 -andx062 and 11111111 0 -> 0 -andx063 and 1111111 0 -> 0 -andx064 and 111111 0 -> 0 -andx065 and 11111 0 -> 0 -andx066 and 1111 0 -> 0 -andx067 and 111 0 -> 0 -andx068 and 11 0 -> 0 -andx069 and 1 0 -> 0 - -andx070 and 1 1111111111 -> 1 -andx071 and 1 111111111 -> 1 -andx072 and 1 11111111 -> 1 -andx073 and 1 1111111 -> 1 -andx074 and 1 111111 -> 1 -andx075 and 1 11111 -> 1 -andx076 and 1 1111 -> 1 -andx077 and 1 111 -> 1 -andx078 and 1 11 -> 1 -andx079 and 1 1 -> 1 - -andx080 and 0 1111111111 -> 0 -andx081 and 0 111111111 -> 0 -andx082 and 0 11111111 -> 0 -andx083 and 0 1111111 -> 0 -andx084 and 0 111111 -> 0 -andx085 and 0 11111 -> 0 -andx086 and 0 1111 -> 0 -andx087 and 0 111 -> 0 -andx088 and 0 11 -> 0 -andx089 and 0 1 -> 0 - -andx090 and 011111111 111111111 -> 11111111 -andx091 and 101111111 111111111 -> 101111111 -andx092 and 110111111 111111111 -> 110111111 -andx093 and 111011111 111111111 -> 111011111 -andx094 and 111101111 111111111 -> 111101111 -andx095 and 111110111 111111111 -> 111110111 -andx096 and 111111011 111111111 -> 111111011 -andx097 and 111111101 111111111 -> 111111101 -andx098 and 111111110 111111111 -> 111111110 - -andx100 and 111111111 011111111 -> 11111111 -andx101 and 111111111 101111111 -> 101111111 -andx102 and 111111111 110111111 -> 110111111 -andx103 and 111111111 111011111 -> 111011111 -andx104 and 111111111 111101111 -> 111101111 -andx105 and 111111111 111110111 -> 111110111 -andx106 and 111111111 111111011 -> 111111011 -andx107 and 111111111 111111101 -> 111111101 -andx108 and 111111111 111111110 -> 111111110 - --- non-0/1 should not be accepted, nor should signs -andx220 and 111111112 111111111 -> NaN Invalid_operation -andx221 and 333333333 333333333 -> NaN Invalid_operation -andx222 and 555555555 555555555 -> NaN Invalid_operation -andx223 and 777777777 777777777 -> NaN Invalid_operation -andx224 and 999999999 999999999 -> NaN Invalid_operation -andx225 and 222222222 999999999 -> NaN Invalid_operation -andx226 and 444444444 999999999 -> NaN Invalid_operation -andx227 and 666666666 999999999 -> NaN Invalid_operation -andx228 and 888888888 999999999 -> NaN Invalid_operation -andx229 and 999999999 222222222 -> NaN Invalid_operation -andx230 and 999999999 444444444 -> NaN Invalid_operation -andx231 and 999999999 666666666 -> NaN Invalid_operation -andx232 and 999999999 888888888 -> NaN Invalid_operation --- a few randoms -andx240 and 567468689 -934981942 -> NaN Invalid_operation -andx241 and 567367689 934981942 -> NaN Invalid_operation -andx242 and -631917772 -706014634 -> NaN Invalid_operation -andx243 and -756253257 138579234 -> NaN Invalid_operation -andx244 and 835590149 567435400 -> NaN Invalid_operation --- test MSD -andx250 and 200000000 100000000 -> NaN Invalid_operation -andx251 and 700000000 100000000 -> NaN Invalid_operation -andx252 and 800000000 100000000 -> NaN Invalid_operation -andx253 and 900000000 100000000 -> NaN Invalid_operation -andx254 and 200000000 000000000 -> NaN Invalid_operation -andx255 and 700000000 000000000 -> NaN Invalid_operation -andx256 and 800000000 000000000 -> NaN Invalid_operation -andx257 and 900000000 000000000 -> NaN Invalid_operation -andx258 and 100000000 200000000 -> NaN Invalid_operation -andx259 and 100000000 700000000 -> NaN Invalid_operation -andx260 and 100000000 800000000 -> NaN Invalid_operation -andx261 and 100000000 900000000 -> NaN Invalid_operation -andx262 and 000000000 200000000 -> NaN Invalid_operation -andx263 and 000000000 700000000 -> NaN 
Invalid_operation -andx264 and 000000000 800000000 -> NaN Invalid_operation -andx265 and 000000000 900000000 -> NaN Invalid_operation --- test MSD-1 -andx270 and 020000000 100000000 -> NaN Invalid_operation -andx271 and 070100000 100000000 -> NaN Invalid_operation -andx272 and 080010000 100000001 -> NaN Invalid_operation -andx273 and 090001000 100000010 -> NaN Invalid_operation -andx274 and 100000100 020010100 -> NaN Invalid_operation -andx275 and 100000000 070001000 -> NaN Invalid_operation -andx276 and 100000010 080010100 -> NaN Invalid_operation -andx277 and 100000000 090000010 -> NaN Invalid_operation --- test LSD -andx280 and 001000002 100000000 -> NaN Invalid_operation -andx281 and 000000007 100000000 -> NaN Invalid_operation -andx282 and 000000008 100000000 -> NaN Invalid_operation -andx283 and 000000009 100000000 -> NaN Invalid_operation -andx284 and 100000000 000100002 -> NaN Invalid_operation -andx285 and 100100000 001000007 -> NaN Invalid_operation -andx286 and 100010000 010000008 -> NaN Invalid_operation -andx287 and 100001000 100000009 -> NaN Invalid_operation --- test Middie -andx288 and 001020000 100000000 -> NaN Invalid_operation -andx289 and 000070001 100000000 -> NaN Invalid_operation -andx290 and 000080000 100010000 -> NaN Invalid_operation -andx291 and 000090000 100001000 -> NaN Invalid_operation -andx292 and 100000010 000020100 -> NaN Invalid_operation -andx293 and 100100000 000070010 -> NaN Invalid_operation -andx294 and 100010100 000080001 -> NaN Invalid_operation -andx295 and 100001000 000090000 -> NaN Invalid_operation --- signs -andx296 and -100001000 -000000000 -> NaN Invalid_operation -andx297 and -100001000 000010000 -> NaN Invalid_operation -andx298 and 100001000 -000000000 -> NaN Invalid_operation -andx299 and 100001000 000011000 -> 1000 - --- Nmax, Nmin, Ntiny -andx331 and 2 9.99999999E+999 -> NaN Invalid_operation -andx332 and 3 1E-999 -> NaN Invalid_operation -andx333 and 4 1.00000000E-999 -> NaN Invalid_operation -andx334 and 5 1E-1007 -> NaN Invalid_operation -andx335 and 6 -1E-1007 -> NaN Invalid_operation -andx336 and 7 -1.00000000E-999 -> NaN Invalid_operation -andx337 and 8 -1E-999 -> NaN Invalid_operation -andx338 and 9 -9.99999999E+999 -> NaN Invalid_operation -andx341 and 9.99999999E+999 -18 -> NaN Invalid_operation -andx342 and 1E-999 01 -> NaN Invalid_operation -andx343 and 1.00000000E-999 -18 -> NaN Invalid_operation -andx344 and 1E-1007 18 -> NaN Invalid_operation -andx345 and -1E-1007 -10 -> NaN Invalid_operation -andx346 and -1.00000000E-999 18 -> NaN Invalid_operation -andx347 and -1E-999 10 -> NaN Invalid_operation -andx348 and -9.99999999E+999 -18 -> NaN Invalid_operation - --- A few other non-integers -andx361 and 1.0 1 -> NaN Invalid_operation -andx362 and 1E+1 1 -> NaN Invalid_operation -andx363 and 0.0 1 -> NaN Invalid_operation -andx364 and 0E+1 1 -> NaN Invalid_operation -andx365 and 9.9 1 -> NaN Invalid_operation -andx366 and 9E+1 1 -> NaN Invalid_operation -andx371 and 0 1.0 -> NaN Invalid_operation -andx372 and 0 1E+1 -> NaN Invalid_operation -andx373 and 0 0.0 -> NaN Invalid_operation -andx374 and 0 0E+1 -> NaN Invalid_operation -andx375 and 0 9.9 -> NaN Invalid_operation -andx376 and 0 9E+1 -> NaN Invalid_operation - --- All Specials are in error -andx780 and -Inf -Inf -> NaN Invalid_operation -andx781 and -Inf -1000 -> NaN Invalid_operation -andx782 and -Inf -1 -> NaN Invalid_operation -andx783 and -Inf -0 -> NaN Invalid_operation -andx784 and -Inf 0 -> NaN Invalid_operation -andx785 and -Inf 1 -> NaN Invalid_operation 
-andx786 and -Inf 1000 -> NaN Invalid_operation -andx787 and -1000 -Inf -> NaN Invalid_operation -andx788 and -Inf -Inf -> NaN Invalid_operation -andx789 and -1 -Inf -> NaN Invalid_operation -andx790 and -0 -Inf -> NaN Invalid_operation -andx791 and 0 -Inf -> NaN Invalid_operation -andx792 and 1 -Inf -> NaN Invalid_operation -andx793 and 1000 -Inf -> NaN Invalid_operation -andx794 and Inf -Inf -> NaN Invalid_operation - -andx800 and Inf -Inf -> NaN Invalid_operation -andx801 and Inf -1000 -> NaN Invalid_operation -andx802 and Inf -1 -> NaN Invalid_operation -andx803 and Inf -0 -> NaN Invalid_operation -andx804 and Inf 0 -> NaN Invalid_operation -andx805 and Inf 1 -> NaN Invalid_operation -andx806 and Inf 1000 -> NaN Invalid_operation -andx807 and Inf Inf -> NaN Invalid_operation -andx808 and -1000 Inf -> NaN Invalid_operation -andx809 and -Inf Inf -> NaN Invalid_operation -andx810 and -1 Inf -> NaN Invalid_operation -andx811 and -0 Inf -> NaN Invalid_operation -andx812 and 0 Inf -> NaN Invalid_operation -andx813 and 1 Inf -> NaN Invalid_operation -andx814 and 1000 Inf -> NaN Invalid_operation -andx815 and Inf Inf -> NaN Invalid_operation - -andx821 and NaN -Inf -> NaN Invalid_operation -andx822 and NaN -1000 -> NaN Invalid_operation -andx823 and NaN -1 -> NaN Invalid_operation -andx824 and NaN -0 -> NaN Invalid_operation -andx825 and NaN 0 -> NaN Invalid_operation -andx826 and NaN 1 -> NaN Invalid_operation -andx827 and NaN 1000 -> NaN Invalid_operation -andx828 and NaN Inf -> NaN Invalid_operation -andx829 and NaN NaN -> NaN Invalid_operation -andx830 and -Inf NaN -> NaN Invalid_operation -andx831 and -1000 NaN -> NaN Invalid_operation -andx832 and -1 NaN -> NaN Invalid_operation -andx833 and -0 NaN -> NaN Invalid_operation -andx834 and 0 NaN -> NaN Invalid_operation -andx835 and 1 NaN -> NaN Invalid_operation -andx836 and 1000 NaN -> NaN Invalid_operation -andx837 and Inf NaN -> NaN Invalid_operation - -andx841 and sNaN -Inf -> NaN Invalid_operation -andx842 and sNaN -1000 -> NaN Invalid_operation -andx843 and sNaN -1 -> NaN Invalid_operation -andx844 and sNaN -0 -> NaN Invalid_operation -andx845 and sNaN 0 -> NaN Invalid_operation -andx846 and sNaN 1 -> NaN Invalid_operation -andx847 and sNaN 1000 -> NaN Invalid_operation -andx848 and sNaN NaN -> NaN Invalid_operation -andx849 and sNaN sNaN -> NaN Invalid_operation -andx850 and NaN sNaN -> NaN Invalid_operation -andx851 and -Inf sNaN -> NaN Invalid_operation -andx852 and -1000 sNaN -> NaN Invalid_operation -andx853 and -1 sNaN -> NaN Invalid_operation -andx854 and -0 sNaN -> NaN Invalid_operation -andx855 and 0 sNaN -> NaN Invalid_operation -andx856 and 1 sNaN -> NaN Invalid_operation -andx857 and 1000 sNaN -> NaN Invalid_operation -andx858 and Inf sNaN -> NaN Invalid_operation -andx859 and NaN sNaN -> NaN Invalid_operation - --- propagating NaNs -andx861 and NaN1 -Inf -> NaN Invalid_operation -andx862 and +NaN2 -1000 -> NaN Invalid_operation -andx863 and NaN3 1000 -> NaN Invalid_operation -andx864 and NaN4 Inf -> NaN Invalid_operation -andx865 and NaN5 +NaN6 -> NaN Invalid_operation -andx866 and -Inf NaN7 -> NaN Invalid_operation -andx867 and -1000 NaN8 -> NaN Invalid_operation -andx868 and 1000 NaN9 -> NaN Invalid_operation -andx869 and Inf +NaN10 -> NaN Invalid_operation -andx871 and sNaN11 -Inf -> NaN Invalid_operation -andx872 and sNaN12 -1000 -> NaN Invalid_operation -andx873 and sNaN13 1000 -> NaN Invalid_operation -andx874 and sNaN14 NaN17 -> NaN Invalid_operation -andx875 and sNaN15 sNaN18 -> NaN Invalid_operation -andx876 and 
NaN16 sNaN19 -> NaN Invalid_operation -andx877 and -Inf +sNaN20 -> NaN Invalid_operation -andx878 and -1000 sNaN21 -> NaN Invalid_operation -andx879 and 1000 sNaN22 -> NaN Invalid_operation -andx880 and Inf sNaN23 -> NaN Invalid_operation -andx881 and +NaN25 +sNaN24 -> NaN Invalid_operation -andx882 and -NaN26 NaN28 -> NaN Invalid_operation -andx883 and -sNaN27 sNaN29 -> NaN Invalid_operation -andx884 and 1000 -NaN30 -> NaN Invalid_operation -andx885 and 1000 -sNaN31 -> NaN Invalid_operation +------------------------------------------------------------------------ +-- and.decTest -- digitwise logical AND -- +-- Copyright (c) IBM Corporation, 1981, 2008. All rights reserved. -- +------------------------------------------------------------------------ +-- Please see the document "General Decimal Arithmetic Testcases" -- +-- at http://www2.hursley.ibm.com/decimal for the description of -- +-- these testcases. -- +-- -- +-- These testcases are experimental ('beta' versions), and they -- +-- may contain errors. They are offered on an as-is basis. In -- +-- particular, achieving the same results as the tests here is not -- +-- a guarantee that an implementation complies with any Standard -- +-- or specification. The tests are not exhaustive. -- +-- -- +-- Please send comments, suggestions, and corrections to the author: -- +-- Mike Cowlishaw, IBM Fellow -- +-- IBM UK, PO Box 31, Birmingham Road, Warwick CV34 5JL, UK -- +-- mfc at uk.ibm.com -- +------------------------------------------------------------------------ +version: 2.59 + +extended: 1 +precision: 9 +rounding: half_up +maxExponent: 999 +minExponent: -999 + +-- Sanity check (truth table) +andx001 and 0 0 -> 0 +andx002 and 0 1 -> 0 +andx003 and 1 0 -> 0 +andx004 and 1 1 -> 1 +andx005 and 1100 1010 -> 1000 +andx006 and 1111 10 -> 10 +andx007 and 1111 1010 -> 1010 + +-- and at msd and msd-1 +andx010 and 000000000 000000000 -> 0 +andx011 and 000000000 100000000 -> 0 +andx012 and 100000000 000000000 -> 0 +andx013 and 100000000 100000000 -> 100000000 +andx014 and 000000000 000000000 -> 0 +andx015 and 000000000 010000000 -> 0 +andx016 and 010000000 000000000 -> 0 +andx017 and 010000000 010000000 -> 10000000 + +-- Various lengths +-- 123456789 123456789 123456789 +andx021 and 111111111 111111111 -> 111111111 +andx022 and 111111111111 111111111 -> 111111111 +andx023 and 111111111111 11111111 -> 11111111 +andx024 and 111111111 11111111 -> 11111111 +andx025 and 111111111 1111111 -> 1111111 +andx026 and 111111111111 111111 -> 111111 +andx027 and 111111111111 11111 -> 11111 +andx028 and 111111111111 1111 -> 1111 +andx029 and 111111111111 111 -> 111 +andx031 and 111111111111 11 -> 11 +andx032 and 111111111111 1 -> 1 +andx033 and 111111111111 1111111111 -> 111111111 +andx034 and 11111111111 11111111111 -> 111111111 +andx035 and 1111111111 111111111111 -> 111111111 +andx036 and 111111111 1111111111111 -> 111111111 + +andx040 and 111111111 111111111111 -> 111111111 +andx041 and 11111111 111111111111 -> 11111111 +andx042 and 11111111 111111111 -> 11111111 +andx043 and 1111111 111111111 -> 1111111 +andx044 and 111111 111111111 -> 111111 +andx045 and 11111 111111111 -> 11111 +andx046 and 1111 111111111 -> 1111 +andx047 and 111 111111111 -> 111 +andx048 and 11 111111111 -> 11 +andx049 and 1 111111111 -> 1 + +andx050 and 1111111111 1 -> 1 +andx051 and 111111111 1 -> 1 +andx052 and 11111111 1 -> 1 +andx053 and 1111111 1 -> 1 +andx054 and 111111 1 -> 1 +andx055 and 11111 1 -> 1 +andx056 and 1111 1 -> 1 +andx057 and 111 1 -> 1 +andx058 and 11 1 -> 1 +andx059 
and 1 1 -> 1 + +andx060 and 1111111111 0 -> 0 +andx061 and 111111111 0 -> 0 +andx062 and 11111111 0 -> 0 +andx063 and 1111111 0 -> 0 +andx064 and 111111 0 -> 0 +andx065 and 11111 0 -> 0 +andx066 and 1111 0 -> 0 +andx067 and 111 0 -> 0 +andx068 and 11 0 -> 0 +andx069 and 1 0 -> 0 + +andx070 and 1 1111111111 -> 1 +andx071 and 1 111111111 -> 1 +andx072 and 1 11111111 -> 1 +andx073 and 1 1111111 -> 1 +andx074 and 1 111111 -> 1 +andx075 and 1 11111 -> 1 +andx076 and 1 1111 -> 1 +andx077 and 1 111 -> 1 +andx078 and 1 11 -> 1 +andx079 and 1 1 -> 1 + +andx080 and 0 1111111111 -> 0 +andx081 and 0 111111111 -> 0 +andx082 and 0 11111111 -> 0 +andx083 and 0 1111111 -> 0 +andx084 and 0 111111 -> 0 +andx085 and 0 11111 -> 0 +andx086 and 0 1111 -> 0 +andx087 and 0 111 -> 0 +andx088 and 0 11 -> 0 +andx089 and 0 1 -> 0 + +andx090 and 011111111 111111111 -> 11111111 +andx091 and 101111111 111111111 -> 101111111 +andx092 and 110111111 111111111 -> 110111111 +andx093 and 111011111 111111111 -> 111011111 +andx094 and 111101111 111111111 -> 111101111 +andx095 and 111110111 111111111 -> 111110111 +andx096 and 111111011 111111111 -> 111111011 +andx097 and 111111101 111111111 -> 111111101 +andx098 and 111111110 111111111 -> 111111110 + +andx100 and 111111111 011111111 -> 11111111 +andx101 and 111111111 101111111 -> 101111111 +andx102 and 111111111 110111111 -> 110111111 +andx103 and 111111111 111011111 -> 111011111 +andx104 and 111111111 111101111 -> 111101111 +andx105 and 111111111 111110111 -> 111110111 +andx106 and 111111111 111111011 -> 111111011 +andx107 and 111111111 111111101 -> 111111101 +andx108 and 111111111 111111110 -> 111111110 + +-- non-0/1 should not be accepted, nor should signs +andx220 and 111111112 111111111 -> NaN Invalid_operation +andx221 and 333333333 333333333 -> NaN Invalid_operation +andx222 and 555555555 555555555 -> NaN Invalid_operation +andx223 and 777777777 777777777 -> NaN Invalid_operation +andx224 and 999999999 999999999 -> NaN Invalid_operation +andx225 and 222222222 999999999 -> NaN Invalid_operation +andx226 and 444444444 999999999 -> NaN Invalid_operation +andx227 and 666666666 999999999 -> NaN Invalid_operation +andx228 and 888888888 999999999 -> NaN Invalid_operation +andx229 and 999999999 222222222 -> NaN Invalid_operation +andx230 and 999999999 444444444 -> NaN Invalid_operation +andx231 and 999999999 666666666 -> NaN Invalid_operation +andx232 and 999999999 888888888 -> NaN Invalid_operation +-- a few randoms +andx240 and 567468689 -934981942 -> NaN Invalid_operation +andx241 and 567367689 934981942 -> NaN Invalid_operation +andx242 and -631917772 -706014634 -> NaN Invalid_operation +andx243 and -756253257 138579234 -> NaN Invalid_operation +andx244 and 835590149 567435400 -> NaN Invalid_operation +-- test MSD +andx250 and 200000000 100000000 -> NaN Invalid_operation +andx251 and 700000000 100000000 -> NaN Invalid_operation +andx252 and 800000000 100000000 -> NaN Invalid_operation +andx253 and 900000000 100000000 -> NaN Invalid_operation +andx254 and 200000000 000000000 -> NaN Invalid_operation +andx255 and 700000000 000000000 -> NaN Invalid_operation +andx256 and 800000000 000000000 -> NaN Invalid_operation +andx257 and 900000000 000000000 -> NaN Invalid_operation +andx258 and 100000000 200000000 -> NaN Invalid_operation +andx259 and 100000000 700000000 -> NaN Invalid_operation +andx260 and 100000000 800000000 -> NaN Invalid_operation +andx261 and 100000000 900000000 -> NaN Invalid_operation +andx262 and 000000000 200000000 -> NaN Invalid_operation +andx263 and 000000000 
700000000 -> NaN Invalid_operation +andx264 and 000000000 800000000 -> NaN Invalid_operation +andx265 and 000000000 900000000 -> NaN Invalid_operation +-- test MSD-1 +andx270 and 020000000 100000000 -> NaN Invalid_operation +andx271 and 070100000 100000000 -> NaN Invalid_operation +andx272 and 080010000 100000001 -> NaN Invalid_operation +andx273 and 090001000 100000010 -> NaN Invalid_operation +andx274 and 100000100 020010100 -> NaN Invalid_operation +andx275 and 100000000 070001000 -> NaN Invalid_operation +andx276 and 100000010 080010100 -> NaN Invalid_operation +andx277 and 100000000 090000010 -> NaN Invalid_operation +-- test LSD +andx280 and 001000002 100000000 -> NaN Invalid_operation +andx281 and 000000007 100000000 -> NaN Invalid_operation +andx282 and 000000008 100000000 -> NaN Invalid_operation +andx283 and 000000009 100000000 -> NaN Invalid_operation +andx284 and 100000000 000100002 -> NaN Invalid_operation +andx285 and 100100000 001000007 -> NaN Invalid_operation +andx286 and 100010000 010000008 -> NaN Invalid_operation +andx287 and 100001000 100000009 -> NaN Invalid_operation +-- test Middie +andx288 and 001020000 100000000 -> NaN Invalid_operation +andx289 and 000070001 100000000 -> NaN Invalid_operation +andx290 and 000080000 100010000 -> NaN Invalid_operation +andx291 and 000090000 100001000 -> NaN Invalid_operation +andx292 and 100000010 000020100 -> NaN Invalid_operation +andx293 and 100100000 000070010 -> NaN Invalid_operation +andx294 and 100010100 000080001 -> NaN Invalid_operation +andx295 and 100001000 000090000 -> NaN Invalid_operation +-- signs +andx296 and -100001000 -000000000 -> NaN Invalid_operation +andx297 and -100001000 000010000 -> NaN Invalid_operation +andx298 and 100001000 -000000000 -> NaN Invalid_operation +andx299 and 100001000 000011000 -> 1000 + +-- Nmax, Nmin, Ntiny +andx331 and 2 9.99999999E+999 -> NaN Invalid_operation +andx332 and 3 1E-999 -> NaN Invalid_operation +andx333 and 4 1.00000000E-999 -> NaN Invalid_operation +andx334 and 5 1E-1007 -> NaN Invalid_operation +andx335 and 6 -1E-1007 -> NaN Invalid_operation +andx336 and 7 -1.00000000E-999 -> NaN Invalid_operation +andx337 and 8 -1E-999 -> NaN Invalid_operation +andx338 and 9 -9.99999999E+999 -> NaN Invalid_operation +andx341 and 9.99999999E+999 -18 -> NaN Invalid_operation +andx342 and 1E-999 01 -> NaN Invalid_operation +andx343 and 1.00000000E-999 -18 -> NaN Invalid_operation +andx344 and 1E-1007 18 -> NaN Invalid_operation +andx345 and -1E-1007 -10 -> NaN Invalid_operation +andx346 and -1.00000000E-999 18 -> NaN Invalid_operation +andx347 and -1E-999 10 -> NaN Invalid_operation +andx348 and -9.99999999E+999 -18 -> NaN Invalid_operation + +-- A few other non-integers +andx361 and 1.0 1 -> NaN Invalid_operation +andx362 and 1E+1 1 -> NaN Invalid_operation +andx363 and 0.0 1 -> NaN Invalid_operation +andx364 and 0E+1 1 -> NaN Invalid_operation +andx365 and 9.9 1 -> NaN Invalid_operation +andx366 and 9E+1 1 -> NaN Invalid_operation +andx371 and 0 1.0 -> NaN Invalid_operation +andx372 and 0 1E+1 -> NaN Invalid_operation +andx373 and 0 0.0 -> NaN Invalid_operation +andx374 and 0 0E+1 -> NaN Invalid_operation +andx375 and 0 9.9 -> NaN Invalid_operation +andx376 and 0 9E+1 -> NaN Invalid_operation + +-- All Specials are in error +andx780 and -Inf -Inf -> NaN Invalid_operation +andx781 and -Inf -1000 -> NaN Invalid_operation +andx782 and -Inf -1 -> NaN Invalid_operation +andx783 and -Inf -0 -> NaN Invalid_operation +andx784 and -Inf 0 -> NaN Invalid_operation +andx785 and -Inf 1 -> NaN 
Invalid_operation +andx786 and -Inf 1000 -> NaN Invalid_operation +andx787 and -1000 -Inf -> NaN Invalid_operation +andx788 and -Inf -Inf -> NaN Invalid_operation +andx789 and -1 -Inf -> NaN Invalid_operation +andx790 and -0 -Inf -> NaN Invalid_operation +andx791 and 0 -Inf -> NaN Invalid_operation +andx792 and 1 -Inf -> NaN Invalid_operation +andx793 and 1000 -Inf -> NaN Invalid_operation +andx794 and Inf -Inf -> NaN Invalid_operation + +andx800 and Inf -Inf -> NaN Invalid_operation +andx801 and Inf -1000 -> NaN Invalid_operation +andx802 and Inf -1 -> NaN Invalid_operation +andx803 and Inf -0 -> NaN Invalid_operation +andx804 and Inf 0 -> NaN Invalid_operation +andx805 and Inf 1 -> NaN Invalid_operation +andx806 and Inf 1000 -> NaN Invalid_operation +andx807 and Inf Inf -> NaN Invalid_operation +andx808 and -1000 Inf -> NaN Invalid_operation +andx809 and -Inf Inf -> NaN Invalid_operation +andx810 and -1 Inf -> NaN Invalid_operation +andx811 and -0 Inf -> NaN Invalid_operation +andx812 and 0 Inf -> NaN Invalid_operation +andx813 and 1 Inf -> NaN Invalid_operation +andx814 and 1000 Inf -> NaN Invalid_operation +andx815 and Inf Inf -> NaN Invalid_operation + +andx821 and NaN -Inf -> NaN Invalid_operation +andx822 and NaN -1000 -> NaN Invalid_operation +andx823 and NaN -1 -> NaN Invalid_operation +andx824 and NaN -0 -> NaN Invalid_operation +andx825 and NaN 0 -> NaN Invalid_operation +andx826 and NaN 1 -> NaN Invalid_operation +andx827 and NaN 1000 -> NaN Invalid_operation +andx828 and NaN Inf -> NaN Invalid_operation +andx829 and NaN NaN -> NaN Invalid_operation +andx830 and -Inf NaN -> NaN Invalid_operation +andx831 and -1000 NaN -> NaN Invalid_operation +andx832 and -1 NaN -> NaN Invalid_operation +andx833 and -0 NaN -> NaN Invalid_operation +andx834 and 0 NaN -> NaN Invalid_operation +andx835 and 1 NaN -> NaN Invalid_operation +andx836 and 1000 NaN -> NaN Invalid_operation +andx837 and Inf NaN -> NaN Invalid_operation + +andx841 and sNaN -Inf -> NaN Invalid_operation +andx842 and sNaN -1000 -> NaN Invalid_operation +andx843 and sNaN -1 -> NaN Invalid_operation +andx844 and sNaN -0 -> NaN Invalid_operation +andx845 and sNaN 0 -> NaN Invalid_operation +andx846 and sNaN 1 -> NaN Invalid_operation +andx847 and sNaN 1000 -> NaN Invalid_operation +andx848 and sNaN NaN -> NaN Invalid_operation +andx849 and sNaN sNaN -> NaN Invalid_operation +andx850 and NaN sNaN -> NaN Invalid_operation +andx851 and -Inf sNaN -> NaN Invalid_operation +andx852 and -1000 sNaN -> NaN Invalid_operation +andx853 and -1 sNaN -> NaN Invalid_operation +andx854 and -0 sNaN -> NaN Invalid_operation +andx855 and 0 sNaN -> NaN Invalid_operation +andx856 and 1 sNaN -> NaN Invalid_operation +andx857 and 1000 sNaN -> NaN Invalid_operation +andx858 and Inf sNaN -> NaN Invalid_operation +andx859 and NaN sNaN -> NaN Invalid_operation + +-- propagating NaNs +andx861 and NaN1 -Inf -> NaN Invalid_operation +andx862 and +NaN2 -1000 -> NaN Invalid_operation +andx863 and NaN3 1000 -> NaN Invalid_operation +andx864 and NaN4 Inf -> NaN Invalid_operation +andx865 and NaN5 +NaN6 -> NaN Invalid_operation +andx866 and -Inf NaN7 -> NaN Invalid_operation +andx867 and -1000 NaN8 -> NaN Invalid_operation +andx868 and 1000 NaN9 -> NaN Invalid_operation +andx869 and Inf +NaN10 -> NaN Invalid_operation +andx871 and sNaN11 -Inf -> NaN Invalid_operation +andx872 and sNaN12 -1000 -> NaN Invalid_operation +andx873 and sNaN13 1000 -> NaN Invalid_operation +andx874 and sNaN14 NaN17 -> NaN Invalid_operation +andx875 and sNaN15 sNaN18 -> NaN 
Invalid_operation +andx876 and NaN16 sNaN19 -> NaN Invalid_operation +andx877 and -Inf +sNaN20 -> NaN Invalid_operation +andx878 and -1000 sNaN21 -> NaN Invalid_operation +andx879 and 1000 sNaN22 -> NaN Invalid_operation +andx880 and Inf sNaN23 -> NaN Invalid_operation +andx881 and +NaN25 +sNaN24 -> NaN Invalid_operation +andx882 and -NaN26 NaN28 -> NaN Invalid_operation +andx883 and -sNaN27 sNaN29 -> NaN Invalid_operation +andx884 and 1000 -NaN30 -> NaN Invalid_operation +andx885 and 1000 -sNaN31 -> NaN Invalid_operation diff --git a/lib-python/2.7/test/decimaltestdata/class.decTest b/lib-python/2.7/test/decimaltestdata/class.decTest --- a/lib-python/2.7/test/decimaltestdata/class.decTest +++ b/lib-python/2.7/test/decimaltestdata/class.decTest @@ -1,131 +1,131 @@ ------------------------------------------------------------------------- --- class.decTest -- Class operations -- --- Copyright (c) IBM Corporation, 1981, 2008. All rights reserved. -- ------------------------------------------------------------------------- --- Please see the document "General Decimal Arithmetic Testcases" -- --- at http://www2.hursley.ibm.com/decimal for the description of -- --- these testcases. -- --- -- --- These testcases are experimental ('beta' versions), and they -- --- may contain errors. They are offered on an as-is basis. In -- --- particular, achieving the same results as the tests here is not -- --- a guarantee that an implementation complies with any Standard -- --- or specification. The tests are not exhaustive. -- --- -- --- Please send comments, suggestions, and corrections to the author: -- --- Mike Cowlishaw, IBM Fellow -- --- IBM UK, PO Box 31, Birmingham Road, Warwick CV34 5JL, UK -- --- mfc at uk.ibm.com -- ------------------------------------------------------------------------- -version: 2.59 - --- [New 2006.11.27] - -precision: 9 -maxExponent: 999 -minExponent: -999 -extended: 1 -clamp: 1 -rounding: half_even - -clasx001 class 0 -> +Zero -clasx002 class 0.00 -> +Zero -clasx003 class 0E+5 -> +Zero -clasx004 class 1E-1007 -> +Subnormal -clasx005 class 0.1E-999 -> +Subnormal -clasx006 class 0.99999999E-999 -> +Subnormal -clasx007 class 1.00000000E-999 -> +Normal -clasx008 class 1E-999 -> +Normal -clasx009 class 1E-100 -> +Normal -clasx010 class 1E-10 -> +Normal -clasx012 class 1E-1 -> +Normal -clasx013 class 1 -> +Normal -clasx014 class 2.50 -> +Normal -clasx015 class 100.100 -> +Normal -clasx016 class 1E+30 -> +Normal -clasx017 class 1E+999 -> +Normal -clasx018 class 9.99999999E+999 -> +Normal -clasx019 class Inf -> +Infinity - -clasx021 class -0 -> -Zero -clasx022 class -0.00 -> -Zero -clasx023 class -0E+5 -> -Zero -clasx024 class -1E-1007 -> -Subnormal -clasx025 class -0.1E-999 -> -Subnormal -clasx026 class -0.99999999E-999 -> -Subnormal -clasx027 class -1.00000000E-999 -> -Normal -clasx028 class -1E-999 -> -Normal -clasx029 class -1E-100 -> -Normal -clasx030 class -1E-10 -> -Normal -clasx032 class -1E-1 -> -Normal -clasx033 class -1 -> -Normal -clasx034 class -2.50 -> -Normal -clasx035 class -100.100 -> -Normal -clasx036 class -1E+30 -> -Normal -clasx037 class -1E+999 -> -Normal -clasx038 class -9.99999999E+999 -> -Normal -clasx039 class -Inf -> -Infinity - -clasx041 class NaN -> NaN -clasx042 class -NaN -> NaN -clasx043 class +NaN12345 -> NaN -clasx044 class sNaN -> sNaN -clasx045 class -sNaN -> sNaN -clasx046 class +sNaN12345 -> sNaN - - --- decimal64 bounds - -precision: 16 -maxExponent: 384 -minExponent: -383 -clamp: 1 -rounding: half_even - -clasx201 class 0 -> +Zero -clasx202 
class 0.00 -> +Zero -clasx203 class 0E+5 -> +Zero -clasx204 class 1E-396 -> +Subnormal -clasx205 class 0.1E-383 -> +Subnormal -clasx206 class 0.999999999999999E-383 -> +Subnormal -clasx207 class 1.000000000000000E-383 -> +Normal -clasx208 class 1E-383 -> +Normal -clasx209 class 1E-100 -> +Normal -clasx210 class 1E-10 -> +Normal -clasx212 class 1E-1 -> +Normal -clasx213 class 1 -> +Normal -clasx214 class 2.50 -> +Normal -clasx215 class 100.100 -> +Normal -clasx216 class 1E+30 -> +Normal -clasx217 class 1E+384 -> +Normal -clasx218 class 9.999999999999999E+384 -> +Normal -clasx219 class Inf -> +Infinity - -clasx221 class -0 -> -Zero -clasx222 class -0.00 -> -Zero -clasx223 class -0E+5 -> -Zero -clasx224 class -1E-396 -> -Subnormal -clasx225 class -0.1E-383 -> -Subnormal -clasx226 class -0.999999999999999E-383 -> -Subnormal -clasx227 class -1.000000000000000E-383 -> -Normal -clasx228 class -1E-383 -> -Normal -clasx229 class -1E-100 -> -Normal -clasx230 class -1E-10 -> -Normal -clasx232 class -1E-1 -> -Normal -clasx233 class -1 -> -Normal -clasx234 class -2.50 -> -Normal -clasx235 class -100.100 -> -Normal -clasx236 class -1E+30 -> -Normal -clasx237 class -1E+384 -> -Normal -clasx238 class -9.999999999999999E+384 -> -Normal -clasx239 class -Inf -> -Infinity - -clasx241 class NaN -> NaN -clasx242 class -NaN -> NaN -clasx243 class +NaN12345 -> NaN -clasx244 class sNaN -> sNaN -clasx245 class -sNaN -> sNaN -clasx246 class +sNaN12345 -> sNaN - - - +------------------------------------------------------------------------ +-- class.decTest -- Class operations -- +-- Copyright (c) IBM Corporation, 1981, 2008. All rights reserved. -- +------------------------------------------------------------------------ +-- Please see the document "General Decimal Arithmetic Testcases" -- +-- at http://www2.hursley.ibm.com/decimal for the description of -- +-- these testcases. -- +-- -- +-- These testcases are experimental ('beta' versions), and they -- +-- may contain errors. They are offered on an as-is basis. In -- +-- particular, achieving the same results as the tests here is not -- +-- a guarantee that an implementation complies with any Standard -- +-- or specification. The tests are not exhaustive. 
-- +-- -- +-- Please send comments, suggestions, and corrections to the author: -- +-- Mike Cowlishaw, IBM Fellow -- +-- IBM UK, PO Box 31, Birmingham Road, Warwick CV34 5JL, UK -- +-- mfc at uk.ibm.com -- +------------------------------------------------------------------------ +version: 2.59 + +-- [New 2006.11.27] + +precision: 9 +maxExponent: 999 +minExponent: -999 +extended: 1 +clamp: 1 +rounding: half_even + +clasx001 class 0 -> +Zero +clasx002 class 0.00 -> +Zero +clasx003 class 0E+5 -> +Zero +clasx004 class 1E-1007 -> +Subnormal +clasx005 class 0.1E-999 -> +Subnormal +clasx006 class 0.99999999E-999 -> +Subnormal +clasx007 class 1.00000000E-999 -> +Normal +clasx008 class 1E-999 -> +Normal +clasx009 class 1E-100 -> +Normal +clasx010 class 1E-10 -> +Normal +clasx012 class 1E-1 -> +Normal +clasx013 class 1 -> +Normal +clasx014 class 2.50 -> +Normal +clasx015 class 100.100 -> +Normal +clasx016 class 1E+30 -> +Normal +clasx017 class 1E+999 -> +Normal +clasx018 class 9.99999999E+999 -> +Normal +clasx019 class Inf -> +Infinity + +clasx021 class -0 -> -Zero +clasx022 class -0.00 -> -Zero +clasx023 class -0E+5 -> -Zero +clasx024 class -1E-1007 -> -Subnormal +clasx025 class -0.1E-999 -> -Subnormal +clasx026 class -0.99999999E-999 -> -Subnormal +clasx027 class -1.00000000E-999 -> -Normal +clasx028 class -1E-999 -> -Normal +clasx029 class -1E-100 -> -Normal +clasx030 class -1E-10 -> -Normal +clasx032 class -1E-1 -> -Normal +clasx033 class -1 -> -Normal +clasx034 class -2.50 -> -Normal +clasx035 class -100.100 -> -Normal +clasx036 class -1E+30 -> -Normal +clasx037 class -1E+999 -> -Normal +clasx038 class -9.99999999E+999 -> -Normal +clasx039 class -Inf -> -Infinity + +clasx041 class NaN -> NaN +clasx042 class -NaN -> NaN +clasx043 class +NaN12345 -> NaN +clasx044 class sNaN -> sNaN +clasx045 class -sNaN -> sNaN +clasx046 class +sNaN12345 -> sNaN + + +-- decimal64 bounds + +precision: 16 +maxExponent: 384 +minExponent: -383 +clamp: 1 +rounding: half_even + +clasx201 class 0 -> +Zero +clasx202 class 0.00 -> +Zero +clasx203 class 0E+5 -> +Zero +clasx204 class 1E-396 -> +Subnormal +clasx205 class 0.1E-383 -> +Subnormal +clasx206 class 0.999999999999999E-383 -> +Subnormal +clasx207 class 1.000000000000000E-383 -> +Normal +clasx208 class 1E-383 -> +Normal +clasx209 class 1E-100 -> +Normal +clasx210 class 1E-10 -> +Normal +clasx212 class 1E-1 -> +Normal +clasx213 class 1 -> +Normal +clasx214 class 2.50 -> +Normal +clasx215 class 100.100 -> +Normal +clasx216 class 1E+30 -> +Normal +clasx217 class 1E+384 -> +Normal +clasx218 class 9.999999999999999E+384 -> +Normal +clasx219 class Inf -> +Infinity + +clasx221 class -0 -> -Zero +clasx222 class -0.00 -> -Zero +clasx223 class -0E+5 -> -Zero +clasx224 class -1E-396 -> -Subnormal +clasx225 class -0.1E-383 -> -Subnormal +clasx226 class -0.999999999999999E-383 -> -Subnormal +clasx227 class -1.000000000000000E-383 -> -Normal +clasx228 class -1E-383 -> -Normal +clasx229 class -1E-100 -> -Normal +clasx230 class -1E-10 -> -Normal +clasx232 class -1E-1 -> -Normal +clasx233 class -1 -> -Normal +clasx234 class -2.50 -> -Normal +clasx235 class -100.100 -> -Normal +clasx236 class -1E+30 -> -Normal +clasx237 class -1E+384 -> -Normal +clasx238 class -9.999999999999999E+384 -> -Normal +clasx239 class -Inf -> -Infinity + +clasx241 class NaN -> NaN +clasx242 class -NaN -> NaN +clasx243 class +NaN12345 -> NaN +clasx244 class sNaN -> sNaN +clasx245 class -sNaN -> sNaN +clasx246 class +sNaN12345 -> sNaN + + + diff --git a/lib-python/2.7/test/decimaltestdata/comparetotal.decTest 
b/lib-python/2.7/test/decimaltestdata/comparetotal.decTest --- a/lib-python/2.7/test/decimaltestdata/comparetotal.decTest +++ b/lib-python/2.7/test/decimaltestdata/comparetotal.decTest @@ -1,798 +1,798 @@ ------------------------------------------------------------------------- --- comparetotal.decTest -- decimal comparison using total ordering -- --- Copyright (c) IBM Corporation, 1981, 2008. All rights reserved. -- ------------------------------------------------------------------------- --- Please see the document "General Decimal Arithmetic Testcases" -- --- at http://www2.hursley.ibm.com/decimal for the description of -- --- these testcases. -- --- -- --- These testcases are experimental ('beta' versions), and they -- --- may contain errors. They are offered on an as-is basis. In -- --- particular, achieving the same results as the tests here is not -- --- a guarantee that an implementation complies with any Standard -- --- or specification. The tests are not exhaustive. -- --- -- --- Please send comments, suggestions, and corrections to the author: -- --- Mike Cowlishaw, IBM Fellow -- --- IBM UK, PO Box 31, Birmingham Road, Warwick CV34 5JL, UK -- --- mfc at uk.ibm.com -- ------------------------------------------------------------------------- -version: 2.59 - --- Note that we cannot assume add/subtract tests cover paths adequately, --- here, because the code might be quite different (comparison cannot --- overflow or underflow, so actual subtractions are not necessary). --- Similarly, comparetotal will have some radically different paths --- than compare. - -extended: 1 -precision: 16 -rounding: half_up -maxExponent: 384 -minExponent: -383 - --- sanity checks -cotx001 comparetotal -2 -2 -> 0 -cotx002 comparetotal -2 -1 -> -1 -cotx003 comparetotal -2 0 -> -1 -cotx004 comparetotal -2 1 -> -1 -cotx005 comparetotal -2 2 -> -1 -cotx006 comparetotal -1 -2 -> 1 -cotx007 comparetotal -1 -1 -> 0 -cotx008 comparetotal -1 0 -> -1 -cotx009 comparetotal -1 1 -> -1 -cotx010 comparetotal -1 2 -> -1 -cotx011 comparetotal 0 -2 -> 1 -cotx012 comparetotal 0 -1 -> 1 -cotx013 comparetotal 0 0 -> 0 -cotx014 comparetotal 0 1 -> -1 -cotx015 comparetotal 0 2 -> -1 -cotx016 comparetotal 1 -2 -> 1 -cotx017 comparetotal 1 -1 -> 1 -cotx018 comparetotal 1 0 -> 1 -cotx019 comparetotal 1 1 -> 0 -cotx020 comparetotal 1 2 -> -1 -cotx021 comparetotal 2 -2 -> 1 -cotx022 comparetotal 2 -1 -> 1 -cotx023 comparetotal 2 0 -> 1 -cotx025 comparetotal 2 1 -> 1 -cotx026 comparetotal 2 2 -> 0 - -cotx031 comparetotal -20 -20 -> 0 -cotx032 comparetotal -20 -10 -> -1 -cotx033 comparetotal -20 00 -> -1 -cotx034 comparetotal -20 10 -> -1 -cotx035 comparetotal -20 20 -> -1 -cotx036 comparetotal -10 -20 -> 1 -cotx037 comparetotal -10 -10 -> 0 -cotx038 comparetotal -10 00 -> -1 -cotx039 comparetotal -10 10 -> -1 -cotx040 comparetotal -10 20 -> -1 -cotx041 comparetotal 00 -20 -> 1 -cotx042 comparetotal 00 -10 -> 1 -cotx043 comparetotal 00 00 -> 0 -cotx044 comparetotal 00 10 -> -1 -cotx045 comparetotal 00 20 -> -1 -cotx046 comparetotal 10 -20 -> 1 -cotx047 comparetotal 10 -10 -> 1 -cotx048 comparetotal 10 00 -> 1 -cotx049 comparetotal 10 10 -> 0 -cotx050 comparetotal 10 20 -> -1 -cotx051 comparetotal 20 -20 -> 1 -cotx052 comparetotal 20 -10 -> 1 -cotx053 comparetotal 20 00 -> 1 -cotx055 comparetotal 20 10 -> 1 -cotx056 comparetotal 20 20 -> 0 - -cotx061 comparetotal -2.0 -2.0 -> 0 -cotx062 comparetotal -2.0 -1.0 -> -1 -cotx063 comparetotal -2.0 0.0 -> -1 -cotx064 comparetotal -2.0 1.0 -> -1 -cotx065 comparetotal -2.0 2.0 -> -1 -cotx066 
comparetotal -1.0 -2.0 -> 1 -cotx067 comparetotal -1.0 -1.0 -> 0 -cotx068 comparetotal -1.0 0.0 -> -1 -cotx069 comparetotal -1.0 1.0 -> -1 -cotx070 comparetotal -1.0 2.0 -> -1 -cotx071 comparetotal 0.0 -2.0 -> 1 -cotx072 comparetotal 0.0 -1.0 -> 1 -cotx073 comparetotal 0.0 0.0 -> 0 -cotx074 comparetotal 0.0 1.0 -> -1 -cotx075 comparetotal 0.0 2.0 -> -1 -cotx076 comparetotal 1.0 -2.0 -> 1 -cotx077 comparetotal 1.0 -1.0 -> 1 -cotx078 comparetotal 1.0 0.0 -> 1 -cotx079 comparetotal 1.0 1.0 -> 0 -cotx080 comparetotal 1.0 2.0 -> -1 -cotx081 comparetotal 2.0 -2.0 -> 1 -cotx082 comparetotal 2.0 -1.0 -> 1 -cotx083 comparetotal 2.0 0.0 -> 1 -cotx085 comparetotal 2.0 1.0 -> 1 -cotx086 comparetotal 2.0 2.0 -> 0 - --- now some cases which might overflow if subtract were used -maxexponent: 999999999 -minexponent: -999999999 -cotx090 comparetotal 9.99999999E+999999999 9.99999999E+999999999 -> 0 -cotx091 comparetotal -9.99999999E+999999999 9.99999999E+999999999 -> -1 -cotx092 comparetotal 9.99999999E+999999999 -9.99999999E+999999999 -> 1 -cotx093 comparetotal -9.99999999E+999999999 -9.99999999E+999999999 -> 0 - --- Examples -cotx094 comparetotal 12.73 127.9 -> -1 -cotx095 comparetotal -127 12 -> -1 -cotx096 comparetotal 12.30 12.3 -> -1 -cotx097 comparetotal 12.30 12.30 -> 0 -cotx098 comparetotal 12.3 12.300 -> 1 -cotx099 comparetotal 12.3 NaN -> -1 - --- some differing length/exponent cases --- in this first group, compare would compare all equal -cotx100 comparetotal 7.0 7.0 -> 0 -cotx101 comparetotal 7.0 7 -> -1 -cotx102 comparetotal 7 7.0 -> 1 -cotx103 comparetotal 7E+0 7.0 -> 1 -cotx104 comparetotal 70E-1 7.0 -> 0 -cotx105 comparetotal 0.7E+1 7 -> 0 -cotx106 comparetotal 70E-1 7 -> -1 -cotx107 comparetotal 7.0 7E+0 -> -1 -cotx108 comparetotal 7.0 70E-1 -> 0 -cotx109 comparetotal 7 0.7E+1 -> 0 -cotx110 comparetotal 7 70E-1 -> 1 - -cotx120 comparetotal 8.0 7.0 -> 1 -cotx121 comparetotal 8.0 7 -> 1 -cotx122 comparetotal 8 7.0 -> 1 -cotx123 comparetotal 8E+0 7.0 -> 1 -cotx124 comparetotal 80E-1 7.0 -> 1 -cotx125 comparetotal 0.8E+1 7 -> 1 -cotx126 comparetotal 80E-1 7 -> 1 -cotx127 comparetotal 8.0 7E+0 -> 1 -cotx128 comparetotal 8.0 70E-1 -> 1 -cotx129 comparetotal 8 0.7E+1 -> 1 -cotx130 comparetotal 8 70E-1 -> 1 - -cotx140 comparetotal 8.0 9.0 -> -1 -cotx141 comparetotal 8.0 9 -> -1 -cotx142 comparetotal 8 9.0 -> -1 -cotx143 comparetotal 8E+0 9.0 -> -1 -cotx144 comparetotal 80E-1 9.0 -> -1 -cotx145 comparetotal 0.8E+1 9 -> -1 -cotx146 comparetotal 80E-1 9 -> -1 -cotx147 comparetotal 8.0 9E+0 -> -1 -cotx148 comparetotal 8.0 90E-1 -> -1 -cotx149 comparetotal 8 0.9E+1 -> -1 -cotx150 comparetotal 8 90E-1 -> -1 - --- and again, with sign changes -+ .. 
-cotx200 comparetotal -7.0 7.0 -> -1 -cotx201 comparetotal -7.0 7 -> -1 -cotx202 comparetotal -7 7.0 -> -1 -cotx203 comparetotal -7E+0 7.0 -> -1 -cotx204 comparetotal -70E-1 7.0 -> -1 -cotx205 comparetotal -0.7E+1 7 -> -1 -cotx206 comparetotal -70E-1 7 -> -1 -cotx207 comparetotal -7.0 7E+0 -> -1 -cotx208 comparetotal -7.0 70E-1 -> -1 -cotx209 comparetotal -7 0.7E+1 -> -1 -cotx210 comparetotal -7 70E-1 -> -1 - -cotx220 comparetotal -8.0 7.0 -> -1 -cotx221 comparetotal -8.0 7 -> -1 -cotx222 comparetotal -8 7.0 -> -1 -cotx223 comparetotal -8E+0 7.0 -> -1 -cotx224 comparetotal -80E-1 7.0 -> -1 -cotx225 comparetotal -0.8E+1 7 -> -1 -cotx226 comparetotal -80E-1 7 -> -1 -cotx227 comparetotal -8.0 7E+0 -> -1 -cotx228 comparetotal -8.0 70E-1 -> -1 -cotx229 comparetotal -8 0.7E+1 -> -1 -cotx230 comparetotal -8 70E-1 -> -1 - -cotx240 comparetotal -8.0 9.0 -> -1 -cotx241 comparetotal -8.0 9 -> -1 -cotx242 comparetotal -8 9.0 -> -1 -cotx243 comparetotal -8E+0 9.0 -> -1 -cotx244 comparetotal -80E-1 9.0 -> -1 -cotx245 comparetotal -0.8E+1 9 -> -1 -cotx246 comparetotal -80E-1 9 -> -1 -cotx247 comparetotal -8.0 9E+0 -> -1 -cotx248 comparetotal -8.0 90E-1 -> -1 -cotx249 comparetotal -8 0.9E+1 -> -1 -cotx250 comparetotal -8 90E-1 -> -1 - --- and again, with sign changes +- .. -cotx300 comparetotal 7.0 -7.0 -> 1 -cotx301 comparetotal 7.0 -7 -> 1 -cotx302 comparetotal 7 -7.0 -> 1 -cotx303 comparetotal 7E+0 -7.0 -> 1 -cotx304 comparetotal 70E-1 -7.0 -> 1 -cotx305 comparetotal .7E+1 -7 -> 1 -cotx306 comparetotal 70E-1 -7 -> 1 -cotx307 comparetotal 7.0 -7E+0 -> 1 -cotx308 comparetotal 7.0 -70E-1 -> 1 -cotx309 comparetotal 7 -.7E+1 -> 1 -cotx310 comparetotal 7 -70E-1 -> 1 - -cotx320 comparetotal 8.0 -7.0 -> 1 -cotx321 comparetotal 8.0 -7 -> 1 -cotx322 comparetotal 8 -7.0 -> 1 -cotx323 comparetotal 8E+0 -7.0 -> 1 -cotx324 comparetotal 80E-1 -7.0 -> 1 -cotx325 comparetotal .8E+1 -7 -> 1 -cotx326 comparetotal 80E-1 -7 -> 1 -cotx327 comparetotal 8.0 -7E+0 -> 1 -cotx328 comparetotal 8.0 -70E-1 -> 1 -cotx329 comparetotal 8 -.7E+1 -> 1 -cotx330 comparetotal 8 -70E-1 -> 1 - -cotx340 comparetotal 8.0 -9.0 -> 1 -cotx341 comparetotal 8.0 -9 -> 1 -cotx342 comparetotal 8 -9.0 -> 1 -cotx343 comparetotal 8E+0 -9.0 -> 1 -cotx344 comparetotal 80E-1 -9.0 -> 1 -cotx345 comparetotal .8E+1 -9 -> 1 -cotx346 comparetotal 80E-1 -9 -> 1 -cotx347 comparetotal 8.0 -9E+0 -> 1 -cotx348 comparetotal 8.0 -90E-1 -> 1 -cotx349 comparetotal 8 -.9E+1 -> 1 -cotx350 comparetotal 8 -90E-1 -> 1 - --- and again, with sign changes -- .. -cotx400 comparetotal -7.0 -7.0 -> 0 -cotx401 comparetotal -7.0 -7 -> 1 -cotx402 comparetotal -7 -7.0 -> -1 -cotx403 comparetotal -7E+0 -7.0 -> -1 -cotx404 comparetotal -70E-1 -7.0 -> 0 -cotx405 comparetotal -.7E+1 -7 -> 0 -cotx406 comparetotal -70E-1 -7 -> 1 -cotx407 comparetotal -7.0 -7E+0 -> 1 -cotx408 comparetotal -7.0 -70E-1 -> 0 -cotx409 comparetotal -7 -.7E+1 -> 0 From noreply at buildbot.pypy.org Mon Feb 6 10:17:18 2012 From: noreply at buildbot.pypy.org (arigo) Date: Mon, 6 Feb 2012 10:17:18 +0100 (CET) Subject: [pypy-commit] pypy default: Fix this test. Message-ID: <20120206091718.2F61C7107EC@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r52123:28b333b86328 Date: 2012-02-06 10:16 +0100 http://bitbucket.org/pypy/pypy/changeset/28b333b86328/ Log: Fix this test. diff --git a/pypy/module/pypyjit/test_pypy_c/test_call.py b/pypy/module/pypyjit/test_pypy_c/test_call.py --- a/pypy/module/pypyjit/test_pypy_c/test_call.py +++ b/pypy/module/pypyjit/test_pypy_c/test_call.py @@ -27,6 +27,7 @@ ... 
p53 = call_assembler(..., descr=...) guard_not_forced(descr=...) + keepalive(...) guard_no_exception(descr=...) ... """) From noreply at buildbot.pypy.org Mon Feb 6 10:28:49 2012 From: noreply at buildbot.pypy.org (hager) Date: Mon, 6 Feb 2012 10:28:49 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: kill more unused code Message-ID: <20120206092849.313967107EC@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r52124:aaf78782650f Date: 2012-02-06 10:26 +0100 http://bitbucket.org/pypy/pypy/changeset/aaf78782650f/ Log: kill more unused code diff --git a/pypy/jit/backend/ppc/ppcgen/autopath.py b/pypy/jit/backend/ppc/ppcgen/autopath.py deleted file mode 100644 --- a/pypy/jit/backend/ppc/ppcgen/autopath.py +++ /dev/null @@ -1,114 +0,0 @@ -""" -self cloning, automatic path configuration - -copy this into any subdirectory of pypy from which scripts need -to be run, typically all of the test subdirs. -The idea is that any such script simply issues - - import autopath - -and this will make sure that the parent directory containing "pypy" -is in sys.path. - -If you modify the master "autopath.py" version (in pypy/tool/autopath.py) -you can directly run it which will copy itself on all autopath.py files -it finds under the pypy root directory. - -This module always provides these attributes: - - pypydir pypy root directory path - this_dir directory where this autopath.py resides - -""" - - -def __dirinfo(part): - """ return (partdir, this_dir) and insert parent of partdir - into sys.path. If the parent directories don't have the part - an EnvironmentError is raised.""" - - import sys, os - try: - head = this_dir = os.path.realpath(os.path.dirname(__file__)) - except NameError: - head = this_dir = os.path.realpath(os.path.dirname(sys.argv[0])) - - while head: - partdir = head - head, tail = os.path.split(head) - if tail == part: - break - else: - raise EnvironmentError, "'%s' missing in '%r'" % (partdir, this_dir) - - pypy_root = os.path.join(head, '') - try: - sys.path.remove(head) - except ValueError: - pass - sys.path.insert(0, head) - - munged = {} - for name, mod in sys.modules.items(): - if '.' in name: - continue - fn = getattr(mod, '__file__', None) - if not isinstance(fn, str): - continue - newname = os.path.splitext(os.path.basename(fn))[0] - if not newname.startswith(part + '.'): - continue - path = os.path.join(os.path.dirname(os.path.realpath(fn)), '') - if path.startswith(pypy_root) and newname != part: - modpaths = os.path.normpath(path[len(pypy_root):]).split(os.sep) - if newname != '__init__': - modpaths.append(newname) - modpath = '.'.join(modpaths) - if modpath not in sys.modules: - munged[modpath] = mod - - for name, mod in munged.iteritems(): - if name not in sys.modules: - sys.modules[name] = mod - if '.' 
in name: - prename = name[:name.rfind('.')] - postname = name[len(prename)+1:] - if prename not in sys.modules: - __import__(prename) - if not hasattr(sys.modules[prename], postname): - setattr(sys.modules[prename], postname, mod) - - return partdir, this_dir - -def __clone(): - """ clone master version of autopath.py into all subdirs """ - from os.path import join, walk - if not this_dir.endswith(join('pypy','tool')): - raise EnvironmentError("can only clone master version " - "'%s'" % join(pypydir, 'tool',_myname)) - - - def sync_walker(arg, dirname, fnames): - if _myname in fnames: - fn = join(dirname, _myname) - f = open(fn, 'rwb+') - try: - if f.read() == arg: - print "checkok", fn - else: - print "syncing", fn - f = open(fn, 'w') - f.write(arg) - finally: - f.close() - s = open(join(pypydir, 'tool', _myname), 'rb').read() - walk(pypydir, sync_walker, s) - -_myname = 'autopath.py' - -# set guaranteed attributes - -pypydir, this_dir = __dirinfo('pypy') - -if __name__ == '__main__': - __clone() diff --git a/pypy/jit/backend/ppc/ppcgen/pystructs.py b/pypy/jit/backend/ppc/ppcgen/pystructs.py deleted file mode 100644 --- a/pypy/jit/backend/ppc/ppcgen/pystructs.py +++ /dev/null @@ -1,22 +0,0 @@ -class PyVarObject(object): - ob_size = 8 - -class PyObject(object): - ob_refcnt = 0 - ob_type = 4 - -class PyTupleObject(object): - ob_item = 12 - -class PyTypeObject(object): - tp_name = 12 - tp_basicsize = 16 - tp_itemsize = 20 - tp_dealloc = 24 - -class PyFloatObject(object): - ob_fval = 8 - -class PyIntObject(object): - ob_ival = 8 - diff --git a/pypy/jit/backend/ppc/ppcgen/rassemblermaker.py b/pypy/jit/backend/ppc/ppcgen/rassemblermaker.py deleted file mode 100644 --- a/pypy/jit/backend/ppc/ppcgen/rassemblermaker.py +++ /dev/null @@ -1,63 +0,0 @@ -from pypy.tool.sourcetools import compile2 -from pypy.rlib.rarithmetic import r_uint -from pypy.jit.backend.ppc.ppcgen.form import IDesc, IDupDesc - -## "opcode": ( 0, 5), -## "rA": (11, 15, 'unsigned', regname._R), -## "rB": (16, 20, 'unsigned', regname._R), -## "Rc": (31, 31), -## "rD": ( 6, 10, 'unsigned', regname._R), -## "OE": (21, 21), -## "XO2": (22, 30), - -## XO = Form("rD", "rA", "rB", "OE", "XO2", "Rc") - -## add = XO(31, XO2=266, OE=0, Rc=0) - -## def add(rD, rA, rB): -## v = 0 -## v |= (31&(2**(5-0+1)-1)) << (32-5-1) -## ... 
-##     return v
-
-def make_func(name, desc):
-    sig = []
-    fieldvalues = []
-    for field in desc.fields:
-        if field in desc.specializations:
-            fieldvalues.append((field, desc.specializations[field]))
-        else:
-            sig.append(field.name)
-            fieldvalues.append((field, field.name))
-    if isinstance(desc, IDupDesc):
-        for destfield, srcfield in desc.dupfields.iteritems():
-            fieldvalues.append((destfield, srcfield.name))
-    body = ['v = r_uint(0)']
-    assert 'v' not in sig # that wouldn't be funny
-    #body.append('print %r'%name + ', ' + ', '.join(["'%s:', %s"%(s, s) for s in sig]))
-    for field, value in fieldvalues:
-        if field.name == 'spr':
-            body.append('spr = (%s&31) << 5 | (%s >> 5 & 31)'%(value, value))
-            value = 'spr'
-        body.append('v |= (%3s & r_uint(%#05x)) << %d'%(value,
-                                                        field.mask,
-                                                        (32 - field.right - 1)))
-    body.append('self.emit(v)')
-    src = 'def %s(self, %s):\n    %s'%(name, ', '.join(sig), '\n    '.join(body))
-    d = {'r_uint':r_uint}
-    #print src
-    exec compile2(src) in d
-    return d[name]
-
-def make_rassembler(cls):
-    bases = [make_rassembler(b) for b in cls.__bases__]
-    ns = {}
-    for k, v in cls.__dict__.iteritems():
-        if isinstance(v, IDesc):
-            v = make_func(k, v)
-        ns[k] = v
-    rcls = type('R' + cls.__name__, tuple(bases), ns)
-    def emit(self, value):
-        self.insts.append(value)
-    rcls.emit = emit
-    return rcls
From noreply at buildbot.pypy.org  Mon Feb 6 10:35:28 2012
From: noreply at buildbot.pypy.org (fijal)
Date: Mon, 6 Feb 2012 10:35:28 +0100 (CET)
Subject: [pypy-commit] pypy default: start writing the release announcement
Message-ID: <20120206093528.163547107EC@wyvern.cs.uni-duesseldorf.de>

Author: Maciej Fijalkowski
Branch: 
Changeset: r52125:1be9e4030a4e
Date: 2012-02-06 11:34 +0200
http://bitbucket.org/pypy/pypy/changeset/1be9e4030a4e/

Log:	start writing the release announcement

diff --git a/pypy/doc/release-1.8.0.rst b/pypy/doc/release-1.8.0.rst
new file mode 100644
--- /dev/null
+++ b/pypy/doc/release-1.8.0.rst
@@ -0,0 +1,52 @@
+============================
+PyPy 1.8 - business as usual
+============================
+
+We're pleased to announce the 1.8 release of PyPy. As has become a habit,
+this release brings a lot of bugfixes, performance and memory improvements
+over the 1.7 release. The main highlight of the release is the introduction
+of list strategies, which make homogeneous lists more efficient both in terms
+of performance and memory. Otherwise it's "business as usual" in the sense
+that performance improved roughly 10% on average since the previous release.
+You can download the PyPy 1.8 release here:
+
+    http://pypy.org/download.html
+
+What is PyPy?
+=============
+
+PyPy is a very compliant Python interpreter, almost a drop-in replacement for
+CPython 2.7. It's fast (`pypy 1.8 and cpython 2.7.1`_ performance comparison)
+due to its integrated tracing JIT compiler.
+
+This release supports x86 machines running Linux 32/64, Mac OS X 32/64 or
+Windows 32. Windows 64 work is ongoing, but not yet natively supported.
+
+.. _`pypy 1.8 and cpython 2.7.1`: http://speed.pypy.org
+
+
+Highlights
+==========
+
+* List strategies. Now lists that contain only ints or only floats should
+  be as efficient as storing them in a binary-packed array. It also improves
+  the JIT performance in places that use such lists. There are also special
+  strategies for unicode and string lists.
+
+* As usual, numerous performance improvements. There are too many Python
+  constructs that now run faster to list them all here.
+
+* Bugfixes and compatibility fixes with CPython.
+ +* Windows fixes. + +* NumPy effort progress, for the exact list of things that have been done, + consult the `numpy status page`_. A tentative list of things that has + been done: + + xxxx # list it, multidim arrays in particular + +* Fundraising XXX + +.. _`numpy status page`: xxx +.. _`numpy status update blog report`: xxx From noreply at buildbot.pypy.org Mon Feb 6 10:35:29 2012 From: noreply at buildbot.pypy.org (fijal) Date: Mon, 6 Feb 2012 10:35:29 +0100 (CET) Subject: [pypy-commit] pypy default: merge default Message-ID: <20120206093529.6358B7107EC@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: Changeset: r52126:ad2705041965 Date: 2012-02-06 11:35 +0200 http://bitbucket.org/pypy/pypy/changeset/ad2705041965/ Log: merge default diff --git a/pypy/module/pypyjit/test_pypy_c/test_call.py b/pypy/module/pypyjit/test_pypy_c/test_call.py --- a/pypy/module/pypyjit/test_pypy_c/test_call.py +++ b/pypy/module/pypyjit/test_pypy_c/test_call.py @@ -27,6 +27,7 @@ ... p53 = call_assembler(..., descr=...) guard_not_forced(descr=...) + keepalive(...) guard_no_exception(descr=...) ... """) From noreply at buildbot.pypy.org Mon Feb 6 11:05:13 2012 From: noreply at buildbot.pypy.org (hager) Date: Mon, 6 Feb 2012 11:05:13 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: fix imports in test_ppc.py Message-ID: <20120206100513.863537107EC@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r52127:93e0d769bf2a Date: 2012-02-06 11:04 +0100 http://bitbucket.org/pypy/pypy/changeset/93e0d769bf2a/ Log: fix imports in test_ppc.py diff --git a/pypy/jit/backend/ppc/ppcgen/test/test_ppc.py b/pypy/jit/backend/ppc/ppcgen/test/test_ppc.py --- a/pypy/jit/backend/ppc/ppcgen/test/test_ppc.py +++ b/pypy/jit/backend/ppc/ppcgen/test/test_ppc.py @@ -2,10 +2,9 @@ import random, sys, os from pypy.jit.backend.ppc.ppcgen.codebuilder import BasicPPCAssembler, PPCBuilder -from pypy.jit.backend.ppc.ppcgen.symbol_lookup import lookup from pypy.jit.backend.ppc.ppcgen.regname import * from pypy.jit.backend.ppc.ppcgen.register import * -from pypy.jit.backend.ppc.ppcgen import form, pystructs +from pypy.jit.backend.ppc.ppcgen import form from pypy.jit.backend.detect_cpu import autodetect_main_model from pypy.jit.backend.ppc.ppcgen.arch import IS_PPC_32, IS_PPC_64, WORD From noreply at buildbot.pypy.org Mon Feb 6 11:25:09 2012 From: noreply at buildbot.pypy.org (hager) Date: Mon, 6 Feb 2012 11:25:09 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: ppc/regalloc.py not used Message-ID: <20120206102509.74BA77107EC@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r52128:9f1e30dd076f Date: 2012-02-06 11:24 +0100 http://bitbucket.org/pypy/pypy/changeset/9f1e30dd076f/ Log: ppc/regalloc.py not used diff --git a/pypy/jit/backend/ppc/regalloc.py b/pypy/jit/backend/ppc/regalloc.py deleted file mode 100644 --- a/pypy/jit/backend/ppc/regalloc.py +++ /dev/null @@ -1,213 +0,0 @@ -from pypy.jit.codegen.ppc.instruction import \ - gprs, fprs, crfs, ctr, \ - NO_REGISTER, GP_REGISTER, FP_REGISTER, CR_FIELD, CT_REGISTER, \ - CMPInsn, Spill, Unspill, stack_slot, \ - rSCRATCH - -from pypy.jit.codegen.ppc.conftest import option - -DEBUG_PRINT = option.debug_print - -class RegisterAllocation: - def __init__(self, freeregs, initial_mapping, initial_spill_offset): - if DEBUG_PRINT: - print - print "RegisterAllocation __init__", initial_mapping.items() - - self.insns = [] # output list of instructions - - # registers with dead values - self.freeregs = {} - for regcls in 
freeregs: - self.freeregs[regcls] = freeregs[regcls][:] - - self.var2loc = {} # maps Vars to AllocationSlots - self.lru = [] # least-recently-used list of vars; first is oldest. - # contains all vars in registers, and no vars on stack - - self.spill_offset = initial_spill_offset # where to put next spilled - # value, relative to rFP, - # measured in bytes - self.free_stack_slots = [] # a free list for stack slots - - # go through the initial mapping and initialize the data structures - for var, loc in initial_mapping.iteritems(): - self.set(var, loc) - if loc.is_register: - if loc.alloc in self.freeregs[loc.regclass]: - self.freeregs[loc.regclass].remove(loc.alloc) - self.lru.append(var) - else: - assert loc.offset >= self.spill_offset - - self.labels_to_tell_spill_offset_to = [] - self.builders_to_tell_spill_offset_to = [] - - def set(self, var, loc): - assert var not in self.var2loc - self.var2loc[var] = loc - - def forget(self, var, loc): - assert self.var2loc[var] is loc - del self.var2loc[var] - - def loc_of(self, var): - return self.var2loc[var] - - def spill_slot(self): - """ Returns an unused stack location. """ - if self.free_stack_slots: - return self.free_stack_slots.pop() - else: - self.spill_offset -= 4 - return stack_slot(self.spill_offset) - - def spill(self, reg, argtospill): - if argtospill in self.lru: - self.lru.remove(argtospill) - self.forget(argtospill, reg) - spillslot = self.spill_slot() - if reg.regclass != GP_REGISTER: - self.insns.append(reg.move_to_gpr(0)) - reg = gprs[0] - self.insns.append(Spill(argtospill, reg, spillslot)) - self.set(argtospill, spillslot) - - def _allocate_reg(self, regclass, newarg): - - # check if there is a register available - freeregs = self.freeregs[regclass] - - if freeregs: - reg = freeregs.pop().make_loc() - self.set(newarg, reg) - if DEBUG_PRINT: - print "allocate_reg: Putting %r into fresh register %r" % (newarg, reg) - return reg - - # if not, find something to spill - for i in range(len(self.lru)): - argtospill = self.lru[i] - reg = self.loc_of(argtospill) - assert reg.is_register - if reg.regclass == regclass: - del self.lru[i] - break - else: - assert 0 - - # Move the value we are spilling onto the stack, both in the - # data structures and in the instructions: - - self.spill(reg, argtospill) - - if DEBUG_PRINT: - print "allocate_reg: Spilled %r from %r to %r." % (argtospill, reg, self.loc_of(argtospill)) - - # update data structures to put newarg into the register - reg = reg.alloc.make_loc() - self.set(newarg, reg) - if DEBUG_PRINT: - print "allocate_reg: Put %r in stolen reg %r." % (newarg, reg) - return reg - - def _promote(self, arg): - if arg in self.lru: - self.lru.remove(arg) - self.lru.append(arg) - - def allocate_for_insns(self, insns): - from pypy.jit.codegen.ppc.rgenop import Var - - insns2 = [] - - # make a pass through the instructions, loading constants into - # Vars where needed. 
- for insn in insns: - newargs = [] - for arg in insn.reg_args: - if not isinstance(arg, Var): - newarg = Var() - arg.load(insns2, newarg) - newargs.append(newarg) - else: - newargs.append(arg) - insn.reg_args[0:len(newargs)] = newargs - insns2.append(insn) - - # Walk through instructions in forward order - for insn in insns2: - - if DEBUG_PRINT: - print "Processing instruction" - print insn - print "LRU list was:", self.lru - print 'located at', [self.loc_of(a) for a in self.lru] - - # put things into the lru - for arg in insn.reg_args: - self._promote(arg) - if insn.result: - self._promote(insn.result) - if DEBUG_PRINT: - print "LRU list is now:", self.lru - print 'located at', [self.loc_of(a) for a in self.lru if a is not insn.result] - - # We need to allocate a register for each used - # argument that is not already in one - for i in range(len(insn.reg_args)): - arg = insn.reg_args[i] - argcls = insn.reg_arg_regclasses[i] - if DEBUG_PRINT: - print "Allocating register for", arg, "..." - argloc = self.loc_of(arg) - if DEBUG_PRINT: - print "currently in", argloc - - if not argloc.is_register: - # It has no register now because it has been spilled - self.forget(arg, argloc) - newargloc = self._allocate_reg(argcls, arg) - if DEBUG_PRINT: - print "unspilling to", newargloc - self.insns.append(Unspill(arg, newargloc, argloc)) - self.free_stack_slots.append(argloc) - elif argloc.regclass != argcls: - # it's in the wrong kind of register - # (this code is excessively confusing) - self.forget(arg, argloc) - self.freeregs[argloc.regclass].append(argloc.alloc) - if argloc.regclass != GP_REGISTER: - if argcls == GP_REGISTER: - gpr = self._allocate_reg(GP_REGISTER, arg).number - else: - gpr = rSCRATCH - self.insns.append( - argloc.move_to_gpr(gpr)) - else: - gpr = argloc.number - if argcls != GP_REGISTER: - newargloc = self._allocate_reg(argcls, arg) - self.insns.append( - newargloc.move_from_gpr(gpr)) - else: - if DEBUG_PRINT: - print "it was in ", argloc - pass - - # Need to allocate a register for the destination - assert not insn.result or insn.result not in self.var2loc - if insn.result_regclass != NO_REGISTER: - if DEBUG_PRINT: - print "Allocating register for result %r..." 
% (insn.result,) - resultreg = self._allocate_reg(insn.result_regclass, insn.result) - insn.allocate(self) - if DEBUG_PRINT: - print insn - print - self.insns.append(insn) - #print 'allocation done' - #for i in self.insns: - # print i - #print self.var2loc - return self.insns From noreply at buildbot.pypy.org Mon Feb 6 11:50:19 2012 From: noreply at buildbot.pypy.org (fijal) Date: Mon, 6 Feb 2012 11:50:19 +0100 (CET) Subject: [pypy-commit] pypy numpy-record-dtypes: a banch to implement record dtypes Message-ID: <20120206105019.3D67F7107EC@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-record-dtypes Changeset: r52129:52434fe40e76 Date: 2012-02-06 12:45 +0200 http://bitbucket.org/pypy/pypy/changeset/52434fe40e76/ Log: a banch to implement record dtypes diff --git a/pypy/module/micronumpy/interp_dtype.py b/pypy/module/micronumpy/interp_dtype.py --- a/pypy/module/micronumpy/interp_dtype.py +++ b/pypy/module/micronumpy/interp_dtype.py @@ -3,10 +3,10 @@ from pypy.interpreter.gateway import interp2app from pypy.interpreter.typedef import (TypeDef, GetSetProperty, interp_attrproperty, interp_attrproperty_w) -from pypy.module.micronumpy import types, signature, interp_boxes +from pypy.module.micronumpy import types, interp_boxes from pypy.rlib.objectmodel import specialize from pypy.rlib.rarithmetic import LONG_BIT -from pypy.rpython.lltypesystem import lltype, rffi +from pypy.rpython.lltypesystem import lltype UNSIGNEDLTR = "u" From noreply at buildbot.pypy.org Mon Feb 6 12:00:08 2012 From: noreply at buildbot.pypy.org (hager) Date: Mon, 6 Feb 2012 12:00:08 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: kill ppcgen directory and move stuff into ppc directory Message-ID: <20120206110008.5CE887107EC@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r52130:27d728c3294a Date: 2012-02-06 11:59 +0100 http://bitbucket.org/pypy/pypy/changeset/27d728c3294a/ Log: kill ppcgen directory and move stuff into ppc directory diff --git a/pypy/jit/backend/ppc/ppcgen/_ppcgen.c b/pypy/jit/backend/ppc/_ppcgen.c rename from pypy/jit/backend/ppc/ppcgen/_ppcgen.c rename to pypy/jit/backend/ppc/_ppcgen.c diff --git a/pypy/jit/backend/ppc/ppcgen/arch.py b/pypy/jit/backend/ppc/arch.py rename from pypy/jit/backend/ppc/ppcgen/arch.py rename to pypy/jit/backend/ppc/arch.py --- a/pypy/jit/backend/ppc/ppcgen/arch.py +++ b/pypy/jit/backend/ppc/arch.py @@ -1,8 +1,8 @@ # Constants that depend on whether we are on 32-bit or 64-bit -from pypy.jit.backend.ppc.ppcgen.register import (NONVOLATILES, - NONVOLATILES_FLOAT, - MANAGED_REGS) +from pypy.jit.backend.ppc.register import (NONVOLATILES, + NONVOLATILES_FLOAT, + MANAGED_REGS) import sys if sys.maxint == (2**31 - 1): diff --git a/pypy/jit/backend/ppc/ppcgen/asmfunc.py b/pypy/jit/backend/ppc/asmfunc.py rename from pypy/jit/backend/ppc/ppcgen/asmfunc.py rename to pypy/jit/backend/ppc/asmfunc.py --- a/pypy/jit/backend/ppc/ppcgen/asmfunc.py +++ b/pypy/jit/backend/ppc/asmfunc.py @@ -4,7 +4,7 @@ from pypy.jit.backend.ppc.codebuf import MachineCodeBlockWrapper from pypy.jit.backend.llsupport.asmmemmgr import AsmMemoryManager from pypy.rpython.lltypesystem import lltype, rffi -from pypy.jit.backend.ppc.ppcgen.arch import IS_PPC_32, IS_PPC_64, WORD +from pypy.jit.backend.ppc.arch import IS_PPC_32, IS_PPC_64, WORD _ppcgen = None diff --git a/pypy/jit/backend/ppc/ppcgen/assembler.py b/pypy/jit/backend/ppc/assembler.py rename from pypy/jit/backend/ppc/ppcgen/assembler.py rename to pypy/jit/backend/ppc/assembler.py --- 
a/pypy/jit/backend/ppc/ppcgen/assembler.py +++ b/pypy/jit/backend/ppc/assembler.py @@ -1,5 +1,5 @@ import os -from pypy.jit.backend.ppc.ppcgen import form +from pypy.jit.backend.ppc import form # don't be fooled by the fact that there's some separation between a # generic assembler class and a PPC assembler class... there's @@ -62,7 +62,7 @@ def assemble(self, dump=os.environ.has_key('PPY_DEBUG')): insns = self.assemble0(dump) - from pypy.jit.backend.ppc.ppcgen import asmfunc + from pypy.jit.backend.ppc import asmfunc c = asmfunc.AsmCode(len(insns)*4) for i in insns: c.emit(i) diff --git a/pypy/jit/backend/ppc/ppcgen/codebuilder.py b/pypy/jit/backend/ppc/codebuilder.py rename from pypy/jit/backend/ppc/ppcgen/codebuilder.py rename to pypy/jit/backend/ppc/codebuilder.py --- a/pypy/jit/backend/ppc/ppcgen/codebuilder.py +++ b/pypy/jit/backend/ppc/codebuilder.py @@ -1,16 +1,16 @@ import os import struct -from pypy.jit.backend.ppc.ppcgen.ppc_form import PPCForm as Form -from pypy.jit.backend.ppc.ppcgen.ppc_field import ppc_fields -from pypy.jit.backend.ppc.ppcgen.regalloc import (TempInt, PPCFrameManager, +from pypy.jit.backend.ppc.ppc_form import PPCForm as Form +from pypy.jit.backend.ppc.ppc_field import ppc_fields +from pypy.jit.backend.ppc.regalloc import (TempInt, PPCFrameManager, Regalloc) -from pypy.jit.backend.ppc.ppcgen.assembler import Assembler -from pypy.jit.backend.ppc.ppcgen.symbol_lookup import lookup -from pypy.jit.backend.ppc.ppcgen.arch import (IS_PPC_32, WORD, NONVOLATILES, +from pypy.jit.backend.ppc.assembler import Assembler +from pypy.jit.backend.ppc.symbol_lookup import lookup +from pypy.jit.backend.ppc.arch import (IS_PPC_32, WORD, NONVOLATILES, GPR_SAVE_AREA, IS_PPC_64) -from pypy.jit.backend.ppc.ppcgen.helper.assembler import gen_emit_cmp_op -import pypy.jit.backend.ppc.ppcgen.register as r -import pypy.jit.backend.ppc.ppcgen.condition as c +from pypy.jit.backend.ppc.helper.assembler import gen_emit_cmp_op +import pypy.jit.backend.ppc.register as r +import pypy.jit.backend.ppc.condition as c from pypy.jit.metainterp.history import (Const, ConstPtr, JitCellToken, TargetToken, AbstractFailDescr) from pypy.jit.backend.llsupport.asmmemmgr import (BlockBuilderMixin, AsmMemoryManager, MachineDataBlockWrapper) diff --git a/pypy/jit/backend/ppc/ppcgen/condition.py b/pypy/jit/backend/ppc/condition.py rename from pypy/jit/backend/ppc/ppcgen/condition.py rename to pypy/jit/backend/ppc/condition.py diff --git a/pypy/jit/backend/ppc/ppcgen/field.py b/pypy/jit/backend/ppc/field.py rename from pypy/jit/backend/ppc/ppcgen/field.py rename to pypy/jit/backend/ppc/field.py diff --git a/pypy/jit/backend/ppc/ppcgen/form.py b/pypy/jit/backend/ppc/form.py rename from pypy/jit/backend/ppc/ppcgen/form.py rename to pypy/jit/backend/ppc/form.py diff --git a/pypy/jit/backend/ppc/ppcgen/func_builder.py b/pypy/jit/backend/ppc/func_builder.py rename from pypy/jit/backend/ppc/ppcgen/func_builder.py rename to pypy/jit/backend/ppc/func_builder.py --- a/pypy/jit/backend/ppc/ppcgen/func_builder.py +++ b/pypy/jit/backend/ppc/func_builder.py @@ -1,6 +1,6 @@ -from pypy.jit.backend.ppc.ppcgen.ppc_assembler import PPCAssembler -from pypy.jit.backend.ppc.ppcgen.symbol_lookup import lookup -from pypy.jit.backend.ppc.ppcgen.regname import * +from pypy.jit.backend.ppc.ppc_assembler import PPCAssembler +from pypy.jit.backend.ppc.symbol_lookup import lookup +from pypy.jit.backend.ppc.regname import * def load_arg(code, argi, typecode): rD = r3+argi diff --git a/pypy/jit/backend/ppc/ppcgen/helper/__init__.py 
b/pypy/jit/backend/ppc/helper/__init__.py rename from pypy/jit/backend/ppc/ppcgen/helper/__init__.py rename to pypy/jit/backend/ppc/helper/__init__.py diff --git a/pypy/jit/backend/ppc/ppcgen/helper/assembler.py b/pypy/jit/backend/ppc/helper/assembler.py rename from pypy/jit/backend/ppc/ppcgen/helper/assembler.py rename to pypy/jit/backend/ppc/helper/assembler.py --- a/pypy/jit/backend/ppc/ppcgen/helper/assembler.py +++ b/pypy/jit/backend/ppc/helper/assembler.py @@ -1,10 +1,10 @@ -import pypy.jit.backend.ppc.ppcgen.condition as c +import pypy.jit.backend.ppc.condition as c from pypy.rlib.rarithmetic import r_uint, r_longlong, intmask -from pypy.jit.backend.ppc.ppcgen.arch import (MAX_REG_PARAMS, IS_PPC_32, WORD, +from pypy.jit.backend.ppc.arch import (MAX_REG_PARAMS, IS_PPC_32, WORD, BACKCHAIN_SIZE) from pypy.jit.metainterp.history import FLOAT from pypy.rlib.unroll import unrolling_iterable -import pypy.jit.backend.ppc.ppcgen.register as r +import pypy.jit.backend.ppc.register as r from pypy.rpython.lltypesystem import rffi, lltype def gen_emit_cmp_op(condition, signed=True): diff --git a/pypy/jit/backend/ppc/ppcgen/helper/regalloc.py b/pypy/jit/backend/ppc/helper/regalloc.py rename from pypy/jit/backend/ppc/ppcgen/helper/regalloc.py rename to pypy/jit/backend/ppc/helper/regalloc.py diff --git a/pypy/jit/backend/ppc/ppcgen/jump.py b/pypy/jit/backend/ppc/jump.py rename from pypy/jit/backend/ppc/ppcgen/jump.py rename to pypy/jit/backend/ppc/jump.py --- a/pypy/jit/backend/ppc/ppcgen/jump.py +++ b/pypy/jit/backend/ppc/jump.py @@ -76,7 +76,7 @@ src_locations2, dst_locations2, tmpreg2): # find and push the xmm stack locations from src_locations2 that # are going to be overwritten by dst_locations1 - from pypy.jit.backend.ppc.ppcgen.arch import WORD + from pypy.jit.backend.ppc.arch import WORD extrapushes = [] dst_keys = {} for loc in dst_locations1: diff --git a/pypy/jit/backend/ppc/ppcgen/locations.py b/pypy/jit/backend/ppc/locations.py rename from pypy/jit/backend/ppc/ppcgen/locations.py rename to pypy/jit/backend/ppc/locations.py diff --git a/pypy/jit/backend/ppc/ppcgen/opassembler.py b/pypy/jit/backend/ppc/opassembler.py rename from pypy/jit/backend/ppc/ppcgen/opassembler.py rename to pypy/jit/backend/ppc/opassembler.py --- a/pypy/jit/backend/ppc/ppcgen/opassembler.py +++ b/pypy/jit/backend/ppc/opassembler.py @@ -1,19 +1,19 @@ -from pypy.jit.backend.ppc.ppcgen.helper.assembler import (gen_emit_cmp_op, +from pypy.jit.backend.ppc.helper.assembler import (gen_emit_cmp_op, gen_emit_unary_cmp_op) -import pypy.jit.backend.ppc.ppcgen.condition as c -import pypy.jit.backend.ppc.ppcgen.register as r -from pypy.jit.backend.ppc.ppcgen.arch import (IS_PPC_32, WORD, +import pypy.jit.backend.ppc.condition as c +import pypy.jit.backend.ppc.register as r +from pypy.jit.backend.ppc.arch import (IS_PPC_32, WORD, GPR_SAVE_AREA, BACKCHAIN_SIZE, MAX_REG_PARAMS) from pypy.jit.metainterp.history import (JitCellToken, TargetToken, Box, AbstractFailDescr, FLOAT, INT, REF) from pypy.rlib.objectmodel import we_are_translated -from pypy.jit.backend.ppc.ppcgen.helper.assembler import (count_reg_args, +from pypy.jit.backend.ppc.helper.assembler import (count_reg_args, Saved_Volatiles) -from pypy.jit.backend.ppc.ppcgen.jump import remap_frame_layout -from pypy.jit.backend.ppc.ppcgen.codebuilder import OverwritingBuilder -from pypy.jit.backend.ppc.ppcgen.regalloc import TempPtr, TempInt +from pypy.jit.backend.ppc.jump import remap_frame_layout +from pypy.jit.backend.ppc.codebuilder import OverwritingBuilder +from 
pypy.jit.backend.ppc.regalloc import TempPtr, TempInt from pypy.jit.backend.llsupport import symbolic from pypy.rpython.lltypesystem import rstr, rffi, lltype from pypy.jit.metainterp.resoperation import rop diff --git a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py b/pypy/jit/backend/ppc/ppc_assembler.py rename from pypy/jit/backend/ppc/ppcgen/ppc_assembler.py rename to pypy/jit/backend/ppc/ppc_assembler.py --- a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py +++ b/pypy/jit/backend/ppc/ppc_assembler.py @@ -1,27 +1,27 @@ import os import struct -from pypy.jit.backend.ppc.ppcgen.ppc_form import PPCForm as Form -from pypy.jit.backend.ppc.ppcgen.ppc_field import ppc_fields -from pypy.jit.backend.ppc.ppcgen.regalloc import (TempInt, PPCFrameManager, - Regalloc) -from pypy.jit.backend.ppc.ppcgen.assembler import Assembler -from pypy.jit.backend.ppc.ppcgen.opassembler import OpAssembler -from pypy.jit.backend.ppc.ppcgen.symbol_lookup import lookup -from pypy.jit.backend.ppc.ppcgen.codebuilder import PPCBuilder -from pypy.jit.backend.ppc.ppcgen.jump import remap_frame_layout -from pypy.jit.backend.ppc.ppcgen.arch import (IS_PPC_32, IS_PPC_64, WORD, - NONVOLATILES, MAX_REG_PARAMS, - GPR_SAVE_AREA, BACKCHAIN_SIZE, - FPR_SAVE_AREA, - FLOAT_INT_CONVERSION, FORCE_INDEX, - SIZE_LOAD_IMM_PATCH_SP) -from pypy.jit.backend.ppc.ppcgen.helper.assembler import (gen_emit_cmp_op, - encode32, encode64, - decode32, decode64, - count_reg_args, - Saved_Volatiles) -import pypy.jit.backend.ppc.ppcgen.register as r -import pypy.jit.backend.ppc.ppcgen.condition as c +from pypy.jit.backend.ppc.ppc_form import PPCForm as Form +from pypy.jit.backend.ppc.ppc_field import ppc_fields +from pypy.jit.backend.ppc.regalloc import (TempInt, PPCFrameManager, + Regalloc) +from pypy.jit.backend.ppc.assembler import Assembler +from pypy.jit.backend.ppc.opassembler import OpAssembler +from pypy.jit.backend.ppc.symbol_lookup import lookup +from pypy.jit.backend.ppc.codebuilder import PPCBuilder +from pypy.jit.backend.ppc.jump import remap_frame_layout +from pypy.jit.backend.ppc.arch import (IS_PPC_32, IS_PPC_64, WORD, + NONVOLATILES, MAX_REG_PARAMS, + GPR_SAVE_AREA, BACKCHAIN_SIZE, + FPR_SAVE_AREA, + FLOAT_INT_CONVERSION, FORCE_INDEX, + SIZE_LOAD_IMM_PATCH_SP) +from pypy.jit.backend.ppc.helper.assembler import (gen_emit_cmp_op, + encode32, encode64, + decode32, decode64, + count_reg_args, + Saved_Volatiles) +import pypy.jit.backend.ppc.register as r +import pypy.jit.backend.ppc.condition as c from pypy.jit.metainterp.history import (Const, ConstPtr, JitCellToken, TargetToken, AbstractFailDescr) from pypy.jit.backend.llsupport.asmmemmgr import (BlockBuilderMixin, @@ -40,7 +40,7 @@ from pypy.rpython.annlowlevel import llhelper from pypy.rlib.objectmodel import we_are_translated from pypy.rpython.lltypesystem.lloperation import llop -from pypy.jit.backend.ppc.ppcgen.locations import StackLocation, get_spp_offset +from pypy.jit.backend.ppc.locations import StackLocation, get_spp_offset memcpy_fn = rffi.llexternal('memcpy', [llmemory.Address, llmemory.Address, rffi.SIZE_T], lltype.Void, diff --git a/pypy/jit/backend/ppc/ppcgen/ppc_field.py b/pypy/jit/backend/ppc/ppc_field.py rename from pypy/jit/backend/ppc/ppcgen/ppc_field.py rename to pypy/jit/backend/ppc/ppc_field.py --- a/pypy/jit/backend/ppc/ppcgen/ppc_field.py +++ b/pypy/jit/backend/ppc/ppc_field.py @@ -1,5 +1,5 @@ -from pypy.jit.backend.ppc.ppcgen.field import Field -from pypy.jit.backend.ppc.ppcgen import regname +from pypy.jit.backend.ppc.field import Field +from pypy.jit.backend.ppc 
import regname fields = { # bit margins are *inclusive*! (and bit 0 is # most-significant, 31 least significant) diff --git a/pypy/jit/backend/ppc/ppcgen/ppc_form.py b/pypy/jit/backend/ppc/ppc_form.py rename from pypy/jit/backend/ppc/ppcgen/ppc_form.py rename to pypy/jit/backend/ppc/ppc_form.py --- a/pypy/jit/backend/ppc/ppcgen/ppc_form.py +++ b/pypy/jit/backend/ppc/ppc_form.py @@ -1,5 +1,5 @@ -from pypy.jit.backend.ppc.ppcgen.form import Form -from pypy.jit.backend.ppc.ppcgen.ppc_field import ppc_fields +from pypy.jit.backend.ppc.form import Form +from pypy.jit.backend.ppc.ppc_field import ppc_fields class PPCForm(Form): fieldmap = ppc_fields diff --git a/pypy/jit/backend/ppc/ppcgen/__init__.py b/pypy/jit/backend/ppc/ppcgen/__init__.py deleted file mode 100644 diff --git a/pypy/jit/backend/ppc/ppcgen/test/__init__.py b/pypy/jit/backend/ppc/ppcgen/test/__init__.py deleted file mode 100644 diff --git a/pypy/jit/backend/ppc/ppcgen/regalloc.py b/pypy/jit/backend/ppc/regalloc.py rename from pypy/jit/backend/ppc/ppcgen/regalloc.py rename to pypy/jit/backend/ppc/regalloc.py --- a/pypy/jit/backend/ppc/ppcgen/regalloc.py +++ b/pypy/jit/backend/ppc/regalloc.py @@ -1,26 +1,26 @@ from pypy.jit.backend.llsupport.regalloc import (RegisterManager, FrameManager, TempBox, compute_vars_longevity) -from pypy.jit.backend.ppc.ppcgen.arch import (WORD, MY_COPY_OF_REGS) -from pypy.jit.backend.ppc.ppcgen.jump import (remap_frame_layout_mixed, - remap_frame_layout) -from pypy.jit.backend.ppc.ppcgen.locations import imm -from pypy.jit.backend.ppc.ppcgen.helper.regalloc import (_check_imm_arg, - check_imm_box, - prepare_cmp_op, - prepare_unary_int_op, - prepare_binary_int_op, - prepare_binary_int_op_with_imm, - prepare_unary_cmp) +from pypy.jit.backend.ppc.arch import (WORD, MY_COPY_OF_REGS) +from pypy.jit.backend.ppc.jump import (remap_frame_layout_mixed, + remap_frame_layout) +from pypy.jit.backend.ppc.locations import imm +from pypy.jit.backend.ppc.helper.regalloc import (_check_imm_arg, + check_imm_box, + prepare_cmp_op, + prepare_unary_int_op, + prepare_binary_int_op, + prepare_binary_int_op_with_imm, + prepare_unary_cmp) from pypy.jit.metainterp.history import (INT, REF, FLOAT, Const, ConstInt, ConstPtr, Box) from pypy.jit.metainterp.history import JitCellToken, TargetToken from pypy.jit.metainterp.resoperation import rop -from pypy.jit.backend.ppc.ppcgen import locations +from pypy.jit.backend.ppc import locations from pypy.rpython.lltypesystem import rffi, lltype, rstr from pypy.jit.backend.llsupport import symbolic from pypy.jit.backend.llsupport.descr import ArrayDescr from pypy.jit.codewriter.effectinfo import EffectInfo -import pypy.jit.backend.ppc.ppcgen.register as r +import pypy.jit.backend.ppc.register as r from pypy.jit.codewriter import heaptracker from pypy.jit.backend.llsupport.descr import unpack_arraydescr from pypy.jit.backend.llsupport.descr import unpack_fielddescr diff --git a/pypy/jit/backend/ppc/ppcgen/register.py b/pypy/jit/backend/ppc/register.py rename from pypy/jit/backend/ppc/ppcgen/register.py rename to pypy/jit/backend/ppc/register.py --- a/pypy/jit/backend/ppc/ppcgen/register.py +++ b/pypy/jit/backend/ppc/register.py @@ -1,5 +1,5 @@ -from pypy.jit.backend.ppc.ppcgen.locations import (RegisterLocation, - FPRegisterLocation) +from pypy.jit.backend.ppc.locations import (RegisterLocation, + FPRegisterLocation) ALL_REGS = [RegisterLocation(i) for i in range(32)] ALL_FLOAT_REGS = [FPRegisterLocation(i) for i in range(32)] diff --git a/pypy/jit/backend/ppc/ppcgen/regname.py 
b/pypy/jit/backend/ppc/regname.py rename from pypy/jit/backend/ppc/ppcgen/regname.py rename to pypy/jit/backend/ppc/regname.py diff --git a/pypy/jit/backend/ppc/runner.py b/pypy/jit/backend/ppc/runner.py --- a/pypy/jit/backend/ppc/runner.py +++ b/pypy/jit/backend/ppc/runner.py @@ -6,16 +6,16 @@ from pypy.jit.metainterp import history, compile from pypy.jit.metainterp.history import BoxPtr from pypy.jit.backend.x86.assembler import Assembler386 -from pypy.jit.backend.ppc.ppcgen.arch import FORCE_INDEX_OFS +from pypy.jit.backend.ppc.arch import FORCE_INDEX_OFS from pypy.jit.backend.x86.profagent import ProfileAgent from pypy.jit.backend.llsupport.llmodel import AbstractLLCPU from pypy.jit.backend.x86 import regloc from pypy.jit.backend.x86.support import values_array -from pypy.jit.backend.ppc.ppcgen.ppc_assembler import AssemblerPPC -from pypy.jit.backend.ppc.ppcgen.arch import NONVOLATILES, GPR_SAVE_AREA, WORD -from pypy.jit.backend.ppc.ppcgen.regalloc import PPCRegisterManager, PPCFrameManager -from pypy.jit.backend.ppc.ppcgen.codebuilder import PPCBuilder -from pypy.jit.backend.ppc.ppcgen import register as r +from pypy.jit.backend.ppc.ppc_assembler import AssemblerPPC +from pypy.jit.backend.ppc.arch import NONVOLATILES, GPR_SAVE_AREA, WORD +from pypy.jit.backend.ppc.regalloc import PPCRegisterManager, PPCFrameManager +from pypy.jit.backend.ppc.codebuilder import PPCBuilder +from pypy.jit.backend.ppc import register as r import sys from pypy.tool.ansi_print import ansi_log diff --git a/pypy/jit/backend/ppc/ppcgen/symbol_lookup.py b/pypy/jit/backend/ppc/symbol_lookup.py rename from pypy/jit/backend/ppc/ppcgen/symbol_lookup.py rename to pypy/jit/backend/ppc/symbol_lookup.py diff --git a/pypy/jit/backend/ppc/ppcgen/test/autopath.py b/pypy/jit/backend/ppc/test/autopath.py rename from pypy/jit/backend/ppc/ppcgen/test/autopath.py rename to pypy/jit/backend/ppc/test/autopath.py diff --git a/pypy/jit/backend/ppc/ppcgen/test/test_call_assembler.py b/pypy/jit/backend/ppc/test/test_call_assembler.py rename from pypy/jit/backend/ppc/ppcgen/test/test_call_assembler.py rename to pypy/jit/backend/ppc/test/test_call_assembler.py diff --git a/pypy/jit/backend/ppc/ppcgen/test/test_field.py b/pypy/jit/backend/ppc/test/test_field.py rename from pypy/jit/backend/ppc/ppcgen/test/test_field.py rename to pypy/jit/backend/ppc/test/test_field.py --- a/pypy/jit/backend/ppc/ppcgen/test/test_field.py +++ b/pypy/jit/backend/ppc/test/test_field.py @@ -1,6 +1,6 @@ import autopath -from pypy.jit.backend.ppc.ppcgen.field import Field +from pypy.jit.backend.ppc.field import Field from py.test import raises import random diff --git a/pypy/jit/backend/ppc/ppcgen/test/test_form.py b/pypy/jit/backend/ppc/test/test_form.py rename from pypy/jit/backend/ppc/ppcgen/test/test_form.py rename to pypy/jit/backend/ppc/test/test_form.py --- a/pypy/jit/backend/ppc/ppcgen/test/test_form.py +++ b/pypy/jit/backend/ppc/test/test_form.py @@ -1,11 +1,11 @@ import autopath -from pypy.jit.backend.ppc.ppcgen.ppc_assembler import b +from pypy.jit.backend.ppc.ppc_assembler import b import random import sys -from pypy.jit.backend.ppc.ppcgen.form import Form, FormException -from pypy.jit.backend.ppc.ppcgen.field import Field -from pypy.jit.backend.ppc.ppcgen.assembler import Assembler +from pypy.jit.backend.ppc.form import Form, FormException +from pypy.jit.backend.ppc.field import Field +from pypy.jit.backend.ppc.assembler import Assembler # 0 31 # +-------------------------------+ diff --git a/pypy/jit/backend/ppc/ppcgen/test/test_func_builder.py 
b/pypy/jit/backend/ppc/test/test_func_builder.py rename from pypy/jit/backend/ppc/ppcgen/test/test_func_builder.py rename to pypy/jit/backend/ppc/test/test_func_builder.py --- a/pypy/jit/backend/ppc/ppcgen/test/test_func_builder.py +++ b/pypy/jit/backend/ppc/test/test_func_builder.py @@ -1,11 +1,11 @@ import py import random, sys, os -from pypy.jit.backend.ppc.ppcgen.ppc_assembler import MyPPCAssembler -from pypy.jit.backend.ppc.ppcgen.symbol_lookup import lookup -from pypy.jit.backend.ppc.ppcgen.func_builder import make_func -from pypy.jit.backend.ppc.ppcgen import form, func_builder -from pypy.jit.backend.ppc.ppcgen.regname import * +from pypy.jit.backend.ppc.ppc_assembler import MyPPCAssembler +from pypy.jit.backend.ppc.symbol_lookup import lookup +from pypy.jit.backend.ppc.func_builder import make_func +from pypy.jit.backend.ppc import form, func_builder +from pypy.jit.backend.ppc.regname import * from pypy.jit.backend.detect_cpu import autodetect_main_model class TestFuncBuilderTest(object): diff --git a/pypy/jit/backend/ppc/ppcgen/test/test_helper.py b/pypy/jit/backend/ppc/test/test_helper.py rename from pypy/jit/backend/ppc/ppcgen/test/test_helper.py rename to pypy/jit/backend/ppc/test/test_helper.py --- a/pypy/jit/backend/ppc/ppcgen/test/test_helper.py +++ b/pypy/jit/backend/ppc/test/test_helper.py @@ -1,4 +1,4 @@ -from pypy.jit.backend.ppc.ppcgen.helper.assembler import (encode32, decode32) +from pypy.jit.backend.ppc.helper.assembler import (encode32, decode32) #encode64, decode64) def test_encode32(): diff --git a/pypy/jit/backend/ppc/ppcgen/test/test_ppc.py b/pypy/jit/backend/ppc/test/test_ppc.py rename from pypy/jit/backend/ppc/ppcgen/test/test_ppc.py rename to pypy/jit/backend/ppc/test/test_ppc.py --- a/pypy/jit/backend/ppc/ppcgen/test/test_ppc.py +++ b/pypy/jit/backend/ppc/test/test_ppc.py @@ -1,12 +1,12 @@ import py import random, sys, os -from pypy.jit.backend.ppc.ppcgen.codebuilder import BasicPPCAssembler, PPCBuilder -from pypy.jit.backend.ppc.ppcgen.regname import * -from pypy.jit.backend.ppc.ppcgen.register import * -from pypy.jit.backend.ppc.ppcgen import form +from pypy.jit.backend.ppc.codebuilder import BasicPPCAssembler, PPCBuilder +from pypy.jit.backend.ppc.regname import * +from pypy.jit.backend.ppc.register import * +from pypy.jit.backend.ppc import form from pypy.jit.backend.detect_cpu import autodetect_main_model -from pypy.jit.backend.ppc.ppcgen.arch import IS_PPC_32, IS_PPC_64, WORD +from pypy.jit.backend.ppc.arch import IS_PPC_32, IS_PPC_64, WORD from pypy.rpython.lltypesystem import lltype, rffi from pypy.rpython.annlowlevel import llhelper diff --git a/pypy/jit/backend/ppc/ppcgen/test/test_rassemblermaker.py b/pypy/jit/backend/ppc/test/test_rassemblermaker.py rename from pypy/jit/backend/ppc/ppcgen/test/test_rassemblermaker.py rename to pypy/jit/backend/ppc/test/test_rassemblermaker.py --- a/pypy/jit/backend/ppc/ppcgen/test/test_rassemblermaker.py +++ b/pypy/jit/backend/ppc/test/test_rassemblermaker.py @@ -1,5 +1,5 @@ -from pypy.jit.backend.ppc.ppcgen.rassemblermaker import make_rassembler -from pypy.jit.backend.ppc.ppcgen.ppc_assembler import PPCAssembler +from pypy.jit.backend.ppc.rassemblermaker import make_rassembler +from pypy.jit.backend.ppc.ppc_assembler import PPCAssembler RPPCAssembler = make_rassembler(PPCAssembler) diff --git a/pypy/jit/backend/ppc/ppcgen/test/test_regalloc.py b/pypy/jit/backend/ppc/test/test_regalloc.py rename from pypy/jit/backend/ppc/ppcgen/test/test_regalloc.py rename to pypy/jit/backend/ppc/test/test_regalloc.py --- 
a/pypy/jit/backend/ppc/ppcgen/test/test_regalloc.py +++ b/pypy/jit/backend/ppc/test/test_regalloc.py @@ -1,10 +1,10 @@ from pypy.rlib.objectmodel import instantiate -from pypy.jit.backend.ppc.ppcgen.locations import (imm, RegisterLocation, - ImmLocation, StackLocation) -from pypy.jit.backend.ppc.ppcgen.register import * -from pypy.jit.backend.ppc.ppcgen.codebuilder import hi, lo -from pypy.jit.backend.ppc.ppcgen.ppc_assembler import AssemblerPPC -from pypy.jit.backend.ppc.ppcgen.arch import WORD +from pypy.jit.backend.ppc.locations import (imm, RegisterLocation, + ImmLocation, StackLocation) +from pypy.jit.backend.ppc.register import * +from pypy.jit.backend.ppc.codebuilder import hi, lo +from pypy.jit.backend.ppc.ppc_assembler import AssemblerPPC +from pypy.jit.backend.ppc.arch import WORD class MockBuilder(object): diff --git a/pypy/jit/backend/ppc/ppcgen/test/test_register_manager.py b/pypy/jit/backend/ppc/test/test_register_manager.py rename from pypy/jit/backend/ppc/ppcgen/test/test_register_manager.py rename to pypy/jit/backend/ppc/test/test_register_manager.py --- a/pypy/jit/backend/ppc/ppcgen/test/test_register_manager.py +++ b/pypy/jit/backend/ppc/test/test_register_manager.py @@ -1,4 +1,4 @@ -from pypy.jit.backend.ppc.ppcgen import regalloc, register +from pypy.jit.backend.ppc import regalloc, register class TestPPCRegisterManager(object): def test_allocate_scratch_register(self): diff --git a/pypy/jit/backend/ppc/ppcgen/test/test_stackframe.py b/pypy/jit/backend/ppc/test/test_stackframe.py rename from pypy/jit/backend/ppc/ppcgen/test/test_stackframe.py rename to pypy/jit/backend/ppc/test/test_stackframe.py diff --git a/pypy/jit/backend/ppc/test/test_ztranslation.py b/pypy/jit/backend/ppc/test/test_ztranslation.py --- a/pypy/jit/backend/ppc/test/test_ztranslation.py +++ b/pypy/jit/backend/ppc/test/test_ztranslation.py @@ -8,7 +8,7 @@ from pypy.jit.backend.test.support import CCompiledMixin from pypy.jit.codewriter.policy import StopAtXPolicy from pypy.translator.translator import TranslationContext -from pypy.jit.backend.ppc.ppcgen.arch import IS_PPC_32, IS_PPC_64 +from pypy.jit.backend.ppc.arch import IS_PPC_32, IS_PPC_64 from pypy.config.translationoption import DEFL_GC from pypy.rlib import rgc diff --git a/pypy/jit/backend/ppc/ppcgen/util.py b/pypy/jit/backend/ppc/util.py rename from pypy/jit/backend/ppc/ppcgen/util.py rename to pypy/jit/backend/ppc/util.py --- a/pypy/jit/backend/ppc/ppcgen/util.py +++ b/pypy/jit/backend/ppc/util.py @@ -1,5 +1,5 @@ -from pypy.jit.codegen.ppc.ppcgen.ppc_assembler import MyPPCAssembler -from pypy.jit.codegen.ppc.ppcgen.func_builder import make_func +from pypy.jit.codegen.ppc.ppc_assembler import MyPPCAssembler +from pypy.jit.codegen.ppc.func_builder import make_func from regname import * From noreply at buildbot.pypy.org Mon Feb 6 12:04:37 2012 From: noreply at buildbot.pypy.org (hager) Date: Mon, 6 Feb 2012 12:04:37 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: remove obsolete test Message-ID: <20120206110438.031447107EC@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r52131:78b4aedb2fce Date: 2012-02-06 12:04 +0100 http://bitbucket.org/pypy/pypy/changeset/78b4aedb2fce/ Log: remove obsolete test diff --git a/pypy/jit/backend/ppc/test/test_rassemblermaker.py b/pypy/jit/backend/ppc/test/test_rassemblermaker.py deleted file mode 100644 --- a/pypy/jit/backend/ppc/test/test_rassemblermaker.py +++ /dev/null @@ -1,39 +0,0 @@ -from pypy.jit.backend.ppc.rassemblermaker import make_rassembler -from 
pypy.jit.backend.ppc.ppc_assembler import PPCAssembler - -RPPCAssembler = make_rassembler(PPCAssembler) - -_a = PPCAssembler() -_a.add(3, 3, 4) -add_r3_r3_r4 = _a.insts[0].assemble() - -def test_simple(): - ra = RPPCAssembler() - ra.add(3, 3, 4) - assert ra.insts == [add_r3_r3_r4] - -def test_rtyped(): - from pypy.rpython.test.test_llinterp import interpret - def f(): - ra = RPPCAssembler() - ra.add(3, 3, 4) - ra.lwz(1, 1, 1) # ensure that high bit doesn't produce long but r_uint - return ra.insts[0] - res = interpret(f, []) - assert res == add_r3_r3_r4 - -def test_mnemonic(): - mrs = [] - for A in PPCAssembler, RPPCAssembler: - a = A() - a.mr(3, 4) - mrs.append(a.insts[0]) - assert mrs[0].assemble() == mrs[1] - -def test_spr_coding(): - mrs = [] - for A in PPCAssembler, RPPCAssembler: - a = A() - a.mtctr(3) - mrs.append(a.insts[0]) - assert mrs[0].assemble() == mrs[1] From noreply at buildbot.pypy.org Mon Feb 6 12:28:11 2012 From: noreply at buildbot.pypy.org (fijal) Date: Mon, 6 Feb 2012 12:28:11 +0100 (CET) Subject: [pypy-commit] pypy numpy-record-dtypes: export few more types and check their __mro__ Message-ID: <20120206112811.551167107EC@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-record-dtypes Changeset: r52132:25d165290ecd Date: 2012-02-06 13:27 +0200 http://bitbucket.org/pypy/pypy/changeset/25d165290ecd/ Log: export few more types and check their __mro__ diff --git a/pypy/module/micronumpy/__init__.py b/pypy/module/micronumpy/__init__.py --- a/pypy/module/micronumpy/__init__.py +++ b/pypy/module/micronumpy/__init__.py @@ -43,9 +43,12 @@ 'signedinteger': 'interp_boxes.W_SignedIntegerBox', 'unsignedinteger': 'interp_boxes.W_UnsignedIntegerBox', 'bool_': 'interp_boxes.W_BoolBox', + 'bool8': 'interp_boxes.W_BoolBox', 'int8': 'interp_boxes.W_Int8Box', + 'byte': 'interp_boxes.W_Int8Box', 'uint8': 'interp_boxes.W_UInt8Box', 'int16': 'interp_boxes.W_Int16Box', + 'short': 'interp_boxes.W_Int16Box', 'uint16': 'interp_boxes.W_UInt16Box', 'int32': 'interp_boxes.W_Int32Box', 'uint32': 'interp_boxes.W_UInt32Box', @@ -57,6 +60,9 @@ 'float_': 'interp_boxes.W_Float64Box', 'float32': 'interp_boxes.W_Float32Box', 'float64': 'interp_boxes.W_Float64Box', + #'str_': 'interp_boxes.W_StringBox', + #'unicode_': 'interp_boxes.W_UnicodeBox', + #'void': 'interp_boxes.W_VoidBox', } # ufuncs diff --git a/pypy/module/micronumpy/interp_dtype.py b/pypy/module/micronumpy/interp_dtype.py --- a/pypy/module/micronumpy/interp_dtype.py +++ b/pypy/module/micronumpy/interp_dtype.py @@ -13,6 +13,7 @@ SIGNEDLTR = "i" BOOLLTR = "b" FLOATINGLTR = "f" +VOID = 'V' VOID_STORAGE = lltype.Array(lltype.Char, hints={'nolength': True, 'render_as_void': True}) @@ -69,6 +70,8 @@ for dtype in cache.builtin_dtypes: if dtype.name == name or dtype.char == name or name in dtype.aliases: return dtype + elif space.isinstance_w(w_dtype, space.w_list): + xxx else: for dtype in cache.builtin_dtypes: if w_dtype in dtype.alternate_constructors: diff --git a/pypy/module/micronumpy/test/test_dtypes.py b/pypy/module/micronumpy/test/test_dtypes.py --- a/pypy/module/micronumpy/test/test_dtypes.py +++ b/pypy/module/micronumpy/test/test_dtypes.py @@ -188,10 +188,11 @@ raises(TypeError, numpy.number, 0) raises(TypeError, numpy.integer, 0) exc = raises(TypeError, numpy.signedinteger, 0) - assert str(exc.value) == "cannot create 'signedinteger' instances" + assert 'cannot create' in str(exc.value) + assert 'signedinteger' in str(exc.value) exc = raises(TypeError, numpy.unsignedinteger, 0) - assert str(exc.value) == "cannot create 
'unsignedinteger' instances" - + assert 'cannot create' in str(exc.value) + assert 'unsignedinteger' in str(exc.value) raises(TypeError, numpy.floating, 0) raises(TypeError, numpy.inexact, 0) @@ -401,3 +402,19 @@ else: assert issubclass(int64, int) assert int_ is int64 + + def test_various_types(self): + import _numpypy as numpy + + assert numpy.int16 is numpy.short + assert numpy.int8 is numpy.byte + assert numpy.bool_ is numpy.bool8 + + def test_mro(self): + import _numpypy as numpy + + assert numpy.int16.__mro__ == (numpy.int16, numpy.signedinteger, + numpy.integer, numpy.number, + numpy.generic, object) + assert numpy.bool_.__mro__ == (numpy.bool_, numpy.generic, object) + #assert numpy.str_.__mro__ == From noreply at buildbot.pypy.org Mon Feb 6 12:32:02 2012 From: noreply at buildbot.pypy.org (fijal) Date: Mon, 6 Feb 2012 12:32:02 +0100 (CET) Subject: [pypy-commit] pypy release-1.8.x: grumble, fix the patchlevel release no Message-ID: <20120206113202.3041B7107EC@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: release-1.8.x Changeset: r52133:fa0a3384dd31 Date: 2012-02-06 13:31 +0200 http://bitbucket.org/pypy/pypy/changeset/fa0a3384dd31/ Log: grumble, fix the patchlevel release no diff --git a/pypy/module/cpyext/include/patchlevel.h b/pypy/module/cpyext/include/patchlevel.h --- a/pypy/module/cpyext/include/patchlevel.h +++ b/pypy/module/cpyext/include/patchlevel.h @@ -29,7 +29,7 @@ #define PY_VERSION "2.7.1" /* PyPy version as a string */ -#define PYPY_VERSION "1.8.1" +#define PYPY_VERSION "1.8.0" /* Subversion Revision number of this file (not of the repository). * Empty since Mercurial migration. */ From noreply at buildbot.pypy.org Mon Feb 6 12:58:01 2012 From: noreply at buildbot.pypy.org (stefanor) Date: Mon, 6 Feb 2012 12:58:01 +0100 (CET) Subject: [pypy-commit] pypy default: Document configuration environment variables Message-ID: <20120206115801.725CB82E4F@wyvern.cs.uni-duesseldorf.de> Author: Stefano Rivera Branch: Changeset: r52134:565fba5a3441 Date: 2012-02-06 13:57 +0200 http://bitbucket.org/pypy/pypy/changeset/565fba5a3441/ Log: Document configuration environment variables Add gc_info.rst, documenting minimark's configuration variables, link into tree. Add cpython compatibility variables to the pypy manpage. Add missing command line options to the pypy manpage. Link the manpage into the document tree. diff --git a/pypy/doc/Makefile b/pypy/doc/Makefile --- a/pypy/doc/Makefile +++ b/pypy/doc/Makefile @@ -81,6 +81,7 @@ "run these through (pdf)latex." man: + python config/generate.py $(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man @echo @echo "Build finished. The manual pages are in $(BUILDDIR)/man" diff --git a/pypy/doc/commandline_ref.rst b/pypy/doc/commandline_ref.rst new file mode 100644 --- /dev/null +++ b/pypy/doc/commandline_ref.rst @@ -0,0 +1,10 @@ +Command line reference +====================== + +Manual pages +------------ + +.. toctree:: + :maxdepth: 1 + + man/pypy.1.rst diff --git a/pypy/doc/config/translation.log.txt b/pypy/doc/config/translation.log.txt --- a/pypy/doc/config/translation.log.txt +++ b/pypy/doc/config/translation.log.txt @@ -2,4 +2,4 @@ These must be enabled by setting the PYPYLOG environment variable. The exact set of features supported by PYPYLOG is described in -pypy/translation/c/src/debug.h. +pypy/translation/c/src/debug_print.h. 
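
As a quick illustration of the PYPYLOG switch mentioned above, the variable
can be set for a single run; a minimal sketch in Python (the script name and
log file name are placeholders, and a pypy executable on the PATH is assumed):

    import os
    import subprocess

    # Enable the jit-log-opt and jit-backend log sections and write them to a
    # file; this is the combination recommended further below for jitviewer.
    env = dict(os.environ)
    env["PYPYLOG"] = "jit-log-opt,jit-backend:log.pypylog"
    subprocess.call(["pypy", "myscript.py"], env=env)
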
diff --git a/pypy/doc/garbage_collection.rst b/pypy/doc/garbage_collection.rst --- a/pypy/doc/garbage_collection.rst +++ b/pypy/doc/garbage_collection.rst @@ -142,10 +142,9 @@ So as a first approximation, when compared to the Hybrid GC, the Minimark GC saves one word of memory per old object. -There are a number of environment variables that can be tweaked to -influence the GC. (Their default value should be ok for most usages.) -You can read more about them at the start of -`pypy/rpython/memory/gc/minimark.py`_. +There are :ref:`a number of environment variables +` that can be tweaked to influence the +GC. (Their default value should be ok for most usages.) In more detail: @@ -211,5 +210,4 @@ are preserved. If the object dies then the pre-reserved location becomes free garbage, to be collected at the next major collection. - .. include:: _ref.txt diff --git a/pypy/doc/gc_info.rst b/pypy/doc/gc_info.rst new file mode 100644 --- /dev/null +++ b/pypy/doc/gc_info.rst @@ -0,0 +1,53 @@ +Garbage collector configuration +=============================== + +.. _minimark-environment-variables: + +Minimark +-------- + +PyPy's default ``minimark`` garbage collector is configurable through +several environment variables: + +``PYPY_GC_NURSERY`` + The nursery size. + Defaults to ``4MB``. + Small values (like 1 or 1KB) are useful for debugging. + +``PYPY_GC_MAJOR_COLLECT`` + Major collection memory factor. + Default is ``1.82``, which means trigger a major collection when the + memory consumed equals 1.82 times the memory really used at the end + of the previous major collection. + +``PYPY_GC_GROWTH`` + Major collection threshold's max growth rate. + Default is ``1.4``. + Useful to collect more often than normally on sudden memory growth, + e.g. when there is a temporary peak in memory usage. + +``PYPY_GC_MAX`` + The max heap size. + If coming near this limit, it will first collect more often, then + raise an RPython MemoryError, and if that is not enough, crash the + program with a fatal error. + Try values like ``1.6GB``. + +``PYPY_GC_MAX_DELTA`` + The major collection threshold will never be set to more than + ``PYPY_GC_MAX_DELTA`` the amount really used after a collection. + Defaults to 1/8th of the total RAM size (which is constrained to be + at most 2/3/4GB on 32-bit systems). + Try values like ``200MB``. + +``PYPY_GC_MIN`` + Don't collect while the memory size is below this limit. + Useful to avoid spending all the time in the GC in very small + programs. + Defaults to 8 times the nursery. + +``PYPY_GC_DEBUG`` + Enable extra checks around collections that are too slow for normal + use. + Values are ``0`` (off), ``1`` (on major collections) or ``2`` (also + on minor collections). diff --git a/pypy/doc/index.rst b/pypy/doc/index.rst --- a/pypy/doc/index.rst +++ b/pypy/doc/index.rst @@ -353,10 +353,12 @@ getting-started-dev.rst windows.rst faq.rst + commandline_ref.rst architecture.rst coding-guide.rst cpython_differences.rst garbage_collection.rst + gc_info.rst interpreter.rst objspace.rst __pypy__-module.rst diff --git a/pypy/doc/man/pypy.1.rst b/pypy/doc/man/pypy.1.rst --- a/pypy/doc/man/pypy.1.rst +++ b/pypy/doc/man/pypy.1.rst @@ -24,6 +24,9 @@ -S Do not ``import site`` on initialization. +-s + Don't add the user site directory to `sys.path`. + -u Unbuffered binary ``stdout`` and ``stderr``. @@ -39,6 +42,9 @@ -E Ignore environment variables (such as ``PYTHONPATH``). +-B + Disable writing bytecode (``.pyc``) files. + --version Print the PyPy version. 
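
The minimark variables documented in gc_info.rst above combine in the same
way as the logging example earlier; a hedged sketch with example values only
(1.6GB is the value the documentation itself suggests trying, and the nursery
size is an arbitrary illustration, not a recommendation):

    import os
    import subprocess

    env = dict(os.environ)
    env["PYPY_GC_MAX"] = "1.6GB"    # cap the heap size
    env["PYPY_GC_NURSERY"] = "8MB"  # default is 4MB
    subprocess.call(["pypy", "myscript.py"], env=env)  # placeholder script
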
@@ -84,6 +90,64 @@ Optimizations to enabled or ``all``. Warning, this option is dangerous, and should be avoided. +ENVIRONMENT +=========== + +``PYTHONPATH`` + Add directories to pypy's module search path. + The format is the same as shell's ``PATH``. + +``PYTHONSTARTUP`` + A script referenced by this variable will be executed before the + first prompt is displayed, in interactive mode. + +``PYTHONDONTWRITEBYTECODE`` + If set to a non-empty value, equivalent to the ``-B`` option. + Disable writing ``.pyc`` files. + +``PYTHONINSPECT`` + If set to a non-empty value, equivalent to the ``-i`` option. + Inspect interactively after running the specified script. + +``PYTHONIOENCODING`` + If this is set, it overrides the encoding used for + *stdin*/*stdout*/*stderr*. + The syntax is *encodingname*:*errorhandler* + The *errorhandler* part is optional and has the same meaning as in + `str.encode`. + +``PYTHONNOUSERSITE`` + If set to a non-empty value, equivalent to the ``-s`` option. + Don't add the user site directory to `sys.path`. + +``PYTHONWARNINGS`` + If set, equivalent to the ``-W`` option (warning control). + The value should be a comma-separated list of ``-W`` parameters. + +``PYPYLOG`` + If set to a non-empty value, enable logging, the format is: + + *fname* + logging for profiling: includes all + ``debug_start``/``debug_stop`` but not any nested + ``debug_print``. + *fname* can be ``-`` to log to *stderr*. + + ``:``\ *fname* + Full logging, including ``debug_print``. + + *prefix*\ ``:``\ *fname* + Conditional logging. + Multiple prefixes can be specified, comma-separated. + Only sections whose name match the prefix will be logged. + + ``PYPYLOG``\ =\ ``jit-log-opt,jit-backend:``\ *logfile* will + generate a log suitable for *jitviewer*, a tool for debugging + performance issues under PyPy. + +.. include:: ../gc_info.rst + :start-line: 5 + SEE ALSO ======== From noreply at buildbot.pypy.org Mon Feb 6 13:00:46 2012 From: noreply at buildbot.pypy.org (stefanor) Date: Mon, 6 Feb 2012 13:00:46 +0100 (CET) Subject: [pypy-commit] pypy default: Bump start-line for gc_info in pypy.1.rst Message-ID: <20120206120047.017BB82E4F@wyvern.cs.uni-duesseldorf.de> Author: Stefano Rivera Branch: Changeset: r52135:a4b5035a856b Date: 2012-02-06 14:00 +0200 http://bitbucket.org/pypy/pypy/changeset/a4b5035a856b/ Log: Bump start-line for gc_info in pypy.1.rst diff --git a/pypy/doc/man/pypy.1.rst b/pypy/doc/man/pypy.1.rst --- a/pypy/doc/man/pypy.1.rst +++ b/pypy/doc/man/pypy.1.rst @@ -146,7 +146,7 @@ performance issues under PyPy. .. 
include:: ../gc_info.rst - :start-line: 5 + :start-line: 7 SEE ALSO ======== From noreply at buildbot.pypy.org Mon Feb 6 13:52:21 2012 From: noreply at buildbot.pypy.org (hager) Date: Mon, 6 Feb 2012 13:52:21 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: repair tests in test_regalloc.py Message-ID: <20120206125221.AB42C82CE3@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r52136:3252afe40dd2 Date: 2012-02-06 13:51 +0100 http://bitbucket.org/pypy/pypy/changeset/3252afe40dd2/ Log: repair tests in test_regalloc.py diff --git a/pypy/jit/backend/ppc/test/test_regalloc.py b/pypy/jit/backend/ppc/test/test_regalloc.py --- a/pypy/jit/backend/ppc/test/test_regalloc.py +++ b/pypy/jit/backend/ppc/test/test_regalloc.py @@ -5,6 +5,7 @@ from pypy.jit.backend.ppc.codebuilder import hi, lo from pypy.jit.backend.ppc.ppc_assembler import AssemblerPPC from pypy.jit.backend.ppc.arch import WORD +from pypy.jit.backend.ppc.locations import get_spp_offset class MockBuilder(object): @@ -94,23 +95,31 @@ big = 2 << 28 self.asm.regalloc_mov(imm(big), stack(7)) - exp_instr = [MI("load_imm", 0, 5), - MI("stw", r0.value, SPP.value, -(6 * WORD + WORD)), - MI("load_imm", 0, big), - MI("stw", r0.value, SPP.value, -(7 * WORD + WORD))] + exp_instr = [MI("alloc_scratch_reg"), + MI("load_imm", r0, 5), + MI("store", r0.value, SPP.value, get_spp_offset(6)), + MI("free_scratch_reg"), + + MI("alloc_scratch_reg"), + MI("load_imm", r0, big), + MI("store", r0.value, SPP.value, get_spp_offset(7)), + MI("free_scratch_reg")] assert self.asm.mc.instrs == exp_instr def test_mem_to_reg(self): self.asm.regalloc_mov(stack(5), reg(10)) self.asm.regalloc_mov(stack(0), reg(0)) - exp_instrs = [MI("lwz", r10.value, SPP.value, -(5 * WORD + WORD)), - MI("lwz", r0.value, SPP.value, -(WORD))] + exp_instrs = [MI("load", r10.value, SPP.value, -(5 * WORD + WORD)), + MI("load", r0.value, SPP.value, -(WORD))] assert self.asm.mc.instrs == exp_instrs def test_mem_to_mem(self): self.asm.regalloc_mov(stack(5), stack(6)) - exp_instrs = [MI("lwz", r0.value, SPP.value, -(5 * WORD + WORD)), - MI("stw", r0.value, SPP.value, -(6 * WORD + WORD))] + exp_instrs = [ + MI("alloc_scratch_reg"), + MI("load", r0.value, SPP.value, get_spp_offset(5)), + MI("store", r0.value, SPP.value, get_spp_offset(6)), + MI("free_scratch_reg")] assert self.asm.mc.instrs == exp_instrs def test_reg_to_reg(self): @@ -123,8 +132,8 @@ def test_reg_to_mem(self): self.asm.regalloc_mov(reg(5), stack(10)) self.asm.regalloc_mov(reg(0), stack(2)) - exp_instrs = [MI("stw", r5.value, SPP.value, -(10 * WORD + WORD)), - MI("stw", r0.value, SPP.value, -(2 * WORD + WORD))] + exp_instrs = [MI("store", r5.value, SPP.value, -(10 * WORD + WORD)), + MI("store", r0.value, SPP.value, -(2 * WORD + WORD))] assert self.asm.mc.instrs == exp_instrs def reg(i): From noreply at buildbot.pypy.org Mon Feb 6 16:25:49 2012 From: noreply at buildbot.pypy.org (arigo) Date: Mon, 6 Feb 2012 16:25:49 +0100 (CET) Subject: [pypy-commit] pypy stm-gc: Reading fields of various sizes. Message-ID: <20120206152549.3706782CE3@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-gc Changeset: r52137:c200bc59446e Date: 2012-02-06 16:25 +0100 http://bitbucket.org/pypy/pypy/changeset/c200bc59446e/ Log: Reading fields of various sizes. 
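
A rough plain-Python sketch of the pattern this changeset introduces: one
reader is generated per primitive size, local objects are read directly, and
global objects go through a size-specific STM helper. The class and attribute
names here are invented for illustration only; the real RPython code is in
the diff below.

    PRIMITIVE_SIZES = [1, 2, 4, 8]

    class ReaderTable(object):
        def __init__(self, stm_helpers):
            # stm_helpers maps a size to a callable, e.g. {1: f1, 2: f2, ...}
            self.stm_helpers = stm_helpers
            for size in PRIMITIVE_SIZES:
                setattr(self, 'read_int%d' % size, self._make_reader(size))

        def _make_reader(self, size):
            helper = self.stm_helpers[size]
            def reader(obj, offset):
                # 'obj' is a stand-in object with invented helper methods
                if obj.is_local:                # local object: read directly
                    return obj.raw_read(offset, size)
                return helper(obj, offset)      # global object: call the helper
            return reader
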
diff --git a/pypy/rpython/memory/gc/stmgc.py b/pypy/rpython/memory/gc/stmgc.py --- a/pypy/rpython/memory/gc/stmgc.py +++ b/pypy/rpython/memory/gc/stmgc.py @@ -1,4 +1,4 @@ -from pypy.rpython.lltypesystem import lltype, llmemory, llarena +from pypy.rpython.lltypesystem import lltype, llmemory, llarena, rffi from pypy.rpython.lltypesystem.lloperation import llop from pypy.rpython.lltypesystem.llmemory import raw_malloc_usage from pypy.rpython.memory.gc.base import GCBase @@ -16,6 +16,11 @@ GCFLAG_GLOBAL = first_gcflag << 0 # keep in sync with et.c GCFLAG_WAS_COPIED = first_gcflag << 1 # keep in sync with et.c +PRIMITIVE_SIZES = {1: lltype.Char, + 2: rffi.SHORT, + 4: rffi.INT, + 8: lltype.SignedLongLong} + def always_inline(fn): fn._always_inline_ = True @@ -71,7 +76,8 @@ return self.get_size(obj) self._getsize_fn = _get_size # - self.declare_readers() + for size, TYPE in PRIMITIVE_SIZES.items(): + self.declare_reader(size, TYPE) self.declare_write_barrier() def setup(self): @@ -205,18 +211,22 @@ # ---------- - def declare_readers(self): + def declare_reader(self, size, TYPE): # Reading functions. Defined here to avoid the extra burden of # passing 'self' explicitly. - stm_read_word = self.stm_operations.stm_read_word + assert rffi.sizeof(TYPE) == size + PTYPE = rffi.CArrayPtr(TYPE) + stm_read_int = getattr(self.stm_operations, 'stm_read_int%d' % size) # @always_inline - def read_signed(obj, offset): + def reader(obj, offset): if self.header(obj).tid & GCFLAG_GLOBAL == 0: - return (obj + offset).signed[0] # local obj: read directly + # local obj: read directly + adr = rffi.cast(PTYPE, obj + offset) + return adr[0] else: - return stm_read_word(obj, offset) # else: call a helper - self.read_signed = read_signed + return stm_read_int(obj, offset) # else: call a helper + setattr(self, 'read_int%d' % size, reader) # # the following logic was moved to et.c to avoid a double call ## @dont_inline diff --git a/pypy/rpython/memory/gc/test/test_stmgc.py b/pypy/rpython/memory/gc/test/test_stmgc.py --- a/pypy/rpython/memory/gc/test/test_stmgc.py +++ b/pypy/rpython/memory/gc/test/test_stmgc.py @@ -1,5 +1,5 @@ -from pypy.rpython.lltypesystem import lltype, llmemory -from pypy.rpython.memory.gc.stmgc import StmGC +from pypy.rpython.lltypesystem import lltype, llmemory, llarena, rffi +from pypy.rpython.memory.gc.stmgc import StmGC, PRIMITIVE_SIZES, WORD from pypy.rpython.memory.gc.stmgc import GCFLAG_GLOBAL, GCFLAG_WAS_COPIED @@ -87,14 +87,23 @@ assert state[2] is not None return state[2] - def stm_read_word(self, obj, offset): - hdr = self._gc.header(obj) - if hdr.tid & GCFLAG_WAS_COPIED != 0: - localobj = self.tldict_lookup(obj) - if localobj: - assert self._gc.header(localobj).tid & GCFLAG_GLOBAL == 0 - return (localobj + offset).signed[0] - return 'stm_ll_read_word(%r, %r)' % (obj, offset) + def _get_stm_reader(size, TYPE): + assert rffi.sizeof(TYPE) == size + PTYPE = rffi.CArrayPtr(TYPE) + def stm_reader(self, obj, offset): + hdr = self._gc.header(obj) + if hdr.tid & GCFLAG_WAS_COPIED != 0: + localobj = self.tldict_lookup(obj) + if localobj: + assert self._gc.header(localobj).tid & GCFLAG_GLOBAL == 0 + adr = rffi.cast(PTYPE, localobj + offset) + return adr[0] + return 'stm_ll_read_int%d(%r, %r)' % (size, obj, offset) + return stm_reader + + for _size, _TYPE in PRIMITIVE_SIZES.items(): + _func = _get_stm_reader(_size, _TYPE) + locals()['stm_read_int%d' % _size] = _func def stm_copy_transactional_to_raw(self, srcobj, dstobj, size): sizehdr = self._gc.gcheaderbuilder.size_gc_header @@ -143,6 +152,8 @@ 
self.gc.setup() def teardown_method(self, meth): + if not hasattr(self, 'gc'): + return for key in self.gc.stm_operations._tls_dict.keys(): if key != 0: self.gc.stm_operations.threadnum = key @@ -151,7 +162,8 @@ # ---------- # test helpers def malloc(self, STRUCT): - gcref = self.gc.malloc_fixedsize_clear(123, llmemory.sizeof(STRUCT)) + size = llarena.round_up_for_allocation(llmemory.sizeof(STRUCT)) + gcref = self.gc.malloc_fixedsize_clear(123, size) realobj = lltype.cast_opaque_ptr(lltype.Ptr(STRUCT), gcref) addr = llmemory.cast_ptr_to_adr(realobj) return realobj, addr @@ -171,6 +183,9 @@ assert (hdr.tid & GCFLAG_WAS_COPIED != 0) == must_have_was_copied if must_have_version != '?': assert hdr.version == must_have_version + def read_signed(self, obj, offset): + meth = getattr(self.gc, 'read_int%d' % WORD) + return meth(obj, offset) def test_gc_creation_works(self): pass @@ -206,15 +221,15 @@ s, s_adr = self.malloc(S) assert self.gc.header(s_adr).tid & GCFLAG_GLOBAL != 0 s.a = 42 - value = self.gc.read_signed(s_adr, ofs_a) - assert value == 'stm_ll_read_word(%r, %r)' % (s_adr, ofs_a) + value = self.read_signed(s_adr, ofs_a) + assert value == 'stm_ll_read_int%d(%r, %r)' % (WORD, s_adr, ofs_a) # self.select_thread(1) s, s_adr = self.malloc(S) assert self.gc.header(s_adr).tid & GCFLAG_GLOBAL == 0 self.gc.header(s_adr).tid |= GCFLAG_WAS_COPIED # should be ignored s.a = 42 - value = self.gc.read_signed(s_adr, ofs_a) + value = self.read_signed(s_adr, ofs_a) assert value == 42 def test_reader_through_dict(self): @@ -228,9 +243,30 @@ self.gc.header(s_adr).tid |= GCFLAG_WAS_COPIED self.gc.stm_operations._tldicts[1][s_adr] = t_adr # - value = self.gc.read_signed(s_adr, ofs_a) + value = self.read_signed(s_adr, ofs_a) assert value == 84 + def test_reader_sizes(self): + for size, TYPE in PRIMITIVE_SIZES.items(): + T = lltype.GcStruct('T', ('a', TYPE)) + ofs_a = llmemory.offsetof(T, 'a') + # + self.select_thread(0) + t, t_adr = self.malloc(T) + assert self.gc.header(t_adr).tid & GCFLAG_GLOBAL != 0 + t.a = lltype.cast_primitive(TYPE, 42) + # + value = getattr(self.gc, 'read_int%d' % size)(t_adr, ofs_a) + assert value == 'stm_ll_read_int%d(%r, %r)' % (size, t_adr, ofs_a) + # + self.select_thread(1) + t, t_adr = self.malloc(T) + assert self.gc.header(t_adr).tid & GCFLAG_GLOBAL == 0 + t.a = lltype.cast_primitive(TYPE, 42) + value = getattr(self.gc, 'read_int%d' % size)(t_adr, ofs_a) + assert lltype.typeOf(value) == TYPE + assert lltype.cast_primitive(lltype.Signed, value) == 42 + def test_write_barrier_exists(self): self.select_thread(1) t, t_adr = self.malloc(S) From noreply at buildbot.pypy.org Mon Feb 6 16:59:51 2012 From: noreply at buildbot.pypy.org (gutworth) Date: Mon, 6 Feb 2012 16:59:51 +0100 (CET) Subject: [pypy-commit] pypy default: fix ugly formatting Message-ID: <20120206155951.2963B7107FA@wyvern.cs.uni-duesseldorf.de> Author: Benjamin Peterson Branch: Changeset: r52138:7e8d570f4490 Date: 2012-02-06 10:23 -0500 http://bitbucket.org/pypy/pypy/changeset/7e8d570f4490/ Log: fix ugly formatting diff --git a/pypy/interpreter/astcompiler/optimize.py b/pypy/interpreter/astcompiler/optimize.py --- a/pypy/interpreter/astcompiler/optimize.py +++ b/pypy/interpreter/astcompiler/optimize.py @@ -302,8 +302,7 @@ # narrow builds will return a surrogate. In both # the cases skip the optimization in order to # produce compatible pycs. 
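The condition reformatted just below (and whose surrogate bound 0xDFFFF is tightened to 0xDFFF in the next changeset) boils down to asking whether the one-character constant produced by folding the subscript can be stored in a .pyc on the current unicode build. A stand-alone sketch of that predicate, illustrative only and not the compiler's actual code:

    MAXUNICODE = 0xFFFF            # narrow unicode build

    def foldable_ordinal(ch):
        # ch is the ordinal of the one-character result of the subscript
        if ch > 0xFFFF:
            return False           # would need a surrogate pair
        if MAXUNICODE == 0xFFFF and 0xD800 <= ch <= 0xDFFF:
            return False           # lone surrogate on a narrow build
        return True

    assert foldable_ordinal(ord(u"a"))
    assert foldable_ordinal(0xE01F)     # BMP character above the surrogate range
    assert not foldable_ordinal(0xD800)
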
- if (self.space.isinstance_w(w_obj, self.space.w_unicode) - and + if (self.space.isinstance_w(w_obj, self.space.w_unicode) and self.space.isinstance_w(w_const, self.space.w_unicode)): unistr = self.space.unicode_w(w_const) if len(unistr) == 1: From noreply at buildbot.pypy.org Mon Feb 6 16:59:52 2012 From: noreply at buildbot.pypy.org (gutworth) Date: Mon, 6 Feb 2012 16:59:52 +0100 (CET) Subject: [pypy-commit] pypy default: allow folding subscripts of BMP characters higher than surrogates Message-ID: <20120206155952.5CC0A7107FA@wyvern.cs.uni-duesseldorf.de> Author: Benjamin Peterson Branch: Changeset: r52139:e112d1cfaa95 Date: 2012-02-06 10:59 -0500 http://bitbucket.org/pypy/pypy/changeset/e112d1cfaa95/ Log: allow folding subscripts of BMP characters higher than surrogates diff --git a/pypy/interpreter/astcompiler/optimize.py b/pypy/interpreter/astcompiler/optimize.py --- a/pypy/interpreter/astcompiler/optimize.py +++ b/pypy/interpreter/astcompiler/optimize.py @@ -310,7 +310,7 @@ else: ch = 0 if (ch > 0xFFFF or - (MAXUNICODE == 0xFFFF and 0xD800 <= ch <= 0xDFFFF)): + (MAXUNICODE == 0xFFFF and 0xD800 <= ch <= 0xDFFF)): return subs return ast.Const(w_const, subs.lineno, subs.col_offset) diff --git a/pypy/interpreter/astcompiler/test/test_compiler.py b/pypy/interpreter/astcompiler/test/test_compiler.py --- a/pypy/interpreter/astcompiler/test/test_compiler.py +++ b/pypy/interpreter/astcompiler/test/test_compiler.py @@ -838,7 +838,7 @@ # Just checking this doesn't crash out self.count_instructions(source) - def test_const_fold_unicode_subscr(self): + def test_const_fold_unicode_subscr(self, monkeypatch): source = """def f(): return u"abc"[0] """ @@ -853,6 +853,14 @@ assert counts == {ops.LOAD_CONST: 2, ops.BINARY_SUBSCR: 1, ops.RETURN_VALUE: 1} + monkeypatch.setattr(optimize, "MAXUNICODE", 0xFFFF) + source = """def f(): + return u"\uE01F"[0] + """ + counts = self.count_instructions(source) + assert counts == {ops.LOAD_CONST: 1, ops.RETURN_VALUE: 1} + monkeypatch.undo() + # getslice is not yet optimized. # Still, check a case which yields the empty string. source = """def f(): From noreply at buildbot.pypy.org Mon Feb 6 18:02:05 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Mon, 6 Feb 2012 18:02:05 +0100 (CET) Subject: [pypy-commit] pypy default: fix version number Message-ID: <20120206170205.4635782CE3@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: Changeset: r52140:646611ce782f Date: 2012-02-06 18:01 +0100 http://bitbucket.org/pypy/pypy/changeset/646611ce782f/ Log: fix version number diff --git a/pypy/doc/release-1.8.0.rst b/pypy/doc/release-1.8.0.rst --- a/pypy/doc/release-1.8.0.rst +++ b/pypy/doc/release-1.8.0.rst @@ -1,5 +1,5 @@ ============================ -PyPy 1.7 - business as usual +PyPy 1.8 - business as usual ============================ We're pleased to announce the 1.8 release of PyPy. 
As became a habit, this From noreply at buildbot.pypy.org Mon Feb 6 18:33:07 2012 From: noreply at buildbot.pypy.org (fijal) Date: Mon, 6 Feb 2012 18:33:07 +0100 (CET) Subject: [pypy-commit] pypy numpy-record-dtypes: intp Message-ID: <20120206173307.0B30B82CE3@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-record-dtypes Changeset: r52141:6f8ec889129b Date: 2012-02-06 19:32 +0200 http://bitbucket.org/pypy/pypy/changeset/6f8ec889129b/ Log: intp diff --git a/pypy/module/micronumpy/__init__.py b/pypy/module/micronumpy/__init__.py --- a/pypy/module/micronumpy/__init__.py +++ b/pypy/module/micronumpy/__init__.py @@ -60,6 +60,7 @@ 'float_': 'interp_boxes.W_Float64Box', 'float32': 'interp_boxes.W_Float32Box', 'float64': 'interp_boxes.W_Float64Box', + 'intp': 'types.IntP.BoxType', #'str_': 'interp_boxes.W_StringBox', #'unicode_': 'interp_boxes.W_UnicodeBox', #'void': 'interp_boxes.W_VoidBox', diff --git a/pypy/module/micronumpy/interp_boxes.py b/pypy/module/micronumpy/interp_boxes.py --- a/pypy/module/micronumpy/interp_boxes.py +++ b/pypy/module/micronumpy/interp_boxes.py @@ -7,7 +7,6 @@ from pypy.rlib.rarithmetic import LONG_BIT from pypy.tool.sourcetools import func_with_new_name - MIXIN_32 = (int_typedef,) if LONG_BIT == 32 else () MIXIN_64 = (int_typedef,) if LONG_BIT == 64 else () @@ -280,4 +279,3 @@ __new__ = interp2app(W_Float64Box.descr__new__.im_func), ) - diff --git a/pypy/module/micronumpy/test/test_dtypes.py b/pypy/module/micronumpy/test/test_dtypes.py --- a/pypy/module/micronumpy/test/test_dtypes.py +++ b/pypy/module/micronumpy/test/test_dtypes.py @@ -405,10 +405,15 @@ def test_various_types(self): import _numpypy as numpy + import sys assert numpy.int16 is numpy.short assert numpy.int8 is numpy.byte assert numpy.bool_ is numpy.bool8 + if sys.maxint == (1 << 63) - 1: + assert numpy.intp is numpy.int64 + else: + assert numpy.intp is numpy.int32 def test_mro(self): import _numpypy as numpy diff --git a/pypy/module/micronumpy/types.py b/pypy/module/micronumpy/types.py --- a/pypy/module/micronumpy/types.py +++ b/pypy/module/micronumpy/types.py @@ -513,3 +513,9 @@ T = rffi.DOUBLE BoxType = interp_boxes.W_Float64Box format_code = "d" + +for tp in [Int32, Int64]: + if tp.T == lltype.Signed: + IntP = tp + break +del tp From noreply at buildbot.pypy.org Mon Feb 6 18:34:52 2012 From: noreply at buildbot.pypy.org (fijal) Date: Mon, 6 Feb 2012 18:34:52 +0100 (CET) Subject: [pypy-commit] buildbot default: grumble grumble grumble, bad fijal, a crappy review Message-ID: <20120206173452.7EF507107FA@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: Changeset: r633:1982b9dfa2b2 Date: 2012-02-06 19:34 +0200 http://bitbucket.org/pypy/buildbot/changeset/1982b9dfa2b2/ Log: grumble grumble grumble, bad fijal, a crappy review diff --git a/bot2/pypybuildbot/builds.py b/bot2/pypybuildbot/builds.py --- a/bot2/pypybuildbot/builds.py +++ b/bot2/pypybuildbot/builds.py @@ -412,7 +412,7 @@ '--upload-baseline-revision', WithProperties('%(got_revision)s'), '--upload-baseline-branch', WithProperties('%(branch)s'), - '--upload-baseline-urls', 'http://localhost', + '--upload-baseline-urls', 'http://speed.pypy.org', ], workdir='./benchmarks', timeout=3600)) From noreply at buildbot.pypy.org Mon Feb 6 18:40:57 2012 From: noreply at buildbot.pypy.org (fijal) Date: Mon, 6 Feb 2012 18:40:57 +0100 (CET) Subject: [pypy-commit] pypy numpy-record-dtypes: make those tests pass with -A Message-ID: <20120206174057.C98C47107FA@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: 
numpy-record-dtypes Changeset: r52142:b73818b96a24 Date: 2012-02-06 19:40 +0200 http://bitbucket.org/pypy/pypy/changeset/b73818b96a24/ Log: make those tests pass with -A diff --git a/pypy/module/micronumpy/test/test_dtypes.py b/pypy/module/micronumpy/test/test_dtypes.py --- a/pypy/module/micronumpy/test/test_dtypes.py +++ b/pypy/module/micronumpy/test/test_dtypes.py @@ -53,13 +53,13 @@ assert a[i] is True_ def test_copy_array_with_dtype(self): - from _numpypy import array, False_, True_, int64 + from _numpypy import array, False_, longlong a = array([0, 1, 2, 3], dtype=long) # int on 64-bit, long in 32-bit - assert isinstance(a[0], int64) + assert isinstance(a[0], longlong) b = a.copy() - assert isinstance(b[0], int64) + assert isinstance(b[0], longlong) a = array([0, 1, 2, 3], dtype=bool) assert a[0] is False_ @@ -81,17 +81,17 @@ assert a[i] is True_ def test_zeros_long(self): - from _numpypy import zeros, int64 + from _numpypy import zeros, longlong a = zeros(10, dtype=long) for i in range(10): - assert isinstance(a[i], int64) + assert isinstance(a[i], longlong) assert a[1] == 0 def test_ones_long(self): - from _numpypy import ones, int64 + from _numpypy import ones, longlong a = ones(10, dtype=long) for i in range(10): - assert isinstance(a[i], int64) + assert isinstance(a[i], longlong) assert a[1] == 1 def test_overflow(self): From noreply at buildbot.pypy.org Mon Feb 6 18:50:49 2012 From: noreply at buildbot.pypy.org (fijal) Date: Mon, 6 Feb 2012 18:50:49 +0100 (CET) Subject: [pypy-commit] pypy numpy-record-dtypes: export longlong and ulonglong Message-ID: <20120206175049.1B7867107FA@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-record-dtypes Changeset: r52143:b91d1bdc5810 Date: 2012-02-06 19:50 +0200 http://bitbucket.org/pypy/pypy/changeset/b91d1bdc5810/ Log: export longlong and ulonglong diff --git a/pypy/module/micronumpy/__init__.py b/pypy/module/micronumpy/__init__.py --- a/pypy/module/micronumpy/__init__.py +++ b/pypy/module/micronumpy/__init__.py @@ -54,6 +54,8 @@ 'uint32': 'interp_boxes.W_UInt32Box', 'int64': 'interp_boxes.W_Int64Box', 'uint64': 'interp_boxes.W_UInt64Box', + 'longlong': 'interp_boxes.W_LongLongBox', + 'ulonglong': 'interp_boxes.W_ULongLongBox', 'int_': 'interp_boxes.W_LongBox', 'inexact': 'interp_boxes.W_InexactBox', 'floating': 'interp_boxes.W_FloatingBox', diff --git a/pypy/module/micronumpy/interp_boxes.py b/pypy/module/micronumpy/interp_boxes.py --- a/pypy/module/micronumpy/interp_boxes.py +++ b/pypy/module/micronumpy/interp_boxes.py @@ -141,9 +141,15 @@ class W_Int64Box(W_SignedIntegerBox, PrimitiveBox): descr__new__, get_dtype = new_dtype_getter("int64") +class W_LongLongBox(W_SignedIntegerBox, PrimitiveBox): + descr__new__, get_dtype = new_dtype_getter('longlong') + class W_UInt64Box(W_UnsignedIntegerBox, PrimitiveBox): descr__new__, get_dtype = new_dtype_getter("uint64") +class W_ULongLongBox(W_SignedIntegerBox, PrimitiveBox): + descr__new__, get_dtype = new_dtype_getter('ulonglong') + class W_InexactBox(W_NumberBox): _attrs_ = () diff --git a/pypy/module/micronumpy/interp_dtype.py b/pypy/module/micronumpy/interp_dtype.py --- a/pypy/module/micronumpy/interp_dtype.py +++ b/pypy/module/micronumpy/interp_dtype.py @@ -214,7 +214,6 @@ name="int64", char="q", w_box_type=space.gettypefor(interp_boxes.W_Int64Box), - alternate_constructors=[space.w_long], ) self.w_uint64dtype = W_Dtype( types.UInt64(), @@ -242,12 +241,30 @@ alternate_constructors=[space.w_float], aliases=["float"], ) + self.w_longlongdtype = W_Dtype( + types.Int64(), + 
num=9, + kind=SIGNEDLTR, + name='int64', + char='q', + w_box_type = space.gettypefor(interp_boxes.W_LongLongBox), + alternate_constructors=[space.w_long], + ) + self.w_ulonglongdtype = W_Dtype( + types.UInt64(), + num=10, + kind=UNSIGNEDLTR, + name='uint64', + char='Q', + w_box_type = space.gettypefor(interp_boxes.W_ULongLongBox), + ) self.builtin_dtypes = [ self.w_booldtype, self.w_int8dtype, self.w_uint8dtype, self.w_int16dtype, self.w_uint16dtype, self.w_int32dtype, self.w_uint32dtype, self.w_longdtype, self.w_ulongdtype, - self.w_int64dtype, self.w_uint64dtype, self.w_float32dtype, + self.w_longlongdtype, self.w_ulonglongdtype, + self.w_float32dtype, self.w_float64dtype ] self.dtypes_by_num_bytes = sorted( From noreply at buildbot.pypy.org Mon Feb 6 19:29:04 2012 From: noreply at buildbot.pypy.org (fijal) Date: Mon, 6 Feb 2012 19:29:04 +0100 (CET) Subject: [pypy-commit] pypy numpy-record-dtypes: expose some more names Message-ID: <20120206182904.7F0607107FA@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-record-dtypes Changeset: r52144:e275565bc002 Date: 2012-02-06 20:28 +0200 http://bitbucket.org/pypy/pypy/changeset/e275565bc002/ Log: expose some more names diff --git a/pypy/module/micronumpy/__init__.py b/pypy/module/micronumpy/__init__.py --- a/pypy/module/micronumpy/__init__.py +++ b/pypy/module/micronumpy/__init__.py @@ -47,11 +47,15 @@ 'int8': 'interp_boxes.W_Int8Box', 'byte': 'interp_boxes.W_Int8Box', 'uint8': 'interp_boxes.W_UInt8Box', + 'ubyte': 'interp_boxes.W_UInt8Box', 'int16': 'interp_boxes.W_Int16Box', 'short': 'interp_boxes.W_Int16Box', 'uint16': 'interp_boxes.W_UInt16Box', + 'ushort': 'interp_boxes.W_UInt16Box', 'int32': 'interp_boxes.W_Int32Box', + 'intc': 'interp_boxes.W_Int32Box', 'uint32': 'interp_boxes.W_UInt32Box', + 'uintc': 'interp_boxes.W_UInt32Box', 'int64': 'interp_boxes.W_Int64Box', 'uint64': 'interp_boxes.W_UInt64Box', 'longlong': 'interp_boxes.W_LongLongBox', @@ -63,6 +67,7 @@ 'float32': 'interp_boxes.W_Float32Box', 'float64': 'interp_boxes.W_Float64Box', 'intp': 'types.IntP.BoxType', + 'uintp': 'types.UIntP.BoxType', #'str_': 'interp_boxes.W_StringBox', #'unicode_': 'interp_boxes.W_UnicodeBox', #'void': 'interp_boxes.W_VoidBox', diff --git a/pypy/module/micronumpy/types.py b/pypy/module/micronumpy/types.py --- a/pypy/module/micronumpy/types.py +++ b/pypy/module/micronumpy/types.py @@ -518,4 +518,8 @@ if tp.T == lltype.Signed: IntP = tp break +for tp in [UInt32, UInt64]: + if tp.T == lltype.Unsigned: + UIntP = tp + break del tp From noreply at buildbot.pypy.org Mon Feb 6 20:39:22 2012 From: noreply at buildbot.pypy.org (fijal) Date: Mon, 6 Feb 2012 20:39:22 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend-rpythonization: A branch to make ppc jit rpython, readd rassembler Message-ID: <20120206193922.41A347107FA@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: ppc-jit-backend-rpythonization Changeset: r52145:135d4fd0b053 Date: 2012-02-06 21:36 +0200 http://bitbucket.org/pypy/pypy/changeset/135d4fd0b053/ Log: A branch to make ppc jit rpython, readd rassembler diff --git a/pypy/jit/backend/ppc/ppcgen/rassemblermaker.py b/pypy/jit/backend/ppc/ppcgen/rassemblermaker.py new file mode 100644 --- /dev/null +++ b/pypy/jit/backend/ppc/ppcgen/rassemblermaker.py @@ -0,0 +1,63 @@ +from pypy.tool.sourcetools import compile2 +from pypy.rlib.rarithmetic import r_uint +from pypy.jit.backend.ppc.ppcgen.form import IDesc, IDupDesc + +## "opcode": ( 0, 5), +## "rA": (11, 15, 'unsigned', regname._R), +## "rB": (16, 20, 'unsigned', 
regname._R), +## "Rc": (31, 31), +## "rD": ( 6, 10, 'unsigned', regname._R), +## "OE": (21, 21), +## "XO2": (22, 30), + +## XO = Form("rD", "rA", "rB", "OE", "XO2", "Rc") + +## add = XO(31, XO2=266, OE=0, Rc=0) + +## def add(rD, rA, rB): +## v = 0 +## v |= (31&(2**(5-0+1)-1)) << (32-5-1) +## ... +## return v + +def make_func(name, desc): + sig = [] + fieldvalues = [] + for field in desc.fields: + if field in desc.specializations: + fieldvalues.append((field, desc.specializations[field])) + else: + sig.append(field.name) + fieldvalues.append((field, field.name)) + if isinstance(desc, IDupDesc): + for destfield, srcfield in desc.dupfields.iteritems(): + fieldvalues.append((destfield, srcfield.name)) + body = ['v = r_uint(0)'] + assert 'v' not in sig # that wouldn't be funny + #body.append('print %r'%name + ', ' + ', '.join(["'%s:', %s"%(s, s) for s in sig])) + for field, value in fieldvalues: + if field.name == 'spr': + body.append('spr = (%s&31) << 5 | (%s >> 5 & 31)'%(value, value)) + value = 'spr' + body.append('v |= (%3s & r_uint(%#05x)) << %d'%(value, + field.mask, + (32 - field.right - 1))) + body.append('self.emit(v)') + src = 'def %s(self, %s):\n %s'%(name, ', '.join(sig), '\n '.join(body)) + d = {'r_uint':r_uint} + #print src + exec compile2(src) in d + return d[name] + +def make_rassembler(cls): + bases = [make_rassembler(b) for b in cls.__bases__] + ns = {} + for k, v in cls.__dict__.iteritems(): + if isinstance(v, IDesc): + v = make_func(k, v) + ns[k] = v + rcls = type('R' + cls.__name__, tuple(bases), ns) + def emit(self, value): + self.insts.append(value) + rcls.emit = emit + return rcls From noreply at buildbot.pypy.org Mon Feb 6 20:46:50 2012 From: noreply at buildbot.pypy.org (mattip) Date: Mon, 6 Feb 2012 20:46:50 +0100 (CET) Subject: [pypy-commit] pypy numpypy-out: more tests Message-ID: <20120206194650.AD0FF7107FA@wyvern.cs.uni-duesseldorf.de> Author: mattip Branch: numpypy-out Changeset: r52146:3068735b0215 Date: 2012-02-06 21:44 +0200 http://bitbucket.org/pypy/pypy/changeset/3068735b0215/ Log: more tests diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -138,7 +138,7 @@ axis = space.int_w(w_axis) if space.is_w(w_out, space.w_None): out = None - elif not isinstance(w_out, W_NDimArray): + elif not isinstance(w_out, BaseArray): raise OperationError(space.w_TypeError, space.wrap( 'output must be an array')) else: diff --git a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py --- a/pypy/module/micronumpy/interp_ufuncs.py +++ b/pypy/module/micronumpy/interp_ufuncs.py @@ -155,6 +155,10 @@ shape = obj.shape[:axis] + obj.shape[axis + 1:] if out: #Test for shape agreement + #Test for dtype agreement, perhaps create an itermediate + if out.dtype != dtype: + raise OperationError(space.w_TypeError, space.wrap( + "mismatched dtypes")) return self.do_axis_reduce(obj, dtype, axis, out) else: result = W_NDimArray(support.product(shape), shape, dtype) @@ -166,7 +170,7 @@ raise operationerrfmt(space.w_ValueError, "output parameter " "for reduction operation %s has too many" " dimensions",self.name) - out.setitem(0, out.dtype.coerce(space, val)) + out.value = out.dtype.coerce(space, val) return out return val diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py 
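The tests that follow pin down NumPy's semantics for the out= argument of reductions, which the numpypy-out branch is mirroring: the result is written into the supplied array and that same array is returned. For reference, the expected behaviour under CPython with NumPy installed (illustrative, not part of the changeset):

    import numpy as np

    a = np.arange(15).reshape(5, 3)
    b = np.zeros(3, dtype=a.dtype)
    c = a.sum(axis=0, out=b)       # reduction result is stored into b...
    assert c is b                  # ...and the very same array is returned
    assert (b == [30, 35, 40]).all()
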
@@ -794,10 +794,10 @@ assert a.sum() == 5 raises(TypeError, 'a.sum(2, 3)') - skip('fails since Scalar is not a subclass of W_NDimArray') - d = zeros(()) + d = array(0.) b = a.sum(out=d) assert b == d + assert b.dtype == d.dtype def test_reduce_nd(self): from numpypy import arange, array, multiply @@ -826,12 +826,20 @@ assert (array([[1,2],[3,4]]).prod(1) == [2, 12]).all() def test_reduce_out(self): - from numpypy import arange, array, multiply + from numpypy import arange, array a = arange(15).reshape(5, 3) - b = arange(3) - c = a.sum(0, out=b) + b = arange(12).reshape(4,3) + c = a.sum(0, out=b[1]) assert (c == [30, 35, 40]).all() - assert (c == b).all() + assert (c == b[1]).all() + raises(ValueError, 'a.prod(0, out=arange(10, dtype=float))') + + def test_reduce_intermediary(self): + from numpypy import arange, array + a = arange(15).reshape(5, 3) + b = array(range(3), dtype=bool) + c = a.prod(0, out=b) + assert(b == [False, True, True]).all() def test_identity(self): from _numpypy import identity, array From noreply at buildbot.pypy.org Mon Feb 6 21:26:53 2012 From: noreply at buildbot.pypy.org (fijal) Date: Mon, 6 Feb 2012 21:26:53 +0100 (CET) Subject: [pypy-commit] buildbot default: ARGH ARGJH ARGH Message-ID: <20120206202653.02B1F7107FA@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: Changeset: r634:cb420b0a33c1 Date: 2012-02-06 22:26 +0200 http://bitbucket.org/pypy/buildbot/changeset/cb420b0a33c1/ Log: ARGH ARGJH ARGH diff --git a/bot2/pypybuildbot/builds.py b/bot2/pypybuildbot/builds.py --- a/bot2/pypybuildbot/builds.py +++ b/bot2/pypybuildbot/builds.py @@ -412,7 +412,7 @@ '--upload-baseline-revision', WithProperties('%(got_revision)s'), '--upload-baseline-branch', WithProperties('%(branch)s'), - '--upload-baseline-urls', 'http://speed.pypy.org', + '--upload-baseline-urls', 'http://speed.pypy.org/', ], workdir='./benchmarks', timeout=3600)) From noreply at buildbot.pypy.org Mon Feb 6 22:25:43 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Mon, 6 Feb 2012 22:25:43 +0100 (CET) Subject: [pypy-commit] pypy win32-cleanup: Fix generation of str/unicode dispatch function for Windows Message-ID: <20120206212543.2798182CE3@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: win32-cleanup Changeset: r52147:4278af6a7f51 Date: 2012-02-06 22:25 +0100 http://bitbucket.org/pypy/pypy/changeset/4278af6a7f51/ Log: Fix generation of str/unicode dispatch function for Windows diff --git a/pypy/rpython/module/ll_os.py b/pypy/rpython/module/ll_os.py --- a/pypy/rpython/module/ll_os.py +++ b/pypy/rpython/module/ll_os.py @@ -43,7 +43,7 @@ arglist = ['arg%d' % (i,) for i in range(len(signature))] transformed_arglist = arglist[:] for i, arg in enumerate(signature): - if arg is unicode: + if arg in (unicode, unicode0): transformed_arglist[i] = transformed_arglist[i] + '.as_unicode()' args = ', '.join(arglist) @@ -67,7 +67,7 @@ exec source.compile() in miniglobals new_func = miniglobals[func_name] specialized_args = [i for i in range(len(signature)) - if signature[i] in (unicode, None)] + if signature[i] in (unicode, unicode0, None)] new_func = specialize.argtype(*specialized_args)(new_func) # Monkeypatch the function in pypy.rlib.rposix From noreply at buildbot.pypy.org Mon Feb 6 22:57:20 2012 From: noreply at buildbot.pypy.org (mattip) Date: Mon, 6 Feb 2012 22:57:20 +0100 (CET) Subject: [pypy-commit] pypy win32-cleanup: add test for posix module Message-ID: <20120206215720.D21747107FA@wyvern.cs.uni-duesseldorf.de> Author: mattip Branch: win32-cleanup Changeset: 
r52148:2da738b6eea6 Date: 2012-02-06 23:29 +0200 http://bitbucket.org/pypy/pypy/changeset/2da738b6eea6/ Log: add test for posix module diff --git a/pypy/module/posix/test/test_ztranslation.py b/pypy/module/posix/test/test_ztranslation.py new file mode 100644 --- /dev/null +++ b/pypy/module/posix/test/test_ztranslation.py @@ -0,0 +1,4 @@ +from pypy.objspace.fake.checkmodule import checkmodule + +def test_posix_translates(): + checkmodule('posix') \ No newline at end of file From noreply at buildbot.pypy.org Mon Feb 6 22:57:22 2012 From: noreply at buildbot.pypy.org (mattip) Date: Mon, 6 Feb 2012 22:57:22 +0100 (CET) Subject: [pypy-commit] pypy win32-cleanup: merge branches Message-ID: <20120206215722.D5AA57107FA@wyvern.cs.uni-duesseldorf.de> Author: mattip Branch: win32-cleanup Changeset: r52149:d34c77fa0f42 Date: 2012-02-06 23:48 +0200 http://bitbucket.org/pypy/pypy/changeset/d34c77fa0f42/ Log: merge branches diff --git a/pypy/module/posix/test/test_ztranslation.py b/pypy/module/posix/test/test_ztranslation.py new file mode 100644 --- /dev/null +++ b/pypy/module/posix/test/test_ztranslation.py @@ -0,0 +1,4 @@ +from pypy.objspace.fake.checkmodule import checkmodule + +def test_posix_translates(): + checkmodule('posix') \ No newline at end of file From noreply at buildbot.pypy.org Mon Feb 6 22:57:25 2012 From: noreply at buildbot.pypy.org (mattip) Date: Mon, 6 Feb 2012 22:57:25 +0100 (CET) Subject: [pypy-commit] pypy win32-cleanup: remove improper link flag Message-ID: <20120206215725.3F8AC7107FA@wyvern.cs.uni-duesseldorf.de> Author: mattip Branch: win32-cleanup Changeset: r52150:f42334abe866 Date: 2012-02-06 23:56 +0200 http://bitbucket.org/pypy/pypy/changeset/f42334abe866/ Log: remove improper link flag diff --git a/pypy/rlib/ropenssl.py b/pypy/rlib/ropenssl.py --- a/pypy/rlib/ropenssl.py +++ b/pypy/rlib/ropenssl.py @@ -7,7 +7,7 @@ if sys.platform == 'win32' and platform.name != 'mingw32': libraries = ['libeay32', 'ssleay32', - 'user32', 'advapi32', 'gdi32', 'msvcrt', 'ws2_32', 'zlib'] + 'user32', 'advapi32', 'gdi32', 'msvcrt', 'ws2_32'] includes = [ # ssl.h includes winsock.h, which will conflict with our own # need of winsock2. 
Remove this when separate compilation is From noreply at buildbot.pypy.org Mon Feb 6 23:17:31 2012 From: noreply at buildbot.pypy.org (mattip) Date: Mon, 6 Feb 2012 23:17:31 +0100 (CET) Subject: [pypy-commit] pypy win32-cleanup: add 'WindowsError' to base objspace Message-ID: <20120206221731.5D1667107FA@wyvern.cs.uni-duesseldorf.de> Author: mattip Branch: win32-cleanup Changeset: r52151:6fc1023ba5e3 Date: 2012-02-07 00:16 +0200 http://bitbucket.org/pypy/pypy/changeset/6fc1023ba5e3/ Log: add 'WindowsError' to base objspace diff --git a/pypy/interpreter/baseobjspace.py b/pypy/interpreter/baseobjspace.py --- a/pypy/interpreter/baseobjspace.py +++ b/pypy/interpreter/baseobjspace.py @@ -1638,6 +1638,9 @@ 'UnicodeEncodeError', 'UnicodeDecodeError', ] + +if sys.platform.startswith("win"): + ObjSpace.ExceptionTable += ['WindowsError'] ## Irregular part of the interface: # From noreply at buildbot.pypy.org Mon Feb 6 23:25:00 2012 From: noreply at buildbot.pypy.org (mattip) Date: Mon, 6 Feb 2012 23:25:00 +0100 (CET) Subject: [pypy-commit] pypy numpypy-out: more tests, start to think about intermediaries Message-ID: <20120206222500.80D897107FA@wyvern.cs.uni-duesseldorf.de> Author: mattip Branch: numpypy-out Changeset: r52152:5bdd210811e1 Date: 2012-02-06 09:08 +0200 http://bitbucket.org/pypy/pypy/changeset/5bdd210811e1/ Log: more tests, start to think about intermediaries diff --git a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py --- a/pypy/module/micronumpy/interp_ufuncs.py +++ b/pypy/module/micronumpy/interp_ufuncs.py @@ -155,8 +155,20 @@ shape = obj.shape[:axis] + obj.shape[axis + 1:] if out: #Test for shape agreement + if len(out.shape) > len(shape): + raise operationerrfmt(space.w_ValueError, + 'output parameter for reduction operation %s' + + ' has too many dimensions', self.name) + elif len(out.shape) < len(shape): + raise operationerrfmt(space.w_ValueError, + 'output parameter for reduction operation %s' + + ' does not have enough dimensions', self.name) + elif out.shape != shape: + raise operationerrfmt(space.w_ValueError, + 'output parameter shape mismatch, expecting %s' + + ' , got %s', str(shape), str(out.shape)) #Test for dtype agreement, perhaps create an itermediate - if out.dtype != dtype: + if out.dtype != dtype raise OperationError(space.w_TypeError, space.wrap( "mismatched dtypes")) return self.do_axis_reduce(obj, dtype, axis, out) diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -832,7 +832,10 @@ c = a.sum(0, out=b[1]) assert (c == [30, 35, 40]).all() assert (c == b[1]).all() - raises(ValueError, 'a.prod(0, out=arange(10, dtype=float))') + raises(ValueError, 'a.prod(0, out=arange(10))') + a=arange(12).reshape(3,2,2) + raises(ValueError, 'a.sum(0, out=arange(12).reshape(3,2,2))') + raises(ValueError, 'a.sum(0, out=arange(3))') def test_reduce_intermediary(self): from numpypy import arange, array From noreply at buildbot.pypy.org Tue Feb 7 00:13:19 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Tue, 7 Feb 2012 00:13:19 +0100 (CET) Subject: [pypy-commit] pypy win32-cleanup: Add space.unicode0_w, which returns a unicode string without NUL bytes Message-ID: <20120206231319.1FC6E82CE3@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: win32-cleanup Changeset: r52153:dbf53f6e1d73 Date: 2012-02-07 00:11 +0100 http://bitbucket.org/pypy/pypy/changeset/dbf53f6e1d73/ Log: 
Add space.unicode0_w, which returns a unicode string without NUL bytes diff --git a/pypy/interpreter/baseobjspace.py b/pypy/interpreter/baseobjspace.py --- a/pypy/interpreter/baseobjspace.py +++ b/pypy/interpreter/baseobjspace.py @@ -1340,6 +1340,15 @@ def unicode_w(self, w_obj): return w_obj.unicode_w(self) + def unicode0_w(self, w_obj): + "Like unicode_w, but rejects strings with NUL bytes." + from pypy.rlib import rstring + result = w_obj.unicode_w(self) + if u'\x00' in result: + raise OperationError(self.w_TypeError, self.wrap( + 'argument must be a unicode string without NUL characters')) + return rstring.assert_str0(result) + def realunicode_w(self, w_obj): # Like unicode_w, but only works if w_obj is really of type # 'unicode'. diff --git a/pypy/interpreter/test/test_objspace.py b/pypy/interpreter/test/test_objspace.py --- a/pypy/interpreter/test/test_objspace.py +++ b/pypy/interpreter/test/test_objspace.py @@ -178,6 +178,14 @@ res = self.space.interp_w(Function, w(None), can_be_None=True) assert res is None + def test_str0_w(self): + space = self.space + w = space.wrap + assert space.str0_w(w("123")) == "123" + exc = space.raises_w(space.w_TypeError, space.str0_w, w("123\x004")) + assert space.unicode0_w(w(u"123")) == u"123" + exc = space.raises_w(space.w_TypeError, space.unicode0_w, w(u"123\x004")) + def test_getindex_w(self): w_instance1 = self.space.appexec([], """(): class X(object): diff --git a/pypy/module/posix/interp_posix.py b/pypy/module/posix/interp_posix.py --- a/pypy/module/posix/interp_posix.py +++ b/pypy/module/posix/interp_posix.py @@ -48,7 +48,7 @@ return fsencode_w(self.space, self.w_obj) def as_unicode(self): - return self.space.unicode_w(self.w_obj) + return self.space.unicode0_w(self.w_obj) class FileDecoder(object): def __init__(self, space, w_obj): @@ -62,7 +62,7 @@ space = self.space w_unicode = space.call_method(self.w_obj, 'decode', getfilesystemencoding(space)) - return space.unicode_w(w_unicode) + return space.unicode0_w(w_unicode) @specialize.memo() def dispatch_filename(func, tag=0): From noreply at buildbot.pypy.org Tue Feb 7 00:16:45 2012 From: noreply at buildbot.pypy.org (fijal) Date: Tue, 7 Feb 2012 00:16:45 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend-rpythonization: (fijal, edelsohn) improve rassembler until it works. more work required to make it rpython Message-ID: <20120206231645.82B7382CE3@wyvern.cs.uni-duesseldorf.de> Author: fijal Branch: ppc-jit-backend-rpythonization Changeset: r52154:39f8e2a9fd18 Date: 2012-02-06 15:15 -0800 http://bitbucket.org/pypy/pypy/changeset/39f8e2a9fd18/ Log: (fijal, edelsohn) improve rassembler until it works. 
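One concrete part of this changeset's asmfunc.py change is that AsmCode.emit() now takes a raw 32-bit instruction word and writes it out as four big-endian bytes through writechar(), instead of packing an instruction object. A minimal plain-Python sketch of that byte splitting; MockCode and emit_word are invented names for illustration and are not part of the changeset:

    class MockCode(object):
        def __init__(self):
            self.chars = []
        def writechar(self, c):
            self.chars.append(c)

    def emit_word(code, word):
        # split one 32-bit instruction word into big-endian bytes
        code.writechar(chr((word >> 24) & 0xFF))
        code.writechar(chr((word >> 16) & 0xFF))
        code.writechar(chr((word >> 8) & 0xFF))
        code.writechar(chr(word & 0xFF))

    code = MockCode()
    emit_word(code, 0x12345678)
    assert code.chars == ['\x12', '\x34', '\x56', '\x78']
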
more work required to make it rpython diff --git a/pypy/jit/backend/ppc/ppcgen/asmfunc.py b/pypy/jit/backend/ppc/ppcgen/asmfunc.py --- a/pypy/jit/backend/ppc/ppcgen/asmfunc.py +++ b/pypy/jit/backend/ppc/ppcgen/asmfunc.py @@ -5,6 +5,7 @@ from pypy.jit.backend.llsupport.asmmemmgr import AsmMemoryManager from pypy.rpython.lltypesystem import lltype, rffi from pypy.jit.backend.ppc.ppcgen.arch import IS_PPC_32, IS_PPC_64, WORD +from pypy.rlib.rarithmetic import r_uint _ppcgen = None @@ -19,17 +20,14 @@ self.code = MachineCodeBlockWrapper() if IS_PPC_64: # allocate function descriptor - 3 doublewords - self.emit(0) - self.emit(0) - self.emit(0) - self.emit(0) - self.emit(0) - self.emit(0) + for i in range(6): + self.emit(r_uint(0)) - def emit(self, insn): - bytes = struct.pack("i", insn) - for byte in bytes: - self.code.writechar(byte) + def emit(self, word): + self.code.writechar(chr((word >> 24) & 0xFF)) + self.code.writechar(chr((word >> 16) & 0xFF)) + self.code.writechar(chr((word >> 8) & 0xFF)) + self.code.writechar(chr(word & 0xFF)) def get_function(self): i = self.code.materialize(AsmMemoryManager(), []) diff --git a/pypy/jit/backend/ppc/ppcgen/assembler.py b/pypy/jit/backend/ppc/ppcgen/assembler.py --- a/pypy/jit/backend/ppc/ppcgen/assembler.py +++ b/pypy/jit/backend/ppc/ppcgen/assembler.py @@ -51,7 +51,7 @@ inst.fields[f] = l buf = [] for inst in self.insts: - buf.append(inst.assemble()) + buf.append(inst)#.assemble()) if dump: for i in range(len(buf)): inst = self.disassemble(buf[i], self.rlabels, i*4) @@ -61,11 +61,11 @@ return buf def assemble(self, dump=os.environ.has_key('PPY_DEBUG')): - insns = self.assemble0(dump) + #insns = self.assemble0(dump) from pypy.jit.backend.ppc.ppcgen import asmfunc - c = asmfunc.AsmCode(len(insns)*4) - for i in insns: - c.emit(i) + c = asmfunc.AsmCode(len(self.insts)*4) + for i in self.insts: + c.emit(i)#.assemble()) return c.get_function() def get_idescs(cls): diff --git a/pypy/jit/backend/ppc/ppcgen/codebuilder.py b/pypy/jit/backend/ppc/ppcgen/codebuilder.py --- a/pypy/jit/backend/ppc/ppcgen/codebuilder.py +++ b/pypy/jit/backend/ppc/ppcgen/codebuilder.py @@ -27,6 +27,7 @@ from pypy.rlib.objectmodel import we_are_translated from pypy.translator.tool.cbuild import ExternalCompilationInfo +from pypy.jit.backend.ppc.ppcgen.rassemblermaker import make_rassembler A = Form("frD", "frA", "frB", "XO3", "Rc") A1 = Form("frD", "frB", "XO3", "Rc") @@ -888,10 +889,7 @@ mtcr = BA.mtcrf(CRM=0xFF) - def emit(self, insn): - bytes = struct.pack("i", insn) - for byte in bytes: - self.writechar(byte) +PPCAssembler = make_rassembler(PPCAssembler) def hi(w): return w >> 16 @@ -967,6 +965,14 @@ self.fail_boxes_int = values_array(lltype.Signed, failargs_limit) self.r0_in_use = r0_in_use + def check(self, desc, v, *args): + desc.__get__(self)(*args) + ins = self.insts.pop() + expected = ins.assemble() + if expected < 0: + expected += 1<<32 + assert v == expected + def load_imm(self, rD, word): rD = rD.as_key() if word <= 32767 and word >= -32768: @@ -1075,7 +1081,7 @@ self.assemble(show) insts = self.insts for inst in insts: - self.write32(inst.assemble()) + self.write32(inst)#.assemble()) def _dump_trace(self, addr, name, formatter=-1): if not we_are_translated(): diff --git a/pypy/jit/backend/ppc/ppcgen/form.py b/pypy/jit/backend/ppc/ppcgen/form.py --- a/pypy/jit/backend/ppc/ppcgen/form.py +++ b/pypy/jit/backend/ppc/ppcgen/form.py @@ -14,8 +14,8 @@ self.fields = fields self.lfields = [k for (k,v) in fields.iteritems() if isinstance(v, str)] - if not self.lfields: - 
self.assemble() # for error checking only + #if not self.lfields: + # self.assemble() # for error checking only def assemble(self): r = 0 for field in self.fields: diff --git a/pypy/jit/backend/ppc/ppcgen/rassemblermaker.py b/pypy/jit/backend/ppc/ppcgen/rassemblermaker.py --- a/pypy/jit/backend/ppc/ppcgen/rassemblermaker.py +++ b/pypy/jit/backend/ppc/ppcgen/rassemblermaker.py @@ -1,6 +1,7 @@ from pypy.tool.sourcetools import compile2 from pypy.rlib.rarithmetic import r_uint from pypy.jit.backend.ppc.ppcgen.form import IDesc, IDupDesc +from pypy.jit.backend.ppc.ppcgen.ppc_field import IField ## "opcode": ( 0, 5), ## "rA": (11, 15, 'unsigned', regname._R), @@ -37,14 +38,24 @@ #body.append('print %r'%name + ', ' + ', '.join(["'%s:', %s"%(s, s) for s in sig])) for field, value in fieldvalues: if field.name == 'spr': - body.append('spr = (%s&31) << 5 | (%s >> 5 & 31)'%(value, value)) - value = 'spr' - body.append('v |= (%3s & r_uint(%#05x)) << %d'%(value, - field.mask, - (32 - field.right - 1))) + body.append('spr1 = (%s&31) << 5 | (%s >> 5 & 31)'%(value, value)) + value = 'spr1' + elif field.name == 'mbe': + body.append('mbe1 = (%s & 31) << 1 | (%s & 32) >> 5' % (value, value)) + value = 'mbe1' + elif field.name == 'sh': + body.append('sh1 = (%s & 31) << 10 | (%s & 32) >> 5' % (value, value)) + value = 'sh1' + if isinstance(field, IField): + body.append('v |= ((%3s >> 2) & r_uint(%#05x)) << 2' % (value, field.mask)) + else: + body.append('v |= (%3s & r_uint(%#05x)) << %d'%(value, + field.mask, + (32 - field.right - 1))) + #body.append('self.check(desc, v, %s)' % ', '.join(sig)) body.append('self.emit(v)') src = 'def %s(self, %s):\n %s'%(name, ', '.join(sig), '\n '.join(body)) - d = {'r_uint':r_uint} + d = {'r_uint':r_uint, 'desc': desc} #print src exec compile2(src) in d return d[name] From noreply at buildbot.pypy.org Tue Feb 7 04:36:28 2012 From: noreply at buildbot.pypy.org (wlav) Date: Tue, 7 Feb 2012 04:36:28 +0100 (CET) Subject: [pypy-commit] pypy reflex-support: o) integer class mixin Message-ID: <20120207033628.5F9BE7107FA@wyvern.cs.uni-duesseldorf.de> Author: Wim Lavrijsen Branch: reflex-support Changeset: r52155:d72d77b4a76e Date: 2012-02-06 13:35 -0800 http://bitbucket.org/pypy/pypy/changeset/d72d77b4a76e/ Log: o) integer class mixin o) long integer default parameters for ffi call diff --git a/pypy/module/cppyy/capi/__init__.py b/pypy/module/cppyy/capi/__init__.py --- a/pypy/module/cppyy/capi/__init__.py +++ b/pypy/module/cppyy/capi/__init__.py @@ -208,9 +208,14 @@ [C_TYPEHANDLE, rffi.INT], rffi.INT, compilation_info=backend.eci) -c_atoi = rffi.llexternal( - "cppyy_atoi", - [rffi.CCHARP], rffi.INT, +c_strtoll = rffi.llexternal( + "cppyy_strtoll", + [rffi.CCHARP], rffi.LONGLONG, + compilation_info=backend.eci) + +c_strtoull = rffi.llexternal( + "cppyy_strtoull", + [rffi.CCHARP], rffi.ULONGLONG, compilation_info=backend.eci) c_free = rffi.llexternal( diff --git a/pypy/module/cppyy/converter.py b/pypy/module/cppyy/converter.py --- a/pypy/module/cppyy/converter.py +++ b/pypy/module/cppyy/converter.py @@ -137,6 +137,34 @@ space.wrap("raw buffer interface not supported")) +class IntTypeConverterMixin(object): + _mixin_ = True + _immutable_ = True + + def __init__(self, space, default): + self.default = rffi.cast(self.rffitype, capi.c_strtoll(default)) + + def convert_argument(self, space, w_obj, address): + x = rffi.cast(self.rffiptype, address) + x[0] = self._unwrap_object(space, w_obj) + + def convert_argument_libffi(self, space, w_obj, argchain): + 
argchain.arg(self._unwrap_object(space, w_obj)) + + def default_argument_libffi(self, space, argchain): + argchain.arg(self.default) + + def from_memory(self, space, w_obj, w_type, offset): + address = self._get_raw_address(space, w_obj, offset) + intptr = rffi.cast(self.rffiptype, address) + return space.wrap(intptr[0]) + + def to_memory(self, space, w_obj, w_value, offset): + address = self._get_raw_address(space, w_obj, offset) + intptr = rffi.cast(self.rffiptype, address) + intptr[0] = self._unwrap_object(space, w_value) + + class VoidConverter(TypeConverter): _immutable_ = True libffitype = libffi.types.void @@ -217,36 +245,15 @@ address = rffi.cast(rffi.CCHARP, self._get_raw_address(space, w_obj, offset)) address[0] = self._unwrap_object(space, w_value) -class IntConverter(TypeConverter): +class IntConverter(IntTypeConverterMixin, TypeConverter): _immutable_ = True libffitype = libffi.types.sint - - def __init__(self, space, default): - self.default = capi.c_atoi(default) + rffitype = rffi.INT + rffiptype = rffi.INTP def _unwrap_object(self, space, w_obj): return rffi.cast(rffi.INT, space.c_int_w(w_obj)) - def convert_argument(self, space, w_obj, address): - x = rffi.cast(rffi.INTP, address) - x[0] = self._unwrap_object(space, w_obj) - - def convert_argument_libffi(self, space, w_obj, argchain): - argchain.arg(self._unwrap_object(space, w_obj)) - - def default_argument_libffi(self, space, argchain): - argchain.arg(self.default) - - def from_memory(self, space, w_obj, w_type, offset): - address = self._get_raw_address(space, w_obj, offset) - intptr = rffi.cast(rffi.INTP, address) - return space.wrap(intptr[0]) - - def to_memory(self, space, w_obj, w_value, offset): - address = self._get_raw_address(space, w_obj, offset) - intptr = rffi.cast(rffi.INTP, address) - intptr[0] = self._unwrap_object(space, w_value) - class UnsignedIntConverter(TypeConverter): _immutable_ = True libffitype = libffi.types.uint @@ -271,30 +278,15 @@ ulongptr = rffi.cast(rffi.UINTP, address) ulongptr[0] = self._unwrap_object(space, w_value) -class LongConverter(TypeConverter): +class LongConverter(IntTypeConverterMixin, TypeConverter): _immutable_ = True libffitype = libffi.types.slong + rffitype = rffi.LONG + rffiptype = rffi.LONGP def _unwrap_object(self, space, w_obj): return space.int_w(w_obj) - def convert_argument(self, space, w_obj, address): - x = rffi.cast(rffi.LONGP, address) - x[0] = self._unwrap_object(space, w_obj) - - def convert_argument_libffi(self, space, w_obj, argchain): - argchain.arg(self._unwrap_object(space, w_obj)) - - def from_memory(self, space, w_obj, w_type, offset): - address = self._get_raw_address(space, w_obj, offset) - longptr = rffi.cast(rffi.LONGP, address) - return space.wrap(longptr[0]) - - def to_memory(self, space, w_obj, w_value, offset): - address = self._get_raw_address(space, w_obj, offset) - longptr = rffi.cast(rffi.LONGP, address) - longptr[0] = self._unwrap_object(space, w_value) - class UnsignedLongConverter(TypeConverter): _immutable_ = True libffitype = libffi.types.ulong diff --git a/pypy/module/cppyy/include/capi.h b/pypy/module/cppyy/include/capi.h --- a/pypy/module/cppyy/include/capi.h +++ b/pypy/module/cppyy/include/capi.h @@ -73,7 +73,8 @@ /* misc helpers */ void cppyy_free(void* ptr); - int cppyy_atoi(const char* str); + long long cppyy_strtoll(const char* str); + unsigned long long cppyy_strtuoll(const char* str); #ifdef __cplusplus } diff --git a/pypy/module/cppyy/src/cintcwrapper.cxx b/pypy/module/cppyy/src/cintcwrapper.cxx --- 
a/pypy/module/cppyy/src/cintcwrapper.cxx +++ b/pypy/module/cppyy/src/cintcwrapper.cxx @@ -462,8 +462,12 @@ /* misc helpers ----------------------------------------------------------- */ -int cppyy_atoi(const char* str) { - return atoi(str); +long long cppyy_strtoll(const char* str) { + return strtoll(str, NULL, 0); +} + +unsigned long long cppyy_strtoull(const char* str) { + return strtoull(str, NULL, 0); } void cppyy_free(void* ptr) { diff --git a/pypy/module/cppyy/src/reflexcwrapper.cxx b/pypy/module/cppyy/src/reflexcwrapper.cxx --- a/pypy/module/cppyy/src/reflexcwrapper.cxx +++ b/pypy/module/cppyy/src/reflexcwrapper.cxx @@ -360,8 +360,12 @@ /* misc helpers ----------------------------------------------------------- */ -int cppyy_atoi(const char* str) { - return atoi(str); +long long cppyy_strtoll(const char* str) { + return strtoll(str, NULL, 0); +} + +unsigned long long cppyy_strtoull(const char* str) { + return strtoull(str, NULL, 0); } void cppyy_free(void* ptr) { diff --git a/pypy/module/cppyy/test/example01.cxx b/pypy/module/cppyy/test/example01.cxx --- a/pypy/module/cppyy/test/example01.cxx +++ b/pypy/module/cppyy/test/example01.cxx @@ -138,21 +138,25 @@ // argument passing -int ArgPasser::intValue(int arg0, int argn, int arg1, int arg2) -{ - switch (argn) { - case 0: - return arg0; - case 1: - return arg1; - case 2: - return arg2; - default: - break; - } +#define typeValueImp(itype) \ +itype ArgPasser::itype##Value(itype arg0, int argn, itype arg1, itype arg2) \ +{ \ + switch (argn) { \ + case 0: \ + return arg0; \ + case 1: \ + return arg1; \ + case 2: \ + return arg2; \ + default: \ + break; \ + } \ + \ + return itype(-1); \ +} - return -1; -} +typeValueImp(int) +typeValueImp(long) std::string ArgPasser::stringValue(std::string arg0, int argn, std::string arg1) { diff --git a/pypy/module/cppyy/test/example01.h b/pypy/module/cppyy/test/example01.h --- a/pypy/module/cppyy/test/example01.h +++ b/pypy/module/cppyy/test/example01.h @@ -61,11 +61,14 @@ int globalAddOneToInt(int a); } +#define typeValue(itype)\ + itype itype##Value(itype arg0, int argn=0, itype arg1=itype(1), itype arg2=itype(2)) // argument passing class ArgPasser { // use a class for now as methptrgetter not public: // implemented for global functions - int intValue(int arg0, int argn=0, int arg1=1, int arg2=2); + typeValue(int); + typeValue(long); std::string stringValue( std::string arg0, int argn=0, std::string arg1 = "default"); diff --git a/pypy/module/cppyy/test/test_pythonify.py b/pypy/module/cppyy/test/test_pythonify.py --- a/pypy/module/cppyy/test/test_pythonify.py +++ b/pypy/module/cppyy/test/test_pythonify.py @@ -257,15 +257,16 @@ assert f(s("noot"), 1).c_str() == "default" assert f(s("mies")).c_str() == "mies" - g = a.intValue - raises(TypeError, 'g(1, 2, 3, 4, 6)') - assert g(11, 0, 12, 13) == 11 - assert g(11, 1, 12, 13) == 12 - assert g(11, 1, 12) == 12 - assert g(11, 2, 12) == 2 - assert g(11, 1) == 1 - assert g(11, 2) == 2 - assert g(11) == 11 + for itype in ['int', 'long']: + g = getattr(a, '%sValue' % itype) + raises(TypeError, 'g(1, 2, 3, 4, 6)') + assert g(11, 0, 12, 13) == 11 + assert g(11, 1, 12, 13) == 12 + assert g(11, 1, 12) == 12 + assert g(11, 2, 12) == 2 + assert g(11, 1) == 1 + assert g(11, 2) == 2 + assert g(11) == 11 def test12_underscore_in_class_name(self): """Test recognition of '_' as part of a valid class name""" From noreply at buildbot.pypy.org Tue Feb 7 04:36:29 2012 From: noreply at buildbot.pypy.org (wlav) Date: Tue, 7 Feb 2012 04:36:29 +0100 (CET) Subject: [pypy-commit] 
pypy reflex-support: o) fleshed out mixins for unsigned integer types Message-ID: <20120207033629.990807107FB@wyvern.cs.uni-duesseldorf.de> Author: Wim Lavrijsen Branch: reflex-support Changeset: r52156:b637b18b01b8 Date: 2012-02-06 15:32 -0800 http://bitbucket.org/pypy/pypy/changeset/b637b18b01b8/ Log: o) fleshed out mixins for unsigned integer types o) default args supported for all integer types diff --git a/pypy/module/cppyy/converter.py b/pypy/module/cppyy/converter.py --- a/pypy/module/cppyy/converter.py +++ b/pypy/module/cppyy/converter.py @@ -57,6 +57,10 @@ from pypy.module.cppyy.interp_cppyy import FastCallNotPossible raise FastCallNotPossible + def default_argument_libffi(self, space, argchain): + from pypy.module.cppyy.interp_cppyy import FastCallNotPossible + raise FastCallNotPossible + def from_memory(self, space, w_obj, w_type, offset): self._is_abstract(space) @@ -140,9 +144,6 @@ class IntTypeConverterMixin(object): _mixin_ = True _immutable_ = True - - def __init__(self, space, default): - self.default = rffi.cast(self.rffitype, capi.c_strtoll(default)) def convert_argument(self, space, w_obj, address): x = rffi.cast(self.rffiptype, address) @@ -156,13 +157,13 @@ def from_memory(self, space, w_obj, w_type, offset): address = self._get_raw_address(space, w_obj, offset) - intptr = rffi.cast(self.rffiptype, address) - return space.wrap(intptr[0]) + rffiptr = rffi.cast(self.rffiptype, address) + return space.wrap(rffiptr[0]) def to_memory(self, space, w_obj, w_value, offset): address = self._get_raw_address(space, w_obj, offset) - intptr = rffi.cast(self.rffiptype, address) - intptr[0] = self._unwrap_object(space, w_value) + rffiptr = rffi.cast(self.rffiptype, address) + rffiptr[0] = self._unwrap_object(space, w_value) class VoidConverter(TypeConverter): @@ -245,96 +246,73 @@ address = rffi.cast(rffi.CCHARP, self._get_raw_address(space, w_obj, offset)) address[0] = self._unwrap_object(space, w_value) + +class ShortConverter(IntTypeConverterMixin, TypeConverter): + _immutable_ = True + libffitype = libffi.types.sshort + rffiptype = rffi.SHORTP + + def __init__(self, space, default): + self.default = rffi.cast(rffi.SHORT, capi.c_strtoll(default)) + + def _unwrap_object(self, space, w_obj): + return rffi.cast(rffi.SHORT, space.int_w(w_obj)) + +class UnsignedShortConverter(IntTypeConverterMixin, TypeConverter): + _immutable_ = True + libffitype = libffi.types.sshort + rffiptype = rffi.USHORTP + + def __init__(self, space, default): + self.default = rffi.cast(rffi.USHORT, capi.c_strtoull(default)) + + def _unwrap_object(self, space, w_obj): + return rffi.cast(rffi.USHORT, space.int_w(w_obj)) + class IntConverter(IntTypeConverterMixin, TypeConverter): _immutable_ = True libffitype = libffi.types.sint - rffitype = rffi.INT rffiptype = rffi.INTP + def __init__(self, space, default): + self.default = rffi.cast(rffi.INT, capi.c_strtoll(default)) + def _unwrap_object(self, space, w_obj): return rffi.cast(rffi.INT, space.c_int_w(w_obj)) -class UnsignedIntConverter(TypeConverter): +class UnsignedIntConverter(IntTypeConverterMixin, TypeConverter): _immutable_ = True libffitype = libffi.types.uint + rffiptype = rffi.UINTP + + def __init__(self, space, default): + self.default = rffi.cast(rffi.UINT, capi.c_strtoull(default)) def _unwrap_object(self, space, w_obj): return rffi.cast(rffi.UINT, space.uint_w(w_obj)) - def convert_argument(self, space, w_obj, address): - x = rffi.cast(rffi.UINTP, address) - x[0] = self._unwrap_object(space, w_obj) - - def convert_argument_libffi(self, space, w_obj, 
argchain): - argchain.arg(self._unwrap_object(space, w_obj)) - - def from_memory(self, space, w_obj, w_type, offset): - address = self._get_raw_address(space, w_obj, offset) - ulongptr = rffi.cast(rffi.UINTP, address) - return space.wrap(ulongptr[0]) - - def to_memory(self, space, w_obj, w_value, offset): - address = self._get_raw_address(space, w_obj, offset) - ulongptr = rffi.cast(rffi.UINTP, address) - ulongptr[0] = self._unwrap_object(space, w_value) - class LongConverter(IntTypeConverterMixin, TypeConverter): _immutable_ = True libffitype = libffi.types.slong - rffitype = rffi.LONG rffiptype = rffi.LONGP + def __init__(self, space, default): + self.default = rffi.cast(rffi.LONG, capi.c_strtoll(default)) + def _unwrap_object(self, space, w_obj): return space.int_w(w_obj) -class UnsignedLongConverter(TypeConverter): +class UnsignedLongConverter(IntTypeConverterMixin, TypeConverter): _immutable_ = True libffitype = libffi.types.ulong + rffiptype = rffi.ULONGP + + def __init__(self, space, default): + self.default = rffi.cast(rffi.ULONG, capi.c_strtoull(default)) def _unwrap_object(self, space, w_obj): return space.uint_w(w_obj) - def convert_argument(self, space, w_obj, address): - x = rffi.cast(rffi.ULONGP, address) - x[0] = self._unwrap_object(space, w_obj) - - def convert_argument_libffi(self, space, w_obj, argchain): - argchain.arg(self._unwrap_object(space, w_obj)) - - def from_memory(self, space, w_obj, w_type, offset): - address = self._get_raw_address(space, w_obj, offset) - ulongptr = rffi.cast(rffi.ULONGP, address) - return space.wrap(ulongptr[0]) - - def to_memory(self, space, w_obj, w_value, offset): - address = self._get_raw_address(space, w_obj, offset) - ulongptr = rffi.cast(rffi.ULONGP, address) - ulongptr[0] = self._unwrap_object(space, w_value) - -class ShortConverter(TypeConverter): - _immutable_ = True - libffitype = libffi.types.sshort - - def _unwrap_object(self, space, w_obj): - return rffi.cast(rffi.SHORT, space.int_w(w_obj)) - - def convert_argument(self, space, w_obj, address): - x = rffi.cast(rffi.SHORTP, address) - x[0] = self._unwrap_object(space, w_obj) - - def convert_argument_libffi(self, space, w_obj, argchain): - argchain.arg(self._unwrap_object(space, w_obj)) - - def from_memory(self, space, w_obj, w_type, offset): - address = self._get_raw_address(space, w_obj, offset) - shortptr = rffi.cast(rffi.SHORTP, address) - return space.wrap(shortptr[0]) - - def to_memory(self, space, w_obj, w_value, offset): - address = self._get_raw_address(space, w_obj, offset) - shortptr = rffi.cast(rffi.SHORTP, address) - shortptr[0] = self._unwrap_object(space, w_value) - class FloatConverter(TypeConverter): _immutable_ = True libffitype = libffi.types.float @@ -454,6 +432,11 @@ typecode = 'h' typesize = rffi.sizeof(rffi.SHORT) +class UnsignedShortArrayConverter(ArrayTypeConverterMixin, TypeConverter): + _immutable_ = True + typecode = 'H' + typesize = rffi.sizeof(rffi.USHORT) + class IntArrayConverter(ArrayTypeConverterMixin, TypeConverter): _immutable_ = True typecode = 'i' @@ -469,6 +452,11 @@ typecode = 'l' typesize = rffi.sizeof(rffi.LONG) +class UnsignedLongArrayConverter(ArrayTypeConverterMixin, TypeConverter): + _immutable_ = True + typecode = 'L' + typesize = rffi.sizeof(rffi.ULONG) + class FloatArrayConverter(ArrayTypeConverterMixin, TypeConverter): _immutable_ = True typecode = 'f' @@ -485,6 +473,11 @@ typecode = 'h' typesize = rffi.sizeof(rffi.SHORT) +class UnsignedShortPtrConverter(PtrTypeConverterMixin, TypeConverter): + _immutable_ = True + typecode = 'H' + 
typesize = rffi.sizeof(rffi.USHORT) + class IntPtrConverter(PtrTypeConverterMixin, TypeConverter): _immutable_ = True typecode = 'i' @@ -500,6 +493,11 @@ typecode = 'l' typesize = rffi.sizeof(rffi.LONG) +class UnsignedLongPtrConverter(PtrTypeConverterMixin, TypeConverter): + _immutable_ = True + typecode = 'L' + typesize = rffi.sizeof(rffi.ULONG) + class FloatPtrConverter(PtrTypeConverterMixin, TypeConverter): _immutable_ = True typecode = 'f' @@ -623,7 +621,7 @@ _converters["unsigned char"] = CharConverter _converters["short int"] = ShortConverter _converters["short"] = _converters["short int"] -_converters["unsigned short int"] = ShortConverter +_converters["unsigned short int"] = UnsignedShortConverter _converters["unsigned short"] = _converters["unsigned short int"] _converters["int"] = IntConverter _converters["unsigned int"] = UnsignedIntConverter @@ -644,9 +642,9 @@ _a_converters["short*"] = _a_converters["short int*"] _a_converters["short int[]"] = ShortArrayConverter _a_converters["short[]"] = _a_converters["short int[]"] -_a_converters["unsigned short int*"] = ShortPtrConverter +_a_converters["unsigned short int*"] = UnsignedShortPtrConverter _a_converters["unsigned short*"] = _a_converters["unsigned short int*"] -_a_converters["unsigned short int[]"] = ShortArrayConverter +_a_converters["unsigned short int[]"] = UnsignedShortArrayConverter _a_converters["unsigned short[]"] = _a_converters["unsigned short int[]"] _a_converters["int*"] = IntPtrConverter _a_converters["int[]"] = IntArrayConverter @@ -656,9 +654,9 @@ _a_converters["long*"] = _a_converters["long int*"] _a_converters["long int[]"] = LongArrayConverter _a_converters["long[]"] = _a_converters["long int[]"] -_a_converters["unsigned long int*"] = LongPtrConverter +_a_converters["unsigned long int*"] = UnsignedLongPtrConverter _a_converters["unsigned long*"] = _a_converters["unsigned long int*"] -_a_converters["unsigned long int[]"] = LongArrayConverter +_a_converters["unsigned long int[]"] = UnsignedLongArrayConverter _a_converters["unsigned long[]"] = _a_converters["unsigned long int[]"] _a_converters["float*"] = FloatPtrConverter _a_converters["float[]"] = FloatArrayConverter diff --git a/pypy/module/cppyy/src/cintcwrapper.cxx b/pypy/module/cppyy/src/cintcwrapper.cxx --- a/pypy/module/cppyy/src/cintcwrapper.cxx +++ b/pypy/module/cppyy/src/cintcwrapper.cxx @@ -399,6 +399,11 @@ return type_cppstring_to_cstring(arg->GetFullTypeName()); } +char* cppyy_method_arg_default(cppyy_typehandle_t, int, int) { +/* unused: libffi does not work with CINT back-end */ + return cppstring_to_cstring(""); +} + int cppyy_is_constructor(cppyy_typehandle_t handle, int method_index) { TClassRef cr = type_from_handle(handle); @@ -466,7 +471,7 @@ return strtoll(str, NULL, 0); } -unsigned long long cppyy_strtoull(const char* str) { +extern "C" unsigned long long cppyy_strtoull(const char* str) { return strtoull(str, NULL, 0); } diff --git a/pypy/module/cppyy/src/reflexcwrapper.cxx b/pypy/module/cppyy/src/reflexcwrapper.cxx --- a/pypy/module/cppyy/src/reflexcwrapper.cxx +++ b/pypy/module/cppyy/src/reflexcwrapper.cxx @@ -364,7 +364,7 @@ return strtoll(str, NULL, 0); } -unsigned long long cppyy_strtoull(const char* str) { +extern "C" unsigned long long cppyy_strtoull(const char* str) { return strtoull(str, NULL, 0); } diff --git a/pypy/module/cppyy/test/example01.cxx b/pypy/module/cppyy/test/example01.cxx --- a/pypy/module/cppyy/test/example01.cxx +++ b/pypy/module/cppyy/test/example01.cxx @@ -138,8 +138,8 @@ // argument passing -#define 
typeValueImp(itype) \ -itype ArgPasser::itype##Value(itype arg0, int argn, itype arg1, itype arg2) \ +#define typeValueImp(itype, tname) \ +itype ArgPasser::tname##Value(itype arg0, int argn, itype arg1, itype arg2) \ { \ switch (argn) { \ case 0: \ @@ -152,11 +152,15 @@ break; \ } \ \ - return itype(-1); \ + return (itype)-1; \ } -typeValueImp(int) -typeValueImp(long) +typeValueImp(short, short) +typeValueImp(unsigned short, ushort) +typeValueImp(int, int) +typeValueImp(unsigned int, uint) +typeValueImp(long, long) +typeValueImp(unsigned long, ulong) std::string ArgPasser::stringValue(std::string arg0, int argn, std::string arg1) { diff --git a/pypy/module/cppyy/test/example01.h b/pypy/module/cppyy/test/example01.h --- a/pypy/module/cppyy/test/example01.h +++ b/pypy/module/cppyy/test/example01.h @@ -61,14 +61,18 @@ int globalAddOneToInt(int a); } -#define typeValue(itype)\ - itype itype##Value(itype arg0, int argn=0, itype arg1=itype(1), itype arg2=itype(2)) +#define typeValue(itype, tname) \ + itype tname##Value(itype arg0, int argn=0, itype arg1=1, itype arg2=2) // argument passing class ArgPasser { // use a class for now as methptrgetter not public: // implemented for global functions - typeValue(int); - typeValue(long); + typeValue(short, short); + typeValue(unsigned short, ushort); + typeValue(int, int); + typeValue(unsigned int, uint); + typeValue(long, long); + typeValue(unsigned long, ulong); std::string stringValue( std::string arg0, int argn=0, std::string arg1 = "default"); diff --git a/pypy/module/cppyy/test/test_pythonify.py b/pypy/module/cppyy/test/test_pythonify.py --- a/pypy/module/cppyy/test/test_pythonify.py +++ b/pypy/module/cppyy/test/test_pythonify.py @@ -257,7 +257,7 @@ assert f(s("noot"), 1).c_str() == "default" assert f(s("mies")).c_str() == "mies" - for itype in ['int', 'long']: + for itype in ['short', 'ushort', 'int', 'uint', 'long', 'ulong']: g = getattr(a, '%sValue' % itype) raises(TypeError, 'g(1, 2, 3, 4, 6)') assert g(11, 0, 12, 13) == 11 From noreply at buildbot.pypy.org Tue Feb 7 10:13:48 2012 From: noreply at buildbot.pypy.org (fijal) Date: Tue, 7 Feb 2012 10:13:48 +0100 (CET) Subject: [pypy-commit] pypy numpy-record-dtypes: implement byteswap with the hope of being rpython Message-ID: <20120207091348.8266482CE3@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-record-dtypes Changeset: r52157:e88c9ea41096 Date: 2012-02-07 11:13 +0200 http://bitbucket.org/pypy/pypy/changeset/e88c9ea41096/ Log: implement byteswap with the hope of being rpython diff --git a/pypy/rlib/rarithmetic.py b/pypy/rlib/rarithmetic.py --- a/pypy/rlib/rarithmetic.py +++ b/pypy/rlib/rarithmetic.py @@ -513,3 +513,31 @@ if not objectmodel.we_are_translated(): assert n <= p return llop.int_between(lltype.Bool, n, m, p) + + at objectmodel.specialize.argtype(0) +def byteswap(arg): + """ Convert little->big endian and the opposite + """ + from pypy.rpython.lltypesystem import lltype, rffi + + T = lltype.typeOf(arg) + if T != rffi.LONGLONG and T != rffi.ULONGLONG and T != rffi.UINT: + arg = rffi.cast(lltype.Signed, arg) + # XXX we cannot do arithmetics on small ints + if rffi.sizeof(T) == 1: + res = arg + elif rffi.sizeof(T) == 2: + a, b = arg & 0xFF, arg & 0xFF00 + res = (a << 8) | (b >> 8) + elif rffi.sizeof(T) == 4: + a, b, c, d = arg & 0xFF, arg & 0xFF00, arg & 0xFF0000, arg & 0xFF000000 + res = (a << 24) | (b << 8) | (c >> 8) | (d >> 24) + elif rffi.sizeof(T) == 8: + a, b, c, d = arg & 0xFF, arg & 0xFF00, arg & 0xFF0000, arg & 0xFF000000 + e, f, g, h = (arg & (0xFF 
<< 32), arg & (0xFF << 40), + arg & (0xFF << 48), arg & (0xFF << 56)) + res = ((a << 56) | (b << 40) | (c << 24) | (d << 8) | (e >> 8) | + (f >> 24) | (g >> 40) | (h >> 56)) + else: + assert False # unreachable code + return rffi.cast(T, res) diff --git a/pypy/rlib/test/test_rarithmetic.py b/pypy/rlib/test/test_rarithmetic.py --- a/pypy/rlib/test/test_rarithmetic.py +++ b/pypy/rlib/test/test_rarithmetic.py @@ -374,3 +374,9 @@ assert not int_between(1, 2, 2) assert not int_between(1, 1, 1) +def test_byteswap(): + from pypy.rpython.lltypesystem import rffi + + assert byteswap(rffi.cast(rffi.USHORT, 0x0102)) == 0x0201 + assert byteswap(rffi.cast(rffi.INT, 0x01020304)) == 0x04030201 + assert byteswap(rffi.cast(rffi.ULONGLONG, 0x0102030405060708L)) == 0x0807060504030201L From noreply at buildbot.pypy.org Tue Feb 7 11:04:18 2012 From: noreply at buildbot.pypy.org (fijal) Date: Tue, 7 Feb 2012 11:04:18 +0100 (CET) Subject: [pypy-commit] pypy numpy-record-dtypes: non native dtypes Message-ID: <20120207100418.2E56282CE3@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-record-dtypes Changeset: r52158:b8341326183f Date: 2012-02-07 12:03 +0200 http://bitbucket.org/pypy/pypy/changeset/b8341326183f/ Log: non native dtypes diff --git a/pypy/module/micronumpy/interp_dtype.py b/pypy/module/micronumpy/interp_dtype.py --- a/pypy/module/micronumpy/interp_dtype.py +++ b/pypy/module/micronumpy/interp_dtype.py @@ -1,3 +1,5 @@ + +import sys from pypy.interpreter.baseobjspace import Wrappable from pypy.interpreter.error import OperationError from pypy.interpreter.gateway import interp2app @@ -67,9 +69,10 @@ return w_dtype elif space.isinstance_w(w_dtype, space.w_str): name = space.str_w(w_dtype) - for dtype in cache.builtin_dtypes: - if dtype.name == name or dtype.char == name or name in dtype.aliases: - return dtype + try: + return cache.dtypes_by_name[name] + except KeyError: + pass elif space.isinstance_w(w_dtype, space.w_list): xxx else: @@ -127,6 +130,13 @@ ) W_Dtype.typedef.acceptable_as_base_class = False +if sys.byteorder == 'little': + byteorder_prefix = '<' + nonnative_byteorder_prefix = '>' +else: + byteorder_prefix = '>' + nonnative_byteorder_prefix = '<' + class DtypeCache(object): def __init__(self, space): self.w_booldtype = W_Dtype( @@ -258,7 +268,6 @@ char='Q', w_box_type = space.gettypefor(interp_boxes.W_ULongLongBox), ) - self.builtin_dtypes = [ self.w_booldtype, self.w_int8dtype, self.w_uint8dtype, self.w_int16dtype, self.w_uint16dtype, self.w_int32dtype, @@ -271,6 +280,20 @@ (dtype.itemtype.get_element_size(), dtype) for dtype in self.builtin_dtypes ) + self.dtypes_by_name = {} + for dtype in self.builtin_dtypes: + self.dtypes_by_name[dtype.name] = dtype + can_name = dtype.kind + str(dtype.itemtype.get_element_size()) + self.dtypes_by_name[can_name] = dtype + self.dtypes_by_name[byteorder_prefix + can_name] = dtype + new_name = nonnative_byteorder_prefix + can_name + itemtypename = dtype.itemtype.__class__.__name__ + self.dtypes_by_name[new_name] = W_Dtype( + getattr(types, 'NonNative' + itemtypename)(), + dtype.num, dtype.kind, new_name, dtype.char, dtype.w_box_type) + for alias in dtype.aliases: + self.dtypes_by_name[alias] = dtype + self.dtypes_by_name[dtype.char] = dtype def get_dtype_cache(space): return space.fromcache(DtypeCache) diff --git a/pypy/module/micronumpy/test/test_dtypes.py b/pypy/module/micronumpy/test/test_dtypes.py --- a/pypy/module/micronumpy/test/test_dtypes.py +++ b/pypy/module/micronumpy/test/test_dtypes.py @@ -423,3 +423,8 @@ numpy.generic, 
object) assert numpy.bool_.__mro__ == (numpy.bool_, numpy.generic, object) #assert numpy.str_.__mro__ == + + def test_alternate_constructs(self): + from _numpypy import dtype + assert dtype('i8') == dtype('i8') != dtype('i8') diff --git a/pypy/module/micronumpy/types.py b/pypy/module/micronumpy/types.py --- a/pypy/module/micronumpy/types.py +++ b/pypy/module/micronumpy/types.py @@ -6,7 +6,7 @@ from pypy.objspace.std.floatobject import float2string from pypy.rlib import rfloat, libffi, clibffi from pypy.rlib.objectmodel import specialize -from pypy.rlib.rarithmetic import LONG_BIT, widen +from pypy.rlib.rarithmetic import LONG_BIT, widen, byteswap from pypy.rpython.lltypesystem import lltype, rffi from pypy.rlib.rstruct.runpack import runpack @@ -99,28 +99,28 @@ def default_fromstring(self, space): raise NotImplementedError + def _read(self, storage, width, i, offset): + return libffi.array_getitem(clibffi.cast_type_to_ffitype(self.T), + width, storage, i, offset) + def read(self, storage, width, i, offset): - return self.box(libffi.array_getitem(clibffi.cast_type_to_ffitype(self.T), - width, storage, i, offset - )) + return self.box(self._read(storage, width, i, offset)) def read_bool(self, storage, width, i, offset): - return bool(self.for_computation( - libffi.array_getitem(clibffi.cast_type_to_ffitype(self.T), - width, storage, i, offset))) + return bool(self.for_computation(self._read(storage, width, i, offset))) + + def _write(self, storage, width, i, offset, value): + libffi.array_setitem(clibffi.cast_type_to_ffitype(self.T), + width, storage, i, offset, value) + def store(self, storage, width, i, offset, box): - value = self.unbox(box) - libffi.array_setitem(clibffi.cast_type_to_ffitype(self.T), - width, storage, i, offset, value - ) + self._write(storage, width, i, offset, self.unbox(box)) def fill(self, storage, width, box, start, stop, offset): value = self.unbox(box) for i in xrange(start, stop): - libffi.array_setitem(clibffi.cast_type_to_ffitype(self.T), - width, storage, i, offset, value - ) + self._write(storage, width, i, offset, value) def runpack_str(self, s): return self.box(runpack(self.format_code, s)) @@ -208,6 +208,14 @@ def min(self, v1, v2): return min(v1, v2) +class NonNativePrimitive(Primitive): + _mixin_ = True + + def _read(self, storage, width, i, offset): + return byteswap(Primitive._read(self, storage, width, i, offset)) + + def _write(self, storage, width, i, offset, value): + Primitive._write(self, storage, width, i, offset, byteswap(value)) class Bool(BaseType, Primitive): T = lltype.Bool @@ -523,3 +531,14 @@ UIntP = tp break del tp + +def _setup(): + for name, tp in globals().items(): + if isinstance(tp, type): + class NonNative(NonNativePrimitive, tp): + pass + NonNative.__name__ = 'NonNative' + name + globals()[NonNative.__name__] = NonNative + +_setup() +del _setup From noreply at buildbot.pypy.org Tue Feb 7 11:39:17 2012 From: noreply at buildbot.pypy.org (fijal) Date: Tue, 7 Feb 2012 11:39:17 +0100 (CET) Subject: [pypy-commit] pypy numpy-record-dtypes: implement array.tostring and figure out we don't need it, add some actual tests Message-ID: <20120207103917.78E3082CE3@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-record-dtypes Changeset: r52159:ed14708a881d Date: 2012-02-07 12:38 +0200 http://bitbucket.org/pypy/pypy/changeset/ed14708a881d/ Log: implement array.tostring and figure out we don't need it, add some actual tests diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- 
a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -625,6 +625,11 @@ raise OperationError(space.w_NotImplementedError, space.wrap( "non-int arg not supported")) + def descr_tostring(self, space): + ra = ToStringArray(self) + loop.compute(ra) + return space.wrap(ra.s.build()) + def compute_first_step(self, sig, frame): pass @@ -805,6 +810,18 @@ return signature.ResultSignature(self.res_dtype, self.left.create_sig(), self.right.create_sig()) +class ToStringArray(Call1): + def __init__(self, child): + dtype = child.find_dtype() + self.itemsize = dtype.itemtype.get_element_size() + self.s = StringBuilder(child.size * self.itemsize) + Call1.__init__(self, None, 'tostring', child.shape, dtype, dtype, + child) + + def create_sig(self): + return signature.ToStringSignature(self.calc_dtype, + self.values.create_sig()) + def done_if_true(dtype, val): return dtype.itemtype.bool(val) @@ -1285,6 +1302,7 @@ std = interp2app(BaseArray.descr_std), fill = interp2app(BaseArray.descr_fill), + tostring = interp2app(BaseArray.descr_tostring), copy = interp2app(BaseArray.descr_copy), flatten = interp2app(BaseArray.descr_flatten), diff --git a/pypy/module/micronumpy/signature.py b/pypy/module/micronumpy/signature.py --- a/pypy/module/micronumpy/signature.py +++ b/pypy/module/micronumpy/signature.py @@ -318,6 +318,17 @@ offset = frame.get_final_iter().offset arr.left.setitem(offset, self.right.eval(frame, arr.right)) +class ToStringSignature(Call1): + def __init__(self, dtype, child): + Call1.__init__(self, None, 'tostring', dtype, child) + + def eval(self, frame, arr): + from pypy.module.micronumpy.interp_numarray import ToStringArray + + assert isinstance(arr, ToStringArray) + arr.s.append(self.dtype.itemtype.pack_str( + self.child.eval(frame, arr.values))) + class BroadcastLeft(Call2): def _invent_numbering(self, cache, allnumbers): self.left._invent_numbering(new_cache(), allnumbers) diff --git a/pypy/module/micronumpy/test/test_dtypes.py b/pypy/module/micronumpy/test/test_dtypes.py --- a/pypy/module/micronumpy/test/test_dtypes.py +++ b/pypy/module/micronumpy/test/test_dtypes.py @@ -1,5 +1,6 @@ from pypy.module.micronumpy.test.test_base import BaseNumpyAppTest - +from pypy.module.micronumpy.interp_dtype import nonnative_byteorder_prefix +from pypy.interpreter.gateway import interp2app class AppTestDtypes(BaseNumpyAppTest): def test_dtype(self): @@ -182,6 +183,20 @@ class AppTestTypes(BaseNumpyAppTest): + def setup_class(cls): + BaseNumpyAppTest.setup_class.im_func(cls) + cls.w_non_native_prefix = cls.space.wrap(nonnative_byteorder_prefix) + def check_non_native(w_obj, w_obj2): + assert w_obj.storage[0] == w_obj2.storage[1] + assert w_obj.storage[1] == w_obj2.storage[0] + if w_obj.storage[0] == '\x00': + assert w_obj2.storage[1] == '\x00' + assert w_obj2.storage[0] == '\x01' + else: + assert w_obj2.storage[1] == '\x01' + assert w_obj2.storage[0] == '\x00' + cls.w_check_non_native = cls.space.wrap(interp2app(check_non_native)) + def test_abstract_types(self): import _numpypy as numpy raises(TypeError, numpy.generic, 0) @@ -427,4 +442,11 @@ def test_alternate_constructs(self): from _numpypy import dtype assert dtype('i8') == dtype('i8') != dtype('i8') + assert dtype(self.non_native_prefix + 'i8') != dtype('i8') + + def test_non_native(self): + from _numpypy import array + a = array([1, 2, 3], dtype=self.non_native_prefix + 'i2') + assert a[0] == 1 + assert (a + a)[1] == 4 + self.check_non_native(a, array([1, 2, 3], 'i2')) diff --git 
a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -1740,6 +1740,12 @@ #5 bytes is larger than 3 bytes raises(ValueError, fromstring, "\x01\x02\x03", count=5, dtype=uint8) + def test_tostring(self): + from _numpypy import array + assert array([1, 2, 3], 'i2').tostring() == '\x01\x00\x02\x00\x03\x00' + assert array([1, 2, 3], 'i2')[::2].tostring() == '\x01\x00\x03\x00' + assert array([1, 2, 3], 'i2')[::2].tostring() == '\x00\x01\x00\x03' class AppTestRanges(BaseNumpyAppTest): def test_arange(self): diff --git a/pypy/module/micronumpy/types.py b/pypy/module/micronumpy/types.py --- a/pypy/module/micronumpy/types.py +++ b/pypy/module/micronumpy/types.py @@ -1,5 +1,6 @@ import functools import math +import struct from pypy.interpreter.error import OperationError from pypy.module.micronumpy import interp_boxes @@ -125,6 +126,9 @@ def runpack_str(self, s): return self.box(runpack(self.format_code, s)) + def pack_str(self, box): + return struct.pack(self.format_code, self.unbox(box)) + @simple_binary_op def add(self, v1, v2): return v1 + v2 @@ -217,6 +221,9 @@ def _write(self, storage, width, i, offset, value): Primitive._write(self, storage, width, i, offset, byteswap(value)) + def pack_str(self, box): + return struct.pack(self.format_code, byteswap(self.unbox(box))) + class Bool(BaseType, Primitive): T = lltype.Bool BoxType = interp_boxes.W_BoolBox From noreply at buildbot.pypy.org Tue Feb 7 11:45:47 2012 From: noreply at buildbot.pypy.org (fijal) Date: Tue, 7 Feb 2012 11:45:47 +0100 (CET) Subject: [pypy-commit] pypy numpy-record-dtypes: exact identity of dtypes is messy. We should rather check for sizes or so. Message-ID: <20120207104547.0554682CE3@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-record-dtypes Changeset: r52160:a942567518c2 Date: 2012-02-07 12:45 +0200 http://bitbucket.org/pypy/pypy/changeset/a942567518c2/ Log: exact identity of dtypes is messy. We should rather check for sizes or so. 
Disable for now diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -960,7 +960,7 @@ assert array([True, False]).dtype is dtype(bool) assert array([True, 1]).dtype is dtype(int) assert array([1, 2, 3]).dtype is dtype(int) - assert array([1L, 2, 3]).dtype is dtype(long) + #assert array([1L, 2, 3]).dtype is dtype(long) assert array([1.2, True]).dtype is dtype(float) assert array([1.2, 5]).dtype is dtype(float) assert array([]).dtype is dtype(float) From noreply at buildbot.pypy.org Tue Feb 7 11:49:01 2012 From: noreply at buildbot.pypy.org (fijal) Date: Tue, 7 Feb 2012 11:49:01 +0100 (CET) Subject: [pypy-commit] pypy numpy-record-dtypes: slightly more rpython friendly way of failing Message-ID: <20120207104901.0ECC782CE3@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-record-dtypes Changeset: r52161:15d89f64aa44 Date: 2012-02-07 12:48 +0200 http://bitbucket.org/pypy/pypy/changeset/15d89f64aa44/ Log: slightly more rpython friendly way of failing diff --git a/pypy/module/micronumpy/interp_dtype.py b/pypy/module/micronumpy/interp_dtype.py --- a/pypy/module/micronumpy/interp_dtype.py +++ b/pypy/module/micronumpy/interp_dtype.py @@ -74,7 +74,7 @@ except KeyError: pass elif space.isinstance_w(w_dtype, space.w_list): - xxx + raise NotImplementedError else: for dtype in cache.builtin_dtypes: if w_dtype in dtype.alternate_constructors: From noreply at buildbot.pypy.org Tue Feb 7 11:55:47 2012 From: noreply at buildbot.pypy.org (fijal) Date: Tue, 7 Feb 2012 11:55:47 +0100 (CET) Subject: [pypy-commit] pypy numpy-record-dtypes: like this maybe? Message-ID: <20120207105547.5F4A882CE3@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-record-dtypes Changeset: r52162:5b4c514f9ae8 Date: 2012-02-07 12:55 +0200 http://bitbucket.org/pypy/pypy/changeset/5b4c514f9ae8/ Log: like this maybe? 
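(Context for the next several changesets -- "like this maybe?", "yet another approach", "more hacking", "screw metaprogramming": they are successive attempts to express the NonNative* item types in a form the RPython toolchain accepts. The generated-classes trick in _setup() keeps causing translation trouble, and the series ends with each NonNativeXxx class written out by hand on top of a _mixin_ = True helper. Below is a rough, self-contained plain-Python sketch of the two styles; the names byteswap16, Int16 and NonNativeMixin are invented for illustration and are not the actual micronumpy code.)

    def byteswap16(x):
        # same mask-and-shift idea as rarithmetic.byteswap, for 2-byte values
        return ((x & 0xFF) << 8) | ((x & 0xFF00) >> 8)

    class Int16(object):
        # "native" type: return the stored value as-is
        def read(self, storage, i):
            return storage[i]

    class NonNativeMixin(object):
        # the _mixin_ marker mirrors the convention used in the diffs;
        # plain Python simply ignores it
        _mixin_ = True
        def read(self, storage, i):
            return byteswap16(Int16.read(self, storage, i))

    # RPython-friendly: each combination is spelled out statically
    class NonNativeInt16(NonNativeMixin, Int16):
        pass

    # What _setup() tried instead -- fine in CPython, but what the changesets
    # below end up abandoning for translation reasons:
    #   for name, cls in list(globals().items()):
    #       globals()['NonNative' + name] = type('NonNative' + name,
    #                                            (NonNativeMixin, cls), {})

    assert NonNativeInt16().read([0x0102], 0) == 0x0201
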
diff --git a/pypy/module/micronumpy/types.py b/pypy/module/micronumpy/types.py --- a/pypy/module/micronumpy/types.py +++ b/pypy/module/micronumpy/types.py @@ -212,7 +212,7 @@ def min(self, v1, v2): return min(v1, v2) -class NonNativePrimitive(Primitive): +class NonNativePrimitive(object): _mixin_ = True def _read(self, storage, width, i, offset): From noreply at buildbot.pypy.org Tue Feb 7 12:01:51 2012 From: noreply at buildbot.pypy.org (fijal) Date: Tue, 7 Feb 2012 12:01:51 +0100 (CET) Subject: [pypy-commit] pypy numpy-record-dtypes: yet another approach Message-ID: <20120207110151.6864C82CE3@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-record-dtypes Changeset: r52163:f54da73f8d03 Date: 2012-02-07 13:01 +0200 http://bitbucket.org/pypy/pypy/changeset/f54da73f8d03/ Log: yet another approach diff --git a/pypy/module/micronumpy/types.py b/pypy/module/micronumpy/types.py --- a/pypy/module/micronumpy/types.py +++ b/pypy/module/micronumpy/types.py @@ -542,8 +542,11 @@ def _setup(): for name, tp in globals().items(): if isinstance(tp, type): - class NonNative(NonNativePrimitive, tp): + class NonNative(tp): pass + for item, v in NonNativePrimitive.__dict__.items(): + if not item.startswith('__'): + setattr(NonNative, item, v) NonNative.__name__ = 'NonNative' + name globals()[NonNative.__name__] = NonNative From noreply at buildbot.pypy.org Tue Feb 7 12:04:41 2012 From: noreply at buildbot.pypy.org (fijal) Date: Tue, 7 Feb 2012 12:04:41 +0100 (CET) Subject: [pypy-commit] pypy numpy-record-dtypes: more hacking Message-ID: <20120207110441.0AD8982CE3@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-record-dtypes Changeset: r52164:74884273c3e5 Date: 2012-02-07 13:04 +0200 http://bitbucket.org/pypy/pypy/changeset/74884273c3e5/ Log: more hacking diff --git a/pypy/module/micronumpy/types.py b/pypy/module/micronumpy/types.py --- a/pypy/module/micronumpy/types.py +++ b/pypy/module/micronumpy/types.py @@ -213,8 +213,6 @@ return min(v1, v2) class NonNativePrimitive(object): - _mixin_ = True - def _read(self, storage, width, i, offset): return byteswap(Primitive._read(self, storage, width, i, offset)) @@ -540,13 +538,15 @@ del tp def _setup(): + from pypy.tool.sourcetools import func_with_new_name + for name, tp in globals().items(): if isinstance(tp, type): class NonNative(tp): pass for item, v in NonNativePrimitive.__dict__.items(): if not item.startswith('__'): - setattr(NonNative, item, v) + setattr(NonNative, item, func_with_new_name(v, item)) NonNative.__name__ = 'NonNative' + name globals()[NonNative.__name__] = NonNative From noreply at buildbot.pypy.org Tue Feb 7 12:24:11 2012 From: noreply at buildbot.pypy.org (hager) Date: Tue, 7 Feb 2012 12:24:11 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend-rpythonization: (bivab, hager): Further work on rpythonization, stil more to do Message-ID: <20120207112411.381FF82CE3@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend-rpythonization Changeset: r52165:e3a7cc20600f Date: 2012-02-07 12:23 +0100 http://bitbucket.org/pypy/pypy/changeset/e3a7cc20600f/ Log: (bivab, hager): Further work on rpythonization, stil more to do diff --git a/pypy/jit/backend/ppc/ppcgen/assembler.py b/pypy/jit/backend/ppc/ppcgen/assembler.py --- a/pypy/jit/backend/ppc/ppcgen/assembler.py +++ b/pypy/jit/backend/ppc/ppcgen/assembler.py @@ -66,7 +66,7 @@ c = asmfunc.AsmCode(len(self.insts)*4) for i in self.insts: c.emit(i)#.assemble()) - return c.get_function() + #return c.get_function() def get_idescs(cls): r = [] diff 
--git a/pypy/jit/backend/ppc/ppcgen/codebuilder.py b/pypy/jit/backend/ppc/ppcgen/codebuilder.py --- a/pypy/jit/backend/ppc/ppcgen/codebuilder.py +++ b/pypy/jit/backend/ppc/ppcgen/codebuilder.py @@ -22,7 +22,6 @@ from pypy.jit.metainterp.resoperation import rop from pypy.jit.metainterp.history import (BoxInt, ConstInt, ConstPtr, ConstFloat, Box, INT, REF, FLOAT) -from pypy.jit.backend.x86.support import values_array from pypy.tool.udir import udir from pypy.rlib.objectmodel import we_are_translated @@ -962,7 +961,6 @@ def __init__(self, failargs_limit=1000, r0_in_use=False): PPCAssembler.__init__(self) self.init_block_builder() - self.fail_boxes_int = values_array(lltype.Signed, failargs_limit) self.r0_in_use = r0_in_use def check(self, desc, v, *args): @@ -996,7 +994,8 @@ self.ldx(rD.value, 0, rD.value) def store_reg(self, source_reg, addr): - self.alloc_scratch_reg(addr) + self.alloc_scratch_reg() + self.load_imm(r.SCRATCH, addr) if IS_PPC_32: self.stwx(source_reg.value, 0, r.SCRATCH.value) else: @@ -1021,13 +1020,15 @@ BI = condition[0] BO = condition[1] - self.alloc_scratch_reg(addr) + self.alloc_scratch_reg() + self.load_imm(r.SCRATCH, addr) self.mtctr(r.SCRATCH.value) self.free_scratch_reg() self.bcctr(BO, BI) def b_abs(self, address, trap=False): - self.alloc_scratch_reg(address) + self.alloc_scratch_reg() + self.load_imm(r.SCRATCH, address) self.mtctr(r.SCRATCH.value) self.free_scratch_reg() if trap: @@ -1154,11 +1155,9 @@ # 64 bit unsigned self.cmpld(block, a, b) - def alloc_scratch_reg(self, value=None): + def alloc_scratch_reg(self): assert not self.r0_in_use self.r0_in_use = True - if value is not None: - self.load_imm(r.r0, value) def free_scratch_reg(self): assert self.r0_in_use diff --git a/pypy/jit/backend/ppc/ppcgen/opassembler.py b/pypy/jit/backend/ppc/ppcgen/opassembler.py --- a/pypy/jit/backend/ppc/ppcgen/opassembler.py +++ b/pypy/jit/backend/ppc/ppcgen/opassembler.py @@ -288,7 +288,8 @@ adr = self.fail_boxes_int.get_addr_for_num(i) else: assert 0 - self.mc.alloc_scratch_reg(adr) + self.mc.alloc_scratch_reg() + self.mc.load_imm(r.SCRATCH, adr) self.mc.storex(loc.value, 0, r.SCRATCH.value) self.mc.free_scratch_reg() elif loc.is_vfp_reg(): @@ -372,7 +373,8 @@ if resloc: self.mc.load(resloc.value, loc.value, 0) - self.mc.alloc_scratch_reg(0) + self.mc.alloc_scratch_reg() + self.mc.load_imm(r.SCRATCH, 0) self.mc.store(r.SCRATCH.value, loc.value, 0) self.mc.store(r.SCRATCH.value, loc1.value, 0) self.mc.free_scratch_reg() @@ -748,7 +750,8 @@ bytes_loc = regalloc.force_allocate_reg(bytes_box, forbidden_vars) scale = self._get_unicode_item_scale() assert length_loc.is_reg() - self.mc.alloc_scratch_reg(1 << scale) + self.mc.alloc_scratch_reg() + self.mc.load_imm(r.SCRATCH, 1 << scale) if IS_PPC_32: self.mc.mullw(bytes_loc.value, r.SCRATCH.value, length_loc.value) else: @@ -857,7 +860,8 @@ def set_vtable(self, box, vtable): if self.cpu.vtable_offset is not None: adr = rffi.cast(lltype.Signed, vtable) - self.mc.alloc_scratch_reg(adr) + self.mc.alloc_scratch_reg() + self.mc.load_imm(r.SCRATCH, adr) self.mc.store(r.SCRATCH.value, r.RES.value, self.cpu.vtable_offset) self.mc.free_scratch_reg() @@ -986,7 +990,8 @@ # check value resloc = regalloc.try_allocate_reg(resbox) assert resloc is r.RES - self.mc.alloc_scratch_reg(value) + self.mc.alloc_scratch_reg() + self.mc.load_imm(r.SCRATCH, value) self.mc.cmp_op(0, resloc.value, r.SCRATCH.value) self.mc.free_scratch_reg() regalloc.possibly_free_var(resbox) @@ -1051,7 +1056,8 @@ raise AssertionError(kind) resloc = 
regalloc.force_allocate_reg(op.result) regalloc.possibly_free_var(resbox) - self.mc.alloc_scratch_reg(adr) + self.mc.alloc_scratch_reg() + self.mc.load_imm(r.SCRATCH, adr) if op.result.type == FLOAT: assert 0, "not implemented yet" else: diff --git a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py b/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py --- a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py +++ b/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py @@ -682,7 +682,8 @@ memaddr = self.gen_descr_encoding(descr, args, arglocs) # store addr in force index field - self.mc.alloc_scratch_reg(memaddr) + self.mc.alloc_scratch_reg() + self.mc.load_imm(r.SCRATCH, memaddr) self.mc.store(r.SCRATCH.value, r.SPP.value, self.ENCODING_AREA) self.mc.free_scratch_reg() @@ -886,7 +887,8 @@ return 0 def _write_fail_index(self, fail_index): - self.mc.alloc_scratch_reg(fail_index) + self.mc.alloc_scratch_reg() + self.mc.load_imm(r.SCRATCH, fail_index) self.mc.store(r.SCRATCH.value, r.SPP.value, self.ENCODING_AREA) self.mc.free_scratch_reg() diff --git a/pypy/jit/backend/ppc/runner.py b/pypy/jit/backend/ppc/runner.py --- a/pypy/jit/backend/ppc/runner.py +++ b/pypy/jit/backend/ppc/runner.py @@ -44,7 +44,7 @@ def setup_once(self): self.asm.setup_once() - def compile_loop(self, inputargs, operations, looptoken, log=False): + def compile_loop(self, inputargs, operations, looptoken, log=False, name=""): self.asm.assemble_loop(inputargs, operations, looptoken, log) def compile_bridge(self, faildescr, inputargs, operations, From noreply at buildbot.pypy.org Tue Feb 7 13:13:52 2012 From: noreply at buildbot.pypy.org (fijal) Date: Tue, 7 Feb 2012 13:13:52 +0100 (CET) Subject: [pypy-commit] pypy numpy-record-dtypes: a small fix to byteswap to avoid prebuilt longs Message-ID: <20120207121352.F30BD82CE3@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-record-dtypes Changeset: r52166:943f4e794e71 Date: 2012-02-07 13:06 +0200 http://bitbucket.org/pypy/pypy/changeset/943f4e794e71/ Log: a small fix to byteswap to avoid prebuilt longs diff --git a/pypy/rlib/rarithmetic.py b/pypy/rlib/rarithmetic.py --- a/pypy/rlib/rarithmetic.py +++ b/pypy/rlib/rarithmetic.py @@ -535,7 +535,7 @@ elif rffi.sizeof(T) == 8: a, b, c, d = arg & 0xFF, arg & 0xFF00, arg & 0xFF0000, arg & 0xFF000000 e, f, g, h = (arg & (0xFF << 32), arg & (0xFF << 40), - arg & (0xFF << 48), arg & (0xFF << 56)) + arg & (0xFF << 48), arg & (r_uint(0xFF) << 56)) res = ((a << 56) | (b << 40) | (c << 24) | (d << 8) | (e >> 8) | (f >> 24) | (g >> 40) | (h >> 56)) else: From noreply at buildbot.pypy.org Tue Feb 7 13:13:54 2012 From: noreply at buildbot.pypy.org (fijal) Date: Tue, 7 Feb 2012 13:13:54 +0100 (CET) Subject: [pypy-commit] pypy numpy-record-dtypes: try different specialization Message-ID: <20120207121354.3240A82CE3@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-record-dtypes Changeset: r52167:c3e31b2f134c Date: 2012-02-07 13:08 +0200 http://bitbucket.org/pypy/pypy/changeset/c3e31b2f134c/ Log: try different specialization diff --git a/pypy/rlib/rarithmetic.py b/pypy/rlib/rarithmetic.py --- a/pypy/rlib/rarithmetic.py +++ b/pypy/rlib/rarithmetic.py @@ -514,7 +514,7 @@ assert n <= p return llop.int_between(lltype.Bool, n, m, p) - at objectmodel.specialize.argtype(0) + at objectmodel.specialize.ll_and_arg(0) def byteswap(arg): """ Convert little->big endian and the opposite """ From noreply at buildbot.pypy.org Tue Feb 7 13:13:55 2012 From: noreply at buildbot.pypy.org (fijal) Date: Tue, 7 Feb 2012 13:13:55 +0100 (CET) Subject: 
[pypy-commit] pypy numpy-record-dtypes: yet another approach Message-ID: <20120207121355.64C7D82CE3@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-record-dtypes Changeset: r52168:e78857e41456 Date: 2012-02-07 13:28 +0200 http://bitbucket.org/pypy/pypy/changeset/e78857e41456/ Log: yet another approach diff --git a/pypy/module/micronumpy/types.py b/pypy/module/micronumpy/types.py --- a/pypy/module/micronumpy/types.py +++ b/pypy/module/micronumpy/types.py @@ -212,7 +212,9 @@ def min(self, v1, v2): return min(v1, v2) -class NonNativePrimitive(object): +class NonNativePrimitive(Primitive): + _mixin_ = True + def _read(self, storage, width, i, offset): return byteswap(Primitive._read(self, storage, width, i, offset)) @@ -516,7 +518,6 @@ def isinf(self, v): return rfloat.isinf(v) - class Float32(BaseType, Float): T = rffi.FLOAT BoxType = interp_boxes.W_Float32Box @@ -538,15 +539,15 @@ del tp def _setup(): - from pypy.tool.sourcetools import func_with_new_name + #from pypy.tool.sourcetools import func_with_new_name for name, tp in globals().items(): - if isinstance(tp, type): - class NonNative(tp): + if isinstance(tp, type) and issubclass(tp, BaseType): + class NonNative(NonNativePrimitive, tp): pass - for item, v in NonNativePrimitive.__dict__.items(): - if not item.startswith('__'): - setattr(NonNative, item, func_with_new_name(v, item)) + #for item, v in NonNativePrimitive.__dict__.items(): + # if not item.startswith('__'): + # setattr(NonNative, item, func_with_new_name(v, item)) NonNative.__name__ = 'NonNative' + name globals()[NonNative.__name__] = NonNative From noreply at buildbot.pypy.org Tue Feb 7 13:13:57 2012 From: noreply at buildbot.pypy.org (fijal) Date: Tue, 7 Feb 2012 13:13:57 +0100 (CET) Subject: [pypy-commit] pypy numpy-record-dtypes: fix the spec Message-ID: <20120207121357.1BD5582CE3@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-record-dtypes Changeset: r52169:438d10b23d99 Date: 2012-02-07 13:30 +0200 http://bitbucket.org/pypy/pypy/changeset/438d10b23d99/ Log: fix the spec diff --git a/pypy/rlib/rarithmetic.py b/pypy/rlib/rarithmetic.py --- a/pypy/rlib/rarithmetic.py +++ b/pypy/rlib/rarithmetic.py @@ -514,7 +514,7 @@ assert n <= p return llop.int_between(lltype.Bool, n, m, p) - at objectmodel.specialize.ll_and_arg(0) + at objectmodel.specialize.ll() def byteswap(arg): """ Convert little->big endian and the opposite """ From noreply at buildbot.pypy.org Tue Feb 7 13:13:58 2012 From: noreply at buildbot.pypy.org (fijal) Date: Tue, 7 Feb 2012 13:13:58 +0100 (CET) Subject: [pypy-commit] pypy numpy-record-dtypes: yet another approach (?) Message-ID: <20120207121358.5518B82CE3@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-record-dtypes Changeset: r52170:3ecf7e516fca Date: 2012-02-07 13:47 +0200 http://bitbucket.org/pypy/pypy/changeset/3ecf7e516fca/ Log: yet another approach (?) 
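(As a reminder of what these NonNative* types are for: a non-native dtype such as '>i2' on a little-endian machine stores the same integers with their bytes reversed, so every element read and write has to go through byteswap(). A short, self-contained illustration of just the byte-order semantics, using only the standard struct module -- this is not micronumpy code:)

    import struct

    # the value 1 as a 2-byte signed integer, little- vs big-endian
    assert struct.pack('<h', 1) == b'\x01\x00'
    assert struct.pack('>h', 1) == b'\x00\x01'

    # reading the same two bytes with the other byte order swaps the value
    assert struct.unpack('<h', b'\x01\x00')[0] == 1     # 0x0001
    assert struct.unpack('>h', b'\x01\x00')[0] == 256   # 0x0100

This is the same relationship that test_non_native and test_tostring in the earlier changesets check for a non-native 'i2' array against the native one.
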
diff --git a/pypy/module/micronumpy/types.py b/pypy/module/micronumpy/types.py --- a/pypy/module/micronumpy/types.py +++ b/pypy/module/micronumpy/types.py @@ -543,11 +543,13 @@ for name, tp in globals().items(): if isinstance(tp, type) and issubclass(tp, BaseType): - class NonNative(NonNativePrimitive, tp): + class NonNative(BaseType): pass - #for item, v in NonNativePrimitive.__dict__.items(): - # if not item.startswith('__'): - # setattr(NonNative, item, func_with_new_name(v, item)) + NonNative.__bases__ = ((BaseType, NonNativePrimitive) + + tp.__bases__[1:]) + for item, v in tp.__dict__.items(): + if not item.startswith('__'): + setattr(NonNative, item, v) NonNative.__name__ = 'NonNative' + name globals()[NonNative.__name__] = NonNative From noreply at buildbot.pypy.org Tue Feb 7 13:13:59 2012 From: noreply at buildbot.pypy.org (fijal) Date: Tue, 7 Feb 2012 13:13:59 +0100 (CET) Subject: [pypy-commit] pypy numpy-record-dtypes: another spec Message-ID: <20120207121359.85EB382CE3@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-record-dtypes Changeset: r52171:f6706a32e053 Date: 2012-02-07 13:49 +0200 http://bitbucket.org/pypy/pypy/changeset/f6706a32e053/ Log: another spec diff --git a/pypy/module/micronumpy/types.py b/pypy/module/micronumpy/types.py --- a/pypy/module/micronumpy/types.py +++ b/pypy/module/micronumpy/types.py @@ -71,7 +71,7 @@ def get_element_size(self): return rffi.sizeof(self.T) - @specialize.argtype(1) + @specialize.argtype(0) def box(self, value): return self.BoxType(rffi.cast(self.T, value)) From noreply at buildbot.pypy.org Tue Feb 7 13:14:00 2012 From: noreply at buildbot.pypy.org (fijal) Date: Tue, 7 Feb 2012 13:14:00 +0100 (CET) Subject: [pypy-commit] pypy numpy-record-dtypes: screw metaprogramming Message-ID: <20120207121400.BA4C282CE3@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-record-dtypes Changeset: r52172:08d73b9c6a04 Date: 2012-02-07 14:02 +0200 http://bitbucket.org/pypy/pypy/changeset/08d73b9c6a04/ Log: screw metaprogramming diff --git a/pypy/module/micronumpy/types.py b/pypy/module/micronumpy/types.py --- a/pypy/module/micronumpy/types.py +++ b/pypy/module/micronumpy/types.py @@ -7,10 +7,10 @@ from pypy.objspace.std.floatobject import float2string from pypy.rlib import rfloat, libffi, clibffi from pypy.rlib.objectmodel import specialize -from pypy.rlib.rarithmetic import LONG_BIT, widen, byteswap +from pypy.rlib.rarithmetic import widen, byteswap from pypy.rpython.lltypesystem import lltype, rffi from pypy.rlib.rstruct.runpack import runpack - +from pypy.tool.sourcetools import func_with_new_name def simple_unary_op(func): specialize.argtype(1)(func) @@ -71,7 +71,7 @@ def get_element_size(self): return rffi.sizeof(self.T) - @specialize.argtype(0) + @specialize.argtype(1) def box(self, value): return self.BoxType(rffi.cast(self.T, value)) @@ -272,6 +272,8 @@ def invert(self, v): return ~v +NonNativeBool = Bool + class Integer(Primitive): _mixin_ = True @@ -336,64 +338,110 @@ T = rffi.SIGNEDCHAR BoxType = interp_boxes.W_Int8Box format_code = "b" +NonNativeInt8 = Int8 class UInt8(BaseType, Integer): T = rffi.UCHAR BoxType = interp_boxes.W_UInt8Box format_code = "B" +NonNativeUInt8 = UInt8 class Int16(BaseType, Integer): T = rffi.SHORT BoxType = interp_boxes.W_Int16Box format_code = "h" +class NonNativeInt16(BaseType, NonNativePrimitive, Integer): + T = rffi.SHORT + BoxType = interp_boxes.W_Int16Box + format_code = "h" + class UInt16(BaseType, Integer): T = rffi.USHORT BoxType = interp_boxes.W_UInt16Box 
format_code = "H" +class NonNativeUInt16(BaseType, NonNativePrimitive, Integer): + T = rffi.USHORT + BoxType = interp_boxes.W_UInt16Box + format_code = "H" + class Int32(BaseType, Integer): T = rffi.INT BoxType = interp_boxes.W_Int32Box format_code = "i" +class NonNativeInt32(BaseType, NonNativePrimitive, Integer): + T = rffi.INT + BoxType = interp_boxes.W_Int32Box + format_code = "i" + class UInt32(BaseType, Integer): T = rffi.UINT BoxType = interp_boxes.W_UInt32Box format_code = "I" +class NonNativeUInt32(BaseType, NonNativePrimitive, Integer): + T = rffi.UINT + BoxType = interp_boxes.W_UInt32Box + format_code = "I" + class Long(BaseType, Integer): T = rffi.LONG BoxType = interp_boxes.W_LongBox format_code = "l" +class NonNativeLong(BaseType, NonNativePrimitive, Integer): + T = rffi.LONG + BoxType = interp_boxes.W_LongBox + format_code = "l" + class ULong(BaseType, Integer): T = rffi.ULONG BoxType = interp_boxes.W_ULongBox format_code = "L" +class NonNativeULong(BaseType, NonNativePrimitive, Integer): + T = rffi.ULONG + BoxType = interp_boxes.W_ULongBox + format_code = "L" + class Int64(BaseType, Integer): T = rffi.LONGLONG BoxType = interp_boxes.W_Int64Box format_code = "q" +class NonNativeInt64(BaseType, NonNativePrimitive, Integer): + T = rffi.LONGLONG + BoxType = interp_boxes.W_Int64Box + format_code = "q" + +def _uint64_coerce(self, space, w_item): + try: + return Integer._coerce(self, space, w_item) + except OperationError, e: + if not e.match(space, space.w_OverflowError): + raise + bigint = space.bigint_w(w_item) + try: + value = bigint.toulonglong() + except OverflowError: + raise OperationError(space.w_OverflowError, space.w_None) + return self.box(value) + class UInt64(BaseType, Integer): T = rffi.ULONGLONG BoxType = interp_boxes.W_UInt64Box format_code = "Q" - def _coerce(self, space, w_item): - try: - return Integer._coerce(self, space, w_item) - except OperationError, e: - if not e.match(space, space.w_OverflowError): - raise - bigint = space.bigint_w(w_item) - try: - value = bigint.toulonglong() - except OverflowError: - raise OperationError(space.w_OverflowError, space.w_None) - return self.box(value) + _coerce = func_with_new_name(_uint64_coerce, '_coerce') + +class NonNativeUInt64(BaseType, NonNativePrimitive, Integer): + T = rffi.ULONGLONG + BoxType = interp_boxes.W_UInt64Box + format_code = "Q" + + _coerce = func_with_new_name(_uint64_coerce, '_coerce') class Float(Primitive): _mixin_ = True @@ -523,11 +571,21 @@ BoxType = interp_boxes.W_Float32Box format_code = "f" +class NonNativeFloat32(BaseType, NonNativePrimitive, Float): + T = rffi.FLOAT + BoxType = interp_boxes.W_Float32Box + format_code = "f" + class Float64(BaseType, Float): T = rffi.DOUBLE BoxType = interp_boxes.W_Float64Box format_code = "d" +class NonNativeFloat64(BaseType, NonNativePrimitive, Float): + T = rffi.DOUBLE + BoxType = interp_boxes.W_Float64Box + format_code = "d" + for tp in [Int32, Int64]: if tp.T == lltype.Signed: IntP = tp @@ -537,21 +595,3 @@ UIntP = tp break del tp - -def _setup(): - #from pypy.tool.sourcetools import func_with_new_name - - for name, tp in globals().items(): - if isinstance(tp, type) and issubclass(tp, BaseType): - class NonNative(BaseType): - pass - NonNative.__bases__ = ((BaseType, NonNativePrimitive) + - tp.__bases__[1:]) - for item, v in tp.__dict__.items(): - if not item.startswith('__'): - setattr(NonNative, item, v) - NonNative.__name__ = 'NonNative' + name - globals()[NonNative.__name__] = NonNative - -_setup() -del _setup From noreply at buildbot.pypy.org Tue Feb 
7 13:14:01 2012 From: noreply at buildbot.pypy.org (fijal) Date: Tue, 7 Feb 2012 13:14:01 +0100 (CET) Subject: [pypy-commit] pypy numpy-record-dtypes: replace nice solution with ugly-but-rpython one. Message-ID: <20120207121401.F346F82CE3@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-record-dtypes Changeset: r52173:fe098533af7b Date: 2012-02-07 14:13 +0200 http://bitbucket.org/pypy/pypy/changeset/fe098533af7b/ Log: replace nice solution with ugly-but-rpython one. diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -817,6 +817,9 @@ self.s = StringBuilder(child.size * self.itemsize) Call1.__init__(self, None, 'tostring', child.shape, dtype, dtype, child) + self.res = W_NDimArray(1, [1], dtype, 'C') + self.res_casted = rffi.cast(rffi.CArrayPtr(lltype.Char), + self.res.storage) def create_sig(self): return signature.ToStringSignature(self.calc_dtype, diff --git a/pypy/module/micronumpy/signature.py b/pypy/module/micronumpy/signature.py --- a/pypy/module/micronumpy/signature.py +++ b/pypy/module/micronumpy/signature.py @@ -326,8 +326,10 @@ from pypy.module.micronumpy.interp_numarray import ToStringArray assert isinstance(arr, ToStringArray) - arr.s.append(self.dtype.itemtype.pack_str( - self.child.eval(frame, arr.values))) + arr.res.setitem(0, self.child.eval(frame, arr.values).convert_to( + self.dtype)) + for i in range(arr.itemsize): + arr.s.append(arr.res_casted[i]) class BroadcastLeft(Call2): def _invent_numbering(self, cache, allnumbers): diff --git a/pypy/rlib/rstruct/runpack.py b/pypy/rlib/rstruct/runpack.py --- a/pypy/rlib/rstruct/runpack.py +++ b/pypy/rlib/rstruct/runpack.py @@ -4,11 +4,10 @@ """ import py -from struct import pack, unpack +from struct import unpack from pypy.rlib.rstruct.formatiterator import FormatIterator from pypy.rlib.rstruct.error import StructError from pypy.rlib.rstruct.nativefmttable import native_is_bigendian -from pypy.rpython.extregistry import ExtRegistryEntry class MasterReader(object): def __init__(self, s): From noreply at buildbot.pypy.org Tue Feb 7 13:42:10 2012 From: noreply at buildbot.pypy.org (fijal) Date: Tue, 7 Feb 2012 13:42:10 +0100 (CET) Subject: [pypy-commit] pypy numpy-record-dtypes: yet another approach Message-ID: <20120207124210.B903682CE3@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-record-dtypes Changeset: r52174:5939d34d9123 Date: 2012-02-07 14:21 +0200 http://bitbucket.org/pypy/pypy/changeset/5939d34d9123/ Log: yet another approach diff --git a/pypy/module/micronumpy/types.py b/pypy/module/micronumpy/types.py --- a/pypy/module/micronumpy/types.py +++ b/pypy/module/micronumpy/types.py @@ -334,6 +334,9 @@ def invert(self, v): return ~v +class NonNativeInteger(NonNativePrimitive, Integer): + _mixin_ = True + class Int8(BaseType, Integer): T = rffi.SIGNEDCHAR BoxType = interp_boxes.W_Int8Box @@ -351,7 +354,7 @@ BoxType = interp_boxes.W_Int16Box format_code = "h" -class NonNativeInt16(BaseType, NonNativePrimitive, Integer): +class NonNativeInt16(BaseType, NonNativeInteger): T = rffi.SHORT BoxType = interp_boxes.W_Int16Box format_code = "h" @@ -361,7 +364,7 @@ BoxType = interp_boxes.W_UInt16Box format_code = "H" -class NonNativeUInt16(BaseType, NonNativePrimitive, Integer): +class NonNativeUInt16(BaseType, NonNativeInteger): T = rffi.USHORT BoxType = interp_boxes.W_UInt16Box format_code = "H" @@ -371,7 +374,7 @@ BoxType = interp_boxes.W_Int32Box 
format_code = "i" -class NonNativeInt32(BaseType, NonNativePrimitive, Integer): +class NonNativeInt32(BaseType, NonNativeInteger): T = rffi.INT BoxType = interp_boxes.W_Int32Box format_code = "i" @@ -381,7 +384,7 @@ BoxType = interp_boxes.W_UInt32Box format_code = "I" -class NonNativeUInt32(BaseType, NonNativePrimitive, Integer): +class NonNativeUInt32(BaseType, NonNativeInteger): T = rffi.UINT BoxType = interp_boxes.W_UInt32Box format_code = "I" @@ -391,7 +394,7 @@ BoxType = interp_boxes.W_LongBox format_code = "l" -class NonNativeLong(BaseType, NonNativePrimitive, Integer): +class NonNativeLong(BaseType, NonNativeInteger): T = rffi.LONG BoxType = interp_boxes.W_LongBox format_code = "l" @@ -401,7 +404,7 @@ BoxType = interp_boxes.W_ULongBox format_code = "L" -class NonNativeULong(BaseType, NonNativePrimitive, Integer): +class NonNativeULong(BaseType, NonNativeInteger): T = rffi.ULONG BoxType = interp_boxes.W_ULongBox format_code = "L" @@ -411,37 +414,46 @@ BoxType = interp_boxes.W_Int64Box format_code = "q" -class NonNativeInt64(BaseType, NonNativePrimitive, Integer): +class NonNativeInt64(BaseType, NonNativeInteger): T = rffi.LONGLONG BoxType = interp_boxes.W_Int64Box format_code = "q" -def _uint64_coerce(self, space, w_item): - try: - return Integer._coerce(self, space, w_item) - except OperationError, e: - if not e.match(space, space.w_OverflowError): - raise - bigint = space.bigint_w(w_item) - try: - value = bigint.toulonglong() - except OverflowError: - raise OperationError(space.w_OverflowError, space.w_None) - return self.box(value) - class UInt64(BaseType, Integer): T = rffi.ULONGLONG BoxType = interp_boxes.W_UInt64Box format_code = "Q" - _coerce = func_with_new_name(_uint64_coerce, '_coerce') + def _coerce(self, space, w_item): + try: + return Integer._coerce(self, space, w_item) + except OperationError, e: + if not e.match(space, space.w_OverflowError): + raise + bigint = space.bigint_w(w_item) + try: + value = bigint.toulonglong() + except OverflowError: + raise OperationError(space.w_OverflowError, space.w_None) + return self.box(value) -class NonNativeUInt64(BaseType, NonNativePrimitive, Integer): +class NonNativeUInt64(BaseType, NonNativeInteger): T = rffi.ULONGLONG BoxType = interp_boxes.W_UInt64Box format_code = "Q" - _coerce = func_with_new_name(_uint64_coerce, '_coerce') + def _coerce(self, space, w_item): + try: + return NonNativeInteger._coerce(self, space, w_item) + except OperationError, e: + if not e.match(space, space.w_OverflowError): + raise + bigint = space.bigint_w(w_item) + try: + value = bigint.toulonglong() + except OverflowError: + raise OperationError(space.w_OverflowError, space.w_None) + return self.box(value) class Float(Primitive): _mixin_ = True @@ -566,12 +578,15 @@ def isinf(self, v): return rfloat.isinf(v) +class NonNativeFloat(NonNativePrimitive, Float): + _mixin_ = True + class Float32(BaseType, Float): T = rffi.FLOAT BoxType = interp_boxes.W_Float32Box format_code = "f" -class NonNativeFloat32(BaseType, NonNativePrimitive, Float): +class NonNativeFloat32(BaseType, NonNativeFloat): T = rffi.FLOAT BoxType = interp_boxes.W_Float32Box format_code = "f" @@ -581,7 +596,7 @@ BoxType = interp_boxes.W_Float64Box format_code = "d" -class NonNativeFloat64(BaseType, NonNativePrimitive, Float): +class NonNativeFloat64(BaseType, NonNativeFloat): T = rffi.DOUBLE BoxType = interp_boxes.W_Float64Box format_code = "d" From noreply at buildbot.pypy.org Tue Feb 7 13:58:31 2012 From: noreply at buildbot.pypy.org (arigo) Date: Tue, 7 Feb 2012 13:58:31 +0100 (CET) 
Subject: [pypy-commit] pypy stm-gc: Move the comment. Message-ID: <20120207125831.E28A982CE3@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-gc Changeset: r52175:e44e73b43895 Date: 2012-02-07 13:35 +0100 http://bitbucket.org/pypy/pypy/changeset/e44e73b43895/ Log: Move the comment. diff --git a/pypy/rpython/memory/gc/stmgc.py b/pypy/rpython/memory/gc/stmgc.py --- a/pypy/rpython/memory/gc/stmgc.py +++ b/pypy/rpython/memory/gc/stmgc.py @@ -221,9 +221,8 @@ @always_inline def reader(obj, offset): if self.header(obj).tid & GCFLAG_GLOBAL == 0: - # local obj: read directly adr = rffi.cast(PTYPE, obj + offset) - return adr[0] + return adr[0] # local obj: read directly else: return stm_read_int(obj, offset) # else: call a helper setattr(self, 'read_int%d' % size, reader) From noreply at buildbot.pypy.org Tue Feb 7 13:58:33 2012 From: noreply at buildbot.pypy.org (arigo) Date: Tue, 7 Feb 2012 13:58:33 +0100 (CET) Subject: [pypy-commit] pypy default: Clean-up. Message-ID: <20120207125833.EF82782CE3@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r52176:7aad58bd9b2b Date: 2012-02-07 13:58 +0100 http://bitbucket.org/pypy/pypy/changeset/7aad58bd9b2b/ Log: Clean-up. diff --git a/pypy/translator/c/gc.py b/pypy/translator/c/gc.py --- a/pypy/translator/c/gc.py +++ b/pypy/translator/c/gc.py @@ -11,7 +11,6 @@ from pypy.translator.tool.cbuild import ExternalCompilationInfo class BasicGcPolicy(object): - stores_hash_at_the_end = False def __init__(self, db, thread_enabled=False): self.db = db @@ -308,7 +307,6 @@ class FrameworkGcPolicy(BasicGcPolicy): transformerclass = framework.FrameworkGCTransformer - stores_hash_at_the_end = True def struct_setup(self, structdefnode, rtti): if rtti is not None and hasattr(rtti._obj, 'destructor_funcptr'): From noreply at buildbot.pypy.org Tue Feb 7 13:59:40 2012 From: noreply at buildbot.pypy.org (fijal) Date: Tue, 7 Feb 2012 13:59:40 +0100 (CET) Subject: [pypy-commit] pypy numpy-record-dtypes: try to cleanup Message-ID: <20120207125940.1854E82CE3@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-record-dtypes Changeset: r52177:fa972c43ce13 Date: 2012-02-07 14:53 +0200 http://bitbucket.org/pypy/pypy/changeset/fa972c43ce13/ Log: try to cleanup diff --git a/pypy/module/micronumpy/types.py b/pypy/module/micronumpy/types.py --- a/pypy/module/micronumpy/types.py +++ b/pypy/module/micronumpy/types.py @@ -277,8 +277,10 @@ class Integer(Primitive): _mixin_ = True + def _base_coerce(self, space, w_item): + return self.box(space.int_w(space.call_function(space.w_int, w_item))) def _coerce(self, space, w_item): - return self.box(space.int_w(space.call_function(space.w_int, w_item))) + return self._base_coerce(space, w_item) def str_format(self, box): value = self.unbox(box) @@ -419,41 +421,32 @@ BoxType = interp_boxes.W_Int64Box format_code = "q" +def _uint64_coerce(self, space, w_item): + try: + return self._base_coerce(self, space, w_item) + except OperationError, e: + if not e.match(space, space.w_OverflowError): + raise + bigint = space.bigint_w(w_item) + try: + value = bigint.toulonglong() + except OverflowError: + raise OperationError(space.w_OverflowError, space.w_None) + return self.box(value) + class UInt64(BaseType, Integer): T = rffi.ULONGLONG BoxType = interp_boxes.W_UInt64Box format_code = "Q" - def _coerce(self, space, w_item): - try: - return Integer._coerce(self, space, w_item) - except OperationError, e: - if not e.match(space, space.w_OverflowError): - raise - bigint = space.bigint_w(w_item) - try: - value = 
bigint.toulonglong() - except OverflowError: - raise OperationError(space.w_OverflowError, space.w_None) - return self.box(value) + _coerce = func_with_new_name(_uint64_coerce, '_coerce') class NonNativeUInt64(BaseType, NonNativeInteger): T = rffi.ULONGLONG BoxType = interp_boxes.W_UInt64Box format_code = "Q" - - def _coerce(self, space, w_item): - try: - return NonNativeInteger._coerce(self, space, w_item) - except OperationError, e: - if not e.match(space, space.w_OverflowError): - raise - bigint = space.bigint_w(w_item) - try: - value = bigint.toulonglong() - except OverflowError: - raise OperationError(space.w_OverflowError, space.w_None) - return self.box(value) + + _coerce = func_with_new_name(_uint64_coerce, '_coerce') class Float(Primitive): _mixin_ = True From noreply at buildbot.pypy.org Tue Feb 7 13:59:42 2012 From: noreply at buildbot.pypy.org (fijal) Date: Tue, 7 Feb 2012 13:59:42 +0100 (CET) Subject: [pypy-commit] pypy numpy-record-dtypes: oops Message-ID: <20120207125942.7B55B82CE3@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-record-dtypes Changeset: r52178:1ff9727e806a Date: 2012-02-07 14:55 +0200 http://bitbucket.org/pypy/pypy/changeset/1ff9727e806a/ Log: oops diff --git a/pypy/module/micronumpy/types.py b/pypy/module/micronumpy/types.py --- a/pypy/module/micronumpy/types.py +++ b/pypy/module/micronumpy/types.py @@ -423,7 +423,7 @@ def _uint64_coerce(self, space, w_item): try: - return self._base_coerce(self, space, w_item) + return self._base_coerce(space, w_item) except OperationError, e: if not e.match(space, space.w_OverflowError): raise From noreply at buildbot.pypy.org Tue Feb 7 14:03:36 2012 From: noreply at buildbot.pypy.org (fijal) Date: Tue, 7 Feb 2012 14:03:36 +0100 (CET) Subject: [pypy-commit] pypy numpy-record-dtypes: fix test_zjit Message-ID: <20120207130336.2C55D82CE3@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-record-dtypes Changeset: r52179:0ac19b63bfdc Date: 2012-02-07 15:03 +0200 http://bitbucket.org/pypy/pypy/changeset/0ac19b63bfdc/ Log: fix test_zjit diff --git a/pypy/module/micronumpy/compile.py b/pypy/module/micronumpy/compile.py --- a/pypy/module/micronumpy/compile.py +++ b/pypy/module/micronumpy/compile.py @@ -33,7 +33,7 @@ pass SINGLE_ARG_FUNCTIONS = ["sum", "prod", "max", "min", "all", "any", - "unegative", "flat"] + "unegative", "flat", "tostring"] TWO_ARG_FUNCTIONS = ["dot", 'take'] class FakeSpace(object): @@ -407,6 +407,9 @@ w_res = neg.call(interp.space, [arr]) elif self.name == "flat": w_res = arr.descr_get_flatiter(interp.space) + elif self.name == "tostring": + arr.descr_tostring(interp.space) + w_res = None else: assert False # unreachable code elif self.name in TWO_ARG_FUNCTIONS: From noreply at buildbot.pypy.org Tue Feb 7 14:04:56 2012 From: noreply at buildbot.pypy.org (arigo) Date: Tue, 7 Feb 2012 14:04:56 +0100 (CET) Subject: [pypy-commit] pypy default: typos Message-ID: <20120207130456.E009782CE3@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r52180:c1cab8967442 Date: 2012-02-07 14:04 +0100 http://bitbucket.org/pypy/pypy/changeset/c1cab8967442/ Log: typos diff --git a/pypy/doc/release-1.8.0.rst b/pypy/doc/release-1.8.0.rst --- a/pypy/doc/release-1.8.0.rst +++ b/pypy/doc/release-1.8.0.rst @@ -34,13 +34,13 @@ strategies for unicode and string lists. * As usual, numerous performance improvements. There are too many examples - which python constructs now should behave faster to list them. + of python constructs that now should behave faster to list them. 
* Bugfixes and compatibility fixes with CPython. * Windows fixes. -* NumPy effort progress, for the exact list of things that have been done, +* NumPy effort progress; for the exact list of things that have been done, consult the `numpy status page`_. A tentative list of things that has been done: From noreply at buildbot.pypy.org Tue Feb 7 14:16:00 2012 From: noreply at buildbot.pypy.org (stefanor) Date: Tue, 7 Feb 2012 14:16:00 +0100 (CET) Subject: [pypy-commit] pypy default: Bump version in Sphinx docs Message-ID: <20120207131600.488537107FA@wyvern.cs.uni-duesseldorf.de> Author: Stefano Rivera Branch: Changeset: r52181:03880c16bed8 Date: 2012-02-07 15:15 +0200 http://bitbucket.org/pypy/pypy/changeset/03880c16bed8/ Log: Bump version in Sphinx docs diff --git a/pypy/doc/conf.py b/pypy/doc/conf.py --- a/pypy/doc/conf.py +++ b/pypy/doc/conf.py @@ -45,9 +45,9 @@ # built documents. # # The short X.Y version. -version = '1.7' +version = '1.8' # The full version, including alpha/beta/rc tags. -release = '1.7' +release = '1.8' # The language for content autogenerated by Sphinx. Refer to documentation # for a list of supported languages. From noreply at buildbot.pypy.org Tue Feb 7 14:50:56 2012 From: noreply at buildbot.pypy.org (fijal) Date: Tue, 7 Feb 2012 14:50:56 +0100 (CET) Subject: [pypy-commit] pypy numpy-record-dtypes: shuffle stuff around and implement alignment Message-ID: <20120207135056.1EE057107FA@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-record-dtypes Changeset: r52182:273d1c92691e Date: 2012-02-07 15:50 +0200 http://bitbucket.org/pypy/pypy/changeset/273d1c92691e/ Log: shuffle stuff around and implement alignment diff --git a/pypy/module/micronumpy/interp_dtype.py b/pypy/module/micronumpy/interp_dtype.py --- a/pypy/module/micronumpy/interp_dtype.py +++ b/pypy/module/micronumpy/interp_dtype.py @@ -60,29 +60,6 @@ def fill(self, storage, box, start, stop): self.itemtype.fill(storage, self.itemtype.get_element_size(), box, start, stop, 0) - def descr__new__(space, w_subtype, w_dtype): - cache = get_dtype_cache(space) - - if space.is_w(w_dtype, space.w_None): - return cache.w_float64dtype - elif space.isinstance_w(w_dtype, w_subtype): - return w_dtype - elif space.isinstance_w(w_dtype, space.w_str): - name = space.str_w(w_dtype) - try: - return cache.dtypes_by_name[name] - except KeyError: - pass - elif space.isinstance_w(w_dtype, space.w_list): - raise NotImplementedError - else: - for dtype in cache.builtin_dtypes: - if w_dtype in dtype.alternate_constructors: - return dtype - if w_dtype is dtype.w_box_type: - return dtype - raise OperationError(space.w_TypeError, space.wrap("data type not understood")) - def descr_str(self, space): return space.wrap(self.name) @@ -92,6 +69,9 @@ def descr_get_itemsize(self, space): return space.wrap(self.itemtype.get_element_size()) + def descr_get_alignment(self, space): + return space.wrap(self.itemtype.alignment) + def descr_get_shape(self, space): return space.newtuple([]) @@ -112,9 +92,38 @@ def is_bool_type(self): return self.kind == BOOLLTR +def dtype_from_list(space, w_lst): + lst_w = space.listview(w_lst) + fieldlist = [] + for w_elem in lst_w: + fldname, flddesc = space.fixedview(w_elem, 2) + +def descr__new__(space, w_subtype, w_dtype): + cache = get_dtype_cache(space) + + if space.is_w(w_dtype, space.w_None): + return cache.w_float64dtype + elif space.isinstance_w(w_dtype, w_subtype): + return w_dtype + elif space.isinstance_w(w_dtype, space.w_str): + name = space.str_w(w_dtype) + try: + return 
cache.dtypes_by_name[name] + except KeyError: + pass + elif space.isinstance_w(w_dtype, space.w_list): + return dtype_from_list(space, w_dtype) + else: + for dtype in cache.builtin_dtypes: + if w_dtype in dtype.alternate_constructors: + return dtype + if w_dtype is dtype.w_box_type: + return dtype + raise OperationError(space.w_TypeError, space.wrap("data type not understood")) + W_Dtype.typedef = TypeDef("dtype", __module__ = "numpypy", - __new__ = interp2app(W_Dtype.descr__new__.im_func), + __new__ = interp2app(descr__new__), __str__= interp2app(W_Dtype.descr_str), __repr__ = interp2app(W_Dtype.descr_repr), @@ -125,6 +134,7 @@ kind = interp_attrproperty("kind", cls=W_Dtype), type = interp_attrproperty_w("w_box_type", cls=W_Dtype), itemsize = GetSetProperty(W_Dtype.descr_get_itemsize), + alignment = GetSetProperty(W_Dtype.descr_get_alignment), shape = GetSetProperty(W_Dtype.descr_get_shape), name = interp_attrproperty('name', cls=W_Dtype), ) diff --git a/pypy/module/micronumpy/test/test_dtypes.py b/pypy/module/micronumpy/test/test_dtypes.py --- a/pypy/module/micronumpy/test/test_dtypes.py +++ b/pypy/module/micronumpy/test/test_dtypes.py @@ -450,3 +450,7 @@ assert a[0] == 1 assert (a + a)[1] == 4 self.check_non_native(a, array([1, 2, 3], 'i2')) + + def test_alignment(self): + from _numpypy import dtype + assert dtype('i4').alignment == 4 diff --git a/pypy/module/micronumpy/types.py b/pypy/module/micronumpy/types.py --- a/pypy/module/micronumpy/types.py +++ b/pypy/module/micronumpy/types.py @@ -603,3 +603,11 @@ UIntP = tp break del tp + +def _setup(): + # compute alignment + for tp in globals().values(): + if isinstance(tp, type) and hasattr(tp, 'T'): + tp.alignment = clibffi.cast_type_to_ffitype(tp.T).c_alignment +_setup() +del _setup From noreply at buildbot.pypy.org Tue Feb 7 15:10:32 2012 From: noreply at buildbot.pypy.org (arigo) Date: Tue, 7 Feb 2012 15:10:32 +0100 (CET) Subject: [pypy-commit] pypy stm-gc: hg merge default Message-ID: <20120207141032.375037107FA@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-gc Changeset: r52183:60e760a17c2c Date: 2012-02-07 14:18 +0100 http://bitbucket.org/pypy/pypy/changeset/60e760a17c2c/ Log: hg merge default diff --git a/pypy/annotation/annrpython.py b/pypy/annotation/annrpython.py --- a/pypy/annotation/annrpython.py +++ b/pypy/annotation/annrpython.py @@ -93,6 +93,10 @@ # make input arguments and set their type args_s = [self.typeannotation(t) for t in input_arg_types] + # XXX hack + annmodel.TLS.check_str_without_nul = ( + self.translator.config.translation.check_str_without_nul) + flowgraph, inputcells = self.get_call_parameters(function, args_s, policy) if not isinstance(flowgraph, FunctionGraph): assert isinstance(flowgraph, annmodel.SomeObject) diff --git a/pypy/annotation/binaryop.py b/pypy/annotation/binaryop.py --- a/pypy/annotation/binaryop.py +++ b/pypy/annotation/binaryop.py @@ -434,11 +434,13 @@ class __extend__(pairtype(SomeString, SomeString)): def union((str1, str2)): - return SomeString(can_be_None=str1.can_be_None or str2.can_be_None) + can_be_None = str1.can_be_None or str2.can_be_None + no_nul = str1.no_nul and str2.no_nul + return SomeString(can_be_None=can_be_None, no_nul=no_nul) def add((str1, str2)): # propagate const-ness to help getattr(obj, 'prefix' + const_name) - result = SomeString() + result = SomeString(no_nul=str1.no_nul and str2.no_nul) if str1.is_immutable_constant() and str2.is_immutable_constant(): result.const = str1.const + str2.const return result @@ -475,7 +477,16 @@ raise NotImplementedError( 
"string formatting mixing strings and unicode not supported") getbookkeeper().count('strformat', str, s_tuple) - return SomeString() + no_nul = str.no_nul + for s_item in s_tuple.items: + if isinstance(s_item, SomeFloat): + pass # or s_item is a subclass, like SomeInteger + elif isinstance(s_item, SomeString) and s_item.no_nul: + pass + else: + no_nul = False + break + return SomeString(no_nul=no_nul) class __extend__(pairtype(SomeString, SomeObject)): @@ -828,7 +839,7 @@ exec source.compile() in glob _make_none_union('SomeInstance', 'classdef=obj.classdef, can_be_None=True') -_make_none_union('SomeString', 'can_be_None=True') +_make_none_union('SomeString', 'no_nul=obj.no_nul, can_be_None=True') _make_none_union('SomeUnicodeString', 'can_be_None=True') _make_none_union('SomeList', 'obj.listdef') _make_none_union('SomeDict', 'obj.dictdef') diff --git a/pypy/annotation/bookkeeper.py b/pypy/annotation/bookkeeper.py --- a/pypy/annotation/bookkeeper.py +++ b/pypy/annotation/bookkeeper.py @@ -342,10 +342,11 @@ else: raise Exception("seeing a prebuilt long (value %s)" % hex(x)) elif issubclass(tp, str): # py.lib uses annotated str subclasses + no_nul = not '\x00' in x if len(x) == 1: - result = SomeChar() + result = SomeChar(no_nul=no_nul) else: - result = SomeString() + result = SomeString(no_nul=no_nul) elif tp is unicode: if len(x) == 1: result = SomeUnicodeCodePoint() diff --git a/pypy/annotation/listdef.py b/pypy/annotation/listdef.py --- a/pypy/annotation/listdef.py +++ b/pypy/annotation/listdef.py @@ -86,18 +86,19 @@ read_locations = self.read_locations.copy() other_read_locations = other.read_locations.copy() self.read_locations.update(other.read_locations) - self.patch() # which should patch all refs to 'other' s_value = self.s_value s_other_value = other.s_value s_new_value = unionof(s_value, s_other_value) + if s_new_value != s_value: + if self.dont_change_any_more: + raise TooLateForChange if isdegenerated(s_new_value): if self.bookkeeper: self.bookkeeper.ondegenerated(self, s_new_value) elif other.bookkeeper: other.bookkeeper.ondegenerated(other, s_new_value) + self.patch() # which should patch all refs to 'other' if s_new_value != s_value: - if self.dont_change_any_more: - raise TooLateForChange self.s_value = s_new_value # reflow from reading points for position_key in read_locations: @@ -222,4 +223,5 @@ MOST_GENERAL_LISTDEF = ListDef(None, SomeObject()) -s_list_of_strings = SomeList(ListDef(None, SomeString(), resized = True)) +s_list_of_strings = SomeList(ListDef(None, SomeString(no_nul=True), + resized = True)) diff --git a/pypy/annotation/model.py b/pypy/annotation/model.py --- a/pypy/annotation/model.py +++ b/pypy/annotation/model.py @@ -39,7 +39,9 @@ DEBUG = False # set to False to disable recording of debugging information class State(object): - pass + # A global attribute :-( Patch it with 'True' to enable checking of + # the no_nul attribute... + check_str_without_nul = False TLS = State() class SomeObject(object): @@ -225,43 +227,57 @@ def __init__(self): pass -class SomeString(SomeObject): - "Stands for an object which is known to be a string." - knowntype = str +class SomeStringOrUnicode(SomeObject): immutable = True - def __init__(self, can_be_None=False): - self.can_be_None = can_be_None + can_be_None=False + no_nul = False # No NUL character in the string. 
+ + def __init__(self, can_be_None=False, no_nul=False): + if can_be_None: + self.can_be_None = True + if no_nul: + self.no_nul = True def can_be_none(self): return self.can_be_None + def __eq__(self, other): + if self.__class__ is not other.__class__: + return False + d1 = self.__dict__ + d2 = other.__dict__ + if not TLS.check_str_without_nul: + d1 = d1.copy(); d1['no_nul'] = 0 # ignored + d2 = d2.copy(); d2['no_nul'] = 0 # ignored + return d1 == d2 + +class SomeString(SomeStringOrUnicode): + "Stands for an object which is known to be a string." + knowntype = str + def nonnoneify(self): - return SomeString(can_be_None=False) + return SomeString(can_be_None=False, no_nul=self.no_nul) -class SomeUnicodeString(SomeObject): +class SomeUnicodeString(SomeStringOrUnicode): "Stands for an object which is known to be an unicode string" knowntype = unicode - immutable = True - def __init__(self, can_be_None=False): - self.can_be_None = can_be_None - - def can_be_none(self): - return self.can_be_None def nonnoneify(self): - return SomeUnicodeString(can_be_None=False) + return SomeUnicodeString(can_be_None=False, no_nul=self.no_nul) class SomeChar(SomeString): "Stands for an object known to be a string of length 1." can_be_None = False - def __init__(self): # no 'can_be_None' argument here - pass + def __init__(self, no_nul=False): # no 'can_be_None' argument here + if no_nul: + self.no_nul = True class SomeUnicodeCodePoint(SomeUnicodeString): "Stands for an object known to be a unicode codepoint." can_be_None = False - def __init__(self): # no 'can_be_None' argument here - pass + def __init__(self, no_nul=False): # no 'can_be_None' argument here + if no_nul: + self.no_nul = True SomeString.basestringclass = SomeString SomeString.basecharclass = SomeChar @@ -502,6 +518,7 @@ s_None = SomePBC([], can_be_None=True) s_Bool = SomeBool() s_ImpossibleValue = SomeImpossibleValue() +s_Str0 = SomeString(no_nul=True) # ____________________________________________________________ # weakrefs @@ -716,8 +733,7 @@ def not_const(s_obj): if s_obj.is_constant(): - new_s_obj = SomeObject() - new_s_obj.__class__ = s_obj.__class__ + new_s_obj = SomeObject.__new__(s_obj.__class__) dic = new_s_obj.__dict__ = s_obj.__dict__.copy() if 'const' in dic: del new_s_obj.const diff --git a/pypy/annotation/test/test_annrpython.py b/pypy/annotation/test/test_annrpython.py --- a/pypy/annotation/test/test_annrpython.py +++ b/pypy/annotation/test/test_annrpython.py @@ -456,6 +456,20 @@ return ''.join(g(n)) s = a.build_types(f, [int]) assert s.knowntype == str + assert s.no_nul + + def test_str_split(self): + a = self.RPythonAnnotator() + def g(n): + if n: + return "test string" + def f(n): + if n: + return g(n).split(' ') + s = a.build_types(f, [int]) + assert isinstance(s, annmodel.SomeList) + s_item = s.listdef.listitem.s_value + assert s_item.no_nul def test_str_splitlines(self): a = self.RPythonAnnotator() @@ -465,6 +479,18 @@ assert isinstance(s, annmodel.SomeList) assert s.listdef.listitem.resized + def test_str_strip(self): + a = self.RPythonAnnotator() + def f(n, a_str): + if n == 0: + return a_str.strip(' ') + elif n == 1: + return a_str.rstrip(' ') + else: + return a_str.lstrip(' ') + s = a.build_types(f, [int, annmodel.SomeString(no_nul=True)]) + assert s.no_nul + def test_str_mul(self): a = self.RPythonAnnotator() def f(a_str): @@ -1841,7 +1867,7 @@ return obj.indirect() a = self.RPythonAnnotator() s = a.build_types(f, [bool]) - assert s == annmodel.SomeString(can_be_None=True) + assert 
annmodel.SomeString(can_be_None=True).contains(s) def test_dont_see_AttributeError_clause(self): class Stuff: @@ -2018,6 +2044,37 @@ s = a.build_types(g, [int]) assert not s.can_be_None + def test_string_noNUL_canbeNone(self): + def f(a): + if a: + return "abc" + else: + return None + a = self.RPythonAnnotator() + s = a.build_types(f, [int]) + assert s.can_be_None + assert s.no_nul + + def test_str_or_None(self): + def f(a): + if a: + return "abc" + else: + return None + def g(a): + x = f(a) + #assert x is not None + if x is None: + return "abcd" + return x + if isinstance(x, str): + return x + return "impossible" + a = self.RPythonAnnotator() + s = a.build_types(f, [int]) + assert s.can_be_None + assert s.no_nul + def test_emulated_pbc_call_simple(self): def f(a,b): return a + b @@ -2071,6 +2128,19 @@ assert isinstance(s, annmodel.SomeIterator) assert s.variant == ('items',) + def test_iteritems_str0(self): + def it(d): + return d.iteritems() + def f(): + d0 = {'1a': '2a', '3': '4'} + for item in it(d0): + return "%s=%s" % item + raise ValueError + a = self.RPythonAnnotator() + s = a.build_types(f, []) + assert isinstance(s, annmodel.SomeString) + assert s.no_nul + def test_non_none_and_none_with_isinstance(self): class A(object): pass diff --git a/pypy/annotation/unaryop.py b/pypy/annotation/unaryop.py --- a/pypy/annotation/unaryop.py +++ b/pypy/annotation/unaryop.py @@ -480,13 +480,13 @@ return SomeInteger(nonneg=True) def method_strip(str, chr): - return str.basestringclass() + return str.basestringclass(no_nul=str.no_nul) def method_lstrip(str, chr): - return str.basestringclass() + return str.basestringclass(no_nul=str.no_nul) def method_rstrip(str, chr): - return str.basestringclass() + return str.basestringclass(no_nul=str.no_nul) def method_join(str, s_list): if s_None.contains(s_list): @@ -497,7 +497,8 @@ if isinstance(str, SomeUnicodeString): return immutablevalue(u"") return immutablevalue("") - return str.basestringclass() + no_nul = str.no_nul and s_item.no_nul + return str.basestringclass(no_nul=no_nul) def iter(str): return SomeIterator(str) @@ -508,18 +509,21 @@ def method_split(str, patt, max=-1): getbookkeeper().count("str_split", str, patt) - return getbookkeeper().newlist(str.basestringclass()) + s_item = str.basestringclass(no_nul=str.no_nul) + return getbookkeeper().newlist(s_item) def method_rsplit(str, patt, max=-1): getbookkeeper().count("str_rsplit", str, patt) - return getbookkeeper().newlist(str.basestringclass()) + s_item = str.basestringclass(no_nul=str.no_nul) + return getbookkeeper().newlist(s_item) def method_replace(str, s1, s2): return str.basestringclass() def getslice(str, s_start, s_stop): check_negative_slice(s_start, s_stop) - return str.basestringclass() + result = str.basestringclass(no_nul=str.no_nul) + return result class __extend__(SomeUnicodeString): def method_encode(uni, s_enc): diff --git a/pypy/config/translationoption.py b/pypy/config/translationoption.py --- a/pypy/config/translationoption.py +++ b/pypy/config/translationoption.py @@ -129,6 +129,9 @@ default="off"), # jit_ffi is automatically turned on by withmod-_ffi (which is enabled by default) BoolOption("jit_ffi", "optimize libffi calls", default=False, cmdline=None), + BoolOption("check_str_without_nul", + "Forbid NUL chars in strings in some external function calls", + default=False, cmdline=None), # misc BoolOption("verbose", "Print extra information", default=False), diff --git a/pypy/doc/Makefile b/pypy/doc/Makefile --- a/pypy/doc/Makefile +++ b/pypy/doc/Makefile @@ -81,6 +81,7 
@@ "run these through (pdf)latex." man: + python config/generate.py $(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man @echo @echo "Build finished. The manual pages are in $(BUILDDIR)/man" diff --git a/pypy/doc/commandline_ref.rst b/pypy/doc/commandline_ref.rst new file mode 100644 --- /dev/null +++ b/pypy/doc/commandline_ref.rst @@ -0,0 +1,10 @@ +Command line reference +====================== + +Manual pages +------------ + +.. toctree:: + :maxdepth: 1 + + man/pypy.1.rst diff --git a/pypy/doc/conf.py b/pypy/doc/conf.py --- a/pypy/doc/conf.py +++ b/pypy/doc/conf.py @@ -45,9 +45,9 @@ # built documents. # # The short X.Y version. -version = '1.7' +version = '1.8' # The full version, including alpha/beta/rc tags. -release = '1.7' +release = '1.8' # The language for content autogenerated by Sphinx. Refer to documentation # for a list of supported languages. diff --git a/pypy/doc/config/translation.check_str_without_nul.txt b/pypy/doc/config/translation.check_str_without_nul.txt new file mode 100644 --- /dev/null +++ b/pypy/doc/config/translation.check_str_without_nul.txt @@ -0,0 +1,5 @@ +If turned on, the annotator will keep track of which strings can +potentially contain NUL characters, and complain if one such string +is passed to some external functions --- e.g. if it is used as a +filename in os.open(). Defaults to False because it is usually more +pain than benefit, but turned on by targetpypystandalone. diff --git a/pypy/doc/config/translation.log.txt b/pypy/doc/config/translation.log.txt --- a/pypy/doc/config/translation.log.txt +++ b/pypy/doc/config/translation.log.txt @@ -2,4 +2,4 @@ These must be enabled by setting the PYPYLOG environment variable. The exact set of features supported by PYPYLOG is described in -pypy/translation/c/src/debug.h. +pypy/translation/c/src/debug_print.h. diff --git a/pypy/doc/garbage_collection.rst b/pypy/doc/garbage_collection.rst --- a/pypy/doc/garbage_collection.rst +++ b/pypy/doc/garbage_collection.rst @@ -142,10 +142,9 @@ So as a first approximation, when compared to the Hybrid GC, the Minimark GC saves one word of memory per old object. -There are a number of environment variables that can be tweaked to -influence the GC. (Their default value should be ok for most usages.) -You can read more about them at the start of -`pypy/rpython/memory/gc/minimark.py`_. +There are :ref:`a number of environment variables +` that can be tweaked to influence the +GC. (Their default value should be ok for most usages.) In more detail: @@ -211,5 +210,4 @@ are preserved. If the object dies then the pre-reserved location becomes free garbage, to be collected at the next major collection. - .. include:: _ref.txt diff --git a/pypy/doc/gc_info.rst b/pypy/doc/gc_info.rst new file mode 100644 --- /dev/null +++ b/pypy/doc/gc_info.rst @@ -0,0 +1,53 @@ +Garbage collector configuration +=============================== + +.. _minimark-environment-variables: + +Minimark +-------- + +PyPy's default ``minimark`` garbage collector is configurable through +several environment variables: + +``PYPY_GC_NURSERY`` + The nursery size. + Defaults to ``4MB``. + Small values (like 1 or 1KB) are useful for debugging. + +``PYPY_GC_MAJOR_COLLECT`` + Major collection memory factor. + Default is ``1.82``, which means trigger a major collection when the + memory consumed equals 1.82 times the memory really used at the end + of the previous major collection. + +``PYPY_GC_GROWTH`` + Major collection threshold's max growth rate. + Default is ``1.4``. 
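  A rough worked example, using only the default values quoted above: if
  100MB are really in use at the end of a major collection, then with
  ``PYPY_GC_MAJOR_COLLECT`` at 1.82 the next major collection is scheduled
  at roughly 182MB, while ``PYPY_GC_GROWTH`` caps how much that threshold
  may grow from one collection to the next (at most 1.4x by default).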
+ Useful to collect more often than normally on sudden memory growth, + e.g. when there is a temporary peak in memory usage. + +``PYPY_GC_MAX`` + The max heap size. + If coming near this limit, it will first collect more often, then + raise an RPython MemoryError, and if that is not enough, crash the + program with a fatal error. + Try values like ``1.6GB``. + +``PYPY_GC_MAX_DELTA`` + The major collection threshold will never be set to more than + ``PYPY_GC_MAX_DELTA`` the amount really used after a collection. + Defaults to 1/8th of the total RAM size (which is constrained to be + at most 2/3/4GB on 32-bit systems). + Try values like ``200MB``. + +``PYPY_GC_MIN`` + Don't collect while the memory size is below this limit. + Useful to avoid spending all the time in the GC in very small + programs. + Defaults to 8 times the nursery. + +``PYPY_GC_DEBUG`` + Enable extra checks around collections that are too slow for normal + use. + Values are ``0`` (off), ``1`` (on major collections) or ``2`` (also + on minor collections). diff --git a/pypy/doc/index.rst b/pypy/doc/index.rst --- a/pypy/doc/index.rst +++ b/pypy/doc/index.rst @@ -353,10 +353,12 @@ getting-started-dev.rst windows.rst faq.rst + commandline_ref.rst architecture.rst coding-guide.rst cpython_differences.rst garbage_collection.rst + gc_info.rst interpreter.rst objspace.rst __pypy__-module.rst diff --git a/pypy/doc/man/pypy.1.rst b/pypy/doc/man/pypy.1.rst --- a/pypy/doc/man/pypy.1.rst +++ b/pypy/doc/man/pypy.1.rst @@ -24,6 +24,9 @@ -S Do not ``import site`` on initialization. +-s + Don't add the user site directory to `sys.path`. + -u Unbuffered binary ``stdout`` and ``stderr``. @@ -39,6 +42,9 @@ -E Ignore environment variables (such as ``PYTHONPATH``). +-B + Disable writing bytecode (``.pyc``) files. + --version Print the PyPy version. @@ -84,6 +90,64 @@ Optimizations to enabled or ``all``. Warning, this option is dangerous, and should be avoided. +ENVIRONMENT +=========== + +``PYTHONPATH`` + Add directories to pypy's module search path. + The format is the same as shell's ``PATH``. + +``PYTHONSTARTUP`` + A script referenced by this variable will be executed before the + first prompt is displayed, in interactive mode. + +``PYTHONDONTWRITEBYTECODE`` + If set to a non-empty value, equivalent to the ``-B`` option. + Disable writing ``.pyc`` files. + +``PYTHONINSPECT`` + If set to a non-empty value, equivalent to the ``-i`` option. + Inspect interactively after running the specified script. + +``PYTHONIOENCODING`` + If this is set, it overrides the encoding used for + *stdin*/*stdout*/*stderr*. + The syntax is *encodingname*:*errorhandler* + The *errorhandler* part is optional and has the same meaning as in + `str.encode`. + +``PYTHONNOUSERSITE`` + If set to a non-empty value, equivalent to the ``-s`` option. + Don't add the user site directory to `sys.path`. + +``PYTHONWARNINGS`` + If set, equivalent to the ``-W`` option (warning control). + The value should be a comma-separated list of ``-W`` parameters. + +``PYPYLOG`` + If set to a non-empty value, enable logging, the format is: + + *fname* + logging for profiling: includes all + ``debug_start``/``debug_stop`` but not any nested + ``debug_print``. + *fname* can be ``-`` to log to *stderr*. + + ``:``\ *fname* + Full logging, including ``debug_print``. + + *prefix*\ ``:``\ *fname* + Conditional logging. + Multiple prefixes can be specified, comma-separated. + Only sections whose name match the prefix will be logged. 
+ + ``PYPYLOG``\ =\ ``jit-log-opt,jit-backend:``\ *logfile* will + generate a log suitable for *jitviewer*, a tool for debugging + performance issues under PyPy. + +.. include:: ../gc_info.rst + :start-line: 7 + SEE ALSO ======== diff --git a/pypy/doc/release-1.8.0.rst b/pypy/doc/release-1.8.0.rst new file mode 100644 --- /dev/null +++ b/pypy/doc/release-1.8.0.rst @@ -0,0 +1,52 @@ +============================ +PyPy 1.8 - business as usual +============================ + +We're pleased to announce the 1.8 release of PyPy. As became a habit, this +release brings a lot of bugfixes, performance and memory improvements over +the 1.7 release. The main highlight of the release is the introduction of +list strategies which makes homogenous lists more efficient both in terms +of performance and memory. Otherwise it's "business as usual" in the sense +that performance improved roughly 10% on average since the previous release. +You can download the PyPy 1.8 release here: + + http://pypy.org/download.html + +What is PyPy? +============= + +PyPy is a very compliant Python interpreter, almost a drop-in replacement for +CPython 2.7. It's fast (`pypy 1.8 and cpython 2.7.1`_ performance comparison) +due to its integrated tracing JIT compiler. + +This release supports x86 machines running Linux 32/64, Mac OS X 32/64 or +Windows 32. Windows 64 work is ongoing, but not yet natively supported. + +.. _`pypy 1.8 and cpython 2.7.1`: http://speed.pypy.org + + +Highlights +========== + +* List strategies. Now lists that contain only ints or only floats should + be as efficient as storing them in a binary-packed array. It also improves + the JIT performance in places that use such lists. There are also special + strategies for unicode and string lists. + +* As usual, numerous performance improvements. There are too many examples + of python constructs that now should behave faster to list them. + +* Bugfixes and compatibility fixes with CPython. + +* Windows fixes. + +* NumPy effort progress; for the exact list of things that have been done, + consult the `numpy status page`_. A tentative list of things that has + been done: + + xxxx # list it, multidim arrays in particular + +* Fundraising XXX + +.. _`numpy status page`: xxx +.. _`numpy status update blog report`: xxx diff --git a/pypy/interpreter/astcompiler/optimize.py b/pypy/interpreter/astcompiler/optimize.py --- a/pypy/interpreter/astcompiler/optimize.py +++ b/pypy/interpreter/astcompiler/optimize.py @@ -302,8 +302,7 @@ # narrow builds will return a surrogate. In both # the cases skip the optimization in order to # produce compatible pycs. 
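            # Note on the bounds below: 0xD800-0xDFFF is the UTF-16 surrogate
            # range.  The previous upper bound 0xDFFFF was evidently a typo:
            # on a narrow build (MAXUNICODE == 0xFFFF) it also disabled the
            # constant folding for ordinary characters in 0xE000-0xFFFF,
            # which is what the new u"\uE01F" case in test_compiler exercises.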
- if (self.space.isinstance_w(w_obj, self.space.w_unicode) - and + if (self.space.isinstance_w(w_obj, self.space.w_unicode) and self.space.isinstance_w(w_const, self.space.w_unicode)): unistr = self.space.unicode_w(w_const) if len(unistr) == 1: @@ -311,7 +310,7 @@ else: ch = 0 if (ch > 0xFFFF or - (MAXUNICODE == 0xFFFF and 0xD800 <= ch <= 0xDFFFF)): + (MAXUNICODE == 0xFFFF and 0xD800 <= ch <= 0xDFFF)): return subs return ast.Const(w_const, subs.lineno, subs.col_offset) diff --git a/pypy/interpreter/astcompiler/test/test_compiler.py b/pypy/interpreter/astcompiler/test/test_compiler.py --- a/pypy/interpreter/astcompiler/test/test_compiler.py +++ b/pypy/interpreter/astcompiler/test/test_compiler.py @@ -838,7 +838,7 @@ # Just checking this doesn't crash out self.count_instructions(source) - def test_const_fold_unicode_subscr(self): + def test_const_fold_unicode_subscr(self, monkeypatch): source = """def f(): return u"abc"[0] """ @@ -853,6 +853,14 @@ assert counts == {ops.LOAD_CONST: 2, ops.BINARY_SUBSCR: 1, ops.RETURN_VALUE: 1} + monkeypatch.setattr(optimize, "MAXUNICODE", 0xFFFF) + source = """def f(): + return u"\uE01F"[0] + """ + counts = self.count_instructions(source) + assert counts == {ops.LOAD_CONST: 1, ops.RETURN_VALUE: 1} + monkeypatch.undo() + # getslice is not yet optimized. # Still, check a case which yields the empty string. source = """def f(): diff --git a/pypy/interpreter/baseobjspace.py b/pypy/interpreter/baseobjspace.py --- a/pypy/interpreter/baseobjspace.py +++ b/pypy/interpreter/baseobjspace.py @@ -1312,6 +1312,15 @@ def str_w(self, w_obj): return w_obj.str_w(self) + def str0_w(self, w_obj): + "Like str_w, but rejects strings with NUL bytes." + from pypy.rlib import rstring + result = w_obj.str_w(self) + if '\x00' in result: + raise OperationError(self.w_TypeError, self.wrap( + 'argument must be a string without NUL characters')) + return rstring.assert_str0(result) + def int_w(self, w_obj): return w_obj.int_w(self) diff --git a/pypy/interpreter/gateway.py b/pypy/interpreter/gateway.py --- a/pypy/interpreter/gateway.py +++ b/pypy/interpreter/gateway.py @@ -130,6 +130,9 @@ def visit_str_or_None(self, el, app_sig): self.checked_space_method(el, app_sig) + def visit_str0(self, el, app_sig): + self.checked_space_method(el, app_sig) + def visit_nonnegint(self, el, app_sig): self.checked_space_method(el, app_sig) @@ -249,6 +252,9 @@ def visit_str_or_None(self, typ): self.run_args.append("space.str_or_None_w(%s)" % (self.scopenext(),)) + def visit_str0(self, typ): + self.run_args.append("space.str0_w(%s)" % (self.scopenext(),)) + def visit_nonnegint(self, typ): self.run_args.append("space.gateway_nonnegint_w(%s)" % ( self.scopenext(),)) @@ -383,6 +389,9 @@ def visit_str_or_None(self, typ): self.unwrap.append("space.str_or_None_w(%s)" % (self.nextarg(),)) + def visit_str0(self, typ): + self.unwrap.append("space.str0_w(%s)" % (self.nextarg(),)) + def visit_nonnegint(self, typ): self.unwrap.append("space.gateway_nonnegint_w(%s)" % (self.nextarg(),)) diff --git a/pypy/interpreter/mixedmodule.py b/pypy/interpreter/mixedmodule.py --- a/pypy/interpreter/mixedmodule.py +++ b/pypy/interpreter/mixedmodule.py @@ -50,7 +50,7 @@ space.call_method(self.w_dict, 'update', self.w_initialdict) for w_submodule in self.submodules_w: - name = space.str_w(w_submodule.w_name) + name = space.str0_w(w_submodule.w_name) space.setitem(self.w_dict, space.wrap(name.split(".")[-1]), w_submodule) space.getbuiltinmodule(name) diff --git a/pypy/interpreter/module.py b/pypy/interpreter/module.py --- 
a/pypy/interpreter/module.py +++ b/pypy/interpreter/module.py @@ -31,7 +31,8 @@ def install(self): """NOT_RPYTHON: installs this module into space.builtin_modules""" w_mod = self.space.wrap(self) - self.space.builtin_modules[self.space.unwrap(self.w_name)] = w_mod + modulename = self.space.str0_w(self.w_name) + self.space.builtin_modules[modulename] = w_mod def setup_after_space_initialization(self): """NOT_RPYTHON: to allow built-in modules to do some more setup diff --git a/pypy/jit/backend/llgraph/llimpl.py b/pypy/jit/backend/llgraph/llimpl.py --- a/pypy/jit/backend/llgraph/llimpl.py +++ b/pypy/jit/backend/llgraph/llimpl.py @@ -780,6 +780,9 @@ self.overflow_flag = ovf return z + def op_keepalive(self, _, x): + pass + # ---------- # delegating to the builtins do_xxx() (done automatically for simple cases) diff --git a/pypy/jit/backend/x86/regalloc.py b/pypy/jit/backend/x86/regalloc.py --- a/pypy/jit/backend/x86/regalloc.py +++ b/pypy/jit/backend/x86/regalloc.py @@ -1463,6 +1463,9 @@ if jump_op is not None and jump_op.getdescr() is descr: self._compute_hint_frame_locations_from_descr(descr) + def consider_keepalive(self, op): + pass + def not_implemented_op(self, op): not_implemented("not implemented operation: %s" % op.getopname()) diff --git a/pypy/jit/metainterp/executor.py b/pypy/jit/metainterp/executor.py --- a/pypy/jit/metainterp/executor.py +++ b/pypy/jit/metainterp/executor.py @@ -254,6 +254,9 @@ assert isinstance(x, r_longlong) # 32-bit return BoxFloat(x) +def do_keepalive(cpu, _, x): + pass + # ____________________________________________________________ ##def do_force_token(cpu): diff --git a/pypy/jit/metainterp/pyjitpl.py b/pypy/jit/metainterp/pyjitpl.py --- a/pypy/jit/metainterp/pyjitpl.py +++ b/pypy/jit/metainterp/pyjitpl.py @@ -974,13 +974,13 @@ any_operation = len(self.metainterp.history.operations) > 0 jitdriver_sd = self.metainterp.staticdata.jitdrivers_sd[jdindex] self.verify_green_args(jitdriver_sd, greenboxes) - self.debug_merge_point(jitdriver_sd, jdindex, self.metainterp.in_recursion, + self.debug_merge_point(jitdriver_sd, jdindex, self.metainterp.portal_call_depth, greenboxes) if self.metainterp.seen_loop_header_for_jdindex < 0: if not any_operation: return - if self.metainterp.in_recursion or not self.metainterp.get_procedure_token(greenboxes, True): + if self.metainterp.portal_call_depth or not self.metainterp.get_procedure_token(greenboxes, True): if not jitdriver_sd.no_loop_header: return # automatically add a loop_header if there is none @@ -992,7 +992,7 @@ self.metainterp.seen_loop_header_for_jdindex = -1 # - if not self.metainterp.in_recursion: + if not self.metainterp.portal_call_depth: assert jitdriver_sd is self.metainterp.jitdriver_sd # Set self.pc to point to jit_merge_point instead of just after: # if reached_loop_header() raises SwitchToBlackhole, then the @@ -1028,11 +1028,11 @@ assembler_call=True) raise ChangeFrame - def debug_merge_point(self, jitdriver_sd, jd_index, in_recursion, greenkey): + def debug_merge_point(self, jitdriver_sd, jd_index, portal_call_depth, greenkey): # debugging: produce a DEBUG_MERGE_POINT operation loc = jitdriver_sd.warmstate.get_location_str(greenkey) debug_print(loc) - args = [ConstInt(jd_index), ConstInt(in_recursion)] + greenkey + args = [ConstInt(jd_index), ConstInt(portal_call_depth)] + greenkey self.metainterp.history.record(rop.DEBUG_MERGE_POINT, args, None) @arguments("box", "label") @@ -1346,12 +1346,16 @@ resbox = self.metainterp.execute_and_record_varargs( rop.CALL_MAY_FORCE, allboxes, descr=descr) 
self.metainterp.vrefs_after_residual_call() + vablebox = None if assembler_call: - self.metainterp.direct_assembler_call(assembler_call_jd) + vablebox = self.metainterp.direct_assembler_call( + assembler_call_jd) if resbox is not None: self.make_result_of_lastop(resbox) self.metainterp.vable_after_residual_call() self.generate_guard(rop.GUARD_NOT_FORCED, None) + if vablebox is not None: + self.metainterp.history.record(rop.KEEPALIVE, [vablebox], None) self.metainterp.handle_possible_exception() return resbox else: @@ -1552,7 +1556,7 @@ # ____________________________________________________________ class MetaInterp(object): - in_recursion = 0 + portal_call_depth = 0 cancel_count = 0 def __init__(self, staticdata, jitdriver_sd): @@ -1587,7 +1591,7 @@ def newframe(self, jitcode, greenkey=None): if jitcode.is_portal: - self.in_recursion += 1 + self.portal_call_depth += 1 if greenkey is not None and self.is_main_jitcode(jitcode): self.portal_trace_positions.append( (greenkey, len(self.history.operations))) @@ -1603,7 +1607,7 @@ frame = self.framestack.pop() jitcode = frame.jitcode if jitcode.is_portal: - self.in_recursion -= 1 + self.portal_call_depth -= 1 if frame.greenkey is not None and self.is_main_jitcode(jitcode): self.portal_trace_positions.append( (None, len(self.history.operations))) @@ -1662,17 +1666,17 @@ raise self.staticdata.ExitFrameWithExceptionRef(self.cpu, excvaluebox.getref_base()) def check_recursion_invariant(self): - in_recursion = -1 + portal_call_depth = -1 for frame in self.framestack: jitcode = frame.jitcode assert jitcode.is_portal == len([ jd for jd in self.staticdata.jitdrivers_sd if jd.mainjitcode is jitcode]) if jitcode.is_portal: - in_recursion += 1 - if in_recursion != self.in_recursion: - print "in_recursion problem!!!" - print in_recursion, self.in_recursion + portal_call_depth += 1 + if portal_call_depth != self.portal_call_depth: + print "portal_call_depth problem!!!" + print portal_call_depth, self.portal_call_depth for frame in self.framestack: jitcode = frame.jitcode if jitcode.is_portal: @@ -2183,11 +2187,11 @@ def initialize_state_from_start(self, original_boxes): # ----- make a new frame ----- - self.in_recursion = -1 # always one portal around + self.portal_call_depth = -1 # always one portal around self.framestack = [] f = self.newframe(self.jitdriver_sd.mainjitcode) f.setup_call(original_boxes) - assert self.in_recursion == 0 + assert self.portal_call_depth == 0 self.virtualref_boxes = [] self.initialize_withgreenfields(original_boxes) self.initialize_virtualizable(original_boxes) @@ -2198,7 +2202,7 @@ # otherwise the jit_virtual_refs are left in a dangling state. rstack._stack_criticalcode_start() try: - self.in_recursion = -1 # always one portal around + self.portal_call_depth = -1 # always one portal around self.history = history.History() inputargs_and_holes = self.rebuild_state_after_failure(resumedescr) self.history.inputargs = [box for box in inputargs_and_holes if box] @@ -2478,6 +2482,15 @@ token = warmrunnerstate.get_assembler_token(greenargs) op = op.copy_and_change(rop.CALL_ASSEMBLER, args=args, descr=token) self.history.operations.append(op) + # + # To fix an obscure issue, make sure the vable stays alive + # longer than the CALL_ASSEMBLER operation. We do it by + # inserting explicitly an extra KEEPALIVE operation. 
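        # (The new KEEPALIVE resop is a no-op in every backend: op_keepalive
        # in llgraph, do_keepalive in the executor and consider_keepalive in
        # the x86 register allocator, in the hunks earlier in this merge, all
        # simply pass.  Recording it on the virtualizable box returned below
        # only serves to keep that box alive in the trace until after the
        # CALL_ASSEMBLER.)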
+ jd = token.outermost_jitdriver_sd + if jd.index_of_virtualizable >= 0: + return args[jd.index_of_virtualizable] + else: + return None # ____________________________________________________________ diff --git a/pypy/jit/metainterp/resoperation.py b/pypy/jit/metainterp/resoperation.py --- a/pypy/jit/metainterp/resoperation.py +++ b/pypy/jit/metainterp/resoperation.py @@ -503,6 +503,7 @@ 'COPYUNICODECONTENT/5', 'QUASIIMMUT_FIELD/1d', # [objptr], descr=SlowMutateDescr 'RECORD_KNOWN_CLASS/2', # [objptr, clsptr] + 'KEEPALIVE/1', '_CANRAISE_FIRST', # ----- start of can_raise operations ----- '_CALL_FIRST', diff --git a/pypy/jit/metainterp/test/test_ajit.py b/pypy/jit/metainterp/test/test_ajit.py --- a/pypy/jit/metainterp/test/test_ajit.py +++ b/pypy/jit/metainterp/test/test_ajit.py @@ -322,6 +322,17 @@ res = self.interp_operations(f, [42]) assert res == ord(u"?") + def test_char_in_constant_string(self): + def g(string): + return '\x00' in string + def f(): + if g('abcdef'): return -60 + if not g('abc\x00ef'): return -61 + return 42 + res = self.interp_operations(f, []) + assert res == 42 + self.check_operations_history({'finish': 1}) # nothing else + def test_residual_call(self): @dont_look_inside def externfn(x, y): diff --git a/pypy/module/bz2/interp_bz2.py b/pypy/module/bz2/interp_bz2.py --- a/pypy/module/bz2/interp_bz2.py +++ b/pypy/module/bz2/interp_bz2.py @@ -328,7 +328,7 @@ if basemode == "a": raise OperationError(space.w_ValueError, space.wrap("cannot append to bz2 file")) - stream = open_path_helper(space.str_w(w_path), os_flags, False) + stream = open_path_helper(space.str0_w(w_path), os_flags, False) if reading: bz2stream = ReadBZ2Filter(space, stream, buffering) buffering = 0 # by construction, the ReadBZ2Filter acts like diff --git a/pypy/module/cpyext/include/pythonrun.h b/pypy/module/cpyext/include/pythonrun.h --- a/pypy/module/cpyext/include/pythonrun.h +++ b/pypy/module/cpyext/include/pythonrun.h @@ -13,6 +13,7 @@ #define Py_FrozenFlag 0 #define Py_VerboseFlag 0 +#define Py_DebugFlag 1 typedef struct { int cf_flags; /* bitmask of CO_xxx flags relevant to future */ diff --git a/pypy/module/gc/interp_gc.py b/pypy/module/gc/interp_gc.py --- a/pypy/module/gc/interp_gc.py +++ b/pypy/module/gc/interp_gc.py @@ -49,7 +49,7 @@ # ____________________________________________________________ - at unwrap_spec(filename=str) + at unwrap_spec(filename='str0') def dump_heap_stats(space, filename): tb = rgc._heap_stats() if not tb: diff --git a/pypy/module/imp/importing.py b/pypy/module/imp/importing.py --- a/pypy/module/imp/importing.py +++ b/pypy/module/imp/importing.py @@ -138,7 +138,7 @@ ctxt_package = None if ctxt_w_package is not None and ctxt_w_package is not space.w_None: try: - ctxt_package = space.str_w(ctxt_w_package) + ctxt_package = space.str0_w(ctxt_w_package) except OperationError, e: if not e.match(space, space.w_TypeError): raise @@ -187,7 +187,7 @@ ctxt_name = None if ctxt_w_name is not None: try: - ctxt_name = space.str_w(ctxt_w_name) + ctxt_name = space.str0_w(ctxt_w_name) except OperationError, e: if not e.match(space, space.w_TypeError): raise @@ -230,7 +230,7 @@ return rel_modulename, rel_level - at unwrap_spec(name=str, level=int) + at unwrap_spec(name='str0', level=int) def importhook(space, name, w_globals=None, w_locals=None, w_fromlist=None, level=-1): modulename = name @@ -377,8 +377,8 @@ fromlist_w = space.fixedview(w_all) for w_name in fromlist_w: if try_getattr(space, w_mod, w_name) is None: - load_part(space, w_path, prefix, space.str_w(w_name), w_mod, - 
tentative=1) + load_part(space, w_path, prefix, space.str0_w(w_name), + w_mod, tentative=1) return w_mod else: return first @@ -432,7 +432,7 @@ def __init__(self, space): pass - @unwrap_spec(path=str) + @unwrap_spec(path='str0') def descr_init(self, space, path): if not path: raise OperationError(space.w_ImportError, space.wrap( @@ -513,7 +513,7 @@ if w_loader: return FindInfo.fromLoader(w_loader) - path = space.str_w(w_pathitem) + path = space.str0_w(w_pathitem) filepart = os.path.join(path, partname) if os.path.isdir(filepart) and case_ok(filepart): initfile = os.path.join(filepart, '__init__') @@ -671,7 +671,7 @@ space.wrap("reload() argument must be module")) w_modulename = space.getattr(w_module, space.wrap("__name__")) - modulename = space.str_w(w_modulename) + modulename = space.str0_w(w_modulename) if not space.is_w(check_sys_modules(space, w_modulename), w_module): raise operationerrfmt( space.w_ImportError, diff --git a/pypy/module/imp/interp_imp.py b/pypy/module/imp/interp_imp.py --- a/pypy/module/imp/interp_imp.py +++ b/pypy/module/imp/interp_imp.py @@ -44,7 +44,7 @@ return space.interp_w(W_File, w_file).stream def find_module(space, w_name, w_path=None): - name = space.str_w(w_name) + name = space.str0_w(w_name) if space.is_w(w_path, space.w_None): w_path = None @@ -75,7 +75,7 @@ def load_module(space, w_name, w_file, w_filename, w_info): w_suffix, w_filemode, w_modtype = space.unpackiterable(w_info) - filename = space.str_w(w_filename) + filename = space.str0_w(w_filename) filemode = space.str_w(w_filemode) if space.is_w(w_file, space.w_None): stream = None @@ -92,7 +92,7 @@ space, w_name, find_info, reuse=True) def load_source(space, w_modulename, w_filename, w_file=None): - filename = space.str_w(w_filename) + filename = space.str0_w(w_filename) stream = get_file(space, w_file, filename, 'U') @@ -105,7 +105,7 @@ stream.close() return w_mod - at unwrap_spec(filename=str) + at unwrap_spec(filename='str0') def _run_compiled_module(space, w_modulename, filename, w_file, w_module): # the function 'imp._run_compiled_module' is a pypy-only extension stream = get_file(space, w_file, filename, 'rb') @@ -119,7 +119,7 @@ if space.is_w(w_file, space.w_None): stream.close() - at unwrap_spec(filename=str) + at unwrap_spec(filename='str0') def load_compiled(space, w_modulename, filename, w_file=None): w_mod = space.wrap(Module(space, w_modulename)) importing._prepare_module(space, w_mod, filename, None) @@ -138,7 +138,7 @@ return space.wrap(Module(space, w_name, add_package=False)) def init_builtin(space, w_name): - name = space.str_w(w_name) + name = space.str0_w(w_name) if name not in space.builtin_modules: return if space.finditem(space.sys.get('modules'), w_name) is not None: @@ -151,7 +151,7 @@ return None def is_builtin(space, w_name): - name = space.str_w(w_name) + name = space.str0_w(w_name) if name not in space.builtin_modules: return space.wrap(0) if space.finditem(space.sys.get('modules'), w_name) is not None: diff --git a/pypy/module/micronumpy/__init__.py b/pypy/module/micronumpy/__init__.py --- a/pypy/module/micronumpy/__init__.py +++ b/pypy/module/micronumpy/__init__.py @@ -98,6 +98,10 @@ ('bitwise_not', 'invert'), ('isnan', 'isnan'), ('isinf', 'isinf'), + ('logical_and', 'logical_and'), + ('logical_xor', 'logical_xor'), + ('logical_not', 'logical_not'), + ('logical_or', 'logical_or'), ]: interpleveldefs[exposed] = "interp_ufuncs.get(space).%s" % impl diff --git a/pypy/module/micronumpy/interp_iter.py b/pypy/module/micronumpy/interp_iter.py --- 
a/pypy/module/micronumpy/interp_iter.py +++ b/pypy/module/micronumpy/interp_iter.py @@ -86,8 +86,9 @@ def apply_transformations(self, arr, transformations): v = self - for transform in transformations: - v = v.transform(arr, transform) + if transformations is not None: + for transform in transformations: + v = v.transform(arr, transform) return v def transform(self, arr, t): diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -3,7 +3,7 @@ from pypy.interpreter.gateway import interp2app, unwrap_spec from pypy.interpreter.typedef import TypeDef, GetSetProperty from pypy.module.micronumpy import (interp_ufuncs, interp_dtype, interp_boxes, - signature, support) + signature, support, loop) from pypy.module.micronumpy.strides import (calculate_slice_strides, shape_agreement, find_shape_and_elems, get_shape_from_iterable, calc_new_strides, to_coords) @@ -12,39 +12,11 @@ from pypy.rpython.lltypesystem import lltype, rffi from pypy.tool.sourcetools import func_with_new_name from pypy.rlib.rstring import StringBuilder -from pypy.module.micronumpy.interp_iter import (ArrayIterator, OneDimIterator, +from pypy.module.micronumpy.interp_iter import (ArrayIterator, SkipLastAxisIterator, Chunk, ViewIterator) from pypy.module.micronumpy.appbridge import get_appbridge_cache -numpy_driver = jit.JitDriver( - greens=['shapelen', 'sig'], - virtualizables=['frame'], - reds=['result_size', 'frame', 'ri', 'self', 'result'], - get_printable_location=signature.new_printable_location('numpy'), - name='numpy', -) -all_driver = jit.JitDriver( - greens=['shapelen', 'sig'], - virtualizables=['frame'], - reds=['frame', 'self', 'dtype'], - get_printable_location=signature.new_printable_location('all'), - name='numpy_all', -) -any_driver = jit.JitDriver( - greens=['shapelen', 'sig'], - virtualizables=['frame'], - reds=['frame', 'self', 'dtype'], - get_printable_location=signature.new_printable_location('any'), - name='numpy_any', -) -slice_driver = jit.JitDriver( - greens=['shapelen', 'sig'], - virtualizables=['frame'], - reds=['self', 'frame', 'arr'], - get_printable_location=signature.new_printable_location('slice'), - name='numpy_slice', -) count_driver = jit.JitDriver( greens=['shapelen'], virtualizables=['frame'], @@ -173,6 +145,8 @@ descr_prod = _reduce_ufunc_impl("multiply", True) descr_max = _reduce_ufunc_impl("maximum") descr_min = _reduce_ufunc_impl("minimum") + descr_all = _reduce_ufunc_impl('logical_and') + descr_any = _reduce_ufunc_impl('logical_or') def _reduce_argmax_argmin_impl(op_name): reduce_driver = jit.JitDriver( @@ -212,40 +186,6 @@ return space.wrap(loop(self)) return func_with_new_name(impl, "reduce_arg%s_impl" % op_name) - def _all(self): - dtype = self.find_dtype() - sig = self.find_sig() - frame = sig.create_frame(self) - shapelen = len(self.shape) - while not frame.done(): - all_driver.jit_merge_point(sig=sig, - shapelen=shapelen, self=self, - dtype=dtype, frame=frame) - if not dtype.itemtype.bool(sig.eval(frame, self)): - return False - frame.next(shapelen) - return True - - def descr_all(self, space): - return space.wrap(self._all()) - - def _any(self): - dtype = self.find_dtype() - sig = self.find_sig() - frame = sig.create_frame(self) - shapelen = len(self.shape) - while not frame.done(): - any_driver.jit_merge_point(sig=sig, frame=frame, - shapelen=shapelen, self=self, - dtype=dtype) - if dtype.itemtype.bool(sig.eval(frame, self)): - return True - 
frame.next(shapelen) - return False - - def descr_any(self, space): - return space.wrap(self._any()) - descr_argmax = _reduce_argmax_argmin_impl("max") descr_argmin = _reduce_argmax_argmin_impl("min") @@ -267,7 +207,7 @@ out_size = support.product(out_shape) result = W_NDimArray(out_size, out_shape, dtype) # This is the place to add fpypy and blas - return multidim_dot(space, self.get_concrete(), + return multidim_dot(space, self.get_concrete(), other.get_concrete(), result, dtype, other_critical_dim) @@ -280,6 +220,12 @@ def descr_get_ndim(self, space): return space.wrap(len(self.shape)) + def descr_get_itemsize(self, space): + return space.wrap(self.find_dtype().itemtype.get_element_size()) + + def descr_get_nbytes(self, space): + return space.wrap(self.size * self.find_dtype().itemtype.get_element_size()) + @jit.unroll_safe def descr_get_shape(self, space): return space.newtuple([space.wrap(i) for i in self.shape]) @@ -507,7 +453,7 @@ w_shape = space.newtuple(args_w) new_shape = get_shape_from_iterable(space, self.size, w_shape) return self.reshape(space, new_shape) - + def reshape(self, space, new_shape): concrete = self.get_concrete() # Since we got to here, prod(new_shape) == self.size @@ -679,6 +625,9 @@ raise OperationError(space.w_NotImplementedError, space.wrap( "non-int arg not supported")) + def compute_first_step(self, sig, frame): + pass + def convert_to_array(space, w_obj): if isinstance(w_obj, BaseArray): return w_obj @@ -744,22 +693,9 @@ raise NotImplementedError def compute(self): - result = W_NDimArray(self.size, self.shape, self.find_dtype()) - shapelen = len(self.shape) - sig = self.find_sig() - frame = sig.create_frame(self) - ri = ArrayIterator(self.size) - while not ri.done(): - numpy_driver.jit_merge_point(sig=sig, - shapelen=shapelen, - result_size=self.size, - frame=frame, - ri=ri, - self=self, result=result) - result.setitem(ri.offset, sig.eval(frame, self)) - frame.next(shapelen) - ri = ri.next(shapelen) - return result + ra = ResultArray(self, self.size, self.shape, self.res_dtype) + loop.compute(ra) + return ra.left def force_if_needed(self): if self.forced_result is None: @@ -817,7 +753,8 @@ def create_sig(self): if self.forced_result is not None: return self.forced_result.create_sig() - return signature.Call1(self.ufunc, self.name, self.values.create_sig()) + return signature.Call1(self.ufunc, self.name, self.calc_dtype, + self.values.create_sig()) class Call2(VirtualArray): """ @@ -858,6 +795,66 @@ return signature.Call2(self.ufunc, self.name, self.calc_dtype, self.left.create_sig(), self.right.create_sig()) +class ResultArray(Call2): + def __init__(self, child, size, shape, dtype, res=None, order='C'): + if res is None: + res = W_NDimArray(size, shape, dtype, order) + Call2.__init__(self, None, 'assign', shape, dtype, dtype, res, child) + + def create_sig(self): + return signature.ResultSignature(self.res_dtype, self.left.create_sig(), + self.right.create_sig()) + +def done_if_true(dtype, val): + return dtype.itemtype.bool(val) + +def done_if_false(dtype, val): + return not dtype.itemtype.bool(val) + +class ReduceArray(Call2): + def __init__(self, func, name, identity, child, dtype): + self.identity = identity + Call2.__init__(self, func, name, [1], dtype, dtype, None, child) + + def compute_first_step(self, sig, frame): + assert isinstance(sig, signature.ReduceSignature) + if self.identity is None: + frame.cur_value = sig.right.eval(frame, self.right).convert_to( + self.calc_dtype) + frame.next(len(self.right.shape)) + else: + frame.cur_value = 
self.identity.convert_to(self.calc_dtype) + + def create_sig(self): + if self.name == 'logical_and': + done_func = done_if_false + elif self.name == 'logical_or': + done_func = done_if_true + else: + done_func = None + return signature.ReduceSignature(self.ufunc, self.name, self.res_dtype, + signature.ScalarSignature(self.res_dtype), + self.right.create_sig(), done_func) + +class AxisReduce(Call2): + _immutable_fields_ = ['left', 'right'] + + def __init__(self, ufunc, name, identity, shape, dtype, left, right, dim): + Call2.__init__(self, ufunc, name, shape, dtype, dtype, + left, right) + self.dim = dim + self.identity = identity + + def compute_first_step(self, sig, frame): + if self.identity is not None: + frame.identity = self.identity.convert_to(self.calc_dtype) + + def create_sig(self): + return signature.AxisReduceSignature(self.ufunc, self.name, + self.res_dtype, + signature.ScalarSignature(self.res_dtype), + self.right.create_sig()) + class SliceArray(Call2): def __init__(self, shape, dtype, left, right, no_broadcast=False): self.no_broadcast = no_broadcast @@ -876,18 +873,6 @@ self.calc_dtype, lsig, rsig) -class AxisReduce(Call2): - """ NOTE: this is only used as a container, you should never - encounter such things in the wild. Remove this comment - when we'll make AxisReduce lazy - """ - _immutable_fields_ = ['left', 'right'] - - def __init__(self, ufunc, name, shape, dtype, left, right, dim): - Call2.__init__(self, ufunc, name, shape, dtype, dtype, - left, right) - self.dim = dim - class ConcreteArray(BaseArray): """ An array that have actual storage, whether owned or not """ @@ -973,7 +958,7 @@ self._fast_setslice(space, w_value) else: arr = SliceArray(self.shape, self.dtype, self, w_value) - self._sliceloop(arr) + loop.compute(arr) def _fast_setslice(self, space, w_value): assert isinstance(w_value, ConcreteArray) @@ -997,17 +982,6 @@ source.next() dest.next() - def _sliceloop(self, arr): - sig = arr.find_sig() - frame = sig.create_frame(arr) - shapelen = len(self.shape) - while not frame.done(): - slice_driver.jit_merge_point(sig=sig, frame=frame, self=self, - arr=arr, - shapelen=shapelen) - sig.eval(frame, arr) - frame.next(shapelen) - def copy(self, space): array = W_NDimArray(self.size, self.shape[:], self.dtype, self.order) array.setslice(space, self) @@ -1033,9 +1007,9 @@ parent.order, parent) self.start = start - def create_iter(self): + def create_iter(self, transforms=None): return ViewIterator(self.start, self.strides, self.backstrides, - self.shape) + self.shape).apply_transformations(self, transforms) def setshape(self, space, new_shape): if len(self.shape) < 1: @@ -1084,8 +1058,8 @@ self.shape = new_shape self.calc_strides(new_shape) - def create_iter(self): - return ArrayIterator(self.size) + def create_iter(self, transforms=None): + return ArrayIterator(self.size).apply_transformations(self, transforms) def create_sig(self): return signature.ArraySignature(self.dtype) @@ -1289,11 +1263,13 @@ BaseArray.descr_set_shape), size = GetSetProperty(BaseArray.descr_get_size), ndim = GetSetProperty(BaseArray.descr_get_ndim), - item = interp2app(BaseArray.descr_item), + itemsize = GetSetProperty(BaseArray.descr_get_itemsize), + nbytes = GetSetProperty(BaseArray.descr_get_nbytes), T = GetSetProperty(BaseArray.descr_get_transpose), flat = GetSetProperty(BaseArray.descr_get_flatiter), ravel = interp2app(BaseArray.descr_ravel), + item = interp2app(BaseArray.descr_item), mean = interp2app(BaseArray.descr_mean), sum = interp2app(BaseArray.descr_sum), @@ -1345,12 +1321,15 @@ def 
descr_iter(self): return self + def descr_len(self, space): + return space.wrap(self.size) + def descr_index(self, space): return space.wrap(self.index) def descr_coords(self, space): - coords, step, lngth = to_coords(space, self.base.shape, - self.base.size, self.base.order, + coords, step, lngth = to_coords(space, self.base.shape, + self.base.size, self.base.order, space.wrap(self.index)) return space.newtuple([space.wrap(c) for c in coords]) @@ -1380,7 +1359,7 @@ step=step, res=res, ri=ri, - ) + ) w_val = base.getitem(basei.offset) res.setitem(ri.offset,w_val) basei = basei.next_skip_x(shapelen, step) @@ -1408,7 +1387,7 @@ arr=arr, ai=ai, lngth=lngth, - ) + ) v = arr.getitem(ai).convert_to(base.dtype) base.setitem(basei.offset, v) # need to repeat input values until all assignments are done @@ -1419,22 +1398,29 @@ def create_sig(self): return signature.FlatSignature(self.base.dtype) + def create_iter(self, transforms=None): + return ViewIterator(self.base.start, self.base.strides, + self.base.backstrides, + self.base.shape).apply_transformations(self.base, + transforms) + def descr_base(self, space): return space.wrap(self.base) W_FlatIterator.typedef = TypeDef( 'flatiter', - #__array__ = #MISSING __iter__ = interp2app(W_FlatIterator.descr_iter), + __len__ = interp2app(W_FlatIterator.descr_len), __getitem__ = interp2app(W_FlatIterator.descr_getitem), __setitem__ = interp2app(W_FlatIterator.descr_setitem), + __eq__ = interp2app(BaseArray.descr_eq), __ne__ = interp2app(BaseArray.descr_ne), __lt__ = interp2app(BaseArray.descr_lt), __le__ = interp2app(BaseArray.descr_le), __gt__ = interp2app(BaseArray.descr_gt), __ge__ = interp2app(BaseArray.descr_ge), - #__sizeof__ #MISSING + base = GetSetProperty(W_FlatIterator.descr_base), index = GetSetProperty(W_FlatIterator.descr_index), coords = GetSetProperty(W_FlatIterator.descr_coords), diff --git a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py --- a/pypy/module/micronumpy/interp_ufuncs.py +++ b/pypy/module/micronumpy/interp_ufuncs.py @@ -2,31 +2,10 @@ from pypy.interpreter.error import OperationError, operationerrfmt from pypy.interpreter.gateway import interp2app, unwrap_spec, NoneNotWrapped from pypy.interpreter.typedef import TypeDef, GetSetProperty, interp_attrproperty -from pypy.module.micronumpy import interp_boxes, interp_dtype, support -from pypy.module.micronumpy.signature import (ReduceSignature, find_sig, - new_printable_location, AxisReduceSignature, ScalarSignature) -from pypy.rlib import jit +from pypy.module.micronumpy import interp_boxes, interp_dtype, support, loop from pypy.rlib.rarithmetic import LONG_BIT from pypy.tool.sourcetools import func_with_new_name - -reduce_driver = jit.JitDriver( - greens=['shapelen', "sig"], - virtualizables=["frame"], - reds=["frame", "self", "dtype", "value", "obj"], - get_printable_location=new_printable_location('reduce'), - name='numpy_reduce', -) - -axisreduce_driver = jit.JitDriver( - greens=['shapelen', 'sig'], - virtualizables=['frame'], - reds=['self','arr', 'identity', 'frame'], - name='numpy_axisreduce', - get_printable_location=new_printable_location('axisreduce'), -) - - class W_Ufunc(Wrappable): _attrs_ = ["name", "promote_to_float", "promote_bools", "identity"] _immutable_fields_ = ["promote_to_float", "promote_bools", "name"] @@ -140,7 +119,7 @@ def reduce(self, space, w_obj, multidim, promote_to_largest, dim, keepdims=False): from pypy.module.micronumpy.interp_numarray import convert_to_array, \ - Scalar + Scalar, ReduceArray if self.argcount != 
2: raise OperationError(space.w_ValueError, space.wrap("reduce only " "supported for binary functions")) @@ -151,96 +130,37 @@ if isinstance(obj, Scalar): raise OperationError(space.w_TypeError, space.wrap("cannot reduce " "on a scalar")) - size = obj.size - dtype = find_unaryop_result_dtype( - space, obj.find_dtype(), - promote_to_float=self.promote_to_float, - promote_to_largest=promote_to_largest, - promote_bools=True - ) + if self.comparison_func: + dtype = interp_dtype.get_dtype_cache(space).w_booldtype + else: + dtype = find_unaryop_result_dtype( + space, obj.find_dtype(), + promote_to_float=self.promote_to_float, + promote_to_largest=promote_to_largest, + promote_bools=True + ) shapelen = len(obj.shape) if self.identity is None and size == 0: raise operationerrfmt(space.w_ValueError, "zero-size array to " "%s.reduce without identity", self.name) if shapelen > 1 and dim >= 0: - res = self.do_axis_reduce(obj, dtype, dim, keepdims) - return space.wrap(res) - scalarsig = ScalarSignature(dtype) - sig = find_sig(ReduceSignature(self.func, self.name, dtype, - scalarsig, - obj.create_sig()), obj) - frame = sig.create_frame(obj) - if self.identity is None: - value = sig.eval(frame, obj).convert_to(dtype) - frame.next(shapelen) - else: - value = self.identity.convert_to(dtype) - return self.reduce_loop(shapelen, sig, frame, value, obj, dtype) + return self.do_axis_reduce(obj, dtype, dim, keepdims) + arr = ReduceArray(self.func, self.name, self.identity, obj, dtype) + return loop.compute(arr) def do_axis_reduce(self, obj, dtype, dim, keepdims): from pypy.module.micronumpy.interp_numarray import AxisReduce,\ W_NDimArray - if keepdims: shape = obj.shape[:dim] + [1] + obj.shape[dim + 1:] else: shape = obj.shape[:dim] + obj.shape[dim + 1:] result = W_NDimArray(support.product(shape), shape, dtype) - rightsig = obj.create_sig() - # note - this is just a wrapper so signature can fetch - # both left and right, nothing more, especially - # this is not a true virtual array, because shapes - # don't quite match - arr = AxisReduce(self.func, self.name, obj.shape, dtype, + arr = AxisReduce(self.func, self.name, self.identity, obj.shape, dtype, result, obj, dim) - scalarsig = ScalarSignature(dtype) - sig = find_sig(AxisReduceSignature(self.func, self.name, dtype, - scalarsig, rightsig), arr) - assert isinstance(sig, AxisReduceSignature) - frame = sig.create_frame(arr) - shapelen = len(obj.shape) - if self.identity is not None: - identity = self.identity.convert_to(dtype) - else: - identity = None - self.reduce_axis_loop(frame, sig, shapelen, arr, identity) - return result - - def reduce_axis_loop(self, frame, sig, shapelen, arr, identity): - # note - we can be advanterous here, depending on the exact field - # layout. 
For now let's say we iterate the original way and - # simply follow the original iteration order - while not frame.done(): - axisreduce_driver.jit_merge_point(frame=frame, self=self, - sig=sig, - identity=identity, - shapelen=shapelen, arr=arr) - iterator = frame.get_final_iter() - v = sig.eval(frame, arr).convert_to(sig.calc_dtype) - if iterator.first_line: - if identity is not None: - value = self.func(sig.calc_dtype, identity, v) - else: - value = v - else: - cur = arr.left.getitem(iterator.offset) - value = self.func(sig.calc_dtype, cur, v) - arr.left.setitem(iterator.offset, value) - frame.next(shapelen) - - def reduce_loop(self, shapelen, sig, frame, value, obj, dtype): - while not frame.done(): - reduce_driver.jit_merge_point(sig=sig, - shapelen=shapelen, self=self, - value=value, obj=obj, frame=frame, - dtype=dtype) - assert isinstance(sig, ReduceSignature) - value = sig.binfunc(dtype, value, - sig.eval(frame, obj).convert_to(dtype)) - frame.next(shapelen) - return value - + loop.compute(arr) + return arr.left class W_Ufunc1(W_Ufunc): argcount = 1 @@ -312,7 +232,6 @@ w_lhs.value.convert_to(calc_dtype), w_rhs.value.convert_to(calc_dtype) )) - new_shape = shape_agreement(space, w_lhs.shape, w_rhs.shape) w_res = Call2(self.func, self.name, new_shape, calc_dtype, @@ -482,6 +401,13 @@ ("isnan", "isnan", 1, {"bool_result": True}), ("isinf", "isinf", 1, {"bool_result": True}), + ('logical_and', 'logical_and', 2, {'comparison_func': True, + 'identity': 1}), + ('logical_or', 'logical_or', 2, {'comparison_func': True, + 'identity': 0}), + ('logical_xor', 'logical_xor', 2, {'comparison_func': True}), + ('logical_not', 'logical_not', 1, {'bool_result': True}), + ("maximum", "max", 2), ("minimum", "min", 2), diff --git a/pypy/module/micronumpy/loop.py b/pypy/module/micronumpy/loop.py new file mode 100644 --- /dev/null +++ b/pypy/module/micronumpy/loop.py @@ -0,0 +1,83 @@ + +""" This file is the main run loop as well as evaluation loops for various +signatures +""" + +from pypy.rlib.jit import JitDriver, hint, unroll_safe, promote +from pypy.module.micronumpy.interp_iter import ConstantIterator + +class NumpyEvalFrame(object): + _virtualizable2_ = ['iterators[*]', 'final_iter', 'arraylist[*]', + 'value', 'identity', 'cur_value'] + + @unroll_safe + def __init__(self, iterators, arrays): + self = hint(self, access_directly=True, fresh_virtualizable=True) + self.iterators = iterators[:] + self.arrays = arrays[:] + for i in range(len(self.iterators)): + iter = self.iterators[i] + if not isinstance(iter, ConstantIterator): + self.final_iter = i + break + else: + self.final_iter = -1 + self.cur_value = None + self.identity = None + + def done(self): + final_iter = promote(self.final_iter) + if final_iter < 0: + assert False + return self.iterators[final_iter].done() + + @unroll_safe + def next(self, shapelen): + for i in range(len(self.iterators)): + self.iterators[i] = self.iterators[i].next(shapelen) + + @unroll_safe + def next_from_second(self, shapelen): + """ Don't increase the first iterator + """ + for i in range(1, len(self.iterators)): + self.iterators[i] = self.iterators[i].next(shapelen) + + def next_first(self, shapelen): + self.iterators[0] = self.iterators[0].next(shapelen) + + def get_final_iter(self): + final_iter = promote(self.final_iter) + if final_iter < 0: + assert False + return self.iterators[final_iter] + +def get_printable_location(shapelen, sig): + return 'numpy ' + sig.debug_repr() + ' [%d dims]' % (shapelen,) + +numpy_driver = JitDriver( + greens=['shapelen', 'sig'], + 
virtualizables=['frame'], + reds=['frame', 'arr'], + get_printable_location=get_printable_location, + name='numpy', +) + +class ComputationDone(Exception): + def __init__(self, value): + self.value = value + +def compute(arr): + sig = arr.find_sig() + shapelen = len(arr.shape) + frame = sig.create_frame(arr) + try: + while not frame.done(): + numpy_driver.jit_merge_point(sig=sig, + shapelen=shapelen, + frame=frame, arr=arr) + sig.eval(frame, arr) + frame.next(shapelen) + return frame.cur_value + except ComputationDone, e: + return e.value diff --git a/pypy/module/micronumpy/signature.py b/pypy/module/micronumpy/signature.py --- a/pypy/module/micronumpy/signature.py +++ b/pypy/module/micronumpy/signature.py @@ -1,9 +1,9 @@ from pypy.rlib.objectmodel import r_dict, compute_identity_hash, compute_hash from pypy.rlib.rarithmetic import intmask -from pypy.module.micronumpy.interp_iter import ViewIterator, ArrayIterator, \ - ConstantIterator, AxisIterator, ViewTransform,\ - BroadcastTransform -from pypy.rlib.jit import hint, unroll_safe, promote +from pypy.module.micronumpy.interp_iter import ConstantIterator, AxisIterator,\ + ViewTransform, BroadcastTransform +from pypy.tool.pairtype import extendabletype +from pypy.module.micronumpy.loop import ComputationDone """ Signature specifies both the numpy expression that has been constructed and the assembler to be compiled. This is a very important observation - @@ -54,50 +54,6 @@ known_sigs[sig] = sig return sig -class NumpyEvalFrame(object): - _virtualizable2_ = ['iterators[*]', 'final_iter', 'arraylist[*]', - 'value', 'identity'] - - @unroll_safe - def __init__(self, iterators, arrays): - self = hint(self, access_directly=True, fresh_virtualizable=True) - self.iterators = iterators[:] - self.arrays = arrays[:] - for i in range(len(self.iterators)): - iter = self.iterators[i] - if not isinstance(iter, ConstantIterator): - self.final_iter = i - break - else: - self.final_iter = -1 - - def done(self): - final_iter = promote(self.final_iter) - if final_iter < 0: - assert False - return self.iterators[final_iter].done() - - @unroll_safe - def next(self, shapelen): - for i in range(len(self.iterators)): - self.iterators[i] = self.iterators[i].next(shapelen) - - @unroll_safe - def next_from_second(self, shapelen): - """ Don't increase the first iterator - """ - for i in range(1, len(self.iterators)): - self.iterators[i] = self.iterators[i].next(shapelen) - - def next_first(self, shapelen): - self.iterators[0] = self.iterators[0].next(shapelen) - - def get_final_iter(self): - final_iter = promote(self.final_iter) - if final_iter < 0: - assert False - return self.iterators[final_iter] - def _add_ptr_to_cache(ptr, cache): i = 0 for p in cache: @@ -113,6 +69,8 @@ return r_dict(sigeq_no_numbering, sighash) class Signature(object): + __metaclass_ = extendabletype + _attrs_ = ['iter_no', 'array_no'] _immutable_fields_ = ['iter_no', 'array_no'] @@ -138,11 +96,15 @@ self.iter_no = no def create_frame(self, arr): + from pypy.module.micronumpy.loop import NumpyEvalFrame + iterlist = [] arraylist = [] self._create_iter(iterlist, arraylist, arr, []) - return NumpyEvalFrame(iterlist, arraylist) - + f = NumpyEvalFrame(iterlist, arraylist) + # hook for cur_value being used by reduce + arr.compute_first_step(self, f) + return f class ConcreteSignature(Signature): _immutable_fields_ = ['dtype'] @@ -182,13 +144,10 @@ assert isinstance(concr, ConcreteArray) storage = concr.storage if self.iter_no >= len(iterlist): - iterlist.append(self.allocate_iter(concr, transforms)) + 
iterlist.append(concr.create_iter(transforms)) if self.array_no >= len(arraylist): arraylist.append(storage) - def allocate_iter(self, arr, transforms): - return ArrayIterator(arr.size).apply_transformations(arr, transforms) - def eval(self, frame, arr): iter = frame.iterators[self.iter_no] return self.dtype.getitem(frame.arrays[self.array_no], iter.offset) @@ -220,22 +179,10 @@ allnumbers.append(no) self.iter_no = no - def allocate_iter(self, arr, transforms): - return ViewIterator(arr.start, arr.strides, arr.backstrides, - arr.shape).apply_transformations(arr, transforms) - class FlatSignature(ViewSignature): def debug_repr(self): return 'Flat' - def allocate_iter(self, arr, transforms): - from pypy.module.micronumpy.interp_numarray import W_FlatIterator - assert isinstance(arr, W_FlatIterator) - return ViewIterator(arr.base.start, arr.base.strides, - arr.base.backstrides, - arr.base.shape).apply_transformations(arr.base, - transforms) - class VirtualSliceSignature(Signature): def __init__(self, child): self.child = child @@ -269,12 +216,13 @@ return self.child.eval(frame, arr.child) class Call1(Signature): - _immutable_fields_ = ['unfunc', 'name', 'child'] + _immutable_fields_ = ['unfunc', 'name', 'child', 'dtype'] - def __init__(self, func, name, child): + def __init__(self, func, name, dtype, child): self.unfunc = func self.child = child self.name = name + self.dtype = dtype def hash(self): return compute_hash(self.name) ^ intmask(self.child.hash() << 1) @@ -359,6 +307,17 @@ return 'Call2(%s, %s, %s)' % (self.name, self.left.debug_repr(), self.right.debug_repr()) +class ResultSignature(Call2): + def __init__(self, dtype, left, right): + Call2.__init__(self, None, 'assign', dtype, left, right) + + def eval(self, frame, arr): + from pypy.module.micronumpy.interp_numarray import ResultArray + + assert isinstance(arr, ResultArray) + offset = frame.get_final_iter().offset + arr.left.setitem(offset, self.right.eval(frame, arr.right)) + class BroadcastLeft(Call2): def _invent_numbering(self, cache, allnumbers): self.left._invent_numbering(new_cache(), allnumbers) @@ -400,20 +359,24 @@ self.right._create_iter(iterlist, arraylist, arr.right, rtransforms) class ReduceSignature(Call2): - def _create_iter(self, iterlist, arraylist, arr, transforms): - self.right._create_iter(iterlist, arraylist, arr, transforms) - - def _invent_numbering(self, cache, allnumbers): - self.right._invent_numbering(cache, allnumbers) - - def _invent_array_numbering(self, arr, cache): - self.right._invent_array_numbering(arr, cache) - + _immutable_fields_ = ['binfunc', 'name', 'calc_dtype', + 'left', 'right', 'done_func'] + + def __init__(self, func, name, calc_dtype, left, right, + done_func): + Call2.__init__(self, func, name, calc_dtype, left, right) + self.done_func = done_func + def eval(self, frame, arr): - return self.right.eval(frame, arr) + from pypy.module.micronumpy.interp_numarray import ReduceArray + assert isinstance(arr, ReduceArray) + rval = self.right.eval(frame, arr.right).convert_to(self.calc_dtype) + if self.done_func is not None and self.done_func(self.calc_dtype, rval): + raise ComputationDone(rval) + frame.cur_value = self.binfunc(self.calc_dtype, frame.cur_value, rval) def debug_repr(self): - return 'ReduceSig(%s, %s)' % (self.name, self.right.debug_repr()) + return 'ReduceSig(%s)' % (self.name, self.right.debug_repr()) class SliceloopSignature(Call2): def eval(self, frame, arr): @@ -467,7 +430,17 @@ from pypy.module.micronumpy.interp_numarray import AxisReduce assert isinstance(arr, AxisReduce) 
- return self.right.eval(frame, arr.right).convert_to(self.calc_dtype) + iterator = frame.get_final_iter() + v = self.right.eval(frame, arr.right).convert_to(self.calc_dtype) + if iterator.first_line: + if frame.identity is not None: + value = self.binfunc(self.calc_dtype, frame.identity, v) + else: + value = v + else: + cur = arr.left.getitem(iterator.offset) + value = self.binfunc(self.calc_dtype, cur, v) + arr.left.setitem(iterator.offset, value) def debug_repr(self): return 'AxisReduceSig(%s, %s)' % (self.name, self.right.debug_repr()) diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -1,11 +1,13 @@ import py + +from pypy.conftest import gettestobjspace, option +from pypy.interpreter.error import OperationError +from pypy.module.micronumpy import signature +from pypy.module.micronumpy.appbridge import get_appbridge_cache +from pypy.module.micronumpy.interp_iter import Chunk +from pypy.module.micronumpy.interp_numarray import W_NDimArray, shape_agreement from pypy.module.micronumpy.test.test_base import BaseNumpyAppTest -from pypy.module.micronumpy.interp_numarray import W_NDimArray, shape_agreement -from pypy.module.micronumpy.interp_iter import Chunk -from pypy.module.micronumpy import signature -from pypy.interpreter.error import OperationError -from pypy.conftest import gettestobjspace class MockDtype(object): @@ -173,7 +175,7 @@ def _to_coords(index, order): return to_coords(self.space, [2, 3, 4], 24, order, self.space.wrap(index))[0] - + assert _to_coords(0, 'C') == [0, 0, 0] assert _to_coords(1, 'C') == [0, 0, 1] assert _to_coords(-1, 'C') == [1, 2, 3] @@ -306,7 +308,7 @@ from _numpypy import arange a = arange(15).reshape(3, 5) assert a[1, 3] == 8 - assert a.T[1, 2] == 11 + assert a.T[1, 2] == 11 def test_setitem(self): from _numpypy import array @@ -936,10 +938,9 @@ [[86, 302, 518], [110, 390, 670], [134, 478, 822]]]).all() c = dot(a, b[:, 2]) assert (c == [[62, 214, 366], [518, 670, 822]]).all() - a = arange(3*4*5*6).reshape((3,4,5,6)) - b = arange(3*4*5*6)[::-1].reshape((5,4,6,3)) - assert dot(a, b)[2,3,2,1,2,2] == 499128 - assert sum(a[2,3,2,:] * b[1,2,:,2]) == 499128 + a = arange(3*2*6).reshape((3,2,6)) + b = arange(3*2*6)[::-1].reshape((2,6,3)) + assert dot(a, b)[2,0,1,2] == 1140 def test_dot_constant(self): from _numpypy import array, dot @@ -1121,14 +1122,14 @@ f1 = array([0,1]) f = concatenate((f1, [2], f1, [7])) assert (f == [0,1,2,0,1,7]).all() - + bad_axis = raises(ValueError, concatenate, (a1,a2), axis=1) assert str(bad_axis.value) == "bad axis argument" - + concat_zero = raises(ValueError, concatenate, ()) assert str(concat_zero.value) == \ "concatenation of zero-length sequences is impossible" - + dims_disagree = raises(ValueError, concatenate, (a1, b1), axis=0) assert str(dims_disagree.value) == \ "array dimensions must agree except for axis being concatenated" @@ -1163,6 +1164,25 @@ a = array([[1, 2], [3, 4]]) assert (a.T.flatten() == [1, 3, 2, 4]).all() + def test_itemsize(self): + from _numpypy import ones, dtype, array + + for obj in [float, bool, int]: + assert ones(1, dtype=obj).itemsize == dtype(obj).itemsize + assert (ones(1) + ones(1)).itemsize == 8 + assert array(1.0).itemsize == 8 + assert ones(1)[:].itemsize == 8 + + def test_nbytes(self): + from _numpypy import array, ones + + assert ones(1).nbytes == 8 + assert ones((2, 2)).nbytes == 32 + assert ones((2, 2))[1:,].nbytes == 16 + assert (ones(1) 
+ ones(1)).nbytes == 8 + assert array(3.0).nbytes == 8 + + class AppTestMultiDim(BaseNumpyAppTest): def test_init(self): import _numpypy @@ -1458,35 +1478,37 @@ b = a.T.flat assert (b == [0, 4, 8, 1, 5, 9, 2, 6, 10, 3, 7, 11]).all() assert not (b != [0, 4, 8, 1, 5, 9, 2, 6, 10, 3, 7, 11]).any() - assert ((b >= range(12)) == [True, True, True,False, True, True, + assert ((b >= range(12)) == [True, True, True,False, True, True, False, False, True, False, False, True]).all() - assert ((b < range(12)) != [True, True, True,False, True, True, + assert ((b < range(12)) != [True, True, True,False, True, True, False, False, True, False, False, True]).all() - assert ((b <= range(12)) != [False, True, True,False, True, True, + assert ((b <= range(12)) != [False, True, True,False, True, True, False, False, True, False, False, False]).all() - assert ((b > range(12)) == [False, True, True,False, True, True, + assert ((b > range(12)) == [False, True, True,False, True, True, False, False, True, False, False, False]).all() def test_flatiter_view(self): from _numpypy import arange a = arange(10).reshape(5, 2) - #no == yet. - # a[::2].flat == [0, 1, 4, 5, 8, 9] - isequal = True - for y,z in zip(a[::2].flat, [0, 1, 4, 5, 8, 9]): - if y != z: - isequal = False - assert isequal == True + assert (a[::2].flat == [0, 1, 4, 5, 8, 9]).all() def test_flatiter_transpose(self): from _numpypy import arange - a = arange(10).reshape(2,5).T + a = arange(10).reshape(2, 5).T b = a.flat assert (b[:5] == [0, 5, 1, 6, 2]).all() b.next() b.next() b.next() assert b.index == 3 - assert b.coords == (1,1) + assert b.coords == (1, 1) + + def test_flatiter_len(self): + from _numpypy import arange + + assert len(arange(10).flat) == 10 + assert len(arange(10).reshape(2, 5).flat) == 10 + assert len(arange(10)[:2].flat) == 2 + assert len((arange(2) + arange(2)).flat) == 2 def test_slice_copy(self): from _numpypy import zeros @@ -1740,10 +1762,11 @@ assert len(a) == 8 assert arange(False, True, True).dtype is dtype(int) -from pypy.module.micronumpy.appbridge import get_appbridge_cache class AppTestRepr(BaseNumpyAppTest): def setup_class(cls): + if option.runappdirect: + py.test.skip("Can't be run directly.") BaseNumpyAppTest.setup_class.im_func(cls) cache = get_appbridge_cache(cls.space) cls.old_array_repr = cache.w_array_repr @@ -1757,6 +1780,8 @@ assert str(array([1, 2, 3])) == 'array([1, 2, 3])' def teardown_class(cls): + if option.runappdirect: + return cache = get_appbridge_cache(cls.space) cache.w_array_repr = cls.old_array_repr cache.w_array_str = cls.old_array_str diff --git a/pypy/module/micronumpy/test/test_ufuncs.py b/pypy/module/micronumpy/test/test_ufuncs.py --- a/pypy/module/micronumpy/test/test_ufuncs.py +++ b/pypy/module/micronumpy/test/test_ufuncs.py @@ -347,8 +347,9 @@ raises((ValueError, TypeError), add.reduce, 1) def test_reduce_1d(self): - from _numpypy import add, maximum + from _numpypy import add, maximum, less + assert less.reduce([5, 4, 3, 2, 1]) assert add.reduce([1, 2, 3]) == 6 assert maximum.reduce([1]) == 1 assert maximum.reduce([1, 2, 3]) == 3 @@ -433,3 +434,14 @@ assert (isnan(array([0.2, float('inf'), float('nan')])) == [False, False, True]).all() assert (isinf(array([0.2, float('inf'), float('nan')])) == [False, True, False]).all() assert isinf(array([0.2])).dtype.kind == 'b' + + def test_logical_ops(self): + from _numpypy import logical_and, logical_or, logical_xor, logical_not + + assert (logical_and([True, False , True, True], [1, 1, 3, 0]) + == [True, False, True, False]).all() + assert 
(logical_or([True, False, True, False], [1, 2, 0, 0]) + == [True, True, True, False]).all() + assert (logical_xor([True, False, True, False], [1, 2, 0, 0]) + == [False, True, True, False]).all() + assert (logical_not([True, False]) == [False, True]).all() diff --git a/pypy/module/micronumpy/test/test_zjit.py b/pypy/module/micronumpy/test/test_zjit.py --- a/pypy/module/micronumpy/test/test_zjit.py +++ b/pypy/module/micronumpy/test/test_zjit.py @@ -84,7 +84,7 @@ def test_add(self): result = self.run("add") self.check_simple_loop({'getinteriorfield_raw': 2, 'float_add': 1, - 'setinteriorfield_raw': 1, 'int_add': 2, + 'setinteriorfield_raw': 1, 'int_add': 1, 'int_ge': 1, 'guard_false': 1, 'jump': 1, 'arraylen_gc': 1}) assert result == 3 + 3 @@ -99,7 +99,7 @@ result = self.run("float_add") assert result == 3 + 3 self.check_simple_loop({"getinteriorfield_raw": 1, "float_add": 1, - "setinteriorfield_raw": 1, "int_add": 2, + "setinteriorfield_raw": 1, "int_add": 1, "int_ge": 1, "guard_false": 1, "jump": 1, 'arraylen_gc': 1}) @@ -198,7 +198,8 @@ result = self.run("any") assert result == 1 self.check_simple_loop({"getinteriorfield_raw": 2, "float_add": 1, - "float_ne": 1, "int_add": 1, + "int_and": 1, "int_add": 1, + 'cast_float_to_int': 1, "int_ge": 1, "jump": 1, "guard_false": 2, 'arraylen_gc': 1}) @@ -239,7 +240,7 @@ assert result == -6 self.check_simple_loop({"getinteriorfield_raw": 2, "float_add": 1, "float_neg": 1, - "setinteriorfield_raw": 1, "int_add": 2, + "setinteriorfield_raw": 1, "int_add": 1, "int_ge": 1, "guard_false": 1, "jump": 1, 'arraylen_gc': 1}) @@ -321,7 +322,7 @@ # int_add might be 1 here if we try slightly harder with # reusing indexes or some optimization self.check_simple_loop({'float_add': 1, 'getinteriorfield_raw': 2, - 'guard_false': 1, 'int_add': 2, 'int_ge': 1, + 'guard_false': 1, 'int_add': 1, 'int_ge': 1, 'jump': 1, 'setinteriorfield_raw': 1, 'arraylen_gc': 1}) @@ -387,7 +388,7 @@ assert result == 4 self.check_trace_count(1) self.check_simple_loop({'getinteriorfield_raw': 2, 'float_add': 1, - 'setinteriorfield_raw': 1, 'int_add': 2, + 'setinteriorfield_raw': 1, 'int_add': 1, 'int_ge': 1, 'guard_false': 1, 'jump': 1, 'arraylen_gc': 1}) def define_flat_iter(): @@ -403,7 +404,7 @@ assert result == 6 self.check_trace_count(1) self.check_simple_loop({'getinteriorfield_raw': 2, 'float_add': 1, - 'setinteriorfield_raw': 1, 'int_add': 3, + 'setinteriorfield_raw': 1, 'int_add': 2, 'int_ge': 1, 'guard_false': 1, 'arraylen_gc': 1, 'jump': 1}) diff --git a/pypy/module/micronumpy/types.py b/pypy/module/micronumpy/types.py --- a/pypy/module/micronumpy/types.py +++ b/pypy/module/micronumpy/types.py @@ -181,6 +181,22 @@ def ge(self, v1, v2): return v1 >= v2 + @raw_binary_op + def logical_and(self, v1, v2): + return bool(v1) and bool(v2) + + @raw_binary_op + def logical_or(self, v1, v2): + return bool(v1) or bool(v2) + + @raw_unary_op + def logical_not(self, v): + return not bool(v) + + @raw_binary_op + def logical_xor(self, v1, v2): + return bool(v1) ^ bool(v2) + def bool(self, v): return bool(self.for_computation(self.unbox(v))) diff --git a/pypy/module/posix/interp_posix.py b/pypy/module/posix/interp_posix.py --- a/pypy/module/posix/interp_posix.py +++ b/pypy/module/posix/interp_posix.py @@ -37,7 +37,7 @@ if space.isinstance_w(w_obj, space.w_unicode): w_obj = space.call_method(w_obj, 'encode', getfilesystemencoding(space)) - return space.str_w(w_obj) + return space.str0_w(w_obj) class FileEncoder(object): def __init__(self, space, w_obj): @@ -56,7 +56,7 @@ self.w_obj = w_obj def 
as_bytes(self): - return self.space.str_w(self.w_obj) + return self.space.str0_w(self.w_obj) def as_unicode(self): space = self.space @@ -71,7 +71,7 @@ fname = FileEncoder(space, w_fname) return func(fname, *args) else: - fname = space.str_w(w_fname) + fname = space.str0_w(w_fname) return func(fname, *args) return dispatch @@ -369,7 +369,7 @@ space.wrap(times[3]), space.wrap(times[4])]) - at unwrap_spec(cmd=str) + at unwrap_spec(cmd='str0') def system(space, cmd): """Execute the command (a string) in a subshell.""" try: @@ -401,7 +401,7 @@ fullpath = rposix._getfullpathname(path) w_fullpath = space.wrap(fullpath) else: - path = space.str_w(w_path) + path = space.str0_w(w_path) fullpath = rposix._getfullpathname(path) w_fullpath = space.wrap(fullpath) except OSError, e: @@ -512,7 +512,7 @@ for key, value in os.environ.items(): space.setitem(w_env, space.wrap(key), space.wrap(value)) - at unwrap_spec(name=str, value=str) + at unwrap_spec(name='str0', value='str0') def putenv(space, name, value): """Change or add an environment variable.""" try: @@ -520,7 +520,7 @@ except OSError, e: raise wrap_oserror(space, e) - at unwrap_spec(name=str) + at unwrap_spec(name='str0') def unsetenv(space, name): """Delete an environment variable.""" try: @@ -548,7 +548,7 @@ for s in result ] else: - dirname = space.str_w(w_dirname) + dirname = space.str0_w(w_dirname) result = rposix.listdir(dirname) result_w = [space.wrap(s) for s in result] except OSError, e: @@ -635,7 +635,7 @@ import signal os.kill(os.getpid(), signal.SIGABRT) - at unwrap_spec(src=str, dst=str) + at unwrap_spec(src='str0', dst='str0') def link(space, src, dst): "Create a hard link to a file." try: @@ -650,7 +650,7 @@ except OSError, e: raise wrap_oserror(space, e) - at unwrap_spec(path=str) + at unwrap_spec(path='str0') def readlink(space, path): "Return a string representing the path to which the symbolic link points." 
try: @@ -765,7 +765,7 @@ w_keys = space.call_method(w_env, 'keys') for w_key in space.unpackiterable(w_keys): w_value = space.getitem(w_env, w_key) - env[space.str_w(w_key)] = space.str_w(w_value) + env[space.str0_w(w_key)] = space.str0_w(w_value) return env def execve(space, w_command, w_args, w_env): @@ -785,18 +785,18 @@ except OSError, e: raise wrap_oserror(space, e) - at unwrap_spec(mode=int, path=str) + at unwrap_spec(mode=int, path='str0') def spawnv(space, mode, path, w_args): - args = [space.str_w(w_arg) for w_arg in space.unpackiterable(w_args)] + args = [space.str0_w(w_arg) for w_arg in space.unpackiterable(w_args)] try: ret = os.spawnv(mode, path, args) except OSError, e: raise wrap_oserror(space, e) return space.wrap(ret) - at unwrap_spec(mode=int, path=str) + at unwrap_spec(mode=int, path='str0') def spawnve(space, mode, path, w_args, w_env): - args = [space.str_w(w_arg) for w_arg in space.unpackiterable(w_args)] + args = [space.str0_w(w_arg) for w_arg in space.unpackiterable(w_args)] env = _env2interp(space, w_env) try: ret = os.spawnve(mode, path, args, env) @@ -914,7 +914,7 @@ raise wrap_oserror(space, e) return space.w_None - at unwrap_spec(path=str) + at unwrap_spec(path='str0') def chroot(space, path): """ chroot(path) @@ -1103,7 +1103,7 @@ except OSError, e: raise wrap_oserror(space, e) - at unwrap_spec(path=str, uid=c_uid_t, gid=c_gid_t) + at unwrap_spec(path='str0', uid=c_uid_t, gid=c_gid_t) def chown(space, path, uid, gid): check_uid_range(space, uid) check_uid_range(space, gid) @@ -1113,7 +1113,7 @@ raise wrap_oserror(space, e, path) return space.w_None - at unwrap_spec(path=str, uid=c_uid_t, gid=c_gid_t) + at unwrap_spec(path='str0', uid=c_uid_t, gid=c_gid_t) def lchown(space, path, uid, gid): check_uid_range(space, uid) check_uid_range(space, gid) diff --git a/pypy/module/pypyjit/interp_resop.py b/pypy/module/pypyjit/interp_resop.py --- a/pypy/module/pypyjit/interp_resop.py +++ b/pypy/module/pypyjit/interp_resop.py @@ -127,6 +127,7 @@ l_w.append(DebugMergePoint(space, jit_hooks._cast_to_gcref(op), logops.repr_of_resop(op), jd_sd.jitdriver.name, + op.getarg(1).getint(), w_greenkey)) else: l_w.append(WrappedOp(jit_hooks._cast_to_gcref(op), ofs, @@ -163,14 +164,14 @@ llres = res.llbox return WrappedOp(jit_hooks.resop_new(num, args, llres), offset, repr) - at unwrap_spec(repr=str, jd_name=str) -def descr_new_dmp(space, w_tp, w_args, repr, jd_name, w_greenkey): + at unwrap_spec(repr=str, jd_name=str, call_depth=int) +def descr_new_dmp(space, w_tp, w_args, repr, jd_name, call_depth, w_greenkey): args = [space.interp_w(WrappedBox, w_arg).llbox for w_arg in space.listview(w_args)] num = rop.DEBUG_MERGE_POINT return DebugMergePoint(space, jit_hooks.resop_new(num, args, jit_hooks.emptyval()), - repr, jd_name, w_greenkey) + repr, jd_name, call_depth, w_greenkey) class WrappedOp(Wrappable): """ A class representing a single ResOperation, wrapped nicely @@ -205,10 +206,11 @@ jit_hooks.resop_setresult(self.op, box.llbox) class DebugMergePoint(WrappedOp): - def __init__(self, space, op, repr_of_resop, jd_name, w_greenkey): + def __init__(self, space, op, repr_of_resop, jd_name, call_depth, w_greenkey): WrappedOp.__init__(self, op, -1, repr_of_resop) + self.jd_name = jd_name + self.call_depth = call_depth self.w_greenkey = w_greenkey - self.jd_name = jd_name def get_pycode(self, space): if self.jd_name == pypyjitdriver.name: @@ -243,6 +245,7 @@ greenkey = interp_attrproperty_w("w_greenkey", cls=DebugMergePoint), pycode = GetSetProperty(DebugMergePoint.get_pycode), bytecode_no = 
GetSetProperty(DebugMergePoint.get_bytecode_no), + call_depth = interp_attrproperty("call_depth", cls=DebugMergePoint), jitdriver_name = GetSetProperty(DebugMergePoint.get_jitdriver_name), ) DebugMergePoint.acceptable_as_base_class = False diff --git a/pypy/module/pypyjit/test/test_jit_hook.py b/pypy/module/pypyjit/test/test_jit_hook.py --- a/pypy/module/pypyjit/test/test_jit_hook.py +++ b/pypy/module/pypyjit/test/test_jit_hook.py @@ -122,7 +122,8 @@ assert isinstance(dmp, pypyjit.DebugMergePoint) assert dmp.pycode is self.f.func_code assert dmp.greenkey == (self.f.func_code, 0, False) - #assert int_add.name == 'int_add' + assert dmp.call_depth == 0 + assert int_add.name == 'int_add' assert int_add.num == self.int_add_num self.on_compile_bridge() assert len(all) == 2 @@ -223,11 +224,13 @@ def f(): pass - op = DebugMergePoint([Box(0)], 'repr', 'pypyjit', (f.func_code, 0, 0)) + op = DebugMergePoint([Box(0)], 'repr', 'pypyjit', 2, (f.func_code, 0, 0)) assert op.bytecode_no == 0 assert op.pycode is f.func_code assert repr(op) == 'repr' assert op.jitdriver_name == 'pypyjit' assert op.num == self.dmp_num - op = DebugMergePoint([Box(0)], 'repr', 'notmain', ('str',)) + assert op.call_depth == 2 + op = DebugMergePoint([Box(0)], 'repr', 'notmain', 5, ('str',)) raises(AttributeError, 'op.pycode') + assert op.call_depth == 5 diff --git a/pypy/module/pypyjit/test_pypy_c/test_call.py b/pypy/module/pypyjit/test_pypy_c/test_call.py --- a/pypy/module/pypyjit/test_pypy_c/test_call.py +++ b/pypy/module/pypyjit/test_pypy_c/test_call.py @@ -27,6 +27,7 @@ ... p53 = call_assembler(..., descr=...) guard_not_forced(descr=...) + keepalive(...) guard_no_exception(descr=...) ... """) diff --git a/pypy/module/sys/state.py b/pypy/module/sys/state.py --- a/pypy/module/sys/state.py +++ b/pypy/module/sys/state.py @@ -74,7 +74,7 @@ # return importlist - at unwrap_spec(srcdir=str) + at unwrap_spec(srcdir='str0') def pypy_initial_path(space, srcdir): try: path = getinitialpath(get(space), srcdir) diff --git a/pypy/module/zipimport/interp_zipimport.py b/pypy/module/zipimport/interp_zipimport.py --- a/pypy/module/zipimport/interp_zipimport.py +++ b/pypy/module/zipimport/interp_zipimport.py @@ -342,7 +342,7 @@ space = self.space return space.wrap(self.filename) - at unwrap_spec(name=str) + at unwrap_spec(name='str0') def descr_new_zipimporter(space, w_type, name): w = space.wrap ok = False diff --git a/pypy/rlib/rstring.py b/pypy/rlib/rstring.py --- a/pypy/rlib/rstring.py +++ b/pypy/rlib/rstring.py @@ -205,3 +205,45 @@ assert p.const is None return SomeUnicodeBuilder(can_be_None=True) +#___________________________________________________________________ +# Support functions for SomeString.no_nul + +def assert_str0(fname): + assert '\x00' not in fname, "NUL byte in string" + return fname + +class Entry(ExtRegistryEntry): + _about_ = assert_str0 + + def compute_result_annotation(self, s_obj): + if s_None.contains(s_obj): + return s_obj + assert isinstance(s_obj, (SomeString, SomeUnicodeString)) + if s_obj.no_nul: + return s_obj + new_s_obj = SomeObject.__new__(s_obj.__class__) + new_s_obj.__dict__ = s_obj.__dict__.copy() + new_s_obj.no_nul = True + return new_s_obj + + def specialize_call(self, hop): + hop.exception_cannot_occur() + return hop.inputarg(hop.args_r[0], arg=0) + +def check_str0(fname): + """A 'probe' to trigger a failure at translation time, if the + string was not proved to not contain NUL characters.""" + assert '\x00' not in fname, "NUL byte in string" + +class Entry(ExtRegistryEntry): + _about_ = check_str0 + + 
def compute_result_annotation(self, s_obj): + if not isinstance(s_obj, (SomeString, SomeUnicodeString)): + return s_obj + if not s_obj.no_nul: + raise ValueError("Value is not no_nul") + + def specialize_call(self, hop): + pass + diff --git a/pypy/rlib/test/test_rmarshal.py b/pypy/rlib/test/test_rmarshal.py --- a/pypy/rlib/test/test_rmarshal.py +++ b/pypy/rlib/test/test_rmarshal.py @@ -169,7 +169,7 @@ assert st2.st_mode == st.st_mode assert st2[9] == st[9] return buf - fn = compile(f, [str]) + fn = compile(f, [annmodel.s_Str0]) res = fn('.') st = os.stat('.') sttuple = marshal.loads(res) diff --git a/pypy/rpython/extfunc.py b/pypy/rpython/extfunc.py --- a/pypy/rpython/extfunc.py +++ b/pypy/rpython/extfunc.py @@ -2,7 +2,7 @@ from pypy.rpython.extregistry import ExtRegistryEntry from pypy.rpython.lltypesystem.lltype import typeOf from pypy.objspace.flow.model import Constant -from pypy.annotation.model import unionof +from pypy.annotation import model as annmodel from pypy.annotation.signature import annotation import py, sys @@ -138,7 +138,6 @@ # we defer a bit annotation here def compute_result_annotation(self): - from pypy.annotation import model as annmodel return annmodel.SomeGenericCallable([annotation(i, self.bookkeeper) for i in self.instance.args], annotation(self.instance.result, self.bookkeeper)) @@ -152,8 +151,9 @@ signature_args = [annotation(arg, None) for arg in args] assert len(args_s) == len(signature_args),\ "Argument number mismatch" + for i, expected in enumerate(signature_args): - arg = unionof(args_s[i], expected) + arg = annmodel.unionof(args_s[i], expected) if not expected.contains(arg): name = getattr(self, 'name', None) if not name: diff --git a/pypy/rpython/extfuncregistry.py b/pypy/rpython/extfuncregistry.py --- a/pypy/rpython/extfuncregistry.py +++ b/pypy/rpython/extfuncregistry.py @@ -85,7 +85,8 @@ # llinterpreter path_functions = [ - ('join', [str, str], str), + ('join', [ll_os.str0, ll_os.str0], ll_os.str0), + ('dirname', [ll_os.str0], ll_os.str0), ] for name, args, res in path_functions: diff --git a/pypy/rpython/lltypesystem/ll2ctypes.py b/pypy/rpython/lltypesystem/ll2ctypes.py --- a/pypy/rpython/lltypesystem/ll2ctypes.py +++ b/pypy/rpython/lltypesystem/ll2ctypes.py @@ -1036,13 +1036,8 @@ libraries = eci.testonly_libraries + eci.libraries + eci.frameworks FUNCTYPE = lltype.typeOf(funcptr).TO - if not libraries: - cfunc = get_on_lib(standard_c_lib, funcname) - # XXX magic: on Windows try to load the function from 'kernel32' too - if cfunc is None and hasattr(ctypes, 'windll'): - cfunc = get_on_lib(ctypes.windll.kernel32, funcname) - else: - cfunc = None + cfunc = None + if libraries: not_found = [] for libname in libraries: libpath = None @@ -1075,6 +1070,12 @@ not_found.append(libname) if cfunc is None: + cfunc = get_on_lib(standard_c_lib, funcname) + # XXX magic: on Windows try to load the function from 'kernel32' too + if cfunc is None and hasattr(ctypes, 'windll'): + cfunc = get_on_lib(ctypes.windll.kernel32, funcname) + + if cfunc is None: # function name not found in any of the libraries if not libraries: place = 'the standard C library (missing libraries=...?)' diff --git a/pypy/rpython/lltypesystem/rffi.py b/pypy/rpython/lltypesystem/rffi.py --- a/pypy/rpython/lltypesystem/rffi.py +++ b/pypy/rpython/lltypesystem/rffi.py @@ -15,7 +15,7 @@ from pypy.translator.tool.cbuild import ExternalCompilationInfo from pypy.rpython.annlowlevel import llhelper from pypy.rlib.objectmodel import we_are_translated -from pypy.rlib.rstring import StringBuilder, 
UnicodeBuilder +from pypy.rlib.rstring import StringBuilder, UnicodeBuilder, assert_str0 from pypy.rlib import jit from pypy.rpython.lltypesystem import llmemory import os, sys @@ -699,7 +699,7 @@ while cp[i] != lastchar: b.append(cp[i]) i += 1 - return b.build() + return assert_str0(b.build()) # str -> char* # Can't inline this because of the raw address manipulation. @@ -811,7 +811,7 @@ while i < maxlen and cp[i] != lastchar: b.append(cp[i]) i += 1 - return b.build() + return assert_str0(b.build()) # char* and size -> str (which can contain null bytes) def charpsize2str(cp, size): @@ -849,6 +849,7 @@ array[i] = str2charp(l[i]) array[len(l)] = lltype.nullptr(CCHARP.TO) return array +liststr2charpp._annenforceargs_ = [[annmodel.s_Str0]] # List of strings def free_charpp(ref): """ frees list of char**, NULL terminated diff --git a/pypy/rpython/module/ll_os.py b/pypy/rpython/module/ll_os.py --- a/pypy/rpython/module/ll_os.py +++ b/pypy/rpython/module/ll_os.py @@ -31,6 +31,10 @@ from pypy.rlib import rgc from pypy.rlib.objectmodel import specialize +str0 = SomeString(no_nul=True) +unicode0 = SomeUnicodeString(no_nul=True) + + def monkeypatch_rposix(posixfunc, unicodefunc, signature): func_name = posixfunc.__name__ @@ -44,7 +48,10 @@ args = ', '.join(arglist) transformed_args = ', '.join(transformed_arglist) - main_arg = 'arg%d' % (signature.index(unicode),) + try: + main_arg = 'arg%d' % (signature.index(unicode0),) + except ValueError: + main_arg = 'arg%d' % (signature.index(unicode),) source = py.code.Source(""" def %(func_name)s(%(args)s): @@ -68,6 +75,7 @@ class StringTraits: str = str + str0 = str0 CHAR = rffi.CHAR CCHARP = rffi.CCHARP charp2str = staticmethod(rffi.charp2str) @@ -85,6 +93,7 @@ class UnicodeTraits: str = unicode + str0 = unicode0 CHAR = rffi.WCHAR_T CCHARP = rffi.CWCHARP charp2str = staticmethod(rffi.wcharp2unicode) @@ -301,7 +310,7 @@ rffi.free_charpp(l_args) raise OSError(rposix.get_errno(), "execv failed") - return extdef([str, [str]], s_ImpossibleValue, llimpl=execv_llimpl, + return extdef([str0, [str0]], s_ImpossibleValue, llimpl=execv_llimpl, export_name="ll_os.ll_os_execv") @@ -319,7 +328,8 @@ # appropriate envstrs = [] for item in env.iteritems(): - envstrs.append("%s=%s" % item) + envstr = "%s=%s" % item + envstrs.append(envstr) l_args = rffi.liststr2charpp(args) l_env = rffi.liststr2charpp(envstrs) @@ -332,7 +342,7 @@ raise OSError(rposix.get_errno(), "execve failed") return extdef( - [str, [str], {str: str}], + [str0, [str0], {str0: str0}], s_ImpossibleValue, llimpl=execve_llimpl, export_name="ll_os.ll_os_execve") @@ -353,7 +363,7 @@ raise OSError(rposix.get_errno(), "os_spawnv failed") return rffi.cast(lltype.Signed, childpid) - return extdef([int, str, [str]], int, llimpl=spawnv_llimpl, + return extdef([int, str0, [str0]], int, llimpl=spawnv_llimpl, export_name="ll_os.ll_os_spawnv") @registering_if(os, 'spawnve') @@ -378,7 +388,7 @@ raise OSError(rposix.get_errno(), "os_spawnve failed") return rffi.cast(lltype.Signed, childpid) - return extdef([int, str, [str], {str: str}], int, + return extdef([int, str0, [str0], {str0: str0}], int, llimpl=spawnve_llimpl, export_name="ll_os.ll_os_spawnve") @@ -517,7 +527,7 @@ else: raise Exception("os.utime() arg 2 must be None or a tuple of " "2 floats, got %s" % (s_times,)) - os_utime_normalize_args._default_signature_ = [traits.str, None] + os_utime_normalize_args._default_signature_ = [traits.str0, None] return extdef(os_utime_normalize_args, s_None, "ll_os.ll_os_utime", @@ -612,7 +622,7 @@ if result == -1: raise 
OSError(rposix.get_errno(), "os_chroot failed") - return extdef([str], None, export_name="ll_os.ll_os_chroot", + return extdef([str0], None, export_name="ll_os.ll_os_chroot", llimpl=chroot_llimpl) @registering_if(os, 'uname') @@ -816,7 +826,7 @@ def os_open_oofakeimpl(path, flags, mode): return os.open(OOSupport.from_rstr(path), flags, mode) - return extdef([traits.str, int, int], int, traits.ll_os_name('open'), + return extdef([traits.str0, int, int], int, traits.ll_os_name('open'), llimpl=os_open_llimpl, oofakeimpl=os_open_oofakeimpl) @registering_if(os, 'getloadavg') @@ -1050,7 +1060,7 @@ def os_access_oofakeimpl(path, mode): return os.access(OOSupport.from_rstr(path), mode) - return extdef([traits.str, int], s_Bool, llimpl=access_llimpl, + return extdef([traits.str0, int], s_Bool, llimpl=access_llimpl, export_name=traits.ll_os_name("access"), oofakeimpl=os_access_oofakeimpl) @@ -1062,8 +1072,8 @@ from pypy.rpython.module.ll_win32file import make_getfullpathname_impl getfullpathname_llimpl = make_getfullpathname_impl(traits) - return extdef([traits.str], # a single argument which is a str - traits.str, # returns a string + return extdef([traits.str0], # a single argument which is a str + traits.str0, # returns a string traits.ll_os_name('_getfullpathname'), llimpl=getfullpathname_llimpl) @@ -1174,8 +1184,8 @@ raise OSError(error, "os_readdir failed") return result - return extdef([traits.str], # a single argument which is a str - [traits.str], # returns a list of strings + return extdef([traits.str0], # a single argument which is a str + [traits.str0], # returns a list of strings traits.ll_os_name('listdir'), llimpl=os_listdir_llimpl) @@ -1241,7 +1251,7 @@ if res == -1: raise OSError(rposix.get_errno(), "os_chown failed") - return extdef([str, int, int], None, "ll_os.ll_os_chown", + return extdef([str0, int, int], None, "ll_os.ll_os_chown", llimpl=os_chown_llimpl) @registering_if(os, 'lchown') @@ -1254,7 +1264,7 @@ if res == -1: raise OSError(rposix.get_errno(), "os_lchown failed") - return extdef([str, int, int], None, "ll_os.ll_os_lchown", + return extdef([str0, int, int], None, "ll_os.ll_os_lchown", llimpl=os_lchown_llimpl) @registering_if(os, 'readlink') @@ -1283,12 +1293,11 @@ lltype.free(buf, flavor='raw') bufsize *= 4 # convert the result to a string - l = [buf[i] for i in range(res)] - result = ''.join(l) + result = rffi.charp2strn(buf, res) lltype.free(buf, flavor='raw') return result - return extdef([str], str, + return extdef([str0], str0, "ll_os.ll_os_readlink", llimpl=os_readlink_llimpl) @@ -1361,7 +1370,7 @@ res = os_system(command) return rffi.cast(lltype.Signed, res) - return extdef([str], int, llimpl=system_llimpl, + return extdef([str0], int, llimpl=system_llimpl, export_name="ll_os.ll_os_system") @registering_str_unicode(os.unlink) @@ -1383,7 +1392,7 @@ if not win32traits.DeleteFile(path): raise rwin32.lastWindowsError() - return extdef([traits.str], s_None, llimpl=unlink_llimpl, + return extdef([traits.str0], s_None, llimpl=unlink_llimpl, export_name=traits.ll_os_name('unlink')) @registering_str_unicode(os.chdir) @@ -1401,7 +1410,7 @@ from pypy.rpython.module.ll_win32file import make_chdir_impl os_chdir_llimpl = make_chdir_impl(traits) - return extdef([traits.str], s_None, llimpl=os_chdir_llimpl, + return extdef([traits.str0], s_None, llimpl=os_chdir_llimpl, export_name=traits.ll_os_name('chdir')) @registering_str_unicode(os.mkdir) @@ -1424,7 +1433,7 @@ if res < 0: raise OSError(rposix.get_errno(), "os_mkdir failed") - return extdef([traits.str, int], s_None, 
llimpl=os_mkdir_llimpl, + return extdef([traits.str0, int], s_None, llimpl=os_mkdir_llimpl, export_name=traits.ll_os_name('mkdir')) @registering_str_unicode(os.rmdir) @@ -1437,7 +1446,7 @@ if res < 0: raise OSError(rposix.get_errno(), "os_rmdir failed") - return extdef([traits.str], s_None, llimpl=rmdir_llimpl, + return extdef([traits.str0], s_None, llimpl=rmdir_llimpl, export_name=traits.ll_os_name('rmdir')) @registering_str_unicode(os.chmod) @@ -1454,7 +1463,7 @@ from pypy.rpython.module.ll_win32file import make_chmod_impl chmod_llimpl = make_chmod_impl(traits) - return extdef([traits.str, int], s_None, llimpl=chmod_llimpl, + return extdef([traits.str0, int], s_None, llimpl=chmod_llimpl, export_name=traits.ll_os_name('chmod')) @registering_str_unicode(os.rename) @@ -1476,7 +1485,7 @@ if not win32traits.MoveFile(oldpath, newpath): raise rwin32.lastWindowsError() - return extdef([traits.str, traits.str], s_None, llimpl=rename_llimpl, + return extdef([traits.str0, traits.str0], s_None, llimpl=rename_llimpl, export_name=traits.ll_os_name('rename')) @registering_str_unicode(getattr(os, 'mkfifo', None)) @@ -1489,7 +1498,7 @@ if res < 0: raise OSError(rposix.get_errno(), "os_mkfifo failed") - return extdef([traits.str, int], s_None, llimpl=mkfifo_llimpl, + return extdef([traits.str0, int], s_None, llimpl=mkfifo_llimpl, export_name=traits.ll_os_name('mkfifo')) @registering_str_unicode(getattr(os, 'mknod', None)) @@ -1503,7 +1512,7 @@ if res < 0: raise OSError(rposix.get_errno(), "os_mknod failed") - return extdef([traits.str, int, int], s_None, llimpl=mknod_llimpl, + return extdef([traits.str0, int, int], s_None, llimpl=mknod_llimpl, export_name=traits.ll_os_name('mknod')) @registering(os.umask) @@ -1555,7 +1564,7 @@ if res < 0: raise OSError(rposix.get_errno(), "os_link failed") - return extdef([str, str], s_None, llimpl=link_llimpl, + return extdef([str0, str0], s_None, llimpl=link_llimpl, export_name="ll_os.ll_os_link") @registering_if(os, 'symlink') @@ -1568,7 +1577,7 @@ if res < 0: raise OSError(rposix.get_errno(), "os_symlink failed") - return extdef([str, str], s_None, llimpl=symlink_llimpl, + return extdef([str0, str0], s_None, llimpl=symlink_llimpl, export_name="ll_os.ll_os_symlink") @registering_if(os, 'fork') diff --git a/pypy/rpython/module/ll_os_environ.py b/pypy/rpython/module/ll_os_environ.py --- a/pypy/rpython/module/ll_os_environ.py +++ b/pypy/rpython/module/ll_os_environ.py @@ -3,8 +3,11 @@ from pypy.rpython.controllerentry import Controller from pypy.rpython.extfunc import register_external from pypy.rpython.lltypesystem import rffi, lltype +from pypy.rpython.module import ll_os from pypy.rlib import rposix +str0 = ll_os.str0 + # ____________________________________________________________ # # Annotation support to control access to 'os.environ' in the RPython program @@ -64,7 +67,7 @@ rffi.free_charp(l_name) return result -register_external(r_getenv, [str], annmodel.SomeString(can_be_None=True), +register_external(r_getenv, [str0], annmodel.SomeString(can_be_None=True), export_name='ll_os.ll_os_getenv', llimpl=getenv_llimpl) @@ -93,7 +96,7 @@ if l_oldstring: rffi.free_charp(l_oldstring) -register_external(r_putenv, [str, str], annmodel.s_None, +register_external(r_putenv, [str0, str0], annmodel.s_None, export_name='ll_os.ll_os_putenv', llimpl=putenv_llimpl) @@ -128,7 +131,7 @@ del envkeepalive.byname[name] rffi.free_charp(l_oldstring) - register_external(r_unsetenv, [str], annmodel.s_None, + register_external(r_unsetenv, [str0], annmodel.s_None, 
export_name='ll_os.ll_os_unsetenv', llimpl=unsetenv_llimpl) @@ -172,7 +175,7 @@ i += 1 return result -register_external(r_envkeys, [], [str], # returns a list of strings +register_external(r_envkeys, [], [str0], # returns a list of strings export_name='ll_os.ll_os_envkeys', llimpl=envkeys_llimpl) @@ -193,6 +196,6 @@ i += 1 return result -register_external(r_envitems, [], [(str, str)], +register_external(r_envitems, [], [(str0, str0)], export_name='ll_os.ll_os_envitems', llimpl=envitems_llimpl) diff --git a/pypy/rpython/module/ll_os_stat.py b/pypy/rpython/module/ll_os_stat.py --- a/pypy/rpython/module/ll_os_stat.py +++ b/pypy/rpython/module/ll_os_stat.py @@ -236,7 +236,7 @@ def register_stat_variant(name, traits): if name != 'fstat': arg_is_path = True - s_arg = traits.str + s_arg = traits.str0 ARG1 = traits.CCHARP else: arg_is_path = False @@ -251,8 +251,6 @@ [s_arg], s_StatResult, traits.ll_os_name(name), llimpl=posix_stat_llimpl) - assert traits.str is str - if sys.platform.startswith('linux'): # because we always use _FILE_OFFSET_BITS 64 - this helps things work that are not a c compiler _functions = {'stat': 'stat64', @@ -283,7 +281,7 @@ @func_renamer('os_%s_fake' % (name,)) def posix_fakeimpl(arg): - if s_arg == str: + if s_arg == traits.str0: arg = hlstr(arg) st = getattr(os, name)(arg) fields = [TYPE for fieldname, TYPE in STAT_FIELDS] diff --git a/pypy/rpython/ootypesystem/test/test_ooann.py b/pypy/rpython/ootypesystem/test/test_ooann.py --- a/pypy/rpython/ootypesystem/test/test_ooann.py +++ b/pypy/rpython/ootypesystem/test/test_ooann.py @@ -231,7 +231,7 @@ a = RPythonAnnotator() s = a.build_types(oof, [bool]) - assert s == annmodel.SomeString(can_be_None=True) + assert annmodel.SomeString(can_be_None=True).contains(s) def test_oostring(): def oof(): diff --git a/pypy/rpython/test/test_extfunc.py b/pypy/rpython/test/test_extfunc.py --- a/pypy/rpython/test/test_extfunc.py +++ b/pypy/rpython/test/test_extfunc.py @@ -167,3 +167,43 @@ a = RPythonAnnotator(policy=policy) s = a.build_types(f, []) assert isinstance(s, annmodel.SomeString) + + def test_str0(self): + str0 = annmodel.SomeString(no_nul=True) + def os_open(s): + pass + register_external(os_open, [str0], None) + def f(s): + return os_open(s) + policy = AnnotatorPolicy() + policy.allow_someobjects = False + a = RPythonAnnotator(policy=policy) + a.build_types(f, [str]) # Does not raise + assert a.translator.config.translation.check_str_without_nul == False + # Now enable the str0 check, and try again with a similar function + a.translator.config.translation.check_str_without_nul=True + def g(s): + return os_open(s) + raises(Exception, a.build_types, g, [str]) + a.build_types(g, [str0]) # Does not raise + + def test_list_of_str0(self): + str0 = annmodel.SomeString(no_nul=True) + def os_execve(l): + pass + register_external(os_execve, [[str0]], None) + def f(l): + return os_execve(l) + policy = AnnotatorPolicy() + policy.allow_someobjects = False + a = RPythonAnnotator(policy=policy) + a.build_types(f, [[str]]) # Does not raise + assert a.translator.config.translation.check_str_without_nul == False + # Now enable the str0 check, and try again with a similar function + a.translator.config.translation.check_str_without_nul=True + def g(l): + return os_execve(l) + raises(Exception, a.build_types, g, [[str]]) + a.build_types(g, [[str0]]) # Does not raise + + diff --git a/pypy/translator/c/gc.py b/pypy/translator/c/gc.py --- a/pypy/translator/c/gc.py +++ b/pypy/translator/c/gc.py @@ -11,7 +11,6 @@ from pypy.translator.tool.cbuild import 
ExternalCompilationInfo class BasicGcPolicy(object): - stores_hash_at_the_end = False def __init__(self, db, thread_enabled=False): self.db = db @@ -308,7 +307,6 @@ class FrameworkGcPolicy(BasicGcPolicy): transformerclass = framework.FrameworkGCTransformer - stores_hash_at_the_end = True def struct_setup(self, structdefnode, rtti): if rtti is not None and hasattr(rtti._obj, 'destructor_funcptr'): diff --git a/pypy/translator/c/test/test_extfunc.py b/pypy/translator/c/test/test_extfunc.py --- a/pypy/translator/c/test/test_extfunc.py +++ b/pypy/translator/c/test/test_extfunc.py @@ -3,6 +3,7 @@ import os, time, sys from pypy.tool.udir import udir from pypy.rlib.rarithmetic import r_longlong +from pypy.annotation import model as annmodel from pypy.translator.c.test.test_genc import compile from pypy.translator.c.test.test_standalone import StandaloneTests posix = __import__(os.name) @@ -145,7 +146,7 @@ filename = str(py.path.local(__file__)) def call_access(path, mode): return os.access(path, mode) - f = compile(call_access, [str, int]) + f = compile(call_access, [annmodel.s_Str0, int]) for mode in os.R_OK, os.W_OK, os.X_OK, (os.R_OK | os.W_OK | os.X_OK): assert f(filename, mode) == os.access(filename, mode) @@ -225,7 +226,7 @@ def test_system(): def does_stuff(cmd): return os.system(cmd) - f1 = compile(does_stuff, [str]) + f1 = compile(does_stuff, [annmodel.s_Str0]) res = f1("echo hello") assert res == 0 @@ -311,7 +312,7 @@ def test_chdir(): def does_stuff(path): os.chdir(path) - f1 = compile(does_stuff, [str]) + f1 = compile(does_stuff, [annmodel.s_Str0]) curdir = os.getcwd() try: os.chdir('..') @@ -325,7 +326,7 @@ os.rmdir(path) else: os.mkdir(path, 0777) - f1 = compile(does_stuff, [str, bool]) + f1 = compile(does_stuff, [annmodel.s_Str0, bool]) dirname = str(udir.join('test_mkdir_rmdir')) f1(dirname, False) assert os.path.exists(dirname) and os.path.isdir(dirname) @@ -628,7 +629,7 @@ return os.environ[s] except KeyError: return '--missing--' - func = compile(fn, [str]) + func = compile(fn, [annmodel.s_Str0]) os.environ.setdefault('USER', 'UNNAMED_USER') result = func('USER') assert result == os.environ['USER'] @@ -640,7 +641,7 @@ res = os.environ.get(s) if res is None: res = '--missing--' return res - func = compile(fn, [str]) + func = compile(fn, [annmodel.s_Str0]) os.environ.setdefault('USER', 'UNNAMED_USER') result = func('USER') assert result == os.environ['USER'] @@ -654,7 +655,7 @@ os.environ[s] = t3 os.environ[s] = t4 os.environ[s] = t5 - func = compile(fn, [str, str, str, str, str, str]) + func = compile(fn, [annmodel.s_Str0] * 6) func('PYPY_TEST_DICTLIKE_ENVIRON', 'a', 'b', 'c', 'FOOBAR', '42', expected_extra_mallocs = (2, 3, 4)) # at least two, less than 5 assert _real_getenv('PYPY_TEST_DICTLIKE_ENVIRON') == '42' @@ -678,7 +679,7 @@ else: raise Exception("should have raised!") # os.environ[s5] stays - func = compile(fn, [str, str, str, str, str]) + func = compile(fn, [annmodel.s_Str0] * 5) if hasattr(__import__(os.name), 'unsetenv'): expected_extra_mallocs = range(2, 10) # at least 2, less than 10: memory for s1, s2, s3, s4 should be freed @@ -743,7 +744,7 @@ raise AssertionError("should have failed!") result = os.listdir(s) return '/'.join(result) - func = compile(mylistdir, [str]) + func = compile(mylistdir, [annmodel.s_Str0]) for testdir in [str(udir), os.curdir]: result = func(testdir) result = result.split('/') diff --git a/pypy/translator/cli/test/runtest.py b/pypy/translator/cli/test/runtest.py --- a/pypy/translator/cli/test/runtest.py +++ 
b/pypy/translator/cli/test/runtest.py @@ -276,7 +276,7 @@ def get_annotation(x): if isinstance(x, basestring) and len(x) > 1: - return SomeString() + return SomeString(no_nul='\x00' not in x) else: return lltype_to_annotation(typeOf(x)) diff --git a/pypy/translator/driver.py b/pypy/translator/driver.py --- a/pypy/translator/driver.py +++ b/pypy/translator/driver.py @@ -184,6 +184,7 @@ self.standalone = standalone if standalone: + # the 'argv' parameter inputtypes = [s_list_of_strings] self.inputtypes = inputtypes diff --git a/pypy/translator/goal/nanos.py b/pypy/translator/goal/nanos.py --- a/pypy/translator/goal/nanos.py +++ b/pypy/translator/goal/nanos.py @@ -266,7 +266,7 @@ raise NotImplementedError("os.name == %r" % (os.name,)) def getenv(space, w_name): - name = space.str_w(w_name) + name = space.str0_w(w_name) return space.wrap(os.environ.get(name)) getenv_w = interp2app(getenv) diff --git a/pypy/translator/goal/targetpypystandalone.py b/pypy/translator/goal/targetpypystandalone.py --- a/pypy/translator/goal/targetpypystandalone.py +++ b/pypy/translator/goal/targetpypystandalone.py @@ -159,6 +159,8 @@ ## if config.translation.type_system == 'ootype': ## config.objspace.usemodules.suggest(rbench=True) + config.translation.suggest(check_str_without_nul=True) + if config.translation.thread: config.objspace.usemodules.thread = True elif config.objspace.usemodules.thread: From noreply at buildbot.pypy.org Tue Feb 7 15:10:34 2012 From: noreply at buildbot.pypy.org (arigo) Date: Tue, 7 Feb 2012 15:10:34 +0100 (CET) Subject: [pypy-commit] pypy default: Look inside RPython generators too. Message-ID: <20120207141034.5003E7107FA@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r52184:a1be520f19fd Date: 2012-02-07 15:10 +0100 http://bitbucket.org/pypy/pypy/changeset/a1be520f19fd/ Log: Look inside RPython generators too. diff --git a/pypy/jit/codewriter/flatten.py b/pypy/jit/codewriter/flatten.py --- a/pypy/jit/codewriter/flatten.py +++ b/pypy/jit/codewriter/flatten.py @@ -162,7 +162,9 @@ if len(block.exits) == 1: # A single link, fall-through link = block.exits[0] - assert link.exitcase is None + assert link.exitcase in (None, False, True) + # the cases False or True should not really occur, but can show + # up in the manually hacked graphs for generators... 
self.make_link(link) # elif block.exitswitch is c_last_exception: diff --git a/pypy/jit/metainterp/test/test_ajit.py b/pypy/jit/metainterp/test/test_ajit.py --- a/pypy/jit/metainterp/test/test_ajit.py +++ b/pypy/jit/metainterp/test/test_ajit.py @@ -3706,6 +3706,18 @@ # here it works again self.check_operations_history(guard_class=0, record_known_class=1) + def test_generator(self): + def g(n): + yield n+1 + yield n+2 + yield n+3 + def f(n): + gen = g(n) + return gen.next() * gen.next() * gen.next() + res = self.interp_operations(f, [10]) + assert res == 11 * 12 * 13 + self.check_operations_history(int_add=3, int_mul=2) + class TestLLtype(BaseLLtypeTests, LLJitMixin): def test_tagged(self): diff --git a/pypy/translator/generator.py b/pypy/translator/generator.py --- a/pypy/translator/generator.py +++ b/pypy/translator/generator.py @@ -68,6 +68,7 @@ (next_entry, return_value) = func(entry) self.current = next_entry return return_value + next._jit_look_inside_ = True GeneratorIterator.next = next return func # for debugging From noreply at buildbot.pypy.org Tue Feb 7 15:17:42 2012 From: noreply at buildbot.pypy.org (arigo) Date: Tue, 7 Feb 2012 15:17:42 +0100 (CET) Subject: [pypy-commit] pypy default: Remove this very old condition. It turns out that nowadays, running Message-ID: <20120207141742.5DCBE7107FA@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r52185:5d4620953431 Date: 2012-02-07 15:17 +0100 http://bitbucket.org/pypy/pypy/changeset/5d4620953431/ Log: Remove this very old condition. It turns out that nowadays, running a pypy, it triggers only for getenv() in pypy.translator.goal.nanos which is better left out; but it is otherwise pointless. diff --git a/pypy/jit/codewriter/policy.py b/pypy/jit/codewriter/policy.py --- a/pypy/jit/codewriter/policy.py +++ b/pypy/jit/codewriter/policy.py @@ -48,7 +48,7 @@ mod = func.__module__ or '?' if mod.startswith('pypy.rpython.module.'): return True - if mod.startswith('pypy.translator.'): # XXX wtf? + if mod == 'pypy.translator.goal.nanos': # more helpers return True return False diff --git a/pypy/translator/generator.py b/pypy/translator/generator.py --- a/pypy/translator/generator.py +++ b/pypy/translator/generator.py @@ -68,7 +68,6 @@ (next_entry, return_value) = func(entry) self.current = next_entry return return_value - next._jit_look_inside_ = True GeneratorIterator.next = next return func # for debugging From noreply at buildbot.pypy.org Tue Feb 7 16:53:10 2012 From: noreply at buildbot.pypy.org (arigo) Date: Tue, 7 Feb 2012 16:53:10 +0100 (CET) Subject: [pypy-commit] pypy stm-gc: In-progress. Message-ID: <20120207155310.D741E7107FA@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-gc Changeset: r52186:99247048a1de Date: 2012-02-07 16:11 +0100 http://bitbucket.org/pypy/pypy/changeset/99247048a1de/ Log: In-progress. 
diff --git a/pypy/config/translationoption.py b/pypy/config/translationoption.py --- a/pypy/config/translationoption.py +++ b/pypy/config/translationoption.py @@ -75,7 +75,7 @@ "markcompact": [("translation.gctransformer", "framework")], "minimark": [("translation.gctransformer", "framework")], "stmgc": [("translation.gctransformer", "framework"), - ("translation.gcrootfinder", "none")], # XXX + ("translation.gcrootfinder", "stm")], }, cmdline="--gc"), ChoiceOption("gctransformer", "GC transformer that is used - internal", @@ -93,7 +93,7 @@ default=IS_64_BITS, cmdline="--gcremovetypeptr"), ChoiceOption("gcrootfinder", "Strategy for finding GC Roots (framework GCs only)", - ["n/a", "shadowstack", "asmgcc", "none"], + ["n/a", "shadowstack", "asmgcc", "stm"], "shadowstack", cmdline="--gcrootfinder", requires={ diff --git a/pypy/rpython/memory/gc/stmgc.py b/pypy/rpython/memory/gc/stmgc.py --- a/pypy/rpython/memory/gc/stmgc.py +++ b/pypy/rpython/memory/gc/stmgc.py @@ -1,4 +1,4 @@ -from pypy.rpython.lltypesystem import lltype, llmemory, llarena, rffi +from pypy.rpython.lltypesystem import lltype, llmemory, llarena, llgroup, rffi from pypy.rpython.lltypesystem.lloperation import llop from pypy.rpython.lltypesystem.llmemory import raw_malloc_usage from pypy.rpython.memory.gc.base import GCBase @@ -34,6 +34,7 @@ _alloc_flavor_ = "raw" inline_simple_malloc = True inline_simple_malloc_varsize = True + needs_custom_readers = "stm" needs_write_barrier = "stm" prebuilt_gc_objects_are_static_roots = False malloc_zero_filled = True # xxx? @@ -80,12 +81,13 @@ self.declare_reader(size, TYPE) self.declare_write_barrier() + GETSIZE = lltype.Ptr(lltype.FuncType([llmemory.Address],lltype.Signed)) + def setup(self): """Called at run-time to initialize the GC.""" GCBase.setup(self) - GETSIZE = lltype.Ptr(lltype.FuncType([llmemory.Address],lltype.Signed)) self.stm_operations.setup_size_getter( - llhelper(GETSIZE, self._getsize_fn)) + llhelper(self.GETSIZE, self._getsize_fn)) self.main_thread_tls = self.setup_thread(True) self.mutex_lock = ll_thread.allocate_ll_lock() @@ -201,6 +203,11 @@ @always_inline + def get_type_id(self, obj): + tid = self.header(obj).tid + return llop.extract_ushort(llgroup.HALFWORD, tid) + + @always_inline def combine(self, typeid16, flags): return llop.combine_ushort(lltype.Signed, typeid16, flags) @@ -314,6 +321,11 @@ # ---------- + def identityhash(self, gcref): + raise NotImplementedError("XXX") + + # ---------- + def acquire(self, lock): ll_thread.c_thread_acquirelock(lock, 1) diff --git a/pypy/rpython/memory/gctransform/framework.py b/pypy/rpython/memory/gctransform/framework.py --- a/pypy/rpython/memory/gctransform/framework.py +++ b/pypy/rpython/memory/gctransform/framework.py @@ -313,14 +313,6 @@ [s_gc, annmodel.SomeInteger(knowntype=llgroup.r_halfword)], annmodel.SomeInteger()) - if hasattr(GCClass, 'writebarrier_before_copy'): - self.wb_before_copy_ptr = \ - getfn(GCClass.writebarrier_before_copy.im_func, - [s_gc] + [annmodel.SomeAddress()] * 2 + - [annmodel.SomeInteger()] * 3, annmodel.SomeBool()) - elif GCClass.needs_write_barrier: - raise NotImplementedError("GC needs write barrier, but does not provide writebarrier_before_copy functionality") - # in some GCs we can inline the common case of # malloc_fixedsize(typeid, size, False, False, False) if getattr(GCClass, 'inline_simple_malloc', False): @@ -447,43 +439,8 @@ annmodel.SomeInteger(nonneg=True)], annmodel.s_None) - self.write_barrier_ptr = None - self.write_barrier_from_array_ptr = None - if GCClass.needs_write_barrier: - 
self.write_barrier_ptr = getfn(GCClass.write_barrier.im_func, - [s_gc, - annmodel.SomeAddress(), - annmodel.SomeAddress()], - annmodel.s_None, - inline=True) - func = getattr(gcdata.gc, 'remember_young_pointer', None) - if func is not None: - # func should not be a bound method, but a real function - assert isinstance(func, types.FunctionType) - self.write_barrier_failing_case_ptr = getfn(func, - [annmodel.SomeAddress(), - annmodel.SomeAddress()], - annmodel.s_None) - func = getattr(GCClass, 'write_barrier_from_array', None) - if func is not None: - self.write_barrier_from_array_ptr = getfn(func.im_func, - [s_gc, - annmodel.SomeAddress(), - annmodel.SomeAddress(), - annmodel.SomeInteger()], - annmodel.s_None, - inline=True) - func = getattr(gcdata.gc, 'remember_young_pointer_from_array3', - None) - if func is not None: - # func should not be a bound method, but a real function - assert isinstance(func, types.FunctionType) - self.write_barrier_from_array_failing_case_ptr = \ - getfn(func, - [annmodel.SomeAddress(), - annmodel.SomeInteger(), - annmodel.SomeAddress()], - annmodel.s_None) + self.setup_write_barriers(GCClass, s_gc) + self.statistics_ptr = getfn(GCClass.statistics.im_func, [s_gc, annmodel.SomeInteger()], annmodel.SomeInteger()) @@ -525,6 +482,53 @@ from pypy.rpython.memory.gctransform import shadowstack return shadowstack.ShadowStackRootWalker(self) + def setup_write_barriers(self, GCClass, s_gc): + self.write_barrier_ptr = None + self.write_barrier_from_array_ptr = None + if GCClass.needs_write_barrier: + self.write_barrier_ptr = getfn(GCClass.write_barrier.im_func, + [s_gc, + annmodel.SomeAddress(), + annmodel.SomeAddress()], + annmodel.s_None, + inline=True) + func = getattr(gcdata.gc, 'remember_young_pointer', None) + if func is not None: + # func should not be a bound method, but a real function + assert isinstance(func, types.FunctionType) + self.write_barrier_failing_case_ptr = getfn(func, + [annmodel.SomeAddress(), + annmodel.SomeAddress()], + annmodel.s_None) + func = getattr(GCClass, 'write_barrier_from_array', None) + if func is not None: + self.write_barrier_from_array_ptr = getfn(func.im_func, + [s_gc, + annmodel.SomeAddress(), + annmodel.SomeAddress(), + annmodel.SomeInteger()], + annmodel.s_None, + inline=True) + func = getattr(gcdata.gc, 'remember_young_pointer_from_array3', + None) + if func is not None: + # func should not be a bound method, but a real function + assert isinstance(func, types.FunctionType) + self.write_barrier_from_array_failing_case_ptr = \ + getfn(func, + [annmodel.SomeAddress(), + annmodel.SomeInteger(), + annmodel.SomeAddress()], + annmodel.s_None) + if hasattr(GCClass, 'writebarrier_before_copy'): + self.wb_before_copy_ptr = \ + getfn(GCClass.writebarrier_before_copy.im_func, + [s_gc] + [annmodel.SomeAddress()] * 2 + + [annmodel.SomeInteger()] * 3, annmodel.SomeBool()) + elif GCClass.needs_write_barrier: + raise NotImplementedError("GC needs write barrier, but does not provide writebarrier_before_copy functionality") + + def consider_constant(self, TYPE, value): self.layoutbuilder.consider_constant(TYPE, value, self.gcdata.gc) diff --git a/pypy/rpython/memory/gctransform/stmframework.py b/pypy/rpython/memory/gctransform/stmframework.py new file mode 100644 --- /dev/null +++ b/pypy/rpython/memory/gctransform/stmframework.py @@ -0,0 +1,22 @@ +from pypy.rpython.memory.gctransform.framework import FrameworkGCTransformer +from pypy.translator.backendopt.support import var_needsgc + + +class StmFrameworkGCTransformer(FrameworkGCTransformer): + + 
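The framework.py hunk above pulls the write-barrier wiring out of the constructor and into an overridable setup_write_barriers() hook, which is exactly what lets the new StmFrameworkGCTransformer below replace it. A minimal sketch of that shape, using made-up names (BaseTransformer, StmTransformer) rather than the real classes:

    class BaseTransformer(object):
        # stands in for FrameworkGCTransformer: the base class decides *when*
        # the barriers get wired up, the hook decides *how*
        def __init__(self, gc_needs_write_barrier):
            self.write_barrier_ptr = None
            self.setup_write_barriers(gc_needs_write_barrier)

        def setup_write_barriers(self, gc_needs_write_barrier):
            if gc_needs_write_barrier:
                self.write_barrier_ptr = "normal write barrier helper"

    class StmTransformer(BaseTransformer):
        # stands in for StmFrameworkGCTransformer: no normal write barrier at
        # all; reads and writes become stm_* operations instead
        def setup_write_barriers(self, gc_needs_write_barrier):
            self.write_barrier_ptr = None

    assert BaseTransformer(True).write_barrier_ptr is not None
    assert StmTransformer(True).write_barrier_ptr is None

In the real diff the hook also keeps the writebarrier_before_copy check, so a GC that needs a write barrier but cannot provide that helper still fails loudly.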
def setup_write_barriers(self, GCClass, s_gc): + self.write_barrier_ptr = None + self.write_barrier_from_array_ptr = None + pass # xxx + + def gct_getfield(self, hop): + if self.var_needs_set_transform(hop.spaceop.args[0]): + hop.rename('stm_' + hop.spaceop.opname) + else: + self.default(hop) + gct_getarrayitem = gct_getfield + gct_getinteriorfield = gct_getfield + + + def gct_gc_writebarrier_before_copy(self, hop): + xxx diff --git a/pypy/rpython/memory/gctransform/test/test_framework.py b/pypy/rpython/memory/gctransform/test/test_framework.py --- a/pypy/rpython/memory/gctransform/test/test_framework.py +++ b/pypy/rpython/memory/gctransform/test/test_framework.py @@ -18,9 +18,13 @@ import py +class ForTestGCTransformer(FrameworkGCTransformer): + root_stack_depth = 100 + class FrameworkGcPolicy2(FrameworkGcPolicy): - class transformerclass(FrameworkGCTransformer): - root_stack_depth = 100 + @staticmethod + def get_transformer_class(): + return ForTestGCTransformer def test_framework_simple(): def g(x): diff --git a/pypy/rpython/memory/gctransform/test/test_stmframework.py b/pypy/rpython/memory/gctransform/test/test_stmframework.py new file mode 100644 --- /dev/null +++ b/pypy/rpython/memory/gctransform/test/test_stmframework.py @@ -0,0 +1,27 @@ +from pypy.translator.translator import graphof +from pypy.objspace.flow.model import summary +from pypy.rpython.memory.gctransform.test.test_transform import rtype +from pypy.rpython.memory.gctransform.stmframework import ( + StmFrameworkGCTransformer) + + +def prepare(entrypoint, types, func=None): + t = rtype(entrypoint, types) + t.config.translation.gc = 'stmgc' + transformer = StmFrameworkGCTransformer(t) + graph = graphof(t, func or entrypoint) + transformer.transform_graph(graph) + return t, graph + + +def test_reader(): + class A(object): + def __init__(self, x): + self.x = x + def f(a1, a2): + return a1.x + def entry(n, m): + return f(A(n), A(m)) + + t, graph = prepare(entry, [int, int], f) + assert summary(graph) == {'stm_getfield': 1} diff --git a/pypy/rpython/memory/test/test_transformed_gc.py b/pypy/rpython/memory/test/test_transformed_gc.py --- a/pypy/rpython/memory/test/test_transformed_gc.py +++ b/pypy/rpython/memory/test/test_transformed_gc.py @@ -933,6 +933,7 @@ class TestMarkSweepGC(GenericGCTests): gcname = "marksweep" class gcpolicy(gc.FrameworkGcPolicy): + get_transformer_class = lambda self: self.transformerclass class transformerclass(framework.FrameworkGCTransformer): GC_PARAMS = {'start_heap_size': 1024*WORD, 'translated_to_c': False} @@ -943,6 +944,7 @@ gcname = "statistics" class gcpolicy(gc.FrameworkGcPolicy): + get_transformer_class = lambda self: self.transformerclass class transformerclass(framework.FrameworkGCTransformer): from pypy.rpython.memory.gc.marksweep import PrintingMarkSweepGC as GCClass GC_PARAMS = {'start_heap_size': 1024*WORD, @@ -954,6 +956,7 @@ GC_CAN_SHRINK_ARRAY = True class gcpolicy(gc.FrameworkGcPolicy): + get_transformer_class = lambda self: self.transformerclass class transformerclass(framework.FrameworkGCTransformer): from pypy.rpython.memory.gc.semispace import SemiSpaceGC as GCClass GC_PARAMS = {'space_size': 512*WORD, @@ -964,6 +967,7 @@ gcname = 'markcompact' class gcpolicy(gc.FrameworkGcPolicy): + get_transformer_class = lambda self: self.transformerclass class transformerclass(framework.FrameworkGCTransformer): from pypy.rpython.memory.gc.markcompact import MarkCompactGC as GCClass GC_PARAMS = {'space_size': 4096*WORD, @@ -975,6 +979,7 @@ GC_CAN_SHRINK_ARRAY = True class 
gcpolicy(gc.FrameworkGcPolicy): + get_transformer_class = lambda self: self.transformerclass class transformerclass(framework.FrameworkGCTransformer): from pypy.rpython.memory.gc.generation import GenerationGC as \ GCClass @@ -1161,6 +1166,7 @@ gcname = "generation" class gcpolicy(gc.FrameworkGcPolicy): + get_transformer_class = lambda self: self.transformerclass class transformerclass(framework.FrameworkGCTransformer): from pypy.rpython.memory.gc.generation import GenerationGC class GCClass(GenerationGC): @@ -1206,6 +1212,7 @@ GC_CAN_MALLOC_NONMOVABLE = True class gcpolicy(gc.FrameworkGcPolicy): + get_transformer_class = lambda self: self.transformerclass class transformerclass(framework.FrameworkGCTransformer): from pypy.rpython.memory.gc.hybrid import HybridGC as GCClass GC_PARAMS = {'space_size': 512*WORD, @@ -1275,6 +1282,7 @@ GC_CAN_TEST_ID = True class gcpolicy(gc.FrameworkGcPolicy): + get_transformer_class = lambda self: self.transformerclass class transformerclass(framework.FrameworkGCTransformer): from pypy.rpython.memory.gc.minimark import MiniMarkGC as GCClass GC_PARAMS = {'nursery_size': 32*WORD, @@ -1391,6 +1399,7 @@ class TestMarkSweepTaggedPointerGC(TaggedPointerGCTests): gcname = "marksweep" class gcpolicy(gc.FrameworkGcPolicy): + get_transformer_class = lambda self: self.transformerclass class transformerclass(framework.FrameworkGCTransformer): GC_PARAMS = {'start_heap_size': 1024*WORD, 'translated_to_c': False} @@ -1400,6 +1409,7 @@ gcname = "hybrid" class gcpolicy(gc.FrameworkGcPolicy): + get_transformer_class = lambda self: self.transformerclass class transformerclass(framework.FrameworkGCTransformer): from pypy.rpython.memory.gc.generation import GenerationGC as \ GCClass @@ -1412,6 +1422,7 @@ gcname = 'markcompact' class gcpolicy(gc.FrameworkGcPolicy): + get_transformer_class = lambda self: self.transformerclass class transformerclass(framework.FrameworkGCTransformer): from pypy.rpython.memory.gc.markcompact import MarkCompactGC as GCClass GC_PARAMS = {'space_size': 4096*WORD, diff --git a/pypy/translator/c/database.py b/pypy/translator/c/database.py --- a/pypy/translator/c/database.py +++ b/pypy/translator/c/database.py @@ -60,7 +60,8 @@ else: self.exctransformer = translator.getexceptiontransformer() if translator is not None: - self.gctransformer = self.gcpolicy.transformerclass(translator) + transformerclass = self.gcpolicy.get_transformer_class() + self.gctransformer = transformerclass(translator) self.completed = False self.instrument_ncounter = 0 diff --git a/pypy/translator/c/gc.py b/pypy/translator/c/gc.py --- a/pypy/translator/c/gc.py +++ b/pypy/translator/c/gc.py @@ -5,8 +5,6 @@ from pypy.rpython.lltypesystem.lltype import \ typeOf, Ptr, ContainerType, RttiStruct, \ RuntimeTypeInfo, getRuntimeTypeInfo, top_container -from pypy.rpython.memory.gctransform import \ - refcounting, boehm, framework, asmgcroot from pypy.rpython.lltypesystem import lltype, llmemory from pypy.translator.tool.cbuild import ExternalCompilationInfo @@ -112,7 +110,10 @@ from pypy.rlib.objectmodel import CDefinedIntSymbolic class RefcountingGcPolicy(BasicGcPolicy): - transformerclass = refcounting.RefcountingGCTransformer + @staticmethod + def get_transformer_class(): + from pypy.rpython.memory.gctransform import refcounting + return refcounting.RefcountingGCTransformer def common_gcheader_initdata(self, defnode): if defnode.db.gctransformer is not None: @@ -197,7 +198,10 @@ class BoehmGcPolicy(BasicGcPolicy): - transformerclass = boehm.BoehmGCTransformer + @staticmethod + def 
get_transformer_class(): + from pypy.rpython.memory.gctransform import boehm + return boehm.BoehmGCTransformer def common_gcheader_initdata(self, defnode): if defnode.db.gctransformer is not None: @@ -247,9 +251,11 @@ yield 'boehm_gc_startup_code();' def get_real_weakref_type(self): + from pypy.rpython.memory.gctransform import boehm return boehm.WEAKLINK def convert_weakref_to(self, ptarget): + from pypy.rpython.memory.gctransform import boehm return boehm.convert_weakref_to(ptarget) def OP_GC__COLLECT(self, funcgen, op): @@ -306,7 +312,10 @@ class FrameworkGcPolicy(BasicGcPolicy): - transformerclass = framework.FrameworkGCTransformer + @staticmethod + def get_transformer_class(): + from pypy.rpython.memory.gctransform import framework + return framework.FrameworkGCTransformer def struct_setup(self, structdefnode, rtti): if rtti is not None and hasattr(rtti._obj, 'destructor_funcptr'): @@ -338,9 +347,11 @@ yield '%s();' % (self.db.get(fnptr),) def get_real_weakref_type(self): + from pypy.rpython.memory.gctransform import framework return framework.WEAKREF def convert_weakref_to(self, ptarget): + from pypy.rpython.memory.gctransform import framework return framework.convert_weakref_to(ptarget) def OP_GC_RELOAD_POSSIBLY_MOVED(self, funcgen, op): @@ -396,7 +407,10 @@ raise Exception("the FramewokGCTransformer should handle this") class AsmGcRootFrameworkGcPolicy(FrameworkGcPolicy): - transformerclass = asmgcroot.AsmGcRootFrameworkGCTransformer + @staticmethod + def get_transformer_class(): + from pypy.rpython.memory.gctransform import asmgcroot + return asmgcroot.AsmGcRootFrameworkGCTransformer def GC_KEEPALIVE(self, funcgen, v): return 'pypy_asm_keepalive(%s);' % funcgen.expr(v) @@ -404,6 +418,12 @@ def OP_GC_STACK_BOTTOM(self, funcgen, op): return 'pypy_asm_stack_bottom();' +class StmFrameworkGcPolicy(FrameworkGcPolicy): + @staticmethod + def get_transformer_class(): + from pypy.rpython.memory.gctransform import stmframework + return stmframework.StmFrameworkGCTransformer + name_to_gcpolicy = { 'boehm': BoehmGcPolicy, @@ -411,6 +431,5 @@ 'none': NoneGcPolicy, 'framework': FrameworkGcPolicy, 'framework+asmgcroot': AsmGcRootFrameworkGcPolicy, + 'framework+stm': StmFrameworkGcPolicy, } - - diff --git a/pypy/translator/stm/stmgcintf.py b/pypy/translator/stm/stmgcintf.py --- a/pypy/translator/stm/stmgcintf.py +++ b/pypy/translator/stm/stmgcintf.py @@ -1,4 +1,5 @@ from pypy.rpython.lltypesystem import lltype, llmemory +from pypy.rpython.memory.gc.stmgc import PRIMITIVE_SIZES from pypy.translator.stm import _rffi_stm @@ -28,9 +29,10 @@ lltype.Void) tldict_enum = smexternal('stm_tldict_enum', [CALLBACK], lltype.Void) - stm_read_word = smexternal('stm_read_word', - [llmemory.Address, lltype.Signed], - lltype.Signed) + for _size, _TYPE in PRIMITIVE_SIZES.items(): + _name = 'stm_read_int%d' % _size + locals()[_name] = smexternal(_name, [llmemory.Address, lltype.Signed], + _TYPE) stm_copy_transactional_to_raw = smexternal('stm_copy_transactional_to_raw', [llmemory.Address, From noreply at buildbot.pypy.org Tue Feb 7 16:53:12 2012 From: noreply at buildbot.pypy.org (arigo) Date: Tue, 7 Feb 2012 16:53:12 +0100 (CET) Subject: [pypy-commit] pypy stm-gc: Give up, will try a different approach Message-ID: <20120207155312.1A6317107FA@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-gc Changeset: r52187:3720ee526894 Date: 2012-02-07 16:24 +0100 http://bitbucket.org/pypy/pypy/changeset/3720ee526894/ Log: Give up, will try a different approach From noreply at buildbot.pypy.org Tue Feb 7 16:53:13 
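The stmgcintf.py hunk that closes this changeset stops declaring a single stm_read_word and instead generates one external per entry of PRIMITIVE_SIZES by assigning into locals() inside what appears to be the StmOperations class body (the tests later fetch the readers with getattr). A self-contained illustration of that generate-per-size trick; the mapping values and the smexternal() stand-in below are placeholders, not the real rffi declarations:

    # placeholder mapping: the real PRIMITIVE_SIZES lives in stmgc.py and maps
    # sizes to lltype types; on the C side stm_read_int1/2/4/8 return
    # char/short/int/long long respectively
    PRIMITIVE_SIZES = {1: 'char', 2: 'short', 4: 'int', 8: 'long long'}

    def smexternal(name, c_type):
        # placeholder for the real helper, which builds an rffi function pointer
        def reader(addr, offset):
            return 'call %s (returns %s) at %r+%d' % (name, c_type, addr, offset)
        reader.__name__ = name
        return reader

    class StmOperations(object):
        # one attribute per size, created by writing into the class namespace;
        # staticmethod() is only needed because this stand-in is a plain Python
        # function, while the real attribute is a low-level function pointer
        for _size, _TYPE in PRIMITIVE_SIZES.items():
            _name = 'stm_read_int%d' % _size
            locals()[_name] = staticmethod(smexternal(_name, _TYPE))

    ops = StmOperations()
    print(ops.stm_read_int8(0x1000, 16))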
2012 From: noreply at buildbot.pypy.org (arigo) Date: Tue, 7 Feb 2012 16:53:13 +0100 (CET) Subject: [pypy-commit] pypy stm-gc: Reads of 1, 2, 4, 8 bytes here too. Message-ID: <20120207155313.4E1FC7107FA@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-gc Changeset: r52188:b4d3a591ff38 Date: 2012-02-07 16:52 +0100 http://bitbucket.org/pypy/pypy/changeset/b4d3a591ff38/ Log: Reads of 1, 2, 4, 8 bytes here too. diff --git a/pypy/translator/stm/src_stm/et.c b/pypy/translator/stm/src_stm/et.c --- a/pypy/translator/stm/src_stm/et.c +++ b/pypy/translator/stm/src_stm/et.c @@ -11,7 +11,6 @@ #include #include -#include #include #define USE_PTHREAD_MUTEX /* optional */ @@ -29,6 +28,13 @@ # define RPY_STM_ASSERT 1 #endif +#ifdef RPY_STM_ASSERT +# include +#else +# undef assert +# define assert /* nothing */ +#endif + /************************************************************/ /* This is the same as the object header structure HDR @@ -40,7 +46,7 @@ } orec_t; enum { - first_gcflag = 1 << (PYPY_LONG_BIT / 2), + first_gcflag = 1L << (PYPY_LONG_BIT / 2), GCFLAG_GLOBAL = first_gcflag << 0, GCFLAG_WAS_COPIED = first_gcflag << 1 }; @@ -424,62 +430,68 @@ } /* lazy/lazy read instrumentation */ -long stm_read_word(void* addr, long offset) -{ - struct tx_descriptor *d = thread_descriptor; - volatile orec_t *o = get_orec(addr); - owner_version_t ovt; +#define STM_READ_WORD(SIZE, TYPE) \ +TYPE stm_read_int##SIZE(void* addr, long offset) \ +{ \ + struct tx_descriptor *d = thread_descriptor; \ + volatile orec_t *o = get_orec(addr); \ + owner_version_t ovt; \ + \ + assert(sizeof(TYPE) == SIZE); \ + \ + if ((o->tid & GCFLAG_WAS_COPIED) != 0) \ + { \ + /* Look up in the thread-local dictionary. */ \ + wlog_t *found; \ + REDOLOG_FIND(d->redolog, addr, found, goto not_found); \ + orec_t *localobj = (orec_t *)found->val; \ + assert((localobj->tid & GCFLAG_GLOBAL) == 0); \ + return *(TYPE *)(((char *)localobj) + offset); \ + \ + not_found:; \ + } \ + \ + /* XXX try to remove this check from the main path */ \ + if (is_main_thread(d)) \ + return *(TYPE *)(((char *)addr) + offset); \ + \ + retry: \ + /* read the orec BEFORE we read anything else */ \ + ovt = o->version; \ + CFENCE; \ + \ + /* this tx doesn't hold any locks, so if the lock for this addr is \ + held, there is contention. A lock is never hold for too long, \ + so spinloop until it is released. */ \ + if (IS_LOCKED_OR_NEWER(ovt, d->start_time)) \ + { \ + if (IS_LOCKED(ovt)) { \ + tx_spinloop(7); \ + goto retry; \ + } \ + /* else this location is too new, scale forward */ \ + owner_version_t newts = get_global_timestamp(d) & ~1; \ + validate_fast(d, 1); \ + d->start_time = newts; \ + } \ + \ + /* orec is unlocked, with ts <= start_time. read the location */ \ + TYPE tmp = *(TYPE *)(((char *)addr) + offset); \ + \ + /* postvalidate AFTER reading addr: */ \ + CFENCE; \ + if (__builtin_expect(o->version != ovt, 0)) \ + goto retry; /* oups, try again */ \ + \ + oreclist_insert(&d->reads, (orec_t*)o); \ + \ + return tmp; \ +} - if ((o->tid & GCFLAG_WAS_COPIED) != 0) - { - /* Look up in the thread-local dictionary. 
*/ - wlog_t *found; - REDOLOG_FIND(d->redolog, addr, found, goto not_found); - orec_t *localobj = (orec_t *)found->val; -#ifdef RPY_STM_ASSERT - assert((localobj->tid & GCFLAG_GLOBAL) == 0); -#endif - return *(long *)(((char *)localobj) + offset); - - not_found:; - } - - // XXX try to remove this check from the main path - if (is_main_thread(d)) - return *(long *)(((char *)addr) + offset); - - retry: - // read the orec BEFORE we read anything else - ovt = o->version; - CFENCE; - - // this tx doesn't hold any locks, so if the lock for this addr is held, - // there is contention. A lock is never hold for too long, so spinloop - // until it is released. - if (IS_LOCKED_OR_NEWER(ovt, d->start_time)) - { - if (IS_LOCKED(ovt)) { - tx_spinloop(7); - goto retry; - } - // else this location is too new, scale forward - owner_version_t newts = get_global_timestamp(d) & ~1; - validate_fast(d, 1); - d->start_time = newts; - } - - // orec is unlocked, with ts <= start_time. read the location - long tmp = *(long *)(((char *)addr) + offset); - - // postvalidate AFTER reading addr: - CFENCE; - if (__builtin_expect(o->version != ovt, 0)) - goto retry; /* oups, try again */ - - oreclist_insert(&d->reads, (orec_t*)o); - - return tmp; -} +STM_READ_WORD(1, char) +STM_READ_WORD(2, short) +STM_READ_WORD(4, int) +STM_READ_WORD(8, long long) static struct tx_descriptor *descriptor_init(_Bool is_main_thread) diff --git a/pypy/translator/stm/stmgcintf.py b/pypy/translator/stm/stmgcintf.py --- a/pypy/translator/stm/stmgcintf.py +++ b/pypy/translator/stm/stmgcintf.py @@ -1,4 +1,5 @@ from pypy.rpython.lltypesystem import lltype, llmemory +from pypy.rpython.memory.gc.stmgc import PRIMITIVE_SIZES from pypy.translator.stm import _rffi_stm @@ -28,9 +29,10 @@ lltype.Void) tldict_enum = smexternal('stm_tldict_enum', [CALLBACK], lltype.Void) - stm_read_word = smexternal('stm_read_word', - [llmemory.Address, lltype.Signed], - lltype.Signed) + for _size, _TYPE in PRIMITIVE_SIZES.items(): + _name = 'stm_read_int%d' % _size + locals()[_name] = smexternal(_name, [llmemory.Address, lltype.Signed], + _TYPE) stm_copy_transactional_to_raw = smexternal('stm_copy_transactional_to_raw', [llmemory.Address, diff --git a/pypy/translator/stm/test/test_stmgcintf.py b/pypy/translator/stm/test/test_stmgcintf.py --- a/pypy/translator/stm/test/test_stmgcintf.py +++ b/pypy/translator/stm/test/test_stmgcintf.py @@ -4,6 +4,8 @@ from pypy.translator.stm.stmgcintf import StmOperations, CALLBACK, GETSIZE from pypy.rpython.memory.gc import stmgc +WORD = stmgc.WORD + stm_operations = StmOperations() DEFAULT_TLS = lltype.Struct('DEFAULT_TLS') @@ -12,6 +14,10 @@ ('x', lltype.Signed), ('y', lltype.Signed)) +# xxx a lot of casts to convince rffi to give us a regular integer :-( +SIZEOFHDR = rffi.cast(lltype.Signed, rffi.cast(rffi.SHORT, + rffi.sizeof(S1.hdr))) + def test_set_get_del(): # assume that they are really thread-local; not checked here @@ -57,7 +63,6 @@ def test_tldict_large(self): content = {} - WORD = rffi.sizeof(lltype.Signed) for i in range(12000): key = random.randrange(1000, 2000) * WORD a1 = rffi.cast(llmemory.Address, key) @@ -97,8 +102,8 @@ a1 = llmemory.cast_ptr_to_adr(s1) a2 = llmemory.cast_ptr_to_adr(s2) stm_operations.tldict_add(a1, a2) - res = stm_operations.stm_read_word(llmemory.cast_ptr_to_adr(s1), - rffi.sizeof(S1.hdr)) # 'x' + reader = getattr(stm_operations, 'stm_read_int%d' % WORD) + res = reader(llmemory.cast_ptr_to_adr(s1), SIZEOFHDR) # 'x' lltype.free(s1, flavor='raw') if copied: lltype.free(s2, flavor='raw') @@ -119,6 
+124,24 @@ assert res == 84084 test_stm_read_word_transactional_thread.in_main_thread = False + def test_stm_read_int1(self): + S2 = lltype.Struct('S2', ('hdr', stmgc.StmGC.HDR), + ('c1', lltype.Char), + ('c2', lltype.Char), + ('c3', lltype.Char)) + s2 = lltype.malloc(S2, flavor='raw') + s2.hdr.tid = stmgc.GCFLAG_GLOBAL | stmgc.GCFLAG_WAS_COPIED + s2.hdr.version = llmemory.NULL + s2.c1 = 'A' + s2.c2 = 'B' + s2.c3 = 'C' + reader = stm_operations.stm_read_int1 + r1 = reader(llmemory.cast_ptr_to_adr(s2), SIZEOFHDR + 0) # c1 + r2 = reader(llmemory.cast_ptr_to_adr(s2), SIZEOFHDR + 1) # c2 + r3 = reader(llmemory.cast_ptr_to_adr(s2), SIZEOFHDR + 2) # c3 + lltype.free(s2, flavor='raw') + assert r1 == 'A' and r2 == 'B' and r3 == 'C' + def test_stm_size_getter(self): def getsize(addr): xxx From noreply at buildbot.pypy.org Tue Feb 7 17:17:53 2012 From: noreply at buildbot.pypy.org (bivab) Date: Tue, 7 Feb 2012 17:17:53 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: (arigo, bivab) get rid of the special case Message-ID: <20120207161753.2405E7107FA@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: ppc-jit-backend Changeset: r52189:dad5a5e0f215 Date: 2012-01-18 10:02 -0800 http://bitbucket.org/pypy/pypy/changeset/dad5a5e0f215/ Log: (arigo, bivab) get rid of the special case diff --git a/pypy/jit/backend/ppc/ppcgen/locations.py b/pypy/jit/backend/ppc/ppcgen/locations.py --- a/pypy/jit/backend/ppc/ppcgen/locations.py +++ b/pypy/jit/backend/ppc/ppcgen/locations.py @@ -110,7 +110,4 @@ return ImmLocation(val) def get_spp_offset(pos): - if pos < 0: - return -pos * WORD - else: - return -(pos + 1) * WORD + return -(pos + 1) * WORD diff --git a/pypy/jit/backend/ppc/ppcgen/regalloc.py b/pypy/jit/backend/ppc/ppcgen/regalloc.py --- a/pypy/jit/backend/ppc/ppcgen/regalloc.py +++ b/pypy/jit/backend/ppc/ppcgen/regalloc.py @@ -186,7 +186,7 @@ arg_index = 0 count = 0 n_register_args = len(r.PARAM_REGS) - cur_frame_pos = -self.assembler.OFFSET_STACK_ARGS // WORD + 1 + cur_frame_pos = -self.assembler.OFFSET_STACK_ARGS // WORD for box in inputargs: assert isinstance(box, Box) # handle inputargs in argument registers From noreply at buildbot.pypy.org Tue Feb 7 17:17:54 2012 From: noreply at buildbot.pypy.org (bivab) Date: Tue, 7 Feb 2012 17:17:54 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: (bivab, arigo) Add test for an operation that does not correctly emit the code for the guard, i.e. emitting two guards for the same operation Message-ID: <20120207161754.5D9287107FA@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: ppc-jit-backend Changeset: r52190:cce43ec7bf12 Date: 2012-01-18 17:46 +0100 http://bitbucket.org/pypy/pypy/changeset/cce43ec7bf12/ Log: (bivab, arigo) Add test for an operation that does not correctly emit the code for the guard, i.e. 
emitting two guards for the same operation diff --git a/pypy/jit/backend/test/runner_test.py b/pypy/jit/backend/test/runner_test.py --- a/pypy/jit/backend/test/runner_test.py +++ b/pypy/jit/backend/test/runner_test.py @@ -3233,6 +3233,24 @@ 'float', descr=calldescr) assert res.getfloat() == expected + def test_some_issue(self): + t_box, T_box = self.alloc_instance(self.T) + null_box = self.null_instance() + faildescr = BasicFailDescr(42) + operations = [ + ResOperation(rop.GUARD_NONNULL_CLASS, [t_box, T_box], None, + descr=faildescr), + ResOperation(rop.FINISH, [], None, descr=BasicFailDescr(1))] + operations[0].setfailargs([]) + looptoken = JitCellToken() + inputargs = [t_box] + self.cpu.compile_loop(inputargs, operations, looptoken) + operations = [ + ResOperation(rop.FINISH, [], None, descr=BasicFailDescr(99)) + ] + self.cpu.compile_bridge(faildescr, [], operations, looptoken) + fail = self.cpu.execute_token(looptoken, null_box.getref_base()) + assert fail.identifier == 99 def test_compile_loop_with_target(self): i0 = BoxInt() From noreply at buildbot.pypy.org Tue Feb 7 17:17:55 2012 From: noreply at buildbot.pypy.org (bivab) Date: Tue, 7 Feb 2012 17:17:55 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: give test a proper name Message-ID: <20120207161755.E2B3E7107FA@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: ppc-jit-backend Changeset: r52191:3a98709cbeba Date: 2012-01-18 17:49 +0100 http://bitbucket.org/pypy/pypy/changeset/3a98709cbeba/ Log: give test a proper name diff --git a/pypy/jit/backend/test/runner_test.py b/pypy/jit/backend/test/runner_test.py --- a/pypy/jit/backend/test/runner_test.py +++ b/pypy/jit/backend/test/runner_test.py @@ -3233,7 +3233,7 @@ 'float', descr=calldescr) assert res.getfloat() == expected - def test_some_issue(self): + def test_wrong_guard_nonnull_class(self): t_box, T_box = self.alloc_instance(self.T) null_box = self.null_instance() faildescr = BasicFailDescr(42) From noreply at buildbot.pypy.org Tue Feb 7 17:17:57 2012 From: noreply at buildbot.pypy.org (bivab) Date: Tue, 7 Feb 2012 17:17:57 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: Add test_list to the ppc backend Message-ID: <20120207161757.2174F7107FA@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: ppc-jit-backend Changeset: r52192:07167acad992 Date: 2012-01-19 13:54 -0800 http://bitbucket.org/pypy/pypy/changeset/07167acad992/ Log: Add test_list to the ppc backend diff --git a/pypy/jit/backend/ppc/test/support.py b/pypy/jit/backend/ppc/test/support.py new file mode 100644 --- /dev/null +++ b/pypy/jit/backend/ppc/test/support.py @@ -0,0 +1,9 @@ +from pypy.jit.backend.detect_cpu import getcpuclass +from pypy.jit.metainterp.test import support + +class JitPPCMixin(support.LLJitMixin): + type_system = 'lltype' + CPUClass = getcpuclass() + + def check_jumps(self, maxcount): + pass diff --git a/pypy/jit/backend/ppc/test/test_list.py b/pypy/jit/backend/ppc/test/test_list.py new file mode 100644 --- /dev/null +++ b/pypy/jit/backend/ppc/test/test_list.py @@ -0,0 +1,8 @@ + +from pypy.jit.metainterp.test.test_list import ListTests +from pypy.jit.backend.ppc.test.support import JitPPCMixin + +class TestList(JitPPCMixin, ListTests): + # for individual tests see + # ====> ../../../metainterp/test/test_list.py + pass From noreply at buildbot.pypy.org Tue Feb 7 17:17:58 2012 From: noreply at buildbot.pypy.org (bivab) Date: Tue, 7 Feb 2012 17:17:58 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: update interface of compile_loop Message-ID: 
<20120207161758.53AD87107FA@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: ppc-jit-backend Changeset: r52193:b102e85bc3bb Date: 2012-01-19 13:54 -0800 http://bitbucket.org/pypy/pypy/changeset/b102e85bc3bb/ Log: update interface of compile_loop diff --git a/pypy/jit/backend/ppc/runner.py b/pypy/jit/backend/ppc/runner.py --- a/pypy/jit/backend/ppc/runner.py +++ b/pypy/jit/backend/ppc/runner.py @@ -42,7 +42,7 @@ def setup_once(self): self.asm.setup_once() - def compile_loop(self, inputargs, operations, looptoken, log=False): + def compile_loop(self, inputargs, operations, looptoken, log=True, name=''): self.asm.assemble_loop(inputargs, operations, looptoken, log) def compile_bridge(self, faildescr, inputargs, operations, From noreply at buildbot.pypy.org Tue Feb 7 17:18:00 2012 From: noreply at buildbot.pypy.org (bivab) Date: Tue, 7 Feb 2012 17:18:00 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: merge Message-ID: <20120207161800.2C8647107FA@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: ppc-jit-backend Changeset: r52194:d7c152aca457 Date: 2012-02-07 08:04 -0800 http://bitbucket.org/pypy/pypy/changeset/d7c152aca457/ Log: merge diff --git a/pypy/jit/backend/detect_cpu.py b/pypy/jit/backend/detect_cpu.py --- a/pypy/jit/backend/detect_cpu.py +++ b/pypy/jit/backend/detect_cpu.py @@ -32,6 +32,7 @@ 'x86': 'x86', # Apple 'Power Macintosh': 'ppc', 'ppc64': 'ppc64', + 'ppc64_64': 'ppc64', 'x86_64': 'x86', 'amd64': 'x86', # freebsd 'AMD64': 'x86', # win64 @@ -79,6 +80,8 @@ return "pypy.jit.backend.arm.runner", "ArmCPU" elif backend_name == 'ppc64': return "pypy.jit.backend.ppc.runner", "PPC_64_CPU" + elif backend_name == 'ppc64_64': + return "pypy.jit.backend.ppc.runner", "PPC_64_CPU" else: raise ProcessorAutodetectError, ( "we have no JIT backend for this cpu: '%s'" % backend_name) diff --git a/pypy/jit/backend/ppc/ppcgen/_ppcgen.c b/pypy/jit/backend/ppc/_ppcgen.c rename from pypy/jit/backend/ppc/ppcgen/_ppcgen.c rename to pypy/jit/backend/ppc/_ppcgen.c diff --git a/pypy/jit/backend/ppc/ppcgen/arch.py b/pypy/jit/backend/ppc/arch.py rename from pypy/jit/backend/ppc/ppcgen/arch.py rename to pypy/jit/backend/ppc/arch.py --- a/pypy/jit/backend/ppc/ppcgen/arch.py +++ b/pypy/jit/backend/ppc/arch.py @@ -1,8 +1,8 @@ # Constants that depend on whether we are on 32-bit or 64-bit -from pypy.jit.backend.ppc.ppcgen.register import (NONVOLATILES, - NONVOLATILES_FLOAT, - MANAGED_REGS) +from pypy.jit.backend.ppc.register import (NONVOLATILES, + NONVOLATILES_FLOAT, + MANAGED_REGS) import sys if sys.maxint == (2**31 - 1): diff --git a/pypy/jit/backend/ppc/ppcgen/asmfunc.py b/pypy/jit/backend/ppc/asmfunc.py rename from pypy/jit/backend/ppc/ppcgen/asmfunc.py rename to pypy/jit/backend/ppc/asmfunc.py --- a/pypy/jit/backend/ppc/ppcgen/asmfunc.py +++ b/pypy/jit/backend/ppc/asmfunc.py @@ -4,7 +4,7 @@ from pypy.jit.backend.ppc.codebuf import MachineCodeBlockWrapper from pypy.jit.backend.llsupport.asmmemmgr import AsmMemoryManager from pypy.rpython.lltypesystem import lltype, rffi -from pypy.jit.backend.ppc.ppcgen.arch import IS_PPC_32, IS_PPC_64, WORD +from pypy.jit.backend.ppc.arch import IS_PPC_32, IS_PPC_64, WORD _ppcgen = None diff --git a/pypy/jit/backend/ppc/ppcgen/assembler.py b/pypy/jit/backend/ppc/assembler.py rename from pypy/jit/backend/ppc/ppcgen/assembler.py rename to pypy/jit/backend/ppc/assembler.py --- a/pypy/jit/backend/ppc/ppcgen/assembler.py +++ b/pypy/jit/backend/ppc/assembler.py @@ -1,5 +1,5 @@ import os -from pypy.jit.backend.ppc.ppcgen import form +from 
pypy.jit.backend.ppc import form # don't be fooled by the fact that there's some separation between a # generic assembler class and a PPC assembler class... there's @@ -62,7 +62,7 @@ def assemble(self, dump=os.environ.has_key('PPY_DEBUG')): insns = self.assemble0(dump) - from pypy.jit.backend.ppc.ppcgen import asmfunc + from pypy.jit.backend.ppc import asmfunc c = asmfunc.AsmCode(len(insns)*4) for i in insns: c.emit(i) diff --git a/pypy/jit/backend/ppc/ppcgen/codebuilder.py b/pypy/jit/backend/ppc/codebuilder.py rename from pypy/jit/backend/ppc/ppcgen/codebuilder.py rename to pypy/jit/backend/ppc/codebuilder.py --- a/pypy/jit/backend/ppc/ppcgen/codebuilder.py +++ b/pypy/jit/backend/ppc/codebuilder.py @@ -1,16 +1,16 @@ import os import struct -from pypy.jit.backend.ppc.ppcgen.ppc_form import PPCForm as Form -from pypy.jit.backend.ppc.ppcgen.ppc_field import ppc_fields -from pypy.jit.backend.ppc.ppcgen.regalloc import (TempInt, PPCFrameManager, +from pypy.jit.backend.ppc.ppc_form import PPCForm as Form +from pypy.jit.backend.ppc.ppc_field import ppc_fields +from pypy.jit.backend.ppc.regalloc import (TempInt, PPCFrameManager, Regalloc) -from pypy.jit.backend.ppc.ppcgen.assembler import Assembler -from pypy.jit.backend.ppc.ppcgen.symbol_lookup import lookup -from pypy.jit.backend.ppc.ppcgen.arch import (IS_PPC_32, WORD, NONVOLATILES, +from pypy.jit.backend.ppc.assembler import Assembler +from pypy.jit.backend.ppc.symbol_lookup import lookup +from pypy.jit.backend.ppc.arch import (IS_PPC_32, WORD, NONVOLATILES, GPR_SAVE_AREA, IS_PPC_64) -from pypy.jit.backend.ppc.ppcgen.helper.assembler import gen_emit_cmp_op -import pypy.jit.backend.ppc.ppcgen.register as r -import pypy.jit.backend.ppc.ppcgen.condition as c +from pypy.jit.backend.ppc.helper.assembler import gen_emit_cmp_op +import pypy.jit.backend.ppc.register as r +import pypy.jit.backend.ppc.condition as c from pypy.jit.metainterp.history import (Const, ConstPtr, JitCellToken, TargetToken, AbstractFailDescr) from pypy.jit.backend.llsupport.asmmemmgr import (BlockBuilderMixin, AsmMemoryManager, MachineDataBlockWrapper) diff --git a/pypy/jit/backend/ppc/ppcgen/condition.py b/pypy/jit/backend/ppc/condition.py rename from pypy/jit/backend/ppc/ppcgen/condition.py rename to pypy/jit/backend/ppc/condition.py diff --git a/pypy/jit/backend/ppc/ppcgen/field.py b/pypy/jit/backend/ppc/field.py rename from pypy/jit/backend/ppc/ppcgen/field.py rename to pypy/jit/backend/ppc/field.py diff --git a/pypy/jit/backend/ppc/ppcgen/form.py b/pypy/jit/backend/ppc/form.py rename from pypy/jit/backend/ppc/ppcgen/form.py rename to pypy/jit/backend/ppc/form.py --- a/pypy/jit/backend/ppc/ppcgen/form.py +++ b/pypy/jit/backend/ppc/form.py @@ -91,7 +91,7 @@ for fname, v in more_specializatons.iteritems(): field = self.fieldmap[fname] if field not in self.fields: - raise FormException, "don't know about '%s' here"%k + raise FormException, "don't know about '%s' here" % field if isinstance(v, str): ds[field] = self.fieldmap[v] else: diff --git a/pypy/jit/backend/ppc/ppcgen/func_builder.py b/pypy/jit/backend/ppc/func_builder.py rename from pypy/jit/backend/ppc/ppcgen/func_builder.py rename to pypy/jit/backend/ppc/func_builder.py --- a/pypy/jit/backend/ppc/ppcgen/func_builder.py +++ b/pypy/jit/backend/ppc/func_builder.py @@ -1,6 +1,6 @@ -from pypy.jit.backend.ppc.ppcgen.ppc_assembler import PPCAssembler -from pypy.jit.backend.ppc.ppcgen.symbol_lookup import lookup -from pypy.jit.backend.ppc.ppcgen.regname import * +from pypy.jit.backend.ppc.ppc_assembler import PPCAssembler 
+from pypy.jit.backend.ppc.symbol_lookup import lookup +from pypy.jit.backend.ppc.regname import * def load_arg(code, argi, typecode): rD = r3+argi diff --git a/pypy/jit/backend/ppc/ppcgen/helper/__init__.py b/pypy/jit/backend/ppc/helper/__init__.py rename from pypy/jit/backend/ppc/ppcgen/helper/__init__.py rename to pypy/jit/backend/ppc/helper/__init__.py diff --git a/pypy/jit/backend/ppc/ppcgen/helper/assembler.py b/pypy/jit/backend/ppc/helper/assembler.py rename from pypy/jit/backend/ppc/ppcgen/helper/assembler.py rename to pypy/jit/backend/ppc/helper/assembler.py --- a/pypy/jit/backend/ppc/ppcgen/helper/assembler.py +++ b/pypy/jit/backend/ppc/helper/assembler.py @@ -1,10 +1,10 @@ -import pypy.jit.backend.ppc.ppcgen.condition as c +import pypy.jit.backend.ppc.condition as c from pypy.rlib.rarithmetic import r_uint, r_longlong, intmask -from pypy.jit.backend.ppc.ppcgen.arch import (MAX_REG_PARAMS, IS_PPC_32, WORD, +from pypy.jit.backend.ppc.arch import (MAX_REG_PARAMS, IS_PPC_32, WORD, BACKCHAIN_SIZE) from pypy.jit.metainterp.history import FLOAT from pypy.rlib.unroll import unrolling_iterable -import pypy.jit.backend.ppc.ppcgen.register as r +import pypy.jit.backend.ppc.register as r from pypy.rpython.lltypesystem import rffi, lltype def gen_emit_cmp_op(condition, signed=True): diff --git a/pypy/jit/backend/ppc/ppcgen/helper/regalloc.py b/pypy/jit/backend/ppc/helper/regalloc.py rename from pypy/jit/backend/ppc/ppcgen/helper/regalloc.py rename to pypy/jit/backend/ppc/helper/regalloc.py --- a/pypy/jit/backend/ppc/ppcgen/helper/regalloc.py +++ b/pypy/jit/backend/ppc/helper/regalloc.py @@ -30,7 +30,7 @@ l0 = self._ensure_value_is_boxed(arg0, forbidden_vars=boxes) if imm_a1 and not imm_a0: - l1 = self.make_sure_var_in_reg(arg1, boxes) + l1 = self._ensure_value_is_boxed(arg1, boxes) else: l1 = self._ensure_value_is_boxed(arg1, forbidden_vars=boxes) @@ -44,7 +44,7 @@ def f(self, op): a0 = op.getarg(0) assert isinstance(a0, Box) - reg = self.make_sure_var_in_reg(a0) + reg = self._ensure_value_is_boxed(a0) self.possibly_free_vars_for_op(op) res = self.force_allocate_reg(op.result, [a0]) return [reg, res] @@ -65,15 +65,8 @@ b0, b1 = boxes imm_b0 = _check_imm_arg(b0) imm_b1 = _check_imm_arg(b1) - if not imm_b0 and imm_b1: - l0 = self._ensure_value_is_boxed(b0) - l1 = self.make_sure_var_in_reg(b1, boxes) - elif imm_b0 and not imm_b1: - l0 = self.make_sure_var_in_reg(b0) - l1 = self._ensure_value_is_boxed(b1, boxes) - else: - l0 = self._ensure_value_is_boxed(b0) - l1 = self._ensure_value_is_boxed(b1, boxes) + l0 = self._ensure_value_is_boxed(b0, boxes) + l1 = self._ensure_value_is_boxed(b1, boxes) locs = [l0, l1] self.possibly_free_vars_for_op(op) self.free_temp_vars() diff --git a/pypy/jit/backend/ppc/instruction.py b/pypy/jit/backend/ppc/instruction.py deleted file mode 100644 --- a/pypy/jit/backend/ppc/instruction.py +++ /dev/null @@ -1,842 +0,0 @@ -r0, r1, r2, r3, r4, r5, r6, r7, r8, r9, r10, r11, r12, \ - r13, r14, r15, r16, r17, r18, r19, r20, r21, r22, \ - r23, r24, r25, r26, r27, r28, r29, r30, r31 = range(32) -rSCRATCH = r0 -rSP = r1 -rFP = r2 # the ABI doesn't specify a frame pointer. however, we want one - -class AllocationSlot(object): - offset = 0 - number = 0 - def __init__(self): - # The field alloc points to a singleton used by the register - # allocator to detect conflicts. No two AllocationSlot - # instances with the same value in self.alloc can be used at - # once. 
- self.alloc = self - - def make_loc(self): - """ When we assign a variable to one of these registers, we - call make_loc() to get the actual location instance; that - instance will have its alloc field set to self. For - everything but condition registers, this is self.""" - return self - -class _StackSlot(AllocationSlot): - is_register = False - def __init__(self, offset): - AllocationSlot.__init__(self) - self.offset = offset - def __repr__(self): - return "stack@%s"%(self.offset,) - -_stack_slot_cache = {} -def stack_slot(offset): - # because stack slots are put into dictionaries which compare by - # identity, it is important that there's a unique _StackSlot - # object for each offset, at least per function generated or - # something. doing the caching here is easier, though. - if offset in _stack_slot_cache: - return _stack_slot_cache[offset] - _stack_slot_cache[offset] = res = _StackSlot(offset) - return res - -NO_REGISTER = -1 -GP_REGISTER = 0 -FP_REGISTER = 1 -CR_FIELD = 2 -CT_REGISTER = 3 - -class Register(AllocationSlot): - is_register = True - def __init__(self): - AllocationSlot.__init__(self) - -class GPR(Register): - regclass = GP_REGISTER - def __init__(self, number): - Register.__init__(self) - self.number = number - def __repr__(self): - return 'r' + str(self.number) -gprs = map(GPR, range(32)) - -class FPR(Register): - regclass = FP_REGISTER - def __init__(self, number): - Register.__init__(self) - self.number = number - -fprs = map(FPR, range(32)) - -class BaseCRF(Register): - """ These represent condition registers; however, we never actually - use these as the location of something in the register allocator. - Instead, we place it in an instance of CRF which indicates which - bits are required to extract the value. Note that CRF().alloc will - always be an instance of this. 
""" - regclass = CR_FIELD - def __init__(self, number): - self.number = number - self.alloc = self - def make_loc(self): - return CRF(self) - -crfs = map(BaseCRF, range(8)) - -class CRF(Register): - regclass = CR_FIELD - def __init__(self, crf): - Register.__init__(self) - self.alloc = crf - self.number = crf.number - self.info = (-1,-1) # (bit, negated) - def set_info(self, info): - assert len(info) == 2 - self.info = info - def make_loc(self): - # should never call this on a CRF, only a BaseCRF - raise NotImplementedError - def move_to_gpr(self, gpr): - bit, negated = self.info - return _CRF2GPR(gpr, self.alloc.number*4 + bit, negated) - def move_from_gpr(self, gpr): - # cmp2info['ne'] - self.set_info((2, 1)) - return _GPR2CRF(self, gpr) - def __repr__(self): - return 'crf' + str(self.number) + str(self.info) - -class CTR(Register): - regclass = CT_REGISTER - def move_from_gpr(self, gpr): - return _GPR2CTR(gpr) - -ctr = CTR() - -_insn_index = [0] - -class Insn(object): - ''' - result is the Var instance that holds the result, or None - result_regclass is the class of the register the result goes into - - reg_args is the vars that need to have registers allocated for them - reg_arg_regclasses is the type of register that needs to be allocated - ''' - def __init__(self): - self._magic_index = _insn_index[0] - _insn_index[0] += 1 - def __repr__(self): - return "<%s %d>" % (self.__class__.__name__, self._magic_index) - def emit(self, asm): - pass - -class Insn_GPR__GPR_GPR(Insn): - def __init__(self, methptr, result, args): - Insn.__init__(self) - self.methptr = methptr - - self.result = result - self.result_regclass = GP_REGISTER - self.reg_args = args - self.reg_arg_regclasses = [GP_REGISTER, GP_REGISTER] - - self.result_reg = None - self.arg_reg1 = None - self.arg_reg2 = None - - def allocate(self, allocator): - self.result_reg = allocator.loc_of(self.result) - self.arg_reg1 = allocator.loc_of(self.reg_args[0]) - self.arg_reg2 = allocator.loc_of(self.reg_args[1]) - - def __repr__(self): - if self.result_reg: - r = "%s@%s"%(self.result, self.result_reg) - else: - r = str(self.result) - if self.arg_reg1: - a1 = "%s@%s"%(self.reg_args[0], self.arg_reg1) - else: - a1 = str(self.reg_args[0]) - if self.arg_reg2: - a2 = "%s@%s"%(self.reg_args[1], self.arg_reg2) - else: - a2 = str(self.reg_args[1]) - return "<%s-%s %s %s, %s, %s>" % (self.__class__.__name__, self._magic_index, - self.methptr.im_func.func_name, - r, a1, a2) - - def emit(self, asm): - self.methptr(asm, - self.result_reg.number, - self.arg_reg1.number, - self.arg_reg2.number) - -class Insn_GPR__GPR_IMM(Insn): - def __init__(self, methptr, result, args): - Insn.__init__(self) - self.methptr = methptr - self.imm = args[1] - - self.result = result - self.result_regclass = GP_REGISTER - self.reg_args = [args[0]] - self.reg_arg_regclasses = [GP_REGISTER] - self.result_reg = None - self.arg_reg = None - def allocate(self, allocator): - self.result_reg = allocator.loc_of(self.result) - self.arg_reg = allocator.loc_of(self.reg_args[0]) - def __repr__(self): - if self.result_reg: - r = "%s@%s"%(self.result, self.result_reg) - else: - r = str(self.result) - if self.arg_reg: - a = "%s@%s"%(self.reg_args[0], self.arg_reg) - else: - a = str(self.reg_args[0]) - return "<%s-%d %s %s, %s, (%s)>" % (self.__class__.__name__, self._magic_index, - self.methptr.im_func.func_name, - r, a, self.imm.value) - - def emit(self, asm): - self.methptr(asm, - self.result_reg.number, - self.arg_reg.number, - self.imm.value) - -class Insn_GPR__GPR(Insn): - def 
__init__(self, methptr, result, arg): - Insn.__init__(self) - self.methptr = methptr - - self.result = result - self.result_regclass = GP_REGISTER - self.reg_args = [arg] - self.reg_arg_regclasses = [GP_REGISTER] - - self.result_reg = None - self.arg_reg = None - def allocate(self, allocator): - self.result_reg = allocator.loc_of(self.result) - self.arg_reg = allocator.loc_of(self.reg_args[0]) - def __repr__(self): - if self.result_reg: - r = "%s@%s"%(self.result, self.result_reg) - else: - r = str(self.result) - if self.arg_reg: - a = "%s@%s"%(self.reg_args[0], self.arg_reg) - else: - a = str(self.reg_args[0]) - return "<%s-%d %s %s, %s>" % (self.__class__.__name__, self._magic_index, - self.methptr.im_func.func_name, r, a) - def emit(self, asm): - self.methptr(asm, - self.result_reg.number, - self.arg_reg.number) - - -class Insn_GPR(Insn): - def __init__(self, methptr, result): - Insn.__init__(self) - self.methptr = methptr - - self.result = result - self.result_regclass = GP_REGISTER - self.reg_args = [] - self.reg_arg_regclasses = [] - self.result_reg = None - def allocate(self, allocator): - self.result_reg = allocator.loc_of(self.result) - def __repr__(self): - if self.result_reg: - r = "%s@%s"%(self.result, self.result_reg) - else: - r = str(self.result) - return "<%s-%d %s %s>" % (self.__class__.__name__, self._magic_index, - self.methptr.im_func.func_name, r) - def emit(self, asm): - self.methptr(asm, - self.result_reg.number) - -class Insn_GPR__IMM(Insn): - def __init__(self, methptr, result, args): - Insn.__init__(self) - self.methptr = methptr - self.imm = args[0] - - self.result = result - self.result_regclass = GP_REGISTER - self.reg_args = [] - self.reg_arg_regclasses = [] - self.result_reg = None - def allocate(self, allocator): - self.result_reg = allocator.loc_of(self.result) - def __repr__(self): - if self.result_reg: - r = "%s@%s"%(self.result, self.result_reg) - else: - r = str(self.result) - return "<%s-%d %s %s, (%s)>" % (self.__class__.__name__, self._magic_index, - self.methptr.im_func.func_name, r, - self.imm.value) - def emit(self, asm): - self.methptr(asm, - self.result_reg.number, - self.imm.value) - -class MoveCRB2GPR(Insn): - def __init__(self, result, gv_condition): - Insn.__init__(self) - self.result = result - self.result_regclass = GP_REGISTER - self.reg_args = [gv_condition] - self.reg_arg_regclasses = [CR_FIELD] - def allocate(self, allocator): - self.targetreg = allocator.loc_of(self.result) - self.crf = allocator.loc_of(self.reg_args[0]) - def emit(self, asm): - assert isinstance(self.crf, CRF) - bit, negated = self.crf.info - asm.mfcr(self.targetreg.number) - asm.extrwi(self.targetreg.number, self.targetreg.number, 1, self.crf.number*4+bit) - if negated: - asm.xori(self.targetreg.number, self.targetreg.number, 1) - -class Insn_None__GPR_GPR_IMM(Insn): - def __init__(self, methptr, args): - Insn.__init__(self) - self.methptr = methptr - self.imm = args[2] - - self.result = None - self.result_regclass = NO_REGISTER - self.reg_args = args[:2] - self.reg_arg_regclasses = [GP_REGISTER, GP_REGISTER] - def allocate(self, allocator): - self.reg1 = allocator.loc_of(self.reg_args[0]) - self.reg2 = allocator.loc_of(self.reg_args[1]) - def __repr__(self): - return "<%s %s %d>" % (self.__class__.__name__, self.methptr.im_func.func_name, self._magic_index) - - def emit(self, asm): - self.methptr(asm, - self.reg1.number, - self.reg2.number, - self.imm.value) - -class Insn_None__GPR_GPR_GPR(Insn): - def __init__(self, methptr, args): - Insn.__init__(self) - 
self.methptr = methptr - - self.result = None - self.result_regclass = NO_REGISTER - self.reg_args = args - self.reg_arg_regclasses = [GP_REGISTER, GP_REGISTER, GP_REGISTER] - def allocate(self, allocator): - self.reg1 = allocator.loc_of(self.reg_args[0]) - self.reg2 = allocator.loc_of(self.reg_args[1]) - self.reg3 = allocator.loc_of(self.reg_args[2]) - def __repr__(self): - return "<%s %s %d>" % (self.__class__.__name__, self.methptr.im_func.func_name, self._magic_index) - - def emit(self, asm): - self.methptr(asm, - self.reg1.number, - self.reg2.number, - self.reg3.number) - -class Extrwi(Insn): - def __init__(self, result, source, size, bit): - Insn.__init__(self) - - self.result = result - self.result_regclass = GP_REGISTER - self.reg_args = [source] - self.reg_arg_regclasses = [GP_REGISTER] - self.result_reg = None - self.arg_reg = None - - self.size = size - self.bit = bit - def allocate(self, allocator): - self.result_reg = allocator.loc_of(self.result) - self.arg_reg = allocator.loc_of(self.reg_args[0]) - def __repr__(self): - if self.result_reg: - r = "%s@%s"%(self.result, self.result_reg) - else: - r = str(self.result) - if self.arg_reg: - a = "%s@%s"%(self.reg_args[0], self.arg_reg) - else: - a = str(self.reg_args[0]) - return "<%s-%d extrwi %s, %s, %s, %s>" % (self.__class__.__name__, self._magic_index, - r, a, self.size, self.bit) - - def emit(self, asm): - asm.extrwi(self.result_reg.number, - self.arg_reg.number, - self.size, self.bit) - - -class CMPInsn(Insn): - def __init__(self, info, result): - Insn.__init__(self) - self.info = info - self.result = result - self.result_reg = None - - def allocate(self, allocator): - self.result_reg = allocator.loc_of(self.result) - assert isinstance(self.result_reg, CRF) - self.result_reg.set_info(self.info) - -class CMPW(CMPInsn): - def __init__(self, info, result, args): - CMPInsn.__init__(self, info, result) - self.result_regclass = CR_FIELD - self.reg_args = args - self.reg_arg_regclasses = [GP_REGISTER, GP_REGISTER] - self.arg_reg1 = None - self.arg_reg2 = None - - def allocate(self, allocator): - CMPInsn.allocate(self, allocator) - self.arg_reg1 = allocator.loc_of(self.reg_args[0]) - self.arg_reg2 = allocator.loc_of(self.reg_args[1]) - - def __repr__(self): - if self.result_reg: - r = "%s@%s"%(self.result, self.result_reg) - else: - r = str(self.result) - if self.arg_reg1: - a1 = "%s@%s"%(self.reg_args[0], self.arg_reg1) - else: - a1 = str(self.reg_args[0]) - if self.arg_reg2: - a2 = "%s@%s"%(self.reg_args[1], self.arg_reg2) - else: - a2 = str(self.reg_args[1]) - return "<%s-%d %s %s, %s, %s>"%(self.__class__.__name__, self._magic_index, - self.__class__.__name__.lower(), - r, a1, a2) - - def emit(self, asm): - asm.cmpw(self.result_reg.number, self.arg_reg1.number, self.arg_reg2.number) - -class CMPWL(CMPW): - def emit(self, asm): - asm.cmplw(self.result_reg.number, self.arg_reg1.number, self.arg_reg2.number) - -class CMPWI(CMPInsn): - def __init__(self, info, result, args): - CMPInsn.__init__(self, info, result) - self.imm = args[1] - self.result_regclass = CR_FIELD - self.reg_args = [args[0]] - self.reg_arg_regclasses = [GP_REGISTER] - self.arg_reg = None - - def allocate(self, allocator): - CMPInsn.allocate(self, allocator) - self.arg_reg = allocator.loc_of(self.reg_args[0]) - - def __repr__(self): - if self.result_reg: - r = "%s@%s"%(self.result, self.result_reg) - else: - r = str(self.result) - if self.arg_reg: - a = "%s@%s"%(self.reg_args[0], self.arg_reg) - else: - a = str(self.reg_args[0]) - return "<%s-%d %s %s, %s, 
(%s)>"%(self.__class__.__name__, self._magic_index, - self.__class__.__name__.lower(), - r, a, self.imm.value) - def emit(self, asm): - #print "CMPWI", asm.mc.tell() - asm.cmpwi(self.result_reg.number, self.arg_reg.number, self.imm.value) - -class CMPWLI(CMPWI): - def emit(self, asm): - asm.cmplwi(self.result_reg.number, self.arg_reg.number, self.imm.value) - - -## class MTCTR(Insn): -## def __init__(self, result, args): -## Insn.__init__(self) -## self.result = result -## self.result_regclass = CT_REGISTER - -## self.reg_args = args -## self.reg_arg_regclasses = [GP_REGISTER] - -## def allocate(self, allocator): -## self.arg_reg = allocator.loc_of(self.reg_args[0]) - -## def emit(self, asm): -## asm.mtctr(self.arg_reg.number) - -class Jump(Insn): - def __init__(self, gv_cond, targetbuilder, jump_if_true, jump_args_gv): - Insn.__init__(self) - self.gv_cond = gv_cond - self.jump_if_true = jump_if_true - - self.result = None - self.result_regclass = NO_REGISTER - self.reg_args = [gv_cond] - self.reg_arg_regclasses = [CR_FIELD] - self.crf = None - - self.jump_args_gv = jump_args_gv - self.targetbuilder = targetbuilder - def allocate(self, allocator): - self.crf = allocator.loc_of(self.reg_args[0]) - assert self.crf.info[0] != -1 - - assert self.targetbuilder.initial_var2loc is None - self.targetbuilder.initial_var2loc = {} - from pypy.jit.codegen.ppc.rgenop import Var - for gv_arg in self.jump_args_gv: - if isinstance(gv_arg, Var): - self.targetbuilder.initial_var2loc[gv_arg] = allocator.var2loc[gv_arg] - allocator.builders_to_tell_spill_offset_to.append(self.targetbuilder) - def __repr__(self): - if self.jump_if_true: - op = 'if_true' - else: - op = 'if_false' - if self.crf: - a = '%s@%s'%(self.reg_args[0], self.crf) - else: - a = self.reg_args[0] - return '<%s-%d %s %s>'%(self.__class__.__name__, self._magic_index, - op, a) - def emit(self, asm): - if self.targetbuilder.start: - asm.load_word(rSCRATCH, self.targetbuilder.start) - else: - self.targetbuilder.patch_start_here = asm.mc.tell() - asm.load_word(rSCRATCH, 0) - asm.mtctr(rSCRATCH) - bit, negated = self.crf.info - assert bit != -1 - if negated ^ self.jump_if_true: - BO = 12 # jump if relavent bit is set in the CR - else: - BO = 4 # jump if relavent bit is NOT set in the CR - asm.bcctr(BO, self.crf.number*4 + bit) - -class Label(Insn): - def __init__(self, label): - Insn.__init__(self) - self.reg_args = [] - self.reg_arg_regclasses = [] - self.result_regclass = NO_REGISTER - self.result = None - self.label = label - def allocate(self, allocator): - for gv in self.label.args_gv: - loc = allocator.loc_of(gv) - if isinstance(loc, CRF): - allocator.forget(gv, loc) - allocator.lru.remove(gv) - allocator.freeregs[loc.regclass].append(loc.alloc) - new_loc = allocator._allocate_reg(GP_REGISTER, gv) - allocator.lru.append(gv) - allocator.insns.append(loc.move_to_gpr(new_loc.number)) - loc = new_loc - self.label.arg_locations = [] - for gv in self.label.args_gv: - loc = allocator.loc_of(gv) - self.label.arg_locations.append(loc) - allocator.labels_to_tell_spill_offset_to.append(self.label) - def __repr__(self): - if hasattr(self.label, 'arg_locations'): - arg_locations = '[' + ', '.join( - ['%s@%s'%(gv, loc) for gv, loc in - zip(self.label.args_gv, self.label.arg_locations)]) + ']' - else: - arg_locations = str(self.label.args_gv) - return ''%(self._magic_index, - arg_locations) - def emit(self, asm): - self.label.startaddr = asm.mc.tell() - -class LoadFramePointer(Insn): - def __init__(self, result): - Insn.__init__(self) - self.reg_args = [] 
- self.reg_arg_regclasses = [] - self.result = result - self.result_regclass = GP_REGISTER - def allocate(self, allocator): - self.result_reg = allocator.loc_of(self.result) - def emit(self, asm): - asm.mr(self.result_reg.number, rFP) - -class CopyIntoStack(Insn): - def __init__(self, place, v): - Insn.__init__(self) - self.reg_args = [v] - self.reg_arg_regclasses = [GP_REGISTER] - self.result = None - self.result_regclass = NO_REGISTER - self.place = place - def allocate(self, allocator): - self.arg_reg = allocator.loc_of(self.reg_args[0]) - self.target_slot = allocator.spill_slot() - self.place.offset = self.target_slot.offset - def emit(self, asm): - asm.stw(self.arg_reg.number, rFP, self.target_slot.offset) - -class CopyOffStack(Insn): - def __init__(self, v, place): - Insn.__init__(self) - self.reg_args = [] - self.reg_arg_regclasses = [] - self.result = v - self.result_regclass = GP_REGISTER - self.place = place - def allocate(self, allocator): - self.result_reg = allocator.loc_of(self.result) - allocator.free_stack_slots.append(stack_slot(self.place.offset)) - def emit(self, asm): - asm.lwz(self.result_reg.number, rFP, self.place.offset) - -class SpillCalleeSaves(Insn): - def __init__(self): - Insn.__init__(self) - self.reg_args = [] - self.reg_arg_regclasses = [] - self.result = None - self.result_regclass = NO_REGISTER - def allocate(self, allocator): - # cough cough cough - callersave = gprs[3:13] - for v in allocator.var2loc: - loc = allocator.loc_of(v) - if loc in callersave: - allocator.spill(loc, v) - allocator.freeregs[GP_REGISTER].append(loc) - def emit(self, asm): - pass - -class LoadArg(Insn): - def __init__(self, argnumber, arg): - Insn.__init__(self) - self.reg_args = [] - self.reg_arg_regclasses = [] - self.result = None - self.result_regclass = NO_REGISTER - - self.argnumber = argnumber - self.arg = arg - def allocate(self, allocator): - from pypy.jit.codegen.ppc.rgenop import Var - if isinstance(self.arg, Var): - self.loc = allocator.loc_of(self.arg) - else: - self.loc = None - def emit(self, asm): - if self.argnumber < 8: # magic numbers 'r' us - targetreg = 3+self.argnumber - if self.loc is None: - self.arg.load_now(asm, gprs[targetreg]) - elif self.loc.is_register: - asm.mr(targetreg, self.loc.number) - else: - asm.lwz(targetreg, rFP, self.loc.offset) - else: - targetoffset = 24+self.argnumber*4 - if self.loc is None: - self.arg.load_now(asm, gprs[0]) - asm.stw(r0, r1, targetoffset) - elif self.loc.is_register: - asm.stw(self.loc.number, r1, targetoffset) - else: - asm.lwz(r0, rFP, self.loc.offset) - asm.stw(r0, r1, targetoffset) - -class CALL(Insn): - def __init__(self, result, target): - Insn.__init__(self) - from pypy.jit.codegen.ppc.rgenop import Var - if isinstance(target, Var): - self.reg_args = [target] - self.reg_arg_regclasses = [CT_REGISTER] - else: - self.reg_args = [] - self.reg_arg_regclasses = [] - self.target = target - self.result = result - self.result_regclass = GP_REGISTER - def allocate(self, allocator): - if self.reg_args: - assert allocator.loc_of(self.reg_args[0]) is ctr - self.resultreg = allocator.loc_of(self.result) - def emit(self, asm): - if not self.reg_args: - self.target.load_now(asm, gprs[0]) - asm.mtctr(0) - asm.bctrl() - asm.lwz(rFP, rSP, 0) - if self.resultreg != gprs[3]: - asm.mr(self.resultreg.number, 3) - - -class AllocTimeInsn(Insn): - def __init__(self): - Insn.__init__(self) - self.reg_args = [] - self.reg_arg_regclasses = [] - self.result_regclass = NO_REGISTER - self.result = None - -class Move(AllocTimeInsn): - def 
__init__(self, dest, src): - AllocTimeInsn.__init__(self) - self.dest = dest - self.src = src - def emit(self, asm): - asm.mr(self.dest.number, self.src.number) - -class Load(AllocTimeInsn): - def __init__(self, dest, const): - AllocTimeInsn.__init__(self) - self.dest = dest - self.const = const - def __repr__(self): - return ""%(self._magic_index, self.dest, self.const) - def emit(self, asm): - self.const.load_now(asm, self.dest) - -class Unspill(AllocTimeInsn): - """ A special instruction inserted by our register "allocator." It - indicates that we need to load a value from the stack into a register - because we spilled a particular value. """ - def __init__(self, var, reg, stack): - """ - var --- the var we spilled (a Var) - reg --- the reg we spilled it from (an integer) - offset --- the offset on the stack we spilled it to (an integer) - """ - AllocTimeInsn.__init__(self) - self.var = var - self.reg = reg - self.stack = stack - if not isinstance(self.reg, GPR): - assert isinstance(self.reg, CRF) or isinstance(self.reg, CTR) - self.moveinsn = self.reg.move_from_gpr(0) - else: - self.moveinsn = None - def __repr__(self): - return ''%(self._magic_index, self.var, self.reg, self.stack) - def emit(self, asm): - if isinstance(self.reg, GPR): - r = self.reg.number - else: - r = 0 - asm.lwz(r, rFP, self.stack.offset) - if self.moveinsn: - self.moveinsn.emit(asm) - -class Spill(AllocTimeInsn): - """ A special instruction inserted by our register "allocator." - It indicates that we need to store a value from the register into - the stack because we spilled a particular value.""" - def __init__(self, var, reg, stack): - """ - var --- the var we are spilling (a Var) - reg --- the reg we are spilling it from (an integer) - offset --- the offset on the stack we are spilling it to (an integer) - """ - AllocTimeInsn.__init__(self) - self.var = var - self.reg = reg - self.stack = stack - def __repr__(self): - return ''%(self._magic_index, self.var, self.stack, self.reg) - def emit(self, asm): - if isinstance(self.reg, GPR): - r = self.reg.number - else: - assert isinstance(self.reg, CRF) - self.reg.move_to_gpr(0).emit(asm) - r = 0 - #print 'spilling to', self.stack.offset - asm.stw(r, rFP, self.stack.offset) - -class _CRF2GPR(AllocTimeInsn): - def __init__(self, targetreg, bit, negated): - AllocTimeInsn.__init__(self) - self.targetreg = targetreg - self.bit = bit - self.negated = negated - def __repr__(self): - number = self.bit // 4 - bit = self.bit % 4 - return '' % ( - self._magic_index, self.targetreg, number, bit, self.negated) - def emit(self, asm): - asm.mfcr(self.targetreg) - asm.extrwi(self.targetreg, self.targetreg, 1, self.bit) - if self.negated: - asm.xori(self.targetreg, self.targetreg, 1) - -class _GPR2CRF(AllocTimeInsn): - def __init__(self, targetreg, fromreg): - AllocTimeInsn.__init__(self) - self.targetreg = targetreg - self.fromreg = fromreg - def __repr__(self): - return '' % ( - self._magic_index, self.targetreg, self.fromreg) - def emit(self, asm): - asm.cmpwi(self.targetreg.number, self.fromreg, 0) - -class _GPR2CTR(AllocTimeInsn): - def __init__(self, fromreg): - AllocTimeInsn.__init__(self) - self.fromreg = fromreg - def emit(self, asm): - asm.mtctr(self.fromreg) - -class Return(Insn): - """ Ensures the return value is in r3 """ - def __init__(self, var): - Insn.__init__(self) - self.var = var - self.reg_args = [self.var] - self.reg_arg_regclasses = [GP_REGISTER] - self.result = None - self.result_regclass = NO_REGISTER - self.reg = None - def allocate(self, allocator): - 
self.reg = allocator.loc_of(self.reg_args[0]) - def emit(self, asm): - if self.reg.number != 3: - asm.mr(r3, self.reg.number) - -class FakeUse(Insn): - """ A fake use of a var to get it into a register. And reserving - a condition register field.""" - def __init__(self, rvar, var): - Insn.__init__(self) - self.var = var - self.reg_args = [self.var] - self.reg_arg_regclasses = [GP_REGISTER] - self.result = rvar - self.result_regclass = CR_FIELD - def allocate(self, allocator): - pass - def emit(self, asm): - pass diff --git a/pypy/jit/backend/ppc/ppcgen/jump.py b/pypy/jit/backend/ppc/jump.py rename from pypy/jit/backend/ppc/ppcgen/jump.py rename to pypy/jit/backend/ppc/jump.py --- a/pypy/jit/backend/ppc/ppcgen/jump.py +++ b/pypy/jit/backend/ppc/jump.py @@ -76,7 +76,7 @@ src_locations2, dst_locations2, tmpreg2): # find and push the xmm stack locations from src_locations2 that # are going to be overwritten by dst_locations1 - from pypy.jit.backend.ppc.ppcgen.arch import WORD + from pypy.jit.backend.ppc.arch import WORD extrapushes = [] dst_keys = {} for loc in dst_locations1: diff --git a/pypy/jit/backend/ppc/ppcgen/locations.py b/pypy/jit/backend/ppc/locations.py rename from pypy/jit/backend/ppc/ppcgen/locations.py rename to pypy/jit/backend/ppc/locations.py diff --git a/pypy/jit/backend/ppc/ppcgen/opassembler.py b/pypy/jit/backend/ppc/opassembler.py rename from pypy/jit/backend/ppc/ppcgen/opassembler.py rename to pypy/jit/backend/ppc/opassembler.py --- a/pypy/jit/backend/ppc/ppcgen/opassembler.py +++ b/pypy/jit/backend/ppc/opassembler.py @@ -1,19 +1,19 @@ -from pypy.jit.backend.ppc.ppcgen.helper.assembler import (gen_emit_cmp_op, +from pypy.jit.backend.ppc.helper.assembler import (gen_emit_cmp_op, gen_emit_unary_cmp_op) -import pypy.jit.backend.ppc.ppcgen.condition as c -import pypy.jit.backend.ppc.ppcgen.register as r -from pypy.jit.backend.ppc.ppcgen.arch import (IS_PPC_32, WORD, +import pypy.jit.backend.ppc.condition as c +import pypy.jit.backend.ppc.register as r +from pypy.jit.backend.ppc.arch import (IS_PPC_32, WORD, GPR_SAVE_AREA, BACKCHAIN_SIZE, MAX_REG_PARAMS) from pypy.jit.metainterp.history import (JitCellToken, TargetToken, Box, AbstractFailDescr, FLOAT, INT, REF) from pypy.rlib.objectmodel import we_are_translated -from pypy.jit.backend.ppc.ppcgen.helper.assembler import (count_reg_args, +from pypy.jit.backend.ppc.helper.assembler import (count_reg_args, Saved_Volatiles) -from pypy.jit.backend.ppc.ppcgen.jump import remap_frame_layout -from pypy.jit.backend.ppc.ppcgen.codebuilder import OverwritingBuilder -from pypy.jit.backend.ppc.ppcgen.regalloc import TempPtr, TempInt +from pypy.jit.backend.ppc.jump import remap_frame_layout +from pypy.jit.backend.ppc.codebuilder import OverwritingBuilder +from pypy.jit.backend.ppc.regalloc import TempPtr, TempInt from pypy.jit.backend.llsupport import symbolic from pypy.rpython.lltypesystem import rstr, rffi, lltype from pypy.jit.metainterp.resoperation import rop @@ -854,24 +854,6 @@ self.emit_call(op, arglocs, regalloc) self.propagate_memoryerror_if_r3_is_null() - # from: ../x86/regalloc.py:750 - # called from regalloc - # XXX kill this function at some point - def _regalloc_malloc_varsize(self, size, size_box, vloc, vbox, - ofs_items_loc, regalloc, result): - if IS_PPC_32: - self.mc.mullw(size.value, size.value, vloc.value) - else: - self.mc.mulld(size.value, size.value, vloc.value) - if ofs_items_loc.is_imm(): - self.mc.addi(size.value, size.value, ofs_items_loc.value) - else: - self.mc.add(size.value, size.value, ofs_items_loc.value) - 
force_index = self.write_new_force_index() - regalloc.force_spill_var(vbox) - self._emit_call(force_index, self.malloc_func_addr, [size_box], regalloc, - result=result) - def set_vtable(self, box, vtable): if self.cpu.vtable_offset is not None: adr = rffi.cast(lltype.Signed, vtable) diff --git a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py b/pypy/jit/backend/ppc/ppc_assembler.py rename from pypy/jit/backend/ppc/ppcgen/ppc_assembler.py rename to pypy/jit/backend/ppc/ppc_assembler.py --- a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py +++ b/pypy/jit/backend/ppc/ppc_assembler.py @@ -1,27 +1,27 @@ import os import struct -from pypy.jit.backend.ppc.ppcgen.ppc_form import PPCForm as Form -from pypy.jit.backend.ppc.ppcgen.ppc_field import ppc_fields -from pypy.jit.backend.ppc.ppcgen.regalloc import (TempInt, PPCFrameManager, - Regalloc) -from pypy.jit.backend.ppc.ppcgen.assembler import Assembler -from pypy.jit.backend.ppc.ppcgen.opassembler import OpAssembler -from pypy.jit.backend.ppc.ppcgen.symbol_lookup import lookup -from pypy.jit.backend.ppc.ppcgen.codebuilder import PPCBuilder -from pypy.jit.backend.ppc.ppcgen.jump import remap_frame_layout -from pypy.jit.backend.ppc.ppcgen.arch import (IS_PPC_32, IS_PPC_64, WORD, - NONVOLATILES, MAX_REG_PARAMS, - GPR_SAVE_AREA, BACKCHAIN_SIZE, - FPR_SAVE_AREA, - FLOAT_INT_CONVERSION, FORCE_INDEX, - SIZE_LOAD_IMM_PATCH_SP) -from pypy.jit.backend.ppc.ppcgen.helper.assembler import (gen_emit_cmp_op, - encode32, encode64, - decode32, decode64, - count_reg_args, - Saved_Volatiles) -import pypy.jit.backend.ppc.ppcgen.register as r -import pypy.jit.backend.ppc.ppcgen.condition as c +from pypy.jit.backend.ppc.ppc_form import PPCForm as Form +from pypy.jit.backend.ppc.ppc_field import ppc_fields +from pypy.jit.backend.ppc.regalloc import (TempInt, PPCFrameManager, + Regalloc) +from pypy.jit.backend.ppc.assembler import Assembler +from pypy.jit.backend.ppc.opassembler import OpAssembler +from pypy.jit.backend.ppc.symbol_lookup import lookup +from pypy.jit.backend.ppc.codebuilder import PPCBuilder +from pypy.jit.backend.ppc.jump import remap_frame_layout +from pypy.jit.backend.ppc.arch import (IS_PPC_32, IS_PPC_64, WORD, + NONVOLATILES, MAX_REG_PARAMS, + GPR_SAVE_AREA, BACKCHAIN_SIZE, + FPR_SAVE_AREA, + FLOAT_INT_CONVERSION, FORCE_INDEX, + SIZE_LOAD_IMM_PATCH_SP) +from pypy.jit.backend.ppc.helper.assembler import (gen_emit_cmp_op, + encode32, encode64, + decode32, decode64, + count_reg_args, + Saved_Volatiles) +import pypy.jit.backend.ppc.register as r +import pypy.jit.backend.ppc.condition as c from pypy.jit.metainterp.history import (Const, ConstPtr, JitCellToken, TargetToken, AbstractFailDescr) from pypy.jit.backend.llsupport.asmmemmgr import (BlockBuilderMixin, @@ -40,7 +40,7 @@ from pypy.rpython.annlowlevel import llhelper from pypy.rlib.objectmodel import we_are_translated from pypy.rpython.lltypesystem.lloperation import llop -from pypy.jit.backend.ppc.ppcgen.locations import StackLocation, get_spp_offset +from pypy.jit.backend.ppc.locations import StackLocation, get_spp_offset memcpy_fn = rffi.llexternal('memcpy', [llmemory.Address, llmemory.Address, rffi.SIZE_T], lltype.Void, diff --git a/pypy/jit/backend/ppc/ppcgen/ppc_field.py b/pypy/jit/backend/ppc/ppc_field.py rename from pypy/jit/backend/ppc/ppcgen/ppc_field.py rename to pypy/jit/backend/ppc/ppc_field.py --- a/pypy/jit/backend/ppc/ppcgen/ppc_field.py +++ b/pypy/jit/backend/ppc/ppc_field.py @@ -1,5 +1,5 @@ -from pypy.jit.backend.ppc.ppcgen.field import Field -from pypy.jit.backend.ppc.ppcgen import 
regname +from pypy.jit.backend.ppc.field import Field +from pypy.jit.backend.ppc import regname fields = { # bit margins are *inclusive*! (and bit 0 is # most-significant, 31 least significant) diff --git a/pypy/jit/backend/ppc/ppcgen/ppc_form.py b/pypy/jit/backend/ppc/ppc_form.py rename from pypy/jit/backend/ppc/ppcgen/ppc_form.py rename to pypy/jit/backend/ppc/ppc_form.py --- a/pypy/jit/backend/ppc/ppcgen/ppc_form.py +++ b/pypy/jit/backend/ppc/ppc_form.py @@ -1,5 +1,5 @@ -from pypy.jit.backend.ppc.ppcgen.form import Form -from pypy.jit.backend.ppc.ppcgen.ppc_field import ppc_fields +from pypy.jit.backend.ppc.form import Form +from pypy.jit.backend.ppc.ppc_field import ppc_fields class PPCForm(Form): fieldmap = ppc_fields diff --git a/pypy/jit/backend/ppc/ppcgen/__init__.py b/pypy/jit/backend/ppc/ppcgen/__init__.py deleted file mode 100644 diff --git a/pypy/jit/backend/ppc/ppcgen/autopath.py b/pypy/jit/backend/ppc/ppcgen/autopath.py deleted file mode 100644 --- a/pypy/jit/backend/ppc/ppcgen/autopath.py +++ /dev/null @@ -1,114 +0,0 @@ -""" -self cloning, automatic path configuration - -copy this into any subdirectory of pypy from which scripts need -to be run, typically all of the test subdirs. -The idea is that any such script simply issues - - import autopath - -and this will make sure that the parent directory containing "pypy" -is in sys.path. - -If you modify the master "autopath.py" version (in pypy/tool/autopath.py) -you can directly run it which will copy itself on all autopath.py files -it finds under the pypy root directory. - -This module always provides these attributes: - - pypydir pypy root directory path - this_dir directory where this autopath.py resides - -""" - - -def __dirinfo(part): - """ return (partdir, this_dir) and insert parent of partdir - into sys.path. If the parent directories don't have the part - an EnvironmentError is raised.""" - - import sys, os - try: - head = this_dir = os.path.realpath(os.path.dirname(__file__)) - except NameError: - head = this_dir = os.path.realpath(os.path.dirname(sys.argv[0])) - - while head: - partdir = head - head, tail = os.path.split(head) - if tail == part: - break - else: - raise EnvironmentError, "'%s' missing in '%r'" % (partdir, this_dir) - - pypy_root = os.path.join(head, '') - try: - sys.path.remove(head) - except ValueError: - pass - sys.path.insert(0, head) - - munged = {} - for name, mod in sys.modules.items(): - if '.' in name: - continue - fn = getattr(mod, '__file__', None) - if not isinstance(fn, str): - continue - newname = os.path.splitext(os.path.basename(fn))[0] - if not newname.startswith(part + '.'): - continue - path = os.path.join(os.path.dirname(os.path.realpath(fn)), '') - if path.startswith(pypy_root) and newname != part: - modpaths = os.path.normpath(path[len(pypy_root):]).split(os.sep) - if newname != '__init__': - modpaths.append(newname) - modpath = '.'.join(modpaths) - if modpath not in sys.modules: - munged[modpath] = mod - - for name, mod in munged.iteritems(): - if name not in sys.modules: - sys.modules[name] = mod - if '.' 
in name: - prename = name[:name.rfind('.')] - postname = name[len(prename)+1:] - if prename not in sys.modules: - __import__(prename) - if not hasattr(sys.modules[prename], postname): - setattr(sys.modules[prename], postname, mod) - - return partdir, this_dir - -def __clone(): - """ clone master version of autopath.py into all subdirs """ - from os.path import join, walk - if not this_dir.endswith(join('pypy','tool')): - raise EnvironmentError("can only clone master version " - "'%s'" % join(pypydir, 'tool',_myname)) - - - def sync_walker(arg, dirname, fnames): - if _myname in fnames: - fn = join(dirname, _myname) - f = open(fn, 'rwb+') - try: - if f.read() == arg: - print "checkok", fn - else: - print "syncing", fn - f = open(fn, 'w') - f.write(arg) - finally: - f.close() - s = open(join(pypydir, 'tool', _myname), 'rb').read() - walk(pypydir, sync_walker, s) - -_myname = 'autopath.py' - -# set guaranteed attributes - -pypydir, this_dir = __dirinfo('pypy') - -if __name__ == '__main__': - __clone() diff --git a/pypy/jit/backend/ppc/ppcgen/pystructs.py b/pypy/jit/backend/ppc/ppcgen/pystructs.py deleted file mode 100644 --- a/pypy/jit/backend/ppc/ppcgen/pystructs.py +++ /dev/null @@ -1,22 +0,0 @@ -class PyVarObject(object): - ob_size = 8 - -class PyObject(object): - ob_refcnt = 0 - ob_type = 4 - -class PyTupleObject(object): - ob_item = 12 - -class PyTypeObject(object): - tp_name = 12 - tp_basicsize = 16 - tp_itemsize = 20 - tp_dealloc = 24 - -class PyFloatObject(object): - ob_fval = 8 - -class PyIntObject(object): - ob_ival = 8 - diff --git a/pypy/jit/backend/ppc/ppcgen/rassemblermaker.py b/pypy/jit/backend/ppc/ppcgen/rassemblermaker.py deleted file mode 100644 --- a/pypy/jit/backend/ppc/ppcgen/rassemblermaker.py +++ /dev/null @@ -1,63 +0,0 @@ -from pypy.tool.sourcetools import compile2 -from pypy.rlib.rarithmetic import r_uint -from pypy.jit.backend.ppc.ppcgen.form import IDesc, IDupDesc - -## "opcode": ( 0, 5), -## "rA": (11, 15, 'unsigned', regname._R), -## "rB": (16, 20, 'unsigned', regname._R), -## "Rc": (31, 31), -## "rD": ( 6, 10, 'unsigned', regname._R), -## "OE": (21, 21), -## "XO2": (22, 30), - -## XO = Form("rD", "rA", "rB", "OE", "XO2", "Rc") - -## add = XO(31, XO2=266, OE=0, Rc=0) - -## def add(rD, rA, rB): -## v = 0 -## v |= (31&(2**(5-0+1)-1)) << (32-5-1) -## ... 
-## return v - -def make_func(name, desc): - sig = [] - fieldvalues = [] - for field in desc.fields: - if field in desc.specializations: - fieldvalues.append((field, desc.specializations[field])) - else: - sig.append(field.name) - fieldvalues.append((field, field.name)) - if isinstance(desc, IDupDesc): - for destfield, srcfield in desc.dupfields.iteritems(): - fieldvalues.append((destfield, srcfield.name)) - body = ['v = r_uint(0)'] - assert 'v' not in sig # that wouldn't be funny - #body.append('print %r'%name + ', ' + ', '.join(["'%s:', %s"%(s, s) for s in sig])) - for field, value in fieldvalues: - if field.name == 'spr': - body.append('spr = (%s&31) << 5 | (%s >> 5 & 31)'%(value, value)) - value = 'spr' - body.append('v |= (%3s & r_uint(%#05x)) << %d'%(value, - field.mask, - (32 - field.right - 1))) - body.append('self.emit(v)') - src = 'def %s(self, %s):\n %s'%(name, ', '.join(sig), '\n '.join(body)) - d = {'r_uint':r_uint} - #print src - exec compile2(src) in d - return d[name] - -def make_rassembler(cls): - bases = [make_rassembler(b) for b in cls.__bases__] - ns = {} - for k, v in cls.__dict__.iteritems(): - if isinstance(v, IDesc): - v = make_func(k, v) - ns[k] = v - rcls = type('R' + cls.__name__, tuple(bases), ns) - def emit(self, value): - self.insts.append(value) - rcls.emit = emit - return rcls diff --git a/pypy/jit/backend/ppc/ppcgen/regalloc.py b/pypy/jit/backend/ppc/ppcgen/regalloc.py deleted file mode 100644 --- a/pypy/jit/backend/ppc/ppcgen/regalloc.py +++ /dev/null @@ -1,989 +0,0 @@ -from pypy.jit.backend.llsupport.regalloc import (RegisterManager, FrameManager, - TempBox, compute_vars_longevity) -from pypy.jit.backend.ppc.ppcgen.arch import (WORD, MY_COPY_OF_REGS) -from pypy.jit.backend.ppc.ppcgen.jump import (remap_frame_layout_mixed, - remap_frame_layout) -from pypy.jit.backend.ppc.ppcgen.locations import imm -from pypy.jit.backend.ppc.ppcgen.helper.regalloc import (_check_imm_arg, - check_imm_box, - prepare_cmp_op, - prepare_unary_int_op, - prepare_binary_int_op, - prepare_binary_int_op_with_imm, - prepare_unary_cmp) -from pypy.jit.metainterp.history import (INT, REF, FLOAT, Const, ConstInt, - ConstPtr, Box) -from pypy.jit.metainterp.history import JitCellToken, TargetToken -from pypy.jit.metainterp.resoperation import rop -from pypy.jit.backend.ppc.ppcgen import locations -from pypy.rpython.lltypesystem import rffi, lltype, rstr -from pypy.jit.backend.llsupport import symbolic -from pypy.jit.backend.llsupport.descr import ArrayDescr -from pypy.jit.codewriter.effectinfo import EffectInfo -import pypy.jit.backend.ppc.ppcgen.register as r -from pypy.jit.codewriter import heaptracker -from pypy.jit.backend.llsupport.descr import unpack_arraydescr -from pypy.jit.backend.llsupport.descr import unpack_fielddescr -from pypy.jit.backend.llsupport.descr import unpack_interiorfielddescr - -# xxx hack: set a default value for TargetToken._arm_loop_code. If 0, we know -# that it is a LABEL that was not compiled yet. 
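# (A condensed sketch of how this sentinel is consumed: the
# compute_hint_frame_locations code further down in this file only trusts a
# target's argument locations once the LABEL has actually been assembled.
# FakeDescr and label_already_compiled are illustrative names only, not part
# of the backend.)
#
#     class FakeDescr(object):
#         _ppc_loop_code = 0            # 0 => LABEL not compiled yet
#
#     def label_already_compiled(descr):
#         return descr._ppc_loop_code != 0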
-TargetToken._ppc_loop_code = 0 - -class TempInt(TempBox): - type = INT - - def __repr__(self): - return "" % (id(self),) - -class TempPtr(TempBox): - type = REF - - def __repr__(self): - return "" % (id(self),) - -class PPCRegisterManager(RegisterManager): - all_regs = r.MANAGED_REGS - box_types = None # or a list of acceptable types - no_lower_byte_regs = all_regs - save_around_call_regs = r.VOLATILES - - REGLOC_TO_COPY_AREA_OFS = { - r.r0: MY_COPY_OF_REGS + 0 * WORD, - r.r2: MY_COPY_OF_REGS + 1 * WORD, - r.r3: MY_COPY_OF_REGS + 2 * WORD, - r.r4: MY_COPY_OF_REGS + 3 * WORD, - r.r5: MY_COPY_OF_REGS + 4 * WORD, - r.r6: MY_COPY_OF_REGS + 5 * WORD, - r.r7: MY_COPY_OF_REGS + 6 * WORD, - r.r8: MY_COPY_OF_REGS + 7 * WORD, - r.r9: MY_COPY_OF_REGS + 8 * WORD, - r.r10: MY_COPY_OF_REGS + 9 * WORD, - r.r11: MY_COPY_OF_REGS + 10 * WORD, - r.r12: MY_COPY_OF_REGS + 11 * WORD, - r.r13: MY_COPY_OF_REGS + 12 * WORD, - r.r14: MY_COPY_OF_REGS + 13 * WORD, - r.r15: MY_COPY_OF_REGS + 14 * WORD, - r.r16: MY_COPY_OF_REGS + 15 * WORD, - r.r17: MY_COPY_OF_REGS + 16 * WORD, - r.r18: MY_COPY_OF_REGS + 17 * WORD, - r.r19: MY_COPY_OF_REGS + 18 * WORD, - r.r20: MY_COPY_OF_REGS + 19 * WORD, - r.r21: MY_COPY_OF_REGS + 20 * WORD, - r.r22: MY_COPY_OF_REGS + 21 * WORD, - r.r23: MY_COPY_OF_REGS + 22 * WORD, - r.r24: MY_COPY_OF_REGS + 23 * WORD, - r.r25: MY_COPY_OF_REGS + 24 * WORD, - r.r26: MY_COPY_OF_REGS + 25 * WORD, - r.r27: MY_COPY_OF_REGS + 26 * WORD, - r.r28: MY_COPY_OF_REGS + 27 * WORD, - r.r29: MY_COPY_OF_REGS + 28 * WORD, - r.r30: MY_COPY_OF_REGS + 29 * WORD, - r.r31: MY_COPY_OF_REGS + 30 * WORD, - } - - def __init__(self, longevity, frame_manager=None, assembler=None): - RegisterManager.__init__(self, longevity, frame_manager, assembler) - - def call_result_location(self, v): - return r.r3 - - def convert_to_imm(self, c): - if isinstance(c, ConstInt): - val = rffi.cast(lltype.Signed, c.value) - return locations.ImmLocation(val) - else: - assert isinstance(c, ConstPtr) - return locations.ImmLocation(rffi.cast(lltype.Signed, c.value)) - - def ensure_value_is_boxed(self, thing, forbidden_vars=None): - loc = None - if isinstance(thing, Const): - if isinstance(thing, ConstPtr): - tp = REF - else: - tp = INT - loc = self.get_scratch_reg(tp, forbidden_vars=self.temp_boxes - + forbidden_vars) - immvalue = self.convert_to_imm(thing) - self.assembler.load(loc, immvalue) - else: - loc = self.make_sure_var_in_reg(thing, - forbidden_vars=forbidden_vars) - return loc - - def allocate_scratch_reg(self, type=INT, selected_reg=None, forbidden_vars=None): - """Allocate a scratch register, possibly spilling a managed register. 
- This register is freed after emitting the current operation and can not - be spilled""" - box = TempBox() - reg = self.force_allocate_reg(box, - selected_reg=selected_reg, - forbidden_vars=forbidden_vars) - return reg, box - - def get_scratch_reg(self, type=INT, forbidden_vars=[], selected_reg=None): - assert type == INT or type == REF - box = TempBox() - self.temp_boxes.append(box) - reg = self.force_allocate_reg(box, forbidden_vars=forbidden_vars, - selected_reg=selected_reg) - return reg - -class PPCFrameManager(FrameManager): - def __init__(self): - FrameManager.__init__(self) - self.used = [] - - @staticmethod - def frame_pos(loc, type): - num_words = PPCFrameManager.frame_size(type) - if type == FLOAT: - assert 0, "not implemented yet" - return locations.StackLocation(loc, num_words=num_words, type=type) - - @staticmethod - def frame_size(type): - if type == FLOAT: - assert 0, "TODO" - return 1 - - @staticmethod - def get_loc_index(loc): - assert loc.is_stack() - if loc.type == FLOAT: - assert 0, "not implemented yet" - return loc.position - -class Regalloc(object): - - def __init__(self, frame_manager=None, assembler=None): - self.cpu = assembler.cpu - self.frame_manager = frame_manager - self.assembler = assembler - self.jump_target_descr = None - self.final_jump_op = None - - def _prepare(self, inputargs, operations): - longevity, last_real_usage = compute_vars_longevity( - inputargs, operations) - self.longevity = longevity - self.last_real_usage = last_real_usage - fm = self.frame_manager - asm = self.assembler - self.rm = PPCRegisterManager(longevity, fm, asm) - - def prepare_loop(self, inputargs, operations): - self._prepare(inputargs, operations) - self._set_initial_bindings(inputargs) - self.possibly_free_vars(list(inputargs)) - - def prepare_bridge(self, inputargs, arglocs, ops): - self._prepare(inputargs, ops) - self._update_bindings(arglocs, inputargs) - - def _set_initial_bindings(self, inputargs): - arg_index = 0 - count = 0 - n_register_args = len(r.PARAM_REGS) - cur_frame_pos = -self.assembler.OFFSET_STACK_ARGS // WORD - for box in inputargs: - assert isinstance(box, Box) - # handle inputargs in argument registers - if box.type == FLOAT and arg_index % 2 != 0: - assert 0, "not implemented yet" - if arg_index < n_register_args: - if box.type == FLOAT: - assert 0, "not implemented yet" - else: - loc = r.PARAM_REGS[arg_index] - self.try_allocate_reg(box, selected_reg=loc) - arg_index += 1 - else: - # treat stack args as stack locations with a negative offset - if box.type == FLOAT: - assert 0, "not implemented yet" - else: - cur_frame_pos -= 1 - count += 1 - loc = self.frame_manager.frame_pos(cur_frame_pos, box.type) - self.frame_manager.set_binding(box, loc) - - def _update_bindings(self, locs, inputargs): - used = {} - i = 0 - for loc in locs: - arg = inputargs[i] - i += 1 - if loc.is_reg(): - self.rm.reg_bindings[arg] = loc - elif loc.is_vfp_reg(): - assert 0, "not supported" - else: - assert loc.is_stack() - self.frame_manager.set_binding(arg, loc) - used[loc] = None - - # XXX combine with x86 code and move to llsupport - self.rm.free_regs = [] - for reg in self.rm.all_regs: - if reg not in used: - self.rm.free_regs.append(reg) - # note: we need to make a copy of inputargs because possibly_free_vars - # is also used on op args, which is a non-resizable list - self.possibly_free_vars(list(inputargs)) - - def possibly_free_var(self, var): - self.rm.possibly_free_var(var) - - def possibly_free_vars(self, vars): - for var in vars: - self.possibly_free_var(var) - - def 
possibly_free_vars_for_op(self, op): - for i in range(op.numargs()): - var = op.getarg(i) - if var is not None: - self.possibly_free_var(var) - - def try_allocate_reg(self, v, selected_reg=None, need_lower_byte=False): - return self.rm.try_allocate_reg(v, selected_reg, need_lower_byte) - - def force_allocate_reg(self, var, forbidden_vars=[], selected_reg=None, - need_lower_byte=False): - return self.rm.force_allocate_reg(var, forbidden_vars, selected_reg, - need_lower_byte) - - def allocate_scratch_reg(self, type=INT, forbidden_vars=[], selected_reg=None): - assert type == INT # XXX extend this once floats are supported - return self.rm.allocate_scratch_reg(type=type, - forbidden_vars=forbidden_vars, - selected_reg=selected_reg) - - def _check_invariants(self): - self.rm._check_invariants() - - def loc(self, var): - if var.type == FLOAT: - assert 0, "not implemented yet" - return self.rm.loc(var) - - def position(self): - return self.rm.position - - def next_instruction(self): - self.rm.next_instruction() - - def force_spill_var(self, var): - if var.type == FLOAT: - assert 0, "not implemented yet" - else: - self.rm.force_spill_var(var) - - def before_call(self, force_store=[], save_all_regs=False): - self.rm.before_call(force_store, save_all_regs) - - def after_call(self, v): - if v.type == FLOAT: - assert 0, "not implemented yet" - else: - return self.rm.after_call(v) - - def call_result_location(self, v): - if v.type == FLOAT: - assert 0, "not implemented yet" - else: - return self.rm.call_result_location(v) - - def _ensure_value_is_boxed(self, thing, forbidden_vars=[]): - if thing.type == FLOAT: - assert 0, "not implemented yet" - else: - return self.rm.ensure_value_is_boxed(thing, forbidden_vars) - - def get_scratch_reg(self, type, forbidden_vars=[], selected_reg=None): - if type == FLOAT: - assert 0, "not implemented yet" - else: - return self.rm.get_scratch_reg(type, forbidden_vars, selected_reg) - - def free_temp_vars(self): - self.rm.free_temp_vars() - - def make_sure_var_in_reg(self, var, forbidden_vars=[], - selected_reg=None, need_lower_byte=False): - if var.type == FLOAT: - assert 0, "not implemented yet" - else: - return self.rm.make_sure_var_in_reg(var, forbidden_vars, - selected_reg, need_lower_byte) - - def convert_to_imm(self, value): - if isinstance(value, ConstInt): - return self.rm.convert_to_imm(value) - else: - assert 0, "not implemented yet" - - def _sync_var(self, v): - if v.type == FLOAT: - assert 0, "not implemented yet" - else: - self.rm._sync_var(v) - - # ****************************************************** - # * P R E P A R E O P E R A T I O N S * - # ****************************************************** - - - def void(self, op): - return [] - - prepare_int_add = prepare_binary_int_op_with_imm() - prepare_int_sub = prepare_binary_int_op_with_imm() - prepare_int_floordiv = prepare_binary_int_op_with_imm() - - prepare_int_mul = prepare_binary_int_op() - prepare_int_mod = prepare_binary_int_op() - prepare_int_and = prepare_binary_int_op() - prepare_int_or = prepare_binary_int_op() - prepare_int_xor = prepare_binary_int_op() - prepare_int_lshift = prepare_binary_int_op() - prepare_int_rshift = prepare_binary_int_op() - prepare_uint_rshift = prepare_binary_int_op() - prepare_uint_floordiv = prepare_binary_int_op() - - prepare_int_add_ovf = prepare_binary_int_op() - prepare_int_sub_ovf = prepare_binary_int_op() - prepare_int_mul_ovf = prepare_binary_int_op() - - prepare_int_neg = prepare_unary_int_op() - prepare_int_invert = prepare_unary_int_op() - - 
prepare_int_le = prepare_cmp_op() - prepare_int_lt = prepare_cmp_op() - prepare_int_ge = prepare_cmp_op() - prepare_int_gt = prepare_cmp_op() - prepare_int_eq = prepare_cmp_op() - prepare_int_ne = prepare_cmp_op() - - prepare_ptr_eq = prepare_int_eq - prepare_ptr_ne = prepare_int_ne - - prepare_instance_ptr_eq = prepare_ptr_eq - prepare_instance_ptr_ne = prepare_ptr_ne - - prepare_uint_lt = prepare_cmp_op() - prepare_uint_le = prepare_cmp_op() - prepare_uint_gt = prepare_cmp_op() - prepare_uint_ge = prepare_cmp_op() - - prepare_int_is_true = prepare_unary_cmp() - prepare_int_is_zero = prepare_unary_cmp() - - def prepare_finish(self, op): - args = [None] * (op.numargs() + 1) - for i in range(op.numargs()): - arg = op.getarg(i) - if arg: - args[i] = self.loc(arg) - self.possibly_free_var(arg) - n = self.cpu.get_fail_descr_number(op.getdescr()) - args[-1] = imm(n) - return args - - def prepare_call_malloc_gc(self, op): - args = [imm(rffi.cast(lltype.Signed, op.getarg(0).getint()))] - return args - - def _prepare_guard(self, op, args=None): - if args is None: - args = [] - args.append(imm(len(self.frame_manager.used))) - for arg in op.getfailargs(): - if arg: - args.append(self.loc(arg)) - else: - args.append(None) - return args - - def prepare_guard_true(self, op): - l0 = self._ensure_value_is_boxed(op.getarg(0)) - args = self._prepare_guard(op, [l0]) - return args - - prepare_guard_false = prepare_guard_true - prepare_guard_nonnull = prepare_guard_true - prepare_guard_isnull = prepare_guard_true - - def prepare_guard_no_overflow(self, op): - locs = self._prepare_guard(op) - self.possibly_free_vars(op.getfailargs()) - return locs - - prepare_guard_overflow = prepare_guard_no_overflow - prepare_guard_not_invalidated = prepare_guard_no_overflow - - def prepare_guard_exception(self, op): - boxes = list(op.getarglist()) - arg0 = ConstInt(rffi.cast(lltype.Signed, op.getarg(0).getint())) - loc = self._ensure_value_is_boxed(arg0) - loc1 = self.get_scratch_reg(INT, boxes) - if op.result in self.longevity: - resloc = self.force_allocate_reg(op.result, boxes) - self.possibly_free_var(op.result) - else: - resloc = None - pos_exc_value = imm(self.cpu.pos_exc_value()) - pos_exception = imm(self.cpu.pos_exception()) - arglocs = self._prepare_guard(op, - [loc, loc1, resloc, pos_exc_value, pos_exception]) - return arglocs - - def prepare_guard_no_exception(self, op): - loc = self._ensure_value_is_boxed( - ConstInt(self.cpu.pos_exception())) - arglocs = self._prepare_guard(op, [loc]) - return arglocs - - def prepare_guard_value(self, op): - boxes = list(op.getarglist()) - a0, a1 = boxes - imm_a1 = check_imm_box(a1) - l0 = self._ensure_value_is_boxed(a0, boxes) - if not imm_a1: - l1 = self._ensure_value_is_boxed(a1, boxes) - else: - l1 = self.make_sure_var_in_reg(a1, boxes) - assert op.result is None - arglocs = self._prepare_guard(op, [l0, l1]) - self.possibly_free_vars(op.getarglist()) - self.possibly_free_vars(op.getfailargs()) - return arglocs - - def prepare_guard_class(self, op): - assert isinstance(op.getarg(0), Box) - boxes = list(op.getarglist()) - x = self._ensure_value_is_boxed(boxes[0], boxes) - y = self.get_scratch_reg(REF, forbidden_vars=boxes) - y_val = rffi.cast(lltype.Signed, op.getarg(1).getint()) - self.assembler.load(y, imm(y_val)) - offset = self.cpu.vtable_offset - assert offset is not None - offset_loc = self._ensure_value_is_boxed(ConstInt(offset), boxes) - arglocs = self._prepare_guard(op, [x, y, offset_loc]) - return arglocs - - prepare_guard_nonnull_class = prepare_guard_class - - 
def compute_hint_frame_locations(self, operations): - # optimization only: fill in the 'hint_frame_locations' dictionary - # of rm and xrm based on the JUMP at the end of the loop, by looking - # at where we would like the boxes to be after the jump. - op = operations[-1] - if op.getopnum() != rop.JUMP: - return - self.final_jump_op = op - descr = op.getdescr() - assert isinstance(descr, TargetToken) - if descr._ppc_loop_code != 0: - # if the target LABEL was already compiled, i.e. if it belongs - # to some already-compiled piece of code - self._compute_hint_frame_locations_from_descr(descr) - #else: - # The loop ends in a JUMP going back to a LABEL in the same loop. - # We cannot fill 'hint_frame_locations' immediately, but we can - # wait until the corresponding prepare_op_label() to know where the - # we would like the boxes to be after the jump. - - def _compute_hint_frame_locations_from_descr(self, descr): - arglocs = self.assembler.target_arglocs(descr) - jump_op = self.final_jump_op - assert len(arglocs) == jump_op.numargs() - for i in range(jump_op.numargs()): - box = jump_op.getarg(i) - if isinstance(box, Box): - loc = arglocs[i] - if loc is not None and loc.is_stack(): - self.frame_manager.hint_frame_locations[box] = loc - - def prepare_guard_call_release_gil(self, op, guard_op): - # first, close the stack in the sense of the asmgcc GC root tracker - gcrootmap = self.cpu.gc_ll_descr.gcrootmap - if gcrootmap: - arglocs = [] - argboxes = [] - for i in range(op.numargs()): - loc, box = self._ensure_value_is_boxed(op.getarg(i), argboxes) - arglocs.append(loc) - argboxes.append(box) - self.assembler.call_release_gil(gcrootmap, arglocs, fcond) - self.possibly_free_vars(argboxes) - # do the call - faildescr = guard_op.getdescr() - fail_index = self.cpu.get_fail_descr_number(faildescr) - self.assembler._write_fail_index(fail_index) - args = [imm(rffi.cast(lltype.Signed, op.getarg(0).getint()))] - self.assembler.emit_call(op, args, self, fail_index) - # then reopen the stack - if gcrootmap: - self.assembler.call_reacquire_gil(gcrootmap, r.r0, fcond) - locs = self._prepare_guard(guard_op) - self.possibly_free_vars(guard_op.getfailargs()) - return locs - - def prepare_jump(self, op): - descr = op.getdescr() - assert isinstance(descr, TargetToken) - self.jump_target_descr = descr - arglocs = self.assembler.target_arglocs(descr) - - # get temporary locs - tmploc = r.SCRATCH - - # Part about non-floats - src_locations1 = [] - dst_locations1 = [] - - # Build the four lists - for i in range(op.numargs()): - box = op.getarg(i) - src_loc = self.loc(box) - dst_loc = arglocs[i] - if box.type != FLOAT: - src_locations1.append(src_loc) - dst_locations1.append(dst_loc) - else: - assert 0, "not implemented yet" - - remap_frame_layout(self.assembler, src_locations1, - dst_locations1, tmploc) - return [] - - def prepare_setfield_gc(self, op): - boxes = list(op.getarglist()) - a0, a1 = boxes - ofs, size, sign = unpack_fielddescr(op.getdescr()) - base_loc = self._ensure_value_is_boxed(a0, boxes) - value_loc = self._ensure_value_is_boxed(a1, boxes) - if _check_imm_arg(ofs): - ofs_loc = imm(ofs) - else: - ofs_loc = self.get_scratch_reg(INT, boxes) - self.assembler.load(ofs_loc, imm(ofs)) - return [value_loc, base_loc, ofs_loc, imm(size)] - - prepare_setfield_raw = prepare_setfield_gc - - def prepare_getfield_gc(self, op): - a0 = op.getarg(0) - ofs, size, sign = unpack_fielddescr(op.getdescr()) - base_loc = self._ensure_value_is_boxed(a0) - immofs = imm(ofs) - if _check_imm_arg(ofs): - ofs_loc = immofs - else: 
- ofs_loc = self.get_scratch_reg(INT, [a0]) - self.assembler.load(ofs_loc, immofs) - self.possibly_free_vars_for_op(op) - self.free_temp_vars() - res = self.force_allocate_reg(op.result) - return [base_loc, ofs_loc, res, imm(size)] - - prepare_getfield_raw = prepare_getfield_gc - prepare_getfield_raw_pure = prepare_getfield_gc - prepare_getfield_gc_pure = prepare_getfield_gc - - def prepare_getinteriorfield_gc(self, op): - t = unpack_interiorfielddescr(op.getdescr()) - ofs, itemsize, fieldsize, sign = t - args = op.getarglist() - base_loc = self._ensure_value_is_boxed(op.getarg(0), args) - index_loc = self._ensure_value_is_boxed(op.getarg(1), args) - c_ofs = ConstInt(ofs) - if _check_imm_arg(c_ofs): - ofs_loc = imm(ofs) - else: - ofs_loc = self._ensure_value_is_boxed(c_ofs, args) - self.possibly_free_vars_for_op(op) - self.free_temp_vars() - result_loc = self.force_allocate_reg(op.result) - self.possibly_free_var(op.result) - return [base_loc, index_loc, result_loc, ofs_loc, imm(ofs), - imm(itemsize), imm(fieldsize)] - - def prepare_setinteriorfield_gc(self, op): - t = unpack_interiorfielddescr(op.getdescr()) - ofs, itemsize, fieldsize, sign = t - args = op.getarglist() - base_loc = self._ensure_value_is_boxed(op.getarg(0), args) - index_loc = self._ensure_value_is_boxed(op.getarg(1), args) - value_loc = self._ensure_value_is_boxed(op.getarg(2), args) - c_ofs = ConstInt(ofs) - if _check_imm_arg(c_ofs): - ofs_loc = imm(ofs) - else: - ofs_loc = self._ensure_value_is_boxed(c_ofs, args) - return [base_loc, index_loc, value_loc, ofs_loc, imm(ofs), - imm(itemsize), imm(fieldsize)] - - def prepare_arraylen_gc(self, op): - arraydescr = op.getdescr() - assert isinstance(arraydescr, ArrayDescr) - ofs = arraydescr.lendescr.offset - arg = op.getarg(0) - base_loc = self._ensure_value_is_boxed(arg) - self.possibly_free_vars_for_op(op) - self.free_temp_vars() - res = self.force_allocate_reg(op.result) - return [res, base_loc, imm(ofs)] - - def prepare_setarrayitem_gc(self, op): - a0, a1, a2 = list(op.getarglist()) - size, ofs, _ = unpack_arraydescr(op.getdescr()) - scale = get_scale(size) - args = op.getarglist() - base_loc = self._ensure_value_is_boxed(a0, args) - ofs_loc = self._ensure_value_is_boxed(a1, args) - scratch_loc = self.rm.get_scratch_reg(INT, [base_loc, ofs_loc]) - value_loc = self._ensure_value_is_boxed(a2, args) - assert _check_imm_arg(ofs) - return [value_loc, base_loc, ofs_loc, scratch_loc, - imm(scale), imm(ofs)] - - prepare_setarrayitem_raw = prepare_setarrayitem_gc - - def prepare_getarrayitem_gc(self, op): - a0, a1 = boxes = list(op.getarglist()) - size, ofs, _ = unpack_arraydescr(op.getdescr()) - scale = get_scale(size) - base_loc = self._ensure_value_is_boxed(a0, boxes) - ofs_loc = self._ensure_value_is_boxed(a1, boxes) - scratch_loc = self.rm.get_scratch_reg(INT, [base_loc, ofs_loc]) - self.possibly_free_vars_for_op(op) - self.free_temp_vars() - res = self.force_allocate_reg(op.result) - assert _check_imm_arg(ofs) - return [res, base_loc, ofs_loc, scratch_loc, - imm(scale), imm(ofs)] - - prepare_getarrayitem_raw = prepare_getarrayitem_gc - prepare_getarrayitem_gc_pure = prepare_getarrayitem_gc - - def prepare_strlen(self, op): - args = op.getarglist() - l0 = self._ensure_value_is_boxed(op.getarg(0)) - basesize, itemsize, ofs_length = symbolic.get_array_token(rstr.STR, - self.cpu.translate_support_code) - immofs = imm(ofs_length) - if _check_imm_arg(ofs_length): - l1 = immofs - else: - l1 = self.get_scratch_reg(INT, args) - self.assembler.load(l1, immofs) - - 
self.possibly_free_vars_for_op(op) - self.free_temp_vars() - - res = self.force_allocate_reg(op.result) - self.possibly_free_var(op.result) - return [l0, l1, res] - - def prepare_strgetitem(self, op): - boxes = list(op.getarglist()) - base_loc = self._ensure_value_is_boxed(boxes[0]) - - a1 = boxes[1] - imm_a1 = check_imm_box(a1) - if imm_a1: - ofs_loc = self.make_sure_var_in_reg(a1, boxes) - else: - ofs_loc = self._ensure_value_is_boxed(a1, boxes) - - self.possibly_free_vars_for_op(op) - self.free_temp_vars() - res = self.force_allocate_reg(op.result) - - basesize, itemsize, ofs_length = symbolic.get_array_token(rstr.STR, - self.cpu.translate_support_code) - assert itemsize == 1 - return [res, base_loc, ofs_loc, imm(basesize)] - - def prepare_strsetitem(self, op): - boxes = list(op.getarglist()) - base_loc = self._ensure_value_is_boxed(boxes[0], boxes) - ofs_loc = self._ensure_value_is_boxed(boxes[1], boxes) - value_loc = self._ensure_value_is_boxed(boxes[2], boxes) - basesize, itemsize, ofs_length = symbolic.get_array_token(rstr.STR, - self.cpu.translate_support_code) - assert itemsize == 1 - return [value_loc, base_loc, ofs_loc, imm(basesize)] - - prepare_copystrcontent = void - prepare_copyunicodecontent = void - - def prepare_unicodelen(self, op): - l0 = self._ensure_value_is_boxed(op.getarg(0)) - basesize, itemsize, ofs_length = symbolic.get_array_token(rstr.UNICODE, - self.cpu.translate_support_code) - immofs = imm(ofs_length) - if _check_imm_arg(ofs_length): - l1 = immofs - else: - l1 = self.get_scratch_reg(INT, [op.getarg(0)]) - self.assembler.load(l1, immofs) - - self.possibly_free_vars_for_op(op) - self.free_temp_vars() - res = self.force_allocate_reg(op.result) - return [l0, l1, res] - - def prepare_unicodegetitem(self, op): - boxes = list(op.getarglist()) - base_loc = self._ensure_value_is_boxed(boxes[0], boxes) - ofs_loc = self._ensure_value_is_boxed(boxes[1], boxes) - - self.possibly_free_vars_for_op(op) - self.free_temp_vars() - res = self.force_allocate_reg(op.result) - - basesize, itemsize, ofs_length = symbolic.get_array_token(rstr.UNICODE, - self.cpu.translate_support_code) - scale = itemsize / 2 - return [res, base_loc, ofs_loc, - imm(scale), imm(basesize), imm(itemsize)] - - def prepare_unicodesetitem(self, op): - boxes = list(op.getarglist()) - base_loc = self._ensure_value_is_boxed(boxes[0], boxes) - ofs_loc = self._ensure_value_is_boxed(boxes[1], boxes) - value_loc = self._ensure_value_is_boxed(boxes[2], boxes) - basesize, itemsize, ofs_length = symbolic.get_array_token(rstr.UNICODE, - self.cpu.translate_support_code) - scale = itemsize / 2 - return [value_loc, base_loc, ofs_loc, - imm(scale), imm(basesize), imm(itemsize)] - - def prepare_same_as(self, op): - arg = op.getarg(0) - imm_arg = _check_imm_arg(arg) - if imm_arg: - argloc = self.make_sure_var_in_reg(arg) - else: - argloc = self._ensure_value_is_boxed(arg) - self.possibly_free_vars_for_op(op) - self.free_temp_vars() - resloc = self.force_allocate_reg(op.result) - return [argloc, resloc] - - prepare_cast_ptr_to_int = prepare_same_as - prepare_cast_int_to_ptr = prepare_same_as - - def prepare_call(self, op): - effectinfo = op.getdescr().get_extra_info() - if effectinfo is not None: - oopspecindex = effectinfo.oopspecindex - if oopspecindex == EffectInfo.OS_MATH_SQRT: - args = self.prepare_op_math_sqrt(op, fcond) - self.assembler.emit_op_math_sqrt(op, args, self, fcond) - return - args = [imm(rffi.cast(lltype.Signed, op.getarg(0).getint()))] - return args - - prepare_debug_merge_point = void - 
prepare_jit_debug = void - - def prepare_cond_call_gc_wb(self, op): - assert op.result is None - N = op.numargs() - # we force all arguments in a reg (unless they are Consts), - # because it will be needed anyway by the following setfield_gc - # or setarrayitem_gc. It avoids loading it twice from the memory. - arglocs = [] - argboxes = [] - for i in range(N): - loc = self._ensure_value_is_boxed(op.getarg(i), argboxes) - arglocs.append(loc) - self.rm.possibly_free_vars(argboxes) - return arglocs - - prepare_cond_call_gc_wb_array = prepare_cond_call_gc_wb - - def prepare_force_token(self, op): - res_loc = self.force_allocate_reg(op.result) - self.possibly_free_var(op.result) - return [res_loc] - - def prepare_label(self, op): - # XXX big refactoring needed? - descr = op.getdescr() - assert isinstance(descr, TargetToken) - inputargs = op.getarglist() - arglocs = [None] * len(inputargs) - # - # we use force_spill() on the boxes that are not going to be really - # used any more in the loop, but that are kept alive anyway - # by being in a next LABEL's or a JUMP's argument or fail_args - # of some guard - position = self.rm.position - for arg in inputargs: - assert isinstance(arg, Box) - if self.last_real_usage.get(arg, -1) <= position: - self.force_spill_var(arg) - - # - for i in range(len(inputargs)): - arg = inputargs[i] - assert isinstance(arg, Box) - loc = self.loc(arg) - arglocs[i] = loc - if loc.is_reg(): - self.frame_manager.mark_as_free(arg) - # - descr._ppc_arglocs = arglocs - descr._ppc_loop_code = self.assembler.mc.currpos() - descr._ppc_clt = self.assembler.current_clt - self.assembler.target_tokens_currently_compiling[descr] = None - self.possibly_free_vars_for_op(op) - # - # if the LABEL's descr is precisely the target of the JUMP at the - # end of the same loop, i.e. if what we are compiling is a single - # loop that ends up jumping to this LABEL, then we can now provide - # the hints about the expected position of the spilled variables. 
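# (Condensed sketch of what the hint provides, mirroring
# _compute_hint_frame_locations_from_descr above; jump_op, arglocs and
# hint_frame_locations are the names used in this file, the loop itself is
# only an illustration: every JUMP argument whose target location is a stack
# slot is recorded, so that if the box later has to be spilled it is spilled
# straight into the slot the jump expects and needs no extra move at the end
# of the loop.)
#
#     for i in range(jump_op.numargs()):
#         box = jump_op.getarg(i)
#         loc = arglocs[i]
#         if isinstance(box, Box) and loc is not None and loc.is_stack():
#             frame_manager.hint_frame_locations[box] = loc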
- jump_op = self.final_jump_op - if jump_op is not None and jump_op.getdescr() is descr: - self._compute_hint_frame_locations_from_descr(descr) - - def prepare_guard_call_may_force(self, op, guard_op): - faildescr = guard_op.getdescr() - fail_index = self.cpu.get_fail_descr_number(faildescr) - self.assembler._write_fail_index(fail_index) - args = [imm(rffi.cast(lltype.Signed, op.getarg(0).getint()))] - for v in guard_op.getfailargs(): - if v in self.rm.reg_bindings: - self.force_spill_var(v) - self.assembler.emit_call(op, args, self, fail_index) - locs = self._prepare_guard(guard_op) - self.possibly_free_vars(guard_op.getfailargs()) - return locs - - def prepare_guard_call_assembler(self, op, guard_op): - descr = op.getdescr() - assert isinstance(descr, JitCellToken) - jd = descr.outermost_jitdriver_sd - assert jd is not None - #size = jd.portal_calldescr.get_result_size(self.cpu.translate_support_code) - size = jd.portal_calldescr.get_result_size() - vable_index = jd.index_of_virtualizable - if vable_index >= 0: - self._sync_var(op.getarg(vable_index)) - vable = self.frame_manager.loc(op.getarg(vable_index)) - else: - vable = imm(0) - self.possibly_free_vars(guard_op.getfailargs()) - return [imm(size), vable] - - def _prepare_args_for_new_op(self, new_args): - gc_ll_descr = self.cpu.gc_ll_descr - args = gc_ll_descr.args_for_new(new_args) - arglocs = [] - for i in range(len(args)): - arg = args[i] - t = TempInt() - l = self.force_allocate_reg(t, selected_reg=r.MANAGED_REGS[i]) - self.assembler.load(l, imm(arg)) - arglocs.append(t) - return arglocs - - def _malloc_varsize(self, ofs_items, ofs_length, itemsize, op): - v = op.getarg(0) - res_v = op.result - boxes = [v, res_v] - itemsize_box = ConstInt(itemsize) - ofs_items_box = ConstInt(ofs_items) - if _check_imm_arg(ofs_items_box): - ofs_items_loc = self.convert_to_imm(ofs_items_box) - else: - ofs_items_loc, ofs_items_box = self._ensure_value_is_boxed(ofs_items_box, boxes) - boxes.append(ofs_items_box) - vloc, vbox = self._ensure_value_is_boxed(v, [res_v]) - boxes.append(vbox) - size, size_box = self._ensure_value_is_boxed(itemsize_box, boxes) - boxes.append(size_box) - self.assembler._regalloc_malloc_varsize(size, size_box, - vloc, vbox, ofs_items_loc, self, res_v) - base_loc = self.make_sure_var_in_reg(res_v) - - value_loc, vbox = self._ensure_value_is_boxed(v, [res_v]) - boxes.append(vbox) - self.possibly_free_vars(boxes) - assert value_loc.is_reg() - assert base_loc.is_reg() - return [value_loc, base_loc, imm(ofs_length)] - - # from ../x86/regalloc.py:791 - def _unpack_fielddescr(self, fielddescr): - assert isinstance(fielddescr, BaseFieldDescr) - ofs = fielddescr.offset - size = fielddescr.get_field_size(self.cpu.translate_support_code) - ptr = fielddescr.is_pointer_field() - return ofs, size, ptr - - # from ../x86/regalloc.py:779 - def _unpack_arraydescr(self, arraydescr): - assert isinstance(arraydescr, BaseArrayDescr) - cpu = self.cpu - ofs_length = arraydescr.get_ofs_length(cpu.translate_support_code) - ofs = arraydescr.get_base_size(cpu.translate_support_code) - size = arraydescr.get_item_size(cpu.translate_support_code) - ptr = arraydescr.is_array_of_pointers() - scale = 0 - while (1 << scale) < size: - scale += 1 - assert (1 << scale) == size - return size, scale, ofs, ofs_length, ptr - - def prepare_force_spill(self, op): - self.force_spill_var(op.getarg(0)) - return [] - -def add_none_argument(fn): - return lambda self, op: fn(self, op, None) - -def notimplemented(self, op): - raise NotImplementedError, op - -def 
notimplemented_with_guard(self, op, guard_op): - - raise NotImplementedError, op - -operations = [notimplemented] * (rop._LAST + 1) -operations_with_guard = [notimplemented_with_guard] * (rop._LAST + 1) - -def get_scale(size): - scale = 0 - while (1 << scale) < size: - scale += 1 - assert (1 << scale) == size - return scale - -for key, value in rop.__dict__.items(): - key = key.lower() - if key.startswith('_'): - continue - methname = 'prepare_%s' % key - if hasattr(Regalloc, methname): - func = getattr(Regalloc, methname).im_func - operations[value] = func - -for key, value in rop.__dict__.items(): - key = key.lower() - if key.startswith('_'): - continue - methname = 'prepare_guard_%s' % key - if hasattr(Regalloc, methname): - func = getattr(Regalloc, methname).im_func - operations_with_guard[value] = func - operations[value] = add_none_argument(func) - -Regalloc.operations = operations -Regalloc.operations_with_guard = operations_with_guard diff --git a/pypy/jit/backend/ppc/ppcgen/test/__init__.py b/pypy/jit/backend/ppc/ppcgen/test/__init__.py deleted file mode 100644 diff --git a/pypy/jit/backend/ppc/ppcgen/test/test_rassemblermaker.py b/pypy/jit/backend/ppc/ppcgen/test/test_rassemblermaker.py deleted file mode 100644 --- a/pypy/jit/backend/ppc/ppcgen/test/test_rassemblermaker.py +++ /dev/null @@ -1,39 +0,0 @@ -from pypy.jit.backend.ppc.ppcgen.rassemblermaker import make_rassembler -from pypy.jit.backend.ppc.ppcgen.ppc_assembler import PPCAssembler - -RPPCAssembler = make_rassembler(PPCAssembler) - -_a = PPCAssembler() -_a.add(3, 3, 4) -add_r3_r3_r4 = _a.insts[0].assemble() - -def test_simple(): - ra = RPPCAssembler() - ra.add(3, 3, 4) - assert ra.insts == [add_r3_r3_r4] - -def test_rtyped(): - from pypy.rpython.test.test_llinterp import interpret - def f(): - ra = RPPCAssembler() - ra.add(3, 3, 4) - ra.lwz(1, 1, 1) # ensure that high bit doesn't produce long but r_uint - return ra.insts[0] - res = interpret(f, []) - assert res == add_r3_r3_r4 - -def test_mnemonic(): - mrs = [] - for A in PPCAssembler, RPPCAssembler: - a = A() - a.mr(3, 4) - mrs.append(a.insts[0]) - assert mrs[0].assemble() == mrs[1] - -def test_spr_coding(): - mrs = [] - for A in PPCAssembler, RPPCAssembler: - a = A() - a.mtctr(3) - mrs.append(a.insts[0]) - assert mrs[0].assemble() == mrs[1] diff --git a/pypy/jit/backend/ppc/regalloc.py b/pypy/jit/backend/ppc/regalloc.py --- a/pypy/jit/backend/ppc/regalloc.py +++ b/pypy/jit/backend/ppc/regalloc.py @@ -1,213 +1,948 @@ -from pypy.jit.codegen.ppc.instruction import \ - gprs, fprs, crfs, ctr, \ - NO_REGISTER, GP_REGISTER, FP_REGISTER, CR_FIELD, CT_REGISTER, \ - CMPInsn, Spill, Unspill, stack_slot, \ - rSCRATCH +from pypy.jit.backend.llsupport.regalloc import (RegisterManager, FrameManager, + TempBox, compute_vars_longevity) +from pypy.jit.backend.ppc.arch import (WORD, MY_COPY_OF_REGS) +from pypy.jit.backend.ppc.jump import (remap_frame_layout_mixed, + remap_frame_layout) +from pypy.jit.backend.ppc.locations import imm +from pypy.jit.backend.ppc.helper.regalloc import (_check_imm_arg, + check_imm_box, + prepare_cmp_op, + prepare_unary_int_op, + prepare_binary_int_op, + prepare_binary_int_op_with_imm, + prepare_unary_cmp) +from pypy.jit.metainterp.history import (INT, REF, FLOAT, Const, ConstInt, + ConstPtr, Box) +from pypy.jit.metainterp.history import JitCellToken, TargetToken +from pypy.jit.metainterp.resoperation import rop +from pypy.jit.backend.ppc import locations +from pypy.rpython.lltypesystem import rffi, lltype, rstr +from pypy.jit.backend.llsupport import 
symbolic +from pypy.jit.backend.llsupport.descr import ArrayDescr +from pypy.jit.codewriter.effectinfo import EffectInfo +import pypy.jit.backend.ppc.register as r +from pypy.jit.codewriter import heaptracker +from pypy.jit.backend.llsupport.descr import unpack_arraydescr +from pypy.jit.backend.llsupport.descr import unpack_fielddescr +from pypy.jit.backend.llsupport.descr import unpack_interiorfielddescr -from pypy.jit.codegen.ppc.conftest import option +# xxx hack: set a default value for TargetToken._arm_loop_code. If 0, we know +# that it is a LABEL that was not compiled yet. +TargetToken._ppc_loop_code = 0 -DEBUG_PRINT = option.debug_print +class TempInt(TempBox): + type = INT -class RegisterAllocation: - def __init__(self, freeregs, initial_mapping, initial_spill_offset): - if DEBUG_PRINT: - print - print "RegisterAllocation __init__", initial_mapping.items() + def __repr__(self): + return "" % (id(self),) - self.insns = [] # output list of instructions +class TempPtr(TempBox): + type = REF - # registers with dead values - self.freeregs = {} - for regcls in freeregs: - self.freeregs[regcls] = freeregs[regcls][:] + def __repr__(self): + return "" % (id(self),) - self.var2loc = {} # maps Vars to AllocationSlots - self.lru = [] # least-recently-used list of vars; first is oldest. - # contains all vars in registers, and no vars on stack +class PPCRegisterManager(RegisterManager): + all_regs = r.MANAGED_REGS + box_types = None # or a list of acceptable types + no_lower_byte_regs = all_regs + save_around_call_regs = r.VOLATILES - self.spill_offset = initial_spill_offset # where to put next spilled - # value, relative to rFP, - # measured in bytes - self.free_stack_slots = [] # a free list for stack slots + REGLOC_TO_COPY_AREA_OFS = { + r.r0: MY_COPY_OF_REGS + 0 * WORD, + r.r2: MY_COPY_OF_REGS + 1 * WORD, + r.r3: MY_COPY_OF_REGS + 2 * WORD, + r.r4: MY_COPY_OF_REGS + 3 * WORD, + r.r5: MY_COPY_OF_REGS + 4 * WORD, + r.r6: MY_COPY_OF_REGS + 5 * WORD, + r.r7: MY_COPY_OF_REGS + 6 * WORD, + r.r8: MY_COPY_OF_REGS + 7 * WORD, + r.r9: MY_COPY_OF_REGS + 8 * WORD, + r.r10: MY_COPY_OF_REGS + 9 * WORD, + r.r11: MY_COPY_OF_REGS + 10 * WORD, + r.r12: MY_COPY_OF_REGS + 11 * WORD, + r.r13: MY_COPY_OF_REGS + 12 * WORD, + r.r14: MY_COPY_OF_REGS + 13 * WORD, + r.r15: MY_COPY_OF_REGS + 14 * WORD, + r.r16: MY_COPY_OF_REGS + 15 * WORD, + r.r17: MY_COPY_OF_REGS + 16 * WORD, + r.r18: MY_COPY_OF_REGS + 17 * WORD, + r.r19: MY_COPY_OF_REGS + 18 * WORD, + r.r20: MY_COPY_OF_REGS + 19 * WORD, + r.r21: MY_COPY_OF_REGS + 20 * WORD, + r.r22: MY_COPY_OF_REGS + 21 * WORD, + r.r23: MY_COPY_OF_REGS + 22 * WORD, + r.r24: MY_COPY_OF_REGS + 23 * WORD, + r.r25: MY_COPY_OF_REGS + 24 * WORD, + r.r26: MY_COPY_OF_REGS + 25 * WORD, + r.r27: MY_COPY_OF_REGS + 26 * WORD, + r.r28: MY_COPY_OF_REGS + 27 * WORD, + r.r29: MY_COPY_OF_REGS + 28 * WORD, + r.r30: MY_COPY_OF_REGS + 29 * WORD, + r.r31: MY_COPY_OF_REGS + 30 * WORD, + } - # go through the initial mapping and initialize the data structures - for var, loc in initial_mapping.iteritems(): - self.set(var, loc) - if loc.is_register: - if loc.alloc in self.freeregs[loc.regclass]: - self.freeregs[loc.regclass].remove(loc.alloc) - self.lru.append(var) + def __init__(self, longevity, frame_manager=None, assembler=None): + RegisterManager.__init__(self, longevity, frame_manager, assembler) + + def call_result_location(self, v): + return r.r3 + + def convert_to_imm(self, c): + if isinstance(c, ConstInt): + val = rffi.cast(lltype.Signed, c.value) + return locations.ImmLocation(val) + else: + assert 
isinstance(c, ConstPtr) + return locations.ImmLocation(rffi.cast(lltype.Signed, c.value)) + + def ensure_value_is_boxed(self, thing, forbidden_vars=None): + loc = None + if isinstance(thing, Const): + if isinstance(thing, ConstPtr): + tp = REF else: - assert loc.offset >= self.spill_offset + tp = INT + loc = self.get_scratch_reg(tp, forbidden_vars=self.temp_boxes + + forbidden_vars) + immvalue = self.convert_to_imm(thing) + self.assembler.load(loc, immvalue) + else: + loc = self.make_sure_var_in_reg(thing, + forbidden_vars=self.temp_boxes + forbidden_vars) + return loc - self.labels_to_tell_spill_offset_to = [] - self.builders_to_tell_spill_offset_to = [] + def allocate_scratch_reg(self, type=INT, selected_reg=None, forbidden_vars=None): + """Allocate a scratch register, possibly spilling a managed register. + This register is freed after emitting the current operation and can not + be spilled""" + box = TempBox() + reg = self.force_allocate_reg(box, + selected_reg=selected_reg, + forbidden_vars=forbidden_vars) + return reg, box - def set(self, var, loc): - assert var not in self.var2loc - self.var2loc[var] = loc - - def forget(self, var, loc): - assert self.var2loc[var] is loc - del self.var2loc[var] - - def loc_of(self, var): - return self.var2loc[var] - - def spill_slot(self): - """ Returns an unused stack location. """ - if self.free_stack_slots: - return self.free_stack_slots.pop() - else: - self.spill_offset -= 4 - return stack_slot(self.spill_offset) - - def spill(self, reg, argtospill): - if argtospill in self.lru: - self.lru.remove(argtospill) - self.forget(argtospill, reg) - spillslot = self.spill_slot() - if reg.regclass != GP_REGISTER: - self.insns.append(reg.move_to_gpr(0)) - reg = gprs[0] - self.insns.append(Spill(argtospill, reg, spillslot)) - self.set(argtospill, spillslot) - - def _allocate_reg(self, regclass, newarg): - - # check if there is a register available - freeregs = self.freeregs[regclass] - - if freeregs: - reg = freeregs.pop().make_loc() - self.set(newarg, reg) - if DEBUG_PRINT: - print "allocate_reg: Putting %r into fresh register %r" % (newarg, reg) - return reg - - # if not, find something to spill - for i in range(len(self.lru)): - argtospill = self.lru[i] - reg = self.loc_of(argtospill) - assert reg.is_register - if reg.regclass == regclass: - del self.lru[i] - break - else: - assert 0 - - # Move the value we are spilling onto the stack, both in the - # data structures and in the instructions: - - self.spill(reg, argtospill) - - if DEBUG_PRINT: - print "allocate_reg: Spilled %r from %r to %r." % (argtospill, reg, self.loc_of(argtospill)) - - # update data structures to put newarg into the register - reg = reg.alloc.make_loc() - self.set(newarg, reg) - if DEBUG_PRINT: - print "allocate_reg: Put %r in stolen reg %r." 
% (newarg, reg) + def get_scratch_reg(self, type=INT, forbidden_vars=[], selected_reg=None): + assert type == INT or type == REF + box = TempBox() + self.temp_boxes.append(box) + reg = self.force_allocate_reg(box, forbidden_vars=forbidden_vars, + selected_reg=selected_reg) return reg - def _promote(self, arg): - if arg in self.lru: - self.lru.remove(arg) - self.lru.append(arg) +class PPCFrameManager(FrameManager): + def __init__(self): + FrameManager.__init__(self) + self.used = [] - def allocate_for_insns(self, insns): - from pypy.jit.codegen.ppc.rgenop import Var + @staticmethod + def frame_pos(loc, type): + num_words = PPCFrameManager.frame_size(type) + if type == FLOAT: + assert 0, "not implemented yet" + return locations.StackLocation(loc, num_words=num_words, type=type) - insns2 = [] + @staticmethod + def frame_size(type): + if type == FLOAT: + assert 0, "TODO" + return 1 - # make a pass through the instructions, loading constants into - # Vars where needed. - for insn in insns: - newargs = [] - for arg in insn.reg_args: - if not isinstance(arg, Var): - newarg = Var() - arg.load(insns2, newarg) - newargs.append(newarg) + @staticmethod + def get_loc_index(loc): + assert loc.is_stack() + if loc.type == FLOAT: + assert 0, "not implemented yet" + return loc.position + +class Regalloc(object): + + def __init__(self, frame_manager=None, assembler=None): + self.cpu = assembler.cpu + self.frame_manager = frame_manager + self.assembler = assembler + self.jump_target_descr = None + self.final_jump_op = None + + def _prepare(self, inputargs, operations): + longevity, last_real_usage = compute_vars_longevity( + inputargs, operations) + self.longevity = longevity + self.last_real_usage = last_real_usage + fm = self.frame_manager + asm = self.assembler + self.rm = PPCRegisterManager(longevity, fm, asm) + + def prepare_loop(self, inputargs, operations): + self._prepare(inputargs, operations) + self._set_initial_bindings(inputargs) + self.possibly_free_vars(list(inputargs)) + + def prepare_bridge(self, inputargs, arglocs, ops): + self._prepare(inputargs, ops) + self._update_bindings(arglocs, inputargs) + + def _set_initial_bindings(self, inputargs): + arg_index = 0 + count = 0 + n_register_args = len(r.PARAM_REGS) + cur_frame_pos = -self.assembler.OFFSET_STACK_ARGS // WORD + 1 + for box in inputargs: + assert isinstance(box, Box) + # handle inputargs in argument registers + if box.type == FLOAT and arg_index % 2 != 0: + assert 0, "not implemented yet" + if arg_index < n_register_args: + if box.type == FLOAT: + assert 0, "not implemented yet" else: - newargs.append(arg) - insn.reg_args[0:len(newargs)] = newargs - insns2.append(insn) + loc = r.PARAM_REGS[arg_index] + self.try_allocate_reg(box, selected_reg=loc) + arg_index += 1 + else: + # treat stack args as stack locations with a negative offset + if box.type == FLOAT: + assert 0, "not implemented yet" + else: + cur_frame_pos -= 1 + count += 1 + loc = self.frame_manager.frame_pos(cur_frame_pos, box.type) + self.frame_manager.set_binding(box, loc) - # Walk through instructions in forward order - for insn in insns2: + def _update_bindings(self, locs, inputargs): + used = {} + i = 0 + for loc in locs: + arg = inputargs[i] + i += 1 + if loc.is_reg(): + self.rm.reg_bindings[arg] = loc + elif loc.is_vfp_reg(): + assert 0, "not supported" + else: + assert loc.is_stack() + self.frame_manager.set_binding(arg, loc) + used[loc] = None - if DEBUG_PRINT: - print "Processing instruction" - print insn - print "LRU list was:", self.lru - print 'located at', 
[self.loc_of(a) for a in self.lru] + # XXX combine with x86 code and move to llsupport + self.rm.free_regs = [] + for reg in self.rm.all_regs: + if reg not in used: + self.rm.free_regs.append(reg) + # note: we need to make a copy of inputargs because possibly_free_vars + # is also used on op args, which is a non-resizable list + self.possibly_free_vars(list(inputargs)) - # put things into the lru - for arg in insn.reg_args: - self._promote(arg) - if insn.result: - self._promote(insn.result) - if DEBUG_PRINT: - print "LRU list is now:", self.lru - print 'located at', [self.loc_of(a) for a in self.lru if a is not insn.result] + def possibly_free_var(self, var): + self.rm.possibly_free_var(var) - # We need to allocate a register for each used - # argument that is not already in one - for i in range(len(insn.reg_args)): - arg = insn.reg_args[i] - argcls = insn.reg_arg_regclasses[i] - if DEBUG_PRINT: - print "Allocating register for", arg, "..." - argloc = self.loc_of(arg) - if DEBUG_PRINT: - print "currently in", argloc + def possibly_free_vars(self, vars): + for var in vars: + self.possibly_free_var(var) - if not argloc.is_register: - # It has no register now because it has been spilled - self.forget(arg, argloc) - newargloc = self._allocate_reg(argcls, arg) - if DEBUG_PRINT: - print "unspilling to", newargloc - self.insns.append(Unspill(arg, newargloc, argloc)) - self.free_stack_slots.append(argloc) - elif argloc.regclass != argcls: - # it's in the wrong kind of register - # (this code is excessively confusing) - self.forget(arg, argloc) - self.freeregs[argloc.regclass].append(argloc.alloc) - if argloc.regclass != GP_REGISTER: - if argcls == GP_REGISTER: - gpr = self._allocate_reg(GP_REGISTER, arg).number - else: - gpr = rSCRATCH - self.insns.append( - argloc.move_to_gpr(gpr)) - else: - gpr = argloc.number - if argcls != GP_REGISTER: - newargloc = self._allocate_reg(argcls, arg) - self.insns.append( - newargloc.move_from_gpr(gpr)) - else: - if DEBUG_PRINT: - print "it was in ", argloc - pass + def possibly_free_vars_for_op(self, op): + for i in range(op.numargs()): + var = op.getarg(i) + if var is not None: + self.possibly_free_var(var) - # Need to allocate a register for the destination - assert not insn.result or insn.result not in self.var2loc - if insn.result_regclass != NO_REGISTER: - if DEBUG_PRINT: - print "Allocating register for result %r..." 
% (insn.result,) - resultreg = self._allocate_reg(insn.result_regclass, insn.result) - insn.allocate(self) - if DEBUG_PRINT: - print insn - print - self.insns.append(insn) - #print 'allocation done' - #for i in self.insns: - # print i - #print self.var2loc - return self.insns + def try_allocate_reg(self, v, selected_reg=None, need_lower_byte=False): + return self.rm.try_allocate_reg(v, selected_reg, need_lower_byte) + + def force_allocate_reg(self, var, forbidden_vars=[], selected_reg=None, + need_lower_byte=False): + return self.rm.force_allocate_reg(var, forbidden_vars, selected_reg, + need_lower_byte) + + def allocate_scratch_reg(self, type=INT, forbidden_vars=[], selected_reg=None): + assert type == INT # XXX extend this once floats are supported + return self.rm.allocate_scratch_reg(type=type, + forbidden_vars=forbidden_vars, + selected_reg=selected_reg) + + def _check_invariants(self): + self.rm._check_invariants() + + def loc(self, var): + if var.type == FLOAT: + assert 0, "not implemented yet" + return self.rm.loc(var) + + def position(self): + return self.rm.position + + def next_instruction(self): + self.rm.next_instruction() + + def force_spill_var(self, var): + if var.type == FLOAT: + assert 0, "not implemented yet" + else: + self.rm.force_spill_var(var) + + def before_call(self, force_store=[], save_all_regs=False): + self.rm.before_call(force_store, save_all_regs) + + def after_call(self, v): + if v.type == FLOAT: + assert 0, "not implemented yet" + else: + return self.rm.after_call(v) + + def call_result_location(self, v): + if v.type == FLOAT: + assert 0, "not implemented yet" + else: + return self.rm.call_result_location(v) + + def _ensure_value_is_boxed(self, thing, forbidden_vars=[]): + if thing.type == FLOAT: + assert 0, "not implemented yet" + else: + return self.rm.ensure_value_is_boxed(thing, forbidden_vars) + + def get_scratch_reg(self, type, forbidden_vars=[], selected_reg=None): + if type == FLOAT: + assert 0, "not implemented yet" + else: + return self.rm.get_scratch_reg(type, forbidden_vars, selected_reg) + + def free_temp_vars(self): + self.rm.free_temp_vars() + + def make_sure_var_in_reg(self, var, forbidden_vars=[], + selected_reg=None, need_lower_byte=False): + if var.type == FLOAT: + assert 0, "not implemented yet" + else: + return self.rm.make_sure_var_in_reg(var, forbidden_vars, + selected_reg, need_lower_byte) + + def convert_to_imm(self, value): + if isinstance(value, ConstInt): + return self.rm.convert_to_imm(value) + else: + assert 0, "not implemented yet" + + def _sync_var(self, v): + if v.type == FLOAT: + assert 0, "not implemented yet" + else: + self.rm._sync_var(v) + + # ****************************************************** + # * P R E P A R E O P E R A T I O N S * + # ****************************************************** + + + def void(self, op): + return [] + + prepare_int_add = prepare_binary_int_op_with_imm() + prepare_int_sub = prepare_binary_int_op_with_imm() + prepare_int_floordiv = prepare_binary_int_op_with_imm() + + prepare_int_mul = prepare_binary_int_op() + prepare_int_mod = prepare_binary_int_op() + prepare_int_and = prepare_binary_int_op() + prepare_int_or = prepare_binary_int_op() + prepare_int_xor = prepare_binary_int_op() + prepare_int_lshift = prepare_binary_int_op() + prepare_int_rshift = prepare_binary_int_op() + prepare_uint_rshift = prepare_binary_int_op() + prepare_uint_floordiv = prepare_binary_int_op() + + prepare_int_add_ovf = prepare_binary_int_op() + prepare_int_sub_ovf = prepare_binary_int_op() + prepare_int_mul_ovf 
= prepare_binary_int_op() + + prepare_int_neg = prepare_unary_int_op() + prepare_int_invert = prepare_unary_int_op() + + prepare_int_le = prepare_cmp_op() + prepare_int_lt = prepare_cmp_op() + prepare_int_ge = prepare_cmp_op() + prepare_int_gt = prepare_cmp_op() + prepare_int_eq = prepare_cmp_op() + prepare_int_ne = prepare_cmp_op() + + prepare_ptr_eq = prepare_int_eq + prepare_ptr_ne = prepare_int_ne + + prepare_instance_ptr_eq = prepare_ptr_eq + prepare_instance_ptr_ne = prepare_ptr_ne + + prepare_uint_lt = prepare_cmp_op() + prepare_uint_le = prepare_cmp_op() + prepare_uint_gt = prepare_cmp_op() + prepare_uint_ge = prepare_cmp_op() + + prepare_int_is_true = prepare_unary_cmp() + prepare_int_is_zero = prepare_unary_cmp() + + def prepare_finish(self, op): + args = [None] * (op.numargs() + 1) + for i in range(op.numargs()): + arg = op.getarg(i) + if arg: + args[i] = self.loc(arg) + self.possibly_free_var(arg) + n = self.cpu.get_fail_descr_number(op.getdescr()) + args[-1] = imm(n) + return args + + def prepare_call_malloc_gc(self, op): + args = [imm(rffi.cast(lltype.Signed, op.getarg(0).getint()))] + return args + + def _prepare_guard(self, op, args=None): + if args is None: + args = [] + args.append(imm(len(self.frame_manager.used))) + for arg in op.getfailargs(): + if arg: + args.append(self.loc(arg)) + else: + args.append(None) + return args + + def prepare_guard_true(self, op): + l0 = self._ensure_value_is_boxed(op.getarg(0)) + args = self._prepare_guard(op, [l0]) + return args + + prepare_guard_false = prepare_guard_true + prepare_guard_nonnull = prepare_guard_true + prepare_guard_isnull = prepare_guard_true + + def prepare_guard_no_overflow(self, op): + locs = self._prepare_guard(op) + self.possibly_free_vars(op.getfailargs()) + return locs + + prepare_guard_overflow = prepare_guard_no_overflow + prepare_guard_not_invalidated = prepare_guard_no_overflow + + def prepare_guard_exception(self, op): + boxes = list(op.getarglist()) + arg0 = ConstInt(rffi.cast(lltype.Signed, op.getarg(0).getint())) + loc = self._ensure_value_is_boxed(arg0) + loc1 = self.get_scratch_reg(INT, boxes) + if op.result in self.longevity: + resloc = self.force_allocate_reg(op.result, boxes) + self.possibly_free_var(op.result) + else: + resloc = None + pos_exc_value = imm(self.cpu.pos_exc_value()) + pos_exception = imm(self.cpu.pos_exception()) + arglocs = self._prepare_guard(op, + [loc, loc1, resloc, pos_exc_value, pos_exception]) + return arglocs + + def prepare_guard_no_exception(self, op): + loc = self._ensure_value_is_boxed( + ConstInt(self.cpu.pos_exception())) + arglocs = self._prepare_guard(op, [loc]) + return arglocs + + def prepare_guard_value(self, op): + boxes = list(op.getarglist()) + a0, a1 = boxes + l0 = self._ensure_value_is_boxed(a0, boxes) + l1 = self._ensure_value_is_boxed(a1, boxes) + assert op.result is None + arglocs = self._prepare_guard(op, [l0, l1]) + self.possibly_free_vars(op.getarglist()) + self.possibly_free_vars(op.getfailargs()) + return arglocs + + def prepare_guard_class(self, op): + assert isinstance(op.getarg(0), Box) + boxes = list(op.getarglist()) + x = self._ensure_value_is_boxed(boxes[0], boxes) + y = self.get_scratch_reg(REF, forbidden_vars=boxes) + y_val = rffi.cast(lltype.Signed, op.getarg(1).getint()) + self.assembler.load(y, imm(y_val)) + offset = self.cpu.vtable_offset + assert offset is not None + offset_loc = self._ensure_value_is_boxed(ConstInt(offset), boxes) + arglocs = self._prepare_guard(op, [x, y, offset_loc]) + return arglocs + + prepare_guard_nonnull_class = 
prepare_guard_class + + def compute_hint_frame_locations(self, operations): + # optimization only: fill in the 'hint_frame_locations' dictionary + # of rm and xrm based on the JUMP at the end of the loop, by looking + # at where we would like the boxes to be after the jump. + op = operations[-1] + if op.getopnum() != rop.JUMP: + return + self.final_jump_op = op + descr = op.getdescr() + assert isinstance(descr, TargetToken) + if descr._ppc_loop_code != 0: + # if the target LABEL was already compiled, i.e. if it belongs + # to some already-compiled piece of code + self._compute_hint_frame_locations_from_descr(descr) + #else: + # The loop ends in a JUMP going back to a LABEL in the same loop. + # We cannot fill 'hint_frame_locations' immediately, but we can + # wait until the corresponding prepare_op_label() to know where the + # we would like the boxes to be after the jump. + + def _compute_hint_frame_locations_from_descr(self, descr): + arglocs = self.assembler.target_arglocs(descr) + jump_op = self.final_jump_op + assert len(arglocs) == jump_op.numargs() + for i in range(jump_op.numargs()): + box = jump_op.getarg(i) + if isinstance(box, Box): + loc = arglocs[i] + if loc is not None and loc.is_stack(): + self.frame_manager.hint_frame_locations[box] = loc + + def prepare_guard_call_release_gil(self, op, guard_op): + # first, close the stack in the sense of the asmgcc GC root tracker + gcrootmap = self.cpu.gc_ll_descr.gcrootmap + if gcrootmap: + arglocs = [] + argboxes = [] + for i in range(op.numargs()): + loc, box = self._ensure_value_is_boxed(op.getarg(i), argboxes) + arglocs.append(loc) + argboxes.append(box) + self.assembler.call_release_gil(gcrootmap, arglocs, fcond) + self.possibly_free_vars(argboxes) + # do the call + faildescr = guard_op.getdescr() + fail_index = self.cpu.get_fail_descr_number(faildescr) + self.assembler._write_fail_index(fail_index) + args = [imm(rffi.cast(lltype.Signed, op.getarg(0).getint()))] + self.assembler.emit_call(op, args, self, fail_index) + # then reopen the stack + if gcrootmap: + self.assembler.call_reacquire_gil(gcrootmap, r.r0, fcond) + locs = self._prepare_guard(guard_op) + self.possibly_free_vars(guard_op.getfailargs()) + return locs + + def prepare_jump(self, op): + descr = op.getdescr() + assert isinstance(descr, TargetToken) + self.jump_target_descr = descr + arglocs = self.assembler.target_arglocs(descr) + + # get temporary locs + tmploc = r.SCRATCH + + # Part about non-floats + src_locations1 = [] + dst_locations1 = [] + + # Build the four lists + for i in range(op.numargs()): + box = op.getarg(i) + src_loc = self.loc(box) + dst_loc = arglocs[i] + if box.type != FLOAT: + src_locations1.append(src_loc) + dst_locations1.append(dst_loc) + else: + assert 0, "not implemented yet" + + remap_frame_layout(self.assembler, src_locations1, + dst_locations1, tmploc) + return [] + + def prepare_setfield_gc(self, op): + boxes = list(op.getarglist()) + a0, a1 = boxes + ofs, size, sign = unpack_fielddescr(op.getdescr()) + base_loc = self._ensure_value_is_boxed(a0, boxes) + value_loc = self._ensure_value_is_boxed(a1, boxes) + if _check_imm_arg(ofs): + ofs_loc = imm(ofs) + else: + ofs_loc = self.get_scratch_reg(INT, boxes) + self.assembler.load(ofs_loc, imm(ofs)) + return [value_loc, base_loc, ofs_loc, imm(size)] + + prepare_setfield_raw = prepare_setfield_gc + + def prepare_getfield_gc(self, op): + a0 = op.getarg(0) + ofs, size, sign = unpack_fielddescr(op.getdescr()) + base_loc = self._ensure_value_is_boxed(a0) + immofs = imm(ofs) + if _check_imm_arg(ofs): + 
ofs_loc = immofs + else: + ofs_loc = self.get_scratch_reg(INT, [a0]) + self.assembler.load(ofs_loc, immofs) + self.possibly_free_vars_for_op(op) + self.free_temp_vars() + res = self.force_allocate_reg(op.result) + return [base_loc, ofs_loc, res, imm(size)] + + prepare_getfield_raw = prepare_getfield_gc + prepare_getfield_raw_pure = prepare_getfield_gc + prepare_getfield_gc_pure = prepare_getfield_gc + + def prepare_getinteriorfield_gc(self, op): + t = unpack_interiorfielddescr(op.getdescr()) + ofs, itemsize, fieldsize, sign = t + args = op.getarglist() + base_loc = self._ensure_value_is_boxed(op.getarg(0), args) + index_loc = self._ensure_value_is_boxed(op.getarg(1), args) + c_ofs = ConstInt(ofs) + if _check_imm_arg(c_ofs): + ofs_loc = imm(ofs) + else: + ofs_loc = self._ensure_value_is_boxed(c_ofs, args) + self.possibly_free_vars_for_op(op) + self.free_temp_vars() + result_loc = self.force_allocate_reg(op.result) + self.possibly_free_var(op.result) + return [base_loc, index_loc, result_loc, ofs_loc, imm(ofs), + imm(itemsize), imm(fieldsize)] + + def prepare_setinteriorfield_gc(self, op): + t = unpack_interiorfielddescr(op.getdescr()) + ofs, itemsize, fieldsize, sign = t + args = op.getarglist() + base_loc = self._ensure_value_is_boxed(op.getarg(0), args) + index_loc = self._ensure_value_is_boxed(op.getarg(1), args) + value_loc = self._ensure_value_is_boxed(op.getarg(2), args) + c_ofs = ConstInt(ofs) + if _check_imm_arg(c_ofs): + ofs_loc = imm(ofs) + else: + ofs_loc = self._ensure_value_is_boxed(c_ofs, args) + return [base_loc, index_loc, value_loc, ofs_loc, imm(ofs), + imm(itemsize), imm(fieldsize)] + + def prepare_arraylen_gc(self, op): + arraydescr = op.getdescr() + assert isinstance(arraydescr, ArrayDescr) + ofs = arraydescr.lendescr.offset + arg = op.getarg(0) + base_loc = self._ensure_value_is_boxed(arg) + self.possibly_free_vars_for_op(op) + self.free_temp_vars() + res = self.force_allocate_reg(op.result) + return [res, base_loc, imm(ofs)] + + def prepare_setarrayitem_gc(self, op): + size, ofs, _ = unpack_arraydescr(op.getdescr()) + scale = get_scale(size) + args = op.getarglist() + base_loc = self._ensure_value_is_boxed(args[0], args) + ofs_loc = self._ensure_value_is_boxed(args[1], args) + value_loc = self._ensure_value_is_boxed(args[2], args) + scratch_loc = self.rm.get_scratch_reg(INT, + [base_loc, ofs_loc, value_loc]) + assert _check_imm_arg(ofs) + return [value_loc, base_loc, ofs_loc, scratch_loc, imm(scale), imm(ofs)] + prepare_setarrayitem_raw = prepare_setarrayitem_gc + + def prepare_getarrayitem_gc(self, op): + boxes = op.getarglist() + size, ofs, _ = unpack_arraydescr(op.getdescr()) + scale = get_scale(size) + base_loc = self._ensure_value_is_boxed(boxes[0], boxes) + ofs_loc = self._ensure_value_is_boxed(boxes[1], boxes) + scratch_loc = self.rm.get_scratch_reg(INT, [base_loc, ofs_loc]) + self.possibly_free_vars_for_op(op) + self.free_temp_vars() + res = self.force_allocate_reg(op.result) + assert _check_imm_arg(ofs) + return [res, base_loc, ofs_loc, scratch_loc, imm(scale), imm(ofs)] + + prepare_getarrayitem_raw = prepare_getarrayitem_gc + prepare_getarrayitem_gc_pure = prepare_getarrayitem_gc + + def prepare_strlen(self, op): + args = op.getarglist() + l0 = self._ensure_value_is_boxed(op.getarg(0)) + basesize, itemsize, ofs_length = symbolic.get_array_token(rstr.STR, + self.cpu.translate_support_code) + immofs = imm(ofs_length) + if _check_imm_arg(ofs_length): + l1 = immofs + else: + l1 = self.get_scratch_reg(INT, args) + self.assembler.load(l1, immofs) + + 
self.possibly_free_vars_for_op(op) + self.free_temp_vars() + + res = self.force_allocate_reg(op.result) + self.possibly_free_var(op.result) + return [l0, l1, res] + + def prepare_strgetitem(self, op): + boxes = op.getarglist() + base_loc = self._ensure_value_is_boxed(boxes[0]) + + a1 = boxes[1] + ofs_loc = self._ensure_value_is_boxed(a1, boxes) + + self.possibly_free_vars_for_op(op) + self.free_temp_vars() + res = self.force_allocate_reg(op.result) + + basesize, itemsize, ofs_length = symbolic.get_array_token(rstr.STR, + self.cpu.translate_support_code) + assert itemsize == 1 + return [res, base_loc, ofs_loc, imm(basesize)] + + def prepare_strsetitem(self, op): + boxes = op.getarglist() + base_loc = self._ensure_value_is_boxed(boxes[0], boxes) + ofs_loc = self._ensure_value_is_boxed(boxes[1], boxes) + value_loc = self._ensure_value_is_boxed(boxes[2], boxes) + basesize, itemsize, ofs_length = symbolic.get_array_token(rstr.STR, + self.cpu.translate_support_code) + assert itemsize == 1 + return [value_loc, base_loc, ofs_loc, imm(basesize)] + + prepare_copystrcontent = void + prepare_copyunicodecontent = void + + def prepare_unicodelen(self, op): + l0 = self._ensure_value_is_boxed(op.getarg(0)) + basesize, itemsize, ofs_length = symbolic.get_array_token(rstr.UNICODE, + self.cpu.translate_support_code) + immofs = imm(ofs_length) + if _check_imm_arg(ofs_length): + l1 = immofs + else: + l1 = self.get_scratch_reg(INT, [op.getarg(0)]) + self.assembler.load(l1, immofs) + + self.possibly_free_vars_for_op(op) + self.free_temp_vars() + res = self.force_allocate_reg(op.result) + return [l0, l1, res] + + def prepare_unicodegetitem(self, op): + boxes = op.getarglist() + base_loc = self._ensure_value_is_boxed(boxes[0], boxes) + ofs_loc = self._ensure_value_is_boxed(boxes[1], boxes) + + self.possibly_free_vars_for_op(op) + self.free_temp_vars() + res = self.force_allocate_reg(op.result) + + basesize, itemsize, ofs_length = symbolic.get_array_token(rstr.UNICODE, + self.cpu.translate_support_code) + scale = itemsize / 2 + return [res, base_loc, ofs_loc, + imm(scale), imm(basesize), imm(itemsize)] + + def prepare_unicodesetitem(self, op): + boxes = op.getarglist() + base_loc = self._ensure_value_is_boxed(boxes[0], boxes) + ofs_loc = self._ensure_value_is_boxed(boxes[1], boxes) + value_loc = self._ensure_value_is_boxed(boxes[2], boxes) + basesize, itemsize, ofs_length = symbolic.get_array_token(rstr.UNICODE, + self.cpu.translate_support_code) + scale = itemsize / 2 + return [value_loc, base_loc, ofs_loc, + imm(scale), imm(basesize), imm(itemsize)] + + def prepare_same_as(self, op): + arg = op.getarg(0) + argloc = self._ensure_value_is_boxed(arg) + self.possibly_free_vars_for_op(op) + self.free_temp_vars() + resloc = self.force_allocate_reg(op.result) + return [argloc, resloc] + + prepare_cast_ptr_to_int = prepare_same_as + prepare_cast_int_to_ptr = prepare_same_as + + def prepare_call(self, op): + effectinfo = op.getdescr().get_extra_info() + if effectinfo is not None: + oopspecindex = effectinfo.oopspecindex + if oopspecindex == EffectInfo.OS_MATH_SQRT: + args = self.prepare_op_math_sqrt(op, fcond) + self.assembler.emit_op_math_sqrt(op, args, self, fcond) + return + args = [imm(rffi.cast(lltype.Signed, op.getarg(0).getint()))] + return args + + prepare_debug_merge_point = void + prepare_jit_debug = void + + def prepare_cond_call_gc_wb(self, op): + assert op.result is None + N = op.numargs() + # we force all arguments in a reg (unless they are Consts), + # because it will be needed anyway by the following 
setfield_gc + # or setarrayitem_gc. It avoids loading it twice from the memory. + arglocs = [] + argboxes = [] + for i in range(N): + loc = self._ensure_value_is_boxed(op.getarg(i), argboxes) + arglocs.append(loc) + self.rm.possibly_free_vars(argboxes) + return arglocs + + prepare_cond_call_gc_wb_array = prepare_cond_call_gc_wb + + def prepare_force_token(self, op): + res_loc = self.force_allocate_reg(op.result) + self.possibly_free_var(op.result) + return [res_loc] + + def prepare_label(self, op): + # XXX big refactoring needed? + descr = op.getdescr() + assert isinstance(descr, TargetToken) + inputargs = op.getarglist() + arglocs = [None] * len(inputargs) + # + # we use force_spill() on the boxes that are not going to be really + # used any more in the loop, but that are kept alive anyway + # by being in a next LABEL's or a JUMP's argument or fail_args + # of some guard + position = self.rm.position + for arg in inputargs: + assert isinstance(arg, Box) + if self.last_real_usage.get(arg, -1) <= position: + self.force_spill_var(arg) + + # + for i in range(len(inputargs)): + arg = inputargs[i] + assert isinstance(arg, Box) + loc = self.loc(arg) + arglocs[i] = loc + if loc.is_reg(): + self.frame_manager.mark_as_free(arg) + # + descr._ppc_arglocs = arglocs + descr._ppc_loop_code = self.assembler.mc.currpos() + descr._ppc_clt = self.assembler.current_clt + self.assembler.target_tokens_currently_compiling[descr] = None + self.possibly_free_vars_for_op(op) + # + # if the LABEL's descr is precisely the target of the JUMP at the + # end of the same loop, i.e. if what we are compiling is a single + # loop that ends up jumping to this LABEL, then we can now provide + # the hints about the expected position of the spilled variables. + jump_op = self.final_jump_op + if jump_op is not None and jump_op.getdescr() is descr: + self._compute_hint_frame_locations_from_descr(descr) + + def prepare_guard_call_may_force(self, op, guard_op): + faildescr = guard_op.getdescr() + fail_index = self.cpu.get_fail_descr_number(faildescr) + self.assembler._write_fail_index(fail_index) + args = [imm(rffi.cast(lltype.Signed, op.getarg(0).getint()))] + for v in guard_op.getfailargs(): + if v in self.rm.reg_bindings: + self.force_spill_var(v) + self.assembler.emit_call(op, args, self, fail_index) + locs = self._prepare_guard(guard_op) + self.possibly_free_vars(guard_op.getfailargs()) + return locs + + def prepare_guard_call_assembler(self, op, guard_op): + descr = op.getdescr() + assert isinstance(descr, JitCellToken) + jd = descr.outermost_jitdriver_sd + assert jd is not None + #size = jd.portal_calldescr.get_result_size(self.cpu.translate_support_code) + size = jd.portal_calldescr.get_result_size() + vable_index = jd.index_of_virtualizable + if vable_index >= 0: + self._sync_var(op.getarg(vable_index)) + vable = self.frame_manager.loc(op.getarg(vable_index)) + else: + vable = imm(0) + self.possibly_free_vars(guard_op.getfailargs()) + return [imm(size), vable] + + def _prepare_args_for_new_op(self, new_args): + gc_ll_descr = self.cpu.gc_ll_descr + args = gc_ll_descr.args_for_new(new_args) + arglocs = [] + for i in range(len(args)): + arg = args[i] + t = TempInt() + l = self.force_allocate_reg(t, selected_reg=r.MANAGED_REGS[i]) + self.assembler.load(l, imm(arg)) + arglocs.append(t) + return arglocs + + # from ../x86/regalloc.py:791 + def _unpack_fielddescr(self, fielddescr): + assert isinstance(fielddescr, BaseFieldDescr) + ofs = fielddescr.offset + size = fielddescr.get_field_size(self.cpu.translate_support_code) + ptr = 
fielddescr.is_pointer_field() + return ofs, size, ptr + + # from ../x86/regalloc.py:779 + def _unpack_arraydescr(self, arraydescr): + assert isinstance(arraydescr, BaseArrayDescr) + cpu = self.cpu + ofs_length = arraydescr.get_ofs_length(cpu.translate_support_code) + ofs = arraydescr.get_base_size(cpu.translate_support_code) + size = arraydescr.get_item_size(cpu.translate_support_code) + ptr = arraydescr.is_array_of_pointers() + scale = 0 + while (1 << scale) < size: + scale += 1 + assert (1 << scale) == size + return size, scale, ofs, ofs_length, ptr + + def prepare_force_spill(self, op): + self.force_spill_var(op.getarg(0)) + return [] + +def add_none_argument(fn): + return lambda self, op: fn(self, op, None) + +def notimplemented(self, op): + raise NotImplementedError, op + +def notimplemented_with_guard(self, op, guard_op): + + raise NotImplementedError, op + +operations = [notimplemented] * (rop._LAST + 1) +operations_with_guard = [notimplemented_with_guard] * (rop._LAST + 1) + +def get_scale(size): + scale = 0 + while (1 << scale) < size: + scale += 1 + assert (1 << scale) == size + return scale + +for key, value in rop.__dict__.items(): + key = key.lower() + if key.startswith('_'): + continue + methname = 'prepare_%s' % key + if hasattr(Regalloc, methname): + func = getattr(Regalloc, methname).im_func + operations[value] = func + +for key, value in rop.__dict__.items(): + key = key.lower() + if key.startswith('_'): + continue + methname = 'prepare_guard_%s' % key + if hasattr(Regalloc, methname): + func = getattr(Regalloc, methname).im_func + operations_with_guard[value] = func + operations[value] = add_none_argument(func) + +Regalloc.operations = operations +Regalloc.operations_with_guard = operations_with_guard diff --git a/pypy/jit/backend/ppc/ppcgen/register.py b/pypy/jit/backend/ppc/register.py rename from pypy/jit/backend/ppc/ppcgen/register.py rename to pypy/jit/backend/ppc/register.py --- a/pypy/jit/backend/ppc/ppcgen/register.py +++ b/pypy/jit/backend/ppc/register.py @@ -1,5 +1,5 @@ -from pypy.jit.backend.ppc.ppcgen.locations import (RegisterLocation, - FPRegisterLocation) +from pypy.jit.backend.ppc.locations import (RegisterLocation, + FPRegisterLocation) ALL_REGS = [RegisterLocation(i) for i in range(32)] ALL_FLOAT_REGS = [FPRegisterLocation(i) for i in range(32)] diff --git a/pypy/jit/backend/ppc/ppcgen/regname.py b/pypy/jit/backend/ppc/regname.py rename from pypy/jit/backend/ppc/ppcgen/regname.py rename to pypy/jit/backend/ppc/regname.py diff --git a/pypy/jit/backend/ppc/rgenop.py b/pypy/jit/backend/ppc/rgenop.py deleted file mode 100644 --- a/pypy/jit/backend/ppc/rgenop.py +++ /dev/null @@ -1,1427 +0,0 @@ -import py -from pypy.jit.codegen.model import AbstractRGenOp, GenLabel, GenBuilder -from pypy.jit.codegen.model import GenVar, GenConst, CodeGenSwitch -from pypy.jit.codegen.model import ReplayBuilder, dummy_var -from pypy.rpython.lltypesystem import lltype, llmemory -from pypy.rpython.lltypesystem import lloperation -from pypy.rpython.extfunc import register_external -from pypy.rlib.objectmodel import specialize, we_are_translated -from pypy.jit.codegen.conftest import option -from ctypes import POINTER, cast, c_void_p, c_int, CFUNCTYPE - -from pypy.jit.codegen.ppc import codebuf -from pypy.jit.codegen.ppc.instruction import rSP, rFP, rSCRATCH, gprs -from pypy.jit.codegen.ppc import instruction as insn -from pypy.jit.codegen.ppc.regalloc import RegisterAllocation -from pypy.jit.codegen.emit_moves import emit_moves, emit_moves_safe - -from 
pypy.jit.codegen.ppc.ppcgen.rassemblermaker import make_rassembler -from pypy.jit.codegen.ppc.ppcgen.ppc_assembler import MyPPCAssembler - -from pypy.jit.codegen.i386.rgenop import gc_malloc_fnaddr -from pypy.rpython.annlowlevel import llhelper - -class RPPCAssembler(make_rassembler(MyPPCAssembler)): - def emit(self, value): - self.mc.write(value) - -_PPC = RPPCAssembler - - -_flush_icache = None -def flush_icache(base, size): - global _flush_icache - if _flush_icache == None: - cpath = py.magic.autopath().dirpath().join('_flush_icache.c') - _flush_icache = cpath._getpymodule()._flush_icache - _flush_icache(base, size) -register_external(flush_icache, [int, int], None, "LL_flush_icache") - - -NSAVEDREGISTERS = 19 - -DEBUG_TRAP = option.trap -DEBUG_PRINT = option.debug_print - -_var_index = [0] -class Var(GenVar): - conditional = False - def __init__(self): - self.__magic_index = _var_index[0] - _var_index[0] += 1 - def __repr__(self): - return "v%d" % self.__magic_index - def fits_in_uimm(self): - return False - def fits_in_simm(self): - return False - -class ConditionVar(Var): - """ Used for vars that originated as the result of a conditional - operation, like a == b """ - conditional = True - -class IntConst(GenConst): - - def __init__(self, value): - self.value = value - - def __repr__(self): - return 'IntConst(%d)'%self.value - - @specialize.arg(1) - def revealconst(self, T): - if isinstance(T, lltype.Ptr): - return lltype.cast_int_to_ptr(T, self.value) - elif T is llmemory.Address: - return llmemory.cast_int_to_adr(self.value) - else: - return lltype.cast_primitive(T, self.value) - - def load(self, insns, var): - insns.append( - insn.Insn_GPR__IMM(_PPC.load_word, - var, [self])) - - def load_now(self, asm, loc): - if loc.is_register: - assert isinstance(loc, insn.GPR) - asm.load_word(loc.number, self.value) - else: - #print 'load_now to', loc.offset - asm.load_word(rSCRATCH, self.value) - asm.stw(rSCRATCH, rFP, loc.offset) - - def fits_in_simm(self): - return abs(self.value) < 2**15 - - def fits_in_uimm(self): - return 0 <= self.value < 2**16 - -class AddrConst(GenConst): - - def __init__(self, addr): - self.addr = addr - - @specialize.arg(1) - def revealconst(self, T): - if T is llmemory.Address: - return self.addr - elif isinstance(T, lltype.Ptr): - return llmemory.cast_adr_to_ptr(self.addr, T) - elif T is lltype.Signed: - return llmemory.cast_adr_to_int(self.addr) - else: - assert 0, "XXX not implemented" - - def fits_in_simm(self): - return False - - def fits_in_uimm(self): - return False - - def load(self, insns, var): - i = IntConst(llmemory.cast_adr_to_int(self.addr)) - insns.append( - insn.Insn_GPR__IMM(RPPCAssembler.load_word, - var, [i])) - - def load_now(self, asm, loc): - value = llmemory.cast_adr_to_int(self.addr) - if loc.is_register: - assert isinstance(loc, insn.GPR) - asm.load_word(loc.number, value) - else: - #print 'load_now to', loc.offset - asm.load_word(rSCRATCH, value) - asm.stw(rSCRATCH, rFP, loc.offset) - - -class JumpPatchupGenerator(object): - - def __init__(self, insns, allocator): - self.insns = insns - self.allocator = allocator - - def emit_move(self, tarloc, srcloc): - srcvar = None - if DEBUG_PRINT: - for v, loc in self.allocator.var2loc.iteritems(): - if loc is srcloc: - srcvar = v - break - emit = self.insns.append - if tarloc == srcloc: - return - if tarloc.is_register and srcloc.is_register: - assert isinstance(tarloc, insn.GPR) - if isinstance(srcloc, insn.GPR): - emit(insn.Move(tarloc, srcloc)) - else: - assert isinstance(srcloc, insn.CRF) - 
emit(srcloc.move_to_gpr(tarloc.number)) - elif tarloc.is_register and not srcloc.is_register: - emit(insn.Unspill(srcvar, tarloc, srcloc)) - elif not tarloc.is_register and srcloc.is_register: - emit(insn.Spill(srcvar, srcloc, tarloc)) - elif not tarloc.is_register and not srcloc.is_register: - emit(insn.Unspill(srcvar, insn.gprs[0], srcloc)) - emit(insn.Spill(srcvar, insn.gprs[0], tarloc)) - - def create_fresh_location(self): - return self.allocator.spill_slot() - -class StackInfo(Var): - # not really a Var at all, but needs to be mixable with Consts.... - # offset will be assigned later - offset = 0 - pass - -def prepare_for_jump(insns, sourcevars, src2loc, target, allocator): - - tar2src = {} # tar var -> src var - tar2loc = {} - - # construct mapping of targets to sources; note that "target vars" - # and "target locs" are the same thing right now - targetlocs = target.arg_locations - tarvars = [] - -## if DEBUG_PRINT: -## print targetlocs -## print allocator.var2loc - - for i in range(len(targetlocs)): - tloc = targetlocs[i] - src = sourcevars[i] - if isinstance(src, Var): - tar2loc[tloc] = tloc - tar2src[tloc] = src - tarvars.append(tloc) - if not tloc.is_register: - if tloc in allocator.free_stack_slots: - allocator.free_stack_slots.remove(tloc) - - gen = JumpPatchupGenerator(insns, allocator) - emit_moves(gen, tarvars, tar2src, tar2loc, src2loc) - - for i in range(len(targetlocs)): - tloc = targetlocs[i] - src = sourcevars[i] - if not isinstance(src, Var): - insns.append(insn.Load(tloc, src)) - -class Label(GenLabel): - - def __init__(self, args_gv): - self.args_gv = args_gv - #self.startaddr = startaddr - #self.arg_locations = arg_locations - self.min_stack_offset = 1 - -# our approach to stack layout: - -# on function entry, the stack looks like this: - -# .... -# | parameter area | -# | linkage area | <- rSP points to the last word of the linkage area -# +----------------+ - -# we set things up like so: - -# | parameter area | -# | linkage area | <- rFP points to where the rSP was -# | saved registers | -# | local variables | -# +-----------------+ <- rSP points here, and moves around between basic blocks - -# points of note (as of 2006-11-09 anyway :-): -# 1. we currently never spill to the parameter area (should fix?) -# 2. we always save all callee-save registers -# 3. 
as each basic block can move the SP around as it sees fit, we index -# into the local variables area from the FP (frame pointer; it is not -# usual on the PPC to have a frame pointer, but there's no reason we -# can't have one :-) - - -class Builder(GenBuilder): - - def __init__(self, rgenop): - self.rgenop = rgenop - self.asm = RPPCAssembler() - self.asm.mc = None - self.insns = [] - self.initial_spill_offset = 0 - self.initial_var2loc = None - self.max_param_space = -1 - self.final_jump_addr = 0 - - self.start = 0 - self.closed = True - self.patch_start_here = 0 - - # ---------------------------------------------------------------- - # the public Builder interface: - - def end(self): - pass - - @specialize.arg(1) - def genop1(self, opname, gv_arg): - #print opname, 'on', id(self) - genmethod = getattr(self, 'op_' + opname) - r = genmethod(gv_arg) - #print '->', id(r) - return r - - @specialize.arg(1) - def genop2(self, opname, gv_arg1, gv_arg2): - #print opname, 'on', id(self) - genmethod = getattr(self, 'op_' + opname) - r = genmethod(gv_arg1, gv_arg2) - #print '->', id(r) - return r - - @specialize.arg(1) - def genraisingop2(self, opname, gv_arg1, gv_arg2): - genmethod = getattr(self, 'raisingop_' + opname) - r = genmethod(gv_arg1, gv_arg2) - return r - - @specialize.arg(1) - def genraisingop1(self, opname, gv_arg): - genmethod = getattr(self, 'raisingop_' + opname) - r = genmethod(gv_arg) - return r - - def genop_call(self, sigtoken, gv_fnptr, args_gv): - self.insns.append(insn.SpillCalleeSaves()) - for i in range(len(args_gv)): - self.insns.append(insn.LoadArg(i, args_gv[i])) - gv_result = Var() - self.max_param_space = max(self.max_param_space, len(args_gv)*4) - self.insns.append(insn.CALL(gv_result, gv_fnptr)) - return gv_result - - def genop_getfield(self, fieldtoken, gv_ptr): - fieldoffset, fieldsize = fieldtoken - opcode = {1:_PPC.lbz, 2:_PPC.lhz, 4:_PPC.lwz}[fieldsize] - return self._arg_simm_op(gv_ptr, IntConst(fieldoffset), opcode) - - def genop_setfield(self, fieldtoken, gv_ptr, gv_value): - gv_result = Var() - fieldoffset, fieldsize = fieldtoken - opcode = {1:_PPC.stb, 2:_PPC.sth, 4:_PPC.stw}[fieldsize] - self.insns.append( - insn.Insn_None__GPR_GPR_IMM(opcode, - [gv_value, gv_ptr, IntConst(fieldoffset)])) - return gv_result - - def genop_getsubstruct(self, fieldtoken, gv_ptr): - return self._arg_simm_op(gv_ptr, IntConst(fieldtoken[0]), _PPC.addi) - - def genop_getarrayitem(self, arraytoken, gv_ptr, gv_index): - _, _, itemsize = arraytoken - opcode = {1:_PPC.lbzx, - 2:_PPC.lhzx, - 4:_PPC.lwzx}[itemsize] - opcodei = {1:_PPC.lbz, - 2:_PPC.lhz, - 4:_PPC.lwz}[itemsize] - gv_itemoffset = self.itemoffset(arraytoken, gv_index) - return self._arg_arg_op_with_simm(gv_ptr, gv_itemoffset, opcode, opcodei) - - def genop_getarraysubstruct(self, arraytoken, gv_ptr, gv_index): - _, _, itemsize = arraytoken - assert itemsize == 4 - gv_itemoffset = self.itemoffset(arraytoken, gv_index) - return self._arg_arg_op_with_simm(gv_ptr, gv_itemoffset, _PPC.add, _PPC.addi, - commutative=True) - - def genop_getarraysize(self, arraytoken, gv_ptr): - lengthoffset, _, _ = arraytoken - return self._arg_simm_op(gv_ptr, IntConst(lengthoffset), _PPC.lwz) - - def genop_setarrayitem(self, arraytoken, gv_ptr, gv_index, gv_value): - _, _, itemsize = arraytoken - gv_itemoffset = self.itemoffset(arraytoken, gv_index) - gv_result = Var() - if gv_itemoffset.fits_in_simm(): - opcode = {1:_PPC.stb, - 2:_PPC.sth, - 4:_PPC.stw}[itemsize] - self.insns.append( - insn.Insn_None__GPR_GPR_IMM(opcode, - [gv_value, gv_ptr, 
gv_itemoffset])) - else: - opcode = {1:_PPC.stbx, - 2:_PPC.sthx, - 4:_PPC.stwx}[itemsize] - self.insns.append( - insn.Insn_None__GPR_GPR_GPR(opcode, - [gv_value, gv_ptr, gv_itemoffset])) - - def genop_malloc_fixedsize(self, alloctoken): - return self.genop_call(1, # COUGH - IntConst(gc_malloc_fnaddr()), - [IntConst(alloctoken)]) - - def genop_malloc_varsize(self, varsizealloctoken, gv_size): - gv_itemoffset = self.itemoffset(varsizealloctoken, gv_size) - gv_result = self.genop_call(1, # COUGH - IntConst(gc_malloc_fnaddr()), - [gv_itemoffset]) - lengthoffset, _, _ = varsizealloctoken - self.insns.append( - insn.Insn_None__GPR_GPR_IMM(_PPC.stw, - [gv_size, gv_result, IntConst(lengthoffset)])) - return gv_result - - def genop_same_as(self, gv_arg): - if not isinstance(gv_arg, Var): - gv_result = Var() - gv_arg.load(self.insns, gv_result) - return gv_result - else: - return gv_arg - - def genop_cast_int_to_ptr(self, kind, gv_int): - return gv_int - -## def genop_debug_pdb(self): # may take an args_gv later - - def genop_get_frame_base(self): - gv_result = Var() - self.insns.append( - insn.LoadFramePointer(gv_result)) - return gv_result - - def get_frame_info(self, vars_gv): - result = [] - for v in vars_gv: - if isinstance(v, Var): - place = StackInfo() - self.insns.append(insn.CopyIntoStack(place, v)) - result.append(place) - else: - result.append(None) - return result - - def alloc_frame_place(self, kind, gv_initial_value=None): - place = StackInfo() - if gv_initial_value is None: - gv_initial_value = AddrConst(llmemory.NULL) - self.insns.append(insn.CopyIntoStack(place, gv_initial_value)) - return place - - def genop_absorb_place(self, place): - var = Var() - self.insns.append(insn.CopyOffStack(var, place)) - return var - - def enter_next_block(self, args_gv): - if DEBUG_PRINT: - print 'enter_next_block1', args_gv - seen = {} - for i in range(len(args_gv)): - gv = args_gv[i] - if isinstance(gv, Var): - if gv in seen: - new_gv = self._arg_op(gv, _PPC.mr) - args_gv[i] = new_gv - seen[gv] = True - else: - new_gv = Var() - gv.load(self.insns, new_gv) - args_gv[i] = new_gv - - if DEBUG_PRINT: - print 'enter_next_block2', args_gv - - r = Label(args_gv) - self.insns.append(insn.Label(r)) - return r - - def jump_if_false(self, gv_condition, args_gv): - return self._jump(gv_condition, False, args_gv) - - def jump_if_true(self, gv_condition, args_gv): - return self._jump(gv_condition, True, args_gv) - - def finish_and_return(self, sigtoken, gv_returnvar): - self.insns.append(insn.Return(gv_returnvar)) - self.allocate_and_emit([]) - - # standard epilogue: - - # restore old SP - self.asm.lwz(rSP, rSP, 0) - # restore all callee-save GPRs - self.asm.lmw(gprs[32-NSAVEDREGISTERS].number, rSP, -4*(NSAVEDREGISTERS+1)) - # restore Condition Register - self.asm.lwz(rSCRATCH, rSP, 4) - self.asm.mtcr(rSCRATCH) - # restore Link Register and jump to it - self.asm.lwz(rSCRATCH, rSP, 8) - self.asm.mtlr(rSCRATCH) - self.asm.blr() - - self._close() - - def finish_and_goto(self, outputargs_gv, target): - if target.min_stack_offset == 1: - self.pause_writing(outputargs_gv) - self.start_writing() - allocator = self.allocate(outputargs_gv) - if DEBUG_PRINT: - before_moves = len(self.insns) - print outputargs_gv - print target.args_gv - allocator.spill_offset = min(allocator.spill_offset, target.min_stack_offset) - prepare_for_jump( - self.insns, outputargs_gv, allocator.var2loc, target, allocator) - if DEBUG_PRINT: - print 'moves:' - for i in self.insns[before_moves:]: - print ' ', i - self.emit(allocator) - here_size = 
self._stack_size(allocator.spill_offset) - there_size = self._stack_size(target.min_stack_offset) - if here_size != there_size: - self.emit_stack_adjustment(there_size) - if self.rgenop.DEBUG_SCRIBBLE: - if here_size > there_size: - offsets = range(there_size, here_size, 4) - else: - offsets = range(here_size, there_size, 4) - for offset in offsets: - self.asm.load_word(rSCRATCH, 0x23456789) - self.asm.stw(rSCRATCH, rSP, -offset) - self.asm.load_word(rSCRATCH, target.startaddr) - self.asm.mtctr(rSCRATCH) - self.asm.bctr() - self._close() - - def flexswitch(self, gv_exitswitch, args_gv): - # make sure the exitswitch ends the block in a register: - crresult = Var() - self.insns.append(insn.FakeUse(crresult, gv_exitswitch)) - allocator = self.allocate_and_emit(args_gv) - switch_mc = self.asm.mc.reserve(7 * 5 + 4) - self._close() - result = FlexSwitch(self.rgenop, switch_mc, - allocator.loc_of(gv_exitswitch), - allocator.loc_of(crresult), - allocator.var2loc, - allocator.spill_offset) - return result, result.add_default() - - def start_writing(self): - if not self.closed: - return self - assert self.asm.mc is None - if self.final_jump_addr != 0: - mc = self.rgenop.open_mc() - target = mc.tell() - if target == self.final_jump_addr + 16: - mc.setpos(mc.getpos()-4) - else: - self.asm.mc = self.rgenop.ExistingCodeBlock( - self.final_jump_addr, self.final_jump_addr+8) - self.asm.load_word(rSCRATCH, target) - flush_icache(self.final_jump_addr, 8) - self._code_start = mc.tell() - self.asm.mc = mc - self.final_jump_addr = 0 - self.closed = False - return self - else: - self._open() - self.maybe_patch_start_here() - return self - - def maybe_patch_start_here(self): - if self.patch_start_here: - mc = self.asm.mc - self.asm.mc = self.rgenop.ExistingCodeBlock( - self.patch_start_here, self.patch_start_here+8) - self.asm.load_word(rSCRATCH, mc.tell()) - flush_icache(self.patch_start_here, 8) - self.asm.mc = mc - self.patch_start_here = 0 - - def pause_writing(self, args_gv): - allocator = self.allocate_and_emit(args_gv) - self.initial_var2loc = allocator.var2loc - self.initial_spill_offset = allocator.spill_offset - self.insns = [] - self.max_param_space = -1 - self.final_jump_addr = self.asm.mc.tell() - self.closed = True - self.asm.nop() - self.asm.nop() - self.asm.mtctr(rSCRATCH) - self.asm.bctr() - self._close() - return self - - # ---------------------------------------------------------------- - # ppc-specific interface: - - def itemoffset(self, arraytoken, gv_index): - # if gv_index is constant, this can return a constant... 
- lengthoffset, startoffset, itemsize = arraytoken - - gv_offset = Var() - self.insns.append( - insn.Insn_GPR__GPR_IMM(RPPCAssembler.mulli, - gv_offset, [gv_index, IntConst(itemsize)])) - gv_itemoffset = Var() - self.insns.append( - insn.Insn_GPR__GPR_IMM(RPPCAssembler.addi, - gv_itemoffset, [gv_offset, IntConst(startoffset)])) - return gv_itemoffset - - def _write_prologue(self, sigtoken): - numargs = sigtoken # for now - if DEBUG_TRAP: - self.asm.trap() - inputargs = [Var() for i in range(numargs)] - assert self.initial_var2loc is None - self.initial_var2loc = {} - for arg in inputargs[:8]: - self.initial_var2loc[arg] = gprs[3+len(self.initial_var2loc)] - if len(inputargs) > 8: - for i in range(8, len(inputargs)): - arg = inputargs[i] - self.initial_var2loc[arg] = insn.stack_slot(24 + 4 * len(self.initial_var2loc)) - self.initial_spill_offset = self._var_offset(0) - - # Standard prologue: - - # Minimum stack space = 24+params+lv+4*GPRSAVE+8*FPRSAVE - # params = stack space for parameters for functions we call - # lv = stack space for local variables - # GPRSAVE = the number of callee-save GPRs we save, currently - # NSAVEDREGISTERS which is 19, i.e. all of them - # FPRSAVE = the number of callee-save FPRs we save, currently 0 - # Initially, we set params == lv == 0 and allow each basic block to - # ensure it has enough space to continue. - - minspace = self._stack_size(self._var_offset(0)) - # save Link Register - self.asm.mflr(rSCRATCH) - self.asm.stw(rSCRATCH, rSP, 8) - # save Condition Register - self.asm.mfcr(rSCRATCH) - self.asm.stw(rSCRATCH, rSP, 4) - # save the callee-save GPRs - self.asm.stmw(gprs[32-NSAVEDREGISTERS].number, rSP, -4*(NSAVEDREGISTERS + 1)) - # set up frame pointer - self.asm.mr(rFP, rSP) - # save stack pointer into linkage area and set stack pointer for us. - self.asm.stwu(rSP, rSP, -minspace) - - if self.rgenop.DEBUG_SCRIBBLE: - # write junk into all non-argument, non rFP or rSP registers - self.asm.load_word(rSCRATCH, 0x12345678) - for i in range(min(11, 3+len(self.initial_var2loc)), 32): - self.asm.load_word(i, 0x12345678) - # scribble the part of the stack between - # self._var_offset(0) and minspace - for offset in range(self._var_offset(0), -minspace, -4): - self.asm.stw(rSCRATCH, rFP, offset) - # and then a bit more - for offset in range(-minspace-4, -minspace-200, -4): - self.asm.stw(rSCRATCH, rFP, offset) - - return inputargs - - def _var_offset(self, v): - """v represents an offset into the local variable area in bytes; - this returns the offset relative to rFP""" - return -(4*NSAVEDREGISTERS+4+v) - - def _stack_size(self, lv): - """ Returns the required stack size to store all data, assuming - that there are 'param' bytes of parameters for callee functions and - 'lv' is the largest (wrt to abs() :) rFP-relative byte offset of - any variable on the stack. 
Plus 4 because the rFP actually points - into our caller's linkage area.""" - assert lv <= 0 - if self.max_param_space >= 0: - param = max(self.max_param_space, 32) + 24 - else: - param = 0 - return ((4 + param - lv + 15) & ~15) - - def _open(self): - self.asm.mc = self.rgenop.open_mc() - self._code_start = self.asm.mc.tell() - self.closed = False - - def _close(self): - _code_stop = self.asm.mc.tell() - code_size = _code_stop - self._code_start - flush_icache(self._code_start, code_size) - self.rgenop.close_mc(self.asm.mc) - self.asm.mc = None - - def allocate_and_emit(self, live_vars_gv): - allocator = self.allocate(live_vars_gv) - return self.emit(allocator) - - def allocate(self, live_vars_gv): - assert self.initial_var2loc is not None - allocator = RegisterAllocation( - self.rgenop.freeregs, - self.initial_var2loc, - self.initial_spill_offset) - self.insns = allocator.allocate_for_insns(self.insns) - return allocator - - def emit(self, allocator): - in_size = self._stack_size(self.initial_spill_offset) - our_size = self._stack_size(allocator.spill_offset) - if in_size != our_size: - assert our_size > in_size - self.emit_stack_adjustment(our_size) - if self.rgenop.DEBUG_SCRIBBLE: - for offset in range(in_size, our_size, 4): - self.asm.load_word(rSCRATCH, 0x23456789) - self.asm.stw(rSCRATCH, rSP, -offset) - if self.rgenop.DEBUG_SCRIBBLE: - locs = {} - for _, loc in self.initial_var2loc.iteritems(): - locs[loc] = True - regs = insn.gprs[3:] - for reg in regs: - if reg not in locs: - self.asm.load_word(reg.number, 0x3456789) - self.asm.load_word(0, 0x3456789) - for offset in range(self._var_offset(0), - self.initial_spill_offset, - -4): - if insn.stack_slot(offset) not in locs: - self.asm.stw(0, rFP, offset) - for insn_ in self.insns: - insn_.emit(self.asm) - for label in allocator.labels_to_tell_spill_offset_to: - label.min_stack_offset = allocator.spill_offset - for builder in allocator.builders_to_tell_spill_offset_to: - builder.initial_spill_offset = allocator.spill_offset - return allocator - - def emit_stack_adjustment(self, newsize): - # the ABI requires that at all times that r1 is valid, in the - # sense that it must point to the bottom of the stack and that - # executing SP <- *(SP) repeatedly walks the stack. - # this code satisfies this, although there is a 1-instruction - # window where such walking would find a strange intermediate - # "frame" - self.asm.addi(rSCRATCH, rFP, -newsize) - self.asm.sub(rSCRATCH, rSCRATCH, rSP) - - # this is a pure debugging check that we avoid the situation - # where *(r1) == r1 which would violates the ABI rules listed - # above. 
after a while it can be removed or maybe made - # conditional on some --option passed to py.test - self.asm.tweqi(rSCRATCH, 0) - - self.asm.stwux(rSP, rSP, rSCRATCH) - self.asm.stw(rFP, rSP, 0) - - def _arg_op(self, gv_arg, opcode): - gv_result = Var() - self.insns.append( - insn.Insn_GPR__GPR(opcode, gv_result, gv_arg)) - return gv_result - - def _arg_arg_op(self, gv_x, gv_y, opcode): - gv_result = Var() - self.insns.append( - insn.Insn_GPR__GPR_GPR(opcode, gv_result, [gv_x, gv_y])) - return gv_result - - def _arg_simm_op(self, gv_x, gv_imm, opcode): - assert gv_imm.fits_in_simm() - gv_result = Var() - self.insns.append( - insn.Insn_GPR__GPR_IMM(opcode, gv_result, [gv_x, gv_imm])) - return gv_result - - def _arg_uimm_op(self, gv_x, gv_imm, opcode): - assert gv_imm.fits_in_uimm() - gv_result = Var() - self.insns.append( - insn.Insn_GPR__GPR_IMM(opcode, gv_result, [gv_x, gv_imm])) - return gv_result - - def _arg_arg_op_with_simm(self, gv_x, gv_y, opcode, opcodei, - commutative=False): - if gv_y.fits_in_simm(): - return self._arg_simm_op(gv_x, gv_y, opcodei) - elif gv_x.fits_in_simm() and commutative: - return self._arg_simm_op(gv_y, gv_x, opcodei) - else: - return self._arg_arg_op(gv_x, gv_y, opcode) - - def _arg_arg_op_with_uimm(self, gv_x, gv_y, opcode, opcodei, - commutative=False): - if gv_y.fits_in_uimm(): - return self._arg_uimm_op(gv_x, gv_y, opcodei) - elif gv_x.fits_in_uimm() and commutative: - return self._arg_uimm_op(gv_y, gv_x, opcodei) - else: - return self._arg_arg_op(gv_x, gv_y, opcode) - - def _identity(self, gv_arg): - return gv_arg - - cmp2info = { - # bit-in-crf negated - 'gt': ( 1, 0 ), - 'lt': ( 0, 0 ), - 'le': ( 1, 1 ), - 'ge': ( 0, 1 ), - 'eq': ( 2, 0 ), - 'ne': ( 2, 1 ), - } - - cmp2info_flipped = { - # bit-in-crf negated - 'gt': ( 0, 0 ), - 'lt': ( 1, 0 ), - 'le': ( 0, 1 ), - 'ge': ( 1, 1 ), - 'eq': ( 2, 0 ), - 'ne': ( 2, 1 ), - } - - def _compare(self, op, gv_x, gv_y): - #print "op", op - gv_result = ConditionVar() - if gv_y.fits_in_simm(): - self.insns.append( - insn.CMPWI(self.cmp2info[op], gv_result, [gv_x, gv_y])) - elif gv_x.fits_in_simm(): - self.insns.append( - insn.CMPWI(self.cmp2info_flipped[op], gv_result, [gv_y, gv_x])) - else: - self.insns.append( - insn.CMPW(self.cmp2info[op], gv_result, [gv_x, gv_y])) - return gv_result - - def _compare_u(self, op, gv_x, gv_y): - gv_result = ConditionVar() - if gv_y.fits_in_uimm(): - self.insns.append( - insn.CMPWLI(self.cmp2info[op], gv_result, [gv_x, gv_y])) - elif gv_x.fits_in_uimm(): - self.insns.append( - insn.CMPWLI(self.cmp2info_flipped[op], gv_result, [gv_y, gv_x])) - else: - self.insns.append( - insn.CMPWL(self.cmp2info[op], gv_result, [gv_x, gv_y])) - return gv_result - - def _jump(self, gv_condition, if_true, args_gv): - targetbuilder = self.rgenop.newbuilder() - - self.insns.append( - insn.Jump(gv_condition, targetbuilder, if_true, args_gv)) - - return targetbuilder - - def _ov(self): - # mfxer rFOO - # extrwi rBAR, rFOO, 1, 1 - gv_xer = Var() - self.insns.append( - insn.Insn_GPR(_PPC.mfxer, gv_xer)) - gv_ov = Var() - self.insns.append(insn.Extrwi(gv_ov, gv_xer, 1, 1)) - return gv_ov - - def op_bool_not(self, gv_arg): - return self._arg_uimm_op(gv_arg, self.rgenop.genconst(1), RPPCAssembler.xori) - - def op_int_is_true(self, gv_arg): - return self._compare('ne', gv_arg, self.rgenop.genconst(0)) - - def op_int_neg(self, gv_arg): - return self._arg_op(gv_arg, _PPC.neg) - - def raisingop_int_neg_ovf(self, gv_arg): - gv_result = self._arg_op(gv_arg, _PPC.nego) - gv_ov = self._ov() - return (gv_result, gv_ov) - 
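The deleted raisingop_*_ovf helpers above detect signed overflow by reading the XER register (mfxer) and extracting its OV bit (extrwi ..., 1, 1) after an overflow-recording instruction such as nego or addo. Purely as an illustration of the condition that bit encodes for 32-bit arithmetic, and not part of the changeset (the helper names below are invented for the example):

    def _wrap32(x):
        # reduce to the signed 32-bit value a machine register would hold
        x &= 0xFFFFFFFF
        return x - 0x100000000 if x & 0x80000000 else x

    def neg_ovf(x):
        # nego overflows only for the one value that has no 32-bit negation
        return _wrap32(-x), x == -0x80000000

    def add_ovf(x, y):
        # addo overflows when the true sum does not fit in 32 signed bits
        result = _wrap32(x + y)
        return result, result != x + y

    assert add_ovf(0x7FFFFFFF, 1) == (-0x80000000, True)
    assert neg_ovf(-0x80000000) == (-0x80000000, True)
    assert add_ovf(10, 20) == (30, False)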
- def op_int_abs(self, gv_arg): - gv_sign = self._arg_uimm_op(gv_arg, self.rgenop.genconst(31), _PPC.srawi) - gv_maybe_inverted = self._arg_arg_op(gv_arg, gv_sign, _PPC.xor) - return self._arg_arg_op(gv_sign, gv_maybe_inverted, _PPC.subf) - - def raisingop_int_abs_ovf(self, gv_arg): - gv_sign = self._arg_uimm_op(gv_arg, self.rgenop.genconst(31), _PPC.srawi) - gv_maybe_inverted = self._arg_arg_op(gv_arg, gv_sign, _PPC.xor) - gv_result = self._arg_arg_op(gv_sign, gv_maybe_inverted, _PPC.subfo) - return (gv_result, self._ov()) - - def op_int_invert(self, gv_arg): - return self._arg_op(gv_arg, _PPC.not_) - - def op_int_add(self, gv_x, gv_y): - return self._arg_arg_op_with_simm(gv_x, gv_y, _PPC.add, _PPC.addi, - commutative=True) - - def raisingop_int_add_ovf(self, gv_x, gv_y): - gv_result = self._arg_arg_op(gv_x, gv_y, _PPC.addo) - gv_ov = self._ov() - return (gv_result, gv_ov) - - def op_int_sub(self, gv_x, gv_y): - return self._arg_arg_op_with_simm(gv_x, gv_y, _PPC.sub, _PPC.subi) - - def raisingop_int_sub_ovf(self, gv_x, gv_y): - gv_result = self._arg_arg_op(gv_x, gv_y, _PPC.subo) - gv_ov = self._ov() - return (gv_result, gv_ov) - - def op_int_mul(self, gv_x, gv_y): - return self._arg_arg_op_with_simm(gv_x, gv_y, _PPC.mullw, _PPC.mulli, - commutative=True) - - def raisingop_int_mul_ovf(self, gv_x, gv_y): - gv_result = self._arg_arg_op(gv_x, gv_y, _PPC.mullwo) - gv_ov = self._ov() - return (gv_result, gv_ov) - - def op_int_floordiv(self, gv_x, gv_y): - return self._arg_arg_op(gv_x, gv_y, _PPC.divw) - - ## def op_int_floordiv_zer(self, gv_x, gv_y): - - def op_int_mod(self, gv_x, gv_y): - gv_dividend = self.op_int_floordiv(gv_x, gv_y) - gv_z = self.op_int_mul(gv_dividend, gv_y) - return self.op_int_sub(gv_x, gv_z) - - ## def op_int_mod_zer(self, gv_x, gv_y): - - def op_int_lt(self, gv_x, gv_y): - return self._compare('lt', gv_x, gv_y) - - def op_int_le(self, gv_x, gv_y): - return self._compare('le', gv_x, gv_y) - - def op_int_eq(self, gv_x, gv_y): - return self._compare('eq', gv_x, gv_y) - - def op_int_ne(self, gv_x, gv_y): - return self._compare('ne', gv_x, gv_y) - - def op_int_gt(self, gv_x, gv_y): - return self._compare('gt', gv_x, gv_y) - - def op_int_ge(self, gv_x, gv_y): - return self._compare('ge', gv_x, gv_y) - - op_char_lt = op_int_lt - op_char_le = op_int_le - op_char_eq = op_int_eq - op_char_ne = op_int_ne - op_char_gt = op_int_gt - op_char_ge = op_int_ge - - op_unichar_eq = op_int_eq - op_unichar_ne = op_int_ne - - def op_int_and(self, gv_x, gv_y): - return self._arg_arg_op(gv_x, gv_y, _PPC.and_) - - def op_int_or(self, gv_x, gv_y): - return self._arg_arg_op_with_uimm(gv_x, gv_y, _PPC.or_, _PPC.ori, - commutative=True) - - def op_int_lshift(self, gv_x, gv_y): - if gv_y.fits_in_simm(): - if abs(gv_y.value) >= 32: - return self.rgenop.genconst(0) - else: - return self._arg_uimm_op(gv_x, gv_y, _PPC.slwi) - # computing x << y when you don't know y is <=32 - # (we can assume y >= 0 though) - # here's the plan: - # - # z = nltu(y, 32) (as per cwg) - # w = x << y - # r = w&z - gv_a = self._arg_simm_op(gv_y, self.rgenop.genconst(32), _PPC.subfic) - gv_b = self._arg_op(gv_y, _PPC.addze) - gv_z = self._arg_arg_op(gv_b, gv_y, _PPC.subf) - gv_w = self._arg_arg_op(gv_x, gv_y, _PPC.slw) - return self._arg_arg_op(gv_z, gv_w, _PPC.and_) - - ## def op_int_lshift_val(self, gv_x, gv_y): - - def op_int_rshift(self, gv_x, gv_y): - if gv_y.fits_in_simm(): - if abs(gv_y.value) >= 32: - gv_y = self.rgenop.genconst(31) - return self._arg_simm_op(gv_x, gv_y, _PPC.srawi) - # computing x >> y when you don't 
know y is <=32 - # (we can assume y >= 0 though) - # here's the plan: - # - # ntlu_y_32 = nltu(y, 32) (as per cwg) - # o = srawi(x, 31) & ~ntlu_y_32 - # w = (x >> y) & ntlu_y_32 - # r = w|o - gv_a = self._arg_uimm_op(gv_y, self.rgenop.genconst(32), _PPC.subfic) - gv_b = self._arg_op(gv_y, _PPC.addze) - gv_ntlu_y_32 = self._arg_arg_op(gv_b, gv_y, _PPC.subf) - - gv_c = self._arg_uimm_op(gv_x, self.rgenop.genconst(31), _PPC.srawi) - gv_o = self._arg_arg_op(gv_c, gv_ntlu_y_32, _PPC.andc_) - - gv_e = self._arg_arg_op(gv_x, gv_y, _PPC.sraw) - gv_w = self._arg_arg_op(gv_e, gv_ntlu_y_32, _PPC.and_) - - return self._arg_arg_op(gv_o, gv_w, _PPC.or_) - - ## def op_int_rshift_val(self, gv_x, gv_y): - - def op_int_xor(self, gv_x, gv_y): - return self._arg_arg_op_with_uimm(gv_x, gv_y, _PPC.xor, _PPC.xori, - commutative=True) - - ## various int_*_ovfs - - op_uint_is_true = op_int_is_true - op_uint_invert = op_int_invert - - op_uint_add = op_int_add - op_uint_sub = op_int_sub - op_uint_mul = op_int_mul - - def op_uint_floordiv(self, gv_x, gv_y): - return self._arg_arg_op(gv_x, gv_y, _PPC.divwu) - - ## def op_uint_floordiv_zer(self, gv_x, gv_y): - - def op_uint_mod(self, gv_x, gv_y): - gv_dividend = self.op_uint_floordiv(gv_x, gv_y) - gv_z = self.op_uint_mul(gv_dividend, gv_y) - return self.op_uint_sub(gv_x, gv_z) - - ## def op_uint_mod_zer(self, gv_x, gv_y): - - def op_uint_lt(self, gv_x, gv_y): - return self._compare_u('lt', gv_x, gv_y) - - def op_uint_le(self, gv_x, gv_y): - return self._compare_u('le', gv_x, gv_y) - - def op_uint_eq(self, gv_x, gv_y): - return self._compare_u('eq', gv_x, gv_y) - - def op_uint_ne(self, gv_x, gv_y): - return self._compare_u('ne', gv_x, gv_y) - - def op_uint_gt(self, gv_x, gv_y): - return self._compare_u('gt', gv_x, gv_y) - - def op_uint_ge(self, gv_x, gv_y): - return self._compare_u('ge', gv_x, gv_y) - - op_uint_and = op_int_and - op_uint_or = op_int_or - - op_uint_lshift = op_int_lshift - - ## def op_uint_lshift_val(self, gv_x, gv_y): - - def op_uint_rshift(self, gv_x, gv_y): - if gv_y.fits_in_simm(): - if abs(gv_y.value) >= 32: - return self.rgenop.genconst(0) - else: - return self._arg_simm_op(gv_x, gv_y, _PPC.srwi) - # computing x << y when you don't know y is <=32 - # (we can assume y >=0 though, i think) - # here's the plan: - # - # z = ngeu(y, 32) (as per cwg) - # w = x >> y - # r = w&z - gv_a = self._arg_simm_op(gv_y, self.rgenop.genconst(32), _PPC.subfic) - gv_b = self._arg_op(gv_y, _PPC.addze) - gv_z = self._arg_arg_op(gv_b, gv_y, _PPC.subf) - gv_w = self._arg_arg_op(gv_x, gv_y, _PPC.srw) - return self._arg_arg_op(gv_z, gv_w, _PPC.and_) - ## def op_uint_rshift_val(self, gv_x, gv_y): - - op_uint_xor = op_int_xor - - # ... floats ... - - # ... llongs, ullongs ... - - # here we assume that booleans are always 1 or 0 and chars are - # always zero-padded. 
- - op_cast_bool_to_int = _identity - op_cast_bool_to_uint = _identity - ## def op_cast_bool_to_float(self, gv_arg): - op_cast_char_to_int = _identity - op_cast_unichar_to_int = _identity - op_cast_int_to_char = _identity - - op_cast_int_to_unichar = _identity - op_cast_int_to_uint = _identity - ## def op_cast_int_to_float(self, gv_arg): - ## def op_cast_int_to_longlong(self, gv_arg): - op_cast_uint_to_int = _identity - ## def op_cast_uint_to_float(self, gv_arg): - ## def op_cast_float_to_int(self, gv_arg): - ## def op_cast_float_to_uint(self, gv_arg): - ## def op_truncate_longlong_to_int(self, gv_arg): - - # many pointer operations are genop_* special cases above - - op_ptr_eq = op_int_eq - op_ptr_ne = op_int_ne - - op_ptr_nonzero = op_int_is_true - op_ptr_ne = op_int_ne - op_ptr_eq = op_int_eq - - def op_ptr_iszero(self, gv_arg): - return self._compare('eq', gv_arg, self.rgenop.genconst(0)) - - op_cast_ptr_to_int = _identity - - # ... address operations ... - - at specialize.arg(0) -def cast_int_to_whatever(T, value): - if isinstance(T, lltype.Ptr): - return lltype.cast_int_to_ptr(T, value) - elif T is llmemory.Address: - return llmemory.cast_int_to_adr(value) - else: - return lltype.cast_primitive(T, value) - - at specialize.arg(0) -def cast_whatever_to_int(T, value): - if isinstance(T, lltype.Ptr): - return lltype.cast_ptr_to_int(value) - elif T is llmemory.Address: - return llmemory.cast_adr_to_int(value) - else: - return lltype.cast_primitive(lltype.Signed, value) - -class RPPCGenOp(AbstractRGenOp): - - # the set of registers we consider available for allocation - # we can artifically restrict it for testing purposes - freeregs = { - insn.GP_REGISTER:insn.gprs[3:], - insn.FP_REGISTER:insn.fprs, - insn.CR_FIELD:insn.crfs, - insn.CT_REGISTER:[insn.ctr]} - DEBUG_SCRIBBLE = option.debug_scribble - MC_SIZE = 65536 - - def __init__(self): - self.mcs = [] # machine code blocks where no-one is currently writing - self.keepalive_gc_refs = [] - - # ---------------------------------------------------------------- - # the public RGenOp interface - - def newgraph(self, sigtoken, name): - numargs = sigtoken # for now - builder = self.newbuilder() - builder._open() - entrypoint = builder.asm.mc.tell() - inputargs_gv = builder._write_prologue(sigtoken) - return builder, IntConst(entrypoint), inputargs_gv - - @specialize.genconst(1) - def genconst(self, llvalue): - T = lltype.typeOf(llvalue) - if T is llmemory.Address: - return AddrConst(llvalue) - elif isinstance(T, lltype.Primitive): - return IntConst(lltype.cast_primitive(lltype.Signed, llvalue)) - elif isinstance(T, lltype.Ptr): - lladdr = llmemory.cast_ptr_to_adr(llvalue) - if T.TO._gckind == 'gc': - self.keepalive_gc_refs.append(lltype.cast_opaque_ptr(llmemory.GCREF, llvalue)) - return AddrConst(lladdr) - else: - assert 0, "XXX not implemented" - -## @staticmethod -## @specialize.genconst(0) -## def constPrebuiltGlobal(llvalue): - - @staticmethod - def genzeroconst(kind): - return zero_const - - def replay(self, label): - return ReplayBuilder(self), [dummy_var] * len(label.args_gv) - - @staticmethod - def erasedType(T): - if T is llmemory.Address: - return llmemory.Address - if isinstance(T, lltype.Primitive): - return lltype.Signed - elif isinstance(T, lltype.Ptr): - return llmemory.GCREF - else: - assert 0, "XXX not implemented" - - @staticmethod - @specialize.memo() - def fieldToken(T, name): - FIELD = getattr(T, name) - if isinstance(FIELD, lltype.ContainerType): - fieldsize = 0 # not useful for getsubstruct - else: - fieldsize = 
llmemory.sizeof(FIELD) - return (llmemory.offsetof(T, name), fieldsize) - - @staticmethod - @specialize.memo() - def allocToken(T): - return llmemory.sizeof(T) - - @staticmethod - @specialize.memo() - def varsizeAllocToken(T): - if isinstance(T, lltype.Array): - return RPPCGenOp.arrayToken(T) - else: - # var-sized structs - arrayfield = T._arrayfld - ARRAYFIELD = getattr(T, arrayfield) - arraytoken = RPPCGenOp.arrayToken(ARRAYFIELD) - length_offset, items_offset, item_size = arraytoken - arrayfield_offset = llmemory.offsetof(T, arrayfield) - return (arrayfield_offset+length_offset, - arrayfield_offset+items_offset, - item_size) - - @staticmethod - @specialize.memo() - def arrayToken(A): - return (llmemory.ArrayLengthOffset(A), - llmemory.ArrayItemsOffset(A), - llmemory.ItemOffset(A.OF)) - - @staticmethod - @specialize.memo() - def kindToken(T): - if T is lltype.Float: - py.test.skip("not implemented: floats in the i386^WPPC back-end") - return None # for now - - @staticmethod - @specialize.memo() - def sigToken(FUNCTYPE): - return len(FUNCTYPE.ARGS) # for now - - @staticmethod - @specialize.arg(0) - def read_frame_var(T, base, info, index): - """Read from the stack frame of a caller. The 'base' is the - frame stack pointer captured by the operation generated by - genop_get_frame_base(). The 'info' is the object returned by - get_frame_info(); we are looking for the index-th variable - in the list passed to get_frame_info().""" - place = info[index] - if isinstance(place, StackInfo): - #print '!!!', base, place.offset - #print '???', [peek_word_at(base + place.offset + i) - # for i in range(-64, 65, 4)] - assert place.offset != 0 - value = peek_word_at(base + place.offset) - return cast_int_to_whatever(T, value) - else: - assert isinstance(place, GenConst) - return place.revealconst(T) - - @staticmethod - @specialize.arg(0) - def genconst_from_frame_var(kind, base, info, index): - place = info[index] - if isinstance(place, StackInfo): - #print '!!!', base, place.offset - #print '???', [peek_word_at(base + place.offset + i) - # for i in range(-64, 65, 4)] - assert place.offset != 0 - value = peek_word_at(base + place.offset) - return IntConst(value) - else: - assert isinstance(place, GenConst) - return place - - - @staticmethod - @specialize.arg(0) - def write_frame_place(T, base, place, value): - assert place.offset != 0 - value = cast_whatever_to_int(T, value) - poke_word_into(base + place.offset, value) - - @staticmethod - @specialize.arg(0) - def read_frame_place(T, base, place): - value = peek_word_at(base + place.offset) - return cast_int_to_whatever(T, value) - - def check_no_open_mc(self): - pass - - # ---------------------------------------------------------------- - # ppc-specific interface: - - MachineCodeBlock = codebuf.OwningMachineCodeBlock - ExistingCodeBlock = codebuf.ExistingCodeBlock - - def open_mc(self): - if self.mcs: - return self.mcs.pop() - else: - return self.MachineCodeBlock(self.MC_SIZE) # XXX supposed infinite for now - - def close_mc(self, mc): -## from pypy.jit.codegen.ppc.ppcgen.asmfunc import get_ppcgen -## print '!!!!', cast(mc._data, c_void_p).value -## print '!!!!', mc._data.contents[0] -## get_ppcgen().flush2(cast(mc._data, c_void_p).value, -## mc._size*4) - self.mcs.append(mc) - - def newbuilder(self): - return Builder(self) - -# a switch can take 7 instructions: - -# load_word rSCRATCH, gv_case.value (really two instructions) -# cmpw crf, rSWITCH, rSCRATCH -# load_word rSCRATCH, targetaddr (again two instructions) -# mtctr rSCRATCH -# beqctr crf - -# yay 
RISC :/ - -class FlexSwitch(CodeGenSwitch): - - # a fair part of this code could likely be shared with the i386 - # backend. - - def __init__(self, rgenop, mc, switch_reg, crf, var2loc, initial_spill_offset): - self.rgenop = rgenop - self.crf = crf - self.switch_reg = switch_reg - self.var2loc = var2loc - self.initial_spill_offset = initial_spill_offset - self.asm = RPPCAssembler() - self.asm.mc = mc - self.default_target_addr = 0 - - def add_case(self, gv_case): - targetbuilder = self.rgenop.newbuilder() - targetbuilder._open() - targetbuilder.initial_var2loc = self.var2loc - targetbuilder.initial_spill_offset = self.initial_spill_offset - target_addr = targetbuilder.asm.mc.tell() - p = self.asm.mc.getpos() - # that this works depends a bit on the fixed length of the - # instruction sequences we use to jump around. if the code is - # ever updated to use the branch-relative instructions (a good - # idea, btw) this will need to be thought about again - try: - self._add_case(gv_case, target_addr) - except codebuf.CodeBlockOverflow: - self.asm.mc.setpos(p) - base = self.asm.mc.tell() - mc = self.rgenop.open_mc() - newmc = mc.reserve(7 * 5 + 4) - self.rgenop.close_mc(mc) - new_addr = newmc.tell() - self.asm.load_word(rSCRATCH, new_addr) - self.asm.mtctr(rSCRATCH) - self.asm.bctr() - size = self.asm.mc.tell() - base - flush_icache(base, size) - self.asm.mc = newmc - self._add_case(gv_case, target_addr) - return targetbuilder - - def _add_case(self, gv_case, target_addr): - asm = self.asm - base = self.asm.mc.tell() - assert isinstance(gv_case, GenConst) - gv_case.load_now(asm, insn.gprs[0]) - asm.cmpw(self.crf.number, rSCRATCH, self.switch_reg.number) - asm.load_word(rSCRATCH, target_addr) - asm.mtctr(rSCRATCH) - asm.bcctr(12, self.crf.number*4 + 2) - if self.default_target_addr: - self._write_default() - size = self.asm.mc.tell() - base - flush_icache(base, size) - - def add_default(self): - targetbuilder = self.rgenop.newbuilder() - targetbuilder._open() - targetbuilder.initial_var2loc = self.var2loc - targetbuilder.initial_spill_offset = self.initial_spill_offset - base = self.asm.mc.tell() - self.default_target_addr = targetbuilder.asm.mc.tell() - self._write_default() - size = self.asm.mc.tell() - base - flush_icache(base, size) - return targetbuilder - - def _write_default(self): - pos = self.asm.mc.getpos() - self.asm.load_word(rSCRATCH, self.default_target_addr) - self.asm.mtctr(rSCRATCH) - self.asm.bctr() - self.asm.mc.setpos(pos) - -global_rgenop = RPPCGenOp() -RPPCGenOp.constPrebuiltGlobal = global_rgenop.genconst - -def peek_word_at(addr): - # now the Very Obscure Bit: when translated, 'addr' is an - # address. When not, it's an integer. It just happens to - # make the test pass, but that's probably going to change. - if we_are_translated(): - return addr.signed[0] - else: - from ctypes import cast, c_void_p, c_int, POINTER - p = cast(c_void_p(addr), POINTER(c_int)) - return p[0] - -def poke_word_into(addr, value): - # now the Very Obscure Bit: when translated, 'addr' is an - # address. When not, it's an integer. It just happens to - # make the test pass, but that's probably going to change. 
- if we_are_translated(): - addr.signed[0] = value - else: - from ctypes import cast, c_void_p, c_int, POINTER - p = cast(c_void_p(addr), POINTER(c_int)) - p[0] = value - -zero_const = AddrConst(llmemory.NULL) diff --git a/pypy/jit/backend/ppc/runner.py b/pypy/jit/backend/ppc/runner.py --- a/pypy/jit/backend/ppc/runner.py +++ b/pypy/jit/backend/ppc/runner.py @@ -6,16 +6,16 @@ from pypy.jit.metainterp import history, compile from pypy.jit.metainterp.history import BoxPtr from pypy.jit.backend.x86.assembler import Assembler386 -from pypy.jit.backend.ppc.ppcgen.arch import FORCE_INDEX_OFS +from pypy.jit.backend.ppc.arch import FORCE_INDEX_OFS from pypy.jit.backend.x86.profagent import ProfileAgent from pypy.jit.backend.llsupport.llmodel import AbstractLLCPU from pypy.jit.backend.x86 import regloc from pypy.jit.backend.x86.support import values_array -from pypy.jit.backend.ppc.ppcgen.ppc_assembler import AssemblerPPC -from pypy.jit.backend.ppc.ppcgen.arch import NONVOLATILES, GPR_SAVE_AREA, WORD -from pypy.jit.backend.ppc.ppcgen.regalloc import PPCRegisterManager, PPCFrameManager -from pypy.jit.backend.ppc.ppcgen.codebuilder import PPCBuilder -from pypy.jit.backend.ppc.ppcgen import register as r +from pypy.jit.backend.ppc.ppc_assembler import AssemblerPPC +from pypy.jit.backend.ppc.arch import NONVOLATILES, GPR_SAVE_AREA, WORD +from pypy.jit.backend.ppc.regalloc import PPCRegisterManager, PPCFrameManager +from pypy.jit.backend.ppc.codebuilder import PPCBuilder +from pypy.jit.backend.ppc import register as r import sys from pypy.tool.ansi_print import ansi_log @@ -37,6 +37,8 @@ self.supports_floats = False self.total_compiled_loops = 0 self.total_compiled_bridges = 0 + + def setup(self): self.asm = AssemblerPPC(self) def setup_once(self): @@ -113,7 +115,13 @@ def get_latest_value_ref(self, index): return self.asm.fail_boxes_ptr.getitem(index) + + def get_latest_force_token(self): + return self.asm.fail_force_index + def get_on_leave_jitted_hook(self): + return self.asm.leave_jitted_hook + # walk through the given trace and generate machine code def _walk_trace_ops(self, codebuilder, operations): for op in operations: diff --git a/pypy/jit/backend/ppc/ppcgen/symbol_lookup.py b/pypy/jit/backend/ppc/symbol_lookup.py rename from pypy/jit/backend/ppc/ppcgen/symbol_lookup.py rename to pypy/jit/backend/ppc/symbol_lookup.py diff --git a/pypy/jit/backend/ppc/ppcgen/test/autopath.py b/pypy/jit/backend/ppc/test/autopath.py rename from pypy/jit/backend/ppc/ppcgen/test/autopath.py rename to pypy/jit/backend/ppc/test/autopath.py diff --git a/pypy/jit/backend/ppc/ppcgen/test/test_call_assembler.py b/pypy/jit/backend/ppc/test/test_call_assembler.py rename from pypy/jit/backend/ppc/ppcgen/test/test_call_assembler.py rename to pypy/jit/backend/ppc/test/test_call_assembler.py diff --git a/pypy/jit/backend/ppc/ppcgen/test/test_field.py b/pypy/jit/backend/ppc/test/test_field.py rename from pypy/jit/backend/ppc/ppcgen/test/test_field.py rename to pypy/jit/backend/ppc/test/test_field.py --- a/pypy/jit/backend/ppc/ppcgen/test/test_field.py +++ b/pypy/jit/backend/ppc/test/test_field.py @@ -1,6 +1,6 @@ import autopath -from pypy.jit.backend.ppc.ppcgen.field import Field +from pypy.jit.backend.ppc.field import Field from py.test import raises import random diff --git a/pypy/jit/backend/ppc/ppcgen/test/test_form.py b/pypy/jit/backend/ppc/test/test_form.py rename from pypy/jit/backend/ppc/ppcgen/test/test_form.py rename to pypy/jit/backend/ppc/test/test_form.py --- a/pypy/jit/backend/ppc/ppcgen/test/test_form.py +++ 
b/pypy/jit/backend/ppc/test/test_form.py @@ -1,11 +1,11 @@ import autopath -from pypy.jit.backend.ppc.ppcgen.ppc_assembler import b +from pypy.jit.backend.ppc.ppc_assembler import b import random import sys -from pypy.jit.backend.ppc.ppcgen.form import Form, FormException -from pypy.jit.backend.ppc.ppcgen.field import Field -from pypy.jit.backend.ppc.ppcgen.assembler import Assembler +from pypy.jit.backend.ppc.form import Form, FormException +from pypy.jit.backend.ppc.field import Field +from pypy.jit.backend.ppc.assembler import Assembler # 0 31 # +-------------------------------+ diff --git a/pypy/jit/backend/ppc/ppcgen/test/test_func_builder.py b/pypy/jit/backend/ppc/test/test_func_builder.py rename from pypy/jit/backend/ppc/ppcgen/test/test_func_builder.py rename to pypy/jit/backend/ppc/test/test_func_builder.py --- a/pypy/jit/backend/ppc/ppcgen/test/test_func_builder.py +++ b/pypy/jit/backend/ppc/test/test_func_builder.py @@ -1,11 +1,11 @@ import py import random, sys, os -from pypy.jit.backend.ppc.ppcgen.ppc_assembler import MyPPCAssembler -from pypy.jit.backend.ppc.ppcgen.symbol_lookup import lookup -from pypy.jit.backend.ppc.ppcgen.func_builder import make_func -from pypy.jit.backend.ppc.ppcgen import form, func_builder -from pypy.jit.backend.ppc.ppcgen.regname import * +from pypy.jit.backend.ppc.ppc_assembler import MyPPCAssembler +from pypy.jit.backend.ppc.symbol_lookup import lookup +from pypy.jit.backend.ppc.func_builder import make_func +from pypy.jit.backend.ppc import form, func_builder +from pypy.jit.backend.ppc.regname import * from pypy.jit.backend.detect_cpu import autodetect_main_model class TestFuncBuilderTest(object): diff --git a/pypy/jit/backend/ppc/test/test_genc_ts.py b/pypy/jit/backend/ppc/test/test_genc_ts.py deleted file mode 100644 --- a/pypy/jit/backend/ppc/test/test_genc_ts.py +++ /dev/null @@ -1,16 +0,0 @@ -import py -from pypy.jit.codegen.i386.test.test_genc_ts import I386TimeshiftingTestMixin -from pypy.jit.timeshifter.test import test_timeshift -from pypy.jit.codegen.ppc.rgenop import RPPCGenOp - -class PPCTimeshiftingTestMixin(I386TimeshiftingTestMixin): - RGenOp = RPPCGenOp - -class TestTimeshiftPPC(PPCTimeshiftingTestMixin, - test_timeshift.TestLLType): - - # for the individual tests see - # ====> ../../../timeshifter/test/test_timeshift.py - - pass - diff --git a/pypy/jit/backend/ppc/ppcgen/test/test_helper.py b/pypy/jit/backend/ppc/test/test_helper.py rename from pypy/jit/backend/ppc/ppcgen/test/test_helper.py rename to pypy/jit/backend/ppc/test/test_helper.py --- a/pypy/jit/backend/ppc/ppcgen/test/test_helper.py +++ b/pypy/jit/backend/ppc/test/test_helper.py @@ -1,4 +1,4 @@ -from pypy.jit.backend.ppc.ppcgen.helper.assembler import (encode32, decode32) +from pypy.jit.backend.ppc.helper.assembler import (encode32, decode32) #encode64, decode64) def test_encode32(): diff --git a/pypy/jit/backend/ppc/ppcgen/test/test_ppc.py b/pypy/jit/backend/ppc/test/test_ppc.py rename from pypy/jit/backend/ppc/ppcgen/test/test_ppc.py rename to pypy/jit/backend/ppc/test/test_ppc.py --- a/pypy/jit/backend/ppc/ppcgen/test/test_ppc.py +++ b/pypy/jit/backend/ppc/test/test_ppc.py @@ -1,13 +1,12 @@ import py import random, sys, os -from pypy.jit.backend.ppc.ppcgen.codebuilder import BasicPPCAssembler, PPCBuilder -from pypy.jit.backend.ppc.ppcgen.symbol_lookup import lookup -from pypy.jit.backend.ppc.ppcgen.regname import * -from pypy.jit.backend.ppc.ppcgen.register import * -from pypy.jit.backend.ppc.ppcgen import form, pystructs +from pypy.jit.backend.ppc.codebuilder 
import BasicPPCAssembler, PPCBuilder +from pypy.jit.backend.ppc.regname import * +from pypy.jit.backend.ppc.register import * +from pypy.jit.backend.ppc import form from pypy.jit.backend.detect_cpu import autodetect_main_model -from pypy.jit.backend.ppc.ppcgen.arch import IS_PPC_32, IS_PPC_64, WORD +from pypy.jit.backend.ppc.arch import IS_PPC_32, IS_PPC_64, WORD from pypy.rpython.lltypesystem import lltype, rffi from pypy.rpython.annlowlevel import llhelper diff --git a/pypy/jit/backend/ppc/ppcgen/test/test_regalloc.py b/pypy/jit/backend/ppc/test/test_regalloc.py rename from pypy/jit/backend/ppc/ppcgen/test/test_regalloc.py rename to pypy/jit/backend/ppc/test/test_regalloc.py --- a/pypy/jit/backend/ppc/ppcgen/test/test_regalloc.py +++ b/pypy/jit/backend/ppc/test/test_regalloc.py @@ -1,10 +1,11 @@ from pypy.rlib.objectmodel import instantiate -from pypy.jit.backend.ppc.ppcgen.locations import (imm, RegisterLocation, - ImmLocation, StackLocation) -from pypy.jit.backend.ppc.ppcgen.register import * -from pypy.jit.backend.ppc.ppcgen.codebuilder import hi, lo -from pypy.jit.backend.ppc.ppcgen.ppc_assembler import AssemblerPPC -from pypy.jit.backend.ppc.ppcgen.arch import WORD +from pypy.jit.backend.ppc.locations import (imm, RegisterLocation, + ImmLocation, StackLocation) +from pypy.jit.backend.ppc.register import * +from pypy.jit.backend.ppc.codebuilder import hi, lo +from pypy.jit.backend.ppc.ppc_assembler import AssemblerPPC +from pypy.jit.backend.ppc.arch import WORD +from pypy.jit.backend.ppc.locations import get_spp_offset class MockBuilder(object): @@ -94,23 +95,31 @@ big = 2 << 28 self.asm.regalloc_mov(imm(big), stack(7)) - exp_instr = [MI("load_imm", 0, 5), - MI("stw", r0.value, SPP.value, -(6 * WORD + WORD)), - MI("load_imm", 0, big), - MI("stw", r0.value, SPP.value, -(7 * WORD + WORD))] + exp_instr = [MI("alloc_scratch_reg"), + MI("load_imm", r0, 5), + MI("store", r0.value, SPP.value, get_spp_offset(6)), + MI("free_scratch_reg"), + + MI("alloc_scratch_reg"), + MI("load_imm", r0, big), + MI("store", r0.value, SPP.value, get_spp_offset(7)), + MI("free_scratch_reg")] assert self.asm.mc.instrs == exp_instr def test_mem_to_reg(self): self.asm.regalloc_mov(stack(5), reg(10)) self.asm.regalloc_mov(stack(0), reg(0)) - exp_instrs = [MI("lwz", r10.value, SPP.value, -(5 * WORD + WORD)), - MI("lwz", r0.value, SPP.value, -(WORD))] + exp_instrs = [MI("load", r10.value, SPP.value, -(5 * WORD + WORD)), + MI("load", r0.value, SPP.value, -(WORD))] assert self.asm.mc.instrs == exp_instrs def test_mem_to_mem(self): self.asm.regalloc_mov(stack(5), stack(6)) - exp_instrs = [MI("lwz", r0.value, SPP.value, -(5 * WORD + WORD)), - MI("stw", r0.value, SPP.value, -(6 * WORD + WORD))] + exp_instrs = [ + MI("alloc_scratch_reg"), + MI("load", r0.value, SPP.value, get_spp_offset(5)), + MI("store", r0.value, SPP.value, get_spp_offset(6)), + MI("free_scratch_reg")] assert self.asm.mc.instrs == exp_instrs def test_reg_to_reg(self): @@ -123,8 +132,8 @@ def test_reg_to_mem(self): self.asm.regalloc_mov(reg(5), stack(10)) self.asm.regalloc_mov(reg(0), stack(2)) - exp_instrs = [MI("stw", r5.value, SPP.value, -(10 * WORD + WORD)), - MI("stw", r0.value, SPP.value, -(2 * WORD + WORD))] + exp_instrs = [MI("store", r5.value, SPP.value, -(10 * WORD + WORD)), + MI("store", r0.value, SPP.value, -(2 * WORD + WORD))] assert self.asm.mc.instrs == exp_instrs def reg(i): diff --git a/pypy/jit/backend/ppc/ppcgen/test/test_register_manager.py b/pypy/jit/backend/ppc/test/test_register_manager.py rename from 
pypy/jit/backend/ppc/ppcgen/test/test_register_manager.py rename to pypy/jit/backend/ppc/test/test_register_manager.py --- a/pypy/jit/backend/ppc/ppcgen/test/test_register_manager.py +++ b/pypy/jit/backend/ppc/test/test_register_manager.py @@ -1,4 +1,4 @@ -from pypy.jit.backend.ppc.ppcgen import regalloc, register +from pypy.jit.backend.ppc import regalloc, register class TestPPCRegisterManager(object): def test_allocate_scratch_register(self): diff --git a/pypy/jit/backend/ppc/ppcgen/test/test_stackframe.py b/pypy/jit/backend/ppc/test/test_stackframe.py rename from pypy/jit/backend/ppc/ppcgen/test/test_stackframe.py rename to pypy/jit/backend/ppc/test/test_stackframe.py diff --git a/pypy/jit/backend/ppc/test/test_zll_random.py b/pypy/jit/backend/ppc/test/test_zll_random.py new file mode 100644 --- /dev/null +++ b/pypy/jit/backend/ppc/test/test_zll_random.py @@ -0,0 +1,12 @@ +from pypy.jit.backend.test.test_random import check_random_function, Random +from pypy.jit.backend.test.test_ll_random import LLtypeOperationBuilder +from pypy.jit.backend.detect_cpu import getcpuclass + +CPU = getcpuclass() + +def test_stress(): + cpu = CPU(None, None) + cpu.setup_once() + r = Random() + for i in range(1000): + check_random_function(cpu, LLtypeOperationBuilder, r, i, 1000) diff --git a/pypy/jit/backend/ppc/test/test_ztranslation.py b/pypy/jit/backend/ppc/test/test_ztranslation.py new file mode 100644 --- /dev/null +++ b/pypy/jit/backend/ppc/test/test_ztranslation.py @@ -0,0 +1,255 @@ +import py, os, sys +from pypy.tool.udir import udir +from pypy.rlib.jit import JitDriver, unroll_parameters, set_param +from pypy.rlib.jit import PARAMETERS, dont_look_inside +from pypy.rlib.jit import promote +from pypy.jit.metainterp.jitprof import Profiler +from pypy.jit.backend.detect_cpu import getcpuclass +from pypy.jit.backend.test.support import CCompiledMixin +from pypy.jit.codewriter.policy import StopAtXPolicy +from pypy.translator.translator import TranslationContext +from pypy.jit.backend.ppc.arch import IS_PPC_32, IS_PPC_64 +from pypy.config.translationoption import DEFL_GC +from pypy.rlib import rgc + +class TestTranslationPPC(CCompiledMixin): + CPUClass = getcpuclass() + + def _check_cbuilder(self, cbuilder): + # We assume here that we have sse2. If not, the CPUClass + # needs to be changed to CPU386_NO_SSE2, but well. 
+ assert '-msse2' in cbuilder.eci.compile_extra + assert '-mfpmath=sse' in cbuilder.eci.compile_extra + + def test_stuff_translates(self): + # this is a basic test that tries to hit a number of features and their + # translation: + # - jitting of loops and bridges + # - virtualizables + # - set_param interface + # - profiler + # - full optimizer + # - floats neg and abs + + class Frame(object): + _virtualizable2_ = ['i'] + + def __init__(self, i): + self.i = i + + @dont_look_inside + def myabs(x): + return abs(x) + + jitdriver = JitDriver(greens = [], + reds = ['total', 'frame', 'j'], + virtualizables = ['frame']) + def f(i, j): + for param, _ in unroll_parameters: + defl = PARAMETERS[param] + set_param(jitdriver, param, defl) + set_param(jitdriver, "threshold", 3) + set_param(jitdriver, "trace_eagerness", 2) + total = 0 + frame = Frame(i) + while frame.i > 3: + jitdriver.can_enter_jit(frame=frame, total=total, j=j) + jitdriver.jit_merge_point(frame=frame, total=total, j=j) + total += frame.i + if frame.i >= 20: + frame.i -= 2 + frame.i -= 1 + j *= -0.712 + if j + (-j): raise ValueError + k = myabs(j) + if k - abs(j): raise ValueError + if k - abs(-j): raise ValueError + return chr(total % 253) + # + from pypy.rpython.lltypesystem import lltype, rffi + from pypy.rlib.libffi import types, CDLL, ArgChain + from pypy.rlib.test.test_libffi import get_libm_name + libm_name = get_libm_name(sys.platform) + jitdriver2 = JitDriver(greens=[], reds = ['i', 'func', 'res', 'x']) + def libffi_stuff(i, j): + lib = CDLL(libm_name) + func = lib.getpointer('fabs', [types.double], types.double) + res = 0.0 + x = float(j) + while i > 0: + jitdriver2.jit_merge_point(i=i, res=res, func=func, x=x) + promote(func) + argchain = ArgChain() + argchain.arg(x) + res = func.call(argchain, rffi.DOUBLE) + i -= 1 + return res + # + def main(i, j): + a_char = f(i, j) + a_float = libffi_stuff(i, j) + return ord(a_char) * 10 + int(a_float) + expected = main(40, -49) + res = self.meta_interp(main, [40, -49]) + assert res == expected + + def test_direct_assembler_call_translates(self): + """Test CALL_ASSEMBLER and the recursion limit""" + from pypy.rlib.rstackovf import StackOverflow + + class Thing(object): + def __init__(self, val): + self.val = val + + class Frame(object): + _virtualizable2_ = ['thing'] + + driver = JitDriver(greens = ['codeno'], reds = ['i', 'frame'], + virtualizables = ['frame'], + get_printable_location = lambda codeno: str(codeno)) + class SomewhereElse(object): + pass + + somewhere_else = SomewhereElse() + + def change(newthing): + somewhere_else.frame.thing = newthing + + def main(codeno): + frame = Frame() + somewhere_else.frame = frame + frame.thing = Thing(0) + portal(codeno, frame) + return frame.thing.val + + def portal(codeno, frame): + i = 0 + while i < 10: + driver.can_enter_jit(frame=frame, codeno=codeno, i=i) + driver.jit_merge_point(frame=frame, codeno=codeno, i=i) + nextval = frame.thing.val + if codeno == 0: + subframe = Frame() + subframe.thing = Thing(nextval) + nextval = portal(1, subframe) + elif frame.thing.val > 40: + change(Thing(13)) + nextval = 13 + frame.thing = Thing(nextval + 1) + i += 1 + return frame.thing.val + + driver2 = JitDriver(greens = [], reds = ['n']) + + def main2(bound): + try: + while portal2(bound) == -bound+1: + bound *= 2 + except StackOverflow: + pass + return bound + + def portal2(n): + while True: + driver2.jit_merge_point(n=n) + n -= 1 + if n <= 0: + return n + n = portal2(n) + assert portal2(10) == -9 + + def mainall(codeno, bound): + return main(codeno) 
+ main2(bound) + + res = self.meta_interp(mainall, [0, 1], inline=True, + policy=StopAtXPolicy(change)) + print hex(res) + assert res & 255 == main(0) + bound = res & ~255 + assert 1024 <= bound <= 131072 + assert bound & (bound-1) == 0 # a power of two + + +class TestTranslationRemoveTypePtrPPC(CCompiledMixin): + CPUClass = getcpuclass() + + def _get_TranslationContext(self): + t = TranslationContext() + t.config.translation.gc = DEFL_GC # 'hybrid' or 'minimark' + t.config.translation.gcrootfinder = 'asmgcc' + t.config.translation.list_comprehension_operations = True + t.config.translation.gcremovetypeptr = True + return t + + def test_external_exception_handling_translates(self): + jitdriver = JitDriver(greens = [], reds = ['n', 'total']) + + class ImDone(Exception): + def __init__(self, resvalue): + self.resvalue = resvalue + + @dont_look_inside + def f(x, total): + if x <= 30: + raise ImDone(total * 10) + if x > 200: + return 2 + raise ValueError + @dont_look_inside + def g(x): + if x > 150: + raise ValueError + return 2 + class Base: + def meth(self): + return 2 + class Sub(Base): + def meth(self): + return 1 + @dont_look_inside + def h(x): + if x < 20000: + return Sub() + else: + return Base() + def myportal(i): + set_param(jitdriver, "threshold", 3) + set_param(jitdriver, "trace_eagerness", 2) + total = 0 + n = i + while True: + jitdriver.can_enter_jit(n=n, total=total) + jitdriver.jit_merge_point(n=n, total=total) + try: + total += f(n, total) + except ValueError: + total += 1 + try: + total += g(n) + except ValueError: + total -= 1 + n -= h(n).meth() # this is to force a GUARD_CLASS + def main(i): + try: + myportal(i) + except ImDone, e: + return e.resvalue + + # XXX custom fishing, depends on the exact env var and format + logfile = udir.join('test_ztranslation.log') + os.environ['PYPYLOG'] = 'jit-log-opt:%s' % (logfile,) + try: + res = self.meta_interp(main, [400]) + assert res == main(400) + finally: + del os.environ['PYPYLOG'] + + guard_class = 0 + for line in open(str(logfile)): + if 'guard_class' in line: + guard_class += 1 + # if we get many more guard_classes, it means that we generate + # guards that always fail (the following assert's original purpose + # is to catch the following case: each GUARD_CLASS is misgenerated + # and always fails with "gcremovetypeptr") + assert 0 < guard_class < 10 diff --git a/pypy/jit/backend/ppc/ppcgen/util.py b/pypy/jit/backend/ppc/util.py rename from pypy/jit/backend/ppc/ppcgen/util.py rename to pypy/jit/backend/ppc/util.py --- a/pypy/jit/backend/ppc/ppcgen/util.py +++ b/pypy/jit/backend/ppc/util.py @@ -1,5 +1,5 @@ -from pypy.jit.codegen.ppc.ppcgen.ppc_assembler import MyPPCAssembler -from pypy.jit.codegen.ppc.ppcgen.func_builder import make_func +from pypy.jit.codegen.ppc.ppc_assembler import MyPPCAssembler +from pypy.jit.codegen.ppc.func_builder import make_func from regname import * From noreply at buildbot.pypy.org Tue Feb 7 17:18:01 2012 From: noreply at buildbot.pypy.org (bivab) Date: Tue, 7 Feb 2012 17:18:01 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend-rpythonization: move the call to setup_failure_recovery to the init method of the assembler to fix an annotation issue Message-ID: <20120207161801.B5DFF7107FA@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: ppc-jit-backend-rpythonization Changeset: r52195:ec2a19d0b674 Date: 2012-02-07 08:15 -0800 http://bitbucket.org/pypy/pypy/changeset/ec2a19d0b674/ Log: move the call to setup_failure_recovery to the init method of the assembler to fix an annotation issue 
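The change in the diff below is a one-line move that is easy to miss: the call to setup_failure_recovery() is taken out of the later setup path and placed in __init__. The commit message only says this fixes an annotation issue; a plausible reading is that the attributes created by that call must exist from object construction so the RPython annotator sees them consistently. A minimal sketch of the resulting shape, assuming hypothetical names (AssemblerSketch, failure_recovery_code and setup_once are illustrative placeholders, not the real AssemblerPPC API):

    class AssemblerSketch(object):
        def __init__(self, cpu):
            self.cpu = cpu
            self.propagate_exception_path = 0
            # now called at construction time, so the attributes it
            # defines are present on every instance from the start
            self.setup_failure_recovery()

        def setup_failure_recovery(self):
            # the real method builds the failure-recovery machinery;
            # this placeholder only shows where the attribute comes from
            self.failure_recovery_code = None

        def setup_once(self):
            # no longer responsible for calling setup_failure_recovery()
            pass

The sketch is only meant to make the before/after pattern visible; the actual fields and the body of setup_failure_recovery are those shown in the real diff that follows.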
diff --git a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py b/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py --- a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py +++ b/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py @@ -106,6 +106,7 @@ self._regalloc = None self.max_stack_params = 0 self.propagate_exception_path = 0 + self.setup_failure_recovery() def _save_nonvolatiles(self): """ save nonvolatile GPRs in GPR SAVE AREA @@ -382,7 +383,6 @@ gc_ll_descr.initialize() self._build_propagate_exception_path() self.memcpy_addr = self.cpu.cast_ptr_to_int(memcpy_fn) - self.setup_failure_recovery() self.exit_code_adr = self._gen_exit_path() self._leave_jitted_hook_save_exc = self._gen_leave_jitted_hook_code(True) self._leave_jitted_hook = self._gen_leave_jitted_hook_code(False) From noreply at buildbot.pypy.org Tue Feb 7 17:56:23 2012 From: noreply at buildbot.pypy.org (hager) Date: Tue, 7 Feb 2012 17:56:23 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend-rpythonization: (bivab, hager): translation fixes Message-ID: <20120207165623.88D3A7107FA@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend-rpythonization Changeset: r52196:5730965ab0ff Date: 2012-02-07 08:55 -0800 http://bitbucket.org/pypy/pypy/changeset/5730965ab0ff/ Log: (bivab, hager): translation fixes diff --git a/pypy/jit/backend/arm/runner.py b/pypy/jit/backend/arm/runner.py --- a/pypy/jit/backend/arm/runner.py +++ b/pypy/jit/backend/arm/runner.py @@ -112,7 +112,7 @@ (frame_depth + len(all_regs) * WORD + len(all_vfp_regs) * 2 * WORD)) - fail_index_2 = self.assembler.failure_recovery_func( + fail_index_2 = self.assembler.decode_registers_and_descr( faildescr._failure_recovery_code, addr_of_force_index, addr_end_of_frame) diff --git a/pypy/jit/backend/ppc/ppcgen/helper/assembler.py b/pypy/jit/backend/ppc/ppcgen/helper/assembler.py --- a/pypy/jit/backend/ppc/ppcgen/helper/assembler.py +++ b/pypy/jit/backend/ppc/ppcgen/helper/assembler.py @@ -83,7 +83,7 @@ def decode64(mem, index): value = 0 - for x in unrolling_iterable(range(8)): + for x in range(8): value |= (ord(mem[index + x]) << (56 - x * 8)) return intmask(value) diff --git a/pypy/jit/backend/ppc/ppcgen/helper/regalloc.py b/pypy/jit/backend/ppc/ppcgen/helper/regalloc.py --- a/pypy/jit/backend/ppc/ppcgen/helper/regalloc.py +++ b/pypy/jit/backend/ppc/ppcgen/helper/regalloc.py @@ -10,7 +10,7 @@ return False def _check_imm_arg(arg, size=IMM_SIZE, allow_zero=True): - #assert not isinstance(arg, ConstInt) + assert not isinstance(arg, ConstInt) #if not we_are_translated(): # if not isinstance(arg, int): # import pdb; pdb.set_trace() @@ -25,8 +25,8 @@ def f(self, op): boxes = op.getarglist() arg0, arg1 = boxes - imm_a0 = _check_imm_arg(arg0) - imm_a1 = _check_imm_arg(arg1) + imm_a0 = check_imm_box(arg0) + imm_a1 = check_imm_box(arg1) l0 = self._ensure_value_is_boxed(arg0, forbidden_vars=boxes) if imm_a1 and not imm_a0: @@ -63,8 +63,8 @@ def f(self, op): boxes = op.getarglist() b0, b1 = boxes - imm_b0 = _check_imm_arg(b0) - imm_b1 = _check_imm_arg(b1) + imm_b0 = check_imm_box(b0) + imm_b1 = check_imm_box(b1) l0 = self._ensure_value_is_boxed(b0, boxes) l1 = self._ensure_value_is_boxed(b1, boxes) locs = [l0, l1] diff --git a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py b/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py --- a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py +++ b/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py @@ -238,6 +238,7 @@ descr = decode32(enc, i+1) self.fail_boxes_count = fail_index self.fail_force_index = spp_loc + assert isinstance(descr, int) return descr def 
decode_inputargs(self, enc): @@ -600,8 +601,7 @@ if op.is_ovf(): if (operations[i + 1].getopnum() != rop.GUARD_NO_OVERFLOW and operations[i + 1].getopnum() != rop.GUARD_OVERFLOW): - not_implemented("int_xxx_ovf not followed by " - "guard_(no)_overflow") + assert 0, "int_xxx_ovf not followed by guard_(no)_overflow" return True return False if (operations[i + 1].getopnum() != rop.GUARD_TRUE and diff --git a/pypy/jit/backend/ppc/ppcgen/regalloc.py b/pypy/jit/backend/ppc/ppcgen/regalloc.py --- a/pypy/jit/backend/ppc/ppcgen/regalloc.py +++ b/pypy/jit/backend/ppc/ppcgen/regalloc.py @@ -512,7 +512,7 @@ loc, box = self._ensure_value_is_boxed(op.getarg(i), argboxes) arglocs.append(loc) argboxes.append(box) - self.assembler.call_release_gil(gcrootmap, arglocs, fcond) + self.assembler.call_release_gil(gcrootmap, arglocs) self.possibly_free_vars(argboxes) # do the call faildescr = guard_op.getdescr() @@ -595,11 +595,10 @@ args = op.getarglist() base_loc = self._ensure_value_is_boxed(op.getarg(0), args) index_loc = self._ensure_value_is_boxed(op.getarg(1), args) - c_ofs = ConstInt(ofs) - if _check_imm_arg(c_ofs): + if _check_imm_arg(ofs): ofs_loc = imm(ofs) else: - ofs_loc = self._ensure_value_is_boxed(c_ofs, args) + ofs_loc = self._ensure_value_is_boxed(ConstInt(ofs), args) self.possibly_free_vars_for_op(op) self.free_temp_vars() result_loc = self.force_allocate_reg(op.result) @@ -614,11 +613,10 @@ base_loc = self._ensure_value_is_boxed(op.getarg(0), args) index_loc = self._ensure_value_is_boxed(op.getarg(1), args) value_loc = self._ensure_value_is_boxed(op.getarg(2), args) - c_ofs = ConstInt(ofs) - if _check_imm_arg(c_ofs): + if _check_imm_arg(ofs): ofs_loc = imm(ofs) else: - ofs_loc = self._ensure_value_is_boxed(c_ofs, args) + ofs_loc = self._ensure_value_is_boxed(ConstInt(ofs), args) return [base_loc, index_loc, value_loc, ofs_loc, imm(ofs), imm(itemsize), imm(fieldsize)] @@ -640,8 +638,7 @@ base_loc = self._ensure_value_is_boxed(args[0], args) ofs_loc = self._ensure_value_is_boxed(args[1], args) value_loc = self._ensure_value_is_boxed(args[2], args) - scratch_loc = self.rm.get_scratch_reg(INT, - [base_loc, ofs_loc, value_loc]) + scratch_loc = self.rm.get_scratch_reg(INT, args) assert _check_imm_arg(ofs) return [value_loc, base_loc, ofs_loc, scratch_loc, imm(scale), imm(ofs)] prepare_setarrayitem_raw = prepare_setarrayitem_gc @@ -652,7 +649,7 @@ scale = get_scale(size) base_loc = self._ensure_value_is_boxed(boxes[0], boxes) ofs_loc = self._ensure_value_is_boxed(boxes[1], boxes) - scratch_loc = self.rm.get_scratch_reg(INT, [base_loc, ofs_loc]) + scratch_loc = self.rm.get_scratch_reg(INT, boxes) self.possibly_free_vars_for_op(op) self.free_temp_vars() res = self.force_allocate_reg(op.result) diff --git a/pypy/jit/backend/ppc/runner.py b/pypy/jit/backend/ppc/runner.py --- a/pypy/jit/backend/ppc/runner.py +++ b/pypy/jit/backend/ppc/runner.py @@ -97,7 +97,7 @@ rffi.cast(TP, addr_of_force_index)[0] = ~fail_index # start of "no gc operation!" block - fail_index_2 = self.asm.failure_recovery_func( + fail_index_2 = self.asm.decode_registers_and_descr( faildescr._failure_recovery_code, spilling_pointer) self.asm.leave_jitted_hook() # end of "no gc operation!" block From noreply at buildbot.pypy.org Tue Feb 7 18:20:24 2012 From: noreply at buildbot.pypy.org (hager) Date: Tue, 7 Feb 2012 18:20:24 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend-rpythonization: (bivab, hager): disable some code that does not work at the moment. 
Message-ID: <20120207172024.843D57107FA@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend-rpythonization Changeset: r52197:8004bfed337a Date: 2012-02-07 08:57 -0800 http://bitbucket.org/pypy/pypy/changeset/8004bfed337a/ Log: (bivab, hager): disable some code that does not work at the moment. diff --git a/pypy/jit/backend/ppc/ppcgen/regalloc.py b/pypy/jit/backend/ppc/ppcgen/regalloc.py --- a/pypy/jit/backend/ppc/ppcgen/regalloc.py +++ b/pypy/jit/backend/ppc/ppcgen/regalloc.py @@ -763,11 +763,13 @@ def prepare_call(self, op): effectinfo = op.getdescr().get_extra_info() if effectinfo is not None: - oopspecindex = effectinfo.oopspecindex - if oopspecindex == EffectInfo.OS_MATH_SQRT: - args = self.prepare_op_math_sqrt(op, fcond) - self.assembler.emit_op_math_sqrt(op, args, self, fcond) - return + # XXX TODO + #oopspecindex = effectinfo.oopspecindex + #if oopspecindex == EffectInfo.OS_MATH_SQRT: + # args = self.prepare_op_math_sqrt(op, fcond) + # self.assembler.emit_op_math_sqrt(op, args, self, fcond) + # return + pass args = [imm(rffi.cast(lltype.Signed, op.getarg(0).getint()))] return args From noreply at buildbot.pypy.org Tue Feb 7 18:20:25 2012 From: noreply at buildbot.pypy.org (hager) Date: Tue, 7 Feb 2012 18:20:25 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend-rpythonization: some translation fixes in call_assembler Message-ID: <20120207172025.B89707107FA@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend-rpythonization Changeset: r52198:79e36f74aa2a Date: 2012-02-07 09:14 -0800 http://bitbucket.org/pypy/pypy/changeset/79e36f74aa2a/ Log: some translation fixes in call_assembler diff --git a/pypy/jit/backend/ppc/ppcgen/opassembler.py b/pypy/jit/backend/ppc/ppcgen/opassembler.py --- a/pypy/jit/backend/ppc/ppcgen/opassembler.py +++ b/pypy/jit/backend/ppc/ppcgen/opassembler.py @@ -1031,16 +1031,16 @@ # Reset the vable token --- XXX really too much special logic here:-( if jd.index_of_virtualizable >= 0: - from pypy.jit.backend.llsupport.descr import BaseFieldDescr + from pypy.jit.backend.llsupport.descr import FieldDescr fielddescr = jd.vable_token_descr - assert isinstance(fielddescr, BaseFieldDescr) + assert isinstance(fielddescr, FieldDescr) ofs = fielddescr.offset resloc = regalloc.force_allocate_reg(resbox) - self.alloc_scratch_reg() + self.mc.alloc_scratch_reg() self.mov_loc_loc(arglocs[1], r.SCRATCH) self.mc.li(resloc.value, 0) self.mc.storex(resloc.value, 0, r.SCRATCH.value) - self.free_scratch_reg() + self.mc.free_scratch_reg() regalloc.possibly_free_var(resbox) if op.result is not None: From noreply at buildbot.pypy.org Tue Feb 7 18:20:26 2012 From: noreply at buildbot.pypy.org (hager) Date: Tue, 7 Feb 2012 18:20:26 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend-rpythonization: (bivab, hager): disable codepath until gc support is in place Message-ID: <20120207172026.EB6AF7107FA@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend-rpythonization Changeset: r52199:11b80f0c5896 Date: 2012-02-07 09:19 -0800 http://bitbucket.org/pypy/pypy/changeset/11b80f0c5896/ Log: (bivab, hager): disable codepath until gc support is in place diff --git a/pypy/jit/backend/ppc/ppcgen/regalloc.py b/pypy/jit/backend/ppc/ppcgen/regalloc.py --- a/pypy/jit/backend/ppc/ppcgen/regalloc.py +++ b/pypy/jit/backend/ppc/ppcgen/regalloc.py @@ -522,7 +522,8 @@ self.assembler.emit_call(op, args, self, fail_index) # then reopen the stack if gcrootmap: - self.assembler.call_reacquire_gil(gcrootmap, r.r0, fcond) + assert 0, "not implemented yet" + # 
self.assembler.call_reacquire_gil(gcrootmap, registers) locs = self._prepare_guard(guard_op) self.possibly_free_vars(guard_op.getfailargs()) return locs From noreply at buildbot.pypy.org Tue Feb 7 18:39:39 2012 From: noreply at buildbot.pypy.org (hager) Date: Tue, 7 Feb 2012 18:39:39 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend-rpythonization: add gc test Message-ID: <20120207173939.31E887107FA@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend-rpythonization Changeset: r52200:7a29e82da96d Date: 2012-02-07 09:38 -0800 http://bitbucket.org/pypy/pypy/changeset/7a29e82da96d/ Log: add gc test diff --git a/pypy/jit/backend/ppc/test/test_zrpy_gc.py b/pypy/jit/backend/ppc/test/test_zrpy_gc.py new file mode 100644 --- /dev/null +++ b/pypy/jit/backend/ppc/test/test_zrpy_gc.py @@ -0,0 +1,795 @@ +""" +This is a test that translates a complete JIT together with a GC and runs it. +It is testing that the GC-dependent aspects basically work, mostly the mallocs +and the various cases of write barrier. +""" + +import weakref +import py, os +from pypy.annotation import policy as annpolicy +from pypy.rlib import rgc +from pypy.rpython.lltypesystem import lltype, llmemory, rffi +from pypy.rlib.jit import JitDriver, dont_look_inside +from pypy.rlib.jit import elidable, unroll_safe +from pypy.jit.backend.llsupport.gc import GcLLDescr_framework +from pypy.tool.udir import udir +from pypy.config.translationoption import DEFL_GC + +class X(object): + def __init__(self, x=0): + self.x = x + + next = None + +class CheckError(Exception): + pass + +def check(flag): + if not flag: + raise CheckError + +def get_g(main): + main._dont_inline_ = True + def g(name, n): + x = X() + x.foo = 2 + main(n, x) + x.foo = 5 + return weakref.ref(x) + g._dont_inline_ = True + return g + + +def get_entry(g): + + def entrypoint(args): + name = '' + n = 2000 + argc = len(args) + if argc > 1: + name = args[1] + if argc > 2: + n = int(args[2]) + r_list = [] + for i in range(20): + r = g(name, n) + r_list.append(r) + rgc.collect() + rgc.collect(); rgc.collect() + freed = 0 + for r in r_list: + if r() is None: + freed += 1 + print freed + return 0 + + return entrypoint + + +def get_functions_to_patch(): + from pypy.jit.backend.llsupport import gc + # + can_use_nursery_malloc1 = gc.GcLLDescr_framework.can_use_nursery_malloc + def can_use_nursery_malloc2(*args): + try: + if os.environ['PYPY_NO_INLINE_MALLOC']: + return False + except KeyError: + pass + return can_use_nursery_malloc1(*args) + # + return {(gc.GcLLDescr_framework, 'can_use_nursery_malloc'): + can_use_nursery_malloc2} + +def compile(f, gc, enable_opts='', **kwds): + from pypy.annotation.listdef import s_list_of_strings + from pypy.translator.translator import TranslationContext + from pypy.jit.metainterp.warmspot import apply_jit + from pypy.translator.c import genc + # + t = TranslationContext() + t.config.translation.gc = gc + if gc != 'boehm': + t.config.translation.gcremovetypeptr = True + for name, value in kwds.items(): + setattr(t.config.translation, name, value) + ann = t.buildannotator(policy=annpolicy.StrictAnnotatorPolicy()) + ann.build_types(f, [s_list_of_strings], main_entry_point=True) + t.buildrtyper().specialize() + + if kwds['jit']: + patch = get_functions_to_patch() + old_value = {} + try: + for (obj, attr), value in patch.items(): + old_value[obj, attr] = getattr(obj, attr) + setattr(obj, attr, value) + # + apply_jit(t, enable_opts=enable_opts) + # + finally: + for (obj, attr), oldvalue in old_value.items(): + setattr(obj, attr, 
oldvalue) + + cbuilder = genc.CStandaloneBuilder(t, f, t.config) + cbuilder.generate_source(defines=cbuilder.DEBUG_DEFINES) + cbuilder.compile() + return cbuilder + +def run(cbuilder, args=''): + # + pypylog = udir.join('test_zrpy_gc.log') + data = cbuilder.cmdexec(args, env={'PYPYLOG': ':%s' % pypylog}) + return data.strip() + +def compile_and_run(f, gc, **kwds): + cbuilder = compile(f, gc, **kwds) + return run(cbuilder) + + + +def test_compile_boehm(): + myjitdriver = JitDriver(greens = [], reds = ['n', 'x']) + @dont_look_inside + def see(lst, n): + assert len(lst) == 3 + assert lst[0] == n+10 + assert lst[1] == n+20 + assert lst[2] == n+30 + def main(n, x): + while n > 0: + myjitdriver.can_enter_jit(n=n, x=x) + myjitdriver.jit_merge_point(n=n, x=x) + y = X() + y.foo = x.foo + n -= y.foo + see([n+10, n+20, n+30], n) + res = compile_and_run(get_entry(get_g(main)), "boehm", jit=True) + assert int(res) >= 16 + +# ______________________________________________________________________ + + +class BaseFrameworkTests(object): + compile_kwds = {} + + def setup_class(cls): + funcs = [] + name_to_func = {} + for fullname in dir(cls): + if not fullname.startswith('define'): + continue + definefunc = getattr(cls, fullname) + _, name = fullname.split('_', 1) + beforefunc, loopfunc, afterfunc = definefunc.im_func(cls) + if beforefunc is None: + def beforefunc(n, x): + return n, x, None, None, None, None, None, None, None, None, None, '' + if afterfunc is None: + def afterfunc(n, x, x0, x1, x2, x3, x4, x5, x6, x7, l, s): + pass + beforefunc.func_name = 'before_'+name + loopfunc.func_name = 'loop_'+name + afterfunc.func_name = 'after_'+name + funcs.append((beforefunc, loopfunc, afterfunc)) + assert name not in name_to_func + name_to_func[name] = len(name_to_func) + print name_to_func + def allfuncs(name, n): + x = X() + x.foo = 2 + main_allfuncs(name, n, x) + x.foo = 5 + return weakref.ref(x) + def main_allfuncs(name, n, x): + num = name_to_func[name] + n, x, x0, x1, x2, x3, x4, x5, x6, x7, l, s = funcs[num][0](n, x) + while n > 0: + myjitdriver.can_enter_jit(num=num, n=n, x=x, x0=x0, x1=x1, + x2=x2, x3=x3, x4=x4, x5=x5, x6=x6, x7=x7, l=l, s=s) + myjitdriver.jit_merge_point(num=num, n=n, x=x, x0=x0, x1=x1, + x2=x2, x3=x3, x4=x4, x5=x5, x6=x6, x7=x7, l=l, s=s) + + n, x, x0, x1, x2, x3, x4, x5, x6, x7, l, s = funcs[num][1]( + n, x, x0, x1, x2, x3, x4, x5, x6, x7, l, s) + funcs[num][2](n, x, x0, x1, x2, x3, x4, x5, x6, x7, l, s) + myjitdriver = JitDriver(greens = ['num'], + reds = ['n', 'x', 'x0', 'x1', 'x2', 'x3', 'x4', + 'x5', 'x6', 'x7', 'l', 's']) + cls.main_allfuncs = staticmethod(main_allfuncs) + cls.name_to_func = name_to_func + OLD_DEBUG = GcLLDescr_framework.DEBUG + try: + GcLLDescr_framework.DEBUG = True + cls.cbuilder = compile(get_entry(allfuncs), DEFL_GC, + gcrootfinder=cls.gcrootfinder, jit=True, + **cls.compile_kwds) + finally: + GcLLDescr_framework.DEBUG = OLD_DEBUG + + def _run(self, name, n, env): + res = self.cbuilder.cmdexec("%s %d" %(name, n), env=env) + assert int(res) == 20 + + def run(self, name, n=2000): + pypylog = udir.join('TestCompileFramework.log') + env = {'PYPYLOG': ':%s' % pypylog, + 'PYPY_NO_INLINE_MALLOC': '1'} + self._run(name, n, env) + env['PYPY_NO_INLINE_MALLOC'] = '' + self._run(name, n, env) + + def run_orig(self, name, n, x): + self.main_allfuncs(name, n, x) + + +class CompileFrameworkTests(BaseFrameworkTests): + # Test suite using (so far) the minimark GC. + +## def define_libffi_workaround(cls): +## # XXX: this is a workaround for a bug in database.py. 
It seems that +## # the problem is triggered by optimizeopt/fficall.py, and in +## # particular by the ``cast_base_ptr_to_instance(Func, llfunc)``: in +## # these tests, that line is the only place where libffi.Func is +## # referenced. +## # +## # The problem occurs because the gctransformer tries to annotate a +## # low-level helper to call the __del__ of libffi.Func when it's too +## # late. +## # +## # This workaround works by forcing the annotator (and all the rest of +## # the toolchain) to see libffi.Func in a "proper" context, not just as +## # the target of cast_base_ptr_to_instance. Note that the function +## # below is *never* called by any actual test, it's just annotated. +## # +## from pypy.rlib.libffi import get_libc_name, CDLL, types, ArgChain +## libc_name = get_libc_name() +## def f(n, x, *args): +## libc = CDLL(libc_name) +## ptr = libc.getpointer('labs', [types.slong], types.slong) +## chain = ArgChain() +## chain.arg(n) +## n = ptr.call(chain, lltype.Signed) +## return (n, x) + args +## return None, f, None + + def define_compile_framework_1(cls): + # a moving GC. Supports malloc_varsize_nonmovable. Simple test, works + # without write_barriers and root stack enumeration. + def f(n, x, *args): + y = X() + y.foo = x.foo + n -= y.foo + return (n, x) + args + return None, f, None + + def test_compile_framework_1(self): + self.run('compile_framework_1') + + def define_compile_framework_2(cls): + # More complex test, requires root stack enumeration but + # not write_barriers. + def f(n, x, *args): + prev = x + for j in range(101): # f() runs 20'000 times, thus allocates + y = X() # a total of 2'020'000 objects + y.foo = prev.foo + prev = y + n -= prev.foo + return (n, x) + args + return None, f, None + + def test_compile_framework_2(self): + self.run('compile_framework_2') + + def define_compile_framework_3(cls): + # Third version of the test. Really requires write_barriers. + def f(n, x, *args): + x.next = None + for j in range(101): # f() runs 20'000 times, thus allocates + y = X() # a total of 2'020'000 objects + y.foo = j+1 + y.next = x.next + x.next = y + check(x.next.foo == 101) + total = 0 + y = x + for j in range(101): + y = y.next + total += y.foo + check(not y.next) + check(total == 101*102/2) + n -= x.foo + return (n, x) + args + return None, f, None + + + + def test_compile_framework_3(self): + x_test = X() + x_test.foo = 5 + self.run_orig('compile_framework_3', 6, x_test) # check that it does not raise CheckError + self.run('compile_framework_3') + + def define_compile_framework_3_extra(cls): + # Extra version of the test, with tons of live vars around the residual + # call that all contain a GC pointer. 
+ @dont_look_inside + def residual(n=26): + x = X() + x.next = X() + x.next.foo = n + return x + # + def before(n, x): + residual(5) + x0 = residual() + x1 = residual() + x2 = residual() + x3 = residual() + x4 = residual() + x5 = residual() + x6 = residual() + x7 = residual() + n *= 19 + return n, None, x0, x1, x2, x3, x4, x5, x6, x7, None, None + def f(n, x, x0, x1, x2, x3, x4, x5, x6, x7, l, s): + x8 = residual() + x9 = residual() + check(x0.next.foo == 26) + check(x1.next.foo == 26) + check(x2.next.foo == 26) + check(x3.next.foo == 26) + check(x4.next.foo == 26) + check(x5.next.foo == 26) + check(x6.next.foo == 26) + check(x7.next.foo == 26) + check(x8.next.foo == 26) + check(x9.next.foo == 26) + x0, x1, x2, x3, x4, x5, x6, x7 = x7, x4, x6, x5, x3, x2, x9, x8 + n -= 1 + return n, None, x0, x1, x2, x3, x4, x5, x6, x7, None, None + return before, f, None + + def test_compile_framework_3_extra(self): + self.run_orig('compile_framework_3_extra', 6, None) # check that it does not raise CheckError + self.run('compile_framework_3_extra') + + def define_compile_framework_4(cls): + # Fourth version of the test, with __del__. + from pypy.rlib.debug import debug_print + class Counter: + cnt = 0 + counter = Counter() + class Z: + def __del__(self): + counter.cnt -= 1 + def before(n, x): + debug_print('counter.cnt =', counter.cnt) + check(counter.cnt < 5) + counter.cnt = n // x.foo + return n, x, None, None, None, None, None, None, None, None, None, None + def f(n, x, *args): + Z() + n -= x.foo + return (n, x) + args + return before, f, None + + def test_compile_framework_4(self): + self.run('compile_framework_4') + + def define_compile_framework_5(cls): + # Test string manipulation. + def f(n, x, x0, x1, x2, x3, x4, x5, x6, x7, l, s): + n -= x.foo + s += str(n) + return n, x, x0, x1, x2, x3, x4, x5, x6, x7, l, s + def after(n, x, x0, x1, x2, x3, x4, x5, x6, x7, l, s): + check(len(s) == 1*5 + 2*45 + 3*450 + 4*500) + return None, f, after + + def test_compile_framework_5(self): + self.run('compile_framework_5') + + def define_compile_framework_7(cls): + # Array of pointers (test the write barrier for setarrayitem_gc) + def before(n, x): + return n, x, None, None, None, None, None, None, None, None, [X(123)], None + def f(n, x, x0, x1, x2, x3, x4, x5, x6, x7, l, s): + if n < 1900: + check(l[0].x == 123) + l = [None] * 16 + l[0] = X(123) + l[1] = X(n) + l[2] = X(n+10) + l[3] = X(n+20) + l[4] = X(n+30) + l[5] = X(n+40) + l[6] = X(n+50) + l[7] = X(n+60) + l[8] = X(n+70) + l[9] = X(n+80) + l[10] = X(n+90) + l[11] = X(n+100) + l[12] = X(n+110) + l[13] = X(n+120) + l[14] = X(n+130) + l[15] = X(n+140) + if n < 1800: + check(len(l) == 16) + check(l[0].x == 123) + check(l[1].x == n) + check(l[2].x == n+10) + check(l[3].x == n+20) + check(l[4].x == n+30) + check(l[5].x == n+40) + check(l[6].x == n+50) + check(l[7].x == n+60) + check(l[8].x == n+70) + check(l[9].x == n+80) + check(l[10].x == n+90) + check(l[11].x == n+100) + check(l[12].x == n+110) + check(l[13].x == n+120) + check(l[14].x == n+130) + check(l[15].x == n+140) + n -= x.foo + return n, x, x0, x1, x2, x3, x4, x5, x6, x7, l, s + def after(n, x, x0, x1, x2, x3, x4, x5, x6, x7, l, s): + check(len(l) == 16) + check(l[0].x == 123) + check(l[1].x == 2) + check(l[2].x == 12) + check(l[3].x == 22) + check(l[4].x == 32) + check(l[5].x == 42) + check(l[6].x == 52) + check(l[7].x == 62) + check(l[8].x == 72) + check(l[9].x == 82) + check(l[10].x == 92) + check(l[11].x == 102) + check(l[12].x == 112) + check(l[13].x == 122) + check(l[14].x == 132) + 
check(l[15].x == 142) + return before, f, after + + def test_compile_framework_7(self): + self.run('compile_framework_7') + + def define_compile_framework_7_interior(cls): + # Array of structs containing pointers (test the write barrier + # for setinteriorfield_gc) + S = lltype.GcStruct('S', ('i', lltype.Signed)) + A = lltype.GcArray(lltype.Struct('entry', ('x', lltype.Ptr(S)), + ('y', lltype.Ptr(S)), + ('z', lltype.Ptr(S)))) + class Glob: + a = lltype.nullptr(A) + glob = Glob() + # + def make_s(i): + s = lltype.malloc(S) + s.i = i + return s + # + @unroll_safe + def f(n, x, x0, x1, x2, x3, x4, x5, x6, x7, l, s): + a = glob.a + if not a: + a = glob.a = lltype.malloc(A, 10) + i = 0 + while i < 10: + a[i].x = make_s(n + i * 100 + 1) + a[i].y = make_s(n + i * 100 + 2) + a[i].z = make_s(n + i * 100 + 3) + i += 1 + i = 0 + while i < 10: + check(a[i].x.i == n + i * 100 + 1) + check(a[i].y.i == n + i * 100 + 2) + check(a[i].z.i == n + i * 100 + 3) + i += 1 + n -= x.foo + return n, x, x0, x1, x2, x3, x4, x5, x6, x7, l, s + return None, f, None + + def test_compile_framework_7_interior(self): + self.run('compile_framework_7_interior') + + def define_compile_framework_8(cls): + # Array of pointers, of unknown length (test write_barrier_from_array) + def before(n, x): + return n, x, None, None, None, None, None, None, None, None, [X(123)], None + def f(n, x, x0, x1, x2, x3, x4, x5, x6, x7, l, s): + if n < 1900: + check(l[0].x == 123) + l = [None] * (16 + (n & 7)) + l[0] = X(123) + l[1] = X(n) + l[2] = X(n+10) + l[3] = X(n+20) + l[4] = X(n+30) + l[5] = X(n+40) + l[6] = X(n+50) + l[7] = X(n+60) + l[8] = X(n+70) + l[9] = X(n+80) + l[10] = X(n+90) + l[11] = X(n+100) + l[12] = X(n+110) + l[13] = X(n+120) + l[14] = X(n+130) + l[15] = X(n+140) + if n < 1800: + check(len(l) == 16 + (n & 7)) + check(l[0].x == 123) + check(l[1].x == n) + check(l[2].x == n+10) + check(l[3].x == n+20) + check(l[4].x == n+30) + check(l[5].x == n+40) + check(l[6].x == n+50) + check(l[7].x == n+60) + check(l[8].x == n+70) + check(l[9].x == n+80) + check(l[10].x == n+90) + check(l[11].x == n+100) + check(l[12].x == n+110) + check(l[13].x == n+120) + check(l[14].x == n+130) + check(l[15].x == n+140) + n -= x.foo + return n, x, x0, x1, x2, x3, x4, x5, x6, x7, l, s + def after(n, x, x0, x1, x2, x3, x4, x5, x6, x7, l, s): + check(len(l) >= 16) + check(l[0].x == 123) + check(l[1].x == 2) + check(l[2].x == 12) + check(l[3].x == 22) + check(l[4].x == 32) + check(l[5].x == 42) + check(l[6].x == 52) + check(l[7].x == 62) + check(l[8].x == 72) + check(l[9].x == 82) + check(l[10].x == 92) + check(l[11].x == 102) + check(l[12].x == 112) + check(l[13].x == 122) + check(l[14].x == 132) + check(l[15].x == 142) + return before, f, after + + def test_compile_framework_8(self): + self.run('compile_framework_8') + + def define_compile_framework_9(cls): + # Like compile_framework_8, but with variable indexes and large + # arrays, testing the card_marking case + def before(n, x): + return n, x, None, None, None, None, None, None, None, None, [X(123)], None + def f(n, x, x0, x1, x2, x3, x4, x5, x6, x7, l, s): + if n < 1900: + check(l[0].x == 123) + num = 512 + (n & 7) + l = [None] * num + l[0] = X(123) + l[1] = X(n) + l[2] = X(n+10) + l[3] = X(n+20) + l[4] = X(n+30) + l[5] = X(n+40) + l[6] = X(n+50) + l[7] = X(n+60) + l[num-8] = X(n+70) + l[num-9] = X(n+80) + l[num-10] = X(n+90) + l[num-11] = X(n+100) + l[-12] = X(n+110) + l[-13] = X(n+120) + l[-14] = X(n+130) + l[-15] = X(n+140) + if n < 1800: + num = 512 + (n & 7) + check(len(l) == num) + check(l[0].x 
== 123) + check(l[1].x == n) + check(l[2].x == n+10) + check(l[3].x == n+20) + check(l[4].x == n+30) + check(l[5].x == n+40) + check(l[6].x == n+50) + check(l[7].x == n+60) + check(l[num-8].x == n+70) + check(l[num-9].x == n+80) + check(l[num-10].x == n+90) + check(l[num-11].x == n+100) + check(l[-12].x == n+110) + check(l[-13].x == n+120) + check(l[-14].x == n+130) + check(l[-15].x == n+140) + n -= x.foo + return n, x, x0, x1, x2, x3, x4, x5, x6, x7, l, s + def after(n, x, x0, x1, x2, x3, x4, x5, x6, x7, l, s): + check(len(l) >= 512) + check(l[0].x == 123) + check(l[1].x == 2) + check(l[2].x == 12) + check(l[3].x == 22) + check(l[4].x == 32) + check(l[5].x == 42) + check(l[6].x == 52) + check(l[7].x == 62) + check(l[-8].x == 72) + check(l[-9].x == 82) + check(l[-10].x == 92) + check(l[-11].x == 102) + check(l[-12].x == 112) + check(l[-13].x == 122) + check(l[-14].x == 132) + check(l[-15].x == 142) + return before, f, after + + def test_compile_framework_9(self): + self.run('compile_framework_9') + + def define_compile_framework_external_exception_handling(cls): + def before(n, x): + x = X(0) + return n, x, None, None, None, None, None, None, None, None, None, None + + @dont_look_inside + def g(x): + if x > 200: + return 2 + raise ValueError + @dont_look_inside + def h(x): + if x > 150: + raise ValueError + return 2 + + def f(n, x, x0, x1, x2, x3, x4, x5, x6, x7, l, s): + try: + x.x += g(n) + except ValueError: + x.x += 1 + try: + x.x += h(n) + except ValueError: + x.x -= 1 + n -= 1 + return n, x, x0, x1, x2, x3, x4, x5, x6, x7, l, s + + def after(n, x, x0, x1, x2, x3, x4, x5, x6, x7, l, s): + check(x.x == 1800 * 2 + 1850 * 2 + 200 - 150) + + return before, f, None + + def test_compile_framework_external_exception_handling(self): + self.run('compile_framework_external_exception_handling') + + def define_compile_framework_bug1(self): + @elidable + def nonmoving(): + x = X(1) + for i in range(7): + rgc.collect() + return x + + @dont_look_inside + def do_more_stuff(): + x = X(5) + for i in range(7): + rgc.collect() + return x + + def f(n, x, x0, x1, x2, x3, x4, x5, x6, x7, l, s): + x0 = do_more_stuff() + check(nonmoving().x == 1) + n -= 1 + return n, x, x0, x1, x2, x3, x4, x5, x6, x7, l, s + + return None, f, None + + def test_compile_framework_bug1(self): + self.run('compile_framework_bug1', 200) + + def define_compile_framework_vref(self): + from pypy.rlib.jit import virtual_ref, virtual_ref_finish + class A: + pass + glob = A() + def f(n, x, x0, x1, x2, x3, x4, x5, x6, x7, l, s): + a = A() + glob.v = vref = virtual_ref(a) + virtual_ref_finish(vref, a) + n -= 1 + return n, x, x0, x1, x2, x3, x4, x5, x6, x7, l, s + return None, f, None + + def test_compile_framework_vref(self): + self.run('compile_framework_vref', 200) + + def define_compile_framework_float(self): + # test for a bug: the fastpath_malloc does not save and restore + # xmm registers around the actual call to the slow path + class A: + x0 = x1 = x2 = x3 = x4 = x5 = x6 = x7 = 0 + @dont_look_inside + def escape1(a): + a.x0 += 0 + a.x1 += 6 + a.x2 += 12 + a.x3 += 18 + a.x4 += 24 + a.x5 += 30 + a.x6 += 36 + a.x7 += 42 + @dont_look_inside + def escape2(n, f0, f1, f2, f3, f4, f5, f6, f7): + check(f0 == n + 0.0) + check(f1 == n + 0.125) + check(f2 == n + 0.25) + check(f3 == n + 0.375) + check(f4 == n + 0.5) + check(f5 == n + 0.625) + check(f6 == n + 0.75) + check(f7 == n + 0.875) + @unroll_safe + def f(n, x, x0, x1, x2, x3, x4, x5, x6, x7, l, s): + i = 0 + while i < 42: + m = n + i + f0 = m + 0.0 + f1 = m + 0.125 + f2 = m + 0.25 + f3 = 
m + 0.375 + f4 = m + 0.5 + f5 = m + 0.625 + f6 = m + 0.75 + f7 = m + 0.875 + a1 = A() + # at this point, all or most f's are still in xmm registers + escape1(a1) + escape2(m, f0, f1, f2, f3, f4, f5, f6, f7) + i += 1 + n -= 1 + return n, x, x0, x1, x2, x3, x4, x5, x6, x7, l, s + return None, f, None + + def test_compile_framework_float(self): + self.run('compile_framework_float') + + def define_compile_framework_minimal_size_in_nursery(self): + S = lltype.GcStruct('S') # no fields! + T = lltype.GcStruct('T', ('i', lltype.Signed)) + @unroll_safe + def f42(n, x, x0, x1, x2, x3, x4, x5, x6, x7, l, s): + lst1 = [] + lst2 = [] + i = 0 + while i < 42: + s1 = lltype.malloc(S) + t1 = lltype.malloc(T) + t1.i = 10000 + i + n + lst1.append(s1) + lst2.append(t1) + i += 1 + i = 0 + while i < 42: + check(lst2[i].i == 10000 + i + n) + i += 1 + n -= 1 + return n, x, x0, x1, x2, x3, x4, x5, x6, x7, l, s + return None, f42, None + + def test_compile_framework_minimal_size_in_nursery(self): + self.run('compile_framework_minimal_size_in_nursery') + + +class TestShadowStack(CompileFrameworkTests): + gcrootfinder = "shadowstack" + From noreply at buildbot.pypy.org Tue Feb 7 19:24:37 2012 From: noreply at buildbot.pypy.org (hager) Date: Tue, 7 Feb 2012 19:24:37 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend-rpythonization: add call_release_gil Message-ID: <20120207182437.D69BA7107FA@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend-rpythonization Changeset: r52201:856baf37a1d4 Date: 2012-02-07 10:23 -0800 http://bitbucket.org/pypy/pypy/changeset/856baf37a1d4/ Log: add call_release_gil diff --git a/pypy/jit/backend/ppc/ppcgen/opassembler.py b/pypy/jit/backend/ppc/ppcgen/opassembler.py --- a/pypy/jit/backend/ppc/ppcgen/opassembler.py +++ b/pypy/jit/backend/ppc/ppcgen/opassembler.py @@ -1088,6 +1088,15 @@ emit_guard_call_release_gil = emit_guard_call_may_force + def call_release_gil(self, gcrootmap, save_registers): + # XXX don't know whether this is correct + # XXX use save_registers here + assert gcrootmap.is_shadow_stack + with Saved_Volatiles(self.mc): + self._emit_call(NO_FORCE_INDEX, self.releasegil_addr, + [], self._regalloc) + + class OpAssembler(IntOpAssembler, GuardOpAssembler, MiscOpAssembler, FieldOpAssembler, From noreply at buildbot.pypy.org Tue Feb 7 20:02:06 2012 From: noreply at buildbot.pypy.org (edelsohn) Date: Tue, 7 Feb 2012 20:02:06 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend-rpythonization: Delete declaration of GC_hidden_pointer. Message-ID: <20120207190206.758717107FA@wyvern.cs.uni-duesseldorf.de> Author: edelsohn Branch: ppc-jit-backend-rpythonization Changeset: r52202:d32609770965 Date: 2012-02-07 14:01 -0500 http://bitbucket.org/pypy/pypy/changeset/d32609770965/ Log: Delete declaration of GC_hidden_pointer. diff --git a/pypy/translator/c/gc.py b/pypy/translator/c/gc.py --- a/pypy/translator/c/gc.py +++ b/pypy/translator/c/gc.py @@ -47,8 +47,7 @@ return ExternalCompilationInfo( pre_include_bits=['/* using %s */' % (gct.__class__.__name__,), '#define MALLOC_ZERO_FILLED %d' % (gct.malloc_zero_filled,), - ], - post_include_bits=['typedef void *GC_hidden_pointer;'] + ] ) def get_prebuilt_hash(self, obj): From noreply at buildbot.pypy.org Tue Feb 7 20:04:07 2012 From: noreply at buildbot.pypy.org (edelsohn) Date: Tue, 7 Feb 2012 20:04:07 +0100 (CET) Subject: [pypy-commit] pypy default: Delete declaration of GC_hidden_pointer. 
Message-ID: <20120207190407.29CF27107FA@wyvern.cs.uni-duesseldorf.de> Author: edelsohn Branch: Changeset: r52203:e298ef4bc2fb Date: 2012-02-07 14:01 -0500 http://bitbucket.org/pypy/pypy/changeset/e298ef4bc2fb/ Log: Delete declaration of GC_hidden_pointer. diff --git a/pypy/translator/c/gc.py b/pypy/translator/c/gc.py --- a/pypy/translator/c/gc.py +++ b/pypy/translator/c/gc.py @@ -46,8 +46,7 @@ return ExternalCompilationInfo( pre_include_bits=['/* using %s */' % (gct.__class__.__name__,), '#define MALLOC_ZERO_FILLED %d' % (gct.malloc_zero_filled,), - ], - post_include_bits=['typedef void *GC_hidden_pointer;'] + ] ) def get_prebuilt_hash(self, obj): From noreply at buildbot.pypy.org Tue Feb 7 20:48:56 2012 From: noreply at buildbot.pypy.org (arigo) Date: Tue, 7 Feb 2012 20:48:56 +0100 (CET) Subject: [pypy-commit] pypy stm-gc: The proper way. Message-ID: <20120207194856.439AE82CE3@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-gc Changeset: r52204:5be32db92ca5 Date: 2012-02-07 20:30 +0100 http://bitbucket.org/pypy/pypy/changeset/5be32db92ca5/ Log: The proper way. diff --git a/pypy/translator/stm/src_stm/et.c b/pypy/translator/stm/src_stm/et.c --- a/pypy/translator/stm/src_stm/et.c +++ b/pypy/translator/stm/src_stm/et.c @@ -32,7 +32,7 @@ # include #else # undef assert -# define assert /* nothing */ +# define assert(x) /* nothing */ #endif /************************************************************/ From noreply at buildbot.pypy.org Tue Feb 7 20:48:58 2012 From: noreply at buildbot.pypy.org (arigo) Date: Tue, 7 Feb 2012 20:48:58 +0100 (CET) Subject: [pypy-commit] pypy stm-gc: Last missing piece in the C source: stm_copy_transactional_to_raw() Message-ID: <20120207194858.B2CA982CE3@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-gc Changeset: r52205:c1a57e0a3ac5 Date: 2012-02-07 20:48 +0100 http://bitbucket.org/pypy/pypy/changeset/c1a57e0a3ac5/ Log: Last missing piece in the C source: stm_copy_transactional_to_raw() diff --git a/pypy/rpython/memory/gc/stmgc.py b/pypy/rpython/memory/gc/stmgc.py --- a/pypy/rpython/memory/gc/stmgc.py +++ b/pypy/rpython/memory/gc/stmgc.py @@ -288,18 +288,15 @@ # Initialize the copy by doing an stm raw copy of the bytes stm_operations.stm_copy_transactional_to_raw(obj, localobj, size) # - # The raw copy done above includes all header fields. - # Check at least the gc flags of the copy. + # The raw copy done above does not include the header fields. hdr = self.header(obj) localhdr = self.header(localobj) GCFLAGS = (GCFLAG_GLOBAL | GCFLAG_WAS_COPIED) ll_assert(hdr.tid & GCFLAGS == GCFLAGS, "stm_write: bogus flags on source object") - ll_assert(localhdr.tid & GCFLAGS == GCFLAGS, - "stm_write: flags not copied!") # # Remove the GCFLAG_GLOBAL from the copy - localhdr.tid &= ~GCFLAG_GLOBAL + localhdr.tid = hdr.tid & ~GCFLAG_GLOBAL # # Set the 'version' field of the local copy to be a pointer # to the global obj. 
(The field is called 'version' because diff --git a/pypy/rpython/memory/gc/test/test_stmgc.py b/pypy/rpython/memory/gc/test/test_stmgc.py --- a/pypy/rpython/memory/gc/test/test_stmgc.py +++ b/pypy/rpython/memory/gc/test/test_stmgc.py @@ -106,10 +106,6 @@ locals()['stm_read_int%d' % _size] = _func def stm_copy_transactional_to_raw(self, srcobj, dstobj, size): - sizehdr = self._gc.gcheaderbuilder.size_gc_header - srchdr = srcobj - sizehdr - dsthdr = dstobj - sizehdr - llmemory.raw_memcopy(srchdr, dsthdr, sizehdr) llmemory.raw_memcopy(srcobj, dstobj, size) self._transactional_copies.append((srcobj, dstobj)) diff --git a/pypy/translator/stm/src_stm/et.c b/pypy/translator/stm/src_stm/et.c --- a/pypy/translator/stm/src_stm/et.c +++ b/pypy/translator/stm/src_stm/et.c @@ -430,6 +430,38 @@ } /* lazy/lazy read instrumentation */ +#define STM_DO_READ(READ_OPERATION) \ + retry: \ + /* read the orec BEFORE we read anything else */ \ + ovt = o->version; \ + CFENCE; \ + \ + /* this tx doesn't hold any locks, so if the lock for this addr is \ + held, there is contention. A lock is never hold for too long, \ + so spinloop until it is released. */ \ + if (IS_LOCKED_OR_NEWER(ovt, d->start_time)) \ + { \ + if (IS_LOCKED(ovt)) { \ + tx_spinloop(7); \ + goto retry; \ + } \ + /* else this location is too new, scale forward */ \ + owner_version_t newts = get_global_timestamp(d) & ~1; \ + validate_fast(d, 1); \ + d->start_time = newts; \ + } \ + \ + /* orec is unlocked, with ts <= start_time. read the location */ \ + READ_OPERATION; \ + \ + /* postvalidate AFTER reading addr: */ \ + CFENCE; \ + if (__builtin_expect(o->version != ovt, 0)) \ + goto retry; /* oups, try again */ \ + \ + oreclist_insert(&d->reads, (orec_t*)o); + + #define STM_READ_WORD(SIZE, TYPE) \ TYPE stm_read_int##SIZE(void* addr, long offset) \ { \ @@ -455,36 +487,7 @@ if (is_main_thread(d)) \ return *(TYPE *)(((char *)addr) + offset); \ \ - retry: \ - /* read the orec BEFORE we read anything else */ \ - ovt = o->version; \ - CFENCE; \ - \ - /* this tx doesn't hold any locks, so if the lock for this addr is \ - held, there is contention. A lock is never hold for too long, \ - so spinloop until it is released. */ \ - if (IS_LOCKED_OR_NEWER(ovt, d->start_time)) \ - { \ - if (IS_LOCKED(ovt)) { \ - tx_spinloop(7); \ - goto retry; \ - } \ - /* else this location is too new, scale forward */ \ - owner_version_t newts = get_global_timestamp(d) & ~1; \ - validate_fast(d, 1); \ - d->start_time = newts; \ - } \ - \ - /* orec is unlocked, with ts <= start_time. 
read the location */ \ - TYPE tmp = *(TYPE *)(((char *)addr) + offset); \ - \ - /* postvalidate AFTER reading addr: */ \ - CFENCE; \ - if (__builtin_expect(o->version != ovt, 0)) \ - goto retry; /* oups, try again */ \ - \ - oreclist_insert(&d->reads, (orec_t*)o); \ - \ + STM_DO_READ(TYPE tmp = *(TYPE *)(((char *)addr) + offset)); \ return tmp; \ } @@ -493,6 +496,22 @@ STM_READ_WORD(4, int) STM_READ_WORD(8, long long) +void stm_copy_transactional_to_raw(void *src, void *dst, long size) +{ + struct tx_descriptor *d = thread_descriptor; + volatile orec_t *o = get_orec(src); + owner_version_t ovt; + + assert(!is_main_thread(d)); + + /* don't copy the header */ + src = ((char *)src) + sizeof(orec_t); + dst = ((char *)dst) + sizeof(orec_t); + size -= sizeof(orec_t); + + STM_DO_READ(memcpy(dst, src, size)); +} + static struct tx_descriptor *descriptor_init(_Bool is_main_thread) { diff --git a/pypy/translator/stm/test/test_stmgcintf.py b/pypy/translator/stm/test/test_stmgcintf.py --- a/pypy/translator/stm/test/test_stmgcintf.py +++ b/pypy/translator/stm/test/test_stmgcintf.py @@ -144,7 +144,32 @@ def test_stm_size_getter(self): def getsize(addr): - xxx + dont_call_me getter = llhelper(GETSIZE, getsize) stm_operations.setup_size_getter(getter) - # just tests that the function is really defined + # ^^^ just tests that the function is really defined + + def test_stm_copy_transactional_to_raw(self): + # doesn't test STM behavior, but just that it appears to work + s1 = lltype.malloc(S1, flavor='raw') + s1.hdr.tid = stmgc.GCFLAG_GLOBAL + s1.hdr.version = llmemory.NULL + s1.x = 909 + s1.y = 808 + s2 = lltype.malloc(S1, flavor='raw') + s2.hdr.tid = -42 # non-initialized + s2.x = -42 # non-initialized + s2.y = -42 # non-initialized + # + s1_adr = llmemory.cast_ptr_to_adr(s1) + s2_adr = llmemory.cast_ptr_to_adr(s2) + size = llmemory.sizeof(S1) + stm_operations.stm_copy_transactional_to_raw(s1_adr, s2_adr, size) + # + assert s2.hdr.tid == -42 # not touched + assert s2.x == 909 + assert s2.y == 808 + # + lltype.free(s2, flavor='raw') + lltype.free(s1, flavor='raw') + test_stm_copy_transactional_to_raw.in_main_thread = False From noreply at buildbot.pypy.org Tue Feb 7 21:40:27 2012 From: noreply at buildbot.pypy.org (mattip) Date: Tue, 7 Feb 2012 21:40:27 +0100 (CET) Subject: [pypy-commit] pypy win32-cleanup: close branch for merge Message-ID: <20120207204027.4254382CE3@wyvern.cs.uni-duesseldorf.de> Author: mattip Branch: win32-cleanup Changeset: r52206:0c7960a1a5bb Date: 2012-02-07 22:37 +0200 http://bitbucket.org/pypy/pypy/changeset/0c7960a1a5bb/ Log: close branch for merge From noreply at buildbot.pypy.org Tue Feb 7 21:40:28 2012 From: noreply at buildbot.pypy.org (mattip) Date: Tue, 7 Feb 2012 21:40:28 +0100 (CET) Subject: [pypy-commit] pypy default: merge win32-cleanup to default Message-ID: <20120207204028.E1E4B82CE3@wyvern.cs.uni-duesseldorf.de> Author: mattip Branch: Changeset: r52207:5b7ecbf87681 Date: 2012-02-07 22:39 +0200 http://bitbucket.org/pypy/pypy/changeset/5b7ecbf87681/ Log: merge win32-cleanup to default diff --git a/py/_io/terminalwriter.py b/py/_io/terminalwriter.py --- a/py/_io/terminalwriter.py +++ b/py/_io/terminalwriter.py @@ -271,16 +271,24 @@ ('srWindow', SMALL_RECT), ('dwMaximumWindowSize', COORD)] + _GetStdHandle = ctypes.windll.kernel32.GetStdHandle + _GetStdHandle.argtypes = [wintypes.DWORD] + _GetStdHandle.restype = wintypes.HANDLE def GetStdHandle(kind): - return ctypes.windll.kernel32.GetStdHandle(kind) + return _GetStdHandle(kind) - SetConsoleTextAttribute = \ - 
ctypes.windll.kernel32.SetConsoleTextAttribute - + SetConsoleTextAttribute = ctypes.windll.kernel32.SetConsoleTextAttribute + SetConsoleTextAttribute.argtypes = [wintypes.HANDLE, wintypes.WORD] + SetConsoleTextAttribute.restype = wintypes.BOOL + + _GetConsoleScreenBufferInfo = \ + ctypes.windll.kernel32.GetConsoleScreenBufferInfo + _GetConsoleScreenBufferInfo.argtypes = [wintypes.HANDLE, + ctypes.POINTER(CONSOLE_SCREEN_BUFFER_INFO)] + _GetConsoleScreenBufferInfo.restype = wintypes.BOOL def GetConsoleInfo(handle): info = CONSOLE_SCREEN_BUFFER_INFO() - ctypes.windll.kernel32.GetConsoleScreenBufferInfo(\ - handle, ctypes.byref(info)) + _GetConsoleScreenBufferInfo(handle, ctypes.byref(info)) return info def _getdimensions(): diff --git a/pypy/interpreter/baseobjspace.py b/pypy/interpreter/baseobjspace.py --- a/pypy/interpreter/baseobjspace.py +++ b/pypy/interpreter/baseobjspace.py @@ -1340,6 +1340,15 @@ def unicode_w(self, w_obj): return w_obj.unicode_w(self) + def unicode0_w(self, w_obj): + "Like unicode_w, but rejects strings with NUL bytes." + from pypy.rlib import rstring + result = w_obj.unicode_w(self) + if u'\x00' in result: + raise OperationError(self.w_TypeError, self.wrap( + 'argument must be a unicode string without NUL characters')) + return rstring.assert_str0(result) + def realunicode_w(self, w_obj): # Like unicode_w, but only works if w_obj is really of type # 'unicode'. @@ -1638,6 +1647,9 @@ 'UnicodeEncodeError', 'UnicodeDecodeError', ] + +if sys.platform.startswith("win"): + ObjSpace.ExceptionTable += ['WindowsError'] ## Irregular part of the interface: # diff --git a/pypy/interpreter/test/test_objspace.py b/pypy/interpreter/test/test_objspace.py --- a/pypy/interpreter/test/test_objspace.py +++ b/pypy/interpreter/test/test_objspace.py @@ -178,6 +178,14 @@ res = self.space.interp_w(Function, w(None), can_be_None=True) assert res is None + def test_str0_w(self): + space = self.space + w = space.wrap + assert space.str0_w(w("123")) == "123" + exc = space.raises_w(space.w_TypeError, space.str0_w, w("123\x004")) + assert space.unicode0_w(w(u"123")) == u"123" + exc = space.raises_w(space.w_TypeError, space.unicode0_w, w(u"123\x004")) + def test_getindex_w(self): w_instance1 = self.space.appexec([], """(): class X(object): diff --git a/pypy/module/_ffi/test/test__ffi.py b/pypy/module/_ffi/test/test__ffi.py --- a/pypy/module/_ffi/test/test__ffi.py +++ b/pypy/module/_ffi/test/test__ffi.py @@ -190,6 +190,7 @@ def test_convert_strings_to_char_p(self): """ + DLLEXPORT long mystrlen(char* s) { long len = 0; @@ -215,6 +216,7 @@ def test_convert_unicode_to_unichar_p(self): """ #include + DLLEXPORT long mystrlen_u(wchar_t* s) { long len = 0; @@ -241,6 +243,7 @@ def test_keepalive_temp_buffer(self): """ + DLLEXPORT char* do_nothing(char* s) { return s; @@ -525,5 +528,7 @@ from _ffi import CDLL, types libfoo = CDLL(self.libfoo_name) raises(AttributeError, "libfoo.getfunc('I_do_not_exist', [], types.void)") + if self.iswin32: + skip("unix specific") libnone = CDLL(None) raises(AttributeError, "libnone.getfunc('I_do_not_exist', [], types.void)") diff --git a/pypy/module/_file/test/test_file.py b/pypy/module/_file/test/test_file.py --- a/pypy/module/_file/test/test_file.py +++ b/pypy/module/_file/test/test_file.py @@ -265,6 +265,13 @@ if option.runappdirect: py.test.skip("works with internals of _file impl on py.py") + import platform + if platform.system() == 'Windows': + # XXX This test crashes until someone implements something like + # XXX verify_fd from + # XXX 
http://hg.python.org/cpython/file/80ddbd822227/Modules/posixmodule.c#l434 + # XXX and adds it to fopen + assert False state = [0] def read(fd, n=None): diff --git a/pypy/module/posix/interp_posix.py b/pypy/module/posix/interp_posix.py --- a/pypy/module/posix/interp_posix.py +++ b/pypy/module/posix/interp_posix.py @@ -48,7 +48,7 @@ return fsencode_w(self.space, self.w_obj) def as_unicode(self): - return self.space.unicode_w(self.w_obj) + return self.space.unicode0_w(self.w_obj) class FileDecoder(object): def __init__(self, space, w_obj): @@ -62,7 +62,7 @@ space = self.space w_unicode = space.call_method(self.w_obj, 'decode', getfilesystemencoding(space)) - return space.unicode_w(w_unicode) + return space.unicode0_w(w_unicode) @specialize.memo() def dispatch_filename(func, tag=0): diff --git a/pypy/module/posix/test/test_ztranslation.py b/pypy/module/posix/test/test_ztranslation.py new file mode 100644 --- /dev/null +++ b/pypy/module/posix/test/test_ztranslation.py @@ -0,0 +1,4 @@ +from pypy.objspace.fake.checkmodule import checkmodule + +def test_posix_translates(): + checkmodule('posix') \ No newline at end of file diff --git a/pypy/module/zipimport/interp_zipimport.py b/pypy/module/zipimport/interp_zipimport.py --- a/pypy/module/zipimport/interp_zipimport.py +++ b/pypy/module/zipimport/interp_zipimport.py @@ -123,7 +123,9 @@ self.prefix = prefix def getprefix(self, space): - return space.wrap(self.prefix) + if ZIPSEP == os.path.sep: + return space.wrap(self.prefix) + return space.wrap(self.prefix.replace(ZIPSEP, os.path.sep)) def _find_relative_path(self, filename): if filename.startswith(self.filename): @@ -381,7 +383,7 @@ prefix = name[len(filename):] if prefix.startswith(os.path.sep) or prefix.startswith(ZIPSEP): prefix = prefix[1:] - if prefix and not prefix.endswith(ZIPSEP): + if prefix and not prefix.endswith(ZIPSEP) and not prefix.endswith(os.path.sep): prefix += ZIPSEP w_result = space.wrap(W_ZipImporter(space, name, filename, zip_file, prefix)) zip_cache.set(filename, w_result) diff --git a/pypy/module/zipimport/test/test_undocumented.py b/pypy/module/zipimport/test/test_undocumented.py --- a/pypy/module/zipimport/test/test_undocumented.py +++ b/pypy/module/zipimport/test/test_undocumented.py @@ -119,7 +119,7 @@ zip_importer = zipimport.zipimporter(path) assert isinstance(zip_importer, zipimport.zipimporter) assert zip_importer.archive == zip_path - assert zip_importer.prefix == prefix + assert zip_importer.prefix == prefix.replace('/', os.path.sep) assert zip_path in zipimport._zip_directory_cache finally: self.cleanup_zipfile(self.created_paths) diff --git a/pypy/module/zipimport/test/test_zipimport.py b/pypy/module/zipimport/test/test_zipimport.py --- a/pypy/module/zipimport/test/test_zipimport.py +++ b/pypy/module/zipimport/test/test_zipimport.py @@ -15,7 +15,7 @@ cpy's regression tests """ compression = ZIP_STORED - pathsep = '/' + pathsep = os.path.sep def make_pyc(cls, space, co, mtime): data = marshal.dumps(co) @@ -129,7 +129,7 @@ self.writefile('sub/__init__.py', '') self.writefile('sub/yy.py', '') from zipimport import _zip_directory_cache, zipimporter - sub_importer = zipimporter(self.zipfile + '/sub') + sub_importer = zipimporter(self.zipfile + os.path.sep + 'sub') main_importer = zipimporter(self.zipfile) assert main_importer is not sub_importer diff --git a/pypy/rlib/ropenssl.py b/pypy/rlib/ropenssl.py --- a/pypy/rlib/ropenssl.py +++ b/pypy/rlib/ropenssl.py @@ -54,6 +54,7 @@ ASN1_STRING = lltype.Ptr(lltype.ForwardReference()) ASN1_ITEM = rffi.COpaquePtr('ASN1_ITEM') 
+ASN1_ITEM_EXP = lltype.Ptr(lltype.FuncType([], ASN1_ITEM)) X509_NAME = rffi.COpaquePtr('X509_NAME') class CConfig: @@ -101,12 +102,11 @@ X509_extension_st = rffi_platform.Struct( 'struct X509_extension_st', [('value', ASN1_STRING)]) - ASN1_ITEM_EXP = lltype.FuncType([], ASN1_ITEM) X509V3_EXT_D2I = lltype.FuncType([rffi.VOIDP, rffi.CCHARPP, rffi.LONG], rffi.VOIDP) v3_ext_method = rffi_platform.Struct( 'struct v3_ext_method', - [('it', lltype.Ptr(ASN1_ITEM_EXP)), + [('it', ASN1_ITEM_EXP), ('d2i', lltype.Ptr(X509V3_EXT_D2I))]) GENERAL_NAME_st = rffi_platform.Struct( 'struct GENERAL_NAME_st', @@ -118,6 +118,8 @@ ('block_size', rffi.INT)]) EVP_MD_SIZE = rffi_platform.SizeOf('EVP_MD') EVP_MD_CTX_SIZE = rffi_platform.SizeOf('EVP_MD_CTX') + OPENSSL_EXPORT_VAR_AS_FUNCTION = rffi_platform.Defined( + "OPENSSL_EXPORT_VAR_AS_FUNCTION") for k, v in rffi_platform.configure(CConfig).items(): @@ -224,7 +226,10 @@ ssl_external('i2a_ASN1_INTEGER', [BIO, ASN1_INTEGER], rffi.INT) ssl_external('ASN1_item_d2i', [rffi.VOIDP, rffi.CCHARPP, rffi.LONG, ASN1_ITEM], rffi.VOIDP) -ssl_external('ASN1_ITEM_ptr', [rffi.VOIDP], ASN1_ITEM, macro=True) +if OPENSSL_EXPORT_VAR_AS_FUNCTION: + ssl_external('ASN1_ITEM_ptr', [ASN1_ITEM_EXP], ASN1_ITEM, macro=True) +else: + ssl_external('ASN1_ITEM_ptr', [rffi.VOIDP], ASN1_ITEM, macro=True) ssl_external('sk_GENERAL_NAME_num', [GENERAL_NAMES], rffi.INT, macro=True) diff --git a/pypy/rpython/module/ll_os.py b/pypy/rpython/module/ll_os.py --- a/pypy/rpython/module/ll_os.py +++ b/pypy/rpython/module/ll_os.py @@ -43,7 +43,7 @@ arglist = ['arg%d' % (i,) for i in range(len(signature))] transformed_arglist = arglist[:] for i, arg in enumerate(signature): - if arg is unicode: + if arg in (unicode, unicode0): transformed_arglist[i] = transformed_arglist[i] + '.as_unicode()' args = ', '.join(arglist) @@ -67,7 +67,7 @@ exec source.compile() in miniglobals new_func = miniglobals[func_name] specialized_args = [i for i in range(len(signature)) - if signature[i] in (unicode, None)] + if signature[i] in (unicode, unicode0, None)] new_func = specialize.argtype(*specialized_args)(new_func) # Monkeypatch the function in pypy.rlib.rposix From noreply at buildbot.pypy.org Tue Feb 7 23:53:17 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Tue, 7 Feb 2012 23:53:17 +0100 (CET) Subject: [pypy-commit] pypy py3k: StringBuilder now build a (unicode) str. Message-ID: <20120207225317.8A09082CE6@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: py3k Changeset: r52210:108cb693adaa Date: 2012-02-07 23:51 +0100 http://bitbucket.org/pypy/pypy/changeset/108cb693adaa/ Log: StringBuilder now build a (unicode) str. BytesBuilder returns bytes. 
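For illustration only (not part of the changeset below): a minimal usage sketch of the two builders after this rename, assuming a PyPy py3k interpreter where the applevel __pypy__.builders module is importable; the class and method names are taken from the diff that follows.

    from __pypy__.builders import StringBuilder, BytesBuilder

    sb = StringBuilder(16)            # optional size hint preallocates the buffer
    sb.append("abc")
    sb.append_slice("0123456789", 1, 4)
    assert sb.build() == "abc123"     # build() now returns a (unicode) str

    bb = BytesBuilder()
    bb.append(b"abc")
    bb.append(b"123")
    assert bb.build() == b"abc123"    # BytesBuilder builds bytes

After build() has been called, further append() calls raise ValueError, as exercised by the tests in the diff.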
diff --git a/pypy/module/__pypy__/__init__.py b/pypy/module/__pypy__/__init__.py --- a/pypy/module/__pypy__/__init__.py +++ b/pypy/module/__pypy__/__init__.py @@ -9,7 +9,7 @@ interpleveldefs = { "StringBuilder": "interp_builders.W_StringBuilder", - "UnicodeBuilder": "interp_builders.W_UnicodeBuilder", + "BytesBuilder": "interp_builders.W_BytesBuilder", } class Module(MixedModule): diff --git a/pypy/module/__pypy__/interp_builders.py b/pypy/module/__pypy__/interp_builders.py --- a/pypy/module/__pypy__/interp_builders.py +++ b/pypy/module/__pypy__/interp_builders.py @@ -64,5 +64,5 @@ W_Builder.typedef.acceptable_as_base_class = False return W_Builder -W_StringBuilder = create_builder("StringBuilder", str, StringBuilder) -W_UnicodeBuilder = create_builder("UnicodeBuilder", unicode, UnicodeBuilder) +W_StringBuilder = create_builder("StringBuilder", unicode, UnicodeBuilder) +W_BytesBuilder = create_builder("BytesBuilder", str, StringBuilder) diff --git a/pypy/module/__pypy__/test/test_builders.py b/pypy/module/__pypy__/test/test_builders.py --- a/pypy/module/__pypy__/test/test_builders.py +++ b/pypy/module/__pypy__/test/test_builders.py @@ -6,8 +6,8 @@ cls.space = gettestobjspace(usemodules=['__pypy__']) def test_simple(self): - from __pypy__.builders import UnicodeBuilder - b = UnicodeBuilder() + from __pypy__.builders import StringBuilder + b = StringBuilder() b.append("abc") b.append("123") b.append("1") @@ -17,16 +17,16 @@ raises(ValueError, b.append, "123") def test_preallocate(self): - from __pypy__.builders import UnicodeBuilder - b = UnicodeBuilder(10) + from __pypy__.builders import StringBuilder + b = StringBuilder(10) b.append("abc") b.append("123") s = b.build() assert s == "abc123" def test_append_slice(self): - from __pypy__.builders import UnicodeBuilder - b = UnicodeBuilder() + from __pypy__.builders import StringBuilder + b = StringBuilder() b.append_slice("abcdefgh", 2, 5) raises(ValueError, b.append_slice, "1", 2, 1) s = b.build() @@ -34,8 +34,8 @@ raises(ValueError, b.append_slice, "abc", 1, 2) def test_stringbuilder(self): - from __pypy__.builders import StringBuilder - b = StringBuilder() + from __pypy__.builders import BytesBuilder + b = BytesBuilder() b.append(b"abc") b.append(b"123") assert len(b) == 6 From noreply at buildbot.pypy.org Tue Feb 7 23:53:16 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Tue, 7 Feb 2012 23:53:16 +0100 (CET) Subject: [pypy-commit] pypy py3k: Fix most tests in module/__pypy__ Message-ID: <20120207225316.4F1F582CE4@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: py3k Changeset: r52209:0a4739cd0f36 Date: 2012-02-07 23:47 +0100 http://bitbucket.org/pypy/pypy/changeset/0a4739cd0f36/ Log: Fix most tests in module/__pypy__ diff --git a/pypy/module/__pypy__/interp_builders.py b/pypy/module/__pypy__/interp_builders.py --- a/pypy/module/__pypy__/interp_builders.py +++ b/pypy/module/__pypy__/interp_builders.py @@ -38,9 +38,12 @@ def descr_build(self, space): self._check_done(space) - w_s = space.wrap(self.builder.build()) + s = self.builder.build() self.builder = None - return w_s + if strtype is str: + return space.wrapbytes(s) + else: + return space.wrap(s) def descr_len(self, space): if self.builder is None: diff --git a/pypy/module/__pypy__/test/test_builders.py b/pypy/module/__pypy__/test/test_builders.py --- a/pypy/module/__pypy__/test/test_builders.py +++ b/pypy/module/__pypy__/test/test_builders.py @@ -8,39 +8,39 @@ def test_simple(self): from __pypy__.builders import UnicodeBuilder b = UnicodeBuilder() - 
b.append(u"abc") - b.append(u"123") - b.append(u"1") + b.append("abc") + b.append("123") + b.append("1") s = b.build() - assert s == u"abc1231" + assert s == "abc1231" raises(ValueError, b.build) - raises(ValueError, b.append, u"123") + raises(ValueError, b.append, "123") def test_preallocate(self): from __pypy__.builders import UnicodeBuilder b = UnicodeBuilder(10) - b.append(u"abc") - b.append(u"123") + b.append("abc") + b.append("123") s = b.build() - assert s == u"abc123" + assert s == "abc123" def test_append_slice(self): from __pypy__.builders import UnicodeBuilder b = UnicodeBuilder() - b.append_slice(u"abcdefgh", 2, 5) - raises(ValueError, b.append_slice, u"1", 2, 1) + b.append_slice("abcdefgh", 2, 5) + raises(ValueError, b.append_slice, "1", 2, 1) s = b.build() assert s == "cde" - raises(ValueError, b.append_slice, u"abc", 1, 2) + raises(ValueError, b.append_slice, "abc", 1, 2) def test_stringbuilder(self): from __pypy__.builders import StringBuilder b = StringBuilder() - b.append("abc") - b.append("123") + b.append(b"abc") + b.append(b"123") assert len(b) == 6 - b.append("you and me") + b.append(b"you and me") s = b.build() raises(ValueError, len, b) - assert s == "abc123you and me" + assert s == b"abc123you and me" raises(ValueError, b.build) diff --git a/pypy/module/__pypy__/test/test_bytebuffer.py b/pypy/module/__pypy__/test/test_bytebuffer.py --- a/pypy/module/__pypy__/test/test_bytebuffer.py +++ b/pypy/module/__pypy__/test/test_bytebuffer.py @@ -10,10 +10,10 @@ b = bytebuffer(12) assert isinstance(b, buffer) assert len(b) == 12 - b[3] = '!' - b[5] = '?' - assert b[2:7] == '\x00!\x00?\x00' - b[9:] = '+-*' - assert b[-1] == '*' - assert b[-2] == '-' - assert b[-3] == '+' + b[3] = b'!' + b[5] = b'?' + assert b[2:7] == b'\x00!\x00?\x00' + b[9:] = b'+-*' + assert b[-1] == b'*' + assert b[-2] == b'-' + assert b[-3] == b'+' diff --git a/pypy/module/__pypy__/test/test_identitydict.py b/pypy/module/__pypy__/test/test_identitydict.py --- a/pypy/module/__pypy__/test/test_identitydict.py +++ b/pypy/module/__pypy__/test/test_identitydict.py @@ -10,10 +10,9 @@ d = identity_dict() d[0] = 1 d[0.0] = 2 - d[long(0)] = 3 assert d - assert len(d) == 3 + assert len(d) == 2 del d[0] d.clear() assert not d diff --git a/pypy/module/__pypy__/test/test_special.py b/pypy/module/__pypy__/test/test_special.py --- a/pypy/module/__pypy__/test/test_special.py +++ b/pypy/module/__pypy__/test/test_special.py @@ -5,7 +5,7 @@ def setup_class(cls): if option.runappdirect: py.test.skip("does not make sense on pypy-c") - cls.space = gettestobjspace(**{"objspace.usemodules.select": False, "objspace.std.withrangelist": True}) + cls.space = gettestobjspace(**{"objspace.usemodules.select": False}) def test__isfake(self): from __pypy__ import isfake @@ -32,9 +32,9 @@ assert my.b() == () assert A.a(my) == (my,) assert A.b(my) == (my,) - assert A.a.im_func(my) == (my,) + assert not hasattr(A.a, 'im_func') assert not hasattr(A.b, 'im_func') - assert A.a is not A.__dict__['a'] + assert A.a is A.__dict__['a'] assert A.b is A.__dict__['b'] def test_lookup_special(self): From noreply at buildbot.pypy.org Tue Feb 7 23:53:15 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Tue, 7 Feb 2012 23:53:15 +0100 (CET) Subject: [pypy-commit] pypy py3k: Fix most failures in posix tests Message-ID: <20120207225315.12A3282CE3@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: py3k Changeset: r52208:e86aec0bdbb3 Date: 2012-02-07 23:38 +0100 http://bitbucket.org/pypy/pypy/changeset/e86aec0bdbb3/ Log: Fix most failures 
in posix tests diff --git a/pypy/module/posix/test/test_posix2.py b/pypy/module/posix/test/test_posix2.py --- a/pypy/module/posix/test/test_posix2.py +++ b/pypy/module/posix/test/test_posix2.py @@ -93,14 +93,14 @@ def test_some_posix_basic_operation(self): path = self.path posix = self.posix - fd = posix.open(path, posix.O_RDONLY, 0777) + fd = posix.open(path, posix.O_RDONLY, 0o777) fd2 = posix.dup(fd) assert not posix.isatty(fd2) s = posix.read(fd, 1) - assert s == 't' + assert s == b't' posix.lseek(fd, 5, 0) s = posix.read(fd, 1) - assert s == 'i' + assert s == b'i' st = posix.fstat(fd) posix.close(fd2) posix.close(fd) @@ -147,7 +147,7 @@ posix.stat_float_times(False) st = posix.stat(path) - assert isinstance(st.st_mtime, (int, long)) + assert isinstance(st.st_mtime, int) assert st[7] == st.st_atime finally: posix.stat_float_times(current) @@ -170,7 +170,7 @@ for fn in [self.posix.stat, self.posix.lstat]: try: fn("nonexistentdir/nonexistentfile") - except OSError, e: + except OSError as e: assert e.errno == errno.ENOENT assert e.filename == "nonexistentdir/nonexistentfile" # On Windows, when the parent directory does not exist, @@ -183,9 +183,9 @@ def test_pickle(self): import pickle, os st = self.posix.stat(os.curdir) - print type(st).__module__ + print(type(st).__module__) s = pickle.dumps(st) - print repr(s) + print(repr(s)) new = pickle.loads(s) assert new == st assert type(new) is type(st) @@ -194,7 +194,7 @@ posix = self.posix try: posix.open('qowieuqwoeiu', 0, 0) - except OSError, e: + except OSError as e: assert e.filename == 'qowieuqwoeiu' else: assert 0 @@ -208,7 +208,7 @@ func = getattr(self.posix, fname) try: func('qowieuqw/oeiu') - except OSError, e: + except OSError as e: assert e.filename == 'qowieuqw/oeiu' else: assert 0 @@ -216,7 +216,7 @@ def test_chmod_exception(self): try: self.posix.chmod('qowieuqw/oeiu', 0) - except OSError, e: + except OSError as e: assert e.filename == 'qowieuqw/oeiu' else: assert 0 @@ -225,7 +225,7 @@ if hasattr(self.posix, 'chown'): try: self.posix.chown('qowieuqw/oeiu', 0, 0) - except OSError, e: + except OSError as e: assert e.filename == 'qowieuqw/oeiu' else: assert 0 @@ -234,7 +234,7 @@ for arg in [None, (0, 0)]: try: self.posix.utime('qowieuqw/oeiu', arg) - except OSError, e: + except OSError as e: assert e.filename == 'qowieuqw/oeiu' else: assert 0 @@ -254,7 +254,7 @@ ex(self.posix.lseek, UNUSEDFD, 123, 0) #apparently not posix-required: ex(self.posix.isatty, UNUSEDFD) ex(self.posix.read, UNUSEDFD, 123) - ex(self.posix.write, UNUSEDFD, "x") + ex(self.posix.write, UNUSEDFD, b"x") ex(self.posix.close, UNUSEDFD) #UMPF cpython raises IOError ex(self.posix.ftruncate, UNUSEDFD, 123) ex(self.posix.fstat, UNUSEDFD) @@ -262,37 +262,6 @@ # how can getcwd() raise? 
ex(self.posix.dup, UNUSEDFD) - def test_fdopen(self): - import errno - path = self.path - posix = self.posix - fd = posix.open(path, posix.O_RDONLY, 0777) - f = posix.fdopen(fd, "r") - f.close() - - # Ensure that fcntl is not faked - try: - import fcntl - except ImportError: - pass - else: - assert fcntl.__file__.endswith('pypy/module/fcntl') - exc = raises(OSError, posix.fdopen, fd) - assert exc.value.errno == errno.EBADF - - def test_fdopen_hackedbuiltins(self): - "Same test, with __builtins__.file removed" - _file = __builtins__.file - __builtins__.file = None - try: - path = self.path - posix = self.posix - fd = posix.open(path, posix.O_RDONLY, 0777) - f = posix.fdopen(fd, "r") - f.close() - finally: - __builtins__.file = _file - def test_getcwd(self): assert isinstance(self.posix.getcwd(), str) assert isinstance(self.posix.getcwdu(), unicode) @@ -314,8 +283,7 @@ posix = self.posix result = posix.listdir(unicode_dir) result.sort() - assert result == [u'somefile'] - assert type(result[0]) is unicode + assert result == ['somefile'] def test_access(self): pdir = self.pdir + '/file1' @@ -364,9 +332,9 @@ master_fd, slave_fd = os.openpty() assert isinstance(master_fd, int) assert isinstance(slave_fd, int) - os.write(slave_fd, 'x\n') + os.write(slave_fd, b'x\n') data = os.read(master_fd, 100) - assert data.startswith('x') + assert data.startswith(b'x') if hasattr(__import__(os.name), "forkpty"): def test_forkpty(self): @@ -379,11 +347,11 @@ assert isinstance(master_fd, int) if childpid == 0: data = os.read(0, 100) - if data.startswith('abc'): + if data.startswith(b'abc'): os._exit(42) else: os._exit(43) - os.write(master_fd, 'abc\n') + os.write(master_fd, b'abc\n') _, status = os.waitpid(childpid, 0) assert status >> 8 == 42 @@ -412,7 +380,7 @@ for n in 3, [3, "a"]: try: os.execv("xxx", n) - except TypeError,t: + except TypeError as t: assert str(t) == "execv() arg 2 must be an iterable of strings" else: py.test.fail("didn't raise") @@ -423,15 +391,15 @@ if not hasattr(os, "fork"): skip("Need fork() to test execv()") try: - output = u"caf\xe9 \u1234\n".encode(sys.getfilesystemencoding()) + output = "caf\xe9 \u1234\n".encode(sys.getfilesystemencoding()) except UnicodeEncodeError: skip("encoding not good enough") pid = os.fork() if pid == 0: - os.execv(u"/bin/sh", ["sh", "-c", - u"echo caf\xe9 \u1234 > onefile"]) + os.execv("/bin/sh", ["sh", "-c", + "echo caf\xe9 \u1234 > onefile"]) os.waitpid(pid, 0) - assert open("onefile").read() == output + assert open("onefile", "rb").read() == output os.unlink("onefile") def test_execve(self): @@ -451,16 +419,16 @@ if not hasattr(os, "fork"): skip("Need fork() to test execve()") try: - output = u"caf\xe9 \u1234\n".encode(sys.getfilesystemencoding()) + output = "caf\xe9 \u1234\n".encode(sys.getfilesystemencoding()) except UnicodeEncodeError: skip("encoding not good enough") pid = os.fork() if pid == 0: - os.execve(u"/bin/sh", ["sh", "-c", - u"echo caf\xe9 \u1234 > onefile"], + os.execve("/bin/sh", ["sh", "-c", + "echo caf\xe9 \u1234 > onefile"], {'ddd': 'xxx'}) os.waitpid(pid, 0) - assert open("onefile").read() == output + assert open("onefile", "rb").read() == output os.unlink("onefile") pass # <- please, inspect.getsource(), don't crash @@ -468,7 +436,7 @@ def test_spawnv(self): os = self.posix import sys - print self.python + print(self.python) ret = os.spawnv(os.P_WAIT, self.python, ['python', '-c', 'raise(SystemExit(42))']) assert ret == 42 @@ -477,7 +445,7 @@ def test_spawnve(self): os = self.posix import sys - print self.python + print(self.python) 
ret = os.spawnve(os.P_WAIT, self.python, ['python', '-c', "raise(SystemExit(int(__import__('os').environ['FOOBAR'])))"], @@ -654,7 +622,6 @@ try: fd = f.fileno() os.fsync(fd) - os.fsync(long(fd)) os.fsync(f) # <- should also work with a file, or anything finally: # with a fileno() method f.close() @@ -701,49 +668,48 @@ def test_largefile(self): os = self.posix - fd = os.open(self.path2 + 'test_largefile', os.O_RDWR | os.O_CREAT, 0666) - os.ftruncate(fd, 10000000000L) - res = os.lseek(fd, 9900000000L, 0) - assert res == 9900000000L - res = os.lseek(fd, -5000000000L, 1) - assert res == 4900000000L - res = os.lseek(fd, -5200000000L, 2) - assert res == 4800000000L + fd = os.open(self.path2 + 'test_largefile', + os.O_RDWR | os.O_CREAT, 0o666) + os.ftruncate(fd, 10000000000) + res = os.lseek(fd, 9900000000, 0) + assert res == 9900000000 + res = os.lseek(fd, -5000000000, 1) + assert res == 4900000000 + res = os.lseek(fd, -5200000000, 2) + assert res == 4800000000 os.close(fd) st = os.stat(self.path2 + 'test_largefile') - assert st.st_size == 10000000000L + assert st.st_size == 10000000000 test_largefile.need_sparse_files = True def test_write_buffer(self): os = self.posix - fd = os.open(self.path2 + 'test_write_buffer', os.O_RDWR | os.O_CREAT, 0666) + fd = os.open(self.path2 + 'test_write_buffer', + os.O_RDWR | os.O_CREAT, 0o666) def writeall(s): while s: count = os.write(fd, s) assert count > 0 s = s[count:] - writeall('hello, ') - writeall(buffer('world!\n')) + writeall(b'hello, ') + writeall(buffer(b'world!\n')) res = os.lseek(fd, 0, 0) assert res == 0 - data = '' + data = b'' while True: s = os.read(fd, 100) if not s: break data += s - assert data == 'hello, world!\n' + assert data == b'hello, world!\n' os.close(fd) def test_write_unicode(self): os = self.posix - fd = os.open(self.path2 + 'test_write_unicode', os.O_RDWR | os.O_CREAT, 0666) - os.write(fd, u'X') - raises(UnicodeEncodeError, os.write, fd, u'\xe9') - os.lseek(fd, 0, 0) - data = os.read(fd, 2) - assert data == 'X' + fd = os.open(self.path2 + 'test_write_unicode', + os.O_RDWR | os.O_CREAT, 0o666) + raises(TypeError, os.write, fd, 'X') os.close(fd) if hasattr(__import__(os.name), "fork"): @@ -762,7 +728,7 @@ os = self.posix if not hasattr(os, 'closerange'): skip("missing os.closerange()") - fds = [os.open(self.path + str(i), os.O_CREAT|os.O_WRONLY, 0777) + fds = [os.open(self.path + str(i), os.O_CREAT|os.O_WRONLY, 0o777) for i in range(15)] fds.sort() start = fds.pop() @@ -796,7 +762,7 @@ if hasattr(os, 'mkfifo'): def test_mkfifo(self): os = self.posix - os.mkfifo(self.path2 + 'test_mkfifo', 0666) + os.mkfifo(self.path2 + 'test_mkfifo', 0o666) st = os.lstat(self.path2 + 'test_mkfifo') import stat assert stat.S_ISFIFO(st.st_mode) @@ -809,12 +775,12 @@ try: # not very useful: os.mknod() without specifying 'mode' os.mknod(self.path2 + 'test_mknod-1') - except OSError, e: + except OSError as e: skip("os.mknod(): got %r" % (e,)) st = os.lstat(self.path2 + 'test_mknod-1') assert stat.S_ISREG(st.st_mode) # os.mknod() with S_IFIFO - os.mknod(self.path2 + 'test_mknod-2', 0600 | stat.S_IFIFO) + os.mknod(self.path2 + 'test_mknod-2', 0o600 | stat.S_IFIFO) st = os.lstat(self.path2 + 'test_mknod-2') assert stat.S_ISFIFO(st.st_mode) @@ -825,9 +791,9 @@ if hasattr(os.lstat('.'), 'st_rdev'): import stat try: - os.mknod(self.path2 + 'test_mknod-3', 0600 | stat.S_IFCHR, + os.mknod(self.path2 + 'test_mknod-3', 0o600 | stat.S_IFCHR, 0x105) - except OSError, e: + except OSError as e: skip("os.mknod() with S_IFCHR: got %r" % (e,)) else: st = 
os.lstat(self.path2 + 'test_mknod-3') @@ -855,8 +821,8 @@ unicode_dir = self.unicode_dir if unicode_dir is None: skip("encoding not good enough") - dest = u"%s/file.txt" % unicode_dir - posix.symlink(u"%s/somefile" % unicode_dir, dest) + dest = "%s/file.txt" % unicode_dir + posix.symlink("%s/somefile" % unicode_dir, dest) with open(dest) as f: data = f.read() assert data == "who cares?" @@ -874,11 +840,11 @@ def test_tmpfile(self): os = self.posix f = os.tmpfile() - f.write("xxx") + f.write(b"xxx") f.flush() f.seek(0, 0) assert isinstance(f, file) - assert f.read() == 'xxx' + assert f.read() == b'xxx' def test_tmpnam(self): import stat, os @@ -943,9 +909,9 @@ def test_environ(self): posix = self.posix os = self.os - assert posix.environ['PATH'] - del posix.environ['PATH'] - def fn(): posix.environ['PATH'] + assert posix.environ[b'PATH'] + del posix.environ[b'PATH'] + def fn(): posix.environ[b'PATH'] raises(KeyError, fn) if hasattr(__import__(os.name), "unsetenv"): @@ -989,28 +955,28 @@ def test_stat_unicode(self): # test that passing unicode would not raise UnicodeDecodeError try: - self.posix.stat(u"ą") + self.posix.stat("ą") except OSError: pass def test_open_unicode(self): # Ensure passing unicode doesn't raise UnicodeEncodeError try: - self.posix.open(u"ą", self.posix.O_WRONLY) + self.posix.open("ą", self.posix.O_WRONLY) except OSError: pass def test_remove_unicode(self): # See 2 above ;) try: - self.posix.remove(u"ą") + self.posix.remove("ą") except OSError: pass class AppTestUnicodeFilename: def setup_class(cls): ufilename = (unicode(udir.join('test_unicode_filename_')) + - u'\u65e5\u672c.txt') # "Japan" + '\u65e5\u672c.txt') # "Japan" try: f = file(ufilename, 'w') except UnicodeEncodeError: @@ -1027,7 +993,7 @@ content = self.posix.read(fd, 50) finally: self.posix.close(fd) - assert content == "test" + assert content == b"test" class TestPexpect(object): diff --git a/pypy/module/posix/test/test_posix_libfile.py b/pypy/module/posix/test/test_posix_libfile.py --- a/pypy/module/posix/test/test_posix_libfile.py +++ b/pypy/module/posix/test/test_posix_libfile.py @@ -19,7 +19,7 @@ def test_fdopen(self): path = self.path posix = self.posix - fd = posix.open(path, posix.O_RDONLY, 0777) + fd = posix.open(path, posix.O_RDONLY, 0o777) f = posix.fdopen(fd, "r") result = f.read() assert result == "this is a test" @@ -56,12 +56,12 @@ if sys.platform.startswith('win'): skip("unix specific") # - import __builtin__ + import builtins posix = self.posix orig_file = file try: f = posix.popen('true') - __builtin__.file = lambda x : explode + builtins.file = lambda x : explode f.close() finally: - __builtin__.file = orig_file + builtins.file = orig_file From noreply at buildbot.pypy.org Wed Feb 8 00:59:28 2012 From: noreply at buildbot.pypy.org (wlav) Date: Wed, 8 Feb 2012 00:59:28 +0100 (CET) Subject: [pypy-commit] pypy reflex-support: o) make pyflakes happy Message-ID: <20120207235928.404E882CE3@wyvern.cs.uni-duesseldorf.de> Author: Wim Lavrijsen Branch: reflex-support Changeset: r52211:ba631ebda607 Date: 2012-02-07 09:56 -0800 http://bitbucket.org/pypy/pypy/changeset/ba631ebda607/ Log: o) make pyflakes happy diff --git a/pypy/module/cppyy/converter.py b/pypy/module/cppyy/converter.py --- a/pypy/module/cppyy/converter.py +++ b/pypy/module/cppyy/converter.py @@ -1,7 +1,6 @@ import sys from pypy.interpreter.error import OperationError -from pypy.interpreter.buffer import Buffer from pypy.rpython.lltypesystem import rffi, lltype from pypy.rlib.rarithmetic import r_singlefloat from pypy.rlib import jit, 
libffi, clibffi @@ -557,7 +556,6 @@ _converters = {} # builtin and custom types _a_converters = {} # array and ptr versions of above def get_converter(space, name, default): - from pypy.module.cppyy import interp_cppyy # The matching of the name to a converter should follow: # 1) full, exact match # 1a) const-removed match @@ -572,13 +570,13 @@ # 1) full, exact match try: return _converters[name](space, default) - except KeyError, k: + except KeyError: pass # 1a) const-removed match try: return _converters[helper.remove_const(name)](space, default) - except KeyError, k: + except KeyError: pass # 2) match of decorated, unqualified type @@ -588,7 +586,7 @@ # array_index may be negative to indicate no size or no size found array_size = helper.array_size(name) return _a_converters[clean_name+compound](space, array_size) - except KeyError, k: + except KeyError: pass # 3) accept const ref as by value diff --git a/pypy/module/cppyy/interp_cppyy.py b/pypy/module/cppyy/interp_cppyy.py --- a/pypy/module/cppyy/interp_cppyy.py +++ b/pypy/module/cppyy/interp_cppyy.py @@ -1,7 +1,5 @@ -import pypy.module.cppyy.capi as capi - from pypy.interpreter.error import OperationError -from pypy.interpreter.gateway import ObjSpace, interp2app, unwrap_spec +from pypy.interpreter.gateway import interp2app, unwrap_spec from pypy.interpreter.typedef import TypeDef, interp_attrproperty from pypy.interpreter.baseobjspace import Wrappable, W_Root @@ -227,7 +225,7 @@ assert lltype.typeOf(newthis) == capi.C_OBJECT try: CPPMethod.call(self, newthis, None, args_w) - except Exception, e: + except Exception: capi.c_deallocate(self.cpptype.handle, newthis) raise return new_instance(self.space, w_type, self.cpptype, newthis, True) diff --git a/pypy/module/cppyy/pythonify.py b/pypy/module/cppyy/pythonify.py --- a/pypy/module/cppyy/pythonify.py +++ b/pypy/module/cppyy/pythonify.py @@ -74,7 +74,7 @@ else: # return instance try: cppclass = get_cppclass(rettype) - except AttributeError, e: + except AttributeError: import warnings warnings.warn("class %s unknown: no data member access" % rettype, RuntimeWarning) From noreply at buildbot.pypy.org Wed Feb 8 00:59:29 2012 From: noreply at buildbot.pypy.org (wlav) Date: Wed, 8 Feb 2012 00:59:29 +0100 (CET) Subject: [pypy-commit] pypy reflex-support: o) mixin for floats Message-ID: <20120207235929.8117082CE3@wyvern.cs.uni-duesseldorf.de> Author: Wim Lavrijsen Branch: reflex-support Changeset: r52212:c1bf38d38a14 Date: 2012-02-07 14:35 -0800 http://bitbucket.org/pypy/pypy/changeset/c1bf38d38a14/ Log: o) mixin for floats o) combine common mixing parts for floats and integers o) support for float default arguments on ffi path diff --git a/pypy/module/cppyy/converter.py b/pypy/module/cppyy/converter.py --- a/pypy/module/cppyy/converter.py +++ b/pypy/module/cppyy/converter.py @@ -3,7 +3,7 @@ from pypy.interpreter.error import OperationError from pypy.rpython.lltypesystem import rffi, lltype from pypy.rlib.rarithmetic import r_singlefloat -from pypy.rlib import jit, libffi, clibffi +from pypy.rlib import jit, libffi, clibffi, rfloat from pypy.module._rawffi.interp_rawffi import unpack_simple_shape from pypy.module._rawffi.array import W_Array @@ -140,14 +140,10 @@ space.wrap("raw buffer interface not supported")) -class IntTypeConverterMixin(object): +class NumericTypeConverterMixin(object): _mixin_ = True _immutable_ = True - def convert_argument(self, space, w_obj, address): - x = rffi.cast(self.rffiptype, address) - x[0] = self._unwrap_object(space, w_obj) - def convert_argument_libffi(self, 
space, w_obj, argchain): argchain.arg(self._unwrap_object(space, w_obj)) @@ -164,6 +160,25 @@ rffiptr = rffi.cast(self.rffiptype, address) rffiptr[0] = self._unwrap_object(space, w_value) +class IntTypeConverterMixin(NumericTypeConverterMixin): + _mixin_ = True + _immutable_ = True + + def convert_argument(self, space, w_obj, address): + x = rffi.cast(self.rffiptype, address) + x[0] = self._unwrap_object(space, w_obj) + +class FloatTypeConverterMixin(NumericTypeConverterMixin): + _mixin_ = True + _immutable_ = True + + def convert_argument(self, space, w_obj, address): + x = rffi.cast(self.rffiptype, address) + x[0] = self._unwrap_object(space, w_obj) + typecode = rffi.cast(rffi.CCHARP, + _direct_ptradd(address, capi.c_function_arg_typeoffset())) + typecode[0] = self.typecode + class VoidConverter(TypeConverter): _immutable_ = True @@ -312,63 +327,43 @@ def _unwrap_object(self, space, w_obj): return space.uint_w(w_obj) -class FloatConverter(TypeConverter): + +class FloatConverter(FloatTypeConverterMixin, TypeConverter): _immutable_ = True libffitype = libffi.types.float + rffiptype = rffi.FLOATP + typecode = 'f' + + def __init__(self, space, default): + if default: + fval = float(rfloat.rstring_to_float(default)) + else: + fval = float(0.) + self.default = r_singlefloat(fval) def _unwrap_object(self, space, w_obj): return r_singlefloat(space.float_w(w_obj)) - def convert_argument(self, space, w_obj, address): - x = rffi.cast(rffi.FLOATP, address) - x[0] = self._unwrap_object(space, w_obj) - typecode = rffi.cast(rffi.CCHARP, - _direct_ptradd(address, capi.c_function_arg_typeoffset())) - typecode[0] = 'f' - - def convert_argument_libffi(self, space, w_obj, argchain): - from pypy.rlib.rarithmetic import r_singlefloat - fval = space.float_w(w_obj) - sfval = r_singlefloat(fval) - argchain.arg(sfval) - def from_memory(self, space, w_obj, w_type, offset): address = self._get_raw_address(space, w_obj, offset) - floatptr = rffi.cast(rffi.FLOATP, address) - return space.wrap(float(floatptr[0])) + rffiptr = rffi.cast(self.rffiptype, address) + return space.wrap(float(rffiptr[0])) - def to_memory(self, space, w_obj, w_value, offset): - address = self._get_raw_address(space, w_obj, offset) - floatptr = rffi.cast(rffi.FLOATP, address) - floatptr[0] = self._unwrap_object(space, w_value) - -class DoubleConverter(TypeConverter): +class DoubleConverter(FloatTypeConverterMixin, TypeConverter): _immutable_ = True libffitype = libffi.types.double + rffiptype = rffi.DOUBLEP + typecode = 'd' + + def __init__(self, space, default): + if default: + self.default = rffi.cast(rffi.DOUBLE, rfloat.rstring_to_float(default)) + else: + self.default = rffi.cast(rffi.DOUBLE, 0.) 
def _unwrap_object(self, space, w_obj): return space.float_w(w_obj) - def convert_argument(self, space, w_obj, address): - x = rffi.cast(rffi.DOUBLEP, address) - x[0] = self._unwrap_object(space, w_obj) - typecode = rffi.cast(rffi.CCHARP, - _direct_ptradd(address, capi.c_function_arg_typeoffset())) - typecode[0] = 'd' - - def convert_argument_libffi(self, space, w_obj, argchain): - argchain.arg(self._unwrap_object(space, w_obj)) - - def from_memory(self, space, w_obj, w_type, offset): - address = self._get_raw_address(space, w_obj, offset) - doubleptr = rffi.cast(rffi.DOUBLEP, address) - return space.wrap(doubleptr[0]) - - def to_memory(self, space, w_obj, w_value, offset): - address = self._get_raw_address(space, w_obj, offset) - doubleptr = rffi.cast(rffi.DOUBLEP, address) - doubleptr[0] = self._unwrap_object(space, w_value) - class CStringConverter(TypeConverter): _immutable_ = True diff --git a/pypy/module/cppyy/interp_cppyy.py b/pypy/module/cppyy/interp_cppyy.py --- a/pypy/module/cppyy/interp_cppyy.py +++ b/pypy/module/cppyy/interp_cppyy.py @@ -1,3 +1,5 @@ +import pypy.module.cppyy.capi as capi + from pypy.interpreter.error import OperationError from pypy.interpreter.gateway import interp2app, unwrap_spec from pypy.interpreter.typedef import TypeDef, interp_attrproperty @@ -8,13 +10,14 @@ from pypy.rlib import libffi, rdynload, rweakref from pypy.rlib import jit, debug -from pypy.module.cppyy import converter, executor, helper, capi +from pypy.module.cppyy import converter, executor, helper class FastCallNotPossible(Exception): pass def _direct_ptradd(ptr, offset): # TODO: factor out with convert.py + assert lltype.typeOf(ptr) == capi.C_OBJECT address = rffi.cast(rffi.CCHARP, ptr) return rffi.cast(capi.C_OBJECT, lltype.direct_ptradd(address, offset)) diff --git a/pypy/module/cppyy/test/example01.cxx b/pypy/module/cppyy/test/example01.cxx --- a/pypy/module/cppyy/test/example01.cxx +++ b/pypy/module/cppyy/test/example01.cxx @@ -162,6 +162,9 @@ typeValueImp(long, long) typeValueImp(unsigned long, ulong) +typeValueImp(float, float) +typeValueImp(double, double) + std::string ArgPasser::stringValue(std::string arg0, int argn, std::string arg1) { switch (argn) { diff --git a/pypy/module/cppyy/test/example01.h b/pypy/module/cppyy/test/example01.h --- a/pypy/module/cppyy/test/example01.h +++ b/pypy/module/cppyy/test/example01.h @@ -61,18 +61,24 @@ int globalAddOneToInt(int a); } -#define typeValue(itype, tname) \ +#define itypeValue(itype, tname) \ itype tname##Value(itype arg0, int argn=0, itype arg1=1, itype arg2=2) +#define ftypeValue(ftype) \ + ftype ftype##Value(ftype arg0, int argn=0, ftype arg1=1., ftype arg2=2.) 
+ // argument passing class ArgPasser { // use a class for now as methptrgetter not public: // implemented for global functions - typeValue(short, short); - typeValue(unsigned short, ushort); - typeValue(int, int); - typeValue(unsigned int, uint); - typeValue(long, long); - typeValue(unsigned long, ulong); + itypeValue(short, short); + itypeValue(unsigned short, ushort); + itypeValue(int, int); + itypeValue(unsigned int, uint); + itypeValue(long, long); + itypeValue(unsigned long, ulong); + + ftypeValue(float); + ftypeValue(double); std::string stringValue( std::string arg0, int argn=0, std::string arg1 = "default"); diff --git a/pypy/module/cppyy/test/test_pythonify.py b/pypy/module/cppyy/test/test_pythonify.py --- a/pypy/module/cppyy/test/test_pythonify.py +++ b/pypy/module/cppyy/test/test_pythonify.py @@ -268,7 +268,18 @@ assert g(11, 2) == 2 assert g(11) == 11 - def test12_underscore_in_class_name(self): + for ftype in ['float', 'double']: + g = getattr(a, '%sValue' % ftype) + raises(TypeError, 'g(1., 2, 3., 4., 6.)') + assert g(11., 0, 12., 13.) == 11. + assert g(11., 1, 12., 13.) == 12. + assert g(11., 1, 12.) == 12. + assert g(11., 2, 12.) == 2. + assert g(11., 1) == 1. + assert g(11., 2) == 2. + assert g(11.) == 11. + + def test11_underscore_in_class_name(self): """Test recognition of '_' as part of a valid class name""" import cppyy From noreply at buildbot.pypy.org Wed Feb 8 06:12:03 2012 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Wed, 8 Feb 2012 06:12:03 +0100 (CET) Subject: [pypy-commit] pypy default: #1033 -- added truediv to numpy boxes Message-ID: <20120208051203.0E0EB82CE3@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: Changeset: r52213:820edf258da9 Date: 2012-02-08 00:11 -0500 http://bitbucket.org/pypy/pypy/changeset/820edf258da9/ Log: #1033 -- added truediv to numpy boxes diff --git a/pypy/module/micronumpy/interp_boxes.py b/pypy/module/micronumpy/interp_boxes.py --- a/pypy/module/micronumpy/interp_boxes.py +++ b/pypy/module/micronumpy/interp_boxes.py @@ -80,6 +80,7 @@ descr_sub = _binop_impl("subtract") descr_mul = _binop_impl("multiply") descr_div = _binop_impl("divide") + descr_truediv = _binop_impl("true_divide") descr_pow = _binop_impl("power") descr_eq = _binop_impl("equal") descr_ne = _binop_impl("not_equal") @@ -174,6 +175,7 @@ __sub__ = interp2app(W_GenericBox.descr_sub), __mul__ = interp2app(W_GenericBox.descr_mul), __div__ = interp2app(W_GenericBox.descr_div), + __truediv__ = interp2app(W_GenericBox.descr_truediv), __pow__ = interp2app(W_GenericBox.descr_pow), __radd__ = interp2app(W_GenericBox.descr_radd), diff --git a/pypy/module/micronumpy/test/test_dtypes.py b/pypy/module/micronumpy/test/test_dtypes.py --- a/pypy/module/micronumpy/test/test_dtypes.py +++ b/pypy/module/micronumpy/test/test_dtypes.py @@ -401,3 +401,9 @@ else: assert issubclass(int64, int) assert int_ is int64 + + def test_operators(self): + from operator import truediv + from _numpypy import float64, int_ + + assert truediv(int_(3), int_(2)) == float64(1.5) From noreply at buildbot.pypy.org Wed Feb 8 09:21:10 2012 From: noreply at buildbot.pypy.org (fijal) Date: Wed, 8 Feb 2012 09:21:10 +0100 (CET) Subject: [pypy-commit] pypy numpy-record-dtypes: in-progress Message-ID: <20120208082110.2165382B1E@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-record-dtypes Changeset: r52214:a3d4b51ec806 Date: 2012-02-08 10:20 +0200 http://bitbucket.org/pypy/pypy/changeset/a3d4b51ec806/ Log: in-progress diff --git a/pypy/module/micronumpy/interp_dtype.py 
b/pypy/module/micronumpy/interp_dtype.py --- a/pypy/module/micronumpy/interp_dtype.py +++ b/pypy/module/micronumpy/interp_dtype.py @@ -95,8 +95,14 @@ def dtype_from_list(space, w_lst): lst_w = space.listview(w_lst) fieldlist = [] + offset = 0 for w_elem in lst_w: - fldname, flddesc = space.fixedview(w_elem, 2) + w_fldname, w_flddesc = space.fixedview(w_elem, 2) + subdtype = descr__new__(space.gettypefor(W_Dtype), w_flddesc) + align = subdtype.alignment + offset = (offset + (align-1)) & ~ (align-1) + fieldlist.append((offset, space.str_w(w_fldname), subdtype)) + xxx def descr__new__(space, w_subtype, w_dtype): cache = get_dtype_cache(space) From noreply at buildbot.pypy.org Wed Feb 8 10:57:59 2012 From: noreply at buildbot.pypy.org (Stefano Parmesan) Date: Wed, 8 Feb 2012 10:57:59 +0100 (CET) Subject: [pypy-commit] pypy default: removed cPython-oriented code in json and added KeyValueBuilder(s) for speeding up json decoding Message-ID: <20120208095759.31F7682B1E@wyvern.cs.uni-duesseldorf.de> Author: Stefano Parmesan Branch: Changeset: r52215:de5504a0f4f0 Date: 2012-01-27 15:06 +0100 http://bitbucket.org/pypy/pypy/changeset/de5504a0f4f0/ Log: removed cPython-oriented code in json and added KeyValueBuilder(s) for speeding up json decoding diff --git a/lib-python/modified-2.7/json/decoder.py b/lib-python/modified-2.7/json/decoder.py --- a/lib-python/modified-2.7/json/decoder.py +++ b/lib-python/modified-2.7/json/decoder.py @@ -5,15 +5,47 @@ import struct from json.scanner import make_scanner -try: - from _json import scanstring as c_scanstring -except ImportError: - c_scanstring = None __all__ = ['JSONDecoder'] FLAGS = re.VERBOSE | re.MULTILINE | re.DOTALL + +class KeyValueElement(object): + __slots__ = ['key', 'value'] + + def __init__(self, key, value): + self.key = key + self.value = value + + +class KeyValueAbstractBuilder(object): + __slots__ = ['elements', 'base_type'] + + def __init__(self): + self.elements = self.base_type() + + def append(self, key, value): + pass + + def build(self): + return self.elements + + +class KeyValueListBuilder(KeyValueAbstractBuilder): + base_type = list + + def append(self, key, value): + self.elements.append((key, value)) + + +class KeyValueDictBuilder(KeyValueAbstractBuilder): + base_type = dict + + def append(self, key, value): + self.elements[key] = value + + def _floatconstants(): _BYTES = '7FF80000000000007FF0000000000000'.decode('hex') if sys.byteorder != 'big': @@ -62,7 +94,7 @@ DEFAULT_ENCODING = "utf-8" -def py_scanstring(s, end, encoding=None, strict=True, +def scanstring(s, end, encoding=None, strict=True, _b=BACKSLASH, _m=STRINGCHUNK.match): """Scan the string s for a JSON string. End is the index of the character in s after the quote that started the JSON string. 
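(Usage note, not part of the changeset: the KeyValue*Builder classes above are an internal refactoring of the pure-Python decoder; the public json API behaves as before. A short sketch of the two code paths they back, runnable on any 2.7 stdlib:)

    import json
    from collections import OrderedDict

    doc = '{"b": 1, "a": 2, "c": 3}'

    # default path: pairs end up in a plain dict (KeyValueDictBuilder)
    assert json.loads(doc) == {'a': 2, 'b': 1, 'c': 3}

    # object_pairs_hook path: pairs are kept as a list in document order
    # (KeyValueListBuilder) and handed to the hook
    ordered = json.loads(doc, object_pairs_hook=OrderedDict)
    assert ordered.items() == [('b', 1), ('a', 2), ('c', 3)]

    # object_hook still receives the finished dict
    upper = json.loads(doc, object_hook=lambda d: dict((k.upper(), v) for k, v in d.items()))
    assert upper == {'B': 1, 'A': 2, 'C': 3}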
@@ -75,7 +107,6 @@ if encoding is None: encoding = DEFAULT_ENCODING chunks = [] - _append = chunks.append begin = end - 1 while 1: chunk = _m(s, end) @@ -84,11 +115,13 @@ errmsg("Unterminated string starting at", s, begin)) end = chunk.end() content, terminator = chunk.groups() + del chunk # Content is contains zero or more unescaped string characters if content: if not isinstance(content, unicode): content = unicode(content, encoding) - _append(content) + chunks.append(content) + del content # Terminator is the end of string, a literal control character, # or a backslash denoting that an escape sequence follows if terminator == '"': @@ -99,7 +132,8 @@ msg = "Invalid control character {0!r} at".format(terminator) raise ValueError(errmsg(msg, s, end)) else: - _append(terminator) + chunks.append(terminator) + del terminator continue try: esc = s[end] @@ -136,21 +170,16 @@ char = unichr(uni) end = next_end # Append the unescaped character - _append(char) + chunks.append(char) return u''.join(chunks), end -# Use speedup if available -scanstring = c_scanstring or py_scanstring - WHITESPACE = re.compile(r'[ \t\n\r]*', FLAGS) WHITESPACE_STR = ' \t\n\r' def JSONObject(s_and_end, encoding, strict, scan_once, object_hook, object_pairs_hook, _w=WHITESPACE.match, _ws=WHITESPACE_STR): s, end = s_and_end - pairs = [] - pairs_append = pairs.append # Use a slice to prevent IndexError from being raised, the following # check will raise a more specific ValueError if the string is empty nextchar = s[end:end + 1] @@ -162,7 +191,7 @@ # Trivial empty object if nextchar == '}': if object_pairs_hook is not None: - result = object_pairs_hook(pairs) + result = object_pairs_hook([]) return result, end pairs = {} if object_hook is not None: @@ -171,7 +200,13 @@ elif nextchar != '"': raise ValueError(errmsg("Expecting property name", s, end)) end += 1 - while True: + + if object_pairs_hook is not None: + pairs = KeyValueListBuilder() + else: + pairs = KeyValueDictBuilder() + + while 1: key, end = scanstring(s, end, encoding, strict) # To skip some function call overhead we optimize the fast paths where @@ -195,7 +230,7 @@ value, end = scan_once(s, end) except StopIteration: raise ValueError(errmsg("Expecting object", s, end)) - pairs_append((key, value)) + pairs.append(key, value) try: nextchar = s[end] @@ -227,9 +262,9 @@ raise ValueError(errmsg("Expecting property name", s, end - 1)) if object_pairs_hook is not None: - result = object_pairs_hook(pairs) + result = object_pairs_hook(pairs.build()) # to list return result, end - pairs = dict(pairs) + pairs = pairs.build() # to dict if object_hook is not None: pairs = object_hook(pairs) return pairs, end @@ -244,13 +279,12 @@ # Look-ahead for trivial empty array if nextchar == ']': return values, end + 1 - _append = values.append - while True: + while 1: try: value, end = scan_once(s, end) except StopIteration: raise ValueError(errmsg("Expecting object", s, end)) - _append(value) + values.append(value) nextchar = s[end:end + 1] if nextchar in _ws: end = _w(s, end + 1).end() diff --git a/lib-python/modified-2.7/json/scanner.py b/lib-python/modified-2.7/json/scanner.py --- a/lib-python/modified-2.7/json/scanner.py +++ b/lib-python/modified-2.7/json/scanner.py @@ -1,10 +1,6 @@ """JSON token scanner """ import re -try: - from _json import make_scanner as c_make_scanner -except ImportError: - c_make_scanner = None __all__ = ['make_scanner'] @@ -12,19 +8,7 @@ r'(-?(?:0|[1-9]\d*))(\.\d+)?([eE][-+]?\d+)?', (re.VERBOSE | re.MULTILINE | re.DOTALL)) -def py_make_scanner(context): 
- parse_object = context.parse_object - parse_array = context.parse_array - parse_string = context.parse_string - match_number = NUMBER_RE.match - encoding = context.encoding - strict = context.strict - parse_float = context.parse_float - parse_int = context.parse_int - parse_constant = context.parse_constant - object_hook = context.object_hook - object_pairs_hook = context.object_pairs_hook - +def make_scanner(context): def _scan_once(string, idx): try: nextchar = string[idx] @@ -32,12 +16,12 @@ raise StopIteration if nextchar == '"': - return parse_string(string, idx + 1, encoding, strict) + return context.parse_string(string, idx + 1, context.encoding, context.strict) elif nextchar == '{': - return parse_object((string, idx + 1), encoding, strict, - _scan_once, object_hook, object_pairs_hook) + return context.parse_object((string, idx + 1), context.encoding, context.strict, + _scan_once, context.object_hook, context.object_pairs_hook) elif nextchar == '[': - return parse_array((string, idx + 1), _scan_once) + return context.parse_array((string, idx + 1), _scan_once) elif nextchar == 'n' and string[idx:idx + 4] == 'null': return None, idx + 4 elif nextchar == 't' and string[idx:idx + 4] == 'true': @@ -45,23 +29,21 @@ elif nextchar == 'f' and string[idx:idx + 5] == 'false': return False, idx + 5 - m = match_number(string, idx) + m = NUMBER_RE.match(string, idx) if m is not None: integer, frac, exp = m.groups() if frac or exp: - res = parse_float(integer + (frac or '') + (exp or '')) + res = context.parse_float(integer + (frac or '') + (exp or '')) else: - res = parse_int(integer) + res = context.parse_int(integer) return res, m.end() elif nextchar == 'N' and string[idx:idx + 3] == 'NaN': - return parse_constant('NaN'), idx + 3 + return context.parse_constant('NaN'), idx + 3 elif nextchar == 'I' and string[idx:idx + 8] == 'Infinity': - return parse_constant('Infinity'), idx + 8 + return context.parse_constant('Infinity'), idx + 8 elif nextchar == '-' and string[idx:idx + 9] == '-Infinity': - return parse_constant('-Infinity'), idx + 9 + return context.parse_constant('-Infinity'), idx + 9 else: raise StopIteration return _scan_once - -make_scanner = c_make_scanner or py_make_scanner From noreply at buildbot.pypy.org Wed Feb 8 10:58:18 2012 From: noreply at buildbot.pypy.org (Stefano Parmesan) Date: Wed, 8 Feb 2012 10:58:18 +0100 (CET) Subject: [pypy-commit] pypy default: merged from pypy's 2c2944d51e51 and fixed conflicts Message-ID: <20120208095818.DEACC82B1E@wyvern.cs.uni-duesseldorf.de> Author: Stefano Parmesan Branch: Changeset: r52216:a330d824b42f Date: 2012-01-31 08:59 +0100 http://bitbucket.org/pypy/pypy/changeset/a330d824b42f/ Log: merged from pypy's 2c2944d51e51 and fixed conflicts diff too long, truncating to 10000 out of 149264 lines diff --git a/lib-python/2.7/BaseHTTPServer.py b/lib-python/2.7/BaseHTTPServer.py --- a/lib-python/2.7/BaseHTTPServer.py +++ b/lib-python/2.7/BaseHTTPServer.py @@ -310,7 +310,13 @@ """ try: - self.raw_requestline = self.rfile.readline() + self.raw_requestline = self.rfile.readline(65537) + if len(self.raw_requestline) > 65536: + self.requestline = '' + self.request_version = '' + self.command = '' + self.send_error(414) + return if not self.raw_requestline: self.close_connection = 1 return diff --git a/lib-python/2.7/ConfigParser.py b/lib-python/2.7/ConfigParser.py --- a/lib-python/2.7/ConfigParser.py +++ b/lib-python/2.7/ConfigParser.py @@ -545,6 +545,38 @@ if isinstance(val, list): options[name] = '\n'.join(val) +import UserDict as _UserDict + 
+class _Chainmap(_UserDict.DictMixin): + """Combine multiple mappings for successive lookups. + + For example, to emulate Python's normal lookup sequence: + + import __builtin__ + pylookup = _Chainmap(locals(), globals(), vars(__builtin__)) + """ + + def __init__(self, *maps): + self._maps = maps + + def __getitem__(self, key): + for mapping in self._maps: + try: + return mapping[key] + except KeyError: + pass + raise KeyError(key) + + def keys(self): + result = [] + seen = set() + for mapping in self_maps: + for key in mapping: + if key not in seen: + result.append(key) + seen.add(key) + return result + class ConfigParser(RawConfigParser): def get(self, section, option, raw=False, vars=None): @@ -559,16 +591,18 @@ The section DEFAULT is special. """ - d = self._defaults.copy() + sectiondict = {} try: - d.update(self._sections[section]) + sectiondict = self._sections[section] except KeyError: if section != DEFAULTSECT: raise NoSectionError(section) # Update with the entry specific variables + vardict = {} if vars: for key, value in vars.items(): - d[self.optionxform(key)] = value + vardict[self.optionxform(key)] = value + d = _Chainmap(vardict, sectiondict, self._defaults) option = self.optionxform(option) try: value = d[option] diff --git a/lib-python/2.7/Cookie.py b/lib-python/2.7/Cookie.py --- a/lib-python/2.7/Cookie.py +++ b/lib-python/2.7/Cookie.py @@ -258,6 +258,11 @@ '\033' : '\\033', '\034' : '\\034', '\035' : '\\035', '\036' : '\\036', '\037' : '\\037', + # Because of the way browsers really handle cookies (as opposed + # to what the RFC says) we also encode , and ; + + ',' : '\\054', ';' : '\\073', + '"' : '\\"', '\\' : '\\\\', '\177' : '\\177', '\200' : '\\200', '\201' : '\\201', diff --git a/lib-python/2.7/HTMLParser.py b/lib-python/2.7/HTMLParser.py --- a/lib-python/2.7/HTMLParser.py +++ b/lib-python/2.7/HTMLParser.py @@ -26,7 +26,7 @@ tagfind = re.compile('[a-zA-Z][-.a-zA-Z0-9:_]*') attrfind = re.compile( r'\s*([a-zA-Z_][-.:a-zA-Z_0-9]*)(\s*=\s*' - r'(\'[^\']*\'|"[^"]*"|[-a-zA-Z0-9./,:;+*%?!&$\(\)_#=~@]*))?') + r'(\'[^\']*\'|"[^"]*"|[^\s"\'=<>`]*))?') locatestarttagend = re.compile(r""" <[a-zA-Z][-.a-zA-Z0-9:_]* # tag name @@ -99,7 +99,7 @@ markupbase.ParserBase.reset(self) def feed(self, data): - """Feed data to the parser. + r"""Feed data to the parser. Call this as often as you want, with as little or as much text as you want (may include '\n'). 
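(Usage note, not part of the changeset: the _Chainmap helper above lets ConfigParser.get() resolve an option by successive lookups instead of building a merged copy; explicit vars shadow the section, which shadows DEFAULT. The unchanged precedence, shown with the public API:)

    from ConfigParser import ConfigParser
    from StringIO import StringIO

    cfg = ConfigParser()
    cfg.readfp(StringIO("[DEFAULT]\nhost = localhost\nport = 80\n"
                        "[server]\nport = 8080\n"))

    assert cfg.get('server', 'port') == '8080'       # section value shadows DEFAULT
    assert cfg.get('server', 'host') == 'localhost'  # DEFAULT fills the gaps
    assert cfg.get('server', 'port', vars={'port': '9000'}) == '9000'  # vars win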
@@ -367,13 +367,16 @@ return s def replaceEntities(s): s = s.groups()[0] - if s[0] == "#": - s = s[1:] - if s[0] in ['x','X']: - c = int(s[1:], 16) - else: - c = int(s) - return unichr(c) + try: + if s[0] == "#": + s = s[1:] + if s[0] in ['x','X']: + c = int(s[1:], 16) + else: + c = int(s) + return unichr(c) + except ValueError: + return '&#'+s+';' else: # Cannot use name2codepoint directly, because HTMLParser supports apos, # which is not part of HTML 4 diff --git a/lib-python/2.7/SimpleHTTPServer.py b/lib-python/2.7/SimpleHTTPServer.py --- a/lib-python/2.7/SimpleHTTPServer.py +++ b/lib-python/2.7/SimpleHTTPServer.py @@ -15,6 +15,7 @@ import BaseHTTPServer import urllib import cgi +import sys import shutil import mimetypes try: @@ -131,7 +132,8 @@ length = f.tell() f.seek(0) self.send_response(200) - self.send_header("Content-type", "text/html") + encoding = sys.getfilesystemencoding() + self.send_header("Content-type", "text/html; charset=%s" % encoding) self.send_header("Content-Length", str(length)) self.end_headers() return f diff --git a/lib-python/2.7/SimpleXMLRPCServer.py b/lib-python/2.7/SimpleXMLRPCServer.py --- a/lib-python/2.7/SimpleXMLRPCServer.py +++ b/lib-python/2.7/SimpleXMLRPCServer.py @@ -246,7 +246,7 @@ marshalled data. For backwards compatibility, a dispatch function can be provided as an argument (see comment in SimpleXMLRPCRequestHandler.do_POST) but overriding the - existing method through subclassing is the prefered means + existing method through subclassing is the preferred means of changing method dispatch behavior. """ diff --git a/lib-python/2.7/SocketServer.py b/lib-python/2.7/SocketServer.py --- a/lib-python/2.7/SocketServer.py +++ b/lib-python/2.7/SocketServer.py @@ -675,7 +675,7 @@ # A timeout to apply to the request socket, if not None. timeout = None - # Disable nagle algoritm for this socket, if True. + # Disable nagle algorithm for this socket, if True. # Use only when wbufsize != 0, to avoid small packets. disable_nagle_algorithm = False diff --git a/lib-python/2.7/StringIO.py b/lib-python/2.7/StringIO.py --- a/lib-python/2.7/StringIO.py +++ b/lib-python/2.7/StringIO.py @@ -266,6 +266,7 @@ 8th bit) will cause a UnicodeError to be raised when getvalue() is called. """ + _complain_ifclosed(self.closed) if self.buflist: self.buf += ''.join(self.buflist) self.buflist = [] diff --git a/lib-python/2.7/_abcoll.py b/lib-python/2.7/_abcoll.py --- a/lib-python/2.7/_abcoll.py +++ b/lib-python/2.7/_abcoll.py @@ -82,7 +82,7 @@ @classmethod def __subclasshook__(cls, C): if cls is Iterator: - if _hasattr(C, "next"): + if _hasattr(C, "next") and _hasattr(C, "__iter__"): return True return NotImplemented diff --git a/lib-python/2.7/_pyio.py b/lib-python/2.7/_pyio.py --- a/lib-python/2.7/_pyio.py +++ b/lib-python/2.7/_pyio.py @@ -16,6 +16,7 @@ import io from io import (__all__, SEEK_SET, SEEK_CUR, SEEK_END) +from errno import EINTR __metaclass__ = type @@ -559,7 +560,11 @@ if not data: break res += data - return bytes(res) + if res: + return bytes(res) + else: + # b'' or None + return data def readinto(self, b): """Read up to len(b) bytes into b. 
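(Illustration, not part of the changeset: the _abcoll hunk above tightens Iterator.__subclasshook__ so that duck-typed recognition needs both next() and __iter__(), not next() alone.)

    import collections

    class OnlyNext(object):
        def next(self):
            raise StopIteration

    class RealIterator(object):
        def __iter__(self):
            return self
        def next(self):
            raise StopIteration

    assert not issubclass(OnlyNext, collections.Iterator)  # next() alone no longer counts
    assert issubclass(RealIterator, collections.Iterator)  # __iter__() is required as well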
@@ -678,7 +683,7 @@ """ def __init__(self, raw): - self.raw = raw + self._raw = raw ### Positioning ### @@ -722,8 +727,8 @@ if self.raw is None: raise ValueError("raw stream already detached") self.flush() - raw = self.raw - self.raw = None + raw = self._raw + self._raw = None return raw ### Inquiries ### @@ -738,6 +743,10 @@ return self.raw.writable() @property + def raw(self): + return self._raw + + @property def closed(self): return self.raw.closed @@ -933,7 +942,12 @@ current_size = 0 while True: # Read until EOF or until read() would block. - chunk = self.raw.read() + try: + chunk = self.raw.read() + except IOError as e: + if e.errno != EINTR: + raise + continue if chunk in empty_values: nodata_val = chunk break @@ -952,7 +966,12 @@ chunks = [buf[pos:]] wanted = max(self.buffer_size, n) while avail < n: - chunk = self.raw.read(wanted) + try: + chunk = self.raw.read(wanted) + except IOError as e: + if e.errno != EINTR: + raise + continue if chunk in empty_values: nodata_val = chunk break @@ -981,7 +1000,14 @@ have = len(self._read_buf) - self._read_pos if have < want or have <= 0: to_read = self.buffer_size - have - current = self.raw.read(to_read) + while True: + try: + current = self.raw.read(to_read) + except IOError as e: + if e.errno != EINTR: + raise + continue + break if current: self._read_buf = self._read_buf[self._read_pos:] + current self._read_pos = 0 @@ -1088,7 +1114,12 @@ written = 0 try: while self._write_buf: - n = self.raw.write(self._write_buf) + try: + n = self.raw.write(self._write_buf) + except IOError as e: + if e.errno != EINTR: + raise + continue if n > len(self._write_buf) or n < 0: raise IOError("write() returned incorrect number of bytes") del self._write_buf[:n] @@ -1456,7 +1487,7 @@ if not isinstance(errors, basestring): raise ValueError("invalid errors: %r" % errors) - self.buffer = buffer + self._buffer = buffer self._line_buffering = line_buffering self._encoding = encoding self._errors = errors @@ -1511,6 +1542,10 @@ def line_buffering(self): return self._line_buffering + @property + def buffer(self): + return self._buffer + def seekable(self): return self._seekable @@ -1724,8 +1759,8 @@ if self.buffer is None: raise ValueError("buffer is already detached") self.flush() - buffer = self.buffer - self.buffer = None + buffer = self._buffer + self._buffer = None return buffer def seek(self, cookie, whence=0): diff --git a/lib-python/2.7/_weakrefset.py b/lib-python/2.7/_weakrefset.py --- a/lib-python/2.7/_weakrefset.py +++ b/lib-python/2.7/_weakrefset.py @@ -66,7 +66,11 @@ return sum(x() is not None for x in self.data) def __contains__(self, item): - return ref(item) in self.data + try: + wr = ref(item) + except TypeError: + return False + return wr in self.data def __reduce__(self): return (self.__class__, (list(self),), diff --git a/lib-python/2.7/anydbm.py b/lib-python/2.7/anydbm.py --- a/lib-python/2.7/anydbm.py +++ b/lib-python/2.7/anydbm.py @@ -29,17 +29,8 @@ list = d.keys() # return a list of all existing keys (slow!) Future versions may change the order in which implementations are -tested for existence, add interfaces to other dbm-like +tested for existence, and add interfaces to other dbm-like implementations. - -The open function has an optional second argument. This can be 'r', -for read-only access, 'w', for read-write access of an existing -database, 'c' for read-write access to a new or existing database, and -'n' for read-write access to a new database. The default is 'r'. 
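(Illustration, not part of the changeset: with the _weakrefset hunk above, membership tests against objects that cannot be weakly referenced answer False instead of letting the TypeError from ref() escape.)

    from weakref import WeakSet

    class Node(object):
        pass

    live = WeakSet()
    node = Node()
    live.add(node)

    assert node in live        # ordinary membership is unchanged
    assert 42 not in live      # used to raise TypeError: cannot create weak reference to 'int'
    assert "name" not in live  # likewise for str and other un-weakreferenceable builtins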
- -Note: 'r' and 'w' fail if the database doesn't exist; 'c' creates it -only if it doesn't exist; and 'n' always creates a new database. - """ class error(Exception): @@ -63,7 +54,18 @@ error = tuple(_errors) -def open(file, flag = 'r', mode = 0666): +def open(file, flag='r', mode=0666): + """Open or create database at path given by *file*. + + Optional argument *flag* can be 'r' (default) for read-only access, 'w' + for read-write access of an existing database, 'c' for read-write access + to a new or existing database, and 'n' for read-write access to a new + database. + + Note: 'r' and 'w' fail if the database doesn't exist; 'c' creates it + only if it doesn't exist; and 'n' always creates a new database. + """ + # guess the type of an existing database from whichdb import whichdb result=whichdb(file) diff --git a/lib-python/2.7/argparse.py b/lib-python/2.7/argparse.py --- a/lib-python/2.7/argparse.py +++ b/lib-python/2.7/argparse.py @@ -82,6 +82,7 @@ ] +import collections as _collections import copy as _copy import os as _os import re as _re @@ -1037,7 +1038,7 @@ self._prog_prefix = prog self._parser_class = parser_class - self._name_parser_map = {} + self._name_parser_map = _collections.OrderedDict() self._choices_actions = [] super(_SubParsersAction, self).__init__( @@ -1080,7 +1081,7 @@ parser = self._name_parser_map[parser_name] except KeyError: tup = parser_name, ', '.join(self._name_parser_map) - msg = _('unknown parser %r (choices: %s)' % tup) + msg = _('unknown parser %r (choices: %s)') % tup raise ArgumentError(self, msg) # parse all the remaining options into the namespace @@ -1109,7 +1110,7 @@ the builtin open() function. """ - def __init__(self, mode='r', bufsize=None): + def __init__(self, mode='r', bufsize=-1): self._mode = mode self._bufsize = bufsize @@ -1121,18 +1122,19 @@ elif 'w' in self._mode: return _sys.stdout else: - msg = _('argument "-" with mode %r' % self._mode) + msg = _('argument "-" with mode %r') % self._mode raise ValueError(msg) # all other arguments are used as file names - if self._bufsize: + try: return open(string, self._mode, self._bufsize) - else: - return open(string, self._mode) + except IOError as e: + message = _("can't open '%s': %s") + raise ArgumentTypeError(message % (string, e)) def __repr__(self): - args = [self._mode, self._bufsize] - args_str = ', '.join([repr(arg) for arg in args if arg is not None]) + args = self._mode, self._bufsize + args_str = ', '.join(repr(arg) for arg in args if arg != -1) return '%s(%s)' % (type(self).__name__, args_str) # =========================== @@ -1275,13 +1277,20 @@ # create the action object, and add it to the parser action_class = self._pop_action_class(kwargs) if not _callable(action_class): - raise ValueError('unknown action "%s"' % action_class) + raise ValueError('unknown action "%s"' % (action_class,)) action = action_class(**kwargs) # raise an error if the action type is not callable type_func = self._registry_get('type', action.type, action.type) if not _callable(type_func): - raise ValueError('%r is not callable' % type_func) + raise ValueError('%r is not callable' % (type_func,)) + + # raise an error if the metavar does not match the type + if hasattr(self, "_get_formatter"): + try: + self._get_formatter()._format_args(action, None) + except TypeError: + raise ValueError("length of metavar tuple does not match nargs") return self._add_action(action) @@ -1481,6 +1490,7 @@ self._defaults = container._defaults self._has_negative_number_optionals = \ container._has_negative_number_optionals + 
self._mutually_exclusive_groups = container._mutually_exclusive_groups def _add_action(self, action): action = super(_ArgumentGroup, self)._add_action(action) diff --git a/lib-python/2.7/ast.py b/lib-python/2.7/ast.py --- a/lib-python/2.7/ast.py +++ b/lib-python/2.7/ast.py @@ -29,12 +29,12 @@ from _ast import __version__ -def parse(expr, filename='', mode='exec'): +def parse(source, filename='', mode='exec'): """ - Parse an expression into an AST node. - Equivalent to compile(expr, filename, mode, PyCF_ONLY_AST). + Parse the source into an AST node. + Equivalent to compile(source, filename, mode, PyCF_ONLY_AST). """ - return compile(expr, filename, mode, PyCF_ONLY_AST) + return compile(source, filename, mode, PyCF_ONLY_AST) def literal_eval(node_or_string): @@ -152,8 +152,6 @@ Increment the line number of each node in the tree starting at *node* by *n*. This is useful to "move code" to a different location in a file. """ - if 'lineno' in node._attributes: - node.lineno = getattr(node, 'lineno', 0) + n for child in walk(node): if 'lineno' in child._attributes: child.lineno = getattr(child, 'lineno', 0) + n @@ -204,9 +202,9 @@ def walk(node): """ - Recursively yield all child nodes of *node*, in no specified order. This is - useful if you only want to modify nodes in place and don't care about the - context. + Recursively yield all descendant nodes in the tree starting at *node* + (including *node* itself), in no specified order. This is useful if you + only want to modify nodes in place and don't care about the context. """ from collections import deque todo = deque([node]) diff --git a/lib-python/2.7/asyncore.py b/lib-python/2.7/asyncore.py --- a/lib-python/2.7/asyncore.py +++ b/lib-python/2.7/asyncore.py @@ -54,7 +54,11 @@ import os from errno import EALREADY, EINPROGRESS, EWOULDBLOCK, ECONNRESET, EINVAL, \ - ENOTCONN, ESHUTDOWN, EINTR, EISCONN, EBADF, ECONNABORTED, errorcode + ENOTCONN, ESHUTDOWN, EINTR, EISCONN, EBADF, ECONNABORTED, EPIPE, EAGAIN, \ + errorcode + +_DISCONNECTED = frozenset((ECONNRESET, ENOTCONN, ESHUTDOWN, ECONNABORTED, EPIPE, + EBADF)) try: socket_map @@ -109,7 +113,7 @@ if flags & (select.POLLHUP | select.POLLERR | select.POLLNVAL): obj.handle_close() except socket.error, e: - if e.args[0] not in (EBADF, ECONNRESET, ENOTCONN, ESHUTDOWN, ECONNABORTED): + if e.args[0] not in _DISCONNECTED: obj.handle_error() else: obj.handle_close() @@ -353,7 +357,7 @@ except TypeError: return None except socket.error as why: - if why.args[0] in (EWOULDBLOCK, ECONNABORTED): + if why.args[0] in (EWOULDBLOCK, ECONNABORTED, EAGAIN): return None else: raise @@ -367,7 +371,7 @@ except socket.error, why: if why.args[0] == EWOULDBLOCK: return 0 - elif why.args[0] in (ECONNRESET, ENOTCONN, ESHUTDOWN, ECONNABORTED): + elif why.args[0] in _DISCONNECTED: self.handle_close() return 0 else: @@ -385,7 +389,7 @@ return data except socket.error, why: # winsock sometimes throws ENOTCONN - if why.args[0] in [ECONNRESET, ENOTCONN, ESHUTDOWN, ECONNABORTED]: + if why.args[0] in _DISCONNECTED: self.handle_close() return '' else: diff --git a/lib-python/2.7/bdb.py b/lib-python/2.7/bdb.py --- a/lib-python/2.7/bdb.py +++ b/lib-python/2.7/bdb.py @@ -250,6 +250,12 @@ list.append(lineno) bp = Breakpoint(filename, lineno, temporary, cond, funcname) + def _prune_breaks(self, filename, lineno): + if (filename, lineno) not in Breakpoint.bplist: + self.breaks[filename].remove(lineno) + if not self.breaks[filename]: + del self.breaks[filename] + def clear_break(self, filename, lineno): filename = self.canonic(filename) 
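(Illustration, not part of the changeset: among the argparse fixes above, add_argument() now test-formats the usage string, so a metavar tuple whose length does not match nargs is rejected when the argument is defined rather than when help is rendered.)

    import argparse

    parser = argparse.ArgumentParser(prog='demo')
    parser.add_argument('--range', nargs=2, metavar=('LOW', 'HIGH'))  # lengths agree: fine

    try:
        parser.add_argument('--bad', nargs=2, metavar=('ONLY_ONE',))  # too short: rejected now
    except ValueError as exc:
        assert 'length of metavar tuple does not match nargs' in str(exc)
    else:
        raise AssertionError('expected ValueError')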
if not filename in self.breaks: @@ -261,10 +267,7 @@ # pair, then remove the breaks entry for bp in Breakpoint.bplist[filename, lineno][:]: bp.deleteMe() - if (filename, lineno) not in Breakpoint.bplist: - self.breaks[filename].remove(lineno) - if not self.breaks[filename]: - del self.breaks[filename] + self._prune_breaks(filename, lineno) def clear_bpbynumber(self, arg): try: @@ -277,7 +280,8 @@ return 'Breakpoint number (%d) out of range' % number if not bp: return 'Breakpoint (%d) already deleted' % number - self.clear_break(bp.file, bp.line) + bp.deleteMe() + self._prune_breaks(bp.file, bp.line) def clear_all_file_breaks(self, filename): filename = self.canonic(filename) diff --git a/lib-python/2.7/collections.py b/lib-python/2.7/collections.py --- a/lib-python/2.7/collections.py +++ b/lib-python/2.7/collections.py @@ -6,59 +6,38 @@ __all__ += _abcoll.__all__ from _collections import deque, defaultdict -from operator import itemgetter as _itemgetter, eq as _eq +from operator import itemgetter as _itemgetter from keyword import iskeyword as _iskeyword import sys as _sys import heapq as _heapq -from itertools import repeat as _repeat, chain as _chain, starmap as _starmap, \ - ifilter as _ifilter, imap as _imap +from itertools import repeat as _repeat, chain as _chain, starmap as _starmap + try: - from thread import get_ident + from thread import get_ident as _get_ident except ImportError: - from dummy_thread import get_ident - -def _recursive_repr(user_function): - 'Decorator to make a repr function return "..." for a recursive call' - repr_running = set() - - def wrapper(self): - key = id(self), get_ident() - if key in repr_running: - return '...' - repr_running.add(key) - try: - result = user_function(self) - finally: - repr_running.discard(key) - return result - - # Can't use functools.wraps() here because of bootstrap issues - wrapper.__module__ = getattr(user_function, '__module__') - wrapper.__doc__ = getattr(user_function, '__doc__') - wrapper.__name__ = getattr(user_function, '__name__') - return wrapper + from dummy_thread import get_ident as _get_ident ################################################################################ ### OrderedDict ################################################################################ -class OrderedDict(dict, MutableMapping): +class OrderedDict(dict): 'Dictionary that remembers insertion order' # An inherited dict maps keys to values. # The inherited dict provides __getitem__, __len__, __contains__, and get. # The remaining methods are order-aware. - # Big-O running times for all methods are the same as for regular dictionaries. + # Big-O running times for all methods are the same as regular dictionaries. - # The internal self.__map dictionary maps keys to links in a doubly linked list. + # The internal self.__map dict maps keys to links in a doubly linked list. # The circular doubly linked list starts and ends with a sentinel element. # The sentinel element never gets deleted (this simplifies the algorithm). # Each link is stored as a list of length three: [PREV, NEXT, KEY]. def __init__(self, *args, **kwds): - '''Initialize an ordered dictionary. Signature is the same as for - regular dictionaries, but keyword arguments are not recommended - because their insertion order is arbitrary. + '''Initialize an ordered dictionary. The signature is the same as + regular dictionaries, but keyword arguments are not recommended because + their insertion order is arbitrary. 
''' if len(args) > 1: @@ -66,17 +45,15 @@ try: self.__root except AttributeError: - self.__root = root = [None, None, None] # sentinel node - PREV = 0 - NEXT = 1 - root[PREV] = root[NEXT] = root + self.__root = root = [] # sentinel node + root[:] = [root, root, None] self.__map = {} - self.update(*args, **kwds) + self.__update(*args, **kwds) def __setitem__(self, key, value, PREV=0, NEXT=1, dict_setitem=dict.__setitem__): 'od.__setitem__(i, y) <==> od[i]=y' - # Setting a new item creates a new link which goes at the end of the linked - # list, and the inherited dictionary is updated with the new key/value pair. + # Setting a new item creates a new link at the end of the linked list, + # and the inherited dictionary is updated with the new key/value pair. if key not in self: root = self.__root last = root[PREV] @@ -85,65 +62,160 @@ def __delitem__(self, key, PREV=0, NEXT=1, dict_delitem=dict.__delitem__): 'od.__delitem__(y) <==> del od[y]' - # Deleting an existing item uses self.__map to find the link which is - # then removed by updating the links in the predecessor and successor nodes. + # Deleting an existing item uses self.__map to find the link which gets + # removed by updating the links in the predecessor and successor nodes. dict_delitem(self, key) - link = self.__map.pop(key) - link_prev = link[PREV] - link_next = link[NEXT] + link_prev, link_next, key = self.__map.pop(key) link_prev[NEXT] = link_next link_next[PREV] = link_prev - def __iter__(self, NEXT=1, KEY=2): + def __iter__(self): 'od.__iter__() <==> iter(od)' # Traverse the linked list in order. + NEXT, KEY = 1, 2 root = self.__root curr = root[NEXT] while curr is not root: yield curr[KEY] curr = curr[NEXT] - def __reversed__(self, PREV=0, KEY=2): + def __reversed__(self): 'od.__reversed__() <==> reversed(od)' # Traverse the linked list in reverse order. + PREV, KEY = 0, 2 root = self.__root curr = root[PREV] while curr is not root: yield curr[KEY] curr = curr[PREV] + def clear(self): + 'od.clear() -> None. Remove all items from od.' + for node in self.__map.itervalues(): + del node[:] + root = self.__root + root[:] = [root, root, None] + self.__map.clear() + dict.clear(self) + + # -- the following methods do not depend on the internal structure -- + + def keys(self): + 'od.keys() -> list of keys in od' + return list(self) + + def values(self): + 'od.values() -> list of values in od' + return [self[key] for key in self] + + def items(self): + 'od.items() -> list of (key, value) pairs in od' + return [(key, self[key]) for key in self] + + def iterkeys(self): + 'od.iterkeys() -> an iterator over the keys in od' + return iter(self) + + def itervalues(self): + 'od.itervalues -> an iterator over the values in od' + for k in self: + yield self[k] + + def iteritems(self): + 'od.iteritems -> an iterator over the (key, value) pairs in od' + for k in self: + yield (k, self[k]) + + update = MutableMapping.update + + __update = update # let subclasses override update without breaking __init__ + + __marker = object() + + def pop(self, key, default=__marker): + '''od.pop(k[,d]) -> v, remove specified key and return the corresponding + value. If key is not found, d is returned if given, otherwise KeyError + is raised. 
+ + ''' + if key in self: + result = self[key] + del self[key] + return result + if default is self.__marker: + raise KeyError(key) + return default + + def setdefault(self, key, default=None): + 'od.setdefault(k[,d]) -> od.get(k,d), also set od[k]=d if k not in od' + if key in self: + return self[key] + self[key] = default + return default + + def popitem(self, last=True): + '''od.popitem() -> (k, v), return and remove a (key, value) pair. + Pairs are returned in LIFO order if last is true or FIFO order if false. + + ''' + if not self: + raise KeyError('dictionary is empty') + key = next(reversed(self) if last else iter(self)) + value = self.pop(key) + return key, value + + def __repr__(self, _repr_running={}): + 'od.__repr__() <==> repr(od)' + call_key = id(self), _get_ident() + if call_key in _repr_running: + return '...' + _repr_running[call_key] = 1 + try: + if not self: + return '%s()' % (self.__class__.__name__,) + return '%s(%r)' % (self.__class__.__name__, self.items()) + finally: + del _repr_running[call_key] + def __reduce__(self): 'Return state information for pickling' items = [[k, self[k]] for k in self] - tmp = self.__map, self.__root - del self.__map, self.__root inst_dict = vars(self).copy() - self.__map, self.__root = tmp + for k in vars(OrderedDict()): + inst_dict.pop(k, None) if inst_dict: return (self.__class__, (items,), inst_dict) return self.__class__, (items,) - def clear(self): - 'od.clear() -> None. Remove all items from od.' - try: - for node in self.__map.itervalues(): - del node[:] - self.__root[:] = [self.__root, self.__root, None] - self.__map.clear() - except AttributeError: - pass - dict.clear(self) + def copy(self): + 'od.copy() -> a shallow copy of od' + return self.__class__(self) - setdefault = MutableMapping.setdefault - update = MutableMapping.update - pop = MutableMapping.pop - keys = MutableMapping.keys - values = MutableMapping.values - items = MutableMapping.items - iterkeys = MutableMapping.iterkeys - itervalues = MutableMapping.itervalues - iteritems = MutableMapping.iteritems - __ne__ = MutableMapping.__ne__ + @classmethod + def fromkeys(cls, iterable, value=None): + '''OD.fromkeys(S[, v]) -> New ordered dictionary with keys from S. + If not specified, the value defaults to None. + + ''' + self = cls() + for key in iterable: + self[key] = value + return self + + def __eq__(self, other): + '''od.__eq__(y) <==> od==y. Comparison to another OD is order-sensitive + while comparison to a regular mapping is order-insensitive. + + ''' + if isinstance(other, OrderedDict): + return len(self)==len(other) and self.items() == other.items() + return dict.__eq__(self, other) + + def __ne__(self, other): + 'od.__ne__(y) <==> od!=y' + return not self == other + + # -- the following methods support python 3.x style dictionary views -- def viewkeys(self): "od.viewkeys() -> a set-like object providing a view on od's keys" @@ -157,49 +229,6 @@ "od.viewitems() -> a set-like object providing a view on od's items" return ItemsView(self) - def popitem(self, last=True): - '''od.popitem() -> (k, v), return and remove a (key, value) pair. - Pairs are returned in LIFO order if last is true or FIFO order if false. 
- - ''' - if not self: - raise KeyError('dictionary is empty') - key = next(reversed(self) if last else iter(self)) - value = self.pop(key) - return key, value - - @_recursive_repr - def __repr__(self): - 'od.__repr__() <==> repr(od)' - if not self: - return '%s()' % (self.__class__.__name__,) - return '%s(%r)' % (self.__class__.__name__, self.items()) - - def copy(self): - 'od.copy() -> a shallow copy of od' - return self.__class__(self) - - @classmethod - def fromkeys(cls, iterable, value=None): - '''OD.fromkeys(S[, v]) -> New ordered dictionary with keys from S - and values equal to v (which defaults to None). - - ''' - d = cls() - for key in iterable: - d[key] = value - return d - - def __eq__(self, other): - '''od.__eq__(y) <==> od==y. Comparison to another OD is order-sensitive - while comparison to a regular mapping is order-insensitive. - - ''' - if isinstance(other, OrderedDict): - return len(self)==len(other) and \ - all(_imap(_eq, self.iteritems(), other.iteritems())) - return dict.__eq__(self, other) - ################################################################################ ### namedtuple @@ -328,16 +357,16 @@ or multiset. Elements are stored as dictionary keys and their counts are stored as dictionary values. - >>> c = Counter('abracadabra') # count elements from a string + >>> c = Counter('abcdeabcdabcaba') # count elements from a string >>> c.most_common(3) # three most common elements - [('a', 5), ('r', 2), ('b', 2)] + [('a', 5), ('b', 4), ('c', 3)] >>> sorted(c) # list all unique elements - ['a', 'b', 'c', 'd', 'r'] + ['a', 'b', 'c', 'd', 'e'] >>> ''.join(sorted(c.elements())) # list elements with repetitions - 'aaaaabbcdrr' + 'aaaaabbbbcccdde' >>> sum(c.values()) # total of all counts - 11 + 15 >>> c['a'] # count of letter 'a' 5 @@ -345,8 +374,8 @@ ... c[elem] += 1 # by adding 1 to each element's count >>> c['a'] # now there are seven 'a' 7 - >>> del c['r'] # remove all 'r' - >>> c['r'] # now there are zero 'r' + >>> del c['b'] # remove all 'b' + >>> c['b'] # now there are zero 'b' 0 >>> d = Counter('simsalabim') # make another counter @@ -385,6 +414,7 @@ >>> c = Counter(a=4, b=2) # a new counter from keyword args ''' + super(Counter, self).__init__() self.update(iterable, **kwds) def __missing__(self, key): @@ -396,8 +426,8 @@ '''List the n most common elements and their counts from the most common to the least. If n is None, then list all element counts. - >>> Counter('abracadabra').most_common(3) - [('a', 5), ('r', 2), ('b', 2)] + >>> Counter('abcdeabcdabcaba').most_common(3) + [('a', 5), ('b', 4), ('c', 3)] ''' # Emulate Bag.sortedByCount from Smalltalk @@ -463,7 +493,7 @@ for elem, count in iterable.iteritems(): self[elem] = self_get(elem, 0) + count else: - dict.update(self, iterable) # fast path when counter is empty + super(Counter, self).update(iterable) # fast path when counter is empty else: self_get = self.get for elem in iterable: @@ -499,13 +529,16 @@ self.subtract(kwds) def copy(self): - 'Like dict.copy() but returns a Counter instance instead of a dict.' - return Counter(self) + 'Return a shallow copy.' + return self.__class__(self) + + def __reduce__(self): + return self.__class__, (dict(self),) def __delitem__(self, elem): 'Like dict.__delitem__() but does not raise KeyError for missing values.' 
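(Illustration, not part of the changeset: the OrderedDict rewrite above keeps its own doubly linked list and drops the MutableMapping-derived helpers, but the observable behaviour stays the same: insertion-ordered iteration, LIFO/FIFO popitem(), and equality that is order-sensitive only against another OrderedDict.)

    from collections import OrderedDict

    od = OrderedDict([('one', 1), ('two', 2), ('three', 3)])
    assert od.keys() == ['one', 'two', 'three']   # insertion order preserved
    assert od.popitem() == ('three', 3)           # LIFO by default
    assert od.popitem(last=False) == ('one', 1)   # FIFO when last=False

    a = OrderedDict([('x', 1), ('y', 2)])
    b = OrderedDict([('y', 2), ('x', 1)])
    assert a != b        # order matters between two OrderedDicts
    assert a == dict(b)  # but not against a plain dict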
if elem in self: - dict.__delitem__(self, elem) + super(Counter, self).__delitem__(elem) def __repr__(self): if not self: @@ -532,10 +565,13 @@ if not isinstance(other, Counter): return NotImplemented result = Counter() - for elem in set(self) | set(other): - newcount = self[elem] + other[elem] + for elem, count in self.items(): + newcount = count + other[elem] if newcount > 0: result[elem] = newcount + for elem, count in other.items(): + if elem not in self and count > 0: + result[elem] = count return result def __sub__(self, other): @@ -548,10 +584,13 @@ if not isinstance(other, Counter): return NotImplemented result = Counter() - for elem in set(self) | set(other): - newcount = self[elem] - other[elem] + for elem, count in self.items(): + newcount = count - other[elem] if newcount > 0: result[elem] = newcount + for elem, count in other.items(): + if elem not in self and count < 0: + result[elem] = 0 - count return result def __or__(self, other): @@ -564,11 +603,14 @@ if not isinstance(other, Counter): return NotImplemented result = Counter() - for elem in set(self) | set(other): - p, q = self[elem], other[elem] - newcount = q if p < q else p + for elem, count in self.items(): + other_count = other[elem] + newcount = other_count if count < other_count else count if newcount > 0: result[elem] = newcount + for elem, count in other.items(): + if elem not in self and count > 0: + result[elem] = count return result def __and__(self, other): @@ -581,11 +623,9 @@ if not isinstance(other, Counter): return NotImplemented result = Counter() - if len(self) < len(other): - self, other = other, self - for elem in _ifilter(self.__contains__, other): - p, q = self[elem], other[elem] - newcount = p if p < q else q + for elem, count in self.items(): + other_count = other[elem] + newcount = count if count < other_count else other_count if newcount > 0: result[elem] = newcount return result diff --git a/lib-python/2.7/compileall.py b/lib-python/2.7/compileall.py --- a/lib-python/2.7/compileall.py +++ b/lib-python/2.7/compileall.py @@ -9,7 +9,6 @@ packages -- for now, you'll have to deal with packages separately.) See module py_compile for details of the actual byte-compilation. - """ import os import sys @@ -31,7 +30,6 @@ directory name that will show up in error messages) force: if 1, force compilation, even if timestamps are up-to-date quiet: if 1, be quiet during compilation - """ if not quiet: print 'Listing', dir, '...' @@ -61,15 +59,16 @@ return success def compile_file(fullname, ddir=None, force=0, rx=None, quiet=0): - """Byte-compile file. - file: the file to byte-compile + """Byte-compile one file. + + Arguments (only fullname is required): + + fullname: the file to byte-compile ddir: if given, purported directory name (this is the directory name that will show up in error messages) force: if 1, force compilation, even if timestamps are up-to-date quiet: if 1, be quiet during compilation - """ - success = 1 name = os.path.basename(fullname) if ddir is not None: @@ -120,7 +119,6 @@ maxlevels: max recursion level (default 0) force: as for compile_dir() (default 0) quiet: as for compile_dir() (default 0) - """ success = 1 for dir in sys.path: diff --git a/lib-python/2.7/csv.py b/lib-python/2.7/csv.py --- a/lib-python/2.7/csv.py +++ b/lib-python/2.7/csv.py @@ -281,7 +281,7 @@ an all or nothing approach, so we allow for small variations in this number. 1) build a table of the frequency of each character on every line. 
- 2) build a table of freqencies of this frequency (meta-frequency?), + 2) build a table of frequencies of this frequency (meta-frequency?), e.g. 'x occurred 5 times in 10 rows, 6 times in 1000 rows, 7 times in 2 rows' 3) use the mode of the meta-frequency to determine the /expected/ diff --git a/lib-python/2.7/ctypes/test/test_arrays.py b/lib-python/2.7/ctypes/test/test_arrays.py --- a/lib-python/2.7/ctypes/test/test_arrays.py +++ b/lib-python/2.7/ctypes/test/test_arrays.py @@ -37,7 +37,7 @@ values = [ia[i] for i in range(len(init))] self.assertEqual(values, [0] * len(init)) - # Too many in itializers should be caught + # Too many initializers should be caught self.assertRaises(IndexError, int_array, *range(alen*2)) CharArray = ARRAY(c_char, 3) diff --git a/lib-python/2.7/ctypes/test/test_as_parameter.py b/lib-python/2.7/ctypes/test/test_as_parameter.py --- a/lib-python/2.7/ctypes/test/test_as_parameter.py +++ b/lib-python/2.7/ctypes/test/test_as_parameter.py @@ -187,6 +187,18 @@ self.assertEqual((s8i.a, s8i.b, s8i.c, s8i.d, s8i.e, s8i.f, s8i.g, s8i.h), (9*2, 8*3, 7*4, 6*5, 5*6, 4*7, 3*8, 2*9)) + def test_recursive_as_param(self): + from ctypes import c_int + + class A(object): + pass + + a = A() + a._as_parameter_ = a + with self.assertRaises(RuntimeError): + c_int.from_param(a) + + #~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ class AsParamWrapper(object): diff --git a/lib-python/2.7/ctypes/test/test_callbacks.py b/lib-python/2.7/ctypes/test/test_callbacks.py --- a/lib-python/2.7/ctypes/test/test_callbacks.py +++ b/lib-python/2.7/ctypes/test/test_callbacks.py @@ -206,6 +206,42 @@ windll.user32.EnumWindows(EnumWindowsCallbackFunc, 0) + def test_callback_register_int(self): + # Issue #8275: buggy handling of callback args under Win64 + # NOTE: should be run on release builds as well + dll = CDLL(_ctypes_test.__file__) + CALLBACK = CFUNCTYPE(c_int, c_int, c_int, c_int, c_int, c_int) + # All this function does is call the callback with its args squared + func = dll._testfunc_cbk_reg_int + func.argtypes = (c_int, c_int, c_int, c_int, c_int, CALLBACK) + func.restype = c_int + + def callback(a, b, c, d, e): + return a + b + c + d + e + + result = func(2, 3, 4, 5, 6, CALLBACK(callback)) + self.assertEqual(result, callback(2*2, 3*3, 4*4, 5*5, 6*6)) + + def test_callback_register_double(self): + # Issue #8275: buggy handling of callback args under Win64 + # NOTE: should be run on release builds as well + dll = CDLL(_ctypes_test.__file__) + CALLBACK = CFUNCTYPE(c_double, c_double, c_double, c_double, + c_double, c_double) + # All this function does is call the callback with its args squared + func = dll._testfunc_cbk_reg_double + func.argtypes = (c_double, c_double, c_double, + c_double, c_double, CALLBACK) + func.restype = c_double + + def callback(a, b, c, d, e): + return a + b + c + d + e + + result = func(1.1, 2.2, 3.3, 4.4, 5.5, CALLBACK(callback)) + self.assertEqual(result, + callback(1.1*1.1, 2.2*2.2, 3.3*3.3, 4.4*4.4, 5.5*5.5)) + + ################################################################ if __name__ == '__main__': diff --git a/lib-python/2.7/ctypes/test/test_functions.py b/lib-python/2.7/ctypes/test/test_functions.py --- a/lib-python/2.7/ctypes/test/test_functions.py +++ b/lib-python/2.7/ctypes/test/test_functions.py @@ -116,7 +116,7 @@ self.assertEqual(result, 21) self.assertEqual(type(result), int) - # You cannot assing character format codes as restype any longer + # You cannot assign character format codes as restype any longer self.assertRaises(TypeError, setattr, f, 
"restype", "i") def test_floatresult(self): diff --git a/lib-python/2.7/ctypes/test/test_init.py b/lib-python/2.7/ctypes/test/test_init.py --- a/lib-python/2.7/ctypes/test/test_init.py +++ b/lib-python/2.7/ctypes/test/test_init.py @@ -27,7 +27,7 @@ self.assertEqual((y.x.a, y.x.b), (0, 0)) self.assertEqual(y.x.new_was_called, False) - # But explicitely creating an X structure calls __new__ and __init__, of course. + # But explicitly creating an X structure calls __new__ and __init__, of course. x = X() self.assertEqual((x.a, x.b), (9, 12)) self.assertEqual(x.new_was_called, True) diff --git a/lib-python/2.7/ctypes/test/test_numbers.py b/lib-python/2.7/ctypes/test/test_numbers.py --- a/lib-python/2.7/ctypes/test/test_numbers.py +++ b/lib-python/2.7/ctypes/test/test_numbers.py @@ -157,7 +157,7 @@ def test_int_from_address(self): from array import array for t in signed_types + unsigned_types: - # the array module doesn't suppport all format codes + # the array module doesn't support all format codes # (no 'q' or 'Q') try: array(t._type_) diff --git a/lib-python/2.7/ctypes/test/test_win32.py b/lib-python/2.7/ctypes/test/test_win32.py --- a/lib-python/2.7/ctypes/test/test_win32.py +++ b/lib-python/2.7/ctypes/test/test_win32.py @@ -17,7 +17,7 @@ # ValueError: Procedure probably called with not enough arguments (4 bytes missing) self.assertRaises(ValueError, IsWindow) - # This one should succeeed... + # This one should succeed... self.assertEqual(0, IsWindow(0)) # ValueError: Procedure probably called with too many arguments (8 bytes in excess) diff --git a/lib-python/2.7/curses/wrapper.py b/lib-python/2.7/curses/wrapper.py --- a/lib-python/2.7/curses/wrapper.py +++ b/lib-python/2.7/curses/wrapper.py @@ -43,7 +43,8 @@ return func(stdscr, *args, **kwds) finally: # Set everything back to normal - stdscr.keypad(0) - curses.echo() - curses.nocbreak() - curses.endwin() + if 'stdscr' in locals(): + stdscr.keypad(0) + curses.echo() + curses.nocbreak() + curses.endwin() diff --git a/lib-python/2.7/decimal.py b/lib-python/2.7/decimal.py --- a/lib-python/2.7/decimal.py +++ b/lib-python/2.7/decimal.py @@ -1068,14 +1068,16 @@ if ans: return ans - if not self: - # -Decimal('0') is Decimal('0'), not Decimal('-0') + if context is None: + context = getcontext() + + if not self and context.rounding != ROUND_FLOOR: + # -Decimal('0') is Decimal('0'), not Decimal('-0'), except + # in ROUND_FLOOR rounding mode. ans = self.copy_abs() else: ans = self.copy_negate() - if context is None: - context = getcontext() return ans._fix(context) def __pos__(self, context=None): @@ -1088,14 +1090,15 @@ if ans: return ans - if not self: - # + (-0) = 0 + if context is None: + context = getcontext() + + if not self and context.rounding != ROUND_FLOOR: + # + (-0) = 0, except in ROUND_FLOOR rounding mode. 
ans = self.copy_abs() else: ans = Decimal(self) - if context is None: - context = getcontext() return ans._fix(context) def __abs__(self, round=True, context=None): @@ -1680,7 +1683,7 @@ self = _dec_from_triple(self._sign, '1', exp_min-1) digits = 0 rounding_method = self._pick_rounding_function[context.rounding] - changed = getattr(self, rounding_method)(digits) + changed = rounding_method(self, digits) coeff = self._int[:digits] or '0' if changed > 0: coeff = str(int(coeff)+1) @@ -1720,8 +1723,6 @@ # here self was representable to begin with; return unchanged return Decimal(self) - _pick_rounding_function = {} - # for each of the rounding functions below: # self is a finite, nonzero Decimal # prec is an integer satisfying 0 <= prec < len(self._int) @@ -1788,6 +1789,17 @@ else: return -self._round_down(prec) + _pick_rounding_function = dict( + ROUND_DOWN = _round_down, + ROUND_UP = _round_up, + ROUND_HALF_UP = _round_half_up, + ROUND_HALF_DOWN = _round_half_down, + ROUND_HALF_EVEN = _round_half_even, + ROUND_CEILING = _round_ceiling, + ROUND_FLOOR = _round_floor, + ROUND_05UP = _round_05up, + ) + def fma(self, other, third, context=None): """Fused multiply-add. @@ -2492,8 +2504,8 @@ if digits < 0: self = _dec_from_triple(self._sign, '1', exp-1) digits = 0 - this_function = getattr(self, self._pick_rounding_function[rounding]) - changed = this_function(digits) + this_function = self._pick_rounding_function[rounding] + changed = this_function(self, digits) coeff = self._int[:digits] or '0' if changed == 1: coeff = str(int(coeff)+1) @@ -3705,18 +3717,6 @@ ##### Context class ####################################################### - -# get rounding method function: -rounding_functions = [name for name in Decimal.__dict__.keys() - if name.startswith('_round_')] -for name in rounding_functions: - # name is like _round_half_even, goes to the global ROUND_HALF_EVEN value. - globalname = name[1:].upper() - val = globals()[globalname] - Decimal._pick_rounding_function[val] = name - -del name, val, globalname, rounding_functions - class _ContextManager(object): """Context manager class to support localcontext(). @@ -5990,7 +5990,7 @@ def _format_align(sign, body, spec): """Given an unpadded, non-aligned numeric string 'body' and sign - string 'sign', add padding and aligment conforming to the given + string 'sign', add padding and alignment conforming to the given format specifier dictionary 'spec' (as produced by parse_format_specifier). 
diff --git a/lib-python/2.7/difflib.py b/lib-python/2.7/difflib.py --- a/lib-python/2.7/difflib.py +++ b/lib-python/2.7/difflib.py @@ -1140,6 +1140,21 @@ return ch in ws +######################################################################## +### Unified Diff +######################################################################## + +def _format_range_unified(start, stop): + 'Convert range to the "ed" format' + # Per the diff spec at http://www.unix.org/single_unix_specification/ + beginning = start + 1 # lines start numbering with one + length = stop - start + if length == 1: + return '{}'.format(beginning) + if not length: + beginning -= 1 # empty ranges begin at line just before the range + return '{},{}'.format(beginning, length) + def unified_diff(a, b, fromfile='', tofile='', fromfiledate='', tofiledate='', n=3, lineterm='\n'): r""" @@ -1184,25 +1199,45 @@ started = False for group in SequenceMatcher(None,a,b).get_grouped_opcodes(n): if not started: - fromdate = '\t%s' % fromfiledate if fromfiledate else '' - todate = '\t%s' % tofiledate if tofiledate else '' - yield '--- %s%s%s' % (fromfile, fromdate, lineterm) - yield '+++ %s%s%s' % (tofile, todate, lineterm) started = True - i1, i2, j1, j2 = group[0][1], group[-1][2], group[0][3], group[-1][4] - yield "@@ -%d,%d +%d,%d @@%s" % (i1+1, i2-i1, j1+1, j2-j1, lineterm) + fromdate = '\t{}'.format(fromfiledate) if fromfiledate else '' + todate = '\t{}'.format(tofiledate) if tofiledate else '' + yield '--- {}{}{}'.format(fromfile, fromdate, lineterm) + yield '+++ {}{}{}'.format(tofile, todate, lineterm) + + first, last = group[0], group[-1] + file1_range = _format_range_unified(first[1], last[2]) + file2_range = _format_range_unified(first[3], last[4]) + yield '@@ -{} +{} @@{}'.format(file1_range, file2_range, lineterm) + for tag, i1, i2, j1, j2 in group: if tag == 'equal': for line in a[i1:i2]: yield ' ' + line continue - if tag == 'replace' or tag == 'delete': + if tag in ('replace', 'delete'): for line in a[i1:i2]: yield '-' + line - if tag == 'replace' or tag == 'insert': + if tag in ('replace', 'insert'): for line in b[j1:j2]: yield '+' + line + +######################################################################## +### Context Diff +######################################################################## + +def _format_range_context(start, stop): + 'Convert range to the "ed" format' + # Per the diff spec at http://www.unix.org/single_unix_specification/ + beginning = start + 1 # lines start numbering with one + length = stop - start + if not length: + beginning -= 1 # empty ranges begin at line just before the range + if length <= 1: + return '{}'.format(beginning) + return '{},{}'.format(beginning, beginning + length - 1) + # See http://www.unix.org/single_unix_specification/ def context_diff(a, b, fromfile='', tofile='', fromfiledate='', tofiledate='', n=3, lineterm='\n'): @@ -1247,38 +1282,36 @@ four """ + prefix = dict(insert='+ ', delete='- ', replace='! ', equal=' ') started = False - prefixmap = {'insert':'+ ', 'delete':'- ', 'replace':'! 
', 'equal':' '} for group in SequenceMatcher(None,a,b).get_grouped_opcodes(n): if not started: - fromdate = '\t%s' % fromfiledate if fromfiledate else '' - todate = '\t%s' % tofiledate if tofiledate else '' - yield '*** %s%s%s' % (fromfile, fromdate, lineterm) - yield '--- %s%s%s' % (tofile, todate, lineterm) started = True + fromdate = '\t{}'.format(fromfiledate) if fromfiledate else '' + todate = '\t{}'.format(tofiledate) if tofiledate else '' + yield '*** {}{}{}'.format(fromfile, fromdate, lineterm) + yield '--- {}{}{}'.format(tofile, todate, lineterm) - yield '***************%s' % (lineterm,) - if group[-1][2] - group[0][1] >= 2: - yield '*** %d,%d ****%s' % (group[0][1]+1, group[-1][2], lineterm) - else: - yield '*** %d ****%s' % (group[-1][2], lineterm) - visiblechanges = [e for e in group if e[0] in ('replace', 'delete')] - if visiblechanges: + first, last = group[0], group[-1] + yield '***************' + lineterm + + file1_range = _format_range_context(first[1], last[2]) + yield '*** {} ****{}'.format(file1_range, lineterm) + + if any(tag in ('replace', 'delete') for tag, _, _, _, _ in group): for tag, i1, i2, _, _ in group: if tag != 'insert': for line in a[i1:i2]: - yield prefixmap[tag] + line + yield prefix[tag] + line - if group[-1][4] - group[0][3] >= 2: - yield '--- %d,%d ----%s' % (group[0][3]+1, group[-1][4], lineterm) - else: - yield '--- %d ----%s' % (group[-1][4], lineterm) - visiblechanges = [e for e in group if e[0] in ('replace', 'insert')] - if visiblechanges: + file2_range = _format_range_context(first[3], last[4]) + yield '--- {} ----{}'.format(file2_range, lineterm) + + if any(tag in ('replace', 'insert') for tag, _, _, _, _ in group): for tag, _, _, j1, j2 in group: if tag != 'delete': for line in b[j1:j2]: - yield prefixmap[tag] + line + yield prefix[tag] + line def ndiff(a, b, linejunk=None, charjunk=IS_CHARACTER_JUNK): r""" @@ -1714,7 +1747,7 @@ line = line.replace(' ','\0') # expand tabs into spaces line = line.expandtabs(self._tabsize) - # relace spaces from expanded tabs back into tab characters + # replace spaces from expanded tabs back into tab characters # (we'll replace them with markup after we do differencing) line = line.replace(' ','\t') return line.replace('\0',' ').rstrip('\n') diff --git a/lib-python/2.7/distutils/__init__.py b/lib-python/2.7/distutils/__init__.py --- a/lib-python/2.7/distutils/__init__.py +++ b/lib-python/2.7/distutils/__init__.py @@ -15,5 +15,5 @@ # Updated automatically by the Python release process. # #--start constants-- -__version__ = "2.7.1" +__version__ = "2.7.2" #--end constants-- diff --git a/lib-python/2.7/distutils/archive_util.py b/lib-python/2.7/distutils/archive_util.py --- a/lib-python/2.7/distutils/archive_util.py +++ b/lib-python/2.7/distutils/archive_util.py @@ -121,7 +121,7 @@ def make_zipfile(base_name, base_dir, verbose=0, dry_run=0): """Create a zip file from all the files under 'base_dir'. - The output zip file will be named 'base_dir' + ".zip". Uses either the + The output zip file will be named 'base_name' + ".zip". Uses either the "zipfile" Python module (if available) or the InfoZIP "zip" utility (if installed and found on the default search path). If neither tool is available, raises DistutilsExecError. 
Returns the name of the output zip diff --git a/lib-python/2.7/distutils/cmd.py b/lib-python/2.7/distutils/cmd.py --- a/lib-python/2.7/distutils/cmd.py +++ b/lib-python/2.7/distutils/cmd.py @@ -377,7 +377,7 @@ dry_run=self.dry_run) def move_file (self, src, dst, level=1): - """Move a file respectin dry-run flag.""" + """Move a file respecting dry-run flag.""" return file_util.move_file(src, dst, dry_run = self.dry_run) def spawn (self, cmd, search_path=1, level=1): diff --git a/lib-python/2.7/distutils/command/build_ext.py b/lib-python/2.7/distutils/command/build_ext.py --- a/lib-python/2.7/distutils/command/build_ext.py +++ b/lib-python/2.7/distutils/command/build_ext.py @@ -207,7 +207,7 @@ elif MSVC_VERSION == 8: self.library_dirs.append(os.path.join(sys.exec_prefix, - 'PC', 'VS8.0', 'win32release')) + 'PC', 'VS8.0')) elif MSVC_VERSION == 7: self.library_dirs.append(os.path.join(sys.exec_prefix, 'PC', 'VS7.1')) diff --git a/lib-python/2.7/distutils/command/sdist.py b/lib-python/2.7/distutils/command/sdist.py --- a/lib-python/2.7/distutils/command/sdist.py +++ b/lib-python/2.7/distutils/command/sdist.py @@ -306,17 +306,20 @@ rstrip_ws=1, collapse_join=1) - while 1: - line = template.readline() - if line is None: # end of file - break + try: + while 1: + line = template.readline() + if line is None: # end of file + break - try: - self.filelist.process_template_line(line) - except DistutilsTemplateError, msg: - self.warn("%s, line %d: %s" % (template.filename, - template.current_line, - msg)) + try: + self.filelist.process_template_line(line) + except DistutilsTemplateError, msg: + self.warn("%s, line %d: %s" % (template.filename, + template.current_line, + msg)) + finally: + template.close() def prune_file_list(self): """Prune off branches that might slip into the file list as created diff --git a/lib-python/2.7/distutils/command/upload.py b/lib-python/2.7/distutils/command/upload.py --- a/lib-python/2.7/distutils/command/upload.py +++ b/lib-python/2.7/distutils/command/upload.py @@ -176,6 +176,9 @@ result = urlopen(request) status = result.getcode() reason = result.msg + if self.show_response: + msg = '\n'.join(('-' * 75, r.read(), '-' * 75)) + self.announce(msg, log.INFO) except socket.error, e: self.announce(str(e), log.ERROR) return @@ -189,6 +192,3 @@ else: self.announce('Upload failed (%s): %s' % (status, reason), log.ERROR) - if self.show_response: - msg = '\n'.join(('-' * 75, r.read(), '-' * 75)) - self.announce(msg, log.INFO) diff --git a/lib-python/2.7/distutils/sysconfig.py b/lib-python/2.7/distutils/sysconfig.py --- a/lib-python/2.7/distutils/sysconfig.py +++ b/lib-python/2.7/distutils/sysconfig.py @@ -389,7 +389,7 @@ cur_target = os.getenv('MACOSX_DEPLOYMENT_TARGET', '') if cur_target == '': cur_target = cfg_target - os.putenv('MACOSX_DEPLOYMENT_TARGET', cfg_target) + os.environ['MACOSX_DEPLOYMENT_TARGET'] = cfg_target elif map(int, cfg_target.split('.')) > map(int, cur_target.split('.')): my_msg = ('$MACOSX_DEPLOYMENT_TARGET mismatch: now "%s" but "%s" during configure' % (cur_target, cfg_target)) diff --git a/lib-python/2.7/distutils/tests/__init__.py b/lib-python/2.7/distutils/tests/__init__.py --- a/lib-python/2.7/distutils/tests/__init__.py +++ b/lib-python/2.7/distutils/tests/__init__.py @@ -15,9 +15,10 @@ import os import sys import unittest +from test.test_support import run_unittest -here = os.path.dirname(__file__) +here = os.path.dirname(__file__) or os.curdir def test_suite(): @@ -32,4 +33,4 @@ if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + 
run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_archive_util.py b/lib-python/2.7/distutils/tests/test_archive_util.py --- a/lib-python/2.7/distutils/tests/test_archive_util.py +++ b/lib-python/2.7/distutils/tests/test_archive_util.py @@ -12,7 +12,7 @@ ARCHIVE_FORMATS) from distutils.spawn import find_executable, spawn from distutils.tests import support -from test.test_support import check_warnings +from test.test_support import check_warnings, run_unittest try: import grp @@ -281,4 +281,4 @@ return unittest.makeSuite(ArchiveUtilTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_bdist_msi.py b/lib-python/2.7/distutils/tests/test_bdist_msi.py --- a/lib-python/2.7/distutils/tests/test_bdist_msi.py +++ b/lib-python/2.7/distutils/tests/test_bdist_msi.py @@ -11,7 +11,7 @@ support.LoggingSilencer, unittest.TestCase): - def test_minial(self): + def test_minimal(self): # minimal test XXX need more tests from distutils.command.bdist_msi import bdist_msi pkg_pth, dist = self.create_dist() diff --git a/lib-python/2.7/distutils/tests/test_build.py b/lib-python/2.7/distutils/tests/test_build.py --- a/lib-python/2.7/distutils/tests/test_build.py +++ b/lib-python/2.7/distutils/tests/test_build.py @@ -2,6 +2,7 @@ import unittest import os import sys +from test.test_support import run_unittest from distutils.command.build import build from distutils.tests import support @@ -51,4 +52,4 @@ return unittest.makeSuite(BuildTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_build_clib.py b/lib-python/2.7/distutils/tests/test_build_clib.py --- a/lib-python/2.7/distutils/tests/test_build_clib.py +++ b/lib-python/2.7/distutils/tests/test_build_clib.py @@ -3,6 +3,8 @@ import os import sys +from test.test_support import run_unittest + from distutils.command.build_clib import build_clib from distutils.errors import DistutilsSetupError from distutils.tests import support @@ -140,4 +142,4 @@ return unittest.makeSuite(BuildCLibTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_build_ext.py b/lib-python/2.7/distutils/tests/test_build_ext.py --- a/lib-python/2.7/distutils/tests/test_build_ext.py +++ b/lib-python/2.7/distutils/tests/test_build_ext.py @@ -3,12 +3,13 @@ import tempfile import shutil from StringIO import StringIO +import textwrap from distutils.core import Extension, Distribution from distutils.command.build_ext import build_ext from distutils import sysconfig from distutils.tests import support -from distutils.errors import DistutilsSetupError +from distutils.errors import DistutilsSetupError, CompileError import unittest from test import test_support @@ -430,6 +431,59 @@ wanted = os.path.join(cmd.build_lib, 'UpdateManager', 'fdsend' + ext) self.assertEqual(ext_path, wanted) + @unittest.skipUnless(sys.platform == 'darwin', 'test only relevant for MacOSX') + def test_deployment_target(self): + self._try_compile_deployment_target() + + orig_environ = os.environ + os.environ = orig_environ.copy() + self.addCleanup(setattr, os, 'environ', orig_environ) + + os.environ['MACOSX_DEPLOYMENT_TARGET']='10.1' + self._try_compile_deployment_target() + + + def _try_compile_deployment_target(self): + deptarget_c = os.path.join(self.tmp_dir, 'deptargetmodule.c') + + with 
open(deptarget_c, 'w') as fp: + fp.write(textwrap.dedent('''\ + #include + + int dummy; + + #if TARGET != MAC_OS_X_VERSION_MIN_REQUIRED + #error "Unexpected target" + #endif + + ''')) + + target = sysconfig.get_config_var('MACOSX_DEPLOYMENT_TARGET') + target = tuple(map(int, target.split('.'))) + target = '%02d%01d0' % target + + deptarget_ext = Extension( + 'deptarget', + [deptarget_c], + extra_compile_args=['-DTARGET=%s'%(target,)], + ) + dist = Distribution({ + 'name': 'deptarget', + 'ext_modules': [deptarget_ext] + }) + dist.package_dir = self.tmp_dir + cmd = build_ext(dist) + cmd.build_lib = self.tmp_dir + cmd.build_temp = self.tmp_dir + + try: + old_stdout = sys.stdout + cmd.ensure_finalized() + cmd.run() + + except CompileError: + self.fail("Wrong deployment target during compilation") + def test_suite(): return unittest.makeSuite(BuildExtTestCase) diff --git a/lib-python/2.7/distutils/tests/test_build_py.py b/lib-python/2.7/distutils/tests/test_build_py.py --- a/lib-python/2.7/distutils/tests/test_build_py.py +++ b/lib-python/2.7/distutils/tests/test_build_py.py @@ -10,13 +10,14 @@ from distutils.errors import DistutilsFileError from distutils.tests import support +from test.test_support import run_unittest class BuildPyTestCase(support.TempdirManager, support.LoggingSilencer, unittest.TestCase): - def _setup_package_data(self): + def test_package_data(self): sources = self.mkdtemp() f = open(os.path.join(sources, "__init__.py"), "w") try: @@ -56,20 +57,15 @@ self.assertEqual(len(cmd.get_outputs()), 3) pkgdest = os.path.join(destination, "pkg") files = os.listdir(pkgdest) - return files + self.assertIn("__init__.py", files) + self.assertIn("README.txt", files) + # XXX even with -O, distutils writes pyc, not pyo; bug? + if sys.dont_write_bytecode: + self.assertNotIn("__init__.pyc", files) + else: + self.assertIn("__init__.pyc", files) - def test_package_data(self): - files = self._setup_package_data() - self.assertTrue("__init__.py" in files) - self.assertTrue("README.txt" in files) - - @unittest.skipIf(sys.flags.optimize >= 2, - "pyc files are not written with -O2 and above") - def test_package_data_pyc(self): - files = self._setup_package_data() - self.assertTrue("__init__.pyc" in files) - - def test_empty_package_dir (self): + def test_empty_package_dir(self): # See SF 1668596/1720897. 
cwd = os.getcwd() @@ -117,10 +113,10 @@ finally: sys.dont_write_bytecode = old_dont_write_bytecode - self.assertTrue('byte-compiling is disabled' in self.logs[0][1]) + self.assertIn('byte-compiling is disabled', self.logs[0][1]) def test_suite(): return unittest.makeSuite(BuildPyTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_build_scripts.py b/lib-python/2.7/distutils/tests/test_build_scripts.py --- a/lib-python/2.7/distutils/tests/test_build_scripts.py +++ b/lib-python/2.7/distutils/tests/test_build_scripts.py @@ -8,6 +8,7 @@ import sysconfig from distutils.tests import support +from test.test_support import run_unittest class BuildScriptsTestCase(support.TempdirManager, @@ -108,4 +109,4 @@ return unittest.makeSuite(BuildScriptsTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_check.py b/lib-python/2.7/distutils/tests/test_check.py --- a/lib-python/2.7/distutils/tests/test_check.py +++ b/lib-python/2.7/distutils/tests/test_check.py @@ -1,5 +1,6 @@ """Tests for distutils.command.check.""" import unittest +from test.test_support import run_unittest from distutils.command.check import check, HAS_DOCUTILS from distutils.tests import support @@ -95,4 +96,4 @@ return unittest.makeSuite(CheckTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_clean.py b/lib-python/2.7/distutils/tests/test_clean.py --- a/lib-python/2.7/distutils/tests/test_clean.py +++ b/lib-python/2.7/distutils/tests/test_clean.py @@ -6,6 +6,7 @@ from distutils.command.clean import clean from distutils.tests import support +from test.test_support import run_unittest class cleanTestCase(support.TempdirManager, support.LoggingSilencer, @@ -38,7 +39,7 @@ self.assertTrue(not os.path.exists(path), '%s was not removed' % path) - # let's run the command again (should spit warnings but suceed) + # let's run the command again (should spit warnings but succeed) cmd.all = 1 cmd.ensure_finalized() cmd.run() @@ -47,4 +48,4 @@ return unittest.makeSuite(cleanTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_cmd.py b/lib-python/2.7/distutils/tests/test_cmd.py --- a/lib-python/2.7/distutils/tests/test_cmd.py +++ b/lib-python/2.7/distutils/tests/test_cmd.py @@ -99,7 +99,7 @@ def test_ensure_dirname(self): cmd = self.cmd - cmd.option1 = os.path.dirname(__file__) + cmd.option1 = os.path.dirname(__file__) or os.curdir cmd.ensure_dirname('option1') cmd.option2 = 'xxx' self.assertRaises(DistutilsOptionError, cmd.ensure_dirname, 'option2') diff --git a/lib-python/2.7/distutils/tests/test_config.py b/lib-python/2.7/distutils/tests/test_config.py --- a/lib-python/2.7/distutils/tests/test_config.py +++ b/lib-python/2.7/distutils/tests/test_config.py @@ -11,6 +11,7 @@ from distutils.log import WARN from distutils.tests import support +from test.test_support import run_unittest PYPIRC = """\ [distutils] @@ -119,4 +120,4 @@ return unittest.makeSuite(PyPIRCCommandTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_config_cmd.py b/lib-python/2.7/distutils/tests/test_config_cmd.py --- a/lib-python/2.7/distutils/tests/test_config_cmd.py 
+++ b/lib-python/2.7/distutils/tests/test_config_cmd.py @@ -2,6 +2,7 @@ import unittest import os import sys +from test.test_support import run_unittest from distutils.command.config import dump_file, config from distutils.tests import support @@ -86,4 +87,4 @@ return unittest.makeSuite(ConfigTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_core.py b/lib-python/2.7/distutils/tests/test_core.py --- a/lib-python/2.7/distutils/tests/test_core.py +++ b/lib-python/2.7/distutils/tests/test_core.py @@ -6,7 +6,7 @@ import shutil import sys import test.test_support -from test.test_support import captured_stdout +from test.test_support import captured_stdout, run_unittest import unittest from distutils.tests import support @@ -105,4 +105,4 @@ return unittest.makeSuite(CoreTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_dep_util.py b/lib-python/2.7/distutils/tests/test_dep_util.py --- a/lib-python/2.7/distutils/tests/test_dep_util.py +++ b/lib-python/2.7/distutils/tests/test_dep_util.py @@ -6,6 +6,7 @@ from distutils.dep_util import newer, newer_pairwise, newer_group from distutils.errors import DistutilsFileError from distutils.tests import support +from test.test_support import run_unittest class DepUtilTestCase(support.TempdirManager, unittest.TestCase): @@ -77,4 +78,4 @@ return unittest.makeSuite(DepUtilTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_dir_util.py b/lib-python/2.7/distutils/tests/test_dir_util.py --- a/lib-python/2.7/distutils/tests/test_dir_util.py +++ b/lib-python/2.7/distutils/tests/test_dir_util.py @@ -10,6 +10,7 @@ from distutils import log from distutils.tests import support +from test.test_support import run_unittest class DirUtilTestCase(support.TempdirManager, unittest.TestCase): @@ -112,4 +113,4 @@ return unittest.makeSuite(DirUtilTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_dist.py b/lib-python/2.7/distutils/tests/test_dist.py --- a/lib-python/2.7/distutils/tests/test_dist.py +++ b/lib-python/2.7/distutils/tests/test_dist.py @@ -11,7 +11,7 @@ from distutils.dist import Distribution, fix_help_options, DistributionMetadata from distutils.cmd import Command import distutils.dist -from test.test_support import TESTFN, captured_stdout +from test.test_support import TESTFN, captured_stdout, run_unittest from distutils.tests import support class test_dist(Command): @@ -433,4 +433,4 @@ return suite if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_file_util.py b/lib-python/2.7/distutils/tests/test_file_util.py --- a/lib-python/2.7/distutils/tests/test_file_util.py +++ b/lib-python/2.7/distutils/tests/test_file_util.py @@ -6,6 +6,7 @@ from distutils.file_util import move_file, write_file, copy_file from distutils import log from distutils.tests import support +from test.test_support import run_unittest class FileUtilTestCase(support.TempdirManager, unittest.TestCase): @@ -77,4 +78,4 @@ return unittest.makeSuite(FileUtilTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git 
a/lib-python/2.7/distutils/tests/test_filelist.py b/lib-python/2.7/distutils/tests/test_filelist.py --- a/lib-python/2.7/distutils/tests/test_filelist.py +++ b/lib-python/2.7/distutils/tests/test_filelist.py @@ -1,7 +1,7 @@ """Tests for distutils.filelist.""" from os.path import join import unittest -from test.test_support import captured_stdout +from test.test_support import captured_stdout, run_unittest from distutils.filelist import glob_to_re, FileList from distutils import debug @@ -82,4 +82,4 @@ return unittest.makeSuite(FileListTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_install.py b/lib-python/2.7/distutils/tests/test_install.py --- a/lib-python/2.7/distutils/tests/test_install.py +++ b/lib-python/2.7/distutils/tests/test_install.py @@ -3,6 +3,8 @@ import os import unittest +from test.test_support import run_unittest + from distutils.command.install import install from distutils.core import Distribution @@ -52,4 +54,4 @@ return unittest.makeSuite(InstallTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_install_data.py b/lib-python/2.7/distutils/tests/test_install_data.py --- a/lib-python/2.7/distutils/tests/test_install_data.py +++ b/lib-python/2.7/distutils/tests/test_install_data.py @@ -6,6 +6,7 @@ from distutils.command.install_data import install_data from distutils.tests import support +from test.test_support import run_unittest class InstallDataTestCase(support.TempdirManager, support.LoggingSilencer, @@ -73,4 +74,4 @@ return unittest.makeSuite(InstallDataTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_install_headers.py b/lib-python/2.7/distutils/tests/test_install_headers.py --- a/lib-python/2.7/distutils/tests/test_install_headers.py +++ b/lib-python/2.7/distutils/tests/test_install_headers.py @@ -6,6 +6,7 @@ from distutils.command.install_headers import install_headers from distutils.tests import support +from test.test_support import run_unittest class InstallHeadersTestCase(support.TempdirManager, support.LoggingSilencer, @@ -37,4 +38,4 @@ return unittest.makeSuite(InstallHeadersTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_install_lib.py b/lib-python/2.7/distutils/tests/test_install_lib.py --- a/lib-python/2.7/distutils/tests/test_install_lib.py +++ b/lib-python/2.7/distutils/tests/test_install_lib.py @@ -7,6 +7,7 @@ from distutils.extension import Extension from distutils.tests import support from distutils.errors import DistutilsOptionError +from test.test_support import run_unittest class InstallLibTestCase(support.TempdirManager, support.LoggingSilencer, @@ -103,4 +104,4 @@ return unittest.makeSuite(InstallLibTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_install_scripts.py b/lib-python/2.7/distutils/tests/test_install_scripts.py --- a/lib-python/2.7/distutils/tests/test_install_scripts.py +++ b/lib-python/2.7/distutils/tests/test_install_scripts.py @@ -7,6 +7,7 @@ from distutils.core import Distribution from distutils.tests import support +from test.test_support import run_unittest class 
InstallScriptsTestCase(support.TempdirManager, @@ -78,4 +79,4 @@ return unittest.makeSuite(InstallScriptsTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_msvc9compiler.py b/lib-python/2.7/distutils/tests/test_msvc9compiler.py --- a/lib-python/2.7/distutils/tests/test_msvc9compiler.py +++ b/lib-python/2.7/distutils/tests/test_msvc9compiler.py @@ -5,6 +5,7 @@ from distutils.errors import DistutilsPlatformError from distutils.tests import support +from test.test_support import run_unittest _MANIFEST = """\ @@ -137,4 +138,4 @@ return unittest.makeSuite(msvc9compilerTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_register.py b/lib-python/2.7/distutils/tests/test_register.py --- a/lib-python/2.7/distutils/tests/test_register.py +++ b/lib-python/2.7/distutils/tests/test_register.py @@ -7,7 +7,7 @@ import urllib2 import warnings -from test.test_support import check_warnings +from test.test_support import check_warnings, run_unittest from distutils.command import register as register_module from distutils.command.register import register @@ -138,7 +138,7 @@ # let's see what the server received : we should # have 2 similar requests - self.assertTrue(self.conn.reqs, 2) + self.assertEqual(len(self.conn.reqs), 2) req1 = dict(self.conn.reqs[0].headers) req2 = dict(self.conn.reqs[1].headers) self.assertEqual(req2['Content-length'], req1['Content-length']) @@ -168,7 +168,7 @@ del register_module.raw_input # we should have send a request - self.assertTrue(self.conn.reqs, 1) + self.assertEqual(len(self.conn.reqs), 1) req = self.conn.reqs[0] headers = dict(req.headers) self.assertEqual(headers['Content-length'], '608') @@ -186,7 +186,7 @@ del register_module.raw_input # we should have send a request - self.assertTrue(self.conn.reqs, 1) + self.assertEqual(len(self.conn.reqs), 1) req = self.conn.reqs[0] headers = dict(req.headers) self.assertEqual(headers['Content-length'], '290') @@ -258,4 +258,4 @@ return unittest.makeSuite(RegisterTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_sdist.py b/lib-python/2.7/distutils/tests/test_sdist.py --- a/lib-python/2.7/distutils/tests/test_sdist.py +++ b/lib-python/2.7/distutils/tests/test_sdist.py @@ -24,11 +24,9 @@ import tempfile import warnings -from test.test_support import check_warnings -from test.test_support import captured_stdout +from test.test_support import captured_stdout, check_warnings, run_unittest -from distutils.command.sdist import sdist -from distutils.command.sdist import show_formats +from distutils.command.sdist import sdist, show_formats from distutils.core import Distribution from distutils.tests.test_config import PyPIRCCommandTestCase from distutils.errors import DistutilsExecError, DistutilsOptionError @@ -372,7 +370,7 @@ # adding a file self.write_file((self.tmp_dir, 'somecode', 'doc2.txt'), '#') - # make sure build_py is reinitinialized, like a fresh run + # make sure build_py is reinitialized, like a fresh run build_py = dist.get_command_obj('build_py') build_py.finalized = False build_py.ensure_finalized() @@ -390,6 +388,7 @@ self.assertEqual(len(manifest2), 6) self.assertIn('doc2.txt', manifest2[-1]) + @unittest.skipUnless(zlib, "requires zlib") def test_manifest_marker(self): # check that autogenerated MANIFESTs have a 
marker dist, cmd = self.get_cmd() @@ -406,6 +405,7 @@ self.assertEqual(manifest[0], '# file GENERATED by distutils, do NOT edit') + @unittest.skipUnless(zlib, "requires zlib") def test_manual_manifest(self): # check that a MANIFEST without a marker is left alone dist, cmd = self.get_cmd() @@ -426,4 +426,4 @@ return unittest.makeSuite(SDistTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_spawn.py b/lib-python/2.7/distutils/tests/test_spawn.py --- a/lib-python/2.7/distutils/tests/test_spawn.py +++ b/lib-python/2.7/distutils/tests/test_spawn.py @@ -2,7 +2,7 @@ import unittest import os import time -from test.test_support import captured_stdout +from test.test_support import captured_stdout, run_unittest from distutils.spawn import _nt_quote_args from distutils.spawn import spawn, find_executable @@ -57,4 +57,4 @@ return unittest.makeSuite(SpawnTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_text_file.py b/lib-python/2.7/distutils/tests/test_text_file.py --- a/lib-python/2.7/distutils/tests/test_text_file.py +++ b/lib-python/2.7/distutils/tests/test_text_file.py @@ -3,6 +3,7 @@ import unittest from distutils.text_file import TextFile from distutils.tests import support +from test.test_support import run_unittest TEST_DATA = """# test file @@ -103,4 +104,4 @@ return unittest.makeSuite(TextFileTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_unixccompiler.py b/lib-python/2.7/distutils/tests/test_unixccompiler.py --- a/lib-python/2.7/distutils/tests/test_unixccompiler.py +++ b/lib-python/2.7/distutils/tests/test_unixccompiler.py @@ -1,6 +1,7 @@ """Tests for distutils.unixccompiler.""" import sys import unittest +from test.test_support import run_unittest from distutils import sysconfig from distutils.unixccompiler import UnixCCompiler @@ -126,4 +127,4 @@ return unittest.makeSuite(UnixCCompilerTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_upload.py b/lib-python/2.7/distutils/tests/test_upload.py --- a/lib-python/2.7/distutils/tests/test_upload.py +++ b/lib-python/2.7/distutils/tests/test_upload.py @@ -1,14 +1,13 @@ +# -*- encoding: utf8 -*- """Tests for distutils.command.upload.""" -# -*- encoding: utf8 -*- -import sys import os import unittest +from test.test_support import run_unittest from distutils.command import upload as upload_mod from distutils.command.upload import upload from distutils.core import Distribution -from distutils.tests import support from distutils.tests.test_config import PYPIRC, PyPIRCCommandTestCase PYPIRC_LONG_PASSWORD = """\ @@ -129,4 +128,4 @@ return unittest.makeSuite(uploadTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_util.py b/lib-python/2.7/distutils/tests/test_util.py --- a/lib-python/2.7/distutils/tests/test_util.py +++ b/lib-python/2.7/distutils/tests/test_util.py @@ -1,6 +1,7 @@ """Tests for distutils.util.""" import sys import unittest +from test.test_support import run_unittest from distutils.errors import DistutilsPlatformError, DistutilsByteCompileError from distutils.util import byte_compile @@ -21,4 +22,4 @@ return 
unittest.makeSuite(UtilTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_version.py b/lib-python/2.7/distutils/tests/test_version.py --- a/lib-python/2.7/distutils/tests/test_version.py +++ b/lib-python/2.7/distutils/tests/test_version.py @@ -2,6 +2,7 @@ import unittest from distutils.version import LooseVersion from distutils.version import StrictVersion +from test.test_support import run_unittest class VersionTestCase(unittest.TestCase): @@ -67,4 +68,4 @@ return unittest.makeSuite(VersionTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_versionpredicate.py b/lib-python/2.7/distutils/tests/test_versionpredicate.py --- a/lib-python/2.7/distutils/tests/test_versionpredicate.py +++ b/lib-python/2.7/distutils/tests/test_versionpredicate.py @@ -4,6 +4,10 @@ import distutils.versionpredicate import doctest +from test.test_support import run_unittest def test_suite(): return doctest.DocTestSuite(distutils.versionpredicate) + +if __name__ == '__main__': + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/util.py b/lib-python/2.7/distutils/util.py --- a/lib-python/2.7/distutils/util.py +++ b/lib-python/2.7/distutils/util.py @@ -97,9 +97,7 @@ from distutils.sysconfig import get_config_vars cfgvars = get_config_vars() - macver = os.environ.get('MACOSX_DEPLOYMENT_TARGET') - if not macver: - macver = cfgvars.get('MACOSX_DEPLOYMENT_TARGET') + macver = cfgvars.get('MACOSX_DEPLOYMENT_TARGET') if 1: # Always calculate the release of the running machine, diff --git a/lib-python/2.7/doctest.py b/lib-python/2.7/doctest.py --- a/lib-python/2.7/doctest.py +++ b/lib-python/2.7/doctest.py @@ -1217,7 +1217,7 @@ # Process each example. for examplenum, example in enumerate(test.examples): - # If REPORT_ONLY_FIRST_FAILURE is set, then supress + # If REPORT_ONLY_FIRST_FAILURE is set, then suppress # reporting after the first failure. quiet = (self.optionflags & REPORT_ONLY_FIRST_FAILURE and failures > 0) @@ -2186,7 +2186,7 @@ caller can catch the errors and initiate post-mortem debugging. The DocTestCase provides a debug method that raises - UnexpectedException errors if there is an unexepcted + UnexpectedException errors if there is an unexpected exception: >>> test = DocTestParser().get_doctest('>>> raise KeyError\n42', diff --git a/lib-python/2.7/email/charset.py b/lib-python/2.7/email/charset.py --- a/lib-python/2.7/email/charset.py +++ b/lib-python/2.7/email/charset.py @@ -209,7 +209,7 @@ input_charset = unicode(input_charset, 'ascii') except UnicodeError: raise errors.CharsetError(input_charset) - input_charset = input_charset.lower() + input_charset = input_charset.lower().encode('ascii') # Set the input charset after filtering through the aliases and/or codecs if not (input_charset in ALIASES or input_charset in CHARSETS): try: diff --git a/lib-python/2.7/email/generator.py b/lib-python/2.7/email/generator.py --- a/lib-python/2.7/email/generator.py +++ b/lib-python/2.7/email/generator.py @@ -202,18 +202,13 @@ g = self.clone(s) g.flatten(part, unixfrom=False) msgtexts.append(s.getvalue()) - # Now make sure the boundary we've selected doesn't appear in any of - # the message texts. - alltext = NL.join(msgtexts) # BAW: What about boundaries that are wrapped in double-quotes? 
- boundary = msg.get_boundary(failobj=_make_boundary(alltext)) - # If we had to calculate a new boundary because the body text - # contained that string, set the new boundary. We don't do it - # unconditionally because, while set_boundary() preserves order, it - # doesn't preserve newlines/continuations in headers. This is no big - # deal in practice, but turns out to be inconvenient for the unittest - # suite. - if msg.get_boundary() != boundary: + boundary = msg.get_boundary() + if not boundary: + # Create a boundary that doesn't appear in any of the + # message texts. + alltext = NL.join(msgtexts) + boundary = _make_boundary(alltext) msg.set_boundary(boundary) # If there's a preamble, write it out, with a trailing CRLF if msg.preamble is not None: @@ -292,7 +287,7 @@ _FMT = '[Non-text (%(type)s) part of message omitted, filename %(filename)s]' class DecodedGenerator(Generator): - """Generator a text representation of a message. + """Generates a text representation of a message. Like the Generator base class, except that non-text parts are substituted with a format string representing the part. diff --git a/lib-python/2.7/email/header.py b/lib-python/2.7/email/header.py --- a/lib-python/2.7/email/header.py +++ b/lib-python/2.7/email/header.py @@ -47,6 +47,10 @@ # For use with .match() fcre = re.compile(r'[\041-\176]+:$') +# Find a header embedded in a putative header value. Used to check for +# header injection attack. +_embeded_header = re.compile(r'\n[^ \t]+:') + # Helpers @@ -403,7 +407,11 @@ newchunks += self._split(s, charset, targetlen, splitchars) lastchunk, lastcharset = newchunks[-1] lastlen = lastcharset.encoded_header_len(lastchunk) - return self._encode_chunks(newchunks, maxlinelen) + value = self._encode_chunks(newchunks, maxlinelen) + if _embeded_header.search(value): + raise HeaderParseError("header value appears to contain " + "an embedded header: {!r}".format(value)) + return value diff --git a/lib-python/2.7/email/message.py b/lib-python/2.7/email/message.py --- a/lib-python/2.7/email/message.py +++ b/lib-python/2.7/email/message.py @@ -38,7 +38,9 @@ def _formatparam(param, value=None, quote=True): """Convenience function to format and return a key=value pair. - This will quote the value if needed or if quote is true. + This will quote the value if needed or if quote is true. If value is a + three tuple (charset, language, value), it will be encoded according + to RFC2231 rules. """ if value is not None and len(value) > 0: # A tuple is used for RFC 2231 encoded parameter values where items @@ -97,7 +99,7 @@ objects, otherwise it is a string. Message objects implement part of the `mapping' interface, which assumes - there is exactly one occurrance of the header per message. Some headers + there is exactly one occurrence of the header per message. Some headers do in fact appear multiple times (e.g. Received) and for those headers, you must use the explicit API to set or get all the headers. Not all of the mapping methods are implemented. @@ -286,7 +288,7 @@ Return None if the header is missing instead of raising an exception. Note that if the header appeared multiple times, exactly which - occurrance gets returned is undefined. Use get_all() to get all + occurrence gets returned is undefined. Use get_all() to get all the values matching a header field name. """ return self.get(name) @@ -389,7 +391,10 @@ name is the header field to add. keyword arguments can be used to set additional parameters for the header field, with underscores converted to dashes. 
Normally the parameter will be added as key="value" unless - value is None, in which case only the key will be added. + value is None, in which case only the key will be added. If a + parameter value contains non-ASCII characters it must be specified as a + three-tuple of (charset, language, value), in which case it will be + encoded according to RFC2231 rules. Example: diff --git a/lib-python/2.7/email/mime/application.py b/lib-python/2.7/email/mime/application.py --- a/lib-python/2.7/email/mime/application.py +++ b/lib-python/2.7/email/mime/application.py @@ -17,7 +17,7 @@ _encoder=encoders.encode_base64, **_params): """Create an application/* type MIME document. - _data is a string containing the raw applicatoin data. + _data is a string containing the raw application data. _subtype is the MIME content type subtype, defaulting to 'octet-stream'. diff --git a/lib-python/2.7/email/test/data/msg_26.txt b/lib-python/2.7/email/test/data/msg_26.txt --- a/lib-python/2.7/email/test/data/msg_26.txt +++ b/lib-python/2.7/email/test/data/msg_26.txt @@ -42,4 +42,4 @@ MzMAAAAACH97tzAAAAALu3c3gAAAAAAL+7tzDABAu7f7cAAAAAAACA+3MA7EQAv/sIAA AAAAAAAIAAAAAAAAAIAAAAAA ---1618492860--2051301190--113853680-- +--1618492860--2051301190--113853680-- \ No newline at end of file diff --git a/lib-python/2.7/email/test/test_email.py b/lib-python/2.7/email/test/test_email.py --- a/lib-python/2.7/email/test/test_email.py +++ b/lib-python/2.7/email/test/test_email.py @@ -179,6 +179,17 @@ self.assertRaises(Errors.HeaderParseError, msg.set_boundary, 'BOUNDARY') + def test_make_boundary(self): + msg = MIMEMultipart('form-data') + # Note that when the boundary gets created is an implementation + # detail and might change. + self.assertEqual(msg.items()[0][1], 'multipart/form-data') + # Trigger creation of boundary + msg.as_string() + self.assertEqual(msg.items()[0][1][:33], + 'multipart/form-data; boundary="==') + # XXX: there ought to be tests of the uniqueness of the boundary, too. + def test_message_rfc822_only(self): # Issue 7970: message/rfc822 not in multipart parsed by # HeaderParser caused an exception when flattened. @@ -542,6 +553,17 @@ msg.set_charset(u'us-ascii') self.assertEqual('us-ascii', msg.get_content_charset()) + # Issue 5871: reject an attempt to embed a header inside a header value + # (header injection attack). 
+ def test_embeded_header_via_Header_rejected(self): + msg = Message() + msg['Dummy'] = Header('dummy\nX-Injected-Header: test') + self.assertRaises(Errors.HeaderParseError, msg.as_string) + + def test_embeded_header_via_string_rejected(self): + msg = Message() + msg['Dummy'] = 'dummy\nX-Injected-Header: test' + self.assertRaises(Errors.HeaderParseError, msg.as_string) # Test the email.Encoders module @@ -3113,6 +3135,28 @@ s = 'Subject: =?EUC-KR?B?CSixpLDtKSC/7Liuvsax4iC6uLmwMcijIKHaILzSwd/H0SC8+LCjwLsgv7W/+Mj3I ?=' raises(Errors.HeaderParseError, decode_header, s) + # Issue 1078919 + def test_ascii_add_header(self): + msg = Message() + msg.add_header('Content-Disposition', 'attachment', + filename='bud.gif') + self.assertEqual('attachment; filename="bud.gif"', + msg['Content-Disposition']) + + def test_nonascii_add_header_via_triple(self): + msg = Message() + msg.add_header('Content-Disposition', 'attachment', + filename=('iso-8859-1', '', 'Fu\xdfballer.ppt')) + self.assertEqual( + 'attachment; filename*="iso-8859-1\'\'Fu%DFballer.ppt"', + msg['Content-Disposition']) + + def test_encode_unaliased_charset(self): + # Issue 1379416: when the charset has no output conversion, + # output was accidentally getting coerced to unicode. + res = Header('abc','iso-8859-2').encode() + self.assertEqual(res, '=?iso-8859-2?q?abc?=') + self.assertIsInstance(res, str) # Test RFC 2231 header parameters (en/de)coding diff --git a/lib-python/2.7/ftplib.py b/lib-python/2.7/ftplib.py --- a/lib-python/2.7/ftplib.py +++ b/lib-python/2.7/ftplib.py @@ -599,7 +599,7 @@ Usage example: >>> from ftplib import FTP_TLS >>> ftps = FTP_TLS('ftp.python.org') - >>> ftps.login() # login anonimously previously securing control channel + >>> ftps.login() # login anonymously previously securing control channel '230 Guest login ok, access restrictions apply.' 
>>> ftps.prot_p() # switch to secure data connection '200 Protection level set to P' diff --git a/lib-python/2.7/functools.py b/lib-python/2.7/functools.py --- a/lib-python/2.7/functools.py +++ b/lib-python/2.7/functools.py @@ -53,17 +53,17 @@ def total_ordering(cls): """Class decorator that fills in missing ordering methods""" convert = { - '__lt__': [('__gt__', lambda self, other: other < self), - ('__le__', lambda self, other: not other < self), + '__lt__': [('__gt__', lambda self, other: not (self < other or self == other)), + ('__le__', lambda self, other: self < other or self == other), ('__ge__', lambda self, other: not self < other)], - '__le__': [('__ge__', lambda self, other: other <= self), - ('__lt__', lambda self, other: not other <= self), + '__le__': [('__ge__', lambda self, other: not self <= other or self == other), + ('__lt__', lambda self, other: self <= other and not self == other), ('__gt__', lambda self, other: not self <= other)], - '__gt__': [('__lt__', lambda self, other: other > self), - ('__ge__', lambda self, other: not other > self), + '__gt__': [('__lt__', lambda self, other: not (self > other or self == other)), + ('__ge__', lambda self, other: self > other or self == other), ('__le__', lambda self, other: not self > other)], - '__ge__': [('__le__', lambda self, other: other >= self), - ('__gt__', lambda self, other: not other >= self), + '__ge__': [('__le__', lambda self, other: (not self >= other) or self == other), + ('__gt__', lambda self, other: self >= other and not self == other), ('__lt__', lambda self, other: not self >= other)] } roots = set(dir(cls)) & set(convert) @@ -80,6 +80,7 @@ def cmp_to_key(mycmp): """Convert a cmp= function into a key= function""" class K(object): + __slots__ = ['obj'] def __init__(self, obj, *args): self.obj = obj def __lt__(self, other): diff --git a/lib-python/2.7/getpass.py b/lib-python/2.7/getpass.py --- a/lib-python/2.7/getpass.py +++ b/lib-python/2.7/getpass.py @@ -62,7 +62,7 @@ try: old = termios.tcgetattr(fd) # a copy to save new = old[:] - new[3] &= ~(termios.ECHO|termios.ISIG) # 3 == 'lflags' + new[3] &= ~termios.ECHO # 3 == 'lflags' tcsetattr_flags = termios.TCSAFLUSH if hasattr(termios, 'TCSASOFT'): tcsetattr_flags |= termios.TCSASOFT diff --git a/lib-python/2.7/gettext.py b/lib-python/2.7/gettext.py --- a/lib-python/2.7/gettext.py +++ b/lib-python/2.7/gettext.py @@ -316,7 +316,7 @@ # Note: we unconditionally convert both msgids and msgstrs to # Unicode using the character encoding specified in the charset # parameter of the Content-Type header. The gettext documentation - # strongly encourages msgids to be us-ascii, but some appliations + # strongly encourages msgids to be us-ascii, but some applications # require alternative encodings (e.g. Zope's ZCML and ZPT). 
For # traditional gettext applications, the msgid conversion will # cause no problems since us-ascii should always be a subset of diff --git a/lib-python/2.7/hashlib.py b/lib-python/2.7/hashlib.py --- a/lib-python/2.7/hashlib.py +++ b/lib-python/2.7/hashlib.py @@ -64,26 +64,29 @@ def __get_builtin_constructor(name): - if name in ('SHA1', 'sha1'): - import _sha - return _sha.new - elif name in ('MD5', 'md5'): - import _md5 - return _md5.new - elif name in ('SHA256', 'sha256', 'SHA224', 'sha224'): - import _sha256 - bs = name[3:] - if bs == '256': - return _sha256.sha256 - elif bs == '224': - return _sha256.sha224 - elif name in ('SHA512', 'sha512', 'SHA384', 'sha384'): - import _sha512 - bs = name[3:] - if bs == '512': - return _sha512.sha512 - elif bs == '384': - return _sha512.sha384 + try: + if name in ('SHA1', 'sha1'): + import _sha + return _sha.new + elif name in ('MD5', 'md5'): + import _md5 + return _md5.new + elif name in ('SHA256', 'sha256', 'SHA224', 'sha224'): + import _sha256 + bs = name[3:] + if bs == '256': + return _sha256.sha256 + elif bs == '224': + return _sha256.sha224 + elif name in ('SHA512', 'sha512', 'SHA384', 'sha384'): + import _sha512 + bs = name[3:] + if bs == '512': + return _sha512.sha512 + elif bs == '384': + return _sha512.sha384 + except ImportError: + pass # no extension module, this hash is unsupported. raise ValueError('unsupported hash type %s' % name) diff --git a/lib-python/2.7/heapq.py b/lib-python/2.7/heapq.py --- a/lib-python/2.7/heapq.py +++ b/lib-python/2.7/heapq.py @@ -133,6 +133,11 @@ from operator import itemgetter import bisect +def cmp_lt(x, y): + # Use __lt__ if available; otherwise, try __le__. + # In Py3.x, only __lt__ will be called. + return (x < y) if hasattr(x, '__lt__') else (not y <= x) + def heappush(heap, item): """Push item onto heap, maintaining the heap invariant.""" heap.append(item) @@ -167,13 +172,13 @@ def heappushpop(heap, item): """Fast version of a heappush followed by a heappop.""" - if heap and heap[0] < item: + if heap and cmp_lt(heap[0], item): item, heap[0] = heap[0], item _siftup(heap, 0) return item def heapify(x): - """Transform list into a heap, in-place, in O(len(heap)) time.""" + """Transform list into a heap, in-place, in O(len(x)) time.""" n = len(x) # Transform bottom-up. The largest index there's any point to looking at # is the largest with a child index in-range, so must have 2*i + 1 < n, @@ -215,11 +220,10 @@ pop = result.pop los = result[-1] # los --> Largest of the nsmallest for elem in it: - if los <= elem: - continue - insort(result, elem) - pop() - los = result[-1] + if cmp_lt(elem, los): + insort(result, elem) + pop() + los = result[-1] return result # An alternative approach manifests the whole iterable in memory but # saves comparisons by heapifying all at once. Also, saves time @@ -240,7 +244,7 @@ while pos > startpos: parentpos = (pos - 1) >> 1 parent = heap[parentpos] - if newitem < parent: + if cmp_lt(newitem, parent): heap[pos] = parent pos = parentpos continue @@ -295,7 +299,7 @@ while childpos < endpos: # Set childpos to index of smaller child. rightpos = childpos + 1 - if rightpos < endpos and not heap[childpos] < heap[rightpos]: + if rightpos < endpos and not cmp_lt(heap[childpos], heap[rightpos]): childpos = rightpos # Move the smaller child up. 
heap[pos] = heap[childpos] @@ -364,7 +368,7 @@ return [min(chain(head, it))] return [min(chain(head, it), key=key)] - # When n>=size, it's faster to use sort() + # When n>=size, it's faster to use sorted() try: size = len(iterable) except (TypeError, AttributeError): @@ -402,7 +406,7 @@ return [max(chain(head, it))] return [max(chain(head, it), key=key)] - # When n>=size, it's faster to use sort() + # When n>=size, it's faster to use sorted() try: size = len(iterable) except (TypeError, AttributeError): diff --git a/lib-python/2.7/httplib.py b/lib-python/2.7/httplib.py --- a/lib-python/2.7/httplib.py +++ b/lib-python/2.7/httplib.py @@ -212,6 +212,9 @@ # maximal amount of data to read at one time in _safe_read MAXAMOUNT = 1048576 +# maximal line length when calling readline(). +_MAXLINE = 65536 + class HTTPMessage(mimetools.Message): def addheader(self, key, value): @@ -274,7 +277,9 @@ except IOError: startofline = tell = None self.seekable = 0 - line = self.fp.readline() + line = self.fp.readline(_MAXLINE + 1) + if len(line) > _MAXLINE: + raise LineTooLong("header line") if not line: self.status = 'EOF in headers' break @@ -404,7 +409,10 @@ break # skip the header from the 100 response while True: - skip = self.fp.readline().strip() + skip = self.fp.readline(_MAXLINE + 1) + if len(skip) > _MAXLINE: + raise LineTooLong("header line") + skip = skip.strip() if not skip: break if self.debuglevel > 0: @@ -563,7 +571,9 @@ value = [] while True: if chunk_left is None: - line = self.fp.readline() + line = self.fp.readline(_MAXLINE + 1) + if len(line) > _MAXLINE: + raise LineTooLong("chunk size") i = line.find(';') if i >= 0: line = line[:i] # strip chunk-extensions @@ -598,7 +608,9 @@ # read and discard trailer up to the CRLF terminator ### note: we shouldn't have any trailers! while True: - line = self.fp.readline() + line = self.fp.readline(_MAXLINE + 1) + if len(line) > _MAXLINE: + raise LineTooLong("trailer line") if not line: # a vanishingly small number of sites EOF without # sending the trailer @@ -730,7 +742,9 @@ raise socket.error("Tunnel connection failed: %d %s" % (code, message.strip())) while True: - line = response.fp.readline() + line = response.fp.readline(_MAXLINE + 1) + if len(line) > _MAXLINE: + raise LineTooLong("header line") if line == '\r\n': break @@ -790,7 +804,7 @@ del self._buffer[:] # If msg and message_body are sent in a single send() call, # it will avoid performance problems caused by the interaction - # between delayed ack and the Nagle algorithim. + # between delayed ack and the Nagle algorithm. 
if isinstance(message_body, str): msg += message_body message_body = None @@ -1233,6 +1247,11 @@ self.args = line, self.line = line +class LineTooLong(HTTPException): + def __init__(self, line_type): + HTTPException.__init__(self, "got more than %d bytes when reading %s" + % (_MAXLINE, line_type)) + # for backwards compatibility error = HTTPException diff --git a/lib-python/2.7/idlelib/Bindings.py b/lib-python/2.7/idlelib/Bindings.py --- a/lib-python/2.7/idlelib/Bindings.py +++ b/lib-python/2.7/idlelib/Bindings.py @@ -98,14 +98,6 @@ # menu del menudefs[-1][1][0:2] - menudefs.insert(0, - ('application', [ - ('About IDLE', '<>'), - None, - ('_Preferences....', '<>'), - ])) - - default_keydefs = idleConf.GetCurrentKeySet() del sys diff --git a/lib-python/2.7/idlelib/EditorWindow.py b/lib-python/2.7/idlelib/EditorWindow.py --- a/lib-python/2.7/idlelib/EditorWindow.py +++ b/lib-python/2.7/idlelib/EditorWindow.py @@ -48,6 +48,21 @@ path = module.__path__ except AttributeError: raise ImportError, 'No source for module ' + module.__name__ + if descr[2] != imp.PY_SOURCE: + # If all of the above fails and didn't raise an exception,fallback + # to a straight import which can find __init__.py in a package. + m = __import__(fullname) + try: + filename = m.__file__ + except AttributeError: + pass + else: + file = None + base, ext = os.path.splitext(filename) + if ext == '.pyc': + ext = '.py' + filename = base + ext + descr = filename, None, imp.PY_SOURCE return file, filename, descr class EditorWindow(object): @@ -102,8 +117,8 @@ self.top = top = WindowList.ListedToplevel(root, menu=self.menubar) if flist: self.tkinter_vars = flist.vars - #self.top.instance_dict makes flist.inversedict avalable to - #configDialog.py so it can access all EditorWindow instaces + #self.top.instance_dict makes flist.inversedict available to + #configDialog.py so it can access all EditorWindow instances self.top.instance_dict = flist.inversedict else: self.tkinter_vars = {} # keys: Tkinter event names @@ -136,6 +151,14 @@ if macosxSupport.runningAsOSXApp(): # Command-W on editorwindows doesn't work without this. text.bind('<>', self.close_event) + # Some OS X systems have only one mouse button, + # so use control-click for pulldown menus there. + # (Note, AquaTk defines <2> as the right button if + # present and the Tk Text widget already binds <2>.) + text.bind("",self.right_menu_event) + else: + # Elsewhere, use right-click for pulldown menus. + text.bind("<3>",self.right_menu_event) text.bind("<>", self.cut) text.bind("<>", self.copy) text.bind("<>", self.paste) @@ -154,7 +177,6 @@ text.bind("<>", self.find_selection_event) text.bind("<>", self.replace_event) text.bind("<>", self.goto_line_event) - text.bind("<3>", self.right_menu_event) text.bind("<>",self.smart_backspace_event) text.bind("<>",self.newline_and_indent_event) text.bind("<>",self.smart_indent_event) @@ -300,13 +322,13 @@ return "break" def home_callback(self, event): - if (event.state & 12) != 0 and event.keysym == "Home": - # state&1==shift, state&4==control, state&8==alt - return # ; fall back to class binding - + if (event.state & 4) != 0 and event.keysym == "Home": + # state&4==Control. If , use the Tk binding. 
+ return if self.text.index("iomark") and \ self.text.compare("iomark", "<=", "insert lineend") and \ self.text.compare("insert linestart", "<=", "iomark"): + # In Shell on input line, go to just after prompt insertpt = int(self.text.index("iomark").split(".")[1]) else: line = self.text.get("insert linestart", "insert lineend") @@ -315,30 +337,27 @@ break else: insertpt=len(line) - lineat = int(self.text.index("insert").split('.')[1]) - if insertpt == lineat: insertpt = 0 - dest = "insert linestart+"+str(insertpt)+"c" - if (event.state&1) == 0: - # shift not pressed + # shift was not pressed self.text.tag_remove("sel", "1.0", "end") else: if not self.text.index("sel.first"): - self.text.mark_set("anchor","insert") - + self.text.mark_set("my_anchor", "insert") # there was no previous selection + else: + if self.text.compare(self.text.index("sel.first"), "<", self.text.index("insert")): + self.text.mark_set("my_anchor", "sel.first") # extend back + else: + self.text.mark_set("my_anchor", "sel.last") # extend forward first = self.text.index(dest) - last = self.text.index("anchor") - + last = self.text.index("my_anchor") if self.text.compare(first,">",last): first,last = last,first - self.text.tag_remove("sel", "1.0", "end") self.text.tag_add("sel", first, last) - self.text.mark_set("insert", dest) self.text.see("insert") return "break" @@ -385,7 +404,7 @@ menudict[name] = menu = Menu(mbar, name=name) mbar.add_cascade(label=label, menu=menu, underline=underline) - if macosxSupport.runningAsOSXApp(): + if macosxSupport.isCarbonAquaTk(self.root): # Insert the application menu menudict['application'] = menu = Menu(mbar, name='apple') mbar.add_cascade(label='IDLE', menu=menu) @@ -445,7 +464,11 @@ def python_docs(self, event=None): if sys.platform[:3] == 'win': - os.startfile(self.help_url) + try: + os.startfile(self.help_url) + except WindowsError as why: + tkMessageBox.showerror(title='Document Start Failure', + message=str(why), parent=self.text) else: webbrowser.open(self.help_url) return "break" @@ -740,9 +763,13 @@ "Create a callback with the helpfile value frozen at definition time" def display_extra_help(helpfile=helpfile): if not helpfile.startswith(('www', 'http')): - url = os.path.normpath(helpfile) + helpfile = os.path.normpath(helpfile) if sys.platform[:3] == 'win': - os.startfile(helpfile) + try: + os.startfile(helpfile) + except WindowsError as why: + tkMessageBox.showerror(title='Document Start Failure', + message=str(why), parent=self.text) else: webbrowser.open(helpfile) return display_extra_help @@ -1526,7 +1553,12 @@ def get_accelerator(keydefs, eventname): keylist = keydefs.get(eventname) - if not keylist: + # issue10940: temporary workaround to prevent hang with OS X Cocoa Tk 8.5 + # if not keylist: + if (not keylist) or (macosxSupport.runningAsOSXApp() and eventname in { + "<>", + "<>", + "<>"}): return "" s = keylist[0] s = re.sub(r"-[a-z]\b", lambda m: m.group().upper(), s) diff --git a/lib-python/2.7/idlelib/FileList.py b/lib-python/2.7/idlelib/FileList.py --- a/lib-python/2.7/idlelib/FileList.py +++ b/lib-python/2.7/idlelib/FileList.py @@ -43,7 +43,7 @@ def new(self, filename=None): return self.EditorWindow(self, filename) - def close_all_callback(self, event): + def close_all_callback(self, *args, **kwds): for edit in self.inversedict.keys(): reply = edit.close() if reply == "cancel": diff --git a/lib-python/2.7/idlelib/FormatParagraph.py b/lib-python/2.7/idlelib/FormatParagraph.py --- a/lib-python/2.7/idlelib/FormatParagraph.py +++ 
b/lib-python/2.7/idlelib/FormatParagraph.py @@ -54,7 +54,7 @@ # If the block ends in a \n, we dont want the comment # prefix inserted after it. (Im not sure it makes sense to # reformat a comment block that isnt made of complete - # lines, but whatever!) Can't think of a clean soltution, + # lines, but whatever!) Can't think of a clean solution, # so we hack away block_suffix = "" if not newdata[-1]: diff --git a/lib-python/2.7/idlelib/HISTORY.txt b/lib-python/2.7/idlelib/HISTORY.txt --- a/lib-python/2.7/idlelib/HISTORY.txt +++ b/lib-python/2.7/idlelib/HISTORY.txt @@ -13,7 +13,7 @@ - New tarball released as a result of the 'revitalisation' of the IDLEfork project. -- This release requires python 2.1 or better. Compatability with earlier +- This release requires python 2.1 or better. Compatibility with earlier versions of python (especially ancient ones like 1.5x) is no longer a priority in IDLEfork development. diff --git a/lib-python/2.7/idlelib/IOBinding.py b/lib-python/2.7/idlelib/IOBinding.py --- a/lib-python/2.7/idlelib/IOBinding.py +++ b/lib-python/2.7/idlelib/IOBinding.py @@ -320,17 +320,20 @@ return "yes" message = "Do you want to save %s before closing?" % ( self.filename or "this untitled document") - m = tkMessageBox.Message( - title="Save On Close", - message=message, - icon=tkMessageBox.QUESTION, - type=tkMessageBox.YESNOCANCEL, - master=self.text) - reply = m.show() - if reply == "yes": + confirm = tkMessageBox.askyesnocancel( + title="Save On Close", + message=message, + default=tkMessageBox.YES, + master=self.text) + if confirm: + reply = "yes" self.save(None) if not self.get_saved(): reply = "cancel" + elif confirm is None: + reply = "cancel" + else: + reply = "no" self.text.focus_set() return reply @@ -339,7 +342,7 @@ self.save_as(event) else: if self.writefile(self.filename): - self.set_saved(1) + self.set_saved(True) try: self.editwin.store_file_breaks() except AttributeError: # may be a PyShell @@ -465,15 +468,12 @@ self.text.insert("end-1c", "\n") def print_window(self, event): - m = tkMessageBox.Message( - title="Print", - message="Print to Default Printer", - icon=tkMessageBox.QUESTION, - type=tkMessageBox.OKCANCEL, - default=tkMessageBox.OK, - master=self.text) - reply = m.show() - if reply != tkMessageBox.OK: + confirm = tkMessageBox.askokcancel( + title="Print", + message="Print to Default Printer", + default=tkMessageBox.OK, + master=self.text) + if not confirm: self.text.focus_set() return "break" tempfilename = None @@ -488,8 +488,8 @@ if not self.writefile(tempfilename): os.unlink(tempfilename) return "break" - platform=os.name - printPlatform=1 + platform = os.name + printPlatform = True if platform == 'posix': #posix platform command = idleConf.GetOption('main','General', 'print-command-posix') @@ -497,7 +497,7 @@ elif platform == 'nt': #win32 platform command = idleConf.GetOption('main','General','print-command-win') else: #no printing for this platform - printPlatform=0 + printPlatform = False if printPlatform: #we can try to print for this platform command = command % filename pipe = os.popen(command, "r") @@ -511,7 +511,7 @@ output = "Printing command: %s\n" % repr(command) + output tkMessageBox.showerror("Print status", output, master=self.text) else: #no printing for this platform - message="Printing is not enabled for this platform: %s" % platform + message = "Printing is not enabled for this platform: %s" % platform tkMessageBox.showinfo("Print status", message, master=self.text) if tempfilename: os.unlink(tempfilename) diff --git 
a/lib-python/2.7/idlelib/NEWS.txt b/lib-python/2.7/idlelib/NEWS.txt --- a/lib-python/2.7/idlelib/NEWS.txt +++ b/lib-python/2.7/idlelib/NEWS.txt @@ -1,3 +1,18 @@ +What's New in IDLE 2.7.2? +======================= + +*Release date: 29-May-2011* + +- Issue #6378: Further adjust idle.bat to start associated Python + +- Issue #11896: Save on Close failed despite selecting "Yes" in dialog. + +- toggle failing on Tk 8.5, causing IDLE exits and strange selection + behavior. Issue 4676. Improve selection extension behaviour. + +- toggle non-functional when NumLock set on Windows. Issue 3851. + + What's New in IDLE 2.7? ======================= @@ -21,7 +36,7 @@ - Tk 8.5 Text widget requires 'wordprocessor' tabstyle attr to handle mixed space/tab properly. Issue 5129, patch by Guilherme Polo. - + - Issue #3549: On MacOS the preferences menu was not present diff --git a/lib-python/2.7/idlelib/PyShell.py b/lib-python/2.7/idlelib/PyShell.py --- a/lib-python/2.7/idlelib/PyShell.py +++ b/lib-python/2.7/idlelib/PyShell.py @@ -1432,6 +1432,13 @@ shell.interp.prepend_syspath(script) shell.interp.execfile(script) + # Check for problematic OS X Tk versions and print a warning message + # in the IDLE shell window; this is less intrusive than always opening + # a separate window. + tkversionwarning = macosxSupport.tkVersionWarning(root) + if tkversionwarning: + shell.interp.runcommand(''.join(("print('", tkversionwarning, "')"))) + root.mainloop() root.destroy() diff --git a/lib-python/2.7/idlelib/ScriptBinding.py b/lib-python/2.7/idlelib/ScriptBinding.py --- a/lib-python/2.7/idlelib/ScriptBinding.py +++ b/lib-python/2.7/idlelib/ScriptBinding.py @@ -26,6 +26,7 @@ from idlelib import PyShell from idlelib.configHandler import idleConf +from idlelib import macosxSupport IDENTCHARS = string.ascii_letters + string.digits + "_" @@ -53,6 +54,9 @@ self.flist = self.editwin.flist self.root = self.editwin.root + if macosxSupport.runningAsOSXApp(): + self.editwin.text_frame.bind('<>', self._run_module_event) + def check_module_event(self, event): filename = self.getfilename() if not filename: @@ -166,6 +170,19 @@ interp.runcode(code) return 'break' + if macosxSupport.runningAsOSXApp(): + # Tk-Cocoa in MacOSX is broken until at least + # Tk 8.5.9, and without this rather + # crude workaround IDLE would hang when a user + # tries to run a module using the keyboard shortcut + # (the menu item works fine). + _run_module_event = run_module_event + + def run_module_event(self, event): + self.editwin.text_frame.after(200, + lambda: self.editwin.text_frame.event_generate('<>')) + return 'break' + def getfilename(self): """Get source filename. If not saved, offer to save (or create) file @@ -184,9 +201,9 @@ if autosave and filename: self.editwin.io.save(None) else: - reply = self.ask_save_dialog() + confirm = self.ask_save_dialog() self.editwin.text.focus_set() - if reply == "ok": + if confirm: self.editwin.io.save(None) filename = self.editwin.io.filename else: @@ -195,13 +212,11 @@ def ask_save_dialog(self): msg = "Source Must Be Saved\n" + 5*' ' + "OK to Save?" - mb = tkMessageBox.Message(title="Save Before Run or Check", - message=msg, - icon=tkMessageBox.QUESTION, - type=tkMessageBox.OKCANCEL, - default=tkMessageBox.OK, - master=self.editwin.text) - return mb.show() + confirm = tkMessageBox.askokcancel(title="Save Before Run or Check", + message=msg, + default=tkMessageBox.OK, + master=self.editwin.text) + return confirm def errorbox(self, title, message): # XXX This should really be a function of EditorWindow... 
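
The IOBinding.py, PyShell.py and ScriptBinding.py hunks above all move IDLE from the low-level tkMessageBox.Message(...).show() pattern to the askokcancel/askyesnocancel convenience functions, which return True/False (or None for Cancel in the yes/no/cancel case) instead of the strings "ok", "yes", "no" and "cancel". A minimal sketch of the new calling convention, assuming a Tk display is available; the dialog text is copied from the diff, and nothing below is part of the patch itself:

import Tkinter
import tkMessageBox

root = Tkinter.Tk()
root.withdraw()  # the dialog does not need a visible main window
confirm = tkMessageBox.askokcancel(
    title="Save Before Run or Check",
    message="Source Must Be Saved\n     OK to Save?",
    default=tkMessageBox.OK,
    master=root)
if confirm:        # True for OK, False for Cancel
    print "user chose to save"
else:
    print "user cancelled"
root.destroy()
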
diff --git a/lib-python/2.7/idlelib/config-keys.def b/lib-python/2.7/idlelib/config-keys.def --- a/lib-python/2.7/idlelib/config-keys.def +++ b/lib-python/2.7/idlelib/config-keys.def @@ -176,7 +176,7 @@ redo = close-window = restart-shell = -save-window-as-file = +save-window-as-file = close-all-windows = view-restart = tabify-region = @@ -208,7 +208,7 @@ open-module = find-selection = python-context-help = -save-copy-of-window-as-file = +save-copy-of-window-as-file = open-window-from-file = python-docs = diff --git a/lib-python/2.7/idlelib/extend.txt b/lib-python/2.7/idlelib/extend.txt --- a/lib-python/2.7/idlelib/extend.txt +++ b/lib-python/2.7/idlelib/extend.txt @@ -18,7 +18,7 @@ An IDLE extension class is instantiated with a single argument, `editwin', an EditorWindow instance. The extension cannot assume much -about this argument, but it is guarateed to have the following instance +about this argument, but it is guaranteed to have the following instance variables: text a Text instance (a widget) diff --git a/lib-python/2.7/idlelib/idle.bat b/lib-python/2.7/idlelib/idle.bat --- a/lib-python/2.7/idlelib/idle.bat +++ b/lib-python/2.7/idlelib/idle.bat @@ -1,4 +1,4 @@ @echo off rem Start IDLE using the appropriate Python interpreter set CURRDIR=%~dp0 -start "%CURRDIR%..\..\pythonw.exe" "%CURRDIR%idle.pyw" %1 %2 %3 %4 %5 %6 %7 %8 %9 +start "IDLE" "%CURRDIR%..\..\pythonw.exe" "%CURRDIR%idle.pyw" %1 %2 %3 %4 %5 %6 %7 %8 %9 diff --git a/lib-python/2.7/idlelib/idlever.py b/lib-python/2.7/idlelib/idlever.py --- a/lib-python/2.7/idlelib/idlever.py +++ b/lib-python/2.7/idlelib/idlever.py @@ -1,1 +1,1 @@ -IDLE_VERSION = "2.7.1" +IDLE_VERSION = "2.7.2" diff --git a/lib-python/2.7/idlelib/macosxSupport.py b/lib-python/2.7/idlelib/macosxSupport.py --- a/lib-python/2.7/idlelib/macosxSupport.py +++ b/lib-python/2.7/idlelib/macosxSupport.py @@ -4,6 +4,7 @@ """ import sys import Tkinter +from os import path _appbundle = None @@ -19,10 +20,41 @@ _appbundle = (sys.platform == 'darwin' and '.app' in sys.executable) return _appbundle +_carbonaquatk = None + +def isCarbonAquaTk(root): + """ + Returns True if IDLE is using a Carbon Aqua Tk (instead of the + newer Cocoa Aqua Tk). + """ + global _carbonaquatk + if _carbonaquatk is None: + _carbonaquatk = (runningAsOSXApp() and + 'aqua' in root.tk.call('tk', 'windowingsystem') and + 'AppKit' not in root.tk.call('winfo', 'server', '.')) + return _carbonaquatk + +def tkVersionWarning(root): + """ + Returns a string warning message if the Tk version in use appears to + be one known to cause problems with IDLE. The Apple Cocoa-based Tk 8.5 + that was shipped with Mac OS X 10.6. + """ + + if (runningAsOSXApp() and + ('AppKit' in root.tk.call('winfo', 'server', '.')) and + (root.tk.call('info', 'patchlevel') == '8.5.7') ): + return (r"WARNING: The version of Tcl/Tk (8.5.7) in use may" + r" be unstable.\n" + r"Visit http://www.python.org/download/mac/tcltk/" + r" for current information.") + else: + return False + def addOpenEventSupport(root, flist): """ - This ensures that the application will respont to open AppleEvents, which - makes is feaseable to use IDLE as the default application for python files. + This ensures that the application will respond to open AppleEvents, which + makes is feasible to use IDLE as the default application for python files. 
""" def doOpenFile(*args): for fn in args: @@ -79,9 +111,6 @@ WindowList.add_windows_to_menu(menu) WindowList.register_callback(postwindowsmenu) - menudict['application'] = menu = Menu(menubar, name='apple') - menubar.add_cascade(label='IDLE', menu=menu) - def about_dialog(event=None): from idlelib import aboutDialog aboutDialog.AboutDialog(root, 'About IDLE') @@ -91,41 +120,45 @@ root.instance_dict = flist.inversedict configDialog.ConfigDialog(root, 'Settings') + def help_dialog(event=None): + from idlelib import textView + fn = path.join(path.abspath(path.dirname(__file__)), 'help.txt') + textView.view_file(root, 'Help', fn) root.bind('<>', about_dialog) root.bind('<>', config_dialog) + root.createcommand('::tk::mac::ShowPreferences', config_dialog) if flist: root.bind('<>', flist.close_all_callback) + # The binding above doesn't reliably work on all versions of Tk + # on MacOSX. Adding command definition below does seem to do the + # right thing for now. + root.createcommand('exit', flist.close_all_callback) - ###check if Tk version >= 8.4.14; if so, use hard-coded showprefs binding - tkversion = root.tk.eval('info patchlevel') - # Note: we cannot check if the string tkversion >= '8.4.14', because - # the string '8.4.7' is greater than the string '8.4.14'. - if tuple(map(int, tkversion.split('.'))) >= (8, 4, 14): - Bindings.menudefs[0] = ('application', [ + if isCarbonAquaTk(root): + # for Carbon AquaTk, replace the default Tk apple menu + menudict['application'] = menu = Menu(menubar, name='apple') + menubar.add_cascade(label='IDLE', menu=menu) + Bindings.menudefs.insert(0, + ('application', [ ('About IDLE', '<>'), - None, - ]) - root.createcommand('::tk::mac::ShowPreferences', config_dialog) + None, + ])) + tkversion = root.tk.eval('info patchlevel') + if tuple(map(int, tkversion.split('.'))) < (8, 4, 14): + # for earlier AquaTk versions, supply a Preferences menu item + Bindings.menudefs[0][1].append( + ('_Preferences....', '<>'), + ) else: - for mname, entrylist in Bindings.menudefs: - menu = menudict.get(mname) - if not menu: - continue - else: - for entry in entrylist: - if not entry: - menu.add_separator() - else: - label, eventname = entry - underline, label = prepstr(label) - accelerator = get_accelerator(Bindings.default_keydefs, - eventname) - def command(text=root, eventname=eventname): - text.event_generate(eventname) - menu.add_command(label=label, underline=underline, - command=command, accelerator=accelerator) + # assume Cocoa AquaTk + # replace default About dialog with About IDLE one + root.createcommand('tkAboutDialog', about_dialog) + # replace default "Help" item in Help menu + root.createcommand('::tk::mac::ShowHelp', help_dialog) + # remove redundant "IDLE Help" from menu + del Bindings.menudefs[-1][1][0] def setupApp(root, flist): """ diff --git a/lib-python/2.7/imaplib.py b/lib-python/2.7/imaplib.py --- a/lib-python/2.7/imaplib.py +++ b/lib-python/2.7/imaplib.py @@ -1158,28 +1158,17 @@ self.port = port self.sock = socket.create_connection((host, port)) self.sslobj = ssl.wrap_socket(self.sock, self.keyfile, self.certfile) + self.file = self.sslobj.makefile('rb') def read(self, size): """Read 'size' bytes from remote.""" - # sslobj.read() sometimes returns < size bytes - chunks = [] - read = 0 - while read < size: - data = self.sslobj.read(min(size-read, 16384)) - read += len(data) - chunks.append(data) - - return ''.join(chunks) + return self.file.read(size) def readline(self): """Read line from remote.""" - line = [] - while 1: - char = self.sslobj.read(1) - 
line.append(char) - if char in ("\n", ""): return ''.join(line) + return self.file.readline() def send(self, data): @@ -1195,6 +1184,7 @@ def shutdown(self): """Close I/O established in "open".""" + self.file.close() self.sock.close() @@ -1321,9 +1311,10 @@ 'Jul': 7, 'Aug': 8, 'Sep': 9, 'Oct': 10, 'Nov': 11, 'Dec': 12} def Internaldate2tuple(resp): - """Convert IMAP4 INTERNALDATE to UT. + """Parse an IMAP4 INTERNALDATE string. - Returns Python time module tuple. + Return corresponding local time. The return value is a + time.struct_time instance or None if the string has wrong format. """ mo = InternalDate.match(resp) @@ -1390,9 +1381,14 @@ def Time2Internaldate(date_time): - """Convert 'date_time' to IMAP4 INTERNALDATE representation. + """Convert date_time to IMAP4 INTERNALDATE representation. - Return string in form: '"DD-Mmm-YYYY HH:MM:SS +HHMM"' + Return string in form: '"DD-Mmm-YYYY HH:MM:SS +HHMM"'. The + date_time argument can be a number (int or float) representing + seconds since epoch (as returned by time.time()), a 9-tuple + representing local time (as returned by time.localtime()), or a + double-quoted string. In the last case, it is assumed to already + be in the correct format. """ if isinstance(date_time, (int, float)): diff --git a/lib-python/2.7/inspect.py b/lib-python/2.7/inspect.py --- a/lib-python/2.7/inspect.py +++ b/lib-python/2.7/inspect.py @@ -943,8 +943,14 @@ f_name, 'at most' if defaults else 'exactly', num_args, 'arguments' if num_args > 1 else 'argument', num_total)) elif num_args == 0 and num_total: - raise TypeError('%s() takes no arguments (%d given)' % - (f_name, num_total)) + if varkw: + if num_pos: + # XXX: We should use num_pos, but Python also uses num_total: + raise TypeError('%s() takes exactly 0 arguments ' + '(%d given)' % (f_name, num_total)) + else: + raise TypeError('%s() takes no arguments (%d given)' % + (f_name, num_total)) for arg in args: if isinstance(arg, str) and arg in named: if is_assigned(arg): diff --git a/lib-python/2.7/json/decoder.py b/lib-python/2.7/json/decoder.py --- a/lib-python/2.7/json/decoder.py +++ b/lib-python/2.7/json/decoder.py @@ -4,7 +4,7 @@ import sys import struct -from json.scanner import make_scanner +from json import scanner try: from _json import scanstring as c_scanstring except ImportError: @@ -161,6 +161,12 @@ nextchar = s[end:end + 1] # Trivial empty object if nextchar == '}': + if object_pairs_hook is not None: + result = object_pairs_hook(pairs) + return result, end + pairs = {} + if object_hook is not None: + pairs = object_hook(pairs) return pairs, end + 1 elif nextchar != '"': raise ValueError(errmsg("Expecting property name", s, end)) @@ -350,7 +356,7 @@ self.parse_object = JSONObject self.parse_array = JSONArray self.parse_string = scanstring - self.scan_once = make_scanner(self) + self.scan_once = scanner.make_scanner(self) def decode(self, s, _w=WHITESPACE.match): """Return the Python representation of ``s`` (a ``str`` or ``unicode`` diff --git a/lib-python/2.7/json/encoder.py b/lib-python/2.7/json/encoder.py --- a/lib-python/2.7/json/encoder.py +++ b/lib-python/2.7/json/encoder.py @@ -251,7 +251,7 @@ if (_one_shot and c_make_encoder is not None - and not self.indent and not self.sort_keys): + and self.indent is None and not self.sort_keys): _iterencode = c_make_encoder( markers, self.default, _encoder, self.indent, self.key_separator, self.item_separator, self.sort_keys, diff --git a/lib-python/2.7/json/tests/__init__.py b/lib-python/2.7/json/tests/__init__.py --- 
a/lib-python/2.7/json/tests/__init__.py +++ b/lib-python/2.7/json/tests/__init__.py @@ -1,7 +1,46 @@ import os import sys +import json +import doctest import unittest -import doctest + +from test import test_support + +# import json with and without accelerations +cjson = test_support.import_fresh_module('json', fresh=['_json']) +pyjson = test_support.import_fresh_module('json', blocked=['_json']) + +# create two base classes that will be used by the other tests +class PyTest(unittest.TestCase): + json = pyjson + loads = staticmethod(pyjson.loads) + dumps = staticmethod(pyjson.dumps) + + at unittest.skipUnless(cjson, 'requires _json') +class CTest(unittest.TestCase): + if cjson is not None: + json = cjson + loads = staticmethod(cjson.loads) + dumps = staticmethod(cjson.dumps) + +# test PyTest and CTest checking if the functions come from the right module +class TestPyTest(PyTest): + def test_pyjson(self): + self.assertEqual(self.json.scanner.make_scanner.__module__, + 'json.scanner') + self.assertEqual(self.json.decoder.scanstring.__module__, + 'json.decoder') + self.assertEqual(self.json.encoder.encode_basestring_ascii.__module__, + 'json.encoder') + +class TestCTest(CTest): + def test_cjson(self): + self.assertEqual(self.json.scanner.make_scanner.__module__, '_json') + self.assertEqual(self.json.decoder.scanstring.__module__, '_json') + self.assertEqual(self.json.encoder.c_make_encoder.__module__, '_json') + self.assertEqual(self.json.encoder.encode_basestring_ascii.__module__, + '_json') + here = os.path.dirname(__file__) @@ -17,12 +56,11 @@ return suite def additional_tests(): - import json - import json.encoder - import json.decoder suite = unittest.TestSuite() for mod in (json, json.encoder, json.decoder): suite.addTest(doctest.DocTestSuite(mod)) + suite.addTest(TestPyTest('test_pyjson')) + suite.addTest(TestCTest('test_cjson')) return suite def main(): diff --git a/lib-python/2.7/json/tests/test_check_circular.py b/lib-python/2.7/json/tests/test_check_circular.py --- a/lib-python/2.7/json/tests/test_check_circular.py +++ b/lib-python/2.7/json/tests/test_check_circular.py @@ -1,30 +1,34 @@ -from unittest import TestCase -import json +from json.tests import PyTest, CTest + def default_iterable(obj): return list(obj) -class TestCheckCircular(TestCase): +class TestCheckCircular(object): def test_circular_dict(self): dct = {} dct['a'] = dct - self.assertRaises(ValueError, json.dumps, dct) + self.assertRaises(ValueError, self.dumps, dct) def test_circular_list(self): lst = [] lst.append(lst) - self.assertRaises(ValueError, json.dumps, lst) + self.assertRaises(ValueError, self.dumps, lst) def test_circular_composite(self): dct2 = {} dct2['a'] = [] dct2['a'].append(dct2) - self.assertRaises(ValueError, json.dumps, dct2) + self.assertRaises(ValueError, self.dumps, dct2) def test_circular_default(self): - json.dumps([set()], default=default_iterable) - self.assertRaises(TypeError, json.dumps, [set()]) + self.dumps([set()], default=default_iterable) + self.assertRaises(TypeError, self.dumps, [set()]) def test_circular_off_default(self): - json.dumps([set()], default=default_iterable, check_circular=False) - self.assertRaises(TypeError, json.dumps, [set()], check_circular=False) + self.dumps([set()], default=default_iterable, check_circular=False) + self.assertRaises(TypeError, self.dumps, [set()], check_circular=False) + + +class TestPyCheckCircular(TestCheckCircular, PyTest): pass +class TestCCheckCircular(TestCheckCircular, CTest): pass diff --git a/lib-python/2.7/json/tests/test_decode.py 
b/lib-python/2.7/json/tests/test_decode.py --- a/lib-python/2.7/json/tests/test_decode.py +++ b/lib-python/2.7/json/tests/test_decode.py @@ -1,18 +1,17 @@ import decimal -from unittest import TestCase from StringIO import StringIO +from collections import OrderedDict +from json.tests import PyTest, CTest -import json -from collections import OrderedDict -class TestDecode(TestCase): +class TestDecode(object): def test_decimal(self): - rval = json.loads('1.1', parse_float=decimal.Decimal) + rval = self.loads('1.1', parse_float=decimal.Decimal) self.assertTrue(isinstance(rval, decimal.Decimal)) self.assertEqual(rval, decimal.Decimal('1.1')) def test_float(self): - rval = json.loads('1', parse_int=float) + rval = self.loads('1', parse_int=float) self.assertTrue(isinstance(rval, float)) self.assertEqual(rval, 1.0) @@ -20,22 +19,32 @@ # Several optimizations were made that skip over calls to # the whitespace regex, so this test is designed to try and # exercise the uncommon cases. The array cases are already covered. - rval = json.loads('{ "key" : "value" , "k":"v" }') + rval = self.loads('{ "key" : "value" , "k":"v" }') self.assertEqual(rval, {"key":"value", "k":"v"}) + def test_empty_objects(self): + self.assertEqual(self.loads('{}'), {}) + self.assertEqual(self.loads('[]'), []) + self.assertEqual(self.loads('""'), u"") + self.assertIsInstance(self.loads('""'), unicode) + def test_object_pairs_hook(self): s = '{"xkd":1, "kcw":2, "art":3, "hxm":4, "qrt":5, "pad":6, "hoy":7}' p = [("xkd", 1), ("kcw", 2), ("art", 3), ("hxm", 4), ("qrt", 5), ("pad", 6), ("hoy", 7)] - self.assertEqual(json.loads(s), eval(s)) - self.assertEqual(json.loads(s, object_pairs_hook=lambda x: x), p) - self.assertEqual(json.load(StringIO(s), - object_pairs_hook=lambda x: x), p) - od = json.loads(s, object_pairs_hook=OrderedDict) + self.assertEqual(self.loads(s), eval(s)) + self.assertEqual(self.loads(s, object_pairs_hook=lambda x: x), p) + self.assertEqual(self.json.load(StringIO(s), + object_pairs_hook=lambda x: x), p) + od = self.loads(s, object_pairs_hook=OrderedDict) self.assertEqual(od, OrderedDict(p)) self.assertEqual(type(od), OrderedDict) # the object_pairs_hook takes priority over the object_hook - self.assertEqual(json.loads(s, + self.assertEqual(self.loads(s, object_pairs_hook=OrderedDict, object_hook=lambda x: None), OrderedDict(p)) + + +class TestPyDecode(TestDecode, PyTest): pass +class TestCDecode(TestDecode, CTest): pass diff --git a/lib-python/2.7/json/tests/test_default.py b/lib-python/2.7/json/tests/test_default.py --- a/lib-python/2.7/json/tests/test_default.py +++ b/lib-python/2.7/json/tests/test_default.py @@ -1,9 +1,12 @@ -from unittest import TestCase +from json.tests import PyTest, CTest -import json -class TestDefault(TestCase): +class TestDefault(object): def test_default(self): self.assertEqual( - json.dumps(type, default=repr), - json.dumps(repr(type))) + self.dumps(type, default=repr), + self.dumps(repr(type))) + + +class TestPyDefault(TestDefault, PyTest): pass +class TestCDefault(TestDefault, CTest): pass diff --git a/lib-python/2.7/json/tests/test_dump.py b/lib-python/2.7/json/tests/test_dump.py --- a/lib-python/2.7/json/tests/test_dump.py +++ b/lib-python/2.7/json/tests/test_dump.py @@ -1,21 +1,23 @@ -from unittest import TestCase from cStringIO import StringIO +from json.tests import PyTest, CTest -import json -class TestDump(TestCase): +class TestDump(object): def test_dump(self): sio = StringIO() - json.dump({}, sio) + self.json.dump({}, sio) self.assertEqual(sio.getvalue(), '{}') def 
test_dumps(self): - self.assertEqual(json.dumps({}), '{}') + self.assertEqual(self.dumps({}), '{}') def test_encode_truefalse(self): - self.assertEqual(json.dumps( + self.assertEqual(self.dumps( {True: False, False: True}, sort_keys=True), '{"false": true, "true": false}') - self.assertEqual(json.dumps( + self.assertEqual(self.dumps( {2: 3.0, 4.0: 5L, False: 1, 6L: True}, sort_keys=True), '{"false": 1, "2": 3.0, "4.0": 5, "6": true}') + +class TestPyDump(TestDump, PyTest): pass +class TestCDump(TestDump, CTest): pass diff --git a/lib-python/2.7/json/tests/test_encode_basestring_ascii.py b/lib-python/2.7/json/tests/test_encode_basestring_ascii.py --- a/lib-python/2.7/json/tests/test_encode_basestring_ascii.py +++ b/lib-python/2.7/json/tests/test_encode_basestring_ascii.py @@ -1,8 +1,6 @@ -from unittest import TestCase +from collections import OrderedDict +from json.tests import PyTest, CTest -import json.encoder -from json import dumps -from collections import OrderedDict CASES = [ (u'/\\"\ucafe\ubabe\uab98\ufcde\ubcda\uef4a\x08\x0c\n\r\t`1~!@#$%^&*()_+-=[]{}|;:\',./<>?', '"/\\\\\\"\\ucafe\\ubabe\\uab98\\ufcde\\ubcda\\uef4a\\b\\f\\n\\r\\t`1~!@#$%^&*()_+-=[]{}|;:\',./<>?"'), @@ -23,19 +21,11 @@ (u'\u0123\u4567\u89ab\ucdef\uabcd\uef4a', '"\\u0123\\u4567\\u89ab\\ucdef\\uabcd\\uef4a"'), ] -class TestEncodeBaseStringAscii(TestCase): - def test_py_encode_basestring_ascii(self): - self._test_encode_basestring_ascii(json.encoder.py_encode_basestring_ascii) - - def test_c_encode_basestring_ascii(self): - if not json.encoder.c_encode_basestring_ascii: - return - self._test_encode_basestring_ascii(json.encoder.c_encode_basestring_ascii) - - def _test_encode_basestring_ascii(self, encode_basestring_ascii): - fname = encode_basestring_ascii.__name__ +class TestEncodeBasestringAscii(object): + def test_encode_basestring_ascii(self): + fname = self.json.encoder.encode_basestring_ascii.__name__ for input_string, expect in CASES: - result = encode_basestring_ascii(input_string) + result = self.json.encoder.encode_basestring_ascii(input_string) self.assertEqual(result, expect, '{0!r} != {1!r} for {2}({3!r})'.format( result, expect, fname, input_string)) @@ -43,5 +33,9 @@ def test_ordered_dict(self): # See issue 6105 items = [('one', 1), ('two', 2), ('three', 3), ('four', 4), ('five', 5)] - s = json.dumps(OrderedDict(items)) + s = self.dumps(OrderedDict(items)) self.assertEqual(s, '{"one": 1, "two": 2, "three": 3, "four": 4, "five": 5}') + + +class TestPyEncodeBasestringAscii(TestEncodeBasestringAscii, PyTest): pass +class TestCEncodeBasestringAscii(TestEncodeBasestringAscii, CTest): pass diff --git a/lib-python/2.7/json/tests/test_fail.py b/lib-python/2.7/json/tests/test_fail.py --- a/lib-python/2.7/json/tests/test_fail.py +++ b/lib-python/2.7/json/tests/test_fail.py @@ -1,6 +1,4 @@ -from unittest import TestCase - -import json +from json.tests import PyTest, CTest # Fri Dec 30 18:57:26 2005 JSONDOCS = [ @@ -61,15 +59,15 @@ 18: "spec doesn't specify any nesting limitations", } -class TestFail(TestCase): +class TestFail(object): def test_failures(self): for idx, doc in enumerate(JSONDOCS): idx = idx + 1 if idx in SKIPS: - json.loads(doc) + self.loads(doc) continue try: - json.loads(doc) + self.loads(doc) except ValueError: pass else: @@ -79,7 +77,11 @@ data = {'a' : 1, (1, 2) : 2} #This is for c encoder - self.assertRaises(TypeError, json.dumps, data) + self.assertRaises(TypeError, self.dumps, data) #This is for python encoder - self.assertRaises(TypeError, json.dumps, data, indent=True) + 
self.assertRaises(TypeError, self.dumps, data, indent=True) + + +class TestPyFail(TestFail, PyTest): pass +class TestCFail(TestFail, CTest): pass diff --git a/lib-python/2.7/json/tests/test_float.py b/lib-python/2.7/json/tests/test_float.py --- a/lib-python/2.7/json/tests/test_float.py +++ b/lib-python/2.7/json/tests/test_float.py @@ -1,19 +1,22 @@ import math -from unittest import TestCase +from json.tests import PyTest, CTest -import json -class TestFloat(TestCase): +class TestFloat(object): def test_floats(self): for num in [1617161771.7650001, math.pi, math.pi**100, math.pi**-100, 3.1]: - self.assertEqual(float(json.dumps(num)), num) - self.assertEqual(json.loads(json.dumps(num)), num) - self.assertEqual(json.loads(unicode(json.dumps(num))), num) + self.assertEqual(float(self.dumps(num)), num) + self.assertEqual(self.loads(self.dumps(num)), num) + self.assertEqual(self.loads(unicode(self.dumps(num))), num) def test_ints(self): for num in [1, 1L, 1<<32, 1<<64]: - self.assertEqual(json.dumps(num), str(num)) - self.assertEqual(int(json.dumps(num)), num) - self.assertEqual(json.loads(json.dumps(num)), num) - self.assertEqual(json.loads(unicode(json.dumps(num))), num) + self.assertEqual(self.dumps(num), str(num)) + self.assertEqual(int(self.dumps(num)), num) + self.assertEqual(self.loads(self.dumps(num)), num) + self.assertEqual(self.loads(unicode(self.dumps(num))), num) + + +class TestPyFloat(TestFloat, PyTest): pass +class TestCFloat(TestFloat, CTest): pass diff --git a/lib-python/2.7/json/tests/test_indent.py b/lib-python/2.7/json/tests/test_indent.py --- a/lib-python/2.7/json/tests/test_indent.py +++ b/lib-python/2.7/json/tests/test_indent.py @@ -1,9 +1,9 @@ -from unittest import TestCase +import textwrap +from StringIO import StringIO +from json.tests import PyTest, CTest -import json -import textwrap -class TestIndent(TestCase): +class TestIndent(object): def test_indent(self): h = [['blorpie'], ['whoops'], [], 'd-shtaeou', 'd-nthiouh', 'i-vhbjkhnth', {'nifty': 87}, {'field': 'yes', 'morefield': False} ] @@ -30,12 +30,31 @@ ]""") - d1 = json.dumps(h) - d2 = json.dumps(h, indent=2, sort_keys=True, separators=(',', ': ')) + d1 = self.dumps(h) + d2 = self.dumps(h, indent=2, sort_keys=True, separators=(',', ': ')) - h1 = json.loads(d1) - h2 = json.loads(d2) + h1 = self.loads(d1) + h2 = self.loads(d2) self.assertEqual(h1, h) self.assertEqual(h2, h) self.assertEqual(d2, expect) + + def test_indent0(self): + h = {3: 1} + def check(indent, expected): + d1 = self.dumps(h, indent=indent) + self.assertEqual(d1, expected) + + sio = StringIO() + self.json.dump(h, sio, indent=indent) + self.assertEqual(sio.getvalue(), expected) + + # indent=0 should emit newlines + check(0, '{\n"3": 1\n}') + # indent=None is more compact + check(None, '{"3": 1}') + + +class TestPyIndent(TestIndent, PyTest): pass +class TestCIndent(TestIndent, CTest): pass diff --git a/lib-python/2.7/json/tests/test_pass1.py b/lib-python/2.7/json/tests/test_pass1.py --- a/lib-python/2.7/json/tests/test_pass1.py +++ b/lib-python/2.7/json/tests/test_pass1.py @@ -1,6 +1,5 @@ -from unittest import TestCase +from json.tests import PyTest, CTest -import json # from http://json.org/JSON_checker/test/pass1.json JSON = r''' @@ -62,15 +61,19 @@ ,"rosebud"] ''' -class TestPass1(TestCase): +class TestPass1(object): def test_parse(self): # test in/out equivalence and parsing - res = json.loads(JSON) - out = json.dumps(res) - self.assertEqual(res, json.loads(out)) + res = self.loads(JSON) + out = self.dumps(res) + self.assertEqual(res, 
self.loads(out)) try: - json.dumps(res, allow_nan=False) + self.dumps(res, allow_nan=False) except ValueError: pass else: self.fail("23456789012E666 should be out of range") + + +class TestPyPass1(TestPass1, PyTest): pass +class TestCPass1(TestPass1, CTest): pass diff --git a/lib-python/2.7/json/tests/test_pass2.py b/lib-python/2.7/json/tests/test_pass2.py --- a/lib-python/2.7/json/tests/test_pass2.py +++ b/lib-python/2.7/json/tests/test_pass2.py @@ -1,14 +1,18 @@ -from unittest import TestCase -import json +from json.tests import PyTest, CTest + # from http://json.org/JSON_checker/test/pass2.json JSON = r''' [[[[[[[[[[[[[[[[[[["Not too deep"]]]]]]]]]]]]]]]]]]] ''' -class TestPass2(TestCase): +class TestPass2(object): def test_parse(self): # test in/out equivalence and parsing - res = json.loads(JSON) - out = json.dumps(res) - self.assertEqual(res, json.loads(out)) + res = self.loads(JSON) + out = self.dumps(res) + self.assertEqual(res, self.loads(out)) + + +class TestPyPass2(TestPass2, PyTest): pass +class TestCPass2(TestPass2, CTest): pass diff --git a/lib-python/2.7/json/tests/test_pass3.py b/lib-python/2.7/json/tests/test_pass3.py --- a/lib-python/2.7/json/tests/test_pass3.py +++ b/lib-python/2.7/json/tests/test_pass3.py @@ -1,6 +1,5 @@ -from unittest import TestCase +from json.tests import PyTest, CTest -import json # from http://json.org/JSON_checker/test/pass3.json JSON = r''' @@ -12,9 +11,14 @@ } ''' -class TestPass3(TestCase): + +class TestPass3(object): def test_parse(self): # test in/out equivalence and parsing - res = json.loads(JSON) - out = json.dumps(res) - self.assertEqual(res, json.loads(out)) + res = self.loads(JSON) + out = self.dumps(res) + self.assertEqual(res, self.loads(out)) + + +class TestPyPass3(TestPass3, PyTest): pass +class TestCPass3(TestPass3, CTest): pass diff --git a/lib-python/2.7/json/tests/test_recursion.py b/lib-python/2.7/json/tests/test_recursion.py --- a/lib-python/2.7/json/tests/test_recursion.py +++ b/lib-python/2.7/json/tests/test_recursion.py @@ -1,28 +1,16 @@ -from unittest import TestCase +from json.tests import PyTest, CTest -import json class JSONTestObject: pass -class RecursiveJSONEncoder(json.JSONEncoder): - recurse = False - def default(self, o): - if o is JSONTestObject: - if self.recurse: - return [JSONTestObject] - else: - return 'JSONTestObject' - return json.JSONEncoder.default(o) - - -class TestRecursion(TestCase): +class TestRecursion(object): def test_listrecursion(self): x = [] x.append(x) try: - json.dumps(x) + self.dumps(x) except ValueError: pass else: @@ -31,7 +19,7 @@ y = [x] x.append(y) try: - json.dumps(x) + self.dumps(x) except ValueError: pass else: @@ -39,13 +27,13 @@ y = [] x = [y, y] # ensure that the marker is cleared - json.dumps(x) + self.dumps(x) def test_dictrecursion(self): x = {} x["test"] = x try: - json.dumps(x) + self.dumps(x) except ValueError: pass else: @@ -53,9 +41,19 @@ x = {} y = {"a": x, "b": x} # ensure that the marker is cleared - json.dumps(x) + self.dumps(x) def test_defaultrecursion(self): + class RecursiveJSONEncoder(self.json.JSONEncoder): + recurse = False + def default(self, o): + if o is JSONTestObject: + if self.recurse: + return [JSONTestObject] + else: + return 'JSONTestObject' + return pyjson.JSONEncoder.default(o) + enc = RecursiveJSONEncoder() self.assertEqual(enc.encode(JSONTestObject), '"JSONTestObject"') enc.recurse = True @@ -65,3 +63,46 @@ pass else: self.fail("didn't raise ValueError on default recursion") + + + def test_highly_nested_objects_decoding(self): + # test that loading 
highly-nested objects doesn't segfault when C + # accelerations are used. See #12017 + # str + with self.assertRaises(RuntimeError): + self.loads('{"a":' * 100000 + '1' + '}' * 100000) + with self.assertRaises(RuntimeError): + self.loads('{"a":' * 100000 + '[1]' + '}' * 100000) + with self.assertRaises(RuntimeError): + self.loads('[' * 100000 + '1' + ']' * 100000) + # unicode + with self.assertRaises(RuntimeError): + self.loads(u'{"a":' * 100000 + u'1' + u'}' * 100000) + with self.assertRaises(RuntimeError): + self.loads(u'{"a":' * 100000 + u'[1]' + u'}' * 100000) + with self.assertRaises(RuntimeError): + self.loads(u'[' * 100000 + u'1' + u']' * 100000) + + def test_highly_nested_objects_encoding(self): + # See #12051 + l, d = [], {} + for x in xrange(100000): + l, d = [l], {'k':d} + with self.assertRaises(RuntimeError): + self.dumps(l) + with self.assertRaises(RuntimeError): + self.dumps(d) + + def test_endless_recursion(self): + # See #12051 + class EndlessJSONEncoder(self.json.JSONEncoder): + def default(self, o): + """If check_circular is False, this will keep adding another list.""" + return [o] + + with self.assertRaises(RuntimeError): + EndlessJSONEncoder(check_circular=False).encode(5j) + + +class TestPyRecursion(TestRecursion, PyTest): pass +class TestCRecursion(TestRecursion, CTest): pass diff --git a/lib-python/2.7/json/tests/test_scanstring.py b/lib-python/2.7/json/tests/test_scanstring.py --- a/lib-python/2.7/json/tests/test_scanstring.py +++ b/lib-python/2.7/json/tests/test_scanstring.py @@ -1,18 +1,10 @@ import sys -import decimal -from unittest import TestCase +from json.tests import PyTest, CTest -import json -import json.decoder -class TestScanString(TestCase): - def test_py_scanstring(self): - self._test_scanstring(json.decoder.py_scanstring) - - def test_c_scanstring(self): - self._test_scanstring(json.decoder.c_scanstring) - - def _test_scanstring(self, scanstring): +class TestScanstring(object): + def test_scanstring(self): + scanstring = self.json.decoder.scanstring self.assertEqual( scanstring('"z\\ud834\\udd20x"', 1, None, True), (u'z\U0001d120x', 16)) @@ -103,10 +95,15 @@ (u'Bad value', 12)) def test_issue3623(self): - self.assertRaises(ValueError, json.decoder.scanstring, b"xxx", 1, + self.assertRaises(ValueError, self.json.decoder.scanstring, b"xxx", 1, "xxx") self.assertRaises(UnicodeDecodeError, - json.encoder.encode_basestring_ascii, b"xx\xff") + self.json.encoder.encode_basestring_ascii, b"xx\xff") def test_overflow(self): - self.assertRaises(OverflowError, json.decoder.scanstring, b"xxx", sys.maxsize+1) + with self.assertRaises(OverflowError): + self.json.decoder.scanstring(b"xxx", sys.maxsize+1) + + +class TestPyScanstring(TestScanstring, PyTest): pass +class TestCScanstring(TestScanstring, CTest): pass diff --git a/lib-python/2.7/json/tests/test_separators.py b/lib-python/2.7/json/tests/test_separators.py --- a/lib-python/2.7/json/tests/test_separators.py +++ b/lib-python/2.7/json/tests/test_separators.py @@ -1,10 +1,8 @@ import textwrap -from unittest import TestCase +from json.tests import PyTest, CTest -import json - -class TestSeparators(TestCase): +class TestSeparators(object): def test_separators(self): h = [['blorpie'], ['whoops'], [], 'd-shtaeou', 'd-nthiouh', 'i-vhbjkhnth', {'nifty': 87}, {'field': 'yes', 'morefield': False} ] @@ -31,12 +29,16 @@ ]""") - d1 = json.dumps(h) - d2 = json.dumps(h, indent=2, sort_keys=True, separators=(' ,', ' : ')) + d1 = self.dumps(h) + d2 = self.dumps(h, indent=2, sort_keys=True, separators=(' ,', ' : ')) - h1 = 
json.loads(d1) - h2 = json.loads(d2) + h1 = self.loads(d1) + h2 = self.loads(d2) self.assertEqual(h1, h) self.assertEqual(h2, h) self.assertEqual(d2, expect) + + +class TestPySeparators(TestSeparators, PyTest): pass +class TestCSeparators(TestSeparators, CTest): pass diff --git a/lib-python/2.7/json/tests/test_speedups.py b/lib-python/2.7/json/tests/test_speedups.py --- a/lib-python/2.7/json/tests/test_speedups.py +++ b/lib-python/2.7/json/tests/test_speedups.py @@ -1,24 +1,23 @@ -import decimal -from unittest import TestCase +from json.tests import CTest -from json import decoder, encoder, scanner -class TestSpeedups(TestCase): +class TestSpeedups(CTest): def test_scanstring(self): - self.assertEqual(decoder.scanstring.__module__, "_json") - self.assertTrue(decoder.scanstring is decoder.c_scanstring) + self.assertEqual(self.json.decoder.scanstring.__module__, "_json") + self.assertIs(self.json.decoder.scanstring, self.json.decoder.c_scanstring) def test_encode_basestring_ascii(self): - self.assertEqual(encoder.encode_basestring_ascii.__module__, "_json") - self.assertTrue(encoder.encode_basestring_ascii is - encoder.c_encode_basestring_ascii) + self.assertEqual(self.json.encoder.encode_basestring_ascii.__module__, + "_json") + self.assertIs(self.json.encoder.encode_basestring_ascii, + self.json.encoder.c_encode_basestring_ascii) -class TestDecode(TestCase): +class TestDecode(CTest): def test_make_scanner(self): - self.assertRaises(AttributeError, scanner.c_make_scanner, 1) + self.assertRaises(AttributeError, self.json.scanner.c_make_scanner, 1) def test_make_encoder(self): - self.assertRaises(TypeError, encoder.c_make_encoder, + self.assertRaises(TypeError, self.json.encoder.c_make_encoder, None, "\xCD\x7D\x3D\x4E\x12\x4C\xF9\x79\xD7\x52\xBA\x82\xF2\x27\x4A\x7D\xA0\xCA\x75", None) diff --git a/lib-python/2.7/json/tests/test_unicode.py b/lib-python/2.7/json/tests/test_unicode.py --- a/lib-python/2.7/json/tests/test_unicode.py +++ b/lib-python/2.7/json/tests/test_unicode.py @@ -1,11 +1,10 @@ -from unittest import TestCase +from collections import OrderedDict +from json.tests import PyTest, CTest -import json -from collections import OrderedDict -class TestUnicode(TestCase): +class TestUnicode(object): def test_encoding1(self): - encoder = json.JSONEncoder(encoding='utf-8') + encoder = self.json.JSONEncoder(encoding='utf-8') u = u'\N{GREEK SMALL LETTER ALPHA}\N{GREEK CAPITAL LETTER OMEGA}' s = u.encode('utf-8') ju = encoder.encode(u) @@ -15,68 +14,72 @@ def test_encoding2(self): u = u'\N{GREEK SMALL LETTER ALPHA}\N{GREEK CAPITAL LETTER OMEGA}' s = u.encode('utf-8') - ju = json.dumps(u, encoding='utf-8') - js = json.dumps(s, encoding='utf-8') + ju = self.dumps(u, encoding='utf-8') + js = self.dumps(s, encoding='utf-8') self.assertEqual(ju, js) def test_encoding3(self): u = u'\N{GREEK SMALL LETTER ALPHA}\N{GREEK CAPITAL LETTER OMEGA}' - j = json.dumps(u) + j = self.dumps(u) self.assertEqual(j, '"\\u03b1\\u03a9"') def test_encoding4(self): u = u'\N{GREEK SMALL LETTER ALPHA}\N{GREEK CAPITAL LETTER OMEGA}' - j = json.dumps([u]) + j = self.dumps([u]) self.assertEqual(j, '["\\u03b1\\u03a9"]') def test_encoding5(self): u = u'\N{GREEK SMALL LETTER ALPHA}\N{GREEK CAPITAL LETTER OMEGA}' - j = json.dumps(u, ensure_ascii=False) + j = self.dumps(u, ensure_ascii=False) self.assertEqual(j, u'"{0}"'.format(u)) def test_encoding6(self): u = u'\N{GREEK SMALL LETTER ALPHA}\N{GREEK CAPITAL LETTER OMEGA}' - j = json.dumps([u], ensure_ascii=False) + j = self.dumps([u], ensure_ascii=False) self.assertEqual(j, 
u'["{0}"]'.format(u)) def test_big_unicode_encode(self): u = u'\U0001d120' - self.assertEqual(json.dumps(u), '"\\ud834\\udd20"') - self.assertEqual(json.dumps(u, ensure_ascii=False), u'"\U0001d120"') + self.assertEqual(self.dumps(u), '"\\ud834\\udd20"') + self.assertEqual(self.dumps(u, ensure_ascii=False), u'"\U0001d120"') def test_big_unicode_decode(self): u = u'z\U0001d120x' - self.assertEqual(json.loads('"' + u + '"'), u) - self.assertEqual(json.loads('"z\\ud834\\udd20x"'), u) + self.assertEqual(self.loads('"' + u + '"'), u) + self.assertEqual(self.loads('"z\\ud834\\udd20x"'), u) def test_unicode_decode(self): for i in range(0, 0xd7ff): u = unichr(i) s = '"\\u{0:04x}"'.format(i) - self.assertEqual(json.loads(s), u) + self.assertEqual(self.loads(s), u) def test_object_pairs_hook_with_unicode(self): s = u'{"xkd":1, "kcw":2, "art":3, "hxm":4, "qrt":5, "pad":6, "hoy":7}' p = [(u"xkd", 1), (u"kcw", 2), (u"art", 3), (u"hxm", 4), (u"qrt", 5), (u"pad", 6), (u"hoy", 7)] - self.assertEqual(json.loads(s), eval(s)) - self.assertEqual(json.loads(s, object_pairs_hook = lambda x: x), p) - od = json.loads(s, object_pairs_hook = OrderedDict) + self.assertEqual(self.loads(s), eval(s)) + self.assertEqual(self.loads(s, object_pairs_hook = lambda x: x), p) + od = self.loads(s, object_pairs_hook = OrderedDict) self.assertEqual(od, OrderedDict(p)) self.assertEqual(type(od), OrderedDict) # the object_pairs_hook takes priority over the object_hook - self.assertEqual(json.loads(s, + self.assertEqual(self.loads(s, object_pairs_hook = OrderedDict, object_hook = lambda x: None), OrderedDict(p)) def test_default_encoding(self): - self.assertEqual(json.loads(u'{"a": "\xe9"}'.encode('utf-8')), + self.assertEqual(self.loads(u'{"a": "\xe9"}'.encode('utf-8')), {'a': u'\xe9'}) def test_unicode_preservation(self): - self.assertEqual(type(json.loads(u'""')), unicode) - self.assertEqual(type(json.loads(u'"a"')), unicode) - self.assertEqual(type(json.loads(u'["a"]')[0]), unicode) + self.assertEqual(type(self.loads(u'""')), unicode) + self.assertEqual(type(self.loads(u'"a"')), unicode) + self.assertEqual(type(self.loads(u'["a"]')[0]), unicode) # Issue 10038. - self.assertEqual(type(json.loads('"foo"')), unicode) + self.assertEqual(type(self.loads('"foo"')), unicode) + + +class TestPyUnicode(TestUnicode, PyTest): pass +class TestCUnicode(TestUnicode, CTest): pass diff --git a/lib-python/2.7/lib-tk/Tix.py b/lib-python/2.7/lib-tk/Tix.py --- a/lib-python/2.7/lib-tk/Tix.py +++ b/lib-python/2.7/lib-tk/Tix.py @@ -163,7 +163,7 @@ extensions) exist, then the image type is chosen according to the depth of the X display: xbm images are chosen on monochrome displays and color images are chosen on color displays. By using - tix_ getimage, you can advoid hard coding the pathnames of the + tix_ getimage, you can avoid hard coding the pathnames of the image files in your application. When successful, this command returns the name of the newly created image, which can be used to configure the -image option of the Tk and Tix widgets. @@ -171,7 +171,7 @@ return self.tk.call('tix', 'getimage', name) def tix_option_get(self, name): - """Gets the options manitained by the Tix + """Gets the options maintained by the Tix scheme mechanism. Available options include: active_bg active_fg bg @@ -576,7 +576,7 @@ class ComboBox(TixWidget): """ComboBox - an Entry field with a dropdown menu. 
The user can select a - choice by either typing in the entry subwdget or selecting from the + choice by either typing in the entry subwidget or selecting from the listbox subwidget. Subwidget Class @@ -869,7 +869,7 @@ """HList - Hierarchy display widget can be used to display any data that have a hierarchical structure, for example, file system directory trees. The list entries are indented and connected by branch lines - according to their places in the hierachy. + according to their places in the hierarchy. Subwidgets - None""" @@ -1520,7 +1520,7 @@ self.tk.call(self._w, 'selection', 'set', first, last) class Tree(TixWidget): - """Tree - The tixTree widget can be used to display hierachical + """Tree - The tixTree widget can be used to display hierarchical data in a tree form. The user can adjust the view of the tree by opening or closing parts of the tree.""" diff --git a/lib-python/2.7/lib-tk/Tkinter.py b/lib-python/2.7/lib-tk/Tkinter.py --- a/lib-python/2.7/lib-tk/Tkinter.py +++ b/lib-python/2.7/lib-tk/Tkinter.py @@ -1660,7 +1660,7 @@ class Tk(Misc, Wm): """Toplevel widget of Tk which represents mostly the main window - of an appliation. It has an associated Tcl interpreter.""" + of an application. It has an associated Tcl interpreter.""" _w = '.' def __init__(self, screenName=None, baseName=None, className='Tk', useTk=1, sync=0, use=None): diff --git a/lib-python/2.7/lib-tk/test/test_ttk/test_functions.py b/lib-python/2.7/lib-tk/test/test_ttk/test_functions.py --- a/lib-python/2.7/lib-tk/test/test_ttk/test_functions.py +++ b/lib-python/2.7/lib-tk/test/test_ttk/test_functions.py @@ -136,7 +136,7 @@ # minimum acceptable for image type self.assertEqual(ttk._format_elemcreate('image', False, 'test'), ("test ", ())) - # specifiyng a state spec + # specifying a state spec self.assertEqual(ttk._format_elemcreate('image', False, 'test', ('', 'a')), ("test {} a", ())) # state spec with multiple states diff --git a/lib-python/2.7/lib-tk/ttk.py b/lib-python/2.7/lib-tk/ttk.py --- a/lib-python/2.7/lib-tk/ttk.py +++ b/lib-python/2.7/lib-tk/ttk.py @@ -707,7 +707,7 @@ textvariable, values, width """ # The "values" option may need special formatting, so leave to - # _format_optdict the responsability to format it + # _format_optdict the responsibility to format it if "values" in kw: kw["values"] = _format_optdict({'v': kw["values"]})[1] @@ -993,7 +993,7 @@ pane is either an integer index or the name of a managed subwindow. If kw is not given, returns a dict of the pane option values. If option is specified then the value for that option is returned. - Otherwise, sets the options to the correspoding values.""" + Otherwise, sets the options to the corresponding values.""" if option is not None: kw[option] = None return _val_or_dict(kw, self.tk.call, self._w, "pane", pane) diff --git a/lib-python/2.7/lib-tk/turtle.py b/lib-python/2.7/lib-tk/turtle.py --- a/lib-python/2.7/lib-tk/turtle.py +++ b/lib-python/2.7/lib-tk/turtle.py @@ -1385,7 +1385,7 @@ Optional argument: picname -- a string, name of a gif-file or "nopic". - If picname is a filename, set the corresponing image as background. + If picname is a filename, set the corresponding image as background. If picname is "nopic", delete backgroundimage, if present. If picname is None, return the filename of the current backgroundimage. 
@@ -1409,7 +1409,7 @@ Optional arguments: canvwidth -- positive integer, new width of canvas in pixels canvheight -- positive integer, new height of canvas in pixels - bg -- colorstring or color-tupel, new backgroundcolor + bg -- colorstring or color-tuple, new backgroundcolor If no arguments are given, return current (canvaswidth, canvasheight) Do not alter the drawing window. To observe hidden parts of @@ -3079,9 +3079,9 @@ fill="", width=ps) # Turtle now at position old, self._position = old - ## if undo is done during crating a polygon, the last vertex - ## will be deleted. if the polygon is entirel deleted, - ## creatigPoly will be set to False. + ## if undo is done during creating a polygon, the last vertex + ## will be deleted. if the polygon is entirely deleted, + ## creatingPoly will be set to False. ## Polygons created before the last one will not be affected by undo() if self._creatingPoly: if len(self._poly) > 0: @@ -3221,7 +3221,7 @@ def dot(self, size=None, *color): """Draw a dot with diameter size, using color. - Optional argumentS: + Optional arguments: size -- an integer >= 1 (if given) color -- a colorstring or a numeric color tuple @@ -3691,7 +3691,7 @@ class Turtle(RawTurtle): - """RawTurtle auto-crating (scrolled) canvas. + """RawTurtle auto-creating (scrolled) canvas. When a Turtle object is created or a function derived from some Turtle method is called a TurtleScreen object is automatically created. @@ -3731,7 +3731,7 @@ filename -- a string, used as filename default value is turtle_docstringdict - Has to be called explicitely, (not used by the turtle-graphics classes) + Has to be called explicitly, (not used by the turtle-graphics classes) The docstring dictionary will be written to the Python script .py It is intended to serve as a template for translation of the docstrings into different languages. 
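
The turtle.py hunks above only correct typos, but the TurtleScreen.bgpic docstring they touch describes three picname modes (a filename, "nopic", or None). A hedged usage sketch of that behaviour, assuming a Tk display; "landscape.gif" is a purely hypothetical file name and must exist on disk for the call to succeed:

import turtle

screen = turtle.Screen()
print screen.bgpic()            # 'nopic' while no background image is set
screen.bgpic("landscape.gif")   # hypothetical gif; sets it as the background
screen.bgpic("nopic")           # removes the background image again
turtle.bye()
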
diff --git a/lib-python/2.7/lib2to3/__main__.py b/lib-python/2.7/lib2to3/__main__.py new file mode 100644 --- /dev/null +++ b/lib-python/2.7/lib2to3/__main__.py @@ -0,0 +1,4 @@ +import sys +from .main import main + +sys.exit(main("lib2to3.fixes")) diff --git a/lib-python/2.7/lib2to3/fixes/fix_itertools.py b/lib-python/2.7/lib2to3/fixes/fix_itertools.py --- a/lib-python/2.7/lib2to3/fixes/fix_itertools.py +++ b/lib-python/2.7/lib2to3/fixes/fix_itertools.py @@ -13,7 +13,7 @@ class FixItertools(fixer_base.BaseFix): BM_compatible = True - it_funcs = "('imap'|'ifilter'|'izip'|'ifilterfalse')" + it_funcs = "('imap'|'ifilter'|'izip'|'izip_longest'|'ifilterfalse')" PATTERN = """ power< it='itertools' trailer< @@ -28,7 +28,8 @@ def transform(self, node, results): prefix = None func = results['func'][0] - if 'it' in results and func.value != u'ifilterfalse': + if ('it' in results and + func.value not in (u'ifilterfalse', u'izip_longest')): dot, it = (results['dot'], results['it']) # Remove the 'itertools' prefix = it.prefix diff --git a/lib-python/2.7/lib2to3/fixes/fix_itertools_imports.py b/lib-python/2.7/lib2to3/fixes/fix_itertools_imports.py --- a/lib-python/2.7/lib2to3/fixes/fix_itertools_imports.py +++ b/lib-python/2.7/lib2to3/fixes/fix_itertools_imports.py @@ -31,9 +31,10 @@ if member_name in (u'imap', u'izip', u'ifilter'): child.value = None child.remove() - elif member_name == u'ifilterfalse': + elif member_name in (u'ifilterfalse', u'izip_longest'): node.changed() - name_node.value = u'filterfalse' + name_node.value = (u'filterfalse' if member_name[1] == u'f' + else u'zip_longest') # Make sure the import statement is still sane children = imports.children[:] or [imports] diff --git a/lib-python/2.7/lib2to3/fixes/fix_metaclass.py b/lib-python/2.7/lib2to3/fixes/fix_metaclass.py --- a/lib-python/2.7/lib2to3/fixes/fix_metaclass.py +++ b/lib-python/2.7/lib2to3/fixes/fix_metaclass.py @@ -48,7 +48,7 @@ """ for node in cls_node.children: if node.type == syms.suite: - # already in the prefered format, do nothing + # already in the preferred format, do nothing return # !%@#! 
oneliners have no suite node, we have to fake one up diff --git a/lib-python/2.7/lib2to3/fixes/fix_urllib.py b/lib-python/2.7/lib2to3/fixes/fix_urllib.py --- a/lib-python/2.7/lib2to3/fixes/fix_urllib.py +++ b/lib-python/2.7/lib2to3/fixes/fix_urllib.py @@ -12,7 +12,7 @@ MAPPING = {"urllib": [ ("urllib.request", - ["URLOpener", "FancyURLOpener", "urlretrieve", + ["URLopener", "FancyURLopener", "urlretrieve", "_urlopener", "urlopen", "urlcleanup", "pathname2url", "url2pathname"]), ("urllib.parse", diff --git a/lib-python/2.7/lib2to3/main.py b/lib-python/2.7/lib2to3/main.py --- a/lib-python/2.7/lib2to3/main.py +++ b/lib-python/2.7/lib2to3/main.py @@ -101,7 +101,7 @@ parser.add_option("-j", "--processes", action="store", default=1, type="int", help="Run 2to3 concurrently") parser.add_option("-x", "--nofix", action="append", default=[], - help="Prevent a fixer from being run.") + help="Prevent a transformation from being run") parser.add_option("-l", "--list-fixes", action="store_true", help="List available transformations") parser.add_option("-p", "--print-function", action="store_true", @@ -113,7 +113,7 @@ parser.add_option("-w", "--write", action="store_true", help="Write back modified files") parser.add_option("-n", "--nobackups", action="store_true", default=False, - help="Don't write backups for modified files.") + help="Don't write backups for modified files") # Parse command line arguments refactor_stdin = False diff --git a/lib-python/2.7/lib2to3/patcomp.py b/lib-python/2.7/lib2to3/patcomp.py --- a/lib-python/2.7/lib2to3/patcomp.py +++ b/lib-python/2.7/lib2to3/patcomp.py @@ -12,6 +12,7 @@ # Python imports import os +import StringIO # Fairly local imports from .pgen2 import driver, literals, token, tokenize, parse, grammar @@ -32,7 +33,7 @@ def tokenize_wrapper(input): """Tokenizes a string suppressing significant whitespace.""" skip = set((token.NEWLINE, token.INDENT, token.DEDENT)) - tokens = tokenize.generate_tokens(driver.generate_lines(input).next) + tokens = tokenize.generate_tokens(StringIO.StringIO(input).readline) for quintuple in tokens: type, value, start, end, line_text = quintuple if type not in skip: diff --git a/lib-python/2.7/lib2to3/pgen2/conv.py b/lib-python/2.7/lib2to3/pgen2/conv.py --- a/lib-python/2.7/lib2to3/pgen2/conv.py +++ b/lib-python/2.7/lib2to3/pgen2/conv.py @@ -51,7 +51,7 @@ self.finish_off() def parse_graminit_h(self, filename): - """Parse the .h file writen by pgen. (Internal) + """Parse the .h file written by pgen. (Internal) This file is a sequence of #define statements defining the nonterminals of the grammar as numbers. We build two tables @@ -82,7 +82,7 @@ return True def parse_graminit_c(self, filename): - """Parse the .c file writen by pgen. (Internal) + """Parse the .c file written by pgen. (Internal) The file looks as follows. 
The first two lines are always this: diff --git a/lib-python/2.7/lib2to3/pgen2/driver.py b/lib-python/2.7/lib2to3/pgen2/driver.py --- a/lib-python/2.7/lib2to3/pgen2/driver.py +++ b/lib-python/2.7/lib2to3/pgen2/driver.py @@ -19,6 +19,7 @@ import codecs import os import logging +import StringIO import sys # Pgen imports @@ -101,18 +102,10 @@ def parse_string(self, text, debug=False): """Parse a string and return the syntax tree.""" - tokens = tokenize.generate_tokens(generate_lines(text).next) + tokens = tokenize.generate_tokens(StringIO.StringIO(text).readline) return self.parse_tokens(tokens, debug) -def generate_lines(text): - """Generator that behaves like readline without using StringIO.""" - for line in text.splitlines(True): - yield line - while True: - yield "" - - def load_grammar(gt="Grammar.txt", gp=None, save=True, force=False, logger=None): """Load the grammar (maybe from a pickle).""" diff --git a/lib-python/2.7/lib2to3/pytree.py b/lib-python/2.7/lib2to3/pytree.py --- a/lib-python/2.7/lib2to3/pytree.py +++ b/lib-python/2.7/lib2to3/pytree.py @@ -658,8 +658,8 @@ content: optional sequence of subsequences of patterns; if absent, matches one node; if present, each subsequence is an alternative [*] - min: optinal minumum number of times to match, default 0 - max: optional maximum number of times tro match, default HUGE + min: optional minimum number of times to match, default 0 + max: optional maximum number of times to match, default HUGE name: optional name assigned to this match [*] Thus, if content is [[a, b, c], [d, e], [f, g, h]] this is @@ -743,9 +743,11 @@ else: # The reason for this is that hitting the recursion limit usually # results in some ugly messages about how RuntimeErrors are being - # ignored. - save_stderr = sys.stderr - sys.stderr = StringIO() + # ignored. We don't do this on non-CPython implementation because + # they don't have this problem. + if hasattr(sys, "getrefcount"): + save_stderr = sys.stderr + sys.stderr = StringIO() try: for count, r in self._recursive_matches(nodes, 0): if self.name: @@ -759,7 +761,8 @@ r[self.name] = nodes[:count] yield count, r finally: - sys.stderr = save_stderr + if hasattr(sys, "getrefcount"): + sys.stderr = save_stderr def _iterative_matches(self, nodes): """Helper to iteratively yield the matches.""" diff --git a/lib-python/2.7/lib2to3/refactor.py b/lib-python/2.7/lib2to3/refactor.py --- a/lib-python/2.7/lib2to3/refactor.py +++ b/lib-python/2.7/lib2to3/refactor.py @@ -302,13 +302,14 @@ Files and subdirectories starting with '.' are skipped. 
""" + py_ext = os.extsep + "py" for dirpath, dirnames, filenames in os.walk(dir_name): self.log_debug("Descending into %s", dirpath) dirnames.sort() filenames.sort() for name in filenames: - if not name.startswith(".") and \ - os.path.splitext(name)[1].endswith("py"): + if (not name.startswith(".") and + os.path.splitext(name)[1] == py_ext): fullname = os.path.join(dirpath, name) self.refactor_file(fullname, write, doctests_only) # Modify dirnames in-place to remove subdirs with leading dots diff --git a/lib-python/2.7/lib2to3/tests/data/py2_test_grammar.py b/lib-python/2.7/lib2to3/tests/data/py2_test_grammar.py --- a/lib-python/2.7/lib2to3/tests/data/py2_test_grammar.py +++ b/lib-python/2.7/lib2to3/tests/data/py2_test_grammar.py @@ -316,7 +316,7 @@ ### simple_stmt: small_stmt (';' small_stmt)* [';'] x = 1; pass; del x def foo(): - # verify statments that end with semi-colons + # verify statements that end with semi-colons x = 1; pass; del x; foo() diff --git a/lib-python/2.7/lib2to3/tests/data/py3_test_grammar.py b/lib-python/2.7/lib2to3/tests/data/py3_test_grammar.py --- a/lib-python/2.7/lib2to3/tests/data/py3_test_grammar.py +++ b/lib-python/2.7/lib2to3/tests/data/py3_test_grammar.py @@ -356,7 +356,7 @@ ### simple_stmt: small_stmt (';' small_stmt)* [';'] x = 1; pass; del x def foo(): - # verify statments that end with semi-colons + # verify statements that end with semi-colons x = 1; pass; del x; foo() diff --git a/lib-python/2.7/lib2to3/tests/test_fixers.py b/lib-python/2.7/lib2to3/tests/test_fixers.py --- a/lib-python/2.7/lib2to3/tests/test_fixers.py +++ b/lib-python/2.7/lib2to3/tests/test_fixers.py @@ -3623,16 +3623,24 @@ a = """%s(f, a)""" self.checkall(b, a) - def test_2(self): + def test_qualified(self): b = """itertools.ifilterfalse(a, b)""" a = """itertools.filterfalse(a, b)""" self.check(b, a) - def test_4(self): + b = """itertools.izip_longest(a, b)""" + a = """itertools.zip_longest(a, b)""" + self.check(b, a) + + def test_2(self): b = """ifilterfalse(a, b)""" a = """filterfalse(a, b)""" self.check(b, a) + b = """izip_longest(a, b)""" + a = """zip_longest(a, b)""" + self.check(b, a) + def test_space_1(self): b = """ %s(f, a)""" a = """ %s(f, a)""" @@ -3643,9 +3651,14 @@ a = """ itertools.filterfalse(a, b)""" self.check(b, a) + b = """ itertools.izip_longest(a, b)""" + a = """ itertools.zip_longest(a, b)""" + self.check(b, a) + def test_run_order(self): self.assert_runs_after('map', 'zip', 'filter') + class Test_itertools_imports(FixerTestCase): fixer = 'itertools_imports' @@ -3696,18 +3709,19 @@ s = "from itertools import bar as bang" self.unchanged(s) - def test_ifilter(self): - b = "from itertools import ifilterfalse" - a = "from itertools import filterfalse" - self.check(b, a) - - b = "from itertools import imap, ifilterfalse, foo" - a = "from itertools import filterfalse, foo" - self.check(b, a) - - b = "from itertools import bar, ifilterfalse, foo" - a = "from itertools import bar, filterfalse, foo" - self.check(b, a) + def test_ifilter_and_zip_longest(self): + for name in "filterfalse", "zip_longest": + b = "from itertools import i%s" % (name,) + a = "from itertools import %s" % (name,) + self.check(b, a) + + b = "from itertools import imap, i%s, foo" % (name,) + a = "from itertools import %s, foo" % (name,) + self.check(b, a) + + b = "from itertools import bar, i%s, foo" % (name,) + a = "from itertools import bar, %s, foo" % (name,) + self.check(b, a) def test_import_star(self): s = "from itertools import *" diff --git a/lib-python/2.7/lib2to3/tests/test_parser.py 
b/lib-python/2.7/lib2to3/tests/test_parser.py --- a/lib-python/2.7/lib2to3/tests/test_parser.py +++ b/lib-python/2.7/lib2to3/tests/test_parser.py @@ -19,6 +19,16 @@ # Local imports from lib2to3.pgen2 import tokenize from ..pgen2.parse import ParseError +from lib2to3.pygram import python_symbols as syms + + +class TestDriver(support.TestCase): + + def test_formfeed(self): + s = """print 1\n\x0Cprint 2\n""" + t = driver.parse_string(s) + self.assertEqual(t.children[0].children[0].type, syms.print_stmt) + self.assertEqual(t.children[1].children[0].type, syms.print_stmt) class GrammarTest(support.TestCase): diff --git a/lib-python/2.7/lib2to3/tests/test_refactor.py b/lib-python/2.7/lib2to3/tests/test_refactor.py --- a/lib-python/2.7/lib2to3/tests/test_refactor.py +++ b/lib-python/2.7/lib2to3/tests/test_refactor.py @@ -223,6 +223,7 @@ "hi.py", ".dumb", ".after.py", + "notpy.npy", "sappy"] expected = ["hi.py"] check(tree, expected) diff --git a/lib-python/2.7/lib2to3/tests/test_util.py b/lib-python/2.7/lib2to3/tests/test_util.py --- a/lib-python/2.7/lib2to3/tests/test_util.py +++ b/lib-python/2.7/lib2to3/tests/test_util.py @@ -568,8 +568,8 @@ def test_from_import(self): node = parse('bar()') - fixer_util.touch_import("cgi", "escape", node) - self.assertEqual(str(node), 'from cgi import escape\nbar()\n\n') + fixer_util.touch_import("html", "escape", node) + self.assertEqual(str(node), 'from html import escape\nbar()\n\n') def test_name_import(self): node = parse('bar()') diff --git a/lib-python/2.7/locale.py b/lib-python/2.7/locale.py --- a/lib-python/2.7/locale.py +++ b/lib-python/2.7/locale.py @@ -621,7 +621,7 @@ 'tactis': 'TACTIS', 'euc_jp': 'eucJP', 'euc_kr': 'eucKR', - 'utf_8': 'UTF8', + 'utf_8': 'UTF-8', 'koi8_r': 'KOI8-R', 'koi8_u': 'KOI8-U', # XXX This list is still incomplete. If you know more diff --git a/lib-python/2.7/logging/__init__.py b/lib-python/2.7/logging/__init__.py --- a/lib-python/2.7/logging/__init__.py +++ b/lib-python/2.7/logging/__init__.py @@ -1627,6 +1627,7 @@ h = wr() if h: try: + h.acquire() h.flush() h.close() except (IOError, ValueError): @@ -1635,6 +1636,8 @@ # references to them are still around at # application exit. pass + finally: + h.release() except: if raiseExceptions: raise diff --git a/lib-python/2.7/logging/config.py b/lib-python/2.7/logging/config.py --- a/lib-python/2.7/logging/config.py +++ b/lib-python/2.7/logging/config.py @@ -226,14 +226,14 @@ propagate = 1 logger = logging.getLogger(qn) if qn in existing: - i = existing.index(qn) + i = existing.index(qn) + 1 # start with the entry after qn prefixed = qn + "." 
pflen = len(prefixed) num_existing = len(existing) - i = i + 1 # look at the entry after qn - while (i < num_existing) and (existing[i][:pflen] == prefixed): - child_loggers.append(existing[i]) - i = i + 1 + while i < num_existing: + if existing[i][:pflen] == prefixed: + child_loggers.append(existing[i]) + i += 1 existing.remove(qn) if "level" in opts: level = cp.get(sectname, "level") diff --git a/lib-python/2.7/logging/handlers.py b/lib-python/2.7/logging/handlers.py --- a/lib-python/2.7/logging/handlers.py +++ b/lib-python/2.7/logging/handlers.py @@ -125,6 +125,7 @@ """ if self.stream: self.stream.close() + self.stream = None if self.backupCount > 0: for i in range(self.backupCount - 1, 0, -1): sfn = "%s.%d" % (self.baseFilename, i) @@ -324,6 +325,7 @@ """ if self.stream: self.stream.close() + self.stream = None # get the time that this sequence started at and make it a TimeTuple t = self.rolloverAt - self.interval if self.utc: diff --git a/lib-python/2.7/mailbox.py b/lib-python/2.7/mailbox.py --- a/lib-python/2.7/mailbox.py +++ b/lib-python/2.7/mailbox.py @@ -234,27 +234,35 @@ def __init__(self, dirname, factory=rfc822.Message, create=True): """Initialize a Maildir instance.""" Mailbox.__init__(self, dirname, factory, create) + self._paths = { + 'tmp': os.path.join(self._path, 'tmp'), + 'new': os.path.join(self._path, 'new'), + 'cur': os.path.join(self._path, 'cur'), + } if not os.path.exists(self._path): if create: os.mkdir(self._path, 0700) - os.mkdir(os.path.join(self._path, 'tmp'), 0700) - os.mkdir(os.path.join(self._path, 'new'), 0700) - os.mkdir(os.path.join(self._path, 'cur'), 0700) + for path in self._paths.values(): + os.mkdir(path, 0o700) else: raise NoSuchMailboxError(self._path) self._toc = {} - self._last_read = None # Records last time we read cur/new - # NOTE: we manually invalidate _last_read each time we do any - # modifications ourselves, otherwise we might get tripped up by - # bogus mtime behaviour on some systems (see issue #6896). 
+ self._toc_mtimes = {} + for subdir in ('cur', 'new'): + self._toc_mtimes[subdir] = os.path.getmtime(self._paths[subdir]) + self._last_read = time.time() # Records last time we read cur/new + self._skewfactor = 0.1 # Adjust if os/fs clocks are skewing def add(self, message): """Add message and return assigned key.""" tmp_file = self._create_tmp() try: self._dump_message(message, tmp_file) - finally: - _sync_close(tmp_file) + except BaseException: + tmp_file.close() + os.remove(tmp_file.name) + raise + _sync_close(tmp_file) if isinstance(message, MaildirMessage): subdir = message.get_subdir() suffix = self.colon + message.get_info() @@ -280,15 +288,11 @@ raise if isinstance(message, MaildirMessage): os.utime(dest, (os.path.getatime(dest), message.get_date())) - # Invalidate cached toc - self._last_read = None return uniq def remove(self, key): """Remove the keyed message; raise KeyError if it doesn't exist.""" os.remove(os.path.join(self._path, self._lookup(key))) - # Invalidate cached toc (only on success) - self._last_read = None def discard(self, key): """If the keyed message exists, remove it.""" @@ -323,8 +327,6 @@ if isinstance(message, MaildirMessage): os.utime(new_path, (os.path.getatime(new_path), message.get_date())) - # Invalidate cached toc - self._last_read = None def get_message(self, key): """Return a Message representation or raise a KeyError.""" @@ -380,8 +382,8 @@ def flush(self): """Write any pending changes to disk.""" # Maildir changes are always written immediately, so there's nothing - # to do except invalidate our cached toc. - self._last_read = None + # to do. + pass def lock(self): """Lock the mailbox.""" @@ -479,36 +481,39 @@ def _refresh(self): """Update table of contents mapping.""" - if self._last_read is not None: - for subdir in ('new', 'cur'): - mtime = os.path.getmtime(os.path.join(self._path, subdir)) - if mtime > self._last_read: - break - else: + # If it has been less than two seconds since the last _refresh() call, + # we have to unconditionally re-read the mailbox just in case it has + # been modified, because os.path.mtime() has a 2 sec resolution in the + # most common worst case (FAT) and a 1 sec resolution typically. This + # results in a few unnecessary re-reads when _refresh() is called + # multiple times in that interval, but once the clock ticks over, we + # will only re-read as needed. Because the filesystem might be being + # served by an independent system with its own clock, we record and + # compare with the mtimes from the filesystem. Because the other + # system's clock might be skewing relative to our clock, we add an + # extra delta to our wait. The default is one tenth second, but is an + # instance variable and so can be adjusted if dealing with a + # particularly skewed or irregular system. + if time.time() - self._last_read > 2 + self._skewfactor: + refresh = False + for subdir in self._toc_mtimes: + mtime = os.path.getmtime(self._paths[subdir]) + if mtime > self._toc_mtimes[subdir]: + refresh = True + self._toc_mtimes[subdir] = mtime + if not refresh: return - - # We record the current time - 1sec so that, if _refresh() is called - # again in the same second, we will always re-read the mailbox - # just in case it's been modified. (os.path.mtime() only has - # 1sec resolution.) This results in a few unnecessary re-reads - # when _refresh() is called multiple times in the same second, - # but once the clock ticks over, we will only re-read as needed. 
- now = time.time() - 1 - + # Refresh toc self._toc = {} - def update_dir (subdir): - path = os.path.join(self._path, subdir) + for subdir in self._toc_mtimes: + path = self._paths[subdir] for entry in os.listdir(path): p = os.path.join(path, entry) if os.path.isdir(p): continue uniq = entry.split(self.colon)[0] self._toc[uniq] = os.path.join(subdir, entry) - - update_dir('new') - update_dir('cur') - - self._last_read = now + self._last_read = time.time() def _lookup(self, key): """Use TOC to return subpath for given key, or raise a KeyError.""" @@ -551,7 +556,7 @@ f = open(self._path, 'wb+') else: raise NoSuchMailboxError(self._path) - elif e.errno == errno.EACCES: + elif e.errno in (errno.EACCES, errno.EROFS): f = open(self._path, 'rb') else: raise @@ -700,9 +705,14 @@ def _append_message(self, message): """Append message to mailbox and return (start, stop) offsets.""" self._file.seek(0, 2) - self._pre_message_hook(self._file) - offsets = self._install_message(message) - self._post_message_hook(self._file) + before = self._file.tell() + try: + self._pre_message_hook(self._file) + offsets = self._install_message(message) + self._post_message_hook(self._file) + except BaseException: + self._file.truncate(before) + raise self._file.flush() self._file_length = self._file.tell() # Record current length of mailbox return offsets @@ -868,18 +878,29 @@ new_key = max(keys) + 1 new_path = os.path.join(self._path, str(new_key)) f = _create_carefully(new_path) + closed = False try: if self._locked: _lock_file(f) try: - self._dump_message(message, f) + try: + self._dump_message(message, f) + except BaseException: + # Unlock and close so it can be deleted on Windows + if self._locked: + _unlock_file(f) + _sync_close(f) + closed = True + os.remove(new_path) + raise if isinstance(message, MHMessage): self._dump_sequences(message, new_key) finally: if self._locked: _unlock_file(f) finally: - _sync_close(f) + if not closed: + _sync_close(f) return new_key def remove(self, key): @@ -1886,7 +1907,7 @@ try: fcntl.lockf(f, fcntl.LOCK_EX | fcntl.LOCK_NB) except IOError, e: - if e.errno in (errno.EAGAIN, errno.EACCES): + if e.errno in (errno.EAGAIN, errno.EACCES, errno.EROFS): raise ExternalClashError('lockf: lock unavailable: %s' % f.name) else: @@ -1896,7 +1917,7 @@ pre_lock = _create_temporary(f.name + '.lock') pre_lock.close() except IOError, e: - if e.errno == errno.EACCES: + if e.errno in (errno.EACCES, errno.EROFS): return # Without write access, just skip dotlocking. 
else: raise diff --git a/lib-python/2.7/msilib/__init__.py b/lib-python/2.7/msilib/__init__.py --- a/lib-python/2.7/msilib/__init__.py +++ b/lib-python/2.7/msilib/__init__.py @@ -173,11 +173,10 @@ add_data(db, table, getattr(module, table)) def make_id(str): - #str = str.replace(".", "_") # colons are allowed - str = str.replace(" ", "_") - str = str.replace("-", "_") - if str[0] in string.digits: - str = "_"+str + identifier_chars = string.ascii_letters + string.digits + "._" + str = "".join([c if c in identifier_chars else "_" for c in str]) + if str[0] in (string.digits + "."): + str = "_" + str assert re.match("^[A-Za-z_][A-Za-z0-9_.]*$", str), "FILE"+str return str @@ -285,19 +284,28 @@ [(feature.id, component)]) def make_short(self, file): + oldfile = file + file = file.replace('+', '_') + file = ''.join(c for c in file if not c in ' "/\[]:;=,') parts = file.split(".") - if len(parts)>1: + if len(parts) > 1: + prefix = "".join(parts[:-1]).upper() suffix = parts[-1].upper() + if not prefix: + prefix = suffix + suffix = None else: + prefix = file.upper() suffix = None - prefix = parts[0].upper() - if len(prefix) <= 8 and (not suffix or len(suffix)<=3): + if len(parts) < 3 and len(prefix) <= 8 and file == oldfile and ( + not suffix or len(suffix) <= 3): if suffix: file = prefix+"."+suffix else: file = prefix - assert file not in self.short_names else: + file = None + if file is None or file in self.short_names: prefix = prefix[:6] if suffix: suffix = suffix[:3] diff --git a/lib-python/2.7/multiprocessing/__init__.py b/lib-python/2.7/multiprocessing/__init__.py --- a/lib-python/2.7/multiprocessing/__init__.py +++ b/lib-python/2.7/multiprocessing/__init__.py @@ -38,6 +38,7 @@ # HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT # LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY # OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __version__ = '0.70a1' @@ -115,8 +116,11 @@ except (ValueError, KeyError): num = 0 elif 'bsd' in sys.platform or sys.platform == 'darwin': + comm = '/sbin/sysctl -n hw.ncpu' + if sys.platform == 'darwin': + comm = '/usr' + comm try: - with os.popen('sysctl -n hw.ncpu') as p: + with os.popen(comm) as p: num = int(p.read()) except ValueError: num = 0 diff --git a/lib-python/2.7/multiprocessing/connection.py b/lib-python/2.7/multiprocessing/connection.py --- a/lib-python/2.7/multiprocessing/connection.py +++ b/lib-python/2.7/multiprocessing/connection.py @@ -3,7 +3,33 @@ # # multiprocessing/connection.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. 
+# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __all__ = [ 'Client', 'Listener', 'Pipe' ] diff --git a/lib-python/2.7/multiprocessing/dummy/__init__.py b/lib-python/2.7/multiprocessing/dummy/__init__.py --- a/lib-python/2.7/multiprocessing/dummy/__init__.py +++ b/lib-python/2.7/multiprocessing/dummy/__init__.py @@ -3,7 +3,33 @@ # # multiprocessing/dummy/__init__.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __all__ = [ diff --git a/lib-python/2.7/multiprocessing/dummy/connection.py b/lib-python/2.7/multiprocessing/dummy/connection.py --- a/lib-python/2.7/multiprocessing/dummy/connection.py +++ b/lib-python/2.7/multiprocessing/dummy/connection.py @@ -3,7 +3,33 @@ # # multiprocessing/dummy/connection.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. 
Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __all__ = [ 'Client', 'Listener', 'Pipe' ] diff --git a/lib-python/2.7/multiprocessing/forking.py b/lib-python/2.7/multiprocessing/forking.py --- a/lib-python/2.7/multiprocessing/forking.py +++ b/lib-python/2.7/multiprocessing/forking.py @@ -3,7 +3,33 @@ # # multiprocessing/forking.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # import os @@ -172,6 +198,7 @@ TERMINATE = 0x10000 WINEXE = (sys.platform == 'win32' and getattr(sys, 'frozen', False)) + WINSERVICE = sys.executable.lower().endswith("pythonservice.exe") exit = win32.ExitProcess close = win32.CloseHandle @@ -181,7 +208,7 @@ # People embedding Python want to modify it. 
# - if sys.executable.lower().endswith('pythonservice.exe'): + if WINSERVICE: _python_exe = os.path.join(sys.exec_prefix, 'python.exe') else: _python_exe = sys.executable @@ -371,7 +398,7 @@ if _logger is not None: d['log_level'] = _logger.getEffectiveLevel() - if not WINEXE: + if not WINEXE and not WINSERVICE: main_path = getattr(sys.modules['__main__'], '__file__', None) if not main_path and sys.argv[0] not in ('', '-c'): main_path = sys.argv[0] diff --git a/lib-python/2.7/multiprocessing/heap.py b/lib-python/2.7/multiprocessing/heap.py --- a/lib-python/2.7/multiprocessing/heap.py +++ b/lib-python/2.7/multiprocessing/heap.py @@ -3,7 +3,33 @@ # # multiprocessing/heap.py # -# Copyright (c) 2007-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # import bisect diff --git a/lib-python/2.7/multiprocessing/managers.py b/lib-python/2.7/multiprocessing/managers.py --- a/lib-python/2.7/multiprocessing/managers.py +++ b/lib-python/2.7/multiprocessing/managers.py @@ -4,7 +4,33 @@ # # multiprocessing/managers.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. 
+# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __all__ = [ 'BaseManager', 'SyncManager', 'BaseProxy', 'Token' ] diff --git a/lib-python/2.7/multiprocessing/pool.py b/lib-python/2.7/multiprocessing/pool.py --- a/lib-python/2.7/multiprocessing/pool.py +++ b/lib-python/2.7/multiprocessing/pool.py @@ -3,7 +3,33 @@ # # multiprocessing/pool.py # -# Copyright (c) 2007-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __all__ = ['Pool'] @@ -269,6 +295,8 @@ while pool._worker_handler._state == RUN and pool._state == RUN: pool._maintain_pool() time.sleep(0.1) + # send sentinel to stop workers + pool._taskqueue.put(None) debug('worker handler exiting') @staticmethod @@ -387,7 +415,6 @@ if self._state == RUN: self._state = CLOSE self._worker_handler._state = CLOSE - self._taskqueue.put(None) def terminate(self): debug('terminating pool') @@ -421,7 +448,6 @@ worker_handler._state = TERMINATE task_handler._state = TERMINATE - taskqueue.put(None) # sentinel debug('helping task handler/workers to finish') cls._help_stuff_finish(inqueue, task_handler, len(pool)) @@ -431,6 +457,11 @@ result_handler._state = TERMINATE outqueue.put(None) # sentinel + # We must wait for the worker handler to exit before terminating + # workers because we don't want workers to be restarted behind our back. 
+ debug('joining worker handler') + worker_handler.join() + # Terminate workers which haven't already finished. if pool and hasattr(pool[0], 'terminate'): debug('terminating workers') diff --git a/lib-python/2.7/multiprocessing/process.py b/lib-python/2.7/multiprocessing/process.py --- a/lib-python/2.7/multiprocessing/process.py +++ b/lib-python/2.7/multiprocessing/process.py @@ -3,7 +3,33 @@ # # multiprocessing/process.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __all__ = ['Process', 'current_process', 'active_children'] diff --git a/lib-python/2.7/multiprocessing/queues.py b/lib-python/2.7/multiprocessing/queues.py --- a/lib-python/2.7/multiprocessing/queues.py +++ b/lib-python/2.7/multiprocessing/queues.py @@ -3,7 +3,33 @@ # # multiprocessing/queues.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. 
IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __all__ = ['Queue', 'SimpleQueue', 'JoinableQueue'] diff --git a/lib-python/2.7/multiprocessing/reduction.py b/lib-python/2.7/multiprocessing/reduction.py --- a/lib-python/2.7/multiprocessing/reduction.py +++ b/lib-python/2.7/multiprocessing/reduction.py @@ -4,7 +4,33 @@ # # multiprocessing/reduction.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __all__ = [] diff --git a/lib-python/2.7/multiprocessing/sharedctypes.py b/lib-python/2.7/multiprocessing/sharedctypes.py --- a/lib-python/2.7/multiprocessing/sharedctypes.py +++ b/lib-python/2.7/multiprocessing/sharedctypes.py @@ -3,7 +3,33 @@ # # multiprocessing/sharedctypes.py # -# Copyright (c) 2007-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. 
+# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # import sys @@ -52,9 +78,11 @@ Returns a ctypes array allocated from shared memory ''' type_ = typecode_to_type.get(typecode_or_type, typecode_or_type) - if isinstance(size_or_initializer, int): + if isinstance(size_or_initializer, (int, long)): type_ = type_ * size_or_initializer - return _new_value(type_) + obj = _new_value(type_) + ctypes.memset(ctypes.addressof(obj), 0, ctypes.sizeof(obj)) + return obj else: type_ = type_ * len(size_or_initializer) result = _new_value(type_) diff --git a/lib-python/2.7/multiprocessing/synchronize.py b/lib-python/2.7/multiprocessing/synchronize.py --- a/lib-python/2.7/multiprocessing/synchronize.py +++ b/lib-python/2.7/multiprocessing/synchronize.py @@ -3,7 +3,33 @@ # # multiprocessing/synchronize.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __all__ = [ diff --git a/lib-python/2.7/multiprocessing/util.py b/lib-python/2.7/multiprocessing/util.py --- a/lib-python/2.7/multiprocessing/util.py +++ b/lib-python/2.7/multiprocessing/util.py @@ -3,7 +3,33 @@ # # multiprocessing/util.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. 
+# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # import itertools diff --git a/lib-python/2.7/netrc.py b/lib-python/2.7/netrc.py --- a/lib-python/2.7/netrc.py +++ b/lib-python/2.7/netrc.py @@ -34,11 +34,19 @@ def _parse(self, file, fp): lexer = shlex.shlex(fp) lexer.wordchars += r"""!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~""" + lexer.commenters = lexer.commenters.replace('#', '') while 1: # Look for a machine, default, or macdef top-level keyword toplevel = tt = lexer.get_token() if not tt: break + elif tt[0] == '#': + # seek to beginning of comment, in case reading the token put + # us on a new line, and then skip the rest of the line. + pos = len(tt) + 1 + lexer.instream.seek(-pos, 1) + lexer.instream.readline() + continue elif tt == 'machine': entryname = lexer.get_token() elif tt == 'default': @@ -64,8 +72,8 @@ self.hosts[entryname] = {} while 1: tt = lexer.get_token() - if (tt=='' or tt == 'machine' or - tt == 'default' or tt =='macdef'): + if (tt.startswith('#') or + tt in {'', 'machine', 'default', 'macdef'}): if password: self.hosts[entryname] = (login, account, password) lexer.push_token(tt) diff --git a/lib-python/2.7/nntplib.py b/lib-python/2.7/nntplib.py --- a/lib-python/2.7/nntplib.py +++ b/lib-python/2.7/nntplib.py @@ -103,7 +103,7 @@ readermode is sometimes necessary if you are connecting to an NNTP server on the local machine and intend to call - reader-specific comamnds, such as `group'. If you get + reader-specific commands, such as `group'. If you get unexpected NNTPPermanentErrors, you might need to set readermode. """ diff --git a/lib-python/2.7/ntpath.py b/lib-python/2.7/ntpath.py --- a/lib-python/2.7/ntpath.py +++ b/lib-python/2.7/ntpath.py @@ -310,7 +310,7 @@ # - $varname is accepted. # - %varname% is accepted. # - varnames can be made out of letters, digits and the characters '_-' -# (though is not verifed in the ${varname} and %varname% cases) +# (though is not verified in the ${varname} and %varname% cases) # XXX With COMMAND.COM you can use any characters in a variable name, # XXX except '^|<>='. 
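The netrc._parse() hunk above stops registering '#' as a shlex comment character and instead skips comment lines by hand at the top of the parse loop. A minimal sketch of the intended behaviour, with a made-up host and credentials purely for illustration, might look like this:

    import netrc, os, tempfile, textwrap

    # Hypothetical .netrc contents: a comment line followed by one entry.
    sample = textwrap.dedent("""\
        # mail account
        machine mail.example.com login bob password hunter2
        """)

    fd, path = tempfile.mkstemp()
    try:
        os.write(fd, sample)
        os.close(fd)
        # With comment lines skipped explicitly, the entry after the comment
        # is still found; this should print ('bob', None, 'hunter2').
        print netrc.netrc(path).authenticators("mail.example.com")
    finally:
        os.remove(path)
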
diff --git a/lib-python/2.7/nturl2path.py b/lib-python/2.7/nturl2path.py --- a/lib-python/2.7/nturl2path.py +++ b/lib-python/2.7/nturl2path.py @@ -25,11 +25,14 @@ error = 'Bad URL: ' + url raise IOError, error drive = comp[0][-1].upper() + path = drive + ':' components = comp[1].split('/') - path = drive + ':' - for comp in components: + for comp in components: if comp: path = path + '\\' + urllib.unquote(comp) + # Issue #11474: url like '/C|/' should convert into 'C:\\' + if path.endswith(':') and url.endswith('/'): + path += '\\' return path def pathname2url(p): diff --git a/lib-python/2.7/numbers.py b/lib-python/2.7/numbers.py --- a/lib-python/2.7/numbers.py +++ b/lib-python/2.7/numbers.py @@ -63,7 +63,7 @@ @abstractproperty def imag(self): - """Retrieve the real component of this number. + """Retrieve the imaginary component of this number. This should subclass Real. """ diff --git a/lib-python/2.7/optparse.py b/lib-python/2.7/optparse.py --- a/lib-python/2.7/optparse.py +++ b/lib-python/2.7/optparse.py @@ -1131,6 +1131,11 @@ prog : string the name of the current program (to override os.path.basename(sys.argv[0])). + description : string + A paragraph of text giving a brief overview of your program. + optparse reformats this paragraph to fit the current terminal + width and prints it when the user requests help (after usage, + but before the list of options). epilog : string paragraph of help text to print after option help diff --git a/lib-python/2.7/pickletools.py b/lib-python/2.7/pickletools.py --- a/lib-python/2.7/pickletools.py +++ b/lib-python/2.7/pickletools.py @@ -1370,7 +1370,7 @@ proto=0, doc="""Read an object from the memo and push it on the stack. - The index of the memo object to push is given by the newline-teriminated + The index of the memo object to push is given by the newline-terminated decimal string following. BINGET and LONG_BINGET are space-optimized versions. """), diff --git a/lib-python/2.7/pkgutil.py b/lib-python/2.7/pkgutil.py --- a/lib-python/2.7/pkgutil.py +++ b/lib-python/2.7/pkgutil.py @@ -11,7 +11,7 @@ __all__ = [ 'get_importer', 'iter_importers', 'get_loader', 'find_loader', - 'walk_packages', 'iter_modules', + 'walk_packages', 'iter_modules', 'get_data', 'ImpImporter', 'ImpLoader', 'read_code', 'extend_path', ] diff --git a/lib-python/2.7/platform.py b/lib-python/2.7/platform.py --- a/lib-python/2.7/platform.py +++ b/lib-python/2.7/platform.py @@ -503,7 +503,7 @@ info = pipe.read() if pipe.close(): raise os.error,'command failed' - # XXX How can I supress shell errors from being written + # XXX How can I suppress shell errors from being written # to stderr ? except os.error,why: #print 'Command %s failed: %s' % (cmd,why) @@ -1448,9 +1448,10 @@ """ Returns a string identifying the Python implementation. Currently, the following implementations are identified: - 'CPython' (C implementation of Python), - 'IronPython' (.NET implementation of Python), - 'Jython' (Java implementation of Python). + 'CPython' (C implementation of Python), + 'IronPython' (.NET implementation of Python), + 'Jython' (Java implementation of Python), + 'PyPy' (Python implementation of Python). """ return _sys_version()[0] diff --git a/lib-python/2.7/pydoc.py b/lib-python/2.7/pydoc.py --- a/lib-python/2.7/pydoc.py +++ b/lib-python/2.7/pydoc.py @@ -156,7 +156,7 @@ no.append(x) return yes, no -def visiblename(name, all=None): +def visiblename(name, all=None, obj=None): """Decide whether to show documentation on a variable.""" # Certain special names are redundant. 
_hidden_names = ('__builtins__', '__doc__', '__file__', '__path__', @@ -164,6 +164,9 @@ if name in _hidden_names: return 0 # Private names are hidden, but special names are displayed. if name.startswith('__') and name.endswith('__'): return 1 + # Namedtuples have public fields and methods with a single leading underscore + if name.startswith('_') and hasattr(obj, '_fields'): + return 1 if all is not None: # only document that which the programmer exported in __all__ return name in all @@ -475,9 +478,9 @@ def multicolumn(self, list, format, cols=4): """Format a list of items into a multi-column list.""" result = '' - rows = (len(list)+cols-1)/cols + rows = (len(list)+cols-1)//cols for col in range(cols): - result = result + '' % (100/cols) + result = result + '' % (100//cols) for i in range(rows*col, rows*col+rows): if i < len(list): result = result + format(list[i]) + '
\n' @@ -627,7 +630,7 @@ # if __all__ exists, believe it. Otherwise use old heuristic. if (all is not None or (inspect.getmodule(value) or object) is object): - if visiblename(key, all): + if visiblename(key, all, object): classes.append((key, value)) cdict[key] = cdict[value] = '#' + key for key, value in classes: @@ -643,13 +646,13 @@ # if __all__ exists, believe it. Otherwise use old heuristic. if (all is not None or inspect.isbuiltin(value) or inspect.getmodule(value) is object): - if visiblename(key, all): + if visiblename(key, all, object): funcs.append((key, value)) fdict[key] = '#-' + key if inspect.isfunction(value): fdict[value] = fdict[key] data = [] for key, value in inspect.getmembers(object, isdata): - if visiblename(key, all): + if visiblename(key, all, object): data.append((key, value)) doc = self.markup(getdoc(object), self.preformat, fdict, cdict) @@ -773,7 +776,7 @@ push('\n') return attrs - attrs = filter(lambda data: visiblename(data[0]), + attrs = filter(lambda data: visiblename(data[0], obj=object), classify_class_attrs(object)) mdict = {} for key, kind, homecls, value in attrs: @@ -1042,18 +1045,18 @@ # if __all__ exists, believe it. Otherwise use old heuristic. if (all is not None or (inspect.getmodule(value) or object) is object): - if visiblename(key, all): + if visiblename(key, all, object): classes.append((key, value)) funcs = [] for key, value in inspect.getmembers(object, inspect.isroutine): # if __all__ exists, believe it. Otherwise use old heuristic. if (all is not None or inspect.isbuiltin(value) or inspect.getmodule(value) is object): - if visiblename(key, all): + if visiblename(key, all, object): funcs.append((key, value)) data = [] for key, value in inspect.getmembers(object, isdata): - if visiblename(key, all): + if visiblename(key, all, object): data.append((key, value)) modpkgs = [] @@ -1113,7 +1116,7 @@ result = result + self.section('CREDITS', str(object.__credits__)) return result - def docclass(self, object, name=None, mod=None): + def docclass(self, object, name=None, mod=None, *ignored): """Produce text documentation for a given class object.""" realname = object.__name__ name = name or realname @@ -1186,7 +1189,7 @@ name, mod, maxlen=70, doc=doc) + '\n') return attrs - attrs = filter(lambda data: visiblename(data[0]), + attrs = filter(lambda data: visiblename(data[0], obj=object), classify_class_attrs(object)) while attrs: if mro: @@ -1718,8 +1721,9 @@ return '' return '' - def __call__(self, request=None): - if request is not None: + _GoInteractive = object() + def __call__(self, request=_GoInteractive): + if request is not self._GoInteractive: self.help(request) else: self.intro() diff --git a/lib-python/2.7/pydoc_data/topics.py b/lib-python/2.7/pydoc_data/topics.py --- a/lib-python/2.7/pydoc_data/topics.py +++ b/lib-python/2.7/pydoc_data/topics.py @@ -1,16 +1,16 @@ -# Autogenerated by Sphinx on Sat Jul 3 08:52:04 2010 +# Autogenerated by Sphinx on Sat Jun 11 09:49:30 2011 topics = {'assert': u'\nThe ``assert`` statement\n************************\n\nAssert statements are a convenient way to insert debugging assertions\ninto a program:\n\n assert_stmt ::= "assert" expression ["," expression]\n\nThe simple form, ``assert expression``, is equivalent to\n\n if __debug__:\n if not expression: raise AssertionError\n\nThe extended form, ``assert expression1, expression2``, is equivalent\nto\n\n if __debug__:\n if not expression1: raise AssertionError(expression2)\n\nThese equivalences assume that ``__debug__`` and ``AssertionError``\nrefer to the 
built-in variables with those names. In the current\nimplementation, the built-in variable ``__debug__`` is ``True`` under\nnormal circumstances, ``False`` when optimization is requested\n(command line option -O). The current code generator emits no code\nfor an assert statement when optimization is requested at compile\ntime. Note that it is unnecessary to include the source code for the\nexpression that failed in the error message; it will be displayed as\npart of the stack trace.\n\nAssignments to ``__debug__`` are illegal. The value for the built-in\nvariable is determined when the interpreter starts.\n', - 'assignment': u'\nAssignment statements\n*********************\n\nAssignment statements are used to (re)bind names to values and to\nmodify attributes or items of mutable objects:\n\n assignment_stmt ::= (target_list "=")+ (expression_list | yield_expression)\n target_list ::= target ("," target)* [","]\n target ::= identifier\n | "(" target_list ")"\n | "[" target_list "]"\n | attributeref\n | subscription\n | slicing\n\n(See section *Primaries* for the syntax definitions for the last three\nsymbols.)\n\nAn assignment statement evaluates the expression list (remember that\nthis can be a single expression or a comma-separated list, the latter\nyielding a tuple) and assigns the single resulting object to each of\nthe target lists, from left to right.\n\nAssignment is defined recursively depending on the form of the target\n(list). When a target is part of a mutable object (an attribute\nreference, subscription or slicing), the mutable object must\nultimately perform the assignment and decide about its validity, and\nmay raise an exception if the assignment is unacceptable. The rules\nobserved by various types and the exceptions raised are given with the\ndefinition of the object types (see section *The standard type\nhierarchy*).\n\nAssignment of an object to a target list is recursively defined as\nfollows.\n\n* If the target list is a single target: The object is assigned to\n that target.\n\n* If the target list is a comma-separated list of targets: The object\n must be an iterable with the same number of items as there are\n targets in the target list, and the items are assigned, from left to\n right, to the corresponding targets. (This rule is relaxed as of\n Python 1.5; in earlier versions, the object had to be a tuple.\n Since strings are sequences, an assignment like ``a, b = "xy"`` is\n now legal as long as the string has the right length.)\n\nAssignment of an object to a single target is recursively defined as\nfollows.\n\n* If the target is an identifier (name):\n\n * If the name does not occur in a ``global`` statement in the\n current code block: the name is bound to the object in the current\n local namespace.\n\n * Otherwise: the name is bound to the object in the current global\n namespace.\n\n The name is rebound if it was already bound. This may cause the\n reference count for the object previously bound to the name to reach\n zero, causing the object to be deallocated and its destructor (if it\n has one) to be called.\n\n* If the target is a target list enclosed in parentheses or in square\n brackets: The object must be an iterable with the same number of\n items as there are targets in the target list, and its items are\n assigned, from left to right, to the corresponding targets.\n\n* If the target is an attribute reference: The primary expression in\n the reference is evaluated. 
It should yield an object with\n assignable attributes; if this is not the case, ``TypeError`` is\n raised. That object is then asked to assign the assigned object to\n the given attribute; if it cannot perform the assignment, it raises\n an exception (usually but not necessarily ``AttributeError``).\n\n Note: If the object is a class instance and the attribute reference\n occurs on both sides of the assignment operator, the RHS expression,\n ``a.x`` can access either an instance attribute or (if no instance\n attribute exists) a class attribute. The LHS target ``a.x`` is\n always set as an instance attribute, creating it if necessary.\n Thus, the two occurrences of ``a.x`` do not necessarily refer to the\n same attribute: if the RHS expression refers to a class attribute,\n the LHS creates a new instance attribute as the target of the\n assignment:\n\n class Cls:\n x = 3 # class variable\n inst = Cls()\n inst.x = inst.x + 1 # writes inst.x as 4 leaving Cls.x as 3\n\n This description does not necessarily apply to descriptor\n attributes, such as properties created with ``property()``.\n\n* If the target is a subscription: The primary expression in the\n reference is evaluated. It should yield either a mutable sequence\n object (such as a list) or a mapping object (such as a dictionary).\n Next, the subscript expression is evaluated.\n\n If the primary is a mutable sequence object (such as a list), the\n subscript must yield a plain integer. If it is negative, the\n sequence\'s length is added to it. The resulting value must be a\n nonnegative integer less than the sequence\'s length, and the\n sequence is asked to assign the assigned object to its item with\n that index. If the index is out of range, ``IndexError`` is raised\n (assignment to a subscripted sequence cannot add new items to a\n list).\n\n If the primary is a mapping object (such as a dictionary), the\n subscript must have a type compatible with the mapping\'s key type,\n and the mapping is then asked to create a key/datum pair which maps\n the subscript to the assigned object. This can either replace an\n existing key/value pair with the same key value, or insert a new\n key/value pair (if no key with the same value existed).\n\n* If the target is a slicing: The primary expression in the reference\n is evaluated. It should yield a mutable sequence object (such as a\n list). The assigned object should be a sequence object of the same\n type. Next, the lower and upper bound expressions are evaluated,\n insofar they are present; defaults are zero and the sequence\'s\n length. The bounds should evaluate to (small) integers. If either\n bound is negative, the sequence\'s length is added to it. The\n resulting bounds are clipped to lie between zero and the sequence\'s\n length, inclusive. Finally, the sequence object is asked to replace\n the slice with the items of the assigned sequence. 
The length of\n the slice may be different from the length of the assigned sequence,\n thus changing the length of the target sequence, if the object\n allows it.\n\n**CPython implementation detail:** In the current implementation, the\nsyntax for targets is taken to be the same as for expressions, and\ninvalid syntax is rejected during the code generation phase, causing\nless detailed error messages.\n\nWARNING: Although the definition of assignment implies that overlaps\nbetween the left-hand side and the right-hand side are \'safe\' (for\nexample ``a, b = b, a`` swaps two variables), overlaps *within* the\ncollection of assigned-to variables are not safe! For instance, the\nfollowing program prints ``[0, 2]``:\n\n x = [0, 1]\n i = 0\n i, x[i] = 1, 2\n print x\n\n\nAugmented assignment statements\n===============================\n\nAugmented assignment is the combination, in a single statement, of a\nbinary operation and an assignment statement:\n\n augmented_assignment_stmt ::= augtarget augop (expression_list | yield_expression)\n augtarget ::= identifier | attributeref | subscription | slicing\n augop ::= "+=" | "-=" | "*=" | "/=" | "//=" | "%=" | "**="\n | ">>=" | "<<=" | "&=" | "^=" | "|="\n\n(See section *Primaries* for the syntax definitions for the last three\nsymbols.)\n\nAn augmented assignment evaluates the target (which, unlike normal\nassignment statements, cannot be an unpacking) and the expression\nlist, performs the binary operation specific to the type of assignment\non the two operands, and assigns the result to the original target.\nThe target is only evaluated once.\n\nAn augmented assignment expression like ``x += 1`` can be rewritten as\n``x = x + 1`` to achieve a similar, but not exactly equal effect. In\nthe augmented version, ``x`` is only evaluated once. Also, when\npossible, the actual operation is performed *in-place*, meaning that\nrather than creating a new object and assigning that to the target,\nthe old object is modified instead.\n\nWith the exception of assigning to tuples and multiple targets in a\nsingle statement, the assignment done by augmented assignment\nstatements is handled the same way as normal assignments. Similarly,\nwith the exception of the possible *in-place* behavior, the binary\noperation performed by augmented assignment is the same as the normal\nbinary operations.\n\nFor targets which are attribute references, the same *caveat about\nclass and instance attributes* applies as for regular assignments.\n', + 'assignment': u'\nAssignment statements\n*********************\n\nAssignment statements are used to (re)bind names to values and to\nmodify attributes or items of mutable objects:\n\n assignment_stmt ::= (target_list "=")+ (expression_list | yield_expression)\n target_list ::= target ("," target)* [","]\n target ::= identifier\n | "(" target_list ")"\n | "[" target_list "]"\n | attributeref\n | subscription\n | slicing\n\n(See section *Primaries* for the syntax definitions for the last three\nsymbols.)\n\nAn assignment statement evaluates the expression list (remember that\nthis can be a single expression or a comma-separated list, the latter\nyielding a tuple) and assigns the single resulting object to each of\nthe target lists, from left to right.\n\nAssignment is defined recursively depending on the form of the target\n(list). 
When a target is part of a mutable object (an attribute\nreference, subscription or slicing), the mutable object must\nultimately perform the assignment and decide about its validity, and\nmay raise an exception if the assignment is unacceptable. The rules\nobserved by various types and the exceptions raised are given with the\ndefinition of the object types (see section *The standard type\nhierarchy*).\n\nAssignment of an object to a target list is recursively defined as\nfollows.\n\n* If the target list is a single target: The object is assigned to\n that target.\n\n* If the target list is a comma-separated list of targets: The object\n must be an iterable with the same number of items as there are\n targets in the target list, and the items are assigned, from left to\n right, to the corresponding targets.\n\nAssignment of an object to a single target is recursively defined as\nfollows.\n\n* If the target is an identifier (name):\n\n * If the name does not occur in a ``global`` statement in the\n current code block: the name is bound to the object in the current\n local namespace.\n\n * Otherwise: the name is bound to the object in the current global\n namespace.\n\n The name is rebound if it was already bound. This may cause the\n reference count for the object previously bound to the name to reach\n zero, causing the object to be deallocated and its destructor (if it\n has one) to be called.\n\n* If the target is a target list enclosed in parentheses or in square\n brackets: The object must be an iterable with the same number of\n items as there are targets in the target list, and its items are\n assigned, from left to right, to the corresponding targets.\n\n* If the target is an attribute reference: The primary expression in\n the reference is evaluated. It should yield an object with\n assignable attributes; if this is not the case, ``TypeError`` is\n raised. That object is then asked to assign the assigned object to\n the given attribute; if it cannot perform the assignment, it raises\n an exception (usually but not necessarily ``AttributeError``).\n\n Note: If the object is a class instance and the attribute reference\n occurs on both sides of the assignment operator, the RHS expression,\n ``a.x`` can access either an instance attribute or (if no instance\n attribute exists) a class attribute. The LHS target ``a.x`` is\n always set as an instance attribute, creating it if necessary.\n Thus, the two occurrences of ``a.x`` do not necessarily refer to the\n same attribute: if the RHS expression refers to a class attribute,\n the LHS creates a new instance attribute as the target of the\n assignment:\n\n class Cls:\n x = 3 # class variable\n inst = Cls()\n inst.x = inst.x + 1 # writes inst.x as 4 leaving Cls.x as 3\n\n This description does not necessarily apply to descriptor\n attributes, such as properties created with ``property()``.\n\n* If the target is a subscription: The primary expression in the\n reference is evaluated. It should yield either a mutable sequence\n object (such as a list) or a mapping object (such as a dictionary).\n Next, the subscript expression is evaluated.\n\n If the primary is a mutable sequence object (such as a list), the\n subscript must yield a plain integer. If it is negative, the\n sequence\'s length is added to it. The resulting value must be a\n nonnegative integer less than the sequence\'s length, and the\n sequence is asked to assign the assigned object to its item with\n that index. 
If the index is out of range, ``IndexError`` is raised\n (assignment to a subscripted sequence cannot add new items to a\n list).\n\n If the primary is a mapping object (such as a dictionary), the\n subscript must have a type compatible with the mapping\'s key type,\n and the mapping is then asked to create a key/datum pair which maps\n the subscript to the assigned object. This can either replace an\n existing key/value pair with the same key value, or insert a new\n key/value pair (if no key with the same value existed).\n\n* If the target is a slicing: The primary expression in the reference\n is evaluated. It should yield a mutable sequence object (such as a\n list). The assigned object should be a sequence object of the same\n type. Next, the lower and upper bound expressions are evaluated,\n insofar they are present; defaults are zero and the sequence\'s\n length. The bounds should evaluate to (small) integers. If either\n bound is negative, the sequence\'s length is added to it. The\n resulting bounds are clipped to lie between zero and the sequence\'s\n length, inclusive. Finally, the sequence object is asked to replace\n the slice with the items of the assigned sequence. The length of\n the slice may be different from the length of the assigned sequence,\n thus changing the length of the target sequence, if the object\n allows it.\n\n**CPython implementation detail:** In the current implementation, the\nsyntax for targets is taken to be the same as for expressions, and\ninvalid syntax is rejected during the code generation phase, causing\nless detailed error messages.\n\nWARNING: Although the definition of assignment implies that overlaps\nbetween the left-hand side and the right-hand side are \'safe\' (for\nexample ``a, b = b, a`` swaps two variables), overlaps *within* the\ncollection of assigned-to variables are not safe! For instance, the\nfollowing program prints ``[0, 2]``:\n\n x = [0, 1]\n i = 0\n i, x[i] = 1, 2\n print x\n\n\nAugmented assignment statements\n===============================\n\nAugmented assignment is the combination, in a single statement, of a\nbinary operation and an assignment statement:\n\n augmented_assignment_stmt ::= augtarget augop (expression_list | yield_expression)\n augtarget ::= identifier | attributeref | subscription | slicing\n augop ::= "+=" | "-=" | "*=" | "/=" | "//=" | "%=" | "**="\n | ">>=" | "<<=" | "&=" | "^=" | "|="\n\n(See section *Primaries* for the syntax definitions for the last three\nsymbols.)\n\nAn augmented assignment evaluates the target (which, unlike normal\nassignment statements, cannot be an unpacking) and the expression\nlist, performs the binary operation specific to the type of assignment\non the two operands, and assigns the result to the original target.\nThe target is only evaluated once.\n\nAn augmented assignment expression like ``x += 1`` can be rewritten as\n``x = x + 1`` to achieve a similar, but not exactly equal effect. In\nthe augmented version, ``x`` is only evaluated once. Also, when\npossible, the actual operation is performed *in-place*, meaning that\nrather than creating a new object and assigning that to the target,\nthe old object is modified instead.\n\nWith the exception of assigning to tuples and multiple targets in a\nsingle statement, the assignment done by augmented assignment\nstatements is handled the same way as normal assignments. 
Similarly,\nwith the exception of the possible *in-place* behavior, the binary\noperation performed by augmented assignment is the same as the normal\nbinary operations.\n\nFor targets which are attribute references, the same *caveat about\nclass and instance attributes* applies as for regular assignments.\n', 'atom-identifiers': u'\nIdentifiers (Names)\n*******************\n\nAn identifier occurring as an atom is a name. See section\n*Identifiers and keywords* for lexical definition and section *Naming\nand binding* for documentation of naming and binding.\n\nWhen the name is bound to an object, evaluation of the atom yields\nthat object. When a name is not bound, an attempt to evaluate it\nraises a ``NameError`` exception.\n\n**Private name mangling:** When an identifier that textually occurs in\na class definition begins with two or more underscore characters and\ndoes not end in two or more underscores, it is considered a *private\nname* of that class. Private names are transformed to a longer form\nbefore code is generated for them. The transformation inserts the\nclass name in front of the name, with leading underscores removed, and\na single underscore inserted in front of the class name. For example,\nthe identifier ``__spam`` occurring in a class named ``Ham`` will be\ntransformed to ``_Ham__spam``. This transformation is independent of\nthe syntactical context in which the identifier is used. If the\ntransformed name is extremely long (longer than 255 characters),\nimplementation defined truncation may happen. If the class name\nconsists only of underscores, no transformation is done.\n', 'atom-literals': u"\nLiterals\n********\n\nPython supports string literals and various numeric literals:\n\n literal ::= stringliteral | integer | longinteger\n | floatnumber | imagnumber\n\nEvaluation of a literal yields an object of the given type (string,\ninteger, long integer, floating point number, complex number) with the\ngiven value. The value may be approximated in the case of floating\npoint and imaginary (complex) literals. See section *Literals* for\ndetails.\n\nAll literals correspond to immutable data types, and hence the\nobject's identity is less important than its value. Multiple\nevaluations of literals with the same value (either the same\noccurrence in the program text or a different occurrence) may obtain\nthe same object or a different object with the same value.\n", - 'attribute-access': u'\nCustomizing attribute access\n****************************\n\nThe following methods can be defined to customize the meaning of\nattribute access (use of, assignment to, or deletion of ``x.name``)\nfor class instances.\n\nobject.__getattr__(self, name)\n\n Called when an attribute lookup has not found the attribute in the\n usual places (i.e. it is not an instance attribute nor is it found\n in the class tree for ``self``). ``name`` is the attribute name.\n This method should return the (computed) attribute value or raise\n an ``AttributeError`` exception.\n\n Note that if the attribute is found through the normal mechanism,\n ``__getattr__()`` is not called. (This is an intentional asymmetry\n between ``__getattr__()`` and ``__setattr__()``.) This is done both\n for efficiency reasons and because otherwise ``__getattr__()``\n would have no way to access other attributes of the instance. Note\n that at least for instance variables, you can fake total control by\n not inserting any values in the instance attribute dictionary (but\n instead inserting them in another object). 
See the\n ``__getattribute__()`` method below for a way to actually get total\n control in new-style classes.\n\nobject.__setattr__(self, name, value)\n\n Called when an attribute assignment is attempted. This is called\n instead of the normal mechanism (i.e. store the value in the\n instance dictionary). *name* is the attribute name, *value* is the\n value to be assigned to it.\n\n If ``__setattr__()`` wants to assign to an instance attribute, it\n should not simply execute ``self.name = value`` --- this would\n cause a recursive call to itself. Instead, it should insert the\n value in the dictionary of instance attributes, e.g.,\n ``self.__dict__[name] = value``. For new-style classes, rather\n than accessing the instance dictionary, it should call the base\n class method with the same name, for example,\n ``object.__setattr__(self, name, value)``.\n\nobject.__delattr__(self, name)\n\n Like ``__setattr__()`` but for attribute deletion instead of\n assignment. This should only be implemented if ``del obj.name`` is\n meaningful for the object.\n\n\nMore attribute access for new-style classes\n===========================================\n\nThe following methods only apply to new-style classes.\n\nobject.__getattribute__(self, name)\n\n Called unconditionally to implement attribute accesses for\n instances of the class. If the class also defines\n ``__getattr__()``, the latter will not be called unless\n ``__getattribute__()`` either calls it explicitly or raises an\n ``AttributeError``. This method should return the (computed)\n attribute value or raise an ``AttributeError`` exception. In order\n to avoid infinite recursion in this method, its implementation\n should always call the base class method with the same name to\n access any attributes it needs, for example,\n ``object.__getattribute__(self, name)``.\n\n Note: This method may still be bypassed when looking up special methods\n as the result of implicit invocation via language syntax or\n built-in functions. See *Special method lookup for new-style\n classes*.\n\n\nImplementing Descriptors\n========================\n\nThe following methods only apply when an instance of the class\ncontaining the method (a so-called *descriptor* class) appears in the\nclass dictionary of another new-style class, known as the *owner*\nclass. In the examples below, "the attribute" refers to the attribute\nwhose name is the key of the property in the owner class\'\n``__dict__``. Descriptors can only be implemented as new-style\nclasses themselves.\n\nobject.__get__(self, instance, owner)\n\n Called to get the attribute of the owner class (class attribute\n access) or of an instance of that class (instance attribute\n access). *owner* is always the owner class, while *instance* is the\n instance that the attribute was accessed through, or ``None`` when\n the attribute is accessed through the *owner*. This method should\n return the (computed) attribute value or raise an\n ``AttributeError`` exception.\n\nobject.__set__(self, instance, value)\n\n Called to set the attribute on an instance *instance* of the owner\n class to a new value, *value*.\n\nobject.__delete__(self, instance)\n\n Called to delete the attribute on an instance *instance* of the\n owner class.\n\n\nInvoking Descriptors\n====================\n\nIn general, a descriptor is an object attribute with "binding\nbehavior", one whose attribute access has been overridden by methods\nin the descriptor protocol: ``__get__()``, ``__set__()``, and\n``__delete__()``. 
If any of those methods are defined for an object,\nit is said to be a descriptor.\n\nThe default behavior for attribute access is to get, set, or delete\nthe attribute from an object\'s dictionary. For instance, ``a.x`` has a\nlookup chain starting with ``a.__dict__[\'x\']``, then\n``type(a).__dict__[\'x\']``, and continuing through the base classes of\n``type(a)`` excluding metaclasses.\n\nHowever, if the looked-up value is an object defining one of the\ndescriptor methods, then Python may override the default behavior and\ninvoke the descriptor method instead. Where this occurs in the\nprecedence chain depends on which descriptor methods were defined and\nhow they were called. Note that descriptors are only invoked for new\nstyle objects or classes (ones that subclass ``object()`` or\n``type()``).\n\nThe starting point for descriptor invocation is a binding, ``a.x``.\nHow the arguments are assembled depends on ``a``:\n\nDirect Call\n The simplest and least common call is when user code directly\n invokes a descriptor method: ``x.__get__(a)``.\n\nInstance Binding\n If binding to a new-style object instance, ``a.x`` is transformed\n into the call: ``type(a).__dict__[\'x\'].__get__(a, type(a))``.\n\nClass Binding\n If binding to a new-style class, ``A.x`` is transformed into the\n call: ``A.__dict__[\'x\'].__get__(None, A)``.\n\nSuper Binding\n If ``a`` is an instance of ``super``, then the binding ``super(B,\n obj).m()`` searches ``obj.__class__.__mro__`` for the base class\n ``A`` immediately preceding ``B`` and then invokes the descriptor\n with the call: ``A.__dict__[\'m\'].__get__(obj, A)``.\n\nFor instance bindings, the precedence of descriptor invocation depends\non the which descriptor methods are defined. A descriptor can define\nany combination of ``__get__()``, ``__set__()`` and ``__delete__()``.\nIf it does not define ``__get__()``, then accessing the attribute will\nreturn the descriptor object itself unless there is a value in the\nobject\'s instance dictionary. If the descriptor defines ``__set__()``\nand/or ``__delete__()``, it is a data descriptor; if it defines\nneither, it is a non-data descriptor. Normally, data descriptors\ndefine both ``__get__()`` and ``__set__()``, while non-data\ndescriptors have just the ``__get__()`` method. Data descriptors with\n``__set__()`` and ``__get__()`` defined always override a redefinition\nin an instance dictionary. In contrast, non-data descriptors can be\noverridden by instances.\n\nPython methods (including ``staticmethod()`` and ``classmethod()``)\nare implemented as non-data descriptors. Accordingly, instances can\nredefine and override methods. This allows individual instances to\nacquire behaviors that differ from other instances of the same class.\n\nThe ``property()`` function is implemented as a data descriptor.\nAccordingly, instances cannot override the behavior of a property.\n\n\n__slots__\n=========\n\nBy default, instances of both old and new-style classes have a\ndictionary for attribute storage. This wastes space for objects\nhaving very few instance variables. The space consumption can become\nacute when creating large numbers of instances.\n\nThe default can be overridden by defining *__slots__* in a new-style\nclass definition. The *__slots__* declaration takes a sequence of\ninstance variables and reserves just enough space in each instance to\nhold a value for each variable. 
Space is saved because *__dict__* is\nnot created for each instance.\n\n__slots__\n\n This class variable can be assigned a string, iterable, or sequence\n of strings with variable names used by instances. If defined in a\n new-style class, *__slots__* reserves space for the declared\n variables and prevents the automatic creation of *__dict__* and\n *__weakref__* for each instance.\n\n New in version 2.2.\n\nNotes on using *__slots__*\n\n* When inheriting from a class without *__slots__*, the *__dict__*\n attribute of that class will always be accessible, so a *__slots__*\n definition in the subclass is meaningless.\n\n* Without a *__dict__* variable, instances cannot be assigned new\n variables not listed in the *__slots__* definition. Attempts to\n assign to an unlisted variable name raises ``AttributeError``. If\n dynamic assignment of new variables is desired, then add\n ``\'__dict__\'`` to the sequence of strings in the *__slots__*\n declaration.\n\n Changed in version 2.3: Previously, adding ``\'__dict__\'`` to the\n *__slots__* declaration would not enable the assignment of new\n attributes not specifically listed in the sequence of instance\n variable names.\n\n* Without a *__weakref__* variable for each instance, classes defining\n *__slots__* do not support weak references to its instances. If weak\n reference support is needed, then add ``\'__weakref__\'`` to the\n sequence of strings in the *__slots__* declaration.\n\n Changed in version 2.3: Previously, adding ``\'__weakref__\'`` to the\n *__slots__* declaration would not enable support for weak\n references.\n\n* *__slots__* are implemented at the class level by creating\n descriptors (*Implementing Descriptors*) for each variable name. As\n a result, class attributes cannot be used to set default values for\n instance variables defined by *__slots__*; otherwise, the class\n attribute would overwrite the descriptor assignment.\n\n* The action of a *__slots__* declaration is limited to the class\n where it is defined. As a result, subclasses will have a *__dict__*\n unless they also define *__slots__* (which must only contain names\n of any *additional* slots).\n\n* If a class defines a slot also defined in a base class, the instance\n variable defined by the base class slot is inaccessible (except by\n retrieving its descriptor directly from the base class). This\n renders the meaning of the program undefined. In the future, a\n check may be added to prevent this.\n\n* Nonempty *__slots__* does not work for classes derived from\n "variable-length" built-in types such as ``long``, ``str`` and\n ``tuple``.\n\n* Any non-string iterable may be assigned to *__slots__*. Mappings may\n also be used; however, in the future, special meaning may be\n assigned to the values corresponding to each key.\n\n* *__class__* assignment works only if both classes have the same\n *__slots__*.\n\n Changed in version 2.6: Previously, *__class__* assignment raised an\n error if either new or old class had *__slots__*.\n', + 'attribute-access': u'\nCustomizing attribute access\n****************************\n\nThe following methods can be defined to customize the meaning of\nattribute access (use of, assignment to, or deletion of ``x.name``)\nfor class instances.\n\nobject.__getattr__(self, name)\n\n Called when an attribute lookup has not found the attribute in the\n usual places (i.e. it is not an instance attribute nor is it found\n in the class tree for ``self``). 
``name`` is the attribute name.\n This method should return the (computed) attribute value or raise\n an ``AttributeError`` exception.\n\n Note that if the attribute is found through the normal mechanism,\n ``__getattr__()`` is not called. (This is an intentional asymmetry\n between ``__getattr__()`` and ``__setattr__()``.) This is done both\n for efficiency reasons and because otherwise ``__getattr__()``\n would have no way to access other attributes of the instance. Note\n that at least for instance variables, you can fake total control by\n not inserting any values in the instance attribute dictionary (but\n instead inserting them in another object). See the\n ``__getattribute__()`` method below for a way to actually get total\n control in new-style classes.\n\nobject.__setattr__(self, name, value)\n\n Called when an attribute assignment is attempted. This is called\n instead of the normal mechanism (i.e. store the value in the\n instance dictionary). *name* is the attribute name, *value* is the\n value to be assigned to it.\n\n If ``__setattr__()`` wants to assign to an instance attribute, it\n should not simply execute ``self.name = value`` --- this would\n cause a recursive call to itself. Instead, it should insert the\n value in the dictionary of instance attributes, e.g.,\n ``self.__dict__[name] = value``. For new-style classes, rather\n than accessing the instance dictionary, it should call the base\n class method with the same name, for example,\n ``object.__setattr__(self, name, value)``.\n\nobject.__delattr__(self, name)\n\n Like ``__setattr__()`` but for attribute deletion instead of\n assignment. This should only be implemented if ``del obj.name`` is\n meaningful for the object.\n\n\nMore attribute access for new-style classes\n===========================================\n\nThe following methods only apply to new-style classes.\n\nobject.__getattribute__(self, name)\n\n Called unconditionally to implement attribute accesses for\n instances of the class. If the class also defines\n ``__getattr__()``, the latter will not be called unless\n ``__getattribute__()`` either calls it explicitly or raises an\n ``AttributeError``. This method should return the (computed)\n attribute value or raise an ``AttributeError`` exception. In order\n to avoid infinite recursion in this method, its implementation\n should always call the base class method with the same name to\n access any attributes it needs, for example,\n ``object.__getattribute__(self, name)``.\n\n Note: This method may still be bypassed when looking up special methods\n as the result of implicit invocation via language syntax or\n built-in functions. See *Special method lookup for new-style\n classes*.\n\n\nImplementing Descriptors\n========================\n\nThe following methods only apply when an instance of the class\ncontaining the method (a so-called *descriptor* class) appears in an\n*owner* class (the descriptor must be in either the owner\'s class\ndictionary or in the class dictionary for one of its parents). In the\nexamples below, "the attribute" refers to the attribute whose name is\nthe key of the property in the owner class\' ``__dict__``.\n\nobject.__get__(self, instance, owner)\n\n Called to get the attribute of the owner class (class attribute\n access) or of an instance of that class (instance attribute\n access). *owner* is always the owner class, while *instance* is the\n instance that the attribute was accessed through, or ``None`` when\n the attribute is accessed through the *owner*. 
This method should\n return the (computed) attribute value or raise an\n ``AttributeError`` exception.\n\nobject.__set__(self, instance, value)\n\n Called to set the attribute on an instance *instance* of the owner\n class to a new value, *value*.\n\nobject.__delete__(self, instance)\n\n Called to delete the attribute on an instance *instance* of the\n owner class.\n\n\nInvoking Descriptors\n====================\n\nIn general, a descriptor is an object attribute with "binding\nbehavior", one whose attribute access has been overridden by methods\nin the descriptor protocol: ``__get__()``, ``__set__()``, and\n``__delete__()``. If any of those methods are defined for an object,\nit is said to be a descriptor.\n\nThe default behavior for attribute access is to get, set, or delete\nthe attribute from an object\'s dictionary. For instance, ``a.x`` has a\nlookup chain starting with ``a.__dict__[\'x\']``, then\n``type(a).__dict__[\'x\']``, and continuing through the base classes of\n``type(a)`` excluding metaclasses.\n\nHowever, if the looked-up value is an object defining one of the\ndescriptor methods, then Python may override the default behavior and\ninvoke the descriptor method instead. Where this occurs in the\nprecedence chain depends on which descriptor methods were defined and\nhow they were called. Note that descriptors are only invoked for new\nstyle objects or classes (ones that subclass ``object()`` or\n``type()``).\n\nThe starting point for descriptor invocation is a binding, ``a.x``.\nHow the arguments are assembled depends on ``a``:\n\nDirect Call\n The simplest and least common call is when user code directly\n invokes a descriptor method: ``x.__get__(a)``.\n\nInstance Binding\n If binding to a new-style object instance, ``a.x`` is transformed\n into the call: ``type(a).__dict__[\'x\'].__get__(a, type(a))``.\n\nClass Binding\n If binding to a new-style class, ``A.x`` is transformed into the\n call: ``A.__dict__[\'x\'].__get__(None, A)``.\n\nSuper Binding\n If ``a`` is an instance of ``super``, then the binding ``super(B,\n obj).m()`` searches ``obj.__class__.__mro__`` for the base class\n ``A`` immediately preceding ``B`` and then invokes the descriptor\n with the call: ``A.__dict__[\'m\'].__get__(obj, obj.__class__)``.\n\nFor instance bindings, the precedence of descriptor invocation depends\non the which descriptor methods are defined. A descriptor can define\nany combination of ``__get__()``, ``__set__()`` and ``__delete__()``.\nIf it does not define ``__get__()``, then accessing the attribute will\nreturn the descriptor object itself unless there is a value in the\nobject\'s instance dictionary. If the descriptor defines ``__set__()``\nand/or ``__delete__()``, it is a data descriptor; if it defines\nneither, it is a non-data descriptor. Normally, data descriptors\ndefine both ``__get__()`` and ``__set__()``, while non-data\ndescriptors have just the ``__get__()`` method. Data descriptors with\n``__set__()`` and ``__get__()`` defined always override a redefinition\nin an instance dictionary. In contrast, non-data descriptors can be\noverridden by instances.\n\nPython methods (including ``staticmethod()`` and ``classmethod()``)\nare implemented as non-data descriptors. Accordingly, instances can\nredefine and override methods. 
This allows individual instances to\nacquire behaviors that differ from other instances of the same class.\n\nThe ``property()`` function is implemented as a data descriptor.\nAccordingly, instances cannot override the behavior of a property.\n\n\n__slots__\n=========\n\nBy default, instances of both old and new-style classes have a\ndictionary for attribute storage. This wastes space for objects\nhaving very few instance variables. The space consumption can become\nacute when creating large numbers of instances.\n\nThe default can be overridden by defining *__slots__* in a new-style\nclass definition. The *__slots__* declaration takes a sequence of\ninstance variables and reserves just enough space in each instance to\nhold a value for each variable. Space is saved because *__dict__* is\nnot created for each instance.\n\n__slots__\n\n This class variable can be assigned a string, iterable, or sequence\n of strings with variable names used by instances. If defined in a\n new-style class, *__slots__* reserves space for the declared\n variables and prevents the automatic creation of *__dict__* and\n *__weakref__* for each instance.\n\n New in version 2.2.\n\nNotes on using *__slots__*\n\n* When inheriting from a class without *__slots__*, the *__dict__*\n attribute of that class will always be accessible, so a *__slots__*\n definition in the subclass is meaningless.\n\n* Without a *__dict__* variable, instances cannot be assigned new\n variables not listed in the *__slots__* definition. Attempts to\n assign to an unlisted variable name raises ``AttributeError``. If\n dynamic assignment of new variables is desired, then add\n ``\'__dict__\'`` to the sequence of strings in the *__slots__*\n declaration.\n\n Changed in version 2.3: Previously, adding ``\'__dict__\'`` to the\n *__slots__* declaration would not enable the assignment of new\n attributes not specifically listed in the sequence of instance\n variable names.\n\n* Without a *__weakref__* variable for each instance, classes defining\n *__slots__* do not support weak references to its instances. If weak\n reference support is needed, then add ``\'__weakref__\'`` to the\n sequence of strings in the *__slots__* declaration.\n\n Changed in version 2.3: Previously, adding ``\'__weakref__\'`` to the\n *__slots__* declaration would not enable support for weak\n references.\n\n* *__slots__* are implemented at the class level by creating\n descriptors (*Implementing Descriptors*) for each variable name. As\n a result, class attributes cannot be used to set default values for\n instance variables defined by *__slots__*; otherwise, the class\n attribute would overwrite the descriptor assignment.\n\n* The action of a *__slots__* declaration is limited to the class\n where it is defined. As a result, subclasses will have a *__dict__*\n unless they also define *__slots__* (which must only contain names\n of any *additional* slots).\n\n* If a class defines a slot also defined in a base class, the instance\n variable defined by the base class slot is inaccessible (except by\n retrieving its descriptor directly from the base class). This\n renders the meaning of the program undefined. In the future, a\n check may be added to prevent this.\n\n* Nonempty *__slots__* does not work for classes derived from\n "variable-length" built-in types such as ``long``, ``str`` and\n ``tuple``.\n\n* Any non-string iterable may be assigned to *__slots__*. 
Mappings may\n also be used; however, in the future, special meaning may be\n assigned to the values corresponding to each key.\n\n* *__class__* assignment works only if both classes have the same\n *__slots__*.\n\n Changed in version 2.6: Previously, *__class__* assignment raised an\n error if either new or old class had *__slots__*.\n', 'attribute-references': u'\nAttribute references\n********************\n\nAn attribute reference is a primary followed by a period and a name:\n\n attributeref ::= primary "." identifier\n\nThe primary must evaluate to an object of a type that supports\nattribute references, e.g., a module, list, or an instance. This\nobject is then asked to produce the attribute whose name is the\nidentifier. If this attribute is not available, the exception\n``AttributeError`` is raised. Otherwise, the type and value of the\nobject produced is determined by the object. Multiple evaluations of\nthe same attribute reference may yield different objects.\n', 'augassign': u'\nAugmented assignment statements\n*******************************\n\nAugmented assignment is the combination, in a single statement, of a\nbinary operation and an assignment statement:\n\n augmented_assignment_stmt ::= augtarget augop (expression_list | yield_expression)\n augtarget ::= identifier | attributeref | subscription | slicing\n augop ::= "+=" | "-=" | "*=" | "/=" | "//=" | "%=" | "**="\n | ">>=" | "<<=" | "&=" | "^=" | "|="\n\n(See section *Primaries* for the syntax definitions for the last three\nsymbols.)\n\nAn augmented assignment evaluates the target (which, unlike normal\nassignment statements, cannot be an unpacking) and the expression\nlist, performs the binary operation specific to the type of assignment\non the two operands, and assigns the result to the original target.\nThe target is only evaluated once.\n\nAn augmented assignment expression like ``x += 1`` can be rewritten as\n``x = x + 1`` to achieve a similar, but not exactly equal effect. In\nthe augmented version, ``x`` is only evaluated once. Also, when\npossible, the actual operation is performed *in-place*, meaning that\nrather than creating a new object and assigning that to the target,\nthe old object is modified instead.\n\nWith the exception of assigning to tuples and multiple targets in a\nsingle statement, the assignment done by augmented assignment\nstatements is handled the same way as normal assignments. Similarly,\nwith the exception of the possible *in-place* behavior, the binary\noperation performed by augmented assignment is the same as the normal\nbinary operations.\n\nFor targets which are attribute references, the same *caveat about\nclass and instance attributes* applies as for regular assignments.\n', 'binary': u'\nBinary arithmetic operations\n****************************\n\nThe binary arithmetic operations have the conventional priority\nlevels. Note that some of these operations also apply to certain non-\nnumeric types. Apart from the power operator, there are only two\nlevels, one for multiplicative operators and one for additive\noperators:\n\n m_expr ::= u_expr | m_expr "*" u_expr | m_expr "//" u_expr | m_expr "/" u_expr\n | m_expr "%" u_expr\n a_expr ::= m_expr | a_expr "+" m_expr | a_expr "-" m_expr\n\nThe ``*`` (multiplication) operator yields the product of its\narguments. The arguments must either both be numbers, or one argument\nmust be an integer (plain or long) and the other must be a sequence.\nIn the former case, the numbers are converted to a common type and\nthen multiplied together. 
In the latter case, sequence repetition is\nperformed; a negative repetition factor yields an empty sequence.\n\nThe ``/`` (division) and ``//`` (floor division) operators yield the\nquotient of their arguments. The numeric arguments are first\nconverted to a common type. Plain or long integer division yields an\ninteger of the same type; the result is that of mathematical division\nwith the \'floor\' function applied to the result. Division by zero\nraises the ``ZeroDivisionError`` exception.\n\nThe ``%`` (modulo) operator yields the remainder from the division of\nthe first argument by the second. The numeric arguments are first\nconverted to a common type. A zero right argument raises the\n``ZeroDivisionError`` exception. The arguments may be floating point\nnumbers, e.g., ``3.14%0.7`` equals ``0.34`` (since ``3.14`` equals\n``4*0.7 + 0.34``.) The modulo operator always yields a result with\nthe same sign as its second operand (or zero); the absolute value of\nthe result is strictly smaller than the absolute value of the second\noperand [2].\n\nThe integer division and modulo operators are connected by the\nfollowing identity: ``x == (x/y)*y + (x%y)``. Integer division and\nmodulo are also connected with the built-in function ``divmod()``:\n``divmod(x, y) == (x/y, x%y)``. These identities don\'t hold for\nfloating point numbers; there similar identities hold approximately\nwhere ``x/y`` is replaced by ``floor(x/y)`` or ``floor(x/y) - 1`` [3].\n\nIn addition to performing the modulo operation on numbers, the ``%``\noperator is also overloaded by string and unicode objects to perform\nstring formatting (also known as interpolation). The syntax for string\nformatting is described in the Python Library Reference, section\n*String Formatting Operations*.\n\nDeprecated since version 2.3: The floor division operator, the modulo\noperator, and the ``divmod()`` function are no longer defined for\ncomplex numbers. Instead, convert to a floating point number using\nthe ``abs()`` function if appropriate.\n\nThe ``+`` (addition) operator yields the sum of its arguments. The\narguments must either both be numbers or both sequences of the same\ntype. In the former case, the numbers are converted to a common type\nand then added together. In the latter case, the sequences are\nconcatenated.\n\nThe ``-`` (subtraction) operator yields the difference of its\narguments. The numeric arguments are first converted to a common\ntype.\n', 'bitwise': u'\nBinary bitwise operations\n*************************\n\nEach of the three bitwise operations has a different priority level:\n\n and_expr ::= shift_expr | and_expr "&" shift_expr\n xor_expr ::= and_expr | xor_expr "^" and_expr\n or_expr ::= xor_expr | or_expr "|" xor_expr\n\nThe ``&`` operator yields the bitwise AND of its arguments, which must\nbe plain or long integers. The arguments are converted to a common\ntype.\n\nThe ``^`` operator yields the bitwise XOR (exclusive OR) of its\narguments, which must be plain or long integers. The arguments are\nconverted to a common type.\n\nThe ``|`` operator yields the bitwise (inclusive) OR of its arguments,\nwhich must be plain or long integers. The arguments are converted to\na common type.\n', 'bltin-code-objects': u'\nCode Objects\n************\n\nCode objects are used by the implementation to represent "pseudo-\ncompiled" executable Python code such as a function body. They differ\nfrom function objects because they don\'t contain a reference to their\nglobal execution environment. 
Code objects are returned by the built-\nin ``compile()`` function and can be extracted from function objects\nthrough their ``func_code`` attribute. See also the ``code`` module.\n\nA code object can be executed or evaluated by passing it (instead of a\nsource string) to the ``exec`` statement or the built-in ``eval()``\nfunction.\n\nSee *The standard type hierarchy* for more information.\n', 'bltin-ellipsis-object': u'\nThe Ellipsis Object\n*******************\n\nThis object is used by extended slice notation (see *Slicings*). It\nsupports no special operations. There is exactly one ellipsis object,\nnamed ``Ellipsis`` (a built-in name).\n\nIt is written as ``Ellipsis``.\n', - 'bltin-file-objects': u'\nFile Objects\n************\n\nFile objects are implemented using C\'s ``stdio`` package and can be\ncreated with the built-in ``open()`` function. File objects are also\nreturned by some other built-in functions and methods, such as\n``os.popen()`` and ``os.fdopen()`` and the ``makefile()`` method of\nsocket objects. Temporary files can be created using the ``tempfile``\nmodule, and high-level file operations such as copying, moving, and\ndeleting files and directories can be achieved with the ``shutil``\nmodule.\n\nWhen a file operation fails for an I/O-related reason, the exception\n``IOError`` is raised. This includes situations where the operation\nis not defined for some reason, like ``seek()`` on a tty device or\nwriting a file opened for reading.\n\nFiles have the following methods:\n\nfile.close()\n\n Close the file. A closed file cannot be read or written any more.\n Any operation which requires that the file be open will raise a\n ``ValueError`` after the file has been closed. Calling ``close()``\n more than once is allowed.\n\n As of Python 2.5, you can avoid having to call this method\n explicitly if you use the ``with`` statement. For example, the\n following code will automatically close *f* when the ``with`` block\n is exited:\n\n from __future__ import with_statement # This isn\'t required in Python 2.6\n\n with open("hello.txt") as f:\n for line in f:\n print line\n\n In older versions of Python, you would have needed to do this to\n get the same effect:\n\n f = open("hello.txt")\n try:\n for line in f:\n print line\n finally:\n f.close()\n\n Note: Not all "file-like" types in Python support use as a context\n manager for the ``with`` statement. If your code is intended to\n work with any file-like object, you can use the function\n ``contextlib.closing()`` instead of using the object directly.\n\nfile.flush()\n\n Flush the internal buffer, like ``stdio``\'s ``fflush()``. 
This may\n be a no-op on some file-like objects.\n\n Note: ``flush()`` does not necessarily write the file\'s data to disk.\n Use ``flush()`` followed by ``os.fsync()`` to ensure this\n behavior.\n\nfile.fileno()\n\n Return the integer "file descriptor" that is used by the underlying\n implementation to request I/O operations from the operating system.\n This can be useful for other, lower level interfaces that use file\n descriptors, such as the ``fcntl`` module or ``os.read()`` and\n friends.\n\n Note: File-like objects which do not have a real file descriptor should\n *not* provide this method!\n\nfile.isatty()\n\n Return ``True`` if the file is connected to a tty(-like) device,\n else ``False``.\n\n Note: If a file-like object is not associated with a real file, this\n method should *not* be implemented.\n\nfile.next()\n\n A file object is its own iterator, for example ``iter(f)`` returns\n *f* (unless *f* is closed). When a file is used as an iterator,\n typically in a ``for`` loop (for example, ``for line in f: print\n line``), the ``next()`` method is called repeatedly. This method\n returns the next input line, or raises ``StopIteration`` when EOF\n is hit when the file is open for reading (behavior is undefined\n when the file is open for writing). In order to make a ``for``\n loop the most efficient way of looping over the lines of a file (a\n very common operation), the ``next()`` method uses a hidden read-\n ahead buffer. As a consequence of using a read-ahead buffer,\n combining ``next()`` with other file methods (like ``readline()``)\n does not work right. However, using ``seek()`` to reposition the\n file to an absolute position will flush the read-ahead buffer.\n\n New in version 2.3.\n\nfile.read([size])\n\n Read at most *size* bytes from the file (less if the read hits EOF\n before obtaining *size* bytes). If the *size* argument is negative\n or omitted, read all data until EOF is reached. The bytes are\n returned as a string object. An empty string is returned when EOF\n is encountered immediately. (For certain files, like ttys, it\n makes sense to continue reading after an EOF is hit.) Note that\n this method may call the underlying C function ``fread()`` more\n than once in an effort to acquire as close to *size* bytes as\n possible. Also note that when in non-blocking mode, less data than\n was requested may be returned, even if no *size* parameter was\n given.\n\n Note: This function is simply a wrapper for the underlying ``fread()``\n C function, and will behave the same in corner cases, such as\n whether the EOF value is cached.\n\nfile.readline([size])\n\n Read one entire line from the file. A trailing newline character\n is kept in the string (but may be absent when a file ends with an\n incomplete line). [5] If the *size* argument is present and non-\n negative, it is a maximum byte count (including the trailing\n newline) and an incomplete line may be returned. An empty string is\n returned *only* when EOF is encountered immediately.\n\n Note: Unlike ``stdio``\'s ``fgets()``, the returned string contains null\n characters (``\'\\0\'``) if they occurred in the input.\n\nfile.readlines([sizehint])\n\n Read until EOF using ``readline()`` and return a list containing\n the lines thus read. If the optional *sizehint* argument is\n present, instead of reading up to EOF, whole lines totalling\n approximately *sizehint* bytes (possibly after rounding up to an\n internal buffer size) are read. 
Objects implementing a file-like\n interface may choose to ignore *sizehint* if it cannot be\n implemented, or cannot be implemented efficiently.\n\nfile.xreadlines()\n\n This method returns the same thing as ``iter(f)``.\n\n New in version 2.1.\n\n Deprecated since version 2.3: Use ``for line in file`` instead.\n\nfile.seek(offset[, whence])\n\n Set the file\'s current position, like ``stdio``\'s ``fseek()``. The\n *whence* argument is optional and defaults to ``os.SEEK_SET`` or\n ``0`` (absolute file positioning); other values are ``os.SEEK_CUR``\n or ``1`` (seek relative to the current position) and\n ``os.SEEK_END`` or ``2`` (seek relative to the file\'s end). There\n is no return value.\n\n For example, ``f.seek(2, os.SEEK_CUR)`` advances the position by\n two and ``f.seek(-3, os.SEEK_END)`` sets the position to the third\n to last.\n\n Note that if the file is opened for appending (mode ``\'a\'`` or\n ``\'a+\'``), any ``seek()`` operations will be undone at the next\n write. If the file is only opened for writing in append mode (mode\n ``\'a\'``), this method is essentially a no-op, but it remains useful\n for files opened in append mode with reading enabled (mode\n ``\'a+\'``). If the file is opened in text mode (without ``\'b\'``),\n only offsets returned by ``tell()`` are legal. Use of other\n offsets causes undefined behavior.\n\n Note that not all file objects are seekable.\n\n Changed in version 2.6: Passing float values as offset has been\n deprecated.\n\nfile.tell()\n\n Return the file\'s current position, like ``stdio``\'s ``ftell()``.\n\n Note: On Windows, ``tell()`` can return illegal values (after an\n ``fgets()``) when reading files with Unix-style line-endings. Use\n binary mode (``\'rb\'``) to circumvent this problem.\n\nfile.truncate([size])\n\n Truncate the file\'s size. If the optional *size* argument is\n present, the file is truncated to (at most) that size. The size\n defaults to the current position. The current file position is not\n changed. Note that if a specified size exceeds the file\'s current\n size, the result is platform-dependent: possibilities include that\n the file may remain unchanged, increase to the specified size as if\n zero-filled, or increase to the specified size with undefined new\n content. Availability: Windows, many Unix variants.\n\nfile.write(str)\n\n Write a string to the file. There is no return value. Due to\n buffering, the string may not actually show up in the file until\n the ``flush()`` or ``close()`` method is called.\n\nfile.writelines(sequence)\n\n Write a sequence of strings to the file. The sequence can be any\n iterable object producing strings, typically a list of strings.\n There is no return value. (The name is intended to match\n ``readlines()``; ``writelines()`` does not add line separators.)\n\nFiles support the iterator protocol. Each iteration returns the same\nresult as ``file.readline()``, and iteration ends when the\n``readline()`` method returns an empty string.\n\nFile objects also offer a number of other interesting attributes.\nThese are not required for file-like objects, but should be\nimplemented if they make sense for the particular object.\n\nfile.closed\n\n bool indicating the current state of the file object. This is a\n read-only attribute; the ``close()`` method changes the value. It\n may not be available on all file-like objects.\n\nfile.encoding\n\n The encoding that this file uses. When Unicode strings are written\n to a file, they will be converted to byte strings using this\n encoding. 
In addition, when the file is connected to a terminal,\n the attribute gives the encoding that the terminal is likely to use\n (that information might be incorrect if the user has misconfigured\n the terminal). The attribute is read-only and may not be present\n on all file-like objects. It may also be ``None``, in which case\n the file uses the system default encoding for converting Unicode\n strings.\n\n New in version 2.3.\n\nfile.errors\n\n The Unicode error handler used along with the encoding.\n\n New in version 2.6.\n\nfile.mode\n\n The I/O mode for the file. If the file was created using the\n ``open()`` built-in function, this will be the value of the *mode*\n parameter. This is a read-only attribute and may not be present on\n all file-like objects.\n\nfile.name\n\n If the file object was created using ``open()``, the name of the\n file. Otherwise, some string that indicates the source of the file\n object, of the form ``<...>``. This is a read-only attribute and\n may not be present on all file-like objects.\n\nfile.newlines\n\n If Python was built with the *--with-universal-newlines* option to\n **configure** (the default) this read-only attribute exists, and\n for files opened in universal newline read mode it keeps track of\n the types of newlines encountered while reading the file. The\n values it can take are ``\'\\r\'``, ``\'\\n\'``, ``\'\\r\\n\'``, ``None``\n (unknown, no newlines read yet) or a tuple containing all the\n newline types seen, to indicate that multiple newline conventions\n were encountered. For files not opened in universal newline read\n mode the value of this attribute will be ``None``.\n\nfile.softspace\n\n Boolean that indicates whether a space character needs to be\n printed before another value when using the ``print`` statement.\n Classes that are trying to simulate a file object should also have\n a writable ``softspace`` attribute, which should be initialized to\n zero. This will be automatic for most classes implemented in\n Python (care may be needed for objects that override attribute\n access); types implemented in C will have to provide a writable\n ``softspace`` attribute.\n\n Note: This attribute is not used to control the ``print`` statement,\n but to allow the implementation of ``print`` to keep track of its\n internal state.\n', + 'bltin-file-objects': u'\nFile Objects\n************\n\nFile objects are implemented using C\'s ``stdio`` package and can be\ncreated with the built-in ``open()`` function. File objects are also\nreturned by some other built-in functions and methods, such as\n``os.popen()`` and ``os.fdopen()`` and the ``makefile()`` method of\nsocket objects. Temporary files can be created using the ``tempfile``\nmodule, and high-level file operations such as copying, moving, and\ndeleting files and directories can be achieved with the ``shutil``\nmodule.\n\nWhen a file operation fails for an I/O-related reason, the exception\n``IOError`` is raised. This includes situations where the operation\nis not defined for some reason, like ``seek()`` on a tty device or\nwriting a file opened for reading.\n\nFiles have the following methods:\n\nfile.close()\n\n Close the file. A closed file cannot be read or written any more.\n Any operation which requires that the file be open will raise a\n ``ValueError`` after the file has been closed. Calling ``close()``\n more than once is allowed.\n\n As of Python 2.5, you can avoid having to call this method\n explicitly if you use the ``with`` statement. 
For example, the\n following code will automatically close *f* when the ``with`` block\n is exited:\n\n from __future__ import with_statement # This isn\'t required in Python 2.6\n\n with open("hello.txt") as f:\n for line in f:\n print line\n\n In older versions of Python, you would have needed to do this to\n get the same effect:\n\n f = open("hello.txt")\n try:\n for line in f:\n print line\n finally:\n f.close()\n\n Note: Not all "file-like" types in Python support use as a context\n manager for the ``with`` statement. If your code is intended to\n work with any file-like object, you can use the function\n ``contextlib.closing()`` instead of using the object directly.\n\nfile.flush()\n\n Flush the internal buffer, like ``stdio``\'s ``fflush()``. This may\n be a no-op on some file-like objects.\n\n Note: ``flush()`` does not necessarily write the file\'s data to disk.\n Use ``flush()`` followed by ``os.fsync()`` to ensure this\n behavior.\n\nfile.fileno()\n\n Return the integer "file descriptor" that is used by the underlying\n implementation to request I/O operations from the operating system.\n This can be useful for other, lower level interfaces that use file\n descriptors, such as the ``fcntl`` module or ``os.read()`` and\n friends.\n\n Note: File-like objects which do not have a real file descriptor should\n *not* provide this method!\n\nfile.isatty()\n\n Return ``True`` if the file is connected to a tty(-like) device,\n else ``False``.\n\n Note: If a file-like object is not associated with a real file, this\n method should *not* be implemented.\n\nfile.next()\n\n A file object is its own iterator, for example ``iter(f)`` returns\n *f* (unless *f* is closed). When a file is used as an iterator,\n typically in a ``for`` loop (for example, ``for line in f: print\n line``), the ``next()`` method is called repeatedly. This method\n returns the next input line, or raises ``StopIteration`` when EOF\n is hit when the file is open for reading (behavior is undefined\n when the file is open for writing). In order to make a ``for``\n loop the most efficient way of looping over the lines of a file (a\n very common operation), the ``next()`` method uses a hidden read-\n ahead buffer. As a consequence of using a read-ahead buffer,\n combining ``next()`` with other file methods (like ``readline()``)\n does not work right. However, using ``seek()`` to reposition the\n file to an absolute position will flush the read-ahead buffer.\n\n New in version 2.3.\n\nfile.read([size])\n\n Read at most *size* bytes from the file (less if the read hits EOF\n before obtaining *size* bytes). If the *size* argument is negative\n or omitted, read all data until EOF is reached. The bytes are\n returned as a string object. An empty string is returned when EOF\n is encountered immediately. (For certain files, like ttys, it\n makes sense to continue reading after an EOF is hit.) Note that\n this method may call the underlying C function ``fread()`` more\n than once in an effort to acquire as close to *size* bytes as\n possible. Also note that when in non-blocking mode, less data than\n was requested may be returned, even if no *size* parameter was\n given.\n\n Note: This function is simply a wrapper for the underlying ``fread()``\n C function, and will behave the same in corner cases, such as\n whether the EOF value is cached.\n\nfile.readline([size])\n\n Read one entire line from the file. A trailing newline character\n is kept in the string (but may be absent when a file ends with an\n incomplete line). 
[5] If the *size* argument is present and non-\n negative, it is a maximum byte count (including the trailing\n newline) and an incomplete line may be returned. When *size* is not\n 0, an empty string is returned *only* when EOF is encountered\n immediately.\n\n Note: Unlike ``stdio``\'s ``fgets()``, the returned string contains null\n characters (``\'\\0\'``) if they occurred in the input.\n\nfile.readlines([sizehint])\n\n Read until EOF using ``readline()`` and return a list containing\n the lines thus read. If the optional *sizehint* argument is\n present, instead of reading up to EOF, whole lines totalling\n approximately *sizehint* bytes (possibly after rounding up to an\n internal buffer size) are read. Objects implementing a file-like\n interface may choose to ignore *sizehint* if it cannot be\n implemented, or cannot be implemented efficiently.\n\nfile.xreadlines()\n\n This method returns the same thing as ``iter(f)``.\n\n New in version 2.1.\n\n Deprecated since version 2.3: Use ``for line in file`` instead.\n\nfile.seek(offset[, whence])\n\n Set the file\'s current position, like ``stdio``\'s ``fseek()``. The\n *whence* argument is optional and defaults to ``os.SEEK_SET`` or\n ``0`` (absolute file positioning); other values are ``os.SEEK_CUR``\n or ``1`` (seek relative to the current position) and\n ``os.SEEK_END`` or ``2`` (seek relative to the file\'s end). There\n is no return value.\n\n For example, ``f.seek(2, os.SEEK_CUR)`` advances the position by\n two and ``f.seek(-3, os.SEEK_END)`` sets the position to the third\n to last.\n\n Note that if the file is opened for appending (mode ``\'a\'`` or\n ``\'a+\'``), any ``seek()`` operations will be undone at the next\n write. If the file is only opened for writing in append mode (mode\n ``\'a\'``), this method is essentially a no-op, but it remains useful\n for files opened in append mode with reading enabled (mode\n ``\'a+\'``). If the file is opened in text mode (without ``\'b\'``),\n only offsets returned by ``tell()`` are legal. Use of other\n offsets causes undefined behavior.\n\n Note that not all file objects are seekable.\n\n Changed in version 2.6: Passing float values as offset has been\n deprecated.\n\nfile.tell()\n\n Return the file\'s current position, like ``stdio``\'s ``ftell()``.\n\n Note: On Windows, ``tell()`` can return illegal values (after an\n ``fgets()``) when reading files with Unix-style line-endings. Use\n binary mode (``\'rb\'``) to circumvent this problem.\n\nfile.truncate([size])\n\n Truncate the file\'s size. If the optional *size* argument is\n present, the file is truncated to (at most) that size. The size\n defaults to the current position. The current file position is not\n changed. Note that if a specified size exceeds the file\'s current\n size, the result is platform-dependent: possibilities include that\n the file may remain unchanged, increase to the specified size as if\n zero-filled, or increase to the specified size with undefined new\n content. Availability: Windows, many Unix variants.\n\nfile.write(str)\n\n Write a string to the file. There is no return value. Due to\n buffering, the string may not actually show up in the file until\n the ``flush()`` or ``close()`` method is called.\n\nfile.writelines(sequence)\n\n Write a sequence of strings to the file. The sequence can be any\n iterable object producing strings, typically a list of strings.\n There is no return value. 
(The name is intended to match\n ``readlines()``; ``writelines()`` does not add line separators.)\n\nFiles support the iterator protocol. Each iteration returns the same\nresult as ``file.readline()``, and iteration ends when the\n``readline()`` method returns an empty string.\n\nFile objects also offer a number of other interesting attributes.\nThese are not required for file-like objects, but should be\nimplemented if they make sense for the particular object.\n\nfile.closed\n\n bool indicating the current state of the file object. This is a\n read-only attribute; the ``close()`` method changes the value. It\n may not be available on all file-like objects.\n\nfile.encoding\n\n The encoding that this file uses. When Unicode strings are written\n to a file, they will be converted to byte strings using this\n encoding. In addition, when the file is connected to a terminal,\n the attribute gives the encoding that the terminal is likely to use\n (that information might be incorrect if the user has misconfigured\n the terminal). The attribute is read-only and may not be present\n on all file-like objects. It may also be ``None``, in which case\n the file uses the system default encoding for converting Unicode\n strings.\n\n New in version 2.3.\n\nfile.errors\n\n The Unicode error handler used along with the encoding.\n\n New in version 2.6.\n\nfile.mode\n\n The I/O mode for the file. If the file was created using the\n ``open()`` built-in function, this will be the value of the *mode*\n parameter. This is a read-only attribute and may not be present on\n all file-like objects.\n\nfile.name\n\n If the file object was created using ``open()``, the name of the\n file. Otherwise, some string that indicates the source of the file\n object, of the form ``<...>``. This is a read-only attribute and\n may not be present on all file-like objects.\n\nfile.newlines\n\n If Python was built with universal newlines enabled (the default)\n this read-only attribute exists, and for files opened in universal\n newline read mode it keeps track of the types of newlines\n encountered while reading the file. The values it can take are\n ``\'\\r\'``, ``\'\\n\'``, ``\'\\r\\n\'``, ``None`` (unknown, no newlines read\n yet) or a tuple containing all the newline types seen, to indicate\n that multiple newline conventions were encountered. For files not\n opened in universal newline read mode the value of this attribute\n will be ``None``.\n\nfile.softspace\n\n Boolean that indicates whether a space character needs to be\n printed before another value when using the ``print`` statement.\n Classes that are trying to simulate a file object should also have\n a writable ``softspace`` attribute, which should be initialized to\n zero. This will be automatic for most classes implemented in\n Python (care may be needed for objects that override attribute\n access); types implemented in C will have to provide a writable\n ``softspace`` attribute.\n\n Note: This attribute is not used to control the ``print`` statement,\n but to allow the implementation of ``print`` to keep track of its\n internal state.\n', 'bltin-null-object': u"\nThe Null Object\n***************\n\nThis object is returned by functions that don't explicitly return a\nvalue. It supports no special operations. There is exactly one null\nobject, named ``None`` (a built-in name).\n\nIt is written as ``None``.\n", 'bltin-type-objects': u"\nType Objects\n************\n\nType objects represent the various object types. 
An object's type is\naccessed by the built-in function ``type()``. There are no special\noperations on types. The standard module ``types`` defines names for\nall standard built-in types.\n\nTypes are written like this: ````.\n", 'booleans': u'\nBoolean operations\n******************\n\n or_test ::= and_test | or_test "or" and_test\n and_test ::= not_test | and_test "and" not_test\n not_test ::= comparison | "not" not_test\n\nIn the context of Boolean operations, and also when expressions are\nused by control flow statements, the following values are interpreted\nas false: ``False``, ``None``, numeric zero of all types, and empty\nstrings and containers (including strings, tuples, lists,\ndictionaries, sets and frozensets). All other values are interpreted\nas true. (See the ``__nonzero__()`` special method for a way to\nchange this.)\n\nThe operator ``not`` yields ``True`` if its argument is false,\n``False`` otherwise.\n\nThe expression ``x and y`` first evaluates *x*; if *x* is false, its\nvalue is returned; otherwise, *y* is evaluated and the resulting value\nis returned.\n\nThe expression ``x or y`` first evaluates *x*; if *x* is true, its\nvalue is returned; otherwise, *y* is evaluated and the resulting value\nis returned.\n\n(Note that neither ``and`` nor ``or`` restrict the value and type they\nreturn to ``False`` and ``True``, but rather return the last evaluated\nargument. This is sometimes useful, e.g., if ``s`` is a string that\nshould be replaced by a default value if it is empty, the expression\n``s or \'foo\'`` yields the desired value. Because ``not`` has to\ninvent a value anyway, it does not bother to return a value of the\nsame type as its argument, so e.g., ``not \'foo\'`` yields ``False``,\nnot ``\'\'``.)\n', @@ -20,39 +20,39 @@ 'class': u'\nClass definitions\n*****************\n\nA class definition defines a class object (see section *The standard\ntype hierarchy*):\n\n classdef ::= "class" classname [inheritance] ":" suite\n inheritance ::= "(" [expression_list] ")"\n classname ::= identifier\n\nA class definition is an executable statement. It first evaluates the\ninheritance list, if present. Each item in the inheritance list\nshould evaluate to a class object or class type which allows\nsubclassing. The class\'s suite is then executed in a new execution\nframe (see section *Naming and binding*), using a newly created local\nnamespace and the original global namespace. (Usually, the suite\ncontains only function definitions.) When the class\'s suite finishes\nexecution, its execution frame is discarded but its local namespace is\nsaved. [4] A class object is then created using the inheritance list\nfor the base classes and the saved local namespace for the attribute\ndictionary. The class name is bound to this class object in the\noriginal local namespace.\n\n**Programmer\'s note:** Variables defined in the class definition are\nclass variables; they are shared by all instances. To create instance\nvariables, they can be set in a method with ``self.name = value``.\nBoth class and instance variables are accessible through the notation\n"``self.name``", and an instance variable hides a class variable with\nthe same name when accessed in this way. Class variables can be used\nas defaults for instance variables, but using mutable values there can\nlead to unexpected results. 
For *new-style class*es, descriptors can\nbe used to create instance variables with different implementation\ndetails.\n\nClass definitions, like function definitions, may be wrapped by one or\nmore *decorator* expressions. The evaluation rules for the decorator\nexpressions are the same as for functions. The result must be a class\nobject, which is then bound to the class name.\n\n-[ Footnotes ]-\n\n[1] The exception is propagated to the invocation stack only if there\n is no ``finally`` clause that negates the exception.\n\n[2] Currently, control "flows off the end" except in the case of an\n exception or the execution of a ``return``, ``continue``, or\n ``break`` statement.\n\n[3] A string literal appearing as the first statement in the function\n body is transformed into the function\'s ``__doc__`` attribute and\n therefore the function\'s *docstring*.\n\n[4] A string literal appearing as the first statement in the class\n body is transformed into the namespace\'s ``__doc__`` item and\n therefore the class\'s *docstring*.\n', 'coercion-rules': u"\nCoercion rules\n**************\n\nThis section used to document the rules for coercion. As the language\nhas evolved, the coercion rules have become hard to document\nprecisely; documenting what one version of one particular\nimplementation does is undesirable. Instead, here are some informal\nguidelines regarding coercion. In Python 3.0, coercion will not be\nsupported.\n\n* If the left operand of a % operator is a string or Unicode object,\n no coercion takes place and the string formatting operation is\n invoked instead.\n\n* It is no longer recommended to define a coercion operation. Mixed-\n mode operations on types that don't define coercion pass the\n original arguments to the operation.\n\n* New-style classes (those derived from ``object``) never invoke the\n ``__coerce__()`` method in response to a binary operator; the only\n time ``__coerce__()`` is invoked is when the built-in function\n ``coerce()`` is called.\n\n* For most intents and purposes, an operator that returns\n ``NotImplemented`` is treated the same as one that is not\n implemented at all.\n\n* Below, ``__op__()`` and ``__rop__()`` are used to signify the\n generic method names corresponding to an operator; ``__iop__()`` is\n used for the corresponding in-place operator. For example, for the\n operator '``+``', ``__add__()`` and ``__radd__()`` are used for the\n left and right variant of the binary operator, and ``__iadd__()``\n for the in-place variant.\n\n* For objects *x* and *y*, first ``x.__op__(y)`` is tried. If this is\n not implemented or returns ``NotImplemented``, ``y.__rop__(x)`` is\n tried. If this is also not implemented or returns\n ``NotImplemented``, a ``TypeError`` exception is raised. But see\n the following exception:\n\n* Exception to the previous item: if the left operand is an instance\n of a built-in type or a new-style class, and the right operand is an\n instance of a proper subclass of that type or class and overrides\n the base's ``__rop__()`` method, the right operand's ``__rop__()``\n method is tried *before* the left operand's ``__op__()`` method.\n\n This is done so that a subclass can completely override binary\n operators. 
Otherwise, the left operand's ``__op__()`` method would\n always accept the right operand: when an instance of a given class\n is expected, an instance of a subclass of that class is always\n acceptable.\n\n* When either operand type defines a coercion, this coercion is called\n before that type's ``__op__()`` or ``__rop__()`` method is called,\n but no sooner. If the coercion returns an object of a different\n type for the operand whose coercion is invoked, part of the process\n is redone using the new object.\n\n* When an in-place operator (like '``+=``') is used, if the left\n operand implements ``__iop__()``, it is invoked without any\n coercion. When the operation falls back to ``__op__()`` and/or\n ``__rop__()``, the normal coercion rules apply.\n\n* In ``x + y``, if *x* is a sequence that implements sequence\n concatenation, sequence concatenation is invoked.\n\n* In ``x * y``, if one operator is a sequence that implements sequence\n repetition, and the other is an integer (``int`` or ``long``),\n sequence repetition is invoked.\n\n* Rich comparisons (implemented by methods ``__eq__()`` and so on)\n never use coercion. Three-way comparison (implemented by\n ``__cmp__()``) does use coercion under the same conditions as other\n binary operations use it.\n\n* In the current implementation, the built-in numeric types ``int``,\n ``long``, ``float``, and ``complex`` do not use coercion. All these\n types implement a ``__coerce__()`` method, for use by the built-in\n ``coerce()`` function.\n\n Changed in version 2.7.\n", 'comparisons': u'\nComparisons\n***********\n\nUnlike C, all comparison operations in Python have the same priority,\nwhich is lower than that of any arithmetic, shifting or bitwise\noperation. Also unlike C, expressions like ``a < b < c`` have the\ninterpretation that is conventional in mathematics:\n\n comparison ::= or_expr ( comp_operator or_expr )*\n comp_operator ::= "<" | ">" | "==" | ">=" | "<=" | "<>" | "!="\n | "is" ["not"] | ["not"] "in"\n\nComparisons yield boolean values: ``True`` or ``False``.\n\nComparisons can be chained arbitrarily, e.g., ``x < y <= z`` is\nequivalent to ``x < y and y <= z``, except that ``y`` is evaluated\nonly once (but in both cases ``z`` is not evaluated at all when ``x <\ny`` is found to be false).\n\nFormally, if *a*, *b*, *c*, ..., *y*, *z* are expressions and *op1*,\n*op2*, ..., *opN* are comparison operators, then ``a op1 b op2 c ... y\nopN z`` is equivalent to ``a op1 b and b op2 c and ... y opN z``,\nexcept that each expression is evaluated at most once.\n\nNote that ``a op1 b op2 c`` doesn\'t imply any kind of comparison\nbetween *a* and *c*, so that, e.g., ``x < y > z`` is perfectly legal\n(though perhaps not pretty).\n\nThe forms ``<>`` and ``!=`` are equivalent; for consistency with C,\n``!=`` is preferred; where ``!=`` is mentioned below ``<>`` is also\naccepted. The ``<>`` spelling is considered obsolescent.\n\nThe operators ``<``, ``>``, ``==``, ``>=``, ``<=``, and ``!=`` compare\nthe values of two objects. The objects need not have the same type.\nIf both are numbers, they are converted to a common type. Otherwise,\nobjects of different types *always* compare unequal, and are ordered\nconsistently but arbitrarily. 
You can control comparison behavior of\nobjects of non-built-in types by defining a ``__cmp__`` method or rich\ncomparison methods like ``__gt__``, described in section *Special\nmethod names*.\n\n(This unusual definition of comparison was used to simplify the\ndefinition of operations like sorting and the ``in`` and ``not in``\noperators. In the future, the comparison rules for objects of\ndifferent types are likely to change.)\n\nComparison of objects of the same type depends on the type:\n\n* Numbers are compared arithmetically.\n\n* Strings are compared lexicographically using the numeric equivalents\n (the result of the built-in function ``ord()``) of their characters.\n Unicode and 8-bit strings are fully interoperable in this behavior.\n [4]\n\n* Tuples and lists are compared lexicographically using comparison of\n corresponding elements. This means that to compare equal, each\n element must compare equal and the two sequences must be of the same\n type and have the same length.\n\n If not equal, the sequences are ordered the same as their first\n differing elements. For example, ``cmp([1,2,x], [1,2,y])`` returns\n the same as ``cmp(x,y)``. If the corresponding element does not\n exist, the shorter sequence is ordered first (for example, ``[1,2] <\n [1,2,3]``).\n\n* Mappings (dictionaries) compare equal if and only if their sorted\n (key, value) lists compare equal. [5] Outcomes other than equality\n are resolved consistently, but are not otherwise defined. [6]\n\n* Most other objects of built-in types compare unequal unless they are\n the same object; the choice whether one object is considered smaller\n or larger than another one is made arbitrarily but consistently\n within one execution of a program.\n\nThe operators ``in`` and ``not in`` test for collection membership.\n``x in s`` evaluates to true if *x* is a member of the collection *s*,\nand false otherwise. ``x not in s`` returns the negation of ``x in\ns``. The collection membership test has traditionally been bound to\nsequences; an object is a member of a collection if the collection is\na sequence and contains an element equal to that object. However, it\nmake sense for many other object types to support membership tests\nwithout being a sequence. In particular, dictionaries (for keys) and\nsets support membership testing.\n\nFor the list and tuple types, ``x in y`` is true if and only if there\nexists an index *i* such that ``x == y[i]`` is true.\n\nFor the Unicode and string types, ``x in y`` is true if and only if\n*x* is a substring of *y*. An equivalent test is ``y.find(x) != -1``.\nNote, *x* and *y* need not be the same type; consequently, ``u\'ab\' in\n\'abc\'`` will return ``True``. Empty strings are always considered to\nbe a substring of any other string, so ``"" in "abc"`` will return\n``True``.\n\nChanged in version 2.3: Previously, *x* was required to be a string of\nlength ``1``.\n\nFor user-defined classes which define the ``__contains__()`` method,\n``x in y`` is true if and only if ``y.__contains__(x)`` is true.\n\nFor user-defined classes which do not define ``__contains__()`` but do\ndefine ``__iter__()``, ``x in y`` is true if some value ``z`` with ``x\n== z`` is produced while iterating over ``y``. 
If an exception is\nraised during the iteration, it is as if ``in`` raised that exception.\n\nLastly, the old-style iteration protocol is tried: if a class defines\n``__getitem__()``, ``x in y`` is true if and only if there is a non-\nnegative integer index *i* such that ``x == y[i]``, and all lower\ninteger indices do not raise ``IndexError`` exception. (If any other\nexception is raised, it is as if ``in`` raised that exception).\n\nThe operator ``not in`` is defined to have the inverse true value of\n``in``.\n\nThe operators ``is`` and ``is not`` test for object identity: ``x is\ny`` is true if and only if *x* and *y* are the same object. ``x is\nnot y`` yields the inverse truth value. [7]\n', - 'compound': u'\nCompound statements\n*******************\n\nCompound statements contain (groups of) other statements; they affect\nor control the execution of those other statements in some way. In\ngeneral, compound statements span multiple lines, although in simple\nincarnations a whole compound statement may be contained in one line.\n\nThe ``if``, ``while`` and ``for`` statements implement traditional\ncontrol flow constructs. ``try`` specifies exception handlers and/or\ncleanup code for a group of statements. Function and class\ndefinitions are also syntactically compound statements.\n\nCompound statements consist of one or more \'clauses.\' A clause\nconsists of a header and a \'suite.\' The clause headers of a\nparticular compound statement are all at the same indentation level.\nEach clause header begins with a uniquely identifying keyword and ends\nwith a colon. A suite is a group of statements controlled by a\nclause. A suite can be one or more semicolon-separated simple\nstatements on the same line as the header, following the header\'s\ncolon, or it can be one or more indented statements on subsequent\nlines. Only the latter form of suite can contain nested compound\nstatements; the following is illegal, mostly because it wouldn\'t be\nclear to which ``if`` clause a following ``else`` clause would belong:\n\n if test1: if test2: print x\n\nAlso note that the semicolon binds tighter than the colon in this\ncontext, so that in the following example, either all or none of the\n``print`` statements are executed:\n\n if x < y < z: print x; print y; print z\n\nSummarizing:\n\n compound_stmt ::= if_stmt\n | while_stmt\n | for_stmt\n | try_stmt\n | with_stmt\n | funcdef\n | classdef\n | decorated\n suite ::= stmt_list NEWLINE | NEWLINE INDENT statement+ DEDENT\n statement ::= stmt_list NEWLINE | compound_stmt\n stmt_list ::= simple_stmt (";" simple_stmt)* [";"]\n\nNote that statements always end in a ``NEWLINE`` possibly followed by\na ``DEDENT``. 
Also note that optional continuation clauses always\nbegin with a keyword that cannot start a statement, thus there are no\nambiguities (the \'dangling ``else``\' problem is solved in Python by\nrequiring nested ``if`` statements to be indented).\n\nThe formatting of the grammar rules in the following sections places\neach clause on a separate line for clarity.\n\n\nThe ``if`` statement\n====================\n\nThe ``if`` statement is used for conditional execution:\n\n if_stmt ::= "if" expression ":" suite\n ( "elif" expression ":" suite )*\n ["else" ":" suite]\n\nIt selects exactly one of the suites by evaluating the expressions one\nby one until one is found to be true (see section *Boolean operations*\nfor the definition of true and false); then that suite is executed\n(and no other part of the ``if`` statement is executed or evaluated).\nIf all expressions are false, the suite of the ``else`` clause, if\npresent, is executed.\n\n\nThe ``while`` statement\n=======================\n\nThe ``while`` statement is used for repeated execution as long as an\nexpression is true:\n\n while_stmt ::= "while" expression ":" suite\n ["else" ":" suite]\n\nThis repeatedly tests the expression and, if it is true, executes the\nfirst suite; if the expression is false (which may be the first time\nit is tested) the suite of the ``else`` clause, if present, is\nexecuted and the loop terminates.\n\nA ``break`` statement executed in the first suite terminates the loop\nwithout executing the ``else`` clause\'s suite. A ``continue``\nstatement executed in the first suite skips the rest of the suite and\ngoes back to testing the expression.\n\n\nThe ``for`` statement\n=====================\n\nThe ``for`` statement is used to iterate over the elements of a\nsequence (such as a string, tuple or list) or other iterable object:\n\n for_stmt ::= "for" target_list "in" expression_list ":" suite\n ["else" ":" suite]\n\nThe expression list is evaluated once; it should yield an iterable\nobject. An iterator is created for the result of the\n``expression_list``. The suite is then executed once for each item\nprovided by the iterator, in the order of ascending indices. Each\nitem in turn is assigned to the target list using the standard rules\nfor assignments, and then the suite is executed. When the items are\nexhausted (which is immediately when the sequence is empty), the suite\nin the ``else`` clause, if present, is executed, and the loop\nterminates.\n\nA ``break`` statement executed in the first suite terminates the loop\nwithout executing the ``else`` clause\'s suite. A ``continue``\nstatement executed in the first suite skips the rest of the suite and\ncontinues with the next item, or with the ``else`` clause if there was\nno next item.\n\nThe suite may assign to the variable(s) in the target list; this does\nnot affect the next item assigned to it.\n\nThe target list is not deleted when the loop is finished, but if the\nsequence is empty, it will not have been assigned to at all by the\nloop. Hint: the built-in function ``range()`` returns a sequence of\nintegers suitable to emulate the effect of Pascal\'s ``for i := a to b\ndo``; e.g., ``range(3)`` returns the list ``[0, 1, 2]``.\n\nNote: There is a subtlety when the sequence is being modified by the loop\n (this can only occur for mutable sequences, i.e. lists). An internal\n counter is used to keep track of which item is used next, and this\n is incremented on each iteration. When this counter has reached the\n length of the sequence the loop terminates. 
This means that if the\n suite deletes the current (or a previous) item from the sequence,\n the next item will be skipped (since it gets the index of the\n current item which has already been treated). Likewise, if the\n suite inserts an item in the sequence before the current item, the\n current item will be treated again the next time through the loop.\n This can lead to nasty bugs that can be avoided by making a\n temporary copy using a slice of the whole sequence, e.g.,\n\n for x in a[:]:\n if x < 0: a.remove(x)\n\n\nThe ``try`` statement\n=====================\n\nThe ``try`` statement specifies exception handlers and/or cleanup code\nfor a group of statements:\n\n try_stmt ::= try1_stmt | try2_stmt\n try1_stmt ::= "try" ":" suite\n ("except" [expression [("as" | ",") target]] ":" suite)+\n ["else" ":" suite]\n ["finally" ":" suite]\n try2_stmt ::= "try" ":" suite\n "finally" ":" suite\n\nChanged in version 2.5: In previous versions of Python,\n``try``...``except``...``finally`` did not work. ``try``...``except``\nhad to be nested in ``try``...``finally``.\n\nThe ``except`` clause(s) specify one or more exception handlers. When\nno exception occurs in the ``try`` clause, no exception handler is\nexecuted. When an exception occurs in the ``try`` suite, a search for\nan exception handler is started. This search inspects the except\nclauses in turn until one is found that matches the exception. An\nexpression-less except clause, if present, must be last; it matches\nany exception. For an except clause with an expression, that\nexpression is evaluated, and the clause matches the exception if the\nresulting object is "compatible" with the exception. An object is\ncompatible with an exception if it is the class or a base class of the\nexception object, a tuple containing an item compatible with the\nexception, or, in the (deprecated) case of string exceptions, is the\nraised string itself (note that the object identities must match, i.e.\nit must be the same string object, not just a string with the same\nvalue).\n\nIf no except clause matches the exception, the search for an exception\nhandler continues in the surrounding code and on the invocation stack.\n[1]\n\nIf the evaluation of an expression in the header of an except clause\nraises an exception, the original search for a handler is canceled and\na search starts for the new exception in the surrounding code and on\nthe call stack (it is treated as if the entire ``try`` statement\nraised the exception).\n\nWhen a matching except clause is found, the exception is assigned to\nthe target specified in that except clause, if present, and the except\nclause\'s suite is executed. All except clauses must have an\nexecutable block. When the end of this block is reached, execution\ncontinues normally after the entire try statement. (This means that\nif two nested handlers exist for the same exception, and the exception\noccurs in the try clause of the inner handler, the outer handler will\nnot handle the exception.)\n\nBefore an except clause\'s suite is executed, details about the\nexception are assigned to three variables in the ``sys`` module:\n``sys.exc_type`` receives the object identifying the exception;\n``sys.exc_value`` receives the exception\'s parameter;\n``sys.exc_traceback`` receives a traceback object (see section *The\nstandard type hierarchy*) identifying the point in the program where\nthe exception occurred. 
These details are also available through the\n``sys.exc_info()`` function, which returns a tuple ``(exc_type,\nexc_value, exc_traceback)``. Use of the corresponding variables is\ndeprecated in favor of this function, since their use is unsafe in a\nthreaded program. As of Python 1.5, the variables are restored to\ntheir previous values (before the call) when returning from a function\nthat handled an exception.\n\nThe optional ``else`` clause is executed if and when control flows off\nthe end of the ``try`` clause. [2] Exceptions in the ``else`` clause\nare not handled by the preceding ``except`` clauses.\n\nIf ``finally`` is present, it specifies a \'cleanup\' handler. The\n``try`` clause is executed, including any ``except`` and ``else``\nclauses. If an exception occurs in any of the clauses and is not\nhandled, the exception is temporarily saved. The ``finally`` clause is\nexecuted. If there is a saved exception, it is re-raised at the end\nof the ``finally`` clause. If the ``finally`` clause raises another\nexception or executes a ``return`` or ``break`` statement, the saved\nexception is lost. The exception information is not available to the\nprogram during execution of the ``finally`` clause.\n\nWhen a ``return``, ``break`` or ``continue`` statement is executed in\nthe ``try`` suite of a ``try``...``finally`` statement, the\n``finally`` clause is also executed \'on the way out.\' A ``continue``\nstatement is illegal in the ``finally`` clause. (The reason is a\nproblem with the current implementation --- this restriction may be\nlifted in the future).\n\nAdditional information on exceptions can be found in section\n*Exceptions*, and information on using the ``raise`` statement to\ngenerate exceptions may be found in section *The raise statement*.\n\n\nThe ``with`` statement\n======================\n\nNew in version 2.5.\n\nThe ``with`` statement is used to wrap the execution of a block with\nmethods defined by a context manager (see section *With Statement\nContext Managers*). This allows common\n``try``...``except``...``finally`` usage patterns to be encapsulated\nfor convenient reuse.\n\n with_stmt ::= "with" with_item ("," with_item)* ":" suite\n with_item ::= expression ["as" target]\n\nThe execution of the ``with`` statement with one "item" proceeds as\nfollows:\n\n1. The context expression is evaluated to obtain a context manager.\n\n2. The context manager\'s ``__exit__()`` is loaded for later use.\n\n3. The context manager\'s ``__enter__()`` method is invoked.\n\n4. If a target was included in the ``with`` statement, the return\n value from ``__enter__()`` is assigned to it.\n\n Note: The ``with`` statement guarantees that if the ``__enter__()``\n method returns without an error, then ``__exit__()`` will always\n be called. Thus, if an error occurs during the assignment to the\n target list, it will be treated the same as an error occurring\n within the suite would be. See step 6 below.\n\n5. The suite is executed.\n\n6. The context manager\'s ``__exit__()`` method is invoked. If an\n exception caused the suite to be exited, its type, value, and\n traceback are passed as arguments to ``__exit__()``. Otherwise,\n three ``None`` arguments are supplied.\n\n If the suite was exited due to an exception, and the return value\n from the ``__exit__()`` method was false, the exception is\n reraised. 
If the return value was true, the exception is\n suppressed, and execution continues with the statement following\n the ``with`` statement.\n\n If the suite was exited for any reason other than an exception, the\n return value from ``__exit__()`` is ignored, and execution proceeds\n at the normal location for the kind of exit that was taken.\n\nWith more than one item, the context managers are processed as if\nmultiple ``with`` statements were nested:\n\n with A() as a, B() as b:\n suite\n\nis equivalent to\n\n with A() as a:\n with B() as b:\n suite\n\nNote: In Python 2.5, the ``with`` statement is only allowed when the\n ``with_statement`` feature has been enabled. It is always enabled\n in Python 2.6.\n\nChanged in version 2.7: Support for multiple context expressions.\n\nSee also:\n\n **PEP 0343** - The "with" statement\n The specification, background, and examples for the Python\n ``with`` statement.\n\n\nFunction definitions\n====================\n\nA function definition defines a user-defined function object (see\nsection *The standard type hierarchy*):\n\n decorated ::= decorators (classdef | funcdef)\n decorators ::= decorator+\n decorator ::= "@" dotted_name ["(" [argument_list [","]] ")"] NEWLINE\n funcdef ::= "def" funcname "(" [parameter_list] ")" ":" suite\n dotted_name ::= identifier ("." identifier)*\n parameter_list ::= (defparameter ",")*\n ( "*" identifier [, "**" identifier]\n | "**" identifier\n | defparameter [","] )\n defparameter ::= parameter ["=" expression]\n sublist ::= parameter ("," parameter)* [","]\n parameter ::= identifier | "(" sublist ")"\n funcname ::= identifier\n\nA function definition is an executable statement. Its execution binds\nthe function name in the current local namespace to a function object\n(a wrapper around the executable code for the function). This\nfunction object contains a reference to the current global namespace\nas the global namespace to be used when the function is called.\n\nThe function definition does not execute the function body; this gets\nexecuted only when the function is called. [3]\n\nA function definition may be wrapped by one or more *decorator*\nexpressions. Decorator expressions are evaluated when the function is\ndefined, in the scope that contains the function definition. The\nresult must be a callable, which is invoked with the function object\nas the only argument. The returned value is bound to the function name\ninstead of the function object. Multiple decorators are applied in\nnested fashion. For example, the following code:\n\n @f1(arg)\n @f2\n def func(): pass\n\nis equivalent to:\n\n def func(): pass\n func = f1(arg)(f2(func))\n\nWhen one or more top-level parameters have the form *parameter* ``=``\n*expression*, the function is said to have "default parameter values."\nFor a parameter with a default value, the corresponding argument may\nbe omitted from a call, in which case the parameter\'s default value is\nsubstituted. If a parameter has a default value, all following\nparameters must also have a default value --- this is a syntactic\nrestriction that is not expressed by the grammar.\n\n**Default parameter values are evaluated when the function definition\nis executed.** This means that the expression is evaluated once, when\nthe function is defined, and that that same "pre-computed" value is\nused for each call. This is especially important to understand when a\ndefault parameter is a mutable object, such as a list or a dictionary:\nif the function modifies the object (e.g. 
by appending an item to a\nlist), the default value is in effect modified. This is generally not\nwhat was intended. A way around this is to use ``None`` as the\ndefault, and explicitly test for it in the body of the function, e.g.:\n\n def whats_on_the_telly(penguin=None):\n if penguin is None:\n penguin = []\n penguin.append("property of the zoo")\n return penguin\n\nFunction call semantics are described in more detail in section\n*Calls*. A function call always assigns values to all parameters\nmentioned in the parameter list, either from position arguments, from\nkeyword arguments, or from default values. If the form\n"``*identifier``" is present, it is initialized to a tuple receiving\nany excess positional parameters, defaulting to the empty tuple. If\nthe form "``**identifier``" is present, it is initialized to a new\ndictionary receiving any excess keyword arguments, defaulting to a new\nempty dictionary.\n\nIt is also possible to create anonymous functions (functions not bound\nto a name), for immediate use in expressions. This uses lambda forms,\ndescribed in section *Lambdas*. Note that the lambda form is merely a\nshorthand for a simplified function definition; a function defined in\na "``def``" statement can be passed around or assigned to another name\njust like a function defined by a lambda form. The "``def``" form is\nactually more powerful since it allows the execution of multiple\nstatements.\n\n**Programmer\'s note:** Functions are first-class objects. A "``def``"\nform executed inside a function definition defines a local function\nthat can be returned or passed around. Free variables used in the\nnested function can access the local variables of the function\ncontaining the def. See section *Naming and binding* for details.\n\n\nClass definitions\n=================\n\nA class definition defines a class object (see section *The standard\ntype hierarchy*):\n\n classdef ::= "class" classname [inheritance] ":" suite\n inheritance ::= "(" [expression_list] ")"\n classname ::= identifier\n\nA class definition is an executable statement. It first evaluates the\ninheritance list, if present. Each item in the inheritance list\nshould evaluate to a class object or class type which allows\nsubclassing. The class\'s suite is then executed in a new execution\nframe (see section *Naming and binding*), using a newly created local\nnamespace and the original global namespace. (Usually, the suite\ncontains only function definitions.) When the class\'s suite finishes\nexecution, its execution frame is discarded but its local namespace is\nsaved. [4] A class object is then created using the inheritance list\nfor the base classes and the saved local namespace for the attribute\ndictionary. The class name is bound to this class object in the\noriginal local namespace.\n\n**Programmer\'s note:** Variables defined in the class definition are\nclass variables; they are shared by all instances. To create instance\nvariables, they can be set in a method with ``self.name = value``.\nBoth class and instance variables are accessible through the notation\n"``self.name``", and an instance variable hides a class variable with\nthe same name when accessed in this way. Class variables can be used\nas defaults for instance variables, but using mutable values there can\nlead to unexpected results. 
For *new-style class*es, descriptors can\nbe used to create instance variables with different implementation\ndetails.\n\nClass definitions, like function definitions, may be wrapped by one or\nmore *decorator* expressions. The evaluation rules for the decorator\nexpressions are the same as for functions. The result must be a class\nobject, which is then bound to the class name.\n\n-[ Footnotes ]-\n\n[1] The exception is propagated to the invocation stack only if there\n is no ``finally`` clause that negates the exception.\n\n[2] Currently, control "flows off the end" except in the case of an\n exception or the execution of a ``return``, ``continue``, or\n ``break`` statement.\n\n[3] A string literal appearing as the first statement in the function\n body is transformed into the function\'s ``__doc__`` attribute and\n therefore the function\'s *docstring*.\n\n[4] A string literal appearing as the first statement in the class\n body is transformed into the namespace\'s ``__doc__`` item and\n therefore the class\'s *docstring*.\n', + 'compound': u'\nCompound statements\n*******************\n\nCompound statements contain (groups of) other statements; they affect\nor control the execution of those other statements in some way. In\ngeneral, compound statements span multiple lines, although in simple\nincarnations a whole compound statement may be contained in one line.\n\nThe ``if``, ``while`` and ``for`` statements implement traditional\ncontrol flow constructs. ``try`` specifies exception handlers and/or\ncleanup code for a group of statements. Function and class\ndefinitions are also syntactically compound statements.\n\nCompound statements consist of one or more \'clauses.\' A clause\nconsists of a header and a \'suite.\' The clause headers of a\nparticular compound statement are all at the same indentation level.\nEach clause header begins with a uniquely identifying keyword and ends\nwith a colon. A suite is a group of statements controlled by a\nclause. A suite can be one or more semicolon-separated simple\nstatements on the same line as the header, following the header\'s\ncolon, or it can be one or more indented statements on subsequent\nlines. Only the latter form of suite can contain nested compound\nstatements; the following is illegal, mostly because it wouldn\'t be\nclear to which ``if`` clause a following ``else`` clause would belong:\n\n if test1: if test2: print x\n\nAlso note that the semicolon binds tighter than the colon in this\ncontext, so that in the following example, either all or none of the\n``print`` statements are executed:\n\n if x < y < z: print x; print y; print z\n\nSummarizing:\n\n compound_stmt ::= if_stmt\n | while_stmt\n | for_stmt\n | try_stmt\n | with_stmt\n | funcdef\n | classdef\n | decorated\n suite ::= stmt_list NEWLINE | NEWLINE INDENT statement+ DEDENT\n statement ::= stmt_list NEWLINE | compound_stmt\n stmt_list ::= simple_stmt (";" simple_stmt)* [";"]\n\nNote that statements always end in a ``NEWLINE`` possibly followed by\na ``DEDENT``. 
Also note that optional continuation clauses always\nbegin with a keyword that cannot start a statement, thus there are no\nambiguities (the \'dangling ``else``\' problem is solved in Python by\nrequiring nested ``if`` statements to be indented).\n\nThe formatting of the grammar rules in the following sections places\neach clause on a separate line for clarity.\n\n\nThe ``if`` statement\n====================\n\nThe ``if`` statement is used for conditional execution:\n\n if_stmt ::= "if" expression ":" suite\n ( "elif" expression ":" suite )*\n ["else" ":" suite]\n\nIt selects exactly one of the suites by evaluating the expressions one\nby one until one is found to be true (see section *Boolean operations*\nfor the definition of true and false); then that suite is executed\n(and no other part of the ``if`` statement is executed or evaluated).\nIf all expressions are false, the suite of the ``else`` clause, if\npresent, is executed.\n\n\nThe ``while`` statement\n=======================\n\nThe ``while`` statement is used for repeated execution as long as an\nexpression is true:\n\n while_stmt ::= "while" expression ":" suite\n ["else" ":" suite]\n\nThis repeatedly tests the expression and, if it is true, executes the\nfirst suite; if the expression is false (which may be the first time\nit is tested) the suite of the ``else`` clause, if present, is\nexecuted and the loop terminates.\n\nA ``break`` statement executed in the first suite terminates the loop\nwithout executing the ``else`` clause\'s suite. A ``continue``\nstatement executed in the first suite skips the rest of the suite and\ngoes back to testing the expression.\n\n\nThe ``for`` statement\n=====================\n\nThe ``for`` statement is used to iterate over the elements of a\nsequence (such as a string, tuple or list) or other iterable object:\n\n for_stmt ::= "for" target_list "in" expression_list ":" suite\n ["else" ":" suite]\n\nThe expression list is evaluated once; it should yield an iterable\nobject. An iterator is created for the result of the\n``expression_list``. The suite is then executed once for each item\nprovided by the iterator, in the order of ascending indices. Each\nitem in turn is assigned to the target list using the standard rules\nfor assignments, and then the suite is executed. When the items are\nexhausted (which is immediately when the sequence is empty), the suite\nin the ``else`` clause, if present, is executed, and the loop\nterminates.\n\nA ``break`` statement executed in the first suite terminates the loop\nwithout executing the ``else`` clause\'s suite. A ``continue``\nstatement executed in the first suite skips the rest of the suite and\ncontinues with the next item, or with the ``else`` clause if there was\nno next item.\n\nThe suite may assign to the variable(s) in the target list; this does\nnot affect the next item assigned to it.\n\nThe target list is not deleted when the loop is finished, but if the\nsequence is empty, it will not have been assigned to at all by the\nloop. Hint: the built-in function ``range()`` returns a sequence of\nintegers suitable to emulate the effect of Pascal\'s ``for i := a to b\ndo``; e.g., ``range(3)`` returns the list ``[0, 1, 2]``.\n\nNote: There is a subtlety when the sequence is being modified by the loop\n (this can only occur for mutable sequences, i.e. lists). An internal\n counter is used to keep track of which item is used next, and this\n is incremented on each iteration. When this counter has reached the\n length of the sequence the loop terminates. 
This means that if the\n suite deletes the current (or a previous) item from the sequence,\n the next item will be skipped (since it gets the index of the\n current item which has already been treated). Likewise, if the\n suite inserts an item in the sequence before the current item, the\n current item will be treated again the next time through the loop.\n This can lead to nasty bugs that can be avoided by making a\n temporary copy using a slice of the whole sequence, e.g.,\n\n for x in a[:]:\n if x < 0: a.remove(x)\n\n\nThe ``try`` statement\n=====================\n\nThe ``try`` statement specifies exception handlers and/or cleanup code\nfor a group of statements:\n\n try_stmt ::= try1_stmt | try2_stmt\n try1_stmt ::= "try" ":" suite\n ("except" [expression [("as" | ",") target]] ":" suite)+\n ["else" ":" suite]\n ["finally" ":" suite]\n try2_stmt ::= "try" ":" suite\n "finally" ":" suite\n\nChanged in version 2.5: In previous versions of Python,\n``try``...``except``...``finally`` did not work. ``try``...``except``\nhad to be nested in ``try``...``finally``.\n\nThe ``except`` clause(s) specify one or more exception handlers. When\nno exception occurs in the ``try`` clause, no exception handler is\nexecuted. When an exception occurs in the ``try`` suite, a search for\nan exception handler is started. This search inspects the except\nclauses in turn until one is found that matches the exception. An\nexpression-less except clause, if present, must be last; it matches\nany exception. For an except clause with an expression, that\nexpression is evaluated, and the clause matches the exception if the\nresulting object is "compatible" with the exception. An object is\ncompatible with an exception if it is the class or a base class of the\nexception object, a tuple containing an item compatible with the\nexception, or, in the (deprecated) case of string exceptions, is the\nraised string itself (note that the object identities must match, i.e.\nit must be the same string object, not just a string with the same\nvalue).\n\nIf no except clause matches the exception, the search for an exception\nhandler continues in the surrounding code and on the invocation stack.\n[1]\n\nIf the evaluation of an expression in the header of an except clause\nraises an exception, the original search for a handler is canceled and\na search starts for the new exception in the surrounding code and on\nthe call stack (it is treated as if the entire ``try`` statement\nraised the exception).\n\nWhen a matching except clause is found, the exception is assigned to\nthe target specified in that except clause, if present, and the except\nclause\'s suite is executed. All except clauses must have an\nexecutable block. When the end of this block is reached, execution\ncontinues normally after the entire try statement. (This means that\nif two nested handlers exist for the same exception, and the exception\noccurs in the try clause of the inner handler, the outer handler will\nnot handle the exception.)\n\nBefore an except clause\'s suite is executed, details about the\nexception are assigned to three variables in the ``sys`` module:\n``sys.exc_type`` receives the object identifying the exception;\n``sys.exc_value`` receives the exception\'s parameter;\n``sys.exc_traceback`` receives a traceback object (see section *The\nstandard type hierarchy*) identifying the point in the program where\nthe exception occurred. 
These details are also available through the\n``sys.exc_info()`` function, which returns a tuple ``(exc_type,\nexc_value, exc_traceback)``. Use of the corresponding variables is\ndeprecated in favor of this function, since their use is unsafe in a\nthreaded program. As of Python 1.5, the variables are restored to\ntheir previous values (before the call) when returning from a function\nthat handled an exception.\n\nThe optional ``else`` clause is executed if and when control flows off\nthe end of the ``try`` clause. [2] Exceptions in the ``else`` clause\nare not handled by the preceding ``except`` clauses.\n\nIf ``finally`` is present, it specifies a \'cleanup\' handler. The\n``try`` clause is executed, including any ``except`` and ``else``\nclauses. If an exception occurs in any of the clauses and is not\nhandled, the exception is temporarily saved. The ``finally`` clause is\nexecuted. If there is a saved exception, it is re-raised at the end\nof the ``finally`` clause. If the ``finally`` clause raises another\nexception or executes a ``return`` or ``break`` statement, the saved\nexception is lost. The exception information is not available to the\nprogram during execution of the ``finally`` clause.\n\nWhen a ``return``, ``break`` or ``continue`` statement is executed in\nthe ``try`` suite of a ``try``...``finally`` statement, the\n``finally`` clause is also executed \'on the way out.\' A ``continue``\nstatement is illegal in the ``finally`` clause. (The reason is a\nproblem with the current implementation --- this restriction may be\nlifted in the future).\n\nAdditional information on exceptions can be found in section\n*Exceptions*, and information on using the ``raise`` statement to\ngenerate exceptions may be found in section *The raise statement*.\n\n\nThe ``with`` statement\n======================\n\nNew in version 2.5.\n\nThe ``with`` statement is used to wrap the execution of a block with\nmethods defined by a context manager (see section *With Statement\nContext Managers*). This allows common\n``try``...``except``...``finally`` usage patterns to be encapsulated\nfor convenient reuse.\n\n with_stmt ::= "with" with_item ("," with_item)* ":" suite\n with_item ::= expression ["as" target]\n\nThe execution of the ``with`` statement with one "item" proceeds as\nfollows:\n\n1. The context expression (the expression given in the **with_item**)\n is evaluated to obtain a context manager.\n\n2. The context manager\'s ``__exit__()`` is loaded for later use.\n\n3. The context manager\'s ``__enter__()`` method is invoked.\n\n4. If a target was included in the ``with`` statement, the return\n value from ``__enter__()`` is assigned to it.\n\n Note: The ``with`` statement guarantees that if the ``__enter__()``\n method returns without an error, then ``__exit__()`` will always\n be called. Thus, if an error occurs during the assignment to the\n target list, it will be treated the same as an error occurring\n within the suite would be. See step 6 below.\n\n5. The suite is executed.\n\n6. The context manager\'s ``__exit__()`` method is invoked. If an\n exception caused the suite to be exited, its type, value, and\n traceback are passed as arguments to ``__exit__()``. Otherwise,\n three ``None`` arguments are supplied.\n\n If the suite was exited due to an exception, and the return value\n from the ``__exit__()`` method was false, the exception is\n reraised. 
If the return value was true, the exception is\n suppressed, and execution continues with the statement following\n the ``with`` statement.\n\n If the suite was exited for any reason other than an exception, the\n return value from ``__exit__()`` is ignored, and execution proceeds\n at the normal location for the kind of exit that was taken.\n\nWith more than one item, the context managers are processed as if\nmultiple ``with`` statements were nested:\n\n with A() as a, B() as b:\n suite\n\nis equivalent to\n\n with A() as a:\n with B() as b:\n suite\n\nNote: In Python 2.5, the ``with`` statement is only allowed when the\n ``with_statement`` feature has been enabled. It is always enabled\n in Python 2.6.\n\nChanged in version 2.7: Support for multiple context expressions.\n\nSee also:\n\n **PEP 0343** - The "with" statement\n The specification, background, and examples for the Python\n ``with`` statement.\n\n\nFunction definitions\n====================\n\nA function definition defines a user-defined function object (see\nsection *The standard type hierarchy*):\n\n decorated ::= decorators (classdef | funcdef)\n decorators ::= decorator+\n decorator ::= "@" dotted_name ["(" [argument_list [","]] ")"] NEWLINE\n funcdef ::= "def" funcname "(" [parameter_list] ")" ":" suite\n dotted_name ::= identifier ("." identifier)*\n parameter_list ::= (defparameter ",")*\n ( "*" identifier [, "**" identifier]\n | "**" identifier\n | defparameter [","] )\n defparameter ::= parameter ["=" expression]\n sublist ::= parameter ("," parameter)* [","]\n parameter ::= identifier | "(" sublist ")"\n funcname ::= identifier\n\nA function definition is an executable statement. Its execution binds\nthe function name in the current local namespace to a function object\n(a wrapper around the executable code for the function). This\nfunction object contains a reference to the current global namespace\nas the global namespace to be used when the function is called.\n\nThe function definition does not execute the function body; this gets\nexecuted only when the function is called. [3]\n\nA function definition may be wrapped by one or more *decorator*\nexpressions. Decorator expressions are evaluated when the function is\ndefined, in the scope that contains the function definition. The\nresult must be a callable, which is invoked with the function object\nas the only argument. The returned value is bound to the function name\ninstead of the function object. Multiple decorators are applied in\nnested fashion. For example, the following code:\n\n @f1(arg)\n @f2\n def func(): pass\n\nis equivalent to:\n\n def func(): pass\n func = f1(arg)(f2(func))\n\nWhen one or more top-level parameters have the form *parameter* ``=``\n*expression*, the function is said to have "default parameter values."\nFor a parameter with a default value, the corresponding argument may\nbe omitted from a call, in which case the parameter\'s default value is\nsubstituted. If a parameter has a default value, all following\nparameters must also have a default value --- this is a syntactic\nrestriction that is not expressed by the grammar.\n\n**Default parameter values are evaluated when the function definition\nis executed.** This means that the expression is evaluated once, when\nthe function is defined, and that that same "pre-computed" value is\nused for each call. This is especially important to understand when a\ndefault parameter is a mutable object, such as a list or a dictionary:\nif the function modifies the object (e.g. 
by appending an item to a\nlist), the default value is in effect modified. This is generally not\nwhat was intended. A way around this is to use ``None`` as the\ndefault, and explicitly test for it in the body of the function, e.g.:\n\n def whats_on_the_telly(penguin=None):\n if penguin is None:\n penguin = []\n penguin.append("property of the zoo")\n return penguin\n\nFunction call semantics are described in more detail in section\n*Calls*. A function call always assigns values to all parameters\nmentioned in the parameter list, either from position arguments, from\nkeyword arguments, or from default values. If the form\n"``*identifier``" is present, it is initialized to a tuple receiving\nany excess positional parameters, defaulting to the empty tuple. If\nthe form "``**identifier``" is present, it is initialized to a new\ndictionary receiving any excess keyword arguments, defaulting to a new\nempty dictionary.\n\nIt is also possible to create anonymous functions (functions not bound\nto a name), for immediate use in expressions. This uses lambda forms,\ndescribed in section *Lambdas*. Note that the lambda form is merely a\nshorthand for a simplified function definition; a function defined in\na "``def``" statement can be passed around or assigned to another name\njust like a function defined by a lambda form. The "``def``" form is\nactually more powerful since it allows the execution of multiple\nstatements.\n\n**Programmer\'s note:** Functions are first-class objects. A "``def``"\nform executed inside a function definition defines a local function\nthat can be returned or passed around. Free variables used in the\nnested function can access the local variables of the function\ncontaining the def. See section *Naming and binding* for details.\n\n\nClass definitions\n=================\n\nA class definition defines a class object (see section *The standard\ntype hierarchy*):\n\n classdef ::= "class" classname [inheritance] ":" suite\n inheritance ::= "(" [expression_list] ")"\n classname ::= identifier\n\nA class definition is an executable statement. It first evaluates the\ninheritance list, if present. Each item in the inheritance list\nshould evaluate to a class object or class type which allows\nsubclassing. The class\'s suite is then executed in a new execution\nframe (see section *Naming and binding*), using a newly created local\nnamespace and the original global namespace. (Usually, the suite\ncontains only function definitions.) When the class\'s suite finishes\nexecution, its execution frame is discarded but its local namespace is\nsaved. [4] A class object is then created using the inheritance list\nfor the base classes and the saved local namespace for the attribute\ndictionary. The class name is bound to this class object in the\noriginal local namespace.\n\n**Programmer\'s note:** Variables defined in the class definition are\nclass variables; they are shared by all instances. To create instance\nvariables, they can be set in a method with ``self.name = value``.\nBoth class and instance variables are accessible through the notation\n"``self.name``", and an instance variable hides a class variable with\nthe same name when accessed in this way. Class variables can be used\nas defaults for instance variables, but using mutable values there can\nlead to unexpected results. 
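A minimal sketch of the pitfall just described (the class ``Bag`` and its
attribute names are invented for the example): a mutable class variable is
shared by every instance, while an attribute assigned in ``__init__()`` is
created per instance.

   class Bag(object):
       items = []                  # class variable: one list shared by all instances

       def __init__(self):
           self.safe_items = []    # instance variable: a fresh list per instance

   a = Bag()
   b = Bag()
   a.items.append(1)               # mutates the shared, class-level list
   a.safe_items.append(1)
   print b.items                   # [1]  (visible through every instance)
   print b.safe_items              # []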
For *new-style class*es, descriptors can\nbe used to create instance variables with different implementation\ndetails.\n\nClass definitions, like function definitions, may be wrapped by one or\nmore *decorator* expressions. The evaluation rules for the decorator\nexpressions are the same as for functions. The result must be a class\nobject, which is then bound to the class name.\n\n-[ Footnotes ]-\n\n[1] The exception is propagated to the invocation stack only if there\n is no ``finally`` clause that negates the exception.\n\n[2] Currently, control "flows off the end" except in the case of an\n exception or the execution of a ``return``, ``continue``, or\n ``break`` statement.\n\n[3] A string literal appearing as the first statement in the function\n body is transformed into the function\'s ``__doc__`` attribute and\n therefore the function\'s *docstring*.\n\n[4] A string literal appearing as the first statement in the class\n body is transformed into the namespace\'s ``__doc__`` item and\n therefore the class\'s *docstring*.\n', 'context-managers': u'\nWith Statement Context Managers\n*******************************\n\nNew in version 2.5.\n\nA *context manager* is an object that defines the runtime context to\nbe established when executing a ``with`` statement. The context\nmanager handles the entry into, and the exit from, the desired runtime\ncontext for the execution of the block of code. Context managers are\nnormally invoked using the ``with`` statement (described in section\n*The with statement*), but can also be used by directly invoking their\nmethods.\n\nTypical uses of context managers include saving and restoring various\nkinds of global state, locking and unlocking resources, closing opened\nfiles, etc.\n\nFor more information on context managers, see *Context Manager Types*.\n\nobject.__enter__(self)\n\n Enter the runtime context related to this object. The ``with``\n statement will bind this method\'s return value to the target(s)\n specified in the ``as`` clause of the statement, if any.\n\nobject.__exit__(self, exc_type, exc_value, traceback)\n\n Exit the runtime context related to this object. The parameters\n describe the exception that caused the context to be exited. If the\n context was exited without an exception, all three arguments will\n be ``None``.\n\n If an exception is supplied, and the method wishes to suppress the\n exception (i.e., prevent it from being propagated), it should\n return a true value. Otherwise, the exception will be processed\n normally upon exit from this method.\n\n Note that ``__exit__()`` methods should not reraise the passed-in\n exception; this is the caller\'s responsibility.\n\nSee also:\n\n **PEP 0343** - The "with" statement\n The specification, background, and examples for the Python\n ``with`` statement.\n', 'continue': u'\nThe ``continue`` statement\n**************************\n\n continue_stmt ::= "continue"\n\n``continue`` may only occur syntactically nested in a ``for`` or\n``while`` loop, but not nested in a function or class definition or\n``finally`` clause within that loop. 
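For instance (a brief sketch; the loop body is invented), ``continue`` may
appear directly inside the loop it controls:

   for n in range(6):
       if n % 2:
           continue        # skip odd numbers and go on with the next item
       print n             # prints 0, 2 and 4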
It continues with the next cycle\nof the nearest enclosing loop.\n\nWhen ``continue`` passes control out of a ``try`` statement with a\n``finally`` clause, that ``finally`` clause is executed before really\nstarting the next loop cycle.\n', 'conversions': u'\nArithmetic conversions\n**********************\n\nWhen a description of an arithmetic operator below uses the phrase\n"the numeric arguments are converted to a common type," the arguments\nare coerced using the coercion rules listed at *Coercion rules*. If\nboth arguments are standard numeric types, the following coercions are\napplied:\n\n* If either argument is a complex number, the other is converted to\n complex;\n\n* otherwise, if either argument is a floating point number, the other\n is converted to floating point;\n\n* otherwise, if either argument is a long integer, the other is\n converted to long integer;\n\n* otherwise, both must be plain integers and no conversion is\n necessary.\n\nSome additional rules apply for certain operators (e.g., a string left\nargument to the \'%\' operator). Extensions can define their own\ncoercions.\n', 'customization': u'\nBasic customization\n*******************\n\nobject.__new__(cls[, ...])\n\n Called to create a new instance of class *cls*. ``__new__()`` is a\n static method (special-cased so you need not declare it as such)\n that takes the class of which an instance was requested as its\n first argument. The remaining arguments are those passed to the\n object constructor expression (the call to the class). The return\n value of ``__new__()`` should be the new object instance (usually\n an instance of *cls*).\n\n Typical implementations create a new instance of the class by\n invoking the superclass\'s ``__new__()`` method using\n ``super(currentclass, cls).__new__(cls[, ...])`` with appropriate\n arguments and then modifying the newly-created instance as\n necessary before returning it.\n\n If ``__new__()`` returns an instance of *cls*, then the new\n instance\'s ``__init__()`` method will be invoked like\n ``__init__(self[, ...])``, where *self* is the new instance and the\n remaining arguments are the same as were passed to ``__new__()``.\n\n If ``__new__()`` does not return an instance of *cls*, then the new\n instance\'s ``__init__()`` method will not be invoked.\n\n ``__new__()`` is intended mainly to allow subclasses of immutable\n types (like int, str, or tuple) to customize instance creation. It\n is also commonly overridden in custom metaclasses in order to\n customize class creation.\n\nobject.__init__(self[, ...])\n\n Called when the instance is created. The arguments are those\n passed to the class constructor expression. If a base class has an\n ``__init__()`` method, the derived class\'s ``__init__()`` method,\n if any, must explicitly call it to ensure proper initialization of\n the base class part of the instance; for example:\n ``BaseClass.__init__(self, [args...])``. As a special constraint\n on constructors, no value may be returned; doing so will cause a\n ``TypeError`` to be raised at runtime.\n\nobject.__del__(self)\n\n Called when the instance is about to be destroyed. This is also\n called a destructor. If a base class has a ``__del__()`` method,\n the derived class\'s ``__del__()`` method, if any, must explicitly\n call it to ensure proper deletion of the base class part of the\n instance. Note that it is possible (though not recommended!) for\n the ``__del__()`` method to postpone destruction of the instance by\n creating a new reference to it. 
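   A minimal sketch of this (the names ``Phoenix`` and ``_survivors`` are
   invented for the example): ``__del__()`` stores a new reference to the
   instance, so the object is not destroyed at that point.

      _survivors = []

      class Phoenix(object):
          def __del__(self):
              # Postpone destruction by keeping a new reference to self;
              # the object now stays alive until this reference goes away.
              _survivors.append(self)

      p = Phoenix()
      del p      # __del__() runs, but the instance survives in _survivors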
It may then be called at a later\n time when this new reference is deleted. It is not guaranteed that\n ``__del__()`` methods are called for objects that still exist when\n the interpreter exits.\n\n Note: ``del x`` doesn\'t directly call ``x.__del__()`` --- the former\n decrements the reference count for ``x`` by one, and the latter\n is only called when ``x``\'s reference count reaches zero. Some\n common situations that may prevent the reference count of an\n object from going to zero include: circular references between\n objects (e.g., a doubly-linked list or a tree data structure with\n parent and child pointers); a reference to the object on the\n stack frame of a function that caught an exception (the traceback\n stored in ``sys.exc_traceback`` keeps the stack frame alive); or\n a reference to the object on the stack frame that raised an\n unhandled exception in interactive mode (the traceback stored in\n ``sys.last_traceback`` keeps the stack frame alive). The first\n situation can only be remedied by explicitly breaking the cycles;\n the latter two situations can be resolved by storing ``None`` in\n ``sys.exc_traceback`` or ``sys.last_traceback``. Circular\n references which are garbage are detected when the option cycle\n detector is enabled (it\'s on by default), but can only be cleaned\n up if there are no Python-level ``__del__()`` methods involved.\n Refer to the documentation for the ``gc`` module for more\n information about how ``__del__()`` methods are handled by the\n cycle detector, particularly the description of the ``garbage``\n value.\n\n Warning: Due to the precarious circumstances under which ``__del__()``\n methods are invoked, exceptions that occur during their execution\n are ignored, and a warning is printed to ``sys.stderr`` instead.\n Also, when ``__del__()`` is invoked in response to a module being\n deleted (e.g., when execution of the program is done), other\n globals referenced by the ``__del__()`` method may already have\n been deleted or in the process of being torn down (e.g. the\n import machinery shutting down). For this reason, ``__del__()``\n methods should do the absolute minimum needed to maintain\n external invariants. Starting with version 1.5, Python\n guarantees that globals whose name begins with a single\n underscore are deleted from their module before other globals are\n deleted; if no other references to such globals exist, this may\n help in assuring that imported modules are still available at the\n time when the ``__del__()`` method is called.\n\nobject.__repr__(self)\n\n Called by the ``repr()`` built-in function and by string\n conversions (reverse quotes) to compute the "official" string\n representation of an object. If at all possible, this should look\n like a valid Python expression that could be used to recreate an\n object with the same value (given an appropriate environment). If\n this is not possible, a string of the form ``<...some useful\n description...>`` should be returned. The return value must be a\n string object. If a class defines ``__repr__()`` but not\n ``__str__()``, then ``__repr__()`` is also used when an "informal"\n string representation of instances of that class is required.\n\n This is typically used for debugging, so it is important that the\n representation is information-rich and unambiguous.\n\nobject.__str__(self)\n\n Called by the ``str()`` built-in function and by the ``print``\n statement to compute the "informal" string representation of an\n object. 
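   For instance (a hedged sketch; the class ``Point`` is invented), a class
   may define both representations:

      class Point(object):
          def __init__(self, x, y):
              self.x, self.y = x, y
          def __repr__(self):
              return "Point(%r, %r)" % (self.x, self.y)   # "official" form
          def __str__(self):
              return "(%s, %s)" % (self.x, self.y)        # "informal" form

      p = Point(1, 2)
      print repr(p)      # Point(1, 2)
      print p            # (1, 2), because print uses __str__()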
This differs from ``__repr__()`` in that it does not have\n to be a valid Python expression: a more convenient or concise\n representation may be used instead. The return value must be a\n string object.\n\nobject.__lt__(self, other)\nobject.__le__(self, other)\nobject.__eq__(self, other)\nobject.__ne__(self, other)\nobject.__gt__(self, other)\nobject.__ge__(self, other)\n\n New in version 2.1.\n\n These are the so-called "rich comparison" methods, and are called\n for comparison operators in preference to ``__cmp__()`` below. The\n correspondence between operator symbols and method names is as\n follows: ``xy`` call ``x.__ne__(y)``, ``x>y`` calls ``x.__gt__(y)``, and\n ``x>=y`` calls ``x.__ge__(y)``.\n\n A rich comparison method may return the singleton\n ``NotImplemented`` if it does not implement the operation for a\n given pair of arguments. By convention, ``False`` and ``True`` are\n returned for a successful comparison. However, these methods can\n return any value, so if the comparison operator is used in a\n Boolean context (e.g., in the condition of an ``if`` statement),\n Python will call ``bool()`` on the value to determine if the result\n is true or false.\n\n There are no implied relationships among the comparison operators.\n The truth of ``x==y`` does not imply that ``x!=y`` is false.\n Accordingly, when defining ``__eq__()``, one should also define\n ``__ne__()`` so that the operators will behave as expected. See\n the paragraph on ``__hash__()`` for some important notes on\n creating *hashable* objects which support custom comparison\n operations and are usable as dictionary keys.\n\n There are no swapped-argument versions of these methods (to be used\n when the left argument does not support the operation but the right\n argument does); rather, ``__lt__()`` and ``__gt__()`` are each\n other\'s reflection, ``__le__()`` and ``__ge__()`` are each other\'s\n reflection, and ``__eq__()`` and ``__ne__()`` are their own\n reflection.\n\n Arguments to rich comparison methods are never coerced.\n\n To automatically generate ordering operations from a single root\n operation, see ``functools.total_ordering()``.\n\nobject.__cmp__(self, other)\n\n Called by comparison operations if rich comparison (see above) is\n not defined. Should return a negative integer if ``self < other``,\n zero if ``self == other``, a positive integer if ``self > other``.\n If no ``__cmp__()``, ``__eq__()`` or ``__ne__()`` operation is\n defined, class instances are compared by object identity\n ("address"). See also the description of ``__hash__()`` for some\n important notes on creating *hashable* objects which support custom\n comparison operations and are usable as dictionary keys. (Note: the\n restriction that exceptions are not propagated by ``__cmp__()`` has\n been removed since Python 1.5.)\n\nobject.__rcmp__(self, other)\n\n Changed in version 2.1: No longer supported.\n\nobject.__hash__(self)\n\n Called by built-in function ``hash()`` and for operations on\n members of hashed collections including ``set``, ``frozenset``, and\n ``dict``. ``__hash__()`` should return an integer. The only\n required property is that objects which compare equal have the same\n hash value; it is advised to somehow mix together (e.g. 
using\n exclusive or) the hash values for the components of the object that\n also play a part in comparison of objects.\n\n If a class does not define a ``__cmp__()`` or ``__eq__()`` method\n it should not define a ``__hash__()`` operation either; if it\n defines ``__cmp__()`` or ``__eq__()`` but not ``__hash__()``, its\n instances will not be usable in hashed collections. If a class\n defines mutable objects and implements a ``__cmp__()`` or\n ``__eq__()`` method, it should not implement ``__hash__()``, since\n hashable collection implementations require that a object\'s hash\n value is immutable (if the object\'s hash value changes, it will be\n in the wrong hash bucket).\n\n User-defined classes have ``__cmp__()`` and ``__hash__()`` methods\n by default; with them, all objects compare unequal (except with\n themselves) and ``x.__hash__()`` returns ``id(x)``.\n\n Classes which inherit a ``__hash__()`` method from a parent class\n but change the meaning of ``__cmp__()`` or ``__eq__()`` such that\n the hash value returned is no longer appropriate (e.g. by switching\n to a value-based concept of equality instead of the default\n identity based equality) can explicitly flag themselves as being\n unhashable by setting ``__hash__ = None`` in the class definition.\n Doing so means that not only will instances of the class raise an\n appropriate ``TypeError`` when a program attempts to retrieve their\n hash value, but they will also be correctly identified as\n unhashable when checking ``isinstance(obj, collections.Hashable)``\n (unlike classes which define their own ``__hash__()`` to explicitly\n raise ``TypeError``).\n\n Changed in version 2.5: ``__hash__()`` may now also return a long\n integer object; the 32-bit integer is then derived from the hash of\n that object.\n\n Changed in version 2.6: ``__hash__`` may now be set to ``None`` to\n explicitly flag instances of a class as unhashable.\n\nobject.__nonzero__(self)\n\n Called to implement truth value testing and the built-in operation\n ``bool()``; should return ``False`` or ``True``, or their integer\n equivalents ``0`` or ``1``. When this method is not defined,\n ``__len__()`` is called, if it is defined, and the object is\n considered true if its result is nonzero. If a class defines\n neither ``__len__()`` nor ``__nonzero__()``, all its instances are\n considered true.\n\nobject.__unicode__(self)\n\n Called to implement ``unicode()`` built-in; should return a Unicode\n object. When this method is not defined, string conversion is\n attempted, and the result of string conversion is converted to\n Unicode using the system default encoding.\n', - 'debugger': u'\n``pdb`` --- The Python Debugger\n*******************************\n\nThe module ``pdb`` defines an interactive source code debugger for\nPython programs. It supports setting (conditional) breakpoints and\nsingle stepping at the source line level, inspection of stack frames,\nsource code listing, and evaluation of arbitrary Python code in the\ncontext of any stack frame. It also supports post-mortem debugging\nand can be called under program control.\n\nThe debugger is extensible --- it is actually defined as the class\n``Pdb``. This is currently undocumented but easily understood by\nreading the source. The extension interface uses the modules ``bdb``\nand ``cmd``.\n\nThe debugger\'s prompt is ``(Pdb)``. 
Typical usage to run a program\nunder control of the debugger is:\n\n >>> import pdb\n >>> import mymodule\n >>> pdb.run(\'mymodule.test()\')\n > (0)?()\n (Pdb) continue\n > (1)?()\n (Pdb) continue\n NameError: \'spam\'\n > (1)?()\n (Pdb)\n\n``pdb.py`` can also be invoked as a script to debug other scripts.\nFor example:\n\n python -m pdb myscript.py\n\nWhen invoked as a script, pdb will automatically enter post-mortem\ndebugging if the program being debugged exits abnormally. After post-\nmortem debugging (or after normal exit of the program), pdb will\nrestart the program. Automatic restarting preserves pdb\'s state (such\nas breakpoints) and in most cases is more useful than quitting the\ndebugger upon program\'s exit.\n\nNew in version 2.4: Restarting post-mortem behavior added.\n\nThe typical usage to break into the debugger from a running program is\nto insert\n\n import pdb; pdb.set_trace()\n\nat the location you want to break into the debugger. You can then\nstep through the code following this statement, and continue running\nwithout the debugger using the ``c`` command.\n\nThe typical usage to inspect a crashed program is:\n\n >>> import pdb\n >>> import mymodule\n >>> mymodule.test()\n Traceback (most recent call last):\n File "", line 1, in ?\n File "./mymodule.py", line 4, in test\n test2()\n File "./mymodule.py", line 3, in test2\n print spam\n NameError: spam\n >>> pdb.pm()\n > ./mymodule.py(3)test2()\n -> print spam\n (Pdb)\n\nThe module defines the following functions; each enters the debugger\nin a slightly different way:\n\npdb.run(statement[, globals[, locals]])\n\n Execute the *statement* (given as a string) under debugger control.\n The debugger prompt appears before any code is executed; you can\n set breakpoints and type ``continue``, or you can step through the\n statement using ``step`` or ``next`` (all these commands are\n explained below). The optional *globals* and *locals* arguments\n specify the environment in which the code is executed; by default\n the dictionary of the module ``__main__`` is used. (See the\n explanation of the ``exec`` statement or the ``eval()`` built-in\n function.)\n\npdb.runeval(expression[, globals[, locals]])\n\n Evaluate the *expression* (given as a string) under debugger\n control. When ``runeval()`` returns, it returns the value of the\n expression. Otherwise this function is similar to ``run()``.\n\npdb.runcall(function[, argument, ...])\n\n Call the *function* (a function or method object, not a string)\n with the given arguments. When ``runcall()`` returns, it returns\n whatever the function call returned. The debugger prompt appears\n as soon as the function is entered.\n\npdb.set_trace()\n\n Enter the debugger at the calling stack frame. This is useful to\n hard-code a breakpoint at a given point in a program, even if the\n code is not otherwise being debugged (e.g. when an assertion\n fails).\n\npdb.post_mortem([traceback])\n\n Enter post-mortem debugging of the given *traceback* object. If no\n *traceback* is given, it uses the one of the exception that is\n currently being handled (an exception must be being handled if the\n default is to be used).\n\npdb.pm()\n\n Enter post-mortem debugging of the traceback found in\n ``sys.last_traceback``.\n\nThe ``run_*`` functions and ``set_trace()`` are aliases for\ninstantiating the ``Pdb`` class and calling the method of the same\nname. 
If you want to access further features, you have to do this\nyourself:\n\nclass class pdb.Pdb(completekey=\'tab\', stdin=None, stdout=None, skip=None)\n\n ``Pdb`` is the debugger class.\n\n The *completekey*, *stdin* and *stdout* arguments are passed to the\n underlying ``cmd.Cmd`` class; see the description there.\n\n The *skip* argument, if given, must be an iterable of glob-style\n module name patterns. The debugger will not step into frames that\n originate in a module that matches one of these patterns. [1]\n\n Example call to enable tracing with *skip*:\n\n import pdb; pdb.Pdb(skip=[\'django.*\']).set_trace()\n\n New in version 2.7: The *skip* argument.\n\n run(statement[, globals[, locals]])\n runeval(expression[, globals[, locals]])\n runcall(function[, argument, ...])\n set_trace()\n\n See the documentation for the functions explained above.\n', + 'debugger': u'\n``pdb`` --- The Python Debugger\n*******************************\n\nThe module ``pdb`` defines an interactive source code debugger for\nPython programs. It supports setting (conditional) breakpoints and\nsingle stepping at the source line level, inspection of stack frames,\nsource code listing, and evaluation of arbitrary Python code in the\ncontext of any stack frame. It also supports post-mortem debugging\nand can be called under program control.\n\nThe debugger is extensible --- it is actually defined as the class\n``Pdb``. This is currently undocumented but easily understood by\nreading the source. The extension interface uses the modules ``bdb``\nand ``cmd``.\n\nThe debugger\'s prompt is ``(Pdb)``. Typical usage to run a program\nunder control of the debugger is:\n\n >>> import pdb\n >>> import mymodule\n >>> pdb.run(\'mymodule.test()\')\n > (0)?()\n (Pdb) continue\n > (1)?()\n (Pdb) continue\n NameError: \'spam\'\n > (1)?()\n (Pdb)\n\n``pdb.py`` can also be invoked as a script to debug other scripts.\nFor example:\n\n python -m pdb myscript.py\n\nWhen invoked as a script, pdb will automatically enter post-mortem\ndebugging if the program being debugged exits abnormally. After post-\nmortem debugging (or after normal exit of the program), pdb will\nrestart the program. Automatic restarting preserves pdb\'s state (such\nas breakpoints) and in most cases is more useful than quitting the\ndebugger upon program\'s exit.\n\nNew in version 2.4: Restarting post-mortem behavior added.\n\nThe typical usage to break into the debugger from a running program is\nto insert\n\n import pdb; pdb.set_trace()\n\nat the location you want to break into the debugger. You can then\nstep through the code following this statement, and continue running\nwithout the debugger using the ``c`` command.\n\nThe typical usage to inspect a crashed program is:\n\n >>> import pdb\n >>> import mymodule\n >>> mymodule.test()\n Traceback (most recent call last):\n File "", line 1, in ?\n File "./mymodule.py", line 4, in test\n test2()\n File "./mymodule.py", line 3, in test2\n print spam\n NameError: spam\n >>> pdb.pm()\n > ./mymodule.py(3)test2()\n -> print spam\n (Pdb)\n\nThe module defines the following functions; each enters the debugger\nin a slightly different way:\n\npdb.run(statement[, globals[, locals]])\n\n Execute the *statement* (given as a string) under debugger control.\n The debugger prompt appears before any code is executed; you can\n set breakpoints and type ``continue``, or you can step through the\n statement using ``step`` or ``next`` (all these commands are\n explained below). 
The optional *globals* and *locals* arguments\n specify the environment in which the code is executed; by default\n the dictionary of the module ``__main__`` is used. (See the\n explanation of the ``exec`` statement or the ``eval()`` built-in\n function.)\n\npdb.runeval(expression[, globals[, locals]])\n\n Evaluate the *expression* (given as a string) under debugger\n control. When ``runeval()`` returns, it returns the value of the\n expression. Otherwise this function is similar to ``run()``.\n\npdb.runcall(function[, argument, ...])\n\n Call the *function* (a function or method object, not a string)\n with the given arguments. When ``runcall()`` returns, it returns\n whatever the function call returned. The debugger prompt appears\n as soon as the function is entered.\n\npdb.set_trace()\n\n Enter the debugger at the calling stack frame. This is useful to\n hard-code a breakpoint at a given point in a program, even if the\n code is not otherwise being debugged (e.g. when an assertion\n fails).\n\npdb.post_mortem([traceback])\n\n Enter post-mortem debugging of the given *traceback* object. If no\n *traceback* is given, it uses the one of the exception that is\n currently being handled (an exception must be being handled if the\n default is to be used).\n\npdb.pm()\n\n Enter post-mortem debugging of the traceback found in\n ``sys.last_traceback``.\n\nThe ``run*`` functions and ``set_trace()`` are aliases for\ninstantiating the ``Pdb`` class and calling the method of the same\nname. If you want to access further features, you have to do this\nyourself:\n\nclass class pdb.Pdb(completekey=\'tab\', stdin=None, stdout=None, skip=None)\n\n ``Pdb`` is the debugger class.\n\n The *completekey*, *stdin* and *stdout* arguments are passed to the\n underlying ``cmd.Cmd`` class; see the description there.\n\n The *skip* argument, if given, must be an iterable of glob-style\n module name patterns. The debugger will not step into frames that\n originate in a module that matches one of these patterns. [1]\n\n Example call to enable tracing with *skip*:\n\n import pdb; pdb.Pdb(skip=[\'django.*\']).set_trace()\n\n New in version 2.7: The *skip* argument.\n\n run(statement[, globals[, locals]])\n runeval(expression[, globals[, locals]])\n runcall(function[, argument, ...])\n set_trace()\n\n See the documentation for the functions explained above.\n', 'del': u'\nThe ``del`` statement\n*********************\n\n del_stmt ::= "del" target_list\n\nDeletion is recursively defined very similar to the way assignment is\ndefined. Rather that spelling it out in full details, here are some\nhints.\n\nDeletion of a target list recursively deletes each target, from left\nto right.\n\nDeletion of a name removes the binding of that name from the local or\nglobal namespace, depending on whether the name occurs in a ``global``\nstatement in the same code block. 
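A short interactive sketch of this (the name ``x`` is arbitrary):

   >>> x = 1
   >>> del x        # removes the binding of 'x' from the local namespace
   >>> del x        # 'x' is now unbound, so deleting it again fails
   Traceback (most recent call last):
     ...
   NameError: name 'x' is not defined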
If the name is unbound, a\n``NameError`` exception will be raised.\n\nIt is illegal to delete a name from the local namespace if it occurs\nas a free variable in a nested block.\n\nDeletion of attribute references, subscriptions and slicings is passed\nto the primary object involved; deletion of a slicing is in general\nequivalent to assignment of an empty slice of the right type (but even\nthis is determined by the sliced object).\n', 'dict': u'\nDictionary displays\n*******************\n\nA dictionary display is a possibly empty series of key/datum pairs\nenclosed in curly braces:\n\n dict_display ::= "{" [key_datum_list | dict_comprehension] "}"\n key_datum_list ::= key_datum ("," key_datum)* [","]\n key_datum ::= expression ":" expression\n dict_comprehension ::= expression ":" expression comp_for\n\nA dictionary display yields a new dictionary object.\n\nIf a comma-separated sequence of key/datum pairs is given, they are\nevaluated from left to right to define the entries of the dictionary:\neach key object is used as a key into the dictionary to store the\ncorresponding datum. This means that you can specify the same key\nmultiple times in the key/datum list, and the final dictionary\'s value\nfor that key will be the last one given.\n\nA dict comprehension, in contrast to list and set comprehensions,\nneeds two expressions separated with a colon followed by the usual\n"for" and "if" clauses. When the comprehension is run, the resulting\nkey and value elements are inserted in the new dictionary in the order\nthey are produced.\n\nRestrictions on the types of the key values are listed earlier in\nsection *The standard type hierarchy*. (To summarize, the key type\nshould be *hashable*, which excludes all mutable objects.) Clashes\nbetween duplicate keys are not detected; the last datum (textually\nrightmost in the display) stored for a given key value prevails.\n', 'dynamic-features': u'\nInteraction with dynamic features\n*********************************\n\nThere are several cases where Python statements are illegal when used\nin conjunction with nested scopes that contain free variables.\n\nIf a variable is referenced in an enclosing scope, it is illegal to\ndelete the name. An error will be reported at compile time.\n\nIf the wild card form of import --- ``import *`` --- is used in a\nfunction and the function contains or is a nested block with free\nvariables, the compiler will raise a ``SyntaxError``.\n\nIf ``exec`` is used in a function and the function contains or is a\nnested block with free variables, the compiler will raise a\n``SyntaxError`` unless the exec explicitly specifies the local\nnamespace for the ``exec``. (In other words, ``exec obj`` would be\nillegal, but ``exec obj in ns`` would be legal.)\n\nThe ``eval()``, ``execfile()``, and ``input()`` functions and the\n``exec`` statement do not have access to the full environment for\nresolving names. Names may be resolved in the local and global\nnamespaces of the caller. Free variables are not resolved in the\nnearest enclosing namespace, but in the global namespace. 
[1] The\n``exec`` statement and the ``eval()`` and ``execfile()`` functions\nhave optional arguments to override the global and local namespace.\nIf only one namespace is specified, it is used for both.\n', 'else': u'\nThe ``if`` statement\n********************\n\nThe ``if`` statement is used for conditional execution:\n\n if_stmt ::= "if" expression ":" suite\n ( "elif" expression ":" suite )*\n ["else" ":" suite]\n\nIt selects exactly one of the suites by evaluating the expressions one\nby one until one is found to be true (see section *Boolean operations*\nfor the definition of true and false); then that suite is executed\n(and no other part of the ``if`` statement is executed or evaluated).\nIf all expressions are false, the suite of the ``else`` clause, if\npresent, is executed.\n', 'exceptions': u'\nExceptions\n**********\n\nExceptions are a means of breaking out of the normal flow of control\nof a code block in order to handle errors or other exceptional\nconditions. An exception is *raised* at the point where the error is\ndetected; it may be *handled* by the surrounding code block or by any\ncode block that directly or indirectly invoked the code block where\nthe error occurred.\n\nThe Python interpreter raises an exception when it detects a run-time\nerror (such as division by zero). A Python program can also\nexplicitly raise an exception with the ``raise`` statement. Exception\nhandlers are specified with the ``try`` ... ``except`` statement. The\n``finally`` clause of such a statement can be used to specify cleanup\ncode which does not handle the exception, but is executed whether an\nexception occurred or not in the preceding code.\n\nPython uses the "termination" model of error handling: an exception\nhandler can find out what happened and continue execution at an outer\nlevel, but it cannot repair the cause of the error and retry the\nfailing operation (except by re-entering the offending piece of code\nfrom the top).\n\nWhen an exception is not handled at all, the interpreter terminates\nexecution of the program, or returns to its interactive main loop. In\neither case, it prints a stack backtrace, except when the exception is\n``SystemExit``.\n\nExceptions are identified by class instances. The ``except`` clause\nis selected depending on the class of the instance: it must reference\nthe class of the instance or a base class thereof. The instance can\nbe received by the handler and can carry additional information about\nthe exceptional condition.\n\nExceptions can also be identified by strings, in which case the\n``except`` clause is selected by object identity. An arbitrary value\ncan be raised along with the identifying string which can be passed to\nthe handler.\n\nNote: Messages to exceptions are not part of the Python API. Their\n contents may change from one version of Python to the next without\n warning and should not be relied on by code which will run under\n multiple versions of the interpreter.\n\nSee also the description of the ``try`` statement in section *The try\nstatement* and ``raise`` statement in section *The raise statement*.\n\n-[ Footnotes ]-\n\n[1] This limitation occurs because the code that is executed by these\n operations is not available at the time the module is compiled.\n', 'exec': u'\nThe ``exec`` statement\n**********************\n\n exec_stmt ::= "exec" or_expr ["in" expression ["," expression]]\n\nThis statement supports dynamic execution of Python code. 
The first\nexpression should evaluate to either a string, an open file object, or\na code object. If it is a string, the string is parsed as a suite of\nPython statements which is then executed (unless a syntax error\noccurs). [1] If it is an open file, the file is parsed until EOF and\nexecuted. If it is a code object, it is simply executed. In all\ncases, the code that\'s executed is expected to be valid as file input\n(see section *File input*). Be aware that the ``return`` and\n``yield`` statements may not be used outside of function definitions\neven within the context of code passed to the ``exec`` statement.\n\nIn all cases, if the optional parts are omitted, the code is executed\nin the current scope. If only the first expression after ``in`` is\nspecified, it should be a dictionary, which will be used for both the\nglobal and the local variables. If two expressions are given, they\nare used for the global and local variables, respectively. If\nprovided, *locals* can be any mapping object.\n\nChanged in version 2.4: Formerly, *locals* was required to be a\ndictionary.\n\nAs a side effect, an implementation may insert additional keys into\nthe dictionaries given besides those corresponding to variable names\nset by the executed code. For example, the current implementation may\nadd a reference to the dictionary of the built-in module\n``__builtin__`` under the key ``__builtins__`` (!).\n\n**Programmer\'s hints:** dynamic evaluation of expressions is supported\nby the built-in function ``eval()``. The built-in functions\n``globals()`` and ``locals()`` return the current global and local\ndictionary, respectively, which may be useful to pass around for use\nby ``exec``.\n\n-[ Footnotes ]-\n\n[1] Note that the parser only accepts the Unix-style end of line\n convention. If you are reading the code from a file, make sure to\n use universal newline mode to convert Windows or Mac-style\n newlines.\n', - 'execmodel': u'\nExecution model\n***************\n\n\nNaming and binding\n==================\n\n*Names* refer to objects. Names are introduced by name binding\noperations. Each occurrence of a name in the program text refers to\nthe *binding* of that name established in the innermost function block\ncontaining the use.\n\nA *block* is a piece of Python program text that is executed as a\nunit. The following are blocks: a module, a function body, and a class\ndefinition. Each command typed interactively is a block. A script\nfile (a file given as standard input to the interpreter or specified\non the interpreter command line the first argument) is a code block.\nA script command (a command specified on the interpreter command line\nwith the \'**-c**\' option) is a code block. The file read by the\nbuilt-in function ``execfile()`` is a code block. The string argument\npassed to the built-in function ``eval()`` and to the ``exec``\nstatement is a code block. The expression read and evaluated by the\nbuilt-in function ``input()`` is a code block.\n\nA code block is executed in an *execution frame*. A frame contains\nsome administrative information (used for debugging) and determines\nwhere and how execution continues after the code block\'s execution has\ncompleted.\n\nA *scope* defines the visibility of a name within a block. If a local\nvariable is defined in a block, its scope includes that block. If the\ndefinition occurs in a function block, the scope extends to any blocks\ncontained within the defining one, unless a contained block introduces\na different binding for the name. 
The scope of names defined in a\nclass block is limited to the class block; it does not extend to the\ncode blocks of methods -- this includes generator expressions since\nthey are implemented using a function scope. This means that the\nfollowing will fail:\n\n class A:\n a = 42\n b = list(a + i for i in range(10))\n\nWhen a name is used in a code block, it is resolved using the nearest\nenclosing scope. The set of all such scopes visible to a code block\nis called the block\'s *environment*.\n\nIf a name is bound in a block, it is a local variable of that block.\nIf a name is bound at the module level, it is a global variable. (The\nvariables of the module code block are local and global.) If a\nvariable is used in a code block but not defined there, it is a *free\nvariable*.\n\nWhen a name is not found at all, a ``NameError`` exception is raised.\nIf the name refers to a local variable that has not been bound, a\n``UnboundLocalError`` exception is raised. ``UnboundLocalError`` is a\nsubclass of ``NameError``.\n\nThe following constructs bind names: formal parameters to functions,\n``import`` statements, class and function definitions (these bind the\nclass or function name in the defining block), and targets that are\nidentifiers if occurring in an assignment, ``for`` loop header, in the\nsecond position of an ``except`` clause header or after ``as`` in a\n``with`` statement. The ``import`` statement of the form ``from ...\nimport *`` binds all names defined in the imported module, except\nthose beginning with an underscore. This form may only be used at the\nmodule level.\n\nA target occurring in a ``del`` statement is also considered bound for\nthis purpose (though the actual semantics are to unbind the name). It\nis illegal to unbind a name that is referenced by an enclosing scope;\nthe compiler will report a ``SyntaxError``.\n\nEach assignment or import statement occurs within a block defined by a\nclass or function definition or at the module level (the top-level\ncode block).\n\nIf a name binding operation occurs anywhere within a code block, all\nuses of the name within the block are treated as references to the\ncurrent block. This can lead to errors when a name is used within a\nblock before it is bound. This rule is subtle. Python lacks\ndeclarations and allows name binding operations to occur anywhere\nwithin a code block. The local variables of a code block can be\ndetermined by scanning the entire text of the block for name binding\noperations.\n\nIf the global statement occurs within a block, all uses of the name\nspecified in the statement refer to the binding of that name in the\ntop-level namespace. Names are resolved in the top-level namespace by\nsearching the global namespace, i.e. the namespace of the module\ncontaining the code block, and the builtins namespace, the namespace\nof the module ``__builtin__``. The global namespace is searched\nfirst. If the name is not found there, the builtins namespace is\nsearched. The global statement must precede all uses of the name.\n\nThe builtins namespace associated with the execution of a code block\nis actually found by looking up the name ``__builtins__`` in its\nglobal namespace; this should be a dictionary or a module (in the\nlatter case the module\'s dictionary is used). By default, when in the\n``__main__`` module, ``__builtins__`` is the built-in module\n``__builtin__`` (note: no \'s\'); when in any other module,\n``__builtins__`` is an alias for the dictionary of the ``__builtin__``\nmodule itself. 
``__builtins__`` can be set to a user-created\ndictionary to create a weak form of restricted execution.\n\n**CPython implementation detail:** Users should not touch\n``__builtins__``; it is strictly an implementation detail. Users\nwanting to override values in the builtins namespace should ``import``\nthe ``__builtin__`` (no \'s\') module and modify its attributes\nappropriately.\n\nThe namespace for a module is automatically created the first time a\nmodule is imported. The main module for a script is always called\n``__main__``.\n\nThe global statement has the same scope as a name binding operation in\nthe same block. If the nearest enclosing scope for a free variable\ncontains a global statement, the free variable is treated as a global.\n\nA class definition is an executable statement that may use and define\nnames. These references follow the normal rules for name resolution.\nThe namespace of the class definition becomes the attribute dictionary\nof the class. Names defined at the class scope are not visible in\nmethods.\n\n\nInteraction with dynamic features\n---------------------------------\n\nThere are several cases where Python statements are illegal when used\nin conjunction with nested scopes that contain free variables.\n\nIf a variable is referenced in an enclosing scope, it is illegal to\ndelete the name. An error will be reported at compile time.\n\nIf the wild card form of import --- ``import *`` --- is used in a\nfunction and the function contains or is a nested block with free\nvariables, the compiler will raise a ``SyntaxError``.\n\nIf ``exec`` is used in a function and the function contains or is a\nnested block with free variables, the compiler will raise a\n``SyntaxError`` unless the exec explicitly specifies the local\nnamespace for the ``exec``. (In other words, ``exec obj`` would be\nillegal, but ``exec obj in ns`` would be legal.)\n\nThe ``eval()``, ``execfile()``, and ``input()`` functions and the\n``exec`` statement do not have access to the full environment for\nresolving names. Names may be resolved in the local and global\nnamespaces of the caller. Free variables are not resolved in the\nnearest enclosing namespace, but in the global namespace. [1] The\n``exec`` statement and the ``eval()`` and ``execfile()`` functions\nhave optional arguments to override the global and local namespace.\nIf only one namespace is specified, it is used for both.\n\n\nExceptions\n==========\n\nExceptions are a means of breaking out of the normal flow of control\nof a code block in order to handle errors or other exceptional\nconditions. An exception is *raised* at the point where the error is\ndetected; it may be *handled* by the surrounding code block or by any\ncode block that directly or indirectly invoked the code block where\nthe error occurred.\n\nThe Python interpreter raises an exception when it detects a run-time\nerror (such as division by zero). A Python program can also\nexplicitly raise an exception with the ``raise`` statement. Exception\nhandlers are specified with the ``try`` ... ``except`` statement. 
The\n``finally`` clause of such a statement can be used to specify cleanup\ncode which does not handle the exception, but is executed whether an\nexception occurred or not in the preceding code.\n\nPython uses the "termination" model of error handling: an exception\nhandler can find out what happened and continue execution at an outer\nlevel, but it cannot repair the cause of the error and retry the\nfailing operation (except by re-entering the offending piece of code\nfrom the top).\n\nWhen an exception is not handled at all, the interpreter terminates\nexecution of the program, or returns to its interactive main loop. In\neither case, it prints a stack backtrace, except when the exception is\n``SystemExit``.\n\nExceptions are identified by class instances. The ``except`` clause\nis selected depending on the class of the instance: it must reference\nthe class of the instance or a base class thereof. The instance can\nbe received by the handler and can carry additional information about\nthe exceptional condition.\n\nExceptions can also be identified by strings, in which case the\n``except`` clause is selected by object identity. An arbitrary value\ncan be raised along with the identifying string which can be passed to\nthe handler.\n\nNote: Messages to exceptions are not part of the Python API. Their\n contents may change from one version of Python to the next without\n warning and should not be relied on by code which will run under\n multiple versions of the interpreter.\n\nSee also the description of the ``try`` statement in section *The try\nstatement* and ``raise`` statement in section *The raise statement*.\n\n-[ Footnotes ]-\n\n[1] This limitation occurs because the code that is executed by these\n operations is not available at the time the module is compiled.\n', + 'execmodel': u'\nExecution model\n***************\n\n\nNaming and binding\n==================\n\n*Names* refer to objects. Names are introduced by name binding\noperations. Each occurrence of a name in the program text refers to\nthe *binding* of that name established in the innermost function block\ncontaining the use.\n\nA *block* is a piece of Python program text that is executed as a\nunit. The following are blocks: a module, a function body, and a class\ndefinition. Each command typed interactively is a block. A script\nfile (a file given as standard input to the interpreter or specified\non the interpreter command line the first argument) is a code block.\nA script command (a command specified on the interpreter command line\nwith the \'**-c**\' option) is a code block. The file read by the\nbuilt-in function ``execfile()`` is a code block. The string argument\npassed to the built-in function ``eval()`` and to the ``exec``\nstatement is a code block. The expression read and evaluated by the\nbuilt-in function ``input()`` is a code block.\n\nA code block is executed in an *execution frame*. A frame contains\nsome administrative information (used for debugging) and determines\nwhere and how execution continues after the code block\'s execution has\ncompleted.\n\nA *scope* defines the visibility of a name within a block. If a local\nvariable is defined in a block, its scope includes that block. If the\ndefinition occurs in a function block, the scope extends to any blocks\ncontained within the defining one, unless a contained block introduces\na different binding for the name. 
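A small sketch of the rule just stated (the function and variable names are
invented): a contained block sees the enclosing function's binding unless it
introduces one of its own.

   def outer():
       x = "enclosing"
       def reads_x():
           return x           # no local binding here: outer's 'x' is visible
       def rebinds_x():
           x = "local"        # a new binding: outer's 'x' is no longer visible
           return x
       return reads_x(), rebinds_x()

   print outer()              # ('enclosing', 'local')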
The scope of names defined in a\nclass block is limited to the class block; it does not extend to the\ncode blocks of methods -- this includes generator expressions since\nthey are implemented using a function scope. This means that the\nfollowing will fail:\n\n class A:\n a = 42\n b = list(a + i for i in range(10))\n\nWhen a name is used in a code block, it is resolved using the nearest\nenclosing scope. The set of all such scopes visible to a code block\nis called the block\'s *environment*.\n\nIf a name is bound in a block, it is a local variable of that block.\nIf a name is bound at the module level, it is a global variable. (The\nvariables of the module code block are local and global.) If a\nvariable is used in a code block but not defined there, it is a *free\nvariable*.\n\nWhen a name is not found at all, a ``NameError`` exception is raised.\nIf the name refers to a local variable that has not been bound, a\n``UnboundLocalError`` exception is raised. ``UnboundLocalError`` is a\nsubclass of ``NameError``.\n\nThe following constructs bind names: formal parameters to functions,\n``import`` statements, class and function definitions (these bind the\nclass or function name in the defining block), and targets that are\nidentifiers if occurring in an assignment, ``for`` loop header, in the\nsecond position of an ``except`` clause header or after ``as`` in a\n``with`` statement. The ``import`` statement of the form ``from ...\nimport *`` binds all names defined in the imported module, except\nthose beginning with an underscore. This form may only be used at the\nmodule level.\n\nA target occurring in a ``del`` statement is also considered bound for\nthis purpose (though the actual semantics are to unbind the name). It\nis illegal to unbind a name that is referenced by an enclosing scope;\nthe compiler will report a ``SyntaxError``.\n\nEach assignment or import statement occurs within a block defined by a\nclass or function definition or at the module level (the top-level\ncode block).\n\nIf a name binding operation occurs anywhere within a code block, all\nuses of the name within the block are treated as references to the\ncurrent block. This can lead to errors when a name is used within a\nblock before it is bound. This rule is subtle. Python lacks\ndeclarations and allows name binding operations to occur anywhere\nwithin a code block. The local variables of a code block can be\ndetermined by scanning the entire text of the block for name binding\noperations.\n\nIf the global statement occurs within a block, all uses of the name\nspecified in the statement refer to the binding of that name in the\ntop-level namespace. Names are resolved in the top-level namespace by\nsearching the global namespace, i.e. the namespace of the module\ncontaining the code block, and the builtins namespace, the namespace\nof the module ``__builtin__``. The global namespace is searched\nfirst. If the name is not found there, the builtins namespace is\nsearched. The global statement must precede all uses of the name.\n\nThe builtins namespace associated with the execution of a code block\nis actually found by looking up the name ``__builtins__`` in its\nglobal namespace; this should be a dictionary or a module (in the\nlatter case the module\'s dictionary is used). By default, when in the\n``__main__`` module, ``__builtins__`` is the built-in module\n``__builtin__`` (note: no \'s\'); when in any other module,\n``__builtins__`` is an alias for the dictionary of the ``__builtin__``\nmodule itself. 
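An interactive sketch of the default just described (``os`` is used merely as
an example of an ordinary imported module):

   >>> __builtins__                  # in the interactive (__main__) module
   <module '__builtin__' (built-in)>
   >>> import os
   >>> type(os.__builtins__)         # in other modules: the module's dictionary
   <type 'dict'>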
``__builtins__`` can be set to a user-created\ndictionary to create a weak form of restricted execution.\n\n**CPython implementation detail:** Users should not touch\n``__builtins__``; it is strictly an implementation detail. Users\nwanting to override values in the builtins namespace should ``import``\nthe ``__builtin__`` (no \'s\') module and modify its attributes\nappropriately.\n\nThe namespace for a module is automatically created the first time a\nmodule is imported. The main module for a script is always called\n``__main__``.\n\nThe ``global`` statement has the same scope as a name binding\noperation in the same block. If the nearest enclosing scope for a\nfree variable contains a global statement, the free variable is\ntreated as a global.\n\nA class definition is an executable statement that may use and define\nnames. These references follow the normal rules for name resolution.\nThe namespace of the class definition becomes the attribute dictionary\nof the class. Names defined at the class scope are not visible in\nmethods.\n\n\nInteraction with dynamic features\n---------------------------------\n\nThere are several cases where Python statements are illegal when used\nin conjunction with nested scopes that contain free variables.\n\nIf a variable is referenced in an enclosing scope, it is illegal to\ndelete the name. An error will be reported at compile time.\n\nIf the wild card form of import --- ``import *`` --- is used in a\nfunction and the function contains or is a nested block with free\nvariables, the compiler will raise a ``SyntaxError``.\n\nIf ``exec`` is used in a function and the function contains or is a\nnested block with free variables, the compiler will raise a\n``SyntaxError`` unless the exec explicitly specifies the local\nnamespace for the ``exec``. (In other words, ``exec obj`` would be\nillegal, but ``exec obj in ns`` would be legal.)\n\nThe ``eval()``, ``execfile()``, and ``input()`` functions and the\n``exec`` statement do not have access to the full environment for\nresolving names. Names may be resolved in the local and global\nnamespaces of the caller. Free variables are not resolved in the\nnearest enclosing namespace, but in the global namespace. [1] The\n``exec`` statement and the ``eval()`` and ``execfile()`` functions\nhave optional arguments to override the global and local namespace.\nIf only one namespace is specified, it is used for both.\n\n\nExceptions\n==========\n\nExceptions are a means of breaking out of the normal flow of control\nof a code block in order to handle errors or other exceptional\nconditions. An exception is *raised* at the point where the error is\ndetected; it may be *handled* by the surrounding code block or by any\ncode block that directly or indirectly invoked the code block where\nthe error occurred.\n\nThe Python interpreter raises an exception when it detects a run-time\nerror (such as division by zero). A Python program can also\nexplicitly raise an exception with the ``raise`` statement. Exception\nhandlers are specified with the ``try`` ... ``except`` statement. 
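For instance (a minimal sketch; the handled error is arbitrary):

   try:
       result = 1 / 0
   except ZeroDivisionError as exc:   # selected: the instance matches this class
       print "handled:", exc
       result = None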
The\n``finally`` clause of such a statement can be used to specify cleanup\ncode which does not handle the exception, but is executed whether an\nexception occurred or not in the preceding code.\n\nPython uses the "termination" model of error handling: an exception\nhandler can find out what happened and continue execution at an outer\nlevel, but it cannot repair the cause of the error and retry the\nfailing operation (except by re-entering the offending piece of code\nfrom the top).\n\nWhen an exception is not handled at all, the interpreter terminates\nexecution of the program, or returns to its interactive main loop. In\neither case, it prints a stack backtrace, except when the exception is\n``SystemExit``.\n\nExceptions are identified by class instances. The ``except`` clause\nis selected depending on the class of the instance: it must reference\nthe class of the instance or a base class thereof. The instance can\nbe received by the handler and can carry additional information about\nthe exceptional condition.\n\nExceptions can also be identified by strings, in which case the\n``except`` clause is selected by object identity. An arbitrary value\ncan be raised along with the identifying string which can be passed to\nthe handler.\n\nNote: Messages to exceptions are not part of the Python API. Their\n contents may change from one version of Python to the next without\n warning and should not be relied on by code which will run under\n multiple versions of the interpreter.\n\nSee also the description of the ``try`` statement in section *The try\nstatement* and ``raise`` statement in section *The raise statement*.\n\n-[ Footnotes ]-\n\n[1] This limitation occurs because the code that is executed by these\n operations is not available at the time the module is compiled.\n', 'exprlists': u'\nExpression lists\n****************\n\n expression_list ::= expression ( "," expression )* [","]\n\nAn expression list containing at least one comma yields a tuple. The\nlength of the tuple is the number of expressions in the list. The\nexpressions are evaluated from left to right.\n\nThe trailing comma is required only to create a single tuple (a.k.a. a\n*singleton*); it is optional in all other cases. A single expression\nwithout a trailing comma doesn\'t create a tuple, but rather yields the\nvalue of that expression. (To create an empty tuple, use an empty pair\nof parentheses: ``()``.)\n', 'floating': u'\nFloating point literals\n***********************\n\nFloating point literals are described by the following lexical\ndefinitions:\n\n floatnumber ::= pointfloat | exponentfloat\n pointfloat ::= [intpart] fraction | intpart "."\n exponentfloat ::= (intpart | pointfloat) exponent\n intpart ::= digit+\n fraction ::= "." digit+\n exponent ::= ("e" | "E") ["+" | "-"] digit+\n\nNote that the integer and exponent parts of floating point numbers can\nlook like octal integers, but are interpreted using radix 10. For\nexample, ``077e010`` is legal, and denotes the same number as\n``77e10``. The allowed range of floating point literals is\nimplementation-dependent. Some examples of floating point literals:\n\n 3.14 10. 
.001 1e100 3.14e-10 0e0\n\nNote that numeric literals do not include a sign; a phrase like ``-1``\nis actually an expression composed of the unary operator ``-`` and the\nliteral ``1``.\n', 'for': u'\nThe ``for`` statement\n*********************\n\nThe ``for`` statement is used to iterate over the elements of a\nsequence (such as a string, tuple or list) or other iterable object:\n\n for_stmt ::= "for" target_list "in" expression_list ":" suite\n ["else" ":" suite]\n\nThe expression list is evaluated once; it should yield an iterable\nobject. An iterator is created for the result of the\n``expression_list``. The suite is then executed once for each item\nprovided by the iterator, in the order of ascending indices. Each\nitem in turn is assigned to the target list using the standard rules\nfor assignments, and then the suite is executed. When the items are\nexhausted (which is immediately when the sequence is empty), the suite\nin the ``else`` clause, if present, is executed, and the loop\nterminates.\n\nA ``break`` statement executed in the first suite terminates the loop\nwithout executing the ``else`` clause\'s suite. A ``continue``\nstatement executed in the first suite skips the rest of the suite and\ncontinues with the next item, or with the ``else`` clause if there was\nno next item.\n\nThe suite may assign to the variable(s) in the target list; this does\nnot affect the next item assigned to it.\n\nThe target list is not deleted when the loop is finished, but if the\nsequence is empty, it will not have been assigned to at all by the\nloop. Hint: the built-in function ``range()`` returns a sequence of\nintegers suitable to emulate the effect of Pascal\'s ``for i := a to b\ndo``; e.g., ``range(3)`` returns the list ``[0, 1, 2]``.\n\nNote: There is a subtlety when the sequence is being modified by the loop\n (this can only occur for mutable sequences, i.e. lists). An internal\n counter is used to keep track of which item is used next, and this\n is incremented on each iteration. When this counter has reached the\n length of the sequence the loop terminates. This means that if the\n suite deletes the current (or a previous) item from the sequence,\n the next item will be skipped (since it gets the index of the\n current item which has already been treated). Likewise, if the\n suite inserts an item in the sequence before the current item, the\n current item will be treated again the next time through the loop.\n This can lead to nasty bugs that can be avoided by making a\n temporary copy using a slice of the whole sequence, e.g.,\n\n for x in a[:]:\n if x < 0: a.remove(x)\n', - 'formatstrings': u'\nFormat String Syntax\n********************\n\nThe ``str.format()`` method and the ``Formatter`` class share the same\nsyntax for format strings (although in the case of ``Formatter``,\nsubclasses can define their own format string syntax).\n\nFormat strings contain "replacement fields" surrounded by curly braces\n``{}``. Anything that is not contained in braces is considered literal\ntext, which is copied unchanged to the output. If you need to include\na brace character in the literal text, it can be escaped by doubling:\n``{{`` and ``}}``.\n\nThe grammar for a replacement field is as follows:\n\n replacement_field ::= "{" [field_name] ["!" conversion] [":" format_spec] "}"\n field_name ::= arg_name ("." 
attribute_name | "[" element_index "]")*\n arg_name ::= [identifier | integer]\n attribute_name ::= identifier\n element_index ::= integer | index_string\n index_string ::= +\n conversion ::= "r" | "s"\n format_spec ::= \n\nIn less formal terms, the replacement field can start with a\n*field_name* that specifies the object whose value is to be formatted\nand inserted into the output instead of the replacement field. The\n*field_name* is optionally followed by a *conversion* field, which is\npreceded by an exclamation point ``\'!\'``, and a *format_spec*, which\nis preceded by a colon ``\':\'``. These specify a non-default format\nfor the replacement value.\n\nSee also the *Format Specification Mini-Language* section.\n\nThe *field_name* itself begins with an *arg_name* that is either\neither a number or a keyword. If it\'s a number, it refers to a\npositional argument, and if it\'s a keyword, it refers to a named\nkeyword argument. If the numerical arg_names in a format string are\n0, 1, 2, ... in sequence, they can all be omitted (not just some) and\nthe numbers 0, 1, 2, ... will be automatically inserted in that order.\nThe *arg_name* can be followed by any number of index or attribute\nexpressions. An expression of the form ``\'.name\'`` selects the named\nattribute using ``getattr()``, while an expression of the form\n``\'[index]\'`` does an index lookup using ``__getitem__()``.\n\nChanged in version 2.7: The positional argument specifiers can be\nomitted, so ``\'{} {}\'`` is equivalent to ``\'{0} {1}\'``.\n\nSome simple format string examples:\n\n "First, thou shalt count to {0}" # References first positional argument\n "Bring me a {}" # Implicitly references the first positional argument\n "From {} to {}" # Same as "From {0} to {1}"\n "My quest is {name}" # References keyword argument \'name\'\n "Weight in tons {0.weight}" # \'weight\' attribute of first positional arg\n "Units destroyed: {players[0]}" # First element of keyword argument \'players\'.\n\nThe *conversion* field causes a type coercion before formatting.\nNormally, the job of formatting a value is done by the\n``__format__()`` method of the value itself. However, in some cases\nit is desirable to force a type to be formatted as a string,\noverriding its own definition of formatting. By converting the value\nto a string before calling ``__format__()``, the normal formatting\nlogic is bypassed.\n\nTwo conversion flags are currently supported: ``\'!s\'`` which calls\n``str()`` on the value, and ``\'!r\'`` which calls ``repr()``.\n\nSome examples:\n\n "Harold\'s a clever {0!s}" # Calls str() on the argument first\n "Bring out the holy {name!r}" # Calls repr() on the argument first\n\nThe *format_spec* field contains a specification of how the value\nshould be presented, including such details as field width, alignment,\npadding, decimal precision and so on. Each value type can define its\nown "formatting mini-language" or interpretation of the *format_spec*.\n\nMost built-in types support a common formatting mini-language, which\nis described in the next section.\n\nA *format_spec* field can also include nested replacement fields\nwithin it. These nested replacement fields can contain only a field\nname; conversion flags and format specifications are not allowed. The\nreplacement fields within the format_spec are substituted before the\n*format_spec* string is interpreted. 
This allows the formatting of a\nvalue to be dynamically specified.\n\nSee the *Format examples* section for some examples.\n\n\nFormat Specification Mini-Language\n==================================\n\n"Format specifications" are used within replacement fields contained\nwithin a format string to define how individual values are presented\n(see *Format String Syntax*). They can also be passed directly to the\nbuilt-in ``format()`` function. Each formattable type may define how\nthe format specification is to be interpreted.\n\nMost built-in types implement the following options for format\nspecifications, although some of the formatting options are only\nsupported by the numeric types.\n\nA general convention is that an empty format string (``""``) produces\nthe same result as if you had called ``str()`` on the value. A non-\nempty format string typically modifies the result.\n\nThe general form of a *standard format specifier* is:\n\n format_spec ::= [[fill]align][sign][#][0][width][,][.precision][type]\n fill ::=
\n align ::= "<" | ">" | "=" | "^"\n sign ::= "+" | "-" | " "\n width ::= integer\n precision ::= integer\n type ::= "b" | "c" | "d" | "e" | "E" | "f" | "F" | "g" | "G" | "n" | "o" | "s" | "x" | "X" | "%"\n\nThe *fill* character can be any character other than \'}\' (which\nsignifies the end of the field). The presence of a fill character is\nsignaled by the *next* character, which must be one of the alignment\noptions. If the second character of *format_spec* is not a valid\nalignment option, then it is assumed that both the fill character and\nthe alignment option are absent.\n\nThe meaning of the various alignment options is as follows:\n\n +-----------+------------------------------------------------------------+\n | Option | Meaning |\n +===========+============================================================+\n | ``\'<\'`` | Forces the field to be left-aligned within the available |\n | | space (this is the default). |\n +-----------+------------------------------------------------------------+\n | ``\'>\'`` | Forces the field to be right-aligned within the available |\n | | space. |\n +-----------+------------------------------------------------------------+\n | ``\'=\'`` | Forces the padding to be placed after the sign (if any) |\n | | but before the digits. This is used for printing fields |\n | | in the form \'+000000120\'. This alignment option is only |\n | | valid for numeric types. |\n +-----------+------------------------------------------------------------+\n | ``\'^\'`` | Forces the field to be centered within the available |\n | | space. |\n +-----------+------------------------------------------------------------+\n\nNote that unless a minimum field width is defined, the field width\nwill always be the same size as the data to fill it, so that the\nalignment option has no meaning in this case.\n\nThe *sign* option is only valid for number types, and can be one of\nthe following:\n\n +-----------+------------------------------------------------------------+\n | Option | Meaning |\n +===========+============================================================+\n | ``\'+\'`` | indicates that a sign should be used for both positive as |\n | | well as negative numbers. |\n +-----------+------------------------------------------------------------+\n | ``\'-\'`` | indicates that a sign should be used only for negative |\n | | numbers (this is the default behavior). |\n +-----------+------------------------------------------------------------+\n | space | indicates that a leading space should be used on positive |\n | | numbers, and a minus sign on negative numbers. |\n +-----------+------------------------------------------------------------+\n\nThe ``\'#\'`` option is only valid for integers, and only for binary,\noctal, or hexadecimal output. If present, it specifies that the\noutput will be prefixed by ``\'0b\'``, ``\'0o\'``, or ``\'0x\'``,\nrespectively.\n\nThe ``\',\'`` option signals the use of a comma for a thousands\nseparator. For a locale aware separator, use the ``\'n\'`` integer\npresentation type instead.\n\nChanged in version 2.7: Added the ``\',\'`` option (see also **PEP\n378**).\n\n*width* is a decimal integer defining the minimum field width. If not\nspecified, then the field width will be determined by the content.\n\nIf the *width* field is preceded by a zero (``\'0\'``) character, this\nenables zero-padding. 
This is equivalent to an *alignment* type of\n``\'=\'`` and a *fill* character of ``\'0\'``.\n\nThe *precision* is a decimal number indicating how many digits should\nbe displayed after the decimal point for a floating point value\nformatted with ``\'f\'`` and ``\'F\'``, or before and after the decimal\npoint for a floating point value formatted with ``\'g\'`` or ``\'G\'``.\nFor non-number types the field indicates the maximum field size - in\nother words, how many characters will be used from the field content.\nThe *precision* is not allowed for integer values.\n\nFinally, the *type* determines how the data should be presented.\n\nThe available string presentation types are:\n\n +-----------+------------------------------------------------------------+\n | Type | Meaning |\n +===========+============================================================+\n | ``\'s\'`` | String format. This is the default type for strings and |\n | | may be omitted. |\n +-----------+------------------------------------------------------------+\n | None | The same as ``\'s\'``. |\n +-----------+------------------------------------------------------------+\n\nThe available integer presentation types are:\n\n +-----------+------------------------------------------------------------+\n | Type | Meaning |\n +===========+============================================================+\n | ``\'b\'`` | Binary format. Outputs the number in base 2. |\n +-----------+------------------------------------------------------------+\n | ``\'c\'`` | Character. Converts the integer to the corresponding |\n | | unicode character before printing. |\n +-----------+------------------------------------------------------------+\n | ``\'d\'`` | Decimal Integer. Outputs the number in base 10. |\n +-----------+------------------------------------------------------------+\n | ``\'o\'`` | Octal format. Outputs the number in base 8. |\n +-----------+------------------------------------------------------------+\n | ``\'x\'`` | Hex format. Outputs the number in base 16, using lower- |\n | | case letters for the digits above 9. |\n +-----------+------------------------------------------------------------+\n | ``\'X\'`` | Hex format. Outputs the number in base 16, using upper- |\n | | case letters for the digits above 9. |\n +-----------+------------------------------------------------------------+\n | ``\'n\'`` | Number. This is the same as ``\'d\'``, except that it uses |\n | | the current locale setting to insert the appropriate |\n | | number separator characters. |\n +-----------+------------------------------------------------------------+\n | None | The same as ``\'d\'``. |\n +-----------+------------------------------------------------------------+\n\nIn addition to the above presentation types, integers can be formatted\nwith the floating point presentation types listed below (except\n``\'n\'`` and None). When doing so, ``float()`` is used to convert the\ninteger to a floating point number before formatting.\n\nThe available presentation types for floating point and decimal values\nare:\n\n +-----------+------------------------------------------------------------+\n | Type | Meaning |\n +===========+============================================================+\n | ``\'e\'`` | Exponent notation. Prints the number in scientific |\n | | notation using the letter \'e\' to indicate the exponent. |\n +-----------+------------------------------------------------------------+\n | ``\'E\'`` | Exponent notation. 
Same as ``\'e\'`` except it uses an upper |\n | | case \'E\' as the separator character. |\n +-----------+------------------------------------------------------------+\n | ``\'f\'`` | Fixed point. Displays the number as a fixed-point number. |\n +-----------+------------------------------------------------------------+\n | ``\'F\'`` | Fixed point. Same as ``\'f\'``. |\n +-----------+------------------------------------------------------------+\n | ``\'g\'`` | General format. For a given precision ``p >= 1``, this |\n | | rounds the number to ``p`` significant digits and then |\n | | formats the result in either fixed-point format or in |\n | | scientific notation, depending on its magnitude. The |\n | | precise rules are as follows: suppose that the result |\n | | formatted with presentation type ``\'e\'`` and precision |\n | | ``p-1`` would have exponent ``exp``. Then if ``-4 <= exp |\n | | < p``, the number is formatted with presentation type |\n | | ``\'f\'`` and precision ``p-1-exp``. Otherwise, the number |\n | | is formatted with presentation type ``\'e\'`` and precision |\n | | ``p-1``. In both cases insignificant trailing zeros are |\n | | removed from the significand, and the decimal point is |\n | | also removed if there are no remaining digits following |\n | | it. Postive and negative infinity, positive and negative |\n | | zero, and nans, are formatted as ``inf``, ``-inf``, ``0``, |\n | | ``-0`` and ``nan`` respectively, regardless of the |\n | | precision. A precision of ``0`` is treated as equivalent |\n | | to a precision of ``1``. |\n +-----------+------------------------------------------------------------+\n | ``\'G\'`` | General format. Same as ``\'g\'`` except switches to ``\'E\'`` |\n | | if the number gets too large. The representations of |\n | | infinity and NaN are uppercased, too. |\n +-----------+------------------------------------------------------------+\n | ``\'n\'`` | Number. This is the same as ``\'g\'``, except that it uses |\n | | the current locale setting to insert the appropriate |\n | | number separator characters. |\n +-----------+------------------------------------------------------------+\n | ``\'%\'`` | Percentage. Multiplies the number by 100 and displays in |\n | | fixed (``\'f\'``) format, followed by a percent sign. |\n +-----------+------------------------------------------------------------+\n | None | The same as ``\'g\'``. |\n +-----------+------------------------------------------------------------+\n\n\nFormat examples\n===============\n\nThis section contains examples of the new format syntax and comparison\nwith the old ``%``-formatting.\n\nIn most of the cases the syntax is similar to the old\n``%``-formatting, with the addition of the ``{}`` and with ``:`` used\ninstead of ``%``. 
For example, ``\'%03.2f\'`` can be translated to\n``\'{:03.2f}\'``.\n\nThe new format syntax also supports new and different options, shown\nin the follow examples.\n\nAccessing arguments by position:\n\n >>> \'{0}, {1}, {2}\'.format(\'a\', \'b\', \'c\')\n \'a, b, c\'\n >>> \'{}, {}, {}\'.format(\'a\', \'b\', \'c\') # 2.7+ only\n \'a, b, c\'\n >>> \'{2}, {1}, {0}\'.format(\'a\', \'b\', \'c\')\n \'c, b, a\'\n >>> \'{2}, {1}, {0}\'.format(*\'abc\') # unpacking argument sequence\n \'c, b, a\'\n >>> \'{0}{1}{0}\'.format(\'abra\', \'cad\') # arguments\' indices can be repeated\n \'abracadabra\'\n\nAccessing arguments by name:\n\n >>> \'Coordinates: {latitude}, {longitude}\'.format(latitude=\'37.24N\', longitude=\'-115.81W\')\n \'Coordinates: 37.24N, -115.81W\'\n >>> coord = {\'latitude\': \'37.24N\', \'longitude\': \'-115.81W\'}\n >>> \'Coordinates: {latitude}, {longitude}\'.format(**coord)\n \'Coordinates: 37.24N, -115.81W\'\n\nAccessing arguments\' attributes:\n\n >>> c = 3-5j\n >>> (\'The complex number {0} is formed from the real part {0.real} \'\n ... \'and the imaginary part {0.imag}.\').format(c)\n \'The complex number (3-5j) is formed from the real part 3.0 and the imaginary part -5.0.\'\n >>> class Point(object):\n ... def __init__(self, x, y):\n ... self.x, self.y = x, y\n ... def __str__(self):\n ... return \'Point({self.x}, {self.y})\'.format(self=self)\n ...\n >>> str(Point(4, 2))\n \'Point(4, 2)\'\n\nAccessing arguments\' items:\n\n >>> coord = (3, 5)\n >>> \'X: {0[0]}; Y: {0[1]}\'.format(coord)\n \'X: 3; Y: 5\'\n\nReplacing ``%s`` and ``%r``:\n\n >>> "repr() shows quotes: {!r}; str() doesn\'t: {!s}".format(\'test1\', \'test2\')\n "repr() shows quotes: \'test1\'; str() doesn\'t: test2"\n\nAligning the text and specifying a width:\n\n >>> \'{:<30}\'.format(\'left aligned\')\n \'left aligned \'\n >>> \'{:>30}\'.format(\'right aligned\')\n \' right aligned\'\n >>> \'{:^30}\'.format(\'centered\')\n \' centered \'\n >>> \'{:*^30}\'.format(\'centered\') # use \'*\' as a fill char\n \'***********centered***********\'\n\nReplacing ``%+f``, ``%-f``, and ``% f`` and specifying a sign:\n\n >>> \'{:+f}; {:+f}\'.format(3.14, -3.14) # show it always\n \'+3.140000; -3.140000\'\n >>> \'{: f}; {: f}\'.format(3.14, -3.14) # show a space for positive numbers\n \' 3.140000; -3.140000\'\n >>> \'{:-f}; {:-f}\'.format(3.14, -3.14) # show only the minus -- same as \'{:f}; {:f}\'\n \'3.140000; -3.140000\'\n\nReplacing ``%x`` and ``%o`` and converting the value to different\nbases:\n\n >>> # format also supports binary numbers\n >>> "int: {0:d}; hex: {0:x}; oct: {0:o}; bin: {0:b}".format(42)\n \'int: 42; hex: 2a; oct: 52; bin: 101010\'\n >>> # with 0x, 0o, or 0b as prefix:\n >>> "int: {0:d}; hex: {0:#x}; oct: {0:#o}; bin: {0:#b}".format(42)\n \'int: 42; hex: 0x2a; oct: 0o52; bin: 0b101010\'\n\nUsing the comma as a thousands separator:\n\n >>> \'{:,}\'.format(1234567890)\n \'1,234,567,890\'\n\nExpressing a percentage:\n\n >>> points = 19.5\n >>> total = 22\n >>> \'Correct answers: {:.2%}.\'.format(points/total)\n \'Correct answers: 88.64%\'\n\nUsing type-specific formatting:\n\n >>> import datetime\n >>> d = datetime.datetime(2010, 7, 4, 12, 15, 58)\n >>> \'{:%Y-%m-%d %H:%M:%S}\'.format(d)\n \'2010-07-04 12:15:58\'\n\nNesting arguments and more complex examples:\n\n >>> for align, text in zip(\'<^>\', [\'left\', \'center\', \'right\']):\n ... 
\'{0:{align}{fill}16}\'.format(text, fill=align, align=align)\n ...\n \'left<<<<<<<<<<<<\'\n \'^^^^^center^^^^^\'\n \'>>>>>>>>>>>right\'\n >>>\n >>> octets = [192, 168, 0, 1]\n >>> \'{:02X}{:02X}{:02X}{:02X}\'.format(*octets)\n \'C0A80001\'\n >>> int(_, 16)\n 3232235521\n >>>\n >>> width = 5\n >>> for num in range(5,12):\n ... for base in \'dXob\':\n ... print \'{0:{width}{base}}\'.format(num, base=base, width=width),\n ... print\n ...\n 5 5 5 101\n 6 6 6 110\n 7 7 7 111\n 8 8 10 1000\n 9 9 11 1001\n 10 A 12 1010\n 11 B 13 1011\n', + 'formatstrings': u'\nFormat String Syntax\n********************\n\nThe ``str.format()`` method and the ``Formatter`` class share the same\nsyntax for format strings (although in the case of ``Formatter``,\nsubclasses can define their own format string syntax).\n\nFormat strings contain "replacement fields" surrounded by curly braces\n``{}``. Anything that is not contained in braces is considered literal\ntext, which is copied unchanged to the output. If you need to include\na brace character in the literal text, it can be escaped by doubling:\n``{{`` and ``}}``.\n\nThe grammar for a replacement field is as follows:\n\n replacement_field ::= "{" [field_name] ["!" conversion] [":" format_spec] "}"\n field_name ::= arg_name ("." attribute_name | "[" element_index "]")*\n arg_name ::= [identifier | integer]\n attribute_name ::= identifier\n element_index ::= integer | index_string\n index_string ::= +\n conversion ::= "r" | "s"\n format_spec ::= \n\nIn less formal terms, the replacement field can start with a\n*field_name* that specifies the object whose value is to be formatted\nand inserted into the output instead of the replacement field. The\n*field_name* is optionally followed by a *conversion* field, which is\npreceded by an exclamation point ``\'!\'``, and a *format_spec*, which\nis preceded by a colon ``\':\'``. These specify a non-default format\nfor the replacement value.\n\nSee also the *Format Specification Mini-Language* section.\n\nThe *field_name* itself begins with an *arg_name* that is either\neither a number or a keyword. If it\'s a number, it refers to a\npositional argument, and if it\'s a keyword, it refers to a named\nkeyword argument. If the numerical arg_names in a format string are\n0, 1, 2, ... in sequence, they can all be omitted (not just some) and\nthe numbers 0, 1, 2, ... will be automatically inserted in that order.\nThe *arg_name* can be followed by any number of index or attribute\nexpressions. An expression of the form ``\'.name\'`` selects the named\nattribute using ``getattr()``, while an expression of the form\n``\'[index]\'`` does an index lookup using ``__getitem__()``.\n\nChanged in version 2.7: The positional argument specifiers can be\nomitted, so ``\'{} {}\'`` is equivalent to ``\'{0} {1}\'``.\n\nSome simple format string examples:\n\n "First, thou shalt count to {0}" # References first positional argument\n "Bring me a {}" # Implicitly references the first positional argument\n "From {} to {}" # Same as "From {0} to {1}"\n "My quest is {name}" # References keyword argument \'name\'\n "Weight in tons {0.weight}" # \'weight\' attribute of first positional arg\n "Units destroyed: {players[0]}" # First element of keyword argument \'players\'.\n\nThe *conversion* field causes a type coercion before formatting.\nNormally, the job of formatting a value is done by the\n``__format__()`` method of the value itself. 
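As a brief, illustrative sketch (the ``Angle`` class is made up for this
example), a type can take part in formatting by defining ``__format__()``:

   >>> class Angle(object):
   ...     def __init__(self, degrees):
   ...         self.degrees = degrees
   ...     def __format__(self, spec):
   ...         # delegate to the built-in numeric formatting, then tag the unit
   ...         return format(self.degrees, spec) + 'deg'
   ...
   >>> '{:.1f}'.format(Angle(90))
   '90.0deg'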
However, in some cases\nit is desirable to force a type to be formatted as a string,\noverriding its own definition of formatting. By converting the value\nto a string before calling ``__format__()``, the normal formatting\nlogic is bypassed.\n\nTwo conversion flags are currently supported: ``\'!s\'`` which calls\n``str()`` on the value, and ``\'!r\'`` which calls ``repr()``.\n\nSome examples:\n\n "Harold\'s a clever {0!s}" # Calls str() on the argument first\n "Bring out the holy {name!r}" # Calls repr() on the argument first\n\nThe *format_spec* field contains a specification of how the value\nshould be presented, including such details as field width, alignment,\npadding, decimal precision and so on. Each value type can define its\nown "formatting mini-language" or interpretation of the *format_spec*.\n\nMost built-in types support a common formatting mini-language, which\nis described in the next section.\n\nA *format_spec* field can also include nested replacement fields\nwithin it. These nested replacement fields can contain only a field\nname; conversion flags and format specifications are not allowed. The\nreplacement fields within the format_spec are substituted before the\n*format_spec* string is interpreted. This allows the formatting of a\nvalue to be dynamically specified.\n\nSee the *Format examples* section for some examples.\n\n\nFormat Specification Mini-Language\n==================================\n\n"Format specifications" are used within replacement fields contained\nwithin a format string to define how individual values are presented\n(see *Format String Syntax*). They can also be passed directly to the\nbuilt-in ``format()`` function. Each formattable type may define how\nthe format specification is to be interpreted.\n\nMost built-in types implement the following options for format\nspecifications, although some of the formatting options are only\nsupported by the numeric types.\n\nA general convention is that an empty format string (``""``) produces\nthe same result as if you had called ``str()`` on the value. A non-\nempty format string typically modifies the result.\n\nThe general form of a *standard format specifier* is:\n\n format_spec ::= [[fill]align][sign][#][0][width][,][.precision][type]\n fill ::= \n align ::= "<" | ">" | "=" | "^"\n sign ::= "+" | "-" | " "\n width ::= integer\n precision ::= integer\n type ::= "b" | "c" | "d" | "e" | "E" | "f" | "F" | "g" | "G" | "n" | "o" | "s" | "x" | "X" | "%"\n\nThe *fill* character can be any character other than \'{\' or \'}\'. The\npresence of a fill character is signaled by the character following\nit, which must be one of the alignment options. If the second\ncharacter of *format_spec* is not a valid alignment option, then it is\nassumed that both the fill character and the alignment option are\nabsent.\n\nThe meaning of the various alignment options is as follows:\n\n +-----------+------------------------------------------------------------+\n | Option | Meaning |\n +===========+============================================================+\n | ``\'<\'`` | Forces the field to be left-aligned within the available |\n | | space (this is the default for most objects). |\n +-----------+------------------------------------------------------------+\n | ``\'>\'`` | Forces the field to be right-aligned within the available |\n | | space (this is the default for numbers). 
|\n +-----------+------------------------------------------------------------+\n | ``\'=\'`` | Forces the padding to be placed after the sign (if any) |\n | | but before the digits. This is used for printing fields |\n | | in the form \'+000000120\'. This alignment option is only |\n | | valid for numeric types. |\n +-----------+------------------------------------------------------------+\n | ``\'^\'`` | Forces the field to be centered within the available |\n | | space. |\n +-----------+------------------------------------------------------------+\n\nNote that unless a minimum field width is defined, the field width\nwill always be the same size as the data to fill it, so that the\nalignment option has no meaning in this case.\n\nThe *sign* option is only valid for number types, and can be one of\nthe following:\n\n +-----------+------------------------------------------------------------+\n | Option | Meaning |\n +===========+============================================================+\n | ``\'+\'`` | indicates that a sign should be used for both positive as |\n | | well as negative numbers. |\n +-----------+------------------------------------------------------------+\n | ``\'-\'`` | indicates that a sign should be used only for negative |\n | | numbers (this is the default behavior). |\n +-----------+------------------------------------------------------------+\n | space | indicates that a leading space should be used on positive |\n | | numbers, and a minus sign on negative numbers. |\n +-----------+------------------------------------------------------------+\n\nThe ``\'#\'`` option is only valid for integers, and only for binary,\noctal, or hexadecimal output. If present, it specifies that the\noutput will be prefixed by ``\'0b\'``, ``\'0o\'``, or ``\'0x\'``,\nrespectively.\n\nThe ``\',\'`` option signals the use of a comma for a thousands\nseparator. For a locale aware separator, use the ``\'n\'`` integer\npresentation type instead.\n\nChanged in version 2.7: Added the ``\',\'`` option (see also **PEP\n378**).\n\n*width* is a decimal integer defining the minimum field width. If not\nspecified, then the field width will be determined by the content.\n\nIf the *width* field is preceded by a zero (``\'0\'``) character, this\nenables zero-padding. This is equivalent to an *alignment* type of\n``\'=\'`` and a *fill* character of ``\'0\'``.\n\nThe *precision* is a decimal number indicating how many digits should\nbe displayed after the decimal point for a floating point value\nformatted with ``\'f\'`` and ``\'F\'``, or before and after the decimal\npoint for a floating point value formatted with ``\'g\'`` or ``\'G\'``.\nFor non-number types the field indicates the maximum field size - in\nother words, how many characters will be used from the field content.\nThe *precision* is not allowed for integer values.\n\nFinally, the *type* determines how the data should be presented.\n\nThe available string presentation types are:\n\n +-----------+------------------------------------------------------------+\n | Type | Meaning |\n +===========+============================================================+\n | ``\'s\'`` | String format. This is the default type for strings and |\n | | may be omitted. |\n +-----------+------------------------------------------------------------+\n | None | The same as ``\'s\'``. 
|\n +-----------+------------------------------------------------------------+\n\nThe available integer presentation types are:\n\n +-----------+------------------------------------------------------------+\n | Type | Meaning |\n +===========+============================================================+\n | ``\'b\'`` | Binary format. Outputs the number in base 2. |\n +-----------+------------------------------------------------------------+\n | ``\'c\'`` | Character. Converts the integer to the corresponding |\n | | unicode character before printing. |\n +-----------+------------------------------------------------------------+\n | ``\'d\'`` | Decimal Integer. Outputs the number in base 10. |\n +-----------+------------------------------------------------------------+\n | ``\'o\'`` | Octal format. Outputs the number in base 8. |\n +-----------+------------------------------------------------------------+\n | ``\'x\'`` | Hex format. Outputs the number in base 16, using lower- |\n | | case letters for the digits above 9. |\n +-----------+------------------------------------------------------------+\n | ``\'X\'`` | Hex format. Outputs the number in base 16, using upper- |\n | | case letters for the digits above 9. |\n +-----------+------------------------------------------------------------+\n | ``\'n\'`` | Number. This is the same as ``\'d\'``, except that it uses |\n | | the current locale setting to insert the appropriate |\n | | number separator characters. |\n +-----------+------------------------------------------------------------+\n | None | The same as ``\'d\'``. |\n +-----------+------------------------------------------------------------+\n\nIn addition to the above presentation types, integers can be formatted\nwith the floating point presentation types listed below (except\n``\'n\'`` and None). When doing so, ``float()`` is used to convert the\ninteger to a floating point number before formatting.\n\nThe available presentation types for floating point and decimal values\nare:\n\n +-----------+------------------------------------------------------------+\n | Type | Meaning |\n +===========+============================================================+\n | ``\'e\'`` | Exponent notation. Prints the number in scientific |\n | | notation using the letter \'e\' to indicate the exponent. |\n +-----------+------------------------------------------------------------+\n | ``\'E\'`` | Exponent notation. Same as ``\'e\'`` except it uses an upper |\n | | case \'E\' as the separator character. |\n +-----------+------------------------------------------------------------+\n | ``\'f\'`` | Fixed point. Displays the number as a fixed-point number. |\n +-----------+------------------------------------------------------------+\n | ``\'F\'`` | Fixed point. Same as ``\'f\'``. |\n +-----------+------------------------------------------------------------+\n | ``\'g\'`` | General format. For a given precision ``p >= 1``, this |\n | | rounds the number to ``p`` significant digits and then |\n | | formats the result in either fixed-point format or in |\n | | scientific notation, depending on its magnitude. The |\n | | precise rules are as follows: suppose that the result |\n | | formatted with presentation type ``\'e\'`` and precision |\n | | ``p-1`` would have exponent ``exp``. Then if ``-4 <= exp |\n | | < p``, the number is formatted with presentation type |\n | | ``\'f\'`` and precision ``p-1-exp``. 
Otherwise, the number |\n | | is formatted with presentation type ``\'e\'`` and precision |\n | | ``p-1``. In both cases insignificant trailing zeros are |\n | | removed from the significand, and the decimal point is |\n | | also removed if there are no remaining digits following |\n | | it. Positive and negative infinity, positive and negative |\n | | zero, and nans, are formatted as ``inf``, ``-inf``, ``0``, |\n | | ``-0`` and ``nan`` respectively, regardless of the |\n | | precision. A precision of ``0`` is treated as equivalent |\n | | to a precision of ``1``. |\n +-----------+------------------------------------------------------------+\n | ``\'G\'`` | General format. Same as ``\'g\'`` except switches to ``\'E\'`` |\n | | if the number gets too large. The representations of |\n | | infinity and NaN are uppercased, too. |\n +-----------+------------------------------------------------------------+\n | ``\'n\'`` | Number. This is the same as ``\'g\'``, except that it uses |\n | | the current locale setting to insert the appropriate |\n | | number separator characters. |\n +-----------+------------------------------------------------------------+\n | ``\'%\'`` | Percentage. Multiplies the number by 100 and displays in |\n | | fixed (``\'f\'``) format, followed by a percent sign. |\n +-----------+------------------------------------------------------------+\n | None | The same as ``\'g\'``. |\n +-----------+------------------------------------------------------------+\n\n\nFormat examples\n===============\n\nThis section contains examples of the new format syntax and comparison\nwith the old ``%``-formatting.\n\nIn most of the cases the syntax is similar to the old\n``%``-formatting, with the addition of the ``{}`` and with ``:`` used\ninstead of ``%``. For example, ``\'%03.2f\'`` can be translated to\n``\'{:03.2f}\'``.\n\nThe new format syntax also supports new and different options, shown\nin the follow examples.\n\nAccessing arguments by position:\n\n >>> \'{0}, {1}, {2}\'.format(\'a\', \'b\', \'c\')\n \'a, b, c\'\n >>> \'{}, {}, {}\'.format(\'a\', \'b\', \'c\') # 2.7+ only\n \'a, b, c\'\n >>> \'{2}, {1}, {0}\'.format(\'a\', \'b\', \'c\')\n \'c, b, a\'\n >>> \'{2}, {1}, {0}\'.format(*\'abc\') # unpacking argument sequence\n \'c, b, a\'\n >>> \'{0}{1}{0}\'.format(\'abra\', \'cad\') # arguments\' indices can be repeated\n \'abracadabra\'\n\nAccessing arguments by name:\n\n >>> \'Coordinates: {latitude}, {longitude}\'.format(latitude=\'37.24N\', longitude=\'-115.81W\')\n \'Coordinates: 37.24N, -115.81W\'\n >>> coord = {\'latitude\': \'37.24N\', \'longitude\': \'-115.81W\'}\n >>> \'Coordinates: {latitude}, {longitude}\'.format(**coord)\n \'Coordinates: 37.24N, -115.81W\'\n\nAccessing arguments\' attributes:\n\n >>> c = 3-5j\n >>> (\'The complex number {0} is formed from the real part {0.real} \'\n ... \'and the imaginary part {0.imag}.\').format(c)\n \'The complex number (3-5j) is formed from the real part 3.0 and the imaginary part -5.0.\'\n >>> class Point(object):\n ... def __init__(self, x, y):\n ... self.x, self.y = x, y\n ... def __str__(self):\n ... 
return \'Point({self.x}, {self.y})\'.format(self=self)\n ...\n >>> str(Point(4, 2))\n \'Point(4, 2)\'\n\nAccessing arguments\' items:\n\n >>> coord = (3, 5)\n >>> \'X: {0[0]}; Y: {0[1]}\'.format(coord)\n \'X: 3; Y: 5\'\n\nReplacing ``%s`` and ``%r``:\n\n >>> "repr() shows quotes: {!r}; str() doesn\'t: {!s}".format(\'test1\', \'test2\')\n "repr() shows quotes: \'test1\'; str() doesn\'t: test2"\n\nAligning the text and specifying a width:\n\n >>> \'{:<30}\'.format(\'left aligned\')\n \'left aligned \'\n >>> \'{:>30}\'.format(\'right aligned\')\n \' right aligned\'\n >>> \'{:^30}\'.format(\'centered\')\n \' centered \'\n >>> \'{:*^30}\'.format(\'centered\') # use \'*\' as a fill char\n \'***********centered***********\'\n\nReplacing ``%+f``, ``%-f``, and ``% f`` and specifying a sign:\n\n >>> \'{:+f}; {:+f}\'.format(3.14, -3.14) # show it always\n \'+3.140000; -3.140000\'\n >>> \'{: f}; {: f}\'.format(3.14, -3.14) # show a space for positive numbers\n \' 3.140000; -3.140000\'\n >>> \'{:-f}; {:-f}\'.format(3.14, -3.14) # show only the minus -- same as \'{:f}; {:f}\'\n \'3.140000; -3.140000\'\n\nReplacing ``%x`` and ``%o`` and converting the value to different\nbases:\n\n >>> # format also supports binary numbers\n >>> "int: {0:d}; hex: {0:x}; oct: {0:o}; bin: {0:b}".format(42)\n \'int: 42; hex: 2a; oct: 52; bin: 101010\'\n >>> # with 0x, 0o, or 0b as prefix:\n >>> "int: {0:d}; hex: {0:#x}; oct: {0:#o}; bin: {0:#b}".format(42)\n \'int: 42; hex: 0x2a; oct: 0o52; bin: 0b101010\'\n\nUsing the comma as a thousands separator:\n\n >>> \'{:,}\'.format(1234567890)\n \'1,234,567,890\'\n\nExpressing a percentage:\n\n >>> points = 19.5\n >>> total = 22\n >>> \'Correct answers: {:.2%}.\'.format(points/total)\n \'Correct answers: 88.64%\'\n\nUsing type-specific formatting:\n\n >>> import datetime\n >>> d = datetime.datetime(2010, 7, 4, 12, 15, 58)\n >>> \'{:%Y-%m-%d %H:%M:%S}\'.format(d)\n \'2010-07-04 12:15:58\'\n\nNesting arguments and more complex examples:\n\n >>> for align, text in zip(\'<^>\', [\'left\', \'center\', \'right\']):\n ... \'{0:{fill}{align}16}\'.format(text, fill=align, align=align)\n ...\n \'left<<<<<<<<<<<<\'\n \'^^^^^center^^^^^\'\n \'>>>>>>>>>>>right\'\n >>>\n >>> octets = [192, 168, 0, 1]\n >>> \'{:02X}{:02X}{:02X}{:02X}\'.format(*octets)\n \'C0A80001\'\n >>> int(_, 16)\n 3232235521\n >>>\n >>> width = 5\n >>> for num in range(5,12):\n ... for base in \'dXob\':\n ... print \'{0:{width}{base}}\'.format(num, base=base, width=width),\n ... print\n ...\n 5 5 5 101\n 6 6 6 110\n 7 7 7 111\n 8 8 10 1000\n 9 9 11 1001\n 10 A 12 1010\n 11 B 13 1011\n', 'function': u'\nFunction definitions\n********************\n\nA function definition defines a user-defined function object (see\nsection *The standard type hierarchy*):\n\n decorated ::= decorators (classdef | funcdef)\n decorators ::= decorator+\n decorator ::= "@" dotted_name ["(" [argument_list [","]] ")"] NEWLINE\n funcdef ::= "def" funcname "(" [parameter_list] ")" ":" suite\n dotted_name ::= identifier ("." identifier)*\n parameter_list ::= (defparameter ",")*\n ( "*" identifier [, "**" identifier]\n | "**" identifier\n | defparameter [","] )\n defparameter ::= parameter ["=" expression]\n sublist ::= parameter ("," parameter)* [","]\n parameter ::= identifier | "(" sublist ")"\n funcname ::= identifier\n\nA function definition is an executable statement. Its execution binds\nthe function name in the current local namespace to a function object\n(a wrapper around the executable code for the function). 
This\nfunction object contains a reference to the current global namespace\nas the global namespace to be used when the function is called.\n\nThe function definition does not execute the function body; this gets\nexecuted only when the function is called. [3]\n\nA function definition may be wrapped by one or more *decorator*\nexpressions. Decorator expressions are evaluated when the function is\ndefined, in the scope that contains the function definition. The\nresult must be a callable, which is invoked with the function object\nas the only argument. The returned value is bound to the function name\ninstead of the function object. Multiple decorators are applied in\nnested fashion. For example, the following code:\n\n @f1(arg)\n @f2\n def func(): pass\n\nis equivalent to:\n\n def func(): pass\n func = f1(arg)(f2(func))\n\nWhen one or more top-level parameters have the form *parameter* ``=``\n*expression*, the function is said to have "default parameter values."\nFor a parameter with a default value, the corresponding argument may\nbe omitted from a call, in which case the parameter\'s default value is\nsubstituted. If a parameter has a default value, all following\nparameters must also have a default value --- this is a syntactic\nrestriction that is not expressed by the grammar.\n\n**Default parameter values are evaluated when the function definition\nis executed.** This means that the expression is evaluated once, when\nthe function is defined, and that that same "pre-computed" value is\nused for each call. This is especially important to understand when a\ndefault parameter is a mutable object, such as a list or a dictionary:\nif the function modifies the object (e.g. by appending an item to a\nlist), the default value is in effect modified. This is generally not\nwhat was intended. A way around this is to use ``None`` as the\ndefault, and explicitly test for it in the body of the function, e.g.:\n\n def whats_on_the_telly(penguin=None):\n if penguin is None:\n penguin = []\n penguin.append("property of the zoo")\n return penguin\n\nFunction call semantics are described in more detail in section\n*Calls*. A function call always assigns values to all parameters\nmentioned in the parameter list, either from position arguments, from\nkeyword arguments, or from default values. If the form\n"``*identifier``" is present, it is initialized to a tuple receiving\nany excess positional parameters, defaulting to the empty tuple. If\nthe form "``**identifier``" is present, it is initialized to a new\ndictionary receiving any excess keyword arguments, defaulting to a new\nempty dictionary.\n\nIt is also possible to create anonymous functions (functions not bound\nto a name), for immediate use in expressions. This uses lambda forms,\ndescribed in section *Lambdas*. Note that the lambda form is merely a\nshorthand for a simplified function definition; a function defined in\na "``def``" statement can be passed around or assigned to another name\njust like a function defined by a lambda form. The "``def``" form is\nactually more powerful since it allows the execution of multiple\nstatements.\n\n**Programmer\'s note:** Functions are first-class objects. A "``def``"\nform executed inside a function definition defines a local function\nthat can be returned or passed around. Free variables used in the\nnested function can access the local variables of the function\ncontaining the def. 
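A small sketch of that pattern (the function names are arbitrary):

   >>> def make_counter():
   ...     count = [0]            # a list, so the nested function can mutate it
   ...     def bump():
   ...         count[0] += 1      # 'count' is a free variable here
   ...         return count[0]
   ...     return bump
   ...
   >>> bump = make_counter()
   >>> bump(), bump()
   (1, 2)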
See section *Naming and binding* for details.\n', 'global': u'\nThe ``global`` statement\n************************\n\n global_stmt ::= "global" identifier ("," identifier)*\n\nThe ``global`` statement is a declaration which holds for the entire\ncurrent code block. It means that the listed identifiers are to be\ninterpreted as globals. It would be impossible to assign to a global\nvariable without ``global``, although free variables may refer to\nglobals without being declared global.\n\nNames listed in a ``global`` statement must not be used in the same\ncode block textually preceding that ``global`` statement.\n\nNames listed in a ``global`` statement must not be defined as formal\nparameters or in a ``for`` loop control target, ``class`` definition,\nfunction definition, or ``import`` statement.\n\n**CPython implementation detail:** The current implementation does not\nenforce the latter two restrictions, but programs should not abuse\nthis freedom, as future implementations may enforce them or silently\nchange the meaning of the program.\n\n**Programmer\'s note:** the ``global`` is a directive to the parser.\nIt applies only to code parsed at the same time as the ``global``\nstatement. In particular, a ``global`` statement contained in an\n``exec`` statement does not affect the code block *containing* the\n``exec`` statement, and code contained in an ``exec`` statement is\nunaffected by ``global`` statements in the code containing the\n``exec`` statement. The same applies to the ``eval()``,\n``execfile()`` and ``compile()`` functions.\n', - 'id-classes': u'\nReserved classes of identifiers\n*******************************\n\nCertain classes of identifiers (besides keywords) have special\nmeanings. These classes are identified by the patterns of leading and\ntrailing underscore characters:\n\n``_*``\n Not imported by ``from module import *``. The special identifier\n ``_`` is used in the interactive interpreter to store the result of\n the last evaluation; it is stored in the ``__builtin__`` module.\n When not in interactive mode, ``_`` has no special meaning and is\n not defined. See section *The import statement*.\n\n Note: The name ``_`` is often used in conjunction with\n internationalization; refer to the documentation for the\n ``gettext`` module for more information on this convention.\n\n``__*__``\n System-defined names. These names are defined by the interpreter\n and its implementation (including the standard library);\n applications should not expect to define additional names using\n this convention. The set of names of this class defined by Python\n may be extended in future versions. See section *Special method\n names*.\n\n``__*``\n Class-private names. Names in this category, when used within the\n context of a class definition, are re-written to use a mangled form\n to help avoid name clashes between "private" attributes of base and\n derived classes. See section *Identifiers (Names)*.\n', - 'identifiers': u'\nIdentifiers and keywords\n************************\n\nIdentifiers (also referred to as *names*) are described by the\nfollowing lexical definitions:\n\n identifier ::= (letter|"_") (letter | digit | "_")*\n letter ::= lowercase | uppercase\n lowercase ::= "a"..."z"\n uppercase ::= "A"..."Z"\n digit ::= "0"..."9"\n\nIdentifiers are unlimited in length. Case is significant.\n\n\nKeywords\n========\n\nThe following identifiers are used as reserved words, or *keywords* of\nthe language, and cannot be used as ordinary identifiers. 
They must\nbe spelled exactly as written here:\n\n and del from not while\n as elif global or with\n assert else if pass yield\n break except import print\n class exec in raise\n continue finally is return\n def for lambda try\n\nChanged in version 2.4: ``None`` became a constant and is now\nrecognized by the compiler as a name for the built-in object ``None``.\nAlthough it is not a keyword, you cannot assign a different object to\nit.\n\nChanged in version 2.5: Both ``as`` and ``with`` are only recognized\nwhen the ``with_statement`` future feature has been enabled. It will\nalways be enabled in Python 2.6. See section *The with statement* for\ndetails. Note that using ``as`` and ``with`` as identifiers will\nalways issue a warning, even when the ``with_statement`` future\ndirective is not in effect.\n\n\nReserved classes of identifiers\n===============================\n\nCertain classes of identifiers (besides keywords) have special\nmeanings. These classes are identified by the patterns of leading and\ntrailing underscore characters:\n\n``_*``\n Not imported by ``from module import *``. The special identifier\n ``_`` is used in the interactive interpreter to store the result of\n the last evaluation; it is stored in the ``__builtin__`` module.\n When not in interactive mode, ``_`` has no special meaning and is\n not defined. See section *The import statement*.\n\n Note: The name ``_`` is often used in conjunction with\n internationalization; refer to the documentation for the\n ``gettext`` module for more information on this convention.\n\n``__*__``\n System-defined names. These names are defined by the interpreter\n and its implementation (including the standard library);\n applications should not expect to define additional names using\n this convention. The set of names of this class defined by Python\n may be extended in future versions. See section *Special method\n names*.\n\n``__*``\n Class-private names. Names in this category, when used within the\n context of a class definition, are re-written to use a mangled form\n to help avoid name clashes between "private" attributes of base and\n derived classes. See section *Identifiers (Names)*.\n', + 'id-classes': u'\nReserved classes of identifiers\n*******************************\n\nCertain classes of identifiers (besides keywords) have special\nmeanings. These classes are identified by the patterns of leading and\ntrailing underscore characters:\n\n``_*``\n Not imported by ``from module import *``. The special identifier\n ``_`` is used in the interactive interpreter to store the result of\n the last evaluation; it is stored in the ``__builtin__`` module.\n When not in interactive mode, ``_`` has no special meaning and is\n not defined. See section *The import statement*.\n\n Note: The name ``_`` is often used in conjunction with\n internationalization; refer to the documentation for the\n ``gettext`` module for more information on this convention.\n\n``__*__``\n System-defined names. These names are defined by the interpreter\n and its implementation (including the standard library). Current\n system names are discussed in the *Special method names* section\n and elsewhere. More will likely be defined in future versions of\n Python. *Any* use of ``__*__`` names, in any context, that does\n not follow explicitly documented use, is subject to breakage\n without warning.\n\n``__*``\n Class-private names. 
Names in this category, when used within the\n context of a class definition, are re-written to use a mangled form\n to help avoid name clashes between "private" attributes of base and\n derived classes. See section *Identifiers (Names)*.\n', + 'identifiers': u'\nIdentifiers and keywords\n************************\n\nIdentifiers (also referred to as *names*) are described by the\nfollowing lexical definitions:\n\n identifier ::= (letter|"_") (letter | digit | "_")*\n letter ::= lowercase | uppercase\n lowercase ::= "a"..."z"\n uppercase ::= "A"..."Z"\n digit ::= "0"..."9"\n\nIdentifiers are unlimited in length. Case is significant.\n\n\nKeywords\n========\n\nThe following identifiers are used as reserved words, or *keywords* of\nthe language, and cannot be used as ordinary identifiers. They must\nbe spelled exactly as written here:\n\n and del from not while\n as elif global or with\n assert else if pass yield\n break except import print\n class exec in raise\n continue finally is return\n def for lambda try\n\nChanged in version 2.4: ``None`` became a constant and is now\nrecognized by the compiler as a name for the built-in object ``None``.\nAlthough it is not a keyword, you cannot assign a different object to\nit.\n\nChanged in version 2.5: Both ``as`` and ``with`` are only recognized\nwhen the ``with_statement`` future feature has been enabled. It will\nalways be enabled in Python 2.6. See section *The with statement* for\ndetails. Note that using ``as`` and ``with`` as identifiers will\nalways issue a warning, even when the ``with_statement`` future\ndirective is not in effect.\n\n\nReserved classes of identifiers\n===============================\n\nCertain classes of identifiers (besides keywords) have special\nmeanings. These classes are identified by the patterns of leading and\ntrailing underscore characters:\n\n``_*``\n Not imported by ``from module import *``. The special identifier\n ``_`` is used in the interactive interpreter to store the result of\n the last evaluation; it is stored in the ``__builtin__`` module.\n When not in interactive mode, ``_`` has no special meaning and is\n not defined. See section *The import statement*.\n\n Note: The name ``_`` is often used in conjunction with\n internationalization; refer to the documentation for the\n ``gettext`` module for more information on this convention.\n\n``__*__``\n System-defined names. These names are defined by the interpreter\n and its implementation (including the standard library). Current\n system names are discussed in the *Special method names* section\n and elsewhere. More will likely be defined in future versions of\n Python. *Any* use of ``__*__`` names, in any context, that does\n not follow explicitly documented use, is subject to breakage\n without warning.\n\n``__*``\n Class-private names. Names in this category, when used within the\n context of a class definition, are re-written to use a mangled form\n to help avoid name clashes between "private" attributes of base and\n derived classes. 
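A short illustration of the mangled form (class and attribute names are
arbitrary):

   >>> class Base(object):
   ...     def __init__(self):
   ...         self.__token = 'base'      # stored under the mangled name
   ...
   >>> vars(Base())
   {'_Base__token': 'base'}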
See section *Identifiers (Names)*.\n', 'if': u'\nThe ``if`` statement\n********************\n\nThe ``if`` statement is used for conditional execution:\n\n if_stmt ::= "if" expression ":" suite\n ( "elif" expression ":" suite )*\n ["else" ":" suite]\n\nIt selects exactly one of the suites by evaluating the expressions one\nby one until one is found to be true (see section *Boolean operations*\nfor the definition of true and false); then that suite is executed\n(and no other part of the ``if`` statement is executed or evaluated).\nIf all expressions are false, the suite of the ``else`` clause, if\npresent, is executed.\n', 'imaginary': u'\nImaginary literals\n******************\n\nImaginary literals are described by the following lexical definitions:\n\n imagnumber ::= (floatnumber | intpart) ("j" | "J")\n\nAn imaginary literal yields a complex number with a real part of 0.0.\nComplex numbers are represented as a pair of floating point numbers\nand have the same restrictions on their range. To create a complex\nnumber with a nonzero real part, add a floating point number to it,\ne.g., ``(3+4j)``. Some examples of imaginary literals:\n\n 3.14j 10.j 10j .001j 1e100j 3.14e-10j\n', - 'import': u'\nThe ``import`` statement\n************************\n\n import_stmt ::= "import" module ["as" name] ( "," module ["as" name] )*\n | "from" relative_module "import" identifier ["as" name]\n ( "," identifier ["as" name] )*\n | "from" relative_module "import" "(" identifier ["as" name]\n ( "," identifier ["as" name] )* [","] ")"\n | "from" module "import" "*"\n module ::= (identifier ".")* identifier\n relative_module ::= "."* module | "."+\n name ::= identifier\n\nImport statements are executed in two steps: (1) find a module, and\ninitialize it if necessary; (2) define a name or names in the local\nnamespace (of the scope where the ``import`` statement occurs). The\nstatement comes in two forms differing on whether it uses the ``from``\nkeyword. The first form (without ``from``) repeats these steps for\neach identifier in the list. The form with ``from`` performs step (1)\nonce, and then performs step (2) repeatedly.\n\nTo understand how step (1) occurs, one must first understand how\nPython handles hierarchical naming of modules. To help organize\nmodules and provide a hierarchy in naming, Python has a concept of\npackages. A package can contain other packages and modules while\nmodules cannot contain other modules or packages. From a file system\nperspective, packages are directories and modules are files. The\noriginal specification for packages is still available to read,\nalthough minor details have changed since the writing of that\ndocument.\n\nOnce the name of the module is known (unless otherwise specified, the\nterm "module" will refer to both packages and modules), searching for\nthe module or package can begin. The first place checked is\n``sys.modules``, the cache of all modules that have been imported\npreviously. If the module is found there then it is used in step (2)\nof import.\n\nIf the module is not found in the cache, then ``sys.meta_path`` is\nsearched (the specification for ``sys.meta_path`` can be found in\n**PEP 302**). The object is a list of *finder* objects which are\nqueried in order as to whether they know how to load the module by\ncalling their ``find_module()`` method with the name of the module. 
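A minimal sketch of such a finder (purely illustrative; it declines every
request by returning ``None`` so that the remaining finders are consulted):

   >>> import sys
   >>> class NoisyFinder(object):
   ...     def find_module(self, fullname, path=None):
   ...         print 'find_module asked for', fullname
   ...         return None
   ...
   >>> sys.meta_path.append(NoisyFinder())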
If\nthe module happens to be contained within a package (as denoted by the\nexistence of a dot in the name), then a second argument to\n``find_module()`` is given as the value of the ``__path__`` attribute\nfrom the parent package (everything up to the last dot in the name of\nthe module being imported). If a finder can find the module it returns\na *loader* (discussed later) or returns ``None``.\n\nIf none of the finders on ``sys.meta_path`` are able to find the\nmodule then some implicitly defined finders are queried.\nImplementations of Python vary in what implicit meta path finders are\ndefined. The one they all do define, though, is one that handles\n``sys.path_hooks``, ``sys.path_importer_cache``, and ``sys.path``.\n\nThe implicit finder searches for the requested module in the "paths"\nspecified in one of two places ("paths" do not have to be file system\npaths). If the module being imported is supposed to be contained\nwithin a package then the second argument passed to ``find_module()``,\n``__path__`` on the parent package, is used as the source of paths. If\nthe module is not contained in a package then ``sys.path`` is used as\nthe source of paths.\n\nOnce the source of paths is chosen it is iterated over to find a\nfinder that can handle that path. The dict at\n``sys.path_importer_cache`` caches finders for paths and is checked\nfor a finder. If the path does not have a finder cached then\n``sys.path_hooks`` is searched by calling each object in the list with\na single argument of the path, returning a finder or raises\n``ImportError``. If a finder is returned then it is cached in\n``sys.path_importer_cache`` and then used for that path entry. If no\nfinder can be found but the path exists then a value of ``None`` is\nstored in ``sys.path_importer_cache`` to signify that an implicit,\nfile-based finder that handles modules stored as individual files\nshould be used for that path. If the path does not exist then a finder\nwhich always returns ``None`` is placed in the cache for the path.\n\nIf no finder can find the module then ``ImportError`` is raised.\nOtherwise some finder returned a loader whose ``load_module()`` method\nis called with the name of the module to load (see **PEP 302** for the\noriginal definition of loaders). A loader has several responsibilities\nto perform on a module it loads. First, if the module already exists\nin ``sys.modules`` (a possibility if the loader is called outside of\nthe import machinery) then it is to use that module for initialization\nand not a new module. But if the module does not exist in\n``sys.modules`` then it is to be added to that dict before\ninitialization begins. If an error occurs during loading of the module\nand it was added to ``sys.modules`` it is to be removed from the dict.\nIf an error occurs but the module was already in ``sys.modules`` it is\nleft in the dict.\n\nThe loader must set several attributes on the module. ``__name__`` is\nto be set to the name of the module. ``__file__`` is to be the "path"\nto the file unless the module is built-in (and thus listed in\n``sys.builtin_module_names``) in which case the attribute is not set.\nIf what is being imported is a package then ``__path__`` is to be set\nto a list of paths to be searched when looking for modules and\npackages contained within the package being imported. ``__package__``\nis optional but should be set to the name of package that contains the\nmodule or package (the empty string is used for module not contained\nin a package). 
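A quick check of the loader-set attributes described above, on an ordinary package and on a built-in module (nothing here is specific to any particular loader):

    import logging
    import sys

    print logging.__name__            # 'logging'
    print logging.__file__            # path to logging/__init__.py or .pyc
    print logging.__path__            # packages get __path__ for submodule searches
    print hasattr(sys, '__file__')    # False: sys is built-in, so __file__ is not set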
``__loader__`` is also optional but should be set to\nthe loader object that is loading the module.\n\nIf an error occurs during loading then the loader raises\n``ImportError`` if some other exception is not already being\npropagated. Otherwise the loader returns the module that was loaded\nand initialized.\n\nWhen step (1) finishes without raising an exception, step (2) can\nbegin.\n\nThe first form of ``import`` statement binds the module name in the\nlocal namespace to the module object, and then goes on to import the\nnext identifier, if any. If the module name is followed by ``as``,\nthe name following ``as`` is used as the local name for the module.\n\nThe ``from`` form does not bind the module name: it goes through the\nlist of identifiers, looks each one of them up in the module found in\nstep (1), and binds the name in the local namespace to the object thus\nfound. As with the first form of ``import``, an alternate local name\ncan be supplied by specifying "``as`` localname". If a name is not\nfound, ``ImportError`` is raised. If the list of identifiers is\nreplaced by a star (``\'*\'``), all public names defined in the module\nare bound in the local namespace of the ``import`` statement..\n\nThe *public names* defined by a module are determined by checking the\nmodule\'s namespace for a variable named ``__all__``; if defined, it\nmust be a sequence of strings which are names defined or imported by\nthat module. The names given in ``__all__`` are all considered public\nand are required to exist. If ``__all__`` is not defined, the set of\npublic names includes all names found in the module\'s namespace which\ndo not begin with an underscore character (``\'_\'``). ``__all__``\nshould contain the entire public API. It is intended to avoid\naccidentally exporting items that are not part of the API (such as\nlibrary modules which were imported and used within the module).\n\nThe ``from`` form with ``*`` may only occur in a module scope. If the\nwild card form of import --- ``import *`` --- is used in a function\nand the function contains or is a nested block with free variables,\nthe compiler will raise a ``SyntaxError``.\n\nWhen specifying what module to import you do not have to specify the\nabsolute name of the module. When a module or package is contained\nwithin another package it is possible to make a relative import within\nthe same top package without having to mention the package name. By\nusing leading dots in the specified module or package after ``from``\nyou can specify how high to traverse up the current package hierarchy\nwithout specifying exact names. One leading dot means the current\npackage where the module making the import exists. Two dots means up\none package level. Three dots is up two levels, etc. So if you execute\n``from . import mod`` from a module in the ``pkg`` package then you\nwill end up importing ``pkg.mod``. If you execute ``from ..subpkg2\nimprt mod`` from within ``pkg.subpkg1`` you will import\n``pkg.subpkg2.mod``. The specification for relative imports is\ncontained within **PEP 328**.\n\n``importlib.import_module()`` is provided to support applications that\ndetermine which modules need to be loaded dynamically.\n\n\nFuture statements\n=================\n\nA *future statement* is a directive to the compiler that a particular\nmodule should be compiled using syntax or semantics that will be\navailable in a specified future release of Python. 
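To make the ``__all__`` convention above concrete, here is a sketch of a hypothetical module, say ``shapes.py``; only ``area`` would be bound by ``from shapes import *``:

    # shapes.py -- hypothetical module used only for illustration
    __all__ = ['area']                 # the public API seen by a star import

    def area(width, height):
        return width * height

    def _validate(width, height):      # leading underscore: never exported by *
        return width > 0 and height > 0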
The future\nstatement is intended to ease migration to future versions of Python\nthat introduce incompatible changes to the language. It allows use of\nthe new features on a per-module basis before the release in which the\nfeature becomes standard.\n\n future_statement ::= "from" "__future__" "import" feature ["as" name]\n ("," feature ["as" name])*\n | "from" "__future__" "import" "(" feature ["as" name]\n ("," feature ["as" name])* [","] ")"\n feature ::= identifier\n name ::= identifier\n\nA future statement must appear near the top of the module. The only\nlines that can appear before a future statement are:\n\n* the module docstring (if any),\n\n* comments,\n\n* blank lines, and\n\n* other future statements.\n\nThe features recognized by Python 2.6 are ``unicode_literals``,\n``print_function``, ``absolute_import``, ``division``, ``generators``,\n``nested_scopes`` and ``with_statement``. ``generators``,\n``with_statement``, ``nested_scopes`` are redundant in Python version\n2.6 and above because they are always enabled.\n\nA future statement is recognized and treated specially at compile\ntime: Changes to the semantics of core constructs are often\nimplemented by generating different code. It may even be the case\nthat a new feature introduces new incompatible syntax (such as a new\nreserved word), in which case the compiler may need to parse the\nmodule differently. Such decisions cannot be pushed off until\nruntime.\n\nFor any given release, the compiler knows which feature names have\nbeen defined, and raises a compile-time error if a future statement\ncontains a feature not known to it.\n\nThe direct runtime semantics are the same as for any import statement:\nthere is a standard module ``__future__``, described later, and it\nwill be imported in the usual way at the time the future statement is\nexecuted.\n\nThe interesting runtime semantics depend on the specific feature\nenabled by the future statement.\n\nNote that there is nothing special about the statement:\n\n import __future__ [as name]\n\nThat is not a future statement; it\'s an ordinary import statement with\nno special semantics or syntax restrictions.\n\nCode compiled by an ``exec`` statement or calls to the built-in\nfunctions ``compile()`` and ``execfile()`` that occur in a module\n``M`` containing a future statement will, by default, use the new\nsyntax or semantics associated with the future statement. This can,\nstarting with Python 2.2 be controlled by optional arguments to\n``compile()`` --- see the documentation of that function for details.\n\nA future statement typed at an interactive interpreter prompt will\ntake effect for the rest of the interpreter session. 
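A minimal future-statement example, assuming it is placed near the top of its own module:

    from __future__ import division   # only docstrings, comments, blank lines and
                                      # other future statements may come before this

    print 1 / 2                       # 0.5: true division is enabled for this module
    print 1 // 2                      # 0: floor division is unaffected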
If an\ninterpreter is started with the *-i* option, is passed a script name\nto execute, and the script includes a future statement, it will be in\neffect in the interactive session started after the script is\nexecuted.\n\nSee also:\n\n **PEP 236** - Back to the __future__\n The original proposal for the __future__ mechanism.\n', + 'import': u'\nThe ``import`` statement\n************************\n\n import_stmt ::= "import" module ["as" name] ( "," module ["as" name] )*\n | "from" relative_module "import" identifier ["as" name]\n ( "," identifier ["as" name] )*\n | "from" relative_module "import" "(" identifier ["as" name]\n ( "," identifier ["as" name] )* [","] ")"\n | "from" module "import" "*"\n module ::= (identifier ".")* identifier\n relative_module ::= "."* module | "."+\n name ::= identifier\n\nImport statements are executed in two steps: (1) find a module, and\ninitialize it if necessary; (2) define a name or names in the local\nnamespace (of the scope where the ``import`` statement occurs). The\nstatement comes in two forms differing on whether it uses the ``from``\nkeyword. The first form (without ``from``) repeats these steps for\neach identifier in the list. The form with ``from`` performs step (1)\nonce, and then performs step (2) repeatedly.\n\nTo understand how step (1) occurs, one must first understand how\nPython handles hierarchical naming of modules. To help organize\nmodules and provide a hierarchy in naming, Python has a concept of\npackages. A package can contain other packages and modules while\nmodules cannot contain other modules or packages. From a file system\nperspective, packages are directories and modules are files. The\noriginal specification for packages is still available to read,\nalthough minor details have changed since the writing of that\ndocument.\n\nOnce the name of the module is known (unless otherwise specified, the\nterm "module" will refer to both packages and modules), searching for\nthe module or package can begin. The first place checked is\n``sys.modules``, the cache of all modules that have been imported\npreviously. If the module is found there then it is used in step (2)\nof import.\n\nIf the module is not found in the cache, then ``sys.meta_path`` is\nsearched (the specification for ``sys.meta_path`` can be found in\n**PEP 302**). The object is a list of *finder* objects which are\nqueried in order as to whether they know how to load the module by\ncalling their ``find_module()`` method with the name of the module. If\nthe module happens to be contained within a package (as denoted by the\nexistence of a dot in the name), then a second argument to\n``find_module()`` is given as the value of the ``__path__`` attribute\nfrom the parent package (everything up to the last dot in the name of\nthe module being imported). If a finder can find the module it returns\na *loader* (discussed later) or returns ``None``.\n\nIf none of the finders on ``sys.meta_path`` are able to find the\nmodule then some implicitly defined finders are queried.\nImplementations of Python vary in what implicit meta path finders are\ndefined. The one they all do define, though, is one that handles\n``sys.path_hooks``, ``sys.path_importer_cache``, and ``sys.path``.\n\nThe implicit finder searches for the requested module in the "paths"\nspecified in one of two places ("paths" do not have to be file system\npaths). 
If the module being imported is supposed to be contained\nwithin a package then the second argument passed to ``find_module()``,\n``__path__`` on the parent package, is used as the source of paths. If\nthe module is not contained in a package then ``sys.path`` is used as\nthe source of paths.\n\nOnce the source of paths is chosen it is iterated over to find a\nfinder that can handle that path. The dict at\n``sys.path_importer_cache`` caches finders for paths and is checked\nfor a finder. If the path does not have a finder cached then\n``sys.path_hooks`` is searched by calling each object in the list with\na single argument of the path, returning a finder or raises\n``ImportError``. If a finder is returned then it is cached in\n``sys.path_importer_cache`` and then used for that path entry. If no\nfinder can be found but the path exists then a value of ``None`` is\nstored in ``sys.path_importer_cache`` to signify that an implicit,\nfile-based finder that handles modules stored as individual files\nshould be used for that path. If the path does not exist then a finder\nwhich always returns ``None`` is placed in the cache for the path.\n\nIf no finder can find the module then ``ImportError`` is raised.\nOtherwise some finder returned a loader whose ``load_module()`` method\nis called with the name of the module to load (see **PEP 302** for the\noriginal definition of loaders). A loader has several responsibilities\nto perform on a module it loads. First, if the module already exists\nin ``sys.modules`` (a possibility if the loader is called outside of\nthe import machinery) then it is to use that module for initialization\nand not a new module. But if the module does not exist in\n``sys.modules`` then it is to be added to that dict before\ninitialization begins. If an error occurs during loading of the module\nand it was added to ``sys.modules`` it is to be removed from the dict.\nIf an error occurs but the module was already in ``sys.modules`` it is\nleft in the dict.\n\nThe loader must set several attributes on the module. ``__name__`` is\nto be set to the name of the module. ``__file__`` is to be the "path"\nto the file unless the module is built-in (and thus listed in\n``sys.builtin_module_names``) in which case the attribute is not set.\nIf what is being imported is a package then ``__path__`` is to be set\nto a list of paths to be searched when looking for modules and\npackages contained within the package being imported. ``__package__``\nis optional but should be set to the name of package that contains the\nmodule or package (the empty string is used for module not contained\nin a package). ``__loader__`` is also optional but should be set to\nthe loader object that is loading the module.\n\nIf an error occurs during loading then the loader raises\n``ImportError`` if some other exception is not already being\npropagated. Otherwise the loader returns the module that was loaded\nand initialized.\n\nWhen step (1) finishes without raising an exception, step (2) can\nbegin.\n\nThe first form of ``import`` statement binds the module name in the\nlocal namespace to the module object, and then goes on to import the\nnext identifier, if any. If the module name is followed by ``as``,\nthe name following ``as`` is used as the local name for the module.\n\nThe ``from`` form does not bind the module name: it goes through the\nlist of identifiers, looks each one of them up in the module found in\nstep (1), and binds the name in the local namespace to the object thus\nfound. 
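A short sketch of both binding forms, including the "``as`` localname" variant; the local names ``p`` and ``path_join`` are arbitrary:

    import os.path as p                        # binds 'p' to the os.path module
    from os.path import join as path_join      # binds one attribute under a local name

    print p.isabs('/tmp')                      # True
    print path_join('spam', 'eggs')            # 'spam/eggs' on POSIX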
As with the first form of ``import``, an alternate local name\ncan be supplied by specifying "``as`` localname". If a name is not\nfound, ``ImportError`` is raised. If the list of identifiers is\nreplaced by a star (``\'*\'``), all public names defined in the module\nare bound in the local namespace of the ``import`` statement..\n\nThe *public names* defined by a module are determined by checking the\nmodule\'s namespace for a variable named ``__all__``; if defined, it\nmust be a sequence of strings which are names defined or imported by\nthat module. The names given in ``__all__`` are all considered public\nand are required to exist. If ``__all__`` is not defined, the set of\npublic names includes all names found in the module\'s namespace which\ndo not begin with an underscore character (``\'_\'``). ``__all__``\nshould contain the entire public API. It is intended to avoid\naccidentally exporting items that are not part of the API (such as\nlibrary modules which were imported and used within the module).\n\nThe ``from`` form with ``*`` may only occur in a module scope. If the\nwild card form of import --- ``import *`` --- is used in a function\nand the function contains or is a nested block with free variables,\nthe compiler will raise a ``SyntaxError``.\n\nWhen specifying what module to import you do not have to specify the\nabsolute name of the module. When a module or package is contained\nwithin another package it is possible to make a relative import within\nthe same top package without having to mention the package name. By\nusing leading dots in the specified module or package after ``from``\nyou can specify how high to traverse up the current package hierarchy\nwithout specifying exact names. One leading dot means the current\npackage where the module making the import exists. Two dots means up\none package level. Three dots is up two levels, etc. So if you execute\n``from . import mod`` from a module in the ``pkg`` package then you\nwill end up importing ``pkg.mod``. If you execute ``from ..subpkg2\nimport mod`` from within ``pkg.subpkg1`` you will import\n``pkg.subpkg2.mod``. The specification for relative imports is\ncontained within **PEP 328**.\n\n``importlib.import_module()`` is provided to support applications that\ndetermine which modules need to be loaded dynamically.\n\n\nFuture statements\n=================\n\nA *future statement* is a directive to the compiler that a particular\nmodule should be compiled using syntax or semantics that will be\navailable in a specified future release of Python. The future\nstatement is intended to ease migration to future versions of Python\nthat introduce incompatible changes to the language. It allows use of\nthe new features on a per-module basis before the release in which the\nfeature becomes standard.\n\n future_statement ::= "from" "__future__" "import" feature ["as" name]\n ("," feature ["as" name])*\n | "from" "__future__" "import" "(" feature ["as" name]\n ("," feature ["as" name])* [","] ")"\n feature ::= identifier\n name ::= identifier\n\nA future statement must appear near the top of the module. The only\nlines that can appear before a future statement are:\n\n* the module docstring (if any),\n\n* comments,\n\n* blank lines, and\n\n* other future statements.\n\nThe features recognized by Python 2.6 are ``unicode_literals``,\n``print_function``, ``absolute_import``, ``division``, ``generators``,\n``nested_scopes`` and ``with_statement``. 
``generators``,\n``with_statement``, ``nested_scopes`` are redundant in Python version\n2.6 and above because they are always enabled.\n\nA future statement is recognized and treated specially at compile\ntime: Changes to the semantics of core constructs are often\nimplemented by generating different code. It may even be the case\nthat a new feature introduces new incompatible syntax (such as a new\nreserved word), in which case the compiler may need to parse the\nmodule differently. Such decisions cannot be pushed off until\nruntime.\n\nFor any given release, the compiler knows which feature names have\nbeen defined, and raises a compile-time error if a future statement\ncontains a feature not known to it.\n\nThe direct runtime semantics are the same as for any import statement:\nthere is a standard module ``__future__``, described later, and it\nwill be imported in the usual way at the time the future statement is\nexecuted.\n\nThe interesting runtime semantics depend on the specific feature\nenabled by the future statement.\n\nNote that there is nothing special about the statement:\n\n import __future__ [as name]\n\nThat is not a future statement; it\'s an ordinary import statement with\nno special semantics or syntax restrictions.\n\nCode compiled by an ``exec`` statement or calls to the built-in\nfunctions ``compile()`` and ``execfile()`` that occur in a module\n``M`` containing a future statement will, by default, use the new\nsyntax or semantics associated with the future statement. This can,\nstarting with Python 2.2 be controlled by optional arguments to\n``compile()`` --- see the documentation of that function for details.\n\nA future statement typed at an interactive interpreter prompt will\ntake effect for the rest of the interpreter session. If an\ninterpreter is started with the *-i* option, is passed a script name\nto execute, and the script includes a future statement, it will be in\neffect in the interactive session started after the script is\nexecuted.\n\nSee also:\n\n **PEP 236** - Back to the __future__\n The original proposal for the __future__ mechanism.\n', 'in': u'\nComparisons\n***********\n\nUnlike C, all comparison operations in Python have the same priority,\nwhich is lower than that of any arithmetic, shifting or bitwise\noperation. Also unlike C, expressions like ``a < b < c`` have the\ninterpretation that is conventional in mathematics:\n\n comparison ::= or_expr ( comp_operator or_expr )*\n comp_operator ::= "<" | ">" | "==" | ">=" | "<=" | "<>" | "!="\n | "is" ["not"] | ["not"] "in"\n\nComparisons yield boolean values: ``True`` or ``False``.\n\nComparisons can be chained arbitrarily, e.g., ``x < y <= z`` is\nequivalent to ``x < y and y <= z``, except that ``y`` is evaluated\nonly once (but in both cases ``z`` is not evaluated at all when ``x <\ny`` is found to be false).\n\nFormally, if *a*, *b*, *c*, ..., *y*, *z* are expressions and *op1*,\n*op2*, ..., *opN* are comparison operators, then ``a op1 b op2 c ... y\nopN z`` is equivalent to ``a op1 b and b op2 c and ... y opN z``,\nexcept that each expression is evaluated at most once.\n\nNote that ``a op1 b op2 c`` doesn\'t imply any kind of comparison\nbetween *a* and *c*, so that, e.g., ``x < y > z`` is perfectly legal\n(though perhaps not pretty).\n\nThe forms ``<>`` and ``!=`` are equivalent; for consistency with C,\n``!=`` is preferred; where ``!=`` is mentioned below ``<>`` is also\naccepted. 
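A quick illustration of comparison chaining and the preferred inequality spelling:

    x, y, z = 1, 2, 3
    print x < y <= z      # True: same as (x < y) and (y <= z), with y evaluated once
    print 1 < 3 > 2       # True: legal, but implies no comparison between 1 and 2
    print 1 != 2          # True: the preferred spelling; '1 <> 2' is equivalent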
The ``<>`` spelling is considered obsolescent.\n\nThe operators ``<``, ``>``, ``==``, ``>=``, ``<=``, and ``!=`` compare\nthe values of two objects. The objects need not have the same type.\nIf both are numbers, they are converted to a common type. Otherwise,\nobjects of different types *always* compare unequal, and are ordered\nconsistently but arbitrarily. You can control comparison behavior of\nobjects of non-built-in types by defining a ``__cmp__`` method or rich\ncomparison methods like ``__gt__``, described in section *Special\nmethod names*.\n\n(This unusual definition of comparison was used to simplify the\ndefinition of operations like sorting and the ``in`` and ``not in``\noperators. In the future, the comparison rules for objects of\ndifferent types are likely to change.)\n\nComparison of objects of the same type depends on the type:\n\n* Numbers are compared arithmetically.\n\n* Strings are compared lexicographically using the numeric equivalents\n (the result of the built-in function ``ord()``) of their characters.\n Unicode and 8-bit strings are fully interoperable in this behavior.\n [4]\n\n* Tuples and lists are compared lexicographically using comparison of\n corresponding elements. This means that to compare equal, each\n element must compare equal and the two sequences must be of the same\n type and have the same length.\n\n If not equal, the sequences are ordered the same as their first\n differing elements. For example, ``cmp([1,2,x], [1,2,y])`` returns\n the same as ``cmp(x,y)``. If the corresponding element does not\n exist, the shorter sequence is ordered first (for example, ``[1,2] <\n [1,2,3]``).\n\n* Mappings (dictionaries) compare equal if and only if their sorted\n (key, value) lists compare equal. [5] Outcomes other than equality\n are resolved consistently, but are not otherwise defined. [6]\n\n* Most other objects of built-in types compare unequal unless they are\n the same object; the choice whether one object is considered smaller\n or larger than another one is made arbitrarily but consistently\n within one execution of a program.\n\nThe operators ``in`` and ``not in`` test for collection membership.\n``x in s`` evaluates to true if *x* is a member of the collection *s*,\nand false otherwise. ``x not in s`` returns the negation of ``x in\ns``. The collection membership test has traditionally been bound to\nsequences; an object is a member of a collection if the collection is\na sequence and contains an element equal to that object. However, it\nmake sense for many other object types to support membership tests\nwithout being a sequence. In particular, dictionaries (for keys) and\nsets support membership testing.\n\nFor the list and tuple types, ``x in y`` is true if and only if there\nexists an index *i* such that ``x == y[i]`` is true.\n\nFor the Unicode and string types, ``x in y`` is true if and only if\n*x* is a substring of *y*. An equivalent test is ``y.find(x) != -1``.\nNote, *x* and *y* need not be the same type; consequently, ``u\'ab\' in\n\'abc\'`` will return ``True``. 
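A few membership tests matching the rules above:

    print 'ab' in 'abc'          # True: substring test for strings
    print u'ab' in 'abc'         # True: unicode and 8-bit strings interoperate
    print 3 in [1, 2, 3]         # True: element test for sequences
    print 'k' in {'k': 1}        # True: dictionaries test their keys
    print 2 not in (1, 3)        # True: 'not in' is the negation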
Empty strings are always considered to\nbe a substring of any other string, so ``"" in "abc"`` will return\n``True``.\n\nChanged in version 2.3: Previously, *x* was required to be a string of\nlength ``1``.\n\nFor user-defined classes which define the ``__contains__()`` method,\n``x in y`` is true if and only if ``y.__contains__(x)`` is true.\n\nFor user-defined classes which do not define ``__contains__()`` but do\ndefine ``__iter__()``, ``x in y`` is true if some value ``z`` with ``x\n== z`` is produced while iterating over ``y``. If an exception is\nraised during the iteration, it is as if ``in`` raised that exception.\n\nLastly, the old-style iteration protocol is tried: if a class defines\n``__getitem__()``, ``x in y`` is true if and only if there is a non-\nnegative integer index *i* such that ``x == y[i]``, and all lower\ninteger indices do not raise ``IndexError`` exception. (If any other\nexception is raised, it is as if ``in`` raised that exception).\n\nThe operator ``not in`` is defined to have the inverse true value of\n``in``.\n\nThe operators ``is`` and ``is not`` test for object identity: ``x is\ny`` is true if and only if *x* and *y* are the same object. ``x is\nnot y`` yields the inverse truth value. [7]\n', 'integers': u'\nInteger and long integer literals\n*********************************\n\nInteger and long integer literals are described by the following\nlexical definitions:\n\n longinteger ::= integer ("l" | "L")\n integer ::= decimalinteger | octinteger | hexinteger | bininteger\n decimalinteger ::= nonzerodigit digit* | "0"\n octinteger ::= "0" ("o" | "O") octdigit+ | "0" octdigit+\n hexinteger ::= "0" ("x" | "X") hexdigit+\n bininteger ::= "0" ("b" | "B") bindigit+\n nonzerodigit ::= "1"..."9"\n octdigit ::= "0"..."7"\n bindigit ::= "0" | "1"\n hexdigit ::= digit | "a"..."f" | "A"..."F"\n\nAlthough both lower case ``\'l\'`` and upper case ``\'L\'`` are allowed as\nsuffix for long integers, it is strongly recommended to always use\n``\'L\'``, since the letter ``\'l\'`` looks too much like the digit\n``\'1\'``.\n\nPlain integer literals that are above the largest representable plain\ninteger (e.g., 2147483647 when using 32-bit arithmetic) are accepted\nas if they were long integers instead. [1] There is no limit for long\ninteger literals apart from what can be stored in available memory.\n\nSome examples of plain integer literals (first row) and long integer\nliterals (second and third rows):\n\n 7 2147483647 0177\n 3L 79228162514264337593543950336L 0377L 0x100000000L\n 79228162514264337593543950336 0xdeadbeef\n', 'lambda': u'\nLambdas\n*******\n\n lambda_form ::= "lambda" [parameter_list]: expression\n old_lambda_form ::= "lambda" [parameter_list]: old_expression\n\nLambda forms (lambda expressions) have the same syntactic position as\nexpressions. 
They are a shorthand to create anonymous functions; the\nexpression ``lambda arguments: expression`` yields a function object.\nThe unnamed object behaves like a function object defined with\n\n def name(arguments):\n return expression\n\nSee section *Function definitions* for the syntax of parameter lists.\nNote that functions created with lambda forms cannot contain\nstatements.\n', 'lists': u'\nList displays\n*************\n\nA list display is a possibly empty series of expressions enclosed in\nsquare brackets:\n\n list_display ::= "[" [expression_list | list_comprehension] "]"\n list_comprehension ::= expression list_for\n list_for ::= "for" target_list "in" old_expression_list [list_iter]\n old_expression_list ::= old_expression [("," old_expression)+ [","]]\n old_expression ::= or_test | old_lambda_form\n list_iter ::= list_for | list_if\n list_if ::= "if" old_expression [list_iter]\n\nA list display yields a new list object. Its contents are specified\nby providing either a list of expressions or a list comprehension.\nWhen a comma-separated list of expressions is supplied, its elements\nare evaluated from left to right and placed into the list object in\nthat order. When a list comprehension is supplied, it consists of a\nsingle expression followed by at least one ``for`` clause and zero or\nmore ``for`` or ``if`` clauses. In this case, the elements of the new\nlist are those that would be produced by considering each of the\n``for`` or ``if`` clauses a block, nesting from left to right, and\nevaluating the expression to produce a list element each time the\ninnermost block is reached [1].\n', - 'naming': u"\nNaming and binding\n******************\n\n*Names* refer to objects. Names are introduced by name binding\noperations. Each occurrence of a name in the program text refers to\nthe *binding* of that name established in the innermost function block\ncontaining the use.\n\nA *block* is a piece of Python program text that is executed as a\nunit. The following are blocks: a module, a function body, and a class\ndefinition. Each command typed interactively is a block. A script\nfile (a file given as standard input to the interpreter or specified\non the interpreter command line the first argument) is a code block.\nA script command (a command specified on the interpreter command line\nwith the '**-c**' option) is a code block. The file read by the\nbuilt-in function ``execfile()`` is a code block. The string argument\npassed to the built-in function ``eval()`` and to the ``exec``\nstatement is a code block. The expression read and evaluated by the\nbuilt-in function ``input()`` is a code block.\n\nA code block is executed in an *execution frame*. A frame contains\nsome administrative information (used for debugging) and determines\nwhere and how execution continues after the code block's execution has\ncompleted.\n\nA *scope* defines the visibility of a name within a block. If a local\nvariable is defined in a block, its scope includes that block. If the\ndefinition occurs in a function block, the scope extends to any blocks\ncontained within the defining one, unless a contained block introduces\na different binding for the name. The scope of names defined in a\nclass block is limited to the class block; it does not extend to the\ncode blocks of methods -- this includes generator expressions since\nthey are implemented using a function scope. 
This means that the\nfollowing will fail:\n\n class A:\n a = 42\n b = list(a + i for i in range(10))\n\nWhen a name is used in a code block, it is resolved using the nearest\nenclosing scope. The set of all such scopes visible to a code block\nis called the block's *environment*.\n\nIf a name is bound in a block, it is a local variable of that block.\nIf a name is bound at the module level, it is a global variable. (The\nvariables of the module code block are local and global.) If a\nvariable is used in a code block but not defined there, it is a *free\nvariable*.\n\nWhen a name is not found at all, a ``NameError`` exception is raised.\nIf the name refers to a local variable that has not been bound, a\n``UnboundLocalError`` exception is raised. ``UnboundLocalError`` is a\nsubclass of ``NameError``.\n\nThe following constructs bind names: formal parameters to functions,\n``import`` statements, class and function definitions (these bind the\nclass or function name in the defining block), and targets that are\nidentifiers if occurring in an assignment, ``for`` loop header, in the\nsecond position of an ``except`` clause header or after ``as`` in a\n``with`` statement. The ``import`` statement of the form ``from ...\nimport *`` binds all names defined in the imported module, except\nthose beginning with an underscore. This form may only be used at the\nmodule level.\n\nA target occurring in a ``del`` statement is also considered bound for\nthis purpose (though the actual semantics are to unbind the name). It\nis illegal to unbind a name that is referenced by an enclosing scope;\nthe compiler will report a ``SyntaxError``.\n\nEach assignment or import statement occurs within a block defined by a\nclass or function definition or at the module level (the top-level\ncode block).\n\nIf a name binding operation occurs anywhere within a code block, all\nuses of the name within the block are treated as references to the\ncurrent block. This can lead to errors when a name is used within a\nblock before it is bound. This rule is subtle. Python lacks\ndeclarations and allows name binding operations to occur anywhere\nwithin a code block. The local variables of a code block can be\ndetermined by scanning the entire text of the block for name binding\noperations.\n\nIf the global statement occurs within a block, all uses of the name\nspecified in the statement refer to the binding of that name in the\ntop-level namespace. Names are resolved in the top-level namespace by\nsearching the global namespace, i.e. the namespace of the module\ncontaining the code block, and the builtins namespace, the namespace\nof the module ``__builtin__``. The global namespace is searched\nfirst. If the name is not found there, the builtins namespace is\nsearched. The global statement must precede all uses of the name.\n\nThe builtins namespace associated with the execution of a code block\nis actually found by looking up the name ``__builtins__`` in its\nglobal namespace; this should be a dictionary or a module (in the\nlatter case the module's dictionary is used). By default, when in the\n``__main__`` module, ``__builtins__`` is the built-in module\n``__builtin__`` (note: no 's'); when in any other module,\n``__builtins__`` is an alias for the dictionary of the ``__builtin__``\nmodule itself. ``__builtins__`` can be set to a user-created\ndictionary to create a weak form of restricted execution.\n\n**CPython implementation detail:** Users should not touch\n``__builtins__``; it is strictly an implementation detail. 
Users\nwanting to override values in the builtins namespace should ``import``\nthe ``__builtin__`` (no 's') module and modify its attributes\nappropriately.\n\nThe namespace for a module is automatically created the first time a\nmodule is imported. The main module for a script is always called\n``__main__``.\n\nThe global statement has the same scope as a name binding operation in\nthe same block. If the nearest enclosing scope for a free variable\ncontains a global statement, the free variable is treated as a global.\n\nA class definition is an executable statement that may use and define\nnames. These references follow the normal rules for name resolution.\nThe namespace of the class definition becomes the attribute dictionary\nof the class. Names defined at the class scope are not visible in\nmethods.\n\n\nInteraction with dynamic features\n=================================\n\nThere are several cases where Python statements are illegal when used\nin conjunction with nested scopes that contain free variables.\n\nIf a variable is referenced in an enclosing scope, it is illegal to\ndelete the name. An error will be reported at compile time.\n\nIf the wild card form of import --- ``import *`` --- is used in a\nfunction and the function contains or is a nested block with free\nvariables, the compiler will raise a ``SyntaxError``.\n\nIf ``exec`` is used in a function and the function contains or is a\nnested block with free variables, the compiler will raise a\n``SyntaxError`` unless the exec explicitly specifies the local\nnamespace for the ``exec``. (In other words, ``exec obj`` would be\nillegal, but ``exec obj in ns`` would be legal.)\n\nThe ``eval()``, ``execfile()``, and ``input()`` functions and the\n``exec`` statement do not have access to the full environment for\nresolving names. Names may be resolved in the local and global\nnamespaces of the caller. Free variables are not resolved in the\nnearest enclosing namespace, but in the global namespace. [1] The\n``exec`` statement and the ``eval()`` and ``execfile()`` functions\nhave optional arguments to override the global and local namespace.\nIf only one namespace is specified, it is used for both.\n", + 'naming': u"\nNaming and binding\n******************\n\n*Names* refer to objects. Names are introduced by name binding\noperations. Each occurrence of a name in the program text refers to\nthe *binding* of that name established in the innermost function block\ncontaining the use.\n\nA *block* is a piece of Python program text that is executed as a\nunit. The following are blocks: a module, a function body, and a class\ndefinition. Each command typed interactively is a block. A script\nfile (a file given as standard input to the interpreter or specified\non the interpreter command line the first argument) is a code block.\nA script command (a command specified on the interpreter command line\nwith the '**-c**' option) is a code block. The file read by the\nbuilt-in function ``execfile()`` is a code block. The string argument\npassed to the built-in function ``eval()`` and to the ``exec``\nstatement is a code block. The expression read and evaluated by the\nbuilt-in function ``input()`` is a code block.\n\nA code block is executed in an *execution frame*. A frame contains\nsome administrative information (used for debugging) and determines\nwhere and how execution continues after the code block's execution has\ncompleted.\n\nA *scope* defines the visibility of a name within a block. 
If a local\nvariable is defined in a block, its scope includes that block. If the\ndefinition occurs in a function block, the scope extends to any blocks\ncontained within the defining one, unless a contained block introduces\na different binding for the name. The scope of names defined in a\nclass block is limited to the class block; it does not extend to the\ncode blocks of methods -- this includes generator expressions since\nthey are implemented using a function scope. This means that the\nfollowing will fail:\n\n class A:\n a = 42\n b = list(a + i for i in range(10))\n\nWhen a name is used in a code block, it is resolved using the nearest\nenclosing scope. The set of all such scopes visible to a code block\nis called the block's *environment*.\n\nIf a name is bound in a block, it is a local variable of that block.\nIf a name is bound at the module level, it is a global variable. (The\nvariables of the module code block are local and global.) If a\nvariable is used in a code block but not defined there, it is a *free\nvariable*.\n\nWhen a name is not found at all, a ``NameError`` exception is raised.\nIf the name refers to a local variable that has not been bound, a\n``UnboundLocalError`` exception is raised. ``UnboundLocalError`` is a\nsubclass of ``NameError``.\n\nThe following constructs bind names: formal parameters to functions,\n``import`` statements, class and function definitions (these bind the\nclass or function name in the defining block), and targets that are\nidentifiers if occurring in an assignment, ``for`` loop header, in the\nsecond position of an ``except`` clause header or after ``as`` in a\n``with`` statement. The ``import`` statement of the form ``from ...\nimport *`` binds all names defined in the imported module, except\nthose beginning with an underscore. This form may only be used at the\nmodule level.\n\nA target occurring in a ``del`` statement is also considered bound for\nthis purpose (though the actual semantics are to unbind the name). It\nis illegal to unbind a name that is referenced by an enclosing scope;\nthe compiler will report a ``SyntaxError``.\n\nEach assignment or import statement occurs within a block defined by a\nclass or function definition or at the module level (the top-level\ncode block).\n\nIf a name binding operation occurs anywhere within a code block, all\nuses of the name within the block are treated as references to the\ncurrent block. This can lead to errors when a name is used within a\nblock before it is bound. This rule is subtle. Python lacks\ndeclarations and allows name binding operations to occur anywhere\nwithin a code block. The local variables of a code block can be\ndetermined by scanning the entire text of the block for name binding\noperations.\n\nIf the global statement occurs within a block, all uses of the name\nspecified in the statement refer to the binding of that name in the\ntop-level namespace. Names are resolved in the top-level namespace by\nsearching the global namespace, i.e. the namespace of the module\ncontaining the code block, and the builtins namespace, the namespace\nof the module ``__builtin__``. The global namespace is searched\nfirst. If the name is not found there, the builtins namespace is\nsearched. 
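An illustrative sketch of the ``global`` statement and the module-level binding it refers to; the names ``counter`` and ``bump`` are invented:

    counter = 0

    def bump():
        global counter      # uses of 'counter' in this block refer to the module level
        counter += 1

    bump()
    print counter           # 1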
The global statement must precede all uses of the name.\n\nThe builtins namespace associated with the execution of a code block\nis actually found by looking up the name ``__builtins__`` in its\nglobal namespace; this should be a dictionary or a module (in the\nlatter case the module's dictionary is used). By default, when in the\n``__main__`` module, ``__builtins__`` is the built-in module\n``__builtin__`` (note: no 's'); when in any other module,\n``__builtins__`` is an alias for the dictionary of the ``__builtin__``\nmodule itself. ``__builtins__`` can be set to a user-created\ndictionary to create a weak form of restricted execution.\n\n**CPython implementation detail:** Users should not touch\n``__builtins__``; it is strictly an implementation detail. Users\nwanting to override values in the builtins namespace should ``import``\nthe ``__builtin__`` (no 's') module and modify its attributes\nappropriately.\n\nThe namespace for a module is automatically created the first time a\nmodule is imported. The main module for a script is always called\n``__main__``.\n\nThe ``global`` statement has the same scope as a name binding\noperation in the same block. If the nearest enclosing scope for a\nfree variable contains a global statement, the free variable is\ntreated as a global.\n\nA class definition is an executable statement that may use and define\nnames. These references follow the normal rules for name resolution.\nThe namespace of the class definition becomes the attribute dictionary\nof the class. Names defined at the class scope are not visible in\nmethods.\n\n\nInteraction with dynamic features\n=================================\n\nThere are several cases where Python statements are illegal when used\nin conjunction with nested scopes that contain free variables.\n\nIf a variable is referenced in an enclosing scope, it is illegal to\ndelete the name. An error will be reported at compile time.\n\nIf the wild card form of import --- ``import *`` --- is used in a\nfunction and the function contains or is a nested block with free\nvariables, the compiler will raise a ``SyntaxError``.\n\nIf ``exec`` is used in a function and the function contains or is a\nnested block with free variables, the compiler will raise a\n``SyntaxError`` unless the exec explicitly specifies the local\nnamespace for the ``exec``. (In other words, ``exec obj`` would be\nillegal, but ``exec obj in ns`` would be legal.)\n\nThe ``eval()``, ``execfile()``, and ``input()`` functions and the\n``exec`` statement do not have access to the full environment for\nresolving names. Names may be resolved in the local and global\nnamespaces of the caller. Free variables are not resolved in the\nnearest enclosing namespace, but in the global namespace. [1] The\n``exec`` statement and the ``eval()`` and ``execfile()`` functions\nhave optional arguments to override the global and local namespace.\nIf only one namespace is specified, it is used for both.\n", 'numbers': u"\nNumeric literals\n****************\n\nThere are four types of numeric literals: plain integers, long\nintegers, floating point numbers, and imaginary numbers. 
There are no\ncomplex literals (complex numbers can be formed by adding a real\nnumber and an imaginary number).\n\nNote that numeric literals do not include a sign; a phrase like ``-1``\nis actually an expression composed of the unary operator '``-``' and\nthe literal ``1``.\n", 'numeric-types': u'\nEmulating numeric types\n***********************\n\nThe following methods can be defined to emulate numeric objects.\nMethods corresponding to operations that are not supported by the\nparticular kind of number implemented (e.g., bitwise operations for\nnon-integral numbers) should be left undefined.\n\nobject.__add__(self, other)\nobject.__sub__(self, other)\nobject.__mul__(self, other)\nobject.__floordiv__(self, other)\nobject.__mod__(self, other)\nobject.__divmod__(self, other)\nobject.__pow__(self, other[, modulo])\nobject.__lshift__(self, other)\nobject.__rshift__(self, other)\nobject.__and__(self, other)\nobject.__xor__(self, other)\nobject.__or__(self, other)\n\n These methods are called to implement the binary arithmetic\n operations (``+``, ``-``, ``*``, ``//``, ``%``, ``divmod()``,\n ``pow()``, ``**``, ``<<``, ``>>``, ``&``, ``^``, ``|``). For\n instance, to evaluate the expression ``x + y``, where *x* is an\n instance of a class that has an ``__add__()`` method,\n ``x.__add__(y)`` is called. The ``__divmod__()`` method should be\n the equivalent to using ``__floordiv__()`` and ``__mod__()``; it\n should not be related to ``__truediv__()`` (described below). Note\n that ``__pow__()`` should be defined to accept an optional third\n argument if the ternary version of the built-in ``pow()`` function\n is to be supported.\n\n If one of those methods does not support the operation with the\n supplied arguments, it should return ``NotImplemented``.\n\nobject.__div__(self, other)\nobject.__truediv__(self, other)\n\n The division operator (``/``) is implemented by these methods. The\n ``__truediv__()`` method is used when ``__future__.division`` is in\n effect, otherwise ``__div__()`` is used. If only one of these two\n methods is defined, the object will not support division in the\n alternate context; ``TypeError`` will be raised instead.\n\nobject.__radd__(self, other)\nobject.__rsub__(self, other)\nobject.__rmul__(self, other)\nobject.__rdiv__(self, other)\nobject.__rtruediv__(self, other)\nobject.__rfloordiv__(self, other)\nobject.__rmod__(self, other)\nobject.__rdivmod__(self, other)\nobject.__rpow__(self, other)\nobject.__rlshift__(self, other)\nobject.__rrshift__(self, other)\nobject.__rand__(self, other)\nobject.__rxor__(self, other)\nobject.__ror__(self, other)\n\n These methods are called to implement the binary arithmetic\n operations (``+``, ``-``, ``*``, ``/``, ``%``, ``divmod()``,\n ``pow()``, ``**``, ``<<``, ``>>``, ``&``, ``^``, ``|``) with\n reflected (swapped) operands. These functions are only called if\n the left operand does not support the corresponding operation and\n the operands are of different types. [2] For instance, to evaluate\n the expression ``x - y``, where *y* is an instance of a class that\n has an ``__rsub__()`` method, ``y.__rsub__(x)`` is called if\n ``x.__sub__(y)`` returns *NotImplemented*.\n\n Note that ternary ``pow()`` will not try calling ``__rpow__()``\n (the coercion rules would become too complicated).\n\n Note: If the right operand\'s type is a subclass of the left operand\'s\n type and that subclass provides the reflected method for the\n operation, this method will be called before the left operand\'s\n non-reflected method. 
This behavior allows subclasses to\n override their ancestors\' operations.\n\nobject.__iadd__(self, other)\nobject.__isub__(self, other)\nobject.__imul__(self, other)\nobject.__idiv__(self, other)\nobject.__itruediv__(self, other)\nobject.__ifloordiv__(self, other)\nobject.__imod__(self, other)\nobject.__ipow__(self, other[, modulo])\nobject.__ilshift__(self, other)\nobject.__irshift__(self, other)\nobject.__iand__(self, other)\nobject.__ixor__(self, other)\nobject.__ior__(self, other)\n\n These methods are called to implement the augmented arithmetic\n assignments (``+=``, ``-=``, ``*=``, ``/=``, ``//=``, ``%=``,\n ``**=``, ``<<=``, ``>>=``, ``&=``, ``^=``, ``|=``). These methods\n should attempt to do the operation in-place (modifying *self*) and\n return the result (which could be, but does not have to be,\n *self*). If a specific method is not defined, the augmented\n assignment falls back to the normal methods. For instance, to\n execute the statement ``x += y``, where *x* is an instance of a\n class that has an ``__iadd__()`` method, ``x.__iadd__(y)`` is\n called. If *x* is an instance of a class that does not define a\n ``__iadd__()`` method, ``x.__add__(y)`` and ``y.__radd__(x)`` are\n considered, as with the evaluation of ``x + y``.\n\nobject.__neg__(self)\nobject.__pos__(self)\nobject.__abs__(self)\nobject.__invert__(self)\n\n Called to implement the unary arithmetic operations (``-``, ``+``,\n ``abs()`` and ``~``).\n\nobject.__complex__(self)\nobject.__int__(self)\nobject.__long__(self)\nobject.__float__(self)\n\n Called to implement the built-in functions ``complex()``,\n ``int()``, ``long()``, and ``float()``. Should return a value of\n the appropriate type.\n\nobject.__oct__(self)\nobject.__hex__(self)\n\n Called to implement the built-in functions ``oct()`` and ``hex()``.\n Should return a string value.\n\nobject.__index__(self)\n\n Called to implement ``operator.index()``. Also called whenever\n Python needs an integer object (such as in slicing). Must return\n an integer (int or long).\n\n New in version 2.5.\n\nobject.__coerce__(self, other)\n\n Called to implement "mixed-mode" numeric arithmetic. Should either\n return a 2-tuple containing *self* and *other* converted to a\n common numeric type, or ``None`` if conversion is impossible. When\n the common type would be the type of ``other``, it is sufficient to\n return ``None``, since the interpreter will also ask the other\n object to attempt a coercion (but sometimes, if the implementation\n of the other type cannot be changed, it is useful to do the\n conversion to the other type here). A return value of\n ``NotImplemented`` is equivalent to returning ``None``.\n', - 'objects': u'\nObjects, values and types\n*************************\n\n*Objects* are Python\'s abstraction for data. All data in a Python\nprogram is represented by objects or by relations between objects. (In\na sense, and in conformance to Von Neumann\'s model of a "stored\nprogram computer," code is also represented by objects.)\n\nEvery object has an identity, a type and a value. An object\'s\n*identity* never changes once it has been created; you may think of it\nas the object\'s address in memory. The \'``is``\' operator compares the\nidentity of two objects; the ``id()`` function returns an integer\nrepresenting its identity (currently implemented as its address). An\nobject\'s *type* is also unchangeable. 
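To make the binary and reflected arithmetic methods above concrete, here is a minimal sketch of a class supporting ``+`` from either side; the class name ``Metres`` is invented:

    class Metres(object):
        def __init__(self, value):
            self.value = value

        def __add__(self, other):
            if isinstance(other, Metres):
                return Metres(self.value + other.value)
            if isinstance(other, (int, float)):
                return Metres(self.value + other)
            return NotImplemented          # let the other operand's reflected method try

        __radd__ = __add__                 # reflected form, used for '2 + Metres(3)'

        def __repr__(self):
            return 'Metres(%r)' % self.value

    print Metres(3) + 2                    # Metres(5)
    print 2 + Metres(3)                    # Metres(5), via __radd__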
[1] An object\'s type determines\nthe operations that the object supports (e.g., "does it have a\nlength?") and also defines the possible values for objects of that\ntype. The ``type()`` function returns an object\'s type (which is an\nobject itself). The *value* of some objects can change. Objects\nwhose value can change are said to be *mutable*; objects whose value\nis unchangeable once they are created are called *immutable*. (The\nvalue of an immutable container object that contains a reference to a\nmutable object can change when the latter\'s value is changed; however\nthe container is still considered immutable, because the collection of\nobjects it contains cannot be changed. So, immutability is not\nstrictly the same as having an unchangeable value, it is more subtle.)\nAn object\'s mutability is determined by its type; for instance,\nnumbers, strings and tuples are immutable, while dictionaries and\nlists are mutable.\n\nObjects are never explicitly destroyed; however, when they become\nunreachable they may be garbage-collected. An implementation is\nallowed to postpone garbage collection or omit it altogether --- it is\na matter of implementation quality how garbage collection is\nimplemented, as long as no objects are collected that are still\nreachable.\n\n**CPython implementation detail:** CPython currently uses a reference-\ncounting scheme with (optional) delayed detection of cyclically linked\ngarbage, which collects most objects as soon as they become\nunreachable, but is not guaranteed to collect garbage containing\ncircular references. See the documentation of the ``gc`` module for\ninformation on controlling the collection of cyclic garbage. Other\nimplementations act differently and CPython may change.\n\nNote that the use of the implementation\'s tracing or debugging\nfacilities may keep objects alive that would normally be collectable.\nAlso note that catching an exception with a \'``try``...``except``\'\nstatement may keep objects alive.\n\nSome objects contain references to "external" resources such as open\nfiles or windows. It is understood that these resources are freed\nwhen the object is garbage-collected, but since garbage collection is\nnot guaranteed to happen, such objects also provide an explicit way to\nrelease the external resource, usually a ``close()`` method. Programs\nare strongly recommended to explicitly close such objects. The\n\'``try``...``finally``\' statement provides a convenient way to do\nthis.\n\nSome objects contain references to other objects; these are called\n*containers*. Examples of containers are tuples, lists and\ndictionaries. The references are part of a container\'s value. In\nmost cases, when we talk about the value of a container, we imply the\nvalues, not the identities of the contained objects; however, when we\ntalk about the mutability of a container, only the identities of the\nimmediately contained objects are implied. So, if an immutable\ncontainer (like a tuple) contains a reference to a mutable object, its\nvalue changes if that mutable object is changed.\n\nTypes affect almost all aspects of object behavior. Even the\nimportance of object identity is affected in some sense: for immutable\ntypes, operations that compute new values may actually return a\nreference to any existing object with the same type and value, while\nfor mutable objects this is not allowed. 
E.g., after ``a = 1; b =\n1``, ``a`` and ``b`` may or may not refer to the same object with the\nvalue one, depending on the implementation, but after ``c = []; d =\n[]``, ``c`` and ``d`` are guaranteed to refer to two different,\nunique, newly created empty lists. (Note that ``c = d = []`` assigns\nthe same object to both ``c`` and ``d``.)\n', - 'operator-summary': u'\nSummary\n*******\n\nThe following table summarizes the operator precedences in Python,\nfrom lowest precedence (least binding) to highest precedence (most\nbinding). Operators in the same box have the same precedence. Unless\nthe syntax is explicitly given, operators are binary. Operators in\nthe same box group left to right (except for comparisons, including\ntests, which all have the same precedence and chain from left to right\n--- see section *Comparisons* --- and exponentiation, which groups\nfrom right to left).\n\n+-------------------------------------------------+---------------------------------------+\n| Operator | Description |\n+=================================================+=======================================+\n| ``lambda`` | Lambda expression |\n+-------------------------------------------------+---------------------------------------+\n| ``if`` -- ``else`` | Conditional expression |\n+-------------------------------------------------+---------------------------------------+\n| ``or`` | Boolean OR |\n+-------------------------------------------------+---------------------------------------+\n| ``and`` | Boolean AND |\n+-------------------------------------------------+---------------------------------------+\n| ``not`` *x* | Boolean NOT |\n+-------------------------------------------------+---------------------------------------+\n| ``in``, ``not`` ``in``, ``is``, ``is not``, | Comparisons, including membership |\n| ``<``, ``<=``, ``>``, ``>=``, ``<>``, ``!=``, | tests and identity tests, |\n| ``==`` | |\n+-------------------------------------------------+---------------------------------------+\n| ``|`` | Bitwise OR |\n+-------------------------------------------------+---------------------------------------+\n| ``^`` | Bitwise XOR |\n+-------------------------------------------------+---------------------------------------+\n| ``&`` | Bitwise AND |\n+-------------------------------------------------+---------------------------------------+\n| ``<<``, ``>>`` | Shifts |\n+-------------------------------------------------+---------------------------------------+\n| ``+``, ``-`` | Addition and subtraction |\n+-------------------------------------------------+---------------------------------------+\n| ``*``, ``/``, ``//``, ``%`` | Multiplication, division, remainder |\n+-------------------------------------------------+---------------------------------------+\n| ``+x``, ``-x``, ``~x`` | Positive, negative, bitwise NOT |\n+-------------------------------------------------+---------------------------------------+\n| ``**`` | Exponentiation [8] |\n+-------------------------------------------------+---------------------------------------+\n| ``x[index]``, ``x[index:index]``, | Subscription, slicing, call, |\n| ``x(arguments...)``, ``x.attribute`` | attribute reference |\n+-------------------------------------------------+---------------------------------------+\n| ``(expressions...)``, ``[expressions...]``, | Binding or tuple display, list |\n| ``{key:datum...}``, ```expressions...``` | display, dictionary display, string |\n| | conversion 
|\n+-------------------------------------------------+---------------------------------------+\n\n-[ Footnotes ]-\n\n[1] In Python 2.3 and later releases, a list comprehension "leaks" the\n control variables of each ``for`` it contains into the containing\n scope. However, this behavior is deprecated, and relying on it\n will not work in Python 3.0\n\n[2] While ``abs(x%y) < abs(y)`` is true mathematically, for floats it\n may not be true numerically due to roundoff. For example, and\n assuming a platform on which a Python float is an IEEE 754 double-\n precision number, in order that ``-1e-100 % 1e100`` have the same\n sign as ``1e100``, the computed result is ``-1e-100 + 1e100``,\n which is numerically exactly equal to ``1e100``. Function\n ``fmod()`` in the ``math`` module returns a result whose sign\n matches the sign of the first argument instead, and so returns\n ``-1e-100`` in this case. Which approach is more appropriate\n depends on the application.\n\n[3] If x is very close to an exact integer multiple of y, it\'s\n possible for ``floor(x/y)`` to be one larger than ``(x-x%y)/y``\n due to rounding. In such cases, Python returns the latter result,\n in order to preserve that ``divmod(x,y)[0] * y + x % y`` be very\n close to ``x``.\n\n[4] While comparisons between unicode strings make sense at the byte\n level, they may be counter-intuitive to users. For example, the\n strings ``u"\\u00C7"`` and ``u"\\u0043\\u0327"`` compare differently,\n even though they both represent the same unicode character (LATIN\n CAPITAL LETTER C WITH CEDILLA). To compare strings in a human\n recognizable way, compare using ``unicodedata.normalize()``.\n\n[5] The implementation computes this efficiently, without constructing\n lists or sorting.\n\n[6] Earlier versions of Python used lexicographic comparison of the\n sorted (key, value) lists, but this was very expensive for the\n common case of comparing for equality. An even earlier version of\n Python compared dictionaries by identity only, but this caused\n surprises because people expected to be able to test a dictionary\n for emptiness by comparing it to ``{}``.\n\n[7] Due to automatic garbage-collection, free lists, and the dynamic\n nature of descriptors, you may notice seemingly unusual behaviour\n in certain uses of the ``is`` operator, like those involving\n comparisons between instance methods, or constants. Check their\n documentation for more info.\n\n[8] The power operator ``**`` binds less tightly than an arithmetic or\n bitwise unary operator on its right, that is, ``2**-1`` is\n ``0.5``.\n', + 'objects': u'\nObjects, values and types\n*************************\n\n*Objects* are Python\'s abstraction for data. All data in a Python\nprogram is represented by objects or by relations between objects. (In\na sense, and in conformance to Von Neumann\'s model of a "stored\nprogram computer," code is also represented by objects.)\n\nEvery object has an identity, a type and a value. An object\'s\n*identity* never changes once it has been created; you may think of it\nas the object\'s address in memory. The \'``is``\' operator compares the\nidentity of two objects; the ``id()`` function returns an integer\nrepresenting its identity (currently implemented as its address). An\nobject\'s *type* is also unchangeable. [1] An object\'s type determines\nthe operations that the object supports (e.g., "does it have a\nlength?") and also defines the possible values for objects of that\ntype. 
The ``type()`` function returns an object\'s type (which is an\nobject itself). The *value* of some objects can change. Objects\nwhose value can change are said to be *mutable*; objects whose value\nis unchangeable once they are created are called *immutable*. (The\nvalue of an immutable container object that contains a reference to a\nmutable object can change when the latter\'s value is changed; however\nthe container is still considered immutable, because the collection of\nobjects it contains cannot be changed. So, immutability is not\nstrictly the same as having an unchangeable value, it is more subtle.)\nAn object\'s mutability is determined by its type; for instance,\nnumbers, strings and tuples are immutable, while dictionaries and\nlists are mutable.\n\nObjects are never explicitly destroyed; however, when they become\nunreachable they may be garbage-collected. An implementation is\nallowed to postpone garbage collection or omit it altogether --- it is\na matter of implementation quality how garbage collection is\nimplemented, as long as no objects are collected that are still\nreachable.\n\n**CPython implementation detail:** CPython currently uses a reference-\ncounting scheme with (optional) delayed detection of cyclically linked\ngarbage, which collects most objects as soon as they become\nunreachable, but is not guaranteed to collect garbage containing\ncircular references. See the documentation of the ``gc`` module for\ninformation on controlling the collection of cyclic garbage. Other\nimplementations act differently and CPython may change. Do not depend\non immediate finalization of objects when they become unreachable (ex:\nalways close files).\n\nNote that the use of the implementation\'s tracing or debugging\nfacilities may keep objects alive that would normally be collectable.\nAlso note that catching an exception with a \'``try``...``except``\'\nstatement may keep objects alive.\n\nSome objects contain references to "external" resources such as open\nfiles or windows. It is understood that these resources are freed\nwhen the object is garbage-collected, but since garbage collection is\nnot guaranteed to happen, such objects also provide an explicit way to\nrelease the external resource, usually a ``close()`` method. Programs\nare strongly recommended to explicitly close such objects. The\n\'``try``...``finally``\' statement provides a convenient way to do\nthis.\n\nSome objects contain references to other objects; these are called\n*containers*. Examples of containers are tuples, lists and\ndictionaries. The references are part of a container\'s value. In\nmost cases, when we talk about the value of a container, we imply the\nvalues, not the identities of the contained objects; however, when we\ntalk about the mutability of a container, only the identities of the\nimmediately contained objects are implied. So, if an immutable\ncontainer (like a tuple) contains a reference to a mutable object, its\nvalue changes if that mutable object is changed.\n\nTypes affect almost all aspects of object behavior. Even the\nimportance of object identity is affected in some sense: for immutable\ntypes, operations that compute new values may actually return a\nreference to any existing object with the same type and value, while\nfor mutable objects this is not allowed. 
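To make the identity and mutability rules of the 'objects' entry above concrete, here is a minimal sketch; the variable names are invented for illustration and are not part of the changeset:

    b = [1, 2]              # lists are mutable
    c = b                   # 'c' names the same object
    assert c is b and id(c) == id(b)
    b.append(3)             # a change made through one name ...
    assert c == [1, 2, 3]   # ... is visible through the other
    t = (1, 2)              # tuples are immutable
    try:
        t[0] = 99
    except TypeError:
        pass                # item assignment on a tuple is rejected
    d = []; e = []
    assert d is not e       # two empty-list displays always create distinct objects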
E.g., after ``a = 1; b =\n1``, ``a`` and ``b`` may or may not refer to the same object with the\nvalue one, depending on the implementation, but after ``c = []; d =\n[]``, ``c`` and ``d`` are guaranteed to refer to two different,\nunique, newly created empty lists. (Note that ``c = d = []`` assigns\nthe same object to both ``c`` and ``d``.)\n', + 'operator-summary': u'\nSummary\n*******\n\nThe following table summarizes the operator precedences in Python,\nfrom lowest precedence (least binding) to highest precedence (most\nbinding). Operators in the same box have the same precedence. Unless\nthe syntax is explicitly given, operators are binary. Operators in\nthe same box group left to right (except for comparisons, including\ntests, which all have the same precedence and chain from left to right\n--- see section *Comparisons* --- and exponentiation, which groups\nfrom right to left).\n\n+-------------------------------------------------+---------------------------------------+\n| Operator | Description |\n+=================================================+=======================================+\n| ``lambda`` | Lambda expression |\n+-------------------------------------------------+---------------------------------------+\n| ``if`` -- ``else`` | Conditional expression |\n+-------------------------------------------------+---------------------------------------+\n| ``or`` | Boolean OR |\n+-------------------------------------------------+---------------------------------------+\n| ``and`` | Boolean AND |\n+-------------------------------------------------+---------------------------------------+\n| ``not`` *x* | Boolean NOT |\n+-------------------------------------------------+---------------------------------------+\n| ``in``, ``not`` ``in``, ``is``, ``is not``, | Comparisons, including membership |\n| ``<``, ``<=``, ``>``, ``>=``, ``<>``, ``!=``, | tests and identity tests, |\n| ``==`` | |\n+-------------------------------------------------+---------------------------------------+\n| ``|`` | Bitwise OR |\n+-------------------------------------------------+---------------------------------------+\n| ``^`` | Bitwise XOR |\n+-------------------------------------------------+---------------------------------------+\n| ``&`` | Bitwise AND |\n+-------------------------------------------------+---------------------------------------+\n| ``<<``, ``>>`` | Shifts |\n+-------------------------------------------------+---------------------------------------+\n| ``+``, ``-`` | Addition and subtraction |\n+-------------------------------------------------+---------------------------------------+\n| ``*``, ``/``, ``//``, ``%`` | Multiplication, division, remainder |\n| | [8] |\n+-------------------------------------------------+---------------------------------------+\n| ``+x``, ``-x``, ``~x`` | Positive, negative, bitwise NOT |\n+-------------------------------------------------+---------------------------------------+\n| ``**`` | Exponentiation [9] |\n+-------------------------------------------------+---------------------------------------+\n| ``x[index]``, ``x[index:index]``, | Subscription, slicing, call, |\n| ``x(arguments...)``, ``x.attribute`` | attribute reference |\n+-------------------------------------------------+---------------------------------------+\n| ``(expressions...)``, ``[expressions...]``, | Binding or tuple display, list |\n| ``{key:datum...}``, ```expressions...``` | display, dictionary display, string |\n| | conversion 
|\n+-------------------------------------------------+---------------------------------------+\n\n-[ Footnotes ]-\n\n[1] In Python 2.3 and later releases, a list comprehension "leaks" the\n control variables of each ``for`` it contains into the containing\n scope. However, this behavior is deprecated, and relying on it\n will not work in Python 3.0\n\n[2] While ``abs(x%y) < abs(y)`` is true mathematically, for floats it\n may not be true numerically due to roundoff. For example, and\n assuming a platform on which a Python float is an IEEE 754 double-\n precision number, in order that ``-1e-100 % 1e100`` have the same\n sign as ``1e100``, the computed result is ``-1e-100 + 1e100``,\n which is numerically exactly equal to ``1e100``. The function\n ``math.fmod()`` returns a result whose sign matches the sign of\n the first argument instead, and so returns ``-1e-100`` in this\n case. Which approach is more appropriate depends on the\n application.\n\n[3] If x is very close to an exact integer multiple of y, it\'s\n possible for ``floor(x/y)`` to be one larger than ``(x-x%y)/y``\n due to rounding. In such cases, Python returns the latter result,\n in order to preserve that ``divmod(x,y)[0] * y + x % y`` be very\n close to ``x``.\n\n[4] While comparisons between unicode strings make sense at the byte\n level, they may be counter-intuitive to users. For example, the\n strings ``u"\\u00C7"`` and ``u"\\u0043\\u0327"`` compare differently,\n even though they both represent the same unicode character (LATIN\n CAPITAL LETTER C WITH CEDILLA). To compare strings in a human\n recognizable way, compare using ``unicodedata.normalize()``.\n\n[5] The implementation computes this efficiently, without constructing\n lists or sorting.\n\n[6] Earlier versions of Python used lexicographic comparison of the\n sorted (key, value) lists, but this was very expensive for the\n common case of comparing for equality. An even earlier version of\n Python compared dictionaries by identity only, but this caused\n surprises because people expected to be able to test a dictionary\n for emptiness by comparing it to ``{}``.\n\n[7] Due to automatic garbage-collection, free lists, and the dynamic\n nature of descriptors, you may notice seemingly unusual behaviour\n in certain uses of the ``is`` operator, like those involving\n comparisons between instance methods, or constants. Check their\n documentation for more info.\n\n[8] The ``%`` operator is also used for string formatting; the same\n precedence applies.\n\n[9] The power operator ``**`` binds less tightly than an arithmetic or\n bitwise unary operator on its right, that is, ``2**-1`` is\n ``0.5``.\n', 'pass': u'\nThe ``pass`` statement\n**********************\n\n pass_stmt ::= "pass"\n\n``pass`` is a null operation --- when it is executed, nothing happens.\nIt is useful as a placeholder when a statement is required\nsyntactically, but no code needs to be executed, for example:\n\n def f(arg): pass # a function that does nothing (yet)\n\n class C: pass # a class with no methods (yet)\n', 'power': u'\nThe power operator\n******************\n\nThe power operator binds more tightly than unary operators on its\nleft; it binds less tightly than unary operators on its right. 
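A few small checks illustrating the precedence table and footnotes above; these are ordinary interactive-style snippets, not taken from the changeset:

    # Comparisons bind tighter than 'not', 'and' and 'or':
    assert (not 1 == 2) == (not (1 == 2))
    # Comparisons chain from left to right:
    assert (1 < 2 < 3) == ((1 < 2) and (2 < 3))
    # Unary minus binds tighter than '*', but looser than '**':
    assert -2 * 3 == -(2 * 3)
    assert -2 ** 2 == -(2 ** 2) == -4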
The\nsyntax is:\n\n power ::= primary ["**" u_expr]\n\nThus, in an unparenthesized sequence of power and unary operators, the\noperators are evaluated from right to left (this does not constrain\nthe evaluation order for the operands): ``-1**2`` results in ``-1``.\n\nThe power operator has the same semantics as the built-in ``pow()``\nfunction, when called with two arguments: it yields its left argument\nraised to the power of its right argument. The numeric arguments are\nfirst converted to a common type. The result type is that of the\narguments after coercion.\n\nWith mixed operand types, the coercion rules for binary arithmetic\noperators apply. For int and long int operands, the result has the\nsame type as the operands (after coercion) unless the second argument\nis negative; in that case, all arguments are converted to float and a\nfloat result is delivered. For example, ``10**2`` returns ``100``, but\n``10**-2`` returns ``0.01``. (This last feature was added in Python\n2.2. In Python 2.1 and before, if both arguments were of integer types\nand the second argument was negative, an exception was raised).\n\nRaising ``0.0`` to a negative power results in a\n``ZeroDivisionError``. Raising a negative number to a fractional power\nresults in a ``ValueError``.\n', 'print': u'\nThe ``print`` statement\n***********************\n\n print_stmt ::= "print" ([expression ("," expression)* [","]]\n | ">>" expression [("," expression)+ [","]])\n\n``print`` evaluates each expression in turn and writes the resulting\nobject to standard output (see below). If an object is not a string,\nit is first converted to a string using the rules for string\nconversions. The (resulting or original) string is then written. A\nspace is written before each object is (converted and) written, unless\nthe output system believes it is positioned at the beginning of a\nline. This is the case (1) when no characters have yet been written\nto standard output, (2) when the last character written to standard\noutput is a whitespace character except ``\' \'``, or (3) when the last\nwrite operation on standard output was not a ``print`` statement. (In\nsome cases it may be functional to write an empty string to standard\noutput for this reason.)\n\nNote: Objects which act like file objects but which are not the built-in\n file objects often do not properly emulate this aspect of the file\n object\'s behavior, so it is best not to rely on this.\n\nA ``\'\\n\'`` character is written at the end, unless the ``print``\nstatement ends with a comma. This is the only action if the statement\ncontains just the keyword ``print``.\n\nStandard output is defined as the file object named ``stdout`` in the\nbuilt-in module ``sys``. If no such object exists, or if it does not\nhave a ``write()`` method, a ``RuntimeError`` exception is raised.\n\n``print`` also has an extended form, defined by the second portion of\nthe syntax described above. This form is sometimes referred to as\n"``print`` chevron." In this form, the first expression after the\n``>>`` must evaluate to a "file-like" object, specifically an object\nthat has a ``write()`` method as described above. With this extended\nform, the subsequent expressions are printed to this file object. 
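The right-to-left grouping and the integer/float result rules of the power operator described above can be checked directly; this is a minimal sketch, not part of the changeset:

    assert 2 ** 3 ** 2 == 2 ** (3 ** 2) == 512   # ** groups right to left
    assert -1 ** 2 == -1                         # unary minus is applied last
    assert 10 ** 2 == 100                        # int ** non-negative int -> int
    assert 10 ** -2 == 0.01                      # negative exponent -> float result
    try:
        0.0 ** -1
    except ZeroDivisionError:
        pass                                     # 0.0 raised to a negative power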
If\nthe first expression evaluates to ``None``, then ``sys.stdout`` is\nused as the file for output.\n', @@ -63,21 +63,21 @@ 'shifting': u'\nShifting operations\n*******************\n\nThe shifting operations have lower priority than the arithmetic\noperations:\n\n shift_expr ::= a_expr | shift_expr ( "<<" | ">>" ) a_expr\n\nThese operators accept plain or long integers as arguments. The\narguments are converted to a common type. They shift the first\nargument to the left or right by the number of bits given by the\nsecond argument.\n\nA right shift by *n* bits is defined as division by ``pow(2, n)``. A\nleft shift by *n* bits is defined as multiplication with ``pow(2,\nn)``. Negative shift counts raise a ``ValueError`` exception.\n\nNote: In the current implementation, the right-hand operand is required to\n be at most ``sys.maxsize``. If the right-hand operand is larger\n than ``sys.maxsize`` an ``OverflowError`` exception is raised.\n', 'slicings': u'\nSlicings\n********\n\nA slicing selects a range of items in a sequence object (e.g., a\nstring, tuple or list). Slicings may be used as expressions or as\ntargets in assignment or ``del`` statements. The syntax for a\nslicing:\n\n slicing ::= simple_slicing | extended_slicing\n simple_slicing ::= primary "[" short_slice "]"\n extended_slicing ::= primary "[" slice_list "]"\n slice_list ::= slice_item ("," slice_item)* [","]\n slice_item ::= expression | proper_slice | ellipsis\n proper_slice ::= short_slice | long_slice\n short_slice ::= [lower_bound] ":" [upper_bound]\n long_slice ::= short_slice ":" [stride]\n lower_bound ::= expression\n upper_bound ::= expression\n stride ::= expression\n ellipsis ::= "..."\n\nThere is ambiguity in the formal syntax here: anything that looks like\nan expression list also looks like a slice list, so any subscription\ncan be interpreted as a slicing. Rather than further complicating the\nsyntax, this is disambiguated by defining that in this case the\ninterpretation as a subscription takes priority over the\ninterpretation as a slicing (this is the case if the slice list\ncontains no proper slice nor ellipses). Similarly, when the slice\nlist has exactly one short slice and no trailing comma, the\ninterpretation as a simple slicing takes priority over that as an\nextended slicing.\n\nThe semantics for a simple slicing are as follows. The primary must\nevaluate to a sequence object. The lower and upper bound expressions,\nif present, must evaluate to plain integers; defaults are zero and the\n``sys.maxint``, respectively. If either bound is negative, the\nsequence\'s length is added to it. The slicing now selects all items\nwith index *k* such that ``i <= k < j`` where *i* and *j* are the\nspecified lower and upper bounds. This may be an empty sequence. It\nis not an error if *i* or *j* lie outside the range of valid indexes\n(such items don\'t exist so they aren\'t selected).\n\nThe semantics for an extended slicing are as follows. The primary\nmust evaluate to a mapping object, and it is indexed with a key that\nis constructed from the slice list, as follows. If the slice list\ncontains at least one comma, the key is a tuple containing the\nconversion of the slice items; otherwise, the conversion of the lone\nslice item is the key. The conversion of a slice item that is an\nexpression is that expression. The conversion of an ellipsis slice\nitem is the built-in ``Ellipsis`` object. 
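The shifting rules just quoted (a shift is multiplication or division by ``pow(2, n)``, and negative counts raise ``ValueError``) can be exercised like this; a small sketch for illustration only:

    x = 5
    assert x << 3 == x * 2 ** 3 == 40    # left shift multiplies by pow(2, n)
    assert x >> 1 == x // 2 == 2         # right shift divides by pow(2, n)
    try:
        x << -1
    except ValueError:
        pass                             # negative shift counts are an error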
The conversion of a proper\nslice is a slice object (see section *The standard type hierarchy*)\nwhose ``start``, ``stop`` and ``step`` attributes are the values of\nthe expressions given as lower bound, upper bound and stride,\nrespectively, substituting ``None`` for missing expressions.\n', 'specialattrs': u"\nSpecial Attributes\n******************\n\nThe implementation adds a few special read-only attributes to several\nobject types, where they are relevant. Some of these are not reported\nby the ``dir()`` built-in function.\n\nobject.__dict__\n\n A dictionary or other mapping object used to store an object's\n (writable) attributes.\n\nobject.__methods__\n\n Deprecated since version 2.2: Use the built-in function ``dir()``\n to get a list of an object's attributes. This attribute is no\n longer available.\n\nobject.__members__\n\n Deprecated since version 2.2: Use the built-in function ``dir()``\n to get a list of an object's attributes. This attribute is no\n longer available.\n\ninstance.__class__\n\n The class to which a class instance belongs.\n\nclass.__bases__\n\n The tuple of base classes of a class object.\n\nclass.__name__\n\n The name of the class or type.\n\nThe following attributes are only supported by *new-style class*es.\n\nclass.__mro__\n\n This attribute is a tuple of classes that are considered when\n looking for base classes during method resolution.\n\nclass.mro()\n\n This method can be overridden by a metaclass to customize the\n method resolution order for its instances. It is called at class\n instantiation, and its result is stored in ``__mro__``.\n\nclass.__subclasses__()\n\n Each new-style class keeps a list of weak references to its\n immediate subclasses. This method returns a list of all those\n references still alive. Example:\n\n >>> int.__subclasses__()\n []\n\n-[ Footnotes ]-\n\n[1] Additional information on these special methods may be found in\n the Python Reference Manual (*Basic customization*).\n\n[2] As a consequence, the list ``[1, 2]`` is considered equal to\n ``[1.0, 2.0]``, and similarly for tuples.\n\n[3] They must have since the parser can't tell the type of the\n operands.\n\n[4] To format only a tuple you should therefore provide a singleton\n tuple whose only element is the tuple to be formatted.\n\n[5] The advantage of leaving the newline on is that returning an empty\n string is then an unambiguous EOF indication. It is also possible\n (in cases where it might matter, for example, if you want to make\n an exact copy of a file while scanning its lines) to tell whether\n the last line of a file ended in a newline or not (yes this\n happens!).\n", - 'specialnames': u'\nSpecial method names\n********************\n\nA class can implement certain operations that are invoked by special\nsyntax (such as arithmetic operations or subscripting and slicing) by\ndefining methods with special names. This is Python\'s approach to\n*operator overloading*, allowing classes to define their own behavior\nwith respect to language operators. For instance, if a class defines\na method named ``__getitem__()``, and ``x`` is an instance of this\nclass, then ``x[i]`` is roughly equivalent to ``x.__getitem__(i)`` for\nold-style classes and ``type(x).__getitem__(x, i)`` for new-style\nclasses. 
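As a sketch of the 'slicings' entry above, the toy class below (an invented name, not from the changeset) simply returns the key it receives, which shows extended slicing arriving as a slice object or as a tuple of slice items:

    class ShowKey(object):
        def __getitem__(self, key):
            return key

    s = ShowKey()
    sl = s[1:10:2]                      # extended slicing builds a slice object
    assert isinstance(sl, slice)
    assert (sl.start, sl.stop, sl.step) == (1, 10, 2)
    assert s[1:10:2, ...] == (slice(1, 10, 2), Ellipsis)   # slice list -> tuple key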
Except where mentioned, attempts to execute an operation\nraise an exception when no appropriate method is defined (typically\n``AttributeError`` or ``TypeError``).\n\nWhen implementing a class that emulates any built-in type, it is\nimportant that the emulation only be implemented to the degree that it\nmakes sense for the object being modelled. For example, some\nsequences may work well with retrieval of individual elements, but\nextracting a slice may not make sense. (One example of this is the\n``NodeList`` interface in the W3C\'s Document Object Model.)\n\n\nBasic customization\n===================\n\nobject.__new__(cls[, ...])\n\n Called to create a new instance of class *cls*. ``__new__()`` is a\n static method (special-cased so you need not declare it as such)\n that takes the class of which an instance was requested as its\n first argument. The remaining arguments are those passed to the\n object constructor expression (the call to the class). The return\n value of ``__new__()`` should be the new object instance (usually\n an instance of *cls*).\n\n Typical implementations create a new instance of the class by\n invoking the superclass\'s ``__new__()`` method using\n ``super(currentclass, cls).__new__(cls[, ...])`` with appropriate\n arguments and then modifying the newly-created instance as\n necessary before returning it.\n\n If ``__new__()`` returns an instance of *cls*, then the new\n instance\'s ``__init__()`` method will be invoked like\n ``__init__(self[, ...])``, where *self* is the new instance and the\n remaining arguments are the same as were passed to ``__new__()``.\n\n If ``__new__()`` does not return an instance of *cls*, then the new\n instance\'s ``__init__()`` method will not be invoked.\n\n ``__new__()`` is intended mainly to allow subclasses of immutable\n types (like int, str, or tuple) to customize instance creation. It\n is also commonly overridden in custom metaclasses in order to\n customize class creation.\n\nobject.__init__(self[, ...])\n\n Called when the instance is created. The arguments are those\n passed to the class constructor expression. If a base class has an\n ``__init__()`` method, the derived class\'s ``__init__()`` method,\n if any, must explicitly call it to ensure proper initialization of\n the base class part of the instance; for example:\n ``BaseClass.__init__(self, [args...])``. As a special constraint\n on constructors, no value may be returned; doing so will cause a\n ``TypeError`` to be raised at runtime.\n\nobject.__del__(self)\n\n Called when the instance is about to be destroyed. This is also\n called a destructor. If a base class has a ``__del__()`` method,\n the derived class\'s ``__del__()`` method, if any, must explicitly\n call it to ensure proper deletion of the base class part of the\n instance. Note that it is possible (though not recommended!) for\n the ``__del__()`` method to postpone destruction of the instance by\n creating a new reference to it. It may then be called at a later\n time when this new reference is deleted. It is not guaranteed that\n ``__del__()`` methods are called for objects that still exist when\n the interpreter exits.\n\n Note: ``del x`` doesn\'t directly call ``x.__del__()`` --- the former\n decrements the reference count for ``x`` by one, and the latter\n is only called when ``x``\'s reference count reaches zero. 
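A minimal sketch of the ``__new__()``/``__init__()`` split described above, using an invented ``str`` subclass; customization of an immutable type has to happen in ``__new__()``:

    class UpperStr(str):
        """str subclass; the value must be fixed in __new__, not __init__."""
        def __new__(cls, value):
            return str.__new__(cls, value.upper())

    s = UpperStr("pypy")
    assert s == "PYPY" and isinstance(s, str)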
Some\n common situations that may prevent the reference count of an\n object from going to zero include: circular references between\n objects (e.g., a doubly-linked list or a tree data structure with\n parent and child pointers); a reference to the object on the\n stack frame of a function that caught an exception (the traceback\n stored in ``sys.exc_traceback`` keeps the stack frame alive); or\n a reference to the object on the stack frame that raised an\n unhandled exception in interactive mode (the traceback stored in\n ``sys.last_traceback`` keeps the stack frame alive). The first\n situation can only be remedied by explicitly breaking the cycles;\n the latter two situations can be resolved by storing ``None`` in\n ``sys.exc_traceback`` or ``sys.last_traceback``. Circular\n references which are garbage are detected when the option cycle\n detector is enabled (it\'s on by default), but can only be cleaned\n up if there are no Python-level ``__del__()`` methods involved.\n Refer to the documentation for the ``gc`` module for more\n information about how ``__del__()`` methods are handled by the\n cycle detector, particularly the description of the ``garbage``\n value.\n\n Warning: Due to the precarious circumstances under which ``__del__()``\n methods are invoked, exceptions that occur during their execution\n are ignored, and a warning is printed to ``sys.stderr`` instead.\n Also, when ``__del__()`` is invoked in response to a module being\n deleted (e.g., when execution of the program is done), other\n globals referenced by the ``__del__()`` method may already have\n been deleted or in the process of being torn down (e.g. the\n import machinery shutting down). For this reason, ``__del__()``\n methods should do the absolute minimum needed to maintain\n external invariants. Starting with version 1.5, Python\n guarantees that globals whose name begins with a single\n underscore are deleted from their module before other globals are\n deleted; if no other references to such globals exist, this may\n help in assuring that imported modules are still available at the\n time when the ``__del__()`` method is called.\n\nobject.__repr__(self)\n\n Called by the ``repr()`` built-in function and by string\n conversions (reverse quotes) to compute the "official" string\n representation of an object. If at all possible, this should look\n like a valid Python expression that could be used to recreate an\n object with the same value (given an appropriate environment). If\n this is not possible, a string of the form ``<...some useful\n description...>`` should be returned. The return value must be a\n string object. If a class defines ``__repr__()`` but not\n ``__str__()``, then ``__repr__()`` is also used when an "informal"\n string representation of instances of that class is required.\n\n This is typically used for debugging, so it is important that the\n representation is information-rich and unambiguous.\n\nobject.__str__(self)\n\n Called by the ``str()`` built-in function and by the ``print``\n statement to compute the "informal" string representation of an\n object. This differs from ``__repr__()`` in that it does not have\n to be a valid Python expression: a more convenient or concise\n representation may be used instead. 
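A small illustrative class (invented for this note) showing the intended division of labour between ``__repr__()`` and ``__str__()``:

    class Point(object):
        def __init__(self, x, y):
            self.x, self.y = x, y
        def __repr__(self):
            # aims to look like a valid constructor expression
            return "Point(%r, %r)" % (self.x, self.y)
        def __str__(self):
            # informal, more concise representation
            return "(%s, %s)" % (self.x, self.y)

    p = Point(1, 2)
    assert repr(p) == "Point(1, 2)"
    assert str(p) == "(1, 2)"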
The return value must be a\n string object.\n\nobject.__lt__(self, other)\nobject.__le__(self, other)\nobject.__eq__(self, other)\nobject.__ne__(self, other)\nobject.__gt__(self, other)\nobject.__ge__(self, other)\n\n New in version 2.1.\n\n These are the so-called "rich comparison" methods, and are called\n for comparison operators in preference to ``__cmp__()`` below. The\n correspondence between operator symbols and method names is as\n follows: ``x<y`` calls ``x.__lt__(y)``, ``x<=y`` calls ``x.__le__(y)``,\n ``x==y`` calls ``x.__eq__(y)``, ``x!=y`` and ``x<>y`` call\n ``x.__ne__(y)``, ``x>y`` calls ``x.__gt__(y)``, and\n ``x>=y`` calls ``x.__ge__(y)``.\n\n A rich comparison method may return the singleton\n ``NotImplemented`` if it does not implement the operation for a\n given pair of arguments. By convention, ``False`` and ``True`` are\n returned for a successful comparison. However, these methods can\n return any value, so if the comparison operator is used in a\n Boolean context (e.g., in the condition of an ``if`` statement),\n Python will call ``bool()`` on the value to determine if the result\n is true or false.\n\n There are no implied relationships among the comparison operators.\n The truth of ``x==y`` does not imply that ``x!=y`` is false.\n Accordingly, when defining ``__eq__()``, one should also define\n ``__ne__()`` so that the operators will behave as expected. See\n the paragraph on ``__hash__()`` for some important notes on\n creating *hashable* objects which support custom comparison\n operations and are usable as dictionary keys.\n\n There are no swapped-argument versions of these methods (to be used\n when the left argument does not support the operation but the right\n argument does); rather, ``__lt__()`` and ``__gt__()`` are each\n other\'s reflection, ``__le__()`` and ``__ge__()`` are each other\'s\n reflection, and ``__eq__()`` and ``__ne__()`` are their own\n reflection.\n\n Arguments to rich comparison methods are never coerced.\n\n To automatically generate ordering operations from a single root\n operation, see ``functools.total_ordering()``.\n\nobject.__cmp__(self, other)\n\n Called by comparison operations if rich comparison (see above) is\n not defined. Should return a negative integer if ``self < other``,\n zero if ``self == other``, a positive integer if ``self > other``.\n If no ``__cmp__()``, ``__eq__()`` or ``__ne__()`` operation is\n defined, class instances are compared by object identity\n ("address"). See also the description of ``__hash__()`` for some\n important notes on creating *hashable* objects which support custom\n comparison operations and are usable as dictionary keys. (Note: the\n restriction that exceptions are not propagated by ``__cmp__()`` has\n been removed since Python 1.5.)\n\nobject.__rcmp__(self, other)\n\n Changed in version 2.1: No longer supported.\n\nobject.__hash__(self)\n\n Called by built-in function ``hash()`` and for operations on\n members of hashed collections including ``set``, ``frozenset``, and\n ``dict``. ``__hash__()`` should return an integer. The only\n required property is that objects which compare equal have the same\n hash value; it is advised to somehow mix together (e.g. using\n exclusive or) the hash values for the components of the object that\n also play a part in comparison of objects.\n\n If a class does not define a ``__cmp__()`` or ``__eq__()`` method\n it should not define a ``__hash__()`` operation either; if it\n defines ``__cmp__()`` or ``__eq__()`` but not ``__hash__()``, its\n instances will not be usable in hashed collections. 
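The requirement that ``__eq__()``, ``__ne__()`` and ``__hash__()`` stay consistent can be sketched as follows; the ``Account`` class is invented for illustration:

    class Account(object):
        def __init__(self, number):
            self.number = number
        def __eq__(self, other):
            if not isinstance(other, Account):
                return NotImplemented
            return self.number == other.number
        def __ne__(self, other):            # define __ne__ alongside __eq__
            result = self.__eq__(other)
            if result is NotImplemented:
                return result
            return not result
        def __hash__(self):                 # objects that compare equal must hash equal
            return hash(self.number)

    a, b = Account(42), Account(42)
    assert a == b and not (a != b) and hash(a) == hash(b)
    assert len({a, b}) == 1                 # both land in the same hash bucket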
If a class\n defines mutable objects and implements a ``__cmp__()`` or\n ``__eq__()`` method, it should not implement ``__hash__()``, since\n hashable collection implementations require that a object\'s hash\n value is immutable (if the object\'s hash value changes, it will be\n in the wrong hash bucket).\n\n User-defined classes have ``__cmp__()`` and ``__hash__()`` methods\n by default; with them, all objects compare unequal (except with\n themselves) and ``x.__hash__()`` returns ``id(x)``.\n\n Classes which inherit a ``__hash__()`` method from a parent class\n but change the meaning of ``__cmp__()`` or ``__eq__()`` such that\n the hash value returned is no longer appropriate (e.g. by switching\n to a value-based concept of equality instead of the default\n identity based equality) can explicitly flag themselves as being\n unhashable by setting ``__hash__ = None`` in the class definition.\n Doing so means that not only will instances of the class raise an\n appropriate ``TypeError`` when a program attempts to retrieve their\n hash value, but they will also be correctly identified as\n unhashable when checking ``isinstance(obj, collections.Hashable)``\n (unlike classes which define their own ``__hash__()`` to explicitly\n raise ``TypeError``).\n\n Changed in version 2.5: ``__hash__()`` may now also return a long\n integer object; the 32-bit integer is then derived from the hash of\n that object.\n\n Changed in version 2.6: ``__hash__`` may now be set to ``None`` to\n explicitly flag instances of a class as unhashable.\n\nobject.__nonzero__(self)\n\n Called to implement truth value testing and the built-in operation\n ``bool()``; should return ``False`` or ``True``, or their integer\n equivalents ``0`` or ``1``. When this method is not defined,\n ``__len__()`` is called, if it is defined, and the object is\n considered true if its result is nonzero. If a class defines\n neither ``__len__()`` nor ``__nonzero__()``, all its instances are\n considered true.\n\nobject.__unicode__(self)\n\n Called to implement ``unicode()`` built-in; should return a Unicode\n object. When this method is not defined, string conversion is\n attempted, and the result of string conversion is converted to\n Unicode using the system default encoding.\n\n\nCustomizing attribute access\n============================\n\nThe following methods can be defined to customize the meaning of\nattribute access (use of, assignment to, or deletion of ``x.name``)\nfor class instances.\n\nobject.__getattr__(self, name)\n\n Called when an attribute lookup has not found the attribute in the\n usual places (i.e. it is not an instance attribute nor is it found\n in the class tree for ``self``). ``name`` is the attribute name.\n This method should return the (computed) attribute value or raise\n an ``AttributeError`` exception.\n\n Note that if the attribute is found through the normal mechanism,\n ``__getattr__()`` is not called. (This is an intentional asymmetry\n between ``__getattr__()`` and ``__setattr__()``.) This is done both\n for efficiency reasons and because otherwise ``__getattr__()``\n would have no way to access other attributes of the instance. Note\n that at least for instance variables, you can fake total control by\n not inserting any values in the instance attribute dictionary (but\n instead inserting them in another object). 
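A hedged sketch of the ``__getattr__()``/``__setattr__()`` hooks described above; note how the constructor goes through ``object.__setattr__()`` to avoid recursing into the custom hook (class and attribute names are invented):

    class Record(object):
        def __init__(self, **fields):
            # bypass the custom __setattr__ while setting up internal state
            object.__setattr__(self, '_fields', dict(fields))
        def __getattr__(self, name):
            # only called when normal attribute lookup fails
            try:
                return self._fields[name]
            except KeyError:
                raise AttributeError(name)
        def __setattr__(self, name, value):
            # store in the internal dict instead of the instance __dict__
            self._fields[name] = value

    r = Record(x=1)
    assert r.x == 1
    r.y = 2
    assert r.y == 2 and 'y' not in r.__dict__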
See the\n ``__getattribute__()`` method below for a way to actually get total\n control in new-style classes.\n\nobject.__setattr__(self, name, value)\n\n Called when an attribute assignment is attempted. This is called\n instead of the normal mechanism (i.e. store the value in the\n instance dictionary). *name* is the attribute name, *value* is the\n value to be assigned to it.\n\n If ``__setattr__()`` wants to assign to an instance attribute, it\n should not simply execute ``self.name = value`` --- this would\n cause a recursive call to itself. Instead, it should insert the\n value in the dictionary of instance attributes, e.g.,\n ``self.__dict__[name] = value``. For new-style classes, rather\n than accessing the instance dictionary, it should call the base\n class method with the same name, for example,\n ``object.__setattr__(self, name, value)``.\n\nobject.__delattr__(self, name)\n\n Like ``__setattr__()`` but for attribute deletion instead of\n assignment. This should only be implemented if ``del obj.name`` is\n meaningful for the object.\n\n\nMore attribute access for new-style classes\n-------------------------------------------\n\nThe following methods only apply to new-style classes.\n\nobject.__getattribute__(self, name)\n\n Called unconditionally to implement attribute accesses for\n instances of the class. If the class also defines\n ``__getattr__()``, the latter will not be called unless\n ``__getattribute__()`` either calls it explicitly or raises an\n ``AttributeError``. This method should return the (computed)\n attribute value or raise an ``AttributeError`` exception. In order\n to avoid infinite recursion in this method, its implementation\n should always call the base class method with the same name to\n access any attributes it needs, for example,\n ``object.__getattribute__(self, name)``.\n\n Note: This method may still be bypassed when looking up special methods\n as the result of implicit invocation via language syntax or\n built-in functions. See *Special method lookup for new-style\n classes*.\n\n\nImplementing Descriptors\n------------------------\n\nThe following methods only apply when an instance of the class\ncontaining the method (a so-called *descriptor* class) appears in the\nclass dictionary of another new-style class, known as the *owner*\nclass. In the examples below, "the attribute" refers to the attribute\nwhose name is the key of the property in the owner class\'\n``__dict__``. Descriptors can only be implemented as new-style\nclasses themselves.\n\nobject.__get__(self, instance, owner)\n\n Called to get the attribute of the owner class (class attribute\n access) or of an instance of that class (instance attribute\n access). *owner* is always the owner class, while *instance* is the\n instance that the attribute was accessed through, or ``None`` when\n the attribute is accessed through the *owner*. This method should\n return the (computed) attribute value or raise an\n ``AttributeError`` exception.\n\nobject.__set__(self, instance, value)\n\n Called to set the attribute on an instance *instance* of the owner\n class to a new value, *value*.\n\nobject.__delete__(self, instance)\n\n Called to delete the attribute on an instance *instance* of the\n owner class.\n\n\nInvoking Descriptors\n--------------------\n\nIn general, a descriptor is an object attribute with "binding\nbehavior", one whose attribute access has been overridden by methods\nin the descriptor protocol: ``__get__()``, ``__set__()``, and\n``__delete__()``. 
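A minimal data-descriptor sketch matching the ``__get__()``/``__set__()`` protocol above; the validating descriptor and its owner class are invented examples:

    class Positive(object):
        """A data descriptor that insists the attribute stays positive."""
        def __init__(self, name):
            self.name = name
        def __get__(self, instance, owner):
            if instance is None:
                return self                  # accessed on the class itself
            return instance.__dict__[self.name]
        def __set__(self, instance, value):
            if value <= 0:
                raise ValueError("%s must be positive" % self.name)
            instance.__dict__[self.name] = value

    class Box(object):
        width = Positive('width')            # descriptor lives in the class dict

    b = Box()
    b.width = 3
    assert b.width == 3
    try:
        b.width = -1
    except ValueError:
        pass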
If any of those methods are defined for an object,\nit is said to be a descriptor.\n\nThe default behavior for attribute access is to get, set, or delete\nthe attribute from an object\'s dictionary. For instance, ``a.x`` has a\nlookup chain starting with ``a.__dict__[\'x\']``, then\n``type(a).__dict__[\'x\']``, and continuing through the base classes of\n``type(a)`` excluding metaclasses.\n\nHowever, if the looked-up value is an object defining one of the\ndescriptor methods, then Python may override the default behavior and\ninvoke the descriptor method instead. Where this occurs in the\nprecedence chain depends on which descriptor methods were defined and\nhow they were called. Note that descriptors are only invoked for new\nstyle objects or classes (ones that subclass ``object()`` or\n``type()``).\n\nThe starting point for descriptor invocation is a binding, ``a.x``.\nHow the arguments are assembled depends on ``a``:\n\nDirect Call\n The simplest and least common call is when user code directly\n invokes a descriptor method: ``x.__get__(a)``.\n\nInstance Binding\n If binding to a new-style object instance, ``a.x`` is transformed\n into the call: ``type(a).__dict__[\'x\'].__get__(a, type(a))``.\n\nClass Binding\n If binding to a new-style class, ``A.x`` is transformed into the\n call: ``A.__dict__[\'x\'].__get__(None, A)``.\n\nSuper Binding\n If ``a`` is an instance of ``super``, then the binding ``super(B,\n obj).m()`` searches ``obj.__class__.__mro__`` for the base class\n ``A`` immediately preceding ``B`` and then invokes the descriptor\n with the call: ``A.__dict__[\'m\'].__get__(obj, A)``.\n\nFor instance bindings, the precedence of descriptor invocation depends\non the which descriptor methods are defined. A descriptor can define\nany combination of ``__get__()``, ``__set__()`` and ``__delete__()``.\nIf it does not define ``__get__()``, then accessing the attribute will\nreturn the descriptor object itself unless there is a value in the\nobject\'s instance dictionary. If the descriptor defines ``__set__()``\nand/or ``__delete__()``, it is a data descriptor; if it defines\nneither, it is a non-data descriptor. Normally, data descriptors\ndefine both ``__get__()`` and ``__set__()``, while non-data\ndescriptors have just the ``__get__()`` method. Data descriptors with\n``__set__()`` and ``__get__()`` defined always override a redefinition\nin an instance dictionary. In contrast, non-data descriptors can be\noverridden by instances.\n\nPython methods (including ``staticmethod()`` and ``classmethod()``)\nare implemented as non-data descriptors. Accordingly, instances can\nredefine and override methods. This allows individual instances to\nacquire behaviors that differ from other instances of the same class.\n\nThe ``property()`` function is implemented as a data descriptor.\nAccordingly, instances cannot override the behavior of a property.\n\n\n__slots__\n---------\n\nBy default, instances of both old and new-style classes have a\ndictionary for attribute storage. This wastes space for objects\nhaving very few instance variables. The space consumption can become\nacute when creating large numbers of instances.\n\nThe default can be overridden by defining *__slots__* in a new-style\nclass definition. The *__slots__* declaration takes a sequence of\ninstance variables and reserves just enough space in each instance to\nhold a value for each variable. 
Space is saved because *__dict__* is\nnot created for each instance.\n\n__slots__\n\n This class variable can be assigned a string, iterable, or sequence\n of strings with variable names used by instances. If defined in a\n new-style class, *__slots__* reserves space for the declared\n variables and prevents the automatic creation of *__dict__* and\n *__weakref__* for each instance.\n\n New in version 2.2.\n\nNotes on using *__slots__*\n\n* When inheriting from a class without *__slots__*, the *__dict__*\n attribute of that class will always be accessible, so a *__slots__*\n definition in the subclass is meaningless.\n\n* Without a *__dict__* variable, instances cannot be assigned new\n variables not listed in the *__slots__* definition. Attempts to\n assign to an unlisted variable name raises ``AttributeError``. If\n dynamic assignment of new variables is desired, then add\n ``\'__dict__\'`` to the sequence of strings in the *__slots__*\n declaration.\n\n Changed in version 2.3: Previously, adding ``\'__dict__\'`` to the\n *__slots__* declaration would not enable the assignment of new\n attributes not specifically listed in the sequence of instance\n variable names.\n\n* Without a *__weakref__* variable for each instance, classes defining\n *__slots__* do not support weak references to its instances. If weak\n reference support is needed, then add ``\'__weakref__\'`` to the\n sequence of strings in the *__slots__* declaration.\n\n Changed in version 2.3: Previously, adding ``\'__weakref__\'`` to the\n *__slots__* declaration would not enable support for weak\n references.\n\n* *__slots__* are implemented at the class level by creating\n descriptors (*Implementing Descriptors*) for each variable name. As\n a result, class attributes cannot be used to set default values for\n instance variables defined by *__slots__*; otherwise, the class\n attribute would overwrite the descriptor assignment.\n\n* The action of a *__slots__* declaration is limited to the class\n where it is defined. As a result, subclasses will have a *__dict__*\n unless they also define *__slots__* (which must only contain names\n of any *additional* slots).\n\n* If a class defines a slot also defined in a base class, the instance\n variable defined by the base class slot is inaccessible (except by\n retrieving its descriptor directly from the base class). This\n renders the meaning of the program undefined. In the future, a\n check may be added to prevent this.\n\n* Nonempty *__slots__* does not work for classes derived from\n "variable-length" built-in types such as ``long``, ``str`` and\n ``tuple``.\n\n* Any non-string iterable may be assigned to *__slots__*. Mappings may\n also be used; however, in the future, special meaning may be\n assigned to the values corresponding to each key.\n\n* *__class__* assignment works only if both classes have the same\n *__slots__*.\n\n Changed in version 2.6: Previously, *__class__* assignment raised an\n error if either new or old class had *__slots__*.\n\n\nCustomizing class creation\n==========================\n\nBy default, new-style classes are constructed using ``type()``. A\nclass definition is read into a separate namespace and the value of\nclass name is bound to the result of ``type(name, bases, dict)``.\n\nWhen the class definition is read, if *__metaclass__* is defined then\nthe callable assigned to it will be called instead of ``type()``. 
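A short sketch of the *__slots__* behaviour summarised above (invented class name); instances get no *__dict__* and reject attributes that are not listed:

    class Pair(object):
        __slots__ = ('x', 'y')         # no per-instance __dict__ is created
        def __init__(self, x, y):
            self.x, self.y = x, y

    p = Pair(1, 2)
    assert p.x == 1 and not hasattr(p, '__dict__')
    try:
        p.z = 3                        # 'z' is not listed in __slots__
    except AttributeError:
        pass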
This\nallows classes or functions to be written which monitor or alter the\nclass creation process:\n\n* Modifying the class dictionary prior to the class being created.\n\n* Returning an instance of another class -- essentially performing the\n role of a factory function.\n\nThese steps will have to be performed in the metaclass\'s ``__new__()``\nmethod -- ``type.__new__()`` can then be called from this method to\ncreate a class with different properties. This example adds a new\nelement to the class dictionary before creating the class:\n\n class metacls(type):\n def __new__(mcs, name, bases, dict):\n dict[\'foo\'] = \'metacls was here\'\n return type.__new__(mcs, name, bases, dict)\n\nYou can of course also override other class methods (or add new\nmethods); for example defining a custom ``__call__()`` method in the\nmetaclass allows custom behavior when the class is called, e.g. not\nalways creating a new instance.\n\n__metaclass__\n\n This variable can be any callable accepting arguments for ``name``,\n ``bases``, and ``dict``. Upon class creation, the callable is used\n instead of the built-in ``type()``.\n\n New in version 2.2.\n\nThe appropriate metaclass is determined by the following precedence\nrules:\n\n* If ``dict[\'__metaclass__\']`` exists, it is used.\n\n* Otherwise, if there is at least one base class, its metaclass is\n used (this looks for a *__class__* attribute first and if not found,\n uses its type).\n\n* Otherwise, if a global variable named __metaclass__ exists, it is\n used.\n\n* Otherwise, the old-style, classic metaclass (types.ClassType) is\n used.\n\nThe potential uses for metaclasses are boundless. Some ideas that have\nbeen explored including logging, interface checking, automatic\ndelegation, automatic property creation, proxies, frameworks, and\nautomatic resource locking/synchronization.\n\n\nCustomizing instance and subclass checks\n========================================\n\nNew in version 2.6.\n\nThe following methods are used to override the default behavior of the\n``isinstance()`` and ``issubclass()`` built-in functions.\n\nIn particular, the metaclass ``abc.ABCMeta`` implements these methods\nin order to allow the addition of Abstract Base Classes (ABCs) as\n"virtual base classes" to any class or type (including built-in\ntypes), including other ABCs.\n\nclass.__instancecheck__(self, instance)\n\n Return true if *instance* should be considered a (direct or\n indirect) instance of *class*. If defined, called to implement\n ``isinstance(instance, class)``.\n\nclass.__subclasscheck__(self, subclass)\n\n Return true if *subclass* should be considered a (direct or\n indirect) subclass of *class*. If defined, called to implement\n ``issubclass(subclass, class)``.\n\nNote that these methods are looked up on the type (metaclass) of a\nclass. 
They cannot be defined as class methods in the actual class.\nThis is consistent with the lookup of special methods that are called\non instances, only in this case the instance is itself a class.\n\nSee also:\n\n **PEP 3119** - Introducing Abstract Base Classes\n Includes the specification for customizing ``isinstance()`` and\n ``issubclass()`` behavior through ``__instancecheck__()`` and\n ``__subclasscheck__()``, with motivation for this functionality\n in the context of adding Abstract Base Classes (see the ``abc``\n module) to the language.\n\n\nEmulating callable objects\n==========================\n\nobject.__call__(self[, args...])\n\n Called when the instance is "called" as a function; if this method\n is defined, ``x(arg1, arg2, ...)`` is a shorthand for\n ``x.__call__(arg1, arg2, ...)``.\n\n\nEmulating container types\n=========================\n\nThe following methods can be defined to implement container objects.\nContainers usually are sequences (such as lists or tuples) or mappings\n(like dictionaries), but can represent other containers as well. The\nfirst set of methods is used either to emulate a sequence or to\nemulate a mapping; the difference is that for a sequence, the\nallowable keys should be the integers *k* for which ``0 <= k < N``\nwhere *N* is the length of the sequence, or slice objects, which\ndefine a range of items. (For backwards compatibility, the method\n``__getslice__()`` (see below) can also be defined to handle simple,\nbut not extended slices.) It is also recommended that mappings provide\nthe methods ``keys()``, ``values()``, ``items()``, ``has_key()``,\n``get()``, ``clear()``, ``setdefault()``, ``iterkeys()``,\n``itervalues()``, ``iteritems()``, ``pop()``, ``popitem()``,\n``copy()``, and ``update()`` behaving similar to those for Python\'s\nstandard dictionary objects. The ``UserDict`` module provides a\n``DictMixin`` class to help create those methods from a base set of\n``__getitem__()``, ``__setitem__()``, ``__delitem__()``, and\n``keys()``. Mutable sequences should provide methods ``append()``,\n``count()``, ``index()``, ``extend()``, ``insert()``, ``pop()``,\n``remove()``, ``reverse()`` and ``sort()``, like Python standard list\nobjects. Finally, sequence types should implement addition (meaning\nconcatenation) and multiplication (meaning repetition) by defining the\nmethods ``__add__()``, ``__radd__()``, ``__iadd__()``, ``__mul__()``,\n``__rmul__()`` and ``__imul__()`` described below; they should not\ndefine ``__coerce__()`` or other numerical operators. It is\nrecommended that both mappings and sequences implement the\n``__contains__()`` method to allow efficient use of the ``in``\noperator; for mappings, ``in`` should be equivalent of ``has_key()``;\nfor sequences, it should search through the values. It is further\nrecommended that both mappings and sequences implement the\n``__iter__()`` method to allow efficient iteration through the\ncontainer; for mappings, ``__iter__()`` should be the same as\n``iterkeys()``; for sequences, it should iterate through the values.\n\nobject.__len__(self)\n\n Called to implement the built-in function ``len()``. Should return\n the length of the object, an integer ``>=`` 0. Also, an object\n that doesn\'t define a ``__nonzero__()`` method and whose\n ``__len__()`` method returns zero is considered to be false in a\n Boolean context.\n\nobject.__getitem__(self, key)\n\n Called to implement evaluation of ``self[key]``. 
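Emulating a callable object, as described above, amounts to defining ``__call__()``; a tiny invented example:

    class Adder(object):
        def __init__(self, n):
            self.n = n
        def __call__(self, x):
            return x + self.n

    add3 = Adder(3)
    assert add3(4) == 7               # add3(4) is shorthand for add3.__call__(4)
    assert callable(add3)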
For sequence\n types, the accepted keys should be integers and slice objects.\n Note that the special interpretation of negative indexes (if the\n class wishes to emulate a sequence type) is up to the\n ``__getitem__()`` method. If *key* is of an inappropriate type,\n ``TypeError`` may be raised; if of a value outside the set of\n indexes for the sequence (after any special interpretation of\n negative values), ``IndexError`` should be raised. For mapping\n types, if *key* is missing (not in the container), ``KeyError``\n should be raised.\n\n Note: ``for`` loops expect that an ``IndexError`` will be raised for\n illegal indexes to allow proper detection of the end of the\n sequence.\n\nobject.__setitem__(self, key, value)\n\n Called to implement assignment to ``self[key]``. Same note as for\n ``__getitem__()``. This should only be implemented for mappings if\n the objects support changes to the values for keys, or if new keys\n can be added, or for sequences if elements can be replaced. The\n same exceptions should be raised for improper *key* values as for\n the ``__getitem__()`` method.\n\nobject.__delitem__(self, key)\n\n Called to implement deletion of ``self[key]``. Same note as for\n ``__getitem__()``. This should only be implemented for mappings if\n the objects support removal of keys, or for sequences if elements\n can be removed from the sequence. The same exceptions should be\n raised for improper *key* values as for the ``__getitem__()``\n method.\n\nobject.__iter__(self)\n\n This method is called when an iterator is required for a container.\n This method should return a new iterator object that can iterate\n over all the objects in the container. For mappings, it should\n iterate over the keys of the container, and should also be made\n available as the method ``iterkeys()``.\n\n Iterator objects also need to implement this method; they are\n required to return themselves. For more information on iterator\n objects, see *Iterator Types*.\n\nobject.__reversed__(self)\n\n Called (if present) by the ``reversed()`` built-in to implement\n reverse iteration. It should return a new iterator object that\n iterates over all the objects in the container in reverse order.\n\n If the ``__reversed__()`` method is not provided, the\n ``reversed()`` built-in will fall back to using the sequence\n protocol (``__len__()`` and ``__getitem__()``). Objects that\n support the sequence protocol should only provide\n ``__reversed__()`` if they can provide an implementation that is\n more efficient than the one provided by ``reversed()``.\n\n New in version 2.6.\n\nThe membership test operators (``in`` and ``not in``) are normally\nimplemented as an iteration through a sequence. However, container\nobjects can supply the following special method with a more efficient\nimplementation, which also does not require the object be a sequence.\n\nobject.__contains__(self, item)\n\n Called to implement membership test operators. Should return true\n if *item* is in *self*, false otherwise. 
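Putting the sequence-emulation methods above together, here is a small read-only sequence (an invented example); iteration falls back to ``__getitem__()`` raising ``IndexError`` at the end, exactly as the note on ``for`` loops says:

    class Squares(object):
        """A tiny read-only sequence: 0, 1, 4, ... up to a limit."""
        def __init__(self, n):
            self.n = n
        def __len__(self):
            return self.n
        def __getitem__(self, i):
            if not 0 <= i < self.n:
                raise IndexError(i)    # lets 'for' loops detect the end
            return i * i
        def __contains__(self, value):
            return any(i * i == value for i in range(self.n))

    sq = Squares(5)
    assert len(sq) == 5 and sq[3] == 9
    assert list(sq) == [0, 1, 4, 9, 16]   # iteration uses the __getitem__ fallback
    assert 16 in sq and 7 not in sq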
For mapping objects, this\n should consider the keys of the mapping rather than the values or\n the key-item pairs.\n\n For objects that don\'t define ``__contains__()``, the membership\n test first tries iteration via ``__iter__()``, then the old\n sequence iteration protocol via ``__getitem__()``, see *this\n section in the language reference*.\n\n\nAdditional methods for emulation of sequence types\n==================================================\n\nThe following optional methods can be defined to further emulate\nsequence objects. Immutable sequences methods should at most only\ndefine ``__getslice__()``; mutable sequences might define all three\nmethods.\n\nobject.__getslice__(self, i, j)\n\n Deprecated since version 2.0: Support slice objects as parameters\n to the ``__getitem__()`` method. (However, built-in types in\n CPython currently still implement ``__getslice__()``. Therefore,\n you have to override it in derived classes when implementing\n slicing.)\n\n Called to implement evaluation of ``self[i:j]``. The returned\n object should be of the same type as *self*. Note that missing *i*\n or *j* in the slice expression are replaced by zero or\n ``sys.maxint``, respectively. If negative indexes are used in the\n slice, the length of the sequence is added to that index. If the\n instance does not implement the ``__len__()`` method, an\n ``AttributeError`` is raised. No guarantee is made that indexes\n adjusted this way are not still negative. Indexes which are\n greater than the length of the sequence are not modified. If no\n ``__getslice__()`` is found, a slice object is created instead, and\n passed to ``__getitem__()`` instead.\n\nobject.__setslice__(self, i, j, sequence)\n\n Called to implement assignment to ``self[i:j]``. Same notes for *i*\n and *j* as for ``__getslice__()``.\n\n This method is deprecated. If no ``__setslice__()`` is found, or\n for extended slicing of the form ``self[i:j:k]``, a slice object is\n created, and passed to ``__setitem__()``, instead of\n ``__setslice__()`` being called.\n\nobject.__delslice__(self, i, j)\n\n Called to implement deletion of ``self[i:j]``. Same notes for *i*\n and *j* as for ``__getslice__()``. This method is deprecated. If no\n ``__delslice__()`` is found, or for extended slicing of the form\n ``self[i:j:k]``, a slice object is created, and passed to\n ``__delitem__()``, instead of ``__delslice__()`` being called.\n\nNotice that these methods are only invoked when a single slice with a\nsingle colon is used, and the slice method is available. 
For slice\noperations involving extended slice notation, or in absence of the\nslice methods, ``__getitem__()``, ``__setitem__()`` or\n``__delitem__()`` is called with a slice object as argument.\n\nThe following example demonstrate how to make your program or module\ncompatible with earlier versions of Python (assuming that methods\n``__getitem__()``, ``__setitem__()`` and ``__delitem__()`` support\nslice objects as arguments):\n\n class MyClass:\n ...\n def __getitem__(self, index):\n ...\n def __setitem__(self, index, value):\n ...\n def __delitem__(self, index):\n ...\n\n if sys.version_info < (2, 0):\n # They won\'t be defined if version is at least 2.0 final\n\n def __getslice__(self, i, j):\n return self[max(0, i):max(0, j):]\n def __setslice__(self, i, j, seq):\n self[max(0, i):max(0, j):] = seq\n def __delslice__(self, i, j):\n del self[max(0, i):max(0, j):]\n ...\n\nNote the calls to ``max()``; these are necessary because of the\nhandling of negative indices before the ``__*slice__()`` methods are\ncalled. When negative indexes are used, the ``__*item__()`` methods\nreceive them as provided, but the ``__*slice__()`` methods get a\n"cooked" form of the index values. For each negative index value, the\nlength of the sequence is added to the index before calling the method\n(which may still result in a negative index); this is the customary\nhandling of negative indexes by the built-in sequence types, and the\n``__*item__()`` methods are expected to do this as well. However,\nsince they should already be doing that, negative indexes cannot be\npassed in; they must be constrained to the bounds of the sequence\nbefore being passed to the ``__*item__()`` methods. Calling ``max(0,\ni)`` conveniently returns the proper value.\n\n\nEmulating numeric types\n=======================\n\nThe following methods can be defined to emulate numeric objects.\nMethods corresponding to operations that are not supported by the\nparticular kind of number implemented (e.g., bitwise operations for\nnon-integral numbers) should be left undefined.\n\nobject.__add__(self, other)\nobject.__sub__(self, other)\nobject.__mul__(self, other)\nobject.__floordiv__(self, other)\nobject.__mod__(self, other)\nobject.__divmod__(self, other)\nobject.__pow__(self, other[, modulo])\nobject.__lshift__(self, other)\nobject.__rshift__(self, other)\nobject.__and__(self, other)\nobject.__xor__(self, other)\nobject.__or__(self, other)\n\n These methods are called to implement the binary arithmetic\n operations (``+``, ``-``, ``*``, ``//``, ``%``, ``divmod()``,\n ``pow()``, ``**``, ``<<``, ``>>``, ``&``, ``^``, ``|``). For\n instance, to evaluate the expression ``x + y``, where *x* is an\n instance of a class that has an ``__add__()`` method,\n ``x.__add__(y)`` is called. The ``__divmod__()`` method should be\n the equivalent to using ``__floordiv__()`` and ``__mod__()``; it\n should not be related to ``__truediv__()`` (described below). Note\n that ``__pow__()`` should be defined to accept an optional third\n argument if the ternary version of the built-in ``pow()`` function\n is to be supported.\n\n If one of those methods does not support the operation with the\n supplied arguments, it should return ``NotImplemented``.\n\nobject.__div__(self, other)\nobject.__truediv__(self, other)\n\n The division operator (``/``) is implemented by these methods. The\n ``__truediv__()`` method is used when ``__future__.division`` is in\n effect, otherwise ``__div__()`` is used. 
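An illustrative sketch (the ``Money`` class is hypothetical, not part of the reference text): returning ``NotImplemented`` from a binary method lets Python try the reflected operation and, failing that, raise ``TypeError``:

   class Money(object):
       # Supports only Money + Money; everything else is declined.
       def __init__(self, cents):
           self.cents = cents
       def __add__(self, other):
           if isinstance(other, Money):
               return Money(self.cents + other.cents)
           return NotImplemented   # let Python try other.__radd__() next
       def __repr__(self):
           return "Money(%d)" % self.cents

   Money(150) + Money(50)   # Money(200)
   # Money(150) + 1 raises TypeError, because int.__radd__() also declines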
If only one of these two\n methods is defined, the object will not support division in the\n alternate context; ``TypeError`` will be raised instead.\n\nobject.__radd__(self, other)\nobject.__rsub__(self, other)\nobject.__rmul__(self, other)\nobject.__rdiv__(self, other)\nobject.__rtruediv__(self, other)\nobject.__rfloordiv__(self, other)\nobject.__rmod__(self, other)\nobject.__rdivmod__(self, other)\nobject.__rpow__(self, other)\nobject.__rlshift__(self, other)\nobject.__rrshift__(self, other)\nobject.__rand__(self, other)\nobject.__rxor__(self, other)\nobject.__ror__(self, other)\n\n These methods are called to implement the binary arithmetic\n operations (``+``, ``-``, ``*``, ``/``, ``%``, ``divmod()``,\n ``pow()``, ``**``, ``<<``, ``>>``, ``&``, ``^``, ``|``) with\n reflected (swapped) operands. These functions are only called if\n the left operand does not support the corresponding operation and\n the operands are of different types. [2] For instance, to evaluate\n the expression ``x - y``, where *y* is an instance of a class that\n has an ``__rsub__()`` method, ``y.__rsub__(x)`` is called if\n ``x.__sub__(y)`` returns *NotImplemented*.\n\n Note that ternary ``pow()`` will not try calling ``__rpow__()``\n (the coercion rules would become too complicated).\n\n Note: If the right operand\'s type is a subclass of the left operand\'s\n type and that subclass provides the reflected method for the\n operation, this method will be called before the left operand\'s\n non-reflected method. This behavior allows subclasses to\n override their ancestors\' operations.\n\nobject.__iadd__(self, other)\nobject.__isub__(self, other)\nobject.__imul__(self, other)\nobject.__idiv__(self, other)\nobject.__itruediv__(self, other)\nobject.__ifloordiv__(self, other)\nobject.__imod__(self, other)\nobject.__ipow__(self, other[, modulo])\nobject.__ilshift__(self, other)\nobject.__irshift__(self, other)\nobject.__iand__(self, other)\nobject.__ixor__(self, other)\nobject.__ior__(self, other)\n\n These methods are called to implement the augmented arithmetic\n assignments (``+=``, ``-=``, ``*=``, ``/=``, ``//=``, ``%=``,\n ``**=``, ``<<=``, ``>>=``, ``&=``, ``^=``, ``|=``). These methods\n should attempt to do the operation in-place (modifying *self*) and\n return the result (which could be, but does not have to be,\n *self*). If a specific method is not defined, the augmented\n assignment falls back to the normal methods. For instance, to\n execute the statement ``x += y``, where *x* is an instance of a\n class that has an ``__iadd__()`` method, ``x.__iadd__(y)`` is\n called. If *x* is an instance of a class that does not define a\n ``__iadd__()`` method, ``x.__add__(y)`` and ``y.__radd__(x)`` are\n considered, as with the evaluation of ``x + y``.\n\nobject.__neg__(self)\nobject.__pos__(self)\nobject.__abs__(self)\nobject.__invert__(self)\n\n Called to implement the unary arithmetic operations (``-``, ``+``,\n ``abs()`` and ``~``).\n\nobject.__complex__(self)\nobject.__int__(self)\nobject.__long__(self)\nobject.__float__(self)\n\n Called to implement the built-in functions ``complex()``,\n ``int()``, ``long()``, and ``float()``. Should return a value of\n the appropriate type.\n\nobject.__oct__(self)\nobject.__hex__(self)\n\n Called to implement the built-in functions ``oct()`` and ``hex()``.\n Should return a string value.\n\nobject.__index__(self)\n\n Called to implement ``operator.index()``. Also called whenever\n Python needs an integer object (such as in slicing). 
Must return\n an integer (int or long).\n\n New in version 2.5.\n\nobject.__coerce__(self, other)\n\n Called to implement "mixed-mode" numeric arithmetic. Should either\n return a 2-tuple containing *self* and *other* converted to a\n common numeric type, or ``None`` if conversion is impossible. When\n the common type would be the type of ``other``, it is sufficient to\n return ``None``, since the interpreter will also ask the other\n object to attempt a coercion (but sometimes, if the implementation\n of the other type cannot be changed, it is useful to do the\n conversion to the other type here). A return value of\n ``NotImplemented`` is equivalent to returning ``None``.\n\n\nCoercion rules\n==============\n\nThis section used to document the rules for coercion. As the language\nhas evolved, the coercion rules have become hard to document\nprecisely; documenting what one version of one particular\nimplementation does is undesirable. Instead, here are some informal\nguidelines regarding coercion. In Python 3.0, coercion will not be\nsupported.\n\n* If the left operand of a % operator is a string or Unicode object,\n no coercion takes place and the string formatting operation is\n invoked instead.\n\n* It is no longer recommended to define a coercion operation. Mixed-\n mode operations on types that don\'t define coercion pass the\n original arguments to the operation.\n\n* New-style classes (those derived from ``object``) never invoke the\n ``__coerce__()`` method in response to a binary operator; the only\n time ``__coerce__()`` is invoked is when the built-in function\n ``coerce()`` is called.\n\n* For most intents and purposes, an operator that returns\n ``NotImplemented`` is treated the same as one that is not\n implemented at all.\n\n* Below, ``__op__()`` and ``__rop__()`` are used to signify the\n generic method names corresponding to an operator; ``__iop__()`` is\n used for the corresponding in-place operator. For example, for the\n operator \'``+``\', ``__add__()`` and ``__radd__()`` are used for the\n left and right variant of the binary operator, and ``__iadd__()``\n for the in-place variant.\n\n* For objects *x* and *y*, first ``x.__op__(y)`` is tried. If this is\n not implemented or returns ``NotImplemented``, ``y.__rop__(x)`` is\n tried. If this is also not implemented or returns\n ``NotImplemented``, a ``TypeError`` exception is raised. But see\n the following exception:\n\n* Exception to the previous item: if the left operand is an instance\n of a built-in type or a new-style class, and the right operand is an\n instance of a proper subclass of that type or class and overrides\n the base\'s ``__rop__()`` method, the right operand\'s ``__rop__()``\n method is tried *before* the left operand\'s ``__op__()`` method.\n\n This is done so that a subclass can completely override binary\n operators. Otherwise, the left operand\'s ``__op__()`` method would\n always accept the right operand: when an instance of a given class\n is expected, an instance of a subclass of that class is always\n acceptable.\n\n* When either operand type defines a coercion, this coercion is called\n before that type\'s ``__op__()`` or ``__rop__()`` method is called,\n but no sooner. If the coercion returns an object of a different\n type for the operand whose coercion is invoked, part of the process\n is redone using the new object.\n\n* When an in-place operator (like \'``+=``\') is used, if the left\n operand implements ``__iop__()``, it is invoked without any\n coercion. 
When the operation falls back to ``__op__()`` and/or\n ``__rop__()``, the normal coercion rules apply.\n\n* In ``x + y``, if *x* is a sequence that implements sequence\n concatenation, sequence concatenation is invoked.\n\n* In ``x * y``, if one operator is a sequence that implements sequence\n repetition, and the other is an integer (``int`` or ``long``),\n sequence repetition is invoked.\n\n* Rich comparisons (implemented by methods ``__eq__()`` and so on)\n never use coercion. Three-way comparison (implemented by\n ``__cmp__()``) does use coercion under the same conditions as other\n binary operations use it.\n\n* In the current implementation, the built-in numeric types ``int``,\n ``long``, ``float``, and ``complex`` do not use coercion. All these\n types implement a ``__coerce__()`` method, for use by the built-in\n ``coerce()`` function.\n\n Changed in version 2.7.\n\n\nWith Statement Context Managers\n===============================\n\nNew in version 2.5.\n\nA *context manager* is an object that defines the runtime context to\nbe established when executing a ``with`` statement. The context\nmanager handles the entry into, and the exit from, the desired runtime\ncontext for the execution of the block of code. Context managers are\nnormally invoked using the ``with`` statement (described in section\n*The with statement*), but can also be used by directly invoking their\nmethods.\n\nTypical uses of context managers include saving and restoring various\nkinds of global state, locking and unlocking resources, closing opened\nfiles, etc.\n\nFor more information on context managers, see *Context Manager Types*.\n\nobject.__enter__(self)\n\n Enter the runtime context related to this object. The ``with``\n statement will bind this method\'s return value to the target(s)\n specified in the ``as`` clause of the statement, if any.\n\nobject.__exit__(self, exc_type, exc_value, traceback)\n\n Exit the runtime context related to this object. The parameters\n describe the exception that caused the context to be exited. If the\n context was exited without an exception, all three arguments will\n be ``None``.\n\n If an exception is supplied, and the method wishes to suppress the\n exception (i.e., prevent it from being propagated), it should\n return a true value. Otherwise, the exception will be processed\n normally upon exit from this method.\n\n Note that ``__exit__()`` methods should not reraise the passed-in\n exception; this is the caller\'s responsibility.\n\nSee also:\n\n **PEP 0343** - The "with" statement\n The specification, background, and examples for the Python\n ``with`` statement.\n\n\nSpecial method lookup for old-style classes\n===========================================\n\nFor old-style classes, special methods are always looked up in exactly\nthe same way as any other method or attribute. This is the case\nregardless of whether the method is being looked up explicitly as in\n``x.__getitem__(i)`` or implicitly as in ``x[i]``.\n\nThis behaviour means that special methods may exhibit different\nbehaviour for different instances of a single old-style class if the\nappropriate special attributes are set differently:\n\n >>> class C:\n ... 
pass\n ...\n >>> c1 = C()\n >>> c2 = C()\n >>> c1.__len__ = lambda: 5\n >>> c2.__len__ = lambda: 9\n >>> len(c1)\n 5\n >>> len(c2)\n 9\n\n\nSpecial method lookup for new-style classes\n===========================================\n\nFor new-style classes, implicit invocations of special methods are\nonly guaranteed to work correctly if defined on an object\'s type, not\nin the object\'s instance dictionary. That behaviour is the reason why\nthe following code raises an exception (unlike the equivalent example\nwith old-style classes):\n\n >>> class C(object):\n ... pass\n ...\n >>> c = C()\n >>> c.__len__ = lambda: 5\n >>> len(c)\n Traceback (most recent call last):\n File "", line 1, in \n TypeError: object of type \'C\' has no len()\n\nThe rationale behind this behaviour lies with a number of special\nmethods such as ``__hash__()`` and ``__repr__()`` that are implemented\nby all objects, including type objects. If the implicit lookup of\nthese methods used the conventional lookup process, they would fail\nwhen invoked on the type object itself:\n\n >>> 1 .__hash__() == hash(1)\n True\n >>> int.__hash__() == hash(int)\n Traceback (most recent call last):\n File "", line 1, in \n TypeError: descriptor \'__hash__\' of \'int\' object needs an argument\n\nIncorrectly attempting to invoke an unbound method of a class in this\nway is sometimes referred to as \'metaclass confusion\', and is avoided\nby bypassing the instance when looking up special methods:\n\n >>> type(1).__hash__(1) == hash(1)\n True\n >>> type(int).__hash__(int) == hash(int)\n True\n\nIn addition to bypassing any instance attributes in the interest of\ncorrectness, implicit special method lookup generally also bypasses\nthe ``__getattribute__()`` method even of the object\'s metaclass:\n\n >>> class Meta(type):\n ... def __getattribute__(*args):\n ... print "Metaclass getattribute invoked"\n ... return type.__getattribute__(*args)\n ...\n >>> class C(object):\n ... __metaclass__ = Meta\n ... def __len__(self):\n ... return 10\n ... def __getattribute__(*args):\n ... print "Class getattribute invoked"\n ... return object.__getattribute__(*args)\n ...\n >>> c = C()\n >>> c.__len__() # Explicit lookup via instance\n Class getattribute invoked\n 10\n >>> type(c).__len__(c) # Explicit lookup via type\n Metaclass getattribute invoked\n 10\n >>> len(c) # Implicit lookup\n 10\n\nBypassing the ``__getattribute__()`` machinery in this fashion\nprovides significant scope for speed optimisations within the\ninterpreter, at the cost of some flexibility in the handling of\nspecial methods (the special method *must* be set on the class object\nitself in order to be consistently invoked by the interpreter).\n\n-[ Footnotes ]-\n\n[1] It *is* possible in some cases to change an object\'s type, under\n certain controlled conditions. It generally isn\'t a good idea\n though, since it can lead to some very strange behaviour if it is\n handled incorrectly.\n\n[2] For operands of the same type, it is assumed that if the non-\n reflected method (such as ``__add__()``) fails the operation is\n not supported, which is why the reflected method is not called.\n', + 'specialnames': u'\nSpecial method names\n********************\n\nA class can implement certain operations that are invoked by special\nsyntax (such as arithmetic operations or subscripting and slicing) by\ndefining methods with special names. This is Python\'s approach to\n*operator overloading*, allowing classes to define their own behavior\nwith respect to language operators. 
For instance, if a class defines\na method named ``__getitem__()``, and ``x`` is an instance of this\nclass, then ``x[i]`` is roughly equivalent to ``x.__getitem__(i)`` for\nold-style classes and ``type(x).__getitem__(x, i)`` for new-style\nclasses. Except where mentioned, attempts to execute an operation\nraise an exception when no appropriate method is defined (typically\n``AttributeError`` or ``TypeError``).\n\nWhen implementing a class that emulates any built-in type, it is\nimportant that the emulation only be implemented to the degree that it\nmakes sense for the object being modelled. For example, some\nsequences may work well with retrieval of individual elements, but\nextracting a slice may not make sense. (One example of this is the\n``NodeList`` interface in the W3C\'s Document Object Model.)\n\n\nBasic customization\n===================\n\nobject.__new__(cls[, ...])\n\n Called to create a new instance of class *cls*. ``__new__()`` is a\n static method (special-cased so you need not declare it as such)\n that takes the class of which an instance was requested as its\n first argument. The remaining arguments are those passed to the\n object constructor expression (the call to the class). The return\n value of ``__new__()`` should be the new object instance (usually\n an instance of *cls*).\n\n Typical implementations create a new instance of the class by\n invoking the superclass\'s ``__new__()`` method using\n ``super(currentclass, cls).__new__(cls[, ...])`` with appropriate\n arguments and then modifying the newly-created instance as\n necessary before returning it.\n\n If ``__new__()`` returns an instance of *cls*, then the new\n instance\'s ``__init__()`` method will be invoked like\n ``__init__(self[, ...])``, where *self* is the new instance and the\n remaining arguments are the same as were passed to ``__new__()``.\n\n If ``__new__()`` does not return an instance of *cls*, then the new\n instance\'s ``__init__()`` method will not be invoked.\n\n ``__new__()`` is intended mainly to allow subclasses of immutable\n types (like int, str, or tuple) to customize instance creation. It\n is also commonly overridden in custom metaclasses in order to\n customize class creation.\n\nobject.__init__(self[, ...])\n\n Called when the instance is created. The arguments are those\n passed to the class constructor expression. If a base class has an\n ``__init__()`` method, the derived class\'s ``__init__()`` method,\n if any, must explicitly call it to ensure proper initialization of\n the base class part of the instance; for example:\n ``BaseClass.__init__(self, [args...])``. As a special constraint\n on constructors, no value may be returned; doing so will cause a\n ``TypeError`` to be raised at runtime.\n\nobject.__del__(self)\n\n Called when the instance is about to be destroyed. This is also\n called a destructor. If a base class has a ``__del__()`` method,\n the derived class\'s ``__del__()`` method, if any, must explicitly\n call it to ensure proper deletion of the base class part of the\n instance. Note that it is possible (though not recommended!) for\n the ``__del__()`` method to postpone destruction of the instance by\n creating a new reference to it. It may then be called at a later\n time when this new reference is deleted. 
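A sketch under stated assumptions (the ``Celsius`` class is invented for the example): subclassing an immutable type is the typical reason to override ``__new__()``, with ``__init__()`` used only for extra state:

   class Celsius(float):
       # float is immutable, so the value must be chosen in __new__()
       def __new__(cls, degrees):
           return super(Celsius, cls).__new__(cls, degrees)
       def __init__(self, degrees):
           self.unit = "C"          # ordinary mutable attribute

   t = Celsius(21.5)
   t + 1.0                          # 22.5, behaves like a float
   t.unit                           # "C"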
It is not guaranteed that\n ``__del__()`` methods are called for objects that still exist when\n the interpreter exits.\n\n Note: ``del x`` doesn\'t directly call ``x.__del__()`` --- the former\n decrements the reference count for ``x`` by one, and the latter\n is only called when ``x``\'s reference count reaches zero. Some\n common situations that may prevent the reference count of an\n object from going to zero include: circular references between\n objects (e.g., a doubly-linked list or a tree data structure with\n parent and child pointers); a reference to the object on the\n stack frame of a function that caught an exception (the traceback\n stored in ``sys.exc_traceback`` keeps the stack frame alive); or\n a reference to the object on the stack frame that raised an\n unhandled exception in interactive mode (the traceback stored in\n ``sys.last_traceback`` keeps the stack frame alive). The first\n situation can only be remedied by explicitly breaking the cycles;\n the latter two situations can be resolved by storing ``None`` in\n ``sys.exc_traceback`` or ``sys.last_traceback``. Circular\n references which are garbage are detected when the option cycle\n detector is enabled (it\'s on by default), but can only be cleaned\n up if there are no Python-level ``__del__()`` methods involved.\n Refer to the documentation for the ``gc`` module for more\n information about how ``__del__()`` methods are handled by the\n cycle detector, particularly the description of the ``garbage``\n value.\n\n Warning: Due to the precarious circumstances under which ``__del__()``\n methods are invoked, exceptions that occur during their execution\n are ignored, and a warning is printed to ``sys.stderr`` instead.\n Also, when ``__del__()`` is invoked in response to a module being\n deleted (e.g., when execution of the program is done), other\n globals referenced by the ``__del__()`` method may already have\n been deleted or in the process of being torn down (e.g. the\n import machinery shutting down). For this reason, ``__del__()``\n methods should do the absolute minimum needed to maintain\n external invariants. Starting with version 1.5, Python\n guarantees that globals whose name begins with a single\n underscore are deleted from their module before other globals are\n deleted; if no other references to such globals exist, this may\n help in assuring that imported modules are still available at the\n time when the ``__del__()`` method is called.\n\nobject.__repr__(self)\n\n Called by the ``repr()`` built-in function and by string\n conversions (reverse quotes) to compute the "official" string\n representation of an object. If at all possible, this should look\n like a valid Python expression that could be used to recreate an\n object with the same value (given an appropriate environment). If\n this is not possible, a string of the form ``<...some useful\n description...>`` should be returned. The return value must be a\n string object. If a class defines ``__repr__()`` but not\n ``__str__()``, then ``__repr__()`` is also used when an "informal"\n string representation of instances of that class is required.\n\n This is typically used for debugging, so it is important that the\n representation is information-rich and unambiguous.\n\nobject.__str__(self)\n\n Called by the ``str()`` built-in function and by the ``print``\n statement to compute the "informal" string representation of an\n object. 
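As a small illustrative sketch (the ``Point`` class is invented), the usual division of labour between ``__repr__()`` and ``__str__()`` looks like this:

   class Point(object):
       def __init__(self, x, y):
           self.x, self.y = x, y
       def __repr__(self):
           # unambiguous, ideally a valid expression recreating the object
           return "Point(%r, %r)" % (self.x, self.y)
       def __str__(self):
           # convenient form used by str() and the print statement
           return "(%s, %s)" % (self.x, self.y)

   p = Point(1, 2)
   repr(p)    # 'Point(1, 2)'
   str(p)     # '(1, 2)'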
This differs from ``__repr__()`` in that it does not have\n to be a valid Python expression: a more convenient or concise\n representation may be used instead. The return value must be a\n string object.\n\nobject.__lt__(self, other)\nobject.__le__(self, other)\nobject.__eq__(self, other)\nobject.__ne__(self, other)\nobject.__gt__(self, other)\nobject.__ge__(self, other)\n\n New in version 2.1.\n\n These are the so-called "rich comparison" methods, and are called\n for comparison operators in preference to ``__cmp__()`` below. The\n correspondence between operator symbols and method names is as\n follows: ``x<y`` calls ``x.__lt__(y)``, ``x<=y`` calls ``x.__le__(y)``,\n ``x==y`` calls ``x.__eq__(y)``, ``x!=y`` and ``x<>y`` call\n ``x.__ne__(y)``, ``x>y`` calls ``x.__gt__(y)``, and\n ``x>=y`` calls ``x.__ge__(y)``.\n\n A rich comparison method may return the singleton\n ``NotImplemented`` if it does not implement the operation for a\n given pair of arguments. By convention, ``False`` and ``True`` are\n returned for a successful comparison. However, these methods can\n return any value, so if the comparison operator is used in a\n Boolean context (e.g., in the condition of an ``if`` statement),\n Python will call ``bool()`` on the value to determine if the result\n is true or false.\n\n There are no implied relationships among the comparison operators.\n The truth of ``x==y`` does not imply that ``x!=y`` is false.\n Accordingly, when defining ``__eq__()``, one should also define\n ``__ne__()`` so that the operators will behave as expected. See\n the paragraph on ``__hash__()`` for some important notes on\n creating *hashable* objects which support custom comparison\n operations and are usable as dictionary keys.\n\n There are no swapped-argument versions of these methods (to be used\n when the left argument does not support the operation but the right\n argument does); rather, ``__lt__()`` and ``__gt__()`` are each\n other\'s reflection, ``__le__()`` and ``__ge__()`` are each other\'s\n reflection, and ``__eq__()`` and ``__ne__()`` are their own\n reflection.\n\n Arguments to rich comparison methods are never coerced.\n\n To automatically generate ordering operations from a single root\n operation, see ``functools.total_ordering()``.\n\nobject.__cmp__(self, other)\n\n Called by comparison operations if rich comparison (see above) is\n not defined. Should return a negative integer if ``self < other``,\n zero if ``self == other``, a positive integer if ``self > other``.\n If no ``__cmp__()``, ``__eq__()`` or ``__ne__()`` operation is\n defined, class instances are compared by object identity\n ("address"). See also the description of ``__hash__()`` for some\n important notes on creating *hashable* objects which support custom\n comparison operations and are usable as dictionary keys. (Note: the\n restriction that exceptions are not propagated by ``__cmp__()`` has\n been removed since Python 1.5.)\n\nobject.__rcmp__(self, other)\n\n Changed in version 2.1: No longer supported.\n\nobject.__hash__(self)\n\n Called by built-in function ``hash()`` and for operations on\n members of hashed collections including ``set``, ``frozenset``, and\n ``dict``. ``__hash__()`` should return an integer. The only\n required property is that objects which compare equal have the same\n hash value; it is advised to somehow mix together (e.g. 
using\n exclusive or) the hash values for the components of the object that\n also play a part in comparison of objects.\n\n If a class does not define a ``__cmp__()`` or ``__eq__()`` method\n it should not define a ``__hash__()`` operation either; if it\n defines ``__cmp__()`` or ``__eq__()`` but not ``__hash__()``, its\n instances will not be usable in hashed collections. If a class\n defines mutable objects and implements a ``__cmp__()`` or\n ``__eq__()`` method, it should not implement ``__hash__()``, since\n hashable collection implementations require that a object\'s hash\n value is immutable (if the object\'s hash value changes, it will be\n in the wrong hash bucket).\n\n User-defined classes have ``__cmp__()`` and ``__hash__()`` methods\n by default; with them, all objects compare unequal (except with\n themselves) and ``x.__hash__()`` returns ``id(x)``.\n\n Classes which inherit a ``__hash__()`` method from a parent class\n but change the meaning of ``__cmp__()`` or ``__eq__()`` such that\n the hash value returned is no longer appropriate (e.g. by switching\n to a value-based concept of equality instead of the default\n identity based equality) can explicitly flag themselves as being\n unhashable by setting ``__hash__ = None`` in the class definition.\n Doing so means that not only will instances of the class raise an\n appropriate ``TypeError`` when a program attempts to retrieve their\n hash value, but they will also be correctly identified as\n unhashable when checking ``isinstance(obj, collections.Hashable)``\n (unlike classes which define their own ``__hash__()`` to explicitly\n raise ``TypeError``).\n\n Changed in version 2.5: ``__hash__()`` may now also return a long\n integer object; the 32-bit integer is then derived from the hash of\n that object.\n\n Changed in version 2.6: ``__hash__`` may now be set to ``None`` to\n explicitly flag instances of a class as unhashable.\n\nobject.__nonzero__(self)\n\n Called to implement truth value testing and the built-in operation\n ``bool()``; should return ``False`` or ``True``, or their integer\n equivalents ``0`` or ``1``. When this method is not defined,\n ``__len__()`` is called, if it is defined, and the object is\n considered true if its result is nonzero. If a class defines\n neither ``__len__()`` nor ``__nonzero__()``, all its instances are\n considered true.\n\nobject.__unicode__(self)\n\n Called to implement ``unicode()`` built-in; should return a Unicode\n object. When this method is not defined, string conversion is\n attempted, and the result of string conversion is converted to\n Unicode using the system default encoding.\n\n\nCustomizing attribute access\n============================\n\nThe following methods can be defined to customize the meaning of\nattribute access (use of, assignment to, or deletion of ``x.name``)\nfor class instances.\n\nobject.__getattr__(self, name)\n\n Called when an attribute lookup has not found the attribute in the\n usual places (i.e. it is not an instance attribute nor is it found\n in the class tree for ``self``). ``name`` is the attribute name.\n This method should return the (computed) attribute value or raise\n an ``AttributeError`` exception.\n\n Note that if the attribute is found through the normal mechanism,\n ``__getattr__()`` is not called. (This is an intentional asymmetry\n between ``__getattr__()`` and ``__setattr__()``.) This is done both\n for efficiency reasons and because otherwise ``__getattr__()``\n would have no way to access other attributes of the instance. 
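An illustrative sketch (the ``LoggedList`` wrapper and its ``_obj`` attribute are invented): ``__getattr__()`` is only consulted for attributes that are *not* found normally, which makes it convenient for delegation:

   class LoggedList(object):
       def __init__(self, obj):
           self._obj = obj              # stored normally in __dict__
       def __getattr__(self, name):
           # only reached for names missing from the instance and class
           print "fetching %r" % name
           return getattr(self._obj, name)

   wrapper = LoggedList([1, 2, 3])
   wrapper.append(4)                    # prints "fetching 'append'"
   wrapper._obj                         # found normally; __getattr__() not called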
Note\n that at least for instance variables, you can fake total control by\n not inserting any values in the instance attribute dictionary (but\n instead inserting them in another object). See the\n ``__getattribute__()`` method below for a way to actually get total\n control in new-style classes.\n\nobject.__setattr__(self, name, value)\n\n Called when an attribute assignment is attempted. This is called\n instead of the normal mechanism (i.e. store the value in the\n instance dictionary). *name* is the attribute name, *value* is the\n value to be assigned to it.\n\n If ``__setattr__()`` wants to assign to an instance attribute, it\n should not simply execute ``self.name = value`` --- this would\n cause a recursive call to itself. Instead, it should insert the\n value in the dictionary of instance attributes, e.g.,\n ``self.__dict__[name] = value``. For new-style classes, rather\n than accessing the instance dictionary, it should call the base\n class method with the same name, for example,\n ``object.__setattr__(self, name, value)``.\n\nobject.__delattr__(self, name)\n\n Like ``__setattr__()`` but for attribute deletion instead of\n assignment. This should only be implemented if ``del obj.name`` is\n meaningful for the object.\n\n\nMore attribute access for new-style classes\n-------------------------------------------\n\nThe following methods only apply to new-style classes.\n\nobject.__getattribute__(self, name)\n\n Called unconditionally to implement attribute accesses for\n instances of the class. If the class also defines\n ``__getattr__()``, the latter will not be called unless\n ``__getattribute__()`` either calls it explicitly or raises an\n ``AttributeError``. This method should return the (computed)\n attribute value or raise an ``AttributeError`` exception. In order\n to avoid infinite recursion in this method, its implementation\n should always call the base class method with the same name to\n access any attributes it needs, for example,\n ``object.__getattribute__(self, name)``.\n\n Note: This method may still be bypassed when looking up special methods\n as the result of implicit invocation via language syntax or\n built-in functions. See *Special method lookup for new-style\n classes*.\n\n\nImplementing Descriptors\n------------------------\n\nThe following methods only apply when an instance of the class\ncontaining the method (a so-called *descriptor* class) appears in an\n*owner* class (the descriptor must be in either the owner\'s class\ndictionary or in the class dictionary for one of its parents). In the\nexamples below, "the attribute" refers to the attribute whose name is\nthe key of the property in the owner class\' ``__dict__``.\n\nobject.__get__(self, instance, owner)\n\n Called to get the attribute of the owner class (class attribute\n access) or of an instance of that class (instance attribute\n access). *owner* is always the owner class, while *instance* is the\n instance that the attribute was accessed through, or ``None`` when\n the attribute is accessed through the *owner*. 
This method should\n return the (computed) attribute value or raise an\n ``AttributeError`` exception.\n\nobject.__set__(self, instance, value)\n\n Called to set the attribute on an instance *instance* of the owner\n class to a new value, *value*.\n\nobject.__delete__(self, instance)\n\n Called to delete the attribute on an instance *instance* of the\n owner class.\n\n\nInvoking Descriptors\n--------------------\n\nIn general, a descriptor is an object attribute with "binding\nbehavior", one whose attribute access has been overridden by methods\nin the descriptor protocol: ``__get__()``, ``__set__()``, and\n``__delete__()``. If any of those methods are defined for an object,\nit is said to be a descriptor.\n\nThe default behavior for attribute access is to get, set, or delete\nthe attribute from an object\'s dictionary. For instance, ``a.x`` has a\nlookup chain starting with ``a.__dict__[\'x\']``, then\n``type(a).__dict__[\'x\']``, and continuing through the base classes of\n``type(a)`` excluding metaclasses.\n\nHowever, if the looked-up value is an object defining one of the\ndescriptor methods, then Python may override the default behavior and\ninvoke the descriptor method instead. Where this occurs in the\nprecedence chain depends on which descriptor methods were defined and\nhow they were called. Note that descriptors are only invoked for new\nstyle objects or classes (ones that subclass ``object()`` or\n``type()``).\n\nThe starting point for descriptor invocation is a binding, ``a.x``.\nHow the arguments are assembled depends on ``a``:\n\nDirect Call\n The simplest and least common call is when user code directly\n invokes a descriptor method: ``x.__get__(a)``.\n\nInstance Binding\n If binding to a new-style object instance, ``a.x`` is transformed\n into the call: ``type(a).__dict__[\'x\'].__get__(a, type(a))``.\n\nClass Binding\n If binding to a new-style class, ``A.x`` is transformed into the\n call: ``A.__dict__[\'x\'].__get__(None, A)``.\n\nSuper Binding\n If ``a`` is an instance of ``super``, then the binding ``super(B,\n obj).m()`` searches ``obj.__class__.__mro__`` for the base class\n ``A`` immediately preceding ``B`` and then invokes the descriptor\n with the call: ``A.__dict__[\'m\'].__get__(obj, obj.__class__)``.\n\nFor instance bindings, the precedence of descriptor invocation depends\non the which descriptor methods are defined. A descriptor can define\nany combination of ``__get__()``, ``__set__()`` and ``__delete__()``.\nIf it does not define ``__get__()``, then accessing the attribute will\nreturn the descriptor object itself unless there is a value in the\nobject\'s instance dictionary. If the descriptor defines ``__set__()``\nand/or ``__delete__()``, it is a data descriptor; if it defines\nneither, it is a non-data descriptor. Normally, data descriptors\ndefine both ``__get__()`` and ``__set__()``, while non-data\ndescriptors have just the ``__get__()`` method. Data descriptors with\n``__set__()`` and ``__get__()`` defined always override a redefinition\nin an instance dictionary. In contrast, non-data descriptors can be\noverridden by instances.\n\nPython methods (including ``staticmethod()`` and ``classmethod()``)\nare implemented as non-data descriptors. Accordingly, instances can\nredefine and override methods. 
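A sketch under stated assumptions (``Typed`` and ``Order`` are hypothetical names): a data descriptor defines both ``__get__()`` and ``__set__()`` and therefore takes precedence over the instance dictionary:

   class Typed(object):
       # Data descriptor enforcing a type on one attribute.
       def __init__(self, name, kind):
           self.name, self.kind = name, kind
       def __get__(self, instance, owner):
           if instance is None:
               return self                      # accessed on the class itself
           return instance.__dict__[self.name]
       def __set__(self, instance, value):
           if not isinstance(value, self.kind):
               raise TypeError("%s must be %s" % (self.name, self.kind.__name__))
           instance.__dict__[self.name] = value

   class Order(object):
       quantity = Typed("quantity", int)

   o = Order()
   o.quantity = 3        # routed through Typed.__set__()
   o.quantity            # 3, routed through Typed.__get__()
   # o.quantity = "many" would raise TypeError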
This allows individual instances to\nacquire behaviors that differ from other instances of the same class.\n\nThe ``property()`` function is implemented as a data descriptor.\nAccordingly, instances cannot override the behavior of a property.\n\n\n__slots__\n---------\n\nBy default, instances of both old and new-style classes have a\ndictionary for attribute storage. This wastes space for objects\nhaving very few instance variables. The space consumption can become\nacute when creating large numbers of instances.\n\nThe default can be overridden by defining *__slots__* in a new-style\nclass definition. The *__slots__* declaration takes a sequence of\ninstance variables and reserves just enough space in each instance to\nhold a value for each variable. Space is saved because *__dict__* is\nnot created for each instance.\n\n__slots__\n\n This class variable can be assigned a string, iterable, or sequence\n of strings with variable names used by instances. If defined in a\n new-style class, *__slots__* reserves space for the declared\n variables and prevents the automatic creation of *__dict__* and\n *__weakref__* for each instance.\n\n New in version 2.2.\n\nNotes on using *__slots__*\n\n* When inheriting from a class without *__slots__*, the *__dict__*\n attribute of that class will always be accessible, so a *__slots__*\n definition in the subclass is meaningless.\n\n* Without a *__dict__* variable, instances cannot be assigned new\n variables not listed in the *__slots__* definition. Attempts to\n assign to an unlisted variable name raises ``AttributeError``. If\n dynamic assignment of new variables is desired, then add\n ``\'__dict__\'`` to the sequence of strings in the *__slots__*\n declaration.\n\n Changed in version 2.3: Previously, adding ``\'__dict__\'`` to the\n *__slots__* declaration would not enable the assignment of new\n attributes not specifically listed in the sequence of instance\n variable names.\n\n* Without a *__weakref__* variable for each instance, classes defining\n *__slots__* do not support weak references to its instances. If weak\n reference support is needed, then add ``\'__weakref__\'`` to the\n sequence of strings in the *__slots__* declaration.\n\n Changed in version 2.3: Previously, adding ``\'__weakref__\'`` to the\n *__slots__* declaration would not enable support for weak\n references.\n\n* *__slots__* are implemented at the class level by creating\n descriptors (*Implementing Descriptors*) for each variable name. As\n a result, class attributes cannot be used to set default values for\n instance variables defined by *__slots__*; otherwise, the class\n attribute would overwrite the descriptor assignment.\n\n* The action of a *__slots__* declaration is limited to the class\n where it is defined. As a result, subclasses will have a *__dict__*\n unless they also define *__slots__* (which must only contain names\n of any *additional* slots).\n\n* If a class defines a slot also defined in a base class, the instance\n variable defined by the base class slot is inaccessible (except by\n retrieving its descriptor directly from the base class). This\n renders the meaning of the program undefined. In the future, a\n check may be added to prevent this.\n\n* Nonempty *__slots__* does not work for classes derived from\n "variable-length" built-in types such as ``long``, ``str`` and\n ``tuple``.\n\n* Any non-string iterable may be assigned to *__slots__*. 
Mappings may\n also be used; however, in the future, special meaning may be\n assigned to the values corresponding to each key.\n\n* *__class__* assignment works only if both classes have the same\n *__slots__*.\n\n Changed in version 2.6: Previously, *__class__* assignment raised an\n error if either new or old class had *__slots__*.\n\n\nCustomizing class creation\n==========================\n\nBy default, new-style classes are constructed using ``type()``. A\nclass definition is read into a separate namespace and the value of\nclass name is bound to the result of ``type(name, bases, dict)``.\n\nWhen the class definition is read, if *__metaclass__* is defined then\nthe callable assigned to it will be called instead of ``type()``. This\nallows classes or functions to be written which monitor or alter the\nclass creation process:\n\n* Modifying the class dictionary prior to the class being created.\n\n* Returning an instance of another class -- essentially performing the\n role of a factory function.\n\nThese steps will have to be performed in the metaclass\'s ``__new__()``\nmethod -- ``type.__new__()`` can then be called from this method to\ncreate a class with different properties. This example adds a new\nelement to the class dictionary before creating the class:\n\n class metacls(type):\n def __new__(mcs, name, bases, dict):\n dict[\'foo\'] = \'metacls was here\'\n return type.__new__(mcs, name, bases, dict)\n\nYou can of course also override other class methods (or add new\nmethods); for example defining a custom ``__call__()`` method in the\nmetaclass allows custom behavior when the class is called, e.g. not\nalways creating a new instance.\n\n__metaclass__\n\n This variable can be any callable accepting arguments for ``name``,\n ``bases``, and ``dict``. Upon class creation, the callable is used\n instead of the built-in ``type()``.\n\n New in version 2.2.\n\nThe appropriate metaclass is determined by the following precedence\nrules:\n\n* If ``dict[\'__metaclass__\']`` exists, it is used.\n\n* Otherwise, if there is at least one base class, its metaclass is\n used (this looks for a *__class__* attribute first and if not found,\n uses its type).\n\n* Otherwise, if a global variable named __metaclass__ exists, it is\n used.\n\n* Otherwise, the old-style, classic metaclass (types.ClassType) is\n used.\n\nThe potential uses for metaclasses are boundless. Some ideas that have\nbeen explored including logging, interface checking, automatic\ndelegation, automatic property creation, proxies, frameworks, and\nautomatic resource locking/synchronization.\n\n\nCustomizing instance and subclass checks\n========================================\n\nNew in version 2.6.\n\nThe following methods are used to override the default behavior of the\n``isinstance()`` and ``issubclass()`` built-in functions.\n\nIn particular, the metaclass ``abc.ABCMeta`` implements these methods\nin order to allow the addition of Abstract Base Classes (ABCs) as\n"virtual base classes" to any class or type (including built-in\ntypes), including other ABCs.\n\nclass.__instancecheck__(self, instance)\n\n Return true if *instance* should be considered a (direct or\n indirect) instance of *class*. If defined, called to implement\n ``isinstance(instance, class)``.\n\nclass.__subclasscheck__(self, subclass)\n\n Return true if *subclass* should be considered a (direct or\n indirect) subclass of *class*. 
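An illustrative sketch (``DuckMeta``, ``Duck`` and ``Robot`` are invented names): because the hook lives on the metaclass, ``isinstance()`` can be redefined without touching the instances themselves:

   class DuckMeta(type):
       def __instancecheck__(cls, instance):
           # anything with a quack() method counts as a Duck
           return hasattr(instance, "quack")

   class Duck(object):
       __metaclass__ = DuckMeta

   class Robot(object):
       def quack(self):
           return "beep"

   isinstance(Robot(), Duck)    # True, via DuckMeta.__instancecheck__()
   isinstance(42, Duck)         # False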
If defined, called to implement\n ``issubclass(subclass, class)``.\n\nNote that these methods are looked up on the type (metaclass) of a\nclass. They cannot be defined as class methods in the actual class.\nThis is consistent with the lookup of special methods that are called\non instances, only in this case the instance is itself a class.\n\nSee also:\n\n **PEP 3119** - Introducing Abstract Base Classes\n Includes the specification for customizing ``isinstance()`` and\n ``issubclass()`` behavior through ``__instancecheck__()`` and\n ``__subclasscheck__()``, with motivation for this functionality\n in the context of adding Abstract Base Classes (see the ``abc``\n module) to the language.\n\n\nEmulating callable objects\n==========================\n\nobject.__call__(self[, args...])\n\n Called when the instance is "called" as a function; if this method\n is defined, ``x(arg1, arg2, ...)`` is a shorthand for\n ``x.__call__(arg1, arg2, ...)``.\n\n\nEmulating container types\n=========================\n\nThe following methods can be defined to implement container objects.\nContainers usually are sequences (such as lists or tuples) or mappings\n(like dictionaries), but can represent other containers as well. The\nfirst set of methods is used either to emulate a sequence or to\nemulate a mapping; the difference is that for a sequence, the\nallowable keys should be the integers *k* for which ``0 <= k < N``\nwhere *N* is the length of the sequence, or slice objects, which\ndefine a range of items. (For backwards compatibility, the method\n``__getslice__()`` (see below) can also be defined to handle simple,\nbut not extended slices.) It is also recommended that mappings provide\nthe methods ``keys()``, ``values()``, ``items()``, ``has_key()``,\n``get()``, ``clear()``, ``setdefault()``, ``iterkeys()``,\n``itervalues()``, ``iteritems()``, ``pop()``, ``popitem()``,\n``copy()``, and ``update()`` behaving similar to those for Python\'s\nstandard dictionary objects. The ``UserDict`` module provides a\n``DictMixin`` class to help create those methods from a base set of\n``__getitem__()``, ``__setitem__()``, ``__delitem__()``, and\n``keys()``. Mutable sequences should provide methods ``append()``,\n``count()``, ``index()``, ``extend()``, ``insert()``, ``pop()``,\n``remove()``, ``reverse()`` and ``sort()``, like Python standard list\nobjects. Finally, sequence types should implement addition (meaning\nconcatenation) and multiplication (meaning repetition) by defining the\nmethods ``__add__()``, ``__radd__()``, ``__iadd__()``, ``__mul__()``,\n``__rmul__()`` and ``__imul__()`` described below; they should not\ndefine ``__coerce__()`` or other numerical operators. It is\nrecommended that both mappings and sequences implement the\n``__contains__()`` method to allow efficient use of the ``in``\noperator; for mappings, ``in`` should be equivalent of ``has_key()``;\nfor sequences, it should search through the values. It is further\nrecommended that both mappings and sequences implement the\n``__iter__()`` method to allow efficient iteration through the\ncontainer; for mappings, ``__iter__()`` should be the same as\n``iterkeys()``; for sequences, it should iterate through the values.\n\nobject.__len__(self)\n\n Called to implement the built-in function ``len()``. Should return\n the length of the object, an integer ``>=`` 0. 
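A minimal sketch (the ``Adder`` class is hypothetical): defining ``__call__()`` makes instances usable wherever a function is expected:

   class Adder(object):
       def __init__(self, n):
           self.n = n
       def __call__(self, x):
           return x + self.n        # add3(10) is add3.__call__(10)

   add3 = Adder(3)
   add3(10)                         # 13
   map(add3, [1, 2, 3])             # [4, 5, 6]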
Also, an object\n that doesn\'t define a ``__nonzero__()`` method and whose\n ``__len__()`` method returns zero is considered to be false in a\n Boolean context.\n\nobject.__getitem__(self, key)\n\n Called to implement evaluation of ``self[key]``. For sequence\n types, the accepted keys should be integers and slice objects.\n Note that the special interpretation of negative indexes (if the\n class wishes to emulate a sequence type) is up to the\n ``__getitem__()`` method. If *key* is of an inappropriate type,\n ``TypeError`` may be raised; if of a value outside the set of\n indexes for the sequence (after any special interpretation of\n negative values), ``IndexError`` should be raised. For mapping\n types, if *key* is missing (not in the container), ``KeyError``\n should be raised.\n\n Note: ``for`` loops expect that an ``IndexError`` will be raised for\n illegal indexes to allow proper detection of the end of the\n sequence.\n\nobject.__setitem__(self, key, value)\n\n Called to implement assignment to ``self[key]``. Same note as for\n ``__getitem__()``. This should only be implemented for mappings if\n the objects support changes to the values for keys, or if new keys\n can be added, or for sequences if elements can be replaced. The\n same exceptions should be raised for improper *key* values as for\n the ``__getitem__()`` method.\n\nobject.__delitem__(self, key)\n\n Called to implement deletion of ``self[key]``. Same note as for\n ``__getitem__()``. This should only be implemented for mappings if\n the objects support removal of keys, or for sequences if elements\n can be removed from the sequence. The same exceptions should be\n raised for improper *key* values as for the ``__getitem__()``\n method.\n\nobject.__iter__(self)\n\n This method is called when an iterator is required for a container.\n This method should return a new iterator object that can iterate\n over all the objects in the container. For mappings, it should\n iterate over the keys of the container, and should also be made\n available as the method ``iterkeys()``.\n\n Iterator objects also need to implement this method; they are\n required to return themselves. For more information on iterator\n objects, see *Iterator Types*.\n\nobject.__reversed__(self)\n\n Called (if present) by the ``reversed()`` built-in to implement\n reverse iteration. It should return a new iterator object that\n iterates over all the objects in the container in reverse order.\n\n If the ``__reversed__()`` method is not provided, the\n ``reversed()`` built-in will fall back to using the sequence\n protocol (``__len__()`` and ``__getitem__()``). Objects that\n support the sequence protocol should only provide\n ``__reversed__()`` if they can provide an implementation that is\n more efficient than the one provided by ``reversed()``.\n\n New in version 2.6.\n\nThe membership test operators (``in`` and ``not in``) are normally\nimplemented as an iteration through a sequence. However, container\nobjects can supply the following special method with a more efficient\nimplementation, which also does not require the object be a sequence.\n\nobject.__contains__(self, item)\n\n Called to implement membership test operators. Should return true\n if *item* is in *self*, false otherwise. 
For mapping objects, this\n should consider the keys of the mapping rather than the values or\n the key-item pairs.\n\n For objects that don\'t define ``__contains__()``, the membership\n test first tries iteration via ``__iter__()``, then the old\n sequence iteration protocol via ``__getitem__()``, see *this\n section in the language reference*.\n\n\nAdditional methods for emulation of sequence types\n==================================================\n\nThe following optional methods can be defined to further emulate\nsequence objects. Immutable sequences methods should at most only\ndefine ``__getslice__()``; mutable sequences might define all three\nmethods.\n\nobject.__getslice__(self, i, j)\n\n Deprecated since version 2.0: Support slice objects as parameters\n to the ``__getitem__()`` method. (However, built-in types in\n CPython currently still implement ``__getslice__()``. Therefore,\n you have to override it in derived classes when implementing\n slicing.)\n\n Called to implement evaluation of ``self[i:j]``. The returned\n object should be of the same type as *self*. Note that missing *i*\n or *j* in the slice expression are replaced by zero or\n ``sys.maxint``, respectively. If negative indexes are used in the\n slice, the length of the sequence is added to that index. If the\n instance does not implement the ``__len__()`` method, an\n ``AttributeError`` is raised. No guarantee is made that indexes\n adjusted this way are not still negative. Indexes which are\n greater than the length of the sequence are not modified. If no\n ``__getslice__()`` is found, a slice object is created instead, and\n passed to ``__getitem__()`` instead.\n\nobject.__setslice__(self, i, j, sequence)\n\n Called to implement assignment to ``self[i:j]``. Same notes for *i*\n and *j* as for ``__getslice__()``.\n\n This method is deprecated. If no ``__setslice__()`` is found, or\n for extended slicing of the form ``self[i:j:k]``, a slice object is\n created, and passed to ``__setitem__()``, instead of\n ``__setslice__()`` being called.\n\nobject.__delslice__(self, i, j)\n\n Called to implement deletion of ``self[i:j]``. Same notes for *i*\n and *j* as for ``__getslice__()``. This method is deprecated. If no\n ``__delslice__()`` is found, or for extended slicing of the form\n ``self[i:j:k]``, a slice object is created, and passed to\n ``__delitem__()``, instead of ``__delslice__()`` being called.\n\nNotice that these methods are only invoked when a single slice with a\nsingle colon is used, and the slice method is available. 
For slice\noperations involving extended slice notation, or in absence of the\nslice methods, ``__getitem__()``, ``__setitem__()`` or\n``__delitem__()`` is called with a slice object as argument.\n\nThe following example demonstrate how to make your program or module\ncompatible with earlier versions of Python (assuming that methods\n``__getitem__()``, ``__setitem__()`` and ``__delitem__()`` support\nslice objects as arguments):\n\n class MyClass:\n ...\n def __getitem__(self, index):\n ...\n def __setitem__(self, index, value):\n ...\n def __delitem__(self, index):\n ...\n\n if sys.version_info < (2, 0):\n # They won\'t be defined if version is at least 2.0 final\n\n def __getslice__(self, i, j):\n return self[max(0, i):max(0, j):]\n def __setslice__(self, i, j, seq):\n self[max(0, i):max(0, j):] = seq\n def __delslice__(self, i, j):\n del self[max(0, i):max(0, j):]\n ...\n\nNote the calls to ``max()``; these are necessary because of the\nhandling of negative indices before the ``__*slice__()`` methods are\ncalled. When negative indexes are used, the ``__*item__()`` methods\nreceive them as provided, but the ``__*slice__()`` methods get a\n"cooked" form of the index values. For each negative index value, the\nlength of the sequence is added to the index before calling the method\n(which may still result in a negative index); this is the customary\nhandling of negative indexes by the built-in sequence types, and the\n``__*item__()`` methods are expected to do this as well. However,\nsince they should already be doing that, negative indexes cannot be\npassed in; they must be constrained to the bounds of the sequence\nbefore being passed to the ``__*item__()`` methods. Calling ``max(0,\ni)`` conveniently returns the proper value.\n\n\nEmulating numeric types\n=======================\n\nThe following methods can be defined to emulate numeric objects.\nMethods corresponding to operations that are not supported by the\nparticular kind of number implemented (e.g., bitwise operations for\nnon-integral numbers) should be left undefined.\n\nobject.__add__(self, other)\nobject.__sub__(self, other)\nobject.__mul__(self, other)\nobject.__floordiv__(self, other)\nobject.__mod__(self, other)\nobject.__divmod__(self, other)\nobject.__pow__(self, other[, modulo])\nobject.__lshift__(self, other)\nobject.__rshift__(self, other)\nobject.__and__(self, other)\nobject.__xor__(self, other)\nobject.__or__(self, other)\n\n These methods are called to implement the binary arithmetic\n operations (``+``, ``-``, ``*``, ``//``, ``%``, ``divmod()``,\n ``pow()``, ``**``, ``<<``, ``>>``, ``&``, ``^``, ``|``). For\n instance, to evaluate the expression ``x + y``, where *x* is an\n instance of a class that has an ``__add__()`` method,\n ``x.__add__(y)`` is called. The ``__divmod__()`` method should be\n the equivalent to using ``__floordiv__()`` and ``__mod__()``; it\n should not be related to ``__truediv__()`` (described below). Note\n that ``__pow__()`` should be defined to accept an optional third\n argument if the ternary version of the built-in ``pow()`` function\n is to be supported.\n\n If one of those methods does not support the operation with the\n supplied arguments, it should return ``NotImplemented``.\n\nobject.__div__(self, other)\nobject.__truediv__(self, other)\n\n The division operator (``/``) is implemented by these methods. The\n ``__truediv__()`` method is used when ``__future__.division`` is in\n effect, otherwise ``__div__()`` is used. 
If only one of these two\n methods is defined, the object will not support division in the\n alternate context; ``TypeError`` will be raised instead.\n\nobject.__radd__(self, other)\nobject.__rsub__(self, other)\nobject.__rmul__(self, other)\nobject.__rdiv__(self, other)\nobject.__rtruediv__(self, other)\nobject.__rfloordiv__(self, other)\nobject.__rmod__(self, other)\nobject.__rdivmod__(self, other)\nobject.__rpow__(self, other)\nobject.__rlshift__(self, other)\nobject.__rrshift__(self, other)\nobject.__rand__(self, other)\nobject.__rxor__(self, other)\nobject.__ror__(self, other)\n\n These methods are called to implement the binary arithmetic\n operations (``+``, ``-``, ``*``, ``/``, ``%``, ``divmod()``,\n ``pow()``, ``**``, ``<<``, ``>>``, ``&``, ``^``, ``|``) with\n reflected (swapped) operands. These functions are only called if\n the left operand does not support the corresponding operation and\n the operands are of different types. [2] For instance, to evaluate\n the expression ``x - y``, where *y* is an instance of a class that\n has an ``__rsub__()`` method, ``y.__rsub__(x)`` is called if\n ``x.__sub__(y)`` returns *NotImplemented*.\n\n Note that ternary ``pow()`` will not try calling ``__rpow__()``\n (the coercion rules would become too complicated).\n\n Note: If the right operand\'s type is a subclass of the left operand\'s\n type and that subclass provides the reflected method for the\n operation, this method will be called before the left operand\'s\n non-reflected method. This behavior allows subclasses to\n override their ancestors\' operations.\n\nobject.__iadd__(self, other)\nobject.__isub__(self, other)\nobject.__imul__(self, other)\nobject.__idiv__(self, other)\nobject.__itruediv__(self, other)\nobject.__ifloordiv__(self, other)\nobject.__imod__(self, other)\nobject.__ipow__(self, other[, modulo])\nobject.__ilshift__(self, other)\nobject.__irshift__(self, other)\nobject.__iand__(self, other)\nobject.__ixor__(self, other)\nobject.__ior__(self, other)\n\n These methods are called to implement the augmented arithmetic\n assignments (``+=``, ``-=``, ``*=``, ``/=``, ``//=``, ``%=``,\n ``**=``, ``<<=``, ``>>=``, ``&=``, ``^=``, ``|=``). These methods\n should attempt to do the operation in-place (modifying *self*) and\n return the result (which could be, but does not have to be,\n *self*). If a specific method is not defined, the augmented\n assignment falls back to the normal methods. For instance, to\n execute the statement ``x += y``, where *x* is an instance of a\n class that has an ``__iadd__()`` method, ``x.__iadd__(y)`` is\n called. If *x* is an instance of a class that does not define a\n ``__iadd__()`` method, ``x.__add__(y)`` and ``y.__radd__(x)`` are\n considered, as with the evaluation of ``x + y``.\n\nobject.__neg__(self)\nobject.__pos__(self)\nobject.__abs__(self)\nobject.__invert__(self)\n\n Called to implement the unary arithmetic operations (``-``, ``+``,\n ``abs()`` and ``~``).\n\nobject.__complex__(self)\nobject.__int__(self)\nobject.__long__(self)\nobject.__float__(self)\n\n Called to implement the built-in functions ``complex()``,\n ``int()``, ``long()``, and ``float()``. Should return a value of\n the appropriate type.\n\nobject.__oct__(self)\nobject.__hex__(self)\n\n Called to implement the built-in functions ``oct()`` and ``hex()``.\n Should return a string value.\n\nobject.__index__(self)\n\n Called to implement ``operator.index()``. Also called whenever\n Python needs an integer object (such as in slicing). 
Must return\n an integer (int or long).\n\n New in version 2.5.\n\nobject.__coerce__(self, other)\n\n Called to implement "mixed-mode" numeric arithmetic. Should either\n return a 2-tuple containing *self* and *other* converted to a\n common numeric type, or ``None`` if conversion is impossible. When\n the common type would be the type of ``other``, it is sufficient to\n return ``None``, since the interpreter will also ask the other\n object to attempt a coercion (but sometimes, if the implementation\n of the other type cannot be changed, it is useful to do the\n conversion to the other type here). A return value of\n ``NotImplemented`` is equivalent to returning ``None``.\n\n\nCoercion rules\n==============\n\nThis section used to document the rules for coercion. As the language\nhas evolved, the coercion rules have become hard to document\nprecisely; documenting what one version of one particular\nimplementation does is undesirable. Instead, here are some informal\nguidelines regarding coercion. In Python 3.0, coercion will not be\nsupported.\n\n* If the left operand of a % operator is a string or Unicode object,\n no coercion takes place and the string formatting operation is\n invoked instead.\n\n* It is no longer recommended to define a coercion operation. Mixed-\n mode operations on types that don\'t define coercion pass the\n original arguments to the operation.\n\n* New-style classes (those derived from ``object``) never invoke the\n ``__coerce__()`` method in response to a binary operator; the only\n time ``__coerce__()`` is invoked is when the built-in function\n ``coerce()`` is called.\n\n* For most intents and purposes, an operator that returns\n ``NotImplemented`` is treated the same as one that is not\n implemented at all.\n\n* Below, ``__op__()`` and ``__rop__()`` are used to signify the\n generic method names corresponding to an operator; ``__iop__()`` is\n used for the corresponding in-place operator. For example, for the\n operator \'``+``\', ``__add__()`` and ``__radd__()`` are used for the\n left and right variant of the binary operator, and ``__iadd__()``\n for the in-place variant.\n\n* For objects *x* and *y*, first ``x.__op__(y)`` is tried. If this is\n not implemented or returns ``NotImplemented``, ``y.__rop__(x)`` is\n tried. If this is also not implemented or returns\n ``NotImplemented``, a ``TypeError`` exception is raised. But see\n the following exception:\n\n* Exception to the previous item: if the left operand is an instance\n of a built-in type or a new-style class, and the right operand is an\n instance of a proper subclass of that type or class and overrides\n the base\'s ``__rop__()`` method, the right operand\'s ``__rop__()``\n method is tried *before* the left operand\'s ``__op__()`` method.\n\n This is done so that a subclass can completely override binary\n operators. Otherwise, the left operand\'s ``__op__()`` method would\n always accept the right operand: when an instance of a given class\n is expected, an instance of a subclass of that class is always\n acceptable.\n\n* When either operand type defines a coercion, this coercion is called\n before that type\'s ``__op__()`` or ``__rop__()`` method is called,\n but no sooner. If the coercion returns an object of a different\n type for the operand whose coercion is invoked, part of the process\n is redone using the new object.\n\n* When an in-place operator (like \'``+=``\') is used, if the left\n operand implements ``__iop__()``, it is invoked without any\n coercion. 
When the operation falls back to ``__op__()`` and/or\n ``__rop__()``, the normal coercion rules apply.\n\n* In ``x + y``, if *x* is a sequence that implements sequence\n concatenation, sequence concatenation is invoked.\n\n* In ``x * y``, if one operator is a sequence that implements sequence\n repetition, and the other is an integer (``int`` or ``long``),\n sequence repetition is invoked.\n\n* Rich comparisons (implemented by methods ``__eq__()`` and so on)\n never use coercion. Three-way comparison (implemented by\n ``__cmp__()``) does use coercion under the same conditions as other\n binary operations use it.\n\n* In the current implementation, the built-in numeric types ``int``,\n ``long``, ``float``, and ``complex`` do not use coercion. All these\n types implement a ``__coerce__()`` method, for use by the built-in\n ``coerce()`` function.\n\n Changed in version 2.7.\n\n\nWith Statement Context Managers\n===============================\n\nNew in version 2.5.\n\nA *context manager* is an object that defines the runtime context to\nbe established when executing a ``with`` statement. The context\nmanager handles the entry into, and the exit from, the desired runtime\ncontext for the execution of the block of code. Context managers are\nnormally invoked using the ``with`` statement (described in section\n*The with statement*), but can also be used by directly invoking their\nmethods.\n\nTypical uses of context managers include saving and restoring various\nkinds of global state, locking and unlocking resources, closing opened\nfiles, etc.\n\nFor more information on context managers, see *Context Manager Types*.\n\nobject.__enter__(self)\n\n Enter the runtime context related to this object. The ``with``\n statement will bind this method\'s return value to the target(s)\n specified in the ``as`` clause of the statement, if any.\n\nobject.__exit__(self, exc_type, exc_value, traceback)\n\n Exit the runtime context related to this object. The parameters\n describe the exception that caused the context to be exited. If the\n context was exited without an exception, all three arguments will\n be ``None``.\n\n If an exception is supplied, and the method wishes to suppress the\n exception (i.e., prevent it from being propagated), it should\n return a true value. Otherwise, the exception will be processed\n normally upon exit from this method.\n\n Note that ``__exit__()`` methods should not reraise the passed-in\n exception; this is the caller\'s responsibility.\n\nSee also:\n\n **PEP 0343** - The "with" statement\n The specification, background, and examples for the Python\n ``with`` statement.\n\n\nSpecial method lookup for old-style classes\n===========================================\n\nFor old-style classes, special methods are always looked up in exactly\nthe same way as any other method or attribute. This is the case\nregardless of whether the method is being looked up explicitly as in\n``x.__getitem__(i)`` or implicitly as in ``x[i]``.\n\nThis behaviour means that special methods may exhibit different\nbehaviour for different instances of a single old-style class if the\nappropriate special attributes are set differently:\n\n >>> class C:\n ... 
pass\n ...\n >>> c1 = C()\n >>> c2 = C()\n >>> c1.__len__ = lambda: 5\n >>> c2.__len__ = lambda: 9\n >>> len(c1)\n 5\n >>> len(c2)\n 9\n\n\nSpecial method lookup for new-style classes\n===========================================\n\nFor new-style classes, implicit invocations of special methods are\nonly guaranteed to work correctly if defined on an object\'s type, not\nin the object\'s instance dictionary. That behaviour is the reason why\nthe following code raises an exception (unlike the equivalent example\nwith old-style classes):\n\n >>> class C(object):\n ... pass\n ...\n >>> c = C()\n >>> c.__len__ = lambda: 5\n >>> len(c)\n Traceback (most recent call last):\n File "", line 1, in \n TypeError: object of type \'C\' has no len()\n\nThe rationale behind this behaviour lies with a number of special\nmethods such as ``__hash__()`` and ``__repr__()`` that are implemented\nby all objects, including type objects. If the implicit lookup of\nthese methods used the conventional lookup process, they would fail\nwhen invoked on the type object itself:\n\n >>> 1 .__hash__() == hash(1)\n True\n >>> int.__hash__() == hash(int)\n Traceback (most recent call last):\n File "", line 1, in \n TypeError: descriptor \'__hash__\' of \'int\' object needs an argument\n\nIncorrectly attempting to invoke an unbound method of a class in this\nway is sometimes referred to as \'metaclass confusion\', and is avoided\nby bypassing the instance when looking up special methods:\n\n >>> type(1).__hash__(1) == hash(1)\n True\n >>> type(int).__hash__(int) == hash(int)\n True\n\nIn addition to bypassing any instance attributes in the interest of\ncorrectness, implicit special method lookup generally also bypasses\nthe ``__getattribute__()`` method even of the object\'s metaclass:\n\n >>> class Meta(type):\n ... def __getattribute__(*args):\n ... print "Metaclass getattribute invoked"\n ... return type.__getattribute__(*args)\n ...\n >>> class C(object):\n ... __metaclass__ = Meta\n ... def __len__(self):\n ... return 10\n ... def __getattribute__(*args):\n ... print "Class getattribute invoked"\n ... return object.__getattribute__(*args)\n ...\n >>> c = C()\n >>> c.__len__() # Explicit lookup via instance\n Class getattribute invoked\n 10\n >>> type(c).__len__(c) # Explicit lookup via type\n Metaclass getattribute invoked\n 10\n >>> len(c) # Implicit lookup\n 10\n\nBypassing the ``__getattribute__()`` machinery in this fashion\nprovides significant scope for speed optimisations within the\ninterpreter, at the cost of some flexibility in the handling of\nspecial methods (the special method *must* be set on the class object\nitself in order to be consistently invoked by the interpreter).\n\n-[ Footnotes ]-\n\n[1] It *is* possible in some cases to change an object\'s type, under\n certain controlled conditions. 
It generally isn\'t a good idea\n though, since it can lead to some very strange behaviour if it is\n handled incorrectly.\n\n[2] For operands of the same type, it is assumed that if the non-\n reflected method (such as ``__add__()``) fails the operation is\n not supported, which is why the reflected method is not called.\n', 'string-conversions': u'\nString conversions\n******************\n\nA string conversion is an expression list enclosed in reverse (a.k.a.\nbackward) quotes:\n\n string_conversion ::= "\'" expression_list "\'"\n\nA string conversion evaluates the contained expression list and\nconverts the resulting object into a string according to rules\nspecific to its type.\n\nIf the object is a string, a number, ``None``, or a tuple, list or\ndictionary containing only objects whose type is one of these, the\nresulting string is a valid Python expression which can be passed to\nthe built-in function ``eval()`` to yield an expression with the same\nvalue (or an approximation, if floating point numbers are involved).\n\n(In particular, converting a string adds quotes around it and converts\n"funny" characters to escape sequences that are safe to print.)\n\nRecursive objects (for example, lists or dictionaries that contain a\nreference to themselves, directly or indirectly) use ``...`` to\nindicate a recursive reference, and the result cannot be passed to\n``eval()`` to get an equal value (``SyntaxError`` will be raised\ninstead).\n\nThe built-in function ``repr()`` performs exactly the same conversion\nin its argument as enclosing it in parentheses and reverse quotes\ndoes. The built-in function ``str()`` performs a similar but more\nuser-friendly conversion.\n', - 'string-methods': u'\nString Methods\n**************\n\nBelow are listed the string methods which both 8-bit strings and\nUnicode objects support.\n\nIn addition, Python\'s strings support the sequence type methods\ndescribed in the *Sequence Types --- str, unicode, list, tuple,\nbuffer, xrange* section. To output formatted strings use template\nstrings or the ``%`` operator described in the *String Formatting\nOperations* section. Also, see the ``re`` module for string functions\nbased on regular expressions.\n\nstr.capitalize()\n\n Return a copy of the string with only its first character\n capitalized.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.center(width[, fillchar])\n\n Return centered in a string of length *width*. Padding is done\n using the specified *fillchar* (default is a space).\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.count(sub[, start[, end]])\n\n Return the number of non-overlapping occurrences of substring *sub*\n in the range [*start*, *end*]. Optional arguments *start* and\n *end* are interpreted as in slice notation.\n\nstr.decode([encoding[, errors]])\n\n Decodes the string using the codec registered for *encoding*.\n *encoding* defaults to the default string encoding. *errors* may\n be given to set a different error handling scheme. The default is\n ``\'strict\'``, meaning that encoding errors raise ``UnicodeError``.\n Other possible values are ``\'ignore\'``, ``\'replace\'`` and any other\n name registered via ``codecs.register_error()``, see section *Codec\n Base Classes*.\n\n New in version 2.2.\n\n Changed in version 2.3: Support for other error handling schemes\n added.\n\n Changed in version 2.7: Support for keyword arguments added.\n\nstr.encode([encoding[, errors]])\n\n Return an encoded version of the string. 
Default encoding is the\n current default string encoding. *errors* may be given to set a\n different error handling scheme. The default for *errors* is\n ``\'strict\'``, meaning that encoding errors raise a\n ``UnicodeError``. Other possible values are ``\'ignore\'``,\n ``\'replace\'``, ``\'xmlcharrefreplace\'``, ``\'backslashreplace\'`` and\n any other name registered via ``codecs.register_error()``, see\n section *Codec Base Classes*. For a list of possible encodings, see\n section *Standard Encodings*.\n\n New in version 2.0.\n\n Changed in version 2.3: Support for ``\'xmlcharrefreplace\'`` and\n ``\'backslashreplace\'`` and other error handling schemes added.\n\n Changed in version 2.7: Support for keyword arguments added.\n\nstr.endswith(suffix[, start[, end]])\n\n Return ``True`` if the string ends with the specified *suffix*,\n otherwise return ``False``. *suffix* can also be a tuple of\n suffixes to look for. With optional *start*, test beginning at\n that position. With optional *end*, stop comparing at that\n position.\n\n Changed in version 2.5: Accept tuples as *suffix*.\n\nstr.expandtabs([tabsize])\n\n Return a copy of the string where all tab characters are replaced\n by one or more spaces, depending on the current column and the\n given tab size. The column number is reset to zero after each\n newline occurring in the string. If *tabsize* is not given, a tab\n size of ``8`` characters is assumed. This doesn\'t understand other\n non-printing characters or escape sequences.\n\nstr.find(sub[, start[, end]])\n\n Return the lowest index in the string where substring *sub* is\n found, such that *sub* is contained in the slice ``s[start:end]``.\n Optional arguments *start* and *end* are interpreted as in slice\n notation. Return ``-1`` if *sub* is not found.\n\nstr.format(*args, **kwargs)\n\n Perform a string formatting operation. The string on which this\n method is called can contain literal text or replacement fields\n delimited by braces ``{}``. Each replacement field contains either\n the numeric index of a positional argument, or the name of a\n keyword argument. 
Returns a copy of the string where each\n replacement field is replaced with the string value of the\n corresponding argument.\n\n >>> "The sum of 1 + 2 is {0}".format(1+2)\n \'The sum of 1 + 2 is 3\'\n\n See *Format String Syntax* for a description of the various\n formatting options that can be specified in format strings.\n\n This method of string formatting is the new standard in Python 3.0,\n and should be preferred to the ``%`` formatting described in\n *String Formatting Operations* in new code.\n\n New in version 2.6.\n\nstr.index(sub[, start[, end]])\n\n Like ``find()``, but raise ``ValueError`` when the substring is not\n found.\n\nstr.isalnum()\n\n Return true if all characters in the string are alphanumeric and\n there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isalpha()\n\n Return true if all characters in the string are alphabetic and\n there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isdigit()\n\n Return true if all characters in the string are digits and there is\n at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.islower()\n\n Return true if all cased characters in the string are lowercase and\n there is at least one cased character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isspace()\n\n Return true if there are only whitespace characters in the string\n and there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.istitle()\n\n Return true if the string is a titlecased string and there is at\n least one character, for example uppercase characters may only\n follow uncased characters and lowercase characters only cased ones.\n Return false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isupper()\n\n Return true if all cased characters in the string are uppercase and\n there is at least one cased character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.join(iterable)\n\n Return a string which is the concatenation of the strings in the\n *iterable* *iterable*. The separator between elements is the\n string providing this method.\n\nstr.ljust(width[, fillchar])\n\n Return the string left justified in a string of length *width*.\n Padding is done using the specified *fillchar* (default is a\n space). The original string is returned if *width* is less than\n ``len(s)``.\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.lower()\n\n Return a copy of the string converted to lowercase.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.lstrip([chars])\n\n Return a copy of the string with leading characters removed. The\n *chars* argument is a string specifying the set of characters to be\n removed. If omitted or ``None``, the *chars* argument defaults to\n removing whitespace. The *chars* argument is not a prefix; rather,\n all combinations of its values are stripped:\n\n >>> \' spacious \'.lstrip()\n \'spacious \'\n >>> \'www.example.com\'.lstrip(\'cmowz.\')\n \'example.com\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.partition(sep)\n\n Split the string at the first occurrence of *sep*, and return a\n 3-tuple containing the part before the separator, the separator\n itself, and the part after the separator. 
If the separator is not\n found, return a 3-tuple containing the string itself, followed by\n two empty strings.\n\n New in version 2.5.\n\nstr.replace(old, new[, count])\n\n Return a copy of the string with all occurrences of substring *old*\n replaced by *new*. If the optional argument *count* is given, only\n the first *count* occurrences are replaced.\n\nstr.rfind(sub[, start[, end]])\n\n Return the highest index in the string where substring *sub* is\n found, such that *sub* is contained within ``s[start:end]``.\n Optional arguments *start* and *end* are interpreted as in slice\n notation. Return ``-1`` on failure.\n\nstr.rindex(sub[, start[, end]])\n\n Like ``rfind()`` but raises ``ValueError`` when the substring *sub*\n is not found.\n\nstr.rjust(width[, fillchar])\n\n Return the string right justified in a string of length *width*.\n Padding is done using the specified *fillchar* (default is a\n space). The original string is returned if *width* is less than\n ``len(s)``.\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.rpartition(sep)\n\n Split the string at the last occurrence of *sep*, and return a\n 3-tuple containing the part before the separator, the separator\n itself, and the part after the separator. If the separator is not\n found, return a 3-tuple containing two empty strings, followed by\n the string itself.\n\n New in version 2.5.\n\nstr.rsplit([sep[, maxsplit]])\n\n Return a list of the words in the string, using *sep* as the\n delimiter string. If *maxsplit* is given, at most *maxsplit* splits\n are done, the *rightmost* ones. If *sep* is not specified or\n ``None``, any whitespace string is a separator. Except for\n splitting from the right, ``rsplit()`` behaves like ``split()``\n which is described in detail below.\n\n New in version 2.4.\n\nstr.rstrip([chars])\n\n Return a copy of the string with trailing characters removed. The\n *chars* argument is a string specifying the set of characters to be\n removed. If omitted or ``None``, the *chars* argument defaults to\n removing whitespace. The *chars* argument is not a suffix; rather,\n all combinations of its values are stripped:\n\n >>> \' spacious \'.rstrip()\n \' spacious\'\n >>> \'mississippi\'.rstrip(\'ipz\')\n \'mississ\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.split([sep[, maxsplit]])\n\n Return a list of the words in the string, using *sep* as the\n delimiter string. If *maxsplit* is given, at most *maxsplit*\n splits are done (thus, the list will have at most ``maxsplit+1``\n elements). If *maxsplit* is not specified, then there is no limit\n on the number of splits (all possible splits are made).\n\n If *sep* is given, consecutive delimiters are not grouped together\n and are deemed to delimit empty strings (for example,\n ``\'1,,2\'.split(\',\')`` returns ``[\'1\', \'\', \'2\']``). The *sep*\n argument may consist of multiple characters (for example,\n ``\'1<>2<>3\'.split(\'<>\')`` returns ``[\'1\', \'2\', \'3\']``). Splitting\n an empty string with a specified separator returns ``[\'\']``.\n\n If *sep* is not specified or is ``None``, a different splitting\n algorithm is applied: runs of consecutive whitespace are regarded\n as a single separator, and the result will contain no empty strings\n at the start or end if the string has leading or trailing\n whitespace. 
Consequently, splitting an empty string or a string\n consisting of just whitespace with a ``None`` separator returns\n ``[]``.\n\n For example, ``\' 1 2 3 \'.split()`` returns ``[\'1\', \'2\', \'3\']``,\n and ``\' 1 2 3 \'.split(None, 1)`` returns ``[\'1\', \'2 3 \']``.\n\nstr.splitlines([keepends])\n\n Return a list of the lines in the string, breaking at line\n boundaries. Line breaks are not included in the resulting list\n unless *keepends* is given and true.\n\nstr.startswith(prefix[, start[, end]])\n\n Return ``True`` if string starts with the *prefix*, otherwise\n return ``False``. *prefix* can also be a tuple of prefixes to look\n for. With optional *start*, test string beginning at that\n position. With optional *end*, stop comparing string at that\n position.\n\n Changed in version 2.5: Accept tuples as *prefix*.\n\nstr.strip([chars])\n\n Return a copy of the string with the leading and trailing\n characters removed. The *chars* argument is a string specifying the\n set of characters to be removed. If omitted or ``None``, the\n *chars* argument defaults to removing whitespace. The *chars*\n argument is not a prefix or suffix; rather, all combinations of its\n values are stripped:\n\n >>> \' spacious \'.strip()\n \'spacious\'\n >>> \'www.example.com\'.strip(\'cmowz.\')\n \'example\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.swapcase()\n\n Return a copy of the string with uppercase characters converted to\n lowercase and vice versa.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.title()\n\n Return a titlecased version of the string where words start with an\n uppercase character and the remaining characters are lowercase.\n\n The algorithm uses a simple language-independent definition of a\n word as groups of consecutive letters. The definition works in\n many contexts but it means that apostrophes in contractions and\n possessives form word boundaries, which may not be the desired\n result:\n\n >>> "they\'re bill\'s friends from the UK".title()\n "They\'Re Bill\'S Friends From The Uk"\n\n A workaround for apostrophes can be constructed using regular\n expressions:\n\n >>> import re\n >>> def titlecase(s):\n return re.sub(r"[A-Za-z]+(\'[A-Za-z]+)?",\n lambda mo: mo.group(0)[0].upper() +\n mo.group(0)[1:].lower(),\n s)\n\n >>> titlecase("they\'re bill\'s friends.")\n "They\'re Bill\'s Friends."\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.translate(table[, deletechars])\n\n Return a copy of the string where all characters occurring in the\n optional argument *deletechars* are removed, and the remaining\n characters have been mapped through the given translation table,\n which must be a string of length 256.\n\n You can use the ``maketrans()`` helper function in the ``string``\n module to create a translation table. For string objects, set the\n *table* argument to ``None`` for translations that only delete\n characters:\n\n >>> \'read this short text\'.translate(None, \'aeiou\')\n \'rd ths shrt txt\'\n\n New in version 2.6: Support for a ``None`` *table* argument.\n\n For Unicode objects, the ``translate()`` method does not accept the\n optional *deletechars* argument. Instead, it returns a copy of the\n *s* where all characters have been mapped through the given\n translation table which must be a mapping of Unicode ordinals to\n Unicode ordinals, Unicode strings or ``None``. Unmapped characters\n are left untouched. 
Characters mapped to ``None`` are deleted.\n Note, a more flexible approach is to create a custom character\n mapping codec using the ``codecs`` module (see ``encodings.cp1251``\n for an example).\n\nstr.upper()\n\n Return a copy of the string converted to uppercase.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.zfill(width)\n\n Return the numeric string left filled with zeros in a string of\n length *width*. A sign prefix is handled correctly. The original\n string is returned if *width* is less than ``len(s)``.\n\n New in version 2.2.2.\n\nThe following methods are present only on unicode objects:\n\nunicode.isnumeric()\n\n Return ``True`` if there are only numeric characters in S,\n ``False`` otherwise. Numeric characters include digit characters,\n and all characters that have the Unicode numeric value property,\n e.g. U+2155, VULGAR FRACTION ONE FIFTH.\n\nunicode.isdecimal()\n\n Return ``True`` if there are only decimal characters in S,\n ``False`` otherwise. Decimal characters include digit characters,\n and all characters that that can be used to form decimal-radix\n numbers, e.g. U+0660, ARABIC-INDIC DIGIT ZERO.\n', - 'strings': u'\nString literals\n***************\n\nString literals are described by the following lexical definitions:\n\n stringliteral ::= [stringprefix](shortstring | longstring)\n stringprefix ::= "r" | "u" | "ur" | "R" | "U" | "UR" | "Ur" | "uR"\n shortstring ::= "\'" shortstringitem* "\'" | \'"\' shortstringitem* \'"\'\n longstring ::= "\'\'\'" longstringitem* "\'\'\'"\n | \'"""\' longstringitem* \'"""\'\n shortstringitem ::= shortstringchar | escapeseq\n longstringitem ::= longstringchar | escapeseq\n shortstringchar ::= \n longstringchar ::= \n escapeseq ::= "\\" \n\nOne syntactic restriction not indicated by these productions is that\nwhitespace is not allowed between the **stringprefix** and the rest of\nthe string literal. The source character set is defined by the\nencoding declaration; it is ASCII if no encoding declaration is given\nin the source file; see section *Encoding declarations*.\n\nIn plain English: String literals can be enclosed in matching single\nquotes (``\'``) or double quotes (``"``). They can also be enclosed in\nmatching groups of three single or double quotes (these are generally\nreferred to as *triple-quoted strings*). The backslash (``\\``)\ncharacter is used to escape characters that otherwise have a special\nmeaning, such as newline, backslash itself, or the quote character.\nString literals may optionally be prefixed with a letter ``\'r\'`` or\n``\'R\'``; such strings are called *raw strings* and use different rules\nfor interpreting backslash escape sequences. A prefix of ``\'u\'`` or\n``\'U\'`` makes the string a Unicode string. Unicode strings use the\nUnicode character set as defined by the Unicode Consortium and ISO\n10646. Some additional escape sequences, described below, are\navailable in Unicode strings. The two prefix characters may be\ncombined; in this case, ``\'u\'`` must appear before ``\'r\'``.\n\nIn triple-quoted strings, unescaped newlines and quotes are allowed\n(and are retained), except that three unescaped quotes in a row\nterminate the string. (A "quote" is the character used to open the\nstring, i.e. either ``\'`` or ``"``.)\n\nUnless an ``\'r\'`` or ``\'R\'`` prefix is present, escape sequences in\nstrings are interpreted according to rules similar to those used by\nStandard C. 
The recognized escape sequences are:\n\n+-------------------+-----------------------------------+---------+\n| Escape Sequence | Meaning | Notes |\n+===================+===================================+=========+\n| ``\\newline`` | Ignored | |\n+-------------------+-----------------------------------+---------+\n| ``\\\\`` | Backslash (``\\``) | |\n+-------------------+-----------------------------------+---------+\n| ``\\\'`` | Single quote (``\'``) | |\n+-------------------+-----------------------------------+---------+\n| ``\\"`` | Double quote (``"``) | |\n+-------------------+-----------------------------------+---------+\n| ``\\a`` | ASCII Bell (BEL) | |\n+-------------------+-----------------------------------+---------+\n| ``\\b`` | ASCII Backspace (BS) | |\n+-------------------+-----------------------------------+---------+\n| ``\\f`` | ASCII Formfeed (FF) | |\n+-------------------+-----------------------------------+---------+\n| ``\\n`` | ASCII Linefeed (LF) | |\n+-------------------+-----------------------------------+---------+\n| ``\\N{name}`` | Character named *name* in the | |\n| | Unicode database (Unicode only) | |\n+-------------------+-----------------------------------+---------+\n| ``\\r`` | ASCII Carriage Return (CR) | |\n+-------------------+-----------------------------------+---------+\n| ``\\t`` | ASCII Horizontal Tab (TAB) | |\n+-------------------+-----------------------------------+---------+\n| ``\\uxxxx`` | Character with 16-bit hex value | (1) |\n| | *xxxx* (Unicode only) | |\n+-------------------+-----------------------------------+---------+\n| ``\\Uxxxxxxxx`` | Character with 32-bit hex value | (2) |\n| | *xxxxxxxx* (Unicode only) | |\n+-------------------+-----------------------------------+---------+\n| ``\\v`` | ASCII Vertical Tab (VT) | |\n+-------------------+-----------------------------------+---------+\n| ``\\ooo`` | Character with octal value *ooo* | (3,5) |\n+-------------------+-----------------------------------+---------+\n| ``\\xhh`` | Character with hex value *hh* | (4,5) |\n+-------------------+-----------------------------------+---------+\n\nNotes:\n\n1. Individual code units which form parts of a surrogate pair can be\n encoded using this escape sequence.\n\n2. Any Unicode character can be encoded this way, but characters\n outside the Basic Multilingual Plane (BMP) will be encoded using a\n surrogate pair if Python is compiled to use 16-bit code units (the\n default). Individual code units which form parts of a surrogate\n pair can be encoded using this escape sequence.\n\n3. As in Standard C, up to three octal digits are accepted.\n\n4. Unlike in Standard C, exactly two hex digits are required.\n\n5. In a string literal, hexadecimal and octal escapes denote the byte\n with the given value; it is not necessary that the byte encodes a\n character in the source character set. In a Unicode literal, these\n escapes denote a Unicode character with the given value.\n\nUnlike Standard C, all unrecognized escape sequences are left in the\nstring unchanged, i.e., *the backslash is left in the string*. (This\nbehavior is useful when debugging: if an escape sequence is mistyped,\nthe resulting output is more easily recognized as broken.) 
It is also\nimportant to note that the escape sequences marked as "(Unicode only)"\nin the table above fall into the category of unrecognized escapes for\nnon-Unicode string literals.\n\nWhen an ``\'r\'`` or ``\'R\'`` prefix is present, a character following a\nbackslash is included in the string without change, and *all\nbackslashes are left in the string*. For example, the string literal\n``r"\\n"`` consists of two characters: a backslash and a lowercase\n``\'n\'``. String quotes can be escaped with a backslash, but the\nbackslash remains in the string; for example, ``r"\\""`` is a valid\nstring literal consisting of two characters: a backslash and a double\nquote; ``r"\\"`` is not a valid string literal (even a raw string\ncannot end in an odd number of backslashes). Specifically, *a raw\nstring cannot end in a single backslash* (since the backslash would\nescape the following quote character). Note also that a single\nbackslash followed by a newline is interpreted as those two characters\nas part of the string, *not* as a line continuation.\n\nWhen an ``\'r\'`` or ``\'R\'`` prefix is used in conjunction with a\n``\'u\'`` or ``\'U\'`` prefix, then the ``\\uXXXX`` and ``\\UXXXXXXXX``\nescape sequences are processed while *all other backslashes are left\nin the string*. For example, the string literal ``ur"\\u0062\\n"``\nconsists of three Unicode characters: \'LATIN SMALL LETTER B\', \'REVERSE\nSOLIDUS\', and \'LATIN SMALL LETTER N\'. Backslashes can be escaped with\na preceding backslash; however, both remain in the string. As a\nresult, ``\\uXXXX`` escape sequences are only recognized when there are\nan odd number of backslashes.\n', + 'string-methods': u'\nString Methods\n**************\n\nBelow are listed the string methods which both 8-bit strings and\nUnicode objects support. Some of them are also available on\n``bytearray`` objects.\n\nIn addition, Python\'s strings support the sequence type methods\ndescribed in the *Sequence Types --- str, unicode, list, tuple,\nbytearray, buffer, xrange* section. To output formatted strings use\ntemplate strings or the ``%`` operator described in the *String\nFormatting Operations* section. Also, see the ``re`` module for string\nfunctions based on regular expressions.\n\nstr.capitalize()\n\n Return a copy of the string with its first character capitalized\n and the rest lowercased.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.center(width[, fillchar])\n\n Return centered in a string of length *width*. Padding is done\n using the specified *fillchar* (default is a space).\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.count(sub[, start[, end]])\n\n Return the number of non-overlapping occurrences of substring *sub*\n in the range [*start*, *end*]. Optional arguments *start* and\n *end* are interpreted as in slice notation.\n\nstr.decode([encoding[, errors]])\n\n Decodes the string using the codec registered for *encoding*.\n *encoding* defaults to the default string encoding. *errors* may\n be given to set a different error handling scheme. 
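   A brief illustrative sketch (added here, not part of the original entry) of decoding with an explicit codec and an explicit error handler:

      >>> '\xc3\xa9'.decode('utf-8')
      u'\xe9'
      >>> '\xc3('.decode('utf-8', 'ignore')   # undecodable byte is dropped
      u'('
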
The default is\n ``\'strict\'``, meaning that encoding errors raise ``UnicodeError``.\n Other possible values are ``\'ignore\'``, ``\'replace\'`` and any other\n name registered via ``codecs.register_error()``, see section *Codec\n Base Classes*.\n\n New in version 2.2.\n\n Changed in version 2.3: Support for other error handling schemes\n added.\n\n Changed in version 2.7: Support for keyword arguments added.\n\nstr.encode([encoding[, errors]])\n\n Return an encoded version of the string. Default encoding is the\n current default string encoding. *errors* may be given to set a\n different error handling scheme. The default for *errors* is\n ``\'strict\'``, meaning that encoding errors raise a\n ``UnicodeError``. Other possible values are ``\'ignore\'``,\n ``\'replace\'``, ``\'xmlcharrefreplace\'``, ``\'backslashreplace\'`` and\n any other name registered via ``codecs.register_error()``, see\n section *Codec Base Classes*. For a list of possible encodings, see\n section *Standard Encodings*.\n\n New in version 2.0.\n\n Changed in version 2.3: Support for ``\'xmlcharrefreplace\'`` and\n ``\'backslashreplace\'`` and other error handling schemes added.\n\n Changed in version 2.7: Support for keyword arguments added.\n\nstr.endswith(suffix[, start[, end]])\n\n Return ``True`` if the string ends with the specified *suffix*,\n otherwise return ``False``. *suffix* can also be a tuple of\n suffixes to look for. With optional *start*, test beginning at\n that position. With optional *end*, stop comparing at that\n position.\n\n Changed in version 2.5: Accept tuples as *suffix*.\n\nstr.expandtabs([tabsize])\n\n Return a copy of the string where all tab characters are replaced\n by one or more spaces, depending on the current column and the\n given tab size. The column number is reset to zero after each\n newline occurring in the string. If *tabsize* is not given, a tab\n size of ``8`` characters is assumed. This doesn\'t understand other\n non-printing characters or escape sequences.\n\nstr.find(sub[, start[, end]])\n\n Return the lowest index in the string where substring *sub* is\n found, such that *sub* is contained in the slice ``s[start:end]``.\n Optional arguments *start* and *end* are interpreted as in slice\n notation. Return ``-1`` if *sub* is not found.\n\n Note: The ``find()`` method should be used only if you need to know the\n position of *sub*. To check if *sub* is a substring or not, use\n the ``in`` operator:\n\n >>> \'Py\' in \'Python\'\n True\n\nstr.format(*args, **kwargs)\n\n Perform a string formatting operation. The string on which this\n method is called can contain literal text or replacement fields\n delimited by braces ``{}``. Each replacement field contains either\n the numeric index of a positional argument, or the name of a\n keyword argument. 
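   For instance (an illustrative example, not part of the original entry), a format string may mix a positional index with a keyword name:

      >>> '{0} is {adjective}'.format('PyPy', adjective='fast')
      'PyPy is fast'
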
Returns a copy of the string where each\n replacement field is replaced with the string value of the\n corresponding argument.\n\n >>> "The sum of 1 + 2 is {0}".format(1+2)\n \'The sum of 1 + 2 is 3\'\n\n See *Format String Syntax* for a description of the various\n formatting options that can be specified in format strings.\n\n This method of string formatting is the new standard in Python 3.0,\n and should be preferred to the ``%`` formatting described in\n *String Formatting Operations* in new code.\n\n New in version 2.6.\n\nstr.index(sub[, start[, end]])\n\n Like ``find()``, but raise ``ValueError`` when the substring is not\n found.\n\nstr.isalnum()\n\n Return true if all characters in the string are alphanumeric and\n there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isalpha()\n\n Return true if all characters in the string are alphabetic and\n there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isdigit()\n\n Return true if all characters in the string are digits and there is\n at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.islower()\n\n Return true if all cased characters in the string are lowercase and\n there is at least one cased character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isspace()\n\n Return true if there are only whitespace characters in the string\n and there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.istitle()\n\n Return true if the string is a titlecased string and there is at\n least one character, for example uppercase characters may only\n follow uncased characters and lowercase characters only cased ones.\n Return false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isupper()\n\n Return true if all cased characters in the string are uppercase and\n there is at least one cased character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.join(iterable)\n\n Return a string which is the concatenation of the strings in the\n *iterable* *iterable*. The separator between elements is the\n string providing this method.\n\nstr.ljust(width[, fillchar])\n\n Return the string left justified in a string of length *width*.\n Padding is done using the specified *fillchar* (default is a\n space). The original string is returned if *width* is less than\n ``len(s)``.\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.lower()\n\n Return a copy of the string converted to lowercase.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.lstrip([chars])\n\n Return a copy of the string with leading characters removed. The\n *chars* argument is a string specifying the set of characters to be\n removed. If omitted or ``None``, the *chars* argument defaults to\n removing whitespace. The *chars* argument is not a prefix; rather,\n all combinations of its values are stripped:\n\n >>> \' spacious \'.lstrip()\n \'spacious \'\n >>> \'www.example.com\'.lstrip(\'cmowz.\')\n \'example.com\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.partition(sep)\n\n Split the string at the first occurrence of *sep*, and return a\n 3-tuple containing the part before the separator, the separator\n itself, and the part after the separator. 
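   An illustrative example (added here, not part of the original entry):

      >>> 'key=value'.partition('=')
      ('key', '=', 'value')
      >>> 'no separator here'.partition('=')
      ('no separator here', '', '')
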
If the separator is not\n found, return a 3-tuple containing the string itself, followed by\n two empty strings.\n\n New in version 2.5.\n\nstr.replace(old, new[, count])\n\n Return a copy of the string with all occurrences of substring *old*\n replaced by *new*. If the optional argument *count* is given, only\n the first *count* occurrences are replaced.\n\nstr.rfind(sub[, start[, end]])\n\n Return the highest index in the string where substring *sub* is\n found, such that *sub* is contained within ``s[start:end]``.\n Optional arguments *start* and *end* are interpreted as in slice\n notation. Return ``-1`` on failure.\n\nstr.rindex(sub[, start[, end]])\n\n Like ``rfind()`` but raises ``ValueError`` when the substring *sub*\n is not found.\n\nstr.rjust(width[, fillchar])\n\n Return the string right justified in a string of length *width*.\n Padding is done using the specified *fillchar* (default is a\n space). The original string is returned if *width* is less than\n ``len(s)``.\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.rpartition(sep)\n\n Split the string at the last occurrence of *sep*, and return a\n 3-tuple containing the part before the separator, the separator\n itself, and the part after the separator. If the separator is not\n found, return a 3-tuple containing two empty strings, followed by\n the string itself.\n\n New in version 2.5.\n\nstr.rsplit([sep[, maxsplit]])\n\n Return a list of the words in the string, using *sep* as the\n delimiter string. If *maxsplit* is given, at most *maxsplit* splits\n are done, the *rightmost* ones. If *sep* is not specified or\n ``None``, any whitespace string is a separator. Except for\n splitting from the right, ``rsplit()`` behaves like ``split()``\n which is described in detail below.\n\n New in version 2.4.\n\nstr.rstrip([chars])\n\n Return a copy of the string with trailing characters removed. The\n *chars* argument is a string specifying the set of characters to be\n removed. If omitted or ``None``, the *chars* argument defaults to\n removing whitespace. The *chars* argument is not a suffix; rather,\n all combinations of its values are stripped:\n\n >>> \' spacious \'.rstrip()\n \' spacious\'\n >>> \'mississippi\'.rstrip(\'ipz\')\n \'mississ\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.split([sep[, maxsplit]])\n\n Return a list of the words in the string, using *sep* as the\n delimiter string. If *maxsplit* is given, at most *maxsplit*\n splits are done (thus, the list will have at most ``maxsplit+1``\n elements). If *maxsplit* is not specified, then there is no limit\n on the number of splits (all possible splits are made).\n\n If *sep* is given, consecutive delimiters are not grouped together\n and are deemed to delimit empty strings (for example,\n ``\'1,,2\'.split(\',\')`` returns ``[\'1\', \'\', \'2\']``). The *sep*\n argument may consist of multiple characters (for example,\n ``\'1<>2<>3\'.split(\'<>\')`` returns ``[\'1\', \'2\', \'3\']``). Splitting\n an empty string with a specified separator returns ``[\'\']``.\n\n If *sep* is not specified or is ``None``, a different splitting\n algorithm is applied: runs of consecutive whitespace are regarded\n as a single separator, and the result will contain no empty strings\n at the start or end if the string has leading or trailing\n whitespace. 
Consequently, splitting an empty string or a string\n consisting of just whitespace with a ``None`` separator returns\n ``[]``.\n\n For example, ``\' 1 2 3 \'.split()`` returns ``[\'1\', \'2\', \'3\']``,\n and ``\' 1 2 3 \'.split(None, 1)`` returns ``[\'1\', \'2 3 \']``.\n\nstr.splitlines([keepends])\n\n Return a list of the lines in the string, breaking at line\n boundaries. Line breaks are not included in the resulting list\n unless *keepends* is given and true.\n\nstr.startswith(prefix[, start[, end]])\n\n Return ``True`` if string starts with the *prefix*, otherwise\n return ``False``. *prefix* can also be a tuple of prefixes to look\n for. With optional *start*, test string beginning at that\n position. With optional *end*, stop comparing string at that\n position.\n\n Changed in version 2.5: Accept tuples as *prefix*.\n\nstr.strip([chars])\n\n Return a copy of the string with the leading and trailing\n characters removed. The *chars* argument is a string specifying the\n set of characters to be removed. If omitted or ``None``, the\n *chars* argument defaults to removing whitespace. The *chars*\n argument is not a prefix or suffix; rather, all combinations of its\n values are stripped:\n\n >>> \' spacious \'.strip()\n \'spacious\'\n >>> \'www.example.com\'.strip(\'cmowz.\')\n \'example\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.swapcase()\n\n Return a copy of the string with uppercase characters converted to\n lowercase and vice versa.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.title()\n\n Return a titlecased version of the string where words start with an\n uppercase character and the remaining characters are lowercase.\n\n The algorithm uses a simple language-independent definition of a\n word as groups of consecutive letters. The definition works in\n many contexts but it means that apostrophes in contractions and\n possessives form word boundaries, which may not be the desired\n result:\n\n >>> "they\'re bill\'s friends from the UK".title()\n "They\'Re Bill\'S Friends From The Uk"\n\n A workaround for apostrophes can be constructed using regular\n expressions:\n\n >>> import re\n >>> def titlecase(s):\n return re.sub(r"[A-Za-z]+(\'[A-Za-z]+)?",\n lambda mo: mo.group(0)[0].upper() +\n mo.group(0)[1:].lower(),\n s)\n\n >>> titlecase("they\'re bill\'s friends.")\n "They\'re Bill\'s Friends."\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.translate(table[, deletechars])\n\n Return a copy of the string where all characters occurring in the\n optional argument *deletechars* are removed, and the remaining\n characters have been mapped through the given translation table,\n which must be a string of length 256.\n\n You can use the ``maketrans()`` helper function in the ``string``\n module to create a translation table. For string objects, set the\n *table* argument to ``None`` for translations that only delete\n characters:\n\n >>> \'read this short text\'.translate(None, \'aeiou\')\n \'rd ths shrt txt\'\n\n New in version 2.6: Support for a ``None`` *table* argument.\n\n For Unicode objects, the ``translate()`` method does not accept the\n optional *deletechars* argument. Instead, it returns a copy of the\n *s* where all characters have been mapped through the given\n translation table which must be a mapping of Unicode ordinals to\n Unicode ordinals, Unicode strings or ``None``. Unmapped characters\n are left untouched. 
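   A minimal illustrative sketch (not part of the original entry) of a Unicode translation table mapping ordinals to replacement characters or ``None``:

      >>> u'abcd'.translate({ord(u'a'): u'A', ord(u'b'): None})
      u'Acd'
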
Characters mapped to ``None`` are deleted.\n Note, a more flexible approach is to create a custom character\n mapping codec using the ``codecs`` module (see ``encodings.cp1251``\n for an example).\n\nstr.upper()\n\n Return a copy of the string converted to uppercase.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.zfill(width)\n\n Return the numeric string left filled with zeros in a string of\n length *width*. A sign prefix is handled correctly. The original\n string is returned if *width* is less than ``len(s)``.\n\n New in version 2.2.2.\n\nThe following methods are present only on unicode objects:\n\nunicode.isnumeric()\n\n Return ``True`` if there are only numeric characters in S,\n ``False`` otherwise. Numeric characters include digit characters,\n and all characters that have the Unicode numeric value property,\n e.g. U+2155, VULGAR FRACTION ONE FIFTH.\n\nunicode.isdecimal()\n\n Return ``True`` if there are only decimal characters in S,\n ``False`` otherwise. Decimal characters include digit characters,\n and all characters that that can be used to form decimal-radix\n numbers, e.g. U+0660, ARABIC-INDIC DIGIT ZERO.\n', + 'strings': u'\nString literals\n***************\n\nString literals are described by the following lexical definitions:\n\n stringliteral ::= [stringprefix](shortstring | longstring)\n stringprefix ::= "r" | "u" | "ur" | "R" | "U" | "UR" | "Ur" | "uR"\n | "b" | "B" | "br" | "Br" | "bR" | "BR"\n shortstring ::= "\'" shortstringitem* "\'" | \'"\' shortstringitem* \'"\'\n longstring ::= "\'\'\'" longstringitem* "\'\'\'"\n | \'"""\' longstringitem* \'"""\'\n shortstringitem ::= shortstringchar | escapeseq\n longstringitem ::= longstringchar | escapeseq\n shortstringchar ::= \n longstringchar ::= \n escapeseq ::= "\\" \n\nOne syntactic restriction not indicated by these productions is that\nwhitespace is not allowed between the **stringprefix** and the rest of\nthe string literal. The source character set is defined by the\nencoding declaration; it is ASCII if no encoding declaration is given\nin the source file; see section *Encoding declarations*.\n\nIn plain English: String literals can be enclosed in matching single\nquotes (``\'``) or double quotes (``"``). They can also be enclosed in\nmatching groups of three single or double quotes (these are generally\nreferred to as *triple-quoted strings*). The backslash (``\\``)\ncharacter is used to escape characters that otherwise have a special\nmeaning, such as newline, backslash itself, or the quote character.\nString literals may optionally be prefixed with a letter ``\'r\'`` or\n``\'R\'``; such strings are called *raw strings* and use different rules\nfor interpreting backslash escape sequences. A prefix of ``\'u\'`` or\n``\'U\'`` makes the string a Unicode string. Unicode strings use the\nUnicode character set as defined by the Unicode Consortium and ISO\n10646. Some additional escape sequences, described below, are\navailable in Unicode strings. A prefix of ``\'b\'`` or ``\'B\'`` is\nignored in Python 2; it indicates that the literal should become a\nbytes literal in Python 3 (e.g. when code is automatically converted\nwith 2to3). A ``\'u\'`` or ``\'b\'`` prefix may be followed by an ``\'r\'``\nprefix.\n\nIn triple-quoted strings, unescaped newlines and quotes are allowed\n(and are retained), except that three unescaped quotes in a row\nterminate the string. (A "quote" is the character used to open the\nstring, i.e. 
either ``\'`` or ``"``.)\n\nUnless an ``\'r\'`` or ``\'R\'`` prefix is present, escape sequences in\nstrings are interpreted according to rules similar to those used by\nStandard C. The recognized escape sequences are:\n\n+-------------------+-----------------------------------+---------+\n| Escape Sequence | Meaning | Notes |\n+===================+===================================+=========+\n| ``\\newline`` | Ignored | |\n+-------------------+-----------------------------------+---------+\n| ``\\\\`` | Backslash (``\\``) | |\n+-------------------+-----------------------------------+---------+\n| ``\\\'`` | Single quote (``\'``) | |\n+-------------------+-----------------------------------+---------+\n| ``\\"`` | Double quote (``"``) | |\n+-------------------+-----------------------------------+---------+\n| ``\\a`` | ASCII Bell (BEL) | |\n+-------------------+-----------------------------------+---------+\n| ``\\b`` | ASCII Backspace (BS) | |\n+-------------------+-----------------------------------+---------+\n| ``\\f`` | ASCII Formfeed (FF) | |\n+-------------------+-----------------------------------+---------+\n| ``\\n`` | ASCII Linefeed (LF) | |\n+-------------------+-----------------------------------+---------+\n| ``\\N{name}`` | Character named *name* in the | |\n| | Unicode database (Unicode only) | |\n+-------------------+-----------------------------------+---------+\n| ``\\r`` | ASCII Carriage Return (CR) | |\n+-------------------+-----------------------------------+---------+\n| ``\\t`` | ASCII Horizontal Tab (TAB) | |\n+-------------------+-----------------------------------+---------+\n| ``\\uxxxx`` | Character with 16-bit hex value | (1) |\n| | *xxxx* (Unicode only) | |\n+-------------------+-----------------------------------+---------+\n| ``\\Uxxxxxxxx`` | Character with 32-bit hex value | (2) |\n| | *xxxxxxxx* (Unicode only) | |\n+-------------------+-----------------------------------+---------+\n| ``\\v`` | ASCII Vertical Tab (VT) | |\n+-------------------+-----------------------------------+---------+\n| ``\\ooo`` | Character with octal value *ooo* | (3,5) |\n+-------------------+-----------------------------------+---------+\n| ``\\xhh`` | Character with hex value *hh* | (4,5) |\n+-------------------+-----------------------------------+---------+\n\nNotes:\n\n1. Individual code units which form parts of a surrogate pair can be\n encoded using this escape sequence.\n\n2. Any Unicode character can be encoded this way, but characters\n outside the Basic Multilingual Plane (BMP) will be encoded using a\n surrogate pair if Python is compiled to use 16-bit code units (the\n default). Individual code units which form parts of a surrogate\n pair can be encoded using this escape sequence.\n\n3. As in Standard C, up to three octal digits are accepted.\n\n4. Unlike in Standard C, exactly two hex digits are required.\n\n5. In a string literal, hexadecimal and octal escapes denote the byte\n with the given value; it is not necessary that the byte encodes a\n character in the source character set. In a Unicode literal, these\n escapes denote a Unicode character with the given value.\n\nUnlike Standard C, all unrecognized escape sequences are left in the\nstring unchanged, i.e., *the backslash is left in the string*. (This\nbehavior is useful when debugging: if an escape sequence is mistyped,\nthe resulting output is more easily recognized as broken.) 
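   For illustration (an example added here, not in the original text), a recognized escape collapses to one character while an unrecognized one keeps its backslash:

      >>> len('\n'), len('\q')
      (1, 2)
      >>> '\q'
      '\\q'
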
It is also\nimportant to note that the escape sequences marked as "(Unicode only)"\nin the table above fall into the category of unrecognized escapes for\nnon-Unicode string literals.\n\nWhen an ``\'r\'`` or ``\'R\'`` prefix is present, a character following a\nbackslash is included in the string without change, and *all\nbackslashes are left in the string*. For example, the string literal\n``r"\\n"`` consists of two characters: a backslash and a lowercase\n``\'n\'``. String quotes can be escaped with a backslash, but the\nbackslash remains in the string; for example, ``r"\\""`` is a valid\nstring literal consisting of two characters: a backslash and a double\nquote; ``r"\\"`` is not a valid string literal (even a raw string\ncannot end in an odd number of backslashes). Specifically, *a raw\nstring cannot end in a single backslash* (since the backslash would\nescape the following quote character). Note also that a single\nbackslash followed by a newline is interpreted as those two characters\nas part of the string, *not* as a line continuation.\n\nWhen an ``\'r\'`` or ``\'R\'`` prefix is used in conjunction with a\n``\'u\'`` or ``\'U\'`` prefix, then the ``\\uXXXX`` and ``\\UXXXXXXXX``\nescape sequences are processed while *all other backslashes are left\nin the string*. For example, the string literal ``ur"\\u0062\\n"``\nconsists of three Unicode characters: \'LATIN SMALL LETTER B\', \'REVERSE\nSOLIDUS\', and \'LATIN SMALL LETTER N\'. Backslashes can be escaped with\na preceding backslash; however, both remain in the string. As a\nresult, ``\\uXXXX`` escape sequences are only recognized when there are\nan odd number of backslashes.\n', 'subscriptions': u'\nSubscriptions\n*************\n\nA subscription selects an item of a sequence (string, tuple or list)\nor mapping (dictionary) object:\n\n subscription ::= primary "[" expression_list "]"\n\nThe primary must evaluate to an object of a sequence or mapping type.\n\nIf the primary is a mapping, the expression list must evaluate to an\nobject whose value is one of the keys of the mapping, and the\nsubscription selects the value in the mapping that corresponds to that\nkey. (The expression list is a tuple except if it has exactly one\nitem.)\n\nIf the primary is a sequence, the expression (list) must evaluate to a\nplain integer. If this value is negative, the length of the sequence\nis added to it (so that, e.g., ``x[-1]`` selects the last item of\n``x``.) The resulting value must be a nonnegative integer less than\nthe number of items in the sequence, and the subscription selects the\nitem whose index is that value (counting from zero).\n\nA string\'s items are characters. A character is not a separate data\ntype but a string of exactly one character.\n', 'truth': u"\nTruth Value Testing\n*******************\n\nAny object can be tested for truth value, for use in an ``if`` or\n``while`` condition or as operand of the Boolean operations below. The\nfollowing values are considered false:\n\n* ``None``\n\n* ``False``\n\n* zero of any numeric type, for example, ``0``, ``0L``, ``0.0``,\n ``0j``.\n\n* any empty sequence, for example, ``''``, ``()``, ``[]``.\n\n* any empty mapping, for example, ``{}``.\n\n* instances of user-defined classes, if the class defines a\n ``__nonzero__()`` or ``__len__()`` method, when that method returns\n the integer zero or ``bool`` value ``False``. 
[1]\n\nAll other values are considered true --- so objects of many types are\nalways true.\n\nOperations and built-in functions that have a Boolean result always\nreturn ``0`` or ``False`` for false and ``1`` or ``True`` for true,\nunless otherwise stated. (Important exception: the Boolean operations\n``or`` and ``and`` always return one of their operands.)\n", 'try': u'\nThe ``try`` statement\n*********************\n\nThe ``try`` statement specifies exception handlers and/or cleanup code\nfor a group of statements:\n\n try_stmt ::= try1_stmt | try2_stmt\n try1_stmt ::= "try" ":" suite\n ("except" [expression [("as" | ",") target]] ":" suite)+\n ["else" ":" suite]\n ["finally" ":" suite]\n try2_stmt ::= "try" ":" suite\n "finally" ":" suite\n\nChanged in version 2.5: In previous versions of Python,\n``try``...``except``...``finally`` did not work. ``try``...``except``\nhad to be nested in ``try``...``finally``.\n\nThe ``except`` clause(s) specify one or more exception handlers. When\nno exception occurs in the ``try`` clause, no exception handler is\nexecuted. When an exception occurs in the ``try`` suite, a search for\nan exception handler is started. This search inspects the except\nclauses in turn until one is found that matches the exception. An\nexpression-less except clause, if present, must be last; it matches\nany exception. For an except clause with an expression, that\nexpression is evaluated, and the clause matches the exception if the\nresulting object is "compatible" with the exception. An object is\ncompatible with an exception if it is the class or a base class of the\nexception object, a tuple containing an item compatible with the\nexception, or, in the (deprecated) case of string exceptions, is the\nraised string itself (note that the object identities must match, i.e.\nit must be the same string object, not just a string with the same\nvalue).\n\nIf no except clause matches the exception, the search for an exception\nhandler continues in the surrounding code and on the invocation stack.\n[1]\n\nIf the evaluation of an expression in the header of an except clause\nraises an exception, the original search for a handler is canceled and\na search starts for the new exception in the surrounding code and on\nthe call stack (it is treated as if the entire ``try`` statement\nraised the exception).\n\nWhen a matching except clause is found, the exception is assigned to\nthe target specified in that except clause, if present, and the except\nclause\'s suite is executed. All except clauses must have an\nexecutable block. When the end of this block is reached, execution\ncontinues normally after the entire try statement. (This means that\nif two nested handlers exist for the same exception, and the exception\noccurs in the try clause of the inner handler, the outer handler will\nnot handle the exception.)\n\nBefore an except clause\'s suite is executed, details about the\nexception are assigned to three variables in the ``sys`` module:\n``sys.exc_type`` receives the object identifying the exception;\n``sys.exc_value`` receives the exception\'s parameter;\n``sys.exc_traceback`` receives a traceback object (see section *The\nstandard type hierarchy*) identifying the point in the program where\nthe exception occurred. These details are also available through the\n``sys.exc_info()`` function, which returns a tuple ``(exc_type,\nexc_value, exc_traceback)``. Use of the corresponding variables is\ndeprecated in favor of this function, since their use is unsafe in a\nthreaded program. 
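A short, hedged sketch (Python 2) of the except-clause matching and ``sys.exc_info()`` behaviour described above; the lookup of a missing dictionary key is just an easy way to raise ``KeyError``::

    import sys

    try:
        {}['missing']
    except (IOError, KeyError) as exc:   # a tuple matches if any item is compatible
        exc_type, exc_value, exc_tb = sys.exc_info()
        assert exc_type is KeyError
        assert exc_value is exc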
As of Python 1.5, the variables are restored to\ntheir previous values (before the call) when returning from a function\nthat handled an exception.\n\nThe optional ``else`` clause is executed if and when control flows off\nthe end of the ``try`` clause. [2] Exceptions in the ``else`` clause\nare not handled by the preceding ``except`` clauses.\n\nIf ``finally`` is present, it specifies a \'cleanup\' handler. The\n``try`` clause is executed, including any ``except`` and ``else``\nclauses. If an exception occurs in any of the clauses and is not\nhandled, the exception is temporarily saved. The ``finally`` clause is\nexecuted. If there is a saved exception, it is re-raised at the end\nof the ``finally`` clause. If the ``finally`` clause raises another\nexception or executes a ``return`` or ``break`` statement, the saved\nexception is lost. The exception information is not available to the\nprogram during execution of the ``finally`` clause.\n\nWhen a ``return``, ``break`` or ``continue`` statement is executed in\nthe ``try`` suite of a ``try``...``finally`` statement, the\n``finally`` clause is also executed \'on the way out.\' A ``continue``\nstatement is illegal in the ``finally`` clause. (The reason is a\nproblem with the current implementation --- this restriction may be\nlifted in the future).\n\nAdditional information on exceptions can be found in section\n*Exceptions*, and information on using the ``raise`` statement to\ngenerate exceptions may be found in section *The raise statement*.\n', - 'types': u'\nThe standard type hierarchy\n***************************\n\nBelow is a list of the types that are built into Python. Extension\nmodules (written in C, Java, or other languages, depending on the\nimplementation) can define additional types. Future versions of\nPython may add types to the type hierarchy (e.g., rational numbers,\nefficiently stored arrays of integers, etc.).\n\nSome of the type descriptions below contain a paragraph listing\n\'special attributes.\' These are attributes that provide access to the\nimplementation and are not intended for general use. Their definition\nmay change in the future.\n\nNone\n This type has a single value. There is a single object with this\n value. This object is accessed through the built-in name ``None``.\n It is used to signify the absence of a value in many situations,\n e.g., it is returned from functions that don\'t explicitly return\n anything. Its truth value is false.\n\nNotImplemented\n This type has a single value. There is a single object with this\n value. This object is accessed through the built-in name\n ``NotImplemented``. Numeric methods and rich comparison methods may\n return this value if they do not implement the operation for the\n operands provided. (The interpreter will then try the reflected\n operation, or some other fallback, depending on the operator.) Its\n truth value is true.\n\nEllipsis\n This type has a single value. There is a single object with this\n value. This object is accessed through the built-in name\n ``Ellipsis``. It is used to indicate the presence of the ``...``\n syntax in a slice. Its truth value is true.\n\n``numbers.Number``\n These are created by numeric literals and returned as results by\n arithmetic operators and arithmetic built-in functions. 
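A hedged sketch (Python 2) of the ``finally`` behaviour noted in the ``try`` entry above, where a ``return`` in the ``finally`` clause discards the saved exception::

    def f():
        try:
            raise ValueError("lost")
        finally:
            return "finally wins"    # the saved ValueError is not re-raised

    assert f() == "finally wins"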
Numeric\n objects are immutable; once created their value never changes.\n Python numbers are of course strongly related to mathematical\n numbers, but subject to the limitations of numerical representation\n in computers.\n\n Python distinguishes between integers, floating point numbers, and\n complex numbers:\n\n ``numbers.Integral``\n These represent elements from the mathematical set of integers\n (positive and negative).\n\n There are three types of integers:\n\n Plain integers\n These represent numbers in the range -2147483648 through\n 2147483647. (The range may be larger on machines with a\n larger natural word size, but not smaller.) When the result\n of an operation would fall outside this range, the result is\n normally returned as a long integer (in some cases, the\n exception ``OverflowError`` is raised instead). For the\n purpose of shift and mask operations, integers are assumed to\n have a binary, 2\'s complement notation using 32 or more bits,\n and hiding no bits from the user (i.e., all 4294967296\n different bit patterns correspond to different values).\n\n Long integers\n These represent numbers in an unlimited range, subject to\n available (virtual) memory only. For the purpose of shift\n and mask operations, a binary representation is assumed, and\n negative numbers are represented in a variant of 2\'s\n complement which gives the illusion of an infinite string of\n sign bits extending to the left.\n\n Booleans\n These represent the truth values False and True. The two\n objects representing the values False and True are the only\n Boolean objects. The Boolean type is a subtype of plain\n integers, and Boolean values behave like the values 0 and 1,\n respectively, in almost all contexts, the exception being\n that when converted to a string, the strings ``"False"`` or\n ``"True"`` are returned, respectively.\n\n The rules for integer representation are intended to give the\n most meaningful interpretation of shift and mask operations\n involving negative integers and the least surprises when\n switching between the plain and long integer domains. Any\n operation, if it yields a result in the plain integer domain,\n will yield the same result in the long integer domain or when\n using mixed operands. The switch between domains is transparent\n to the programmer.\n\n ``numbers.Real`` (``float``)\n These represent machine-level double precision floating point\n numbers. You are at the mercy of the underlying machine\n architecture (and C or Java implementation) for the accepted\n range and handling of overflow. Python does not support single-\n precision floating point numbers; the savings in processor and\n memory usage that are usually the reason for using these is\n dwarfed by the overhead of using objects in Python, so there is\n no reason to complicate the language with two kinds of floating\n point numbers.\n\n ``numbers.Complex``\n These represent complex numbers as a pair of machine-level\n double precision floating point numbers. The same caveats apply\n as for floating point numbers. The real and imaginary parts of a\n complex number ``z`` can be retrieved through the read-only\n attributes ``z.real`` and ``z.imag``.\n\nSequences\n These represent finite ordered sets indexed by non-negative\n numbers. The built-in function ``len()`` returns the number of\n items of a sequence. When the length of a sequence is *n*, the\n index set contains the numbers 0, 1, ..., *n*-1. 
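As a hedged illustration (CPython 2 assumed) of the plain-integer overflow and Boolean behaviour described above::

    import sys

    big = sys.maxint + 1          # result leaves the plain-integer range
    assert isinstance(big, long)
    assert True + True == 2       # booleans behave like the integers 0 and 1
    assert str(True) == "True"    # ...except when converted to a string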
Item *i* of\n sequence *a* is selected by ``a[i]``.\n\n Sequences also support slicing: ``a[i:j]`` selects all items with\n index *k* such that *i* ``<=`` *k* ``<`` *j*. When used as an\n expression, a slice is a sequence of the same type. This implies\n that the index set is renumbered so that it starts at 0.\n\n Some sequences also support "extended slicing" with a third "step"\n parameter: ``a[i:j:k]`` selects all items of *a* with index *x*\n where ``x = i + n*k``, *n* ``>=`` ``0`` and *i* ``<=`` *x* ``<``\n *j*.\n\n Sequences are distinguished according to their mutability:\n\n Immutable sequences\n An object of an immutable sequence type cannot change once it is\n created. (If the object contains references to other objects,\n these other objects may be mutable and may be changed; however,\n the collection of objects directly referenced by an immutable\n object cannot change.)\n\n The following types are immutable sequences:\n\n Strings\n The items of a string are characters. There is no separate\n character type; a character is represented by a string of one\n item. Characters represent (at least) 8-bit bytes. The\n built-in functions ``chr()`` and ``ord()`` convert between\n characters and nonnegative integers representing the byte\n values. Bytes with the values 0-127 usually represent the\n corresponding ASCII values, but the interpretation of values\n is up to the program. The string data type is also used to\n represent arrays of bytes, e.g., to hold data read from a\n file.\n\n (On systems whose native character set is not ASCII, strings\n may use EBCDIC in their internal representation, provided the\n functions ``chr()`` and ``ord()`` implement a mapping between\n ASCII and EBCDIC, and string comparison preserves the ASCII\n order. Or perhaps someone can propose a better rule?)\n\n Unicode\n The items of a Unicode object are Unicode code units. A\n Unicode code unit is represented by a Unicode object of one\n item and can hold either a 16-bit or 32-bit value\n representing a Unicode ordinal (the maximum value for the\n ordinal is given in ``sys.maxunicode``, and depends on how\n Python is configured at compile time). Surrogate pairs may\n be present in the Unicode object, and will be reported as two\n separate items. The built-in functions ``unichr()`` and\n ``ord()`` convert between code units and nonnegative integers\n representing the Unicode ordinals as defined in the Unicode\n Standard 3.0. Conversion from and to other encodings are\n possible through the Unicode method ``encode()`` and the\n built-in function ``unicode()``.\n\n Tuples\n The items of a tuple are arbitrary Python objects. Tuples of\n two or more items are formed by comma-separated lists of\n expressions. A tuple of one item (a \'singleton\') can be\n formed by affixing a comma to an expression (an expression by\n itself does not create a tuple, since parentheses must be\n usable for grouping of expressions). An empty tuple can be\n formed by an empty pair of parentheses.\n\n Mutable sequences\n Mutable sequences can be changed after they are created. The\n subscription and slicing notations can be used as the target of\n assignment and ``del`` (delete) statements.\n\n There are currently two intrinsic mutable sequence types:\n\n Lists\n The items of a list are arbitrary Python objects. Lists are\n formed by placing a comma-separated list of expressions in\n square brackets. 
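A small illustrative sketch (Python 2, where ``range()`` returns a list) of one-item tuples and extended slicing as described above::

    singleton = (42,)                # the trailing comma forms the tuple
    assert len(singleton) == 1
    a = range(10)
    assert a[1:8:2] == [1, 3, 5, 7]  # items with index 1 + n*2 that stay below 8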
(Note that there are no special cases needed\n to form lists of length 0 or 1.)\n\n Byte Arrays\n A bytearray object is a mutable array. They are created by\n the built-in ``bytearray()`` constructor. Aside from being\n mutable (and hence unhashable), byte arrays otherwise provide\n the same interface and functionality as immutable bytes\n objects.\n\n The extension module ``array`` provides an additional example of\n a mutable sequence type.\n\nSet types\n These represent unordered, finite sets of unique, immutable\n objects. As such, they cannot be indexed by any subscript. However,\n they can be iterated over, and the built-in function ``len()``\n returns the number of items in a set. Common uses for sets are fast\n membership testing, removing duplicates from a sequence, and\n computing mathematical operations such as intersection, union,\n difference, and symmetric difference.\n\n For set elements, the same immutability rules apply as for\n dictionary keys. Note that numeric types obey the normal rules for\n numeric comparison: if two numbers compare equal (e.g., ``1`` and\n ``1.0``), only one of them can be contained in a set.\n\n There are currently two intrinsic set types:\n\n Sets\n These represent a mutable set. They are created by the built-in\n ``set()`` constructor and can be modified afterwards by several\n methods, such as ``add()``.\n\n Frozen sets\n These represent an immutable set. They are created by the\n built-in ``frozenset()`` constructor. As a frozenset is\n immutable and *hashable*, it can be used again as an element of\n another set, or as a dictionary key.\n\nMappings\n These represent finite sets of objects indexed by arbitrary index\n sets. The subscript notation ``a[k]`` selects the item indexed by\n ``k`` from the mapping ``a``; this can be used in expressions and\n as the target of assignments or ``del`` statements. The built-in\n function ``len()`` returns the number of items in a mapping.\n\n There is currently a single intrinsic mapping type:\n\n Dictionaries\n These represent finite sets of objects indexed by nearly\n arbitrary values. The only types of values not acceptable as\n keys are values containing lists or dictionaries or other\n mutable types that are compared by value rather than by object\n identity, the reason being that the efficient implementation of\n dictionaries requires a key\'s hash value to remain constant.\n Numeric types used for keys obey the normal rules for numeric\n comparison: if two numbers compare equal (e.g., ``1`` and\n ``1.0``) then they can be used interchangeably to index the same\n dictionary entry.\n\n Dictionaries are mutable; they can be created by the ``{...}``\n notation (see section *Dictionary displays*).\n\n The extension modules ``dbm``, ``gdbm``, and ``bsddb`` provide\n additional examples of mapping types.\n\nCallable types\n These are the types to which the function call operation (see\n section *Calls*) can be applied:\n\n User-defined functions\n A user-defined function object is created by a function\n definition (see section *Function definitions*). 
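A hedged sketch (Python 2) of the numeric-key and set-element rules described above, where numbers that compare equal are interchangeable::

    d = {1: 'one'}
    d[1.0] = 'still one'            # 1 == 1.0, so the same entry is updated
    assert d == {1: 'still one'}
    assert set([1, 1.0, True]) == set([1])   # equal numbers collapse to one element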
It should be\n called with an argument list containing the same number of items\n as the function\'s formal parameter list.\n\n Special attributes:\n\n +-------------------------+---------------------------------+-------------+\n | Attribute | Meaning | |\n +=========================+=================================+=============+\n | ``func_doc`` | The function\'s documentation | Writable |\n | | string, or ``None`` if | |\n | | unavailable | |\n +-------------------------+---------------------------------+-------------+\n | ``__doc__`` | Another way of spelling | Writable |\n | | ``func_doc`` | |\n +-------------------------+---------------------------------+-------------+\n | ``func_name`` | The function\'s name | Writable |\n +-------------------------+---------------------------------+-------------+\n | ``__name__`` | Another way of spelling | Writable |\n | | ``func_name`` | |\n +-------------------------+---------------------------------+-------------+\n | ``__module__`` | The name of the module the | Writable |\n | | function was defined in, or | |\n | | ``None`` if unavailable. | |\n +-------------------------+---------------------------------+-------------+\n | ``func_defaults`` | A tuple containing default | Writable |\n | | argument values for those | |\n | | arguments that have defaults, | |\n | | or ``None`` if no arguments | |\n | | have a default value | |\n +-------------------------+---------------------------------+-------------+\n | ``func_code`` | The code object representing | Writable |\n | | the compiled function body. | |\n +-------------------------+---------------------------------+-------------+\n | ``func_globals`` | A reference to the dictionary | Read-only |\n | | that holds the function\'s | |\n | | global variables --- the global | |\n | | namespace of the module in | |\n | | which the function was defined. | |\n +-------------------------+---------------------------------+-------------+\n | ``func_dict`` | The namespace supporting | Writable |\n | | arbitrary function attributes. | |\n +-------------------------+---------------------------------+-------------+\n | ``func_closure`` | ``None`` or a tuple of cells | Read-only |\n | | that contain bindings for the | |\n | | function\'s free variables. | |\n +-------------------------+---------------------------------+-------------+\n\n Most of the attributes labelled "Writable" check the type of the\n assigned value.\n\n Changed in version 2.4: ``func_name`` is now writable.\n\n Function objects also support getting and setting arbitrary\n attributes, which can be used, for example, to attach metadata\n to functions. Regular attribute dot-notation is used to get and\n set such attributes. *Note that the current implementation only\n supports function attributes on user-defined functions. 
Function\n attributes on built-in functions may be supported in the\n future.*\n\n Additional information about a function\'s definition can be\n retrieved from its code object; see the description of internal\n types below.\n\n User-defined methods\n A user-defined method object combines a class, a class instance\n (or ``None``) and any callable object (normally a user-defined\n function).\n\n Special read-only attributes: ``im_self`` is the class instance\n object, ``im_func`` is the function object; ``im_class`` is the\n class of ``im_self`` for bound methods or the class that asked\n for the method for unbound methods; ``__doc__`` is the method\'s\n documentation (same as ``im_func.__doc__``); ``__name__`` is the\n method name (same as ``im_func.__name__``); ``__module__`` is\n the name of the module the method was defined in, or ``None`` if\n unavailable.\n\n Changed in version 2.2: ``im_self`` used to refer to the class\n that defined the method.\n\n Changed in version 2.6: For 3.0 forward-compatibility,\n ``im_func`` is also available as ``__func__``, and ``im_self``\n as ``__self__``.\n\n Methods also support accessing (but not setting) the arbitrary\n function attributes on the underlying function object.\n\n User-defined method objects may be created when getting an\n attribute of a class (perhaps via an instance of that class), if\n that attribute is a user-defined function object, an unbound\n user-defined method object, or a class method object. When the\n attribute is a user-defined method object, a new method object\n is only created if the class from which it is being retrieved is\n the same as, or a derived class of, the class stored in the\n original method object; otherwise, the original method object is\n used as it is.\n\n When a user-defined method object is created by retrieving a\n user-defined function object from a class, its ``im_self``\n attribute is ``None`` and the method object is said to be\n unbound. When one is created by retrieving a user-defined\n function object from a class via one of its instances, its\n ``im_self`` attribute is the instance, and the method object is\n said to be bound. In either case, the new method\'s ``im_class``\n attribute is the class from which the retrieval takes place, and\n its ``im_func`` attribute is the original function object.\n\n When a user-defined method object is created by retrieving\n another method object from a class or instance, the behaviour is\n the same as for a function object, except that the ``im_func``\n attribute of the new instance is not the original method object\n but its ``im_func`` attribute.\n\n When a user-defined method object is created by retrieving a\n class method object from a class or instance, its ``im_self``\n attribute is the class itself (the same as the ``im_class``\n attribute), and its ``im_func`` attribute is the function object\n underlying the class method.\n\n When an unbound user-defined method object is called, the\n underlying function (``im_func``) is called, with the\n restriction that the first argument must be an instance of the\n proper class (``im_class``) or of a derived class thereof.\n\n When a bound user-defined method object is called, the\n underlying function (``im_func``) is called, inserting the class\n instance (``im_self``) in front of the argument list. 
For\n instance, when ``C`` is a class which contains a definition for\n a function ``f()``, and ``x`` is an instance of ``C``, calling\n ``x.f(1)`` is equivalent to calling ``C.f(x, 1)``.\n\n When a user-defined method object is derived from a class method\n object, the "class instance" stored in ``im_self`` will actually\n be the class itself, so that calling either ``x.f(1)`` or\n ``C.f(1)`` is equivalent to calling ``f(C,1)`` where ``f`` is\n the underlying function.\n\n Note that the transformation from function object to (unbound or\n bound) method object happens each time the attribute is\n retrieved from the class or instance. In some cases, a fruitful\n optimization is to assign the attribute to a local variable and\n call that local variable. Also notice that this transformation\n only happens for user-defined functions; other callable objects\n (and all non-callable objects) are retrieved without\n transformation. It is also important to note that user-defined\n functions which are attributes of a class instance are not\n converted to bound methods; this *only* happens when the\n function is an attribute of the class.\n\n Generator functions\n A function or method which uses the ``yield`` statement (see\n section *The yield statement*) is called a *generator function*.\n Such a function, when called, always returns an iterator object\n which can be used to execute the body of the function: calling\n the iterator\'s ``next()`` method will cause the function to\n execute until it provides a value using the ``yield`` statement.\n When the function executes a ``return`` statement or falls off\n the end, a ``StopIteration`` exception is raised and the\n iterator will have reached the end of the set of values to be\n returned.\n\n Built-in functions\n A built-in function object is a wrapper around a C function.\n Examples of built-in functions are ``len()`` and ``math.sin()``\n (``math`` is a standard built-in module). The number and type of\n the arguments are determined by the C function. Special read-\n only attributes: ``__doc__`` is the function\'s documentation\n string, or ``None`` if unavailable; ``__name__`` is the\n function\'s name; ``__self__`` is set to ``None`` (but see the\n next item); ``__module__`` is the name of the module the\n function was defined in or ``None`` if unavailable.\n\n Built-in methods\n This is really a different disguise of a built-in function, this\n time containing an object passed to the C function as an\n implicit extra argument. An example of a built-in method is\n ``alist.append()``, assuming *alist* is a list object. In this\n case, the special read-only attribute ``__self__`` is set to the\n object denoted by *list*.\n\n Class Types\n Class types, or "new-style classes," are callable. These\n objects normally act as factories for new instances of\n themselves, but variations are possible for class types that\n override ``__new__()``. The arguments of the call are passed to\n ``__new__()`` and, in the typical case, to ``__init__()`` to\n initialize the new instance.\n\n Classic Classes\n Class objects are described below. When a class object is\n called, a new class instance (also described below) is created\n and returned. This implies a call to the class\'s ``__init__()``\n method if it has one. Any arguments are passed on to the\n ``__init__()`` method. If there is no ``__init__()`` method,\n the class must be called without arguments.\n\n Class instances\n Class instances are described below. 
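A hedged sketch (Python 2, new-style class assumed) of the bound-method behaviour and ``im_*`` attributes described above::

    class C(object):
        def f(self, arg):
            return arg

    x = C()
    bound = x.f                              # retrieval creates a bound method object
    assert bound.im_self is x
    assert bound.im_func is C.__dict__['f']  # the underlying function object
    assert x.f(1) == C.f(x, 1) == 1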
Class instances are\n callable only when the class has a ``__call__()`` method;\n ``x(arguments)`` is a shorthand for ``x.__call__(arguments)``.\n\nModules\n Modules are imported by the ``import`` statement (see section *The\n import statement*). A module object has a namespace implemented by\n a dictionary object (this is the dictionary referenced by the\n func_globals attribute of functions defined in the module).\n Attribute references are translated to lookups in this dictionary,\n e.g., ``m.x`` is equivalent to ``m.__dict__["x"]``. A module object\n does not contain the code object used to initialize the module\n (since it isn\'t needed once the initialization is done).\n\n Attribute assignment updates the module\'s namespace dictionary,\n e.g., ``m.x = 1`` is equivalent to ``m.__dict__["x"] = 1``.\n\n Special read-only attribute: ``__dict__`` is the module\'s namespace\n as a dictionary object.\n\n Predefined (writable) attributes: ``__name__`` is the module\'s\n name; ``__doc__`` is the module\'s documentation string, or ``None``\n if unavailable; ``__file__`` is the pathname of the file from which\n the module was loaded, if it was loaded from a file. The\n ``__file__`` attribute is not present for C modules that are\n statically linked into the interpreter; for extension modules\n loaded dynamically from a shared library, it is the pathname of the\n shared library file.\n\nClasses\n Both class types (new-style classes) and class objects (old-\n style/classic classes) are typically created by class definitions\n (see section *Class definitions*). A class has a namespace\n implemented by a dictionary object. Class attribute references are\n translated to lookups in this dictionary, e.g., ``C.x`` is\n translated to ``C.__dict__["x"]`` (although for new-style classes\n in particular there are a number of hooks which allow for other\n means of locating attributes). When the attribute name is not found\n there, the attribute search continues in the base classes. For\n old-style classes, the search is depth-first, left-to-right in the\n order of occurrence in the base class list. New-style classes use\n the more complex C3 method resolution order which behaves correctly\n even in the presence of \'diamond\' inheritance structures where\n there are multiple inheritance paths leading back to a common\n ancestor. Additional details on the C3 MRO used by new-style\n classes can be found in the documentation accompanying the 2.3\n release at http://www.python.org/download/releases/2.3/mro/.\n\n When a class attribute reference (for class ``C``, say) would yield\n a user-defined function object or an unbound user-defined method\n object whose associated class is either ``C`` or one of its base\n classes, it is transformed into an unbound user-defined method\n object whose ``im_class`` attribute is ``C``. When it would yield a\n class method object, it is transformed into a bound user-defined\n method object whose ``im_class`` and ``im_self`` attributes are\n both ``C``. 
When it would yield a static method object, it is\n transformed into the object wrapped by the static method object.\n See section *Implementing Descriptors* for another way in which\n attributes retrieved from a class may differ from those actually\n contained in its ``__dict__`` (note that only new-style classes\n support descriptors).\n\n Class attribute assignments update the class\'s dictionary, never\n the dictionary of a base class.\n\n A class object can be called (see above) to yield a class instance\n (see below).\n\n Special attributes: ``__name__`` is the class name; ``__module__``\n is the module name in which the class was defined; ``__dict__`` is\n the dictionary containing the class\'s namespace; ``__bases__`` is a\n tuple (possibly empty or a singleton) containing the base classes,\n in the order of their occurrence in the base class list;\n ``__doc__`` is the class\'s documentation string, or None if\n undefined.\n\nClass instances\n A class instance is created by calling a class object (see above).\n A class instance has a namespace implemented as a dictionary which\n is the first place in which attribute references are searched.\n When an attribute is not found there, and the instance\'s class has\n an attribute by that name, the search continues with the class\n attributes. If a class attribute is found that is a user-defined\n function object or an unbound user-defined method object whose\n associated class is the class (call it ``C``) of the instance for\n which the attribute reference was initiated or one of its bases, it\n is transformed into a bound user-defined method object whose\n ``im_class`` attribute is ``C`` and whose ``im_self`` attribute is\n the instance. Static method and class method objects are also\n transformed, as if they had been retrieved from class ``C``; see\n above under "Classes". See section *Implementing Descriptors* for\n another way in which attributes of a class retrieved via its\n instances may differ from the objects actually stored in the\n class\'s ``__dict__``. If no class attribute is found, and the\n object\'s class has a ``__getattr__()`` method, that is called to\n satisfy the lookup.\n\n Attribute assignments and deletions update the instance\'s\n dictionary, never a class\'s dictionary. If the class has a\n ``__setattr__()`` or ``__delattr__()`` method, this is called\n instead of updating the instance dictionary directly.\n\n Class instances can pretend to be numbers, sequences, or mappings\n if they have methods with certain special names. See section\n *Special method names*.\n\n Special attributes: ``__dict__`` is the attribute dictionary;\n ``__class__`` is the instance\'s class.\n\nFiles\n A file object represents an open file. File objects are created by\n the ``open()`` built-in function, and also by ``os.popen()``,\n ``os.fdopen()``, and the ``makefile()`` method of socket objects\n (and perhaps by other functions or methods provided by extension\n modules). The objects ``sys.stdin``, ``sys.stdout`` and\n ``sys.stderr`` are initialized to file objects corresponding to the\n interpreter\'s standard input, output and error streams. See *File\n Objects* for complete documentation of file objects.\n\nInternal types\n A few types used internally by the interpreter are exposed to the\n user. Their definitions may change with future versions of the\n interpreter, but they are mentioned here for completeness.\n\n Code objects\n Code objects represent *byte-compiled* executable Python code,\n or *bytecode*. 
The difference between a code object and a\n function object is that the function object contains an explicit\n reference to the function\'s globals (the module in which it was\n defined), while a code object contains no context; also the\n default argument values are stored in the function object, not\n in the code object (because they represent values calculated at\n run-time). Unlike function objects, code objects are immutable\n and contain no references (directly or indirectly) to mutable\n objects.\n\n Special read-only attributes: ``co_name`` gives the function\n name; ``co_argcount`` is the number of positional arguments\n (including arguments with default values); ``co_nlocals`` is the\n number of local variables used by the function (including\n arguments); ``co_varnames`` is a tuple containing the names of\n the local variables (starting with the argument names);\n ``co_cellvars`` is a tuple containing the names of local\n variables that are referenced by nested functions;\n ``co_freevars`` is a tuple containing the names of free\n variables; ``co_code`` is a string representing the sequence of\n bytecode instructions; ``co_consts`` is a tuple containing the\n literals used by the bytecode; ``co_names`` is a tuple\n containing the names used by the bytecode; ``co_filename`` is\n the filename from which the code was compiled;\n ``co_firstlineno`` is the first line number of the function;\n ``co_lnotab`` is a string encoding the mapping from bytecode\n offsets to line numbers (for details see the source code of the\n interpreter); ``co_stacksize`` is the required stack size\n (including local variables); ``co_flags`` is an integer encoding\n a number of flags for the interpreter.\n\n The following flag bits are defined for ``co_flags``: bit\n ``0x04`` is set if the function uses the ``*arguments`` syntax\n to accept an arbitrary number of positional arguments; bit\n ``0x08`` is set if the function uses the ``**keywords`` syntax\n to accept arbitrary keyword arguments; bit ``0x20`` is set if\n the function is a generator.\n\n Future feature declarations (``from __future__ import\n division``) also use bits in ``co_flags`` to indicate whether a\n code object was compiled with a particular feature enabled: bit\n ``0x2000`` is set if the function was compiled with future\n division enabled; bits ``0x10`` and ``0x1000`` were used in\n earlier versions of Python.\n\n Other bits in ``co_flags`` are reserved for internal use.\n\n If a code object represents a function, the first item in\n ``co_consts`` is the documentation string of the function, or\n ``None`` if undefined.\n\n Frame objects\n Frame objects represent execution frames. 
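A hedged sketch (CPython 2) of a few of the code-object attributes and ``co_flags`` bits listed above::

    def gen(a, *args, **kwargs):
        yield a

    code = gen.func_code
    assert code.co_name == 'gen' and code.co_argcount == 1
    assert code.co_flags & 0x04     # uses the *arguments syntax
    assert code.co_flags & 0x08     # uses the **keywords syntax
    assert code.co_flags & 0x20     # generator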
They may occur in\n traceback objects (see below).\n\n Special read-only attributes: ``f_back`` is to the previous\n stack frame (towards the caller), or ``None`` if this is the\n bottom stack frame; ``f_code`` is the code object being executed\n in this frame; ``f_locals`` is the dictionary used to look up\n local variables; ``f_globals`` is used for global variables;\n ``f_builtins`` is used for built-in (intrinsic) names;\n ``f_restricted`` is a flag indicating whether the function is\n executing in restricted execution mode; ``f_lasti`` gives the\n precise instruction (this is an index into the bytecode string\n of the code object).\n\n Special writable attributes: ``f_trace``, if not ``None``, is a\n function called at the start of each source code line (this is\n used by the debugger); ``f_exc_type``, ``f_exc_value``,\n ``f_exc_traceback`` represent the last exception raised in the\n parent frame provided another exception was ever raised in the\n current frame (in all other cases they are None); ``f_lineno``\n is the current line number of the frame --- writing to this from\n within a trace function jumps to the given line (only for the\n bottom-most frame). A debugger can implement a Jump command\n (aka Set Next Statement) by writing to f_lineno.\n\n Traceback objects\n Traceback objects represent a stack trace of an exception. A\n traceback object is created when an exception occurs. When the\n search for an exception handler unwinds the execution stack, at\n each unwound level a traceback object is inserted in front of\n the current traceback. When an exception handler is entered,\n the stack trace is made available to the program. (See section\n *The try statement*.) It is accessible as ``sys.exc_traceback``,\n and also as the third item of the tuple returned by\n ``sys.exc_info()``. The latter is the preferred interface,\n since it works correctly when the program is using multiple\n threads. When the program contains no suitable handler, the\n stack trace is written (nicely formatted) to the standard error\n stream; if the interpreter is interactive, it is also made\n available to the user as ``sys.last_traceback``.\n\n Special read-only attributes: ``tb_next`` is the next level in\n the stack trace (towards the frame where the exception\n occurred), or ``None`` if there is no next level; ``tb_frame``\n points to the execution frame of the current level;\n ``tb_lineno`` gives the line number where the exception\n occurred; ``tb_lasti`` indicates the precise instruction. The\n line number and last instruction in the traceback may differ\n from the line number of its frame object if the exception\n occurred in a ``try`` statement with no matching except clause\n or with a finally clause.\n\n Slice objects\n Slice objects are used to represent slices when *extended slice\n syntax* is used. This is a slice using two colons, or multiple\n slices or ellipses separated by commas, e.g., ``a[i:j:step]``,\n ``a[i:j, k:l]``, or ``a[..., i:j]``. They are also created by\n the built-in ``slice()`` function.\n\n Special read-only attributes: ``start`` is the lower bound;\n ``stop`` is the upper bound; ``step`` is the step value; each is\n ``None`` if omitted. These attributes can have any type.\n\n Slice objects support one method:\n\n slice.indices(self, length)\n\n This method takes a single integer argument *length* and\n computes information about the extended slice that the slice\n object would describe if applied to a sequence of *length*\n items. 
It returns a tuple of three integers; respectively\n these are the *start* and *stop* indices and the *step* or\n stride length of the slice. Missing or out-of-bounds indices\n are handled in a manner consistent with regular slices.\n\n New in version 2.3.\n\n Static method objects\n Static method objects provide a way of defeating the\n transformation of function objects to method objects described\n above. A static method object is a wrapper around any other\n object, usually a user-defined method object. When a static\n method object is retrieved from a class or a class instance, the\n object actually returned is the wrapped object, which is not\n subject to any further transformation. Static method objects are\n not themselves callable, although the objects they wrap usually\n are. Static method objects are created by the built-in\n ``staticmethod()`` constructor.\n\n Class method objects\n A class method object, like a static method object, is a wrapper\n around another object that alters the way in which that object\n is retrieved from classes and class instances. The behaviour of\n class method objects upon such retrieval is described above,\n under "User-defined methods". Class method objects are created\n by the built-in ``classmethod()`` constructor.\n', + 'types': u'\nThe standard type hierarchy\n***************************\n\nBelow is a list of the types that are built into Python. Extension\nmodules (written in C, Java, or other languages, depending on the\nimplementation) can define additional types. Future versions of\nPython may add types to the type hierarchy (e.g., rational numbers,\nefficiently stored arrays of integers, etc.).\n\nSome of the type descriptions below contain a paragraph listing\n\'special attributes.\' These are attributes that provide access to the\nimplementation and are not intended for general use. Their definition\nmay change in the future.\n\nNone\n This type has a single value. There is a single object with this\n value. This object is accessed through the built-in name ``None``.\n It is used to signify the absence of a value in many situations,\n e.g., it is returned from functions that don\'t explicitly return\n anything. Its truth value is false.\n\nNotImplemented\n This type has a single value. There is a single object with this\n value. This object is accessed through the built-in name\n ``NotImplemented``. Numeric methods and rich comparison methods may\n return this value if they do not implement the operation for the\n operands provided. (The interpreter will then try the reflected\n operation, or some other fallback, depending on the operator.) Its\n truth value is true.\n\nEllipsis\n This type has a single value. There is a single object with this\n value. This object is accessed through the built-in name\n ``Ellipsis``. It is used to indicate the presence of the ``...``\n syntax in a slice. Its truth value is true.\n\n``numbers.Number``\n These are created by numeric literals and returned as results by\n arithmetic operators and arithmetic built-in functions. 
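A hedged sketch (Python 2) of the ``slice.indices()`` behaviour documented in the entry above, with arbitrary illustrative bounds::

    s = slice(None, None, 2)
    assert s.indices(5) == (0, 5, 2)
    assert slice(-3, None).indices(10) == (7, 10, 1)   # negative start is normalized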
Numeric\n objects are immutable; once created their value never changes.\n Python numbers are of course strongly related to mathematical\n numbers, but subject to the limitations of numerical representation\n in computers.\n\n Python distinguishes between integers, floating point numbers, and\n complex numbers:\n\n ``numbers.Integral``\n These represent elements from the mathematical set of integers\n (positive and negative).\n\n There are three types of integers:\n\n Plain integers\n These represent numbers in the range -2147483648 through\n 2147483647. (The range may be larger on machines with a\n larger natural word size, but not smaller.) When the result\n of an operation would fall outside this range, the result is\n normally returned as a long integer (in some cases, the\n exception ``OverflowError`` is raised instead). For the\n purpose of shift and mask operations, integers are assumed to\n have a binary, 2\'s complement notation using 32 or more bits,\n and hiding no bits from the user (i.e., all 4294967296\n different bit patterns correspond to different values).\n\n Long integers\n These represent numbers in an unlimited range, subject to\n available (virtual) memory only. For the purpose of shift\n and mask operations, a binary representation is assumed, and\n negative numbers are represented in a variant of 2\'s\n complement which gives the illusion of an infinite string of\n sign bits extending to the left.\n\n Booleans\n These represent the truth values False and True. The two\n objects representing the values False and True are the only\n Boolean objects. The Boolean type is a subtype of plain\n integers, and Boolean values behave like the values 0 and 1,\n respectively, in almost all contexts, the exception being\n that when converted to a string, the strings ``"False"`` or\n ``"True"`` are returned, respectively.\n\n The rules for integer representation are intended to give the\n most meaningful interpretation of shift and mask operations\n involving negative integers and the least surprises when\n switching between the plain and long integer domains. Any\n operation, if it yields a result in the plain integer domain,\n will yield the same result in the long integer domain or when\n using mixed operands. The switch between domains is transparent\n to the programmer.\n\n ``numbers.Real`` (``float``)\n These represent machine-level double precision floating point\n numbers. You are at the mercy of the underlying machine\n architecture (and C or Java implementation) for the accepted\n range and handling of overflow. Python does not support single-\n precision floating point numbers; the savings in processor and\n memory usage that are usually the reason for using these is\n dwarfed by the overhead of using objects in Python, so there is\n no reason to complicate the language with two kinds of floating\n point numbers.\n\n ``numbers.Complex``\n These represent complex numbers as a pair of machine-level\n double precision floating point numbers. The same caveats apply\n as for floating point numbers. The real and imaginary parts of a\n complex number ``z`` can be retrieved through the read-only\n attributes ``z.real`` and ``z.imag``.\n\nSequences\n These represent finite ordered sets indexed by non-negative\n numbers. The built-in function ``len()`` returns the number of\n items of a sequence. When the length of a sequence is *n*, the\n index set contains the numbers 0, 1, ..., *n*-1. 
Item *i* of\n sequence *a* is selected by ``a[i]``.\n\n Sequences also support slicing: ``a[i:j]`` selects all items with\n index *k* such that *i* ``<=`` *k* ``<`` *j*. When used as an\n expression, a slice is a sequence of the same type. This implies\n that the index set is renumbered so that it starts at 0.\n\n Some sequences also support "extended slicing" with a third "step"\n parameter: ``a[i:j:k]`` selects all items of *a* with index *x*\n where ``x = i + n*k``, *n* ``>=`` ``0`` and *i* ``<=`` *x* ``<``\n *j*.\n\n Sequences are distinguished according to their mutability:\n\n Immutable sequences\n An object of an immutable sequence type cannot change once it is\n created. (If the object contains references to other objects,\n these other objects may be mutable and may be changed; however,\n the collection of objects directly referenced by an immutable\n object cannot change.)\n\n The following types are immutable sequences:\n\n Strings\n The items of a string are characters. There is no separate\n character type; a character is represented by a string of one\n item. Characters represent (at least) 8-bit bytes. The\n built-in functions ``chr()`` and ``ord()`` convert between\n characters and nonnegative integers representing the byte\n values. Bytes with the values 0-127 usually represent the\n corresponding ASCII values, but the interpretation of values\n is up to the program. The string data type is also used to\n represent arrays of bytes, e.g., to hold data read from a\n file.\n\n (On systems whose native character set is not ASCII, strings\n may use EBCDIC in their internal representation, provided the\n functions ``chr()`` and ``ord()`` implement a mapping between\n ASCII and EBCDIC, and string comparison preserves the ASCII\n order. Or perhaps someone can propose a better rule?)\n\n Unicode\n The items of a Unicode object are Unicode code units. A\n Unicode code unit is represented by a Unicode object of one\n item and can hold either a 16-bit or 32-bit value\n representing a Unicode ordinal (the maximum value for the\n ordinal is given in ``sys.maxunicode``, and depends on how\n Python is configured at compile time). Surrogate pairs may\n be present in the Unicode object, and will be reported as two\n separate items. The built-in functions ``unichr()`` and\n ``ord()`` convert between code units and nonnegative integers\n representing the Unicode ordinals as defined in the Unicode\n Standard 3.0. Conversion from and to other encodings are\n possible through the Unicode method ``encode()`` and the\n built-in function ``unicode()``.\n\n Tuples\n The items of a tuple are arbitrary Python objects. Tuples of\n two or more items are formed by comma-separated lists of\n expressions. A tuple of one item (a \'singleton\') can be\n formed by affixing a comma to an expression (an expression by\n itself does not create a tuple, since parentheses must be\n usable for grouping of expressions). An empty tuple can be\n formed by an empty pair of parentheses.\n\n Mutable sequences\n Mutable sequences can be changed after they are created. The\n subscription and slicing notations can be used as the target of\n assignment and ``del`` (delete) statements.\n\n There are currently two intrinsic mutable sequence types:\n\n Lists\n The items of a list are arbitrary Python objects. Lists are\n formed by placing a comma-separated list of expressions in\n square brackets. 
(Note that there are no special cases needed\n to form lists of length 0 or 1.)\n\n Byte Arrays\n A bytearray object is a mutable array. They are created by\n the built-in ``bytearray()`` constructor. Aside from being\n mutable (and hence unhashable), byte arrays otherwise provide\n the same interface and functionality as immutable bytes\n objects.\n\n The extension module ``array`` provides an additional example of\n a mutable sequence type.\n\nSet types\n These represent unordered, finite sets of unique, immutable\n objects. As such, they cannot be indexed by any subscript. However,\n they can be iterated over, and the built-in function ``len()``\n returns the number of items in a set. Common uses for sets are fast\n membership testing, removing duplicates from a sequence, and\n computing mathematical operations such as intersection, union,\n difference, and symmetric difference.\n\n For set elements, the same immutability rules apply as for\n dictionary keys. Note that numeric types obey the normal rules for\n numeric comparison: if two numbers compare equal (e.g., ``1`` and\n ``1.0``), only one of them can be contained in a set.\n\n There are currently two intrinsic set types:\n\n Sets\n These represent a mutable set. They are created by the built-in\n ``set()`` constructor and can be modified afterwards by several\n methods, such as ``add()``.\n\n Frozen sets\n These represent an immutable set. They are created by the\n built-in ``frozenset()`` constructor. As a frozenset is\n immutable and *hashable*, it can be used again as an element of\n another set, or as a dictionary key.\n\nMappings\n These represent finite sets of objects indexed by arbitrary index\n sets. The subscript notation ``a[k]`` selects the item indexed by\n ``k`` from the mapping ``a``; this can be used in expressions and\n as the target of assignments or ``del`` statements. The built-in\n function ``len()`` returns the number of items in a mapping.\n\n There is currently a single intrinsic mapping type:\n\n Dictionaries\n These represent finite sets of objects indexed by nearly\n arbitrary values. The only types of values not acceptable as\n keys are values containing lists or dictionaries or other\n mutable types that are compared by value rather than by object\n identity, the reason being that the efficient implementation of\n dictionaries requires a key\'s hash value to remain constant.\n Numeric types used for keys obey the normal rules for numeric\n comparison: if two numbers compare equal (e.g., ``1`` and\n ``1.0``) then they can be used interchangeably to index the same\n dictionary entry.\n\n Dictionaries are mutable; they can be created by the ``{...}``\n notation (see section *Dictionary displays*).\n\n The extension modules ``dbm``, ``gdbm``, and ``bsddb`` provide\n additional examples of mapping types.\n\nCallable types\n These are the types to which the function call operation (see\n section *Calls*) can be applied:\n\n User-defined functions\n A user-defined function object is created by a function\n definition (see section *Function definitions*). 
It should be\n called with an argument list containing the same number of items\n as the function\'s formal parameter list.\n\n Special attributes:\n\n +-------------------------+---------------------------------+-------------+\n | Attribute | Meaning | |\n +=========================+=================================+=============+\n | ``func_doc`` | The function\'s documentation | Writable |\n | | string, or ``None`` if | |\n | | unavailable | |\n +-------------------------+---------------------------------+-------------+\n | ``__doc__`` | Another way of spelling | Writable |\n | | ``func_doc`` | |\n +-------------------------+---------------------------------+-------------+\n | ``func_name`` | The function\'s name | Writable |\n +-------------------------+---------------------------------+-------------+\n | ``__name__`` | Another way of spelling | Writable |\n | | ``func_name`` | |\n +-------------------------+---------------------------------+-------------+\n | ``__module__`` | The name of the module the | Writable |\n | | function was defined in, or | |\n | | ``None`` if unavailable. | |\n +-------------------------+---------------------------------+-------------+\n | ``func_defaults`` | A tuple containing default | Writable |\n | | argument values for those | |\n | | arguments that have defaults, | |\n | | or ``None`` if no arguments | |\n | | have a default value | |\n +-------------------------+---------------------------------+-------------+\n | ``func_code`` | The code object representing | Writable |\n | | the compiled function body. | |\n +-------------------------+---------------------------------+-------------+\n | ``func_globals`` | A reference to the dictionary | Read-only |\n | | that holds the function\'s | |\n | | global variables --- the global | |\n | | namespace of the module in | |\n | | which the function was defined. | |\n +-------------------------+---------------------------------+-------------+\n | ``func_dict`` | The namespace supporting | Writable |\n | | arbitrary function attributes. | |\n +-------------------------+---------------------------------+-------------+\n | ``func_closure`` | ``None`` or a tuple of cells | Read-only |\n | | that contain bindings for the | |\n | | function\'s free variables. | |\n +-------------------------+---------------------------------+-------------+\n\n Most of the attributes labelled "Writable" check the type of the\n assigned value.\n\n Changed in version 2.4: ``func_name`` is now writable.\n\n Function objects also support getting and setting arbitrary\n attributes, which can be used, for example, to attach metadata\n to functions. Regular attribute dot-notation is used to get and\n set such attributes. *Note that the current implementation only\n supports function attributes on user-defined functions. 
Function\n attributes on built-in functions may be supported in the\n future.*\n\n Additional information about a function\'s definition can be\n retrieved from its code object; see the description of internal\n types below.\n\n User-defined methods\n A user-defined method object combines a class, a class instance\n (or ``None``) and any callable object (normally a user-defined\n function).\n\n Special read-only attributes: ``im_self`` is the class instance\n object, ``im_func`` is the function object; ``im_class`` is the\n class of ``im_self`` for bound methods or the class that asked\n for the method for unbound methods; ``__doc__`` is the method\'s\n documentation (same as ``im_func.__doc__``); ``__name__`` is the\n method name (same as ``im_func.__name__``); ``__module__`` is\n the name of the module the method was defined in, or ``None`` if\n unavailable.\n\n Changed in version 2.2: ``im_self`` used to refer to the class\n that defined the method.\n\n Changed in version 2.6: For 3.0 forward-compatibility,\n ``im_func`` is also available as ``__func__``, and ``im_self``\n as ``__self__``.\n\n Methods also support accessing (but not setting) the arbitrary\n function attributes on the underlying function object.\n\n User-defined method objects may be created when getting an\n attribute of a class (perhaps via an instance of that class), if\n that attribute is a user-defined function object, an unbound\n user-defined method object, or a class method object. When the\n attribute is a user-defined method object, a new method object\n is only created if the class from which it is being retrieved is\n the same as, or a derived class of, the class stored in the\n original method object; otherwise, the original method object is\n used as it is.\n\n When a user-defined method object is created by retrieving a\n user-defined function object from a class, its ``im_self``\n attribute is ``None`` and the method object is said to be\n unbound. When one is created by retrieving a user-defined\n function object from a class via one of its instances, its\n ``im_self`` attribute is the instance, and the method object is\n said to be bound. In either case, the new method\'s ``im_class``\n attribute is the class from which the retrieval takes place, and\n its ``im_func`` attribute is the original function object.\n\n When a user-defined method object is created by retrieving\n another method object from a class or instance, the behaviour is\n the same as for a function object, except that the ``im_func``\n attribute of the new instance is not the original method object\n but its ``im_func`` attribute.\n\n When a user-defined method object is created by retrieving a\n class method object from a class or instance, its ``im_self``\n attribute is the class itself (the same as the ``im_class``\n attribute), and its ``im_func`` attribute is the function object\n underlying the class method.\n\n When an unbound user-defined method object is called, the\n underlying function (``im_func``) is called, with the\n restriction that the first argument must be an instance of the\n proper class (``im_class``) or of a derived class thereof.\n\n When a bound user-defined method object is called, the\n underlying function (``im_func``) is called, inserting the class\n instance (``im_self``) in front of the argument list. 
For\n instance, when ``C`` is a class which contains a definition for\n a function ``f()``, and ``x`` is an instance of ``C``, calling\n ``x.f(1)`` is equivalent to calling ``C.f(x, 1)``.\n\n When a user-defined method object is derived from a class method\n object, the "class instance" stored in ``im_self`` will actually\n be the class itself, so that calling either ``x.f(1)`` or\n ``C.f(1)`` is equivalent to calling ``f(C,1)`` where ``f`` is\n the underlying function.\n\n Note that the transformation from function object to (unbound or\n bound) method object happens each time the attribute is\n retrieved from the class or instance. In some cases, a fruitful\n optimization is to assign the attribute to a local variable and\n call that local variable. Also notice that this transformation\n only happens for user-defined functions; other callable objects\n (and all non-callable objects) are retrieved without\n transformation. It is also important to note that user-defined\n functions which are attributes of a class instance are not\n converted to bound methods; this *only* happens when the\n function is an attribute of the class.\n\n Generator functions\n A function or method which uses the ``yield`` statement (see\n section *The yield statement*) is called a *generator function*.\n Such a function, when called, always returns an iterator object\n which can be used to execute the body of the function: calling\n the iterator\'s ``next()`` method will cause the function to\n execute until it provides a value using the ``yield`` statement.\n When the function executes a ``return`` statement or falls off\n the end, a ``StopIteration`` exception is raised and the\n iterator will have reached the end of the set of values to be\n returned.\n\n Built-in functions\n A built-in function object is a wrapper around a C function.\n Examples of built-in functions are ``len()`` and ``math.sin()``\n (``math`` is a standard built-in module). The number and type of\n the arguments are determined by the C function. Special read-\n only attributes: ``__doc__`` is the function\'s documentation\n string, or ``None`` if unavailable; ``__name__`` is the\n function\'s name; ``__self__`` is set to ``None`` (but see the\n next item); ``__module__`` is the name of the module the\n function was defined in or ``None`` if unavailable.\n\n Built-in methods\n This is really a different disguise of a built-in function, this\n time containing an object passed to the C function as an\n implicit extra argument. An example of a built-in method is\n ``alist.append()``, assuming *alist* is a list object. In this\n case, the special read-only attribute ``__self__`` is set to the\n object denoted by *alist*.\n\n Class Types\n Class types, or "new-style classes," are callable. These\n objects normally act as factories for new instances of\n themselves, but variations are possible for class types that\n override ``__new__()``. The arguments of the call are passed to\n ``__new__()`` and, in the typical case, to ``__init__()`` to\n initialize the new instance.\n\n Classic Classes\n Class objects are described below. When a class object is\n called, a new class instance (also described below) is created\n and returned. This implies a call to the class\'s ``__init__()``\n method if it has one. Any arguments are passed on to the\n ``__init__()`` method. If there is no ``__init__()`` method,\n the class must be called without arguments.\n\n Class instances\n Class instances are described below. 
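
A minimal sketch of the generator-function behaviour described above; ``countdown`` is an invented name used only for illustration:

    def countdown(n):
        # The ``yield`` statement makes this a generator function.
        while n > 0:
            yield n
            n -= 1

    it = countdown(2)          # calling it returns an iterator; no body runs yet
    assert it.next() == 2      # executes up to the first ``yield``
    assert it.next() == 1
    try:
        it.next()              # falling off the end raises StopIteration
    except StopIteration:
        pass
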
Class instances are\n callable only when the class has a ``__call__()`` method;\n ``x(arguments)`` is a shorthand for ``x.__call__(arguments)``.\n\nModules\n Modules are imported by the ``import`` statement (see section *The\n import statement*). A module object has a namespace implemented by\n a dictionary object (this is the dictionary referenced by the\n func_globals attribute of functions defined in the module).\n Attribute references are translated to lookups in this dictionary,\n e.g., ``m.x`` is equivalent to ``m.__dict__["x"]``. A module object\n does not contain the code object used to initialize the module\n (since it isn\'t needed once the initialization is done).\n\n Attribute assignment updates the module\'s namespace dictionary,\n e.g., ``m.x = 1`` is equivalent to ``m.__dict__["x"] = 1``.\n\n Special read-only attribute: ``__dict__`` is the module\'s namespace\n as a dictionary object.\n\n **CPython implementation detail:** Because of the way CPython\n clears module dictionaries, the module dictionary will be cleared\n when the module falls out of scope even if the dictionary still has\n live references. To avoid this, copy the dictionary or keep the\n module around while using its dictionary directly.\n\n Predefined (writable) attributes: ``__name__`` is the module\'s\n name; ``__doc__`` is the module\'s documentation string, or ``None``\n if unavailable; ``__file__`` is the pathname of the file from which\n the module was loaded, if it was loaded from a file. The\n ``__file__`` attribute is not present for C modules that are\n statically linked into the interpreter; for extension modules\n loaded dynamically from a shared library, it is the pathname of the\n shared library file.\n\nClasses\n Both class types (new-style classes) and class objects (old-\n style/classic classes) are typically created by class definitions\n (see section *Class definitions*). A class has a namespace\n implemented by a dictionary object. Class attribute references are\n translated to lookups in this dictionary, e.g., ``C.x`` is\n translated to ``C.__dict__["x"]`` (although for new-style classes\n in particular there are a number of hooks which allow for other\n means of locating attributes). When the attribute name is not found\n there, the attribute search continues in the base classes. For\n old-style classes, the search is depth-first, left-to-right in the\n order of occurrence in the base class list. New-style classes use\n the more complex C3 method resolution order which behaves correctly\n even in the presence of \'diamond\' inheritance structures where\n there are multiple inheritance paths leading back to a common\n ancestor. Additional details on the C3 MRO used by new-style\n classes can be found in the documentation accompanying the 2.3\n release at http://www.python.org/download/releases/2.3/mro/.\n\n When a class attribute reference (for class ``C``, say) would yield\n a user-defined function object or an unbound user-defined method\n object whose associated class is either ``C`` or one of its base\n classes, it is transformed into an unbound user-defined method\n object whose ``im_class`` attribute is ``C``. When it would yield a\n class method object, it is transformed into a bound user-defined\n method object whose ``im_class`` and ``im_self`` attributes are\n both ``C``. 
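
A short sketch, using an invented ``Adder`` class, of the ``__call__()`` shorthand for callable instances and of class attribute lookup through the class namespace dictionary, both described above:

    class Adder(object):
        def __init__(self, n):
            self.n = n
        def __call__(self, x):
            # x(arguments) is shorthand for x.__call__(arguments)
            return self.n + x

    add3 = Adder(3)
    assert add3(4) == add3.__call__(4) == 7

    # Class attribute references go through the class's __dict__.
    assert Adder.__dict__['__call__'] is Adder.__call__.im_func
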
When it would yield a static method object, it is\n transformed into the object wrapped by the static method object.\n See section *Implementing Descriptors* for another way in which\n attributes retrieved from a class may differ from those actually\n contained in its ``__dict__`` (note that only new-style classes\n support descriptors).\n\n Class attribute assignments update the class\'s dictionary, never\n the dictionary of a base class.\n\n A class object can be called (see above) to yield a class instance\n (see below).\n\n Special attributes: ``__name__`` is the class name; ``__module__``\n is the module name in which the class was defined; ``__dict__`` is\n the dictionary containing the class\'s namespace; ``__bases__`` is a\n tuple (possibly empty or a singleton) containing the base classes,\n in the order of their occurrence in the base class list;\n ``__doc__`` is the class\'s documentation string, or None if\n undefined.\n\nClass instances\n A class instance is created by calling a class object (see above).\n A class instance has a namespace implemented as a dictionary which\n is the first place in which attribute references are searched.\n When an attribute is not found there, and the instance\'s class has\n an attribute by that name, the search continues with the class\n attributes. If a class attribute is found that is a user-defined\n function object or an unbound user-defined method object whose\n associated class is the class (call it ``C``) of the instance for\n which the attribute reference was initiated or one of its bases, it\n is transformed into a bound user-defined method object whose\n ``im_class`` attribute is ``C`` and whose ``im_self`` attribute is\n the instance. Static method and class method objects are also\n transformed, as if they had been retrieved from class ``C``; see\n above under "Classes". See section *Implementing Descriptors* for\n another way in which attributes of a class retrieved via its\n instances may differ from the objects actually stored in the\n class\'s ``__dict__``. If no class attribute is found, and the\n object\'s class has a ``__getattr__()`` method, that is called to\n satisfy the lookup.\n\n Attribute assignments and deletions update the instance\'s\n dictionary, never a class\'s dictionary. If the class has a\n ``__setattr__()`` or ``__delattr__()`` method, this is called\n instead of updating the instance dictionary directly.\n\n Class instances can pretend to be numbers, sequences, or mappings\n if they have methods with certain special names. See section\n *Special method names*.\n\n Special attributes: ``__dict__`` is the attribute dictionary;\n ``__class__`` is the instance\'s class.\n\nFiles\n A file object represents an open file. File objects are created by\n the ``open()`` built-in function, and also by ``os.popen()``,\n ``os.fdopen()``, and the ``makefile()`` method of socket objects\n (and perhaps by other functions or methods provided by extension\n modules). The objects ``sys.stdin``, ``sys.stdout`` and\n ``sys.stderr`` are initialized to file objects corresponding to the\n interpreter\'s standard input, output and error streams. See *File\n Objects* for complete documentation of file objects.\n\nInternal types\n A few types used internally by the interpreter are exposed to the\n user. Their definitions may change with future versions of the\n interpreter, but they are mentioned here for completeness.\n\n Code objects\n Code objects represent *byte-compiled* executable Python code,\n or *bytecode*. 
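
The instance attribute search order and the ``__getattr__()`` fallback described under "Class instances" above can be sketched as follows; ``Record`` is an invented example class:

    class Record(object):
        kind = "record"                    # class attribute

        def __getattr__(self, name):
            # Called only when normal lookup (instance dict, then class) fails.
            return "<missing %s>" % name

    r = Record()
    r.owner = "alice"                      # stored in the instance dictionary
    assert r.__dict__ == {"owner": "alice"}
    assert r.kind == "record"              # found on the class
    assert r.colour == "<missing colour>"  # falls back to __getattr__
    assert r.__class__ is Record
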
The difference between a code object and a\n function object is that the function object contains an explicit\n reference to the function\'s globals (the module in which it was\n defined), while a code object contains no context; also the\n default argument values are stored in the function object, not\n in the code object (because they represent values calculated at\n run-time). Unlike function objects, code objects are immutable\n and contain no references (directly or indirectly) to mutable\n objects.\n\n Special read-only attributes: ``co_name`` gives the function\n name; ``co_argcount`` is the number of positional arguments\n (including arguments with default values); ``co_nlocals`` is the\n number of local variables used by the function (including\n arguments); ``co_varnames`` is a tuple containing the names of\n the local variables (starting with the argument names);\n ``co_cellvars`` is a tuple containing the names of local\n variables that are referenced by nested functions;\n ``co_freevars`` is a tuple containing the names of free\n variables; ``co_code`` is a string representing the sequence of\n bytecode instructions; ``co_consts`` is a tuple containing the\n literals used by the bytecode; ``co_names`` is a tuple\n containing the names used by the bytecode; ``co_filename`` is\n the filename from which the code was compiled;\n ``co_firstlineno`` is the first line number of the function;\n ``co_lnotab`` is a string encoding the mapping from bytecode\n offsets to line numbers (for details see the source code of the\n interpreter); ``co_stacksize`` is the required stack size\n (including local variables); ``co_flags`` is an integer encoding\n a number of flags for the interpreter.\n\n The following flag bits are defined for ``co_flags``: bit\n ``0x04`` is set if the function uses the ``*arguments`` syntax\n to accept an arbitrary number of positional arguments; bit\n ``0x08`` is set if the function uses the ``**keywords`` syntax\n to accept arbitrary keyword arguments; bit ``0x20`` is set if\n the function is a generator.\n\n Future feature declarations (``from __future__ import\n division``) also use bits in ``co_flags`` to indicate whether a\n code object was compiled with a particular feature enabled: bit\n ``0x2000`` is set if the function was compiled with future\n division enabled; bits ``0x10`` and ``0x1000`` were used in\n earlier versions of Python.\n\n Other bits in ``co_flags`` are reserved for internal use.\n\n If a code object represents a function, the first item in\n ``co_consts`` is the documentation string of the function, or\n ``None`` if undefined.\n\n Frame objects\n Frame objects represent execution frames. 
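
Before the frame-object details, a small sketch of the code-object attributes and ``co_flags`` bits just described, using the Python 2 ``func_code`` attribute; the functions are invented for the example:

    def f(a, b=1, *args, **kwargs):
        c = a + b
        return c

    code = f.func_code                     # the code object behind the function
    assert code.co_name == 'f'
    assert code.co_argcount == 2           # positional arguments, defaults included
    assert code.co_varnames[:2] == ('a', 'b')
    assert code.co_flags & 0x04            # uses the *arguments syntax
    assert code.co_flags & 0x08            # uses the **keywords syntax

    def gen():
        yield 1
    assert gen.func_code.co_flags & 0x20   # generator flag
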
They may occur in\n traceback objects (see below).\n\n Special read-only attributes: ``f_back`` is to the previous\n stack frame (towards the caller), or ``None`` if this is the\n bottom stack frame; ``f_code`` is the code object being executed\n in this frame; ``f_locals`` is the dictionary used to look up\n local variables; ``f_globals`` is used for global variables;\n ``f_builtins`` is used for built-in (intrinsic) names;\n ``f_restricted`` is a flag indicating whether the function is\n executing in restricted execution mode; ``f_lasti`` gives the\n precise instruction (this is an index into the bytecode string\n of the code object).\n\n Special writable attributes: ``f_trace``, if not ``None``, is a\n function called at the start of each source code line (this is\n used by the debugger); ``f_exc_type``, ``f_exc_value``,\n ``f_exc_traceback`` represent the last exception raised in the\n parent frame provided another exception was ever raised in the\n current frame (in all other cases they are None); ``f_lineno``\n is the current line number of the frame --- writing to this from\n within a trace function jumps to the given line (only for the\n bottom-most frame). A debugger can implement a Jump command\n (aka Set Next Statement) by writing to f_lineno.\n\n Traceback objects\n Traceback objects represent a stack trace of an exception. A\n traceback object is created when an exception occurs. When the\n search for an exception handler unwinds the execution stack, at\n each unwound level a traceback object is inserted in front of\n the current traceback. When an exception handler is entered,\n the stack trace is made available to the program. (See section\n *The try statement*.) It is accessible as ``sys.exc_traceback``,\n and also as the third item of the tuple returned by\n ``sys.exc_info()``. The latter is the preferred interface,\n since it works correctly when the program is using multiple\n threads. When the program contains no suitable handler, the\n stack trace is written (nicely formatted) to the standard error\n stream; if the interpreter is interactive, it is also made\n available to the user as ``sys.last_traceback``.\n\n Special read-only attributes: ``tb_next`` is the next level in\n the stack trace (towards the frame where the exception\n occurred), or ``None`` if there is no next level; ``tb_frame``\n points to the execution frame of the current level;\n ``tb_lineno`` gives the line number where the exception\n occurred; ``tb_lasti`` indicates the precise instruction. The\n line number and last instruction in the traceback may differ\n from the line number of its frame object if the exception\n occurred in a ``try`` statement with no matching except clause\n or with a finally clause.\n\n Slice objects\n Slice objects are used to represent slices when *extended slice\n syntax* is used. This is a slice using two colons, or multiple\n slices or ellipses separated by commas, e.g., ``a[i:j:step]``,\n ``a[i:j, k:l]``, or ``a[..., i:j]``. They are also created by\n the built-in ``slice()`` function.\n\n Special read-only attributes: ``start`` is the lower bound;\n ``stop`` is the upper bound; ``step`` is the step value; each is\n ``None`` if omitted. These attributes can have any type.\n\n Slice objects support one method:\n\n slice.indices(self, length)\n\n This method takes a single integer argument *length* and\n computes information about the extended slice that the slice\n object would describe if applied to a sequence of *length*\n items. 
It returns a tuple of three integers; respectively\n these are the *start* and *stop* indices and the *step* or\n stride length of the slice. Missing or out-of-bounds indices\n are handled in a manner consistent with regular slices.\n\n New in version 2.3.\n\n Static method objects\n Static method objects provide a way of defeating the\n transformation of function objects to method objects described\n above. A static method object is a wrapper around any other\n object, usually a user-defined method object. When a static\n method object is retrieved from a class or a class instance, the\n object actually returned is the wrapped object, which is not\n subject to any further transformation. Static method objects are\n not themselves callable, although the objects they wrap usually\n are. Static method objects are created by the built-in\n ``staticmethod()`` constructor.\n\n Class method objects\n A class method object, like a static method object, is a wrapper\n around another object that alters the way in which that object\n is retrieved from classes and class instances. The behaviour of\n class method objects upon such retrieval is described above,\n under "User-defined methods". Class method objects are created\n by the built-in ``classmethod()`` constructor.\n', 'typesfunctions': u'\nFunctions\n*********\n\nFunction objects are created by function definitions. The only\noperation on a function object is to call it: ``func(argument-list)``.\n\nThere are really two flavors of function objects: built-in functions\nand user-defined functions. Both support the same operation (to call\nthe function), but the implementation is different, hence the\ndifferent object types.\n\nSee *Function definitions* for more information.\n', - 'typesmapping': u'\nMapping Types --- ``dict``\n**************************\n\nA *mapping* object maps *hashable* values to arbitrary objects.\nMappings are mutable objects. There is currently only one standard\nmapping type, the *dictionary*. (For other containers see the built\nin ``list``, ``set``, and ``tuple`` classes, and the ``collections``\nmodule.)\n\nA dictionary\'s keys are *almost* arbitrary values. Values that are\nnot *hashable*, that is, values containing lists, dictionaries or\nother mutable types (that are compared by value rather than by object\nidentity) may not be used as keys. Numeric types used for keys obey\nthe normal rules for numeric comparison: if two numbers compare equal\n(such as ``1`` and ``1.0``) then they can be used interchangeably to\nindex the same dictionary entry. (Note however, that since computers\nstore floating-point numbers as approximations it is usually unwise to\nuse them as dictionary keys.)\n\nDictionaries can be created by placing a comma-separated list of\n``key: value`` pairs within braces, for example: ``{\'jack\': 4098,\n\'sjoerd\': 4127}`` or ``{4098: \'jack\', 4127: \'sjoerd\'}``, or by the\n``dict`` constructor.\n\nclass class dict([arg])\n\n Return a new dictionary initialized from an optional positional\n argument or from a set of keyword arguments. If no arguments are\n given, return a new empty dictionary. If the positional argument\n *arg* is a mapping object, return a dictionary mapping the same\n keys to the same values as does the mapping object. Otherwise the\n positional argument must be a sequence, a container that supports\n iteration, or an iterator object. The elements of the argument\n must each also be of one of those kinds, and each must in turn\n contain exactly two objects. 
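
Referring back to the ``slice.indices()`` description completed above, a brief worked sketch of the returned ``(start, stop, step)`` triple; the concrete values are only examples:

    s = slice(2, 10, 3)
    # For a sequence of length 5 the out-of-range stop is clipped to 5.
    assert s.indices(5) == (2, 5, 3)
    assert range(10)[s] == [2, 5, 8]

    # Omitted bounds with a negative step get "end" values.
    assert slice(None, None, -1).indices(4) == (3, -1, -1)
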
The first is used as a key in the new\n dictionary, and the second as the key\'s value. If a given key is\n seen more than once, the last value associated with it is retained\n in the new dictionary.\n\n If keyword arguments are given, the keywords themselves with their\n associated values are added as items to the dictionary. If a key is\n specified both in the positional argument and as a keyword\n argument, the value associated with the keyword is retained in the\n dictionary. For example, these all return a dictionary equal to\n ``{"one": 2, "two": 3}``:\n\n * ``dict(one=2, two=3)``\n\n * ``dict({\'one\': 2, \'two\': 3})``\n\n * ``dict(zip((\'one\', \'two\'), (2, 3)))``\n\n * ``dict([[\'two\', 3], [\'one\', 2]])``\n\n The first example only works for keys that are valid Python\n identifiers; the others work with any valid keys.\n\n New in version 2.2.\n\n Changed in version 2.3: Support for building a dictionary from\n keyword arguments added.\n\n These are the operations that dictionaries support (and therefore,\n custom mapping types should support too):\n\n len(d)\n\n Return the number of items in the dictionary *d*.\n\n d[key]\n\n Return the item of *d* with key *key*. Raises a ``KeyError`` if\n *key* is not in the map.\n\n New in version 2.5: If a subclass of dict defines a method\n ``__missing__()``, if the key *key* is not present, the\n ``d[key]`` operation calls that method with the key *key* as\n argument. The ``d[key]`` operation then returns or raises\n whatever is returned or raised by the ``__missing__(key)`` call\n if the key is not present. No other operations or methods invoke\n ``__missing__()``. If ``__missing__()`` is not defined,\n ``KeyError`` is raised. ``__missing__()`` must be a method; it\n cannot be an instance variable. For an example, see\n ``collections.defaultdict``.\n\n d[key] = value\n\n Set ``d[key]`` to *value*.\n\n del d[key]\n\n Remove ``d[key]`` from *d*. Raises a ``KeyError`` if *key* is\n not in the map.\n\n key in d\n\n Return ``True`` if *d* has a key *key*, else ``False``.\n\n New in version 2.2.\n\n key not in d\n\n Equivalent to ``not key in d``.\n\n New in version 2.2.\n\n iter(d)\n\n Return an iterator over the keys of the dictionary. This is a\n shortcut for ``iterkeys()``.\n\n clear()\n\n Remove all items from the dictionary.\n\n copy()\n\n Return a shallow copy of the dictionary.\n\n fromkeys(seq[, value])\n\n Create a new dictionary with keys from *seq* and values set to\n *value*.\n\n ``fromkeys()`` is a class method that returns a new dictionary.\n *value* defaults to ``None``.\n\n New in version 2.3.\n\n get(key[, default])\n\n Return the value for *key* if *key* is in the dictionary, else\n *default*. If *default* is not given, it defaults to ``None``,\n so that this method never raises a ``KeyError``.\n\n has_key(key)\n\n Test for the presence of *key* in the dictionary. ``has_key()``\n is deprecated in favor of ``key in d``.\n\n items()\n\n Return a copy of the dictionary\'s list of ``(key, value)``\n pairs.\n\n **CPython implementation detail:** Keys and values are listed in\n an arbitrary order which is non-random, varies across Python\n implementations, and depends on the dictionary\'s history of\n insertions and deletions.\n\n If ``items()``, ``keys()``, ``values()``, ``iteritems()``,\n ``iterkeys()``, and ``itervalues()`` are called with no\n intervening modifications to the dictionary, the lists will\n directly correspond. 
This allows the creation of ``(value,\n key)`` pairs using ``zip()``: ``pairs = zip(d.values(),\n d.keys())``. The same relationship holds for the ``iterkeys()``\n and ``itervalues()`` methods: ``pairs = zip(d.itervalues(),\n d.iterkeys())`` provides the same value for ``pairs``. Another\n way to create the same list is ``pairs = [(v, k) for (k, v) in\n d.iteritems()]``.\n\n iteritems()\n\n Return an iterator over the dictionary\'s ``(key, value)`` pairs.\n See the note for ``dict.items()``.\n\n Using ``iteritems()`` while adding or deleting entries in the\n dictionary may raise a ``RuntimeError`` or fail to iterate over\n all entries.\n\n New in version 2.2.\n\n iterkeys()\n\n Return an iterator over the dictionary\'s keys. See the note for\n ``dict.items()``.\n\n Using ``iterkeys()`` while adding or deleting entries in the\n dictionary may raise a ``RuntimeError`` or fail to iterate over\n all entries.\n\n New in version 2.2.\n\n itervalues()\n\n Return an iterator over the dictionary\'s values. See the note\n for ``dict.items()``.\n\n Using ``itervalues()`` while adding or deleting entries in the\n dictionary may raise a ``RuntimeError`` or fail to iterate over\n all entries.\n\n New in version 2.2.\n\n keys()\n\n Return a copy of the dictionary\'s list of keys. See the note\n for ``dict.items()``.\n\n pop(key[, default])\n\n If *key* is in the dictionary, remove it and return its value,\n else return *default*. If *default* is not given and *key* is\n not in the dictionary, a ``KeyError`` is raised.\n\n New in version 2.3.\n\n popitem()\n\n Remove and return an arbitrary ``(key, value)`` pair from the\n dictionary.\n\n ``popitem()`` is useful to destructively iterate over a\n dictionary, as often used in set algorithms. If the dictionary\n is empty, calling ``popitem()`` raises a ``KeyError``.\n\n setdefault(key[, default])\n\n If *key* is in the dictionary, return its value. If not, insert\n *key* with a value of *default* and return *default*. *default*\n defaults to ``None``.\n\n update([other])\n\n Update the dictionary with the key/value pairs from *other*,\n overwriting existing keys. Return ``None``.\n\n ``update()`` accepts either another dictionary object or an\n iterable of key/value pairs (as a tuple or other iterable of\n length two). If keyword arguments are specified, the dictionary\n is then updated with those key/value pairs: ``d.update(red=1,\n blue=2)``.\n\n Changed in version 2.4: Allowed the argument to be an iterable\n of key/value pairs and allowed keyword arguments.\n\n values()\n\n Return a copy of the dictionary\'s list of values. See the note\n for ``dict.items()``.\n\n viewitems()\n\n Return a new view of the dictionary\'s items (``(key, value)``\n pairs). See below for documentation of view objects.\n\n New in version 2.7.\n\n viewkeys()\n\n Return a new view of the dictionary\'s keys. See below for\n documentation of view objects.\n\n New in version 2.7.\n\n viewvalues()\n\n Return a new view of the dictionary\'s values. See below for\n documentation of view objects.\n\n New in version 2.7.\n\n\nDictionary view objects\n=======================\n\nThe objects returned by ``dict.viewkeys()``, ``dict.viewvalues()`` and\n``dict.viewitems()`` are *view objects*. 
They provide a dynamic view\non the dictionary\'s entries, which means that when the dictionary\nchanges, the view reflects these changes.\n\nDictionary views can be iterated over to yield their respective data,\nand support membership tests:\n\nlen(dictview)\n\n Return the number of entries in the dictionary.\n\niter(dictview)\n\n Return an iterator over the keys, values or items (represented as\n tuples of ``(key, value)``) in the dictionary.\n\n Keys and values are iterated over in an arbitrary order which is\n non-random, varies across Python implementations, and depends on\n the dictionary\'s history of insertions and deletions. If keys,\n values and items views are iterated over with no intervening\n modifications to the dictionary, the order of items will directly\n correspond. This allows the creation of ``(value, key)`` pairs\n using ``zip()``: ``pairs = zip(d.values(), d.keys())``. Another\n way to create the same list is ``pairs = [(v, k) for (k, v) in\n d.items()]``.\n\n Iterating views while adding or deleting entries in the dictionary\n may raise a ``RuntimeError`` or fail to iterate over all entries.\n\nx in dictview\n\n Return ``True`` if *x* is in the underlying dictionary\'s keys,\n values or items (in the latter case, *x* should be a ``(key,\n value)`` tuple).\n\nKeys views are set-like since their entries are unique and hashable.\nIf all values are hashable, so that (key, value) pairs are unique and\nhashable, then the items view is also set-like. (Values views are not\ntreated as set-like since the entries are generally not unique.) Then\nthese set operations are available ("other" refers either to another\nview or a set):\n\ndictview & other\n\n Return the intersection of the dictview and the other object as a\n new set.\n\ndictview | other\n\n Return the union of the dictview and the other object as a new set.\n\ndictview - other\n\n Return the difference between the dictview and the other object\n (all elements in *dictview* that aren\'t in *other*) as a new set.\n\ndictview ^ other\n\n Return the symmetric difference (all elements either in *dictview*\n or *other*, but not in both) of the dictview and the other object\n as a new set.\n\nAn example of dictionary view usage:\n\n >>> dishes = {\'eggs\': 2, \'sausage\': 1, \'bacon\': 1, \'spam\': 500}\n >>> keys = dishes.viewkeys()\n >>> values = dishes.viewvalues()\n\n >>> # iteration\n >>> n = 0\n >>> for val in values:\n ... n += val\n >>> print(n)\n 504\n\n >>> # keys and values are iterated over in the same order\n >>> list(keys)\n [\'eggs\', \'bacon\', \'sausage\', \'spam\']\n >>> list(values)\n [2, 1, 1, 500]\n\n >>> # view objects are dynamic and reflect dict changes\n >>> del dishes[\'eggs\']\n >>> del dishes[\'sausage\']\n >>> list(keys)\n [\'spam\', \'bacon\']\n\n >>> # set operations\n >>> keys & {\'eggs\', \'bacon\', \'salad\'}\n {\'bacon\'}\n', + 'typesmapping': u'\nMapping Types --- ``dict``\n**************************\n\nA *mapping* object maps *hashable* values to arbitrary objects.\nMappings are mutable objects. There is currently only one standard\nmapping type, the *dictionary*. (For other containers see the built\nin ``list``, ``set``, and ``tuple`` classes, and the ``collections``\nmodule.)\n\nA dictionary\'s keys are *almost* arbitrary values. Values that are\nnot *hashable*, that is, values containing lists, dictionaries or\nother mutable types (that are compared by value rather than by object\nidentity) may not be used as keys. 
Numeric types used for keys obey\nthe normal rules for numeric comparison: if two numbers compare equal\n(such as ``1`` and ``1.0``) then they can be used interchangeably to\nindex the same dictionary entry. (Note however, that since computers\nstore floating-point numbers as approximations it is usually unwise to\nuse them as dictionary keys.)\n\nDictionaries can be created by placing a comma-separated list of\n``key: value`` pairs within braces, for example: ``{\'jack\': 4098,\n\'sjoerd\': 4127}`` or ``{4098: \'jack\', 4127: \'sjoerd\'}``, or by the\n``dict`` constructor.\n\nclass class dict([arg])\n\n Return a new dictionary initialized from an optional positional\n argument or from a set of keyword arguments. If no arguments are\n given, return a new empty dictionary. If the positional argument\n *arg* is a mapping object, return a dictionary mapping the same\n keys to the same values as does the mapping object. Otherwise the\n positional argument must be a sequence, a container that supports\n iteration, or an iterator object. The elements of the argument\n must each also be of one of those kinds, and each must in turn\n contain exactly two objects. The first is used as a key in the new\n dictionary, and the second as the key\'s value. If a given key is\n seen more than once, the last value associated with it is retained\n in the new dictionary.\n\n If keyword arguments are given, the keywords themselves with their\n associated values are added as items to the dictionary. If a key is\n specified both in the positional argument and as a keyword\n argument, the value associated with the keyword is retained in the\n dictionary. For example, these all return a dictionary equal to\n ``{"one": 1, "two": 2}``:\n\n * ``dict(one=1, two=2)``\n\n * ``dict({\'one\': 1, \'two\': 2})``\n\n * ``dict(zip((\'one\', \'two\'), (1, 2)))``\n\n * ``dict([[\'two\', 2], [\'one\', 1]])``\n\n The first example only works for keys that are valid Python\n identifiers; the others work with any valid keys.\n\n New in version 2.2.\n\n Changed in version 2.3: Support for building a dictionary from\n keyword arguments added.\n\n These are the operations that dictionaries support (and therefore,\n custom mapping types should support too):\n\n len(d)\n\n Return the number of items in the dictionary *d*.\n\n d[key]\n\n Return the item of *d* with key *key*. Raises a ``KeyError`` if\n *key* is not in the map.\n\n New in version 2.5: If a subclass of dict defines a method\n ``__missing__()``, if the key *key* is not present, the\n ``d[key]`` operation calls that method with the key *key* as\n argument. The ``d[key]`` operation then returns or raises\n whatever is returned or raised by the ``__missing__(key)`` call\n if the key is not present. No other operations or methods invoke\n ``__missing__()``. If ``__missing__()`` is not defined,\n ``KeyError`` is raised. ``__missing__()`` must be a method; it\n cannot be an instance variable. For an example, see\n ``collections.defaultdict``.\n\n d[key] = value\n\n Set ``d[key]`` to *value*.\n\n del d[key]\n\n Remove ``d[key]`` from *d*. Raises a ``KeyError`` if *key* is\n not in the map.\n\n key in d\n\n Return ``True`` if *d* has a key *key*, else ``False``.\n\n New in version 2.2.\n\n key not in d\n\n Equivalent to ``not key in d``.\n\n New in version 2.2.\n\n iter(d)\n\n Return an iterator over the keys of the dictionary. 
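
A minimal sketch of the ``__missing__()`` hook and one of the constructor forms documented above; ``ZeroDict`` is an invented subclass used only for illustration:

    class ZeroDict(dict):
        def __missing__(self, key):
            # Called by d[key] only when *key* is not present.
            return 0

    d = ZeroDict(zip(('one', 'two'), (1, 2)))   # one of the constructor forms
    assert d['one'] == 1
    assert d['three'] == 0        # __missing__ supplies the value
    assert 'three' not in d       # ...but nothing was inserted
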
This is a\n shortcut for ``iterkeys()``.\n\n clear()\n\n Remove all items from the dictionary.\n\n copy()\n\n Return a shallow copy of the dictionary.\n\n fromkeys(seq[, value])\n\n Create a new dictionary with keys from *seq* and values set to\n *value*.\n\n ``fromkeys()`` is a class method that returns a new dictionary.\n *value* defaults to ``None``.\n\n New in version 2.3.\n\n get(key[, default])\n\n Return the value for *key* if *key* is in the dictionary, else\n *default*. If *default* is not given, it defaults to ``None``,\n so that this method never raises a ``KeyError``.\n\n has_key(key)\n\n Test for the presence of *key* in the dictionary. ``has_key()``\n is deprecated in favor of ``key in d``.\n\n items()\n\n Return a copy of the dictionary\'s list of ``(key, value)``\n pairs.\n\n **CPython implementation detail:** Keys and values are listed in\n an arbitrary order which is non-random, varies across Python\n implementations, and depends on the dictionary\'s history of\n insertions and deletions.\n\n If ``items()``, ``keys()``, ``values()``, ``iteritems()``,\n ``iterkeys()``, and ``itervalues()`` are called with no\n intervening modifications to the dictionary, the lists will\n directly correspond. This allows the creation of ``(value,\n key)`` pairs using ``zip()``: ``pairs = zip(d.values(),\n d.keys())``. The same relationship holds for the ``iterkeys()``\n and ``itervalues()`` methods: ``pairs = zip(d.itervalues(),\n d.iterkeys())`` provides the same value for ``pairs``. Another\n way to create the same list is ``pairs = [(v, k) for (k, v) in\n d.iteritems()]``.\n\n iteritems()\n\n Return an iterator over the dictionary\'s ``(key, value)`` pairs.\n See the note for ``dict.items()``.\n\n Using ``iteritems()`` while adding or deleting entries in the\n dictionary may raise a ``RuntimeError`` or fail to iterate over\n all entries.\n\n New in version 2.2.\n\n iterkeys()\n\n Return an iterator over the dictionary\'s keys. See the note for\n ``dict.items()``.\n\n Using ``iterkeys()`` while adding or deleting entries in the\n dictionary may raise a ``RuntimeError`` or fail to iterate over\n all entries.\n\n New in version 2.2.\n\n itervalues()\n\n Return an iterator over the dictionary\'s values. See the note\n for ``dict.items()``.\n\n Using ``itervalues()`` while adding or deleting entries in the\n dictionary may raise a ``RuntimeError`` or fail to iterate over\n all entries.\n\n New in version 2.2.\n\n keys()\n\n Return a copy of the dictionary\'s list of keys. See the note\n for ``dict.items()``.\n\n pop(key[, default])\n\n If *key* is in the dictionary, remove it and return its value,\n else return *default*. If *default* is not given and *key* is\n not in the dictionary, a ``KeyError`` is raised.\n\n New in version 2.3.\n\n popitem()\n\n Remove and return an arbitrary ``(key, value)`` pair from the\n dictionary.\n\n ``popitem()`` is useful to destructively iterate over a\n dictionary, as often used in set algorithms. If the dictionary\n is empty, calling ``popitem()`` raises a ``KeyError``.\n\n setdefault(key[, default])\n\n If *key* is in the dictionary, return its value. If not, insert\n *key* with a value of *default* and return *default*. *default*\n defaults to ``None``.\n\n update([other])\n\n Update the dictionary with the key/value pairs from *other*,\n overwriting existing keys. Return ``None``.\n\n ``update()`` accepts either another dictionary object or an\n iterable of key/value pairs (as tuples or other iterables of\n length two). 
If keyword arguments are specified, the dictionary\n is then updated with those key/value pairs: ``d.update(red=1,\n blue=2)``.\n\n Changed in version 2.4: Allowed the argument to be an iterable\n of key/value pairs and allowed keyword arguments.\n\n values()\n\n Return a copy of the dictionary\'s list of values. See the note\n for ``dict.items()``.\n\n viewitems()\n\n Return a new view of the dictionary\'s items (``(key, value)``\n pairs). See below for documentation of view objects.\n\n New in version 2.7.\n\n viewkeys()\n\n Return a new view of the dictionary\'s keys. See below for\n documentation of view objects.\n\n New in version 2.7.\n\n viewvalues()\n\n Return a new view of the dictionary\'s values. See below for\n documentation of view objects.\n\n New in version 2.7.\n\n\nDictionary view objects\n=======================\n\nThe objects returned by ``dict.viewkeys()``, ``dict.viewvalues()`` and\n``dict.viewitems()`` are *view objects*. They provide a dynamic view\non the dictionary\'s entries, which means that when the dictionary\nchanges, the view reflects these changes.\n\nDictionary views can be iterated over to yield their respective data,\nand support membership tests:\n\nlen(dictview)\n\n Return the number of entries in the dictionary.\n\niter(dictview)\n\n Return an iterator over the keys, values or items (represented as\n tuples of ``(key, value)``) in the dictionary.\n\n Keys and values are iterated over in an arbitrary order which is\n non-random, varies across Python implementations, and depends on\n the dictionary\'s history of insertions and deletions. If keys,\n values and items views are iterated over with no intervening\n modifications to the dictionary, the order of items will directly\n correspond. This allows the creation of ``(value, key)`` pairs\n using ``zip()``: ``pairs = zip(d.values(), d.keys())``. Another\n way to create the same list is ``pairs = [(v, k) for (k, v) in\n d.items()]``.\n\n Iterating views while adding or deleting entries in the dictionary\n may raise a ``RuntimeError`` or fail to iterate over all entries.\n\nx in dictview\n\n Return ``True`` if *x* is in the underlying dictionary\'s keys,\n values or items (in the latter case, *x* should be a ``(key,\n value)`` tuple).\n\nKeys views are set-like since their entries are unique and hashable.\nIf all values are hashable, so that (key, value) pairs are unique and\nhashable, then the items view is also set-like. (Values views are not\ntreated as set-like since the entries are generally not unique.) Then\nthese set operations are available ("other" refers either to another\nview or a set):\n\ndictview & other\n\n Return the intersection of the dictview and the other object as a\n new set.\n\ndictview | other\n\n Return the union of the dictview and the other object as a new set.\n\ndictview - other\n\n Return the difference between the dictview and the other object\n (all elements in *dictview* that aren\'t in *other*) as a new set.\n\ndictview ^ other\n\n Return the symmetric difference (all elements either in *dictview*\n or *other*, but not in both) of the dictview and the other object\n as a new set.\n\nAn example of dictionary view usage:\n\n >>> dishes = {\'eggs\': 2, \'sausage\': 1, \'bacon\': 1, \'spam\': 500}\n >>> keys = dishes.viewkeys()\n >>> values = dishes.viewvalues()\n\n >>> # iteration\n >>> n = 0\n >>> for val in values:\n ... 
n += val\n >>> print(n)\n 504\n\n >>> # keys and values are iterated over in the same order\n >>> list(keys)\n [\'eggs\', \'bacon\', \'sausage\', \'spam\']\n >>> list(values)\n [2, 1, 1, 500]\n\n >>> # view objects are dynamic and reflect dict changes\n >>> del dishes[\'eggs\']\n >>> del dishes[\'sausage\']\n >>> list(keys)\n [\'spam\', \'bacon\']\n\n >>> # set operations\n >>> keys & {\'eggs\', \'bacon\', \'salad\'}\n {\'bacon\'}\n', 'typesmethods': u"\nMethods\n*******\n\nMethods are functions that are called using the attribute notation.\nThere are two flavors: built-in methods (such as ``append()`` on\nlists) and class instance methods. Built-in methods are described\nwith the types that support them.\n\nThe implementation adds two special read-only attributes to class\ninstance methods: ``m.im_self`` is the object on which the method\noperates, and ``m.im_func`` is the function implementing the method.\nCalling ``m(arg-1, arg-2, ..., arg-n)`` is completely equivalent to\ncalling ``m.im_func(m.im_self, arg-1, arg-2, ..., arg-n)``.\n\nClass instance methods are either *bound* or *unbound*, referring to\nwhether the method was accessed through an instance or a class,\nrespectively. When a method is unbound, its ``im_self`` attribute\nwill be ``None`` and if called, an explicit ``self`` object must be\npassed as the first argument. In this case, ``self`` must be an\ninstance of the unbound method's class (or a subclass of that class),\notherwise a ``TypeError`` is raised.\n\nLike function objects, methods objects support getting arbitrary\nattributes. However, since method attributes are actually stored on\nthe underlying function object (``meth.im_func``), setting method\nattributes on either bound or unbound methods is disallowed.\nAttempting to set a method attribute results in a ``TypeError`` being\nraised. In order to set a method attribute, you need to explicitly\nset it on the underlying function object:\n\n class C:\n def method(self):\n pass\n\n c = C()\n c.method.im_func.whoami = 'my name is c'\n\nSee *The standard type hierarchy* for more information.\n", 'typesmodules': u"\nModules\n*******\n\nThe only special operation on a module is attribute access:\n``m.name``, where *m* is a module and *name* accesses a name defined\nin *m*'s symbol table. Module attributes can be assigned to. (Note\nthat the ``import`` statement is not, strictly speaking, an operation\non a module object; ``import foo`` does not require a module object\nnamed *foo* to exist, rather it requires an (external) *definition*\nfor a module named *foo* somewhere.)\n\nA special member of every module is ``__dict__``. This is the\ndictionary containing the module's symbol table. Modifying this\ndictionary will actually change the module's symbol table, but direct\nassignment to the ``__dict__`` attribute is not possible (you can\nwrite ``m.__dict__['a'] = 1``, which defines ``m.a`` to be ``1``, but\nyou can't write ``m.__dict__ = {}``). Modifying ``__dict__`` directly\nis not recommended.\n\nModules built into the interpreter are written like this: ````. 
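
The module-namespace behaviour described above can be sketched against a real module; ``_example_flag`` is an invented attribute name and is removed again at the end:

    import os

    # Attribute access is a lookup in the module's namespace dictionary.
    assert os.__dict__['sep'] is os.sep

    # Attribute assignment updates that same dictionary.
    os._example_flag = 1
    assert os.__dict__['_example_flag'] == 1
    del os._example_flag
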
If loaded from a file, they are written as\n````.\n", - 'typesseq': u'\nSequence Types --- ``str``, ``unicode``, ``list``, ``tuple``, ``buffer``, ``xrange``\n************************************************************************************\n\nThere are six sequence types: strings, Unicode strings, lists, tuples,\nbuffers, and xrange objects.\n\nFor other containers see the built in ``dict`` and ``set`` classes,\nand the ``collections`` module.\n\nString literals are written in single or double quotes: ``\'xyzzy\'``,\n``"frobozz"``. See *String literals* for more about string literals.\nUnicode strings are much like strings, but are specified in the syntax\nusing a preceding ``\'u\'`` character: ``u\'abc\'``, ``u"def"``. In\naddition to the functionality described here, there are also string-\nspecific methods described in the *String Methods* section. Lists are\nconstructed with square brackets, separating items with commas: ``[a,\nb, c]``. Tuples are constructed by the comma operator (not within\nsquare brackets), with or without enclosing parentheses, but an empty\ntuple must have the enclosing parentheses, such as ``a, b, c`` or\n``()``. A single item tuple must have a trailing comma, such as\n``(d,)``.\n\nBuffer objects are not directly supported by Python syntax, but can be\ncreated by calling the built-in function ``buffer()``. They don\'t\nsupport concatenation or repetition.\n\nObjects of type xrange are similar to buffers in that there is no\nspecific syntax to create them, but they are created using the\n``xrange()`` function. They don\'t support slicing, concatenation or\nrepetition, and using ``in``, ``not in``, ``min()`` or ``max()`` on\nthem is inefficient.\n\nMost sequence types support the following operations. The ``in`` and\n``not in`` operations have the same priorities as the comparison\noperations. The ``+`` and ``*`` operations have the same priority as\nthe corresponding numeric operations. [3] Additional methods are\nprovided for *Mutable Sequence Types*.\n\nThis table lists the sequence operations sorted in ascending priority\n(operations in the same box have the same priority). 
In the table,\n*s* and *t* are sequences of the same type; *n*, *i* and *j* are\nintegers:\n\n+--------------------+----------------------------------+------------+\n| Operation | Result | Notes |\n+====================+==================================+============+\n| ``x in s`` | ``True`` if an item of *s* is | (1) |\n| | equal to *x*, else ``False`` | |\n+--------------------+----------------------------------+------------+\n| ``x not in s`` | ``False`` if an item of *s* is | (1) |\n| | equal to *x*, else ``True`` | |\n+--------------------+----------------------------------+------------+\n| ``s + t`` | the concatenation of *s* and *t* | (6) |\n+--------------------+----------------------------------+------------+\n| ``s * n, n * s`` | *n* shallow copies of *s* | (2) |\n| | concatenated | |\n+--------------------+----------------------------------+------------+\n| ``s[i]`` | *i*\'th item of *s*, origin 0 | (3) |\n+--------------------+----------------------------------+------------+\n| ``s[i:j]`` | slice of *s* from *i* to *j* | (3)(4) |\n+--------------------+----------------------------------+------------+\n| ``s[i:j:k]`` | slice of *s* from *i* to *j* | (3)(5) |\n| | with step *k* | |\n+--------------------+----------------------------------+------------+\n| ``len(s)`` | length of *s* | |\n+--------------------+----------------------------------+------------+\n| ``min(s)`` | smallest item of *s* | |\n+--------------------+----------------------------------+------------+\n| ``max(s)`` | largest item of *s* | |\n+--------------------+----------------------------------+------------+\n\nSequence types also support comparisons. In particular, tuples and\nlists are compared lexicographically by comparing corresponding\nelements. This means that to compare equal, every element must compare\nequal and the two sequences must be of the same type and have the same\nlength. (For full details see *Comparisons* in the language\nreference.)\n\nNotes:\n\n1. When *s* is a string or Unicode string object the ``in`` and ``not\n in`` operations act like a substring test. In Python versions\n before 2.3, *x* had to be a string of length 1. In Python 2.3 and\n beyond, *x* may be a string of any length.\n\n2. Values of *n* less than ``0`` are treated as ``0`` (which yields an\n empty sequence of the same type as *s*). Note also that the copies\n are shallow; nested structures are not copied. This often haunts\n new Python programmers; consider:\n\n >>> lists = [[]] * 3\n >>> lists\n [[], [], []]\n >>> lists[0].append(3)\n >>> lists\n [[3], [3], [3]]\n\n What has happened is that ``[[]]`` is a one-element list containing\n an empty list, so all three elements of ``[[]] * 3`` are (pointers\n to) this single empty list. Modifying any of the elements of\n ``lists`` modifies this single list. You can create a list of\n different lists this way:\n\n >>> lists = [[] for i in range(3)]\n >>> lists[0].append(3)\n >>> lists[1].append(5)\n >>> lists[2].append(7)\n >>> lists\n [[3], [5], [7]]\n\n3. If *i* or *j* is negative, the index is relative to the end of the\n string: ``len(s) + i`` or ``len(s) + j`` is substituted. But note\n that ``-0`` is still ``0``.\n\n4. The slice of *s* from *i* to *j* is defined as the sequence of\n items with index *k* such that ``i <= k < j``. If *i* or *j* is\n greater than ``len(s)``, use ``len(s)``. If *i* is omitted or\n ``None``, use ``0``. If *j* is omitted or ``None``, use\n ``len(s)``. If *i* is greater than or equal to *j*, the slice is\n empty.\n\n5. 
The slice of *s* from *i* to *j* with step *k* is defined as the\n sequence of items with index ``x = i + n*k`` such that ``0 <= n <\n (j-i)/k``. In other words, the indices are ``i``, ``i+k``,\n ``i+2*k``, ``i+3*k`` and so on, stopping when *j* is reached (but\n never including *j*). If *i* or *j* is greater than ``len(s)``,\n use ``len(s)``. If *i* or *j* are omitted or ``None``, they become\n "end" values (which end depends on the sign of *k*). Note, *k*\n cannot be zero. If *k* is ``None``, it is treated like ``1``.\n\n6. **CPython implementation detail:** If *s* and *t* are both strings,\n some Python implementations such as CPython can usually perform an\n in-place optimization for assignments of the form ``s = s + t`` or\n ``s += t``. When applicable, this optimization makes quadratic\n run-time much less likely. This optimization is both version and\n implementation dependent. For performance sensitive code, it is\n preferable to use the ``str.join()`` method which assures\n consistent linear concatenation performance across versions and\n implementations.\n\n Changed in version 2.4: Formerly, string concatenation never\n occurred in-place.\n\n\nString Methods\n==============\n\nBelow are listed the string methods which both 8-bit strings and\nUnicode objects support.\n\nIn addition, Python\'s strings support the sequence type methods\ndescribed in the *Sequence Types --- str, unicode, list, tuple,\nbuffer, xrange* section. To output formatted strings use template\nstrings or the ``%`` operator described in the *String Formatting\nOperations* section. Also, see the ``re`` module for string functions\nbased on regular expressions.\n\nstr.capitalize()\n\n Return a copy of the string with only its first character\n capitalized.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.center(width[, fillchar])\n\n Return centered in a string of length *width*. Padding is done\n using the specified *fillchar* (default is a space).\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.count(sub[, start[, end]])\n\n Return the number of non-overlapping occurrences of substring *sub*\n in the range [*start*, *end*]. Optional arguments *start* and\n *end* are interpreted as in slice notation.\n\nstr.decode([encoding[, errors]])\n\n Decodes the string using the codec registered for *encoding*.\n *encoding* defaults to the default string encoding. *errors* may\n be given to set a different error handling scheme. The default is\n ``\'strict\'``, meaning that encoding errors raise ``UnicodeError``.\n Other possible values are ``\'ignore\'``, ``\'replace\'`` and any other\n name registered via ``codecs.register_error()``, see section *Codec\n Base Classes*.\n\n New in version 2.2.\n\n Changed in version 2.3: Support for other error handling schemes\n added.\n\n Changed in version 2.7: Support for keyword arguments added.\n\nstr.encode([encoding[, errors]])\n\n Return an encoded version of the string. Default encoding is the\n current default string encoding. *errors* may be given to set a\n different error handling scheme. The default for *errors* is\n ``\'strict\'``, meaning that encoding errors raise a\n ``UnicodeError``. Other possible values are ``\'ignore\'``,\n ``\'replace\'``, ``\'xmlcharrefreplace\'``, ``\'backslashreplace\'`` and\n any other name registered via ``codecs.register_error()``, see\n section *Codec Base Classes*. 
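
A few worked examples of the extended-slice rule in note 5 above; the sample string is invented:

    s = 'abcdefg'
    assert s[::2] == 'aceg'       # omitted bounds become "end" values
    assert s[5:1:-1] == 'fedc'    # indices 5, 4, 3, 2 -- *j* itself is never included
    assert s[1:100:3] == 'be'     # an out-of-range *j* is clipped to len(s)
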
For a list of possible encodings, see\n section *Standard Encodings*.\n\n New in version 2.0.\n\n Changed in version 2.3: Support for ``\'xmlcharrefreplace\'`` and\n ``\'backslashreplace\'`` and other error handling schemes added.\n\n Changed in version 2.7: Support for keyword arguments added.\n\nstr.endswith(suffix[, start[, end]])\n\n Return ``True`` if the string ends with the specified *suffix*,\n otherwise return ``False``. *suffix* can also be a tuple of\n suffixes to look for. With optional *start*, test beginning at\n that position. With optional *end*, stop comparing at that\n position.\n\n Changed in version 2.5: Accept tuples as *suffix*.\n\nstr.expandtabs([tabsize])\n\n Return a copy of the string where all tab characters are replaced\n by one or more spaces, depending on the current column and the\n given tab size. The column number is reset to zero after each\n newline occurring in the string. If *tabsize* is not given, a tab\n size of ``8`` characters is assumed. This doesn\'t understand other\n non-printing characters or escape sequences.\n\nstr.find(sub[, start[, end]])\n\n Return the lowest index in the string where substring *sub* is\n found, such that *sub* is contained in the slice ``s[start:end]``.\n Optional arguments *start* and *end* are interpreted as in slice\n notation. Return ``-1`` if *sub* is not found.\n\nstr.format(*args, **kwargs)\n\n Perform a string formatting operation. The string on which this\n method is called can contain literal text or replacement fields\n delimited by braces ``{}``. Each replacement field contains either\n the numeric index of a positional argument, or the name of a\n keyword argument. Returns a copy of the string where each\n replacement field is replaced with the string value of the\n corresponding argument.\n\n >>> "The sum of 1 + 2 is {0}".format(1+2)\n \'The sum of 1 + 2 is 3\'\n\n See *Format String Syntax* for a description of the various\n formatting options that can be specified in format strings.\n\n This method of string formatting is the new standard in Python 3.0,\n and should be preferred to the ``%`` formatting described in\n *String Formatting Operations* in new code.\n\n New in version 2.6.\n\nstr.index(sub[, start[, end]])\n\n Like ``find()``, but raise ``ValueError`` when the substring is not\n found.\n\nstr.isalnum()\n\n Return true if all characters in the string are alphanumeric and\n there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isalpha()\n\n Return true if all characters in the string are alphabetic and\n there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isdigit()\n\n Return true if all characters in the string are digits and there is\n at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.islower()\n\n Return true if all cased characters in the string are lowercase and\n there is at least one cased character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isspace()\n\n Return true if there are only whitespace characters in the string\n and there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.istitle()\n\n Return true if the string is a titlecased string and there is at\n least one character, for example uppercase characters may only\n follow uncased characters and lowercase characters only cased ones.\n Return false otherwise.\n\n For 
8-bit strings, this method is locale-dependent.\n\nstr.isupper()\n\n Return true if all cased characters in the string are uppercase and\n there is at least one cased character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.join(iterable)\n\n Return a string which is the concatenation of the strings in the\n *iterable* *iterable*. The separator between elements is the\n string providing this method.\n\nstr.ljust(width[, fillchar])\n\n Return the string left justified in a string of length *width*.\n Padding is done using the specified *fillchar* (default is a\n space). The original string is returned if *width* is less than\n ``len(s)``.\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.lower()\n\n Return a copy of the string converted to lowercase.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.lstrip([chars])\n\n Return a copy of the string with leading characters removed. The\n *chars* argument is a string specifying the set of characters to be\n removed. If omitted or ``None``, the *chars* argument defaults to\n removing whitespace. The *chars* argument is not a prefix; rather,\n all combinations of its values are stripped:\n\n >>> \' spacious \'.lstrip()\n \'spacious \'\n >>> \'www.example.com\'.lstrip(\'cmowz.\')\n \'example.com\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.partition(sep)\n\n Split the string at the first occurrence of *sep*, and return a\n 3-tuple containing the part before the separator, the separator\n itself, and the part after the separator. If the separator is not\n found, return a 3-tuple containing the string itself, followed by\n two empty strings.\n\n New in version 2.5.\n\nstr.replace(old, new[, count])\n\n Return a copy of the string with all occurrences of substring *old*\n replaced by *new*. If the optional argument *count* is given, only\n the first *count* occurrences are replaced.\n\nstr.rfind(sub[, start[, end]])\n\n Return the highest index in the string where substring *sub* is\n found, such that *sub* is contained within ``s[start:end]``.\n Optional arguments *start* and *end* are interpreted as in slice\n notation. Return ``-1`` on failure.\n\nstr.rindex(sub[, start[, end]])\n\n Like ``rfind()`` but raises ``ValueError`` when the substring *sub*\n is not found.\n\nstr.rjust(width[, fillchar])\n\n Return the string right justified in a string of length *width*.\n Padding is done using the specified *fillchar* (default is a\n space). The original string is returned if *width* is less than\n ``len(s)``.\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.rpartition(sep)\n\n Split the string at the last occurrence of *sep*, and return a\n 3-tuple containing the part before the separator, the separator\n itself, and the part after the separator. If the separator is not\n found, return a 3-tuple containing two empty strings, followed by\n the string itself.\n\n New in version 2.5.\n\nstr.rsplit([sep[, maxsplit]])\n\n Return a list of the words in the string, using *sep* as the\n delimiter string. If *maxsplit* is given, at most *maxsplit* splits\n are done, the *rightmost* ones. If *sep* is not specified or\n ``None``, any whitespace string is a separator. Except for\n splitting from the right, ``rsplit()`` behaves like ``split()``\n which is described in detail below.\n\n New in version 2.4.\n\nstr.rstrip([chars])\n\n Return a copy of the string with trailing characters removed. 
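
A short sketch of ``partition()``, ``rpartition()``, ``rsplit()`` and ``join()`` as documented above; the sample values are invented:

    addr = 'user@host@backup'
    assert addr.partition('@') == ('user', '@', 'host@backup')
    assert addr.rpartition('@') == ('user@host', '@', 'backup')
    assert addr.rsplit('@', 1) == ['user@host', 'backup']
    assert '-'.join(['a', 'b', 'c']) == 'a-b-c'
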
The\n *chars* argument is a string specifying the set of characters to be\n removed. If omitted or ``None``, the *chars* argument defaults to\n removing whitespace. The *chars* argument is not a suffix; rather,\n all combinations of its values are stripped:\n\n >>> \' spacious \'.rstrip()\n \' spacious\'\n >>> \'mississippi\'.rstrip(\'ipz\')\n \'mississ\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.split([sep[, maxsplit]])\n\n Return a list of the words in the string, using *sep* as the\n delimiter string. If *maxsplit* is given, at most *maxsplit*\n splits are done (thus, the list will have at most ``maxsplit+1``\n elements). If *maxsplit* is not specified, then there is no limit\n on the number of splits (all possible splits are made).\n\n If *sep* is given, consecutive delimiters are not grouped together\n and are deemed to delimit empty strings (for example,\n ``\'1,,2\'.split(\',\')`` returns ``[\'1\', \'\', \'2\']``). The *sep*\n argument may consist of multiple characters (for example,\n ``\'1<>2<>3\'.split(\'<>\')`` returns ``[\'1\', \'2\', \'3\']``). Splitting\n an empty string with a specified separator returns ``[\'\']``.\n\n If *sep* is not specified or is ``None``, a different splitting\n algorithm is applied: runs of consecutive whitespace are regarded\n as a single separator, and the result will contain no empty strings\n at the start or end if the string has leading or trailing\n whitespace. Consequently, splitting an empty string or a string\n consisting of just whitespace with a ``None`` separator returns\n ``[]``.\n\n For example, ``\' 1 2 3 \'.split()`` returns ``[\'1\', \'2\', \'3\']``,\n and ``\' 1 2 3 \'.split(None, 1)`` returns ``[\'1\', \'2 3 \']``.\n\nstr.splitlines([keepends])\n\n Return a list of the lines in the string, breaking at line\n boundaries. Line breaks are not included in the resulting list\n unless *keepends* is given and true.\n\nstr.startswith(prefix[, start[, end]])\n\n Return ``True`` if string starts with the *prefix*, otherwise\n return ``False``. *prefix* can also be a tuple of prefixes to look\n for. With optional *start*, test string beginning at that\n position. With optional *end*, stop comparing string at that\n position.\n\n Changed in version 2.5: Accept tuples as *prefix*.\n\nstr.strip([chars])\n\n Return a copy of the string with the leading and trailing\n characters removed. The *chars* argument is a string specifying the\n set of characters to be removed. If omitted or ``None``, the\n *chars* argument defaults to removing whitespace. The *chars*\n argument is not a prefix or suffix; rather, all combinations of its\n values are stripped:\n\n >>> \' spacious \'.strip()\n \'spacious\'\n >>> \'www.example.com\'.strip(\'cmowz.\')\n \'example\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.swapcase()\n\n Return a copy of the string with uppercase characters converted to\n lowercase and vice versa.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.title()\n\n Return a titlecased version of the string where words start with an\n uppercase character and the remaining characters are lowercase.\n\n The algorithm uses a simple language-independent definition of a\n word as groups of consecutive letters. 
The definition works in\n many contexts but it means that apostrophes in contractions and\n possessives form word boundaries, which may not be the desired\n result:\n\n >>> "they\'re bill\'s friends from the UK".title()\n "They\'Re Bill\'S Friends From The Uk"\n\n A workaround for apostrophes can be constructed using regular\n expressions:\n\n >>> import re\n >>> def titlecase(s):\n return re.sub(r"[A-Za-z]+(\'[A-Za-z]+)?",\n lambda mo: mo.group(0)[0].upper() +\n mo.group(0)[1:].lower(),\n s)\n\n >>> titlecase("they\'re bill\'s friends.")\n "They\'re Bill\'s Friends."\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.translate(table[, deletechars])\n\n Return a copy of the string where all characters occurring in the\n optional argument *deletechars* are removed, and the remaining\n characters have been mapped through the given translation table,\n which must be a string of length 256.\n\n You can use the ``maketrans()`` helper function in the ``string``\n module to create a translation table. For string objects, set the\n *table* argument to ``None`` for translations that only delete\n characters:\n\n >>> \'read this short text\'.translate(None, \'aeiou\')\n \'rd ths shrt txt\'\n\n New in version 2.6: Support for a ``None`` *table* argument.\n\n For Unicode objects, the ``translate()`` method does not accept the\n optional *deletechars* argument. Instead, it returns a copy of the\n *s* where all characters have been mapped through the given\n translation table which must be a mapping of Unicode ordinals to\n Unicode ordinals, Unicode strings or ``None``. Unmapped characters\n are left untouched. Characters mapped to ``None`` are deleted.\n Note, a more flexible approach is to create a custom character\n mapping codec using the ``codecs`` module (see ``encodings.cp1251``\n for an example).\n\nstr.upper()\n\n Return a copy of the string converted to uppercase.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.zfill(width)\n\n Return the numeric string left filled with zeros in a string of\n length *width*. A sign prefix is handled correctly. The original\n string is returned if *width* is less than ``len(s)``.\n\n New in version 2.2.2.\n\nThe following methods are present only on unicode objects:\n\nunicode.isnumeric()\n\n Return ``True`` if there are only numeric characters in S,\n ``False`` otherwise. Numeric characters include digit characters,\n and all characters that have the Unicode numeric value property,\n e.g. U+2155, VULGAR FRACTION ONE FIFTH.\n\nunicode.isdecimal()\n\n Return ``True`` if there are only decimal characters in S,\n ``False`` otherwise. Decimal characters include digit characters,\n and all characters that that can be used to form decimal-radix\n numbers, e.g. U+0660, ARABIC-INDIC DIGIT ZERO.\n\n\nString Formatting Operations\n============================\n\nString and Unicode objects have one unique built-in operation: the\n``%`` operator (modulo). This is also known as the string\n*formatting* or *interpolation* operator. Given ``format % values``\n(where *format* is a string or Unicode object), ``%`` conversion\nspecifications in *format* are replaced with zero or more elements of\n*values*. The effect is similar to the using ``sprintf()`` in the C\nlanguage. If *format* is a Unicode object, or if any of the objects\nbeing converted using the ``%s`` conversion are Unicode objects, the\nresult will also be a Unicode object.\n\nIf *format* requires a single argument, *values* may be a single non-\ntuple object. 
[4] Otherwise, *values* must be a tuple with exactly\nthe number of items specified by the format string, or a single\nmapping object (for example, a dictionary).\n\nA conversion specifier contains two or more characters and has the\nfollowing components, which must occur in this order:\n\n1. The ``\'%\'`` character, which marks the start of the specifier.\n\n2. Mapping key (optional), consisting of a parenthesised sequence of\n characters (for example, ``(somename)``).\n\n3. Conversion flags (optional), which affect the result of some\n conversion types.\n\n4. Minimum field width (optional). If specified as an ``\'*\'``\n (asterisk), the actual width is read from the next element of the\n tuple in *values*, and the object to convert comes after the\n minimum field width and optional precision.\n\n5. Precision (optional), given as a ``\'.\'`` (dot) followed by the\n precision. If specified as ``\'*\'`` (an asterisk), the actual width\n is read from the next element of the tuple in *values*, and the\n value to convert comes after the precision.\n\n6. Length modifier (optional).\n\n7. Conversion type.\n\nWhen the right argument is a dictionary (or other mapping type), then\nthe formats in the string *must* include a parenthesised mapping key\ninto that dictionary inserted immediately after the ``\'%\'`` character.\nThe mapping key selects the value to be formatted from the mapping.\nFor example:\n\n>>> print \'%(language)s has %(#)03d quote types.\' % \\\n... {\'language\': "Python", "#": 2}\nPython has 002 quote types.\n\nIn this case no ``*`` specifiers may occur in a format (since they\nrequire a sequential parameter list).\n\nThe conversion flag characters are:\n\n+-----------+-----------------------------------------------------------------------+\n| Flag | Meaning |\n+===========+=======================================================================+\n| ``\'#\'`` | The value conversion will use the "alternate form" (where defined |\n| | below). |\n+-----------+-----------------------------------------------------------------------+\n| ``\'0\'`` | The conversion will be zero padded for numeric values. |\n+-----------+-----------------------------------------------------------------------+\n| ``\'-\'`` | The converted value is left adjusted (overrides the ``\'0\'`` |\n| | conversion if both are given). |\n+-----------+-----------------------------------------------------------------------+\n| ``\' \'`` | (a space) A blank should be left before a positive number (or empty |\n| | string) produced by a signed conversion. |\n+-----------+-----------------------------------------------------------------------+\n| ``\'+\'`` | A sign character (``\'+\'`` or ``\'-\'``) will precede the conversion |\n| | (overrides a "space" flag). |\n+-----------+-----------------------------------------------------------------------+\n\nA length modifier (``h``, ``l``, or ``L``) may be present, but is\nignored as it is not necessary for Python -- so e.g. ``%ld`` is\nidentical to ``%d``.\n\nThe conversion types are:\n\n+--------------+-------------------------------------------------------+---------+\n| Conversion | Meaning | Notes |\n+==============+=======================================================+=========+\n| ``\'d\'`` | Signed integer decimal. | |\n+--------------+-------------------------------------------------------+---------+\n| ``\'i\'`` | Signed integer decimal. | |\n+--------------+-------------------------------------------------------+---------+\n| ``\'o\'`` | Signed octal value. 
| (1) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'u\'`` | Obsolete type -- it is identical to ``\'d\'``. | (7) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'x\'`` | Signed hexadecimal (lowercase). | (2) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'X\'`` | Signed hexadecimal (uppercase). | (2) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'e\'`` | Floating point exponential format (lowercase). | (3) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'E\'`` | Floating point exponential format (uppercase). | (3) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'f\'`` | Floating point decimal format. | (3) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'F\'`` | Floating point decimal format. | (3) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'g\'`` | Floating point format. Uses lowercase exponential | (4) |\n| | format if exponent is less than -4 or not less than | |\n| | precision, decimal format otherwise. | |\n+--------------+-------------------------------------------------------+---------+\n| ``\'G\'`` | Floating point format. Uses uppercase exponential | (4) |\n| | format if exponent is less than -4 or not less than | |\n| | precision, decimal format otherwise. | |\n+--------------+-------------------------------------------------------+---------+\n| ``\'c\'`` | Single character (accepts integer or single character | |\n| | string). | |\n+--------------+-------------------------------------------------------+---------+\n| ``\'r\'`` | String (converts any Python object using ``repr()``). | (5) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'s\'`` | String (converts any Python object using ``str()``). | (6) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'%\'`` | No argument is converted, results in a ``\'%\'`` | |\n| | character in the result. | |\n+--------------+-------------------------------------------------------+---------+\n\nNotes:\n\n1. The alternate form causes a leading zero (``\'0\'``) to be inserted\n between left-hand padding and the formatting of the number if the\n leading character of the result is not already a zero.\n\n2. The alternate form causes a leading ``\'0x\'`` or ``\'0X\'`` (depending\n on whether the ``\'x\'`` or ``\'X\'`` format was used) to be inserted\n between left-hand padding and the formatting of the number if the\n leading character of the result is not already a zero.\n\n3. The alternate form causes the result to always contain a decimal\n point, even if no digits follow it.\n\n The precision determines the number of digits after the decimal\n point and defaults to 6.\n\n4. The alternate form causes the result to always contain a decimal\n point, and trailing zeroes are not removed as they would otherwise\n be.\n\n The precision determines the number of significant digits before\n and after the decimal point and defaults to 6.\n\n5. The ``%r`` conversion was added in Python 2.0.\n\n The precision determines the maximal number of characters used.\n\n6. 
If the object or format provided is a ``unicode`` string, the\n resulting string will also be ``unicode``.\n\n The precision determines the maximal number of characters used.\n\n7. See **PEP 237**.\n\nSince Python strings have an explicit length, ``%s`` conversions do\nnot assume that ``\'\\0\'`` is the end of the string.\n\nChanged in version 2.7: ``%f`` conversions for numbers whose absolute\nvalue is over 1e50 are no longer replaced by ``%g`` conversions.\n\nAdditional string operations are defined in standard modules\n``string`` and ``re``.\n\n\nXRange Type\n===========\n\nThe ``xrange`` type is an immutable sequence which is commonly used\nfor looping. The advantage of the ``xrange`` type is that an\n``xrange`` object will always take the same amount of memory, no\nmatter the size of the range it represents. There are no consistent\nperformance advantages.\n\nXRange objects have very little behavior: they only support indexing,\niteration, and the ``len()`` function.\n\n\nMutable Sequence Types\n======================\n\nList objects support additional operations that allow in-place\nmodification of the object. Other mutable sequence types (when added\nto the language) should also support these operations. Strings and\ntuples are immutable sequence types: such objects cannot be modified\nonce created. The following operations are defined on mutable sequence\ntypes (where *x* is an arbitrary object):\n\n+--------------------------------+----------------------------------+-----------------------+\n| Operation | Result | Notes |\n+================================+==================================+=======================+\n| ``s[i] = x`` | item *i* of *s* is replaced by | |\n| | *x* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s[i:j] = t`` | slice of *s* from *i* to *j* is | |\n| | replaced by the contents of the | |\n| | iterable *t* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``del s[i:j]`` | same as ``s[i:j] = []`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s[i:j:k] = t`` | the elements of ``s[i:j:k]`` are | (1) |\n| | replaced by those of *t* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``del s[i:j:k]`` | removes the elements of | |\n| | ``s[i:j:k]`` from the list | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.append(x)`` | same as ``s[len(s):len(s)] = | (2) |\n| | [x]`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.extend(x)`` | same as ``s[len(s):len(s)] = x`` | (3) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.count(x)`` | return number of *i*\'s for which | |\n| | ``s[i] == x`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.index(x[, i[, j]])`` | return smallest *k* such that | (4) |\n| | ``s[k] == x`` and ``i <= k < j`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.insert(i, x)`` | same as ``s[i:i] = [x]`` | (5) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.pop([i])`` | same as ``x = s[i]; del s[i]; | (6) |\n| | return x`` | 
|\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.remove(x)`` | same as ``del s[s.index(x)]`` | (4) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.reverse()`` | reverses the items of *s* in | (7) |\n| | place | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.sort([cmp[, key[, | sort the items of *s* in place | (7)(8)(9)(10) |\n| reverse]]])`` | | |\n+--------------------------------+----------------------------------+-----------------------+\n\nNotes:\n\n1. *t* must have the same length as the slice it is replacing.\n\n2. The C implementation of Python has historically accepted multiple\n parameters and implicitly joined them into a tuple; this no longer\n works in Python 2.0. Use of this misfeature has been deprecated\n since Python 1.4.\n\n3. *x* can be any iterable object.\n\n4. Raises ``ValueError`` when *x* is not found in *s*. When a negative\n index is passed as the second or third parameter to the ``index()``\n method, the list length is added, as for slice indices. If it is\n still negative, it is truncated to zero, as for slice indices.\n\n Changed in version 2.3: Previously, ``index()`` didn\'t have\n arguments for specifying start and stop positions.\n\n5. When a negative index is passed as the first parameter to the\n ``insert()`` method, the list length is added, as for slice\n indices. If it is still negative, it is truncated to zero, as for\n slice indices.\n\n Changed in version 2.3: Previously, all negative indices were\n truncated to zero.\n\n6. The ``pop()`` method is only supported by the list and array types.\n The optional argument *i* defaults to ``-1``, so that by default\n the last item is removed and returned.\n\n7. The ``sort()`` and ``reverse()`` methods modify the list in place\n for economy of space when sorting or reversing a large list. To\n remind you that they operate by side effect, they don\'t return the\n sorted or reversed list.\n\n8. The ``sort()`` method takes optional arguments for controlling the\n comparisons.\n\n *cmp* specifies a custom comparison function of two arguments (list\n items) which should return a negative, zero or positive number\n depending on whether the first argument is considered smaller than,\n equal to, or larger than the second argument: ``cmp=lambda x,y:\n cmp(x.lower(), y.lower())``. The default value is ``None``.\n\n *key* specifies a function of one argument that is used to extract\n a comparison key from each list element: ``key=str.lower``. The\n default value is ``None``.\n\n *reverse* is a boolean value. If set to ``True``, then the list\n elements are sorted as if each comparison were reversed.\n\n In general, the *key* and *reverse* conversion processes are much\n faster than specifying an equivalent *cmp* function. This is\n because *cmp* is called multiple times for each list element while\n *key* and *reverse* touch each element only once. Use\n ``functools.cmp_to_key()`` to convert an old-style *cmp* function\n to a *key* function.\n\n Changed in version 2.3: Support for ``None`` as an equivalent to\n omitting *cmp* was added.\n\n Changed in version 2.4: Support for *key* and *reverse* was added.\n\n9. Starting with Python 2.3, the ``sort()`` method is guaranteed to be\n stable. 
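For illustration, a minimal doctest-style sketch of the *key* argument and of sort stability (notes 8 and 9), assuming Python 2.7; the sample list is invented for the example:

   >>> words = ['banana', 'Apple', 'cherry']   # invented sample data
   >>> words.sort(key=str.lower)               # case-insensitive sort via *key*
   >>> words
   ['Apple', 'banana', 'cherry']
   >>> words.sort(key=len)                     # equal-length items keep their relative order (stable)
   >>> words
   ['Apple', 'banana', 'cherry']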
A sort is stable if it guarantees not to change the\n relative order of elements that compare equal --- this is helpful\n for sorting in multiple passes (for example, sort by department,\n then by salary grade).\n\n10. **CPython implementation detail:** While a list is being sorted,\n the effect of attempting to mutate, or even inspect, the list is\n undefined. The C implementation of Python 2.3 and newer makes the\n list appear empty for the duration, and raises ``ValueError`` if\n it can detect that the list has been mutated during a sort.\n', - 'typesseq-mutable': u"\nMutable Sequence Types\n**********************\n\nList objects support additional operations that allow in-place\nmodification of the object. Other mutable sequence types (when added\nto the language) should also support these operations. Strings and\ntuples are immutable sequence types: such objects cannot be modified\nonce created. The following operations are defined on mutable sequence\ntypes (where *x* is an arbitrary object):\n\n+--------------------------------+----------------------------------+-----------------------+\n| Operation | Result | Notes |\n+================================+==================================+=======================+\n| ``s[i] = x`` | item *i* of *s* is replaced by | |\n| | *x* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s[i:j] = t`` | slice of *s* from *i* to *j* is | |\n| | replaced by the contents of the | |\n| | iterable *t* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``del s[i:j]`` | same as ``s[i:j] = []`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s[i:j:k] = t`` | the elements of ``s[i:j:k]`` are | (1) |\n| | replaced by those of *t* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``del s[i:j:k]`` | removes the elements of | |\n| | ``s[i:j:k]`` from the list | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.append(x)`` | same as ``s[len(s):len(s)] = | (2) |\n| | [x]`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.extend(x)`` | same as ``s[len(s):len(s)] = x`` | (3) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.count(x)`` | return number of *i*'s for which | |\n| | ``s[i] == x`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.index(x[, i[, j]])`` | return smallest *k* such that | (4) |\n| | ``s[k] == x`` and ``i <= k < j`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.insert(i, x)`` | same as ``s[i:i] = [x]`` | (5) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.pop([i])`` | same as ``x = s[i]; del s[i]; | (6) |\n| | return x`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.remove(x)`` | same as ``del s[s.index(x)]`` | (4) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.reverse()`` | reverses the items of *s* in | (7) |\n| | place | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.sort([cmp[, key[, | sort the items of *s* in place | (7)(8)(9)(10) 
|\n| reverse]]])`` | | |\n+--------------------------------+----------------------------------+-----------------------+\n\nNotes:\n\n1. *t* must have the same length as the slice it is replacing.\n\n2. The C implementation of Python has historically accepted multiple\n parameters and implicitly joined them into a tuple; this no longer\n works in Python 2.0. Use of this misfeature has been deprecated\n since Python 1.4.\n\n3. *x* can be any iterable object.\n\n4. Raises ``ValueError`` when *x* is not found in *s*. When a negative\n index is passed as the second or third parameter to the ``index()``\n method, the list length is added, as for slice indices. If it is\n still negative, it is truncated to zero, as for slice indices.\n\n Changed in version 2.3: Previously, ``index()`` didn't have\n arguments for specifying start and stop positions.\n\n5. When a negative index is passed as the first parameter to the\n ``insert()`` method, the list length is added, as for slice\n indices. If it is still negative, it is truncated to zero, as for\n slice indices.\n\n Changed in version 2.3: Previously, all negative indices were\n truncated to zero.\n\n6. The ``pop()`` method is only supported by the list and array types.\n The optional argument *i* defaults to ``-1``, so that by default\n the last item is removed and returned.\n\n7. The ``sort()`` and ``reverse()`` methods modify the list in place\n for economy of space when sorting or reversing a large list. To\n remind you that they operate by side effect, they don't return the\n sorted or reversed list.\n\n8. The ``sort()`` method takes optional arguments for controlling the\n comparisons.\n\n *cmp* specifies a custom comparison function of two arguments (list\n items) which should return a negative, zero or positive number\n depending on whether the first argument is considered smaller than,\n equal to, or larger than the second argument: ``cmp=lambda x,y:\n cmp(x.lower(), y.lower())``. The default value is ``None``.\n\n *key* specifies a function of one argument that is used to extract\n a comparison key from each list element: ``key=str.lower``. The\n default value is ``None``.\n\n *reverse* is a boolean value. If set to ``True``, then the list\n elements are sorted as if each comparison were reversed.\n\n In general, the *key* and *reverse* conversion processes are much\n faster than specifying an equivalent *cmp* function. This is\n because *cmp* is called multiple times for each list element while\n *key* and *reverse* touch each element only once. Use\n ``functools.cmp_to_key()`` to convert an old-style *cmp* function\n to a *key* function.\n\n Changed in version 2.3: Support for ``None`` as an equivalent to\n omitting *cmp* was added.\n\n Changed in version 2.4: Support for *key* and *reverse* was added.\n\n9. Starting with Python 2.3, the ``sort()`` method is guaranteed to be\n stable. A sort is stable if it guarantees not to change the\n relative order of elements that compare equal --- this is helpful\n for sorting in multiple passes (for example, sort by department,\n then by salary grade).\n\n10. **CPython implementation detail:** While a list is being sorted,\n the effect of attempting to mutate, or even inspect, the list is\n undefined. 
The C implementation of Python 2.3 and newer makes the\n list appear empty for the duration, and raises ``ValueError`` if\n it can detect that the list has been mutated during a sort.\n", + 'typesseq': u'\nSequence Types --- ``str``, ``unicode``, ``list``, ``tuple``, ``bytearray``, ``buffer``, ``xrange``\n***************************************************************************************************\n\nThere are seven sequence types: strings, Unicode strings, lists,\ntuples, bytearrays, buffers, and xrange objects.\n\nFor other containers see the built in ``dict`` and ``set`` classes,\nand the ``collections`` module.\n\nString literals are written in single or double quotes: ``\'xyzzy\'``,\n``"frobozz"``. See *String literals* for more about string literals.\nUnicode strings are much like strings, but are specified in the syntax\nusing a preceding ``\'u\'`` character: ``u\'abc\'``, ``u"def"``. In\naddition to the functionality described here, there are also string-\nspecific methods described in the *String Methods* section. Lists are\nconstructed with square brackets, separating items with commas: ``[a,\nb, c]``. Tuples are constructed by the comma operator (not within\nsquare brackets), with or without enclosing parentheses, but an empty\ntuple must have the enclosing parentheses, such as ``a, b, c`` or\n``()``. A single item tuple must have a trailing comma, such as\n``(d,)``.\n\nBytearray objects are created with the built-in function\n``bytearray()``.\n\nBuffer objects are not directly supported by Python syntax, but can be\ncreated by calling the built-in function ``buffer()``. They don\'t\nsupport concatenation or repetition.\n\nObjects of type xrange are similar to buffers in that there is no\nspecific syntax to create them, but they are created using the\n``xrange()`` function. They don\'t support slicing, concatenation or\nrepetition, and using ``in``, ``not in``, ``min()`` or ``max()`` on\nthem is inefficient.\n\nMost sequence types support the following operations. The ``in`` and\n``not in`` operations have the same priorities as the comparison\noperations. The ``+`` and ``*`` operations have the same priority as\nthe corresponding numeric operations. [3] Additional methods are\nprovided for *Mutable Sequence Types*.\n\nThis table lists the sequence operations sorted in ascending priority\n(operations in the same box have the same priority). 
In the table,\n*s* and *t* are sequences of the same type; *n*, *i* and *j* are\nintegers:\n\n+--------------------+----------------------------------+------------+\n| Operation | Result | Notes |\n+====================+==================================+============+\n| ``x in s`` | ``True`` if an item of *s* is | (1) |\n| | equal to *x*, else ``False`` | |\n+--------------------+----------------------------------+------------+\n| ``x not in s`` | ``False`` if an item of *s* is | (1) |\n| | equal to *x*, else ``True`` | |\n+--------------------+----------------------------------+------------+\n| ``s + t`` | the concatenation of *s* and *t* | (6) |\n+--------------------+----------------------------------+------------+\n| ``s * n, n * s`` | *n* shallow copies of *s* | (2) |\n| | concatenated | |\n+--------------------+----------------------------------+------------+\n| ``s[i]`` | *i*\'th item of *s*, origin 0 | (3) |\n+--------------------+----------------------------------+------------+\n| ``s[i:j]`` | slice of *s* from *i* to *j* | (3)(4) |\n+--------------------+----------------------------------+------------+\n| ``s[i:j:k]`` | slice of *s* from *i* to *j* | (3)(5) |\n| | with step *k* | |\n+--------------------+----------------------------------+------------+\n| ``len(s)`` | length of *s* | |\n+--------------------+----------------------------------+------------+\n| ``min(s)`` | smallest item of *s* | |\n+--------------------+----------------------------------+------------+\n| ``max(s)`` | largest item of *s* | |\n+--------------------+----------------------------------+------------+\n| ``s.index(i)`` | index of the first occurence of | |\n| | *i* in *s* | |\n+--------------------+----------------------------------+------------+\n| ``s.count(i)`` | total number of occurences of | |\n| | *i* in *s* | |\n+--------------------+----------------------------------+------------+\n\nSequence types also support comparisons. In particular, tuples and\nlists are compared lexicographically by comparing corresponding\nelements. This means that to compare equal, every element must compare\nequal and the two sequences must be of the same type and have the same\nlength. (For full details see *Comparisons* in the language\nreference.)\n\nNotes:\n\n1. When *s* is a string or Unicode string object the ``in`` and ``not\n in`` operations act like a substring test. In Python versions\n before 2.3, *x* had to be a string of length 1. In Python 2.3 and\n beyond, *x* may be a string of any length.\n\n2. Values of *n* less than ``0`` are treated as ``0`` (which yields an\n empty sequence of the same type as *s*). Note also that the copies\n are shallow; nested structures are not copied. This often haunts\n new Python programmers; consider:\n\n >>> lists = [[]] * 3\n >>> lists\n [[], [], []]\n >>> lists[0].append(3)\n >>> lists\n [[3], [3], [3]]\n\n What has happened is that ``[[]]`` is a one-element list containing\n an empty list, so all three elements of ``[[]] * 3`` are (pointers\n to) this single empty list. Modifying any of the elements of\n ``lists`` modifies this single list. You can create a list of\n different lists this way:\n\n >>> lists = [[] for i in range(3)]\n >>> lists[0].append(3)\n >>> lists[1].append(5)\n >>> lists[2].append(7)\n >>> lists\n [[3], [5], [7]]\n\n3. If *i* or *j* is negative, the index is relative to the end of the\n string: ``len(s) + i`` or ``len(s) + j`` is substituted. But note\n that ``-0`` is still ``0``.\n\n4. 
The slice of *s* from *i* to *j* is defined as the sequence of\n items with index *k* such that ``i <= k < j``. If *i* or *j* is\n greater than ``len(s)``, use ``len(s)``. If *i* is omitted or\n ``None``, use ``0``. If *j* is omitted or ``None``, use\n ``len(s)``. If *i* is greater than or equal to *j*, the slice is\n empty.\n\n5. The slice of *s* from *i* to *j* with step *k* is defined as the\n sequence of items with index ``x = i + n*k`` such that ``0 <= n <\n (j-i)/k``. In other words, the indices are ``i``, ``i+k``,\n ``i+2*k``, ``i+3*k`` and so on, stopping when *j* is reached (but\n never including *j*). If *i* or *j* is greater than ``len(s)``,\n use ``len(s)``. If *i* or *j* are omitted or ``None``, they become\n "end" values (which end depends on the sign of *k*). Note, *k*\n cannot be zero. If *k* is ``None``, it is treated like ``1``.\n\n6. **CPython implementation detail:** If *s* and *t* are both strings,\n some Python implementations such as CPython can usually perform an\n in-place optimization for assignments of the form ``s = s + t`` or\n ``s += t``. When applicable, this optimization makes quadratic\n run-time much less likely. This optimization is both version and\n implementation dependent. For performance sensitive code, it is\n preferable to use the ``str.join()`` method which assures\n consistent linear concatenation performance across versions and\n implementations.\n\n Changed in version 2.4: Formerly, string concatenation never\n occurred in-place.\n\n\nString Methods\n==============\n\nBelow are listed the string methods which both 8-bit strings and\nUnicode objects support. Some of them are also available on\n``bytearray`` objects.\n\nIn addition, Python\'s strings support the sequence type methods\ndescribed in the *Sequence Types --- str, unicode, list, tuple,\nbytearray, buffer, xrange* section. To output formatted strings use\ntemplate strings or the ``%`` operator described in the *String\nFormatting Operations* section. Also, see the ``re`` module for string\nfunctions based on regular expressions.\n\nstr.capitalize()\n\n Return a copy of the string with its first character capitalized\n and the rest lowercased.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.center(width[, fillchar])\n\n Return centered in a string of length *width*. Padding is done\n using the specified *fillchar* (default is a space).\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.count(sub[, start[, end]])\n\n Return the number of non-overlapping occurrences of substring *sub*\n in the range [*start*, *end*]. Optional arguments *start* and\n *end* are interpreted as in slice notation.\n\nstr.decode([encoding[, errors]])\n\n Decodes the string using the codec registered for *encoding*.\n *encoding* defaults to the default string encoding. *errors* may\n be given to set a different error handling scheme. The default is\n ``\'strict\'``, meaning that encoding errors raise ``UnicodeError``.\n Other possible values are ``\'ignore\'``, ``\'replace\'`` and any other\n name registered via ``codecs.register_error()``, see section *Codec\n Base Classes*.\n\n New in version 2.2.\n\n Changed in version 2.3: Support for other error handling schemes\n added.\n\n Changed in version 2.7: Support for keyword arguments added.\n\nstr.encode([encoding[, errors]])\n\n Return an encoded version of the string. Default encoding is the\n current default string encoding. *errors* may be given to set a\n different error handling scheme. 
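For illustration, a minimal round-trip sketch with ``encode()`` and ``decode()``, assuming Python 2.x and the UTF-8 codec; the sample value is invented:

   >>> text = u'caf\xe9'              # invented sample: u'café'
   >>> data = text.encode('utf-8')
   >>> data
   'caf\xc3\xa9'
   >>> data.decode('utf-8') == text
   True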
The default for *errors* is\n ``\'strict\'``, meaning that encoding errors raise a\n ``UnicodeError``. Other possible values are ``\'ignore\'``,\n ``\'replace\'``, ``\'xmlcharrefreplace\'``, ``\'backslashreplace\'`` and\n any other name registered via ``codecs.register_error()``, see\n section *Codec Base Classes*. For a list of possible encodings, see\n section *Standard Encodings*.\n\n New in version 2.0.\n\n Changed in version 2.3: Support for ``\'xmlcharrefreplace\'`` and\n ``\'backslashreplace\'`` and other error handling schemes added.\n\n Changed in version 2.7: Support for keyword arguments added.\n\nstr.endswith(suffix[, start[, end]])\n\n Return ``True`` if the string ends with the specified *suffix*,\n otherwise return ``False``. *suffix* can also be a tuple of\n suffixes to look for. With optional *start*, test beginning at\n that position. With optional *end*, stop comparing at that\n position.\n\n Changed in version 2.5: Accept tuples as *suffix*.\n\nstr.expandtabs([tabsize])\n\n Return a copy of the string where all tab characters are replaced\n by one or more spaces, depending on the current column and the\n given tab size. The column number is reset to zero after each\n newline occurring in the string. If *tabsize* is not given, a tab\n size of ``8`` characters is assumed. This doesn\'t understand other\n non-printing characters or escape sequences.\n\nstr.find(sub[, start[, end]])\n\n Return the lowest index in the string where substring *sub* is\n found, such that *sub* is contained in the slice ``s[start:end]``.\n Optional arguments *start* and *end* are interpreted as in slice\n notation. Return ``-1`` if *sub* is not found.\n\n Note: The ``find()`` method should be used only if you need to know the\n position of *sub*. To check if *sub* is a substring or not, use\n the ``in`` operator:\n\n >>> \'Py\' in \'Python\'\n True\n\nstr.format(*args, **kwargs)\n\n Perform a string formatting operation. The string on which this\n method is called can contain literal text or replacement fields\n delimited by braces ``{}``. Each replacement field contains either\n the numeric index of a positional argument, or the name of a\n keyword argument. 
Returns a copy of the string where each\n replacement field is replaced with the string value of the\n corresponding argument.\n\n >>> "The sum of 1 + 2 is {0}".format(1+2)\n \'The sum of 1 + 2 is 3\'\n\n See *Format String Syntax* for a description of the various\n formatting options that can be specified in format strings.\n\n This method of string formatting is the new standard in Python 3.0,\n and should be preferred to the ``%`` formatting described in\n *String Formatting Operations* in new code.\n\n New in version 2.6.\n\nstr.index(sub[, start[, end]])\n\n Like ``find()``, but raise ``ValueError`` when the substring is not\n found.\n\nstr.isalnum()\n\n Return true if all characters in the string are alphanumeric and\n there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isalpha()\n\n Return true if all characters in the string are alphabetic and\n there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isdigit()\n\n Return true if all characters in the string are digits and there is\n at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.islower()\n\n Return true if all cased characters in the string are lowercase and\n there is at least one cased character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isspace()\n\n Return true if there are only whitespace characters in the string\n and there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.istitle()\n\n Return true if the string is a titlecased string and there is at\n least one character, for example uppercase characters may only\n follow uncased characters and lowercase characters only cased ones.\n Return false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isupper()\n\n Return true if all cased characters in the string are uppercase and\n there is at least one cased character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.join(iterable)\n\n Return a string which is the concatenation of the strings in the\n *iterable* *iterable*. The separator between elements is the\n string providing this method.\n\nstr.ljust(width[, fillchar])\n\n Return the string left justified in a string of length *width*.\n Padding is done using the specified *fillchar* (default is a\n space). The original string is returned if *width* is less than\n ``len(s)``.\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.lower()\n\n Return a copy of the string converted to lowercase.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.lstrip([chars])\n\n Return a copy of the string with leading characters removed. The\n *chars* argument is a string specifying the set of characters to be\n removed. If omitted or ``None``, the *chars* argument defaults to\n removing whitespace. The *chars* argument is not a prefix; rather,\n all combinations of its values are stripped:\n\n >>> \' spacious \'.lstrip()\n \'spacious \'\n >>> \'www.example.com\'.lstrip(\'cmowz.\')\n \'example.com\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.partition(sep)\n\n Split the string at the first occurrence of *sep*, and return a\n 3-tuple containing the part before the separator, the separator\n itself, and the part after the separator. 
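For illustration, a minimal doctest-style sketch of ``partition()``, assuming Python 2.5 or later; the sample strings are invented:

   >>> 'user@example.com'.partition('@')      # invented sample string
   ('user', '@', 'example.com')
   >>> 'no-separator-here'.partition('@')     # separator missing
   ('no-separator-here', '', '')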
If the separator is not\n found, return a 3-tuple containing the string itself, followed by\n two empty strings.\n\n New in version 2.5.\n\nstr.replace(old, new[, count])\n\n Return a copy of the string with all occurrences of substring *old*\n replaced by *new*. If the optional argument *count* is given, only\n the first *count* occurrences are replaced.\n\nstr.rfind(sub[, start[, end]])\n\n Return the highest index in the string where substring *sub* is\n found, such that *sub* is contained within ``s[start:end]``.\n Optional arguments *start* and *end* are interpreted as in slice\n notation. Return ``-1`` on failure.\n\nstr.rindex(sub[, start[, end]])\n\n Like ``rfind()`` but raises ``ValueError`` when the substring *sub*\n is not found.\n\nstr.rjust(width[, fillchar])\n\n Return the string right justified in a string of length *width*.\n Padding is done using the specified *fillchar* (default is a\n space). The original string is returned if *width* is less than\n ``len(s)``.\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.rpartition(sep)\n\n Split the string at the last occurrence of *sep*, and return a\n 3-tuple containing the part before the separator, the separator\n itself, and the part after the separator. If the separator is not\n found, return a 3-tuple containing two empty strings, followed by\n the string itself.\n\n New in version 2.5.\n\nstr.rsplit([sep[, maxsplit]])\n\n Return a list of the words in the string, using *sep* as the\n delimiter string. If *maxsplit* is given, at most *maxsplit* splits\n are done, the *rightmost* ones. If *sep* is not specified or\n ``None``, any whitespace string is a separator. Except for\n splitting from the right, ``rsplit()`` behaves like ``split()``\n which is described in detail below.\n\n New in version 2.4.\n\nstr.rstrip([chars])\n\n Return a copy of the string with trailing characters removed. The\n *chars* argument is a string specifying the set of characters to be\n removed. If omitted or ``None``, the *chars* argument defaults to\n removing whitespace. The *chars* argument is not a suffix; rather,\n all combinations of its values are stripped:\n\n >>> \' spacious \'.rstrip()\n \' spacious\'\n >>> \'mississippi\'.rstrip(\'ipz\')\n \'mississ\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.split([sep[, maxsplit]])\n\n Return a list of the words in the string, using *sep* as the\n delimiter string. If *maxsplit* is given, at most *maxsplit*\n splits are done (thus, the list will have at most ``maxsplit+1``\n elements). If *maxsplit* is not specified, then there is no limit\n on the number of splits (all possible splits are made).\n\n If *sep* is given, consecutive delimiters are not grouped together\n and are deemed to delimit empty strings (for example,\n ``\'1,,2\'.split(\',\')`` returns ``[\'1\', \'\', \'2\']``). The *sep*\n argument may consist of multiple characters (for example,\n ``\'1<>2<>3\'.split(\'<>\')`` returns ``[\'1\', \'2\', \'3\']``). Splitting\n an empty string with a specified separator returns ``[\'\']``.\n\n If *sep* is not specified or is ``None``, a different splitting\n algorithm is applied: runs of consecutive whitespace are regarded\n as a single separator, and the result will contain no empty strings\n at the start or end if the string has leading or trailing\n whitespace. 
Consequently, splitting an empty string or a string\n consisting of just whitespace with a ``None`` separator returns\n ``[]``.\n\n For example, ``\' 1 2 3 \'.split()`` returns ``[\'1\', \'2\', \'3\']``,\n and ``\' 1 2 3 \'.split(None, 1)`` returns ``[\'1\', \'2 3 \']``.\n\nstr.splitlines([keepends])\n\n Return a list of the lines in the string, breaking at line\n boundaries. Line breaks are not included in the resulting list\n unless *keepends* is given and true.\n\nstr.startswith(prefix[, start[, end]])\n\n Return ``True`` if string starts with the *prefix*, otherwise\n return ``False``. *prefix* can also be a tuple of prefixes to look\n for. With optional *start*, test string beginning at that\n position. With optional *end*, stop comparing string at that\n position.\n\n Changed in version 2.5: Accept tuples as *prefix*.\n\nstr.strip([chars])\n\n Return a copy of the string with the leading and trailing\n characters removed. The *chars* argument is a string specifying the\n set of characters to be removed. If omitted or ``None``, the\n *chars* argument defaults to removing whitespace. The *chars*\n argument is not a prefix or suffix; rather, all combinations of its\n values are stripped:\n\n >>> \' spacious \'.strip()\n \'spacious\'\n >>> \'www.example.com\'.strip(\'cmowz.\')\n \'example\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.swapcase()\n\n Return a copy of the string with uppercase characters converted to\n lowercase and vice versa.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.title()\n\n Return a titlecased version of the string where words start with an\n uppercase character and the remaining characters are lowercase.\n\n The algorithm uses a simple language-independent definition of a\n word as groups of consecutive letters. The definition works in\n many contexts but it means that apostrophes in contractions and\n possessives form word boundaries, which may not be the desired\n result:\n\n >>> "they\'re bill\'s friends from the UK".title()\n "They\'Re Bill\'S Friends From The Uk"\n\n A workaround for apostrophes can be constructed using regular\n expressions:\n\n >>> import re\n >>> def titlecase(s):\n return re.sub(r"[A-Za-z]+(\'[A-Za-z]+)?",\n lambda mo: mo.group(0)[0].upper() +\n mo.group(0)[1:].lower(),\n s)\n\n >>> titlecase("they\'re bill\'s friends.")\n "They\'re Bill\'s Friends."\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.translate(table[, deletechars])\n\n Return a copy of the string where all characters occurring in the\n optional argument *deletechars* are removed, and the remaining\n characters have been mapped through the given translation table,\n which must be a string of length 256.\n\n You can use the ``maketrans()`` helper function in the ``string``\n module to create a translation table. For string objects, set the\n *table* argument to ``None`` for translations that only delete\n characters:\n\n >>> \'read this short text\'.translate(None, \'aeiou\')\n \'rd ths shrt txt\'\n\n New in version 2.6: Support for a ``None`` *table* argument.\n\n For Unicode objects, the ``translate()`` method does not accept the\n optional *deletechars* argument. Instead, it returns a copy of the\n *s* where all characters have been mapped through the given\n translation table which must be a mapping of Unicode ordinals to\n Unicode ordinals, Unicode strings or ``None``. Unmapped characters\n are left untouched. 
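For illustration, a minimal sketch of the Unicode form of ``translate()``, assuming Python 2.x; the mapping and sample string are invented:

   >>> table = {ord(u'a'): u'4', ord(u'e'): None}   # invented mapping: a -> 4, e deleted
   >>> u'cheat sheet'.translate(table)
   u'ch4t sht'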
Characters mapped to ``None`` are deleted.\n Note, a more flexible approach is to create a custom character\n mapping codec using the ``codecs`` module (see ``encodings.cp1251``\n for an example).\n\nstr.upper()\n\n Return a copy of the string converted to uppercase.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.zfill(width)\n\n Return the numeric string left filled with zeros in a string of\n length *width*. A sign prefix is handled correctly. The original\n string is returned if *width* is less than ``len(s)``.\n\n New in version 2.2.2.\n\nThe following methods are present only on unicode objects:\n\nunicode.isnumeric()\n\n Return ``True`` if there are only numeric characters in S,\n ``False`` otherwise. Numeric characters include digit characters,\n and all characters that have the Unicode numeric value property,\n e.g. U+2155, VULGAR FRACTION ONE FIFTH.\n\nunicode.isdecimal()\n\n Return ``True`` if there are only decimal characters in S,\n ``False`` otherwise. Decimal characters include digit characters,\n and all characters that that can be used to form decimal-radix\n numbers, e.g. U+0660, ARABIC-INDIC DIGIT ZERO.\n\n\nString Formatting Operations\n============================\n\nString and Unicode objects have one unique built-in operation: the\n``%`` operator (modulo). This is also known as the string\n*formatting* or *interpolation* operator. Given ``format % values``\n(where *format* is a string or Unicode object), ``%`` conversion\nspecifications in *format* are replaced with zero or more elements of\n*values*. The effect is similar to the using ``sprintf()`` in the C\nlanguage. If *format* is a Unicode object, or if any of the objects\nbeing converted using the ``%s`` conversion are Unicode objects, the\nresult will also be a Unicode object.\n\nIf *format* requires a single argument, *values* may be a single non-\ntuple object. [4] Otherwise, *values* must be a tuple with exactly\nthe number of items specified by the format string, or a single\nmapping object (for example, a dictionary).\n\nA conversion specifier contains two or more characters and has the\nfollowing components, which must occur in this order:\n\n1. The ``\'%\'`` character, which marks the start of the specifier.\n\n2. Mapping key (optional), consisting of a parenthesised sequence of\n characters (for example, ``(somename)``).\n\n3. Conversion flags (optional), which affect the result of some\n conversion types.\n\n4. Minimum field width (optional). If specified as an ``\'*\'``\n (asterisk), the actual width is read from the next element of the\n tuple in *values*, and the object to convert comes after the\n minimum field width and optional precision.\n\n5. Precision (optional), given as a ``\'.\'`` (dot) followed by the\n precision. If specified as ``\'*\'`` (an asterisk), the actual width\n is read from the next element of the tuple in *values*, and the\n value to convert comes after the precision.\n\n6. Length modifier (optional).\n\n7. Conversion type.\n\nWhen the right argument is a dictionary (or other mapping type), then\nthe formats in the string *must* include a parenthesised mapping key\ninto that dictionary inserted immediately after the ``\'%\'`` character.\nThe mapping key selects the value to be formatted from the mapping.\nFor example:\n\n>>> print \'%(language)s has %(number)03d quote types.\' % \\\n... 
{"language": "Python", "number": 2}\nPython has 002 quote types.\n\nIn this case no ``*`` specifiers may occur in a format (since they\nrequire a sequential parameter list).\n\nThe conversion flag characters are:\n\n+-----------+-----------------------------------------------------------------------+\n| Flag | Meaning |\n+===========+=======================================================================+\n| ``\'#\'`` | The value conversion will use the "alternate form" (where defined |\n| | below). |\n+-----------+-----------------------------------------------------------------------+\n| ``\'0\'`` | The conversion will be zero padded for numeric values. |\n+-----------+-----------------------------------------------------------------------+\n| ``\'-\'`` | The converted value is left adjusted (overrides the ``\'0\'`` |\n| | conversion if both are given). |\n+-----------+-----------------------------------------------------------------------+\n| ``\' \'`` | (a space) A blank should be left before a positive number (or empty |\n| | string) produced by a signed conversion. |\n+-----------+-----------------------------------------------------------------------+\n| ``\'+\'`` | A sign character (``\'+\'`` or ``\'-\'``) will precede the conversion |\n| | (overrides a "space" flag). |\n+-----------+-----------------------------------------------------------------------+\n\nA length modifier (``h``, ``l``, or ``L``) may be present, but is\nignored as it is not necessary for Python -- so e.g. ``%ld`` is\nidentical to ``%d``.\n\nThe conversion types are:\n\n+--------------+-------------------------------------------------------+---------+\n| Conversion | Meaning | Notes |\n+==============+=======================================================+=========+\n| ``\'d\'`` | Signed integer decimal. | |\n+--------------+-------------------------------------------------------+---------+\n| ``\'i\'`` | Signed integer decimal. | |\n+--------------+-------------------------------------------------------+---------+\n| ``\'o\'`` | Signed octal value. | (1) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'u\'`` | Obsolete type -- it is identical to ``\'d\'``. | (7) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'x\'`` | Signed hexadecimal (lowercase). | (2) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'X\'`` | Signed hexadecimal (uppercase). | (2) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'e\'`` | Floating point exponential format (lowercase). | (3) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'E\'`` | Floating point exponential format (uppercase). | (3) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'f\'`` | Floating point decimal format. | (3) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'F\'`` | Floating point decimal format. | (3) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'g\'`` | Floating point format. Uses lowercase exponential | (4) |\n| | format if exponent is less than -4 or not less than | |\n| | precision, decimal format otherwise. | |\n+--------------+-------------------------------------------------------+---------+\n| ``\'G\'`` | Floating point format. 
Uses uppercase exponential | (4) |\n| | format if exponent is less than -4 or not less than | |\n| | precision, decimal format otherwise. | |\n+--------------+-------------------------------------------------------+---------+\n| ``\'c\'`` | Single character (accepts integer or single character | |\n| | string). | |\n+--------------+-------------------------------------------------------+---------+\n| ``\'r\'`` | String (converts any Python object using ``repr()``). | (5) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'s\'`` | String (converts any Python object using ``str()``). | (6) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'%\'`` | No argument is converted, results in a ``\'%\'`` | |\n| | character in the result. | |\n+--------------+-------------------------------------------------------+---------+\n\nNotes:\n\n1. The alternate form causes a leading zero (``\'0\'``) to be inserted\n between left-hand padding and the formatting of the number if the\n leading character of the result is not already a zero.\n\n2. The alternate form causes a leading ``\'0x\'`` or ``\'0X\'`` (depending\n on whether the ``\'x\'`` or ``\'X\'`` format was used) to be inserted\n between left-hand padding and the formatting of the number if the\n leading character of the result is not already a zero.\n\n3. The alternate form causes the result to always contain a decimal\n point, even if no digits follow it.\n\n The precision determines the number of digits after the decimal\n point and defaults to 6.\n\n4. The alternate form causes the result to always contain a decimal\n point, and trailing zeroes are not removed as they would otherwise\n be.\n\n The precision determines the number of significant digits before\n and after the decimal point and defaults to 6.\n\n5. The ``%r`` conversion was added in Python 2.0.\n\n The precision determines the maximal number of characters used.\n\n6. If the object or format provided is a ``unicode`` string, the\n resulting string will also be ``unicode``.\n\n The precision determines the maximal number of characters used.\n\n7. See **PEP 237**.\n\nSince Python strings have an explicit length, ``%s`` conversions do\nnot assume that ``\'\\0\'`` is the end of the string.\n\nChanged in version 2.7: ``%f`` conversions for numbers whose absolute\nvalue is over 1e50 are no longer replaced by ``%g`` conversions.\n\nAdditional string operations are defined in standard modules\n``string`` and ``re``.\n\n\nXRange Type\n===========\n\nThe ``xrange`` type is an immutable sequence which is commonly used\nfor looping. The advantage of the ``xrange`` type is that an\n``xrange`` object will always take the same amount of memory, no\nmatter the size of the range it represents. There are no consistent\nperformance advantages.\n\nXRange objects have very little behavior: they only support indexing,\niteration, and the ``len()`` function.\n\n\nMutable Sequence Types\n======================\n\nList and ``bytearray`` objects support additional operations that\nallow in-place modification of the object. Other mutable sequence\ntypes (when added to the language) should also support these\noperations. Strings and tuples are immutable sequence types: such\nobjects cannot be modified once created. 
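For illustration, a minimal doctest-style contrast between an immutable string and a mutable list, assuming Python 2.x; the sample values are invented:

   >>> s = 'abc'
   >>> s[0] = 'z'
   Traceback (most recent call last):
     ...
   TypeError: 'str' object does not support item assignment
   >>> items = ['a', 'b', 'c']
   >>> items[0] = 'z'                 # lists can be changed in place
   >>> items
   ['z', 'b', 'c']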
The following operations are\ndefined on mutable sequence types (where *x* is an arbitrary object):\n\n+--------------------------------+----------------------------------+-----------------------+\n| Operation | Result | Notes |\n+================================+==================================+=======================+\n| ``s[i] = x`` | item *i* of *s* is replaced by | |\n| | *x* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s[i:j] = t`` | slice of *s* from *i* to *j* is | |\n| | replaced by the contents of the | |\n| | iterable *t* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``del s[i:j]`` | same as ``s[i:j] = []`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s[i:j:k] = t`` | the elements of ``s[i:j:k]`` are | (1) |\n| | replaced by those of *t* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``del s[i:j:k]`` | removes the elements of | |\n| | ``s[i:j:k]`` from the list | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.append(x)`` | same as ``s[len(s):len(s)] = | (2) |\n| | [x]`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.extend(x)`` | same as ``s[len(s):len(s)] = x`` | (3) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.count(x)`` | return number of *i*\'s for which | |\n| | ``s[i] == x`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.index(x[, i[, j]])`` | return smallest *k* such that | (4) |\n| | ``s[k] == x`` and ``i <= k < j`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.insert(i, x)`` | same as ``s[i:i] = [x]`` | (5) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.pop([i])`` | same as ``x = s[i]; del s[i]; | (6) |\n| | return x`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.remove(x)`` | same as ``del s[s.index(x)]`` | (4) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.reverse()`` | reverses the items of *s* in | (7) |\n| | place | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.sort([cmp[, key[, | sort the items of *s* in place | (7)(8)(9)(10) |\n| reverse]]])`` | | |\n+--------------------------------+----------------------------------+-----------------------+\n\nNotes:\n\n1. *t* must have the same length as the slice it is replacing.\n\n2. The C implementation of Python has historically accepted multiple\n parameters and implicitly joined them into a tuple; this no longer\n works in Python 2.0. Use of this misfeature has been deprecated\n since Python 1.4.\n\n3. *x* can be any iterable object.\n\n4. Raises ``ValueError`` when *x* is not found in *s*. When a negative\n index is passed as the second or third parameter to the ``index()``\n method, the list length is added, as for slice indices. If it is\n still negative, it is truncated to zero, as for slice indices.\n\n Changed in version 2.3: Previously, ``index()`` didn\'t have\n arguments for specifying start and stop positions.\n\n5. 
When a negative index is passed as the first parameter to the\n ``insert()`` method, the list length is added, as for slice\n indices. If it is still negative, it is truncated to zero, as for\n slice indices.\n\n Changed in version 2.3: Previously, all negative indices were\n truncated to zero.\n\n6. The ``pop()`` method is only supported by the list and array types.\n The optional argument *i* defaults to ``-1``, so that by default\n the last item is removed and returned.\n\n7. The ``sort()`` and ``reverse()`` methods modify the list in place\n for economy of space when sorting or reversing a large list. To\n remind you that they operate by side effect, they don\'t return the\n sorted or reversed list.\n\n8. The ``sort()`` method takes optional arguments for controlling the\n comparisons.\n\n *cmp* specifies a custom comparison function of two arguments (list\n items) which should return a negative, zero or positive number\n depending on whether the first argument is considered smaller than,\n equal to, or larger than the second argument: ``cmp=lambda x,y:\n cmp(x.lower(), y.lower())``. The default value is ``None``.\n\n *key* specifies a function of one argument that is used to extract\n a comparison key from each list element: ``key=str.lower``. The\n default value is ``None``.\n\n *reverse* is a boolean value. If set to ``True``, then the list\n elements are sorted as if each comparison were reversed.\n\n In general, the *key* and *reverse* conversion processes are much\n faster than specifying an equivalent *cmp* function. This is\n because *cmp* is called multiple times for each list element while\n *key* and *reverse* touch each element only once. Use\n ``functools.cmp_to_key()`` to convert an old-style *cmp* function\n to a *key* function.\n\n Changed in version 2.3: Support for ``None`` as an equivalent to\n omitting *cmp* was added.\n\n Changed in version 2.4: Support for *key* and *reverse* was added.\n\n9. Starting with Python 2.3, the ``sort()`` method is guaranteed to be\n stable. A sort is stable if it guarantees not to change the\n relative order of elements that compare equal --- this is helpful\n for sorting in multiple passes (for example, sort by department,\n then by salary grade).\n\n10. **CPython implementation detail:** While a list is being sorted,\n the effect of attempting to mutate, or even inspect, the list is\n undefined. The C implementation of Python 2.3 and newer makes the\n list appear empty for the duration, and raises ``ValueError`` if\n it can detect that the list has been mutated during a sort.\n', + 'typesseq-mutable': u"\nMutable Sequence Types\n**********************\n\nList and ``bytearray`` objects support additional operations that\nallow in-place modification of the object. Other mutable sequence\ntypes (when added to the language) should also support these\noperations. Strings and tuples are immutable sequence types: such\nobjects cannot be modified once created. 
The following operations are\ndefined on mutable sequence types (where *x* is an arbitrary object):\n\n+--------------------------------+----------------------------------+-----------------------+\n| Operation | Result | Notes |\n+================================+==================================+=======================+\n| ``s[i] = x`` | item *i* of *s* is replaced by | |\n| | *x* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s[i:j] = t`` | slice of *s* from *i* to *j* is | |\n| | replaced by the contents of the | |\n| | iterable *t* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``del s[i:j]`` | same as ``s[i:j] = []`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s[i:j:k] = t`` | the elements of ``s[i:j:k]`` are | (1) |\n| | replaced by those of *t* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``del s[i:j:k]`` | removes the elements of | |\n| | ``s[i:j:k]`` from the list | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.append(x)`` | same as ``s[len(s):len(s)] = | (2) |\n| | [x]`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.extend(x)`` | same as ``s[len(s):len(s)] = x`` | (3) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.count(x)`` | return number of *i*'s for which | |\n| | ``s[i] == x`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.index(x[, i[, j]])`` | return smallest *k* such that | (4) |\n| | ``s[k] == x`` and ``i <= k < j`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.insert(i, x)`` | same as ``s[i:i] = [x]`` | (5) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.pop([i])`` | same as ``x = s[i]; del s[i]; | (6) |\n| | return x`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.remove(x)`` | same as ``del s[s.index(x)]`` | (4) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.reverse()`` | reverses the items of *s* in | (7) |\n| | place | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.sort([cmp[, key[, | sort the items of *s* in place | (7)(8)(9)(10) |\n| reverse]]])`` | | |\n+--------------------------------+----------------------------------+-----------------------+\n\nNotes:\n\n1. *t* must have the same length as the slice it is replacing.\n\n2. The C implementation of Python has historically accepted multiple\n parameters and implicitly joined them into a tuple; this no longer\n works in Python 2.0. Use of this misfeature has been deprecated\n since Python 1.4.\n\n3. *x* can be any iterable object.\n\n4. Raises ``ValueError`` when *x* is not found in *s*. When a negative\n index is passed as the second or third parameter to the ``index()``\n method, the list length is added, as for slice indices. If it is\n still negative, it is truncated to zero, as for slice indices.\n\n Changed in version 2.3: Previously, ``index()`` didn't have\n arguments for specifying start and stop positions.\n\n5. 
When a negative index is passed as the first parameter to the\n ``insert()`` method, the list length is added, as for slice\n indices. If it is still negative, it is truncated to zero, as for\n slice indices.\n\n Changed in version 2.3: Previously, all negative indices were\n truncated to zero.\n\n6. The ``pop()`` method is only supported by the list and array types.\n The optional argument *i* defaults to ``-1``, so that by default\n the last item is removed and returned.\n\n7. The ``sort()`` and ``reverse()`` methods modify the list in place\n for economy of space when sorting or reversing a large list. To\n remind you that they operate by side effect, they don't return the\n sorted or reversed list.\n\n8. The ``sort()`` method takes optional arguments for controlling the\n comparisons.\n\n *cmp* specifies a custom comparison function of two arguments (list\n items) which should return a negative, zero or positive number\n depending on whether the first argument is considered smaller than,\n equal to, or larger than the second argument: ``cmp=lambda x,y:\n cmp(x.lower(), y.lower())``. The default value is ``None``.\n\n *key* specifies a function of one argument that is used to extract\n a comparison key from each list element: ``key=str.lower``. The\n default value is ``None``.\n\n *reverse* is a boolean value. If set to ``True``, then the list\n elements are sorted as if each comparison were reversed.\n\n In general, the *key* and *reverse* conversion processes are much\n faster than specifying an equivalent *cmp* function. This is\n because *cmp* is called multiple times for each list element while\n *key* and *reverse* touch each element only once. Use\n ``functools.cmp_to_key()`` to convert an old-style *cmp* function\n to a *key* function.\n\n Changed in version 2.3: Support for ``None`` as an equivalent to\n omitting *cmp* was added.\n\n Changed in version 2.4: Support for *key* and *reverse* was added.\n\n9. Starting with Python 2.3, the ``sort()`` method is guaranteed to be\n stable. A sort is stable if it guarantees not to change the\n relative order of elements that compare equal --- this is helpful\n for sorting in multiple passes (for example, sort by department,\n then by salary grade).\n\n10. **CPython implementation detail:** While a list is being sorted,\n the effect of attempting to mutate, or even inspect, the list is\n undefined. The C implementation of Python 2.3 and newer makes the\n list appear empty for the duration, and raises ``ValueError`` if\n it can detect that the list has been mutated during a sort.\n", 'unary': u'\nUnary arithmetic and bitwise operations\n***************************************\n\nAll unary arithmetic and bitwise operations have the same priority:\n\n u_expr ::= power | "-" u_expr | "+" u_expr | "~" u_expr\n\nThe unary ``-`` (minus) operator yields the negation of its numeric\nargument.\n\nThe unary ``+`` (plus) operator yields its numeric argument unchanged.\n\nThe unary ``~`` (invert) operator yields the bitwise inversion of its\nplain or long integer argument. The bitwise inversion of ``x`` is\ndefined as ``-(x+1)``. 
It only applies to integral numbers.\n\nIn all three cases, if the argument does not have the proper type, a\n``TypeError`` exception is raised.\n', 'while': u'\nThe ``while`` statement\n***********************\n\nThe ``while`` statement is used for repeated execution as long as an\nexpression is true:\n\n while_stmt ::= "while" expression ":" suite\n ["else" ":" suite]\n\nThis repeatedly tests the expression and, if it is true, executes the\nfirst suite; if the expression is false (which may be the first time\nit is tested) the suite of the ``else`` clause, if present, is\nexecuted and the loop terminates.\n\nA ``break`` statement executed in the first suite terminates the loop\nwithout executing the ``else`` clause\'s suite. A ``continue``\nstatement executed in the first suite skips the rest of the suite and\ngoes back to testing the expression.\n', - 'with': u'\nThe ``with`` statement\n**********************\n\nNew in version 2.5.\n\nThe ``with`` statement is used to wrap the execution of a block with\nmethods defined by a context manager (see section *With Statement\nContext Managers*). This allows common\n``try``...``except``...``finally`` usage patterns to be encapsulated\nfor convenient reuse.\n\n with_stmt ::= "with" with_item ("," with_item)* ":" suite\n with_item ::= expression ["as" target]\n\nThe execution of the ``with`` statement with one "item" proceeds as\nfollows:\n\n1. The context expression is evaluated to obtain a context manager.\n\n2. The context manager\'s ``__exit__()`` is loaded for later use.\n\n3. The context manager\'s ``__enter__()`` method is invoked.\n\n4. If a target was included in the ``with`` statement, the return\n value from ``__enter__()`` is assigned to it.\n\n Note: The ``with`` statement guarantees that if the ``__enter__()``\n method returns without an error, then ``__exit__()`` will always\n be called. Thus, if an error occurs during the assignment to the\n target list, it will be treated the same as an error occurring\n within the suite would be. See step 6 below.\n\n5. The suite is executed.\n\n6. The context manager\'s ``__exit__()`` method is invoked. If an\n exception caused the suite to be exited, its type, value, and\n traceback are passed as arguments to ``__exit__()``. Otherwise,\n three ``None`` arguments are supplied.\n\n If the suite was exited due to an exception, and the return value\n from the ``__exit__()`` method was false, the exception is\n reraised. If the return value was true, the exception is\n suppressed, and execution continues with the statement following\n the ``with`` statement.\n\n If the suite was exited for any reason other than an exception, the\n return value from ``__exit__()`` is ignored, and execution proceeds\n at the normal location for the kind of exit that was taken.\n\nWith more than one item, the context managers are processed as if\nmultiple ``with`` statements were nested:\n\n with A() as a, B() as b:\n suite\n\nis equivalent to\n\n with A() as a:\n with B() as b:\n suite\n\nNote: In Python 2.5, the ``with`` statement is only allowed when the\n ``with_statement`` feature has been enabled. 
It is always enabled\n in Python 2.6.\n\nChanged in version 2.7: Support for multiple context expressions.\n\nSee also:\n\n **PEP 0343** - The "with" statement\n The specification, background, and examples for the Python\n ``with`` statement.\n', + 'with': u'\nThe ``with`` statement\n**********************\n\nNew in version 2.5.\n\nThe ``with`` statement is used to wrap the execution of a block with\nmethods defined by a context manager (see section *With Statement\nContext Managers*). This allows common\n``try``...``except``...``finally`` usage patterns to be encapsulated\nfor convenient reuse.\n\n with_stmt ::= "with" with_item ("," with_item)* ":" suite\n with_item ::= expression ["as" target]\n\nThe execution of the ``with`` statement with one "item" proceeds as\nfollows:\n\n1. The context expression (the expression given in the **with_item**)\n is evaluated to obtain a context manager.\n\n2. The context manager\'s ``__exit__()`` is loaded for later use.\n\n3. The context manager\'s ``__enter__()`` method is invoked.\n\n4. If a target was included in the ``with`` statement, the return\n value from ``__enter__()`` is assigned to it.\n\n Note: The ``with`` statement guarantees that if the ``__enter__()``\n method returns without an error, then ``__exit__()`` will always\n be called. Thus, if an error occurs during the assignment to the\n target list, it will be treated the same as an error occurring\n within the suite would be. See step 6 below.\n\n5. The suite is executed.\n\n6. The context manager\'s ``__exit__()`` method is invoked. If an\n exception caused the suite to be exited, its type, value, and\n traceback are passed as arguments to ``__exit__()``. Otherwise,\n three ``None`` arguments are supplied.\n\n If the suite was exited due to an exception, and the return value\n from the ``__exit__()`` method was false, the exception is\n reraised. If the return value was true, the exception is\n suppressed, and execution continues with the statement following\n the ``with`` statement.\n\n If the suite was exited for any reason other than an exception, the\n return value from ``__exit__()`` is ignored, and execution proceeds\n at the normal location for the kind of exit that was taken.\n\nWith more than one item, the context managers are processed as if\nmultiple ``with`` statements were nested:\n\n with A() as a, B() as b:\n suite\n\nis equivalent to\n\n with A() as a:\n with B() as b:\n suite\n\nNote: In Python 2.5, the ``with`` statement is only allowed when the\n ``with_statement`` feature has been enabled. It is always enabled\n in Python 2.6.\n\nChanged in version 2.7: Support for multiple context expressions.\n\nSee also:\n\n **PEP 0343** - The "with" statement\n The specification, background, and examples for the Python\n ``with`` statement.\n', 'yield': u'\nThe ``yield`` statement\n***********************\n\n yield_stmt ::= yield_expression\n\nThe ``yield`` statement is only used when defining a generator\nfunction, and is only used in the body of the generator function.\nUsing a ``yield`` statement in a function definition is sufficient to\ncause that definition to create a generator function instead of a\nnormal function.\n\nWhen a generator function is called, it returns an iterator known as a\ngenerator iterator, or more commonly, a generator. 
The body of the\ngenerator function is executed by calling the generator\'s ``next()``\nmethod repeatedly until it raises an exception.\n\nWhen a ``yield`` statement is executed, the state of the generator is\nfrozen and the value of **expression_list** is returned to\n``next()``\'s caller. By "frozen" we mean that all local state is\nretained, including the current bindings of local variables, the\ninstruction pointer, and the internal evaluation stack: enough\ninformation is saved so that the next time ``next()`` is invoked, the\nfunction can proceed exactly as if the ``yield`` statement were just\nanother external call.\n\nAs of Python version 2.5, the ``yield`` statement is now allowed in\nthe ``try`` clause of a ``try`` ... ``finally`` construct. If the\ngenerator is not resumed before it is finalized (by reaching a zero\nreference count or by being garbage collected), the generator-\niterator\'s ``close()`` method will be called, allowing any pending\n``finally`` clauses to execute.\n\nNote: In Python 2.2, the ``yield`` statement was only allowed when the\n ``generators`` feature has been enabled. This ``__future__`` import\n statement was used to enable the feature:\n\n from __future__ import generators\n\nSee also:\n\n **PEP 0255** - Simple Generators\n The proposal for adding generators and the ``yield`` statement\n to Python.\n\n **PEP 0342** - Coroutines via Enhanced Generators\n The proposal that, among other generator enhancements, proposed\n allowing ``yield`` to appear inside a ``try`` ... ``finally``\n block.\n'} diff --git a/lib-python/2.7/random.py b/lib-python/2.7/random.py --- a/lib-python/2.7/random.py +++ b/lib-python/2.7/random.py @@ -317,7 +317,7 @@ n = len(population) if not 0 <= k <= n: - raise ValueError, "sample larger than population" + raise ValueError("sample larger than population") random = self.random _int = int result = [None] * k @@ -490,6 +490,12 @@ Conditions on the parameters are alpha > 0 and beta > 0. + The probability distribution function is: + + x ** (alpha - 1) * math.exp(-x / beta) + pdf(x) = -------------------------------------- + math.gamma(alpha) * beta ** alpha + """ # alpha > 0, beta > 0, mean is alpha*beta, variance is alpha*beta**2 @@ -592,7 +598,7 @@ ## -------------------- beta -------------------- ## See -## http://sourceforge.net/bugs/?func=detailbug&bug_id=130030&group_id=5470 +## http://mail.python.org/pipermail/python-bugs-list/2001-January/003752.html ## for Ivan Frohne's insightful analysis of why the original implementation: ## ## def betavariate(self, alpha, beta): diff --git a/lib-python/2.7/re.py b/lib-python/2.7/re.py --- a/lib-python/2.7/re.py +++ b/lib-python/2.7/re.py @@ -207,8 +207,7 @@ "Escape all non-alphanumeric characters in pattern." s = list(pattern) alphanum = _alphanum - for i in range(len(pattern)): - c = pattern[i] + for i, c in enumerate(pattern): if c not in alphanum: if c == "\000": s[i] = "\\000" diff --git a/lib-python/2.7/shutil.py b/lib-python/2.7/shutil.py --- a/lib-python/2.7/shutil.py +++ b/lib-python/2.7/shutil.py @@ -277,6 +277,12 @@ """ real_dst = dst if os.path.isdir(dst): + if _samefile(src, dst): + # We might be on a case insensitive filesystem, + # perform the rename anyway. + os.rename(src, dst) + return + real_dst = os.path.join(dst, _basename(src)) if os.path.exists(real_dst): raise Error, "Destination path '%s' already exists" % real_dst @@ -336,7 +342,7 @@ archive that is being built. If not provided, the current owner and group will be used. 
- The output tar file will be named 'base_dir' + ".tar", possibly plus + The output tar file will be named 'base_name' + ".tar", possibly plus the appropriate compression extension (".gz", or ".bz2"). Returns the output filename. @@ -406,7 +412,7 @@ def _make_zipfile(base_name, base_dir, verbose=0, dry_run=0, logger=None): """Create a zip file from all the files under 'base_dir'. - The output zip file will be named 'base_dir' + ".zip". Uses either the + The output zip file will be named 'base_name' + ".zip". Uses either the "zipfile" Python module (if available) or the InfoZIP "zip" utility (if installed and found on the default search path). If neither tool is available, raises ExecError. Returns the name of the output zip diff --git a/lib-python/2.7/site.py b/lib-python/2.7/site.py --- a/lib-python/2.7/site.py +++ b/lib-python/2.7/site.py @@ -61,6 +61,7 @@ import sys import os import __builtin__ +import traceback # Prefixes for site-packages; add additional prefixes like /usr/local here PREFIXES = [sys.prefix, sys.exec_prefix] @@ -155,17 +156,26 @@ except IOError: return with f: - for line in f: + for n, line in enumerate(f): if line.startswith("#"): continue - if line.startswith(("import ", "import\t")): - exec line - continue - line = line.rstrip() - dir, dircase = makepath(sitedir, line) - if not dircase in known_paths and os.path.exists(dir): - sys.path.append(dir) - known_paths.add(dircase) + try: + if line.startswith(("import ", "import\t")): + exec line + continue + line = line.rstrip() + dir, dircase = makepath(sitedir, line) + if not dircase in known_paths and os.path.exists(dir): + sys.path.append(dir) + known_paths.add(dircase) + except Exception as err: + print >>sys.stderr, "Error processing line {:d} of {}:\n".format( + n+1, fullname) + for record in traceback.format_exception(*sys.exc_info()): + for line in record.splitlines(): + print >>sys.stderr, ' '+line + print >>sys.stderr, "\nRemainder of file ignored" + break if reset: known_paths = None return known_paths diff --git a/lib-python/2.7/smtplib.py b/lib-python/2.7/smtplib.py --- a/lib-python/2.7/smtplib.py +++ b/lib-python/2.7/smtplib.py @@ -49,17 +49,18 @@ from email.base64mime import encode as encode_base64 from sys import stderr -__all__ = ["SMTPException","SMTPServerDisconnected","SMTPResponseException", - "SMTPSenderRefused","SMTPRecipientsRefused","SMTPDataError", - "SMTPConnectError","SMTPHeloError","SMTPAuthenticationError", - "quoteaddr","quotedata","SMTP"] +__all__ = ["SMTPException", "SMTPServerDisconnected", "SMTPResponseException", + "SMTPSenderRefused", "SMTPRecipientsRefused", "SMTPDataError", + "SMTPConnectError", "SMTPHeloError", "SMTPAuthenticationError", + "quoteaddr", "quotedata", "SMTP"] SMTP_PORT = 25 SMTP_SSL_PORT = 465 -CRLF="\r\n" +CRLF = "\r\n" OLDSTYLE_AUTH = re.compile(r"auth=(.*)", re.I) + # Exception classes used by this module. class SMTPException(Exception): """Base class for all exceptions raised by this module.""" @@ -109,7 +110,7 @@ def __init__(self, recipients): self.recipients = recipients - self.args = ( recipients,) + self.args = (recipients,) class SMTPDataError(SMTPResponseException): @@ -128,6 +129,7 @@ combination provided. """ + def quoteaddr(addr): """Quote a subset of the email addresses defined by RFC 821. @@ -138,7 +140,7 @@ m = email.utils.parseaddr(addr)[1] except AttributeError: pass - if m == (None, None): # Indicates parse failure or AttributeError + if m == (None, None): # Indicates parse failure or AttributeError # something weird here.. 
punt -ddm return "<%s>" % addr elif m is None: @@ -175,7 +177,8 @@ chr = None while chr != "\n": chr = self.sslobj.read(1) - if not chr: break + if not chr: + break str += chr return str @@ -219,6 +222,7 @@ ehlo_msg = "ehlo" ehlo_resp = None does_esmtp = 0 + default_port = SMTP_PORT def __init__(self, host='', port=0, local_hostname=None, timeout=socket._GLOBAL_DEFAULT_TIMEOUT): @@ -234,7 +238,6 @@ """ self.timeout = timeout self.esmtp_features = {} - self.default_port = SMTP_PORT if host: (code, msg) = self.connect(host, port) if code != 220: @@ -269,10 +272,11 @@ def _get_socket(self, port, host, timeout): # This makes it simpler for SMTP_SSL to use the SMTP connect code # and just alter the socket connection bit. - if self.debuglevel > 0: print>>stderr, 'connect:', (host, port) + if self.debuglevel > 0: + print>>stderr, 'connect:', (host, port) return socket.create_connection((port, host), timeout) - def connect(self, host='localhost', port = 0): + def connect(self, host='localhost', port=0): """Connect to a host on a given port. If the hostname ends with a colon (`:') followed by a number, and @@ -286,20 +290,25 @@ if not port and (host.find(':') == host.rfind(':')): i = host.rfind(':') if i >= 0: - host, port = host[:i], host[i+1:] - try: port = int(port) + host, port = host[:i], host[i + 1:] + try: + port = int(port) except ValueError: raise socket.error, "nonnumeric port" - if not port: port = self.default_port - if self.debuglevel > 0: print>>stderr, 'connect:', (host, port) + if not port: + port = self.default_port + if self.debuglevel > 0: + print>>stderr, 'connect:', (host, port) self.sock = self._get_socket(host, port, self.timeout) (code, msg) = self.getreply() - if self.debuglevel > 0: print>>stderr, "connect:", msg + if self.debuglevel > 0: + print>>stderr, "connect:", msg return (code, msg) def send(self, str): """Send `str' to the server.""" - if self.debuglevel > 0: print>>stderr, 'send:', repr(str) + if self.debuglevel > 0: + print>>stderr, 'send:', repr(str) if hasattr(self, 'sock') and self.sock: try: self.sock.sendall(str) @@ -330,7 +339,7 @@ Raises SMTPServerDisconnected if end-of-file is reached. """ - resp=[] + resp = [] if self.file is None: self.file = self.sock.makefile('rb') while 1: @@ -341,9 +350,10 @@ if line == '': self.close() raise SMTPServerDisconnected("Connection unexpectedly closed") - if self.debuglevel > 0: print>>stderr, 'reply:', repr(line) + if self.debuglevel > 0: + print>>stderr, 'reply:', repr(line) resp.append(line[4:].strip()) - code=line[:3] + code = line[:3] # Check that the error code is syntactically correct. # Don't attempt to read a continuation line if it is broken. try: @@ -352,17 +362,17 @@ errcode = -1 break # Check if multiline response. - if line[3:4]!="-": + if line[3:4] != "-": break errmsg = "\n".join(resp) if self.debuglevel > 0: - print>>stderr, 'reply: retcode (%s); Msg: %s' % (errcode,errmsg) + print>>stderr, 'reply: retcode (%s); Msg: %s' % (errcode, errmsg) return errcode, errmsg def docmd(self, cmd, args=""): """Send a command, and return its response code.""" - self.putcmd(cmd,args) + self.putcmd(cmd, args) return self.getreply() # std smtp commands @@ -372,9 +382,9 @@ host. """ self.putcmd("helo", name or self.local_hostname) - (code,msg)=self.getreply() - self.helo_resp=msg - return (code,msg) + (code, msg) = self.getreply() + self.helo_resp = msg + return (code, msg) def ehlo(self, name=''): """ SMTP 'ehlo' command. 
@@ -383,19 +393,19 @@ """ self.esmtp_features = {} self.putcmd(self.ehlo_msg, name or self.local_hostname) - (code,msg)=self.getreply() + (code, msg) = self.getreply() # According to RFC1869 some (badly written) # MTA's will disconnect on an ehlo. Toss an exception if # that happens -ddm if code == -1 and len(msg) == 0: self.close() raise SMTPServerDisconnected("Server not connected") - self.ehlo_resp=msg + self.ehlo_resp = msg if code != 250: - return (code,msg) - self.does_esmtp=1 + return (code, msg) + self.does_esmtp = 1 #parse the ehlo response -ddm - resp=self.ehlo_resp.split('\n') + resp = self.ehlo_resp.split('\n') del resp[0] for each in resp: # To be able to communicate with as many SMTP servers as possible, @@ -415,16 +425,16 @@ # It's actually stricter, in that only spaces are allowed between # parameters, but were not going to check for that here. Note # that the space isn't present if there are no parameters. - m=re.match(r'(?P[A-Za-z0-9][A-Za-z0-9\-]*) ?',each) + m = re.match(r'(?P[A-Za-z0-9][A-Za-z0-9\-]*) ?', each) if m: - feature=m.group("feature").lower() - params=m.string[m.end("feature"):].strip() + feature = m.group("feature").lower() + params = m.string[m.end("feature"):].strip() if feature == "auth": self.esmtp_features[feature] = self.esmtp_features.get(feature, "") \ + " " + params else: - self.esmtp_features[feature]=params - return (code,msg) + self.esmtp_features[feature] = params + return (code, msg) def has_extn(self, opt): """Does the server support a given SMTP service extension?""" @@ -444,23 +454,23 @@ """SMTP 'noop' command -- doesn't do anything :>""" return self.docmd("noop") - def mail(self,sender,options=[]): + def mail(self, sender, options=[]): """SMTP 'mail' command -- begins mail xfer session.""" optionlist = '' if options and self.does_esmtp: optionlist = ' ' + ' '.join(options) - self.putcmd("mail", "FROM:%s%s" % (quoteaddr(sender) ,optionlist)) + self.putcmd("mail", "FROM:%s%s" % (quoteaddr(sender), optionlist)) return self.getreply() - def rcpt(self,recip,options=[]): + def rcpt(self, recip, options=[]): """SMTP 'rcpt' command -- indicates 1 recipient for this mail.""" optionlist = '' if options and self.does_esmtp: optionlist = ' ' + ' '.join(options) - self.putcmd("rcpt","TO:%s%s" % (quoteaddr(recip),optionlist)) + self.putcmd("rcpt", "TO:%s%s" % (quoteaddr(recip), optionlist)) return self.getreply() - def data(self,msg): + def data(self, msg): """SMTP 'DATA' command -- sends message data to server. Automatically quotes lines beginning with a period per rfc821. @@ -469,26 +479,28 @@ response code received when the all data is sent. """ self.putcmd("data") - (code,repl)=self.getreply() - if self.debuglevel >0 : print>>stderr, "data:", (code,repl) + (code, repl) = self.getreply() + if self.debuglevel > 0: + print>>stderr, "data:", (code, repl) if code != 354: - raise SMTPDataError(code,repl) + raise SMTPDataError(code, repl) else: q = quotedata(msg) if q[-2:] != CRLF: q = q + CRLF q = q + "." + CRLF self.send(q) - (code,msg)=self.getreply() - if self.debuglevel >0 : print>>stderr, "data:", (code,msg) - return (code,msg) + (code, msg) = self.getreply() + if self.debuglevel > 0: + print>>stderr, "data:", (code, msg) + return (code, msg) def verify(self, address): """SMTP 'verify' command -- checks for address validity.""" self.putcmd("vrfy", quoteaddr(address)) return self.getreply() # a.k.a. 
- vrfy=verify + vrfy = verify def expn(self, address): """SMTP 'expn' command -- expands a mailing list.""" @@ -592,7 +604,7 @@ raise SMTPAuthenticationError(code, resp) return (code, resp) - def starttls(self, keyfile = None, certfile = None): + def starttls(self, keyfile=None, certfile=None): """Puts the connection to the SMTP server into TLS mode. If there has been no previous EHLO or HELO command this session, this @@ -695,22 +707,22 @@ for option in mail_options: esmtp_opts.append(option) - (code,resp) = self.mail(from_addr, esmtp_opts) + (code, resp) = self.mail(from_addr, esmtp_opts) if code != 250: self.rset() raise SMTPSenderRefused(code, resp, from_addr) - senderrs={} + senderrs = {} if isinstance(to_addrs, basestring): to_addrs = [to_addrs] for each in to_addrs: - (code,resp)=self.rcpt(each, rcpt_options) + (code, resp) = self.rcpt(each, rcpt_options) if (code != 250) and (code != 251): - senderrs[each]=(code,resp) - if len(senderrs)==len(to_addrs): + senderrs[each] = (code, resp) + if len(senderrs) == len(to_addrs): # the server refused all our recipients self.rset() raise SMTPRecipientsRefused(senderrs) - (code,resp) = self.data(msg) + (code, resp) = self.data(msg) if code != 250: self.rset() raise SMTPDataError(code, resp) @@ -744,16 +756,19 @@ are also optional - they can contain a PEM formatted private key and certificate chain file for the SSL connection. """ + + default_port = SMTP_SSL_PORT + def __init__(self, host='', port=0, local_hostname=None, keyfile=None, certfile=None, timeout=socket._GLOBAL_DEFAULT_TIMEOUT): self.keyfile = keyfile self.certfile = certfile SMTP.__init__(self, host, port, local_hostname, timeout) - self.default_port = SMTP_SSL_PORT def _get_socket(self, host, port, timeout): - if self.debuglevel > 0: print>>stderr, 'connect:', (host, port) + if self.debuglevel > 0: + print>>stderr, 'connect:', (host, port) new_socket = socket.create_connection((host, port), timeout) new_socket = ssl.wrap_socket(new_socket, self.keyfile, self.certfile) self.file = SSLFakeFile(new_socket) @@ -781,11 +796,11 @@ ehlo_msg = "lhlo" - def __init__(self, host = '', port = LMTP_PORT, local_hostname = None): + def __init__(self, host='', port=LMTP_PORT, local_hostname=None): """Initialize a new instance.""" SMTP.__init__(self, host, port, local_hostname) - def connect(self, host = 'localhost', port = 0): + def connect(self, host='localhost', port=0): """Connect to the LMTP daemon, on either a Unix or a TCP socket.""" if host[0] != '/': return SMTP.connect(self, host, port) @@ -795,13 +810,15 @@ self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) self.sock.connect(host) except socket.error, msg: - if self.debuglevel > 0: print>>stderr, 'connect fail:', host + if self.debuglevel > 0: + print>>stderr, 'connect fail:', host if self.sock: self.sock.close() self.sock = None raise socket.error, msg (code, msg) = self.getreply() - if self.debuglevel > 0: print>>stderr, "connect:", msg + if self.debuglevel > 0: + print>>stderr, "connect:", msg return (code, msg) @@ -815,7 +832,7 @@ return sys.stdin.readline().strip() fromaddr = prompt("From") - toaddrs = prompt("To").split(',') + toaddrs = prompt("To").split(',') print "Enter message, end with ^D:" msg = '' while 1: diff --git a/lib-python/2.7/ssl.py b/lib-python/2.7/ssl.py --- a/lib-python/2.7/ssl.py +++ b/lib-python/2.7/ssl.py @@ -121,9 +121,11 @@ if e.errno != errno.ENOTCONN: raise # no, no connection yet + self._connected = False self._sslobj = None else: # yes, create the SSL object + self._connected = True 
self._sslobj = _ssl.sslwrap(self._sock, server_side, keyfile, certfile, cert_reqs, ssl_version, ca_certs, @@ -293,21 +295,36 @@ self._sslobj.do_handshake() - def connect(self, addr): - - """Connects to remote ADDR, and then wraps the connection in - an SSL channel.""" - + def _real_connect(self, addr, return_errno): # Here we assume that the socket is client-side, and not # connected at the time of the call. We connect it, then wrap it. - if self._sslobj: + if self._connected: raise ValueError("attempt to connect already-connected SSLSocket!") - socket.connect(self, addr) self._sslobj = _ssl.sslwrap(self._sock, False, self.keyfile, self.certfile, self.cert_reqs, self.ssl_version, self.ca_certs, self.ciphers) - if self.do_handshake_on_connect: - self.do_handshake() + try: + socket.connect(self, addr) + if self.do_handshake_on_connect: + self.do_handshake() + except socket_error as e: + if return_errno: + return e.errno + else: + self._sslobj = None + raise e + self._connected = True + return 0 + + def connect(self, addr): + """Connects to remote ADDR, and then wraps the connection in + an SSL channel.""" + self._real_connect(addr, False) + + def connect_ex(self, addr): + """Connects to remote ADDR, and then wraps the connection in + an SSL channel.""" + return self._real_connect(addr, True) def accept(self): diff --git a/lib-python/2.7/subprocess.py b/lib-python/2.7/subprocess.py --- a/lib-python/2.7/subprocess.py +++ b/lib-python/2.7/subprocess.py @@ -396,6 +396,7 @@ import traceback import gc import signal +import errno # Exception classes used by this module. class CalledProcessError(Exception): @@ -427,7 +428,6 @@ else: import select _has_poll = hasattr(select, 'poll') - import errno import fcntl import pickle @@ -441,8 +441,15 @@ "check_output", "CalledProcessError"] if mswindows: - from _subprocess import CREATE_NEW_CONSOLE, CREATE_NEW_PROCESS_GROUP - __all__.extend(["CREATE_NEW_CONSOLE", "CREATE_NEW_PROCESS_GROUP"]) + from _subprocess import (CREATE_NEW_CONSOLE, CREATE_NEW_PROCESS_GROUP, + STD_INPUT_HANDLE, STD_OUTPUT_HANDLE, + STD_ERROR_HANDLE, SW_HIDE, + STARTF_USESTDHANDLES, STARTF_USESHOWWINDOW) + + __all__.extend(["CREATE_NEW_CONSOLE", "CREATE_NEW_PROCESS_GROUP", + "STD_INPUT_HANDLE", "STD_OUTPUT_HANDLE", + "STD_ERROR_HANDLE", "SW_HIDE", + "STARTF_USESTDHANDLES", "STARTF_USESHOWWINDOW"]) try: MAXFD = os.sysconf("SC_OPEN_MAX") except: @@ -726,7 +733,11 @@ stderr = None if self.stdin: if input: - self.stdin.write(input) + try: + self.stdin.write(input) + except IOError as e: + if e.errno != errno.EPIPE and e.errno != errno.EINVAL: + raise self.stdin.close() elif self.stdout: stdout = self.stdout.read() @@ -883,7 +894,7 @@ except pywintypes.error, e: # Translate pywintypes.error to WindowsError, which is # a subclass of OSError. FIXME: We should really - # translate errno using _sys_errlist (or simliar), but + # translate errno using _sys_errlist (or similar), but # how can this be done from Python? 
raise WindowsError(*e.args) finally: @@ -956,7 +967,11 @@ if self.stdin: if input is not None: - self.stdin.write(input) + try: + self.stdin.write(input) + except IOError as e: + if e.errno != errno.EPIPE: + raise self.stdin.close() if self.stdout: @@ -1051,14 +1066,17 @@ errread, errwrite) - def _set_cloexec_flag(self, fd): + def _set_cloexec_flag(self, fd, cloexec=True): try: cloexec_flag = fcntl.FD_CLOEXEC except AttributeError: cloexec_flag = 1 old = fcntl.fcntl(fd, fcntl.F_GETFD) - fcntl.fcntl(fd, fcntl.F_SETFD, old | cloexec_flag) + if cloexec: + fcntl.fcntl(fd, fcntl.F_SETFD, old | cloexec_flag) + else: + fcntl.fcntl(fd, fcntl.F_SETFD, old & ~cloexec_flag) def _close_fds(self, but): @@ -1128,21 +1146,25 @@ os.close(errpipe_read) # Dup fds for child - if p2cread is not None: - os.dup2(p2cread, 0) - if c2pwrite is not None: - os.dup2(c2pwrite, 1) - if errwrite is not None: - os.dup2(errwrite, 2) + def _dup2(a, b): + # dup2() removes the CLOEXEC flag but + # we must do it ourselves if dup2() + # would be a no-op (issue #10806). + if a == b: + self._set_cloexec_flag(a, False) + elif a is not None: + os.dup2(a, b) + _dup2(p2cread, 0) + _dup2(c2pwrite, 1) + _dup2(errwrite, 2) - # Close pipe fds. Make sure we don't close the same - # fd more than once, or standard fds. - if p2cread is not None and p2cread not in (0,): - os.close(p2cread) - if c2pwrite is not None and c2pwrite not in (p2cread, 1): - os.close(c2pwrite) - if errwrite is not None and errwrite not in (p2cread, c2pwrite, 2): - os.close(errwrite) + # Close pipe fds. Make sure we don't close the + # same fd more than once, or standard fds. + closed = { None } + for fd in [p2cread, c2pwrite, errwrite]: + if fd not in closed and fd > 2: + os.close(fd) + closed.add(fd) # Close all other fds, if asked for if close_fds: @@ -1194,7 +1216,11 @@ os.close(errpipe_read) if data != "": - _eintr_retry_call(os.waitpid, self.pid, 0) + try: + _eintr_retry_call(os.waitpid, self.pid, 0) + except OSError as e: + if e.errno != errno.ECHILD: + raise child_exception = pickle.loads(data) for fd in (p2cwrite, c2pread, errread): if fd is not None: @@ -1240,7 +1266,15 @@ """Wait for child process to terminate. Returns returncode attribute.""" if self.returncode is None: - pid, sts = _eintr_retry_call(os.waitpid, self.pid, 0) + try: + pid, sts = _eintr_retry_call(os.waitpid, self.pid, 0) + except OSError as e: + if e.errno != errno.ECHILD: + raise + # This happens if SIGCLD is set to be ignored or waiting + # for child processes has otherwise been disabled for our + # process. This child is dead, we can't get the status. 
+ sts = 0 self._handle_exitstatus(sts) return self.returncode @@ -1317,9 +1351,16 @@ for fd, mode in ready: if mode & select.POLLOUT: chunk = input[input_offset : input_offset + _PIPE_BUF] - input_offset += os.write(fd, chunk) - if input_offset >= len(input): - close_unregister_and_remove(fd) + try: + input_offset += os.write(fd, chunk) + except OSError as e: + if e.errno == errno.EPIPE: + close_unregister_and_remove(fd) + else: + raise + else: + if input_offset >= len(input): + close_unregister_and_remove(fd) elif mode & select_POLLIN_POLLPRI: data = os.read(fd, 4096) if not data: @@ -1358,11 +1399,19 @@ if self.stdin in wlist: chunk = input[input_offset : input_offset + _PIPE_BUF] - bytes_written = os.write(self.stdin.fileno(), chunk) - input_offset += bytes_written - if input_offset >= len(input): - self.stdin.close() - write_set.remove(self.stdin) + try: + bytes_written = os.write(self.stdin.fileno(), chunk) + except OSError as e: + if e.errno == errno.EPIPE: + self.stdin.close() + write_set.remove(self.stdin) + else: + raise + else: + input_offset += bytes_written + if input_offset >= len(input): + self.stdin.close() + write_set.remove(self.stdin) if self.stdout in rlist: data = os.read(self.stdout.fileno(), 1024) diff --git a/lib-python/2.7/symbol.py b/lib-python/2.7/symbol.py --- a/lib-python/2.7/symbol.py +++ b/lib-python/2.7/symbol.py @@ -82,20 +82,19 @@ sliceop = 325 exprlist = 326 testlist = 327 -dictmaker = 328 -dictorsetmaker = 329 -classdef = 330 -arglist = 331 -argument = 332 -list_iter = 333 -list_for = 334 -list_if = 335 -comp_iter = 336 -comp_for = 337 -comp_if = 338 -testlist1 = 339 -encoding_decl = 340 -yield_expr = 341 +dictorsetmaker = 328 +classdef = 329 +arglist = 330 +argument = 331 +list_iter = 332 +list_for = 333 +list_if = 334 +comp_iter = 335 +comp_for = 336 +comp_if = 337 +testlist1 = 338 +encoding_decl = 339 +yield_expr = 340 #--end constants-- sym_name = {} diff --git a/lib-python/2.7/sysconfig.py b/lib-python/2.7/sysconfig.py --- a/lib-python/2.7/sysconfig.py +++ b/lib-python/2.7/sysconfig.py @@ -271,7 +271,7 @@ def _get_makefile_filename(): if _PYTHON_BUILD: return os.path.join(_PROJECT_BASE, "Makefile") - return os.path.join(get_path('stdlib'), "config", "Makefile") + return os.path.join(get_path('platstdlib'), "config", "Makefile") def _init_posix(vars): @@ -297,21 +297,6 @@ msg = msg + " (%s)" % e.strerror raise IOError(msg) - # On MacOSX we need to check the setting of the environment variable - # MACOSX_DEPLOYMENT_TARGET: configure bases some choices on it so - # it needs to be compatible. - # If it isn't set we set it to the configure-time value - if sys.platform == 'darwin' and 'MACOSX_DEPLOYMENT_TARGET' in vars: - cfg_target = vars['MACOSX_DEPLOYMENT_TARGET'] - cur_target = os.getenv('MACOSX_DEPLOYMENT_TARGET', '') - if cur_target == '': - cur_target = cfg_target - os.putenv('MACOSX_DEPLOYMENT_TARGET', cfg_target) - elif map(int, cfg_target.split('.')) > map(int, cur_target.split('.')): - msg = ('$MACOSX_DEPLOYMENT_TARGET mismatch: now "%s" but "%s" ' - 'during configure' % (cur_target, cfg_target)) - raise IOError(msg) - # On AIX, there are wrong paths to the linker scripts in the Makefile # -- these paths are relative to the Python source, but when installed # the scripts are in another directory. @@ -616,9 +601,7 @@ # machine is going to compile and link as if it were # MACOSX_DEPLOYMENT_TARGET. 
cfgvars = get_config_vars() - macver = os.environ.get('MACOSX_DEPLOYMENT_TARGET') - if not macver: - macver = cfgvars.get('MACOSX_DEPLOYMENT_TARGET') + macver = cfgvars.get('MACOSX_DEPLOYMENT_TARGET') if 1: # Always calculate the release of the running machine, @@ -639,7 +622,6 @@ m = re.search( r'ProductUserVisibleVersion\s*' + r'(.*?)', f.read()) - f.close() if m is not None: macrelease = '.'.join(m.group(1).split('.')[:2]) # else: fall back to the default behaviour diff --git a/lib-python/2.7/tarfile.py b/lib-python/2.7/tarfile.py --- a/lib-python/2.7/tarfile.py +++ b/lib-python/2.7/tarfile.py @@ -2239,10 +2239,14 @@ if hasattr(os, "symlink") and hasattr(os, "link"): # For systems that support symbolic and hard links. if tarinfo.issym(): + if os.path.lexists(targetpath): + os.unlink(targetpath) os.symlink(tarinfo.linkname, targetpath) else: # See extract(). if os.path.exists(tarinfo._link_target): + if os.path.lexists(targetpath): + os.unlink(targetpath) os.link(tarinfo._link_target, targetpath) else: self._extract_member(self._find_link_target(tarinfo), targetpath) diff --git a/lib-python/2.7/telnetlib.py b/lib-python/2.7/telnetlib.py --- a/lib-python/2.7/telnetlib.py +++ b/lib-python/2.7/telnetlib.py @@ -236,7 +236,7 @@ """ if self.debuglevel > 0: - print 'Telnet(%s,%d):' % (self.host, self.port), + print 'Telnet(%s,%s):' % (self.host, self.port), if args: print msg % args else: diff --git a/lib-python/2.7/test/cjkencodings/big5-utf8.txt b/lib-python/2.7/test/cjkencodings/big5-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/big5-utf8.txt @@ -0,0 +1,9 @@ +如何在 Python 中使用既有的 C library? + 在資訊科技快速發展的今天, 開發及測試軟體的速度是不容忽視的 +課題. 為加快開發及測試的速度, 我們便常希望能利用一些已開發好的 +library, 並有一個 fast prototyping 的 programming language 可 +供使用. 目前有許許多多的 library 是以 C 寫成, 而 Python 是一個 +fast prototyping 的 programming language. 故我們希望能將既有的 +C library 拿到 Python 的環境中測試及整合. 其中最主要也是我們所 +要討論的問題就是: + diff --git a/lib-python/2.7/test/cjkencodings/big5.txt b/lib-python/2.7/test/cjkencodings/big5.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/big5.txt @@ -0,0 +1,9 @@ +�p��b Python ���ϥάJ���� C library? +�@�b��T��ާֳt�o�i������, �}�o�δ��ճn�骺�t�׬O���e������ +���D. ���[�ֶ}�o�δ��ժ��t��, �ڭ̫K�`�Ʊ��Q�Τ@�Ǥw�}�o�n�� +library, �æ��@�� fast prototyping �� programming language �i +�Ѩϥ�. �ثe���\�\�h�h�� library �O�H C �g��, �� Python �O�@�� +fast prototyping �� programming language. �G�ڭ̧Ʊ��N�J���� +C library ���� Python �����Ҥ����դξ�X. �䤤�̥D�n�]�O�ڭ̩� +�n�Q�ת����D�N�O: + diff --git a/lib-python/2.7/test/cjkencodings/big5hkscs-utf8.txt b/lib-python/2.7/test/cjkencodings/big5hkscs-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/big5hkscs-utf8.txt @@ -0,0 +1,2 @@ +𠄌Ě鵮罓洆 +ÊÊ̄ê êê̄ diff --git a/lib-python/2.7/test/cjkencodings/big5hkscs.txt b/lib-python/2.7/test/cjkencodings/big5hkscs.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/big5hkscs.txt @@ -0,0 +1,2 @@ +�E�\�s�ڍ� +�f�b�� ���� diff --git a/lib-python/2.7/test/cjkencodings/cp949-utf8.txt b/lib-python/2.7/test/cjkencodings/cp949-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/cp949-utf8.txt @@ -0,0 +1,9 @@ +똠방각하 펲시콜라 + +㉯㉯납!! 因九月패믤릔궈 ⓡⓖ훀¿¿¿ 긍뒙 ⓔ뎨 ㉯. . +亞영ⓔ능횹 . . . . 서울뤄 뎐학乙 家훀 ! ! !ㅠ.ㅠ +흐흐흐 ㄱㄱㄱ☆ㅠ_ㅠ 어릨 탸콰긐 뎌응 칑九들乙 ㉯드긐 +설릌 家훀 . . . . 굴애쉌 ⓔ궈 ⓡ릘㉱긐 因仁川女中까즼 +와쒀훀 ! ! 亞영ⓔ 家능궈 ☆上관 없능궈능 亞능뒈훀 글애듴 +ⓡ려듀九 싀풔숴훀 어릨 因仁川女中싁⑨들앜!! 
㉯㉯납♡ ⌒⌒* + diff --git a/lib-python/2.7/test/cjkencodings/cp949.txt b/lib-python/2.7/test/cjkencodings/cp949.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/cp949.txt @@ -0,0 +1,9 @@ +�c�氢�� �����ݶ� + +������!! �������В�p�� �ި��R������ ���� �ѵ� ��. . +䬿��Ѵ��� . . . . ����� ������ ʫ�R ! ! !��.�� +������ �������٤�_�� � ����O ���� �h������ ����O +���j ʫ�R . . . . ���֚f �ѱ� �ސt�ƒO ���������� +�;��R ! ! 䬿��� ʫ�ɱ� ��߾�� ���ɱŴ� 䬴ɵ��R �۾֊� +�޷����� ��Ǵ���R � ����������Ĩ���!! �������� �ҡ�* + diff --git a/lib-python/2.7/test/cjkencodings/euc_jisx0213-utf8.txt b/lib-python/2.7/test/cjkencodings/euc_jisx0213-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/euc_jisx0213-utf8.txt @@ -0,0 +1,8 @@ +Python の開発は、1990 年ごろから開始されています。 +開発者の Guido van Rossum は教育用のプログラミング言語「ABC」の開発に参加していましたが、ABC は実用上の目的にはあまり適していませんでした。 +このため、Guido はより実用的なプログラミング言語の開発を開始し、英国 BBS 放送のコメディ番組「モンティ パイソン」のファンである Guido はこの言語を「Python」と名づけました。 +このような背景から生まれた Python の言語設計は、「シンプル」で「習得が容易」という目標に重点が置かれています。 +多くのスクリプト系言語ではユーザの目先の利便性を優先して色々な機能を言語要素として取り入れる場合が多いのですが、Python ではそういった小細工が追加されることはあまりありません。 +言語自体の機能は最小限に押さえ、必要な機能は拡張モジュールとして追加する、というのが Python のポリシーです。 + +ノか゚ ト゚ トキ喝塀 𡚴𪎌 麀齁𩛰 diff --git a/lib-python/2.7/test/cjkencodings/euc_jisx0213.txt b/lib-python/2.7/test/cjkencodings/euc_jisx0213.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/euc_jisx0213.txt @@ -0,0 +1,8 @@ +Python �γ�ȯ�ϡ�1990 ǯ�����鳫�Ϥ���Ƥ��ޤ��� +��ȯ�Ԥ� Guido van Rossum �϶����ѤΥץ���ߥ󥰸����ABC�פγ�ȯ�˻��ä��Ƥ��ޤ�������ABC �ϼ��Ѿ����Ū�ˤϤ��ޤ�Ŭ���Ƥ��ޤ���Ǥ����� +���Τ��ᡢGuido �Ϥ�����Ū�ʥץ���ߥ󥰸���γ�ȯ�򳫻Ϥ����ѹ� BBS �����Υ���ǥ����ȡ֥��ƥ� �ѥ�����פΥե���Ǥ��� Guido �Ϥ��θ�����Python�פ�̾�Ť��ޤ����� +���Τ褦���طʤ������ޤ줿 Python �θ����߷פϡ��֥���ץ�פǡֽ������ưספȤ�����ɸ�˽������֤���Ƥ��ޤ��� +¿���Υ�����ץȷϸ���Ǥϥ桼�����������������ͥ�褷�ƿ����ʵ�ǽ��������ǤȤ��Ƽ��������礬¿���ΤǤ�����Python �ǤϤ������ä����ٹ����ɲä���뤳�ȤϤ��ޤꤢ��ޤ��� +���켫�Τε�ǽ�ϺǾ��¤˲�������ɬ�פʵ�ǽ�ϳ�ĥ�⥸�塼��Ȥ����ɲä��롢�Ȥ����Τ� Python �Υݥꥷ���Ǥ��� + +�Τ� �� �ȥ����� ���� ��ԏ���� diff --git a/lib-python/2.7/test/cjkencodings/euc_jp-utf8.txt b/lib-python/2.7/test/cjkencodings/euc_jp-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/euc_jp-utf8.txt @@ -0,0 +1,7 @@ +Python の開発は、1990 年ごろから開始されています。 +開発者の Guido van Rossum は教育用のプログラミング言語「ABC」の開発に参加していましたが、ABC は実用上の目的にはあまり適していませんでした。 +このため、Guido はより実用的なプログラミング言語の開発を開始し、英国 BBS 放送のコメディ番組「モンティ パイソン」のファンである Guido はこの言語を「Python」と名づけました。 +このような背景から生まれた Python の言語設計は、「シンプル」で「習得が容易」という目標に重点が置かれています。 +多くのスクリプト系言語ではユーザの目先の利便性を優先して色々な機能を言語要素として取り入れる場合が多いのですが、Python ではそういった小細工が追加されることはあまりありません。 +言語自体の機能は最小限に押さえ、必要な機能は拡張モジュールとして追加する、というのが Python のポリシーです。 + diff --git a/lib-python/2.7/test/cjkencodings/euc_jp.txt b/lib-python/2.7/test/cjkencodings/euc_jp.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/euc_jp.txt @@ -0,0 +1,7 @@ +Python �γ�ȯ�ϡ�1990 ǯ�����鳫�Ϥ���Ƥ��ޤ��� +��ȯ�Ԥ� Guido van Rossum �϶����ѤΥץ���ߥ󥰸����ABC�פγ�ȯ�˻��ä��Ƥ��ޤ�������ABC �ϼ��Ѿ����Ū�ˤϤ��ޤ�Ŭ���Ƥ��ޤ���Ǥ����� +���Τ��ᡢGuido �Ϥ�����Ū�ʥץ���ߥ󥰸���γ�ȯ�򳫻Ϥ����ѹ� BBS �����Υ���ǥ����ȡ֥��ƥ� �ѥ�����פΥե���Ǥ��� Guido �Ϥ��θ�����Python�פ�̾�Ť��ޤ����� +���Τ褦���طʤ������ޤ줿 Python �θ����߷פϡ��֥���ץ�פǡֽ������ưספȤ�����ɸ�˽������֤���Ƥ��ޤ��� +¿���Υ�����ץȷϸ���Ǥϥ桼�����������������ͥ�褷�ƿ����ʵ�ǽ��������ǤȤ��Ƽ��������礬¿���ΤǤ�����Python �ǤϤ������ä����ٹ����ɲä���뤳�ȤϤ��ޤꤢ��ޤ��� +���켫�Τε�ǽ�ϺǾ��¤˲�������ɬ�פʵ�ǽ�ϳ�ĥ�⥸�塼��Ȥ����ɲä��롢�Ȥ����Τ� Python �Υݥꥷ���Ǥ��� + diff --git a/lib-python/2.7/test/cjkencodings/euc_kr-utf8.txt 
b/lib-python/2.7/test/cjkencodings/euc_kr-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/euc_kr-utf8.txt @@ -0,0 +1,7 @@ +◎ 파이썬(Python)은 배우기 쉽고, 강력한 프로그래밍 언어입니다. 파이썬은 +효율적인 고수준 데이터 구조와 간단하지만 효율적인 객체지향프로그래밍을 +지원합니다. 파이썬의 우아(優雅)한 문법과 동적 타이핑, 그리고 인터프리팅 +환경은 파이썬을 스크립팅과 여러 분야에서와 대부분의 플랫폼에서의 빠른 +애플리케이션 개발을 할 수 있는 이상적인 언어로 만들어줍니다. + +☆첫가끝: 날아라 쓔쓔쓩~ 닁큼! 뜽금없이 전홥니다. 뷁. 그런거 읎다. diff --git a/lib-python/2.7/test/cjkencodings/euc_kr.txt b/lib-python/2.7/test/cjkencodings/euc_kr.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/euc_kr.txt @@ -0,0 +1,7 @@ +�� ���̽�(Python)�� ���� ����, ������ ���α׷��� ����Դϴ�. ���̽��� +ȿ������ ����� ������ ������ ���������� ȿ������ ��ü�������α׷����� +�����մϴ�. ���̽��� ���(���)�� ������ ���� Ÿ����, �׸��� ���������� +ȯ���� ���̽��� ��ũ���ð� ���� �о߿����� ��κ��� �÷��������� ���� +���ø����̼� ������ �� �� �ִ� �̻����� ���� ������ݴϴ�. + +��ù����: ���ƶ� �Ԥ��ФԤԤ��ФԾ�~ �Ԥ��Ҥ�ŭ! �Ԥ��Ѥ��ݾ��� ���Ԥ��Ȥ��ϴ�. �Ԥ��Τ�. �׷��� �Ԥ��Ѥ���. diff --git a/lib-python/2.7/test/cjkencodings/gb18030-utf8.txt b/lib-python/2.7/test/cjkencodings/gb18030-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/gb18030-utf8.txt @@ -0,0 +1,15 @@ +Python(派森)语言是一种功能强大而完善的通用型计算机程序设计语言, +已经具有十多年的发展历史,成熟且稳定。这种语言具有非常简捷而清晰 +的语法特点,适合完成各种高层任务,几乎可以在所有的操作系统中 +运行。这种语言简单而强大,适合各种人士学习使用。目前,基于这 +种语言的相关技术正在飞速的发展,用户数量急剧扩大,相关的资源非常多。 +如何在 Python 中使用既有的 C library? + 在資訊科技快速發展的今天, 開發及測試軟體的速度是不容忽視的 +課題. 為加快開發及測試的速度, 我們便常希望能利用一些已開發好的 +library, 並有一個 fast prototyping 的 programming language 可 +供使用. 目前有許許多多的 library 是以 C 寫成, 而 Python 是一個 +fast prototyping 的 programming language. 故我們希望能將既有的 +C library 拿到 Python 的環境中測試及整合. 其中最主要也是我們所 +要討論的問題就是: +파이썬은 강력한 기능을 지닌 범용 컴퓨터 프로그래밍 언어다. + diff --git a/lib-python/2.7/test/cjkencodings/gb18030.txt b/lib-python/2.7/test/cjkencodings/gb18030.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/gb18030.txt @@ -0,0 +1,15 @@ +Python����ɭ��������һ�ֹ���ǿ������Ƶ�ͨ���ͼ��������������ԣ� +�Ѿ�����ʮ����ķ�չ��ʷ���������ȶ����������Ծ��зdz���ݶ����� +���﷨�ص㣬�ʺ���ɸ��ָ߲����񣬼������������еIJ���ϵͳ�� +���С��������Լ򵥶�ǿ���ʺϸ�����ʿѧϰʹ�á�Ŀǰ�������� +�����Ե���ؼ������ڷ��ٵķ�չ���û���������������ص���Դ�dz��ࡣ +����� Python ��ʹ�ü��е� C library? +�����YӍ�Ƽ����ٰlչ�Ľ���, �_�l���yԇܛ�w���ٶ��Dz��ݺ�ҕ�� +�n�}. ��ӿ��_�l���yԇ���ٶ�, �҂��㳣ϣ��������һЩ���_�l�õ� +library, �K��һ�� fast prototyping �� programming language �� +��ʹ��. Ŀǰ���S�S���� library ���� C ����, �� Python ��һ�� +fast prototyping �� programming language. ���҂�ϣ���܌����е� +C library �õ� Python �ĭh���Мyԇ������. ��������ҪҲ���҂��� +ҪӑՓ�Ć��}����: +�5�1�3�3�2�1�3�1 �7�6�0�4�6�3 �8�5�8�6�3�5 �3�1�9�5 �0�9�3�0 �4�3�5�7�5�5 �5�5�0�9�8�9�9�3�0�4 �2�9�2�5�9�9. 
+ diff --git a/lib-python/2.7/test/cjkencodings/gb2312-utf8.txt b/lib-python/2.7/test/cjkencodings/gb2312-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/gb2312-utf8.txt @@ -0,0 +1,6 @@ +Python(派森)语言是一种功能强大而完善的通用型计算机程序设计语言, +已经具有十多年的发展历史,成熟且稳定。这种语言具有非常简捷而清晰 +的语法特点,适合完成各种高层任务,几乎可以在所有的操作系统中 +运行。这种语言简单而强大,适合各种人士学习使用。目前,基于这 +种语言的相关技术正在飞速的发展,用户数量急剧扩大,相关的资源非常多。 + diff --git a/lib-python/2.7/test/cjkencodings/gb2312.txt b/lib-python/2.7/test/cjkencodings/gb2312.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/gb2312.txt @@ -0,0 +1,6 @@ +Python����ɭ��������һ�ֹ���ǿ������Ƶ�ͨ���ͼ��������������ԣ� +�Ѿ�����ʮ����ķ�չ��ʷ���������ȶ����������Ծ��зdz���ݶ����� +���﷨�ص㣬�ʺ���ɸ��ָ߲����񣬼������������еIJ���ϵͳ�� +���С��������Լ򵥶�ǿ���ʺϸ�����ʿѧϰʹ�á�Ŀǰ�������� +�����Ե���ؼ������ڷ��ٵķ�չ���û���������������ص���Դ�dz��ࡣ + diff --git a/lib-python/2.7/test/cjkencodings/gbk-utf8.txt b/lib-python/2.7/test/cjkencodings/gbk-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/gbk-utf8.txt @@ -0,0 +1,14 @@ +Python(派森)语言是一种功能强大而完善的通用型计算机程序设计语言, +已经具有十多年的发展历史,成熟且稳定。这种语言具有非常简捷而清晰 +的语法特点,适合完成各种高层任务,几乎可以在所有的操作系统中 +运行。这种语言简单而强大,适合各种人士学习使用。目前,基于这 +种语言的相关技术正在飞速的发展,用户数量急剧扩大,相关的资源非常多。 +如何在 Python 中使用既有的 C library? + 在資訊科技快速發展的今天, 開發及測試軟體的速度是不容忽視的 +課題. 為加快開發及測試的速度, 我們便常希望能利用一些已開發好的 +library, 並有一個 fast prototyping 的 programming language 可 +供使用. 目前有許許多多的 library 是以 C 寫成, 而 Python 是一個 +fast prototyping 的 programming language. 故我們希望能將既有的 +C library 拿到 Python 的環境中測試及整合. 其中最主要也是我們所 +要討論的問題就是: + diff --git a/lib-python/2.7/test/cjkencodings/gbk.txt b/lib-python/2.7/test/cjkencodings/gbk.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/gbk.txt @@ -0,0 +1,14 @@ +Python����ɭ��������һ�ֹ���ǿ������Ƶ�ͨ���ͼ��������������ԣ� +�Ѿ�����ʮ����ķ�չ��ʷ���������ȶ����������Ծ��зdz���ݶ����� +���﷨�ص㣬�ʺ���ɸ��ָ߲����񣬼������������еIJ���ϵͳ�� +���С��������Լ򵥶�ǿ���ʺϸ�����ʿѧϰʹ�á�Ŀǰ�������� +�����Ե���ؼ������ڷ��ٵķ�չ���û���������������ص���Դ�dz��ࡣ +����� Python ��ʹ�ü��е� C library? +�����YӍ�Ƽ����ٰlչ�Ľ���, �_�l���yԇܛ�w���ٶ��Dz��ݺ�ҕ�� +�n�}. ��ӿ��_�l���yԇ���ٶ�, �҂��㳣ϣ��������һЩ���_�l�õ� +library, �K��һ�� fast prototyping �� programming language �� +��ʹ��. Ŀǰ���S�S���� library ���� C ����, �� Python ��һ�� +fast prototyping �� programming language. ���҂�ϣ���܌����е� +C library �õ� Python �ĭh���Мyԇ������. ��������ҪҲ���҂��� +ҪӑՓ�Ć��}����: + diff --git a/lib-python/2.7/test/cjkencodings/hz-utf8.txt b/lib-python/2.7/test/cjkencodings/hz-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/hz-utf8.txt @@ -0,0 +1,2 @@ +This sentence is in ASCII. +The next sentence is in GB.己所不欲,勿施於人。Bye. diff --git a/lib-python/2.7/test/cjkencodings/hz.txt b/lib-python/2.7/test/cjkencodings/hz.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/hz.txt @@ -0,0 +1,2 @@ +This sentence is in ASCII. +The next sentence is in GB.~{<:Ky2;S{#,NpJ)l6HK!#~}Bye. diff --git a/lib-python/2.7/test/cjkencodings/johab-utf8.txt b/lib-python/2.7/test/cjkencodings/johab-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/johab-utf8.txt @@ -0,0 +1,9 @@ +똠방각하 펲시콜라 + +㉯㉯납!! 因九月패믤릔궈 ⓡⓖ훀¿¿¿ 긍뒙 ⓔ뎨 ㉯. . +亞영ⓔ능횹 . . . . 서울뤄 뎐학乙 家훀 ! ! !ㅠ.ㅠ +흐흐흐 ㄱㄱㄱ☆ㅠ_ㅠ 어릨 탸콰긐 뎌응 칑九들乙 ㉯드긐 +설릌 家훀 . . . . 굴애쉌 ⓔ궈 ⓡ릘㉱긐 因仁川女中까즼 +와쒀훀 ! ! 亞영ⓔ 家능궈 ☆上관 없능궈능 亞능뒈훀 글애듴 +ⓡ려듀九 싀풔숴훀 어릨 因仁川女中싁⑨들앜!! 
㉯㉯납♡ ⌒⌒* + diff --git a/lib-python/2.7/test/cjkencodings/johab.txt b/lib-python/2.7/test/cjkencodings/johab.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/johab.txt @@ -0,0 +1,9 @@ +���w�b�a �\��ũ�a + +�����s!! �g��Ú������ �����zٯٯٯ �w�� �ѕ� ��. . +�<�w�ѓw�s . . . . �ᶉ�� �e�b�� �;�z ! ! !�A.�A +�a�a�a �A�A�A�i�A_�A �៚ ȡ���z �a�w ×✗i�� ���a�z +��z �;�z . . . . ������ �ъ� �ޟ��‹z �g�b�I����a�� +�����z ! ! �<�w�� �;�w�� �i꾉� ���w���w �<�w���z �i���z +�ޝa�A� ��Ρ���z �៚ �g�b�I���鯂��i�z!! �����sٽ �b�b* + diff --git a/lib-python/2.7/test/cjkencodings/shift_jis-utf8.txt b/lib-python/2.7/test/cjkencodings/shift_jis-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/shift_jis-utf8.txt @@ -0,0 +1,7 @@ +Python の開発は、1990 年ごろから開始されています。 +開発者の Guido van Rossum は教育用のプログラミング言語「ABC」の開発に参加していましたが、ABC は実用上の目的にはあまり適していませんでした。 +このため、Guido はより実用的なプログラミング言語の開発を開始し、英国 BBS 放送のコメディ番組「モンティ パイソン」のファンである Guido はこの言語を「Python」と名づけました。 +このような背景から生まれた Python の言語設計は、「シンプル」で「習得が容易」という目標に重点が置かれています。 +多くのスクリプト系言語ではユーザの目先の利便性を優先して色々な機能を言語要素として取り入れる場合が多いのですが、Python ではそういった小細工が追加されることはあまりありません。 +言語自体の機能は最小限に押さえ、必要な機能は拡張モジュールとして追加する、というのが Python のポリシーです。 + diff --git a/lib-python/2.7/test/cjkencodings/shift_jis.txt b/lib-python/2.7/test/cjkencodings/shift_jis.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/shift_jis.txt @@ -0,0 +1,7 @@ +Python �̊J���́A1990 �N���납��J�n����Ă��܂��B +�J���҂� Guido van Rossum �͋���p�̃v���O���~���O����uABC�v�̊J���ɎQ�����Ă��܂������AABC �͎��p��̖ړI�ɂ͂��܂�K���Ă��܂���ł����B +���̂��߁AGuido �͂����p�I�ȃv���O���~���O����̊J�����J�n���A�p�� BBS �����̃R���f�B�ԑg�u�����e�B �p�C�\���v�̃t�@���ł��� Guido �͂��̌�����uPython�v�Ɩ��Â��܂����B +���̂悤�Ȕw�i���琶�܂ꂽ Python �̌���݌v�́A�u�V���v���v�Łu�K�����e�Ձv�Ƃ����ڕW�ɏd�_���u����Ă��܂��B +�����̃X�N���v�g�n����ł̓��[�U�̖ڐ�̗��֐���D�悵�ĐF�X�ȋ@�\������v�f�Ƃ��Ď������ꍇ�������̂ł����APython �ł͂������������׍H���lj�����邱�Ƃ͂��܂肠��܂���B +���ꎩ�̂̋@�\�͍ŏ����ɉ������A�K�v�ȋ@�\�͊g�����W���[���Ƃ��Ēlj�����A�Ƃ����̂� Python �̃|���V�[�ł��B + diff --git a/lib-python/2.7/test/cjkencodings/shift_jisx0213-utf8.txt b/lib-python/2.7/test/cjkencodings/shift_jisx0213-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/shift_jisx0213-utf8.txt @@ -0,0 +1,8 @@ +Python の開発は、1990 年ごろから開始されています。 +開発者の Guido van Rossum は教育用のプログラミング言語「ABC」の開発に参加していましたが、ABC は実用上の目的にはあまり適していませんでした。 +このため、Guido はより実用的なプログラミング言語の開発を開始し、英国 BBS 放送のコメディ番組「モンティ パイソン」のファンである Guido はこの言語を「Python」と名づけました。 +このような背景から生まれた Python の言語設計は、「シンプル」で「習得が容易」という目標に重点が置かれています。 +多くのスクリプト系言語ではユーザの目先の利便性を優先して色々な機能を言語要素として取り入れる場合が多いのですが、Python ではそういった小細工が追加されることはあまりありません。 +言語自体の機能は最小限に押さえ、必要な機能は拡張モジュールとして追加する、というのが Python のポリシーです。 + +ノか゚ ト゚ トキ喝塀 𡚴𪎌 麀齁𩛰 diff --git a/lib-python/2.7/test/cjkencodings/shift_jisx0213.txt b/lib-python/2.7/test/cjkencodings/shift_jisx0213.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/shift_jisx0213.txt @@ -0,0 +1,8 @@ +Python �̊J���́A1990 �N���납��J�n����Ă��܂��B +�J���҂� Guido van Rossum �͋���p�̃v���O���~���O����uABC�v�̊J���ɎQ�����Ă��܂������AABC �͎��p��̖ړI�ɂ͂��܂�K���Ă��܂���ł����B +���̂��߁AGuido �͂����p�I�ȃv���O���~���O����̊J�����J�n���A�p�� BBS �����̃R���f�B�ԑg�u�����e�B �p�C�\���v�̃t�@���ł��� Guido �͂��̌�����uPython�v�Ɩ��Â��܂����B +���̂悤�Ȕw�i���琶�܂ꂽ Python �̌���݌v�́A�u�V���v���v�Łu�K�����e�Ձv�Ƃ����ڕW�ɏd�_���u����Ă��܂��B +�����̃X�N���v�g�n����ł̓��[�U�̖ڐ�̗��֐���D�悵�ĐF�X�ȋ@�\������v�f�Ƃ��Ď������ꍇ�������̂ł����APython �ł͂������������׍H���lj�����邱�Ƃ͂��܂肠��܂���B 
+���ꎩ�̂̋@�\�͍ŏ����ɉ������A�K�v�ȋ@�\�͊g�����W���[���Ƃ��Ēlj�����A�Ƃ����̂� Python �̃|���V�[�ł��B + +�m�� �� �g�L�K�y ���� ������ diff --git a/lib-python/2.7/test/cjkencodings_test.py b/lib-python/2.7/test/cjkencodings_test.py deleted file mode 100644 --- a/lib-python/2.7/test/cjkencodings_test.py +++ /dev/null @@ -1,1019 +0,0 @@ -teststring = { -'big5': ( -"\xa6\x70\xa6\xf3\xa6\x62\x20\x50\x79\x74\x68\x6f\x6e\x20\xa4\xa4" -"\xa8\xcf\xa5\xce\xac\x4a\xa6\xb3\xaa\xba\x20\x43\x20\x6c\x69\x62" -"\x72\x61\x72\x79\x3f\x0a\xa1\x40\xa6\x62\xb8\xea\xb0\x54\xac\xec" -"\xa7\xde\xa7\xd6\xb3\x74\xb5\x6f\xae\x69\xaa\xba\xa4\xb5\xa4\xd1" -"\x2c\x20\xb6\x7d\xb5\x6f\xa4\xce\xb4\xfa\xb8\xd5\xb3\x6e\xc5\xe9" -"\xaa\xba\xb3\x74\xab\xd7\xac\x4f\xa4\xa3\xae\x65\xa9\xbf\xb5\xf8" -"\xaa\xba\x0a\xbd\xd2\xc3\x44\x2e\x20\xac\xb0\xa5\x5b\xa7\xd6\xb6" -"\x7d\xb5\x6f\xa4\xce\xb4\xfa\xb8\xd5\xaa\xba\xb3\x74\xab\xd7\x2c" -"\x20\xa7\xda\xad\xcc\xab\x4b\xb1\x60\xa7\xc6\xb1\xe6\xaf\xe0\xa7" -"\x51\xa5\xce\xa4\x40\xa8\xc7\xa4\x77\xb6\x7d\xb5\x6f\xa6\x6e\xaa" -"\xba\x0a\x6c\x69\x62\x72\x61\x72\x79\x2c\x20\xa8\xc3\xa6\xb3\xa4" -"\x40\xad\xd3\x20\x66\x61\x73\x74\x20\x70\x72\x6f\x74\x6f\x74\x79" -"\x70\x69\x6e\x67\x20\xaa\xba\x20\x70\x72\x6f\x67\x72\x61\x6d\x6d" -"\x69\x6e\x67\x20\x6c\x61\x6e\x67\x75\x61\x67\x65\x20\xa5\x69\x0a" -"\xa8\xd1\xa8\xcf\xa5\xce\x2e\x20\xa5\xd8\xab\x65\xa6\xb3\xb3\x5c" -"\xb3\x5c\xa6\x68\xa6\x68\xaa\xba\x20\x6c\x69\x62\x72\x61\x72\x79" -"\x20\xac\x4f\xa5\x48\x20\x43\x20\xbc\x67\xa6\xa8\x2c\x20\xa6\xd3" -"\x20\x50\x79\x74\x68\x6f\x6e\x20\xac\x4f\xa4\x40\xad\xd3\x0a\x66" -"\x61\x73\x74\x20\x70\x72\x6f\x74\x6f\x74\x79\x70\x69\x6e\x67\x20" -"\xaa\xba\x20\x70\x72\x6f\x67\x72\x61\x6d\x6d\x69\x6e\x67\x20\x6c" -"\x61\x6e\x67\x75\x61\x67\x65\x2e\x20\xac\x47\xa7\xda\xad\xcc\xa7" -"\xc6\xb1\xe6\xaf\xe0\xb1\x4e\xac\x4a\xa6\xb3\xaa\xba\x0a\x43\x20" -"\x6c\x69\x62\x72\x61\x72\x79\x20\xae\xb3\xa8\xec\x20\x50\x79\x74" -"\x68\x6f\x6e\x20\xaa\xba\xc0\xf4\xb9\xd2\xa4\xa4\xb4\xfa\xb8\xd5" -"\xa4\xce\xbe\xe3\xa6\x58\x2e\x20\xa8\xe4\xa4\xa4\xb3\xcc\xa5\x44" -"\xad\x6e\xa4\x5d\xac\x4f\xa7\xda\xad\xcc\xa9\xd2\x0a\xad\x6e\xb0" -"\x51\xbd\xd7\xaa\xba\xb0\xdd\xc3\x44\xb4\x4e\xac\x4f\x3a\x0a\x0a", -"\xe5\xa6\x82\xe4\xbd\x95\xe5\x9c\xa8\x20\x50\x79\x74\x68\x6f\x6e" -"\x20\xe4\xb8\xad\xe4\xbd\xbf\xe7\x94\xa8\xe6\x97\xa2\xe6\x9c\x89" -"\xe7\x9a\x84\x20\x43\x20\x6c\x69\x62\x72\x61\x72\x79\x3f\x0a\xe3" -"\x80\x80\xe5\x9c\xa8\xe8\xb3\x87\xe8\xa8\x8a\xe7\xa7\x91\xe6\x8a" -"\x80\xe5\xbf\xab\xe9\x80\x9f\xe7\x99\xbc\xe5\xb1\x95\xe7\x9a\x84" -"\xe4\xbb\x8a\xe5\xa4\xa9\x2c\x20\xe9\x96\x8b\xe7\x99\xbc\xe5\x8f" -"\x8a\xe6\xb8\xac\xe8\xa9\xa6\xe8\xbb\x9f\xe9\xab\x94\xe7\x9a\x84" -"\xe9\x80\x9f\xe5\xba\xa6\xe6\x98\xaf\xe4\xb8\x8d\xe5\xae\xb9\xe5" -"\xbf\xbd\xe8\xa6\x96\xe7\x9a\x84\x0a\xe8\xaa\xb2\xe9\xa1\x8c\x2e" -"\x20\xe7\x82\xba\xe5\x8a\xa0\xe5\xbf\xab\xe9\x96\x8b\xe7\x99\xbc" -"\xe5\x8f\x8a\xe6\xb8\xac\xe8\xa9\xa6\xe7\x9a\x84\xe9\x80\x9f\xe5" -"\xba\xa6\x2c\x20\xe6\x88\x91\xe5\x80\x91\xe4\xbe\xbf\xe5\xb8\xb8" -"\xe5\xb8\x8c\xe6\x9c\x9b\xe8\x83\xbd\xe5\x88\xa9\xe7\x94\xa8\xe4" -"\xb8\x80\xe4\xba\x9b\xe5\xb7\xb2\xe9\x96\x8b\xe7\x99\xbc\xe5\xa5" -"\xbd\xe7\x9a\x84\x0a\x6c\x69\x62\x72\x61\x72\x79\x2c\x20\xe4\xb8" -"\xa6\xe6\x9c\x89\xe4\xb8\x80\xe5\x80\x8b\x20\x66\x61\x73\x74\x20" -"\x70\x72\x6f\x74\x6f\x74\x79\x70\x69\x6e\x67\x20\xe7\x9a\x84\x20" -"\x70\x72\x6f\x67\x72\x61\x6d\x6d\x69\x6e\x67\x20\x6c\x61\x6e\x67" -"\x75\x61\x67\x65\x20\xe5\x8f\xaf\x0a\xe4\xbe\x9b\xe4\xbd\xbf\xe7" -"\x94\xa8\x2e\x20\xe7\x9b\xae\xe5\x89\x8d\xe6\x9c\x89\xe8\xa8\xb1" 
-"\xe8\xa8\xb1\xe5\xa4\x9a\xe5\xa4\x9a\xe7\x9a\x84\x20\x6c\x69\x62" -"\x72\x61\x72\x79\x20\xe6\x98\xaf\xe4\xbb\xa5\x20\x43\x20\xe5\xaf" -"\xab\xe6\x88\x90\x2c\x20\xe8\x80\x8c\x20\x50\x79\x74\x68\x6f\x6e" -"\x20\xe6\x98\xaf\xe4\xb8\x80\xe5\x80\x8b\x0a\x66\x61\x73\x74\x20" -"\x70\x72\x6f\x74\x6f\x74\x79\x70\x69\x6e\x67\x20\xe7\x9a\x84\x20" -"\x70\x72\x6f\x67\x72\x61\x6d\x6d\x69\x6e\x67\x20\x6c\x61\x6e\x67" -"\x75\x61\x67\x65\x2e\x20\xe6\x95\x85\xe6\x88\x91\xe5\x80\x91\xe5" -"\xb8\x8c\xe6\x9c\x9b\xe8\x83\xbd\xe5\xb0\x87\xe6\x97\xa2\xe6\x9c" -"\x89\xe7\x9a\x84\x0a\x43\x20\x6c\x69\x62\x72\x61\x72\x79\x20\xe6" -"\x8b\xbf\xe5\x88\xb0\x20\x50\x79\x74\x68\x6f\x6e\x20\xe7\x9a\x84" -"\xe7\x92\xb0\xe5\xa2\x83\xe4\xb8\xad\xe6\xb8\xac\xe8\xa9\xa6\xe5" -"\x8f\x8a\xe6\x95\xb4\xe5\x90\x88\x2e\x20\xe5\x85\xb6\xe4\xb8\xad" -"\xe6\x9c\x80\xe4\xb8\xbb\xe8\xa6\x81\xe4\xb9\x9f\xe6\x98\xaf\xe6" -"\x88\x91\xe5\x80\x91\xe6\x89\x80\x0a\xe8\xa6\x81\xe8\xa8\x8e\xe8" -"\xab\x96\xe7\x9a\x84\xe5\x95\x8f\xe9\xa1\x8c\xe5\xb0\xb1\xe6\x98" -"\xaf\x3a\x0a\x0a"), -'big5hkscs': ( -"\x88\x45\x88\x5c\x8a\x73\x8b\xda\x8d\xd8\x0a\x88\x66\x88\x62\x88" -"\xa7\x20\x88\xa7\x88\xa3\x0a", -"\xf0\xa0\x84\x8c\xc4\x9a\xe9\xb5\xae\xe7\xbd\x93\xe6\xb4\x86\x0a" -"\xc3\x8a\xc3\x8a\xcc\x84\xc3\xaa\x20\xc3\xaa\xc3\xaa\xcc\x84\x0a"), -'cp949': ( -"\x8c\x63\xb9\xe6\xb0\xa2\xc7\xcf\x20\xbc\x84\xbd\xc3\xc4\xdd\xb6" -"\xf3\x0a\x0a\xa8\xc0\xa8\xc0\xb3\xb3\x21\x21\x20\xec\xd7\xce\xfa" -"\xea\xc5\xc6\xd0\x92\xe6\x90\x70\xb1\xc5\x20\xa8\xde\xa8\xd3\xc4" -"\x52\xa2\xaf\xa2\xaf\xa2\xaf\x20\xb1\xe0\x8a\x96\x20\xa8\xd1\xb5" -"\xb3\x20\xa8\xc0\x2e\x20\x2e\x0a\xe4\xac\xbf\xb5\xa8\xd1\xb4\xc9" -"\xc8\xc2\x20\x2e\x20\x2e\x20\x2e\x20\x2e\x20\xbc\xad\xbf\xef\xb7" -"\xef\x20\xb5\xaf\xc7\xd0\xeb\xe0\x20\xca\xab\xc4\x52\x20\x21\x20" -"\x21\x20\x21\xa4\xd0\x2e\xa4\xd0\x0a\xc8\xe5\xc8\xe5\xc8\xe5\x20" -"\xa4\xa1\xa4\xa1\xa4\xa1\xa1\xd9\xa4\xd0\x5f\xa4\xd0\x20\xbe\xee" -"\x90\x8a\x20\xc5\xcb\xc4\xe2\x83\x4f\x20\xb5\xae\xc0\xc0\x20\xaf" -"\x68\xce\xfa\xb5\xe9\xeb\xe0\x20\xa8\xc0\xb5\xe5\x83\x4f\x0a\xbc" -"\xb3\x90\x6a\x20\xca\xab\xc4\x52\x20\x2e\x20\x2e\x20\x2e\x20\x2e" -"\x20\xb1\xbc\xbe\xd6\x9a\x66\x20\xa8\xd1\xb1\xc5\x20\xa8\xde\x90" -"\x74\xa8\xc2\x83\x4f\x20\xec\xd7\xec\xd2\xf4\xb9\xe5\xfc\xf1\xe9" -"\xb1\xee\xa3\x8e\x0a\xbf\xcd\xbe\xac\xc4\x52\x20\x21\x20\x21\x20" -"\xe4\xac\xbf\xb5\xa8\xd1\x20\xca\xab\xb4\xc9\xb1\xc5\x20\xa1\xd9" -"\xdf\xbe\xb0\xfc\x20\xbe\xf8\xb4\xc9\xb1\xc5\xb4\xc9\x20\xe4\xac" -"\xb4\xc9\xb5\xd8\xc4\x52\x20\xb1\xdb\xbe\xd6\x8a\xdb\x0a\xa8\xde" -"\xb7\xc1\xb5\xe0\xce\xfa\x20\x9a\xc3\xc7\xb4\xbd\xa4\xc4\x52\x20" -"\xbe\xee\x90\x8a\x20\xec\xd7\xec\xd2\xf4\xb9\xe5\xfc\xf1\xe9\x9a" -"\xc4\xa8\xef\xb5\xe9\x9d\xda\x21\x21\x20\xa8\xc0\xa8\xc0\xb3\xb3" -"\xa2\xbd\x20\xa1\xd2\xa1\xd2\x2a\x0a\x0a", -"\xeb\x98\xa0\xeb\xb0\xa9\xea\xb0\x81\xed\x95\x98\x20\xed\x8e\xb2" -"\xec\x8b\x9c\xec\xbd\x9c\xeb\x9d\xbc\x0a\x0a\xe3\x89\xaf\xe3\x89" -"\xaf\xeb\x82\xa9\x21\x21\x20\xe5\x9b\xa0\xe4\xb9\x9d\xe6\x9c\x88" -"\xed\x8c\xa8\xeb\xaf\xa4\xeb\xa6\x94\xea\xb6\x88\x20\xe2\x93\xa1" -"\xe2\x93\x96\xed\x9b\x80\xc2\xbf\xc2\xbf\xc2\xbf\x20\xea\xb8\x8d" -"\xeb\x92\x99\x20\xe2\x93\x94\xeb\x8e\xa8\x20\xe3\x89\xaf\x2e\x20" -"\x2e\x0a\xe4\xba\x9e\xec\x98\x81\xe2\x93\x94\xeb\x8a\xa5\xed\x9a" -"\xb9\x20\x2e\x20\x2e\x20\x2e\x20\x2e\x20\xec\x84\x9c\xec\x9a\xb8" -"\xeb\xa4\x84\x20\xeb\x8e\x90\xed\x95\x99\xe4\xb9\x99\x20\xe5\xae" -"\xb6\xed\x9b\x80\x20\x21\x20\x21\x20\x21\xe3\x85\xa0\x2e\xe3\x85" -"\xa0\x0a\xed\x9d\x90\xed\x9d\x90\xed\x9d\x90\x20\xe3\x84\xb1\xe3" 
-"\x84\xb1\xe3\x84\xb1\xe2\x98\x86\xe3\x85\xa0\x5f\xe3\x85\xa0\x20" -"\xec\x96\xb4\xeb\xa6\xa8\x20\xed\x83\xb8\xec\xbd\xb0\xea\xb8\x90" -"\x20\xeb\x8e\x8c\xec\x9d\x91\x20\xec\xb9\x91\xe4\xb9\x9d\xeb\x93" -"\xa4\xe4\xb9\x99\x20\xe3\x89\xaf\xeb\x93\x9c\xea\xb8\x90\x0a\xec" -"\x84\xa4\xeb\xa6\x8c\x20\xe5\xae\xb6\xed\x9b\x80\x20\x2e\x20\x2e" -"\x20\x2e\x20\x2e\x20\xea\xb5\xb4\xec\x95\xa0\xec\x89\x8c\x20\xe2" -"\x93\x94\xea\xb6\x88\x20\xe2\x93\xa1\xeb\xa6\x98\xe3\x89\xb1\xea" -"\xb8\x90\x20\xe5\x9b\xa0\xe4\xbb\x81\xe5\xb7\x9d\xef\xa6\x81\xe4" -"\xb8\xad\xea\xb9\x8c\xec\xa6\xbc\x0a\xec\x99\x80\xec\x92\x80\xed" -"\x9b\x80\x20\x21\x20\x21\x20\xe4\xba\x9e\xec\x98\x81\xe2\x93\x94" -"\x20\xe5\xae\xb6\xeb\x8a\xa5\xea\xb6\x88\x20\xe2\x98\x86\xe4\xb8" -"\x8a\xea\xb4\x80\x20\xec\x97\x86\xeb\x8a\xa5\xea\xb6\x88\xeb\x8a" -"\xa5\x20\xe4\xba\x9e\xeb\x8a\xa5\xeb\x92\x88\xed\x9b\x80\x20\xea" -"\xb8\x80\xec\x95\xa0\xeb\x93\xb4\x0a\xe2\x93\xa1\xeb\xa0\xa4\xeb" -"\x93\x80\xe4\xb9\x9d\x20\xec\x8b\x80\xed\x92\x94\xec\x88\xb4\xed" -"\x9b\x80\x20\xec\x96\xb4\xeb\xa6\xa8\x20\xe5\x9b\xa0\xe4\xbb\x81" -"\xe5\xb7\x9d\xef\xa6\x81\xe4\xb8\xad\xec\x8b\x81\xe2\x91\xa8\xeb" -"\x93\xa4\xec\x95\x9c\x21\x21\x20\xe3\x89\xaf\xe3\x89\xaf\xeb\x82" -"\xa9\xe2\x99\xa1\x20\xe2\x8c\x92\xe2\x8c\x92\x2a\x0a\x0a"), -'euc_jisx0213': ( -"\x50\x79\x74\x68\x6f\x6e\x20\xa4\xce\xb3\xab\xc8\xaf\xa4\xcf\xa1" -"\xa2\x31\x39\x39\x30\x20\xc7\xaf\xa4\xb4\xa4\xed\xa4\xab\xa4\xe9" -"\xb3\xab\xbb\xcf\xa4\xb5\xa4\xec\xa4\xc6\xa4\xa4\xa4\xde\xa4\xb9" -"\xa1\xa3\x0a\xb3\xab\xc8\xaf\xbc\xd4\xa4\xce\x20\x47\x75\x69\x64" -"\x6f\x20\x76\x61\x6e\x20\x52\x6f\x73\x73\x75\x6d\x20\xa4\xcf\xb6" -"\xb5\xb0\xe9\xcd\xd1\xa4\xce\xa5\xd7\xa5\xed\xa5\xb0\xa5\xe9\xa5" -"\xdf\xa5\xf3\xa5\xb0\xb8\xc0\xb8\xec\xa1\xd6\x41\x42\x43\xa1\xd7" -"\xa4\xce\xb3\xab\xc8\xaf\xa4\xcb\xbb\xb2\xb2\xc3\xa4\xb7\xa4\xc6" -"\xa4\xa4\xa4\xde\xa4\xb7\xa4\xbf\xa4\xac\xa1\xa2\x41\x42\x43\x20" -"\xa4\xcf\xbc\xc2\xcd\xd1\xbe\xe5\xa4\xce\xcc\xdc\xc5\xaa\xa4\xcb" -"\xa4\xcf\xa4\xa2\xa4\xde\xa4\xea\xc5\xac\xa4\xb7\xa4\xc6\xa4\xa4" -"\xa4\xde\xa4\xbb\xa4\xf3\xa4\xc7\xa4\xb7\xa4\xbf\xa1\xa3\x0a\xa4" -"\xb3\xa4\xce\xa4\xbf\xa4\xe1\xa1\xa2\x47\x75\x69\x64\x6f\x20\xa4" -"\xcf\xa4\xe8\xa4\xea\xbc\xc2\xcd\xd1\xc5\xaa\xa4\xca\xa5\xd7\xa5" -"\xed\xa5\xb0\xa5\xe9\xa5\xdf\xa5\xf3\xa5\xb0\xb8\xc0\xb8\xec\xa4" -"\xce\xb3\xab\xc8\xaf\xa4\xf2\xb3\xab\xbb\xcf\xa4\xb7\xa1\xa2\xb1" -"\xd1\xb9\xf1\x20\x42\x42\x53\x20\xca\xfc\xc1\xf7\xa4\xce\xa5\xb3" -"\xa5\xe1\xa5\xc7\xa5\xa3\xc8\xd6\xc1\xc8\xa1\xd6\xa5\xe2\xa5\xf3" -"\xa5\xc6\xa5\xa3\x20\xa5\xd1\xa5\xa4\xa5\xbd\xa5\xf3\xa1\xd7\xa4" -"\xce\xa5\xd5\xa5\xa1\xa5\xf3\xa4\xc7\xa4\xa2\xa4\xeb\x20\x47\x75" -"\x69\x64\x6f\x20\xa4\xcf\xa4\xb3\xa4\xce\xb8\xc0\xb8\xec\xa4\xf2" -"\xa1\xd6\x50\x79\x74\x68\x6f\x6e\xa1\xd7\xa4\xc8\xcc\xbe\xa4\xc5" -"\xa4\xb1\xa4\xde\xa4\xb7\xa4\xbf\xa1\xa3\x0a\xa4\xb3\xa4\xce\xa4" -"\xe8\xa4\xa6\xa4\xca\xc7\xd8\xb7\xca\xa4\xab\xa4\xe9\xc0\xb8\xa4" -"\xde\xa4\xec\xa4\xbf\x20\x50\x79\x74\x68\x6f\x6e\x20\xa4\xce\xb8" -"\xc0\xb8\xec\xc0\xdf\xb7\xd7\xa4\xcf\xa1\xa2\xa1\xd6\xa5\xb7\xa5" -"\xf3\xa5\xd7\xa5\xeb\xa1\xd7\xa4\xc7\xa1\xd6\xbd\xac\xc6\xc0\xa4" -"\xac\xcd\xc6\xb0\xd7\xa1\xd7\xa4\xc8\xa4\xa4\xa4\xa6\xcc\xdc\xc9" -"\xb8\xa4\xcb\xbd\xc5\xc5\xc0\xa4\xac\xc3\xd6\xa4\xab\xa4\xec\xa4" -"\xc6\xa4\xa4\xa4\xde\xa4\xb9\xa1\xa3\x0a\xc2\xbf\xa4\xaf\xa4\xce" -"\xa5\xb9\xa5\xaf\xa5\xea\xa5\xd7\xa5\xc8\xb7\xcf\xb8\xc0\xb8\xec" -"\xa4\xc7\xa4\xcf\xa5\xe6\xa1\xbc\xa5\xb6\xa4\xce\xcc\xdc\xc0\xe8" -"\xa4\xce\xcd\xf8\xca\xd8\xc0\xad\xa4\xf2\xcd\xa5\xc0\xe8\xa4\xb7" 
-"\xa4\xc6\xbf\xa7\xa1\xb9\xa4\xca\xb5\xa1\xc7\xbd\xa4\xf2\xb8\xc0" -"\xb8\xec\xcd\xd7\xc1\xc7\xa4\xc8\xa4\xb7\xa4\xc6\xbc\xe8\xa4\xea" -"\xc6\xfe\xa4\xec\xa4\xeb\xbe\xec\xb9\xe7\xa4\xac\xc2\xbf\xa4\xa4" -"\xa4\xce\xa4\xc7\xa4\xb9\xa4\xac\xa1\xa2\x50\x79\x74\x68\x6f\x6e" -"\x20\xa4\xc7\xa4\xcf\xa4\xbd\xa4\xa6\xa4\xa4\xa4\xc3\xa4\xbf\xbe" -"\xae\xba\xd9\xb9\xa9\xa4\xac\xc4\xc9\xb2\xc3\xa4\xb5\xa4\xec\xa4" -"\xeb\xa4\xb3\xa4\xc8\xa4\xcf\xa4\xa2\xa4\xde\xa4\xea\xa4\xa2\xa4" -"\xea\xa4\xde\xa4\xbb\xa4\xf3\xa1\xa3\x0a\xb8\xc0\xb8\xec\xbc\xab" -"\xc2\xce\xa4\xce\xb5\xa1\xc7\xbd\xa4\xcf\xba\xc7\xbe\xae\xb8\xc2" -"\xa4\xcb\xb2\xa1\xa4\xb5\xa4\xa8\xa1\xa2\xc9\xac\xcd\xd7\xa4\xca" -"\xb5\xa1\xc7\xbd\xa4\xcf\xb3\xc8\xc4\xa5\xa5\xe2\xa5\xb8\xa5\xe5" -"\xa1\xbc\xa5\xeb\xa4\xc8\xa4\xb7\xa4\xc6\xc4\xc9\xb2\xc3\xa4\xb9" -"\xa4\xeb\xa1\xa2\xa4\xc8\xa4\xa4\xa4\xa6\xa4\xce\xa4\xac\x20\x50" -"\x79\x74\x68\x6f\x6e\x20\xa4\xce\xa5\xdd\xa5\xea\xa5\xb7\xa1\xbc" -"\xa4\xc7\xa4\xb9\xa1\xa3\x0a\x0a\xa5\xce\xa4\xf7\x20\xa5\xfe\x20" -"\xa5\xc8\xa5\xad\xaf\xac\xaf\xda\x20\xcf\xe3\x8f\xfe\xd8\x20\x8f" -"\xfe\xd4\x8f\xfe\xe8\x8f\xfc\xd6\x0a", -"\x50\x79\x74\x68\x6f\x6e\x20\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba" -"\xe3\x81\xaf\xe3\x80\x81\x31\x39\x39\x30\x20\xe5\xb9\xb4\xe3\x81" -"\x94\xe3\x82\x8d\xe3\x81\x8b\xe3\x82\x89\xe9\x96\x8b\xe5\xa7\x8b" -"\xe3\x81\x95\xe3\x82\x8c\xe3\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3" -"\x81\x99\xe3\x80\x82\x0a\xe9\x96\x8b\xe7\x99\xba\xe8\x80\x85\xe3" -"\x81\xae\x20\x47\x75\x69\x64\x6f\x20\x76\x61\x6e\x20\x52\x6f\x73" -"\x73\x75\x6d\x20\xe3\x81\xaf\xe6\x95\x99\xe8\x82\xb2\xe7\x94\xa8" -"\xe3\x81\xae\xe3\x83\x97\xe3\x83\xad\xe3\x82\xb0\xe3\x83\xa9\xe3" -"\x83\x9f\xe3\x83\xb3\xe3\x82\xb0\xe8\xa8\x80\xe8\xaa\x9e\xe3\x80" -"\x8c\x41\x42\x43\xe3\x80\x8d\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba" -"\xe3\x81\xab\xe5\x8f\x82\xe5\x8a\xa0\xe3\x81\x97\xe3\x81\xa6\xe3" -"\x81\x84\xe3\x81\xbe\xe3\x81\x97\xe3\x81\x9f\xe3\x81\x8c\xe3\x80" -"\x81\x41\x42\x43\x20\xe3\x81\xaf\xe5\xae\x9f\xe7\x94\xa8\xe4\xb8" -"\x8a\xe3\x81\xae\xe7\x9b\xae\xe7\x9a\x84\xe3\x81\xab\xe3\x81\xaf" -"\xe3\x81\x82\xe3\x81\xbe\xe3\x82\x8a\xe9\x81\xa9\xe3\x81\x97\xe3" -"\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3\x81\x9b\xe3\x82\x93\xe3\x81" -"\xa7\xe3\x81\x97\xe3\x81\x9f\xe3\x80\x82\x0a\xe3\x81\x93\xe3\x81" -"\xae\xe3\x81\x9f\xe3\x82\x81\xe3\x80\x81\x47\x75\x69\x64\x6f\x20" -"\xe3\x81\xaf\xe3\x82\x88\xe3\x82\x8a\xe5\xae\x9f\xe7\x94\xa8\xe7" -"\x9a\x84\xe3\x81\xaa\xe3\x83\x97\xe3\x83\xad\xe3\x82\xb0\xe3\x83" -"\xa9\xe3\x83\x9f\xe3\x83\xb3\xe3\x82\xb0\xe8\xa8\x80\xe8\xaa\x9e" -"\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba\xe3\x82\x92\xe9\x96\x8b\xe5" -"\xa7\x8b\xe3\x81\x97\xe3\x80\x81\xe8\x8b\xb1\xe5\x9b\xbd\x20\x42" -"\x42\x53\x20\xe6\x94\xbe\xe9\x80\x81\xe3\x81\xae\xe3\x82\xb3\xe3" -"\x83\xa1\xe3\x83\x87\xe3\x82\xa3\xe7\x95\xaa\xe7\xb5\x84\xe3\x80" -"\x8c\xe3\x83\xa2\xe3\x83\xb3\xe3\x83\x86\xe3\x82\xa3\x20\xe3\x83" -"\x91\xe3\x82\xa4\xe3\x82\xbd\xe3\x83\xb3\xe3\x80\x8d\xe3\x81\xae" -"\xe3\x83\x95\xe3\x82\xa1\xe3\x83\xb3\xe3\x81\xa7\xe3\x81\x82\xe3" -"\x82\x8b\x20\x47\x75\x69\x64\x6f\x20\xe3\x81\xaf\xe3\x81\x93\xe3" -"\x81\xae\xe8\xa8\x80\xe8\xaa\x9e\xe3\x82\x92\xe3\x80\x8c\x50\x79" -"\x74\x68\x6f\x6e\xe3\x80\x8d\xe3\x81\xa8\xe5\x90\x8d\xe3\x81\xa5" -"\xe3\x81\x91\xe3\x81\xbe\xe3\x81\x97\xe3\x81\x9f\xe3\x80\x82\x0a" -"\xe3\x81\x93\xe3\x81\xae\xe3\x82\x88\xe3\x81\x86\xe3\x81\xaa\xe8" -"\x83\x8c\xe6\x99\xaf\xe3\x81\x8b\xe3\x82\x89\xe7\x94\x9f\xe3\x81" -"\xbe\xe3\x82\x8c\xe3\x81\x9f\x20\x50\x79\x74\x68\x6f\x6e\x20\xe3" 
-"\x81\xae\xe8\xa8\x80\xe8\xaa\x9e\xe8\xa8\xad\xe8\xa8\x88\xe3\x81" -"\xaf\xe3\x80\x81\xe3\x80\x8c\xe3\x82\xb7\xe3\x83\xb3\xe3\x83\x97" -"\xe3\x83\xab\xe3\x80\x8d\xe3\x81\xa7\xe3\x80\x8c\xe7\xbf\x92\xe5" -"\xbe\x97\xe3\x81\x8c\xe5\xae\xb9\xe6\x98\x93\xe3\x80\x8d\xe3\x81" -"\xa8\xe3\x81\x84\xe3\x81\x86\xe7\x9b\xae\xe6\xa8\x99\xe3\x81\xab" -"\xe9\x87\x8d\xe7\x82\xb9\xe3\x81\x8c\xe7\xbd\xae\xe3\x81\x8b\xe3" -"\x82\x8c\xe3\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3\x81\x99\xe3\x80" -"\x82\x0a\xe5\xa4\x9a\xe3\x81\x8f\xe3\x81\xae\xe3\x82\xb9\xe3\x82" -"\xaf\xe3\x83\xaa\xe3\x83\x97\xe3\x83\x88\xe7\xb3\xbb\xe8\xa8\x80" -"\xe8\xaa\x9e\xe3\x81\xa7\xe3\x81\xaf\xe3\x83\xa6\xe3\x83\xbc\xe3" -"\x82\xb6\xe3\x81\xae\xe7\x9b\xae\xe5\x85\x88\xe3\x81\xae\xe5\x88" -"\xa9\xe4\xbe\xbf\xe6\x80\xa7\xe3\x82\x92\xe5\x84\xaa\xe5\x85\x88" -"\xe3\x81\x97\xe3\x81\xa6\xe8\x89\xb2\xe3\x80\x85\xe3\x81\xaa\xe6" -"\xa9\x9f\xe8\x83\xbd\xe3\x82\x92\xe8\xa8\x80\xe8\xaa\x9e\xe8\xa6" -"\x81\xe7\xb4\xa0\xe3\x81\xa8\xe3\x81\x97\xe3\x81\xa6\xe5\x8f\x96" -"\xe3\x82\x8a\xe5\x85\xa5\xe3\x82\x8c\xe3\x82\x8b\xe5\xa0\xb4\xe5" -"\x90\x88\xe3\x81\x8c\xe5\xa4\x9a\xe3\x81\x84\xe3\x81\xae\xe3\x81" -"\xa7\xe3\x81\x99\xe3\x81\x8c\xe3\x80\x81\x50\x79\x74\x68\x6f\x6e" -"\x20\xe3\x81\xa7\xe3\x81\xaf\xe3\x81\x9d\xe3\x81\x86\xe3\x81\x84" -"\xe3\x81\xa3\xe3\x81\x9f\xe5\xb0\x8f\xe7\xb4\xb0\xe5\xb7\xa5\xe3" -"\x81\x8c\xe8\xbf\xbd\xe5\x8a\xa0\xe3\x81\x95\xe3\x82\x8c\xe3\x82" -"\x8b\xe3\x81\x93\xe3\x81\xa8\xe3\x81\xaf\xe3\x81\x82\xe3\x81\xbe" -"\xe3\x82\x8a\xe3\x81\x82\xe3\x82\x8a\xe3\x81\xbe\xe3\x81\x9b\xe3" -"\x82\x93\xe3\x80\x82\x0a\xe8\xa8\x80\xe8\xaa\x9e\xe8\x87\xaa\xe4" -"\xbd\x93\xe3\x81\xae\xe6\xa9\x9f\xe8\x83\xbd\xe3\x81\xaf\xe6\x9c" -"\x80\xe5\xb0\x8f\xe9\x99\x90\xe3\x81\xab\xe6\x8a\xbc\xe3\x81\x95" -"\xe3\x81\x88\xe3\x80\x81\xe5\xbf\x85\xe8\xa6\x81\xe3\x81\xaa\xe6" -"\xa9\x9f\xe8\x83\xbd\xe3\x81\xaf\xe6\x8b\xa1\xe5\xbc\xb5\xe3\x83" -"\xa2\xe3\x82\xb8\xe3\x83\xa5\xe3\x83\xbc\xe3\x83\xab\xe3\x81\xa8" -"\xe3\x81\x97\xe3\x81\xa6\xe8\xbf\xbd\xe5\x8a\xa0\xe3\x81\x99\xe3" -"\x82\x8b\xe3\x80\x81\xe3\x81\xa8\xe3\x81\x84\xe3\x81\x86\xe3\x81" -"\xae\xe3\x81\x8c\x20\x50\x79\x74\x68\x6f\x6e\x20\xe3\x81\xae\xe3" -"\x83\x9d\xe3\x83\xaa\xe3\x82\xb7\xe3\x83\xbc\xe3\x81\xa7\xe3\x81" -"\x99\xe3\x80\x82\x0a\x0a\xe3\x83\x8e\xe3\x81\x8b\xe3\x82\x9a\x20" -"\xe3\x83\x88\xe3\x82\x9a\x20\xe3\x83\x88\xe3\x82\xad\xef\xa8\xb6" -"\xef\xa8\xb9\x20\xf0\xa1\x9a\xb4\xf0\xaa\x8e\x8c\x20\xe9\xba\x80" -"\xe9\xbd\x81\xf0\xa9\x9b\xb0\x0a"), -'euc_jp': ( -"\x50\x79\x74\x68\x6f\x6e\x20\xa4\xce\xb3\xab\xc8\xaf\xa4\xcf\xa1" -"\xa2\x31\x39\x39\x30\x20\xc7\xaf\xa4\xb4\xa4\xed\xa4\xab\xa4\xe9" -"\xb3\xab\xbb\xcf\xa4\xb5\xa4\xec\xa4\xc6\xa4\xa4\xa4\xde\xa4\xb9" -"\xa1\xa3\x0a\xb3\xab\xc8\xaf\xbc\xd4\xa4\xce\x20\x47\x75\x69\x64" -"\x6f\x20\x76\x61\x6e\x20\x52\x6f\x73\x73\x75\x6d\x20\xa4\xcf\xb6" -"\xb5\xb0\xe9\xcd\xd1\xa4\xce\xa5\xd7\xa5\xed\xa5\xb0\xa5\xe9\xa5" -"\xdf\xa5\xf3\xa5\xb0\xb8\xc0\xb8\xec\xa1\xd6\x41\x42\x43\xa1\xd7" -"\xa4\xce\xb3\xab\xc8\xaf\xa4\xcb\xbb\xb2\xb2\xc3\xa4\xb7\xa4\xc6" -"\xa4\xa4\xa4\xde\xa4\xb7\xa4\xbf\xa4\xac\xa1\xa2\x41\x42\x43\x20" -"\xa4\xcf\xbc\xc2\xcd\xd1\xbe\xe5\xa4\xce\xcc\xdc\xc5\xaa\xa4\xcb" -"\xa4\xcf\xa4\xa2\xa4\xde\xa4\xea\xc5\xac\xa4\xb7\xa4\xc6\xa4\xa4" -"\xa4\xde\xa4\xbb\xa4\xf3\xa4\xc7\xa4\xb7\xa4\xbf\xa1\xa3\x0a\xa4" -"\xb3\xa4\xce\xa4\xbf\xa4\xe1\xa1\xa2\x47\x75\x69\x64\x6f\x20\xa4" -"\xcf\xa4\xe8\xa4\xea\xbc\xc2\xcd\xd1\xc5\xaa\xa4\xca\xa5\xd7\xa5" -"\xed\xa5\xb0\xa5\xe9\xa5\xdf\xa5\xf3\xa5\xb0\xb8\xc0\xb8\xec\xa4" 
-"\xce\xb3\xab\xc8\xaf\xa4\xf2\xb3\xab\xbb\xcf\xa4\xb7\xa1\xa2\xb1" -"\xd1\xb9\xf1\x20\x42\x42\x53\x20\xca\xfc\xc1\xf7\xa4\xce\xa5\xb3" -"\xa5\xe1\xa5\xc7\xa5\xa3\xc8\xd6\xc1\xc8\xa1\xd6\xa5\xe2\xa5\xf3" -"\xa5\xc6\xa5\xa3\x20\xa5\xd1\xa5\xa4\xa5\xbd\xa5\xf3\xa1\xd7\xa4" -"\xce\xa5\xd5\xa5\xa1\xa5\xf3\xa4\xc7\xa4\xa2\xa4\xeb\x20\x47\x75" -"\x69\x64\x6f\x20\xa4\xcf\xa4\xb3\xa4\xce\xb8\xc0\xb8\xec\xa4\xf2" -"\xa1\xd6\x50\x79\x74\x68\x6f\x6e\xa1\xd7\xa4\xc8\xcc\xbe\xa4\xc5" -"\xa4\xb1\xa4\xde\xa4\xb7\xa4\xbf\xa1\xa3\x0a\xa4\xb3\xa4\xce\xa4" -"\xe8\xa4\xa6\xa4\xca\xc7\xd8\xb7\xca\xa4\xab\xa4\xe9\xc0\xb8\xa4" -"\xde\xa4\xec\xa4\xbf\x20\x50\x79\x74\x68\x6f\x6e\x20\xa4\xce\xb8" -"\xc0\xb8\xec\xc0\xdf\xb7\xd7\xa4\xcf\xa1\xa2\xa1\xd6\xa5\xb7\xa5" -"\xf3\xa5\xd7\xa5\xeb\xa1\xd7\xa4\xc7\xa1\xd6\xbd\xac\xc6\xc0\xa4" -"\xac\xcd\xc6\xb0\xd7\xa1\xd7\xa4\xc8\xa4\xa4\xa4\xa6\xcc\xdc\xc9" -"\xb8\xa4\xcb\xbd\xc5\xc5\xc0\xa4\xac\xc3\xd6\xa4\xab\xa4\xec\xa4" -"\xc6\xa4\xa4\xa4\xde\xa4\xb9\xa1\xa3\x0a\xc2\xbf\xa4\xaf\xa4\xce" -"\xa5\xb9\xa5\xaf\xa5\xea\xa5\xd7\xa5\xc8\xb7\xcf\xb8\xc0\xb8\xec" -"\xa4\xc7\xa4\xcf\xa5\xe6\xa1\xbc\xa5\xb6\xa4\xce\xcc\xdc\xc0\xe8" -"\xa4\xce\xcd\xf8\xca\xd8\xc0\xad\xa4\xf2\xcd\xa5\xc0\xe8\xa4\xb7" -"\xa4\xc6\xbf\xa7\xa1\xb9\xa4\xca\xb5\xa1\xc7\xbd\xa4\xf2\xb8\xc0" -"\xb8\xec\xcd\xd7\xc1\xc7\xa4\xc8\xa4\xb7\xa4\xc6\xbc\xe8\xa4\xea" -"\xc6\xfe\xa4\xec\xa4\xeb\xbe\xec\xb9\xe7\xa4\xac\xc2\xbf\xa4\xa4" -"\xa4\xce\xa4\xc7\xa4\xb9\xa4\xac\xa1\xa2\x50\x79\x74\x68\x6f\x6e" -"\x20\xa4\xc7\xa4\xcf\xa4\xbd\xa4\xa6\xa4\xa4\xa4\xc3\xa4\xbf\xbe" -"\xae\xba\xd9\xb9\xa9\xa4\xac\xc4\xc9\xb2\xc3\xa4\xb5\xa4\xec\xa4" -"\xeb\xa4\xb3\xa4\xc8\xa4\xcf\xa4\xa2\xa4\xde\xa4\xea\xa4\xa2\xa4" -"\xea\xa4\xde\xa4\xbb\xa4\xf3\xa1\xa3\x0a\xb8\xc0\xb8\xec\xbc\xab" -"\xc2\xce\xa4\xce\xb5\xa1\xc7\xbd\xa4\xcf\xba\xc7\xbe\xae\xb8\xc2" -"\xa4\xcb\xb2\xa1\xa4\xb5\xa4\xa8\xa1\xa2\xc9\xac\xcd\xd7\xa4\xca" -"\xb5\xa1\xc7\xbd\xa4\xcf\xb3\xc8\xc4\xa5\xa5\xe2\xa5\xb8\xa5\xe5" -"\xa1\xbc\xa5\xeb\xa4\xc8\xa4\xb7\xa4\xc6\xc4\xc9\xb2\xc3\xa4\xb9" -"\xa4\xeb\xa1\xa2\xa4\xc8\xa4\xa4\xa4\xa6\xa4\xce\xa4\xac\x20\x50" -"\x79\x74\x68\x6f\x6e\x20\xa4\xce\xa5\xdd\xa5\xea\xa5\xb7\xa1\xbc" -"\xa4\xc7\xa4\xb9\xa1\xa3\x0a\x0a", -"\x50\x79\x74\x68\x6f\x6e\x20\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba" -"\xe3\x81\xaf\xe3\x80\x81\x31\x39\x39\x30\x20\xe5\xb9\xb4\xe3\x81" -"\x94\xe3\x82\x8d\xe3\x81\x8b\xe3\x82\x89\xe9\x96\x8b\xe5\xa7\x8b" -"\xe3\x81\x95\xe3\x82\x8c\xe3\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3" -"\x81\x99\xe3\x80\x82\x0a\xe9\x96\x8b\xe7\x99\xba\xe8\x80\x85\xe3" -"\x81\xae\x20\x47\x75\x69\x64\x6f\x20\x76\x61\x6e\x20\x52\x6f\x73" -"\x73\x75\x6d\x20\xe3\x81\xaf\xe6\x95\x99\xe8\x82\xb2\xe7\x94\xa8" -"\xe3\x81\xae\xe3\x83\x97\xe3\x83\xad\xe3\x82\xb0\xe3\x83\xa9\xe3" -"\x83\x9f\xe3\x83\xb3\xe3\x82\xb0\xe8\xa8\x80\xe8\xaa\x9e\xe3\x80" -"\x8c\x41\x42\x43\xe3\x80\x8d\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba" -"\xe3\x81\xab\xe5\x8f\x82\xe5\x8a\xa0\xe3\x81\x97\xe3\x81\xa6\xe3" -"\x81\x84\xe3\x81\xbe\xe3\x81\x97\xe3\x81\x9f\xe3\x81\x8c\xe3\x80" -"\x81\x41\x42\x43\x20\xe3\x81\xaf\xe5\xae\x9f\xe7\x94\xa8\xe4\xb8" -"\x8a\xe3\x81\xae\xe7\x9b\xae\xe7\x9a\x84\xe3\x81\xab\xe3\x81\xaf" -"\xe3\x81\x82\xe3\x81\xbe\xe3\x82\x8a\xe9\x81\xa9\xe3\x81\x97\xe3" -"\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3\x81\x9b\xe3\x82\x93\xe3\x81" -"\xa7\xe3\x81\x97\xe3\x81\x9f\xe3\x80\x82\x0a\xe3\x81\x93\xe3\x81" -"\xae\xe3\x81\x9f\xe3\x82\x81\xe3\x80\x81\x47\x75\x69\x64\x6f\x20" -"\xe3\x81\xaf\xe3\x82\x88\xe3\x82\x8a\xe5\xae\x9f\xe7\x94\xa8\xe7" 
-"\x9a\x84\xe3\x81\xaa\xe3\x83\x97\xe3\x83\xad\xe3\x82\xb0\xe3\x83" -"\xa9\xe3\x83\x9f\xe3\x83\xb3\xe3\x82\xb0\xe8\xa8\x80\xe8\xaa\x9e" -"\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba\xe3\x82\x92\xe9\x96\x8b\xe5" -"\xa7\x8b\xe3\x81\x97\xe3\x80\x81\xe8\x8b\xb1\xe5\x9b\xbd\x20\x42" -"\x42\x53\x20\xe6\x94\xbe\xe9\x80\x81\xe3\x81\xae\xe3\x82\xb3\xe3" -"\x83\xa1\xe3\x83\x87\xe3\x82\xa3\xe7\x95\xaa\xe7\xb5\x84\xe3\x80" -"\x8c\xe3\x83\xa2\xe3\x83\xb3\xe3\x83\x86\xe3\x82\xa3\x20\xe3\x83" -"\x91\xe3\x82\xa4\xe3\x82\xbd\xe3\x83\xb3\xe3\x80\x8d\xe3\x81\xae" -"\xe3\x83\x95\xe3\x82\xa1\xe3\x83\xb3\xe3\x81\xa7\xe3\x81\x82\xe3" -"\x82\x8b\x20\x47\x75\x69\x64\x6f\x20\xe3\x81\xaf\xe3\x81\x93\xe3" -"\x81\xae\xe8\xa8\x80\xe8\xaa\x9e\xe3\x82\x92\xe3\x80\x8c\x50\x79" -"\x74\x68\x6f\x6e\xe3\x80\x8d\xe3\x81\xa8\xe5\x90\x8d\xe3\x81\xa5" -"\xe3\x81\x91\xe3\x81\xbe\xe3\x81\x97\xe3\x81\x9f\xe3\x80\x82\x0a" -"\xe3\x81\x93\xe3\x81\xae\xe3\x82\x88\xe3\x81\x86\xe3\x81\xaa\xe8" -"\x83\x8c\xe6\x99\xaf\xe3\x81\x8b\xe3\x82\x89\xe7\x94\x9f\xe3\x81" -"\xbe\xe3\x82\x8c\xe3\x81\x9f\x20\x50\x79\x74\x68\x6f\x6e\x20\xe3" -"\x81\xae\xe8\xa8\x80\xe8\xaa\x9e\xe8\xa8\xad\xe8\xa8\x88\xe3\x81" -"\xaf\xe3\x80\x81\xe3\x80\x8c\xe3\x82\xb7\xe3\x83\xb3\xe3\x83\x97" -"\xe3\x83\xab\xe3\x80\x8d\xe3\x81\xa7\xe3\x80\x8c\xe7\xbf\x92\xe5" -"\xbe\x97\xe3\x81\x8c\xe5\xae\xb9\xe6\x98\x93\xe3\x80\x8d\xe3\x81" -"\xa8\xe3\x81\x84\xe3\x81\x86\xe7\x9b\xae\xe6\xa8\x99\xe3\x81\xab" -"\xe9\x87\x8d\xe7\x82\xb9\xe3\x81\x8c\xe7\xbd\xae\xe3\x81\x8b\xe3" -"\x82\x8c\xe3\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3\x81\x99\xe3\x80" -"\x82\x0a\xe5\xa4\x9a\xe3\x81\x8f\xe3\x81\xae\xe3\x82\xb9\xe3\x82" -"\xaf\xe3\x83\xaa\xe3\x83\x97\xe3\x83\x88\xe7\xb3\xbb\xe8\xa8\x80" -"\xe8\xaa\x9e\xe3\x81\xa7\xe3\x81\xaf\xe3\x83\xa6\xe3\x83\xbc\xe3" -"\x82\xb6\xe3\x81\xae\xe7\x9b\xae\xe5\x85\x88\xe3\x81\xae\xe5\x88" -"\xa9\xe4\xbe\xbf\xe6\x80\xa7\xe3\x82\x92\xe5\x84\xaa\xe5\x85\x88" -"\xe3\x81\x97\xe3\x81\xa6\xe8\x89\xb2\xe3\x80\x85\xe3\x81\xaa\xe6" -"\xa9\x9f\xe8\x83\xbd\xe3\x82\x92\xe8\xa8\x80\xe8\xaa\x9e\xe8\xa6" -"\x81\xe7\xb4\xa0\xe3\x81\xa8\xe3\x81\x97\xe3\x81\xa6\xe5\x8f\x96" -"\xe3\x82\x8a\xe5\x85\xa5\xe3\x82\x8c\xe3\x82\x8b\xe5\xa0\xb4\xe5" -"\x90\x88\xe3\x81\x8c\xe5\xa4\x9a\xe3\x81\x84\xe3\x81\xae\xe3\x81" -"\xa7\xe3\x81\x99\xe3\x81\x8c\xe3\x80\x81\x50\x79\x74\x68\x6f\x6e" -"\x20\xe3\x81\xa7\xe3\x81\xaf\xe3\x81\x9d\xe3\x81\x86\xe3\x81\x84" -"\xe3\x81\xa3\xe3\x81\x9f\xe5\xb0\x8f\xe7\xb4\xb0\xe5\xb7\xa5\xe3" -"\x81\x8c\xe8\xbf\xbd\xe5\x8a\xa0\xe3\x81\x95\xe3\x82\x8c\xe3\x82" -"\x8b\xe3\x81\x93\xe3\x81\xa8\xe3\x81\xaf\xe3\x81\x82\xe3\x81\xbe" -"\xe3\x82\x8a\xe3\x81\x82\xe3\x82\x8a\xe3\x81\xbe\xe3\x81\x9b\xe3" -"\x82\x93\xe3\x80\x82\x0a\xe8\xa8\x80\xe8\xaa\x9e\xe8\x87\xaa\xe4" -"\xbd\x93\xe3\x81\xae\xe6\xa9\x9f\xe8\x83\xbd\xe3\x81\xaf\xe6\x9c" -"\x80\xe5\xb0\x8f\xe9\x99\x90\xe3\x81\xab\xe6\x8a\xbc\xe3\x81\x95" -"\xe3\x81\x88\xe3\x80\x81\xe5\xbf\x85\xe8\xa6\x81\xe3\x81\xaa\xe6" -"\xa9\x9f\xe8\x83\xbd\xe3\x81\xaf\xe6\x8b\xa1\xe5\xbc\xb5\xe3\x83" -"\xa2\xe3\x82\xb8\xe3\x83\xa5\xe3\x83\xbc\xe3\x83\xab\xe3\x81\xa8" -"\xe3\x81\x97\xe3\x81\xa6\xe8\xbf\xbd\xe5\x8a\xa0\xe3\x81\x99\xe3" -"\x82\x8b\xe3\x80\x81\xe3\x81\xa8\xe3\x81\x84\xe3\x81\x86\xe3\x81" -"\xae\xe3\x81\x8c\x20\x50\x79\x74\x68\x6f\x6e\x20\xe3\x81\xae\xe3" -"\x83\x9d\xe3\x83\xaa\xe3\x82\xb7\xe3\x83\xbc\xe3\x81\xa7\xe3\x81" -"\x99\xe3\x80\x82\x0a\x0a"), -'euc_kr': ( -"\xa1\xdd\x20\xc6\xc4\xc0\xcc\xbd\xe3\x28\x50\x79\x74\x68\x6f\x6e" -"\x29\xc0\xba\x20\xb9\xe8\xbf\xec\xb1\xe2\x20\xbd\xb1\xb0\xed\x2c" 
-"\x20\xb0\xad\xb7\xc2\xc7\xd1\x20\xc7\xc1\xb7\xce\xb1\xd7\xb7\xa1" -"\xb9\xd6\x20\xbe\xf0\xbe\xee\xc0\xd4\xb4\xcf\xb4\xd9\x2e\x20\xc6" -"\xc4\xc0\xcc\xbd\xe3\xc0\xba\x0a\xc8\xbf\xc0\xb2\xc0\xfb\xc0\xce" -"\x20\xb0\xed\xbc\xf6\xc1\xd8\x20\xb5\xa5\xc0\xcc\xc5\xcd\x20\xb1" -"\xb8\xc1\xb6\xbf\xcd\x20\xb0\xa3\xb4\xdc\xc7\xcf\xc1\xf6\xb8\xb8" -"\x20\xc8\xbf\xc0\xb2\xc0\xfb\xc0\xce\x20\xb0\xb4\xc3\xbc\xc1\xf6" -"\xc7\xe2\xc7\xc1\xb7\xce\xb1\xd7\xb7\xa1\xb9\xd6\xc0\xbb\x0a\xc1" -"\xf6\xbf\xf8\xc7\xd5\xb4\xcf\xb4\xd9\x2e\x20\xc6\xc4\xc0\xcc\xbd" -"\xe3\xc0\xc7\x20\xbf\xec\xbe\xc6\x28\xe9\xd0\xe4\xba\x29\xc7\xd1" -"\x20\xb9\xae\xb9\xfd\xb0\xfa\x20\xb5\xbf\xc0\xfb\x20\xc5\xb8\xc0" -"\xcc\xc7\xce\x2c\x20\xb1\xd7\xb8\xae\xb0\xed\x20\xc0\xce\xc5\xcd" -"\xc7\xc1\xb8\xae\xc6\xc3\x0a\xc8\xaf\xb0\xe6\xc0\xba\x20\xc6\xc4" -"\xc0\xcc\xbd\xe3\xc0\xbb\x20\xbd\xba\xc5\xa9\xb8\xb3\xc6\xc3\xb0" -"\xfa\x20\xbf\xa9\xb7\xaf\x20\xba\xd0\xbe\xdf\xbf\xa1\xbc\xad\xbf" -"\xcd\x20\xb4\xeb\xba\xce\xba\xd0\xc0\xc7\x20\xc7\xc3\xb7\xa7\xc6" -"\xfb\xbf\xa1\xbc\xad\xc0\xc7\x20\xba\xfc\xb8\xa5\x0a\xbe\xd6\xc7" -"\xc3\xb8\xae\xc4\xc9\xc0\xcc\xbc\xc7\x20\xb0\xb3\xb9\xdf\xc0\xbb" -"\x20\xc7\xd2\x20\xbc\xf6\x20\xc0\xd6\xb4\xc2\x20\xc0\xcc\xbb\xf3" -"\xc0\xfb\xc0\xce\x20\xbe\xf0\xbe\xee\xb7\xce\x20\xb8\xb8\xb5\xe9" -"\xbe\xee\xc1\xdd\xb4\xcf\xb4\xd9\x2e\x0a\x0a\xa1\xd9\xc3\xb9\xb0" -"\xa1\xb3\xa1\x3a\x20\xb3\xaf\xbe\xc6\xb6\xf3\x20\xa4\xd4\xa4\xb6" -"\xa4\xd0\xa4\xd4\xa4\xd4\xa4\xb6\xa4\xd0\xa4\xd4\xbe\xb1\x7e\x20" -"\xa4\xd4\xa4\xa4\xa4\xd2\xa4\xb7\xc5\xad\x21\x20\xa4\xd4\xa4\xa8" -"\xa4\xd1\xa4\xb7\xb1\xdd\xbe\xf8\xc0\xcc\x20\xc0\xfc\xa4\xd4\xa4" -"\xbe\xa4\xc8\xa4\xb2\xb4\xcf\xb4\xd9\x2e\x20\xa4\xd4\xa4\xb2\xa4" -"\xce\xa4\xaa\x2e\x20\xb1\xd7\xb7\xb1\xb0\xc5\x20\xa4\xd4\xa4\xb7" -"\xa4\xd1\xa4\xb4\xb4\xd9\x2e\x0a", -"\xe2\x97\x8e\x20\xed\x8c\x8c\xec\x9d\xb4\xec\x8d\xac\x28\x50\x79" -"\x74\x68\x6f\x6e\x29\xec\x9d\x80\x20\xeb\xb0\xb0\xec\x9a\xb0\xea" -"\xb8\xb0\x20\xec\x89\xbd\xea\xb3\xa0\x2c\x20\xea\xb0\x95\xeb\xa0" -"\xa5\xed\x95\x9c\x20\xed\x94\x84\xeb\xa1\x9c\xea\xb7\xb8\xeb\x9e" -"\x98\xeb\xb0\x8d\x20\xec\x96\xb8\xec\x96\xb4\xec\x9e\x85\xeb\x8b" -"\x88\xeb\x8b\xa4\x2e\x20\xed\x8c\x8c\xec\x9d\xb4\xec\x8d\xac\xec" -"\x9d\x80\x0a\xed\x9a\xa8\xec\x9c\xa8\xec\xa0\x81\xec\x9d\xb8\x20" -"\xea\xb3\xa0\xec\x88\x98\xec\xa4\x80\x20\xeb\x8d\xb0\xec\x9d\xb4" -"\xed\x84\xb0\x20\xea\xb5\xac\xec\xa1\xb0\xec\x99\x80\x20\xea\xb0" -"\x84\xeb\x8b\xa8\xed\x95\x98\xec\xa7\x80\xeb\xa7\x8c\x20\xed\x9a" -"\xa8\xec\x9c\xa8\xec\xa0\x81\xec\x9d\xb8\x20\xea\xb0\x9d\xec\xb2" -"\xb4\xec\xa7\x80\xed\x96\xa5\xed\x94\x84\xeb\xa1\x9c\xea\xb7\xb8" -"\xeb\x9e\x98\xeb\xb0\x8d\xec\x9d\x84\x0a\xec\xa7\x80\xec\x9b\x90" -"\xed\x95\xa9\xeb\x8b\x88\xeb\x8b\xa4\x2e\x20\xed\x8c\x8c\xec\x9d" -"\xb4\xec\x8d\xac\xec\x9d\x98\x20\xec\x9a\xb0\xec\x95\x84\x28\xe5" -"\x84\xaa\xe9\x9b\x85\x29\xed\x95\x9c\x20\xeb\xac\xb8\xeb\xb2\x95" -"\xea\xb3\xbc\x20\xeb\x8f\x99\xec\xa0\x81\x20\xed\x83\x80\xec\x9d" -"\xb4\xed\x95\x91\x2c\x20\xea\xb7\xb8\xeb\xa6\xac\xea\xb3\xa0\x20" -"\xec\x9d\xb8\xed\x84\xb0\xed\x94\x84\xeb\xa6\xac\xed\x8c\x85\x0a" -"\xed\x99\x98\xea\xb2\xbd\xec\x9d\x80\x20\xed\x8c\x8c\xec\x9d\xb4" -"\xec\x8d\xac\xec\x9d\x84\x20\xec\x8a\xa4\xed\x81\xac\xeb\xa6\xbd" -"\xed\x8c\x85\xea\xb3\xbc\x20\xec\x97\xac\xeb\x9f\xac\x20\xeb\xb6" -"\x84\xec\x95\xbc\xec\x97\x90\xec\x84\x9c\xec\x99\x80\x20\xeb\x8c" -"\x80\xeb\xb6\x80\xeb\xb6\x84\xec\x9d\x98\x20\xed\x94\x8c\xeb\x9e" -"\xab\xed\x8f\xbc\xec\x97\x90\xec\x84\x9c\xec\x9d\x98\x20\xeb\xb9" 
-"\xa0\xeb\xa5\xb8\x0a\xec\x95\xa0\xed\x94\x8c\xeb\xa6\xac\xec\xbc" -"\x80\xec\x9d\xb4\xec\x85\x98\x20\xea\xb0\x9c\xeb\xb0\x9c\xec\x9d" -"\x84\x20\xed\x95\xa0\x20\xec\x88\x98\x20\xec\x9e\x88\xeb\x8a\x94" -"\x20\xec\x9d\xb4\xec\x83\x81\xec\xa0\x81\xec\x9d\xb8\x20\xec\x96" -"\xb8\xec\x96\xb4\xeb\xa1\x9c\x20\xeb\xa7\x8c\xeb\x93\xa4\xec\x96" -"\xb4\xec\xa4\x8d\xeb\x8b\x88\xeb\x8b\xa4\x2e\x0a\x0a\xe2\x98\x86" -"\xec\xb2\xab\xea\xb0\x80\xeb\x81\x9d\x3a\x20\xeb\x82\xa0\xec\x95" -"\x84\xeb\x9d\xbc\x20\xec\x93\x94\xec\x93\x94\xec\x93\xa9\x7e\x20" -"\xeb\x8b\x81\xed\x81\xbc\x21\x20\xeb\x9c\xbd\xea\xb8\x88\xec\x97" -"\x86\xec\x9d\xb4\x20\xec\xa0\x84\xed\x99\xa5\xeb\x8b\x88\xeb\x8b" -"\xa4\x2e\x20\xeb\xb7\x81\x2e\x20\xea\xb7\xb8\xeb\x9f\xb0\xea\xb1" -"\xb0\x20\xec\x9d\x8e\xeb\x8b\xa4\x2e\x0a"), -'gb18030': ( -"\x50\x79\x74\x68\x6f\x6e\xa3\xa8\xc5\xc9\xc9\xad\xa3\xa9\xd3\xef" -"\xd1\xd4\xca\xc7\xd2\xbb\xd6\xd6\xb9\xa6\xc4\xdc\xc7\xbf\xb4\xf3" -"\xb6\xf8\xcd\xea\xc9\xc6\xb5\xc4\xcd\xa8\xd3\xc3\xd0\xcd\xbc\xc6" -"\xcb\xe3\xbb\xfa\xb3\xcc\xd0\xf2\xc9\xe8\xbc\xc6\xd3\xef\xd1\xd4" -"\xa3\xac\x0a\xd2\xd1\xbe\xad\xbe\xdf\xd3\xd0\xca\xae\xb6\xe0\xc4" -"\xea\xb5\xc4\xb7\xa2\xd5\xb9\xc0\xfa\xca\xb7\xa3\xac\xb3\xc9\xca" -"\xec\xc7\xd2\xce\xc8\xb6\xa8\xa1\xa3\xd5\xe2\xd6\xd6\xd3\xef\xd1" -"\xd4\xbe\xdf\xd3\xd0\xb7\xc7\xb3\xa3\xbc\xf2\xbd\xdd\xb6\xf8\xc7" -"\xe5\xce\xfa\x0a\xb5\xc4\xd3\xef\xb7\xa8\xcc\xd8\xb5\xe3\xa3\xac" -"\xca\xca\xba\xcf\xcd\xea\xb3\xc9\xb8\xf7\xd6\xd6\xb8\xdf\xb2\xe3" -"\xc8\xce\xce\xf1\xa3\xac\xbc\xb8\xba\xf5\xbf\xc9\xd2\xd4\xd4\xda" -"\xcb\xf9\xd3\xd0\xb5\xc4\xb2\xd9\xd7\xf7\xcf\xb5\xcd\xb3\xd6\xd0" -"\x0a\xd4\xcb\xd0\xd0\xa1\xa3\xd5\xe2\xd6\xd6\xd3\xef\xd1\xd4\xbc" -"\xf2\xb5\xa5\xb6\xf8\xc7\xbf\xb4\xf3\xa3\xac\xca\xca\xba\xcf\xb8" -"\xf7\xd6\xd6\xc8\xcb\xca\xbf\xd1\xa7\xcf\xb0\xca\xb9\xd3\xc3\xa1" -"\xa3\xc4\xbf\xc7\xb0\xa3\xac\xbb\xf9\xd3\xda\xd5\xe2\x0a\xd6\xd6" -"\xd3\xef\xd1\xd4\xb5\xc4\xcf\xe0\xb9\xd8\xbc\xbc\xca\xf5\xd5\xfd" -"\xd4\xda\xb7\xc9\xcb\xd9\xb5\xc4\xb7\xa2\xd5\xb9\xa3\xac\xd3\xc3" -"\xbb\xa7\xca\xfd\xc1\xbf\xbc\xb1\xbe\xe7\xc0\xa9\xb4\xf3\xa3\xac" -"\xcf\xe0\xb9\xd8\xb5\xc4\xd7\xca\xd4\xb4\xb7\xc7\xb3\xa3\xb6\xe0" -"\xa1\xa3\x0a\xc8\xe7\xba\xce\xd4\xda\x20\x50\x79\x74\x68\x6f\x6e" -"\x20\xd6\xd0\xca\xb9\xd3\xc3\xbc\xc8\xd3\xd0\xb5\xc4\x20\x43\x20" -"\x6c\x69\x62\x72\x61\x72\x79\x3f\x0a\xa1\xa1\xd4\xda\xd9\x59\xd3" -"\x8d\xbf\xc6\xbc\xbc\xbf\xec\xcb\xd9\xb0\x6c\xd5\xb9\xb5\xc4\xbd" -"\xf1\xcc\xec\x2c\x20\xe9\x5f\xb0\x6c\xbc\xb0\x9c\x79\xd4\x87\xdc" -"\x9b\xf3\x77\xb5\xc4\xcb\xd9\xb6\xc8\xca\xc7\xb2\xbb\xc8\xdd\xba" -"\xf6\xd2\x95\xb5\xc4\x0a\xd5\x6e\xee\x7d\x2e\x20\x9e\xe9\xbc\xd3" -"\xbf\xec\xe9\x5f\xb0\x6c\xbc\xb0\x9c\x79\xd4\x87\xb5\xc4\xcb\xd9" -"\xb6\xc8\x2c\x20\xce\xd2\x82\x83\xb1\xe3\xb3\xa3\xcf\xa3\xcd\xfb" -"\xc4\xdc\xc0\xfb\xd3\xc3\xd2\xbb\xd0\xa9\xd2\xd1\xe9\x5f\xb0\x6c" -"\xba\xc3\xb5\xc4\x0a\x6c\x69\x62\x72\x61\x72\x79\x2c\x20\x81\x4b" -"\xd3\xd0\xd2\xbb\x82\x80\x20\x66\x61\x73\x74\x20\x70\x72\x6f\x74" -"\x6f\x74\x79\x70\x69\x6e\x67\x20\xb5\xc4\x20\x70\x72\x6f\x67\x72" -"\x61\x6d\x6d\x69\x6e\x67\x20\x6c\x61\x6e\x67\x75\x61\x67\x65\x20" -"\xbf\xc9\x0a\xb9\xa9\xca\xb9\xd3\xc3\x2e\x20\xc4\xbf\xc7\xb0\xd3" -"\xd0\xd4\x53\xd4\x53\xb6\xe0\xb6\xe0\xb5\xc4\x20\x6c\x69\x62\x72" -"\x61\x72\x79\x20\xca\xc7\xd2\xd4\x20\x43\x20\x8c\x91\xb3\xc9\x2c" -"\x20\xb6\xf8\x20\x50\x79\x74\x68\x6f\x6e\x20\xca\xc7\xd2\xbb\x82" -"\x80\x0a\x66\x61\x73\x74\x20\x70\x72\x6f\x74\x6f\x74\x79\x70\x69" -"\x6e\x67\x20\xb5\xc4\x20\x70\x72\x6f\x67\x72\x61\x6d\x6d\x69\x6e" 
-"\x67\x20\x6c\x61\x6e\x67\x75\x61\x67\x65\x2e\x20\xb9\xca\xce\xd2" -"\x82\x83\xcf\xa3\xcd\xfb\xc4\xdc\x8c\xa2\xbc\xc8\xd3\xd0\xb5\xc4" -"\x0a\x43\x20\x6c\x69\x62\x72\x61\x72\x79\x20\xc4\xc3\xb5\xbd\x20" -"\x50\x79\x74\x68\x6f\x6e\x20\xb5\xc4\xad\x68\xbe\xb3\xd6\xd0\x9c" -"\x79\xd4\x87\xbc\xb0\xd5\xfb\xba\xcf\x2e\x20\xc6\xe4\xd6\xd0\xd7" -"\xee\xd6\xf7\xd2\xaa\xd2\xb2\xca\xc7\xce\xd2\x82\x83\xcb\xf9\x0a" -"\xd2\xaa\xd3\x91\xd5\x93\xb5\xc4\x86\x96\xee\x7d\xbe\xcd\xca\xc7" -"\x3a\x0a\x83\x35\xc7\x31\x83\x33\x9a\x33\x83\x32\xb1\x31\x83\x33" -"\x95\x31\x20\x82\x37\xd1\x36\x83\x30\x8c\x34\x83\x36\x84\x33\x20" -"\x82\x38\x89\x35\x82\x38\xfb\x36\x83\x33\x95\x35\x20\x83\x33\xd5" -"\x31\x82\x39\x81\x35\x20\x83\x30\xfd\x39\x83\x33\x86\x30\x20\x83" -"\x34\xdc\x33\x83\x35\xf6\x37\x83\x35\x97\x35\x20\x83\x35\xf9\x35" -"\x83\x30\x91\x39\x82\x38\x83\x39\x82\x39\xfc\x33\x83\x30\xf0\x34" -"\x20\x83\x32\xeb\x39\x83\x32\xeb\x35\x82\x39\x83\x39\x2e\x0a\x0a", -"\x50\x79\x74\x68\x6f\x6e\xef\xbc\x88\xe6\xb4\xbe\xe6\xa3\xae\xef" -"\xbc\x89\xe8\xaf\xad\xe8\xa8\x80\xe6\x98\xaf\xe4\xb8\x80\xe7\xa7" -"\x8d\xe5\x8a\x9f\xe8\x83\xbd\xe5\xbc\xba\xe5\xa4\xa7\xe8\x80\x8c" -"\xe5\xae\x8c\xe5\x96\x84\xe7\x9a\x84\xe9\x80\x9a\xe7\x94\xa8\xe5" -"\x9e\x8b\xe8\xae\xa1\xe7\xae\x97\xe6\x9c\xba\xe7\xa8\x8b\xe5\xba" -"\x8f\xe8\xae\xbe\xe8\xae\xa1\xe8\xaf\xad\xe8\xa8\x80\xef\xbc\x8c" -"\x0a\xe5\xb7\xb2\xe7\xbb\x8f\xe5\x85\xb7\xe6\x9c\x89\xe5\x8d\x81" -"\xe5\xa4\x9a\xe5\xb9\xb4\xe7\x9a\x84\xe5\x8f\x91\xe5\xb1\x95\xe5" -"\x8e\x86\xe5\x8f\xb2\xef\xbc\x8c\xe6\x88\x90\xe7\x86\x9f\xe4\xb8" -"\x94\xe7\xa8\xb3\xe5\xae\x9a\xe3\x80\x82\xe8\xbf\x99\xe7\xa7\x8d" -"\xe8\xaf\xad\xe8\xa8\x80\xe5\x85\xb7\xe6\x9c\x89\xe9\x9d\x9e\xe5" -"\xb8\xb8\xe7\xae\x80\xe6\x8d\xb7\xe8\x80\x8c\xe6\xb8\x85\xe6\x99" -"\xb0\x0a\xe7\x9a\x84\xe8\xaf\xad\xe6\xb3\x95\xe7\x89\xb9\xe7\x82" -"\xb9\xef\xbc\x8c\xe9\x80\x82\xe5\x90\x88\xe5\xae\x8c\xe6\x88\x90" -"\xe5\x90\x84\xe7\xa7\x8d\xe9\xab\x98\xe5\xb1\x82\xe4\xbb\xbb\xe5" -"\x8a\xa1\xef\xbc\x8c\xe5\x87\xa0\xe4\xb9\x8e\xe5\x8f\xaf\xe4\xbb" -"\xa5\xe5\x9c\xa8\xe6\x89\x80\xe6\x9c\x89\xe7\x9a\x84\xe6\x93\x8d" -"\xe4\xbd\x9c\xe7\xb3\xbb\xe7\xbb\x9f\xe4\xb8\xad\x0a\xe8\xbf\x90" -"\xe8\xa1\x8c\xe3\x80\x82\xe8\xbf\x99\xe7\xa7\x8d\xe8\xaf\xad\xe8" -"\xa8\x80\xe7\xae\x80\xe5\x8d\x95\xe8\x80\x8c\xe5\xbc\xba\xe5\xa4" -"\xa7\xef\xbc\x8c\xe9\x80\x82\xe5\x90\x88\xe5\x90\x84\xe7\xa7\x8d" -"\xe4\xba\xba\xe5\xa3\xab\xe5\xad\xa6\xe4\xb9\xa0\xe4\xbd\xbf\xe7" -"\x94\xa8\xe3\x80\x82\xe7\x9b\xae\xe5\x89\x8d\xef\xbc\x8c\xe5\x9f" -"\xba\xe4\xba\x8e\xe8\xbf\x99\x0a\xe7\xa7\x8d\xe8\xaf\xad\xe8\xa8" -"\x80\xe7\x9a\x84\xe7\x9b\xb8\xe5\x85\xb3\xe6\x8a\x80\xe6\x9c\xaf" -"\xe6\xad\xa3\xe5\x9c\xa8\xe9\xa3\x9e\xe9\x80\x9f\xe7\x9a\x84\xe5" -"\x8f\x91\xe5\xb1\x95\xef\xbc\x8c\xe7\x94\xa8\xe6\x88\xb7\xe6\x95" -"\xb0\xe9\x87\x8f\xe6\x80\xa5\xe5\x89\xa7\xe6\x89\xa9\xe5\xa4\xa7" -"\xef\xbc\x8c\xe7\x9b\xb8\xe5\x85\xb3\xe7\x9a\x84\xe8\xb5\x84\xe6" -"\xba\x90\xe9\x9d\x9e\xe5\xb8\xb8\xe5\xa4\x9a\xe3\x80\x82\x0a\xe5" -"\xa6\x82\xe4\xbd\x95\xe5\x9c\xa8\x20\x50\x79\x74\x68\x6f\x6e\x20" -"\xe4\xb8\xad\xe4\xbd\xbf\xe7\x94\xa8\xe6\x97\xa2\xe6\x9c\x89\xe7" -"\x9a\x84\x20\x43\x20\x6c\x69\x62\x72\x61\x72\x79\x3f\x0a\xe3\x80" -"\x80\xe5\x9c\xa8\xe8\xb3\x87\xe8\xa8\x8a\xe7\xa7\x91\xe6\x8a\x80" -"\xe5\xbf\xab\xe9\x80\x9f\xe7\x99\xbc\xe5\xb1\x95\xe7\x9a\x84\xe4" -"\xbb\x8a\xe5\xa4\xa9\x2c\x20\xe9\x96\x8b\xe7\x99\xbc\xe5\x8f\x8a" -"\xe6\xb8\xac\xe8\xa9\xa6\xe8\xbb\x9f\xe9\xab\x94\xe7\x9a\x84\xe9" -"\x80\x9f\xe5\xba\xa6\xe6\x98\xaf\xe4\xb8\x8d\xe5\xae\xb9\xe5\xbf" 
-"\xbd\xe8\xa6\x96\xe7\x9a\x84\x0a\xe8\xaa\xb2\xe9\xa1\x8c\x2e\x20" -"\xe7\x82\xba\xe5\x8a\xa0\xe5\xbf\xab\xe9\x96\x8b\xe7\x99\xbc\xe5" -"\x8f\x8a\xe6\xb8\xac\xe8\xa9\xa6\xe7\x9a\x84\xe9\x80\x9f\xe5\xba" -"\xa6\x2c\x20\xe6\x88\x91\xe5\x80\x91\xe4\xbe\xbf\xe5\xb8\xb8\xe5" -"\xb8\x8c\xe6\x9c\x9b\xe8\x83\xbd\xe5\x88\xa9\xe7\x94\xa8\xe4\xb8" -"\x80\xe4\xba\x9b\xe5\xb7\xb2\xe9\x96\x8b\xe7\x99\xbc\xe5\xa5\xbd" -"\xe7\x9a\x84\x0a\x6c\x69\x62\x72\x61\x72\x79\x2c\x20\xe4\xb8\xa6" -"\xe6\x9c\x89\xe4\xb8\x80\xe5\x80\x8b\x20\x66\x61\x73\x74\x20\x70" -"\x72\x6f\x74\x6f\x74\x79\x70\x69\x6e\x67\x20\xe7\x9a\x84\x20\x70" -"\x72\x6f\x67\x72\x61\x6d\x6d\x69\x6e\x67\x20\x6c\x61\x6e\x67\x75" -"\x61\x67\x65\x20\xe5\x8f\xaf\x0a\xe4\xbe\x9b\xe4\xbd\xbf\xe7\x94" -"\xa8\x2e\x20\xe7\x9b\xae\xe5\x89\x8d\xe6\x9c\x89\xe8\xa8\xb1\xe8" -"\xa8\xb1\xe5\xa4\x9a\xe5\xa4\x9a\xe7\x9a\x84\x20\x6c\x69\x62\x72" -"\x61\x72\x79\x20\xe6\x98\xaf\xe4\xbb\xa5\x20\x43\x20\xe5\xaf\xab" -"\xe6\x88\x90\x2c\x20\xe8\x80\x8c\x20\x50\x79\x74\x68\x6f\x6e\x20" -"\xe6\x98\xaf\xe4\xb8\x80\xe5\x80\x8b\x0a\x66\x61\x73\x74\x20\x70" -"\x72\x6f\x74\x6f\x74\x79\x70\x69\x6e\x67\x20\xe7\x9a\x84\x20\x70" -"\x72\x6f\x67\x72\x61\x6d\x6d\x69\x6e\x67\x20\x6c\x61\x6e\x67\x75" -"\x61\x67\x65\x2e\x20\xe6\x95\x85\xe6\x88\x91\xe5\x80\x91\xe5\xb8" -"\x8c\xe6\x9c\x9b\xe8\x83\xbd\xe5\xb0\x87\xe6\x97\xa2\xe6\x9c\x89" -"\xe7\x9a\x84\x0a\x43\x20\x6c\x69\x62\x72\x61\x72\x79\x20\xe6\x8b" -"\xbf\xe5\x88\xb0\x20\x50\x79\x74\x68\x6f\x6e\x20\xe7\x9a\x84\xe7" -"\x92\xb0\xe5\xa2\x83\xe4\xb8\xad\xe6\xb8\xac\xe8\xa9\xa6\xe5\x8f" -"\x8a\xe6\x95\xb4\xe5\x90\x88\x2e\x20\xe5\x85\xb6\xe4\xb8\xad\xe6" -"\x9c\x80\xe4\xb8\xbb\xe8\xa6\x81\xe4\xb9\x9f\xe6\x98\xaf\xe6\x88" -"\x91\xe5\x80\x91\xe6\x89\x80\x0a\xe8\xa6\x81\xe8\xa8\x8e\xe8\xab" -"\x96\xe7\x9a\x84\xe5\x95\x8f\xe9\xa1\x8c\xe5\xb0\xb1\xe6\x98\xaf" -"\x3a\x0a\xed\x8c\x8c\xec\x9d\xb4\xec\x8d\xac\xec\x9d\x80\x20\xea" -"\xb0\x95\xeb\xa0\xa5\xed\x95\x9c\x20\xea\xb8\xb0\xeb\x8a\xa5\xec" -"\x9d\x84\x20\xec\xa7\x80\xeb\x8b\x8c\x20\xeb\xb2\x94\xec\x9a\xa9" -"\x20\xec\xbb\xb4\xed\x93\xa8\xed\x84\xb0\x20\xed\x94\x84\xeb\xa1" -"\x9c\xea\xb7\xb8\xeb\x9e\x98\xeb\xb0\x8d\x20\xec\x96\xb8\xec\x96" -"\xb4\xeb\x8b\xa4\x2e\x0a\x0a"), -'gb2312': ( -"\x50\x79\x74\x68\x6f\x6e\xa3\xa8\xc5\xc9\xc9\xad\xa3\xa9\xd3\xef" -"\xd1\xd4\xca\xc7\xd2\xbb\xd6\xd6\xb9\xa6\xc4\xdc\xc7\xbf\xb4\xf3" -"\xb6\xf8\xcd\xea\xc9\xc6\xb5\xc4\xcd\xa8\xd3\xc3\xd0\xcd\xbc\xc6" -"\xcb\xe3\xbb\xfa\xb3\xcc\xd0\xf2\xc9\xe8\xbc\xc6\xd3\xef\xd1\xd4" -"\xa3\xac\x0a\xd2\xd1\xbe\xad\xbe\xdf\xd3\xd0\xca\xae\xb6\xe0\xc4" -"\xea\xb5\xc4\xb7\xa2\xd5\xb9\xc0\xfa\xca\xb7\xa3\xac\xb3\xc9\xca" -"\xec\xc7\xd2\xce\xc8\xb6\xa8\xa1\xa3\xd5\xe2\xd6\xd6\xd3\xef\xd1" -"\xd4\xbe\xdf\xd3\xd0\xb7\xc7\xb3\xa3\xbc\xf2\xbd\xdd\xb6\xf8\xc7" -"\xe5\xce\xfa\x0a\xb5\xc4\xd3\xef\xb7\xa8\xcc\xd8\xb5\xe3\xa3\xac" -"\xca\xca\xba\xcf\xcd\xea\xb3\xc9\xb8\xf7\xd6\xd6\xb8\xdf\xb2\xe3" -"\xc8\xce\xce\xf1\xa3\xac\xbc\xb8\xba\xf5\xbf\xc9\xd2\xd4\xd4\xda" -"\xcb\xf9\xd3\xd0\xb5\xc4\xb2\xd9\xd7\xf7\xcf\xb5\xcd\xb3\xd6\xd0" -"\x0a\xd4\xcb\xd0\xd0\xa1\xa3\xd5\xe2\xd6\xd6\xd3\xef\xd1\xd4\xbc" -"\xf2\xb5\xa5\xb6\xf8\xc7\xbf\xb4\xf3\xa3\xac\xca\xca\xba\xcf\xb8" -"\xf7\xd6\xd6\xc8\xcb\xca\xbf\xd1\xa7\xcf\xb0\xca\xb9\xd3\xc3\xa1" -"\xa3\xc4\xbf\xc7\xb0\xa3\xac\xbb\xf9\xd3\xda\xd5\xe2\x0a\xd6\xd6" -"\xd3\xef\xd1\xd4\xb5\xc4\xcf\xe0\xb9\xd8\xbc\xbc\xca\xf5\xd5\xfd" -"\xd4\xda\xb7\xc9\xcb\xd9\xb5\xc4\xb7\xa2\xd5\xb9\xa3\xac\xd3\xc3" -"\xbb\xa7\xca\xfd\xc1\xbf\xbc\xb1\xbe\xe7\xc0\xa9\xb4\xf3\xa3\xac" 
-"\xcf\xe0\xb9\xd8\xb5\xc4\xd7\xca\xd4\xb4\xb7\xc7\xb3\xa3\xb6\xe0" -"\xa1\xa3\x0a\x0a", -"\x50\x79\x74\x68\x6f\x6e\xef\xbc\x88\xe6\xb4\xbe\xe6\xa3\xae\xef" -"\xbc\x89\xe8\xaf\xad\xe8\xa8\x80\xe6\x98\xaf\xe4\xb8\x80\xe7\xa7" -"\x8d\xe5\x8a\x9f\xe8\x83\xbd\xe5\xbc\xba\xe5\xa4\xa7\xe8\x80\x8c" -"\xe5\xae\x8c\xe5\x96\x84\xe7\x9a\x84\xe9\x80\x9a\xe7\x94\xa8\xe5" -"\x9e\x8b\xe8\xae\xa1\xe7\xae\x97\xe6\x9c\xba\xe7\xa8\x8b\xe5\xba" -"\x8f\xe8\xae\xbe\xe8\xae\xa1\xe8\xaf\xad\xe8\xa8\x80\xef\xbc\x8c" -"\x0a\xe5\xb7\xb2\xe7\xbb\x8f\xe5\x85\xb7\xe6\x9c\x89\xe5\x8d\x81" -"\xe5\xa4\x9a\xe5\xb9\xb4\xe7\x9a\x84\xe5\x8f\x91\xe5\xb1\x95\xe5" -"\x8e\x86\xe5\x8f\xb2\xef\xbc\x8c\xe6\x88\x90\xe7\x86\x9f\xe4\xb8" -"\x94\xe7\xa8\xb3\xe5\xae\x9a\xe3\x80\x82\xe8\xbf\x99\xe7\xa7\x8d" -"\xe8\xaf\xad\xe8\xa8\x80\xe5\x85\xb7\xe6\x9c\x89\xe9\x9d\x9e\xe5" -"\xb8\xb8\xe7\xae\x80\xe6\x8d\xb7\xe8\x80\x8c\xe6\xb8\x85\xe6\x99" -"\xb0\x0a\xe7\x9a\x84\xe8\xaf\xad\xe6\xb3\x95\xe7\x89\xb9\xe7\x82" -"\xb9\xef\xbc\x8c\xe9\x80\x82\xe5\x90\x88\xe5\xae\x8c\xe6\x88\x90" -"\xe5\x90\x84\xe7\xa7\x8d\xe9\xab\x98\xe5\xb1\x82\xe4\xbb\xbb\xe5" -"\x8a\xa1\xef\xbc\x8c\xe5\x87\xa0\xe4\xb9\x8e\xe5\x8f\xaf\xe4\xbb" -"\xa5\xe5\x9c\xa8\xe6\x89\x80\xe6\x9c\x89\xe7\x9a\x84\xe6\x93\x8d" -"\xe4\xbd\x9c\xe7\xb3\xbb\xe7\xbb\x9f\xe4\xb8\xad\x0a\xe8\xbf\x90" -"\xe8\xa1\x8c\xe3\x80\x82\xe8\xbf\x99\xe7\xa7\x8d\xe8\xaf\xad\xe8" -"\xa8\x80\xe7\xae\x80\xe5\x8d\x95\xe8\x80\x8c\xe5\xbc\xba\xe5\xa4" -"\xa7\xef\xbc\x8c\xe9\x80\x82\xe5\x90\x88\xe5\x90\x84\xe7\xa7\x8d" -"\xe4\xba\xba\xe5\xa3\xab\xe5\xad\xa6\xe4\xb9\xa0\xe4\xbd\xbf\xe7" -"\x94\xa8\xe3\x80\x82\xe7\x9b\xae\xe5\x89\x8d\xef\xbc\x8c\xe5\x9f" -"\xba\xe4\xba\x8e\xe8\xbf\x99\x0a\xe7\xa7\x8d\xe8\xaf\xad\xe8\xa8" -"\x80\xe7\x9a\x84\xe7\x9b\xb8\xe5\x85\xb3\xe6\x8a\x80\xe6\x9c\xaf" -"\xe6\xad\xa3\xe5\x9c\xa8\xe9\xa3\x9e\xe9\x80\x9f\xe7\x9a\x84\xe5" -"\x8f\x91\xe5\xb1\x95\xef\xbc\x8c\xe7\x94\xa8\xe6\x88\xb7\xe6\x95" -"\xb0\xe9\x87\x8f\xe6\x80\xa5\xe5\x89\xa7\xe6\x89\xa9\xe5\xa4\xa7" -"\xef\xbc\x8c\xe7\x9b\xb8\xe5\x85\xb3\xe7\x9a\x84\xe8\xb5\x84\xe6" -"\xba\x90\xe9\x9d\x9e\xe5\xb8\xb8\xe5\xa4\x9a\xe3\x80\x82\x0a\x0a"), -'gbk': ( -"\x50\x79\x74\x68\x6f\x6e\xa3\xa8\xc5\xc9\xc9\xad\xa3\xa9\xd3\xef" -"\xd1\xd4\xca\xc7\xd2\xbb\xd6\xd6\xb9\xa6\xc4\xdc\xc7\xbf\xb4\xf3" -"\xb6\xf8\xcd\xea\xc9\xc6\xb5\xc4\xcd\xa8\xd3\xc3\xd0\xcd\xbc\xc6" -"\xcb\xe3\xbb\xfa\xb3\xcc\xd0\xf2\xc9\xe8\xbc\xc6\xd3\xef\xd1\xd4" -"\xa3\xac\x0a\xd2\xd1\xbe\xad\xbe\xdf\xd3\xd0\xca\xae\xb6\xe0\xc4" -"\xea\xb5\xc4\xb7\xa2\xd5\xb9\xc0\xfa\xca\xb7\xa3\xac\xb3\xc9\xca" -"\xec\xc7\xd2\xce\xc8\xb6\xa8\xa1\xa3\xd5\xe2\xd6\xd6\xd3\xef\xd1" -"\xd4\xbe\xdf\xd3\xd0\xb7\xc7\xb3\xa3\xbc\xf2\xbd\xdd\xb6\xf8\xc7" -"\xe5\xce\xfa\x0a\xb5\xc4\xd3\xef\xb7\xa8\xcc\xd8\xb5\xe3\xa3\xac" -"\xca\xca\xba\xcf\xcd\xea\xb3\xc9\xb8\xf7\xd6\xd6\xb8\xdf\xb2\xe3" -"\xc8\xce\xce\xf1\xa3\xac\xbc\xb8\xba\xf5\xbf\xc9\xd2\xd4\xd4\xda" -"\xcb\xf9\xd3\xd0\xb5\xc4\xb2\xd9\xd7\xf7\xcf\xb5\xcd\xb3\xd6\xd0" -"\x0a\xd4\xcb\xd0\xd0\xa1\xa3\xd5\xe2\xd6\xd6\xd3\xef\xd1\xd4\xbc" -"\xf2\xb5\xa5\xb6\xf8\xc7\xbf\xb4\xf3\xa3\xac\xca\xca\xba\xcf\xb8" -"\xf7\xd6\xd6\xc8\xcb\xca\xbf\xd1\xa7\xcf\xb0\xca\xb9\xd3\xc3\xa1" -"\xa3\xc4\xbf\xc7\xb0\xa3\xac\xbb\xf9\xd3\xda\xd5\xe2\x0a\xd6\xd6" -"\xd3\xef\xd1\xd4\xb5\xc4\xcf\xe0\xb9\xd8\xbc\xbc\xca\xf5\xd5\xfd" -"\xd4\xda\xb7\xc9\xcb\xd9\xb5\xc4\xb7\xa2\xd5\xb9\xa3\xac\xd3\xc3" -"\xbb\xa7\xca\xfd\xc1\xbf\xbc\xb1\xbe\xe7\xc0\xa9\xb4\xf3\xa3\xac" -"\xcf\xe0\xb9\xd8\xb5\xc4\xd7\xca\xd4\xb4\xb7\xc7\xb3\xa3\xb6\xe0" 
-"\xa1\xa3\x0a\xc8\xe7\xba\xce\xd4\xda\x20\x50\x79\x74\x68\x6f\x6e" -"\x20\xd6\xd0\xca\xb9\xd3\xc3\xbc\xc8\xd3\xd0\xb5\xc4\x20\x43\x20" -"\x6c\x69\x62\x72\x61\x72\x79\x3f\x0a\xa1\xa1\xd4\xda\xd9\x59\xd3" -"\x8d\xbf\xc6\xbc\xbc\xbf\xec\xcb\xd9\xb0\x6c\xd5\xb9\xb5\xc4\xbd" -"\xf1\xcc\xec\x2c\x20\xe9\x5f\xb0\x6c\xbc\xb0\x9c\x79\xd4\x87\xdc" -"\x9b\xf3\x77\xb5\xc4\xcb\xd9\xb6\xc8\xca\xc7\xb2\xbb\xc8\xdd\xba" -"\xf6\xd2\x95\xb5\xc4\x0a\xd5\x6e\xee\x7d\x2e\x20\x9e\xe9\xbc\xd3" -"\xbf\xec\xe9\x5f\xb0\x6c\xbc\xb0\x9c\x79\xd4\x87\xb5\xc4\xcb\xd9" -"\xb6\xc8\x2c\x20\xce\xd2\x82\x83\xb1\xe3\xb3\xa3\xcf\xa3\xcd\xfb" -"\xc4\xdc\xc0\xfb\xd3\xc3\xd2\xbb\xd0\xa9\xd2\xd1\xe9\x5f\xb0\x6c" -"\xba\xc3\xb5\xc4\x0a\x6c\x69\x62\x72\x61\x72\x79\x2c\x20\x81\x4b" -"\xd3\xd0\xd2\xbb\x82\x80\x20\x66\x61\x73\x74\x20\x70\x72\x6f\x74" -"\x6f\x74\x79\x70\x69\x6e\x67\x20\xb5\xc4\x20\x70\x72\x6f\x67\x72" -"\x61\x6d\x6d\x69\x6e\x67\x20\x6c\x61\x6e\x67\x75\x61\x67\x65\x20" -"\xbf\xc9\x0a\xb9\xa9\xca\xb9\xd3\xc3\x2e\x20\xc4\xbf\xc7\xb0\xd3" -"\xd0\xd4\x53\xd4\x53\xb6\xe0\xb6\xe0\xb5\xc4\x20\x6c\x69\x62\x72" -"\x61\x72\x79\x20\xca\xc7\xd2\xd4\x20\x43\x20\x8c\x91\xb3\xc9\x2c" -"\x20\xb6\xf8\x20\x50\x79\x74\x68\x6f\x6e\x20\xca\xc7\xd2\xbb\x82" -"\x80\x0a\x66\x61\x73\x74\x20\x70\x72\x6f\x74\x6f\x74\x79\x70\x69" -"\x6e\x67\x20\xb5\xc4\x20\x70\x72\x6f\x67\x72\x61\x6d\x6d\x69\x6e" -"\x67\x20\x6c\x61\x6e\x67\x75\x61\x67\x65\x2e\x20\xb9\xca\xce\xd2" -"\x82\x83\xcf\xa3\xcd\xfb\xc4\xdc\x8c\xa2\xbc\xc8\xd3\xd0\xb5\xc4" -"\x0a\x43\x20\x6c\x69\x62\x72\x61\x72\x79\x20\xc4\xc3\xb5\xbd\x20" -"\x50\x79\x74\x68\x6f\x6e\x20\xb5\xc4\xad\x68\xbe\xb3\xd6\xd0\x9c" -"\x79\xd4\x87\xbc\xb0\xd5\xfb\xba\xcf\x2e\x20\xc6\xe4\xd6\xd0\xd7" -"\xee\xd6\xf7\xd2\xaa\xd2\xb2\xca\xc7\xce\xd2\x82\x83\xcb\xf9\x0a" -"\xd2\xaa\xd3\x91\xd5\x93\xb5\xc4\x86\x96\xee\x7d\xbe\xcd\xca\xc7" -"\x3a\x0a\x0a", -"\x50\x79\x74\x68\x6f\x6e\xef\xbc\x88\xe6\xb4\xbe\xe6\xa3\xae\xef" -"\xbc\x89\xe8\xaf\xad\xe8\xa8\x80\xe6\x98\xaf\xe4\xb8\x80\xe7\xa7" -"\x8d\xe5\x8a\x9f\xe8\x83\xbd\xe5\xbc\xba\xe5\xa4\xa7\xe8\x80\x8c" -"\xe5\xae\x8c\xe5\x96\x84\xe7\x9a\x84\xe9\x80\x9a\xe7\x94\xa8\xe5" -"\x9e\x8b\xe8\xae\xa1\xe7\xae\x97\xe6\x9c\xba\xe7\xa8\x8b\xe5\xba" -"\x8f\xe8\xae\xbe\xe8\xae\xa1\xe8\xaf\xad\xe8\xa8\x80\xef\xbc\x8c" -"\x0a\xe5\xb7\xb2\xe7\xbb\x8f\xe5\x85\xb7\xe6\x9c\x89\xe5\x8d\x81" -"\xe5\xa4\x9a\xe5\xb9\xb4\xe7\x9a\x84\xe5\x8f\x91\xe5\xb1\x95\xe5" -"\x8e\x86\xe5\x8f\xb2\xef\xbc\x8c\xe6\x88\x90\xe7\x86\x9f\xe4\xb8" -"\x94\xe7\xa8\xb3\xe5\xae\x9a\xe3\x80\x82\xe8\xbf\x99\xe7\xa7\x8d" -"\xe8\xaf\xad\xe8\xa8\x80\xe5\x85\xb7\xe6\x9c\x89\xe9\x9d\x9e\xe5" -"\xb8\xb8\xe7\xae\x80\xe6\x8d\xb7\xe8\x80\x8c\xe6\xb8\x85\xe6\x99" -"\xb0\x0a\xe7\x9a\x84\xe8\xaf\xad\xe6\xb3\x95\xe7\x89\xb9\xe7\x82" -"\xb9\xef\xbc\x8c\xe9\x80\x82\xe5\x90\x88\xe5\xae\x8c\xe6\x88\x90" -"\xe5\x90\x84\xe7\xa7\x8d\xe9\xab\x98\xe5\xb1\x82\xe4\xbb\xbb\xe5" -"\x8a\xa1\xef\xbc\x8c\xe5\x87\xa0\xe4\xb9\x8e\xe5\x8f\xaf\xe4\xbb" -"\xa5\xe5\x9c\xa8\xe6\x89\x80\xe6\x9c\x89\xe7\x9a\x84\xe6\x93\x8d" -"\xe4\xbd\x9c\xe7\xb3\xbb\xe7\xbb\x9f\xe4\xb8\xad\x0a\xe8\xbf\x90" -"\xe8\xa1\x8c\xe3\x80\x82\xe8\xbf\x99\xe7\xa7\x8d\xe8\xaf\xad\xe8" -"\xa8\x80\xe7\xae\x80\xe5\x8d\x95\xe8\x80\x8c\xe5\xbc\xba\xe5\xa4" -"\xa7\xef\xbc\x8c\xe9\x80\x82\xe5\x90\x88\xe5\x90\x84\xe7\xa7\x8d" -"\xe4\xba\xba\xe5\xa3\xab\xe5\xad\xa6\xe4\xb9\xa0\xe4\xbd\xbf\xe7" -"\x94\xa8\xe3\x80\x82\xe7\x9b\xae\xe5\x89\x8d\xef\xbc\x8c\xe5\x9f" -"\xba\xe4\xba\x8e\xe8\xbf\x99\x0a\xe7\xa7\x8d\xe8\xaf\xad\xe8\xa8" -"\x80\xe7\x9a\x84\xe7\x9b\xb8\xe5\x85\xb3\xe6\x8a\x80\xe6\x9c\xaf" 
-"\xe6\xad\xa3\xe5\x9c\xa8\xe9\xa3\x9e\xe9\x80\x9f\xe7\x9a\x84\xe5" -"\x8f\x91\xe5\xb1\x95\xef\xbc\x8c\xe7\x94\xa8\xe6\x88\xb7\xe6\x95" -"\xb0\xe9\x87\x8f\xe6\x80\xa5\xe5\x89\xa7\xe6\x89\xa9\xe5\xa4\xa7" -"\xef\xbc\x8c\xe7\x9b\xb8\xe5\x85\xb3\xe7\x9a\x84\xe8\xb5\x84\xe6" -"\xba\x90\xe9\x9d\x9e\xe5\xb8\xb8\xe5\xa4\x9a\xe3\x80\x82\x0a\xe5" -"\xa6\x82\xe4\xbd\x95\xe5\x9c\xa8\x20\x50\x79\x74\x68\x6f\x6e\x20" -"\xe4\xb8\xad\xe4\xbd\xbf\xe7\x94\xa8\xe6\x97\xa2\xe6\x9c\x89\xe7" -"\x9a\x84\x20\x43\x20\x6c\x69\x62\x72\x61\x72\x79\x3f\x0a\xe3\x80" -"\x80\xe5\x9c\xa8\xe8\xb3\x87\xe8\xa8\x8a\xe7\xa7\x91\xe6\x8a\x80" -"\xe5\xbf\xab\xe9\x80\x9f\xe7\x99\xbc\xe5\xb1\x95\xe7\x9a\x84\xe4" -"\xbb\x8a\xe5\xa4\xa9\x2c\x20\xe9\x96\x8b\xe7\x99\xbc\xe5\x8f\x8a" -"\xe6\xb8\xac\xe8\xa9\xa6\xe8\xbb\x9f\xe9\xab\x94\xe7\x9a\x84\xe9" -"\x80\x9f\xe5\xba\xa6\xe6\x98\xaf\xe4\xb8\x8d\xe5\xae\xb9\xe5\xbf" -"\xbd\xe8\xa6\x96\xe7\x9a\x84\x0a\xe8\xaa\xb2\xe9\xa1\x8c\x2e\x20" -"\xe7\x82\xba\xe5\x8a\xa0\xe5\xbf\xab\xe9\x96\x8b\xe7\x99\xbc\xe5" -"\x8f\x8a\xe6\xb8\xac\xe8\xa9\xa6\xe7\x9a\x84\xe9\x80\x9f\xe5\xba" -"\xa6\x2c\x20\xe6\x88\x91\xe5\x80\x91\xe4\xbe\xbf\xe5\xb8\xb8\xe5" -"\xb8\x8c\xe6\x9c\x9b\xe8\x83\xbd\xe5\x88\xa9\xe7\x94\xa8\xe4\xb8" -"\x80\xe4\xba\x9b\xe5\xb7\xb2\xe9\x96\x8b\xe7\x99\xbc\xe5\xa5\xbd" -"\xe7\x9a\x84\x0a\x6c\x69\x62\x72\x61\x72\x79\x2c\x20\xe4\xb8\xa6" -"\xe6\x9c\x89\xe4\xb8\x80\xe5\x80\x8b\x20\x66\x61\x73\x74\x20\x70" -"\x72\x6f\x74\x6f\x74\x79\x70\x69\x6e\x67\x20\xe7\x9a\x84\x20\x70" -"\x72\x6f\x67\x72\x61\x6d\x6d\x69\x6e\x67\x20\x6c\x61\x6e\x67\x75" -"\x61\x67\x65\x20\xe5\x8f\xaf\x0a\xe4\xbe\x9b\xe4\xbd\xbf\xe7\x94" -"\xa8\x2e\x20\xe7\x9b\xae\xe5\x89\x8d\xe6\x9c\x89\xe8\xa8\xb1\xe8" -"\xa8\xb1\xe5\xa4\x9a\xe5\xa4\x9a\xe7\x9a\x84\x20\x6c\x69\x62\x72" -"\x61\x72\x79\x20\xe6\x98\xaf\xe4\xbb\xa5\x20\x43\x20\xe5\xaf\xab" -"\xe6\x88\x90\x2c\x20\xe8\x80\x8c\x20\x50\x79\x74\x68\x6f\x6e\x20" -"\xe6\x98\xaf\xe4\xb8\x80\xe5\x80\x8b\x0a\x66\x61\x73\x74\x20\x70" -"\x72\x6f\x74\x6f\x74\x79\x70\x69\x6e\x67\x20\xe7\x9a\x84\x20\x70" -"\x72\x6f\x67\x72\x61\x6d\x6d\x69\x6e\x67\x20\x6c\x61\x6e\x67\x75" -"\x61\x67\x65\x2e\x20\xe6\x95\x85\xe6\x88\x91\xe5\x80\x91\xe5\xb8" -"\x8c\xe6\x9c\x9b\xe8\x83\xbd\xe5\xb0\x87\xe6\x97\xa2\xe6\x9c\x89" -"\xe7\x9a\x84\x0a\x43\x20\x6c\x69\x62\x72\x61\x72\x79\x20\xe6\x8b" -"\xbf\xe5\x88\xb0\x20\x50\x79\x74\x68\x6f\x6e\x20\xe7\x9a\x84\xe7" -"\x92\xb0\xe5\xa2\x83\xe4\xb8\xad\xe6\xb8\xac\xe8\xa9\xa6\xe5\x8f" -"\x8a\xe6\x95\xb4\xe5\x90\x88\x2e\x20\xe5\x85\xb6\xe4\xb8\xad\xe6" -"\x9c\x80\xe4\xb8\xbb\xe8\xa6\x81\xe4\xb9\x9f\xe6\x98\xaf\xe6\x88" -"\x91\xe5\x80\x91\xe6\x89\x80\x0a\xe8\xa6\x81\xe8\xa8\x8e\xe8\xab" -"\x96\xe7\x9a\x84\xe5\x95\x8f\xe9\xa1\x8c\xe5\xb0\xb1\xe6\x98\xaf" -"\x3a\x0a\x0a"), -'johab': ( -"\x99\xb1\xa4\x77\x88\x62\xd0\x61\x20\xcd\x5c\xaf\xa1\xc5\xa9\x9c" -"\x61\x0a\x0a\xdc\xc0\xdc\xc0\x90\x73\x21\x21\x20\xf1\x67\xe2\x9c" -"\xf0\x55\xcc\x81\xa3\x89\x9f\x85\x8a\xa1\x20\xdc\xde\xdc\xd3\xd2" -"\x7a\xd9\xaf\xd9\xaf\xd9\xaf\x20\x8b\x77\x96\xd3\x20\xdc\xd1\x95" -"\x81\x20\xdc\xc0\x2e\x20\x2e\x0a\xed\x3c\xb5\x77\xdc\xd1\x93\x77" -"\xd2\x73\x20\x2e\x20\x2e\x20\x2e\x20\x2e\x20\xac\xe1\xb6\x89\x9e" -"\xa1\x20\x95\x65\xd0\x62\xf0\xe0\x20\xe0\x3b\xd2\x7a\x20\x21\x20" -"\x21\x20\x21\x87\x41\x2e\x87\x41\x0a\xd3\x61\xd3\x61\xd3\x61\x20" -"\x88\x41\x88\x41\x88\x41\xd9\x69\x87\x41\x5f\x87\x41\x20\xb4\xe1" -"\x9f\x9a\x20\xc8\xa1\xc5\xc1\x8b\x7a\x20\x95\x61\xb7\x77\x20\xc3" -"\x97\xe2\x9c\x97\x69\xf0\xe0\x20\xdc\xc0\x97\x61\x8b\x7a\x0a\xac" 
-"\xe9\x9f\x7a\x20\xe0\x3b\xd2\x7a\x20\x2e\x20\x2e\x20\x2e\x20\x2e" -"\x20\x8a\x89\xb4\x81\xae\xba\x20\xdc\xd1\x8a\xa1\x20\xdc\xde\x9f" -"\x89\xdc\xc2\x8b\x7a\x20\xf1\x67\xf1\x62\xf5\x49\xed\xfc\xf3\xe9" -"\x8c\x61\xbb\x9a\x0a\xb5\xc1\xb2\xa1\xd2\x7a\x20\x21\x20\x21\x20" -"\xed\x3c\xb5\x77\xdc\xd1\x20\xe0\x3b\x93\x77\x8a\xa1\x20\xd9\x69" -"\xea\xbe\x89\xc5\x20\xb4\xf4\x93\x77\x8a\xa1\x93\x77\x20\xed\x3c" -"\x93\x77\x96\xc1\xd2\x7a\x20\x8b\x69\xb4\x81\x97\x7a\x0a\xdc\xde" -"\x9d\x61\x97\x41\xe2\x9c\x20\xaf\x81\xce\xa1\xae\xa1\xd2\x7a\x20" -"\xb4\xe1\x9f\x9a\x20\xf1\x67\xf1\x62\xf5\x49\xed\xfc\xf3\xe9\xaf" -"\x82\xdc\xef\x97\x69\xb4\x7a\x21\x21\x20\xdc\xc0\xdc\xc0\x90\x73" -"\xd9\xbd\x20\xd9\x62\xd9\x62\x2a\x0a\x0a", -"\xeb\x98\xa0\xeb\xb0\xa9\xea\xb0\x81\xed\x95\x98\x20\xed\x8e\xb2" -"\xec\x8b\x9c\xec\xbd\x9c\xeb\x9d\xbc\x0a\x0a\xe3\x89\xaf\xe3\x89" -"\xaf\xeb\x82\xa9\x21\x21\x20\xe5\x9b\xa0\xe4\xb9\x9d\xe6\x9c\x88" -"\xed\x8c\xa8\xeb\xaf\xa4\xeb\xa6\x94\xea\xb6\x88\x20\xe2\x93\xa1" -"\xe2\x93\x96\xed\x9b\x80\xc2\xbf\xc2\xbf\xc2\xbf\x20\xea\xb8\x8d" -"\xeb\x92\x99\x20\xe2\x93\x94\xeb\x8e\xa8\x20\xe3\x89\xaf\x2e\x20" -"\x2e\x0a\xe4\xba\x9e\xec\x98\x81\xe2\x93\x94\xeb\x8a\xa5\xed\x9a" -"\xb9\x20\x2e\x20\x2e\x20\x2e\x20\x2e\x20\xec\x84\x9c\xec\x9a\xb8" -"\xeb\xa4\x84\x20\xeb\x8e\x90\xed\x95\x99\xe4\xb9\x99\x20\xe5\xae" -"\xb6\xed\x9b\x80\x20\x21\x20\x21\x20\x21\xe3\x85\xa0\x2e\xe3\x85" -"\xa0\x0a\xed\x9d\x90\xed\x9d\x90\xed\x9d\x90\x20\xe3\x84\xb1\xe3" -"\x84\xb1\xe3\x84\xb1\xe2\x98\x86\xe3\x85\xa0\x5f\xe3\x85\xa0\x20" -"\xec\x96\xb4\xeb\xa6\xa8\x20\xed\x83\xb8\xec\xbd\xb0\xea\xb8\x90" -"\x20\xeb\x8e\x8c\xec\x9d\x91\x20\xec\xb9\x91\xe4\xb9\x9d\xeb\x93" -"\xa4\xe4\xb9\x99\x20\xe3\x89\xaf\xeb\x93\x9c\xea\xb8\x90\x0a\xec" -"\x84\xa4\xeb\xa6\x8c\x20\xe5\xae\xb6\xed\x9b\x80\x20\x2e\x20\x2e" -"\x20\x2e\x20\x2e\x20\xea\xb5\xb4\xec\x95\xa0\xec\x89\x8c\x20\xe2" -"\x93\x94\xea\xb6\x88\x20\xe2\x93\xa1\xeb\xa6\x98\xe3\x89\xb1\xea" -"\xb8\x90\x20\xe5\x9b\xa0\xe4\xbb\x81\xe5\xb7\x9d\xef\xa6\x81\xe4" -"\xb8\xad\xea\xb9\x8c\xec\xa6\xbc\x0a\xec\x99\x80\xec\x92\x80\xed" -"\x9b\x80\x20\x21\x20\x21\x20\xe4\xba\x9e\xec\x98\x81\xe2\x93\x94" -"\x20\xe5\xae\xb6\xeb\x8a\xa5\xea\xb6\x88\x20\xe2\x98\x86\xe4\xb8" -"\x8a\xea\xb4\x80\x20\xec\x97\x86\xeb\x8a\xa5\xea\xb6\x88\xeb\x8a" -"\xa5\x20\xe4\xba\x9e\xeb\x8a\xa5\xeb\x92\x88\xed\x9b\x80\x20\xea" -"\xb8\x80\xec\x95\xa0\xeb\x93\xb4\x0a\xe2\x93\xa1\xeb\xa0\xa4\xeb" -"\x93\x80\xe4\xb9\x9d\x20\xec\x8b\x80\xed\x92\x94\xec\x88\xb4\xed" -"\x9b\x80\x20\xec\x96\xb4\xeb\xa6\xa8\x20\xe5\x9b\xa0\xe4\xbb\x81" -"\xe5\xb7\x9d\xef\xa6\x81\xe4\xb8\xad\xec\x8b\x81\xe2\x91\xa8\xeb" -"\x93\xa4\xec\x95\x9c\x21\x21\x20\xe3\x89\xaf\xe3\x89\xaf\xeb\x82" -"\xa9\xe2\x99\xa1\x20\xe2\x8c\x92\xe2\x8c\x92\x2a\x0a\x0a"), -'shift_jis': ( -"\x50\x79\x74\x68\x6f\x6e\x20\x82\xcc\x8a\x4a\x94\xad\x82\xcd\x81" -"\x41\x31\x39\x39\x30\x20\x94\x4e\x82\xb2\x82\xeb\x82\xa9\x82\xe7" -"\x8a\x4a\x8e\x6e\x82\xb3\x82\xea\x82\xc4\x82\xa2\x82\xdc\x82\xb7" -"\x81\x42\x0a\x8a\x4a\x94\xad\x8e\xd2\x82\xcc\x20\x47\x75\x69\x64" -"\x6f\x20\x76\x61\x6e\x20\x52\x6f\x73\x73\x75\x6d\x20\x82\xcd\x8b" -"\xb3\x88\xe7\x97\x70\x82\xcc\x83\x76\x83\x8d\x83\x4f\x83\x89\x83" -"\x7e\x83\x93\x83\x4f\x8c\xbe\x8c\xea\x81\x75\x41\x42\x43\x81\x76" -"\x82\xcc\x8a\x4a\x94\xad\x82\xc9\x8e\x51\x89\xc1\x82\xb5\x82\xc4" -"\x82\xa2\x82\xdc\x82\xb5\x82\xbd\x82\xaa\x81\x41\x41\x42\x43\x20" -"\x82\xcd\x8e\xc0\x97\x70\x8f\xe3\x82\xcc\x96\xda\x93\x49\x82\xc9" -"\x82\xcd\x82\xa0\x82\xdc\x82\xe8\x93\x4b\x82\xb5\x82\xc4\x82\xa2" 
-"\x82\xdc\x82\xb9\x82\xf1\x82\xc5\x82\xb5\x82\xbd\x81\x42\x0a\x82" -"\xb1\x82\xcc\x82\xbd\x82\xdf\x81\x41\x47\x75\x69\x64\x6f\x20\x82" -"\xcd\x82\xe6\x82\xe8\x8e\xc0\x97\x70\x93\x49\x82\xc8\x83\x76\x83" -"\x8d\x83\x4f\x83\x89\x83\x7e\x83\x93\x83\x4f\x8c\xbe\x8c\xea\x82" -"\xcc\x8a\x4a\x94\xad\x82\xf0\x8a\x4a\x8e\x6e\x82\xb5\x81\x41\x89" -"\x70\x8d\x91\x20\x42\x42\x53\x20\x95\xfa\x91\x97\x82\xcc\x83\x52" -"\x83\x81\x83\x66\x83\x42\x94\xd4\x91\x67\x81\x75\x83\x82\x83\x93" -"\x83\x65\x83\x42\x20\x83\x70\x83\x43\x83\x5c\x83\x93\x81\x76\x82" -"\xcc\x83\x74\x83\x40\x83\x93\x82\xc5\x82\xa0\x82\xe9\x20\x47\x75" -"\x69\x64\x6f\x20\x82\xcd\x82\xb1\x82\xcc\x8c\xbe\x8c\xea\x82\xf0" -"\x81\x75\x50\x79\x74\x68\x6f\x6e\x81\x76\x82\xc6\x96\xbc\x82\xc3" -"\x82\xaf\x82\xdc\x82\xb5\x82\xbd\x81\x42\x0a\x82\xb1\x82\xcc\x82" -"\xe6\x82\xa4\x82\xc8\x94\x77\x8c\x69\x82\xa9\x82\xe7\x90\xb6\x82" -"\xdc\x82\xea\x82\xbd\x20\x50\x79\x74\x68\x6f\x6e\x20\x82\xcc\x8c" -"\xbe\x8c\xea\x90\xdd\x8c\x76\x82\xcd\x81\x41\x81\x75\x83\x56\x83" -"\x93\x83\x76\x83\x8b\x81\x76\x82\xc5\x81\x75\x8f\x4b\x93\xbe\x82" -"\xaa\x97\x65\x88\xd5\x81\x76\x82\xc6\x82\xa2\x82\xa4\x96\xda\x95" -"\x57\x82\xc9\x8f\x64\x93\x5f\x82\xaa\x92\x75\x82\xa9\x82\xea\x82" -"\xc4\x82\xa2\x82\xdc\x82\xb7\x81\x42\x0a\x91\xbd\x82\xad\x82\xcc" -"\x83\x58\x83\x4e\x83\x8a\x83\x76\x83\x67\x8c\x6e\x8c\xbe\x8c\xea" -"\x82\xc5\x82\xcd\x83\x86\x81\x5b\x83\x55\x82\xcc\x96\xda\x90\xe6" -"\x82\xcc\x97\x98\x95\xd6\x90\xab\x82\xf0\x97\x44\x90\xe6\x82\xb5" -"\x82\xc4\x90\x46\x81\x58\x82\xc8\x8b\x40\x94\x5c\x82\xf0\x8c\xbe" -"\x8c\xea\x97\x76\x91\x66\x82\xc6\x82\xb5\x82\xc4\x8e\xe6\x82\xe8" -"\x93\xfc\x82\xea\x82\xe9\x8f\xea\x8d\x87\x82\xaa\x91\xbd\x82\xa2" -"\x82\xcc\x82\xc5\x82\xb7\x82\xaa\x81\x41\x50\x79\x74\x68\x6f\x6e" -"\x20\x82\xc5\x82\xcd\x82\xbb\x82\xa4\x82\xa2\x82\xc1\x82\xbd\x8f" -"\xac\x8d\xd7\x8d\x48\x82\xaa\x92\xc7\x89\xc1\x82\xb3\x82\xea\x82" -"\xe9\x82\xb1\x82\xc6\x82\xcd\x82\xa0\x82\xdc\x82\xe8\x82\xa0\x82" -"\xe8\x82\xdc\x82\xb9\x82\xf1\x81\x42\x0a\x8c\xbe\x8c\xea\x8e\xa9" -"\x91\xcc\x82\xcc\x8b\x40\x94\x5c\x82\xcd\x8d\xc5\x8f\xac\x8c\xc0" -"\x82\xc9\x89\x9f\x82\xb3\x82\xa6\x81\x41\x95\x4b\x97\x76\x82\xc8" -"\x8b\x40\x94\x5c\x82\xcd\x8a\x67\x92\xa3\x83\x82\x83\x57\x83\x85" -"\x81\x5b\x83\x8b\x82\xc6\x82\xb5\x82\xc4\x92\xc7\x89\xc1\x82\xb7" -"\x82\xe9\x81\x41\x82\xc6\x82\xa2\x82\xa4\x82\xcc\x82\xaa\x20\x50" -"\x79\x74\x68\x6f\x6e\x20\x82\xcc\x83\x7c\x83\x8a\x83\x56\x81\x5b" -"\x82\xc5\x82\xb7\x81\x42\x0a\x0a", -"\x50\x79\x74\x68\x6f\x6e\x20\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba" -"\xe3\x81\xaf\xe3\x80\x81\x31\x39\x39\x30\x20\xe5\xb9\xb4\xe3\x81" -"\x94\xe3\x82\x8d\xe3\x81\x8b\xe3\x82\x89\xe9\x96\x8b\xe5\xa7\x8b" -"\xe3\x81\x95\xe3\x82\x8c\xe3\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3" -"\x81\x99\xe3\x80\x82\x0a\xe9\x96\x8b\xe7\x99\xba\xe8\x80\x85\xe3" -"\x81\xae\x20\x47\x75\x69\x64\x6f\x20\x76\x61\x6e\x20\x52\x6f\x73" -"\x73\x75\x6d\x20\xe3\x81\xaf\xe6\x95\x99\xe8\x82\xb2\xe7\x94\xa8" -"\xe3\x81\xae\xe3\x83\x97\xe3\x83\xad\xe3\x82\xb0\xe3\x83\xa9\xe3" -"\x83\x9f\xe3\x83\xb3\xe3\x82\xb0\xe8\xa8\x80\xe8\xaa\x9e\xe3\x80" -"\x8c\x41\x42\x43\xe3\x80\x8d\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba" -"\xe3\x81\xab\xe5\x8f\x82\xe5\x8a\xa0\xe3\x81\x97\xe3\x81\xa6\xe3" -"\x81\x84\xe3\x81\xbe\xe3\x81\x97\xe3\x81\x9f\xe3\x81\x8c\xe3\x80" -"\x81\x41\x42\x43\x20\xe3\x81\xaf\xe5\xae\x9f\xe7\x94\xa8\xe4\xb8" -"\x8a\xe3\x81\xae\xe7\x9b\xae\xe7\x9a\x84\xe3\x81\xab\xe3\x81\xaf" -"\xe3\x81\x82\xe3\x81\xbe\xe3\x82\x8a\xe9\x81\xa9\xe3\x81\x97\xe3" 
-"\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3\x81\x9b\xe3\x82\x93\xe3\x81" -"\xa7\xe3\x81\x97\xe3\x81\x9f\xe3\x80\x82\x0a\xe3\x81\x93\xe3\x81" -"\xae\xe3\x81\x9f\xe3\x82\x81\xe3\x80\x81\x47\x75\x69\x64\x6f\x20" -"\xe3\x81\xaf\xe3\x82\x88\xe3\x82\x8a\xe5\xae\x9f\xe7\x94\xa8\xe7" -"\x9a\x84\xe3\x81\xaa\xe3\x83\x97\xe3\x83\xad\xe3\x82\xb0\xe3\x83" -"\xa9\xe3\x83\x9f\xe3\x83\xb3\xe3\x82\xb0\xe8\xa8\x80\xe8\xaa\x9e" -"\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba\xe3\x82\x92\xe9\x96\x8b\xe5" -"\xa7\x8b\xe3\x81\x97\xe3\x80\x81\xe8\x8b\xb1\xe5\x9b\xbd\x20\x42" -"\x42\x53\x20\xe6\x94\xbe\xe9\x80\x81\xe3\x81\xae\xe3\x82\xb3\xe3" -"\x83\xa1\xe3\x83\x87\xe3\x82\xa3\xe7\x95\xaa\xe7\xb5\x84\xe3\x80" -"\x8c\xe3\x83\xa2\xe3\x83\xb3\xe3\x83\x86\xe3\x82\xa3\x20\xe3\x83" -"\x91\xe3\x82\xa4\xe3\x82\xbd\xe3\x83\xb3\xe3\x80\x8d\xe3\x81\xae" -"\xe3\x83\x95\xe3\x82\xa1\xe3\x83\xb3\xe3\x81\xa7\xe3\x81\x82\xe3" -"\x82\x8b\x20\x47\x75\x69\x64\x6f\x20\xe3\x81\xaf\xe3\x81\x93\xe3" -"\x81\xae\xe8\xa8\x80\xe8\xaa\x9e\xe3\x82\x92\xe3\x80\x8c\x50\x79" -"\x74\x68\x6f\x6e\xe3\x80\x8d\xe3\x81\xa8\xe5\x90\x8d\xe3\x81\xa5" -"\xe3\x81\x91\xe3\x81\xbe\xe3\x81\x97\xe3\x81\x9f\xe3\x80\x82\x0a" -"\xe3\x81\x93\xe3\x81\xae\xe3\x82\x88\xe3\x81\x86\xe3\x81\xaa\xe8" -"\x83\x8c\xe6\x99\xaf\xe3\x81\x8b\xe3\x82\x89\xe7\x94\x9f\xe3\x81" -"\xbe\xe3\x82\x8c\xe3\x81\x9f\x20\x50\x79\x74\x68\x6f\x6e\x20\xe3" -"\x81\xae\xe8\xa8\x80\xe8\xaa\x9e\xe8\xa8\xad\xe8\xa8\x88\xe3\x81" -"\xaf\xe3\x80\x81\xe3\x80\x8c\xe3\x82\xb7\xe3\x83\xb3\xe3\x83\x97" -"\xe3\x83\xab\xe3\x80\x8d\xe3\x81\xa7\xe3\x80\x8c\xe7\xbf\x92\xe5" -"\xbe\x97\xe3\x81\x8c\xe5\xae\xb9\xe6\x98\x93\xe3\x80\x8d\xe3\x81" -"\xa8\xe3\x81\x84\xe3\x81\x86\xe7\x9b\xae\xe6\xa8\x99\xe3\x81\xab" -"\xe9\x87\x8d\xe7\x82\xb9\xe3\x81\x8c\xe7\xbd\xae\xe3\x81\x8b\xe3" -"\x82\x8c\xe3\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3\x81\x99\xe3\x80" -"\x82\x0a\xe5\xa4\x9a\xe3\x81\x8f\xe3\x81\xae\xe3\x82\xb9\xe3\x82" -"\xaf\xe3\x83\xaa\xe3\x83\x97\xe3\x83\x88\xe7\xb3\xbb\xe8\xa8\x80" -"\xe8\xaa\x9e\xe3\x81\xa7\xe3\x81\xaf\xe3\x83\xa6\xe3\x83\xbc\xe3" -"\x82\xb6\xe3\x81\xae\xe7\x9b\xae\xe5\x85\x88\xe3\x81\xae\xe5\x88" -"\xa9\xe4\xbe\xbf\xe6\x80\xa7\xe3\x82\x92\xe5\x84\xaa\xe5\x85\x88" -"\xe3\x81\x97\xe3\x81\xa6\xe8\x89\xb2\xe3\x80\x85\xe3\x81\xaa\xe6" -"\xa9\x9f\xe8\x83\xbd\xe3\x82\x92\xe8\xa8\x80\xe8\xaa\x9e\xe8\xa6" -"\x81\xe7\xb4\xa0\xe3\x81\xa8\xe3\x81\x97\xe3\x81\xa6\xe5\x8f\x96" -"\xe3\x82\x8a\xe5\x85\xa5\xe3\x82\x8c\xe3\x82\x8b\xe5\xa0\xb4\xe5" -"\x90\x88\xe3\x81\x8c\xe5\xa4\x9a\xe3\x81\x84\xe3\x81\xae\xe3\x81" -"\xa7\xe3\x81\x99\xe3\x81\x8c\xe3\x80\x81\x50\x79\x74\x68\x6f\x6e" -"\x20\xe3\x81\xa7\xe3\x81\xaf\xe3\x81\x9d\xe3\x81\x86\xe3\x81\x84" -"\xe3\x81\xa3\xe3\x81\x9f\xe5\xb0\x8f\xe7\xb4\xb0\xe5\xb7\xa5\xe3" -"\x81\x8c\xe8\xbf\xbd\xe5\x8a\xa0\xe3\x81\x95\xe3\x82\x8c\xe3\x82" -"\x8b\xe3\x81\x93\xe3\x81\xa8\xe3\x81\xaf\xe3\x81\x82\xe3\x81\xbe" -"\xe3\x82\x8a\xe3\x81\x82\xe3\x82\x8a\xe3\x81\xbe\xe3\x81\x9b\xe3" -"\x82\x93\xe3\x80\x82\x0a\xe8\xa8\x80\xe8\xaa\x9e\xe8\x87\xaa\xe4" -"\xbd\x93\xe3\x81\xae\xe6\xa9\x9f\xe8\x83\xbd\xe3\x81\xaf\xe6\x9c" -"\x80\xe5\xb0\x8f\xe9\x99\x90\xe3\x81\xab\xe6\x8a\xbc\xe3\x81\x95" -"\xe3\x81\x88\xe3\x80\x81\xe5\xbf\x85\xe8\xa6\x81\xe3\x81\xaa\xe6" -"\xa9\x9f\xe8\x83\xbd\xe3\x81\xaf\xe6\x8b\xa1\xe5\xbc\xb5\xe3\x83" -"\xa2\xe3\x82\xb8\xe3\x83\xa5\xe3\x83\xbc\xe3\x83\xab\xe3\x81\xa8" -"\xe3\x81\x97\xe3\x81\xa6\xe8\xbf\xbd\xe5\x8a\xa0\xe3\x81\x99\xe3" -"\x82\x8b\xe3\x80\x81\xe3\x81\xa8\xe3\x81\x84\xe3\x81\x86\xe3\x81" -"\xae\xe3\x81\x8c\x20\x50\x79\x74\x68\x6f\x6e\x20\xe3\x81\xae\xe3" 
-"\x83\x9d\xe3\x83\xaa\xe3\x82\xb7\xe3\x83\xbc\xe3\x81\xa7\xe3\x81" -"\x99\xe3\x80\x82\x0a\x0a"), -'shift_jisx0213': ( -"\x50\x79\x74\x68\x6f\x6e\x20\x82\xcc\x8a\x4a\x94\xad\x82\xcd\x81" -"\x41\x31\x39\x39\x30\x20\x94\x4e\x82\xb2\x82\xeb\x82\xa9\x82\xe7" -"\x8a\x4a\x8e\x6e\x82\xb3\x82\xea\x82\xc4\x82\xa2\x82\xdc\x82\xb7" -"\x81\x42\x0a\x8a\x4a\x94\xad\x8e\xd2\x82\xcc\x20\x47\x75\x69\x64" -"\x6f\x20\x76\x61\x6e\x20\x52\x6f\x73\x73\x75\x6d\x20\x82\xcd\x8b" -"\xb3\x88\xe7\x97\x70\x82\xcc\x83\x76\x83\x8d\x83\x4f\x83\x89\x83" -"\x7e\x83\x93\x83\x4f\x8c\xbe\x8c\xea\x81\x75\x41\x42\x43\x81\x76" -"\x82\xcc\x8a\x4a\x94\xad\x82\xc9\x8e\x51\x89\xc1\x82\xb5\x82\xc4" -"\x82\xa2\x82\xdc\x82\xb5\x82\xbd\x82\xaa\x81\x41\x41\x42\x43\x20" -"\x82\xcd\x8e\xc0\x97\x70\x8f\xe3\x82\xcc\x96\xda\x93\x49\x82\xc9" -"\x82\xcd\x82\xa0\x82\xdc\x82\xe8\x93\x4b\x82\xb5\x82\xc4\x82\xa2" -"\x82\xdc\x82\xb9\x82\xf1\x82\xc5\x82\xb5\x82\xbd\x81\x42\x0a\x82" -"\xb1\x82\xcc\x82\xbd\x82\xdf\x81\x41\x47\x75\x69\x64\x6f\x20\x82" -"\xcd\x82\xe6\x82\xe8\x8e\xc0\x97\x70\x93\x49\x82\xc8\x83\x76\x83" -"\x8d\x83\x4f\x83\x89\x83\x7e\x83\x93\x83\x4f\x8c\xbe\x8c\xea\x82" -"\xcc\x8a\x4a\x94\xad\x82\xf0\x8a\x4a\x8e\x6e\x82\xb5\x81\x41\x89" -"\x70\x8d\x91\x20\x42\x42\x53\x20\x95\xfa\x91\x97\x82\xcc\x83\x52" -"\x83\x81\x83\x66\x83\x42\x94\xd4\x91\x67\x81\x75\x83\x82\x83\x93" -"\x83\x65\x83\x42\x20\x83\x70\x83\x43\x83\x5c\x83\x93\x81\x76\x82" -"\xcc\x83\x74\x83\x40\x83\x93\x82\xc5\x82\xa0\x82\xe9\x20\x47\x75" -"\x69\x64\x6f\x20\x82\xcd\x82\xb1\x82\xcc\x8c\xbe\x8c\xea\x82\xf0" -"\x81\x75\x50\x79\x74\x68\x6f\x6e\x81\x76\x82\xc6\x96\xbc\x82\xc3" -"\x82\xaf\x82\xdc\x82\xb5\x82\xbd\x81\x42\x0a\x82\xb1\x82\xcc\x82" -"\xe6\x82\xa4\x82\xc8\x94\x77\x8c\x69\x82\xa9\x82\xe7\x90\xb6\x82" -"\xdc\x82\xea\x82\xbd\x20\x50\x79\x74\x68\x6f\x6e\x20\x82\xcc\x8c" -"\xbe\x8c\xea\x90\xdd\x8c\x76\x82\xcd\x81\x41\x81\x75\x83\x56\x83" -"\x93\x83\x76\x83\x8b\x81\x76\x82\xc5\x81\x75\x8f\x4b\x93\xbe\x82" -"\xaa\x97\x65\x88\xd5\x81\x76\x82\xc6\x82\xa2\x82\xa4\x96\xda\x95" -"\x57\x82\xc9\x8f\x64\x93\x5f\x82\xaa\x92\x75\x82\xa9\x82\xea\x82" -"\xc4\x82\xa2\x82\xdc\x82\xb7\x81\x42\x0a\x91\xbd\x82\xad\x82\xcc" -"\x83\x58\x83\x4e\x83\x8a\x83\x76\x83\x67\x8c\x6e\x8c\xbe\x8c\xea" -"\x82\xc5\x82\xcd\x83\x86\x81\x5b\x83\x55\x82\xcc\x96\xda\x90\xe6" -"\x82\xcc\x97\x98\x95\xd6\x90\xab\x82\xf0\x97\x44\x90\xe6\x82\xb5" -"\x82\xc4\x90\x46\x81\x58\x82\xc8\x8b\x40\x94\x5c\x82\xf0\x8c\xbe" -"\x8c\xea\x97\x76\x91\x66\x82\xc6\x82\xb5\x82\xc4\x8e\xe6\x82\xe8" -"\x93\xfc\x82\xea\x82\xe9\x8f\xea\x8d\x87\x82\xaa\x91\xbd\x82\xa2" -"\x82\xcc\x82\xc5\x82\xb7\x82\xaa\x81\x41\x50\x79\x74\x68\x6f\x6e" -"\x20\x82\xc5\x82\xcd\x82\xbb\x82\xa4\x82\xa2\x82\xc1\x82\xbd\x8f" -"\xac\x8d\xd7\x8d\x48\x82\xaa\x92\xc7\x89\xc1\x82\xb3\x82\xea\x82" -"\xe9\x82\xb1\x82\xc6\x82\xcd\x82\xa0\x82\xdc\x82\xe8\x82\xa0\x82" -"\xe8\x82\xdc\x82\xb9\x82\xf1\x81\x42\x0a\x8c\xbe\x8c\xea\x8e\xa9" -"\x91\xcc\x82\xcc\x8b\x40\x94\x5c\x82\xcd\x8d\xc5\x8f\xac\x8c\xc0" -"\x82\xc9\x89\x9f\x82\xb3\x82\xa6\x81\x41\x95\x4b\x97\x76\x82\xc8" -"\x8b\x40\x94\x5c\x82\xcd\x8a\x67\x92\xa3\x83\x82\x83\x57\x83\x85" -"\x81\x5b\x83\x8b\x82\xc6\x82\xb5\x82\xc4\x92\xc7\x89\xc1\x82\xb7" -"\x82\xe9\x81\x41\x82\xc6\x82\xa2\x82\xa4\x82\xcc\x82\xaa\x20\x50" -"\x79\x74\x68\x6f\x6e\x20\x82\xcc\x83\x7c\x83\x8a\x83\x56\x81\x5b" -"\x82\xc5\x82\xb7\x81\x42\x0a\x0a\x83\x6d\x82\xf5\x20\x83\x9e\x20" -"\x83\x67\x83\x4c\x88\x4b\x88\x79\x20\x98\x83\xfc\xd6\x20\xfc\xd2" -"\xfc\xe6\xfb\xd4\x0a", -"\x50\x79\x74\x68\x6f\x6e\x20\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba" 
-"\xe3\x81\xaf\xe3\x80\x81\x31\x39\x39\x30\x20\xe5\xb9\xb4\xe3\x81" -"\x94\xe3\x82\x8d\xe3\x81\x8b\xe3\x82\x89\xe9\x96\x8b\xe5\xa7\x8b" -"\xe3\x81\x95\xe3\x82\x8c\xe3\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3" -"\x81\x99\xe3\x80\x82\x0a\xe9\x96\x8b\xe7\x99\xba\xe8\x80\x85\xe3" -"\x81\xae\x20\x47\x75\x69\x64\x6f\x20\x76\x61\x6e\x20\x52\x6f\x73" -"\x73\x75\x6d\x20\xe3\x81\xaf\xe6\x95\x99\xe8\x82\xb2\xe7\x94\xa8" -"\xe3\x81\xae\xe3\x83\x97\xe3\x83\xad\xe3\x82\xb0\xe3\x83\xa9\xe3" -"\x83\x9f\xe3\x83\xb3\xe3\x82\xb0\xe8\xa8\x80\xe8\xaa\x9e\xe3\x80" -"\x8c\x41\x42\x43\xe3\x80\x8d\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba" -"\xe3\x81\xab\xe5\x8f\x82\xe5\x8a\xa0\xe3\x81\x97\xe3\x81\xa6\xe3" -"\x81\x84\xe3\x81\xbe\xe3\x81\x97\xe3\x81\x9f\xe3\x81\x8c\xe3\x80" -"\x81\x41\x42\x43\x20\xe3\x81\xaf\xe5\xae\x9f\xe7\x94\xa8\xe4\xb8" -"\x8a\xe3\x81\xae\xe7\x9b\xae\xe7\x9a\x84\xe3\x81\xab\xe3\x81\xaf" -"\xe3\x81\x82\xe3\x81\xbe\xe3\x82\x8a\xe9\x81\xa9\xe3\x81\x97\xe3" -"\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3\x81\x9b\xe3\x82\x93\xe3\x81" -"\xa7\xe3\x81\x97\xe3\x81\x9f\xe3\x80\x82\x0a\xe3\x81\x93\xe3\x81" -"\xae\xe3\x81\x9f\xe3\x82\x81\xe3\x80\x81\x47\x75\x69\x64\x6f\x20" -"\xe3\x81\xaf\xe3\x82\x88\xe3\x82\x8a\xe5\xae\x9f\xe7\x94\xa8\xe7" -"\x9a\x84\xe3\x81\xaa\xe3\x83\x97\xe3\x83\xad\xe3\x82\xb0\xe3\x83" -"\xa9\xe3\x83\x9f\xe3\x83\xb3\xe3\x82\xb0\xe8\xa8\x80\xe8\xaa\x9e" -"\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba\xe3\x82\x92\xe9\x96\x8b\xe5" -"\xa7\x8b\xe3\x81\x97\xe3\x80\x81\xe8\x8b\xb1\xe5\x9b\xbd\x20\x42" -"\x42\x53\x20\xe6\x94\xbe\xe9\x80\x81\xe3\x81\xae\xe3\x82\xb3\xe3" -"\x83\xa1\xe3\x83\x87\xe3\x82\xa3\xe7\x95\xaa\xe7\xb5\x84\xe3\x80" -"\x8c\xe3\x83\xa2\xe3\x83\xb3\xe3\x83\x86\xe3\x82\xa3\x20\xe3\x83" -"\x91\xe3\x82\xa4\xe3\x82\xbd\xe3\x83\xb3\xe3\x80\x8d\xe3\x81\xae" -"\xe3\x83\x95\xe3\x82\xa1\xe3\x83\xb3\xe3\x81\xa7\xe3\x81\x82\xe3" -"\x82\x8b\x20\x47\x75\x69\x64\x6f\x20\xe3\x81\xaf\xe3\x81\x93\xe3" -"\x81\xae\xe8\xa8\x80\xe8\xaa\x9e\xe3\x82\x92\xe3\x80\x8c\x50\x79" -"\x74\x68\x6f\x6e\xe3\x80\x8d\xe3\x81\xa8\xe5\x90\x8d\xe3\x81\xa5" -"\xe3\x81\x91\xe3\x81\xbe\xe3\x81\x97\xe3\x81\x9f\xe3\x80\x82\x0a" -"\xe3\x81\x93\xe3\x81\xae\xe3\x82\x88\xe3\x81\x86\xe3\x81\xaa\xe8" -"\x83\x8c\xe6\x99\xaf\xe3\x81\x8b\xe3\x82\x89\xe7\x94\x9f\xe3\x81" -"\xbe\xe3\x82\x8c\xe3\x81\x9f\x20\x50\x79\x74\x68\x6f\x6e\x20\xe3" -"\x81\xae\xe8\xa8\x80\xe8\xaa\x9e\xe8\xa8\xad\xe8\xa8\x88\xe3\x81" -"\xaf\xe3\x80\x81\xe3\x80\x8c\xe3\x82\xb7\xe3\x83\xb3\xe3\x83\x97" -"\xe3\x83\xab\xe3\x80\x8d\xe3\x81\xa7\xe3\x80\x8c\xe7\xbf\x92\xe5" -"\xbe\x97\xe3\x81\x8c\xe5\xae\xb9\xe6\x98\x93\xe3\x80\x8d\xe3\x81" -"\xa8\xe3\x81\x84\xe3\x81\x86\xe7\x9b\xae\xe6\xa8\x99\xe3\x81\xab" -"\xe9\x87\x8d\xe7\x82\xb9\xe3\x81\x8c\xe7\xbd\xae\xe3\x81\x8b\xe3" -"\x82\x8c\xe3\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3\x81\x99\xe3\x80" -"\x82\x0a\xe5\xa4\x9a\xe3\x81\x8f\xe3\x81\xae\xe3\x82\xb9\xe3\x82" -"\xaf\xe3\x83\xaa\xe3\x83\x97\xe3\x83\x88\xe7\xb3\xbb\xe8\xa8\x80" -"\xe8\xaa\x9e\xe3\x81\xa7\xe3\x81\xaf\xe3\x83\xa6\xe3\x83\xbc\xe3" -"\x82\xb6\xe3\x81\xae\xe7\x9b\xae\xe5\x85\x88\xe3\x81\xae\xe5\x88" -"\xa9\xe4\xbe\xbf\xe6\x80\xa7\xe3\x82\x92\xe5\x84\xaa\xe5\x85\x88" -"\xe3\x81\x97\xe3\x81\xa6\xe8\x89\xb2\xe3\x80\x85\xe3\x81\xaa\xe6" -"\xa9\x9f\xe8\x83\xbd\xe3\x82\x92\xe8\xa8\x80\xe8\xaa\x9e\xe8\xa6" -"\x81\xe7\xb4\xa0\xe3\x81\xa8\xe3\x81\x97\xe3\x81\xa6\xe5\x8f\x96" -"\xe3\x82\x8a\xe5\x85\xa5\xe3\x82\x8c\xe3\x82\x8b\xe5\xa0\xb4\xe5" -"\x90\x88\xe3\x81\x8c\xe5\xa4\x9a\xe3\x81\x84\xe3\x81\xae\xe3\x81" -"\xa7\xe3\x81\x99\xe3\x81\x8c\xe3\x80\x81\x50\x79\x74\x68\x6f\x6e" 
-"\x20\xe3\x81\xa7\xe3\x81\xaf\xe3\x81\x9d\xe3\x81\x86\xe3\x81\x84" -"\xe3\x81\xa3\xe3\x81\x9f\xe5\xb0\x8f\xe7\xb4\xb0\xe5\xb7\xa5\xe3" -"\x81\x8c\xe8\xbf\xbd\xe5\x8a\xa0\xe3\x81\x95\xe3\x82\x8c\xe3\x82" -"\x8b\xe3\x81\x93\xe3\x81\xa8\xe3\x81\xaf\xe3\x81\x82\xe3\x81\xbe" -"\xe3\x82\x8a\xe3\x81\x82\xe3\x82\x8a\xe3\x81\xbe\xe3\x81\x9b\xe3" -"\x82\x93\xe3\x80\x82\x0a\xe8\xa8\x80\xe8\xaa\x9e\xe8\x87\xaa\xe4" -"\xbd\x93\xe3\x81\xae\xe6\xa9\x9f\xe8\x83\xbd\xe3\x81\xaf\xe6\x9c" -"\x80\xe5\xb0\x8f\xe9\x99\x90\xe3\x81\xab\xe6\x8a\xbc\xe3\x81\x95" -"\xe3\x81\x88\xe3\x80\x81\xe5\xbf\x85\xe8\xa6\x81\xe3\x81\xaa\xe6" -"\xa9\x9f\xe8\x83\xbd\xe3\x81\xaf\xe6\x8b\xa1\xe5\xbc\xb5\xe3\x83" -"\xa2\xe3\x82\xb8\xe3\x83\xa5\xe3\x83\xbc\xe3\x83\xab\xe3\x81\xa8" -"\xe3\x81\x97\xe3\x81\xa6\xe8\xbf\xbd\xe5\x8a\xa0\xe3\x81\x99\xe3" -"\x82\x8b\xe3\x80\x81\xe3\x81\xa8\xe3\x81\x84\xe3\x81\x86\xe3\x81" -"\xae\xe3\x81\x8c\x20\x50\x79\x74\x68\x6f\x6e\x20\xe3\x81\xae\xe3" -"\x83\x9d\xe3\x83\xaa\xe3\x82\xb7\xe3\x83\xbc\xe3\x81\xa7\xe3\x81" -"\x99\xe3\x80\x82\x0a\x0a\xe3\x83\x8e\xe3\x81\x8b\xe3\x82\x9a\x20" -"\xe3\x83\x88\xe3\x82\x9a\x20\xe3\x83\x88\xe3\x82\xad\xef\xa8\xb6" -"\xef\xa8\xb9\x20\xf0\xa1\x9a\xb4\xf0\xaa\x8e\x8c\x20\xe9\xba\x80" -"\xe9\xbd\x81\xf0\xa9\x9b\xb0\x0a"), -} diff --git a/lib-python/2.7/test/crashers/README b/lib-python/2.7/test/crashers/README --- a/lib-python/2.7/test/crashers/README +++ b/lib-python/2.7/test/crashers/README @@ -1,20 +1,16 @@ -This directory only contains tests for outstanding bugs that cause -the interpreter to segfault. Ideally this directory should always -be empty. Sometimes it may not be easy to fix the underlying cause. +This directory only contains tests for outstanding bugs that cause the +interpreter to segfault. Ideally this directory should always be empty, but +sometimes it may not be easy to fix the underlying cause and the bug is deemed +too obscure to invest the effort. Each test should fail when run from the command line: ./python Lib/test/crashers/weakref_in_del.py -Each test should have a link to the bug report: +Put as much info into a docstring or comments to help determine the cause of the +failure, as well as a bugs.python.org issue number if it exists. Particularly +note if the cause is system or environment dependent and what the variables are. - # http://python.org/sf/BUG# - -Put as much info into a docstring or comments to help determine -the cause of the failure. Particularly note if the cause is -system or environment dependent and what the variables are. - -Once the crash is fixed, the test case should be moved into an appropriate -test (even if it was originally from the test suite). This ensures the -regression doesn't happen again. And if it does, it should be easier -to track down. +Once the crash is fixed, the test case should be moved into an appropriate test +(even if it was originally from the test suite). This ensures the regression +doesn't happen again. And if it does, it should be easier to track down. diff --git a/lib-python/2.7/test/crashers/recursion_limit_too_high.py b/lib-python/2.7/test/crashers/recursion_limit_too_high.py --- a/lib-python/2.7/test/crashers/recursion_limit_too_high.py +++ b/lib-python/2.7/test/crashers/recursion_limit_too_high.py @@ -5,7 +5,7 @@ # file handles. # The point of this example is to show that sys.setrecursionlimit() is a -# hack, and not a robust solution. This example simply exercices a path +# hack, and not a robust solution. 
This example simply exercises a path # where it takes many C-level recursions, consuming a lot of stack # space, for each Python-level recursion. So 1000 times this amount of # stack space may be too much for standard platforms already. diff --git a/lib-python/2.7/test/decimaltestdata/and.decTest b/lib-python/2.7/test/decimaltestdata/and.decTest --- a/lib-python/2.7/test/decimaltestdata/and.decTest +++ b/lib-python/2.7/test/decimaltestdata/and.decTest @@ -1,338 +1,338 @@ ------------------------------------------------------------------------- --- and.decTest -- digitwise logical AND -- --- Copyright (c) IBM Corporation, 1981, 2008. All rights reserved. -- ------------------------------------------------------------------------- --- Please see the document "General Decimal Arithmetic Testcases" -- --- at http://www2.hursley.ibm.com/decimal for the description of -- --- these testcases. -- --- -- --- These testcases are experimental ('beta' versions), and they -- --- may contain errors. They are offered on an as-is basis. In -- --- particular, achieving the same results as the tests here is not -- --- a guarantee that an implementation complies with any Standard -- --- or specification. The tests are not exhaustive. -- --- -- --- Please send comments, suggestions, and corrections to the author: -- --- Mike Cowlishaw, IBM Fellow -- --- IBM UK, PO Box 31, Birmingham Road, Warwick CV34 5JL, UK -- --- mfc at uk.ibm.com -- ------------------------------------------------------------------------- -version: 2.59 - -extended: 1 -precision: 9 -rounding: half_up -maxExponent: 999 -minExponent: -999 - --- Sanity check (truth table) -andx001 and 0 0 -> 0 -andx002 and 0 1 -> 0 -andx003 and 1 0 -> 0 -andx004 and 1 1 -> 1 -andx005 and 1100 1010 -> 1000 -andx006 and 1111 10 -> 10 -andx007 and 1111 1010 -> 1010 - --- and at msd and msd-1 -andx010 and 000000000 000000000 -> 0 -andx011 and 000000000 100000000 -> 0 -andx012 and 100000000 000000000 -> 0 -andx013 and 100000000 100000000 -> 100000000 -andx014 and 000000000 000000000 -> 0 -andx015 and 000000000 010000000 -> 0 -andx016 and 010000000 000000000 -> 0 -andx017 and 010000000 010000000 -> 10000000 - --- Various lengths --- 123456789 123456789 123456789 -andx021 and 111111111 111111111 -> 111111111 -andx022 and 111111111111 111111111 -> 111111111 -andx023 and 111111111111 11111111 -> 11111111 -andx024 and 111111111 11111111 -> 11111111 -andx025 and 111111111 1111111 -> 1111111 -andx026 and 111111111111 111111 -> 111111 -andx027 and 111111111111 11111 -> 11111 -andx028 and 111111111111 1111 -> 1111 -andx029 and 111111111111 111 -> 111 -andx031 and 111111111111 11 -> 11 -andx032 and 111111111111 1 -> 1 -andx033 and 111111111111 1111111111 -> 111111111 -andx034 and 11111111111 11111111111 -> 111111111 -andx035 and 1111111111 111111111111 -> 111111111 -andx036 and 111111111 1111111111111 -> 111111111 - -andx040 and 111111111 111111111111 -> 111111111 -andx041 and 11111111 111111111111 -> 11111111 -andx042 and 11111111 111111111 -> 11111111 -andx043 and 1111111 111111111 -> 1111111 -andx044 and 111111 111111111 -> 111111 -andx045 and 11111 111111111 -> 11111 -andx046 and 1111 111111111 -> 1111 -andx047 and 111 111111111 -> 111 -andx048 and 11 111111111 -> 11 -andx049 and 1 111111111 -> 1 - -andx050 and 1111111111 1 -> 1 -andx051 and 111111111 1 -> 1 -andx052 and 11111111 1 -> 1 -andx053 and 1111111 1 -> 1 -andx054 and 111111 1 -> 1 -andx055 and 11111 1 -> 1 -andx056 and 1111 1 -> 1 -andx057 and 111 1 -> 1 -andx058 and 11 1 -> 1 -andx059 and 1 1 -> 1 - -andx060 
and 1111111111 0 -> 0 -andx061 and 111111111 0 -> 0 -andx062 and 11111111 0 -> 0 -andx063 and 1111111 0 -> 0 -andx064 and 111111 0 -> 0 -andx065 and 11111 0 -> 0 -andx066 and 1111 0 -> 0 -andx067 and 111 0 -> 0 -andx068 and 11 0 -> 0 -andx069 and 1 0 -> 0 - -andx070 and 1 1111111111 -> 1 -andx071 and 1 111111111 -> 1 -andx072 and 1 11111111 -> 1 -andx073 and 1 1111111 -> 1 -andx074 and 1 111111 -> 1 -andx075 and 1 11111 -> 1 -andx076 and 1 1111 -> 1 -andx077 and 1 111 -> 1 -andx078 and 1 11 -> 1 -andx079 and 1 1 -> 1 - -andx080 and 0 1111111111 -> 0 -andx081 and 0 111111111 -> 0 -andx082 and 0 11111111 -> 0 -andx083 and 0 1111111 -> 0 -andx084 and 0 111111 -> 0 -andx085 and 0 11111 -> 0 -andx086 and 0 1111 -> 0 -andx087 and 0 111 -> 0 -andx088 and 0 11 -> 0 -andx089 and 0 1 -> 0 - -andx090 and 011111111 111111111 -> 11111111 -andx091 and 101111111 111111111 -> 101111111 -andx092 and 110111111 111111111 -> 110111111 -andx093 and 111011111 111111111 -> 111011111 -andx094 and 111101111 111111111 -> 111101111 -andx095 and 111110111 111111111 -> 111110111 -andx096 and 111111011 111111111 -> 111111011 -andx097 and 111111101 111111111 -> 111111101 -andx098 and 111111110 111111111 -> 111111110 - -andx100 and 111111111 011111111 -> 11111111 -andx101 and 111111111 101111111 -> 101111111 -andx102 and 111111111 110111111 -> 110111111 -andx103 and 111111111 111011111 -> 111011111 -andx104 and 111111111 111101111 -> 111101111 -andx105 and 111111111 111110111 -> 111110111 -andx106 and 111111111 111111011 -> 111111011 -andx107 and 111111111 111111101 -> 111111101 -andx108 and 111111111 111111110 -> 111111110 - --- non-0/1 should not be accepted, nor should signs -andx220 and 111111112 111111111 -> NaN Invalid_operation -andx221 and 333333333 333333333 -> NaN Invalid_operation -andx222 and 555555555 555555555 -> NaN Invalid_operation -andx223 and 777777777 777777777 -> NaN Invalid_operation -andx224 and 999999999 999999999 -> NaN Invalid_operation -andx225 and 222222222 999999999 -> NaN Invalid_operation -andx226 and 444444444 999999999 -> NaN Invalid_operation -andx227 and 666666666 999999999 -> NaN Invalid_operation -andx228 and 888888888 999999999 -> NaN Invalid_operation -andx229 and 999999999 222222222 -> NaN Invalid_operation -andx230 and 999999999 444444444 -> NaN Invalid_operation -andx231 and 999999999 666666666 -> NaN Invalid_operation -andx232 and 999999999 888888888 -> NaN Invalid_operation --- a few randoms -andx240 and 567468689 -934981942 -> NaN Invalid_operation -andx241 and 567367689 934981942 -> NaN Invalid_operation -andx242 and -631917772 -706014634 -> NaN Invalid_operation -andx243 and -756253257 138579234 -> NaN Invalid_operation -andx244 and 835590149 567435400 -> NaN Invalid_operation --- test MSD -andx250 and 200000000 100000000 -> NaN Invalid_operation -andx251 and 700000000 100000000 -> NaN Invalid_operation -andx252 and 800000000 100000000 -> NaN Invalid_operation -andx253 and 900000000 100000000 -> NaN Invalid_operation -andx254 and 200000000 000000000 -> NaN Invalid_operation -andx255 and 700000000 000000000 -> NaN Invalid_operation -andx256 and 800000000 000000000 -> NaN Invalid_operation -andx257 and 900000000 000000000 -> NaN Invalid_operation -andx258 and 100000000 200000000 -> NaN Invalid_operation -andx259 and 100000000 700000000 -> NaN Invalid_operation -andx260 and 100000000 800000000 -> NaN Invalid_operation -andx261 and 100000000 900000000 -> NaN Invalid_operation -andx262 and 000000000 200000000 -> NaN Invalid_operation -andx263 and 000000000 700000000 -> NaN 
Invalid_operation -andx264 and 000000000 800000000 -> NaN Invalid_operation -andx265 and 000000000 900000000 -> NaN Invalid_operation --- test MSD-1 -andx270 and 020000000 100000000 -> NaN Invalid_operation -andx271 and 070100000 100000000 -> NaN Invalid_operation -andx272 and 080010000 100000001 -> NaN Invalid_operation -andx273 and 090001000 100000010 -> NaN Invalid_operation -andx274 and 100000100 020010100 -> NaN Invalid_operation -andx275 and 100000000 070001000 -> NaN Invalid_operation -andx276 and 100000010 080010100 -> NaN Invalid_operation -andx277 and 100000000 090000010 -> NaN Invalid_operation --- test LSD -andx280 and 001000002 100000000 -> NaN Invalid_operation -andx281 and 000000007 100000000 -> NaN Invalid_operation -andx282 and 000000008 100000000 -> NaN Invalid_operation -andx283 and 000000009 100000000 -> NaN Invalid_operation -andx284 and 100000000 000100002 -> NaN Invalid_operation -andx285 and 100100000 001000007 -> NaN Invalid_operation -andx286 and 100010000 010000008 -> NaN Invalid_operation -andx287 and 100001000 100000009 -> NaN Invalid_operation --- test Middie -andx288 and 001020000 100000000 -> NaN Invalid_operation -andx289 and 000070001 100000000 -> NaN Invalid_operation -andx290 and 000080000 100010000 -> NaN Invalid_operation -andx291 and 000090000 100001000 -> NaN Invalid_operation -andx292 and 100000010 000020100 -> NaN Invalid_operation -andx293 and 100100000 000070010 -> NaN Invalid_operation -andx294 and 100010100 000080001 -> NaN Invalid_operation -andx295 and 100001000 000090000 -> NaN Invalid_operation --- signs -andx296 and -100001000 -000000000 -> NaN Invalid_operation -andx297 and -100001000 000010000 -> NaN Invalid_operation -andx298 and 100001000 -000000000 -> NaN Invalid_operation -andx299 and 100001000 000011000 -> 1000 - --- Nmax, Nmin, Ntiny -andx331 and 2 9.99999999E+999 -> NaN Invalid_operation -andx332 and 3 1E-999 -> NaN Invalid_operation -andx333 and 4 1.00000000E-999 -> NaN Invalid_operation -andx334 and 5 1E-1007 -> NaN Invalid_operation -andx335 and 6 -1E-1007 -> NaN Invalid_operation -andx336 and 7 -1.00000000E-999 -> NaN Invalid_operation -andx337 and 8 -1E-999 -> NaN Invalid_operation -andx338 and 9 -9.99999999E+999 -> NaN Invalid_operation -andx341 and 9.99999999E+999 -18 -> NaN Invalid_operation -andx342 and 1E-999 01 -> NaN Invalid_operation -andx343 and 1.00000000E-999 -18 -> NaN Invalid_operation -andx344 and 1E-1007 18 -> NaN Invalid_operation -andx345 and -1E-1007 -10 -> NaN Invalid_operation -andx346 and -1.00000000E-999 18 -> NaN Invalid_operation -andx347 and -1E-999 10 -> NaN Invalid_operation -andx348 and -9.99999999E+999 -18 -> NaN Invalid_operation - --- A few other non-integers -andx361 and 1.0 1 -> NaN Invalid_operation -andx362 and 1E+1 1 -> NaN Invalid_operation -andx363 and 0.0 1 -> NaN Invalid_operation -andx364 and 0E+1 1 -> NaN Invalid_operation -andx365 and 9.9 1 -> NaN Invalid_operation -andx366 and 9E+1 1 -> NaN Invalid_operation -andx371 and 0 1.0 -> NaN Invalid_operation -andx372 and 0 1E+1 -> NaN Invalid_operation -andx373 and 0 0.0 -> NaN Invalid_operation -andx374 and 0 0E+1 -> NaN Invalid_operation -andx375 and 0 9.9 -> NaN Invalid_operation -andx376 and 0 9E+1 -> NaN Invalid_operation - --- All Specials are in error -andx780 and -Inf -Inf -> NaN Invalid_operation -andx781 and -Inf -1000 -> NaN Invalid_operation -andx782 and -Inf -1 -> NaN Invalid_operation -andx783 and -Inf -0 -> NaN Invalid_operation -andx784 and -Inf 0 -> NaN Invalid_operation -andx785 and -Inf 1 -> NaN Invalid_operation 
-andx786 and -Inf 1000 -> NaN Invalid_operation -andx787 and -1000 -Inf -> NaN Invalid_operation -andx788 and -Inf -Inf -> NaN Invalid_operation -andx789 and -1 -Inf -> NaN Invalid_operation -andx790 and -0 -Inf -> NaN Invalid_operation -andx791 and 0 -Inf -> NaN Invalid_operation -andx792 and 1 -Inf -> NaN Invalid_operation -andx793 and 1000 -Inf -> NaN Invalid_operation -andx794 and Inf -Inf -> NaN Invalid_operation - -andx800 and Inf -Inf -> NaN Invalid_operation -andx801 and Inf -1000 -> NaN Invalid_operation -andx802 and Inf -1 -> NaN Invalid_operation -andx803 and Inf -0 -> NaN Invalid_operation -andx804 and Inf 0 -> NaN Invalid_operation -andx805 and Inf 1 -> NaN Invalid_operation -andx806 and Inf 1000 -> NaN Invalid_operation -andx807 and Inf Inf -> NaN Invalid_operation -andx808 and -1000 Inf -> NaN Invalid_operation -andx809 and -Inf Inf -> NaN Invalid_operation -andx810 and -1 Inf -> NaN Invalid_operation -andx811 and -0 Inf -> NaN Invalid_operation -andx812 and 0 Inf -> NaN Invalid_operation -andx813 and 1 Inf -> NaN Invalid_operation -andx814 and 1000 Inf -> NaN Invalid_operation -andx815 and Inf Inf -> NaN Invalid_operation - -andx821 and NaN -Inf -> NaN Invalid_operation -andx822 and NaN -1000 -> NaN Invalid_operation -andx823 and NaN -1 -> NaN Invalid_operation -andx824 and NaN -0 -> NaN Invalid_operation -andx825 and NaN 0 -> NaN Invalid_operation -andx826 and NaN 1 -> NaN Invalid_operation -andx827 and NaN 1000 -> NaN Invalid_operation -andx828 and NaN Inf -> NaN Invalid_operation -andx829 and NaN NaN -> NaN Invalid_operation -andx830 and -Inf NaN -> NaN Invalid_operation -andx831 and -1000 NaN -> NaN Invalid_operation -andx832 and -1 NaN -> NaN Invalid_operation -andx833 and -0 NaN -> NaN Invalid_operation -andx834 and 0 NaN -> NaN Invalid_operation -andx835 and 1 NaN -> NaN Invalid_operation -andx836 and 1000 NaN -> NaN Invalid_operation -andx837 and Inf NaN -> NaN Invalid_operation - -andx841 and sNaN -Inf -> NaN Invalid_operation -andx842 and sNaN -1000 -> NaN Invalid_operation -andx843 and sNaN -1 -> NaN Invalid_operation -andx844 and sNaN -0 -> NaN Invalid_operation -andx845 and sNaN 0 -> NaN Invalid_operation -andx846 and sNaN 1 -> NaN Invalid_operation -andx847 and sNaN 1000 -> NaN Invalid_operation -andx848 and sNaN NaN -> NaN Invalid_operation -andx849 and sNaN sNaN -> NaN Invalid_operation -andx850 and NaN sNaN -> NaN Invalid_operation -andx851 and -Inf sNaN -> NaN Invalid_operation -andx852 and -1000 sNaN -> NaN Invalid_operation -andx853 and -1 sNaN -> NaN Invalid_operation -andx854 and -0 sNaN -> NaN Invalid_operation -andx855 and 0 sNaN -> NaN Invalid_operation -andx856 and 1 sNaN -> NaN Invalid_operation -andx857 and 1000 sNaN -> NaN Invalid_operation -andx858 and Inf sNaN -> NaN Invalid_operation -andx859 and NaN sNaN -> NaN Invalid_operation - --- propagating NaNs -andx861 and NaN1 -Inf -> NaN Invalid_operation -andx862 and +NaN2 -1000 -> NaN Invalid_operation -andx863 and NaN3 1000 -> NaN Invalid_operation -andx864 and NaN4 Inf -> NaN Invalid_operation -andx865 and NaN5 +NaN6 -> NaN Invalid_operation -andx866 and -Inf NaN7 -> NaN Invalid_operation -andx867 and -1000 NaN8 -> NaN Invalid_operation -andx868 and 1000 NaN9 -> NaN Invalid_operation -andx869 and Inf +NaN10 -> NaN Invalid_operation -andx871 and sNaN11 -Inf -> NaN Invalid_operation -andx872 and sNaN12 -1000 -> NaN Invalid_operation -andx873 and sNaN13 1000 -> NaN Invalid_operation -andx874 and sNaN14 NaN17 -> NaN Invalid_operation -andx875 and sNaN15 sNaN18 -> NaN Invalid_operation -andx876 and 
NaN16 sNaN19 -> NaN Invalid_operation -andx877 and -Inf +sNaN20 -> NaN Invalid_operation -andx878 and -1000 sNaN21 -> NaN Invalid_operation -andx879 and 1000 sNaN22 -> NaN Invalid_operation -andx880 and Inf sNaN23 -> NaN Invalid_operation -andx881 and +NaN25 +sNaN24 -> NaN Invalid_operation -andx882 and -NaN26 NaN28 -> NaN Invalid_operation -andx883 and -sNaN27 sNaN29 -> NaN Invalid_operation -andx884 and 1000 -NaN30 -> NaN Invalid_operation -andx885 and 1000 -sNaN31 -> NaN Invalid_operation +------------------------------------------------------------------------ +-- and.decTest -- digitwise logical AND -- +-- Copyright (c) IBM Corporation, 1981, 2008. All rights reserved. -- +------------------------------------------------------------------------ +-- Please see the document "General Decimal Arithmetic Testcases" -- +-- at http://www2.hursley.ibm.com/decimal for the description of -- +-- these testcases. -- +-- -- +-- These testcases are experimental ('beta' versions), and they -- +-- may contain errors. They are offered on an as-is basis. In -- +-- particular, achieving the same results as the tests here is not -- +-- a guarantee that an implementation complies with any Standard -- +-- or specification. The tests are not exhaustive. -- +-- -- +-- Please send comments, suggestions, and corrections to the author: -- +-- Mike Cowlishaw, IBM Fellow -- +-- IBM UK, PO Box 31, Birmingham Road, Warwick CV34 5JL, UK -- +-- mfc at uk.ibm.com -- +------------------------------------------------------------------------ +version: 2.59 + +extended: 1 +precision: 9 +rounding: half_up +maxExponent: 999 +minExponent: -999 + +-- Sanity check (truth table) +andx001 and 0 0 -> 0 +andx002 and 0 1 -> 0 +andx003 and 1 0 -> 0 +andx004 and 1 1 -> 1 +andx005 and 1100 1010 -> 1000 +andx006 and 1111 10 -> 10 +andx007 and 1111 1010 -> 1010 + +-- and at msd and msd-1 +andx010 and 000000000 000000000 -> 0 +andx011 and 000000000 100000000 -> 0 +andx012 and 100000000 000000000 -> 0 +andx013 and 100000000 100000000 -> 100000000 +andx014 and 000000000 000000000 -> 0 +andx015 and 000000000 010000000 -> 0 +andx016 and 010000000 000000000 -> 0 +andx017 and 010000000 010000000 -> 10000000 + +-- Various lengths +-- 123456789 123456789 123456789 +andx021 and 111111111 111111111 -> 111111111 +andx022 and 111111111111 111111111 -> 111111111 +andx023 and 111111111111 11111111 -> 11111111 +andx024 and 111111111 11111111 -> 11111111 +andx025 and 111111111 1111111 -> 1111111 +andx026 and 111111111111 111111 -> 111111 +andx027 and 111111111111 11111 -> 11111 +andx028 and 111111111111 1111 -> 1111 +andx029 and 111111111111 111 -> 111 +andx031 and 111111111111 11 -> 11 +andx032 and 111111111111 1 -> 1 +andx033 and 111111111111 1111111111 -> 111111111 +andx034 and 11111111111 11111111111 -> 111111111 +andx035 and 1111111111 111111111111 -> 111111111 +andx036 and 111111111 1111111111111 -> 111111111 + +andx040 and 111111111 111111111111 -> 111111111 +andx041 and 11111111 111111111111 -> 11111111 +andx042 and 11111111 111111111 -> 11111111 +andx043 and 1111111 111111111 -> 1111111 +andx044 and 111111 111111111 -> 111111 +andx045 and 11111 111111111 -> 11111 +andx046 and 1111 111111111 -> 1111 +andx047 and 111 111111111 -> 111 +andx048 and 11 111111111 -> 11 +andx049 and 1 111111111 -> 1 + +andx050 and 1111111111 1 -> 1 +andx051 and 111111111 1 -> 1 +andx052 and 11111111 1 -> 1 +andx053 and 1111111 1 -> 1 +andx054 and 111111 1 -> 1 +andx055 and 11111 1 -> 1 +andx056 and 1111 1 -> 1 +andx057 and 111 1 -> 1 +andx058 and 11 1 -> 1 +andx059 
and 1 1 -> 1 + +andx060 and 1111111111 0 -> 0 +andx061 and 111111111 0 -> 0 +andx062 and 11111111 0 -> 0 +andx063 and 1111111 0 -> 0 +andx064 and 111111 0 -> 0 +andx065 and 11111 0 -> 0 +andx066 and 1111 0 -> 0 +andx067 and 111 0 -> 0 +andx068 and 11 0 -> 0 +andx069 and 1 0 -> 0 + +andx070 and 1 1111111111 -> 1 +andx071 and 1 111111111 -> 1 +andx072 and 1 11111111 -> 1 +andx073 and 1 1111111 -> 1 +andx074 and 1 111111 -> 1 +andx075 and 1 11111 -> 1 +andx076 and 1 1111 -> 1 +andx077 and 1 111 -> 1 +andx078 and 1 11 -> 1 +andx079 and 1 1 -> 1 + +andx080 and 0 1111111111 -> 0 +andx081 and 0 111111111 -> 0 +andx082 and 0 11111111 -> 0 +andx083 and 0 1111111 -> 0 +andx084 and 0 111111 -> 0 +andx085 and 0 11111 -> 0 +andx086 and 0 1111 -> 0 +andx087 and 0 111 -> 0 +andx088 and 0 11 -> 0 +andx089 and 0 1 -> 0 + +andx090 and 011111111 111111111 -> 11111111 +andx091 and 101111111 111111111 -> 101111111 +andx092 and 110111111 111111111 -> 110111111 +andx093 and 111011111 111111111 -> 111011111 +andx094 and 111101111 111111111 -> 111101111 +andx095 and 111110111 111111111 -> 111110111 +andx096 and 111111011 111111111 -> 111111011 +andx097 and 111111101 111111111 -> 111111101 +andx098 and 111111110 111111111 -> 111111110 + +andx100 and 111111111 011111111 -> 11111111 +andx101 and 111111111 101111111 -> 101111111 +andx102 and 111111111 110111111 -> 110111111 +andx103 and 111111111 111011111 -> 111011111 +andx104 and 111111111 111101111 -> 111101111 +andx105 and 111111111 111110111 -> 111110111 +andx106 and 111111111 111111011 -> 111111011 +andx107 and 111111111 111111101 -> 111111101 +andx108 and 111111111 111111110 -> 111111110 + +-- non-0/1 should not be accepted, nor should signs +andx220 and 111111112 111111111 -> NaN Invalid_operation +andx221 and 333333333 333333333 -> NaN Invalid_operation +andx222 and 555555555 555555555 -> NaN Invalid_operation +andx223 and 777777777 777777777 -> NaN Invalid_operation +andx224 and 999999999 999999999 -> NaN Invalid_operation +andx225 and 222222222 999999999 -> NaN Invalid_operation +andx226 and 444444444 999999999 -> NaN Invalid_operation +andx227 and 666666666 999999999 -> NaN Invalid_operation +andx228 and 888888888 999999999 -> NaN Invalid_operation +andx229 and 999999999 222222222 -> NaN Invalid_operation +andx230 and 999999999 444444444 -> NaN Invalid_operation +andx231 and 999999999 666666666 -> NaN Invalid_operation +andx232 and 999999999 888888888 -> NaN Invalid_operation +-- a few randoms +andx240 and 567468689 -934981942 -> NaN Invalid_operation +andx241 and 567367689 934981942 -> NaN Invalid_operation +andx242 and -631917772 -706014634 -> NaN Invalid_operation +andx243 and -756253257 138579234 -> NaN Invalid_operation +andx244 and 835590149 567435400 -> NaN Invalid_operation +-- test MSD +andx250 and 200000000 100000000 -> NaN Invalid_operation +andx251 and 700000000 100000000 -> NaN Invalid_operation +andx252 and 800000000 100000000 -> NaN Invalid_operation +andx253 and 900000000 100000000 -> NaN Invalid_operation +andx254 and 200000000 000000000 -> NaN Invalid_operation +andx255 and 700000000 000000000 -> NaN Invalid_operation +andx256 and 800000000 000000000 -> NaN Invalid_operation +andx257 and 900000000 000000000 -> NaN Invalid_operation +andx258 and 100000000 200000000 -> NaN Invalid_operation +andx259 and 100000000 700000000 -> NaN Invalid_operation +andx260 and 100000000 800000000 -> NaN Invalid_operation +andx261 and 100000000 900000000 -> NaN Invalid_operation +andx262 and 000000000 200000000 -> NaN Invalid_operation +andx263 and 000000000 
700000000 -> NaN Invalid_operation +andx264 and 000000000 800000000 -> NaN Invalid_operation +andx265 and 000000000 900000000 -> NaN Invalid_operation +-- test MSD-1 +andx270 and 020000000 100000000 -> NaN Invalid_operation +andx271 and 070100000 100000000 -> NaN Invalid_operation +andx272 and 080010000 100000001 -> NaN Invalid_operation +andx273 and 090001000 100000010 -> NaN Invalid_operation +andx274 and 100000100 020010100 -> NaN Invalid_operation +andx275 and 100000000 070001000 -> NaN Invalid_operation +andx276 and 100000010 080010100 -> NaN Invalid_operation +andx277 and 100000000 090000010 -> NaN Invalid_operation +-- test LSD +andx280 and 001000002 100000000 -> NaN Invalid_operation +andx281 and 000000007 100000000 -> NaN Invalid_operation +andx282 and 000000008 100000000 -> NaN Invalid_operation +andx283 and 000000009 100000000 -> NaN Invalid_operation +andx284 and 100000000 000100002 -> NaN Invalid_operation +andx285 and 100100000 001000007 -> NaN Invalid_operation +andx286 and 100010000 010000008 -> NaN Invalid_operation +andx287 and 100001000 100000009 -> NaN Invalid_operation +-- test Middie +andx288 and 001020000 100000000 -> NaN Invalid_operation +andx289 and 000070001 100000000 -> NaN Invalid_operation +andx290 and 000080000 100010000 -> NaN Invalid_operation +andx291 and 000090000 100001000 -> NaN Invalid_operation +andx292 and 100000010 000020100 -> NaN Invalid_operation +andx293 and 100100000 000070010 -> NaN Invalid_operation +andx294 and 100010100 000080001 -> NaN Invalid_operation +andx295 and 100001000 000090000 -> NaN Invalid_operation +-- signs +andx296 and -100001000 -000000000 -> NaN Invalid_operation +andx297 and -100001000 000010000 -> NaN Invalid_operation +andx298 and 100001000 -000000000 -> NaN Invalid_operation +andx299 and 100001000 000011000 -> 1000 + +-- Nmax, Nmin, Ntiny +andx331 and 2 9.99999999E+999 -> NaN Invalid_operation +andx332 and 3 1E-999 -> NaN Invalid_operation +andx333 and 4 1.00000000E-999 -> NaN Invalid_operation +andx334 and 5 1E-1007 -> NaN Invalid_operation +andx335 and 6 -1E-1007 -> NaN Invalid_operation +andx336 and 7 -1.00000000E-999 -> NaN Invalid_operation +andx337 and 8 -1E-999 -> NaN Invalid_operation +andx338 and 9 -9.99999999E+999 -> NaN Invalid_operation +andx341 and 9.99999999E+999 -18 -> NaN Invalid_operation +andx342 and 1E-999 01 -> NaN Invalid_operation +andx343 and 1.00000000E-999 -18 -> NaN Invalid_operation +andx344 and 1E-1007 18 -> NaN Invalid_operation +andx345 and -1E-1007 -10 -> NaN Invalid_operation +andx346 and -1.00000000E-999 18 -> NaN Invalid_operation +andx347 and -1E-999 10 -> NaN Invalid_operation +andx348 and -9.99999999E+999 -18 -> NaN Invalid_operation + +-- A few other non-integers +andx361 and 1.0 1 -> NaN Invalid_operation +andx362 and 1E+1 1 -> NaN Invalid_operation +andx363 and 0.0 1 -> NaN Invalid_operation +andx364 and 0E+1 1 -> NaN Invalid_operation +andx365 and 9.9 1 -> NaN Invalid_operation +andx366 and 9E+1 1 -> NaN Invalid_operation +andx371 and 0 1.0 -> NaN Invalid_operation +andx372 and 0 1E+1 -> NaN Invalid_operation +andx373 and 0 0.0 -> NaN Invalid_operation +andx374 and 0 0E+1 -> NaN Invalid_operation +andx375 and 0 9.9 -> NaN Invalid_operation +andx376 and 0 9E+1 -> NaN Invalid_operation + +-- All Specials are in error +andx780 and -Inf -Inf -> NaN Invalid_operation +andx781 and -Inf -1000 -> NaN Invalid_operation +andx782 and -Inf -1 -> NaN Invalid_operation +andx783 and -Inf -0 -> NaN Invalid_operation +andx784 and -Inf 0 -> NaN Invalid_operation +andx785 and -Inf 1 -> NaN 
Invalid_operation +andx786 and -Inf 1000 -> NaN Invalid_operation +andx787 and -1000 -Inf -> NaN Invalid_operation +andx788 and -Inf -Inf -> NaN Invalid_operation +andx789 and -1 -Inf -> NaN Invalid_operation +andx790 and -0 -Inf -> NaN Invalid_operation +andx791 and 0 -Inf -> NaN Invalid_operation +andx792 and 1 -Inf -> NaN Invalid_operation +andx793 and 1000 -Inf -> NaN Invalid_operation +andx794 and Inf -Inf -> NaN Invalid_operation + +andx800 and Inf -Inf -> NaN Invalid_operation +andx801 and Inf -1000 -> NaN Invalid_operation +andx802 and Inf -1 -> NaN Invalid_operation +andx803 and Inf -0 -> NaN Invalid_operation +andx804 and Inf 0 -> NaN Invalid_operation +andx805 and Inf 1 -> NaN Invalid_operation +andx806 and Inf 1000 -> NaN Invalid_operation +andx807 and Inf Inf -> NaN Invalid_operation +andx808 and -1000 Inf -> NaN Invalid_operation +andx809 and -Inf Inf -> NaN Invalid_operation +andx810 and -1 Inf -> NaN Invalid_operation +andx811 and -0 Inf -> NaN Invalid_operation +andx812 and 0 Inf -> NaN Invalid_operation +andx813 and 1 Inf -> NaN Invalid_operation +andx814 and 1000 Inf -> NaN Invalid_operation +andx815 and Inf Inf -> NaN Invalid_operation + +andx821 and NaN -Inf -> NaN Invalid_operation +andx822 and NaN -1000 -> NaN Invalid_operation +andx823 and NaN -1 -> NaN Invalid_operation +andx824 and NaN -0 -> NaN Invalid_operation +andx825 and NaN 0 -> NaN Invalid_operation +andx826 and NaN 1 -> NaN Invalid_operation +andx827 and NaN 1000 -> NaN Invalid_operation +andx828 and NaN Inf -> NaN Invalid_operation +andx829 and NaN NaN -> NaN Invalid_operation +andx830 and -Inf NaN -> NaN Invalid_operation +andx831 and -1000 NaN -> NaN Invalid_operation +andx832 and -1 NaN -> NaN Invalid_operation +andx833 and -0 NaN -> NaN Invalid_operation +andx834 and 0 NaN -> NaN Invalid_operation +andx835 and 1 NaN -> NaN Invalid_operation +andx836 and 1000 NaN -> NaN Invalid_operation +andx837 and Inf NaN -> NaN Invalid_operation + +andx841 and sNaN -Inf -> NaN Invalid_operation +andx842 and sNaN -1000 -> NaN Invalid_operation +andx843 and sNaN -1 -> NaN Invalid_operation +andx844 and sNaN -0 -> NaN Invalid_operation +andx845 and sNaN 0 -> NaN Invalid_operation +andx846 and sNaN 1 -> NaN Invalid_operation +andx847 and sNaN 1000 -> NaN Invalid_operation +andx848 and sNaN NaN -> NaN Invalid_operation +andx849 and sNaN sNaN -> NaN Invalid_operation +andx850 and NaN sNaN -> NaN Invalid_operation +andx851 and -Inf sNaN -> NaN Invalid_operation +andx852 and -1000 sNaN -> NaN Invalid_operation +andx853 and -1 sNaN -> NaN Invalid_operation +andx854 and -0 sNaN -> NaN Invalid_operation +andx855 and 0 sNaN -> NaN Invalid_operation +andx856 and 1 sNaN -> NaN Invalid_operation +andx857 and 1000 sNaN -> NaN Invalid_operation +andx858 and Inf sNaN -> NaN Invalid_operation +andx859 and NaN sNaN -> NaN Invalid_operation + +-- propagating NaNs +andx861 and NaN1 -Inf -> NaN Invalid_operation +andx862 and +NaN2 -1000 -> NaN Invalid_operation +andx863 and NaN3 1000 -> NaN Invalid_operation +andx864 and NaN4 Inf -> NaN Invalid_operation +andx865 and NaN5 +NaN6 -> NaN Invalid_operation +andx866 and -Inf NaN7 -> NaN Invalid_operation +andx867 and -1000 NaN8 -> NaN Invalid_operation +andx868 and 1000 NaN9 -> NaN Invalid_operation +andx869 and Inf +NaN10 -> NaN Invalid_operation +andx871 and sNaN11 -Inf -> NaN Invalid_operation +andx872 and sNaN12 -1000 -> NaN Invalid_operation +andx873 and sNaN13 1000 -> NaN Invalid_operation +andx874 and sNaN14 NaN17 -> NaN Invalid_operation +andx875 and sNaN15 sNaN18 -> NaN 
Invalid_operation +andx876 and NaN16 sNaN19 -> NaN Invalid_operation +andx877 and -Inf +sNaN20 -> NaN Invalid_operation +andx878 and -1000 sNaN21 -> NaN Invalid_operation +andx879 and 1000 sNaN22 -> NaN Invalid_operation +andx880 and Inf sNaN23 -> NaN Invalid_operation +andx881 and +NaN25 +sNaN24 -> NaN Invalid_operation +andx882 and -NaN26 NaN28 -> NaN Invalid_operation +andx883 and -sNaN27 sNaN29 -> NaN Invalid_operation +andx884 and 1000 -NaN30 -> NaN Invalid_operation +andx885 and 1000 -sNaN31 -> NaN Invalid_operation diff --git a/lib-python/2.7/test/decimaltestdata/class.decTest b/lib-python/2.7/test/decimaltestdata/class.decTest --- a/lib-python/2.7/test/decimaltestdata/class.decTest +++ b/lib-python/2.7/test/decimaltestdata/class.decTest @@ -1,131 +1,131 @@ ------------------------------------------------------------------------- --- class.decTest -- Class operations -- --- Copyright (c) IBM Corporation, 1981, 2008. All rights reserved. -- ------------------------------------------------------------------------- --- Please see the document "General Decimal Arithmetic Testcases" -- --- at http://www2.hursley.ibm.com/decimal for the description of -- --- these testcases. -- --- -- --- These testcases are experimental ('beta' versions), and they -- --- may contain errors. They are offered on an as-is basis. In -- --- particular, achieving the same results as the tests here is not -- --- a guarantee that an implementation complies with any Standard -- --- or specification. The tests are not exhaustive. -- --- -- --- Please send comments, suggestions, and corrections to the author: -- --- Mike Cowlishaw, IBM Fellow -- --- IBM UK, PO Box 31, Birmingham Road, Warwick CV34 5JL, UK -- --- mfc at uk.ibm.com -- ------------------------------------------------------------------------- -version: 2.59 - --- [New 2006.11.27] - -precision: 9 -maxExponent: 999 -minExponent: -999 -extended: 1 -clamp: 1 -rounding: half_even - -clasx001 class 0 -> +Zero -clasx002 class 0.00 -> +Zero -clasx003 class 0E+5 -> +Zero -clasx004 class 1E-1007 -> +Subnormal -clasx005 class 0.1E-999 -> +Subnormal -clasx006 class 0.99999999E-999 -> +Subnormal -clasx007 class 1.00000000E-999 -> +Normal -clasx008 class 1E-999 -> +Normal -clasx009 class 1E-100 -> +Normal -clasx010 class 1E-10 -> +Normal -clasx012 class 1E-1 -> +Normal -clasx013 class 1 -> +Normal -clasx014 class 2.50 -> +Normal -clasx015 class 100.100 -> +Normal -clasx016 class 1E+30 -> +Normal -clasx017 class 1E+999 -> +Normal -clasx018 class 9.99999999E+999 -> +Normal -clasx019 class Inf -> +Infinity - -clasx021 class -0 -> -Zero -clasx022 class -0.00 -> -Zero -clasx023 class -0E+5 -> -Zero -clasx024 class -1E-1007 -> -Subnormal -clasx025 class -0.1E-999 -> -Subnormal -clasx026 class -0.99999999E-999 -> -Subnormal -clasx027 class -1.00000000E-999 -> -Normal -clasx028 class -1E-999 -> -Normal -clasx029 class -1E-100 -> -Normal -clasx030 class -1E-10 -> -Normal -clasx032 class -1E-1 -> -Normal -clasx033 class -1 -> -Normal -clasx034 class -2.50 -> -Normal -clasx035 class -100.100 -> -Normal -clasx036 class -1E+30 -> -Normal -clasx037 class -1E+999 -> -Normal -clasx038 class -9.99999999E+999 -> -Normal -clasx039 class -Inf -> -Infinity - -clasx041 class NaN -> NaN -clasx042 class -NaN -> NaN -clasx043 class +NaN12345 -> NaN -clasx044 class sNaN -> sNaN -clasx045 class -sNaN -> sNaN -clasx046 class +sNaN12345 -> sNaN - - --- decimal64 bounds - -precision: 16 -maxExponent: 384 -minExponent: -383 -clamp: 1 -rounding: half_even - -clasx201 class 0 -> +Zero -clasx202 
class 0.00 -> +Zero -clasx203 class 0E+5 -> +Zero -clasx204 class 1E-396 -> +Subnormal -clasx205 class 0.1E-383 -> +Subnormal -clasx206 class 0.999999999999999E-383 -> +Subnormal -clasx207 class 1.000000000000000E-383 -> +Normal -clasx208 class 1E-383 -> +Normal -clasx209 class 1E-100 -> +Normal -clasx210 class 1E-10 -> +Normal -clasx212 class 1E-1 -> +Normal -clasx213 class 1 -> +Normal -clasx214 class 2.50 -> +Normal -clasx215 class 100.100 -> +Normal -clasx216 class 1E+30 -> +Normal -clasx217 class 1E+384 -> +Normal -clasx218 class 9.999999999999999E+384 -> +Normal -clasx219 class Inf -> +Infinity - -clasx221 class -0 -> -Zero -clasx222 class -0.00 -> -Zero -clasx223 class -0E+5 -> -Zero -clasx224 class -1E-396 -> -Subnormal -clasx225 class -0.1E-383 -> -Subnormal -clasx226 class -0.999999999999999E-383 -> -Subnormal -clasx227 class -1.000000000000000E-383 -> -Normal -clasx228 class -1E-383 -> -Normal -clasx229 class -1E-100 -> -Normal -clasx230 class -1E-10 -> -Normal -clasx232 class -1E-1 -> -Normal -clasx233 class -1 -> -Normal -clasx234 class -2.50 -> -Normal -clasx235 class -100.100 -> -Normal -clasx236 class -1E+30 -> -Normal -clasx237 class -1E+384 -> -Normal -clasx238 class -9.999999999999999E+384 -> -Normal -clasx239 class -Inf -> -Infinity - -clasx241 class NaN -> NaN -clasx242 class -NaN -> NaN -clasx243 class +NaN12345 -> NaN -clasx244 class sNaN -> sNaN -clasx245 class -sNaN -> sNaN -clasx246 class +sNaN12345 -> sNaN - - - +------------------------------------------------------------------------ +-- class.decTest -- Class operations -- +-- Copyright (c) IBM Corporation, 1981, 2008. All rights reserved. -- +------------------------------------------------------------------------ +-- Please see the document "General Decimal Arithmetic Testcases" -- +-- at http://www2.hursley.ibm.com/decimal for the description of -- +-- these testcases. -- +-- -- +-- These testcases are experimental ('beta' versions), and they -- +-- may contain errors. They are offered on an as-is basis. In -- +-- particular, achieving the same results as the tests here is not -- +-- a guarantee that an implementation complies with any Standard -- +-- or specification. The tests are not exhaustive. 
-- +-- -- +-- Please send comments, suggestions, and corrections to the author: -- +-- Mike Cowlishaw, IBM Fellow -- +-- IBM UK, PO Box 31, Birmingham Road, Warwick CV34 5JL, UK -- +-- mfc at uk.ibm.com -- +------------------------------------------------------------------------ +version: 2.59 + +-- [New 2006.11.27] + +precision: 9 +maxExponent: 999 +minExponent: -999 +extended: 1 +clamp: 1 +rounding: half_even + +clasx001 class 0 -> +Zero +clasx002 class 0.00 -> +Zero +clasx003 class 0E+5 -> +Zero +clasx004 class 1E-1007 -> +Subnormal +clasx005 class 0.1E-999 -> +Subnormal +clasx006 class 0.99999999E-999 -> +Subnormal +clasx007 class 1.00000000E-999 -> +Normal +clasx008 class 1E-999 -> +Normal +clasx009 class 1E-100 -> +Normal +clasx010 class 1E-10 -> +Normal +clasx012 class 1E-1 -> +Normal +clasx013 class 1 -> +Normal +clasx014 class 2.50 -> +Normal +clasx015 class 100.100 -> +Normal +clasx016 class 1E+30 -> +Normal +clasx017 class 1E+999 -> +Normal +clasx018 class 9.99999999E+999 -> +Normal +clasx019 class Inf -> +Infinity + +clasx021 class -0 -> -Zero +clasx022 class -0.00 -> -Zero +clasx023 class -0E+5 -> -Zero +clasx024 class -1E-1007 -> -Subnormal +clasx025 class -0.1E-999 -> -Subnormal +clasx026 class -0.99999999E-999 -> -Subnormal +clasx027 class -1.00000000E-999 -> -Normal +clasx028 class -1E-999 -> -Normal +clasx029 class -1E-100 -> -Normal +clasx030 class -1E-10 -> -Normal +clasx032 class -1E-1 -> -Normal +clasx033 class -1 -> -Normal +clasx034 class -2.50 -> -Normal +clasx035 class -100.100 -> -Normal +clasx036 class -1E+30 -> -Normal +clasx037 class -1E+999 -> -Normal +clasx038 class -9.99999999E+999 -> -Normal +clasx039 class -Inf -> -Infinity + +clasx041 class NaN -> NaN +clasx042 class -NaN -> NaN +clasx043 class +NaN12345 -> NaN +clasx044 class sNaN -> sNaN +clasx045 class -sNaN -> sNaN +clasx046 class +sNaN12345 -> sNaN + + +-- decimal64 bounds + +precision: 16 +maxExponent: 384 +minExponent: -383 +clamp: 1 +rounding: half_even + +clasx201 class 0 -> +Zero +clasx202 class 0.00 -> +Zero +clasx203 class 0E+5 -> +Zero +clasx204 class 1E-396 -> +Subnormal +clasx205 class 0.1E-383 -> +Subnormal +clasx206 class 0.999999999999999E-383 -> +Subnormal +clasx207 class 1.000000000000000E-383 -> +Normal +clasx208 class 1E-383 -> +Normal +clasx209 class 1E-100 -> +Normal +clasx210 class 1E-10 -> +Normal +clasx212 class 1E-1 -> +Normal +clasx213 class 1 -> +Normal +clasx214 class 2.50 -> +Normal +clasx215 class 100.100 -> +Normal +clasx216 class 1E+30 -> +Normal +clasx217 class 1E+384 -> +Normal +clasx218 class 9.999999999999999E+384 -> +Normal +clasx219 class Inf -> +Infinity + +clasx221 class -0 -> -Zero +clasx222 class -0.00 -> -Zero +clasx223 class -0E+5 -> -Zero +clasx224 class -1E-396 -> -Subnormal +clasx225 class -0.1E-383 -> -Subnormal +clasx226 class -0.999999999999999E-383 -> -Subnormal +clasx227 class -1.000000000000000E-383 -> -Normal +clasx228 class -1E-383 -> -Normal +clasx229 class -1E-100 -> -Normal +clasx230 class -1E-10 -> -Normal +clasx232 class -1E-1 -> -Normal +clasx233 class -1 -> -Normal +clasx234 class -2.50 -> -Normal +clasx235 class -100.100 -> -Normal +clasx236 class -1E+30 -> -Normal +clasx237 class -1E+384 -> -Normal +clasx238 class -9.999999999999999E+384 -> -Normal +clasx239 class -Inf -> -Infinity + +clasx241 class NaN -> NaN +clasx242 class -NaN -> NaN +clasx243 class +NaN12345 -> NaN +clasx244 class sNaN -> sNaN +clasx245 class -sNaN -> sNaN +clasx246 class +sNaN12345 -> sNaN + + + diff --git a/lib-python/2.7/test/decimaltestdata/comparetotal.decTest 
b/lib-python/2.7/test/decimaltestdata/comparetotal.decTest --- a/lib-python/2.7/test/decimaltestdata/comparetotal.decTest +++ b/lib-python/2.7/test/decimaltestdata/comparetotal.decTest @@ -1,798 +1,798 @@ ------------------------------------------------------------------------- --- comparetotal.decTest -- decimal comparison using total ordering -- --- Copyright (c) IBM Corporation, 1981, 2008. All rights reserved. -- ------------------------------------------------------------------------- --- Please see the document "General Decimal Arithmetic Testcases" -- --- at http://www2.hursley.ibm.com/decimal for the description of -- --- these testcases. -- --- -- --- These testcases are experimental ('beta' versions), and they -- --- may contain errors. They are offered on an as-is basis. In -- --- particular, achieving the same results as the tests here is not -- --- a guarantee that an implementation complies with any Standard -- --- or specification. The tests are not exhaustive. -- --- -- --- Please send comments, suggestions, and corrections to the author: -- --- Mike Cowlishaw, IBM Fellow -- --- IBM UK, PO Box 31, Birmingham Road, Warwick CV34 5JL, UK -- --- mfc at uk.ibm.com -- ------------------------------------------------------------------------- -version: 2.59 - --- Note that we cannot assume add/subtract tests cover paths adequately, --- here, because the code might be quite different (comparison cannot --- overflow or underflow, so actual subtractions are not necessary). --- Similarly, comparetotal will have some radically different paths --- than compare. - -extended: 1 -precision: 16 -rounding: half_up -maxExponent: 384 -minExponent: -383 - --- sanity checks -cotx001 comparetotal -2 -2 -> 0 -cotx002 comparetotal -2 -1 -> -1 -cotx003 comparetotal -2 0 -> -1 -cotx004 comparetotal -2 1 -> -1 -cotx005 comparetotal -2 2 -> -1 -cotx006 comparetotal -1 -2 -> 1 -cotx007 comparetotal -1 -1 -> 0 -cotx008 comparetotal -1 0 -> -1 -cotx009 comparetotal -1 1 -> -1 -cotx010 comparetotal -1 2 -> -1 -cotx011 comparetotal 0 -2 -> 1 -cotx012 comparetotal 0 -1 -> 1 -cotx013 comparetotal 0 0 -> 0 -cotx014 comparetotal 0 1 -> -1 -cotx015 comparetotal 0 2 -> -1 -cotx016 comparetotal 1 -2 -> 1 -cotx017 comparetotal 1 -1 -> 1 -cotx018 comparetotal 1 0 -> 1 -cotx019 comparetotal 1 1 -> 0 -cotx020 comparetotal 1 2 -> -1 -cotx021 comparetotal 2 -2 -> 1 -cotx022 comparetotal 2 -1 -> 1 -cotx023 comparetotal 2 0 -> 1 -cotx025 comparetotal 2 1 -> 1 -cotx026 comparetotal 2 2 -> 0 - -cotx031 comparetotal -20 -20 -> 0 -cotx032 comparetotal -20 -10 -> -1 -cotx033 comparetotal -20 00 -> -1 -cotx034 comparetotal -20 10 -> -1 -cotx035 comparetotal -20 20 -> -1 -cotx036 comparetotal -10 -20 -> 1 -cotx037 comparetotal -10 -10 -> 0 -cotx038 comparetotal -10 00 -> -1 -cotx039 comparetotal -10 10 -> -1 -cotx040 comparetotal -10 20 -> -1 -cotx041 comparetotal 00 -20 -> 1 -cotx042 comparetotal 00 -10 -> 1 -cotx043 comparetotal 00 00 -> 0 -cotx044 comparetotal 00 10 -> -1 -cotx045 comparetotal 00 20 -> -1 -cotx046 comparetotal 10 -20 -> 1 -cotx047 comparetotal 10 -10 -> 1 -cotx048 comparetotal 10 00 -> 1 -cotx049 comparetotal 10 10 -> 0 -cotx050 comparetotal 10 20 -> -1 -cotx051 comparetotal 20 -20 -> 1 -cotx052 comparetotal 20 -10 -> 1 -cotx053 comparetotal 20 00 -> 1 -cotx055 comparetotal 20 10 -> 1 -cotx056 comparetotal 20 20 -> 0 - -cotx061 comparetotal -2.0 -2.0 -> 0 -cotx062 comparetotal -2.0 -1.0 -> -1 -cotx063 comparetotal -2.0 0.0 -> -1 -cotx064 comparetotal -2.0 1.0 -> -1 -cotx065 comparetotal -2.0 2.0 -> -1 -cotx066 
comparetotal -1.0 -2.0 -> 1 -cotx067 comparetotal -1.0 -1.0 -> 0 -cotx068 comparetotal -1.0 0.0 -> -1 -cotx069 comparetotal -1.0 1.0 -> -1 -cotx070 comparetotal -1.0 2.0 -> -1 -cotx071 comparetotal 0.0 -2.0 -> 1 -cotx072 comparetotal 0.0 -1.0 -> 1 -cotx073 comparetotal 0.0 0.0 -> 0 -cotx074 comparetotal 0.0 1.0 -> -1 -cotx075 comparetotal 0.0 2.0 -> -1 -cotx076 comparetotal 1.0 -2.0 -> 1 -cotx077 comparetotal 1.0 -1.0 -> 1 -cotx078 comparetotal 1.0 0.0 -> 1 -cotx079 comparetotal 1.0 1.0 -> 0 -cotx080 comparetotal 1.0 2.0 -> -1 -cotx081 comparetotal 2.0 -2.0 -> 1 -cotx082 comparetotal 2.0 -1.0 -> 1 -cotx083 comparetotal 2.0 0.0 -> 1 -cotx085 comparetotal 2.0 1.0 -> 1 -cotx086 comparetotal 2.0 2.0 -> 0 - --- now some cases which might overflow if subtract were used -maxexponent: 999999999 -minexponent: -999999999 -cotx090 comparetotal 9.99999999E+999999999 9.99999999E+999999999 -> 0 -cotx091 comparetotal -9.99999999E+999999999 9.99999999E+999999999 -> -1 -cotx092 comparetotal 9.99999999E+999999999 -9.99999999E+999999999 -> 1 -cotx093 comparetotal -9.99999999E+999999999 -9.99999999E+999999999 -> 0 - --- Examples -cotx094 comparetotal 12.73 127.9 -> -1 -cotx095 comparetotal -127 12 -> -1 -cotx096 comparetotal 12.30 12.3 -> -1 -cotx097 comparetotal 12.30 12.30 -> 0 -cotx098 comparetotal 12.3 12.300 -> 1 -cotx099 comparetotal 12.3 NaN -> -1 - --- some differing length/exponent cases --- in this first group, compare would compare all equal -cotx100 comparetotal 7.0 7.0 -> 0 -cotx101 comparetotal 7.0 7 -> -1 -cotx102 comparetotal 7 7.0 -> 1 -cotx103 comparetotal 7E+0 7.0 -> 1 -cotx104 comparetotal 70E-1 7.0 -> 0 -cotx105 comparetotal 0.7E+1 7 -> 0 -cotx106 comparetotal 70E-1 7 -> -1 -cotx107 comparetotal 7.0 7E+0 -> -1 -cotx108 comparetotal 7.0 70E-1 -> 0 -cotx109 comparetotal 7 0.7E+1 -> 0 -cotx110 comparetotal 7 70E-1 -> 1 - -cotx120 comparetotal 8.0 7.0 -> 1 -cotx121 comparetotal 8.0 7 -> 1 -cotx122 comparetotal 8 7.0 -> 1 -cotx123 comparetotal 8E+0 7.0 -> 1 -cotx124 comparetotal 80E-1 7.0 -> 1 -cotx125 comparetotal 0.8E+1 7 -> 1 -cotx126 comparetotal 80E-1 7 -> 1 -cotx127 comparetotal 8.0 7E+0 -> 1 -cotx128 comparetotal 8.0 70E-1 -> 1 -cotx129 comparetotal 8 0.7E+1 -> 1 -cotx130 comparetotal 8 70E-1 -> 1 - -cotx140 comparetotal 8.0 9.0 -> -1 -cotx141 comparetotal 8.0 9 -> -1 -cotx142 comparetotal 8 9.0 -> -1 -cotx143 comparetotal 8E+0 9.0 -> -1 -cotx144 comparetotal 80E-1 9.0 -> -1 -cotx145 comparetotal 0.8E+1 9 -> -1 -cotx146 comparetotal 80E-1 9 -> -1 -cotx147 comparetotal 8.0 9E+0 -> -1 -cotx148 comparetotal 8.0 90E-1 -> -1 -cotx149 comparetotal 8 0.9E+1 -> -1 -cotx150 comparetotal 8 90E-1 -> -1 - --- and again, with sign changes -+ .. 
-cotx200 comparetotal -7.0 7.0 -> -1 -cotx201 comparetotal -7.0 7 -> -1 -cotx202 comparetotal -7 7.0 -> -1 -cotx203 comparetotal -7E+0 7.0 -> -1 -cotx204 comparetotal -70E-1 7.0 -> -1 -cotx205 comparetotal -0.7E+1 7 -> -1 -cotx206 comparetotal -70E-1 7 -> -1 -cotx207 comparetotal -7.0 7E+0 -> -1 -cotx208 comparetotal -7.0 70E-1 -> -1 -cotx209 comparetotal -7 0.7E+1 -> -1 -cotx210 comparetotal -7 70E-1 -> -1 - -cotx220 comparetotal -8.0 7.0 -> -1 -cotx221 comparetotal -8.0 7 -> -1 -cotx222 comparetotal -8 7.0 -> -1 -cotx223 comparetotal -8E+0 7.0 -> -1 -cotx224 comparetotal -80E-1 7.0 -> -1 -cotx225 comparetotal -0.8E+1 7 -> -1 -cotx226 comparetotal -80E-1 7 -> -1 -cotx227 comparetotal -8.0 7E+0 -> -1 -cotx228 comparetotal -8.0 70E-1 -> -1 -cotx229 comparetotal -8 0.7E+1 -> -1 -cotx230 comparetotal -8 70E-1 -> -1 - -cotx240 comparetotal -8.0 9.0 -> -1 -cotx241 comparetotal -8.0 9 -> -1 -cotx242 comparetotal -8 9.0 -> -1 -cotx243 comparetotal -8E+0 9.0 -> -1 -cotx244 comparetotal -80E-1 9.0 -> -1 -cotx245 comparetotal -0.8E+1 9 -> -1 -cotx246 comparetotal -80E-1 9 -> -1 -cotx247 comparetotal -8.0 9E+0 -> -1 -cotx248 comparetotal -8.0 90E-1 -> -1 -cotx249 comparetotal -8 0.9E+1 -> -1 -cotx250 comparetotal -8 90E-1 -> -1 - --- and again, with sign changes +- .. -cotx300 comparetotal 7.0 -7.0 -> 1 -cotx301 comparetotal 7.0 -7 -> 1 -cotx302 comparetotal 7 -7.0 -> 1 -cotx303 comparetotal 7E+0 -7.0 -> 1 -cotx304 comparetotal 70E-1 -7.0 -> 1 -cotx305 comparetotal .7E+1 -7 -> 1 -cotx306 comparetotal 70E-1 -7 -> 1 -cotx307 comparetotal 7.0 -7E+0 -> 1 -cotx308 comparetotal 7.0 -70E-1 -> 1 -cotx309 comparetotal 7 -.7E+1 -> 1 -cotx310 comparetotal 7 -70E-1 -> 1 - -cotx320 comparetotal 8.0 -7.0 -> 1 -cotx321 comparetotal 8.0 -7 -> 1 -cotx322 comparetotal 8 -7.0 -> 1 -cotx323 comparetotal 8E+0 -7.0 -> 1 -cotx324 comparetotal 80E-1 -7.0 -> 1 -cotx325 comparetotal .8E+1 -7 -> 1 -cotx326 comparetotal 80E-1 -7 -> 1 -cotx327 comparetotal 8.0 -7E+0 -> 1 -cotx328 comparetotal 8.0 -70E-1 -> 1 -cotx329 comparetotal 8 -.7E+1 -> 1 -cotx330 comparetotal 8 -70E-1 -> 1 - -cotx340 comparetotal 8.0 -9.0 -> 1 -cotx341 comparetotal 8.0 -9 -> 1 -cotx342 comparetotal 8 -9.0 -> 1 -cotx343 comparetotal 8E+0 -9.0 -> 1 -cotx344 comparetotal 80E-1 -9.0 -> 1 -cotx345 comparetotal .8E+1 -9 -> 1 -cotx346 comparetotal 80E-1 -9 -> 1 -cotx347 comparetotal 8.0 -9E+0 -> 1 -cotx348 comparetotal 8.0 -90E-1 -> 1 -cotx349 comparetotal 8 -.9E+1 -> 1 -cotx350 comparetotal 8 -90E-1 -> 1 - --- and again, with sign changes -- .. 
-cotx400 comparetotal -7.0 -7.0 -> 0 -cotx401 comparetotal -7.0 -7 -> 1 -cotx402 comparetotal -7 -7.0 -> -1 -cotx403 comparetotal -7E+0 -7.0 -> -1 -cotx404 comparetotal -70E-1 -7.0 -> 0 -cotx405 comparetotal -.7E+1 -7 -> 0 -cotx406 comparetotal -70E-1 -7 -> 1 -cotx407 comparetotal -7.0 -7E+0 -> 1 -cotx408 comparetotal -7.0 -70E-1 -> 0 -cotx409 comparetotal -7 -.7E+1 -> 0 From noreply at buildbot.pypy.org Wed Feb 8 10:58:22 2012 From: noreply at buildbot.pypy.org (Stefano Parmesan) Date: Wed, 8 Feb 2012 10:58:22 +0100 (CET) Subject: [pypy-commit] pypy default: merged from pypy a3d4b51ec806 Message-ID: <20120208095822.6CEF782B1E@wyvern.cs.uni-duesseldorf.de> Author: Stefano Parmesan Branch: Changeset: r52217:4fd8ce7642df Date: 2012-02-08 09:45 +0100 http://bitbucket.org/pypy/pypy/changeset/4fd8ce7642df/ Log: merged from pypy a3d4b51ec806 diff --git a/py/_io/terminalwriter.py b/py/_io/terminalwriter.py --- a/py/_io/terminalwriter.py +++ b/py/_io/terminalwriter.py @@ -271,16 +271,24 @@ ('srWindow', SMALL_RECT), ('dwMaximumWindowSize', COORD)] + _GetStdHandle = ctypes.windll.kernel32.GetStdHandle + _GetStdHandle.argtypes = [wintypes.DWORD] + _GetStdHandle.restype = wintypes.HANDLE def GetStdHandle(kind): - return ctypes.windll.kernel32.GetStdHandle(kind) + return _GetStdHandle(kind) - SetConsoleTextAttribute = \ - ctypes.windll.kernel32.SetConsoleTextAttribute - + SetConsoleTextAttribute = ctypes.windll.kernel32.SetConsoleTextAttribute + SetConsoleTextAttribute.argtypes = [wintypes.HANDLE, wintypes.WORD] + SetConsoleTextAttribute.restype = wintypes.BOOL + + _GetConsoleScreenBufferInfo = \ + ctypes.windll.kernel32.GetConsoleScreenBufferInfo + _GetConsoleScreenBufferInfo.argtypes = [wintypes.HANDLE, + ctypes.POINTER(CONSOLE_SCREEN_BUFFER_INFO)] + _GetConsoleScreenBufferInfo.restype = wintypes.BOOL def GetConsoleInfo(handle): info = CONSOLE_SCREEN_BUFFER_INFO() - ctypes.windll.kernel32.GetConsoleScreenBufferInfo(\ - handle, ctypes.byref(info)) + _GetConsoleScreenBufferInfo(handle, ctypes.byref(info)) return info def _getdimensions(): diff --git a/pypy/annotation/annrpython.py b/pypy/annotation/annrpython.py --- a/pypy/annotation/annrpython.py +++ b/pypy/annotation/annrpython.py @@ -93,6 +93,10 @@ # make input arguments and set their type args_s = [self.typeannotation(t) for t in input_arg_types] + # XXX hack + annmodel.TLS.check_str_without_nul = ( + self.translator.config.translation.check_str_without_nul) + flowgraph, inputcells = self.get_call_parameters(function, args_s, policy) if not isinstance(flowgraph, FunctionGraph): assert isinstance(flowgraph, annmodel.SomeObject) diff --git a/pypy/annotation/binaryop.py b/pypy/annotation/binaryop.py --- a/pypy/annotation/binaryop.py +++ b/pypy/annotation/binaryop.py @@ -434,11 +434,13 @@ class __extend__(pairtype(SomeString, SomeString)): def union((str1, str2)): - return SomeString(can_be_None=str1.can_be_None or str2.can_be_None) + can_be_None = str1.can_be_None or str2.can_be_None + no_nul = str1.no_nul and str2.no_nul + return SomeString(can_be_None=can_be_None, no_nul=no_nul) def add((str1, str2)): # propagate const-ness to help getattr(obj, 'prefix' + const_name) - result = SomeString() + result = SomeString(no_nul=str1.no_nul and str2.no_nul) if str1.is_immutable_constant() and str2.is_immutable_constant(): result.const = str1.const + str2.const return result @@ -475,7 +477,16 @@ raise NotImplementedError( "string formatting mixing strings and unicode not supported") getbookkeeper().count('strformat', str, s_tuple) - return SomeString() + 
no_nul = str.no_nul + for s_item in s_tuple.items: + if isinstance(s_item, SomeFloat): + pass # or s_item is a subclass, like SomeInteger + elif isinstance(s_item, SomeString) and s_item.no_nul: + pass + else: + no_nul = False + break + return SomeString(no_nul=no_nul) class __extend__(pairtype(SomeString, SomeObject)): @@ -828,7 +839,7 @@ exec source.compile() in glob _make_none_union('SomeInstance', 'classdef=obj.classdef, can_be_None=True') -_make_none_union('SomeString', 'can_be_None=True') +_make_none_union('SomeString', 'no_nul=obj.no_nul, can_be_None=True') _make_none_union('SomeUnicodeString', 'can_be_None=True') _make_none_union('SomeList', 'obj.listdef') _make_none_union('SomeDict', 'obj.dictdef') diff --git a/pypy/annotation/bookkeeper.py b/pypy/annotation/bookkeeper.py --- a/pypy/annotation/bookkeeper.py +++ b/pypy/annotation/bookkeeper.py @@ -342,10 +342,11 @@ else: raise Exception("seeing a prebuilt long (value %s)" % hex(x)) elif issubclass(tp, str): # py.lib uses annotated str subclasses + no_nul = not '\x00' in x if len(x) == 1: - result = SomeChar() + result = SomeChar(no_nul=no_nul) else: - result = SomeString() + result = SomeString(no_nul=no_nul) elif tp is unicode: if len(x) == 1: result = SomeUnicodeCodePoint() diff --git a/pypy/annotation/listdef.py b/pypy/annotation/listdef.py --- a/pypy/annotation/listdef.py +++ b/pypy/annotation/listdef.py @@ -86,18 +86,19 @@ read_locations = self.read_locations.copy() other_read_locations = other.read_locations.copy() self.read_locations.update(other.read_locations) - self.patch() # which should patch all refs to 'other' s_value = self.s_value s_other_value = other.s_value s_new_value = unionof(s_value, s_other_value) + if s_new_value != s_value: + if self.dont_change_any_more: + raise TooLateForChange if isdegenerated(s_new_value): if self.bookkeeper: self.bookkeeper.ondegenerated(self, s_new_value) elif other.bookkeeper: other.bookkeeper.ondegenerated(other, s_new_value) + self.patch() # which should patch all refs to 'other' if s_new_value != s_value: - if self.dont_change_any_more: - raise TooLateForChange self.s_value = s_new_value # reflow from reading points for position_key in read_locations: @@ -222,4 +223,5 @@ MOST_GENERAL_LISTDEF = ListDef(None, SomeObject()) -s_list_of_strings = SomeList(ListDef(None, SomeString(), resized = True)) +s_list_of_strings = SomeList(ListDef(None, SomeString(no_nul=True), + resized = True)) diff --git a/pypy/annotation/model.py b/pypy/annotation/model.py --- a/pypy/annotation/model.py +++ b/pypy/annotation/model.py @@ -39,7 +39,9 @@ DEBUG = False # set to False to disable recording of debugging information class State(object): - pass + # A global attribute :-( Patch it with 'True' to enable checking of + # the no_nul attribute... + check_str_without_nul = False TLS = State() class SomeObject(object): @@ -225,43 +227,57 @@ def __init__(self): pass -class SomeString(SomeObject): - "Stands for an object which is known to be a string." - knowntype = str +class SomeStringOrUnicode(SomeObject): immutable = True - def __init__(self, can_be_None=False): - self.can_be_None = can_be_None + can_be_None=False + no_nul = False # No NUL character in the string. 
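
The hunks above derive ``no_nul`` from constants (``not '\x00' in x`` in bookkeeper.py) and let ``%`` formatting keep the flag only when every interpolated item is numeric or itself NUL-free. A minimal standalone sketch of that propagation rule; ``StrAnn``, ``ann_const`` and ``ann_mod`` are made-up stand-ins, not the real ``SomeString`` machinery:

    class StrAnn(object):
        "Toy stand-in for an annotated string type."
        def __init__(self, no_nul=False):
            self.no_nul = no_nul

    def ann_const(s):
        # a literal is NUL-free exactly when it contains no '\x00'
        return StrAnn(no_nul='\x00' not in s)

    def ann_mod(fmt_ann, item_anns):
        # fmt % items stays NUL-free only if the format string and every
        # string item are NUL-free; numeric items are treated as safe,
        # mirroring the SomeFloat/SomeInteger case above
        no_nul = fmt_ann.no_nul
        for item in item_anns:
            if isinstance(item, (int, float)):
                continue
            if isinstance(item, StrAnn) and item.no_nul:
                continue
            no_nul = False
            break
        return StrAnn(no_nul=no_nul)

    fmt = ann_const("%s=%s")
    assert ann_mod(fmt, [ann_const("key"), ann_const("value")]).no_nul
    assert ann_mod(fmt, [ann_const("n"), 42]).no_nul
    assert not ann_mod(fmt, [ann_const("key"), ann_const("va\x00lue")]).no_nul
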
+ + def __init__(self, can_be_None=False, no_nul=False): + if can_be_None: + self.can_be_None = True + if no_nul: + self.no_nul = True def can_be_none(self): return self.can_be_None + def __eq__(self, other): + if self.__class__ is not other.__class__: + return False + d1 = self.__dict__ + d2 = other.__dict__ + if not TLS.check_str_without_nul: + d1 = d1.copy(); d1['no_nul'] = 0 # ignored + d2 = d2.copy(); d2['no_nul'] = 0 # ignored + return d1 == d2 + +class SomeString(SomeStringOrUnicode): + "Stands for an object which is known to be a string." + knowntype = str + def nonnoneify(self): - return SomeString(can_be_None=False) + return SomeString(can_be_None=False, no_nul=self.no_nul) -class SomeUnicodeString(SomeObject): +class SomeUnicodeString(SomeStringOrUnicode): "Stands for an object which is known to be an unicode string" knowntype = unicode - immutable = True - def __init__(self, can_be_None=False): - self.can_be_None = can_be_None - - def can_be_none(self): - return self.can_be_None def nonnoneify(self): - return SomeUnicodeString(can_be_None=False) + return SomeUnicodeString(can_be_None=False, no_nul=self.no_nul) class SomeChar(SomeString): "Stands for an object known to be a string of length 1." can_be_None = False - def __init__(self): # no 'can_be_None' argument here - pass + def __init__(self, no_nul=False): # no 'can_be_None' argument here + if no_nul: + self.no_nul = True class SomeUnicodeCodePoint(SomeUnicodeString): "Stands for an object known to be a unicode codepoint." can_be_None = False - def __init__(self): # no 'can_be_None' argument here - pass + def __init__(self, no_nul=False): # no 'can_be_None' argument here + if no_nul: + self.no_nul = True SomeString.basestringclass = SomeString SomeString.basecharclass = SomeChar @@ -502,6 +518,7 @@ s_None = SomePBC([], can_be_None=True) s_Bool = SomeBool() s_ImpossibleValue = SomeImpossibleValue() +s_Str0 = SomeString(no_nul=True) # ____________________________________________________________ # weakrefs @@ -716,8 +733,7 @@ def not_const(s_obj): if s_obj.is_constant(): - new_s_obj = SomeObject() - new_s_obj.__class__ = s_obj.__class__ + new_s_obj = SomeObject.__new__(s_obj.__class__) dic = new_s_obj.__dict__ = s_obj.__dict__.copy() if 'const' in dic: del new_s_obj.const diff --git a/pypy/annotation/test/test_annrpython.py b/pypy/annotation/test/test_annrpython.py --- a/pypy/annotation/test/test_annrpython.py +++ b/pypy/annotation/test/test_annrpython.py @@ -456,6 +456,20 @@ return ''.join(g(n)) s = a.build_types(f, [int]) assert s.knowntype == str + assert s.no_nul + + def test_str_split(self): + a = self.RPythonAnnotator() + def g(n): + if n: + return "test string" + def f(n): + if n: + return g(n).split(' ') + s = a.build_types(f, [int]) + assert isinstance(s, annmodel.SomeList) + s_item = s.listdef.listitem.s_value + assert s_item.no_nul def test_str_splitlines(self): a = self.RPythonAnnotator() @@ -465,6 +479,18 @@ assert isinstance(s, annmodel.SomeList) assert s.listdef.listitem.resized + def test_str_strip(self): + a = self.RPythonAnnotator() + def f(n, a_str): + if n == 0: + return a_str.strip(' ') + elif n == 1: + return a_str.rstrip(' ') + else: + return a_str.lstrip(' ') + s = a.build_types(f, [int, annmodel.SomeString(no_nul=True)]) + assert s.no_nul + def test_str_mul(self): a = self.RPythonAnnotator() def f(a_str): @@ -1841,7 +1867,7 @@ return obj.indirect() a = self.RPythonAnnotator() s = a.build_types(f, [bool]) - assert s == annmodel.SomeString(can_be_None=True) + assert 
annmodel.SomeString(can_be_None=True).contains(s) def test_dont_see_AttributeError_clause(self): class Stuff: @@ -2018,6 +2044,37 @@ s = a.build_types(g, [int]) assert not s.can_be_None + def test_string_noNUL_canbeNone(self): + def f(a): + if a: + return "abc" + else: + return None + a = self.RPythonAnnotator() + s = a.build_types(f, [int]) + assert s.can_be_None + assert s.no_nul + + def test_str_or_None(self): + def f(a): + if a: + return "abc" + else: + return None + def g(a): + x = f(a) + #assert x is not None + if x is None: + return "abcd" + return x + if isinstance(x, str): + return x + return "impossible" + a = self.RPythonAnnotator() + s = a.build_types(f, [int]) + assert s.can_be_None + assert s.no_nul + def test_emulated_pbc_call_simple(self): def f(a,b): return a + b @@ -2071,6 +2128,19 @@ assert isinstance(s, annmodel.SomeIterator) assert s.variant == ('items',) + def test_iteritems_str0(self): + def it(d): + return d.iteritems() + def f(): + d0 = {'1a': '2a', '3': '4'} + for item in it(d0): + return "%s=%s" % item + raise ValueError + a = self.RPythonAnnotator() + s = a.build_types(f, []) + assert isinstance(s, annmodel.SomeString) + assert s.no_nul + def test_non_none_and_none_with_isinstance(self): class A(object): pass diff --git a/pypy/annotation/unaryop.py b/pypy/annotation/unaryop.py --- a/pypy/annotation/unaryop.py +++ b/pypy/annotation/unaryop.py @@ -480,13 +480,13 @@ return SomeInteger(nonneg=True) def method_strip(str, chr): - return str.basestringclass() + return str.basestringclass(no_nul=str.no_nul) def method_lstrip(str, chr): - return str.basestringclass() + return str.basestringclass(no_nul=str.no_nul) def method_rstrip(str, chr): - return str.basestringclass() + return str.basestringclass(no_nul=str.no_nul) def method_join(str, s_list): if s_None.contains(s_list): @@ -497,7 +497,8 @@ if isinstance(str, SomeUnicodeString): return immutablevalue(u"") return immutablevalue("") - return str.basestringclass() + no_nul = str.no_nul and s_item.no_nul + return str.basestringclass(no_nul=no_nul) def iter(str): return SomeIterator(str) @@ -508,18 +509,21 @@ def method_split(str, patt, max=-1): getbookkeeper().count("str_split", str, patt) - return getbookkeeper().newlist(str.basestringclass()) + s_item = str.basestringclass(no_nul=str.no_nul) + return getbookkeeper().newlist(s_item) def method_rsplit(str, patt, max=-1): getbookkeeper().count("str_rsplit", str, patt) - return getbookkeeper().newlist(str.basestringclass()) + s_item = str.basestringclass(no_nul=str.no_nul) + return getbookkeeper().newlist(s_item) def method_replace(str, s1, s2): return str.basestringclass() def getslice(str, s_start, s_stop): check_negative_slice(s_start, s_stop) - return str.basestringclass() + result = str.basestringclass(no_nul=str.no_nul) + return result class __extend__(SomeUnicodeString): def method_encode(uni, s_enc): diff --git a/pypy/config/translationoption.py b/pypy/config/translationoption.py --- a/pypy/config/translationoption.py +++ b/pypy/config/translationoption.py @@ -123,6 +123,9 @@ default="off"), # jit_ffi is automatically turned on by withmod-_ffi (which is enabled by default) BoolOption("jit_ffi", "optimize libffi calls", default=False, cmdline=None), + BoolOption("check_str_without_nul", + "Forbid NUL chars in strings in some external function calls", + default=False, cmdline=None), # misc BoolOption("verbose", "Print extra information", default=False), diff --git a/pypy/doc/Makefile b/pypy/doc/Makefile --- a/pypy/doc/Makefile +++ b/pypy/doc/Makefile @@ -81,6 +81,7 
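
The unaryop rules above simply forward ``no_nul`` through ``strip``, ``split``, ``join`` and slicing, which is sound because none of these operations can introduce a byte that is not already present in one of their inputs. A quick, non-exhaustive sanity check of that property in plain Python (the input pool is arbitrary):

    def has_nul(s):
        return '\x00' in s

    pool = ["", "a", "ab", " a b ", "x,y", "hello world"]   # NUL-free inputs

    for s in pool:
        assert not has_nul(s.strip()) and not has_nul(s.lstrip()) and not has_nul(s.rstrip())
        assert not any(has_nul(part) for part in s.split(','))
        assert not has_nul(s[1:3])
        for sep in pool:
            assert not has_nul(sep.join(pool))

    print "ok: strip/split/join/slicing preserved NUL-freedom on this pool"
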
@@ "run these through (pdf)latex." man: + python config/generate.py $(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man @echo @echo "Build finished. The manual pages are in $(BUILDDIR)/man" diff --git a/pypy/doc/commandline_ref.rst b/pypy/doc/commandline_ref.rst new file mode 100644 --- /dev/null +++ b/pypy/doc/commandline_ref.rst @@ -0,0 +1,10 @@ +Command line reference +====================== + +Manual pages +------------ + +.. toctree:: + :maxdepth: 1 + + man/pypy.1.rst diff --git a/pypy/doc/conf.py b/pypy/doc/conf.py --- a/pypy/doc/conf.py +++ b/pypy/doc/conf.py @@ -45,9 +45,9 @@ # built documents. # # The short X.Y version. -version = '1.7' +version = '1.8' # The full version, including alpha/beta/rc tags. -release = '1.7' +release = '1.8' # The language for content autogenerated by Sphinx. Refer to documentation # for a list of supported languages. diff --git a/pypy/doc/config/translation.check_str_without_nul.txt b/pypy/doc/config/translation.check_str_without_nul.txt new file mode 100644 --- /dev/null +++ b/pypy/doc/config/translation.check_str_without_nul.txt @@ -0,0 +1,5 @@ +If turned on, the annotator will keep track of which strings can +potentially contain NUL characters, and complain if one such string +is passed to some external functions --- e.g. if it is used as a +filename in os.open(). Defaults to False because it is usually more +pain than benefit, but turned on by targetpypystandalone. diff --git a/pypy/doc/config/translation.log.txt b/pypy/doc/config/translation.log.txt --- a/pypy/doc/config/translation.log.txt +++ b/pypy/doc/config/translation.log.txt @@ -2,4 +2,4 @@ These must be enabled by setting the PYPYLOG environment variable. The exact set of features supported by PYPYLOG is described in -pypy/translation/c/src/debug.h. +pypy/translation/c/src/debug_print.h. diff --git a/pypy/doc/garbage_collection.rst b/pypy/doc/garbage_collection.rst --- a/pypy/doc/garbage_collection.rst +++ b/pypy/doc/garbage_collection.rst @@ -142,10 +142,9 @@ So as a first approximation, when compared to the Hybrid GC, the Minimark GC saves one word of memory per old object. -There are a number of environment variables that can be tweaked to -influence the GC. (Their default value should be ok for most usages.) -You can read more about them at the start of -`pypy/rpython/memory/gc/minimark.py`_. +There are :ref:`a number of environment variables +` that can be tweaked to influence the +GC. (Their default value should be ok for most usages.) In more detail: @@ -211,5 +210,4 @@ are preserved. If the object dies then the pre-reserved location becomes free garbage, to be collected at the next major collection. - .. include:: _ref.txt diff --git a/pypy/doc/gc_info.rst b/pypy/doc/gc_info.rst new file mode 100644 --- /dev/null +++ b/pypy/doc/gc_info.rst @@ -0,0 +1,53 @@ +Garbage collector configuration +=============================== + +.. _minimark-environment-variables: + +Minimark +-------- + +PyPy's default ``minimark`` garbage collector is configurable through +several environment variables: + +``PYPY_GC_NURSERY`` + The nursery size. + Defaults to ``4MB``. + Small values (like 1 or 1KB) are useful for debugging. + +``PYPY_GC_MAJOR_COLLECT`` + Major collection memory factor. + Default is ``1.82``, which means trigger a major collection when the + memory consumed equals 1.82 times the memory really used at the end + of the previous major collection. + +``PYPY_GC_GROWTH`` + Major collection threshold's max growth rate. + Default is ``1.4``. 
+ Useful to collect more often than normally on sudden memory growth, + e.g. when there is a temporary peak in memory usage. + +``PYPY_GC_MAX`` + The max heap size. + If coming near this limit, it will first collect more often, then + raise an RPython MemoryError, and if that is not enough, crash the + program with a fatal error. + Try values like ``1.6GB``. + +``PYPY_GC_MAX_DELTA`` + The major collection threshold will never be set to more than + ``PYPY_GC_MAX_DELTA`` the amount really used after a collection. + Defaults to 1/8th of the total RAM size (which is constrained to be + at most 2/3/4GB on 32-bit systems). + Try values like ``200MB``. + +``PYPY_GC_MIN`` + Don't collect while the memory size is below this limit. + Useful to avoid spending all the time in the GC in very small + programs. + Defaults to 8 times the nursery. + +``PYPY_GC_DEBUG`` + Enable extra checks around collections that are too slow for normal + use. + Values are ``0`` (off), ``1`` (on major collections) or ``2`` (also + on minor collections). diff --git a/pypy/doc/index.rst b/pypy/doc/index.rst --- a/pypy/doc/index.rst +++ b/pypy/doc/index.rst @@ -353,10 +353,12 @@ getting-started-dev.rst windows.rst faq.rst + commandline_ref.rst architecture.rst coding-guide.rst cpython_differences.rst garbage_collection.rst + gc_info.rst interpreter.rst objspace.rst __pypy__-module.rst diff --git a/pypy/doc/man/pypy.1.rst b/pypy/doc/man/pypy.1.rst --- a/pypy/doc/man/pypy.1.rst +++ b/pypy/doc/man/pypy.1.rst @@ -24,6 +24,9 @@ -S Do not ``import site`` on initialization. +-s + Don't add the user site directory to `sys.path`. + -u Unbuffered binary ``stdout`` and ``stderr``. @@ -39,6 +42,9 @@ -E Ignore environment variables (such as ``PYTHONPATH``). +-B + Disable writing bytecode (``.pyc``) files. + --version Print the PyPy version. @@ -84,6 +90,64 @@ Optimizations to enabled or ``all``. Warning, this option is dangerous, and should be avoided. +ENVIRONMENT +=========== + +``PYTHONPATH`` + Add directories to pypy's module search path. + The format is the same as shell's ``PATH``. + +``PYTHONSTARTUP`` + A script referenced by this variable will be executed before the + first prompt is displayed, in interactive mode. + +``PYTHONDONTWRITEBYTECODE`` + If set to a non-empty value, equivalent to the ``-B`` option. + Disable writing ``.pyc`` files. + +``PYTHONINSPECT`` + If set to a non-empty value, equivalent to the ``-i`` option. + Inspect interactively after running the specified script. + +``PYTHONIOENCODING`` + If this is set, it overrides the encoding used for + *stdin*/*stdout*/*stderr*. + The syntax is *encodingname*:*errorhandler* + The *errorhandler* part is optional and has the same meaning as in + `str.encode`. + +``PYTHONNOUSERSITE`` + If set to a non-empty value, equivalent to the ``-s`` option. + Don't add the user site directory to `sys.path`. + +``PYTHONWARNINGS`` + If set, equivalent to the ``-W`` option (warning control). + The value should be a comma-separated list of ``-W`` parameters. + +``PYPYLOG`` + If set to a non-empty value, enable logging, the format is: + + *fname* + logging for profiling: includes all + ``debug_start``/``debug_stop`` but not any nested + ``debug_print``. + *fname* can be ``-`` to log to *stderr*. + + ``:``\ *fname* + Full logging, including ``debug_print``. + + *prefix*\ ``:``\ *fname* + Conditional logging. + Multiple prefixes can be specified, comma-separated. + Only sections whose name match the prefix will be logged. 
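
The new gc_info.rst section and the man page additions above describe knobs that are read from the environment at start-up. A hedged example of setting them from a small wrapper script; the ``pypy`` executable name and ``myscript.py`` are placeholders, while the variable names and value formats are taken from the text above:

    import os
    import subprocess

    env = os.environ.copy()
    env['PYPY_GC_NURSERY'] = '4MB'          # nursery size (the documented default)
    env['PYPY_GC_MAJOR_COLLECT'] = '1.82'   # major-collection memory factor
    env['PYPY_GC_MAX'] = '1.6GB'            # hard cap on the heap
    env['PYPY_GC_DEBUG'] = '1'              # extra checks on major collections
    # conditional logging: only sections matching these prefixes go to 'logfile'
    env['PYPYLOG'] = 'jit-log-opt,jit-backend:logfile'

    subprocess.call(['pypy', 'myscript.py'], env=env)
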
+ + ``PYPYLOG``\ =\ ``jit-log-opt,jit-backend:``\ *logfile* will + generate a log suitable for *jitviewer*, a tool for debugging + performance issues under PyPy. + +.. include:: ../gc_info.rst + :start-line: 7 + SEE ALSO ======== diff --git a/pypy/doc/release-1.8.0.rst b/pypy/doc/release-1.8.0.rst new file mode 100644 --- /dev/null +++ b/pypy/doc/release-1.8.0.rst @@ -0,0 +1,52 @@ +============================ +PyPy 1.8 - business as usual +============================ + +We're pleased to announce the 1.8 release of PyPy. As became a habit, this +release brings a lot of bugfixes, performance and memory improvements over +the 1.7 release. The main highlight of the release is the introduction of +list strategies which makes homogenous lists more efficient both in terms +of performance and memory. Otherwise it's "business as usual" in the sense +that performance improved roughly 10% on average since the previous release. +You can download the PyPy 1.8 release here: + + http://pypy.org/download.html + +What is PyPy? +============= + +PyPy is a very compliant Python interpreter, almost a drop-in replacement for +CPython 2.7. It's fast (`pypy 1.8 and cpython 2.7.1`_ performance comparison) +due to its integrated tracing JIT compiler. + +This release supports x86 machines running Linux 32/64, Mac OS X 32/64 or +Windows 32. Windows 64 work is ongoing, but not yet natively supported. + +.. _`pypy 1.8 and cpython 2.7.1`: http://speed.pypy.org + + +Highlights +========== + +* List strategies. Now lists that contain only ints or only floats should + be as efficient as storing them in a binary-packed array. It also improves + the JIT performance in places that use such lists. There are also special + strategies for unicode and string lists. + +* As usual, numerous performance improvements. There are too many examples + of python constructs that now should behave faster to list them. + +* Bugfixes and compatibility fixes with CPython. + +* Windows fixes. + +* NumPy effort progress; for the exact list of things that have been done, + consult the `numpy status page`_. A tentative list of things that has + been done: + + xxxx # list it, multidim arrays in particular + +* Fundraising XXX + +.. _`numpy status page`: xxx +.. _`numpy status update blog report`: xxx diff --git a/pypy/interpreter/astcompiler/optimize.py b/pypy/interpreter/astcompiler/optimize.py --- a/pypy/interpreter/astcompiler/optimize.py +++ b/pypy/interpreter/astcompiler/optimize.py @@ -302,8 +302,7 @@ # narrow builds will return a surrogate. In both # the cases skip the optimization in order to # produce compatible pycs. 
- if (self.space.isinstance_w(w_obj, self.space.w_unicode) - and + if (self.space.isinstance_w(w_obj, self.space.w_unicode) and self.space.isinstance_w(w_const, self.space.w_unicode)): unistr = self.space.unicode_w(w_const) if len(unistr) == 1: @@ -311,7 +310,7 @@ else: ch = 0 if (ch > 0xFFFF or - (MAXUNICODE == 0xFFFF and 0xD800 <= ch <= 0xDFFFF)): + (MAXUNICODE == 0xFFFF and 0xD800 <= ch <= 0xDFFF)): return subs return ast.Const(w_const, subs.lineno, subs.col_offset) diff --git a/pypy/interpreter/astcompiler/test/test_compiler.py b/pypy/interpreter/astcompiler/test/test_compiler.py --- a/pypy/interpreter/astcompiler/test/test_compiler.py +++ b/pypy/interpreter/astcompiler/test/test_compiler.py @@ -838,7 +838,7 @@ # Just checking this doesn't crash out self.count_instructions(source) - def test_const_fold_unicode_subscr(self): + def test_const_fold_unicode_subscr(self, monkeypatch): source = """def f(): return u"abc"[0] """ @@ -853,6 +853,14 @@ assert counts == {ops.LOAD_CONST: 2, ops.BINARY_SUBSCR: 1, ops.RETURN_VALUE: 1} + monkeypatch.setattr(optimize, "MAXUNICODE", 0xFFFF) + source = """def f(): + return u"\uE01F"[0] + """ + counts = self.count_instructions(source) + assert counts == {ops.LOAD_CONST: 1, ops.RETURN_VALUE: 1} + monkeypatch.undo() + # getslice is not yet optimized. # Still, check a case which yields the empty string. source = """def f(): diff --git a/pypy/interpreter/baseobjspace.py b/pypy/interpreter/baseobjspace.py --- a/pypy/interpreter/baseobjspace.py +++ b/pypy/interpreter/baseobjspace.py @@ -1312,6 +1312,15 @@ def str_w(self, w_obj): return w_obj.str_w(self) + def str0_w(self, w_obj): + "Like str_w, but rejects strings with NUL bytes." + from pypy.rlib import rstring + result = w_obj.str_w(self) + if '\x00' in result: + raise OperationError(self.w_TypeError, self.wrap( + 'argument must be a string without NUL characters')) + return rstring.assert_str0(result) + def int_w(self, w_obj): return w_obj.int_w(self) @@ -1331,6 +1340,15 @@ def unicode_w(self, w_obj): return w_obj.unicode_w(self) + def unicode0_w(self, w_obj): + "Like unicode_w, but rejects strings with NUL bytes." + from pypy.rlib import rstring + result = w_obj.unicode_w(self) + if u'\x00' in result: + raise OperationError(self.w_TypeError, self.wrap( + 'argument must be a unicode string without NUL characters')) + return rstring.assert_str0(result) + def realunicode_w(self, w_obj): # Like unicode_w, but only works if w_obj is really of type # 'unicode'. 
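
The one-character fix in optimize.py above replaces ``0xDFFFF`` with ``0xDFFF``: the UTF-16 surrogate range is U+D800..U+DFFF, so the old bound also matched ordinary BMP characters on narrow builds and needlessly disabled the constant folding for them, which is exactly what the new u"\uE01F" test checks. A small check of the corrected predicate:

    MAXUNICODE = 0xFFFF   # pretend narrow build, as the new test monkeypatches

    def skips_fold(ch):
        # corrected condition from optimize.py: skip constant-folding for
        # non-BMP values and, on narrow builds, for surrogates
        return ch > 0xFFFF or (MAXUNICODE == 0xFFFF and 0xD800 <= ch <= 0xDFFF)

    assert skips_fold(0xD800) and skips_fold(0xDFFF)   # real surrogates
    assert not skips_fold(0xE01F)                      # foldable again
    # with the old 0xDFFFF bound, 0xE01F <= 0xDFFFF held and the fold was skipped
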
@@ -1629,6 +1647,9 @@ 'UnicodeEncodeError', 'UnicodeDecodeError', ] + +if sys.platform.startswith("win"): + ObjSpace.ExceptionTable += ['WindowsError'] ## Irregular part of the interface: # diff --git a/pypy/interpreter/gateway.py b/pypy/interpreter/gateway.py --- a/pypy/interpreter/gateway.py +++ b/pypy/interpreter/gateway.py @@ -130,6 +130,9 @@ def visit_str_or_None(self, el, app_sig): self.checked_space_method(el, app_sig) + def visit_str0(self, el, app_sig): + self.checked_space_method(el, app_sig) + def visit_nonnegint(self, el, app_sig): self.checked_space_method(el, app_sig) @@ -249,6 +252,9 @@ def visit_str_or_None(self, typ): self.run_args.append("space.str_or_None_w(%s)" % (self.scopenext(),)) + def visit_str0(self, typ): + self.run_args.append("space.str0_w(%s)" % (self.scopenext(),)) + def visit_nonnegint(self, typ): self.run_args.append("space.gateway_nonnegint_w(%s)" % ( self.scopenext(),)) @@ -383,6 +389,9 @@ def visit_str_or_None(self, typ): self.unwrap.append("space.str_or_None_w(%s)" % (self.nextarg(),)) + def visit_str0(self, typ): + self.unwrap.append("space.str0_w(%s)" % (self.nextarg(),)) + def visit_nonnegint(self, typ): self.unwrap.append("space.gateway_nonnegint_w(%s)" % (self.nextarg(),)) diff --git a/pypy/interpreter/mixedmodule.py b/pypy/interpreter/mixedmodule.py --- a/pypy/interpreter/mixedmodule.py +++ b/pypy/interpreter/mixedmodule.py @@ -50,7 +50,7 @@ space.call_method(self.w_dict, 'update', self.w_initialdict) for w_submodule in self.submodules_w: - name = space.str_w(w_submodule.w_name) + name = space.str0_w(w_submodule.w_name) space.setitem(self.w_dict, space.wrap(name.split(".")[-1]), w_submodule) space.getbuiltinmodule(name) diff --git a/pypy/interpreter/module.py b/pypy/interpreter/module.py --- a/pypy/interpreter/module.py +++ b/pypy/interpreter/module.py @@ -31,7 +31,8 @@ def install(self): """NOT_RPYTHON: installs this module into space.builtin_modules""" w_mod = self.space.wrap(self) - self.space.builtin_modules[self.space.unwrap(self.w_name)] = w_mod + modulename = self.space.str0_w(self.w_name) + self.space.builtin_modules[modulename] = w_mod def setup_after_space_initialization(self): """NOT_RPYTHON: to allow built-in modules to do some more setup diff --git a/pypy/interpreter/test/test_objspace.py b/pypy/interpreter/test/test_objspace.py --- a/pypy/interpreter/test/test_objspace.py +++ b/pypy/interpreter/test/test_objspace.py @@ -178,6 +178,14 @@ res = self.space.interp_w(Function, w(None), can_be_None=True) assert res is None + def test_str0_w(self): + space = self.space + w = space.wrap + assert space.str0_w(w("123")) == "123" + exc = space.raises_w(space.w_TypeError, space.str0_w, w("123\x004")) + assert space.unicode0_w(w(u"123")) == u"123" + exc = space.raises_w(space.w_TypeError, space.unicode0_w, w(u"123\x004")) + def test_getindex_w(self): w_instance1 = self.space.appexec([], """(): class X(object): diff --git a/pypy/jit/backend/llgraph/llimpl.py b/pypy/jit/backend/llgraph/llimpl.py --- a/pypy/jit/backend/llgraph/llimpl.py +++ b/pypy/jit/backend/llgraph/llimpl.py @@ -780,6 +780,9 @@ self.overflow_flag = ovf return z + def op_keepalive(self, _, x): + pass + # ---------- # delegating to the builtins do_xxx() (done automatically for simple cases) diff --git a/pypy/jit/backend/x86/regalloc.py b/pypy/jit/backend/x86/regalloc.py --- a/pypy/jit/backend/x86/regalloc.py +++ b/pypy/jit/backend/x86/regalloc.py @@ -1463,6 +1463,9 @@ if jump_op is not None and jump_op.getdescr() is descr: self._compute_hint_frame_locations_from_descr(descr) + def 
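
``str0_w``, ``unicode0_w`` and the new ``'str0'`` unwrap_spec above all enforce the same guard: refuse any string that embeds a NUL byte before it reaches lower-level code such as os.open(). A standalone mirror of that guard in plain Python (not the interpreter-level helpers themselves):

    def check_str0(s):
        # same condition str0_w enforces before handing the string on
        if '\x00' in s:
            raise TypeError("argument must be a string without NUL characters")
        return s

    check_str0("123")              # fine, returned unchanged
    try:
        check_str0("123\x004")     # the case exercised by test_str0_w
    except TypeError, e:
        print "rejected:", e
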
consider_keepalive(self, op): + pass + def not_implemented_op(self, op): not_implemented("not implemented operation: %s" % op.getopname()) diff --git a/pypy/jit/codewriter/flatten.py b/pypy/jit/codewriter/flatten.py --- a/pypy/jit/codewriter/flatten.py +++ b/pypy/jit/codewriter/flatten.py @@ -162,7 +162,9 @@ if len(block.exits) == 1: # A single link, fall-through link = block.exits[0] - assert link.exitcase is None + assert link.exitcase in (None, False, True) + # the cases False or True should not really occur, but can show + # up in the manually hacked graphs for generators... self.make_link(link) # elif block.exitswitch is c_last_exception: diff --git a/pypy/jit/codewriter/policy.py b/pypy/jit/codewriter/policy.py --- a/pypy/jit/codewriter/policy.py +++ b/pypy/jit/codewriter/policy.py @@ -48,7 +48,7 @@ mod = func.__module__ or '?' if mod.startswith('pypy.rpython.module.'): return True - if mod.startswith('pypy.translator.'): # XXX wtf? + if mod == 'pypy.translator.goal.nanos': # more helpers return True return False diff --git a/pypy/jit/metainterp/executor.py b/pypy/jit/metainterp/executor.py --- a/pypy/jit/metainterp/executor.py +++ b/pypy/jit/metainterp/executor.py @@ -254,6 +254,9 @@ assert isinstance(x, r_longlong) # 32-bit return BoxFloat(x) +def do_keepalive(cpu, _, x): + pass + # ____________________________________________________________ ##def do_force_token(cpu): diff --git a/pypy/jit/metainterp/pyjitpl.py b/pypy/jit/metainterp/pyjitpl.py --- a/pypy/jit/metainterp/pyjitpl.py +++ b/pypy/jit/metainterp/pyjitpl.py @@ -974,13 +974,13 @@ any_operation = len(self.metainterp.history.operations) > 0 jitdriver_sd = self.metainterp.staticdata.jitdrivers_sd[jdindex] self.verify_green_args(jitdriver_sd, greenboxes) - self.debug_merge_point(jitdriver_sd, jdindex, self.metainterp.in_recursion, + self.debug_merge_point(jitdriver_sd, jdindex, self.metainterp.portal_call_depth, greenboxes) if self.metainterp.seen_loop_header_for_jdindex < 0: if not any_operation: return - if self.metainterp.in_recursion or not self.metainterp.get_procedure_token(greenboxes, True): + if self.metainterp.portal_call_depth or not self.metainterp.get_procedure_token(greenboxes, True): if not jitdriver_sd.no_loop_header: return # automatically add a loop_header if there is none @@ -992,7 +992,7 @@ self.metainterp.seen_loop_header_for_jdindex = -1 # - if not self.metainterp.in_recursion: + if not self.metainterp.portal_call_depth: assert jitdriver_sd is self.metainterp.jitdriver_sd # Set self.pc to point to jit_merge_point instead of just after: # if reached_loop_header() raises SwitchToBlackhole, then the @@ -1028,11 +1028,11 @@ assembler_call=True) raise ChangeFrame - def debug_merge_point(self, jitdriver_sd, jd_index, in_recursion, greenkey): + def debug_merge_point(self, jitdriver_sd, jd_index, portal_call_depth, greenkey): # debugging: produce a DEBUG_MERGE_POINT operation loc = jitdriver_sd.warmstate.get_location_str(greenkey) debug_print(loc) - args = [ConstInt(jd_index), ConstInt(in_recursion)] + greenkey + args = [ConstInt(jd_index), ConstInt(portal_call_depth)] + greenkey self.metainterp.history.record(rop.DEBUG_MERGE_POINT, args, None) @arguments("box", "label") @@ -1346,12 +1346,16 @@ resbox = self.metainterp.execute_and_record_varargs( rop.CALL_MAY_FORCE, allboxes, descr=descr) self.metainterp.vrefs_after_residual_call() + vablebox = None if assembler_call: - self.metainterp.direct_assembler_call(assembler_call_jd) + vablebox = self.metainterp.direct_assembler_call( + assembler_call_jd) if resbox is not 
None: self.make_result_of_lastop(resbox) self.metainterp.vable_after_residual_call() self.generate_guard(rop.GUARD_NOT_FORCED, None) + if vablebox is not None: + self.metainterp.history.record(rop.KEEPALIVE, [vablebox], None) self.metainterp.handle_possible_exception() return resbox else: @@ -1552,7 +1556,7 @@ # ____________________________________________________________ class MetaInterp(object): - in_recursion = 0 + portal_call_depth = 0 cancel_count = 0 def __init__(self, staticdata, jitdriver_sd): @@ -1587,7 +1591,7 @@ def newframe(self, jitcode, greenkey=None): if jitcode.is_portal: - self.in_recursion += 1 + self.portal_call_depth += 1 if greenkey is not None and self.is_main_jitcode(jitcode): self.portal_trace_positions.append( (greenkey, len(self.history.operations))) @@ -1603,7 +1607,7 @@ frame = self.framestack.pop() jitcode = frame.jitcode if jitcode.is_portal: - self.in_recursion -= 1 + self.portal_call_depth -= 1 if frame.greenkey is not None and self.is_main_jitcode(jitcode): self.portal_trace_positions.append( (None, len(self.history.operations))) @@ -1662,17 +1666,17 @@ raise self.staticdata.ExitFrameWithExceptionRef(self.cpu, excvaluebox.getref_base()) def check_recursion_invariant(self): - in_recursion = -1 + portal_call_depth = -1 for frame in self.framestack: jitcode = frame.jitcode assert jitcode.is_portal == len([ jd for jd in self.staticdata.jitdrivers_sd if jd.mainjitcode is jitcode]) if jitcode.is_portal: - in_recursion += 1 - if in_recursion != self.in_recursion: - print "in_recursion problem!!!" - print in_recursion, self.in_recursion + portal_call_depth += 1 + if portal_call_depth != self.portal_call_depth: + print "portal_call_depth problem!!!" + print portal_call_depth, self.portal_call_depth for frame in self.framestack: jitcode = frame.jitcode if jitcode.is_portal: @@ -2183,11 +2187,11 @@ def initialize_state_from_start(self, original_boxes): # ----- make a new frame ----- - self.in_recursion = -1 # always one portal around + self.portal_call_depth = -1 # always one portal around self.framestack = [] f = self.newframe(self.jitdriver_sd.mainjitcode) f.setup_call(original_boxes) - assert self.in_recursion == 0 + assert self.portal_call_depth == 0 self.virtualref_boxes = [] self.initialize_withgreenfields(original_boxes) self.initialize_virtualizable(original_boxes) @@ -2198,7 +2202,7 @@ # otherwise the jit_virtual_refs are left in a dangling state. rstack._stack_criticalcode_start() try: - self.in_recursion = -1 # always one portal around + self.portal_call_depth = -1 # always one portal around self.history = history.History() inputargs_and_holes = self.rebuild_state_after_failure(resumedescr) self.history.inputargs = [box for box in inputargs_and_holes if box] @@ -2478,6 +2482,15 @@ token = warmrunnerstate.get_assembler_token(greenargs) op = op.copy_and_change(rop.CALL_ASSEMBLER, args=args, descr=token) self.history.operations.append(op) + # + # To fix an obscure issue, make sure the vable stays alive + # longer than the CALL_ASSEMBLER operation. We do it by + # inserting explicitly an extra KEEPALIVE operation. 
+ jd = token.outermost_jitdriver_sd + if jd.index_of_virtualizable >= 0: + return args[jd.index_of_virtualizable] + else: + return None # ____________________________________________________________ diff --git a/pypy/jit/metainterp/resoperation.py b/pypy/jit/metainterp/resoperation.py --- a/pypy/jit/metainterp/resoperation.py +++ b/pypy/jit/metainterp/resoperation.py @@ -503,6 +503,7 @@ 'COPYUNICODECONTENT/5', 'QUASIIMMUT_FIELD/1d', # [objptr], descr=SlowMutateDescr 'RECORD_KNOWN_CLASS/2', # [objptr, clsptr] + 'KEEPALIVE/1', '_CANRAISE_FIRST', # ----- start of can_raise operations ----- '_CALL_FIRST', diff --git a/pypy/jit/metainterp/test/test_ajit.py b/pypy/jit/metainterp/test/test_ajit.py --- a/pypy/jit/metainterp/test/test_ajit.py +++ b/pypy/jit/metainterp/test/test_ajit.py @@ -322,6 +322,17 @@ res = self.interp_operations(f, [42]) assert res == ord(u"?") + def test_char_in_constant_string(self): + def g(string): + return '\x00' in string + def f(): + if g('abcdef'): return -60 + if not g('abc\x00ef'): return -61 + return 42 + res = self.interp_operations(f, []) + assert res == 42 + self.check_operations_history({'finish': 1}) # nothing else + def test_residual_call(self): @dont_look_inside def externfn(x, y): @@ -3695,6 +3706,18 @@ # here it works again self.check_operations_history(guard_class=0, record_known_class=1) + def test_generator(self): + def g(n): + yield n+1 + yield n+2 + yield n+3 + def f(n): + gen = g(n) + return gen.next() * gen.next() * gen.next() + res = self.interp_operations(f, [10]) + assert res == 11 * 12 * 13 + self.check_operations_history(int_add=3, int_mul=2) + class TestLLtype(BaseLLtypeTests, LLJitMixin): def test_tagged(self): diff --git a/pypy/module/_ffi/test/test__ffi.py b/pypy/module/_ffi/test/test__ffi.py --- a/pypy/module/_ffi/test/test__ffi.py +++ b/pypy/module/_ffi/test/test__ffi.py @@ -190,6 +190,7 @@ def test_convert_strings_to_char_p(self): """ + DLLEXPORT long mystrlen(char* s) { long len = 0; @@ -215,6 +216,7 @@ def test_convert_unicode_to_unichar_p(self): """ #include + DLLEXPORT long mystrlen_u(wchar_t* s) { long len = 0; @@ -241,6 +243,7 @@ def test_keepalive_temp_buffer(self): """ + DLLEXPORT char* do_nothing(char* s) { return s; @@ -525,5 +528,7 @@ from _ffi import CDLL, types libfoo = CDLL(self.libfoo_name) raises(AttributeError, "libfoo.getfunc('I_do_not_exist', [], types.void)") + if self.iswin32: + skip("unix specific") libnone = CDLL(None) raises(AttributeError, "libnone.getfunc('I_do_not_exist', [], types.void)") diff --git a/pypy/module/_file/test/test_file.py b/pypy/module/_file/test/test_file.py --- a/pypy/module/_file/test/test_file.py +++ b/pypy/module/_file/test/test_file.py @@ -265,6 +265,13 @@ if option.runappdirect: py.test.skip("works with internals of _file impl on py.py") + import platform + if platform.system() == 'Windows': + # XXX This test crashes until someone implements something like + # XXX verify_fd from + # XXX http://hg.python.org/cpython/file/80ddbd822227/Modules/posixmodule.c#l434 + # XXX and adds it to fopen + assert False state = [0] def read(fd, n=None): diff --git a/pypy/module/bz2/interp_bz2.py b/pypy/module/bz2/interp_bz2.py --- a/pypy/module/bz2/interp_bz2.py +++ b/pypy/module/bz2/interp_bz2.py @@ -328,7 +328,7 @@ if basemode == "a": raise OperationError(space.w_ValueError, space.wrap("cannot append to bz2 file")) - stream = open_path_helper(space.str_w(w_path), os_flags, False) + stream = open_path_helper(space.str0_w(w_path), os_flags, False) if reading: bz2stream = ReadBZ2Filter(space, stream, 
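
The CALL_ASSEMBLER fix above records an explicit KEEPALIVE operation so that the virtualizable box cannot die before the call it backs. At the RPython source level the analogous idiom is ``keepalive_until_here()`` from pypy.rlib.objectmodel; the sketch below is only an illustration of that pattern (``Box``, ``consume`` and ``use_raw_view`` are made-up names, and this is not the JIT-internal mechanism):

    from pypy.rlib.objectmodel import keepalive_until_here

    class Box(object):
        def __init__(self, value):
            self.value = value

    def consume(x):
        return x + 1

    def use_raw_view(box):
        raw = box.value              # pretend 'raw' is data whose validity
        result = consume(raw)        # depends on 'box' staying alive
        keepalive_until_here(box)    # so 'box' must not be freed before here
        return result
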
buffering) buffering = 0 # by construction, the ReadBZ2Filter acts like diff --git a/pypy/module/cpyext/include/pythonrun.h b/pypy/module/cpyext/include/pythonrun.h --- a/pypy/module/cpyext/include/pythonrun.h +++ b/pypy/module/cpyext/include/pythonrun.h @@ -13,6 +13,7 @@ #define Py_FrozenFlag 0 #define Py_VerboseFlag 0 +#define Py_DebugFlag 1 typedef struct { int cf_flags; /* bitmask of CO_xxx flags relevant to future */ diff --git a/pypy/module/gc/interp_gc.py b/pypy/module/gc/interp_gc.py --- a/pypy/module/gc/interp_gc.py +++ b/pypy/module/gc/interp_gc.py @@ -49,7 +49,7 @@ # ____________________________________________________________ - at unwrap_spec(filename=str) + at unwrap_spec(filename='str0') def dump_heap_stats(space, filename): tb = rgc._heap_stats() if not tb: diff --git a/pypy/module/imp/importing.py b/pypy/module/imp/importing.py --- a/pypy/module/imp/importing.py +++ b/pypy/module/imp/importing.py @@ -138,7 +138,7 @@ ctxt_package = None if ctxt_w_package is not None and ctxt_w_package is not space.w_None: try: - ctxt_package = space.str_w(ctxt_w_package) + ctxt_package = space.str0_w(ctxt_w_package) except OperationError, e: if not e.match(space, space.w_TypeError): raise @@ -187,7 +187,7 @@ ctxt_name = None if ctxt_w_name is not None: try: - ctxt_name = space.str_w(ctxt_w_name) + ctxt_name = space.str0_w(ctxt_w_name) except OperationError, e: if not e.match(space, space.w_TypeError): raise @@ -230,7 +230,7 @@ return rel_modulename, rel_level - at unwrap_spec(name=str, level=int) + at unwrap_spec(name='str0', level=int) def importhook(space, name, w_globals=None, w_locals=None, w_fromlist=None, level=-1): modulename = name @@ -377,8 +377,8 @@ fromlist_w = space.fixedview(w_all) for w_name in fromlist_w: if try_getattr(space, w_mod, w_name) is None: - load_part(space, w_path, prefix, space.str_w(w_name), w_mod, - tentative=1) + load_part(space, w_path, prefix, space.str0_w(w_name), + w_mod, tentative=1) return w_mod else: return first @@ -432,7 +432,7 @@ def __init__(self, space): pass - @unwrap_spec(path=str) + @unwrap_spec(path='str0') def descr_init(self, space, path): if not path: raise OperationError(space.w_ImportError, space.wrap( @@ -513,7 +513,7 @@ if w_loader: return FindInfo.fromLoader(w_loader) - path = space.str_w(w_pathitem) + path = space.str0_w(w_pathitem) filepart = os.path.join(path, partname) if os.path.isdir(filepart) and case_ok(filepart): initfile = os.path.join(filepart, '__init__') @@ -671,7 +671,7 @@ space.wrap("reload() argument must be module")) w_modulename = space.getattr(w_module, space.wrap("__name__")) - modulename = space.str_w(w_modulename) + modulename = space.str0_w(w_modulename) if not space.is_w(check_sys_modules(space, w_modulename), w_module): raise operationerrfmt( space.w_ImportError, diff --git a/pypy/module/imp/interp_imp.py b/pypy/module/imp/interp_imp.py --- a/pypy/module/imp/interp_imp.py +++ b/pypy/module/imp/interp_imp.py @@ -44,7 +44,7 @@ return space.interp_w(W_File, w_file).stream def find_module(space, w_name, w_path=None): - name = space.str_w(w_name) + name = space.str0_w(w_name) if space.is_w(w_path, space.w_None): w_path = None @@ -75,7 +75,7 @@ def load_module(space, w_name, w_file, w_filename, w_info): w_suffix, w_filemode, w_modtype = space.unpackiterable(w_info) - filename = space.str_w(w_filename) + filename = space.str0_w(w_filename) filemode = space.str_w(w_filemode) if space.is_w(w_file, space.w_None): stream = None @@ -92,7 +92,7 @@ space, w_name, find_info, reuse=True) def load_source(space, w_modulename, 
w_filename, w_file=None): - filename = space.str_w(w_filename) + filename = space.str0_w(w_filename) stream = get_file(space, w_file, filename, 'U') @@ -105,7 +105,7 @@ stream.close() return w_mod - at unwrap_spec(filename=str) + at unwrap_spec(filename='str0') def _run_compiled_module(space, w_modulename, filename, w_file, w_module): # the function 'imp._run_compiled_module' is a pypy-only extension stream = get_file(space, w_file, filename, 'rb') @@ -119,7 +119,7 @@ if space.is_w(w_file, space.w_None): stream.close() - at unwrap_spec(filename=str) + at unwrap_spec(filename='str0') def load_compiled(space, w_modulename, filename, w_file=None): w_mod = space.wrap(Module(space, w_modulename)) importing._prepare_module(space, w_mod, filename, None) @@ -138,7 +138,7 @@ return space.wrap(Module(space, w_name, add_package=False)) def init_builtin(space, w_name): - name = space.str_w(w_name) + name = space.str0_w(w_name) if name not in space.builtin_modules: return if space.finditem(space.sys.get('modules'), w_name) is not None: @@ -151,7 +151,7 @@ return None def is_builtin(space, w_name): - name = space.str_w(w_name) + name = space.str0_w(w_name) if name not in space.builtin_modules: return space.wrap(0) if space.finditem(space.sys.get('modules'), w_name) is not None: diff --git a/pypy/module/micronumpy/__init__.py b/pypy/module/micronumpy/__init__.py --- a/pypy/module/micronumpy/__init__.py +++ b/pypy/module/micronumpy/__init__.py @@ -98,6 +98,10 @@ ('bitwise_not', 'invert'), ('isnan', 'isnan'), ('isinf', 'isinf'), + ('logical_and', 'logical_and'), + ('logical_xor', 'logical_xor'), + ('logical_not', 'logical_not'), + ('logical_or', 'logical_or'), ]: interpleveldefs[exposed] = "interp_ufuncs.get(space).%s" % impl diff --git a/pypy/module/micronumpy/interp_boxes.py b/pypy/module/micronumpy/interp_boxes.py --- a/pypy/module/micronumpy/interp_boxes.py +++ b/pypy/module/micronumpy/interp_boxes.py @@ -80,6 +80,7 @@ descr_sub = _binop_impl("subtract") descr_mul = _binop_impl("multiply") descr_div = _binop_impl("divide") + descr_truediv = _binop_impl("true_divide") descr_pow = _binop_impl("power") descr_eq = _binop_impl("equal") descr_ne = _binop_impl("not_equal") @@ -174,6 +175,7 @@ __sub__ = interp2app(W_GenericBox.descr_sub), __mul__ = interp2app(W_GenericBox.descr_mul), __div__ = interp2app(W_GenericBox.descr_div), + __truediv__ = interp2app(W_GenericBox.descr_truediv), __pow__ = interp2app(W_GenericBox.descr_pow), __radd__ = interp2app(W_GenericBox.descr_radd), diff --git a/pypy/module/micronumpy/interp_iter.py b/pypy/module/micronumpy/interp_iter.py --- a/pypy/module/micronumpy/interp_iter.py +++ b/pypy/module/micronumpy/interp_iter.py @@ -86,8 +86,9 @@ def apply_transformations(self, arr, transformations): v = self - for transform in transformations: - v = v.transform(arr, transform) + if transformations is not None: + for transform in transformations: + v = v.transform(arr, transform) return v def transform(self, arr, t): diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -3,7 +3,7 @@ from pypy.interpreter.gateway import interp2app, unwrap_spec from pypy.interpreter.typedef import TypeDef, GetSetProperty from pypy.module.micronumpy import (interp_ufuncs, interp_dtype, interp_boxes, - signature, support) + signature, support, loop) from pypy.module.micronumpy.strides import (calculate_slice_strides, shape_agreement, find_shape_and_elems, 
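
The micronumpy __init__ hunk above exposes four new elementwise ufuncs: ``logical_and``, ``logical_or``, ``logical_xor`` and ``logical_not``. Their intended semantics, shown here with CPython's numpy purely as a reference for the expected results:

    import numpy as np

    x = np.array([True, True, False])
    y = np.array([True, False, False])
    assert list(np.logical_and(x, y)) == [True, False, False]
    assert list(np.logical_or(x, y))  == [True, True, False]
    assert list(np.logical_xor(x, y)) == [False, True, False]
    assert list(np.logical_not(x))    == [False, False, True]
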
get_shape_from_iterable, calc_new_strides, to_coords) @@ -12,39 +12,11 @@ from pypy.rpython.lltypesystem import lltype, rffi from pypy.tool.sourcetools import func_with_new_name from pypy.rlib.rstring import StringBuilder -from pypy.module.micronumpy.interp_iter import (ArrayIterator, OneDimIterator, +from pypy.module.micronumpy.interp_iter import (ArrayIterator, SkipLastAxisIterator, Chunk, ViewIterator) from pypy.module.micronumpy.appbridge import get_appbridge_cache -numpy_driver = jit.JitDriver( - greens=['shapelen', 'sig'], - virtualizables=['frame'], - reds=['result_size', 'frame', 'ri', 'self', 'result'], - get_printable_location=signature.new_printable_location('numpy'), - name='numpy', -) -all_driver = jit.JitDriver( - greens=['shapelen', 'sig'], - virtualizables=['frame'], - reds=['frame', 'self', 'dtype'], - get_printable_location=signature.new_printable_location('all'), - name='numpy_all', -) -any_driver = jit.JitDriver( - greens=['shapelen', 'sig'], - virtualizables=['frame'], - reds=['frame', 'self', 'dtype'], - get_printable_location=signature.new_printable_location('any'), - name='numpy_any', -) -slice_driver = jit.JitDriver( - greens=['shapelen', 'sig'], - virtualizables=['frame'], - reds=['self', 'frame', 'arr'], - get_printable_location=signature.new_printable_location('slice'), - name='numpy_slice', -) count_driver = jit.JitDriver( greens=['shapelen'], virtualizables=['frame'], @@ -173,6 +145,8 @@ descr_prod = _reduce_ufunc_impl("multiply", True) descr_max = _reduce_ufunc_impl("maximum") descr_min = _reduce_ufunc_impl("minimum") + descr_all = _reduce_ufunc_impl('logical_and') + descr_any = _reduce_ufunc_impl('logical_or') def _reduce_argmax_argmin_impl(op_name): reduce_driver = jit.JitDriver( @@ -212,40 +186,6 @@ return space.wrap(loop(self)) return func_with_new_name(impl, "reduce_arg%s_impl" % op_name) - def _all(self): - dtype = self.find_dtype() - sig = self.find_sig() - frame = sig.create_frame(self) - shapelen = len(self.shape) - while not frame.done(): - all_driver.jit_merge_point(sig=sig, - shapelen=shapelen, self=self, - dtype=dtype, frame=frame) - if not dtype.itemtype.bool(sig.eval(frame, self)): - return False - frame.next(shapelen) - return True - - def descr_all(self, space): - return space.wrap(self._all()) - - def _any(self): - dtype = self.find_dtype() - sig = self.find_sig() - frame = sig.create_frame(self) - shapelen = len(self.shape) - while not frame.done(): - any_driver.jit_merge_point(sig=sig, frame=frame, - shapelen=shapelen, self=self, - dtype=dtype) - if dtype.itemtype.bool(sig.eval(frame, self)): - return True - frame.next(shapelen) - return False - - def descr_any(self, space): - return space.wrap(self._any()) - descr_argmax = _reduce_argmax_argmin_impl("max") descr_argmin = _reduce_argmax_argmin_impl("min") @@ -267,7 +207,7 @@ out_size = support.product(out_shape) result = W_NDimArray(out_size, out_shape, dtype) # This is the place to add fpypy and blas - return multidim_dot(space, self.get_concrete(), + return multidim_dot(space, self.get_concrete(), other.get_concrete(), result, dtype, other_critical_dim) @@ -280,6 +220,12 @@ def descr_get_ndim(self, space): return space.wrap(len(self.shape)) + def descr_get_itemsize(self, space): + return space.wrap(self.find_dtype().itemtype.get_element_size()) + + def descr_get_nbytes(self, space): + return space.wrap(self.size * self.find_dtype().itemtype.get_element_size()) + @jit.unroll_safe def descr_get_shape(self, space): return space.newtuple([space.wrap(i) for i in self.shape]) @@ -507,7 +453,7 
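
With the refactoring above, ``all()`` and ``any()`` become plain reductions over the new logical ufuncs (``descr_all`` and ``descr_any`` are now ``_reduce_ufunc_impl('logical_and')`` and ``_reduce_ufunc_impl('logical_or')``), and arrays gain ``itemsize`` and ``nbytes`` properties. The intended, numpy-compatible behaviour, again using CPython's numpy only as a reference:

    import numpy as np

    a = np.array([1.0, 2.0, 0.0])
    assert a.all() == np.logical_and.reduce(a)   # both False: a 0.0 is present
    assert a.any() == np.logical_or.reduce(a)    # both True: nonzero entries exist

    b = np.zeros((2, 3), dtype=np.float64)
    assert b.itemsize == 8                       # bytes per element
    assert b.nbytes == b.size * b.itemsize       # 6 * 8 == 48, as in descr_get_nbytes
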
@@ w_shape = space.newtuple(args_w) new_shape = get_shape_from_iterable(space, self.size, w_shape) return self.reshape(space, new_shape) - + def reshape(self, space, new_shape): concrete = self.get_concrete() # Since we got to here, prod(new_shape) == self.size @@ -679,6 +625,9 @@ raise OperationError(space.w_NotImplementedError, space.wrap( "non-int arg not supported")) + def compute_first_step(self, sig, frame): + pass + def convert_to_array(space, w_obj): if isinstance(w_obj, BaseArray): return w_obj @@ -744,22 +693,9 @@ raise NotImplementedError def compute(self): - result = W_NDimArray(self.size, self.shape, self.find_dtype()) - shapelen = len(self.shape) - sig = self.find_sig() - frame = sig.create_frame(self) - ri = ArrayIterator(self.size) - while not ri.done(): - numpy_driver.jit_merge_point(sig=sig, - shapelen=shapelen, - result_size=self.size, - frame=frame, - ri=ri, - self=self, result=result) - result.setitem(ri.offset, sig.eval(frame, self)) - frame.next(shapelen) - ri = ri.next(shapelen) - return result + ra = ResultArray(self, self.size, self.shape, self.res_dtype) + loop.compute(ra) + return ra.left def force_if_needed(self): if self.forced_result is None: @@ -817,7 +753,8 @@ def create_sig(self): if self.forced_result is not None: return self.forced_result.create_sig() - return signature.Call1(self.ufunc, self.name, self.values.create_sig()) + return signature.Call1(self.ufunc, self.name, self.calc_dtype, + self.values.create_sig()) class Call2(VirtualArray): """ @@ -858,6 +795,66 @@ return signature.Call2(self.ufunc, self.name, self.calc_dtype, self.left.create_sig(), self.right.create_sig()) +class ResultArray(Call2): + def __init__(self, child, size, shape, dtype, res=None, order='C'): + if res is None: + res = W_NDimArray(size, shape, dtype, order) + Call2.__init__(self, None, 'assign', shape, dtype, dtype, res, child) + + def create_sig(self): + return signature.ResultSignature(self.res_dtype, self.left.create_sig(), + self.right.create_sig()) + +def done_if_true(dtype, val): + return dtype.itemtype.bool(val) + +def done_if_false(dtype, val): + return not dtype.itemtype.bool(val) + +class ReduceArray(Call2): + def __init__(self, func, name, identity, child, dtype): + self.identity = identity + Call2.__init__(self, func, name, [1], dtype, dtype, None, child) + + def compute_first_step(self, sig, frame): + assert isinstance(sig, signature.ReduceSignature) + if self.identity is None: + frame.cur_value = sig.right.eval(frame, self.right).convert_to( + self.calc_dtype) + frame.next(len(self.right.shape)) + else: + frame.cur_value = self.identity.convert_to(self.calc_dtype) + + def create_sig(self): + if self.name == 'logical_and': + done_func = done_if_false + elif self.name == 'logical_or': + done_func = done_if_true + else: + done_func = None + return signature.ReduceSignature(self.ufunc, self.name, self.res_dtype, + signature.ScalarSignature(self.res_dtype), + self.right.create_sig(), done_func) + +class AxisReduce(Call2): + _immutable_fields_ = ['left', 'right'] + + def __init__(self, ufunc, name, identity, shape, dtype, left, right, dim): + Call2.__init__(self, ufunc, name, shape, dtype, dtype, + left, right) + self.dim = dim + self.identity = identity + + def compute_first_step(self, sig, frame): + if self.identity is not None: + frame.identity = self.identity.convert_to(self.calc_dtype) + + def create_sig(self): + return signature.AxisReduceSignature(self.ufunc, self.name, + self.res_dtype, + signature.ScalarSignature(self.res_dtype), + self.right.create_sig()) + 
class SliceArray(Call2): def __init__(self, shape, dtype, left, right, no_broadcast=False): self.no_broadcast = no_broadcast @@ -876,18 +873,6 @@ self.calc_dtype, lsig, rsig) -class AxisReduce(Call2): - """ NOTE: this is only used as a container, you should never - encounter such things in the wild. Remove this comment - when we'll make AxisReduce lazy - """ - _immutable_fields_ = ['left', 'right'] - - def __init__(self, ufunc, name, shape, dtype, left, right, dim): - Call2.__init__(self, ufunc, name, shape, dtype, dtype, - left, right) - self.dim = dim - class ConcreteArray(BaseArray): """ An array that have actual storage, whether owned or not """ @@ -973,7 +958,7 @@ self._fast_setslice(space, w_value) else: arr = SliceArray(self.shape, self.dtype, self, w_value) - self._sliceloop(arr) + loop.compute(arr) def _fast_setslice(self, space, w_value): assert isinstance(w_value, ConcreteArray) @@ -997,17 +982,6 @@ source.next() dest.next() - def _sliceloop(self, arr): - sig = arr.find_sig() - frame = sig.create_frame(arr) - shapelen = len(self.shape) - while not frame.done(): - slice_driver.jit_merge_point(sig=sig, frame=frame, self=self, - arr=arr, - shapelen=shapelen) - sig.eval(frame, arr) - frame.next(shapelen) - def copy(self, space): array = W_NDimArray(self.size, self.shape[:], self.dtype, self.order) array.setslice(space, self) @@ -1033,9 +1007,9 @@ parent.order, parent) self.start = start - def create_iter(self): + def create_iter(self, transforms=None): return ViewIterator(self.start, self.strides, self.backstrides, - self.shape) + self.shape).apply_transformations(self, transforms) def setshape(self, space, new_shape): if len(self.shape) < 1: @@ -1084,8 +1058,8 @@ self.shape = new_shape self.calc_strides(new_shape) - def create_iter(self): - return ArrayIterator(self.size) + def create_iter(self, transforms=None): + return ArrayIterator(self.size).apply_transformations(self, transforms) def create_sig(self): return signature.ArraySignature(self.dtype) @@ -1289,11 +1263,13 @@ BaseArray.descr_set_shape), size = GetSetProperty(BaseArray.descr_get_size), ndim = GetSetProperty(BaseArray.descr_get_ndim), - item = interp2app(BaseArray.descr_item), + itemsize = GetSetProperty(BaseArray.descr_get_itemsize), + nbytes = GetSetProperty(BaseArray.descr_get_nbytes), T = GetSetProperty(BaseArray.descr_get_transpose), flat = GetSetProperty(BaseArray.descr_get_flatiter), ravel = interp2app(BaseArray.descr_ravel), + item = interp2app(BaseArray.descr_item), mean = interp2app(BaseArray.descr_mean), sum = interp2app(BaseArray.descr_sum), @@ -1345,12 +1321,15 @@ def descr_iter(self): return self + def descr_len(self, space): + return space.wrap(self.size) + def descr_index(self, space): return space.wrap(self.index) def descr_coords(self, space): - coords, step, lngth = to_coords(space, self.base.shape, - self.base.size, self.base.order, + coords, step, lngth = to_coords(space, self.base.shape, + self.base.size, self.base.order, space.wrap(self.index)) return space.newtuple([space.wrap(c) for c in coords]) @@ -1380,7 +1359,7 @@ step=step, res=res, ri=ri, - ) + ) w_val = base.getitem(basei.offset) res.setitem(ri.offset,w_val) basei = basei.next_skip_x(shapelen, step) @@ -1408,7 +1387,7 @@ arr=arr, ai=ai, lngth=lngth, - ) + ) v = arr.getitem(ai).convert_to(base.dtype) base.setitem(basei.offset, v) # need to repeat input values until all assignments are done @@ -1419,22 +1398,29 @@ def create_sig(self): return signature.FlatSignature(self.base.dtype) + def create_iter(self, transforms=None): + return 
ViewIterator(self.base.start, self.base.strides, + self.base.backstrides, + self.base.shape).apply_transformations(self.base, + transforms) + def descr_base(self, space): return space.wrap(self.base) W_FlatIterator.typedef = TypeDef( 'flatiter', - #__array__ = #MISSING __iter__ = interp2app(W_FlatIterator.descr_iter), + __len__ = interp2app(W_FlatIterator.descr_len), __getitem__ = interp2app(W_FlatIterator.descr_getitem), __setitem__ = interp2app(W_FlatIterator.descr_setitem), + __eq__ = interp2app(BaseArray.descr_eq), __ne__ = interp2app(BaseArray.descr_ne), __lt__ = interp2app(BaseArray.descr_lt), __le__ = interp2app(BaseArray.descr_le), __gt__ = interp2app(BaseArray.descr_gt), __ge__ = interp2app(BaseArray.descr_ge), - #__sizeof__ #MISSING + base = GetSetProperty(W_FlatIterator.descr_base), index = GetSetProperty(W_FlatIterator.descr_index), coords = GetSetProperty(W_FlatIterator.descr_coords), diff --git a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py --- a/pypy/module/micronumpy/interp_ufuncs.py +++ b/pypy/module/micronumpy/interp_ufuncs.py @@ -2,31 +2,10 @@ from pypy.interpreter.error import OperationError, operationerrfmt from pypy.interpreter.gateway import interp2app, unwrap_spec, NoneNotWrapped from pypy.interpreter.typedef import TypeDef, GetSetProperty, interp_attrproperty -from pypy.module.micronumpy import interp_boxes, interp_dtype, support -from pypy.module.micronumpy.signature import (ReduceSignature, find_sig, - new_printable_location, AxisReduceSignature, ScalarSignature) -from pypy.rlib import jit +from pypy.module.micronumpy import interp_boxes, interp_dtype, support, loop from pypy.rlib.rarithmetic import LONG_BIT from pypy.tool.sourcetools import func_with_new_name - -reduce_driver = jit.JitDriver( - greens=['shapelen', "sig"], - virtualizables=["frame"], - reds=["frame", "self", "dtype", "value", "obj"], - get_printable_location=new_printable_location('reduce'), - name='numpy_reduce', -) - -axisreduce_driver = jit.JitDriver( - greens=['shapelen', 'sig'], - virtualizables=['frame'], - reds=['self','arr', 'identity', 'frame'], - name='numpy_axisreduce', - get_printable_location=new_printable_location('axisreduce'), -) - - class W_Ufunc(Wrappable): _attrs_ = ["name", "promote_to_float", "promote_bools", "identity"] _immutable_fields_ = ["promote_to_float", "promote_bools", "name"] @@ -140,7 +119,7 @@ def reduce(self, space, w_obj, multidim, promote_to_largest, dim, keepdims=False): from pypy.module.micronumpy.interp_numarray import convert_to_array, \ - Scalar + Scalar, ReduceArray if self.argcount != 2: raise OperationError(space.w_ValueError, space.wrap("reduce only " "supported for binary functions")) @@ -151,96 +130,37 @@ if isinstance(obj, Scalar): raise OperationError(space.w_TypeError, space.wrap("cannot reduce " "on a scalar")) - size = obj.size - dtype = find_unaryop_result_dtype( - space, obj.find_dtype(), - promote_to_float=self.promote_to_float, - promote_to_largest=promote_to_largest, - promote_bools=True - ) + if self.comparison_func: + dtype = interp_dtype.get_dtype_cache(space).w_booldtype + else: + dtype = find_unaryop_result_dtype( + space, obj.find_dtype(), + promote_to_float=self.promote_to_float, + promote_to_largest=promote_to_largest, + promote_bools=True + ) shapelen = len(obj.shape) if self.identity is None and size == 0: raise operationerrfmt(space.w_ValueError, "zero-size array to " "%s.reduce without identity", self.name) if shapelen > 1 and dim >= 0: - res = self.do_axis_reduce(obj, dtype, dim, keepdims) 
- return space.wrap(res) - scalarsig = ScalarSignature(dtype) - sig = find_sig(ReduceSignature(self.func, self.name, dtype, - scalarsig, - obj.create_sig()), obj) - frame = sig.create_frame(obj) - if self.identity is None: - value = sig.eval(frame, obj).convert_to(dtype) - frame.next(shapelen) - else: - value = self.identity.convert_to(dtype) - return self.reduce_loop(shapelen, sig, frame, value, obj, dtype) + return self.do_axis_reduce(obj, dtype, dim, keepdims) + arr = ReduceArray(self.func, self.name, self.identity, obj, dtype) + return loop.compute(arr) def do_axis_reduce(self, obj, dtype, dim, keepdims): from pypy.module.micronumpy.interp_numarray import AxisReduce,\ W_NDimArray - if keepdims: shape = obj.shape[:dim] + [1] + obj.shape[dim + 1:] else: shape = obj.shape[:dim] + obj.shape[dim + 1:] result = W_NDimArray(support.product(shape), shape, dtype) - rightsig = obj.create_sig() - # note - this is just a wrapper so signature can fetch - # both left and right, nothing more, especially - # this is not a true virtual array, because shapes - # don't quite match - arr = AxisReduce(self.func, self.name, obj.shape, dtype, + arr = AxisReduce(self.func, self.name, self.identity, obj.shape, dtype, result, obj, dim) - scalarsig = ScalarSignature(dtype) - sig = find_sig(AxisReduceSignature(self.func, self.name, dtype, - scalarsig, rightsig), arr) - assert isinstance(sig, AxisReduceSignature) - frame = sig.create_frame(arr) - shapelen = len(obj.shape) - if self.identity is not None: - identity = self.identity.convert_to(dtype) - else: - identity = None - self.reduce_axis_loop(frame, sig, shapelen, arr, identity) - return result - - def reduce_axis_loop(self, frame, sig, shapelen, arr, identity): - # note - we can be advanterous here, depending on the exact field - # layout. 
For now let's say we iterate the original way and - # simply follow the original iteration order - while not frame.done(): - axisreduce_driver.jit_merge_point(frame=frame, self=self, - sig=sig, - identity=identity, - shapelen=shapelen, arr=arr) - iterator = frame.get_final_iter() - v = sig.eval(frame, arr).convert_to(sig.calc_dtype) - if iterator.first_line: - if identity is not None: - value = self.func(sig.calc_dtype, identity, v) - else: - value = v - else: - cur = arr.left.getitem(iterator.offset) - value = self.func(sig.calc_dtype, cur, v) - arr.left.setitem(iterator.offset, value) - frame.next(shapelen) - - def reduce_loop(self, shapelen, sig, frame, value, obj, dtype): - while not frame.done(): - reduce_driver.jit_merge_point(sig=sig, - shapelen=shapelen, self=self, - value=value, obj=obj, frame=frame, - dtype=dtype) - assert isinstance(sig, ReduceSignature) - value = sig.binfunc(dtype, value, - sig.eval(frame, obj).convert_to(dtype)) - frame.next(shapelen) - return value - + loop.compute(arr) + return arr.left class W_Ufunc1(W_Ufunc): argcount = 1 @@ -312,7 +232,6 @@ w_lhs.value.convert_to(calc_dtype), w_rhs.value.convert_to(calc_dtype) )) - new_shape = shape_agreement(space, w_lhs.shape, w_rhs.shape) w_res = Call2(self.func, self.name, new_shape, calc_dtype, @@ -482,6 +401,13 @@ ("isnan", "isnan", 1, {"bool_result": True}), ("isinf", "isinf", 1, {"bool_result": True}), + ('logical_and', 'logical_and', 2, {'comparison_func': True, + 'identity': 1}), + ('logical_or', 'logical_or', 2, {'comparison_func': True, + 'identity': 0}), + ('logical_xor', 'logical_xor', 2, {'comparison_func': True}), + ('logical_not', 'logical_not', 1, {'bool_result': True}), + ("maximum", "max", 2), ("minimum", "min", 2), diff --git a/pypy/module/micronumpy/loop.py b/pypy/module/micronumpy/loop.py new file mode 100644 --- /dev/null +++ b/pypy/module/micronumpy/loop.py @@ -0,0 +1,83 @@ + +""" This file is the main run loop as well as evaluation loops for various +signatures +""" + +from pypy.rlib.jit import JitDriver, hint, unroll_safe, promote +from pypy.module.micronumpy.interp_iter import ConstantIterator + +class NumpyEvalFrame(object): + _virtualizable2_ = ['iterators[*]', 'final_iter', 'arraylist[*]', + 'value', 'identity', 'cur_value'] + + @unroll_safe + def __init__(self, iterators, arrays): + self = hint(self, access_directly=True, fresh_virtualizable=True) + self.iterators = iterators[:] + self.arrays = arrays[:] + for i in range(len(self.iterators)): + iter = self.iterators[i] + if not isinstance(iter, ConstantIterator): + self.final_iter = i + break + else: + self.final_iter = -1 + self.cur_value = None + self.identity = None + + def done(self): + final_iter = promote(self.final_iter) + if final_iter < 0: + assert False + return self.iterators[final_iter].done() + + @unroll_safe + def next(self, shapelen): + for i in range(len(self.iterators)): + self.iterators[i] = self.iterators[i].next(shapelen) + + @unroll_safe + def next_from_second(self, shapelen): + """ Don't increase the first iterator + """ + for i in range(1, len(self.iterators)): + self.iterators[i] = self.iterators[i].next(shapelen) + + def next_first(self, shapelen): + self.iterators[0] = self.iterators[0].next(shapelen) + + def get_final_iter(self): + final_iter = promote(self.final_iter) + if final_iter < 0: + assert False + return self.iterators[final_iter] + +def get_printable_location(shapelen, sig): + return 'numpy ' + sig.debug_repr() + ' [%d dims]' % (shapelen,) + +numpy_driver = JitDriver( + greens=['shapelen', 'sig'], + 
virtualizables=['frame'], + reds=['frame', 'arr'], + get_printable_location=get_printable_location, + name='numpy', +) + +class ComputationDone(Exception): + def __init__(self, value): + self.value = value + +def compute(arr): + sig = arr.find_sig() + shapelen = len(arr.shape) + frame = sig.create_frame(arr) + try: + while not frame.done(): + numpy_driver.jit_merge_point(sig=sig, + shapelen=shapelen, + frame=frame, arr=arr) + sig.eval(frame, arr) + frame.next(shapelen) + return frame.cur_value + except ComputationDone, e: + return e.value diff --git a/pypy/module/micronumpy/signature.py b/pypy/module/micronumpy/signature.py --- a/pypy/module/micronumpy/signature.py +++ b/pypy/module/micronumpy/signature.py @@ -1,9 +1,9 @@ from pypy.rlib.objectmodel import r_dict, compute_identity_hash, compute_hash from pypy.rlib.rarithmetic import intmask -from pypy.module.micronumpy.interp_iter import ViewIterator, ArrayIterator, \ - ConstantIterator, AxisIterator, ViewTransform,\ - BroadcastTransform -from pypy.rlib.jit import hint, unroll_safe, promote +from pypy.module.micronumpy.interp_iter import ConstantIterator, AxisIterator,\ + ViewTransform, BroadcastTransform +from pypy.tool.pairtype import extendabletype +from pypy.module.micronumpy.loop import ComputationDone """ Signature specifies both the numpy expression that has been constructed and the assembler to be compiled. This is a very important observation - @@ -54,50 +54,6 @@ known_sigs[sig] = sig return sig -class NumpyEvalFrame(object): - _virtualizable2_ = ['iterators[*]', 'final_iter', 'arraylist[*]', - 'value', 'identity'] - - @unroll_safe - def __init__(self, iterators, arrays): - self = hint(self, access_directly=True, fresh_virtualizable=True) - self.iterators = iterators[:] - self.arrays = arrays[:] - for i in range(len(self.iterators)): - iter = self.iterators[i] - if not isinstance(iter, ConstantIterator): - self.final_iter = i - break - else: - self.final_iter = -1 - - def done(self): - final_iter = promote(self.final_iter) - if final_iter < 0: - assert False - return self.iterators[final_iter].done() - - @unroll_safe - def next(self, shapelen): - for i in range(len(self.iterators)): - self.iterators[i] = self.iterators[i].next(shapelen) - - @unroll_safe - def next_from_second(self, shapelen): - """ Don't increase the first iterator - """ - for i in range(1, len(self.iterators)): - self.iterators[i] = self.iterators[i].next(shapelen) - - def next_first(self, shapelen): - self.iterators[0] = self.iterators[0].next(shapelen) - - def get_final_iter(self): - final_iter = promote(self.final_iter) - if final_iter < 0: - assert False - return self.iterators[final_iter] - def _add_ptr_to_cache(ptr, cache): i = 0 for p in cache: @@ -113,6 +69,8 @@ return r_dict(sigeq_no_numbering, sighash) class Signature(object): + __metaclass_ = extendabletype + _attrs_ = ['iter_no', 'array_no'] _immutable_fields_ = ['iter_no', 'array_no'] @@ -138,11 +96,15 @@ self.iter_no = no def create_frame(self, arr): + from pypy.module.micronumpy.loop import NumpyEvalFrame + iterlist = [] arraylist = [] self._create_iter(iterlist, arraylist, arr, []) - return NumpyEvalFrame(iterlist, arraylist) - + f = NumpyEvalFrame(iterlist, arraylist) + # hook for cur_value being used by reduce + arr.compute_first_step(self, f) + return f class ConcreteSignature(Signature): _immutable_fields_ = ['dtype'] @@ -182,13 +144,10 @@ assert isinstance(concr, ConcreteArray) storage = concr.storage if self.iter_no >= len(iterlist): - iterlist.append(self.allocate_iter(concr, transforms)) + 
iterlist.append(concr.create_iter(transforms)) if self.array_no >= len(arraylist): arraylist.append(storage) - def allocate_iter(self, arr, transforms): - return ArrayIterator(arr.size).apply_transformations(arr, transforms) - def eval(self, frame, arr): iter = frame.iterators[self.iter_no] return self.dtype.getitem(frame.arrays[self.array_no], iter.offset) @@ -220,22 +179,10 @@ allnumbers.append(no) self.iter_no = no - def allocate_iter(self, arr, transforms): - return ViewIterator(arr.start, arr.strides, arr.backstrides, - arr.shape).apply_transformations(arr, transforms) - class FlatSignature(ViewSignature): def debug_repr(self): return 'Flat' - def allocate_iter(self, arr, transforms): - from pypy.module.micronumpy.interp_numarray import W_FlatIterator - assert isinstance(arr, W_FlatIterator) - return ViewIterator(arr.base.start, arr.base.strides, - arr.base.backstrides, - arr.base.shape).apply_transformations(arr.base, - transforms) - class VirtualSliceSignature(Signature): def __init__(self, child): self.child = child @@ -269,12 +216,13 @@ return self.child.eval(frame, arr.child) class Call1(Signature): - _immutable_fields_ = ['unfunc', 'name', 'child'] + _immutable_fields_ = ['unfunc', 'name', 'child', 'dtype'] - def __init__(self, func, name, child): + def __init__(self, func, name, dtype, child): self.unfunc = func self.child = child self.name = name + self.dtype = dtype def hash(self): return compute_hash(self.name) ^ intmask(self.child.hash() << 1) @@ -359,6 +307,17 @@ return 'Call2(%s, %s, %s)' % (self.name, self.left.debug_repr(), self.right.debug_repr()) +class ResultSignature(Call2): + def __init__(self, dtype, left, right): + Call2.__init__(self, None, 'assign', dtype, left, right) + + def eval(self, frame, arr): + from pypy.module.micronumpy.interp_numarray import ResultArray + + assert isinstance(arr, ResultArray) + offset = frame.get_final_iter().offset + arr.left.setitem(offset, self.right.eval(frame, arr.right)) + class BroadcastLeft(Call2): def _invent_numbering(self, cache, allnumbers): self.left._invent_numbering(new_cache(), allnumbers) @@ -400,20 +359,24 @@ self.right._create_iter(iterlist, arraylist, arr.right, rtransforms) class ReduceSignature(Call2): - def _create_iter(self, iterlist, arraylist, arr, transforms): - self.right._create_iter(iterlist, arraylist, arr, transforms) - - def _invent_numbering(self, cache, allnumbers): - self.right._invent_numbering(cache, allnumbers) - - def _invent_array_numbering(self, arr, cache): - self.right._invent_array_numbering(arr, cache) - + _immutable_fields_ = ['binfunc', 'name', 'calc_dtype', + 'left', 'right', 'done_func'] + + def __init__(self, func, name, calc_dtype, left, right, + done_func): + Call2.__init__(self, func, name, calc_dtype, left, right) + self.done_func = done_func + def eval(self, frame, arr): - return self.right.eval(frame, arr) + from pypy.module.micronumpy.interp_numarray import ReduceArray + assert isinstance(arr, ReduceArray) + rval = self.right.eval(frame, arr.right).convert_to(self.calc_dtype) + if self.done_func is not None and self.done_func(self.calc_dtype, rval): + raise ComputationDone(rval) + frame.cur_value = self.binfunc(self.calc_dtype, frame.cur_value, rval) def debug_repr(self): - return 'ReduceSig(%s, %s)' % (self.name, self.right.debug_repr()) + return 'ReduceSig(%s)' % (self.name, self.right.debug_repr()) class SliceloopSignature(Call2): def eval(self, frame, arr): @@ -467,7 +430,17 @@ from pypy.module.micronumpy.interp_numarray import AxisReduce assert isinstance(arr, AxisReduce) 
- return self.right.eval(frame, arr.right).convert_to(self.calc_dtype) + iterator = frame.get_final_iter() + v = self.right.eval(frame, arr.right).convert_to(self.calc_dtype) + if iterator.first_line: + if frame.identity is not None: + value = self.binfunc(self.calc_dtype, frame.identity, v) + else: + value = v + else: + cur = arr.left.getitem(iterator.offset) + value = self.binfunc(self.calc_dtype, cur, v) + arr.left.setitem(iterator.offset, value) def debug_repr(self): return 'AxisReduceSig(%s, %s)' % (self.name, self.right.debug_repr()) diff --git a/pypy/module/micronumpy/test/test_dtypes.py b/pypy/module/micronumpy/test/test_dtypes.py --- a/pypy/module/micronumpy/test/test_dtypes.py +++ b/pypy/module/micronumpy/test/test_dtypes.py @@ -401,3 +401,9 @@ else: assert issubclass(int64, int) assert int_ is int64 + + def test_operators(self): + from operator import truediv + from _numpypy import float64, int_ + + assert truediv(int_(3), int_(2)) == float64(1.5) diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -1,11 +1,13 @@ import py + +from pypy.conftest import gettestobjspace, option +from pypy.interpreter.error import OperationError +from pypy.module.micronumpy import signature +from pypy.module.micronumpy.appbridge import get_appbridge_cache +from pypy.module.micronumpy.interp_iter import Chunk +from pypy.module.micronumpy.interp_numarray import W_NDimArray, shape_agreement from pypy.module.micronumpy.test.test_base import BaseNumpyAppTest -from pypy.module.micronumpy.interp_numarray import W_NDimArray, shape_agreement -from pypy.module.micronumpy.interp_iter import Chunk -from pypy.module.micronumpy import signature -from pypy.interpreter.error import OperationError -from pypy.conftest import gettestobjspace class MockDtype(object): @@ -173,7 +175,7 @@ def _to_coords(index, order): return to_coords(self.space, [2, 3, 4], 24, order, self.space.wrap(index))[0] - + assert _to_coords(0, 'C') == [0, 0, 0] assert _to_coords(1, 'C') == [0, 0, 1] assert _to_coords(-1, 'C') == [1, 2, 3] @@ -306,7 +308,7 @@ from _numpypy import arange a = arange(15).reshape(3, 5) assert a[1, 3] == 8 - assert a.T[1, 2] == 11 + assert a.T[1, 2] == 11 def test_setitem(self): from _numpypy import array @@ -936,10 +938,9 @@ [[86, 302, 518], [110, 390, 670], [134, 478, 822]]]).all() c = dot(a, b[:, 2]) assert (c == [[62, 214, 366], [518, 670, 822]]).all() - a = arange(3*4*5*6).reshape((3,4,5,6)) - b = arange(3*4*5*6)[::-1].reshape((5,4,6,3)) - assert dot(a, b)[2,3,2,1,2,2] == 499128 - assert sum(a[2,3,2,:] * b[1,2,:,2]) == 499128 + a = arange(3*2*6).reshape((3,2,6)) + b = arange(3*2*6)[::-1].reshape((2,6,3)) + assert dot(a, b)[2,0,1,2] == 1140 def test_dot_constant(self): from _numpypy import array, dot @@ -1121,14 +1122,14 @@ f1 = array([0,1]) f = concatenate((f1, [2], f1, [7])) assert (f == [0,1,2,0,1,7]).all() - + bad_axis = raises(ValueError, concatenate, (a1,a2), axis=1) assert str(bad_axis.value) == "bad axis argument" - + concat_zero = raises(ValueError, concatenate, ()) assert str(concat_zero.value) == \ "concatenation of zero-length sequences is impossible" - + dims_disagree = raises(ValueError, concatenate, (a1, b1), axis=0) assert str(dims_disagree.value) == \ "array dimensions must agree except for axis being concatenated" @@ -1163,6 +1164,25 @@ a = array([[1, 2], [3, 4]]) assert (a.T.flatten() == [1, 3, 2, 4]).all() + def test_itemsize(self): + from 
_numpypy import ones, dtype, array + + for obj in [float, bool, int]: + assert ones(1, dtype=obj).itemsize == dtype(obj).itemsize + assert (ones(1) + ones(1)).itemsize == 8 + assert array(1.0).itemsize == 8 + assert ones(1)[:].itemsize == 8 + + def test_nbytes(self): + from _numpypy import array, ones + + assert ones(1).nbytes == 8 + assert ones((2, 2)).nbytes == 32 + assert ones((2, 2))[1:,].nbytes == 16 + assert (ones(1) + ones(1)).nbytes == 8 + assert array(3.0).nbytes == 8 + + class AppTestMultiDim(BaseNumpyAppTest): def test_init(self): import _numpypy @@ -1458,35 +1478,37 @@ b = a.T.flat assert (b == [0, 4, 8, 1, 5, 9, 2, 6, 10, 3, 7, 11]).all() assert not (b != [0, 4, 8, 1, 5, 9, 2, 6, 10, 3, 7, 11]).any() - assert ((b >= range(12)) == [True, True, True,False, True, True, + assert ((b >= range(12)) == [True, True, True,False, True, True, False, False, True, False, False, True]).all() - assert ((b < range(12)) != [True, True, True,False, True, True, + assert ((b < range(12)) != [True, True, True,False, True, True, False, False, True, False, False, True]).all() - assert ((b <= range(12)) != [False, True, True,False, True, True, + assert ((b <= range(12)) != [False, True, True,False, True, True, False, False, True, False, False, False]).all() - assert ((b > range(12)) == [False, True, True,False, True, True, + assert ((b > range(12)) == [False, True, True,False, True, True, False, False, True, False, False, False]).all() def test_flatiter_view(self): from _numpypy import arange a = arange(10).reshape(5, 2) - #no == yet. - # a[::2].flat == [0, 1, 4, 5, 8, 9] - isequal = True - for y,z in zip(a[::2].flat, [0, 1, 4, 5, 8, 9]): - if y != z: - isequal = False - assert isequal == True + assert (a[::2].flat == [0, 1, 4, 5, 8, 9]).all() def test_flatiter_transpose(self): from _numpypy import arange - a = arange(10).reshape(2,5).T + a = arange(10).reshape(2, 5).T b = a.flat assert (b[:5] == [0, 5, 1, 6, 2]).all() b.next() b.next() b.next() assert b.index == 3 - assert b.coords == (1,1) + assert b.coords == (1, 1) + + def test_flatiter_len(self): + from _numpypy import arange + + assert len(arange(10).flat) == 10 + assert len(arange(10).reshape(2, 5).flat) == 10 + assert len(arange(10)[:2].flat) == 2 + assert len((arange(2) + arange(2)).flat) == 2 def test_slice_copy(self): from _numpypy import zeros @@ -1740,10 +1762,11 @@ assert len(a) == 8 assert arange(False, True, True).dtype is dtype(int) -from pypy.module.micronumpy.appbridge import get_appbridge_cache class AppTestRepr(BaseNumpyAppTest): def setup_class(cls): + if option.runappdirect: + py.test.skip("Can't be run directly.") BaseNumpyAppTest.setup_class.im_func(cls) cache = get_appbridge_cache(cls.space) cls.old_array_repr = cache.w_array_repr @@ -1757,6 +1780,8 @@ assert str(array([1, 2, 3])) == 'array([1, 2, 3])' def teardown_class(cls): + if option.runappdirect: + return cache = get_appbridge_cache(cls.space) cache.w_array_repr = cls.old_array_repr cache.w_array_str = cls.old_array_str diff --git a/pypy/module/micronumpy/test/test_ufuncs.py b/pypy/module/micronumpy/test/test_ufuncs.py --- a/pypy/module/micronumpy/test/test_ufuncs.py +++ b/pypy/module/micronumpy/test/test_ufuncs.py @@ -347,8 +347,9 @@ raises((ValueError, TypeError), add.reduce, 1) def test_reduce_1d(self): - from _numpypy import add, maximum + from _numpypy import add, maximum, less + assert less.reduce([5, 4, 3, 2, 1]) assert add.reduce([1, 2, 3]) == 6 assert maximum.reduce([1]) == 1 assert maximum.reduce([1, 2, 3]) == 3 @@ -433,3 +434,14 @@ assert (isnan(array([0.2, 
float('inf'), float('nan')])) == [False, False, True]).all() assert (isinf(array([0.2, float('inf'), float('nan')])) == [False, True, False]).all() assert isinf(array([0.2])).dtype.kind == 'b' + + def test_logical_ops(self): + from _numpypy import logical_and, logical_or, logical_xor, logical_not + + assert (logical_and([True, False , True, True], [1, 1, 3, 0]) + == [True, False, True, False]).all() + assert (logical_or([True, False, True, False], [1, 2, 0, 0]) + == [True, True, True, False]).all() + assert (logical_xor([True, False, True, False], [1, 2, 0, 0]) + == [False, True, True, False]).all() + assert (logical_not([True, False]) == [False, True]).all() diff --git a/pypy/module/micronumpy/test/test_zjit.py b/pypy/module/micronumpy/test/test_zjit.py --- a/pypy/module/micronumpy/test/test_zjit.py +++ b/pypy/module/micronumpy/test/test_zjit.py @@ -84,7 +84,7 @@ def test_add(self): result = self.run("add") self.check_simple_loop({'getinteriorfield_raw': 2, 'float_add': 1, - 'setinteriorfield_raw': 1, 'int_add': 2, + 'setinteriorfield_raw': 1, 'int_add': 1, 'int_ge': 1, 'guard_false': 1, 'jump': 1, 'arraylen_gc': 1}) assert result == 3 + 3 @@ -99,7 +99,7 @@ result = self.run("float_add") assert result == 3 + 3 self.check_simple_loop({"getinteriorfield_raw": 1, "float_add": 1, - "setinteriorfield_raw": 1, "int_add": 2, + "setinteriorfield_raw": 1, "int_add": 1, "int_ge": 1, "guard_false": 1, "jump": 1, 'arraylen_gc': 1}) @@ -198,7 +198,8 @@ result = self.run("any") assert result == 1 self.check_simple_loop({"getinteriorfield_raw": 2, "float_add": 1, - "float_ne": 1, "int_add": 1, + "int_and": 1, "int_add": 1, + 'cast_float_to_int': 1, "int_ge": 1, "jump": 1, "guard_false": 2, 'arraylen_gc': 1}) @@ -239,7 +240,7 @@ assert result == -6 self.check_simple_loop({"getinteriorfield_raw": 2, "float_add": 1, "float_neg": 1, - "setinteriorfield_raw": 1, "int_add": 2, + "setinteriorfield_raw": 1, "int_add": 1, "int_ge": 1, "guard_false": 1, "jump": 1, 'arraylen_gc': 1}) @@ -321,7 +322,7 @@ # int_add might be 1 here if we try slightly harder with # reusing indexes or some optimization self.check_simple_loop({'float_add': 1, 'getinteriorfield_raw': 2, - 'guard_false': 1, 'int_add': 2, 'int_ge': 1, + 'guard_false': 1, 'int_add': 1, 'int_ge': 1, 'jump': 1, 'setinteriorfield_raw': 1, 'arraylen_gc': 1}) @@ -387,7 +388,7 @@ assert result == 4 self.check_trace_count(1) self.check_simple_loop({'getinteriorfield_raw': 2, 'float_add': 1, - 'setinteriorfield_raw': 1, 'int_add': 2, + 'setinteriorfield_raw': 1, 'int_add': 1, 'int_ge': 1, 'guard_false': 1, 'jump': 1, 'arraylen_gc': 1}) def define_flat_iter(): @@ -403,7 +404,7 @@ assert result == 6 self.check_trace_count(1) self.check_simple_loop({'getinteriorfield_raw': 2, 'float_add': 1, - 'setinteriorfield_raw': 1, 'int_add': 3, + 'setinteriorfield_raw': 1, 'int_add': 2, 'int_ge': 1, 'guard_false': 1, 'arraylen_gc': 1, 'jump': 1}) diff --git a/pypy/module/micronumpy/types.py b/pypy/module/micronumpy/types.py --- a/pypy/module/micronumpy/types.py +++ b/pypy/module/micronumpy/types.py @@ -181,6 +181,22 @@ def ge(self, v1, v2): return v1 >= v2 + @raw_binary_op + def logical_and(self, v1, v2): + return bool(v1) and bool(v2) + + @raw_binary_op + def logical_or(self, v1, v2): + return bool(v1) or bool(v2) + + @raw_unary_op + def logical_not(self, v): + return not bool(v) + + @raw_binary_op + def logical_xor(self, v1, v2): + return bool(v1) ^ bool(v2) + def bool(self, v): return bool(self.for_computation(self.unbox(v))) diff --git a/pypy/module/posix/interp_posix.py 
b/pypy/module/posix/interp_posix.py --- a/pypy/module/posix/interp_posix.py +++ b/pypy/module/posix/interp_posix.py @@ -37,7 +37,7 @@ if space.isinstance_w(w_obj, space.w_unicode): w_obj = space.call_method(w_obj, 'encode', getfilesystemencoding(space)) - return space.str_w(w_obj) + return space.str0_w(w_obj) class FileEncoder(object): def __init__(self, space, w_obj): @@ -48,7 +48,7 @@ return fsencode_w(self.space, self.w_obj) def as_unicode(self): - return self.space.unicode_w(self.w_obj) + return self.space.unicode0_w(self.w_obj) class FileDecoder(object): def __init__(self, space, w_obj): @@ -56,13 +56,13 @@ self.w_obj = w_obj def as_bytes(self): - return self.space.str_w(self.w_obj) + return self.space.str0_w(self.w_obj) def as_unicode(self): space = self.space w_unicode = space.call_method(self.w_obj, 'decode', getfilesystemencoding(space)) - return space.unicode_w(w_unicode) + return space.unicode0_w(w_unicode) @specialize.memo() def dispatch_filename(func, tag=0): @@ -71,7 +71,7 @@ fname = FileEncoder(space, w_fname) return func(fname, *args) else: - fname = space.str_w(w_fname) + fname = space.str0_w(w_fname) return func(fname, *args) return dispatch @@ -369,7 +369,7 @@ space.wrap(times[3]), space.wrap(times[4])]) - at unwrap_spec(cmd=str) + at unwrap_spec(cmd='str0') def system(space, cmd): """Execute the command (a string) in a subshell.""" try: @@ -401,7 +401,7 @@ fullpath = rposix._getfullpathname(path) w_fullpath = space.wrap(fullpath) else: - path = space.str_w(w_path) + path = space.str0_w(w_path) fullpath = rposix._getfullpathname(path) w_fullpath = space.wrap(fullpath) except OSError, e: @@ -512,7 +512,7 @@ for key, value in os.environ.items(): space.setitem(w_env, space.wrap(key), space.wrap(value)) - at unwrap_spec(name=str, value=str) + at unwrap_spec(name='str0', value='str0') def putenv(space, name, value): """Change or add an environment variable.""" try: @@ -520,7 +520,7 @@ except OSError, e: raise wrap_oserror(space, e) - at unwrap_spec(name=str) + at unwrap_spec(name='str0') def unsetenv(space, name): """Delete an environment variable.""" try: @@ -548,7 +548,7 @@ for s in result ] else: - dirname = space.str_w(w_dirname) + dirname = space.str0_w(w_dirname) result = rposix.listdir(dirname) result_w = [space.wrap(s) for s in result] except OSError, e: @@ -635,7 +635,7 @@ import signal os.kill(os.getpid(), signal.SIGABRT) - at unwrap_spec(src=str, dst=str) + at unwrap_spec(src='str0', dst='str0') def link(space, src, dst): "Create a hard link to a file." try: @@ -650,7 +650,7 @@ except OSError, e: raise wrap_oserror(space, e) - at unwrap_spec(path=str) + at unwrap_spec(path='str0') def readlink(space, path): "Return a string representing the path to which the symbolic link points." 
try: @@ -765,7 +765,7 @@ w_keys = space.call_method(w_env, 'keys') for w_key in space.unpackiterable(w_keys): w_value = space.getitem(w_env, w_key) - env[space.str_w(w_key)] = space.str_w(w_value) + env[space.str0_w(w_key)] = space.str0_w(w_value) return env def execve(space, w_command, w_args, w_env): @@ -785,18 +785,18 @@ except OSError, e: raise wrap_oserror(space, e) - at unwrap_spec(mode=int, path=str) + at unwrap_spec(mode=int, path='str0') def spawnv(space, mode, path, w_args): - args = [space.str_w(w_arg) for w_arg in space.unpackiterable(w_args)] + args = [space.str0_w(w_arg) for w_arg in space.unpackiterable(w_args)] try: ret = os.spawnv(mode, path, args) except OSError, e: raise wrap_oserror(space, e) return space.wrap(ret) - at unwrap_spec(mode=int, path=str) + at unwrap_spec(mode=int, path='str0') def spawnve(space, mode, path, w_args, w_env): - args = [space.str_w(w_arg) for w_arg in space.unpackiterable(w_args)] + args = [space.str0_w(w_arg) for w_arg in space.unpackiterable(w_args)] env = _env2interp(space, w_env) try: ret = os.spawnve(mode, path, args, env) @@ -914,7 +914,7 @@ raise wrap_oserror(space, e) return space.w_None - at unwrap_spec(path=str) + at unwrap_spec(path='str0') def chroot(space, path): """ chroot(path) @@ -1103,7 +1103,7 @@ except OSError, e: raise wrap_oserror(space, e) - at unwrap_spec(path=str, uid=c_uid_t, gid=c_gid_t) + at unwrap_spec(path='str0', uid=c_uid_t, gid=c_gid_t) def chown(space, path, uid, gid): check_uid_range(space, uid) check_uid_range(space, gid) @@ -1113,7 +1113,7 @@ raise wrap_oserror(space, e, path) return space.w_None - at unwrap_spec(path=str, uid=c_uid_t, gid=c_gid_t) + at unwrap_spec(path='str0', uid=c_uid_t, gid=c_gid_t) def lchown(space, path, uid, gid): check_uid_range(space, uid) check_uid_range(space, gid) diff --git a/pypy/module/posix/test/test_ztranslation.py b/pypy/module/posix/test/test_ztranslation.py new file mode 100644 --- /dev/null +++ b/pypy/module/posix/test/test_ztranslation.py @@ -0,0 +1,4 @@ +from pypy.objspace.fake.checkmodule import checkmodule + +def test_posix_translates(): + checkmodule('posix') \ No newline at end of file diff --git a/pypy/module/pypyjit/interp_resop.py b/pypy/module/pypyjit/interp_resop.py --- a/pypy/module/pypyjit/interp_resop.py +++ b/pypy/module/pypyjit/interp_resop.py @@ -127,6 +127,7 @@ l_w.append(DebugMergePoint(space, jit_hooks._cast_to_gcref(op), logops.repr_of_resop(op), jd_sd.jitdriver.name, + op.getarg(1).getint(), w_greenkey)) else: l_w.append(WrappedOp(jit_hooks._cast_to_gcref(op), ofs, @@ -163,14 +164,14 @@ llres = res.llbox return WrappedOp(jit_hooks.resop_new(num, args, llres), offset, repr) - at unwrap_spec(repr=str, jd_name=str) -def descr_new_dmp(space, w_tp, w_args, repr, jd_name, w_greenkey): + at unwrap_spec(repr=str, jd_name=str, call_depth=int) +def descr_new_dmp(space, w_tp, w_args, repr, jd_name, call_depth, w_greenkey): args = [space.interp_w(WrappedBox, w_arg).llbox for w_arg in space.listview(w_args)] num = rop.DEBUG_MERGE_POINT return DebugMergePoint(space, jit_hooks.resop_new(num, args, jit_hooks.emptyval()), - repr, jd_name, w_greenkey) + repr, jd_name, call_depth, w_greenkey) class WrappedOp(Wrappable): """ A class representing a single ResOperation, wrapped nicely @@ -205,10 +206,11 @@ jit_hooks.resop_setresult(self.op, box.llbox) class DebugMergePoint(WrappedOp): - def __init__(self, space, op, repr_of_resop, jd_name, w_greenkey): + def __init__(self, space, op, repr_of_resop, jd_name, call_depth, w_greenkey): WrappedOp.__init__(self, op, -1, 
repr_of_resop) + self.jd_name = jd_name + self.call_depth = call_depth self.w_greenkey = w_greenkey - self.jd_name = jd_name def get_pycode(self, space): if self.jd_name == pypyjitdriver.name: @@ -243,6 +245,7 @@ greenkey = interp_attrproperty_w("w_greenkey", cls=DebugMergePoint), pycode = GetSetProperty(DebugMergePoint.get_pycode), bytecode_no = GetSetProperty(DebugMergePoint.get_bytecode_no), + call_depth = interp_attrproperty("call_depth", cls=DebugMergePoint), jitdriver_name = GetSetProperty(DebugMergePoint.get_jitdriver_name), ) DebugMergePoint.acceptable_as_base_class = False diff --git a/pypy/module/pypyjit/test/test_jit_hook.py b/pypy/module/pypyjit/test/test_jit_hook.py --- a/pypy/module/pypyjit/test/test_jit_hook.py +++ b/pypy/module/pypyjit/test/test_jit_hook.py @@ -122,7 +122,8 @@ assert isinstance(dmp, pypyjit.DebugMergePoint) assert dmp.pycode is self.f.func_code assert dmp.greenkey == (self.f.func_code, 0, False) - #assert int_add.name == 'int_add' + assert dmp.call_depth == 0 + assert int_add.name == 'int_add' assert int_add.num == self.int_add_num self.on_compile_bridge() assert len(all) == 2 @@ -223,11 +224,13 @@ def f(): pass - op = DebugMergePoint([Box(0)], 'repr', 'pypyjit', (f.func_code, 0, 0)) + op = DebugMergePoint([Box(0)], 'repr', 'pypyjit', 2, (f.func_code, 0, 0)) assert op.bytecode_no == 0 assert op.pycode is f.func_code assert repr(op) == 'repr' assert op.jitdriver_name == 'pypyjit' assert op.num == self.dmp_num - op = DebugMergePoint([Box(0)], 'repr', 'notmain', ('str',)) + assert op.call_depth == 2 + op = DebugMergePoint([Box(0)], 'repr', 'notmain', 5, ('str',)) raises(AttributeError, 'op.pycode') + assert op.call_depth == 5 diff --git a/pypy/module/pypyjit/test_pypy_c/test_call.py b/pypy/module/pypyjit/test_pypy_c/test_call.py --- a/pypy/module/pypyjit/test_pypy_c/test_call.py +++ b/pypy/module/pypyjit/test_pypy_c/test_call.py @@ -27,6 +27,7 @@ ... p53 = call_assembler(..., descr=...) guard_not_forced(descr=...) + keepalive(...) guard_no_exception(descr=...) ... """) diff --git a/pypy/module/select/__init__.py b/pypy/module/select/__init__.py --- a/pypy/module/select/__init__.py +++ b/pypy/module/select/__init__.py @@ -1,7 +1,6 @@ # Package initialisation from pypy.interpreter.mixedmodule import MixedModule -import select import sys @@ -15,18 +14,13 @@ 'error' : 'space.fromcache(interp_select.Cache).w_error' } - # TODO: this doesn't feel right... 
- if hasattr(select, "epoll"): + if sys.platform.startswith('linux'): interpleveldefs['epoll'] = 'interp_epoll.W_Epoll' - symbols = [ - "EPOLLIN", "EPOLLOUT", "EPOLLPRI", "EPOLLERR", "EPOLLHUP", - "EPOLLET", "EPOLLONESHOT", "EPOLLRDNORM", "EPOLLRDBAND", - "EPOLLWRNORM", "EPOLLWRBAND", "EPOLLMSG" - ] - for symbol in symbols: - if hasattr(select, symbol): - interpleveldefs[symbol] = "space.wrap(%s)" % getattr(select, symbol) - + from pypy.module.select.interp_epoll import cconfig, public_symbols + for symbol in public_symbols: + value = cconfig[symbol] + if value is not None: + interpleveldefs[symbol] = "space.wrap(%r)" % value def buildloaders(cls): from pypy.rlib import rpoll diff --git a/pypy/module/select/interp_epoll.py b/pypy/module/select/interp_epoll.py --- a/pypy/module/select/interp_epoll.py +++ b/pypy/module/select/interp_epoll.py @@ -29,8 +29,16 @@ ("data", CConfig.epoll_data) ]) +public_symbols = [ + "EPOLLIN", "EPOLLOUT", "EPOLLPRI", "EPOLLERR", "EPOLLHUP", + "EPOLLET", "EPOLLONESHOT", "EPOLLRDNORM", "EPOLLRDBAND", + "EPOLLWRNORM", "EPOLLWRBAND", "EPOLLMSG" + ] +for symbol in public_symbols: + setattr(CConfig, symbol, rffi_platform.DefinedConstantInteger(symbol)) + for symbol in ["EPOLL_CTL_ADD", "EPOLL_CTL_MOD", "EPOLL_CTL_DEL"]: - setattr(CConfig, symbol, rffi_platform.DefinedConstantInteger(symbol)) + setattr(CConfig, symbol, rffi_platform.ConstantInteger(symbol)) cconfig = rffi_platform.configure(CConfig) diff --git a/pypy/module/select/test/test_epoll.py b/pypy/module/select/test/test_epoll.py --- a/pypy/module/select/test/test_epoll.py +++ b/pypy/module/select/test/test_epoll.py @@ -1,23 +1,17 @@ import py +import sys from pypy.conftest import gettestobjspace class AppTestEpoll(object): def setup_class(cls): + # NB. we should ideally py.test.skip() if running on an old linux + # where the kernel doesn't support epoll() + if not sys.platform.startswith('linux'): + py.test.skip("test requires linux (assumed >= 2.6)") cls.space = gettestobjspace(usemodules=["select", "_socket", "posix"]) - import errno - import select - - if not hasattr(select, "epoll"): - py.test.skip("test requires linux 2.6") - try: - select.epoll() - except IOError, e: - if e.errno == errno.ENOSYS: - py.test.skip("kernel doesn't support epoll()") - def setup_method(self, meth): self.w_sockets = self.space.wrap([]) diff --git a/pypy/module/sys/state.py b/pypy/module/sys/state.py --- a/pypy/module/sys/state.py +++ b/pypy/module/sys/state.py @@ -74,7 +74,7 @@ # return importlist - at unwrap_spec(srcdir=str) + at unwrap_spec(srcdir='str0') def pypy_initial_path(space, srcdir): try: path = getinitialpath(get(space), srcdir) diff --git a/pypy/module/zipimport/interp_zipimport.py b/pypy/module/zipimport/interp_zipimport.py --- a/pypy/module/zipimport/interp_zipimport.py +++ b/pypy/module/zipimport/interp_zipimport.py @@ -123,7 +123,9 @@ self.prefix = prefix def getprefix(self, space): - return space.wrap(self.prefix) + if ZIPSEP == os.path.sep: + return space.wrap(self.prefix) + return space.wrap(self.prefix.replace(ZIPSEP, os.path.sep)) def _find_relative_path(self, filename): if filename.startswith(self.filename): @@ -342,7 +344,7 @@ space = self.space return space.wrap(self.filename) - at unwrap_spec(name=str) + at unwrap_spec(name='str0') def descr_new_zipimporter(space, w_type, name): w = space.wrap ok = False @@ -381,7 +383,7 @@ prefix = name[len(filename):] if prefix.startswith(os.path.sep) or prefix.startswith(ZIPSEP): prefix = prefix[1:] - if prefix and not prefix.endswith(ZIPSEP): + if prefix and not 
prefix.endswith(ZIPSEP) and not prefix.endswith(os.path.sep): prefix += ZIPSEP w_result = space.wrap(W_ZipImporter(space, name, filename, zip_file, prefix)) zip_cache.set(filename, w_result) diff --git a/pypy/module/zipimport/test/test_undocumented.py b/pypy/module/zipimport/test/test_undocumented.py --- a/pypy/module/zipimport/test/test_undocumented.py +++ b/pypy/module/zipimport/test/test_undocumented.py @@ -119,7 +119,7 @@ zip_importer = zipimport.zipimporter(path) assert isinstance(zip_importer, zipimport.zipimporter) assert zip_importer.archive == zip_path - assert zip_importer.prefix == prefix + assert zip_importer.prefix == prefix.replace('/', os.path.sep) assert zip_path in zipimport._zip_directory_cache finally: self.cleanup_zipfile(self.created_paths) diff --git a/pypy/module/zipimport/test/test_zipimport.py b/pypy/module/zipimport/test/test_zipimport.py --- a/pypy/module/zipimport/test/test_zipimport.py +++ b/pypy/module/zipimport/test/test_zipimport.py @@ -15,7 +15,7 @@ cpy's regression tests """ compression = ZIP_STORED - pathsep = '/' + pathsep = os.path.sep def make_pyc(cls, space, co, mtime): data = marshal.dumps(co) @@ -129,7 +129,7 @@ self.writefile('sub/__init__.py', '') self.writefile('sub/yy.py', '') from zipimport import _zip_directory_cache, zipimporter - sub_importer = zipimporter(self.zipfile + '/sub') + sub_importer = zipimporter(self.zipfile + os.path.sep + 'sub') main_importer = zipimporter(self.zipfile) assert main_importer is not sub_importer diff --git a/pypy/rlib/ropenssl.py b/pypy/rlib/ropenssl.py --- a/pypy/rlib/ropenssl.py +++ b/pypy/rlib/ropenssl.py @@ -54,6 +54,7 @@ ASN1_STRING = lltype.Ptr(lltype.ForwardReference()) ASN1_ITEM = rffi.COpaquePtr('ASN1_ITEM') +ASN1_ITEM_EXP = lltype.Ptr(lltype.FuncType([], ASN1_ITEM)) X509_NAME = rffi.COpaquePtr('X509_NAME') class CConfig: @@ -101,12 +102,11 @@ X509_extension_st = rffi_platform.Struct( 'struct X509_extension_st', [('value', ASN1_STRING)]) - ASN1_ITEM_EXP = lltype.FuncType([], ASN1_ITEM) X509V3_EXT_D2I = lltype.FuncType([rffi.VOIDP, rffi.CCHARPP, rffi.LONG], rffi.VOIDP) v3_ext_method = rffi_platform.Struct( 'struct v3_ext_method', - [('it', lltype.Ptr(ASN1_ITEM_EXP)), + [('it', ASN1_ITEM_EXP), ('d2i', lltype.Ptr(X509V3_EXT_D2I))]) GENERAL_NAME_st = rffi_platform.Struct( 'struct GENERAL_NAME_st', @@ -118,6 +118,8 @@ ('block_size', rffi.INT)]) EVP_MD_SIZE = rffi_platform.SizeOf('EVP_MD') EVP_MD_CTX_SIZE = rffi_platform.SizeOf('EVP_MD_CTX') + OPENSSL_EXPORT_VAR_AS_FUNCTION = rffi_platform.Defined( + "OPENSSL_EXPORT_VAR_AS_FUNCTION") for k, v in rffi_platform.configure(CConfig).items(): @@ -224,7 +226,10 @@ ssl_external('i2a_ASN1_INTEGER', [BIO, ASN1_INTEGER], rffi.INT) ssl_external('ASN1_item_d2i', [rffi.VOIDP, rffi.CCHARPP, rffi.LONG, ASN1_ITEM], rffi.VOIDP) -ssl_external('ASN1_ITEM_ptr', [rffi.VOIDP], ASN1_ITEM, macro=True) +if OPENSSL_EXPORT_VAR_AS_FUNCTION: + ssl_external('ASN1_ITEM_ptr', [ASN1_ITEM_EXP], ASN1_ITEM, macro=True) +else: + ssl_external('ASN1_ITEM_ptr', [rffi.VOIDP], ASN1_ITEM, macro=True) ssl_external('sk_GENERAL_NAME_num', [GENERAL_NAMES], rffi.INT, macro=True) diff --git a/pypy/rlib/rstring.py b/pypy/rlib/rstring.py --- a/pypy/rlib/rstring.py +++ b/pypy/rlib/rstring.py @@ -205,3 +205,45 @@ assert p.const is None return SomeUnicodeBuilder(can_be_None=True) +#___________________________________________________________________ +# Support functions for SomeString.no_nul + +def assert_str0(fname): + assert '\x00' not in fname, "NUL byte in string" + return fname + +class 
Entry(ExtRegistryEntry): + _about_ = assert_str0 + + def compute_result_annotation(self, s_obj): + if s_None.contains(s_obj): + return s_obj + assert isinstance(s_obj, (SomeString, SomeUnicodeString)) + if s_obj.no_nul: + return s_obj + new_s_obj = SomeObject.__new__(s_obj.__class__) + new_s_obj.__dict__ = s_obj.__dict__.copy() + new_s_obj.no_nul = True + return new_s_obj + + def specialize_call(self, hop): + hop.exception_cannot_occur() + return hop.inputarg(hop.args_r[0], arg=0) + +def check_str0(fname): + """A 'probe' to trigger a failure at translation time, if the + string was not proved to not contain NUL characters.""" + assert '\x00' not in fname, "NUL byte in string" + +class Entry(ExtRegistryEntry): + _about_ = check_str0 + + def compute_result_annotation(self, s_obj): + if not isinstance(s_obj, (SomeString, SomeUnicodeString)): + return s_obj + if not s_obj.no_nul: + raise ValueError("Value is not no_nul") + + def specialize_call(self, hop): + pass + diff --git a/pypy/rlib/test/test_rmarshal.py b/pypy/rlib/test/test_rmarshal.py --- a/pypy/rlib/test/test_rmarshal.py +++ b/pypy/rlib/test/test_rmarshal.py @@ -169,7 +169,7 @@ assert st2.st_mode == st.st_mode assert st2[9] == st[9] return buf - fn = compile(f, [str]) + fn = compile(f, [annmodel.s_Str0]) res = fn('.') st = os.stat('.') sttuple = marshal.loads(res) diff --git a/pypy/rpython/extfunc.py b/pypy/rpython/extfunc.py --- a/pypy/rpython/extfunc.py +++ b/pypy/rpython/extfunc.py @@ -2,7 +2,7 @@ from pypy.rpython.extregistry import ExtRegistryEntry from pypy.rpython.lltypesystem.lltype import typeOf from pypy.objspace.flow.model import Constant -from pypy.annotation.model import unionof +from pypy.annotation import model as annmodel from pypy.annotation.signature import annotation import py, sys @@ -138,7 +138,6 @@ # we defer a bit annotation here def compute_result_annotation(self): - from pypy.annotation import model as annmodel return annmodel.SomeGenericCallable([annotation(i, self.bookkeeper) for i in self.instance.args], annotation(self.instance.result, self.bookkeeper)) @@ -152,8 +151,9 @@ signature_args = [annotation(arg, None) for arg in args] assert len(args_s) == len(signature_args),\ "Argument number mismatch" + for i, expected in enumerate(signature_args): - arg = unionof(args_s[i], expected) + arg = annmodel.unionof(args_s[i], expected) if not expected.contains(arg): name = getattr(self, 'name', None) if not name: diff --git a/pypy/rpython/extfuncregistry.py b/pypy/rpython/extfuncregistry.py --- a/pypy/rpython/extfuncregistry.py +++ b/pypy/rpython/extfuncregistry.py @@ -85,7 +85,8 @@ # llinterpreter path_functions = [ - ('join', [str, str], str), + ('join', [ll_os.str0, ll_os.str0], ll_os.str0), + ('dirname', [ll_os.str0], ll_os.str0), ] for name, args, res in path_functions: diff --git a/pypy/rpython/lltypesystem/ll2ctypes.py b/pypy/rpython/lltypesystem/ll2ctypes.py --- a/pypy/rpython/lltypesystem/ll2ctypes.py +++ b/pypy/rpython/lltypesystem/ll2ctypes.py @@ -1036,13 +1036,8 @@ libraries = eci.testonly_libraries + eci.libraries + eci.frameworks FUNCTYPE = lltype.typeOf(funcptr).TO - if not libraries: - cfunc = get_on_lib(standard_c_lib, funcname) - # XXX magic: on Windows try to load the function from 'kernel32' too - if cfunc is None and hasattr(ctypes, 'windll'): - cfunc = get_on_lib(ctypes.windll.kernel32, funcname) - else: - cfunc = None + cfunc = None + if libraries: not_found = [] for libname in libraries: libpath = None @@ -1075,6 +1070,12 @@ not_found.append(libname) if cfunc is None: + cfunc = 
get_on_lib(standard_c_lib, funcname) + # XXX magic: on Windows try to load the function from 'kernel32' too + if cfunc is None and hasattr(ctypes, 'windll'): + cfunc = get_on_lib(ctypes.windll.kernel32, funcname) + + if cfunc is None: # function name not found in any of the libraries if not libraries: place = 'the standard C library (missing libraries=...?)' diff --git a/pypy/rpython/lltypesystem/rffi.py b/pypy/rpython/lltypesystem/rffi.py --- a/pypy/rpython/lltypesystem/rffi.py +++ b/pypy/rpython/lltypesystem/rffi.py @@ -15,7 +15,7 @@ from pypy.translator.tool.cbuild import ExternalCompilationInfo from pypy.rpython.annlowlevel import llhelper from pypy.rlib.objectmodel import we_are_translated -from pypy.rlib.rstring import StringBuilder, UnicodeBuilder +from pypy.rlib.rstring import StringBuilder, UnicodeBuilder, assert_str0 from pypy.rlib import jit from pypy.rpython.lltypesystem import llmemory import os, sys @@ -698,7 +698,7 @@ while cp[i] != lastchar: b.append(cp[i]) i += 1 - return b.build() + return assert_str0(b.build()) # str -> char* # Can't inline this because of the raw address manipulation. @@ -804,7 +804,7 @@ while i < maxlen and cp[i] != lastchar: b.append(cp[i]) i += 1 - return b.build() + return assert_str0(b.build()) # char* and size -> str (which can contain null bytes) def charpsize2str(cp, size): @@ -842,6 +842,7 @@ array[i] = str2charp(l[i]) array[len(l)] = lltype.nullptr(CCHARP.TO) return array +liststr2charpp._annenforceargs_ = [[annmodel.s_Str0]] # List of strings def free_charpp(ref): """ frees list of char**, NULL terminated diff --git a/pypy/rpython/module/ll_os.py b/pypy/rpython/module/ll_os.py --- a/pypy/rpython/module/ll_os.py +++ b/pypy/rpython/module/ll_os.py @@ -31,6 +31,10 @@ from pypy.rlib import rgc from pypy.rlib.objectmodel import specialize +str0 = SomeString(no_nul=True) +unicode0 = SomeUnicodeString(no_nul=True) + + def monkeypatch_rposix(posixfunc, unicodefunc, signature): func_name = posixfunc.__name__ @@ -39,12 +43,15 @@ arglist = ['arg%d' % (i,) for i in range(len(signature))] transformed_arglist = arglist[:] for i, arg in enumerate(signature): - if arg is unicode: + if arg in (unicode, unicode0): transformed_arglist[i] = transformed_arglist[i] + '.as_unicode()' args = ', '.join(arglist) transformed_args = ', '.join(transformed_arglist) - main_arg = 'arg%d' % (signature.index(unicode),) + try: + main_arg = 'arg%d' % (signature.index(unicode0),) + except ValueError: + main_arg = 'arg%d' % (signature.index(unicode),) source = py.code.Source(""" def %(func_name)s(%(args)s): @@ -60,7 +67,7 @@ exec source.compile() in miniglobals new_func = miniglobals[func_name] specialized_args = [i for i in range(len(signature)) - if signature[i] in (unicode, None)] + if signature[i] in (unicode, unicode0, None)] new_func = specialize.argtype(*specialized_args)(new_func) # Monkeypatch the function in pypy.rlib.rposix @@ -68,6 +75,7 @@ class StringTraits: str = str + str0 = str0 CHAR = rffi.CHAR CCHARP = rffi.CCHARP charp2str = staticmethod(rffi.charp2str) @@ -85,6 +93,7 @@ class UnicodeTraits: str = unicode + str0 = unicode0 CHAR = rffi.WCHAR_T CCHARP = rffi.CWCHARP charp2str = staticmethod(rffi.wcharp2unicode) @@ -301,7 +310,7 @@ rffi.free_charpp(l_args) raise OSError(rposix.get_errno(), "execv failed") - return extdef([str, [str]], s_ImpossibleValue, llimpl=execv_llimpl, + return extdef([str0, [str0]], s_ImpossibleValue, llimpl=execv_llimpl, export_name="ll_os.ll_os_execv") @@ -319,7 +328,8 @@ # appropriate envstrs = [] for item in env.iteritems(): - 
envstrs.append("%s=%s" % item) + envstr = "%s=%s" % item + envstrs.append(envstr) l_args = rffi.liststr2charpp(args) l_env = rffi.liststr2charpp(envstrs) @@ -332,7 +342,7 @@ raise OSError(rposix.get_errno(), "execve failed") return extdef( - [str, [str], {str: str}], + [str0, [str0], {str0: str0}], s_ImpossibleValue, llimpl=execve_llimpl, export_name="ll_os.ll_os_execve") @@ -353,7 +363,7 @@ raise OSError(rposix.get_errno(), "os_spawnv failed") return rffi.cast(lltype.Signed, childpid) - return extdef([int, str, [str]], int, llimpl=spawnv_llimpl, + return extdef([int, str0, [str0]], int, llimpl=spawnv_llimpl, export_name="ll_os.ll_os_spawnv") @registering_if(os, 'spawnve') @@ -378,7 +388,7 @@ raise OSError(rposix.get_errno(), "os_spawnve failed") return rffi.cast(lltype.Signed, childpid) - return extdef([int, str, [str], {str: str}], int, + return extdef([int, str0, [str0], {str0: str0}], int, llimpl=spawnve_llimpl, export_name="ll_os.ll_os_spawnve") @@ -517,7 +527,7 @@ else: raise Exception("os.utime() arg 2 must be None or a tuple of " "2 floats, got %s" % (s_times,)) - os_utime_normalize_args._default_signature_ = [traits.str, None] + os_utime_normalize_args._default_signature_ = [traits.str0, None] return extdef(os_utime_normalize_args, s_None, "ll_os.ll_os_utime", @@ -612,7 +622,7 @@ if result == -1: raise OSError(rposix.get_errno(), "os_chroot failed") - return extdef([str], None, export_name="ll_os.ll_os_chroot", + return extdef([str0], None, export_name="ll_os.ll_os_chroot", llimpl=chroot_llimpl) @registering_if(os, 'uname') @@ -816,7 +826,7 @@ def os_open_oofakeimpl(path, flags, mode): return os.open(OOSupport.from_rstr(path), flags, mode) - return extdef([traits.str, int, int], int, traits.ll_os_name('open'), + return extdef([traits.str0, int, int], int, traits.ll_os_name('open'), llimpl=os_open_llimpl, oofakeimpl=os_open_oofakeimpl) @registering_if(os, 'getloadavg') @@ -1050,7 +1060,7 @@ def os_access_oofakeimpl(path, mode): return os.access(OOSupport.from_rstr(path), mode) - return extdef([traits.str, int], s_Bool, llimpl=access_llimpl, + return extdef([traits.str0, int], s_Bool, llimpl=access_llimpl, export_name=traits.ll_os_name("access"), oofakeimpl=os_access_oofakeimpl) @@ -1062,8 +1072,8 @@ from pypy.rpython.module.ll_win32file import make_getfullpathname_impl getfullpathname_llimpl = make_getfullpathname_impl(traits) - return extdef([traits.str], # a single argument which is a str - traits.str, # returns a string + return extdef([traits.str0], # a single argument which is a str + traits.str0, # returns a string traits.ll_os_name('_getfullpathname'), llimpl=getfullpathname_llimpl) @@ -1174,8 +1184,8 @@ raise OSError(error, "os_readdir failed") return result - return extdef([traits.str], # a single argument which is a str - [traits.str], # returns a list of strings + return extdef([traits.str0], # a single argument which is a str + [traits.str0], # returns a list of strings traits.ll_os_name('listdir'), llimpl=os_listdir_llimpl) @@ -1241,7 +1251,7 @@ if res == -1: raise OSError(rposix.get_errno(), "os_chown failed") - return extdef([str, int, int], None, "ll_os.ll_os_chown", + return extdef([str0, int, int], None, "ll_os.ll_os_chown", llimpl=os_chown_llimpl) @registering_if(os, 'lchown') @@ -1254,7 +1264,7 @@ if res == -1: raise OSError(rposix.get_errno(), "os_lchown failed") - return extdef([str, int, int], None, "ll_os.ll_os_lchown", + return extdef([str0, int, int], None, "ll_os.ll_os_lchown", llimpl=os_lchown_llimpl) @registering_if(os, 'readlink') @@ -1283,12 +1293,11 
@@ lltype.free(buf, flavor='raw') bufsize *= 4 # convert the result to a string - l = [buf[i] for i in range(res)] - result = ''.join(l) + result = rffi.charp2strn(buf, res) lltype.free(buf, flavor='raw') return result - return extdef([str], str, + return extdef([str0], str0, "ll_os.ll_os_readlink", llimpl=os_readlink_llimpl) @@ -1361,7 +1370,7 @@ res = os_system(command) return rffi.cast(lltype.Signed, res) - return extdef([str], int, llimpl=system_llimpl, + return extdef([str0], int, llimpl=system_llimpl, export_name="ll_os.ll_os_system") @registering_str_unicode(os.unlink) @@ -1383,7 +1392,7 @@ if not win32traits.DeleteFile(path): raise rwin32.lastWindowsError() - return extdef([traits.str], s_None, llimpl=unlink_llimpl, + return extdef([traits.str0], s_None, llimpl=unlink_llimpl, export_name=traits.ll_os_name('unlink')) @registering_str_unicode(os.chdir) @@ -1401,7 +1410,7 @@ from pypy.rpython.module.ll_win32file import make_chdir_impl os_chdir_llimpl = make_chdir_impl(traits) - return extdef([traits.str], s_None, llimpl=os_chdir_llimpl, + return extdef([traits.str0], s_None, llimpl=os_chdir_llimpl, export_name=traits.ll_os_name('chdir')) @registering_str_unicode(os.mkdir) @@ -1424,7 +1433,7 @@ if res < 0: raise OSError(rposix.get_errno(), "os_mkdir failed") - return extdef([traits.str, int], s_None, llimpl=os_mkdir_llimpl, + return extdef([traits.str0, int], s_None, llimpl=os_mkdir_llimpl, export_name=traits.ll_os_name('mkdir')) @registering_str_unicode(os.rmdir) @@ -1437,7 +1446,7 @@ if res < 0: raise OSError(rposix.get_errno(), "os_rmdir failed") - return extdef([traits.str], s_None, llimpl=rmdir_llimpl, + return extdef([traits.str0], s_None, llimpl=rmdir_llimpl, export_name=traits.ll_os_name('rmdir')) @registering_str_unicode(os.chmod) @@ -1454,7 +1463,7 @@ from pypy.rpython.module.ll_win32file import make_chmod_impl chmod_llimpl = make_chmod_impl(traits) - return extdef([traits.str, int], s_None, llimpl=chmod_llimpl, + return extdef([traits.str0, int], s_None, llimpl=chmod_llimpl, export_name=traits.ll_os_name('chmod')) @registering_str_unicode(os.rename) @@ -1476,7 +1485,7 @@ if not win32traits.MoveFile(oldpath, newpath): raise rwin32.lastWindowsError() - return extdef([traits.str, traits.str], s_None, llimpl=rename_llimpl, + return extdef([traits.str0, traits.str0], s_None, llimpl=rename_llimpl, export_name=traits.ll_os_name('rename')) @registering_str_unicode(getattr(os, 'mkfifo', None)) @@ -1489,7 +1498,7 @@ if res < 0: raise OSError(rposix.get_errno(), "os_mkfifo failed") - return extdef([traits.str, int], s_None, llimpl=mkfifo_llimpl, + return extdef([traits.str0, int], s_None, llimpl=mkfifo_llimpl, export_name=traits.ll_os_name('mkfifo')) @registering_str_unicode(getattr(os, 'mknod', None)) @@ -1503,7 +1512,7 @@ if res < 0: raise OSError(rposix.get_errno(), "os_mknod failed") - return extdef([traits.str, int, int], s_None, llimpl=mknod_llimpl, + return extdef([traits.str0, int, int], s_None, llimpl=mknod_llimpl, export_name=traits.ll_os_name('mknod')) @registering(os.umask) @@ -1555,7 +1564,7 @@ if res < 0: raise OSError(rposix.get_errno(), "os_link failed") - return extdef([str, str], s_None, llimpl=link_llimpl, + return extdef([str0, str0], s_None, llimpl=link_llimpl, export_name="ll_os.ll_os_link") @registering_if(os, 'symlink') @@ -1568,7 +1577,7 @@ if res < 0: raise OSError(rposix.get_errno(), "os_symlink failed") - return extdef([str, str], s_None, llimpl=symlink_llimpl, + return extdef([str0, str0], s_None, llimpl=symlink_llimpl, export_name="ll_os.ll_os_symlink") 
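The str -> str0 changes running through ll_os.py all enforce one property: a path
or command string handed to a C-level call must be free of embedded NUL bytes,
which is what assert_str0 and check_str0 from the rstring.py hunk above assert.
The real enforcement is static, in the annotator; the following is only a rough
plain-Python analogue of the runtime side of the check, with invented names:

    import os

    def _assert_no_nul(s):
        # Runtime counterpart of the no_nul annotation: the C level treats the
        # first NUL byte as the end of the string, so letting one through would
        # silently truncate the path.
        if '\x00' in s:
            raise ValueError("NUL byte in string: %r" % (s,))
        return s

    def safe_chdir(path):
        os.chdir(_assert_no_nul(path))

    safe_chdir('.')                       # fine
    try:
        safe_chdir('/tmp\x00/etc')        # rejected before reaching the C call
    except ValueError as e:
        print(e)
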
@registering_if(os, 'fork') diff --git a/pypy/rpython/module/ll_os_environ.py b/pypy/rpython/module/ll_os_environ.py --- a/pypy/rpython/module/ll_os_environ.py +++ b/pypy/rpython/module/ll_os_environ.py @@ -3,8 +3,11 @@ from pypy.rpython.controllerentry import Controller from pypy.rpython.extfunc import register_external from pypy.rpython.lltypesystem import rffi, lltype +from pypy.rpython.module import ll_os from pypy.rlib import rposix +str0 = ll_os.str0 + # ____________________________________________________________ # # Annotation support to control access to 'os.environ' in the RPython program @@ -64,7 +67,7 @@ rffi.free_charp(l_name) return result -register_external(r_getenv, [str], annmodel.SomeString(can_be_None=True), +register_external(r_getenv, [str0], annmodel.SomeString(can_be_None=True), export_name='ll_os.ll_os_getenv', llimpl=getenv_llimpl) @@ -93,7 +96,7 @@ if l_oldstring: rffi.free_charp(l_oldstring) -register_external(r_putenv, [str, str], annmodel.s_None, +register_external(r_putenv, [str0, str0], annmodel.s_None, export_name='ll_os.ll_os_putenv', llimpl=putenv_llimpl) @@ -128,7 +131,7 @@ del envkeepalive.byname[name] rffi.free_charp(l_oldstring) - register_external(r_unsetenv, [str], annmodel.s_None, + register_external(r_unsetenv, [str0], annmodel.s_None, export_name='ll_os.ll_os_unsetenv', llimpl=unsetenv_llimpl) @@ -172,7 +175,7 @@ i += 1 return result -register_external(r_envkeys, [], [str], # returns a list of strings +register_external(r_envkeys, [], [str0], # returns a list of strings export_name='ll_os.ll_os_envkeys', llimpl=envkeys_llimpl) @@ -193,6 +196,6 @@ i += 1 return result -register_external(r_envitems, [], [(str, str)], +register_external(r_envitems, [], [(str0, str0)], export_name='ll_os.ll_os_envitems', llimpl=envitems_llimpl) diff --git a/pypy/rpython/module/ll_os_stat.py b/pypy/rpython/module/ll_os_stat.py --- a/pypy/rpython/module/ll_os_stat.py +++ b/pypy/rpython/module/ll_os_stat.py @@ -236,7 +236,7 @@ def register_stat_variant(name, traits): if name != 'fstat': arg_is_path = True - s_arg = traits.str + s_arg = traits.str0 ARG1 = traits.CCHARP else: arg_is_path = False @@ -251,8 +251,6 @@ [s_arg], s_StatResult, traits.ll_os_name(name), llimpl=posix_stat_llimpl) - assert traits.str is str - if sys.platform.startswith('linux'): # because we always use _FILE_OFFSET_BITS 64 - this helps things work that are not a c compiler _functions = {'stat': 'stat64', @@ -283,7 +281,7 @@ @func_renamer('os_%s_fake' % (name,)) def posix_fakeimpl(arg): - if s_arg == str: + if s_arg == traits.str0: arg = hlstr(arg) st = getattr(os, name)(arg) fields = [TYPE for fieldname, TYPE in STAT_FIELDS] diff --git a/pypy/rpython/ootypesystem/test/test_ooann.py b/pypy/rpython/ootypesystem/test/test_ooann.py --- a/pypy/rpython/ootypesystem/test/test_ooann.py +++ b/pypy/rpython/ootypesystem/test/test_ooann.py @@ -231,7 +231,7 @@ a = RPythonAnnotator() s = a.build_types(oof, [bool]) - assert s == annmodel.SomeString(can_be_None=True) + assert annmodel.SomeString(can_be_None=True).contains(s) def test_oostring(): def oof(): diff --git a/pypy/rpython/test/test_extfunc.py b/pypy/rpython/test/test_extfunc.py --- a/pypy/rpython/test/test_extfunc.py +++ b/pypy/rpython/test/test_extfunc.py @@ -167,3 +167,43 @@ a = RPythonAnnotator(policy=policy) s = a.build_types(f, []) assert isinstance(s, annmodel.SomeString) + + def test_str0(self): + str0 = annmodel.SomeString(no_nul=True) + def os_open(s): + pass + register_external(os_open, [str0], None) + def f(s): + return os_open(s) + policy = 
AnnotatorPolicy() + policy.allow_someobjects = False + a = RPythonAnnotator(policy=policy) + a.build_types(f, [str]) # Does not raise + assert a.translator.config.translation.check_str_without_nul == False + # Now enable the str0 check, and try again with a similar function + a.translator.config.translation.check_str_without_nul=True + def g(s): + return os_open(s) + raises(Exception, a.build_types, g, [str]) + a.build_types(g, [str0]) # Does not raise + + def test_list_of_str0(self): + str0 = annmodel.SomeString(no_nul=True) + def os_execve(l): + pass + register_external(os_execve, [[str0]], None) + def f(l): + return os_execve(l) + policy = AnnotatorPolicy() + policy.allow_someobjects = False + a = RPythonAnnotator(policy=policy) + a.build_types(f, [[str]]) # Does not raise + assert a.translator.config.translation.check_str_without_nul == False + # Now enable the str0 check, and try again with a similar function + a.translator.config.translation.check_str_without_nul=True + def g(l): + return os_execve(l) + raises(Exception, a.build_types, g, [[str]]) + a.build_types(g, [[str0]]) # Does not raise + + diff --git a/pypy/translator/c/gc.py b/pypy/translator/c/gc.py --- a/pypy/translator/c/gc.py +++ b/pypy/translator/c/gc.py @@ -11,7 +11,6 @@ from pypy.translator.tool.cbuild import ExternalCompilationInfo class BasicGcPolicy(object): - stores_hash_at_the_end = False def __init__(self, db, thread_enabled=False): self.db = db @@ -47,8 +46,7 @@ return ExternalCompilationInfo( pre_include_bits=['/* using %s */' % (gct.__class__.__name__,), '#define MALLOC_ZERO_FILLED %d' % (gct.malloc_zero_filled,), - ], - post_include_bits=['typedef void *GC_hidden_pointer;'] + ] ) def get_prebuilt_hash(self, obj): @@ -308,7 +306,6 @@ class FrameworkGcPolicy(BasicGcPolicy): transformerclass = framework.FrameworkGCTransformer - stores_hash_at_the_end = True def struct_setup(self, structdefnode, rtti): if rtti is not None and hasattr(rtti._obj, 'destructor_funcptr'): diff --git a/pypy/translator/c/test/test_extfunc.py b/pypy/translator/c/test/test_extfunc.py --- a/pypy/translator/c/test/test_extfunc.py +++ b/pypy/translator/c/test/test_extfunc.py @@ -3,6 +3,7 @@ import os, time, sys from pypy.tool.udir import udir from pypy.rlib.rarithmetic import r_longlong +from pypy.annotation import model as annmodel from pypy.translator.c.test.test_genc import compile from pypy.translator.c.test.test_standalone import StandaloneTests posix = __import__(os.name) @@ -145,7 +146,7 @@ filename = str(py.path.local(__file__)) def call_access(path, mode): return os.access(path, mode) - f = compile(call_access, [str, int]) + f = compile(call_access, [annmodel.s_Str0, int]) for mode in os.R_OK, os.W_OK, os.X_OK, (os.R_OK | os.W_OK | os.X_OK): assert f(filename, mode) == os.access(filename, mode) @@ -225,7 +226,7 @@ def test_system(): def does_stuff(cmd): return os.system(cmd) - f1 = compile(does_stuff, [str]) + f1 = compile(does_stuff, [annmodel.s_Str0]) res = f1("echo hello") assert res == 0 @@ -311,7 +312,7 @@ def test_chdir(): def does_stuff(path): os.chdir(path) - f1 = compile(does_stuff, [str]) + f1 = compile(does_stuff, [annmodel.s_Str0]) curdir = os.getcwd() try: os.chdir('..') @@ -325,7 +326,7 @@ os.rmdir(path) else: os.mkdir(path, 0777) - f1 = compile(does_stuff, [str, bool]) + f1 = compile(does_stuff, [annmodel.s_Str0, bool]) dirname = str(udir.join('test_mkdir_rmdir')) f1(dirname, False) assert os.path.exists(dirname) and os.path.isdir(dirname) @@ -628,7 +629,7 @@ return os.environ[s] except KeyError: return '--missing--' 
- func = compile(fn, [str]) + func = compile(fn, [annmodel.s_Str0]) os.environ.setdefault('USER', 'UNNAMED_USER') result = func('USER') assert result == os.environ['USER'] @@ -640,7 +641,7 @@ res = os.environ.get(s) if res is None: res = '--missing--' return res - func = compile(fn, [str]) + func = compile(fn, [annmodel.s_Str0]) os.environ.setdefault('USER', 'UNNAMED_USER') result = func('USER') assert result == os.environ['USER'] @@ -654,7 +655,7 @@ os.environ[s] = t3 os.environ[s] = t4 os.environ[s] = t5 - func = compile(fn, [str, str, str, str, str, str]) + func = compile(fn, [annmodel.s_Str0] * 6) func('PYPY_TEST_DICTLIKE_ENVIRON', 'a', 'b', 'c', 'FOOBAR', '42', expected_extra_mallocs = (2, 3, 4)) # at least two, less than 5 assert _real_getenv('PYPY_TEST_DICTLIKE_ENVIRON') == '42' @@ -678,7 +679,7 @@ else: raise Exception("should have raised!") # os.environ[s5] stays - func = compile(fn, [str, str, str, str, str]) + func = compile(fn, [annmodel.s_Str0] * 5) if hasattr(__import__(os.name), 'unsetenv'): expected_extra_mallocs = range(2, 10) # at least 2, less than 10: memory for s1, s2, s3, s4 should be freed @@ -743,7 +744,7 @@ raise AssertionError("should have failed!") result = os.listdir(s) return '/'.join(result) - func = compile(mylistdir, [str]) + func = compile(mylistdir, [annmodel.s_Str0]) for testdir in [str(udir), os.curdir]: result = func(testdir) result = result.split('/') diff --git a/pypy/translator/cli/test/runtest.py b/pypy/translator/cli/test/runtest.py --- a/pypy/translator/cli/test/runtest.py +++ b/pypy/translator/cli/test/runtest.py @@ -276,7 +276,7 @@ def get_annotation(x): if isinstance(x, basestring) and len(x) > 1: - return SomeString() + return SomeString(no_nul='\x00' not in x) else: return lltype_to_annotation(typeOf(x)) diff --git a/pypy/translator/driver.py b/pypy/translator/driver.py --- a/pypy/translator/driver.py +++ b/pypy/translator/driver.py @@ -184,6 +184,7 @@ self.standalone = standalone if standalone: + # the 'argv' parameter inputtypes = [s_list_of_strings] self.inputtypes = inputtypes diff --git a/pypy/translator/goal/nanos.py b/pypy/translator/goal/nanos.py --- a/pypy/translator/goal/nanos.py +++ b/pypy/translator/goal/nanos.py @@ -266,7 +266,7 @@ raise NotImplementedError("os.name == %r" % (os.name,)) def getenv(space, w_name): - name = space.str_w(w_name) + name = space.str0_w(w_name) return space.wrap(os.environ.get(name)) getenv_w = interp2app(getenv) diff --git a/pypy/translator/goal/targetpypystandalone.py b/pypy/translator/goal/targetpypystandalone.py --- a/pypy/translator/goal/targetpypystandalone.py +++ b/pypy/translator/goal/targetpypystandalone.py @@ -159,6 +159,8 @@ ## if config.translation.type_system == 'ootype': ## config.objspace.usemodules.suggest(rbench=True) + config.translation.suggest(check_str_without_nul=True) + if config.translation.thread: config.objspace.usemodules.thread = True elif config.objspace.usemodules.thread: From noreply at buildbot.pypy.org Wed Feb 8 10:58:23 2012 From: noreply at buildbot.pypy.org (Stefano Parmesan) Date: Wed, 8 Feb 2012 10:58:23 +0100 (CET) Subject: [pypy-commit] pypy default: restored original code for json decoder Message-ID: <20120208095823.B21A782B1E@wyvern.cs.uni-duesseldorf.de> Author: Stefano Parmesan Branch: Changeset: r52218:fb7b52083b47 Date: 2012-02-08 09:52 +0100 http://bitbucket.org/pypy/pypy/changeset/fb7b52083b47/ Log: restored original code for json decoder diff --git a/lib-python/modified-2.7/json/decoder.py b/lib-python/modified-2.7/json/decoder.py --- 
a/lib-python/modified-2.7/json/decoder.py +++ b/lib-python/modified-2.7/json/decoder.py @@ -5,47 +5,15 @@ import struct from json import scanner +try: + from _json import scanstring as c_scanstring +except ImportError: + c_scanstring = None __all__ = ['JSONDecoder'] FLAGS = re.VERBOSE | re.MULTILINE | re.DOTALL - -class KeyValueElement(object): - __slots__ = ['key', 'value'] - - def __init__(self, key, value): - self.key = key - self.value = value - - -class KeyValueAbstractBuilder(object): - __slots__ = ['elements', 'base_type'] - - def __init__(self): - self.elements = self.base_type() - - def append(self, key, value): - pass - - def build(self): - return self.elements - - -class KeyValueListBuilder(KeyValueAbstractBuilder): - base_type = list - - def append(self, key, value): - self.elements.append((key, value)) - - -class KeyValueDictBuilder(KeyValueAbstractBuilder): - base_type = dict - - def append(self, key, value): - self.elements[key] = value - - def _floatconstants(): _BYTES = '7FF80000000000007FF0000000000000'.decode('hex') if sys.byteorder != 'big': @@ -94,7 +62,7 @@ DEFAULT_ENCODING = "utf-8" -def scanstring(s, end, encoding=None, strict=True, +def py_scanstring(s, end, encoding=None, strict=True, _b=BACKSLASH, _m=STRINGCHUNK.match): """Scan the string s for a JSON string. End is the index of the character in s after the quote that started the JSON string. @@ -107,6 +75,7 @@ if encoding is None: encoding = DEFAULT_ENCODING chunks = [] + _append = chunks.append begin = end - 1 while 1: chunk = _m(s, end) @@ -115,13 +84,11 @@ errmsg("Unterminated string starting at", s, begin)) end = chunk.end() content, terminator = chunk.groups() - del chunk # Content is contains zero or more unescaped string characters if content: if not isinstance(content, unicode): content = unicode(content, encoding) - chunks.append(content) - del content + _append(content) # Terminator is the end of string, a literal control character, # or a backslash denoting that an escape sequence follows if terminator == '"': @@ -132,8 +99,7 @@ msg = "Invalid control character {0!r} at".format(terminator) raise ValueError(errmsg(msg, s, end)) else: - chunks.append(terminator) - del terminator + _append(terminator) continue try: esc = s[end] @@ -170,16 +136,21 @@ char = unichr(uni) end = next_end # Append the unescaped character - chunks.append(char) + _append(char) return u''.join(chunks), end +# Use speedup if available +scanstring = c_scanstring or py_scanstring + WHITESPACE = re.compile(r'[ \t\n\r]*', FLAGS) WHITESPACE_STR = ' \t\n\r' def JSONObject(s_and_end, encoding, strict, scan_once, object_hook, object_pairs_hook, _w=WHITESPACE.match, _ws=WHITESPACE_STR): s, end = s_and_end + pairs = [] + pairs_append = pairs.append # Use a slice to prevent IndexError from being raised, the following # check will raise a more specific ValueError if the string is empty nextchar = s[end:end + 1] @@ -191,7 +162,7 @@ # Trivial empty object if nextchar == '}': if object_pairs_hook is not None: - result = object_pairs_hook([]) + result = object_pairs_hook(pairs) return result, end pairs = {} if object_hook is not None: @@ -200,13 +171,7 @@ elif nextchar != '"': raise ValueError(errmsg("Expecting property name", s, end)) end += 1 - - if object_pairs_hook is not None: - pairs = KeyValueListBuilder() - else: - pairs = KeyValueDictBuilder() - - while 1: + while True: key, end = scanstring(s, end, encoding, strict) # To skip some function call overhead we optimize the fast paths where @@ -230,7 +195,7 @@ value, end = scan_once(s, end) 
except StopIteration: raise ValueError(errmsg("Expecting object", s, end)) - pairs.append(key, value) + pairs_append((key, value)) try: nextchar = s[end] @@ -262,9 +227,9 @@ raise ValueError(errmsg("Expecting property name", s, end - 1)) if object_pairs_hook is not None: - result = object_pairs_hook(pairs.build()) # to list + result = object_pairs_hook(pairs) return result, end - pairs = pairs.build() # to dict + pairs = dict(pairs) if object_hook is not None: pairs = object_hook(pairs) return pairs, end @@ -279,12 +244,13 @@ # Look-ahead for trivial empty array if nextchar == ']': return values, end + 1 - while 1: + _append = values.append + while True: try: value, end = scan_once(s, end) except StopIteration: raise ValueError(errmsg("Expecting object", s, end)) - values.append(value) + _append(value) nextchar = s[end:end + 1] if nextchar in _ws: end = _w(s, end + 1).end() diff --git a/lib-python/modified-2.7/json/scanner.py b/lib-python/modified-2.7/json/scanner.py --- a/lib-python/modified-2.7/json/scanner.py +++ b/lib-python/modified-2.7/json/scanner.py @@ -1,6 +1,10 @@ """JSON token scanner """ import re +try: + from _json import make_scanner as c_make_scanner +except ImportError: + c_make_scanner = None __all__ = ['make_scanner'] @@ -8,7 +12,19 @@ r'(-?(?:0|[1-9]\d*))(\.\d+)?([eE][-+]?\d+)?', (re.VERBOSE | re.MULTILINE | re.DOTALL)) -def make_scanner(context): +def py_make_scanner(context): + parse_object = context.parse_object + parse_array = context.parse_array + parse_string = context.parse_string + match_number = NUMBER_RE.match + encoding = context.encoding + strict = context.strict + parse_float = context.parse_float + parse_int = context.parse_int + parse_constant = context.parse_constant + object_hook = context.object_hook + object_pairs_hook = context.object_pairs_hook + def _scan_once(string, idx): try: nextchar = string[idx] @@ -16,12 +32,12 @@ raise StopIteration if nextchar == '"': - return context.parse_string(string, idx + 1, context.encoding, context.strict) + return parse_string(string, idx + 1, encoding, strict) elif nextchar == '{': - return context.parse_object((string, idx + 1), context.encoding, context.strict, - _scan_once, context.object_hook, context.object_pairs_hook) + return parse_object((string, idx + 1), encoding, strict, + _scan_once, object_hook, object_pairs_hook) elif nextchar == '[': - return context.parse_array((string, idx + 1), _scan_once) + return parse_array((string, idx + 1), _scan_once) elif nextchar == 'n' and string[idx:idx + 4] == 'null': return None, idx + 4 elif nextchar == 't' and string[idx:idx + 4] == 'true': @@ -29,21 +45,23 @@ elif nextchar == 'f' and string[idx:idx + 5] == 'false': return False, idx + 5 - m = NUMBER_RE.match(string, idx) + m = match_number(string, idx) if m is not None: integer, frac, exp = m.groups() if frac or exp: - res = context.parse_float(integer + (frac or '') + (exp or '')) + res = parse_float(integer + (frac or '') + (exp or '')) else: - res = context.parse_int(integer) + res = parse_int(integer) return res, m.end() elif nextchar == 'N' and string[idx:idx + 3] == 'NaN': - return context.parse_constant('NaN'), idx + 3 + return parse_constant('NaN'), idx + 3 elif nextchar == 'I' and string[idx:idx + 8] == 'Infinity': - return context.parse_constant('Infinity'), idx + 8 + return parse_constant('Infinity'), idx + 8 elif nextchar == '-' and string[idx:idx + 9] == '-Infinity': - return context.parse_constant('-Infinity'), idx + 9 + return parse_constant('-Infinity'), idx + 9 else: raise StopIteration return 
_scan_once + +make_scanner = c_make_scanner or py_make_scanner From noreply at buildbot.pypy.org Wed Feb 8 10:58:24 2012 From: noreply at buildbot.pypy.org (Stefano Parmesan) Date: Wed, 8 Feb 2012 10:58:24 +0100 (CET) Subject: [pypy-commit] pypy json-decoder-speedups: added json-decoder-speedups branch moved modified code Message-ID: <20120208095824.F360D82B1E@wyvern.cs.uni-duesseldorf.de> Author: Stefano Parmesan Branch: json-decoder-speedups Changeset: r52219:fd1859ebc28a Date: 2012-02-08 09:56 +0100 http://bitbucket.org/pypy/pypy/changeset/fd1859ebc28a/ Log: added json-decoder-speedups branch moved modified code diff --git a/lib-python/modified-2.7/json/decoder.py b/lib-python/modified-2.7/json/decoder.py --- a/lib-python/modified-2.7/json/decoder.py +++ b/lib-python/modified-2.7/json/decoder.py @@ -5,15 +5,43 @@ import struct from json import scanner -try: - from _json import scanstring as c_scanstring -except ImportError: - c_scanstring = None __all__ = ['JSONDecoder'] FLAGS = re.VERBOSE | re.MULTILINE | re.DOTALL + +class KeyValueElement(object): + def __init__(self, key, value): + self.key = key + self.value = value + + +class KeyValueAbstractBuilder(object): + def __init__(self): + self.elements = self.base_type() + + def append(self, key, value): + pass + + def build(self): + return self.elements + + +class KeyValueListBuilder(KeyValueAbstractBuilder): + base_type = list + + def append(self, key, value): + self.elements.append((key, value)) + + +class KeyValueDictBuilder(KeyValueAbstractBuilder): + base_type = dict + + def append(self, key, value): + self.elements[key] = value + + def _floatconstants(): _BYTES = '7FF80000000000007FF0000000000000'.decode('hex') if sys.byteorder != 'big': @@ -62,7 +90,7 @@ DEFAULT_ENCODING = "utf-8" -def py_scanstring(s, end, encoding=None, strict=True, +def scanstring(s, end, encoding=None, strict=True, _b=BACKSLASH, _m=STRINGCHUNK.match): """Scan the string s for a JSON string. End is the index of the character in s after the quote that started the JSON string. 
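
The KeyValueListBuilder/KeyValueDictBuilder pair brought back on this branch exists to serve the two decoding modes of JSONObject: either collect raw (key, value) pairs for an object_pairs_hook, or fill a plain dict directly. The interface those builders feed is the standard json one and can be exercised on any Python, for example:

    import json
    from collections import OrderedDict

    doc = '{"b": 1, "a": 2, "b": 3}'

    # default mode: pairs end up in a dict, the later duplicate key wins
    print(json.loads(doc))
    # object_pairs_hook receives the ordered pair list, duplicates included
    print(json.loads(doc, object_pairs_hook=lambda pairs: pairs))
    print(json.loads(doc, object_pairs_hook=OrderedDict))
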
@@ -75,7 +103,6 @@ if encoding is None: encoding = DEFAULT_ENCODING chunks = [] - _append = chunks.append begin = end - 1 while 1: chunk = _m(s, end) @@ -84,11 +111,13 @@ errmsg("Unterminated string starting at", s, begin)) end = chunk.end() content, terminator = chunk.groups() + del chunk # Content is contains zero or more unescaped string characters if content: if not isinstance(content, unicode): content = unicode(content, encoding) - _append(content) + chunks.append(content) + del content # Terminator is the end of string, a literal control character, # or a backslash denoting that an escape sequence follows if terminator == '"': @@ -99,7 +128,8 @@ msg = "Invalid control character {0!r} at".format(terminator) raise ValueError(errmsg(msg, s, end)) else: - _append(terminator) + chunks.append(terminator) + del terminator continue try: esc = s[end] @@ -136,21 +166,16 @@ char = unichr(uni) end = next_end # Append the unescaped character - _append(char) + chunks.append(char) return u''.join(chunks), end -# Use speedup if available -scanstring = c_scanstring or py_scanstring - WHITESPACE = re.compile(r'[ \t\n\r]*', FLAGS) WHITESPACE_STR = ' \t\n\r' def JSONObject(s_and_end, encoding, strict, scan_once, object_hook, object_pairs_hook, _w=WHITESPACE.match, _ws=WHITESPACE_STR): s, end = s_and_end - pairs = [] - pairs_append = pairs.append # Use a slice to prevent IndexError from being raised, the following # check will raise a more specific ValueError if the string is empty nextchar = s[end:end + 1] @@ -162,7 +187,7 @@ # Trivial empty object if nextchar == '}': if object_pairs_hook is not None: - result = object_pairs_hook(pairs) + result = object_pairs_hook([]) return result, end pairs = {} if object_hook is not None: @@ -171,7 +196,13 @@ elif nextchar != '"': raise ValueError(errmsg("Expecting property name", s, end)) end += 1 - while True: + + if object_pairs_hook is not None: + pairs = KeyValueListBuilder() + else: + pairs = KeyValueDictBuilder() + + while 1: key, end = scanstring(s, end, encoding, strict) # To skip some function call overhead we optimize the fast paths where @@ -195,7 +226,7 @@ value, end = scan_once(s, end) except StopIteration: raise ValueError(errmsg("Expecting object", s, end)) - pairs_append((key, value)) + pairs.append(key, value) try: nextchar = s[end] @@ -227,9 +258,9 @@ raise ValueError(errmsg("Expecting property name", s, end - 1)) if object_pairs_hook is not None: - result = object_pairs_hook(pairs) + result = object_pairs_hook(pairs.build()) # to list return result, end - pairs = dict(pairs) + pairs = pairs.build() # to dict if object_hook is not None: pairs = object_hook(pairs) return pairs, end @@ -244,13 +275,12 @@ # Look-ahead for trivial empty array if nextchar == ']': return values, end + 1 - _append = values.append - while True: + while 1: try: value, end = scan_once(s, end) except StopIteration: raise ValueError(errmsg("Expecting object", s, end)) - _append(value) + values.append(value) nextchar = s[end:end + 1] if nextchar in _ws: end = _w(s, end + 1).end() diff --git a/lib-python/modified-2.7/json/scanner.py b/lib-python/modified-2.7/json/scanner.py --- a/lib-python/modified-2.7/json/scanner.py +++ b/lib-python/modified-2.7/json/scanner.py @@ -1,10 +1,6 @@ """JSON token scanner """ import re -try: - from _json import make_scanner as c_make_scanner -except ImportError: - c_make_scanner = None __all__ = ['make_scanner'] @@ -12,19 +8,7 @@ r'(-?(?:0|[1-9]\d*))(\.\d+)?([eE][-+]?\d+)?', (re.VERBOSE | re.MULTILINE | re.DOTALL)) -def py_make_scanner(context): 
- parse_object = context.parse_object - parse_array = context.parse_array - parse_string = context.parse_string - match_number = NUMBER_RE.match - encoding = context.encoding - strict = context.strict - parse_float = context.parse_float - parse_int = context.parse_int - parse_constant = context.parse_constant - object_hook = context.object_hook - object_pairs_hook = context.object_pairs_hook - +def make_scanner(context): def _scan_once(string, idx): try: nextchar = string[idx] @@ -32,12 +16,12 @@ raise StopIteration if nextchar == '"': - return parse_string(string, idx + 1, encoding, strict) + return context.parse_string(string, idx + 1, context.encoding, context.strict) elif nextchar == '{': - return parse_object((string, idx + 1), encoding, strict, - _scan_once, object_hook, object_pairs_hook) + return context.parse_object((string, idx + 1), context.encoding, context.strict, + _scan_once, context.object_hook, context.object_pairs_hook) elif nextchar == '[': - return parse_array((string, idx + 1), _scan_once) + return context.parse_array((string, idx + 1), _scan_once) elif nextchar == 'n' and string[idx:idx + 4] == 'null': return None, idx + 4 elif nextchar == 't' and string[idx:idx + 4] == 'true': @@ -45,23 +29,21 @@ elif nextchar == 'f' and string[idx:idx + 5] == 'false': return False, idx + 5 - m = match_number(string, idx) + m = NUMBER_RE.match(string, idx) if m is not None: integer, frac, exp = m.groups() if frac or exp: - res = parse_float(integer + (frac or '') + (exp or '')) + res = context.parse_float(integer + (frac or '') + (exp or '')) else: - res = parse_int(integer) + res = context.parse_int(integer) return res, m.end() elif nextchar == 'N' and string[idx:idx + 3] == 'NaN': - return parse_constant('NaN'), idx + 3 + return context.parse_constant('NaN'), idx + 3 elif nextchar == 'I' and string[idx:idx + 8] == 'Infinity': - return parse_constant('Infinity'), idx + 8 + return context.parse_constant('Infinity'), idx + 8 elif nextchar == '-' and string[idx:idx + 9] == '-Infinity': - return parse_constant('-Infinity'), idx + 9 + return context.parse_constant('-Infinity'), idx + 9 else: raise StopIteration return _scan_once - -make_scanner = c_make_scanner or py_make_scanner From noreply at buildbot.pypy.org Wed Feb 8 11:24:29 2012 From: noreply at buildbot.pypy.org (fijal) Date: Wed, 8 Feb 2012 11:24:29 +0100 (CET) Subject: [pypy-commit] pypy json-decoder-speedups: small cleanups Message-ID: <20120208102429.19EB182B1E@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: json-decoder-speedups Changeset: r52220:6745ed04227e Date: 2012-02-08 12:24 +0200 http://bitbucket.org/pypy/pypy/changeset/6745ed04227e/ Log: small cleanups diff --git a/lib-python/modified-2.7/json/decoder.py b/lib-python/modified-2.7/json/decoder.py --- a/lib-python/modified-2.7/json/decoder.py +++ b/lib-python/modified-2.7/json/decoder.py @@ -173,15 +173,18 @@ WHITESPACE = re.compile(r'[ \t\n\r]*', FLAGS) WHITESPACE_STR = ' \t\n\r' +def is_whitespace(c): + return c == ' ' or c == '\t' or c == '\n' or c == '\r' + def JSONObject(s_and_end, encoding, strict, scan_once, object_hook, - object_pairs_hook, _w=WHITESPACE.match, _ws=WHITESPACE_STR): + object_pairs_hook, _w=WHITESPACE.match): s, end = s_and_end # Use a slice to prevent IndexError from being raised, the following # check will raise a more specific ValueError if the string is empty nextchar = s[end:end + 1] # Normally we expect nextchar == '"' if nextchar != '"': - if nextchar in _ws: + if is_whitespace(nextchar): end = _w(s, end).end() nextchar 
= s[end:end + 1] # Trivial empty object @@ -215,9 +218,9 @@ end += 1 try: - if s[end] in _ws: + if is_whitespace(s[end]): end += 1 - if s[end] in _ws: + if is_whitespace(s[end]): end = _w(s, end + 1).end() except IndexError: pass @@ -230,7 +233,7 @@ try: nextchar = s[end] - if nextchar in _ws: + if is_whitespace(nextchar): end = _w(s, end + 1).end() nextchar = s[end] except IndexError: @@ -244,10 +247,10 @@ try: nextchar = s[end] - if nextchar in _ws: + if is_whitespace(nextchar): end += 1 nextchar = s[end] - if nextchar in _ws: + if is_whitespace(nextchar): end = _w(s, end + 1).end() nextchar = s[end] except IndexError: @@ -265,11 +268,11 @@ pairs = object_hook(pairs) return pairs, end -def JSONArray(s_and_end, scan_once, _w=WHITESPACE.match, _ws=WHITESPACE_STR): +def JSONArray(s_and_end, scan_once, _w=WHITESPACE.match): s, end = s_and_end values = [] nextchar = s[end:end + 1] - if nextchar in _ws: + if is_whitespace(nextchar): end = _w(s, end + 1).end() nextchar = s[end:end + 1] # Look-ahead for trivial empty array @@ -282,7 +285,7 @@ raise ValueError(errmsg("Expecting object", s, end)) values.append(value) nextchar = s[end:end + 1] - if nextchar in _ws: + if is_whitespace(nextchar): end = _w(s, end + 1).end() nextchar = s[end:end + 1] end += 1 @@ -292,9 +295,9 @@ raise ValueError(errmsg("Expecting , delimiter", s, end)) try: - if s[end] in _ws: + if is_whitespace(s[end]): end += 1 - if s[end] in _ws: + if is_whitespace(s[end]): end = _w(s, end + 1).end() except IndexError: pass From noreply at buildbot.pypy.org Wed Feb 8 11:33:19 2012 From: noreply at buildbot.pypy.org (fijal) Date: Wed, 8 Feb 2012 11:33:19 +0100 (CET) Subject: [pypy-commit] pypy release-1.8.x: merge default up to 820edf258da9 Message-ID: <20120208103319.970EE82B1E@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: release-1.8.x Changeset: r52221:9d3e67742b56 Date: 2012-02-08 12:28 +0200 http://bitbucket.org/pypy/pypy/changeset/9d3e67742b56/ Log: merge default up to 820edf258da9 diff --git a/py/_io/terminalwriter.py b/py/_io/terminalwriter.py --- a/py/_io/terminalwriter.py +++ b/py/_io/terminalwriter.py @@ -271,16 +271,24 @@ ('srWindow', SMALL_RECT), ('dwMaximumWindowSize', COORD)] + _GetStdHandle = ctypes.windll.kernel32.GetStdHandle + _GetStdHandle.argtypes = [wintypes.DWORD] + _GetStdHandle.restype = wintypes.HANDLE def GetStdHandle(kind): - return ctypes.windll.kernel32.GetStdHandle(kind) + return _GetStdHandle(kind) - SetConsoleTextAttribute = \ - ctypes.windll.kernel32.SetConsoleTextAttribute - + SetConsoleTextAttribute = ctypes.windll.kernel32.SetConsoleTextAttribute + SetConsoleTextAttribute.argtypes = [wintypes.HANDLE, wintypes.WORD] + SetConsoleTextAttribute.restype = wintypes.BOOL + + _GetConsoleScreenBufferInfo = \ + ctypes.windll.kernel32.GetConsoleScreenBufferInfo + _GetConsoleScreenBufferInfo.argtypes = [wintypes.HANDLE, + ctypes.POINTER(CONSOLE_SCREEN_BUFFER_INFO)] + _GetConsoleScreenBufferInfo.restype = wintypes.BOOL def GetConsoleInfo(handle): info = CONSOLE_SCREEN_BUFFER_INFO() - ctypes.windll.kernel32.GetConsoleScreenBufferInfo(\ - handle, ctypes.byref(info)) + _GetConsoleScreenBufferInfo(handle, ctypes.byref(info)) return info def _getdimensions(): diff --git a/pypy/doc/Makefile b/pypy/doc/Makefile --- a/pypy/doc/Makefile +++ b/pypy/doc/Makefile @@ -81,6 +81,7 @@ "run these through (pdf)latex." man: + python config/generate.py $(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man @echo @echo "Build finished. 
The manual pages are in $(BUILDDIR)/man" diff --git a/pypy/doc/commandline_ref.rst b/pypy/doc/commandline_ref.rst new file mode 100644 --- /dev/null +++ b/pypy/doc/commandline_ref.rst @@ -0,0 +1,10 @@ +Command line reference +====================== + +Manual pages +------------ + +.. toctree:: + :maxdepth: 1 + + man/pypy.1.rst diff --git a/pypy/doc/conf.py b/pypy/doc/conf.py --- a/pypy/doc/conf.py +++ b/pypy/doc/conf.py @@ -45,9 +45,9 @@ # built documents. # # The short X.Y version. -version = '1.7' +version = '1.8' # The full version, including alpha/beta/rc tags. -release = '1.7' +release = '1.8' # The language for content autogenerated by Sphinx. Refer to documentation # for a list of supported languages. diff --git a/pypy/doc/config/translation.log.txt b/pypy/doc/config/translation.log.txt --- a/pypy/doc/config/translation.log.txt +++ b/pypy/doc/config/translation.log.txt @@ -2,4 +2,4 @@ These must be enabled by setting the PYPYLOG environment variable. The exact set of features supported by PYPYLOG is described in -pypy/translation/c/src/debug.h. +pypy/translation/c/src/debug_print.h. diff --git a/pypy/doc/garbage_collection.rst b/pypy/doc/garbage_collection.rst --- a/pypy/doc/garbage_collection.rst +++ b/pypy/doc/garbage_collection.rst @@ -142,10 +142,9 @@ So as a first approximation, when compared to the Hybrid GC, the Minimark GC saves one word of memory per old object. -There are a number of environment variables that can be tweaked to -influence the GC. (Their default value should be ok for most usages.) -You can read more about them at the start of -`pypy/rpython/memory/gc/minimark.py`_. +There are :ref:`a number of environment variables +` that can be tweaked to influence the +GC. (Their default value should be ok for most usages.) In more detail: @@ -211,5 +210,4 @@ are preserved. If the object dies then the pre-reserved location becomes free garbage, to be collected at the next major collection. - .. include:: _ref.txt diff --git a/pypy/doc/gc_info.rst b/pypy/doc/gc_info.rst new file mode 100644 --- /dev/null +++ b/pypy/doc/gc_info.rst @@ -0,0 +1,53 @@ +Garbage collector configuration +=============================== + +.. _minimark-environment-variables: + +Minimark +-------- + +PyPy's default ``minimark`` garbage collector is configurable through +several environment variables: + +``PYPY_GC_NURSERY`` + The nursery size. + Defaults to ``4MB``. + Small values (like 1 or 1KB) are useful for debugging. + +``PYPY_GC_MAJOR_COLLECT`` + Major collection memory factor. + Default is ``1.82``, which means trigger a major collection when the + memory consumed equals 1.82 times the memory really used at the end + of the previous major collection. + +``PYPY_GC_GROWTH`` + Major collection threshold's max growth rate. + Default is ``1.4``. + Useful to collect more often than normally on sudden memory growth, + e.g. when there is a temporary peak in memory usage. + +``PYPY_GC_MAX`` + The max heap size. + If coming near this limit, it will first collect more often, then + raise an RPython MemoryError, and if that is not enough, crash the + program with a fatal error. + Try values like ``1.6GB``. + +``PYPY_GC_MAX_DELTA`` + The major collection threshold will never be set to more than + ``PYPY_GC_MAX_DELTA`` the amount really used after a collection. + Defaults to 1/8th of the total RAM size (which is constrained to be + at most 2/3/4GB on 32-bit systems). + Try values like ``200MB``. + +``PYPY_GC_MIN`` + Don't collect while the memory size is below this limit. 
+ Useful to avoid spending all the time in the GC in very small + programs. + Defaults to 8 times the nursery. + +``PYPY_GC_DEBUG`` + Enable extra checks around collections that are too slow for normal + use. + Values are ``0`` (off), ``1`` (on major collections) or ``2`` (also + on minor collections). diff --git a/pypy/doc/index.rst b/pypy/doc/index.rst --- a/pypy/doc/index.rst +++ b/pypy/doc/index.rst @@ -353,10 +353,12 @@ getting-started-dev.rst windows.rst faq.rst + commandline_ref.rst architecture.rst coding-guide.rst cpython_differences.rst garbage_collection.rst + gc_info.rst interpreter.rst objspace.rst __pypy__-module.rst diff --git a/pypy/doc/man/pypy.1.rst b/pypy/doc/man/pypy.1.rst --- a/pypy/doc/man/pypy.1.rst +++ b/pypy/doc/man/pypy.1.rst @@ -24,6 +24,9 @@ -S Do not ``import site`` on initialization. +-s + Don't add the user site directory to `sys.path`. + -u Unbuffered binary ``stdout`` and ``stderr``. @@ -39,6 +42,9 @@ -E Ignore environment variables (such as ``PYTHONPATH``). +-B + Disable writing bytecode (``.pyc``) files. + --version Print the PyPy version. @@ -84,6 +90,64 @@ Optimizations to enabled or ``all``. Warning, this option is dangerous, and should be avoided. +ENVIRONMENT +=========== + +``PYTHONPATH`` + Add directories to pypy's module search path. + The format is the same as shell's ``PATH``. + +``PYTHONSTARTUP`` + A script referenced by this variable will be executed before the + first prompt is displayed, in interactive mode. + +``PYTHONDONTWRITEBYTECODE`` + If set to a non-empty value, equivalent to the ``-B`` option. + Disable writing ``.pyc`` files. + +``PYTHONINSPECT`` + If set to a non-empty value, equivalent to the ``-i`` option. + Inspect interactively after running the specified script. + +``PYTHONIOENCODING`` + If this is set, it overrides the encoding used for + *stdin*/*stdout*/*stderr*. + The syntax is *encodingname*:*errorhandler* + The *errorhandler* part is optional and has the same meaning as in + `str.encode`. + +``PYTHONNOUSERSITE`` + If set to a non-empty value, equivalent to the ``-s`` option. + Don't add the user site directory to `sys.path`. + +``PYTHONWARNINGS`` + If set, equivalent to the ``-W`` option (warning control). + The value should be a comma-separated list of ``-W`` parameters. + +``PYPYLOG`` + If set to a non-empty value, enable logging, the format is: + + *fname* + logging for profiling: includes all + ``debug_start``/``debug_stop`` but not any nested + ``debug_print``. + *fname* can be ``-`` to log to *stderr*. + + ``:``\ *fname* + Full logging, including ``debug_print``. + + *prefix*\ ``:``\ *fname* + Conditional logging. + Multiple prefixes can be specified, comma-separated. + Only sections whose name match the prefix will be logged. + + ``PYPYLOG``\ =\ ``jit-log-opt,jit-backend:``\ *logfile* will + generate a log suitable for *jitviewer*, a tool for debugging + performance issues under PyPy. + +.. include:: ../gc_info.rst + :start-line: 7 + SEE ALSO ======== diff --git a/pypy/doc/release-1.8.0.rst b/pypy/doc/release-1.8.0.rst new file mode 100644 --- /dev/null +++ b/pypy/doc/release-1.8.0.rst @@ -0,0 +1,52 @@ +============================ +PyPy 1.8 - business as usual +============================ + +We're pleased to announce the 1.8 release of PyPy. As became a habit, this +release brings a lot of bugfixes, performance and memory improvements over +the 1.7 release. 
The main highlight of the release is the introduction of +list strategies which makes homogenous lists more efficient both in terms +of performance and memory. Otherwise it's "business as usual" in the sense +that performance improved roughly 10% on average since the previous release. +You can download the PyPy 1.8 release here: + + http://pypy.org/download.html + +What is PyPy? +============= + +PyPy is a very compliant Python interpreter, almost a drop-in replacement for +CPython 2.7. It's fast (`pypy 1.8 and cpython 2.7.1`_ performance comparison) +due to its integrated tracing JIT compiler. + +This release supports x86 machines running Linux 32/64, Mac OS X 32/64 or +Windows 32. Windows 64 work is ongoing, but not yet natively supported. + +.. _`pypy 1.8 and cpython 2.7.1`: http://speed.pypy.org + + +Highlights +========== + +* List strategies. Now lists that contain only ints or only floats should + be as efficient as storing them in a binary-packed array. It also improves + the JIT performance in places that use such lists. There are also special + strategies for unicode and string lists. + +* As usual, numerous performance improvements. There are too many examples + of python constructs that now should behave faster to list them. + +* Bugfixes and compatibility fixes with CPython. + +* Windows fixes. + +* NumPy effort progress; for the exact list of things that have been done, + consult the `numpy status page`_. A tentative list of things that has + been done: + + xxxx # list it, multidim arrays in particular + +* Fundraising XXX + +.. _`numpy status page`: xxx +.. _`numpy status update blog report`: xxx diff --git a/pypy/interpreter/astcompiler/optimize.py b/pypy/interpreter/astcompiler/optimize.py --- a/pypy/interpreter/astcompiler/optimize.py +++ b/pypy/interpreter/astcompiler/optimize.py @@ -302,8 +302,7 @@ # narrow builds will return a surrogate. In both # the cases skip the optimization in order to # produce compatible pycs. - if (self.space.isinstance_w(w_obj, self.space.w_unicode) - and + if (self.space.isinstance_w(w_obj, self.space.w_unicode) and self.space.isinstance_w(w_const, self.space.w_unicode)): unistr = self.space.unicode_w(w_const) if len(unistr) == 1: @@ -311,7 +310,7 @@ else: ch = 0 if (ch > 0xFFFF or - (MAXUNICODE == 0xFFFF and 0xD800 <= ch <= 0xDFFFF)): + (MAXUNICODE == 0xFFFF and 0xD800 <= ch <= 0xDFFF)): return subs return ast.Const(w_const, subs.lineno, subs.col_offset) diff --git a/pypy/interpreter/astcompiler/test/test_compiler.py b/pypy/interpreter/astcompiler/test/test_compiler.py --- a/pypy/interpreter/astcompiler/test/test_compiler.py +++ b/pypy/interpreter/astcompiler/test/test_compiler.py @@ -838,7 +838,7 @@ # Just checking this doesn't crash out self.count_instructions(source) - def test_const_fold_unicode_subscr(self): + def test_const_fold_unicode_subscr(self, monkeypatch): source = """def f(): return u"abc"[0] """ @@ -853,6 +853,14 @@ assert counts == {ops.LOAD_CONST: 2, ops.BINARY_SUBSCR: 1, ops.RETURN_VALUE: 1} + monkeypatch.setattr(optimize, "MAXUNICODE", 0xFFFF) + source = """def f(): + return u"\uE01F"[0] + """ + counts = self.count_instructions(source) + assert counts == {ops.LOAD_CONST: 1, ops.RETURN_VALUE: 1} + monkeypatch.undo() + # getslice is not yet optimized. # Still, check a case which yields the empty string. 
source = """def f(): diff --git a/pypy/interpreter/baseobjspace.py b/pypy/interpreter/baseobjspace.py --- a/pypy/interpreter/baseobjspace.py +++ b/pypy/interpreter/baseobjspace.py @@ -1340,6 +1340,15 @@ def unicode_w(self, w_obj): return w_obj.unicode_w(self) + def unicode0_w(self, w_obj): + "Like unicode_w, but rejects strings with NUL bytes." + from pypy.rlib import rstring + result = w_obj.unicode_w(self) + if u'\x00' in result: + raise OperationError(self.w_TypeError, self.wrap( + 'argument must be a unicode string without NUL characters')) + return rstring.assert_str0(result) + def realunicode_w(self, w_obj): # Like unicode_w, but only works if w_obj is really of type # 'unicode'. @@ -1638,6 +1647,9 @@ 'UnicodeEncodeError', 'UnicodeDecodeError', ] + +if sys.platform.startswith("win"): + ObjSpace.ExceptionTable += ['WindowsError'] ## Irregular part of the interface: # diff --git a/pypy/interpreter/test/test_objspace.py b/pypy/interpreter/test/test_objspace.py --- a/pypy/interpreter/test/test_objspace.py +++ b/pypy/interpreter/test/test_objspace.py @@ -178,6 +178,14 @@ res = self.space.interp_w(Function, w(None), can_be_None=True) assert res is None + def test_str0_w(self): + space = self.space + w = space.wrap + assert space.str0_w(w("123")) == "123" + exc = space.raises_w(space.w_TypeError, space.str0_w, w("123\x004")) + assert space.unicode0_w(w(u"123")) == u"123" + exc = space.raises_w(space.w_TypeError, space.unicode0_w, w(u"123\x004")) + def test_getindex_w(self): w_instance1 = self.space.appexec([], """(): class X(object): diff --git a/pypy/jit/codewriter/flatten.py b/pypy/jit/codewriter/flatten.py --- a/pypy/jit/codewriter/flatten.py +++ b/pypy/jit/codewriter/flatten.py @@ -162,7 +162,9 @@ if len(block.exits) == 1: # A single link, fall-through link = block.exits[0] - assert link.exitcase is None + assert link.exitcase in (None, False, True) + # the cases False or True should not really occur, but can show + # up in the manually hacked graphs for generators... self.make_link(link) # elif block.exitswitch is c_last_exception: diff --git a/pypy/jit/codewriter/policy.py b/pypy/jit/codewriter/policy.py --- a/pypy/jit/codewriter/policy.py +++ b/pypy/jit/codewriter/policy.py @@ -48,7 +48,7 @@ mod = func.__module__ or '?' if mod.startswith('pypy.rpython.module.'): return True - if mod.startswith('pypy.translator.'): # XXX wtf? 
+ if mod == 'pypy.translator.goal.nanos': # more helpers return True return False diff --git a/pypy/jit/metainterp/test/test_ajit.py b/pypy/jit/metainterp/test/test_ajit.py --- a/pypy/jit/metainterp/test/test_ajit.py +++ b/pypy/jit/metainterp/test/test_ajit.py @@ -3706,6 +3706,18 @@ # here it works again self.check_operations_history(guard_class=0, record_known_class=1) + def test_generator(self): + def g(n): + yield n+1 + yield n+2 + yield n+3 + def f(n): + gen = g(n) + return gen.next() * gen.next() * gen.next() + res = self.interp_operations(f, [10]) + assert res == 11 * 12 * 13 + self.check_operations_history(int_add=3, int_mul=2) + class TestLLtype(BaseLLtypeTests, LLJitMixin): def test_tagged(self): diff --git a/pypy/module/_ffi/test/test__ffi.py b/pypy/module/_ffi/test/test__ffi.py --- a/pypy/module/_ffi/test/test__ffi.py +++ b/pypy/module/_ffi/test/test__ffi.py @@ -190,6 +190,7 @@ def test_convert_strings_to_char_p(self): """ + DLLEXPORT long mystrlen(char* s) { long len = 0; @@ -215,6 +216,7 @@ def test_convert_unicode_to_unichar_p(self): """ #include + DLLEXPORT long mystrlen_u(wchar_t* s) { long len = 0; @@ -241,6 +243,7 @@ def test_keepalive_temp_buffer(self): """ + DLLEXPORT char* do_nothing(char* s) { return s; @@ -525,5 +528,7 @@ from _ffi import CDLL, types libfoo = CDLL(self.libfoo_name) raises(AttributeError, "libfoo.getfunc('I_do_not_exist', [], types.void)") + if self.iswin32: + skip("unix specific") libnone = CDLL(None) raises(AttributeError, "libnone.getfunc('I_do_not_exist', [], types.void)") diff --git a/pypy/module/_file/test/test_file.py b/pypy/module/_file/test/test_file.py --- a/pypy/module/_file/test/test_file.py +++ b/pypy/module/_file/test/test_file.py @@ -265,6 +265,13 @@ if option.runappdirect: py.test.skip("works with internals of _file impl on py.py") + import platform + if platform.system() == 'Windows': + # XXX This test crashes until someone implements something like + # XXX verify_fd from + # XXX http://hg.python.org/cpython/file/80ddbd822227/Modules/posixmodule.c#l434 + # XXX and adds it to fopen + assert False state = [0] def read(fd, n=None): diff --git a/pypy/module/micronumpy/interp_boxes.py b/pypy/module/micronumpy/interp_boxes.py --- a/pypy/module/micronumpy/interp_boxes.py +++ b/pypy/module/micronumpy/interp_boxes.py @@ -80,6 +80,7 @@ descr_sub = _binop_impl("subtract") descr_mul = _binop_impl("multiply") descr_div = _binop_impl("divide") + descr_truediv = _binop_impl("true_divide") descr_pow = _binop_impl("power") descr_eq = _binop_impl("equal") descr_ne = _binop_impl("not_equal") @@ -174,6 +175,7 @@ __sub__ = interp2app(W_GenericBox.descr_sub), __mul__ = interp2app(W_GenericBox.descr_mul), __div__ = interp2app(W_GenericBox.descr_div), + __truediv__ = interp2app(W_GenericBox.descr_truediv), __pow__ = interp2app(W_GenericBox.descr_pow), __radd__ = interp2app(W_GenericBox.descr_radd), diff --git a/pypy/module/micronumpy/test/test_dtypes.py b/pypy/module/micronumpy/test/test_dtypes.py --- a/pypy/module/micronumpy/test/test_dtypes.py +++ b/pypy/module/micronumpy/test/test_dtypes.py @@ -401,3 +401,9 @@ else: assert issubclass(int64, int) assert int_ is int64 + + def test_operators(self): + from operator import truediv + from _numpypy import float64, int_ + + assert truediv(int_(3), int_(2)) == float64(1.5) diff --git a/pypy/module/posix/interp_posix.py b/pypy/module/posix/interp_posix.py --- a/pypy/module/posix/interp_posix.py +++ b/pypy/module/posix/interp_posix.py @@ -48,7 +48,7 @@ return fsencode_w(self.space, self.w_obj) def as_unicode(self): 
- return self.space.unicode_w(self.w_obj) + return self.space.unicode0_w(self.w_obj) class FileDecoder(object): def __init__(self, space, w_obj): @@ -62,7 +62,7 @@ space = self.space w_unicode = space.call_method(self.w_obj, 'decode', getfilesystemencoding(space)) - return space.unicode_w(w_unicode) + return space.unicode0_w(w_unicode) @specialize.memo() def dispatch_filename(func, tag=0): diff --git a/pypy/module/posix/test/test_ztranslation.py b/pypy/module/posix/test/test_ztranslation.py new file mode 100644 --- /dev/null +++ b/pypy/module/posix/test/test_ztranslation.py @@ -0,0 +1,4 @@ +from pypy.objspace.fake.checkmodule import checkmodule + +def test_posix_translates(): + checkmodule('posix') \ No newline at end of file diff --git a/pypy/module/pypyjit/test_pypy_c/test_call.py b/pypy/module/pypyjit/test_pypy_c/test_call.py --- a/pypy/module/pypyjit/test_pypy_c/test_call.py +++ b/pypy/module/pypyjit/test_pypy_c/test_call.py @@ -27,6 +27,7 @@ ... p53 = call_assembler(..., descr=...) guard_not_forced(descr=...) + keepalive(...) guard_no_exception(descr=...) ... """) diff --git a/pypy/module/zipimport/interp_zipimport.py b/pypy/module/zipimport/interp_zipimport.py --- a/pypy/module/zipimport/interp_zipimport.py +++ b/pypy/module/zipimport/interp_zipimport.py @@ -123,7 +123,9 @@ self.prefix = prefix def getprefix(self, space): - return space.wrap(self.prefix) + if ZIPSEP == os.path.sep: + return space.wrap(self.prefix) + return space.wrap(self.prefix.replace(ZIPSEP, os.path.sep)) def _find_relative_path(self, filename): if filename.startswith(self.filename): @@ -381,7 +383,7 @@ prefix = name[len(filename):] if prefix.startswith(os.path.sep) or prefix.startswith(ZIPSEP): prefix = prefix[1:] - if prefix and not prefix.endswith(ZIPSEP): + if prefix and not prefix.endswith(ZIPSEP) and not prefix.endswith(os.path.sep): prefix += ZIPSEP w_result = space.wrap(W_ZipImporter(space, name, filename, zip_file, prefix)) zip_cache.set(filename, w_result) diff --git a/pypy/module/zipimport/test/test_undocumented.py b/pypy/module/zipimport/test/test_undocumented.py --- a/pypy/module/zipimport/test/test_undocumented.py +++ b/pypy/module/zipimport/test/test_undocumented.py @@ -119,7 +119,7 @@ zip_importer = zipimport.zipimporter(path) assert isinstance(zip_importer, zipimport.zipimporter) assert zip_importer.archive == zip_path - assert zip_importer.prefix == prefix + assert zip_importer.prefix == prefix.replace('/', os.path.sep) assert zip_path in zipimport._zip_directory_cache finally: self.cleanup_zipfile(self.created_paths) diff --git a/pypy/module/zipimport/test/test_zipimport.py b/pypy/module/zipimport/test/test_zipimport.py --- a/pypy/module/zipimport/test/test_zipimport.py +++ b/pypy/module/zipimport/test/test_zipimport.py @@ -15,7 +15,7 @@ cpy's regression tests """ compression = ZIP_STORED - pathsep = '/' + pathsep = os.path.sep def make_pyc(cls, space, co, mtime): data = marshal.dumps(co) @@ -129,7 +129,7 @@ self.writefile('sub/__init__.py', '') self.writefile('sub/yy.py', '') from zipimport import _zip_directory_cache, zipimporter - sub_importer = zipimporter(self.zipfile + '/sub') + sub_importer = zipimporter(self.zipfile + os.path.sep + 'sub') main_importer = zipimporter(self.zipfile) assert main_importer is not sub_importer diff --git a/pypy/rlib/ropenssl.py b/pypy/rlib/ropenssl.py --- a/pypy/rlib/ropenssl.py +++ b/pypy/rlib/ropenssl.py @@ -54,6 +54,7 @@ ASN1_STRING = lltype.Ptr(lltype.ForwardReference()) ASN1_ITEM = rffi.COpaquePtr('ASN1_ITEM') +ASN1_ITEM_EXP = 
lltype.Ptr(lltype.FuncType([], ASN1_ITEM)) X509_NAME = rffi.COpaquePtr('X509_NAME') class CConfig: @@ -101,12 +102,11 @@ X509_extension_st = rffi_platform.Struct( 'struct X509_extension_st', [('value', ASN1_STRING)]) - ASN1_ITEM_EXP = lltype.FuncType([], ASN1_ITEM) X509V3_EXT_D2I = lltype.FuncType([rffi.VOIDP, rffi.CCHARPP, rffi.LONG], rffi.VOIDP) v3_ext_method = rffi_platform.Struct( 'struct v3_ext_method', - [('it', lltype.Ptr(ASN1_ITEM_EXP)), + [('it', ASN1_ITEM_EXP), ('d2i', lltype.Ptr(X509V3_EXT_D2I))]) GENERAL_NAME_st = rffi_platform.Struct( 'struct GENERAL_NAME_st', @@ -118,6 +118,8 @@ ('block_size', rffi.INT)]) EVP_MD_SIZE = rffi_platform.SizeOf('EVP_MD') EVP_MD_CTX_SIZE = rffi_platform.SizeOf('EVP_MD_CTX') + OPENSSL_EXPORT_VAR_AS_FUNCTION = rffi_platform.Defined( + "OPENSSL_EXPORT_VAR_AS_FUNCTION") for k, v in rffi_platform.configure(CConfig).items(): @@ -224,7 +226,10 @@ ssl_external('i2a_ASN1_INTEGER', [BIO, ASN1_INTEGER], rffi.INT) ssl_external('ASN1_item_d2i', [rffi.VOIDP, rffi.CCHARPP, rffi.LONG, ASN1_ITEM], rffi.VOIDP) -ssl_external('ASN1_ITEM_ptr', [rffi.VOIDP], ASN1_ITEM, macro=True) +if OPENSSL_EXPORT_VAR_AS_FUNCTION: + ssl_external('ASN1_ITEM_ptr', [ASN1_ITEM_EXP], ASN1_ITEM, macro=True) +else: + ssl_external('ASN1_ITEM_ptr', [rffi.VOIDP], ASN1_ITEM, macro=True) ssl_external('sk_GENERAL_NAME_num', [GENERAL_NAMES], rffi.INT, macro=True) diff --git a/pypy/rpython/module/ll_os.py b/pypy/rpython/module/ll_os.py --- a/pypy/rpython/module/ll_os.py +++ b/pypy/rpython/module/ll_os.py @@ -43,7 +43,7 @@ arglist = ['arg%d' % (i,) for i in range(len(signature))] transformed_arglist = arglist[:] for i, arg in enumerate(signature): - if arg is unicode: + if arg in (unicode, unicode0): transformed_arglist[i] = transformed_arglist[i] + '.as_unicode()' args = ', '.join(arglist) @@ -67,7 +67,7 @@ exec source.compile() in miniglobals new_func = miniglobals[func_name] specialized_args = [i for i in range(len(signature)) - if signature[i] in (unicode, None)] + if signature[i] in (unicode, unicode0, None)] new_func = specialize.argtype(*specialized_args)(new_func) # Monkeypatch the function in pypy.rlib.rposix diff --git a/pypy/translator/c/gc.py b/pypy/translator/c/gc.py --- a/pypy/translator/c/gc.py +++ b/pypy/translator/c/gc.py @@ -11,7 +11,6 @@ from pypy.translator.tool.cbuild import ExternalCompilationInfo class BasicGcPolicy(object): - stores_hash_at_the_end = False def __init__(self, db, thread_enabled=False): self.db = db @@ -47,8 +46,7 @@ return ExternalCompilationInfo( pre_include_bits=['/* using %s */' % (gct.__class__.__name__,), '#define MALLOC_ZERO_FILLED %d' % (gct.malloc_zero_filled,), - ], - post_include_bits=['typedef void *GC_hidden_pointer;'] + ] ) def get_prebuilt_hash(self, obj): @@ -308,7 +306,6 @@ class FrameworkGcPolicy(BasicGcPolicy): transformerclass = framework.FrameworkGCTransformer - stores_hash_at_the_end = True def struct_setup(self, structdefnode, rtti): if rtti is not None and hasattr(rtti._obj, 'destructor_funcptr'): From noreply at buildbot.pypy.org Wed Feb 8 11:40:32 2012 From: noreply at buildbot.pypy.org (arigo) Date: Wed, 8 Feb 2012 11:40:32 +0100 (CET) Subject: [pypy-commit] pypy default: Close the "default" branch :-/ as a way to forget about the last 4 Message-ID: <20120208104032.8FD4282B1E@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r52222:b7efb3f29d39 Date: 2012-02-08 11:39 +0100 http://bitbucket.org/pypy/pypy/changeset/b7efb3f29d39/ Log: Close the "default" branch :-/ as a way to forget about the last 4 checkins for now. 
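
The ll_os.py hunk a little further up makes the generated wrappers append an .as_unicode() call for unicode0 arguments as well, before the assembled source is compiled and exec'd into a fresh function. A toy version of that build-source-then-exec specialization trick; the names and the plain UTF-8 encode below are stand-ins for the real conversion, not the actual PyPy code:

    def make_wrapper(func, signature):
        # signature is a list of 'str' / 'unicode' markers (illustrative only)
        argnames = ['arg%d' % i for i in range(len(signature))]
        converted = [name + ".encode('utf-8')" if kind == 'unicode' else name
                     for name, kind in zip(argnames, signature)]
        source = 'def wrapper(%s):\n    return func(%s)\n' % (
            ', '.join(argnames), ', '.join(converted))
        namespace = {'func': func}
        exec(source, namespace)
        return namespace['wrapper']

    byte_len = make_wrapper(len, ['unicode'])
    print(byte_len(u'sp\xe4m'))   # 5 -- the wrapper encoded to UTF-8 first

In the real code the resulting function is additionally specialized per argument type (specialize.argtype) and monkeypatched into pypy.rlib.rposix.
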
From noreply at buildbot.pypy.org Wed Feb 8 11:40:33 2012 From: noreply at buildbot.pypy.org (arigo) Date: Wed, 8 Feb 2012 11:40:33 +0100 (CET) Subject: [pypy-commit] pypy default: Fix a typo. Message-ID: <20120208104033.C2A3882CE3@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r52223:50a9ef5dd554 Date: 2012-02-08 11:39 +0100 http://bitbucket.org/pypy/pypy/changeset/50a9ef5dd554/ Log: Fix a typo. diff --git a/pypy/doc/release-1.8.0.rst b/pypy/doc/release-1.8.0.rst --- a/pypy/doc/release-1.8.0.rst +++ b/pypy/doc/release-1.8.0.rst @@ -33,8 +33,8 @@ the JIT performance in places that use such lists. There are also special strategies for unicode and string lists. -* As usual, numerous performance improvements. There are too many examples - of python constructs that now should behave faster to list them. +* As usual, numerous performance improvements. There are many examples + of python constructs that now should behave faster; too many to list them. * Bugfixes and compatibility fixes with CPython. From noreply at buildbot.pypy.org Wed Feb 8 11:43:33 2012 From: noreply at buildbot.pypy.org (arigo) Date: Wed, 8 Feb 2012 11:43:33 +0100 (CET) Subject: [pypy-commit] pypy json-decoder-speedups: Merge the closed branch on default. Message-ID: <20120208104333.E12E582B1E@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: json-decoder-speedups Changeset: r52224:5df909dcb699 Date: 2012-02-08 11:43 +0100 http://bitbucket.org/pypy/pypy/changeset/5df909dcb699/ Log: Merge the closed branch on default. From noreply at buildbot.pypy.org Wed Feb 8 12:44:18 2012 From: noreply at buildbot.pypy.org (mattip) Date: Wed, 8 Feb 2012 12:44:18 +0100 (CET) Subject: [pypy-commit] buildbot default: rearrange win32 slave order Message-ID: <20120208114418.855DB82B1E@wyvern.cs.uni-duesseldorf.de> Author: mattip Branch: Changeset: r635:557fc81a4306 Date: 2012-02-08 13:37 +0200 http://bitbucket.org/pypy/buildbot/changeset/557fc81a4306/ Log: rearrange win32 slave order diff --git a/bot2/pypybuildbot/master.py b/bot2/pypybuildbot/master.py --- a/bot2/pypybuildbot/master.py +++ b/bot2/pypybuildbot/master.py @@ -349,7 +349,7 @@ 'category' : 'mac64', }, {"name": WIN32, - "slavenames": ["snakepit32", "bigboard", "SalsaSalsa"], + "slavenames": ["SalsaSalsa", "snakepit32", "bigboard"], "builddir": WIN32, "factory": pypyOwnTestFactoryWin, "category": 'win32' @@ -361,13 +361,13 @@ "category": 'win32' }, {"name": APPLVLWIN32, - "slavenames": ["snakepit32", "bigboard", "SalsaSalsa"], + "slavenames": ["SalsaSalsa", "snakepit32", "bigboard"], "builddir": APPLVLWIN32, "factory": pypyTranslatedAppLevelTestFactoryWin, "category": "win32" }, {"name" : JITWIN32, - "slavenames": ["snakepit32", "bigboard", "SalsaSalsa"], + "slavenames": ["SalsaSalsa", "snakepit32", "bigboard"], 'builddir' : JITWIN32, 'factory' : pypyJITTranslatedTestFactoryWin, 'category' : 'win32', From noreply at buildbot.pypy.org Wed Feb 8 12:45:03 2012 From: noreply at buildbot.pypy.org (fijal) Date: Wed, 8 Feb 2012 12:45:03 +0100 (CET) Subject: [pypy-commit] pypy default: work on release announcement Message-ID: <20120208114503.BE42782B1E@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: Changeset: r52225:3230ad30b32c Date: 2012-02-08 13:42 +0200 http://bitbucket.org/pypy/pypy/changeset/3230ad30b32c/ Log: work on release announcement diff --git a/pypy/doc/release-1.8.0.rst b/pypy/doc/release-1.8.0.rst --- a/pypy/doc/release-1.8.0.rst +++ b/pypy/doc/release-1.8.0.rst @@ -44,9 +44,19 @@ consult the `numpy status page`_. 
A tentative list of things that has been done: - xxxx # list it, multidim arrays in particular + * multi dimensional arrays -* Fundraising XXX + * various sizes of dtypes -.. _`numpy status page`: xxx -.. _`numpy status update blog report`: xxx + * a lot of ufuncs + + * a lot of other minor changes + +* Since the last release there was a significant breakthrough in PyPy's + fundraising. We now have enough funds to work on first stages of `numpypy`_ + and `py3k`_ + +.. _`numpy status page`: http://buildbot.pypy.org/numpy-status/latest.html +.. _`numpy status update blog report`: http://morepypy.blogspot.com/2012/01/numpypy-status-update.html +.. _`numpypy`: http://pypy.org/numpydonate.html +.. _`py3k`: http://pypy.org/py3donate.html From noreply at buildbot.pypy.org Wed Feb 8 12:45:05 2012 From: noreply at buildbot.pypy.org (fijal) Date: Wed, 8 Feb 2012 12:45:05 +0100 (CET) Subject: [pypy-commit] pypy default: export few boring constants Message-ID: <20120208114505.0A09E82B1E@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: Changeset: r52226:c772d9d6c77f Date: 2012-02-08 13:44 +0200 http://bitbucket.org/pypy/pypy/changeset/c772d9d6c77f/ Log: export few boring constants diff --git a/lib_pypy/numpypy/core/numeric.py b/lib_pypy/numpypy/core/numeric.py --- a/lib_pypy/numpypy/core/numeric.py +++ b/lib_pypy/numpypy/core/numeric.py @@ -1,5 +1,5 @@ -from _numpypy import array, ndarray, int_, float_ #, complex_# , longlong +from _numpypy import array, ndarray, int_, float_, bool_ #, complex_# , longlong from _numpypy import concatenate import sys import _numpypy as multiarray # ARGH @@ -309,3 +309,8 @@ set_string_function(array_repr, 1) little_endian = (sys.byteorder == 'little') + +Inf = inf = infty = Infinity = PINF = float('inf') +nan = NaN = NAN = float('nan') +False_ = bool_(False) +True_ = bool_(True) From noreply at buildbot.pypy.org Wed Feb 8 13:51:23 2012 From: noreply at buildbot.pypy.org (arigo) Date: Wed, 8 Feb 2012 13:51:23 +0100 (CET) Subject: [pypy-commit] pypy default: Add a module 'numpy' which raises an ImportError giving a detailed Message-ID: <20120208125123.37D0382B1E@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r52227:65001a8720b8 Date: 2012-02-08 13:50 +0100 http://bitbucket.org/pypy/pypy/changeset/65001a8720b8/ Log: Add a module 'numpy' which raises an ImportError giving a detailed explanation. Tweak 'numpypy' to replace 'numpy' when imported. diff --git a/lib_pypy/numpy.py b/lib_pypy/numpy.py new file mode 100644 --- /dev/null +++ b/lib_pypy/numpy.py @@ -0,0 +1,5 @@ +raise ImportError( + "The 'numpy' module of PyPy is in-development and not complete. 
" + "To try it out anyway, you can either import from 'numpypy', " + "or just write 'import numpypy' first in your program and then " + "import from 'numpy' as usual.") diff --git a/lib_pypy/numpypy/__init__.py b/lib_pypy/numpypy/__init__.py --- a/lib_pypy/numpypy/__init__.py +++ b/lib_pypy/numpypy/__init__.py @@ -1,2 +1,5 @@ from _numpypy import * from .core import * + +import sys +sys.modules.setdefault('numpy', sys.modules['numpypy']) diff --git a/pypy/module/test_lib_pypy/numpypy/test_numpy.py b/pypy/module/test_lib_pypy/numpypy/test_numpy.py new file mode 100644 --- /dev/null +++ b/pypy/module/test_lib_pypy/numpypy/test_numpy.py @@ -0,0 +1,13 @@ +from pypy.conftest import gettestobjspace + +class AppTestNumpy: + def setup_class(cls): + cls.space = gettestobjspace(usemodules=['micronumpy']) + + def test_imports(self): + try: + import numpy # fails if 'numpypy' was not imported so far + except ImportError: + pass + import numpypy + import numpy # works after 'numpypy' has been imported From noreply at buildbot.pypy.org Wed Feb 8 13:51:24 2012 From: noreply at buildbot.pypy.org (arigo) Date: Wed, 8 Feb 2012 13:51:24 +0100 (CET) Subject: [pypy-commit] pypy release-1.8.x: hg merge default Message-ID: <20120208125124.6FB8582B1E@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: release-1.8.x Changeset: r52228:48ebdce33e1b Date: 2012-02-08 13:50 +0100 http://bitbucket.org/pypy/pypy/changeset/48ebdce33e1b/ Log: hg merge default diff --git a/lib_pypy/numpy.py b/lib_pypy/numpy.py new file mode 100644 --- /dev/null +++ b/lib_pypy/numpy.py @@ -0,0 +1,5 @@ +raise ImportError( + "The 'numpy' module of PyPy is in-development and not complete. " + "To try it out anyway, you can either import from 'numpypy', " + "or just write 'import numpypy' first in your program and then " + "import from 'numpy' as usual.") diff --git a/lib_pypy/numpypy/__init__.py b/lib_pypy/numpypy/__init__.py --- a/lib_pypy/numpypy/__init__.py +++ b/lib_pypy/numpypy/__init__.py @@ -1,2 +1,5 @@ from _numpypy import * from .core import * + +import sys +sys.modules.setdefault('numpy', sys.modules['numpypy']) diff --git a/lib_pypy/numpypy/core/numeric.py b/lib_pypy/numpypy/core/numeric.py --- a/lib_pypy/numpypy/core/numeric.py +++ b/lib_pypy/numpypy/core/numeric.py @@ -1,5 +1,5 @@ -from _numpypy import array, ndarray, int_, float_ #, complex_# , longlong +from _numpypy import array, ndarray, int_, float_, bool_ #, complex_# , longlong from _numpypy import concatenate import sys import _numpypy as multiarray # ARGH @@ -309,3 +309,8 @@ set_string_function(array_repr, 1) little_endian = (sys.byteorder == 'little') + +Inf = inf = infty = Infinity = PINF = float('inf') +nan = NaN = NAN = float('nan') +False_ = bool_(False) +True_ = bool_(True) diff --git a/pypy/doc/release-1.8.0.rst b/pypy/doc/release-1.8.0.rst --- a/pypy/doc/release-1.8.0.rst +++ b/pypy/doc/release-1.8.0.rst @@ -33,8 +33,8 @@ the JIT performance in places that use such lists. There are also special strategies for unicode and string lists. -* As usual, numerous performance improvements. There are too many examples - of python constructs that now should behave faster to list them. +* As usual, numerous performance improvements. There are many examples + of python constructs that now should behave faster; too many to list them. * Bugfixes and compatibility fixes with CPython. @@ -44,9 +44,19 @@ consult the `numpy status page`_. 
A tentative list of things that has been done: - xxxx # list it, multidim arrays in particular + * multi dimensional arrays -* Fundraising XXX + * various sizes of dtypes -.. _`numpy status page`: xxx -.. _`numpy status update blog report`: xxx + * a lot of ufuncs + + * a lot of other minor changes + +* Since the last release there was a significant breakthrough in PyPy's + fundraising. We now have enough funds to work on first stages of `numpypy`_ + and `py3k`_ + +.. _`numpy status page`: http://buildbot.pypy.org/numpy-status/latest.html +.. _`numpy status update blog report`: http://morepypy.blogspot.com/2012/01/numpypy-status-update.html +.. _`numpypy`: http://pypy.org/numpydonate.html +.. _`py3k`: http://pypy.org/py3donate.html diff --git a/pypy/module/test_lib_pypy/numpypy/test_numpy.py b/pypy/module/test_lib_pypy/numpypy/test_numpy.py new file mode 100644 --- /dev/null +++ b/pypy/module/test_lib_pypy/numpypy/test_numpy.py @@ -0,0 +1,13 @@ +from pypy.conftest import gettestobjspace + +class AppTestNumpy: + def setup_class(cls): + cls.space = gettestobjspace(usemodules=['micronumpy']) + + def test_imports(self): + try: + import numpy # fails if 'numpypy' was not imported so far + except ImportError: + pass + import numpypy + import numpy # works after 'numpypy' has been imported From noreply at buildbot.pypy.org Wed Feb 8 14:33:12 2012 From: noreply at buildbot.pypy.org (hager) Date: Wed, 8 Feb 2012 14:33:12 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend-rpythonization: remove unused code Message-ID: <20120208133312.1F9A582B1E@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend-rpythonization Changeset: r52229:8edc07deec57 Date: 2012-02-08 04:52 -0800 http://bitbucket.org/pypy/pypy/changeset/8edc07deec57/ Log: remove unused code diff --git a/pypy/jit/backend/ppc/ppcgen/util.py b/pypy/jit/backend/ppc/ppcgen/util.py deleted file mode 100644 --- a/pypy/jit/backend/ppc/ppcgen/util.py +++ /dev/null @@ -1,23 +0,0 @@ -from pypy.jit.codegen.ppc.ppcgen.ppc_assembler import MyPPCAssembler -from pypy.jit.codegen.ppc.ppcgen.func_builder import make_func - -from regname import * - -def access_at(): - a = MyPPCAssembler() - - a.lwzx(r3, r3, r4) - a.blr() - - return make_func(a, "i", "ii") - -access_at = access_at() - -def itoO(): - a = MyPPCAssembler() - - a.blr() - - return make_func(a, "O", "i") - -itoO = itoO() diff --git a/pypy/jit/backend/ppc/regalloc.py b/pypy/jit/backend/ppc/regalloc.py deleted file mode 100644 --- a/pypy/jit/backend/ppc/regalloc.py +++ /dev/null @@ -1,213 +0,0 @@ -from pypy.jit.codegen.ppc.instruction import \ - gprs, fprs, crfs, ctr, \ - NO_REGISTER, GP_REGISTER, FP_REGISTER, CR_FIELD, CT_REGISTER, \ - CMPInsn, Spill, Unspill, stack_slot, \ - rSCRATCH - -from pypy.jit.codegen.ppc.conftest import option - -DEBUG_PRINT = option.debug_print - -class RegisterAllocation: - def __init__(self, freeregs, initial_mapping, initial_spill_offset): - if DEBUG_PRINT: - print - print "RegisterAllocation __init__", initial_mapping.items() - - self.insns = [] # output list of instructions - - # registers with dead values - self.freeregs = {} - for regcls in freeregs: - self.freeregs[regcls] = freeregs[regcls][:] - - self.var2loc = {} # maps Vars to AllocationSlots - self.lru = [] # least-recently-used list of vars; first is oldest. 
- # contains all vars in registers, and no vars on stack - - self.spill_offset = initial_spill_offset # where to put next spilled - # value, relative to rFP, - # measured in bytes - self.free_stack_slots = [] # a free list for stack slots - - # go through the initial mapping and initialize the data structures - for var, loc in initial_mapping.iteritems(): - self.set(var, loc) - if loc.is_register: - if loc.alloc in self.freeregs[loc.regclass]: - self.freeregs[loc.regclass].remove(loc.alloc) - self.lru.append(var) - else: - assert loc.offset >= self.spill_offset - - self.labels_to_tell_spill_offset_to = [] - self.builders_to_tell_spill_offset_to = [] - - def set(self, var, loc): - assert var not in self.var2loc - self.var2loc[var] = loc - - def forget(self, var, loc): - assert self.var2loc[var] is loc - del self.var2loc[var] - - def loc_of(self, var): - return self.var2loc[var] - - def spill_slot(self): - """ Returns an unused stack location. """ - if self.free_stack_slots: - return self.free_stack_slots.pop() - else: - self.spill_offset -= 4 - return stack_slot(self.spill_offset) - - def spill(self, reg, argtospill): - if argtospill in self.lru: - self.lru.remove(argtospill) - self.forget(argtospill, reg) - spillslot = self.spill_slot() - if reg.regclass != GP_REGISTER: - self.insns.append(reg.move_to_gpr(0)) - reg = gprs[0] - self.insns.append(Spill(argtospill, reg, spillslot)) - self.set(argtospill, spillslot) - - def _allocate_reg(self, regclass, newarg): - - # check if there is a register available - freeregs = self.freeregs[regclass] - - if freeregs: - reg = freeregs.pop().make_loc() - self.set(newarg, reg) - if DEBUG_PRINT: - print "allocate_reg: Putting %r into fresh register %r" % (newarg, reg) - return reg - - # if not, find something to spill - for i in range(len(self.lru)): - argtospill = self.lru[i] - reg = self.loc_of(argtospill) - assert reg.is_register - if reg.regclass == regclass: - del self.lru[i] - break - else: - assert 0 - - # Move the value we are spilling onto the stack, both in the - # data structures and in the instructions: - - self.spill(reg, argtospill) - - if DEBUG_PRINT: - print "allocate_reg: Spilled %r from %r to %r." % (argtospill, reg, self.loc_of(argtospill)) - - # update data structures to put newarg into the register - reg = reg.alloc.make_loc() - self.set(newarg, reg) - if DEBUG_PRINT: - print "allocate_reg: Put %r in stolen reg %r." % (newarg, reg) - return reg - - def _promote(self, arg): - if arg in self.lru: - self.lru.remove(arg) - self.lru.append(arg) - - def allocate_for_insns(self, insns): - from pypy.jit.codegen.ppc.rgenop import Var - - insns2 = [] - - # make a pass through the instructions, loading constants into - # Vars where needed. 
- for insn in insns: - newargs = [] - for arg in insn.reg_args: - if not isinstance(arg, Var): - newarg = Var() - arg.load(insns2, newarg) - newargs.append(newarg) - else: - newargs.append(arg) - insn.reg_args[0:len(newargs)] = newargs - insns2.append(insn) - - # Walk through instructions in forward order - for insn in insns2: - - if DEBUG_PRINT: - print "Processing instruction" - print insn - print "LRU list was:", self.lru - print 'located at', [self.loc_of(a) for a in self.lru] - - # put things into the lru - for arg in insn.reg_args: - self._promote(arg) - if insn.result: - self._promote(insn.result) - if DEBUG_PRINT: - print "LRU list is now:", self.lru - print 'located at', [self.loc_of(a) for a in self.lru if a is not insn.result] - - # We need to allocate a register for each used - # argument that is not already in one - for i in range(len(insn.reg_args)): - arg = insn.reg_args[i] - argcls = insn.reg_arg_regclasses[i] - if DEBUG_PRINT: - print "Allocating register for", arg, "..." - argloc = self.loc_of(arg) - if DEBUG_PRINT: - print "currently in", argloc - - if not argloc.is_register: - # It has no register now because it has been spilled - self.forget(arg, argloc) - newargloc = self._allocate_reg(argcls, arg) - if DEBUG_PRINT: - print "unspilling to", newargloc - self.insns.append(Unspill(arg, newargloc, argloc)) - self.free_stack_slots.append(argloc) - elif argloc.regclass != argcls: - # it's in the wrong kind of register - # (this code is excessively confusing) - self.forget(arg, argloc) - self.freeregs[argloc.regclass].append(argloc.alloc) - if argloc.regclass != GP_REGISTER: - if argcls == GP_REGISTER: - gpr = self._allocate_reg(GP_REGISTER, arg).number - else: - gpr = rSCRATCH - self.insns.append( - argloc.move_to_gpr(gpr)) - else: - gpr = argloc.number - if argcls != GP_REGISTER: - newargloc = self._allocate_reg(argcls, arg) - self.insns.append( - newargloc.move_from_gpr(gpr)) - else: - if DEBUG_PRINT: - print "it was in ", argloc - pass - - # Need to allocate a register for the destination - assert not insn.result or insn.result not in self.var2loc - if insn.result_regclass != NO_REGISTER: - if DEBUG_PRINT: - print "Allocating register for result %r..." 
% (insn.result,) - resultreg = self._allocate_reg(insn.result_regclass, insn.result) - insn.allocate(self) - if DEBUG_PRINT: - print insn - print - self.insns.append(insn) - #print 'allocation done' - #for i in self.insns: - # print i - #print self.var2loc - return self.insns From noreply at buildbot.pypy.org Wed Feb 8 14:33:14 2012 From: noreply at buildbot.pypy.org (hager) Date: Wed, 8 Feb 2012 14:33:14 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend-rpythonization: remove ppcgen directory Message-ID: <20120208133314.EB78D82B1E@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend-rpythonization Changeset: r52230:ed5e44d50011 Date: 2012-02-08 05:32 -0800 http://bitbucket.org/pypy/pypy/changeset/ed5e44d50011/ Log: remove ppcgen directory diff --git a/pypy/jit/backend/ppc/ppcgen/_ppcgen.c b/pypy/jit/backend/ppc/_ppcgen.c rename from pypy/jit/backend/ppc/ppcgen/_ppcgen.c rename to pypy/jit/backend/ppc/_ppcgen.c diff --git a/pypy/jit/backend/ppc/ppcgen/arch.py b/pypy/jit/backend/ppc/arch.py rename from pypy/jit/backend/ppc/ppcgen/arch.py rename to pypy/jit/backend/ppc/arch.py --- a/pypy/jit/backend/ppc/ppcgen/arch.py +++ b/pypy/jit/backend/ppc/arch.py @@ -1,6 +1,6 @@ # Constants that depend on whether we are on 32-bit or 64-bit -from pypy.jit.backend.ppc.ppcgen.register import (NONVOLATILES, +from pypy.jit.backend.ppc.register import (NONVOLATILES, NONVOLATILES_FLOAT, MANAGED_REGS) diff --git a/pypy/jit/backend/ppc/ppcgen/asmfunc.py b/pypy/jit/backend/ppc/asmfunc.py rename from pypy/jit/backend/ppc/ppcgen/asmfunc.py rename to pypy/jit/backend/ppc/asmfunc.py --- a/pypy/jit/backend/ppc/ppcgen/asmfunc.py +++ b/pypy/jit/backend/ppc/asmfunc.py @@ -4,7 +4,7 @@ from pypy.jit.backend.ppc.codebuf import MachineCodeBlockWrapper from pypy.jit.backend.llsupport.asmmemmgr import AsmMemoryManager from pypy.rpython.lltypesystem import lltype, rffi -from pypy.jit.backend.ppc.ppcgen.arch import IS_PPC_32, IS_PPC_64, WORD +from pypy.jit.backend.ppc.arch import IS_PPC_32, IS_PPC_64, WORD from pypy.rlib.rarithmetic import r_uint _ppcgen = None diff --git a/pypy/jit/backend/ppc/ppcgen/assembler.py b/pypy/jit/backend/ppc/assembler.py rename from pypy/jit/backend/ppc/ppcgen/assembler.py rename to pypy/jit/backend/ppc/assembler.py --- a/pypy/jit/backend/ppc/ppcgen/assembler.py +++ b/pypy/jit/backend/ppc/assembler.py @@ -1,5 +1,5 @@ import os -from pypy.jit.backend.ppc.ppcgen import form +from pypy.jit.backend.ppc import form # don't be fooled by the fact that there's some separation between a # generic assembler class and a PPC assembler class... 
there's @@ -62,7 +62,7 @@ def assemble(self, dump=os.environ.has_key('PPY_DEBUG')): #insns = self.assemble0(dump) - from pypy.jit.backend.ppc.ppcgen import asmfunc + from pypy.jit.backend.ppc import asmfunc c = asmfunc.AsmCode(len(self.insts)*4) for i in self.insts: c.emit(i)#.assemble()) diff --git a/pypy/jit/backend/ppc/ppcgen/codebuilder.py b/pypy/jit/backend/ppc/codebuilder.py rename from pypy/jit/backend/ppc/ppcgen/codebuilder.py rename to pypy/jit/backend/ppc/codebuilder.py --- a/pypy/jit/backend/ppc/ppcgen/codebuilder.py +++ b/pypy/jit/backend/ppc/codebuilder.py @@ -1,16 +1,16 @@ import os import struct -from pypy.jit.backend.ppc.ppcgen.ppc_form import PPCForm as Form -from pypy.jit.backend.ppc.ppcgen.ppc_field import ppc_fields -from pypy.jit.backend.ppc.ppcgen.regalloc import (TempInt, PPCFrameManager, +from pypy.jit.backend.ppc.ppc_form import PPCForm as Form +from pypy.jit.backend.ppc.ppc_field import ppc_fields +from pypy.jit.backend.ppc.regalloc import (TempInt, PPCFrameManager, Regalloc) -from pypy.jit.backend.ppc.ppcgen.assembler import Assembler -from pypy.jit.backend.ppc.ppcgen.symbol_lookup import lookup -from pypy.jit.backend.ppc.ppcgen.arch import (IS_PPC_32, WORD, NONVOLATILES, +from pypy.jit.backend.ppc.assembler import Assembler +from pypy.jit.backend.ppc.symbol_lookup import lookup +from pypy.jit.backend.ppc.arch import (IS_PPC_32, WORD, NONVOLATILES, GPR_SAVE_AREA, IS_PPC_64) -from pypy.jit.backend.ppc.ppcgen.helper.assembler import gen_emit_cmp_op -import pypy.jit.backend.ppc.ppcgen.register as r -import pypy.jit.backend.ppc.ppcgen.condition as c +from pypy.jit.backend.ppc.helper.assembler import gen_emit_cmp_op +import pypy.jit.backend.ppc.register as r +import pypy.jit.backend.ppc.condition as c from pypy.jit.metainterp.history import (Const, ConstPtr, JitCellToken, TargetToken, AbstractFailDescr) from pypy.jit.backend.llsupport.asmmemmgr import (BlockBuilderMixin, AsmMemoryManager, MachineDataBlockWrapper) @@ -26,7 +26,7 @@ from pypy.rlib.objectmodel import we_are_translated from pypy.translator.tool.cbuild import ExternalCompilationInfo -from pypy.jit.backend.ppc.ppcgen.rassemblermaker import make_rassembler +from pypy.jit.backend.ppc.rassemblermaker import make_rassembler A = Form("frD", "frA", "frB", "XO3", "Rc") A1 = Form("frD", "frB", "XO3", "Rc") diff --git a/pypy/jit/backend/ppc/ppcgen/condition.py b/pypy/jit/backend/ppc/condition.py rename from pypy/jit/backend/ppc/ppcgen/condition.py rename to pypy/jit/backend/ppc/condition.py diff --git a/pypy/jit/backend/ppc/ppcgen/field.py b/pypy/jit/backend/ppc/field.py rename from pypy/jit/backend/ppc/ppcgen/field.py rename to pypy/jit/backend/ppc/field.py diff --git a/pypy/jit/backend/ppc/ppcgen/form.py b/pypy/jit/backend/ppc/form.py rename from pypy/jit/backend/ppc/ppcgen/form.py rename to pypy/jit/backend/ppc/form.py diff --git a/pypy/jit/backend/ppc/ppcgen/func_builder.py b/pypy/jit/backend/ppc/func_builder.py rename from pypy/jit/backend/ppc/ppcgen/func_builder.py rename to pypy/jit/backend/ppc/func_builder.py --- a/pypy/jit/backend/ppc/ppcgen/func_builder.py +++ b/pypy/jit/backend/ppc/func_builder.py @@ -1,6 +1,6 @@ -from pypy.jit.backend.ppc.ppcgen.ppc_assembler import PPCAssembler -from pypy.jit.backend.ppc.ppcgen.symbol_lookup import lookup -from pypy.jit.backend.ppc.ppcgen.regname import * +from pypy.jit.backend.ppc.ppc_assembler import PPCAssembler +from pypy.jit.backend.ppc.symbol_lookup import lookup +from pypy.jit.backend.ppc.regname import * def load_arg(code, argi, typecode): rD = r3+argi diff 
--git a/pypy/jit/backend/ppc/helper/__init__.py b/pypy/jit/backend/ppc/helper/__init__.py new file mode 100644 diff --git a/pypy/jit/backend/ppc/ppcgen/helper/assembler.py b/pypy/jit/backend/ppc/helper/assembler.py rename from pypy/jit/backend/ppc/ppcgen/helper/assembler.py rename to pypy/jit/backend/ppc/helper/assembler.py --- a/pypy/jit/backend/ppc/ppcgen/helper/assembler.py +++ b/pypy/jit/backend/ppc/helper/assembler.py @@ -1,10 +1,10 @@ -import pypy.jit.backend.ppc.ppcgen.condition as c +import pypy.jit.backend.ppc.condition as c from pypy.rlib.rarithmetic import r_uint, r_longlong, intmask -from pypy.jit.backend.ppc.ppcgen.arch import (MAX_REG_PARAMS, IS_PPC_32, WORD, +from pypy.jit.backend.ppc.arch import (MAX_REG_PARAMS, IS_PPC_32, WORD, BACKCHAIN_SIZE) from pypy.jit.metainterp.history import FLOAT from pypy.rlib.unroll import unrolling_iterable -import pypy.jit.backend.ppc.ppcgen.register as r +import pypy.jit.backend.ppc.register as r from pypy.rpython.lltypesystem import rffi, lltype def gen_emit_cmp_op(condition, signed=True): diff --git a/pypy/jit/backend/ppc/ppcgen/helper/regalloc.py b/pypy/jit/backend/ppc/helper/regalloc.py rename from pypy/jit/backend/ppc/ppcgen/helper/regalloc.py rename to pypy/jit/backend/ppc/helper/regalloc.py diff --git a/pypy/jit/backend/ppc/ppcgen/jump.py b/pypy/jit/backend/ppc/jump.py rename from pypy/jit/backend/ppc/ppcgen/jump.py rename to pypy/jit/backend/ppc/jump.py --- a/pypy/jit/backend/ppc/ppcgen/jump.py +++ b/pypy/jit/backend/ppc/jump.py @@ -76,7 +76,7 @@ src_locations2, dst_locations2, tmpreg2): # find and push the xmm stack locations from src_locations2 that # are going to be overwritten by dst_locations1 - from pypy.jit.backend.ppc.ppcgen.arch import WORD + from pypy.jit.backend.ppc.arch import WORD extrapushes = [] dst_keys = {} for loc in dst_locations1: diff --git a/pypy/jit/backend/ppc/ppcgen/locations.py b/pypy/jit/backend/ppc/locations.py rename from pypy/jit/backend/ppc/ppcgen/locations.py rename to pypy/jit/backend/ppc/locations.py diff --git a/pypy/jit/backend/ppc/ppcgen/opassembler.py b/pypy/jit/backend/ppc/opassembler.py rename from pypy/jit/backend/ppc/ppcgen/opassembler.py rename to pypy/jit/backend/ppc/opassembler.py --- a/pypy/jit/backend/ppc/ppcgen/opassembler.py +++ b/pypy/jit/backend/ppc/opassembler.py @@ -1,19 +1,19 @@ -from pypy.jit.backend.ppc.ppcgen.helper.assembler import (gen_emit_cmp_op, +from pypy.jit.backend.ppc.helper.assembler import (gen_emit_cmp_op, gen_emit_unary_cmp_op) -import pypy.jit.backend.ppc.ppcgen.condition as c -import pypy.jit.backend.ppc.ppcgen.register as r -from pypy.jit.backend.ppc.ppcgen.arch import (IS_PPC_32, WORD, +import pypy.jit.backend.ppc.condition as c +import pypy.jit.backend.ppc.register as r +from pypy.jit.backend.ppc.arch import (IS_PPC_32, WORD, GPR_SAVE_AREA, BACKCHAIN_SIZE, MAX_REG_PARAMS) from pypy.jit.metainterp.history import (JitCellToken, TargetToken, Box, AbstractFailDescr, FLOAT, INT, REF) from pypy.rlib.objectmodel import we_are_translated -from pypy.jit.backend.ppc.ppcgen.helper.assembler import (count_reg_args, +from pypy.jit.backend.ppc.helper.assembler import (count_reg_args, Saved_Volatiles) -from pypy.jit.backend.ppc.ppcgen.jump import remap_frame_layout -from pypy.jit.backend.ppc.ppcgen.codebuilder import OverwritingBuilder -from pypy.jit.backend.ppc.ppcgen.regalloc import TempPtr, TempInt +from pypy.jit.backend.ppc.jump import remap_frame_layout +from pypy.jit.backend.ppc.codebuilder import OverwritingBuilder +from pypy.jit.backend.ppc.regalloc import TempPtr, 
TempInt from pypy.jit.backend.llsupport import symbolic from pypy.rpython.lltypesystem import rstr, rffi, lltype from pypy.jit.metainterp.resoperation import rop diff --git a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py b/pypy/jit/backend/ppc/ppc_assembler.py rename from pypy/jit/backend/ppc/ppcgen/ppc_assembler.py rename to pypy/jit/backend/ppc/ppc_assembler.py --- a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py +++ b/pypy/jit/backend/ppc/ppc_assembler.py @@ -1,27 +1,27 @@ import os import struct -from pypy.jit.backend.ppc.ppcgen.ppc_form import PPCForm as Form -from pypy.jit.backend.ppc.ppcgen.ppc_field import ppc_fields -from pypy.jit.backend.ppc.ppcgen.regalloc import (TempInt, PPCFrameManager, +from pypy.jit.backend.ppc.ppc_form import PPCForm as Form +from pypy.jit.backend.ppc.ppc_field import ppc_fields +from pypy.jit.backend.ppc.regalloc import (TempInt, PPCFrameManager, Regalloc) -from pypy.jit.backend.ppc.ppcgen.assembler import Assembler -from pypy.jit.backend.ppc.ppcgen.opassembler import OpAssembler -from pypy.jit.backend.ppc.ppcgen.symbol_lookup import lookup -from pypy.jit.backend.ppc.ppcgen.codebuilder import PPCBuilder -from pypy.jit.backend.ppc.ppcgen.jump import remap_frame_layout -from pypy.jit.backend.ppc.ppcgen.arch import (IS_PPC_32, IS_PPC_64, WORD, +from pypy.jit.backend.ppc.assembler import Assembler +from pypy.jit.backend.ppc.opassembler import OpAssembler +from pypy.jit.backend.ppc.symbol_lookup import lookup +from pypy.jit.backend.ppc.codebuilder import PPCBuilder +from pypy.jit.backend.ppc.jump import remap_frame_layout +from pypy.jit.backend.ppc.arch import (IS_PPC_32, IS_PPC_64, WORD, NONVOLATILES, MAX_REG_PARAMS, GPR_SAVE_AREA, BACKCHAIN_SIZE, FPR_SAVE_AREA, FLOAT_INT_CONVERSION, FORCE_INDEX, SIZE_LOAD_IMM_PATCH_SP) -from pypy.jit.backend.ppc.ppcgen.helper.assembler import (gen_emit_cmp_op, +from pypy.jit.backend.ppc.helper.assembler import (gen_emit_cmp_op, encode32, encode64, decode32, decode64, count_reg_args, Saved_Volatiles) -import pypy.jit.backend.ppc.ppcgen.register as r -import pypy.jit.backend.ppc.ppcgen.condition as c +import pypy.jit.backend.ppc.register as r +import pypy.jit.backend.ppc.condition as c from pypy.jit.metainterp.history import (Const, ConstPtr, JitCellToken, TargetToken, AbstractFailDescr) from pypy.jit.backend.llsupport.asmmemmgr import (BlockBuilderMixin, @@ -40,7 +40,7 @@ from pypy.rpython.annlowlevel import llhelper from pypy.rlib.objectmodel import we_are_translated from pypy.rpython.lltypesystem.lloperation import llop -from pypy.jit.backend.ppc.ppcgen.locations import StackLocation, get_spp_offset +from pypy.jit.backend.ppc.locations import StackLocation, get_spp_offset memcpy_fn = rffi.llexternal('memcpy', [llmemory.Address, llmemory.Address, rffi.SIZE_T], lltype.Void, diff --git a/pypy/jit/backend/ppc/ppcgen/ppc_field.py b/pypy/jit/backend/ppc/ppc_field.py rename from pypy/jit/backend/ppc/ppcgen/ppc_field.py rename to pypy/jit/backend/ppc/ppc_field.py --- a/pypy/jit/backend/ppc/ppcgen/ppc_field.py +++ b/pypy/jit/backend/ppc/ppc_field.py @@ -1,5 +1,5 @@ -from pypy.jit.backend.ppc.ppcgen.field import Field -from pypy.jit.backend.ppc.ppcgen import regname +from pypy.jit.backend.ppc.field import Field +from pypy.jit.backend.ppc import regname fields = { # bit margins are *inclusive*! 
(and bit 0 is # most-significant, 31 least significant) diff --git a/pypy/jit/backend/ppc/ppcgen/ppc_form.py b/pypy/jit/backend/ppc/ppc_form.py rename from pypy/jit/backend/ppc/ppcgen/ppc_form.py rename to pypy/jit/backend/ppc/ppc_form.py --- a/pypy/jit/backend/ppc/ppcgen/ppc_form.py +++ b/pypy/jit/backend/ppc/ppc_form.py @@ -1,5 +1,5 @@ -from pypy.jit.backend.ppc.ppcgen.form import Form -from pypy.jit.backend.ppc.ppcgen.ppc_field import ppc_fields +from pypy.jit.backend.ppc.form import Form +from pypy.jit.backend.ppc.ppc_field import ppc_fields class PPCForm(Form): fieldmap = ppc_fields diff --git a/pypy/jit/backend/ppc/ppcgen/__init__.py b/pypy/jit/backend/ppc/ppcgen/__init__.py deleted file mode 100644 diff --git a/pypy/jit/backend/ppc/ppcgen/helper/__init__.py b/pypy/jit/backend/ppc/ppcgen/helper/__init__.py deleted file mode 100644 diff --git a/pypy/jit/backend/ppc/ppcgen/test/__init__.py b/pypy/jit/backend/ppc/ppcgen/test/__init__.py deleted file mode 100644 diff --git a/pypy/jit/backend/ppc/ppcgen/rassemblermaker.py b/pypy/jit/backend/ppc/rassemblermaker.py rename from pypy/jit/backend/ppc/ppcgen/rassemblermaker.py rename to pypy/jit/backend/ppc/rassemblermaker.py --- a/pypy/jit/backend/ppc/ppcgen/rassemblermaker.py +++ b/pypy/jit/backend/ppc/rassemblermaker.py @@ -1,7 +1,7 @@ from pypy.tool.sourcetools import compile2 from pypy.rlib.rarithmetic import r_uint -from pypy.jit.backend.ppc.ppcgen.form import IDesc, IDupDesc -from pypy.jit.backend.ppc.ppcgen.ppc_field import IField +from pypy.jit.backend.ppc.form import IDesc, IDupDesc +from pypy.jit.backend.ppc.ppc_field import IField ## "opcode": ( 0, 5), ## "rA": (11, 15, 'unsigned', regname._R), diff --git a/pypy/jit/backend/ppc/ppcgen/regalloc.py b/pypy/jit/backend/ppc/regalloc.py rename from pypy/jit/backend/ppc/ppcgen/regalloc.py rename to pypy/jit/backend/ppc/regalloc.py --- a/pypy/jit/backend/ppc/ppcgen/regalloc.py +++ b/pypy/jit/backend/ppc/regalloc.py @@ -1,10 +1,10 @@ from pypy.jit.backend.llsupport.regalloc import (RegisterManager, FrameManager, TempBox, compute_vars_longevity) -from pypy.jit.backend.ppc.ppcgen.arch import (WORD, MY_COPY_OF_REGS) -from pypy.jit.backend.ppc.ppcgen.jump import (remap_frame_layout_mixed, +from pypy.jit.backend.ppc.arch import (WORD, MY_COPY_OF_REGS) +from pypy.jit.backend.ppc.jump import (remap_frame_layout_mixed, remap_frame_layout) -from pypy.jit.backend.ppc.ppcgen.locations import imm -from pypy.jit.backend.ppc.ppcgen.helper.regalloc import (_check_imm_arg, +from pypy.jit.backend.ppc.locations import imm +from pypy.jit.backend.ppc.helper.regalloc import (_check_imm_arg, check_imm_box, prepare_cmp_op, prepare_unary_int_op, @@ -15,12 +15,12 @@ ConstPtr, Box) from pypy.jit.metainterp.history import JitCellToken, TargetToken from pypy.jit.metainterp.resoperation import rop -from pypy.jit.backend.ppc.ppcgen import locations +from pypy.jit.backend.ppc import locations from pypy.rpython.lltypesystem import rffi, lltype, rstr from pypy.jit.backend.llsupport import symbolic from pypy.jit.backend.llsupport.descr import ArrayDescr from pypy.jit.codewriter.effectinfo import EffectInfo -import pypy.jit.backend.ppc.ppcgen.register as r +import pypy.jit.backend.ppc.register as r from pypy.jit.codewriter import heaptracker from pypy.jit.backend.llsupport.descr import unpack_arraydescr from pypy.jit.backend.llsupport.descr import unpack_fielddescr diff --git a/pypy/jit/backend/ppc/ppcgen/register.py b/pypy/jit/backend/ppc/register.py rename from pypy/jit/backend/ppc/ppcgen/register.py rename to 
pypy/jit/backend/ppc/register.py --- a/pypy/jit/backend/ppc/ppcgen/register.py +++ b/pypy/jit/backend/ppc/register.py @@ -1,4 +1,4 @@ -from pypy.jit.backend.ppc.ppcgen.locations import (RegisterLocation, +from pypy.jit.backend.ppc.locations import (RegisterLocation, FPRegisterLocation) ALL_REGS = [RegisterLocation(i) for i in range(32)] diff --git a/pypy/jit/backend/ppc/ppcgen/regname.py b/pypy/jit/backend/ppc/regname.py rename from pypy/jit/backend/ppc/ppcgen/regname.py rename to pypy/jit/backend/ppc/regname.py diff --git a/pypy/jit/backend/ppc/runner.py b/pypy/jit/backend/ppc/runner.py --- a/pypy/jit/backend/ppc/runner.py +++ b/pypy/jit/backend/ppc/runner.py @@ -6,16 +6,16 @@ from pypy.jit.metainterp import history, compile from pypy.jit.metainterp.history import BoxPtr from pypy.jit.backend.x86.assembler import Assembler386 -from pypy.jit.backend.ppc.ppcgen.arch import FORCE_INDEX_OFS +from pypy.jit.backend.ppc.arch import FORCE_INDEX_OFS from pypy.jit.backend.x86.profagent import ProfileAgent from pypy.jit.backend.llsupport.llmodel import AbstractLLCPU from pypy.jit.backend.x86 import regloc from pypy.jit.backend.x86.support import values_array -from pypy.jit.backend.ppc.ppcgen.ppc_assembler import AssemblerPPC -from pypy.jit.backend.ppc.ppcgen.arch import NONVOLATILES, GPR_SAVE_AREA, WORD -from pypy.jit.backend.ppc.ppcgen.regalloc import PPCRegisterManager, PPCFrameManager -from pypy.jit.backend.ppc.ppcgen.codebuilder import PPCBuilder -from pypy.jit.backend.ppc.ppcgen import register as r +from pypy.jit.backend.ppc.ppc_assembler import AssemblerPPC +from pypy.jit.backend.ppc.arch import NONVOLATILES, GPR_SAVE_AREA, WORD +from pypy.jit.backend.ppc.regalloc import PPCRegisterManager, PPCFrameManager +from pypy.jit.backend.ppc.codebuilder import PPCBuilder +from pypy.jit.backend.ppc import register as r import sys from pypy.tool.ansi_print import ansi_log diff --git a/pypy/jit/backend/ppc/ppcgen/symbol_lookup.py b/pypy/jit/backend/ppc/symbol_lookup.py rename from pypy/jit/backend/ppc/ppcgen/symbol_lookup.py rename to pypy/jit/backend/ppc/symbol_lookup.py diff --git a/pypy/jit/backend/ppc/ppcgen/test/autopath.py b/pypy/jit/backend/ppc/test/autopath.py rename from pypy/jit/backend/ppc/ppcgen/test/autopath.py rename to pypy/jit/backend/ppc/test/autopath.py diff --git a/pypy/jit/backend/ppc/ppcgen/test/test_call_assembler.py b/pypy/jit/backend/ppc/test/test_call_assembler.py rename from pypy/jit/backend/ppc/ppcgen/test/test_call_assembler.py rename to pypy/jit/backend/ppc/test/test_call_assembler.py diff --git a/pypy/jit/backend/ppc/ppcgen/test/test_field.py b/pypy/jit/backend/ppc/test/test_field.py rename from pypy/jit/backend/ppc/ppcgen/test/test_field.py rename to pypy/jit/backend/ppc/test/test_field.py --- a/pypy/jit/backend/ppc/ppcgen/test/test_field.py +++ b/pypy/jit/backend/ppc/test/test_field.py @@ -1,6 +1,6 @@ import autopath -from pypy.jit.backend.ppc.ppcgen.field import Field +from pypy.jit.backend.ppc.field import Field from py.test import raises import random diff --git a/pypy/jit/backend/ppc/ppcgen/test/test_form.py b/pypy/jit/backend/ppc/test/test_form.py rename from pypy/jit/backend/ppc/ppcgen/test/test_form.py rename to pypy/jit/backend/ppc/test/test_form.py diff --git a/pypy/jit/backend/ppc/ppcgen/test/test_func_builder.py b/pypy/jit/backend/ppc/test/test_func_builder.py rename from pypy/jit/backend/ppc/ppcgen/test/test_func_builder.py rename to pypy/jit/backend/ppc/test/test_func_builder.py --- a/pypy/jit/backend/ppc/ppcgen/test/test_func_builder.py +++ 
b/pypy/jit/backend/ppc/test/test_func_builder.py @@ -1,11 +1,11 @@ import py import random, sys, os -from pypy.jit.backend.ppc.ppcgen.ppc_assembler import MyPPCAssembler -from pypy.jit.backend.ppc.ppcgen.symbol_lookup import lookup -from pypy.jit.backend.ppc.ppcgen.func_builder import make_func -from pypy.jit.backend.ppc.ppcgen import form, func_builder -from pypy.jit.backend.ppc.ppcgen.regname import * +from pypy.jit.backend.ppc.ppc_assembler import MyPPCAssembler +from pypy.jit.backend.ppc.symbol_lookup import lookup +from pypy.jit.backend.ppc.func_builder import make_func +from pypy.jit.backend.ppc import form, func_builder +from pypy.jit.backend.ppc.regname import * from pypy.jit.backend.detect_cpu import autodetect_main_model class TestFuncBuilderTest(object): @@ -78,7 +78,7 @@ f = make_func(a, "O", "O") assert f(1) == 1 b = MyPPCAssembler() - from pypy.jit.backend.ppc.ppcgen import util + from pypy.jit.backend.ppc import util # eurgh!: b.load_word(r0, util.access_at(id(f.code), 8) + f.FAST_ENTRY_LABEL) b.mtctr(r0) diff --git a/pypy/jit/backend/ppc/ppcgen/test/test_helper.py b/pypy/jit/backend/ppc/test/test_helper.py rename from pypy/jit/backend/ppc/ppcgen/test/test_helper.py rename to pypy/jit/backend/ppc/test/test_helper.py --- a/pypy/jit/backend/ppc/ppcgen/test/test_helper.py +++ b/pypy/jit/backend/ppc/test/test_helper.py @@ -1,4 +1,4 @@ -from pypy.jit.backend.ppc.ppcgen.helper.assembler import (encode32, decode32) +from pypy.jit.backend.ppc.helper.assembler import (encode32, decode32) #encode64, decode64) def test_encode32(): diff --git a/pypy/jit/backend/ppc/ppcgen/test/test_ppc.py b/pypy/jit/backend/ppc/test/test_ppc.py rename from pypy/jit/backend/ppc/ppcgen/test/test_ppc.py rename to pypy/jit/backend/ppc/test/test_ppc.py --- a/pypy/jit/backend/ppc/ppcgen/test/test_ppc.py +++ b/pypy/jit/backend/ppc/test/test_ppc.py @@ -1,13 +1,13 @@ import py import random, sys, os -from pypy.jit.backend.ppc.ppcgen.codebuilder import BasicPPCAssembler, PPCBuilder -from pypy.jit.backend.ppc.ppcgen.symbol_lookup import lookup -from pypy.jit.backend.ppc.ppcgen.regname import * -from pypy.jit.backend.ppc.ppcgen.register import * -from pypy.jit.backend.ppc.ppcgen import form, pystructs +from pypy.jit.backend.ppc.codebuilder import BasicPPCAssembler, PPCBuilder +from pypy.jit.backend.ppc.symbol_lookup import lookup +from pypy.jit.backend.ppc.regname import * +from pypy.jit.backend.ppc.register import * +from pypy.jit.backend.ppc import form from pypy.jit.backend.detect_cpu import autodetect_main_model -from pypy.jit.backend.ppc.ppcgen.arch import IS_PPC_32, IS_PPC_64, WORD +from pypy.jit.backend.ppc.arch import IS_PPC_32, IS_PPC_64, WORD from pypy.rpython.lltypesystem import lltype, rffi from pypy.rpython.annlowlevel import llhelper @@ -59,6 +59,7 @@ def setup_class(cls): if autodetect_main_model() not in ["ppc", "ppc64"]: py.test.skip("can't test all of ppcgen on non-PPC!") + py.test.xfail("assemble does not return a function any longer, fix tests") """ Tests are build like this: diff --git a/pypy/jit/backend/ppc/ppcgen/test/test_rassemblermaker.py b/pypy/jit/backend/ppc/test/test_rassemblermaker.py rename from pypy/jit/backend/ppc/ppcgen/test/test_rassemblermaker.py rename to pypy/jit/backend/ppc/test/test_rassemblermaker.py diff --git a/pypy/jit/backend/ppc/ppcgen/test/test_regalloc.py b/pypy/jit/backend/ppc/test/test_regalloc.py rename from pypy/jit/backend/ppc/ppcgen/test/test_regalloc.py rename to pypy/jit/backend/ppc/test/test_regalloc.py diff --git 
a/pypy/jit/backend/ppc/ppcgen/test/test_register_manager.py b/pypy/jit/backend/ppc/test/test_register_manager.py rename from pypy/jit/backend/ppc/ppcgen/test/test_register_manager.py rename to pypy/jit/backend/ppc/test/test_register_manager.py --- a/pypy/jit/backend/ppc/ppcgen/test/test_register_manager.py +++ b/pypy/jit/backend/ppc/test/test_register_manager.py @@ -1,4 +1,4 @@ -from pypy.jit.backend.ppc.ppcgen import regalloc, register +from pypy.jit.backend.ppc import regalloc, register class TestPPCRegisterManager(object): def test_allocate_scratch_register(self): diff --git a/pypy/jit/backend/ppc/ppcgen/test/test_stackframe.py b/pypy/jit/backend/ppc/test/test_stackframe.py rename from pypy/jit/backend/ppc/ppcgen/test/test_stackframe.py rename to pypy/jit/backend/ppc/test/test_stackframe.py From noreply at buildbot.pypy.org Wed Feb 8 14:51:51 2012 From: noreply at buildbot.pypy.org (hager) Date: Wed, 8 Feb 2012 14:51:51 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend-rpythonization: merge Message-ID: <20120208135151.4937882B1E@wyvern.cs.uni-duesseldorf.de> Author: Sven Hager Branch: ppc-jit-backend-rpythonization Changeset: r52231:9da8efc953d9 Date: 2012-02-08 14:51 +0100 http://bitbucket.org/pypy/pypy/changeset/9da8efc953d9/ Log: merge diff --git a/pypy/jit/backend/ppc/arch.py b/pypy/jit/backend/ppc/arch.py --- a/pypy/jit/backend/ppc/arch.py +++ b/pypy/jit/backend/ppc/arch.py @@ -1,8 +1,8 @@ # Constants that depend on whether we are on 32-bit or 64-bit from pypy.jit.backend.ppc.register import (NONVOLATILES, - NONVOLATILES_FLOAT, - MANAGED_REGS) + NONVOLATILES_FLOAT, + MANAGED_REGS) import sys if sys.maxint == (2**31 - 1): diff --git a/pypy/jit/backend/ppc/locations.py b/pypy/jit/backend/ppc/locations.py --- a/pypy/jit/backend/ppc/locations.py +++ b/pypy/jit/backend/ppc/locations.py @@ -110,7 +110,4 @@ return ImmLocation(val) def get_spp_offset(pos): - if pos < 0: - return -pos * WORD - else: - return -(pos + 1) * WORD + return -(pos + 1) * WORD diff --git a/pypy/jit/backend/ppc/ppc_assembler.py b/pypy/jit/backend/ppc/ppc_assembler.py --- a/pypy/jit/backend/ppc/ppc_assembler.py +++ b/pypy/jit/backend/ppc/ppc_assembler.py @@ -16,9 +16,9 @@ FLOAT_INT_CONVERSION, FORCE_INDEX, SIZE_LOAD_IMM_PATCH_SP) from pypy.jit.backend.ppc.helper.assembler import (gen_emit_cmp_op, - encode32, encode64, - decode32, decode64, - count_reg_args, + encode32, encode64, + decode32, decode64, + count_reg_args, Saved_Volatiles) import pypy.jit.backend.ppc.register as r import pypy.jit.backend.ppc.condition as c diff --git a/pypy/jit/backend/ppc/register.py b/pypy/jit/backend/ppc/register.py --- a/pypy/jit/backend/ppc/register.py +++ b/pypy/jit/backend/ppc/register.py @@ -1,5 +1,5 @@ from pypy.jit.backend.ppc.locations import (RegisterLocation, - FPRegisterLocation) + FPRegisterLocation) ALL_REGS = [RegisterLocation(i) for i in range(32)] ALL_FLOAT_REGS = [FPRegisterLocation(i) for i in range(32)] diff --git a/pypy/jit/backend/ppc/runner.py b/pypy/jit/backend/ppc/runner.py --- a/pypy/jit/backend/ppc/runner.py +++ b/pypy/jit/backend/ppc/runner.py @@ -44,7 +44,7 @@ def setup_once(self): self.asm.setup_once() - def compile_loop(self, inputargs, operations, looptoken, log=False, name=""): + def compile_loop(self, inputargs, operations, looptoken, log=True, name=""): self.asm.assemble_loop(inputargs, operations, looptoken, log) def compile_bridge(self, faildescr, inputargs, operations, diff --git a/pypy/jit/backend/ppc/test/support.py b/pypy/jit/backend/ppc/test/support.py new file mode 100644 --- /dev/null 
+++ b/pypy/jit/backend/ppc/test/support.py @@ -0,0 +1,9 @@ +from pypy.jit.backend.detect_cpu import getcpuclass +from pypy.jit.metainterp.test import support + +class JitPPCMixin(support.LLJitMixin): + type_system = 'lltype' + CPUClass = getcpuclass() + + def check_jumps(self, maxcount): + pass diff --git a/pypy/jit/backend/ppc/test/test_form.py b/pypy/jit/backend/ppc/test/test_form.py --- a/pypy/jit/backend/ppc/test/test_form.py +++ b/pypy/jit/backend/ppc/test/test_form.py @@ -1,11 +1,11 @@ import autopath -from pypy.jit.backend.ppc.ppcgen.ppc_assembler import b +from pypy.jit.backend.ppc.ppc_assembler import b import random import sys -from pypy.jit.backend.ppc.ppcgen.form import Form, FormException -from pypy.jit.backend.ppc.ppcgen.field import Field -from pypy.jit.backend.ppc.ppcgen.assembler import Assembler +from pypy.jit.backend.ppc.form import Form, FormException +from pypy.jit.backend.ppc.field import Field +from pypy.jit.backend.ppc.assembler import Assembler # 0 31 # +-------------------------------+ diff --git a/pypy/jit/backend/ppc/test/test_list.py b/pypy/jit/backend/ppc/test/test_list.py new file mode 100644 --- /dev/null +++ b/pypy/jit/backend/ppc/test/test_list.py @@ -0,0 +1,8 @@ + +from pypy.jit.metainterp.test.test_list import ListTests +from pypy.jit.backend.ppc.test.support import JitPPCMixin + +class TestList(JitPPCMixin, ListTests): + # for individual tests see + # ====> ../../../metainterp/test/test_list.py + pass diff --git a/pypy/jit/backend/ppc/test/test_rassemblermaker.py b/pypy/jit/backend/ppc/test/test_rassemblermaker.py --- a/pypy/jit/backend/ppc/test/test_rassemblermaker.py +++ b/pypy/jit/backend/ppc/test/test_rassemblermaker.py @@ -1,5 +1,5 @@ -from pypy.jit.backend.ppc.ppcgen.rassemblermaker import make_rassembler -from pypy.jit.backend.ppc.ppcgen.ppc_assembler import PPCAssembler +from pypy.jit.backend.ppc.rassemblermaker import make_rassembler +from pypy.jit.backend.ppc.ppc_assembler import PPCAssembler RPPCAssembler = make_rassembler(PPCAssembler) diff --git a/pypy/jit/backend/ppc/test/test_regalloc.py b/pypy/jit/backend/ppc/test/test_regalloc.py --- a/pypy/jit/backend/ppc/test/test_regalloc.py +++ b/pypy/jit/backend/ppc/test/test_regalloc.py @@ -1,10 +1,11 @@ from pypy.rlib.objectmodel import instantiate -from pypy.jit.backend.ppc.ppcgen.locations import (imm, RegisterLocation, - ImmLocation, StackLocation) -from pypy.jit.backend.ppc.ppcgen.register import * -from pypy.jit.backend.ppc.ppcgen.codebuilder import hi, lo -from pypy.jit.backend.ppc.ppcgen.ppc_assembler import AssemblerPPC -from pypy.jit.backend.ppc.ppcgen.arch import WORD +from pypy.jit.backend.ppc.locations import (imm, RegisterLocation, + ImmLocation, StackLocation) +from pypy.jit.backend.ppc.register import * +from pypy.jit.backend.ppc.codebuilder import hi, lo +from pypy.jit.backend.ppc.ppc_assembler import AssemblerPPC +from pypy.jit.backend.ppc.arch import WORD +from pypy.jit.backend.ppc.locations import get_spp_offset class MockBuilder(object): @@ -94,23 +95,31 @@ big = 2 << 28 self.asm.regalloc_mov(imm(big), stack(7)) - exp_instr = [MI("load_imm", 0, 5), - MI("stw", r0.value, SPP.value, -(6 * WORD + WORD)), - MI("load_imm", 0, big), - MI("stw", r0.value, SPP.value, -(7 * WORD + WORD))] + exp_instr = [MI("alloc_scratch_reg"), + MI("load_imm", r0, 5), + MI("store", r0.value, SPP.value, get_spp_offset(6)), + MI("free_scratch_reg"), + + MI("alloc_scratch_reg"), + MI("load_imm", r0, big), + MI("store", r0.value, SPP.value, get_spp_offset(7)), + MI("free_scratch_reg")] assert 
self.asm.mc.instrs == exp_instr def test_mem_to_reg(self): self.asm.regalloc_mov(stack(5), reg(10)) self.asm.regalloc_mov(stack(0), reg(0)) - exp_instrs = [MI("lwz", r10.value, SPP.value, -(5 * WORD + WORD)), - MI("lwz", r0.value, SPP.value, -(WORD))] + exp_instrs = [MI("load", r10.value, SPP.value, -(5 * WORD + WORD)), + MI("load", r0.value, SPP.value, -(WORD))] assert self.asm.mc.instrs == exp_instrs def test_mem_to_mem(self): self.asm.regalloc_mov(stack(5), stack(6)) - exp_instrs = [MI("lwz", r0.value, SPP.value, -(5 * WORD + WORD)), - MI("stw", r0.value, SPP.value, -(6 * WORD + WORD))] + exp_instrs = [ + MI("alloc_scratch_reg"), + MI("load", r0.value, SPP.value, get_spp_offset(5)), + MI("store", r0.value, SPP.value, get_spp_offset(6)), + MI("free_scratch_reg")] assert self.asm.mc.instrs == exp_instrs def test_reg_to_reg(self): @@ -123,8 +132,8 @@ def test_reg_to_mem(self): self.asm.regalloc_mov(reg(5), stack(10)) self.asm.regalloc_mov(reg(0), stack(2)) - exp_instrs = [MI("stw", r5.value, SPP.value, -(10 * WORD + WORD)), - MI("stw", r0.value, SPP.value, -(2 * WORD + WORD))] + exp_instrs = [MI("store", r5.value, SPP.value, -(10 * WORD + WORD)), + MI("store", r0.value, SPP.value, -(2 * WORD + WORD))] assert self.asm.mc.instrs == exp_instrs def reg(i): diff --git a/pypy/jit/backend/ppc/test/test_ztranslation.py b/pypy/jit/backend/ppc/test/test_ztranslation.py --- a/pypy/jit/backend/ppc/test/test_ztranslation.py +++ b/pypy/jit/backend/ppc/test/test_ztranslation.py @@ -8,7 +8,7 @@ from pypy.jit.backend.test.support import CCompiledMixin from pypy.jit.codewriter.policy import StopAtXPolicy from pypy.translator.translator import TranslationContext -from pypy.jit.backend.ppc.ppcgen.arch import IS_PPC_32, IS_PPC_64 +from pypy.jit.backend.ppc.arch import IS_PPC_32, IS_PPC_64 from pypy.config.translationoption import DEFL_GC from pypy.rlib import rgc diff --git a/pypy/jit/backend/ppc/util.py b/pypy/jit/backend/ppc/util.py new file mode 100644 --- /dev/null +++ b/pypy/jit/backend/ppc/util.py @@ -0,0 +1,23 @@ +from pypy.jit.codegen.ppc.ppc_assembler import MyPPCAssembler +from pypy.jit.codegen.ppc.func_builder import make_func + +from regname import * + +def access_at(): + a = MyPPCAssembler() + + a.lwzx(r3, r3, r4) + a.blr() + + return make_func(a, "i", "ii") + +access_at = access_at() + +def itoO(): + a = MyPPCAssembler() + + a.blr() + + return make_func(a, "O", "i") + +itoO = itoO() diff --git a/pypy/jit/backend/test/runner_test.py b/pypy/jit/backend/test/runner_test.py --- a/pypy/jit/backend/test/runner_test.py +++ b/pypy/jit/backend/test/runner_test.py @@ -3233,6 +3233,24 @@ 'float', descr=calldescr) assert res.getfloat() == expected + def test_wrong_guard_nonnull_class(self): + t_box, T_box = self.alloc_instance(self.T) + null_box = self.null_instance() + faildescr = BasicFailDescr(42) + operations = [ + ResOperation(rop.GUARD_NONNULL_CLASS, [t_box, T_box], None, + descr=faildescr), + ResOperation(rop.FINISH, [], None, descr=BasicFailDescr(1))] + operations[0].setfailargs([]) + looptoken = JitCellToken() + inputargs = [t_box] + self.cpu.compile_loop(inputargs, operations, looptoken) + operations = [ + ResOperation(rop.FINISH, [], None, descr=BasicFailDescr(99)) + ] + self.cpu.compile_bridge(faildescr, [], operations, looptoken) + fail = self.cpu.execute_token(looptoken, null_box.getref_base()) + assert fail.identifier == 99 def test_compile_loop_with_target(self): i0 = BoxInt() From noreply at buildbot.pypy.org Wed Feb 8 15:04:04 2012 From: noreply at buildbot.pypy.org (hager) Date: Wed, 8 Feb 
2012 15:04:04 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend-rpythonization: fix merge bug Message-ID: <20120208140404.4A22082B1E@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend-rpythonization Changeset: r52232:ca06b153e756 Date: 2012-02-08 06:03 -0800 http://bitbucket.org/pypy/pypy/changeset/ca06b153e756/ Log: fix merge bug diff --git a/pypy/jit/backend/ppc/locations.py b/pypy/jit/backend/ppc/locations.py --- a/pypy/jit/backend/ppc/locations.py +++ b/pypy/jit/backend/ppc/locations.py @@ -110,4 +110,7 @@ return ImmLocation(val) def get_spp_offset(pos): - return -(pos + 1) * WORD + if pos < 0: + return -pos * WORD + else: + return -(pos + 1) * WORD From noreply at buildbot.pypy.org Wed Feb 8 15:05:57 2012 From: noreply at buildbot.pypy.org (hager) Date: Wed, 8 Feb 2012 15:05:57 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: merge Message-ID: <20120208140557.D5AD182B1E@wyvern.cs.uni-duesseldorf.de> Author: Sven Hager Branch: ppc-jit-backend Changeset: r52233:b877b6a48b06 Date: 2012-02-08 15:05 +0100 http://bitbucket.org/pypy/pypy/changeset/b877b6a48b06/ Log: merge diff --git a/pypy/jit/backend/arm/runner.py b/pypy/jit/backend/arm/runner.py --- a/pypy/jit/backend/arm/runner.py +++ b/pypy/jit/backend/arm/runner.py @@ -112,7 +112,7 @@ (frame_depth + len(all_regs) * WORD + len(all_vfp_regs) * 2 * WORD)) - fail_index_2 = self.assembler.failure_recovery_func( + fail_index_2 = self.assembler.decode_registers_and_descr( faildescr._failure_recovery_code, addr_of_force_index, addr_end_of_frame) diff --git a/pypy/jit/backend/ppc/asmfunc.py b/pypy/jit/backend/ppc/asmfunc.py --- a/pypy/jit/backend/ppc/asmfunc.py +++ b/pypy/jit/backend/ppc/asmfunc.py @@ -5,6 +5,7 @@ from pypy.jit.backend.llsupport.asmmemmgr import AsmMemoryManager from pypy.rpython.lltypesystem import lltype, rffi from pypy.jit.backend.ppc.arch import IS_PPC_32, IS_PPC_64, WORD +from pypy.rlib.rarithmetic import r_uint _ppcgen = None @@ -19,17 +20,14 @@ self.code = MachineCodeBlockWrapper() if IS_PPC_64: # allocate function descriptor - 3 doublewords - self.emit(0) - self.emit(0) - self.emit(0) - self.emit(0) - self.emit(0) - self.emit(0) + for i in range(6): + self.emit(r_uint(0)) - def emit(self, insn): - bytes = struct.pack("i", insn) - for byte in bytes: - self.code.writechar(byte) + def emit(self, word): + self.code.writechar(chr((word >> 24) & 0xFF)) + self.code.writechar(chr((word >> 16) & 0xFF)) + self.code.writechar(chr((word >> 8) & 0xFF)) + self.code.writechar(chr(word & 0xFF)) def get_function(self): i = self.code.materialize(AsmMemoryManager(), []) diff --git a/pypy/jit/backend/ppc/assembler.py b/pypy/jit/backend/ppc/assembler.py --- a/pypy/jit/backend/ppc/assembler.py +++ b/pypy/jit/backend/ppc/assembler.py @@ -51,7 +51,7 @@ inst.fields[f] = l buf = [] for inst in self.insts: - buf.append(inst.assemble()) + buf.append(inst)#.assemble()) if dump: for i in range(len(buf)): inst = self.disassemble(buf[i], self.rlabels, i*4) @@ -61,12 +61,12 @@ return buf def assemble(self, dump=os.environ.has_key('PPY_DEBUG')): - insns = self.assemble0(dump) + #insns = self.assemble0(dump) from pypy.jit.backend.ppc import asmfunc - c = asmfunc.AsmCode(len(insns)*4) - for i in insns: - c.emit(i) - return c.get_function() + c = asmfunc.AsmCode(len(self.insts)*4) + for i in self.insts: + c.emit(i)#.assemble()) + #return c.get_function() def get_idescs(cls): r = [] diff --git a/pypy/jit/backend/ppc/codebuilder.py b/pypy/jit/backend/ppc/codebuilder.py --- a/pypy/jit/backend/ppc/codebuilder.py +++ 
b/pypy/jit/backend/ppc/codebuilder.py @@ -22,11 +22,11 @@ from pypy.jit.metainterp.resoperation import rop from pypy.jit.metainterp.history import (BoxInt, ConstInt, ConstPtr, ConstFloat, Box, INT, REF, FLOAT) -from pypy.jit.backend.x86.support import values_array from pypy.tool.udir import udir from pypy.rlib.objectmodel import we_are_translated from pypy.translator.tool.cbuild import ExternalCompilationInfo +from pypy.jit.backend.ppc.rassemblermaker import make_rassembler A = Form("frD", "frA", "frB", "XO3", "Rc") A1 = Form("frD", "frB", "XO3", "Rc") @@ -888,10 +888,7 @@ mtcr = BA.mtcrf(CRM=0xFF) - def emit(self, insn): - bytes = struct.pack("i", insn) - for byte in bytes: - self.writechar(byte) +PPCAssembler = make_rassembler(PPCAssembler) def hi(w): return w >> 16 @@ -964,9 +961,16 @@ def __init__(self, failargs_limit=1000, r0_in_use=False): PPCAssembler.__init__(self) self.init_block_builder() - self.fail_boxes_int = values_array(lltype.Signed, failargs_limit) self.r0_in_use = r0_in_use + def check(self, desc, v, *args): + desc.__get__(self)(*args) + ins = self.insts.pop() + expected = ins.assemble() + if expected < 0: + expected += 1<<32 + assert v == expected + def load_imm(self, rD, word): rD = rD.as_key() if word <= 32767 and word >= -32768: @@ -990,7 +994,8 @@ self.ldx(rD.value, 0, rD.value) def store_reg(self, source_reg, addr): - self.alloc_scratch_reg(addr) + self.alloc_scratch_reg() + self.load_imm(r.SCRATCH, addr) if IS_PPC_32: self.stwx(source_reg.value, 0, r.SCRATCH.value) else: @@ -1015,13 +1020,15 @@ BI = condition[0] BO = condition[1] - self.alloc_scratch_reg(addr) + self.alloc_scratch_reg() + self.load_imm(r.SCRATCH, addr) self.mtctr(r.SCRATCH.value) self.free_scratch_reg() self.bcctr(BO, BI) def b_abs(self, address, trap=False): - self.alloc_scratch_reg(address) + self.alloc_scratch_reg() + self.load_imm(r.SCRATCH, address) self.mtctr(r.SCRATCH.value) self.free_scratch_reg() if trap: @@ -1075,7 +1082,7 @@ self.assemble(show) insts = self.insts for inst in insts: - self.write32(inst.assemble()) + self.write32(inst)#.assemble()) def _dump_trace(self, addr, name, formatter=-1): if not we_are_translated(): @@ -1148,11 +1155,9 @@ # 64 bit unsigned self.cmpld(block, a, b) - def alloc_scratch_reg(self, value=None): + def alloc_scratch_reg(self): assert not self.r0_in_use self.r0_in_use = True - if value is not None: - self.load_imm(r.r0, value) def free_scratch_reg(self): assert self.r0_in_use diff --git a/pypy/jit/backend/ppc/form.py b/pypy/jit/backend/ppc/form.py --- a/pypy/jit/backend/ppc/form.py +++ b/pypy/jit/backend/ppc/form.py @@ -14,8 +14,8 @@ self.fields = fields self.lfields = [k for (k,v) in fields.iteritems() if isinstance(v, str)] - if not self.lfields: - self.assemble() # for error checking only + #if not self.lfields: + # self.assemble() # for error checking only def assemble(self): r = 0 for field in self.fields: diff --git a/pypy/jit/backend/ppc/helper/assembler.py b/pypy/jit/backend/ppc/helper/assembler.py --- a/pypy/jit/backend/ppc/helper/assembler.py +++ b/pypy/jit/backend/ppc/helper/assembler.py @@ -83,7 +83,7 @@ def decode64(mem, index): value = 0 - for x in unrolling_iterable(range(8)): + for x in range(8): value |= (ord(mem[index + x]) << (56 - x * 8)) return intmask(value) diff --git a/pypy/jit/backend/ppc/helper/regalloc.py b/pypy/jit/backend/ppc/helper/regalloc.py --- a/pypy/jit/backend/ppc/helper/regalloc.py +++ b/pypy/jit/backend/ppc/helper/regalloc.py @@ -10,7 +10,7 @@ return False def _check_imm_arg(arg, size=IMM_SIZE, allow_zero=True): - 
#assert not isinstance(arg, ConstInt) + assert not isinstance(arg, ConstInt) #if not we_are_translated(): # if not isinstance(arg, int): # import pdb; pdb.set_trace() @@ -25,8 +25,8 @@ def f(self, op): boxes = op.getarglist() arg0, arg1 = boxes - imm_a0 = _check_imm_arg(arg0) - imm_a1 = _check_imm_arg(arg1) + imm_a0 = check_imm_box(arg0) + imm_a1 = check_imm_box(arg1) l0 = self._ensure_value_is_boxed(arg0, forbidden_vars=boxes) if imm_a1 and not imm_a0: @@ -63,8 +63,8 @@ def f(self, op): boxes = op.getarglist() b0, b1 = boxes - imm_b0 = _check_imm_arg(b0) - imm_b1 = _check_imm_arg(b1) + imm_b0 = check_imm_box(b0) + imm_b1 = check_imm_box(b1) l0 = self._ensure_value_is_boxed(b0, boxes) l1 = self._ensure_value_is_boxed(b1, boxes) locs = [l0, l1] diff --git a/pypy/jit/backend/ppc/opassembler.py b/pypy/jit/backend/ppc/opassembler.py --- a/pypy/jit/backend/ppc/opassembler.py +++ b/pypy/jit/backend/ppc/opassembler.py @@ -288,7 +288,8 @@ adr = self.fail_boxes_int.get_addr_for_num(i) else: assert 0 - self.mc.alloc_scratch_reg(adr) + self.mc.alloc_scratch_reg() + self.mc.load_imm(r.SCRATCH, adr) self.mc.storex(loc.value, 0, r.SCRATCH.value) self.mc.free_scratch_reg() elif loc.is_vfp_reg(): @@ -372,7 +373,8 @@ if resloc: self.mc.load(resloc.value, loc.value, 0) - self.mc.alloc_scratch_reg(0) + self.mc.alloc_scratch_reg() + self.mc.load_imm(r.SCRATCH, 0) self.mc.store(r.SCRATCH.value, loc.value, 0) self.mc.store(r.SCRATCH.value, loc1.value, 0) self.mc.free_scratch_reg() @@ -748,7 +750,8 @@ bytes_loc = regalloc.force_allocate_reg(bytes_box, forbidden_vars) scale = self._get_unicode_item_scale() assert length_loc.is_reg() - self.mc.alloc_scratch_reg(1 << scale) + self.mc.alloc_scratch_reg() + self.mc.load_imm(r.SCRATCH, 1 << scale) if IS_PPC_32: self.mc.mullw(bytes_loc.value, r.SCRATCH.value, length_loc.value) else: @@ -857,7 +860,8 @@ def set_vtable(self, box, vtable): if self.cpu.vtable_offset is not None: adr = rffi.cast(lltype.Signed, vtable) - self.mc.alloc_scratch_reg(adr) + self.mc.alloc_scratch_reg() + self.mc.load_imm(r.SCRATCH, adr) self.mc.store(r.SCRATCH.value, r.RES.value, self.cpu.vtable_offset) self.mc.free_scratch_reg() @@ -986,7 +990,8 @@ # check value resloc = regalloc.try_allocate_reg(resbox) assert resloc is r.RES - self.mc.alloc_scratch_reg(value) + self.mc.alloc_scratch_reg() + self.mc.load_imm(r.SCRATCH, value) self.mc.cmp_op(0, resloc.value, r.SCRATCH.value) self.mc.free_scratch_reg() regalloc.possibly_free_var(resbox) @@ -1026,16 +1031,16 @@ # Reset the vable token --- XXX really too much special logic here:-( if jd.index_of_virtualizable >= 0: - from pypy.jit.backend.llsupport.descr import BaseFieldDescr + from pypy.jit.backend.llsupport.descr import FieldDescr fielddescr = jd.vable_token_descr - assert isinstance(fielddescr, BaseFieldDescr) + assert isinstance(fielddescr, FieldDescr) ofs = fielddescr.offset resloc = regalloc.force_allocate_reg(resbox) - self.alloc_scratch_reg() + self.mc.alloc_scratch_reg() self.mov_loc_loc(arglocs[1], r.SCRATCH) self.mc.li(resloc.value, 0) self.mc.storex(resloc.value, 0, r.SCRATCH.value) - self.free_scratch_reg() + self.mc.free_scratch_reg() regalloc.possibly_free_var(resbox) if op.result is not None: @@ -1051,7 +1056,8 @@ raise AssertionError(kind) resloc = regalloc.force_allocate_reg(op.result) regalloc.possibly_free_var(resbox) - self.mc.alloc_scratch_reg(adr) + self.mc.alloc_scratch_reg() + self.mc.load_imm(r.SCRATCH, adr) if op.result.type == FLOAT: assert 0, "not implemented yet" else: @@ -1082,6 +1088,15 @@ emit_guard_call_release_gil 
= emit_guard_call_may_force + def call_release_gil(self, gcrootmap, save_registers): + # XXX don't know whether this is correct + # XXX use save_registers here + assert gcrootmap.is_shadow_stack + with Saved_Volatiles(self.mc): + self._emit_call(NO_FORCE_INDEX, self.releasegil_addr, + [], self._regalloc) + + class OpAssembler(IntOpAssembler, GuardOpAssembler, MiscOpAssembler, FieldOpAssembler, diff --git a/pypy/jit/backend/ppc/ppc_assembler.py b/pypy/jit/backend/ppc/ppc_assembler.py --- a/pypy/jit/backend/ppc/ppc_assembler.py +++ b/pypy/jit/backend/ppc/ppc_assembler.py @@ -3,23 +3,23 @@ from pypy.jit.backend.ppc.ppc_form import PPCForm as Form from pypy.jit.backend.ppc.ppc_field import ppc_fields from pypy.jit.backend.ppc.regalloc import (TempInt, PPCFrameManager, - Regalloc) + Regalloc) from pypy.jit.backend.ppc.assembler import Assembler from pypy.jit.backend.ppc.opassembler import OpAssembler from pypy.jit.backend.ppc.symbol_lookup import lookup from pypy.jit.backend.ppc.codebuilder import PPCBuilder from pypy.jit.backend.ppc.jump import remap_frame_layout from pypy.jit.backend.ppc.arch import (IS_PPC_32, IS_PPC_64, WORD, - NONVOLATILES, MAX_REG_PARAMS, - GPR_SAVE_AREA, BACKCHAIN_SIZE, - FPR_SAVE_AREA, - FLOAT_INT_CONVERSION, FORCE_INDEX, - SIZE_LOAD_IMM_PATCH_SP) + NONVOLATILES, MAX_REG_PARAMS, + GPR_SAVE_AREA, BACKCHAIN_SIZE, + FPR_SAVE_AREA, + FLOAT_INT_CONVERSION, FORCE_INDEX, + SIZE_LOAD_IMM_PATCH_SP) from pypy.jit.backend.ppc.helper.assembler import (gen_emit_cmp_op, encode32, encode64, decode32, decode64, count_reg_args, - Saved_Volatiles) + Saved_Volatiles) import pypy.jit.backend.ppc.register as r import pypy.jit.backend.ppc.condition as c from pypy.jit.metainterp.history import (Const, ConstPtr, JitCellToken, @@ -106,6 +106,7 @@ self._regalloc = None self.max_stack_params = 0 self.propagate_exception_path = 0 + self.setup_failure_recovery() def _save_nonvolatiles(self): """ save nonvolatile GPRs in GPR SAVE AREA @@ -237,6 +238,7 @@ descr = decode32(enc, i+1) self.fail_boxes_count = fail_index self.fail_force_index = spp_loc + assert isinstance(descr, int) return descr def decode_inputargs(self, enc): @@ -382,7 +384,6 @@ gc_ll_descr.initialize() self._build_propagate_exception_path() self.memcpy_addr = self.cpu.cast_ptr_to_int(memcpy_fn) - self.setup_failure_recovery() self.exit_code_adr = self._gen_exit_path() self._leave_jitted_hook_save_exc = self._gen_leave_jitted_hook_code(True) self._leave_jitted_hook = self._gen_leave_jitted_hook_code(False) @@ -600,8 +601,7 @@ if op.is_ovf(): if (operations[i + 1].getopnum() != rop.GUARD_NO_OVERFLOW and operations[i + 1].getopnum() != rop.GUARD_OVERFLOW): - not_implemented("int_xxx_ovf not followed by " - "guard_(no)_overflow") + assert 0, "int_xxx_ovf not followed by guard_(no)_overflow" return True return False if (operations[i + 1].getopnum() != rop.GUARD_TRUE and @@ -682,7 +682,8 @@ memaddr = self.gen_descr_encoding(descr, args, arglocs) # store addr in force index field - self.mc.alloc_scratch_reg(memaddr) + self.mc.alloc_scratch_reg() + self.mc.load_imm(r.SCRATCH, memaddr) self.mc.store(r.SCRATCH.value, r.SPP.value, self.ENCODING_AREA) self.mc.free_scratch_reg() @@ -886,7 +887,8 @@ return 0 def _write_fail_index(self, fail_index): - self.mc.alloc_scratch_reg(fail_index) + self.mc.alloc_scratch_reg() + self.mc.load_imm(r.SCRATCH, fail_index) self.mc.store(r.SCRATCH.value, r.SPP.value, self.ENCODING_AREA) self.mc.free_scratch_reg() diff --git a/pypy/jit/backend/ppc/rassemblermaker.py b/pypy/jit/backend/ppc/rassemblermaker.py new 
file mode 100644 --- /dev/null +++ b/pypy/jit/backend/ppc/rassemblermaker.py @@ -0,0 +1,74 @@ +from pypy.tool.sourcetools import compile2 +from pypy.rlib.rarithmetic import r_uint +from pypy.jit.backend.ppc.form import IDesc, IDupDesc +from pypy.jit.backend.ppc.ppc_field import IField + +## "opcode": ( 0, 5), +## "rA": (11, 15, 'unsigned', regname._R), +## "rB": (16, 20, 'unsigned', regname._R), +## "Rc": (31, 31), +## "rD": ( 6, 10, 'unsigned', regname._R), +## "OE": (21, 21), +## "XO2": (22, 30), + +## XO = Form("rD", "rA", "rB", "OE", "XO2", "Rc") + +## add = XO(31, XO2=266, OE=0, Rc=0) + +## def add(rD, rA, rB): +## v = 0 +## v |= (31&(2**(5-0+1)-1)) << (32-5-1) +## ... +## return v + +def make_func(name, desc): + sig = [] + fieldvalues = [] + for field in desc.fields: + if field in desc.specializations: + fieldvalues.append((field, desc.specializations[field])) + else: + sig.append(field.name) + fieldvalues.append((field, field.name)) + if isinstance(desc, IDupDesc): + for destfield, srcfield in desc.dupfields.iteritems(): + fieldvalues.append((destfield, srcfield.name)) + body = ['v = r_uint(0)'] + assert 'v' not in sig # that wouldn't be funny + #body.append('print %r'%name + ', ' + ', '.join(["'%s:', %s"%(s, s) for s in sig])) + for field, value in fieldvalues: + if field.name == 'spr': + body.append('spr1 = (%s&31) << 5 | (%s >> 5 & 31)'%(value, value)) + value = 'spr1' + elif field.name == 'mbe': + body.append('mbe1 = (%s & 31) << 1 | (%s & 32) >> 5' % (value, value)) + value = 'mbe1' + elif field.name == 'sh': + body.append('sh1 = (%s & 31) << 10 | (%s & 32) >> 5' % (value, value)) + value = 'sh1' + if isinstance(field, IField): + body.append('v |= ((%3s >> 2) & r_uint(%#05x)) << 2' % (value, field.mask)) + else: + body.append('v |= (%3s & r_uint(%#05x)) << %d'%(value, + field.mask, + (32 - field.right - 1))) + #body.append('self.check(desc, v, %s)' % ', '.join(sig)) + body.append('self.emit(v)') + src = 'def %s(self, %s):\n %s'%(name, ', '.join(sig), '\n '.join(body)) + d = {'r_uint':r_uint, 'desc': desc} + #print src + exec compile2(src) in d + return d[name] + +def make_rassembler(cls): + bases = [make_rassembler(b) for b in cls.__bases__] + ns = {} + for k, v in cls.__dict__.iteritems(): + if isinstance(v, IDesc): + v = make_func(k, v) + ns[k] = v + rcls = type('R' + cls.__name__, tuple(bases), ns) + def emit(self, value): + self.insts.append(value) + rcls.emit = emit + return rcls diff --git a/pypy/jit/backend/ppc/regalloc.py b/pypy/jit/backend/ppc/regalloc.py --- a/pypy/jit/backend/ppc/regalloc.py +++ b/pypy/jit/backend/ppc/regalloc.py @@ -2,15 +2,15 @@ TempBox, compute_vars_longevity) from pypy.jit.backend.ppc.arch import (WORD, MY_COPY_OF_REGS) from pypy.jit.backend.ppc.jump import (remap_frame_layout_mixed, - remap_frame_layout) + remap_frame_layout) from pypy.jit.backend.ppc.locations import imm from pypy.jit.backend.ppc.helper.regalloc import (_check_imm_arg, - check_imm_box, - prepare_cmp_op, - prepare_unary_int_op, - prepare_binary_int_op, - prepare_binary_int_op_with_imm, - prepare_unary_cmp) + check_imm_box, + prepare_cmp_op, + prepare_unary_int_op, + prepare_binary_int_op, + prepare_binary_int_op_with_imm, + prepare_unary_cmp) from pypy.jit.metainterp.history import (INT, REF, FLOAT, Const, ConstInt, ConstPtr, Box) from pypy.jit.metainterp.history import JitCellToken, TargetToken @@ -512,7 +512,7 @@ loc, box = self._ensure_value_is_boxed(op.getarg(i), argboxes) arglocs.append(loc) argboxes.append(box) - self.assembler.call_release_gil(gcrootmap, arglocs, fcond) + 
self.assembler.call_release_gil(gcrootmap, arglocs) self.possibly_free_vars(argboxes) # do the call faildescr = guard_op.getdescr() @@ -522,7 +522,8 @@ self.assembler.emit_call(op, args, self, fail_index) # then reopen the stack if gcrootmap: - self.assembler.call_reacquire_gil(gcrootmap, r.r0, fcond) + assert 0, "not implemented yet" + # self.assembler.call_reacquire_gil(gcrootmap, registers) locs = self._prepare_guard(guard_op) self.possibly_free_vars(guard_op.getfailargs()) return locs @@ -595,11 +596,10 @@ args = op.getarglist() base_loc = self._ensure_value_is_boxed(op.getarg(0), args) index_loc = self._ensure_value_is_boxed(op.getarg(1), args) - c_ofs = ConstInt(ofs) - if _check_imm_arg(c_ofs): + if _check_imm_arg(ofs): ofs_loc = imm(ofs) else: - ofs_loc = self._ensure_value_is_boxed(c_ofs, args) + ofs_loc = self._ensure_value_is_boxed(ConstInt(ofs), args) self.possibly_free_vars_for_op(op) self.free_temp_vars() result_loc = self.force_allocate_reg(op.result) @@ -614,11 +614,10 @@ base_loc = self._ensure_value_is_boxed(op.getarg(0), args) index_loc = self._ensure_value_is_boxed(op.getarg(1), args) value_loc = self._ensure_value_is_boxed(op.getarg(2), args) - c_ofs = ConstInt(ofs) - if _check_imm_arg(c_ofs): + if _check_imm_arg(ofs): ofs_loc = imm(ofs) else: - ofs_loc = self._ensure_value_is_boxed(c_ofs, args) + ofs_loc = self._ensure_value_is_boxed(ConstInt(ofs), args) return [base_loc, index_loc, value_loc, ofs_loc, imm(ofs), imm(itemsize), imm(fieldsize)] @@ -640,8 +639,7 @@ base_loc = self._ensure_value_is_boxed(args[0], args) ofs_loc = self._ensure_value_is_boxed(args[1], args) value_loc = self._ensure_value_is_boxed(args[2], args) - scratch_loc = self.rm.get_scratch_reg(INT, - [base_loc, ofs_loc, value_loc]) + scratch_loc = self.rm.get_scratch_reg(INT, args) assert _check_imm_arg(ofs) return [value_loc, base_loc, ofs_loc, scratch_loc, imm(scale), imm(ofs)] prepare_setarrayitem_raw = prepare_setarrayitem_gc @@ -652,7 +650,7 @@ scale = get_scale(size) base_loc = self._ensure_value_is_boxed(boxes[0], boxes) ofs_loc = self._ensure_value_is_boxed(boxes[1], boxes) - scratch_loc = self.rm.get_scratch_reg(INT, [base_loc, ofs_loc]) + scratch_loc = self.rm.get_scratch_reg(INT, boxes) self.possibly_free_vars_for_op(op) self.free_temp_vars() res = self.force_allocate_reg(op.result) @@ -766,11 +764,13 @@ def prepare_call(self, op): effectinfo = op.getdescr().get_extra_info() if effectinfo is not None: - oopspecindex = effectinfo.oopspecindex - if oopspecindex == EffectInfo.OS_MATH_SQRT: - args = self.prepare_op_math_sqrt(op, fcond) - self.assembler.emit_op_math_sqrt(op, args, self, fcond) - return + # XXX TODO + #oopspecindex = effectinfo.oopspecindex + #if oopspecindex == EffectInfo.OS_MATH_SQRT: + # args = self.prepare_op_math_sqrt(op, fcond) + # self.assembler.emit_op_math_sqrt(op, args, self, fcond) + # return + pass args = [imm(rffi.cast(lltype.Signed, op.getarg(0).getint()))] return args diff --git a/pypy/jit/backend/ppc/runner.py b/pypy/jit/backend/ppc/runner.py --- a/pypy/jit/backend/ppc/runner.py +++ b/pypy/jit/backend/ppc/runner.py @@ -44,7 +44,7 @@ def setup_once(self): self.asm.setup_once() - def compile_loop(self, inputargs, operations, looptoken, log=True, name=''): + def compile_loop(self, inputargs, operations, looptoken, log=True, name=""): self.asm.assemble_loop(inputargs, operations, looptoken, log) def compile_bridge(self, faildescr, inputargs, operations, @@ -97,7 +97,7 @@ rffi.cast(TP, addr_of_force_index)[0] = ~fail_index # start of "no gc operation!" 
block - fail_index_2 = self.asm.failure_recovery_func( + fail_index_2 = self.asm.decode_registers_and_descr( faildescr._failure_recovery_code, spilling_pointer) self.asm.leave_jitted_hook() # end of "no gc operation!" block diff --git a/pypy/jit/backend/ppc/test/test_func_builder.py b/pypy/jit/backend/ppc/test/test_func_builder.py --- a/pypy/jit/backend/ppc/test/test_func_builder.py +++ b/pypy/jit/backend/ppc/test/test_func_builder.py @@ -78,7 +78,7 @@ f = make_func(a, "O", "O") assert f(1) == 1 b = MyPPCAssembler() - from pypy.jit.backend.ppc.ppcgen import util + from pypy.jit.backend.ppc import util # eurgh!: b.load_word(r0, util.access_at(id(f.code), 8) + f.FAST_ENTRY_LABEL) b.mtctr(r0) diff --git a/pypy/jit/backend/ppc/test/test_ppc.py b/pypy/jit/backend/ppc/test/test_ppc.py --- a/pypy/jit/backend/ppc/test/test_ppc.py +++ b/pypy/jit/backend/ppc/test/test_ppc.py @@ -2,6 +2,7 @@ import random, sys, os from pypy.jit.backend.ppc.codebuilder import BasicPPCAssembler, PPCBuilder +from pypy.jit.backend.ppc.symbol_lookup import lookup from pypy.jit.backend.ppc.regname import * from pypy.jit.backend.ppc.register import * from pypy.jit.backend.ppc import form @@ -58,6 +59,7 @@ def setup_class(cls): if autodetect_main_model() not in ["ppc", "ppc64"]: py.test.skip("can't test all of ppcgen on non-PPC!") + py.test.xfail("assemble does not return a function any longer, fix tests") """ Tests are build like this: diff --git a/pypy/jit/backend/ppc/test/test_rassemblermaker.py b/pypy/jit/backend/ppc/test/test_rassemblermaker.py new file mode 100644 --- /dev/null +++ b/pypy/jit/backend/ppc/test/test_rassemblermaker.py @@ -0,0 +1,39 @@ +from pypy.jit.backend.ppc.rassemblermaker import make_rassembler +from pypy.jit.backend.ppc.ppc_assembler import PPCAssembler + +RPPCAssembler = make_rassembler(PPCAssembler) + +_a = PPCAssembler() +_a.add(3, 3, 4) +add_r3_r3_r4 = _a.insts[0].assemble() + +def test_simple(): + ra = RPPCAssembler() + ra.add(3, 3, 4) + assert ra.insts == [add_r3_r3_r4] + +def test_rtyped(): + from pypy.rpython.test.test_llinterp import interpret + def f(): + ra = RPPCAssembler() + ra.add(3, 3, 4) + ra.lwz(1, 1, 1) # ensure that high bit doesn't produce long but r_uint + return ra.insts[0] + res = interpret(f, []) + assert res == add_r3_r3_r4 + +def test_mnemonic(): + mrs = [] + for A in PPCAssembler, RPPCAssembler: + a = A() + a.mr(3, 4) + mrs.append(a.insts[0]) + assert mrs[0].assemble() == mrs[1] + +def test_spr_coding(): + mrs = [] + for A in PPCAssembler, RPPCAssembler: + a = A() + a.mtctr(3) + mrs.append(a.insts[0]) + assert mrs[0].assemble() == mrs[1] diff --git a/pypy/jit/backend/ppc/test/test_zrpy_gc.py b/pypy/jit/backend/ppc/test/test_zrpy_gc.py new file mode 100644 --- /dev/null +++ b/pypy/jit/backend/ppc/test/test_zrpy_gc.py @@ -0,0 +1,795 @@ +""" +This is a test that translates a complete JIT together with a GC and runs it. +It is testing that the GC-dependent aspects basically work, mostly the mallocs +and the various cases of write barrier. 
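
As a rough illustration of what the generated encoders in rassemblermaker.py above boil down to, here is a hand-written sketch of the "add" encoder; it is not code from the commit, it omits the masking and r_uint handling that make_func() emits, and the field boundaries are taken from the commented-out pseudocode at the top of that file:

    # hand-expanded version of the encoder make_func() builds for "add"
    # (big-endian bit numbering as in ppc_field: bit 0 is the most significant)
    def add(rD, rA, rB):
        v = 0
        v |= 31  << (32 - 5 - 1)     # primary opcode, bits 0-5
        v |= rD  << (32 - 10 - 1)    # rD, bits 6-10
        v |= rA  << (32 - 15 - 1)    # rA, bits 11-15
        v |= rB  << (32 - 20 - 1)    # rB, bits 16-20
        v |= 266 << (32 - 30 - 1)    # XO2, bits 22-30 (OE and Rc stay 0)
        return v

    # add(3, 3, 4) yields the same 32-bit word that test_rassemblermaker.py
    # above captures as add_r3_r3_r4.
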
+""" + +import weakref +import py, os +from pypy.annotation import policy as annpolicy +from pypy.rlib import rgc +from pypy.rpython.lltypesystem import lltype, llmemory, rffi +from pypy.rlib.jit import JitDriver, dont_look_inside +from pypy.rlib.jit import elidable, unroll_safe +from pypy.jit.backend.llsupport.gc import GcLLDescr_framework +from pypy.tool.udir import udir +from pypy.config.translationoption import DEFL_GC + +class X(object): + def __init__(self, x=0): + self.x = x + + next = None + +class CheckError(Exception): + pass + +def check(flag): + if not flag: + raise CheckError + +def get_g(main): + main._dont_inline_ = True + def g(name, n): + x = X() + x.foo = 2 + main(n, x) + x.foo = 5 + return weakref.ref(x) + g._dont_inline_ = True + return g + + +def get_entry(g): + + def entrypoint(args): + name = '' + n = 2000 + argc = len(args) + if argc > 1: + name = args[1] + if argc > 2: + n = int(args[2]) + r_list = [] + for i in range(20): + r = g(name, n) + r_list.append(r) + rgc.collect() + rgc.collect(); rgc.collect() + freed = 0 + for r in r_list: + if r() is None: + freed += 1 + print freed + return 0 + + return entrypoint + + +def get_functions_to_patch(): + from pypy.jit.backend.llsupport import gc + # + can_use_nursery_malloc1 = gc.GcLLDescr_framework.can_use_nursery_malloc + def can_use_nursery_malloc2(*args): + try: + if os.environ['PYPY_NO_INLINE_MALLOC']: + return False + except KeyError: + pass + return can_use_nursery_malloc1(*args) + # + return {(gc.GcLLDescr_framework, 'can_use_nursery_malloc'): + can_use_nursery_malloc2} + +def compile(f, gc, enable_opts='', **kwds): + from pypy.annotation.listdef import s_list_of_strings + from pypy.translator.translator import TranslationContext + from pypy.jit.metainterp.warmspot import apply_jit + from pypy.translator.c import genc + # + t = TranslationContext() + t.config.translation.gc = gc + if gc != 'boehm': + t.config.translation.gcremovetypeptr = True + for name, value in kwds.items(): + setattr(t.config.translation, name, value) + ann = t.buildannotator(policy=annpolicy.StrictAnnotatorPolicy()) + ann.build_types(f, [s_list_of_strings], main_entry_point=True) + t.buildrtyper().specialize() + + if kwds['jit']: + patch = get_functions_to_patch() + old_value = {} + try: + for (obj, attr), value in patch.items(): + old_value[obj, attr] = getattr(obj, attr) + setattr(obj, attr, value) + # + apply_jit(t, enable_opts=enable_opts) + # + finally: + for (obj, attr), oldvalue in old_value.items(): + setattr(obj, attr, oldvalue) + + cbuilder = genc.CStandaloneBuilder(t, f, t.config) + cbuilder.generate_source(defines=cbuilder.DEBUG_DEFINES) + cbuilder.compile() + return cbuilder + +def run(cbuilder, args=''): + # + pypylog = udir.join('test_zrpy_gc.log') + data = cbuilder.cmdexec(args, env={'PYPYLOG': ':%s' % pypylog}) + return data.strip() + +def compile_and_run(f, gc, **kwds): + cbuilder = compile(f, gc, **kwds) + return run(cbuilder) + + + +def test_compile_boehm(): + myjitdriver = JitDriver(greens = [], reds = ['n', 'x']) + @dont_look_inside + def see(lst, n): + assert len(lst) == 3 + assert lst[0] == n+10 + assert lst[1] == n+20 + assert lst[2] == n+30 + def main(n, x): + while n > 0: + myjitdriver.can_enter_jit(n=n, x=x) + myjitdriver.jit_merge_point(n=n, x=x) + y = X() + y.foo = x.foo + n -= y.foo + see([n+10, n+20, n+30], n) + res = compile_and_run(get_entry(get_g(main)), "boehm", jit=True) + assert int(res) >= 16 + +# ______________________________________________________________________ + + +class 
BaseFrameworkTests(object): + compile_kwds = {} + + def setup_class(cls): + funcs = [] + name_to_func = {} + for fullname in dir(cls): + if not fullname.startswith('define'): + continue + definefunc = getattr(cls, fullname) + _, name = fullname.split('_', 1) + beforefunc, loopfunc, afterfunc = definefunc.im_func(cls) + if beforefunc is None: + def beforefunc(n, x): + return n, x, None, None, None, None, None, None, None, None, None, '' + if afterfunc is None: + def afterfunc(n, x, x0, x1, x2, x3, x4, x5, x6, x7, l, s): + pass + beforefunc.func_name = 'before_'+name + loopfunc.func_name = 'loop_'+name + afterfunc.func_name = 'after_'+name + funcs.append((beforefunc, loopfunc, afterfunc)) + assert name not in name_to_func + name_to_func[name] = len(name_to_func) + print name_to_func + def allfuncs(name, n): + x = X() + x.foo = 2 + main_allfuncs(name, n, x) + x.foo = 5 + return weakref.ref(x) + def main_allfuncs(name, n, x): + num = name_to_func[name] + n, x, x0, x1, x2, x3, x4, x5, x6, x7, l, s = funcs[num][0](n, x) + while n > 0: + myjitdriver.can_enter_jit(num=num, n=n, x=x, x0=x0, x1=x1, + x2=x2, x3=x3, x4=x4, x5=x5, x6=x6, x7=x7, l=l, s=s) + myjitdriver.jit_merge_point(num=num, n=n, x=x, x0=x0, x1=x1, + x2=x2, x3=x3, x4=x4, x5=x5, x6=x6, x7=x7, l=l, s=s) + + n, x, x0, x1, x2, x3, x4, x5, x6, x7, l, s = funcs[num][1]( + n, x, x0, x1, x2, x3, x4, x5, x6, x7, l, s) + funcs[num][2](n, x, x0, x1, x2, x3, x4, x5, x6, x7, l, s) + myjitdriver = JitDriver(greens = ['num'], + reds = ['n', 'x', 'x0', 'x1', 'x2', 'x3', 'x4', + 'x5', 'x6', 'x7', 'l', 's']) + cls.main_allfuncs = staticmethod(main_allfuncs) + cls.name_to_func = name_to_func + OLD_DEBUG = GcLLDescr_framework.DEBUG + try: + GcLLDescr_framework.DEBUG = True + cls.cbuilder = compile(get_entry(allfuncs), DEFL_GC, + gcrootfinder=cls.gcrootfinder, jit=True, + **cls.compile_kwds) + finally: + GcLLDescr_framework.DEBUG = OLD_DEBUG + + def _run(self, name, n, env): + res = self.cbuilder.cmdexec("%s %d" %(name, n), env=env) + assert int(res) == 20 + + def run(self, name, n=2000): + pypylog = udir.join('TestCompileFramework.log') + env = {'PYPYLOG': ':%s' % pypylog, + 'PYPY_NO_INLINE_MALLOC': '1'} + self._run(name, n, env) + env['PYPY_NO_INLINE_MALLOC'] = '' + self._run(name, n, env) + + def run_orig(self, name, n, x): + self.main_allfuncs(name, n, x) + + +class CompileFrameworkTests(BaseFrameworkTests): + # Test suite using (so far) the minimark GC. + +## def define_libffi_workaround(cls): +## # XXX: this is a workaround for a bug in database.py. It seems that +## # the problem is triggered by optimizeopt/fficall.py, and in +## # particular by the ``cast_base_ptr_to_instance(Func, llfunc)``: in +## # these tests, that line is the only place where libffi.Func is +## # referenced. +## # +## # The problem occurs because the gctransformer tries to annotate a +## # low-level helper to call the __del__ of libffi.Func when it's too +## # late. +## # +## # This workaround works by forcing the annotator (and all the rest of +## # the toolchain) to see libffi.Func in a "proper" context, not just as +## # the target of cast_base_ptr_to_instance. Note that the function +## # below is *never* called by any actual test, it's just annotated. 
+## # +## from pypy.rlib.libffi import get_libc_name, CDLL, types, ArgChain +## libc_name = get_libc_name() +## def f(n, x, *args): +## libc = CDLL(libc_name) +## ptr = libc.getpointer('labs', [types.slong], types.slong) +## chain = ArgChain() +## chain.arg(n) +## n = ptr.call(chain, lltype.Signed) +## return (n, x) + args +## return None, f, None + + def define_compile_framework_1(cls): + # a moving GC. Supports malloc_varsize_nonmovable. Simple test, works + # without write_barriers and root stack enumeration. + def f(n, x, *args): + y = X() + y.foo = x.foo + n -= y.foo + return (n, x) + args + return None, f, None + + def test_compile_framework_1(self): + self.run('compile_framework_1') + + def define_compile_framework_2(cls): + # More complex test, requires root stack enumeration but + # not write_barriers. + def f(n, x, *args): + prev = x + for j in range(101): # f() runs 20'000 times, thus allocates + y = X() # a total of 2'020'000 objects + y.foo = prev.foo + prev = y + n -= prev.foo + return (n, x) + args + return None, f, None + + def test_compile_framework_2(self): + self.run('compile_framework_2') + + def define_compile_framework_3(cls): + # Third version of the test. Really requires write_barriers. + def f(n, x, *args): + x.next = None + for j in range(101): # f() runs 20'000 times, thus allocates + y = X() # a total of 2'020'000 objects + y.foo = j+1 + y.next = x.next + x.next = y + check(x.next.foo == 101) + total = 0 + y = x + for j in range(101): + y = y.next + total += y.foo + check(not y.next) + check(total == 101*102/2) + n -= x.foo + return (n, x) + args + return None, f, None + + + + def test_compile_framework_3(self): + x_test = X() + x_test.foo = 5 + self.run_orig('compile_framework_3', 6, x_test) # check that it does not raise CheckError + self.run('compile_framework_3') + + def define_compile_framework_3_extra(cls): + # Extra version of the test, with tons of live vars around the residual + # call that all contain a GC pointer. + @dont_look_inside + def residual(n=26): + x = X() + x.next = X() + x.next.foo = n + return x + # + def before(n, x): + residual(5) + x0 = residual() + x1 = residual() + x2 = residual() + x3 = residual() + x4 = residual() + x5 = residual() + x6 = residual() + x7 = residual() + n *= 19 + return n, None, x0, x1, x2, x3, x4, x5, x6, x7, None, None + def f(n, x, x0, x1, x2, x3, x4, x5, x6, x7, l, s): + x8 = residual() + x9 = residual() + check(x0.next.foo == 26) + check(x1.next.foo == 26) + check(x2.next.foo == 26) + check(x3.next.foo == 26) + check(x4.next.foo == 26) + check(x5.next.foo == 26) + check(x6.next.foo == 26) + check(x7.next.foo == 26) + check(x8.next.foo == 26) + check(x9.next.foo == 26) + x0, x1, x2, x3, x4, x5, x6, x7 = x7, x4, x6, x5, x3, x2, x9, x8 + n -= 1 + return n, None, x0, x1, x2, x3, x4, x5, x6, x7, None, None + return before, f, None + + def test_compile_framework_3_extra(self): + self.run_orig('compile_framework_3_extra', 6, None) # check that it does not raise CheckError + self.run('compile_framework_3_extra') + + def define_compile_framework_4(cls): + # Fourth version of the test, with __del__. 
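
The comments on compile_framework_2 and compile_framework_3 above distinguish tests that only need root stack enumeration from tests that really exercise the write barrier. As a purely conceptual sketch of what such a barrier does in a generational GC (illustrative only, not PyPy's implementation; the class and attribute names below are made up):

    # conceptual generational write barrier
    class ToyGC(object):
        def __init__(self):
            self.nursery = set()        # ids of young (recently allocated) objects
            self.remembered_set = []    # old objects that now point into the nursery

        def write_barrier(self, obj, value):
            # called before every pointer store of the form obj.field = value
            if id(obj) not in self.nursery and id(value) in self.nursery:
                self.remembered_set.append(obj)   # re-trace obj at the next minor collect

A minor collection then scans only the roots plus the remembered set, which is why a test that merely allocates fresh objects gets by without barriers, while compile_framework_3, which stores freshly allocated nodes into the long-lived x object, really needs them.
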
+ from pypy.rlib.debug import debug_print + class Counter: + cnt = 0 + counter = Counter() + class Z: + def __del__(self): + counter.cnt -= 1 + def before(n, x): + debug_print('counter.cnt =', counter.cnt) + check(counter.cnt < 5) + counter.cnt = n // x.foo + return n, x, None, None, None, None, None, None, None, None, None, None + def f(n, x, *args): + Z() + n -= x.foo + return (n, x) + args + return before, f, None + + def test_compile_framework_4(self): + self.run('compile_framework_4') + + def define_compile_framework_5(cls): + # Test string manipulation. + def f(n, x, x0, x1, x2, x3, x4, x5, x6, x7, l, s): + n -= x.foo + s += str(n) + return n, x, x0, x1, x2, x3, x4, x5, x6, x7, l, s + def after(n, x, x0, x1, x2, x3, x4, x5, x6, x7, l, s): + check(len(s) == 1*5 + 2*45 + 3*450 + 4*500) + return None, f, after + + def test_compile_framework_5(self): + self.run('compile_framework_5') + + def define_compile_framework_7(cls): + # Array of pointers (test the write barrier for setarrayitem_gc) + def before(n, x): + return n, x, None, None, None, None, None, None, None, None, [X(123)], None + def f(n, x, x0, x1, x2, x3, x4, x5, x6, x7, l, s): + if n < 1900: + check(l[0].x == 123) + l = [None] * 16 + l[0] = X(123) + l[1] = X(n) + l[2] = X(n+10) + l[3] = X(n+20) + l[4] = X(n+30) + l[5] = X(n+40) + l[6] = X(n+50) + l[7] = X(n+60) + l[8] = X(n+70) + l[9] = X(n+80) + l[10] = X(n+90) + l[11] = X(n+100) + l[12] = X(n+110) + l[13] = X(n+120) + l[14] = X(n+130) + l[15] = X(n+140) + if n < 1800: + check(len(l) == 16) + check(l[0].x == 123) + check(l[1].x == n) + check(l[2].x == n+10) + check(l[3].x == n+20) + check(l[4].x == n+30) + check(l[5].x == n+40) + check(l[6].x == n+50) + check(l[7].x == n+60) + check(l[8].x == n+70) + check(l[9].x == n+80) + check(l[10].x == n+90) + check(l[11].x == n+100) + check(l[12].x == n+110) + check(l[13].x == n+120) + check(l[14].x == n+130) + check(l[15].x == n+140) + n -= x.foo + return n, x, x0, x1, x2, x3, x4, x5, x6, x7, l, s + def after(n, x, x0, x1, x2, x3, x4, x5, x6, x7, l, s): + check(len(l) == 16) + check(l[0].x == 123) + check(l[1].x == 2) + check(l[2].x == 12) + check(l[3].x == 22) + check(l[4].x == 32) + check(l[5].x == 42) + check(l[6].x == 52) + check(l[7].x == 62) + check(l[8].x == 72) + check(l[9].x == 82) + check(l[10].x == 92) + check(l[11].x == 102) + check(l[12].x == 112) + check(l[13].x == 122) + check(l[14].x == 132) + check(l[15].x == 142) + return before, f, after + + def test_compile_framework_7(self): + self.run('compile_framework_7') + + def define_compile_framework_7_interior(cls): + # Array of structs containing pointers (test the write barrier + # for setinteriorfield_gc) + S = lltype.GcStruct('S', ('i', lltype.Signed)) + A = lltype.GcArray(lltype.Struct('entry', ('x', lltype.Ptr(S)), + ('y', lltype.Ptr(S)), + ('z', lltype.Ptr(S)))) + class Glob: + a = lltype.nullptr(A) + glob = Glob() + # + def make_s(i): + s = lltype.malloc(S) + s.i = i + return s + # + @unroll_safe + def f(n, x, x0, x1, x2, x3, x4, x5, x6, x7, l, s): + a = glob.a + if not a: + a = glob.a = lltype.malloc(A, 10) + i = 0 + while i < 10: + a[i].x = make_s(n + i * 100 + 1) + a[i].y = make_s(n + i * 100 + 2) + a[i].z = make_s(n + i * 100 + 3) + i += 1 + i = 0 + while i < 10: + check(a[i].x.i == n + i * 100 + 1) + check(a[i].y.i == n + i * 100 + 2) + check(a[i].z.i == n + i * 100 + 3) + i += 1 + n -= x.foo + return n, x, x0, x1, x2, x3, x4, x5, x6, x7, l, s + return None, f, None + + def test_compile_framework_7_interior(self): + self.run('compile_framework_7_interior') + + 
def define_compile_framework_8(cls): + # Array of pointers, of unknown length (test write_barrier_from_array) + def before(n, x): + return n, x, None, None, None, None, None, None, None, None, [X(123)], None + def f(n, x, x0, x1, x2, x3, x4, x5, x6, x7, l, s): + if n < 1900: + check(l[0].x == 123) + l = [None] * (16 + (n & 7)) + l[0] = X(123) + l[1] = X(n) + l[2] = X(n+10) + l[3] = X(n+20) + l[4] = X(n+30) + l[5] = X(n+40) + l[6] = X(n+50) + l[7] = X(n+60) + l[8] = X(n+70) + l[9] = X(n+80) + l[10] = X(n+90) + l[11] = X(n+100) + l[12] = X(n+110) + l[13] = X(n+120) + l[14] = X(n+130) + l[15] = X(n+140) + if n < 1800: + check(len(l) == 16 + (n & 7)) + check(l[0].x == 123) + check(l[1].x == n) + check(l[2].x == n+10) + check(l[3].x == n+20) + check(l[4].x == n+30) + check(l[5].x == n+40) + check(l[6].x == n+50) + check(l[7].x == n+60) + check(l[8].x == n+70) + check(l[9].x == n+80) + check(l[10].x == n+90) + check(l[11].x == n+100) + check(l[12].x == n+110) + check(l[13].x == n+120) + check(l[14].x == n+130) + check(l[15].x == n+140) + n -= x.foo + return n, x, x0, x1, x2, x3, x4, x5, x6, x7, l, s + def after(n, x, x0, x1, x2, x3, x4, x5, x6, x7, l, s): + check(len(l) >= 16) + check(l[0].x == 123) + check(l[1].x == 2) + check(l[2].x == 12) + check(l[3].x == 22) + check(l[4].x == 32) + check(l[5].x == 42) + check(l[6].x == 52) + check(l[7].x == 62) + check(l[8].x == 72) + check(l[9].x == 82) + check(l[10].x == 92) + check(l[11].x == 102) + check(l[12].x == 112) + check(l[13].x == 122) + check(l[14].x == 132) + check(l[15].x == 142) + return before, f, after + + def test_compile_framework_8(self): + self.run('compile_framework_8') + + def define_compile_framework_9(cls): + # Like compile_framework_8, but with variable indexes and large + # arrays, testing the card_marking case + def before(n, x): + return n, x, None, None, None, None, None, None, None, None, [X(123)], None + def f(n, x, x0, x1, x2, x3, x4, x5, x6, x7, l, s): + if n < 1900: + check(l[0].x == 123) + num = 512 + (n & 7) + l = [None] * num + l[0] = X(123) + l[1] = X(n) + l[2] = X(n+10) + l[3] = X(n+20) + l[4] = X(n+30) + l[5] = X(n+40) + l[6] = X(n+50) + l[7] = X(n+60) + l[num-8] = X(n+70) + l[num-9] = X(n+80) + l[num-10] = X(n+90) + l[num-11] = X(n+100) + l[-12] = X(n+110) + l[-13] = X(n+120) + l[-14] = X(n+130) + l[-15] = X(n+140) + if n < 1800: + num = 512 + (n & 7) + check(len(l) == num) + check(l[0].x == 123) + check(l[1].x == n) + check(l[2].x == n+10) + check(l[3].x == n+20) + check(l[4].x == n+30) + check(l[5].x == n+40) + check(l[6].x == n+50) + check(l[7].x == n+60) + check(l[num-8].x == n+70) + check(l[num-9].x == n+80) + check(l[num-10].x == n+90) + check(l[num-11].x == n+100) + check(l[-12].x == n+110) + check(l[-13].x == n+120) + check(l[-14].x == n+130) + check(l[-15].x == n+140) + n -= x.foo + return n, x, x0, x1, x2, x3, x4, x5, x6, x7, l, s + def after(n, x, x0, x1, x2, x3, x4, x5, x6, x7, l, s): + check(len(l) >= 512) + check(l[0].x == 123) + check(l[1].x == 2) + check(l[2].x == 12) + check(l[3].x == 22) + check(l[4].x == 32) + check(l[5].x == 42) + check(l[6].x == 52) + check(l[7].x == 62) + check(l[-8].x == 72) + check(l[-9].x == 82) + check(l[-10].x == 92) + check(l[-11].x == 102) + check(l[-12].x == 112) + check(l[-13].x == 122) + check(l[-14].x == 132) + check(l[-15].x == 142) + return before, f, after + + def test_compile_framework_9(self): + self.run('compile_framework_9') + + def define_compile_framework_external_exception_handling(cls): + def before(n, x): + x = X(0) + return n, x, None, None, None, None, 
None, None, None, None, None, None + + @dont_look_inside + def g(x): + if x > 200: + return 2 + raise ValueError + @dont_look_inside + def h(x): + if x > 150: + raise ValueError + return 2 + + def f(n, x, x0, x1, x2, x3, x4, x5, x6, x7, l, s): + try: + x.x += g(n) + except ValueError: + x.x += 1 + try: + x.x += h(n) + except ValueError: + x.x -= 1 + n -= 1 + return n, x, x0, x1, x2, x3, x4, x5, x6, x7, l, s + + def after(n, x, x0, x1, x2, x3, x4, x5, x6, x7, l, s): + check(x.x == 1800 * 2 + 1850 * 2 + 200 - 150) + + return before, f, None + + def test_compile_framework_external_exception_handling(self): + self.run('compile_framework_external_exception_handling') + + def define_compile_framework_bug1(self): + @elidable + def nonmoving(): + x = X(1) + for i in range(7): + rgc.collect() + return x + + @dont_look_inside + def do_more_stuff(): + x = X(5) + for i in range(7): + rgc.collect() + return x + + def f(n, x, x0, x1, x2, x3, x4, x5, x6, x7, l, s): + x0 = do_more_stuff() + check(nonmoving().x == 1) + n -= 1 + return n, x, x0, x1, x2, x3, x4, x5, x6, x7, l, s + + return None, f, None + + def test_compile_framework_bug1(self): + self.run('compile_framework_bug1', 200) + + def define_compile_framework_vref(self): + from pypy.rlib.jit import virtual_ref, virtual_ref_finish + class A: + pass + glob = A() + def f(n, x, x0, x1, x2, x3, x4, x5, x6, x7, l, s): + a = A() + glob.v = vref = virtual_ref(a) + virtual_ref_finish(vref, a) + n -= 1 + return n, x, x0, x1, x2, x3, x4, x5, x6, x7, l, s + return None, f, None + + def test_compile_framework_vref(self): + self.run('compile_framework_vref', 200) + + def define_compile_framework_float(self): + # test for a bug: the fastpath_malloc does not save and restore + # xmm registers around the actual call to the slow path + class A: + x0 = x1 = x2 = x3 = x4 = x5 = x6 = x7 = 0 + @dont_look_inside + def escape1(a): + a.x0 += 0 + a.x1 += 6 + a.x2 += 12 + a.x3 += 18 + a.x4 += 24 + a.x5 += 30 + a.x6 += 36 + a.x7 += 42 + @dont_look_inside + def escape2(n, f0, f1, f2, f3, f4, f5, f6, f7): + check(f0 == n + 0.0) + check(f1 == n + 0.125) + check(f2 == n + 0.25) + check(f3 == n + 0.375) + check(f4 == n + 0.5) + check(f5 == n + 0.625) + check(f6 == n + 0.75) + check(f7 == n + 0.875) + @unroll_safe + def f(n, x, x0, x1, x2, x3, x4, x5, x6, x7, l, s): + i = 0 + while i < 42: + m = n + i + f0 = m + 0.0 + f1 = m + 0.125 + f2 = m + 0.25 + f3 = m + 0.375 + f4 = m + 0.5 + f5 = m + 0.625 + f6 = m + 0.75 + f7 = m + 0.875 + a1 = A() + # at this point, all or most f's are still in xmm registers + escape1(a1) + escape2(m, f0, f1, f2, f3, f4, f5, f6, f7) + i += 1 + n -= 1 + return n, x, x0, x1, x2, x3, x4, x5, x6, x7, l, s + return None, f, None + + def test_compile_framework_float(self): + self.run('compile_framework_float') + + def define_compile_framework_minimal_size_in_nursery(self): + S = lltype.GcStruct('S') # no fields! 
+ T = lltype.GcStruct('T', ('i', lltype.Signed)) + @unroll_safe + def f42(n, x, x0, x1, x2, x3, x4, x5, x6, x7, l, s): + lst1 = [] + lst2 = [] + i = 0 + while i < 42: + s1 = lltype.malloc(S) + t1 = lltype.malloc(T) + t1.i = 10000 + i + n + lst1.append(s1) + lst2.append(t1) + i += 1 + i = 0 + while i < 42: + check(lst2[i].i == 10000 + i + n) + i += 1 + n -= 1 + return n, x, x0, x1, x2, x3, x4, x5, x6, x7, l, s + return None, f42, None + + def test_compile_framework_minimal_size_in_nursery(self): + self.run('compile_framework_minimal_size_in_nursery') + + +class TestShadowStack(CompileFrameworkTests): + gcrootfinder = "shadowstack" + diff --git a/pypy/translator/c/gc.py b/pypy/translator/c/gc.py --- a/pypy/translator/c/gc.py +++ b/pypy/translator/c/gc.py @@ -47,8 +47,7 @@ return ExternalCompilationInfo( pre_include_bits=['/* using %s */' % (gct.__class__.__name__,), '#define MALLOC_ZERO_FILLED %d' % (gct.malloc_zero_filled,), - ], - post_include_bits=['typedef void *GC_hidden_pointer;'] + ] ) def get_prebuilt_hash(self, obj): From noreply at buildbot.pypy.org Wed Feb 8 15:12:26 2012 From: noreply at buildbot.pypy.org (hager) Date: Wed, 8 Feb 2012 15:12:26 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: fix same bug again ... Message-ID: <20120208141226.109EE82B1E@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r52234:796fffeedaff Date: 2012-02-08 06:11 -0800 http://bitbucket.org/pypy/pypy/changeset/796fffeedaff/ Log: fix same bug again ... diff --git a/pypy/jit/backend/ppc/locations.py b/pypy/jit/backend/ppc/locations.py --- a/pypy/jit/backend/ppc/locations.py +++ b/pypy/jit/backend/ppc/locations.py @@ -110,4 +110,7 @@ return ImmLocation(val) def get_spp_offset(pos): - return -(pos + 1) * WORD + if pos < 0: + return -pos * WORD + else: + return -(pos + 1) * WORD From noreply at buildbot.pypy.org Wed Feb 8 16:22:16 2012 From: noreply at buildbot.pypy.org (hager) Date: Wed, 8 Feb 2012 16:22:16 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: remove comments and move of import to top of file Message-ID: <20120208152216.EE73482B1E@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r52235:aa6ac7b936e9 Date: 2012-02-08 16:19 +0100 http://bitbucket.org/pypy/pypy/changeset/aa6ac7b936e9/ Log: remove comments and move of import to top of file diff --git a/pypy/jit/backend/ppc/assembler.py b/pypy/jit/backend/ppc/assembler.py --- a/pypy/jit/backend/ppc/assembler.py +++ b/pypy/jit/backend/ppc/assembler.py @@ -1,5 +1,6 @@ import os from pypy.jit.backend.ppc import form +from pypy.jit.backend.ppc import asmfunc # don't be fooled by the fact that there's some separation between a # generic assembler class and a PPC assembler class... 
there's @@ -51,7 +52,7 @@ inst.fields[f] = l buf = [] for inst in self.insts: - buf.append(inst)#.assemble()) + buf.append(inst) if dump: for i in range(len(buf)): inst = self.disassemble(buf[i], self.rlabels, i*4) @@ -61,12 +62,8 @@ return buf def assemble(self, dump=os.environ.has_key('PPY_DEBUG')): - #insns = self.assemble0(dump) - from pypy.jit.backend.ppc import asmfunc c = asmfunc.AsmCode(len(self.insts)*4) for i in self.insts: - c.emit(i)#.assemble()) - #return c.get_function() def get_idescs(cls): r = [] From noreply at buildbot.pypy.org Wed Feb 8 16:22:19 2012 From: noreply at buildbot.pypy.org (hager) Date: Wed, 8 Feb 2012 16:22:19 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: kill old tests Message-ID: <20120208152219.0F7F282B1E@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r52236:ea34b0faf22c Date: 2012-02-08 16:20 +0100 http://bitbucket.org/pypy/pypy/changeset/ea34b0faf22c/ Log: kill old tests diff --git a/pypy/jit/backend/ppc/test/test_ppc.py b/pypy/jit/backend/ppc/test/test_ppc.py --- a/pypy/jit/backend/ppc/test/test_ppc.py +++ b/pypy/jit/backend/ppc/test/test_ppc.py @@ -325,183 +325,3 @@ def is_64_bit_arch(): import sys return sys.maxint == 9223372036854775807 - -""" -class TestAssemble(object): - - def setup_class(cls): - #if (not hasattr(os, 'uname') or - if autodetect_main_model() not in ["ppc", "ppc64"]: - #os.uname()[-1] in ['Power Macintosh', 'PPC64']: - - py.test.skip("can't test all of ppcgen on non-PPC!") - - def test_tuplelength(self): - a = MyPPCAssembler() - - a.lwz(3, 4, pystructs.PyVarObject.ob_size) - a.load_imm(5, lookup("PyInt_FromLong")) - a.mtctr(5) - a.bctr() - - f = a.assemble() - assert f() == 0 - assert f(1) == 1 - assert f('') == 1 - - - def test_tuplelength2(self): - a = MyPPCAssembler() - - a.mflr(0) - a.stw(0, 1, 8) - a.stwu(1, 1, -80) - a.mr(3, 4) - a.load_imm(5, lookup("PyTuple_Size")) - a.mtctr(5) - a.bctrl() - a.load_imm(5, lookup("PyInt_FromLong")) - a.mtctr(5) - a.bctrl() - a.lwz(0, 1, 88) - a.addi(1, 1, 80) - a.mtlr(0) - a.blr() - - f = a.assemble() - assert f() == 0 - assert f(1) == 1 - assert f('') == 1 - assert f('', 3) == 2 - - - def test_intcheck(self): - a = MyPPCAssembler() - - a.lwz(r5, r4, pystructs.PyVarObject.ob_size) - a.cmpwi(r5, 1) - a.bne("not_one") - a.lwz(r5, r4, pystructs.PyTupleObject.ob_item + 0*4) - a.lwz(r5, r5, 4) - a.load_imm(r6, lookup("PyInt_Type")) - a.cmpw(r5, r6) - a.bne("not_int") - a.li(r3, 1) - a.b("exit") - a.label("not_int") - a.li(r3, 0) - a.b("exit") - a.label("not_one") - a.li(r3, 2) - a.label("exit") - a.load_imm(r5, lookup("PyInt_FromLong")) - a.mtctr(r5) - a.bctr() - - f = a.assemble() - - assert f() == 2 - assert f("", "") == 2 - assert f("") == 0 - assert f(1) == 1 - - - def test_raise(self): - a = MyPPCAssembler() - - a.mflr(0) - a.stw(0, 1, 8) - a.stwu(1, 1, -80) - - err_set = lookup("PyErr_SetObject") - exc = lookup("PyExc_ValueError") - - a.load_imm(5, err_set) - a.mtctr(5) - a.load_from(3, exc) - a.mr(4, 3) - a.bctrl() - - a.li(3, 0) - - a.lwz(0, 1, 88) - a.addi(1, 1, 80) - a.mtlr(0) - a.blr() - - raises(ValueError, a.assemble()) - - - def test_makestring(self): - a = MyPPCAssembler() - - a.li(r3, 0) - a.li(r4, 0) - a.load_imm(r5, lookup("PyString_FromStringAndSize")) - a.mtctr(r5) - a.bctr() - - f = a.assemble() - assert f() == '' - - - def test_numberadd(self): - a = MyPPCAssembler() - - a.lwz(r5, r4, pystructs.PyVarObject.ob_size) - a.cmpwi(r5, 2) - a.bne("err_out") - - a.lwz(r3, r4, 12) - a.lwz(r4, r4, 16) - - a.load_imm(r5, 
lookup("PyNumber_Add")) - a.mtctr(r5) - a.bctr() - - a.label("err_out") - - a.mflr(r0) - a.stw(r0, r1, 8) - a.stwu(r1, r1, -80) - - err_set = lookup("PyErr_SetObject") - exc = lookup("PyExc_TypeError") - - a.load_imm(r5, err_set) - a.mtctr(r5) - a.load_from(r3, exc) - a.mr(r4, r3) - a.bctrl() - - a.li(r3, 0) - - a.lwz(r0, r1, 88) - a.addi(r1, r1, 80) - a.mtlr(r0) - a.blr() - - f = a.assemble() - - raises(TypeError, f) - raises(TypeError, f, '', 1) - raises(TypeError, f, 1) - raises(TypeError, f, 1, 2, 3) - assert f(1, 2) == 3 - assert f('a', 'b') == 'ab' - - - def test_assemblerChecks(self): - def testFailure(idesc, *args): - a = MyPPCAssembler() - raises(ValueError, idesc.__get__(a), *args) - def testSucceed(idesc, *args): - a = MyPPCAssembler() - # "assertNotRaises" :-) - idesc.__get__(a)(*args) - testFailure(MyPPCAssembler.add, 32, 31, 30) - testFailure(MyPPCAssembler.add, -1, 31, 30) - testSucceed(MyPPCAssembler.bne, -12) - testSucceed(MyPPCAssembler.lwz, 0, 0, 32767) - testSucceed(MyPPCAssembler.lwz, 0, 0, -32768) -""" From noreply at buildbot.pypy.org Wed Feb 8 16:22:21 2012 From: noreply at buildbot.pypy.org (hager) Date: Wed, 8 Feb 2012 16:22:21 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: repair test_ppc.py Message-ID: <20120208152221.890B782B1E@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r52237:7e373f1b0a4f Date: 2012-02-08 16:21 +0100 http://bitbucket.org/pypy/pypy/changeset/7e373f1b0a4f/ Log: repair test_ppc.py diff --git a/pypy/jit/backend/ppc/assembler.py b/pypy/jit/backend/ppc/assembler.py --- a/pypy/jit/backend/ppc/assembler.py +++ b/pypy/jit/backend/ppc/assembler.py @@ -64,6 +64,13 @@ def assemble(self, dump=os.environ.has_key('PPY_DEBUG')): c = asmfunc.AsmCode(len(self.insts)*4) for i in self.insts: + c.emit(i) + + def get_assembler_function(self): + c = asmfunc.AsmCode(len(self.insts)*4) + for i in self.insts: + c.emit(i) + return c.get_function() def get_idescs(cls): r = [] diff --git a/pypy/jit/backend/ppc/test/test_ppc.py b/pypy/jit/backend/ppc/test/test_ppc.py --- a/pypy/jit/backend/ppc/test/test_ppc.py +++ b/pypy/jit/backend/ppc/test/test_ppc.py @@ -33,7 +33,8 @@ def newtest(self): a = PPCBuilder() test(self, a) - f = a.assemble() + #f = a.assemble() + f = a.get_assembler_function() assert f() == expected return newtest return testmaker @@ -59,7 +60,7 @@ def setup_class(cls): if autodetect_main_model() not in ["ppc", "ppc64"]: py.test.skip("can't test all of ppcgen on non-PPC!") - py.test.xfail("assemble does not return a function any longer, fix tests") + #py.test.xfail("assemble does not return a function any longer, fix tests") """ Tests are build like this: @@ -202,7 +203,7 @@ a.bctr() a.blr() - f = a.assemble() + f = a.get_assembler_function() assert f() == 65 @asmtest(expected=0) @@ -263,7 +264,7 @@ a.load_imm(r10, 0x0000F0F0) a.neg(3, 10) a.blr() - f = a.assemble() + f = a.get_assembler_function() assert f() == hex_to_signed_int("FFFF0F10") def test_load_and_store(self): @@ -284,7 +285,7 @@ a.lwz(5, 9, 0) a.add(3, 4, 5) a.blr() - f = a.assemble() + f = a.get_assembler_function() assert f() == word1 + word2 lltype.free(p, flavor="raw") @@ -297,7 +298,7 @@ a.load_from_addr(r3, addr) a.blr() - f = a.assemble() + f = a.get_assembler_function() assert f() == 200 p[0] = rffi.cast(rffi.LONG, 300) assert f() == 300 From noreply at buildbot.pypy.org Wed Feb 8 16:30:21 2012 From: noreply at buildbot.pypy.org (hager) Date: Wed, 8 Feb 2012 16:30:21 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: kill more unused 
code Message-ID: <20120208153021.64A2382B1E@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r52238:18df2e1a3175 Date: 2012-02-08 16:30 +0100 http://bitbucket.org/pypy/pypy/changeset/18df2e1a3175/ Log: kill more unused code diff --git a/pypy/jit/backend/ppc/func_builder.py b/pypy/jit/backend/ppc/func_builder.py deleted file mode 100644 --- a/pypy/jit/backend/ppc/func_builder.py +++ /dev/null @@ -1,160 +0,0 @@ -from pypy.jit.backend.ppc.ppc_assembler import PPCAssembler -from pypy.jit.backend.ppc.symbol_lookup import lookup -from pypy.jit.backend.ppc.regname import * - -def load_arg(code, argi, typecode): - rD = r3+argi - code.lwz(rD, r4, 12 + 4*argi) - if typecode == 'i': - code.load_word(r0, lookup("PyInt_Type")) - code.lwz(r31, rD, 4) # XXX ick! - code.cmpw(r0, r31) - code.bne("argserror") - code.lwz(rD, rD, 8) - elif typecode == 'f': - code.load_word(r0, lookup("PyFloat_Type")) - code.lwz(r31, rD, 4) - code.cmpw(r0, r31) - code.bne("argserror") - code.lfd(rD-2, rD, 8) - elif typecode != "O": - raise Exception, "erk" - -FAST_ENTRY_LABEL = "FAST-ENTRY-LABEL" - -def make_func(code, retcode, signature, localwords=0): - """code shouldn't contain prologue/epilogue (or touch r31)""" - - stacksize = 80 + 4*localwords - - argcount = len(signature) - - ourcode = MyPPCAssembler() - ourcode.mflr(r0) - ourcode.stmw(r31, r1, -4) - ourcode.stw(r0, r1, 8) - ourcode.stwu(r1, r1, -stacksize) - - ourcode.lwz(r3, r4, 8) - ourcode.cmpwi(r3, argcount) - ourcode.bne("argserror") - - assert argcount < 9 - - if argcount > 0: - load_arg(ourcode, 0, signature[0]) - for i in range(2, argcount): - load_arg(ourcode, i, signature[i]) - if argcount > 1: - load_arg(ourcode, 1, signature[1]) - - ourcode.bl(FAST_ENTRY_LABEL) - - if retcode == 'i': - s = lookup("PyInt_FromLong") - ourcode.load_word(r0, s) - ourcode.mtctr(r0) - ourcode.bctrl() - elif retcode == 'f': - s = lookup("PyFloat_FromDouble") - ourcode.load_word(r0, s) - ourcode.mtctr(r0) - ourcode.bctrl() - - ourcode.label("epilogue") - ourcode.lwz(r0, r1, stacksize + 8) - ourcode.addi(r1, r1, stacksize) - ourcode.mtlr(r0) - ourcode.lmw(r31, r1, -4) - ourcode.blr() - - err_set = lookup("PyErr_SetObject") - exc = lookup("PyExc_TypeError") - - ourcode.label("argserror") - ourcode.load_word(r5, err_set) - ourcode.mtctr(r5) - ourcode.load_from(r3, exc) - ourcode.mr(r4, r3) - ourcode.bctrl() - - ourcode.li(r3, 0) - ourcode.b("epilogue") - - ourcode.label(FAST_ENTRY_LABEL) - # err, should be an Assembler method: - l = {} - for k in code.labels: - l[k] = code.labels[k] + 4*len(ourcode.insts) - r = code.rlabels.copy() - for k in code.rlabels: - r[k + 4*len(ourcode.insts)] = code.rlabels[k] - ourcode.insts.extend(code.insts) - ourcode.labels.update(l) - ourcode.rlabels.update(r) - - r = ourcode.assemble() - r.FAST_ENTRY_LABEL = ourcode.labels[FAST_ENTRY_LABEL] - return r - -def wrap(funcname, retcode, signature): - - argcount = len(signature) - - ourcode = MyPPCAssembler() - ourcode.mflr(r0) - ourcode.stmw(r31, r1, -4) - ourcode.stw(r0, r1, 8) - ourcode.stwu(r1, r1, -80) - - ourcode.lwz(r3, r4, 8) - ourcode.cmpwi(r3, argcount) - ourcode.bne("argserror") - - assert argcount < 9 - - if argcount > 0: - load_arg(ourcode, 0, signature[0]) - for i in range(2, argcount): - load_arg(ourcode, i, signature[i]) - if argcount > 1: - load_arg(ourcode, 1, signature[1]) - - - ourcode.load_word(r0, lookup(funcname)) - ourcode.mtctr(r0) - ourcode.bctrl() - - if retcode == 'i': - s = lookup("PyInt_FromLong") - ourcode.load_word(r0, s) - ourcode.mtctr(r0) - 
ourcode.bctrl() - elif retcode == 'f': - s = lookup("PyFloat_FromDouble") - ourcode.load_word(r0, s) - ourcode.mtctr(r0) - ourcode.bctrl() - - ourcode.label("epilogue") - ourcode.lwz(r0, r1, 88) - ourcode.addi(r1, r1, 80) - ourcode.mtlr(r0) - ourcode.lmw(r31, r1, -4) - ourcode.blr() - - err_set = lookup("PyErr_SetObject") - exc = lookup("PyExc_TypeError") - - ourcode.label("argserror") - ourcode.load_word(r5, err_set) - ourcode.mtctr(r5) - ourcode.load_from(r3, exc) - ourcode.mr(r4, r3) - ourcode.bctrl() - - ourcode.li(r3, 0) - ourcode.b("epilogue") - - return ourcode.assemble() - diff --git a/pypy/jit/backend/ppc/test/test_func_builder.py b/pypy/jit/backend/ppc/test/test_func_builder.py deleted file mode 100644 --- a/pypy/jit/backend/ppc/test/test_func_builder.py +++ /dev/null @@ -1,87 +0,0 @@ -import py -import random, sys, os - -from pypy.jit.backend.ppc.ppc_assembler import MyPPCAssembler -from pypy.jit.backend.ppc.symbol_lookup import lookup -from pypy.jit.backend.ppc.func_builder import make_func -from pypy.jit.backend.ppc import form, func_builder -from pypy.jit.backend.ppc.regname import * -from pypy.jit.backend.detect_cpu import autodetect_main_model - -class TestFuncBuilderTest(object): - def setup_class(cls): - if autodetect_main_model() not in ["ppc", "ppc64"]: - py.test.skip("can't test all of ppcgen on non-PPC!") - - def test_simple(self): - a = MyPPCAssembler() - a.blr() - f = make_func(a, "O", "O") - assert f(1) == 1 - raises(TypeError, f) - raises(TypeError, f, 1, 2) - - def test_less_simple(self): - a = MyPPCAssembler() - s = lookup("PyNumber_Add") - a.load_word(r5, s) - a.mtctr(r5) - a.bctr() - f = make_func(a, "O", "OO") - raises(TypeError, f) - raises(TypeError, f, 1) - assert f(1, 2) == 3 - raises(TypeError, f, 1, 2, 3) - - def test_signature(self): - a = MyPPCAssembler() - a.add(r3, r3, r4) - a.blr() - f = make_func(a, "i", "ii") - raises(TypeError, f) - raises(TypeError, f, 1) - assert f(1, 2) == 3 - raises(TypeError, f, 1, 2, 3) - raises(TypeError, f, 1, "2") - - def test_signature2(self): - a = MyPPCAssembler() - a.add(r3, r3, r4) - a.add(r3, r3, r5) - a.add(r3, r3, r6) - a.add(r3, r3, r7) - s = lookup("PyInt_FromLong") - a.load_word(r0, s) - a.mtctr(r0) - a.bctr() - f = make_func(a, "O", "iiiii") - raises(TypeError, f) - raises(TypeError, f, 1) - assert f(1, 2, 3, 4, 5) == 1 + 2 + 3 + 4 + 5 - raises(TypeError, f, 1, 2, 3) - raises(TypeError, f, 1, "2", 3, 4, 5) - - def test_floats(self): - a = MyPPCAssembler() - a.fadd(fr1, fr1, fr2) - a.blr() - f = make_func(a, 'f', 'ff') - raises(TypeError, f) - raises(TypeError, f, 1.0) - assert f(1.0, 2.0) == 3.0 - raises(TypeError, f, 1.0, 2.0, 3.0) - raises(TypeError, f, 1.0, 2) - - def test_fast_entry(self): - a = MyPPCAssembler() - a.blr() - f = make_func(a, "O", "O") - assert f(1) == 1 - b = MyPPCAssembler() - from pypy.jit.backend.ppc import util - # eurgh!: - b.load_word(r0, util.access_at(id(f.code), 8) + f.FAST_ENTRY_LABEL) - b.mtctr(r0) - b.bctr() - g = make_func(b, "O", "O") - assert g(1) == 1 diff --git a/pypy/jit/backend/ppc/util.py b/pypy/jit/backend/ppc/util.py deleted file mode 100644 --- a/pypy/jit/backend/ppc/util.py +++ /dev/null @@ -1,23 +0,0 @@ -from pypy.jit.codegen.ppc.ppc_assembler import MyPPCAssembler -from pypy.jit.codegen.ppc.func_builder import make_func - -from regname import * - -def access_at(): - a = MyPPCAssembler() - - a.lwzx(r3, r3, r4) - a.blr() - - return make_func(a, "i", "ii") - -access_at = access_at() - -def itoO(): - a = MyPPCAssembler() - - a.blr() - - return make_func(a, "O", "i") 
- -itoO = itoO() From noreply at buildbot.pypy.org Wed Feb 8 16:41:49 2012 From: noreply at buildbot.pypy.org (hager) Date: Wed, 8 Feb 2012 16:41:49 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: fix test_rassemblermaker.py Message-ID: <20120208154149.9833282B1E@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r52239:ff1d250ee0e3 Date: 2012-02-08 16:41 +0100 http://bitbucket.org/pypy/pypy/changeset/ff1d250ee0e3/ Log: fix test_rassemblermaker.py diff --git a/pypy/jit/backend/ppc/test/test_rassemblermaker.py b/pypy/jit/backend/ppc/test/test_rassemblermaker.py --- a/pypy/jit/backend/ppc/test/test_rassemblermaker.py +++ b/pypy/jit/backend/ppc/test/test_rassemblermaker.py @@ -1,11 +1,12 @@ from pypy.jit.backend.ppc.rassemblermaker import make_rassembler -from pypy.jit.backend.ppc.ppc_assembler import PPCAssembler +#from pypy.jit.backend.ppc.ppc_assembler import PPCAssembler +from pypy.jit.backend.ppc.codebuilder import PPCAssembler RPPCAssembler = make_rassembler(PPCAssembler) _a = PPCAssembler() _a.add(3, 3, 4) -add_r3_r3_r4 = _a.insts[0].assemble() +add_r3_r3_r4 = _a.insts[0] def test_simple(): ra = RPPCAssembler() @@ -28,7 +29,7 @@ a = A() a.mr(3, 4) mrs.append(a.insts[0]) - assert mrs[0].assemble() == mrs[1] + assert mrs[0] == mrs[1] def test_spr_coding(): mrs = [] @@ -36,4 +37,4 @@ a = A() a.mtctr(3) mrs.append(a.insts[0]) - assert mrs[0].assemble() == mrs[1] + assert mrs[0] == mrs[1] From noreply at buildbot.pypy.org Wed Feb 8 16:49:15 2012 From: noreply at buildbot.pypy.org (hager) Date: Wed, 8 Feb 2012 16:49:15 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: remove comment Message-ID: <20120208154915.C54AA82B1E@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r52240:6c953037efff Date: 2012-02-08 16:44 +0100 http://bitbucket.org/pypy/pypy/changeset/6c953037efff/ Log: remove comment diff --git a/pypy/jit/backend/ppc/test/test_rassemblermaker.py b/pypy/jit/backend/ppc/test/test_rassemblermaker.py --- a/pypy/jit/backend/ppc/test/test_rassemblermaker.py +++ b/pypy/jit/backend/ppc/test/test_rassemblermaker.py @@ -1,5 +1,4 @@ from pypy.jit.backend.ppc.rassemblermaker import make_rassembler -#from pypy.jit.backend.ppc.ppc_assembler import PPCAssembler from pypy.jit.backend.ppc.codebuilder import PPCAssembler RPPCAssembler = make_rassembler(PPCAssembler) From noreply at buildbot.pypy.org Wed Feb 8 16:49:17 2012 From: noreply at buildbot.pypy.org (hager) Date: Wed, 8 Feb 2012 16:49:17 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: repair test_form.py Message-ID: <20120208154917.03FD382B1E@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r52241:95763b51d270 Date: 2012-02-08 16:48 +0100 http://bitbucket.org/pypy/pypy/changeset/95763b51d270/ Log: repair test_form.py diff --git a/pypy/jit/backend/ppc/test/test_form.py b/pypy/jit/backend/ppc/test/test_form.py --- a/pypy/jit/backend/ppc/test/test_form.py +++ b/pypy/jit/backend/ppc/test/test_form.py @@ -1,5 +1,5 @@ import autopath -from pypy.jit.backend.ppc.ppc_assembler import b +from pypy.jit.backend.ppc.codebuilder import b import random import sys @@ -25,9 +25,9 @@ def p(w): import struct + w = w.assemble() return struct.pack('>i', w) - class TestForm(Form): fieldmap = test_fieldmap From noreply at buildbot.pypy.org Wed Feb 8 16:56:55 2012 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Wed, 8 Feb 2012 16:56:55 +0100 (CET) Subject: [pypy-commit] pypy default: #1034 -- added __rpow__ to numpy boxes Message-ID: 
<20120208155655.114B682B1E@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: Changeset: r52242:5170d7047e5d Date: 2012-02-08 10:56 -0500 http://bitbucket.org/pypy/pypy/changeset/5170d7047e5d/ Log: #1034 -- added __rpow__ to numpy boxes diff --git a/pypy/module/micronumpy/interp_boxes.py b/pypy/module/micronumpy/interp_boxes.py --- a/pypy/module/micronumpy/interp_boxes.py +++ b/pypy/module/micronumpy/interp_boxes.py @@ -92,6 +92,7 @@ descr_radd = _binop_right_impl("add") descr_rsub = _binop_right_impl("subtract") descr_rmul = _binop_right_impl("multiply") + descr_rpow = _binop_right_impl("power") descr_neg = _unaryop_impl("negative") descr_abs = _unaryop_impl("absolute") @@ -181,6 +182,7 @@ __radd__ = interp2app(W_GenericBox.descr_radd), __rsub__ = interp2app(W_GenericBox.descr_rsub), __rmul__ = interp2app(W_GenericBox.descr_rmul), + __rpow__ = interp2app(W_GenericBox.descr_rpow), __eq__ = interp2app(W_GenericBox.descr_eq), __ne__ = interp2app(W_GenericBox.descr_ne), diff --git a/pypy/module/micronumpy/test/test_dtypes.py b/pypy/module/micronumpy/test/test_dtypes.py --- a/pypy/module/micronumpy/test/test_dtypes.py +++ b/pypy/module/micronumpy/test/test_dtypes.py @@ -407,3 +407,4 @@ from _numpypy import float64, int_ assert truediv(int_(3), int_(2)) == float64(1.5) + assert 2 ** int_(3) == int_(8) From noreply at buildbot.pypy.org Wed Feb 8 16:57:07 2012 From: noreply at buildbot.pypy.org (hager) Date: Wed, 8 Feb 2012 16:57:07 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: remove obsolete test Message-ID: <20120208155707.1237682B1E@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r52243:9de5bd3caa11 Date: 2012-02-08 16:56 +0100 http://bitbucket.org/pypy/pypy/changeset/9de5bd3caa11/ Log: remove obsolete test diff --git a/pypy/jit/backend/ppc/test/test_rgenop.py b/pypy/jit/backend/ppc/test/test_rgenop.py deleted file mode 100644 --- a/pypy/jit/backend/ppc/test/test_rgenop.py +++ /dev/null @@ -1,33 +0,0 @@ -import py -from pypy.jit.codegen.ppc.rgenop import RPPCGenOp -from pypy.rpython.lltypesystem import lltype -from pypy.jit.codegen.test.rgenop_tests import FUNC, FUNC2 -from pypy.jit.codegen.test.rgenop_tests import AbstractRGenOpTestsDirect -from pypy.jit.codegen.test.rgenop_tests import AbstractRGenOpTestsCompile -from ctypes import cast, c_int, c_void_p, CFUNCTYPE -from pypy.jit.codegen.ppc import instruction as insn - -# for the individual tests see -# ====> ../../test/rgenop_tests.py - -class FewRegisters(RPPCGenOp): - freeregs = { - insn.GP_REGISTER:insn.gprs[3:6], - insn.FP_REGISTER:insn.fprs, - insn.CR_FIELD:insn.crfs[:1], - insn.CT_REGISTER:[insn.ctr]} - -class FewRegistersAndScribble(FewRegisters): - DEBUG_SCRIBBLE = True - -class TestRPPCGenopDirect(AbstractRGenOpTestsDirect): - RGenOp = RPPCGenOp - -class TestRPPCGenopCompile(AbstractRGenOpTestsCompile): - RGenOp = RPPCGenOp - -class TestRPPCGenopNoRegs(AbstractRGenOpTestsDirect): - RGenOp = FewRegisters - -class TestRPPCGenopNoRegsAndScribble(AbstractRGenOpTestsDirect): - RGenOp = FewRegistersAndScribble From noreply at buildbot.pypy.org Wed Feb 8 17:01:22 2012 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Wed, 8 Feb 2012 17:01:22 +0100 (CET) Subject: [pypy-commit] pypy default: added __and__ to numpy boxes Message-ID: <20120208160122.8A6F682B1E@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: Changeset: r52244:4412e08f0167 Date: 2012-02-08 11:01 -0500 http://bitbucket.org/pypy/pypy/changeset/4412e08f0167/ Log: added __and__ to numpy boxes diff --git 
a/pypy/module/micronumpy/interp_boxes.py b/pypy/module/micronumpy/interp_boxes.py --- a/pypy/module/micronumpy/interp_boxes.py +++ b/pypy/module/micronumpy/interp_boxes.py @@ -82,6 +82,8 @@ descr_div = _binop_impl("divide") descr_truediv = _binop_impl("true_divide") descr_pow = _binop_impl("power") + descr_and = _binop_right_impl("bitwise_and") + descr_eq = _binop_impl("equal") descr_ne = _binop_impl("not_equal") descr_lt = _binop_impl("less") @@ -178,6 +180,7 @@ __div__ = interp2app(W_GenericBox.descr_div), __truediv__ = interp2app(W_GenericBox.descr_truediv), __pow__ = interp2app(W_GenericBox.descr_pow), + __and__ = interp2app(W_GenericBox.descr_and), __radd__ = interp2app(W_GenericBox.descr_radd), __rsub__ = interp2app(W_GenericBox.descr_rsub), diff --git a/pypy/module/micronumpy/test/test_dtypes.py b/pypy/module/micronumpy/test/test_dtypes.py --- a/pypy/module/micronumpy/test/test_dtypes.py +++ b/pypy/module/micronumpy/test/test_dtypes.py @@ -408,3 +408,5 @@ assert truediv(int_(3), int_(2)) == float64(1.5) assert 2 ** int_(3) == int_(8) + assert int_(3) & int_(1) == int_(1) + raises(TypeError, lambda: float64(3) & 1) From noreply at buildbot.pypy.org Wed Feb 8 17:04:46 2012 From: noreply at buildbot.pypy.org (hager) Date: Wed, 8 Feb 2012 17:04:46 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: remove further obsolete test Message-ID: <20120208160446.8852F82B1E@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r52245:2a3244286b2a Date: 2012-02-08 17:04 +0100 http://bitbucket.org/pypy/pypy/changeset/2a3244286b2a/ Log: remove further obsolete test diff --git a/pypy/jit/backend/ppc/test/test_operation.py b/pypy/jit/backend/ppc/test/test_operation.py deleted file mode 100644 --- a/pypy/jit/backend/ppc/test/test_operation.py +++ /dev/null @@ -1,43 +0,0 @@ -from pypy.jit.codegen.test.operation_tests import OperationTests -from pypy.jit.codegen.ppc.rgenop import RPPCGenOp -from pypy.rpython.memory.lltypelayout import convert_offset_to_int -from pypy.rlib.objectmodel import specialize - -def conv(n): - if not isinstance(n, int): - n = convert_offset_to_int(n) - return n - - -class RGenOpPacked(RPPCGenOp): - """Like RPPCGenOp, but produces concrete offsets in the tokens - instead of llmemory.offsets. These numbers may not agree with - your C compiler's. 
- """ - - @staticmethod - @specialize.memo() - def fieldToken(T, name): - return tuple(map(conv, RPPCGenOp.fieldToken(T, name))) - - @staticmethod - @specialize.memo() - def arrayToken(A): - return tuple(map(conv, RPPCGenOp.arrayToken(A))) - - @staticmethod - @specialize.memo() - def allocToken(T): - return conv(RPPCGenOp.allocToken(T)) - - @staticmethod - @specialize.memo() - def varsizeAllocToken(A): - return tuple(map(conv, RPPCGenOp.varsizeAllocToken(A))) - - -class PPCTestMixin(object): - RGenOp = RGenOpPacked - -class TestOperation(PPCTestMixin, OperationTests): - pass From noreply at buildbot.pypy.org Wed Feb 8 17:24:25 2012 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Wed, 8 Feb 2012 17:24:25 +0100 (CET) Subject: [pypy-commit] pypy default: addded __pos__ and __invert__ to numpy boxes Message-ID: <20120208162425.EFCF682B1E@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: Changeset: r52246:69d95ab3a482 Date: 2012-02-08 11:24 -0500 http://bitbucket.org/pypy/pypy/changeset/69d95ab3a482/ Log: addded __pos__ and __invert__ to numpy boxes diff --git a/pypy/module/micronumpy/interp_boxes.py b/pypy/module/micronumpy/interp_boxes.py --- a/pypy/module/micronumpy/interp_boxes.py +++ b/pypy/module/micronumpy/interp_boxes.py @@ -96,8 +96,10 @@ descr_rmul = _binop_right_impl("multiply") descr_rpow = _binop_right_impl("power") + descr_pos = _unaryop_impl("positive") descr_neg = _unaryop_impl("negative") descr_abs = _unaryop_impl("absolute") + descr_invert = _unaryop_impl("invert") def item(self, space): return self.get_dtype(space).itemtype.to_builtin_type(space, self) @@ -194,8 +196,10 @@ __gt__ = interp2app(W_GenericBox.descr_gt), __ge__ = interp2app(W_GenericBox.descr_ge), + __pos__ = interp2app(W_GenericBox.descr_pos), __neg__ = interp2app(W_GenericBox.descr_neg), __abs__ = interp2app(W_GenericBox.descr_abs), + __invert__ = interp2app(W_GenericBox.descr_invert), tolist = interp2app(W_GenericBox.item), ) diff --git a/pypy/module/micronumpy/test/test_dtypes.py b/pypy/module/micronumpy/test/test_dtypes.py --- a/pypy/module/micronumpy/test/test_dtypes.py +++ b/pypy/module/micronumpy/test/test_dtypes.py @@ -410,3 +410,7 @@ assert 2 ** int_(3) == int_(8) assert int_(3) & int_(1) == int_(1) raises(TypeError, lambda: float64(3) & 1) + + assert +int_(3) == int_(3) + assert ~int_(3) == int_(-4) + From noreply at buildbot.pypy.org Wed Feb 8 17:36:00 2012 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Wed, 8 Feb 2012 17:36:00 +0100 (CET) Subject: [pypy-commit] pypy default: added several new methods to numpy boxes Message-ID: <20120208163600.5740382B1E@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: Changeset: r52247:886b1bb4c5ce Date: 2012-02-08 11:35 -0500 http://bitbucket.org/pypy/pypy/changeset/886b1bb4c5ce/ Log: added several new methods to numpy boxes diff --git a/pypy/module/micronumpy/interp_boxes.py b/pypy/module/micronumpy/interp_boxes.py --- a/pypy/module/micronumpy/interp_boxes.py +++ b/pypy/module/micronumpy/interp_boxes.py @@ -81,8 +81,11 @@ descr_mul = _binop_impl("multiply") descr_div = _binop_impl("divide") descr_truediv = _binop_impl("true_divide") + descr_mod = _binop_impl("mod") descr_pow = _binop_impl("power") - descr_and = _binop_right_impl("bitwise_and") + descr_and = _binop_impl("bitwise_and") + descr_or = _binop_impl("bitwise_or") + descr_xor = _binop_impl("bitwise_xor") descr_eq = _binop_impl("equal") descr_ne = _binop_impl("not_equal") @@ -181,8 +184,11 @@ __mul__ = interp2app(W_GenericBox.descr_mul), __div__ = 
interp2app(W_GenericBox.descr_div), __truediv__ = interp2app(W_GenericBox.descr_truediv), + __mod__ = interp2app(W_GenericBox.descr_mod), __pow__ = interp2app(W_GenericBox.descr_pow), __and__ = interp2app(W_GenericBox.descr_and), + __or__ = interp2app(W_GenericBox.descr_or), + __xor__ = interp2app(W_GenericBox.descr_xor), __radd__ = interp2app(W_GenericBox.descr_radd), __rsub__ = interp2app(W_GenericBox.descr_rsub), diff --git a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py --- a/pypy/module/micronumpy/interp_ufuncs.py +++ b/pypy/module/micronumpy/interp_ufuncs.py @@ -383,9 +383,10 @@ ("subtract", "sub", 2), ("multiply", "mul", 2, {"identity": 1}), ("bitwise_and", "bitwise_and", 2, {"identity": 1, - 'int_only': True}), + "int_only": True}), ("bitwise_or", "bitwise_or", 2, {"identity": 0, - 'int_only': True}), + "int_only": True}), + ("bitwise_xor", "bitwise_xor", 2, {"int_only": True}), ("invert", "invert", 1, {"int_only": True}), ("divide", "div", 2, {"promote_bools": True}), ("true_divide", "div", 2, {"promote_to_float": True}), diff --git a/pypy/module/micronumpy/test/test_dtypes.py b/pypy/module/micronumpy/test/test_dtypes.py --- a/pypy/module/micronumpy/test/test_dtypes.py +++ b/pypy/module/micronumpy/test/test_dtypes.py @@ -404,12 +404,16 @@ def test_operators(self): from operator import truediv - from _numpypy import float64, int_ + from _numpypy import float64, int_, True_, False_ assert truediv(int_(3), int_(2)) == float64(1.5) assert 2 ** int_(3) == int_(8) assert int_(3) & int_(1) == int_(1) raises(TypeError, lambda: float64(3) & 1) + assert int_(8) % int_(3) == int_(2) + assert int_(2) | int_(1) == int_(3) + assert int_(3) ^ int_(5) == int_(6) + assert True_ ^ False_ is True_ assert +int_(3) == int_(3) assert ~int_(3) == int_(-4) diff --git a/pypy/module/micronumpy/types.py b/pypy/module/micronumpy/types.py --- a/pypy/module/micronumpy/types.py +++ b/pypy/module/micronumpy/types.py @@ -59,10 +59,6 @@ class BaseType(object): def _unimplemented_ufunc(self, *args): raise NotImplementedError - # add = sub = mul = div = mod = pow = eq = ne = lt = le = gt = ge = max = \ - # min = copysign = pos = neg = abs = sign = reciprocal = fabs = floor = \ - # exp = sin = cos = tan = arcsin = arccos = arctan = arcsinh = \ - # arctanh = _unimplemented_ufunc class Primitive(object): _mixin_ = True @@ -253,6 +249,10 @@ def bitwise_or(self, v1, v2): return v1 | v2 + @simple_binary_op + def bitwise_xor(self, v1, v2): + return v1 ^ v2 + @simple_unary_op def invert(self, v): return ~v @@ -313,6 +313,10 @@ def bitwise_or(self, v1, v2): return v1 | v2 + @simple_binary_op + def bitwise_xor(self, v1, v2): + return v1 ^ v2 + @simple_unary_op def invert(self, v): return ~v From noreply at buildbot.pypy.org Wed Feb 8 19:03:30 2012 From: noreply at buildbot.pypy.org (fijal) Date: Wed, 8 Feb 2012 19:03:30 +0100 (CET) Subject: [pypy-commit] pypy default: update the release announcement Message-ID: <20120208180330.BF21B82B1E@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: Changeset: r52248:70757df2dfad Date: 2012-02-08 20:01 +0200 http://bitbucket.org/pypy/pypy/changeset/70757df2dfad/ Log: update the release announcement diff --git a/pypy/doc/release-1.8.0.rst b/pypy/doc/release-1.8.0.rst --- a/pypy/doc/release-1.8.0.rst +++ b/pypy/doc/release-1.8.0.rst @@ -20,7 +20,8 @@ due to its integrated tracing JIT compiler. This release supports x86 machines running Linux 32/64, Mac OS X 32/64 or -Windows 32. Windows 64 work is ongoing, but not yet natively supported. 
+Windows 32. Windows 64 work has been stalled, we would welcome a volunteer +to handle that. .. _`pypy 1.8 and cpython 2.7.1`: http://speed.pypy.org @@ -52,11 +53,36 @@ * a lot of other minor changes + Right now the `numpy` module is available under both `numpy` and `numpypy` + names. However, because it's incomplete, you have to `import numpypy` first + before doing any imports from `numpy`. + +* New JIT hooks that allow you to hook into the JIT process from your python + program. There is a `brief overview`_ of what they offer. + * Since the last release there was a significant breakthrough in PyPy's fundraising. We now have enough funds to work on first stages of `numpypy`_ - and `py3k`_ + and `py3k`_. We would like to thank again to everyone who donated. + It's also probably worth noting, we're considering donations for the STM + project. + +Ongoing work +============ + +As usual, there is quite a bit of ongoing work that either didn't make it to +the release or is not ready yet. Highlights include: + +* Specialized type instances - allocate instances as efficient as C structs, + including type specialization + +* More numpy work + +* Software Transactional Memory, you can read more about `our plans`_ + +.. _`brief overview`: http://doc.pypy.org/en/latest/jit-hooks.html .. _`numpy status page`: http://buildbot.pypy.org/numpy-status/latest.html .. _`numpy status update blog report`: http://morepypy.blogspot.com/2012/01/numpypy-status-update.html .. _`numpypy`: http://pypy.org/numpydonate.html .. _`py3k`: http://pypy.org/py3donate.html +.. _`our plans`: http://morepypy.blogspot.com/2012/01/transactional-memory-ii.html From noreply at buildbot.pypy.org Wed Feb 8 19:03:32 2012 From: noreply at buildbot.pypy.org (fijal) Date: Wed, 8 Feb 2012 19:03:32 +0100 (CET) Subject: [pypy-commit] pypy default: somehow document the jit hooks Message-ID: <20120208180332.0134082CE3@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: Changeset: r52249:a3356efe6bbd Date: 2012-02-08 20:02 +0200 http://bitbucket.org/pypy/pypy/changeset/a3356efe6bbd/ Log: somehow document the jit hooks diff --git a/pypy/doc/jit-hooks.rst b/pypy/doc/jit-hooks.rst new file mode 100644 --- /dev/null +++ b/pypy/doc/jit-hooks.rst @@ -0,0 +1,66 @@ +JIT hooks in PyPy +================= + +There are several hooks in the `pypyjit` module that may help you with +understanding what's pypy's JIT doing while running your program. There +are three functions related to that coming from the `pypyjit` module: + +* `set_optimize_hook`:: + + Set a compiling hook that will be called each time a loop is optimized, + but before assembler compilation. This allows to add additional + optimizations on Python level. + + The hook will be called with the following signature: + hook(jitdriver_name, loop_type, greenkey or guard_number, operations) + + jitdriver_name is the name of this particular jitdriver, 'pypyjit' is + the main interpreter loop + + loop_type can be either `loop` `entry_bridge` or `bridge` + in case loop is not `bridge`, greenkey will be a tuple of constants + or a string describing it. + + for the interpreter loop` it'll be a tuple + (code, offset, is_being_profiled) + + Note that jit hook is not reentrant. It means that if the code + inside the jit hook is itself jitted, it will get compiled, but the + jit hook won't be called for that. + + Result value will be the resulting list of operations, or None + +* `set_compile_hook`:: + + Set a compiling hook that will be called each time a loop is compiled. 
+ The hook will be called with the following signature: + hook(jitdriver_name, loop_type, greenkey or guard_number, operations, + assembler_addr, assembler_length) + + jitdriver_name is the name of this particular jitdriver, 'pypyjit' is + the main interpreter loop + + loop_type can be either `loop` `entry_bridge` or `bridge` + in case loop is not `bridge`, greenkey will be a tuple of constants + or a string describing it. + + for the interpreter loop` it'll be a tuple + (code, offset, is_being_profiled) + + assembler_addr is an integer describing where assembler starts, + can be accessed via ctypes, assembler_lenght is the lenght of compiled + asm + + Note that jit hook is not reentrant. It means that if the code + inside the jit hook is itself jitted, it will get compiled, but the + jit hook won't be called for that. + +* `set_abort_hook`:: + + Set a hook (callable) that will be called each time there is tracing + aborted due to some reason. + + The hook will be called as in: hook(jitdriver_name, greenkey, reason) + + Where reason is the reason for abort, see documentation for set_compile_hook + for descriptions of other arguments. diff --git a/pypy/doc/jit/index.rst b/pypy/doc/jit/index.rst --- a/pypy/doc/jit/index.rst +++ b/pypy/doc/jit/index.rst @@ -21,6 +21,9 @@ - Notes_ about the current work in PyPy +- Hooks_ debugging facilities available to a python programmer + .. _Overview: overview.html .. _Notes: pyjitpl5.html +.. _Hooks: ../jit-hooks.html From noreply at buildbot.pypy.org Wed Feb 8 19:03:33 2012 From: noreply at buildbot.pypy.org (fijal) Date: Wed, 8 Feb 2012 19:03:33 +0100 (CET) Subject: [pypy-commit] pypy default: merge Message-ID: <20120208180333.5073282B1E@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: Changeset: r52250:b161b2e423b2 Date: 2012-02-08 20:03 +0200 http://bitbucket.org/pypy/pypy/changeset/b161b2e423b2/ Log: merge diff --git a/lib_pypy/numpy.py b/lib_pypy/numpy.py new file mode 100644 --- /dev/null +++ b/lib_pypy/numpy.py @@ -0,0 +1,5 @@ +raise ImportError( + "The 'numpy' module of PyPy is in-development and not complete. 
" + "To try it out anyway, you can either import from 'numpypy', " + "or just write 'import numpypy' first in your program and then " + "import from 'numpy' as usual.") diff --git a/lib_pypy/numpypy/__init__.py b/lib_pypy/numpypy/__init__.py --- a/lib_pypy/numpypy/__init__.py +++ b/lib_pypy/numpypy/__init__.py @@ -1,2 +1,5 @@ from _numpypy import * from .core import * + +import sys +sys.modules.setdefault('numpy', sys.modules['numpypy']) diff --git a/pypy/module/micronumpy/interp_boxes.py b/pypy/module/micronumpy/interp_boxes.py --- a/pypy/module/micronumpy/interp_boxes.py +++ b/pypy/module/micronumpy/interp_boxes.py @@ -81,7 +81,12 @@ descr_mul = _binop_impl("multiply") descr_div = _binop_impl("divide") descr_truediv = _binop_impl("true_divide") + descr_mod = _binop_impl("mod") descr_pow = _binop_impl("power") + descr_and = _binop_impl("bitwise_and") + descr_or = _binop_impl("bitwise_or") + descr_xor = _binop_impl("bitwise_xor") + descr_eq = _binop_impl("equal") descr_ne = _binop_impl("not_equal") descr_lt = _binop_impl("less") @@ -92,9 +97,12 @@ descr_radd = _binop_right_impl("add") descr_rsub = _binop_right_impl("subtract") descr_rmul = _binop_right_impl("multiply") + descr_rpow = _binop_right_impl("power") + descr_pos = _unaryop_impl("positive") descr_neg = _unaryop_impl("negative") descr_abs = _unaryop_impl("absolute") + descr_invert = _unaryop_impl("invert") def item(self, space): return self.get_dtype(space).itemtype.to_builtin_type(space, self) @@ -176,11 +184,16 @@ __mul__ = interp2app(W_GenericBox.descr_mul), __div__ = interp2app(W_GenericBox.descr_div), __truediv__ = interp2app(W_GenericBox.descr_truediv), + __mod__ = interp2app(W_GenericBox.descr_mod), __pow__ = interp2app(W_GenericBox.descr_pow), + __and__ = interp2app(W_GenericBox.descr_and), + __or__ = interp2app(W_GenericBox.descr_or), + __xor__ = interp2app(W_GenericBox.descr_xor), __radd__ = interp2app(W_GenericBox.descr_radd), __rsub__ = interp2app(W_GenericBox.descr_rsub), __rmul__ = interp2app(W_GenericBox.descr_rmul), + __rpow__ = interp2app(W_GenericBox.descr_rpow), __eq__ = interp2app(W_GenericBox.descr_eq), __ne__ = interp2app(W_GenericBox.descr_ne), @@ -189,8 +202,10 @@ __gt__ = interp2app(W_GenericBox.descr_gt), __ge__ = interp2app(W_GenericBox.descr_ge), + __pos__ = interp2app(W_GenericBox.descr_pos), __neg__ = interp2app(W_GenericBox.descr_neg), __abs__ = interp2app(W_GenericBox.descr_abs), + __invert__ = interp2app(W_GenericBox.descr_invert), tolist = interp2app(W_GenericBox.item), ) diff --git a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py --- a/pypy/module/micronumpy/interp_ufuncs.py +++ b/pypy/module/micronumpy/interp_ufuncs.py @@ -383,9 +383,10 @@ ("subtract", "sub", 2), ("multiply", "mul", 2, {"identity": 1}), ("bitwise_and", "bitwise_and", 2, {"identity": 1, - 'int_only': True}), + "int_only": True}), ("bitwise_or", "bitwise_or", 2, {"identity": 0, - 'int_only': True}), + "int_only": True}), + ("bitwise_xor", "bitwise_xor", 2, {"int_only": True}), ("invert", "invert", 1, {"int_only": True}), ("divide", "div", 2, {"promote_bools": True}), ("true_divide", "div", 2, {"promote_to_float": True}), diff --git a/pypy/module/micronumpy/test/test_dtypes.py b/pypy/module/micronumpy/test/test_dtypes.py --- a/pypy/module/micronumpy/test/test_dtypes.py +++ b/pypy/module/micronumpy/test/test_dtypes.py @@ -404,6 +404,17 @@ def test_operators(self): from operator import truediv - from _numpypy import float64, int_ + from _numpypy import float64, int_, True_, False_ assert 
truediv(int_(3), int_(2)) == float64(1.5) + assert 2 ** int_(3) == int_(8) + assert int_(3) & int_(1) == int_(1) + raises(TypeError, lambda: float64(3) & 1) + assert int_(8) % int_(3) == int_(2) + assert int_(2) | int_(1) == int_(3) + assert int_(3) ^ int_(5) == int_(6) + assert True_ ^ False_ is True_ + + assert +int_(3) == int_(3) + assert ~int_(3) == int_(-4) + diff --git a/pypy/module/micronumpy/types.py b/pypy/module/micronumpy/types.py --- a/pypy/module/micronumpy/types.py +++ b/pypy/module/micronumpy/types.py @@ -59,10 +59,6 @@ class BaseType(object): def _unimplemented_ufunc(self, *args): raise NotImplementedError - # add = sub = mul = div = mod = pow = eq = ne = lt = le = gt = ge = max = \ - # min = copysign = pos = neg = abs = sign = reciprocal = fabs = floor = \ - # exp = sin = cos = tan = arcsin = arccos = arctan = arcsinh = \ - # arctanh = _unimplemented_ufunc class Primitive(object): _mixin_ = True @@ -253,6 +249,10 @@ def bitwise_or(self, v1, v2): return v1 | v2 + @simple_binary_op + def bitwise_xor(self, v1, v2): + return v1 ^ v2 + @simple_unary_op def invert(self, v): return ~v @@ -313,6 +313,10 @@ def bitwise_or(self, v1, v2): return v1 | v2 + @simple_binary_op + def bitwise_xor(self, v1, v2): + return v1 ^ v2 + @simple_unary_op def invert(self, v): return ~v diff --git a/pypy/module/test_lib_pypy/numpypy/test_numpy.py b/pypy/module/test_lib_pypy/numpypy/test_numpy.py new file mode 100644 --- /dev/null +++ b/pypy/module/test_lib_pypy/numpypy/test_numpy.py @@ -0,0 +1,13 @@ +from pypy.conftest import gettestobjspace + +class AppTestNumpy: + def setup_class(cls): + cls.space = gettestobjspace(usemodules=['micronumpy']) + + def test_imports(self): + try: + import numpy # fails if 'numpypy' was not imported so far + except ImportError: + pass + import numpypy + import numpy # works after 'numpypy' has been imported From noreply at buildbot.pypy.org Wed Feb 8 19:07:25 2012 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Wed, 8 Feb 2012 19:07:25 +0100 (CET) Subject: [pypy-commit] pypy default: small changes Message-ID: <20120208180725.7E4E182B1E@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: Changeset: r52251:dc8a9c1fa55f Date: 2012-02-08 13:07 -0500 http://bitbucket.org/pypy/pypy/changeset/dc8a9c1fa55f/ Log: small changes diff --git a/pypy/doc/release-1.8.0.rst b/pypy/doc/release-1.8.0.rst --- a/pypy/doc/release-1.8.0.rst +++ b/pypy/doc/release-1.8.0.rst @@ -2,11 +2,11 @@ PyPy 1.8 - business as usual ============================ -We're pleased to announce the 1.8 release of PyPy. As became a habit, this -release brings a lot of bugfixes, performance and memory improvements over +We're pleased to announce the 1.8 release of PyPy. As has become a habit, this +release brings a lot of bugfixes, and performance and memory improvements over the 1.7 release. The main highlight of the release is the introduction of list strategies which makes homogenous lists more efficient both in terms -of performance and memory. Otherwise it's "business as usual" in the sense +of performance and memory. This release also upgrades us from Python 2.7.1 compatibility to 2.7.2, you can read the details of this at XXX. Otherwise it's "business as usual" in the sense that performance improved roughly 10% on average since the previous release. You can download the PyPy 1.8 release here: From noreply at buildbot.pypy.org Wed Feb 8 19:13:16 2012 From: noreply at buildbot.pypy.org (arigo) Date: Wed, 8 Feb 2012 19:13:16 +0100 (CET) Subject: [pypy-commit] pypy default: Mention ARM and PPC. 
Message-ID: <20120208181316.3089C82B1E@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r52252:c8a0f2344ccd Date: 2012-02-08 19:11 +0100 http://bitbucket.org/pypy/pypy/changeset/c8a0f2344ccd/ Log: Mention ARM and PPC. diff --git a/pypy/doc/release-1.8.0.rst b/pypy/doc/release-1.8.0.rst --- a/pypy/doc/release-1.8.0.rst +++ b/pypy/doc/release-1.8.0.rst @@ -35,7 +35,7 @@ strategies for unicode and string lists. * As usual, numerous performance improvements. There are many examples - of python constructs that now should behave faster; too many to list them. + of python constructs that now should be faster; too many to list them. * Bugfixes and compatibility fixes with CPython. @@ -73,6 +73,8 @@ As usual, there is quite a bit of ongoing work that either didn't make it to the release or is not ready yet. Highlights include: +* Non-x86 backends for the JIT: ARMv7 (almost ready) and PPC64 (in progress) + * Specialized type instances - allocate instances as efficient as C structs, including type specialization From noreply at buildbot.pypy.org Wed Feb 8 19:13:18 2012 From: noreply at buildbot.pypy.org (arigo) Date: Wed, 8 Feb 2012 19:13:18 +0100 (CET) Subject: [pypy-commit] pypy default: merge heads Message-ID: <20120208181318.2C23382B1E@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r52253:97a0b50b35b1 Date: 2012-02-08 19:12 +0100 http://bitbucket.org/pypy/pypy/changeset/97a0b50b35b1/ Log: merge heads diff --git a/pypy/doc/release-1.8.0.rst b/pypy/doc/release-1.8.0.rst --- a/pypy/doc/release-1.8.0.rst +++ b/pypy/doc/release-1.8.0.rst @@ -2,11 +2,11 @@ PyPy 1.8 - business as usual ============================ -We're pleased to announce the 1.8 release of PyPy. As became a habit, this -release brings a lot of bugfixes, performance and memory improvements over +We're pleased to announce the 1.8 release of PyPy. As has become a habit, this +release brings a lot of bugfixes, and performance and memory improvements over the 1.7 release. The main highlight of the release is the introduction of list strategies which makes homogenous lists more efficient both in terms -of performance and memory. Otherwise it's "business as usual" in the sense +of performance and memory. This release also upgrades us from Python 2.7.1 compatibility to 2.7.2, you can read the details of this at XXX. Otherwise it's "business as usual" in the sense that performance improved roughly 10% on average since the previous release. 
You can download the PyPy 1.8 release here: From noreply at buildbot.pypy.org Wed Feb 8 20:45:00 2012 From: noreply at buildbot.pypy.org (fijal) Date: Wed, 8 Feb 2012 20:45:00 +0100 (CET) Subject: [pypy-commit] pypy numpy-record-dtypes: (fijal, agaynor) start implementing record types Message-ID: <20120208194501.000E982B1E@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-record-dtypes Changeset: r52254:a3f0a909959f Date: 2012-02-08 20:44 +0100 http://bitbucket.org/pypy/pypy/changeset/a3f0a909959f/ Log: (fijal, agaynor) start implementing record types diff --git a/pypy/module/micronumpy/__init__.py b/pypy/module/micronumpy/__init__.py --- a/pypy/module/micronumpy/__init__.py +++ b/pypy/module/micronumpy/__init__.py @@ -68,9 +68,10 @@ 'float64': 'interp_boxes.W_Float64Box', 'intp': 'types.IntP.BoxType', 'uintp': 'types.UIntP.BoxType', + 'flexible': 'interp_boxes.W_FlexibleBox', #'str_': 'interp_boxes.W_StringBox', #'unicode_': 'interp_boxes.W_UnicodeBox', - #'void': 'interp_boxes.W_VoidBox', + 'void': 'interp_boxes.W_VoidBox', } # ufuncs diff --git a/pypy/module/micronumpy/interp_boxes.py b/pypy/module/micronumpy/interp_boxes.py --- a/pypy/module/micronumpy/interp_boxes.py +++ b/pypy/module/micronumpy/interp_boxes.py @@ -163,6 +163,12 @@ descr__new__, get_dtype = new_dtype_getter("float64") +class W_FlexibleBox(W_GenericBox): + pass + +class W_VoidBox(W_FlexibleBox): + pass + W_GenericBox.typedef = TypeDef("generic", __module__ = "numpypy", @@ -285,3 +291,12 @@ __new__ = interp2app(W_Float64Box.descr__new__.im_func), ) + + +W_FlexibleBox.typedef = TypeDef("flexible", W_GenericBox.typedef, + __module__ = "numpypy", +) + +W_VoidBox.typedef = TypeDef("void", W_FlexibleBox.typedef, + __module__ = "numpypy", +) diff --git a/pypy/module/micronumpy/interp_dtype.py b/pypy/module/micronumpy/interp_dtype.py --- a/pypy/module/micronumpy/interp_dtype.py +++ b/pypy/module/micronumpy/interp_dtype.py @@ -2,7 +2,7 @@ import sys from pypy.interpreter.baseobjspace import Wrappable from pypy.interpreter.error import OperationError -from pypy.interpreter.gateway import interp2app +from pypy.interpreter.gateway import interp2app, unwrap_spec from pypy.interpreter.typedef import (TypeDef, GetSetProperty, interp_attrproperty, interp_attrproperty_w) from pypy.module.micronumpy import types, interp_boxes @@ -15,7 +15,7 @@ SIGNEDLTR = "i" BOOLLTR = "b" FLOATINGLTR = "f" -VOID = 'V' +VOIDLTR = 'V' VOID_STORAGE = lltype.Array(lltype.Char, hints={'nolength': True, 'render_as_void': True}) @@ -23,7 +23,8 @@ class W_Dtype(Wrappable): _immutable_fields_ = ["itemtype", "num", "kind"] - def __init__(self, itemtype, num, kind, name, char, w_box_type, alternate_constructors=[], aliases=[]): + def __init__(self, itemtype, num, kind, name, char, w_box_type, alternate_constructors=[], aliases=[], + fields=None, fieldnames=None): self.itemtype = itemtype self.num = num self.kind = kind @@ -32,6 +33,8 @@ self.w_box_type = w_box_type self.alternate_constructors = alternate_constructors self.aliases = aliases + self.fields = fields + self.fieldnames = fieldnames def malloc(self, length): # XXX find out why test_zjit explodes with tracking of allocations @@ -85,6 +88,29 @@ def descr_ne(self, space, w_other): return space.wrap(not self.eq(space, w_other)) + def descr_get_fields(self, space): + if self.fields is None: + return space.w_None + w_d = space.newdict() + for name, (offset, subdtype) in self.fields.iteritems(): + space.setitem(w_d, space.wrap(name), space.newtuple([subdtype, + space.wrap(offset)])) + 
return w_d + + def descr_get_names(self, space): + if self.fieldnames is None: + return space.w_None + return space.newtuple([space.wrap(name) for name in self.fieldnames]) + + @unwrap_spec(item=str) + def descr_getitem(self, item): + if self.fields is None: + raise OperationError(space.w_KeyError, space.wrap("There are no keys in dtypes %s" % self.name)) + try: + return self.fields[item][1] + except KeyError: + raise OperationError(space.w_KeyError, space.wrap("Field named %s not found" % item)) + def is_int_type(self): return (self.kind == SIGNEDLTR or self.kind == UNSIGNEDLTR or self.kind == BOOLLTR) @@ -94,15 +120,24 @@ def dtype_from_list(space, w_lst): lst_w = space.listview(w_lst) - fieldlist = [] + fields = {} offset = 0 + ofs_and_items = [] + fieldnames = [] for w_elem in lst_w: w_fldname, w_flddesc = space.fixedview(w_elem, 2) - subdtype = descr__new__(space.gettypefor(W_Dtype), w_flddesc) - align = subdtype.alignment - offset = (offset + (align-1)) & ~ (align-1) - fieldlist.append((offset, space.str_w(w_fldname), subdtype)) - xxx + subdtype = descr__new__(space, space.gettypefor(W_Dtype), w_flddesc) + fldname = space.str_w(w_fldname) + if fldname in fields: + raise OperationError(space.w_ValueError, space.wrap("two fields with the same name")) + fields[fldname] = (offset, subdtype) + ofs_and_items.append((offset, subdtype.itemtype)) + offset += subdtype.itemtype.get_element_size() + fieldnames.append(fldname) + itemtype = types.RecordType(ofs_and_items) + return W_Dtype(itemtype, 20, VOIDLTR, "void" + str(8 * itemtype.get_element_size()), + "V", space.gettypefor(interp_boxes.W_VoidBox), fields=fields, + fieldnames=fieldnames) def descr__new__(space, w_subtype, w_dtype): cache = get_dtype_cache(space) @@ -135,14 +170,18 @@ __repr__ = interp2app(W_Dtype.descr_repr), __eq__ = interp2app(W_Dtype.descr_eq), __ne__ = interp2app(W_Dtype.descr_ne), + __getitem__ = interp2app(W_Dtype.descr_getitem), num = interp_attrproperty("num", cls=W_Dtype), kind = interp_attrproperty("kind", cls=W_Dtype), + char = interp_attrproperty("char", cls=W_Dtype), type = interp_attrproperty_w("w_box_type", cls=W_Dtype), itemsize = GetSetProperty(W_Dtype.descr_get_itemsize), alignment = GetSetProperty(W_Dtype.descr_get_alignment), shape = GetSetProperty(W_Dtype.descr_get_shape), name = interp_attrproperty('name', cls=W_Dtype), + fields = GetSetProperty(W_Dtype.descr_get_fields), + names = GetSetProperty(W_Dtype.descr_get_names), ) W_Dtype.typedef.acceptable_as_base_class = False diff --git a/pypy/module/micronumpy/test/test_dtypes.py b/pypy/module/micronumpy/test/test_dtypes.py --- a/pypy/module/micronumpy/test/test_dtypes.py +++ b/pypy/module/micronumpy/test/test_dtypes.py @@ -13,6 +13,7 @@ assert dtype(d) is d assert dtype(None) is dtype(float) assert dtype('int8').name == 'int8' + assert dtype(int).fields is None raises(TypeError, dtype, 1042) def test_dtype_eq(self): @@ -454,3 +455,21 @@ def test_alignment(self): from _numpypy import dtype assert dtype('i4').alignment == 4 + +class AppTestRecordDtypes(BaseNumpyAppTest): + def test_create(self): + from _numpypy import dtype, void + + raises(ValueError, "dtype([('x', int), ('x', float)])") + d = dtype([("x", "int32"), ("y", "int32"), ("z", "int32"), ("value", float)]) + assert d.fields['x'] == (dtype('int32'), 0) + assert d.fields['value'] == (dtype(float), 12) + assert d['x'] == dtype('int32') + assert d.name == "void160" + assert d.num == 20 + assert d.itemsize == 20 + assert d.kind == 'V' + assert d.type is void + assert d.char == 'V' + assert d.names == 
("x", "y", "z", "value") + diff --git a/pypy/module/micronumpy/types.py b/pypy/module/micronumpy/types.py --- a/pypy/module/micronumpy/types.py +++ b/pypy/module/micronumpy/types.py @@ -594,6 +594,18 @@ BoxType = interp_boxes.W_Float64Box format_code = "d" +class CompositeType(BaseType): + def __init__(self, offsets_and_types): + self.offsets_and_types = offsets_and_types + last_item = offsets_and_types[-1] + self.size = last_item[0] + last_item[1].get_element_size() + + def get_element_size(self): + return self.size + +class RecordType(CompositeType): + pass + for tp in [Int32, Int64]: if tp.T == lltype.Signed: IntP = tp From noreply at buildbot.pypy.org Wed Feb 8 21:04:08 2012 From: noreply at buildbot.pypy.org (fijal) Date: Wed, 8 Feb 2012 21:04:08 +0100 (CET) Subject: [pypy-commit] pypy json-decoder-speedups: further speedups Message-ID: <20120208200408.1FD9882B1E@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: json-decoder-speedups Changeset: r52255:afb4c5c2ffbd Date: 2012-02-08 21:56 +0200 http://bitbucket.org/pypy/pypy/changeset/afb4c5c2ffbd/ Log: further speedups diff --git a/lib-python/modified-2.7/json/decoder.py b/lib-python/modified-2.7/json/decoder.py --- a/lib-python/modified-2.7/json/decoder.py +++ b/lib-python/modified-2.7/json/decoder.py @@ -110,14 +110,14 @@ raise ValueError( errmsg("Unterminated string starting at", s, begin)) end = chunk.end() - content, terminator = chunk.groups() - del chunk + content = s[chunk.start(1):chunk.end(1)] + terminator = s[chunk.start(2):chunk.end(2)] + #content, terminator = chunk.groups() # Content is contains zero or more unescaped string characters if content: if not isinstance(content, unicode): content = unicode(content, encoding) chunks.append(content) - del content # Terminator is the end of string, a literal control character, # or a backslash denoting that an escape sequence follows if terminator == '"': @@ -166,6 +166,8 @@ char = unichr(uni) end = next_end # Append the unescaped character + del chunk + del content chunks.append(char) return u''.join(chunks), end From noreply at buildbot.pypy.org Wed Feb 8 21:16:22 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Wed, 8 Feb 2012 21:16:22 +0100 (CET) Subject: [pypy-commit] pypy default: Issue #1035: os.listdir(someUnicode) returns byte strings for Message-ID: <20120208201622.D56E882B1E@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: Changeset: r52256:c567905f8478 Date: 2012-02-08 21:14 +0100 http://bitbucket.org/pypy/pypy/changeset/c567905f8478/ Log: Issue #1035: os.listdir(someUnicode) returns byte strings for filenames that cannot be decoded by the filesystem encoding. 
diff --git a/pypy/module/posix/interp_posix.py b/pypy/module/posix/interp_posix.py
--- a/pypy/module/posix/interp_posix.py
+++ b/pypy/module/posix/interp_posix.py
@@ -543,10 +543,16 @@
             dirname = FileEncoder(space, w_dirname)
             result = rposix.listdir(dirname)
             w_fs_encoding = getfilesystemencoding(space)
-            result_w = [
-                space.call_method(space.wrap(s), "decode", w_fs_encoding)
-                for s in result
-            ]
+            len_result = len(result)
+            result_w = [None] * len_result
+            for i in range(len_result):
+                w_bytes = space.wrap(result[i])
+                try:
+                    result_w[i] = space.call_method(w_bytes,
+                                                    "decode", w_fs_encoding)
+                except OperationError, e:
+                    # fall back to the original byte string
+                    result_w[i] = w_bytes
         else:
             dirname = space.str0_w(w_dirname)
             result = rposix.listdir(dirname)
diff --git a/pypy/module/posix/test/test_posix2.py b/pypy/module/posix/test/test_posix2.py
--- a/pypy/module/posix/test/test_posix2.py
+++ b/pypy/module/posix/test/test_posix2.py
@@ -29,6 +29,7 @@
     mod.pdir = pdir
     unicode_dir = udir.ensure('fi\xc5\x9fier.txt', dir=True)
     unicode_dir.join('somefile').write('who cares?')
+    unicode_dir.join('caf\xe9').write('who knows?')
     mod.unicode_dir = unicode_dir
     # in applevel tests, os.stat uses the CPython os.stat.
+ Ongoing work ============ From noreply at buildbot.pypy.org Wed Feb 8 22:04:37 2012 From: noreply at buildbot.pypy.org (fijal) Date: Wed, 8 Feb 2012 22:04:37 +0100 (CET) Subject: [pypy-commit] pypy numpy-record-dtypes: what's not tested is broken Message-ID: <20120208210437.A414382B1E@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-record-dtypes Changeset: r52258:e16a35f9d150 Date: 2012-02-08 23:04 +0200 http://bitbucket.org/pypy/pypy/changeset/e16a35f9d150/ Log: what's not tested is broken diff --git a/pypy/module/micronumpy/interp_dtype.py b/pypy/module/micronumpy/interp_dtype.py --- a/pypy/module/micronumpy/interp_dtype.py +++ b/pypy/module/micronumpy/interp_dtype.py @@ -103,7 +103,7 @@ return space.newtuple([space.wrap(name) for name in self.fieldnames]) @unwrap_spec(item=str) - def descr_getitem(self, item): + def descr_getitem(self, space, item): if self.fields is None: raise OperationError(space.w_KeyError, space.wrap("There are no keys in dtypes %s" % self.name)) try: diff --git a/pypy/module/micronumpy/test/test_dtypes.py b/pypy/module/micronumpy/test/test_dtypes.py --- a/pypy/module/micronumpy/test/test_dtypes.py +++ b/pypy/module/micronumpy/test/test_dtypes.py @@ -14,7 +14,9 @@ assert dtype(None) is dtype(float) assert dtype('int8').name == 'int8' assert dtype(int).fields is None + assert dtype(int).names is None raises(TypeError, dtype, 1042) + raises(KeyError, 'dtype(int)["asdasd"]') def test_dtype_eq(self): from _numpypy import dtype @@ -472,4 +474,5 @@ assert d.type is void assert d.char == 'V' assert d.names == ("x", "y", "z", "value") - + raises(KeyError, 'd["xyz"]') + raises(KeyError, 'd.fields["xyz"]') From noreply at buildbot.pypy.org Wed Feb 8 22:50:51 2012 From: noreply at buildbot.pypy.org (fijal) Date: Wed, 8 Feb 2012 22:50:51 +0100 (CET) Subject: [pypy-commit] pypy numpy-record-dtypes: (fijal, agaynor) write test Message-ID: <20120208215051.24E6982CE3@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-record-dtypes Changeset: r52260:c6a8797f0138 Date: 2012-02-08 22:46 +0100 http://bitbucket.org/pypy/pypy/changeset/c6a8797f0138/ Log: (fijal, agaynor) write test diff --git a/pypy/module/micronumpy/test/test_dtypes.py b/pypy/module/micronumpy/test/test_dtypes.py --- a/pypy/module/micronumpy/test/test_dtypes.py +++ b/pypy/module/micronumpy/test/test_dtypes.py @@ -457,12 +457,24 @@ from _numpypy import dtype assert dtype('i4').alignment == 4 +class AppTestStrUnicodeDtypes(BaseNumpyAppTest): def test_str_unicode(self): from _numpypy import str_, unicode_, character, flexible, generic assert str_.mro() == [str_, str, basestring, character, flexible, generic, object] assert unicode_.mro() == [unicode_, unicode, basestring, character, flexible, generic, object] + def test_str_dtype(self): + from _numpypy import dtype, str_ + + d = dtype('S8') + assert d.itemsize == 8 + assert dtype(str) == dtype('S') + assert d.kind == 'S' + assert d.type is str_ + assert d.name == "string64" + assert d.num == 18 + class AppTestRecordDtypes(BaseNumpyAppTest): def test_create(self): from _numpypy import dtype, void From noreply at buildbot.pypy.org Wed Feb 8 22:50:49 2012 From: noreply at buildbot.pypy.org (fijal) Date: Wed, 8 Feb 2012 22:50:49 +0100 (CET) Subject: [pypy-commit] pypy numpy-record-dtypes: (fijal, agaynor) boilerplate - export string/unicode Message-ID: <20120208215049.E860782B1E@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-record-dtypes Changeset: r52259:80adf83f04d7 Date: 2012-02-08 22:40 +0100 
http://bitbucket.org/pypy/pypy/changeset/80adf83f04d7/ Log: (fijal, agaynor) boilerplate - export string/unicode diff --git a/pypy/module/micronumpy/__init__.py b/pypy/module/micronumpy/__init__.py --- a/pypy/module/micronumpy/__init__.py +++ b/pypy/module/micronumpy/__init__.py @@ -69,8 +69,9 @@ 'intp': 'types.IntP.BoxType', 'uintp': 'types.UIntP.BoxType', 'flexible': 'interp_boxes.W_FlexibleBox', - #'str_': 'interp_boxes.W_StringBox', - #'unicode_': 'interp_boxes.W_UnicodeBox', + 'character': 'interp_boxes.W_CharacterBox', + 'str_': 'interp_boxes.W_StringBox', + 'unicode_': 'interp_boxes.W_UnicodeBox', 'void': 'interp_boxes.W_VoidBox', } diff --git a/pypy/module/micronumpy/interp_boxes.py b/pypy/module/micronumpy/interp_boxes.py --- a/pypy/module/micronumpy/interp_boxes.py +++ b/pypy/module/micronumpy/interp_boxes.py @@ -3,6 +3,8 @@ from pypy.interpreter.gateway import interp2app from pypy.interpreter.typedef import TypeDef from pypy.objspace.std.floattype import float_typedef +from pypy.objspace.std.stringtype import str_typedef +from pypy.objspace.std.unicodetype import unicode_typedef from pypy.objspace.std.inttype import int_typedef from pypy.rlib.rarithmetic import LONG_BIT from pypy.tool.sourcetools import func_with_new_name @@ -169,6 +171,14 @@ class W_VoidBox(W_FlexibleBox): pass +class W_CharacterBox(W_FlexibleBox): + pass + +class W_StringBox(W_CharacterBox): + pass + +class W_UnicodeBox(W_CharacterBox): + pass W_GenericBox.typedef = TypeDef("generic", __module__ = "numpypy", @@ -300,3 +310,16 @@ W_VoidBox.typedef = TypeDef("void", W_FlexibleBox.typedef, __module__ = "numpypy", ) + +W_CharacterBox.typedef = TypeDef("character", W_FlexibleBox.typedef, + __module__ = "numpypy", +) + +W_StringBox.typedef = TypeDef("string_", (str_typedef, W_CharacterBox.typedef), + __module__ = "numpypy", +) + +W_UnicodeBox.typedef = TypeDef("unicode_", (unicode_typedef, W_CharacterBox.typedef), + __module__ = "numpypy", +) + diff --git a/pypy/module/micronumpy/test/test_dtypes.py b/pypy/module/micronumpy/test/test_dtypes.py --- a/pypy/module/micronumpy/test/test_dtypes.py +++ b/pypy/module/micronumpy/test/test_dtypes.py @@ -440,7 +440,6 @@ numpy.integer, numpy.number, numpy.generic, object) assert numpy.bool_.__mro__ == (numpy.bool_, numpy.generic, object) - #assert numpy.str_.__mro__ == def test_alternate_constructs(self): from _numpypy import dtype @@ -458,6 +457,12 @@ from _numpypy import dtype assert dtype('i4').alignment == 4 + def test_str_unicode(self): + from _numpypy import str_, unicode_, character, flexible, generic + + assert str_.mro() == [str_, str, basestring, character, flexible, generic, object] + assert unicode_.mro() == [unicode_, unicode, basestring, character, flexible, generic, object] + class AppTestRecordDtypes(BaseNumpyAppTest): def test_create(self): from _numpypy import dtype, void @@ -476,3 +481,4 @@ assert d.names == ("x", "y", "z", "value") raises(KeyError, 'd["xyz"]') raises(KeyError, 'd.fields["xyz"]') + diff --git a/pypy/module/micronumpy/types.py b/pypy/module/micronumpy/types.py --- a/pypy/module/micronumpy/types.py +++ b/pypy/module/micronumpy/types.py @@ -603,6 +603,21 @@ def get_element_size(self): return self.size +class BaseStringType(object): + _mixin_ = True + + def __init__(self, size): + self.size = size + + def get_element_size(self): + return self.size * rffi.sizeof(self.T) + +class StringType(BaseType, BaseStringType): + T = lltype.Char + +class UnicodeType(BaseType, BaseStringType): + T = lltype.UniChar + class RecordType(CompositeType): pass diff 
--git a/pypy/rlib/clibffi.py b/pypy/rlib/clibffi.py --- a/pypy/rlib/clibffi.py +++ b/pypy/rlib/clibffi.py @@ -228,6 +228,7 @@ (rffi.LONGLONG, _signed_type_for(rffi.LONGLONG)), (lltype.UniChar, _unsigned_type_for(lltype.UniChar)), (lltype.Bool, _unsigned_type_for(lltype.Bool)), + (lltype.Char, _signed_type_for(lltype.Char)), ] __float_type_map = [ From noreply at buildbot.pypy.org Thu Feb 9 03:09:47 2012 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Thu, 9 Feb 2012 03:09:47 +0100 (CET) Subject: [pypy-commit] pypy default: put all the numpy constants in one place Message-ID: <20120209020947.D3B6482B1E@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: Changeset: r52261:b15db1836bfa Date: 2012-02-08 21:07 -0500 http://bitbucket.org/pypy/pypy/changeset/b15db1836bfa/ Log: put all the numpy constants in one place diff --git a/lib_pypy/numpypy/core/numeric.py b/lib_pypy/numpypy/core/numeric.py --- a/lib_pypy/numpypy/core/numeric.py +++ b/lib_pypy/numpypy/core/numeric.py @@ -1,6 +1,7 @@ from _numpypy import array, ndarray, int_, float_, bool_ #, complex_# , longlong from _numpypy import concatenate +import math import sys import _numpypy as multiarray # ARGH from numpypy.core.arrayprint import array2string @@ -311,6 +312,11 @@ little_endian = (sys.byteorder == 'little') Inf = inf = infty = Infinity = PINF = float('inf') +NINF = float('-inf') +PZERO = 0.0 +NZERO = -0.0 nan = NaN = NAN = float('nan') False_ = bool_(False) True_ = bool_(True) +e = math.e +pi = math.pi \ No newline at end of file diff --git a/pypy/module/micronumpy/__init__.py b/pypy/module/micronumpy/__init__.py --- a/pypy/module/micronumpy/__init__.py +++ b/pypy/module/micronumpy/__init__.py @@ -31,7 +31,7 @@ 'concatenate': 'interp_numarray.concatenate', 'set_string_function': 'appbridge.set_string_function', - + 'count_reduce_items': 'interp_numarray.count_reduce_items', 'True_': 'types.Bool.True', @@ -111,8 +111,5 @@ 'min': 'app_numpy.min', 'identity': 'app_numpy.identity', 'max': 'app_numpy.max', - 'inf': 'app_numpy.inf', - 'e': 'app_numpy.e', - 'pi': 'app_numpy.pi', 'arange': 'app_numpy.arange', } diff --git a/pypy/module/micronumpy/app_numpy.py b/pypy/module/micronumpy/app_numpy.py --- a/pypy/module/micronumpy/app_numpy.py +++ b/pypy/module/micronumpy/app_numpy.py @@ -3,11 +3,6 @@ import _numpypy -inf = float("inf") -e = math.e -pi = math.pi - - def average(a): # This implements a weighted average, for now we don't implement the # weighting, just the average part! @@ -59,7 +54,7 @@ if not hasattr(a, "max"): a = _numpypy.array(a) return a.max(axis) - + def arange(start, stop=None, step=1, dtype=None): '''arange([start], stop[, step], dtype=None) Generate values in the half-interval [start, stop). 
diff --git a/pypy/module/micronumpy/test/test_module.py b/pypy/module/micronumpy/test/test_module.py --- a/pypy/module/micronumpy/test/test_module.py +++ b/pypy/module/micronumpy/test/test_module.py @@ -21,13 +21,3 @@ from _numpypy import array, max assert max(range(10)) == 9 assert max(array(range(10))) == 9 - - def test_constants(self): - import math - from _numpypy import inf, e, pi - assert type(inf) is float - assert inf == float("inf") - assert e == math.e - assert type(e) is float - assert pi == math.pi - assert type(pi) is float diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -579,7 +579,7 @@ def test_div(self): from math import isnan - from _numpypy import array, dtype, inf + from _numpypy import array, dtype a = array(range(1, 6)) b = a / a @@ -600,15 +600,15 @@ a = array([-1.0, 0.0, 1.0]) b = array([0.0, 0.0, 0.0]) c = a / b - assert c[0] == -inf + assert c[0] == float('-inf') assert isnan(c[1]) - assert c[2] == inf + assert c[2] == float('inf') b = array([-0.0, -0.0, -0.0]) c = a / b - assert c[0] == inf + assert c[0] == float('inf') assert isnan(c[1]) - assert c[2] == -inf + assert c[2] == float('-inf') def test_div_other(self): from _numpypy import array diff --git a/pypy/module/micronumpy/test/test_ufuncs.py b/pypy/module/micronumpy/test/test_ufuncs.py --- a/pypy/module/micronumpy/test/test_ufuncs.py +++ b/pypy/module/micronumpy/test/test_ufuncs.py @@ -312,9 +312,9 @@ def test_arcsinh(self): import math - from _numpypy import arcsinh, inf + from _numpypy import arcsinh - for v in [inf, -inf, 1.0, math.e]: + for v in [float('inf'), float('-inf'), 1.0, math.e]: assert math.asinh(v) == arcsinh(v) assert math.isnan(arcsinh(float("nan"))) @@ -367,7 +367,7 @@ b = add.reduce(a, 0, keepdims=True) assert b.shape == (1, 4) assert (add.reduce(a, 0, keepdims=True) == [12, 15, 18, 21]).all() - + def test_bitwise(self): from _numpypy import bitwise_and, bitwise_or, arange, array @@ -416,7 +416,7 @@ assert count_reduce_items(a) == 24 assert count_reduce_items(a, 1) == 3 assert count_reduce_items(a, (1, 2)) == 3 * 4 - + def test_true_divide(self): from _numpypy import arange, array, true_divide assert (true_divide(arange(3), array([2, 2, 2])) == array([0, 0.5, 1])).all() From noreply at buildbot.pypy.org Thu Feb 9 03:09:49 2012 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Thu, 9 Feb 2012 03:09:49 +0100 (CET) Subject: [pypy-commit] pypy default: merged upstream Message-ID: <20120209020949.1554D82CE3@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: Changeset: r52262:3d143ab5260b Date: 2012-02-08 21:09 -0500 http://bitbucket.org/pypy/pypy/changeset/3d143ab5260b/ Log: merged upstream diff --git a/pypy/doc/release-1.8.0.rst b/pypy/doc/release-1.8.0.rst --- a/pypy/doc/release-1.8.0.rst +++ b/pypy/doc/release-1.8.0.rst @@ -6,7 +6,7 @@ release brings a lot of bugfixes, and performance and memory improvements over the 1.7 release. The main highlight of the release is the introduction of list strategies which makes homogenous lists more efficient both in terms -of performance and memory. This release also upgrades us from Python 2.7.1 compatibility to 2.7.2, you can read the details of this at XXX. Otherwise it's "business as usual" in the sense +of performance and memory. This release also upgrades us from Python 2.7.1 compatibility to 2.7.2. 
Otherwise it's "business as usual" in the sense that performance improved roughly 10% on average since the previous release. You can download the PyPy 1.8 release here: @@ -35,7 +35,7 @@ strategies for unicode and string lists. * As usual, numerous performance improvements. There are many examples - of python constructs that now should behave faster; too many to list them. + of python constructs that now should be faster; too many to list them. * Bugfixes and compatibility fixes with CPython. @@ -67,12 +67,16 @@ It's also probably worth noting, we're considering donations for the STM project. +* Standard library upgrade from 2.7.1 to 2.7.2. + Ongoing work ============ As usual, there is quite a bit of ongoing work that either didn't make it to the release or is not ready yet. Highlights include: +* Non-x86 backends for the JIT: ARMv7 (almost ready) and PPC64 (in progress) + * Specialized type instances - allocate instances as efficient as C structs, including type specialization diff --git a/pypy/module/posix/interp_posix.py b/pypy/module/posix/interp_posix.py --- a/pypy/module/posix/interp_posix.py +++ b/pypy/module/posix/interp_posix.py @@ -543,10 +543,16 @@ dirname = FileEncoder(space, w_dirname) result = rposix.listdir(dirname) w_fs_encoding = getfilesystemencoding(space) - result_w = [ - space.call_method(space.wrap(s), "decode", w_fs_encoding) - for s in result - ] + len_result = len(result) + result_w = [None] * len_result + for i in range(len_result): + w_bytes = space.wrap(result[i]) + try: + result_w[i] = space.call_method(w_bytes, + "decode", w_fs_encoding) + except OperationError, e: + # fall back to the original byte string + result_w[i] = w_bytes else: dirname = space.str0_w(w_dirname) result = rposix.listdir(dirname) diff --git a/pypy/module/posix/test/test_posix2.py b/pypy/module/posix/test/test_posix2.py --- a/pypy/module/posix/test/test_posix2.py +++ b/pypy/module/posix/test/test_posix2.py @@ -29,6 +29,7 @@ mod.pdir = pdir unicode_dir = udir.ensure('fi\xc5\x9fier.txt', dir=True) unicode_dir.join('somefile').write('who cares?') + unicode_dir.join('caf\xe9').write('who knows?') mod.unicode_dir = unicode_dir # in applevel tests, os.stat uses the CPython os.stat. 
@@ -308,14 +309,22 @@ 'file2'] def test_listdir_unicode(self): + import sys unicode_dir = self.unicode_dir if unicode_dir is None: skip("encoding not good enough") posix = self.posix result = posix.listdir(unicode_dir) - result.sort() - assert result == [u'somefile'] - assert type(result[0]) is unicode + typed_result = [(type(x), x) for x in result] + assert (unicode, u'somefile') in typed_result + try: + u = "caf\xe9".decode(sys.getfilesystemencoding()) + except UnicodeDecodeError: + # Could not decode, listdir returned the byte string + assert (str, "caf\xe9") in typed_result + else: + assert (unicode, u) in typed_result + def test_access(self): pdir = self.pdir + '/file1' From noreply at buildbot.pypy.org Thu Feb 9 12:02:42 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Thu, 9 Feb 2012 12:02:42 +0100 (CET) Subject: [pypy-commit] pypy py3k: use the proper future flags for python 3.2 Message-ID: <20120209110242.C0D9B82B1E@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52263:6d5e59ba8b8d Date: 2012-02-09 10:33 +0100 http://bitbucket.org/pypy/pypy/changeset/6d5e59ba8b8d/ Log: use the proper future flags for python 3.2 diff --git a/pypy/interpreter/pyparser/future.py b/pypy/interpreter/pyparser/future.py --- a/pypy/interpreter/pyparser/future.py +++ b/pypy/interpreter/pyparser/future.py @@ -330,3 +330,4 @@ futureFlags_2_4 = FutureFlags((2, 4, 4, 'final', 0)) futureFlags_2_5 = FutureFlags((2, 5, 0, 'final', 0)) futureFlags_2_7 = FutureFlags((2, 7, 0, 'final', 0)) +futureFlags_3_2 = FutureFlags((3, 2, 0, 'final', 0)) diff --git a/pypy/interpreter/pyparser/pyparse.py b/pypy/interpreter/pyparser/pyparse.py --- a/pypy/interpreter/pyparser/pyparse.py +++ b/pypy/interpreter/pyparser/pyparse.py @@ -88,7 +88,7 @@ class PythonParser(parser.Parser): - def __init__(self, space, future_flags=future.futureFlags_2_7, + def __init__(self, space, future_flags=future.futureFlags_3_2, grammar=pygram.python_grammar): parser.Parser.__init__(self, grammar) self.space = space From noreply at buildbot.pypy.org Thu Feb 9 12:02:44 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Thu, 9 Feb 2012 12:02:44 +0100 (CET) Subject: [pypy-commit] pypy py3k: fix the syntax but skip this test Message-ID: <20120209110244.0B2F482B69@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52264:10482d044224 Date: 2012-02-09 10:43 +0100 http://bitbucket.org/pypy/pypy/changeset/10482d044224/ Log: fix the syntax but skip this test diff --git a/pypy/interpreter/test/test_compiler.py b/pypy/interpreter/test/test_compiler.py --- a/pypy/interpreter/test/test_compiler.py +++ b/pypy/interpreter/test/test_compiler.py @@ -611,14 +611,21 @@ assert space.eq_w(w_res, space.wrap("var")) def test_dont_inherit_flag(self): + # this test checks that compile() don't inherit the __future__ flags + # of the hosting code. However, in Python3 we don't have any + # meaningful __future__ flag to check that (they are all enabled). 
The + # only candidate could be barry_as_FLUFL, but it's not implemented yet + # (and not sure it'll ever be) + py.test.skip("we cannot actually check the result of this test (see comment)") space = self.space s1 = str(py.code.Source(""" from __future__ import division - exec compile('x = 1/2', '?', 'exec', 0, 1) + exec(compile('x = 1/2', '?', 'exec', 0, 1)) """)) w_result = space.appexec([space.wrap(s1)], """(s1): - exec s1 - return x + ns = {} + exec(s1, ns) + return ns['x'] """) assert space.float_w(w_result) == 0 From noreply at buildbot.pypy.org Thu Feb 9 12:02:45 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Thu, 9 Feb 2012 12:02:45 +0100 (CET) Subject: [pypy-commit] pypy py3k: bah, we need to skip this one too, for the same reasone Message-ID: <20120209110245.6C1FE82CE3@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52265:92ed25e52bf8 Date: 2012-02-09 10:44 +0100 http://bitbucket.org/pypy/pypy/changeset/92ed25e52bf8/ Log: bah, we need to skip this one too, for the same reasone diff --git a/pypy/interpreter/test/test_compiler.py b/pypy/interpreter/test/test_compiler.py --- a/pypy/interpreter/test/test_compiler.py +++ b/pypy/interpreter/test/test_compiler.py @@ -630,6 +630,8 @@ assert space.float_w(w_result) == 0 def test_dont_inherit_across_import(self): + # see the comment for test_dont_inherit_flag + py.test.skip("we cannot actually check the result of this test (see comment)") from pypy.tool.udir import udir udir.join('test_dont_inherit_across_import.py').write('x = 1/2\n') space = self.space From noreply at buildbot.pypy.org Thu Feb 9 12:02:46 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Thu, 9 Feb 2012 12:02:46 +0100 (CET) Subject: [pypy-commit] pypy py3k: fix the syntax, and make sure that we use bytestring, because we are talking about the encoded data here (thanks to G2P for the pointer) Message-ID: <20120209110246.A2B0382CE4@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52266:d8007b59c678 Date: 2012-02-09 12:02 +0100 http://bitbucket.org/pypy/pypy/changeset/d8007b59c678/ Log: fix the syntax, and make sure that we use bytestring, because we are talking about the encoded data here (thanks to G2P for the pointer) diff --git a/pypy/interpreter/test/test_compiler.py b/pypy/interpreter/test/test_compiler.py --- a/pypy/interpreter/test/test_compiler.py +++ b/pypy/interpreter/test/test_compiler.py @@ -715,9 +715,9 @@ class AppTestCompiler: def test_bom_with_future(self): - s = '\xef\xbb\xbffrom __future__ import division\nx = 1/2' + s = b'\xef\xbb\xbffrom __future__ import division\nx = 1/2' ns = {} - exec s in ns + exec(s, ns) assert ns["x"] == .5 def test_values_of_different_types(self): From noreply at buildbot.pypy.org Thu Feb 9 12:24:27 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Thu, 9 Feb 2012 12:24:27 +0100 (CET) Subject: [pypy-commit] pypy py3k: 1) fix syntax; 2) we no longer have a long type and a L suffix for literals; 3) exec() cannot modify the local scope Message-ID: <20120209112427.5B2BC82B1E@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52267:ebeed1e0ea4e Date: 2012-02-09 12:05 +0100 http://bitbucket.org/pypy/pypy/changeset/ebeed1e0ea4e/ Log: 1) fix syntax; 2) we no longer have a long type and a L suffix for literals; 3) exec() cannot modify the local scope diff --git a/pypy/interpreter/test/test_compiler.py b/pypy/interpreter/test/test_compiler.py --- a/pypy/interpreter/test/test_compiler.py +++ b/pypy/interpreter/test/test_compiler.py @@ 
-721,18 +721,18 @@ assert ns["x"] == .5 def test_values_of_different_types(self): - exec "a = 0; b = 0L; c = 0.0; d = 0j" - assert type(a) is int - assert type(b) is long - assert type(c) is float - assert type(d) is complex + ns = {} + exec("a = 0; c = 0.0; d = 0j", ns) + assert type(ns['a']) is int + assert type(ns['c']) is float + assert type(ns['d']) is complex def test_values_of_different_types_in_tuples(self): - exec "a = ((0,),); b = ((0L,),); c = ((0.0,),); d = ((0j,),)" - assert type(a[0][0]) is int - assert type(b[0][0]) is long - assert type(c[0][0]) is float - assert type(d[0][0]) is complex + ns = {} + exec("a = ((0,),); c = ((0.0,),); d = ((0j,),)", ns) + assert type(ns['a'][0][0]) is int + assert type(ns['c'][0][0]) is float + assert type(ns['d'][0][0]) is complex def test_zeros_not_mixed(self): import math From noreply at buildbot.pypy.org Thu Feb 9 12:24:28 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Thu, 9 Feb 2012 12:24:28 +0100 (CET) Subject: [pypy-commit] pypy py3k: fix syntax, and exec cannot modify the local scope Message-ID: <20120209112428.93A0382B1E@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52268:e4a3b7ef9cbc Date: 2012-02-09 12:09 +0100 http://bitbucket.org/pypy/pypy/changeset/e4a3b7ef9cbc/ Log: fix syntax, and exec cannot modify the local scope diff --git a/pypy/interpreter/test/test_compiler.py b/pypy/interpreter/test/test_compiler.py --- a/pypy/interpreter/test/test_compiler.py +++ b/pypy/interpreter/test/test_compiler.py @@ -742,19 +742,20 @@ assert isinstance(x, float) and isinstance(y, float) assert math.copysign(1, x) != math.copysign(1, y) ns = {} - exec "z1, z2 = 0j, -0j" in ns + exec("z1, z2 = 0j, -0j", ns) assert math.atan2(ns["z1"].imag, -1.) == math.atan2(0., -1.) assert math.atan2(ns["z2"].imag, -1.) == math.atan2(-0., -1.) 
def test_zeros_not_mixed_in_tuples(self): import math - exec "a = (0.0, 0.0); b = (-0.0, 0.0); c = (-0.0, -0.0)" - assert math.copysign(1., a[0]) == 1.0 - assert math.copysign(1., a[1]) == 1.0 - assert math.copysign(1., b[0]) == -1.0 - assert math.copysign(1., b[1]) == 1.0 - assert math.copysign(1., c[0]) == -1.0 - assert math.copysign(1., c[1]) == -1.0 + ns = {} + exec("a = (0.0, 0.0); b = (-0.0, 0.0); c = (-0.0, -0.0)", ns) + assert math.copysign(1., ns['a'][0]) == 1.0 + assert math.copysign(1., ns['a'][1]) == 1.0 + assert math.copysign(1., ns['b'][0]) == -1.0 + assert math.copysign(1., ns['b'][1]) == 1.0 + assert math.copysign(1., ns['c'][0]) == -1.0 + assert math.copysign(1., ns['c'][1]) == -1.0 class AppTestOptimizer: From noreply at buildbot.pypy.org Thu Feb 9 12:24:29 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Thu, 9 Feb 2012 12:24:29 +0100 (CET) Subject: [pypy-commit] pypy py3k: fix the syntax of few more tests Message-ID: <20120209112429.CAED582B1E@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52269:8543f5fac60c Date: 2012-02-09 12:13 +0100 http://bitbucket.org/pypy/pypy/changeset/8543f5fac60c/ Log: fix the syntax of few more tests diff --git a/pypy/interpreter/test/test_compiler.py b/pypy/interpreter/test/test_compiler.py --- a/pypy/interpreter/test/test_compiler.py +++ b/pypy/interpreter/test/test_compiler.py @@ -934,7 +934,7 @@ y """ try: - exec source + exec(source) except IndentationError: pass else: @@ -949,8 +949,8 @@ z """ try: - exec source - except IndentationError, e: + exec(source) + except IndentationError as e: assert e.msg == 'unindent does not match any outer indentation level' else: raise Exception("DID NOT RAISE") @@ -960,14 +960,14 @@ source1 = "x = (\n" source2 = "x = (\n\n" try: - exec source1 - except SyntaxError, err1: + exec(source1) + except SyntaxError as err1: pass else: raise Exception("DID NOT RAISE") try: - exec source2 - except SyntaxError, err2: + exec(source2) + except SyntaxError as err2: pass else: raise Exception("DID NOT RAISE") From noreply at buildbot.pypy.org Thu Feb 9 12:24:31 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Thu, 9 Feb 2012 12:24:31 +0100 (CET) Subject: [pypy-commit] pypy py3k: py3k-ify this test (syntax, exec and local scope, StringIO) Message-ID: <20120209112431.0E58082B1E@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52270:5547097bed40 Date: 2012-02-09 12:15 +0100 http://bitbucket.org/pypy/pypy/changeset/5547097bed40/ Log: py3k-ify this test (syntax, exec and local scope, StringIO) diff --git a/pypy/interpreter/test/test_compiler.py b/pypy/interpreter/test/test_compiler.py --- a/pypy/interpreter/test/test_compiler.py +++ b/pypy/interpreter/test/test_compiler.py @@ -912,11 +912,13 @@ source = """def _f(a): return a.f(a=a) """ - exec source - code = _f.func_code + ns = {} + exec(source, ns) + code = ns['_f'].__code__ - import StringIO, sys, dis - s = StringIO.StringIO() + import sys, dis + from io import StringIO + s = StringIO() out = sys.stdout sys.stdout = s try: From noreply at buildbot.pypy.org Thu Feb 9 12:24:32 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Thu, 9 Feb 2012 12:24:32 +0100 (CET) Subject: [pypy-commit] pypy py3k: py3k-ify this test as well (syntax, exec and local scope, StringIO) Message-ID: <20120209112432.4583782B1E@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52271:f64d232c96c0 Date: 2012-02-09 12:19 +0100 http://bitbucket.org/pypy/pypy/changeset/f64d232c96c0/ Log: py3k-ify this 
test as well (syntax, exec and local scope, StringIO) diff --git a/pypy/interpreter/test/test_compiler.py b/pypy/interpreter/test/test_compiler.py --- a/pypy/interpreter/test/test_compiler.py +++ b/pypy/interpreter/test/test_compiler.py @@ -868,15 +868,16 @@ def test_dis_stopcode(self): source = """def _f(a): - print a + print(a) return 1 """ + ns = {} + exec(source, ns) + code = ns['_f'].func_code - exec source - code = _f.func_code - - import StringIO, sys, dis - s = StringIO.StringIO() + import sys, dis + from io import StringIO + s = StringIO() save_stdout = sys.stdout sys.stdout = s try: From noreply at buildbot.pypy.org Thu Feb 9 12:24:33 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Thu, 9 Feb 2012 12:24:33 +0100 (CET) Subject: [pypy-commit] pypy py3k: py3k-ify this test as well (again :-) (syntax, exec and local scope, StringIO) Message-ID: <20120209112433.7D79F82B1E@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52272:534d3371e9a7 Date: 2012-02-09 12:20 +0100 http://bitbucket.org/pypy/pypy/changeset/534d3371e9a7/ Log: py3k-ify this test as well (again :-) (syntax, exec and local scope, StringIO) diff --git a/pypy/interpreter/test/test_compiler.py b/pypy/interpreter/test/test_compiler.py --- a/pypy/interpreter/test/test_compiler.py +++ b/pypy/interpreter/test/test_compiler.py @@ -891,11 +891,13 @@ source = """def _f(a): return [x for x in a if None] """ - exec source - code = _f.func_code + ns = {} + exec(source, ns) + code = ns['_f'].func_code - import StringIO, sys, dis - s = StringIO.StringIO() + import sys, dis + from io import StringIO + s = StringIO() out = sys.stdout sys.stdout = s try: From noreply at buildbot.pypy.org Thu Feb 9 12:24:34 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Thu, 9 Feb 2012 12:24:34 +0100 (CET) Subject: [pypy-commit] pypy py3k: co_code is a bytestring, so we don't need ord() Message-ID: <20120209112434.B5B9682B1E@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52273:abe40b46a9f1 Date: 2012-02-09 12:22 +0100 http://bitbucket.org/pypy/pypy/changeset/abe40b46a9f1/ Log: co_code is a bytestring, so we don't need ord() diff --git a/pypy/interpreter/test/test_compiler.py b/pypy/interpreter/test/test_compiler.py --- a/pypy/interpreter/test/test_compiler.py +++ b/pypy/interpreter/test/test_compiler.py @@ -782,7 +782,7 @@ co = compile("def f(): return None", "", "exec").co_consts[0] assert "None" not in co.co_names co = co.co_code - op = ord(co[0]) + (ord(co[1]) << 8) + op = co[0] + (co[1] << 8) assert op == opcode.opmap["LOAD_CONST"] def test_tuple_constants(self): From noreply at buildbot.pypy.org Thu Feb 9 12:24:35 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Thu, 9 Feb 2012 12:24:35 +0100 (CET) Subject: [pypy-commit] pypy py3k: py3k-ify this test as well (again*2 :-)) (syntax, exec and local scope, StringIO) Message-ID: <20120209112435.EBD9282B1E@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52274:9ee901b301cb Date: 2012-02-09 12:23 +0100 http://bitbucket.org/pypy/pypy/changeset/9ee901b301cb/ Log: py3k-ify this test as well (again*2 :-)) (syntax, exec and local scope, StringIO) diff --git a/pypy/interpreter/test/test_compiler.py b/pypy/interpreter/test/test_compiler.py --- a/pypy/interpreter/test/test_compiler.py +++ b/pypy/interpreter/test/test_compiler.py @@ -764,10 +764,12 @@ source = """def f(): return 3 """ - exec source - code = f.func_code - import dis, sys, StringIO - s = StringIO.StringIO() + ns = {} + exec(source, ns) + code = 
ns['f'].func_code + import dis, sys + from io import StringIO + s = StringIO() so = sys.stdout sys.stdout = s try: From noreply at buildbot.pypy.org Thu Feb 9 12:24:37 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Thu, 9 Feb 2012 12:24:37 +0100 (CET) Subject: [pypy-commit] pypy py3k: fix the syntax, and forget about longs Message-ID: <20120209112437.2E75D82B1E@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52275:55591785a2fb Date: 2012-02-09 12:24 +0100 http://bitbucket.org/pypy/pypy/changeset/55591785a2fb/ Log: fix the syntax, and forget about longs diff --git a/pypy/interpreter/test/test_compiler.py b/pypy/interpreter/test/test_compiler.py --- a/pypy/interpreter/test/test_compiler.py +++ b/pypy/interpreter/test/test_compiler.py @@ -789,9 +789,9 @@ def test_tuple_constants(self): ns = {} - exec "x = (1, 0); y = (1L, 0L)" in ns + exec("x = (1, 0); y = (1., 0.)", ns) assert isinstance(ns["x"][0], int) - assert isinstance(ns["y"][0], long) + assert isinstance(ns["y"][0], float) def test_division_folding(self): def code(source): From noreply at buildbot.pypy.org Thu Feb 9 13:54:47 2012 From: noreply at buildbot.pypy.org (fijal) Date: Thu, 9 Feb 2012 13:54:47 +0100 (CET) Subject: [pypy-commit] pypy numpy-record-dtypes: unicode and string dtypes Message-ID: <20120209125447.C86D682B1E@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-record-dtypes Changeset: r52276:18ad78aa8ed4 Date: 2012-02-09 14:54 +0200 http://bitbucket.org/pypy/pypy/changeset/18ad78aa8ed4/ Log: unicode and string dtypes diff --git a/pypy/module/micronumpy/interp_dtype.py b/pypy/module/micronumpy/interp_dtype.py --- a/pypy/module/micronumpy/interp_dtype.py +++ b/pypy/module/micronumpy/interp_dtype.py @@ -16,7 +16,8 @@ BOOLLTR = "b" FLOATINGLTR = "f" VOIDLTR = 'V' - +STRINGLTR = 'S' +UNICODELTR = 'U' VOID_STORAGE = lltype.Array(lltype.Char, hints={'nolength': True, 'render_as_void': True}) @@ -139,6 +140,41 @@ "V", space.gettypefor(interp_boxes.W_VoidBox), fields=fields, fieldnames=fieldnames) +def variable_dtype(space, name): + if name[0] in '<>': + # ignore byte order, not sure if it's worth it for unicode only + if name[0] != byteorder_prefix and name[1] == 'U': + xxx + name = name[1:] + char = name[0] + if len(name) == 1: + size = 0 + else: + try: + size = int(name[1:]) + except ValueError: + raise OperationError(space.w_TypeError, space.wrap("data type not understood")) + if char == 'S': + itemtype = types.StringType(size) + basename = 'string' + num = 18 + w_box_type = space.gettypefor(interp_boxes.W_StringBox) + elif char == 'V': + num = 20 + basename = 'void' + w_box_type = space.gettypefor(interp_boxes.W_VoidBox) + xxx + else: + assert char == 'U' + basename = 'unicode' + itemtype = types.UnicodeType(size) + num = 19 + w_box_type = space.gettypefor(interp_boxes.W_UnicodeBox) + return W_Dtype(itemtype, num, char, + basename + str(8 * itemtype.get_element_size()), + char, w_box_type) + + def descr__new__(space, w_subtype, w_dtype): cache = get_dtype_cache(space) @@ -148,12 +184,18 @@ return w_dtype elif space.isinstance_w(w_dtype, space.w_str): name = space.str_w(w_dtype) + if ',' in name: + return dtype_from_spec(space, name) try: return cache.dtypes_by_name[name] except KeyError: pass + if name[0] in 'VSU' or name[0] in '<>' and name[1] in 'VSU': + return variable_dtype(space, name) elif space.isinstance_w(w_dtype, space.w_list): return dtype_from_list(space, w_dtype) + elif space.isinstance_w(w_dtype, space.w_dict): + return dtype_from_dict(space, w_dtype) 
else: for dtype in cache.builtin_dtypes: if w_dtype in dtype.alternate_constructors: @@ -323,13 +365,42 @@ char='Q', w_box_type = space.gettypefor(interp_boxes.W_ULongLongBox), ) + self.w_stringdtype = W_Dtype( + types.StringType(0), + num=18, + kind=STRINGLTR, + name='string', + char='S', + w_box_type = space.gettypefor(interp_boxes.W_StringBox), + alternate_constructors=[space.w_str], + ) + self.w_unicodedtype = W_Dtype( + types.UnicodeType(0), + num=19, + kind=UNICODELTR, + name='unicode', + char='U', + w_box_type = space.gettypefor(interp_boxes.W_UnicodeBox), + alternate_constructors=[space.w_unicode], + ) + self.w_voiddtype = W_Dtype( + types.VoidType(0), + num=20, + kind=VOIDLTR, + name='void', + char='V', + w_box_type = space.gettypefor(interp_boxes.W_VoidBox), + #alternate_constructors=[space.w_buffer], + # XXX no buffer in space + ) self.builtin_dtypes = [ self.w_booldtype, self.w_int8dtype, self.w_uint8dtype, self.w_int16dtype, self.w_uint16dtype, self.w_int32dtype, self.w_uint32dtype, self.w_longdtype, self.w_ulongdtype, self.w_longlongdtype, self.w_ulonglongdtype, self.w_float32dtype, - self.w_float64dtype + self.w_float64dtype, self.w_stringdtype, self.w_unicodedtype, + self.w_voiddtype, ] self.dtypes_by_num_bytes = sorted( (dtype.itemtype.get_element_size(), dtype) @@ -343,8 +414,9 @@ self.dtypes_by_name[byteorder_prefix + can_name] = dtype new_name = nonnative_byteorder_prefix + can_name itemtypename = dtype.itemtype.__class__.__name__ + itemtype = getattr(types, 'NonNative' + itemtypename)() self.dtypes_by_name[new_name] = W_Dtype( - getattr(types, 'NonNative' + itemtypename)(), + itemtype, dtype.num, dtype.kind, new_name, dtype.char, dtype.w_box_type) for alias in dtype.aliases: self.dtypes_by_name[alias] = dtype diff --git a/pypy/module/micronumpy/test/test_dtypes.py b/pypy/module/micronumpy/test/test_dtypes.py --- a/pypy/module/micronumpy/test/test_dtypes.py +++ b/pypy/module/micronumpy/test/test_dtypes.py @@ -467,6 +467,7 @@ def test_str_dtype(self): from _numpypy import dtype, str_ + raises(TypeError, "dtype('Sx')") d = dtype('S8') assert d.itemsize == 8 assert dtype(str) == dtype('S') @@ -475,6 +476,18 @@ assert d.name == "string64" assert d.num == 18 + def test_unicode_dtype(self): + from _numpypy import dtype, unicode_ + + raises(TypeError, "dtype('Ux')") + d = dtype('U8') + assert d.itemsize == 8 * 4 + assert dtype(unicode) == dtype('U') + assert d.kind == 'U' + assert d.type is unicode_ + assert d.name == "unicode256" + assert d.num == 19 + class AppTestRecordDtypes(BaseNumpyAppTest): def test_create(self): from _numpypy import dtype, void diff --git a/pypy/module/micronumpy/types.py b/pypy/module/micronumpy/types.py --- a/pypy/module/micronumpy/types.py +++ b/pypy/module/micronumpy/types.py @@ -606,7 +606,7 @@ class BaseStringType(object): _mixin_ = True - def __init__(self, size): + def __init__(self, size=0): self.size = size def get_element_size(self): @@ -614,10 +614,15 @@ class StringType(BaseType, BaseStringType): T = lltype.Char +VoidType = StringType # why not? 
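The `variable_dtype` code earlier in this changeset turns a spec string such as 'S8' or 'U8' into a type character plus a count, and the element size follows from the per-character width: one byte per character for byte strings, four bytes per character for unicode (the new tests assume a 4-byte UniChar, checking dtype('S8').itemsize == 8 and dtype('U8').itemsize == 8 * 4, and a TypeError for a non-numeric size like 'Sx'). A minimal standalone sketch of that parsing rule, covering only the string/unicode cases and using illustrative names rather than PyPy's interp-level API:

    def parse_variable_dtype(spec):
        # Accept an optional byte-order prefix, as the hunk above does.
        if spec[0] in '<>':
            spec = spec[1:]
        char, rest = spec[0], spec[1:]
        if char not in 'SU':
            raise TypeError("data type not understood")
        try:
            length = int(rest) if rest else 0    # bare 'S' / 'U' means unsized
        except ValueError:
            raise TypeError("data type not understood")
        bytes_per_char = 1 if char == 'S' else 4
        return char, length, length * bytes_per_char

    # parse_variable_dtype('S8') -> ('S', 8, 8)
    # parse_variable_dtype('U8') -> ('U', 8, 32)
    # parse_variable_dtype('Sx') -> TypeError, as in the test above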
+NonNativeVoidType = VoidType +NonNativeStringType = StringType class UnicodeType(BaseType, BaseStringType): T = lltype.UniChar +NonNativeUnicodeType = UnicodeType + class RecordType(CompositeType): pass From noreply at buildbot.pypy.org Thu Feb 9 14:38:26 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Thu, 9 Feb 2012 14:38:26 +0100 (CET) Subject: [pypy-commit] pypy py3k: adapt this test to the new division semantics Message-ID: <20120209133826.2454D82B1E@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52277:806db631a1ef Date: 2012-02-09 14:35 +0100 http://bitbucket.org/pypy/pypy/changeset/806db631a1ef/ Log: adapt this test to the new division semantics diff --git a/pypy/interpreter/test/test_compiler.py b/pypy/interpreter/test/test_compiler.py --- a/pypy/interpreter/test/test_compiler.py +++ b/pypy/interpreter/test/test_compiler.py @@ -800,10 +800,8 @@ assert len(co.co_consts) == 2 assert co.co_consts[0] == 2 co = code("x = 10/4") - assert len(co.co_consts) == 3 - assert co.co_consts[:2] == (10, 4) - co = code("from __future__ import division\nx = 10/4") - assert co.co_consts[2] == 2.5 + assert len(co.co_consts) == 2 + assert co.co_consts[0] == 2.5 def test_tuple_folding(self): co = compile("x = (1, 2, 3)", "", "exec") From noreply at buildbot.pypy.org Thu Feb 9 14:38:27 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Thu, 9 Feb 2012 14:38:27 +0100 (CET) Subject: [pypy-commit] pypy py3k: py3k-ify this test Message-ID: <20120209133827.5AEEB82B69@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52278:2fe5fa84aab4 Date: 2012-02-09 14:38 +0100 http://bitbucket.org/pypy/pypy/changeset/2fe5fa84aab4/ Log: py3k-ify this test diff --git a/pypy/interpreter/test/test_compiler.py b/pypy/interpreter/test/test_compiler.py --- a/pypy/interpreter/test/test_compiler.py +++ b/pypy/interpreter/test/test_compiler.py @@ -821,7 +821,7 @@ def test_folding_of_binops_on_constants(self): def disassemble(func): - from StringIO import StringIO + from io import StringIO import sys, dis f = StringIO() tmp = sys.stdout @@ -853,7 +853,7 @@ ('a = 13 | 7', '(15)'), # binary or ): asm = dis_single(line) - print asm + print(asm) assert elem in asm, 'ELEMENT not in asm' assert 'BINARY_' not in asm, 'BINARY_in_asm' From noreply at buildbot.pypy.org Thu Feb 9 14:56:21 2012 From: noreply at buildbot.pypy.org (fijal) Date: Thu, 9 Feb 2012 14:56:21 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend-rpythonization: close merged branch Message-ID: <20120209135621.1CC7182B1E@wyvern.cs.uni-duesseldorf.de> Author: fijal Branch: ppc-jit-backend-rpythonization Changeset: r52279:bb928c63c548 Date: 2012-02-09 05:55 -0800 http://bitbucket.org/pypy/pypy/changeset/bb928c63c548/ Log: close merged branch From noreply at buildbot.pypy.org Thu Feb 9 15:01:49 2012 From: noreply at buildbot.pypy.org (bivab) Date: Thu, 9 Feb 2012 15:01:49 +0100 (CET) Subject: [pypy-commit] pypy arm-backend-2: fix tests that had gotten out of sync Message-ID: <20120209140149.39FB082B1E@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: arm-backend-2 Changeset: r52280:df8f56e47a2d Date: 2012-02-03 13:25 +0100 http://bitbucket.org/pypy/pypy/changeset/df8f56e47a2d/ Log: fix tests that had gotten out of sync diff --git a/pypy/jit/backend/arm/test/test_regalloc_mov.py b/pypy/jit/backend/arm/test/test_regalloc_mov.py --- a/pypy/jit/backend/arm/test/test_regalloc_mov.py +++ b/pypy/jit/backend/arm/test/test_regalloc_mov.py @@ -1,15 +1,17 @@ from pypy.rlib.objectmodel import instantiate 
from pypy.jit.backend.arm.assembler import AssemblerARM -from pypy.jit.backend.arm.locations import imm, ImmLocation, ConstFloatLoc,\ +from pypy.jit.backend.arm.locations import imm, ConstFloatLoc,\ RegisterLocation, StackLocation, \ - VFPRegisterLocation + VFPRegisterLocation, get_fp_offset from pypy.jit.backend.arm.registers import lr, ip, fp, vfp_ip from pypy.jit.backend.arm.conditions import AL -from pypy.jit.metainterp.history import INT, FLOAT, REF +from pypy.jit.backend.arm.arch import WORD +from pypy.jit.metainterp.history import FLOAT import py from pypy.jit.backend.arm.test.support import skip_unless_arm skip_unless_arm() + class MockInstr(object): def __init__(self, name, *args, **kwargs): self.name = name @@ -31,21 +33,30 @@ and self.args == other.args and self.kwargs == other.kwargs) mi = MockInstr + + # helper method for tests def r(i): return RegisterLocation(i) + def vfp(i): return VFPRegisterLocation(i) -stack = StackLocation -def stack_float(i): - return stack(i, num_words=2, type=FLOAT) + +def stack(i, **kwargs): + return StackLocation(i, get_fp_offset(i), **kwargs) + + +def stack_float(i, **kwargs): + return StackLocation(i, get_fp_offset(i + 1), type=FLOAT) + def imm_float(value): - addr = int(value) # whatever + addr = int(value) # whatever return ConstFloatLoc(addr) + class MockBuilder(object): def __init__(self): self.instrs = [] @@ -55,6 +66,7 @@ self.instrs.append(i) return i + class BaseMovTest(object): def setup_method(self, method): self.builder = MockBuilder() @@ -62,7 +74,7 @@ self.asm.mc = self.builder def validate(self, expected): - result =self.builder.instrs + result = self.builder.instrs assert result == expected @@ -89,8 +101,8 @@ s = stack(7) expected = [ mi('PUSH', [lr.value], cond=AL), - mi('gen_load_int', lr.value, 100, cond=AL), - mi('STR_ri', lr.value, fp.value, imm=-28, cond=AL), + mi('gen_load_int', lr.value, 100, cond=AL), + mi('STR_ri', lr.value, fp.value, imm=-s.value, cond=AL), mi('POP', [lr.value], cond=AL)] self.mov(val, s, expected) @@ -99,18 +111,19 @@ s = stack(7) expected = [ mi('PUSH', [lr.value], cond=AL), - mi('gen_load_int', lr.value, 65536, cond=AL), - mi('STR_ri', lr.value, fp.value, imm=-28, cond=AL), + mi('gen_load_int', lr.value, 65536, cond=AL), + mi('STR_ri', lr.value, fp.value, imm=-s.value, cond=AL), mi('POP', [lr.value], cond=AL)] self.mov(val, s, expected) + def test_mov_imm_to_big_stacklock(self): val = imm(100) s = stack(8191) expected = [mi('PUSH', [lr.value], cond=AL), mi('gen_load_int', lr.value, 100, cond=AL), mi('PUSH', [ip.value], cond=AL), - mi('gen_load_int', ip.value, -32764, cond=AL), + mi('gen_load_int', ip.value, -s.value, cond=AL), mi('STR_rr', lr.value, fp.value, ip.value, cond=AL), mi('POP', [ip.value], cond=AL), mi('POP', [lr.value], cond=AL)] @@ -122,7 +135,7 @@ expected = [mi('PUSH', [lr.value], cond=AL), mi('gen_load_int', lr.value, 65536, cond=AL), mi('PUSH', [ip.value], cond=AL), - mi('gen_load_int', ip.value, -32764, cond=AL), + mi('gen_load_int', ip.value, -s.value, cond=AL), mi('STR_rr', lr.value, fp.value, ip.value, cond=AL), mi('POP', [ip.value], cond=AL), mi('POP', [lr.value], cond=AL)] @@ -137,14 +150,14 @@ def test_mov_reg_to_stack(self): s = stack(10) r6 = r(6) - expected = [mi('STR_ri', r6.value, fp.value, imm=-40, cond=AL)] + expected = [mi('STR_ri', r6.value, fp.value, imm=-s.value, cond=AL)] self.mov(r6, s, expected) def test_mov_reg_to_big_stackloc(self): s = stack(8191) r6 = r(6) expected = [mi('PUSH', [ip.value], cond=AL), - mi('gen_load_int', ip.value, -32764, cond=AL), + 
mi('gen_load_int', ip.value, -s.value, cond=AL), mi('STR_rr', r6.value, fp.value, ip.value, cond=AL), mi('POP', [ip.value], cond=AL)] self.mov(r6, s, expected) @@ -152,7 +165,7 @@ def test_mov_stack_to_reg(self): s = stack(10) r6 = r(6) - expected = [mi('LDR_ri', r6.value, fp.value, imm=-40, cond=AL)] + expected = [mi('LDR_ri', r6.value, fp.value, imm=-s.value, cond=AL)] self.mov(s, r6, expected) def test_mov_big_stackloc_to_reg(self): @@ -160,7 +173,7 @@ r6 = r(6) expected = [ mi('PUSH', [lr.value], cond=AL), - mi('gen_load_int', lr.value, -32764, cond=AL), + mi('gen_load_int', lr.value, -s.value, cond=AL), mi('LDR_rr', r6.value, fp.value, lr.value, cond=AL), mi('POP', [lr.value], cond=AL)] self.mov(s, r6, expected) @@ -185,7 +198,7 @@ reg = vfp(7) s = stack_float(3) expected = [mi('PUSH', [ip.value], cond=AL), - mi('SUB_ri', ip.value, fp.value, 12, cond=AL), + mi('SUB_ri', ip.value, fp.value, s.value, cond=AL), mi('VSTR', reg.value, ip.value, cond=AL), mi('POP', [ip.value], cond=AL)] self.mov(reg, s, expected) @@ -194,7 +207,7 @@ reg = vfp(7) s = stack_float(800) expected = [mi('PUSH', [ip.value], cond=AL), - mi('gen_load_int', ip.value, 3200, cond=AL), + mi('gen_load_int', ip.value, s.value, cond=AL), mi('SUB_rr', ip.value, fp.value, ip.value, cond=AL), mi('VSTR', reg.value, ip.value, cond=AL), mi('POP', [ip.value], cond=AL)] @@ -204,7 +217,7 @@ reg = vfp(7) s = stack_float(3) expected = [mi('PUSH', [ip.value], cond=AL), - mi('SUB_ri', ip.value, fp.value, 12, cond=AL), + mi('SUB_ri', ip.value, fp.value, s.value, cond=AL), mi('VLDR', reg.value, ip.value, cond=AL), mi('POP', [ip.value], cond=AL)] self.mov(s, reg, expected) @@ -213,41 +226,68 @@ reg = vfp(7) s = stack_float(800) expected = [mi('PUSH', [ip.value], cond=AL), - mi('gen_load_int', ip.value, 3200, cond=AL), + mi('gen_load_int', ip.value, s.value, cond=AL), mi('SUB_rr', ip.value, fp.value, ip.value, cond=AL), mi('VSTR', reg.value, ip.value, cond=AL), mi('POP', [ip.value], cond=AL)] self.mov(reg, s, expected) def test_unsopported_cases(self): - py.test.raises(AssertionError, 'self.asm.regalloc_mov(imm(1), imm(2))') - py.test.raises(AssertionError, 'self.asm.regalloc_mov(imm(1), imm_float(2))') - py.test.raises(AssertionError, 'self.asm.regalloc_mov(imm(1), vfp(2))') - py.test.raises(AssertionError, 'self.asm.regalloc_mov(imm(1), stack_float(2))') - py.test.raises(AssertionError, 'self.asm.regalloc_mov(imm_float(1), imm(2))') - py.test.raises(AssertionError, 'self.asm.regalloc_mov(imm_float(1), imm_float(2))') - py.test.raises(AssertionError, 'self.asm.regalloc_mov(imm_float(1), r(2))') - py.test.raises(AssertionError, 'self.asm.regalloc_mov(imm_float(1), stack(2))') - py.test.raises(AssertionError, 'self.asm.regalloc_mov(imm_float(1), stack_float(2))') - py.test.raises(AssertionError, 'self.asm.regalloc_mov(r(1), imm(2))') - py.test.raises(AssertionError, 'self.asm.regalloc_mov(r(1), imm_float(2))') - py.test.raises(AssertionError, 'self.asm.regalloc_mov(r(1), stack_float(2))') - py.test.raises(AssertionError, 'self.asm.regalloc_mov(r(1), vfp(2))') - py.test.raises(AssertionError, 'self.asm.regalloc_mov(stack(1), imm(2))') - py.test.raises(AssertionError, 'self.asm.regalloc_mov(stack(1), imm_float(2))') - py.test.raises(AssertionError, 'self.asm.regalloc_mov(stack(1), stack(2))') - py.test.raises(AssertionError, 'self.asm.regalloc_mov(stack(1), stack_float(2))') - py.test.raises(AssertionError, 'self.asm.regalloc_mov(stack(1), vfp(2))') - py.test.raises(AssertionError, 'self.asm.regalloc_mov(stack(1), lr)') - 
py.test.raises(AssertionError, 'self.asm.regalloc_mov(stack_float(1), imm(2))') - py.test.raises(AssertionError, 'self.asm.regalloc_mov(stack_float(1), imm_float(2))') - py.test.raises(AssertionError, 'self.asm.regalloc_mov(stack_float(1), r(2))') - py.test.raises(AssertionError, 'self.asm.regalloc_mov(stack_float(1), stack(2))') - py.test.raises(AssertionError, 'self.asm.regalloc_mov(stack_float(1), stack_float(2))') - py.test.raises(AssertionError, 'self.asm.regalloc_mov(vfp(1), imm(2))') - py.test.raises(AssertionError, 'self.asm.regalloc_mov(vfp(1), imm_float(2))') - py.test.raises(AssertionError, 'self.asm.regalloc_mov(vfp(1), r(2))') - py.test.raises(AssertionError, 'self.asm.regalloc_mov(vfp(1), stack(2))') + py.test.raises(AssertionError, + 'self.asm.regalloc_mov(imm(1), imm(2))') + py.test.raises(AssertionError, + 'self.asm.regalloc_mov(imm(1), imm_float(2))') + py.test.raises(AssertionError, + 'self.asm.regalloc_mov(imm(1), vfp(2))') + py.test.raises(AssertionError, + 'self.asm.regalloc_mov(imm(1), stack_float(2))') + py.test.raises(AssertionError, + 'self.asm.regalloc_mov(imm_float(1), imm(2))') + py.test.raises(AssertionError, + 'self.asm.regalloc_mov(imm_float(1), imm_float(2))') + py.test.raises(AssertionError, + 'self.asm.regalloc_mov(imm_float(1), r(2))') + py.test.raises(AssertionError, + 'self.asm.regalloc_mov(imm_float(1), stack(2))') + py.test.raises(AssertionError, + 'self.asm.regalloc_mov(r(1), imm(2))') + py.test.raises(AssertionError, + 'self.asm.regalloc_mov(r(1), imm_float(2))') + py.test.raises(AssertionError, + 'self.asm.regalloc_mov(r(1), stack_float(2))') + py.test.raises(AssertionError, + 'self.asm.regalloc_mov(r(1), vfp(2))') + py.test.raises(AssertionError, + 'self.asm.regalloc_mov(stack(1), imm(2))') + py.test.raises(AssertionError, + 'self.asm.regalloc_mov(stack(1), imm_float(2))') + py.test.raises(AssertionError, + 'self.asm.regalloc_mov(stack(1), stack(2))') + py.test.raises(AssertionError, + 'self.asm.regalloc_mov(stack(1), stack_float(2))') + py.test.raises(AssertionError, + 'self.asm.regalloc_mov(stack(1), vfp(2))') + py.test.raises(AssertionError, + 'self.asm.regalloc_mov(stack(1), lr)') + py.test.raises(AssertionError, + 'self.asm.regalloc_mov(stack_float(1), imm(2))') + py.test.raises(AssertionError, + 'self.asm.regalloc_mov(stack_float(1), imm_float(2))') + py.test.raises(AssertionError, + 'self.asm.regalloc_mov(stack_float(1), r(2))') + py.test.raises(AssertionError, + 'self.asm.regalloc_mov(stack_float(1), stack(2))') + py.test.raises(AssertionError, + 'self.asm.regalloc_mov(stack_float(1), stack_float(2))') + py.test.raises(AssertionError, + 'self.asm.regalloc_mov(vfp(1), imm(2))') + py.test.raises(AssertionError, + 'self.asm.regalloc_mov(vfp(1), imm_float(2))') + py.test.raises(AssertionError, + 'self.asm.regalloc_mov(vfp(1), r(2))') + py.test.raises(AssertionError, + 'self.asm.regalloc_mov(vfp(1), stack(2))') + class TestMovFromVFPLoc(BaseMovTest): def mov(self, a, b, c, expected=None): @@ -261,14 +301,13 @@ e = [mi('VMOV_rc', r1.value, r2.value, vr.value, cond=AL)] self.mov(vr, r1, r2, e) - def test_from_vfp_stack(self): s = stack_float(4) r1 = r(1) r2 = r(2) e = [ - mi('LDR_ri', r1.value, fp.value, imm=-16, cond=AL), - mi('LDR_ri', r2.value, fp.value, imm=-12, cond=AL)] + mi('LDR_ri', r1.value, fp.value, imm=-s.value, cond=AL), + mi('LDR_ri', r2.value, fp.value, imm=-s.value + WORD, cond=AL)] self.mov(s, r1, r2, e) def test_from_big_vfp_stack(self): @@ -277,9 +316,9 @@ r2 = r(2) e = [ mi('PUSH', [ip.value], cond=AL), - mi('gen_load_int', 
ip.value, -2049*4, cond=AL), - mi('LDR_rr', r1.value, fp.value, ip.value,cond=AL), - mi('ADD_ri', ip.value, ip.value, imm=4, cond=AL), + mi('gen_load_int', ip.value, -s.value, cond=AL), + mi('LDR_rr', r1.value, fp.value, ip.value, cond=AL), + mi('ADD_ri', ip.value, ip.value, imm=WORD, cond=AL), mi('LDR_rr', r2.value, fp.value, ip.value, cond=AL), mi('POP', [ip.value], cond=AL)] self.mov(s, r1, r2, e) @@ -297,10 +336,15 @@ self.mov(i, r1, r2, e) def test_unsupported(self): - py.test.raises(AssertionError, 'self.asm.mov_from_vfp_loc(vfp(1), r(5), r(2))') - py.test.raises(AssertionError, 'self.asm.mov_from_vfp_loc(stack(1), r(1), r(2))') - py.test.raises(AssertionError, 'self.asm.mov_from_vfp_loc(imm(1), r(1), r(2))') - py.test.raises(AssertionError, 'self.asm.mov_from_vfp_loc(r(1), r(1), r(2))') + py.test.raises(AssertionError, + 'self.asm.mov_from_vfp_loc(vfp(1), r(5), r(2))') + py.test.raises(AssertionError, + 'self.asm.mov_from_vfp_loc(stack(1), r(1), r(2))') + py.test.raises(AssertionError, + 'self.asm.mov_from_vfp_loc(imm(1), r(1), r(2))') + py.test.raises(AssertionError, + 'self.asm.mov_from_vfp_loc(r(1), r(1), r(2))') + class TestMoveToVFPLoc(BaseMovTest): def mov(self, r1, r2, vfp, expected): @@ -319,8 +363,8 @@ r1 = r(1) r2 = r(2) e = [ - mi('STR_ri', r1.value, fp.value, imm=-16, cond=AL), - mi('STR_ri', r2.value, fp.value, imm=-12, cond=AL)] + mi('STR_ri', r1.value, fp.value, imm=-s.value, cond=AL), + mi('STR_ri', r2.value, fp.value, imm=-s.value + WORD, cond=AL)] self.mov(r1, r2, s, e) def test_from_big_vfp_stack(self): @@ -329,19 +373,25 @@ r2 = r(2) e = [ mi('PUSH', [ip.value], cond=AL), - mi('gen_load_int', ip.value, -2049*4, cond=AL), - mi('STR_rr', r1.value, fp.value, ip.value,cond=AL), + mi('gen_load_int', ip.value, -s.value, cond=AL), + mi('STR_rr', r1.value, fp.value, ip.value, cond=AL), mi('ADD_ri', ip.value, ip.value, imm=4, cond=AL), mi('STR_rr', r2.value, fp.value, ip.value, cond=AL), mi('POP', [ip.value], cond=AL)] self.mov(r1, r2, s, e) def unsupported(self): - py.test.raises(AssertionError, 'self.asm.mov_from_vfp_loc(r(5), r(2), vfp(4))') - py.test.raises(AssertionError, 'self.asm.mov_from_vfp_loc(r(1), r(2), stack(2))') - py.test.raises(AssertionError, 'self.asm.mov_from_vfp_loc(r(1), r(2), imm(2))') - py.test.raises(AssertionError, 'self.asm.mov_from_vfp_loc(r(1), r(2), imm_float(2))') - py.test.raises(AssertionError, 'self.asm.mov_from_vfp_loc(r(1), r(1), r(2))') + py.test.raises(AssertionError, + 'self.asm.mov_from_vfp_loc(r(5), r(2), vfp(4))') + py.test.raises(AssertionError, + 'self.asm.mov_from_vfp_loc(r(1), r(2), stack(2))') + py.test.raises(AssertionError, + 'self.asm.mov_from_vfp_loc(r(1), r(2), imm(2))') + py.test.raises(AssertionError, + 'self.asm.mov_from_vfp_loc(r(1), r(2), imm_float(2))') + py.test.raises(AssertionError, + 'self.asm.mov_from_vfp_loc(r(1), r(1), r(2))') + class TestRegallocPush(BaseMovTest): def push(self, v, e): @@ -371,7 +421,7 @@ def test_push_stack(self): s = stack(7) - e = [mi('LDR_ri', ip.value, fp.value, imm=-28, cond=AL), + e = [mi('LDR_ri', ip.value, fp.value, imm=-s.value, cond=AL), mi('PUSH', [ip.value], cond=AL) ] self.push(s, e) @@ -379,7 +429,7 @@ def test_push_big_stack(self): s = stack(1025) e = [mi('PUSH', [lr.value], cond=AL), - mi('gen_load_int', lr.value, -4100, cond=AL), + mi('gen_load_int', lr.value, -s.value, cond=AL), mi('LDR_rr', ip.value, fp.value, lr.value, cond=AL), mi('POP', [lr.value], cond=AL), mi('PUSH', [ip.value], cond=AL) @@ -395,7 +445,7 @@ sf = stack_float(4) e = [ mi('PUSH', [ip.value], cond=AL), - 
mi('SUB_ri', ip.value, fp.value, 16, cond=AL), + mi('SUB_ri', ip.value, fp.value, sf.value, cond=AL), mi('VLDR', vfp_ip.value, ip.value, cond=AL), mi('POP', [ip.value], cond=AL), mi('VPUSH', [vfp_ip.value], cond=AL), @@ -406,7 +456,7 @@ sf = stack_float(100) e = [ mi('PUSH', [ip.value], cond=AL), - mi('gen_load_int', ip.value, 400, cond=AL), + mi('gen_load_int', ip.value, sf.value, cond=AL), mi('SUB_rr', ip.value, fp.value, ip.value, cond=AL), mi('VLDR', vfp_ip.value, ip.value, cond=AL), mi('POP', [ip.value], cond=AL), @@ -414,6 +464,7 @@ ] self.push(sf, e) + class TestRegallocPop(BaseMovTest): def pop(self, loc, e): self.asm.regalloc_pop(loc) @@ -433,7 +484,7 @@ s = stack(12) e = [ mi('POP', [ip.value], cond=AL), - mi('STR_ri', ip.value, fp.value, imm=-48, cond=AL)] + mi('STR_ri', ip.value, fp.value, imm=-s.value, cond=AL)] self.pop(s, e) def test_pop_big_stackloc(self): @@ -441,7 +492,7 @@ e = [ mi('POP', [ip.value], cond=AL), mi('PUSH', [lr.value], cond=AL), - mi('gen_load_int', lr.value, -1200*4, cond=AL), + mi('gen_load_int', lr.value, -s.value, cond=AL), mi('STR_rr', ip.value, fp.value, lr.value, cond=AL), mi('POP', [lr.value], cond=AL) ] @@ -452,7 +503,7 @@ e = [ mi('VPOP', [vfp_ip.value], cond=AL), mi('PUSH', [ip.value], cond=AL), - mi('SUB_ri', ip.value, fp.value, 48, cond=AL), + mi('SUB_ri', ip.value, fp.value, s.value, cond=AL), mi('VSTR', vfp_ip.value, ip.value, cond=AL), mi('POP', [ip.value], cond=AL)] self.pop(s, e) @@ -462,7 +513,7 @@ e = [ mi('VPOP', [vfp_ip.value], cond=AL), mi('PUSH', [ip.value], cond=AL), - mi('gen_load_int', ip.value, 4800, cond=AL), + mi('gen_load_int', ip.value, s.value, cond=AL), mi('SUB_rr', ip.value, fp.value, ip.value, cond=AL), mi('VSTR', vfp_ip.value, ip.value, cond=AL), mi('POP', [ip.value], cond=AL)] @@ -471,4 +522,3 @@ def test_unsupported(self): py.test.raises(AssertionError, 'self.asm.regalloc_pop(imm(1))') py.test.raises(AssertionError, 'self.asm.regalloc_pop(imm_float(1))') - From noreply at buildbot.pypy.org Thu Feb 9 15:01:50 2012 From: noreply at buildbot.pypy.org (bivab) Date: Thu, 9 Feb 2012 15:01:50 +0100 (CET) Subject: [pypy-commit] pypy arm-backend-2: remove an obsolete translation test, import another translation test from the Message-ID: <20120209140150.8763582B1E@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: arm-backend-2 Changeset: r52281:85839a0e54fe Date: 2012-02-03 16:42 +0100 http://bitbucket.org/pypy/pypy/changeset/85839a0e54fe/ Log: remove an obsolete translation test, import another translation test from the x86 backend and add a conftest option to explicitly run translation tests diff --git a/pypy/jit/backend/arm/test/conftest.py b/pypy/jit/backend/arm/test/conftest.py new file mode 100644 --- /dev/null +++ b/pypy/jit/backend/arm/test/conftest.py @@ -0,0 +1,12 @@ +""" +This conftest adds an option to run the translation tests which by default will +be disabled. 
+""" + +def pytest_addoption(parser): + group = parser.getgroup('translation test options') + group.addoption('--run-translation-tests', + action="store_true", + default=False, + dest="run_translation_tests", + help="run tests that translate code") diff --git a/pypy/jit/backend/arm/test/support.py b/pypy/jit/backend/arm/test/support.py --- a/pypy/jit/backend/arm/test/support.py +++ b/pypy/jit/backend/arm/test/support.py @@ -1,5 +1,6 @@ import os import py +import pytest from pypy.rpython.lltypesystem import lltype, rffi from pypy.jit.backend.detect_cpu import getcpuclass @@ -29,6 +30,11 @@ def skip_unless_arm(): check_skip(os.uname()[4]) +def skip_unless_run_translation(): + if not pytest.config.option.run_translation_tests: + py.test.skip("Test skipped beause --run-translation-tests option is not set") + + def requires_arm_as(): import commands i = commands.getoutput("%s -version &1" % AS) diff --git a/pypy/jit/backend/arm/test/test_zrpy_gc.py b/pypy/jit/backend/arm/test/test_zrpy_gc.py --- a/pypy/jit/backend/arm/test/test_zrpy_gc.py +++ b/pypy/jit/backend/arm/test/test_zrpy_gc.py @@ -15,7 +15,10 @@ from pypy.tool.udir import udir from pypy.config.translationoption import DEFL_GC from pypy.jit.backend.arm.test.support import skip_unless_arm +from pypy.jit.backend.arm.test.support import skip_unless_run_translation skip_unless_arm() +skip_unless_run_translation() + class X(object): def __init__(self, x=0): diff --git a/pypy/jit/backend/arm/test/test_ztranslate_backend.py b/pypy/jit/backend/arm/test/test_ztranslate_backend.py deleted file mode 100644 --- a/pypy/jit/backend/arm/test/test_ztranslate_backend.py +++ /dev/null @@ -1,62 +0,0 @@ -import py -import os -from pypy.jit.metainterp.history import (AbstractFailDescr, - AbstractDescr, - BasicFailDescr, - BoxInt, Box, BoxPtr, - ConstInt, ConstPtr, - BoxObj, Const, - ConstObj, BoxFloat, ConstFloat) -from pypy.jit.metainterp.history import JitCellToken -from pypy.jit.metainterp.resoperation import ResOperation, rop -from pypy.rpython.test.test_llinterp import interpret -from pypy.jit.backend.detect_cpu import getcpuclass -from pypy.jit.backend.arm.runner import ArmCPU -from pypy.tool.udir import udir -from pypy.jit.backend.arm.test.support import skip_unless_arm -skip_unless_arm() - -class FakeStats(object): - pass -cpu = getcpuclass()(rtyper=None, stats=FakeStats(), translate_support_code=True) -class TestBackendTranslation(object): - def test_compile_bridge(self): - def loop(): - i0 = BoxInt() - i1 = BoxInt() - i2 = BoxInt() - faildescr1 = BasicFailDescr(1) - faildescr2 = BasicFailDescr(2) - looptoken = JitCellToken() - operations = [ - ResOperation(rop.INT_ADD, [i0, ConstInt(1)], i1), - ResOperation(rop.INT_LE, [i1, ConstInt(9)], i2), - ResOperation(rop.GUARD_TRUE, [i2], None, descr=faildescr1), - ResOperation(rop.JUMP, [i1], None, descr=looptoken), - ] - inputargs = [i0] - operations[2].setfailargs([i1]) - cpu.setup_once() - cpu.compile_loop(inputargs, operations, looptoken) - - i1b = BoxInt() - i3 = BoxInt() - bridge = [ - ResOperation(rop.INT_LE, [i1b, ConstInt(19)], i3), - ResOperation(rop.GUARD_TRUE, [i3], None, descr=faildescr2), - ResOperation(rop.JUMP, [i1b], None, descr=looptoken), - ] - bridge[1].setfailargs([i1b]) - assert looptoken._arm_func_addr != 0 - assert looptoken._arm_loop_code != 0 - cpu.compile_bridge(faildescr1, [i1b], bridge, looptoken, True) - - fail = cpu.execute_token(looptoken, 2) - res = cpu.get_latest_value_int(0) - return fail.identifier * 1000 + res - - logfile = udir.join('test_ztranslation.log') - 
os.environ['PYPYLOG'] = 'jit-log-opt:%s' % (logfile,) - res = interpret(loop, [], insist=True) - assert res == 2020 - diff --git a/pypy/jit/backend/x86/test/test_ztranslation.py b/pypy/jit/backend/arm/test/test_ztranslation.py copy from pypy/jit/backend/x86/test/test_ztranslation.py copy to pypy/jit/backend/arm/test/test_ztranslation.py --- a/pypy/jit/backend/x86/test/test_ztranslation.py +++ b/pypy/jit/backend/arm/test/test_ztranslation.py @@ -8,18 +8,23 @@ from pypy.jit.backend.test.support import CCompiledMixin from pypy.jit.codewriter.policy import StopAtXPolicy from pypy.translator.translator import TranslationContext -from pypy.jit.backend.x86.arch import IS_X86_32, IS_X86_64 from pypy.config.translationoption import DEFL_GC -from pypy.rlib import rgc +from pypy.jit.backend.arm.test.support import skip_unless_arm +from pypy.jit.backend.arm.test.support import skip_unless_run_translation +skip_unless_arm() +skip_unless_run_translation() -class TestTranslationX86(CCompiledMixin): +class TestTranslationARM(CCompiledMixin): CPUClass = getcpuclass() + def _get_TranslationContext(self): + t = TranslationContext() + t.config.translation.gc = DEFL_GC # 'hybrid' or 'minimark' + t.config.translation.gcrootfinder = 'shadowstack' + return t + def _check_cbuilder(self, cbuilder): - # We assume here that we have sse2. If not, the CPUClass - # needs to be changed to CPU386_NO_SSE2, but well. - assert '-msse2' in cbuilder.eci.compile_extra - assert '-mfpmath=sse' in cbuilder.eci.compile_extra + import pdb; pdb.set_trace() def test_stuff_translates(self): # this is a basic test that tries to hit a number of features and their @@ -168,88 +173,3 @@ bound = res & ~255 assert 1024 <= bound <= 131072 assert bound & (bound-1) == 0 # a power of two - - -class TestTranslationRemoveTypePtrX86(CCompiledMixin): - CPUClass = getcpuclass() - - def _get_TranslationContext(self): - t = TranslationContext() - t.config.translation.gc = DEFL_GC # 'hybrid' or 'minimark' - t.config.translation.gcrootfinder = 'asmgcc' - t.config.translation.list_comprehension_operations = True - t.config.translation.gcremovetypeptr = True - return t - - def test_external_exception_handling_translates(self): - jitdriver = JitDriver(greens = [], reds = ['n', 'total']) - - class ImDone(Exception): - def __init__(self, resvalue): - self.resvalue = resvalue - - @dont_look_inside - def f(x, total): - if x <= 30: - raise ImDone(total * 10) - if x > 200: - return 2 - raise ValueError - @dont_look_inside - def g(x): - if x > 150: - raise ValueError - return 2 - class Base: - def meth(self): - return 2 - class Sub(Base): - def meth(self): - return 1 - @dont_look_inside - def h(x): - if x < 20000: - return Sub() - else: - return Base() - def myportal(i): - set_param(jitdriver, "threshold", 3) - set_param(jitdriver, "trace_eagerness", 2) - total = 0 - n = i - while True: - jitdriver.can_enter_jit(n=n, total=total) - jitdriver.jit_merge_point(n=n, total=total) - try: - total += f(n, total) - except ValueError: - total += 1 - try: - total += g(n) - except ValueError: - total -= 1 - n -= h(n).meth() # this is to force a GUARD_CLASS - def main(i): - try: - myportal(i) - except ImDone, e: - return e.resvalue - - # XXX custom fishing, depends on the exact env var and format - logfile = udir.join('test_ztranslation.log') - os.environ['PYPYLOG'] = 'jit-log-opt:%s' % (logfile,) - try: - res = self.meta_interp(main, [400]) - assert res == main(400) - finally: - del os.environ['PYPYLOG'] - - guard_class = 0 - for line in open(str(logfile)): - if 'guard_class' 
in line: - guard_class += 1 - # if we get many more guard_classes, it means that we generate - # guards that always fail (the following assert's original purpose - # is to catch the following case: each GUARD_CLASS is misgenerated - # and always fails with "gcremovetypeptr") - assert 0 < guard_class < 10 From noreply at buildbot.pypy.org Thu Feb 9 15:01:51 2012 From: noreply at buildbot.pypy.org (bivab) Date: Thu, 9 Feb 2012 15:01:51 +0100 (CET) Subject: [pypy-commit] pypy arm-backend-2: add missing file Message-ID: <20120209140151.BC6FD82B1E@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: arm-backend-2 Changeset: r52282:86a7c3da8a7b Date: 2012-02-03 20:50 +0100 http://bitbucket.org/pypy/pypy/changeset/86a7c3da8a7b/ Log: add missing file diff --git a/pypy/jit/backend/arm/tool/__init__.py b/pypy/jit/backend/arm/tool/__init__.py new file mode 100644 From noreply at buildbot.pypy.org Thu Feb 9 15:01:52 2012 From: noreply at buildbot.pypy.org (bivab) Date: Thu, 9 Feb 2012 15:01:52 +0100 (CET) Subject: [pypy-commit] pypy arm-backend-2: fix test_random_mixed in test_jump.py Message-ID: <20120209140152.EECB182B1E@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: arm-backend-2 Changeset: r52283:3f0b37b09c6e Date: 2012-02-09 14:26 +0100 http://bitbucket.org/pypy/pypy/changeset/3f0b37b09c6e/ Log: fix test_random_mixed in test_jump.py diff --git a/pypy/jit/backend/arm/test/test_jump.py b/pypy/jit/backend/arm/test/test_jump.py --- a/pypy/jit/backend/arm/test/test_jump.py +++ b/pypy/jit/backend/arm/test/test_jump.py @@ -176,8 +176,8 @@ def test_random_mixed(): assembler = MockAssembler() - registers1 = [r0, r1, r2] - registers2 = [d0, d1, d2] + registers1 = all_regs + registers2 = all_vfp_regs VFPWORDS = 2 # def pick1(): @@ -233,18 +233,17 @@ elif loc.is_stack(): stack[loc.position] = 'value-width%d-%d' % (loc.width, i) if loc.width > WORD: - stack[loc.position-1] = 'value-hiword-%d' % i + stack[loc.position+1] = 'value-hiword-%d' % i else: assert loc.is_imm() or loc.is_imm_float() return regs1, regs2, stack # - for i in range(1):#range(500): + for i in range(500): seen = {} src_locations2 = [pick2() for i in range(4)] dst_locations2 = pick_dst(pick2, 4, seen) src_locations1 = [pick1c() for i in range(5)] dst_locations1 = pick_dst(pick1, 5, seen) - #import pdb; pdb.set_trace() assembler = MockAssembler() remap_frame_layout_mixed(assembler, src_locations1, dst_locations1, ip, @@ -263,7 +262,7 @@ elif loc.is_stack(): got = stack[loc.position] if loc.width > WORD: - got = (got, stack[loc.position-1]) + got = (got, stack[loc.position+1]) return got if loc.is_imm() or loc.is_imm_float(): return 'const-%d' % loc.value @@ -278,7 +277,7 @@ if loc.width > WORD: newval1, newval2 = newvalue stack[loc.position] = newval1 - stack[loc.position-1] = newval2 + stack[loc.position+1] = newval2 else: stack[loc.position] = newvalue else: From noreply at buildbot.pypy.org Thu Feb 9 15:02:07 2012 From: noreply at buildbot.pypy.org (bivab) Date: Thu, 9 Feb 2012 15:02:07 +0100 (CET) Subject: [pypy-commit] pypy arm-backend-2: Update shadowstack header according to f0d095a1d379 Message-ID: <20120209140207.D625E82B69@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: arm-backend-2 Changeset: r52285:6a23cecdada3 Date: 2012-02-09 14:56 +0100 http://bitbucket.org/pypy/pypy/changeset/6a23cecdada3/ Log: Update shadowstack header according to f0d095a1d379 diff --git a/pypy/jit/backend/arm/assembler.py b/pypy/jit/backend/arm/assembler.py --- a/pypy/jit/backend/arm/assembler.py +++ 
b/pypy/jit/backend/arm/assembler.py @@ -535,17 +535,20 @@ self.gen_shadowstack_header(gcrootmap) def gen_shadowstack_header(self, gcrootmap): - # we need to put two words into the shadowstack: the MARKER + # we need to put two words into the shadowstack: the MARKER_FRAME # and the address of the frame (fp, actually) rst = gcrootmap.get_root_stack_top_addr() self.mc.gen_load_int(r.ip.value, rst) self.mc.LDR_ri(r.r4.value, r.ip.value) # LDR r4, [rootstacktop] + # + MARKER = gcrootmap.MARKER_FRAME self.mc.ADD_ri(r.r5.value, r.r4.value, imm=2 * WORD) # ADD r5, r4 [2*WORD] - self.mc.gen_load_int(r.r6.value, gcrootmap.MARKER) - self.mc.STR_ri(r.r6.value, r.r4.value) - self.mc.STR_ri(r.fp.value, r.r4.value, WORD) - self.mc.STR_ri(r.r5.value, r.ip.value) + self.mc.gen_load_int(r.r6.value, MARKER) + self.mc.STR_ri(r.r6.value, r.r4.value, WORD) # STR MARKER, r4 [WORD] + self.mc.STR_ri(r.fp.value, r.r4.value) # STR fp, r4 + # + self.mc.STR_ri(r.r5.value, r.ip.value) # STR r5 [rootstacktop] def gen_footer_shadowstack(self, gcrootmap, mc): rst = gcrootmap.get_root_stack_top_addr() From noreply at buildbot.pypy.org Thu Feb 9 15:02:06 2012 From: noreply at buildbot.pypy.org (bivab) Date: Thu, 9 Feb 2012 15:02:06 +0100 (CET) Subject: [pypy-commit] pypy arm-backend-2: merge default Message-ID: <20120209140206.91F2482B1E@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: arm-backend-2 Changeset: r52284:619fa7780600 Date: 2012-02-09 14:29 +0100 http://bitbucket.org/pypy/pypy/changeset/619fa7780600/ Log: merge default diff too long, truncating to 10000 out of 155826 lines diff --git a/lib-python/2.7/BaseHTTPServer.py b/lib-python/2.7/BaseHTTPServer.py --- a/lib-python/2.7/BaseHTTPServer.py +++ b/lib-python/2.7/BaseHTTPServer.py @@ -310,7 +310,13 @@ """ try: - self.raw_requestline = self.rfile.readline() + self.raw_requestline = self.rfile.readline(65537) + if len(self.raw_requestline) > 65536: + self.requestline = '' + self.request_version = '' + self.command = '' + self.send_error(414) + return if not self.raw_requestline: self.close_connection = 1 return diff --git a/lib-python/2.7/ConfigParser.py b/lib-python/2.7/ConfigParser.py --- a/lib-python/2.7/ConfigParser.py +++ b/lib-python/2.7/ConfigParser.py @@ -545,6 +545,38 @@ if isinstance(val, list): options[name] = '\n'.join(val) +import UserDict as _UserDict + +class _Chainmap(_UserDict.DictMixin): + """Combine multiple mappings for successive lookups. + + For example, to emulate Python's normal lookup sequence: + + import __builtin__ + pylookup = _Chainmap(locals(), globals(), vars(__builtin__)) + """ + + def __init__(self, *maps): + self._maps = maps + + def __getitem__(self, key): + for mapping in self._maps: + try: + return mapping[key] + except KeyError: + pass + raise KeyError(key) + + def keys(self): + result = [] + seen = set() + for mapping in self_maps: + for key in mapping: + if key not in seen: + result.append(key) + seen.add(key) + return result + class ConfigParser(RawConfigParser): def get(self, section, option, raw=False, vars=None): @@ -559,16 +591,18 @@ The section DEFAULT is special. 
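The `_Chainmap` mixin added above lets ConfigParser.get() consult the per-call vars, the section's own options and the defaults in that order, instead of copying everything into one dict as before (as committed, keys() refers to an undefined name `self_maps`, so only the `__getitem__` path is actually usable from get()). A small stand-alone sketch of that lookup order, with plain dicts standing in for vardict, sectiondict and self._defaults:

    class Chain(object):
        # Same lookup rule as _Chainmap.__getitem__: first mapping that
        # knows the key wins.
        def __init__(self, *maps):
            self._maps = maps
        def __getitem__(self, key):
            for mapping in self._maps:
                try:
                    return mapping[key]
                except KeyError:
                    pass
            raise KeyError(key)

    vars_ = {'opt': 'from vars'}
    section = {'opt': 'from section', 'other': '1'}
    defaults = {'opt': 'from defaults', 'fallback': '2'}
    d = Chain(vars_, section, defaults)
    # d['opt']      -> 'from vars'     (vars shadow the section)
    # d['other']    -> '1'             (section shadows the defaults)
    # d['fallback'] -> '2'             (found only in the defaults)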
""" - d = self._defaults.copy() + sectiondict = {} try: - d.update(self._sections[section]) + sectiondict = self._sections[section] except KeyError: if section != DEFAULTSECT: raise NoSectionError(section) # Update with the entry specific variables + vardict = {} if vars: for key, value in vars.items(): - d[self.optionxform(key)] = value + vardict[self.optionxform(key)] = value + d = _Chainmap(vardict, sectiondict, self._defaults) option = self.optionxform(option) try: value = d[option] diff --git a/lib-python/2.7/Cookie.py b/lib-python/2.7/Cookie.py --- a/lib-python/2.7/Cookie.py +++ b/lib-python/2.7/Cookie.py @@ -258,6 +258,11 @@ '\033' : '\\033', '\034' : '\\034', '\035' : '\\035', '\036' : '\\036', '\037' : '\\037', + # Because of the way browsers really handle cookies (as opposed + # to what the RFC says) we also encode , and ; + + ',' : '\\054', ';' : '\\073', + '"' : '\\"', '\\' : '\\\\', '\177' : '\\177', '\200' : '\\200', '\201' : '\\201', diff --git a/lib-python/2.7/HTMLParser.py b/lib-python/2.7/HTMLParser.py --- a/lib-python/2.7/HTMLParser.py +++ b/lib-python/2.7/HTMLParser.py @@ -26,7 +26,7 @@ tagfind = re.compile('[a-zA-Z][-.a-zA-Z0-9:_]*') attrfind = re.compile( r'\s*([a-zA-Z_][-.:a-zA-Z_0-9]*)(\s*=\s*' - r'(\'[^\']*\'|"[^"]*"|[-a-zA-Z0-9./,:;+*%?!&$\(\)_#=~@]*))?') + r'(\'[^\']*\'|"[^"]*"|[^\s"\'=<>`]*))?') locatestarttagend = re.compile(r""" <[a-zA-Z][-.a-zA-Z0-9:_]* # tag name @@ -99,7 +99,7 @@ markupbase.ParserBase.reset(self) def feed(self, data): - """Feed data to the parser. + r"""Feed data to the parser. Call this as often as you want, with as little or as much text as you want (may include '\n'). @@ -367,13 +367,16 @@ return s def replaceEntities(s): s = s.groups()[0] - if s[0] == "#": - s = s[1:] - if s[0] in ['x','X']: - c = int(s[1:], 16) - else: - c = int(s) - return unichr(c) + try: + if s[0] == "#": + s = s[1:] + if s[0] in ['x','X']: + c = int(s[1:], 16) + else: + c = int(s) + return unichr(c) + except ValueError: + return '&#'+s+';' else: # Cannot use name2codepoint directly, because HTMLParser supports apos, # which is not part of HTML 4 diff --git a/lib-python/2.7/SimpleHTTPServer.py b/lib-python/2.7/SimpleHTTPServer.py --- a/lib-python/2.7/SimpleHTTPServer.py +++ b/lib-python/2.7/SimpleHTTPServer.py @@ -15,6 +15,7 @@ import BaseHTTPServer import urllib import cgi +import sys import shutil import mimetypes try: @@ -131,7 +132,8 @@ length = f.tell() f.seek(0) self.send_response(200) - self.send_header("Content-type", "text/html") + encoding = sys.getfilesystemencoding() + self.send_header("Content-type", "text/html; charset=%s" % encoding) self.send_header("Content-Length", str(length)) self.end_headers() return f diff --git a/lib-python/2.7/SimpleXMLRPCServer.py b/lib-python/2.7/SimpleXMLRPCServer.py --- a/lib-python/2.7/SimpleXMLRPCServer.py +++ b/lib-python/2.7/SimpleXMLRPCServer.py @@ -246,7 +246,7 @@ marshalled data. For backwards compatibility, a dispatch function can be provided as an argument (see comment in SimpleXMLRPCRequestHandler.do_POST) but overriding the - existing method through subclassing is the prefered means + existing method through subclassing is the preferred means of changing method dispatch behavior. """ diff --git a/lib-python/2.7/SocketServer.py b/lib-python/2.7/SocketServer.py --- a/lib-python/2.7/SocketServer.py +++ b/lib-python/2.7/SocketServer.py @@ -675,7 +675,7 @@ # A timeout to apply to the request socket, if not None. timeout = None - # Disable nagle algoritm for this socket, if True. 
+ # Disable nagle algorithm for this socket, if True. # Use only when wbufsize != 0, to avoid small packets. disable_nagle_algorithm = False diff --git a/lib-python/2.7/StringIO.py b/lib-python/2.7/StringIO.py --- a/lib-python/2.7/StringIO.py +++ b/lib-python/2.7/StringIO.py @@ -266,6 +266,7 @@ 8th bit) will cause a UnicodeError to be raised when getvalue() is called. """ + _complain_ifclosed(self.closed) if self.buflist: self.buf += ''.join(self.buflist) self.buflist = [] diff --git a/lib-python/2.7/_abcoll.py b/lib-python/2.7/_abcoll.py --- a/lib-python/2.7/_abcoll.py +++ b/lib-python/2.7/_abcoll.py @@ -82,7 +82,7 @@ @classmethod def __subclasshook__(cls, C): if cls is Iterator: - if _hasattr(C, "next"): + if _hasattr(C, "next") and _hasattr(C, "__iter__"): return True return NotImplemented diff --git a/lib-python/2.7/_pyio.py b/lib-python/2.7/_pyio.py --- a/lib-python/2.7/_pyio.py +++ b/lib-python/2.7/_pyio.py @@ -16,6 +16,7 @@ import io from io import (__all__, SEEK_SET, SEEK_CUR, SEEK_END) +from errno import EINTR __metaclass__ = type @@ -559,7 +560,11 @@ if not data: break res += data - return bytes(res) + if res: + return bytes(res) + else: + # b'' or None + return data def readinto(self, b): """Read up to len(b) bytes into b. @@ -678,7 +683,7 @@ """ def __init__(self, raw): - self.raw = raw + self._raw = raw ### Positioning ### @@ -722,8 +727,8 @@ if self.raw is None: raise ValueError("raw stream already detached") self.flush() - raw = self.raw - self.raw = None + raw = self._raw + self._raw = None return raw ### Inquiries ### @@ -738,6 +743,10 @@ return self.raw.writable() @property + def raw(self): + return self._raw + + @property def closed(self): return self.raw.closed @@ -933,7 +942,12 @@ current_size = 0 while True: # Read until EOF or until read() would block. 
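The _pyio changes in this hunk and the ones that follow all apply the same fix: a raw read()/write() interrupted by a signal raises IOError with errno EINTR and should simply be retried rather than propagated out of the buffered layer. The retry idiom, reduced to a standalone helper (hypothetical name, same logic as the loops added around the raw calls here):

    from errno import EINTR

    def retry_on_eintr(call, *args):
        # Re-issue the raw I/O call until it completes or fails for a
        # reason other than an interrupting signal.
        while True:
            try:
                return call(*args)
            except IOError as e:
                if e.errno != EINTR:
                    raise
                # interrupted before any data was transferred: try again

    # e.g. chunk = retry_on_eintr(raw.read, wanted)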
- chunk = self.raw.read() + try: + chunk = self.raw.read() + except IOError as e: + if e.errno != EINTR: + raise + continue if chunk in empty_values: nodata_val = chunk break @@ -952,7 +966,12 @@ chunks = [buf[pos:]] wanted = max(self.buffer_size, n) while avail < n: - chunk = self.raw.read(wanted) + try: + chunk = self.raw.read(wanted) + except IOError as e: + if e.errno != EINTR: + raise + continue if chunk in empty_values: nodata_val = chunk break @@ -981,7 +1000,14 @@ have = len(self._read_buf) - self._read_pos if have < want or have <= 0: to_read = self.buffer_size - have - current = self.raw.read(to_read) + while True: + try: + current = self.raw.read(to_read) + except IOError as e: + if e.errno != EINTR: + raise + continue + break if current: self._read_buf = self._read_buf[self._read_pos:] + current self._read_pos = 0 @@ -1088,7 +1114,12 @@ written = 0 try: while self._write_buf: - n = self.raw.write(self._write_buf) + try: + n = self.raw.write(self._write_buf) + except IOError as e: + if e.errno != EINTR: + raise + continue if n > len(self._write_buf) or n < 0: raise IOError("write() returned incorrect number of bytes") del self._write_buf[:n] @@ -1456,7 +1487,7 @@ if not isinstance(errors, basestring): raise ValueError("invalid errors: %r" % errors) - self.buffer = buffer + self._buffer = buffer self._line_buffering = line_buffering self._encoding = encoding self._errors = errors @@ -1511,6 +1542,10 @@ def line_buffering(self): return self._line_buffering + @property + def buffer(self): + return self._buffer + def seekable(self): return self._seekable @@ -1724,8 +1759,8 @@ if self.buffer is None: raise ValueError("buffer is already detached") self.flush() - buffer = self.buffer - self.buffer = None + buffer = self._buffer + self._buffer = None return buffer def seek(self, cookie, whence=0): diff --git a/lib-python/2.7/_weakrefset.py b/lib-python/2.7/_weakrefset.py --- a/lib-python/2.7/_weakrefset.py +++ b/lib-python/2.7/_weakrefset.py @@ -66,7 +66,11 @@ return sum(x() is not None for x in self.data) def __contains__(self, item): - return ref(item) in self.data + try: + wr = ref(item) + except TypeError: + return False + return wr in self.data def __reduce__(self): return (self.__class__, (list(self),), diff --git a/lib-python/2.7/anydbm.py b/lib-python/2.7/anydbm.py --- a/lib-python/2.7/anydbm.py +++ b/lib-python/2.7/anydbm.py @@ -29,17 +29,8 @@ list = d.keys() # return a list of all existing keys (slow!) Future versions may change the order in which implementations are -tested for existence, add interfaces to other dbm-like +tested for existence, and add interfaces to other dbm-like implementations. - -The open function has an optional second argument. This can be 'r', -for read-only access, 'w', for read-write access of an existing -database, 'c' for read-write access to a new or existing database, and -'n' for read-write access to a new database. The default is 'r'. - -Note: 'r' and 'w' fail if the database doesn't exist; 'c' creates it -only if it doesn't exist; and 'n' always creates a new database. - """ class error(Exception): @@ -63,7 +54,18 @@ error = tuple(_errors) -def open(file, flag = 'r', mode = 0666): +def open(file, flag='r', mode=0666): + """Open or create database at path given by *file*. + + Optional argument *flag* can be 'r' (default) for read-only access, 'w' + for read-write access of an existing database, 'c' for read-write access + to a new or existing database, and 'n' for read-write access to a new + database. 
+ + Note: 'r' and 'w' fail if the database doesn't exist; 'c' creates it + only if it doesn't exist; and 'n' always creates a new database. + """ + # guess the type of an existing database from whichdb import whichdb result=whichdb(file) diff --git a/lib-python/2.7/argparse.py b/lib-python/2.7/argparse.py --- a/lib-python/2.7/argparse.py +++ b/lib-python/2.7/argparse.py @@ -82,6 +82,7 @@ ] +import collections as _collections import copy as _copy import os as _os import re as _re @@ -1037,7 +1038,7 @@ self._prog_prefix = prog self._parser_class = parser_class - self._name_parser_map = {} + self._name_parser_map = _collections.OrderedDict() self._choices_actions = [] super(_SubParsersAction, self).__init__( @@ -1080,7 +1081,7 @@ parser = self._name_parser_map[parser_name] except KeyError: tup = parser_name, ', '.join(self._name_parser_map) - msg = _('unknown parser %r (choices: %s)' % tup) + msg = _('unknown parser %r (choices: %s)') % tup raise ArgumentError(self, msg) # parse all the remaining options into the namespace @@ -1109,7 +1110,7 @@ the builtin open() function. """ - def __init__(self, mode='r', bufsize=None): + def __init__(self, mode='r', bufsize=-1): self._mode = mode self._bufsize = bufsize @@ -1121,18 +1122,19 @@ elif 'w' in self._mode: return _sys.stdout else: - msg = _('argument "-" with mode %r' % self._mode) + msg = _('argument "-" with mode %r') % self._mode raise ValueError(msg) # all other arguments are used as file names - if self._bufsize: + try: return open(string, self._mode, self._bufsize) - else: - return open(string, self._mode) + except IOError as e: + message = _("can't open '%s': %s") + raise ArgumentTypeError(message % (string, e)) def __repr__(self): - args = [self._mode, self._bufsize] - args_str = ', '.join([repr(arg) for arg in args if arg is not None]) + args = self._mode, self._bufsize + args_str = ', '.join(repr(arg) for arg in args if arg != -1) return '%s(%s)' % (type(self).__name__, args_str) # =========================== @@ -1275,13 +1277,20 @@ # create the action object, and add it to the parser action_class = self._pop_action_class(kwargs) if not _callable(action_class): - raise ValueError('unknown action "%s"' % action_class) + raise ValueError('unknown action "%s"' % (action_class,)) action = action_class(**kwargs) # raise an error if the action type is not callable type_func = self._registry_get('type', action.type, action.type) if not _callable(type_func): - raise ValueError('%r is not callable' % type_func) + raise ValueError('%r is not callable' % (type_func,)) + + # raise an error if the metavar does not match the type + if hasattr(self, "_get_formatter"): + try: + self._get_formatter()._format_args(action, None) + except TypeError: + raise ValueError("length of metavar tuple does not match nargs") return self._add_action(action) @@ -1481,6 +1490,7 @@ self._defaults = container._defaults self._has_negative_number_optionals = \ container._has_negative_number_optionals + self._mutually_exclusive_groups = container._mutually_exclusive_groups def _add_action(self, action): action = super(_ArgumentGroup, self)._add_action(action) diff --git a/lib-python/2.7/ast.py b/lib-python/2.7/ast.py --- a/lib-python/2.7/ast.py +++ b/lib-python/2.7/ast.py @@ -29,12 +29,12 @@ from _ast import __version__ -def parse(expr, filename='', mode='exec'): +def parse(source, filename='', mode='exec'): """ - Parse an expression into an AST node. - Equivalent to compile(expr, filename, mode, PyCF_ONLY_AST). + Parse the source into an AST node. 
+ Equivalent to compile(source, filename, mode, PyCF_ONLY_AST). """ - return compile(expr, filename, mode, PyCF_ONLY_AST) + return compile(source, filename, mode, PyCF_ONLY_AST) def literal_eval(node_or_string): @@ -152,8 +152,6 @@ Increment the line number of each node in the tree starting at *node* by *n*. This is useful to "move code" to a different location in a file. """ - if 'lineno' in node._attributes: - node.lineno = getattr(node, 'lineno', 0) + n for child in walk(node): if 'lineno' in child._attributes: child.lineno = getattr(child, 'lineno', 0) + n @@ -204,9 +202,9 @@ def walk(node): """ - Recursively yield all child nodes of *node*, in no specified order. This is - useful if you only want to modify nodes in place and don't care about the - context. + Recursively yield all descendant nodes in the tree starting at *node* + (including *node* itself), in no specified order. This is useful if you + only want to modify nodes in place and don't care about the context. """ from collections import deque todo = deque([node]) diff --git a/lib-python/2.7/asyncore.py b/lib-python/2.7/asyncore.py --- a/lib-python/2.7/asyncore.py +++ b/lib-python/2.7/asyncore.py @@ -54,7 +54,11 @@ import os from errno import EALREADY, EINPROGRESS, EWOULDBLOCK, ECONNRESET, EINVAL, \ - ENOTCONN, ESHUTDOWN, EINTR, EISCONN, EBADF, ECONNABORTED, errorcode + ENOTCONN, ESHUTDOWN, EINTR, EISCONN, EBADF, ECONNABORTED, EPIPE, EAGAIN, \ + errorcode + +_DISCONNECTED = frozenset((ECONNRESET, ENOTCONN, ESHUTDOWN, ECONNABORTED, EPIPE, + EBADF)) try: socket_map @@ -109,7 +113,7 @@ if flags & (select.POLLHUP | select.POLLERR | select.POLLNVAL): obj.handle_close() except socket.error, e: - if e.args[0] not in (EBADF, ECONNRESET, ENOTCONN, ESHUTDOWN, ECONNABORTED): + if e.args[0] not in _DISCONNECTED: obj.handle_error() else: obj.handle_close() @@ -353,7 +357,7 @@ except TypeError: return None except socket.error as why: - if why.args[0] in (EWOULDBLOCK, ECONNABORTED): + if why.args[0] in (EWOULDBLOCK, ECONNABORTED, EAGAIN): return None else: raise @@ -367,7 +371,7 @@ except socket.error, why: if why.args[0] == EWOULDBLOCK: return 0 - elif why.args[0] in (ECONNRESET, ENOTCONN, ESHUTDOWN, ECONNABORTED): + elif why.args[0] in _DISCONNECTED: self.handle_close() return 0 else: @@ -385,7 +389,7 @@ return data except socket.error, why: # winsock sometimes throws ENOTCONN - if why.args[0] in [ECONNRESET, ENOTCONN, ESHUTDOWN, ECONNABORTED]: + if why.args[0] in _DISCONNECTED: self.handle_close() return '' else: diff --git a/lib-python/2.7/bdb.py b/lib-python/2.7/bdb.py --- a/lib-python/2.7/bdb.py +++ b/lib-python/2.7/bdb.py @@ -250,6 +250,12 @@ list.append(lineno) bp = Breakpoint(filename, lineno, temporary, cond, funcname) + def _prune_breaks(self, filename, lineno): + if (filename, lineno) not in Breakpoint.bplist: + self.breaks[filename].remove(lineno) + if not self.breaks[filename]: + del self.breaks[filename] + def clear_break(self, filename, lineno): filename = self.canonic(filename) if not filename in self.breaks: @@ -261,10 +267,7 @@ # pair, then remove the breaks entry for bp in Breakpoint.bplist[filename, lineno][:]: bp.deleteMe() - if (filename, lineno) not in Breakpoint.bplist: - self.breaks[filename].remove(lineno) - if not self.breaks[filename]: - del self.breaks[filename] + self._prune_breaks(filename, lineno) def clear_bpbynumber(self, arg): try: @@ -277,7 +280,8 @@ return 'Breakpoint number (%d) out of range' % number if not bp: return 'Breakpoint (%d) already deleted' % number - self.clear_break(bp.file, bp.line) + 
bp.deleteMe() + self._prune_breaks(bp.file, bp.line) def clear_all_file_breaks(self, filename): filename = self.canonic(filename) diff --git a/lib-python/2.7/collections.py b/lib-python/2.7/collections.py --- a/lib-python/2.7/collections.py +++ b/lib-python/2.7/collections.py @@ -6,59 +6,38 @@ __all__ += _abcoll.__all__ from _collections import deque, defaultdict -from operator import itemgetter as _itemgetter, eq as _eq +from operator import itemgetter as _itemgetter from keyword import iskeyword as _iskeyword import sys as _sys import heapq as _heapq -from itertools import repeat as _repeat, chain as _chain, starmap as _starmap, \ - ifilter as _ifilter, imap as _imap +from itertools import repeat as _repeat, chain as _chain, starmap as _starmap + try: - from thread import get_ident + from thread import get_ident as _get_ident except ImportError: - from dummy_thread import get_ident - -def _recursive_repr(user_function): - 'Decorator to make a repr function return "..." for a recursive call' - repr_running = set() - - def wrapper(self): - key = id(self), get_ident() - if key in repr_running: - return '...' - repr_running.add(key) - try: - result = user_function(self) - finally: - repr_running.discard(key) - return result - - # Can't use functools.wraps() here because of bootstrap issues - wrapper.__module__ = getattr(user_function, '__module__') - wrapper.__doc__ = getattr(user_function, '__doc__') - wrapper.__name__ = getattr(user_function, '__name__') - return wrapper + from dummy_thread import get_ident as _get_ident ################################################################################ ### OrderedDict ################################################################################ -class OrderedDict(dict, MutableMapping): +class OrderedDict(dict): 'Dictionary that remembers insertion order' # An inherited dict maps keys to values. # The inherited dict provides __getitem__, __len__, __contains__, and get. # The remaining methods are order-aware. - # Big-O running times for all methods are the same as for regular dictionaries. + # Big-O running times for all methods are the same as regular dictionaries. - # The internal self.__map dictionary maps keys to links in a doubly linked list. + # The internal self.__map dict maps keys to links in a doubly linked list. # The circular doubly linked list starts and ends with a sentinel element. # The sentinel element never gets deleted (this simplifies the algorithm). # Each link is stored as a list of length three: [PREV, NEXT, KEY]. def __init__(self, *args, **kwds): - '''Initialize an ordered dictionary. Signature is the same as for - regular dictionaries, but keyword arguments are not recommended - because their insertion order is arbitrary. + '''Initialize an ordered dictionary. The signature is the same as + regular dictionaries, but keyword arguments are not recommended because + their insertion order is arbitrary. ''' if len(args) > 1: @@ -66,17 +45,15 @@ try: self.__root except AttributeError: - self.__root = root = [None, None, None] # sentinel node - PREV = 0 - NEXT = 1 - root[PREV] = root[NEXT] = root + self.__root = root = [] # sentinel node + root[:] = [root, root, None] self.__map = {} - self.update(*args, **kwds) + self.__update(*args, **kwds) def __setitem__(self, key, value, PREV=0, NEXT=1, dict_setitem=dict.__setitem__): 'od.__setitem__(i, y) <==> od[i]=y' - # Setting a new item creates a new link which goes at the end of the linked - # list, and the inherited dictionary is updated with the new key/value pair. 
+ # Setting a new item creates a new link at the end of the linked list, + # and the inherited dictionary is updated with the new key/value pair. if key not in self: root = self.__root last = root[PREV] @@ -85,65 +62,160 @@ def __delitem__(self, key, PREV=0, NEXT=1, dict_delitem=dict.__delitem__): 'od.__delitem__(y) <==> del od[y]' - # Deleting an existing item uses self.__map to find the link which is - # then removed by updating the links in the predecessor and successor nodes. + # Deleting an existing item uses self.__map to find the link which gets + # removed by updating the links in the predecessor and successor nodes. dict_delitem(self, key) - link = self.__map.pop(key) - link_prev = link[PREV] - link_next = link[NEXT] + link_prev, link_next, key = self.__map.pop(key) link_prev[NEXT] = link_next link_next[PREV] = link_prev - def __iter__(self, NEXT=1, KEY=2): + def __iter__(self): 'od.__iter__() <==> iter(od)' # Traverse the linked list in order. + NEXT, KEY = 1, 2 root = self.__root curr = root[NEXT] while curr is not root: yield curr[KEY] curr = curr[NEXT] - def __reversed__(self, PREV=0, KEY=2): + def __reversed__(self): 'od.__reversed__() <==> reversed(od)' # Traverse the linked list in reverse order. + PREV, KEY = 0, 2 root = self.__root curr = root[PREV] while curr is not root: yield curr[KEY] curr = curr[PREV] + def clear(self): + 'od.clear() -> None. Remove all items from od.' + for node in self.__map.itervalues(): + del node[:] + root = self.__root + root[:] = [root, root, None] + self.__map.clear() + dict.clear(self) + + # -- the following methods do not depend on the internal structure -- + + def keys(self): + 'od.keys() -> list of keys in od' + return list(self) + + def values(self): + 'od.values() -> list of values in od' + return [self[key] for key in self] + + def items(self): + 'od.items() -> list of (key, value) pairs in od' + return [(key, self[key]) for key in self] + + def iterkeys(self): + 'od.iterkeys() -> an iterator over the keys in od' + return iter(self) + + def itervalues(self): + 'od.itervalues -> an iterator over the values in od' + for k in self: + yield self[k] + + def iteritems(self): + 'od.iteritems -> an iterator over the (key, value) pairs in od' + for k in self: + yield (k, self[k]) + + update = MutableMapping.update + + __update = update # let subclasses override update without breaking __init__ + + __marker = object() + + def pop(self, key, default=__marker): + '''od.pop(k[,d]) -> v, remove specified key and return the corresponding + value. If key is not found, d is returned if given, otherwise KeyError + is raised. + + ''' + if key in self: + result = self[key] + del self[key] + return result + if default is self.__marker: + raise KeyError(key) + return default + + def setdefault(self, key, default=None): + 'od.setdefault(k[,d]) -> od.get(k,d), also set od[k]=d if k not in od' + if key in self: + return self[key] + self[key] = default + return default + + def popitem(self, last=True): + '''od.popitem() -> (k, v), return and remove a (key, value) pair. + Pairs are returned in LIFO order if last is true or FIFO order if false. + + ''' + if not self: + raise KeyError('dictionary is empty') + key = next(reversed(self) if last else iter(self)) + value = self.pop(key) + return key, value + + def __repr__(self, _repr_running={}): + 'od.__repr__() <==> repr(od)' + call_key = id(self), _get_ident() + if call_key in _repr_running: + return '...' 
+ _repr_running[call_key] = 1 + try: + if not self: + return '%s()' % (self.__class__.__name__,) + return '%s(%r)' % (self.__class__.__name__, self.items()) + finally: + del _repr_running[call_key] + def __reduce__(self): 'Return state information for pickling' items = [[k, self[k]] for k in self] - tmp = self.__map, self.__root - del self.__map, self.__root inst_dict = vars(self).copy() - self.__map, self.__root = tmp + for k in vars(OrderedDict()): + inst_dict.pop(k, None) if inst_dict: return (self.__class__, (items,), inst_dict) return self.__class__, (items,) - def clear(self): - 'od.clear() -> None. Remove all items from od.' - try: - for node in self.__map.itervalues(): - del node[:] - self.__root[:] = [self.__root, self.__root, None] - self.__map.clear() - except AttributeError: - pass - dict.clear(self) + def copy(self): + 'od.copy() -> a shallow copy of od' + return self.__class__(self) - setdefault = MutableMapping.setdefault - update = MutableMapping.update - pop = MutableMapping.pop - keys = MutableMapping.keys - values = MutableMapping.values - items = MutableMapping.items - iterkeys = MutableMapping.iterkeys - itervalues = MutableMapping.itervalues - iteritems = MutableMapping.iteritems - __ne__ = MutableMapping.__ne__ + @classmethod + def fromkeys(cls, iterable, value=None): + '''OD.fromkeys(S[, v]) -> New ordered dictionary with keys from S. + If not specified, the value defaults to None. + + ''' + self = cls() + for key in iterable: + self[key] = value + return self + + def __eq__(self, other): + '''od.__eq__(y) <==> od==y. Comparison to another OD is order-sensitive + while comparison to a regular mapping is order-insensitive. + + ''' + if isinstance(other, OrderedDict): + return len(self)==len(other) and self.items() == other.items() + return dict.__eq__(self, other) + + def __ne__(self, other): + 'od.__ne__(y) <==> od!=y' + return not self == other + + # -- the following methods support python 3.x style dictionary views -- def viewkeys(self): "od.viewkeys() -> a set-like object providing a view on od's keys" @@ -157,49 +229,6 @@ "od.viewitems() -> a set-like object providing a view on od's items" return ItemsView(self) - def popitem(self, last=True): - '''od.popitem() -> (k, v), return and remove a (key, value) pair. - Pairs are returned in LIFO order if last is true or FIFO order if false. - - ''' - if not self: - raise KeyError('dictionary is empty') - key = next(reversed(self) if last else iter(self)) - value = self.pop(key) - return key, value - - @_recursive_repr - def __repr__(self): - 'od.__repr__() <==> repr(od)' - if not self: - return '%s()' % (self.__class__.__name__,) - return '%s(%r)' % (self.__class__.__name__, self.items()) - - def copy(self): - 'od.copy() -> a shallow copy of od' - return self.__class__(self) - - @classmethod - def fromkeys(cls, iterable, value=None): - '''OD.fromkeys(S[, v]) -> New ordered dictionary with keys from S - and values equal to v (which defaults to None). - - ''' - d = cls() - for key in iterable: - d[key] = value - return d - - def __eq__(self, other): - '''od.__eq__(y) <==> od==y. Comparison to another OD is order-sensitive - while comparison to a regular mapping is order-insensitive. - - ''' - if isinstance(other, OrderedDict): - return len(self)==len(other) and \ - all(_imap(_eq, self.iteritems(), other.iteritems())) - return dict.__eq__(self, other) - ################################################################################ ### namedtuple @@ -328,16 +357,16 @@ or multiset. 
Elements are stored as dictionary keys and their counts are stored as dictionary values. - >>> c = Counter('abracadabra') # count elements from a string + >>> c = Counter('abcdeabcdabcaba') # count elements from a string >>> c.most_common(3) # three most common elements - [('a', 5), ('r', 2), ('b', 2)] + [('a', 5), ('b', 4), ('c', 3)] >>> sorted(c) # list all unique elements - ['a', 'b', 'c', 'd', 'r'] + ['a', 'b', 'c', 'd', 'e'] >>> ''.join(sorted(c.elements())) # list elements with repetitions - 'aaaaabbcdrr' + 'aaaaabbbbcccdde' >>> sum(c.values()) # total of all counts - 11 + 15 >>> c['a'] # count of letter 'a' 5 @@ -345,8 +374,8 @@ ... c[elem] += 1 # by adding 1 to each element's count >>> c['a'] # now there are seven 'a' 7 - >>> del c['r'] # remove all 'r' - >>> c['r'] # now there are zero 'r' + >>> del c['b'] # remove all 'b' + >>> c['b'] # now there are zero 'b' 0 >>> d = Counter('simsalabim') # make another counter @@ -385,6 +414,7 @@ >>> c = Counter(a=4, b=2) # a new counter from keyword args ''' + super(Counter, self).__init__() self.update(iterable, **kwds) def __missing__(self, key): @@ -396,8 +426,8 @@ '''List the n most common elements and their counts from the most common to the least. If n is None, then list all element counts. - >>> Counter('abracadabra').most_common(3) - [('a', 5), ('r', 2), ('b', 2)] + >>> Counter('abcdeabcdabcaba').most_common(3) + [('a', 5), ('b', 4), ('c', 3)] ''' # Emulate Bag.sortedByCount from Smalltalk @@ -463,7 +493,7 @@ for elem, count in iterable.iteritems(): self[elem] = self_get(elem, 0) + count else: - dict.update(self, iterable) # fast path when counter is empty + super(Counter, self).update(iterable) # fast path when counter is empty else: self_get = self.get for elem in iterable: @@ -499,13 +529,16 @@ self.subtract(kwds) def copy(self): - 'Like dict.copy() but returns a Counter instance instead of a dict.' - return Counter(self) + 'Return a shallow copy.' + return self.__class__(self) + + def __reduce__(self): + return self.__class__, (dict(self),) def __delitem__(self, elem): 'Like dict.__delitem__() but does not raise KeyError for missing values.' 
if elem in self: - dict.__delitem__(self, elem) + super(Counter, self).__delitem__(elem) def __repr__(self): if not self: @@ -532,10 +565,13 @@ if not isinstance(other, Counter): return NotImplemented result = Counter() - for elem in set(self) | set(other): - newcount = self[elem] + other[elem] + for elem, count in self.items(): + newcount = count + other[elem] if newcount > 0: result[elem] = newcount + for elem, count in other.items(): + if elem not in self and count > 0: + result[elem] = count return result def __sub__(self, other): @@ -548,10 +584,13 @@ if not isinstance(other, Counter): return NotImplemented result = Counter() - for elem in set(self) | set(other): - newcount = self[elem] - other[elem] + for elem, count in self.items(): + newcount = count - other[elem] if newcount > 0: result[elem] = newcount + for elem, count in other.items(): + if elem not in self and count < 0: + result[elem] = 0 - count return result def __or__(self, other): @@ -564,11 +603,14 @@ if not isinstance(other, Counter): return NotImplemented result = Counter() - for elem in set(self) | set(other): - p, q = self[elem], other[elem] - newcount = q if p < q else p + for elem, count in self.items(): + other_count = other[elem] + newcount = other_count if count < other_count else count if newcount > 0: result[elem] = newcount + for elem, count in other.items(): + if elem not in self and count > 0: + result[elem] = count return result def __and__(self, other): @@ -581,11 +623,9 @@ if not isinstance(other, Counter): return NotImplemented result = Counter() - if len(self) < len(other): - self, other = other, self - for elem in _ifilter(self.__contains__, other): - p, q = self[elem], other[elem] - newcount = p if p < q else q + for elem, count in self.items(): + other_count = other[elem] + newcount = count if count < other_count else other_count if newcount > 0: result[elem] = newcount return result diff --git a/lib-python/2.7/compileall.py b/lib-python/2.7/compileall.py --- a/lib-python/2.7/compileall.py +++ b/lib-python/2.7/compileall.py @@ -9,7 +9,6 @@ packages -- for now, you'll have to deal with packages separately.) See module py_compile for details of the actual byte-compilation. - """ import os import sys @@ -31,7 +30,6 @@ directory name that will show up in error messages) force: if 1, force compilation, even if timestamps are up-to-date quiet: if 1, be quiet during compilation - """ if not quiet: print 'Listing', dir, '...' @@ -61,15 +59,16 @@ return success def compile_file(fullname, ddir=None, force=0, rx=None, quiet=0): - """Byte-compile file. - file: the file to byte-compile + """Byte-compile one file. + + Arguments (only fullname is required): + + fullname: the file to byte-compile ddir: if given, purported directory name (this is the directory name that will show up in error messages) force: if 1, force compilation, even if timestamps are up-to-date quiet: if 1, be quiet during compilation - """ - success = 1 name = os.path.basename(fullname) if ddir is not None: @@ -120,7 +119,6 @@ maxlevels: max recursion level (default 0) force: as for compile_dir() (default 0) quiet: as for compile_dir() (default 0) - """ success = 1 for dir in sys.path: diff --git a/lib-python/2.7/csv.py b/lib-python/2.7/csv.py --- a/lib-python/2.7/csv.py +++ b/lib-python/2.7/csv.py @@ -281,7 +281,7 @@ an all or nothing approach, so we allow for small variations in this number. 1) build a table of the frequency of each character on every line. 
- 2) build a table of freqencies of this frequency (meta-frequency?), + 2) build a table of frequencies of this frequency (meta-frequency?), e.g. 'x occurred 5 times in 10 rows, 6 times in 1000 rows, 7 times in 2 rows' 3) use the mode of the meta-frequency to determine the /expected/ diff --git a/lib-python/2.7/ctypes/test/test_arrays.py b/lib-python/2.7/ctypes/test/test_arrays.py --- a/lib-python/2.7/ctypes/test/test_arrays.py +++ b/lib-python/2.7/ctypes/test/test_arrays.py @@ -37,7 +37,7 @@ values = [ia[i] for i in range(len(init))] self.assertEqual(values, [0] * len(init)) - # Too many in itializers should be caught + # Too many initializers should be caught self.assertRaises(IndexError, int_array, *range(alen*2)) CharArray = ARRAY(c_char, 3) diff --git a/lib-python/2.7/ctypes/test/test_as_parameter.py b/lib-python/2.7/ctypes/test/test_as_parameter.py --- a/lib-python/2.7/ctypes/test/test_as_parameter.py +++ b/lib-python/2.7/ctypes/test/test_as_parameter.py @@ -187,6 +187,18 @@ self.assertEqual((s8i.a, s8i.b, s8i.c, s8i.d, s8i.e, s8i.f, s8i.g, s8i.h), (9*2, 8*3, 7*4, 6*5, 5*6, 4*7, 3*8, 2*9)) + def test_recursive_as_param(self): + from ctypes import c_int + + class A(object): + pass + + a = A() + a._as_parameter_ = a + with self.assertRaises(RuntimeError): + c_int.from_param(a) + + #~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ class AsParamWrapper(object): diff --git a/lib-python/2.7/ctypes/test/test_callbacks.py b/lib-python/2.7/ctypes/test/test_callbacks.py --- a/lib-python/2.7/ctypes/test/test_callbacks.py +++ b/lib-python/2.7/ctypes/test/test_callbacks.py @@ -206,6 +206,42 @@ windll.user32.EnumWindows(EnumWindowsCallbackFunc, 0) + def test_callback_register_int(self): + # Issue #8275: buggy handling of callback args under Win64 + # NOTE: should be run on release builds as well + dll = CDLL(_ctypes_test.__file__) + CALLBACK = CFUNCTYPE(c_int, c_int, c_int, c_int, c_int, c_int) + # All this function does is call the callback with its args squared + func = dll._testfunc_cbk_reg_int + func.argtypes = (c_int, c_int, c_int, c_int, c_int, CALLBACK) + func.restype = c_int + + def callback(a, b, c, d, e): + return a + b + c + d + e + + result = func(2, 3, 4, 5, 6, CALLBACK(callback)) + self.assertEqual(result, callback(2*2, 3*3, 4*4, 5*5, 6*6)) + + def test_callback_register_double(self): + # Issue #8275: buggy handling of callback args under Win64 + # NOTE: should be run on release builds as well + dll = CDLL(_ctypes_test.__file__) + CALLBACK = CFUNCTYPE(c_double, c_double, c_double, c_double, + c_double, c_double) + # All this function does is call the callback with its args squared + func = dll._testfunc_cbk_reg_double + func.argtypes = (c_double, c_double, c_double, + c_double, c_double, CALLBACK) + func.restype = c_double + + def callback(a, b, c, d, e): + return a + b + c + d + e + + result = func(1.1, 2.2, 3.3, 4.4, 5.5, CALLBACK(callback)) + self.assertEqual(result, + callback(1.1*1.1, 2.2*2.2, 3.3*3.3, 4.4*4.4, 5.5*5.5)) + + ################################################################ if __name__ == '__main__': diff --git a/lib-python/2.7/ctypes/test/test_functions.py b/lib-python/2.7/ctypes/test/test_functions.py --- a/lib-python/2.7/ctypes/test/test_functions.py +++ b/lib-python/2.7/ctypes/test/test_functions.py @@ -116,7 +116,7 @@ self.assertEqual(result, 21) self.assertEqual(type(result), int) - # You cannot assing character format codes as restype any longer + # You cannot assign character format codes as restype any longer self.assertRaises(TypeError, setattr, f, 
"restype", "i") def test_floatresult(self): diff --git a/lib-python/2.7/ctypes/test/test_init.py b/lib-python/2.7/ctypes/test/test_init.py --- a/lib-python/2.7/ctypes/test/test_init.py +++ b/lib-python/2.7/ctypes/test/test_init.py @@ -27,7 +27,7 @@ self.assertEqual((y.x.a, y.x.b), (0, 0)) self.assertEqual(y.x.new_was_called, False) - # But explicitely creating an X structure calls __new__ and __init__, of course. + # But explicitly creating an X structure calls __new__ and __init__, of course. x = X() self.assertEqual((x.a, x.b), (9, 12)) self.assertEqual(x.new_was_called, True) diff --git a/lib-python/2.7/ctypes/test/test_numbers.py b/lib-python/2.7/ctypes/test/test_numbers.py --- a/lib-python/2.7/ctypes/test/test_numbers.py +++ b/lib-python/2.7/ctypes/test/test_numbers.py @@ -157,7 +157,7 @@ def test_int_from_address(self): from array import array for t in signed_types + unsigned_types: - # the array module doesn't suppport all format codes + # the array module doesn't support all format codes # (no 'q' or 'Q') try: array(t._type_) diff --git a/lib-python/2.7/ctypes/test/test_win32.py b/lib-python/2.7/ctypes/test/test_win32.py --- a/lib-python/2.7/ctypes/test/test_win32.py +++ b/lib-python/2.7/ctypes/test/test_win32.py @@ -17,7 +17,7 @@ # ValueError: Procedure probably called with not enough arguments (4 bytes missing) self.assertRaises(ValueError, IsWindow) - # This one should succeeed... + # This one should succeed... self.assertEqual(0, IsWindow(0)) # ValueError: Procedure probably called with too many arguments (8 bytes in excess) diff --git a/lib-python/2.7/curses/wrapper.py b/lib-python/2.7/curses/wrapper.py --- a/lib-python/2.7/curses/wrapper.py +++ b/lib-python/2.7/curses/wrapper.py @@ -43,7 +43,8 @@ return func(stdscr, *args, **kwds) finally: # Set everything back to normal - stdscr.keypad(0) - curses.echo() - curses.nocbreak() - curses.endwin() + if 'stdscr' in locals(): + stdscr.keypad(0) + curses.echo() + curses.nocbreak() + curses.endwin() diff --git a/lib-python/2.7/decimal.py b/lib-python/2.7/decimal.py --- a/lib-python/2.7/decimal.py +++ b/lib-python/2.7/decimal.py @@ -1068,14 +1068,16 @@ if ans: return ans - if not self: - # -Decimal('0') is Decimal('0'), not Decimal('-0') + if context is None: + context = getcontext() + + if not self and context.rounding != ROUND_FLOOR: + # -Decimal('0') is Decimal('0'), not Decimal('-0'), except + # in ROUND_FLOOR rounding mode. ans = self.copy_abs() else: ans = self.copy_negate() - if context is None: - context = getcontext() return ans._fix(context) def __pos__(self, context=None): @@ -1088,14 +1090,15 @@ if ans: return ans - if not self: - # + (-0) = 0 + if context is None: + context = getcontext() + + if not self and context.rounding != ROUND_FLOOR: + # + (-0) = 0, except in ROUND_FLOOR rounding mode. 
ans = self.copy_abs() else: ans = Decimal(self) - if context is None: - context = getcontext() return ans._fix(context) def __abs__(self, round=True, context=None): @@ -1680,7 +1683,7 @@ self = _dec_from_triple(self._sign, '1', exp_min-1) digits = 0 rounding_method = self._pick_rounding_function[context.rounding] - changed = getattr(self, rounding_method)(digits) + changed = rounding_method(self, digits) coeff = self._int[:digits] or '0' if changed > 0: coeff = str(int(coeff)+1) @@ -1720,8 +1723,6 @@ # here self was representable to begin with; return unchanged return Decimal(self) - _pick_rounding_function = {} - # for each of the rounding functions below: # self is a finite, nonzero Decimal # prec is an integer satisfying 0 <= prec < len(self._int) @@ -1788,6 +1789,17 @@ else: return -self._round_down(prec) + _pick_rounding_function = dict( + ROUND_DOWN = _round_down, + ROUND_UP = _round_up, + ROUND_HALF_UP = _round_half_up, + ROUND_HALF_DOWN = _round_half_down, + ROUND_HALF_EVEN = _round_half_even, + ROUND_CEILING = _round_ceiling, + ROUND_FLOOR = _round_floor, + ROUND_05UP = _round_05up, + ) + def fma(self, other, third, context=None): """Fused multiply-add. @@ -2492,8 +2504,8 @@ if digits < 0: self = _dec_from_triple(self._sign, '1', exp-1) digits = 0 - this_function = getattr(self, self._pick_rounding_function[rounding]) - changed = this_function(digits) + this_function = self._pick_rounding_function[rounding] + changed = this_function(self, digits) coeff = self._int[:digits] or '0' if changed == 1: coeff = str(int(coeff)+1) @@ -3705,18 +3717,6 @@ ##### Context class ####################################################### - -# get rounding method function: -rounding_functions = [name for name in Decimal.__dict__.keys() - if name.startswith('_round_')] -for name in rounding_functions: - # name is like _round_half_even, goes to the global ROUND_HALF_EVEN value. - globalname = name[1:].upper() - val = globals()[globalname] - Decimal._pick_rounding_function[val] = name - -del name, val, globalname, rounding_functions - class _ContextManager(object): """Context manager class to support localcontext(). @@ -5990,7 +5990,7 @@ def _format_align(sign, body, spec): """Given an unpadded, non-aligned numeric string 'body' and sign - string 'sign', add padding and aligment conforming to the given + string 'sign', add padding and alignment conforming to the given format specifier dictionary 'spec' (as produced by parse_format_specifier). 
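
The decimal.py hunk above swaps a name-based dispatch (store the method name, resolve it with getattr at call time) for a table that stores the rounding functions themselves and calls them as plain functions with self passed explicitly. A minimal sketch of that pattern follows; the class, mode names, and rounders (Quantizer, TRUNCATE, BUMP) are hypothetical stand-ins, not taken from the patch.

# Sketch of the dispatch-table pattern adopted in the decimal.py hunk above:
# keep the functions themselves in the table and call func(self, ...),
# instead of keeping method names and resolving them with getattr().

TRUNCATE = 'truncate'
BUMP = 'bump'

class Quantizer(object):
    def _round_truncate(self, value):
        # drop the fractional part (truncate toward zero)
        return int(value)

    def _round_bump(self, value):
        # move away from zero by one unit whenever a fraction remains
        whole = int(value)
        if value == whole:
            return whole
        return whole + (1 if value > 0 else -1)

    # functions stored directly in the class body; no getattr() at call time
    _pick_rounding_function = {
        TRUNCATE: _round_truncate,
        BUMP: _round_bump,
    }

    def quantize(self, value, mode):
        rounder = self._pick_rounding_function[mode]
        return rounder(self, value)   # explicit-self call, mirroring the patch

print(Quantizer().quantize(2.7, TRUNCATE))   # 2
print(Quantizer().quantize(2.1, BUMP))       # 3

Because the table holds plain function objects rather than names, a subclass can still override the behaviour by rebinding entries in its own _pick_rounding_function, and the hot path avoids one attribute lookup per call.
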
diff --git a/lib-python/2.7/difflib.py b/lib-python/2.7/difflib.py --- a/lib-python/2.7/difflib.py +++ b/lib-python/2.7/difflib.py @@ -1140,6 +1140,21 @@ return ch in ws +######################################################################## +### Unified Diff +######################################################################## + +def _format_range_unified(start, stop): + 'Convert range to the "ed" format' + # Per the diff spec at http://www.unix.org/single_unix_specification/ + beginning = start + 1 # lines start numbering with one + length = stop - start + if length == 1: + return '{}'.format(beginning) + if not length: + beginning -= 1 # empty ranges begin at line just before the range + return '{},{}'.format(beginning, length) + def unified_diff(a, b, fromfile='', tofile='', fromfiledate='', tofiledate='', n=3, lineterm='\n'): r""" @@ -1184,25 +1199,45 @@ started = False for group in SequenceMatcher(None,a,b).get_grouped_opcodes(n): if not started: - fromdate = '\t%s' % fromfiledate if fromfiledate else '' - todate = '\t%s' % tofiledate if tofiledate else '' - yield '--- %s%s%s' % (fromfile, fromdate, lineterm) - yield '+++ %s%s%s' % (tofile, todate, lineterm) started = True - i1, i2, j1, j2 = group[0][1], group[-1][2], group[0][3], group[-1][4] - yield "@@ -%d,%d +%d,%d @@%s" % (i1+1, i2-i1, j1+1, j2-j1, lineterm) + fromdate = '\t{}'.format(fromfiledate) if fromfiledate else '' + todate = '\t{}'.format(tofiledate) if tofiledate else '' + yield '--- {}{}{}'.format(fromfile, fromdate, lineterm) + yield '+++ {}{}{}'.format(tofile, todate, lineterm) + + first, last = group[0], group[-1] + file1_range = _format_range_unified(first[1], last[2]) + file2_range = _format_range_unified(first[3], last[4]) + yield '@@ -{} +{} @@{}'.format(file1_range, file2_range, lineterm) + for tag, i1, i2, j1, j2 in group: if tag == 'equal': for line in a[i1:i2]: yield ' ' + line continue - if tag == 'replace' or tag == 'delete': + if tag in ('replace', 'delete'): for line in a[i1:i2]: yield '-' + line - if tag == 'replace' or tag == 'insert': + if tag in ('replace', 'insert'): for line in b[j1:j2]: yield '+' + line + +######################################################################## +### Context Diff +######################################################################## + +def _format_range_context(start, stop): + 'Convert range to the "ed" format' + # Per the diff spec at http://www.unix.org/single_unix_specification/ + beginning = start + 1 # lines start numbering with one + length = stop - start + if not length: + beginning -= 1 # empty ranges begin at line just before the range + if length <= 1: + return '{}'.format(beginning) + return '{},{}'.format(beginning, beginning + length - 1) + # See http://www.unix.org/single_unix_specification/ def context_diff(a, b, fromfile='', tofile='', fromfiledate='', tofiledate='', n=3, lineterm='\n'): @@ -1247,38 +1282,36 @@ four """ + prefix = dict(insert='+ ', delete='- ', replace='! ', equal=' ') started = False - prefixmap = {'insert':'+ ', 'delete':'- ', 'replace':'! 
', 'equal':' '} for group in SequenceMatcher(None,a,b).get_grouped_opcodes(n): if not started: - fromdate = '\t%s' % fromfiledate if fromfiledate else '' - todate = '\t%s' % tofiledate if tofiledate else '' - yield '*** %s%s%s' % (fromfile, fromdate, lineterm) - yield '--- %s%s%s' % (tofile, todate, lineterm) started = True + fromdate = '\t{}'.format(fromfiledate) if fromfiledate else '' + todate = '\t{}'.format(tofiledate) if tofiledate else '' + yield '*** {}{}{}'.format(fromfile, fromdate, lineterm) + yield '--- {}{}{}'.format(tofile, todate, lineterm) - yield '***************%s' % (lineterm,) - if group[-1][2] - group[0][1] >= 2: - yield '*** %d,%d ****%s' % (group[0][1]+1, group[-1][2], lineterm) - else: - yield '*** %d ****%s' % (group[-1][2], lineterm) - visiblechanges = [e for e in group if e[0] in ('replace', 'delete')] - if visiblechanges: + first, last = group[0], group[-1] + yield '***************' + lineterm + + file1_range = _format_range_context(first[1], last[2]) + yield '*** {} ****{}'.format(file1_range, lineterm) + + if any(tag in ('replace', 'delete') for tag, _, _, _, _ in group): for tag, i1, i2, _, _ in group: if tag != 'insert': for line in a[i1:i2]: - yield prefixmap[tag] + line + yield prefix[tag] + line - if group[-1][4] - group[0][3] >= 2: - yield '--- %d,%d ----%s' % (group[0][3]+1, group[-1][4], lineterm) - else: - yield '--- %d ----%s' % (group[-1][4], lineterm) - visiblechanges = [e for e in group if e[0] in ('replace', 'insert')] - if visiblechanges: + file2_range = _format_range_context(first[3], last[4]) + yield '--- {} ----{}'.format(file2_range, lineterm) + + if any(tag in ('replace', 'insert') for tag, _, _, _, _ in group): for tag, _, _, j1, j2 in group: if tag != 'delete': for line in b[j1:j2]: - yield prefixmap[tag] + line + yield prefix[tag] + line def ndiff(a, b, linejunk=None, charjunk=IS_CHARACTER_JUNK): r""" @@ -1714,7 +1747,7 @@ line = line.replace(' ','\0') # expand tabs into spaces line = line.expandtabs(self._tabsize) - # relace spaces from expanded tabs back into tab characters + # replace spaces from expanded tabs back into tab characters # (we'll replace them with markup after we do differencing) line = line.replace(' ','\t') return line.replace('\0',' ').rstrip('\n') diff --git a/lib-python/2.7/distutils/__init__.py b/lib-python/2.7/distutils/__init__.py --- a/lib-python/2.7/distutils/__init__.py +++ b/lib-python/2.7/distutils/__init__.py @@ -15,5 +15,5 @@ # Updated automatically by the Python release process. # #--start constants-- -__version__ = "2.7.1" +__version__ = "2.7.2" #--end constants-- diff --git a/lib-python/2.7/distutils/archive_util.py b/lib-python/2.7/distutils/archive_util.py --- a/lib-python/2.7/distutils/archive_util.py +++ b/lib-python/2.7/distutils/archive_util.py @@ -121,7 +121,7 @@ def make_zipfile(base_name, base_dir, verbose=0, dry_run=0): """Create a zip file from all the files under 'base_dir'. - The output zip file will be named 'base_dir' + ".zip". Uses either the + The output zip file will be named 'base_name' + ".zip". Uses either the "zipfile" Python module (if available) or the InfoZIP "zip" utility (if installed and found on the default search path). If neither tool is available, raises DistutilsExecError. 
Returns the name of the output zip diff --git a/lib-python/2.7/distutils/cmd.py b/lib-python/2.7/distutils/cmd.py --- a/lib-python/2.7/distutils/cmd.py +++ b/lib-python/2.7/distutils/cmd.py @@ -377,7 +377,7 @@ dry_run=self.dry_run) def move_file (self, src, dst, level=1): - """Move a file respectin dry-run flag.""" + """Move a file respecting dry-run flag.""" return file_util.move_file(src, dst, dry_run = self.dry_run) def spawn (self, cmd, search_path=1, level=1): diff --git a/lib-python/2.7/distutils/command/build_ext.py b/lib-python/2.7/distutils/command/build_ext.py --- a/lib-python/2.7/distutils/command/build_ext.py +++ b/lib-python/2.7/distutils/command/build_ext.py @@ -207,7 +207,7 @@ elif MSVC_VERSION == 8: self.library_dirs.append(os.path.join(sys.exec_prefix, - 'PC', 'VS8.0', 'win32release')) + 'PC', 'VS8.0')) elif MSVC_VERSION == 7: self.library_dirs.append(os.path.join(sys.exec_prefix, 'PC', 'VS7.1')) diff --git a/lib-python/2.7/distutils/command/sdist.py b/lib-python/2.7/distutils/command/sdist.py --- a/lib-python/2.7/distutils/command/sdist.py +++ b/lib-python/2.7/distutils/command/sdist.py @@ -306,17 +306,20 @@ rstrip_ws=1, collapse_join=1) - while 1: - line = template.readline() - if line is None: # end of file - break + try: + while 1: + line = template.readline() + if line is None: # end of file + break - try: - self.filelist.process_template_line(line) - except DistutilsTemplateError, msg: - self.warn("%s, line %d: %s" % (template.filename, - template.current_line, - msg)) + try: + self.filelist.process_template_line(line) + except DistutilsTemplateError, msg: + self.warn("%s, line %d: %s" % (template.filename, + template.current_line, + msg)) + finally: + template.close() def prune_file_list(self): """Prune off branches that might slip into the file list as created diff --git a/lib-python/2.7/distutils/command/upload.py b/lib-python/2.7/distutils/command/upload.py --- a/lib-python/2.7/distutils/command/upload.py +++ b/lib-python/2.7/distutils/command/upload.py @@ -176,6 +176,9 @@ result = urlopen(request) status = result.getcode() reason = result.msg + if self.show_response: + msg = '\n'.join(('-' * 75, r.read(), '-' * 75)) + self.announce(msg, log.INFO) except socket.error, e: self.announce(str(e), log.ERROR) return @@ -189,6 +192,3 @@ else: self.announce('Upload failed (%s): %s' % (status, reason), log.ERROR) - if self.show_response: - msg = '\n'.join(('-' * 75, r.read(), '-' * 75)) - self.announce(msg, log.INFO) diff --git a/lib-python/2.7/distutils/sysconfig.py b/lib-python/2.7/distutils/sysconfig.py --- a/lib-python/2.7/distutils/sysconfig.py +++ b/lib-python/2.7/distutils/sysconfig.py @@ -389,7 +389,7 @@ cur_target = os.getenv('MACOSX_DEPLOYMENT_TARGET', '') if cur_target == '': cur_target = cfg_target - os.putenv('MACOSX_DEPLOYMENT_TARGET', cfg_target) + os.environ['MACOSX_DEPLOYMENT_TARGET'] = cfg_target elif map(int, cfg_target.split('.')) > map(int, cur_target.split('.')): my_msg = ('$MACOSX_DEPLOYMENT_TARGET mismatch: now "%s" but "%s" during configure' % (cur_target, cfg_target)) diff --git a/lib-python/2.7/distutils/tests/__init__.py b/lib-python/2.7/distutils/tests/__init__.py --- a/lib-python/2.7/distutils/tests/__init__.py +++ b/lib-python/2.7/distutils/tests/__init__.py @@ -15,9 +15,10 @@ import os import sys import unittest +from test.test_support import run_unittest -here = os.path.dirname(__file__) +here = os.path.dirname(__file__) or os.curdir def test_suite(): @@ -32,4 +33,4 @@ if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + 
run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_archive_util.py b/lib-python/2.7/distutils/tests/test_archive_util.py --- a/lib-python/2.7/distutils/tests/test_archive_util.py +++ b/lib-python/2.7/distutils/tests/test_archive_util.py @@ -12,7 +12,7 @@ ARCHIVE_FORMATS) from distutils.spawn import find_executable, spawn from distutils.tests import support -from test.test_support import check_warnings +from test.test_support import check_warnings, run_unittest try: import grp @@ -281,4 +281,4 @@ return unittest.makeSuite(ArchiveUtilTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_bdist_msi.py b/lib-python/2.7/distutils/tests/test_bdist_msi.py --- a/lib-python/2.7/distutils/tests/test_bdist_msi.py +++ b/lib-python/2.7/distutils/tests/test_bdist_msi.py @@ -11,7 +11,7 @@ support.LoggingSilencer, unittest.TestCase): - def test_minial(self): + def test_minimal(self): # minimal test XXX need more tests from distutils.command.bdist_msi import bdist_msi pkg_pth, dist = self.create_dist() diff --git a/lib-python/2.7/distutils/tests/test_build.py b/lib-python/2.7/distutils/tests/test_build.py --- a/lib-python/2.7/distutils/tests/test_build.py +++ b/lib-python/2.7/distutils/tests/test_build.py @@ -2,6 +2,7 @@ import unittest import os import sys +from test.test_support import run_unittest from distutils.command.build import build from distutils.tests import support @@ -51,4 +52,4 @@ return unittest.makeSuite(BuildTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_build_clib.py b/lib-python/2.7/distutils/tests/test_build_clib.py --- a/lib-python/2.7/distutils/tests/test_build_clib.py +++ b/lib-python/2.7/distutils/tests/test_build_clib.py @@ -3,6 +3,8 @@ import os import sys +from test.test_support import run_unittest + from distutils.command.build_clib import build_clib from distutils.errors import DistutilsSetupError from distutils.tests import support @@ -140,4 +142,4 @@ return unittest.makeSuite(BuildCLibTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_build_ext.py b/lib-python/2.7/distutils/tests/test_build_ext.py --- a/lib-python/2.7/distutils/tests/test_build_ext.py +++ b/lib-python/2.7/distutils/tests/test_build_ext.py @@ -3,12 +3,13 @@ import tempfile import shutil from StringIO import StringIO +import textwrap from distutils.core import Extension, Distribution from distutils.command.build_ext import build_ext from distutils import sysconfig from distutils.tests import support -from distutils.errors import DistutilsSetupError +from distutils.errors import DistutilsSetupError, CompileError import unittest from test import test_support @@ -430,6 +431,59 @@ wanted = os.path.join(cmd.build_lib, 'UpdateManager', 'fdsend' + ext) self.assertEqual(ext_path, wanted) + @unittest.skipUnless(sys.platform == 'darwin', 'test only relevant for MacOSX') + def test_deployment_target(self): + self._try_compile_deployment_target() + + orig_environ = os.environ + os.environ = orig_environ.copy() + self.addCleanup(setattr, os, 'environ', orig_environ) + + os.environ['MACOSX_DEPLOYMENT_TARGET']='10.1' + self._try_compile_deployment_target() + + + def _try_compile_deployment_target(self): + deptarget_c = os.path.join(self.tmp_dir, 'deptargetmodule.c') + + with 
open(deptarget_c, 'w') as fp: + fp.write(textwrap.dedent('''\ + #include + + int dummy; + + #if TARGET != MAC_OS_X_VERSION_MIN_REQUIRED + #error "Unexpected target" + #endif + + ''')) + + target = sysconfig.get_config_var('MACOSX_DEPLOYMENT_TARGET') + target = tuple(map(int, target.split('.'))) + target = '%02d%01d0' % target + + deptarget_ext = Extension( + 'deptarget', + [deptarget_c], + extra_compile_args=['-DTARGET=%s'%(target,)], + ) + dist = Distribution({ + 'name': 'deptarget', + 'ext_modules': [deptarget_ext] + }) + dist.package_dir = self.tmp_dir + cmd = build_ext(dist) + cmd.build_lib = self.tmp_dir + cmd.build_temp = self.tmp_dir + + try: + old_stdout = sys.stdout + cmd.ensure_finalized() + cmd.run() + + except CompileError: + self.fail("Wrong deployment target during compilation") + def test_suite(): return unittest.makeSuite(BuildExtTestCase) diff --git a/lib-python/2.7/distutils/tests/test_build_py.py b/lib-python/2.7/distutils/tests/test_build_py.py --- a/lib-python/2.7/distutils/tests/test_build_py.py +++ b/lib-python/2.7/distutils/tests/test_build_py.py @@ -10,13 +10,14 @@ from distutils.errors import DistutilsFileError from distutils.tests import support +from test.test_support import run_unittest class BuildPyTestCase(support.TempdirManager, support.LoggingSilencer, unittest.TestCase): - def _setup_package_data(self): + def test_package_data(self): sources = self.mkdtemp() f = open(os.path.join(sources, "__init__.py"), "w") try: @@ -56,20 +57,15 @@ self.assertEqual(len(cmd.get_outputs()), 3) pkgdest = os.path.join(destination, "pkg") files = os.listdir(pkgdest) - return files + self.assertIn("__init__.py", files) + self.assertIn("README.txt", files) + # XXX even with -O, distutils writes pyc, not pyo; bug? + if sys.dont_write_bytecode: + self.assertNotIn("__init__.pyc", files) + else: + self.assertIn("__init__.pyc", files) - def test_package_data(self): - files = self._setup_package_data() - self.assertTrue("__init__.py" in files) - self.assertTrue("README.txt" in files) - - @unittest.skipIf(sys.flags.optimize >= 2, - "pyc files are not written with -O2 and above") - def test_package_data_pyc(self): - files = self._setup_package_data() - self.assertTrue("__init__.pyc" in files) - - def test_empty_package_dir (self): + def test_empty_package_dir(self): # See SF 1668596/1720897. 
cwd = os.getcwd() @@ -117,10 +113,10 @@ finally: sys.dont_write_bytecode = old_dont_write_bytecode - self.assertTrue('byte-compiling is disabled' in self.logs[0][1]) + self.assertIn('byte-compiling is disabled', self.logs[0][1]) def test_suite(): return unittest.makeSuite(BuildPyTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_build_scripts.py b/lib-python/2.7/distutils/tests/test_build_scripts.py --- a/lib-python/2.7/distutils/tests/test_build_scripts.py +++ b/lib-python/2.7/distutils/tests/test_build_scripts.py @@ -8,6 +8,7 @@ import sysconfig from distutils.tests import support +from test.test_support import run_unittest class BuildScriptsTestCase(support.TempdirManager, @@ -108,4 +109,4 @@ return unittest.makeSuite(BuildScriptsTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_check.py b/lib-python/2.7/distutils/tests/test_check.py --- a/lib-python/2.7/distutils/tests/test_check.py +++ b/lib-python/2.7/distutils/tests/test_check.py @@ -1,5 +1,6 @@ """Tests for distutils.command.check.""" import unittest +from test.test_support import run_unittest from distutils.command.check import check, HAS_DOCUTILS from distutils.tests import support @@ -95,4 +96,4 @@ return unittest.makeSuite(CheckTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_clean.py b/lib-python/2.7/distutils/tests/test_clean.py --- a/lib-python/2.7/distutils/tests/test_clean.py +++ b/lib-python/2.7/distutils/tests/test_clean.py @@ -6,6 +6,7 @@ from distutils.command.clean import clean from distutils.tests import support +from test.test_support import run_unittest class cleanTestCase(support.TempdirManager, support.LoggingSilencer, @@ -38,7 +39,7 @@ self.assertTrue(not os.path.exists(path), '%s was not removed' % path) - # let's run the command again (should spit warnings but suceed) + # let's run the command again (should spit warnings but succeed) cmd.all = 1 cmd.ensure_finalized() cmd.run() @@ -47,4 +48,4 @@ return unittest.makeSuite(cleanTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_cmd.py b/lib-python/2.7/distutils/tests/test_cmd.py --- a/lib-python/2.7/distutils/tests/test_cmd.py +++ b/lib-python/2.7/distutils/tests/test_cmd.py @@ -99,7 +99,7 @@ def test_ensure_dirname(self): cmd = self.cmd - cmd.option1 = os.path.dirname(__file__) + cmd.option1 = os.path.dirname(__file__) or os.curdir cmd.ensure_dirname('option1') cmd.option2 = 'xxx' self.assertRaises(DistutilsOptionError, cmd.ensure_dirname, 'option2') diff --git a/lib-python/2.7/distutils/tests/test_config.py b/lib-python/2.7/distutils/tests/test_config.py --- a/lib-python/2.7/distutils/tests/test_config.py +++ b/lib-python/2.7/distutils/tests/test_config.py @@ -11,6 +11,7 @@ from distutils.log import WARN from distutils.tests import support +from test.test_support import run_unittest PYPIRC = """\ [distutils] @@ -119,4 +120,4 @@ return unittest.makeSuite(PyPIRCCommandTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_config_cmd.py b/lib-python/2.7/distutils/tests/test_config_cmd.py --- a/lib-python/2.7/distutils/tests/test_config_cmd.py 
+++ b/lib-python/2.7/distutils/tests/test_config_cmd.py @@ -2,6 +2,7 @@ import unittest import os import sys +from test.test_support import run_unittest from distutils.command.config import dump_file, config from distutils.tests import support @@ -86,4 +87,4 @@ return unittest.makeSuite(ConfigTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_core.py b/lib-python/2.7/distutils/tests/test_core.py --- a/lib-python/2.7/distutils/tests/test_core.py +++ b/lib-python/2.7/distutils/tests/test_core.py @@ -6,7 +6,7 @@ import shutil import sys import test.test_support -from test.test_support import captured_stdout +from test.test_support import captured_stdout, run_unittest import unittest from distutils.tests import support @@ -105,4 +105,4 @@ return unittest.makeSuite(CoreTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_dep_util.py b/lib-python/2.7/distutils/tests/test_dep_util.py --- a/lib-python/2.7/distutils/tests/test_dep_util.py +++ b/lib-python/2.7/distutils/tests/test_dep_util.py @@ -6,6 +6,7 @@ from distutils.dep_util import newer, newer_pairwise, newer_group from distutils.errors import DistutilsFileError from distutils.tests import support +from test.test_support import run_unittest class DepUtilTestCase(support.TempdirManager, unittest.TestCase): @@ -77,4 +78,4 @@ return unittest.makeSuite(DepUtilTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_dir_util.py b/lib-python/2.7/distutils/tests/test_dir_util.py --- a/lib-python/2.7/distutils/tests/test_dir_util.py +++ b/lib-python/2.7/distutils/tests/test_dir_util.py @@ -10,6 +10,7 @@ from distutils import log from distutils.tests import support +from test.test_support import run_unittest class DirUtilTestCase(support.TempdirManager, unittest.TestCase): @@ -112,4 +113,4 @@ return unittest.makeSuite(DirUtilTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_dist.py b/lib-python/2.7/distutils/tests/test_dist.py --- a/lib-python/2.7/distutils/tests/test_dist.py +++ b/lib-python/2.7/distutils/tests/test_dist.py @@ -11,7 +11,7 @@ from distutils.dist import Distribution, fix_help_options, DistributionMetadata from distutils.cmd import Command import distutils.dist -from test.test_support import TESTFN, captured_stdout +from test.test_support import TESTFN, captured_stdout, run_unittest from distutils.tests import support class test_dist(Command): @@ -433,4 +433,4 @@ return suite if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_file_util.py b/lib-python/2.7/distutils/tests/test_file_util.py --- a/lib-python/2.7/distutils/tests/test_file_util.py +++ b/lib-python/2.7/distutils/tests/test_file_util.py @@ -6,6 +6,7 @@ from distutils.file_util import move_file, write_file, copy_file from distutils import log from distutils.tests import support +from test.test_support import run_unittest class FileUtilTestCase(support.TempdirManager, unittest.TestCase): @@ -77,4 +78,4 @@ return unittest.makeSuite(FileUtilTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git 
a/lib-python/2.7/distutils/tests/test_filelist.py b/lib-python/2.7/distutils/tests/test_filelist.py --- a/lib-python/2.7/distutils/tests/test_filelist.py +++ b/lib-python/2.7/distutils/tests/test_filelist.py @@ -1,7 +1,7 @@ """Tests for distutils.filelist.""" from os.path import join import unittest -from test.test_support import captured_stdout +from test.test_support import captured_stdout, run_unittest from distutils.filelist import glob_to_re, FileList from distutils import debug @@ -82,4 +82,4 @@ return unittest.makeSuite(FileListTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_install.py b/lib-python/2.7/distutils/tests/test_install.py --- a/lib-python/2.7/distutils/tests/test_install.py +++ b/lib-python/2.7/distutils/tests/test_install.py @@ -3,6 +3,8 @@ import os import unittest +from test.test_support import run_unittest + from distutils.command.install import install from distutils.core import Distribution @@ -52,4 +54,4 @@ return unittest.makeSuite(InstallTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_install_data.py b/lib-python/2.7/distutils/tests/test_install_data.py --- a/lib-python/2.7/distutils/tests/test_install_data.py +++ b/lib-python/2.7/distutils/tests/test_install_data.py @@ -6,6 +6,7 @@ from distutils.command.install_data import install_data from distutils.tests import support +from test.test_support import run_unittest class InstallDataTestCase(support.TempdirManager, support.LoggingSilencer, @@ -73,4 +74,4 @@ return unittest.makeSuite(InstallDataTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_install_headers.py b/lib-python/2.7/distutils/tests/test_install_headers.py --- a/lib-python/2.7/distutils/tests/test_install_headers.py +++ b/lib-python/2.7/distutils/tests/test_install_headers.py @@ -6,6 +6,7 @@ from distutils.command.install_headers import install_headers from distutils.tests import support +from test.test_support import run_unittest class InstallHeadersTestCase(support.TempdirManager, support.LoggingSilencer, @@ -37,4 +38,4 @@ return unittest.makeSuite(InstallHeadersTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_install_lib.py b/lib-python/2.7/distutils/tests/test_install_lib.py --- a/lib-python/2.7/distutils/tests/test_install_lib.py +++ b/lib-python/2.7/distutils/tests/test_install_lib.py @@ -7,6 +7,7 @@ from distutils.extension import Extension from distutils.tests import support from distutils.errors import DistutilsOptionError +from test.test_support import run_unittest class InstallLibTestCase(support.TempdirManager, support.LoggingSilencer, @@ -103,4 +104,4 @@ return unittest.makeSuite(InstallLibTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_install_scripts.py b/lib-python/2.7/distutils/tests/test_install_scripts.py --- a/lib-python/2.7/distutils/tests/test_install_scripts.py +++ b/lib-python/2.7/distutils/tests/test_install_scripts.py @@ -7,6 +7,7 @@ from distutils.core import Distribution from distutils.tests import support +from test.test_support import run_unittest class 
InstallScriptsTestCase(support.TempdirManager, @@ -78,4 +79,4 @@ return unittest.makeSuite(InstallScriptsTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_msvc9compiler.py b/lib-python/2.7/distutils/tests/test_msvc9compiler.py --- a/lib-python/2.7/distutils/tests/test_msvc9compiler.py +++ b/lib-python/2.7/distutils/tests/test_msvc9compiler.py @@ -5,6 +5,7 @@ from distutils.errors import DistutilsPlatformError from distutils.tests import support +from test.test_support import run_unittest _MANIFEST = """\ @@ -137,4 +138,4 @@ return unittest.makeSuite(msvc9compilerTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_register.py b/lib-python/2.7/distutils/tests/test_register.py --- a/lib-python/2.7/distutils/tests/test_register.py +++ b/lib-python/2.7/distutils/tests/test_register.py @@ -7,7 +7,7 @@ import urllib2 import warnings -from test.test_support import check_warnings +from test.test_support import check_warnings, run_unittest from distutils.command import register as register_module from distutils.command.register import register @@ -138,7 +138,7 @@ # let's see what the server received : we should # have 2 similar requests - self.assertTrue(self.conn.reqs, 2) + self.assertEqual(len(self.conn.reqs), 2) req1 = dict(self.conn.reqs[0].headers) req2 = dict(self.conn.reqs[1].headers) self.assertEqual(req2['Content-length'], req1['Content-length']) @@ -168,7 +168,7 @@ del register_module.raw_input # we should have send a request - self.assertTrue(self.conn.reqs, 1) + self.assertEqual(len(self.conn.reqs), 1) req = self.conn.reqs[0] headers = dict(req.headers) self.assertEqual(headers['Content-length'], '608') @@ -186,7 +186,7 @@ del register_module.raw_input # we should have send a request - self.assertTrue(self.conn.reqs, 1) + self.assertEqual(len(self.conn.reqs), 1) req = self.conn.reqs[0] headers = dict(req.headers) self.assertEqual(headers['Content-length'], '290') @@ -258,4 +258,4 @@ return unittest.makeSuite(RegisterTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_sdist.py b/lib-python/2.7/distutils/tests/test_sdist.py --- a/lib-python/2.7/distutils/tests/test_sdist.py +++ b/lib-python/2.7/distutils/tests/test_sdist.py @@ -24,11 +24,9 @@ import tempfile import warnings -from test.test_support import check_warnings -from test.test_support import captured_stdout +from test.test_support import captured_stdout, check_warnings, run_unittest -from distutils.command.sdist import sdist -from distutils.command.sdist import show_formats +from distutils.command.sdist import sdist, show_formats from distutils.core import Distribution from distutils.tests.test_config import PyPIRCCommandTestCase from distutils.errors import DistutilsExecError, DistutilsOptionError @@ -372,7 +370,7 @@ # adding a file self.write_file((self.tmp_dir, 'somecode', 'doc2.txt'), '#') - # make sure build_py is reinitinialized, like a fresh run + # make sure build_py is reinitialized, like a fresh run build_py = dist.get_command_obj('build_py') build_py.finalized = False build_py.ensure_finalized() @@ -390,6 +388,7 @@ self.assertEqual(len(manifest2), 6) self.assertIn('doc2.txt', manifest2[-1]) + @unittest.skipUnless(zlib, "requires zlib") def test_manifest_marker(self): # check that autogenerated MANIFESTs have a 
marker dist, cmd = self.get_cmd() @@ -406,6 +405,7 @@ self.assertEqual(manifest[0], '# file GENERATED by distutils, do NOT edit') + @unittest.skipUnless(zlib, "requires zlib") def test_manual_manifest(self): # check that a MANIFEST without a marker is left alone dist, cmd = self.get_cmd() @@ -426,4 +426,4 @@ return unittest.makeSuite(SDistTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_spawn.py b/lib-python/2.7/distutils/tests/test_spawn.py --- a/lib-python/2.7/distutils/tests/test_spawn.py +++ b/lib-python/2.7/distutils/tests/test_spawn.py @@ -2,7 +2,7 @@ import unittest import os import time -from test.test_support import captured_stdout +from test.test_support import captured_stdout, run_unittest from distutils.spawn import _nt_quote_args from distutils.spawn import spawn, find_executable @@ -57,4 +57,4 @@ return unittest.makeSuite(SpawnTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_text_file.py b/lib-python/2.7/distutils/tests/test_text_file.py --- a/lib-python/2.7/distutils/tests/test_text_file.py +++ b/lib-python/2.7/distutils/tests/test_text_file.py @@ -3,6 +3,7 @@ import unittest from distutils.text_file import TextFile from distutils.tests import support +from test.test_support import run_unittest TEST_DATA = """# test file @@ -103,4 +104,4 @@ return unittest.makeSuite(TextFileTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_unixccompiler.py b/lib-python/2.7/distutils/tests/test_unixccompiler.py --- a/lib-python/2.7/distutils/tests/test_unixccompiler.py +++ b/lib-python/2.7/distutils/tests/test_unixccompiler.py @@ -1,6 +1,7 @@ """Tests for distutils.unixccompiler.""" import sys import unittest +from test.test_support import run_unittest from distutils import sysconfig from distutils.unixccompiler import UnixCCompiler @@ -126,4 +127,4 @@ return unittest.makeSuite(UnixCCompilerTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_upload.py b/lib-python/2.7/distutils/tests/test_upload.py --- a/lib-python/2.7/distutils/tests/test_upload.py +++ b/lib-python/2.7/distutils/tests/test_upload.py @@ -1,14 +1,13 @@ +# -*- encoding: utf8 -*- """Tests for distutils.command.upload.""" -# -*- encoding: utf8 -*- -import sys import os import unittest +from test.test_support import run_unittest from distutils.command import upload as upload_mod from distutils.command.upload import upload from distutils.core import Distribution -from distutils.tests import support from distutils.tests.test_config import PYPIRC, PyPIRCCommandTestCase PYPIRC_LONG_PASSWORD = """\ @@ -129,4 +128,4 @@ return unittest.makeSuite(uploadTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_util.py b/lib-python/2.7/distutils/tests/test_util.py --- a/lib-python/2.7/distutils/tests/test_util.py +++ b/lib-python/2.7/distutils/tests/test_util.py @@ -1,6 +1,7 @@ """Tests for distutils.util.""" import sys import unittest +from test.test_support import run_unittest from distutils.errors import DistutilsPlatformError, DistutilsByteCompileError from distutils.util import byte_compile @@ -21,4 +22,4 @@ return 
unittest.makeSuite(UtilTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_version.py b/lib-python/2.7/distutils/tests/test_version.py --- a/lib-python/2.7/distutils/tests/test_version.py +++ b/lib-python/2.7/distutils/tests/test_version.py @@ -2,6 +2,7 @@ import unittest from distutils.version import LooseVersion from distutils.version import StrictVersion +from test.test_support import run_unittest class VersionTestCase(unittest.TestCase): @@ -67,4 +68,4 @@ return unittest.makeSuite(VersionTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_versionpredicate.py b/lib-python/2.7/distutils/tests/test_versionpredicate.py --- a/lib-python/2.7/distutils/tests/test_versionpredicate.py +++ b/lib-python/2.7/distutils/tests/test_versionpredicate.py @@ -4,6 +4,10 @@ import distutils.versionpredicate import doctest +from test.test_support import run_unittest def test_suite(): return doctest.DocTestSuite(distutils.versionpredicate) + +if __name__ == '__main__': + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/util.py b/lib-python/2.7/distutils/util.py --- a/lib-python/2.7/distutils/util.py +++ b/lib-python/2.7/distutils/util.py @@ -97,9 +97,7 @@ from distutils.sysconfig import get_config_vars cfgvars = get_config_vars() - macver = os.environ.get('MACOSX_DEPLOYMENT_TARGET') - if not macver: - macver = cfgvars.get('MACOSX_DEPLOYMENT_TARGET') + macver = cfgvars.get('MACOSX_DEPLOYMENT_TARGET') if 1: # Always calculate the release of the running machine, diff --git a/lib-python/2.7/doctest.py b/lib-python/2.7/doctest.py --- a/lib-python/2.7/doctest.py +++ b/lib-python/2.7/doctest.py @@ -1217,7 +1217,7 @@ # Process each example. for examplenum, example in enumerate(test.examples): - # If REPORT_ONLY_FIRST_FAILURE is set, then supress + # If REPORT_ONLY_FIRST_FAILURE is set, then suppress # reporting after the first failure. quiet = (self.optionflags & REPORT_ONLY_FIRST_FAILURE and failures > 0) @@ -2186,7 +2186,7 @@ caller can catch the errors and initiate post-mortem debugging. The DocTestCase provides a debug method that raises - UnexpectedException errors if there is an unexepcted + UnexpectedException errors if there is an unexpected exception: >>> test = DocTestParser().get_doctest('>>> raise KeyError\n42', diff --git a/lib-python/2.7/email/charset.py b/lib-python/2.7/email/charset.py --- a/lib-python/2.7/email/charset.py +++ b/lib-python/2.7/email/charset.py @@ -209,7 +209,7 @@ input_charset = unicode(input_charset, 'ascii') except UnicodeError: raise errors.CharsetError(input_charset) - input_charset = input_charset.lower() + input_charset = input_charset.lower().encode('ascii') # Set the input charset after filtering through the aliases and/or codecs if not (input_charset in ALIASES or input_charset in CHARSETS): try: diff --git a/lib-python/2.7/email/generator.py b/lib-python/2.7/email/generator.py --- a/lib-python/2.7/email/generator.py +++ b/lib-python/2.7/email/generator.py @@ -202,18 +202,13 @@ g = self.clone(s) g.flatten(part, unixfrom=False) msgtexts.append(s.getvalue()) - # Now make sure the boundary we've selected doesn't appear in any of - # the message texts. - alltext = NL.join(msgtexts) # BAW: What about boundaries that are wrapped in double-quotes? 
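An illustrative aside, not part of the diff: the email/charset.py change above coerces input_charset back to a byte string, so encoding a header with a charset that has no output conversion yields a str rather than unicode (this mirrors test_encode_unaliased_charset added further down in this patch). A minimal sketch on Python 2.7:

    from email.header import Header

    res = Header('abc', 'iso-8859-2').encode()
    print(res)                     # -> =?iso-8859-2?q?abc?=
    print(isinstance(res, str))    # True once input_charset is ascii bytes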
- boundary = msg.get_boundary(failobj=_make_boundary(alltext)) - # If we had to calculate a new boundary because the body text - # contained that string, set the new boundary. We don't do it - # unconditionally because, while set_boundary() preserves order, it - # doesn't preserve newlines/continuations in headers. This is no big - # deal in practice, but turns out to be inconvenient for the unittest - # suite. - if msg.get_boundary() != boundary: + boundary = msg.get_boundary() + if not boundary: + # Create a boundary that doesn't appear in any of the + # message texts. + alltext = NL.join(msgtexts) + boundary = _make_boundary(alltext) msg.set_boundary(boundary) # If there's a preamble, write it out, with a trailing CRLF if msg.preamble is not None: @@ -292,7 +287,7 @@ _FMT = '[Non-text (%(type)s) part of message omitted, filename %(filename)s]' class DecodedGenerator(Generator): - """Generator a text representation of a message. + """Generates a text representation of a message. Like the Generator base class, except that non-text parts are substituted with a format string representing the part. diff --git a/lib-python/2.7/email/header.py b/lib-python/2.7/email/header.py --- a/lib-python/2.7/email/header.py +++ b/lib-python/2.7/email/header.py @@ -47,6 +47,10 @@ # For use with .match() fcre = re.compile(r'[\041-\176]+:$') +# Find a header embedded in a putative header value. Used to check for +# header injection attack. +_embeded_header = re.compile(r'\n[^ \t]+:') + # Helpers @@ -403,7 +407,11 @@ newchunks += self._split(s, charset, targetlen, splitchars) lastchunk, lastcharset = newchunks[-1] lastlen = lastcharset.encoded_header_len(lastchunk) - return self._encode_chunks(newchunks, maxlinelen) + value = self._encode_chunks(newchunks, maxlinelen) + if _embeded_header.search(value): + raise HeaderParseError("header value appears to contain " + "an embedded header: {!r}".format(value)) + return value diff --git a/lib-python/2.7/email/message.py b/lib-python/2.7/email/message.py --- a/lib-python/2.7/email/message.py +++ b/lib-python/2.7/email/message.py @@ -38,7 +38,9 @@ def _formatparam(param, value=None, quote=True): """Convenience function to format and return a key=value pair. - This will quote the value if needed or if quote is true. + This will quote the value if needed or if quote is true. If value is a + three tuple (charset, language, value), it will be encoded according + to RFC2231 rules. """ if value is not None and len(value) > 0: # A tuple is used for RFC 2231 encoded parameter values where items @@ -97,7 +99,7 @@ objects, otherwise it is a string. Message objects implement part of the `mapping' interface, which assumes - there is exactly one occurrance of the header per message. Some headers + there is exactly one occurrence of the header per message. Some headers do in fact appear multiple times (e.g. Received) and for those headers, you must use the explicit API to set or get all the headers. Not all of the mapping methods are implemented. @@ -286,7 +288,7 @@ Return None if the header is missing instead of raising an exception. Note that if the header appeared multiple times, exactly which - occurrance gets returned is undefined. Use get_all() to get all + occurrence gets returned is undefined. Use get_all() to get all the values matching a header field name. """ return self.get(name) @@ -389,7 +391,10 @@ name is the header field to add. keyword arguments can be used to set additional parameters for the header field, with underscores converted to dashes. 
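An illustrative aside, not part of the diff: with the Generator._handle_multipart change above, a boundary is generated only when the message does not already carry one, and it is attached via set_boundary(). A minimal sketch, mirroring test_make_boundary added below:

    from email.mime.multipart import MIMEMultipart

    msg = MIMEMultipart('form-data')
    print(msg.get_boundary())         # None until the message is flattened
    msg.as_string()                   # flattening picks a boundary
    print(msg['Content-Type'][:33])   # multipart/form-data; boundary="==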
Normally the parameter will be added as key="value" unless - value is None, in which case only the key will be added. + value is None, in which case only the key will be added. If a + parameter value contains non-ASCII characters it must be specified as a + three-tuple of (charset, language, value), in which case it will be + encoded according to RFC2231 rules. Example: diff --git a/lib-python/2.7/email/mime/application.py b/lib-python/2.7/email/mime/application.py --- a/lib-python/2.7/email/mime/application.py +++ b/lib-python/2.7/email/mime/application.py @@ -17,7 +17,7 @@ _encoder=encoders.encode_base64, **_params): """Create an application/* type MIME document. - _data is a string containing the raw applicatoin data. + _data is a string containing the raw application data. _subtype is the MIME content type subtype, defaulting to 'octet-stream'. diff --git a/lib-python/2.7/email/test/data/msg_26.txt b/lib-python/2.7/email/test/data/msg_26.txt --- a/lib-python/2.7/email/test/data/msg_26.txt +++ b/lib-python/2.7/email/test/data/msg_26.txt @@ -42,4 +42,4 @@ MzMAAAAACH97tzAAAAALu3c3gAAAAAAL+7tzDABAu7f7cAAAAAAACA+3MA7EQAv/sIAA AAAAAAAIAAAAAAAAAIAAAAAA ---1618492860--2051301190--113853680-- +--1618492860--2051301190--113853680-- \ No newline at end of file diff --git a/lib-python/2.7/email/test/test_email.py b/lib-python/2.7/email/test/test_email.py --- a/lib-python/2.7/email/test/test_email.py +++ b/lib-python/2.7/email/test/test_email.py @@ -179,6 +179,17 @@ self.assertRaises(Errors.HeaderParseError, msg.set_boundary, 'BOUNDARY') + def test_make_boundary(self): + msg = MIMEMultipart('form-data') + # Note that when the boundary gets created is an implementation + # detail and might change. + self.assertEqual(msg.items()[0][1], 'multipart/form-data') + # Trigger creation of boundary + msg.as_string() + self.assertEqual(msg.items()[0][1][:33], + 'multipart/form-data; boundary="==') + # XXX: there ought to be tests of the uniqueness of the boundary, too. + def test_message_rfc822_only(self): # Issue 7970: message/rfc822 not in multipart parsed by # HeaderParser caused an exception when flattened. @@ -542,6 +553,17 @@ msg.set_charset(u'us-ascii') self.assertEqual('us-ascii', msg.get_content_charset()) + # Issue 5871: reject an attempt to embed a header inside a header value + # (header injection attack). 
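A usage sketch, not part of the diff: the add_header() docstring change above documents the RFC 2231 three-tuple form (charset, language, value) for non-ASCII parameter values; the matching tests appear just below in this patch.

    from email.message import Message

    msg = Message()
    msg.add_header('Content-Disposition', 'attachment', filename='bud.gif')
    # -> attachment; filename="bud.gif"

    msg2 = Message()
    msg2.add_header('Content-Disposition', 'attachment',
                    filename=('iso-8859-1', '', 'Fu\xdfballer.ppt'))
    # -> attachment; filename*="iso-8859-1''Fu%DFballer.ppt"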
+ def test_embeded_header_via_Header_rejected(self): + msg = Message() + msg['Dummy'] = Header('dummy\nX-Injected-Header: test') + self.assertRaises(Errors.HeaderParseError, msg.as_string) + + def test_embeded_header_via_string_rejected(self): + msg = Message() + msg['Dummy'] = 'dummy\nX-Injected-Header: test' + self.assertRaises(Errors.HeaderParseError, msg.as_string) # Test the email.Encoders module @@ -3113,6 +3135,28 @@ s = 'Subject: =?EUC-KR?B?CSixpLDtKSC/7Liuvsax4iC6uLmwMcijIKHaILzSwd/H0SC8+LCjwLsgv7W/+Mj3I ?=' raises(Errors.HeaderParseError, decode_header, s) + # Issue 1078919 + def test_ascii_add_header(self): + msg = Message() + msg.add_header('Content-Disposition', 'attachment', + filename='bud.gif') + self.assertEqual('attachment; filename="bud.gif"', + msg['Content-Disposition']) + + def test_nonascii_add_header_via_triple(self): + msg = Message() + msg.add_header('Content-Disposition', 'attachment', + filename=('iso-8859-1', '', 'Fu\xdfballer.ppt')) + self.assertEqual( + 'attachment; filename*="iso-8859-1\'\'Fu%DFballer.ppt"', + msg['Content-Disposition']) + + def test_encode_unaliased_charset(self): + # Issue 1379416: when the charset has no output conversion, + # output was accidentally getting coerced to unicode. + res = Header('abc','iso-8859-2').encode() + self.assertEqual(res, '=?iso-8859-2?q?abc?=') + self.assertIsInstance(res, str) # Test RFC 2231 header parameters (en/de)coding diff --git a/lib-python/2.7/ftplib.py b/lib-python/2.7/ftplib.py --- a/lib-python/2.7/ftplib.py +++ b/lib-python/2.7/ftplib.py @@ -599,7 +599,7 @@ Usage example: >>> from ftplib import FTP_TLS >>> ftps = FTP_TLS('ftp.python.org') - >>> ftps.login() # login anonimously previously securing control channel + >>> ftps.login() # login anonymously previously securing control channel '230 Guest login ok, access restrictions apply.' 
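A short sketch, not part of the diff: the two test_embeded_header_* tests above exercise the _embeded_header check added to email/header.py earlier in this patch; a header value that smuggles in a second header line is rejected when the message is flattened.

    from email.message import Message
    from email.errors import HeaderParseError

    msg = Message()
    msg['Dummy'] = 'dummy\nX-Injected-Header: test'
    try:
        msg.as_string()
    except HeaderParseError:
        print('embedded header rejected')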
>>> ftps.prot_p() # switch to secure data connection '200 Protection level set to P' diff --git a/lib-python/2.7/functools.py b/lib-python/2.7/functools.py --- a/lib-python/2.7/functools.py +++ b/lib-python/2.7/functools.py @@ -53,17 +53,17 @@ def total_ordering(cls): """Class decorator that fills in missing ordering methods""" convert = { - '__lt__': [('__gt__', lambda self, other: other < self), - ('__le__', lambda self, other: not other < self), + '__lt__': [('__gt__', lambda self, other: not (self < other or self == other)), + ('__le__', lambda self, other: self < other or self == other), ('__ge__', lambda self, other: not self < other)], - '__le__': [('__ge__', lambda self, other: other <= self), - ('__lt__', lambda self, other: not other <= self), + '__le__': [('__ge__', lambda self, other: not self <= other or self == other), + ('__lt__', lambda self, other: self <= other and not self == other), ('__gt__', lambda self, other: not self <= other)], - '__gt__': [('__lt__', lambda self, other: other > self), - ('__ge__', lambda self, other: not other > self), + '__gt__': [('__lt__', lambda self, other: not (self > other or self == other)), + ('__ge__', lambda self, other: self > other or self == other), ('__le__', lambda self, other: not self > other)], - '__ge__': [('__le__', lambda self, other: other >= self), - ('__gt__', lambda self, other: not other >= self), + '__ge__': [('__le__', lambda self, other: (not self >= other) or self == other), + ('__gt__', lambda self, other: self >= other and not self == other), ('__lt__', lambda self, other: not self >= other)] } roots = set(dir(cls)) & set(convert) @@ -80,6 +80,7 @@ def cmp_to_key(mycmp): """Convert a cmp= function into a key= function""" class K(object): + __slots__ = ['obj'] def __init__(self, obj, *args): self.obj = obj def __lt__(self, other): diff --git a/lib-python/2.7/getpass.py b/lib-python/2.7/getpass.py --- a/lib-python/2.7/getpass.py +++ b/lib-python/2.7/getpass.py @@ -62,7 +62,7 @@ try: old = termios.tcgetattr(fd) # a copy to save new = old[:] - new[3] &= ~(termios.ECHO|termios.ISIG) # 3 == 'lflags' + new[3] &= ~termios.ECHO # 3 == 'lflags' tcsetattr_flags = termios.TCSAFLUSH if hasattr(termios, 'TCSASOFT'): tcsetattr_flags |= termios.TCSASOFT diff --git a/lib-python/2.7/gettext.py b/lib-python/2.7/gettext.py --- a/lib-python/2.7/gettext.py +++ b/lib-python/2.7/gettext.py @@ -316,7 +316,7 @@ # Note: we unconditionally convert both msgids and msgstrs to # Unicode using the character encoding specified in the charset # parameter of the Content-Type header. The gettext documentation - # strongly encourages msgids to be us-ascii, but some appliations + # strongly encourages msgids to be us-ascii, but some applications # require alternative encodings (e.g. Zope's ZCML and ZPT). 
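An illustrative aside, not part of the diff: the functools.total_ordering rewrite above derives each missing comparison from the decorated class's own operator plus __eq__, instead of delegating to the reflected operator on the other operand. A minimal usage sketch:

    from functools import total_ordering

    @total_ordering
    class Point(object):
        def __init__(self, x):
            self.x = x
        def __eq__(self, other):
            return self.x == other.x
        def __lt__(self, other):
            return self.x < other.x

    print(Point(1) <= Point(2))   # True: __le__ is (self < other or self == other)
    print(Point(2) >= Point(2))   # True: __ge__ is (not self < other)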
For # traditional gettext applications, the msgid conversion will # cause no problems since us-ascii should always be a subset of diff --git a/lib-python/2.7/hashlib.py b/lib-python/2.7/hashlib.py --- a/lib-python/2.7/hashlib.py +++ b/lib-python/2.7/hashlib.py @@ -64,26 +64,29 @@ def __get_builtin_constructor(name): - if name in ('SHA1', 'sha1'): - import _sha - return _sha.new - elif name in ('MD5', 'md5'): - import _md5 - return _md5.new - elif name in ('SHA256', 'sha256', 'SHA224', 'sha224'): - import _sha256 - bs = name[3:] - if bs == '256': - return _sha256.sha256 - elif bs == '224': - return _sha256.sha224 - elif name in ('SHA512', 'sha512', 'SHA384', 'sha384'): - import _sha512 - bs = name[3:] - if bs == '512': - return _sha512.sha512 - elif bs == '384': - return _sha512.sha384 + try: + if name in ('SHA1', 'sha1'): + import _sha + return _sha.new + elif name in ('MD5', 'md5'): + import _md5 + return _md5.new + elif name in ('SHA256', 'sha256', 'SHA224', 'sha224'): + import _sha256 + bs = name[3:] + if bs == '256': + return _sha256.sha256 + elif bs == '224': + return _sha256.sha224 + elif name in ('SHA512', 'sha512', 'SHA384', 'sha384'): + import _sha512 + bs = name[3:] + if bs == '512': + return _sha512.sha512 + elif bs == '384': + return _sha512.sha384 + except ImportError: + pass # no extension module, this hash is unsupported. raise ValueError('unsupported hash type %s' % name) diff --git a/lib-python/2.7/heapq.py b/lib-python/2.7/heapq.py --- a/lib-python/2.7/heapq.py +++ b/lib-python/2.7/heapq.py @@ -133,6 +133,11 @@ from operator import itemgetter import bisect +def cmp_lt(x, y): + # Use __lt__ if available; otherwise, try __le__. + # In Py3.x, only __lt__ will be called. + return (x < y) if hasattr(x, '__lt__') else (not y <= x) + def heappush(heap, item): """Push item onto heap, maintaining the heap invariant.""" heap.append(item) @@ -167,13 +172,13 @@ def heappushpop(heap, item): """Fast version of a heappush followed by a heappop.""" - if heap and heap[0] < item: + if heap and cmp_lt(heap[0], item): item, heap[0] = heap[0], item _siftup(heap, 0) return item def heapify(x): - """Transform list into a heap, in-place, in O(len(heap)) time.""" + """Transform list into a heap, in-place, in O(len(x)) time.""" n = len(x) # Transform bottom-up. The largest index there's any point to looking at # is the largest with a child index in-range, so must have 2*i + 1 < n, @@ -215,11 +220,10 @@ pop = result.pop los = result[-1] # los --> Largest of the nsmallest for elem in it: - if los <= elem: - continue - insort(result, elem) - pop() - los = result[-1] + if cmp_lt(elem, los): + insort(result, elem) + pop() + los = result[-1] return result # An alternative approach manifests the whole iterable in memory but # saves comparisons by heapifying all at once. Also, saves time @@ -240,7 +244,7 @@ while pos > startpos: parentpos = (pos - 1) >> 1 parent = heap[parentpos] - if newitem < parent: + if cmp_lt(newitem, parent): heap[pos] = parent pos = parentpos continue @@ -295,7 +299,7 @@ while childpos < endpos: # Set childpos to index of smaller child. rightpos = childpos + 1 - if rightpos < endpos and not heap[childpos] < heap[rightpos]: + if rightpos < endpos and not cmp_lt(heap[childpos], heap[rightpos]): childpos = rightpos # Move the smaller child up. 
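A standalone sketch of the rule, not part of the diff: cmp_lt() above prefers __lt__ and falls back to (not y <= x), so heap operations keep working on Python 2 objects that define only __le__ (assuming Python 2 classic-class semantics, where an instance without __lt__ fails the hasattr test):

    class LeOnly:                       # defines only __le__
        def __init__(self, v):
            self.v = v
        def __le__(self, other):
            return self.v <= other.v

    def cmp_lt(x, y):                   # same rule as the helper above
        return (x < y) if hasattr(x, '__lt__') else (not y <= x)

    print(cmp_lt(LeOnly(1), LeOnly(2)))   # True, via the __le__ fallback
    print(cmp_lt(LeOnly(2), LeOnly(1)))   # False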
heap[pos] = heap[childpos] @@ -364,7 +368,7 @@ return [min(chain(head, it))] return [min(chain(head, it), key=key)] - # When n>=size, it's faster to use sort() + # When n>=size, it's faster to use sorted() try: size = len(iterable) except (TypeError, AttributeError): @@ -402,7 +406,7 @@ return [max(chain(head, it))] return [max(chain(head, it), key=key)] - # When n>=size, it's faster to use sort() + # When n>=size, it's faster to use sorted() try: size = len(iterable) except (TypeError, AttributeError): diff --git a/lib-python/2.7/httplib.py b/lib-python/2.7/httplib.py --- a/lib-python/2.7/httplib.py +++ b/lib-python/2.7/httplib.py @@ -212,6 +212,9 @@ # maximal amount of data to read at one time in _safe_read MAXAMOUNT = 1048576 +# maximal line length when calling readline(). +_MAXLINE = 65536 + class HTTPMessage(mimetools.Message): def addheader(self, key, value): @@ -274,7 +277,9 @@ except IOError: startofline = tell = None self.seekable = 0 - line = self.fp.readline() + line = self.fp.readline(_MAXLINE + 1) + if len(line) > _MAXLINE: + raise LineTooLong("header line") if not line: self.status = 'EOF in headers' break @@ -404,7 +409,10 @@ break # skip the header from the 100 response while True: - skip = self.fp.readline().strip() + skip = self.fp.readline(_MAXLINE + 1) + if len(skip) > _MAXLINE: + raise LineTooLong("header line") + skip = skip.strip() if not skip: break if self.debuglevel > 0: @@ -563,7 +571,9 @@ value = [] while True: if chunk_left is None: - line = self.fp.readline() + line = self.fp.readline(_MAXLINE + 1) + if len(line) > _MAXLINE: + raise LineTooLong("chunk size") i = line.find(';') if i >= 0: line = line[:i] # strip chunk-extensions @@ -598,7 +608,9 @@ # read and discard trailer up to the CRLF terminator ### note: we shouldn't have any trailers! while True: - line = self.fp.readline() + line = self.fp.readline(_MAXLINE + 1) + if len(line) > _MAXLINE: + raise LineTooLong("trailer line") if not line: # a vanishingly small number of sites EOF without # sending the trailer @@ -730,7 +742,9 @@ raise socket.error("Tunnel connection failed: %d %s" % (code, message.strip())) while True: - line = response.fp.readline() + line = response.fp.readline(_MAXLINE + 1) + if len(line) > _MAXLINE: + raise LineTooLong("header line") if line == '\r\n': break @@ -790,7 +804,7 @@ del self._buffer[:] # If msg and message_body are sent in a single send() call, # it will avoid performance problems caused by the interaction - # between delayed ack and the Nagle algorithim. + # between delayed ack and the Nagle algorithm. 
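An illustrative aside, not part of the diff: the httplib hunks above cap each readline() at _MAXLINE + 1 bytes so an over-long header, chunk-size, or trailer line is detected immediately instead of being buffered without bound (the library raises LineTooLong; the self-contained sketch below uses ValueError):

    from io import BytesIO

    _MAXLINE = 65536                      # same cap as httplib above

    def read_bounded_line(fp):
        line = fp.readline(_MAXLINE + 1)  # never read more than the cap + 1
        if len(line) > _MAXLINE:
            raise ValueError('got more than %d bytes when reading a line'
                             % _MAXLINE)
        return line

    print(read_bounded_line(BytesIO(b'HTTP/1.1 200 OK\r\n')))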
if isinstance(message_body, str): msg += message_body message_body = None @@ -1233,6 +1247,11 @@ self.args = line, self.line = line +class LineTooLong(HTTPException): + def __init__(self, line_type): + HTTPException.__init__(self, "got more than %d bytes when reading %s" + % (_MAXLINE, line_type)) + # for backwards compatibility error = HTTPException diff --git a/lib-python/2.7/idlelib/Bindings.py b/lib-python/2.7/idlelib/Bindings.py --- a/lib-python/2.7/idlelib/Bindings.py +++ b/lib-python/2.7/idlelib/Bindings.py @@ -98,14 +98,6 @@ # menu del menudefs[-1][1][0:2] - menudefs.insert(0, - ('application', [ - ('About IDLE', '<>'), - None, - ('_Preferences....', '<>'), - ])) - - default_keydefs = idleConf.GetCurrentKeySet() del sys diff --git a/lib-python/2.7/idlelib/EditorWindow.py b/lib-python/2.7/idlelib/EditorWindow.py --- a/lib-python/2.7/idlelib/EditorWindow.py +++ b/lib-python/2.7/idlelib/EditorWindow.py @@ -48,6 +48,21 @@ path = module.__path__ except AttributeError: raise ImportError, 'No source for module ' + module.__name__ + if descr[2] != imp.PY_SOURCE: + # If all of the above fails and didn't raise an exception,fallback + # to a straight import which can find __init__.py in a package. + m = __import__(fullname) + try: + filename = m.__file__ + except AttributeError: + pass + else: + file = None + base, ext = os.path.splitext(filename) + if ext == '.pyc': + ext = '.py' + filename = base + ext + descr = filename, None, imp.PY_SOURCE return file, filename, descr class EditorWindow(object): @@ -102,8 +117,8 @@ self.top = top = WindowList.ListedToplevel(root, menu=self.menubar) if flist: self.tkinter_vars = flist.vars - #self.top.instance_dict makes flist.inversedict avalable to - #configDialog.py so it can access all EditorWindow instaces + #self.top.instance_dict makes flist.inversedict available to + #configDialog.py so it can access all EditorWindow instances self.top.instance_dict = flist.inversedict else: self.tkinter_vars = {} # keys: Tkinter event names @@ -136,6 +151,14 @@ if macosxSupport.runningAsOSXApp(): # Command-W on editorwindows doesn't work without this. text.bind('<>', self.close_event) + # Some OS X systems have only one mouse button, + # so use control-click for pulldown menus there. + # (Note, AquaTk defines <2> as the right button if + # present and the Tk Text widget already binds <2>.) + text.bind("",self.right_menu_event) + else: + # Elsewhere, use right-click for pulldown menus. + text.bind("<3>",self.right_menu_event) text.bind("<>", self.cut) text.bind("<>", self.copy) text.bind("<>", self.paste) @@ -154,7 +177,6 @@ text.bind("<>", self.find_selection_event) text.bind("<>", self.replace_event) text.bind("<>", self.goto_line_event) - text.bind("<3>", self.right_menu_event) text.bind("<>",self.smart_backspace_event) text.bind("<>",self.newline_and_indent_event) text.bind("<>",self.smart_indent_event) @@ -300,13 +322,13 @@ return "break" def home_callback(self, event): - if (event.state & 12) != 0 and event.keysym == "Home": - # state&1==shift, state&4==control, state&8==alt - return # ; fall back to class binding - + if (event.state & 4) != 0 and event.keysym == "Home": + # state&4==Control. If , use the Tk binding. 
+ return if self.text.index("iomark") and \ self.text.compare("iomark", "<=", "insert lineend") and \ self.text.compare("insert linestart", "<=", "iomark"): + # In Shell on input line, go to just after prompt insertpt = int(self.text.index("iomark").split(".")[1]) else: line = self.text.get("insert linestart", "insert lineend") @@ -315,30 +337,27 @@ break else: insertpt=len(line) - lineat = int(self.text.index("insert").split('.')[1]) - if insertpt == lineat: insertpt = 0 - dest = "insert linestart+"+str(insertpt)+"c" - if (event.state&1) == 0: - # shift not pressed + # shift was not pressed self.text.tag_remove("sel", "1.0", "end") else: if not self.text.index("sel.first"): - self.text.mark_set("anchor","insert") - + self.text.mark_set("my_anchor", "insert") # there was no previous selection + else: + if self.text.compare(self.text.index("sel.first"), "<", self.text.index("insert")): + self.text.mark_set("my_anchor", "sel.first") # extend back + else: + self.text.mark_set("my_anchor", "sel.last") # extend forward first = self.text.index(dest) - last = self.text.index("anchor") - + last = self.text.index("my_anchor") if self.text.compare(first,">",last): first,last = last,first - self.text.tag_remove("sel", "1.0", "end") self.text.tag_add("sel", first, last) - self.text.mark_set("insert", dest) self.text.see("insert") return "break" @@ -385,7 +404,7 @@ menudict[name] = menu = Menu(mbar, name=name) mbar.add_cascade(label=label, menu=menu, underline=underline) - if macosxSupport.runningAsOSXApp(): + if macosxSupport.isCarbonAquaTk(self.root): # Insert the application menu menudict['application'] = menu = Menu(mbar, name='apple') mbar.add_cascade(label='IDLE', menu=menu) @@ -445,7 +464,11 @@ def python_docs(self, event=None): if sys.platform[:3] == 'win': - os.startfile(self.help_url) + try: + os.startfile(self.help_url) + except WindowsError as why: + tkMessageBox.showerror(title='Document Start Failure', + message=str(why), parent=self.text) else: webbrowser.open(self.help_url) return "break" @@ -740,9 +763,13 @@ "Create a callback with the helpfile value frozen at definition time" def display_extra_help(helpfile=helpfile): if not helpfile.startswith(('www', 'http')): - url = os.path.normpath(helpfile) + helpfile = os.path.normpath(helpfile) if sys.platform[:3] == 'win': - os.startfile(helpfile) + try: + os.startfile(helpfile) + except WindowsError as why: + tkMessageBox.showerror(title='Document Start Failure', + message=str(why), parent=self.text) else: webbrowser.open(helpfile) return display_extra_help @@ -1526,7 +1553,12 @@ def get_accelerator(keydefs, eventname): keylist = keydefs.get(eventname) - if not keylist: + # issue10940: temporary workaround to prevent hang with OS X Cocoa Tk 8.5 + # if not keylist: + if (not keylist) or (macosxSupport.runningAsOSXApp() and eventname in { + "<>", + "<>", + "<>"}): return "" s = keylist[0] s = re.sub(r"-[a-z]\b", lambda m: m.group().upper(), s) diff --git a/lib-python/2.7/idlelib/FileList.py b/lib-python/2.7/idlelib/FileList.py --- a/lib-python/2.7/idlelib/FileList.py +++ b/lib-python/2.7/idlelib/FileList.py @@ -43,7 +43,7 @@ def new(self, filename=None): return self.EditorWindow(self, filename) - def close_all_callback(self, event): + def close_all_callback(self, *args, **kwds): for edit in self.inversedict.keys(): reply = edit.close() if reply == "cancel": diff --git a/lib-python/2.7/idlelib/FormatParagraph.py b/lib-python/2.7/idlelib/FormatParagraph.py --- a/lib-python/2.7/idlelib/FormatParagraph.py +++ 
b/lib-python/2.7/idlelib/FormatParagraph.py @@ -54,7 +54,7 @@ # If the block ends in a \n, we dont want the comment # prefix inserted after it. (Im not sure it makes sense to # reformat a comment block that isnt made of complete - # lines, but whatever!) Can't think of a clean soltution, + # lines, but whatever!) Can't think of a clean solution, # so we hack away block_suffix = "" if not newdata[-1]: diff --git a/lib-python/2.7/idlelib/HISTORY.txt b/lib-python/2.7/idlelib/HISTORY.txt --- a/lib-python/2.7/idlelib/HISTORY.txt +++ b/lib-python/2.7/idlelib/HISTORY.txt @@ -13,7 +13,7 @@ - New tarball released as a result of the 'revitalisation' of the IDLEfork project. -- This release requires python 2.1 or better. Compatability with earlier +- This release requires python 2.1 or better. Compatibility with earlier versions of python (especially ancient ones like 1.5x) is no longer a priority in IDLEfork development. diff --git a/lib-python/2.7/idlelib/IOBinding.py b/lib-python/2.7/idlelib/IOBinding.py --- a/lib-python/2.7/idlelib/IOBinding.py +++ b/lib-python/2.7/idlelib/IOBinding.py @@ -320,17 +320,20 @@ return "yes" message = "Do you want to save %s before closing?" % ( self.filename or "this untitled document") - m = tkMessageBox.Message( - title="Save On Close", - message=message, - icon=tkMessageBox.QUESTION, - type=tkMessageBox.YESNOCANCEL, - master=self.text) - reply = m.show() - if reply == "yes": + confirm = tkMessageBox.askyesnocancel( + title="Save On Close", + message=message, + default=tkMessageBox.YES, + master=self.text) + if confirm: + reply = "yes" self.save(None) if not self.get_saved(): reply = "cancel" + elif confirm is None: + reply = "cancel" + else: + reply = "no" self.text.focus_set() return reply @@ -339,7 +342,7 @@ self.save_as(event) else: if self.writefile(self.filename): - self.set_saved(1) + self.set_saved(True) try: self.editwin.store_file_breaks() except AttributeError: # may be a PyShell @@ -465,15 +468,12 @@ self.text.insert("end-1c", "\n") def print_window(self, event): - m = tkMessageBox.Message( - title="Print", - message="Print to Default Printer", - icon=tkMessageBox.QUESTION, - type=tkMessageBox.OKCANCEL, - default=tkMessageBox.OK, - master=self.text) - reply = m.show() - if reply != tkMessageBox.OK: + confirm = tkMessageBox.askokcancel( + title="Print", + message="Print to Default Printer", + default=tkMessageBox.OK, + master=self.text) + if not confirm: self.text.focus_set() return "break" tempfilename = None @@ -488,8 +488,8 @@ if not self.writefile(tempfilename): os.unlink(tempfilename) return "break" - platform=os.name - printPlatform=1 + platform = os.name + printPlatform = True if platform == 'posix': #posix platform command = idleConf.GetOption('main','General', 'print-command-posix') @@ -497,7 +497,7 @@ elif platform == 'nt': #win32 platform command = idleConf.GetOption('main','General','print-command-win') else: #no printing for this platform - printPlatform=0 + printPlatform = False if printPlatform: #we can try to print for this platform command = command % filename pipe = os.popen(command, "r") @@ -511,7 +511,7 @@ output = "Printing command: %s\n" % repr(command) + output tkMessageBox.showerror("Print status", output, master=self.text) else: #no printing for this platform - message="Printing is not enabled for this platform: %s" % platform + message = "Printing is not enabled for this platform: %s" % platform tkMessageBox.showinfo("Print status", message, master=self.text) if tempfilename: os.unlink(tempfilename) diff --git 
a/lib-python/2.7/idlelib/NEWS.txt b/lib-python/2.7/idlelib/NEWS.txt --- a/lib-python/2.7/idlelib/NEWS.txt +++ b/lib-python/2.7/idlelib/NEWS.txt @@ -1,3 +1,18 @@ +What's New in IDLE 2.7.2? +======================= + +*Release date: 29-May-2011* + +- Issue #6378: Further adjust idle.bat to start associated Python + +- Issue #11896: Save on Close failed despite selecting "Yes" in dialog. + +- toggle failing on Tk 8.5, causing IDLE exits and strange selection + behavior. Issue 4676. Improve selection extension behaviour. + +- toggle non-functional when NumLock set on Windows. Issue 3851. + + What's New in IDLE 2.7? ======================= @@ -21,7 +36,7 @@ - Tk 8.5 Text widget requires 'wordprocessor' tabstyle attr to handle mixed space/tab properly. Issue 5129, patch by Guilherme Polo. - + - Issue #3549: On MacOS the preferences menu was not present diff --git a/lib-python/2.7/idlelib/PyShell.py b/lib-python/2.7/idlelib/PyShell.py --- a/lib-python/2.7/idlelib/PyShell.py +++ b/lib-python/2.7/idlelib/PyShell.py @@ -1432,6 +1432,13 @@ shell.interp.prepend_syspath(script) shell.interp.execfile(script) + # Check for problematic OS X Tk versions and print a warning message + # in the IDLE shell window; this is less intrusive than always opening + # a separate window. + tkversionwarning = macosxSupport.tkVersionWarning(root) + if tkversionwarning: + shell.interp.runcommand(''.join(("print('", tkversionwarning, "')"))) + root.mainloop() root.destroy() diff --git a/lib-python/2.7/idlelib/ScriptBinding.py b/lib-python/2.7/idlelib/ScriptBinding.py --- a/lib-python/2.7/idlelib/ScriptBinding.py +++ b/lib-python/2.7/idlelib/ScriptBinding.py @@ -26,6 +26,7 @@ from idlelib import PyShell from idlelib.configHandler import idleConf +from idlelib import macosxSupport IDENTCHARS = string.ascii_letters + string.digits + "_" @@ -53,6 +54,9 @@ self.flist = self.editwin.flist self.root = self.editwin.root + if macosxSupport.runningAsOSXApp(): + self.editwin.text_frame.bind('<>', self._run_module_event) + def check_module_event(self, event): filename = self.getfilename() if not filename: @@ -166,6 +170,19 @@ interp.runcode(code) return 'break' + if macosxSupport.runningAsOSXApp(): + # Tk-Cocoa in MacOSX is broken until at least + # Tk 8.5.9, and without this rather + # crude workaround IDLE would hang when a user + # tries to run a module using the keyboard shortcut + # (the menu item works fine). + _run_module_event = run_module_event + + def run_module_event(self, event): + self.editwin.text_frame.after(200, + lambda: self.editwin.text_frame.event_generate('<>')) + return 'break' + def getfilename(self): """Get source filename. If not saved, offer to save (or create) file @@ -184,9 +201,9 @@ if autosave and filename: self.editwin.io.save(None) else: - reply = self.ask_save_dialog() + confirm = self.ask_save_dialog() self.editwin.text.focus_set() - if reply == "ok": + if confirm: self.editwin.io.save(None) filename = self.editwin.io.filename else: @@ -195,13 +212,11 @@ def ask_save_dialog(self): msg = "Source Must Be Saved\n" + 5*' ' + "OK to Save?" - mb = tkMessageBox.Message(title="Save Before Run or Check", - message=msg, - icon=tkMessageBox.QUESTION, - type=tkMessageBox.OKCANCEL, - default=tkMessageBox.OK, - master=self.editwin.text) - return mb.show() + confirm = tkMessageBox.askokcancel(title="Save Before Run or Check", + message=msg, + default=tkMessageBox.OK, + master=self.editwin.text) + return confirm def errorbox(self, title, message): # XXX This should really be a function of EditorWindow... 
diff --git a/lib-python/2.7/idlelib/config-keys.def b/lib-python/2.7/idlelib/config-keys.def --- a/lib-python/2.7/idlelib/config-keys.def +++ b/lib-python/2.7/idlelib/config-keys.def @@ -176,7 +176,7 @@ redo = close-window = restart-shell = -save-window-as-file = +save-window-as-file = close-all-windows = view-restart = tabify-region = @@ -208,7 +208,7 @@ open-module = find-selection = python-context-help = -save-copy-of-window-as-file = +save-copy-of-window-as-file = open-window-from-file = python-docs = diff --git a/lib-python/2.7/idlelib/extend.txt b/lib-python/2.7/idlelib/extend.txt --- a/lib-python/2.7/idlelib/extend.txt +++ b/lib-python/2.7/idlelib/extend.txt @@ -18,7 +18,7 @@ An IDLE extension class is instantiated with a single argument, `editwin', an EditorWindow instance. The extension cannot assume much -about this argument, but it is guarateed to have the following instance +about this argument, but it is guaranteed to have the following instance variables: text a Text instance (a widget) diff --git a/lib-python/2.7/idlelib/idle.bat b/lib-python/2.7/idlelib/idle.bat --- a/lib-python/2.7/idlelib/idle.bat +++ b/lib-python/2.7/idlelib/idle.bat @@ -1,4 +1,4 @@ @echo off rem Start IDLE using the appropriate Python interpreter set CURRDIR=%~dp0 -start "%CURRDIR%..\..\pythonw.exe" "%CURRDIR%idle.pyw" %1 %2 %3 %4 %5 %6 %7 %8 %9 +start "IDLE" "%CURRDIR%..\..\pythonw.exe" "%CURRDIR%idle.pyw" %1 %2 %3 %4 %5 %6 %7 %8 %9 diff --git a/lib-python/2.7/idlelib/idlever.py b/lib-python/2.7/idlelib/idlever.py --- a/lib-python/2.7/idlelib/idlever.py +++ b/lib-python/2.7/idlelib/idlever.py @@ -1,1 +1,1 @@ -IDLE_VERSION = "2.7.1" +IDLE_VERSION = "2.7.2" diff --git a/lib-python/2.7/idlelib/macosxSupport.py b/lib-python/2.7/idlelib/macosxSupport.py --- a/lib-python/2.7/idlelib/macosxSupport.py +++ b/lib-python/2.7/idlelib/macosxSupport.py @@ -4,6 +4,7 @@ """ import sys import Tkinter +from os import path _appbundle = None @@ -19,10 +20,41 @@ _appbundle = (sys.platform == 'darwin' and '.app' in sys.executable) return _appbundle +_carbonaquatk = None + +def isCarbonAquaTk(root): + """ + Returns True if IDLE is using a Carbon Aqua Tk (instead of the + newer Cocoa Aqua Tk). + """ + global _carbonaquatk + if _carbonaquatk is None: + _carbonaquatk = (runningAsOSXApp() and + 'aqua' in root.tk.call('tk', 'windowingsystem') and + 'AppKit' not in root.tk.call('winfo', 'server', '.')) + return _carbonaquatk + +def tkVersionWarning(root): + """ + Returns a string warning message if the Tk version in use appears to + be one known to cause problems with IDLE. The Apple Cocoa-based Tk 8.5 + that was shipped with Mac OS X 10.6. + """ + + if (runningAsOSXApp() and + ('AppKit' in root.tk.call('winfo', 'server', '.')) and + (root.tk.call('info', 'patchlevel') == '8.5.7') ): + return (r"WARNING: The version of Tcl/Tk (8.5.7) in use may" + r" be unstable.\n" + r"Visit http://www.python.org/download/mac/tcltk/" + r" for current information.") + else: + return False + def addOpenEventSupport(root, flist): """ - This ensures that the application will respont to open AppleEvents, which - makes is feaseable to use IDLE as the default application for python files. + This ensures that the application will respond to open AppleEvents, which + makes is feasible to use IDLE as the default application for python files. 
""" def doOpenFile(*args): for fn in args: @@ -79,9 +111,6 @@ WindowList.add_windows_to_menu(menu) WindowList.register_callback(postwindowsmenu) - menudict['application'] = menu = Menu(menubar, name='apple') - menubar.add_cascade(label='IDLE', menu=menu) - def about_dialog(event=None): from idlelib import aboutDialog aboutDialog.AboutDialog(root, 'About IDLE') @@ -91,41 +120,45 @@ root.instance_dict = flist.inversedict configDialog.ConfigDialog(root, 'Settings') + def help_dialog(event=None): + from idlelib import textView + fn = path.join(path.abspath(path.dirname(__file__)), 'help.txt') + textView.view_file(root, 'Help', fn) root.bind('<>', about_dialog) root.bind('<>', config_dialog) + root.createcommand('::tk::mac::ShowPreferences', config_dialog) if flist: root.bind('<>', flist.close_all_callback) + # The binding above doesn't reliably work on all versions of Tk + # on MacOSX. Adding command definition below does seem to do the + # right thing for now. + root.createcommand('exit', flist.close_all_callback) - ###check if Tk version >= 8.4.14; if so, use hard-coded showprefs binding - tkversion = root.tk.eval('info patchlevel') - # Note: we cannot check if the string tkversion >= '8.4.14', because - # the string '8.4.7' is greater than the string '8.4.14'. - if tuple(map(int, tkversion.split('.'))) >= (8, 4, 14): - Bindings.menudefs[0] = ('application', [ + if isCarbonAquaTk(root): + # for Carbon AquaTk, replace the default Tk apple menu + menudict['application'] = menu = Menu(menubar, name='apple') + menubar.add_cascade(label='IDLE', menu=menu) + Bindings.menudefs.insert(0, + ('application', [ ('About IDLE', '<>'), - None, - ]) - root.createcommand('::tk::mac::ShowPreferences', config_dialog) + None, + ])) + tkversion = root.tk.eval('info patchlevel') + if tuple(map(int, tkversion.split('.'))) < (8, 4, 14): + # for earlier AquaTk versions, supply a Preferences menu item + Bindings.menudefs[0][1].append( + ('_Preferences....', '<>'), + ) else: - for mname, entrylist in Bindings.menudefs: - menu = menudict.get(mname) - if not menu: - continue - else: - for entry in entrylist: - if not entry: - menu.add_separator() - else: - label, eventname = entry - underline, label = prepstr(label) - accelerator = get_accelerator(Bindings.default_keydefs, - eventname) - def command(text=root, eventname=eventname): - text.event_generate(eventname) - menu.add_command(label=label, underline=underline, - command=command, accelerator=accelerator) + # assume Cocoa AquaTk + # replace default About dialog with About IDLE one + root.createcommand('tkAboutDialog', about_dialog) + # replace default "Help" item in Help menu + root.createcommand('::tk::mac::ShowHelp', help_dialog) + # remove redundant "IDLE Help" from menu + del Bindings.menudefs[-1][1][0] def setupApp(root, flist): """ diff --git a/lib-python/2.7/imaplib.py b/lib-python/2.7/imaplib.py --- a/lib-python/2.7/imaplib.py +++ b/lib-python/2.7/imaplib.py @@ -1158,28 +1158,17 @@ self.port = port self.sock = socket.create_connection((host, port)) self.sslobj = ssl.wrap_socket(self.sock, self.keyfile, self.certfile) + self.file = self.sslobj.makefile('rb') def read(self, size): """Read 'size' bytes from remote.""" - # sslobj.read() sometimes returns < size bytes - chunks = [] - read = 0 - while read < size: - data = self.sslobj.read(min(size-read, 16384)) - read += len(data) - chunks.append(data) - - return ''.join(chunks) + return self.file.read(size) def readline(self): """Read line from remote.""" - line = [] - while 1: - char = self.sslobj.read(1) - 
line.append(char) - if char in ("\n", ""): return ''.join(line) + return self.file.readline() def send(self, data): @@ -1195,6 +1184,7 @@ def shutdown(self): """Close I/O established in "open".""" + self.file.close() self.sock.close() @@ -1321,9 +1311,10 @@ 'Jul': 7, 'Aug': 8, 'Sep': 9, 'Oct': 10, 'Nov': 11, 'Dec': 12} def Internaldate2tuple(resp): - """Convert IMAP4 INTERNALDATE to UT. + """Parse an IMAP4 INTERNALDATE string. - Returns Python time module tuple. + Return corresponding local time. The return value is a + time.struct_time instance or None if the string has wrong format. """ mo = InternalDate.match(resp) @@ -1390,9 +1381,14 @@ def Time2Internaldate(date_time): - """Convert 'date_time' to IMAP4 INTERNALDATE representation. + """Convert date_time to IMAP4 INTERNALDATE representation. - Return string in form: '"DD-Mmm-YYYY HH:MM:SS +HHMM"' + Return string in form: '"DD-Mmm-YYYY HH:MM:SS +HHMM"'. The + date_time argument can be a number (int or float) representing + seconds since epoch (as returned by time.time()), a 9-tuple + representing local time (as returned by time.localtime()), or a + double-quoted string. In the last case, it is assumed to already + be in the correct format. """ if isinstance(date_time, (int, float)): diff --git a/lib-python/2.7/inspect.py b/lib-python/2.7/inspect.py --- a/lib-python/2.7/inspect.py +++ b/lib-python/2.7/inspect.py @@ -943,8 +943,14 @@ f_name, 'at most' if defaults else 'exactly', num_args, 'arguments' if num_args > 1 else 'argument', num_total)) elif num_args == 0 and num_total: - raise TypeError('%s() takes no arguments (%d given)' % - (f_name, num_total)) + if varkw: + if num_pos: + # XXX: We should use num_pos, but Python also uses num_total: + raise TypeError('%s() takes exactly 0 arguments ' + '(%d given)' % (f_name, num_total)) + else: + raise TypeError('%s() takes no arguments (%d given)' % + (f_name, num_total)) for arg in args: if isinstance(arg, str) and arg in named: if is_assigned(arg): diff --git a/lib-python/2.7/json/decoder.py b/lib-python/2.7/json/decoder.py --- a/lib-python/2.7/json/decoder.py +++ b/lib-python/2.7/json/decoder.py @@ -4,7 +4,7 @@ import sys import struct -from json.scanner import make_scanner +from json import scanner try: from _json import scanstring as c_scanstring except ImportError: @@ -161,6 +161,12 @@ nextchar = s[end:end + 1] # Trivial empty object if nextchar == '}': + if object_pairs_hook is not None: + result = object_pairs_hook(pairs) + return result, end + pairs = {} + if object_hook is not None: + pairs = object_hook(pairs) return pairs, end + 1 elif nextchar != '"': raise ValueError(errmsg("Expecting property name", s, end)) @@ -350,7 +356,7 @@ self.parse_object = JSONObject self.parse_array = JSONArray self.parse_string = scanstring - self.scan_once = make_scanner(self) + self.scan_once = scanner.make_scanner(self) def decode(self, s, _w=WHITESPACE.match): """Return the Python representation of ``s`` (a ``str`` or ``unicode`` diff --git a/lib-python/2.7/json/encoder.py b/lib-python/2.7/json/encoder.py --- a/lib-python/2.7/json/encoder.py +++ b/lib-python/2.7/json/encoder.py @@ -251,7 +251,7 @@ if (_one_shot and c_make_encoder is not None - and not self.indent and not self.sort_keys): + and self.indent is None and not self.sort_keys): _iterencode = c_make_encoder( markers, self.default, _encoder, self.indent, self.key_separator, self.item_separator, self.sort_keys, diff --git a/lib-python/2.7/json/tests/__init__.py b/lib-python/2.7/json/tests/__init__.py --- 
a/lib-python/2.7/json/tests/__init__.py +++ b/lib-python/2.7/json/tests/__init__.py @@ -1,7 +1,46 @@ import os import sys +import json +import doctest import unittest -import doctest + +from test import test_support + +# import json with and without accelerations +cjson = test_support.import_fresh_module('json', fresh=['_json']) +pyjson = test_support.import_fresh_module('json', blocked=['_json']) + +# create two base classes that will be used by the other tests +class PyTest(unittest.TestCase): + json = pyjson + loads = staticmethod(pyjson.loads) + dumps = staticmethod(pyjson.dumps) + + at unittest.skipUnless(cjson, 'requires _json') +class CTest(unittest.TestCase): + if cjson is not None: + json = cjson + loads = staticmethod(cjson.loads) + dumps = staticmethod(cjson.dumps) + +# test PyTest and CTest checking if the functions come from the right module +class TestPyTest(PyTest): + def test_pyjson(self): + self.assertEqual(self.json.scanner.make_scanner.__module__, + 'json.scanner') + self.assertEqual(self.json.decoder.scanstring.__module__, + 'json.decoder') + self.assertEqual(self.json.encoder.encode_basestring_ascii.__module__, + 'json.encoder') + +class TestCTest(CTest): + def test_cjson(self): + self.assertEqual(self.json.scanner.make_scanner.__module__, '_json') + self.assertEqual(self.json.decoder.scanstring.__module__, '_json') + self.assertEqual(self.json.encoder.c_make_encoder.__module__, '_json') + self.assertEqual(self.json.encoder.encode_basestring_ascii.__module__, + '_json') + here = os.path.dirname(__file__) @@ -17,12 +56,11 @@ return suite def additional_tests(): - import json - import json.encoder - import json.decoder suite = unittest.TestSuite() for mod in (json, json.encoder, json.decoder): suite.addTest(doctest.DocTestSuite(mod)) + suite.addTest(TestPyTest('test_pyjson')) + suite.addTest(TestCTest('test_cjson')) return suite def main(): diff --git a/lib-python/2.7/json/tests/test_check_circular.py b/lib-python/2.7/json/tests/test_check_circular.py --- a/lib-python/2.7/json/tests/test_check_circular.py +++ b/lib-python/2.7/json/tests/test_check_circular.py @@ -1,30 +1,34 @@ -from unittest import TestCase -import json +from json.tests import PyTest, CTest + def default_iterable(obj): return list(obj) -class TestCheckCircular(TestCase): +class TestCheckCircular(object): def test_circular_dict(self): dct = {} dct['a'] = dct - self.assertRaises(ValueError, json.dumps, dct) + self.assertRaises(ValueError, self.dumps, dct) def test_circular_list(self): lst = [] lst.append(lst) - self.assertRaises(ValueError, json.dumps, lst) + self.assertRaises(ValueError, self.dumps, lst) def test_circular_composite(self): dct2 = {} dct2['a'] = [] dct2['a'].append(dct2) - self.assertRaises(ValueError, json.dumps, dct2) + self.assertRaises(ValueError, self.dumps, dct2) def test_circular_default(self): - json.dumps([set()], default=default_iterable) - self.assertRaises(TypeError, json.dumps, [set()]) + self.dumps([set()], default=default_iterable) + self.assertRaises(TypeError, self.dumps, [set()]) def test_circular_off_default(self): - json.dumps([set()], default=default_iterable, check_circular=False) - self.assertRaises(TypeError, json.dumps, [set()], check_circular=False) + self.dumps([set()], default=default_iterable, check_circular=False) + self.assertRaises(TypeError, self.dumps, [set()], check_circular=False) + + +class TestPyCheckCircular(TestCheckCircular, PyTest): pass +class TestCCheckCircular(TestCheckCircular, CTest): pass diff --git a/lib-python/2.7/json/tests/test_decode.py 
b/lib-python/2.7/json/tests/test_decode.py --- a/lib-python/2.7/json/tests/test_decode.py +++ b/lib-python/2.7/json/tests/test_decode.py @@ -1,18 +1,17 @@ import decimal -from unittest import TestCase from StringIO import StringIO +from collections import OrderedDict +from json.tests import PyTest, CTest -import json -from collections import OrderedDict -class TestDecode(TestCase): +class TestDecode(object): def test_decimal(self): - rval = json.loads('1.1', parse_float=decimal.Decimal) + rval = self.loads('1.1', parse_float=decimal.Decimal) self.assertTrue(isinstance(rval, decimal.Decimal)) self.assertEqual(rval, decimal.Decimal('1.1')) def test_float(self): - rval = json.loads('1', parse_int=float) + rval = self.loads('1', parse_int=float) self.assertTrue(isinstance(rval, float)) self.assertEqual(rval, 1.0) @@ -20,22 +19,32 @@ # Several optimizations were made that skip over calls to # the whitespace regex, so this test is designed to try and # exercise the uncommon cases. The array cases are already covered. - rval = json.loads('{ "key" : "value" , "k":"v" }') + rval = self.loads('{ "key" : "value" , "k":"v" }') self.assertEqual(rval, {"key":"value", "k":"v"}) + def test_empty_objects(self): + self.assertEqual(self.loads('{}'), {}) + self.assertEqual(self.loads('[]'), []) + self.assertEqual(self.loads('""'), u"") + self.assertIsInstance(self.loads('""'), unicode) + def test_object_pairs_hook(self): s = '{"xkd":1, "kcw":2, "art":3, "hxm":4, "qrt":5, "pad":6, "hoy":7}' p = [("xkd", 1), ("kcw", 2), ("art", 3), ("hxm", 4), ("qrt", 5), ("pad", 6), ("hoy", 7)] - self.assertEqual(json.loads(s), eval(s)) - self.assertEqual(json.loads(s, object_pairs_hook=lambda x: x), p) - self.assertEqual(json.load(StringIO(s), - object_pairs_hook=lambda x: x), p) - od = json.loads(s, object_pairs_hook=OrderedDict) + self.assertEqual(self.loads(s), eval(s)) + self.assertEqual(self.loads(s, object_pairs_hook=lambda x: x), p) + self.assertEqual(self.json.load(StringIO(s), + object_pairs_hook=lambda x: x), p) + od = self.loads(s, object_pairs_hook=OrderedDict) self.assertEqual(od, OrderedDict(p)) self.assertEqual(type(od), OrderedDict) # the object_pairs_hook takes priority over the object_hook - self.assertEqual(json.loads(s, + self.assertEqual(self.loads(s, object_pairs_hook=OrderedDict, object_hook=lambda x: None), OrderedDict(p)) + + +class TestPyDecode(TestDecode, PyTest): pass +class TestCDecode(TestDecode, CTest): pass diff --git a/lib-python/2.7/json/tests/test_default.py b/lib-python/2.7/json/tests/test_default.py --- a/lib-python/2.7/json/tests/test_default.py +++ b/lib-python/2.7/json/tests/test_default.py @@ -1,9 +1,12 @@ -from unittest import TestCase +from json.tests import PyTest, CTest -import json -class TestDefault(TestCase): +class TestDefault(object): def test_default(self): self.assertEqual( - json.dumps(type, default=repr), - json.dumps(repr(type))) + self.dumps(type, default=repr), + self.dumps(repr(type))) + + +class TestPyDefault(TestDefault, PyTest): pass +class TestCDefault(TestDefault, CTest): pass diff --git a/lib-python/2.7/json/tests/test_dump.py b/lib-python/2.7/json/tests/test_dump.py --- a/lib-python/2.7/json/tests/test_dump.py +++ b/lib-python/2.7/json/tests/test_dump.py @@ -1,21 +1,23 @@ -from unittest import TestCase from cStringIO import StringIO +from json.tests import PyTest, CTest -import json -class TestDump(TestCase): +class TestDump(object): def test_dump(self): sio = StringIO() - json.dump({}, sio) + self.json.dump({}, sio) self.assertEqual(sio.getvalue(), '{}') def 
test_dumps(self): - self.assertEqual(json.dumps({}), '{}') + self.assertEqual(self.dumps({}), '{}') def test_encode_truefalse(self): - self.assertEqual(json.dumps( + self.assertEqual(self.dumps( {True: False, False: True}, sort_keys=True), '{"false": true, "true": false}') - self.assertEqual(json.dumps( + self.assertEqual(self.dumps( {2: 3.0, 4.0: 5L, False: 1, 6L: True}, sort_keys=True), '{"false": 1, "2": 3.0, "4.0": 5, "6": true}') + +class TestPyDump(TestDump, PyTest): pass +class TestCDump(TestDump, CTest): pass diff --git a/lib-python/2.7/json/tests/test_encode_basestring_ascii.py b/lib-python/2.7/json/tests/test_encode_basestring_ascii.py --- a/lib-python/2.7/json/tests/test_encode_basestring_ascii.py +++ b/lib-python/2.7/json/tests/test_encode_basestring_ascii.py @@ -1,8 +1,6 @@ -from unittest import TestCase +from collections import OrderedDict +from json.tests import PyTest, CTest -import json.encoder -from json import dumps -from collections import OrderedDict CASES = [ (u'/\\"\ucafe\ubabe\uab98\ufcde\ubcda\uef4a\x08\x0c\n\r\t`1~!@#$%^&*()_+-=[]{}|;:\',./<>?', '"/\\\\\\"\\ucafe\\ubabe\\uab98\\ufcde\\ubcda\\uef4a\\b\\f\\n\\r\\t`1~!@#$%^&*()_+-=[]{}|;:\',./<>?"'), @@ -23,19 +21,11 @@ (u'\u0123\u4567\u89ab\ucdef\uabcd\uef4a', '"\\u0123\\u4567\\u89ab\\ucdef\\uabcd\\uef4a"'), ] -class TestEncodeBaseStringAscii(TestCase): - def test_py_encode_basestring_ascii(self): - self._test_encode_basestring_ascii(json.encoder.py_encode_basestring_ascii) - - def test_c_encode_basestring_ascii(self): - if not json.encoder.c_encode_basestring_ascii: - return - self._test_encode_basestring_ascii(json.encoder.c_encode_basestring_ascii) - - def _test_encode_basestring_ascii(self, encode_basestring_ascii): - fname = encode_basestring_ascii.__name__ +class TestEncodeBasestringAscii(object): + def test_encode_basestring_ascii(self): + fname = self.json.encoder.encode_basestring_ascii.__name__ for input_string, expect in CASES: - result = encode_basestring_ascii(input_string) + result = self.json.encoder.encode_basestring_ascii(input_string) self.assertEqual(result, expect, '{0!r} != {1!r} for {2}({3!r})'.format( result, expect, fname, input_string)) @@ -43,5 +33,9 @@ def test_ordered_dict(self): # See issue 6105 items = [('one', 1), ('two', 2), ('three', 3), ('four', 4), ('five', 5)] - s = json.dumps(OrderedDict(items)) + s = self.dumps(OrderedDict(items)) self.assertEqual(s, '{"one": 1, "two": 2, "three": 3, "four": 4, "five": 5}') + + +class TestPyEncodeBasestringAscii(TestEncodeBasestringAscii, PyTest): pass +class TestCEncodeBasestringAscii(TestEncodeBasestringAscii, CTest): pass diff --git a/lib-python/2.7/json/tests/test_fail.py b/lib-python/2.7/json/tests/test_fail.py --- a/lib-python/2.7/json/tests/test_fail.py +++ b/lib-python/2.7/json/tests/test_fail.py @@ -1,6 +1,4 @@ -from unittest import TestCase - -import json +from json.tests import PyTest, CTest # Fri Dec 30 18:57:26 2005 JSONDOCS = [ @@ -61,15 +59,15 @@ 18: "spec doesn't specify any nesting limitations", } -class TestFail(TestCase): +class TestFail(object): def test_failures(self): for idx, doc in enumerate(JSONDOCS): idx = idx + 1 if idx in SKIPS: - json.loads(doc) + self.loads(doc) continue try: - json.loads(doc) + self.loads(doc) except ValueError: pass else: @@ -79,7 +77,11 @@ data = {'a' : 1, (1, 2) : 2} #This is for c encoder - self.assertRaises(TypeError, json.dumps, data) + self.assertRaises(TypeError, self.dumps, data) #This is for python encoder - self.assertRaises(TypeError, json.dumps, data, indent=True) + 
self.assertRaises(TypeError, self.dumps, data, indent=True) + + +class TestPyFail(TestFail, PyTest): pass +class TestCFail(TestFail, CTest): pass diff --git a/lib-python/2.7/json/tests/test_float.py b/lib-python/2.7/json/tests/test_float.py --- a/lib-python/2.7/json/tests/test_float.py +++ b/lib-python/2.7/json/tests/test_float.py @@ -1,19 +1,22 @@ import math -from unittest import TestCase +from json.tests import PyTest, CTest -import json -class TestFloat(TestCase): +class TestFloat(object): def test_floats(self): for num in [1617161771.7650001, math.pi, math.pi**100, math.pi**-100, 3.1]: - self.assertEqual(float(json.dumps(num)), num) - self.assertEqual(json.loads(json.dumps(num)), num) - self.assertEqual(json.loads(unicode(json.dumps(num))), num) + self.assertEqual(float(self.dumps(num)), num) + self.assertEqual(self.loads(self.dumps(num)), num) + self.assertEqual(self.loads(unicode(self.dumps(num))), num) def test_ints(self): for num in [1, 1L, 1<<32, 1<<64]: - self.assertEqual(json.dumps(num), str(num)) - self.assertEqual(int(json.dumps(num)), num) - self.assertEqual(json.loads(json.dumps(num)), num) - self.assertEqual(json.loads(unicode(json.dumps(num))), num) + self.assertEqual(self.dumps(num), str(num)) + self.assertEqual(int(self.dumps(num)), num) + self.assertEqual(self.loads(self.dumps(num)), num) + self.assertEqual(self.loads(unicode(self.dumps(num))), num) + + +class TestPyFloat(TestFloat, PyTest): pass +class TestCFloat(TestFloat, CTest): pass diff --git a/lib-python/2.7/json/tests/test_indent.py b/lib-python/2.7/json/tests/test_indent.py --- a/lib-python/2.7/json/tests/test_indent.py +++ b/lib-python/2.7/json/tests/test_indent.py @@ -1,9 +1,9 @@ -from unittest import TestCase +import textwrap +from StringIO import StringIO +from json.tests import PyTest, CTest -import json -import textwrap -class TestIndent(TestCase): +class TestIndent(object): def test_indent(self): h = [['blorpie'], ['whoops'], [], 'd-shtaeou', 'd-nthiouh', 'i-vhbjkhnth', {'nifty': 87}, {'field': 'yes', 'morefield': False} ] @@ -30,12 +30,31 @@ ]""") - d1 = json.dumps(h) - d2 = json.dumps(h, indent=2, sort_keys=True, separators=(',', ': ')) + d1 = self.dumps(h) + d2 = self.dumps(h, indent=2, sort_keys=True, separators=(',', ': ')) - h1 = json.loads(d1) - h2 = json.loads(d2) + h1 = self.loads(d1) + h2 = self.loads(d2) self.assertEqual(h1, h) self.assertEqual(h2, h) self.assertEqual(d2, expect) + + def test_indent0(self): + h = {3: 1} + def check(indent, expected): + d1 = self.dumps(h, indent=indent) + self.assertEqual(d1, expected) + + sio = StringIO() + self.json.dump(h, sio, indent=indent) + self.assertEqual(sio.getvalue(), expected) + + # indent=0 should emit newlines + check(0, '{\n"3": 1\n}') + # indent=None is more compact + check(None, '{"3": 1}') + + +class TestPyIndent(TestIndent, PyTest): pass +class TestCIndent(TestIndent, CTest): pass diff --git a/lib-python/2.7/json/tests/test_pass1.py b/lib-python/2.7/json/tests/test_pass1.py --- a/lib-python/2.7/json/tests/test_pass1.py +++ b/lib-python/2.7/json/tests/test_pass1.py @@ -1,6 +1,5 @@ -from unittest import TestCase +from json.tests import PyTest, CTest -import json # from http://json.org/JSON_checker/test/pass1.json JSON = r''' @@ -62,15 +61,19 @@ ,"rosebud"] ''' -class TestPass1(TestCase): +class TestPass1(object): def test_parse(self): # test in/out equivalence and parsing - res = json.loads(JSON) - out = json.dumps(res) - self.assertEqual(res, json.loads(out)) + res = self.loads(JSON) + out = self.dumps(res) + self.assertEqual(res, 
self.loads(out)) try: - json.dumps(res, allow_nan=False) + self.dumps(res, allow_nan=False) except ValueError: pass else: self.fail("23456789012E666 should be out of range") + + +class TestPyPass1(TestPass1, PyTest): pass +class TestCPass1(TestPass1, CTest): pass diff --git a/lib-python/2.7/json/tests/test_pass2.py b/lib-python/2.7/json/tests/test_pass2.py --- a/lib-python/2.7/json/tests/test_pass2.py +++ b/lib-python/2.7/json/tests/test_pass2.py @@ -1,14 +1,18 @@ -from unittest import TestCase -import json +from json.tests import PyTest, CTest + # from http://json.org/JSON_checker/test/pass2.json JSON = r''' [[[[[[[[[[[[[[[[[[["Not too deep"]]]]]]]]]]]]]]]]]]] ''' -class TestPass2(TestCase): +class TestPass2(object): def test_parse(self): # test in/out equivalence and parsing - res = json.loads(JSON) - out = json.dumps(res) - self.assertEqual(res, json.loads(out)) + res = self.loads(JSON) + out = self.dumps(res) + self.assertEqual(res, self.loads(out)) + + +class TestPyPass2(TestPass2, PyTest): pass +class TestCPass2(TestPass2, CTest): pass diff --git a/lib-python/2.7/json/tests/test_pass3.py b/lib-python/2.7/json/tests/test_pass3.py --- a/lib-python/2.7/json/tests/test_pass3.py +++ b/lib-python/2.7/json/tests/test_pass3.py @@ -1,6 +1,5 @@ -from unittest import TestCase +from json.tests import PyTest, CTest -import json # from http://json.org/JSON_checker/test/pass3.json JSON = r''' @@ -12,9 +11,14 @@ } ''' -class TestPass3(TestCase): + +class TestPass3(object): def test_parse(self): # test in/out equivalence and parsing - res = json.loads(JSON) - out = json.dumps(res) - self.assertEqual(res, json.loads(out)) + res = self.loads(JSON) + out = self.dumps(res) + self.assertEqual(res, self.loads(out)) + + +class TestPyPass3(TestPass3, PyTest): pass +class TestCPass3(TestPass3, CTest): pass diff --git a/lib-python/2.7/json/tests/test_recursion.py b/lib-python/2.7/json/tests/test_recursion.py --- a/lib-python/2.7/json/tests/test_recursion.py +++ b/lib-python/2.7/json/tests/test_recursion.py @@ -1,28 +1,16 @@ -from unittest import TestCase +from json.tests import PyTest, CTest -import json class JSONTestObject: pass -class RecursiveJSONEncoder(json.JSONEncoder): - recurse = False - def default(self, o): - if o is JSONTestObject: - if self.recurse: - return [JSONTestObject] - else: - return 'JSONTestObject' - return json.JSONEncoder.default(o) - - -class TestRecursion(TestCase): +class TestRecursion(object): def test_listrecursion(self): x = [] x.append(x) try: - json.dumps(x) + self.dumps(x) except ValueError: pass else: @@ -31,7 +19,7 @@ y = [x] x.append(y) try: - json.dumps(x) + self.dumps(x) except ValueError: pass else: @@ -39,13 +27,13 @@ y = [] x = [y, y] # ensure that the marker is cleared - json.dumps(x) + self.dumps(x) def test_dictrecursion(self): x = {} x["test"] = x try: - json.dumps(x) + self.dumps(x) except ValueError: pass else: @@ -53,9 +41,19 @@ x = {} y = {"a": x, "b": x} # ensure that the marker is cleared - json.dumps(x) + self.dumps(x) def test_defaultrecursion(self): + class RecursiveJSONEncoder(self.json.JSONEncoder): + recurse = False + def default(self, o): + if o is JSONTestObject: + if self.recurse: + return [JSONTestObject] + else: + return 'JSONTestObject' + return pyjson.JSONEncoder.default(o) + enc = RecursiveJSONEncoder() self.assertEqual(enc.encode(JSONTestObject), '"JSONTestObject"') enc.recurse = True @@ -65,3 +63,46 @@ pass else: self.fail("didn't raise ValueError on default recursion") + + + def test_highly_nested_objects_decoding(self): + # test that loading 
highly-nested objects doesn't segfault when C + # accelerations are used. See #12017 + # str + with self.assertRaises(RuntimeError): + self.loads('{"a":' * 100000 + '1' + '}' * 100000) + with self.assertRaises(RuntimeError): + self.loads('{"a":' * 100000 + '[1]' + '}' * 100000) + with self.assertRaises(RuntimeError): + self.loads('[' * 100000 + '1' + ']' * 100000) + # unicode + with self.assertRaises(RuntimeError): + self.loads(u'{"a":' * 100000 + u'1' + u'}' * 100000) + with self.assertRaises(RuntimeError): + self.loads(u'{"a":' * 100000 + u'[1]' + u'}' * 100000) + with self.assertRaises(RuntimeError): + self.loads(u'[' * 100000 + u'1' + u']' * 100000) + + def test_highly_nested_objects_encoding(self): + # See #12051 + l, d = [], {} + for x in xrange(100000): + l, d = [l], {'k':d} + with self.assertRaises(RuntimeError): + self.dumps(l) + with self.assertRaises(RuntimeError): + self.dumps(d) + + def test_endless_recursion(self): + # See #12051 + class EndlessJSONEncoder(self.json.JSONEncoder): + def default(self, o): + """If check_circular is False, this will keep adding another list.""" + return [o] + + with self.assertRaises(RuntimeError): + EndlessJSONEncoder(check_circular=False).encode(5j) + + +class TestPyRecursion(TestRecursion, PyTest): pass +class TestCRecursion(TestRecursion, CTest): pass diff --git a/lib-python/2.7/json/tests/test_scanstring.py b/lib-python/2.7/json/tests/test_scanstring.py --- a/lib-python/2.7/json/tests/test_scanstring.py +++ b/lib-python/2.7/json/tests/test_scanstring.py @@ -1,18 +1,10 @@ import sys -import decimal -from unittest import TestCase +from json.tests import PyTest, CTest -import json -import json.decoder -class TestScanString(TestCase): - def test_py_scanstring(self): - self._test_scanstring(json.decoder.py_scanstring) - - def test_c_scanstring(self): - self._test_scanstring(json.decoder.c_scanstring) - - def _test_scanstring(self, scanstring): +class TestScanstring(object): + def test_scanstring(self): + scanstring = self.json.decoder.scanstring self.assertEqual( scanstring('"z\\ud834\\udd20x"', 1, None, True), (u'z\U0001d120x', 16)) @@ -103,10 +95,15 @@ (u'Bad value', 12)) def test_issue3623(self): - self.assertRaises(ValueError, json.decoder.scanstring, b"xxx", 1, + self.assertRaises(ValueError, self.json.decoder.scanstring, b"xxx", 1, "xxx") self.assertRaises(UnicodeDecodeError, - json.encoder.encode_basestring_ascii, b"xx\xff") + self.json.encoder.encode_basestring_ascii, b"xx\xff") def test_overflow(self): - self.assertRaises(OverflowError, json.decoder.scanstring, b"xxx", sys.maxsize+1) + with self.assertRaises(OverflowError): + self.json.decoder.scanstring(b"xxx", sys.maxsize+1) + + +class TestPyScanstring(TestScanstring, PyTest): pass +class TestCScanstring(TestScanstring, CTest): pass diff --git a/lib-python/2.7/json/tests/test_separators.py b/lib-python/2.7/json/tests/test_separators.py --- a/lib-python/2.7/json/tests/test_separators.py +++ b/lib-python/2.7/json/tests/test_separators.py @@ -1,10 +1,8 @@ import textwrap -from unittest import TestCase +from json.tests import PyTest, CTest -import json - -class TestSeparators(TestCase): +class TestSeparators(object): def test_separators(self): h = [['blorpie'], ['whoops'], [], 'd-shtaeou', 'd-nthiouh', 'i-vhbjkhnth', {'nifty': 87}, {'field': 'yes', 'morefield': False} ] @@ -31,12 +29,16 @@ ]""") - d1 = json.dumps(h) - d2 = json.dumps(h, indent=2, sort_keys=True, separators=(' ,', ' : ')) + d1 = self.dumps(h) + d2 = self.dumps(h, indent=2, sort_keys=True, separators=(' ,', ' : ')) - h1 = 
json.loads(d1) - h2 = json.loads(d2) + h1 = self.loads(d1) + h2 = self.loads(d2) self.assertEqual(h1, h) self.assertEqual(h2, h) self.assertEqual(d2, expect) + + +class TestPySeparators(TestSeparators, PyTest): pass +class TestCSeparators(TestSeparators, CTest): pass diff --git a/lib-python/2.7/json/tests/test_speedups.py b/lib-python/2.7/json/tests/test_speedups.py --- a/lib-python/2.7/json/tests/test_speedups.py +++ b/lib-python/2.7/json/tests/test_speedups.py @@ -1,24 +1,23 @@ -import decimal -from unittest import TestCase +from json.tests import CTest -from json import decoder, encoder, scanner -class TestSpeedups(TestCase): +class TestSpeedups(CTest): def test_scanstring(self): - self.assertEqual(decoder.scanstring.__module__, "_json") - self.assertTrue(decoder.scanstring is decoder.c_scanstring) + self.assertEqual(self.json.decoder.scanstring.__module__, "_json") + self.assertIs(self.json.decoder.scanstring, self.json.decoder.c_scanstring) def test_encode_basestring_ascii(self): - self.assertEqual(encoder.encode_basestring_ascii.__module__, "_json") - self.assertTrue(encoder.encode_basestring_ascii is - encoder.c_encode_basestring_ascii) + self.assertEqual(self.json.encoder.encode_basestring_ascii.__module__, + "_json") + self.assertIs(self.json.encoder.encode_basestring_ascii, + self.json.encoder.c_encode_basestring_ascii) -class TestDecode(TestCase): +class TestDecode(CTest): def test_make_scanner(self): - self.assertRaises(AttributeError, scanner.c_make_scanner, 1) + self.assertRaises(AttributeError, self.json.scanner.c_make_scanner, 1) def test_make_encoder(self): - self.assertRaises(TypeError, encoder.c_make_encoder, + self.assertRaises(TypeError, self.json.encoder.c_make_encoder, None, "\xCD\x7D\x3D\x4E\x12\x4C\xF9\x79\xD7\x52\xBA\x82\xF2\x27\x4A\x7D\xA0\xCA\x75", None) diff --git a/lib-python/2.7/json/tests/test_unicode.py b/lib-python/2.7/json/tests/test_unicode.py --- a/lib-python/2.7/json/tests/test_unicode.py +++ b/lib-python/2.7/json/tests/test_unicode.py @@ -1,11 +1,10 @@ -from unittest import TestCase +from collections import OrderedDict +from json.tests import PyTest, CTest -import json -from collections import OrderedDict -class TestUnicode(TestCase): +class TestUnicode(object): def test_encoding1(self): - encoder = json.JSONEncoder(encoding='utf-8') + encoder = self.json.JSONEncoder(encoding='utf-8') u = u'\N{GREEK SMALL LETTER ALPHA}\N{GREEK CAPITAL LETTER OMEGA}' s = u.encode('utf-8') ju = encoder.encode(u) @@ -15,68 +14,72 @@ def test_encoding2(self): u = u'\N{GREEK SMALL LETTER ALPHA}\N{GREEK CAPITAL LETTER OMEGA}' s = u.encode('utf-8') - ju = json.dumps(u, encoding='utf-8') - js = json.dumps(s, encoding='utf-8') + ju = self.dumps(u, encoding='utf-8') + js = self.dumps(s, encoding='utf-8') self.assertEqual(ju, js) def test_encoding3(self): u = u'\N{GREEK SMALL LETTER ALPHA}\N{GREEK CAPITAL LETTER OMEGA}' - j = json.dumps(u) + j = self.dumps(u) self.assertEqual(j, '"\\u03b1\\u03a9"') def test_encoding4(self): u = u'\N{GREEK SMALL LETTER ALPHA}\N{GREEK CAPITAL LETTER OMEGA}' - j = json.dumps([u]) + j = self.dumps([u]) self.assertEqual(j, '["\\u03b1\\u03a9"]') def test_encoding5(self): u = u'\N{GREEK SMALL LETTER ALPHA}\N{GREEK CAPITAL LETTER OMEGA}' - j = json.dumps(u, ensure_ascii=False) + j = self.dumps(u, ensure_ascii=False) self.assertEqual(j, u'"{0}"'.format(u)) def test_encoding6(self): u = u'\N{GREEK SMALL LETTER ALPHA}\N{GREEK CAPITAL LETTER OMEGA}' - j = json.dumps([u], ensure_ascii=False) + j = self.dumps([u], ensure_ascii=False) self.assertEqual(j, 
u'["{0}"]'.format(u)) def test_big_unicode_encode(self): u = u'\U0001d120' - self.assertEqual(json.dumps(u), '"\\ud834\\udd20"') - self.assertEqual(json.dumps(u, ensure_ascii=False), u'"\U0001d120"') + self.assertEqual(self.dumps(u), '"\\ud834\\udd20"') + self.assertEqual(self.dumps(u, ensure_ascii=False), u'"\U0001d120"') def test_big_unicode_decode(self): u = u'z\U0001d120x' - self.assertEqual(json.loads('"' + u + '"'), u) - self.assertEqual(json.loads('"z\\ud834\\udd20x"'), u) + self.assertEqual(self.loads('"' + u + '"'), u) + self.assertEqual(self.loads('"z\\ud834\\udd20x"'), u) def test_unicode_decode(self): for i in range(0, 0xd7ff): u = unichr(i) s = '"\\u{0:04x}"'.format(i) - self.assertEqual(json.loads(s), u) + self.assertEqual(self.loads(s), u) def test_object_pairs_hook_with_unicode(self): s = u'{"xkd":1, "kcw":2, "art":3, "hxm":4, "qrt":5, "pad":6, "hoy":7}' p = [(u"xkd", 1), (u"kcw", 2), (u"art", 3), (u"hxm", 4), (u"qrt", 5), (u"pad", 6), (u"hoy", 7)] - self.assertEqual(json.loads(s), eval(s)) - self.assertEqual(json.loads(s, object_pairs_hook = lambda x: x), p) - od = json.loads(s, object_pairs_hook = OrderedDict) + self.assertEqual(self.loads(s), eval(s)) + self.assertEqual(self.loads(s, object_pairs_hook = lambda x: x), p) + od = self.loads(s, object_pairs_hook = OrderedDict) self.assertEqual(od, OrderedDict(p)) self.assertEqual(type(od), OrderedDict) # the object_pairs_hook takes priority over the object_hook - self.assertEqual(json.loads(s, + self.assertEqual(self.loads(s, object_pairs_hook = OrderedDict, object_hook = lambda x: None), OrderedDict(p)) def test_default_encoding(self): - self.assertEqual(json.loads(u'{"a": "\xe9"}'.encode('utf-8')), + self.assertEqual(self.loads(u'{"a": "\xe9"}'.encode('utf-8')), {'a': u'\xe9'}) def test_unicode_preservation(self): - self.assertEqual(type(json.loads(u'""')), unicode) - self.assertEqual(type(json.loads(u'"a"')), unicode) - self.assertEqual(type(json.loads(u'["a"]')[0]), unicode) + self.assertEqual(type(self.loads(u'""')), unicode) + self.assertEqual(type(self.loads(u'"a"')), unicode) + self.assertEqual(type(self.loads(u'["a"]')[0]), unicode) # Issue 10038. - self.assertEqual(type(json.loads('"foo"')), unicode) + self.assertEqual(type(self.loads('"foo"')), unicode) + + +class TestPyUnicode(TestUnicode, PyTest): pass +class TestCUnicode(TestUnicode, CTest): pass diff --git a/lib-python/2.7/lib-tk/Tix.py b/lib-python/2.7/lib-tk/Tix.py --- a/lib-python/2.7/lib-tk/Tix.py +++ b/lib-python/2.7/lib-tk/Tix.py @@ -163,7 +163,7 @@ extensions) exist, then the image type is chosen according to the depth of the X display: xbm images are chosen on monochrome displays and color images are chosen on color displays. By using - tix_ getimage, you can advoid hard coding the pathnames of the + tix_ getimage, you can avoid hard coding the pathnames of the image files in your application. When successful, this command returns the name of the newly created image, which can be used to configure the -image option of the Tk and Tix widgets. @@ -171,7 +171,7 @@ return self.tk.call('tix', 'getimage', name) def tix_option_get(self, name): - """Gets the options manitained by the Tix + """Gets the options maintained by the Tix scheme mechanism. Available options include: active_bg active_fg bg @@ -576,7 +576,7 @@ class ComboBox(TixWidget): """ComboBox - an Entry field with a dropdown menu. 
The user can select a - choice by either typing in the entry subwdget or selecting from the + choice by either typing in the entry subwidget or selecting from the listbox subwidget. Subwidget Class @@ -869,7 +869,7 @@ """HList - Hierarchy display widget can be used to display any data that have a hierarchical structure, for example, file system directory trees. The list entries are indented and connected by branch lines - according to their places in the hierachy. + according to their places in the hierarchy. Subwidgets - None""" @@ -1520,7 +1520,7 @@ self.tk.call(self._w, 'selection', 'set', first, last) class Tree(TixWidget): - """Tree - The tixTree widget can be used to display hierachical + """Tree - The tixTree widget can be used to display hierarchical data in a tree form. The user can adjust the view of the tree by opening or closing parts of the tree.""" diff --git a/lib-python/2.7/lib-tk/Tkinter.py b/lib-python/2.7/lib-tk/Tkinter.py --- a/lib-python/2.7/lib-tk/Tkinter.py +++ b/lib-python/2.7/lib-tk/Tkinter.py @@ -1660,7 +1660,7 @@ class Tk(Misc, Wm): """Toplevel widget of Tk which represents mostly the main window - of an appliation. It has an associated Tcl interpreter.""" + of an application. It has an associated Tcl interpreter.""" _w = '.' def __init__(self, screenName=None, baseName=None, className='Tk', useTk=1, sync=0, use=None): diff --git a/lib-python/2.7/lib-tk/test/test_ttk/test_functions.py b/lib-python/2.7/lib-tk/test/test_ttk/test_functions.py --- a/lib-python/2.7/lib-tk/test/test_ttk/test_functions.py +++ b/lib-python/2.7/lib-tk/test/test_ttk/test_functions.py @@ -136,7 +136,7 @@ # minimum acceptable for image type self.assertEqual(ttk._format_elemcreate('image', False, 'test'), ("test ", ())) - # specifiyng a state spec + # specifying a state spec self.assertEqual(ttk._format_elemcreate('image', False, 'test', ('', 'a')), ("test {} a", ())) # state spec with multiple states diff --git a/lib-python/2.7/lib-tk/ttk.py b/lib-python/2.7/lib-tk/ttk.py --- a/lib-python/2.7/lib-tk/ttk.py +++ b/lib-python/2.7/lib-tk/ttk.py @@ -707,7 +707,7 @@ textvariable, values, width """ # The "values" option may need special formatting, so leave to - # _format_optdict the responsability to format it + # _format_optdict the responsibility to format it if "values" in kw: kw["values"] = _format_optdict({'v': kw["values"]})[1] @@ -993,7 +993,7 @@ pane is either an integer index or the name of a managed subwindow. If kw is not given, returns a dict of the pane option values. If option is specified then the value for that option is returned. - Otherwise, sets the options to the correspoding values.""" + Otherwise, sets the options to the corresponding values.""" if option is not None: kw[option] = None return _val_or_dict(kw, self.tk.call, self._w, "pane", pane) diff --git a/lib-python/2.7/lib-tk/turtle.py b/lib-python/2.7/lib-tk/turtle.py --- a/lib-python/2.7/lib-tk/turtle.py +++ b/lib-python/2.7/lib-tk/turtle.py @@ -1385,7 +1385,7 @@ Optional argument: picname -- a string, name of a gif-file or "nopic". - If picname is a filename, set the corresponing image as background. + If picname is a filename, set the corresponding image as background. If picname is "nopic", delete backgroundimage, if present. If picname is None, return the filename of the current backgroundimage. 
@@ -1409,7 +1409,7 @@ Optional arguments: canvwidth -- positive integer, new width of canvas in pixels canvheight -- positive integer, new height of canvas in pixels - bg -- colorstring or color-tupel, new backgroundcolor + bg -- colorstring or color-tuple, new backgroundcolor If no arguments are given, return current (canvaswidth, canvasheight) Do not alter the drawing window. To observe hidden parts of @@ -3079,9 +3079,9 @@ fill="", width=ps) # Turtle now at position old, self._position = old - ## if undo is done during crating a polygon, the last vertex - ## will be deleted. if the polygon is entirel deleted, - ## creatigPoly will be set to False. + ## if undo is done during creating a polygon, the last vertex + ## will be deleted. if the polygon is entirely deleted, + ## creatingPoly will be set to False. ## Polygons created before the last one will not be affected by undo() if self._creatingPoly: if len(self._poly) > 0: @@ -3221,7 +3221,7 @@ def dot(self, size=None, *color): """Draw a dot with diameter size, using color. - Optional argumentS: + Optional arguments: size -- an integer >= 1 (if given) color -- a colorstring or a numeric color tuple @@ -3691,7 +3691,7 @@ class Turtle(RawTurtle): - """RawTurtle auto-crating (scrolled) canvas. + """RawTurtle auto-creating (scrolled) canvas. When a Turtle object is created or a function derived from some Turtle method is called a TurtleScreen object is automatically created. @@ -3731,7 +3731,7 @@ filename -- a string, used as filename default value is turtle_docstringdict - Has to be called explicitely, (not used by the turtle-graphics classes) + Has to be called explicitly, (not used by the turtle-graphics classes) The docstring dictionary will be written to the Python script .py It is intended to serve as a template for translation of the docstrings into different languages. 
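The json test-suite refactoring in this patch follows one pattern throughout: each test class stops calling json.loads/json.dumps directly and goes through self.json / self.loads / self.dumps instead, so the same class can be exercised once against the pure-Python implementation and once against the _json C accelerator via the PyTest and CTest base classes. A minimal, self-contained sketch of that mixin pattern (simplified: the real json.tests package imports a separate accelerated copy of the module, while here both names point at the same json module):

import unittest
import json as pyjson   # stand-in for the pure-Python implementation
cjson = pyjson          # stand-in; the real suite imports a C-accelerated copy

class PyTest(unittest.TestCase):
    json = pyjson
    loads = staticmethod(pyjson.loads)
    dumps = staticmethod(pyjson.dumps)

class CTest(unittest.TestCase):
    json = cjson
    loads = staticmethod(cjson.loads)
    dumps = staticmethod(cjson.dumps)

class TestDump(object):
    # Written once against self.dumps, exercised twice through the mixins below.
    def test_dumps(self):
        self.assertEqual(self.dumps({}), '{}')

class TestPyDump(TestDump, PyTest): pass
class TestCDump(TestDump, CTest): pass

if __name__ == '__main__':
    unittest.main()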
diff --git a/lib-python/2.7/lib2to3/__main__.py b/lib-python/2.7/lib2to3/__main__.py new file mode 100644 --- /dev/null +++ b/lib-python/2.7/lib2to3/__main__.py @@ -0,0 +1,4 @@ +import sys +from .main import main + +sys.exit(main("lib2to3.fixes")) diff --git a/lib-python/2.7/lib2to3/fixes/fix_itertools.py b/lib-python/2.7/lib2to3/fixes/fix_itertools.py --- a/lib-python/2.7/lib2to3/fixes/fix_itertools.py +++ b/lib-python/2.7/lib2to3/fixes/fix_itertools.py @@ -13,7 +13,7 @@ class FixItertools(fixer_base.BaseFix): BM_compatible = True - it_funcs = "('imap'|'ifilter'|'izip'|'ifilterfalse')" + it_funcs = "('imap'|'ifilter'|'izip'|'izip_longest'|'ifilterfalse')" PATTERN = """ power< it='itertools' trailer< @@ -28,7 +28,8 @@ def transform(self, node, results): prefix = None func = results['func'][0] - if 'it' in results and func.value != u'ifilterfalse': + if ('it' in results and + func.value not in (u'ifilterfalse', u'izip_longest')): dot, it = (results['dot'], results['it']) # Remove the 'itertools' prefix = it.prefix diff --git a/lib-python/2.7/lib2to3/fixes/fix_itertools_imports.py b/lib-python/2.7/lib2to3/fixes/fix_itertools_imports.py --- a/lib-python/2.7/lib2to3/fixes/fix_itertools_imports.py +++ b/lib-python/2.7/lib2to3/fixes/fix_itertools_imports.py @@ -31,9 +31,10 @@ if member_name in (u'imap', u'izip', u'ifilter'): child.value = None child.remove() - elif member_name == u'ifilterfalse': + elif member_name in (u'ifilterfalse', u'izip_longest'): node.changed() - name_node.value = u'filterfalse' + name_node.value = (u'filterfalse' if member_name[1] == u'f' + else u'zip_longest') # Make sure the import statement is still sane children = imports.children[:] or [imports] diff --git a/lib-python/2.7/lib2to3/fixes/fix_metaclass.py b/lib-python/2.7/lib2to3/fixes/fix_metaclass.py --- a/lib-python/2.7/lib2to3/fixes/fix_metaclass.py +++ b/lib-python/2.7/lib2to3/fixes/fix_metaclass.py @@ -48,7 +48,7 @@ """ for node in cls_node.children: if node.type == syms.suite: - # already in the prefered format, do nothing + # already in the preferred format, do nothing return # !%@#! 
oneliners have no suite node, we have to fake one up diff --git a/lib-python/2.7/lib2to3/fixes/fix_urllib.py b/lib-python/2.7/lib2to3/fixes/fix_urllib.py --- a/lib-python/2.7/lib2to3/fixes/fix_urllib.py +++ b/lib-python/2.7/lib2to3/fixes/fix_urllib.py @@ -12,7 +12,7 @@ MAPPING = {"urllib": [ ("urllib.request", - ["URLOpener", "FancyURLOpener", "urlretrieve", + ["URLopener", "FancyURLopener", "urlretrieve", "_urlopener", "urlopen", "urlcleanup", "pathname2url", "url2pathname"]), ("urllib.parse", diff --git a/lib-python/2.7/lib2to3/main.py b/lib-python/2.7/lib2to3/main.py --- a/lib-python/2.7/lib2to3/main.py +++ b/lib-python/2.7/lib2to3/main.py @@ -101,7 +101,7 @@ parser.add_option("-j", "--processes", action="store", default=1, type="int", help="Run 2to3 concurrently") parser.add_option("-x", "--nofix", action="append", default=[], - help="Prevent a fixer from being run.") + help="Prevent a transformation from being run") parser.add_option("-l", "--list-fixes", action="store_true", help="List available transformations") parser.add_option("-p", "--print-function", action="store_true", @@ -113,7 +113,7 @@ parser.add_option("-w", "--write", action="store_true", help="Write back modified files") parser.add_option("-n", "--nobackups", action="store_true", default=False, - help="Don't write backups for modified files.") + help="Don't write backups for modified files") # Parse command line arguments refactor_stdin = False diff --git a/lib-python/2.7/lib2to3/patcomp.py b/lib-python/2.7/lib2to3/patcomp.py --- a/lib-python/2.7/lib2to3/patcomp.py +++ b/lib-python/2.7/lib2to3/patcomp.py @@ -12,6 +12,7 @@ # Python imports import os +import StringIO # Fairly local imports from .pgen2 import driver, literals, token, tokenize, parse, grammar @@ -32,7 +33,7 @@ def tokenize_wrapper(input): """Tokenizes a string suppressing significant whitespace.""" skip = set((token.NEWLINE, token.INDENT, token.DEDENT)) - tokens = tokenize.generate_tokens(driver.generate_lines(input).next) + tokens = tokenize.generate_tokens(StringIO.StringIO(input).readline) for quintuple in tokens: type, value, start, end, line_text = quintuple if type not in skip: diff --git a/lib-python/2.7/lib2to3/pgen2/conv.py b/lib-python/2.7/lib2to3/pgen2/conv.py --- a/lib-python/2.7/lib2to3/pgen2/conv.py +++ b/lib-python/2.7/lib2to3/pgen2/conv.py @@ -51,7 +51,7 @@ self.finish_off() def parse_graminit_h(self, filename): - """Parse the .h file writen by pgen. (Internal) + """Parse the .h file written by pgen. (Internal) This file is a sequence of #define statements defining the nonterminals of the grammar as numbers. We build two tables @@ -82,7 +82,7 @@ return True def parse_graminit_c(self, filename): - """Parse the .c file writen by pgen. (Internal) + """Parse the .c file written by pgen. (Internal) The file looks as follows. 
The first two lines are always this: diff --git a/lib-python/2.7/lib2to3/pgen2/driver.py b/lib-python/2.7/lib2to3/pgen2/driver.py --- a/lib-python/2.7/lib2to3/pgen2/driver.py +++ b/lib-python/2.7/lib2to3/pgen2/driver.py @@ -19,6 +19,7 @@ import codecs import os import logging +import StringIO import sys # Pgen imports @@ -101,18 +102,10 @@ def parse_string(self, text, debug=False): """Parse a string and return the syntax tree.""" - tokens = tokenize.generate_tokens(generate_lines(text).next) + tokens = tokenize.generate_tokens(StringIO.StringIO(text).readline) return self.parse_tokens(tokens, debug) -def generate_lines(text): - """Generator that behaves like readline without using StringIO.""" - for line in text.splitlines(True): - yield line - while True: - yield "" - - def load_grammar(gt="Grammar.txt", gp=None, save=True, force=False, logger=None): """Load the grammar (maybe from a pickle).""" diff --git a/lib-python/2.7/lib2to3/pytree.py b/lib-python/2.7/lib2to3/pytree.py --- a/lib-python/2.7/lib2to3/pytree.py +++ b/lib-python/2.7/lib2to3/pytree.py @@ -658,8 +658,8 @@ content: optional sequence of subsequences of patterns; if absent, matches one node; if present, each subsequence is an alternative [*] - min: optinal minumum number of times to match, default 0 - max: optional maximum number of times tro match, default HUGE + min: optional minimum number of times to match, default 0 + max: optional maximum number of times to match, default HUGE name: optional name assigned to this match [*] Thus, if content is [[a, b, c], [d, e], [f, g, h]] this is @@ -743,9 +743,11 @@ else: # The reason for this is that hitting the recursion limit usually # results in some ugly messages about how RuntimeErrors are being - # ignored. - save_stderr = sys.stderr - sys.stderr = StringIO() + # ignored. We don't do this on non-CPython implementation because + # they don't have this problem. + if hasattr(sys, "getrefcount"): + save_stderr = sys.stderr + sys.stderr = StringIO() try: for count, r in self._recursive_matches(nodes, 0): if self.name: @@ -759,7 +761,8 @@ r[self.name] = nodes[:count] yield count, r finally: - sys.stderr = save_stderr + if hasattr(sys, "getrefcount"): + sys.stderr = save_stderr def _iterative_matches(self, nodes): """Helper to iteratively yield the matches.""" diff --git a/lib-python/2.7/lib2to3/refactor.py b/lib-python/2.7/lib2to3/refactor.py --- a/lib-python/2.7/lib2to3/refactor.py +++ b/lib-python/2.7/lib2to3/refactor.py @@ -302,13 +302,14 @@ Files and subdirectories starting with '.' are skipped. 
""" + py_ext = os.extsep + "py" for dirpath, dirnames, filenames in os.walk(dir_name): self.log_debug("Descending into %s", dirpath) dirnames.sort() filenames.sort() for name in filenames: - if not name.startswith(".") and \ - os.path.splitext(name)[1].endswith("py"): + if (not name.startswith(".") and + os.path.splitext(name)[1] == py_ext): fullname = os.path.join(dirpath, name) self.refactor_file(fullname, write, doctests_only) # Modify dirnames in-place to remove subdirs with leading dots diff --git a/lib-python/2.7/lib2to3/tests/data/py2_test_grammar.py b/lib-python/2.7/lib2to3/tests/data/py2_test_grammar.py --- a/lib-python/2.7/lib2to3/tests/data/py2_test_grammar.py +++ b/lib-python/2.7/lib2to3/tests/data/py2_test_grammar.py @@ -316,7 +316,7 @@ ### simple_stmt: small_stmt (';' small_stmt)* [';'] x = 1; pass; del x def foo(): - # verify statments that end with semi-colons + # verify statements that end with semi-colons x = 1; pass; del x; foo() diff --git a/lib-python/2.7/lib2to3/tests/data/py3_test_grammar.py b/lib-python/2.7/lib2to3/tests/data/py3_test_grammar.py --- a/lib-python/2.7/lib2to3/tests/data/py3_test_grammar.py +++ b/lib-python/2.7/lib2to3/tests/data/py3_test_grammar.py @@ -356,7 +356,7 @@ ### simple_stmt: small_stmt (';' small_stmt)* [';'] x = 1; pass; del x def foo(): - # verify statments that end with semi-colons + # verify statements that end with semi-colons x = 1; pass; del x; foo() diff --git a/lib-python/2.7/lib2to3/tests/test_fixers.py b/lib-python/2.7/lib2to3/tests/test_fixers.py --- a/lib-python/2.7/lib2to3/tests/test_fixers.py +++ b/lib-python/2.7/lib2to3/tests/test_fixers.py @@ -3623,16 +3623,24 @@ a = """%s(f, a)""" self.checkall(b, a) - def test_2(self): + def test_qualified(self): b = """itertools.ifilterfalse(a, b)""" a = """itertools.filterfalse(a, b)""" self.check(b, a) - def test_4(self): + b = """itertools.izip_longest(a, b)""" + a = """itertools.zip_longest(a, b)""" + self.check(b, a) + + def test_2(self): b = """ifilterfalse(a, b)""" a = """filterfalse(a, b)""" self.check(b, a) + b = """izip_longest(a, b)""" + a = """zip_longest(a, b)""" + self.check(b, a) + def test_space_1(self): b = """ %s(f, a)""" a = """ %s(f, a)""" @@ -3643,9 +3651,14 @@ a = """ itertools.filterfalse(a, b)""" self.check(b, a) + b = """ itertools.izip_longest(a, b)""" + a = """ itertools.zip_longest(a, b)""" + self.check(b, a) + def test_run_order(self): self.assert_runs_after('map', 'zip', 'filter') + class Test_itertools_imports(FixerTestCase): fixer = 'itertools_imports' @@ -3696,18 +3709,19 @@ s = "from itertools import bar as bang" self.unchanged(s) - def test_ifilter(self): - b = "from itertools import ifilterfalse" - a = "from itertools import filterfalse" - self.check(b, a) - - b = "from itertools import imap, ifilterfalse, foo" - a = "from itertools import filterfalse, foo" - self.check(b, a) - - b = "from itertools import bar, ifilterfalse, foo" - a = "from itertools import bar, filterfalse, foo" - self.check(b, a) + def test_ifilter_and_zip_longest(self): + for name in "filterfalse", "zip_longest": + b = "from itertools import i%s" % (name,) + a = "from itertools import %s" % (name,) + self.check(b, a) + + b = "from itertools import imap, i%s, foo" % (name,) + a = "from itertools import %s, foo" % (name,) + self.check(b, a) + + b = "from itertools import bar, i%s, foo" % (name,) + a = "from itertools import bar, %s, foo" % (name,) + self.check(b, a) def test_import_star(self): s = "from itertools import *" diff --git a/lib-python/2.7/lib2to3/tests/test_parser.py 
b/lib-python/2.7/lib2to3/tests/test_parser.py --- a/lib-python/2.7/lib2to3/tests/test_parser.py +++ b/lib-python/2.7/lib2to3/tests/test_parser.py @@ -19,6 +19,16 @@ # Local imports from lib2to3.pgen2 import tokenize from ..pgen2.parse import ParseError +from lib2to3.pygram import python_symbols as syms + + +class TestDriver(support.TestCase): + + def test_formfeed(self): + s = """print 1\n\x0Cprint 2\n""" + t = driver.parse_string(s) + self.assertEqual(t.children[0].children[0].type, syms.print_stmt) + self.assertEqual(t.children[1].children[0].type, syms.print_stmt) class GrammarTest(support.TestCase): diff --git a/lib-python/2.7/lib2to3/tests/test_refactor.py b/lib-python/2.7/lib2to3/tests/test_refactor.py --- a/lib-python/2.7/lib2to3/tests/test_refactor.py +++ b/lib-python/2.7/lib2to3/tests/test_refactor.py @@ -223,6 +223,7 @@ "hi.py", ".dumb", ".after.py", + "notpy.npy", "sappy"] expected = ["hi.py"] check(tree, expected) diff --git a/lib-python/2.7/lib2to3/tests/test_util.py b/lib-python/2.7/lib2to3/tests/test_util.py --- a/lib-python/2.7/lib2to3/tests/test_util.py +++ b/lib-python/2.7/lib2to3/tests/test_util.py @@ -568,8 +568,8 @@ def test_from_import(self): node = parse('bar()') - fixer_util.touch_import("cgi", "escape", node) - self.assertEqual(str(node), 'from cgi import escape\nbar()\n\n') + fixer_util.touch_import("html", "escape", node) + self.assertEqual(str(node), 'from html import escape\nbar()\n\n') def test_name_import(self): node = parse('bar()') diff --git a/lib-python/2.7/locale.py b/lib-python/2.7/locale.py --- a/lib-python/2.7/locale.py +++ b/lib-python/2.7/locale.py @@ -621,7 +621,7 @@ 'tactis': 'TACTIS', 'euc_jp': 'eucJP', 'euc_kr': 'eucKR', - 'utf_8': 'UTF8', + 'utf_8': 'UTF-8', 'koi8_r': 'KOI8-R', 'koi8_u': 'KOI8-U', # XXX This list is still incomplete. If you know more diff --git a/lib-python/2.7/logging/__init__.py b/lib-python/2.7/logging/__init__.py --- a/lib-python/2.7/logging/__init__.py +++ b/lib-python/2.7/logging/__init__.py @@ -1627,6 +1627,7 @@ h = wr() if h: try: + h.acquire() h.flush() h.close() except (IOError, ValueError): @@ -1635,6 +1636,8 @@ # references to them are still around at # application exit. pass + finally: + h.release() except: if raiseExceptions: raise diff --git a/lib-python/2.7/logging/config.py b/lib-python/2.7/logging/config.py --- a/lib-python/2.7/logging/config.py +++ b/lib-python/2.7/logging/config.py @@ -226,14 +226,14 @@ propagate = 1 logger = logging.getLogger(qn) if qn in existing: - i = existing.index(qn) + i = existing.index(qn) + 1 # start with the entry after qn prefixed = qn + "." 
pflen = len(prefixed) num_existing = len(existing) - i = i + 1 # look at the entry after qn - while (i < num_existing) and (existing[i][:pflen] == prefixed): - child_loggers.append(existing[i]) - i = i + 1 + while i < num_existing: + if existing[i][:pflen] == prefixed: + child_loggers.append(existing[i]) + i += 1 existing.remove(qn) if "level" in opts: level = cp.get(sectname, "level") diff --git a/lib-python/2.7/logging/handlers.py b/lib-python/2.7/logging/handlers.py --- a/lib-python/2.7/logging/handlers.py +++ b/lib-python/2.7/logging/handlers.py @@ -125,6 +125,7 @@ """ if self.stream: self.stream.close() + self.stream = None if self.backupCount > 0: for i in range(self.backupCount - 1, 0, -1): sfn = "%s.%d" % (self.baseFilename, i) @@ -324,6 +325,7 @@ """ if self.stream: self.stream.close() + self.stream = None # get the time that this sequence started at and make it a TimeTuple t = self.rolloverAt - self.interval if self.utc: diff --git a/lib-python/2.7/mailbox.py b/lib-python/2.7/mailbox.py --- a/lib-python/2.7/mailbox.py +++ b/lib-python/2.7/mailbox.py @@ -234,27 +234,35 @@ def __init__(self, dirname, factory=rfc822.Message, create=True): """Initialize a Maildir instance.""" Mailbox.__init__(self, dirname, factory, create) + self._paths = { + 'tmp': os.path.join(self._path, 'tmp'), + 'new': os.path.join(self._path, 'new'), + 'cur': os.path.join(self._path, 'cur'), + } if not os.path.exists(self._path): if create: os.mkdir(self._path, 0700) - os.mkdir(os.path.join(self._path, 'tmp'), 0700) - os.mkdir(os.path.join(self._path, 'new'), 0700) - os.mkdir(os.path.join(self._path, 'cur'), 0700) + for path in self._paths.values(): + os.mkdir(path, 0o700) else: raise NoSuchMailboxError(self._path) self._toc = {} - self._last_read = None # Records last time we read cur/new - # NOTE: we manually invalidate _last_read each time we do any - # modifications ourselves, otherwise we might get tripped up by - # bogus mtime behaviour on some systems (see issue #6896). 
+ self._toc_mtimes = {} + for subdir in ('cur', 'new'): + self._toc_mtimes[subdir] = os.path.getmtime(self._paths[subdir]) + self._last_read = time.time() # Records last time we read cur/new + self._skewfactor = 0.1 # Adjust if os/fs clocks are skewing def add(self, message): """Add message and return assigned key.""" tmp_file = self._create_tmp() try: self._dump_message(message, tmp_file) - finally: - _sync_close(tmp_file) + except BaseException: + tmp_file.close() + os.remove(tmp_file.name) + raise + _sync_close(tmp_file) if isinstance(message, MaildirMessage): subdir = message.get_subdir() suffix = self.colon + message.get_info() @@ -280,15 +288,11 @@ raise if isinstance(message, MaildirMessage): os.utime(dest, (os.path.getatime(dest), message.get_date())) - # Invalidate cached toc - self._last_read = None return uniq def remove(self, key): """Remove the keyed message; raise KeyError if it doesn't exist.""" os.remove(os.path.join(self._path, self._lookup(key))) - # Invalidate cached toc (only on success) - self._last_read = None def discard(self, key): """If the keyed message exists, remove it.""" @@ -323,8 +327,6 @@ if isinstance(message, MaildirMessage): os.utime(new_path, (os.path.getatime(new_path), message.get_date())) - # Invalidate cached toc - self._last_read = None def get_message(self, key): """Return a Message representation or raise a KeyError.""" @@ -380,8 +382,8 @@ def flush(self): """Write any pending changes to disk.""" # Maildir changes are always written immediately, so there's nothing - # to do except invalidate our cached toc. - self._last_read = None + # to do. + pass def lock(self): """Lock the mailbox.""" @@ -479,36 +481,39 @@ def _refresh(self): """Update table of contents mapping.""" - if self._last_read is not None: - for subdir in ('new', 'cur'): - mtime = os.path.getmtime(os.path.join(self._path, subdir)) - if mtime > self._last_read: - break - else: + # If it has been less than two seconds since the last _refresh() call, + # we have to unconditionally re-read the mailbox just in case it has + # been modified, because os.path.mtime() has a 2 sec resolution in the + # most common worst case (FAT) and a 1 sec resolution typically. This + # results in a few unnecessary re-reads when _refresh() is called + # multiple times in that interval, but once the clock ticks over, we + # will only re-read as needed. Because the filesystem might be being + # served by an independent system with its own clock, we record and + # compare with the mtimes from the filesystem. Because the other + # system's clock might be skewing relative to our clock, we add an + # extra delta to our wait. The default is one tenth second, but is an + # instance variable and so can be adjusted if dealing with a + # particularly skewed or irregular system. + if time.time() - self._last_read > 2 + self._skewfactor: + refresh = False + for subdir in self._toc_mtimes: + mtime = os.path.getmtime(self._paths[subdir]) + if mtime > self._toc_mtimes[subdir]: + refresh = True + self._toc_mtimes[subdir] = mtime + if not refresh: return - - # We record the current time - 1sec so that, if _refresh() is called - # again in the same second, we will always re-read the mailbox - # just in case it's been modified. (os.path.mtime() only has - # 1sec resolution.) This results in a few unnecessary re-reads - # when _refresh() is called multiple times in the same second, - # but once the clock ticks over, we will only re-read as needed. 
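# Illustrative, standalone sketch of the refresh guard introduced above (a
# simplification for this note, not the actual mailbox.Maildir code): skip
# the directory rescan unless the last rescan is older than the worst-case
# mtime resolution plus the clock-skew allowance, and even then rescan only
# if a cached cur/new mtime actually moved forward.
import os
import time

def needs_rescan(paths, toc_mtimes, last_read, skewfactor=0.1):
    # paths and toc_mtimes map 'cur'/'new' to the directory path and the
    # mtime recorded at the previous rescan; toc_mtimes is updated in place.
    if time.time() - last_read <= 2 + skewfactor:
        return True   # inside the resolution window: mtimes cannot be trusted
    changed = False
    for subdir, cached in toc_mtimes.items():
        mtime = os.path.getmtime(paths[subdir])
        if mtime > cached:
            toc_mtimes[subdir] = mtime
            changed = True
    return changed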
- now = time.time() - 1 - + # Refresh toc self._toc = {} - def update_dir (subdir): - path = os.path.join(self._path, subdir) + for subdir in self._toc_mtimes: + path = self._paths[subdir] for entry in os.listdir(path): p = os.path.join(path, entry) if os.path.isdir(p): continue uniq = entry.split(self.colon)[0] self._toc[uniq] = os.path.join(subdir, entry) - - update_dir('new') - update_dir('cur') - - self._last_read = now + self._last_read = time.time() def _lookup(self, key): """Use TOC to return subpath for given key, or raise a KeyError.""" @@ -551,7 +556,7 @@ f = open(self._path, 'wb+') else: raise NoSuchMailboxError(self._path) - elif e.errno == errno.EACCES: + elif e.errno in (errno.EACCES, errno.EROFS): f = open(self._path, 'rb') else: raise @@ -700,9 +705,14 @@ def _append_message(self, message): """Append message to mailbox and return (start, stop) offsets.""" self._file.seek(0, 2) - self._pre_message_hook(self._file) - offsets = self._install_message(message) - self._post_message_hook(self._file) + before = self._file.tell() + try: + self._pre_message_hook(self._file) + offsets = self._install_message(message) + self._post_message_hook(self._file) + except BaseException: + self._file.truncate(before) + raise self._file.flush() self._file_length = self._file.tell() # Record current length of mailbox return offsets @@ -868,18 +878,29 @@ new_key = max(keys) + 1 new_path = os.path.join(self._path, str(new_key)) f = _create_carefully(new_path) + closed = False try: if self._locked: _lock_file(f) try: - self._dump_message(message, f) + try: + self._dump_message(message, f) + except BaseException: + # Unlock and close so it can be deleted on Windows + if self._locked: + _unlock_file(f) + _sync_close(f) + closed = True + os.remove(new_path) + raise if isinstance(message, MHMessage): self._dump_sequences(message, new_key) finally: if self._locked: _unlock_file(f) finally: - _sync_close(f) + if not closed: + _sync_close(f) return new_key def remove(self, key): @@ -1886,7 +1907,7 @@ try: fcntl.lockf(f, fcntl.LOCK_EX | fcntl.LOCK_NB) except IOError, e: - if e.errno in (errno.EAGAIN, errno.EACCES): + if e.errno in (errno.EAGAIN, errno.EACCES, errno.EROFS): raise ExternalClashError('lockf: lock unavailable: %s' % f.name) else: @@ -1896,7 +1917,7 @@ pre_lock = _create_temporary(f.name + '.lock') pre_lock.close() except IOError, e: - if e.errno == errno.EACCES: + if e.errno in (errno.EACCES, errno.EROFS): return # Without write access, just skip dotlocking. 
else: raise diff --git a/lib-python/2.7/msilib/__init__.py b/lib-python/2.7/msilib/__init__.py --- a/lib-python/2.7/msilib/__init__.py +++ b/lib-python/2.7/msilib/__init__.py @@ -173,11 +173,10 @@ add_data(db, table, getattr(module, table)) def make_id(str): - #str = str.replace(".", "_") # colons are allowed - str = str.replace(" ", "_") - str = str.replace("-", "_") - if str[0] in string.digits: - str = "_"+str + identifier_chars = string.ascii_letters + string.digits + "._" + str = "".join([c if c in identifier_chars else "_" for c in str]) + if str[0] in (string.digits + "."): + str = "_" + str assert re.match("^[A-Za-z_][A-Za-z0-9_.]*$", str), "FILE"+str return str @@ -285,19 +284,28 @@ [(feature.id, component)]) def make_short(self, file): + oldfile = file + file = file.replace('+', '_') + file = ''.join(c for c in file if not c in ' "/\[]:;=,') parts = file.split(".") - if len(parts)>1: + if len(parts) > 1: + prefix = "".join(parts[:-1]).upper() suffix = parts[-1].upper() + if not prefix: + prefix = suffix + suffix = None else: + prefix = file.upper() suffix = None - prefix = parts[0].upper() - if len(prefix) <= 8 and (not suffix or len(suffix)<=3): + if len(parts) < 3 and len(prefix) <= 8 and file == oldfile and ( + not suffix or len(suffix) <= 3): if suffix: file = prefix+"."+suffix else: file = prefix - assert file not in self.short_names else: + file = None + if file is None or file in self.short_names: prefix = prefix[:6] if suffix: suffix = suffix[:3] diff --git a/lib-python/2.7/multiprocessing/__init__.py b/lib-python/2.7/multiprocessing/__init__.py --- a/lib-python/2.7/multiprocessing/__init__.py +++ b/lib-python/2.7/multiprocessing/__init__.py @@ -38,6 +38,7 @@ # HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT # LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY # OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __version__ = '0.70a1' @@ -115,8 +116,11 @@ except (ValueError, KeyError): num = 0 elif 'bsd' in sys.platform or sys.platform == 'darwin': + comm = '/sbin/sysctl -n hw.ncpu' + if sys.platform == 'darwin': + comm = '/usr' + comm try: - with os.popen('sysctl -n hw.ncpu') as p: + with os.popen(comm) as p: num = int(p.read()) except ValueError: num = 0 diff --git a/lib-python/2.7/multiprocessing/connection.py b/lib-python/2.7/multiprocessing/connection.py --- a/lib-python/2.7/multiprocessing/connection.py +++ b/lib-python/2.7/multiprocessing/connection.py @@ -3,7 +3,33 @@ # # multiprocessing/connection.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. 
+# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __all__ = [ 'Client', 'Listener', 'Pipe' ] diff --git a/lib-python/2.7/multiprocessing/dummy/__init__.py b/lib-python/2.7/multiprocessing/dummy/__init__.py --- a/lib-python/2.7/multiprocessing/dummy/__init__.py +++ b/lib-python/2.7/multiprocessing/dummy/__init__.py @@ -3,7 +3,33 @@ # # multiprocessing/dummy/__init__.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __all__ = [ diff --git a/lib-python/2.7/multiprocessing/dummy/connection.py b/lib-python/2.7/multiprocessing/dummy/connection.py --- a/lib-python/2.7/multiprocessing/dummy/connection.py +++ b/lib-python/2.7/multiprocessing/dummy/connection.py @@ -3,7 +3,33 @@ # # multiprocessing/dummy/connection.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. 
Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __all__ = [ 'Client', 'Listener', 'Pipe' ] diff --git a/lib-python/2.7/multiprocessing/forking.py b/lib-python/2.7/multiprocessing/forking.py --- a/lib-python/2.7/multiprocessing/forking.py +++ b/lib-python/2.7/multiprocessing/forking.py @@ -3,7 +3,33 @@ # # multiprocessing/forking.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # import os @@ -172,6 +198,7 @@ TERMINATE = 0x10000 WINEXE = (sys.platform == 'win32' and getattr(sys, 'frozen', False)) + WINSERVICE = sys.executable.lower().endswith("pythonservice.exe") exit = win32.ExitProcess close = win32.CloseHandle @@ -181,7 +208,7 @@ # People embedding Python want to modify it. 
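# Sketch of the idea behind the WINSERVICE constant added above (simplified,
# not the real multiprocessing.forking module): decide once, at import time,
# whether we are running inside pythonservice.exe, and reuse that flag both
# when picking the interpreter to spawn and when deciding whether to forward
# __main__'s file path to child processes.
import os
import sys

WINEXE = (sys.platform == 'win32' and getattr(sys, 'frozen', False))
WINSERVICE = sys.executable.lower().endswith('pythonservice.exe')

def python_executable():
    # pythonservice.exe is not usable as a plain interpreter, so fall back
    # to python.exe inside the installation prefix.
    if WINSERVICE:
        return os.path.join(sys.exec_prefix, 'python.exe')
    return sys.executable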
# - if sys.executable.lower().endswith('pythonservice.exe'): + if WINSERVICE: _python_exe = os.path.join(sys.exec_prefix, 'python.exe') else: _python_exe = sys.executable @@ -371,7 +398,7 @@ if _logger is not None: d['log_level'] = _logger.getEffectiveLevel() - if not WINEXE: + if not WINEXE and not WINSERVICE: main_path = getattr(sys.modules['__main__'], '__file__', None) if not main_path and sys.argv[0] not in ('', '-c'): main_path = sys.argv[0] diff --git a/lib-python/2.7/multiprocessing/heap.py b/lib-python/2.7/multiprocessing/heap.py --- a/lib-python/2.7/multiprocessing/heap.py +++ b/lib-python/2.7/multiprocessing/heap.py @@ -3,7 +3,33 @@ # # multiprocessing/heap.py # -# Copyright (c) 2007-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # import bisect diff --git a/lib-python/2.7/multiprocessing/managers.py b/lib-python/2.7/multiprocessing/managers.py --- a/lib-python/2.7/multiprocessing/managers.py +++ b/lib-python/2.7/multiprocessing/managers.py @@ -4,7 +4,33 @@ # # multiprocessing/managers.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. 
+# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __all__ = [ 'BaseManager', 'SyncManager', 'BaseProxy', 'Token' ] diff --git a/lib-python/2.7/multiprocessing/pool.py b/lib-python/2.7/multiprocessing/pool.py --- a/lib-python/2.7/multiprocessing/pool.py +++ b/lib-python/2.7/multiprocessing/pool.py @@ -3,7 +3,33 @@ # # multiprocessing/pool.py # -# Copyright (c) 2007-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __all__ = ['Pool'] @@ -269,6 +295,8 @@ while pool._worker_handler._state == RUN and pool._state == RUN: pool._maintain_pool() time.sleep(0.1) + # send sentinel to stop workers + pool._taskqueue.put(None) debug('worker handler exiting') @staticmethod @@ -387,7 +415,6 @@ if self._state == RUN: self._state = CLOSE self._worker_handler._state = CLOSE - self._taskqueue.put(None) def terminate(self): debug('terminating pool') @@ -421,7 +448,6 @@ worker_handler._state = TERMINATE task_handler._state = TERMINATE - taskqueue.put(None) # sentinel debug('helping task handler/workers to finish') cls._help_stuff_finish(inqueue, task_handler, len(pool)) @@ -431,6 +457,11 @@ result_handler._state = TERMINATE outqueue.put(None) # sentinel + # We must wait for the worker handler to exit before terminating + # workers because we don't want workers to be restarted behind our back. 
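# A minimal, runnable analogue of the shutdown-ordering fix above (assumed
# and heavily simplified; not the multiprocessing.pool internals): a worker
# handler keeps (re)starting workers while the pool runs, so terminate() must
# join that handler *before* stopping workers, otherwise a freshly restarted
# worker could appear behind the terminator's back.
import threading
import time

class TinyPool(object):
    def __init__(self):
        self._state = 'RUN'
        self.workers = []
        self._handler = threading.Thread(target=self._maintain)
        self._handler.start()

    def _maintain(self):
        while self._state == 'RUN':
            # a real pool would respawn dead workers here
            time.sleep(0.05)

    def terminate(self):
        self._state = 'TERMINATE'
        self._handler.join()       # wait for the handler to stop restarting...
        for w in self.workers:     # ...then it is safe to terminate the workers
            pass                   # (terminate/join each worker in a real pool)

pool = TinyPool()
pool.terminate()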
+ debug('joining worker handler') + worker_handler.join() + # Terminate workers which haven't already finished. if pool and hasattr(pool[0], 'terminate'): debug('terminating workers') diff --git a/lib-python/2.7/multiprocessing/process.py b/lib-python/2.7/multiprocessing/process.py --- a/lib-python/2.7/multiprocessing/process.py +++ b/lib-python/2.7/multiprocessing/process.py @@ -3,7 +3,33 @@ # # multiprocessing/process.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __all__ = ['Process', 'current_process', 'active_children'] diff --git a/lib-python/2.7/multiprocessing/queues.py b/lib-python/2.7/multiprocessing/queues.py --- a/lib-python/2.7/multiprocessing/queues.py +++ b/lib-python/2.7/multiprocessing/queues.py @@ -3,7 +3,33 @@ # # multiprocessing/queues.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. 
IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __all__ = ['Queue', 'SimpleQueue', 'JoinableQueue'] diff --git a/lib-python/2.7/multiprocessing/reduction.py b/lib-python/2.7/multiprocessing/reduction.py --- a/lib-python/2.7/multiprocessing/reduction.py +++ b/lib-python/2.7/multiprocessing/reduction.py @@ -4,7 +4,33 @@ # # multiprocessing/reduction.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __all__ = [] diff --git a/lib-python/2.7/multiprocessing/sharedctypes.py b/lib-python/2.7/multiprocessing/sharedctypes.py --- a/lib-python/2.7/multiprocessing/sharedctypes.py +++ b/lib-python/2.7/multiprocessing/sharedctypes.py @@ -3,7 +3,33 @@ # # multiprocessing/sharedctypes.py # -# Copyright (c) 2007-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. 
+# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # import sys @@ -52,9 +78,11 @@ Returns a ctypes array allocated from shared memory ''' type_ = typecode_to_type.get(typecode_or_type, typecode_or_type) - if isinstance(size_or_initializer, int): + if isinstance(size_or_initializer, (int, long)): type_ = type_ * size_or_initializer - return _new_value(type_) + obj = _new_value(type_) + ctypes.memset(ctypes.addressof(obj), 0, ctypes.sizeof(obj)) + return obj else: type_ = type_ * len(size_or_initializer) result = _new_value(type_) diff --git a/lib-python/2.7/multiprocessing/synchronize.py b/lib-python/2.7/multiprocessing/synchronize.py --- a/lib-python/2.7/multiprocessing/synchronize.py +++ b/lib-python/2.7/multiprocessing/synchronize.py @@ -3,7 +3,33 @@ # # multiprocessing/synchronize.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __all__ = [ diff --git a/lib-python/2.7/multiprocessing/util.py b/lib-python/2.7/multiprocessing/util.py --- a/lib-python/2.7/multiprocessing/util.py +++ b/lib-python/2.7/multiprocessing/util.py @@ -3,7 +3,33 @@ # # multiprocessing/util.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. 
+# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # import itertools diff --git a/lib-python/2.7/netrc.py b/lib-python/2.7/netrc.py --- a/lib-python/2.7/netrc.py +++ b/lib-python/2.7/netrc.py @@ -34,11 +34,19 @@ def _parse(self, file, fp): lexer = shlex.shlex(fp) lexer.wordchars += r"""!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~""" + lexer.commenters = lexer.commenters.replace('#', '') while 1: # Look for a machine, default, or macdef top-level keyword toplevel = tt = lexer.get_token() if not tt: break + elif tt[0] == '#': + # seek to beginning of comment, in case reading the token put + # us on a new line, and then skip the rest of the line. + pos = len(tt) + 1 + lexer.instream.seek(-pos, 1) + lexer.instream.readline() + continue elif tt == 'machine': entryname = lexer.get_token() elif tt == 'default': @@ -64,8 +72,8 @@ self.hosts[entryname] = {} while 1: tt = lexer.get_token() - if (tt=='' or tt == 'machine' or - tt == 'default' or tt =='macdef'): + if (tt.startswith('#') or + tt in {'', 'machine', 'default', 'macdef'}): if password: self.hosts[entryname] = (login, account, password) lexer.push_token(tt) diff --git a/lib-python/2.7/nntplib.py b/lib-python/2.7/nntplib.py --- a/lib-python/2.7/nntplib.py +++ b/lib-python/2.7/nntplib.py @@ -103,7 +103,7 @@ readermode is sometimes necessary if you are connecting to an NNTP server on the local machine and intend to call - reader-specific comamnds, such as `group'. If you get + reader-specific commands, such as `group'. If you get unexpected NNTPPermanentErrors, you might need to set readermode. """ diff --git a/lib-python/2.7/ntpath.py b/lib-python/2.7/ntpath.py --- a/lib-python/2.7/ntpath.py +++ b/lib-python/2.7/ntpath.py @@ -310,7 +310,7 @@ # - $varname is accepted. # - %varname% is accepted. # - varnames can be made out of letters, digits and the characters '_-' -# (though is not verifed in the ${varname} and %varname% cases) +# (though is not verified in the ${varname} and %varname% cases) # XXX With COMMAND.COM you can use any characters in a variable name, # XXX except '^|<>='. 
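For context on the netrc.py hunk above, here is a minimal sketch (not part of the patch) of exercising the patched parser on a file that contains a comment line; the host name, credentials, and the temporary-file handling are illustrative assumptions, and the expected tuple is what the patched _parse logic should produce rather than anything stated in the commit.

# Illustrative sketch only (not from the patch): feeds a commented .netrc-style
# file through netrc.netrc() as patched above.  File contents are made up.
import netrc, os, tempfile

sample = (
    "# credentials for the build machine\n"
    "machine example.com\n"
    "    login alice\n"
    "    password s3cret\n"
)

fd, path = tempfile.mkstemp()
try:
    with os.fdopen(fd, "w") as f:
        f.write(sample)
    auth = netrc.netrc(path).authenticators("example.com")
    # With '#' removed from the shlex commenters, the parser itself detects the
    # comment token, skips the rest of that line, and still finds the entry.
    print(auth)   # expected: ('alice', None, 's3cret')
finally:
    os.remove(path)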
diff --git a/lib-python/2.7/nturl2path.py b/lib-python/2.7/nturl2path.py --- a/lib-python/2.7/nturl2path.py +++ b/lib-python/2.7/nturl2path.py @@ -25,11 +25,14 @@ error = 'Bad URL: ' + url raise IOError, error drive = comp[0][-1].upper() + path = drive + ':' components = comp[1].split('/') - path = drive + ':' - for comp in components: + for comp in components: if comp: path = path + '\\' + urllib.unquote(comp) + # Issue #11474: url like '/C|/' should convert into 'C:\\' + if path.endswith(':') and url.endswith('/'): + path += '\\' return path def pathname2url(p): diff --git a/lib-python/2.7/numbers.py b/lib-python/2.7/numbers.py --- a/lib-python/2.7/numbers.py +++ b/lib-python/2.7/numbers.py @@ -63,7 +63,7 @@ @abstractproperty def imag(self): - """Retrieve the real component of this number. + """Retrieve the imaginary component of this number. This should subclass Real. """ diff --git a/lib-python/2.7/optparse.py b/lib-python/2.7/optparse.py --- a/lib-python/2.7/optparse.py +++ b/lib-python/2.7/optparse.py @@ -1131,6 +1131,11 @@ prog : string the name of the current program (to override os.path.basename(sys.argv[0])). + description : string + A paragraph of text giving a brief overview of your program. + optparse reformats this paragraph to fit the current terminal + width and prints it when the user requests help (after usage, + but before the list of options). epilog : string paragraph of help text to print after option help diff --git a/lib-python/2.7/pickletools.py b/lib-python/2.7/pickletools.py --- a/lib-python/2.7/pickletools.py +++ b/lib-python/2.7/pickletools.py @@ -1370,7 +1370,7 @@ proto=0, doc="""Read an object from the memo and push it on the stack. - The index of the memo object to push is given by the newline-teriminated + The index of the memo object to push is given by the newline-terminated decimal string following. BINGET and LONG_BINGET are space-optimized versions. """), diff --git a/lib-python/2.7/pkgutil.py b/lib-python/2.7/pkgutil.py --- a/lib-python/2.7/pkgutil.py +++ b/lib-python/2.7/pkgutil.py @@ -11,7 +11,7 @@ __all__ = [ 'get_importer', 'iter_importers', 'get_loader', 'find_loader', - 'walk_packages', 'iter_modules', + 'walk_packages', 'iter_modules', 'get_data', 'ImpImporter', 'ImpLoader', 'read_code', 'extend_path', ] diff --git a/lib-python/2.7/platform.py b/lib-python/2.7/platform.py --- a/lib-python/2.7/platform.py +++ b/lib-python/2.7/platform.py @@ -503,7 +503,7 @@ info = pipe.read() if pipe.close(): raise os.error,'command failed' - # XXX How can I supress shell errors from being written + # XXX How can I suppress shell errors from being written # to stderr ? except os.error,why: #print 'Command %s failed: %s' % (cmd,why) @@ -1448,9 +1448,10 @@ """ Returns a string identifying the Python implementation. Currently, the following implementations are identified: - 'CPython' (C implementation of Python), - 'IronPython' (.NET implementation of Python), - 'Jython' (Java implementation of Python). + 'CPython' (C implementation of Python), + 'IronPython' (.NET implementation of Python), + 'Jython' (Java implementation of Python), + 'PyPy' (Python implementation of Python). """ return _sys_version()[0] diff --git a/lib-python/2.7/pydoc.py b/lib-python/2.7/pydoc.py --- a/lib-python/2.7/pydoc.py +++ b/lib-python/2.7/pydoc.py @@ -156,7 +156,7 @@ no.append(x) return yes, no -def visiblename(name, all=None): +def visiblename(name, all=None, obj=None): """Decide whether to show documentation on a variable.""" # Certain special names are redundant. 
_hidden_names = ('__builtins__', '__doc__', '__file__', '__path__', @@ -164,6 +164,9 @@ if name in _hidden_names: return 0 # Private names are hidden, but special names are displayed. if name.startswith('__') and name.endswith('__'): return 1 + # Namedtuples have public fields and methods with a single leading underscore + if name.startswith('_') and hasattr(obj, '_fields'): + return 1 if all is not None: # only document that which the programmer exported in __all__ return name in all @@ -475,9 +478,9 @@ def multicolumn(self, list, format, cols=4): """Format a list of items into a multi-column list.""" result = '' - rows = (len(list)+cols-1)/cols + rows = (len(list)+cols-1)//cols for col in range(cols): - result = result + '' % (100/cols) + result = result + '' % (100//cols) for i in range(rows*col, rows*col+rows): if i < len(list): result = result + format(list[i]) + '
\n' @@ -627,7 +630,7 @@ # if __all__ exists, believe it. Otherwise use old heuristic. if (all is not None or (inspect.getmodule(value) or object) is object): - if visiblename(key, all): + if visiblename(key, all, object): classes.append((key, value)) cdict[key] = cdict[value] = '#' + key for key, value in classes: @@ -643,13 +646,13 @@ # if __all__ exists, believe it. Otherwise use old heuristic. if (all is not None or inspect.isbuiltin(value) or inspect.getmodule(value) is object): - if visiblename(key, all): + if visiblename(key, all, object): funcs.append((key, value)) fdict[key] = '#-' + key if inspect.isfunction(value): fdict[value] = fdict[key] data = [] for key, value in inspect.getmembers(object, isdata): - if visiblename(key, all): + if visiblename(key, all, object): data.append((key, value)) doc = self.markup(getdoc(object), self.preformat, fdict, cdict) @@ -773,7 +776,7 @@ push('\n') return attrs - attrs = filter(lambda data: visiblename(data[0]), + attrs = filter(lambda data: visiblename(data[0], obj=object), classify_class_attrs(object)) mdict = {} for key, kind, homecls, value in attrs: @@ -1042,18 +1045,18 @@ # if __all__ exists, believe it. Otherwise use old heuristic. if (all is not None or (inspect.getmodule(value) or object) is object): - if visiblename(key, all): + if visiblename(key, all, object): classes.append((key, value)) funcs = [] for key, value in inspect.getmembers(object, inspect.isroutine): # if __all__ exists, believe it. Otherwise use old heuristic. if (all is not None or inspect.isbuiltin(value) or inspect.getmodule(value) is object): - if visiblename(key, all): + if visiblename(key, all, object): funcs.append((key, value)) data = [] for key, value in inspect.getmembers(object, isdata): - if visiblename(key, all): + if visiblename(key, all, object): data.append((key, value)) modpkgs = [] @@ -1113,7 +1116,7 @@ result = result + self.section('CREDITS', str(object.__credits__)) return result - def docclass(self, object, name=None, mod=None): + def docclass(self, object, name=None, mod=None, *ignored): """Produce text documentation for a given class object.""" realname = object.__name__ name = name or realname @@ -1186,7 +1189,7 @@ name, mod, maxlen=70, doc=doc) + '\n') return attrs - attrs = filter(lambda data: visiblename(data[0]), + attrs = filter(lambda data: visiblename(data[0], obj=object), classify_class_attrs(object)) while attrs: if mro: @@ -1718,8 +1721,9 @@ return '' return '' - def __call__(self, request=None): - if request is not None: + _GoInteractive = object() + def __call__(self, request=_GoInteractive): + if request is not self._GoInteractive: self.help(request) else: self.intro() diff --git a/lib-python/2.7/pydoc_data/topics.py b/lib-python/2.7/pydoc_data/topics.py --- a/lib-python/2.7/pydoc_data/topics.py +++ b/lib-python/2.7/pydoc_data/topics.py @@ -1,16 +1,16 @@ -# Autogenerated by Sphinx on Sat Jul 3 08:52:04 2010 +# Autogenerated by Sphinx on Sat Jun 11 09:49:30 2011 topics = {'assert': u'\nThe ``assert`` statement\n************************\n\nAssert statements are a convenient way to insert debugging assertions\ninto a program:\n\n assert_stmt ::= "assert" expression ["," expression]\n\nThe simple form, ``assert expression``, is equivalent to\n\n if __debug__:\n if not expression: raise AssertionError\n\nThe extended form, ``assert expression1, expression2``, is equivalent\nto\n\n if __debug__:\n if not expression1: raise AssertionError(expression2)\n\nThese equivalences assume that ``__debug__`` and ``AssertionError``\nrefer to the 
built-in variables with those names. In the current\nimplementation, the built-in variable ``__debug__`` is ``True`` under\nnormal circumstances, ``False`` when optimization is requested\n(command line option -O). The current code generator emits no code\nfor an assert statement when optimization is requested at compile\ntime. Note that it is unnecessary to include the source code for the\nexpression that failed in the error message; it will be displayed as\npart of the stack trace.\n\nAssignments to ``__debug__`` are illegal. The value for the built-in\nvariable is determined when the interpreter starts.\n', - 'assignment': u'\nAssignment statements\n*********************\n\nAssignment statements are used to (re)bind names to values and to\nmodify attributes or items of mutable objects:\n\n assignment_stmt ::= (target_list "=")+ (expression_list | yield_expression)\n target_list ::= target ("," target)* [","]\n target ::= identifier\n | "(" target_list ")"\n | "[" target_list "]"\n | attributeref\n | subscription\n | slicing\n\n(See section *Primaries* for the syntax definitions for the last three\nsymbols.)\n\nAn assignment statement evaluates the expression list (remember that\nthis can be a single expression or a comma-separated list, the latter\nyielding a tuple) and assigns the single resulting object to each of\nthe target lists, from left to right.\n\nAssignment is defined recursively depending on the form of the target\n(list). When a target is part of a mutable object (an attribute\nreference, subscription or slicing), the mutable object must\nultimately perform the assignment and decide about its validity, and\nmay raise an exception if the assignment is unacceptable. The rules\nobserved by various types and the exceptions raised are given with the\ndefinition of the object types (see section *The standard type\nhierarchy*).\n\nAssignment of an object to a target list is recursively defined as\nfollows.\n\n* If the target list is a single target: The object is assigned to\n that target.\n\n* If the target list is a comma-separated list of targets: The object\n must be an iterable with the same number of items as there are\n targets in the target list, and the items are assigned, from left to\n right, to the corresponding targets. (This rule is relaxed as of\n Python 1.5; in earlier versions, the object had to be a tuple.\n Since strings are sequences, an assignment like ``a, b = "xy"`` is\n now legal as long as the string has the right length.)\n\nAssignment of an object to a single target is recursively defined as\nfollows.\n\n* If the target is an identifier (name):\n\n * If the name does not occur in a ``global`` statement in the\n current code block: the name is bound to the object in the current\n local namespace.\n\n * Otherwise: the name is bound to the object in the current global\n namespace.\n\n The name is rebound if it was already bound. This may cause the\n reference count for the object previously bound to the name to reach\n zero, causing the object to be deallocated and its destructor (if it\n has one) to be called.\n\n* If the target is a target list enclosed in parentheses or in square\n brackets: The object must be an iterable with the same number of\n items as there are targets in the target list, and its items are\n assigned, from left to right, to the corresponding targets.\n\n* If the target is an attribute reference: The primary expression in\n the reference is evaluated. 
It should yield an object with\n assignable attributes; if this is not the case, ``TypeError`` is\n raised. That object is then asked to assign the assigned object to\n the given attribute; if it cannot perform the assignment, it raises\n an exception (usually but not necessarily ``AttributeError``).\n\n Note: If the object is a class instance and the attribute reference\n occurs on both sides of the assignment operator, the RHS expression,\n ``a.x`` can access either an instance attribute or (if no instance\n attribute exists) a class attribute. The LHS target ``a.x`` is\n always set as an instance attribute, creating it if necessary.\n Thus, the two occurrences of ``a.x`` do not necessarily refer to the\n same attribute: if the RHS expression refers to a class attribute,\n the LHS creates a new instance attribute as the target of the\n assignment:\n\n class Cls:\n x = 3 # class variable\n inst = Cls()\n inst.x = inst.x + 1 # writes inst.x as 4 leaving Cls.x as 3\n\n This description does not necessarily apply to descriptor\n attributes, such as properties created with ``property()``.\n\n* If the target is a subscription: The primary expression in the\n reference is evaluated. It should yield either a mutable sequence\n object (such as a list) or a mapping object (such as a dictionary).\n Next, the subscript expression is evaluated.\n\n If the primary is a mutable sequence object (such as a list), the\n subscript must yield a plain integer. If it is negative, the\n sequence\'s length is added to it. The resulting value must be a\n nonnegative integer less than the sequence\'s length, and the\n sequence is asked to assign the assigned object to its item with\n that index. If the index is out of range, ``IndexError`` is raised\n (assignment to a subscripted sequence cannot add new items to a\n list).\n\n If the primary is a mapping object (such as a dictionary), the\n subscript must have a type compatible with the mapping\'s key type,\n and the mapping is then asked to create a key/datum pair which maps\n the subscript to the assigned object. This can either replace an\n existing key/value pair with the same key value, or insert a new\n key/value pair (if no key with the same value existed).\n\n* If the target is a slicing: The primary expression in the reference\n is evaluated. It should yield a mutable sequence object (such as a\n list). The assigned object should be a sequence object of the same\n type. Next, the lower and upper bound expressions are evaluated,\n insofar they are present; defaults are zero and the sequence\'s\n length. The bounds should evaluate to (small) integers. If either\n bound is negative, the sequence\'s length is added to it. The\n resulting bounds are clipped to lie between zero and the sequence\'s\n length, inclusive. Finally, the sequence object is asked to replace\n the slice with the items of the assigned sequence. 
The length of\n the slice may be different from the length of the assigned sequence,\n thus changing the length of the target sequence, if the object\n allows it.\n\n**CPython implementation detail:** In the current implementation, the\nsyntax for targets is taken to be the same as for expressions, and\ninvalid syntax is rejected during the code generation phase, causing\nless detailed error messages.\n\nWARNING: Although the definition of assignment implies that overlaps\nbetween the left-hand side and the right-hand side are \'safe\' (for\nexample ``a, b = b, a`` swaps two variables), overlaps *within* the\ncollection of assigned-to variables are not safe! For instance, the\nfollowing program prints ``[0, 2]``:\n\n x = [0, 1]\n i = 0\n i, x[i] = 1, 2\n print x\n\n\nAugmented assignment statements\n===============================\n\nAugmented assignment is the combination, in a single statement, of a\nbinary operation and an assignment statement:\n\n augmented_assignment_stmt ::= augtarget augop (expression_list | yield_expression)\n augtarget ::= identifier | attributeref | subscription | slicing\n augop ::= "+=" | "-=" | "*=" | "/=" | "//=" | "%=" | "**="\n | ">>=" | "<<=" | "&=" | "^=" | "|="\n\n(See section *Primaries* for the syntax definitions for the last three\nsymbols.)\n\nAn augmented assignment evaluates the target (which, unlike normal\nassignment statements, cannot be an unpacking) and the expression\nlist, performs the binary operation specific to the type of assignment\non the two operands, and assigns the result to the original target.\nThe target is only evaluated once.\n\nAn augmented assignment expression like ``x += 1`` can be rewritten as\n``x = x + 1`` to achieve a similar, but not exactly equal effect. In\nthe augmented version, ``x`` is only evaluated once. Also, when\npossible, the actual operation is performed *in-place*, meaning that\nrather than creating a new object and assigning that to the target,\nthe old object is modified instead.\n\nWith the exception of assigning to tuples and multiple targets in a\nsingle statement, the assignment done by augmented assignment\nstatements is handled the same way as normal assignments. Similarly,\nwith the exception of the possible *in-place* behavior, the binary\noperation performed by augmented assignment is the same as the normal\nbinary operations.\n\nFor targets which are attribute references, the same *caveat about\nclass and instance attributes* applies as for regular assignments.\n', + 'assignment': u'\nAssignment statements\n*********************\n\nAssignment statements are used to (re)bind names to values and to\nmodify attributes or items of mutable objects:\n\n assignment_stmt ::= (target_list "=")+ (expression_list | yield_expression)\n target_list ::= target ("," target)* [","]\n target ::= identifier\n | "(" target_list ")"\n | "[" target_list "]"\n | attributeref\n | subscription\n | slicing\n\n(See section *Primaries* for the syntax definitions for the last three\nsymbols.)\n\nAn assignment statement evaluates the expression list (remember that\nthis can be a single expression or a comma-separated list, the latter\nyielding a tuple) and assigns the single resulting object to each of\nthe target lists, from left to right.\n\nAssignment is defined recursively depending on the form of the target\n(list). 
When a target is part of a mutable object (an attribute\nreference, subscription or slicing), the mutable object must\nultimately perform the assignment and decide about its validity, and\nmay raise an exception if the assignment is unacceptable. The rules\nobserved by various types and the exceptions raised are given with the\ndefinition of the object types (see section *The standard type\nhierarchy*).\n\nAssignment of an object to a target list is recursively defined as\nfollows.\n\n* If the target list is a single target: The object is assigned to\n that target.\n\n* If the target list is a comma-separated list of targets: The object\n must be an iterable with the same number of items as there are\n targets in the target list, and the items are assigned, from left to\n right, to the corresponding targets.\n\nAssignment of an object to a single target is recursively defined as\nfollows.\n\n* If the target is an identifier (name):\n\n * If the name does not occur in a ``global`` statement in the\n current code block: the name is bound to the object in the current\n local namespace.\n\n * Otherwise: the name is bound to the object in the current global\n namespace.\n\n The name is rebound if it was already bound. This may cause the\n reference count for the object previously bound to the name to reach\n zero, causing the object to be deallocated and its destructor (if it\n has one) to be called.\n\n* If the target is a target list enclosed in parentheses or in square\n brackets: The object must be an iterable with the same number of\n items as there are targets in the target list, and its items are\n assigned, from left to right, to the corresponding targets.\n\n* If the target is an attribute reference: The primary expression in\n the reference is evaluated. It should yield an object with\n assignable attributes; if this is not the case, ``TypeError`` is\n raised. That object is then asked to assign the assigned object to\n the given attribute; if it cannot perform the assignment, it raises\n an exception (usually but not necessarily ``AttributeError``).\n\n Note: If the object is a class instance and the attribute reference\n occurs on both sides of the assignment operator, the RHS expression,\n ``a.x`` can access either an instance attribute or (if no instance\n attribute exists) a class attribute. The LHS target ``a.x`` is\n always set as an instance attribute, creating it if necessary.\n Thus, the two occurrences of ``a.x`` do not necessarily refer to the\n same attribute: if the RHS expression refers to a class attribute,\n the LHS creates a new instance attribute as the target of the\n assignment:\n\n class Cls:\n x = 3 # class variable\n inst = Cls()\n inst.x = inst.x + 1 # writes inst.x as 4 leaving Cls.x as 3\n\n This description does not necessarily apply to descriptor\n attributes, such as properties created with ``property()``.\n\n* If the target is a subscription: The primary expression in the\n reference is evaluated. It should yield either a mutable sequence\n object (such as a list) or a mapping object (such as a dictionary).\n Next, the subscript expression is evaluated.\n\n If the primary is a mutable sequence object (such as a list), the\n subscript must yield a plain integer. If it is negative, the\n sequence\'s length is added to it. The resulting value must be a\n nonnegative integer less than the sequence\'s length, and the\n sequence is asked to assign the assigned object to its item with\n that index. 
If the index is out of range, ``IndexError`` is raised\n (assignment to a subscripted sequence cannot add new items to a\n list).\n\n If the primary is a mapping object (such as a dictionary), the\n subscript must have a type compatible with the mapping\'s key type,\n and the mapping is then asked to create a key/datum pair which maps\n the subscript to the assigned object. This can either replace an\n existing key/value pair with the same key value, or insert a new\n key/value pair (if no key with the same value existed).\n\n* If the target is a slicing: The primary expression in the reference\n is evaluated. It should yield a mutable sequence object (such as a\n list). The assigned object should be a sequence object of the same\n type. Next, the lower and upper bound expressions are evaluated,\n insofar they are present; defaults are zero and the sequence\'s\n length. The bounds should evaluate to (small) integers. If either\n bound is negative, the sequence\'s length is added to it. The\n resulting bounds are clipped to lie between zero and the sequence\'s\n length, inclusive. Finally, the sequence object is asked to replace\n the slice with the items of the assigned sequence. The length of\n the slice may be different from the length of the assigned sequence,\n thus changing the length of the target sequence, if the object\n allows it.\n\n**CPython implementation detail:** In the current implementation, the\nsyntax for targets is taken to be the same as for expressions, and\ninvalid syntax is rejected during the code generation phase, causing\nless detailed error messages.\n\nWARNING: Although the definition of assignment implies that overlaps\nbetween the left-hand side and the right-hand side are \'safe\' (for\nexample ``a, b = b, a`` swaps two variables), overlaps *within* the\ncollection of assigned-to variables are not safe! For instance, the\nfollowing program prints ``[0, 2]``:\n\n x = [0, 1]\n i = 0\n i, x[i] = 1, 2\n print x\n\n\nAugmented assignment statements\n===============================\n\nAugmented assignment is the combination, in a single statement, of a\nbinary operation and an assignment statement:\n\n augmented_assignment_stmt ::= augtarget augop (expression_list | yield_expression)\n augtarget ::= identifier | attributeref | subscription | slicing\n augop ::= "+=" | "-=" | "*=" | "/=" | "//=" | "%=" | "**="\n | ">>=" | "<<=" | "&=" | "^=" | "|="\n\n(See section *Primaries* for the syntax definitions for the last three\nsymbols.)\n\nAn augmented assignment evaluates the target (which, unlike normal\nassignment statements, cannot be an unpacking) and the expression\nlist, performs the binary operation specific to the type of assignment\non the two operands, and assigns the result to the original target.\nThe target is only evaluated once.\n\nAn augmented assignment expression like ``x += 1`` can be rewritten as\n``x = x + 1`` to achieve a similar, but not exactly equal effect. In\nthe augmented version, ``x`` is only evaluated once. Also, when\npossible, the actual operation is performed *in-place*, meaning that\nrather than creating a new object and assigning that to the target,\nthe old object is modified instead.\n\nWith the exception of assigning to tuples and multiple targets in a\nsingle statement, the assignment done by augmented assignment\nstatements is handled the same way as normal assignments. 
Similarly,\nwith the exception of the possible *in-place* behavior, the binary\noperation performed by augmented assignment is the same as the normal\nbinary operations.\n\nFor targets which are attribute references, the same *caveat about\nclass and instance attributes* applies as for regular assignments.\n', 'atom-identifiers': u'\nIdentifiers (Names)\n*******************\n\nAn identifier occurring as an atom is a name. See section\n*Identifiers and keywords* for lexical definition and section *Naming\nand binding* for documentation of naming and binding.\n\nWhen the name is bound to an object, evaluation of the atom yields\nthat object. When a name is not bound, an attempt to evaluate it\nraises a ``NameError`` exception.\n\n**Private name mangling:** When an identifier that textually occurs in\na class definition begins with two or more underscore characters and\ndoes not end in two or more underscores, it is considered a *private\nname* of that class. Private names are transformed to a longer form\nbefore code is generated for them. The transformation inserts the\nclass name in front of the name, with leading underscores removed, and\na single underscore inserted in front of the class name. For example,\nthe identifier ``__spam`` occurring in a class named ``Ham`` will be\ntransformed to ``_Ham__spam``. This transformation is independent of\nthe syntactical context in which the identifier is used. If the\ntransformed name is extremely long (longer than 255 characters),\nimplementation defined truncation may happen. If the class name\nconsists only of underscores, no transformation is done.\n', 'atom-literals': u"\nLiterals\n********\n\nPython supports string literals and various numeric literals:\n\n literal ::= stringliteral | integer | longinteger\n | floatnumber | imagnumber\n\nEvaluation of a literal yields an object of the given type (string,\ninteger, long integer, floating point number, complex number) with the\ngiven value. The value may be approximated in the case of floating\npoint and imaginary (complex) literals. See section *Literals* for\ndetails.\n\nAll literals correspond to immutable data types, and hence the\nobject's identity is less important than its value. Multiple\nevaluations of literals with the same value (either the same\noccurrence in the program text or a different occurrence) may obtain\nthe same object or a different object with the same value.\n", - 'attribute-access': u'\nCustomizing attribute access\n****************************\n\nThe following methods can be defined to customize the meaning of\nattribute access (use of, assignment to, or deletion of ``x.name``)\nfor class instances.\n\nobject.__getattr__(self, name)\n\n Called when an attribute lookup has not found the attribute in the\n usual places (i.e. it is not an instance attribute nor is it found\n in the class tree for ``self``). ``name`` is the attribute name.\n This method should return the (computed) attribute value or raise\n an ``AttributeError`` exception.\n\n Note that if the attribute is found through the normal mechanism,\n ``__getattr__()`` is not called. (This is an intentional asymmetry\n between ``__getattr__()`` and ``__setattr__()``.) This is done both\n for efficiency reasons and because otherwise ``__getattr__()``\n would have no way to access other attributes of the instance. Note\n that at least for instance variables, you can fake total control by\n not inserting any values in the instance attribute dictionary (but\n instead inserting them in another object). 
See the\n ``__getattribute__()`` method below for a way to actually get total\n control in new-style classes.\n\nobject.__setattr__(self, name, value)\n\n Called when an attribute assignment is attempted. This is called\n instead of the normal mechanism (i.e. store the value in the\n instance dictionary). *name* is the attribute name, *value* is the\n value to be assigned to it.\n\n If ``__setattr__()`` wants to assign to an instance attribute, it\n should not simply execute ``self.name = value`` --- this would\n cause a recursive call to itself. Instead, it should insert the\n value in the dictionary of instance attributes, e.g.,\n ``self.__dict__[name] = value``. For new-style classes, rather\n than accessing the instance dictionary, it should call the base\n class method with the same name, for example,\n ``object.__setattr__(self, name, value)``.\n\nobject.__delattr__(self, name)\n\n Like ``__setattr__()`` but for attribute deletion instead of\n assignment. This should only be implemented if ``del obj.name`` is\n meaningful for the object.\n\n\nMore attribute access for new-style classes\n===========================================\n\nThe following methods only apply to new-style classes.\n\nobject.__getattribute__(self, name)\n\n Called unconditionally to implement attribute accesses for\n instances of the class. If the class also defines\n ``__getattr__()``, the latter will not be called unless\n ``__getattribute__()`` either calls it explicitly or raises an\n ``AttributeError``. This method should return the (computed)\n attribute value or raise an ``AttributeError`` exception. In order\n to avoid infinite recursion in this method, its implementation\n should always call the base class method with the same name to\n access any attributes it needs, for example,\n ``object.__getattribute__(self, name)``.\n\n Note: This method may still be bypassed when looking up special methods\n as the result of implicit invocation via language syntax or\n built-in functions. See *Special method lookup for new-style\n classes*.\n\n\nImplementing Descriptors\n========================\n\nThe following methods only apply when an instance of the class\ncontaining the method (a so-called *descriptor* class) appears in the\nclass dictionary of another new-style class, known as the *owner*\nclass. In the examples below, "the attribute" refers to the attribute\nwhose name is the key of the property in the owner class\'\n``__dict__``. Descriptors can only be implemented as new-style\nclasses themselves.\n\nobject.__get__(self, instance, owner)\n\n Called to get the attribute of the owner class (class attribute\n access) or of an instance of that class (instance attribute\n access). *owner* is always the owner class, while *instance* is the\n instance that the attribute was accessed through, or ``None`` when\n the attribute is accessed through the *owner*. This method should\n return the (computed) attribute value or raise an\n ``AttributeError`` exception.\n\nobject.__set__(self, instance, value)\n\n Called to set the attribute on an instance *instance* of the owner\n class to a new value, *value*.\n\nobject.__delete__(self, instance)\n\n Called to delete the attribute on an instance *instance* of the\n owner class.\n\n\nInvoking Descriptors\n====================\n\nIn general, a descriptor is an object attribute with "binding\nbehavior", one whose attribute access has been overridden by methods\nin the descriptor protocol: ``__get__()``, ``__set__()``, and\n``__delete__()``. 
If any of those methods are defined for an object,\nit is said to be a descriptor.\n\nThe default behavior for attribute access is to get, set, or delete\nthe attribute from an object\'s dictionary. For instance, ``a.x`` has a\nlookup chain starting with ``a.__dict__[\'x\']``, then\n``type(a).__dict__[\'x\']``, and continuing through the base classes of\n``type(a)`` excluding metaclasses.\n\nHowever, if the looked-up value is an object defining one of the\ndescriptor methods, then Python may override the default behavior and\ninvoke the descriptor method instead. Where this occurs in the\nprecedence chain depends on which descriptor methods were defined and\nhow they were called. Note that descriptors are only invoked for new\nstyle objects or classes (ones that subclass ``object()`` or\n``type()``).\n\nThe starting point for descriptor invocation is a binding, ``a.x``.\nHow the arguments are assembled depends on ``a``:\n\nDirect Call\n The simplest and least common call is when user code directly\n invokes a descriptor method: ``x.__get__(a)``.\n\nInstance Binding\n If binding to a new-style object instance, ``a.x`` is transformed\n into the call: ``type(a).__dict__[\'x\'].__get__(a, type(a))``.\n\nClass Binding\n If binding to a new-style class, ``A.x`` is transformed into the\n call: ``A.__dict__[\'x\'].__get__(None, A)``.\n\nSuper Binding\n If ``a`` is an instance of ``super``, then the binding ``super(B,\n obj).m()`` searches ``obj.__class__.__mro__`` for the base class\n ``A`` immediately preceding ``B`` and then invokes the descriptor\n with the call: ``A.__dict__[\'m\'].__get__(obj, A)``.\n\nFor instance bindings, the precedence of descriptor invocation depends\non the which descriptor methods are defined. A descriptor can define\nany combination of ``__get__()``, ``__set__()`` and ``__delete__()``.\nIf it does not define ``__get__()``, then accessing the attribute will\nreturn the descriptor object itself unless there is a value in the\nobject\'s instance dictionary. If the descriptor defines ``__set__()``\nand/or ``__delete__()``, it is a data descriptor; if it defines\nneither, it is a non-data descriptor. Normally, data descriptors\ndefine both ``__get__()`` and ``__set__()``, while non-data\ndescriptors have just the ``__get__()`` method. Data descriptors with\n``__set__()`` and ``__get__()`` defined always override a redefinition\nin an instance dictionary. In contrast, non-data descriptors can be\noverridden by instances.\n\nPython methods (including ``staticmethod()`` and ``classmethod()``)\nare implemented as non-data descriptors. Accordingly, instances can\nredefine and override methods. This allows individual instances to\nacquire behaviors that differ from other instances of the same class.\n\nThe ``property()`` function is implemented as a data descriptor.\nAccordingly, instances cannot override the behavior of a property.\n\n\n__slots__\n=========\n\nBy default, instances of both old and new-style classes have a\ndictionary for attribute storage. This wastes space for objects\nhaving very few instance variables. The space consumption can become\nacute when creating large numbers of instances.\n\nThe default can be overridden by defining *__slots__* in a new-style\nclass definition. The *__slots__* declaration takes a sequence of\ninstance variables and reserves just enough space in each instance to\nhold a value for each variable. 
Space is saved because *__dict__* is\nnot created for each instance.\n\n__slots__\n\n This class variable can be assigned a string, iterable, or sequence\n of strings with variable names used by instances. If defined in a\n new-style class, *__slots__* reserves space for the declared\n variables and prevents the automatic creation of *__dict__* and\n *__weakref__* for each instance.\n\n New in version 2.2.\n\nNotes on using *__slots__*\n\n* When inheriting from a class without *__slots__*, the *__dict__*\n attribute of that class will always be accessible, so a *__slots__*\n definition in the subclass is meaningless.\n\n* Without a *__dict__* variable, instances cannot be assigned new\n variables not listed in the *__slots__* definition. Attempts to\n assign to an unlisted variable name raises ``AttributeError``. If\n dynamic assignment of new variables is desired, then add\n ``\'__dict__\'`` to the sequence of strings in the *__slots__*\n declaration.\n\n Changed in version 2.3: Previously, adding ``\'__dict__\'`` to the\n *__slots__* declaration would not enable the assignment of new\n attributes not specifically listed in the sequence of instance\n variable names.\n\n* Without a *__weakref__* variable for each instance, classes defining\n *__slots__* do not support weak references to its instances. If weak\n reference support is needed, then add ``\'__weakref__\'`` to the\n sequence of strings in the *__slots__* declaration.\n\n Changed in version 2.3: Previously, adding ``\'__weakref__\'`` to the\n *__slots__* declaration would not enable support for weak\n references.\n\n* *__slots__* are implemented at the class level by creating\n descriptors (*Implementing Descriptors*) for each variable name. As\n a result, class attributes cannot be used to set default values for\n instance variables defined by *__slots__*; otherwise, the class\n attribute would overwrite the descriptor assignment.\n\n* The action of a *__slots__* declaration is limited to the class\n where it is defined. As a result, subclasses will have a *__dict__*\n unless they also define *__slots__* (which must only contain names\n of any *additional* slots).\n\n* If a class defines a slot also defined in a base class, the instance\n variable defined by the base class slot is inaccessible (except by\n retrieving its descriptor directly from the base class). This\n renders the meaning of the program undefined. In the future, a\n check may be added to prevent this.\n\n* Nonempty *__slots__* does not work for classes derived from\n "variable-length" built-in types such as ``long``, ``str`` and\n ``tuple``.\n\n* Any non-string iterable may be assigned to *__slots__*. Mappings may\n also be used; however, in the future, special meaning may be\n assigned to the values corresponding to each key.\n\n* *__class__* assignment works only if both classes have the same\n *__slots__*.\n\n Changed in version 2.6: Previously, *__class__* assignment raised an\n error if either new or old class had *__slots__*.\n', + 'attribute-access': u'\nCustomizing attribute access\n****************************\n\nThe following methods can be defined to customize the meaning of\nattribute access (use of, assignment to, or deletion of ``x.name``)\nfor class instances.\n\nobject.__getattr__(self, name)\n\n Called when an attribute lookup has not found the attribute in the\n usual places (i.e. it is not an instance attribute nor is it found\n in the class tree for ``self``). 
``name`` is the attribute name.\n This method should return the (computed) attribute value or raise\n an ``AttributeError`` exception.\n\n Note that if the attribute is found through the normal mechanism,\n ``__getattr__()`` is not called. (This is an intentional asymmetry\n between ``__getattr__()`` and ``__setattr__()``.) This is done both\n for efficiency reasons and because otherwise ``__getattr__()``\n would have no way to access other attributes of the instance. Note\n that at least for instance variables, you can fake total control by\n not inserting any values in the instance attribute dictionary (but\n instead inserting them in another object). See the\n ``__getattribute__()`` method below for a way to actually get total\n control in new-style classes.\n\nobject.__setattr__(self, name, value)\n\n Called when an attribute assignment is attempted. This is called\n instead of the normal mechanism (i.e. store the value in the\n instance dictionary). *name* is the attribute name, *value* is the\n value to be assigned to it.\n\n If ``__setattr__()`` wants to assign to an instance attribute, it\n should not simply execute ``self.name = value`` --- this would\n cause a recursive call to itself. Instead, it should insert the\n value in the dictionary of instance attributes, e.g.,\n ``self.__dict__[name] = value``. For new-style classes, rather\n than accessing the instance dictionary, it should call the base\n class method with the same name, for example,\n ``object.__setattr__(self, name, value)``.\n\nobject.__delattr__(self, name)\n\n Like ``__setattr__()`` but for attribute deletion instead of\n assignment. This should only be implemented if ``del obj.name`` is\n meaningful for the object.\n\n\nMore attribute access for new-style classes\n===========================================\n\nThe following methods only apply to new-style classes.\n\nobject.__getattribute__(self, name)\n\n Called unconditionally to implement attribute accesses for\n instances of the class. If the class also defines\n ``__getattr__()``, the latter will not be called unless\n ``__getattribute__()`` either calls it explicitly or raises an\n ``AttributeError``. This method should return the (computed)\n attribute value or raise an ``AttributeError`` exception. In order\n to avoid infinite recursion in this method, its implementation\n should always call the base class method with the same name to\n access any attributes it needs, for example,\n ``object.__getattribute__(self, name)``.\n\n Note: This method may still be bypassed when looking up special methods\n as the result of implicit invocation via language syntax or\n built-in functions. See *Special method lookup for new-style\n classes*.\n\n\nImplementing Descriptors\n========================\n\nThe following methods only apply when an instance of the class\ncontaining the method (a so-called *descriptor* class) appears in an\n*owner* class (the descriptor must be in either the owner\'s class\ndictionary or in the class dictionary for one of its parents). In the\nexamples below, "the attribute" refers to the attribute whose name is\nthe key of the property in the owner class\' ``__dict__``.\n\nobject.__get__(self, instance, owner)\n\n Called to get the attribute of the owner class (class attribute\n access) or of an instance of that class (instance attribute\n access). *owner* is always the owner class, while *instance* is the\n instance that the attribute was accessed through, or ``None`` when\n the attribute is accessed through the *owner*. 
This method should\n return the (computed) attribute value or raise an\n ``AttributeError`` exception.\n\nobject.__set__(self, instance, value)\n\n Called to set the attribute on an instance *instance* of the owner\n class to a new value, *value*.\n\nobject.__delete__(self, instance)\n\n Called to delete the attribute on an instance *instance* of the\n owner class.\n\n\nInvoking Descriptors\n====================\n\nIn general, a descriptor is an object attribute with "binding\nbehavior", one whose attribute access has been overridden by methods\nin the descriptor protocol: ``__get__()``, ``__set__()``, and\n``__delete__()``. If any of those methods are defined for an object,\nit is said to be a descriptor.\n\nThe default behavior for attribute access is to get, set, or delete\nthe attribute from an object\'s dictionary. For instance, ``a.x`` has a\nlookup chain starting with ``a.__dict__[\'x\']``, then\n``type(a).__dict__[\'x\']``, and continuing through the base classes of\n``type(a)`` excluding metaclasses.\n\nHowever, if the looked-up value is an object defining one of the\ndescriptor methods, then Python may override the default behavior and\ninvoke the descriptor method instead. Where this occurs in the\nprecedence chain depends on which descriptor methods were defined and\nhow they were called. Note that descriptors are only invoked for new\nstyle objects or classes (ones that subclass ``object()`` or\n``type()``).\n\nThe starting point for descriptor invocation is a binding, ``a.x``.\nHow the arguments are assembled depends on ``a``:\n\nDirect Call\n The simplest and least common call is when user code directly\n invokes a descriptor method: ``x.__get__(a)``.\n\nInstance Binding\n If binding to a new-style object instance, ``a.x`` is transformed\n into the call: ``type(a).__dict__[\'x\'].__get__(a, type(a))``.\n\nClass Binding\n If binding to a new-style class, ``A.x`` is transformed into the\n call: ``A.__dict__[\'x\'].__get__(None, A)``.\n\nSuper Binding\n If ``a`` is an instance of ``super``, then the binding ``super(B,\n obj).m()`` searches ``obj.__class__.__mro__`` for the base class\n ``A`` immediately preceding ``B`` and then invokes the descriptor\n with the call: ``A.__dict__[\'m\'].__get__(obj, obj.__class__)``.\n\nFor instance bindings, the precedence of descriptor invocation depends\non the which descriptor methods are defined. A descriptor can define\nany combination of ``__get__()``, ``__set__()`` and ``__delete__()``.\nIf it does not define ``__get__()``, then accessing the attribute will\nreturn the descriptor object itself unless there is a value in the\nobject\'s instance dictionary. If the descriptor defines ``__set__()``\nand/or ``__delete__()``, it is a data descriptor; if it defines\nneither, it is a non-data descriptor. Normally, data descriptors\ndefine both ``__get__()`` and ``__set__()``, while non-data\ndescriptors have just the ``__get__()`` method. Data descriptors with\n``__set__()`` and ``__get__()`` defined always override a redefinition\nin an instance dictionary. In contrast, non-data descriptors can be\noverridden by instances.\n\nPython methods (including ``staticmethod()`` and ``classmethod()``)\nare implemented as non-data descriptors. Accordingly, instances can\nredefine and override methods. 
This allows individual instances to\nacquire behaviors that differ from other instances of the same class.\n\nThe ``property()`` function is implemented as a data descriptor.\nAccordingly, instances cannot override the behavior of a property.\n\n\n__slots__\n=========\n\nBy default, instances of both old and new-style classes have a\ndictionary for attribute storage. This wastes space for objects\nhaving very few instance variables. The space consumption can become\nacute when creating large numbers of instances.\n\nThe default can be overridden by defining *__slots__* in a new-style\nclass definition. The *__slots__* declaration takes a sequence of\ninstance variables and reserves just enough space in each instance to\nhold a value for each variable. Space is saved because *__dict__* is\nnot created for each instance.\n\n__slots__\n\n This class variable can be assigned a string, iterable, or sequence\n of strings with variable names used by instances. If defined in a\n new-style class, *__slots__* reserves space for the declared\n variables and prevents the automatic creation of *__dict__* and\n *__weakref__* for each instance.\n\n New in version 2.2.\n\nNotes on using *__slots__*\n\n* When inheriting from a class without *__slots__*, the *__dict__*\n attribute of that class will always be accessible, so a *__slots__*\n definition in the subclass is meaningless.\n\n* Without a *__dict__* variable, instances cannot be assigned new\n variables not listed in the *__slots__* definition. Attempts to\n assign to an unlisted variable name raises ``AttributeError``. If\n dynamic assignment of new variables is desired, then add\n ``\'__dict__\'`` to the sequence of strings in the *__slots__*\n declaration.\n\n Changed in version 2.3: Previously, adding ``\'__dict__\'`` to the\n *__slots__* declaration would not enable the assignment of new\n attributes not specifically listed in the sequence of instance\n variable names.\n\n* Without a *__weakref__* variable for each instance, classes defining\n *__slots__* do not support weak references to its instances. If weak\n reference support is needed, then add ``\'__weakref__\'`` to the\n sequence of strings in the *__slots__* declaration.\n\n Changed in version 2.3: Previously, adding ``\'__weakref__\'`` to the\n *__slots__* declaration would not enable support for weak\n references.\n\n* *__slots__* are implemented at the class level by creating\n descriptors (*Implementing Descriptors*) for each variable name. As\n a result, class attributes cannot be used to set default values for\n instance variables defined by *__slots__*; otherwise, the class\n attribute would overwrite the descriptor assignment.\n\n* The action of a *__slots__* declaration is limited to the class\n where it is defined. As a result, subclasses will have a *__dict__*\n unless they also define *__slots__* (which must only contain names\n of any *additional* slots).\n\n* If a class defines a slot also defined in a base class, the instance\n variable defined by the base class slot is inaccessible (except by\n retrieving its descriptor directly from the base class). This\n renders the meaning of the program undefined. In the future, a\n check may be added to prevent this.\n\n* Nonempty *__slots__* does not work for classes derived from\n "variable-length" built-in types such as ``long``, ``str`` and\n ``tuple``.\n\n* Any non-string iterable may be assigned to *__slots__*. 
Mappings may\n also be used; however, in the future, special meaning may be\n assigned to the values corresponding to each key.\n\n* *__class__* assignment works only if both classes have the same\n *__slots__*.\n\n Changed in version 2.6: Previously, *__class__* assignment raised an\n error if either new or old class had *__slots__*.\n', 'attribute-references': u'\nAttribute references\n********************\n\nAn attribute reference is a primary followed by a period and a name:\n\n attributeref ::= primary "." identifier\n\nThe primary must evaluate to an object of a type that supports\nattribute references, e.g., a module, list, or an instance. This\nobject is then asked to produce the attribute whose name is the\nidentifier. If this attribute is not available, the exception\n``AttributeError`` is raised. Otherwise, the type and value of the\nobject produced is determined by the object. Multiple evaluations of\nthe same attribute reference may yield different objects.\n', 'augassign': u'\nAugmented assignment statements\n*******************************\n\nAugmented assignment is the combination, in a single statement, of a\nbinary operation and an assignment statement:\n\n augmented_assignment_stmt ::= augtarget augop (expression_list | yield_expression)\n augtarget ::= identifier | attributeref | subscription | slicing\n augop ::= "+=" | "-=" | "*=" | "/=" | "//=" | "%=" | "**="\n | ">>=" | "<<=" | "&=" | "^=" | "|="\n\n(See section *Primaries* for the syntax definitions for the last three\nsymbols.)\n\nAn augmented assignment evaluates the target (which, unlike normal\nassignment statements, cannot be an unpacking) and the expression\nlist, performs the binary operation specific to the type of assignment\non the two operands, and assigns the result to the original target.\nThe target is only evaluated once.\n\nAn augmented assignment expression like ``x += 1`` can be rewritten as\n``x = x + 1`` to achieve a similar, but not exactly equal effect. In\nthe augmented version, ``x`` is only evaluated once. Also, when\npossible, the actual operation is performed *in-place*, meaning that\nrather than creating a new object and assigning that to the target,\nthe old object is modified instead.\n\nWith the exception of assigning to tuples and multiple targets in a\nsingle statement, the assignment done by augmented assignment\nstatements is handled the same way as normal assignments. Similarly,\nwith the exception of the possible *in-place* behavior, the binary\noperation performed by augmented assignment is the same as the normal\nbinary operations.\n\nFor targets which are attribute references, the same *caveat about\nclass and instance attributes* applies as for regular assignments.\n', 'binary': u'\nBinary arithmetic operations\n****************************\n\nThe binary arithmetic operations have the conventional priority\nlevels. Note that some of these operations also apply to certain non-\nnumeric types. Apart from the power operator, there are only two\nlevels, one for multiplicative operators and one for additive\noperators:\n\n m_expr ::= u_expr | m_expr "*" u_expr | m_expr "//" u_expr | m_expr "/" u_expr\n | m_expr "%" u_expr\n a_expr ::= m_expr | a_expr "+" m_expr | a_expr "-" m_expr\n\nThe ``*`` (multiplication) operator yields the product of its\narguments. The arguments must either both be numbers, or one argument\nmust be an integer (plain or long) and the other must be a sequence.\nIn the former case, the numbers are converted to a common type and\nthen multiplied together. 
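The in-place behaviour of augmented assignment described above is easiest to see by comparing a mutable and an immutable target; this is only an illustrative sketch with arbitrary values:

    # Illustrative sketch only; variable names and values are arbitrary.
    lst = [1, 2]
    alias = lst
    lst += [3]                   # in-place: the existing list object is extended
    print alias                  # [1, 2, 3] -- the alias observes the change

    t = (1, 2)
    old_id = id(t)
    t += (3,)                    # immutable target: a new tuple is bound to t
    print t, id(t) == old_id     # (1, 2, 3) False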
In the latter case, sequence repetition is\nperformed; a negative repetition factor yields an empty sequence.\n\nThe ``/`` (division) and ``//`` (floor division) operators yield the\nquotient of their arguments. The numeric arguments are first\nconverted to a common type. Plain or long integer division yields an\ninteger of the same type; the result is that of mathematical division\nwith the \'floor\' function applied to the result. Division by zero\nraises the ``ZeroDivisionError`` exception.\n\nThe ``%`` (modulo) operator yields the remainder from the division of\nthe first argument by the second. The numeric arguments are first\nconverted to a common type. A zero right argument raises the\n``ZeroDivisionError`` exception. The arguments may be floating point\nnumbers, e.g., ``3.14%0.7`` equals ``0.34`` (since ``3.14`` equals\n``4*0.7 + 0.34``.) The modulo operator always yields a result with\nthe same sign as its second operand (or zero); the absolute value of\nthe result is strictly smaller than the absolute value of the second\noperand [2].\n\nThe integer division and modulo operators are connected by the\nfollowing identity: ``x == (x/y)*y + (x%y)``. Integer division and\nmodulo are also connected with the built-in function ``divmod()``:\n``divmod(x, y) == (x/y, x%y)``. These identities don\'t hold for\nfloating point numbers; there similar identities hold approximately\nwhere ``x/y`` is replaced by ``floor(x/y)`` or ``floor(x/y) - 1`` [3].\n\nIn addition to performing the modulo operation on numbers, the ``%``\noperator is also overloaded by string and unicode objects to perform\nstring formatting (also known as interpolation). The syntax for string\nformatting is described in the Python Library Reference, section\n*String Formatting Operations*.\n\nDeprecated since version 2.3: The floor division operator, the modulo\noperator, and the ``divmod()`` function are no longer defined for\ncomplex numbers. Instead, convert to a floating point number using\nthe ``abs()`` function if appropriate.\n\nThe ``+`` (addition) operator yields the sum of its arguments. The\narguments must either both be numbers or both sequences of the same\ntype. In the former case, the numbers are converted to a common type\nand then added together. In the latter case, the sequences are\nconcatenated.\n\nThe ``-`` (subtraction) operator yields the difference of its\narguments. The numeric arguments are first converted to a common\ntype.\n', 'bitwise': u'\nBinary bitwise operations\n*************************\n\nEach of the three bitwise operations has a different priority level:\n\n and_expr ::= shift_expr | and_expr "&" shift_expr\n xor_expr ::= and_expr | xor_expr "^" and_expr\n or_expr ::= xor_expr | or_expr "|" xor_expr\n\nThe ``&`` operator yields the bitwise AND of its arguments, which must\nbe plain or long integers. The arguments are converted to a common\ntype.\n\nThe ``^`` operator yields the bitwise XOR (exclusive OR) of its\narguments, which must be plain or long integers. The arguments are\nconverted to a common type.\n\nThe ``|`` operator yields the bitwise (inclusive) OR of its arguments,\nwhich must be plain or long integers. The arguments are converted to\na common type.\n', 'bltin-code-objects': u'\nCode Objects\n************\n\nCode objects are used by the implementation to represent "pseudo-\ncompiled" executable Python code such as a function body. They differ\nfrom function objects because they don\'t contain a reference to their\nglobal execution environment. 
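The modulo sign rule and the ``x == (x/y)*y + (x%y)`` identity quoted above can be checked directly; the operand values below are arbitrary:

    # Illustrative sketch only; the operand values are arbitrary.
    for x, y in [(7, 3), (-7, 3), (7, -3), (-7, -3)]:
        q, r = x / y, x % y
        assert x == q * y + r          # the identity quoted above
        assert (q, r) == divmod(x, y)  # divmod() gives both results at once
        print x, y, '->', q, r         # r always has the sign of y (or is zero)
    print 3.14 % 0.7                   # roughly 0.34 (floating point)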
Code objects are returned by the built-\nin ``compile()`` function and can be extracted from function objects\nthrough their ``func_code`` attribute. See also the ``code`` module.\n\nA code object can be executed or evaluated by passing it (instead of a\nsource string) to the ``exec`` statement or the built-in ``eval()``\nfunction.\n\nSee *The standard type hierarchy* for more information.\n', 'bltin-ellipsis-object': u'\nThe Ellipsis Object\n*******************\n\nThis object is used by extended slice notation (see *Slicings*). It\nsupports no special operations. There is exactly one ellipsis object,\nnamed ``Ellipsis`` (a built-in name).\n\nIt is written as ``Ellipsis``.\n', - 'bltin-file-objects': u'\nFile Objects\n************\n\nFile objects are implemented using C\'s ``stdio`` package and can be\ncreated with the built-in ``open()`` function. File objects are also\nreturned by some other built-in functions and methods, such as\n``os.popen()`` and ``os.fdopen()`` and the ``makefile()`` method of\nsocket objects. Temporary files can be created using the ``tempfile``\nmodule, and high-level file operations such as copying, moving, and\ndeleting files and directories can be achieved with the ``shutil``\nmodule.\n\nWhen a file operation fails for an I/O-related reason, the exception\n``IOError`` is raised. This includes situations where the operation\nis not defined for some reason, like ``seek()`` on a tty device or\nwriting a file opened for reading.\n\nFiles have the following methods:\n\nfile.close()\n\n Close the file. A closed file cannot be read or written any more.\n Any operation which requires that the file be open will raise a\n ``ValueError`` after the file has been closed. Calling ``close()``\n more than once is allowed.\n\n As of Python 2.5, you can avoid having to call this method\n explicitly if you use the ``with`` statement. For example, the\n following code will automatically close *f* when the ``with`` block\n is exited:\n\n from __future__ import with_statement # This isn\'t required in Python 2.6\n\n with open("hello.txt") as f:\n for line in f:\n print line\n\n In older versions of Python, you would have needed to do this to\n get the same effect:\n\n f = open("hello.txt")\n try:\n for line in f:\n print line\n finally:\n f.close()\n\n Note: Not all "file-like" types in Python support use as a context\n manager for the ``with`` statement. If your code is intended to\n work with any file-like object, you can use the function\n ``contextlib.closing()`` instead of using the object directly.\n\nfile.flush()\n\n Flush the internal buffer, like ``stdio``\'s ``fflush()``. 
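A minimal sketch of the ``compile()``/``eval()``/``func_code`` round trip described in the code-objects entry above (the expression string is arbitrary):

    # Illustrative sketch only; the expression string is arbitrary.
    source = "a * 2 + b"
    code = compile(source, "<string>", "eval")    # returns a code object
    print type(code)                              # <type 'code'>
    print eval(code, {"a": 10, "b": 1})           # 21

    def f():
        return 42
    print f.func_code.co_name                     # 'f', via the func_code attribute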
This may\n be a no-op on some file-like objects.\n\n Note: ``flush()`` does not necessarily write the file\'s data to disk.\n Use ``flush()`` followed by ``os.fsync()`` to ensure this\n behavior.\n\nfile.fileno()\n\n Return the integer "file descriptor" that is used by the underlying\n implementation to request I/O operations from the operating system.\n This can be useful for other, lower level interfaces that use file\n descriptors, such as the ``fcntl`` module or ``os.read()`` and\n friends.\n\n Note: File-like objects which do not have a real file descriptor should\n *not* provide this method!\n\nfile.isatty()\n\n Return ``True`` if the file is connected to a tty(-like) device,\n else ``False``.\n\n Note: If a file-like object is not associated with a real file, this\n method should *not* be implemented.\n\nfile.next()\n\n A file object is its own iterator, for example ``iter(f)`` returns\n *f* (unless *f* is closed). When a file is used as an iterator,\n typically in a ``for`` loop (for example, ``for line in f: print\n line``), the ``next()`` method is called repeatedly. This method\n returns the next input line, or raises ``StopIteration`` when EOF\n is hit when the file is open for reading (behavior is undefined\n when the file is open for writing). In order to make a ``for``\n loop the most efficient way of looping over the lines of a file (a\n very common operation), the ``next()`` method uses a hidden read-\n ahead buffer. As a consequence of using a read-ahead buffer,\n combining ``next()`` with other file methods (like ``readline()``)\n does not work right. However, using ``seek()`` to reposition the\n file to an absolute position will flush the read-ahead buffer.\n\n New in version 2.3.\n\nfile.read([size])\n\n Read at most *size* bytes from the file (less if the read hits EOF\n before obtaining *size* bytes). If the *size* argument is negative\n or omitted, read all data until EOF is reached. The bytes are\n returned as a string object. An empty string is returned when EOF\n is encountered immediately. (For certain files, like ttys, it\n makes sense to continue reading after an EOF is hit.) Note that\n this method may call the underlying C function ``fread()`` more\n than once in an effort to acquire as close to *size* bytes as\n possible. Also note that when in non-blocking mode, less data than\n was requested may be returned, even if no *size* parameter was\n given.\n\n Note: This function is simply a wrapper for the underlying ``fread()``\n C function, and will behave the same in corner cases, such as\n whether the EOF value is cached.\n\nfile.readline([size])\n\n Read one entire line from the file. A trailing newline character\n is kept in the string (but may be absent when a file ends with an\n incomplete line). [5] If the *size* argument is present and non-\n negative, it is a maximum byte count (including the trailing\n newline) and an incomplete line may be returned. An empty string is\n returned *only* when EOF is encountered immediately.\n\n Note: Unlike ``stdio``\'s ``fgets()``, the returned string contains null\n characters (``\'\\0\'``) if they occurred in the input.\n\nfile.readlines([sizehint])\n\n Read until EOF using ``readline()`` and return a list containing\n the lines thus read. If the optional *sizehint* argument is\n present, instead of reading up to EOF, whole lines totalling\n approximately *sizehint* bytes (possibly after rounding up to an\n internal buffer size) are read. 
Objects implementing a file-like\n interface may choose to ignore *sizehint* if it cannot be\n implemented, or cannot be implemented efficiently.\n\nfile.xreadlines()\n\n This method returns the same thing as ``iter(f)``.\n\n New in version 2.1.\n\n Deprecated since version 2.3: Use ``for line in file`` instead.\n\nfile.seek(offset[, whence])\n\n Set the file\'s current position, like ``stdio``\'s ``fseek()``. The\n *whence* argument is optional and defaults to ``os.SEEK_SET`` or\n ``0`` (absolute file positioning); other values are ``os.SEEK_CUR``\n or ``1`` (seek relative to the current position) and\n ``os.SEEK_END`` or ``2`` (seek relative to the file\'s end). There\n is no return value.\n\n For example, ``f.seek(2, os.SEEK_CUR)`` advances the position by\n two and ``f.seek(-3, os.SEEK_END)`` sets the position to the third\n to last.\n\n Note that if the file is opened for appending (mode ``\'a\'`` or\n ``\'a+\'``), any ``seek()`` operations will be undone at the next\n write. If the file is only opened for writing in append mode (mode\n ``\'a\'``), this method is essentially a no-op, but it remains useful\n for files opened in append mode with reading enabled (mode\n ``\'a+\'``). If the file is opened in text mode (without ``\'b\'``),\n only offsets returned by ``tell()`` are legal. Use of other\n offsets causes undefined behavior.\n\n Note that not all file objects are seekable.\n\n Changed in version 2.6: Passing float values as offset has been\n deprecated.\n\nfile.tell()\n\n Return the file\'s current position, like ``stdio``\'s ``ftell()``.\n\n Note: On Windows, ``tell()`` can return illegal values (after an\n ``fgets()``) when reading files with Unix-style line-endings. Use\n binary mode (``\'rb\'``) to circumvent this problem.\n\nfile.truncate([size])\n\n Truncate the file\'s size. If the optional *size* argument is\n present, the file is truncated to (at most) that size. The size\n defaults to the current position. The current file position is not\n changed. Note that if a specified size exceeds the file\'s current\n size, the result is platform-dependent: possibilities include that\n the file may remain unchanged, increase to the specified size as if\n zero-filled, or increase to the specified size with undefined new\n content. Availability: Windows, many Unix variants.\n\nfile.write(str)\n\n Write a string to the file. There is no return value. Due to\n buffering, the string may not actually show up in the file until\n the ``flush()`` or ``close()`` method is called.\n\nfile.writelines(sequence)\n\n Write a sequence of strings to the file. The sequence can be any\n iterable object producing strings, typically a list of strings.\n There is no return value. (The name is intended to match\n ``readlines()``; ``writelines()`` does not add line separators.)\n\nFiles support the iterator protocol. Each iteration returns the same\nresult as ``file.readline()``, and iteration ends when the\n``readline()`` method returns an empty string.\n\nFile objects also offer a number of other interesting attributes.\nThese are not required for file-like objects, but should be\nimplemented if they make sense for the particular object.\n\nfile.closed\n\n bool indicating the current state of the file object. This is a\n read-only attribute; the ``close()`` method changes the value. It\n may not be available on all file-like objects.\n\nfile.encoding\n\n The encoding that this file uses. When Unicode strings are written\n to a file, they will be converted to byte strings using this\n encoding. 
In addition, when the file is connected to a terminal,\n the attribute gives the encoding that the terminal is likely to use\n (that information might be incorrect if the user has misconfigured\n the terminal). The attribute is read-only and may not be present\n on all file-like objects. It may also be ``None``, in which case\n the file uses the system default encoding for converting Unicode\n strings.\n\n New in version 2.3.\n\nfile.errors\n\n The Unicode error handler used along with the encoding.\n\n New in version 2.6.\n\nfile.mode\n\n The I/O mode for the file. If the file was created using the\n ``open()`` built-in function, this will be the value of the *mode*\n parameter. This is a read-only attribute and may not be present on\n all file-like objects.\n\nfile.name\n\n If the file object was created using ``open()``, the name of the\n file. Otherwise, some string that indicates the source of the file\n object, of the form ``<...>``. This is a read-only attribute and\n may not be present on all file-like objects.\n\nfile.newlines\n\n If Python was built with the *--with-universal-newlines* option to\n **configure** (the default) this read-only attribute exists, and\n for files opened in universal newline read mode it keeps track of\n the types of newlines encountered while reading the file. The\n values it can take are ``\'\\r\'``, ``\'\\n\'``, ``\'\\r\\n\'``, ``None``\n (unknown, no newlines read yet) or a tuple containing all the\n newline types seen, to indicate that multiple newline conventions\n were encountered. For files not opened in universal newline read\n mode the value of this attribute will be ``None``.\n\nfile.softspace\n\n Boolean that indicates whether a space character needs to be\n printed before another value when using the ``print`` statement.\n Classes that are trying to simulate a file object should also have\n a writable ``softspace`` attribute, which should be initialized to\n zero. This will be automatic for most classes implemented in\n Python (care may be needed for objects that override attribute\n access); types implemented in C will have to provide a writable\n ``softspace`` attribute.\n\n Note: This attribute is not used to control the ``print`` statement,\n but to allow the implementation of ``print`` to keep track of its\n internal state.\n', + 'bltin-file-objects': u'\nFile Objects\n************\n\nFile objects are implemented using C\'s ``stdio`` package and can be\ncreated with the built-in ``open()`` function. File objects are also\nreturned by some other built-in functions and methods, such as\n``os.popen()`` and ``os.fdopen()`` and the ``makefile()`` method of\nsocket objects. Temporary files can be created using the ``tempfile``\nmodule, and high-level file operations such as copying, moving, and\ndeleting files and directories can be achieved with the ``shutil``\nmodule.\n\nWhen a file operation fails for an I/O-related reason, the exception\n``IOError`` is raised. This includes situations where the operation\nis not defined for some reason, like ``seek()`` on a tty device or\nwriting a file opened for reading.\n\nFiles have the following methods:\n\nfile.close()\n\n Close the file. A closed file cannot be read or written any more.\n Any operation which requires that the file be open will raise a\n ``ValueError`` after the file has been closed. Calling ``close()``\n more than once is allowed.\n\n As of Python 2.5, you can avoid having to call this method\n explicitly if you use the ``with`` statement. 
For example, the\n following code will automatically close *f* when the ``with`` block\n is exited:\n\n from __future__ import with_statement # This isn\'t required in Python 2.6\n\n with open("hello.txt") as f:\n for line in f:\n print line\n\n In older versions of Python, you would have needed to do this to\n get the same effect:\n\n f = open("hello.txt")\n try:\n for line in f:\n print line\n finally:\n f.close()\n\n Note: Not all "file-like" types in Python support use as a context\n manager for the ``with`` statement. If your code is intended to\n work with any file-like object, you can use the function\n ``contextlib.closing()`` instead of using the object directly.\n\nfile.flush()\n\n Flush the internal buffer, like ``stdio``\'s ``fflush()``. This may\n be a no-op on some file-like objects.\n\n Note: ``flush()`` does not necessarily write the file\'s data to disk.\n Use ``flush()`` followed by ``os.fsync()`` to ensure this\n behavior.\n\nfile.fileno()\n\n Return the integer "file descriptor" that is used by the underlying\n implementation to request I/O operations from the operating system.\n This can be useful for other, lower level interfaces that use file\n descriptors, such as the ``fcntl`` module or ``os.read()`` and\n friends.\n\n Note: File-like objects which do not have a real file descriptor should\n *not* provide this method!\n\nfile.isatty()\n\n Return ``True`` if the file is connected to a tty(-like) device,\n else ``False``.\n\n Note: If a file-like object is not associated with a real file, this\n method should *not* be implemented.\n\nfile.next()\n\n A file object is its own iterator, for example ``iter(f)`` returns\n *f* (unless *f* is closed). When a file is used as an iterator,\n typically in a ``for`` loop (for example, ``for line in f: print\n line``), the ``next()`` method is called repeatedly. This method\n returns the next input line, or raises ``StopIteration`` when EOF\n is hit when the file is open for reading (behavior is undefined\n when the file is open for writing). In order to make a ``for``\n loop the most efficient way of looping over the lines of a file (a\n very common operation), the ``next()`` method uses a hidden read-\n ahead buffer. As a consequence of using a read-ahead buffer,\n combining ``next()`` with other file methods (like ``readline()``)\n does not work right. However, using ``seek()`` to reposition the\n file to an absolute position will flush the read-ahead buffer.\n\n New in version 2.3.\n\nfile.read([size])\n\n Read at most *size* bytes from the file (less if the read hits EOF\n before obtaining *size* bytes). If the *size* argument is negative\n or omitted, read all data until EOF is reached. The bytes are\n returned as a string object. An empty string is returned when EOF\n is encountered immediately. (For certain files, like ttys, it\n makes sense to continue reading after an EOF is hit.) Note that\n this method may call the underlying C function ``fread()`` more\n than once in an effort to acquire as close to *size* bytes as\n possible. Also note that when in non-blocking mode, less data than\n was requested may be returned, even if no *size* parameter was\n given.\n\n Note: This function is simply a wrapper for the underlying ``fread()``\n C function, and will behave the same in corner cases, such as\n whether the EOF value is cached.\n\nfile.readline([size])\n\n Read one entire line from the file. A trailing newline character\n is kept in the string (but may be absent when a file ends with an\n incomplete line). 
[5] If the *size* argument is present and non-\n negative, it is a maximum byte count (including the trailing\n newline) and an incomplete line may be returned. When *size* is not\n 0, an empty string is returned *only* when EOF is encountered\n immediately.\n\n Note: Unlike ``stdio``\'s ``fgets()``, the returned string contains null\n characters (``\'\\0\'``) if they occurred in the input.\n\nfile.readlines([sizehint])\n\n Read until EOF using ``readline()`` and return a list containing\n the lines thus read. If the optional *sizehint* argument is\n present, instead of reading up to EOF, whole lines totalling\n approximately *sizehint* bytes (possibly after rounding up to an\n internal buffer size) are read. Objects implementing a file-like\n interface may choose to ignore *sizehint* if it cannot be\n implemented, or cannot be implemented efficiently.\n\nfile.xreadlines()\n\n This method returns the same thing as ``iter(f)``.\n\n New in version 2.1.\n\n Deprecated since version 2.3: Use ``for line in file`` instead.\n\nfile.seek(offset[, whence])\n\n Set the file\'s current position, like ``stdio``\'s ``fseek()``. The\n *whence* argument is optional and defaults to ``os.SEEK_SET`` or\n ``0`` (absolute file positioning); other values are ``os.SEEK_CUR``\n or ``1`` (seek relative to the current position) and\n ``os.SEEK_END`` or ``2`` (seek relative to the file\'s end). There\n is no return value.\n\n For example, ``f.seek(2, os.SEEK_CUR)`` advances the position by\n two and ``f.seek(-3, os.SEEK_END)`` sets the position to the third\n to last.\n\n Note that if the file is opened for appending (mode ``\'a\'`` or\n ``\'a+\'``), any ``seek()`` operations will be undone at the next\n write. If the file is only opened for writing in append mode (mode\n ``\'a\'``), this method is essentially a no-op, but it remains useful\n for files opened in append mode with reading enabled (mode\n ``\'a+\'``). If the file is opened in text mode (without ``\'b\'``),\n only offsets returned by ``tell()`` are legal. Use of other\n offsets causes undefined behavior.\n\n Note that not all file objects are seekable.\n\n Changed in version 2.6: Passing float values as offset has been\n deprecated.\n\nfile.tell()\n\n Return the file\'s current position, like ``stdio``\'s ``ftell()``.\n\n Note: On Windows, ``tell()`` can return illegal values (after an\n ``fgets()``) when reading files with Unix-style line-endings. Use\n binary mode (``\'rb\'``) to circumvent this problem.\n\nfile.truncate([size])\n\n Truncate the file\'s size. If the optional *size* argument is\n present, the file is truncated to (at most) that size. The size\n defaults to the current position. The current file position is not\n changed. Note that if a specified size exceeds the file\'s current\n size, the result is platform-dependent: possibilities include that\n the file may remain unchanged, increase to the specified size as if\n zero-filled, or increase to the specified size with undefined new\n content. Availability: Windows, many Unix variants.\n\nfile.write(str)\n\n Write a string to the file. There is no return value. Due to\n buffering, the string may not actually show up in the file until\n the ``flush()`` or ``close()`` method is called.\n\nfile.writelines(sequence)\n\n Write a sequence of strings to the file. The sequence can be any\n iterable object producing strings, typically a list of strings.\n There is no return value. 
(The name is intended to match\n ``readlines()``; ``writelines()`` does not add line separators.)\n\nFiles support the iterator protocol. Each iteration returns the same\nresult as ``file.readline()``, and iteration ends when the\n``readline()`` method returns an empty string.\n\nFile objects also offer a number of other interesting attributes.\nThese are not required for file-like objects, but should be\nimplemented if they make sense for the particular object.\n\nfile.closed\n\n bool indicating the current state of the file object. This is a\n read-only attribute; the ``close()`` method changes the value. It\n may not be available on all file-like objects.\n\nfile.encoding\n\n The encoding that this file uses. When Unicode strings are written\n to a file, they will be converted to byte strings using this\n encoding. In addition, when the file is connected to a terminal,\n the attribute gives the encoding that the terminal is likely to use\n (that information might be incorrect if the user has misconfigured\n the terminal). The attribute is read-only and may not be present\n on all file-like objects. It may also be ``None``, in which case\n the file uses the system default encoding for converting Unicode\n strings.\n\n New in version 2.3.\n\nfile.errors\n\n The Unicode error handler used along with the encoding.\n\n New in version 2.6.\n\nfile.mode\n\n The I/O mode for the file. If the file was created using the\n ``open()`` built-in function, this will be the value of the *mode*\n parameter. This is a read-only attribute and may not be present on\n all file-like objects.\n\nfile.name\n\n If the file object was created using ``open()``, the name of the\n file. Otherwise, some string that indicates the source of the file\n object, of the form ``<...>``. This is a read-only attribute and\n may not be present on all file-like objects.\n\nfile.newlines\n\n If Python was built with universal newlines enabled (the default)\n this read-only attribute exists, and for files opened in universal\n newline read mode it keeps track of the types of newlines\n encountered while reading the file. The values it can take are\n ``\'\\r\'``, ``\'\\n\'``, ``\'\\r\\n\'``, ``None`` (unknown, no newlines read\n yet) or a tuple containing all the newline types seen, to indicate\n that multiple newline conventions were encountered. For files not\n opened in universal newline read mode the value of this attribute\n will be ``None``.\n\nfile.softspace\n\n Boolean that indicates whether a space character needs to be\n printed before another value when using the ``print`` statement.\n Classes that are trying to simulate a file object should also have\n a writable ``softspace`` attribute, which should be initialized to\n zero. This will be automatic for most classes implemented in\n Python (care may be needed for objects that override attribute\n access); types implemented in C will have to provide a writable\n ``softspace`` attribute.\n\n Note: This attribute is not used to control the ``print`` statement,\n but to allow the implementation of ``print`` to keep track of its\n internal state.\n', 'bltin-null-object': u"\nThe Null Object\n***************\n\nThis object is returned by functions that don't explicitly return a\nvalue. It supports no special operations. There is exactly one null\nobject, named ``None`` (a built-in name).\n\nIt is written as ``None``.\n", 'bltin-type-objects': u"\nType Objects\n************\n\nType objects represent the various object types. 
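Tying together several of the file-object methods documented in the entry above, a small sketch; ``hello.txt`` is just a scratch file name, as in the documentation's own example:

    # Illustrative sketch only; "hello.txt" is just a scratch file name.
    import os

    with open("hello.txt", "w") as f:
        f.writelines(["first line\n", "second line\n"])   # no separators added

    with open("hello.txt") as f:
        print f.readline(),        # first line
        print f.tell()             # byte position after the first line
        f.seek(0, os.SEEK_SET)     # rewind to the beginning
        for line in f:             # a file is its own iterator
            print line,
    print f.closed                 # True -- closed on leaving the with block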
An object's type is\naccessed by the built-in function ``type()``. There are no special\noperations on types. The standard module ``types`` defines names for\nall standard built-in types.\n\nTypes are written like this: ````.\n", 'booleans': u'\nBoolean operations\n******************\n\n or_test ::= and_test | or_test "or" and_test\n and_test ::= not_test | and_test "and" not_test\n not_test ::= comparison | "not" not_test\n\nIn the context of Boolean operations, and also when expressions are\nused by control flow statements, the following values are interpreted\nas false: ``False``, ``None``, numeric zero of all types, and empty\nstrings and containers (including strings, tuples, lists,\ndictionaries, sets and frozensets). All other values are interpreted\nas true. (See the ``__nonzero__()`` special method for a way to\nchange this.)\n\nThe operator ``not`` yields ``True`` if its argument is false,\n``False`` otherwise.\n\nThe expression ``x and y`` first evaluates *x*; if *x* is false, its\nvalue is returned; otherwise, *y* is evaluated and the resulting value\nis returned.\n\nThe expression ``x or y`` first evaluates *x*; if *x* is true, its\nvalue is returned; otherwise, *y* is evaluated and the resulting value\nis returned.\n\n(Note that neither ``and`` nor ``or`` restrict the value and type they\nreturn to ``False`` and ``True``, but rather return the last evaluated\nargument. This is sometimes useful, e.g., if ``s`` is a string that\nshould be replaced by a default value if it is empty, the expression\n``s or \'foo\'`` yields the desired value. Because ``not`` has to\ninvent a value anyway, it does not bother to return a value of the\nsame type as its argument, so e.g., ``not \'foo\'`` yields ``False``,\nnot ``\'\'``.)\n', @@ -20,39 +20,39 @@ 'class': u'\nClass definitions\n*****************\n\nA class definition defines a class object (see section *The standard\ntype hierarchy*):\n\n classdef ::= "class" classname [inheritance] ":" suite\n inheritance ::= "(" [expression_list] ")"\n classname ::= identifier\n\nA class definition is an executable statement. It first evaluates the\ninheritance list, if present. Each item in the inheritance list\nshould evaluate to a class object or class type which allows\nsubclassing. The class\'s suite is then executed in a new execution\nframe (see section *Naming and binding*), using a newly created local\nnamespace and the original global namespace. (Usually, the suite\ncontains only function definitions.) When the class\'s suite finishes\nexecution, its execution frame is discarded but its local namespace is\nsaved. [4] A class object is then created using the inheritance list\nfor the base classes and the saved local namespace for the attribute\ndictionary. The class name is bound to this class object in the\noriginal local namespace.\n\n**Programmer\'s note:** Variables defined in the class definition are\nclass variables; they are shared by all instances. To create instance\nvariables, they can be set in a method with ``self.name = value``.\nBoth class and instance variables are accessible through the notation\n"``self.name``", and an instance variable hides a class variable with\nthe same name when accessed in this way. Class variables can be used\nas defaults for instance variables, but using mutable values there can\nlead to unexpected results. 
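A few lines sketching the short-circuit behaviour described in the Boolean operations entry above; the operands are chosen arbitrarily:

    # Illustrative sketch only; the operand values are arbitrary.
    print 0 or 'default'        # default -- `or` returns its last evaluated operand
    print 'value' or 'default'  # value   -- the right operand is never evaluated
    print 0 and 'skipped'       # 0       -- `and` short-circuits on a false left operand
    print 3 and 'kept'          # kept
    print not 'foo'             # False   -- `not` always returns a bool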
For *new-style class*es, descriptors can\nbe used to create instance variables with different implementation\ndetails.\n\nClass definitions, like function definitions, may be wrapped by one or\nmore *decorator* expressions. The evaluation rules for the decorator\nexpressions are the same as for functions. The result must be a class\nobject, which is then bound to the class name.\n\n-[ Footnotes ]-\n\n[1] The exception is propagated to the invocation stack only if there\n is no ``finally`` clause that negates the exception.\n\n[2] Currently, control "flows off the end" except in the case of an\n exception or the execution of a ``return``, ``continue``, or\n ``break`` statement.\n\n[3] A string literal appearing as the first statement in the function\n body is transformed into the function\'s ``__doc__`` attribute and\n therefore the function\'s *docstring*.\n\n[4] A string literal appearing as the first statement in the class\n body is transformed into the namespace\'s ``__doc__`` item and\n therefore the class\'s *docstring*.\n', 'coercion-rules': u"\nCoercion rules\n**************\n\nThis section used to document the rules for coercion. As the language\nhas evolved, the coercion rules have become hard to document\nprecisely; documenting what one version of one particular\nimplementation does is undesirable. Instead, here are some informal\nguidelines regarding coercion. In Python 3.0, coercion will not be\nsupported.\n\n* If the left operand of a % operator is a string or Unicode object,\n no coercion takes place and the string formatting operation is\n invoked instead.\n\n* It is no longer recommended to define a coercion operation. Mixed-\n mode operations on types that don't define coercion pass the\n original arguments to the operation.\n\n* New-style classes (those derived from ``object``) never invoke the\n ``__coerce__()`` method in response to a binary operator; the only\n time ``__coerce__()`` is invoked is when the built-in function\n ``coerce()`` is called.\n\n* For most intents and purposes, an operator that returns\n ``NotImplemented`` is treated the same as one that is not\n implemented at all.\n\n* Below, ``__op__()`` and ``__rop__()`` are used to signify the\n generic method names corresponding to an operator; ``__iop__()`` is\n used for the corresponding in-place operator. For example, for the\n operator '``+``', ``__add__()`` and ``__radd__()`` are used for the\n left and right variant of the binary operator, and ``__iadd__()``\n for the in-place variant.\n\n* For objects *x* and *y*, first ``x.__op__(y)`` is tried. If this is\n not implemented or returns ``NotImplemented``, ``y.__rop__(x)`` is\n tried. If this is also not implemented or returns\n ``NotImplemented``, a ``TypeError`` exception is raised. But see\n the following exception:\n\n* Exception to the previous item: if the left operand is an instance\n of a built-in type or a new-style class, and the right operand is an\n instance of a proper subclass of that type or class and overrides\n the base's ``__rop__()`` method, the right operand's ``__rop__()``\n method is tried *before* the left operand's ``__op__()`` method.\n\n This is done so that a subclass can completely override binary\n operators. 
Otherwise, the left operand's ``__op__()`` method would\n always accept the right operand: when an instance of a given class\n is expected, an instance of a subclass of that class is always\n acceptable.\n\n* When either operand type defines a coercion, this coercion is called\n before that type's ``__op__()`` or ``__rop__()`` method is called,\n but no sooner. If the coercion returns an object of a different\n type for the operand whose coercion is invoked, part of the process\n is redone using the new object.\n\n* When an in-place operator (like '``+=``') is used, if the left\n operand implements ``__iop__()``, it is invoked without any\n coercion. When the operation falls back to ``__op__()`` and/or\n ``__rop__()``, the normal coercion rules apply.\n\n* In ``x + y``, if *x* is a sequence that implements sequence\n concatenation, sequence concatenation is invoked.\n\n* In ``x * y``, if one operator is a sequence that implements sequence\n repetition, and the other is an integer (``int`` or ``long``),\n sequence repetition is invoked.\n\n* Rich comparisons (implemented by methods ``__eq__()`` and so on)\n never use coercion. Three-way comparison (implemented by\n ``__cmp__()``) does use coercion under the same conditions as other\n binary operations use it.\n\n* In the current implementation, the built-in numeric types ``int``,\n ``long``, ``float``, and ``complex`` do not use coercion. All these\n types implement a ``__coerce__()`` method, for use by the built-in\n ``coerce()`` function.\n\n Changed in version 2.7.\n", 'comparisons': u'\nComparisons\n***********\n\nUnlike C, all comparison operations in Python have the same priority,\nwhich is lower than that of any arithmetic, shifting or bitwise\noperation. Also unlike C, expressions like ``a < b < c`` have the\ninterpretation that is conventional in mathematics:\n\n comparison ::= or_expr ( comp_operator or_expr )*\n comp_operator ::= "<" | ">" | "==" | ">=" | "<=" | "<>" | "!="\n | "is" ["not"] | ["not"] "in"\n\nComparisons yield boolean values: ``True`` or ``False``.\n\nComparisons can be chained arbitrarily, e.g., ``x < y <= z`` is\nequivalent to ``x < y and y <= z``, except that ``y`` is evaluated\nonly once (but in both cases ``z`` is not evaluated at all when ``x <\ny`` is found to be false).\n\nFormally, if *a*, *b*, *c*, ..., *y*, *z* are expressions and *op1*,\n*op2*, ..., *opN* are comparison operators, then ``a op1 b op2 c ... y\nopN z`` is equivalent to ``a op1 b and b op2 c and ... y opN z``,\nexcept that each expression is evaluated at most once.\n\nNote that ``a op1 b op2 c`` doesn\'t imply any kind of comparison\nbetween *a* and *c*, so that, e.g., ``x < y > z`` is perfectly legal\n(though perhaps not pretty).\n\nThe forms ``<>`` and ``!=`` are equivalent; for consistency with C,\n``!=`` is preferred; where ``!=`` is mentioned below ``<>`` is also\naccepted. The ``<>`` spelling is considered obsolescent.\n\nThe operators ``<``, ``>``, ``==``, ``>=``, ``<=``, and ``!=`` compare\nthe values of two objects. The objects need not have the same type.\nIf both are numbers, they are converted to a common type. Otherwise,\nobjects of different types *always* compare unequal, and are ordered\nconsistently but arbitrarily. 
You can control comparison behavior of\nobjects of non-built-in types by defining a ``__cmp__`` method or rich\ncomparison methods like ``__gt__``, described in section *Special\nmethod names*.\n\n(This unusual definition of comparison was used to simplify the\ndefinition of operations like sorting and the ``in`` and ``not in``\noperators. In the future, the comparison rules for objects of\ndifferent types are likely to change.)\n\nComparison of objects of the same type depends on the type:\n\n* Numbers are compared arithmetically.\n\n* Strings are compared lexicographically using the numeric equivalents\n (the result of the built-in function ``ord()``) of their characters.\n Unicode and 8-bit strings are fully interoperable in this behavior.\n [4]\n\n* Tuples and lists are compared lexicographically using comparison of\n corresponding elements. This means that to compare equal, each\n element must compare equal and the two sequences must be of the same\n type and have the same length.\n\n If not equal, the sequences are ordered the same as their first\n differing elements. For example, ``cmp([1,2,x], [1,2,y])`` returns\n the same as ``cmp(x,y)``. If the corresponding element does not\n exist, the shorter sequence is ordered first (for example, ``[1,2] <\n [1,2,3]``).\n\n* Mappings (dictionaries) compare equal if and only if their sorted\n (key, value) lists compare equal. [5] Outcomes other than equality\n are resolved consistently, but are not otherwise defined. [6]\n\n* Most other objects of built-in types compare unequal unless they are\n the same object; the choice whether one object is considered smaller\n or larger than another one is made arbitrarily but consistently\n within one execution of a program.\n\nThe operators ``in`` and ``not in`` test for collection membership.\n``x in s`` evaluates to true if *x* is a member of the collection *s*,\nand false otherwise. ``x not in s`` returns the negation of ``x in\ns``. The collection membership test has traditionally been bound to\nsequences; an object is a member of a collection if the collection is\na sequence and contains an element equal to that object. However, it\nmake sense for many other object types to support membership tests\nwithout being a sequence. In particular, dictionaries (for keys) and\nsets support membership testing.\n\nFor the list and tuple types, ``x in y`` is true if and only if there\nexists an index *i* such that ``x == y[i]`` is true.\n\nFor the Unicode and string types, ``x in y`` is true if and only if\n*x* is a substring of *y*. An equivalent test is ``y.find(x) != -1``.\nNote, *x* and *y* need not be the same type; consequently, ``u\'ab\' in\n\'abc\'`` will return ``True``. Empty strings are always considered to\nbe a substring of any other string, so ``"" in "abc"`` will return\n``True``.\n\nChanged in version 2.3: Previously, *x* was required to be a string of\nlength ``1``.\n\nFor user-defined classes which define the ``__contains__()`` method,\n``x in y`` is true if and only if ``y.__contains__(x)`` is true.\n\nFor user-defined classes which do not define ``__contains__()`` but do\ndefine ``__iter__()``, ``x in y`` is true if some value ``z`` with ``x\n== z`` is produced while iterating over ``y``. 
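As an illustrative sketch of the membership hooks just described (``Evens`` and ``Digits`` are invented class names):

    # Illustrative sketch only -- "Evens" and "Digits" are invented names.
    class Evens(object):
        def __contains__(self, x):
            return x % 2 == 0

    print 4 in Evens()           # True  -- uses __contains__()

    class Digits(object):
        def __iter__(self):
            return iter(range(10))

    print 7 in Digits()          # True  -- falls back to iterating over __iter__()
    print 11 not in Digits()     # True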
If an exception is\nraised during the iteration, it is as if ``in`` raised that exception.\n\nLastly, the old-style iteration protocol is tried: if a class defines\n``__getitem__()``, ``x in y`` is true if and only if there is a non-\nnegative integer index *i* such that ``x == y[i]``, and all lower\ninteger indices do not raise ``IndexError`` exception. (If any other\nexception is raised, it is as if ``in`` raised that exception).\n\nThe operator ``not in`` is defined to have the inverse true value of\n``in``.\n\nThe operators ``is`` and ``is not`` test for object identity: ``x is\ny`` is true if and only if *x* and *y* are the same object. ``x is\nnot y`` yields the inverse truth value. [7]\n', - 'compound': u'\nCompound statements\n*******************\n\nCompound statements contain (groups of) other statements; they affect\nor control the execution of those other statements in some way. In\ngeneral, compound statements span multiple lines, although in simple\nincarnations a whole compound statement may be contained in one line.\n\nThe ``if``, ``while`` and ``for`` statements implement traditional\ncontrol flow constructs. ``try`` specifies exception handlers and/or\ncleanup code for a group of statements. Function and class\ndefinitions are also syntactically compound statements.\n\nCompound statements consist of one or more \'clauses.\' A clause\nconsists of a header and a \'suite.\' The clause headers of a\nparticular compound statement are all at the same indentation level.\nEach clause header begins with a uniquely identifying keyword and ends\nwith a colon. A suite is a group of statements controlled by a\nclause. A suite can be one or more semicolon-separated simple\nstatements on the same line as the header, following the header\'s\ncolon, or it can be one or more indented statements on subsequent\nlines. Only the latter form of suite can contain nested compound\nstatements; the following is illegal, mostly because it wouldn\'t be\nclear to which ``if`` clause a following ``else`` clause would belong:\n\n if test1: if test2: print x\n\nAlso note that the semicolon binds tighter than the colon in this\ncontext, so that in the following example, either all or none of the\n``print`` statements are executed:\n\n if x < y < z: print x; print y; print z\n\nSummarizing:\n\n compound_stmt ::= if_stmt\n | while_stmt\n | for_stmt\n | try_stmt\n | with_stmt\n | funcdef\n | classdef\n | decorated\n suite ::= stmt_list NEWLINE | NEWLINE INDENT statement+ DEDENT\n statement ::= stmt_list NEWLINE | compound_stmt\n stmt_list ::= simple_stmt (";" simple_stmt)* [";"]\n\nNote that statements always end in a ``NEWLINE`` possibly followed by\na ``DEDENT``. 
Also note that optional continuation clauses always\nbegin with a keyword that cannot start a statement, thus there are no\nambiguities (the \'dangling ``else``\' problem is solved in Python by\nrequiring nested ``if`` statements to be indented).\n\nThe formatting of the grammar rules in the following sections places\neach clause on a separate line for clarity.\n\n\nThe ``if`` statement\n====================\n\nThe ``if`` statement is used for conditional execution:\n\n if_stmt ::= "if" expression ":" suite\n ( "elif" expression ":" suite )*\n ["else" ":" suite]\n\nIt selects exactly one of the suites by evaluating the expressions one\nby one until one is found to be true (see section *Boolean operations*\nfor the definition of true and false); then that suite is executed\n(and no other part of the ``if`` statement is executed or evaluated).\nIf all expressions are false, the suite of the ``else`` clause, if\npresent, is executed.\n\n\nThe ``while`` statement\n=======================\n\nThe ``while`` statement is used for repeated execution as long as an\nexpression is true:\n\n while_stmt ::= "while" expression ":" suite\n ["else" ":" suite]\n\nThis repeatedly tests the expression and, if it is true, executes the\nfirst suite; if the expression is false (which may be the first time\nit is tested) the suite of the ``else`` clause, if present, is\nexecuted and the loop terminates.\n\nA ``break`` statement executed in the first suite terminates the loop\nwithout executing the ``else`` clause\'s suite. A ``continue``\nstatement executed in the first suite skips the rest of the suite and\ngoes back to testing the expression.\n\n\nThe ``for`` statement\n=====================\n\nThe ``for`` statement is used to iterate over the elements of a\nsequence (such as a string, tuple or list) or other iterable object:\n\n for_stmt ::= "for" target_list "in" expression_list ":" suite\n ["else" ":" suite]\n\nThe expression list is evaluated once; it should yield an iterable\nobject. An iterator is created for the result of the\n``expression_list``. The suite is then executed once for each item\nprovided by the iterator, in the order of ascending indices. Each\nitem in turn is assigned to the target list using the standard rules\nfor assignments, and then the suite is executed. When the items are\nexhausted (which is immediately when the sequence is empty), the suite\nin the ``else`` clause, if present, is executed, and the loop\nterminates.\n\nA ``break`` statement executed in the first suite terminates the loop\nwithout executing the ``else`` clause\'s suite. A ``continue``\nstatement executed in the first suite skips the rest of the suite and\ncontinues with the next item, or with the ``else`` clause if there was\nno next item.\n\nThe suite may assign to the variable(s) in the target list; this does\nnot affect the next item assigned to it.\n\nThe target list is not deleted when the loop is finished, but if the\nsequence is empty, it will not have been assigned to at all by the\nloop. Hint: the built-in function ``range()`` returns a sequence of\nintegers suitable to emulate the effect of Pascal\'s ``for i := a to b\ndo``; e.g., ``range(3)`` returns the list ``[0, 1, 2]``.\n\nNote: There is a subtlety when the sequence is being modified by the loop\n (this can only occur for mutable sequences, i.e. lists). An internal\n counter is used to keep track of which item is used next, and this\n is incremented on each iteration. When this counter has reached the\n length of the sequence the loop terminates. 
This means that if the\n suite deletes the current (or a previous) item from the sequence,\n the next item will be skipped (since it gets the index of the\n current item which has already been treated). Likewise, if the\n suite inserts an item in the sequence before the current item, the\n current item will be treated again the next time through the loop.\n This can lead to nasty bugs that can be avoided by making a\n temporary copy using a slice of the whole sequence, e.g.,\n\n for x in a[:]:\n if x < 0: a.remove(x)\n\n\nThe ``try`` statement\n=====================\n\nThe ``try`` statement specifies exception handlers and/or cleanup code\nfor a group of statements:\n\n try_stmt ::= try1_stmt | try2_stmt\n try1_stmt ::= "try" ":" suite\n ("except" [expression [("as" | ",") target]] ":" suite)+\n ["else" ":" suite]\n ["finally" ":" suite]\n try2_stmt ::= "try" ":" suite\n "finally" ":" suite\n\nChanged in version 2.5: In previous versions of Python,\n``try``...``except``...``finally`` did not work. ``try``...``except``\nhad to be nested in ``try``...``finally``.\n\nThe ``except`` clause(s) specify one or more exception handlers. When\nno exception occurs in the ``try`` clause, no exception handler is\nexecuted. When an exception occurs in the ``try`` suite, a search for\nan exception handler is started. This search inspects the except\nclauses in turn until one is found that matches the exception. An\nexpression-less except clause, if present, must be last; it matches\nany exception. For an except clause with an expression, that\nexpression is evaluated, and the clause matches the exception if the\nresulting object is "compatible" with the exception. An object is\ncompatible with an exception if it is the class or a base class of the\nexception object, a tuple containing an item compatible with the\nexception, or, in the (deprecated) case of string exceptions, is the\nraised string itself (note that the object identities must match, i.e.\nit must be the same string object, not just a string with the same\nvalue).\n\nIf no except clause matches the exception, the search for an exception\nhandler continues in the surrounding code and on the invocation stack.\n[1]\n\nIf the evaluation of an expression in the header of an except clause\nraises an exception, the original search for a handler is canceled and\na search starts for the new exception in the surrounding code and on\nthe call stack (it is treated as if the entire ``try`` statement\nraised the exception).\n\nWhen a matching except clause is found, the exception is assigned to\nthe target specified in that except clause, if present, and the except\nclause\'s suite is executed. All except clauses must have an\nexecutable block. When the end of this block is reached, execution\ncontinues normally after the entire try statement. (This means that\nif two nested handlers exist for the same exception, and the exception\noccurs in the try clause of the inner handler, the outer handler will\nnot handle the exception.)\n\nBefore an except clause\'s suite is executed, details about the\nexception are assigned to three variables in the ``sys`` module:\n``sys.exc_type`` receives the object identifying the exception;\n``sys.exc_value`` receives the exception\'s parameter;\n``sys.exc_traceback`` receives a traceback object (see section *The\nstandard type hierarchy*) identifying the point in the program where\nthe exception occurred. 
These details are also available through the\n``sys.exc_info()`` function, which returns a tuple ``(exc_type,\nexc_value, exc_traceback)``. Use of the corresponding variables is\ndeprecated in favor of this function, since their use is unsafe in a\nthreaded program. As of Python 1.5, the variables are restored to\ntheir previous values (before the call) when returning from a function\nthat handled an exception.\n\nThe optional ``else`` clause is executed if and when control flows off\nthe end of the ``try`` clause. [2] Exceptions in the ``else`` clause\nare not handled by the preceding ``except`` clauses.\n\nIf ``finally`` is present, it specifies a \'cleanup\' handler. The\n``try`` clause is executed, including any ``except`` and ``else``\nclauses. If an exception occurs in any of the clauses and is not\nhandled, the exception is temporarily saved. The ``finally`` clause is\nexecuted. If there is a saved exception, it is re-raised at the end\nof the ``finally`` clause. If the ``finally`` clause raises another\nexception or executes a ``return`` or ``break`` statement, the saved\nexception is lost. The exception information is not available to the\nprogram during execution of the ``finally`` clause.\n\nWhen a ``return``, ``break`` or ``continue`` statement is executed in\nthe ``try`` suite of a ``try``...``finally`` statement, the\n``finally`` clause is also executed \'on the way out.\' A ``continue``\nstatement is illegal in the ``finally`` clause. (The reason is a\nproblem with the current implementation --- this restriction may be\nlifted in the future).\n\nAdditional information on exceptions can be found in section\n*Exceptions*, and information on using the ``raise`` statement to\ngenerate exceptions may be found in section *The raise statement*.\n\n\nThe ``with`` statement\n======================\n\nNew in version 2.5.\n\nThe ``with`` statement is used to wrap the execution of a block with\nmethods defined by a context manager (see section *With Statement\nContext Managers*). This allows common\n``try``...``except``...``finally`` usage patterns to be encapsulated\nfor convenient reuse.\n\n with_stmt ::= "with" with_item ("," with_item)* ":" suite\n with_item ::= expression ["as" target]\n\nThe execution of the ``with`` statement with one "item" proceeds as\nfollows:\n\n1. The context expression is evaluated to obtain a context manager.\n\n2. The context manager\'s ``__exit__()`` is loaded for later use.\n\n3. The context manager\'s ``__enter__()`` method is invoked.\n\n4. If a target was included in the ``with`` statement, the return\n value from ``__enter__()`` is assigned to it.\n\n Note: The ``with`` statement guarantees that if the ``__enter__()``\n method returns without an error, then ``__exit__()`` will always\n be called. Thus, if an error occurs during the assignment to the\n target list, it will be treated the same as an error occurring\n within the suite would be. See step 6 below.\n\n5. The suite is executed.\n\n6. The context manager\'s ``__exit__()`` method is invoked. If an\n exception caused the suite to be exited, its type, value, and\n traceback are passed as arguments to ``__exit__()``. Otherwise,\n three ``None`` arguments are supplied.\n\n If the suite was exited due to an exception, and the return value\n from the ``__exit__()`` method was false, the exception is\n reraised. 
If the return value was true, the exception is\n suppressed, and execution continues with the statement following\n the ``with`` statement.\n\n If the suite was exited for any reason other than an exception, the\n return value from ``__exit__()`` is ignored, and execution proceeds\n at the normal location for the kind of exit that was taken.\n\nWith more than one item, the context managers are processed as if\nmultiple ``with`` statements were nested:\n\n with A() as a, B() as b:\n suite\n\nis equivalent to\n\n with A() as a:\n with B() as b:\n suite\n\nNote: In Python 2.5, the ``with`` statement is only allowed when the\n ``with_statement`` feature has been enabled. It is always enabled\n in Python 2.6.\n\nChanged in version 2.7: Support for multiple context expressions.\n\nSee also:\n\n **PEP 0343** - The "with" statement\n The specification, background, and examples for the Python\n ``with`` statement.\n\n\nFunction definitions\n====================\n\nA function definition defines a user-defined function object (see\nsection *The standard type hierarchy*):\n\n decorated ::= decorators (classdef | funcdef)\n decorators ::= decorator+\n decorator ::= "@" dotted_name ["(" [argument_list [","]] ")"] NEWLINE\n funcdef ::= "def" funcname "(" [parameter_list] ")" ":" suite\n dotted_name ::= identifier ("." identifier)*\n parameter_list ::= (defparameter ",")*\n ( "*" identifier [, "**" identifier]\n | "**" identifier\n | defparameter [","] )\n defparameter ::= parameter ["=" expression]\n sublist ::= parameter ("," parameter)* [","]\n parameter ::= identifier | "(" sublist ")"\n funcname ::= identifier\n\nA function definition is an executable statement. Its execution binds\nthe function name in the current local namespace to a function object\n(a wrapper around the executable code for the function). This\nfunction object contains a reference to the current global namespace\nas the global namespace to be used when the function is called.\n\nThe function definition does not execute the function body; this gets\nexecuted only when the function is called. [3]\n\nA function definition may be wrapped by one or more *decorator*\nexpressions. Decorator expressions are evaluated when the function is\ndefined, in the scope that contains the function definition. The\nresult must be a callable, which is invoked with the function object\nas the only argument. The returned value is bound to the function name\ninstead of the function object. Multiple decorators are applied in\nnested fashion. For example, the following code:\n\n @f1(arg)\n @f2\n def func(): pass\n\nis equivalent to:\n\n def func(): pass\n func = f1(arg)(f2(func))\n\nWhen one or more top-level parameters have the form *parameter* ``=``\n*expression*, the function is said to have "default parameter values."\nFor a parameter with a default value, the corresponding argument may\nbe omitted from a call, in which case the parameter\'s default value is\nsubstituted. If a parameter has a default value, all following\nparameters must also have a default value --- this is a syntactic\nrestriction that is not expressed by the grammar.\n\n**Default parameter values are evaluated when the function definition\nis executed.** This means that the expression is evaluated once, when\nthe function is defined, and that that same "pre-computed" value is\nused for each call. This is especially important to understand when a\ndefault parameter is a mutable object, such as a list or a dictionary:\nif the function modifies the object (e.g. 
by appending an item to a\nlist), the default value is in effect modified. This is generally not\nwhat was intended. A way around this is to use ``None`` as the\ndefault, and explicitly test for it in the body of the function, e.g.:\n\n def whats_on_the_telly(penguin=None):\n if penguin is None:\n penguin = []\n penguin.append("property of the zoo")\n return penguin\n\nFunction call semantics are described in more detail in section\n*Calls*. A function call always assigns values to all parameters\nmentioned in the parameter list, either from position arguments, from\nkeyword arguments, or from default values. If the form\n"``*identifier``" is present, it is initialized to a tuple receiving\nany excess positional parameters, defaulting to the empty tuple. If\nthe form "``**identifier``" is present, it is initialized to a new\ndictionary receiving any excess keyword arguments, defaulting to a new\nempty dictionary.\n\nIt is also possible to create anonymous functions (functions not bound\nto a name), for immediate use in expressions. This uses lambda forms,\ndescribed in section *Lambdas*. Note that the lambda form is merely a\nshorthand for a simplified function definition; a function defined in\na "``def``" statement can be passed around or assigned to another name\njust like a function defined by a lambda form. The "``def``" form is\nactually more powerful since it allows the execution of multiple\nstatements.\n\n**Programmer\'s note:** Functions are first-class objects. A "``def``"\nform executed inside a function definition defines a local function\nthat can be returned or passed around. Free variables used in the\nnested function can access the local variables of the function\ncontaining the def. See section *Naming and binding* for details.\n\n\nClass definitions\n=================\n\nA class definition defines a class object (see section *The standard\ntype hierarchy*):\n\n classdef ::= "class" classname [inheritance] ":" suite\n inheritance ::= "(" [expression_list] ")"\n classname ::= identifier\n\nA class definition is an executable statement. It first evaluates the\ninheritance list, if present. Each item in the inheritance list\nshould evaluate to a class object or class type which allows\nsubclassing. The class\'s suite is then executed in a new execution\nframe (see section *Naming and binding*), using a newly created local\nnamespace and the original global namespace. (Usually, the suite\ncontains only function definitions.) When the class\'s suite finishes\nexecution, its execution frame is discarded but its local namespace is\nsaved. [4] A class object is then created using the inheritance list\nfor the base classes and the saved local namespace for the attribute\ndictionary. The class name is bound to this class object in the\noriginal local namespace.\n\n**Programmer\'s note:** Variables defined in the class definition are\nclass variables; they are shared by all instances. To create instance\nvariables, they can be set in a method with ``self.name = value``.\nBoth class and instance variables are accessible through the notation\n"``self.name``", and an instance variable hides a class variable with\nthe same name when accessed in this way. Class variables can be used\nas defaults for instance variables, but using mutable values there can\nlead to unexpected results. 
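A short sketch of the class-variable pitfall mentioned in the note above; the class and attribute names are invented:

    class Dog(object):
        tricks = []                    # class variable, shared by all instances

        def add_trick(self, trick):
            self.tricks.append(trick)  # mutates the shared list

    a, b = Dog(), Dog()
    a.add_trick("roll over")
    print b.tricks                     # ['roll over'] -- shared, usually unintended
    b.tricks = ["sit"]                 # assignment creates an instance variable
    print a.tricks                     # ['roll over'] -- class variable now hidden on b only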
For *new-style class*es, descriptors can\nbe used to create instance variables with different implementation\ndetails.\n\nClass definitions, like function definitions, may be wrapped by one or\nmore *decorator* expressions. The evaluation rules for the decorator\nexpressions are the same as for functions. The result must be a class\nobject, which is then bound to the class name.\n\n-[ Footnotes ]-\n\n[1] The exception is propagated to the invocation stack only if there\n is no ``finally`` clause that negates the exception.\n\n[2] Currently, control "flows off the end" except in the case of an\n exception or the execution of a ``return``, ``continue``, or\n ``break`` statement.\n\n[3] A string literal appearing as the first statement in the function\n body is transformed into the function\'s ``__doc__`` attribute and\n therefore the function\'s *docstring*.\n\n[4] A string literal appearing as the first statement in the class\n body is transformed into the namespace\'s ``__doc__`` item and\n therefore the class\'s *docstring*.\n', + 'compound': u'\nCompound statements\n*******************\n\nCompound statements contain (groups of) other statements; they affect\nor control the execution of those other statements in some way. In\ngeneral, compound statements span multiple lines, although in simple\nincarnations a whole compound statement may be contained in one line.\n\nThe ``if``, ``while`` and ``for`` statements implement traditional\ncontrol flow constructs. ``try`` specifies exception handlers and/or\ncleanup code for a group of statements. Function and class\ndefinitions are also syntactically compound statements.\n\nCompound statements consist of one or more \'clauses.\' A clause\nconsists of a header and a \'suite.\' The clause headers of a\nparticular compound statement are all at the same indentation level.\nEach clause header begins with a uniquely identifying keyword and ends\nwith a colon. A suite is a group of statements controlled by a\nclause. A suite can be one or more semicolon-separated simple\nstatements on the same line as the header, following the header\'s\ncolon, or it can be one or more indented statements on subsequent\nlines. Only the latter form of suite can contain nested compound\nstatements; the following is illegal, mostly because it wouldn\'t be\nclear to which ``if`` clause a following ``else`` clause would belong:\n\n if test1: if test2: print x\n\nAlso note that the semicolon binds tighter than the colon in this\ncontext, so that in the following example, either all or none of the\n``print`` statements are executed:\n\n if x < y < z: print x; print y; print z\n\nSummarizing:\n\n compound_stmt ::= if_stmt\n | while_stmt\n | for_stmt\n | try_stmt\n | with_stmt\n | funcdef\n | classdef\n | decorated\n suite ::= stmt_list NEWLINE | NEWLINE INDENT statement+ DEDENT\n statement ::= stmt_list NEWLINE | compound_stmt\n stmt_list ::= simple_stmt (";" simple_stmt)* [";"]\n\nNote that statements always end in a ``NEWLINE`` possibly followed by\na ``DEDENT``. 
Also note that optional continuation clauses always\nbegin with a keyword that cannot start a statement, thus there are no\nambiguities (the \'dangling ``else``\' problem is solved in Python by\nrequiring nested ``if`` statements to be indented).\n\nThe formatting of the grammar rules in the following sections places\neach clause on a separate line for clarity.\n\n\nThe ``if`` statement\n====================\n\nThe ``if`` statement is used for conditional execution:\n\n if_stmt ::= "if" expression ":" suite\n ( "elif" expression ":" suite )*\n ["else" ":" suite]\n\nIt selects exactly one of the suites by evaluating the expressions one\nby one until one is found to be true (see section *Boolean operations*\nfor the definition of true and false); then that suite is executed\n(and no other part of the ``if`` statement is executed or evaluated).\nIf all expressions are false, the suite of the ``else`` clause, if\npresent, is executed.\n\n\nThe ``while`` statement\n=======================\n\nThe ``while`` statement is used for repeated execution as long as an\nexpression is true:\n\n while_stmt ::= "while" expression ":" suite\n ["else" ":" suite]\n\nThis repeatedly tests the expression and, if it is true, executes the\nfirst suite; if the expression is false (which may be the first time\nit is tested) the suite of the ``else`` clause, if present, is\nexecuted and the loop terminates.\n\nA ``break`` statement executed in the first suite terminates the loop\nwithout executing the ``else`` clause\'s suite. A ``continue``\nstatement executed in the first suite skips the rest of the suite and\ngoes back to testing the expression.\n\n\nThe ``for`` statement\n=====================\n\nThe ``for`` statement is used to iterate over the elements of a\nsequence (such as a string, tuple or list) or other iterable object:\n\n for_stmt ::= "for" target_list "in" expression_list ":" suite\n ["else" ":" suite]\n\nThe expression list is evaluated once; it should yield an iterable\nobject. An iterator is created for the result of the\n``expression_list``. The suite is then executed once for each item\nprovided by the iterator, in the order of ascending indices. Each\nitem in turn is assigned to the target list using the standard rules\nfor assignments, and then the suite is executed. When the items are\nexhausted (which is immediately when the sequence is empty), the suite\nin the ``else`` clause, if present, is executed, and the loop\nterminates.\n\nA ``break`` statement executed in the first suite terminates the loop\nwithout executing the ``else`` clause\'s suite. A ``continue``\nstatement executed in the first suite skips the rest of the suite and\ncontinues with the next item, or with the ``else`` clause if there was\nno next item.\n\nThe suite may assign to the variable(s) in the target list; this does\nnot affect the next item assigned to it.\n\nThe target list is not deleted when the loop is finished, but if the\nsequence is empty, it will not have been assigned to at all by the\nloop. Hint: the built-in function ``range()`` returns a sequence of\nintegers suitable to emulate the effect of Pascal\'s ``for i := a to b\ndo``; e.g., ``range(3)`` returns the list ``[0, 1, 2]``.\n\nNote: There is a subtlety when the sequence is being modified by the loop\n (this can only occur for mutable sequences, i.e. lists). An internal\n counter is used to keep track of which item is used next, and this\n is incremented on each iteration. When this counter has reached the\n length of the sequence the loop terminates. 
This means that if the\n suite deletes the current (or a previous) item from the sequence,\n the next item will be skipped (since it gets the index of the\n current item which has already been treated). Likewise, if the\n suite inserts an item in the sequence before the current item, the\n current item will be treated again the next time through the loop.\n This can lead to nasty bugs that can be avoided by making a\n temporary copy using a slice of the whole sequence, e.g.,\n\n for x in a[:]:\n if x < 0: a.remove(x)\n\n\nThe ``try`` statement\n=====================\n\nThe ``try`` statement specifies exception handlers and/or cleanup code\nfor a group of statements:\n\n try_stmt ::= try1_stmt | try2_stmt\n try1_stmt ::= "try" ":" suite\n ("except" [expression [("as" | ",") target]] ":" suite)+\n ["else" ":" suite]\n ["finally" ":" suite]\n try2_stmt ::= "try" ":" suite\n "finally" ":" suite\n\nChanged in version 2.5: In previous versions of Python,\n``try``...``except``...``finally`` did not work. ``try``...``except``\nhad to be nested in ``try``...``finally``.\n\nThe ``except`` clause(s) specify one or more exception handlers. When\nno exception occurs in the ``try`` clause, no exception handler is\nexecuted. When an exception occurs in the ``try`` suite, a search for\nan exception handler is started. This search inspects the except\nclauses in turn until one is found that matches the exception. An\nexpression-less except clause, if present, must be last; it matches\nany exception. For an except clause with an expression, that\nexpression is evaluated, and the clause matches the exception if the\nresulting object is "compatible" with the exception. An object is\ncompatible with an exception if it is the class or a base class of the\nexception object, a tuple containing an item compatible with the\nexception, or, in the (deprecated) case of string exceptions, is the\nraised string itself (note that the object identities must match, i.e.\nit must be the same string object, not just a string with the same\nvalue).\n\nIf no except clause matches the exception, the search for an exception\nhandler continues in the surrounding code and on the invocation stack.\n[1]\n\nIf the evaluation of an expression in the header of an except clause\nraises an exception, the original search for a handler is canceled and\na search starts for the new exception in the surrounding code and on\nthe call stack (it is treated as if the entire ``try`` statement\nraised the exception).\n\nWhen a matching except clause is found, the exception is assigned to\nthe target specified in that except clause, if present, and the except\nclause\'s suite is executed. All except clauses must have an\nexecutable block. When the end of this block is reached, execution\ncontinues normally after the entire try statement. (This means that\nif two nested handlers exist for the same exception, and the exception\noccurs in the try clause of the inner handler, the outer handler will\nnot handle the exception.)\n\nBefore an except clause\'s suite is executed, details about the\nexception are assigned to three variables in the ``sys`` module:\n``sys.exc_type`` receives the object identifying the exception;\n``sys.exc_value`` receives the exception\'s parameter;\n``sys.exc_traceback`` receives a traceback object (see section *The\nstandard type hierarchy*) identifying the point in the program where\nthe exception occurred. 
These details are also available through the\n``sys.exc_info()`` function, which returns a tuple ``(exc_type,\nexc_value, exc_traceback)``. Use of the corresponding variables is\ndeprecated in favor of this function, since their use is unsafe in a\nthreaded program. As of Python 1.5, the variables are restored to\ntheir previous values (before the call) when returning from a function\nthat handled an exception.\n\nThe optional ``else`` clause is executed if and when control flows off\nthe end of the ``try`` clause. [2] Exceptions in the ``else`` clause\nare not handled by the preceding ``except`` clauses.\n\nIf ``finally`` is present, it specifies a \'cleanup\' handler. The\n``try`` clause is executed, including any ``except`` and ``else``\nclauses. If an exception occurs in any of the clauses and is not\nhandled, the exception is temporarily saved. The ``finally`` clause is\nexecuted. If there is a saved exception, it is re-raised at the end\nof the ``finally`` clause. If the ``finally`` clause raises another\nexception or executes a ``return`` or ``break`` statement, the saved\nexception is lost. The exception information is not available to the\nprogram during execution of the ``finally`` clause.\n\nWhen a ``return``, ``break`` or ``continue`` statement is executed in\nthe ``try`` suite of a ``try``...``finally`` statement, the\n``finally`` clause is also executed \'on the way out.\' A ``continue``\nstatement is illegal in the ``finally`` clause. (The reason is a\nproblem with the current implementation --- this restriction may be\nlifted in the future).\n\nAdditional information on exceptions can be found in section\n*Exceptions*, and information on using the ``raise`` statement to\ngenerate exceptions may be found in section *The raise statement*.\n\n\nThe ``with`` statement\n======================\n\nNew in version 2.5.\n\nThe ``with`` statement is used to wrap the execution of a block with\nmethods defined by a context manager (see section *With Statement\nContext Managers*). This allows common\n``try``...``except``...``finally`` usage patterns to be encapsulated\nfor convenient reuse.\n\n with_stmt ::= "with" with_item ("," with_item)* ":" suite\n with_item ::= expression ["as" target]\n\nThe execution of the ``with`` statement with one "item" proceeds as\nfollows:\n\n1. The context expression (the expression given in the **with_item**)\n is evaluated to obtain a context manager.\n\n2. The context manager\'s ``__exit__()`` is loaded for later use.\n\n3. The context manager\'s ``__enter__()`` method is invoked.\n\n4. If a target was included in the ``with`` statement, the return\n value from ``__enter__()`` is assigned to it.\n\n Note: The ``with`` statement guarantees that if the ``__enter__()``\n method returns without an error, then ``__exit__()`` will always\n be called. Thus, if an error occurs during the assignment to the\n target list, it will be treated the same as an error occurring\n within the suite would be. See step 6 below.\n\n5. The suite is executed.\n\n6. The context manager\'s ``__exit__()`` method is invoked. If an\n exception caused the suite to be exited, its type, value, and\n traceback are passed as arguments to ``__exit__()``. Otherwise,\n three ``None`` arguments are supplied.\n\n If the suite was exited due to an exception, and the return value\n from the ``__exit__()`` method was false, the exception is\n reraised. 
If the return value was true, the exception is\n suppressed, and execution continues with the statement following\n the ``with`` statement.\n\n If the suite was exited for any reason other than an exception, the\n return value from ``__exit__()`` is ignored, and execution proceeds\n at the normal location for the kind of exit that was taken.\n\nWith more than one item, the context managers are processed as if\nmultiple ``with`` statements were nested:\n\n with A() as a, B() as b:\n suite\n\nis equivalent to\n\n with A() as a:\n with B() as b:\n suite\n\nNote: In Python 2.5, the ``with`` statement is only allowed when the\n ``with_statement`` feature has been enabled. It is always enabled\n in Python 2.6.\n\nChanged in version 2.7: Support for multiple context expressions.\n\nSee also:\n\n **PEP 0343** - The "with" statement\n The specification, background, and examples for the Python\n ``with`` statement.\n\n\nFunction definitions\n====================\n\nA function definition defines a user-defined function object (see\nsection *The standard type hierarchy*):\n\n decorated ::= decorators (classdef | funcdef)\n decorators ::= decorator+\n decorator ::= "@" dotted_name ["(" [argument_list [","]] ")"] NEWLINE\n funcdef ::= "def" funcname "(" [parameter_list] ")" ":" suite\n dotted_name ::= identifier ("." identifier)*\n parameter_list ::= (defparameter ",")*\n ( "*" identifier [, "**" identifier]\n | "**" identifier\n | defparameter [","] )\n defparameter ::= parameter ["=" expression]\n sublist ::= parameter ("," parameter)* [","]\n parameter ::= identifier | "(" sublist ")"\n funcname ::= identifier\n\nA function definition is an executable statement. Its execution binds\nthe function name in the current local namespace to a function object\n(a wrapper around the executable code for the function). This\nfunction object contains a reference to the current global namespace\nas the global namespace to be used when the function is called.\n\nThe function definition does not execute the function body; this gets\nexecuted only when the function is called. [3]\n\nA function definition may be wrapped by one or more *decorator*\nexpressions. Decorator expressions are evaluated when the function is\ndefined, in the scope that contains the function definition. The\nresult must be a callable, which is invoked with the function object\nas the only argument. The returned value is bound to the function name\ninstead of the function object. Multiple decorators are applied in\nnested fashion. For example, the following code:\n\n @f1(arg)\n @f2\n def func(): pass\n\nis equivalent to:\n\n def func(): pass\n func = f1(arg)(f2(func))\n\nWhen one or more top-level parameters have the form *parameter* ``=``\n*expression*, the function is said to have "default parameter values."\nFor a parameter with a default value, the corresponding argument may\nbe omitted from a call, in which case the parameter\'s default value is\nsubstituted. If a parameter has a default value, all following\nparameters must also have a default value --- this is a syntactic\nrestriction that is not expressed by the grammar.\n\n**Default parameter values are evaluated when the function definition\nis executed.** This means that the expression is evaluated once, when\nthe function is defined, and that that same "pre-computed" value is\nused for each call. This is especially important to understand when a\ndefault parameter is a mutable object, such as a list or a dictionary:\nif the function modifies the object (e.g. 
by appending an item to a\nlist), the default value is in effect modified. This is generally not\nwhat was intended. A way around this is to use ``None`` as the\ndefault, and explicitly test for it in the body of the function, e.g.:\n\n def whats_on_the_telly(penguin=None):\n if penguin is None:\n penguin = []\n penguin.append("property of the zoo")\n return penguin\n\nFunction call semantics are described in more detail in section\n*Calls*. A function call always assigns values to all parameters\nmentioned in the parameter list, either from position arguments, from\nkeyword arguments, or from default values. If the form\n"``*identifier``" is present, it is initialized to a tuple receiving\nany excess positional parameters, defaulting to the empty tuple. If\nthe form "``**identifier``" is present, it is initialized to a new\ndictionary receiving any excess keyword arguments, defaulting to a new\nempty dictionary.\n\nIt is also possible to create anonymous functions (functions not bound\nto a name), for immediate use in expressions. This uses lambda forms,\ndescribed in section *Lambdas*. Note that the lambda form is merely a\nshorthand for a simplified function definition; a function defined in\na "``def``" statement can be passed around or assigned to another name\njust like a function defined by a lambda form. The "``def``" form is\nactually more powerful since it allows the execution of multiple\nstatements.\n\n**Programmer\'s note:** Functions are first-class objects. A "``def``"\nform executed inside a function definition defines a local function\nthat can be returned or passed around. Free variables used in the\nnested function can access the local variables of the function\ncontaining the def. See section *Naming and binding* for details.\n\n\nClass definitions\n=================\n\nA class definition defines a class object (see section *The standard\ntype hierarchy*):\n\n classdef ::= "class" classname [inheritance] ":" suite\n inheritance ::= "(" [expression_list] ")"\n classname ::= identifier\n\nA class definition is an executable statement. It first evaluates the\ninheritance list, if present. Each item in the inheritance list\nshould evaluate to a class object or class type which allows\nsubclassing. The class\'s suite is then executed in a new execution\nframe (see section *Naming and binding*), using a newly created local\nnamespace and the original global namespace. (Usually, the suite\ncontains only function definitions.) When the class\'s suite finishes\nexecution, its execution frame is discarded but its local namespace is\nsaved. [4] A class object is then created using the inheritance list\nfor the base classes and the saved local namespace for the attribute\ndictionary. The class name is bound to this class object in the\noriginal local namespace.\n\n**Programmer\'s note:** Variables defined in the class definition are\nclass variables; they are shared by all instances. To create instance\nvariables, they can be set in a method with ``self.name = value``.\nBoth class and instance variables are accessible through the notation\n"``self.name``", and an instance variable hides a class variable with\nthe same name when accessed in this way. Class variables can be used\nas defaults for instance variables, but using mutable values there can\nlead to unexpected results. 
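To illustrate the "``*identifier``" and "``**identifier``" forms described above, a minimal sketch (the function and argument names are invented):

    def report(first, *args, **kwargs):
        # args receives excess positional arguments as a tuple,
        # kwargs receives excess keyword arguments as a dictionary.
        return first, args, kwargs

    print report(1, 2, 3, colour="red")
    # (1, (2, 3), {'colour': 'red'})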
For *new-style class*es, descriptors can\nbe used to create instance variables with different implementation\ndetails.\n\nClass definitions, like function definitions, may be wrapped by one or\nmore *decorator* expressions. The evaluation rules for the decorator\nexpressions are the same as for functions. The result must be a class\nobject, which is then bound to the class name.\n\n-[ Footnotes ]-\n\n[1] The exception is propagated to the invocation stack only if there\n is no ``finally`` clause that negates the exception.\n\n[2] Currently, control "flows off the end" except in the case of an\n exception or the execution of a ``return``, ``continue``, or\n ``break`` statement.\n\n[3] A string literal appearing as the first statement in the function\n body is transformed into the function\'s ``__doc__`` attribute and\n therefore the function\'s *docstring*.\n\n[4] A string literal appearing as the first statement in the class\n body is transformed into the namespace\'s ``__doc__`` item and\n therefore the class\'s *docstring*.\n', 'context-managers': u'\nWith Statement Context Managers\n*******************************\n\nNew in version 2.5.\n\nA *context manager* is an object that defines the runtime context to\nbe established when executing a ``with`` statement. The context\nmanager handles the entry into, and the exit from, the desired runtime\ncontext for the execution of the block of code. Context managers are\nnormally invoked using the ``with`` statement (described in section\n*The with statement*), but can also be used by directly invoking their\nmethods.\n\nTypical uses of context managers include saving and restoring various\nkinds of global state, locking and unlocking resources, closing opened\nfiles, etc.\n\nFor more information on context managers, see *Context Manager Types*.\n\nobject.__enter__(self)\n\n Enter the runtime context related to this object. The ``with``\n statement will bind this method\'s return value to the target(s)\n specified in the ``as`` clause of the statement, if any.\n\nobject.__exit__(self, exc_type, exc_value, traceback)\n\n Exit the runtime context related to this object. The parameters\n describe the exception that caused the context to be exited. If the\n context was exited without an exception, all three arguments will\n be ``None``.\n\n If an exception is supplied, and the method wishes to suppress the\n exception (i.e., prevent it from being propagated), it should\n return a true value. Otherwise, the exception will be processed\n normally upon exit from this method.\n\n Note that ``__exit__()`` methods should not reraise the passed-in\n exception; this is the caller\'s responsibility.\n\nSee also:\n\n **PEP 0343** - The "with" statement\n The specification, background, and examples for the Python\n ``with`` statement.\n', 'continue': u'\nThe ``continue`` statement\n**************************\n\n continue_stmt ::= "continue"\n\n``continue`` may only occur syntactically nested in a ``for`` or\n``while`` loop, but not nested in a function or class definition or\n``finally`` clause within that loop. 
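A minimal context manager along the lines of the ``__enter__()``/``__exit__()`` protocol described above; the class name and the file name are assumptions of the example:

    class opened(object):
        """Open a file and guarantee that it is closed again."""

        def __init__(self, path):
            self.path = path

        def __enter__(self):
            self.f = open(self.path)
            return self.f                  # bound to the "as" target, if any

        def __exit__(self, exc_type, exc_value, traceback):
            self.f.close()
            return False                   # do not suppress a propagating exception

    with opened("example.txt") as f:       # assumes example.txt exists
        print f.read()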
It continues with the next cycle\nof the nearest enclosing loop.\n\nWhen ``continue`` passes control out of a ``try`` statement with a\n``finally`` clause, that ``finally`` clause is executed before really\nstarting the next loop cycle.\n', 'conversions': u'\nArithmetic conversions\n**********************\n\nWhen a description of an arithmetic operator below uses the phrase\n"the numeric arguments are converted to a common type," the arguments\nare coerced using the coercion rules listed at *Coercion rules*. If\nboth arguments are standard numeric types, the following coercions are\napplied:\n\n* If either argument is a complex number, the other is converted to\n complex;\n\n* otherwise, if either argument is a floating point number, the other\n is converted to floating point;\n\n* otherwise, if either argument is a long integer, the other is\n converted to long integer;\n\n* otherwise, both must be plain integers and no conversion is\n necessary.\n\nSome additional rules apply for certain operators (e.g., a string left\nargument to the \'%\' operator). Extensions can define their own\ncoercions.\n', 'customization': u'\nBasic customization\n*******************\n\nobject.__new__(cls[, ...])\n\n Called to create a new instance of class *cls*. ``__new__()`` is a\n static method (special-cased so you need not declare it as such)\n that takes the class of which an instance was requested as its\n first argument. The remaining arguments are those passed to the\n object constructor expression (the call to the class). The return\n value of ``__new__()`` should be the new object instance (usually\n an instance of *cls*).\n\n Typical implementations create a new instance of the class by\n invoking the superclass\'s ``__new__()`` method using\n ``super(currentclass, cls).__new__(cls[, ...])`` with appropriate\n arguments and then modifying the newly-created instance as\n necessary before returning it.\n\n If ``__new__()`` returns an instance of *cls*, then the new\n instance\'s ``__init__()`` method will be invoked like\n ``__init__(self[, ...])``, where *self* is the new instance and the\n remaining arguments are the same as were passed to ``__new__()``.\n\n If ``__new__()`` does not return an instance of *cls*, then the new\n instance\'s ``__init__()`` method will not be invoked.\n\n ``__new__()`` is intended mainly to allow subclasses of immutable\n types (like int, str, or tuple) to customize instance creation. It\n is also commonly overridden in custom metaclasses in order to\n customize class creation.\n\nobject.__init__(self[, ...])\n\n Called when the instance is created. The arguments are those\n passed to the class constructor expression. If a base class has an\n ``__init__()`` method, the derived class\'s ``__init__()`` method,\n if any, must explicitly call it to ensure proper initialization of\n the base class part of the instance; for example:\n ``BaseClass.__init__(self, [args...])``. As a special constraint\n on constructors, no value may be returned; doing so will cause a\n ``TypeError`` to be raised at runtime.\n\nobject.__del__(self)\n\n Called when the instance is about to be destroyed. This is also\n called a destructor. If a base class has a ``__del__()`` method,\n the derived class\'s ``__del__()`` method, if any, must explicitly\n call it to ensure proper deletion of the base class part of the\n instance. Note that it is possible (though not recommended!) for\n the ``__del__()`` method to postpone destruction of the instance by\n creating a new reference to it. 
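A hedged sketch of using ``__new__()`` to customize creation of a subclass of an immutable type, as described above (the class is invented):

    class UpperStr(str):
        """str subclass: customization must happen in __new__, because the
        string value is fixed before __init__ ever runs."""

        def __new__(cls, value):
            return super(UpperStr, cls).__new__(cls, value.upper())

    print UpperStr("spam")                     # SPAM
    print isinstance(UpperStr("spam"), str)    # True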
It may then be called at a later\n time when this new reference is deleted. It is not guaranteed that\n ``__del__()`` methods are called for objects that still exist when\n the interpreter exits.\n\n Note: ``del x`` doesn\'t directly call ``x.__del__()`` --- the former\n decrements the reference count for ``x`` by one, and the latter\n is only called when ``x``\'s reference count reaches zero. Some\n common situations that may prevent the reference count of an\n object from going to zero include: circular references between\n objects (e.g., a doubly-linked list or a tree data structure with\n parent and child pointers); a reference to the object on the\n stack frame of a function that caught an exception (the traceback\n stored in ``sys.exc_traceback`` keeps the stack frame alive); or\n a reference to the object on the stack frame that raised an\n unhandled exception in interactive mode (the traceback stored in\n ``sys.last_traceback`` keeps the stack frame alive). The first\n situation can only be remedied by explicitly breaking the cycles;\n the latter two situations can be resolved by storing ``None`` in\n ``sys.exc_traceback`` or ``sys.last_traceback``. Circular\n references which are garbage are detected when the option cycle\n detector is enabled (it\'s on by default), but can only be cleaned\n up if there are no Python-level ``__del__()`` methods involved.\n Refer to the documentation for the ``gc`` module for more\n information about how ``__del__()`` methods are handled by the\n cycle detector, particularly the description of the ``garbage``\n value.\n\n Warning: Due to the precarious circumstances under which ``__del__()``\n methods are invoked, exceptions that occur during their execution\n are ignored, and a warning is printed to ``sys.stderr`` instead.\n Also, when ``__del__()`` is invoked in response to a module being\n deleted (e.g., when execution of the program is done), other\n globals referenced by the ``__del__()`` method may already have\n been deleted or in the process of being torn down (e.g. the\n import machinery shutting down). For this reason, ``__del__()``\n methods should do the absolute minimum needed to maintain\n external invariants. Starting with version 1.5, Python\n guarantees that globals whose name begins with a single\n underscore are deleted from their module before other globals are\n deleted; if no other references to such globals exist, this may\n help in assuring that imported modules are still available at the\n time when the ``__del__()`` method is called.\n\nobject.__repr__(self)\n\n Called by the ``repr()`` built-in function and by string\n conversions (reverse quotes) to compute the "official" string\n representation of an object. If at all possible, this should look\n like a valid Python expression that could be used to recreate an\n object with the same value (given an appropriate environment). If\n this is not possible, a string of the form ``<...some useful\n description...>`` should be returned. The return value must be a\n string object. If a class defines ``__repr__()`` but not\n ``__str__()``, then ``__repr__()`` is also used when an "informal"\n string representation of instances of that class is required.\n\n This is typically used for debugging, so it is important that the\n representation is information-rich and unambiguous.\n\nobject.__str__(self)\n\n Called by the ``str()`` built-in function and by the ``print``\n statement to compute the "informal" string representation of an\n object. 
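A brief sketch of the ``__repr__()``/``__str__()`` distinction discussed above, using an invented class:

    class Point(object):
        def __init__(self, x, y):
            self.x, self.y = x, y

        def __repr__(self):
            # "Official" representation: ideally a valid expression.
            return "Point(%r, %r)" % (self.x, self.y)

        def __str__(self):
            # "Informal" representation, used by print and str().
            return "(%s, %s)" % (self.x, self.y)

    p = Point(1, 2)
    print repr(p)    # Point(1, 2)
    print p          # (1, 2)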
This differs from ``__repr__()`` in that it does not have\n to be a valid Python expression: a more convenient or concise\n representation may be used instead. The return value must be a\n string object.\n\nobject.__lt__(self, other)\nobject.__le__(self, other)\nobject.__eq__(self, other)\nobject.__ne__(self, other)\nobject.__gt__(self, other)\nobject.__ge__(self, other)\n\n New in version 2.1.\n\n These are the so-called "rich comparison" methods, and are called\n for comparison operators in preference to ``__cmp__()`` below. The\n correspondence between operator symbols and method names is as\n follows: ``x<y`` calls ``x.__lt__(y)``, ``x<=y`` calls ``x.__le__(y)``,\n ``x==y`` calls ``x.__eq__(y)``, ``x!=y`` and ``x<>y`` call ``x.__ne__(y)``,\n ``x>y`` calls ``x.__gt__(y)``, and\n ``x>=y`` calls ``x.__ge__(y)``.\n\n A rich comparison method may return the singleton\n ``NotImplemented`` if it does not implement the operation for a\n given pair of arguments. By convention, ``False`` and ``True`` are\n returned for a successful comparison. However, these methods can\n return any value, so if the comparison operator is used in a\n Boolean context (e.g., in the condition of an ``if`` statement),\n Python will call ``bool()`` on the value to determine if the result\n is true or false.\n\n There are no implied relationships among the comparison operators.\n The truth of ``x==y`` does not imply that ``x!=y`` is false.\n Accordingly, when defining ``__eq__()``, one should also define\n ``__ne__()`` so that the operators will behave as expected. See\n the paragraph on ``__hash__()`` for some important notes on\n creating *hashable* objects which support custom comparison\n operations and are usable as dictionary keys.\n\n There are no swapped-argument versions of these methods (to be used\n when the left argument does not support the operation but the right\n argument does); rather, ``__lt__()`` and ``__gt__()`` are each\n other\'s reflection, ``__le__()`` and ``__ge__()`` are each other\'s\n reflection, and ``__eq__()`` and ``__ne__()`` are their own\n reflection.\n\n Arguments to rich comparison methods are never coerced.\n\n To automatically generate ordering operations from a single root\n operation, see ``functools.total_ordering()``.\n\nobject.__cmp__(self, other)\n\n Called by comparison operations if rich comparison (see above) is\n not defined. Should return a negative integer if ``self < other``,\n zero if ``self == other``, a positive integer if ``self > other``.\n If no ``__cmp__()``, ``__eq__()`` or ``__ne__()`` operation is\n defined, class instances are compared by object identity\n ("address"). See also the description of ``__hash__()`` for some\n important notes on creating *hashable* objects which support custom\n comparison operations and are usable as dictionary keys. (Note: the\n restriction that exceptions are not propagated by ``__cmp__()`` has\n been removed since Python 1.5.)\n\nobject.__rcmp__(self, other)\n\n Changed in version 2.1: No longer supported.\n\nobject.__hash__(self)\n\n Called by built-in function ``hash()`` and for operations on\n members of hashed collections including ``set``, ``frozenset``, and\n ``dict``. ``__hash__()`` should return an integer. The only\n required property is that objects which compare equal have the same\n hash value; it is advised to somehow mix together (e.g.
using\n exclusive or) the hash values for the components of the object that\n also play a part in comparison of objects.\n\n If a class does not define a ``__cmp__()`` or ``__eq__()`` method\n it should not define a ``__hash__()`` operation either; if it\n defines ``__cmp__()`` or ``__eq__()`` but not ``__hash__()``, its\n instances will not be usable in hashed collections. If a class\n defines mutable objects and implements a ``__cmp__()`` or\n ``__eq__()`` method, it should not implement ``__hash__()``, since\n hashable collection implementations require that a object\'s hash\n value is immutable (if the object\'s hash value changes, it will be\n in the wrong hash bucket).\n\n User-defined classes have ``__cmp__()`` and ``__hash__()`` methods\n by default; with them, all objects compare unequal (except with\n themselves) and ``x.__hash__()`` returns ``id(x)``.\n\n Classes which inherit a ``__hash__()`` method from a parent class\n but change the meaning of ``__cmp__()`` or ``__eq__()`` such that\n the hash value returned is no longer appropriate (e.g. by switching\n to a value-based concept of equality instead of the default\n identity based equality) can explicitly flag themselves as being\n unhashable by setting ``__hash__ = None`` in the class definition.\n Doing so means that not only will instances of the class raise an\n appropriate ``TypeError`` when a program attempts to retrieve their\n hash value, but they will also be correctly identified as\n unhashable when checking ``isinstance(obj, collections.Hashable)``\n (unlike classes which define their own ``__hash__()`` to explicitly\n raise ``TypeError``).\n\n Changed in version 2.5: ``__hash__()`` may now also return a long\n integer object; the 32-bit integer is then derived from the hash of\n that object.\n\n Changed in version 2.6: ``__hash__`` may now be set to ``None`` to\n explicitly flag instances of a class as unhashable.\n\nobject.__nonzero__(self)\n\n Called to implement truth value testing and the built-in operation\n ``bool()``; should return ``False`` or ``True``, or their integer\n equivalents ``0`` or ``1``. When this method is not defined,\n ``__len__()`` is called, if it is defined, and the object is\n considered true if its result is nonzero. If a class defines\n neither ``__len__()`` nor ``__nonzero__()``, all its instances are\n considered true.\n\nobject.__unicode__(self)\n\n Called to implement ``unicode()`` built-in; should return a Unicode\n object. When this method is not defined, string conversion is\n attempted, and the result of string conversion is converted to\n Unicode using the system default encoding.\n', - 'debugger': u'\n``pdb`` --- The Python Debugger\n*******************************\n\nThe module ``pdb`` defines an interactive source code debugger for\nPython programs. It supports setting (conditional) breakpoints and\nsingle stepping at the source line level, inspection of stack frames,\nsource code listing, and evaluation of arbitrary Python code in the\ncontext of any stack frame. It also supports post-mortem debugging\nand can be called under program control.\n\nThe debugger is extensible --- it is actually defined as the class\n``Pdb``. This is currently undocumented but easily understood by\nreading the source. The extension interface uses the modules ``bdb``\nand ``cmd``.\n\nThe debugger\'s prompt is ``(Pdb)``. 
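One way to read the ``__eq__()``/``__hash__()`` guidance above, sketched with an invented class:

    class Colour(object):
        def __init__(self, name):
            self.name = name

        def __eq__(self, other):
            return isinstance(other, Colour) and self.name == other.name

        def __ne__(self, other):
            return not self == other

        def __hash__(self):
            # Objects that compare equal must hash equal, so hash exactly
            # the state that takes part in the comparison.
            return hash(self.name)

    print Colour("red") == Colour("red")              # True
    print len(set([Colour("red"), Colour("red")]))    # 1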
Typical usage to run a program\nunder control of the debugger is:\n\n >>> import pdb\n >>> import mymodule\n >>> pdb.run(\'mymodule.test()\')\n > <string>(0)?()\n (Pdb) continue\n > <string>(1)?()\n (Pdb) continue\n NameError: \'spam\'\n > <string>(1)?()\n (Pdb)\n\n``pdb.py`` can also be invoked as a script to debug other scripts.\nFor example:\n\n python -m pdb myscript.py\n\nWhen invoked as a script, pdb will automatically enter post-mortem\ndebugging if the program being debugged exits abnormally. After post-\nmortem debugging (or after normal exit of the program), pdb will\nrestart the program. Automatic restarting preserves pdb\'s state (such\nas breakpoints) and in most cases is more useful than quitting the\ndebugger upon program\'s exit.\n\nNew in version 2.4: Restarting post-mortem behavior added.\n\nThe typical usage to break into the debugger from a running program is\nto insert\n\n import pdb; pdb.set_trace()\n\nat the location you want to break into the debugger. You can then\nstep through the code following this statement, and continue running\nwithout the debugger using the ``c`` command.\n\nThe typical usage to inspect a crashed program is:\n\n >>> import pdb\n >>> import mymodule\n >>> mymodule.test()\n Traceback (most recent call last):\n File "<stdin>", line 1, in ?\n File "./mymodule.py", line 4, in test\n test2()\n File "./mymodule.py", line 3, in test2\n print spam\n NameError: spam\n >>> pdb.pm()\n > ./mymodule.py(3)test2()\n -> print spam\n (Pdb)\n\nThe module defines the following functions; each enters the debugger\nin a slightly different way:\n\npdb.run(statement[, globals[, locals]])\n\n Execute the *statement* (given as a string) under debugger control.\n The debugger prompt appears before any code is executed; you can\n set breakpoints and type ``continue``, or you can step through the\n statement using ``step`` or ``next`` (all these commands are\n explained below). The optional *globals* and *locals* arguments\n specify the environment in which the code is executed; by default\n the dictionary of the module ``__main__`` is used. (See the\n explanation of the ``exec`` statement or the ``eval()`` built-in\n function.)\n\npdb.runeval(expression[, globals[, locals]])\n\n Evaluate the *expression* (given as a string) under debugger\n control. When ``runeval()`` returns, it returns the value of the\n expression. Otherwise this function is similar to ``run()``.\n\npdb.runcall(function[, argument, ...])\n\n Call the *function* (a function or method object, not a string)\n with the given arguments. When ``runcall()`` returns, it returns\n whatever the function call returned. The debugger prompt appears\n as soon as the function is entered.\n\npdb.set_trace()\n\n Enter the debugger at the calling stack frame. This is useful to\n hard-code a breakpoint at a given point in a program, even if the\n code is not otherwise being debugged (e.g. when an assertion\n fails).\n\npdb.post_mortem([traceback])\n\n Enter post-mortem debugging of the given *traceback* object. If no\n *traceback* is given, it uses the one of the exception that is\n currently being handled (an exception must be being handled if the\n default is to be used).\n\npdb.pm()\n\n Enter post-mortem debugging of the traceback found in\n ``sys.last_traceback``.\n\nThe ``run_*`` functions and ``set_trace()`` are aliases for\ninstantiating the ``Pdb`` class and calling the method of the same\nname.
If you want to access further features, you have to do this\nyourself:\n\nclass class pdb.Pdb(completekey=\'tab\', stdin=None, stdout=None, skip=None)\n\n ``Pdb`` is the debugger class.\n\n The *completekey*, *stdin* and *stdout* arguments are passed to the\n underlying ``cmd.Cmd`` class; see the description there.\n\n The *skip* argument, if given, must be an iterable of glob-style\n module name patterns. The debugger will not step into frames that\n originate in a module that matches one of these patterns. [1]\n\n Example call to enable tracing with *skip*:\n\n import pdb; pdb.Pdb(skip=[\'django.*\']).set_trace()\n\n New in version 2.7: The *skip* argument.\n\n run(statement[, globals[, locals]])\n runeval(expression[, globals[, locals]])\n runcall(function[, argument, ...])\n set_trace()\n\n See the documentation for the functions explained above.\n', + 'debugger': u'\n``pdb`` --- The Python Debugger\n*******************************\n\nThe module ``pdb`` defines an interactive source code debugger for\nPython programs. It supports setting (conditional) breakpoints and\nsingle stepping at the source line level, inspection of stack frames,\nsource code listing, and evaluation of arbitrary Python code in the\ncontext of any stack frame. It also supports post-mortem debugging\nand can be called under program control.\n\nThe debugger is extensible --- it is actually defined as the class\n``Pdb``. This is currently undocumented but easily understood by\nreading the source. The extension interface uses the modules ``bdb``\nand ``cmd``.\n\nThe debugger\'s prompt is ``(Pdb)``. Typical usage to run a program\nunder control of the debugger is:\n\n >>> import pdb\n >>> import mymodule\n >>> pdb.run(\'mymodule.test()\')\n > <string>(0)?()\n (Pdb) continue\n > <string>(1)?()\n (Pdb) continue\n NameError: \'spam\'\n > <string>(1)?()\n (Pdb)\n\n``pdb.py`` can also be invoked as a script to debug other scripts.\nFor example:\n\n python -m pdb myscript.py\n\nWhen invoked as a script, pdb will automatically enter post-mortem\ndebugging if the program being debugged exits abnormally. After post-\nmortem debugging (or after normal exit of the program), pdb will\nrestart the program. Automatic restarting preserves pdb\'s state (such\nas breakpoints) and in most cases is more useful than quitting the\ndebugger upon program\'s exit.\n\nNew in version 2.4: Restarting post-mortem behavior added.\n\nThe typical usage to break into the debugger from a running program is\nto insert\n\n import pdb; pdb.set_trace()\n\nat the location you want to break into the debugger. You can then\nstep through the code following this statement, and continue running\nwithout the debugger using the ``c`` command.\n\nThe typical usage to inspect a crashed program is:\n\n >>> import pdb\n >>> import mymodule\n >>> mymodule.test()\n Traceback (most recent call last):\n File "<stdin>", line 1, in ?\n File "./mymodule.py", line 4, in test\n test2()\n File "./mymodule.py", line 3, in test2\n print spam\n NameError: spam\n >>> pdb.pm()\n > ./mymodule.py(3)test2()\n -> print spam\n (Pdb)\n\nThe module defines the following functions; each enters the debugger\nin a slightly different way:\n\npdb.run(statement[, globals[, locals]])\n\n Execute the *statement* (given as a string) under debugger control.\n The debugger prompt appears before any code is executed; you can\n set breakpoints and type ``continue``, or you can step through the\n statement using ``step`` or ``next`` (all these commands are\n explained below).
The optional *globals* and *locals* arguments\n specify the environment in which the code is executed; by default\n the dictionary of the module ``__main__`` is used. (See the\n explanation of the ``exec`` statement or the ``eval()`` built-in\n function.)\n\npdb.runeval(expression[, globals[, locals]])\n\n Evaluate the *expression* (given as a string) under debugger\n control. When ``runeval()`` returns, it returns the value of the\n expression. Otherwise this function is similar to ``run()``.\n\npdb.runcall(function[, argument, ...])\n\n Call the *function* (a function or method object, not a string)\n with the given arguments. When ``runcall()`` returns, it returns\n whatever the function call returned. The debugger prompt appears\n as soon as the function is entered.\n\npdb.set_trace()\n\n Enter the debugger at the calling stack frame. This is useful to\n hard-code a breakpoint at a given point in a program, even if the\n code is not otherwise being debugged (e.g. when an assertion\n fails).\n\npdb.post_mortem([traceback])\n\n Enter post-mortem debugging of the given *traceback* object. If no\n *traceback* is given, it uses the one of the exception that is\n currently being handled (an exception must be being handled if the\n default is to be used).\n\npdb.pm()\n\n Enter post-mortem debugging of the traceback found in\n ``sys.last_traceback``.\n\nThe ``run*`` functions and ``set_trace()`` are aliases for\ninstantiating the ``Pdb`` class and calling the method of the same\nname. If you want to access further features, you have to do this\nyourself:\n\nclass class pdb.Pdb(completekey=\'tab\', stdin=None, stdout=None, skip=None)\n\n ``Pdb`` is the debugger class.\n\n The *completekey*, *stdin* and *stdout* arguments are passed to the\n underlying ``cmd.Cmd`` class; see the description there.\n\n The *skip* argument, if given, must be an iterable of glob-style\n module name patterns. The debugger will not step into frames that\n originate in a module that matches one of these patterns. [1]\n\n Example call to enable tracing with *skip*:\n\n import pdb; pdb.Pdb(skip=[\'django.*\']).set_trace()\n\n New in version 2.7: The *skip* argument.\n\n run(statement[, globals[, locals]])\n runeval(expression[, globals[, locals]])\n runcall(function[, argument, ...])\n set_trace()\n\n See the documentation for the functions explained above.\n', 'del': u'\nThe ``del`` statement\n*********************\n\n del_stmt ::= "del" target_list\n\nDeletion is recursively defined very similar to the way assignment is\ndefined. Rather that spelling it out in full details, here are some\nhints.\n\nDeletion of a target list recursively deletes each target, from left\nto right.\n\nDeletion of a name removes the binding of that name from the local or\nglobal namespace, depending on whether the name occurs in a ``global``\nstatement in the same code block. 
If the name is unbound, a\n``NameError`` exception will be raised.\n\nIt is illegal to delete a name from the local namespace if it occurs\nas a free variable in a nested block.\n\nDeletion of attribute references, subscriptions and slicings is passed\nto the primary object involved; deletion of a slicing is in general\nequivalent to assignment of an empty slice of the right type (but even\nthis is determined by the sliced object).\n', 'dict': u'\nDictionary displays\n*******************\n\nA dictionary display is a possibly empty series of key/datum pairs\nenclosed in curly braces:\n\n dict_display ::= "{" [key_datum_list | dict_comprehension] "}"\n key_datum_list ::= key_datum ("," key_datum)* [","]\n key_datum ::= expression ":" expression\n dict_comprehension ::= expression ":" expression comp_for\n\nA dictionary display yields a new dictionary object.\n\nIf a comma-separated sequence of key/datum pairs is given, they are\nevaluated from left to right to define the entries of the dictionary:\neach key object is used as a key into the dictionary to store the\ncorresponding datum. This means that you can specify the same key\nmultiple times in the key/datum list, and the final dictionary\'s value\nfor that key will be the last one given.\n\nA dict comprehension, in contrast to list and set comprehensions,\nneeds two expressions separated with a colon followed by the usual\n"for" and "if" clauses. When the comprehension is run, the resulting\nkey and value elements are inserted in the new dictionary in the order\nthey are produced.\n\nRestrictions on the types of the key values are listed earlier in\nsection *The standard type hierarchy*. (To summarize, the key type\nshould be *hashable*, which excludes all mutable objects.) Clashes\nbetween duplicate keys are not detected; the last datum (textually\nrightmost in the display) stored for a given key value prevails.\n', 'dynamic-features': u'\nInteraction with dynamic features\n*********************************\n\nThere are several cases where Python statements are illegal when used\nin conjunction with nested scopes that contain free variables.\n\nIf a variable is referenced in an enclosing scope, it is illegal to\ndelete the name. An error will be reported at compile time.\n\nIf the wild card form of import --- ``import *`` --- is used in a\nfunction and the function contains or is a nested block with free\nvariables, the compiler will raise a ``SyntaxError``.\n\nIf ``exec`` is used in a function and the function contains or is a\nnested block with free variables, the compiler will raise a\n``SyntaxError`` unless the exec explicitly specifies the local\nnamespace for the ``exec``. (In other words, ``exec obj`` would be\nillegal, but ``exec obj in ns`` would be legal.)\n\nThe ``eval()``, ``execfile()``, and ``input()`` functions and the\n``exec`` statement do not have access to the full environment for\nresolving names. Names may be resolved in the local and global\nnamespaces of the caller. Free variables are not resolved in the\nnearest enclosing namespace, but in the global namespace. 
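A compact sketch of this lookup rule (the names ``outer``, ``inner`` and ``y`` are arbitrary):

   def outer():
       y = 'enclosing'              # never visible to the eval() below
       def inner():
           # 'y' is free in the eval'd code; it is looked up in the
           # global namespace, not in outer()'s scope.
           return eval('y')
       return inner()

   y = 'module level'
   print outer()                    # prints 'module level'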
[1] The\n``exec`` statement and the ``eval()`` and ``execfile()`` functions\nhave optional arguments to override the global and local namespace.\nIf only one namespace is specified, it is used for both.\n', 'else': u'\nThe ``if`` statement\n********************\n\nThe ``if`` statement is used for conditional execution:\n\n if_stmt ::= "if" expression ":" suite\n ( "elif" expression ":" suite )*\n ["else" ":" suite]\n\nIt selects exactly one of the suites by evaluating the expressions one\nby one until one is found to be true (see section *Boolean operations*\nfor the definition of true and false); then that suite is executed\n(and no other part of the ``if`` statement is executed or evaluated).\nIf all expressions are false, the suite of the ``else`` clause, if\npresent, is executed.\n', 'exceptions': u'\nExceptions\n**********\n\nExceptions are a means of breaking out of the normal flow of control\nof a code block in order to handle errors or other exceptional\nconditions. An exception is *raised* at the point where the error is\ndetected; it may be *handled* by the surrounding code block or by any\ncode block that directly or indirectly invoked the code block where\nthe error occurred.\n\nThe Python interpreter raises an exception when it detects a run-time\nerror (such as division by zero). A Python program can also\nexplicitly raise an exception with the ``raise`` statement. Exception\nhandlers are specified with the ``try`` ... ``except`` statement. The\n``finally`` clause of such a statement can be used to specify cleanup\ncode which does not handle the exception, but is executed whether an\nexception occurred or not in the preceding code.\n\nPython uses the "termination" model of error handling: an exception\nhandler can find out what happened and continue execution at an outer\nlevel, but it cannot repair the cause of the error and retry the\nfailing operation (except by re-entering the offending piece of code\nfrom the top).\n\nWhen an exception is not handled at all, the interpreter terminates\nexecution of the program, or returns to its interactive main loop. In\neither case, it prints a stack backtrace, except when the exception is\n``SystemExit``.\n\nExceptions are identified by class instances. The ``except`` clause\nis selected depending on the class of the instance: it must reference\nthe class of the instance or a base class thereof. The instance can\nbe received by the handler and can carry additional information about\nthe exceptional condition.\n\nExceptions can also be identified by strings, in which case the\n``except`` clause is selected by object identity. An arbitrary value\ncan be raised along with the identifying string which can be passed to\nthe handler.\n\nNote: Messages to exceptions are not part of the Python API. Their\n contents may change from one version of Python to the next without\n warning and should not be relied on by code which will run under\n multiple versions of the interpreter.\n\nSee also the description of the ``try`` statement in section *The try\nstatement* and ``raise`` statement in section *The raise statement*.\n\n-[ Footnotes ]-\n\n[1] This limitation occurs because the code that is executed by these\n operations is not available at the time the module is compiled.\n', 'exec': u'\nThe ``exec`` statement\n**********************\n\n exec_stmt ::= "exec" or_expr ["in" expression ["," expression]]\n\nThis statement supports dynamic execution of Python code. 
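For example, executing a string in explicitly supplied dictionary namespaces (``ns``, ``g`` and ``l`` are just illustrative names):

   ns = {}
   exec "x = 2 ** 10" in ns
   print ns['x']                # 1024

   g, l = {'base': 2}, {}
   exec "x = base ** 10" in g, l
   print l['x']                 # 1024; 'base' was found in the globals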
The first\nexpression should evaluate to either a string, an open file object, or\na code object. If it is a string, the string is parsed as a suite of\nPython statements which is then executed (unless a syntax error\noccurs). [1] If it is an open file, the file is parsed until EOF and\nexecuted. If it is a code object, it is simply executed. In all\ncases, the code that\'s executed is expected to be valid as file input\n(see section *File input*). Be aware that the ``return`` and\n``yield`` statements may not be used outside of function definitions\neven within the context of code passed to the ``exec`` statement.\n\nIn all cases, if the optional parts are omitted, the code is executed\nin the current scope. If only the first expression after ``in`` is\nspecified, it should be a dictionary, which will be used for both the\nglobal and the local variables. If two expressions are given, they\nare used for the global and local variables, respectively. If\nprovided, *locals* can be any mapping object.\n\nChanged in version 2.4: Formerly, *locals* was required to be a\ndictionary.\n\nAs a side effect, an implementation may insert additional keys into\nthe dictionaries given besides those corresponding to variable names\nset by the executed code. For example, the current implementation may\nadd a reference to the dictionary of the built-in module\n``__builtin__`` under the key ``__builtins__`` (!).\n\n**Programmer\'s hints:** dynamic evaluation of expressions is supported\nby the built-in function ``eval()``. The built-in functions\n``globals()`` and ``locals()`` return the current global and local\ndictionary, respectively, which may be useful to pass around for use\nby ``exec``.\n\n-[ Footnotes ]-\n\n[1] Note that the parser only accepts the Unix-style end of line\n convention. If you are reading the code from a file, make sure to\n use universal newline mode to convert Windows or Mac-style\n newlines.\n', - 'execmodel': u'\nExecution model\n***************\n\n\nNaming and binding\n==================\n\n*Names* refer to objects. Names are introduced by name binding\noperations. Each occurrence of a name in the program text refers to\nthe *binding* of that name established in the innermost function block\ncontaining the use.\n\nA *block* is a piece of Python program text that is executed as a\nunit. The following are blocks: a module, a function body, and a class\ndefinition. Each command typed interactively is a block. A script\nfile (a file given as standard input to the interpreter or specified\non the interpreter command line the first argument) is a code block.\nA script command (a command specified on the interpreter command line\nwith the \'**-c**\' option) is a code block. The file read by the\nbuilt-in function ``execfile()`` is a code block. The string argument\npassed to the built-in function ``eval()`` and to the ``exec``\nstatement is a code block. The expression read and evaluated by the\nbuilt-in function ``input()`` is a code block.\n\nA code block is executed in an *execution frame*. A frame contains\nsome administrative information (used for debugging) and determines\nwhere and how execution continues after the code block\'s execution has\ncompleted.\n\nA *scope* defines the visibility of a name within a block. If a local\nvariable is defined in a block, its scope includes that block. If the\ndefinition occurs in a function block, the scope extends to any blocks\ncontained within the defining one, unless a contained block introduces\na different binding for the name. 
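A short sketch of how an inner binding shadows the enclosing one (function names are arbitrary):

   def f():
       x = 'outer'
       def g():
           print x          # no binding in g(): resolves to f()'s x
       def h():
           x = 'inner'      # a new, independent binding local to h()
           print x
       g()                  # prints 'outer'
       h()                  # prints 'inner'
       print x              # still 'outer'

   f()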
The scope of names defined in a\nclass block is limited to the class block; it does not extend to the\ncode blocks of methods -- this includes generator expressions since\nthey are implemented using a function scope. This means that the\nfollowing will fail:\n\n class A:\n a = 42\n b = list(a + i for i in range(10))\n\nWhen a name is used in a code block, it is resolved using the nearest\nenclosing scope. The set of all such scopes visible to a code block\nis called the block\'s *environment*.\n\nIf a name is bound in a block, it is a local variable of that block.\nIf a name is bound at the module level, it is a global variable. (The\nvariables of the module code block are local and global.) If a\nvariable is used in a code block but not defined there, it is a *free\nvariable*.\n\nWhen a name is not found at all, a ``NameError`` exception is raised.\nIf the name refers to a local variable that has not been bound, a\n``UnboundLocalError`` exception is raised. ``UnboundLocalError`` is a\nsubclass of ``NameError``.\n\nThe following constructs bind names: formal parameters to functions,\n``import`` statements, class and function definitions (these bind the\nclass or function name in the defining block), and targets that are\nidentifiers if occurring in an assignment, ``for`` loop header, in the\nsecond position of an ``except`` clause header or after ``as`` in a\n``with`` statement. The ``import`` statement of the form ``from ...\nimport *`` binds all names defined in the imported module, except\nthose beginning with an underscore. This form may only be used at the\nmodule level.\n\nA target occurring in a ``del`` statement is also considered bound for\nthis purpose (though the actual semantics are to unbind the name). It\nis illegal to unbind a name that is referenced by an enclosing scope;\nthe compiler will report a ``SyntaxError``.\n\nEach assignment or import statement occurs within a block defined by a\nclass or function definition or at the module level (the top-level\ncode block).\n\nIf a name binding operation occurs anywhere within a code block, all\nuses of the name within the block are treated as references to the\ncurrent block. This can lead to errors when a name is used within a\nblock before it is bound. This rule is subtle. Python lacks\ndeclarations and allows name binding operations to occur anywhere\nwithin a code block. The local variables of a code block can be\ndetermined by scanning the entire text of the block for name binding\noperations.\n\nIf the global statement occurs within a block, all uses of the name\nspecified in the statement refer to the binding of that name in the\ntop-level namespace. Names are resolved in the top-level namespace by\nsearching the global namespace, i.e. the namespace of the module\ncontaining the code block, and the builtins namespace, the namespace\nof the module ``__builtin__``. The global namespace is searched\nfirst. If the name is not found there, the builtins namespace is\nsearched. The global statement must precede all uses of the name.\n\nThe builtins namespace associated with the execution of a code block\nis actually found by looking up the name ``__builtins__`` in its\nglobal namespace; this should be a dictionary or a module (in the\nlatter case the module\'s dictionary is used). By default, when in the\n``__main__`` module, ``__builtins__`` is the built-in module\n``__builtin__`` (note: no \'s\'); when in any other module,\n``__builtins__`` is an alias for the dictionary of the ``__builtin__``\nmodule itself. 
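For instance, a name that is not bound in the global namespace is found in the builtins namespace, while a global binding takes precedence over it (module-level code, CPython behaviour):

   import __builtin__

   print len is __builtin__.len    # True: 'len' comes from the builtins namespace
   len = lambda seq: 'shadowed'    # a module-level binding now wins
   print len([1, 2, 3])            # 'shadowed'
   del len                         # unbind it; lookup falls back to builtins
   print len([1, 2, 3])            # 3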
``__builtins__`` can be set to a user-created\ndictionary to create a weak form of restricted execution.\n\n**CPython implementation detail:** Users should not touch\n``__builtins__``; it is strictly an implementation detail. Users\nwanting to override values in the builtins namespace should ``import``\nthe ``__builtin__`` (no \'s\') module and modify its attributes\nappropriately.\n\nThe namespace for a module is automatically created the first time a\nmodule is imported. The main module for a script is always called\n``__main__``.\n\nThe global statement has the same scope as a name binding operation in\nthe same block. If the nearest enclosing scope for a free variable\ncontains a global statement, the free variable is treated as a global.\n\nA class definition is an executable statement that may use and define\nnames. These references follow the normal rules for name resolution.\nThe namespace of the class definition becomes the attribute dictionary\nof the class. Names defined at the class scope are not visible in\nmethods.\n\n\nInteraction with dynamic features\n---------------------------------\n\nThere are several cases where Python statements are illegal when used\nin conjunction with nested scopes that contain free variables.\n\nIf a variable is referenced in an enclosing scope, it is illegal to\ndelete the name. An error will be reported at compile time.\n\nIf the wild card form of import --- ``import *`` --- is used in a\nfunction and the function contains or is a nested block with free\nvariables, the compiler will raise a ``SyntaxError``.\n\nIf ``exec`` is used in a function and the function contains or is a\nnested block with free variables, the compiler will raise a\n``SyntaxError`` unless the exec explicitly specifies the local\nnamespace for the ``exec``. (In other words, ``exec obj`` would be\nillegal, but ``exec obj in ns`` would be legal.)\n\nThe ``eval()``, ``execfile()``, and ``input()`` functions and the\n``exec`` statement do not have access to the full environment for\nresolving names. Names may be resolved in the local and global\nnamespaces of the caller. Free variables are not resolved in the\nnearest enclosing namespace, but in the global namespace. [1] The\n``exec`` statement and the ``eval()`` and ``execfile()`` functions\nhave optional arguments to override the global and local namespace.\nIf only one namespace is specified, it is used for both.\n\n\nExceptions\n==========\n\nExceptions are a means of breaking out of the normal flow of control\nof a code block in order to handle errors or other exceptional\nconditions. An exception is *raised* at the point where the error is\ndetected; it may be *handled* by the surrounding code block or by any\ncode block that directly or indirectly invoked the code block where\nthe error occurred.\n\nThe Python interpreter raises an exception when it detects a run-time\nerror (such as division by zero). A Python program can also\nexplicitly raise an exception with the ``raise`` statement. Exception\nhandlers are specified with the ``try`` ... ``except`` statement. 
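A minimal example of such a handler together with a ``finally`` clause (the ``parse_int`` helper is made up):

   def parse_int(text):
       try:
           return int(text)
       except ValueError, exc:      # handler selected by the exception's class
           print 'could not parse %r: %s' % (text, exc)
           return None
       finally:
           print 'cleanup runs whether or not an exception occurred'

   parse_int('42')
   parse_int('spam')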
The\n``finally`` clause of such a statement can be used to specify cleanup\ncode which does not handle the exception, but is executed whether an\nexception occurred or not in the preceding code.\n\nPython uses the "termination" model of error handling: an exception\nhandler can find out what happened and continue execution at an outer\nlevel, but it cannot repair the cause of the error and retry the\nfailing operation (except by re-entering the offending piece of code\nfrom the top).\n\nWhen an exception is not handled at all, the interpreter terminates\nexecution of the program, or returns to its interactive main loop. In\neither case, it prints a stack backtrace, except when the exception is\n``SystemExit``.\n\nExceptions are identified by class instances. The ``except`` clause\nis selected depending on the class of the instance: it must reference\nthe class of the instance or a base class thereof. The instance can\nbe received by the handler and can carry additional information about\nthe exceptional condition.\n\nExceptions can also be identified by strings, in which case the\n``except`` clause is selected by object identity. An arbitrary value\ncan be raised along with the identifying string which can be passed to\nthe handler.\n\nNote: Messages to exceptions are not part of the Python API. Their\n contents may change from one version of Python to the next without\n warning and should not be relied on by code which will run under\n multiple versions of the interpreter.\n\nSee also the description of the ``try`` statement in section *The try\nstatement* and ``raise`` statement in section *The raise statement*.\n\n-[ Footnotes ]-\n\n[1] This limitation occurs because the code that is executed by these\n operations is not available at the time the module is compiled.\n', + 'execmodel': u'\nExecution model\n***************\n\n\nNaming and binding\n==================\n\n*Names* refer to objects. Names are introduced by name binding\noperations. Each occurrence of a name in the program text refers to\nthe *binding* of that name established in the innermost function block\ncontaining the use.\n\nA *block* is a piece of Python program text that is executed as a\nunit. The following are blocks: a module, a function body, and a class\ndefinition. Each command typed interactively is a block. A script\nfile (a file given as standard input to the interpreter or specified\non the interpreter command line the first argument) is a code block.\nA script command (a command specified on the interpreter command line\nwith the \'**-c**\' option) is a code block. The file read by the\nbuilt-in function ``execfile()`` is a code block. The string argument\npassed to the built-in function ``eval()`` and to the ``exec``\nstatement is a code block. The expression read and evaluated by the\nbuilt-in function ``input()`` is a code block.\n\nA code block is executed in an *execution frame*. A frame contains\nsome administrative information (used for debugging) and determines\nwhere and how execution continues after the code block\'s execution has\ncompleted.\n\nA *scope* defines the visibility of a name within a block. If a local\nvariable is defined in a block, its scope includes that block. If the\ndefinition occurs in a function block, the scope extends to any blocks\ncontained within the defining one, unless a contained block introduces\na different binding for the name. 
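One consequence of treating every assigned name as local to the whole block (sketch; the names are arbitrary):

   x = 'global'

   def f():
       print x        # raises UnboundLocalError: the assignment below
       x = 'local'    # makes x local for the entire function body

   try:
       f()
   except UnboundLocalError, exc:
       print exc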
The scope of names defined in a\nclass block is limited to the class block; it does not extend to the\ncode blocks of methods -- this includes generator expressions since\nthey are implemented using a function scope. This means that the\nfollowing will fail:\n\n class A:\n a = 42\n b = list(a + i for i in range(10))\n\nWhen a name is used in a code block, it is resolved using the nearest\nenclosing scope. The set of all such scopes visible to a code block\nis called the block\'s *environment*.\n\nIf a name is bound in a block, it is a local variable of that block.\nIf a name is bound at the module level, it is a global variable. (The\nvariables of the module code block are local and global.) If a\nvariable is used in a code block but not defined there, it is a *free\nvariable*.\n\nWhen a name is not found at all, a ``NameError`` exception is raised.\nIf the name refers to a local variable that has not been bound, a\n``UnboundLocalError`` exception is raised. ``UnboundLocalError`` is a\nsubclass of ``NameError``.\n\nThe following constructs bind names: formal parameters to functions,\n``import`` statements, class and function definitions (these bind the\nclass or function name in the defining block), and targets that are\nidentifiers if occurring in an assignment, ``for`` loop header, in the\nsecond position of an ``except`` clause header or after ``as`` in a\n``with`` statement. The ``import`` statement of the form ``from ...\nimport *`` binds all names defined in the imported module, except\nthose beginning with an underscore. This form may only be used at the\nmodule level.\n\nA target occurring in a ``del`` statement is also considered bound for\nthis purpose (though the actual semantics are to unbind the name). It\nis illegal to unbind a name that is referenced by an enclosing scope;\nthe compiler will report a ``SyntaxError``.\n\nEach assignment or import statement occurs within a block defined by a\nclass or function definition or at the module level (the top-level\ncode block).\n\nIf a name binding operation occurs anywhere within a code block, all\nuses of the name within the block are treated as references to the\ncurrent block. This can lead to errors when a name is used within a\nblock before it is bound. This rule is subtle. Python lacks\ndeclarations and allows name binding operations to occur anywhere\nwithin a code block. The local variables of a code block can be\ndetermined by scanning the entire text of the block for name binding\noperations.\n\nIf the global statement occurs within a block, all uses of the name\nspecified in the statement refer to the binding of that name in the\ntop-level namespace. Names are resolved in the top-level namespace by\nsearching the global namespace, i.e. the namespace of the module\ncontaining the code block, and the builtins namespace, the namespace\nof the module ``__builtin__``. The global namespace is searched\nfirst. If the name is not found there, the builtins namespace is\nsearched. The global statement must precede all uses of the name.\n\nThe builtins namespace associated with the execution of a code block\nis actually found by looking up the name ``__builtins__`` in its\nglobal namespace; this should be a dictionary or a module (in the\nlatter case the module\'s dictionary is used). By default, when in the\n``__main__`` module, ``__builtins__`` is the built-in module\n``__builtin__`` (note: no \'s\'); when in any other module,\n``__builtins__`` is an alias for the dictionary of the ``__builtin__``\nmodule itself. 
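For illustration, adding an attribute to the ``__builtin__`` module makes it reachable as a plain name everywhere (the ``answer`` name is made up):

   import __builtin__

   __builtin__.answer = 42   # now part of the builtins namespace
   print answer              # 42: not in globals, so builtins is searched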
``__builtins__`` can be set to a user-created\ndictionary to create a weak form of restricted execution.\n\n**CPython implementation detail:** Users should not touch\n``__builtins__``; it is strictly an implementation detail. Users\nwanting to override values in the builtins namespace should ``import``\nthe ``__builtin__`` (no \'s\') module and modify its attributes\nappropriately.\n\nThe namespace for a module is automatically created the first time a\nmodule is imported. The main module for a script is always called\n``__main__``.\n\nThe ``global`` statement has the same scope as a name binding\noperation in the same block. If the nearest enclosing scope for a\nfree variable contains a global statement, the free variable is\ntreated as a global.\n\nA class definition is an executable statement that may use and define\nnames. These references follow the normal rules for name resolution.\nThe namespace of the class definition becomes the attribute dictionary\nof the class. Names defined at the class scope are not visible in\nmethods.\n\n\nInteraction with dynamic features\n---------------------------------\n\nThere are several cases where Python statements are illegal when used\nin conjunction with nested scopes that contain free variables.\n\nIf a variable is referenced in an enclosing scope, it is illegal to\ndelete the name. An error will be reported at compile time.\n\nIf the wild card form of import --- ``import *`` --- is used in a\nfunction and the function contains or is a nested block with free\nvariables, the compiler will raise a ``SyntaxError``.\n\nIf ``exec`` is used in a function and the function contains or is a\nnested block with free variables, the compiler will raise a\n``SyntaxError`` unless the exec explicitly specifies the local\nnamespace for the ``exec``. (In other words, ``exec obj`` would be\nillegal, but ``exec obj in ns`` would be legal.)\n\nThe ``eval()``, ``execfile()``, and ``input()`` functions and the\n``exec`` statement do not have access to the full environment for\nresolving names. Names may be resolved in the local and global\nnamespaces of the caller. Free variables are not resolved in the\nnearest enclosing namespace, but in the global namespace. [1] The\n``exec`` statement and the ``eval()`` and ``execfile()`` functions\nhave optional arguments to override the global and local namespace.\nIf only one namespace is specified, it is used for both.\n\n\nExceptions\n==========\n\nExceptions are a means of breaking out of the normal flow of control\nof a code block in order to handle errors or other exceptional\nconditions. An exception is *raised* at the point where the error is\ndetected; it may be *handled* by the surrounding code block or by any\ncode block that directly or indirectly invoked the code block where\nthe error occurred.\n\nThe Python interpreter raises an exception when it detects a run-time\nerror (such as division by zero). A Python program can also\nexplicitly raise an exception with the ``raise`` statement. Exception\nhandlers are specified with the ``try`` ... ``except`` statement. 
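For example, a handler can receive an instance carrying extra information (the ``ParseError`` class is hypothetical):

   class ParseError(Exception):
       def __init__(self, lineno):
           Exception.__init__(self, 'bad input on line %d' % lineno)
           self.lineno = lineno

   try:
       raise ParseError(3)
   except ParseError, exc:          # selected because ParseError is the instance's class
       print exc, '->', exc.lineno  # bad input on line 3 -> 3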
The\n``finally`` clause of such a statement can be used to specify cleanup\ncode which does not handle the exception, but is executed whether an\nexception occurred or not in the preceding code.\n\nPython uses the "termination" model of error handling: an exception\nhandler can find out what happened and continue execution at an outer\nlevel, but it cannot repair the cause of the error and retry the\nfailing operation (except by re-entering the offending piece of code\nfrom the top).\n\nWhen an exception is not handled at all, the interpreter terminates\nexecution of the program, or returns to its interactive main loop. In\neither case, it prints a stack backtrace, except when the exception is\n``SystemExit``.\n\nExceptions are identified by class instances. The ``except`` clause\nis selected depending on the class of the instance: it must reference\nthe class of the instance or a base class thereof. The instance can\nbe received by the handler and can carry additional information about\nthe exceptional condition.\n\nExceptions can also be identified by strings, in which case the\n``except`` clause is selected by object identity. An arbitrary value\ncan be raised along with the identifying string which can be passed to\nthe handler.\n\nNote: Messages to exceptions are not part of the Python API. Their\n contents may change from one version of Python to the next without\n warning and should not be relied on by code which will run under\n multiple versions of the interpreter.\n\nSee also the description of the ``try`` statement in section *The try\nstatement* and ``raise`` statement in section *The raise statement*.\n\n-[ Footnotes ]-\n\n[1] This limitation occurs because the code that is executed by these\n operations is not available at the time the module is compiled.\n', 'exprlists': u'\nExpression lists\n****************\n\n expression_list ::= expression ( "," expression )* [","]\n\nAn expression list containing at least one comma yields a tuple. The\nlength of the tuple is the number of expressions in the list. The\nexpressions are evaluated from left to right.\n\nThe trailing comma is required only to create a single tuple (a.k.a. a\n*singleton*); it is optional in all other cases. A single expression\nwithout a trailing comma doesn\'t create a tuple, but rather yields the\nvalue of that expression. (To create an empty tuple, use an empty pair\nof parentheses: ``()``.)\n', 'floating': u'\nFloating point literals\n***********************\n\nFloating point literals are described by the following lexical\ndefinitions:\n\n floatnumber ::= pointfloat | exponentfloat\n pointfloat ::= [intpart] fraction | intpart "."\n exponentfloat ::= (intpart | pointfloat) exponent\n intpart ::= digit+\n fraction ::= "." digit+\n exponent ::= ("e" | "E") ["+" | "-"] digit+\n\nNote that the integer and exponent parts of floating point numbers can\nlook like octal integers, but are interpreted using radix 10. For\nexample, ``077e010`` is legal, and denotes the same number as\n``77e10``. The allowed range of floating point literals is\nimplementation-dependent. Some examples of floating point literals:\n\n 3.14 10. 
.001 1e100 3.14e-10 0e0\n\nNote that numeric literals do not include a sign; a phrase like ``-1``\nis actually an expression composed of the unary operator ``-`` and the\nliteral ``1``.\n', 'for': u'\nThe ``for`` statement\n*********************\n\nThe ``for`` statement is used to iterate over the elements of a\nsequence (such as a string, tuple or list) or other iterable object:\n\n for_stmt ::= "for" target_list "in" expression_list ":" suite\n ["else" ":" suite]\n\nThe expression list is evaluated once; it should yield an iterable\nobject. An iterator is created for the result of the\n``expression_list``. The suite is then executed once for each item\nprovided by the iterator, in the order of ascending indices. Each\nitem in turn is assigned to the target list using the standard rules\nfor assignments, and then the suite is executed. When the items are\nexhausted (which is immediately when the sequence is empty), the suite\nin the ``else`` clause, if present, is executed, and the loop\nterminates.\n\nA ``break`` statement executed in the first suite terminates the loop\nwithout executing the ``else`` clause\'s suite. A ``continue``\nstatement executed in the first suite skips the rest of the suite and\ncontinues with the next item, or with the ``else`` clause if there was\nno next item.\n\nThe suite may assign to the variable(s) in the target list; this does\nnot affect the next item assigned to it.\n\nThe target list is not deleted when the loop is finished, but if the\nsequence is empty, it will not have been assigned to at all by the\nloop. Hint: the built-in function ``range()`` returns a sequence of\nintegers suitable to emulate the effect of Pascal\'s ``for i := a to b\ndo``; e.g., ``range(3)`` returns the list ``[0, 1, 2]``.\n\nNote: There is a subtlety when the sequence is being modified by the loop\n (this can only occur for mutable sequences, i.e. lists). An internal\n counter is used to keep track of which item is used next, and this\n is incremented on each iteration. When this counter has reached the\n length of the sequence the loop terminates. This means that if the\n suite deletes the current (or a previous) item from the sequence,\n the next item will be skipped (since it gets the index of the\n current item which has already been treated). Likewise, if the\n suite inserts an item in the sequence before the current item, the\n current item will be treated again the next time through the loop.\n This can lead to nasty bugs that can be avoided by making a\n temporary copy using a slice of the whole sequence, e.g.,\n\n for x in a[:]:\n if x < 0: a.remove(x)\n', - 'formatstrings': u'\nFormat String Syntax\n********************\n\nThe ``str.format()`` method and the ``Formatter`` class share the same\nsyntax for format strings (although in the case of ``Formatter``,\nsubclasses can define their own format string syntax).\n\nFormat strings contain "replacement fields" surrounded by curly braces\n``{}``. Anything that is not contained in braces is considered literal\ntext, which is copied unchanged to the output. If you need to include\na brace character in the literal text, it can be escaped by doubling:\n``{{`` and ``}}``.\n\nThe grammar for a replacement field is as follows:\n\n replacement_field ::= "{" [field_name] ["!" conversion] [":" format_spec] "}"\n field_name ::= arg_name ("." 
attribute_name | "[" element_index "]")*\n arg_name ::= [identifier | integer]\n attribute_name ::= identifier\n element_index ::= integer | index_string\n index_string ::= +\n conversion ::= "r" | "s"\n format_spec ::= \n\nIn less formal terms, the replacement field can start with a\n*field_name* that specifies the object whose value is to be formatted\nand inserted into the output instead of the replacement field. The\n*field_name* is optionally followed by a *conversion* field, which is\npreceded by an exclamation point ``\'!\'``, and a *format_spec*, which\nis preceded by a colon ``\':\'``. These specify a non-default format\nfor the replacement value.\n\nSee also the *Format Specification Mini-Language* section.\n\nThe *field_name* itself begins with an *arg_name* that is either\neither a number or a keyword. If it\'s a number, it refers to a\npositional argument, and if it\'s a keyword, it refers to a named\nkeyword argument. If the numerical arg_names in a format string are\n0, 1, 2, ... in sequence, they can all be omitted (not just some) and\nthe numbers 0, 1, 2, ... will be automatically inserted in that order.\nThe *arg_name* can be followed by any number of index or attribute\nexpressions. An expression of the form ``\'.name\'`` selects the named\nattribute using ``getattr()``, while an expression of the form\n``\'[index]\'`` does an index lookup using ``__getitem__()``.\n\nChanged in version 2.7: The positional argument specifiers can be\nomitted, so ``\'{} {}\'`` is equivalent to ``\'{0} {1}\'``.\n\nSome simple format string examples:\n\n "First, thou shalt count to {0}" # References first positional argument\n "Bring me a {}" # Implicitly references the first positional argument\n "From {} to {}" # Same as "From {0} to {1}"\n "My quest is {name}" # References keyword argument \'name\'\n "Weight in tons {0.weight}" # \'weight\' attribute of first positional arg\n "Units destroyed: {players[0]}" # First element of keyword argument \'players\'.\n\nThe *conversion* field causes a type coercion before formatting.\nNormally, the job of formatting a value is done by the\n``__format__()`` method of the value itself. However, in some cases\nit is desirable to force a type to be formatted as a string,\noverriding its own definition of formatting. By converting the value\nto a string before calling ``__format__()``, the normal formatting\nlogic is bypassed.\n\nTwo conversion flags are currently supported: ``\'!s\'`` which calls\n``str()`` on the value, and ``\'!r\'`` which calls ``repr()``.\n\nSome examples:\n\n "Harold\'s a clever {0!s}" # Calls str() on the argument first\n "Bring out the holy {name!r}" # Calls repr() on the argument first\n\nThe *format_spec* field contains a specification of how the value\nshould be presented, including such details as field width, alignment,\npadding, decimal precision and so on. Each value type can define its\nown "formatting mini-language" or interpretation of the *format_spec*.\n\nMost built-in types support a common formatting mini-language, which\nis described in the next section.\n\nA *format_spec* field can also include nested replacement fields\nwithin it. These nested replacement fields can contain only a field\nname; conversion flags and format specifications are not allowed. The\nreplacement fields within the format_spec are substituted before the\n*format_spec* string is interpreted. 
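For instance, the width or fill character can itself be supplied as an argument (values chosen only for illustration):

   >>> '{0:>{width}}'.format('pypy', width=12)
   '        pypy'
   >>> '{0:{fill}^{width}}'.format('pypy', fill='*', width=12)
   '****pypy****'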
This allows the formatting of a\nvalue to be dynamically specified.\n\nSee the *Format examples* section for some examples.\n\n\nFormat Specification Mini-Language\n==================================\n\n"Format specifications" are used within replacement fields contained\nwithin a format string to define how individual values are presented\n(see *Format String Syntax*). They can also be passed directly to the\nbuilt-in ``format()`` function. Each formattable type may define how\nthe format specification is to be interpreted.\n\nMost built-in types implement the following options for format\nspecifications, although some of the formatting options are only\nsupported by the numeric types.\n\nA general convention is that an empty format string (``""``) produces\nthe same result as if you had called ``str()`` on the value. A non-\nempty format string typically modifies the result.\n\nThe general form of a *standard format specifier* is:\n\n format_spec ::= [[fill]align][sign][#][0][width][,][.precision][type]\n fill ::=
\n align ::= "<" | ">" | "=" | "^"\n sign ::= "+" | "-" | " "\n width ::= integer\n precision ::= integer\n type ::= "b" | "c" | "d" | "e" | "E" | "f" | "F" | "g" | "G" | "n" | "o" | "s" | "x" | "X" | "%"\n\nThe *fill* character can be any character other than \'}\' (which\nsignifies the end of the field). The presence of a fill character is\nsignaled by the *next* character, which must be one of the alignment\noptions. If the second character of *format_spec* is not a valid\nalignment option, then it is assumed that both the fill character and\nthe alignment option are absent.\n\nThe meaning of the various alignment options is as follows:\n\n +-----------+------------------------------------------------------------+\n | Option | Meaning |\n +===========+============================================================+\n | ``\'<\'`` | Forces the field to be left-aligned within the available |\n | | space (this is the default). |\n +-----------+------------------------------------------------------------+\n | ``\'>\'`` | Forces the field to be right-aligned within the available |\n | | space. |\n +-----------+------------------------------------------------------------+\n | ``\'=\'`` | Forces the padding to be placed after the sign (if any) |\n | | but before the digits. This is used for printing fields |\n | | in the form \'+000000120\'. This alignment option is only |\n | | valid for numeric types. |\n +-----------+------------------------------------------------------------+\n | ``\'^\'`` | Forces the field to be centered within the available |\n | | space. |\n +-----------+------------------------------------------------------------+\n\nNote that unless a minimum field width is defined, the field width\nwill always be the same size as the data to fill it, so that the\nalignment option has no meaning in this case.\n\nThe *sign* option is only valid for number types, and can be one of\nthe following:\n\n +-----------+------------------------------------------------------------+\n | Option | Meaning |\n +===========+============================================================+\n | ``\'+\'`` | indicates that a sign should be used for both positive as |\n | | well as negative numbers. |\n +-----------+------------------------------------------------------------+\n | ``\'-\'`` | indicates that a sign should be used only for negative |\n | | numbers (this is the default behavior). |\n +-----------+------------------------------------------------------------+\n | space | indicates that a leading space should be used on positive |\n | | numbers, and a minus sign on negative numbers. |\n +-----------+------------------------------------------------------------+\n\nThe ``\'#\'`` option is only valid for integers, and only for binary,\noctal, or hexadecimal output. If present, it specifies that the\noutput will be prefixed by ``\'0b\'``, ``\'0o\'``, or ``\'0x\'``,\nrespectively.\n\nThe ``\',\'`` option signals the use of a comma for a thousands\nseparator. For a locale aware separator, use the ``\'n\'`` integer\npresentation type instead.\n\nChanged in version 2.7: Added the ``\',\'`` option (see also **PEP\n378**).\n\n*width* is a decimal integer defining the minimum field width. If not\nspecified, then the field width will be determined by the content.\n\nIf the *width* field is preceded by a zero (``\'0\'``) character, this\nenables zero-padding. 
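For example (values chosen only for illustration):

   >>> '{:08.2f}'.format(3.14159)
   '00003.14'
   >>> format(42, '+08d')
   '+0000042'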
This is equivalent to an *alignment* type of\n``\'=\'`` and a *fill* character of ``\'0\'``.\n\nThe *precision* is a decimal number indicating how many digits should\nbe displayed after the decimal point for a floating point value\nformatted with ``\'f\'`` and ``\'F\'``, or before and after the decimal\npoint for a floating point value formatted with ``\'g\'`` or ``\'G\'``.\nFor non-number types the field indicates the maximum field size - in\nother words, how many characters will be used from the field content.\nThe *precision* is not allowed for integer values.\n\nFinally, the *type* determines how the data should be presented.\n\nThe available string presentation types are:\n\n +-----------+------------------------------------------------------------+\n | Type | Meaning |\n +===========+============================================================+\n | ``\'s\'`` | String format. This is the default type for strings and |\n | | may be omitted. |\n +-----------+------------------------------------------------------------+\n | None | The same as ``\'s\'``. |\n +-----------+------------------------------------------------------------+\n\nThe available integer presentation types are:\n\n +-----------+------------------------------------------------------------+\n | Type | Meaning |\n +===========+============================================================+\n | ``\'b\'`` | Binary format. Outputs the number in base 2. |\n +-----------+------------------------------------------------------------+\n | ``\'c\'`` | Character. Converts the integer to the corresponding |\n | | unicode character before printing. |\n +-----------+------------------------------------------------------------+\n | ``\'d\'`` | Decimal Integer. Outputs the number in base 10. |\n +-----------+------------------------------------------------------------+\n | ``\'o\'`` | Octal format. Outputs the number in base 8. |\n +-----------+------------------------------------------------------------+\n | ``\'x\'`` | Hex format. Outputs the number in base 16, using lower- |\n | | case letters for the digits above 9. |\n +-----------+------------------------------------------------------------+\n | ``\'X\'`` | Hex format. Outputs the number in base 16, using upper- |\n | | case letters for the digits above 9. |\n +-----------+------------------------------------------------------------+\n | ``\'n\'`` | Number. This is the same as ``\'d\'``, except that it uses |\n | | the current locale setting to insert the appropriate |\n | | number separator characters. |\n +-----------+------------------------------------------------------------+\n | None | The same as ``\'d\'``. |\n +-----------+------------------------------------------------------------+\n\nIn addition to the above presentation types, integers can be formatted\nwith the floating point presentation types listed below (except\n``\'n\'`` and None). When doing so, ``float()`` is used to convert the\ninteger to a floating point number before formatting.\n\nThe available presentation types for floating point and decimal values\nare:\n\n +-----------+------------------------------------------------------------+\n | Type | Meaning |\n +===========+============================================================+\n | ``\'e\'`` | Exponent notation. Prints the number in scientific |\n | | notation using the letter \'e\' to indicate the exponent. |\n +-----------+------------------------------------------------------------+\n | ``\'E\'`` | Exponent notation. 
Same as ``\'e\'`` except it uses an upper |\n | | case \'E\' as the separator character. |\n +-----------+------------------------------------------------------------+\n | ``\'f\'`` | Fixed point. Displays the number as a fixed-point number. |\n +-----------+------------------------------------------------------------+\n | ``\'F\'`` | Fixed point. Same as ``\'f\'``. |\n +-----------+------------------------------------------------------------+\n | ``\'g\'`` | General format. For a given precision ``p >= 1``, this |\n | | rounds the number to ``p`` significant digits and then |\n | | formats the result in either fixed-point format or in |\n | | scientific notation, depending on its magnitude. The |\n | | precise rules are as follows: suppose that the result |\n | | formatted with presentation type ``\'e\'`` and precision |\n | | ``p-1`` would have exponent ``exp``. Then if ``-4 <= exp |\n | | < p``, the number is formatted with presentation type |\n | | ``\'f\'`` and precision ``p-1-exp``. Otherwise, the number |\n | | is formatted with presentation type ``\'e\'`` and precision |\n | | ``p-1``. In both cases insignificant trailing zeros are |\n | | removed from the significand, and the decimal point is |\n | | also removed if there are no remaining digits following |\n | | it. Postive and negative infinity, positive and negative |\n | | zero, and nans, are formatted as ``inf``, ``-inf``, ``0``, |\n | | ``-0`` and ``nan`` respectively, regardless of the |\n | | precision. A precision of ``0`` is treated as equivalent |\n | | to a precision of ``1``. |\n +-----------+------------------------------------------------------------+\n | ``\'G\'`` | General format. Same as ``\'g\'`` except switches to ``\'E\'`` |\n | | if the number gets too large. The representations of |\n | | infinity and NaN are uppercased, too. |\n +-----------+------------------------------------------------------------+\n | ``\'n\'`` | Number. This is the same as ``\'g\'``, except that it uses |\n | | the current locale setting to insert the appropriate |\n | | number separator characters. |\n +-----------+------------------------------------------------------------+\n | ``\'%\'`` | Percentage. Multiplies the number by 100 and displays in |\n | | fixed (``\'f\'``) format, followed by a percent sign. |\n +-----------+------------------------------------------------------------+\n | None | The same as ``\'g\'``. |\n +-----------+------------------------------------------------------------+\n\n\nFormat examples\n===============\n\nThis section contains examples of the new format syntax and comparison\nwith the old ``%``-formatting.\n\nIn most of the cases the syntax is similar to the old\n``%``-formatting, with the addition of the ``{}`` and with ``:`` used\ninstead of ``%``. 
For example, ``\'%03.2f\'`` can be translated to\n``\'{:03.2f}\'``.\n\nThe new format syntax also supports new and different options, shown\nin the follow examples.\n\nAccessing arguments by position:\n\n >>> \'{0}, {1}, {2}\'.format(\'a\', \'b\', \'c\')\n \'a, b, c\'\n >>> \'{}, {}, {}\'.format(\'a\', \'b\', \'c\') # 2.7+ only\n \'a, b, c\'\n >>> \'{2}, {1}, {0}\'.format(\'a\', \'b\', \'c\')\n \'c, b, a\'\n >>> \'{2}, {1}, {0}\'.format(*\'abc\') # unpacking argument sequence\n \'c, b, a\'\n >>> \'{0}{1}{0}\'.format(\'abra\', \'cad\') # arguments\' indices can be repeated\n \'abracadabra\'\n\nAccessing arguments by name:\n\n >>> \'Coordinates: {latitude}, {longitude}\'.format(latitude=\'37.24N\', longitude=\'-115.81W\')\n \'Coordinates: 37.24N, -115.81W\'\n >>> coord = {\'latitude\': \'37.24N\', \'longitude\': \'-115.81W\'}\n >>> \'Coordinates: {latitude}, {longitude}\'.format(**coord)\n \'Coordinates: 37.24N, -115.81W\'\n\nAccessing arguments\' attributes:\n\n >>> c = 3-5j\n >>> (\'The complex number {0} is formed from the real part {0.real} \'\n ... \'and the imaginary part {0.imag}.\').format(c)\n \'The complex number (3-5j) is formed from the real part 3.0 and the imaginary part -5.0.\'\n >>> class Point(object):\n ... def __init__(self, x, y):\n ... self.x, self.y = x, y\n ... def __str__(self):\n ... return \'Point({self.x}, {self.y})\'.format(self=self)\n ...\n >>> str(Point(4, 2))\n \'Point(4, 2)\'\n\nAccessing arguments\' items:\n\n >>> coord = (3, 5)\n >>> \'X: {0[0]}; Y: {0[1]}\'.format(coord)\n \'X: 3; Y: 5\'\n\nReplacing ``%s`` and ``%r``:\n\n >>> "repr() shows quotes: {!r}; str() doesn\'t: {!s}".format(\'test1\', \'test2\')\n "repr() shows quotes: \'test1\'; str() doesn\'t: test2"\n\nAligning the text and specifying a width:\n\n >>> \'{:<30}\'.format(\'left aligned\')\n \'left aligned \'\n >>> \'{:>30}\'.format(\'right aligned\')\n \' right aligned\'\n >>> \'{:^30}\'.format(\'centered\')\n \' centered \'\n >>> \'{:*^30}\'.format(\'centered\') # use \'*\' as a fill char\n \'***********centered***********\'\n\nReplacing ``%+f``, ``%-f``, and ``% f`` and specifying a sign:\n\n >>> \'{:+f}; {:+f}\'.format(3.14, -3.14) # show it always\n \'+3.140000; -3.140000\'\n >>> \'{: f}; {: f}\'.format(3.14, -3.14) # show a space for positive numbers\n \' 3.140000; -3.140000\'\n >>> \'{:-f}; {:-f}\'.format(3.14, -3.14) # show only the minus -- same as \'{:f}; {:f}\'\n \'3.140000; -3.140000\'\n\nReplacing ``%x`` and ``%o`` and converting the value to different\nbases:\n\n >>> # format also supports binary numbers\n >>> "int: {0:d}; hex: {0:x}; oct: {0:o}; bin: {0:b}".format(42)\n \'int: 42; hex: 2a; oct: 52; bin: 101010\'\n >>> # with 0x, 0o, or 0b as prefix:\n >>> "int: {0:d}; hex: {0:#x}; oct: {0:#o}; bin: {0:#b}".format(42)\n \'int: 42; hex: 0x2a; oct: 0o52; bin: 0b101010\'\n\nUsing the comma as a thousands separator:\n\n >>> \'{:,}\'.format(1234567890)\n \'1,234,567,890\'\n\nExpressing a percentage:\n\n >>> points = 19.5\n >>> total = 22\n >>> \'Correct answers: {:.2%}.\'.format(points/total)\n \'Correct answers: 88.64%\'\n\nUsing type-specific formatting:\n\n >>> import datetime\n >>> d = datetime.datetime(2010, 7, 4, 12, 15, 58)\n >>> \'{:%Y-%m-%d %H:%M:%S}\'.format(d)\n \'2010-07-04 12:15:58\'\n\nNesting arguments and more complex examples:\n\n >>> for align, text in zip(\'<^>\', [\'left\', \'center\', \'right\']):\n ... 
\'{0:{align}{fill}16}\'.format(text, fill=align, align=align)\n ...\n \'left<<<<<<<<<<<<\'\n \'^^^^^center^^^^^\'\n \'>>>>>>>>>>>right\'\n >>>\n >>> octets = [192, 168, 0, 1]\n >>> \'{:02X}{:02X}{:02X}{:02X}\'.format(*octets)\n \'C0A80001\'\n >>> int(_, 16)\n 3232235521\n >>>\n >>> width = 5\n >>> for num in range(5,12):\n ... for base in \'dXob\':\n ... print \'{0:{width}{base}}\'.format(num, base=base, width=width),\n ... print\n ...\n 5 5 5 101\n 6 6 6 110\n 7 7 7 111\n 8 8 10 1000\n 9 9 11 1001\n 10 A 12 1010\n 11 B 13 1011\n', + 'formatstrings': u'\nFormat String Syntax\n********************\n\nThe ``str.format()`` method and the ``Formatter`` class share the same\nsyntax for format strings (although in the case of ``Formatter``,\nsubclasses can define their own format string syntax).\n\nFormat strings contain "replacement fields" surrounded by curly braces\n``{}``. Anything that is not contained in braces is considered literal\ntext, which is copied unchanged to the output. If you need to include\na brace character in the literal text, it can be escaped by doubling:\n``{{`` and ``}}``.\n\nThe grammar for a replacement field is as follows:\n\n replacement_field ::= "{" [field_name] ["!" conversion] [":" format_spec] "}"\n field_name ::= arg_name ("." attribute_name | "[" element_index "]")*\n arg_name ::= [identifier | integer]\n attribute_name ::= identifier\n element_index ::= integer | index_string\n index_string ::= +\n conversion ::= "r" | "s"\n format_spec ::= \n\nIn less formal terms, the replacement field can start with a\n*field_name* that specifies the object whose value is to be formatted\nand inserted into the output instead of the replacement field. The\n*field_name* is optionally followed by a *conversion* field, which is\npreceded by an exclamation point ``\'!\'``, and a *format_spec*, which\nis preceded by a colon ``\':\'``. These specify a non-default format\nfor the replacement value.\n\nSee also the *Format Specification Mini-Language* section.\n\nThe *field_name* itself begins with an *arg_name* that is either\neither a number or a keyword. If it\'s a number, it refers to a\npositional argument, and if it\'s a keyword, it refers to a named\nkeyword argument. If the numerical arg_names in a format string are\n0, 1, 2, ... in sequence, they can all be omitted (not just some) and\nthe numbers 0, 1, 2, ... will be automatically inserted in that order.\nThe *arg_name* can be followed by any number of index or attribute\nexpressions. An expression of the form ``\'.name\'`` selects the named\nattribute using ``getattr()``, while an expression of the form\n``\'[index]\'`` does an index lookup using ``__getitem__()``.\n\nChanged in version 2.7: The positional argument specifiers can be\nomitted, so ``\'{} {}\'`` is equivalent to ``\'{0} {1}\'``.\n\nSome simple format string examples:\n\n "First, thou shalt count to {0}" # References first positional argument\n "Bring me a {}" # Implicitly references the first positional argument\n "From {} to {}" # Same as "From {0} to {1}"\n "My quest is {name}" # References keyword argument \'name\'\n "Weight in tons {0.weight}" # \'weight\' attribute of first positional arg\n "Units destroyed: {players[0]}" # First element of keyword argument \'players\'.\n\nThe *conversion* field causes a type coercion before formatting.\nNormally, the job of formatting a value is done by the\n``__format__()`` method of the value itself. 
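A sketch of a type supplying its own formatting through ``__format__()`` (the ``Celsius`` class is hypothetical):

   >>> class Celsius(object):
   ...     def __init__(self, degrees):
   ...         self.degrees = degrees
   ...     def __format__(self, spec):
   ...         # delegate to the numeric mini-language, with a default spec
   ...         return format(self.degrees, spec or '.1f') + ' C'
   ...
   >>> 'Water boils at {0}'.format(Celsius(100))
   'Water boils at 100.0 C'
   >>> 'Water boils at {0:.0f}'.format(Celsius(100))
   'Water boils at 100 C'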
However, in some cases\nit is desirable to force a type to be formatted as a string,\noverriding its own definition of formatting. By converting the value\nto a string before calling ``__format__()``, the normal formatting\nlogic is bypassed.\n\nTwo conversion flags are currently supported: ``\'!s\'`` which calls\n``str()`` on the value, and ``\'!r\'`` which calls ``repr()``.\n\nSome examples:\n\n "Harold\'s a clever {0!s}" # Calls str() on the argument first\n "Bring out the holy {name!r}" # Calls repr() on the argument first\n\nThe *format_spec* field contains a specification of how the value\nshould be presented, including such details as field width, alignment,\npadding, decimal precision and so on. Each value type can define its\nown "formatting mini-language" or interpretation of the *format_spec*.\n\nMost built-in types support a common formatting mini-language, which\nis described in the next section.\n\nA *format_spec* field can also include nested replacement fields\nwithin it. These nested replacement fields can contain only a field\nname; conversion flags and format specifications are not allowed. The\nreplacement fields within the format_spec are substituted before the\n*format_spec* string is interpreted. This allows the formatting of a\nvalue to be dynamically specified.\n\nSee the *Format examples* section for some examples.\n\n\nFormat Specification Mini-Language\n==================================\n\n"Format specifications" are used within replacement fields contained\nwithin a format string to define how individual values are presented\n(see *Format String Syntax*). They can also be passed directly to the\nbuilt-in ``format()`` function. Each formattable type may define how\nthe format specification is to be interpreted.\n\nMost built-in types implement the following options for format\nspecifications, although some of the formatting options are only\nsupported by the numeric types.\n\nA general convention is that an empty format string (``""``) produces\nthe same result as if you had called ``str()`` on the value. A non-\nempty format string typically modifies the result.\n\nThe general form of a *standard format specifier* is:\n\n format_spec ::= [[fill]align][sign][#][0][width][,][.precision][type]\n fill ::= \n align ::= "<" | ">" | "=" | "^"\n sign ::= "+" | "-" | " "\n width ::= integer\n precision ::= integer\n type ::= "b" | "c" | "d" | "e" | "E" | "f" | "F" | "g" | "G" | "n" | "o" | "s" | "x" | "X" | "%"\n\nThe *fill* character can be any character other than \'{\' or \'}\'. The\npresence of a fill character is signaled by the character following\nit, which must be one of the alignment options. If the second\ncharacter of *format_spec* is not a valid alignment option, then it is\nassumed that both the fill character and the alignment option are\nabsent.\n\nThe meaning of the various alignment options is as follows:\n\n +-----------+------------------------------------------------------------+\n | Option | Meaning |\n +===========+============================================================+\n | ``\'<\'`` | Forces the field to be left-aligned within the available |\n | | space (this is the default for most objects). |\n +-----------+------------------------------------------------------------+\n | ``\'>\'`` | Forces the field to be right-aligned within the available |\n | | space (this is the default for numbers). 
|\n +-----------+------------------------------------------------------------+\n | ``\'=\'`` | Forces the padding to be placed after the sign (if any) |\n | | but before the digits. This is used for printing fields |\n | | in the form \'+000000120\'. This alignment option is only |\n | | valid for numeric types. |\n +-----------+------------------------------------------------------------+\n | ``\'^\'`` | Forces the field to be centered within the available |\n | | space. |\n +-----------+------------------------------------------------------------+\n\nNote that unless a minimum field width is defined, the field width\nwill always be the same size as the data to fill it, so that the\nalignment option has no meaning in this case.\n\nThe *sign* option is only valid for number types, and can be one of\nthe following:\n\n +-----------+------------------------------------------------------------+\n | Option | Meaning |\n +===========+============================================================+\n | ``\'+\'`` | indicates that a sign should be used for both positive as |\n | | well as negative numbers. |\n +-----------+------------------------------------------------------------+\n | ``\'-\'`` | indicates that a sign should be used only for negative |\n | | numbers (this is the default behavior). |\n +-----------+------------------------------------------------------------+\n | space | indicates that a leading space should be used on positive |\n | | numbers, and a minus sign on negative numbers. |\n +-----------+------------------------------------------------------------+\n\nThe ``\'#\'`` option is only valid for integers, and only for binary,\noctal, or hexadecimal output. If present, it specifies that the\noutput will be prefixed by ``\'0b\'``, ``\'0o\'``, or ``\'0x\'``,\nrespectively.\n\nThe ``\',\'`` option signals the use of a comma for a thousands\nseparator. For a locale aware separator, use the ``\'n\'`` integer\npresentation type instead.\n\nChanged in version 2.7: Added the ``\',\'`` option (see also **PEP\n378**).\n\n*width* is a decimal integer defining the minimum field width. If not\nspecified, then the field width will be determined by the content.\n\nIf the *width* field is preceded by a zero (``\'0\'``) character, this\nenables zero-padding. This is equivalent to an *alignment* type of\n``\'=\'`` and a *fill* character of ``\'0\'``.\n\nThe *precision* is a decimal number indicating how many digits should\nbe displayed after the decimal point for a floating point value\nformatted with ``\'f\'`` and ``\'F\'``, or before and after the decimal\npoint for a floating point value formatted with ``\'g\'`` or ``\'G\'``.\nFor non-number types the field indicates the maximum field size - in\nother words, how many characters will be used from the field content.\nThe *precision* is not allowed for integer values.\n\nFinally, the *type* determines how the data should be presented.\n\nThe available string presentation types are:\n\n +-----------+------------------------------------------------------------+\n | Type | Meaning |\n +===========+============================================================+\n | ``\'s\'`` | String format. This is the default type for strings and |\n | | may be omitted. |\n +-----------+------------------------------------------------------------+\n | None | The same as ``\'s\'``. 
|\n +-----------+------------------------------------------------------------+\n\nThe available integer presentation types are:\n\n +-----------+------------------------------------------------------------+\n | Type | Meaning |\n +===========+============================================================+\n | ``\'b\'`` | Binary format. Outputs the number in base 2. |\n +-----------+------------------------------------------------------------+\n | ``\'c\'`` | Character. Converts the integer to the corresponding |\n | | unicode character before printing. |\n +-----------+------------------------------------------------------------+\n | ``\'d\'`` | Decimal Integer. Outputs the number in base 10. |\n +-----------+------------------------------------------------------------+\n | ``\'o\'`` | Octal format. Outputs the number in base 8. |\n +-----------+------------------------------------------------------------+\n | ``\'x\'`` | Hex format. Outputs the number in base 16, using lower- |\n | | case letters for the digits above 9. |\n +-----------+------------------------------------------------------------+\n | ``\'X\'`` | Hex format. Outputs the number in base 16, using upper- |\n | | case letters for the digits above 9. |\n +-----------+------------------------------------------------------------+\n | ``\'n\'`` | Number. This is the same as ``\'d\'``, except that it uses |\n | | the current locale setting to insert the appropriate |\n | | number separator characters. |\n +-----------+------------------------------------------------------------+\n | None | The same as ``\'d\'``. |\n +-----------+------------------------------------------------------------+\n\nIn addition to the above presentation types, integers can be formatted\nwith the floating point presentation types listed below (except\n``\'n\'`` and None). When doing so, ``float()`` is used to convert the\ninteger to a floating point number before formatting.\n\nThe available presentation types for floating point and decimal values\nare:\n\n +-----------+------------------------------------------------------------+\n | Type | Meaning |\n +===========+============================================================+\n | ``\'e\'`` | Exponent notation. Prints the number in scientific |\n | | notation using the letter \'e\' to indicate the exponent. |\n +-----------+------------------------------------------------------------+\n | ``\'E\'`` | Exponent notation. Same as ``\'e\'`` except it uses an upper |\n | | case \'E\' as the separator character. |\n +-----------+------------------------------------------------------------+\n | ``\'f\'`` | Fixed point. Displays the number as a fixed-point number. |\n +-----------+------------------------------------------------------------+\n | ``\'F\'`` | Fixed point. Same as ``\'f\'``. |\n +-----------+------------------------------------------------------------+\n | ``\'g\'`` | General format. For a given precision ``p >= 1``, this |\n | | rounds the number to ``p`` significant digits and then |\n | | formats the result in either fixed-point format or in |\n | | scientific notation, depending on its magnitude. The |\n | | precise rules are as follows: suppose that the result |\n | | formatted with presentation type ``\'e\'`` and precision |\n | | ``p-1`` would have exponent ``exp``. Then if ``-4 <= exp |\n | | < p``, the number is formatted with presentation type |\n | | ``\'f\'`` and precision ``p-1-exp``. 
Otherwise, the number |\n | | is formatted with presentation type ``\'e\'`` and precision |\n | | ``p-1``. In both cases insignificant trailing zeros are |\n | | removed from the significand, and the decimal point is |\n | | also removed if there are no remaining digits following |\n | | it. Positive and negative infinity, positive and negative |\n | | zero, and nans, are formatted as ``inf``, ``-inf``, ``0``, |\n | | ``-0`` and ``nan`` respectively, regardless of the |\n | | precision. A precision of ``0`` is treated as equivalent |\n | | to a precision of ``1``. |\n +-----------+------------------------------------------------------------+\n | ``\'G\'`` | General format. Same as ``\'g\'`` except switches to ``\'E\'`` |\n | | if the number gets too large. The representations of |\n | | infinity and NaN are uppercased, too. |\n +-----------+------------------------------------------------------------+\n | ``\'n\'`` | Number. This is the same as ``\'g\'``, except that it uses |\n | | the current locale setting to insert the appropriate |\n | | number separator characters. |\n +-----------+------------------------------------------------------------+\n | ``\'%\'`` | Percentage. Multiplies the number by 100 and displays in |\n | | fixed (``\'f\'``) format, followed by a percent sign. |\n +-----------+------------------------------------------------------------+\n | None | The same as ``\'g\'``. |\n +-----------+------------------------------------------------------------+\n\n\nFormat examples\n===============\n\nThis section contains examples of the new format syntax and comparison\nwith the old ``%``-formatting.\n\nIn most of the cases the syntax is similar to the old\n``%``-formatting, with the addition of the ``{}`` and with ``:`` used\ninstead of ``%``. For example, ``\'%03.2f\'`` can be translated to\n``\'{:03.2f}\'``.\n\nThe new format syntax also supports new and different options, shown\nin the follow examples.\n\nAccessing arguments by position:\n\n >>> \'{0}, {1}, {2}\'.format(\'a\', \'b\', \'c\')\n \'a, b, c\'\n >>> \'{}, {}, {}\'.format(\'a\', \'b\', \'c\') # 2.7+ only\n \'a, b, c\'\n >>> \'{2}, {1}, {0}\'.format(\'a\', \'b\', \'c\')\n \'c, b, a\'\n >>> \'{2}, {1}, {0}\'.format(*\'abc\') # unpacking argument sequence\n \'c, b, a\'\n >>> \'{0}{1}{0}\'.format(\'abra\', \'cad\') # arguments\' indices can be repeated\n \'abracadabra\'\n\nAccessing arguments by name:\n\n >>> \'Coordinates: {latitude}, {longitude}\'.format(latitude=\'37.24N\', longitude=\'-115.81W\')\n \'Coordinates: 37.24N, -115.81W\'\n >>> coord = {\'latitude\': \'37.24N\', \'longitude\': \'-115.81W\'}\n >>> \'Coordinates: {latitude}, {longitude}\'.format(**coord)\n \'Coordinates: 37.24N, -115.81W\'\n\nAccessing arguments\' attributes:\n\n >>> c = 3-5j\n >>> (\'The complex number {0} is formed from the real part {0.real} \'\n ... \'and the imaginary part {0.imag}.\').format(c)\n \'The complex number (3-5j) is formed from the real part 3.0 and the imaginary part -5.0.\'\n >>> class Point(object):\n ... def __init__(self, x, y):\n ... self.x, self.y = x, y\n ... def __str__(self):\n ... 
return \'Point({self.x}, {self.y})\'.format(self=self)\n ...\n >>> str(Point(4, 2))\n \'Point(4, 2)\'\n\nAccessing arguments\' items:\n\n >>> coord = (3, 5)\n >>> \'X: {0[0]}; Y: {0[1]}\'.format(coord)\n \'X: 3; Y: 5\'\n\nReplacing ``%s`` and ``%r``:\n\n >>> "repr() shows quotes: {!r}; str() doesn\'t: {!s}".format(\'test1\', \'test2\')\n "repr() shows quotes: \'test1\'; str() doesn\'t: test2"\n\nAligning the text and specifying a width:\n\n >>> \'{:<30}\'.format(\'left aligned\')\n \'left aligned \'\n >>> \'{:>30}\'.format(\'right aligned\')\n \' right aligned\'\n >>> \'{:^30}\'.format(\'centered\')\n \' centered \'\n >>> \'{:*^30}\'.format(\'centered\') # use \'*\' as a fill char\n \'***********centered***********\'\n\nReplacing ``%+f``, ``%-f``, and ``% f`` and specifying a sign:\n\n >>> \'{:+f}; {:+f}\'.format(3.14, -3.14) # show it always\n \'+3.140000; -3.140000\'\n >>> \'{: f}; {: f}\'.format(3.14, -3.14) # show a space for positive numbers\n \' 3.140000; -3.140000\'\n >>> \'{:-f}; {:-f}\'.format(3.14, -3.14) # show only the minus -- same as \'{:f}; {:f}\'\n \'3.140000; -3.140000\'\n\nReplacing ``%x`` and ``%o`` and converting the value to different\nbases:\n\n >>> # format also supports binary numbers\n >>> "int: {0:d}; hex: {0:x}; oct: {0:o}; bin: {0:b}".format(42)\n \'int: 42; hex: 2a; oct: 52; bin: 101010\'\n >>> # with 0x, 0o, or 0b as prefix:\n >>> "int: {0:d}; hex: {0:#x}; oct: {0:#o}; bin: {0:#b}".format(42)\n \'int: 42; hex: 0x2a; oct: 0o52; bin: 0b101010\'\n\nUsing the comma as a thousands separator:\n\n >>> \'{:,}\'.format(1234567890)\n \'1,234,567,890\'\n\nExpressing a percentage:\n\n >>> points = 19.5\n >>> total = 22\n >>> \'Correct answers: {:.2%}.\'.format(points/total)\n \'Correct answers: 88.64%\'\n\nUsing type-specific formatting:\n\n >>> import datetime\n >>> d = datetime.datetime(2010, 7, 4, 12, 15, 58)\n >>> \'{:%Y-%m-%d %H:%M:%S}\'.format(d)\n \'2010-07-04 12:15:58\'\n\nNesting arguments and more complex examples:\n\n >>> for align, text in zip(\'<^>\', [\'left\', \'center\', \'right\']):\n ... \'{0:{fill}{align}16}\'.format(text, fill=align, align=align)\n ...\n \'left<<<<<<<<<<<<\'\n \'^^^^^center^^^^^\'\n \'>>>>>>>>>>>right\'\n >>>\n >>> octets = [192, 168, 0, 1]\n >>> \'{:02X}{:02X}{:02X}{:02X}\'.format(*octets)\n \'C0A80001\'\n >>> int(_, 16)\n 3232235521\n >>>\n >>> width = 5\n >>> for num in range(5,12):\n ... for base in \'dXob\':\n ... print \'{0:{width}{base}}\'.format(num, base=base, width=width),\n ... print\n ...\n 5 5 5 101\n 6 6 6 110\n 7 7 7 111\n 8 8 10 1000\n 9 9 11 1001\n 10 A 12 1010\n 11 B 13 1011\n', 'function': u'\nFunction definitions\n********************\n\nA function definition defines a user-defined function object (see\nsection *The standard type hierarchy*):\n\n decorated ::= decorators (classdef | funcdef)\n decorators ::= decorator+\n decorator ::= "@" dotted_name ["(" [argument_list [","]] ")"] NEWLINE\n funcdef ::= "def" funcname "(" [parameter_list] ")" ":" suite\n dotted_name ::= identifier ("." identifier)*\n parameter_list ::= (defparameter ",")*\n ( "*" identifier [, "**" identifier]\n | "**" identifier\n | defparameter [","] )\n defparameter ::= parameter ["=" expression]\n sublist ::= parameter ("," parameter)* [","]\n parameter ::= identifier | "(" sublist ")"\n funcname ::= identifier\n\nA function definition is an executable statement. Its execution binds\nthe function name in the current local namespace to a function object\n(a wrapper around the executable code for the function). 
This\nfunction object contains a reference to the current global namespace\nas the global namespace to be used when the function is called.\n\nThe function definition does not execute the function body; this gets\nexecuted only when the function is called. [3]\n\nA function definition may be wrapped by one or more *decorator*\nexpressions. Decorator expressions are evaluated when the function is\ndefined, in the scope that contains the function definition. The\nresult must be a callable, which is invoked with the function object\nas the only argument. The returned value is bound to the function name\ninstead of the function object. Multiple decorators are applied in\nnested fashion. For example, the following code:\n\n @f1(arg)\n @f2\n def func(): pass\n\nis equivalent to:\n\n def func(): pass\n func = f1(arg)(f2(func))\n\nWhen one or more top-level parameters have the form *parameter* ``=``\n*expression*, the function is said to have "default parameter values."\nFor a parameter with a default value, the corresponding argument may\nbe omitted from a call, in which case the parameter\'s default value is\nsubstituted. If a parameter has a default value, all following\nparameters must also have a default value --- this is a syntactic\nrestriction that is not expressed by the grammar.\n\n**Default parameter values are evaluated when the function definition\nis executed.** This means that the expression is evaluated once, when\nthe function is defined, and that that same "pre-computed" value is\nused for each call. This is especially important to understand when a\ndefault parameter is a mutable object, such as a list or a dictionary:\nif the function modifies the object (e.g. by appending an item to a\nlist), the default value is in effect modified. This is generally not\nwhat was intended. A way around this is to use ``None`` as the\ndefault, and explicitly test for it in the body of the function, e.g.:\n\n def whats_on_the_telly(penguin=None):\n if penguin is None:\n penguin = []\n penguin.append("property of the zoo")\n return penguin\n\nFunction call semantics are described in more detail in section\n*Calls*. A function call always assigns values to all parameters\nmentioned in the parameter list, either from position arguments, from\nkeyword arguments, or from default values. If the form\n"``*identifier``" is present, it is initialized to a tuple receiving\nany excess positional parameters, defaulting to the empty tuple. If\nthe form "``**identifier``" is present, it is initialized to a new\ndictionary receiving any excess keyword arguments, defaulting to a new\nempty dictionary.\n\nIt is also possible to create anonymous functions (functions not bound\nto a name), for immediate use in expressions. This uses lambda forms,\ndescribed in section *Lambdas*. Note that the lambda form is merely a\nshorthand for a simplified function definition; a function defined in\na "``def``" statement can be passed around or assigned to another name\njust like a function defined by a lambda form. The "``def``" form is\nactually more powerful since it allows the execution of multiple\nstatements.\n\n**Programmer\'s note:** Functions are first-class objects. A "``def``"\nform executed inside a function definition defines a local function\nthat can be returned or passed around. Free variables used in the\nnested function can access the local variables of the function\ncontaining the def. 
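A brief interactive sketch of the excess-argument binding described above; the name ``report`` is invented for illustration only:

   >>> def report(first, *rest, **options):
   ...     # 'rest' receives excess positional arguments as a tuple,
   ...     # 'options' receives excess keyword arguments as a new dict.
   ...     return first, rest, options
   ...
   >>> report(1, 2, 3, flag=True)
   (1, (2, 3), {'flag': True})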
See section *Naming and binding* for details.\n', 'global': u'\nThe ``global`` statement\n************************\n\n global_stmt ::= "global" identifier ("," identifier)*\n\nThe ``global`` statement is a declaration which holds for the entire\ncurrent code block. It means that the listed identifiers are to be\ninterpreted as globals. It would be impossible to assign to a global\nvariable without ``global``, although free variables may refer to\nglobals without being declared global.\n\nNames listed in a ``global`` statement must not be used in the same\ncode block textually preceding that ``global`` statement.\n\nNames listed in a ``global`` statement must not be defined as formal\nparameters or in a ``for`` loop control target, ``class`` definition,\nfunction definition, or ``import`` statement.\n\n**CPython implementation detail:** The current implementation does not\nenforce the latter two restrictions, but programs should not abuse\nthis freedom, as future implementations may enforce them or silently\nchange the meaning of the program.\n\n**Programmer\'s note:** the ``global`` is a directive to the parser.\nIt applies only to code parsed at the same time as the ``global``\nstatement. In particular, a ``global`` statement contained in an\n``exec`` statement does not affect the code block *containing* the\n``exec`` statement, and code contained in an ``exec`` statement is\nunaffected by ``global`` statements in the code containing the\n``exec`` statement. The same applies to the ``eval()``,\n``execfile()`` and ``compile()`` functions.\n', - 'id-classes': u'\nReserved classes of identifiers\n*******************************\n\nCertain classes of identifiers (besides keywords) have special\nmeanings. These classes are identified by the patterns of leading and\ntrailing underscore characters:\n\n``_*``\n Not imported by ``from module import *``. The special identifier\n ``_`` is used in the interactive interpreter to store the result of\n the last evaluation; it is stored in the ``__builtin__`` module.\n When not in interactive mode, ``_`` has no special meaning and is\n not defined. See section *The import statement*.\n\n Note: The name ``_`` is often used in conjunction with\n internationalization; refer to the documentation for the\n ``gettext`` module for more information on this convention.\n\n``__*__``\n System-defined names. These names are defined by the interpreter\n and its implementation (including the standard library);\n applications should not expect to define additional names using\n this convention. The set of names of this class defined by Python\n may be extended in future versions. See section *Special method\n names*.\n\n``__*``\n Class-private names. Names in this category, when used within the\n context of a class definition, are re-written to use a mangled form\n to help avoid name clashes between "private" attributes of base and\n derived classes. See section *Identifiers (Names)*.\n', - 'identifiers': u'\nIdentifiers and keywords\n************************\n\nIdentifiers (also referred to as *names*) are described by the\nfollowing lexical definitions:\n\n identifier ::= (letter|"_") (letter | digit | "_")*\n letter ::= lowercase | uppercase\n lowercase ::= "a"..."z"\n uppercase ::= "A"..."Z"\n digit ::= "0"..."9"\n\nIdentifiers are unlimited in length. Case is significant.\n\n\nKeywords\n========\n\nThe following identifiers are used as reserved words, or *keywords* of\nthe language, and cannot be used as ordinary identifiers. 
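A short illustration of the ``global`` declaration described above; ``counter`` and ``bump`` are illustrative names:

   >>> counter = 0
   >>> def bump():
   ...     global counter          # without this, the assignment below would
   ...     counter = counter + 1   # bind a new local name instead
   ...
   >>> bump(); bump()
   >>> counter
   2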
They must\nbe spelled exactly as written here:\n\n and del from not while\n as elif global or with\n assert else if pass yield\n break except import print\n class exec in raise\n continue finally is return\n def for lambda try\n\nChanged in version 2.4: ``None`` became a constant and is now\nrecognized by the compiler as a name for the built-in object ``None``.\nAlthough it is not a keyword, you cannot assign a different object to\nit.\n\nChanged in version 2.5: Both ``as`` and ``with`` are only recognized\nwhen the ``with_statement`` future feature has been enabled. It will\nalways be enabled in Python 2.6. See section *The with statement* for\ndetails. Note that using ``as`` and ``with`` as identifiers will\nalways issue a warning, even when the ``with_statement`` future\ndirective is not in effect.\n\n\nReserved classes of identifiers\n===============================\n\nCertain classes of identifiers (besides keywords) have special\nmeanings. These classes are identified by the patterns of leading and\ntrailing underscore characters:\n\n``_*``\n Not imported by ``from module import *``. The special identifier\n ``_`` is used in the interactive interpreter to store the result of\n the last evaluation; it is stored in the ``__builtin__`` module.\n When not in interactive mode, ``_`` has no special meaning and is\n not defined. See section *The import statement*.\n\n Note: The name ``_`` is often used in conjunction with\n internationalization; refer to the documentation for the\n ``gettext`` module for more information on this convention.\n\n``__*__``\n System-defined names. These names are defined by the interpreter\n and its implementation (including the standard library);\n applications should not expect to define additional names using\n this convention. The set of names of this class defined by Python\n may be extended in future versions. See section *Special method\n names*.\n\n``__*``\n Class-private names. Names in this category, when used within the\n context of a class definition, are re-written to use a mangled form\n to help avoid name clashes between "private" attributes of base and\n derived classes. See section *Identifiers (Names)*.\n', + 'id-classes': u'\nReserved classes of identifiers\n*******************************\n\nCertain classes of identifiers (besides keywords) have special\nmeanings. These classes are identified by the patterns of leading and\ntrailing underscore characters:\n\n``_*``\n Not imported by ``from module import *``. The special identifier\n ``_`` is used in the interactive interpreter to store the result of\n the last evaluation; it is stored in the ``__builtin__`` module.\n When not in interactive mode, ``_`` has no special meaning and is\n not defined. See section *The import statement*.\n\n Note: The name ``_`` is often used in conjunction with\n internationalization; refer to the documentation for the\n ``gettext`` module for more information on this convention.\n\n``__*__``\n System-defined names. These names are defined by the interpreter\n and its implementation (including the standard library). Current\n system names are discussed in the *Special method names* section\n and elsewhere. More will likely be defined in future versions of\n Python. *Any* use of ``__*__`` names, in any context, that does\n not follow explicitly documented use, is subject to breakage\n without warning.\n\n``__*``\n Class-private names. 
Names in this category, when used within the\n context of a class definition, are re-written to use a mangled form\n to help avoid name clashes between "private" attributes of base and\n derived classes. See section *Identifiers (Names)*.\n', + 'identifiers': u'\nIdentifiers and keywords\n************************\n\nIdentifiers (also referred to as *names*) are described by the\nfollowing lexical definitions:\n\n identifier ::= (letter|"_") (letter | digit | "_")*\n letter ::= lowercase | uppercase\n lowercase ::= "a"..."z"\n uppercase ::= "A"..."Z"\n digit ::= "0"..."9"\n\nIdentifiers are unlimited in length. Case is significant.\n\n\nKeywords\n========\n\nThe following identifiers are used as reserved words, or *keywords* of\nthe language, and cannot be used as ordinary identifiers. They must\nbe spelled exactly as written here:\n\n and del from not while\n as elif global or with\n assert else if pass yield\n break except import print\n class exec in raise\n continue finally is return\n def for lambda try\n\nChanged in version 2.4: ``None`` became a constant and is now\nrecognized by the compiler as a name for the built-in object ``None``.\nAlthough it is not a keyword, you cannot assign a different object to\nit.\n\nChanged in version 2.5: Both ``as`` and ``with`` are only recognized\nwhen the ``with_statement`` future feature has been enabled. It will\nalways be enabled in Python 2.6. See section *The with statement* for\ndetails. Note that using ``as`` and ``with`` as identifiers will\nalways issue a warning, even when the ``with_statement`` future\ndirective is not in effect.\n\n\nReserved classes of identifiers\n===============================\n\nCertain classes of identifiers (besides keywords) have special\nmeanings. These classes are identified by the patterns of leading and\ntrailing underscore characters:\n\n``_*``\n Not imported by ``from module import *``. The special identifier\n ``_`` is used in the interactive interpreter to store the result of\n the last evaluation; it is stored in the ``__builtin__`` module.\n When not in interactive mode, ``_`` has no special meaning and is\n not defined. See section *The import statement*.\n\n Note: The name ``_`` is often used in conjunction with\n internationalization; refer to the documentation for the\n ``gettext`` module for more information on this convention.\n\n``__*__``\n System-defined names. These names are defined by the interpreter\n and its implementation (including the standard library). Current\n system names are discussed in the *Special method names* section\n and elsewhere. More will likely be defined in future versions of\n Python. *Any* use of ``__*__`` names, in any context, that does\n not follow explicitly documented use, is subject to breakage\n without warning.\n\n``__*``\n Class-private names. Names in this category, when used within the\n context of a class definition, are re-written to use a mangled form\n to help avoid name clashes between "private" attributes of base and\n derived classes. 
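A small sketch of the mangling applied to class-private names; ``Widget`` is an invented example class:

   >>> class Widget(object):
   ...     def __init__(self):
   ...         self.__secret = 42      # stored under the mangled name '_Widget__secret'
   ...
   >>> w = Widget()
   >>> w._Widget__secret
   42
   >>> hasattr(w, '__secret')
   False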
See section *Identifiers (Names)*.\n', 'if': u'\nThe ``if`` statement\n********************\n\nThe ``if`` statement is used for conditional execution:\n\n if_stmt ::= "if" expression ":" suite\n ( "elif" expression ":" suite )*\n ["else" ":" suite]\n\nIt selects exactly one of the suites by evaluating the expressions one\nby one until one is found to be true (see section *Boolean operations*\nfor the definition of true and false); then that suite is executed\n(and no other part of the ``if`` statement is executed or evaluated).\nIf all expressions are false, the suite of the ``else`` clause, if\npresent, is executed.\n', 'imaginary': u'\nImaginary literals\n******************\n\nImaginary literals are described by the following lexical definitions:\n\n imagnumber ::= (floatnumber | intpart) ("j" | "J")\n\nAn imaginary literal yields a complex number with a real part of 0.0.\nComplex numbers are represented as a pair of floating point numbers\nand have the same restrictions on their range. To create a complex\nnumber with a nonzero real part, add a floating point number to it,\ne.g., ``(3+4j)``. Some examples of imaginary literals:\n\n 3.14j 10.j 10j .001j 1e100j 3.14e-10j\n', - 'import': u'\nThe ``import`` statement\n************************\n\n import_stmt ::= "import" module ["as" name] ( "," module ["as" name] )*\n | "from" relative_module "import" identifier ["as" name]\n ( "," identifier ["as" name] )*\n | "from" relative_module "import" "(" identifier ["as" name]\n ( "," identifier ["as" name] )* [","] ")"\n | "from" module "import" "*"\n module ::= (identifier ".")* identifier\n relative_module ::= "."* module | "."+\n name ::= identifier\n\nImport statements are executed in two steps: (1) find a module, and\ninitialize it if necessary; (2) define a name or names in the local\nnamespace (of the scope where the ``import`` statement occurs). The\nstatement comes in two forms differing on whether it uses the ``from``\nkeyword. The first form (without ``from``) repeats these steps for\neach identifier in the list. The form with ``from`` performs step (1)\nonce, and then performs step (2) repeatedly.\n\nTo understand how step (1) occurs, one must first understand how\nPython handles hierarchical naming of modules. To help organize\nmodules and provide a hierarchy in naming, Python has a concept of\npackages. A package can contain other packages and modules while\nmodules cannot contain other modules or packages. From a file system\nperspective, packages are directories and modules are files. The\noriginal specification for packages is still available to read,\nalthough minor details have changed since the writing of that\ndocument.\n\nOnce the name of the module is known (unless otherwise specified, the\nterm "module" will refer to both packages and modules), searching for\nthe module or package can begin. The first place checked is\n``sys.modules``, the cache of all modules that have been imported\npreviously. If the module is found there then it is used in step (2)\nof import.\n\nIf the module is not found in the cache, then ``sys.meta_path`` is\nsearched (the specification for ``sys.meta_path`` can be found in\n**PEP 302**). The object is a list of *finder* objects which are\nqueried in order as to whether they know how to load the module by\ncalling their ``find_module()`` method with the name of the module. 
If\nthe module happens to be contained within a package (as denoted by the\nexistence of a dot in the name), then a second argument to\n``find_module()`` is given as the value of the ``__path__`` attribute\nfrom the parent package (everything up to the last dot in the name of\nthe module being imported). If a finder can find the module it returns\na *loader* (discussed later) or returns ``None``.\n\nIf none of the finders on ``sys.meta_path`` are able to find the\nmodule then some implicitly defined finders are queried.\nImplementations of Python vary in what implicit meta path finders are\ndefined. The one they all do define, though, is one that handles\n``sys.path_hooks``, ``sys.path_importer_cache``, and ``sys.path``.\n\nThe implicit finder searches for the requested module in the "paths"\nspecified in one of two places ("paths" do not have to be file system\npaths). If the module being imported is supposed to be contained\nwithin a package then the second argument passed to ``find_module()``,\n``__path__`` on the parent package, is used as the source of paths. If\nthe module is not contained in a package then ``sys.path`` is used as\nthe source of paths.\n\nOnce the source of paths is chosen it is iterated over to find a\nfinder that can handle that path. The dict at\n``sys.path_importer_cache`` caches finders for paths and is checked\nfor a finder. If the path does not have a finder cached then\n``sys.path_hooks`` is searched by calling each object in the list with\na single argument of the path, returning a finder or raises\n``ImportError``. If a finder is returned then it is cached in\n``sys.path_importer_cache`` and then used for that path entry. If no\nfinder can be found but the path exists then a value of ``None`` is\nstored in ``sys.path_importer_cache`` to signify that an implicit,\nfile-based finder that handles modules stored as individual files\nshould be used for that path. If the path does not exist then a finder\nwhich always returns ``None`` is placed in the cache for the path.\n\nIf no finder can find the module then ``ImportError`` is raised.\nOtherwise some finder returned a loader whose ``load_module()`` method\nis called with the name of the module to load (see **PEP 302** for the\noriginal definition of loaders). A loader has several responsibilities\nto perform on a module it loads. First, if the module already exists\nin ``sys.modules`` (a possibility if the loader is called outside of\nthe import machinery) then it is to use that module for initialization\nand not a new module. But if the module does not exist in\n``sys.modules`` then it is to be added to that dict before\ninitialization begins. If an error occurs during loading of the module\nand it was added to ``sys.modules`` it is to be removed from the dict.\nIf an error occurs but the module was already in ``sys.modules`` it is\nleft in the dict.\n\nThe loader must set several attributes on the module. ``__name__`` is\nto be set to the name of the module. ``__file__`` is to be the "path"\nto the file unless the module is built-in (and thus listed in\n``sys.builtin_module_names``) in which case the attribute is not set.\nIf what is being imported is a package then ``__path__`` is to be set\nto a list of paths to be searched when looking for modules and\npackages contained within the package being imported. ``__package__``\nis optional but should be set to the name of package that contains the\nmodule or package (the empty string is used for module not contained\nin a package). 
``__loader__`` is also optional but should be set to\nthe loader object that is loading the module.\n\nIf an error occurs during loading then the loader raises\n``ImportError`` if some other exception is not already being\npropagated. Otherwise the loader returns the module that was loaded\nand initialized.\n\nWhen step (1) finishes without raising an exception, step (2) can\nbegin.\n\nThe first form of ``import`` statement binds the module name in the\nlocal namespace to the module object, and then goes on to import the\nnext identifier, if any. If the module name is followed by ``as``,\nthe name following ``as`` is used as the local name for the module.\n\nThe ``from`` form does not bind the module name: it goes through the\nlist of identifiers, looks each one of them up in the module found in\nstep (1), and binds the name in the local namespace to the object thus\nfound. As with the first form of ``import``, an alternate local name\ncan be supplied by specifying "``as`` localname". If a name is not\nfound, ``ImportError`` is raised. If the list of identifiers is\nreplaced by a star (``\'*\'``), all public names defined in the module\nare bound in the local namespace of the ``import`` statement..\n\nThe *public names* defined by a module are determined by checking the\nmodule\'s namespace for a variable named ``__all__``; if defined, it\nmust be a sequence of strings which are names defined or imported by\nthat module. The names given in ``__all__`` are all considered public\nand are required to exist. If ``__all__`` is not defined, the set of\npublic names includes all names found in the module\'s namespace which\ndo not begin with an underscore character (``\'_\'``). ``__all__``\nshould contain the entire public API. It is intended to avoid\naccidentally exporting items that are not part of the API (such as\nlibrary modules which were imported and used within the module).\n\nThe ``from`` form with ``*`` may only occur in a module scope. If the\nwild card form of import --- ``import *`` --- is used in a function\nand the function contains or is a nested block with free variables,\nthe compiler will raise a ``SyntaxError``.\n\nWhen specifying what module to import you do not have to specify the\nabsolute name of the module. When a module or package is contained\nwithin another package it is possible to make a relative import within\nthe same top package without having to mention the package name. By\nusing leading dots in the specified module or package after ``from``\nyou can specify how high to traverse up the current package hierarchy\nwithout specifying exact names. One leading dot means the current\npackage where the module making the import exists. Two dots means up\none package level. Three dots is up two levels, etc. So if you execute\n``from . import mod`` from a module in the ``pkg`` package then you\nwill end up importing ``pkg.mod``. If you execute ``from ..subpkg2\nimprt mod`` from within ``pkg.subpkg1`` you will import\n``pkg.subpkg2.mod``. The specification for relative imports is\ncontained within **PEP 328**.\n\n``importlib.import_module()`` is provided to support applications that\ndetermine which modules need to be loaded dynamically.\n\n\nFuture statements\n=================\n\nA *future statement* is a directive to the compiler that a particular\nmodule should be compiled using syntax or semantics that will be\navailable in a specified future release of Python. 
The future\nstatement is intended to ease migration to future versions of Python\nthat introduce incompatible changes to the language. It allows use of\nthe new features on a per-module basis before the release in which the\nfeature becomes standard.\n\n future_statement ::= "from" "__future__" "import" feature ["as" name]\n ("," feature ["as" name])*\n | "from" "__future__" "import" "(" feature ["as" name]\n ("," feature ["as" name])* [","] ")"\n feature ::= identifier\n name ::= identifier\n\nA future statement must appear near the top of the module. The only\nlines that can appear before a future statement are:\n\n* the module docstring (if any),\n\n* comments,\n\n* blank lines, and\n\n* other future statements.\n\nThe features recognized by Python 2.6 are ``unicode_literals``,\n``print_function``, ``absolute_import``, ``division``, ``generators``,\n``nested_scopes`` and ``with_statement``. ``generators``,\n``with_statement``, ``nested_scopes`` are redundant in Python version\n2.6 and above because they are always enabled.\n\nA future statement is recognized and treated specially at compile\ntime: Changes to the semantics of core constructs are often\nimplemented by generating different code. It may even be the case\nthat a new feature introduces new incompatible syntax (such as a new\nreserved word), in which case the compiler may need to parse the\nmodule differently. Such decisions cannot be pushed off until\nruntime.\n\nFor any given release, the compiler knows which feature names have\nbeen defined, and raises a compile-time error if a future statement\ncontains a feature not known to it.\n\nThe direct runtime semantics are the same as for any import statement:\nthere is a standard module ``__future__``, described later, and it\nwill be imported in the usual way at the time the future statement is\nexecuted.\n\nThe interesting runtime semantics depend on the specific feature\nenabled by the future statement.\n\nNote that there is nothing special about the statement:\n\n import __future__ [as name]\n\nThat is not a future statement; it\'s an ordinary import statement with\nno special semantics or syntax restrictions.\n\nCode compiled by an ``exec`` statement or calls to the built-in\nfunctions ``compile()`` and ``execfile()`` that occur in a module\n``M`` containing a future statement will, by default, use the new\nsyntax or semantics associated with the future statement. This can,\nstarting with Python 2.2 be controlled by optional arguments to\n``compile()`` --- see the documentation of that function for details.\n\nA future statement typed at an interactive interpreter prompt will\ntake effect for the rest of the interpreter session. 
If an\ninterpreter is started with the *-i* option, is passed a script name\nto execute, and the script includes a future statement, it will be in\neffect in the interactive session started after the script is\nexecuted.\n\nSee also:\n\n **PEP 236** - Back to the __future__\n The original proposal for the __future__ mechanism.\n', + 'import': u'\nThe ``import`` statement\n************************\n\n import_stmt ::= "import" module ["as" name] ( "," module ["as" name] )*\n | "from" relative_module "import" identifier ["as" name]\n ( "," identifier ["as" name] )*\n | "from" relative_module "import" "(" identifier ["as" name]\n ( "," identifier ["as" name] )* [","] ")"\n | "from" module "import" "*"\n module ::= (identifier ".")* identifier\n relative_module ::= "."* module | "."+\n name ::= identifier\n\nImport statements are executed in two steps: (1) find a module, and\ninitialize it if necessary; (2) define a name or names in the local\nnamespace (of the scope where the ``import`` statement occurs). The\nstatement comes in two forms differing on whether it uses the ``from``\nkeyword. The first form (without ``from``) repeats these steps for\neach identifier in the list. The form with ``from`` performs step (1)\nonce, and then performs step (2) repeatedly.\n\nTo understand how step (1) occurs, one must first understand how\nPython handles hierarchical naming of modules. To help organize\nmodules and provide a hierarchy in naming, Python has a concept of\npackages. A package can contain other packages and modules while\nmodules cannot contain other modules or packages. From a file system\nperspective, packages are directories and modules are files. The\noriginal specification for packages is still available to read,\nalthough minor details have changed since the writing of that\ndocument.\n\nOnce the name of the module is known (unless otherwise specified, the\nterm "module" will refer to both packages and modules), searching for\nthe module or package can begin. The first place checked is\n``sys.modules``, the cache of all modules that have been imported\npreviously. If the module is found there then it is used in step (2)\nof import.\n\nIf the module is not found in the cache, then ``sys.meta_path`` is\nsearched (the specification for ``sys.meta_path`` can be found in\n**PEP 302**). The object is a list of *finder* objects which are\nqueried in order as to whether they know how to load the module by\ncalling their ``find_module()`` method with the name of the module. If\nthe module happens to be contained within a package (as denoted by the\nexistence of a dot in the name), then a second argument to\n``find_module()`` is given as the value of the ``__path__`` attribute\nfrom the parent package (everything up to the last dot in the name of\nthe module being imported). If a finder can find the module it returns\na *loader* (discussed later) or returns ``None``.\n\nIf none of the finders on ``sys.meta_path`` are able to find the\nmodule then some implicitly defined finders are queried.\nImplementations of Python vary in what implicit meta path finders are\ndefined. The one they all do define, though, is one that handles\n``sys.path_hooks``, ``sys.path_importer_cache``, and ``sys.path``.\n\nThe implicit finder searches for the requested module in the "paths"\nspecified in one of two places ("paths" do not have to be file system\npaths). 
If the module being imported is supposed to be contained\nwithin a package then the second argument passed to ``find_module()``,\n``__path__`` on the parent package, is used as the source of paths. If\nthe module is not contained in a package then ``sys.path`` is used as\nthe source of paths.\n\nOnce the source of paths is chosen it is iterated over to find a\nfinder that can handle that path. The dict at\n``sys.path_importer_cache`` caches finders for paths and is checked\nfor a finder. If the path does not have a finder cached then\n``sys.path_hooks`` is searched by calling each object in the list with\na single argument of the path, returning a finder or raises\n``ImportError``. If a finder is returned then it is cached in\n``sys.path_importer_cache`` and then used for that path entry. If no\nfinder can be found but the path exists then a value of ``None`` is\nstored in ``sys.path_importer_cache`` to signify that an implicit,\nfile-based finder that handles modules stored as individual files\nshould be used for that path. If the path does not exist then a finder\nwhich always returns ``None`` is placed in the cache for the path.\n\nIf no finder can find the module then ``ImportError`` is raised.\nOtherwise some finder returned a loader whose ``load_module()`` method\nis called with the name of the module to load (see **PEP 302** for the\noriginal definition of loaders). A loader has several responsibilities\nto perform on a module it loads. First, if the module already exists\nin ``sys.modules`` (a possibility if the loader is called outside of\nthe import machinery) then it is to use that module for initialization\nand not a new module. But if the module does not exist in\n``sys.modules`` then it is to be added to that dict before\ninitialization begins. If an error occurs during loading of the module\nand it was added to ``sys.modules`` it is to be removed from the dict.\nIf an error occurs but the module was already in ``sys.modules`` it is\nleft in the dict.\n\nThe loader must set several attributes on the module. ``__name__`` is\nto be set to the name of the module. ``__file__`` is to be the "path"\nto the file unless the module is built-in (and thus listed in\n``sys.builtin_module_names``) in which case the attribute is not set.\nIf what is being imported is a package then ``__path__`` is to be set\nto a list of paths to be searched when looking for modules and\npackages contained within the package being imported. ``__package__``\nis optional but should be set to the name of package that contains the\nmodule or package (the empty string is used for module not contained\nin a package). ``__loader__`` is also optional but should be set to\nthe loader object that is loading the module.\n\nIf an error occurs during loading then the loader raises\n``ImportError`` if some other exception is not already being\npropagated. Otherwise the loader returns the module that was loaded\nand initialized.\n\nWhen step (1) finishes without raising an exception, step (2) can\nbegin.\n\nThe first form of ``import`` statement binds the module name in the\nlocal namespace to the module object, and then goes on to import the\nnext identifier, if any. If the module name is followed by ``as``,\nthe name following ``as`` is used as the local name for the module.\n\nThe ``from`` form does not bind the module name: it goes through the\nlist of identifiers, looks each one of them up in the module found in\nstep (1), and binds the name in the local namespace to the object thus\nfound. 
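A rough, self-contained sketch of the finder/loader protocol summarized above, using an imaginary in-memory module called ``demo``; ``DictFinder``, ``DictLoader`` and ``SOURCES`` are invented names and the error handling is cut down to the minimum required of a loader:

   import imp
   import sys

   SOURCES = {'demo': 'x = 42\n'}        # pretend "module files" kept in memory

   class DictFinder(object):
       def find_module(self, fullname, path=None):
           # A finder returns a loader for modules it knows about, else None.
           if fullname in SOURCES:
               return DictLoader()
           return None

   class DictLoader(object):
       def load_module(self, fullname):
           # Reuse an existing entry rather than re-initialising it.
           if fullname in sys.modules:
               return sys.modules[fullname]
           mod = imp.new_module(fullname)        # sets __name__ for us
           mod.__file__ = '<in-memory>'
           mod.__loader__ = self
           sys.modules[fullname] = mod
           try:
               exec SOURCES[fullname] in mod.__dict__
           except Exception:
               del sys.modules[fullname]         # a failed load must not stay cached
               raise
           return mod

   sys.meta_path.append(DictFinder())

   import demo
   print demo.x                                  # prints 42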
As with the first form of ``import``, an alternate local name\ncan be supplied by specifying "``as`` localname". If a name is not\nfound, ``ImportError`` is raised. If the list of identifiers is\nreplaced by a star (``\'*\'``), all public names defined in the module\nare bound in the local namespace of the ``import`` statement..\n\nThe *public names* defined by a module are determined by checking the\nmodule\'s namespace for a variable named ``__all__``; if defined, it\nmust be a sequence of strings which are names defined or imported by\nthat module. The names given in ``__all__`` are all considered public\nand are required to exist. If ``__all__`` is not defined, the set of\npublic names includes all names found in the module\'s namespace which\ndo not begin with an underscore character (``\'_\'``). ``__all__``\nshould contain the entire public API. It is intended to avoid\naccidentally exporting items that are not part of the API (such as\nlibrary modules which were imported and used within the module).\n\nThe ``from`` form with ``*`` may only occur in a module scope. If the\nwild card form of import --- ``import *`` --- is used in a function\nand the function contains or is a nested block with free variables,\nthe compiler will raise a ``SyntaxError``.\n\nWhen specifying what module to import you do not have to specify the\nabsolute name of the module. When a module or package is contained\nwithin another package it is possible to make a relative import within\nthe same top package without having to mention the package name. By\nusing leading dots in the specified module or package after ``from``\nyou can specify how high to traverse up the current package hierarchy\nwithout specifying exact names. One leading dot means the current\npackage where the module making the import exists. Two dots means up\none package level. Three dots is up two levels, etc. So if you execute\n``from . import mod`` from a module in the ``pkg`` package then you\nwill end up importing ``pkg.mod``. If you execute ``from ..subpkg2\nimport mod`` from within ``pkg.subpkg1`` you will import\n``pkg.subpkg2.mod``. The specification for relative imports is\ncontained within **PEP 328**.\n\n``importlib.import_module()`` is provided to support applications that\ndetermine which modules need to be loaded dynamically.\n\n\nFuture statements\n=================\n\nA *future statement* is a directive to the compiler that a particular\nmodule should be compiled using syntax or semantics that will be\navailable in a specified future release of Python. The future\nstatement is intended to ease migration to future versions of Python\nthat introduce incompatible changes to the language. It allows use of\nthe new features on a per-module basis before the release in which the\nfeature becomes standard.\n\n future_statement ::= "from" "__future__" "import" feature ["as" name]\n ("," feature ["as" name])*\n | "from" "__future__" "import" "(" feature ["as" name]\n ("," feature ["as" name])* [","] ")"\n feature ::= identifier\n name ::= identifier\n\nA future statement must appear near the top of the module. The only\nlines that can appear before a future statement are:\n\n* the module docstring (if any),\n\n* comments,\n\n* blank lines, and\n\n* other future statements.\n\nThe features recognized by Python 2.6 are ``unicode_literals``,\n``print_function``, ``absolute_import``, ``division``, ``generators``,\n``nested_scopes`` and ``with_statement``. 
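A small sketch of how ``__all__`` controls the names exported by a wildcard import; ``shapes``, ``area`` and ``_helper`` are invented names:

   # shapes.py -- a hypothetical module
   __all__ = ['area']              # only 'area' is part of the public API

   def area(r):
       return 3.14159 * r * r

   def _helper(r):                 # underscore prefix: private by convention
       return r * r

   # In another module, 'from shapes import *' would bind 'area' only;
   # '_helper' (and anything else not listed in __all__) stays unbound.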
``generators``,\n``with_statement``, ``nested_scopes`` are redundant in Python version\n2.6 and above because they are always enabled.\n\nA future statement is recognized and treated specially at compile\ntime: Changes to the semantics of core constructs are often\nimplemented by generating different code. It may even be the case\nthat a new feature introduces new incompatible syntax (such as a new\nreserved word), in which case the compiler may need to parse the\nmodule differently. Such decisions cannot be pushed off until\nruntime.\n\nFor any given release, the compiler knows which feature names have\nbeen defined, and raises a compile-time error if a future statement\ncontains a feature not known to it.\n\nThe direct runtime semantics are the same as for any import statement:\nthere is a standard module ``__future__``, described later, and it\nwill be imported in the usual way at the time the future statement is\nexecuted.\n\nThe interesting runtime semantics depend on the specific feature\nenabled by the future statement.\n\nNote that there is nothing special about the statement:\n\n import __future__ [as name]\n\nThat is not a future statement; it\'s an ordinary import statement with\nno special semantics or syntax restrictions.\n\nCode compiled by an ``exec`` statement or calls to the built-in\nfunctions ``compile()`` and ``execfile()`` that occur in a module\n``M`` containing a future statement will, by default, use the new\nsyntax or semantics associated with the future statement. This can,\nstarting with Python 2.2 be controlled by optional arguments to\n``compile()`` --- see the documentation of that function for details.\n\nA future statement typed at an interactive interpreter prompt will\ntake effect for the rest of the interpreter session. If an\ninterpreter is started with the *-i* option, is passed a script name\nto execute, and the script includes a future statement, it will be in\neffect in the interactive session started after the script is\nexecuted.\n\nSee also:\n\n **PEP 236** - Back to the __future__\n The original proposal for the __future__ mechanism.\n', 'in': u'\nComparisons\n***********\n\nUnlike C, all comparison operations in Python have the same priority,\nwhich is lower than that of any arithmetic, shifting or bitwise\noperation. Also unlike C, expressions like ``a < b < c`` have the\ninterpretation that is conventional in mathematics:\n\n comparison ::= or_expr ( comp_operator or_expr )*\n comp_operator ::= "<" | ">" | "==" | ">=" | "<=" | "<>" | "!="\n | "is" ["not"] | ["not"] "in"\n\nComparisons yield boolean values: ``True`` or ``False``.\n\nComparisons can be chained arbitrarily, e.g., ``x < y <= z`` is\nequivalent to ``x < y and y <= z``, except that ``y`` is evaluated\nonly once (but in both cases ``z`` is not evaluated at all when ``x <\ny`` is found to be false).\n\nFormally, if *a*, *b*, *c*, ..., *y*, *z* are expressions and *op1*,\n*op2*, ..., *opN* are comparison operators, then ``a op1 b op2 c ... y\nopN z`` is equivalent to ``a op1 b and b op2 c and ... y opN z``,\nexcept that each expression is evaluated at most once.\n\nNote that ``a op1 b op2 c`` doesn\'t imply any kind of comparison\nbetween *a* and *c*, so that, e.g., ``x < y > z`` is perfectly legal\n(though perhaps not pretty).\n\nThe forms ``<>`` and ``!=`` are equivalent; for consistency with C,\n``!=`` is preferred; where ``!=`` is mentioned below ``<>`` is also\naccepted. 
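An interactive sketch of the chained-comparison equivalence described above; ``middle`` is a made-up helper used to show single evaluation:

   >>> x = 5
   >>> 1 < x < 10                    # same as (1 < x) and (x < 10), with x evaluated once
   True
   >>> 10 < x > 1                    # legal, but no comparison between 10 and 1 is implied
   False
   >>> def middle():
   ...     print 'evaluated'
   ...     return 5
   ...
   >>> 1 < middle() <= 10            # the middle expression is evaluated only once
   evaluated
   True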
The ``<>`` spelling is considered obsolescent.\n\nThe operators ``<``, ``>``, ``==``, ``>=``, ``<=``, and ``!=`` compare\nthe values of two objects. The objects need not have the same type.\nIf both are numbers, they are converted to a common type. Otherwise,\nobjects of different types *always* compare unequal, and are ordered\nconsistently but arbitrarily. You can control comparison behavior of\nobjects of non-built-in types by defining a ``__cmp__`` method or rich\ncomparison methods like ``__gt__``, described in section *Special\nmethod names*.\n\n(This unusual definition of comparison was used to simplify the\ndefinition of operations like sorting and the ``in`` and ``not in``\noperators. In the future, the comparison rules for objects of\ndifferent types are likely to change.)\n\nComparison of objects of the same type depends on the type:\n\n* Numbers are compared arithmetically.\n\n* Strings are compared lexicographically using the numeric equivalents\n (the result of the built-in function ``ord()``) of their characters.\n Unicode and 8-bit strings are fully interoperable in this behavior.\n [4]\n\n* Tuples and lists are compared lexicographically using comparison of\n corresponding elements. This means that to compare equal, each\n element must compare equal and the two sequences must be of the same\n type and have the same length.\n\n If not equal, the sequences are ordered the same as their first\n differing elements. For example, ``cmp([1,2,x], [1,2,y])`` returns\n the same as ``cmp(x,y)``. If the corresponding element does not\n exist, the shorter sequence is ordered first (for example, ``[1,2] <\n [1,2,3]``).\n\n* Mappings (dictionaries) compare equal if and only if their sorted\n (key, value) lists compare equal. [5] Outcomes other than equality\n are resolved consistently, but are not otherwise defined. [6]\n\n* Most other objects of built-in types compare unequal unless they are\n the same object; the choice whether one object is considered smaller\n or larger than another one is made arbitrarily but consistently\n within one execution of a program.\n\nThe operators ``in`` and ``not in`` test for collection membership.\n``x in s`` evaluates to true if *x* is a member of the collection *s*,\nand false otherwise. ``x not in s`` returns the negation of ``x in\ns``. The collection membership test has traditionally been bound to\nsequences; an object is a member of a collection if the collection is\na sequence and contains an element equal to that object. However, it\nmake sense for many other object types to support membership tests\nwithout being a sequence. In particular, dictionaries (for keys) and\nsets support membership testing.\n\nFor the list and tuple types, ``x in y`` is true if and only if there\nexists an index *i* such that ``x == y[i]`` is true.\n\nFor the Unicode and string types, ``x in y`` is true if and only if\n*x* is a substring of *y*. An equivalent test is ``y.find(x) != -1``.\nNote, *x* and *y* need not be the same type; consequently, ``u\'ab\' in\n\'abc\'`` will return ``True``. 
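A few interactive examples of the membership tests described above; ``Evens`` is an invented class defining ``__contains__()``:

   >>> 'key' in {'key': 1}           # dictionaries test membership of their keys
   True
   >>> 'lo' in 'hello'               # strings test for substrings
   True
   >>> class Evens(object):
   ...     def __contains__(self, n):
   ...         return n % 2 == 0
   ...
   >>> 4 in Evens(), 5 in Evens()
   (True, False)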
Empty strings are always considered to\nbe a substring of any other string, so ``"" in "abc"`` will return\n``True``.\n\nChanged in version 2.3: Previously, *x* was required to be a string of\nlength ``1``.\n\nFor user-defined classes which define the ``__contains__()`` method,\n``x in y`` is true if and only if ``y.__contains__(x)`` is true.\n\nFor user-defined classes which do not define ``__contains__()`` but do\ndefine ``__iter__()``, ``x in y`` is true if some value ``z`` with ``x\n== z`` is produced while iterating over ``y``. If an exception is\nraised during the iteration, it is as if ``in`` raised that exception.\n\nLastly, the old-style iteration protocol is tried: if a class defines\n``__getitem__()``, ``x in y`` is true if and only if there is a non-\nnegative integer index *i* such that ``x == y[i]``, and all lower\ninteger indices do not raise ``IndexError`` exception. (If any other\nexception is raised, it is as if ``in`` raised that exception).\n\nThe operator ``not in`` is defined to have the inverse true value of\n``in``.\n\nThe operators ``is`` and ``is not`` test for object identity: ``x is\ny`` is true if and only if *x* and *y* are the same object. ``x is\nnot y`` yields the inverse truth value. [7]\n', 'integers': u'\nInteger and long integer literals\n*********************************\n\nInteger and long integer literals are described by the following\nlexical definitions:\n\n longinteger ::= integer ("l" | "L")\n integer ::= decimalinteger | octinteger | hexinteger | bininteger\n decimalinteger ::= nonzerodigit digit* | "0"\n octinteger ::= "0" ("o" | "O") octdigit+ | "0" octdigit+\n hexinteger ::= "0" ("x" | "X") hexdigit+\n bininteger ::= "0" ("b" | "B") bindigit+\n nonzerodigit ::= "1"..."9"\n octdigit ::= "0"..."7"\n bindigit ::= "0" | "1"\n hexdigit ::= digit | "a"..."f" | "A"..."F"\n\nAlthough both lower case ``\'l\'`` and upper case ``\'L\'`` are allowed as\nsuffix for long integers, it is strongly recommended to always use\n``\'L\'``, since the letter ``\'l\'`` looks too much like the digit\n``\'1\'``.\n\nPlain integer literals that are above the largest representable plain\ninteger (e.g., 2147483647 when using 32-bit arithmetic) are accepted\nas if they were long integers instead. [1] There is no limit for long\ninteger literals apart from what can be stored in available memory.\n\nSome examples of plain integer literals (first row) and long integer\nliterals (second and third rows):\n\n 7 2147483647 0177\n 3L 79228162514264337593543950336L 0377L 0x100000000L\n 79228162514264337593543950336 0xdeadbeef\n', 'lambda': u'\nLambdas\n*******\n\n lambda_form ::= "lambda" [parameter_list]: expression\n old_lambda_form ::= "lambda" [parameter_list]: old_expression\n\nLambda forms (lambda expressions) have the same syntactic position as\nexpressions. 
They are a shorthand to create anonymous functions; the\nexpression ``lambda arguments: expression`` yields a function object.\nThe unnamed object behaves like a function object defined with\n\n def name(arguments):\n return expression\n\nSee section *Function definitions* for the syntax of parameter lists.\nNote that functions created with lambda forms cannot contain\nstatements.\n', 'lists': u'\nList displays\n*************\n\nA list display is a possibly empty series of expressions enclosed in\nsquare brackets:\n\n list_display ::= "[" [expression_list | list_comprehension] "]"\n list_comprehension ::= expression list_for\n list_for ::= "for" target_list "in" old_expression_list [list_iter]\n old_expression_list ::= old_expression [("," old_expression)+ [","]]\n old_expression ::= or_test | old_lambda_form\n list_iter ::= list_for | list_if\n list_if ::= "if" old_expression [list_iter]\n\nA list display yields a new list object. Its contents are specified\nby providing either a list of expressions or a list comprehension.\nWhen a comma-separated list of expressions is supplied, its elements\nare evaluated from left to right and placed into the list object in\nthat order. When a list comprehension is supplied, it consists of a\nsingle expression followed by at least one ``for`` clause and zero or\nmore ``for`` or ``if`` clauses. In this case, the elements of the new\nlist are those that would be produced by considering each of the\n``for`` or ``if`` clauses a block, nesting from left to right, and\nevaluating the expression to produce a list element each time the\ninnermost block is reached [1].\n', - 'naming': u"\nNaming and binding\n******************\n\n*Names* refer to objects. Names are introduced by name binding\noperations. Each occurrence of a name in the program text refers to\nthe *binding* of that name established in the innermost function block\ncontaining the use.\n\nA *block* is a piece of Python program text that is executed as a\nunit. The following are blocks: a module, a function body, and a class\ndefinition. Each command typed interactively is a block. A script\nfile (a file given as standard input to the interpreter or specified\non the interpreter command line the first argument) is a code block.\nA script command (a command specified on the interpreter command line\nwith the '**-c**' option) is a code block. The file read by the\nbuilt-in function ``execfile()`` is a code block. The string argument\npassed to the built-in function ``eval()`` and to the ``exec``\nstatement is a code block. The expression read and evaluated by the\nbuilt-in function ``input()`` is a code block.\n\nA code block is executed in an *execution frame*. A frame contains\nsome administrative information (used for debugging) and determines\nwhere and how execution continues after the code block's execution has\ncompleted.\n\nA *scope* defines the visibility of a name within a block. If a local\nvariable is defined in a block, its scope includes that block. If the\ndefinition occurs in a function block, the scope extends to any blocks\ncontained within the defining one, unless a contained block introduces\na different binding for the name. The scope of names defined in a\nclass block is limited to the class block; it does not extend to the\ncode blocks of methods -- this includes generator expressions since\nthey are implemented using a function scope. 
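Two small sketches of the lambda and list display semantics described above (illustrative only, assuming CPython 2.7):

   add = lambda x, y: x + y              # behaves like the equivalent ``def``
   def add2(x, y):
       return x + y
   assert add(2, 3) == add2(2, 3) == 5

   # ``for`` clauses nest left to right, so the leftmost loop is outermost
   pairs = [(x, c) for x in range(2) for c in 'ab']
   assert pairs == [(0, 'a'), (0, 'b'), (1, 'a'), (1, 'b')]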
This means that the\nfollowing will fail:\n\n class A:\n a = 42\n b = list(a + i for i in range(10))\n\nWhen a name is used in a code block, it is resolved using the nearest\nenclosing scope. The set of all such scopes visible to a code block\nis called the block's *environment*.\n\nIf a name is bound in a block, it is a local variable of that block.\nIf a name is bound at the module level, it is a global variable. (The\nvariables of the module code block are local and global.) If a\nvariable is used in a code block but not defined there, it is a *free\nvariable*.\n\nWhen a name is not found at all, a ``NameError`` exception is raised.\nIf the name refers to a local variable that has not been bound, a\n``UnboundLocalError`` exception is raised. ``UnboundLocalError`` is a\nsubclass of ``NameError``.\n\nThe following constructs bind names: formal parameters to functions,\n``import`` statements, class and function definitions (these bind the\nclass or function name in the defining block), and targets that are\nidentifiers if occurring in an assignment, ``for`` loop header, in the\nsecond position of an ``except`` clause header or after ``as`` in a\n``with`` statement. The ``import`` statement of the form ``from ...\nimport *`` binds all names defined in the imported module, except\nthose beginning with an underscore. This form may only be used at the\nmodule level.\n\nA target occurring in a ``del`` statement is also considered bound for\nthis purpose (though the actual semantics are to unbind the name). It\nis illegal to unbind a name that is referenced by an enclosing scope;\nthe compiler will report a ``SyntaxError``.\n\nEach assignment or import statement occurs within a block defined by a\nclass or function definition or at the module level (the top-level\ncode block).\n\nIf a name binding operation occurs anywhere within a code block, all\nuses of the name within the block are treated as references to the\ncurrent block. This can lead to errors when a name is used within a\nblock before it is bound. This rule is subtle. Python lacks\ndeclarations and allows name binding operations to occur anywhere\nwithin a code block. The local variables of a code block can be\ndetermined by scanning the entire text of the block for name binding\noperations.\n\nIf the global statement occurs within a block, all uses of the name\nspecified in the statement refer to the binding of that name in the\ntop-level namespace. Names are resolved in the top-level namespace by\nsearching the global namespace, i.e. the namespace of the module\ncontaining the code block, and the builtins namespace, the namespace\nof the module ``__builtin__``. The global namespace is searched\nfirst. If the name is not found there, the builtins namespace is\nsearched. The global statement must precede all uses of the name.\n\nThe builtins namespace associated with the execution of a code block\nis actually found by looking up the name ``__builtins__`` in its\nglobal namespace; this should be a dictionary or a module (in the\nlatter case the module's dictionary is used). By default, when in the\n``__main__`` module, ``__builtins__`` is the built-in module\n``__builtin__`` (note: no 's'); when in any other module,\n``__builtins__`` is an alias for the dictionary of the ``__builtin__``\nmodule itself. ``__builtins__`` can be set to a user-created\ndictionary to create a weak form of restricted execution.\n\n**CPython implementation detail:** Users should not touch\n``__builtins__``; it is strictly an implementation detail. 
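As a small aside on the binding rules above: a name bound anywhere in a function body is local to the whole body, which is where ``UnboundLocalError`` usually comes from (a minimal sketch, assuming CPython 2.7):

   x = 10

   def broken():
       print x            # UnboundLocalError when called: the assignment
       x = 20             # below makes ``x`` local to the entire block

   def fine():
       print x            # no binding of ``x`` here, so the global (10) is used

   fine()                 # prints 10
   try:
       broken()
   except UnboundLocalError:
       pass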
Users\nwanting to override values in the builtins namespace should ``import``\nthe ``__builtin__`` (no 's') module and modify its attributes\nappropriately.\n\nThe namespace for a module is automatically created the first time a\nmodule is imported. The main module for a script is always called\n``__main__``.\n\nThe global statement has the same scope as a name binding operation in\nthe same block. If the nearest enclosing scope for a free variable\ncontains a global statement, the free variable is treated as a global.\n\nA class definition is an executable statement that may use and define\nnames. These references follow the normal rules for name resolution.\nThe namespace of the class definition becomes the attribute dictionary\nof the class. Names defined at the class scope are not visible in\nmethods.\n\n\nInteraction with dynamic features\n=================================\n\nThere are several cases where Python statements are illegal when used\nin conjunction with nested scopes that contain free variables.\n\nIf a variable is referenced in an enclosing scope, it is illegal to\ndelete the name. An error will be reported at compile time.\n\nIf the wild card form of import --- ``import *`` --- is used in a\nfunction and the function contains or is a nested block with free\nvariables, the compiler will raise a ``SyntaxError``.\n\nIf ``exec`` is used in a function and the function contains or is a\nnested block with free variables, the compiler will raise a\n``SyntaxError`` unless the exec explicitly specifies the local\nnamespace for the ``exec``. (In other words, ``exec obj`` would be\nillegal, but ``exec obj in ns`` would be legal.)\n\nThe ``eval()``, ``execfile()``, and ``input()`` functions and the\n``exec`` statement do not have access to the full environment for\nresolving names. Names may be resolved in the local and global\nnamespaces of the caller. Free variables are not resolved in the\nnearest enclosing namespace, but in the global namespace. [1] The\n``exec`` statement and the ``eval()`` and ``execfile()`` functions\nhave optional arguments to override the global and local namespace.\nIf only one namespace is specified, it is used for both.\n", + 'naming': u"\nNaming and binding\n******************\n\n*Names* refer to objects. Names are introduced by name binding\noperations. Each occurrence of a name in the program text refers to\nthe *binding* of that name established in the innermost function block\ncontaining the use.\n\nA *block* is a piece of Python program text that is executed as a\nunit. The following are blocks: a module, a function body, and a class\ndefinition. Each command typed interactively is a block. A script\nfile (a file given as standard input to the interpreter or specified\non the interpreter command line the first argument) is a code block.\nA script command (a command specified on the interpreter command line\nwith the '**-c**' option) is a code block. The file read by the\nbuilt-in function ``execfile()`` is a code block. The string argument\npassed to the built-in function ``eval()`` and to the ``exec``\nstatement is a code block. The expression read and evaluated by the\nbuilt-in function ``input()`` is a code block.\n\nA code block is executed in an *execution frame*. A frame contains\nsome administrative information (used for debugging) and determines\nwhere and how execution continues after the code block's execution has\ncompleted.\n\nA *scope* defines the visibility of a name within a block. 
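A brief sketch of two points made above: names defined at class scope are not visible inside methods, and the ``global`` statement rebinds at module level (names are illustrative; assumes CPython 2.7):

   counter = 0

   def bump():
       global counter          # without this, ``counter += 1`` would create a local
       counter += 1

   class C(object):
       tag = 'spam'
       def get_tag(self):
           return C.tag        # plain ``tag`` would raise NameError here; class-scope
                               # names are reached via the class or ``self``

   bump()
   assert counter == 1
   assert C().get_tag() == 'spam'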
If a local\nvariable is defined in a block, its scope includes that block. If the\ndefinition occurs in a function block, the scope extends to any blocks\ncontained within the defining one, unless a contained block introduces\na different binding for the name. The scope of names defined in a\nclass block is limited to the class block; it does not extend to the\ncode blocks of methods -- this includes generator expressions since\nthey are implemented using a function scope. This means that the\nfollowing will fail:\n\n class A:\n a = 42\n b = list(a + i for i in range(10))\n\nWhen a name is used in a code block, it is resolved using the nearest\nenclosing scope. The set of all such scopes visible to a code block\nis called the block's *environment*.\n\nIf a name is bound in a block, it is a local variable of that block.\nIf a name is bound at the module level, it is a global variable. (The\nvariables of the module code block are local and global.) If a\nvariable is used in a code block but not defined there, it is a *free\nvariable*.\n\nWhen a name is not found at all, a ``NameError`` exception is raised.\nIf the name refers to a local variable that has not been bound, a\n``UnboundLocalError`` exception is raised. ``UnboundLocalError`` is a\nsubclass of ``NameError``.\n\nThe following constructs bind names: formal parameters to functions,\n``import`` statements, class and function definitions (these bind the\nclass or function name in the defining block), and targets that are\nidentifiers if occurring in an assignment, ``for`` loop header, in the\nsecond position of an ``except`` clause header or after ``as`` in a\n``with`` statement. The ``import`` statement of the form ``from ...\nimport *`` binds all names defined in the imported module, except\nthose beginning with an underscore. This form may only be used at the\nmodule level.\n\nA target occurring in a ``del`` statement is also considered bound for\nthis purpose (though the actual semantics are to unbind the name). It\nis illegal to unbind a name that is referenced by an enclosing scope;\nthe compiler will report a ``SyntaxError``.\n\nEach assignment or import statement occurs within a block defined by a\nclass or function definition or at the module level (the top-level\ncode block).\n\nIf a name binding operation occurs anywhere within a code block, all\nuses of the name within the block are treated as references to the\ncurrent block. This can lead to errors when a name is used within a\nblock before it is bound. This rule is subtle. Python lacks\ndeclarations and allows name binding operations to occur anywhere\nwithin a code block. The local variables of a code block can be\ndetermined by scanning the entire text of the block for name binding\noperations.\n\nIf the global statement occurs within a block, all uses of the name\nspecified in the statement refer to the binding of that name in the\ntop-level namespace. Names are resolved in the top-level namespace by\nsearching the global namespace, i.e. the namespace of the module\ncontaining the code block, and the builtins namespace, the namespace\nof the module ``__builtin__``. The global namespace is searched\nfirst. If the name is not found there, the builtins namespace is\nsearched. 
The global statement must precede all uses of the name.\n\nThe builtins namespace associated with the execution of a code block\nis actually found by looking up the name ``__builtins__`` in its\nglobal namespace; this should be a dictionary or a module (in the\nlatter case the module's dictionary is used). By default, when in the\n``__main__`` module, ``__builtins__`` is the built-in module\n``__builtin__`` (note: no 's'); when in any other module,\n``__builtins__`` is an alias for the dictionary of the ``__builtin__``\nmodule itself. ``__builtins__`` can be set to a user-created\ndictionary to create a weak form of restricted execution.\n\n**CPython implementation detail:** Users should not touch\n``__builtins__``; it is strictly an implementation detail. Users\nwanting to override values in the builtins namespace should ``import``\nthe ``__builtin__`` (no 's') module and modify its attributes\nappropriately.\n\nThe namespace for a module is automatically created the first time a\nmodule is imported. The main module for a script is always called\n``__main__``.\n\nThe ``global`` statement has the same scope as a name binding\noperation in the same block. If the nearest enclosing scope for a\nfree variable contains a global statement, the free variable is\ntreated as a global.\n\nA class definition is an executable statement that may use and define\nnames. These references follow the normal rules for name resolution.\nThe namespace of the class definition becomes the attribute dictionary\nof the class. Names defined at the class scope are not visible in\nmethods.\n\n\nInteraction with dynamic features\n=================================\n\nThere are several cases where Python statements are illegal when used\nin conjunction with nested scopes that contain free variables.\n\nIf a variable is referenced in an enclosing scope, it is illegal to\ndelete the name. An error will be reported at compile time.\n\nIf the wild card form of import --- ``import *`` --- is used in a\nfunction and the function contains or is a nested block with free\nvariables, the compiler will raise a ``SyntaxError``.\n\nIf ``exec`` is used in a function and the function contains or is a\nnested block with free variables, the compiler will raise a\n``SyntaxError`` unless the exec explicitly specifies the local\nnamespace for the ``exec``. (In other words, ``exec obj`` would be\nillegal, but ``exec obj in ns`` would be legal.)\n\nThe ``eval()``, ``execfile()``, and ``input()`` functions and the\n``exec`` statement do not have access to the full environment for\nresolving names. Names may be resolved in the local and global\nnamespaces of the caller. Free variables are not resolved in the\nnearest enclosing namespace, but in the global namespace. [1] The\n``exec`` statement and the ``eval()`` and ``execfile()`` functions\nhave optional arguments to override the global and local namespace.\nIf only one namespace is specified, it is used for both.\n", 'numbers': u"\nNumeric literals\n****************\n\nThere are four types of numeric literals: plain integers, long\nintegers, floating point numbers, and imaginary numbers. 
There are no\ncomplex literals (complex numbers can be formed by adding a real\nnumber and an imaginary number).\n\nNote that numeric literals do not include a sign; a phrase like ``-1``\nis actually an expression composed of the unary operator '``-``' and\nthe literal ``1``.\n", 'numeric-types': u'\nEmulating numeric types\n***********************\n\nThe following methods can be defined to emulate numeric objects.\nMethods corresponding to operations that are not supported by the\nparticular kind of number implemented (e.g., bitwise operations for\nnon-integral numbers) should be left undefined.\n\nobject.__add__(self, other)\nobject.__sub__(self, other)\nobject.__mul__(self, other)\nobject.__floordiv__(self, other)\nobject.__mod__(self, other)\nobject.__divmod__(self, other)\nobject.__pow__(self, other[, modulo])\nobject.__lshift__(self, other)\nobject.__rshift__(self, other)\nobject.__and__(self, other)\nobject.__xor__(self, other)\nobject.__or__(self, other)\n\n These methods are called to implement the binary arithmetic\n operations (``+``, ``-``, ``*``, ``//``, ``%``, ``divmod()``,\n ``pow()``, ``**``, ``<<``, ``>>``, ``&``, ``^``, ``|``). For\n instance, to evaluate the expression ``x + y``, where *x* is an\n instance of a class that has an ``__add__()`` method,\n ``x.__add__(y)`` is called. The ``__divmod__()`` method should be\n the equivalent to using ``__floordiv__()`` and ``__mod__()``; it\n should not be related to ``__truediv__()`` (described below). Note\n that ``__pow__()`` should be defined to accept an optional third\n argument if the ternary version of the built-in ``pow()`` function\n is to be supported.\n\n If one of those methods does not support the operation with the\n supplied arguments, it should return ``NotImplemented``.\n\nobject.__div__(self, other)\nobject.__truediv__(self, other)\n\n The division operator (``/``) is implemented by these methods. The\n ``__truediv__()`` method is used when ``__future__.division`` is in\n effect, otherwise ``__div__()`` is used. If only one of these two\n methods is defined, the object will not support division in the\n alternate context; ``TypeError`` will be raised instead.\n\nobject.__radd__(self, other)\nobject.__rsub__(self, other)\nobject.__rmul__(self, other)\nobject.__rdiv__(self, other)\nobject.__rtruediv__(self, other)\nobject.__rfloordiv__(self, other)\nobject.__rmod__(self, other)\nobject.__rdivmod__(self, other)\nobject.__rpow__(self, other)\nobject.__rlshift__(self, other)\nobject.__rrshift__(self, other)\nobject.__rand__(self, other)\nobject.__rxor__(self, other)\nobject.__ror__(self, other)\n\n These methods are called to implement the binary arithmetic\n operations (``+``, ``-``, ``*``, ``/``, ``%``, ``divmod()``,\n ``pow()``, ``**``, ``<<``, ``>>``, ``&``, ``^``, ``|``) with\n reflected (swapped) operands. These functions are only called if\n the left operand does not support the corresponding operation and\n the operands are of different types. [2] For instance, to evaluate\n the expression ``x - y``, where *y* is an instance of a class that\n has an ``__rsub__()`` method, ``y.__rsub__(x)`` is called if\n ``x.__sub__(y)`` returns *NotImplemented*.\n\n Note that ternary ``pow()`` will not try calling ``__rpow__()``\n (the coercion rules would become too complicated).\n\n Note: If the right operand\'s type is a subclass of the left operand\'s\n type and that subclass provides the reflected method for the\n operation, this method will be called before the left operand\'s\n non-reflected method. 
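A minimal sketch of the reflected-operand protocol described above; the class and attribute names are only illustrative (assumes CPython 2.7):

   class Metres(object):
       def __init__(self, n):
           self.n = n
       def __add__(self, other):
           if isinstance(other, (int, float)):
               return Metres(self.n + other)
           return NotImplemented          # let the other operand have a try
       def __radd__(self, other):         # used for ``3 + Metres(2)`` after
           return self.__add__(other)     # ``(3).__add__(...)`` returns NotImplemented

   assert (Metres(2) + 3).n == 5
   assert (3 + Metres(2)).n == 5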
This behavior allows subclasses to\n override their ancestors\' operations.\n\nobject.__iadd__(self, other)\nobject.__isub__(self, other)\nobject.__imul__(self, other)\nobject.__idiv__(self, other)\nobject.__itruediv__(self, other)\nobject.__ifloordiv__(self, other)\nobject.__imod__(self, other)\nobject.__ipow__(self, other[, modulo])\nobject.__ilshift__(self, other)\nobject.__irshift__(self, other)\nobject.__iand__(self, other)\nobject.__ixor__(self, other)\nobject.__ior__(self, other)\n\n These methods are called to implement the augmented arithmetic\n assignments (``+=``, ``-=``, ``*=``, ``/=``, ``//=``, ``%=``,\n ``**=``, ``<<=``, ``>>=``, ``&=``, ``^=``, ``|=``). These methods\n should attempt to do the operation in-place (modifying *self*) and\n return the result (which could be, but does not have to be,\n *self*). If a specific method is not defined, the augmented\n assignment falls back to the normal methods. For instance, to\n execute the statement ``x += y``, where *x* is an instance of a\n class that has an ``__iadd__()`` method, ``x.__iadd__(y)`` is\n called. If *x* is an instance of a class that does not define a\n ``__iadd__()`` method, ``x.__add__(y)`` and ``y.__radd__(x)`` are\n considered, as with the evaluation of ``x + y``.\n\nobject.__neg__(self)\nobject.__pos__(self)\nobject.__abs__(self)\nobject.__invert__(self)\n\n Called to implement the unary arithmetic operations (``-``, ``+``,\n ``abs()`` and ``~``).\n\nobject.__complex__(self)\nobject.__int__(self)\nobject.__long__(self)\nobject.__float__(self)\n\n Called to implement the built-in functions ``complex()``,\n ``int()``, ``long()``, and ``float()``. Should return a value of\n the appropriate type.\n\nobject.__oct__(self)\nobject.__hex__(self)\n\n Called to implement the built-in functions ``oct()`` and ``hex()``.\n Should return a string value.\n\nobject.__index__(self)\n\n Called to implement ``operator.index()``. Also called whenever\n Python needs an integer object (such as in slicing). Must return\n an integer (int or long).\n\n New in version 2.5.\n\nobject.__coerce__(self, other)\n\n Called to implement "mixed-mode" numeric arithmetic. Should either\n return a 2-tuple containing *self* and *other* converted to a\n common numeric type, or ``None`` if conversion is impossible. When\n the common type would be the type of ``other``, it is sufficient to\n return ``None``, since the interpreter will also ask the other\n object to attempt a coercion (but sometimes, if the implementation\n of the other type cannot be changed, it is useful to do the\n conversion to the other type here). A return value of\n ``NotImplemented`` is equivalent to returning ``None``.\n', - 'objects': u'\nObjects, values and types\n*************************\n\n*Objects* are Python\'s abstraction for data. All data in a Python\nprogram is represented by objects or by relations between objects. (In\na sense, and in conformance to Von Neumann\'s model of a "stored\nprogram computer," code is also represented by objects.)\n\nEvery object has an identity, a type and a value. An object\'s\n*identity* never changes once it has been created; you may think of it\nas the object\'s address in memory. The \'``is``\' operator compares the\nidentity of two objects; the ``id()`` function returns an integer\nrepresenting its identity (currently implemented as its address). An\nobject\'s *type* is also unchangeable. 
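An illustrative sketch of the in-place protocol above: ``+=`` prefers ``__iadd__()`` and otherwise falls back to the ordinary ``__add__()``/``__radd__()`` path (assumes CPython 2.7):

   class Bag(object):
       def __init__(self):
           self.items = []
       def __iadd__(self, item):          # mutate in place and return self
           self.items.append(item)
           return self

   b = Bag()
   b += 'spam'
   b += 'eggs'
   assert b.items == ['spam', 'eggs']

   t = (1, 2)
   t += (3,)                              # tuples have no __iadd__; falls back to
   assert t == (1, 2, 3)                  # __add__ and rebinds the name ``t``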
[1] An object\'s type determines\nthe operations that the object supports (e.g., "does it have a\nlength?") and also defines the possible values for objects of that\ntype. The ``type()`` function returns an object\'s type (which is an\nobject itself). The *value* of some objects can change. Objects\nwhose value can change are said to be *mutable*; objects whose value\nis unchangeable once they are created are called *immutable*. (The\nvalue of an immutable container object that contains a reference to a\nmutable object can change when the latter\'s value is changed; however\nthe container is still considered immutable, because the collection of\nobjects it contains cannot be changed. So, immutability is not\nstrictly the same as having an unchangeable value, it is more subtle.)\nAn object\'s mutability is determined by its type; for instance,\nnumbers, strings and tuples are immutable, while dictionaries and\nlists are mutable.\n\nObjects are never explicitly destroyed; however, when they become\nunreachable they may be garbage-collected. An implementation is\nallowed to postpone garbage collection or omit it altogether --- it is\na matter of implementation quality how garbage collection is\nimplemented, as long as no objects are collected that are still\nreachable.\n\n**CPython implementation detail:** CPython currently uses a reference-\ncounting scheme with (optional) delayed detection of cyclically linked\ngarbage, which collects most objects as soon as they become\nunreachable, but is not guaranteed to collect garbage containing\ncircular references. See the documentation of the ``gc`` module for\ninformation on controlling the collection of cyclic garbage. Other\nimplementations act differently and CPython may change.\n\nNote that the use of the implementation\'s tracing or debugging\nfacilities may keep objects alive that would normally be collectable.\nAlso note that catching an exception with a \'``try``...``except``\'\nstatement may keep objects alive.\n\nSome objects contain references to "external" resources such as open\nfiles or windows. It is understood that these resources are freed\nwhen the object is garbage-collected, but since garbage collection is\nnot guaranteed to happen, such objects also provide an explicit way to\nrelease the external resource, usually a ``close()`` method. Programs\nare strongly recommended to explicitly close such objects. The\n\'``try``...``finally``\' statement provides a convenient way to do\nthis.\n\nSome objects contain references to other objects; these are called\n*containers*. Examples of containers are tuples, lists and\ndictionaries. The references are part of a container\'s value. In\nmost cases, when we talk about the value of a container, we imply the\nvalues, not the identities of the contained objects; however, when we\ntalk about the mutability of a container, only the identities of the\nimmediately contained objects are implied. So, if an immutable\ncontainer (like a tuple) contains a reference to a mutable object, its\nvalue changes if that mutable object is changed.\n\nTypes affect almost all aspects of object behavior. Even the\nimportance of object identity is affected in some sense: for immutable\ntypes, operations that compute new values may actually return a\nreference to any existing object with the same type and value, while\nfor mutable objects this is not allowed. 
E.g., after ``a = 1; b =\n1``, ``a`` and ``b`` may or may not refer to the same object with the\nvalue one, depending on the implementation, but after ``c = []; d =\n[]``, ``c`` and ``d`` are guaranteed to refer to two different,\nunique, newly created empty lists. (Note that ``c = d = []`` assigns\nthe same object to both ``c`` and ``d``.)\n', - 'operator-summary': u'\nSummary\n*******\n\nThe following table summarizes the operator precedences in Python,\nfrom lowest precedence (least binding) to highest precedence (most\nbinding). Operators in the same box have the same precedence. Unless\nthe syntax is explicitly given, operators are binary. Operators in\nthe same box group left to right (except for comparisons, including\ntests, which all have the same precedence and chain from left to right\n--- see section *Comparisons* --- and exponentiation, which groups\nfrom right to left).\n\n+-------------------------------------------------+---------------------------------------+\n| Operator | Description |\n+=================================================+=======================================+\n| ``lambda`` | Lambda expression |\n+-------------------------------------------------+---------------------------------------+\n| ``if`` -- ``else`` | Conditional expression |\n+-------------------------------------------------+---------------------------------------+\n| ``or`` | Boolean OR |\n+-------------------------------------------------+---------------------------------------+\n| ``and`` | Boolean AND |\n+-------------------------------------------------+---------------------------------------+\n| ``not`` *x* | Boolean NOT |\n+-------------------------------------------------+---------------------------------------+\n| ``in``, ``not`` ``in``, ``is``, ``is not``, | Comparisons, including membership |\n| ``<``, ``<=``, ``>``, ``>=``, ``<>``, ``!=``, | tests and identity tests, |\n| ``==`` | |\n+-------------------------------------------------+---------------------------------------+\n| ``|`` | Bitwise OR |\n+-------------------------------------------------+---------------------------------------+\n| ``^`` | Bitwise XOR |\n+-------------------------------------------------+---------------------------------------+\n| ``&`` | Bitwise AND |\n+-------------------------------------------------+---------------------------------------+\n| ``<<``, ``>>`` | Shifts |\n+-------------------------------------------------+---------------------------------------+\n| ``+``, ``-`` | Addition and subtraction |\n+-------------------------------------------------+---------------------------------------+\n| ``*``, ``/``, ``//``, ``%`` | Multiplication, division, remainder |\n+-------------------------------------------------+---------------------------------------+\n| ``+x``, ``-x``, ``~x`` | Positive, negative, bitwise NOT |\n+-------------------------------------------------+---------------------------------------+\n| ``**`` | Exponentiation [8] |\n+-------------------------------------------------+---------------------------------------+\n| ``x[index]``, ``x[index:index]``, | Subscription, slicing, call, |\n| ``x(arguments...)``, ``x.attribute`` | attribute reference |\n+-------------------------------------------------+---------------------------------------+\n| ``(expressions...)``, ``[expressions...]``, | Binding or tuple display, list |\n| ``{key:datum...}``, ```expressions...``` | display, dictionary display, string |\n| | conversion 
|\n+-------------------------------------------------+---------------------------------------+\n\n-[ Footnotes ]-\n\n[1] In Python 2.3 and later releases, a list comprehension "leaks" the\n control variables of each ``for`` it contains into the containing\n scope. However, this behavior is deprecated, and relying on it\n will not work in Python 3.0\n\n[2] While ``abs(x%y) < abs(y)`` is true mathematically, for floats it\n may not be true numerically due to roundoff. For example, and\n assuming a platform on which a Python float is an IEEE 754 double-\n precision number, in order that ``-1e-100 % 1e100`` have the same\n sign as ``1e100``, the computed result is ``-1e-100 + 1e100``,\n which is numerically exactly equal to ``1e100``. Function\n ``fmod()`` in the ``math`` module returns a result whose sign\n matches the sign of the first argument instead, and so returns\n ``-1e-100`` in this case. Which approach is more appropriate\n depends on the application.\n\n[3] If x is very close to an exact integer multiple of y, it\'s\n possible for ``floor(x/y)`` to be one larger than ``(x-x%y)/y``\n due to rounding. In such cases, Python returns the latter result,\n in order to preserve that ``divmod(x,y)[0] * y + x % y`` be very\n close to ``x``.\n\n[4] While comparisons between unicode strings make sense at the byte\n level, they may be counter-intuitive to users. For example, the\n strings ``u"\\u00C7"`` and ``u"\\u0043\\u0327"`` compare differently,\n even though they both represent the same unicode character (LATIN\n CAPITAL LETTER C WITH CEDILLA). To compare strings in a human\n recognizable way, compare using ``unicodedata.normalize()``.\n\n[5] The implementation computes this efficiently, without constructing\n lists or sorting.\n\n[6] Earlier versions of Python used lexicographic comparison of the\n sorted (key, value) lists, but this was very expensive for the\n common case of comparing for equality. An even earlier version of\n Python compared dictionaries by identity only, but this caused\n surprises because people expected to be able to test a dictionary\n for emptiness by comparing it to ``{}``.\n\n[7] Due to automatic garbage-collection, free lists, and the dynamic\n nature of descriptors, you may notice seemingly unusual behaviour\n in certain uses of the ``is`` operator, like those involving\n comparisons between instance methods, or constants. Check their\n documentation for more info.\n\n[8] The power operator ``**`` binds less tightly than an arithmetic or\n bitwise unary operator on its right, that is, ``2**-1`` is\n ``0.5``.\n', + 'objects': u'\nObjects, values and types\n*************************\n\n*Objects* are Python\'s abstraction for data. All data in a Python\nprogram is represented by objects or by relations between objects. (In\na sense, and in conformance to Von Neumann\'s model of a "stored\nprogram computer," code is also represented by objects.)\n\nEvery object has an identity, a type and a value. An object\'s\n*identity* never changes once it has been created; you may think of it\nas the object\'s address in memory. The \'``is``\' operator compares the\nidentity of two objects; the ``id()`` function returns an integer\nrepresenting its identity (currently implemented as its address). An\nobject\'s *type* is also unchangeable. [1] An object\'s type determines\nthe operations that the object supports (e.g., "does it have a\nlength?") and also defines the possible values for objects of that\ntype. 
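A few interactive checks related to footnotes [2], [7] and [8] above; note that the ``is`` result for small integers is a CPython implementation detail, shown here only as an illustration:

   >>> -1e-100 % 1e100                  # sign follows the right operand
   1e+100
   >>> import math
   >>> math.fmod(-1e-100, 1e100)        # sign follows the left operand
   -1e-100
   >>> a = 1; b = 1; a is b             # may be True: CPython caches small ints
   True
   >>> c = []; d = []; c is d           # always two distinct lists
   False
   >>> 2**-1                            # parsed as 2**(-1): ** binds less tightly
   0.5                                  # than the unary minus on its right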
The ``type()`` function returns an object\'s type (which is an\nobject itself). The *value* of some objects can change. Objects\nwhose value can change are said to be *mutable*; objects whose value\nis unchangeable once they are created are called *immutable*. (The\nvalue of an immutable container object that contains a reference to a\nmutable object can change when the latter\'s value is changed; however\nthe container is still considered immutable, because the collection of\nobjects it contains cannot be changed. So, immutability is not\nstrictly the same as having an unchangeable value, it is more subtle.)\nAn object\'s mutability is determined by its type; for instance,\nnumbers, strings and tuples are immutable, while dictionaries and\nlists are mutable.\n\nObjects are never explicitly destroyed; however, when they become\nunreachable they may be garbage-collected. An implementation is\nallowed to postpone garbage collection or omit it altogether --- it is\na matter of implementation quality how garbage collection is\nimplemented, as long as no objects are collected that are still\nreachable.\n\n**CPython implementation detail:** CPython currently uses a reference-\ncounting scheme with (optional) delayed detection of cyclically linked\ngarbage, which collects most objects as soon as they become\nunreachable, but is not guaranteed to collect garbage containing\ncircular references. See the documentation of the ``gc`` module for\ninformation on controlling the collection of cyclic garbage. Other\nimplementations act differently and CPython may change. Do not depend\non immediate finalization of objects when they become unreachable (ex:\nalways close files).\n\nNote that the use of the implementation\'s tracing or debugging\nfacilities may keep objects alive that would normally be collectable.\nAlso note that catching an exception with a \'``try``...``except``\'\nstatement may keep objects alive.\n\nSome objects contain references to "external" resources such as open\nfiles or windows. It is understood that these resources are freed\nwhen the object is garbage-collected, but since garbage collection is\nnot guaranteed to happen, such objects also provide an explicit way to\nrelease the external resource, usually a ``close()`` method. Programs\nare strongly recommended to explicitly close such objects. The\n\'``try``...``finally``\' statement provides a convenient way to do\nthis.\n\nSome objects contain references to other objects; these are called\n*containers*. Examples of containers are tuples, lists and\ndictionaries. The references are part of a container\'s value. In\nmost cases, when we talk about the value of a container, we imply the\nvalues, not the identities of the contained objects; however, when we\ntalk about the mutability of a container, only the identities of the\nimmediately contained objects are implied. So, if an immutable\ncontainer (like a tuple) contains a reference to a mutable object, its\nvalue changes if that mutable object is changed.\n\nTypes affect almost all aspects of object behavior. Even the\nimportance of object identity is affected in some sense: for immutable\ntypes, operations that compute new values may actually return a\nreference to any existing object with the same type and value, while\nfor mutable objects this is not allowed. 
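On the finalization caveat noted above (do not rely on garbage collection to release external resources), a minimal sketch of releasing a resource explicitly; the filename is illustrative:

   f = open('data.txt')
   try:
       data = f.read()
   finally:
       f.close()          # released deterministically, whatever the GC does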
E.g., after ``a = 1; b =\n1``, ``a`` and ``b`` may or may not refer to the same object with the\nvalue one, depending on the implementation, but after ``c = []; d =\n[]``, ``c`` and ``d`` are guaranteed to refer to two different,\nunique, newly created empty lists. (Note that ``c = d = []`` assigns\nthe same object to both ``c`` and ``d``.)\n', + 'operator-summary': u'\nSummary\n*******\n\nThe following table summarizes the operator precedences in Python,\nfrom lowest precedence (least binding) to highest precedence (most\nbinding). Operators in the same box have the same precedence. Unless\nthe syntax is explicitly given, operators are binary. Operators in\nthe same box group left to right (except for comparisons, including\ntests, which all have the same precedence and chain from left to right\n--- see section *Comparisons* --- and exponentiation, which groups\nfrom right to left).\n\n+-------------------------------------------------+---------------------------------------+\n| Operator | Description |\n+=================================================+=======================================+\n| ``lambda`` | Lambda expression |\n+-------------------------------------------------+---------------------------------------+\n| ``if`` -- ``else`` | Conditional expression |\n+-------------------------------------------------+---------------------------------------+\n| ``or`` | Boolean OR |\n+-------------------------------------------------+---------------------------------------+\n| ``and`` | Boolean AND |\n+-------------------------------------------------+---------------------------------------+\n| ``not`` *x* | Boolean NOT |\n+-------------------------------------------------+---------------------------------------+\n| ``in``, ``not`` ``in``, ``is``, ``is not``, | Comparisons, including membership |\n| ``<``, ``<=``, ``>``, ``>=``, ``<>``, ``!=``, | tests and identity tests, |\n| ``==`` | |\n+-------------------------------------------------+---------------------------------------+\n| ``|`` | Bitwise OR |\n+-------------------------------------------------+---------------------------------------+\n| ``^`` | Bitwise XOR |\n+-------------------------------------------------+---------------------------------------+\n| ``&`` | Bitwise AND |\n+-------------------------------------------------+---------------------------------------+\n| ``<<``, ``>>`` | Shifts |\n+-------------------------------------------------+---------------------------------------+\n| ``+``, ``-`` | Addition and subtraction |\n+-------------------------------------------------+---------------------------------------+\n| ``*``, ``/``, ``//``, ``%`` | Multiplication, division, remainder |\n| | [8] |\n+-------------------------------------------------+---------------------------------------+\n| ``+x``, ``-x``, ``~x`` | Positive, negative, bitwise NOT |\n+-------------------------------------------------+---------------------------------------+\n| ``**`` | Exponentiation [9] |\n+-------------------------------------------------+---------------------------------------+\n| ``x[index]``, ``x[index:index]``, | Subscription, slicing, call, |\n| ``x(arguments...)``, ``x.attribute`` | attribute reference |\n+-------------------------------------------------+---------------------------------------+\n| ``(expressions...)``, ``[expressions...]``, | Binding or tuple display, list |\n| ``{key:datum...}``, ```expressions...``` | display, dictionary display, string |\n| | conversion 
|\n+-------------------------------------------------+---------------------------------------+\n\n-[ Footnotes ]-\n\n[1] In Python 2.3 and later releases, a list comprehension "leaks" the\n control variables of each ``for`` it contains into the containing\n scope. However, this behavior is deprecated, and relying on it\n will not work in Python 3.0\n\n[2] While ``abs(x%y) < abs(y)`` is true mathematically, for floats it\n may not be true numerically due to roundoff. For example, and\n assuming a platform on which a Python float is an IEEE 754 double-\n precision number, in order that ``-1e-100 % 1e100`` have the same\n sign as ``1e100``, the computed result is ``-1e-100 + 1e100``,\n which is numerically exactly equal to ``1e100``. The function\n ``math.fmod()`` returns a result whose sign matches the sign of\n the first argument instead, and so returns ``-1e-100`` in this\n case. Which approach is more appropriate depends on the\n application.\n\n[3] If x is very close to an exact integer multiple of y, it\'s\n possible for ``floor(x/y)`` to be one larger than ``(x-x%y)/y``\n due to rounding. In such cases, Python returns the latter result,\n in order to preserve that ``divmod(x,y)[0] * y + x % y`` be very\n close to ``x``.\n\n[4] While comparisons between unicode strings make sense at the byte\n level, they may be counter-intuitive to users. For example, the\n strings ``u"\\u00C7"`` and ``u"\\u0043\\u0327"`` compare differently,\n even though they both represent the same unicode character (LATIN\n CAPITAL LETTER C WITH CEDILLA). To compare strings in a human\n recognizable way, compare using ``unicodedata.normalize()``.\n\n[5] The implementation computes this efficiently, without constructing\n lists or sorting.\n\n[6] Earlier versions of Python used lexicographic comparison of the\n sorted (key, value) lists, but this was very expensive for the\n common case of comparing for equality. An even earlier version of\n Python compared dictionaries by identity only, but this caused\n surprises because people expected to be able to test a dictionary\n for emptiness by comparing it to ``{}``.\n\n[7] Due to automatic garbage-collection, free lists, and the dynamic\n nature of descriptors, you may notice seemingly unusual behaviour\n in certain uses of the ``is`` operator, like those involving\n comparisons between instance methods, or constants. Check their\n documentation for more info.\n\n[8] The ``%`` operator is also used for string formatting; the same\n precedence applies.\n\n[9] The power operator ``**`` binds less tightly than an arithmetic or\n bitwise unary operator on its right, that is, ``2**-1`` is\n ``0.5``.\n', 'pass': u'\nThe ``pass`` statement\n**********************\n\n pass_stmt ::= "pass"\n\n``pass`` is a null operation --- when it is executed, nothing happens.\nIt is useful as a placeholder when a statement is required\nsyntactically, but no code needs to be executed, for example:\n\n def f(arg): pass # a function that does nothing (yet)\n\n class C: pass # a class with no methods (yet)\n', 'power': u'\nThe power operator\n******************\n\nThe power operator binds more tightly than unary operators on its\nleft; it binds less tightly than unary operators on its right. 
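A quick check of footnote [8] above: ``%`` used for string formatting has the same precedence as multiplication, so it is applied first (illustrative, CPython 2.7):

   >>> '%d! ' % 3 * 2                   # parsed as ('%d! ' % 3) * 2
   '3! 3! '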
The\nsyntax is:\n\n power ::= primary ["**" u_expr]\n\nThus, in an unparenthesized sequence of power and unary operators, the\noperators are evaluated from right to left (this does not constrain\nthe evaluation order for the operands): ``-1**2`` results in ``-1``.\n\nThe power operator has the same semantics as the built-in ``pow()``\nfunction, when called with two arguments: it yields its left argument\nraised to the power of its right argument. The numeric arguments are\nfirst converted to a common type. The result type is that of the\narguments after coercion.\n\nWith mixed operand types, the coercion rules for binary arithmetic\noperators apply. For int and long int operands, the result has the\nsame type as the operands (after coercion) unless the second argument\nis negative; in that case, all arguments are converted to float and a\nfloat result is delivered. For example, ``10**2`` returns ``100``, but\n``10**-2`` returns ``0.01``. (This last feature was added in Python\n2.2. In Python 2.1 and before, if both arguments were of integer types\nand the second argument was negative, an exception was raised).\n\nRaising ``0.0`` to a negative power results in a\n``ZeroDivisionError``. Raising a negative number to a fractional power\nresults in a ``ValueError``.\n', 'print': u'\nThe ``print`` statement\n***********************\n\n print_stmt ::= "print" ([expression ("," expression)* [","]]\n | ">>" expression [("," expression)+ [","]])\n\n``print`` evaluates each expression in turn and writes the resulting\nobject to standard output (see below). If an object is not a string,\nit is first converted to a string using the rules for string\nconversions. The (resulting or original) string is then written. A\nspace is written before each object is (converted and) written, unless\nthe output system believes it is positioned at the beginning of a\nline. This is the case (1) when no characters have yet been written\nto standard output, (2) when the last character written to standard\noutput is a whitespace character except ``\' \'``, or (3) when the last\nwrite operation on standard output was not a ``print`` statement. (In\nsome cases it may be functional to write an empty string to standard\noutput for this reason.)\n\nNote: Objects which act like file objects but which are not the built-in\n file objects often do not properly emulate this aspect of the file\n object\'s behavior, so it is best not to rely on this.\n\nA ``\'\\n\'`` character is written at the end, unless the ``print``\nstatement ends with a comma. This is the only action if the statement\ncontains just the keyword ``print``.\n\nStandard output is defined as the file object named ``stdout`` in the\nbuilt-in module ``sys``. If no such object exists, or if it does not\nhave a ``write()`` method, a ``RuntimeError`` exception is raised.\n\n``print`` also has an extended form, defined by the second portion of\nthe syntax described above. This form is sometimes referred to as\n"``print`` chevron." In this form, the first expression after the\n``>>`` must evaluate to a "file-like" object, specifically an object\nthat has a ``write()`` method as described above. With this extended\nform, the subsequent expressions are printed to this file object. 
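Returning briefly to the power operator described above, a few illustrative checks (assumes CPython 2.7):

   >>> 2 ** 3 ** 2            # ** groups right to left: 2 ** (3 ** 2)
   512
   >>> -1 ** 2                # the unary minus applies to the result
   -1
   >>> 10 ** -2               # negative exponent: operands converted to float
   0.01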
If\nthe first expression evaluates to ``None``, then ``sys.stdout`` is\nused as the file for output.\n', @@ -63,21 +63,21 @@ 'shifting': u'\nShifting operations\n*******************\n\nThe shifting operations have lower priority than the arithmetic\noperations:\n\n shift_expr ::= a_expr | shift_expr ( "<<" | ">>" ) a_expr\n\nThese operators accept plain or long integers as arguments. The\narguments are converted to a common type. They shift the first\nargument to the left or right by the number of bits given by the\nsecond argument.\n\nA right shift by *n* bits is defined as division by ``pow(2, n)``. A\nleft shift by *n* bits is defined as multiplication with ``pow(2,\nn)``. Negative shift counts raise a ``ValueError`` exception.\n\nNote: In the current implementation, the right-hand operand is required to\n be at most ``sys.maxsize``. If the right-hand operand is larger\n than ``sys.maxsize`` an ``OverflowError`` exception is raised.\n', 'slicings': u'\nSlicings\n********\n\nA slicing selects a range of items in a sequence object (e.g., a\nstring, tuple or list). Slicings may be used as expressions or as\ntargets in assignment or ``del`` statements. The syntax for a\nslicing:\n\n slicing ::= simple_slicing | extended_slicing\n simple_slicing ::= primary "[" short_slice "]"\n extended_slicing ::= primary "[" slice_list "]"\n slice_list ::= slice_item ("," slice_item)* [","]\n slice_item ::= expression | proper_slice | ellipsis\n proper_slice ::= short_slice | long_slice\n short_slice ::= [lower_bound] ":" [upper_bound]\n long_slice ::= short_slice ":" [stride]\n lower_bound ::= expression\n upper_bound ::= expression\n stride ::= expression\n ellipsis ::= "..."\n\nThere is ambiguity in the formal syntax here: anything that looks like\nan expression list also looks like a slice list, so any subscription\ncan be interpreted as a slicing. Rather than further complicating the\nsyntax, this is disambiguated by defining that in this case the\ninterpretation as a subscription takes priority over the\ninterpretation as a slicing (this is the case if the slice list\ncontains no proper slice nor ellipses). Similarly, when the slice\nlist has exactly one short slice and no trailing comma, the\ninterpretation as a simple slicing takes priority over that as an\nextended slicing.\n\nThe semantics for a simple slicing are as follows. The primary must\nevaluate to a sequence object. The lower and upper bound expressions,\nif present, must evaluate to plain integers; defaults are zero and the\n``sys.maxint``, respectively. If either bound is negative, the\nsequence\'s length is added to it. The slicing now selects all items\nwith index *k* such that ``i <= k < j`` where *i* and *j* are the\nspecified lower and upper bounds. This may be an empty sequence. It\nis not an error if *i* or *j* lie outside the range of valid indexes\n(such items don\'t exist so they aren\'t selected).\n\nThe semantics for an extended slicing are as follows. The primary\nmust evaluate to a mapping object, and it is indexed with a key that\nis constructed from the slice list, as follows. If the slice list\ncontains at least one comma, the key is a tuple containing the\nconversion of the slice items; otherwise, the conversion of the lone\nslice item is the key. The conversion of a slice item that is an\nexpression is that expression. The conversion of an ellipsis slice\nitem is the built-in ``Ellipsis`` object. 
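A minimal sketch of the extended ``print >>`` form described above (assumes Python 2, where ``print`` is a statement):

   from StringIO import StringIO

   buf = StringIO()
   print >> buf, 'spam', 42        # chevron form: written to buf, not to stdout
   assert buf.getvalue() == 'spam 42\n'
   print >> None, 'eggs'           # a None file falls back to sys.stdout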
The conversion of a proper\nslice is a slice object (see section *The standard type hierarchy*)\nwhose ``start``, ``stop`` and ``step`` attributes are the values of\nthe expressions given as lower bound, upper bound and stride,\nrespectively, substituting ``None`` for missing expressions.\n', 'specialattrs': u"\nSpecial Attributes\n******************\n\nThe implementation adds a few special read-only attributes to several\nobject types, where they are relevant. Some of these are not reported\nby the ``dir()`` built-in function.\n\nobject.__dict__\n\n A dictionary or other mapping object used to store an object's\n (writable) attributes.\n\nobject.__methods__\n\n Deprecated since version 2.2: Use the built-in function ``dir()``\n to get a list of an object's attributes. This attribute is no\n longer available.\n\nobject.__members__\n\n Deprecated since version 2.2: Use the built-in function ``dir()``\n to get a list of an object's attributes. This attribute is no\n longer available.\n\ninstance.__class__\n\n The class to which a class instance belongs.\n\nclass.__bases__\n\n The tuple of base classes of a class object.\n\nclass.__name__\n\n The name of the class or type.\n\nThe following attributes are only supported by *new-style class*es.\n\nclass.__mro__\n\n This attribute is a tuple of classes that are considered when\n looking for base classes during method resolution.\n\nclass.mro()\n\n This method can be overridden by a metaclass to customize the\n method resolution order for its instances. It is called at class\n instantiation, and its result is stored in ``__mro__``.\n\nclass.__subclasses__()\n\n Each new-style class keeps a list of weak references to its\n immediate subclasses. This method returns a list of all those\n references still alive. Example:\n\n >>> int.__subclasses__()\n []\n\n-[ Footnotes ]-\n\n[1] Additional information on these special methods may be found in\n the Python Reference Manual (*Basic customization*).\n\n[2] As a consequence, the list ``[1, 2]`` is considered equal to\n ``[1.0, 2.0]``, and similarly for tuples.\n\n[3] They must have since the parser can't tell the type of the\n operands.\n\n[4] To format only a tuple you should therefore provide a singleton\n tuple whose only element is the tuple to be formatted.\n\n[5] The advantage of leaving the newline on is that returning an empty\n string is then an unambiguous EOF indication. It is also possible\n (in cases where it might matter, for example, if you want to make\n an exact copy of a file while scanning its lines) to tell whether\n the last line of a file ended in a newline or not (yes this\n happens!).\n", - 'specialnames': u'\nSpecial method names\n********************\n\nA class can implement certain operations that are invoked by special\nsyntax (such as arithmetic operations or subscripting and slicing) by\ndefining methods with special names. This is Python\'s approach to\n*operator overloading*, allowing classes to define their own behavior\nwith respect to language operators. For instance, if a class defines\na method named ``__getitem__()``, and ``x`` is an instance of this\nclass, then ``x[i]`` is roughly equivalent to ``x.__getitem__(i)`` for\nold-style classes and ``type(x).__getitem__(x, i)`` for new-style\nclasses. 
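A short sketch of the special attributes described above for new-style classes (class names are illustrative; assumes a fresh CPython 2.7 session):

   class Base(object):
       pass
   class Derived(Base):
       pass

   assert Derived.__bases__ == (Base,)
   assert Derived.__mro__ == (Derived, Base, object)
   assert Base.__subclasses__() == [Derived]    # only subclass defined so far
   assert Derived.__name__ == 'Derived'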
Except where mentioned, attempts to execute an operation\nraise an exception when no appropriate method is defined (typically\n``AttributeError`` or ``TypeError``).\n\nWhen implementing a class that emulates any built-in type, it is\nimportant that the emulation only be implemented to the degree that it\nmakes sense for the object being modelled. For example, some\nsequences may work well with retrieval of individual elements, but\nextracting a slice may not make sense. (One example of this is the\n``NodeList`` interface in the W3C\'s Document Object Model.)\n\n\nBasic customization\n===================\n\nobject.__new__(cls[, ...])\n\n Called to create a new instance of class *cls*. ``__new__()`` is a\n static method (special-cased so you need not declare it as such)\n that takes the class of which an instance was requested as its\n first argument. The remaining arguments are those passed to the\n object constructor expression (the call to the class). The return\n value of ``__new__()`` should be the new object instance (usually\n an instance of *cls*).\n\n Typical implementations create a new instance of the class by\n invoking the superclass\'s ``__new__()`` method using\n ``super(currentclass, cls).__new__(cls[, ...])`` with appropriate\n arguments and then modifying the newly-created instance as\n necessary before returning it.\n\n If ``__new__()`` returns an instance of *cls*, then the new\n instance\'s ``__init__()`` method will be invoked like\n ``__init__(self[, ...])``, where *self* is the new instance and the\n remaining arguments are the same as were passed to ``__new__()``.\n\n If ``__new__()`` does not return an instance of *cls*, then the new\n instance\'s ``__init__()`` method will not be invoked.\n\n ``__new__()`` is intended mainly to allow subclasses of immutable\n types (like int, str, or tuple) to customize instance creation. It\n is also commonly overridden in custom metaclasses in order to\n customize class creation.\n\nobject.__init__(self[, ...])\n\n Called when the instance is created. The arguments are those\n passed to the class constructor expression. If a base class has an\n ``__init__()`` method, the derived class\'s ``__init__()`` method,\n if any, must explicitly call it to ensure proper initialization of\n the base class part of the instance; for example:\n ``BaseClass.__init__(self, [args...])``. As a special constraint\n on constructors, no value may be returned; doing so will cause a\n ``TypeError`` to be raised at runtime.\n\nobject.__del__(self)\n\n Called when the instance is about to be destroyed. This is also\n called a destructor. If a base class has a ``__del__()`` method,\n the derived class\'s ``__del__()`` method, if any, must explicitly\n call it to ensure proper deletion of the base class part of the\n instance. Note that it is possible (though not recommended!) for\n the ``__del__()`` method to postpone destruction of the instance by\n creating a new reference to it. It may then be called at a later\n time when this new reference is deleted. It is not guaranteed that\n ``__del__()`` methods are called for objects that still exist when\n the interpreter exits.\n\n Note: ``del x`` doesn\'t directly call ``x.__del__()`` --- the former\n decrements the reference count for ``x`` by one, and the latter\n is only called when ``x``\'s reference count reaches zero. 
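A small sketch of the ``__new__()``/``__init__()`` split described above, using the common case of subclassing an immutable type (names are illustrative; assumes CPython 2.7):

   class Celsius(float):
       def __new__(cls, value):
           # the float value must be chosen here: floats are immutable,
           # so __init__ would be too late to change it
           return super(Celsius, cls).__new__(cls, value)
       def __init__(self, value):
           self.scale = 'C'            # ordinary mutable attribute

   t = Celsius(21.5)
   assert isinstance(t, float) and t == 21.5 and t.scale == 'C'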
Some\n common situations that may prevent the reference count of an\n object from going to zero include: circular references between\n objects (e.g., a doubly-linked list or a tree data structure with\n parent and child pointers); a reference to the object on the\n stack frame of a function that caught an exception (the traceback\n stored in ``sys.exc_traceback`` keeps the stack frame alive); or\n a reference to the object on the stack frame that raised an\n unhandled exception in interactive mode (the traceback stored in\n ``sys.last_traceback`` keeps the stack frame alive). The first\n situation can only be remedied by explicitly breaking the cycles;\n the latter two situations can be resolved by storing ``None`` in\n ``sys.exc_traceback`` or ``sys.last_traceback``. Circular\n references which are garbage are detected when the option cycle\n detector is enabled (it\'s on by default), but can only be cleaned\n up if there are no Python-level ``__del__()`` methods involved.\n Refer to the documentation for the ``gc`` module for more\n information about how ``__del__()`` methods are handled by the\n cycle detector, particularly the description of the ``garbage``\n value.\n\n Warning: Due to the precarious circumstances under which ``__del__()``\n methods are invoked, exceptions that occur during their execution\n are ignored, and a warning is printed to ``sys.stderr`` instead.\n Also, when ``__del__()`` is invoked in response to a module being\n deleted (e.g., when execution of the program is done), other\n globals referenced by the ``__del__()`` method may already have\n been deleted or in the process of being torn down (e.g. the\n import machinery shutting down). For this reason, ``__del__()``\n methods should do the absolute minimum needed to maintain\n external invariants. Starting with version 1.5, Python\n guarantees that globals whose name begins with a single\n underscore are deleted from their module before other globals are\n deleted; if no other references to such globals exist, this may\n help in assuring that imported modules are still available at the\n time when the ``__del__()`` method is called.\n\nobject.__repr__(self)\n\n Called by the ``repr()`` built-in function and by string\n conversions (reverse quotes) to compute the "official" string\n representation of an object. If at all possible, this should look\n like a valid Python expression that could be used to recreate an\n object with the same value (given an appropriate environment). If\n this is not possible, a string of the form ``<...some useful\n description...>`` should be returned. The return value must be a\n string object. If a class defines ``__repr__()`` but not\n ``__str__()``, then ``__repr__()`` is also used when an "informal"\n string representation of instances of that class is required.\n\n This is typically used for debugging, so it is important that the\n representation is information-rich and unambiguous.\n\nobject.__str__(self)\n\n Called by the ``str()`` built-in function and by the ``print``\n statement to compute the "informal" string representation of an\n object. This differs from ``__repr__()`` in that it does not have\n to be a valid Python expression: a more convenient or concise\n representation may be used instead. 
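A minimal sketch (not part of the text above) of the intended division of labour between ``__repr__()`` and ``__str__()``; the ``Money`` class is hypothetical.

   class Money(object):
       def __init__(self, amount, currency):
           self.amount = amount
           self.currency = currency

       def __repr__(self):
           # Aim for something that could recreate the object.
           return 'Money(%r, %r)' % (self.amount, self.currency)

       def __str__(self):
           # Informal, human-friendly form used by print and str().
           return '%.2f %s' % (self.amount, self.currency)

   m = Money(9.5, 'EUR')
   repr(m)   # "Money(9.5, 'EUR')"
   str(m)    # '9.50 EUR'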
The return value must be a\n string object.\n\nobject.__lt__(self, other)\nobject.__le__(self, other)\nobject.__eq__(self, other)\nobject.__ne__(self, other)\nobject.__gt__(self, other)\nobject.__ge__(self, other)\n\n New in version 2.1.\n\n These are the so-called "rich comparison" methods, and are called\n for comparison operators in preference to ``__cmp__()`` below. The\n correspondence between operator symbols and method names is as\n follows: ``x<y`` calls ``x.__lt__(y)``, ``x<=y`` calls\n ``x.__le__(y)``, ``x==y`` calls ``x.__eq__(y)``, ``x!=y`` and\n ``x<>y`` call ``x.__ne__(y)``, ``x>y`` calls ``x.__gt__(y)``, and\n ``x>=y`` calls ``x.__ge__(y)``.\n\n A rich comparison method may return the singleton\n ``NotImplemented`` if it does not implement the operation for a\n given pair of arguments. By convention, ``False`` and ``True`` are\n returned for a successful comparison. However, these methods can\n return any value, so if the comparison operator is used in a\n Boolean context (e.g., in the condition of an ``if`` statement),\n Python will call ``bool()`` on the value to determine if the result\n is true or false.\n\n There are no implied relationships among the comparison operators.\n The truth of ``x==y`` does not imply that ``x!=y`` is false.\n Accordingly, when defining ``__eq__()``, one should also define\n ``__ne__()`` so that the operators will behave as expected. See\n the paragraph on ``__hash__()`` for some important notes on\n creating *hashable* objects which support custom comparison\n operations and are usable as dictionary keys.\n\n There are no swapped-argument versions of these methods (to be used\n when the left argument does not support the operation but the right\n argument does); rather, ``__lt__()`` and ``__gt__()`` are each\n other\'s reflection, ``__le__()`` and ``__ge__()`` are each other\'s\n reflection, and ``__eq__()`` and ``__ne__()`` are their own\n reflection.\n\n Arguments to rich comparison methods are never coerced.\n\n To automatically generate ordering operations from a single root\n operation, see ``functools.total_ordering()``.\n\nobject.__cmp__(self, other)\n\n Called by comparison operations if rich comparison (see above) is\n not defined. Should return a negative integer if ``self < other``,\n zero if ``self == other``, a positive integer if ``self > other``.\n If no ``__cmp__()``, ``__eq__()`` or ``__ne__()`` operation is\n defined, class instances are compared by object identity\n ("address"). See also the description of ``__hash__()`` for some\n important notes on creating *hashable* objects which support custom\n comparison operations and are usable as dictionary keys. (Note: the\n restriction that exceptions are not propagated by ``__cmp__()`` has\n been removed since Python 1.5.)\n\nobject.__rcmp__(self, other)\n\n Changed in version 2.1: No longer supported.\n\nobject.__hash__(self)\n\n Called by built-in function ``hash()`` and for operations on\n members of hashed collections including ``set``, ``frozenset``, and\n ``dict``. ``__hash__()`` should return an integer. The only\n required property is that objects which compare equal have the same\n hash value; it is advised to somehow mix together (e.g. using\n exclusive or) the hash values for the components of the object that\n also play a part in comparison of objects.\n\n If a class does not define a ``__cmp__()`` or ``__eq__()`` method\n it should not define a ``__hash__()`` operation either; if it\n defines ``__cmp__()`` or ``__eq__()`` but not ``__hash__()``, its\n instances will not be usable in hashed collections. 
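For illustration (not part of the documentation text): a small Python 2.7 sketch that keeps ``__eq__()``, ``__ne__()`` and ``__hash__()`` consistent and uses ``functools.total_ordering()``, mentioned above, to fill in the remaining comparisons. The ``Version`` class is hypothetical.

   import functools

   @functools.total_ordering
   class Version(object):
       def __init__(self, major, minor):
           self.major = major
           self.minor = minor

       def _key(self):
           return (self.major, self.minor)

       def __eq__(self, other):
           if not isinstance(other, Version):
               return NotImplemented
           return self._key() == other._key()

       def __ne__(self, other):
           result = self.__eq__(other)
           return result if result is NotImplemented else not result

       def __lt__(self, other):
           if not isinstance(other, Version):
               return NotImplemented
           return self._key() < other._key()

       def __hash__(self):
           # Objects that compare equal must hash equal, so hash the
           # same key that drives the comparisons.
           return hash(self._key())

   Version(1, 2) < Version(1, 10)        # True
   {Version(1, 2): 'ok'}[Version(1, 2)]  # 'ok'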
If a class\n defines mutable objects and implements a ``__cmp__()`` or\n ``__eq__()`` method, it should not implement ``__hash__()``, since\n hashable collection implementations require that a object\'s hash\n value is immutable (if the object\'s hash value changes, it will be\n in the wrong hash bucket).\n\n User-defined classes have ``__cmp__()`` and ``__hash__()`` methods\n by default; with them, all objects compare unequal (except with\n themselves) and ``x.__hash__()`` returns ``id(x)``.\n\n Classes which inherit a ``__hash__()`` method from a parent class\n but change the meaning of ``__cmp__()`` or ``__eq__()`` such that\n the hash value returned is no longer appropriate (e.g. by switching\n to a value-based concept of equality instead of the default\n identity based equality) can explicitly flag themselves as being\n unhashable by setting ``__hash__ = None`` in the class definition.\n Doing so means that not only will instances of the class raise an\n appropriate ``TypeError`` when a program attempts to retrieve their\n hash value, but they will also be correctly identified as\n unhashable when checking ``isinstance(obj, collections.Hashable)``\n (unlike classes which define their own ``__hash__()`` to explicitly\n raise ``TypeError``).\n\n Changed in version 2.5: ``__hash__()`` may now also return a long\n integer object; the 32-bit integer is then derived from the hash of\n that object.\n\n Changed in version 2.6: ``__hash__`` may now be set to ``None`` to\n explicitly flag instances of a class as unhashable.\n\nobject.__nonzero__(self)\n\n Called to implement truth value testing and the built-in operation\n ``bool()``; should return ``False`` or ``True``, or their integer\n equivalents ``0`` or ``1``. When this method is not defined,\n ``__len__()`` is called, if it is defined, and the object is\n considered true if its result is nonzero. If a class defines\n neither ``__len__()`` nor ``__nonzero__()``, all its instances are\n considered true.\n\nobject.__unicode__(self)\n\n Called to implement ``unicode()`` built-in; should return a Unicode\n object. When this method is not defined, string conversion is\n attempted, and the result of string conversion is converted to\n Unicode using the system default encoding.\n\n\nCustomizing attribute access\n============================\n\nThe following methods can be defined to customize the meaning of\nattribute access (use of, assignment to, or deletion of ``x.name``)\nfor class instances.\n\nobject.__getattr__(self, name)\n\n Called when an attribute lookup has not found the attribute in the\n usual places (i.e. it is not an instance attribute nor is it found\n in the class tree for ``self``). ``name`` is the attribute name.\n This method should return the (computed) attribute value or raise\n an ``AttributeError`` exception.\n\n Note that if the attribute is found through the normal mechanism,\n ``__getattr__()`` is not called. (This is an intentional asymmetry\n between ``__getattr__()`` and ``__setattr__()``.) This is done both\n for efficiency reasons and because otherwise ``__getattr__()``\n would have no way to access other attributes of the instance. Note\n that at least for instance variables, you can fake total control by\n not inserting any values in the instance attribute dictionary (but\n instead inserting them in another object). 
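As a hedged illustration of the ``__getattr__()``/``__setattr__()`` behaviour described above (the ``Record`` class is hypothetical, not something defined in the documentation):

   class Record(object):
       def __init__(self, data):
           # Go through object.__setattr__ so our own __setattr__ below
           # is not invoked recursively during initialization.
           object.__setattr__(self, '_data', dict(data))

       def __getattr__(self, name):
           # Called only when normal attribute lookup fails.
           try:
               return self._data[name]
           except KeyError:
               raise AttributeError(name)

       def __setattr__(self, name, value):
           # Route every assignment into the internal mapping instead
           # of the instance dictionary.
           self._data[name] = value

   r = Record({'x': 1})
   r.x        # 1, found via __getattr__
   r.y = 2    # stored in r._data by __setattr__
   r.y        # 2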
See the\n ``__getattribute__()`` method below for a way to actually get total\n control in new-style classes.\n\nobject.__setattr__(self, name, value)\n\n Called when an attribute assignment is attempted. This is called\n instead of the normal mechanism (i.e. store the value in the\n instance dictionary). *name* is the attribute name, *value* is the\n value to be assigned to it.\n\n If ``__setattr__()`` wants to assign to an instance attribute, it\n should not simply execute ``self.name = value`` --- this would\n cause a recursive call to itself. Instead, it should insert the\n value in the dictionary of instance attributes, e.g.,\n ``self.__dict__[name] = value``. For new-style classes, rather\n than accessing the instance dictionary, it should call the base\n class method with the same name, for example,\n ``object.__setattr__(self, name, value)``.\n\nobject.__delattr__(self, name)\n\n Like ``__setattr__()`` but for attribute deletion instead of\n assignment. This should only be implemented if ``del obj.name`` is\n meaningful for the object.\n\n\nMore attribute access for new-style classes\n-------------------------------------------\n\nThe following methods only apply to new-style classes.\n\nobject.__getattribute__(self, name)\n\n Called unconditionally to implement attribute accesses for\n instances of the class. If the class also defines\n ``__getattr__()``, the latter will not be called unless\n ``__getattribute__()`` either calls it explicitly or raises an\n ``AttributeError``. This method should return the (computed)\n attribute value or raise an ``AttributeError`` exception. In order\n to avoid infinite recursion in this method, its implementation\n should always call the base class method with the same name to\n access any attributes it needs, for example,\n ``object.__getattribute__(self, name)``.\n\n Note: This method may still be bypassed when looking up special methods\n as the result of implicit invocation via language syntax or\n built-in functions. See *Special method lookup for new-style\n classes*.\n\n\nImplementing Descriptors\n------------------------\n\nThe following methods only apply when an instance of the class\ncontaining the method (a so-called *descriptor* class) appears in the\nclass dictionary of another new-style class, known as the *owner*\nclass. In the examples below, "the attribute" refers to the attribute\nwhose name is the key of the property in the owner class\'\n``__dict__``. Descriptors can only be implemented as new-style\nclasses themselves.\n\nobject.__get__(self, instance, owner)\n\n Called to get the attribute of the owner class (class attribute\n access) or of an instance of that class (instance attribute\n access). *owner* is always the owner class, while *instance* is the\n instance that the attribute was accessed through, or ``None`` when\n the attribute is accessed through the *owner*. This method should\n return the (computed) attribute value or raise an\n ``AttributeError`` exception.\n\nobject.__set__(self, instance, value)\n\n Called to set the attribute on an instance *instance* of the owner\n class to a new value, *value*.\n\nobject.__delete__(self, instance)\n\n Called to delete the attribute on an instance *instance* of the\n owner class.\n\n\nInvoking Descriptors\n--------------------\n\nIn general, a descriptor is an object attribute with "binding\nbehavior", one whose attribute access has been overridden by methods\nin the descriptor protocol: ``__get__()``, ``__set__()``, and\n``__delete__()``. 
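A minimal sketch of a data descriptor implementing the ``__get__()``/``__set__()`` protocol just described; ``Typed`` and ``Order`` are hypothetical names.

   class Typed(object):
       # Data descriptor that type-checks an attribute and stores the
       # value on the instance under a private name.
       def __init__(self, name, kind):
           self.name = '_' + name
           self.kind = kind

       def __get__(self, instance, owner):
           if instance is None:
               return self           # accessed on the owner class itself
           return getattr(instance, self.name)

       def __set__(self, instance, value):
           if not isinstance(value, self.kind):
               raise TypeError('expected %s' % self.kind.__name__)
           setattr(instance, self.name, value)

   class Order(object):
       quantity = Typed('quantity', int)   # descriptor in the owner class

   o = Order()
   o.quantity = 3    # calls Typed.__set__
   o.quantity        # calls Typed.__get__, returns 3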
If any of those methods are defined for an object,\nit is said to be a descriptor.\n\nThe default behavior for attribute access is to get, set, or delete\nthe attribute from an object\'s dictionary. For instance, ``a.x`` has a\nlookup chain starting with ``a.__dict__[\'x\']``, then\n``type(a).__dict__[\'x\']``, and continuing through the base classes of\n``type(a)`` excluding metaclasses.\n\nHowever, if the looked-up value is an object defining one of the\ndescriptor methods, then Python may override the default behavior and\ninvoke the descriptor method instead. Where this occurs in the\nprecedence chain depends on which descriptor methods were defined and\nhow they were called. Note that descriptors are only invoked for new\nstyle objects or classes (ones that subclass ``object()`` or\n``type()``).\n\nThe starting point for descriptor invocation is a binding, ``a.x``.\nHow the arguments are assembled depends on ``a``:\n\nDirect Call\n The simplest and least common call is when user code directly\n invokes a descriptor method: ``x.__get__(a)``.\n\nInstance Binding\n If binding to a new-style object instance, ``a.x`` is transformed\n into the call: ``type(a).__dict__[\'x\'].__get__(a, type(a))``.\n\nClass Binding\n If binding to a new-style class, ``A.x`` is transformed into the\n call: ``A.__dict__[\'x\'].__get__(None, A)``.\n\nSuper Binding\n If ``a`` is an instance of ``super``, then the binding ``super(B,\n obj).m()`` searches ``obj.__class__.__mro__`` for the base class\n ``A`` immediately preceding ``B`` and then invokes the descriptor\n with the call: ``A.__dict__[\'m\'].__get__(obj, A)``.\n\nFor instance bindings, the precedence of descriptor invocation depends\non the which descriptor methods are defined. A descriptor can define\nany combination of ``__get__()``, ``__set__()`` and ``__delete__()``.\nIf it does not define ``__get__()``, then accessing the attribute will\nreturn the descriptor object itself unless there is a value in the\nobject\'s instance dictionary. If the descriptor defines ``__set__()``\nand/or ``__delete__()``, it is a data descriptor; if it defines\nneither, it is a non-data descriptor. Normally, data descriptors\ndefine both ``__get__()`` and ``__set__()``, while non-data\ndescriptors have just the ``__get__()`` method. Data descriptors with\n``__set__()`` and ``__get__()`` defined always override a redefinition\nin an instance dictionary. In contrast, non-data descriptors can be\noverridden by instances.\n\nPython methods (including ``staticmethod()`` and ``classmethod()``)\nare implemented as non-data descriptors. Accordingly, instances can\nredefine and override methods. This allows individual instances to\nacquire behaviors that differ from other instances of the same class.\n\nThe ``property()`` function is implemented as a data descriptor.\nAccordingly, instances cannot override the behavior of a property.\n\n\n__slots__\n---------\n\nBy default, instances of both old and new-style classes have a\ndictionary for attribute storage. This wastes space for objects\nhaving very few instance variables. The space consumption can become\nacute when creating large numbers of instances.\n\nThe default can be overridden by defining *__slots__* in a new-style\nclass definition. The *__slots__* declaration takes a sequence of\ninstance variables and reserves just enough space in each instance to\nhold a value for each variable. 
Space is saved because *__dict__* is\nnot created for each instance.\n\n__slots__\n\n This class variable can be assigned a string, iterable, or sequence\n of strings with variable names used by instances. If defined in a\n new-style class, *__slots__* reserves space for the declared\n variables and prevents the automatic creation of *__dict__* and\n *__weakref__* for each instance.\n\n New in version 2.2.\n\nNotes on using *__slots__*\n\n* When inheriting from a class without *__slots__*, the *__dict__*\n attribute of that class will always be accessible, so a *__slots__*\n definition in the subclass is meaningless.\n\n* Without a *__dict__* variable, instances cannot be assigned new\n variables not listed in the *__slots__* definition. Attempts to\n assign to an unlisted variable name raises ``AttributeError``. If\n dynamic assignment of new variables is desired, then add\n ``\'__dict__\'`` to the sequence of strings in the *__slots__*\n declaration.\n\n Changed in version 2.3: Previously, adding ``\'__dict__\'`` to the\n *__slots__* declaration would not enable the assignment of new\n attributes not specifically listed in the sequence of instance\n variable names.\n\n* Without a *__weakref__* variable for each instance, classes defining\n *__slots__* do not support weak references to its instances. If weak\n reference support is needed, then add ``\'__weakref__\'`` to the\n sequence of strings in the *__slots__* declaration.\n\n Changed in version 2.3: Previously, adding ``\'__weakref__\'`` to the\n *__slots__* declaration would not enable support for weak\n references.\n\n* *__slots__* are implemented at the class level by creating\n descriptors (*Implementing Descriptors*) for each variable name. As\n a result, class attributes cannot be used to set default values for\n instance variables defined by *__slots__*; otherwise, the class\n attribute would overwrite the descriptor assignment.\n\n* The action of a *__slots__* declaration is limited to the class\n where it is defined. As a result, subclasses will have a *__dict__*\n unless they also define *__slots__* (which must only contain names\n of any *additional* slots).\n\n* If a class defines a slot also defined in a base class, the instance\n variable defined by the base class slot is inaccessible (except by\n retrieving its descriptor directly from the base class). This\n renders the meaning of the program undefined. In the future, a\n check may be added to prevent this.\n\n* Nonempty *__slots__* does not work for classes derived from\n "variable-length" built-in types such as ``long``, ``str`` and\n ``tuple``.\n\n* Any non-string iterable may be assigned to *__slots__*. Mappings may\n also be used; however, in the future, special meaning may be\n assigned to the values corresponding to each key.\n\n* *__class__* assignment works only if both classes have the same\n *__slots__*.\n\n Changed in version 2.6: Previously, *__class__* assignment raised an\n error if either new or old class had *__slots__*.\n\n\nCustomizing class creation\n==========================\n\nBy default, new-style classes are constructed using ``type()``. A\nclass definition is read into a separate namespace and the value of\nclass name is bound to the result of ``type(name, bases, dict)``.\n\nWhen the class definition is read, if *__metaclass__* is defined then\nthe callable assigned to it will be called instead of ``type()``. 
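For illustration, a minimal sketch of the *__slots__* behaviour summarized in the notes above; the ``PointS`` name is hypothetical.

   class PointS(object):
       __slots__ = ('x', 'y')          # no per-instance __dict__

       def __init__(self, x, y):
           self.x = x
           self.y = y

   p = PointS(1, 2)
   p.x = 10                   # fine: 'x' is a declared slot
   try:
       p.z = 3                # not a slot and no __dict__ ...
   except AttributeError:
       pass                   # ... so AttributeError is raised
   hasattr(p, '__dict__')     # False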
This\nallows classes or functions to be written which monitor or alter the\nclass creation process:\n\n* Modifying the class dictionary prior to the class being created.\n\n* Returning an instance of another class -- essentially performing the\n role of a factory function.\n\nThese steps will have to be performed in the metaclass\'s ``__new__()``\nmethod -- ``type.__new__()`` can then be called from this method to\ncreate a class with different properties. This example adds a new\nelement to the class dictionary before creating the class:\n\n class metacls(type):\n def __new__(mcs, name, bases, dict):\n dict[\'foo\'] = \'metacls was here\'\n return type.__new__(mcs, name, bases, dict)\n\nYou can of course also override other class methods (or add new\nmethods); for example defining a custom ``__call__()`` method in the\nmetaclass allows custom behavior when the class is called, e.g. not\nalways creating a new instance.\n\n__metaclass__\n\n This variable can be any callable accepting arguments for ``name``,\n ``bases``, and ``dict``. Upon class creation, the callable is used\n instead of the built-in ``type()``.\n\n New in version 2.2.\n\nThe appropriate metaclass is determined by the following precedence\nrules:\n\n* If ``dict[\'__metaclass__\']`` exists, it is used.\n\n* Otherwise, if there is at least one base class, its metaclass is\n used (this looks for a *__class__* attribute first and if not found,\n uses its type).\n\n* Otherwise, if a global variable named __metaclass__ exists, it is\n used.\n\n* Otherwise, the old-style, classic metaclass (types.ClassType) is\n used.\n\nThe potential uses for metaclasses are boundless. Some ideas that have\nbeen explored including logging, interface checking, automatic\ndelegation, automatic property creation, proxies, frameworks, and\nautomatic resource locking/synchronization.\n\n\nCustomizing instance and subclass checks\n========================================\n\nNew in version 2.6.\n\nThe following methods are used to override the default behavior of the\n``isinstance()`` and ``issubclass()`` built-in functions.\n\nIn particular, the metaclass ``abc.ABCMeta`` implements these methods\nin order to allow the addition of Abstract Base Classes (ABCs) as\n"virtual base classes" to any class or type (including built-in\ntypes), including other ABCs.\n\nclass.__instancecheck__(self, instance)\n\n Return true if *instance* should be considered a (direct or\n indirect) instance of *class*. If defined, called to implement\n ``isinstance(instance, class)``.\n\nclass.__subclasscheck__(self, subclass)\n\n Return true if *subclass* should be considered a (direct or\n indirect) subclass of *class*. If defined, called to implement\n ``issubclass(subclass, class)``.\n\nNote that these methods are looked up on the type (metaclass) of a\nclass. 
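A hedged sketch of ``__instancecheck__()``/``__subclasscheck__()`` defined on a metaclass, as the text above requires; ``DuckMeta``, ``Duck`` and ``Mallard`` are hypothetical.

   class DuckMeta(type):
       # isinstance()/issubclass() against Duck are answered here, on
       # the metaclass, not on Duck itself.
       def __instancecheck__(cls, instance):
           return hasattr(instance, 'quack')

       def __subclasscheck__(cls, subclass):
           return hasattr(subclass, 'quack')

   class Duck(object):
       __metaclass__ = DuckMeta        # Python 2 spelling

   class Mallard(object):
       def quack(self):
           return 'quack'

   isinstance(Mallard(), Duck)   # True, via DuckMeta.__instancecheck__
   issubclass(Mallard, Duck)     # True, via DuckMeta.__subclasscheck__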
They cannot be defined as class methods in the actual class.\nThis is consistent with the lookup of special methods that are called\non instances, only in this case the instance is itself a class.\n\nSee also:\n\n **PEP 3119** - Introducing Abstract Base Classes\n Includes the specification for customizing ``isinstance()`` and\n ``issubclass()`` behavior through ``__instancecheck__()`` and\n ``__subclasscheck__()``, with motivation for this functionality\n in the context of adding Abstract Base Classes (see the ``abc``\n module) to the language.\n\n\nEmulating callable objects\n==========================\n\nobject.__call__(self[, args...])\n\n Called when the instance is "called" as a function; if this method\n is defined, ``x(arg1, arg2, ...)`` is a shorthand for\n ``x.__call__(arg1, arg2, ...)``.\n\n\nEmulating container types\n=========================\n\nThe following methods can be defined to implement container objects.\nContainers usually are sequences (such as lists or tuples) or mappings\n(like dictionaries), but can represent other containers as well. The\nfirst set of methods is used either to emulate a sequence or to\nemulate a mapping; the difference is that for a sequence, the\nallowable keys should be the integers *k* for which ``0 <= k < N``\nwhere *N* is the length of the sequence, or slice objects, which\ndefine a range of items. (For backwards compatibility, the method\n``__getslice__()`` (see below) can also be defined to handle simple,\nbut not extended slices.) It is also recommended that mappings provide\nthe methods ``keys()``, ``values()``, ``items()``, ``has_key()``,\n``get()``, ``clear()``, ``setdefault()``, ``iterkeys()``,\n``itervalues()``, ``iteritems()``, ``pop()``, ``popitem()``,\n``copy()``, and ``update()`` behaving similar to those for Python\'s\nstandard dictionary objects. The ``UserDict`` module provides a\n``DictMixin`` class to help create those methods from a base set of\n``__getitem__()``, ``__setitem__()``, ``__delitem__()``, and\n``keys()``. Mutable sequences should provide methods ``append()``,\n``count()``, ``index()``, ``extend()``, ``insert()``, ``pop()``,\n``remove()``, ``reverse()`` and ``sort()``, like Python standard list\nobjects. Finally, sequence types should implement addition (meaning\nconcatenation) and multiplication (meaning repetition) by defining the\nmethods ``__add__()``, ``__radd__()``, ``__iadd__()``, ``__mul__()``,\n``__rmul__()`` and ``__imul__()`` described below; they should not\ndefine ``__coerce__()`` or other numerical operators. It is\nrecommended that both mappings and sequences implement the\n``__contains__()`` method to allow efficient use of the ``in``\noperator; for mappings, ``in`` should be equivalent of ``has_key()``;\nfor sequences, it should search through the values. It is further\nrecommended that both mappings and sequences implement the\n``__iter__()`` method to allow efficient iteration through the\ncontainer; for mappings, ``__iter__()`` should be the same as\n``iterkeys()``; for sequences, it should iterate through the values.\n\nobject.__len__(self)\n\n Called to implement the built-in function ``len()``. Should return\n the length of the object, an integer ``>=`` 0. Also, an object\n that doesn\'t define a ``__nonzero__()`` method and whose\n ``__len__()`` method returns zero is considered to be false in a\n Boolean context.\n\nobject.__getitem__(self, key)\n\n Called to implement evaluation of ``self[key]``. 
For sequence\n types, the accepted keys should be integers and slice objects.\n Note that the special interpretation of negative indexes (if the\n class wishes to emulate a sequence type) is up to the\n ``__getitem__()`` method. If *key* is of an inappropriate type,\n ``TypeError`` may be raised; if of a value outside the set of\n indexes for the sequence (after any special interpretation of\n negative values), ``IndexError`` should be raised. For mapping\n types, if *key* is missing (not in the container), ``KeyError``\n should be raised.\n\n Note: ``for`` loops expect that an ``IndexError`` will be raised for\n illegal indexes to allow proper detection of the end of the\n sequence.\n\nobject.__setitem__(self, key, value)\n\n Called to implement assignment to ``self[key]``. Same note as for\n ``__getitem__()``. This should only be implemented for mappings if\n the objects support changes to the values for keys, or if new keys\n can be added, or for sequences if elements can be replaced. The\n same exceptions should be raised for improper *key* values as for\n the ``__getitem__()`` method.\n\nobject.__delitem__(self, key)\n\n Called to implement deletion of ``self[key]``. Same note as for\n ``__getitem__()``. This should only be implemented for mappings if\n the objects support removal of keys, or for sequences if elements\n can be removed from the sequence. The same exceptions should be\n raised for improper *key* values as for the ``__getitem__()``\n method.\n\nobject.__iter__(self)\n\n This method is called when an iterator is required for a container.\n This method should return a new iterator object that can iterate\n over all the objects in the container. For mappings, it should\n iterate over the keys of the container, and should also be made\n available as the method ``iterkeys()``.\n\n Iterator objects also need to implement this method; they are\n required to return themselves. For more information on iterator\n objects, see *Iterator Types*.\n\nobject.__reversed__(self)\n\n Called (if present) by the ``reversed()`` built-in to implement\n reverse iteration. It should return a new iterator object that\n iterates over all the objects in the container in reverse order.\n\n If the ``__reversed__()`` method is not provided, the\n ``reversed()`` built-in will fall back to using the sequence\n protocol (``__len__()`` and ``__getitem__()``). Objects that\n support the sequence protocol should only provide\n ``__reversed__()`` if they can provide an implementation that is\n more efficient than the one provided by ``reversed()``.\n\n New in version 2.6.\n\nThe membership test operators (``in`` and ``not in``) are normally\nimplemented as an iteration through a sequence. However, container\nobjects can supply the following special method with a more efficient\nimplementation, which also does not require the object be a sequence.\n\nobject.__contains__(self, item)\n\n Called to implement membership test operators. Should return true\n if *item* is in *self*, false otherwise. 
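For illustration (names are hypothetical): a minimal read-only sequence showing how ``__len__()``, ``__getitem__()``, ``__iter__()`` and ``__contains__()`` cooperate, as described above.

   class Deck(object):
       def __init__(self, cards):
           self._cards = list(cards)

       def __len__(self):
           return len(self._cards)

       def __getitem__(self, index):
           # __len__ plus __getitem__ is enough for indexing and reversed().
           return self._cards[index]

       def __iter__(self):
           return iter(self._cards)

       def __contains__(self, card):
           # Optional; lets 'in' skip the generic iteration protocol.
           return card in self._cards

   d = Deck(['7H', 'QS', 'AC'])
   len(d)               # 3
   d[0]                 # '7H'
   'QS' in d            # True
   list(reversed(d))    # ['AC', 'QS', '7H']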
For mapping objects, this\n should consider the keys of the mapping rather than the values or\n the key-item pairs.\n\n For objects that don\'t define ``__contains__()``, the membership\n test first tries iteration via ``__iter__()``, then the old\n sequence iteration protocol via ``__getitem__()``, see *this\n section in the language reference*.\n\n\nAdditional methods for emulation of sequence types\n==================================================\n\nThe following optional methods can be defined to further emulate\nsequence objects. Immutable sequences methods should at most only\ndefine ``__getslice__()``; mutable sequences might define all three\nmethods.\n\nobject.__getslice__(self, i, j)\n\n Deprecated since version 2.0: Support slice objects as parameters\n to the ``__getitem__()`` method. (However, built-in types in\n CPython currently still implement ``__getslice__()``. Therefore,\n you have to override it in derived classes when implementing\n slicing.)\n\n Called to implement evaluation of ``self[i:j]``. The returned\n object should be of the same type as *self*. Note that missing *i*\n or *j* in the slice expression are replaced by zero or\n ``sys.maxint``, respectively. If negative indexes are used in the\n slice, the length of the sequence is added to that index. If the\n instance does not implement the ``__len__()`` method, an\n ``AttributeError`` is raised. No guarantee is made that indexes\n adjusted this way are not still negative. Indexes which are\n greater than the length of the sequence are not modified. If no\n ``__getslice__()`` is found, a slice object is created instead, and\n passed to ``__getitem__()`` instead.\n\nobject.__setslice__(self, i, j, sequence)\n\n Called to implement assignment to ``self[i:j]``. Same notes for *i*\n and *j* as for ``__getslice__()``.\n\n This method is deprecated. If no ``__setslice__()`` is found, or\n for extended slicing of the form ``self[i:j:k]``, a slice object is\n created, and passed to ``__setitem__()``, instead of\n ``__setslice__()`` being called.\n\nobject.__delslice__(self, i, j)\n\n Called to implement deletion of ``self[i:j]``. Same notes for *i*\n and *j* as for ``__getslice__()``. This method is deprecated. If no\n ``__delslice__()`` is found, or for extended slicing of the form\n ``self[i:j:k]``, a slice object is created, and passed to\n ``__delitem__()``, instead of ``__delslice__()`` being called.\n\nNotice that these methods are only invoked when a single slice with a\nsingle colon is used, and the slice method is available. 
For slice\noperations involving extended slice notation, or in absence of the\nslice methods, ``__getitem__()``, ``__setitem__()`` or\n``__delitem__()`` is called with a slice object as argument.\n\nThe following example demonstrate how to make your program or module\ncompatible with earlier versions of Python (assuming that methods\n``__getitem__()``, ``__setitem__()`` and ``__delitem__()`` support\nslice objects as arguments):\n\n class MyClass:\n ...\n def __getitem__(self, index):\n ...\n def __setitem__(self, index, value):\n ...\n def __delitem__(self, index):\n ...\n\n if sys.version_info < (2, 0):\n # They won\'t be defined if version is at least 2.0 final\n\n def __getslice__(self, i, j):\n return self[max(0, i):max(0, j):]\n def __setslice__(self, i, j, seq):\n self[max(0, i):max(0, j):] = seq\n def __delslice__(self, i, j):\n del self[max(0, i):max(0, j):]\n ...\n\nNote the calls to ``max()``; these are necessary because of the\nhandling of negative indices before the ``__*slice__()`` methods are\ncalled. When negative indexes are used, the ``__*item__()`` methods\nreceive them as provided, but the ``__*slice__()`` methods get a\n"cooked" form of the index values. For each negative index value, the\nlength of the sequence is added to the index before calling the method\n(which may still result in a negative index); this is the customary\nhandling of negative indexes by the built-in sequence types, and the\n``__*item__()`` methods are expected to do this as well. However,\nsince they should already be doing that, negative indexes cannot be\npassed in; they must be constrained to the bounds of the sequence\nbefore being passed to the ``__*item__()`` methods. Calling ``max(0,\ni)`` conveniently returns the proper value.\n\n\nEmulating numeric types\n=======================\n\nThe following methods can be defined to emulate numeric objects.\nMethods corresponding to operations that are not supported by the\nparticular kind of number implemented (e.g., bitwise operations for\nnon-integral numbers) should be left undefined.\n\nobject.__add__(self, other)\nobject.__sub__(self, other)\nobject.__mul__(self, other)\nobject.__floordiv__(self, other)\nobject.__mod__(self, other)\nobject.__divmod__(self, other)\nobject.__pow__(self, other[, modulo])\nobject.__lshift__(self, other)\nobject.__rshift__(self, other)\nobject.__and__(self, other)\nobject.__xor__(self, other)\nobject.__or__(self, other)\n\n These methods are called to implement the binary arithmetic\n operations (``+``, ``-``, ``*``, ``//``, ``%``, ``divmod()``,\n ``pow()``, ``**``, ``<<``, ``>>``, ``&``, ``^``, ``|``). For\n instance, to evaluate the expression ``x + y``, where *x* is an\n instance of a class that has an ``__add__()`` method,\n ``x.__add__(y)`` is called. The ``__divmod__()`` method should be\n the equivalent to using ``__floordiv__()`` and ``__mod__()``; it\n should not be related to ``__truediv__()`` (described below). Note\n that ``__pow__()`` should be defined to accept an optional third\n argument if the ternary version of the built-in ``pow()`` function\n is to be supported.\n\n If one of those methods does not support the operation with the\n supplied arguments, it should return ``NotImplemented``.\n\nobject.__div__(self, other)\nobject.__truediv__(self, other)\n\n The division operator (``/``) is implemented by these methods. The\n ``__truediv__()`` method is used when ``__future__.division`` is in\n effect, otherwise ``__div__()`` is used. 
If only one of these two\n methods is defined, the object will not support division in the\n alternate context; ``TypeError`` will be raised instead.\n\nobject.__radd__(self, other)\nobject.__rsub__(self, other)\nobject.__rmul__(self, other)\nobject.__rdiv__(self, other)\nobject.__rtruediv__(self, other)\nobject.__rfloordiv__(self, other)\nobject.__rmod__(self, other)\nobject.__rdivmod__(self, other)\nobject.__rpow__(self, other)\nobject.__rlshift__(self, other)\nobject.__rrshift__(self, other)\nobject.__rand__(self, other)\nobject.__rxor__(self, other)\nobject.__ror__(self, other)\n\n These methods are called to implement the binary arithmetic\n operations (``+``, ``-``, ``*``, ``/``, ``%``, ``divmod()``,\n ``pow()``, ``**``, ``<<``, ``>>``, ``&``, ``^``, ``|``) with\n reflected (swapped) operands. These functions are only called if\n the left operand does not support the corresponding operation and\n the operands are of different types. [2] For instance, to evaluate\n the expression ``x - y``, where *y* is an instance of a class that\n has an ``__rsub__()`` method, ``y.__rsub__(x)`` is called if\n ``x.__sub__(y)`` returns *NotImplemented*.\n\n Note that ternary ``pow()`` will not try calling ``__rpow__()``\n (the coercion rules would become too complicated).\n\n Note: If the right operand\'s type is a subclass of the left operand\'s\n type and that subclass provides the reflected method for the\n operation, this method will be called before the left operand\'s\n non-reflected method. This behavior allows subclasses to\n override their ancestors\' operations.\n\nobject.__iadd__(self, other)\nobject.__isub__(self, other)\nobject.__imul__(self, other)\nobject.__idiv__(self, other)\nobject.__itruediv__(self, other)\nobject.__ifloordiv__(self, other)\nobject.__imod__(self, other)\nobject.__ipow__(self, other[, modulo])\nobject.__ilshift__(self, other)\nobject.__irshift__(self, other)\nobject.__iand__(self, other)\nobject.__ixor__(self, other)\nobject.__ior__(self, other)\n\n These methods are called to implement the augmented arithmetic\n assignments (``+=``, ``-=``, ``*=``, ``/=``, ``//=``, ``%=``,\n ``**=``, ``<<=``, ``>>=``, ``&=``, ``^=``, ``|=``). These methods\n should attempt to do the operation in-place (modifying *self*) and\n return the result (which could be, but does not have to be,\n *self*). If a specific method is not defined, the augmented\n assignment falls back to the normal methods. For instance, to\n execute the statement ``x += y``, where *x* is an instance of a\n class that has an ``__iadd__()`` method, ``x.__iadd__(y)`` is\n called. If *x* is an instance of a class that does not define a\n ``__iadd__()`` method, ``x.__add__(y)`` and ``y.__radd__(x)`` are\n considered, as with the evaluation of ``x + y``.\n\nobject.__neg__(self)\nobject.__pos__(self)\nobject.__abs__(self)\nobject.__invert__(self)\n\n Called to implement the unary arithmetic operations (``-``, ``+``,\n ``abs()`` and ``~``).\n\nobject.__complex__(self)\nobject.__int__(self)\nobject.__long__(self)\nobject.__float__(self)\n\n Called to implement the built-in functions ``complex()``,\n ``int()``, ``long()``, and ``float()``. Should return a value of\n the appropriate type.\n\nobject.__oct__(self)\nobject.__hex__(self)\n\n Called to implement the built-in functions ``oct()`` and ``hex()``.\n Should return a string value.\n\nobject.__index__(self)\n\n Called to implement ``operator.index()``. Also called whenever\n Python needs an integer object (such as in slicing). 
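A hedged Python 2 sketch of the binary, reflected and in-place arithmetic hooks described above (``Seconds`` is a hypothetical class; returning ``NotImplemented`` defers to the other operand):

   class Seconds(object):
       def __init__(self, value):
           self.value = value

       def __add__(self, other):
           if isinstance(other, Seconds):
               return Seconds(self.value + other.value)
           if isinstance(other, (int, long, float)):
               return Seconds(self.value + other)
           return NotImplemented        # let the other operand try

       def __radd__(self, other):
           # 5 + Seconds(3): int.__add__ returns NotImplemented, so
           # Python calls this reflected method instead.
           return self.__add__(other)

       def __iadd__(self, other):
           # s += x modifies in place; without __iadd__ the statement
           # would fall back to __add__/__radd__.
           result = self.__add__(other)
           if result is NotImplemented:
               return result
           self.value = result.value
           return self

       def __repr__(self):
           return 'Seconds(%r)' % self.value

   Seconds(3) + 4     # Seconds(7)
   5 + Seconds(3)     # Seconds(8), via __radd__
   s = Seconds(1)
   s += 2             # same object, now Seconds(3)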
Must return\n an integer (int or long).\n\n New in version 2.5.\n\nobject.__coerce__(self, other)\n\n Called to implement "mixed-mode" numeric arithmetic. Should either\n return a 2-tuple containing *self* and *other* converted to a\n common numeric type, or ``None`` if conversion is impossible. When\n the common type would be the type of ``other``, it is sufficient to\n return ``None``, since the interpreter will also ask the other\n object to attempt a coercion (but sometimes, if the implementation\n of the other type cannot be changed, it is useful to do the\n conversion to the other type here). A return value of\n ``NotImplemented`` is equivalent to returning ``None``.\n\n\nCoercion rules\n==============\n\nThis section used to document the rules for coercion. As the language\nhas evolved, the coercion rules have become hard to document\nprecisely; documenting what one version of one particular\nimplementation does is undesirable. Instead, here are some informal\nguidelines regarding coercion. In Python 3.0, coercion will not be\nsupported.\n\n* If the left operand of a % operator is a string or Unicode object,\n no coercion takes place and the string formatting operation is\n invoked instead.\n\n* It is no longer recommended to define a coercion operation. Mixed-\n mode operations on types that don\'t define coercion pass the\n original arguments to the operation.\n\n* New-style classes (those derived from ``object``) never invoke the\n ``__coerce__()`` method in response to a binary operator; the only\n time ``__coerce__()`` is invoked is when the built-in function\n ``coerce()`` is called.\n\n* For most intents and purposes, an operator that returns\n ``NotImplemented`` is treated the same as one that is not\n implemented at all.\n\n* Below, ``__op__()`` and ``__rop__()`` are used to signify the\n generic method names corresponding to an operator; ``__iop__()`` is\n used for the corresponding in-place operator. For example, for the\n operator \'``+``\', ``__add__()`` and ``__radd__()`` are used for the\n left and right variant of the binary operator, and ``__iadd__()``\n for the in-place variant.\n\n* For objects *x* and *y*, first ``x.__op__(y)`` is tried. If this is\n not implemented or returns ``NotImplemented``, ``y.__rop__(x)`` is\n tried. If this is also not implemented or returns\n ``NotImplemented``, a ``TypeError`` exception is raised. But see\n the following exception:\n\n* Exception to the previous item: if the left operand is an instance\n of a built-in type or a new-style class, and the right operand is an\n instance of a proper subclass of that type or class and overrides\n the base\'s ``__rop__()`` method, the right operand\'s ``__rop__()``\n method is tried *before* the left operand\'s ``__op__()`` method.\n\n This is done so that a subclass can completely override binary\n operators. Otherwise, the left operand\'s ``__op__()`` method would\n always accept the right operand: when an instance of a given class\n is expected, an instance of a subclass of that class is always\n acceptable.\n\n* When either operand type defines a coercion, this coercion is called\n before that type\'s ``__op__()`` or ``__rop__()`` method is called,\n but no sooner. If the coercion returns an object of a different\n type for the operand whose coercion is invoked, part of the process\n is redone using the new object.\n\n* When an in-place operator (like \'``+=``\') is used, if the left\n operand implements ``__iop__()``, it is invoked without any\n coercion. 
When the operation falls back to ``__op__()`` and/or\n ``__rop__()``, the normal coercion rules apply.\n\n* In ``x + y``, if *x* is a sequence that implements sequence\n concatenation, sequence concatenation is invoked.\n\n* In ``x * y``, if one operator is a sequence that implements sequence\n repetition, and the other is an integer (``int`` or ``long``),\n sequence repetition is invoked.\n\n* Rich comparisons (implemented by methods ``__eq__()`` and so on)\n never use coercion. Three-way comparison (implemented by\n ``__cmp__()``) does use coercion under the same conditions as other\n binary operations use it.\n\n* In the current implementation, the built-in numeric types ``int``,\n ``long``, ``float``, and ``complex`` do not use coercion. All these\n types implement a ``__coerce__()`` method, for use by the built-in\n ``coerce()`` function.\n\n Changed in version 2.7.\n\n\nWith Statement Context Managers\n===============================\n\nNew in version 2.5.\n\nA *context manager* is an object that defines the runtime context to\nbe established when executing a ``with`` statement. The context\nmanager handles the entry into, and the exit from, the desired runtime\ncontext for the execution of the block of code. Context managers are\nnormally invoked using the ``with`` statement (described in section\n*The with statement*), but can also be used by directly invoking their\nmethods.\n\nTypical uses of context managers include saving and restoring various\nkinds of global state, locking and unlocking resources, closing opened\nfiles, etc.\n\nFor more information on context managers, see *Context Manager Types*.\n\nobject.__enter__(self)\n\n Enter the runtime context related to this object. The ``with``\n statement will bind this method\'s return value to the target(s)\n specified in the ``as`` clause of the statement, if any.\n\nobject.__exit__(self, exc_type, exc_value, traceback)\n\n Exit the runtime context related to this object. The parameters\n describe the exception that caused the context to be exited. If the\n context was exited without an exception, all three arguments will\n be ``None``.\n\n If an exception is supplied, and the method wishes to suppress the\n exception (i.e., prevent it from being propagated), it should\n return a true value. Otherwise, the exception will be processed\n normally upon exit from this method.\n\n Note that ``__exit__()`` methods should not reraise the passed-in\n exception; this is the caller\'s responsibility.\n\nSee also:\n\n **PEP 0343** - The "with" statement\n The specification, background, and examples for the Python\n ``with`` statement.\n\n\nSpecial method lookup for old-style classes\n===========================================\n\nFor old-style classes, special methods are always looked up in exactly\nthe same way as any other method or attribute. This is the case\nregardless of whether the method is being looked up explicitly as in\n``x.__getitem__(i)`` or implicitly as in ``x[i]``.\n\nThis behaviour means that special methods may exhibit different\nbehaviour for different instances of a single old-style class if the\nappropriate special attributes are set differently:\n\n >>> class C:\n ... 
pass\n ...\n >>> c1 = C()\n >>> c2 = C()\n >>> c1.__len__ = lambda: 5\n >>> c2.__len__ = lambda: 9\n >>> len(c1)\n 5\n >>> len(c2)\n 9\n\n\nSpecial method lookup for new-style classes\n===========================================\n\nFor new-style classes, implicit invocations of special methods are\nonly guaranteed to work correctly if defined on an object\'s type, not\nin the object\'s instance dictionary. That behaviour is the reason why\nthe following code raises an exception (unlike the equivalent example\nwith old-style classes):\n\n >>> class C(object):\n ... pass\n ...\n >>> c = C()\n >>> c.__len__ = lambda: 5\n >>> len(c)\n Traceback (most recent call last):\n File "<stdin>", line 1, in <module>\n TypeError: object of type \'C\' has no len()\n\nThe rationale behind this behaviour lies with a number of special\nmethods such as ``__hash__()`` and ``__repr__()`` that are implemented\nby all objects, including type objects. If the implicit lookup of\nthese methods used the conventional lookup process, they would fail\nwhen invoked on the type object itself:\n\n >>> 1 .__hash__() == hash(1)\n True\n >>> int.__hash__() == hash(int)\n Traceback (most recent call last):\n File "<stdin>", line 1, in <module>\n TypeError: descriptor \'__hash__\' of \'int\' object needs an argument\n\nIncorrectly attempting to invoke an unbound method of a class in this\nway is sometimes referred to as \'metaclass confusion\', and is avoided\nby bypassing the instance when looking up special methods:\n\n >>> type(1).__hash__(1) == hash(1)\n True\n >>> type(int).__hash__(int) == hash(int)\n True\n\nIn addition to bypassing any instance attributes in the interest of\ncorrectness, implicit special method lookup generally also bypasses\nthe ``__getattribute__()`` method even of the object\'s metaclass:\n\n >>> class Meta(type):\n ... def __getattribute__(*args):\n ... print "Metaclass getattribute invoked"\n ... return type.__getattribute__(*args)\n ...\n >>> class C(object):\n ... __metaclass__ = Meta\n ... def __len__(self):\n ... return 10\n ... def __getattribute__(*args):\n ... print "Class getattribute invoked"\n ... return object.__getattribute__(*args)\n ...\n >>> c = C()\n >>> c.__len__() # Explicit lookup via instance\n Class getattribute invoked\n 10\n >>> type(c).__len__(c) # Explicit lookup via type\n Metaclass getattribute invoked\n 10\n >>> len(c) # Implicit lookup\n 10\n\nBypassing the ``__getattribute__()`` machinery in this fashion\nprovides significant scope for speed optimisations within the\ninterpreter, at the cost of some flexibility in the handling of\nspecial methods (the special method *must* be set on the class object\nitself in order to be consistently invoked by the interpreter).\n\n-[ Footnotes ]-\n\n[1] It *is* possible in some cases to change an object\'s type, under\n certain controlled conditions. It generally isn\'t a good idea\n though, since it can lead to some very strange behaviour if it is\n handled incorrectly.\n\n[2] For operands of the same type, it is assumed that if the non-\n reflected method (such as ``__add__()``) fails the operation is\n not supported, which is why the reflected method is not called.\n',
+            'specialnames': u'\nSpecial method names\n********************\n\nA class can implement certain operations that are invoked by special\nsyntax (such as arithmetic operations or subscripting and slicing) by\ndefining methods with special names. This is Python\'s approach to\n*operator overloading*, allowing classes to define their own behavior\nwith respect to language operators. 
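For illustration (not part of the changeset above): a minimal sketch of the ``__enter__()``/``__exit__()`` context-manager protocol described in the *With Statement Context Managers* section earlier; ``Transaction`` is hypothetical and ``conn`` stands for some DB-API-style connection.

   class Transaction(object):
       def __init__(self, conn):
           self.conn = conn

       def __enter__(self):
           # The value returned here is what 'with ... as cur:' binds.
           return self.conn.cursor()

       def __exit__(self, exc_type, exc_value, traceback):
           if exc_type is None:
               self.conn.commit()
           else:
               self.conn.rollback()
           return False    # a false value lets any exception propagate

   # with Transaction(conn) as cur:
   #     cur.execute(...)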
For instance, if a class defines\na method named ``__getitem__()``, and ``x`` is an instance of this\nclass, then ``x[i]`` is roughly equivalent to ``x.__getitem__(i)`` for\nold-style classes and ``type(x).__getitem__(x, i)`` for new-style\nclasses. Except where mentioned, attempts to execute an operation\nraise an exception when no appropriate method is defined (typically\n``AttributeError`` or ``TypeError``).\n\nWhen implementing a class that emulates any built-in type, it is\nimportant that the emulation only be implemented to the degree that it\nmakes sense for the object being modelled. For example, some\nsequences may work well with retrieval of individual elements, but\nextracting a slice may not make sense. (One example of this is the\n``NodeList`` interface in the W3C\'s Document Object Model.)\n\n\nBasic customization\n===================\n\nobject.__new__(cls[, ...])\n\n Called to create a new instance of class *cls*. ``__new__()`` is a\n static method (special-cased so you need not declare it as such)\n that takes the class of which an instance was requested as its\n first argument. The remaining arguments are those passed to the\n object constructor expression (the call to the class). The return\n value of ``__new__()`` should be the new object instance (usually\n an instance of *cls*).\n\n Typical implementations create a new instance of the class by\n invoking the superclass\'s ``__new__()`` method using\n ``super(currentclass, cls).__new__(cls[, ...])`` with appropriate\n arguments and then modifying the newly-created instance as\n necessary before returning it.\n\n If ``__new__()`` returns an instance of *cls*, then the new\n instance\'s ``__init__()`` method will be invoked like\n ``__init__(self[, ...])``, where *self* is the new instance and the\n remaining arguments are the same as were passed to ``__new__()``.\n\n If ``__new__()`` does not return an instance of *cls*, then the new\n instance\'s ``__init__()`` method will not be invoked.\n\n ``__new__()`` is intended mainly to allow subclasses of immutable\n types (like int, str, or tuple) to customize instance creation. It\n is also commonly overridden in custom metaclasses in order to\n customize class creation.\n\nobject.__init__(self[, ...])\n\n Called when the instance is created. The arguments are those\n passed to the class constructor expression. If a base class has an\n ``__init__()`` method, the derived class\'s ``__init__()`` method,\n if any, must explicitly call it to ensure proper initialization of\n the base class part of the instance; for example:\n ``BaseClass.__init__(self, [args...])``. As a special constraint\n on constructors, no value may be returned; doing so will cause a\n ``TypeError`` to be raised at runtime.\n\nobject.__del__(self)\n\n Called when the instance is about to be destroyed. This is also\n called a destructor. If a base class has a ``__del__()`` method,\n the derived class\'s ``__del__()`` method, if any, must explicitly\n call it to ensure proper deletion of the base class part of the\n instance. Note that it is possible (though not recommended!) for\n the ``__del__()`` method to postpone destruction of the instance by\n creating a new reference to it. It may then be called at a later\n time when this new reference is deleted. 
It is not guaranteed that\n ``__del__()`` methods are called for objects that still exist when\n the interpreter exits.\n\n Note: ``del x`` doesn\'t directly call ``x.__del__()`` --- the former\n decrements the reference count for ``x`` by one, and the latter\n is only called when ``x``\'s reference count reaches zero. Some\n common situations that may prevent the reference count of an\n object from going to zero include: circular references between\n objects (e.g., a doubly-linked list or a tree data structure with\n parent and child pointers); a reference to the object on the\n stack frame of a function that caught an exception (the traceback\n stored in ``sys.exc_traceback`` keeps the stack frame alive); or\n a reference to the object on the stack frame that raised an\n unhandled exception in interactive mode (the traceback stored in\n ``sys.last_traceback`` keeps the stack frame alive). The first\n situation can only be remedied by explicitly breaking the cycles;\n the latter two situations can be resolved by storing ``None`` in\n ``sys.exc_traceback`` or ``sys.last_traceback``. Circular\n references which are garbage are detected when the option cycle\n detector is enabled (it\'s on by default), but can only be cleaned\n up if there are no Python-level ``__del__()`` methods involved.\n Refer to the documentation for the ``gc`` module for more\n information about how ``__del__()`` methods are handled by the\n cycle detector, particularly the description of the ``garbage``\n value.\n\n Warning: Due to the precarious circumstances under which ``__del__()``\n methods are invoked, exceptions that occur during their execution\n are ignored, and a warning is printed to ``sys.stderr`` instead.\n Also, when ``__del__()`` is invoked in response to a module being\n deleted (e.g., when execution of the program is done), other\n globals referenced by the ``__del__()`` method may already have\n been deleted or in the process of being torn down (e.g. the\n import machinery shutting down). For this reason, ``__del__()``\n methods should do the absolute minimum needed to maintain\n external invariants. Starting with version 1.5, Python\n guarantees that globals whose name begins with a single\n underscore are deleted from their module before other globals are\n deleted; if no other references to such globals exist, this may\n help in assuring that imported modules are still available at the\n time when the ``__del__()`` method is called.\n\nobject.__repr__(self)\n\n Called by the ``repr()`` built-in function and by string\n conversions (reverse quotes) to compute the "official" string\n representation of an object. If at all possible, this should look\n like a valid Python expression that could be used to recreate an\n object with the same value (given an appropriate environment). If\n this is not possible, a string of the form ``<...some useful\n description...>`` should be returned. The return value must be a\n string object. If a class defines ``__repr__()`` but not\n ``__str__()``, then ``__repr__()`` is also used when an "informal"\n string representation of instances of that class is required.\n\n This is typically used for debugging, so it is important that the\n representation is information-rich and unambiguous.\n\nobject.__str__(self)\n\n Called by the ``str()`` built-in function and by the ``print``\n statement to compute the "informal" string representation of an\n object. 
This differs from ``__repr__()`` in that it does not have\n to be a valid Python expression: a more convenient or concise\n representation may be used instead. The return value must be a\n string object.\n\nobject.__lt__(self, other)\nobject.__le__(self, other)\nobject.__eq__(self, other)\nobject.__ne__(self, other)\nobject.__gt__(self, other)\nobject.__ge__(self, other)\n\n New in version 2.1.\n\n These are the so-called "rich comparison" methods, and are called\n for comparison operators in preference to ``__cmp__()`` below. The\n correspondence between operator symbols and method names is as\n follows: ``x<y`` calls ``x.__lt__(y)``, ``x<=y`` calls\n ``x.__le__(y)``, ``x==y`` calls ``x.__eq__(y)``, ``x!=y`` and\n ``x<>y`` call ``x.__ne__(y)``, ``x>y`` calls ``x.__gt__(y)``, and\n ``x>=y`` calls ``x.__ge__(y)``.\n\n A rich comparison method may return the singleton\n ``NotImplemented`` if it does not implement the operation for a\n given pair of arguments. By convention, ``False`` and ``True`` are\n returned for a successful comparison. However, these methods can\n return any value, so if the comparison operator is used in a\n Boolean context (e.g., in the condition of an ``if`` statement),\n Python will call ``bool()`` on the value to determine if the result\n is true or false.\n\n There are no implied relationships among the comparison operators.\n The truth of ``x==y`` does not imply that ``x!=y`` is false.\n Accordingly, when defining ``__eq__()``, one should also define\n ``__ne__()`` so that the operators will behave as expected. See\n the paragraph on ``__hash__()`` for some important notes on\n creating *hashable* objects which support custom comparison\n operations and are usable as dictionary keys.\n\n There are no swapped-argument versions of these methods (to be used\n when the left argument does not support the operation but the right\n argument does); rather, ``__lt__()`` and ``__gt__()`` are each\n other\'s reflection, ``__le__()`` and ``__ge__()`` are each other\'s\n reflection, and ``__eq__()`` and ``__ne__()`` are their own\n reflection.\n\n Arguments to rich comparison methods are never coerced.\n\n To automatically generate ordering operations from a single root\n operation, see ``functools.total_ordering()``.\n\nobject.__cmp__(self, other)\n\n Called by comparison operations if rich comparison (see above) is\n not defined. Should return a negative integer if ``self < other``,\n zero if ``self == other``, a positive integer if ``self > other``.\n If no ``__cmp__()``, ``__eq__()`` or ``__ne__()`` operation is\n defined, class instances are compared by object identity\n ("address"). See also the description of ``__hash__()`` for some\n important notes on creating *hashable* objects which support custom\n comparison operations and are usable as dictionary keys. (Note: the\n restriction that exceptions are not propagated by ``__cmp__()`` has\n been removed since Python 1.5.)\n\nobject.__rcmp__(self, other)\n\n Changed in version 2.1: No longer supported.\n\nobject.__hash__(self)\n\n Called by built-in function ``hash()`` and for operations on\n members of hashed collections including ``set``, ``frozenset``, and\n ``dict``. ``__hash__()`` should return an integer. The only\n required property is that objects which compare equal have the same\n hash value; it is advised to somehow mix together (e.g. 
using\n exclusive or) the hash values for the components of the object that\n also play a part in comparison of objects.\n\n If a class does not define a ``__cmp__()`` or ``__eq__()`` method\n it should not define a ``__hash__()`` operation either; if it\n defines ``__cmp__()`` or ``__eq__()`` but not ``__hash__()``, its\n instances will not be usable in hashed collections. If a class\n defines mutable objects and implements a ``__cmp__()`` or\n ``__eq__()`` method, it should not implement ``__hash__()``, since\n hashable collection implementations require that a object\'s hash\n value is immutable (if the object\'s hash value changes, it will be\n in the wrong hash bucket).\n\n User-defined classes have ``__cmp__()`` and ``__hash__()`` methods\n by default; with them, all objects compare unequal (except with\n themselves) and ``x.__hash__()`` returns ``id(x)``.\n\n Classes which inherit a ``__hash__()`` method from a parent class\n but change the meaning of ``__cmp__()`` or ``__eq__()`` such that\n the hash value returned is no longer appropriate (e.g. by switching\n to a value-based concept of equality instead of the default\n identity based equality) can explicitly flag themselves as being\n unhashable by setting ``__hash__ = None`` in the class definition.\n Doing so means that not only will instances of the class raise an\n appropriate ``TypeError`` when a program attempts to retrieve their\n hash value, but they will also be correctly identified as\n unhashable when checking ``isinstance(obj, collections.Hashable)``\n (unlike classes which define their own ``__hash__()`` to explicitly\n raise ``TypeError``).\n\n Changed in version 2.5: ``__hash__()`` may now also return a long\n integer object; the 32-bit integer is then derived from the hash of\n that object.\n\n Changed in version 2.6: ``__hash__`` may now be set to ``None`` to\n explicitly flag instances of a class as unhashable.\n\nobject.__nonzero__(self)\n\n Called to implement truth value testing and the built-in operation\n ``bool()``; should return ``False`` or ``True``, or their integer\n equivalents ``0`` or ``1``. When this method is not defined,\n ``__len__()`` is called, if it is defined, and the object is\n considered true if its result is nonzero. If a class defines\n neither ``__len__()`` nor ``__nonzero__()``, all its instances are\n considered true.\n\nobject.__unicode__(self)\n\n Called to implement ``unicode()`` built-in; should return a Unicode\n object. When this method is not defined, string conversion is\n attempted, and the result of string conversion is converted to\n Unicode using the system default encoding.\n\n\nCustomizing attribute access\n============================\n\nThe following methods can be defined to customize the meaning of\nattribute access (use of, assignment to, or deletion of ``x.name``)\nfor class instances.\n\nobject.__getattr__(self, name)\n\n Called when an attribute lookup has not found the attribute in the\n usual places (i.e. it is not an instance attribute nor is it found\n in the class tree for ``self``). ``name`` is the attribute name.\n This method should return the (computed) attribute value or raise\n an ``AttributeError`` exception.\n\n Note that if the attribute is found through the normal mechanism,\n ``__getattr__()`` is not called. (This is an intentional asymmetry\n between ``__getattr__()`` and ``__setattr__()``.) This is done both\n for efficiency reasons and because otherwise ``__getattr__()``\n would have no way to access other attributes of the instance. 
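A minimal delegation sketch may make this concrete; the ``Wrapper`` class and the attribute names below are hypothetical and serve only to show when ``__getattr__()`` is and is not invoked:

   >>> class Wrapper(object):
   ...     def __init__(self, wrapped):
   ...         self.wrapped = wrapped          # ordinary attribute; found by normal lookup
   ...     def __getattr__(self, name):
   ...         # Reached only when normal lookup fails.
   ...         return getattr(self.wrapped, name)
   ...
   >>> w = Wrapper([1, 2, 3])
   >>> w.wrapped                               # normal lookup, __getattr__() not called
   [1, 2, 3]
   >>> w.append(4)                             # not found on Wrapper, delegated to the list
   >>> w.wrapped
   [1, 2, 3, 4]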
Note\n that at least for instance variables, you can fake total control by\n not inserting any values in the instance attribute dictionary (but\n instead inserting them in another object). See the\n ``__getattribute__()`` method below for a way to actually get total\n control in new-style classes.\n\nobject.__setattr__(self, name, value)\n\n Called when an attribute assignment is attempted. This is called\n instead of the normal mechanism (i.e. store the value in the\n instance dictionary). *name* is the attribute name, *value* is the\n value to be assigned to it.\n\n If ``__setattr__()`` wants to assign to an instance attribute, it\n should not simply execute ``self.name = value`` --- this would\n cause a recursive call to itself. Instead, it should insert the\n value in the dictionary of instance attributes, e.g.,\n ``self.__dict__[name] = value``. For new-style classes, rather\n than accessing the instance dictionary, it should call the base\n class method with the same name, for example,\n ``object.__setattr__(self, name, value)``.\n\nobject.__delattr__(self, name)\n\n Like ``__setattr__()`` but for attribute deletion instead of\n assignment. This should only be implemented if ``del obj.name`` is\n meaningful for the object.\n\n\nMore attribute access for new-style classes\n-------------------------------------------\n\nThe following methods only apply to new-style classes.\n\nobject.__getattribute__(self, name)\n\n Called unconditionally to implement attribute accesses for\n instances of the class. If the class also defines\n ``__getattr__()``, the latter will not be called unless\n ``__getattribute__()`` either calls it explicitly or raises an\n ``AttributeError``. This method should return the (computed)\n attribute value or raise an ``AttributeError`` exception. In order\n to avoid infinite recursion in this method, its implementation\n should always call the base class method with the same name to\n access any attributes it needs, for example,\n ``object.__getattribute__(self, name)``.\n\n Note: This method may still be bypassed when looking up special methods\n as the result of implicit invocation via language syntax or\n built-in functions. See *Special method lookup for new-style\n classes*.\n\n\nImplementing Descriptors\n------------------------\n\nThe following methods only apply when an instance of the class\ncontaining the method (a so-called *descriptor* class) appears in an\n*owner* class (the descriptor must be in either the owner\'s class\ndictionary or in the class dictionary for one of its parents). In the\nexamples below, "the attribute" refers to the attribute whose name is\nthe key of the property in the owner class\' ``__dict__``.\n\nobject.__get__(self, instance, owner)\n\n Called to get the attribute of the owner class (class attribute\n access) or of an instance of that class (instance attribute\n access). *owner* is always the owner class, while *instance* is the\n instance that the attribute was accessed through, or ``None`` when\n the attribute is accessed through the *owner*. 
This method should\n return the (computed) attribute value or raise an\n ``AttributeError`` exception.\n\nobject.__set__(self, instance, value)\n\n Called to set the attribute on an instance *instance* of the owner\n class to a new value, *value*.\n\nobject.__delete__(self, instance)\n\n Called to delete the attribute on an instance *instance* of the\n owner class.\n\n\nInvoking Descriptors\n--------------------\n\nIn general, a descriptor is an object attribute with "binding\nbehavior", one whose attribute access has been overridden by methods\nin the descriptor protocol: ``__get__()``, ``__set__()``, and\n``__delete__()``. If any of those methods are defined for an object,\nit is said to be a descriptor.\n\nThe default behavior for attribute access is to get, set, or delete\nthe attribute from an object\'s dictionary. For instance, ``a.x`` has a\nlookup chain starting with ``a.__dict__[\'x\']``, then\n``type(a).__dict__[\'x\']``, and continuing through the base classes of\n``type(a)`` excluding metaclasses.\n\nHowever, if the looked-up value is an object defining one of the\ndescriptor methods, then Python may override the default behavior and\ninvoke the descriptor method instead. Where this occurs in the\nprecedence chain depends on which descriptor methods were defined and\nhow they were called. Note that descriptors are only invoked for new\nstyle objects or classes (ones that subclass ``object()`` or\n``type()``).\n\nThe starting point for descriptor invocation is a binding, ``a.x``.\nHow the arguments are assembled depends on ``a``:\n\nDirect Call\n The simplest and least common call is when user code directly\n invokes a descriptor method: ``x.__get__(a)``.\n\nInstance Binding\n If binding to a new-style object instance, ``a.x`` is transformed\n into the call: ``type(a).__dict__[\'x\'].__get__(a, type(a))``.\n\nClass Binding\n If binding to a new-style class, ``A.x`` is transformed into the\n call: ``A.__dict__[\'x\'].__get__(None, A)``.\n\nSuper Binding\n If ``a`` is an instance of ``super``, then the binding ``super(B,\n obj).m()`` searches ``obj.__class__.__mro__`` for the base class\n ``A`` immediately preceding ``B`` and then invokes the descriptor\n with the call: ``A.__dict__[\'m\'].__get__(obj, obj.__class__)``.\n\nFor instance bindings, the precedence of descriptor invocation depends\non the which descriptor methods are defined. A descriptor can define\nany combination of ``__get__()``, ``__set__()`` and ``__delete__()``.\nIf it does not define ``__get__()``, then accessing the attribute will\nreturn the descriptor object itself unless there is a value in the\nobject\'s instance dictionary. If the descriptor defines ``__set__()``\nand/or ``__delete__()``, it is a data descriptor; if it defines\nneither, it is a non-data descriptor. Normally, data descriptors\ndefine both ``__get__()`` and ``__set__()``, while non-data\ndescriptors have just the ``__get__()`` method. Data descriptors with\n``__set__()`` and ``__get__()`` defined always override a redefinition\nin an instance dictionary. In contrast, non-data descriptors can be\noverridden by instances.\n\nPython methods (including ``staticmethod()`` and ``classmethod()``)\nare implemented as non-data descriptors. Accordingly, instances can\nredefine and override methods. 
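The difference is easiest to see with a small, purely illustrative data descriptor; the ``Positive`` and ``Account`` names below are invented for this sketch:

   class Positive(object):
       # A data descriptor: defines both __get__() and __set__().
       def __init__(self, name):
           self.name = '_' + name
       def __get__(self, instance, owner):
           if instance is None:
               return self                     # accessed on the class itself
           return getattr(instance, self.name, 0)
       def __set__(self, instance, value):
           if value < 0:
               raise ValueError("value must be >= 0")
           setattr(instance, self.name, value)

   class Account(object):
       balance = Positive('balance')           # lives in the class dictionary

   a = Account()
   a.balance = 10                              # routed through Positive.__set__()
   print a.balance                             # routed through Positive.__get__(); prints 10
   a.__dict__['balance'] = -5
   print a.balance                             # still 10: data descriptors override the instance dictionary

A plain method, being a non-data descriptor, behaves the other way around: an entry placed in the instance dictionary under the method's name takes precedence over the class-level function.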
This allows individual instances to\nacquire behaviors that differ from other instances of the same class.\n\nThe ``property()`` function is implemented as a data descriptor.\nAccordingly, instances cannot override the behavior of a property.\n\n\n__slots__\n---------\n\nBy default, instances of both old and new-style classes have a\ndictionary for attribute storage. This wastes space for objects\nhaving very few instance variables. The space consumption can become\nacute when creating large numbers of instances.\n\nThe default can be overridden by defining *__slots__* in a new-style\nclass definition. The *__slots__* declaration takes a sequence of\ninstance variables and reserves just enough space in each instance to\nhold a value for each variable. Space is saved because *__dict__* is\nnot created for each instance.\n\n__slots__\n\n This class variable can be assigned a string, iterable, or sequence\n of strings with variable names used by instances. If defined in a\n new-style class, *__slots__* reserves space for the declared\n variables and prevents the automatic creation of *__dict__* and\n *__weakref__* for each instance.\n\n New in version 2.2.\n\nNotes on using *__slots__*\n\n* When inheriting from a class without *__slots__*, the *__dict__*\n attribute of that class will always be accessible, so a *__slots__*\n definition in the subclass is meaningless.\n\n* Without a *__dict__* variable, instances cannot be assigned new\n variables not listed in the *__slots__* definition. Attempts to\n assign to an unlisted variable name raises ``AttributeError``. If\n dynamic assignment of new variables is desired, then add\n ``\'__dict__\'`` to the sequence of strings in the *__slots__*\n declaration.\n\n Changed in version 2.3: Previously, adding ``\'__dict__\'`` to the\n *__slots__* declaration would not enable the assignment of new\n attributes not specifically listed in the sequence of instance\n variable names.\n\n* Without a *__weakref__* variable for each instance, classes defining\n *__slots__* do not support weak references to its instances. If weak\n reference support is needed, then add ``\'__weakref__\'`` to the\n sequence of strings in the *__slots__* declaration.\n\n Changed in version 2.3: Previously, adding ``\'__weakref__\'`` to the\n *__slots__* declaration would not enable support for weak\n references.\n\n* *__slots__* are implemented at the class level by creating\n descriptors (*Implementing Descriptors*) for each variable name. As\n a result, class attributes cannot be used to set default values for\n instance variables defined by *__slots__*; otherwise, the class\n attribute would overwrite the descriptor assignment.\n\n* The action of a *__slots__* declaration is limited to the class\n where it is defined. As a result, subclasses will have a *__dict__*\n unless they also define *__slots__* (which must only contain names\n of any *additional* slots).\n\n* If a class defines a slot also defined in a base class, the instance\n variable defined by the base class slot is inaccessible (except by\n retrieving its descriptor directly from the base class). This\n renders the meaning of the program undefined. In the future, a\n check may be added to prevent this.\n\n* Nonempty *__slots__* does not work for classes derived from\n "variable-length" built-in types such as ``long``, ``str`` and\n ``tuple``.\n\n* Any non-string iterable may be assigned to *__slots__*. 
Mappings may\n also be used; however, in the future, special meaning may be\n assigned to the values corresponding to each key.\n\n* *__class__* assignment works only if both classes have the same\n *__slots__*.\n\n Changed in version 2.6: Previously, *__class__* assignment raised an\n error if either new or old class had *__slots__*.\n\n\nCustomizing class creation\n==========================\n\nBy default, new-style classes are constructed using ``type()``. A\nclass definition is read into a separate namespace and the value of\nclass name is bound to the result of ``type(name, bases, dict)``.\n\nWhen the class definition is read, if *__metaclass__* is defined then\nthe callable assigned to it will be called instead of ``type()``. This\nallows classes or functions to be written which monitor or alter the\nclass creation process:\n\n* Modifying the class dictionary prior to the class being created.\n\n* Returning an instance of another class -- essentially performing the\n role of a factory function.\n\nThese steps will have to be performed in the metaclass\'s ``__new__()``\nmethod -- ``type.__new__()`` can then be called from this method to\ncreate a class with different properties. This example adds a new\nelement to the class dictionary before creating the class:\n\n class metacls(type):\n def __new__(mcs, name, bases, dict):\n dict[\'foo\'] = \'metacls was here\'\n return type.__new__(mcs, name, bases, dict)\n\nYou can of course also override other class methods (or add new\nmethods); for example defining a custom ``__call__()`` method in the\nmetaclass allows custom behavior when the class is called, e.g. not\nalways creating a new instance.\n\n__metaclass__\n\n This variable can be any callable accepting arguments for ``name``,\n ``bases``, and ``dict``. Upon class creation, the callable is used\n instead of the built-in ``type()``.\n\n New in version 2.2.\n\nThe appropriate metaclass is determined by the following precedence\nrules:\n\n* If ``dict[\'__metaclass__\']`` exists, it is used.\n\n* Otherwise, if there is at least one base class, its metaclass is\n used (this looks for a *__class__* attribute first and if not found,\n uses its type).\n\n* Otherwise, if a global variable named __metaclass__ exists, it is\n used.\n\n* Otherwise, the old-style, classic metaclass (types.ClassType) is\n used.\n\nThe potential uses for metaclasses are boundless. Some ideas that have\nbeen explored including logging, interface checking, automatic\ndelegation, automatic property creation, proxies, frameworks, and\nautomatic resource locking/synchronization.\n\n\nCustomizing instance and subclass checks\n========================================\n\nNew in version 2.6.\n\nThe following methods are used to override the default behavior of the\n``isinstance()`` and ``issubclass()`` built-in functions.\n\nIn particular, the metaclass ``abc.ABCMeta`` implements these methods\nin order to allow the addition of Abstract Base Classes (ABCs) as\n"virtual base classes" to any class or type (including built-in\ntypes), including other ABCs.\n\nclass.__instancecheck__(self, instance)\n\n Return true if *instance* should be considered a (direct or\n indirect) instance of *class*. If defined, called to implement\n ``isinstance(instance, class)``.\n\nclass.__subclasscheck__(self, subclass)\n\n Return true if *subclass* should be considered a (direct or\n indirect) subclass of *class*. 
If defined, called to implement\n ``issubclass(subclass, class)``.\n\nNote that these methods are looked up on the type (metaclass) of a\nclass. They cannot be defined as class methods in the actual class.\nThis is consistent with the lookup of special methods that are called\non instances, only in this case the instance is itself a class.\n\nSee also:\n\n **PEP 3119** - Introducing Abstract Base Classes\n Includes the specification for customizing ``isinstance()`` and\n ``issubclass()`` behavior through ``__instancecheck__()`` and\n ``__subclasscheck__()``, with motivation for this functionality\n in the context of adding Abstract Base Classes (see the ``abc``\n module) to the language.\n\n\nEmulating callable objects\n==========================\n\nobject.__call__(self[, args...])\n\n Called when the instance is "called" as a function; if this method\n is defined, ``x(arg1, arg2, ...)`` is a shorthand for\n ``x.__call__(arg1, arg2, ...)``.\n\n\nEmulating container types\n=========================\n\nThe following methods can be defined to implement container objects.\nContainers usually are sequences (such as lists or tuples) or mappings\n(like dictionaries), but can represent other containers as well. The\nfirst set of methods is used either to emulate a sequence or to\nemulate a mapping; the difference is that for a sequence, the\nallowable keys should be the integers *k* for which ``0 <= k < N``\nwhere *N* is the length of the sequence, or slice objects, which\ndefine a range of items. (For backwards compatibility, the method\n``__getslice__()`` (see below) can also be defined to handle simple,\nbut not extended slices.) It is also recommended that mappings provide\nthe methods ``keys()``, ``values()``, ``items()``, ``has_key()``,\n``get()``, ``clear()``, ``setdefault()``, ``iterkeys()``,\n``itervalues()``, ``iteritems()``, ``pop()``, ``popitem()``,\n``copy()``, and ``update()`` behaving similar to those for Python\'s\nstandard dictionary objects. The ``UserDict`` module provides a\n``DictMixin`` class to help create those methods from a base set of\n``__getitem__()``, ``__setitem__()``, ``__delitem__()``, and\n``keys()``. Mutable sequences should provide methods ``append()``,\n``count()``, ``index()``, ``extend()``, ``insert()``, ``pop()``,\n``remove()``, ``reverse()`` and ``sort()``, like Python standard list\nobjects. Finally, sequence types should implement addition (meaning\nconcatenation) and multiplication (meaning repetition) by defining the\nmethods ``__add__()``, ``__radd__()``, ``__iadd__()``, ``__mul__()``,\n``__rmul__()`` and ``__imul__()`` described below; they should not\ndefine ``__coerce__()`` or other numerical operators. It is\nrecommended that both mappings and sequences implement the\n``__contains__()`` method to allow efficient use of the ``in``\noperator; for mappings, ``in`` should be equivalent of ``has_key()``;\nfor sequences, it should search through the values. It is further\nrecommended that both mappings and sequences implement the\n``__iter__()`` method to allow efficient iteration through the\ncontainer; for mappings, ``__iter__()`` should be the same as\n``iterkeys()``; for sequences, it should iterate through the values.\n\nobject.__len__(self)\n\n Called to implement the built-in function ``len()``. Should return\n the length of the object, an integer ``>=`` 0. 
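For instance, a minimal sequence-like class (the ``Deck`` name and the card values are invented for this sketch) only needs ``__len__()`` and ``__getitem__()`` to support ``len()``, indexing and old-style iteration:

   >>> class Deck(object):
   ...     def __init__(self, cards):
   ...         self._cards = list(cards)
   ...     def __len__(self):
   ...         return len(self._cards)
   ...     def __getitem__(self, index):
   ...         return self._cards[index]       # IndexError from the list ends iteration
   ...
   >>> d = Deck(['A', 'K', 'Q'])
   >>> len(d)
   3
   >>> d[0]
   'A'
   >>> [card for card in d]                    # iteration falls back to __getitem__()
   ['A', 'K', 'Q']
   >>> bool(Deck([]))                          # no __nonzero__(), so __len__() decides
   False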
Also, an object\n that doesn\'t define a ``__nonzero__()`` method and whose\n ``__len__()`` method returns zero is considered to be false in a\n Boolean context.\n\nobject.__getitem__(self, key)\n\n Called to implement evaluation of ``self[key]``. For sequence\n types, the accepted keys should be integers and slice objects.\n Note that the special interpretation of negative indexes (if the\n class wishes to emulate a sequence type) is up to the\n ``__getitem__()`` method. If *key* is of an inappropriate type,\n ``TypeError`` may be raised; if of a value outside the set of\n indexes for the sequence (after any special interpretation of\n negative values), ``IndexError`` should be raised. For mapping\n types, if *key* is missing (not in the container), ``KeyError``\n should be raised.\n\n Note: ``for`` loops expect that an ``IndexError`` will be raised for\n illegal indexes to allow proper detection of the end of the\n sequence.\n\nobject.__setitem__(self, key, value)\n\n Called to implement assignment to ``self[key]``. Same note as for\n ``__getitem__()``. This should only be implemented for mappings if\n the objects support changes to the values for keys, or if new keys\n can be added, or for sequences if elements can be replaced. The\n same exceptions should be raised for improper *key* values as for\n the ``__getitem__()`` method.\n\nobject.__delitem__(self, key)\n\n Called to implement deletion of ``self[key]``. Same note as for\n ``__getitem__()``. This should only be implemented for mappings if\n the objects support removal of keys, or for sequences if elements\n can be removed from the sequence. The same exceptions should be\n raised for improper *key* values as for the ``__getitem__()``\n method.\n\nobject.__iter__(self)\n\n This method is called when an iterator is required for a container.\n This method should return a new iterator object that can iterate\n over all the objects in the container. For mappings, it should\n iterate over the keys of the container, and should also be made\n available as the method ``iterkeys()``.\n\n Iterator objects also need to implement this method; they are\n required to return themselves. For more information on iterator\n objects, see *Iterator Types*.\n\nobject.__reversed__(self)\n\n Called (if present) by the ``reversed()`` built-in to implement\n reverse iteration. It should return a new iterator object that\n iterates over all the objects in the container in reverse order.\n\n If the ``__reversed__()`` method is not provided, the\n ``reversed()`` built-in will fall back to using the sequence\n protocol (``__len__()`` and ``__getitem__()``). Objects that\n support the sequence protocol should only provide\n ``__reversed__()`` if they can provide an implementation that is\n more efficient than the one provided by ``reversed()``.\n\n New in version 2.6.\n\nThe membership test operators (``in`` and ``not in``) are normally\nimplemented as an iteration through a sequence. However, container\nobjects can supply the following special method with a more efficient\nimplementation, which also does not require the object be a sequence.\n\nobject.__contains__(self, item)\n\n Called to implement membership test operators. Should return true\n if *item* is in *self*, false otherwise. 
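A small sketch (the ``IntRange`` class is invented for illustration) shows why a dedicated ``__contains__()`` can pay off: membership is answered directly instead of by iterating:

   >>> class IntRange(object):
   ...     def __init__(self, start, stop):
   ...         self.start, self.stop = start, stop
   ...     def __iter__(self):
   ...         return iter(xrange(self.start, self.stop))
   ...     def __contains__(self, item):
   ...         # Constant-time test instead of walking the whole range.
   ...         return self.start <= item < self.stop
   ...
   >>> r = IntRange(0, 10 ** 9)
   >>> 123456 in r
   True
   >>> -1 in r
   False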
For mapping objects, this\n should consider the keys of the mapping rather than the values or\n the key-item pairs.\n\n For objects that don\'t define ``__contains__()``, the membership\n test first tries iteration via ``__iter__()``, then the old\n sequence iteration protocol via ``__getitem__()``, see *this\n section in the language reference*.\n\n\nAdditional methods for emulation of sequence types\n==================================================\n\nThe following optional methods can be defined to further emulate\nsequence objects. Immutable sequences methods should at most only\ndefine ``__getslice__()``; mutable sequences might define all three\nmethods.\n\nobject.__getslice__(self, i, j)\n\n Deprecated since version 2.0: Support slice objects as parameters\n to the ``__getitem__()`` method. (However, built-in types in\n CPython currently still implement ``__getslice__()``. Therefore,\n you have to override it in derived classes when implementing\n slicing.)\n\n Called to implement evaluation of ``self[i:j]``. The returned\n object should be of the same type as *self*. Note that missing *i*\n or *j* in the slice expression are replaced by zero or\n ``sys.maxint``, respectively. If negative indexes are used in the\n slice, the length of the sequence is added to that index. If the\n instance does not implement the ``__len__()`` method, an\n ``AttributeError`` is raised. No guarantee is made that indexes\n adjusted this way are not still negative. Indexes which are\n greater than the length of the sequence are not modified. If no\n ``__getslice__()`` is found, a slice object is created instead, and\n passed to ``__getitem__()`` instead.\n\nobject.__setslice__(self, i, j, sequence)\n\n Called to implement assignment to ``self[i:j]``. Same notes for *i*\n and *j* as for ``__getslice__()``.\n\n This method is deprecated. If no ``__setslice__()`` is found, or\n for extended slicing of the form ``self[i:j:k]``, a slice object is\n created, and passed to ``__setitem__()``, instead of\n ``__setslice__()`` being called.\n\nobject.__delslice__(self, i, j)\n\n Called to implement deletion of ``self[i:j]``. Same notes for *i*\n and *j* as for ``__getslice__()``. This method is deprecated. If no\n ``__delslice__()`` is found, or for extended slicing of the form\n ``self[i:j:k]``, a slice object is created, and passed to\n ``__delitem__()``, instead of ``__delslice__()`` being called.\n\nNotice that these methods are only invoked when a single slice with a\nsingle colon is used, and the slice method is available. 
For slice\noperations involving extended slice notation, or in absence of the\nslice methods, ``__getitem__()``, ``__setitem__()`` or\n``__delitem__()`` is called with a slice object as argument.\n\nThe following example demonstrate how to make your program or module\ncompatible with earlier versions of Python (assuming that methods\n``__getitem__()``, ``__setitem__()`` and ``__delitem__()`` support\nslice objects as arguments):\n\n class MyClass:\n ...\n def __getitem__(self, index):\n ...\n def __setitem__(self, index, value):\n ...\n def __delitem__(self, index):\n ...\n\n if sys.version_info < (2, 0):\n # They won\'t be defined if version is at least 2.0 final\n\n def __getslice__(self, i, j):\n return self[max(0, i):max(0, j):]\n def __setslice__(self, i, j, seq):\n self[max(0, i):max(0, j):] = seq\n def __delslice__(self, i, j):\n del self[max(0, i):max(0, j):]\n ...\n\nNote the calls to ``max()``; these are necessary because of the\nhandling of negative indices before the ``__*slice__()`` methods are\ncalled. When negative indexes are used, the ``__*item__()`` methods\nreceive them as provided, but the ``__*slice__()`` methods get a\n"cooked" form of the index values. For each negative index value, the\nlength of the sequence is added to the index before calling the method\n(which may still result in a negative index); this is the customary\nhandling of negative indexes by the built-in sequence types, and the\n``__*item__()`` methods are expected to do this as well. However,\nsince they should already be doing that, negative indexes cannot be\npassed in; they must be constrained to the bounds of the sequence\nbefore being passed to the ``__*item__()`` methods. Calling ``max(0,\ni)`` conveniently returns the proper value.\n\n\nEmulating numeric types\n=======================\n\nThe following methods can be defined to emulate numeric objects.\nMethods corresponding to operations that are not supported by the\nparticular kind of number implemented (e.g., bitwise operations for\nnon-integral numbers) should be left undefined.\n\nobject.__add__(self, other)\nobject.__sub__(self, other)\nobject.__mul__(self, other)\nobject.__floordiv__(self, other)\nobject.__mod__(self, other)\nobject.__divmod__(self, other)\nobject.__pow__(self, other[, modulo])\nobject.__lshift__(self, other)\nobject.__rshift__(self, other)\nobject.__and__(self, other)\nobject.__xor__(self, other)\nobject.__or__(self, other)\n\n These methods are called to implement the binary arithmetic\n operations (``+``, ``-``, ``*``, ``//``, ``%``, ``divmod()``,\n ``pow()``, ``**``, ``<<``, ``>>``, ``&``, ``^``, ``|``). For\n instance, to evaluate the expression ``x + y``, where *x* is an\n instance of a class that has an ``__add__()`` method,\n ``x.__add__(y)`` is called. The ``__divmod__()`` method should be\n the equivalent to using ``__floordiv__()`` and ``__mod__()``; it\n should not be related to ``__truediv__()`` (described below). Note\n that ``__pow__()`` should be defined to accept an optional third\n argument if the ternary version of the built-in ``pow()`` function\n is to be supported.\n\n If one of those methods does not support the operation with the\n supplied arguments, it should return ``NotImplemented``.\n\nobject.__div__(self, other)\nobject.__truediv__(self, other)\n\n The division operator (``/``) is implemented by these methods. The\n ``__truediv__()`` method is used when ``__future__.division`` is in\n effect, otherwise ``__div__()`` is used. 
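As an illustrative sketch only (the ``Money`` class is not a real library type), a class will typically define both ``__div__()`` and ``__truediv__()`` and return ``NotImplemented`` for operand types it does not know how to handle:

   class Money(object):
       def __init__(self, cents):
           self.cents = cents
       def __add__(self, other):
           if not isinstance(other, Money):
               return NotImplemented           # let the other operand try, or raise TypeError
           return Money(self.cents + other.cents)
       __radd__ = __add__                      # addition is symmetric here
       def __div__(self, other):               # used when true division is *not* in effect
           if not isinstance(other, (int, long)):
               return NotImplemented
           return Money(self.cents // other)
       __truediv__ = __div__                   # same behaviour under "from __future__ import division"
       def __repr__(self):
           return 'Money(%d)' % self.cents

   print Money(100) + Money(50)                # Money(150)
   print Money(100) / 3                        # Money(33)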
If only one of these two\n methods is defined, the object will not support division in the\n alternate context; ``TypeError`` will be raised instead.\n\nobject.__radd__(self, other)\nobject.__rsub__(self, other)\nobject.__rmul__(self, other)\nobject.__rdiv__(self, other)\nobject.__rtruediv__(self, other)\nobject.__rfloordiv__(self, other)\nobject.__rmod__(self, other)\nobject.__rdivmod__(self, other)\nobject.__rpow__(self, other)\nobject.__rlshift__(self, other)\nobject.__rrshift__(self, other)\nobject.__rand__(self, other)\nobject.__rxor__(self, other)\nobject.__ror__(self, other)\n\n These methods are called to implement the binary arithmetic\n operations (``+``, ``-``, ``*``, ``/``, ``%``, ``divmod()``,\n ``pow()``, ``**``, ``<<``, ``>>``, ``&``, ``^``, ``|``) with\n reflected (swapped) operands. These functions are only called if\n the left operand does not support the corresponding operation and\n the operands are of different types. [2] For instance, to evaluate\n the expression ``x - y``, where *y* is an instance of a class that\n has an ``__rsub__()`` method, ``y.__rsub__(x)`` is called if\n ``x.__sub__(y)`` returns *NotImplemented*.\n\n Note that ternary ``pow()`` will not try calling ``__rpow__()``\n (the coercion rules would become too complicated).\n\n Note: If the right operand\'s type is a subclass of the left operand\'s\n type and that subclass provides the reflected method for the\n operation, this method will be called before the left operand\'s\n non-reflected method. This behavior allows subclasses to\n override their ancestors\' operations.\n\nobject.__iadd__(self, other)\nobject.__isub__(self, other)\nobject.__imul__(self, other)\nobject.__idiv__(self, other)\nobject.__itruediv__(self, other)\nobject.__ifloordiv__(self, other)\nobject.__imod__(self, other)\nobject.__ipow__(self, other[, modulo])\nobject.__ilshift__(self, other)\nobject.__irshift__(self, other)\nobject.__iand__(self, other)\nobject.__ixor__(self, other)\nobject.__ior__(self, other)\n\n These methods are called to implement the augmented arithmetic\n assignments (``+=``, ``-=``, ``*=``, ``/=``, ``//=``, ``%=``,\n ``**=``, ``<<=``, ``>>=``, ``&=``, ``^=``, ``|=``). These methods\n should attempt to do the operation in-place (modifying *self*) and\n return the result (which could be, but does not have to be,\n *self*). If a specific method is not defined, the augmented\n assignment falls back to the normal methods. For instance, to\n execute the statement ``x += y``, where *x* is an instance of a\n class that has an ``__iadd__()`` method, ``x.__iadd__(y)`` is\n called. If *x* is an instance of a class that does not define a\n ``__iadd__()`` method, ``x.__add__(y)`` and ``y.__radd__(x)`` are\n considered, as with the evaluation of ``x + y``.\n\nobject.__neg__(self)\nobject.__pos__(self)\nobject.__abs__(self)\nobject.__invert__(self)\n\n Called to implement the unary arithmetic operations (``-``, ``+``,\n ``abs()`` and ``~``).\n\nobject.__complex__(self)\nobject.__int__(self)\nobject.__long__(self)\nobject.__float__(self)\n\n Called to implement the built-in functions ``complex()``,\n ``int()``, ``long()``, and ``float()``. Should return a value of\n the appropriate type.\n\nobject.__oct__(self)\nobject.__hex__(self)\n\n Called to implement the built-in functions ``oct()`` and ``hex()``.\n Should return a string value.\n\nobject.__index__(self)\n\n Called to implement ``operator.index()``. Also called whenever\n Python needs an integer object (such as in slicing). 
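A brief, purely hypothetical example (the ``FileSize`` class is invented here): defining ``__index__()`` lets instances be used directly wherever Python expects an index:

   >>> class FileSize(object):
   ...     def __init__(self, nbytes):
   ...         self.nbytes = nbytes
   ...     def __index__(self):
   ...         return self.nbytes
   ...
   >>> data = range(10)
   >>> data[FileSize(3)]                       # indexing calls __index__()
   3
   >>> data[:FileSize(3)]                      # so does slicing
   [0, 1, 2]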
Must return\n an integer (int or long).\n\n New in version 2.5.\n\nobject.__coerce__(self, other)\n\n Called to implement "mixed-mode" numeric arithmetic. Should either\n return a 2-tuple containing *self* and *other* converted to a\n common numeric type, or ``None`` if conversion is impossible. When\n the common type would be the type of ``other``, it is sufficient to\n return ``None``, since the interpreter will also ask the other\n object to attempt a coercion (but sometimes, if the implementation\n of the other type cannot be changed, it is useful to do the\n conversion to the other type here). A return value of\n ``NotImplemented`` is equivalent to returning ``None``.\n\n\nCoercion rules\n==============\n\nThis section used to document the rules for coercion. As the language\nhas evolved, the coercion rules have become hard to document\nprecisely; documenting what one version of one particular\nimplementation does is undesirable. Instead, here are some informal\nguidelines regarding coercion. In Python 3.0, coercion will not be\nsupported.\n\n* If the left operand of a % operator is a string or Unicode object,\n no coercion takes place and the string formatting operation is\n invoked instead.\n\n* It is no longer recommended to define a coercion operation. Mixed-\n mode operations on types that don\'t define coercion pass the\n original arguments to the operation.\n\n* New-style classes (those derived from ``object``) never invoke the\n ``__coerce__()`` method in response to a binary operator; the only\n time ``__coerce__()`` is invoked is when the built-in function\n ``coerce()`` is called.\n\n* For most intents and purposes, an operator that returns\n ``NotImplemented`` is treated the same as one that is not\n implemented at all.\n\n* Below, ``__op__()`` and ``__rop__()`` are used to signify the\n generic method names corresponding to an operator; ``__iop__()`` is\n used for the corresponding in-place operator. For example, for the\n operator \'``+``\', ``__add__()`` and ``__radd__()`` are used for the\n left and right variant of the binary operator, and ``__iadd__()``\n for the in-place variant.\n\n* For objects *x* and *y*, first ``x.__op__(y)`` is tried. If this is\n not implemented or returns ``NotImplemented``, ``y.__rop__(x)`` is\n tried. If this is also not implemented or returns\n ``NotImplemented``, a ``TypeError`` exception is raised. But see\n the following exception:\n\n* Exception to the previous item: if the left operand is an instance\n of a built-in type or a new-style class, and the right operand is an\n instance of a proper subclass of that type or class and overrides\n the base\'s ``__rop__()`` method, the right operand\'s ``__rop__()``\n method is tried *before* the left operand\'s ``__op__()`` method.\n\n This is done so that a subclass can completely override binary\n operators. Otherwise, the left operand\'s ``__op__()`` method would\n always accept the right operand: when an instance of a given class\n is expected, an instance of a subclass of that class is always\n acceptable.\n\n* When either operand type defines a coercion, this coercion is called\n before that type\'s ``__op__()`` or ``__rop__()`` method is called,\n but no sooner. If the coercion returns an object of a different\n type for the operand whose coercion is invoked, part of the process\n is redone using the new object.\n\n* When an in-place operator (like \'``+=``\') is used, if the left\n operand implements ``__iop__()``, it is invoked without any\n coercion. 
When the operation falls back to ``__op__()`` and/or\n ``__rop__()``, the normal coercion rules apply.\n\n* In ``x + y``, if *x* is a sequence that implements sequence\n concatenation, sequence concatenation is invoked.\n\n* In ``x * y``, if one operator is a sequence that implements sequence\n repetition, and the other is an integer (``int`` or ``long``),\n sequence repetition is invoked.\n\n* Rich comparisons (implemented by methods ``__eq__()`` and so on)\n never use coercion. Three-way comparison (implemented by\n ``__cmp__()``) does use coercion under the same conditions as other\n binary operations use it.\n\n* In the current implementation, the built-in numeric types ``int``,\n ``long``, ``float``, and ``complex`` do not use coercion. All these\n types implement a ``__coerce__()`` method, for use by the built-in\n ``coerce()`` function.\n\n Changed in version 2.7.\n\n\nWith Statement Context Managers\n===============================\n\nNew in version 2.5.\n\nA *context manager* is an object that defines the runtime context to\nbe established when executing a ``with`` statement. The context\nmanager handles the entry into, and the exit from, the desired runtime\ncontext for the execution of the block of code. Context managers are\nnormally invoked using the ``with`` statement (described in section\n*The with statement*), but can also be used by directly invoking their\nmethods.\n\nTypical uses of context managers include saving and restoring various\nkinds of global state, locking and unlocking resources, closing opened\nfiles, etc.\n\nFor more information on context managers, see *Context Manager Types*.\n\nobject.__enter__(self)\n\n Enter the runtime context related to this object. The ``with``\n statement will bind this method\'s return value to the target(s)\n specified in the ``as`` clause of the statement, if any.\n\nobject.__exit__(self, exc_type, exc_value, traceback)\n\n Exit the runtime context related to this object. The parameters\n describe the exception that caused the context to be exited. If the\n context was exited without an exception, all three arguments will\n be ``None``.\n\n If an exception is supplied, and the method wishes to suppress the\n exception (i.e., prevent it from being propagated), it should\n return a true value. Otherwise, the exception will be processed\n normally upon exit from this method.\n\n Note that ``__exit__()`` methods should not reraise the passed-in\n exception; this is the caller\'s responsibility.\n\nSee also:\n\n **PEP 0343** - The "with" statement\n The specification, background, and examples for the Python\n ``with`` statement.\n\n\nSpecial method lookup for old-style classes\n===========================================\n\nFor old-style classes, special methods are always looked up in exactly\nthe same way as any other method or attribute. This is the case\nregardless of whether the method is being looked up explicitly as in\n``x.__getitem__(i)`` or implicitly as in ``x[i]``.\n\nThis behaviour means that special methods may exhibit different\nbehaviour for different instances of a single old-style class if the\nappropriate special attributes are set differently:\n\n >>> class C:\n ... 
pass\n ...\n >>> c1 = C()\n >>> c2 = C()\n >>> c1.__len__ = lambda: 5\n >>> c2.__len__ = lambda: 9\n >>> len(c1)\n 5\n >>> len(c2)\n 9\n\n\nSpecial method lookup for new-style classes\n===========================================\n\nFor new-style classes, implicit invocations of special methods are\nonly guaranteed to work correctly if defined on an object\'s type, not\nin the object\'s instance dictionary. That behaviour is the reason why\nthe following code raises an exception (unlike the equivalent example\nwith old-style classes):\n\n >>> class C(object):\n ... pass\n ...\n >>> c = C()\n >>> c.__len__ = lambda: 5\n >>> len(c)\n Traceback (most recent call last):\n File "", line 1, in \n TypeError: object of type \'C\' has no len()\n\nThe rationale behind this behaviour lies with a number of special\nmethods such as ``__hash__()`` and ``__repr__()`` that are implemented\nby all objects, including type objects. If the implicit lookup of\nthese methods used the conventional lookup process, they would fail\nwhen invoked on the type object itself:\n\n >>> 1 .__hash__() == hash(1)\n True\n >>> int.__hash__() == hash(int)\n Traceback (most recent call last):\n File "", line 1, in \n TypeError: descriptor \'__hash__\' of \'int\' object needs an argument\n\nIncorrectly attempting to invoke an unbound method of a class in this\nway is sometimes referred to as \'metaclass confusion\', and is avoided\nby bypassing the instance when looking up special methods:\n\n >>> type(1).__hash__(1) == hash(1)\n True\n >>> type(int).__hash__(int) == hash(int)\n True\n\nIn addition to bypassing any instance attributes in the interest of\ncorrectness, implicit special method lookup generally also bypasses\nthe ``__getattribute__()`` method even of the object\'s metaclass:\n\n >>> class Meta(type):\n ... def __getattribute__(*args):\n ... print "Metaclass getattribute invoked"\n ... return type.__getattribute__(*args)\n ...\n >>> class C(object):\n ... __metaclass__ = Meta\n ... def __len__(self):\n ... return 10\n ... def __getattribute__(*args):\n ... print "Class getattribute invoked"\n ... return object.__getattribute__(*args)\n ...\n >>> c = C()\n >>> c.__len__() # Explicit lookup via instance\n Class getattribute invoked\n 10\n >>> type(c).__len__(c) # Explicit lookup via type\n Metaclass getattribute invoked\n 10\n >>> len(c) # Implicit lookup\n 10\n\nBypassing the ``__getattribute__()`` machinery in this fashion\nprovides significant scope for speed optimisations within the\ninterpreter, at the cost of some flexibility in the handling of\nspecial methods (the special method *must* be set on the class object\nitself in order to be consistently invoked by the interpreter).\n\n-[ Footnotes ]-\n\n[1] It *is* possible in some cases to change an object\'s type, under\n certain controlled conditions. 
It generally isn\'t a good idea\n though, since it can lead to some very strange behaviour if it is\n handled incorrectly.\n\n[2] For operands of the same type, it is assumed that if the non-\n reflected method (such as ``__add__()``) fails the operation is\n not supported, which is why the reflected method is not called.\n', 'string-conversions': u'\nString conversions\n******************\n\nA string conversion is an expression list enclosed in reverse (a.k.a.\nbackward) quotes:\n\n string_conversion ::= "\'" expression_list "\'"\n\nA string conversion evaluates the contained expression list and\nconverts the resulting object into a string according to rules\nspecific to its type.\n\nIf the object is a string, a number, ``None``, or a tuple, list or\ndictionary containing only objects whose type is one of these, the\nresulting string is a valid Python expression which can be passed to\nthe built-in function ``eval()`` to yield an expression with the same\nvalue (or an approximation, if floating point numbers are involved).\n\n(In particular, converting a string adds quotes around it and converts\n"funny" characters to escape sequences that are safe to print.)\n\nRecursive objects (for example, lists or dictionaries that contain a\nreference to themselves, directly or indirectly) use ``...`` to\nindicate a recursive reference, and the result cannot be passed to\n``eval()`` to get an equal value (``SyntaxError`` will be raised\ninstead).\n\nThe built-in function ``repr()`` performs exactly the same conversion\nin its argument as enclosing it in parentheses and reverse quotes\ndoes. The built-in function ``str()`` performs a similar but more\nuser-friendly conversion.\n', - 'string-methods': u'\nString Methods\n**************\n\nBelow are listed the string methods which both 8-bit strings and\nUnicode objects support.\n\nIn addition, Python\'s strings support the sequence type methods\ndescribed in the *Sequence Types --- str, unicode, list, tuple,\nbuffer, xrange* section. To output formatted strings use template\nstrings or the ``%`` operator described in the *String Formatting\nOperations* section. Also, see the ``re`` module for string functions\nbased on regular expressions.\n\nstr.capitalize()\n\n Return a copy of the string with only its first character\n capitalized.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.center(width[, fillchar])\n\n Return centered in a string of length *width*. Padding is done\n using the specified *fillchar* (default is a space).\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.count(sub[, start[, end]])\n\n Return the number of non-overlapping occurrences of substring *sub*\n in the range [*start*, *end*]. Optional arguments *start* and\n *end* are interpreted as in slice notation.\n\nstr.decode([encoding[, errors]])\n\n Decodes the string using the codec registered for *encoding*.\n *encoding* defaults to the default string encoding. *errors* may\n be given to set a different error handling scheme. The default is\n ``\'strict\'``, meaning that encoding errors raise ``UnicodeError``.\n Other possible values are ``\'ignore\'``, ``\'replace\'`` and any other\n name registered via ``codecs.register_error()``, see section *Codec\n Base Classes*.\n\n New in version 2.2.\n\n Changed in version 2.3: Support for other error handling schemes\n added.\n\n Changed in version 2.7: Support for keyword arguments added.\n\nstr.encode([encoding[, errors]])\n\n Return an encoded version of the string. 
Default encoding is the\n current default string encoding. *errors* may be given to set a\n different error handling scheme. The default for *errors* is\n ``\'strict\'``, meaning that encoding errors raise a\n ``UnicodeError``. Other possible values are ``\'ignore\'``,\n ``\'replace\'``, ``\'xmlcharrefreplace\'``, ``\'backslashreplace\'`` and\n any other name registered via ``codecs.register_error()``, see\n section *Codec Base Classes*. For a list of possible encodings, see\n section *Standard Encodings*.\n\n New in version 2.0.\n\n Changed in version 2.3: Support for ``\'xmlcharrefreplace\'`` and\n ``\'backslashreplace\'`` and other error handling schemes added.\n\n Changed in version 2.7: Support for keyword arguments added.\n\nstr.endswith(suffix[, start[, end]])\n\n Return ``True`` if the string ends with the specified *suffix*,\n otherwise return ``False``. *suffix* can also be a tuple of\n suffixes to look for. With optional *start*, test beginning at\n that position. With optional *end*, stop comparing at that\n position.\n\n Changed in version 2.5: Accept tuples as *suffix*.\n\nstr.expandtabs([tabsize])\n\n Return a copy of the string where all tab characters are replaced\n by one or more spaces, depending on the current column and the\n given tab size. The column number is reset to zero after each\n newline occurring in the string. If *tabsize* is not given, a tab\n size of ``8`` characters is assumed. This doesn\'t understand other\n non-printing characters or escape sequences.\n\nstr.find(sub[, start[, end]])\n\n Return the lowest index in the string where substring *sub* is\n found, such that *sub* is contained in the slice ``s[start:end]``.\n Optional arguments *start* and *end* are interpreted as in slice\n notation. Return ``-1`` if *sub* is not found.\n\nstr.format(*args, **kwargs)\n\n Perform a string formatting operation. The string on which this\n method is called can contain literal text or replacement fields\n delimited by braces ``{}``. Each replacement field contains either\n the numeric index of a positional argument, or the name of a\n keyword argument. 
Returns a copy of the string where each\n replacement field is replaced with the string value of the\n corresponding argument.\n\n >>> "The sum of 1 + 2 is {0}".format(1+2)\n \'The sum of 1 + 2 is 3\'\n\n See *Format String Syntax* for a description of the various\n formatting options that can be specified in format strings.\n\n This method of string formatting is the new standard in Python 3.0,\n and should be preferred to the ``%`` formatting described in\n *String Formatting Operations* in new code.\n\n New in version 2.6.\n\nstr.index(sub[, start[, end]])\n\n Like ``find()``, but raise ``ValueError`` when the substring is not\n found.\n\nstr.isalnum()\n\n Return true if all characters in the string are alphanumeric and\n there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isalpha()\n\n Return true if all characters in the string are alphabetic and\n there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isdigit()\n\n Return true if all characters in the string are digits and there is\n at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.islower()\n\n Return true if all cased characters in the string are lowercase and\n there is at least one cased character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isspace()\n\n Return true if there are only whitespace characters in the string\n and there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.istitle()\n\n Return true if the string is a titlecased string and there is at\n least one character, for example uppercase characters may only\n follow uncased characters and lowercase characters only cased ones.\n Return false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isupper()\n\n Return true if all cased characters in the string are uppercase and\n there is at least one cased character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.join(iterable)\n\n Return a string which is the concatenation of the strings in the\n *iterable* *iterable*. The separator between elements is the\n string providing this method.\n\nstr.ljust(width[, fillchar])\n\n Return the string left justified in a string of length *width*.\n Padding is done using the specified *fillchar* (default is a\n space). The original string is returned if *width* is less than\n ``len(s)``.\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.lower()\n\n Return a copy of the string converted to lowercase.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.lstrip([chars])\n\n Return a copy of the string with leading characters removed. The\n *chars* argument is a string specifying the set of characters to be\n removed. If omitted or ``None``, the *chars* argument defaults to\n removing whitespace. The *chars* argument is not a prefix; rather,\n all combinations of its values are stripped:\n\n >>> \' spacious \'.lstrip()\n \'spacious \'\n >>> \'www.example.com\'.lstrip(\'cmowz.\')\n \'example.com\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.partition(sep)\n\n Split the string at the first occurrence of *sep*, and return a\n 3-tuple containing the part before the separator, the separator\n itself, and the part after the separator. 
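For example (the address used here is an arbitrary illustration):

   >>> 'user@example.com'.partition('@')
   ('user', '@', 'example.com')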
If the separator is not\n found, return a 3-tuple containing the string itself, followed by\n two empty strings.\n\n New in version 2.5.\n\nstr.replace(old, new[, count])\n\n Return a copy of the string with all occurrences of substring *old*\n replaced by *new*. If the optional argument *count* is given, only\n the first *count* occurrences are replaced.\n\nstr.rfind(sub[, start[, end]])\n\n Return the highest index in the string where substring *sub* is\n found, such that *sub* is contained within ``s[start:end]``.\n Optional arguments *start* and *end* are interpreted as in slice\n notation. Return ``-1`` on failure.\n\nstr.rindex(sub[, start[, end]])\n\n Like ``rfind()`` but raises ``ValueError`` when the substring *sub*\n is not found.\n\nstr.rjust(width[, fillchar])\n\n Return the string right justified in a string of length *width*.\n Padding is done using the specified *fillchar* (default is a\n space). The original string is returned if *width* is less than\n ``len(s)``.\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.rpartition(sep)\n\n Split the string at the last occurrence of *sep*, and return a\n 3-tuple containing the part before the separator, the separator\n itself, and the part after the separator. If the separator is not\n found, return a 3-tuple containing two empty strings, followed by\n the string itself.\n\n New in version 2.5.\n\nstr.rsplit([sep[, maxsplit]])\n\n Return a list of the words in the string, using *sep* as the\n delimiter string. If *maxsplit* is given, at most *maxsplit* splits\n are done, the *rightmost* ones. If *sep* is not specified or\n ``None``, any whitespace string is a separator. Except for\n splitting from the right, ``rsplit()`` behaves like ``split()``\n which is described in detail below.\n\n New in version 2.4.\n\nstr.rstrip([chars])\n\n Return a copy of the string with trailing characters removed. The\n *chars* argument is a string specifying the set of characters to be\n removed. If omitted or ``None``, the *chars* argument defaults to\n removing whitespace. The *chars* argument is not a suffix; rather,\n all combinations of its values are stripped:\n\n >>> \' spacious \'.rstrip()\n \' spacious\'\n >>> \'mississippi\'.rstrip(\'ipz\')\n \'mississ\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.split([sep[, maxsplit]])\n\n Return a list of the words in the string, using *sep* as the\n delimiter string. If *maxsplit* is given, at most *maxsplit*\n splits are done (thus, the list will have at most ``maxsplit+1``\n elements). If *maxsplit* is not specified, then there is no limit\n on the number of splits (all possible splits are made).\n\n If *sep* is given, consecutive delimiters are not grouped together\n and are deemed to delimit empty strings (for example,\n ``\'1,,2\'.split(\',\')`` returns ``[\'1\', \'\', \'2\']``). The *sep*\n argument may consist of multiple characters (for example,\n ``\'1<>2<>3\'.split(\'<>\')`` returns ``[\'1\', \'2\', \'3\']``). Splitting\n an empty string with a specified separator returns ``[\'\']``.\n\n If *sep* is not specified or is ``None``, a different splitting\n algorithm is applied: runs of consecutive whitespace are regarded\n as a single separator, and the result will contain no empty strings\n at the start or end if the string has leading or trailing\n whitespace. 
Consequently, splitting an empty string or a string\n consisting of just whitespace with a ``None`` separator returns\n ``[]``.\n\n For example, ``\' 1 2 3 \'.split()`` returns ``[\'1\', \'2\', \'3\']``,\n and ``\' 1 2 3 \'.split(None, 1)`` returns ``[\'1\', \'2 3 \']``.\n\nstr.splitlines([keepends])\n\n Return a list of the lines in the string, breaking at line\n boundaries. Line breaks are not included in the resulting list\n unless *keepends* is given and true.\n\nstr.startswith(prefix[, start[, end]])\n\n Return ``True`` if string starts with the *prefix*, otherwise\n return ``False``. *prefix* can also be a tuple of prefixes to look\n for. With optional *start*, test string beginning at that\n position. With optional *end*, stop comparing string at that\n position.\n\n Changed in version 2.5: Accept tuples as *prefix*.\n\nstr.strip([chars])\n\n Return a copy of the string with the leading and trailing\n characters removed. The *chars* argument is a string specifying the\n set of characters to be removed. If omitted or ``None``, the\n *chars* argument defaults to removing whitespace. The *chars*\n argument is not a prefix or suffix; rather, all combinations of its\n values are stripped:\n\n >>> \' spacious \'.strip()\n \'spacious\'\n >>> \'www.example.com\'.strip(\'cmowz.\')\n \'example\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.swapcase()\n\n Return a copy of the string with uppercase characters converted to\n lowercase and vice versa.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.title()\n\n Return a titlecased version of the string where words start with an\n uppercase character and the remaining characters are lowercase.\n\n The algorithm uses a simple language-independent definition of a\n word as groups of consecutive letters. The definition works in\n many contexts but it means that apostrophes in contractions and\n possessives form word boundaries, which may not be the desired\n result:\n\n >>> "they\'re bill\'s friends from the UK".title()\n "They\'Re Bill\'S Friends From The Uk"\n\n A workaround for apostrophes can be constructed using regular\n expressions:\n\n >>> import re\n >>> def titlecase(s):\n return re.sub(r"[A-Za-z]+(\'[A-Za-z]+)?",\n lambda mo: mo.group(0)[0].upper() +\n mo.group(0)[1:].lower(),\n s)\n\n >>> titlecase("they\'re bill\'s friends.")\n "They\'re Bill\'s Friends."\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.translate(table[, deletechars])\n\n Return a copy of the string where all characters occurring in the\n optional argument *deletechars* are removed, and the remaining\n characters have been mapped through the given translation table,\n which must be a string of length 256.\n\n You can use the ``maketrans()`` helper function in the ``string``\n module to create a translation table. For string objects, set the\n *table* argument to ``None`` for translations that only delete\n characters:\n\n >>> \'read this short text\'.translate(None, \'aeiou\')\n \'rd ths shrt txt\'\n\n New in version 2.6: Support for a ``None`` *table* argument.\n\n For Unicode objects, the ``translate()`` method does not accept the\n optional *deletechars* argument. Instead, it returns a copy of the\n *s* where all characters have been mapped through the given\n translation table which must be a mapping of Unicode ordinals to\n Unicode ordinals, Unicode strings or ``None``. Unmapped characters\n are left untouched. 
Characters mapped to ``None`` are deleted.\n Note, a more flexible approach is to create a custom character\n mapping codec using the ``codecs`` module (see ``encodings.cp1251``\n for an example).\n\nstr.upper()\n\n Return a copy of the string converted to uppercase.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.zfill(width)\n\n Return the numeric string left filled with zeros in a string of\n length *width*. A sign prefix is handled correctly. The original\n string is returned if *width* is less than ``len(s)``.\n\n New in version 2.2.2.\n\nThe following methods are present only on unicode objects:\n\nunicode.isnumeric()\n\n Return ``True`` if there are only numeric characters in S,\n ``False`` otherwise. Numeric characters include digit characters,\n and all characters that have the Unicode numeric value property,\n e.g. U+2155, VULGAR FRACTION ONE FIFTH.\n\nunicode.isdecimal()\n\n Return ``True`` if there are only decimal characters in S,\n ``False`` otherwise. Decimal characters include digit characters,\n and all characters that that can be used to form decimal-radix\n numbers, e.g. U+0660, ARABIC-INDIC DIGIT ZERO.\n', - 'strings': u'\nString literals\n***************\n\nString literals are described by the following lexical definitions:\n\n stringliteral ::= [stringprefix](shortstring | longstring)\n stringprefix ::= "r" | "u" | "ur" | "R" | "U" | "UR" | "Ur" | "uR"\n shortstring ::= "\'" shortstringitem* "\'" | \'"\' shortstringitem* \'"\'\n longstring ::= "\'\'\'" longstringitem* "\'\'\'"\n | \'"""\' longstringitem* \'"""\'\n shortstringitem ::= shortstringchar | escapeseq\n longstringitem ::= longstringchar | escapeseq\n shortstringchar ::= \n longstringchar ::= \n escapeseq ::= "\\" \n\nOne syntactic restriction not indicated by these productions is that\nwhitespace is not allowed between the **stringprefix** and the rest of\nthe string literal. The source character set is defined by the\nencoding declaration; it is ASCII if no encoding declaration is given\nin the source file; see section *Encoding declarations*.\n\nIn plain English: String literals can be enclosed in matching single\nquotes (``\'``) or double quotes (``"``). They can also be enclosed in\nmatching groups of three single or double quotes (these are generally\nreferred to as *triple-quoted strings*). The backslash (``\\``)\ncharacter is used to escape characters that otherwise have a special\nmeaning, such as newline, backslash itself, or the quote character.\nString literals may optionally be prefixed with a letter ``\'r\'`` or\n``\'R\'``; such strings are called *raw strings* and use different rules\nfor interpreting backslash escape sequences. A prefix of ``\'u\'`` or\n``\'U\'`` makes the string a Unicode string. Unicode strings use the\nUnicode character set as defined by the Unicode Consortium and ISO\n10646. Some additional escape sequences, described below, are\navailable in Unicode strings. The two prefix characters may be\ncombined; in this case, ``\'u\'`` must appear before ``\'r\'``.\n\nIn triple-quoted strings, unescaped newlines and quotes are allowed\n(and are retained), except that three unescaped quotes in a row\nterminate the string. (A "quote" is the character used to open the\nstring, i.e. either ``\'`` or ``"``.)\n\nUnless an ``\'r\'`` or ``\'R\'`` prefix is present, escape sequences in\nstrings are interpreted according to rules similar to those used by\nStandard C. 
The recognized escape sequences are:\n\n+-------------------+-----------------------------------+---------+\n| Escape Sequence | Meaning | Notes |\n+===================+===================================+=========+\n| ``\\newline`` | Ignored | |\n+-------------------+-----------------------------------+---------+\n| ``\\\\`` | Backslash (``\\``) | |\n+-------------------+-----------------------------------+---------+\n| ``\\\'`` | Single quote (``\'``) | |\n+-------------------+-----------------------------------+---------+\n| ``\\"`` | Double quote (``"``) | |\n+-------------------+-----------------------------------+---------+\n| ``\\a`` | ASCII Bell (BEL) | |\n+-------------------+-----------------------------------+---------+\n| ``\\b`` | ASCII Backspace (BS) | |\n+-------------------+-----------------------------------+---------+\n| ``\\f`` | ASCII Formfeed (FF) | |\n+-------------------+-----------------------------------+---------+\n| ``\\n`` | ASCII Linefeed (LF) | |\n+-------------------+-----------------------------------+---------+\n| ``\\N{name}`` | Character named *name* in the | |\n| | Unicode database (Unicode only) | |\n+-------------------+-----------------------------------+---------+\n| ``\\r`` | ASCII Carriage Return (CR) | |\n+-------------------+-----------------------------------+---------+\n| ``\\t`` | ASCII Horizontal Tab (TAB) | |\n+-------------------+-----------------------------------+---------+\n| ``\\uxxxx`` | Character with 16-bit hex value | (1) |\n| | *xxxx* (Unicode only) | |\n+-------------------+-----------------------------------+---------+\n| ``\\Uxxxxxxxx`` | Character with 32-bit hex value | (2) |\n| | *xxxxxxxx* (Unicode only) | |\n+-------------------+-----------------------------------+---------+\n| ``\\v`` | ASCII Vertical Tab (VT) | |\n+-------------------+-----------------------------------+---------+\n| ``\\ooo`` | Character with octal value *ooo* | (3,5) |\n+-------------------+-----------------------------------+---------+\n| ``\\xhh`` | Character with hex value *hh* | (4,5) |\n+-------------------+-----------------------------------+---------+\n\nNotes:\n\n1. Individual code units which form parts of a surrogate pair can be\n encoded using this escape sequence.\n\n2. Any Unicode character can be encoded this way, but characters\n outside the Basic Multilingual Plane (BMP) will be encoded using a\n surrogate pair if Python is compiled to use 16-bit code units (the\n default). Individual code units which form parts of a surrogate\n pair can be encoded using this escape sequence.\n\n3. As in Standard C, up to three octal digits are accepted.\n\n4. Unlike in Standard C, exactly two hex digits are required.\n\n5. In a string literal, hexadecimal and octal escapes denote the byte\n with the given value; it is not necessary that the byte encodes a\n character in the source character set. In a Unicode literal, these\n escapes denote a Unicode character with the given value.\n\nUnlike Standard C, all unrecognized escape sequences are left in the\nstring unchanged, i.e., *the backslash is left in the string*. (This\nbehavior is useful when debugging: if an escape sequence is mistyped,\nthe resulting output is more easily recognized as broken.) 
It is also\nimportant to note that the escape sequences marked as "(Unicode only)"\nin the table above fall into the category of unrecognized escapes for\nnon-Unicode string literals.\n\nWhen an ``\'r\'`` or ``\'R\'`` prefix is present, a character following a\nbackslash is included in the string without change, and *all\nbackslashes are left in the string*. For example, the string literal\n``r"\\n"`` consists of two characters: a backslash and a lowercase\n``\'n\'``. String quotes can be escaped with a backslash, but the\nbackslash remains in the string; for example, ``r"\\""`` is a valid\nstring literal consisting of two characters: a backslash and a double\nquote; ``r"\\"`` is not a valid string literal (even a raw string\ncannot end in an odd number of backslashes). Specifically, *a raw\nstring cannot end in a single backslash* (since the backslash would\nescape the following quote character). Note also that a single\nbackslash followed by a newline is interpreted as those two characters\nas part of the string, *not* as a line continuation.\n\nWhen an ``\'r\'`` or ``\'R\'`` prefix is used in conjunction with a\n``\'u\'`` or ``\'U\'`` prefix, then the ``\\uXXXX`` and ``\\UXXXXXXXX``\nescape sequences are processed while *all other backslashes are left\nin the string*. For example, the string literal ``ur"\\u0062\\n"``\nconsists of three Unicode characters: \'LATIN SMALL LETTER B\', \'REVERSE\nSOLIDUS\', and \'LATIN SMALL LETTER N\'. Backslashes can be escaped with\na preceding backslash; however, both remain in the string. As a\nresult, ``\\uXXXX`` escape sequences are only recognized when there are\nan odd number of backslashes.\n', + 'string-methods': u'\nString Methods\n**************\n\nBelow are listed the string methods which both 8-bit strings and\nUnicode objects support. Some of them are also available on\n``bytearray`` objects.\n\nIn addition, Python\'s strings support the sequence type methods\ndescribed in the *Sequence Types --- str, unicode, list, tuple,\nbytearray, buffer, xrange* section. To output formatted strings use\ntemplate strings or the ``%`` operator described in the *String\nFormatting Operations* section. Also, see the ``re`` module for string\nfunctions based on regular expressions.\n\nstr.capitalize()\n\n Return a copy of the string with its first character capitalized\n and the rest lowercased.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.center(width[, fillchar])\n\n Return centered in a string of length *width*. Padding is done\n using the specified *fillchar* (default is a space).\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.count(sub[, start[, end]])\n\n Return the number of non-overlapping occurrences of substring *sub*\n in the range [*start*, *end*]. Optional arguments *start* and\n *end* are interpreted as in slice notation.\n\nstr.decode([encoding[, errors]])\n\n Decodes the string using the codec registered for *encoding*.\n *encoding* defaults to the default string encoding. *errors* may\n be given to set a different error handling scheme. 
The default is\n ``\'strict\'``, meaning that encoding errors raise ``UnicodeError``.\n Other possible values are ``\'ignore\'``, ``\'replace\'`` and any other\n name registered via ``codecs.register_error()``, see section *Codec\n Base Classes*.\n\n New in version 2.2.\n\n Changed in version 2.3: Support for other error handling schemes\n added.\n\n Changed in version 2.7: Support for keyword arguments added.\n\nstr.encode([encoding[, errors]])\n\n Return an encoded version of the string. Default encoding is the\n current default string encoding. *errors* may be given to set a\n different error handling scheme. The default for *errors* is\n ``\'strict\'``, meaning that encoding errors raise a\n ``UnicodeError``. Other possible values are ``\'ignore\'``,\n ``\'replace\'``, ``\'xmlcharrefreplace\'``, ``\'backslashreplace\'`` and\n any other name registered via ``codecs.register_error()``, see\n section *Codec Base Classes*. For a list of possible encodings, see\n section *Standard Encodings*.\n\n New in version 2.0.\n\n Changed in version 2.3: Support for ``\'xmlcharrefreplace\'`` and\n ``\'backslashreplace\'`` and other error handling schemes added.\n\n Changed in version 2.7: Support for keyword arguments added.\n\nstr.endswith(suffix[, start[, end]])\n\n Return ``True`` if the string ends with the specified *suffix*,\n otherwise return ``False``. *suffix* can also be a tuple of\n suffixes to look for. With optional *start*, test beginning at\n that position. With optional *end*, stop comparing at that\n position.\n\n Changed in version 2.5: Accept tuples as *suffix*.\n\nstr.expandtabs([tabsize])\n\n Return a copy of the string where all tab characters are replaced\n by one or more spaces, depending on the current column and the\n given tab size. The column number is reset to zero after each\n newline occurring in the string. If *tabsize* is not given, a tab\n size of ``8`` characters is assumed. This doesn\'t understand other\n non-printing characters or escape sequences.\n\nstr.find(sub[, start[, end]])\n\n Return the lowest index in the string where substring *sub* is\n found, such that *sub* is contained in the slice ``s[start:end]``.\n Optional arguments *start* and *end* are interpreted as in slice\n notation. Return ``-1`` if *sub* is not found.\n\n Note: The ``find()`` method should be used only if you need to know the\n position of *sub*. To check if *sub* is a substring or not, use\n the ``in`` operator:\n\n >>> \'Py\' in \'Python\'\n True\n\nstr.format(*args, **kwargs)\n\n Perform a string formatting operation. The string on which this\n method is called can contain literal text or replacement fields\n delimited by braces ``{}``. Each replacement field contains either\n the numeric index of a positional argument, or the name of a\n keyword argument. 
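A doctest-style sketch of the replacement fields just described (the literal values are arbitrary; behaviour as in CPython 2.6+): a field may name a positional argument by index or a keyword argument by name.

   >>> '{0} is {adj}'.format('PyPy', adj='fast')
   'PyPy is fast'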
Returns a copy of the string where each\n replacement field is replaced with the string value of the\n corresponding argument.\n\n >>> "The sum of 1 + 2 is {0}".format(1+2)\n \'The sum of 1 + 2 is 3\'\n\n See *Format String Syntax* for a description of the various\n formatting options that can be specified in format strings.\n\n This method of string formatting is the new standard in Python 3.0,\n and should be preferred to the ``%`` formatting described in\n *String Formatting Operations* in new code.\n\n New in version 2.6.\n\nstr.index(sub[, start[, end]])\n\n Like ``find()``, but raise ``ValueError`` when the substring is not\n found.\n\nstr.isalnum()\n\n Return true if all characters in the string are alphanumeric and\n there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isalpha()\n\n Return true if all characters in the string are alphabetic and\n there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isdigit()\n\n Return true if all characters in the string are digits and there is\n at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.islower()\n\n Return true if all cased characters in the string are lowercase and\n there is at least one cased character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isspace()\n\n Return true if there are only whitespace characters in the string\n and there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.istitle()\n\n Return true if the string is a titlecased string and there is at\n least one character, for example uppercase characters may only\n follow uncased characters and lowercase characters only cased ones.\n Return false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isupper()\n\n Return true if all cased characters in the string are uppercase and\n there is at least one cased character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.join(iterable)\n\n Return a string which is the concatenation of the strings in the\n *iterable* *iterable*. The separator between elements is the\n string providing this method.\n\nstr.ljust(width[, fillchar])\n\n Return the string left justified in a string of length *width*.\n Padding is done using the specified *fillchar* (default is a\n space). The original string is returned if *width* is less than\n ``len(s)``.\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.lower()\n\n Return a copy of the string converted to lowercase.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.lstrip([chars])\n\n Return a copy of the string with leading characters removed. The\n *chars* argument is a string specifying the set of characters to be\n removed. If omitted or ``None``, the *chars* argument defaults to\n removing whitespace. The *chars* argument is not a prefix; rather,\n all combinations of its values are stripped:\n\n >>> \' spacious \'.lstrip()\n \'spacious \'\n >>> \'www.example.com\'.lstrip(\'cmowz.\')\n \'example.com\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.partition(sep)\n\n Split the string at the first occurrence of *sep*, and return a\n 3-tuple containing the part before the separator, the separator\n itself, and the part after the separator. 
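As an illustrative sketch of the 3-tuple just described (the sample strings are arbitrary), ``partition()`` splits at the first occurrence of the separator and ``rpartition()`` at the last:

   >>> 'a.b.c'.partition('.')
   ('a', '.', 'b.c')
   >>> 'a.b.c'.rpartition('.')
   ('a.b', '.', 'c')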
If the separator is not\n found, return a 3-tuple containing the string itself, followed by\n two empty strings.\n\n New in version 2.5.\n\nstr.replace(old, new[, count])\n\n Return a copy of the string with all occurrences of substring *old*\n replaced by *new*. If the optional argument *count* is given, only\n the first *count* occurrences are replaced.\n\nstr.rfind(sub[, start[, end]])\n\n Return the highest index in the string where substring *sub* is\n found, such that *sub* is contained within ``s[start:end]``.\n Optional arguments *start* and *end* are interpreted as in slice\n notation. Return ``-1`` on failure.\n\nstr.rindex(sub[, start[, end]])\n\n Like ``rfind()`` but raises ``ValueError`` when the substring *sub*\n is not found.\n\nstr.rjust(width[, fillchar])\n\n Return the string right justified in a string of length *width*.\n Padding is done using the specified *fillchar* (default is a\n space). The original string is returned if *width* is less than\n ``len(s)``.\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.rpartition(sep)\n\n Split the string at the last occurrence of *sep*, and return a\n 3-tuple containing the part before the separator, the separator\n itself, and the part after the separator. If the separator is not\n found, return a 3-tuple containing two empty strings, followed by\n the string itself.\n\n New in version 2.5.\n\nstr.rsplit([sep[, maxsplit]])\n\n Return a list of the words in the string, using *sep* as the\n delimiter string. If *maxsplit* is given, at most *maxsplit* splits\n are done, the *rightmost* ones. If *sep* is not specified or\n ``None``, any whitespace string is a separator. Except for\n splitting from the right, ``rsplit()`` behaves like ``split()``\n which is described in detail below.\n\n New in version 2.4.\n\nstr.rstrip([chars])\n\n Return a copy of the string with trailing characters removed. The\n *chars* argument is a string specifying the set of characters to be\n removed. If omitted or ``None``, the *chars* argument defaults to\n removing whitespace. The *chars* argument is not a suffix; rather,\n all combinations of its values are stripped:\n\n >>> \' spacious \'.rstrip()\n \' spacious\'\n >>> \'mississippi\'.rstrip(\'ipz\')\n \'mississ\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.split([sep[, maxsplit]])\n\n Return a list of the words in the string, using *sep* as the\n delimiter string. If *maxsplit* is given, at most *maxsplit*\n splits are done (thus, the list will have at most ``maxsplit+1``\n elements). If *maxsplit* is not specified, then there is no limit\n on the number of splits (all possible splits are made).\n\n If *sep* is given, consecutive delimiters are not grouped together\n and are deemed to delimit empty strings (for example,\n ``\'1,,2\'.split(\',\')`` returns ``[\'1\', \'\', \'2\']``). The *sep*\n argument may consist of multiple characters (for example,\n ``\'1<>2<>3\'.split(\'<>\')`` returns ``[\'1\', \'2\', \'3\']``). Splitting\n an empty string with a specified separator returns ``[\'\']``.\n\n If *sep* is not specified or is ``None``, a different splitting\n algorithm is applied: runs of consecutive whitespace are regarded\n as a single separator, and the result will contain no empty strings\n at the start or end if the string has leading or trailing\n whitespace. 
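A short illustration of the splitting behaviour just described (sample strings arbitrary): with an explicit separator, empty strings are kept, and ``rsplit()`` counts its *maxsplit* splits from the right:

   >>> 'a,b,,c'.split(',')
   ['a', 'b', '', 'c']
   >>> 'a,b,,c'.rsplit(',', 1)
   ['a,b,', 'c']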
Consequently, splitting an empty string or a string\n consisting of just whitespace with a ``None`` separator returns\n ``[]``.\n\n For example, ``\' 1 2 3 \'.split()`` returns ``[\'1\', \'2\', \'3\']``,\n and ``\' 1 2 3 \'.split(None, 1)`` returns ``[\'1\', \'2 3 \']``.\n\nstr.splitlines([keepends])\n\n Return a list of the lines in the string, breaking at line\n boundaries. Line breaks are not included in the resulting list\n unless *keepends* is given and true.\n\nstr.startswith(prefix[, start[, end]])\n\n Return ``True`` if string starts with the *prefix*, otherwise\n return ``False``. *prefix* can also be a tuple of prefixes to look\n for. With optional *start*, test string beginning at that\n position. With optional *end*, stop comparing string at that\n position.\n\n Changed in version 2.5: Accept tuples as *prefix*.\n\nstr.strip([chars])\n\n Return a copy of the string with the leading and trailing\n characters removed. The *chars* argument is a string specifying the\n set of characters to be removed. If omitted or ``None``, the\n *chars* argument defaults to removing whitespace. The *chars*\n argument is not a prefix or suffix; rather, all combinations of its\n values are stripped:\n\n >>> \' spacious \'.strip()\n \'spacious\'\n >>> \'www.example.com\'.strip(\'cmowz.\')\n \'example\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.swapcase()\n\n Return a copy of the string with uppercase characters converted to\n lowercase and vice versa.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.title()\n\n Return a titlecased version of the string where words start with an\n uppercase character and the remaining characters are lowercase.\n\n The algorithm uses a simple language-independent definition of a\n word as groups of consecutive letters. The definition works in\n many contexts but it means that apostrophes in contractions and\n possessives form word boundaries, which may not be the desired\n result:\n\n >>> "they\'re bill\'s friends from the UK".title()\n "They\'Re Bill\'S Friends From The Uk"\n\n A workaround for apostrophes can be constructed using regular\n expressions:\n\n >>> import re\n >>> def titlecase(s):\n return re.sub(r"[A-Za-z]+(\'[A-Za-z]+)?",\n lambda mo: mo.group(0)[0].upper() +\n mo.group(0)[1:].lower(),\n s)\n\n >>> titlecase("they\'re bill\'s friends.")\n "They\'re Bill\'s Friends."\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.translate(table[, deletechars])\n\n Return a copy of the string where all characters occurring in the\n optional argument *deletechars* are removed, and the remaining\n characters have been mapped through the given translation table,\n which must be a string of length 256.\n\n You can use the ``maketrans()`` helper function in the ``string``\n module to create a translation table. For string objects, set the\n *table* argument to ``None`` for translations that only delete\n characters:\n\n >>> \'read this short text\'.translate(None, \'aeiou\')\n \'rd ths shrt txt\'\n\n New in version 2.6: Support for a ``None`` *table* argument.\n\n For Unicode objects, the ``translate()`` method does not accept the\n optional *deletechars* argument. Instead, it returns a copy of the\n *s* where all characters have been mapped through the given\n translation table which must be a mapping of Unicode ordinals to\n Unicode ordinals, Unicode strings or ``None``. Unmapped characters\n are left untouched. 
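A minimal sketch of the Unicode ``translate()`` mapping just described (the mapping used here is an arbitrary example); mapped ordinals are replaced and unmapped characters are left untouched:

   >>> u'abc'.translate({ord(u'a'): u'A'})
   u'Abc'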
Characters mapped to ``None`` are deleted.\n Note, a more flexible approach is to create a custom character\n mapping codec using the ``codecs`` module (see ``encodings.cp1251``\n for an example).\n\nstr.upper()\n\n Return a copy of the string converted to uppercase.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.zfill(width)\n\n Return the numeric string left filled with zeros in a string of\n length *width*. A sign prefix is handled correctly. The original\n string is returned if *width* is less than ``len(s)``.\n\n New in version 2.2.2.\n\nThe following methods are present only on unicode objects:\n\nunicode.isnumeric()\n\n Return ``True`` if there are only numeric characters in S,\n ``False`` otherwise. Numeric characters include digit characters,\n and all characters that have the Unicode numeric value property,\n e.g. U+2155, VULGAR FRACTION ONE FIFTH.\n\nunicode.isdecimal()\n\n Return ``True`` if there are only decimal characters in S,\n ``False`` otherwise. Decimal characters include digit characters,\n and all characters that that can be used to form decimal-radix\n numbers, e.g. U+0660, ARABIC-INDIC DIGIT ZERO.\n', + 'strings': u'\nString literals\n***************\n\nString literals are described by the following lexical definitions:\n\n stringliteral ::= [stringprefix](shortstring | longstring)\n stringprefix ::= "r" | "u" | "ur" | "R" | "U" | "UR" | "Ur" | "uR"\n | "b" | "B" | "br" | "Br" | "bR" | "BR"\n shortstring ::= "\'" shortstringitem* "\'" | \'"\' shortstringitem* \'"\'\n longstring ::= "\'\'\'" longstringitem* "\'\'\'"\n | \'"""\' longstringitem* \'"""\'\n shortstringitem ::= shortstringchar | escapeseq\n longstringitem ::= longstringchar | escapeseq\n shortstringchar ::= \n longstringchar ::= \n escapeseq ::= "\\" \n\nOne syntactic restriction not indicated by these productions is that\nwhitespace is not allowed between the **stringprefix** and the rest of\nthe string literal. The source character set is defined by the\nencoding declaration; it is ASCII if no encoding declaration is given\nin the source file; see section *Encoding declarations*.\n\nIn plain English: String literals can be enclosed in matching single\nquotes (``\'``) or double quotes (``"``). They can also be enclosed in\nmatching groups of three single or double quotes (these are generally\nreferred to as *triple-quoted strings*). The backslash (``\\``)\ncharacter is used to escape characters that otherwise have a special\nmeaning, such as newline, backslash itself, or the quote character.\nString literals may optionally be prefixed with a letter ``\'r\'`` or\n``\'R\'``; such strings are called *raw strings* and use different rules\nfor interpreting backslash escape sequences. A prefix of ``\'u\'`` or\n``\'U\'`` makes the string a Unicode string. Unicode strings use the\nUnicode character set as defined by the Unicode Consortium and ISO\n10646. Some additional escape sequences, described below, are\navailable in Unicode strings. A prefix of ``\'b\'`` or ``\'B\'`` is\nignored in Python 2; it indicates that the literal should become a\nbytes literal in Python 3 (e.g. when code is automatically converted\nwith 2to3). A ``\'u\'`` or ``\'b\'`` prefix may be followed by an ``\'r\'``\nprefix.\n\nIn triple-quoted strings, unescaped newlines and quotes are allowed\n(and are retained), except that three unescaped quotes in a row\nterminate the string. (A "quote" is the character used to open the\nstring, i.e. 
either ``\'`` or ``"``.)\n\nUnless an ``\'r\'`` or ``\'R\'`` prefix is present, escape sequences in\nstrings are interpreted according to rules similar to those used by\nStandard C. The recognized escape sequences are:\n\n+-------------------+-----------------------------------+---------+\n| Escape Sequence | Meaning | Notes |\n+===================+===================================+=========+\n| ``\\newline`` | Ignored | |\n+-------------------+-----------------------------------+---------+\n| ``\\\\`` | Backslash (``\\``) | |\n+-------------------+-----------------------------------+---------+\n| ``\\\'`` | Single quote (``\'``) | |\n+-------------------+-----------------------------------+---------+\n| ``\\"`` | Double quote (``"``) | |\n+-------------------+-----------------------------------+---------+\n| ``\\a`` | ASCII Bell (BEL) | |\n+-------------------+-----------------------------------+---------+\n| ``\\b`` | ASCII Backspace (BS) | |\n+-------------------+-----------------------------------+---------+\n| ``\\f`` | ASCII Formfeed (FF) | |\n+-------------------+-----------------------------------+---------+\n| ``\\n`` | ASCII Linefeed (LF) | |\n+-------------------+-----------------------------------+---------+\n| ``\\N{name}`` | Character named *name* in the | |\n| | Unicode database (Unicode only) | |\n+-------------------+-----------------------------------+---------+\n| ``\\r`` | ASCII Carriage Return (CR) | |\n+-------------------+-----------------------------------+---------+\n| ``\\t`` | ASCII Horizontal Tab (TAB) | |\n+-------------------+-----------------------------------+---------+\n| ``\\uxxxx`` | Character with 16-bit hex value | (1) |\n| | *xxxx* (Unicode only) | |\n+-------------------+-----------------------------------+---------+\n| ``\\Uxxxxxxxx`` | Character with 32-bit hex value | (2) |\n| | *xxxxxxxx* (Unicode only) | |\n+-------------------+-----------------------------------+---------+\n| ``\\v`` | ASCII Vertical Tab (VT) | |\n+-------------------+-----------------------------------+---------+\n| ``\\ooo`` | Character with octal value *ooo* | (3,5) |\n+-------------------+-----------------------------------+---------+\n| ``\\xhh`` | Character with hex value *hh* | (4,5) |\n+-------------------+-----------------------------------+---------+\n\nNotes:\n\n1. Individual code units which form parts of a surrogate pair can be\n encoded using this escape sequence.\n\n2. Any Unicode character can be encoded this way, but characters\n outside the Basic Multilingual Plane (BMP) will be encoded using a\n surrogate pair if Python is compiled to use 16-bit code units (the\n default). Individual code units which form parts of a surrogate\n pair can be encoded using this escape sequence.\n\n3. As in Standard C, up to three octal digits are accepted.\n\n4. Unlike in Standard C, exactly two hex digits are required.\n\n5. In a string literal, hexadecimal and octal escapes denote the byte\n with the given value; it is not necessary that the byte encodes a\n character in the source character set. In a Unicode literal, these\n escapes denote a Unicode character with the given value.\n\nUnlike Standard C, all unrecognized escape sequences are left in the\nstring unchanged, i.e., *the backslash is left in the string*. (This\nbehavior is useful when debugging: if an escape sequence is mistyped,\nthe resulting output is more easily recognized as broken.) 
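A small doctest-style illustration of the rule just stated (the escapes chosen are arbitrary examples): a recognized escape becomes a single character, while an unrecognized one keeps its backslash in the string:

   >>> len('\n')
   1
   >>> '\q', len('\q')
   ('\\q', 2)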
It is also\nimportant to note that the escape sequences marked as "(Unicode only)"\nin the table above fall into the category of unrecognized escapes for\nnon-Unicode string literals.\n\nWhen an ``\'r\'`` or ``\'R\'`` prefix is present, a character following a\nbackslash is included in the string without change, and *all\nbackslashes are left in the string*. For example, the string literal\n``r"\\n"`` consists of two characters: a backslash and a lowercase\n``\'n\'``. String quotes can be escaped with a backslash, but the\nbackslash remains in the string; for example, ``r"\\""`` is a valid\nstring literal consisting of two characters: a backslash and a double\nquote; ``r"\\"`` is not a valid string literal (even a raw string\ncannot end in an odd number of backslashes). Specifically, *a raw\nstring cannot end in a single backslash* (since the backslash would\nescape the following quote character). Note also that a single\nbackslash followed by a newline is interpreted as those two characters\nas part of the string, *not* as a line continuation.\n\nWhen an ``\'r\'`` or ``\'R\'`` prefix is used in conjunction with a\n``\'u\'`` or ``\'U\'`` prefix, then the ``\\uXXXX`` and ``\\UXXXXXXXX``\nescape sequences are processed while *all other backslashes are left\nin the string*. For example, the string literal ``ur"\\u0062\\n"``\nconsists of three Unicode characters: \'LATIN SMALL LETTER B\', \'REVERSE\nSOLIDUS\', and \'LATIN SMALL LETTER N\'. Backslashes can be escaped with\na preceding backslash; however, both remain in the string. As a\nresult, ``\\uXXXX`` escape sequences are only recognized when there are\nan odd number of backslashes.\n', 'subscriptions': u'\nSubscriptions\n*************\n\nA subscription selects an item of a sequence (string, tuple or list)\nor mapping (dictionary) object:\n\n subscription ::= primary "[" expression_list "]"\n\nThe primary must evaluate to an object of a sequence or mapping type.\n\nIf the primary is a mapping, the expression list must evaluate to an\nobject whose value is one of the keys of the mapping, and the\nsubscription selects the value in the mapping that corresponds to that\nkey. (The expression list is a tuple except if it has exactly one\nitem.)\n\nIf the primary is a sequence, the expression (list) must evaluate to a\nplain integer. If this value is negative, the length of the sequence\nis added to it (so that, e.g., ``x[-1]`` selects the last item of\n``x``.) The resulting value must be a nonnegative integer less than\nthe number of items in the sequence, and the subscription selects the\nitem whose index is that value (counting from zero).\n\nA string\'s items are characters. A character is not a separate data\ntype but a string of exactly one character.\n', 'truth': u"\nTruth Value Testing\n*******************\n\nAny object can be tested for truth value, for use in an ``if`` or\n``while`` condition or as operand of the Boolean operations below. The\nfollowing values are considered false:\n\n* ``None``\n\n* ``False``\n\n* zero of any numeric type, for example, ``0``, ``0L``, ``0.0``,\n ``0j``.\n\n* any empty sequence, for example, ``''``, ``()``, ``[]``.\n\n* any empty mapping, for example, ``{}``.\n\n* instances of user-defined classes, if the class defines a\n ``__nonzero__()`` or ``__len__()`` method, when that method returns\n the integer zero or ``bool`` value ``False``. 
[1]\n\nAll other values are considered true --- so objects of many types are\nalways true.\n\nOperations and built-in functions that have a Boolean result always\nreturn ``0`` or ``False`` for false and ``1`` or ``True`` for true,\nunless otherwise stated. (Important exception: the Boolean operations\n``or`` and ``and`` always return one of their operands.)\n", 'try': u'\nThe ``try`` statement\n*********************\n\nThe ``try`` statement specifies exception handlers and/or cleanup code\nfor a group of statements:\n\n try_stmt ::= try1_stmt | try2_stmt\n try1_stmt ::= "try" ":" suite\n ("except" [expression [("as" | ",") target]] ":" suite)+\n ["else" ":" suite]\n ["finally" ":" suite]\n try2_stmt ::= "try" ":" suite\n "finally" ":" suite\n\nChanged in version 2.5: In previous versions of Python,\n``try``...``except``...``finally`` did not work. ``try``...``except``\nhad to be nested in ``try``...``finally``.\n\nThe ``except`` clause(s) specify one or more exception handlers. When\nno exception occurs in the ``try`` clause, no exception handler is\nexecuted. When an exception occurs in the ``try`` suite, a search for\nan exception handler is started. This search inspects the except\nclauses in turn until one is found that matches the exception. An\nexpression-less except clause, if present, must be last; it matches\nany exception. For an except clause with an expression, that\nexpression is evaluated, and the clause matches the exception if the\nresulting object is "compatible" with the exception. An object is\ncompatible with an exception if it is the class or a base class of the\nexception object, a tuple containing an item compatible with the\nexception, or, in the (deprecated) case of string exceptions, is the\nraised string itself (note that the object identities must match, i.e.\nit must be the same string object, not just a string with the same\nvalue).\n\nIf no except clause matches the exception, the search for an exception\nhandler continues in the surrounding code and on the invocation stack.\n[1]\n\nIf the evaluation of an expression in the header of an except clause\nraises an exception, the original search for a handler is canceled and\na search starts for the new exception in the surrounding code and on\nthe call stack (it is treated as if the entire ``try`` statement\nraised the exception).\n\nWhen a matching except clause is found, the exception is assigned to\nthe target specified in that except clause, if present, and the except\nclause\'s suite is executed. All except clauses must have an\nexecutable block. When the end of this block is reached, execution\ncontinues normally after the entire try statement. (This means that\nif two nested handlers exist for the same exception, and the exception\noccurs in the try clause of the inner handler, the outer handler will\nnot handle the exception.)\n\nBefore an except clause\'s suite is executed, details about the\nexception are assigned to three variables in the ``sys`` module:\n``sys.exc_type`` receives the object identifying the exception;\n``sys.exc_value`` receives the exception\'s parameter;\n``sys.exc_traceback`` receives a traceback object (see section *The\nstandard type hierarchy*) identifying the point in the program where\nthe exception occurred. These details are also available through the\n``sys.exc_info()`` function, which returns a tuple ``(exc_type,\nexc_value, exc_traceback)``. Use of the corresponding variables is\ndeprecated in favor of this function, since their use is unsafe in a\nthreaded program. 
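A brief sketch of ``sys.exc_info()`` inside an except clause, as described above (the exception and variable names are arbitrary; CPython 2.x output shown):

   >>> import sys
   >>> try:
   ...     1 / 0
   ... except ZeroDivisionError:
   ...     exc_type, exc_value, exc_tb = sys.exc_info()
   ...     print exc_type.__name__, exc_value
   ...
   ZeroDivisionError integer division or modulo by zero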
As of Python 1.5, the variables are restored to\ntheir previous values (before the call) when returning from a function\nthat handled an exception.\n\nThe optional ``else`` clause is executed if and when control flows off\nthe end of the ``try`` clause. [2] Exceptions in the ``else`` clause\nare not handled by the preceding ``except`` clauses.\n\nIf ``finally`` is present, it specifies a \'cleanup\' handler. The\n``try`` clause is executed, including any ``except`` and ``else``\nclauses. If an exception occurs in any of the clauses and is not\nhandled, the exception is temporarily saved. The ``finally`` clause is\nexecuted. If there is a saved exception, it is re-raised at the end\nof the ``finally`` clause. If the ``finally`` clause raises another\nexception or executes a ``return`` or ``break`` statement, the saved\nexception is lost. The exception information is not available to the\nprogram during execution of the ``finally`` clause.\n\nWhen a ``return``, ``break`` or ``continue`` statement is executed in\nthe ``try`` suite of a ``try``...``finally`` statement, the\n``finally`` clause is also executed \'on the way out.\' A ``continue``\nstatement is illegal in the ``finally`` clause. (The reason is a\nproblem with the current implementation --- this restriction may be\nlifted in the future).\n\nAdditional information on exceptions can be found in section\n*Exceptions*, and information on using the ``raise`` statement to\ngenerate exceptions may be found in section *The raise statement*.\n', - 'types': u'\nThe standard type hierarchy\n***************************\n\nBelow is a list of the types that are built into Python. Extension\nmodules (written in C, Java, or other languages, depending on the\nimplementation) can define additional types. Future versions of\nPython may add types to the type hierarchy (e.g., rational numbers,\nefficiently stored arrays of integers, etc.).\n\nSome of the type descriptions below contain a paragraph listing\n\'special attributes.\' These are attributes that provide access to the\nimplementation and are not intended for general use. Their definition\nmay change in the future.\n\nNone\n This type has a single value. There is a single object with this\n value. This object is accessed through the built-in name ``None``.\n It is used to signify the absence of a value in many situations,\n e.g., it is returned from functions that don\'t explicitly return\n anything. Its truth value is false.\n\nNotImplemented\n This type has a single value. There is a single object with this\n value. This object is accessed through the built-in name\n ``NotImplemented``. Numeric methods and rich comparison methods may\n return this value if they do not implement the operation for the\n operands provided. (The interpreter will then try the reflected\n operation, or some other fallback, depending on the operator.) Its\n truth value is true.\n\nEllipsis\n This type has a single value. There is a single object with this\n value. This object is accessed through the built-in name\n ``Ellipsis``. It is used to indicate the presence of the ``...``\n syntax in a slice. Its truth value is true.\n\n``numbers.Number``\n These are created by numeric literals and returned as results by\n arithmetic operators and arithmetic built-in functions. 
Numeric\n objects are immutable; once created their value never changes.\n Python numbers are of course strongly related to mathematical\n numbers, but subject to the limitations of numerical representation\n in computers.\n\n Python distinguishes between integers, floating point numbers, and\n complex numbers:\n\n ``numbers.Integral``\n These represent elements from the mathematical set of integers\n (positive and negative).\n\n There are three types of integers:\n\n Plain integers\n These represent numbers in the range -2147483648 through\n 2147483647. (The range may be larger on machines with a\n larger natural word size, but not smaller.) When the result\n of an operation would fall outside this range, the result is\n normally returned as a long integer (in some cases, the\n exception ``OverflowError`` is raised instead). For the\n purpose of shift and mask operations, integers are assumed to\n have a binary, 2\'s complement notation using 32 or more bits,\n and hiding no bits from the user (i.e., all 4294967296\n different bit patterns correspond to different values).\n\n Long integers\n These represent numbers in an unlimited range, subject to\n available (virtual) memory only. For the purpose of shift\n and mask operations, a binary representation is assumed, and\n negative numbers are represented in a variant of 2\'s\n complement which gives the illusion of an infinite string of\n sign bits extending to the left.\n\n Booleans\n These represent the truth values False and True. The two\n objects representing the values False and True are the only\n Boolean objects. The Boolean type is a subtype of plain\n integers, and Boolean values behave like the values 0 and 1,\n respectively, in almost all contexts, the exception being\n that when converted to a string, the strings ``"False"`` or\n ``"True"`` are returned, respectively.\n\n The rules for integer representation are intended to give the\n most meaningful interpretation of shift and mask operations\n involving negative integers and the least surprises when\n switching between the plain and long integer domains. Any\n operation, if it yields a result in the plain integer domain,\n will yield the same result in the long integer domain or when\n using mixed operands. The switch between domains is transparent\n to the programmer.\n\n ``numbers.Real`` (``float``)\n These represent machine-level double precision floating point\n numbers. You are at the mercy of the underlying machine\n architecture (and C or Java implementation) for the accepted\n range and handling of overflow. Python does not support single-\n precision floating point numbers; the savings in processor and\n memory usage that are usually the reason for using these is\n dwarfed by the overhead of using objects in Python, so there is\n no reason to complicate the language with two kinds of floating\n point numbers.\n\n ``numbers.Complex``\n These represent complex numbers as a pair of machine-level\n double precision floating point numbers. The same caveats apply\n as for floating point numbers. The real and imaginary parts of a\n complex number ``z`` can be retrieved through the read-only\n attributes ``z.real`` and ``z.imag``.\n\nSequences\n These represent finite ordered sets indexed by non-negative\n numbers. The built-in function ``len()`` returns the number of\n items of a sequence. When the length of a sequence is *n*, the\n index set contains the numbers 0, 1, ..., *n*-1. 
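As a quick illustration of the indexing rule just stated (the sample list is arbitrary): for a sequence of length *n*, the valid indices run from 0 through *n*-1:

   >>> a = ['x', 'y', 'z']
   >>> len(a)
   3
   >>> a[0], a[len(a) - 1]
   ('x', 'z')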
Item *i* of\n sequence *a* is selected by ``a[i]``.\n\n Sequences also support slicing: ``a[i:j]`` selects all items with\n index *k* such that *i* ``<=`` *k* ``<`` *j*. When used as an\n expression, a slice is a sequence of the same type. This implies\n that the index set is renumbered so that it starts at 0.\n\n Some sequences also support "extended slicing" with a third "step"\n parameter: ``a[i:j:k]`` selects all items of *a* with index *x*\n where ``x = i + n*k``, *n* ``>=`` ``0`` and *i* ``<=`` *x* ``<``\n *j*.\n\n Sequences are distinguished according to their mutability:\n\n Immutable sequences\n An object of an immutable sequence type cannot change once it is\n created. (If the object contains references to other objects,\n these other objects may be mutable and may be changed; however,\n the collection of objects directly referenced by an immutable\n object cannot change.)\n\n The following types are immutable sequences:\n\n Strings\n The items of a string are characters. There is no separate\n character type; a character is represented by a string of one\n item. Characters represent (at least) 8-bit bytes. The\n built-in functions ``chr()`` and ``ord()`` convert between\n characters and nonnegative integers representing the byte\n values. Bytes with the values 0-127 usually represent the\n corresponding ASCII values, but the interpretation of values\n is up to the program. The string data type is also used to\n represent arrays of bytes, e.g., to hold data read from a\n file.\n\n (On systems whose native character set is not ASCII, strings\n may use EBCDIC in their internal representation, provided the\n functions ``chr()`` and ``ord()`` implement a mapping between\n ASCII and EBCDIC, and string comparison preserves the ASCII\n order. Or perhaps someone can propose a better rule?)\n\n Unicode\n The items of a Unicode object are Unicode code units. A\n Unicode code unit is represented by a Unicode object of one\n item and can hold either a 16-bit or 32-bit value\n representing a Unicode ordinal (the maximum value for the\n ordinal is given in ``sys.maxunicode``, and depends on how\n Python is configured at compile time). Surrogate pairs may\n be present in the Unicode object, and will be reported as two\n separate items. The built-in functions ``unichr()`` and\n ``ord()`` convert between code units and nonnegative integers\n representing the Unicode ordinals as defined in the Unicode\n Standard 3.0. Conversion from and to other encodings are\n possible through the Unicode method ``encode()`` and the\n built-in function ``unicode()``.\n\n Tuples\n The items of a tuple are arbitrary Python objects. Tuples of\n two or more items are formed by comma-separated lists of\n expressions. A tuple of one item (a \'singleton\') can be\n formed by affixing a comma to an expression (an expression by\n itself does not create a tuple, since parentheses must be\n usable for grouping of expressions). An empty tuple can be\n formed by an empty pair of parentheses.\n\n Mutable sequences\n Mutable sequences can be changed after they are created. The\n subscription and slicing notations can be used as the target of\n assignment and ``del`` (delete) statements.\n\n There are currently two intrinsic mutable sequence types:\n\n Lists\n The items of a list are arbitrary Python objects. Lists are\n formed by placing a comma-separated list of expressions in\n square brackets. 
(Note that there are no special cases needed\n to form lists of length 0 or 1.)\n\n Byte Arrays\n A bytearray object is a mutable array. They are created by\n the built-in ``bytearray()`` constructor. Aside from being\n mutable (and hence unhashable), byte arrays otherwise provide\n the same interface and functionality as immutable bytes\n objects.\n\n The extension module ``array`` provides an additional example of\n a mutable sequence type.\n\nSet types\n These represent unordered, finite sets of unique, immutable\n objects. As such, they cannot be indexed by any subscript. However,\n they can be iterated over, and the built-in function ``len()``\n returns the number of items in a set. Common uses for sets are fast\n membership testing, removing duplicates from a sequence, and\n computing mathematical operations such as intersection, union,\n difference, and symmetric difference.\n\n For set elements, the same immutability rules apply as for\n dictionary keys. Note that numeric types obey the normal rules for\n numeric comparison: if two numbers compare equal (e.g., ``1`` and\n ``1.0``), only one of them can be contained in a set.\n\n There are currently two intrinsic set types:\n\n Sets\n These represent a mutable set. They are created by the built-in\n ``set()`` constructor and can be modified afterwards by several\n methods, such as ``add()``.\n\n Frozen sets\n These represent an immutable set. They are created by the\n built-in ``frozenset()`` constructor. As a frozenset is\n immutable and *hashable*, it can be used again as an element of\n another set, or as a dictionary key.\n\nMappings\n These represent finite sets of objects indexed by arbitrary index\n sets. The subscript notation ``a[k]`` selects the item indexed by\n ``k`` from the mapping ``a``; this can be used in expressions and\n as the target of assignments or ``del`` statements. The built-in\n function ``len()`` returns the number of items in a mapping.\n\n There is currently a single intrinsic mapping type:\n\n Dictionaries\n These represent finite sets of objects indexed by nearly\n arbitrary values. The only types of values not acceptable as\n keys are values containing lists or dictionaries or other\n mutable types that are compared by value rather than by object\n identity, the reason being that the efficient implementation of\n dictionaries requires a key\'s hash value to remain constant.\n Numeric types used for keys obey the normal rules for numeric\n comparison: if two numbers compare equal (e.g., ``1`` and\n ``1.0``) then they can be used interchangeably to index the same\n dictionary entry.\n\n Dictionaries are mutable; they can be created by the ``{...}``\n notation (see section *Dictionary displays*).\n\n The extension modules ``dbm``, ``gdbm``, and ``bsddb`` provide\n additional examples of mapping types.\n\nCallable types\n These are the types to which the function call operation (see\n section *Calls*) can be applied:\n\n User-defined functions\n A user-defined function object is created by a function\n definition (see section *Function definitions*). 
It should be\n called with an argument list containing the same number of items\n as the function\'s formal parameter list.\n\n Special attributes:\n\n +-------------------------+---------------------------------+-------------+\n | Attribute | Meaning | |\n +=========================+=================================+=============+\n | ``func_doc`` | The function\'s documentation | Writable |\n | | string, or ``None`` if | |\n | | unavailable | |\n +-------------------------+---------------------------------+-------------+\n | ``__doc__`` | Another way of spelling | Writable |\n | | ``func_doc`` | |\n +-------------------------+---------------------------------+-------------+\n | ``func_name`` | The function\'s name | Writable |\n +-------------------------+---------------------------------+-------------+\n | ``__name__`` | Another way of spelling | Writable |\n | | ``func_name`` | |\n +-------------------------+---------------------------------+-------------+\n | ``__module__`` | The name of the module the | Writable |\n | | function was defined in, or | |\n | | ``None`` if unavailable. | |\n +-------------------------+---------------------------------+-------------+\n | ``func_defaults`` | A tuple containing default | Writable |\n | | argument values for those | |\n | | arguments that have defaults, | |\n | | or ``None`` if no arguments | |\n | | have a default value | |\n +-------------------------+---------------------------------+-------------+\n | ``func_code`` | The code object representing | Writable |\n | | the compiled function body. | |\n +-------------------------+---------------------------------+-------------+\n | ``func_globals`` | A reference to the dictionary | Read-only |\n | | that holds the function\'s | |\n | | global variables --- the global | |\n | | namespace of the module in | |\n | | which the function was defined. | |\n +-------------------------+---------------------------------+-------------+\n | ``func_dict`` | The namespace supporting | Writable |\n | | arbitrary function attributes. | |\n +-------------------------+---------------------------------+-------------+\n | ``func_closure`` | ``None`` or a tuple of cells | Read-only |\n | | that contain bindings for the | |\n | | function\'s free variables. | |\n +-------------------------+---------------------------------+-------------+\n\n Most of the attributes labelled "Writable" check the type of the\n assigned value.\n\n Changed in version 2.4: ``func_name`` is now writable.\n\n Function objects also support getting and setting arbitrary\n attributes, which can be used, for example, to attach metadata\n to functions. Regular attribute dot-notation is used to get and\n set such attributes. *Note that the current implementation only\n supports function attributes on user-defined functions. 
Function\n attributes on built-in functions may be supported in the\n future.*\n\n Additional information about a function\'s definition can be\n retrieved from its code object; see the description of internal\n types below.\n\n User-defined methods\n A user-defined method object combines a class, a class instance\n (or ``None``) and any callable object (normally a user-defined\n function).\n\n Special read-only attributes: ``im_self`` is the class instance\n object, ``im_func`` is the function object; ``im_class`` is the\n class of ``im_self`` for bound methods or the class that asked\n for the method for unbound methods; ``__doc__`` is the method\'s\n documentation (same as ``im_func.__doc__``); ``__name__`` is the\n method name (same as ``im_func.__name__``); ``__module__`` is\n the name of the module the method was defined in, or ``None`` if\n unavailable.\n\n Changed in version 2.2: ``im_self`` used to refer to the class\n that defined the method.\n\n Changed in version 2.6: For 3.0 forward-compatibility,\n ``im_func`` is also available as ``__func__``, and ``im_self``\n as ``__self__``.\n\n Methods also support accessing (but not setting) the arbitrary\n function attributes on the underlying function object.\n\n User-defined method objects may be created when getting an\n attribute of a class (perhaps via an instance of that class), if\n that attribute is a user-defined function object, an unbound\n user-defined method object, or a class method object. When the\n attribute is a user-defined method object, a new method object\n is only created if the class from which it is being retrieved is\n the same as, or a derived class of, the class stored in the\n original method object; otherwise, the original method object is\n used as it is.\n\n When a user-defined method object is created by retrieving a\n user-defined function object from a class, its ``im_self``\n attribute is ``None`` and the method object is said to be\n unbound. When one is created by retrieving a user-defined\n function object from a class via one of its instances, its\n ``im_self`` attribute is the instance, and the method object is\n said to be bound. In either case, the new method\'s ``im_class``\n attribute is the class from which the retrieval takes place, and\n its ``im_func`` attribute is the original function object.\n\n When a user-defined method object is created by retrieving\n another method object from a class or instance, the behaviour is\n the same as for a function object, except that the ``im_func``\n attribute of the new instance is not the original method object\n but its ``im_func`` attribute.\n\n When a user-defined method object is created by retrieving a\n class method object from a class or instance, its ``im_self``\n attribute is the class itself (the same as the ``im_class``\n attribute), and its ``im_func`` attribute is the function object\n underlying the class method.\n\n When an unbound user-defined method object is called, the\n underlying function (``im_func``) is called, with the\n restriction that the first argument must be an instance of the\n proper class (``im_class``) or of a derived class thereof.\n\n When a bound user-defined method object is called, the\n underlying function (``im_func``) is called, inserting the class\n instance (``im_self``) in front of the argument list. 
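A doctest-style sketch of the bound/unbound distinction just described (the class and values are arbitrary examples):

   >>> class C(object):
   ...     def f(self, x):
   ...         return x
   ...
   >>> c = C()
   >>> c.f.im_self is c        # bound: retrieved via the instance
   True
   >>> C.f.im_self is None     # unbound: retrieved via the class
   True
   >>> c.f(1) == C.f(c, 1)     # the instance is inserted as the first argument
   True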
For\n instance, when ``C`` is a class which contains a definition for\n a function ``f()``, and ``x`` is an instance of ``C``, calling\n ``x.f(1)`` is equivalent to calling ``C.f(x, 1)``.\n\n When a user-defined method object is derived from a class method\n object, the "class instance" stored in ``im_self`` will actually\n be the class itself, so that calling either ``x.f(1)`` or\n ``C.f(1)`` is equivalent to calling ``f(C,1)`` where ``f`` is\n the underlying function.\n\n Note that the transformation from function object to (unbound or\n bound) method object happens each time the attribute is\n retrieved from the class or instance. In some cases, a fruitful\n optimization is to assign the attribute to a local variable and\n call that local variable. Also notice that this transformation\n only happens for user-defined functions; other callable objects\n (and all non-callable objects) are retrieved without\n transformation. It is also important to note that user-defined\n functions which are attributes of a class instance are not\n converted to bound methods; this *only* happens when the\n function is an attribute of the class.\n\n Generator functions\n A function or method which uses the ``yield`` statement (see\n section *The yield statement*) is called a *generator function*.\n Such a function, when called, always returns an iterator object\n which can be used to execute the body of the function: calling\n the iterator\'s ``next()`` method will cause the function to\n execute until it provides a value using the ``yield`` statement.\n When the function executes a ``return`` statement or falls off\n the end, a ``StopIteration`` exception is raised and the\n iterator will have reached the end of the set of values to be\n returned.\n\n Built-in functions\n A built-in function object is a wrapper around a C function.\n Examples of built-in functions are ``len()`` and ``math.sin()``\n (``math`` is a standard built-in module). The number and type of\n the arguments are determined by the C function. Special read-\n only attributes: ``__doc__`` is the function\'s documentation\n string, or ``None`` if unavailable; ``__name__`` is the\n function\'s name; ``__self__`` is set to ``None`` (but see the\n next item); ``__module__`` is the name of the module the\n function was defined in or ``None`` if unavailable.\n\n Built-in methods\n This is really a different disguise of a built-in function, this\n time containing an object passed to the C function as an\n implicit extra argument. An example of a built-in method is\n ``alist.append()``, assuming *alist* is a list object. In this\n case, the special read-only attribute ``__self__`` is set to the\n object denoted by *list*.\n\n Class Types\n Class types, or "new-style classes," are callable. These\n objects normally act as factories for new instances of\n themselves, but variations are possible for class types that\n override ``__new__()``. The arguments of the call are passed to\n ``__new__()`` and, in the typical case, to ``__init__()`` to\n initialize the new instance.\n\n Classic Classes\n Class objects are described below. When a class object is\n called, a new class instance (also described below) is created\n and returned. This implies a call to the class\'s ``__init__()``\n method if it has one. Any arguments are passed on to the\n ``__init__()`` method. If there is no ``__init__()`` method,\n the class must be called without arguments.\n\n Class instances\n Class instances are described below. 
Class instances are\n callable only when the class has a ``__call__()`` method;\n ``x(arguments)`` is a shorthand for ``x.__call__(arguments)``.\n\nModules\n Modules are imported by the ``import`` statement (see section *The\n import statement*). A module object has a namespace implemented by\n a dictionary object (this is the dictionary referenced by the\n func_globals attribute of functions defined in the module).\n Attribute references are translated to lookups in this dictionary,\n e.g., ``m.x`` is equivalent to ``m.__dict__["x"]``. A module object\n does not contain the code object used to initialize the module\n (since it isn\'t needed once the initialization is done).\n\n Attribute assignment updates the module\'s namespace dictionary,\n e.g., ``m.x = 1`` is equivalent to ``m.__dict__["x"] = 1``.\n\n Special read-only attribute: ``__dict__`` is the module\'s namespace\n as a dictionary object.\n\n Predefined (writable) attributes: ``__name__`` is the module\'s\n name; ``__doc__`` is the module\'s documentation string, or ``None``\n if unavailable; ``__file__`` is the pathname of the file from which\n the module was loaded, if it was loaded from a file. The\n ``__file__`` attribute is not present for C modules that are\n statically linked into the interpreter; for extension modules\n loaded dynamically from a shared library, it is the pathname of the\n shared library file.\n\nClasses\n Both class types (new-style classes) and class objects (old-\n style/classic classes) are typically created by class definitions\n (see section *Class definitions*). A class has a namespace\n implemented by a dictionary object. Class attribute references are\n translated to lookups in this dictionary, e.g., ``C.x`` is\n translated to ``C.__dict__["x"]`` (although for new-style classes\n in particular there are a number of hooks which allow for other\n means of locating attributes). When the attribute name is not found\n there, the attribute search continues in the base classes. For\n old-style classes, the search is depth-first, left-to-right in the\n order of occurrence in the base class list. New-style classes use\n the more complex C3 method resolution order which behaves correctly\n even in the presence of \'diamond\' inheritance structures where\n there are multiple inheritance paths leading back to a common\n ancestor. Additional details on the C3 MRO used by new-style\n classes can be found in the documentation accompanying the 2.3\n release at http://www.python.org/download/releases/2.3/mro/.\n\n When a class attribute reference (for class ``C``, say) would yield\n a user-defined function object or an unbound user-defined method\n object whose associated class is either ``C`` or one of its base\n classes, it is transformed into an unbound user-defined method\n object whose ``im_class`` attribute is ``C``. When it would yield a\n class method object, it is transformed into a bound user-defined\n method object whose ``im_class`` and ``im_self`` attributes are\n both ``C``. 
When it would yield a static method object, it is\n transformed into the object wrapped by the static method object.\n See section *Implementing Descriptors* for another way in which\n attributes retrieved from a class may differ from those actually\n contained in its ``__dict__`` (note that only new-style classes\n support descriptors).\n\n Class attribute assignments update the class\'s dictionary, never\n the dictionary of a base class.\n\n A class object can be called (see above) to yield a class instance\n (see below).\n\n Special attributes: ``__name__`` is the class name; ``__module__``\n is the module name in which the class was defined; ``__dict__`` is\n the dictionary containing the class\'s namespace; ``__bases__`` is a\n tuple (possibly empty or a singleton) containing the base classes,\n in the order of their occurrence in the base class list;\n ``__doc__`` is the class\'s documentation string, or None if\n undefined.\n\nClass instances\n A class instance is created by calling a class object (see above).\n A class instance has a namespace implemented as a dictionary which\n is the first place in which attribute references are searched.\n When an attribute is not found there, and the instance\'s class has\n an attribute by that name, the search continues with the class\n attributes. If a class attribute is found that is a user-defined\n function object or an unbound user-defined method object whose\n associated class is the class (call it ``C``) of the instance for\n which the attribute reference was initiated or one of its bases, it\n is transformed into a bound user-defined method object whose\n ``im_class`` attribute is ``C`` and whose ``im_self`` attribute is\n the instance. Static method and class method objects are also\n transformed, as if they had been retrieved from class ``C``; see\n above under "Classes". See section *Implementing Descriptors* for\n another way in which attributes of a class retrieved via its\n instances may differ from the objects actually stored in the\n class\'s ``__dict__``. If no class attribute is found, and the\n object\'s class has a ``__getattr__()`` method, that is called to\n satisfy the lookup.\n\n Attribute assignments and deletions update the instance\'s\n dictionary, never a class\'s dictionary. If the class has a\n ``__setattr__()`` or ``__delattr__()`` method, this is called\n instead of updating the instance dictionary directly.\n\n Class instances can pretend to be numbers, sequences, or mappings\n if they have methods with certain special names. See section\n *Special method names*.\n\n Special attributes: ``__dict__`` is the attribute dictionary;\n ``__class__`` is the instance\'s class.\n\nFiles\n A file object represents an open file. File objects are created by\n the ``open()`` built-in function, and also by ``os.popen()``,\n ``os.fdopen()``, and the ``makefile()`` method of socket objects\n (and perhaps by other functions or methods provided by extension\n modules). The objects ``sys.stdin``, ``sys.stdout`` and\n ``sys.stderr`` are initialized to file objects corresponding to the\n interpreter\'s standard input, output and error streams. See *File\n Objects* for complete documentation of file objects.\n\nInternal types\n A few types used internally by the interpreter are exposed to the\n user. Their definitions may change with future versions of the\n interpreter, but they are mentioned here for completeness.\n\n Code objects\n Code objects represent *byte-compiled* executable Python code,\n or *bytecode*. 
The difference between a code object and a\n function object is that the function object contains an explicit\n reference to the function\'s globals (the module in which it was\n defined), while a code object contains no context; also the\n default argument values are stored in the function object, not\n in the code object (because they represent values calculated at\n run-time). Unlike function objects, code objects are immutable\n and contain no references (directly or indirectly) to mutable\n objects.\n\n Special read-only attributes: ``co_name`` gives the function\n name; ``co_argcount`` is the number of positional arguments\n (including arguments with default values); ``co_nlocals`` is the\n number of local variables used by the function (including\n arguments); ``co_varnames`` is a tuple containing the names of\n the local variables (starting with the argument names);\n ``co_cellvars`` is a tuple containing the names of local\n variables that are referenced by nested functions;\n ``co_freevars`` is a tuple containing the names of free\n variables; ``co_code`` is a string representing the sequence of\n bytecode instructions; ``co_consts`` is a tuple containing the\n literals used by the bytecode; ``co_names`` is a tuple\n containing the names used by the bytecode; ``co_filename`` is\n the filename from which the code was compiled;\n ``co_firstlineno`` is the first line number of the function;\n ``co_lnotab`` is a string encoding the mapping from bytecode\n offsets to line numbers (for details see the source code of the\n interpreter); ``co_stacksize`` is the required stack size\n (including local variables); ``co_flags`` is an integer encoding\n a number of flags for the interpreter.\n\n The following flag bits are defined for ``co_flags``: bit\n ``0x04`` is set if the function uses the ``*arguments`` syntax\n to accept an arbitrary number of positional arguments; bit\n ``0x08`` is set if the function uses the ``**keywords`` syntax\n to accept arbitrary keyword arguments; bit ``0x20`` is set if\n the function is a generator.\n\n Future feature declarations (``from __future__ import\n division``) also use bits in ``co_flags`` to indicate whether a\n code object was compiled with a particular feature enabled: bit\n ``0x2000`` is set if the function was compiled with future\n division enabled; bits ``0x10`` and ``0x1000`` were used in\n earlier versions of Python.\n\n Other bits in ``co_flags`` are reserved for internal use.\n\n If a code object represents a function, the first item in\n ``co_consts`` is the documentation string of the function, or\n ``None`` if undefined.\n\n Frame objects\n Frame objects represent execution frames. 
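To make the code-object attributes listed above concrete, a small Python 2.7 sketch; the function ``g`` is invented for illustration:

    >>> def g(a, b=2, *rest):
    ...     c = a + b
    ...     return c
    ...
    >>> co = g.func_code
    >>> co.co_name, co.co_argcount, co.co_nlocals
    ('g', 2, 4)
    >>> co.co_varnames                   # arguments first, then other locals
    ('a', 'b', 'rest', 'c')
    >>> bool(co.co_flags & 0x04)         # uses the *arguments syntax
    True
    >>> bool(co.co_flags & 0x20)         # not a generator
    False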
They may occur in\n traceback objects (see below).\n\n Special read-only attributes: ``f_back`` is to the previous\n stack frame (towards the caller), or ``None`` if this is the\n bottom stack frame; ``f_code`` is the code object being executed\n in this frame; ``f_locals`` is the dictionary used to look up\n local variables; ``f_globals`` is used for global variables;\n ``f_builtins`` is used for built-in (intrinsic) names;\n ``f_restricted`` is a flag indicating whether the function is\n executing in restricted execution mode; ``f_lasti`` gives the\n precise instruction (this is an index into the bytecode string\n of the code object).\n\n Special writable attributes: ``f_trace``, if not ``None``, is a\n function called at the start of each source code line (this is\n used by the debugger); ``f_exc_type``, ``f_exc_value``,\n ``f_exc_traceback`` represent the last exception raised in the\n parent frame provided another exception was ever raised in the\n current frame (in all other cases they are None); ``f_lineno``\n is the current line number of the frame --- writing to this from\n within a trace function jumps to the given line (only for the\n bottom-most frame). A debugger can implement a Jump command\n (aka Set Next Statement) by writing to f_lineno.\n\n Traceback objects\n Traceback objects represent a stack trace of an exception. A\n traceback object is created when an exception occurs. When the\n search for an exception handler unwinds the execution stack, at\n each unwound level a traceback object is inserted in front of\n the current traceback. When an exception handler is entered,\n the stack trace is made available to the program. (See section\n *The try statement*.) It is accessible as ``sys.exc_traceback``,\n and also as the third item of the tuple returned by\n ``sys.exc_info()``. The latter is the preferred interface,\n since it works correctly when the program is using multiple\n threads. When the program contains no suitable handler, the\n stack trace is written (nicely formatted) to the standard error\n stream; if the interpreter is interactive, it is also made\n available to the user as ``sys.last_traceback``.\n\n Special read-only attributes: ``tb_next`` is the next level in\n the stack trace (towards the frame where the exception\n occurred), or ``None`` if there is no next level; ``tb_frame``\n points to the execution frame of the current level;\n ``tb_lineno`` gives the line number where the exception\n occurred; ``tb_lasti`` indicates the precise instruction. The\n line number and last instruction in the traceback may differ\n from the line number of its frame object if the exception\n occurred in a ``try`` statement with no matching except clause\n or with a finally clause.\n\n Slice objects\n Slice objects are used to represent slices when *extended slice\n syntax* is used. This is a slice using two colons, or multiple\n slices or ellipses separated by commas, e.g., ``a[i:j:step]``,\n ``a[i:j, k:l]``, or ``a[..., i:j]``. They are also created by\n the built-in ``slice()`` function.\n\n Special read-only attributes: ``start`` is the lower bound;\n ``stop`` is the upper bound; ``step`` is the step value; each is\n ``None`` if omitted. These attributes can have any type.\n\n Slice objects support one method:\n\n slice.indices(self, length)\n\n This method takes a single integer argument *length* and\n computes information about the extended slice that the slice\n object would describe if applied to a sequence of *length*\n items. 
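The traceback attributes described above can be reached through ``sys.exc_info()``, the preferred interface. A rough interactive sketch (the exact output depends on the interpreter session):

    >>> import sys
    >>> try:
    ...     1 / 0
    ... except ZeroDivisionError:
    ...     tb = sys.exc_info()[2]
    ...
    >>> tb.tb_lineno                     # line inside the try block that raised
    2
    >>> tb.tb_next is None               # no deeper frame was involved
    True
    >>> tb.tb_frame.f_code.co_name
    '<module>'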
It returns a tuple of three integers; respectively\n these are the *start* and *stop* indices and the *step* or\n stride length of the slice. Missing or out-of-bounds indices\n are handled in a manner consistent with regular slices.\n\n New in version 2.3.\n\n Static method objects\n Static method objects provide a way of defeating the\n transformation of function objects to method objects described\n above. A static method object is a wrapper around any other\n object, usually a user-defined method object. When a static\n method object is retrieved from a class or a class instance, the\n object actually returned is the wrapped object, which is not\n subject to any further transformation. Static method objects are\n not themselves callable, although the objects they wrap usually\n are. Static method objects are created by the built-in\n ``staticmethod()`` constructor.\n\n Class method objects\n A class method object, like a static method object, is a wrapper\n around another object that alters the way in which that object\n is retrieved from classes and class instances. The behaviour of\n class method objects upon such retrieval is described above,\n under "User-defined methods". Class method objects are created\n by the built-in ``classmethod()`` constructor.\n', + 'types': u'\nThe standard type hierarchy\n***************************\n\nBelow is a list of the types that are built into Python. Extension\nmodules (written in C, Java, or other languages, depending on the\nimplementation) can define additional types. Future versions of\nPython may add types to the type hierarchy (e.g., rational numbers,\nefficiently stored arrays of integers, etc.).\n\nSome of the type descriptions below contain a paragraph listing\n\'special attributes.\' These are attributes that provide access to the\nimplementation and are not intended for general use. Their definition\nmay change in the future.\n\nNone\n This type has a single value. There is a single object with this\n value. This object is accessed through the built-in name ``None``.\n It is used to signify the absence of a value in many situations,\n e.g., it is returned from functions that don\'t explicitly return\n anything. Its truth value is false.\n\nNotImplemented\n This type has a single value. There is a single object with this\n value. This object is accessed through the built-in name\n ``NotImplemented``. Numeric methods and rich comparison methods may\n return this value if they do not implement the operation for the\n operands provided. (The interpreter will then try the reflected\n operation, or some other fallback, depending on the operator.) Its\n truth value is true.\n\nEllipsis\n This type has a single value. There is a single object with this\n value. This object is accessed through the built-in name\n ``Ellipsis``. It is used to indicate the presence of the ``...``\n syntax in a slice. Its truth value is true.\n\n``numbers.Number``\n These are created by numeric literals and returned as results by\n arithmetic operators and arithmetic built-in functions. 
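The ``slice.indices()`` method described a little earlier is easiest to understand by example; the values below are chosen arbitrarily:

    >>> s = slice(2, None, 3)
    >>> s.indices(10)                    # (start, stop, step) for length 10
    (2, 10, 3)
    >>> range(10)[s] == range(*s.indices(10))
    True
    >>> slice(-100, 100).indices(5)      # out-of-bounds indices are clipped
    (0, 5, 1)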
Numeric\n objects are immutable; once created their value never changes.\n Python numbers are of course strongly related to mathematical\n numbers, but subject to the limitations of numerical representation\n in computers.\n\n Python distinguishes between integers, floating point numbers, and\n complex numbers:\n\n ``numbers.Integral``\n These represent elements from the mathematical set of integers\n (positive and negative).\n\n There are three types of integers:\n\n Plain integers\n These represent numbers in the range -2147483648 through\n 2147483647. (The range may be larger on machines with a\n larger natural word size, but not smaller.) When the result\n of an operation would fall outside this range, the result is\n normally returned as a long integer (in some cases, the\n exception ``OverflowError`` is raised instead). For the\n purpose of shift and mask operations, integers are assumed to\n have a binary, 2\'s complement notation using 32 or more bits,\n and hiding no bits from the user (i.e., all 4294967296\n different bit patterns correspond to different values).\n\n Long integers\n These represent numbers in an unlimited range, subject to\n available (virtual) memory only. For the purpose of shift\n and mask operations, a binary representation is assumed, and\n negative numbers are represented in a variant of 2\'s\n complement which gives the illusion of an infinite string of\n sign bits extending to the left.\n\n Booleans\n These represent the truth values False and True. The two\n objects representing the values False and True are the only\n Boolean objects. The Boolean type is a subtype of plain\n integers, and Boolean values behave like the values 0 and 1,\n respectively, in almost all contexts, the exception being\n that when converted to a string, the strings ``"False"`` or\n ``"True"`` are returned, respectively.\n\n The rules for integer representation are intended to give the\n most meaningful interpretation of shift and mask operations\n involving negative integers and the least surprises when\n switching between the plain and long integer domains. Any\n operation, if it yields a result in the plain integer domain,\n will yield the same result in the long integer domain or when\n using mixed operands. The switch between domains is transparent\n to the programmer.\n\n ``numbers.Real`` (``float``)\n These represent machine-level double precision floating point\n numbers. You are at the mercy of the underlying machine\n architecture (and C or Java implementation) for the accepted\n range and handling of overflow. Python does not support single-\n precision floating point numbers; the savings in processor and\n memory usage that are usually the reason for using these is\n dwarfed by the overhead of using objects in Python, so there is\n no reason to complicate the language with two kinds of floating\n point numbers.\n\n ``numbers.Complex``\n These represent complex numbers as a pair of machine-level\n double precision floating point numbers. The same caveats apply\n as for floating point numbers. The real and imaginary parts of a\n complex number ``z`` can be retrieved through the read-only\n attributes ``z.real`` and ``z.imag``.\n\nSequences\n These represent finite ordered sets indexed by non-negative\n numbers. The built-in function ``len()`` returns the number of\n items of a sequence. When the length of a sequence is *n*, the\n index set contains the numbers 0, 1, ..., *n*-1. 
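A brief sketch of the plain-integer/long-integer switch and of the Boolean subtype described above; the exact value of ``sys.maxint`` depends on the platform, so only the types are shown:

    >>> import sys
    >>> type(sys.maxint)                 # largest plain integer
    <type 'int'>
    >>> type(sys.maxint + 1)             # overflow switches to a long integer
    <type 'long'>
    >>> isinstance(True, int)            # bool is a subtype of plain integers
    True
    >>> True + True
    2
    >>> str(True)                        # but the string form is "True", not "1"
    'True'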
Item *i* of\n sequence *a* is selected by ``a[i]``.\n\n Sequences also support slicing: ``a[i:j]`` selects all items with\n index *k* such that *i* ``<=`` *k* ``<`` *j*. When used as an\n expression, a slice is a sequence of the same type. This implies\n that the index set is renumbered so that it starts at 0.\n\n Some sequences also support "extended slicing" with a third "step"\n parameter: ``a[i:j:k]`` selects all items of *a* with index *x*\n where ``x = i + n*k``, *n* ``>=`` ``0`` and *i* ``<=`` *x* ``<``\n *j*.\n\n Sequences are distinguished according to their mutability:\n\n Immutable sequences\n An object of an immutable sequence type cannot change once it is\n created. (If the object contains references to other objects,\n these other objects may be mutable and may be changed; however,\n the collection of objects directly referenced by an immutable\n object cannot change.)\n\n The following types are immutable sequences:\n\n Strings\n The items of a string are characters. There is no separate\n character type; a character is represented by a string of one\n item. Characters represent (at least) 8-bit bytes. The\n built-in functions ``chr()`` and ``ord()`` convert between\n characters and nonnegative integers representing the byte\n values. Bytes with the values 0-127 usually represent the\n corresponding ASCII values, but the interpretation of values\n is up to the program. The string data type is also used to\n represent arrays of bytes, e.g., to hold data read from a\n file.\n\n (On systems whose native character set is not ASCII, strings\n may use EBCDIC in their internal representation, provided the\n functions ``chr()`` and ``ord()`` implement a mapping between\n ASCII and EBCDIC, and string comparison preserves the ASCII\n order. Or perhaps someone can propose a better rule?)\n\n Unicode\n The items of a Unicode object are Unicode code units. A\n Unicode code unit is represented by a Unicode object of one\n item and can hold either a 16-bit or 32-bit value\n representing a Unicode ordinal (the maximum value for the\n ordinal is given in ``sys.maxunicode``, and depends on how\n Python is configured at compile time). Surrogate pairs may\n be present in the Unicode object, and will be reported as two\n separate items. The built-in functions ``unichr()`` and\n ``ord()`` convert between code units and nonnegative integers\n representing the Unicode ordinals as defined in the Unicode\n Standard 3.0. Conversion from and to other encodings are\n possible through the Unicode method ``encode()`` and the\n built-in function ``unicode()``.\n\n Tuples\n The items of a tuple are arbitrary Python objects. Tuples of\n two or more items are formed by comma-separated lists of\n expressions. A tuple of one item (a \'singleton\') can be\n formed by affixing a comma to an expression (an expression by\n itself does not create a tuple, since parentheses must be\n usable for grouping of expressions). An empty tuple can be\n formed by an empty pair of parentheses.\n\n Mutable sequences\n Mutable sequences can be changed after they are created. The\n subscription and slicing notations can be used as the target of\n assignment and ``del`` (delete) statements.\n\n There are currently two intrinsic mutable sequence types:\n\n Lists\n The items of a list are arbitrary Python objects. Lists are\n formed by placing a comma-separated list of expressions in\n square brackets. 
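The slicing rules quoted above, in a short sketch with an arbitrary list ``a``:

    >>> a = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
    >>> a[2:5]                           # items with index 2 <= k < 5
    [2, 3, 4]
    >>> a[1:8:3]                         # extended slicing: x = 1 + n*3, x < 8
    [1, 4, 7]
    >>> a[2:5][0]                        # the slice is renumbered from 0
    2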
(Note that there are no special cases needed\n to form lists of length 0 or 1.)\n\n Byte Arrays\n A bytearray object is a mutable array. They are created by\n the built-in ``bytearray()`` constructor. Aside from being\n mutable (and hence unhashable), byte arrays otherwise provide\n the same interface and functionality as immutable bytes\n objects.\n\n The extension module ``array`` provides an additional example of\n a mutable sequence type.\n\nSet types\n These represent unordered, finite sets of unique, immutable\n objects. As such, they cannot be indexed by any subscript. However,\n they can be iterated over, and the built-in function ``len()``\n returns the number of items in a set. Common uses for sets are fast\n membership testing, removing duplicates from a sequence, and\n computing mathematical operations such as intersection, union,\n difference, and symmetric difference.\n\n For set elements, the same immutability rules apply as for\n dictionary keys. Note that numeric types obey the normal rules for\n numeric comparison: if two numbers compare equal (e.g., ``1`` and\n ``1.0``), only one of them can be contained in a set.\n\n There are currently two intrinsic set types:\n\n Sets\n These represent a mutable set. They are created by the built-in\n ``set()`` constructor and can be modified afterwards by several\n methods, such as ``add()``.\n\n Frozen sets\n These represent an immutable set. They are created by the\n built-in ``frozenset()`` constructor. As a frozenset is\n immutable and *hashable*, it can be used again as an element of\n another set, or as a dictionary key.\n\nMappings\n These represent finite sets of objects indexed by arbitrary index\n sets. The subscript notation ``a[k]`` selects the item indexed by\n ``k`` from the mapping ``a``; this can be used in expressions and\n as the target of assignments or ``del`` statements. The built-in\n function ``len()`` returns the number of items in a mapping.\n\n There is currently a single intrinsic mapping type:\n\n Dictionaries\n These represent finite sets of objects indexed by nearly\n arbitrary values. The only types of values not acceptable as\n keys are values containing lists or dictionaries or other\n mutable types that are compared by value rather than by object\n identity, the reason being that the efficient implementation of\n dictionaries requires a key\'s hash value to remain constant.\n Numeric types used for keys obey the normal rules for numeric\n comparison: if two numbers compare equal (e.g., ``1`` and\n ``1.0``) then they can be used interchangeably to index the same\n dictionary entry.\n\n Dictionaries are mutable; they can be created by the ``{...}``\n notation (see section *Dictionary displays*).\n\n The extension modules ``dbm``, ``gdbm``, and ``bsddb`` provide\n additional examples of mapping types.\n\nCallable types\n These are the types to which the function call operation (see\n section *Calls*) can be applied:\n\n User-defined functions\n A user-defined function object is created by a function\n definition (see section *Function definitions*). 
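A minimal sketch of the set types described above, including the rule that elements comparing equal collapse to a single member:

    >>> s = set([1, 2, 2, 3])
    >>> s.add(1.0)                       # 1 == 1.0, so no new element is added
    >>> sorted(s)
    [1, 2, 3]
    >>> fs = frozenset(s)
    >>> {fs: 'ok'}[fs]                   # frozensets are hashable dictionary keys
    'ok'
    >>> s & set([2, 3, 4])
    set([2, 3])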
It should be\n called with an argument list containing the same number of items\n as the function\'s formal parameter list.\n\n Special attributes:\n\n +-------------------------+---------------------------------+-------------+\n | Attribute | Meaning | |\n +=========================+=================================+=============+\n | ``func_doc`` | The function\'s documentation | Writable |\n | | string, or ``None`` if | |\n | | unavailable | |\n +-------------------------+---------------------------------+-------------+\n | ``__doc__`` | Another way of spelling | Writable |\n | | ``func_doc`` | |\n +-------------------------+---------------------------------+-------------+\n | ``func_name`` | The function\'s name | Writable |\n +-------------------------+---------------------------------+-------------+\n | ``__name__`` | Another way of spelling | Writable |\n | | ``func_name`` | |\n +-------------------------+---------------------------------+-------------+\n | ``__module__`` | The name of the module the | Writable |\n | | function was defined in, or | |\n | | ``None`` if unavailable. | |\n +-------------------------+---------------------------------+-------------+\n | ``func_defaults`` | A tuple containing default | Writable |\n | | argument values for those | |\n | | arguments that have defaults, | |\n | | or ``None`` if no arguments | |\n | | have a default value | |\n +-------------------------+---------------------------------+-------------+\n | ``func_code`` | The code object representing | Writable |\n | | the compiled function body. | |\n +-------------------------+---------------------------------+-------------+\n | ``func_globals`` | A reference to the dictionary | Read-only |\n | | that holds the function\'s | |\n | | global variables --- the global | |\n | | namespace of the module in | |\n | | which the function was defined. | |\n +-------------------------+---------------------------------+-------------+\n | ``func_dict`` | The namespace supporting | Writable |\n | | arbitrary function attributes. | |\n +-------------------------+---------------------------------+-------------+\n | ``func_closure`` | ``None`` or a tuple of cells | Read-only |\n | | that contain bindings for the | |\n | | function\'s free variables. | |\n +-------------------------+---------------------------------+-------------+\n\n Most of the attributes labelled "Writable" check the type of the\n assigned value.\n\n Changed in version 2.4: ``func_name`` is now writable.\n\n Function objects also support getting and setting arbitrary\n attributes, which can be used, for example, to attach metadata\n to functions. Regular attribute dot-notation is used to get and\n set such attributes. *Note that the current implementation only\n supports function attributes on user-defined functions. 
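The special function attributes tabulated above, shown on a small invented function ``f``:

    >>> def f(a, b=10):
    ...     "add two numbers"
    ...     return a + b
    ...
    >>> f.func_name, f.__name__
    ('f', 'f')
    >>> f.func_defaults
    (10,)
    >>> f.func_code.co_argcount
    2
    >>> f.secret = 42                    # arbitrary attributes land in func_dict
    >>> f.func_dict
    {'secret': 42}
    >>> f.func_globals is globals()
    True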
Function\n attributes on built-in functions may be supported in the\n future.*\n\n Additional information about a function\'s definition can be\n retrieved from its code object; see the description of internal\n types below.\n\n User-defined methods\n A user-defined method object combines a class, a class instance\n (or ``None``) and any callable object (normally a user-defined\n function).\n\n Special read-only attributes: ``im_self`` is the class instance\n object, ``im_func`` is the function object; ``im_class`` is the\n class of ``im_self`` for bound methods or the class that asked\n for the method for unbound methods; ``__doc__`` is the method\'s\n documentation (same as ``im_func.__doc__``); ``__name__`` is the\n method name (same as ``im_func.__name__``); ``__module__`` is\n the name of the module the method was defined in, or ``None`` if\n unavailable.\n\n Changed in version 2.2: ``im_self`` used to refer to the class\n that defined the method.\n\n Changed in version 2.6: For 3.0 forward-compatibility,\n ``im_func`` is also available as ``__func__``, and ``im_self``\n as ``__self__``.\n\n Methods also support accessing (but not setting) the arbitrary\n function attributes on the underlying function object.\n\n User-defined method objects may be created when getting an\n attribute of a class (perhaps via an instance of that class), if\n that attribute is a user-defined function object, an unbound\n user-defined method object, or a class method object. When the\n attribute is a user-defined method object, a new method object\n is only created if the class from which it is being retrieved is\n the same as, or a derived class of, the class stored in the\n original method object; otherwise, the original method object is\n used as it is.\n\n When a user-defined method object is created by retrieving a\n user-defined function object from a class, its ``im_self``\n attribute is ``None`` and the method object is said to be\n unbound. When one is created by retrieving a user-defined\n function object from a class via one of its instances, its\n ``im_self`` attribute is the instance, and the method object is\n said to be bound. In either case, the new method\'s ``im_class``\n attribute is the class from which the retrieval takes place, and\n its ``im_func`` attribute is the original function object.\n\n When a user-defined method object is created by retrieving\n another method object from a class or instance, the behaviour is\n the same as for a function object, except that the ``im_func``\n attribute of the new instance is not the original method object\n but its ``im_func`` attribute.\n\n When a user-defined method object is created by retrieving a\n class method object from a class or instance, its ``im_self``\n attribute is the class itself (the same as the ``im_class``\n attribute), and its ``im_func`` attribute is the function object\n underlying the class method.\n\n When an unbound user-defined method object is called, the\n underlying function (``im_func``) is called, with the\n restriction that the first argument must be an instance of the\n proper class (``im_class``) or of a derived class thereof.\n\n When a bound user-defined method object is called, the\n underlying function (``im_func``) is called, inserting the class\n instance (``im_self``) in front of the argument list. 
For\n instance, when ``C`` is a class which contains a definition for\n a function ``f()``, and ``x`` is an instance of ``C``, calling\n ``x.f(1)`` is equivalent to calling ``C.f(x, 1)``.\n\n When a user-defined method object is derived from a class method\n object, the "class instance" stored in ``im_self`` will actually\n be the class itself, so that calling either ``x.f(1)`` or\n ``C.f(1)`` is equivalent to calling ``f(C,1)`` where ``f`` is\n the underlying function.\n\n Note that the transformation from function object to (unbound or\n bound) method object happens each time the attribute is\n retrieved from the class or instance. In some cases, a fruitful\n optimization is to assign the attribute to a local variable and\n call that local variable. Also notice that this transformation\n only happens for user-defined functions; other callable objects\n (and all non-callable objects) are retrieved without\n transformation. It is also important to note that user-defined\n functions which are attributes of a class instance are not\n converted to bound methods; this *only* happens when the\n function is an attribute of the class.\n\n Generator functions\n A function or method which uses the ``yield`` statement (see\n section *The yield statement*) is called a *generator function*.\n Such a function, when called, always returns an iterator object\n which can be used to execute the body of the function: calling\n the iterator\'s ``next()`` method will cause the function to\n execute until it provides a value using the ``yield`` statement.\n When the function executes a ``return`` statement or falls off\n the end, a ``StopIteration`` exception is raised and the\n iterator will have reached the end of the set of values to be\n returned.\n\n Built-in functions\n A built-in function object is a wrapper around a C function.\n Examples of built-in functions are ``len()`` and ``math.sin()``\n (``math`` is a standard built-in module). The number and type of\n the arguments are determined by the C function. Special read-\n only attributes: ``__doc__`` is the function\'s documentation\n string, or ``None`` if unavailable; ``__name__`` is the\n function\'s name; ``__self__`` is set to ``None`` (but see the\n next item); ``__module__`` is the name of the module the\n function was defined in or ``None`` if unavailable.\n\n Built-in methods\n This is really a different disguise of a built-in function, this\n time containing an object passed to the C function as an\n implicit extra argument. An example of a built-in method is\n ``alist.append()``, assuming *alist* is a list object. In this\n case, the special read-only attribute ``__self__`` is set to the\n object denoted by *alist*.\n\n Class Types\n Class types, or "new-style classes," are callable. These\n objects normally act as factories for new instances of\n themselves, but variations are possible for class types that\n override ``__new__()``. The arguments of the call are passed to\n ``__new__()`` and, in the typical case, to ``__init__()`` to\n initialize the new instance.\n\n Classic Classes\n Class objects are described below. When a class object is\n called, a new class instance (also described below) is created\n and returned. This implies a call to the class\'s ``__init__()``\n method if it has one. Any arguments are passed on to the\n ``__init__()`` method. If there is no ``__init__()`` method,\n the class must be called without arguments.\n\n Class instances\n Class instances are described below. 
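The generator-function behaviour described above, in a minimal sketch:

    >>> def countdown(n):
    ...     while n > 0:
    ...         yield n
    ...         n -= 1
    ...
    >>> it = countdown(3)
    >>> it.next()
    3
    >>> it.next()
    2
    >>> list(it)                         # exhausts the iterator
    [1]
    >>> it.next()
    Traceback (most recent call last):
      ...
    StopIteration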
Class instances are\n callable only when the class has a ``__call__()`` method;\n ``x(arguments)`` is a shorthand for ``x.__call__(arguments)``.\n\nModules\n Modules are imported by the ``import`` statement (see section *The\n import statement*). A module object has a namespace implemented by\n a dictionary object (this is the dictionary referenced by the\n func_globals attribute of functions defined in the module).\n Attribute references are translated to lookups in this dictionary,\n e.g., ``m.x`` is equivalent to ``m.__dict__["x"]``. A module object\n does not contain the code object used to initialize the module\n (since it isn\'t needed once the initialization is done).\n\n Attribute assignment updates the module\'s namespace dictionary,\n e.g., ``m.x = 1`` is equivalent to ``m.__dict__["x"] = 1``.\n\n Special read-only attribute: ``__dict__`` is the module\'s namespace\n as a dictionary object.\n\n **CPython implementation detail:** Because of the way CPython\n clears module dictionaries, the module dictionary will be cleared\n when the module falls out of scope even if the dictionary still has\n live references. To avoid this, copy the dictionary or keep the\n module around while using its dictionary directly.\n\n Predefined (writable) attributes: ``__name__`` is the module\'s\n name; ``__doc__`` is the module\'s documentation string, or ``None``\n if unavailable; ``__file__`` is the pathname of the file from which\n the module was loaded, if it was loaded from a file. The\n ``__file__`` attribute is not present for C modules that are\n statically linked into the interpreter; for extension modules\n loaded dynamically from a shared library, it is the pathname of the\n shared library file.\n\nClasses\n Both class types (new-style classes) and class objects (old-\n style/classic classes) are typically created by class definitions\n (see section *Class definitions*). A class has a namespace\n implemented by a dictionary object. Class attribute references are\n translated to lookups in this dictionary, e.g., ``C.x`` is\n translated to ``C.__dict__["x"]`` (although for new-style classes\n in particular there are a number of hooks which allow for other\n means of locating attributes). When the attribute name is not found\n there, the attribute search continues in the base classes. For\n old-style classes, the search is depth-first, left-to-right in the\n order of occurrence in the base class list. New-style classes use\n the more complex C3 method resolution order which behaves correctly\n even in the presence of \'diamond\' inheritance structures where\n there are multiple inheritance paths leading back to a common\n ancestor. Additional details on the C3 MRO used by new-style\n classes can be found in the documentation accompanying the 2.3\n release at http://www.python.org/download/releases/2.3/mro/.\n\n When a class attribute reference (for class ``C``, say) would yield\n a user-defined function object or an unbound user-defined method\n object whose associated class is either ``C`` or one of its base\n classes, it is transformed into an unbound user-defined method\n object whose ``im_class`` attribute is ``C``. When it would yield a\n class method object, it is transformed into a bound user-defined\n method object whose ``im_class`` and ``im_self`` attributes are\n both ``C``. 
When it would yield a static method object, it is\n transformed into the object wrapped by the static method object.\n See section *Implementing Descriptors* for another way in which\n attributes retrieved from a class may differ from those actually\n contained in its ``__dict__`` (note that only new-style classes\n support descriptors).\n\n Class attribute assignments update the class\'s dictionary, never\n the dictionary of a base class.\n\n A class object can be called (see above) to yield a class instance\n (see below).\n\n Special attributes: ``__name__`` is the class name; ``__module__``\n is the module name in which the class was defined; ``__dict__`` is\n the dictionary containing the class\'s namespace; ``__bases__`` is a\n tuple (possibly empty or a singleton) containing the base classes,\n in the order of their occurrence in the base class list;\n ``__doc__`` is the class\'s documentation string, or None if\n undefined.\n\nClass instances\n A class instance is created by calling a class object (see above).\n A class instance has a namespace implemented as a dictionary which\n is the first place in which attribute references are searched.\n When an attribute is not found there, and the instance\'s class has\n an attribute by that name, the search continues with the class\n attributes. If a class attribute is found that is a user-defined\n function object or an unbound user-defined method object whose\n associated class is the class (call it ``C``) of the instance for\n which the attribute reference was initiated or one of its bases, it\n is transformed into a bound user-defined method object whose\n ``im_class`` attribute is ``C`` and whose ``im_self`` attribute is\n the instance. Static method and class method objects are also\n transformed, as if they had been retrieved from class ``C``; see\n above under "Classes". See section *Implementing Descriptors* for\n another way in which attributes of a class retrieved via its\n instances may differ from the objects actually stored in the\n class\'s ``__dict__``. If no class attribute is found, and the\n object\'s class has a ``__getattr__()`` method, that is called to\n satisfy the lookup.\n\n Attribute assignments and deletions update the instance\'s\n dictionary, never a class\'s dictionary. If the class has a\n ``__setattr__()`` or ``__delattr__()`` method, this is called\n instead of updating the instance dictionary directly.\n\n Class instances can pretend to be numbers, sequences, or mappings\n if they have methods with certain special names. See section\n *Special method names*.\n\n Special attributes: ``__dict__`` is the attribute dictionary;\n ``__class__`` is the instance\'s class.\n\nFiles\n A file object represents an open file. File objects are created by\n the ``open()`` built-in function, and also by ``os.popen()``,\n ``os.fdopen()``, and the ``makefile()`` method of socket objects\n (and perhaps by other functions or methods provided by extension\n modules). The objects ``sys.stdin``, ``sys.stdout`` and\n ``sys.stderr`` are initialized to file objects corresponding to the\n interpreter\'s standard input, output and error streams. See *File\n Objects* for complete documentation of file objects.\n\nInternal types\n A few types used internally by the interpreter are exposed to the\n user. Their definitions may change with future versions of the\n interpreter, but they are mentioned here for completeness.\n\n Code objects\n Code objects represent *byte-compiled* executable Python code,\n or *bytecode*. 
The difference between a code object and a\n function object is that the function object contains an explicit\n reference to the function\'s globals (the module in which it was\n defined), while a code object contains no context; also the\n default argument values are stored in the function object, not\n in the code object (because they represent values calculated at\n run-time). Unlike function objects, code objects are immutable\n and contain no references (directly or indirectly) to mutable\n objects.\n\n Special read-only attributes: ``co_name`` gives the function\n name; ``co_argcount`` is the number of positional arguments\n (including arguments with default values); ``co_nlocals`` is the\n number of local variables used by the function (including\n arguments); ``co_varnames`` is a tuple containing the names of\n the local variables (starting with the argument names);\n ``co_cellvars`` is a tuple containing the names of local\n variables that are referenced by nested functions;\n ``co_freevars`` is a tuple containing the names of free\n variables; ``co_code`` is a string representing the sequence of\n bytecode instructions; ``co_consts`` is a tuple containing the\n literals used by the bytecode; ``co_names`` is a tuple\n containing the names used by the bytecode; ``co_filename`` is\n the filename from which the code was compiled;\n ``co_firstlineno`` is the first line number of the function;\n ``co_lnotab`` is a string encoding the mapping from bytecode\n offsets to line numbers (for details see the source code of the\n interpreter); ``co_stacksize`` is the required stack size\n (including local variables); ``co_flags`` is an integer encoding\n a number of flags for the interpreter.\n\n The following flag bits are defined for ``co_flags``: bit\n ``0x04`` is set if the function uses the ``*arguments`` syntax\n to accept an arbitrary number of positional arguments; bit\n ``0x08`` is set if the function uses the ``**keywords`` syntax\n to accept arbitrary keyword arguments; bit ``0x20`` is set if\n the function is a generator.\n\n Future feature declarations (``from __future__ import\n division``) also use bits in ``co_flags`` to indicate whether a\n code object was compiled with a particular feature enabled: bit\n ``0x2000`` is set if the function was compiled with future\n division enabled; bits ``0x10`` and ``0x1000`` were used in\n earlier versions of Python.\n\n Other bits in ``co_flags`` are reserved for internal use.\n\n If a code object represents a function, the first item in\n ``co_consts`` is the documentation string of the function, or\n ``None`` if undefined.\n\n Frame objects\n Frame objects represent execution frames. 
They may occur in\n traceback objects (see below).\n\n Special read-only attributes: ``f_back`` is to the previous\n stack frame (towards the caller), or ``None`` if this is the\n bottom stack frame; ``f_code`` is the code object being executed\n in this frame; ``f_locals`` is the dictionary used to look up\n local variables; ``f_globals`` is used for global variables;\n ``f_builtins`` is used for built-in (intrinsic) names;\n ``f_restricted`` is a flag indicating whether the function is\n executing in restricted execution mode; ``f_lasti`` gives the\n precise instruction (this is an index into the bytecode string\n of the code object).\n\n Special writable attributes: ``f_trace``, if not ``None``, is a\n function called at the start of each source code line (this is\n used by the debugger); ``f_exc_type``, ``f_exc_value``,\n ``f_exc_traceback`` represent the last exception raised in the\n parent frame provided another exception was ever raised in the\n current frame (in all other cases they are None); ``f_lineno``\n is the current line number of the frame --- writing to this from\n within a trace function jumps to the given line (only for the\n bottom-most frame). A debugger can implement a Jump command\n (aka Set Next Statement) by writing to f_lineno.\n\n Traceback objects\n Traceback objects represent a stack trace of an exception. A\n traceback object is created when an exception occurs. When the\n search for an exception handler unwinds the execution stack, at\n each unwound level a traceback object is inserted in front of\n the current traceback. When an exception handler is entered,\n the stack trace is made available to the program. (See section\n *The try statement*.) It is accessible as ``sys.exc_traceback``,\n and also as the third item of the tuple returned by\n ``sys.exc_info()``. The latter is the preferred interface,\n since it works correctly when the program is using multiple\n threads. When the program contains no suitable handler, the\n stack trace is written (nicely formatted) to the standard error\n stream; if the interpreter is interactive, it is also made\n available to the user as ``sys.last_traceback``.\n\n Special read-only attributes: ``tb_next`` is the next level in\n the stack trace (towards the frame where the exception\n occurred), or ``None`` if there is no next level; ``tb_frame``\n points to the execution frame of the current level;\n ``tb_lineno`` gives the line number where the exception\n occurred; ``tb_lasti`` indicates the precise instruction. The\n line number and last instruction in the traceback may differ\n from the line number of its frame object if the exception\n occurred in a ``try`` statement with no matching except clause\n or with a finally clause.\n\n Slice objects\n Slice objects are used to represent slices when *extended slice\n syntax* is used. This is a slice using two colons, or multiple\n slices or ellipses separated by commas, e.g., ``a[i:j:step]``,\n ``a[i:j, k:l]``, or ``a[..., i:j]``. They are also created by\n the built-in ``slice()`` function.\n\n Special read-only attributes: ``start`` is the lower bound;\n ``stop`` is the upper bound; ``step`` is the step value; each is\n ``None`` if omitted. These attributes can have any type.\n\n Slice objects support one method:\n\n slice.indices(self, length)\n\n This method takes a single integer argument *length* and\n computes information about the extended slice that the slice\n object would describe if applied to a sequence of *length*\n items. 
It returns a tuple of three integers; respectively\n these are the *start* and *stop* indices and the *step* or\n stride length of the slice. Missing or out-of-bounds indices\n are handled in a manner consistent with regular slices.\n\n New in version 2.3.\n\n Static method objects\n Static method objects provide a way of defeating the\n transformation of function objects to method objects described\n above. A static method object is a wrapper around any other\n object, usually a user-defined method object. When a static\n method object is retrieved from a class or a class instance, the\n object actually returned is the wrapped object, which is not\n subject to any further transformation. Static method objects are\n not themselves callable, although the objects they wrap usually\n are. Static method objects are created by the built-in\n ``staticmethod()`` constructor.\n\n Class method objects\n A class method object, like a static method object, is a wrapper\n around another object that alters the way in which that object\n is retrieved from classes and class instances. The behaviour of\n class method objects upon such retrieval is described above,\n under "User-defined methods". Class method objects are created\n by the built-in ``classmethod()`` constructor.\n', 'typesfunctions': u'\nFunctions\n*********\n\nFunction objects are created by function definitions. The only\noperation on a function object is to call it: ``func(argument-list)``.\n\nThere are really two flavors of function objects: built-in functions\nand user-defined functions. Both support the same operation (to call\nthe function), but the implementation is different, hence the\ndifferent object types.\n\nSee *Function definitions* for more information.\n', - 'typesmapping': u'\nMapping Types --- ``dict``\n**************************\n\nA *mapping* object maps *hashable* values to arbitrary objects.\nMappings are mutable objects. There is currently only one standard\nmapping type, the *dictionary*. (For other containers see the built\nin ``list``, ``set``, and ``tuple`` classes, and the ``collections``\nmodule.)\n\nA dictionary\'s keys are *almost* arbitrary values. Values that are\nnot *hashable*, that is, values containing lists, dictionaries or\nother mutable types (that are compared by value rather than by object\nidentity) may not be used as keys. Numeric types used for keys obey\nthe normal rules for numeric comparison: if two numbers compare equal\n(such as ``1`` and ``1.0``) then they can be used interchangeably to\nindex the same dictionary entry. (Note however, that since computers\nstore floating-point numbers as approximations it is usually unwise to\nuse them as dictionary keys.)\n\nDictionaries can be created by placing a comma-separated list of\n``key: value`` pairs within braces, for example: ``{\'jack\': 4098,\n\'sjoerd\': 4127}`` or ``{4098: \'jack\', 4127: \'sjoerd\'}``, or by the\n``dict`` constructor.\n\nclass class dict([arg])\n\n Return a new dictionary initialized from an optional positional\n argument or from a set of keyword arguments. If no arguments are\n given, return a new empty dictionary. If the positional argument\n *arg* is a mapping object, return a dictionary mapping the same\n keys to the same values as does the mapping object. Otherwise the\n positional argument must be a sequence, a container that supports\n iteration, or an iterator object. The elements of the argument\n must each also be of one of those kinds, and each must in turn\n contain exactly two objects. 
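A small sketch of the static and class method wrappers described above; the class ``C`` is invented for illustration:

    >>> class C(object):
    ...     @staticmethod
    ...     def s(x):
    ...         return x
    ...     @classmethod
    ...     def c(cls, x):
    ...         return cls, x
    ...
    >>> C.s(1), C().s(1)                 # static methods: no transformation
    (1, 1)
    >>> C.c(1) == C().c(1) == (C, 1)     # class methods bind to the class itself
    True
    >>> C.c.im_self is C
    True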
The first is used as a key in the new\n dictionary, and the second as the key\'s value. If a given key is\n seen more than once, the last value associated with it is retained\n in the new dictionary.\n\n If keyword arguments are given, the keywords themselves with their\n associated values are added as items to the dictionary. If a key is\n specified both in the positional argument and as a keyword\n argument, the value associated with the keyword is retained in the\n dictionary. For example, these all return a dictionary equal to\n ``{"one": 2, "two": 3}``:\n\n * ``dict(one=2, two=3)``\n\n * ``dict({\'one\': 2, \'two\': 3})``\n\n * ``dict(zip((\'one\', \'two\'), (2, 3)))``\n\n * ``dict([[\'two\', 3], [\'one\', 2]])``\n\n The first example only works for keys that are valid Python\n identifiers; the others work with any valid keys.\n\n New in version 2.2.\n\n Changed in version 2.3: Support for building a dictionary from\n keyword arguments added.\n\n These are the operations that dictionaries support (and therefore,\n custom mapping types should support too):\n\n len(d)\n\n Return the number of items in the dictionary *d*.\n\n d[key]\n\n Return the item of *d* with key *key*. Raises a ``KeyError`` if\n *key* is not in the map.\n\n New in version 2.5: If a subclass of dict defines a method\n ``__missing__()``, if the key *key* is not present, the\n ``d[key]`` operation calls that method with the key *key* as\n argument. The ``d[key]`` operation then returns or raises\n whatever is returned or raised by the ``__missing__(key)`` call\n if the key is not present. No other operations or methods invoke\n ``__missing__()``. If ``__missing__()`` is not defined,\n ``KeyError`` is raised. ``__missing__()`` must be a method; it\n cannot be an instance variable. For an example, see\n ``collections.defaultdict``.\n\n d[key] = value\n\n Set ``d[key]`` to *value*.\n\n del d[key]\n\n Remove ``d[key]`` from *d*. Raises a ``KeyError`` if *key* is\n not in the map.\n\n key in d\n\n Return ``True`` if *d* has a key *key*, else ``False``.\n\n New in version 2.2.\n\n key not in d\n\n Equivalent to ``not key in d``.\n\n New in version 2.2.\n\n iter(d)\n\n Return an iterator over the keys of the dictionary. This is a\n shortcut for ``iterkeys()``.\n\n clear()\n\n Remove all items from the dictionary.\n\n copy()\n\n Return a shallow copy of the dictionary.\n\n fromkeys(seq[, value])\n\n Create a new dictionary with keys from *seq* and values set to\n *value*.\n\n ``fromkeys()`` is a class method that returns a new dictionary.\n *value* defaults to ``None``.\n\n New in version 2.3.\n\n get(key[, default])\n\n Return the value for *key* if *key* is in the dictionary, else\n *default*. If *default* is not given, it defaults to ``None``,\n so that this method never raises a ``KeyError``.\n\n has_key(key)\n\n Test for the presence of *key* in the dictionary. ``has_key()``\n is deprecated in favor of ``key in d``.\n\n items()\n\n Return a copy of the dictionary\'s list of ``(key, value)``\n pairs.\n\n **CPython implementation detail:** Keys and values are listed in\n an arbitrary order which is non-random, varies across Python\n implementations, and depends on the dictionary\'s history of\n insertions and deletions.\n\n If ``items()``, ``keys()``, ``values()``, ``iteritems()``,\n ``iterkeys()``, and ``itervalues()`` are called with no\n intervening modifications to the dictionary, the lists will\n directly correspond. 
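The ``__missing__()`` hook and the ``get()`` fallback described above, sketched with an invented ``dict`` subclass:

    >>> class ZeroDict(dict):
    ...     def __missing__(self, key):
    ...         return 0
    ...
    >>> d = ZeroDict(one=1)
    >>> d['one'], d['two']               # the missing key goes through __missing__
    (1, 0)
    >>> 'two' in d                       # ...but is not inserted
    False
    >>> d.get('two', 'absent')           # get() never raises KeyError
    'absent'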
This allows the creation of ``(value,\n key)`` pairs using ``zip()``: ``pairs = zip(d.values(),\n d.keys())``. The same relationship holds for the ``iterkeys()``\n and ``itervalues()`` methods: ``pairs = zip(d.itervalues(),\n d.iterkeys())`` provides the same value for ``pairs``. Another\n way to create the same list is ``pairs = [(v, k) for (k, v) in\n d.iteritems()]``.\n\n iteritems()\n\n Return an iterator over the dictionary\'s ``(key, value)`` pairs.\n See the note for ``dict.items()``.\n\n Using ``iteritems()`` while adding or deleting entries in the\n dictionary may raise a ``RuntimeError`` or fail to iterate over\n all entries.\n\n New in version 2.2.\n\n iterkeys()\n\n Return an iterator over the dictionary\'s keys. See the note for\n ``dict.items()``.\n\n Using ``iterkeys()`` while adding or deleting entries in the\n dictionary may raise a ``RuntimeError`` or fail to iterate over\n all entries.\n\n New in version 2.2.\n\n itervalues()\n\n Return an iterator over the dictionary\'s values. See the note\n for ``dict.items()``.\n\n Using ``itervalues()`` while adding or deleting entries in the\n dictionary may raise a ``RuntimeError`` or fail to iterate over\n all entries.\n\n New in version 2.2.\n\n keys()\n\n Return a copy of the dictionary\'s list of keys. See the note\n for ``dict.items()``.\n\n pop(key[, default])\n\n If *key* is in the dictionary, remove it and return its value,\n else return *default*. If *default* is not given and *key* is\n not in the dictionary, a ``KeyError`` is raised.\n\n New in version 2.3.\n\n popitem()\n\n Remove and return an arbitrary ``(key, value)`` pair from the\n dictionary.\n\n ``popitem()`` is useful to destructively iterate over a\n dictionary, as often used in set algorithms. If the dictionary\n is empty, calling ``popitem()`` raises a ``KeyError``.\n\n setdefault(key[, default])\n\n If *key* is in the dictionary, return its value. If not, insert\n *key* with a value of *default* and return *default*. *default*\n defaults to ``None``.\n\n update([other])\n\n Update the dictionary with the key/value pairs from *other*,\n overwriting existing keys. Return ``None``.\n\n ``update()`` accepts either another dictionary object or an\n iterable of key/value pairs (as a tuple or other iterable of\n length two). If keyword arguments are specified, the dictionary\n is then updated with those key/value pairs: ``d.update(red=1,\n blue=2)``.\n\n Changed in version 2.4: Allowed the argument to be an iterable\n of key/value pairs and allowed keyword arguments.\n\n values()\n\n Return a copy of the dictionary\'s list of values. See the note\n for ``dict.items()``.\n\n viewitems()\n\n Return a new view of the dictionary\'s items (``(key, value)``\n pairs). See below for documentation of view objects.\n\n New in version 2.7.\n\n viewkeys()\n\n Return a new view of the dictionary\'s keys. See below for\n documentation of view objects.\n\n New in version 2.7.\n\n viewvalues()\n\n Return a new view of the dictionary\'s values. See below for\n documentation of view objects.\n\n New in version 2.7.\n\n\nDictionary view objects\n=======================\n\nThe objects returned by ``dict.viewkeys()``, ``dict.viewvalues()`` and\n``dict.viewitems()`` are *view objects*. 
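A quick sketch of ``setdefault()``, ``update()`` and ``pop()`` as described above:

    >>> d = {'red': 1}
    >>> d.setdefault('blue', 0)          # inserts and returns the default
    0
    >>> d.update(red=5, green=3)
    >>> sorted(d.items())
    [('blue', 0), ('green', 3), ('red', 5)]
    >>> d.pop('green')
    3
    >>> d.pop('green', 'gone')           # a default avoids the KeyError
    'gone'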
They provide a dynamic view\non the dictionary\'s entries, which means that when the dictionary\nchanges, the view reflects these changes.\n\nDictionary views can be iterated over to yield their respective data,\nand support membership tests:\n\nlen(dictview)\n\n Return the number of entries in the dictionary.\n\niter(dictview)\n\n Return an iterator over the keys, values or items (represented as\n tuples of ``(key, value)``) in the dictionary.\n\n Keys and values are iterated over in an arbitrary order which is\n non-random, varies across Python implementations, and depends on\n the dictionary\'s history of insertions and deletions. If keys,\n values and items views are iterated over with no intervening\n modifications to the dictionary, the order of items will directly\n correspond. This allows the creation of ``(value, key)`` pairs\n using ``zip()``: ``pairs = zip(d.values(), d.keys())``. Another\n way to create the same list is ``pairs = [(v, k) for (k, v) in\n d.items()]``.\n\n Iterating views while adding or deleting entries in the dictionary\n may raise a ``RuntimeError`` or fail to iterate over all entries.\n\nx in dictview\n\n Return ``True`` if *x* is in the underlying dictionary\'s keys,\n values or items (in the latter case, *x* should be a ``(key,\n value)`` tuple).\n\nKeys views are set-like since their entries are unique and hashable.\nIf all values are hashable, so that (key, value) pairs are unique and\nhashable, then the items view is also set-like. (Values views are not\ntreated as set-like since the entries are generally not unique.) Then\nthese set operations are available ("other" refers either to another\nview or a set):\n\ndictview & other\n\n Return the intersection of the dictview and the other object as a\n new set.\n\ndictview | other\n\n Return the union of the dictview and the other object as a new set.\n\ndictview - other\n\n Return the difference between the dictview and the other object\n (all elements in *dictview* that aren\'t in *other*) as a new set.\n\ndictview ^ other\n\n Return the symmetric difference (all elements either in *dictview*\n or *other*, but not in both) of the dictview and the other object\n as a new set.\n\nAn example of dictionary view usage:\n\n >>> dishes = {\'eggs\': 2, \'sausage\': 1, \'bacon\': 1, \'spam\': 500}\n >>> keys = dishes.viewkeys()\n >>> values = dishes.viewvalues()\n\n >>> # iteration\n >>> n = 0\n >>> for val in values:\n ... n += val\n >>> print(n)\n 504\n\n >>> # keys and values are iterated over in the same order\n >>> list(keys)\n [\'eggs\', \'bacon\', \'sausage\', \'spam\']\n >>> list(values)\n [2, 1, 1, 500]\n\n >>> # view objects are dynamic and reflect dict changes\n >>> del dishes[\'eggs\']\n >>> del dishes[\'sausage\']\n >>> list(keys)\n [\'spam\', \'bacon\']\n\n >>> # set operations\n >>> keys & {\'eggs\', \'bacon\', \'salad\'}\n {\'bacon\'}\n', + 'typesmapping': u'\nMapping Types --- ``dict``\n**************************\n\nA *mapping* object maps *hashable* values to arbitrary objects.\nMappings are mutable objects. There is currently only one standard\nmapping type, the *dictionary*. (For other containers see the built\nin ``list``, ``set``, and ``tuple`` classes, and the ``collections``\nmodule.)\n\nA dictionary\'s keys are *almost* arbitrary values. Values that are\nnot *hashable*, that is, values containing lists, dictionaries or\nother mutable types (that are compared by value rather than by object\nidentity) may not be used as keys. 
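As a quick sketch of the hashability rule (the dictionary name and the key values below are arbitrary), an immutable tuple is accepted as a key while a mutable list is not:

   >>> d = {}
   >>> d[(1, 2)] = 'tuples are hashable'       # immutable, usable as a key
   >>> d[[1, 2]] = 'lists are not'             # mutable, hence unhashable
   Traceback (most recent call last):
     ...
   TypeError: unhashable type: 'list'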
Numeric types used for keys obey\nthe normal rules for numeric comparison: if two numbers compare equal\n(such as ``1`` and ``1.0``) then they can be used interchangeably to\nindex the same dictionary entry. (Note however, that since computers\nstore floating-point numbers as approximations it is usually unwise to\nuse them as dictionary keys.)\n\nDictionaries can be created by placing a comma-separated list of\n``key: value`` pairs within braces, for example: ``{\'jack\': 4098,\n\'sjoerd\': 4127}`` or ``{4098: \'jack\', 4127: \'sjoerd\'}``, or by the\n``dict`` constructor.\n\nclass class dict([arg])\n\n Return a new dictionary initialized from an optional positional\n argument or from a set of keyword arguments. If no arguments are\n given, return a new empty dictionary. If the positional argument\n *arg* is a mapping object, return a dictionary mapping the same\n keys to the same values as does the mapping object. Otherwise the\n positional argument must be a sequence, a container that supports\n iteration, or an iterator object. The elements of the argument\n must each also be of one of those kinds, and each must in turn\n contain exactly two objects. The first is used as a key in the new\n dictionary, and the second as the key\'s value. If a given key is\n seen more than once, the last value associated with it is retained\n in the new dictionary.\n\n If keyword arguments are given, the keywords themselves with their\n associated values are added as items to the dictionary. If a key is\n specified both in the positional argument and as a keyword\n argument, the value associated with the keyword is retained in the\n dictionary. For example, these all return a dictionary equal to\n ``{"one": 1, "two": 2}``:\n\n * ``dict(one=1, two=2)``\n\n * ``dict({\'one\': 1, \'two\': 2})``\n\n * ``dict(zip((\'one\', \'two\'), (1, 2)))``\n\n * ``dict([[\'two\', 2], [\'one\', 1]])``\n\n The first example only works for keys that are valid Python\n identifiers; the others work with any valid keys.\n\n New in version 2.2.\n\n Changed in version 2.3: Support for building a dictionary from\n keyword arguments added.\n\n These are the operations that dictionaries support (and therefore,\n custom mapping types should support too):\n\n len(d)\n\n Return the number of items in the dictionary *d*.\n\n d[key]\n\n Return the item of *d* with key *key*. Raises a ``KeyError`` if\n *key* is not in the map.\n\n New in version 2.5: If a subclass of dict defines a method\n ``__missing__()``, if the key *key* is not present, the\n ``d[key]`` operation calls that method with the key *key* as\n argument. The ``d[key]`` operation then returns or raises\n whatever is returned or raised by the ``__missing__(key)`` call\n if the key is not present. No other operations or methods invoke\n ``__missing__()``. If ``__missing__()`` is not defined,\n ``KeyError`` is raised. ``__missing__()`` must be a method; it\n cannot be an instance variable. For an example, see\n ``collections.defaultdict``.\n\n d[key] = value\n\n Set ``d[key]`` to *value*.\n\n del d[key]\n\n Remove ``d[key]`` from *d*. Raises a ``KeyError`` if *key* is\n not in the map.\n\n key in d\n\n Return ``True`` if *d* has a key *key*, else ``False``.\n\n New in version 2.2.\n\n key not in d\n\n Equivalent to ``not key in d``.\n\n New in version 2.2.\n\n iter(d)\n\n Return an iterator over the keys of the dictionary. 
This is a\n shortcut for ``iterkeys()``.\n\n clear()\n\n Remove all items from the dictionary.\n\n copy()\n\n Return a shallow copy of the dictionary.\n\n fromkeys(seq[, value])\n\n Create a new dictionary with keys from *seq* and values set to\n *value*.\n\n ``fromkeys()`` is a class method that returns a new dictionary.\n *value* defaults to ``None``.\n\n New in version 2.3.\n\n get(key[, default])\n\n Return the value for *key* if *key* is in the dictionary, else\n *default*. If *default* is not given, it defaults to ``None``,\n so that this method never raises a ``KeyError``.\n\n has_key(key)\n\n Test for the presence of *key* in the dictionary. ``has_key()``\n is deprecated in favor of ``key in d``.\n\n items()\n\n Return a copy of the dictionary\'s list of ``(key, value)``\n pairs.\n\n **CPython implementation detail:** Keys and values are listed in\n an arbitrary order which is non-random, varies across Python\n implementations, and depends on the dictionary\'s history of\n insertions and deletions.\n\n If ``items()``, ``keys()``, ``values()``, ``iteritems()``,\n ``iterkeys()``, and ``itervalues()`` are called with no\n intervening modifications to the dictionary, the lists will\n directly correspond. This allows the creation of ``(value,\n key)`` pairs using ``zip()``: ``pairs = zip(d.values(),\n d.keys())``. The same relationship holds for the ``iterkeys()``\n and ``itervalues()`` methods: ``pairs = zip(d.itervalues(),\n d.iterkeys())`` provides the same value for ``pairs``. Another\n way to create the same list is ``pairs = [(v, k) for (k, v) in\n d.iteritems()]``.\n\n iteritems()\n\n Return an iterator over the dictionary\'s ``(key, value)`` pairs.\n See the note for ``dict.items()``.\n\n Using ``iteritems()`` while adding or deleting entries in the\n dictionary may raise a ``RuntimeError`` or fail to iterate over\n all entries.\n\n New in version 2.2.\n\n iterkeys()\n\n Return an iterator over the dictionary\'s keys. See the note for\n ``dict.items()``.\n\n Using ``iterkeys()`` while adding or deleting entries in the\n dictionary may raise a ``RuntimeError`` or fail to iterate over\n all entries.\n\n New in version 2.2.\n\n itervalues()\n\n Return an iterator over the dictionary\'s values. See the note\n for ``dict.items()``.\n\n Using ``itervalues()`` while adding or deleting entries in the\n dictionary may raise a ``RuntimeError`` or fail to iterate over\n all entries.\n\n New in version 2.2.\n\n keys()\n\n Return a copy of the dictionary\'s list of keys. See the note\n for ``dict.items()``.\n\n pop(key[, default])\n\n If *key* is in the dictionary, remove it and return its value,\n else return *default*. If *default* is not given and *key* is\n not in the dictionary, a ``KeyError`` is raised.\n\n New in version 2.3.\n\n popitem()\n\n Remove and return an arbitrary ``(key, value)`` pair from the\n dictionary.\n\n ``popitem()`` is useful to destructively iterate over a\n dictionary, as often used in set algorithms. If the dictionary\n is empty, calling ``popitem()`` raises a ``KeyError``.\n\n setdefault(key[, default])\n\n If *key* is in the dictionary, return its value. If not, insert\n *key* with a value of *default* and return *default*. *default*\n defaults to ``None``.\n\n update([other])\n\n Update the dictionary with the key/value pairs from *other*,\n overwriting existing keys. Return ``None``.\n\n ``update()`` accepts either another dictionary object or an\n iterable of key/value pairs (as tuples or other iterables of\n length two). 
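A minimal sketch of the two positional forms just described (the keys and values are arbitrary); the keyword form is covered next:

   >>> d = {'red': 1}
   >>> d.update({'blue': 2})                       # another dictionary
   >>> d.update([('green', 3), ('red', 10)])       # iterable of key/value pairs
   >>> d == {'red': 10, 'blue': 2, 'green': 3}
   True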
If keyword arguments are specified, the dictionary\n is then updated with those key/value pairs: ``d.update(red=1,\n blue=2)``.\n\n Changed in version 2.4: Allowed the argument to be an iterable\n of key/value pairs and allowed keyword arguments.\n\n values()\n\n Return a copy of the dictionary\'s list of values. See the note\n for ``dict.items()``.\n\n viewitems()\n\n Return a new view of the dictionary\'s items (``(key, value)``\n pairs). See below for documentation of view objects.\n\n New in version 2.7.\n\n viewkeys()\n\n Return a new view of the dictionary\'s keys. See below for\n documentation of view objects.\n\n New in version 2.7.\n\n viewvalues()\n\n Return a new view of the dictionary\'s values. See below for\n documentation of view objects.\n\n New in version 2.7.\n\n\nDictionary view objects\n=======================\n\nThe objects returned by ``dict.viewkeys()``, ``dict.viewvalues()`` and\n``dict.viewitems()`` are *view objects*. They provide a dynamic view\non the dictionary\'s entries, which means that when the dictionary\nchanges, the view reflects these changes.\n\nDictionary views can be iterated over to yield their respective data,\nand support membership tests:\n\nlen(dictview)\n\n Return the number of entries in the dictionary.\n\niter(dictview)\n\n Return an iterator over the keys, values or items (represented as\n tuples of ``(key, value)``) in the dictionary.\n\n Keys and values are iterated over in an arbitrary order which is\n non-random, varies across Python implementations, and depends on\n the dictionary\'s history of insertions and deletions. If keys,\n values and items views are iterated over with no intervening\n modifications to the dictionary, the order of items will directly\n correspond. This allows the creation of ``(value, key)`` pairs\n using ``zip()``: ``pairs = zip(d.values(), d.keys())``. Another\n way to create the same list is ``pairs = [(v, k) for (k, v) in\n d.items()]``.\n\n Iterating views while adding or deleting entries in the dictionary\n may raise a ``RuntimeError`` or fail to iterate over all entries.\n\nx in dictview\n\n Return ``True`` if *x* is in the underlying dictionary\'s keys,\n values or items (in the latter case, *x* should be a ``(key,\n value)`` tuple).\n\nKeys views are set-like since their entries are unique and hashable.\nIf all values are hashable, so that (key, value) pairs are unique and\nhashable, then the items view is also set-like. (Values views are not\ntreated as set-like since the entries are generally not unique.) Then\nthese set operations are available ("other" refers either to another\nview or a set):\n\ndictview & other\n\n Return the intersection of the dictview and the other object as a\n new set.\n\ndictview | other\n\n Return the union of the dictview and the other object as a new set.\n\ndictview - other\n\n Return the difference between the dictview and the other object\n (all elements in *dictview* that aren\'t in *other*) as a new set.\n\ndictview ^ other\n\n Return the symmetric difference (all elements either in *dictview*\n or *other*, but not in both) of the dictview and the other object\n as a new set.\n\nAn example of dictionary view usage:\n\n >>> dishes = {\'eggs\': 2, \'sausage\': 1, \'bacon\': 1, \'spam\': 500}\n >>> keys = dishes.viewkeys()\n >>> values = dishes.viewvalues()\n\n >>> # iteration\n >>> n = 0\n >>> for val in values:\n ... 
n += val\n >>> print(n)\n 504\n\n >>> # keys and values are iterated over in the same order\n >>> list(keys)\n [\'eggs\', \'bacon\', \'sausage\', \'spam\']\n >>> list(values)\n [2, 1, 1, 500]\n\n >>> # view objects are dynamic and reflect dict changes\n >>> del dishes[\'eggs\']\n >>> del dishes[\'sausage\']\n >>> list(keys)\n [\'spam\', \'bacon\']\n\n >>> # set operations\n >>> keys & {\'eggs\', \'bacon\', \'salad\'}\n {\'bacon\'}\n', 'typesmethods': u"\nMethods\n*******\n\nMethods are functions that are called using the attribute notation.\nThere are two flavors: built-in methods (such as ``append()`` on\nlists) and class instance methods. Built-in methods are described\nwith the types that support them.\n\nThe implementation adds two special read-only attributes to class\ninstance methods: ``m.im_self`` is the object on which the method\noperates, and ``m.im_func`` is the function implementing the method.\nCalling ``m(arg-1, arg-2, ..., arg-n)`` is completely equivalent to\ncalling ``m.im_func(m.im_self, arg-1, arg-2, ..., arg-n)``.\n\nClass instance methods are either *bound* or *unbound*, referring to\nwhether the method was accessed through an instance or a class,\nrespectively. When a method is unbound, its ``im_self`` attribute\nwill be ``None`` and if called, an explicit ``self`` object must be\npassed as the first argument. In this case, ``self`` must be an\ninstance of the unbound method's class (or a subclass of that class),\notherwise a ``TypeError`` is raised.\n\nLike function objects, methods objects support getting arbitrary\nattributes. However, since method attributes are actually stored on\nthe underlying function object (``meth.im_func``), setting method\nattributes on either bound or unbound methods is disallowed.\nAttempting to set a method attribute results in a ``TypeError`` being\nraised. In order to set a method attribute, you need to explicitly\nset it on the underlying function object:\n\n class C:\n def method(self):\n pass\n\n c = C()\n c.method.im_func.whoami = 'my name is c'\n\nSee *The standard type hierarchy* for more information.\n", 'typesmodules': u"\nModules\n*******\n\nThe only special operation on a module is attribute access:\n``m.name``, where *m* is a module and *name* accesses a name defined\nin *m*'s symbol table. Module attributes can be assigned to. (Note\nthat the ``import`` statement is not, strictly speaking, an operation\non a module object; ``import foo`` does not require a module object\nnamed *foo* to exist, rather it requires an (external) *definition*\nfor a module named *foo* somewhere.)\n\nA special member of every module is ``__dict__``. This is the\ndictionary containing the module's symbol table. Modifying this\ndictionary will actually change the module's symbol table, but direct\nassignment to the ``__dict__`` attribute is not possible (you can\nwrite ``m.__dict__['a'] = 1``, which defines ``m.a`` to be ``1``, but\nyou can't write ``m.__dict__ = {}``). Modifying ``__dict__`` directly\nis not recommended.\n\nModules built into the interpreter are written like this: ````. 
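The module representations referred to here look roughly as follows in an interactive CPython 2.7 session; the exact file path shown for a module loaded from disk depends on the installation:

   >>> import sys, os
   >>> sys                     # built into the interpreter
   <module 'sys' (built-in)>
   >>> os                      # loaded from a file; the path varies
   <module 'os' from '/usr/lib/python2.7/os.pyc'>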
If loaded from a file, they are written as\n````.\n", - 'typesseq': u'\nSequence Types --- ``str``, ``unicode``, ``list``, ``tuple``, ``buffer``, ``xrange``\n************************************************************************************\n\nThere are six sequence types: strings, Unicode strings, lists, tuples,\nbuffers, and xrange objects.\n\nFor other containers see the built in ``dict`` and ``set`` classes,\nand the ``collections`` module.\n\nString literals are written in single or double quotes: ``\'xyzzy\'``,\n``"frobozz"``. See *String literals* for more about string literals.\nUnicode strings are much like strings, but are specified in the syntax\nusing a preceding ``\'u\'`` character: ``u\'abc\'``, ``u"def"``. In\naddition to the functionality described here, there are also string-\nspecific methods described in the *String Methods* section. Lists are\nconstructed with square brackets, separating items with commas: ``[a,\nb, c]``. Tuples are constructed by the comma operator (not within\nsquare brackets), with or without enclosing parentheses, but an empty\ntuple must have the enclosing parentheses, such as ``a, b, c`` or\n``()``. A single item tuple must have a trailing comma, such as\n``(d,)``.\n\nBuffer objects are not directly supported by Python syntax, but can be\ncreated by calling the built-in function ``buffer()``. They don\'t\nsupport concatenation or repetition.\n\nObjects of type xrange are similar to buffers in that there is no\nspecific syntax to create them, but they are created using the\n``xrange()`` function. They don\'t support slicing, concatenation or\nrepetition, and using ``in``, ``not in``, ``min()`` or ``max()`` on\nthem is inefficient.\n\nMost sequence types support the following operations. The ``in`` and\n``not in`` operations have the same priorities as the comparison\noperations. The ``+`` and ``*`` operations have the same priority as\nthe corresponding numeric operations. [3] Additional methods are\nprovided for *Mutable Sequence Types*.\n\nThis table lists the sequence operations sorted in ascending priority\n(operations in the same box have the same priority). 
In the table,\n*s* and *t* are sequences of the same type; *n*, *i* and *j* are\nintegers:\n\n+--------------------+----------------------------------+------------+\n| Operation | Result | Notes |\n+====================+==================================+============+\n| ``x in s`` | ``True`` if an item of *s* is | (1) |\n| | equal to *x*, else ``False`` | |\n+--------------------+----------------------------------+------------+\n| ``x not in s`` | ``False`` if an item of *s* is | (1) |\n| | equal to *x*, else ``True`` | |\n+--------------------+----------------------------------+------------+\n| ``s + t`` | the concatenation of *s* and *t* | (6) |\n+--------------------+----------------------------------+------------+\n| ``s * n, n * s`` | *n* shallow copies of *s* | (2) |\n| | concatenated | |\n+--------------------+----------------------------------+------------+\n| ``s[i]`` | *i*\'th item of *s*, origin 0 | (3) |\n+--------------------+----------------------------------+------------+\n| ``s[i:j]`` | slice of *s* from *i* to *j* | (3)(4) |\n+--------------------+----------------------------------+------------+\n| ``s[i:j:k]`` | slice of *s* from *i* to *j* | (3)(5) |\n| | with step *k* | |\n+--------------------+----------------------------------+------------+\n| ``len(s)`` | length of *s* | |\n+--------------------+----------------------------------+------------+\n| ``min(s)`` | smallest item of *s* | |\n+--------------------+----------------------------------+------------+\n| ``max(s)`` | largest item of *s* | |\n+--------------------+----------------------------------+------------+\n\nSequence types also support comparisons. In particular, tuples and\nlists are compared lexicographically by comparing corresponding\nelements. This means that to compare equal, every element must compare\nequal and the two sequences must be of the same type and have the same\nlength. (For full details see *Comparisons* in the language\nreference.)\n\nNotes:\n\n1. When *s* is a string or Unicode string object the ``in`` and ``not\n in`` operations act like a substring test. In Python versions\n before 2.3, *x* had to be a string of length 1. In Python 2.3 and\n beyond, *x* may be a string of any length.\n\n2. Values of *n* less than ``0`` are treated as ``0`` (which yields an\n empty sequence of the same type as *s*). Note also that the copies\n are shallow; nested structures are not copied. This often haunts\n new Python programmers; consider:\n\n >>> lists = [[]] * 3\n >>> lists\n [[], [], []]\n >>> lists[0].append(3)\n >>> lists\n [[3], [3], [3]]\n\n What has happened is that ``[[]]`` is a one-element list containing\n an empty list, so all three elements of ``[[]] * 3`` are (pointers\n to) this single empty list. Modifying any of the elements of\n ``lists`` modifies this single list. You can create a list of\n different lists this way:\n\n >>> lists = [[] for i in range(3)]\n >>> lists[0].append(3)\n >>> lists[1].append(5)\n >>> lists[2].append(7)\n >>> lists\n [[3], [5], [7]]\n\n3. If *i* or *j* is negative, the index is relative to the end of the\n string: ``len(s) + i`` or ``len(s) + j`` is substituted. But note\n that ``-0`` is still ``0``.\n\n4. The slice of *s* from *i* to *j* is defined as the sequence of\n items with index *k* such that ``i <= k < j``. If *i* or *j* is\n greater than ``len(s)``, use ``len(s)``. If *i* is omitted or\n ``None``, use ``0``. If *j* is omitted or ``None``, use\n ``len(s)``. If *i* is greater than or equal to *j*, the slice is\n empty.\n\n5. 
The slice of *s* from *i* to *j* with step *k* is defined as the\n sequence of items with index ``x = i + n*k`` such that ``0 <= n <\n (j-i)/k``. In other words, the indices are ``i``, ``i+k``,\n ``i+2*k``, ``i+3*k`` and so on, stopping when *j* is reached (but\n never including *j*). If *i* or *j* is greater than ``len(s)``,\n use ``len(s)``. If *i* or *j* are omitted or ``None``, they become\n "end" values (which end depends on the sign of *k*). Note, *k*\n cannot be zero. If *k* is ``None``, it is treated like ``1``.\n\n6. **CPython implementation detail:** If *s* and *t* are both strings,\n some Python implementations such as CPython can usually perform an\n in-place optimization for assignments of the form ``s = s + t`` or\n ``s += t``. When applicable, this optimization makes quadratic\n run-time much less likely. This optimization is both version and\n implementation dependent. For performance sensitive code, it is\n preferable to use the ``str.join()`` method which assures\n consistent linear concatenation performance across versions and\n implementations.\n\n Changed in version 2.4: Formerly, string concatenation never\n occurred in-place.\n\n\nString Methods\n==============\n\nBelow are listed the string methods which both 8-bit strings and\nUnicode objects support.\n\nIn addition, Python\'s strings support the sequence type methods\ndescribed in the *Sequence Types --- str, unicode, list, tuple,\nbuffer, xrange* section. To output formatted strings use template\nstrings or the ``%`` operator described in the *String Formatting\nOperations* section. Also, see the ``re`` module for string functions\nbased on regular expressions.\n\nstr.capitalize()\n\n Return a copy of the string with only its first character\n capitalized.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.center(width[, fillchar])\n\n Return centered in a string of length *width*. Padding is done\n using the specified *fillchar* (default is a space).\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.count(sub[, start[, end]])\n\n Return the number of non-overlapping occurrences of substring *sub*\n in the range [*start*, *end*]. Optional arguments *start* and\n *end* are interpreted as in slice notation.\n\nstr.decode([encoding[, errors]])\n\n Decodes the string using the codec registered for *encoding*.\n *encoding* defaults to the default string encoding. *errors* may\n be given to set a different error handling scheme. The default is\n ``\'strict\'``, meaning that encoding errors raise ``UnicodeError``.\n Other possible values are ``\'ignore\'``, ``\'replace\'`` and any other\n name registered via ``codecs.register_error()``, see section *Codec\n Base Classes*.\n\n New in version 2.2.\n\n Changed in version 2.3: Support for other error handling schemes\n added.\n\n Changed in version 2.7: Support for keyword arguments added.\n\nstr.encode([encoding[, errors]])\n\n Return an encoded version of the string. Default encoding is the\n current default string encoding. *errors* may be given to set a\n different error handling scheme. The default for *errors* is\n ``\'strict\'``, meaning that encoding errors raise a\n ``UnicodeError``. Other possible values are ``\'ignore\'``,\n ``\'replace\'``, ``\'xmlcharrefreplace\'``, ``\'backslashreplace\'`` and\n any other name registered via ``codecs.register_error()``, see\n section *Codec Base Classes*. 
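A small sketch of the error handlers named above (the sample string is arbitrary):

   >>> u'caf\xe9'.encode('ascii', 'ignore')              # drop the unencodable character
   'caf'
   >>> u'caf\xe9'.encode('ascii', 'replace')             # substitute a placeholder
   'caf?'
   >>> u'caf\xe9'.encode('ascii', 'xmlcharrefreplace')   # XML character reference
   'caf&#233;'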
For a list of possible encodings, see\n section *Standard Encodings*.\n\n New in version 2.0.\n\n Changed in version 2.3: Support for ``\'xmlcharrefreplace\'`` and\n ``\'backslashreplace\'`` and other error handling schemes added.\n\n Changed in version 2.7: Support for keyword arguments added.\n\nstr.endswith(suffix[, start[, end]])\n\n Return ``True`` if the string ends with the specified *suffix*,\n otherwise return ``False``. *suffix* can also be a tuple of\n suffixes to look for. With optional *start*, test beginning at\n that position. With optional *end*, stop comparing at that\n position.\n\n Changed in version 2.5: Accept tuples as *suffix*.\n\nstr.expandtabs([tabsize])\n\n Return a copy of the string where all tab characters are replaced\n by one or more spaces, depending on the current column and the\n given tab size. The column number is reset to zero after each\n newline occurring in the string. If *tabsize* is not given, a tab\n size of ``8`` characters is assumed. This doesn\'t understand other\n non-printing characters or escape sequences.\n\nstr.find(sub[, start[, end]])\n\n Return the lowest index in the string where substring *sub* is\n found, such that *sub* is contained in the slice ``s[start:end]``.\n Optional arguments *start* and *end* are interpreted as in slice\n notation. Return ``-1`` if *sub* is not found.\n\nstr.format(*args, **kwargs)\n\n Perform a string formatting operation. The string on which this\n method is called can contain literal text or replacement fields\n delimited by braces ``{}``. Each replacement field contains either\n the numeric index of a positional argument, or the name of a\n keyword argument. Returns a copy of the string where each\n replacement field is replaced with the string value of the\n corresponding argument.\n\n >>> "The sum of 1 + 2 is {0}".format(1+2)\n \'The sum of 1 + 2 is 3\'\n\n See *Format String Syntax* for a description of the various\n formatting options that can be specified in format strings.\n\n This method of string formatting is the new standard in Python 3.0,\n and should be preferred to the ``%`` formatting described in\n *String Formatting Operations* in new code.\n\n New in version 2.6.\n\nstr.index(sub[, start[, end]])\n\n Like ``find()``, but raise ``ValueError`` when the substring is not\n found.\n\nstr.isalnum()\n\n Return true if all characters in the string are alphanumeric and\n there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isalpha()\n\n Return true if all characters in the string are alphabetic and\n there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isdigit()\n\n Return true if all characters in the string are digits and there is\n at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.islower()\n\n Return true if all cased characters in the string are lowercase and\n there is at least one cased character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isspace()\n\n Return true if there are only whitespace characters in the string\n and there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.istitle()\n\n Return true if the string is a titlecased string and there is at\n least one character, for example uppercase characters may only\n follow uncased characters and lowercase characters only cased ones.\n Return false otherwise.\n\n For 
8-bit strings, this method is locale-dependent.\n\nstr.isupper()\n\n Return true if all cased characters in the string are uppercase and\n there is at least one cased character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.join(iterable)\n\n Return a string which is the concatenation of the strings in the\n *iterable* *iterable*. The separator between elements is the\n string providing this method.\n\nstr.ljust(width[, fillchar])\n\n Return the string left justified in a string of length *width*.\n Padding is done using the specified *fillchar* (default is a\n space). The original string is returned if *width* is less than\n ``len(s)``.\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.lower()\n\n Return a copy of the string converted to lowercase.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.lstrip([chars])\n\n Return a copy of the string with leading characters removed. The\n *chars* argument is a string specifying the set of characters to be\n removed. If omitted or ``None``, the *chars* argument defaults to\n removing whitespace. The *chars* argument is not a prefix; rather,\n all combinations of its values are stripped:\n\n >>> \' spacious \'.lstrip()\n \'spacious \'\n >>> \'www.example.com\'.lstrip(\'cmowz.\')\n \'example.com\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.partition(sep)\n\n Split the string at the first occurrence of *sep*, and return a\n 3-tuple containing the part before the separator, the separator\n itself, and the part after the separator. If the separator is not\n found, return a 3-tuple containing the string itself, followed by\n two empty strings.\n\n New in version 2.5.\n\nstr.replace(old, new[, count])\n\n Return a copy of the string with all occurrences of substring *old*\n replaced by *new*. If the optional argument *count* is given, only\n the first *count* occurrences are replaced.\n\nstr.rfind(sub[, start[, end]])\n\n Return the highest index in the string where substring *sub* is\n found, such that *sub* is contained within ``s[start:end]``.\n Optional arguments *start* and *end* are interpreted as in slice\n notation. Return ``-1`` on failure.\n\nstr.rindex(sub[, start[, end]])\n\n Like ``rfind()`` but raises ``ValueError`` when the substring *sub*\n is not found.\n\nstr.rjust(width[, fillchar])\n\n Return the string right justified in a string of length *width*.\n Padding is done using the specified *fillchar* (default is a\n space). The original string is returned if *width* is less than\n ``len(s)``.\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.rpartition(sep)\n\n Split the string at the last occurrence of *sep*, and return a\n 3-tuple containing the part before the separator, the separator\n itself, and the part after the separator. If the separator is not\n found, return a 3-tuple containing two empty strings, followed by\n the string itself.\n\n New in version 2.5.\n\nstr.rsplit([sep[, maxsplit]])\n\n Return a list of the words in the string, using *sep* as the\n delimiter string. If *maxsplit* is given, at most *maxsplit* splits\n are done, the *rightmost* ones. If *sep* is not specified or\n ``None``, any whitespace string is a separator. Except for\n splitting from the right, ``rsplit()`` behaves like ``split()``\n which is described in detail below.\n\n New in version 2.4.\n\nstr.rstrip([chars])\n\n Return a copy of the string with trailing characters removed. 
The\n *chars* argument is a string specifying the set of characters to be\n removed. If omitted or ``None``, the *chars* argument defaults to\n removing whitespace. The *chars* argument is not a suffix; rather,\n all combinations of its values are stripped:\n\n >>> \' spacious \'.rstrip()\n \' spacious\'\n >>> \'mississippi\'.rstrip(\'ipz\')\n \'mississ\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.split([sep[, maxsplit]])\n\n Return a list of the words in the string, using *sep* as the\n delimiter string. If *maxsplit* is given, at most *maxsplit*\n splits are done (thus, the list will have at most ``maxsplit+1``\n elements). If *maxsplit* is not specified, then there is no limit\n on the number of splits (all possible splits are made).\n\n If *sep* is given, consecutive delimiters are not grouped together\n and are deemed to delimit empty strings (for example,\n ``\'1,,2\'.split(\',\')`` returns ``[\'1\', \'\', \'2\']``). The *sep*\n argument may consist of multiple characters (for example,\n ``\'1<>2<>3\'.split(\'<>\')`` returns ``[\'1\', \'2\', \'3\']``). Splitting\n an empty string with a specified separator returns ``[\'\']``.\n\n If *sep* is not specified or is ``None``, a different splitting\n algorithm is applied: runs of consecutive whitespace are regarded\n as a single separator, and the result will contain no empty strings\n at the start or end if the string has leading or trailing\n whitespace. Consequently, splitting an empty string or a string\n consisting of just whitespace with a ``None`` separator returns\n ``[]``.\n\n For example, ``\' 1 2 3 \'.split()`` returns ``[\'1\', \'2\', \'3\']``,\n and ``\' 1 2 3 \'.split(None, 1)`` returns ``[\'1\', \'2 3 \']``.\n\nstr.splitlines([keepends])\n\n Return a list of the lines in the string, breaking at line\n boundaries. Line breaks are not included in the resulting list\n unless *keepends* is given and true.\n\nstr.startswith(prefix[, start[, end]])\n\n Return ``True`` if string starts with the *prefix*, otherwise\n return ``False``. *prefix* can also be a tuple of prefixes to look\n for. With optional *start*, test string beginning at that\n position. With optional *end*, stop comparing string at that\n position.\n\n Changed in version 2.5: Accept tuples as *prefix*.\n\nstr.strip([chars])\n\n Return a copy of the string with the leading and trailing\n characters removed. The *chars* argument is a string specifying the\n set of characters to be removed. If omitted or ``None``, the\n *chars* argument defaults to removing whitespace. The *chars*\n argument is not a prefix or suffix; rather, all combinations of its\n values are stripped:\n\n >>> \' spacious \'.strip()\n \'spacious\'\n >>> \'www.example.com\'.strip(\'cmowz.\')\n \'example\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.swapcase()\n\n Return a copy of the string with uppercase characters converted to\n lowercase and vice versa.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.title()\n\n Return a titlecased version of the string where words start with an\n uppercase character and the remaining characters are lowercase.\n\n The algorithm uses a simple language-independent definition of a\n word as groups of consecutive letters. 
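For instance (an arbitrary sample string), each run of consecutive letters starts with one uppercase character and continues in lowercase:

   >>> 'monty python'.title()
   'Monty Python'
   >>> '2 x 3 = 6'.title()           # digits and punctuation separate words
   '2 X 3 = 6'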
The definition works in\n many contexts but it means that apostrophes in contractions and\n possessives form word boundaries, which may not be the desired\n result:\n\n >>> "they\'re bill\'s friends from the UK".title()\n "They\'Re Bill\'S Friends From The Uk"\n\n A workaround for apostrophes can be constructed using regular\n expressions:\n\n >>> import re\n >>> def titlecase(s):\n return re.sub(r"[A-Za-z]+(\'[A-Za-z]+)?",\n lambda mo: mo.group(0)[0].upper() +\n mo.group(0)[1:].lower(),\n s)\n\n >>> titlecase("they\'re bill\'s friends.")\n "They\'re Bill\'s Friends."\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.translate(table[, deletechars])\n\n Return a copy of the string where all characters occurring in the\n optional argument *deletechars* are removed, and the remaining\n characters have been mapped through the given translation table,\n which must be a string of length 256.\n\n You can use the ``maketrans()`` helper function in the ``string``\n module to create a translation table. For string objects, set the\n *table* argument to ``None`` for translations that only delete\n characters:\n\n >>> \'read this short text\'.translate(None, \'aeiou\')\n \'rd ths shrt txt\'\n\n New in version 2.6: Support for a ``None`` *table* argument.\n\n For Unicode objects, the ``translate()`` method does not accept the\n optional *deletechars* argument. Instead, it returns a copy of the\n *s* where all characters have been mapped through the given\n translation table which must be a mapping of Unicode ordinals to\n Unicode ordinals, Unicode strings or ``None``. Unmapped characters\n are left untouched. Characters mapped to ``None`` are deleted.\n Note, a more flexible approach is to create a custom character\n mapping codec using the ``codecs`` module (see ``encodings.cp1251``\n for an example).\n\nstr.upper()\n\n Return a copy of the string converted to uppercase.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.zfill(width)\n\n Return the numeric string left filled with zeros in a string of\n length *width*. A sign prefix is handled correctly. The original\n string is returned if *width* is less than ``len(s)``.\n\n New in version 2.2.2.\n\nThe following methods are present only on unicode objects:\n\nunicode.isnumeric()\n\n Return ``True`` if there are only numeric characters in S,\n ``False`` otherwise. Numeric characters include digit characters,\n and all characters that have the Unicode numeric value property,\n e.g. U+2155, VULGAR FRACTION ONE FIFTH.\n\nunicode.isdecimal()\n\n Return ``True`` if there are only decimal characters in S,\n ``False`` otherwise. Decimal characters include digit characters,\n and all characters that that can be used to form decimal-radix\n numbers, e.g. U+0660, ARABIC-INDIC DIGIT ZERO.\n\n\nString Formatting Operations\n============================\n\nString and Unicode objects have one unique built-in operation: the\n``%`` operator (modulo). This is also known as the string\n*formatting* or *interpolation* operator. Given ``format % values``\n(where *format* is a string or Unicode object), ``%`` conversion\nspecifications in *format* are replaced with zero or more elements of\n*values*. The effect is similar to the using ``sprintf()`` in the C\nlanguage. If *format* is a Unicode object, or if any of the objects\nbeing converted using the ``%s`` conversion are Unicode objects, the\nresult will also be a Unicode object.\n\nIf *format* requires a single argument, *values* may be a single non-\ntuple object. 
[4] Otherwise, *values* must be a tuple with exactly\nthe number of items specified by the format string, or a single\nmapping object (for example, a dictionary).\n\nA conversion specifier contains two or more characters and has the\nfollowing components, which must occur in this order:\n\n1. The ``\'%\'`` character, which marks the start of the specifier.\n\n2. Mapping key (optional), consisting of a parenthesised sequence of\n characters (for example, ``(somename)``).\n\n3. Conversion flags (optional), which affect the result of some\n conversion types.\n\n4. Minimum field width (optional). If specified as an ``\'*\'``\n (asterisk), the actual width is read from the next element of the\n tuple in *values*, and the object to convert comes after the\n minimum field width and optional precision.\n\n5. Precision (optional), given as a ``\'.\'`` (dot) followed by the\n precision. If specified as ``\'*\'`` (an asterisk), the actual width\n is read from the next element of the tuple in *values*, and the\n value to convert comes after the precision.\n\n6. Length modifier (optional).\n\n7. Conversion type.\n\nWhen the right argument is a dictionary (or other mapping type), then\nthe formats in the string *must* include a parenthesised mapping key\ninto that dictionary inserted immediately after the ``\'%\'`` character.\nThe mapping key selects the value to be formatted from the mapping.\nFor example:\n\n>>> print \'%(language)s has %(#)03d quote types.\' % \\\n... {\'language\': "Python", "#": 2}\nPython has 002 quote types.\n\nIn this case no ``*`` specifiers may occur in a format (since they\nrequire a sequential parameter list).\n\nThe conversion flag characters are:\n\n+-----------+-----------------------------------------------------------------------+\n| Flag | Meaning |\n+===========+=======================================================================+\n| ``\'#\'`` | The value conversion will use the "alternate form" (where defined |\n| | below). |\n+-----------+-----------------------------------------------------------------------+\n| ``\'0\'`` | The conversion will be zero padded for numeric values. |\n+-----------+-----------------------------------------------------------------------+\n| ``\'-\'`` | The converted value is left adjusted (overrides the ``\'0\'`` |\n| | conversion if both are given). |\n+-----------+-----------------------------------------------------------------------+\n| ``\' \'`` | (a space) A blank should be left before a positive number (or empty |\n| | string) produced by a signed conversion. |\n+-----------+-----------------------------------------------------------------------+\n| ``\'+\'`` | A sign character (``\'+\'`` or ``\'-\'``) will precede the conversion |\n| | (overrides a "space" flag). |\n+-----------+-----------------------------------------------------------------------+\n\nA length modifier (``h``, ``l``, or ``L``) may be present, but is\nignored as it is not necessary for Python -- so e.g. ``%ld`` is\nidentical to ``%d``.\n\nThe conversion types are:\n\n+--------------+-------------------------------------------------------+---------+\n| Conversion | Meaning | Notes |\n+==============+=======================================================+=========+\n| ``\'d\'`` | Signed integer decimal. | |\n+--------------+-------------------------------------------------------+---------+\n| ``\'i\'`` | Signed integer decimal. | |\n+--------------+-------------------------------------------------------+---------+\n| ``\'o\'`` | Signed octal value. 
| (1) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'u\'`` | Obsolete type -- it is identical to ``\'d\'``. | (7) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'x\'`` | Signed hexadecimal (lowercase). | (2) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'X\'`` | Signed hexadecimal (uppercase). | (2) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'e\'`` | Floating point exponential format (lowercase). | (3) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'E\'`` | Floating point exponential format (uppercase). | (3) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'f\'`` | Floating point decimal format. | (3) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'F\'`` | Floating point decimal format. | (3) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'g\'`` | Floating point format. Uses lowercase exponential | (4) |\n| | format if exponent is less than -4 or not less than | |\n| | precision, decimal format otherwise. | |\n+--------------+-------------------------------------------------------+---------+\n| ``\'G\'`` | Floating point format. Uses uppercase exponential | (4) |\n| | format if exponent is less than -4 or not less than | |\n| | precision, decimal format otherwise. | |\n+--------------+-------------------------------------------------------+---------+\n| ``\'c\'`` | Single character (accepts integer or single character | |\n| | string). | |\n+--------------+-------------------------------------------------------+---------+\n| ``\'r\'`` | String (converts any Python object using ``repr()``). | (5) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'s\'`` | String (converts any Python object using ``str()``). | (6) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'%\'`` | No argument is converted, results in a ``\'%\'`` | |\n| | character in the result. | |\n+--------------+-------------------------------------------------------+---------+\n\nNotes:\n\n1. The alternate form causes a leading zero (``\'0\'``) to be inserted\n between left-hand padding and the formatting of the number if the\n leading character of the result is not already a zero.\n\n2. The alternate form causes a leading ``\'0x\'`` or ``\'0X\'`` (depending\n on whether the ``\'x\'`` or ``\'X\'`` format was used) to be inserted\n between left-hand padding and the formatting of the number if the\n leading character of the result is not already a zero.\n\n3. The alternate form causes the result to always contain a decimal\n point, even if no digits follow it.\n\n The precision determines the number of digits after the decimal\n point and defaults to 6.\n\n4. The alternate form causes the result to always contain a decimal\n point, and trailing zeroes are not removed as they would otherwise\n be.\n\n The precision determines the number of significant digits before\n and after the decimal point and defaults to 6.\n\n5. The ``%r`` conversion was added in Python 2.0.\n\n The precision determines the maximal number of characters used.\n\n6. 
If the object or format provided is a ``unicode`` string, the\n resulting string will also be ``unicode``.\n\n The precision determines the maximal number of characters used.\n\n7. See **PEP 237**.\n\nSince Python strings have an explicit length, ``%s`` conversions do\nnot assume that ``\'\\0\'`` is the end of the string.\n\nChanged in version 2.7: ``%f`` conversions for numbers whose absolute\nvalue is over 1e50 are no longer replaced by ``%g`` conversions.\n\nAdditional string operations are defined in standard modules\n``string`` and ``re``.\n\n\nXRange Type\n===========\n\nThe ``xrange`` type is an immutable sequence which is commonly used\nfor looping. The advantage of the ``xrange`` type is that an\n``xrange`` object will always take the same amount of memory, no\nmatter the size of the range it represents. There are no consistent\nperformance advantages.\n\nXRange objects have very little behavior: they only support indexing,\niteration, and the ``len()`` function.\n\n\nMutable Sequence Types\n======================\n\nList objects support additional operations that allow in-place\nmodification of the object. Other mutable sequence types (when added\nto the language) should also support these operations. Strings and\ntuples are immutable sequence types: such objects cannot be modified\nonce created. The following operations are defined on mutable sequence\ntypes (where *x* is an arbitrary object):\n\n+--------------------------------+----------------------------------+-----------------------+\n| Operation | Result | Notes |\n+================================+==================================+=======================+\n| ``s[i] = x`` | item *i* of *s* is replaced by | |\n| | *x* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s[i:j] = t`` | slice of *s* from *i* to *j* is | |\n| | replaced by the contents of the | |\n| | iterable *t* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``del s[i:j]`` | same as ``s[i:j] = []`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s[i:j:k] = t`` | the elements of ``s[i:j:k]`` are | (1) |\n| | replaced by those of *t* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``del s[i:j:k]`` | removes the elements of | |\n| | ``s[i:j:k]`` from the list | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.append(x)`` | same as ``s[len(s):len(s)] = | (2) |\n| | [x]`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.extend(x)`` | same as ``s[len(s):len(s)] = x`` | (3) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.count(x)`` | return number of *i*\'s for which | |\n| | ``s[i] == x`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.index(x[, i[, j]])`` | return smallest *k* such that | (4) |\n| | ``s[k] == x`` and ``i <= k < j`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.insert(i, x)`` | same as ``s[i:i] = [x]`` | (5) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.pop([i])`` | same as ``x = s[i]; del s[i]; | (6) |\n| | return x`` | 
|\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.remove(x)`` | same as ``del s[s.index(x)]`` | (4) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.reverse()`` | reverses the items of *s* in | (7) |\n| | place | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.sort([cmp[, key[, | sort the items of *s* in place | (7)(8)(9)(10) |\n| reverse]]])`` | | |\n+--------------------------------+----------------------------------+-----------------------+\n\nNotes:\n\n1. *t* must have the same length as the slice it is replacing.\n\n2. The C implementation of Python has historically accepted multiple\n parameters and implicitly joined them into a tuple; this no longer\n works in Python 2.0. Use of this misfeature has been deprecated\n since Python 1.4.\n\n3. *x* can be any iterable object.\n\n4. Raises ``ValueError`` when *x* is not found in *s*. When a negative\n index is passed as the second or third parameter to the ``index()``\n method, the list length is added, as for slice indices. If it is\n still negative, it is truncated to zero, as for slice indices.\n\n Changed in version 2.3: Previously, ``index()`` didn\'t have\n arguments for specifying start and stop positions.\n\n5. When a negative index is passed as the first parameter to the\n ``insert()`` method, the list length is added, as for slice\n indices. If it is still negative, it is truncated to zero, as for\n slice indices.\n\n Changed in version 2.3: Previously, all negative indices were\n truncated to zero.\n\n6. The ``pop()`` method is only supported by the list and array types.\n The optional argument *i* defaults to ``-1``, so that by default\n the last item is removed and returned.\n\n7. The ``sort()`` and ``reverse()`` methods modify the list in place\n for economy of space when sorting or reversing a large list. To\n remind you that they operate by side effect, they don\'t return the\n sorted or reversed list.\n\n8. The ``sort()`` method takes optional arguments for controlling the\n comparisons.\n\n *cmp* specifies a custom comparison function of two arguments (list\n items) which should return a negative, zero or positive number\n depending on whether the first argument is considered smaller than,\n equal to, or larger than the second argument: ``cmp=lambda x,y:\n cmp(x.lower(), y.lower())``. The default value is ``None``.\n\n *key* specifies a function of one argument that is used to extract\n a comparison key from each list element: ``key=str.lower``. The\n default value is ``None``.\n\n *reverse* is a boolean value. If set to ``True``, then the list\n elements are sorted as if each comparison were reversed.\n\n In general, the *key* and *reverse* conversion processes are much\n faster than specifying an equivalent *cmp* function. This is\n because *cmp* is called multiple times for each list element while\n *key* and *reverse* touch each element only once. Use\n ``functools.cmp_to_key()`` to convert an old-style *cmp* function\n to a *key* function.\n\n Changed in version 2.3: Support for ``None`` as an equivalent to\n omitting *cmp* was added.\n\n Changed in version 2.4: Support for *key* and *reverse* was added.\n\n9. Starting with Python 2.3, the ``sort()`` method is guaranteed to be\n stable. 
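Concretely (a small sketch with arbitrary data, using the *key* argument from the previous note), items whose keys compare equal keep their original relative order:

   >>> pairs = [('b', 2), ('a', 1), ('b', 1)]
   >>> pairs.sort(key=lambda item: item[0])
   >>> pairs                         # the two 'b' items keep their original order
   [('a', 1), ('b', 2), ('b', 1)]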
A sort is stable if it guarantees not to change the\n relative order of elements that compare equal --- this is helpful\n for sorting in multiple passes (for example, sort by department,\n then by salary grade).\n\n10. **CPython implementation detail:** While a list is being sorted,\n the effect of attempting to mutate, or even inspect, the list is\n undefined. The C implementation of Python 2.3 and newer makes the\n list appear empty for the duration, and raises ``ValueError`` if\n it can detect that the list has been mutated during a sort.\n', - 'typesseq-mutable': u"\nMutable Sequence Types\n**********************\n\nList objects support additional operations that allow in-place\nmodification of the object. Other mutable sequence types (when added\nto the language) should also support these operations. Strings and\ntuples are immutable sequence types: such objects cannot be modified\nonce created. The following operations are defined on mutable sequence\ntypes (where *x* is an arbitrary object):\n\n+--------------------------------+----------------------------------+-----------------------+\n| Operation | Result | Notes |\n+================================+==================================+=======================+\n| ``s[i] = x`` | item *i* of *s* is replaced by | |\n| | *x* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s[i:j] = t`` | slice of *s* from *i* to *j* is | |\n| | replaced by the contents of the | |\n| | iterable *t* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``del s[i:j]`` | same as ``s[i:j] = []`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s[i:j:k] = t`` | the elements of ``s[i:j:k]`` are | (1) |\n| | replaced by those of *t* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``del s[i:j:k]`` | removes the elements of | |\n| | ``s[i:j:k]`` from the list | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.append(x)`` | same as ``s[len(s):len(s)] = | (2) |\n| | [x]`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.extend(x)`` | same as ``s[len(s):len(s)] = x`` | (3) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.count(x)`` | return number of *i*'s for which | |\n| | ``s[i] == x`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.index(x[, i[, j]])`` | return smallest *k* such that | (4) |\n| | ``s[k] == x`` and ``i <= k < j`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.insert(i, x)`` | same as ``s[i:i] = [x]`` | (5) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.pop([i])`` | same as ``x = s[i]; del s[i]; | (6) |\n| | return x`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.remove(x)`` | same as ``del s[s.index(x)]`` | (4) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.reverse()`` | reverses the items of *s* in | (7) |\n| | place | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.sort([cmp[, key[, | sort the items of *s* in place | (7)(8)(9)(10) 
|\n| reverse]]])`` | | |\n+--------------------------------+----------------------------------+-----------------------+\n\nNotes:\n\n1. *t* must have the same length as the slice it is replacing.\n\n2. The C implementation of Python has historically accepted multiple\n parameters and implicitly joined them into a tuple; this no longer\n works in Python 2.0. Use of this misfeature has been deprecated\n since Python 1.4.\n\n3. *x* can be any iterable object.\n\n4. Raises ``ValueError`` when *x* is not found in *s*. When a negative\n index is passed as the second or third parameter to the ``index()``\n method, the list length is added, as for slice indices. If it is\n still negative, it is truncated to zero, as for slice indices.\n\n Changed in version 2.3: Previously, ``index()`` didn't have\n arguments for specifying start and stop positions.\n\n5. When a negative index is passed as the first parameter to the\n ``insert()`` method, the list length is added, as for slice\n indices. If it is still negative, it is truncated to zero, as for\n slice indices.\n\n Changed in version 2.3: Previously, all negative indices were\n truncated to zero.\n\n6. The ``pop()`` method is only supported by the list and array types.\n The optional argument *i* defaults to ``-1``, so that by default\n the last item is removed and returned.\n\n7. The ``sort()`` and ``reverse()`` methods modify the list in place\n for economy of space when sorting or reversing a large list. To\n remind you that they operate by side effect, they don't return the\n sorted or reversed list.\n\n8. The ``sort()`` method takes optional arguments for controlling the\n comparisons.\n\n *cmp* specifies a custom comparison function of two arguments (list\n items) which should return a negative, zero or positive number\n depending on whether the first argument is considered smaller than,\n equal to, or larger than the second argument: ``cmp=lambda x,y:\n cmp(x.lower(), y.lower())``. The default value is ``None``.\n\n *key* specifies a function of one argument that is used to extract\n a comparison key from each list element: ``key=str.lower``. The\n default value is ``None``.\n\n *reverse* is a boolean value. If set to ``True``, then the list\n elements are sorted as if each comparison were reversed.\n\n In general, the *key* and *reverse* conversion processes are much\n faster than specifying an equivalent *cmp* function. This is\n because *cmp* is called multiple times for each list element while\n *key* and *reverse* touch each element only once. Use\n ``functools.cmp_to_key()`` to convert an old-style *cmp* function\n to a *key* function.\n\n Changed in version 2.3: Support for ``None`` as an equivalent to\n omitting *cmp* was added.\n\n Changed in version 2.4: Support for *key* and *reverse* was added.\n\n9. Starting with Python 2.3, the ``sort()`` method is guaranteed to be\n stable. A sort is stable if it guarantees not to change the\n relative order of elements that compare equal --- this is helpful\n for sorting in multiple passes (for example, sort by department,\n then by salary grade).\n\n10. **CPython implementation detail:** While a list is being sorted,\n the effect of attempting to mutate, or even inspect, the list is\n undefined. 
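[Editorial aside on notes 8 and 9 above (the *cmp*/*key* arguments and sort stability): a minimal doctest-style sketch of the documented behaviour; the sample data is made up for illustration.]

    >>> import functools
    >>> words = ['banana', 'Apple', 'cherry']
    >>> sorted(words, key=str.lower)                        # key function, touched once per element
    ['Apple', 'banana', 'cherry']
    >>> old_cmp = lambda x, y: cmp(x.lower(), y.lower())
    >>> sorted(words, key=functools.cmp_to_key(old_cmp))    # old-style cmp wrapped as a key
    ['Apple', 'banana', 'cherry']
    >>> pay = [('sales', 2), ('dev', 1), ('sales', 1)]
    >>> pay.sort(key=lambda r: r[1])   # secondary key first ...
    >>> pay.sort(key=lambda r: r[0])   # ... then primary key; stability keeps the salary order
    >>> pay
    [('dev', 1), ('sales', 1), ('sales', 2)]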
The C implementation of Python 2.3 and newer makes the\n list appear empty for the duration, and raises ``ValueError`` if\n it can detect that the list has been mutated during a sort.\n", + 'typesseq': u'\nSequence Types --- ``str``, ``unicode``, ``list``, ``tuple``, ``bytearray``, ``buffer``, ``xrange``\n***************************************************************************************************\n\nThere are seven sequence types: strings, Unicode strings, lists,\ntuples, bytearrays, buffers, and xrange objects.\n\nFor other containers see the built in ``dict`` and ``set`` classes,\nand the ``collections`` module.\n\nString literals are written in single or double quotes: ``\'xyzzy\'``,\n``"frobozz"``. See *String literals* for more about string literals.\nUnicode strings are much like strings, but are specified in the syntax\nusing a preceding ``\'u\'`` character: ``u\'abc\'``, ``u"def"``. In\naddition to the functionality described here, there are also string-\nspecific methods described in the *String Methods* section. Lists are\nconstructed with square brackets, separating items with commas: ``[a,\nb, c]``. Tuples are constructed by the comma operator (not within\nsquare brackets), with or without enclosing parentheses, but an empty\ntuple must have the enclosing parentheses, such as ``a, b, c`` or\n``()``. A single item tuple must have a trailing comma, such as\n``(d,)``.\n\nBytearray objects are created with the built-in function\n``bytearray()``.\n\nBuffer objects are not directly supported by Python syntax, but can be\ncreated by calling the built-in function ``buffer()``. They don\'t\nsupport concatenation or repetition.\n\nObjects of type xrange are similar to buffers in that there is no\nspecific syntax to create them, but they are created using the\n``xrange()`` function. They don\'t support slicing, concatenation or\nrepetition, and using ``in``, ``not in``, ``min()`` or ``max()`` on\nthem is inefficient.\n\nMost sequence types support the following operations. The ``in`` and\n``not in`` operations have the same priorities as the comparison\noperations. The ``+`` and ``*`` operations have the same priority as\nthe corresponding numeric operations. [3] Additional methods are\nprovided for *Mutable Sequence Types*.\n\nThis table lists the sequence operations sorted in ascending priority\n(operations in the same box have the same priority). 
In the table,\n*s* and *t* are sequences of the same type; *n*, *i* and *j* are\nintegers:\n\n+--------------------+----------------------------------+------------+\n| Operation | Result | Notes |\n+====================+==================================+============+\n| ``x in s`` | ``True`` if an item of *s* is | (1) |\n| | equal to *x*, else ``False`` | |\n+--------------------+----------------------------------+------------+\n| ``x not in s`` | ``False`` if an item of *s* is | (1) |\n| | equal to *x*, else ``True`` | |\n+--------------------+----------------------------------+------------+\n| ``s + t`` | the concatenation of *s* and *t* | (6) |\n+--------------------+----------------------------------+------------+\n| ``s * n, n * s`` | *n* shallow copies of *s* | (2) |\n| | concatenated | |\n+--------------------+----------------------------------+------------+\n| ``s[i]`` | *i*\'th item of *s*, origin 0 | (3) |\n+--------------------+----------------------------------+------------+\n| ``s[i:j]`` | slice of *s* from *i* to *j* | (3)(4) |\n+--------------------+----------------------------------+------------+\n| ``s[i:j:k]`` | slice of *s* from *i* to *j* | (3)(5) |\n| | with step *k* | |\n+--------------------+----------------------------------+------------+\n| ``len(s)`` | length of *s* | |\n+--------------------+----------------------------------+------------+\n| ``min(s)`` | smallest item of *s* | |\n+--------------------+----------------------------------+------------+\n| ``max(s)`` | largest item of *s* | |\n+--------------------+----------------------------------+------------+\n| ``s.index(i)`` | index of the first occurence of | |\n| | *i* in *s* | |\n+--------------------+----------------------------------+------------+\n| ``s.count(i)`` | total number of occurences of | |\n| | *i* in *s* | |\n+--------------------+----------------------------------+------------+\n\nSequence types also support comparisons. In particular, tuples and\nlists are compared lexicographically by comparing corresponding\nelements. This means that to compare equal, every element must compare\nequal and the two sequences must be of the same type and have the same\nlength. (For full details see *Comparisons* in the language\nreference.)\n\nNotes:\n\n1. When *s* is a string or Unicode string object the ``in`` and ``not\n in`` operations act like a substring test. In Python versions\n before 2.3, *x* had to be a string of length 1. In Python 2.3 and\n beyond, *x* may be a string of any length.\n\n2. Values of *n* less than ``0`` are treated as ``0`` (which yields an\n empty sequence of the same type as *s*). Note also that the copies\n are shallow; nested structures are not copied. This often haunts\n new Python programmers; consider:\n\n >>> lists = [[]] * 3\n >>> lists\n [[], [], []]\n >>> lists[0].append(3)\n >>> lists\n [[3], [3], [3]]\n\n What has happened is that ``[[]]`` is a one-element list containing\n an empty list, so all three elements of ``[[]] * 3`` are (pointers\n to) this single empty list. Modifying any of the elements of\n ``lists`` modifies this single list. You can create a list of\n different lists this way:\n\n >>> lists = [[] for i in range(3)]\n >>> lists[0].append(3)\n >>> lists[1].append(5)\n >>> lists[2].append(7)\n >>> lists\n [[3], [5], [7]]\n\n3. If *i* or *j* is negative, the index is relative to the end of the\n string: ``len(s) + i`` or ``len(s) + j`` is substituted. But note\n that ``-0`` is still ``0``.\n\n4. 
The slice of *s* from *i* to *j* is defined as the sequence of\n items with index *k* such that ``i <= k < j``. If *i* or *j* is\n greater than ``len(s)``, use ``len(s)``. If *i* is omitted or\n ``None``, use ``0``. If *j* is omitted or ``None``, use\n ``len(s)``. If *i* is greater than or equal to *j*, the slice is\n empty.\n\n5. The slice of *s* from *i* to *j* with step *k* is defined as the\n sequence of items with index ``x = i + n*k`` such that ``0 <= n <\n (j-i)/k``. In other words, the indices are ``i``, ``i+k``,\n ``i+2*k``, ``i+3*k`` and so on, stopping when *j* is reached (but\n never including *j*). If *i* or *j* is greater than ``len(s)``,\n use ``len(s)``. If *i* or *j* are omitted or ``None``, they become\n "end" values (which end depends on the sign of *k*). Note, *k*\n cannot be zero. If *k* is ``None``, it is treated like ``1``.\n\n6. **CPython implementation detail:** If *s* and *t* are both strings,\n some Python implementations such as CPython can usually perform an\n in-place optimization for assignments of the form ``s = s + t`` or\n ``s += t``. When applicable, this optimization makes quadratic\n run-time much less likely. This optimization is both version and\n implementation dependent. For performance sensitive code, it is\n preferable to use the ``str.join()`` method which assures\n consistent linear concatenation performance across versions and\n implementations.\n\n Changed in version 2.4: Formerly, string concatenation never\n occurred in-place.\n\n\nString Methods\n==============\n\nBelow are listed the string methods which both 8-bit strings and\nUnicode objects support. Some of them are also available on\n``bytearray`` objects.\n\nIn addition, Python\'s strings support the sequence type methods\ndescribed in the *Sequence Types --- str, unicode, list, tuple,\nbytearray, buffer, xrange* section. To output formatted strings use\ntemplate strings or the ``%`` operator described in the *String\nFormatting Operations* section. Also, see the ``re`` module for string\nfunctions based on regular expressions.\n\nstr.capitalize()\n\n Return a copy of the string with its first character capitalized\n and the rest lowercased.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.center(width[, fillchar])\n\n Return centered in a string of length *width*. Padding is done\n using the specified *fillchar* (default is a space).\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.count(sub[, start[, end]])\n\n Return the number of non-overlapping occurrences of substring *sub*\n in the range [*start*, *end*]. Optional arguments *start* and\n *end* are interpreted as in slice notation.\n\nstr.decode([encoding[, errors]])\n\n Decodes the string using the codec registered for *encoding*.\n *encoding* defaults to the default string encoding. *errors* may\n be given to set a different error handling scheme. The default is\n ``\'strict\'``, meaning that encoding errors raise ``UnicodeError``.\n Other possible values are ``\'ignore\'``, ``\'replace\'`` and any other\n name registered via ``codecs.register_error()``, see section *Codec\n Base Classes*.\n\n New in version 2.2.\n\n Changed in version 2.3: Support for other error handling schemes\n added.\n\n Changed in version 2.7: Support for keyword arguments added.\n\nstr.encode([encoding[, errors]])\n\n Return an encoded version of the string. Default encoding is the\n current default string encoding. *errors* may be given to set a\n different error handling scheme. 
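[Editorial aside on the ``decode()``/``encode()`` entries above: a short doctest-style illustration using the standard ``utf-8`` and ``ascii`` codecs and the ``'replace'`` error handler mentioned there; the sample string is arbitrary.]

    >>> u = u'caf\xe9'
    >>> s = u.encode('utf-8')
    >>> s
    'caf\xc3\xa9'
    >>> s.decode('utf-8') == u
    True
    >>> 'caf\xc3\xa9'.decode('ascii', 'replace')    # undecodable bytes become U+FFFD
    u'caf\ufffd\ufffd'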
The default for *errors* is\n ``\'strict\'``, meaning that encoding errors raise a\n ``UnicodeError``. Other possible values are ``\'ignore\'``,\n ``\'replace\'``, ``\'xmlcharrefreplace\'``, ``\'backslashreplace\'`` and\n any other name registered via ``codecs.register_error()``, see\n section *Codec Base Classes*. For a list of possible encodings, see\n section *Standard Encodings*.\n\n New in version 2.0.\n\n Changed in version 2.3: Support for ``\'xmlcharrefreplace\'`` and\n ``\'backslashreplace\'`` and other error handling schemes added.\n\n Changed in version 2.7: Support for keyword arguments added.\n\nstr.endswith(suffix[, start[, end]])\n\n Return ``True`` if the string ends with the specified *suffix*,\n otherwise return ``False``. *suffix* can also be a tuple of\n suffixes to look for. With optional *start*, test beginning at\n that position. With optional *end*, stop comparing at that\n position.\n\n Changed in version 2.5: Accept tuples as *suffix*.\n\nstr.expandtabs([tabsize])\n\n Return a copy of the string where all tab characters are replaced\n by one or more spaces, depending on the current column and the\n given tab size. The column number is reset to zero after each\n newline occurring in the string. If *tabsize* is not given, a tab\n size of ``8`` characters is assumed. This doesn\'t understand other\n non-printing characters or escape sequences.\n\nstr.find(sub[, start[, end]])\n\n Return the lowest index in the string where substring *sub* is\n found, such that *sub* is contained in the slice ``s[start:end]``.\n Optional arguments *start* and *end* are interpreted as in slice\n notation. Return ``-1`` if *sub* is not found.\n\n Note: The ``find()`` method should be used only if you need to know the\n position of *sub*. To check if *sub* is a substring or not, use\n the ``in`` operator:\n\n >>> \'Py\' in \'Python\'\n True\n\nstr.format(*args, **kwargs)\n\n Perform a string formatting operation. The string on which this\n method is called can contain literal text or replacement fields\n delimited by braces ``{}``. Each replacement field contains either\n the numeric index of a positional argument, or the name of a\n keyword argument. 
Returns a copy of the string where each\n replacement field is replaced with the string value of the\n corresponding argument.\n\n >>> "The sum of 1 + 2 is {0}".format(1+2)\n \'The sum of 1 + 2 is 3\'\n\n See *Format String Syntax* for a description of the various\n formatting options that can be specified in format strings.\n\n This method of string formatting is the new standard in Python 3.0,\n and should be preferred to the ``%`` formatting described in\n *String Formatting Operations* in new code.\n\n New in version 2.6.\n\nstr.index(sub[, start[, end]])\n\n Like ``find()``, but raise ``ValueError`` when the substring is not\n found.\n\nstr.isalnum()\n\n Return true if all characters in the string are alphanumeric and\n there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isalpha()\n\n Return true if all characters in the string are alphabetic and\n there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isdigit()\n\n Return true if all characters in the string are digits and there is\n at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.islower()\n\n Return true if all cased characters in the string are lowercase and\n there is at least one cased character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isspace()\n\n Return true if there are only whitespace characters in the string\n and there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.istitle()\n\n Return true if the string is a titlecased string and there is at\n least one character, for example uppercase characters may only\n follow uncased characters and lowercase characters only cased ones.\n Return false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isupper()\n\n Return true if all cased characters in the string are uppercase and\n there is at least one cased character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.join(iterable)\n\n Return a string which is the concatenation of the strings in the\n *iterable* *iterable*. The separator between elements is the\n string providing this method.\n\nstr.ljust(width[, fillchar])\n\n Return the string left justified in a string of length *width*.\n Padding is done using the specified *fillchar* (default is a\n space). The original string is returned if *width* is less than\n ``len(s)``.\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.lower()\n\n Return a copy of the string converted to lowercase.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.lstrip([chars])\n\n Return a copy of the string with leading characters removed. The\n *chars* argument is a string specifying the set of characters to be\n removed. If omitted or ``None``, the *chars* argument defaults to\n removing whitespace. The *chars* argument is not a prefix; rather,\n all combinations of its values are stripped:\n\n >>> \' spacious \'.lstrip()\n \'spacious \'\n >>> \'www.example.com\'.lstrip(\'cmowz.\')\n \'example.com\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.partition(sep)\n\n Split the string at the first occurrence of *sep*, and return a\n 3-tuple containing the part before the separator, the separator\n itself, and the part after the separator. 
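[Editorial aside on the ``join()`` and ``partition()`` entries above: a doctest-style sketch with made-up sample strings.]

    >>> '-'.join(['2012', '02', '01'])
    '2012-02-01'
    >>> 'key=value=more'.partition('=')
    ('key', '=', 'value=more')
    >>> 'no separator here'.partition('=')
    ('no separator here', '', '')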
If the separator is not\n found, return a 3-tuple containing the string itself, followed by\n two empty strings.\n\n New in version 2.5.\n\nstr.replace(old, new[, count])\n\n Return a copy of the string with all occurrences of substring *old*\n replaced by *new*. If the optional argument *count* is given, only\n the first *count* occurrences are replaced.\n\nstr.rfind(sub[, start[, end]])\n\n Return the highest index in the string where substring *sub* is\n found, such that *sub* is contained within ``s[start:end]``.\n Optional arguments *start* and *end* are interpreted as in slice\n notation. Return ``-1`` on failure.\n\nstr.rindex(sub[, start[, end]])\n\n Like ``rfind()`` but raises ``ValueError`` when the substring *sub*\n is not found.\n\nstr.rjust(width[, fillchar])\n\n Return the string right justified in a string of length *width*.\n Padding is done using the specified *fillchar* (default is a\n space). The original string is returned if *width* is less than\n ``len(s)``.\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.rpartition(sep)\n\n Split the string at the last occurrence of *sep*, and return a\n 3-tuple containing the part before the separator, the separator\n itself, and the part after the separator. If the separator is not\n found, return a 3-tuple containing two empty strings, followed by\n the string itself.\n\n New in version 2.5.\n\nstr.rsplit([sep[, maxsplit]])\n\n Return a list of the words in the string, using *sep* as the\n delimiter string. If *maxsplit* is given, at most *maxsplit* splits\n are done, the *rightmost* ones. If *sep* is not specified or\n ``None``, any whitespace string is a separator. Except for\n splitting from the right, ``rsplit()`` behaves like ``split()``\n which is described in detail below.\n\n New in version 2.4.\n\nstr.rstrip([chars])\n\n Return a copy of the string with trailing characters removed. The\n *chars* argument is a string specifying the set of characters to be\n removed. If omitted or ``None``, the *chars* argument defaults to\n removing whitespace. The *chars* argument is not a suffix; rather,\n all combinations of its values are stripped:\n\n >>> \' spacious \'.rstrip()\n \' spacious\'\n >>> \'mississippi\'.rstrip(\'ipz\')\n \'mississ\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.split([sep[, maxsplit]])\n\n Return a list of the words in the string, using *sep* as the\n delimiter string. If *maxsplit* is given, at most *maxsplit*\n splits are done (thus, the list will have at most ``maxsplit+1``\n elements). If *maxsplit* is not specified, then there is no limit\n on the number of splits (all possible splits are made).\n\n If *sep* is given, consecutive delimiters are not grouped together\n and are deemed to delimit empty strings (for example,\n ``\'1,,2\'.split(\',\')`` returns ``[\'1\', \'\', \'2\']``). The *sep*\n argument may consist of multiple characters (for example,\n ``\'1<>2<>3\'.split(\'<>\')`` returns ``[\'1\', \'2\', \'3\']``). Splitting\n an empty string with a specified separator returns ``[\'\']``.\n\n If *sep* is not specified or is ``None``, a different splitting\n algorithm is applied: runs of consecutive whitespace are regarded\n as a single separator, and the result will contain no empty strings\n at the start or end if the string has leading or trailing\n whitespace. 
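[Editorial aside to make the ``split()``/``rsplit()`` contrast above concrete; the sample string is invented for illustration.]

    >>> 'a,b,c,d'.split(',', 1)     # at most one split, taken from the left
    ['a', 'b,c,d']
    >>> 'a,b,c,d'.rsplit(',', 1)    # at most one split, taken from the right
    ['a,b,c', 'd']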
Consequently, splitting an empty string or a string\n consisting of just whitespace with a ``None`` separator returns\n ``[]``.\n\n For example, ``\' 1 2 3 \'.split()`` returns ``[\'1\', \'2\', \'3\']``,\n and ``\' 1 2 3 \'.split(None, 1)`` returns ``[\'1\', \'2 3 \']``.\n\nstr.splitlines([keepends])\n\n Return a list of the lines in the string, breaking at line\n boundaries. Line breaks are not included in the resulting list\n unless *keepends* is given and true.\n\nstr.startswith(prefix[, start[, end]])\n\n Return ``True`` if string starts with the *prefix*, otherwise\n return ``False``. *prefix* can also be a tuple of prefixes to look\n for. With optional *start*, test string beginning at that\n position. With optional *end*, stop comparing string at that\n position.\n\n Changed in version 2.5: Accept tuples as *prefix*.\n\nstr.strip([chars])\n\n Return a copy of the string with the leading and trailing\n characters removed. The *chars* argument is a string specifying the\n set of characters to be removed. If omitted or ``None``, the\n *chars* argument defaults to removing whitespace. The *chars*\n argument is not a prefix or suffix; rather, all combinations of its\n values are stripped:\n\n >>> \' spacious \'.strip()\n \'spacious\'\n >>> \'www.example.com\'.strip(\'cmowz.\')\n \'example\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.swapcase()\n\n Return a copy of the string with uppercase characters converted to\n lowercase and vice versa.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.title()\n\n Return a titlecased version of the string where words start with an\n uppercase character and the remaining characters are lowercase.\n\n The algorithm uses a simple language-independent definition of a\n word as groups of consecutive letters. The definition works in\n many contexts but it means that apostrophes in contractions and\n possessives form word boundaries, which may not be the desired\n result:\n\n >>> "they\'re bill\'s friends from the UK".title()\n "They\'Re Bill\'S Friends From The Uk"\n\n A workaround for apostrophes can be constructed using regular\n expressions:\n\n >>> import re\n >>> def titlecase(s):\n return re.sub(r"[A-Za-z]+(\'[A-Za-z]+)?",\n lambda mo: mo.group(0)[0].upper() +\n mo.group(0)[1:].lower(),\n s)\n\n >>> titlecase("they\'re bill\'s friends.")\n "They\'re Bill\'s Friends."\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.translate(table[, deletechars])\n\n Return a copy of the string where all characters occurring in the\n optional argument *deletechars* are removed, and the remaining\n characters have been mapped through the given translation table,\n which must be a string of length 256.\n\n You can use the ``maketrans()`` helper function in the ``string``\n module to create a translation table. For string objects, set the\n *table* argument to ``None`` for translations that only delete\n characters:\n\n >>> \'read this short text\'.translate(None, \'aeiou\')\n \'rd ths shrt txt\'\n\n New in version 2.6: Support for a ``None`` *table* argument.\n\n For Unicode objects, the ``translate()`` method does not accept the\n optional *deletechars* argument. Instead, it returns a copy of the\n *s* where all characters have been mapped through the given\n translation table which must be a mapping of Unicode ordinals to\n Unicode ordinals, Unicode strings or ``None``. Unmapped characters\n are left untouched. 
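[Editorial aside: since the ``translate()`` entry above only shows the ``None``-table form, here is a short doctest-style sketch using the ``string.maketrans()`` helper it mentions; the mapping is arbitrary.]

    >>> import string
    >>> table = string.maketrans('abc', 'xyz')
    >>> 'abcabc'.translate(table)
    'xyzxyz'
    >>> 'abcabc'.translate(table, 'b')    # delete 'b' first, then translate the rest
    'xzxz'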
Characters mapped to ``None`` are deleted.\n Note, a more flexible approach is to create a custom character\n mapping codec using the ``codecs`` module (see ``encodings.cp1251``\n for an example).\n\nstr.upper()\n\n Return a copy of the string converted to uppercase.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.zfill(width)\n\n Return the numeric string left filled with zeros in a string of\n length *width*. A sign prefix is handled correctly. The original\n string is returned if *width* is less than ``len(s)``.\n\n New in version 2.2.2.\n\nThe following methods are present only on unicode objects:\n\nunicode.isnumeric()\n\n Return ``True`` if there are only numeric characters in S,\n ``False`` otherwise. Numeric characters include digit characters,\n and all characters that have the Unicode numeric value property,\n e.g. U+2155, VULGAR FRACTION ONE FIFTH.\n\nunicode.isdecimal()\n\n Return ``True`` if there are only decimal characters in S,\n ``False`` otherwise. Decimal characters include digit characters,\n and all characters that that can be used to form decimal-radix\n numbers, e.g. U+0660, ARABIC-INDIC DIGIT ZERO.\n\n\nString Formatting Operations\n============================\n\nString and Unicode objects have one unique built-in operation: the\n``%`` operator (modulo). This is also known as the string\n*formatting* or *interpolation* operator. Given ``format % values``\n(where *format* is a string or Unicode object), ``%`` conversion\nspecifications in *format* are replaced with zero or more elements of\n*values*. The effect is similar to the using ``sprintf()`` in the C\nlanguage. If *format* is a Unicode object, or if any of the objects\nbeing converted using the ``%s`` conversion are Unicode objects, the\nresult will also be a Unicode object.\n\nIf *format* requires a single argument, *values* may be a single non-\ntuple object. [4] Otherwise, *values* must be a tuple with exactly\nthe number of items specified by the format string, or a single\nmapping object (for example, a dictionary).\n\nA conversion specifier contains two or more characters and has the\nfollowing components, which must occur in this order:\n\n1. The ``\'%\'`` character, which marks the start of the specifier.\n\n2. Mapping key (optional), consisting of a parenthesised sequence of\n characters (for example, ``(somename)``).\n\n3. Conversion flags (optional), which affect the result of some\n conversion types.\n\n4. Minimum field width (optional). If specified as an ``\'*\'``\n (asterisk), the actual width is read from the next element of the\n tuple in *values*, and the object to convert comes after the\n minimum field width and optional precision.\n\n5. Precision (optional), given as a ``\'.\'`` (dot) followed by the\n precision. If specified as ``\'*\'`` (an asterisk), the actual width\n is read from the next element of the tuple in *values*, and the\n value to convert comes after the precision.\n\n6. Length modifier (optional).\n\n7. Conversion type.\n\nWhen the right argument is a dictionary (or other mapping type), then\nthe formats in the string *must* include a parenthesised mapping key\ninto that dictionary inserted immediately after the ``\'%\'`` character.\nThe mapping key selects the value to be formatted from the mapping.\nFor example:\n\n>>> print \'%(language)s has %(number)03d quote types.\' % \\\n... 
{"language": "Python", "number": 2}\nPython has 002 quote types.\n\nIn this case no ``*`` specifiers may occur in a format (since they\nrequire a sequential parameter list).\n\nThe conversion flag characters are:\n\n+-----------+-----------------------------------------------------------------------+\n| Flag | Meaning |\n+===========+=======================================================================+\n| ``\'#\'`` | The value conversion will use the "alternate form" (where defined |\n| | below). |\n+-----------+-----------------------------------------------------------------------+\n| ``\'0\'`` | The conversion will be zero padded for numeric values. |\n+-----------+-----------------------------------------------------------------------+\n| ``\'-\'`` | The converted value is left adjusted (overrides the ``\'0\'`` |\n| | conversion if both are given). |\n+-----------+-----------------------------------------------------------------------+\n| ``\' \'`` | (a space) A blank should be left before a positive number (or empty |\n| | string) produced by a signed conversion. |\n+-----------+-----------------------------------------------------------------------+\n| ``\'+\'`` | A sign character (``\'+\'`` or ``\'-\'``) will precede the conversion |\n| | (overrides a "space" flag). |\n+-----------+-----------------------------------------------------------------------+\n\nA length modifier (``h``, ``l``, or ``L``) may be present, but is\nignored as it is not necessary for Python -- so e.g. ``%ld`` is\nidentical to ``%d``.\n\nThe conversion types are:\n\n+--------------+-------------------------------------------------------+---------+\n| Conversion | Meaning | Notes |\n+==============+=======================================================+=========+\n| ``\'d\'`` | Signed integer decimal. | |\n+--------------+-------------------------------------------------------+---------+\n| ``\'i\'`` | Signed integer decimal. | |\n+--------------+-------------------------------------------------------+---------+\n| ``\'o\'`` | Signed octal value. | (1) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'u\'`` | Obsolete type -- it is identical to ``\'d\'``. | (7) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'x\'`` | Signed hexadecimal (lowercase). | (2) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'X\'`` | Signed hexadecimal (uppercase). | (2) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'e\'`` | Floating point exponential format (lowercase). | (3) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'E\'`` | Floating point exponential format (uppercase). | (3) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'f\'`` | Floating point decimal format. | (3) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'F\'`` | Floating point decimal format. | (3) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'g\'`` | Floating point format. Uses lowercase exponential | (4) |\n| | format if exponent is less than -4 or not less than | |\n| | precision, decimal format otherwise. | |\n+--------------+-------------------------------------------------------+---------+\n| ``\'G\'`` | Floating point format. 
Uses uppercase exponential | (4) |\n| | format if exponent is less than -4 or not less than | |\n| | precision, decimal format otherwise. | |\n+--------------+-------------------------------------------------------+---------+\n| ``\'c\'`` | Single character (accepts integer or single character | |\n| | string). | |\n+--------------+-------------------------------------------------------+---------+\n| ``\'r\'`` | String (converts any Python object using ``repr()``). | (5) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'s\'`` | String (converts any Python object using ``str()``). | (6) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'%\'`` | No argument is converted, results in a ``\'%\'`` | |\n| | character in the result. | |\n+--------------+-------------------------------------------------------+---------+\n\nNotes:\n\n1. The alternate form causes a leading zero (``\'0\'``) to be inserted\n between left-hand padding and the formatting of the number if the\n leading character of the result is not already a zero.\n\n2. The alternate form causes a leading ``\'0x\'`` or ``\'0X\'`` (depending\n on whether the ``\'x\'`` or ``\'X\'`` format was used) to be inserted\n between left-hand padding and the formatting of the number if the\n leading character of the result is not already a zero.\n\n3. The alternate form causes the result to always contain a decimal\n point, even if no digits follow it.\n\n The precision determines the number of digits after the decimal\n point and defaults to 6.\n\n4. The alternate form causes the result to always contain a decimal\n point, and trailing zeroes are not removed as they would otherwise\n be.\n\n The precision determines the number of significant digits before\n and after the decimal point and defaults to 6.\n\n5. The ``%r`` conversion was added in Python 2.0.\n\n The precision determines the maximal number of characters used.\n\n6. If the object or format provided is a ``unicode`` string, the\n resulting string will also be ``unicode``.\n\n The precision determines the maximal number of characters used.\n\n7. See **PEP 237**.\n\nSince Python strings have an explicit length, ``%s`` conversions do\nnot assume that ``\'\\0\'`` is the end of the string.\n\nChanged in version 2.7: ``%f`` conversions for numbers whose absolute\nvalue is over 1e50 are no longer replaced by ``%g`` conversions.\n\nAdditional string operations are defined in standard modules\n``string`` and ``re``.\n\n\nXRange Type\n===========\n\nThe ``xrange`` type is an immutable sequence which is commonly used\nfor looping. The advantage of the ``xrange`` type is that an\n``xrange`` object will always take the same amount of memory, no\nmatter the size of the range it represents. There are no consistent\nperformance advantages.\n\nXRange objects have very little behavior: they only support indexing,\niteration, and the ``len()`` function.\n\n\nMutable Sequence Types\n======================\n\nList and ``bytearray`` objects support additional operations that\nallow in-place modification of the object. Other mutable sequence\ntypes (when added to the language) should also support these\noperations. Strings and tuples are immutable sequence types: such\nobjects cannot be modified once created. 
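[Editorial aside on the conversion flags and types tabulated above: a few doctest-style ``%`` formatting examples; the values are picked arbitrarily.]

    >>> '%#06x' % 255          # alternate form plus zero padding to width 6
    '0x00ff'
    >>> '%+.2f' % 3.14159      # forced sign, precision 2
    '+3.14'
    >>> '%(user)s: %(score)05d' % {'user': 'guido', 'score': 42}
    'guido: 00042'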
The following operations are\ndefined on mutable sequence types (where *x* is an arbitrary object):\n\n+--------------------------------+----------------------------------+-----------------------+\n| Operation | Result | Notes |\n+================================+==================================+=======================+\n| ``s[i] = x`` | item *i* of *s* is replaced by | |\n| | *x* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s[i:j] = t`` | slice of *s* from *i* to *j* is | |\n| | replaced by the contents of the | |\n| | iterable *t* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``del s[i:j]`` | same as ``s[i:j] = []`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s[i:j:k] = t`` | the elements of ``s[i:j:k]`` are | (1) |\n| | replaced by those of *t* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``del s[i:j:k]`` | removes the elements of | |\n| | ``s[i:j:k]`` from the list | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.append(x)`` | same as ``s[len(s):len(s)] = | (2) |\n| | [x]`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.extend(x)`` | same as ``s[len(s):len(s)] = x`` | (3) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.count(x)`` | return number of *i*\'s for which | |\n| | ``s[i] == x`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.index(x[, i[, j]])`` | return smallest *k* such that | (4) |\n| | ``s[k] == x`` and ``i <= k < j`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.insert(i, x)`` | same as ``s[i:i] = [x]`` | (5) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.pop([i])`` | same as ``x = s[i]; del s[i]; | (6) |\n| | return x`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.remove(x)`` | same as ``del s[s.index(x)]`` | (4) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.reverse()`` | reverses the items of *s* in | (7) |\n| | place | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.sort([cmp[, key[, | sort the items of *s* in place | (7)(8)(9)(10) |\n| reverse]]])`` | | |\n+--------------------------------+----------------------------------+-----------------------+\n\nNotes:\n\n1. *t* must have the same length as the slice it is replacing.\n\n2. The C implementation of Python has historically accepted multiple\n parameters and implicitly joined them into a tuple; this no longer\n works in Python 2.0. Use of this misfeature has been deprecated\n since Python 1.4.\n\n3. *x* can be any iterable object.\n\n4. Raises ``ValueError`` when *x* is not found in *s*. When a negative\n index is passed as the second or third parameter to the ``index()``\n method, the list length is added, as for slice indices. If it is\n still negative, it is truncated to zero, as for slice indices.\n\n Changed in version 2.3: Previously, ``index()`` didn\'t have\n arguments for specifying start and stop positions.\n\n5. 
When a negative index is passed as the first parameter to the\n ``insert()`` method, the list length is added, as for slice\n indices. If it is still negative, it is truncated to zero, as for\n slice indices.\n\n Changed in version 2.3: Previously, all negative indices were\n truncated to zero.\n\n6. The ``pop()`` method is only supported by the list and array types.\n The optional argument *i* defaults to ``-1``, so that by default\n the last item is removed and returned.\n\n7. The ``sort()`` and ``reverse()`` methods modify the list in place\n for economy of space when sorting or reversing a large list. To\n remind you that they operate by side effect, they don\'t return the\n sorted or reversed list.\n\n8. The ``sort()`` method takes optional arguments for controlling the\n comparisons.\n\n *cmp* specifies a custom comparison function of two arguments (list\n items) which should return a negative, zero or positive number\n depending on whether the first argument is considered smaller than,\n equal to, or larger than the second argument: ``cmp=lambda x,y:\n cmp(x.lower(), y.lower())``. The default value is ``None``.\n\n *key* specifies a function of one argument that is used to extract\n a comparison key from each list element: ``key=str.lower``. The\n default value is ``None``.\n\n *reverse* is a boolean value. If set to ``True``, then the list\n elements are sorted as if each comparison were reversed.\n\n In general, the *key* and *reverse* conversion processes are much\n faster than specifying an equivalent *cmp* function. This is\n because *cmp* is called multiple times for each list element while\n *key* and *reverse* touch each element only once. Use\n ``functools.cmp_to_key()`` to convert an old-style *cmp* function\n to a *key* function.\n\n Changed in version 2.3: Support for ``None`` as an equivalent to\n omitting *cmp* was added.\n\n Changed in version 2.4: Support for *key* and *reverse* was added.\n\n9. Starting with Python 2.3, the ``sort()`` method is guaranteed to be\n stable. A sort is stable if it guarantees not to change the\n relative order of elements that compare equal --- this is helpful\n for sorting in multiple passes (for example, sort by department,\n then by salary grade).\n\n10. **CPython implementation detail:** While a list is being sorted,\n the effect of attempting to mutate, or even inspect, the list is\n undefined. The C implementation of Python 2.3 and newer makes the\n list appear empty for the duration, and raises ``ValueError`` if\n it can detect that the list has been mutated during a sort.\n', + 'typesseq-mutable': u"\nMutable Sequence Types\n**********************\n\nList and ``bytearray`` objects support additional operations that\nallow in-place modification of the object. Other mutable sequence\ntypes (when added to the language) should also support these\noperations. Strings and tuples are immutable sequence types: such\nobjects cannot be modified once created. 
The following operations are\ndefined on mutable sequence types (where *x* is an arbitrary object):\n\n+--------------------------------+----------------------------------+-----------------------+\n| Operation | Result | Notes |\n+================================+==================================+=======================+\n| ``s[i] = x`` | item *i* of *s* is replaced by | |\n| | *x* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s[i:j] = t`` | slice of *s* from *i* to *j* is | |\n| | replaced by the contents of the | |\n| | iterable *t* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``del s[i:j]`` | same as ``s[i:j] = []`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s[i:j:k] = t`` | the elements of ``s[i:j:k]`` are | (1) |\n| | replaced by those of *t* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``del s[i:j:k]`` | removes the elements of | |\n| | ``s[i:j:k]`` from the list | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.append(x)`` | same as ``s[len(s):len(s)] = | (2) |\n| | [x]`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.extend(x)`` | same as ``s[len(s):len(s)] = x`` | (3) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.count(x)`` | return number of *i*'s for which | |\n| | ``s[i] == x`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.index(x[, i[, j]])`` | return smallest *k* such that | (4) |\n| | ``s[k] == x`` and ``i <= k < j`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.insert(i, x)`` | same as ``s[i:i] = [x]`` | (5) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.pop([i])`` | same as ``x = s[i]; del s[i]; | (6) |\n| | return x`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.remove(x)`` | same as ``del s[s.index(x)]`` | (4) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.reverse()`` | reverses the items of *s* in | (7) |\n| | place | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.sort([cmp[, key[, | sort the items of *s* in place | (7)(8)(9)(10) |\n| reverse]]])`` | | |\n+--------------------------------+----------------------------------+-----------------------+\n\nNotes:\n\n1. *t* must have the same length as the slice it is replacing.\n\n2. The C implementation of Python has historically accepted multiple\n parameters and implicitly joined them into a tuple; this no longer\n works in Python 2.0. Use of this misfeature has been deprecated\n since Python 1.4.\n\n3. *x* can be any iterable object.\n\n4. Raises ``ValueError`` when *x* is not found in *s*. When a negative\n index is passed as the second or third parameter to the ``index()``\n method, the list length is added, as for slice indices. If it is\n still negative, it is truncated to zero, as for slice indices.\n\n Changed in version 2.3: Previously, ``index()`` didn't have\n arguments for specifying start and stop positions.\n\n5. 
When a negative index is passed as the first parameter to the\n ``insert()`` method, the list length is added, as for slice\n indices. If it is still negative, it is truncated to zero, as for\n slice indices.\n\n Changed in version 2.3: Previously, all negative indices were\n truncated to zero.\n\n6. The ``pop()`` method is only supported by the list and array types.\n The optional argument *i* defaults to ``-1``, so that by default\n the last item is removed and returned.\n\n7. The ``sort()`` and ``reverse()`` methods modify the list in place\n for economy of space when sorting or reversing a large list. To\n remind you that they operate by side effect, they don't return the\n sorted or reversed list.\n\n8. The ``sort()`` method takes optional arguments for controlling the\n comparisons.\n\n *cmp* specifies a custom comparison function of two arguments (list\n items) which should return a negative, zero or positive number\n depending on whether the first argument is considered smaller than,\n equal to, or larger than the second argument: ``cmp=lambda x,y:\n cmp(x.lower(), y.lower())``. The default value is ``None``.\n\n *key* specifies a function of one argument that is used to extract\n a comparison key from each list element: ``key=str.lower``. The\n default value is ``None``.\n\n *reverse* is a boolean value. If set to ``True``, then the list\n elements are sorted as if each comparison were reversed.\n\n In general, the *key* and *reverse* conversion processes are much\n faster than specifying an equivalent *cmp* function. This is\n because *cmp* is called multiple times for each list element while\n *key* and *reverse* touch each element only once. Use\n ``functools.cmp_to_key()`` to convert an old-style *cmp* function\n to a *key* function.\n\n Changed in version 2.3: Support for ``None`` as an equivalent to\n omitting *cmp* was added.\n\n Changed in version 2.4: Support for *key* and *reverse* was added.\n\n9. Starting with Python 2.3, the ``sort()`` method is guaranteed to be\n stable. A sort is stable if it guarantees not to change the\n relative order of elements that compare equal --- this is helpful\n for sorting in multiple passes (for example, sort by department,\n then by salary grade).\n\n10. **CPython implementation detail:** While a list is being sorted,\n the effect of attempting to mutate, or even inspect, the list is\n undefined. The C implementation of Python 2.3 and newer makes the\n list appear empty for the duration, and raises ``ValueError`` if\n it can detect that the list has been mutated during a sort.\n", 'unary': u'\nUnary arithmetic and bitwise operations\n***************************************\n\nAll unary arithmetic and bitwise operations have the same priority:\n\n u_expr ::= power | "-" u_expr | "+" u_expr | "~" u_expr\n\nThe unary ``-`` (minus) operator yields the negation of its numeric\nargument.\n\nThe unary ``+`` (plus) operator yields its numeric argument unchanged.\n\nThe unary ``~`` (invert) operator yields the bitwise inversion of its\nplain or long integer argument. The bitwise inversion of ``x`` is\ndefined as ``-(x+1)``. 
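[Editorial aside: a tiny doctest-style check of the ``~`` identity just stated.]

    >>> ~5 == -(5 + 1)
    True
    >>> ~0, ~-1
    (-1, 0)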
It only applies to integral numbers.\n\nIn all three cases, if the argument does not have the proper type, a\n``TypeError`` exception is raised.\n', 'while': u'\nThe ``while`` statement\n***********************\n\nThe ``while`` statement is used for repeated execution as long as an\nexpression is true:\n\n while_stmt ::= "while" expression ":" suite\n ["else" ":" suite]\n\nThis repeatedly tests the expression and, if it is true, executes the\nfirst suite; if the expression is false (which may be the first time\nit is tested) the suite of the ``else`` clause, if present, is\nexecuted and the loop terminates.\n\nA ``break`` statement executed in the first suite terminates the loop\nwithout executing the ``else`` clause\'s suite. A ``continue``\nstatement executed in the first suite skips the rest of the suite and\ngoes back to testing the expression.\n', - 'with': u'\nThe ``with`` statement\n**********************\n\nNew in version 2.5.\n\nThe ``with`` statement is used to wrap the execution of a block with\nmethods defined by a context manager (see section *With Statement\nContext Managers*). This allows common\n``try``...``except``...``finally`` usage patterns to be encapsulated\nfor convenient reuse.\n\n with_stmt ::= "with" with_item ("," with_item)* ":" suite\n with_item ::= expression ["as" target]\n\nThe execution of the ``with`` statement with one "item" proceeds as\nfollows:\n\n1. The context expression is evaluated to obtain a context manager.\n\n2. The context manager\'s ``__exit__()`` is loaded for later use.\n\n3. The context manager\'s ``__enter__()`` method is invoked.\n\n4. If a target was included in the ``with`` statement, the return\n value from ``__enter__()`` is assigned to it.\n\n Note: The ``with`` statement guarantees that if the ``__enter__()``\n method returns without an error, then ``__exit__()`` will always\n be called. Thus, if an error occurs during the assignment to the\n target list, it will be treated the same as an error occurring\n within the suite would be. See step 6 below.\n\n5. The suite is executed.\n\n6. The context manager\'s ``__exit__()`` method is invoked. If an\n exception caused the suite to be exited, its type, value, and\n traceback are passed as arguments to ``__exit__()``. Otherwise,\n three ``None`` arguments are supplied.\n\n If the suite was exited due to an exception, and the return value\n from the ``__exit__()`` method was false, the exception is\n reraised. If the return value was true, the exception is\n suppressed, and execution continues with the statement following\n the ``with`` statement.\n\n If the suite was exited for any reason other than an exception, the\n return value from ``__exit__()`` is ignored, and execution proceeds\n at the normal location for the kind of exit that was taken.\n\nWith more than one item, the context managers are processed as if\nmultiple ``with`` statements were nested:\n\n with A() as a, B() as b:\n suite\n\nis equivalent to\n\n with A() as a:\n with B() as b:\n suite\n\nNote: In Python 2.5, the ``with`` statement is only allowed when the\n ``with_statement`` feature has been enabled. 
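[Editorial aside to illustrate the numbered ``with`` steps above: a minimal hand-written context manager; the class name and the prints are invented for the sketch, and ``__exit__()`` returns a false value so exceptions would not be suppressed.]

    >>> class Managed(object):
    ...     def __enter__(self):
    ...         print 'enter'
    ...         return self
    ...     def __exit__(self, exc_type, exc_value, tb):
    ...         print 'exit'
    ...         return False        # do not suppress exceptions
    ...
    >>> with Managed() as m:
    ...     print 'body'
    ...
    enter
    body
    exit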
It is always enabled\n in Python 2.6.\n\nChanged in version 2.7: Support for multiple context expressions.\n\nSee also:\n\n **PEP 0343** - The "with" statement\n The specification, background, and examples for the Python\n ``with`` statement.\n', + 'with': u'\nThe ``with`` statement\n**********************\n\nNew in version 2.5.\n\nThe ``with`` statement is used to wrap the execution of a block with\nmethods defined by a context manager (see section *With Statement\nContext Managers*). This allows common\n``try``...``except``...``finally`` usage patterns to be encapsulated\nfor convenient reuse.\n\n with_stmt ::= "with" with_item ("," with_item)* ":" suite\n with_item ::= expression ["as" target]\n\nThe execution of the ``with`` statement with one "item" proceeds as\nfollows:\n\n1. The context expression (the expression given in the **with_item**)\n is evaluated to obtain a context manager.\n\n2. The context manager\'s ``__exit__()`` is loaded for later use.\n\n3. The context manager\'s ``__enter__()`` method is invoked.\n\n4. If a target was included in the ``with`` statement, the return\n value from ``__enter__()`` is assigned to it.\n\n Note: The ``with`` statement guarantees that if the ``__enter__()``\n method returns without an error, then ``__exit__()`` will always\n be called. Thus, if an error occurs during the assignment to the\n target list, it will be treated the same as an error occurring\n within the suite would be. See step 6 below.\n\n5. The suite is executed.\n\n6. The context manager\'s ``__exit__()`` method is invoked. If an\n exception caused the suite to be exited, its type, value, and\n traceback are passed as arguments to ``__exit__()``. Otherwise,\n three ``None`` arguments are supplied.\n\n If the suite was exited due to an exception, and the return value\n from the ``__exit__()`` method was false, the exception is\n reraised. If the return value was true, the exception is\n suppressed, and execution continues with the statement following\n the ``with`` statement.\n\n If the suite was exited for any reason other than an exception, the\n return value from ``__exit__()`` is ignored, and execution proceeds\n at the normal location for the kind of exit that was taken.\n\nWith more than one item, the context managers are processed as if\nmultiple ``with`` statements were nested:\n\n with A() as a, B() as b:\n suite\n\nis equivalent to\n\n with A() as a:\n with B() as b:\n suite\n\nNote: In Python 2.5, the ``with`` statement is only allowed when the\n ``with_statement`` feature has been enabled. It is always enabled\n in Python 2.6.\n\nChanged in version 2.7: Support for multiple context expressions.\n\nSee also:\n\n **PEP 0343** - The "with" statement\n The specification, background, and examples for the Python\n ``with`` statement.\n', 'yield': u'\nThe ``yield`` statement\n***********************\n\n yield_stmt ::= yield_expression\n\nThe ``yield`` statement is only used when defining a generator\nfunction, and is only used in the body of the generator function.\nUsing a ``yield`` statement in a function definition is sufficient to\ncause that definition to create a generator function instead of a\nnormal function.\n\nWhen a generator function is called, it returns an iterator known as a\ngenerator iterator, or more commonly, a generator. 
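[Editorial aside: a minimal doctest-style generator, to make the ``yield`` description above concrete; the function is invented for the sketch.]

    >>> def countdown(n):
    ...     while n > 0:
    ...         yield n
    ...         n -= 1
    ...
    >>> it = countdown(3)
    >>> it.next()       # each next() resumes the frozen function body
    3
    >>> list(it)        # exhausting the rest of the generator
    [2, 1]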
The body of the\ngenerator function is executed by calling the generator\'s ``next()``\nmethod repeatedly until it raises an exception.\n\nWhen a ``yield`` statement is executed, the state of the generator is\nfrozen and the value of **expression_list** is returned to\n``next()``\'s caller. By "frozen" we mean that all local state is\nretained, including the current bindings of local variables, the\ninstruction pointer, and the internal evaluation stack: enough\ninformation is saved so that the next time ``next()`` is invoked, the\nfunction can proceed exactly as if the ``yield`` statement were just\nanother external call.\n\nAs of Python version 2.5, the ``yield`` statement is now allowed in\nthe ``try`` clause of a ``try`` ... ``finally`` construct. If the\ngenerator is not resumed before it is finalized (by reaching a zero\nreference count or by being garbage collected), the generator-\niterator\'s ``close()`` method will be called, allowing any pending\n``finally`` clauses to execute.\n\nNote: In Python 2.2, the ``yield`` statement was only allowed when the\n ``generators`` feature has been enabled. This ``__future__`` import\n statement was used to enable the feature:\n\n from __future__ import generators\n\nSee also:\n\n **PEP 0255** - Simple Generators\n The proposal for adding generators and the ``yield`` statement\n to Python.\n\n **PEP 0342** - Coroutines via Enhanced Generators\n The proposal that, among other generator enhancements, proposed\n allowing ``yield`` to appear inside a ``try`` ... ``finally``\n block.\n'} diff --git a/lib-python/2.7/random.py b/lib-python/2.7/random.py --- a/lib-python/2.7/random.py +++ b/lib-python/2.7/random.py @@ -317,7 +317,7 @@ n = len(population) if not 0 <= k <= n: - raise ValueError, "sample larger than population" + raise ValueError("sample larger than population") random = self.random _int = int result = [None] * k @@ -490,6 +490,12 @@ Conditions on the parameters are alpha > 0 and beta > 0. + The probability distribution function is: + + x ** (alpha - 1) * math.exp(-x / beta) + pdf(x) = -------------------------------------- + math.gamma(alpha) * beta ** alpha + """ # alpha > 0, beta > 0, mean is alpha*beta, variance is alpha*beta**2 @@ -592,7 +598,7 @@ ## -------------------- beta -------------------- ## See -## http://sourceforge.net/bugs/?func=detailbug&bug_id=130030&group_id=5470 +## http://mail.python.org/pipermail/python-bugs-list/2001-January/003752.html ## for Ivan Frohne's insightful analysis of why the original implementation: ## ## def betavariate(self, alpha, beta): diff --git a/lib-python/2.7/re.py b/lib-python/2.7/re.py --- a/lib-python/2.7/re.py +++ b/lib-python/2.7/re.py @@ -207,8 +207,7 @@ "Escape all non-alphanumeric characters in pattern." s = list(pattern) alphanum = _alphanum - for i in range(len(pattern)): - c = pattern[i] + for i, c in enumerate(pattern): if c not in alphanum: if c == "\000": s[i] = "\\000" diff --git a/lib-python/2.7/shutil.py b/lib-python/2.7/shutil.py --- a/lib-python/2.7/shutil.py +++ b/lib-python/2.7/shutil.py @@ -277,6 +277,12 @@ """ real_dst = dst if os.path.isdir(dst): + if _samefile(src, dst): + # We might be on a case insensitive filesystem, + # perform the rename anyway. + os.rename(src, dst) + return + real_dst = os.path.join(dst, _basename(src)) if os.path.exists(real_dst): raise Error, "Destination path '%s' already exists" % real_dst @@ -336,7 +342,7 @@ archive that is being built. If not provided, the current owner and group will be used. 
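[Editorial aside on the ``gammavariate`` docstring added in the random.py hunk above: a quick numeric sanity check of that pdf formula; the sample values are arbitrary, and ``math.gamma()`` is available from Python 2.7.]

    >>> import math
    >>> alpha, beta, x = 2.0, 3.0, 4.5
    >>> pdf = x ** (alpha - 1) * math.exp(-x / beta) / (math.gamma(alpha) * beta ** alpha)
    >>> round(pdf, 6)
    0.111565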
- The output tar file will be named 'base_dir' + ".tar", possibly plus + The output tar file will be named 'base_name' + ".tar", possibly plus the appropriate compression extension (".gz", or ".bz2"). Returns the output filename. @@ -406,7 +412,7 @@ def _make_zipfile(base_name, base_dir, verbose=0, dry_run=0, logger=None): """Create a zip file from all the files under 'base_dir'. - The output zip file will be named 'base_dir' + ".zip". Uses either the + The output zip file will be named 'base_name' + ".zip". Uses either the "zipfile" Python module (if available) or the InfoZIP "zip" utility (if installed and found on the default search path). If neither tool is available, raises ExecError. Returns the name of the output zip diff --git a/lib-python/2.7/site.py b/lib-python/2.7/site.py --- a/lib-python/2.7/site.py +++ b/lib-python/2.7/site.py @@ -61,6 +61,7 @@ import sys import os import __builtin__ +import traceback # Prefixes for site-packages; add additional prefixes like /usr/local here PREFIXES = [sys.prefix, sys.exec_prefix] @@ -155,17 +156,26 @@ except IOError: return with f: - for line in f: + for n, line in enumerate(f): if line.startswith("#"): continue - if line.startswith(("import ", "import\t")): - exec line - continue - line = line.rstrip() - dir, dircase = makepath(sitedir, line) - if not dircase in known_paths and os.path.exists(dir): - sys.path.append(dir) - known_paths.add(dircase) + try: + if line.startswith(("import ", "import\t")): + exec line + continue + line = line.rstrip() + dir, dircase = makepath(sitedir, line) + if not dircase in known_paths and os.path.exists(dir): + sys.path.append(dir) + known_paths.add(dircase) + except Exception as err: + print >>sys.stderr, "Error processing line {:d} of {}:\n".format( + n+1, fullname) + for record in traceback.format_exception(*sys.exc_info()): + for line in record.splitlines(): + print >>sys.stderr, ' '+line + print >>sys.stderr, "\nRemainder of file ignored" + break if reset: known_paths = None return known_paths diff --git a/lib-python/2.7/smtplib.py b/lib-python/2.7/smtplib.py --- a/lib-python/2.7/smtplib.py +++ b/lib-python/2.7/smtplib.py @@ -49,17 +49,18 @@ from email.base64mime import encode as encode_base64 from sys import stderr -__all__ = ["SMTPException","SMTPServerDisconnected","SMTPResponseException", - "SMTPSenderRefused","SMTPRecipientsRefused","SMTPDataError", - "SMTPConnectError","SMTPHeloError","SMTPAuthenticationError", - "quoteaddr","quotedata","SMTP"] +__all__ = ["SMTPException", "SMTPServerDisconnected", "SMTPResponseException", + "SMTPSenderRefused", "SMTPRecipientsRefused", "SMTPDataError", + "SMTPConnectError", "SMTPHeloError", "SMTPAuthenticationError", + "quoteaddr", "quotedata", "SMTP"] SMTP_PORT = 25 SMTP_SSL_PORT = 465 -CRLF="\r\n" +CRLF = "\r\n" OLDSTYLE_AUTH = re.compile(r"auth=(.*)", re.I) + # Exception classes used by this module. class SMTPException(Exception): """Base class for all exceptions raised by this module.""" @@ -109,7 +110,7 @@ def __init__(self, recipients): self.recipients = recipients - self.args = ( recipients,) + self.args = (recipients,) class SMTPDataError(SMTPResponseException): @@ -128,6 +129,7 @@ combination provided. """ + def quoteaddr(addr): """Quote a subset of the email addresses defined by RFC 821. @@ -138,7 +140,7 @@ m = email.utils.parseaddr(addr)[1] except AttributeError: pass - if m == (None, None): # Indicates parse failure or AttributeError + if m == (None, None): # Indicates parse failure or AttributeError # something weird here.. 
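The site.py hunk above wraps each .pth line in a try/except so a single bad line reports a traceback instead of aborting site initialisation. A stand-alone sketch of the same per-line pattern (process_pth_lines and the sample data are made up; this is not the real addpackage()):

    import sys
    import traceback

    def process_pth_lines(lines):
        paths = []
        for n, line in enumerate(lines):
            if line.startswith("#"):
                continue
            try:
                if line.startswith(("import ", "import\t")):
                    exec line
                    continue
                paths.append(line.rstrip())
            except Exception:
                print >> sys.stderr, "Error processing line %d:" % (n + 1)
                for record in traceback.format_exception(*sys.exc_info()):
                    for out in record.splitlines():
                        print >> sys.stderr, "  " + out
                print >> sys.stderr, "Remainder of file ignored"
                break
        return paths

    print process_pth_lines(["# comment", "import os", "some/dir", "import ("])
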
punt -ddm return "<%s>" % addr elif m is None: @@ -175,7 +177,8 @@ chr = None while chr != "\n": chr = self.sslobj.read(1) - if not chr: break + if not chr: + break str += chr return str @@ -219,6 +222,7 @@ ehlo_msg = "ehlo" ehlo_resp = None does_esmtp = 0 + default_port = SMTP_PORT def __init__(self, host='', port=0, local_hostname=None, timeout=socket._GLOBAL_DEFAULT_TIMEOUT): @@ -234,7 +238,6 @@ """ self.timeout = timeout self.esmtp_features = {} - self.default_port = SMTP_PORT if host: (code, msg) = self.connect(host, port) if code != 220: @@ -269,10 +272,11 @@ def _get_socket(self, port, host, timeout): # This makes it simpler for SMTP_SSL to use the SMTP connect code # and just alter the socket connection bit. - if self.debuglevel > 0: print>>stderr, 'connect:', (host, port) + if self.debuglevel > 0: + print>>stderr, 'connect:', (host, port) return socket.create_connection((port, host), timeout) - def connect(self, host='localhost', port = 0): + def connect(self, host='localhost', port=0): """Connect to a host on a given port. If the hostname ends with a colon (`:') followed by a number, and @@ -286,20 +290,25 @@ if not port and (host.find(':') == host.rfind(':')): i = host.rfind(':') if i >= 0: - host, port = host[:i], host[i+1:] - try: port = int(port) + host, port = host[:i], host[i + 1:] + try: + port = int(port) except ValueError: raise socket.error, "nonnumeric port" - if not port: port = self.default_port - if self.debuglevel > 0: print>>stderr, 'connect:', (host, port) + if not port: + port = self.default_port + if self.debuglevel > 0: + print>>stderr, 'connect:', (host, port) self.sock = self._get_socket(host, port, self.timeout) (code, msg) = self.getreply() - if self.debuglevel > 0: print>>stderr, "connect:", msg + if self.debuglevel > 0: + print>>stderr, "connect:", msg return (code, msg) def send(self, str): """Send `str' to the server.""" - if self.debuglevel > 0: print>>stderr, 'send:', repr(str) + if self.debuglevel > 0: + print>>stderr, 'send:', repr(str) if hasattr(self, 'sock') and self.sock: try: self.sock.sendall(str) @@ -330,7 +339,7 @@ Raises SMTPServerDisconnected if end-of-file is reached. """ - resp=[] + resp = [] if self.file is None: self.file = self.sock.makefile('rb') while 1: @@ -341,9 +350,10 @@ if line == '': self.close() raise SMTPServerDisconnected("Connection unexpectedly closed") - if self.debuglevel > 0: print>>stderr, 'reply:', repr(line) + if self.debuglevel > 0: + print>>stderr, 'reply:', repr(line) resp.append(line[4:].strip()) - code=line[:3] + code = line[:3] # Check that the error code is syntactically correct. # Don't attempt to read a continuation line if it is broken. try: @@ -352,17 +362,17 @@ errcode = -1 break # Check if multiline response. - if line[3:4]!="-": + if line[3:4] != "-": break errmsg = "\n".join(resp) if self.debuglevel > 0: - print>>stderr, 'reply: retcode (%s); Msg: %s' % (errcode,errmsg) + print>>stderr, 'reply: retcode (%s); Msg: %s' % (errcode, errmsg) return errcode, errmsg def docmd(self, cmd, args=""): """Send a command, and return its response code.""" - self.putcmd(cmd,args) + self.putcmd(cmd, args) return self.getreply() # std smtp commands @@ -372,9 +382,9 @@ host. """ self.putcmd("helo", name or self.local_hostname) - (code,msg)=self.getreply() - self.helo_resp=msg - return (code,msg) + (code, msg) = self.getreply() + self.helo_resp = msg + return (code, msg) def ehlo(self, name=''): """ SMTP 'ehlo' command. 
@@ -383,19 +393,19 @@ """ self.esmtp_features = {} self.putcmd(self.ehlo_msg, name or self.local_hostname) - (code,msg)=self.getreply() + (code, msg) = self.getreply() # According to RFC1869 some (badly written) # MTA's will disconnect on an ehlo. Toss an exception if # that happens -ddm if code == -1 and len(msg) == 0: self.close() raise SMTPServerDisconnected("Server not connected") - self.ehlo_resp=msg + self.ehlo_resp = msg if code != 250: - return (code,msg) - self.does_esmtp=1 + return (code, msg) + self.does_esmtp = 1 #parse the ehlo response -ddm - resp=self.ehlo_resp.split('\n') + resp = self.ehlo_resp.split('\n') del resp[0] for each in resp: # To be able to communicate with as many SMTP servers as possible, @@ -415,16 +425,16 @@ # It's actually stricter, in that only spaces are allowed between # parameters, but were not going to check for that here. Note # that the space isn't present if there are no parameters. - m=re.match(r'(?P[A-Za-z0-9][A-Za-z0-9\-]*) ?',each) + m = re.match(r'(?P[A-Za-z0-9][A-Za-z0-9\-]*) ?', each) if m: - feature=m.group("feature").lower() - params=m.string[m.end("feature"):].strip() + feature = m.group("feature").lower() + params = m.string[m.end("feature"):].strip() if feature == "auth": self.esmtp_features[feature] = self.esmtp_features.get(feature, "") \ + " " + params else: - self.esmtp_features[feature]=params - return (code,msg) + self.esmtp_features[feature] = params + return (code, msg) def has_extn(self, opt): """Does the server support a given SMTP service extension?""" @@ -444,23 +454,23 @@ """SMTP 'noop' command -- doesn't do anything :>""" return self.docmd("noop") - def mail(self,sender,options=[]): + def mail(self, sender, options=[]): """SMTP 'mail' command -- begins mail xfer session.""" optionlist = '' if options and self.does_esmtp: optionlist = ' ' + ' '.join(options) - self.putcmd("mail", "FROM:%s%s" % (quoteaddr(sender) ,optionlist)) + self.putcmd("mail", "FROM:%s%s" % (quoteaddr(sender), optionlist)) return self.getreply() - def rcpt(self,recip,options=[]): + def rcpt(self, recip, options=[]): """SMTP 'rcpt' command -- indicates 1 recipient for this mail.""" optionlist = '' if options and self.does_esmtp: optionlist = ' ' + ' '.join(options) - self.putcmd("rcpt","TO:%s%s" % (quoteaddr(recip),optionlist)) + self.putcmd("rcpt", "TO:%s%s" % (quoteaddr(recip), optionlist)) return self.getreply() - def data(self,msg): + def data(self, msg): """SMTP 'DATA' command -- sends message data to server. Automatically quotes lines beginning with a period per rfc821. @@ -469,26 +479,28 @@ response code received when the all data is sent. """ self.putcmd("data") - (code,repl)=self.getreply() - if self.debuglevel >0 : print>>stderr, "data:", (code,repl) + (code, repl) = self.getreply() + if self.debuglevel > 0: + print>>stderr, "data:", (code, repl) if code != 354: - raise SMTPDataError(code,repl) + raise SMTPDataError(code, repl) else: q = quotedata(msg) if q[-2:] != CRLF: q = q + CRLF q = q + "." + CRLF self.send(q) - (code,msg)=self.getreply() - if self.debuglevel >0 : print>>stderr, "data:", (code,msg) - return (code,msg) + (code, msg) = self.getreply() + if self.debuglevel > 0: + print>>stderr, "data:", (code, msg) + return (code, msg) def verify(self, address): """SMTP 'verify' command -- checks for address validity.""" self.putcmd("vrfy", quoteaddr(address)) return self.getreply() # a.k.a. 
- vrfy=verify + vrfy = verify def expn(self, address): """SMTP 'expn' command -- expands a mailing list.""" @@ -592,7 +604,7 @@ raise SMTPAuthenticationError(code, resp) return (code, resp) - def starttls(self, keyfile = None, certfile = None): + def starttls(self, keyfile=None, certfile=None): """Puts the connection to the SMTP server into TLS mode. If there has been no previous EHLO or HELO command this session, this @@ -695,22 +707,22 @@ for option in mail_options: esmtp_opts.append(option) - (code,resp) = self.mail(from_addr, esmtp_opts) + (code, resp) = self.mail(from_addr, esmtp_opts) if code != 250: self.rset() raise SMTPSenderRefused(code, resp, from_addr) - senderrs={} + senderrs = {} if isinstance(to_addrs, basestring): to_addrs = [to_addrs] for each in to_addrs: - (code,resp)=self.rcpt(each, rcpt_options) + (code, resp) = self.rcpt(each, rcpt_options) if (code != 250) and (code != 251): - senderrs[each]=(code,resp) - if len(senderrs)==len(to_addrs): + senderrs[each] = (code, resp) + if len(senderrs) == len(to_addrs): # the server refused all our recipients self.rset() raise SMTPRecipientsRefused(senderrs) - (code,resp) = self.data(msg) + (code, resp) = self.data(msg) if code != 250: self.rset() raise SMTPDataError(code, resp) @@ -744,16 +756,19 @@ are also optional - they can contain a PEM formatted private key and certificate chain file for the SSL connection. """ + + default_port = SMTP_SSL_PORT + def __init__(self, host='', port=0, local_hostname=None, keyfile=None, certfile=None, timeout=socket._GLOBAL_DEFAULT_TIMEOUT): self.keyfile = keyfile self.certfile = certfile SMTP.__init__(self, host, port, local_hostname, timeout) - self.default_port = SMTP_SSL_PORT def _get_socket(self, host, port, timeout): - if self.debuglevel > 0: print>>stderr, 'connect:', (host, port) + if self.debuglevel > 0: + print>>stderr, 'connect:', (host, port) new_socket = socket.create_connection((host, port), timeout) new_socket = ssl.wrap_socket(new_socket, self.keyfile, self.certfile) self.file = SSLFakeFile(new_socket) @@ -781,11 +796,11 @@ ehlo_msg = "lhlo" - def __init__(self, host = '', port = LMTP_PORT, local_hostname = None): + def __init__(self, host='', port=LMTP_PORT, local_hostname=None): """Initialize a new instance.""" SMTP.__init__(self, host, port, local_hostname) - def connect(self, host = 'localhost', port = 0): + def connect(self, host='localhost', port=0): """Connect to the LMTP daemon, on either a Unix or a TCP socket.""" if host[0] != '/': return SMTP.connect(self, host, port) @@ -795,13 +810,15 @@ self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) self.sock.connect(host) except socket.error, msg: - if self.debuglevel > 0: print>>stderr, 'connect fail:', host + if self.debuglevel > 0: + print>>stderr, 'connect fail:', host if self.sock: self.sock.close() self.sock = None raise socket.error, msg (code, msg) = self.getreply() - if self.debuglevel > 0: print>>stderr, "connect:", msg + if self.debuglevel > 0: + print>>stderr, "connect:", msg return (code, msg) @@ -815,7 +832,7 @@ return sys.stdin.readline().strip() fromaddr = prompt("From") - toaddrs = prompt("To").split(',') + toaddrs = prompt("To").split(',') print "Enter message, end with ^D:" msg = '' while 1: diff --git a/lib-python/2.7/ssl.py b/lib-python/2.7/ssl.py --- a/lib-python/2.7/ssl.py +++ b/lib-python/2.7/ssl.py @@ -121,9 +121,11 @@ if e.errno != errno.ENOTCONN: raise # no, no connection yet + self._connected = False self._sslobj = None else: # yes, create the SSL object + self._connected = True 
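The smtplib change above makes default_port a class attribute rather than something assigned after SMTP.__init__(). The ordering matters: SMTP.__init__() can call connect() before the body of a subclass's __init__ has run, so only a class attribute is guaranteed to be visible at that point. A stripped-down sketch with invented names (Base/Secure):

    class Base(object):
        default_port = 25

        def __init__(self, host=None):
            if host:
                self.connect(host)       # runs before subclass __init__ bodies

        def connect(self, host):
            print "connecting to %s:%d" % (host, self.default_port)

    class Secure(Base):
        default_port = 465               # visible to Base.__init__ immediately

        def __init__(self, host=None):
            Base.__init__(self, host)
            # assigning self.default_port here instead would come too late

    Secure("mail.example.org")           # connecting to mail.example.org:465
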
self._sslobj = _ssl.sslwrap(self._sock, server_side, keyfile, certfile, cert_reqs, ssl_version, ca_certs, @@ -293,21 +295,36 @@ self._sslobj.do_handshake() - def connect(self, addr): - - """Connects to remote ADDR, and then wraps the connection in - an SSL channel.""" - + def _real_connect(self, addr, return_errno): # Here we assume that the socket is client-side, and not # connected at the time of the call. We connect it, then wrap it. - if self._sslobj: + if self._connected: raise ValueError("attempt to connect already-connected SSLSocket!") - socket.connect(self, addr) self._sslobj = _ssl.sslwrap(self._sock, False, self.keyfile, self.certfile, self.cert_reqs, self.ssl_version, self.ca_certs, self.ciphers) - if self.do_handshake_on_connect: - self.do_handshake() + try: + socket.connect(self, addr) + if self.do_handshake_on_connect: + self.do_handshake() + except socket_error as e: + if return_errno: + return e.errno + else: + self._sslobj = None + raise e + self._connected = True + return 0 + + def connect(self, addr): + """Connects to remote ADDR, and then wraps the connection in + an SSL channel.""" + self._real_connect(addr, False) + + def connect_ex(self, addr): + """Connects to remote ADDR, and then wraps the connection in + an SSL channel.""" + return self._real_connect(addr, True) def accept(self): diff --git a/lib-python/2.7/subprocess.py b/lib-python/2.7/subprocess.py --- a/lib-python/2.7/subprocess.py +++ b/lib-python/2.7/subprocess.py @@ -396,6 +396,7 @@ import traceback import gc import signal +import errno # Exception classes used by this module. class CalledProcessError(Exception): @@ -427,7 +428,6 @@ else: import select _has_poll = hasattr(select, 'poll') - import errno import fcntl import pickle @@ -441,8 +441,15 @@ "check_output", "CalledProcessError"] if mswindows: - from _subprocess import CREATE_NEW_CONSOLE, CREATE_NEW_PROCESS_GROUP - __all__.extend(["CREATE_NEW_CONSOLE", "CREATE_NEW_PROCESS_GROUP"]) + from _subprocess import (CREATE_NEW_CONSOLE, CREATE_NEW_PROCESS_GROUP, + STD_INPUT_HANDLE, STD_OUTPUT_HANDLE, + STD_ERROR_HANDLE, SW_HIDE, + STARTF_USESTDHANDLES, STARTF_USESHOWWINDOW) + + __all__.extend(["CREATE_NEW_CONSOLE", "CREATE_NEW_PROCESS_GROUP", + "STD_INPUT_HANDLE", "STD_OUTPUT_HANDLE", + "STD_ERROR_HANDLE", "SW_HIDE", + "STARTF_USESTDHANDLES", "STARTF_USESHOWWINDOW"]) try: MAXFD = os.sysconf("SC_OPEN_MAX") except: @@ -726,7 +733,11 @@ stderr = None if self.stdin: if input: - self.stdin.write(input) + try: + self.stdin.write(input) + except IOError as e: + if e.errno != errno.EPIPE and e.errno != errno.EINVAL: + raise self.stdin.close() elif self.stdout: stdout = self.stdout.read() @@ -883,7 +894,7 @@ except pywintypes.error, e: # Translate pywintypes.error to WindowsError, which is # a subclass of OSError. FIXME: We should really - # translate errno using _sys_errlist (or simliar), but + # translate errno using _sys_errlist (or similar), but # how can this be done from Python? 
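The ssl.py hunk above routes both connect() and the new connect_ex() through one helper so the SSL-wrapped socket follows the plain-socket convention: connect() raises on failure, connect_ex() returns an errno value (0 on success). A hedged usage sketch against ordinary sockets (try_connect is an invented helper, not part of the module):

    import errno
    import socket

    def try_connect(host, port, timeout=5.0):
        # connect_ex() reports failure as a return value instead of raising.
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(timeout)
        err = s.connect_ex((host, port))
        if err == 0:
            print "connected to %s:%d" % (host, port)
        else:
            print "connect failed:", errno.errorcode.get(err, err)
        s.close()
        return err

    # try_connect("localhost", 9)   # typically ECONNREFUSED if nothing listens
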
raise WindowsError(*e.args) finally: @@ -956,7 +967,11 @@ if self.stdin: if input is not None: - self.stdin.write(input) + try: + self.stdin.write(input) + except IOError as e: + if e.errno != errno.EPIPE: + raise self.stdin.close() if self.stdout: @@ -1051,14 +1066,17 @@ errread, errwrite) - def _set_cloexec_flag(self, fd): + def _set_cloexec_flag(self, fd, cloexec=True): try: cloexec_flag = fcntl.FD_CLOEXEC except AttributeError: cloexec_flag = 1 old = fcntl.fcntl(fd, fcntl.F_GETFD) - fcntl.fcntl(fd, fcntl.F_SETFD, old | cloexec_flag) + if cloexec: + fcntl.fcntl(fd, fcntl.F_SETFD, old | cloexec_flag) + else: + fcntl.fcntl(fd, fcntl.F_SETFD, old & ~cloexec_flag) def _close_fds(self, but): @@ -1128,21 +1146,25 @@ os.close(errpipe_read) # Dup fds for child - if p2cread is not None: - os.dup2(p2cread, 0) - if c2pwrite is not None: - os.dup2(c2pwrite, 1) - if errwrite is not None: - os.dup2(errwrite, 2) + def _dup2(a, b): + # dup2() removes the CLOEXEC flag but + # we must do it ourselves if dup2() + # would be a no-op (issue #10806). + if a == b: + self._set_cloexec_flag(a, False) + elif a is not None: + os.dup2(a, b) + _dup2(p2cread, 0) + _dup2(c2pwrite, 1) + _dup2(errwrite, 2) - # Close pipe fds. Make sure we don't close the same - # fd more than once, or standard fds. - if p2cread is not None and p2cread not in (0,): - os.close(p2cread) - if c2pwrite is not None and c2pwrite not in (p2cread, 1): - os.close(c2pwrite) - if errwrite is not None and errwrite not in (p2cread, c2pwrite, 2): - os.close(errwrite) + # Close pipe fds. Make sure we don't close the + # same fd more than once, or standard fds. + closed = { None } + for fd in [p2cread, c2pwrite, errwrite]: + if fd not in closed and fd > 2: + os.close(fd) + closed.add(fd) # Close all other fds, if asked for if close_fds: @@ -1194,7 +1216,11 @@ os.close(errpipe_read) if data != "": - _eintr_retry_call(os.waitpid, self.pid, 0) + try: + _eintr_retry_call(os.waitpid, self.pid, 0) + except OSError as e: + if e.errno != errno.ECHILD: + raise child_exception = pickle.loads(data) for fd in (p2cwrite, c2pread, errread): if fd is not None: @@ -1240,7 +1266,15 @@ """Wait for child process to terminate. Returns returncode attribute.""" if self.returncode is None: - pid, sts = _eintr_retry_call(os.waitpid, self.pid, 0) + try: + pid, sts = _eintr_retry_call(os.waitpid, self.pid, 0) + except OSError as e: + if e.errno != errno.ECHILD: + raise + # This happens if SIGCLD is set to be ignored or waiting + # for child processes has otherwise been disabled for our + # process. This child is dead, we can't get the status. 
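The _dup2() helper in the subprocess hunk above exists because dup2(fd, fd) is a no-op: unlike a real duplication it does not clear the close-on-exec flag, so when the child's target descriptor already equals the source, the flag has to be cleared explicitly. A small POSIX-only demonstration of the flag handling (set_cloexec mirrors the idea of _set_cloexec_flag but is written here only for illustration):

    import fcntl
    import os

    def set_cloexec(fd, cloexec=True):
        # Read the descriptor flags, then set or clear FD_CLOEXEC.
        flags = fcntl.fcntl(fd, fcntl.F_GETFD)
        if cloexec:
            fcntl.fcntl(fd, fcntl.F_SETFD, flags | fcntl.FD_CLOEXEC)
        else:
            fcntl.fcntl(fd, fcntl.F_SETFD, flags & ~fcntl.FD_CLOEXEC)

    r, w = os.pipe()
    set_cloexec(r, True)
    os.dup2(r, r)   # no-op: the flag stays set, which would break the child
    print bool(fcntl.fcntl(r, fcntl.F_GETFD) & fcntl.FD_CLOEXEC)   # True
    set_cloexec(r, False)                    # what _dup2() does when a == b
    print bool(fcntl.fcntl(r, fcntl.F_GETFD) & fcntl.FD_CLOEXEC)   # False
    os.close(r)
    os.close(w)
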
+ sts = 0 self._handle_exitstatus(sts) return self.returncode @@ -1317,9 +1351,16 @@ for fd, mode in ready: if mode & select.POLLOUT: chunk = input[input_offset : input_offset + _PIPE_BUF] - input_offset += os.write(fd, chunk) - if input_offset >= len(input): - close_unregister_and_remove(fd) + try: + input_offset += os.write(fd, chunk) + except OSError as e: + if e.errno == errno.EPIPE: + close_unregister_and_remove(fd) + else: + raise + else: + if input_offset >= len(input): + close_unregister_and_remove(fd) elif mode & select_POLLIN_POLLPRI: data = os.read(fd, 4096) if not data: @@ -1358,11 +1399,19 @@ if self.stdin in wlist: chunk = input[input_offset : input_offset + _PIPE_BUF] - bytes_written = os.write(self.stdin.fileno(), chunk) - input_offset += bytes_written - if input_offset >= len(input): - self.stdin.close() - write_set.remove(self.stdin) + try: + bytes_written = os.write(self.stdin.fileno(), chunk) + except OSError as e: + if e.errno == errno.EPIPE: + self.stdin.close() + write_set.remove(self.stdin) + else: + raise + else: + input_offset += bytes_written + if input_offset >= len(input): + self.stdin.close() + write_set.remove(self.stdin) if self.stdout in rlist: data = os.read(self.stdout.fileno(), 1024) diff --git a/lib-python/2.7/symbol.py b/lib-python/2.7/symbol.py --- a/lib-python/2.7/symbol.py +++ b/lib-python/2.7/symbol.py @@ -82,20 +82,19 @@ sliceop = 325 exprlist = 326 testlist = 327 -dictmaker = 328 -dictorsetmaker = 329 -classdef = 330 -arglist = 331 -argument = 332 -list_iter = 333 -list_for = 334 -list_if = 335 -comp_iter = 336 -comp_for = 337 -comp_if = 338 -testlist1 = 339 -encoding_decl = 340 -yield_expr = 341 +dictorsetmaker = 328 +classdef = 329 +arglist = 330 +argument = 331 +list_iter = 332 +list_for = 333 +list_if = 334 +comp_iter = 335 +comp_for = 336 +comp_if = 337 +testlist1 = 338 +encoding_decl = 339 +yield_expr = 340 #--end constants-- sym_name = {} diff --git a/lib-python/2.7/sysconfig.py b/lib-python/2.7/sysconfig.py --- a/lib-python/2.7/sysconfig.py +++ b/lib-python/2.7/sysconfig.py @@ -271,7 +271,7 @@ def _get_makefile_filename(): if _PYTHON_BUILD: return os.path.join(_PROJECT_BASE, "Makefile") - return os.path.join(get_path('stdlib'), "config", "Makefile") + return os.path.join(get_path('platstdlib'), "config", "Makefile") def _init_posix(vars): @@ -297,21 +297,6 @@ msg = msg + " (%s)" % e.strerror raise IOError(msg) - # On MacOSX we need to check the setting of the environment variable - # MACOSX_DEPLOYMENT_TARGET: configure bases some choices on it so - # it needs to be compatible. - # If it isn't set we set it to the configure-time value - if sys.platform == 'darwin' and 'MACOSX_DEPLOYMENT_TARGET' in vars: - cfg_target = vars['MACOSX_DEPLOYMENT_TARGET'] - cur_target = os.getenv('MACOSX_DEPLOYMENT_TARGET', '') - if cur_target == '': - cur_target = cfg_target - os.putenv('MACOSX_DEPLOYMENT_TARGET', cfg_target) - elif map(int, cfg_target.split('.')) > map(int, cur_target.split('.')): - msg = ('$MACOSX_DEPLOYMENT_TARGET mismatch: now "%s" but "%s" ' - 'during configure' % (cur_target, cfg_target)) - raise IOError(msg) - # On AIX, there are wrong paths to the linker scripts in the Makefile # -- these paths are relative to the Python source, but when installed # the scripts are in another directory. @@ -616,9 +601,7 @@ # machine is going to compile and link as if it were # MACOSX_DEPLOYMENT_TARGET. 
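The _communicate changes above treat EPIPE while feeding the child's stdin as "the child stopped reading" rather than as a fatal error. The situation is easy to reproduce with a bare pipe; this POSIX-only sketch ignores SIGPIPE explicitly (Python normally does so already) so that the condition shows up as an exception, just as it does inside subprocess:

    import errno
    import os
    import signal

    signal.signal(signal.SIGPIPE, signal.SIG_IGN)   # surface EPIPE as an error

    r, w = os.pipe()
    os.close(r)                 # the reading end disappears, like a dead child
    try:
        os.write(w, "data")
    except OSError as e:
        if e.errno == errno.EPIPE:
            print "reader is gone; ignore EPIPE and just stop writing"
        else:
            raise
    os.close(w)
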
cfgvars = get_config_vars() - macver = os.environ.get('MACOSX_DEPLOYMENT_TARGET') - if not macver: - macver = cfgvars.get('MACOSX_DEPLOYMENT_TARGET') + macver = cfgvars.get('MACOSX_DEPLOYMENT_TARGET') if 1: # Always calculate the release of the running machine, @@ -639,7 +622,6 @@ m = re.search( r'ProductUserVisibleVersion\s*' + r'(.*?)', f.read()) - f.close() if m is not None: macrelease = '.'.join(m.group(1).split('.')[:2]) # else: fall back to the default behaviour diff --git a/lib-python/2.7/tarfile.py b/lib-python/2.7/tarfile.py --- a/lib-python/2.7/tarfile.py +++ b/lib-python/2.7/tarfile.py @@ -2239,10 +2239,14 @@ if hasattr(os, "symlink") and hasattr(os, "link"): # For systems that support symbolic and hard links. if tarinfo.issym(): + if os.path.lexists(targetpath): + os.unlink(targetpath) os.symlink(tarinfo.linkname, targetpath) else: # See extract(). if os.path.exists(tarinfo._link_target): + if os.path.lexists(targetpath): + os.unlink(targetpath) os.link(tarinfo._link_target, targetpath) else: self._extract_member(self._find_link_target(tarinfo), targetpath) diff --git a/lib-python/2.7/telnetlib.py b/lib-python/2.7/telnetlib.py --- a/lib-python/2.7/telnetlib.py +++ b/lib-python/2.7/telnetlib.py @@ -236,7 +236,7 @@ """ if self.debuglevel > 0: - print 'Telnet(%s,%d):' % (self.host, self.port), + print 'Telnet(%s,%s):' % (self.host, self.port), if args: print msg % args else: diff --git a/lib-python/2.7/test/cjkencodings/big5-utf8.txt b/lib-python/2.7/test/cjkencodings/big5-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/big5-utf8.txt @@ -0,0 +1,9 @@ +如何在 Python 中使用既有的 C library? + 在資訊科技快速發展的今天, 開發及測試軟體的速度是不容忽視的 +課題. 為加快開發及測試的速度, 我們便常希望能利用一些已開發好的 +library, 並有一個 fast prototyping 的 programming language 可 +供使用. 目前有許許多多的 library 是以 C 寫成, 而 Python 是一個 +fast prototyping 的 programming language. 故我們希望能將既有的 +C library 拿到 Python 的環境中測試及整合. 其中最主要也是我們所 +要討論的問題就是: + diff --git a/lib-python/2.7/test/cjkencodings/big5.txt b/lib-python/2.7/test/cjkencodings/big5.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/big5.txt @@ -0,0 +1,9 @@ +�p��b Python ���ϥάJ���� C library? +�@�b��T��ާֳt�o�i������, �}�o�δ��ճn�骺�t�׬O���e������ +���D. ���[�ֶ}�o�δ��ժ��t��, �ڭ̫K�`�Ʊ��Q�Τ@�Ǥw�}�o�n�� +library, �æ��@�� fast prototyping �� programming language �i +�Ѩϥ�. �ثe���\�\�h�h�� library �O�H C �g��, �� Python �O�@�� +fast prototyping �� programming language. �G�ڭ̧Ʊ��N�J���� +C library ���� Python �����Ҥ����դξ�X. �䤤�̥D�n�]�O�ڭ̩� +�n�Q�ת����D�N�O: + diff --git a/lib-python/2.7/test/cjkencodings/big5hkscs-utf8.txt b/lib-python/2.7/test/cjkencodings/big5hkscs-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/big5hkscs-utf8.txt @@ -0,0 +1,2 @@ +𠄌Ě鵮罓洆 +ÊÊ̄ê êê̄ diff --git a/lib-python/2.7/test/cjkencodings/big5hkscs.txt b/lib-python/2.7/test/cjkencodings/big5hkscs.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/big5hkscs.txt @@ -0,0 +1,2 @@ +�E�\�s�ڍ� +�f�b�� ���� diff --git a/lib-python/2.7/test/cjkencodings/cp949-utf8.txt b/lib-python/2.7/test/cjkencodings/cp949-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/cp949-utf8.txt @@ -0,0 +1,9 @@ +똠방각하 펲시콜라 + +㉯㉯납!! 因九月패믤릔궈 ⓡⓖ훀¿¿¿ 긍뒙 ⓔ뎨 ㉯. . +亞영ⓔ능횹 . . . . 서울뤄 뎐학乙 家훀 ! ! !ㅠ.ㅠ +흐흐흐 ㄱㄱㄱ☆ㅠ_ㅠ 어릨 탸콰긐 뎌응 칑九들乙 ㉯드긐 +설릌 家훀 . . . . 굴애쉌 ⓔ궈 ⓡ릘㉱긐 因仁川女中까즼 +와쒀훀 ! ! 亞영ⓔ 家능궈 ☆上관 없능궈능 亞능뒈훀 글애듴 +ⓡ려듀九 싀풔숴훀 어릨 因仁川女中싁⑨들앜!! 
㉯㉯납♡ ⌒⌒* + diff --git a/lib-python/2.7/test/cjkencodings/cp949.txt b/lib-python/2.7/test/cjkencodings/cp949.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/cp949.txt @@ -0,0 +1,9 @@ +�c�氢�� �����ݶ� + +������!! �������В�p�� �ި��R������ ���� �ѵ� ��. . +䬿��Ѵ��� . . . . ����� ������ ʫ�R ! ! !��.�� +������ �������٤�_�� � ����O ���� �h������ ����O +���j ʫ�R . . . . ���֚f �ѱ� �ސt�ƒO ���������� +�;��R ! ! 䬿��� ʫ�ɱ� ��߾�� ���ɱŴ� 䬴ɵ��R �۾֊� +�޷����� ��Ǵ���R � ����������Ĩ���!! �������� �ҡ�* + diff --git a/lib-python/2.7/test/cjkencodings/euc_jisx0213-utf8.txt b/lib-python/2.7/test/cjkencodings/euc_jisx0213-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/euc_jisx0213-utf8.txt @@ -0,0 +1,8 @@ +Python の開発は、1990 年ごろから開始されています。 +開発者の Guido van Rossum は教育用のプログラミング言語「ABC」の開発に参加していましたが、ABC は実用上の目的にはあまり適していませんでした。 +このため、Guido はより実用的なプログラミング言語の開発を開始し、英国 BBS 放送のコメディ番組「モンティ パイソン」のファンである Guido はこの言語を「Python」と名づけました。 +このような背景から生まれた Python の言語設計は、「シンプル」で「習得が容易」という目標に重点が置かれています。 +多くのスクリプト系言語ではユーザの目先の利便性を優先して色々な機能を言語要素として取り入れる場合が多いのですが、Python ではそういった小細工が追加されることはあまりありません。 +言語自体の機能は最小限に押さえ、必要な機能は拡張モジュールとして追加する、というのが Python のポリシーです。 + +ノか゚ ト゚ トキ喝塀 𡚴𪎌 麀齁𩛰 diff --git a/lib-python/2.7/test/cjkencodings/euc_jisx0213.txt b/lib-python/2.7/test/cjkencodings/euc_jisx0213.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/euc_jisx0213.txt @@ -0,0 +1,8 @@ +Python �γ�ȯ�ϡ�1990 ǯ�����鳫�Ϥ���Ƥ��ޤ��� +��ȯ�Ԥ� Guido van Rossum �϶����ѤΥץ���ߥ󥰸����ABC�פγ�ȯ�˻��ä��Ƥ��ޤ�������ABC �ϼ��Ѿ����Ū�ˤϤ��ޤ�Ŭ���Ƥ��ޤ���Ǥ����� +���Τ��ᡢGuido �Ϥ�����Ū�ʥץ���ߥ󥰸���γ�ȯ�򳫻Ϥ����ѹ� BBS �����Υ���ǥ����ȡ֥��ƥ� �ѥ�����פΥե���Ǥ��� Guido �Ϥ��θ�����Python�פ�̾�Ť��ޤ����� +���Τ褦���طʤ������ޤ줿 Python �θ����߷פϡ��֥���ץ�פǡֽ������ưספȤ�����ɸ�˽������֤���Ƥ��ޤ��� +¿���Υ�����ץȷϸ���Ǥϥ桼�����������������ͥ�褷�ƿ����ʵ�ǽ��������ǤȤ��Ƽ��������礬¿���ΤǤ�����Python �ǤϤ������ä����ٹ����ɲä���뤳�ȤϤ��ޤꤢ��ޤ��� +���켫�Τε�ǽ�ϺǾ��¤˲�������ɬ�פʵ�ǽ�ϳ�ĥ�⥸�塼��Ȥ����ɲä��롢�Ȥ����Τ� Python �Υݥꥷ���Ǥ��� + +�Τ� �� �ȥ����� ���� ��ԏ���� diff --git a/lib-python/2.7/test/cjkencodings/euc_jp-utf8.txt b/lib-python/2.7/test/cjkencodings/euc_jp-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/euc_jp-utf8.txt @@ -0,0 +1,7 @@ +Python の開発は、1990 年ごろから開始されています。 +開発者の Guido van Rossum は教育用のプログラミング言語「ABC」の開発に参加していましたが、ABC は実用上の目的にはあまり適していませんでした。 +このため、Guido はより実用的なプログラミング言語の開発を開始し、英国 BBS 放送のコメディ番組「モンティ パイソン」のファンである Guido はこの言語を「Python」と名づけました。 +このような背景から生まれた Python の言語設計は、「シンプル」で「習得が容易」という目標に重点が置かれています。 +多くのスクリプト系言語ではユーザの目先の利便性を優先して色々な機能を言語要素として取り入れる場合が多いのですが、Python ではそういった小細工が追加されることはあまりありません。 +言語自体の機能は最小限に押さえ、必要な機能は拡張モジュールとして追加する、というのが Python のポリシーです。 + diff --git a/lib-python/2.7/test/cjkencodings/euc_jp.txt b/lib-python/2.7/test/cjkencodings/euc_jp.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/euc_jp.txt @@ -0,0 +1,7 @@ +Python �γ�ȯ�ϡ�1990 ǯ�����鳫�Ϥ���Ƥ��ޤ��� +��ȯ�Ԥ� Guido van Rossum �϶����ѤΥץ���ߥ󥰸����ABC�פγ�ȯ�˻��ä��Ƥ��ޤ�������ABC �ϼ��Ѿ����Ū�ˤϤ��ޤ�Ŭ���Ƥ��ޤ���Ǥ����� +���Τ��ᡢGuido �Ϥ�����Ū�ʥץ���ߥ󥰸���γ�ȯ�򳫻Ϥ����ѹ� BBS �����Υ���ǥ����ȡ֥��ƥ� �ѥ�����פΥե���Ǥ��� Guido �Ϥ��θ�����Python�פ�̾�Ť��ޤ����� +���Τ褦���طʤ������ޤ줿 Python �θ����߷פϡ��֥���ץ�פǡֽ������ưספȤ�����ɸ�˽������֤���Ƥ��ޤ��� +¿���Υ�����ץȷϸ���Ǥϥ桼�����������������ͥ�褷�ƿ����ʵ�ǽ��������ǤȤ��Ƽ��������礬¿���ΤǤ�����Python �ǤϤ������ä����ٹ����ɲä���뤳�ȤϤ��ޤꤢ��ޤ��� +���켫�Τε�ǽ�ϺǾ��¤˲�������ɬ�פʵ�ǽ�ϳ�ĥ�⥸�塼��Ȥ����ɲä��롢�Ȥ����Τ� Python �Υݥꥷ���Ǥ��� + diff --git a/lib-python/2.7/test/cjkencodings/euc_kr-utf8.txt 
b/lib-python/2.7/test/cjkencodings/euc_kr-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/euc_kr-utf8.txt @@ -0,0 +1,7 @@ +◎ 파이썬(Python)은 배우기 쉽고, 강력한 프로그래밍 언어입니다. 파이썬은 +효율적인 고수준 데이터 구조와 간단하지만 효율적인 객체지향프로그래밍을 +지원합니다. 파이썬의 우아(優雅)한 문법과 동적 타이핑, 그리고 인터프리팅 +환경은 파이썬을 스크립팅과 여러 분야에서와 대부분의 플랫폼에서의 빠른 +애플리케이션 개발을 할 수 있는 이상적인 언어로 만들어줍니다. + +☆첫가끝: 날아라 쓔쓔쓩~ 닁큼! 뜽금없이 전홥니다. 뷁. 그런거 읎다. diff --git a/lib-python/2.7/test/cjkencodings/euc_kr.txt b/lib-python/2.7/test/cjkencodings/euc_kr.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/euc_kr.txt @@ -0,0 +1,7 @@ +�� ���̽�(Python)�� ���� ����, ������ ���α׷��� ����Դϴ�. ���̽��� +ȿ������ ����� ������ ������ ���������� ȿ������ ��ü�������α׷����� +�����մϴ�. ���̽��� ���(���)�� ������ ���� Ÿ����, �׸��� ���������� +ȯ���� ���̽��� ��ũ���ð� ���� �о߿����� ��κ��� �÷��������� ���� +���ø����̼� ������ �� �� �ִ� �̻����� ���� ������ݴϴ�. + +��ù����: ���ƶ� �Ԥ��ФԤԤ��ФԾ�~ �Ԥ��Ҥ�ŭ! �Ԥ��Ѥ��ݾ��� ���Ԥ��Ȥ��ϴ�. �Ԥ��Τ�. �׷��� �Ԥ��Ѥ���. diff --git a/lib-python/2.7/test/cjkencodings/gb18030-utf8.txt b/lib-python/2.7/test/cjkencodings/gb18030-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/gb18030-utf8.txt @@ -0,0 +1,15 @@ +Python(派森)语言是一种功能强大而完善的通用型计算机程序设计语言, +已经具有十多年的发展历史,成熟且稳定。这种语言具有非常简捷而清晰 +的语法特点,适合完成各种高层任务,几乎可以在所有的操作系统中 +运行。这种语言简单而强大,适合各种人士学习使用。目前,基于这 +种语言的相关技术正在飞速的发展,用户数量急剧扩大,相关的资源非常多。 +如何在 Python 中使用既有的 C library? + 在資訊科技快速發展的今天, 開發及測試軟體的速度是不容忽視的 +課題. 為加快開發及測試的速度, 我們便常希望能利用一些已開發好的 +library, 並有一個 fast prototyping 的 programming language 可 +供使用. 目前有許許多多的 library 是以 C 寫成, 而 Python 是一個 +fast prototyping 的 programming language. 故我們希望能將既有的 +C library 拿到 Python 的環境中測試及整合. 其中最主要也是我們所 +要討論的問題就是: +파이썬은 강력한 기능을 지닌 범용 컴퓨터 프로그래밍 언어다. + diff --git a/lib-python/2.7/test/cjkencodings/gb18030.txt b/lib-python/2.7/test/cjkencodings/gb18030.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/gb18030.txt @@ -0,0 +1,15 @@ +Python����ɭ��������һ�ֹ���ǿ������Ƶ�ͨ���ͼ��������������ԣ� +�Ѿ�����ʮ����ķ�չ��ʷ���������ȶ����������Ծ��зdz���ݶ����� +���﷨�ص㣬�ʺ���ɸ��ָ߲����񣬼������������еIJ���ϵͳ�� +���С��������Լ򵥶�ǿ���ʺϸ�����ʿѧϰʹ�á�Ŀǰ�������� +�����Ե���ؼ������ڷ��ٵķ�չ���û���������������ص���Դ�dz��ࡣ +����� Python ��ʹ�ü��е� C library? +�����YӍ�Ƽ����ٰlչ�Ľ���, �_�l���yԇܛ�w���ٶ��Dz��ݺ�ҕ�� +�n�}. ��ӿ��_�l���yԇ���ٶ�, �҂��㳣ϣ��������һЩ���_�l�õ� +library, �K��һ�� fast prototyping �� programming language �� +��ʹ��. Ŀǰ���S�S���� library ���� C ����, �� Python ��һ�� +fast prototyping �� programming language. ���҂�ϣ���܌����е� +C library �õ� Python �ĭh���Мyԇ������. ��������ҪҲ���҂��� +ҪӑՓ�Ć��}����: +�5�1�3�3�2�1�3�1 �7�6�0�4�6�3 �8�5�8�6�3�5 �3�1�9�5 �0�9�3�0 �4�3�5�7�5�5 �5�5�0�9�8�9�9�3�0�4 �2�9�2�5�9�9. 
+ diff --git a/lib-python/2.7/test/cjkencodings/gb2312-utf8.txt b/lib-python/2.7/test/cjkencodings/gb2312-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/gb2312-utf8.txt @@ -0,0 +1,6 @@ +Python(派森)语言是一种功能强大而完善的通用型计算机程序设计语言, +已经具有十多年的发展历史,成熟且稳定。这种语言具有非常简捷而清晰 +的语法特点,适合完成各种高层任务,几乎可以在所有的操作系统中 +运行。这种语言简单而强大,适合各种人士学习使用。目前,基于这 +种语言的相关技术正在飞速的发展,用户数量急剧扩大,相关的资源非常多。 + diff --git a/lib-python/2.7/test/cjkencodings/gb2312.txt b/lib-python/2.7/test/cjkencodings/gb2312.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/gb2312.txt @@ -0,0 +1,6 @@ +Python����ɭ��������һ�ֹ���ǿ������Ƶ�ͨ���ͼ��������������ԣ� +�Ѿ�����ʮ����ķ�չ��ʷ���������ȶ����������Ծ��зdz���ݶ����� +���﷨�ص㣬�ʺ���ɸ��ָ߲����񣬼������������еIJ���ϵͳ�� +���С��������Լ򵥶�ǿ���ʺϸ�����ʿѧϰʹ�á�Ŀǰ�������� +�����Ե���ؼ������ڷ��ٵķ�չ���û���������������ص���Դ�dz��ࡣ + diff --git a/lib-python/2.7/test/cjkencodings/gbk-utf8.txt b/lib-python/2.7/test/cjkencodings/gbk-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/gbk-utf8.txt @@ -0,0 +1,14 @@ +Python(派森)语言是一种功能强大而完善的通用型计算机程序设计语言, +已经具有十多年的发展历史,成熟且稳定。这种语言具有非常简捷而清晰 +的语法特点,适合完成各种高层任务,几乎可以在所有的操作系统中 +运行。这种语言简单而强大,适合各种人士学习使用。目前,基于这 +种语言的相关技术正在飞速的发展,用户数量急剧扩大,相关的资源非常多。 +如何在 Python 中使用既有的 C library? + 在資訊科技快速發展的今天, 開發及測試軟體的速度是不容忽視的 +課題. 為加快開發及測試的速度, 我們便常希望能利用一些已開發好的 +library, 並有一個 fast prototyping 的 programming language 可 +供使用. 目前有許許多多的 library 是以 C 寫成, 而 Python 是一個 +fast prototyping 的 programming language. 故我們希望能將既有的 +C library 拿到 Python 的環境中測試及整合. 其中最主要也是我們所 +要討論的問題就是: + diff --git a/lib-python/2.7/test/cjkencodings/gbk.txt b/lib-python/2.7/test/cjkencodings/gbk.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/gbk.txt @@ -0,0 +1,14 @@ +Python����ɭ��������һ�ֹ���ǿ������Ƶ�ͨ���ͼ��������������ԣ� +�Ѿ�����ʮ����ķ�չ��ʷ���������ȶ����������Ծ��зdz���ݶ����� +���﷨�ص㣬�ʺ���ɸ��ָ߲����񣬼������������еIJ���ϵͳ�� +���С��������Լ򵥶�ǿ���ʺϸ�����ʿѧϰʹ�á�Ŀǰ�������� +�����Ե���ؼ������ڷ��ٵķ�չ���û���������������ص���Դ�dz��ࡣ +����� Python ��ʹ�ü��е� C library? +�����YӍ�Ƽ����ٰlչ�Ľ���, �_�l���yԇܛ�w���ٶ��Dz��ݺ�ҕ�� +�n�}. ��ӿ��_�l���yԇ���ٶ�, �҂��㳣ϣ��������һЩ���_�l�õ� +library, �K��һ�� fast prototyping �� programming language �� +��ʹ��. Ŀǰ���S�S���� library ���� C ����, �� Python ��һ�� +fast prototyping �� programming language. ���҂�ϣ���܌����е� +C library �õ� Python �ĭh���Мyԇ������. ��������ҪҲ���҂��� +ҪӑՓ�Ć��}����: + diff --git a/lib-python/2.7/test/cjkencodings/hz-utf8.txt b/lib-python/2.7/test/cjkencodings/hz-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/hz-utf8.txt @@ -0,0 +1,2 @@ +This sentence is in ASCII. +The next sentence is in GB.己所不欲,勿施於人。Bye. diff --git a/lib-python/2.7/test/cjkencodings/hz.txt b/lib-python/2.7/test/cjkencodings/hz.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/hz.txt @@ -0,0 +1,2 @@ +This sentence is in ASCII. +The next sentence is in GB.~{<:Ky2;S{#,NpJ)l6HK!#~}Bye. diff --git a/lib-python/2.7/test/cjkencodings/johab-utf8.txt b/lib-python/2.7/test/cjkencodings/johab-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/johab-utf8.txt @@ -0,0 +1,9 @@ +똠방각하 펲시콜라 + +㉯㉯납!! 因九月패믤릔궈 ⓡⓖ훀¿¿¿ 긍뒙 ⓔ뎨 ㉯. . +亞영ⓔ능횹 . . . . 서울뤄 뎐학乙 家훀 ! ! !ㅠ.ㅠ +흐흐흐 ㄱㄱㄱ☆ㅠ_ㅠ 어릨 탸콰긐 뎌응 칑九들乙 ㉯드긐 +설릌 家훀 . . . . 굴애쉌 ⓔ궈 ⓡ릘㉱긐 因仁川女中까즼 +와쒀훀 ! ! 亞영ⓔ 家능궈 ☆上관 없능궈능 亞능뒈훀 글애듴 +ⓡ려듀九 싀풔숴훀 어릨 因仁川女中싁⑨들앜!! 
㉯㉯납♡ ⌒⌒* + diff --git a/lib-python/2.7/test/cjkencodings/johab.txt b/lib-python/2.7/test/cjkencodings/johab.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/johab.txt @@ -0,0 +1,9 @@ +���w�b�a �\��ũ�a + +�����s!! �g��Ú������ �����zٯٯٯ �w�� �ѕ� ��. . +�<�w�ѓw�s . . . . �ᶉ�� �e�b�� �;�z ! ! !�A.�A +�a�a�a �A�A�A�i�A_�A �៚ ȡ���z �a�w ×✗i�� ���a�z +��z �;�z . . . . ������ �ъ� �ޟ��‹z �g�b�I����a�� +�����z ! ! �<�w�� �;�w�� �i꾉� ���w���w �<�w���z �i���z +�ޝa�A� ��Ρ���z �៚ �g�b�I���鯂��i�z!! �����sٽ �b�b* + diff --git a/lib-python/2.7/test/cjkencodings/shift_jis-utf8.txt b/lib-python/2.7/test/cjkencodings/shift_jis-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/shift_jis-utf8.txt @@ -0,0 +1,7 @@ +Python の開発は、1990 年ごろから開始されています。 +開発者の Guido van Rossum は教育用のプログラミング言語「ABC」の開発に参加していましたが、ABC は実用上の目的にはあまり適していませんでした。 +このため、Guido はより実用的なプログラミング言語の開発を開始し、英国 BBS 放送のコメディ番組「モンティ パイソン」のファンである Guido はこの言語を「Python」と名づけました。 +このような背景から生まれた Python の言語設計は、「シンプル」で「習得が容易」という目標に重点が置かれています。 +多くのスクリプト系言語ではユーザの目先の利便性を優先して色々な機能を言語要素として取り入れる場合が多いのですが、Python ではそういった小細工が追加されることはあまりありません。 +言語自体の機能は最小限に押さえ、必要な機能は拡張モジュールとして追加する、というのが Python のポリシーです。 + diff --git a/lib-python/2.7/test/cjkencodings/shift_jis.txt b/lib-python/2.7/test/cjkencodings/shift_jis.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/shift_jis.txt @@ -0,0 +1,7 @@ +Python �̊J���́A1990 �N���납��J�n����Ă��܂��B +�J���҂� Guido van Rossum �͋���p�̃v���O���~���O����uABC�v�̊J���ɎQ�����Ă��܂������AABC �͎��p��̖ړI�ɂ͂��܂�K���Ă��܂���ł����B +���̂��߁AGuido �͂����p�I�ȃv���O���~���O����̊J�����J�n���A�p�� BBS �����̃R���f�B�ԑg�u�����e�B �p�C�\���v�̃t�@���ł��� Guido �͂��̌�����uPython�v�Ɩ��Â��܂����B +���̂悤�Ȕw�i���琶�܂ꂽ Python �̌���݌v�́A�u�V���v���v�Łu�K�����e�Ձv�Ƃ����ڕW�ɏd�_���u����Ă��܂��B +�����̃X�N���v�g�n����ł̓��[�U�̖ڐ�̗��֐���D�悵�ĐF�X�ȋ@�\������v�f�Ƃ��Ď������ꍇ�������̂ł����APython �ł͂������������׍H���lj�����邱�Ƃ͂��܂肠��܂���B +���ꎩ�̂̋@�\�͍ŏ����ɉ������A�K�v�ȋ@�\�͊g�����W���[���Ƃ��Ēlj�����A�Ƃ����̂� Python �̃|���V�[�ł��B + diff --git a/lib-python/2.7/test/cjkencodings/shift_jisx0213-utf8.txt b/lib-python/2.7/test/cjkencodings/shift_jisx0213-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/shift_jisx0213-utf8.txt @@ -0,0 +1,8 @@ +Python の開発は、1990 年ごろから開始されています。 +開発者の Guido van Rossum は教育用のプログラミング言語「ABC」の開発に参加していましたが、ABC は実用上の目的にはあまり適していませんでした。 +このため、Guido はより実用的なプログラミング言語の開発を開始し、英国 BBS 放送のコメディ番組「モンティ パイソン」のファンである Guido はこの言語を「Python」と名づけました。 +このような背景から生まれた Python の言語設計は、「シンプル」で「習得が容易」という目標に重点が置かれています。 +多くのスクリプト系言語ではユーザの目先の利便性を優先して色々な機能を言語要素として取り入れる場合が多いのですが、Python ではそういった小細工が追加されることはあまりありません。 +言語自体の機能は最小限に押さえ、必要な機能は拡張モジュールとして追加する、というのが Python のポリシーです。 + +ノか゚ ト゚ トキ喝塀 𡚴𪎌 麀齁𩛰 diff --git a/lib-python/2.7/test/cjkencodings/shift_jisx0213.txt b/lib-python/2.7/test/cjkencodings/shift_jisx0213.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/shift_jisx0213.txt @@ -0,0 +1,8 @@ +Python �̊J���́A1990 �N���납��J�n����Ă��܂��B +�J���҂� Guido van Rossum �͋���p�̃v���O���~���O����uABC�v�̊J���ɎQ�����Ă��܂������AABC �͎��p��̖ړI�ɂ͂��܂�K���Ă��܂���ł����B +���̂��߁AGuido �͂����p�I�ȃv���O���~���O����̊J�����J�n���A�p�� BBS �����̃R���f�B�ԑg�u�����e�B �p�C�\���v�̃t�@���ł��� Guido �͂��̌�����uPython�v�Ɩ��Â��܂����B +���̂悤�Ȕw�i���琶�܂ꂽ Python �̌���݌v�́A�u�V���v���v�Łu�K�����e�Ձv�Ƃ����ڕW�ɏd�_���u����Ă��܂��B +�����̃X�N���v�g�n����ł̓��[�U�̖ڐ�̗��֐���D�悵�ĐF�X�ȋ@�\������v�f�Ƃ��Ď������ꍇ�������̂ł����APython �ł͂������������׍H���lj�����邱�Ƃ͂��܂肠��܂���B 
+���ꎩ�̂̋@�\�͍ŏ����ɉ������A�K�v�ȋ@�\�͊g�����W���[���Ƃ��Ēlj�����A�Ƃ����̂� Python �̃|���V�[�ł��B + +�m�� �� �g�L�K�y ���� ������ diff --git a/lib-python/2.7/test/cjkencodings_test.py b/lib-python/2.7/test/cjkencodings_test.py deleted file mode 100644 --- a/lib-python/2.7/test/cjkencodings_test.py +++ /dev/null @@ -1,1019 +0,0 @@ -teststring = { -'big5': ( -"\xa6\x70\xa6\xf3\xa6\x62\x20\x50\x79\x74\x68\x6f\x6e\x20\xa4\xa4" -"\xa8\xcf\xa5\xce\xac\x4a\xa6\xb3\xaa\xba\x20\x43\x20\x6c\x69\x62" -"\x72\x61\x72\x79\x3f\x0a\xa1\x40\xa6\x62\xb8\xea\xb0\x54\xac\xec" -"\xa7\xde\xa7\xd6\xb3\x74\xb5\x6f\xae\x69\xaa\xba\xa4\xb5\xa4\xd1" -"\x2c\x20\xb6\x7d\xb5\x6f\xa4\xce\xb4\xfa\xb8\xd5\xb3\x6e\xc5\xe9" -"\xaa\xba\xb3\x74\xab\xd7\xac\x4f\xa4\xa3\xae\x65\xa9\xbf\xb5\xf8" -"\xaa\xba\x0a\xbd\xd2\xc3\x44\x2e\x20\xac\xb0\xa5\x5b\xa7\xd6\xb6" -"\x7d\xb5\x6f\xa4\xce\xb4\xfa\xb8\xd5\xaa\xba\xb3\x74\xab\xd7\x2c" -"\x20\xa7\xda\xad\xcc\xab\x4b\xb1\x60\xa7\xc6\xb1\xe6\xaf\xe0\xa7" -"\x51\xa5\xce\xa4\x40\xa8\xc7\xa4\x77\xb6\x7d\xb5\x6f\xa6\x6e\xaa" -"\xba\x0a\x6c\x69\x62\x72\x61\x72\x79\x2c\x20\xa8\xc3\xa6\xb3\xa4" -"\x40\xad\xd3\x20\x66\x61\x73\x74\x20\x70\x72\x6f\x74\x6f\x74\x79" -"\x70\x69\x6e\x67\x20\xaa\xba\x20\x70\x72\x6f\x67\x72\x61\x6d\x6d" -"\x69\x6e\x67\x20\x6c\x61\x6e\x67\x75\x61\x67\x65\x20\xa5\x69\x0a" -"\xa8\xd1\xa8\xcf\xa5\xce\x2e\x20\xa5\xd8\xab\x65\xa6\xb3\xb3\x5c" -"\xb3\x5c\xa6\x68\xa6\x68\xaa\xba\x20\x6c\x69\x62\x72\x61\x72\x79" -"\x20\xac\x4f\xa5\x48\x20\x43\x20\xbc\x67\xa6\xa8\x2c\x20\xa6\xd3" -"\x20\x50\x79\x74\x68\x6f\x6e\x20\xac\x4f\xa4\x40\xad\xd3\x0a\x66" -"\x61\x73\x74\x20\x70\x72\x6f\x74\x6f\x74\x79\x70\x69\x6e\x67\x20" -"\xaa\xba\x20\x70\x72\x6f\x67\x72\x61\x6d\x6d\x69\x6e\x67\x20\x6c" -"\x61\x6e\x67\x75\x61\x67\x65\x2e\x20\xac\x47\xa7\xda\xad\xcc\xa7" -"\xc6\xb1\xe6\xaf\xe0\xb1\x4e\xac\x4a\xa6\xb3\xaa\xba\x0a\x43\x20" -"\x6c\x69\x62\x72\x61\x72\x79\x20\xae\xb3\xa8\xec\x20\x50\x79\x74" -"\x68\x6f\x6e\x20\xaa\xba\xc0\xf4\xb9\xd2\xa4\xa4\xb4\xfa\xb8\xd5" -"\xa4\xce\xbe\xe3\xa6\x58\x2e\x20\xa8\xe4\xa4\xa4\xb3\xcc\xa5\x44" -"\xad\x6e\xa4\x5d\xac\x4f\xa7\xda\xad\xcc\xa9\xd2\x0a\xad\x6e\xb0" -"\x51\xbd\xd7\xaa\xba\xb0\xdd\xc3\x44\xb4\x4e\xac\x4f\x3a\x0a\x0a", -"\xe5\xa6\x82\xe4\xbd\x95\xe5\x9c\xa8\x20\x50\x79\x74\x68\x6f\x6e" -"\x20\xe4\xb8\xad\xe4\xbd\xbf\xe7\x94\xa8\xe6\x97\xa2\xe6\x9c\x89" -"\xe7\x9a\x84\x20\x43\x20\x6c\x69\x62\x72\x61\x72\x79\x3f\x0a\xe3" -"\x80\x80\xe5\x9c\xa8\xe8\xb3\x87\xe8\xa8\x8a\xe7\xa7\x91\xe6\x8a" -"\x80\xe5\xbf\xab\xe9\x80\x9f\xe7\x99\xbc\xe5\xb1\x95\xe7\x9a\x84" -"\xe4\xbb\x8a\xe5\xa4\xa9\x2c\x20\xe9\x96\x8b\xe7\x99\xbc\xe5\x8f" -"\x8a\xe6\xb8\xac\xe8\xa9\xa6\xe8\xbb\x9f\xe9\xab\x94\xe7\x9a\x84" -"\xe9\x80\x9f\xe5\xba\xa6\xe6\x98\xaf\xe4\xb8\x8d\xe5\xae\xb9\xe5" -"\xbf\xbd\xe8\xa6\x96\xe7\x9a\x84\x0a\xe8\xaa\xb2\xe9\xa1\x8c\x2e" -"\x20\xe7\x82\xba\xe5\x8a\xa0\xe5\xbf\xab\xe9\x96\x8b\xe7\x99\xbc" -"\xe5\x8f\x8a\xe6\xb8\xac\xe8\xa9\xa6\xe7\x9a\x84\xe9\x80\x9f\xe5" -"\xba\xa6\x2c\x20\xe6\x88\x91\xe5\x80\x91\xe4\xbe\xbf\xe5\xb8\xb8" -"\xe5\xb8\x8c\xe6\x9c\x9b\xe8\x83\xbd\xe5\x88\xa9\xe7\x94\xa8\xe4" -"\xb8\x80\xe4\xba\x9b\xe5\xb7\xb2\xe9\x96\x8b\xe7\x99\xbc\xe5\xa5" -"\xbd\xe7\x9a\x84\x0a\x6c\x69\x62\x72\x61\x72\x79\x2c\x20\xe4\xb8" -"\xa6\xe6\x9c\x89\xe4\xb8\x80\xe5\x80\x8b\x20\x66\x61\x73\x74\x20" -"\x70\x72\x6f\x74\x6f\x74\x79\x70\x69\x6e\x67\x20\xe7\x9a\x84\x20" -"\x70\x72\x6f\x67\x72\x61\x6d\x6d\x69\x6e\x67\x20\x6c\x61\x6e\x67" -"\x75\x61\x67\x65\x20\xe5\x8f\xaf\x0a\xe4\xbe\x9b\xe4\xbd\xbf\xe7" -"\x94\xa8\x2e\x20\xe7\x9b\xae\xe5\x89\x8d\xe6\x9c\x89\xe8\xa8\xb1" 
-"\xe8\xa8\xb1\xe5\xa4\x9a\xe5\xa4\x9a\xe7\x9a\x84\x20\x6c\x69\x62" -"\x72\x61\x72\x79\x20\xe6\x98\xaf\xe4\xbb\xa5\x20\x43\x20\xe5\xaf" -"\xab\xe6\x88\x90\x2c\x20\xe8\x80\x8c\x20\x50\x79\x74\x68\x6f\x6e" -"\x20\xe6\x98\xaf\xe4\xb8\x80\xe5\x80\x8b\x0a\x66\x61\x73\x74\x20" -"\x70\x72\x6f\x74\x6f\x74\x79\x70\x69\x6e\x67\x20\xe7\x9a\x84\x20" -"\x70\x72\x6f\x67\x72\x61\x6d\x6d\x69\x6e\x67\x20\x6c\x61\x6e\x67" -"\x75\x61\x67\x65\x2e\x20\xe6\x95\x85\xe6\x88\x91\xe5\x80\x91\xe5" -"\xb8\x8c\xe6\x9c\x9b\xe8\x83\xbd\xe5\xb0\x87\xe6\x97\xa2\xe6\x9c" -"\x89\xe7\x9a\x84\x0a\x43\x20\x6c\x69\x62\x72\x61\x72\x79\x20\xe6" -"\x8b\xbf\xe5\x88\xb0\x20\x50\x79\x74\x68\x6f\x6e\x20\xe7\x9a\x84" -"\xe7\x92\xb0\xe5\xa2\x83\xe4\xb8\xad\xe6\xb8\xac\xe8\xa9\xa6\xe5" -"\x8f\x8a\xe6\x95\xb4\xe5\x90\x88\x2e\x20\xe5\x85\xb6\xe4\xb8\xad" -"\xe6\x9c\x80\xe4\xb8\xbb\xe8\xa6\x81\xe4\xb9\x9f\xe6\x98\xaf\xe6" -"\x88\x91\xe5\x80\x91\xe6\x89\x80\x0a\xe8\xa6\x81\xe8\xa8\x8e\xe8" -"\xab\x96\xe7\x9a\x84\xe5\x95\x8f\xe9\xa1\x8c\xe5\xb0\xb1\xe6\x98" -"\xaf\x3a\x0a\x0a"), -'big5hkscs': ( -"\x88\x45\x88\x5c\x8a\x73\x8b\xda\x8d\xd8\x0a\x88\x66\x88\x62\x88" -"\xa7\x20\x88\xa7\x88\xa3\x0a", -"\xf0\xa0\x84\x8c\xc4\x9a\xe9\xb5\xae\xe7\xbd\x93\xe6\xb4\x86\x0a" -"\xc3\x8a\xc3\x8a\xcc\x84\xc3\xaa\x20\xc3\xaa\xc3\xaa\xcc\x84\x0a"), -'cp949': ( -"\x8c\x63\xb9\xe6\xb0\xa2\xc7\xcf\x20\xbc\x84\xbd\xc3\xc4\xdd\xb6" -"\xf3\x0a\x0a\xa8\xc0\xa8\xc0\xb3\xb3\x21\x21\x20\xec\xd7\xce\xfa" -"\xea\xc5\xc6\xd0\x92\xe6\x90\x70\xb1\xc5\x20\xa8\xde\xa8\xd3\xc4" -"\x52\xa2\xaf\xa2\xaf\xa2\xaf\x20\xb1\xe0\x8a\x96\x20\xa8\xd1\xb5" -"\xb3\x20\xa8\xc0\x2e\x20\x2e\x0a\xe4\xac\xbf\xb5\xa8\xd1\xb4\xc9" -"\xc8\xc2\x20\x2e\x20\x2e\x20\x2e\x20\x2e\x20\xbc\xad\xbf\xef\xb7" -"\xef\x20\xb5\xaf\xc7\xd0\xeb\xe0\x20\xca\xab\xc4\x52\x20\x21\x20" -"\x21\x20\x21\xa4\xd0\x2e\xa4\xd0\x0a\xc8\xe5\xc8\xe5\xc8\xe5\x20" -"\xa4\xa1\xa4\xa1\xa4\xa1\xa1\xd9\xa4\xd0\x5f\xa4\xd0\x20\xbe\xee" -"\x90\x8a\x20\xc5\xcb\xc4\xe2\x83\x4f\x20\xb5\xae\xc0\xc0\x20\xaf" -"\x68\xce\xfa\xb5\xe9\xeb\xe0\x20\xa8\xc0\xb5\xe5\x83\x4f\x0a\xbc" -"\xb3\x90\x6a\x20\xca\xab\xc4\x52\x20\x2e\x20\x2e\x20\x2e\x20\x2e" -"\x20\xb1\xbc\xbe\xd6\x9a\x66\x20\xa8\xd1\xb1\xc5\x20\xa8\xde\x90" -"\x74\xa8\xc2\x83\x4f\x20\xec\xd7\xec\xd2\xf4\xb9\xe5\xfc\xf1\xe9" -"\xb1\xee\xa3\x8e\x0a\xbf\xcd\xbe\xac\xc4\x52\x20\x21\x20\x21\x20" -"\xe4\xac\xbf\xb5\xa8\xd1\x20\xca\xab\xb4\xc9\xb1\xc5\x20\xa1\xd9" -"\xdf\xbe\xb0\xfc\x20\xbe\xf8\xb4\xc9\xb1\xc5\xb4\xc9\x20\xe4\xac" -"\xb4\xc9\xb5\xd8\xc4\x52\x20\xb1\xdb\xbe\xd6\x8a\xdb\x0a\xa8\xde" -"\xb7\xc1\xb5\xe0\xce\xfa\x20\x9a\xc3\xc7\xb4\xbd\xa4\xc4\x52\x20" -"\xbe\xee\x90\x8a\x20\xec\xd7\xec\xd2\xf4\xb9\xe5\xfc\xf1\xe9\x9a" -"\xc4\xa8\xef\xb5\xe9\x9d\xda\x21\x21\x20\xa8\xc0\xa8\xc0\xb3\xb3" -"\xa2\xbd\x20\xa1\xd2\xa1\xd2\x2a\x0a\x0a", -"\xeb\x98\xa0\xeb\xb0\xa9\xea\xb0\x81\xed\x95\x98\x20\xed\x8e\xb2" -"\xec\x8b\x9c\xec\xbd\x9c\xeb\x9d\xbc\x0a\x0a\xe3\x89\xaf\xe3\x89" -"\xaf\xeb\x82\xa9\x21\x21\x20\xe5\x9b\xa0\xe4\xb9\x9d\xe6\x9c\x88" -"\xed\x8c\xa8\xeb\xaf\xa4\xeb\xa6\x94\xea\xb6\x88\x20\xe2\x93\xa1" -"\xe2\x93\x96\xed\x9b\x80\xc2\xbf\xc2\xbf\xc2\xbf\x20\xea\xb8\x8d" -"\xeb\x92\x99\x20\xe2\x93\x94\xeb\x8e\xa8\x20\xe3\x89\xaf\x2e\x20" -"\x2e\x0a\xe4\xba\x9e\xec\x98\x81\xe2\x93\x94\xeb\x8a\xa5\xed\x9a" -"\xb9\x20\x2e\x20\x2e\x20\x2e\x20\x2e\x20\xec\x84\x9c\xec\x9a\xb8" -"\xeb\xa4\x84\x20\xeb\x8e\x90\xed\x95\x99\xe4\xb9\x99\x20\xe5\xae" -"\xb6\xed\x9b\x80\x20\x21\x20\x21\x20\x21\xe3\x85\xa0\x2e\xe3\x85" -"\xa0\x0a\xed\x9d\x90\xed\x9d\x90\xed\x9d\x90\x20\xe3\x84\xb1\xe3" 
-"\x84\xb1\xe3\x84\xb1\xe2\x98\x86\xe3\x85\xa0\x5f\xe3\x85\xa0\x20" -"\xec\x96\xb4\xeb\xa6\xa8\x20\xed\x83\xb8\xec\xbd\xb0\xea\xb8\x90" -"\x20\xeb\x8e\x8c\xec\x9d\x91\x20\xec\xb9\x91\xe4\xb9\x9d\xeb\x93" -"\xa4\xe4\xb9\x99\x20\xe3\x89\xaf\xeb\x93\x9c\xea\xb8\x90\x0a\xec" -"\x84\xa4\xeb\xa6\x8c\x20\xe5\xae\xb6\xed\x9b\x80\x20\x2e\x20\x2e" -"\x20\x2e\x20\x2e\x20\xea\xb5\xb4\xec\x95\xa0\xec\x89\x8c\x20\xe2" -"\x93\x94\xea\xb6\x88\x20\xe2\x93\xa1\xeb\xa6\x98\xe3\x89\xb1\xea" -"\xb8\x90\x20\xe5\x9b\xa0\xe4\xbb\x81\xe5\xb7\x9d\xef\xa6\x81\xe4" -"\xb8\xad\xea\xb9\x8c\xec\xa6\xbc\x0a\xec\x99\x80\xec\x92\x80\xed" -"\x9b\x80\x20\x21\x20\x21\x20\xe4\xba\x9e\xec\x98\x81\xe2\x93\x94" -"\x20\xe5\xae\xb6\xeb\x8a\xa5\xea\xb6\x88\x20\xe2\x98\x86\xe4\xb8" -"\x8a\xea\xb4\x80\x20\xec\x97\x86\xeb\x8a\xa5\xea\xb6\x88\xeb\x8a" -"\xa5\x20\xe4\xba\x9e\xeb\x8a\xa5\xeb\x92\x88\xed\x9b\x80\x20\xea" -"\xb8\x80\xec\x95\xa0\xeb\x93\xb4\x0a\xe2\x93\xa1\xeb\xa0\xa4\xeb" -"\x93\x80\xe4\xb9\x9d\x20\xec\x8b\x80\xed\x92\x94\xec\x88\xb4\xed" -"\x9b\x80\x20\xec\x96\xb4\xeb\xa6\xa8\x20\xe5\x9b\xa0\xe4\xbb\x81" -"\xe5\xb7\x9d\xef\xa6\x81\xe4\xb8\xad\xec\x8b\x81\xe2\x91\xa8\xeb" -"\x93\xa4\xec\x95\x9c\x21\x21\x20\xe3\x89\xaf\xe3\x89\xaf\xeb\x82" -"\xa9\xe2\x99\xa1\x20\xe2\x8c\x92\xe2\x8c\x92\x2a\x0a\x0a"), -'euc_jisx0213': ( -"\x50\x79\x74\x68\x6f\x6e\x20\xa4\xce\xb3\xab\xc8\xaf\xa4\xcf\xa1" -"\xa2\x31\x39\x39\x30\x20\xc7\xaf\xa4\xb4\xa4\xed\xa4\xab\xa4\xe9" -"\xb3\xab\xbb\xcf\xa4\xb5\xa4\xec\xa4\xc6\xa4\xa4\xa4\xde\xa4\xb9" -"\xa1\xa3\x0a\xb3\xab\xc8\xaf\xbc\xd4\xa4\xce\x20\x47\x75\x69\x64" -"\x6f\x20\x76\x61\x6e\x20\x52\x6f\x73\x73\x75\x6d\x20\xa4\xcf\xb6" -"\xb5\xb0\xe9\xcd\xd1\xa4\xce\xa5\xd7\xa5\xed\xa5\xb0\xa5\xe9\xa5" -"\xdf\xa5\xf3\xa5\xb0\xb8\xc0\xb8\xec\xa1\xd6\x41\x42\x43\xa1\xd7" -"\xa4\xce\xb3\xab\xc8\xaf\xa4\xcb\xbb\xb2\xb2\xc3\xa4\xb7\xa4\xc6" -"\xa4\xa4\xa4\xde\xa4\xb7\xa4\xbf\xa4\xac\xa1\xa2\x41\x42\x43\x20" -"\xa4\xcf\xbc\xc2\xcd\xd1\xbe\xe5\xa4\xce\xcc\xdc\xc5\xaa\xa4\xcb" -"\xa4\xcf\xa4\xa2\xa4\xde\xa4\xea\xc5\xac\xa4\xb7\xa4\xc6\xa4\xa4" -"\xa4\xde\xa4\xbb\xa4\xf3\xa4\xc7\xa4\xb7\xa4\xbf\xa1\xa3\x0a\xa4" -"\xb3\xa4\xce\xa4\xbf\xa4\xe1\xa1\xa2\x47\x75\x69\x64\x6f\x20\xa4" -"\xcf\xa4\xe8\xa4\xea\xbc\xc2\xcd\xd1\xc5\xaa\xa4\xca\xa5\xd7\xa5" -"\xed\xa5\xb0\xa5\xe9\xa5\xdf\xa5\xf3\xa5\xb0\xb8\xc0\xb8\xec\xa4" -"\xce\xb3\xab\xc8\xaf\xa4\xf2\xb3\xab\xbb\xcf\xa4\xb7\xa1\xa2\xb1" -"\xd1\xb9\xf1\x20\x42\x42\x53\x20\xca\xfc\xc1\xf7\xa4\xce\xa5\xb3" -"\xa5\xe1\xa5\xc7\xa5\xa3\xc8\xd6\xc1\xc8\xa1\xd6\xa5\xe2\xa5\xf3" -"\xa5\xc6\xa5\xa3\x20\xa5\xd1\xa5\xa4\xa5\xbd\xa5\xf3\xa1\xd7\xa4" -"\xce\xa5\xd5\xa5\xa1\xa5\xf3\xa4\xc7\xa4\xa2\xa4\xeb\x20\x47\x75" -"\x69\x64\x6f\x20\xa4\xcf\xa4\xb3\xa4\xce\xb8\xc0\xb8\xec\xa4\xf2" -"\xa1\xd6\x50\x79\x74\x68\x6f\x6e\xa1\xd7\xa4\xc8\xcc\xbe\xa4\xc5" -"\xa4\xb1\xa4\xde\xa4\xb7\xa4\xbf\xa1\xa3\x0a\xa4\xb3\xa4\xce\xa4" -"\xe8\xa4\xa6\xa4\xca\xc7\xd8\xb7\xca\xa4\xab\xa4\xe9\xc0\xb8\xa4" -"\xde\xa4\xec\xa4\xbf\x20\x50\x79\x74\x68\x6f\x6e\x20\xa4\xce\xb8" -"\xc0\xb8\xec\xc0\xdf\xb7\xd7\xa4\xcf\xa1\xa2\xa1\xd6\xa5\xb7\xa5" -"\xf3\xa5\xd7\xa5\xeb\xa1\xd7\xa4\xc7\xa1\xd6\xbd\xac\xc6\xc0\xa4" -"\xac\xcd\xc6\xb0\xd7\xa1\xd7\xa4\xc8\xa4\xa4\xa4\xa6\xcc\xdc\xc9" -"\xb8\xa4\xcb\xbd\xc5\xc5\xc0\xa4\xac\xc3\xd6\xa4\xab\xa4\xec\xa4" -"\xc6\xa4\xa4\xa4\xde\xa4\xb9\xa1\xa3\x0a\xc2\xbf\xa4\xaf\xa4\xce" -"\xa5\xb9\xa5\xaf\xa5\xea\xa5\xd7\xa5\xc8\xb7\xcf\xb8\xc0\xb8\xec" -"\xa4\xc7\xa4\xcf\xa5\xe6\xa1\xbc\xa5\xb6\xa4\xce\xcc\xdc\xc0\xe8" -"\xa4\xce\xcd\xf8\xca\xd8\xc0\xad\xa4\xf2\xcd\xa5\xc0\xe8\xa4\xb7" 
-"\xa4\xc6\xbf\xa7\xa1\xb9\xa4\xca\xb5\xa1\xc7\xbd\xa4\xf2\xb8\xc0" -"\xb8\xec\xcd\xd7\xc1\xc7\xa4\xc8\xa4\xb7\xa4\xc6\xbc\xe8\xa4\xea" -"\xc6\xfe\xa4\xec\xa4\xeb\xbe\xec\xb9\xe7\xa4\xac\xc2\xbf\xa4\xa4" -"\xa4\xce\xa4\xc7\xa4\xb9\xa4\xac\xa1\xa2\x50\x79\x74\x68\x6f\x6e" -"\x20\xa4\xc7\xa4\xcf\xa4\xbd\xa4\xa6\xa4\xa4\xa4\xc3\xa4\xbf\xbe" -"\xae\xba\xd9\xb9\xa9\xa4\xac\xc4\xc9\xb2\xc3\xa4\xb5\xa4\xec\xa4" -"\xeb\xa4\xb3\xa4\xc8\xa4\xcf\xa4\xa2\xa4\xde\xa4\xea\xa4\xa2\xa4" -"\xea\xa4\xde\xa4\xbb\xa4\xf3\xa1\xa3\x0a\xb8\xc0\xb8\xec\xbc\xab" -"\xc2\xce\xa4\xce\xb5\xa1\xc7\xbd\xa4\xcf\xba\xc7\xbe\xae\xb8\xc2" -"\xa4\xcb\xb2\xa1\xa4\xb5\xa4\xa8\xa1\xa2\xc9\xac\xcd\xd7\xa4\xca" -"\xb5\xa1\xc7\xbd\xa4\xcf\xb3\xc8\xc4\xa5\xa5\xe2\xa5\xb8\xa5\xe5" -"\xa1\xbc\xa5\xeb\xa4\xc8\xa4\xb7\xa4\xc6\xc4\xc9\xb2\xc3\xa4\xb9" -"\xa4\xeb\xa1\xa2\xa4\xc8\xa4\xa4\xa4\xa6\xa4\xce\xa4\xac\x20\x50" -"\x79\x74\x68\x6f\x6e\x20\xa4\xce\xa5\xdd\xa5\xea\xa5\xb7\xa1\xbc" -"\xa4\xc7\xa4\xb9\xa1\xa3\x0a\x0a\xa5\xce\xa4\xf7\x20\xa5\xfe\x20" -"\xa5\xc8\xa5\xad\xaf\xac\xaf\xda\x20\xcf\xe3\x8f\xfe\xd8\x20\x8f" -"\xfe\xd4\x8f\xfe\xe8\x8f\xfc\xd6\x0a", -"\x50\x79\x74\x68\x6f\x6e\x20\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba" -"\xe3\x81\xaf\xe3\x80\x81\x31\x39\x39\x30\x20\xe5\xb9\xb4\xe3\x81" -"\x94\xe3\x82\x8d\xe3\x81\x8b\xe3\x82\x89\xe9\x96\x8b\xe5\xa7\x8b" -"\xe3\x81\x95\xe3\x82\x8c\xe3\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3" -"\x81\x99\xe3\x80\x82\x0a\xe9\x96\x8b\xe7\x99\xba\xe8\x80\x85\xe3" -"\x81\xae\x20\x47\x75\x69\x64\x6f\x20\x76\x61\x6e\x20\x52\x6f\x73" -"\x73\x75\x6d\x20\xe3\x81\xaf\xe6\x95\x99\xe8\x82\xb2\xe7\x94\xa8" -"\xe3\x81\xae\xe3\x83\x97\xe3\x83\xad\xe3\x82\xb0\xe3\x83\xa9\xe3" -"\x83\x9f\xe3\x83\xb3\xe3\x82\xb0\xe8\xa8\x80\xe8\xaa\x9e\xe3\x80" -"\x8c\x41\x42\x43\xe3\x80\x8d\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba" -"\xe3\x81\xab\xe5\x8f\x82\xe5\x8a\xa0\xe3\x81\x97\xe3\x81\xa6\xe3" -"\x81\x84\xe3\x81\xbe\xe3\x81\x97\xe3\x81\x9f\xe3\x81\x8c\xe3\x80" -"\x81\x41\x42\x43\x20\xe3\x81\xaf\xe5\xae\x9f\xe7\x94\xa8\xe4\xb8" -"\x8a\xe3\x81\xae\xe7\x9b\xae\xe7\x9a\x84\xe3\x81\xab\xe3\x81\xaf" -"\xe3\x81\x82\xe3\x81\xbe\xe3\x82\x8a\xe9\x81\xa9\xe3\x81\x97\xe3" -"\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3\x81\x9b\xe3\x82\x93\xe3\x81" -"\xa7\xe3\x81\x97\xe3\x81\x9f\xe3\x80\x82\x0a\xe3\x81\x93\xe3\x81" -"\xae\xe3\x81\x9f\xe3\x82\x81\xe3\x80\x81\x47\x75\x69\x64\x6f\x20" -"\xe3\x81\xaf\xe3\x82\x88\xe3\x82\x8a\xe5\xae\x9f\xe7\x94\xa8\xe7" -"\x9a\x84\xe3\x81\xaa\xe3\x83\x97\xe3\x83\xad\xe3\x82\xb0\xe3\x83" -"\xa9\xe3\x83\x9f\xe3\x83\xb3\xe3\x82\xb0\xe8\xa8\x80\xe8\xaa\x9e" -"\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba\xe3\x82\x92\xe9\x96\x8b\xe5" -"\xa7\x8b\xe3\x81\x97\xe3\x80\x81\xe8\x8b\xb1\xe5\x9b\xbd\x20\x42" -"\x42\x53\x20\xe6\x94\xbe\xe9\x80\x81\xe3\x81\xae\xe3\x82\xb3\xe3" -"\x83\xa1\xe3\x83\x87\xe3\x82\xa3\xe7\x95\xaa\xe7\xb5\x84\xe3\x80" -"\x8c\xe3\x83\xa2\xe3\x83\xb3\xe3\x83\x86\xe3\x82\xa3\x20\xe3\x83" -"\x91\xe3\x82\xa4\xe3\x82\xbd\xe3\x83\xb3\xe3\x80\x8d\xe3\x81\xae" -"\xe3\x83\x95\xe3\x82\xa1\xe3\x83\xb3\xe3\x81\xa7\xe3\x81\x82\xe3" -"\x82\x8b\x20\x47\x75\x69\x64\x6f\x20\xe3\x81\xaf\xe3\x81\x93\xe3" -"\x81\xae\xe8\xa8\x80\xe8\xaa\x9e\xe3\x82\x92\xe3\x80\x8c\x50\x79" -"\x74\x68\x6f\x6e\xe3\x80\x8d\xe3\x81\xa8\xe5\x90\x8d\xe3\x81\xa5" -"\xe3\x81\x91\xe3\x81\xbe\xe3\x81\x97\xe3\x81\x9f\xe3\x80\x82\x0a" -"\xe3\x81\x93\xe3\x81\xae\xe3\x82\x88\xe3\x81\x86\xe3\x81\xaa\xe8" -"\x83\x8c\xe6\x99\xaf\xe3\x81\x8b\xe3\x82\x89\xe7\x94\x9f\xe3\x81" -"\xbe\xe3\x82\x8c\xe3\x81\x9f\x20\x50\x79\x74\x68\x6f\x6e\x20\xe3" 
-"\x81\xae\xe8\xa8\x80\xe8\xaa\x9e\xe8\xa8\xad\xe8\xa8\x88\xe3\x81" -"\xaf\xe3\x80\x81\xe3\x80\x8c\xe3\x82\xb7\xe3\x83\xb3\xe3\x83\x97" -"\xe3\x83\xab\xe3\x80\x8d\xe3\x81\xa7\xe3\x80\x8c\xe7\xbf\x92\xe5" -"\xbe\x97\xe3\x81\x8c\xe5\xae\xb9\xe6\x98\x93\xe3\x80\x8d\xe3\x81" -"\xa8\xe3\x81\x84\xe3\x81\x86\xe7\x9b\xae\xe6\xa8\x99\xe3\x81\xab" -"\xe9\x87\x8d\xe7\x82\xb9\xe3\x81\x8c\xe7\xbd\xae\xe3\x81\x8b\xe3" -"\x82\x8c\xe3\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3\x81\x99\xe3\x80" -"\x82\x0a\xe5\xa4\x9a\xe3\x81\x8f\xe3\x81\xae\xe3\x82\xb9\xe3\x82" -"\xaf\xe3\x83\xaa\xe3\x83\x97\xe3\x83\x88\xe7\xb3\xbb\xe8\xa8\x80" -"\xe8\xaa\x9e\xe3\x81\xa7\xe3\x81\xaf\xe3\x83\xa6\xe3\x83\xbc\xe3" -"\x82\xb6\xe3\x81\xae\xe7\x9b\xae\xe5\x85\x88\xe3\x81\xae\xe5\x88" -"\xa9\xe4\xbe\xbf\xe6\x80\xa7\xe3\x82\x92\xe5\x84\xaa\xe5\x85\x88" -"\xe3\x81\x97\xe3\x81\xa6\xe8\x89\xb2\xe3\x80\x85\xe3\x81\xaa\xe6" -"\xa9\x9f\xe8\x83\xbd\xe3\x82\x92\xe8\xa8\x80\xe8\xaa\x9e\xe8\xa6" -"\x81\xe7\xb4\xa0\xe3\x81\xa8\xe3\x81\x97\xe3\x81\xa6\xe5\x8f\x96" -"\xe3\x82\x8a\xe5\x85\xa5\xe3\x82\x8c\xe3\x82\x8b\xe5\xa0\xb4\xe5" -"\x90\x88\xe3\x81\x8c\xe5\xa4\x9a\xe3\x81\x84\xe3\x81\xae\xe3\x81" -"\xa7\xe3\x81\x99\xe3\x81\x8c\xe3\x80\x81\x50\x79\x74\x68\x6f\x6e" -"\x20\xe3\x81\xa7\xe3\x81\xaf\xe3\x81\x9d\xe3\x81\x86\xe3\x81\x84" -"\xe3\x81\xa3\xe3\x81\x9f\xe5\xb0\x8f\xe7\xb4\xb0\xe5\xb7\xa5\xe3" -"\x81\x8c\xe8\xbf\xbd\xe5\x8a\xa0\xe3\x81\x95\xe3\x82\x8c\xe3\x82" -"\x8b\xe3\x81\x93\xe3\x81\xa8\xe3\x81\xaf\xe3\x81\x82\xe3\x81\xbe" -"\xe3\x82\x8a\xe3\x81\x82\xe3\x82\x8a\xe3\x81\xbe\xe3\x81\x9b\xe3" -"\x82\x93\xe3\x80\x82\x0a\xe8\xa8\x80\xe8\xaa\x9e\xe8\x87\xaa\xe4" -"\xbd\x93\xe3\x81\xae\xe6\xa9\x9f\xe8\x83\xbd\xe3\x81\xaf\xe6\x9c" -"\x80\xe5\xb0\x8f\xe9\x99\x90\xe3\x81\xab\xe6\x8a\xbc\xe3\x81\x95" -"\xe3\x81\x88\xe3\x80\x81\xe5\xbf\x85\xe8\xa6\x81\xe3\x81\xaa\xe6" -"\xa9\x9f\xe8\x83\xbd\xe3\x81\xaf\xe6\x8b\xa1\xe5\xbc\xb5\xe3\x83" -"\xa2\xe3\x82\xb8\xe3\x83\xa5\xe3\x83\xbc\xe3\x83\xab\xe3\x81\xa8" -"\xe3\x81\x97\xe3\x81\xa6\xe8\xbf\xbd\xe5\x8a\xa0\xe3\x81\x99\xe3" -"\x82\x8b\xe3\x80\x81\xe3\x81\xa8\xe3\x81\x84\xe3\x81\x86\xe3\x81" -"\xae\xe3\x81\x8c\x20\x50\x79\x74\x68\x6f\x6e\x20\xe3\x81\xae\xe3" -"\x83\x9d\xe3\x83\xaa\xe3\x82\xb7\xe3\x83\xbc\xe3\x81\xa7\xe3\x81" -"\x99\xe3\x80\x82\x0a\x0a\xe3\x83\x8e\xe3\x81\x8b\xe3\x82\x9a\x20" -"\xe3\x83\x88\xe3\x82\x9a\x20\xe3\x83\x88\xe3\x82\xad\xef\xa8\xb6" -"\xef\xa8\xb9\x20\xf0\xa1\x9a\xb4\xf0\xaa\x8e\x8c\x20\xe9\xba\x80" -"\xe9\xbd\x81\xf0\xa9\x9b\xb0\x0a"), -'euc_jp': ( -"\x50\x79\x74\x68\x6f\x6e\x20\xa4\xce\xb3\xab\xc8\xaf\xa4\xcf\xa1" -"\xa2\x31\x39\x39\x30\x20\xc7\xaf\xa4\xb4\xa4\xed\xa4\xab\xa4\xe9" -"\xb3\xab\xbb\xcf\xa4\xb5\xa4\xec\xa4\xc6\xa4\xa4\xa4\xde\xa4\xb9" -"\xa1\xa3\x0a\xb3\xab\xc8\xaf\xbc\xd4\xa4\xce\x20\x47\x75\x69\x64" -"\x6f\x20\x76\x61\x6e\x20\x52\x6f\x73\x73\x75\x6d\x20\xa4\xcf\xb6" -"\xb5\xb0\xe9\xcd\xd1\xa4\xce\xa5\xd7\xa5\xed\xa5\xb0\xa5\xe9\xa5" -"\xdf\xa5\xf3\xa5\xb0\xb8\xc0\xb8\xec\xa1\xd6\x41\x42\x43\xa1\xd7" -"\xa4\xce\xb3\xab\xc8\xaf\xa4\xcb\xbb\xb2\xb2\xc3\xa4\xb7\xa4\xc6" -"\xa4\xa4\xa4\xde\xa4\xb7\xa4\xbf\xa4\xac\xa1\xa2\x41\x42\x43\x20" -"\xa4\xcf\xbc\xc2\xcd\xd1\xbe\xe5\xa4\xce\xcc\xdc\xc5\xaa\xa4\xcb" -"\xa4\xcf\xa4\xa2\xa4\xde\xa4\xea\xc5\xac\xa4\xb7\xa4\xc6\xa4\xa4" -"\xa4\xde\xa4\xbb\xa4\xf3\xa4\xc7\xa4\xb7\xa4\xbf\xa1\xa3\x0a\xa4" -"\xb3\xa4\xce\xa4\xbf\xa4\xe1\xa1\xa2\x47\x75\x69\x64\x6f\x20\xa4" -"\xcf\xa4\xe8\xa4\xea\xbc\xc2\xcd\xd1\xc5\xaa\xa4\xca\xa5\xd7\xa5" -"\xed\xa5\xb0\xa5\xe9\xa5\xdf\xa5\xf3\xa5\xb0\xb8\xc0\xb8\xec\xa4" 
-"\xce\xb3\xab\xc8\xaf\xa4\xf2\xb3\xab\xbb\xcf\xa4\xb7\xa1\xa2\xb1" -"\xd1\xb9\xf1\x20\x42\x42\x53\x20\xca\xfc\xc1\xf7\xa4\xce\xa5\xb3" -"\xa5\xe1\xa5\xc7\xa5\xa3\xc8\xd6\xc1\xc8\xa1\xd6\xa5\xe2\xa5\xf3" -"\xa5\xc6\xa5\xa3\x20\xa5\xd1\xa5\xa4\xa5\xbd\xa5\xf3\xa1\xd7\xa4" -"\xce\xa5\xd5\xa5\xa1\xa5\xf3\xa4\xc7\xa4\xa2\xa4\xeb\x20\x47\x75" -"\x69\x64\x6f\x20\xa4\xcf\xa4\xb3\xa4\xce\xb8\xc0\xb8\xec\xa4\xf2" -"\xa1\xd6\x50\x79\x74\x68\x6f\x6e\xa1\xd7\xa4\xc8\xcc\xbe\xa4\xc5" -"\xa4\xb1\xa4\xde\xa4\xb7\xa4\xbf\xa1\xa3\x0a\xa4\xb3\xa4\xce\xa4" -"\xe8\xa4\xa6\xa4\xca\xc7\xd8\xb7\xca\xa4\xab\xa4\xe9\xc0\xb8\xa4" -"\xde\xa4\xec\xa4\xbf\x20\x50\x79\x74\x68\x6f\x6e\x20\xa4\xce\xb8" -"\xc0\xb8\xec\xc0\xdf\xb7\xd7\xa4\xcf\xa1\xa2\xa1\xd6\xa5\xb7\xa5" -"\xf3\xa5\xd7\xa5\xeb\xa1\xd7\xa4\xc7\xa1\xd6\xbd\xac\xc6\xc0\xa4" -"\xac\xcd\xc6\xb0\xd7\xa1\xd7\xa4\xc8\xa4\xa4\xa4\xa6\xcc\xdc\xc9" -"\xb8\xa4\xcb\xbd\xc5\xc5\xc0\xa4\xac\xc3\xd6\xa4\xab\xa4\xec\xa4" -"\xc6\xa4\xa4\xa4\xde\xa4\xb9\xa1\xa3\x0a\xc2\xbf\xa4\xaf\xa4\xce" -"\xa5\xb9\xa5\xaf\xa5\xea\xa5\xd7\xa5\xc8\xb7\xcf\xb8\xc0\xb8\xec" -"\xa4\xc7\xa4\xcf\xa5\xe6\xa1\xbc\xa5\xb6\xa4\xce\xcc\xdc\xc0\xe8" -"\xa4\xce\xcd\xf8\xca\xd8\xc0\xad\xa4\xf2\xcd\xa5\xc0\xe8\xa4\xb7" -"\xa4\xc6\xbf\xa7\xa1\xb9\xa4\xca\xb5\xa1\xc7\xbd\xa4\xf2\xb8\xc0" -"\xb8\xec\xcd\xd7\xc1\xc7\xa4\xc8\xa4\xb7\xa4\xc6\xbc\xe8\xa4\xea" -"\xc6\xfe\xa4\xec\xa4\xeb\xbe\xec\xb9\xe7\xa4\xac\xc2\xbf\xa4\xa4" -"\xa4\xce\xa4\xc7\xa4\xb9\xa4\xac\xa1\xa2\x50\x79\x74\x68\x6f\x6e" -"\x20\xa4\xc7\xa4\xcf\xa4\xbd\xa4\xa6\xa4\xa4\xa4\xc3\xa4\xbf\xbe" -"\xae\xba\xd9\xb9\xa9\xa4\xac\xc4\xc9\xb2\xc3\xa4\xb5\xa4\xec\xa4" -"\xeb\xa4\xb3\xa4\xc8\xa4\xcf\xa4\xa2\xa4\xde\xa4\xea\xa4\xa2\xa4" -"\xea\xa4\xde\xa4\xbb\xa4\xf3\xa1\xa3\x0a\xb8\xc0\xb8\xec\xbc\xab" -"\xc2\xce\xa4\xce\xb5\xa1\xc7\xbd\xa4\xcf\xba\xc7\xbe\xae\xb8\xc2" -"\xa4\xcb\xb2\xa1\xa4\xb5\xa4\xa8\xa1\xa2\xc9\xac\xcd\xd7\xa4\xca" -"\xb5\xa1\xc7\xbd\xa4\xcf\xb3\xc8\xc4\xa5\xa5\xe2\xa5\xb8\xa5\xe5" -"\xa1\xbc\xa5\xeb\xa4\xc8\xa4\xb7\xa4\xc6\xc4\xc9\xb2\xc3\xa4\xb9" -"\xa4\xeb\xa1\xa2\xa4\xc8\xa4\xa4\xa4\xa6\xa4\xce\xa4\xac\x20\x50" -"\x79\x74\x68\x6f\x6e\x20\xa4\xce\xa5\xdd\xa5\xea\xa5\xb7\xa1\xbc" -"\xa4\xc7\xa4\xb9\xa1\xa3\x0a\x0a", -"\x50\x79\x74\x68\x6f\x6e\x20\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba" -"\xe3\x81\xaf\xe3\x80\x81\x31\x39\x39\x30\x20\xe5\xb9\xb4\xe3\x81" -"\x94\xe3\x82\x8d\xe3\x81\x8b\xe3\x82\x89\xe9\x96\x8b\xe5\xa7\x8b" -"\xe3\x81\x95\xe3\x82\x8c\xe3\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3" -"\x81\x99\xe3\x80\x82\x0a\xe9\x96\x8b\xe7\x99\xba\xe8\x80\x85\xe3" -"\x81\xae\x20\x47\x75\x69\x64\x6f\x20\x76\x61\x6e\x20\x52\x6f\x73" -"\x73\x75\x6d\x20\xe3\x81\xaf\xe6\x95\x99\xe8\x82\xb2\xe7\x94\xa8" -"\xe3\x81\xae\xe3\x83\x97\xe3\x83\xad\xe3\x82\xb0\xe3\x83\xa9\xe3" -"\x83\x9f\xe3\x83\xb3\xe3\x82\xb0\xe8\xa8\x80\xe8\xaa\x9e\xe3\x80" -"\x8c\x41\x42\x43\xe3\x80\x8d\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba" -"\xe3\x81\xab\xe5\x8f\x82\xe5\x8a\xa0\xe3\x81\x97\xe3\x81\xa6\xe3" -"\x81\x84\xe3\x81\xbe\xe3\x81\x97\xe3\x81\x9f\xe3\x81\x8c\xe3\x80" -"\x81\x41\x42\x43\x20\xe3\x81\xaf\xe5\xae\x9f\xe7\x94\xa8\xe4\xb8" -"\x8a\xe3\x81\xae\xe7\x9b\xae\xe7\x9a\x84\xe3\x81\xab\xe3\x81\xaf" -"\xe3\x81\x82\xe3\x81\xbe\xe3\x82\x8a\xe9\x81\xa9\xe3\x81\x97\xe3" -"\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3\x81\x9b\xe3\x82\x93\xe3\x81" -"\xa7\xe3\x81\x97\xe3\x81\x9f\xe3\x80\x82\x0a\xe3\x81\x93\xe3\x81" -"\xae\xe3\x81\x9f\xe3\x82\x81\xe3\x80\x81\x47\x75\x69\x64\x6f\x20" -"\xe3\x81\xaf\xe3\x82\x88\xe3\x82\x8a\xe5\xae\x9f\xe7\x94\xa8\xe7" 
-"\x9a\x84\xe3\x81\xaa\xe3\x83\x97\xe3\x83\xad\xe3\x82\xb0\xe3\x83" -"\xa9\xe3\x83\x9f\xe3\x83\xb3\xe3\x82\xb0\xe8\xa8\x80\xe8\xaa\x9e" -"\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba\xe3\x82\x92\xe9\x96\x8b\xe5" -"\xa7\x8b\xe3\x81\x97\xe3\x80\x81\xe8\x8b\xb1\xe5\x9b\xbd\x20\x42" -"\x42\x53\x20\xe6\x94\xbe\xe9\x80\x81\xe3\x81\xae\xe3\x82\xb3\xe3" -"\x83\xa1\xe3\x83\x87\xe3\x82\xa3\xe7\x95\xaa\xe7\xb5\x84\xe3\x80" -"\x8c\xe3\x83\xa2\xe3\x83\xb3\xe3\x83\x86\xe3\x82\xa3\x20\xe3\x83" -"\x91\xe3\x82\xa4\xe3\x82\xbd\xe3\x83\xb3\xe3\x80\x8d\xe3\x81\xae" -"\xe3\x83\x95\xe3\x82\xa1\xe3\x83\xb3\xe3\x81\xa7\xe3\x81\x82\xe3" -"\x82\x8b\x20\x47\x75\x69\x64\x6f\x20\xe3\x81\xaf\xe3\x81\x93\xe3" -"\x81\xae\xe8\xa8\x80\xe8\xaa\x9e\xe3\x82\x92\xe3\x80\x8c\x50\x79" -"\x74\x68\x6f\x6e\xe3\x80\x8d\xe3\x81\xa8\xe5\x90\x8d\xe3\x81\xa5" -"\xe3\x81\x91\xe3\x81\xbe\xe3\x81\x97\xe3\x81\x9f\xe3\x80\x82\x0a" -"\xe3\x81\x93\xe3\x81\xae\xe3\x82\x88\xe3\x81\x86\xe3\x81\xaa\xe8" -"\x83\x8c\xe6\x99\xaf\xe3\x81\x8b\xe3\x82\x89\xe7\x94\x9f\xe3\x81" -"\xbe\xe3\x82\x8c\xe3\x81\x9f\x20\x50\x79\x74\x68\x6f\x6e\x20\xe3" -"\x81\xae\xe8\xa8\x80\xe8\xaa\x9e\xe8\xa8\xad\xe8\xa8\x88\xe3\x81" -"\xaf\xe3\x80\x81\xe3\x80\x8c\xe3\x82\xb7\xe3\x83\xb3\xe3\x83\x97" -"\xe3\x83\xab\xe3\x80\x8d\xe3\x81\xa7\xe3\x80\x8c\xe7\xbf\x92\xe5" -"\xbe\x97\xe3\x81\x8c\xe5\xae\xb9\xe6\x98\x93\xe3\x80\x8d\xe3\x81" -"\xa8\xe3\x81\x84\xe3\x81\x86\xe7\x9b\xae\xe6\xa8\x99\xe3\x81\xab" -"\xe9\x87\x8d\xe7\x82\xb9\xe3\x81\x8c\xe7\xbd\xae\xe3\x81\x8b\xe3" -"\x82\x8c\xe3\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3\x81\x99\xe3\x80" -"\x82\x0a\xe5\xa4\x9a\xe3\x81\x8f\xe3\x81\xae\xe3\x82\xb9\xe3\x82" -"\xaf\xe3\x83\xaa\xe3\x83\x97\xe3\x83\x88\xe7\xb3\xbb\xe8\xa8\x80" -"\xe8\xaa\x9e\xe3\x81\xa7\xe3\x81\xaf\xe3\x83\xa6\xe3\x83\xbc\xe3" -"\x82\xb6\xe3\x81\xae\xe7\x9b\xae\xe5\x85\x88\xe3\x81\xae\xe5\x88" -"\xa9\xe4\xbe\xbf\xe6\x80\xa7\xe3\x82\x92\xe5\x84\xaa\xe5\x85\x88" -"\xe3\x81\x97\xe3\x81\xa6\xe8\x89\xb2\xe3\x80\x85\xe3\x81\xaa\xe6" -"\xa9\x9f\xe8\x83\xbd\xe3\x82\x92\xe8\xa8\x80\xe8\xaa\x9e\xe8\xa6" -"\x81\xe7\xb4\xa0\xe3\x81\xa8\xe3\x81\x97\xe3\x81\xa6\xe5\x8f\x96" -"\xe3\x82\x8a\xe5\x85\xa5\xe3\x82\x8c\xe3\x82\x8b\xe5\xa0\xb4\xe5" -"\x90\x88\xe3\x81\x8c\xe5\xa4\x9a\xe3\x81\x84\xe3\x81\xae\xe3\x81" -"\xa7\xe3\x81\x99\xe3\x81\x8c\xe3\x80\x81\x50\x79\x74\x68\x6f\x6e" -"\x20\xe3\x81\xa7\xe3\x81\xaf\xe3\x81\x9d\xe3\x81\x86\xe3\x81\x84" -"\xe3\x81\xa3\xe3\x81\x9f\xe5\xb0\x8f\xe7\xb4\xb0\xe5\xb7\xa5\xe3" -"\x81\x8c\xe8\xbf\xbd\xe5\x8a\xa0\xe3\x81\x95\xe3\x82\x8c\xe3\x82" -"\x8b\xe3\x81\x93\xe3\x81\xa8\xe3\x81\xaf\xe3\x81\x82\xe3\x81\xbe" -"\xe3\x82\x8a\xe3\x81\x82\xe3\x82\x8a\xe3\x81\xbe\xe3\x81\x9b\xe3" -"\x82\x93\xe3\x80\x82\x0a\xe8\xa8\x80\xe8\xaa\x9e\xe8\x87\xaa\xe4" -"\xbd\x93\xe3\x81\xae\xe6\xa9\x9f\xe8\x83\xbd\xe3\x81\xaf\xe6\x9c" -"\x80\xe5\xb0\x8f\xe9\x99\x90\xe3\x81\xab\xe6\x8a\xbc\xe3\x81\x95" -"\xe3\x81\x88\xe3\x80\x81\xe5\xbf\x85\xe8\xa6\x81\xe3\x81\xaa\xe6" -"\xa9\x9f\xe8\x83\xbd\xe3\x81\xaf\xe6\x8b\xa1\xe5\xbc\xb5\xe3\x83" -"\xa2\xe3\x82\xb8\xe3\x83\xa5\xe3\x83\xbc\xe3\x83\xab\xe3\x81\xa8" -"\xe3\x81\x97\xe3\x81\xa6\xe8\xbf\xbd\xe5\x8a\xa0\xe3\x81\x99\xe3" -"\x82\x8b\xe3\x80\x81\xe3\x81\xa8\xe3\x81\x84\xe3\x81\x86\xe3\x81" -"\xae\xe3\x81\x8c\x20\x50\x79\x74\x68\x6f\x6e\x20\xe3\x81\xae\xe3" -"\x83\x9d\xe3\x83\xaa\xe3\x82\xb7\xe3\x83\xbc\xe3\x81\xa7\xe3\x81" -"\x99\xe3\x80\x82\x0a\x0a"), -'euc_kr': ( -"\xa1\xdd\x20\xc6\xc4\xc0\xcc\xbd\xe3\x28\x50\x79\x74\x68\x6f\x6e" -"\x29\xc0\xba\x20\xb9\xe8\xbf\xec\xb1\xe2\x20\xbd\xb1\xb0\xed\x2c" 
-"\x20\xb0\xad\xb7\xc2\xc7\xd1\x20\xc7\xc1\xb7\xce\xb1\xd7\xb7\xa1" -"\xb9\xd6\x20\xbe\xf0\xbe\xee\xc0\xd4\xb4\xcf\xb4\xd9\x2e\x20\xc6" -"\xc4\xc0\xcc\xbd\xe3\xc0\xba\x0a\xc8\xbf\xc0\xb2\xc0\xfb\xc0\xce" -"\x20\xb0\xed\xbc\xf6\xc1\xd8\x20\xb5\xa5\xc0\xcc\xc5\xcd\x20\xb1" -"\xb8\xc1\xb6\xbf\xcd\x20\xb0\xa3\xb4\xdc\xc7\xcf\xc1\xf6\xb8\xb8" -"\x20\xc8\xbf\xc0\xb2\xc0\xfb\xc0\xce\x20\xb0\xb4\xc3\xbc\xc1\xf6" -"\xc7\xe2\xc7\xc1\xb7\xce\xb1\xd7\xb7\xa1\xb9\xd6\xc0\xbb\x0a\xc1" -"\xf6\xbf\xf8\xc7\xd5\xb4\xcf\xb4\xd9\x2e\x20\xc6\xc4\xc0\xcc\xbd" -"\xe3\xc0\xc7\x20\xbf\xec\xbe\xc6\x28\xe9\xd0\xe4\xba\x29\xc7\xd1" -"\x20\xb9\xae\xb9\xfd\xb0\xfa\x20\xb5\xbf\xc0\xfb\x20\xc5\xb8\xc0" -"\xcc\xc7\xce\x2c\x20\xb1\xd7\xb8\xae\xb0\xed\x20\xc0\xce\xc5\xcd" -"\xc7\xc1\xb8\xae\xc6\xc3\x0a\xc8\xaf\xb0\xe6\xc0\xba\x20\xc6\xc4" -"\xc0\xcc\xbd\xe3\xc0\xbb\x20\xbd\xba\xc5\xa9\xb8\xb3\xc6\xc3\xb0" -"\xfa\x20\xbf\xa9\xb7\xaf\x20\xba\xd0\xbe\xdf\xbf\xa1\xbc\xad\xbf" -"\xcd\x20\xb4\xeb\xba\xce\xba\xd0\xc0\xc7\x20\xc7\xc3\xb7\xa7\xc6" -"\xfb\xbf\xa1\xbc\xad\xc0\xc7\x20\xba\xfc\xb8\xa5\x0a\xbe\xd6\xc7" -"\xc3\xb8\xae\xc4\xc9\xc0\xcc\xbc\xc7\x20\xb0\xb3\xb9\xdf\xc0\xbb" -"\x20\xc7\xd2\x20\xbc\xf6\x20\xc0\xd6\xb4\xc2\x20\xc0\xcc\xbb\xf3" -"\xc0\xfb\xc0\xce\x20\xbe\xf0\xbe\xee\xb7\xce\x20\xb8\xb8\xb5\xe9" -"\xbe\xee\xc1\xdd\xb4\xcf\xb4\xd9\x2e\x0a\x0a\xa1\xd9\xc3\xb9\xb0" -"\xa1\xb3\xa1\x3a\x20\xb3\xaf\xbe\xc6\xb6\xf3\x20\xa4\xd4\xa4\xb6" -"\xa4\xd0\xa4\xd4\xa4\xd4\xa4\xb6\xa4\xd0\xa4\xd4\xbe\xb1\x7e\x20" -"\xa4\xd4\xa4\xa4\xa4\xd2\xa4\xb7\xc5\xad\x21\x20\xa4\xd4\xa4\xa8" -"\xa4\xd1\xa4\xb7\xb1\xdd\xbe\xf8\xc0\xcc\x20\xc0\xfc\xa4\xd4\xa4" -"\xbe\xa4\xc8\xa4\xb2\xb4\xcf\xb4\xd9\x2e\x20\xa4\xd4\xa4\xb2\xa4" -"\xce\xa4\xaa\x2e\x20\xb1\xd7\xb7\xb1\xb0\xc5\x20\xa4\xd4\xa4\xb7" -"\xa4\xd1\xa4\xb4\xb4\xd9\x2e\x0a", -"\xe2\x97\x8e\x20\xed\x8c\x8c\xec\x9d\xb4\xec\x8d\xac\x28\x50\x79" -"\x74\x68\x6f\x6e\x29\xec\x9d\x80\x20\xeb\xb0\xb0\xec\x9a\xb0\xea" -"\xb8\xb0\x20\xec\x89\xbd\xea\xb3\xa0\x2c\x20\xea\xb0\x95\xeb\xa0" -"\xa5\xed\x95\x9c\x20\xed\x94\x84\xeb\xa1\x9c\xea\xb7\xb8\xeb\x9e" -"\x98\xeb\xb0\x8d\x20\xec\x96\xb8\xec\x96\xb4\xec\x9e\x85\xeb\x8b" -"\x88\xeb\x8b\xa4\x2e\x20\xed\x8c\x8c\xec\x9d\xb4\xec\x8d\xac\xec" -"\x9d\x80\x0a\xed\x9a\xa8\xec\x9c\xa8\xec\xa0\x81\xec\x9d\xb8\x20" -"\xea\xb3\xa0\xec\x88\x98\xec\xa4\x80\x20\xeb\x8d\xb0\xec\x9d\xb4" -"\xed\x84\xb0\x20\xea\xb5\xac\xec\xa1\xb0\xec\x99\x80\x20\xea\xb0" -"\x84\xeb\x8b\xa8\xed\x95\x98\xec\xa7\x80\xeb\xa7\x8c\x20\xed\x9a" -"\xa8\xec\x9c\xa8\xec\xa0\x81\xec\x9d\xb8\x20\xea\xb0\x9d\xec\xb2" -"\xb4\xec\xa7\x80\xed\x96\xa5\xed\x94\x84\xeb\xa1\x9c\xea\xb7\xb8" -"\xeb\x9e\x98\xeb\xb0\x8d\xec\x9d\x84\x0a\xec\xa7\x80\xec\x9b\x90" -"\xed\x95\xa9\xeb\x8b\x88\xeb\x8b\xa4\x2e\x20\xed\x8c\x8c\xec\x9d" -"\xb4\xec\x8d\xac\xec\x9d\x98\x20\xec\x9a\xb0\xec\x95\x84\x28\xe5" -"\x84\xaa\xe9\x9b\x85\x29\xed\x95\x9c\x20\xeb\xac\xb8\xeb\xb2\x95" -"\xea\xb3\xbc\x20\xeb\x8f\x99\xec\xa0\x81\x20\xed\x83\x80\xec\x9d" -"\xb4\xed\x95\x91\x2c\x20\xea\xb7\xb8\xeb\xa6\xac\xea\xb3\xa0\x20" -"\xec\x9d\xb8\xed\x84\xb0\xed\x94\x84\xeb\xa6\xac\xed\x8c\x85\x0a" -"\xed\x99\x98\xea\xb2\xbd\xec\x9d\x80\x20\xed\x8c\x8c\xec\x9d\xb4" -"\xec\x8d\xac\xec\x9d\x84\x20\xec\x8a\xa4\xed\x81\xac\xeb\xa6\xbd" -"\xed\x8c\x85\xea\xb3\xbc\x20\xec\x97\xac\xeb\x9f\xac\x20\xeb\xb6" -"\x84\xec\x95\xbc\xec\x97\x90\xec\x84\x9c\xec\x99\x80\x20\xeb\x8c" -"\x80\xeb\xb6\x80\xeb\xb6\x84\xec\x9d\x98\x20\xed\x94\x8c\xeb\x9e" -"\xab\xed\x8f\xbc\xec\x97\x90\xec\x84\x9c\xec\x9d\x98\x20\xeb\xb9" 
-"\xa0\xeb\xa5\xb8\x0a\xec\x95\xa0\xed\x94\x8c\xeb\xa6\xac\xec\xbc" -"\x80\xec\x9d\xb4\xec\x85\x98\x20\xea\xb0\x9c\xeb\xb0\x9c\xec\x9d" -"\x84\x20\xed\x95\xa0\x20\xec\x88\x98\x20\xec\x9e\x88\xeb\x8a\x94" -"\x20\xec\x9d\xb4\xec\x83\x81\xec\xa0\x81\xec\x9d\xb8\x20\xec\x96" -"\xb8\xec\x96\xb4\xeb\xa1\x9c\x20\xeb\xa7\x8c\xeb\x93\xa4\xec\x96" -"\xb4\xec\xa4\x8d\xeb\x8b\x88\xeb\x8b\xa4\x2e\x0a\x0a\xe2\x98\x86" -"\xec\xb2\xab\xea\xb0\x80\xeb\x81\x9d\x3a\x20\xeb\x82\xa0\xec\x95" -"\x84\xeb\x9d\xbc\x20\xec\x93\x94\xec\x93\x94\xec\x93\xa9\x7e\x20" -"\xeb\x8b\x81\xed\x81\xbc\x21\x20\xeb\x9c\xbd\xea\xb8\x88\xec\x97" -"\x86\xec\x9d\xb4\x20\xec\xa0\x84\xed\x99\xa5\xeb\x8b\x88\xeb\x8b" -"\xa4\x2e\x20\xeb\xb7\x81\x2e\x20\xea\xb7\xb8\xeb\x9f\xb0\xea\xb1" -"\xb0\x20\xec\x9d\x8e\xeb\x8b\xa4\x2e\x0a"), -'gb18030': ( -"\x50\x79\x74\x68\x6f\x6e\xa3\xa8\xc5\xc9\xc9\xad\xa3\xa9\xd3\xef" -"\xd1\xd4\xca\xc7\xd2\xbb\xd6\xd6\xb9\xa6\xc4\xdc\xc7\xbf\xb4\xf3" -"\xb6\xf8\xcd\xea\xc9\xc6\xb5\xc4\xcd\xa8\xd3\xc3\xd0\xcd\xbc\xc6" -"\xcb\xe3\xbb\xfa\xb3\xcc\xd0\xf2\xc9\xe8\xbc\xc6\xd3\xef\xd1\xd4" -"\xa3\xac\x0a\xd2\xd1\xbe\xad\xbe\xdf\xd3\xd0\xca\xae\xb6\xe0\xc4" -"\xea\xb5\xc4\xb7\xa2\xd5\xb9\xc0\xfa\xca\xb7\xa3\xac\xb3\xc9\xca" -"\xec\xc7\xd2\xce\xc8\xb6\xa8\xa1\xa3\xd5\xe2\xd6\xd6\xd3\xef\xd1" -"\xd4\xbe\xdf\xd3\xd0\xb7\xc7\xb3\xa3\xbc\xf2\xbd\xdd\xb6\xf8\xc7" -"\xe5\xce\xfa\x0a\xb5\xc4\xd3\xef\xb7\xa8\xcc\xd8\xb5\xe3\xa3\xac" -"\xca\xca\xba\xcf\xcd\xea\xb3\xc9\xb8\xf7\xd6\xd6\xb8\xdf\xb2\xe3" -"\xc8\xce\xce\xf1\xa3\xac\xbc\xb8\xba\xf5\xbf\xc9\xd2\xd4\xd4\xda" -"\xcb\xf9\xd3\xd0\xb5\xc4\xb2\xd9\xd7\xf7\xcf\xb5\xcd\xb3\xd6\xd0" -"\x0a\xd4\xcb\xd0\xd0\xa1\xa3\xd5\xe2\xd6\xd6\xd3\xef\xd1\xd4\xbc" -"\xf2\xb5\xa5\xb6\xf8\xc7\xbf\xb4\xf3\xa3\xac\xca\xca\xba\xcf\xb8" -"\xf7\xd6\xd6\xc8\xcb\xca\xbf\xd1\xa7\xcf\xb0\xca\xb9\xd3\xc3\xa1" -"\xa3\xc4\xbf\xc7\xb0\xa3\xac\xbb\xf9\xd3\xda\xd5\xe2\x0a\xd6\xd6" -"\xd3\xef\xd1\xd4\xb5\xc4\xcf\xe0\xb9\xd8\xbc\xbc\xca\xf5\xd5\xfd" -"\xd4\xda\xb7\xc9\xcb\xd9\xb5\xc4\xb7\xa2\xd5\xb9\xa3\xac\xd3\xc3" -"\xbb\xa7\xca\xfd\xc1\xbf\xbc\xb1\xbe\xe7\xc0\xa9\xb4\xf3\xa3\xac" -"\xcf\xe0\xb9\xd8\xb5\xc4\xd7\xca\xd4\xb4\xb7\xc7\xb3\xa3\xb6\xe0" -"\xa1\xa3\x0a\xc8\xe7\xba\xce\xd4\xda\x20\x50\x79\x74\x68\x6f\x6e" -"\x20\xd6\xd0\xca\xb9\xd3\xc3\xbc\xc8\xd3\xd0\xb5\xc4\x20\x43\x20" -"\x6c\x69\x62\x72\x61\x72\x79\x3f\x0a\xa1\xa1\xd4\xda\xd9\x59\xd3" -"\x8d\xbf\xc6\xbc\xbc\xbf\xec\xcb\xd9\xb0\x6c\xd5\xb9\xb5\xc4\xbd" -"\xf1\xcc\xec\x2c\x20\xe9\x5f\xb0\x6c\xbc\xb0\x9c\x79\xd4\x87\xdc" -"\x9b\xf3\x77\xb5\xc4\xcb\xd9\xb6\xc8\xca\xc7\xb2\xbb\xc8\xdd\xba" -"\xf6\xd2\x95\xb5\xc4\x0a\xd5\x6e\xee\x7d\x2e\x20\x9e\xe9\xbc\xd3" -"\xbf\xec\xe9\x5f\xb0\x6c\xbc\xb0\x9c\x79\xd4\x87\xb5\xc4\xcb\xd9" -"\xb6\xc8\x2c\x20\xce\xd2\x82\x83\xb1\xe3\xb3\xa3\xcf\xa3\xcd\xfb" -"\xc4\xdc\xc0\xfb\xd3\xc3\xd2\xbb\xd0\xa9\xd2\xd1\xe9\x5f\xb0\x6c" -"\xba\xc3\xb5\xc4\x0a\x6c\x69\x62\x72\x61\x72\x79\x2c\x20\x81\x4b" -"\xd3\xd0\xd2\xbb\x82\x80\x20\x66\x61\x73\x74\x20\x70\x72\x6f\x74" -"\x6f\x74\x79\x70\x69\x6e\x67\x20\xb5\xc4\x20\x70\x72\x6f\x67\x72" -"\x61\x6d\x6d\x69\x6e\x67\x20\x6c\x61\x6e\x67\x75\x61\x67\x65\x20" -"\xbf\xc9\x0a\xb9\xa9\xca\xb9\xd3\xc3\x2e\x20\xc4\xbf\xc7\xb0\xd3" -"\xd0\xd4\x53\xd4\x53\xb6\xe0\xb6\xe0\xb5\xc4\x20\x6c\x69\x62\x72" -"\x61\x72\x79\x20\xca\xc7\xd2\xd4\x20\x43\x20\x8c\x91\xb3\xc9\x2c" -"\x20\xb6\xf8\x20\x50\x79\x74\x68\x6f\x6e\x20\xca\xc7\xd2\xbb\x82" -"\x80\x0a\x66\x61\x73\x74\x20\x70\x72\x6f\x74\x6f\x74\x79\x70\x69" -"\x6e\x67\x20\xb5\xc4\x20\x70\x72\x6f\x67\x72\x61\x6d\x6d\x69\x6e" 
-"\x67\x20\x6c\x61\x6e\x67\x75\x61\x67\x65\x2e\x20\xb9\xca\xce\xd2" -"\x82\x83\xcf\xa3\xcd\xfb\xc4\xdc\x8c\xa2\xbc\xc8\xd3\xd0\xb5\xc4" -"\x0a\x43\x20\x6c\x69\x62\x72\x61\x72\x79\x20\xc4\xc3\xb5\xbd\x20" -"\x50\x79\x74\x68\x6f\x6e\x20\xb5\xc4\xad\x68\xbe\xb3\xd6\xd0\x9c" -"\x79\xd4\x87\xbc\xb0\xd5\xfb\xba\xcf\x2e\x20\xc6\xe4\xd6\xd0\xd7" -"\xee\xd6\xf7\xd2\xaa\xd2\xb2\xca\xc7\xce\xd2\x82\x83\xcb\xf9\x0a" -"\xd2\xaa\xd3\x91\xd5\x93\xb5\xc4\x86\x96\xee\x7d\xbe\xcd\xca\xc7" -"\x3a\x0a\x83\x35\xc7\x31\x83\x33\x9a\x33\x83\x32\xb1\x31\x83\x33" -"\x95\x31\x20\x82\x37\xd1\x36\x83\x30\x8c\x34\x83\x36\x84\x33\x20" -"\x82\x38\x89\x35\x82\x38\xfb\x36\x83\x33\x95\x35\x20\x83\x33\xd5" -"\x31\x82\x39\x81\x35\x20\x83\x30\xfd\x39\x83\x33\x86\x30\x20\x83" -"\x34\xdc\x33\x83\x35\xf6\x37\x83\x35\x97\x35\x20\x83\x35\xf9\x35" -"\x83\x30\x91\x39\x82\x38\x83\x39\x82\x39\xfc\x33\x83\x30\xf0\x34" -"\x20\x83\x32\xeb\x39\x83\x32\xeb\x35\x82\x39\x83\x39\x2e\x0a\x0a", -"\x50\x79\x74\x68\x6f\x6e\xef\xbc\x88\xe6\xb4\xbe\xe6\xa3\xae\xef" -"\xbc\x89\xe8\xaf\xad\xe8\xa8\x80\xe6\x98\xaf\xe4\xb8\x80\xe7\xa7" -"\x8d\xe5\x8a\x9f\xe8\x83\xbd\xe5\xbc\xba\xe5\xa4\xa7\xe8\x80\x8c" -"\xe5\xae\x8c\xe5\x96\x84\xe7\x9a\x84\xe9\x80\x9a\xe7\x94\xa8\xe5" -"\x9e\x8b\xe8\xae\xa1\xe7\xae\x97\xe6\x9c\xba\xe7\xa8\x8b\xe5\xba" -"\x8f\xe8\xae\xbe\xe8\xae\xa1\xe8\xaf\xad\xe8\xa8\x80\xef\xbc\x8c" -"\x0a\xe5\xb7\xb2\xe7\xbb\x8f\xe5\x85\xb7\xe6\x9c\x89\xe5\x8d\x81" -"\xe5\xa4\x9a\xe5\xb9\xb4\xe7\x9a\x84\xe5\x8f\x91\xe5\xb1\x95\xe5" -"\x8e\x86\xe5\x8f\xb2\xef\xbc\x8c\xe6\x88\x90\xe7\x86\x9f\xe4\xb8" -"\x94\xe7\xa8\xb3\xe5\xae\x9a\xe3\x80\x82\xe8\xbf\x99\xe7\xa7\x8d" -"\xe8\xaf\xad\xe8\xa8\x80\xe5\x85\xb7\xe6\x9c\x89\xe9\x9d\x9e\xe5" -"\xb8\xb8\xe7\xae\x80\xe6\x8d\xb7\xe8\x80\x8c\xe6\xb8\x85\xe6\x99" -"\xb0\x0a\xe7\x9a\x84\xe8\xaf\xad\xe6\xb3\x95\xe7\x89\xb9\xe7\x82" -"\xb9\xef\xbc\x8c\xe9\x80\x82\xe5\x90\x88\xe5\xae\x8c\xe6\x88\x90" -"\xe5\x90\x84\xe7\xa7\x8d\xe9\xab\x98\xe5\xb1\x82\xe4\xbb\xbb\xe5" -"\x8a\xa1\xef\xbc\x8c\xe5\x87\xa0\xe4\xb9\x8e\xe5\x8f\xaf\xe4\xbb" -"\xa5\xe5\x9c\xa8\xe6\x89\x80\xe6\x9c\x89\xe7\x9a\x84\xe6\x93\x8d" -"\xe4\xbd\x9c\xe7\xb3\xbb\xe7\xbb\x9f\xe4\xb8\xad\x0a\xe8\xbf\x90" -"\xe8\xa1\x8c\xe3\x80\x82\xe8\xbf\x99\xe7\xa7\x8d\xe8\xaf\xad\xe8" -"\xa8\x80\xe7\xae\x80\xe5\x8d\x95\xe8\x80\x8c\xe5\xbc\xba\xe5\xa4" -"\xa7\xef\xbc\x8c\xe9\x80\x82\xe5\x90\x88\xe5\x90\x84\xe7\xa7\x8d" -"\xe4\xba\xba\xe5\xa3\xab\xe5\xad\xa6\xe4\xb9\xa0\xe4\xbd\xbf\xe7" -"\x94\xa8\xe3\x80\x82\xe7\x9b\xae\xe5\x89\x8d\xef\xbc\x8c\xe5\x9f" -"\xba\xe4\xba\x8e\xe8\xbf\x99\x0a\xe7\xa7\x8d\xe8\xaf\xad\xe8\xa8" -"\x80\xe7\x9a\x84\xe7\x9b\xb8\xe5\x85\xb3\xe6\x8a\x80\xe6\x9c\xaf" -"\xe6\xad\xa3\xe5\x9c\xa8\xe9\xa3\x9e\xe9\x80\x9f\xe7\x9a\x84\xe5" -"\x8f\x91\xe5\xb1\x95\xef\xbc\x8c\xe7\x94\xa8\xe6\x88\xb7\xe6\x95" -"\xb0\xe9\x87\x8f\xe6\x80\xa5\xe5\x89\xa7\xe6\x89\xa9\xe5\xa4\xa7" -"\xef\xbc\x8c\xe7\x9b\xb8\xe5\x85\xb3\xe7\x9a\x84\xe8\xb5\x84\xe6" -"\xba\x90\xe9\x9d\x9e\xe5\xb8\xb8\xe5\xa4\x9a\xe3\x80\x82\x0a\xe5" -"\xa6\x82\xe4\xbd\x95\xe5\x9c\xa8\x20\x50\x79\x74\x68\x6f\x6e\x20" -"\xe4\xb8\xad\xe4\xbd\xbf\xe7\x94\xa8\xe6\x97\xa2\xe6\x9c\x89\xe7" -"\x9a\x84\x20\x43\x20\x6c\x69\x62\x72\x61\x72\x79\x3f\x0a\xe3\x80" -"\x80\xe5\x9c\xa8\xe8\xb3\x87\xe8\xa8\x8a\xe7\xa7\x91\xe6\x8a\x80" -"\xe5\xbf\xab\xe9\x80\x9f\xe7\x99\xbc\xe5\xb1\x95\xe7\x9a\x84\xe4" -"\xbb\x8a\xe5\xa4\xa9\x2c\x20\xe9\x96\x8b\xe7\x99\xbc\xe5\x8f\x8a" -"\xe6\xb8\xac\xe8\xa9\xa6\xe8\xbb\x9f\xe9\xab\x94\xe7\x9a\x84\xe9" -"\x80\x9f\xe5\xba\xa6\xe6\x98\xaf\xe4\xb8\x8d\xe5\xae\xb9\xe5\xbf" 
-"\xbd\xe8\xa6\x96\xe7\x9a\x84\x0a\xe8\xaa\xb2\xe9\xa1\x8c\x2e\x20" -"\xe7\x82\xba\xe5\x8a\xa0\xe5\xbf\xab\xe9\x96\x8b\xe7\x99\xbc\xe5" -"\x8f\x8a\xe6\xb8\xac\xe8\xa9\xa6\xe7\x9a\x84\xe9\x80\x9f\xe5\xba" -"\xa6\x2c\x20\xe6\x88\x91\xe5\x80\x91\xe4\xbe\xbf\xe5\xb8\xb8\xe5" -"\xb8\x8c\xe6\x9c\x9b\xe8\x83\xbd\xe5\x88\xa9\xe7\x94\xa8\xe4\xb8" -"\x80\xe4\xba\x9b\xe5\xb7\xb2\xe9\x96\x8b\xe7\x99\xbc\xe5\xa5\xbd" -"\xe7\x9a\x84\x0a\x6c\x69\x62\x72\x61\x72\x79\x2c\x20\xe4\xb8\xa6" -"\xe6\x9c\x89\xe4\xb8\x80\xe5\x80\x8b\x20\x66\x61\x73\x74\x20\x70" -"\x72\x6f\x74\x6f\x74\x79\x70\x69\x6e\x67\x20\xe7\x9a\x84\x20\x70" -"\x72\x6f\x67\x72\x61\x6d\x6d\x69\x6e\x67\x20\x6c\x61\x6e\x67\x75" -"\x61\x67\x65\x20\xe5\x8f\xaf\x0a\xe4\xbe\x9b\xe4\xbd\xbf\xe7\x94" -"\xa8\x2e\x20\xe7\x9b\xae\xe5\x89\x8d\xe6\x9c\x89\xe8\xa8\xb1\xe8" -"\xa8\xb1\xe5\xa4\x9a\xe5\xa4\x9a\xe7\x9a\x84\x20\x6c\x69\x62\x72" -"\x61\x72\x79\x20\xe6\x98\xaf\xe4\xbb\xa5\x20\x43\x20\xe5\xaf\xab" -"\xe6\x88\x90\x2c\x20\xe8\x80\x8c\x20\x50\x79\x74\x68\x6f\x6e\x20" -"\xe6\x98\xaf\xe4\xb8\x80\xe5\x80\x8b\x0a\x66\x61\x73\x74\x20\x70" -"\x72\x6f\x74\x6f\x74\x79\x70\x69\x6e\x67\x20\xe7\x9a\x84\x20\x70" -"\x72\x6f\x67\x72\x61\x6d\x6d\x69\x6e\x67\x20\x6c\x61\x6e\x67\x75" -"\x61\x67\x65\x2e\x20\xe6\x95\x85\xe6\x88\x91\xe5\x80\x91\xe5\xb8" -"\x8c\xe6\x9c\x9b\xe8\x83\xbd\xe5\xb0\x87\xe6\x97\xa2\xe6\x9c\x89" -"\xe7\x9a\x84\x0a\x43\x20\x6c\x69\x62\x72\x61\x72\x79\x20\xe6\x8b" -"\xbf\xe5\x88\xb0\x20\x50\x79\x74\x68\x6f\x6e\x20\xe7\x9a\x84\xe7" -"\x92\xb0\xe5\xa2\x83\xe4\xb8\xad\xe6\xb8\xac\xe8\xa9\xa6\xe5\x8f" -"\x8a\xe6\x95\xb4\xe5\x90\x88\x2e\x20\xe5\x85\xb6\xe4\xb8\xad\xe6" -"\x9c\x80\xe4\xb8\xbb\xe8\xa6\x81\xe4\xb9\x9f\xe6\x98\xaf\xe6\x88" -"\x91\xe5\x80\x91\xe6\x89\x80\x0a\xe8\xa6\x81\xe8\xa8\x8e\xe8\xab" -"\x96\xe7\x9a\x84\xe5\x95\x8f\xe9\xa1\x8c\xe5\xb0\xb1\xe6\x98\xaf" -"\x3a\x0a\xed\x8c\x8c\xec\x9d\xb4\xec\x8d\xac\xec\x9d\x80\x20\xea" -"\xb0\x95\xeb\xa0\xa5\xed\x95\x9c\x20\xea\xb8\xb0\xeb\x8a\xa5\xec" -"\x9d\x84\x20\xec\xa7\x80\xeb\x8b\x8c\x20\xeb\xb2\x94\xec\x9a\xa9" -"\x20\xec\xbb\xb4\xed\x93\xa8\xed\x84\xb0\x20\xed\x94\x84\xeb\xa1" -"\x9c\xea\xb7\xb8\xeb\x9e\x98\xeb\xb0\x8d\x20\xec\x96\xb8\xec\x96" -"\xb4\xeb\x8b\xa4\x2e\x0a\x0a"), -'gb2312': ( -"\x50\x79\x74\x68\x6f\x6e\xa3\xa8\xc5\xc9\xc9\xad\xa3\xa9\xd3\xef" -"\xd1\xd4\xca\xc7\xd2\xbb\xd6\xd6\xb9\xa6\xc4\xdc\xc7\xbf\xb4\xf3" -"\xb6\xf8\xcd\xea\xc9\xc6\xb5\xc4\xcd\xa8\xd3\xc3\xd0\xcd\xbc\xc6" -"\xcb\xe3\xbb\xfa\xb3\xcc\xd0\xf2\xc9\xe8\xbc\xc6\xd3\xef\xd1\xd4" -"\xa3\xac\x0a\xd2\xd1\xbe\xad\xbe\xdf\xd3\xd0\xca\xae\xb6\xe0\xc4" -"\xea\xb5\xc4\xb7\xa2\xd5\xb9\xc0\xfa\xca\xb7\xa3\xac\xb3\xc9\xca" -"\xec\xc7\xd2\xce\xc8\xb6\xa8\xa1\xa3\xd5\xe2\xd6\xd6\xd3\xef\xd1" -"\xd4\xbe\xdf\xd3\xd0\xb7\xc7\xb3\xa3\xbc\xf2\xbd\xdd\xb6\xf8\xc7" -"\xe5\xce\xfa\x0a\xb5\xc4\xd3\xef\xb7\xa8\xcc\xd8\xb5\xe3\xa3\xac" -"\xca\xca\xba\xcf\xcd\xea\xb3\xc9\xb8\xf7\xd6\xd6\xb8\xdf\xb2\xe3" -"\xc8\xce\xce\xf1\xa3\xac\xbc\xb8\xba\xf5\xbf\xc9\xd2\xd4\xd4\xda" -"\xcb\xf9\xd3\xd0\xb5\xc4\xb2\xd9\xd7\xf7\xcf\xb5\xcd\xb3\xd6\xd0" -"\x0a\xd4\xcb\xd0\xd0\xa1\xa3\xd5\xe2\xd6\xd6\xd3\xef\xd1\xd4\xbc" -"\xf2\xb5\xa5\xb6\xf8\xc7\xbf\xb4\xf3\xa3\xac\xca\xca\xba\xcf\xb8" -"\xf7\xd6\xd6\xc8\xcb\xca\xbf\xd1\xa7\xcf\xb0\xca\xb9\xd3\xc3\xa1" -"\xa3\xc4\xbf\xc7\xb0\xa3\xac\xbb\xf9\xd3\xda\xd5\xe2\x0a\xd6\xd6" -"\xd3\xef\xd1\xd4\xb5\xc4\xcf\xe0\xb9\xd8\xbc\xbc\xca\xf5\xd5\xfd" -"\xd4\xda\xb7\xc9\xcb\xd9\xb5\xc4\xb7\xa2\xd5\xb9\xa3\xac\xd3\xc3" -"\xbb\xa7\xca\xfd\xc1\xbf\xbc\xb1\xbe\xe7\xc0\xa9\xb4\xf3\xa3\xac" 
-"\xcf\xe0\xb9\xd8\xb5\xc4\xd7\xca\xd4\xb4\xb7\xc7\xb3\xa3\xb6\xe0" -"\xa1\xa3\x0a\x0a", -"\x50\x79\x74\x68\x6f\x6e\xef\xbc\x88\xe6\xb4\xbe\xe6\xa3\xae\xef" -"\xbc\x89\xe8\xaf\xad\xe8\xa8\x80\xe6\x98\xaf\xe4\xb8\x80\xe7\xa7" -"\x8d\xe5\x8a\x9f\xe8\x83\xbd\xe5\xbc\xba\xe5\xa4\xa7\xe8\x80\x8c" -"\xe5\xae\x8c\xe5\x96\x84\xe7\x9a\x84\xe9\x80\x9a\xe7\x94\xa8\xe5" -"\x9e\x8b\xe8\xae\xa1\xe7\xae\x97\xe6\x9c\xba\xe7\xa8\x8b\xe5\xba" -"\x8f\xe8\xae\xbe\xe8\xae\xa1\xe8\xaf\xad\xe8\xa8\x80\xef\xbc\x8c" -"\x0a\xe5\xb7\xb2\xe7\xbb\x8f\xe5\x85\xb7\xe6\x9c\x89\xe5\x8d\x81" -"\xe5\xa4\x9a\xe5\xb9\xb4\xe7\x9a\x84\xe5\x8f\x91\xe5\xb1\x95\xe5" -"\x8e\x86\xe5\x8f\xb2\xef\xbc\x8c\xe6\x88\x90\xe7\x86\x9f\xe4\xb8" -"\x94\xe7\xa8\xb3\xe5\xae\x9a\xe3\x80\x82\xe8\xbf\x99\xe7\xa7\x8d" -"\xe8\xaf\xad\xe8\xa8\x80\xe5\x85\xb7\xe6\x9c\x89\xe9\x9d\x9e\xe5" -"\xb8\xb8\xe7\xae\x80\xe6\x8d\xb7\xe8\x80\x8c\xe6\xb8\x85\xe6\x99" -"\xb0\x0a\xe7\x9a\x84\xe8\xaf\xad\xe6\xb3\x95\xe7\x89\xb9\xe7\x82" -"\xb9\xef\xbc\x8c\xe9\x80\x82\xe5\x90\x88\xe5\xae\x8c\xe6\x88\x90" -"\xe5\x90\x84\xe7\xa7\x8d\xe9\xab\x98\xe5\xb1\x82\xe4\xbb\xbb\xe5" -"\x8a\xa1\xef\xbc\x8c\xe5\x87\xa0\xe4\xb9\x8e\xe5\x8f\xaf\xe4\xbb" -"\xa5\xe5\x9c\xa8\xe6\x89\x80\xe6\x9c\x89\xe7\x9a\x84\xe6\x93\x8d" -"\xe4\xbd\x9c\xe7\xb3\xbb\xe7\xbb\x9f\xe4\xb8\xad\x0a\xe8\xbf\x90" -"\xe8\xa1\x8c\xe3\x80\x82\xe8\xbf\x99\xe7\xa7\x8d\xe8\xaf\xad\xe8" -"\xa8\x80\xe7\xae\x80\xe5\x8d\x95\xe8\x80\x8c\xe5\xbc\xba\xe5\xa4" -"\xa7\xef\xbc\x8c\xe9\x80\x82\xe5\x90\x88\xe5\x90\x84\xe7\xa7\x8d" -"\xe4\xba\xba\xe5\xa3\xab\xe5\xad\xa6\xe4\xb9\xa0\xe4\xbd\xbf\xe7" -"\x94\xa8\xe3\x80\x82\xe7\x9b\xae\xe5\x89\x8d\xef\xbc\x8c\xe5\x9f" -"\xba\xe4\xba\x8e\xe8\xbf\x99\x0a\xe7\xa7\x8d\xe8\xaf\xad\xe8\xa8" -"\x80\xe7\x9a\x84\xe7\x9b\xb8\xe5\x85\xb3\xe6\x8a\x80\xe6\x9c\xaf" -"\xe6\xad\xa3\xe5\x9c\xa8\xe9\xa3\x9e\xe9\x80\x9f\xe7\x9a\x84\xe5" -"\x8f\x91\xe5\xb1\x95\xef\xbc\x8c\xe7\x94\xa8\xe6\x88\xb7\xe6\x95" -"\xb0\xe9\x87\x8f\xe6\x80\xa5\xe5\x89\xa7\xe6\x89\xa9\xe5\xa4\xa7" -"\xef\xbc\x8c\xe7\x9b\xb8\xe5\x85\xb3\xe7\x9a\x84\xe8\xb5\x84\xe6" -"\xba\x90\xe9\x9d\x9e\xe5\xb8\xb8\xe5\xa4\x9a\xe3\x80\x82\x0a\x0a"), -'gbk': ( -"\x50\x79\x74\x68\x6f\x6e\xa3\xa8\xc5\xc9\xc9\xad\xa3\xa9\xd3\xef" -"\xd1\xd4\xca\xc7\xd2\xbb\xd6\xd6\xb9\xa6\xc4\xdc\xc7\xbf\xb4\xf3" -"\xb6\xf8\xcd\xea\xc9\xc6\xb5\xc4\xcd\xa8\xd3\xc3\xd0\xcd\xbc\xc6" -"\xcb\xe3\xbb\xfa\xb3\xcc\xd0\xf2\xc9\xe8\xbc\xc6\xd3\xef\xd1\xd4" -"\xa3\xac\x0a\xd2\xd1\xbe\xad\xbe\xdf\xd3\xd0\xca\xae\xb6\xe0\xc4" -"\xea\xb5\xc4\xb7\xa2\xd5\xb9\xc0\xfa\xca\xb7\xa3\xac\xb3\xc9\xca" -"\xec\xc7\xd2\xce\xc8\xb6\xa8\xa1\xa3\xd5\xe2\xd6\xd6\xd3\xef\xd1" -"\xd4\xbe\xdf\xd3\xd0\xb7\xc7\xb3\xa3\xbc\xf2\xbd\xdd\xb6\xf8\xc7" -"\xe5\xce\xfa\x0a\xb5\xc4\xd3\xef\xb7\xa8\xcc\xd8\xb5\xe3\xa3\xac" -"\xca\xca\xba\xcf\xcd\xea\xb3\xc9\xb8\xf7\xd6\xd6\xb8\xdf\xb2\xe3" -"\xc8\xce\xce\xf1\xa3\xac\xbc\xb8\xba\xf5\xbf\xc9\xd2\xd4\xd4\xda" -"\xcb\xf9\xd3\xd0\xb5\xc4\xb2\xd9\xd7\xf7\xcf\xb5\xcd\xb3\xd6\xd0" -"\x0a\xd4\xcb\xd0\xd0\xa1\xa3\xd5\xe2\xd6\xd6\xd3\xef\xd1\xd4\xbc" -"\xf2\xb5\xa5\xb6\xf8\xc7\xbf\xb4\xf3\xa3\xac\xca\xca\xba\xcf\xb8" -"\xf7\xd6\xd6\xc8\xcb\xca\xbf\xd1\xa7\xcf\xb0\xca\xb9\xd3\xc3\xa1" -"\xa3\xc4\xbf\xc7\xb0\xa3\xac\xbb\xf9\xd3\xda\xd5\xe2\x0a\xd6\xd6" -"\xd3\xef\xd1\xd4\xb5\xc4\xcf\xe0\xb9\xd8\xbc\xbc\xca\xf5\xd5\xfd" -"\xd4\xda\xb7\xc9\xcb\xd9\xb5\xc4\xb7\xa2\xd5\xb9\xa3\xac\xd3\xc3" -"\xbb\xa7\xca\xfd\xc1\xbf\xbc\xb1\xbe\xe7\xc0\xa9\xb4\xf3\xa3\xac" -"\xcf\xe0\xb9\xd8\xb5\xc4\xd7\xca\xd4\xb4\xb7\xc7\xb3\xa3\xb6\xe0" 
-"\xa1\xa3\x0a\xc8\xe7\xba\xce\xd4\xda\x20\x50\x79\x74\x68\x6f\x6e" -"\x20\xd6\xd0\xca\xb9\xd3\xc3\xbc\xc8\xd3\xd0\xb5\xc4\x20\x43\x20" -"\x6c\x69\x62\x72\x61\x72\x79\x3f\x0a\xa1\xa1\xd4\xda\xd9\x59\xd3" -"\x8d\xbf\xc6\xbc\xbc\xbf\xec\xcb\xd9\xb0\x6c\xd5\xb9\xb5\xc4\xbd" -"\xf1\xcc\xec\x2c\x20\xe9\x5f\xb0\x6c\xbc\xb0\x9c\x79\xd4\x87\xdc" -"\x9b\xf3\x77\xb5\xc4\xcb\xd9\xb6\xc8\xca\xc7\xb2\xbb\xc8\xdd\xba" -"\xf6\xd2\x95\xb5\xc4\x0a\xd5\x6e\xee\x7d\x2e\x20\x9e\xe9\xbc\xd3" -"\xbf\xec\xe9\x5f\xb0\x6c\xbc\xb0\x9c\x79\xd4\x87\xb5\xc4\xcb\xd9" -"\xb6\xc8\x2c\x20\xce\xd2\x82\x83\xb1\xe3\xb3\xa3\xcf\xa3\xcd\xfb" -"\xc4\xdc\xc0\xfb\xd3\xc3\xd2\xbb\xd0\xa9\xd2\xd1\xe9\x5f\xb0\x6c" -"\xba\xc3\xb5\xc4\x0a\x6c\x69\x62\x72\x61\x72\x79\x2c\x20\x81\x4b" -"\xd3\xd0\xd2\xbb\x82\x80\x20\x66\x61\x73\x74\x20\x70\x72\x6f\x74" -"\x6f\x74\x79\x70\x69\x6e\x67\x20\xb5\xc4\x20\x70\x72\x6f\x67\x72" -"\x61\x6d\x6d\x69\x6e\x67\x20\x6c\x61\x6e\x67\x75\x61\x67\x65\x20" -"\xbf\xc9\x0a\xb9\xa9\xca\xb9\xd3\xc3\x2e\x20\xc4\xbf\xc7\xb0\xd3" -"\xd0\xd4\x53\xd4\x53\xb6\xe0\xb6\xe0\xb5\xc4\x20\x6c\x69\x62\x72" -"\x61\x72\x79\x20\xca\xc7\xd2\xd4\x20\x43\x20\x8c\x91\xb3\xc9\x2c" -"\x20\xb6\xf8\x20\x50\x79\x74\x68\x6f\x6e\x20\xca\xc7\xd2\xbb\x82" -"\x80\x0a\x66\x61\x73\x74\x20\x70\x72\x6f\x74\x6f\x74\x79\x70\x69" -"\x6e\x67\x20\xb5\xc4\x20\x70\x72\x6f\x67\x72\x61\x6d\x6d\x69\x6e" -"\x67\x20\x6c\x61\x6e\x67\x75\x61\x67\x65\x2e\x20\xb9\xca\xce\xd2" -"\x82\x83\xcf\xa3\xcd\xfb\xc4\xdc\x8c\xa2\xbc\xc8\xd3\xd0\xb5\xc4" -"\x0a\x43\x20\x6c\x69\x62\x72\x61\x72\x79\x20\xc4\xc3\xb5\xbd\x20" -"\x50\x79\x74\x68\x6f\x6e\x20\xb5\xc4\xad\x68\xbe\xb3\xd6\xd0\x9c" -"\x79\xd4\x87\xbc\xb0\xd5\xfb\xba\xcf\x2e\x20\xc6\xe4\xd6\xd0\xd7" -"\xee\xd6\xf7\xd2\xaa\xd2\xb2\xca\xc7\xce\xd2\x82\x83\xcb\xf9\x0a" -"\xd2\xaa\xd3\x91\xd5\x93\xb5\xc4\x86\x96\xee\x7d\xbe\xcd\xca\xc7" -"\x3a\x0a\x0a", -"\x50\x79\x74\x68\x6f\x6e\xef\xbc\x88\xe6\xb4\xbe\xe6\xa3\xae\xef" -"\xbc\x89\xe8\xaf\xad\xe8\xa8\x80\xe6\x98\xaf\xe4\xb8\x80\xe7\xa7" -"\x8d\xe5\x8a\x9f\xe8\x83\xbd\xe5\xbc\xba\xe5\xa4\xa7\xe8\x80\x8c" -"\xe5\xae\x8c\xe5\x96\x84\xe7\x9a\x84\xe9\x80\x9a\xe7\x94\xa8\xe5" -"\x9e\x8b\xe8\xae\xa1\xe7\xae\x97\xe6\x9c\xba\xe7\xa8\x8b\xe5\xba" -"\x8f\xe8\xae\xbe\xe8\xae\xa1\xe8\xaf\xad\xe8\xa8\x80\xef\xbc\x8c" -"\x0a\xe5\xb7\xb2\xe7\xbb\x8f\xe5\x85\xb7\xe6\x9c\x89\xe5\x8d\x81" -"\xe5\xa4\x9a\xe5\xb9\xb4\xe7\x9a\x84\xe5\x8f\x91\xe5\xb1\x95\xe5" -"\x8e\x86\xe5\x8f\xb2\xef\xbc\x8c\xe6\x88\x90\xe7\x86\x9f\xe4\xb8" -"\x94\xe7\xa8\xb3\xe5\xae\x9a\xe3\x80\x82\xe8\xbf\x99\xe7\xa7\x8d" -"\xe8\xaf\xad\xe8\xa8\x80\xe5\x85\xb7\xe6\x9c\x89\xe9\x9d\x9e\xe5" -"\xb8\xb8\xe7\xae\x80\xe6\x8d\xb7\xe8\x80\x8c\xe6\xb8\x85\xe6\x99" -"\xb0\x0a\xe7\x9a\x84\xe8\xaf\xad\xe6\xb3\x95\xe7\x89\xb9\xe7\x82" -"\xb9\xef\xbc\x8c\xe9\x80\x82\xe5\x90\x88\xe5\xae\x8c\xe6\x88\x90" -"\xe5\x90\x84\xe7\xa7\x8d\xe9\xab\x98\xe5\xb1\x82\xe4\xbb\xbb\xe5" -"\x8a\xa1\xef\xbc\x8c\xe5\x87\xa0\xe4\xb9\x8e\xe5\x8f\xaf\xe4\xbb" -"\xa5\xe5\x9c\xa8\xe6\x89\x80\xe6\x9c\x89\xe7\x9a\x84\xe6\x93\x8d" -"\xe4\xbd\x9c\xe7\xb3\xbb\xe7\xbb\x9f\xe4\xb8\xad\x0a\xe8\xbf\x90" -"\xe8\xa1\x8c\xe3\x80\x82\xe8\xbf\x99\xe7\xa7\x8d\xe8\xaf\xad\xe8" -"\xa8\x80\xe7\xae\x80\xe5\x8d\x95\xe8\x80\x8c\xe5\xbc\xba\xe5\xa4" -"\xa7\xef\xbc\x8c\xe9\x80\x82\xe5\x90\x88\xe5\x90\x84\xe7\xa7\x8d" -"\xe4\xba\xba\xe5\xa3\xab\xe5\xad\xa6\xe4\xb9\xa0\xe4\xbd\xbf\xe7" -"\x94\xa8\xe3\x80\x82\xe7\x9b\xae\xe5\x89\x8d\xef\xbc\x8c\xe5\x9f" -"\xba\xe4\xba\x8e\xe8\xbf\x99\x0a\xe7\xa7\x8d\xe8\xaf\xad\xe8\xa8" -"\x80\xe7\x9a\x84\xe7\x9b\xb8\xe5\x85\xb3\xe6\x8a\x80\xe6\x9c\xaf" 
-"\xe6\xad\xa3\xe5\x9c\xa8\xe9\xa3\x9e\xe9\x80\x9f\xe7\x9a\x84\xe5" -"\x8f\x91\xe5\xb1\x95\xef\xbc\x8c\xe7\x94\xa8\xe6\x88\xb7\xe6\x95" -"\xb0\xe9\x87\x8f\xe6\x80\xa5\xe5\x89\xa7\xe6\x89\xa9\xe5\xa4\xa7" -"\xef\xbc\x8c\xe7\x9b\xb8\xe5\x85\xb3\xe7\x9a\x84\xe8\xb5\x84\xe6" -"\xba\x90\xe9\x9d\x9e\xe5\xb8\xb8\xe5\xa4\x9a\xe3\x80\x82\x0a\xe5" -"\xa6\x82\xe4\xbd\x95\xe5\x9c\xa8\x20\x50\x79\x74\x68\x6f\x6e\x20" -"\xe4\xb8\xad\xe4\xbd\xbf\xe7\x94\xa8\xe6\x97\xa2\xe6\x9c\x89\xe7" -"\x9a\x84\x20\x43\x20\x6c\x69\x62\x72\x61\x72\x79\x3f\x0a\xe3\x80" -"\x80\xe5\x9c\xa8\xe8\xb3\x87\xe8\xa8\x8a\xe7\xa7\x91\xe6\x8a\x80" -"\xe5\xbf\xab\xe9\x80\x9f\xe7\x99\xbc\xe5\xb1\x95\xe7\x9a\x84\xe4" -"\xbb\x8a\xe5\xa4\xa9\x2c\x20\xe9\x96\x8b\xe7\x99\xbc\xe5\x8f\x8a" -"\xe6\xb8\xac\xe8\xa9\xa6\xe8\xbb\x9f\xe9\xab\x94\xe7\x9a\x84\xe9" -"\x80\x9f\xe5\xba\xa6\xe6\x98\xaf\xe4\xb8\x8d\xe5\xae\xb9\xe5\xbf" -"\xbd\xe8\xa6\x96\xe7\x9a\x84\x0a\xe8\xaa\xb2\xe9\xa1\x8c\x2e\x20" -"\xe7\x82\xba\xe5\x8a\xa0\xe5\xbf\xab\xe9\x96\x8b\xe7\x99\xbc\xe5" -"\x8f\x8a\xe6\xb8\xac\xe8\xa9\xa6\xe7\x9a\x84\xe9\x80\x9f\xe5\xba" -"\xa6\x2c\x20\xe6\x88\x91\xe5\x80\x91\xe4\xbe\xbf\xe5\xb8\xb8\xe5" -"\xb8\x8c\xe6\x9c\x9b\xe8\x83\xbd\xe5\x88\xa9\xe7\x94\xa8\xe4\xb8" -"\x80\xe4\xba\x9b\xe5\xb7\xb2\xe9\x96\x8b\xe7\x99\xbc\xe5\xa5\xbd" -"\xe7\x9a\x84\x0a\x6c\x69\x62\x72\x61\x72\x79\x2c\x20\xe4\xb8\xa6" -"\xe6\x9c\x89\xe4\xb8\x80\xe5\x80\x8b\x20\x66\x61\x73\x74\x20\x70" -"\x72\x6f\x74\x6f\x74\x79\x70\x69\x6e\x67\x20\xe7\x9a\x84\x20\x70" -"\x72\x6f\x67\x72\x61\x6d\x6d\x69\x6e\x67\x20\x6c\x61\x6e\x67\x75" -"\x61\x67\x65\x20\xe5\x8f\xaf\x0a\xe4\xbe\x9b\xe4\xbd\xbf\xe7\x94" -"\xa8\x2e\x20\xe7\x9b\xae\xe5\x89\x8d\xe6\x9c\x89\xe8\xa8\xb1\xe8" -"\xa8\xb1\xe5\xa4\x9a\xe5\xa4\x9a\xe7\x9a\x84\x20\x6c\x69\x62\x72" -"\x61\x72\x79\x20\xe6\x98\xaf\xe4\xbb\xa5\x20\x43\x20\xe5\xaf\xab" -"\xe6\x88\x90\x2c\x20\xe8\x80\x8c\x20\x50\x79\x74\x68\x6f\x6e\x20" -"\xe6\x98\xaf\xe4\xb8\x80\xe5\x80\x8b\x0a\x66\x61\x73\x74\x20\x70" -"\x72\x6f\x74\x6f\x74\x79\x70\x69\x6e\x67\x20\xe7\x9a\x84\x20\x70" -"\x72\x6f\x67\x72\x61\x6d\x6d\x69\x6e\x67\x20\x6c\x61\x6e\x67\x75" -"\x61\x67\x65\x2e\x20\xe6\x95\x85\xe6\x88\x91\xe5\x80\x91\xe5\xb8" -"\x8c\xe6\x9c\x9b\xe8\x83\xbd\xe5\xb0\x87\xe6\x97\xa2\xe6\x9c\x89" -"\xe7\x9a\x84\x0a\x43\x20\x6c\x69\x62\x72\x61\x72\x79\x20\xe6\x8b" -"\xbf\xe5\x88\xb0\x20\x50\x79\x74\x68\x6f\x6e\x20\xe7\x9a\x84\xe7" -"\x92\xb0\xe5\xa2\x83\xe4\xb8\xad\xe6\xb8\xac\xe8\xa9\xa6\xe5\x8f" -"\x8a\xe6\x95\xb4\xe5\x90\x88\x2e\x20\xe5\x85\xb6\xe4\xb8\xad\xe6" -"\x9c\x80\xe4\xb8\xbb\xe8\xa6\x81\xe4\xb9\x9f\xe6\x98\xaf\xe6\x88" -"\x91\xe5\x80\x91\xe6\x89\x80\x0a\xe8\xa6\x81\xe8\xa8\x8e\xe8\xab" -"\x96\xe7\x9a\x84\xe5\x95\x8f\xe9\xa1\x8c\xe5\xb0\xb1\xe6\x98\xaf" -"\x3a\x0a\x0a"), -'johab': ( -"\x99\xb1\xa4\x77\x88\x62\xd0\x61\x20\xcd\x5c\xaf\xa1\xc5\xa9\x9c" -"\x61\x0a\x0a\xdc\xc0\xdc\xc0\x90\x73\x21\x21\x20\xf1\x67\xe2\x9c" -"\xf0\x55\xcc\x81\xa3\x89\x9f\x85\x8a\xa1\x20\xdc\xde\xdc\xd3\xd2" -"\x7a\xd9\xaf\xd9\xaf\xd9\xaf\x20\x8b\x77\x96\xd3\x20\xdc\xd1\x95" -"\x81\x20\xdc\xc0\x2e\x20\x2e\x0a\xed\x3c\xb5\x77\xdc\xd1\x93\x77" -"\xd2\x73\x20\x2e\x20\x2e\x20\x2e\x20\x2e\x20\xac\xe1\xb6\x89\x9e" -"\xa1\x20\x95\x65\xd0\x62\xf0\xe0\x20\xe0\x3b\xd2\x7a\x20\x21\x20" -"\x21\x20\x21\x87\x41\x2e\x87\x41\x0a\xd3\x61\xd3\x61\xd3\x61\x20" -"\x88\x41\x88\x41\x88\x41\xd9\x69\x87\x41\x5f\x87\x41\x20\xb4\xe1" -"\x9f\x9a\x20\xc8\xa1\xc5\xc1\x8b\x7a\x20\x95\x61\xb7\x77\x20\xc3" -"\x97\xe2\x9c\x97\x69\xf0\xe0\x20\xdc\xc0\x97\x61\x8b\x7a\x0a\xac" 
-"\xe9\x9f\x7a\x20\xe0\x3b\xd2\x7a\x20\x2e\x20\x2e\x20\x2e\x20\x2e" -"\x20\x8a\x89\xb4\x81\xae\xba\x20\xdc\xd1\x8a\xa1\x20\xdc\xde\x9f" -"\x89\xdc\xc2\x8b\x7a\x20\xf1\x67\xf1\x62\xf5\x49\xed\xfc\xf3\xe9" -"\x8c\x61\xbb\x9a\x0a\xb5\xc1\xb2\xa1\xd2\x7a\x20\x21\x20\x21\x20" -"\xed\x3c\xb5\x77\xdc\xd1\x20\xe0\x3b\x93\x77\x8a\xa1\x20\xd9\x69" -"\xea\xbe\x89\xc5\x20\xb4\xf4\x93\x77\x8a\xa1\x93\x77\x20\xed\x3c" -"\x93\x77\x96\xc1\xd2\x7a\x20\x8b\x69\xb4\x81\x97\x7a\x0a\xdc\xde" -"\x9d\x61\x97\x41\xe2\x9c\x20\xaf\x81\xce\xa1\xae\xa1\xd2\x7a\x20" -"\xb4\xe1\x9f\x9a\x20\xf1\x67\xf1\x62\xf5\x49\xed\xfc\xf3\xe9\xaf" -"\x82\xdc\xef\x97\x69\xb4\x7a\x21\x21\x20\xdc\xc0\xdc\xc0\x90\x73" -"\xd9\xbd\x20\xd9\x62\xd9\x62\x2a\x0a\x0a", -"\xeb\x98\xa0\xeb\xb0\xa9\xea\xb0\x81\xed\x95\x98\x20\xed\x8e\xb2" -"\xec\x8b\x9c\xec\xbd\x9c\xeb\x9d\xbc\x0a\x0a\xe3\x89\xaf\xe3\x89" -"\xaf\xeb\x82\xa9\x21\x21\x20\xe5\x9b\xa0\xe4\xb9\x9d\xe6\x9c\x88" -"\xed\x8c\xa8\xeb\xaf\xa4\xeb\xa6\x94\xea\xb6\x88\x20\xe2\x93\xa1" -"\xe2\x93\x96\xed\x9b\x80\xc2\xbf\xc2\xbf\xc2\xbf\x20\xea\xb8\x8d" -"\xeb\x92\x99\x20\xe2\x93\x94\xeb\x8e\xa8\x20\xe3\x89\xaf\x2e\x20" -"\x2e\x0a\xe4\xba\x9e\xec\x98\x81\xe2\x93\x94\xeb\x8a\xa5\xed\x9a" -"\xb9\x20\x2e\x20\x2e\x20\x2e\x20\x2e\x20\xec\x84\x9c\xec\x9a\xb8" -"\xeb\xa4\x84\x20\xeb\x8e\x90\xed\x95\x99\xe4\xb9\x99\x20\xe5\xae" -"\xb6\xed\x9b\x80\x20\x21\x20\x21\x20\x21\xe3\x85\xa0\x2e\xe3\x85" -"\xa0\x0a\xed\x9d\x90\xed\x9d\x90\xed\x9d\x90\x20\xe3\x84\xb1\xe3" -"\x84\xb1\xe3\x84\xb1\xe2\x98\x86\xe3\x85\xa0\x5f\xe3\x85\xa0\x20" -"\xec\x96\xb4\xeb\xa6\xa8\x20\xed\x83\xb8\xec\xbd\xb0\xea\xb8\x90" -"\x20\xeb\x8e\x8c\xec\x9d\x91\x20\xec\xb9\x91\xe4\xb9\x9d\xeb\x93" -"\xa4\xe4\xb9\x99\x20\xe3\x89\xaf\xeb\x93\x9c\xea\xb8\x90\x0a\xec" -"\x84\xa4\xeb\xa6\x8c\x20\xe5\xae\xb6\xed\x9b\x80\x20\x2e\x20\x2e" -"\x20\x2e\x20\x2e\x20\xea\xb5\xb4\xec\x95\xa0\xec\x89\x8c\x20\xe2" -"\x93\x94\xea\xb6\x88\x20\xe2\x93\xa1\xeb\xa6\x98\xe3\x89\xb1\xea" -"\xb8\x90\x20\xe5\x9b\xa0\xe4\xbb\x81\xe5\xb7\x9d\xef\xa6\x81\xe4" -"\xb8\xad\xea\xb9\x8c\xec\xa6\xbc\x0a\xec\x99\x80\xec\x92\x80\xed" -"\x9b\x80\x20\x21\x20\x21\x20\xe4\xba\x9e\xec\x98\x81\xe2\x93\x94" -"\x20\xe5\xae\xb6\xeb\x8a\xa5\xea\xb6\x88\x20\xe2\x98\x86\xe4\xb8" -"\x8a\xea\xb4\x80\x20\xec\x97\x86\xeb\x8a\xa5\xea\xb6\x88\xeb\x8a" -"\xa5\x20\xe4\xba\x9e\xeb\x8a\xa5\xeb\x92\x88\xed\x9b\x80\x20\xea" -"\xb8\x80\xec\x95\xa0\xeb\x93\xb4\x0a\xe2\x93\xa1\xeb\xa0\xa4\xeb" -"\x93\x80\xe4\xb9\x9d\x20\xec\x8b\x80\xed\x92\x94\xec\x88\xb4\xed" -"\x9b\x80\x20\xec\x96\xb4\xeb\xa6\xa8\x20\xe5\x9b\xa0\xe4\xbb\x81" -"\xe5\xb7\x9d\xef\xa6\x81\xe4\xb8\xad\xec\x8b\x81\xe2\x91\xa8\xeb" -"\x93\xa4\xec\x95\x9c\x21\x21\x20\xe3\x89\xaf\xe3\x89\xaf\xeb\x82" -"\xa9\xe2\x99\xa1\x20\xe2\x8c\x92\xe2\x8c\x92\x2a\x0a\x0a"), -'shift_jis': ( -"\x50\x79\x74\x68\x6f\x6e\x20\x82\xcc\x8a\x4a\x94\xad\x82\xcd\x81" -"\x41\x31\x39\x39\x30\x20\x94\x4e\x82\xb2\x82\xeb\x82\xa9\x82\xe7" -"\x8a\x4a\x8e\x6e\x82\xb3\x82\xea\x82\xc4\x82\xa2\x82\xdc\x82\xb7" -"\x81\x42\x0a\x8a\x4a\x94\xad\x8e\xd2\x82\xcc\x20\x47\x75\x69\x64" -"\x6f\x20\x76\x61\x6e\x20\x52\x6f\x73\x73\x75\x6d\x20\x82\xcd\x8b" -"\xb3\x88\xe7\x97\x70\x82\xcc\x83\x76\x83\x8d\x83\x4f\x83\x89\x83" -"\x7e\x83\x93\x83\x4f\x8c\xbe\x8c\xea\x81\x75\x41\x42\x43\x81\x76" -"\x82\xcc\x8a\x4a\x94\xad\x82\xc9\x8e\x51\x89\xc1\x82\xb5\x82\xc4" -"\x82\xa2\x82\xdc\x82\xb5\x82\xbd\x82\xaa\x81\x41\x41\x42\x43\x20" -"\x82\xcd\x8e\xc0\x97\x70\x8f\xe3\x82\xcc\x96\xda\x93\x49\x82\xc9" -"\x82\xcd\x82\xa0\x82\xdc\x82\xe8\x93\x4b\x82\xb5\x82\xc4\x82\xa2" 
-"\x82\xdc\x82\xb9\x82\xf1\x82\xc5\x82\xb5\x82\xbd\x81\x42\x0a\x82" -"\xb1\x82\xcc\x82\xbd\x82\xdf\x81\x41\x47\x75\x69\x64\x6f\x20\x82" -"\xcd\x82\xe6\x82\xe8\x8e\xc0\x97\x70\x93\x49\x82\xc8\x83\x76\x83" -"\x8d\x83\x4f\x83\x89\x83\x7e\x83\x93\x83\x4f\x8c\xbe\x8c\xea\x82" -"\xcc\x8a\x4a\x94\xad\x82\xf0\x8a\x4a\x8e\x6e\x82\xb5\x81\x41\x89" -"\x70\x8d\x91\x20\x42\x42\x53\x20\x95\xfa\x91\x97\x82\xcc\x83\x52" -"\x83\x81\x83\x66\x83\x42\x94\xd4\x91\x67\x81\x75\x83\x82\x83\x93" -"\x83\x65\x83\x42\x20\x83\x70\x83\x43\x83\x5c\x83\x93\x81\x76\x82" -"\xcc\x83\x74\x83\x40\x83\x93\x82\xc5\x82\xa0\x82\xe9\x20\x47\x75" -"\x69\x64\x6f\x20\x82\xcd\x82\xb1\x82\xcc\x8c\xbe\x8c\xea\x82\xf0" -"\x81\x75\x50\x79\x74\x68\x6f\x6e\x81\x76\x82\xc6\x96\xbc\x82\xc3" -"\x82\xaf\x82\xdc\x82\xb5\x82\xbd\x81\x42\x0a\x82\xb1\x82\xcc\x82" -"\xe6\x82\xa4\x82\xc8\x94\x77\x8c\x69\x82\xa9\x82\xe7\x90\xb6\x82" -"\xdc\x82\xea\x82\xbd\x20\x50\x79\x74\x68\x6f\x6e\x20\x82\xcc\x8c" -"\xbe\x8c\xea\x90\xdd\x8c\x76\x82\xcd\x81\x41\x81\x75\x83\x56\x83" -"\x93\x83\x76\x83\x8b\x81\x76\x82\xc5\x81\x75\x8f\x4b\x93\xbe\x82" -"\xaa\x97\x65\x88\xd5\x81\x76\x82\xc6\x82\xa2\x82\xa4\x96\xda\x95" -"\x57\x82\xc9\x8f\x64\x93\x5f\x82\xaa\x92\x75\x82\xa9\x82\xea\x82" -"\xc4\x82\xa2\x82\xdc\x82\xb7\x81\x42\x0a\x91\xbd\x82\xad\x82\xcc" -"\x83\x58\x83\x4e\x83\x8a\x83\x76\x83\x67\x8c\x6e\x8c\xbe\x8c\xea" -"\x82\xc5\x82\xcd\x83\x86\x81\x5b\x83\x55\x82\xcc\x96\xda\x90\xe6" -"\x82\xcc\x97\x98\x95\xd6\x90\xab\x82\xf0\x97\x44\x90\xe6\x82\xb5" -"\x82\xc4\x90\x46\x81\x58\x82\xc8\x8b\x40\x94\x5c\x82\xf0\x8c\xbe" -"\x8c\xea\x97\x76\x91\x66\x82\xc6\x82\xb5\x82\xc4\x8e\xe6\x82\xe8" -"\x93\xfc\x82\xea\x82\xe9\x8f\xea\x8d\x87\x82\xaa\x91\xbd\x82\xa2" -"\x82\xcc\x82\xc5\x82\xb7\x82\xaa\x81\x41\x50\x79\x74\x68\x6f\x6e" -"\x20\x82\xc5\x82\xcd\x82\xbb\x82\xa4\x82\xa2\x82\xc1\x82\xbd\x8f" -"\xac\x8d\xd7\x8d\x48\x82\xaa\x92\xc7\x89\xc1\x82\xb3\x82\xea\x82" -"\xe9\x82\xb1\x82\xc6\x82\xcd\x82\xa0\x82\xdc\x82\xe8\x82\xa0\x82" -"\xe8\x82\xdc\x82\xb9\x82\xf1\x81\x42\x0a\x8c\xbe\x8c\xea\x8e\xa9" -"\x91\xcc\x82\xcc\x8b\x40\x94\x5c\x82\xcd\x8d\xc5\x8f\xac\x8c\xc0" -"\x82\xc9\x89\x9f\x82\xb3\x82\xa6\x81\x41\x95\x4b\x97\x76\x82\xc8" -"\x8b\x40\x94\x5c\x82\xcd\x8a\x67\x92\xa3\x83\x82\x83\x57\x83\x85" -"\x81\x5b\x83\x8b\x82\xc6\x82\xb5\x82\xc4\x92\xc7\x89\xc1\x82\xb7" -"\x82\xe9\x81\x41\x82\xc6\x82\xa2\x82\xa4\x82\xcc\x82\xaa\x20\x50" -"\x79\x74\x68\x6f\x6e\x20\x82\xcc\x83\x7c\x83\x8a\x83\x56\x81\x5b" -"\x82\xc5\x82\xb7\x81\x42\x0a\x0a", -"\x50\x79\x74\x68\x6f\x6e\x20\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba" -"\xe3\x81\xaf\xe3\x80\x81\x31\x39\x39\x30\x20\xe5\xb9\xb4\xe3\x81" -"\x94\xe3\x82\x8d\xe3\x81\x8b\xe3\x82\x89\xe9\x96\x8b\xe5\xa7\x8b" -"\xe3\x81\x95\xe3\x82\x8c\xe3\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3" -"\x81\x99\xe3\x80\x82\x0a\xe9\x96\x8b\xe7\x99\xba\xe8\x80\x85\xe3" -"\x81\xae\x20\x47\x75\x69\x64\x6f\x20\x76\x61\x6e\x20\x52\x6f\x73" -"\x73\x75\x6d\x20\xe3\x81\xaf\xe6\x95\x99\xe8\x82\xb2\xe7\x94\xa8" -"\xe3\x81\xae\xe3\x83\x97\xe3\x83\xad\xe3\x82\xb0\xe3\x83\xa9\xe3" -"\x83\x9f\xe3\x83\xb3\xe3\x82\xb0\xe8\xa8\x80\xe8\xaa\x9e\xe3\x80" -"\x8c\x41\x42\x43\xe3\x80\x8d\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba" -"\xe3\x81\xab\xe5\x8f\x82\xe5\x8a\xa0\xe3\x81\x97\xe3\x81\xa6\xe3" -"\x81\x84\xe3\x81\xbe\xe3\x81\x97\xe3\x81\x9f\xe3\x81\x8c\xe3\x80" -"\x81\x41\x42\x43\x20\xe3\x81\xaf\xe5\xae\x9f\xe7\x94\xa8\xe4\xb8" -"\x8a\xe3\x81\xae\xe7\x9b\xae\xe7\x9a\x84\xe3\x81\xab\xe3\x81\xaf" -"\xe3\x81\x82\xe3\x81\xbe\xe3\x82\x8a\xe9\x81\xa9\xe3\x81\x97\xe3" 
-"\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3\x81\x9b\xe3\x82\x93\xe3\x81" -"\xa7\xe3\x81\x97\xe3\x81\x9f\xe3\x80\x82\x0a\xe3\x81\x93\xe3\x81" -"\xae\xe3\x81\x9f\xe3\x82\x81\xe3\x80\x81\x47\x75\x69\x64\x6f\x20" -"\xe3\x81\xaf\xe3\x82\x88\xe3\x82\x8a\xe5\xae\x9f\xe7\x94\xa8\xe7" -"\x9a\x84\xe3\x81\xaa\xe3\x83\x97\xe3\x83\xad\xe3\x82\xb0\xe3\x83" -"\xa9\xe3\x83\x9f\xe3\x83\xb3\xe3\x82\xb0\xe8\xa8\x80\xe8\xaa\x9e" -"\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba\xe3\x82\x92\xe9\x96\x8b\xe5" -"\xa7\x8b\xe3\x81\x97\xe3\x80\x81\xe8\x8b\xb1\xe5\x9b\xbd\x20\x42" -"\x42\x53\x20\xe6\x94\xbe\xe9\x80\x81\xe3\x81\xae\xe3\x82\xb3\xe3" -"\x83\xa1\xe3\x83\x87\xe3\x82\xa3\xe7\x95\xaa\xe7\xb5\x84\xe3\x80" -"\x8c\xe3\x83\xa2\xe3\x83\xb3\xe3\x83\x86\xe3\x82\xa3\x20\xe3\x83" -"\x91\xe3\x82\xa4\xe3\x82\xbd\xe3\x83\xb3\xe3\x80\x8d\xe3\x81\xae" -"\xe3\x83\x95\xe3\x82\xa1\xe3\x83\xb3\xe3\x81\xa7\xe3\x81\x82\xe3" -"\x82\x8b\x20\x47\x75\x69\x64\x6f\x20\xe3\x81\xaf\xe3\x81\x93\xe3" -"\x81\xae\xe8\xa8\x80\xe8\xaa\x9e\xe3\x82\x92\xe3\x80\x8c\x50\x79" -"\x74\x68\x6f\x6e\xe3\x80\x8d\xe3\x81\xa8\xe5\x90\x8d\xe3\x81\xa5" -"\xe3\x81\x91\xe3\x81\xbe\xe3\x81\x97\xe3\x81\x9f\xe3\x80\x82\x0a" -"\xe3\x81\x93\xe3\x81\xae\xe3\x82\x88\xe3\x81\x86\xe3\x81\xaa\xe8" -"\x83\x8c\xe6\x99\xaf\xe3\x81\x8b\xe3\x82\x89\xe7\x94\x9f\xe3\x81" -"\xbe\xe3\x82\x8c\xe3\x81\x9f\x20\x50\x79\x74\x68\x6f\x6e\x20\xe3" -"\x81\xae\xe8\xa8\x80\xe8\xaa\x9e\xe8\xa8\xad\xe8\xa8\x88\xe3\x81" -"\xaf\xe3\x80\x81\xe3\x80\x8c\xe3\x82\xb7\xe3\x83\xb3\xe3\x83\x97" -"\xe3\x83\xab\xe3\x80\x8d\xe3\x81\xa7\xe3\x80\x8c\xe7\xbf\x92\xe5" -"\xbe\x97\xe3\x81\x8c\xe5\xae\xb9\xe6\x98\x93\xe3\x80\x8d\xe3\x81" -"\xa8\xe3\x81\x84\xe3\x81\x86\xe7\x9b\xae\xe6\xa8\x99\xe3\x81\xab" -"\xe9\x87\x8d\xe7\x82\xb9\xe3\x81\x8c\xe7\xbd\xae\xe3\x81\x8b\xe3" -"\x82\x8c\xe3\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3\x81\x99\xe3\x80" -"\x82\x0a\xe5\xa4\x9a\xe3\x81\x8f\xe3\x81\xae\xe3\x82\xb9\xe3\x82" -"\xaf\xe3\x83\xaa\xe3\x83\x97\xe3\x83\x88\xe7\xb3\xbb\xe8\xa8\x80" -"\xe8\xaa\x9e\xe3\x81\xa7\xe3\x81\xaf\xe3\x83\xa6\xe3\x83\xbc\xe3" -"\x82\xb6\xe3\x81\xae\xe7\x9b\xae\xe5\x85\x88\xe3\x81\xae\xe5\x88" -"\xa9\xe4\xbe\xbf\xe6\x80\xa7\xe3\x82\x92\xe5\x84\xaa\xe5\x85\x88" -"\xe3\x81\x97\xe3\x81\xa6\xe8\x89\xb2\xe3\x80\x85\xe3\x81\xaa\xe6" -"\xa9\x9f\xe8\x83\xbd\xe3\x82\x92\xe8\xa8\x80\xe8\xaa\x9e\xe8\xa6" -"\x81\xe7\xb4\xa0\xe3\x81\xa8\xe3\x81\x97\xe3\x81\xa6\xe5\x8f\x96" -"\xe3\x82\x8a\xe5\x85\xa5\xe3\x82\x8c\xe3\x82\x8b\xe5\xa0\xb4\xe5" -"\x90\x88\xe3\x81\x8c\xe5\xa4\x9a\xe3\x81\x84\xe3\x81\xae\xe3\x81" -"\xa7\xe3\x81\x99\xe3\x81\x8c\xe3\x80\x81\x50\x79\x74\x68\x6f\x6e" -"\x20\xe3\x81\xa7\xe3\x81\xaf\xe3\x81\x9d\xe3\x81\x86\xe3\x81\x84" -"\xe3\x81\xa3\xe3\x81\x9f\xe5\xb0\x8f\xe7\xb4\xb0\xe5\xb7\xa5\xe3" -"\x81\x8c\xe8\xbf\xbd\xe5\x8a\xa0\xe3\x81\x95\xe3\x82\x8c\xe3\x82" -"\x8b\xe3\x81\x93\xe3\x81\xa8\xe3\x81\xaf\xe3\x81\x82\xe3\x81\xbe" -"\xe3\x82\x8a\xe3\x81\x82\xe3\x82\x8a\xe3\x81\xbe\xe3\x81\x9b\xe3" -"\x82\x93\xe3\x80\x82\x0a\xe8\xa8\x80\xe8\xaa\x9e\xe8\x87\xaa\xe4" -"\xbd\x93\xe3\x81\xae\xe6\xa9\x9f\xe8\x83\xbd\xe3\x81\xaf\xe6\x9c" -"\x80\xe5\xb0\x8f\xe9\x99\x90\xe3\x81\xab\xe6\x8a\xbc\xe3\x81\x95" -"\xe3\x81\x88\xe3\x80\x81\xe5\xbf\x85\xe8\xa6\x81\xe3\x81\xaa\xe6" -"\xa9\x9f\xe8\x83\xbd\xe3\x81\xaf\xe6\x8b\xa1\xe5\xbc\xb5\xe3\x83" -"\xa2\xe3\x82\xb8\xe3\x83\xa5\xe3\x83\xbc\xe3\x83\xab\xe3\x81\xa8" -"\xe3\x81\x97\xe3\x81\xa6\xe8\xbf\xbd\xe5\x8a\xa0\xe3\x81\x99\xe3" -"\x82\x8b\xe3\x80\x81\xe3\x81\xa8\xe3\x81\x84\xe3\x81\x86\xe3\x81" -"\xae\xe3\x81\x8c\x20\x50\x79\x74\x68\x6f\x6e\x20\xe3\x81\xae\xe3" 
-"\x83\x9d\xe3\x83\xaa\xe3\x82\xb7\xe3\x83\xbc\xe3\x81\xa7\xe3\x81" -"\x99\xe3\x80\x82\x0a\x0a"), -'shift_jisx0213': ( -"\x50\x79\x74\x68\x6f\x6e\x20\x82\xcc\x8a\x4a\x94\xad\x82\xcd\x81" -"\x41\x31\x39\x39\x30\x20\x94\x4e\x82\xb2\x82\xeb\x82\xa9\x82\xe7" -"\x8a\x4a\x8e\x6e\x82\xb3\x82\xea\x82\xc4\x82\xa2\x82\xdc\x82\xb7" -"\x81\x42\x0a\x8a\x4a\x94\xad\x8e\xd2\x82\xcc\x20\x47\x75\x69\x64" -"\x6f\x20\x76\x61\x6e\x20\x52\x6f\x73\x73\x75\x6d\x20\x82\xcd\x8b" -"\xb3\x88\xe7\x97\x70\x82\xcc\x83\x76\x83\x8d\x83\x4f\x83\x89\x83" -"\x7e\x83\x93\x83\x4f\x8c\xbe\x8c\xea\x81\x75\x41\x42\x43\x81\x76" -"\x82\xcc\x8a\x4a\x94\xad\x82\xc9\x8e\x51\x89\xc1\x82\xb5\x82\xc4" -"\x82\xa2\x82\xdc\x82\xb5\x82\xbd\x82\xaa\x81\x41\x41\x42\x43\x20" -"\x82\xcd\x8e\xc0\x97\x70\x8f\xe3\x82\xcc\x96\xda\x93\x49\x82\xc9" -"\x82\xcd\x82\xa0\x82\xdc\x82\xe8\x93\x4b\x82\xb5\x82\xc4\x82\xa2" -"\x82\xdc\x82\xb9\x82\xf1\x82\xc5\x82\xb5\x82\xbd\x81\x42\x0a\x82" -"\xb1\x82\xcc\x82\xbd\x82\xdf\x81\x41\x47\x75\x69\x64\x6f\x20\x82" -"\xcd\x82\xe6\x82\xe8\x8e\xc0\x97\x70\x93\x49\x82\xc8\x83\x76\x83" -"\x8d\x83\x4f\x83\x89\x83\x7e\x83\x93\x83\x4f\x8c\xbe\x8c\xea\x82" -"\xcc\x8a\x4a\x94\xad\x82\xf0\x8a\x4a\x8e\x6e\x82\xb5\x81\x41\x89" -"\x70\x8d\x91\x20\x42\x42\x53\x20\x95\xfa\x91\x97\x82\xcc\x83\x52" -"\x83\x81\x83\x66\x83\x42\x94\xd4\x91\x67\x81\x75\x83\x82\x83\x93" -"\x83\x65\x83\x42\x20\x83\x70\x83\x43\x83\x5c\x83\x93\x81\x76\x82" -"\xcc\x83\x74\x83\x40\x83\x93\x82\xc5\x82\xa0\x82\xe9\x20\x47\x75" -"\x69\x64\x6f\x20\x82\xcd\x82\xb1\x82\xcc\x8c\xbe\x8c\xea\x82\xf0" -"\x81\x75\x50\x79\x74\x68\x6f\x6e\x81\x76\x82\xc6\x96\xbc\x82\xc3" -"\x82\xaf\x82\xdc\x82\xb5\x82\xbd\x81\x42\x0a\x82\xb1\x82\xcc\x82" -"\xe6\x82\xa4\x82\xc8\x94\x77\x8c\x69\x82\xa9\x82\xe7\x90\xb6\x82" -"\xdc\x82\xea\x82\xbd\x20\x50\x79\x74\x68\x6f\x6e\x20\x82\xcc\x8c" -"\xbe\x8c\xea\x90\xdd\x8c\x76\x82\xcd\x81\x41\x81\x75\x83\x56\x83" -"\x93\x83\x76\x83\x8b\x81\x76\x82\xc5\x81\x75\x8f\x4b\x93\xbe\x82" -"\xaa\x97\x65\x88\xd5\x81\x76\x82\xc6\x82\xa2\x82\xa4\x96\xda\x95" -"\x57\x82\xc9\x8f\x64\x93\x5f\x82\xaa\x92\x75\x82\xa9\x82\xea\x82" -"\xc4\x82\xa2\x82\xdc\x82\xb7\x81\x42\x0a\x91\xbd\x82\xad\x82\xcc" -"\x83\x58\x83\x4e\x83\x8a\x83\x76\x83\x67\x8c\x6e\x8c\xbe\x8c\xea" -"\x82\xc5\x82\xcd\x83\x86\x81\x5b\x83\x55\x82\xcc\x96\xda\x90\xe6" -"\x82\xcc\x97\x98\x95\xd6\x90\xab\x82\xf0\x97\x44\x90\xe6\x82\xb5" -"\x82\xc4\x90\x46\x81\x58\x82\xc8\x8b\x40\x94\x5c\x82\xf0\x8c\xbe" -"\x8c\xea\x97\x76\x91\x66\x82\xc6\x82\xb5\x82\xc4\x8e\xe6\x82\xe8" -"\x93\xfc\x82\xea\x82\xe9\x8f\xea\x8d\x87\x82\xaa\x91\xbd\x82\xa2" -"\x82\xcc\x82\xc5\x82\xb7\x82\xaa\x81\x41\x50\x79\x74\x68\x6f\x6e" -"\x20\x82\xc5\x82\xcd\x82\xbb\x82\xa4\x82\xa2\x82\xc1\x82\xbd\x8f" -"\xac\x8d\xd7\x8d\x48\x82\xaa\x92\xc7\x89\xc1\x82\xb3\x82\xea\x82" -"\xe9\x82\xb1\x82\xc6\x82\xcd\x82\xa0\x82\xdc\x82\xe8\x82\xa0\x82" -"\xe8\x82\xdc\x82\xb9\x82\xf1\x81\x42\x0a\x8c\xbe\x8c\xea\x8e\xa9" -"\x91\xcc\x82\xcc\x8b\x40\x94\x5c\x82\xcd\x8d\xc5\x8f\xac\x8c\xc0" -"\x82\xc9\x89\x9f\x82\xb3\x82\xa6\x81\x41\x95\x4b\x97\x76\x82\xc8" -"\x8b\x40\x94\x5c\x82\xcd\x8a\x67\x92\xa3\x83\x82\x83\x57\x83\x85" -"\x81\x5b\x83\x8b\x82\xc6\x82\xb5\x82\xc4\x92\xc7\x89\xc1\x82\xb7" -"\x82\xe9\x81\x41\x82\xc6\x82\xa2\x82\xa4\x82\xcc\x82\xaa\x20\x50" -"\x79\x74\x68\x6f\x6e\x20\x82\xcc\x83\x7c\x83\x8a\x83\x56\x81\x5b" -"\x82\xc5\x82\xb7\x81\x42\x0a\x0a\x83\x6d\x82\xf5\x20\x83\x9e\x20" -"\x83\x67\x83\x4c\x88\x4b\x88\x79\x20\x98\x83\xfc\xd6\x20\xfc\xd2" -"\xfc\xe6\xfb\xd4\x0a", -"\x50\x79\x74\x68\x6f\x6e\x20\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba" 
-"\xe3\x81\xaf\xe3\x80\x81\x31\x39\x39\x30\x20\xe5\xb9\xb4\xe3\x81" -"\x94\xe3\x82\x8d\xe3\x81\x8b\xe3\x82\x89\xe9\x96\x8b\xe5\xa7\x8b" -"\xe3\x81\x95\xe3\x82\x8c\xe3\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3" -"\x81\x99\xe3\x80\x82\x0a\xe9\x96\x8b\xe7\x99\xba\xe8\x80\x85\xe3" -"\x81\xae\x20\x47\x75\x69\x64\x6f\x20\x76\x61\x6e\x20\x52\x6f\x73" -"\x73\x75\x6d\x20\xe3\x81\xaf\xe6\x95\x99\xe8\x82\xb2\xe7\x94\xa8" -"\xe3\x81\xae\xe3\x83\x97\xe3\x83\xad\xe3\x82\xb0\xe3\x83\xa9\xe3" -"\x83\x9f\xe3\x83\xb3\xe3\x82\xb0\xe8\xa8\x80\xe8\xaa\x9e\xe3\x80" -"\x8c\x41\x42\x43\xe3\x80\x8d\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba" -"\xe3\x81\xab\xe5\x8f\x82\xe5\x8a\xa0\xe3\x81\x97\xe3\x81\xa6\xe3" -"\x81\x84\xe3\x81\xbe\xe3\x81\x97\xe3\x81\x9f\xe3\x81\x8c\xe3\x80" -"\x81\x41\x42\x43\x20\xe3\x81\xaf\xe5\xae\x9f\xe7\x94\xa8\xe4\xb8" -"\x8a\xe3\x81\xae\xe7\x9b\xae\xe7\x9a\x84\xe3\x81\xab\xe3\x81\xaf" -"\xe3\x81\x82\xe3\x81\xbe\xe3\x82\x8a\xe9\x81\xa9\xe3\x81\x97\xe3" -"\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3\x81\x9b\xe3\x82\x93\xe3\x81" -"\xa7\xe3\x81\x97\xe3\x81\x9f\xe3\x80\x82\x0a\xe3\x81\x93\xe3\x81" -"\xae\xe3\x81\x9f\xe3\x82\x81\xe3\x80\x81\x47\x75\x69\x64\x6f\x20" -"\xe3\x81\xaf\xe3\x82\x88\xe3\x82\x8a\xe5\xae\x9f\xe7\x94\xa8\xe7" -"\x9a\x84\xe3\x81\xaa\xe3\x83\x97\xe3\x83\xad\xe3\x82\xb0\xe3\x83" -"\xa9\xe3\x83\x9f\xe3\x83\xb3\xe3\x82\xb0\xe8\xa8\x80\xe8\xaa\x9e" -"\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba\xe3\x82\x92\xe9\x96\x8b\xe5" -"\xa7\x8b\xe3\x81\x97\xe3\x80\x81\xe8\x8b\xb1\xe5\x9b\xbd\x20\x42" -"\x42\x53\x20\xe6\x94\xbe\xe9\x80\x81\xe3\x81\xae\xe3\x82\xb3\xe3" -"\x83\xa1\xe3\x83\x87\xe3\x82\xa3\xe7\x95\xaa\xe7\xb5\x84\xe3\x80" -"\x8c\xe3\x83\xa2\xe3\x83\xb3\xe3\x83\x86\xe3\x82\xa3\x20\xe3\x83" -"\x91\xe3\x82\xa4\xe3\x82\xbd\xe3\x83\xb3\xe3\x80\x8d\xe3\x81\xae" -"\xe3\x83\x95\xe3\x82\xa1\xe3\x83\xb3\xe3\x81\xa7\xe3\x81\x82\xe3" -"\x82\x8b\x20\x47\x75\x69\x64\x6f\x20\xe3\x81\xaf\xe3\x81\x93\xe3" -"\x81\xae\xe8\xa8\x80\xe8\xaa\x9e\xe3\x82\x92\xe3\x80\x8c\x50\x79" -"\x74\x68\x6f\x6e\xe3\x80\x8d\xe3\x81\xa8\xe5\x90\x8d\xe3\x81\xa5" -"\xe3\x81\x91\xe3\x81\xbe\xe3\x81\x97\xe3\x81\x9f\xe3\x80\x82\x0a" -"\xe3\x81\x93\xe3\x81\xae\xe3\x82\x88\xe3\x81\x86\xe3\x81\xaa\xe8" -"\x83\x8c\xe6\x99\xaf\xe3\x81\x8b\xe3\x82\x89\xe7\x94\x9f\xe3\x81" -"\xbe\xe3\x82\x8c\xe3\x81\x9f\x20\x50\x79\x74\x68\x6f\x6e\x20\xe3" -"\x81\xae\xe8\xa8\x80\xe8\xaa\x9e\xe8\xa8\xad\xe8\xa8\x88\xe3\x81" -"\xaf\xe3\x80\x81\xe3\x80\x8c\xe3\x82\xb7\xe3\x83\xb3\xe3\x83\x97" -"\xe3\x83\xab\xe3\x80\x8d\xe3\x81\xa7\xe3\x80\x8c\xe7\xbf\x92\xe5" -"\xbe\x97\xe3\x81\x8c\xe5\xae\xb9\xe6\x98\x93\xe3\x80\x8d\xe3\x81" -"\xa8\xe3\x81\x84\xe3\x81\x86\xe7\x9b\xae\xe6\xa8\x99\xe3\x81\xab" -"\xe9\x87\x8d\xe7\x82\xb9\xe3\x81\x8c\xe7\xbd\xae\xe3\x81\x8b\xe3" -"\x82\x8c\xe3\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3\x81\x99\xe3\x80" -"\x82\x0a\xe5\xa4\x9a\xe3\x81\x8f\xe3\x81\xae\xe3\x82\xb9\xe3\x82" -"\xaf\xe3\x83\xaa\xe3\x83\x97\xe3\x83\x88\xe7\xb3\xbb\xe8\xa8\x80" -"\xe8\xaa\x9e\xe3\x81\xa7\xe3\x81\xaf\xe3\x83\xa6\xe3\x83\xbc\xe3" -"\x82\xb6\xe3\x81\xae\xe7\x9b\xae\xe5\x85\x88\xe3\x81\xae\xe5\x88" -"\xa9\xe4\xbe\xbf\xe6\x80\xa7\xe3\x82\x92\xe5\x84\xaa\xe5\x85\x88" -"\xe3\x81\x97\xe3\x81\xa6\xe8\x89\xb2\xe3\x80\x85\xe3\x81\xaa\xe6" -"\xa9\x9f\xe8\x83\xbd\xe3\x82\x92\xe8\xa8\x80\xe8\xaa\x9e\xe8\xa6" -"\x81\xe7\xb4\xa0\xe3\x81\xa8\xe3\x81\x97\xe3\x81\xa6\xe5\x8f\x96" -"\xe3\x82\x8a\xe5\x85\xa5\xe3\x82\x8c\xe3\x82\x8b\xe5\xa0\xb4\xe5" -"\x90\x88\xe3\x81\x8c\xe5\xa4\x9a\xe3\x81\x84\xe3\x81\xae\xe3\x81" -"\xa7\xe3\x81\x99\xe3\x81\x8c\xe3\x80\x81\x50\x79\x74\x68\x6f\x6e" 
-"\x20\xe3\x81\xa7\xe3\x81\xaf\xe3\x81\x9d\xe3\x81\x86\xe3\x81\x84" -"\xe3\x81\xa3\xe3\x81\x9f\xe5\xb0\x8f\xe7\xb4\xb0\xe5\xb7\xa5\xe3" -"\x81\x8c\xe8\xbf\xbd\xe5\x8a\xa0\xe3\x81\x95\xe3\x82\x8c\xe3\x82" -"\x8b\xe3\x81\x93\xe3\x81\xa8\xe3\x81\xaf\xe3\x81\x82\xe3\x81\xbe" -"\xe3\x82\x8a\xe3\x81\x82\xe3\x82\x8a\xe3\x81\xbe\xe3\x81\x9b\xe3" -"\x82\x93\xe3\x80\x82\x0a\xe8\xa8\x80\xe8\xaa\x9e\xe8\x87\xaa\xe4" -"\xbd\x93\xe3\x81\xae\xe6\xa9\x9f\xe8\x83\xbd\xe3\x81\xaf\xe6\x9c" -"\x80\xe5\xb0\x8f\xe9\x99\x90\xe3\x81\xab\xe6\x8a\xbc\xe3\x81\x95" -"\xe3\x81\x88\xe3\x80\x81\xe5\xbf\x85\xe8\xa6\x81\xe3\x81\xaa\xe6" -"\xa9\x9f\xe8\x83\xbd\xe3\x81\xaf\xe6\x8b\xa1\xe5\xbc\xb5\xe3\x83" -"\xa2\xe3\x82\xb8\xe3\x83\xa5\xe3\x83\xbc\xe3\x83\xab\xe3\x81\xa8" -"\xe3\x81\x97\xe3\x81\xa6\xe8\xbf\xbd\xe5\x8a\xa0\xe3\x81\x99\xe3" -"\x82\x8b\xe3\x80\x81\xe3\x81\xa8\xe3\x81\x84\xe3\x81\x86\xe3\x81" -"\xae\xe3\x81\x8c\x20\x50\x79\x74\x68\x6f\x6e\x20\xe3\x81\xae\xe3" -"\x83\x9d\xe3\x83\xaa\xe3\x82\xb7\xe3\x83\xbc\xe3\x81\xa7\xe3\x81" -"\x99\xe3\x80\x82\x0a\x0a\xe3\x83\x8e\xe3\x81\x8b\xe3\x82\x9a\x20" -"\xe3\x83\x88\xe3\x82\x9a\x20\xe3\x83\x88\xe3\x82\xad\xef\xa8\xb6" -"\xef\xa8\xb9\x20\xf0\xa1\x9a\xb4\xf0\xaa\x8e\x8c\x20\xe9\xba\x80" -"\xe9\xbd\x81\xf0\xa9\x9b\xb0\x0a"), -} diff --git a/lib-python/2.7/test/crashers/README b/lib-python/2.7/test/crashers/README --- a/lib-python/2.7/test/crashers/README +++ b/lib-python/2.7/test/crashers/README @@ -1,20 +1,16 @@ -This directory only contains tests for outstanding bugs that cause -the interpreter to segfault. Ideally this directory should always -be empty. Sometimes it may not be easy to fix the underlying cause. +This directory only contains tests for outstanding bugs that cause the +interpreter to segfault. Ideally this directory should always be empty, but +sometimes it may not be easy to fix the underlying cause and the bug is deemed +too obscure to invest the effort. Each test should fail when run from the command line: ./python Lib/test/crashers/weakref_in_del.py -Each test should have a link to the bug report: +Put as much info into a docstring or comments to help determine the cause of the +failure, as well as a bugs.python.org issue number if it exists. Particularly +note if the cause is system or environment dependent and what the variables are. - # http://python.org/sf/BUG# - -Put as much info into a docstring or comments to help determine -the cause of the failure. Particularly note if the cause is -system or environment dependent and what the variables are. - -Once the crash is fixed, the test case should be moved into an appropriate -test (even if it was originally from the test suite). This ensures the -regression doesn't happen again. And if it does, it should be easier -to track down. +Once the crash is fixed, the test case should be moved into an appropriate test +(even if it was originally from the test suite). This ensures the regression +doesn't happen again. And if it does, it should be easier to track down. diff --git a/lib-python/2.7/test/crashers/recursion_limit_too_high.py b/lib-python/2.7/test/crashers/recursion_limit_too_high.py --- a/lib-python/2.7/test/crashers/recursion_limit_too_high.py +++ b/lib-python/2.7/test/crashers/recursion_limit_too_high.py @@ -5,7 +5,7 @@ # file handles. # The point of this example is to show that sys.setrecursionlimit() is a -# hack, and not a robust solution. This example simply exercices a path +# hack, and not a robust solution. 
This example simply exercises a path # where it takes many C-level recursions, consuming a lot of stack # space, for each Python-level recursion. So 1000 times this amount of # stack space may be too much for standard platforms already. diff --git a/lib-python/2.7/test/decimaltestdata/and.decTest b/lib-python/2.7/test/decimaltestdata/and.decTest --- a/lib-python/2.7/test/decimaltestdata/and.decTest +++ b/lib-python/2.7/test/decimaltestdata/and.decTest @@ -1,338 +1,338 @@ ------------------------------------------------------------------------- --- and.decTest -- digitwise logical AND -- --- Copyright (c) IBM Corporation, 1981, 2008. All rights reserved. -- ------------------------------------------------------------------------- --- Please see the document "General Decimal Arithmetic Testcases" -- --- at http://www2.hursley.ibm.com/decimal for the description of -- --- these testcases. -- --- -- --- These testcases are experimental ('beta' versions), and they -- --- may contain errors. They are offered on an as-is basis. In -- --- particular, achieving the same results as the tests here is not -- --- a guarantee that an implementation complies with any Standard -- --- or specification. The tests are not exhaustive. -- --- -- --- Please send comments, suggestions, and corrections to the author: -- --- Mike Cowlishaw, IBM Fellow -- --- IBM UK, PO Box 31, Birmingham Road, Warwick CV34 5JL, UK -- --- mfc at uk.ibm.com -- ------------------------------------------------------------------------- -version: 2.59 - -extended: 1 -precision: 9 -rounding: half_up -maxExponent: 999 -minExponent: -999 - --- Sanity check (truth table) -andx001 and 0 0 -> 0 -andx002 and 0 1 -> 0 -andx003 and 1 0 -> 0 -andx004 and 1 1 -> 1 -andx005 and 1100 1010 -> 1000 -andx006 and 1111 10 -> 10 -andx007 and 1111 1010 -> 1010 - --- and at msd and msd-1 -andx010 and 000000000 000000000 -> 0 -andx011 and 000000000 100000000 -> 0 -andx012 and 100000000 000000000 -> 0 -andx013 and 100000000 100000000 -> 100000000 -andx014 and 000000000 000000000 -> 0 -andx015 and 000000000 010000000 -> 0 -andx016 and 010000000 000000000 -> 0 -andx017 and 010000000 010000000 -> 10000000 - --- Various lengths --- 123456789 123456789 123456789 -andx021 and 111111111 111111111 -> 111111111 -andx022 and 111111111111 111111111 -> 111111111 -andx023 and 111111111111 11111111 -> 11111111 -andx024 and 111111111 11111111 -> 11111111 -andx025 and 111111111 1111111 -> 1111111 -andx026 and 111111111111 111111 -> 111111 -andx027 and 111111111111 11111 -> 11111 -andx028 and 111111111111 1111 -> 1111 -andx029 and 111111111111 111 -> 111 -andx031 and 111111111111 11 -> 11 -andx032 and 111111111111 1 -> 1 -andx033 and 111111111111 1111111111 -> 111111111 -andx034 and 11111111111 11111111111 -> 111111111 -andx035 and 1111111111 111111111111 -> 111111111 -andx036 and 111111111 1111111111111 -> 111111111 - -andx040 and 111111111 111111111111 -> 111111111 -andx041 and 11111111 111111111111 -> 11111111 -andx042 and 11111111 111111111 -> 11111111 -andx043 and 1111111 111111111 -> 1111111 -andx044 and 111111 111111111 -> 111111 -andx045 and 11111 111111111 -> 11111 -andx046 and 1111 111111111 -> 1111 -andx047 and 111 111111111 -> 111 -andx048 and 11 111111111 -> 11 -andx049 and 1 111111111 -> 1 - -andx050 and 1111111111 1 -> 1 -andx051 and 111111111 1 -> 1 -andx052 and 11111111 1 -> 1 -andx053 and 1111111 1 -> 1 -andx054 and 111111 1 -> 1 -andx055 and 11111 1 -> 1 -andx056 and 1111 1 -> 1 -andx057 and 111 1 -> 1 -andx058 and 11 1 -> 1 -andx059 and 1 1 -> 1 - -andx060 
and 1111111111 0 -> 0 -andx061 and 111111111 0 -> 0 -andx062 and 11111111 0 -> 0 -andx063 and 1111111 0 -> 0 -andx064 and 111111 0 -> 0 -andx065 and 11111 0 -> 0 -andx066 and 1111 0 -> 0 -andx067 and 111 0 -> 0 -andx068 and 11 0 -> 0 -andx069 and 1 0 -> 0 - -andx070 and 1 1111111111 -> 1 -andx071 and 1 111111111 -> 1 -andx072 and 1 11111111 -> 1 -andx073 and 1 1111111 -> 1 -andx074 and 1 111111 -> 1 -andx075 and 1 11111 -> 1 -andx076 and 1 1111 -> 1 -andx077 and 1 111 -> 1 -andx078 and 1 11 -> 1 -andx079 and 1 1 -> 1 - -andx080 and 0 1111111111 -> 0 -andx081 and 0 111111111 -> 0 -andx082 and 0 11111111 -> 0 -andx083 and 0 1111111 -> 0 -andx084 and 0 111111 -> 0 -andx085 and 0 11111 -> 0 -andx086 and 0 1111 -> 0 -andx087 and 0 111 -> 0 -andx088 and 0 11 -> 0 -andx089 and 0 1 -> 0 - -andx090 and 011111111 111111111 -> 11111111 -andx091 and 101111111 111111111 -> 101111111 -andx092 and 110111111 111111111 -> 110111111 -andx093 and 111011111 111111111 -> 111011111 -andx094 and 111101111 111111111 -> 111101111 -andx095 and 111110111 111111111 -> 111110111 -andx096 and 111111011 111111111 -> 111111011 -andx097 and 111111101 111111111 -> 111111101 -andx098 and 111111110 111111111 -> 111111110 - -andx100 and 111111111 011111111 -> 11111111 -andx101 and 111111111 101111111 -> 101111111 -andx102 and 111111111 110111111 -> 110111111 -andx103 and 111111111 111011111 -> 111011111 -andx104 and 111111111 111101111 -> 111101111 -andx105 and 111111111 111110111 -> 111110111 -andx106 and 111111111 111111011 -> 111111011 -andx107 and 111111111 111111101 -> 111111101 -andx108 and 111111111 111111110 -> 111111110 - --- non-0/1 should not be accepted, nor should signs -andx220 and 111111112 111111111 -> NaN Invalid_operation -andx221 and 333333333 333333333 -> NaN Invalid_operation -andx222 and 555555555 555555555 -> NaN Invalid_operation -andx223 and 777777777 777777777 -> NaN Invalid_operation -andx224 and 999999999 999999999 -> NaN Invalid_operation -andx225 and 222222222 999999999 -> NaN Invalid_operation -andx226 and 444444444 999999999 -> NaN Invalid_operation -andx227 and 666666666 999999999 -> NaN Invalid_operation -andx228 and 888888888 999999999 -> NaN Invalid_operation -andx229 and 999999999 222222222 -> NaN Invalid_operation -andx230 and 999999999 444444444 -> NaN Invalid_operation -andx231 and 999999999 666666666 -> NaN Invalid_operation -andx232 and 999999999 888888888 -> NaN Invalid_operation --- a few randoms -andx240 and 567468689 -934981942 -> NaN Invalid_operation -andx241 and 567367689 934981942 -> NaN Invalid_operation -andx242 and -631917772 -706014634 -> NaN Invalid_operation -andx243 and -756253257 138579234 -> NaN Invalid_operation -andx244 and 835590149 567435400 -> NaN Invalid_operation --- test MSD -andx250 and 200000000 100000000 -> NaN Invalid_operation -andx251 and 700000000 100000000 -> NaN Invalid_operation -andx252 and 800000000 100000000 -> NaN Invalid_operation -andx253 and 900000000 100000000 -> NaN Invalid_operation -andx254 and 200000000 000000000 -> NaN Invalid_operation -andx255 and 700000000 000000000 -> NaN Invalid_operation -andx256 and 800000000 000000000 -> NaN Invalid_operation -andx257 and 900000000 000000000 -> NaN Invalid_operation -andx258 and 100000000 200000000 -> NaN Invalid_operation -andx259 and 100000000 700000000 -> NaN Invalid_operation -andx260 and 100000000 800000000 -> NaN Invalid_operation -andx261 and 100000000 900000000 -> NaN Invalid_operation -andx262 and 000000000 200000000 -> NaN Invalid_operation -andx263 and 000000000 700000000 -> NaN 
Invalid_operation -andx264 and 000000000 800000000 -> NaN Invalid_operation -andx265 and 000000000 900000000 -> NaN Invalid_operation --- test MSD-1 -andx270 and 020000000 100000000 -> NaN Invalid_operation -andx271 and 070100000 100000000 -> NaN Invalid_operation -andx272 and 080010000 100000001 -> NaN Invalid_operation -andx273 and 090001000 100000010 -> NaN Invalid_operation -andx274 and 100000100 020010100 -> NaN Invalid_operation -andx275 and 100000000 070001000 -> NaN Invalid_operation -andx276 and 100000010 080010100 -> NaN Invalid_operation -andx277 and 100000000 090000010 -> NaN Invalid_operation --- test LSD -andx280 and 001000002 100000000 -> NaN Invalid_operation -andx281 and 000000007 100000000 -> NaN Invalid_operation -andx282 and 000000008 100000000 -> NaN Invalid_operation -andx283 and 000000009 100000000 -> NaN Invalid_operation -andx284 and 100000000 000100002 -> NaN Invalid_operation -andx285 and 100100000 001000007 -> NaN Invalid_operation -andx286 and 100010000 010000008 -> NaN Invalid_operation -andx287 and 100001000 100000009 -> NaN Invalid_operation --- test Middie -andx288 and 001020000 100000000 -> NaN Invalid_operation -andx289 and 000070001 100000000 -> NaN Invalid_operation -andx290 and 000080000 100010000 -> NaN Invalid_operation -andx291 and 000090000 100001000 -> NaN Invalid_operation -andx292 and 100000010 000020100 -> NaN Invalid_operation -andx293 and 100100000 000070010 -> NaN Invalid_operation -andx294 and 100010100 000080001 -> NaN Invalid_operation -andx295 and 100001000 000090000 -> NaN Invalid_operation --- signs -andx296 and -100001000 -000000000 -> NaN Invalid_operation -andx297 and -100001000 000010000 -> NaN Invalid_operation -andx298 and 100001000 -000000000 -> NaN Invalid_operation -andx299 and 100001000 000011000 -> 1000 - --- Nmax, Nmin, Ntiny -andx331 and 2 9.99999999E+999 -> NaN Invalid_operation -andx332 and 3 1E-999 -> NaN Invalid_operation -andx333 and 4 1.00000000E-999 -> NaN Invalid_operation -andx334 and 5 1E-1007 -> NaN Invalid_operation -andx335 and 6 -1E-1007 -> NaN Invalid_operation -andx336 and 7 -1.00000000E-999 -> NaN Invalid_operation -andx337 and 8 -1E-999 -> NaN Invalid_operation -andx338 and 9 -9.99999999E+999 -> NaN Invalid_operation -andx341 and 9.99999999E+999 -18 -> NaN Invalid_operation -andx342 and 1E-999 01 -> NaN Invalid_operation -andx343 and 1.00000000E-999 -18 -> NaN Invalid_operation -andx344 and 1E-1007 18 -> NaN Invalid_operation -andx345 and -1E-1007 -10 -> NaN Invalid_operation -andx346 and -1.00000000E-999 18 -> NaN Invalid_operation -andx347 and -1E-999 10 -> NaN Invalid_operation -andx348 and -9.99999999E+999 -18 -> NaN Invalid_operation - --- A few other non-integers -andx361 and 1.0 1 -> NaN Invalid_operation -andx362 and 1E+1 1 -> NaN Invalid_operation -andx363 and 0.0 1 -> NaN Invalid_operation -andx364 and 0E+1 1 -> NaN Invalid_operation -andx365 and 9.9 1 -> NaN Invalid_operation -andx366 and 9E+1 1 -> NaN Invalid_operation -andx371 and 0 1.0 -> NaN Invalid_operation -andx372 and 0 1E+1 -> NaN Invalid_operation -andx373 and 0 0.0 -> NaN Invalid_operation -andx374 and 0 0E+1 -> NaN Invalid_operation -andx375 and 0 9.9 -> NaN Invalid_operation -andx376 and 0 9E+1 -> NaN Invalid_operation - --- All Specials are in error -andx780 and -Inf -Inf -> NaN Invalid_operation -andx781 and -Inf -1000 -> NaN Invalid_operation -andx782 and -Inf -1 -> NaN Invalid_operation -andx783 and -Inf -0 -> NaN Invalid_operation -andx784 and -Inf 0 -> NaN Invalid_operation -andx785 and -Inf 1 -> NaN Invalid_operation 
-andx786 and -Inf 1000 -> NaN Invalid_operation -andx787 and -1000 -Inf -> NaN Invalid_operation -andx788 and -Inf -Inf -> NaN Invalid_operation -andx789 and -1 -Inf -> NaN Invalid_operation -andx790 and -0 -Inf -> NaN Invalid_operation -andx791 and 0 -Inf -> NaN Invalid_operation -andx792 and 1 -Inf -> NaN Invalid_operation -andx793 and 1000 -Inf -> NaN Invalid_operation -andx794 and Inf -Inf -> NaN Invalid_operation - -andx800 and Inf -Inf -> NaN Invalid_operation -andx801 and Inf -1000 -> NaN Invalid_operation -andx802 and Inf -1 -> NaN Invalid_operation -andx803 and Inf -0 -> NaN Invalid_operation -andx804 and Inf 0 -> NaN Invalid_operation -andx805 and Inf 1 -> NaN Invalid_operation -andx806 and Inf 1000 -> NaN Invalid_operation -andx807 and Inf Inf -> NaN Invalid_operation -andx808 and -1000 Inf -> NaN Invalid_operation -andx809 and -Inf Inf -> NaN Invalid_operation -andx810 and -1 Inf -> NaN Invalid_operation -andx811 and -0 Inf -> NaN Invalid_operation -andx812 and 0 Inf -> NaN Invalid_operation -andx813 and 1 Inf -> NaN Invalid_operation -andx814 and 1000 Inf -> NaN Invalid_operation -andx815 and Inf Inf -> NaN Invalid_operation - -andx821 and NaN -Inf -> NaN Invalid_operation -andx822 and NaN -1000 -> NaN Invalid_operation -andx823 and NaN -1 -> NaN Invalid_operation -andx824 and NaN -0 -> NaN Invalid_operation -andx825 and NaN 0 -> NaN Invalid_operation -andx826 and NaN 1 -> NaN Invalid_operation -andx827 and NaN 1000 -> NaN Invalid_operation -andx828 and NaN Inf -> NaN Invalid_operation -andx829 and NaN NaN -> NaN Invalid_operation -andx830 and -Inf NaN -> NaN Invalid_operation -andx831 and -1000 NaN -> NaN Invalid_operation -andx832 and -1 NaN -> NaN Invalid_operation -andx833 and -0 NaN -> NaN Invalid_operation -andx834 and 0 NaN -> NaN Invalid_operation -andx835 and 1 NaN -> NaN Invalid_operation -andx836 and 1000 NaN -> NaN Invalid_operation -andx837 and Inf NaN -> NaN Invalid_operation - -andx841 and sNaN -Inf -> NaN Invalid_operation -andx842 and sNaN -1000 -> NaN Invalid_operation -andx843 and sNaN -1 -> NaN Invalid_operation -andx844 and sNaN -0 -> NaN Invalid_operation -andx845 and sNaN 0 -> NaN Invalid_operation -andx846 and sNaN 1 -> NaN Invalid_operation -andx847 and sNaN 1000 -> NaN Invalid_operation -andx848 and sNaN NaN -> NaN Invalid_operation -andx849 and sNaN sNaN -> NaN Invalid_operation -andx850 and NaN sNaN -> NaN Invalid_operation -andx851 and -Inf sNaN -> NaN Invalid_operation -andx852 and -1000 sNaN -> NaN Invalid_operation -andx853 and -1 sNaN -> NaN Invalid_operation -andx854 and -0 sNaN -> NaN Invalid_operation -andx855 and 0 sNaN -> NaN Invalid_operation -andx856 and 1 sNaN -> NaN Invalid_operation -andx857 and 1000 sNaN -> NaN Invalid_operation -andx858 and Inf sNaN -> NaN Invalid_operation -andx859 and NaN sNaN -> NaN Invalid_operation - --- propagating NaNs -andx861 and NaN1 -Inf -> NaN Invalid_operation -andx862 and +NaN2 -1000 -> NaN Invalid_operation -andx863 and NaN3 1000 -> NaN Invalid_operation -andx864 and NaN4 Inf -> NaN Invalid_operation -andx865 and NaN5 +NaN6 -> NaN Invalid_operation -andx866 and -Inf NaN7 -> NaN Invalid_operation -andx867 and -1000 NaN8 -> NaN Invalid_operation -andx868 and 1000 NaN9 -> NaN Invalid_operation -andx869 and Inf +NaN10 -> NaN Invalid_operation -andx871 and sNaN11 -Inf -> NaN Invalid_operation -andx872 and sNaN12 -1000 -> NaN Invalid_operation -andx873 and sNaN13 1000 -> NaN Invalid_operation -andx874 and sNaN14 NaN17 -> NaN Invalid_operation -andx875 and sNaN15 sNaN18 -> NaN Invalid_operation -andx876 and 
NaN16 sNaN19 -> NaN Invalid_operation -andx877 and -Inf +sNaN20 -> NaN Invalid_operation -andx878 and -1000 sNaN21 -> NaN Invalid_operation -andx879 and 1000 sNaN22 -> NaN Invalid_operation -andx880 and Inf sNaN23 -> NaN Invalid_operation -andx881 and +NaN25 +sNaN24 -> NaN Invalid_operation -andx882 and -NaN26 NaN28 -> NaN Invalid_operation -andx883 and -sNaN27 sNaN29 -> NaN Invalid_operation -andx884 and 1000 -NaN30 -> NaN Invalid_operation -andx885 and 1000 -sNaN31 -> NaN Invalid_operation +------------------------------------------------------------------------ +-- and.decTest -- digitwise logical AND -- +-- Copyright (c) IBM Corporation, 1981, 2008. All rights reserved. -- +------------------------------------------------------------------------ +-- Please see the document "General Decimal Arithmetic Testcases" -- +-- at http://www2.hursley.ibm.com/decimal for the description of -- +-- these testcases. -- +-- -- +-- These testcases are experimental ('beta' versions), and they -- +-- may contain errors. They are offered on an as-is basis. In -- +-- particular, achieving the same results as the tests here is not -- +-- a guarantee that an implementation complies with any Standard -- +-- or specification. The tests are not exhaustive. -- +-- -- +-- Please send comments, suggestions, and corrections to the author: -- +-- Mike Cowlishaw, IBM Fellow -- +-- IBM UK, PO Box 31, Birmingham Road, Warwick CV34 5JL, UK -- +-- mfc at uk.ibm.com -- +------------------------------------------------------------------------ +version: 2.59 + +extended: 1 +precision: 9 +rounding: half_up +maxExponent: 999 +minExponent: -999 + +-- Sanity check (truth table) +andx001 and 0 0 -> 0 +andx002 and 0 1 -> 0 +andx003 and 1 0 -> 0 +andx004 and 1 1 -> 1 +andx005 and 1100 1010 -> 1000 +andx006 and 1111 10 -> 10 +andx007 and 1111 1010 -> 1010 + +-- and at msd and msd-1 +andx010 and 000000000 000000000 -> 0 +andx011 and 000000000 100000000 -> 0 +andx012 and 100000000 000000000 -> 0 +andx013 and 100000000 100000000 -> 100000000 +andx014 and 000000000 000000000 -> 0 +andx015 and 000000000 010000000 -> 0 +andx016 and 010000000 000000000 -> 0 +andx017 and 010000000 010000000 -> 10000000 + +-- Various lengths +-- 123456789 123456789 123456789 +andx021 and 111111111 111111111 -> 111111111 +andx022 and 111111111111 111111111 -> 111111111 +andx023 and 111111111111 11111111 -> 11111111 +andx024 and 111111111 11111111 -> 11111111 +andx025 and 111111111 1111111 -> 1111111 +andx026 and 111111111111 111111 -> 111111 +andx027 and 111111111111 11111 -> 11111 +andx028 and 111111111111 1111 -> 1111 +andx029 and 111111111111 111 -> 111 +andx031 and 111111111111 11 -> 11 +andx032 and 111111111111 1 -> 1 +andx033 and 111111111111 1111111111 -> 111111111 +andx034 and 11111111111 11111111111 -> 111111111 +andx035 and 1111111111 111111111111 -> 111111111 +andx036 and 111111111 1111111111111 -> 111111111 + +andx040 and 111111111 111111111111 -> 111111111 +andx041 and 11111111 111111111111 -> 11111111 +andx042 and 11111111 111111111 -> 11111111 +andx043 and 1111111 111111111 -> 1111111 +andx044 and 111111 111111111 -> 111111 +andx045 and 11111 111111111 -> 11111 +andx046 and 1111 111111111 -> 1111 +andx047 and 111 111111111 -> 111 +andx048 and 11 111111111 -> 11 +andx049 and 1 111111111 -> 1 + +andx050 and 1111111111 1 -> 1 +andx051 and 111111111 1 -> 1 +andx052 and 11111111 1 -> 1 +andx053 and 1111111 1 -> 1 +andx054 and 111111 1 -> 1 +andx055 and 11111 1 -> 1 +andx056 and 1111 1 -> 1 +andx057 and 111 1 -> 1 +andx058 and 11 1 -> 1 +andx059 
and 1 1 -> 1 + +andx060 and 1111111111 0 -> 0 +andx061 and 111111111 0 -> 0 +andx062 and 11111111 0 -> 0 +andx063 and 1111111 0 -> 0 +andx064 and 111111 0 -> 0 +andx065 and 11111 0 -> 0 +andx066 and 1111 0 -> 0 +andx067 and 111 0 -> 0 +andx068 and 11 0 -> 0 +andx069 and 1 0 -> 0 + +andx070 and 1 1111111111 -> 1 +andx071 and 1 111111111 -> 1 +andx072 and 1 11111111 -> 1 +andx073 and 1 1111111 -> 1 +andx074 and 1 111111 -> 1 +andx075 and 1 11111 -> 1 +andx076 and 1 1111 -> 1 +andx077 and 1 111 -> 1 +andx078 and 1 11 -> 1 +andx079 and 1 1 -> 1 + +andx080 and 0 1111111111 -> 0 +andx081 and 0 111111111 -> 0 +andx082 and 0 11111111 -> 0 +andx083 and 0 1111111 -> 0 +andx084 and 0 111111 -> 0 +andx085 and 0 11111 -> 0 +andx086 and 0 1111 -> 0 +andx087 and 0 111 -> 0 +andx088 and 0 11 -> 0 +andx089 and 0 1 -> 0 + +andx090 and 011111111 111111111 -> 11111111 +andx091 and 101111111 111111111 -> 101111111 +andx092 and 110111111 111111111 -> 110111111 +andx093 and 111011111 111111111 -> 111011111 +andx094 and 111101111 111111111 -> 111101111 +andx095 and 111110111 111111111 -> 111110111 +andx096 and 111111011 111111111 -> 111111011 +andx097 and 111111101 111111111 -> 111111101 +andx098 and 111111110 111111111 -> 111111110 + +andx100 and 111111111 011111111 -> 11111111 +andx101 and 111111111 101111111 -> 101111111 +andx102 and 111111111 110111111 -> 110111111 +andx103 and 111111111 111011111 -> 111011111 +andx104 and 111111111 111101111 -> 111101111 +andx105 and 111111111 111110111 -> 111110111 +andx106 and 111111111 111111011 -> 111111011 +andx107 and 111111111 111111101 -> 111111101 +andx108 and 111111111 111111110 -> 111111110 + +-- non-0/1 should not be accepted, nor should signs +andx220 and 111111112 111111111 -> NaN Invalid_operation +andx221 and 333333333 333333333 -> NaN Invalid_operation +andx222 and 555555555 555555555 -> NaN Invalid_operation +andx223 and 777777777 777777777 -> NaN Invalid_operation +andx224 and 999999999 999999999 -> NaN Invalid_operation +andx225 and 222222222 999999999 -> NaN Invalid_operation +andx226 and 444444444 999999999 -> NaN Invalid_operation +andx227 and 666666666 999999999 -> NaN Invalid_operation +andx228 and 888888888 999999999 -> NaN Invalid_operation +andx229 and 999999999 222222222 -> NaN Invalid_operation +andx230 and 999999999 444444444 -> NaN Invalid_operation +andx231 and 999999999 666666666 -> NaN Invalid_operation +andx232 and 999999999 888888888 -> NaN Invalid_operation +-- a few randoms +andx240 and 567468689 -934981942 -> NaN Invalid_operation +andx241 and 567367689 934981942 -> NaN Invalid_operation +andx242 and -631917772 -706014634 -> NaN Invalid_operation +andx243 and -756253257 138579234 -> NaN Invalid_operation +andx244 and 835590149 567435400 -> NaN Invalid_operation +-- test MSD +andx250 and 200000000 100000000 -> NaN Invalid_operation +andx251 and 700000000 100000000 -> NaN Invalid_operation +andx252 and 800000000 100000000 -> NaN Invalid_operation +andx253 and 900000000 100000000 -> NaN Invalid_operation +andx254 and 200000000 000000000 -> NaN Invalid_operation +andx255 and 700000000 000000000 -> NaN Invalid_operation +andx256 and 800000000 000000000 -> NaN Invalid_operation +andx257 and 900000000 000000000 -> NaN Invalid_operation +andx258 and 100000000 200000000 -> NaN Invalid_operation +andx259 and 100000000 700000000 -> NaN Invalid_operation +andx260 and 100000000 800000000 -> NaN Invalid_operation +andx261 and 100000000 900000000 -> NaN Invalid_operation +andx262 and 000000000 200000000 -> NaN Invalid_operation +andx263 and 000000000 
700000000 -> NaN Invalid_operation +andx264 and 000000000 800000000 -> NaN Invalid_operation +andx265 and 000000000 900000000 -> NaN Invalid_operation +-- test MSD-1 +andx270 and 020000000 100000000 -> NaN Invalid_operation +andx271 and 070100000 100000000 -> NaN Invalid_operation +andx272 and 080010000 100000001 -> NaN Invalid_operation +andx273 and 090001000 100000010 -> NaN Invalid_operation +andx274 and 100000100 020010100 -> NaN Invalid_operation +andx275 and 100000000 070001000 -> NaN Invalid_operation +andx276 and 100000010 080010100 -> NaN Invalid_operation +andx277 and 100000000 090000010 -> NaN Invalid_operation +-- test LSD +andx280 and 001000002 100000000 -> NaN Invalid_operation +andx281 and 000000007 100000000 -> NaN Invalid_operation +andx282 and 000000008 100000000 -> NaN Invalid_operation +andx283 and 000000009 100000000 -> NaN Invalid_operation +andx284 and 100000000 000100002 -> NaN Invalid_operation +andx285 and 100100000 001000007 -> NaN Invalid_operation +andx286 and 100010000 010000008 -> NaN Invalid_operation +andx287 and 100001000 100000009 -> NaN Invalid_operation +-- test Middie +andx288 and 001020000 100000000 -> NaN Invalid_operation +andx289 and 000070001 100000000 -> NaN Invalid_operation +andx290 and 000080000 100010000 -> NaN Invalid_operation +andx291 and 000090000 100001000 -> NaN Invalid_operation +andx292 and 100000010 000020100 -> NaN Invalid_operation +andx293 and 100100000 000070010 -> NaN Invalid_operation +andx294 and 100010100 000080001 -> NaN Invalid_operation +andx295 and 100001000 000090000 -> NaN Invalid_operation +-- signs +andx296 and -100001000 -000000000 -> NaN Invalid_operation +andx297 and -100001000 000010000 -> NaN Invalid_operation +andx298 and 100001000 -000000000 -> NaN Invalid_operation +andx299 and 100001000 000011000 -> 1000 + +-- Nmax, Nmin, Ntiny +andx331 and 2 9.99999999E+999 -> NaN Invalid_operation +andx332 and 3 1E-999 -> NaN Invalid_operation +andx333 and 4 1.00000000E-999 -> NaN Invalid_operation +andx334 and 5 1E-1007 -> NaN Invalid_operation +andx335 and 6 -1E-1007 -> NaN Invalid_operation +andx336 and 7 -1.00000000E-999 -> NaN Invalid_operation +andx337 and 8 -1E-999 -> NaN Invalid_operation +andx338 and 9 -9.99999999E+999 -> NaN Invalid_operation +andx341 and 9.99999999E+999 -18 -> NaN Invalid_operation +andx342 and 1E-999 01 -> NaN Invalid_operation +andx343 and 1.00000000E-999 -18 -> NaN Invalid_operation +andx344 and 1E-1007 18 -> NaN Invalid_operation +andx345 and -1E-1007 -10 -> NaN Invalid_operation +andx346 and -1.00000000E-999 18 -> NaN Invalid_operation +andx347 and -1E-999 10 -> NaN Invalid_operation +andx348 and -9.99999999E+999 -18 -> NaN Invalid_operation + +-- A few other non-integers +andx361 and 1.0 1 -> NaN Invalid_operation +andx362 and 1E+1 1 -> NaN Invalid_operation +andx363 and 0.0 1 -> NaN Invalid_operation +andx364 and 0E+1 1 -> NaN Invalid_operation +andx365 and 9.9 1 -> NaN Invalid_operation +andx366 and 9E+1 1 -> NaN Invalid_operation +andx371 and 0 1.0 -> NaN Invalid_operation +andx372 and 0 1E+1 -> NaN Invalid_operation +andx373 and 0 0.0 -> NaN Invalid_operation +andx374 and 0 0E+1 -> NaN Invalid_operation +andx375 and 0 9.9 -> NaN Invalid_operation +andx376 and 0 9E+1 -> NaN Invalid_operation + +-- All Specials are in error +andx780 and -Inf -Inf -> NaN Invalid_operation +andx781 and -Inf -1000 -> NaN Invalid_operation +andx782 and -Inf -1 -> NaN Invalid_operation +andx783 and -Inf -0 -> NaN Invalid_operation +andx784 and -Inf 0 -> NaN Invalid_operation +andx785 and -Inf 1 -> NaN 
Invalid_operation +andx786 and -Inf 1000 -> NaN Invalid_operation +andx787 and -1000 -Inf -> NaN Invalid_operation +andx788 and -Inf -Inf -> NaN Invalid_operation +andx789 and -1 -Inf -> NaN Invalid_operation +andx790 and -0 -Inf -> NaN Invalid_operation +andx791 and 0 -Inf -> NaN Invalid_operation +andx792 and 1 -Inf -> NaN Invalid_operation +andx793 and 1000 -Inf -> NaN Invalid_operation +andx794 and Inf -Inf -> NaN Invalid_operation + +andx800 and Inf -Inf -> NaN Invalid_operation +andx801 and Inf -1000 -> NaN Invalid_operation +andx802 and Inf -1 -> NaN Invalid_operation +andx803 and Inf -0 -> NaN Invalid_operation +andx804 and Inf 0 -> NaN Invalid_operation +andx805 and Inf 1 -> NaN Invalid_operation +andx806 and Inf 1000 -> NaN Invalid_operation +andx807 and Inf Inf -> NaN Invalid_operation +andx808 and -1000 Inf -> NaN Invalid_operation +andx809 and -Inf Inf -> NaN Invalid_operation +andx810 and -1 Inf -> NaN Invalid_operation +andx811 and -0 Inf -> NaN Invalid_operation +andx812 and 0 Inf -> NaN Invalid_operation +andx813 and 1 Inf -> NaN Invalid_operation +andx814 and 1000 Inf -> NaN Invalid_operation +andx815 and Inf Inf -> NaN Invalid_operation + +andx821 and NaN -Inf -> NaN Invalid_operation +andx822 and NaN -1000 -> NaN Invalid_operation +andx823 and NaN -1 -> NaN Invalid_operation +andx824 and NaN -0 -> NaN Invalid_operation +andx825 and NaN 0 -> NaN Invalid_operation +andx826 and NaN 1 -> NaN Invalid_operation +andx827 and NaN 1000 -> NaN Invalid_operation +andx828 and NaN Inf -> NaN Invalid_operation +andx829 and NaN NaN -> NaN Invalid_operation +andx830 and -Inf NaN -> NaN Invalid_operation +andx831 and -1000 NaN -> NaN Invalid_operation +andx832 and -1 NaN -> NaN Invalid_operation +andx833 and -0 NaN -> NaN Invalid_operation +andx834 and 0 NaN -> NaN Invalid_operation +andx835 and 1 NaN -> NaN Invalid_operation +andx836 and 1000 NaN -> NaN Invalid_operation +andx837 and Inf NaN -> NaN Invalid_operation + +andx841 and sNaN -Inf -> NaN Invalid_operation +andx842 and sNaN -1000 -> NaN Invalid_operation +andx843 and sNaN -1 -> NaN Invalid_operation +andx844 and sNaN -0 -> NaN Invalid_operation +andx845 and sNaN 0 -> NaN Invalid_operation +andx846 and sNaN 1 -> NaN Invalid_operation +andx847 and sNaN 1000 -> NaN Invalid_operation +andx848 and sNaN NaN -> NaN Invalid_operation +andx849 and sNaN sNaN -> NaN Invalid_operation +andx850 and NaN sNaN -> NaN Invalid_operation +andx851 and -Inf sNaN -> NaN Invalid_operation +andx852 and -1000 sNaN -> NaN Invalid_operation +andx853 and -1 sNaN -> NaN Invalid_operation +andx854 and -0 sNaN -> NaN Invalid_operation +andx855 and 0 sNaN -> NaN Invalid_operation +andx856 and 1 sNaN -> NaN Invalid_operation +andx857 and 1000 sNaN -> NaN Invalid_operation +andx858 and Inf sNaN -> NaN Invalid_operation +andx859 and NaN sNaN -> NaN Invalid_operation + +-- propagating NaNs +andx861 and NaN1 -Inf -> NaN Invalid_operation +andx862 and +NaN2 -1000 -> NaN Invalid_operation +andx863 and NaN3 1000 -> NaN Invalid_operation +andx864 and NaN4 Inf -> NaN Invalid_operation +andx865 and NaN5 +NaN6 -> NaN Invalid_operation +andx866 and -Inf NaN7 -> NaN Invalid_operation +andx867 and -1000 NaN8 -> NaN Invalid_operation +andx868 and 1000 NaN9 -> NaN Invalid_operation +andx869 and Inf +NaN10 -> NaN Invalid_operation +andx871 and sNaN11 -Inf -> NaN Invalid_operation +andx872 and sNaN12 -1000 -> NaN Invalid_operation +andx873 and sNaN13 1000 -> NaN Invalid_operation +andx874 and sNaN14 NaN17 -> NaN Invalid_operation +andx875 and sNaN15 sNaN18 -> NaN 
Invalid_operation +andx876 and NaN16 sNaN19 -> NaN Invalid_operation +andx877 and -Inf +sNaN20 -> NaN Invalid_operation +andx878 and -1000 sNaN21 -> NaN Invalid_operation +andx879 and 1000 sNaN22 -> NaN Invalid_operation +andx880 and Inf sNaN23 -> NaN Invalid_operation +andx881 and +NaN25 +sNaN24 -> NaN Invalid_operation +andx882 and -NaN26 NaN28 -> NaN Invalid_operation +andx883 and -sNaN27 sNaN29 -> NaN Invalid_operation +andx884 and 1000 -NaN30 -> NaN Invalid_operation +andx885 and 1000 -sNaN31 -> NaN Invalid_operation diff --git a/lib-python/2.7/test/decimaltestdata/class.decTest b/lib-python/2.7/test/decimaltestdata/class.decTest --- a/lib-python/2.7/test/decimaltestdata/class.decTest +++ b/lib-python/2.7/test/decimaltestdata/class.decTest @@ -1,131 +1,131 @@ ------------------------------------------------------------------------- --- class.decTest -- Class operations -- --- Copyright (c) IBM Corporation, 1981, 2008. All rights reserved. -- ------------------------------------------------------------------------- --- Please see the document "General Decimal Arithmetic Testcases" -- --- at http://www2.hursley.ibm.com/decimal for the description of -- --- these testcases. -- --- -- --- These testcases are experimental ('beta' versions), and they -- --- may contain errors. They are offered on an as-is basis. In -- --- particular, achieving the same results as the tests here is not -- --- a guarantee that an implementation complies with any Standard -- --- or specification. The tests are not exhaustive. -- --- -- --- Please send comments, suggestions, and corrections to the author: -- --- Mike Cowlishaw, IBM Fellow -- --- IBM UK, PO Box 31, Birmingham Road, Warwick CV34 5JL, UK -- --- mfc at uk.ibm.com -- ------------------------------------------------------------------------- -version: 2.59 - --- [New 2006.11.27] - -precision: 9 -maxExponent: 999 -minExponent: -999 -extended: 1 -clamp: 1 -rounding: half_even - -clasx001 class 0 -> +Zero -clasx002 class 0.00 -> +Zero -clasx003 class 0E+5 -> +Zero -clasx004 class 1E-1007 -> +Subnormal -clasx005 class 0.1E-999 -> +Subnormal -clasx006 class 0.99999999E-999 -> +Subnormal -clasx007 class 1.00000000E-999 -> +Normal -clasx008 class 1E-999 -> +Normal -clasx009 class 1E-100 -> +Normal -clasx010 class 1E-10 -> +Normal -clasx012 class 1E-1 -> +Normal -clasx013 class 1 -> +Normal -clasx014 class 2.50 -> +Normal -clasx015 class 100.100 -> +Normal -clasx016 class 1E+30 -> +Normal -clasx017 class 1E+999 -> +Normal -clasx018 class 9.99999999E+999 -> +Normal -clasx019 class Inf -> +Infinity - -clasx021 class -0 -> -Zero -clasx022 class -0.00 -> -Zero -clasx023 class -0E+5 -> -Zero -clasx024 class -1E-1007 -> -Subnormal -clasx025 class -0.1E-999 -> -Subnormal -clasx026 class -0.99999999E-999 -> -Subnormal -clasx027 class -1.00000000E-999 -> -Normal -clasx028 class -1E-999 -> -Normal -clasx029 class -1E-100 -> -Normal -clasx030 class -1E-10 -> -Normal -clasx032 class -1E-1 -> -Normal -clasx033 class -1 -> -Normal -clasx034 class -2.50 -> -Normal -clasx035 class -100.100 -> -Normal -clasx036 class -1E+30 -> -Normal -clasx037 class -1E+999 -> -Normal -clasx038 class -9.99999999E+999 -> -Normal -clasx039 class -Inf -> -Infinity - -clasx041 class NaN -> NaN -clasx042 class -NaN -> NaN -clasx043 class +NaN12345 -> NaN -clasx044 class sNaN -> sNaN -clasx045 class -sNaN -> sNaN -clasx046 class +sNaN12345 -> sNaN - - --- decimal64 bounds - -precision: 16 -maxExponent: 384 -minExponent: -383 -clamp: 1 -rounding: half_even - -clasx201 class 0 -> +Zero -clasx202 
class 0.00 -> +Zero -clasx203 class 0E+5 -> +Zero -clasx204 class 1E-396 -> +Subnormal -clasx205 class 0.1E-383 -> +Subnormal -clasx206 class 0.999999999999999E-383 -> +Subnormal -clasx207 class 1.000000000000000E-383 -> +Normal -clasx208 class 1E-383 -> +Normal -clasx209 class 1E-100 -> +Normal -clasx210 class 1E-10 -> +Normal -clasx212 class 1E-1 -> +Normal -clasx213 class 1 -> +Normal -clasx214 class 2.50 -> +Normal -clasx215 class 100.100 -> +Normal -clasx216 class 1E+30 -> +Normal -clasx217 class 1E+384 -> +Normal -clasx218 class 9.999999999999999E+384 -> +Normal -clasx219 class Inf -> +Infinity - -clasx221 class -0 -> -Zero -clasx222 class -0.00 -> -Zero -clasx223 class -0E+5 -> -Zero -clasx224 class -1E-396 -> -Subnormal -clasx225 class -0.1E-383 -> -Subnormal -clasx226 class -0.999999999999999E-383 -> -Subnormal -clasx227 class -1.000000000000000E-383 -> -Normal -clasx228 class -1E-383 -> -Normal -clasx229 class -1E-100 -> -Normal -clasx230 class -1E-10 -> -Normal -clasx232 class -1E-1 -> -Normal -clasx233 class -1 -> -Normal -clasx234 class -2.50 -> -Normal -clasx235 class -100.100 -> -Normal -clasx236 class -1E+30 -> -Normal -clasx237 class -1E+384 -> -Normal -clasx238 class -9.999999999999999E+384 -> -Normal -clasx239 class -Inf -> -Infinity - -clasx241 class NaN -> NaN -clasx242 class -NaN -> NaN -clasx243 class +NaN12345 -> NaN -clasx244 class sNaN -> sNaN -clasx245 class -sNaN -> sNaN -clasx246 class +sNaN12345 -> sNaN - - - +------------------------------------------------------------------------ +-- class.decTest -- Class operations -- +-- Copyright (c) IBM Corporation, 1981, 2008. All rights reserved. -- +------------------------------------------------------------------------ +-- Please see the document "General Decimal Arithmetic Testcases" -- +-- at http://www2.hursley.ibm.com/decimal for the description of -- +-- these testcases. -- +-- -- +-- These testcases are experimental ('beta' versions), and they -- +-- may contain errors. They are offered on an as-is basis. In -- +-- particular, achieving the same results as the tests here is not -- +-- a guarantee that an implementation complies with any Standard -- +-- or specification. The tests are not exhaustive. 
-- +-- -- +-- Please send comments, suggestions, and corrections to the author: -- +-- Mike Cowlishaw, IBM Fellow -- +-- IBM UK, PO Box 31, Birmingham Road, Warwick CV34 5JL, UK -- +-- mfc at uk.ibm.com -- +------------------------------------------------------------------------ +version: 2.59 + +-- [New 2006.11.27] + +precision: 9 +maxExponent: 999 +minExponent: -999 +extended: 1 +clamp: 1 +rounding: half_even + +clasx001 class 0 -> +Zero +clasx002 class 0.00 -> +Zero +clasx003 class 0E+5 -> +Zero +clasx004 class 1E-1007 -> +Subnormal +clasx005 class 0.1E-999 -> +Subnormal +clasx006 class 0.99999999E-999 -> +Subnormal +clasx007 class 1.00000000E-999 -> +Normal +clasx008 class 1E-999 -> +Normal +clasx009 class 1E-100 -> +Normal +clasx010 class 1E-10 -> +Normal +clasx012 class 1E-1 -> +Normal +clasx013 class 1 -> +Normal +clasx014 class 2.50 -> +Normal +clasx015 class 100.100 -> +Normal +clasx016 class 1E+30 -> +Normal +clasx017 class 1E+999 -> +Normal +clasx018 class 9.99999999E+999 -> +Normal +clasx019 class Inf -> +Infinity + +clasx021 class -0 -> -Zero +clasx022 class -0.00 -> -Zero +clasx023 class -0E+5 -> -Zero +clasx024 class -1E-1007 -> -Subnormal +clasx025 class -0.1E-999 -> -Subnormal +clasx026 class -0.99999999E-999 -> -Subnormal +clasx027 class -1.00000000E-999 -> -Normal +clasx028 class -1E-999 -> -Normal +clasx029 class -1E-100 -> -Normal +clasx030 class -1E-10 -> -Normal +clasx032 class -1E-1 -> -Normal +clasx033 class -1 -> -Normal +clasx034 class -2.50 -> -Normal +clasx035 class -100.100 -> -Normal +clasx036 class -1E+30 -> -Normal +clasx037 class -1E+999 -> -Normal +clasx038 class -9.99999999E+999 -> -Normal +clasx039 class -Inf -> -Infinity + +clasx041 class NaN -> NaN +clasx042 class -NaN -> NaN +clasx043 class +NaN12345 -> NaN +clasx044 class sNaN -> sNaN +clasx045 class -sNaN -> sNaN +clasx046 class +sNaN12345 -> sNaN + + +-- decimal64 bounds + +precision: 16 +maxExponent: 384 +minExponent: -383 +clamp: 1 +rounding: half_even + +clasx201 class 0 -> +Zero +clasx202 class 0.00 -> +Zero +clasx203 class 0E+5 -> +Zero +clasx204 class 1E-396 -> +Subnormal +clasx205 class 0.1E-383 -> +Subnormal +clasx206 class 0.999999999999999E-383 -> +Subnormal +clasx207 class 1.000000000000000E-383 -> +Normal +clasx208 class 1E-383 -> +Normal +clasx209 class 1E-100 -> +Normal +clasx210 class 1E-10 -> +Normal +clasx212 class 1E-1 -> +Normal +clasx213 class 1 -> +Normal +clasx214 class 2.50 -> +Normal +clasx215 class 100.100 -> +Normal +clasx216 class 1E+30 -> +Normal +clasx217 class 1E+384 -> +Normal +clasx218 class 9.999999999999999E+384 -> +Normal +clasx219 class Inf -> +Infinity + +clasx221 class -0 -> -Zero +clasx222 class -0.00 -> -Zero +clasx223 class -0E+5 -> -Zero +clasx224 class -1E-396 -> -Subnormal +clasx225 class -0.1E-383 -> -Subnormal +clasx226 class -0.999999999999999E-383 -> -Subnormal +clasx227 class -1.000000000000000E-383 -> -Normal +clasx228 class -1E-383 -> -Normal +clasx229 class -1E-100 -> -Normal +clasx230 class -1E-10 -> -Normal +clasx232 class -1E-1 -> -Normal +clasx233 class -1 -> -Normal +clasx234 class -2.50 -> -Normal +clasx235 class -100.100 -> -Normal +clasx236 class -1E+30 -> -Normal +clasx237 class -1E+384 -> -Normal +clasx238 class -9.999999999999999E+384 -> -Normal +clasx239 class -Inf -> -Infinity + +clasx241 class NaN -> NaN +clasx242 class -NaN -> NaN +clasx243 class +NaN12345 -> NaN +clasx244 class sNaN -> sNaN +clasx245 class -sNaN -> sNaN +clasx246 class +sNaN12345 -> sNaN + + + diff --git a/lib-python/2.7/test/decimaltestdata/comparetotal.decTest 
b/lib-python/2.7/test/decimaltestdata/comparetotal.decTest --- a/lib-python/2.7/test/decimaltestdata/comparetotal.decTest +++ b/lib-python/2.7/test/decimaltestdata/comparetotal.decTest @@ -1,798 +1,798 @@ ------------------------------------------------------------------------- --- comparetotal.decTest -- decimal comparison using total ordering -- --- Copyright (c) IBM Corporation, 1981, 2008. All rights reserved. -- ------------------------------------------------------------------------- --- Please see the document "General Decimal Arithmetic Testcases" -- --- at http://www2.hursley.ibm.com/decimal for the description of -- --- these testcases. -- --- -- --- These testcases are experimental ('beta' versions), and they -- --- may contain errors. They are offered on an as-is basis. In -- --- particular, achieving the same results as the tests here is not -- --- a guarantee that an implementation complies with any Standard -- --- or specification. The tests are not exhaustive. -- --- -- --- Please send comments, suggestions, and corrections to the author: -- --- Mike Cowlishaw, IBM Fellow -- --- IBM UK, PO Box 31, Birmingham Road, Warwick CV34 5JL, UK -- --- mfc at uk.ibm.com -- ------------------------------------------------------------------------- -version: 2.59 - --- Note that we cannot assume add/subtract tests cover paths adequately, --- here, because the code might be quite different (comparison cannot --- overflow or underflow, so actual subtractions are not necessary). --- Similarly, comparetotal will have some radically different paths --- than compare. - -extended: 1 -precision: 16 -rounding: half_up -maxExponent: 384 -minExponent: -383 - --- sanity checks -cotx001 comparetotal -2 -2 -> 0 -cotx002 comparetotal -2 -1 -> -1 -cotx003 comparetotal -2 0 -> -1 -cotx004 comparetotal -2 1 -> -1 -cotx005 comparetotal -2 2 -> -1 -cotx006 comparetotal -1 -2 -> 1 -cotx007 comparetotal -1 -1 -> 0 -cotx008 comparetotal -1 0 -> -1 -cotx009 comparetotal -1 1 -> -1 -cotx010 comparetotal -1 2 -> -1 -cotx011 comparetotal 0 -2 -> 1 -cotx012 comparetotal 0 -1 -> 1 -cotx013 comparetotal 0 0 -> 0 -cotx014 comparetotal 0 1 -> -1 -cotx015 comparetotal 0 2 -> -1 -cotx016 comparetotal 1 -2 -> 1 -cotx017 comparetotal 1 -1 -> 1 -cotx018 comparetotal 1 0 -> 1 -cotx019 comparetotal 1 1 -> 0 -cotx020 comparetotal 1 2 -> -1 -cotx021 comparetotal 2 -2 -> 1 -cotx022 comparetotal 2 -1 -> 1 -cotx023 comparetotal 2 0 -> 1 -cotx025 comparetotal 2 1 -> 1 -cotx026 comparetotal 2 2 -> 0 - -cotx031 comparetotal -20 -20 -> 0 -cotx032 comparetotal -20 -10 -> -1 -cotx033 comparetotal -20 00 -> -1 -cotx034 comparetotal -20 10 -> -1 -cotx035 comparetotal -20 20 -> -1 -cotx036 comparetotal -10 -20 -> 1 -cotx037 comparetotal -10 -10 -> 0 -cotx038 comparetotal -10 00 -> -1 -cotx039 comparetotal -10 10 -> -1 -cotx040 comparetotal -10 20 -> -1 -cotx041 comparetotal 00 -20 -> 1 -cotx042 comparetotal 00 -10 -> 1 -cotx043 comparetotal 00 00 -> 0 -cotx044 comparetotal 00 10 -> -1 -cotx045 comparetotal 00 20 -> -1 -cotx046 comparetotal 10 -20 -> 1 -cotx047 comparetotal 10 -10 -> 1 -cotx048 comparetotal 10 00 -> 1 -cotx049 comparetotal 10 10 -> 0 -cotx050 comparetotal 10 20 -> -1 -cotx051 comparetotal 20 -20 -> 1 -cotx052 comparetotal 20 -10 -> 1 -cotx053 comparetotal 20 00 -> 1 -cotx055 comparetotal 20 10 -> 1 -cotx056 comparetotal 20 20 -> 0 - -cotx061 comparetotal -2.0 -2.0 -> 0 -cotx062 comparetotal -2.0 -1.0 -> -1 -cotx063 comparetotal -2.0 0.0 -> -1 -cotx064 comparetotal -2.0 1.0 -> -1 -cotx065 comparetotal -2.0 2.0 -> -1 -cotx066 
comparetotal -1.0 -2.0 -> 1 -cotx067 comparetotal -1.0 -1.0 -> 0 -cotx068 comparetotal -1.0 0.0 -> -1 -cotx069 comparetotal -1.0 1.0 -> -1 -cotx070 comparetotal -1.0 2.0 -> -1 -cotx071 comparetotal 0.0 -2.0 -> 1 -cotx072 comparetotal 0.0 -1.0 -> 1 -cotx073 comparetotal 0.0 0.0 -> 0 -cotx074 comparetotal 0.0 1.0 -> -1 -cotx075 comparetotal 0.0 2.0 -> -1 -cotx076 comparetotal 1.0 -2.0 -> 1 -cotx077 comparetotal 1.0 -1.0 -> 1 -cotx078 comparetotal 1.0 0.0 -> 1 -cotx079 comparetotal 1.0 1.0 -> 0 -cotx080 comparetotal 1.0 2.0 -> -1 -cotx081 comparetotal 2.0 -2.0 -> 1 -cotx082 comparetotal 2.0 -1.0 -> 1 -cotx083 comparetotal 2.0 0.0 -> 1 -cotx085 comparetotal 2.0 1.0 -> 1 -cotx086 comparetotal 2.0 2.0 -> 0 - --- now some cases which might overflow if subtract were used -maxexponent: 999999999 -minexponent: -999999999 -cotx090 comparetotal 9.99999999E+999999999 9.99999999E+999999999 -> 0 -cotx091 comparetotal -9.99999999E+999999999 9.99999999E+999999999 -> -1 -cotx092 comparetotal 9.99999999E+999999999 -9.99999999E+999999999 -> 1 -cotx093 comparetotal -9.99999999E+999999999 -9.99999999E+999999999 -> 0 - --- Examples -cotx094 comparetotal 12.73 127.9 -> -1 -cotx095 comparetotal -127 12 -> -1 -cotx096 comparetotal 12.30 12.3 -> -1 -cotx097 comparetotal 12.30 12.30 -> 0 -cotx098 comparetotal 12.3 12.300 -> 1 -cotx099 comparetotal 12.3 NaN -> -1 - --- some differing length/exponent cases --- in this first group, compare would compare all equal -cotx100 comparetotal 7.0 7.0 -> 0 -cotx101 comparetotal 7.0 7 -> -1 -cotx102 comparetotal 7 7.0 -> 1 -cotx103 comparetotal 7E+0 7.0 -> 1 -cotx104 comparetotal 70E-1 7.0 -> 0 -cotx105 comparetotal 0.7E+1 7 -> 0 -cotx106 comparetotal 70E-1 7 -> -1 -cotx107 comparetotal 7.0 7E+0 -> -1 -cotx108 comparetotal 7.0 70E-1 -> 0 -cotx109 comparetotal 7 0.7E+1 -> 0 -cotx110 comparetotal 7 70E-1 -> 1 - -cotx120 comparetotal 8.0 7.0 -> 1 -cotx121 comparetotal 8.0 7 -> 1 -cotx122 comparetotal 8 7.0 -> 1 -cotx123 comparetotal 8E+0 7.0 -> 1 -cotx124 comparetotal 80E-1 7.0 -> 1 -cotx125 comparetotal 0.8E+1 7 -> 1 -cotx126 comparetotal 80E-1 7 -> 1 -cotx127 comparetotal 8.0 7E+0 -> 1 -cotx128 comparetotal 8.0 70E-1 -> 1 -cotx129 comparetotal 8 0.7E+1 -> 1 -cotx130 comparetotal 8 70E-1 -> 1 - -cotx140 comparetotal 8.0 9.0 -> -1 -cotx141 comparetotal 8.0 9 -> -1 -cotx142 comparetotal 8 9.0 -> -1 -cotx143 comparetotal 8E+0 9.0 -> -1 -cotx144 comparetotal 80E-1 9.0 -> -1 -cotx145 comparetotal 0.8E+1 9 -> -1 -cotx146 comparetotal 80E-1 9 -> -1 -cotx147 comparetotal 8.0 9E+0 -> -1 -cotx148 comparetotal 8.0 90E-1 -> -1 -cotx149 comparetotal 8 0.9E+1 -> -1 -cotx150 comparetotal 8 90E-1 -> -1 - --- and again, with sign changes -+ .. 
-cotx200 comparetotal -7.0 7.0 -> -1 -cotx201 comparetotal -7.0 7 -> -1 -cotx202 comparetotal -7 7.0 -> -1 -cotx203 comparetotal -7E+0 7.0 -> -1 -cotx204 comparetotal -70E-1 7.0 -> -1 -cotx205 comparetotal -0.7E+1 7 -> -1 -cotx206 comparetotal -70E-1 7 -> -1 -cotx207 comparetotal -7.0 7E+0 -> -1 -cotx208 comparetotal -7.0 70E-1 -> -1 -cotx209 comparetotal -7 0.7E+1 -> -1 -cotx210 comparetotal -7 70E-1 -> -1 - -cotx220 comparetotal -8.0 7.0 -> -1 -cotx221 comparetotal -8.0 7 -> -1 -cotx222 comparetotal -8 7.0 -> -1 -cotx223 comparetotal -8E+0 7.0 -> -1 -cotx224 comparetotal -80E-1 7.0 -> -1 -cotx225 comparetotal -0.8E+1 7 -> -1 -cotx226 comparetotal -80E-1 7 -> -1 -cotx227 comparetotal -8.0 7E+0 -> -1 -cotx228 comparetotal -8.0 70E-1 -> -1 -cotx229 comparetotal -8 0.7E+1 -> -1 -cotx230 comparetotal -8 70E-1 -> -1 - -cotx240 comparetotal -8.0 9.0 -> -1 -cotx241 comparetotal -8.0 9 -> -1 -cotx242 comparetotal -8 9.0 -> -1 -cotx243 comparetotal -8E+0 9.0 -> -1 -cotx244 comparetotal -80E-1 9.0 -> -1 -cotx245 comparetotal -0.8E+1 9 -> -1 -cotx246 comparetotal -80E-1 9 -> -1 -cotx247 comparetotal -8.0 9E+0 -> -1 -cotx248 comparetotal -8.0 90E-1 -> -1 -cotx249 comparetotal -8 0.9E+1 -> -1 -cotx250 comparetotal -8 90E-1 -> -1 - --- and again, with sign changes +- .. -cotx300 comparetotal 7.0 -7.0 -> 1 -cotx301 comparetotal 7.0 -7 -> 1 -cotx302 comparetotal 7 -7.0 -> 1 -cotx303 comparetotal 7E+0 -7.0 -> 1 -cotx304 comparetotal 70E-1 -7.0 -> 1 -cotx305 comparetotal .7E+1 -7 -> 1 -cotx306 comparetotal 70E-1 -7 -> 1 -cotx307 comparetotal 7.0 -7E+0 -> 1 -cotx308 comparetotal 7.0 -70E-1 -> 1 -cotx309 comparetotal 7 -.7E+1 -> 1 -cotx310 comparetotal 7 -70E-1 -> 1 - -cotx320 comparetotal 8.0 -7.0 -> 1 -cotx321 comparetotal 8.0 -7 -> 1 -cotx322 comparetotal 8 -7.0 -> 1 -cotx323 comparetotal 8E+0 -7.0 -> 1 -cotx324 comparetotal 80E-1 -7.0 -> 1 -cotx325 comparetotal .8E+1 -7 -> 1 -cotx326 comparetotal 80E-1 -7 -> 1 -cotx327 comparetotal 8.0 -7E+0 -> 1 -cotx328 comparetotal 8.0 -70E-1 -> 1 -cotx329 comparetotal 8 -.7E+1 -> 1 -cotx330 comparetotal 8 -70E-1 -> 1 - -cotx340 comparetotal 8.0 -9.0 -> 1 -cotx341 comparetotal 8.0 -9 -> 1 -cotx342 comparetotal 8 -9.0 -> 1 -cotx343 comparetotal 8E+0 -9.0 -> 1 -cotx344 comparetotal 80E-1 -9.0 -> 1 -cotx345 comparetotal .8E+1 -9 -> 1 -cotx346 comparetotal 80E-1 -9 -> 1 -cotx347 comparetotal 8.0 -9E+0 -> 1 -cotx348 comparetotal 8.0 -90E-1 -> 1 -cotx349 comparetotal 8 -.9E+1 -> 1 -cotx350 comparetotal 8 -90E-1 -> 1 - --- and again, with sign changes -- .. 
-cotx400 comparetotal -7.0 -7.0 -> 0 -cotx401 comparetotal -7.0 -7 -> 1 -cotx402 comparetotal -7 -7.0 -> -1 -cotx403 comparetotal -7E+0 -7.0 -> -1 -cotx404 comparetotal -70E-1 -7.0 -> 0 -cotx405 comparetotal -.7E+1 -7 -> 0 -cotx406 comparetotal -70E-1 -7 -> 1 -cotx407 comparetotal -7.0 -7E+0 -> 1 -cotx408 comparetotal -7.0 -70E-1 -> 0 -cotx409 comparetotal -7 -.7E+1 -> 0 From noreply at buildbot.pypy.org Thu Feb 9 15:02:09 2012 From: noreply at buildbot.pypy.org (bivab) Date: Thu, 9 Feb 2012 15:02:09 +0100 (CET) Subject: [pypy-commit] pypy arm-backend-2: implement changes from 458e381ff84d Message-ID: <20120209140209.1528582B1E@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: arm-backend-2 Changeset: r52286:44d8b36be898 Date: 2012-02-09 15:00 +0100 http://bitbucket.org/pypy/pypy/changeset/44d8b36be898/ Log: implement changes from 458e381ff84d diff --git a/pypy/jit/backend/arm/opassembler.py b/pypy/jit/backend/arm/opassembler.py --- a/pypy/jit/backend/arm/opassembler.py +++ b/pypy/jit/backend/arm/opassembler.py @@ -505,6 +505,7 @@ def emit_op_debug_merge_point(self, op, arglocs, regalloc, fcond): return fcond emit_op_jit_debug = emit_op_debug_merge_point + emit_op_keepalive = emit_op_debug_merge_point def emit_op_cond_call_gc_wb(self, op, arglocs, regalloc, fcond): # Write code equivalent to write_barrier() in the GC: it checks diff --git a/pypy/jit/backend/arm/regalloc.py b/pypy/jit/backend/arm/regalloc.py --- a/pypy/jit/backend/arm/regalloc.py +++ b/pypy/jit/backend/arm/regalloc.py @@ -978,6 +978,7 @@ prepare_op_debug_merge_point = void prepare_op_jit_debug = void + prepare_keepalive = void def prepare_op_cond_call_gc_wb(self, op, fcond): assert op.result is None From noreply at buildbot.pypy.org Thu Feb 9 15:13:22 2012 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Thu, 9 Feb 2012 15:13:22 +0100 (CET) Subject: [pypy-commit] pypy default: added left shift to numpy Message-ID: <20120209141322.46FBA82B1E@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: Changeset: r52287:d89a66e9dc00 Date: 2012-02-09 08:48 -0500 http://bitbucket.org/pypy/pypy/changeset/d89a66e9dc00/ Log: added left shift to numpy diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -103,6 +103,7 @@ descr_div = _binop_impl("divide") descr_pow = _binop_impl("power") descr_mod = _binop_impl("mod") + descr_lshift = _binop_impl("left_shift") descr_eq = _binop_impl("equal") descr_ne = _binop_impl("not_equal") @@ -1235,6 +1236,7 @@ __div__ = interp2app(BaseArray.descr_div), __pow__ = interp2app(BaseArray.descr_pow), __mod__ = interp2app(BaseArray.descr_mod), + __lshift__ = interp2app(BaseArray.descr_lshift), __radd__ = interp2app(BaseArray.descr_radd), __rsub__ = interp2app(BaseArray.descr_rsub), diff --git a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py --- a/pypy/module/micronumpy/interp_ufuncs.py +++ b/pypy/module/micronumpy/interp_ufuncs.py @@ -393,6 +393,8 @@ ("mod", "mod", 2, {"promote_bools": True}), ("power", "pow", 2, {"promote_bools": True}), + ("left_shift", "lshift", 2, {"int_only": True}), + ("equal", "eq", 2, {"comparison_func": True}), ("not_equal", "ne", 2, {"comparison_func": True}), ("less", "lt", 2, {"comparison_func": True}), diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ 
b/pypy/module/micronumpy/test/test_numarray.py @@ -625,6 +625,16 @@ for i in range(5): assert b[i] == i / 5.0 + def test_lshift(self): + from _numpypy import array + + a = array([0, 1, 2, 3]) + assert (a << 2 == [0, 4, 8, 12]).all() + a = array([True, False]) + assert (a << 2 == [4, 0]).all() + a = array([1.0]) + raises(TypeError, lambda: a << 2) + def test_pow(self): from _numpypy import array a = array(range(5), float) diff --git a/pypy/module/micronumpy/types.py b/pypy/module/micronumpy/types.py --- a/pypy/module/micronumpy/types.py +++ b/pypy/module/micronumpy/types.py @@ -295,6 +295,10 @@ v1 *= v1 return res + @simple_binary_op + def lshift(self, v1, v2): + return v1 << v2 + @simple_unary_op def sign(self, v): if v > 0: From noreply at buildbot.pypy.org Thu Feb 9 15:13:23 2012 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Thu, 9 Feb 2012 15:13:23 +0100 (CET) Subject: [pypy-commit] pypy default: added divmod to ndarray Message-ID: <20120209141323.80DF482B1E@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: Changeset: r52288:9b833e2a9dae Date: 2012-02-09 08:53 -0500 http://bitbucket.org/pypy/pypy/changeset/9b833e2a9dae/ Log: added divmod to ndarray diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -101,8 +101,8 @@ descr_sub = _binop_impl("subtract") descr_mul = _binop_impl("multiply") descr_div = _binop_impl("divide") + descr_mod = _binop_impl("mod") descr_pow = _binop_impl("power") - descr_mod = _binop_impl("mod") descr_lshift = _binop_impl("left_shift") descr_eq = _binop_impl("equal") @@ -115,6 +115,11 @@ descr_and = _binop_impl("bitwise_and") descr_or = _binop_impl("bitwise_or") + def descr_divmod(self, space, w_other): + w_quotient = self.descr_div(space, w_other) + w_remainder = self.descr_mod(space, w_other) + return space.newtuple([w_quotient, w_remainder]) + def _binop_right_impl(ufunc_name): def impl(self, space, w_other): w_other = scalar_w(space, @@ -1234,8 +1239,9 @@ __sub__ = interp2app(BaseArray.descr_sub), __mul__ = interp2app(BaseArray.descr_mul), __div__ = interp2app(BaseArray.descr_div), + __mod__ = interp2app(BaseArray.descr_mod), + __divmod__ = interp2app(BaseArray.descr_divmod), __pow__ = interp2app(BaseArray.descr_pow), - __mod__ = interp2app(BaseArray.descr_mod), __lshift__ = interp2app(BaseArray.descr_lshift), __radd__ = interp2app(BaseArray.descr_radd), diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -625,6 +625,13 @@ for i in range(5): assert b[i] == i / 5.0 + def test_divmod(self): + from _numpypy import arange + + a, b = divmod(arange(10), 3) + assert (a == [0, 0, 0, 1, 1, 1, 2, 2, 2, 3]).all() + assert (b == [0, 1, 2, 0, 1, 2, 0, 1, 2, 0]).all() + def test_lshift(self): from _numpypy import array From noreply at buildbot.pypy.org Thu Feb 9 15:13:24 2012 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Thu, 9 Feb 2012 15:13:24 +0100 (CET) Subject: [pypy-commit] pypy default: added rand to numarray Message-ID: <20120209141324.B751982B1E@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: Changeset: r52289:fef07794f3ec Date: 2012-02-09 08:56 -0500 http://bitbucket.org/pypy/pypy/changeset/fef07794f3ec/ Log: added rand to numarray diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py 
--- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -133,8 +133,10 @@ descr_rsub = _binop_right_impl("subtract") descr_rmul = _binop_right_impl("multiply") descr_rdiv = _binop_right_impl("divide") + descr_rmod = _binop_right_impl("mod") descr_rpow = _binop_right_impl("power") - descr_rmod = _binop_right_impl("mod") + + descr_rand = _binop_right_impl("bitwise_and") def _reduce_ufunc_impl(ufunc_name, promote_to_largest=False): def impl(self, space, w_axis=None): @@ -1233,6 +1235,7 @@ __pos__ = interp2app(BaseArray.descr_pos), __neg__ = interp2app(BaseArray.descr_neg), __abs__ = interp2app(BaseArray.descr_abs), + __invert__ = interp2app(BaseArray.descr_invert), __nonzero__ = interp2app(BaseArray.descr_nonzero), __add__ = interp2app(BaseArray.descr_add), @@ -1244,12 +1247,17 @@ __pow__ = interp2app(BaseArray.descr_pow), __lshift__ = interp2app(BaseArray.descr_lshift), + __and__ = interp2app(BaseArray.descr_and), + __or__ = interp2app(BaseArray.descr_or), + __radd__ = interp2app(BaseArray.descr_radd), __rsub__ = interp2app(BaseArray.descr_rsub), __rmul__ = interp2app(BaseArray.descr_rmul), __rdiv__ = interp2app(BaseArray.descr_rdiv), + __rmod__ = interp2app(BaseArray.descr_rmod), __rpow__ = interp2app(BaseArray.descr_rpow), - __rmod__ = interp2app(BaseArray.descr_rmod), + + __rand__ = interp2app(BaseArray.descr_rand), __eq__ = interp2app(BaseArray.descr_eq), __ne__ = interp2app(BaseArray.descr_ne), @@ -1258,10 +1266,6 @@ __gt__ = interp2app(BaseArray.descr_gt), __ge__ = interp2app(BaseArray.descr_ge), - __and__ = interp2app(BaseArray.descr_and), - __or__ = interp2app(BaseArray.descr_or), - __invert__ = interp2app(BaseArray.descr_invert), - __repr__ = interp2app(BaseArray.descr_repr), __str__ = interp2app(BaseArray.descr_str), __array_interface__ = GetSetProperty(BaseArray.descr_array_iface), diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -695,6 +695,12 @@ for i in range(5): assert b[i] == i % 2 + def test_rand(self): + from _numpypy import arange + + a = arange(5) + assert (3 & a == [0, 1, 2, 3, 0]).all() + def test_pos(self): from _numpypy import array a = array([1., -2., 3., -4., -5.]) From noreply at buildbot.pypy.org Thu Feb 9 15:13:25 2012 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Thu, 9 Feb 2012 15:13:25 +0100 (CET) Subject: [pypy-commit] pypy default: rdivmod for ndarray Message-ID: <20120209141325.ED0E582B1E@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: Changeset: r52290:9f3109756878 Date: 2012-02-09 09:01 -0500 http://bitbucket.org/pypy/pypy/changeset/9f3109756878/ Log: rdivmod for ndarray diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -138,6 +138,11 @@ descr_rand = _binop_right_impl("bitwise_and") + def descr_rdivmod(self, space, w_other): + w_quotient = self.descr_rdiv(space, w_other) + w_remainder = self.descr_rmod(space, w_other) + return space.newtuple([w_quotient, w_remainder]) + def _reduce_ufunc_impl(ufunc_name, promote_to_largest=False): def impl(self, space, w_axis=None): if space.is_w(w_axis, space.w_None): @@ -1255,6 +1260,7 @@ __rmul__ = interp2app(BaseArray.descr_rmul), __rdiv__ = interp2app(BaseArray.descr_rdiv), __rmod__ = interp2app(BaseArray.descr_rmod), + __rdivmod__ = 
interp2app(BaseArray.descr_rdivmod), __rpow__ = interp2app(BaseArray.descr_rpow), __rand__ = interp2app(BaseArray.descr_rand), diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -632,6 +632,13 @@ assert (a == [0, 0, 0, 1, 1, 1, 2, 2, 2, 3]).all() assert (b == [0, 1, 2, 0, 1, 2, 0, 1, 2, 0]).all() + def test_rdivmod(self): + from _numpypy import arange + + a, b = divmod(3, arange(1, 5)) + assert (a == [3, 1, 1, 0]).all() + assert (b == [0, 1, 0, 3]).all() + def test_lshift(self): from _numpypy import array From noreply at buildbot.pypy.org Thu Feb 9 15:13:27 2012 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Thu, 9 Feb 2012 15:13:27 +0100 (CET) Subject: [pypy-commit] pypy default: added rlshift to ndarray Message-ID: <20120209141327.2F4DF82B1E@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: Changeset: r52291:bb3bb626027b Date: 2012-02-09 09:03 -0500 http://bitbucket.org/pypy/pypy/changeset/bb3bb626027b/ Log: added rlshift to ndarray diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -135,6 +135,7 @@ descr_rdiv = _binop_right_impl("divide") descr_rmod = _binop_right_impl("mod") descr_rpow = _binop_right_impl("power") + descr_rlshift = _binop_right_impl("left_shift") descr_rand = _binop_right_impl("bitwise_and") @@ -1262,6 +1263,7 @@ __rmod__ = interp2app(BaseArray.descr_rmod), __rdivmod__ = interp2app(BaseArray.descr_rdivmod), __rpow__ = interp2app(BaseArray.descr_rpow), + __rlshift__ = interp2app(BaseArray.descr_rlshift), __rand__ = interp2app(BaseArray.descr_rand), diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -649,6 +649,12 @@ a = array([1.0]) raises(TypeError, lambda: a << 2) + def test_rlshift(self): + from _numpypy import arange + + a = arange(3) + assert (2 << a == [2, 4, 8]).all() + def test_pow(self): from _numpypy import array a = array(range(5), float) From noreply at buildbot.pypy.org Thu Feb 9 15:13:28 2012 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Thu, 9 Feb 2012 15:13:28 +0100 (CET) Subject: [pypy-commit] pypy default: added ror to ndarray Message-ID: <20120209141328.6517982B1E@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: Changeset: r52292:8f4b87051095 Date: 2012-02-09 09:05 -0500 http://bitbucket.org/pypy/pypy/changeset/8f4b87051095/ Log: added ror to ndarray diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -138,6 +138,7 @@ descr_rlshift = _binop_right_impl("left_shift") descr_rand = _binop_right_impl("bitwise_and") + descr_ror = _binop_right_impl("bitwise_or") def descr_rdivmod(self, space, w_other): w_quotient = self.descr_rdiv(space, w_other) @@ -1266,6 +1267,7 @@ __rlshift__ = interp2app(BaseArray.descr_rlshift), __rand__ = interp2app(BaseArray.descr_rand), + __ror__ = interp2app(BaseArray.descr_ror), __eq__ = interp2app(BaseArray.descr_eq), __ne__ = interp2app(BaseArray.descr_ne), diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- 
a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -714,6 +714,13 @@ a = arange(5) assert (3 & a == [0, 1, 2, 3, 0]).all() + def test_ror(self): + from _numpypy import arange + + a = arange(5) + + assert (3 | a == [3, 3, 3, 3, 7]).all() + def test_pos(self): from _numpypy import array a = array([1., -2., 3., -4., -5.]) From noreply at buildbot.pypy.org Thu Feb 9 15:13:29 2012 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Thu, 9 Feb 2012 15:13:29 +0100 (CET) Subject: [pypy-commit] pypy default: added rshift and rrshift to ndarray Message-ID: <20120209141329.9DF1482B1E@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: Changeset: r52293:ad83678913a0 Date: 2012-02-09 09:12 -0500 http://bitbucket.org/pypy/pypy/changeset/ad83678913a0/ Log: added rshift and rrshift to ndarray diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -104,6 +104,7 @@ descr_mod = _binop_impl("mod") descr_pow = _binop_impl("power") descr_lshift = _binop_impl("left_shift") + descr_rshift = _binop_impl("right_shift") descr_eq = _binop_impl("equal") descr_ne = _binop_impl("not_equal") @@ -136,6 +137,7 @@ descr_rmod = _binop_right_impl("mod") descr_rpow = _binop_right_impl("power") descr_rlshift = _binop_right_impl("left_shift") + descr_rrshift = _binop_right_impl("right_shift") descr_rand = _binop_right_impl("bitwise_and") descr_ror = _binop_right_impl("bitwise_or") @@ -1253,6 +1255,7 @@ __divmod__ = interp2app(BaseArray.descr_divmod), __pow__ = interp2app(BaseArray.descr_pow), __lshift__ = interp2app(BaseArray.descr_lshift), + __rshift__ = interp2app(BaseArray.descr_rshift), __and__ = interp2app(BaseArray.descr_and), __or__ = interp2app(BaseArray.descr_or), @@ -1265,6 +1268,7 @@ __rdivmod__ = interp2app(BaseArray.descr_rdivmod), __rpow__ = interp2app(BaseArray.descr_rpow), __rlshift__ = interp2app(BaseArray.descr_rlshift), + __rrshift__ = interp2app(BaseArray.descr_rrshift), __rand__ = interp2app(BaseArray.descr_rand), __ror__ = interp2app(BaseArray.descr_ror), diff --git a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py --- a/pypy/module/micronumpy/interp_ufuncs.py +++ b/pypy/module/micronumpy/interp_ufuncs.py @@ -392,8 +392,8 @@ ("true_divide", "div", 2, {"promote_to_float": True}), ("mod", "mod", 2, {"promote_bools": True}), ("power", "pow", 2, {"promote_bools": True}), - ("left_shift", "lshift", 2, {"int_only": True}), + ("right_shift", "rshift", 2, {"int_only": True}), ("equal", "eq", 2, {"comparison_func": True}), ("not_equal", "ne", 2, {"comparison_func": True}), diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -655,6 +655,22 @@ a = arange(3) assert (2 << a == [2, 4, 8]).all() + def test_rshift(self): + from _numpypy import arange, array + + a = arange(10) + assert (a >> 2 == [0, 0, 0, 0, 1, 1, 1, 1, 2, 2]).all() + a = array([True, False]) + assert (a >> 1 == [0, 0]).all() + a = arange(3, dtype=float) + raises(TypeError, lambda: a >> 1) + + def test_rrshift(self): + from _numpypy import arange + + a = arange(5) + assert (2 >> a == [2, 1, 0, 0, 0]).all() + def test_pow(self): from _numpypy import array a = array(range(5), float) diff --git a/pypy/module/micronumpy/types.py b/pypy/module/micronumpy/types.py --- 
a/pypy/module/micronumpy/types.py +++ b/pypy/module/micronumpy/types.py @@ -299,6 +299,10 @@ def lshift(self, v1, v2): return v1 << v2 + @simple_binary_op + def rshift(self, v1, v2): + return v1 >> v2 + @simple_unary_op def sign(self, v): if v > 0: From noreply at buildbot.pypy.org Thu Feb 9 15:13:30 2012 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Thu, 9 Feb 2012 15:13:30 +0100 (CET) Subject: [pypy-commit] pypy default: merged upstream Message-ID: <20120209141330.DB61C82B1E@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: Changeset: r52294:3478844da220 Date: 2012-02-09 09:12 -0500 http://bitbucket.org/pypy/pypy/changeset/3478844da220/ Log: merged upstream diff --git a/lib_pypy/numpypy/core/numeric.py b/lib_pypy/numpypy/core/numeric.py --- a/lib_pypy/numpypy/core/numeric.py +++ b/lib_pypy/numpypy/core/numeric.py @@ -1,6 +1,7 @@ from _numpypy import array, ndarray, int_, float_, bool_ #, complex_# , longlong from _numpypy import concatenate +import math import sys import _numpypy as multiarray # ARGH from numpypy.core.arrayprint import array2string @@ -311,6 +312,11 @@ little_endian = (sys.byteorder == 'little') Inf = inf = infty = Infinity = PINF = float('inf') +NINF = float('-inf') +PZERO = 0.0 +NZERO = -0.0 nan = NaN = NAN = float('nan') False_ = bool_(False) True_ = bool_(True) +e = math.e +pi = math.pi \ No newline at end of file diff --git a/pypy/doc/jit-hooks.rst b/pypy/doc/jit-hooks.rst new file mode 100644 --- /dev/null +++ b/pypy/doc/jit-hooks.rst @@ -0,0 +1,66 @@ +JIT hooks in PyPy +================= + +There are several hooks in the `pypyjit` module that may help you with +understanding what's pypy's JIT doing while running your program. There +are three functions related to that coming from the `pypyjit` module: + +* `set_optimize_hook`:: + + Set a compiling hook that will be called each time a loop is optimized, + but before assembler compilation. This allows to add additional + optimizations on Python level. + + The hook will be called with the following signature: + hook(jitdriver_name, loop_type, greenkey or guard_number, operations) + + jitdriver_name is the name of this particular jitdriver, 'pypyjit' is + the main interpreter loop + + loop_type can be either `loop` `entry_bridge` or `bridge` + in case loop is not `bridge`, greenkey will be a tuple of constants + or a string describing it. + + for the interpreter loop` it'll be a tuple + (code, offset, is_being_profiled) + + Note that jit hook is not reentrant. It means that if the code + inside the jit hook is itself jitted, it will get compiled, but the + jit hook won't be called for that. + + Result value will be the resulting list of operations, or None + +* `set_compile_hook`:: + + Set a compiling hook that will be called each time a loop is compiled. + The hook will be called with the following signature: + hook(jitdriver_name, loop_type, greenkey or guard_number, operations, + assembler_addr, assembler_length) + + jitdriver_name is the name of this particular jitdriver, 'pypyjit' is + the main interpreter loop + + loop_type can be either `loop` `entry_bridge` or `bridge` + in case loop is not `bridge`, greenkey will be a tuple of constants + or a string describing it. + + for the interpreter loop` it'll be a tuple + (code, offset, is_being_profiled) + + assembler_addr is an integer describing where assembler starts, + can be accessed via ctypes, assembler_lenght is the lenght of compiled + asm + + Note that jit hook is not reentrant. 
It means that if the code + inside the jit hook is itself jitted, it will get compiled, but the + jit hook won't be called for that. + +* `set_abort_hook`:: + + Set a hook (callable) that will be called each time there is tracing + aborted due to some reason. + + The hook will be called as in: hook(jitdriver_name, greenkey, reason) + + Where reason is the reason for abort, see documentation for set_compile_hook + for descriptions of other arguments. diff --git a/pypy/doc/jit/index.rst b/pypy/doc/jit/index.rst --- a/pypy/doc/jit/index.rst +++ b/pypy/doc/jit/index.rst @@ -21,6 +21,9 @@ - Notes_ about the current work in PyPy +- Hooks_ debugging facilities available to a python programmer + .. _Overview: overview.html .. _Notes: pyjitpl5.html +.. _Hooks: ../jit-hooks.html diff --git a/pypy/doc/release-1.8.0.rst b/pypy/doc/release-1.8.0.rst --- a/pypy/doc/release-1.8.0.rst +++ b/pypy/doc/release-1.8.0.rst @@ -2,11 +2,11 @@ PyPy 1.8 - business as usual ============================ -We're pleased to announce the 1.8 release of PyPy. As became a habit, this -release brings a lot of bugfixes, performance and memory improvements over +We're pleased to announce the 1.8 release of PyPy. As has become a habit, this +release brings a lot of bugfixes, and performance and memory improvements over the 1.7 release. The main highlight of the release is the introduction of list strategies which makes homogenous lists more efficient both in terms -of performance and memory. Otherwise it's "business as usual" in the sense +of performance and memory. This release also upgrades us from Python 2.7.1 compatibility to 2.7.2. Otherwise it's "business as usual" in the sense that performance improved roughly 10% on average since the previous release. You can download the PyPy 1.8 release here: @@ -20,7 +20,8 @@ due to its integrated tracing JIT compiler. This release supports x86 machines running Linux 32/64, Mac OS X 32/64 or -Windows 32. Windows 64 work is ongoing, but not yet natively supported. +Windows 32. Windows 64 work has been stalled, we would welcome a volunteer +to handle that. .. _`pypy 1.8 and cpython 2.7.1`: http://speed.pypy.org @@ -34,7 +35,7 @@ strategies for unicode and string lists. * As usual, numerous performance improvements. There are many examples - of python constructs that now should behave faster; too many to list them. + of python constructs that now should be faster; too many to list them. * Bugfixes and compatibility fixes with CPython. @@ -52,11 +53,40 @@ * a lot of other minor changes + Right now the `numpy` module is available under both `numpy` and `numpypy` + names. However, because it's incomplete, you have to `import numpypy` first + before doing any imports from `numpy`. + +* New JIT hooks that allow you to hook into the JIT process from your python + program. There is a `brief overview`_ of what they offer. + * Since the last release there was a significant breakthrough in PyPy's fundraising. We now have enough funds to work on first stages of `numpypy`_ - and `py3k`_ + and `py3k`_. We would like to thank again to everyone who donated. + It's also probably worth noting, we're considering donations for the STM + project. + +* Standard library upgrade from 2.7.1 to 2.7.2. + +Ongoing work +============ + +As usual, there is quite a bit of ongoing work that either didn't make it to +the release or is not ready yet. 
Highlights include: + +* Non-x86 backends for the JIT: ARMv7 (almost ready) and PPC64 (in progress) + +* Specialized type instances - allocate instances as efficient as C structs, + including type specialization + +* More numpy work + +* Software Transactional Memory, you can read more about `our plans`_ + +.. _`brief overview`: http://doc.pypy.org/en/latest/jit-hooks.html .. _`numpy status page`: http://buildbot.pypy.org/numpy-status/latest.html .. _`numpy status update blog report`: http://morepypy.blogspot.com/2012/01/numpypy-status-update.html .. _`numpypy`: http://pypy.org/numpydonate.html .. _`py3k`: http://pypy.org/py3donate.html +.. _`our plans`: http://morepypy.blogspot.com/2012/01/transactional-memory-ii.html diff --git a/pypy/module/micronumpy/__init__.py b/pypy/module/micronumpy/__init__.py --- a/pypy/module/micronumpy/__init__.py +++ b/pypy/module/micronumpy/__init__.py @@ -31,7 +31,7 @@ 'concatenate': 'interp_numarray.concatenate', 'set_string_function': 'appbridge.set_string_function', - + 'count_reduce_items': 'interp_numarray.count_reduce_items', 'True_': 'types.Bool.True', @@ -111,8 +111,5 @@ 'min': 'app_numpy.min', 'identity': 'app_numpy.identity', 'max': 'app_numpy.max', - 'inf': 'app_numpy.inf', - 'e': 'app_numpy.e', - 'pi': 'app_numpy.pi', 'arange': 'app_numpy.arange', } diff --git a/pypy/module/micronumpy/app_numpy.py b/pypy/module/micronumpy/app_numpy.py --- a/pypy/module/micronumpy/app_numpy.py +++ b/pypy/module/micronumpy/app_numpy.py @@ -3,11 +3,6 @@ import _numpypy -inf = float("inf") -e = math.e -pi = math.pi - - def average(a): # This implements a weighted average, for now we don't implement the # weighting, just the average part! @@ -59,7 +54,7 @@ if not hasattr(a, "max"): a = _numpypy.array(a) return a.max(axis) - + def arange(start, stop=None, step=1, dtype=None): '''arange([start], stop[, step], dtype=None) Generate values in the half-interval [start, stop). 
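# --- editor's note (illustration only, not part of the changesets above) ---
# The hunks above move the floating-point constants out of the interp-level
# _numpypy module and into pure Python (lib_pypy/numpypy/core/numeric.py),
# which also gains NINF, PZERO and NZERO.  A minimal stdlib-only sketch of
# what those names evaluate to, mirroring the numeric.py diff; nothing here
# is new API, it only restates the values defined above.
import math

Inf = inf = infty = Infinity = PINF = float('inf')
NINF = float('-inf')
PZERO, NZERO = 0.0, -0.0
nan = float('nan')
e, pi = math.e, math.pi

assert NINF == -inf and PZERO == NZERO == 0.0
assert nan != nan                        # NaN never compares equal to itself
assert (e, pi) == (math.e, math.pi)
# As the docstring above says, arange(start, stop, step) generates values in
# the half-open interval [start, stop), e.g. arange(1, 5) gives 1, 2, 3, 4.
# ----------------------------------------------------------------------------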
diff --git a/pypy/module/micronumpy/test/test_module.py b/pypy/module/micronumpy/test/test_module.py --- a/pypy/module/micronumpy/test/test_module.py +++ b/pypy/module/micronumpy/test/test_module.py @@ -21,13 +21,3 @@ from _numpypy import array, max assert max(range(10)) == 9 assert max(array(range(10))) == 9 - - def test_constants(self): - import math - from _numpypy import inf, e, pi - assert type(inf) is float - assert inf == float("inf") - assert e == math.e - assert type(e) is float - assert pi == math.pi - assert type(pi) is float diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -579,7 +579,7 @@ def test_div(self): from math import isnan - from _numpypy import array, dtype, inf + from _numpypy import array, dtype a = array(range(1, 6)) b = a / a @@ -600,15 +600,15 @@ a = array([-1.0, 0.0, 1.0]) b = array([0.0, 0.0, 0.0]) c = a / b - assert c[0] == -inf + assert c[0] == float('-inf') assert isnan(c[1]) - assert c[2] == inf + assert c[2] == float('inf') b = array([-0.0, -0.0, -0.0]) c = a / b - assert c[0] == inf + assert c[0] == float('inf') assert isnan(c[1]) - assert c[2] == -inf + assert c[2] == float('-inf') def test_div_other(self): from _numpypy import array diff --git a/pypy/module/micronumpy/test/test_ufuncs.py b/pypy/module/micronumpy/test/test_ufuncs.py --- a/pypy/module/micronumpy/test/test_ufuncs.py +++ b/pypy/module/micronumpy/test/test_ufuncs.py @@ -312,9 +312,9 @@ def test_arcsinh(self): import math - from _numpypy import arcsinh, inf + from _numpypy import arcsinh - for v in [inf, -inf, 1.0, math.e]: + for v in [float('inf'), float('-inf'), 1.0, math.e]: assert math.asinh(v) == arcsinh(v) assert math.isnan(arcsinh(float("nan"))) @@ -367,7 +367,7 @@ b = add.reduce(a, 0, keepdims=True) assert b.shape == (1, 4) assert (add.reduce(a, 0, keepdims=True) == [12, 15, 18, 21]).all() - + def test_bitwise(self): from _numpypy import bitwise_and, bitwise_or, arange, array @@ -416,7 +416,7 @@ assert count_reduce_items(a) == 24 assert count_reduce_items(a, 1) == 3 assert count_reduce_items(a, (1, 2)) == 3 * 4 - + def test_true_divide(self): from _numpypy import arange, array, true_divide assert (true_divide(arange(3), array([2, 2, 2])) == array([0, 0.5, 1])).all() diff --git a/pypy/module/posix/interp_posix.py b/pypy/module/posix/interp_posix.py --- a/pypy/module/posix/interp_posix.py +++ b/pypy/module/posix/interp_posix.py @@ -543,10 +543,16 @@ dirname = FileEncoder(space, w_dirname) result = rposix.listdir(dirname) w_fs_encoding = getfilesystemencoding(space) - result_w = [ - space.call_method(space.wrap(s), "decode", w_fs_encoding) - for s in result - ] + len_result = len(result) + result_w = [None] * len_result + for i in range(len_result): + w_bytes = space.wrap(result[i]) + try: + result_w[i] = space.call_method(w_bytes, + "decode", w_fs_encoding) + except OperationError, e: + # fall back to the original byte string + result_w[i] = w_bytes else: dirname = space.str0_w(w_dirname) result = rposix.listdir(dirname) diff --git a/pypy/module/posix/test/test_posix2.py b/pypy/module/posix/test/test_posix2.py --- a/pypy/module/posix/test/test_posix2.py +++ b/pypy/module/posix/test/test_posix2.py @@ -29,6 +29,7 @@ mod.pdir = pdir unicode_dir = udir.ensure('fi\xc5\x9fier.txt', dir=True) unicode_dir.join('somefile').write('who cares?') + unicode_dir.join('caf\xe9').write('who knows?') mod.unicode_dir = unicode_dir # in applevel 
tests, os.stat uses the CPython os.stat. @@ -308,14 +309,22 @@ 'file2'] def test_listdir_unicode(self): + import sys unicode_dir = self.unicode_dir if unicode_dir is None: skip("encoding not good enough") posix = self.posix result = posix.listdir(unicode_dir) - result.sort() - assert result == [u'somefile'] - assert type(result[0]) is unicode + typed_result = [(type(x), x) for x in result] + assert (unicode, u'somefile') in typed_result + try: + u = "caf\xe9".decode(sys.getfilesystemencoding()) + except UnicodeDecodeError: + # Could not decode, listdir returned the byte string + assert (str, "caf\xe9") in typed_result + else: + assert (unicode, u) in typed_result + def test_access(self): pdir = self.pdir + '/file1' From noreply at buildbot.pypy.org Thu Feb 9 15:38:22 2012 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Thu, 9 Feb 2012 15:38:22 +0100 (CET) Subject: [pypy-commit] pypy default: added a ton of operators to numpy boxes Message-ID: <20120209143822.E3F3482B1E@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: Changeset: r52295:fc7d59dde5f3 Date: 2012-02-09 09:38 -0500 http://bitbucket.org/pypy/pypy/changeset/fc7d59dde5f3/ Log: added a ton of operators to numpy boxes diff --git a/pypy/module/micronumpy/interp_boxes.py b/pypy/module/micronumpy/interp_boxes.py --- a/pypy/module/micronumpy/interp_boxes.py +++ b/pypy/module/micronumpy/interp_boxes.py @@ -83,6 +83,8 @@ descr_truediv = _binop_impl("true_divide") descr_mod = _binop_impl("mod") descr_pow = _binop_impl("power") + descr_lshift = _binop_impl("left_shift") + descr_rshift = _binop_impl("right_shift") descr_and = _binop_impl("bitwise_and") descr_or = _binop_impl("bitwise_or") descr_xor = _binop_impl("bitwise_xor") @@ -97,13 +99,29 @@ descr_radd = _binop_right_impl("add") descr_rsub = _binop_right_impl("subtract") descr_rmul = _binop_right_impl("multiply") + descr_rdiv = _binop_right_impl("divide") + descr_rmod = _binop_right_impl("mod") descr_rpow = _binop_right_impl("power") + descr_rlshift = _binop_right_impl("left_shift") + descr_rrshift = _binop_right_impl("right_shift") + descr_rand = _binop_right_impl("bitwise_and") + descr_ror = _binop_right_impl("bitwise_or") descr_pos = _unaryop_impl("positive") descr_neg = _unaryop_impl("negative") descr_abs = _unaryop_impl("absolute") descr_invert = _unaryop_impl("invert") + def descr_divmod(self, space, w_other): + w_quotient = self.descr_div(space, w_other) + w_remainder = self.descr_mod(space, w_other) + return space.newtuple([w_quotient, w_remainder]) + + def descr_rdivmod(self, space, w_other): + w_quotient = self.descr_rdiv(space, w_other) + w_remainder = self.descr_rmod(space, w_other) + return space.newtuple([w_quotient, w_remainder]) + def item(self, space): return self.get_dtype(space).itemtype.to_builtin_type(space, self) @@ -185,7 +203,10 @@ __div__ = interp2app(W_GenericBox.descr_div), __truediv__ = interp2app(W_GenericBox.descr_truediv), __mod__ = interp2app(W_GenericBox.descr_mod), + __divmod__ = interp2app(W_GenericBox.descr_divmod), __pow__ = interp2app(W_GenericBox.descr_pow), + __lshift__ = interp2app(W_GenericBox.descr_lshift), + __rshift__ = interp2app(W_GenericBox.descr_rshift), __and__ = interp2app(W_GenericBox.descr_and), __or__ = interp2app(W_GenericBox.descr_or), __xor__ = interp2app(W_GenericBox.descr_xor), @@ -193,7 +214,14 @@ __radd__ = interp2app(W_GenericBox.descr_radd), __rsub__ = interp2app(W_GenericBox.descr_rsub), __rmul__ = interp2app(W_GenericBox.descr_rmul), + __rdiv__ = interp2app(W_GenericBox.descr_rdiv), + __rmod__ = 
interp2app(W_GenericBox.descr_rmod), + __rdivmod__ = interp2app(W_GenericBox.descr_rdivmod), __rpow__ = interp2app(W_GenericBox.descr_rpow), + __rlshift__ = interp2app(W_GenericBox.descr_rlshift), + __rrshift__ = interp2app(W_GenericBox.descr_rrshift), + __rand__ = interp2app(W_GenericBox.descr_rand), + __ror__ = interp2app(W_GenericBox.descr_ror), __eq__ = interp2app(W_GenericBox.descr_eq), __ne__ = interp2app(W_GenericBox.descr_ne), diff --git a/pypy/module/micronumpy/test/test_dtypes.py b/pypy/module/micronumpy/test/test_dtypes.py --- a/pypy/module/micronumpy/test/test_dtypes.py +++ b/pypy/module/micronumpy/test/test_dtypes.py @@ -406,15 +406,26 @@ from operator import truediv from _numpypy import float64, int_, True_, False_ + assert 5 / int_(2) == int_(2) assert truediv(int_(3), int_(2)) == float64(1.5) + assert int_(8) % int_(3) == int_(2) + assert 8 % int_(3) == int_(2) + assert divmod(int_(8), int_(3)) == (int_(2), int_(2)) + assert divmod(8, int_(3)) == (int_(2), int_(2)) assert 2 ** int_(3) == int_(8) + assert int_(3) << int_(2) == int_(12) + assert 3 << int_(2) == int_(12) + assert int_(8) >> int_(2) == int_(2) + assert 8 >> int_(2) == int_(2) assert int_(3) & int_(1) == int_(1) - raises(TypeError, lambda: float64(3) & 1) - assert int_(8) % int_(3) == int_(2) + assert 2 & int_(3) == int_(2) assert int_(2) | int_(1) == int_(3) + assert 2 | int_(1) == int_(3) assert int_(3) ^ int_(5) == int_(6) assert True_ ^ False_ is True_ assert +int_(3) == int_(3) assert ~int_(3) == int_(-4) + raises(TypeError, lambda: float64(3) & 1) + From noreply at buildbot.pypy.org Thu Feb 9 15:44:26 2012 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Thu, 9 Feb 2012 15:44:26 +0100 (CET) Subject: [pypy-commit] pypy default: added xor in a few places Message-ID: <20120209144426.9024882B1E@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: Changeset: r52296:0792068a39d9 Date: 2012-02-09 09:44 -0500 http://bitbucket.org/pypy/pypy/changeset/0792068a39d9/ Log: added xor in a few places diff --git a/pypy/module/micronumpy/interp_boxes.py b/pypy/module/micronumpy/interp_boxes.py --- a/pypy/module/micronumpy/interp_boxes.py +++ b/pypy/module/micronumpy/interp_boxes.py @@ -106,6 +106,7 @@ descr_rrshift = _binop_right_impl("right_shift") descr_rand = _binop_right_impl("bitwise_and") descr_ror = _binop_right_impl("bitwise_or") + descr_rxor = _binop_right_impl("bitwise_xor") descr_pos = _unaryop_impl("positive") descr_neg = _unaryop_impl("negative") @@ -222,6 +223,7 @@ __rrshift__ = interp2app(W_GenericBox.descr_rrshift), __rand__ = interp2app(W_GenericBox.descr_rand), __ror__ = interp2app(W_GenericBox.descr_ror), + __rxor__ = interp2app(W_GenericBox.descr_rxor), __eq__ = interp2app(W_GenericBox.descr_eq), __ne__ = interp2app(W_GenericBox.descr_ne), diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -105,6 +105,9 @@ descr_pow = _binop_impl("power") descr_lshift = _binop_impl("left_shift") descr_rshift = _binop_impl("right_shift") + descr_and = _binop_impl("bitwise_and") + descr_or = _binop_impl("bitwise_or") + descr_xor = _binop_impl("bitwise_xor") descr_eq = _binop_impl("equal") descr_ne = _binop_impl("not_equal") @@ -113,9 +116,6 @@ descr_gt = _binop_impl("greater") descr_ge = _binop_impl("greater_equal") - descr_and = _binop_impl("bitwise_and") - descr_or = _binop_impl("bitwise_or") - def descr_divmod(self, space, w_other): w_quotient = 
self.descr_div(space, w_other) w_remainder = self.descr_mod(space, w_other) @@ -138,9 +138,9 @@ descr_rpow = _binop_right_impl("power") descr_rlshift = _binop_right_impl("left_shift") descr_rrshift = _binop_right_impl("right_shift") - descr_rand = _binop_right_impl("bitwise_and") descr_ror = _binop_right_impl("bitwise_or") + descr_rxor = _binop_right_impl("bitwise_xor") def descr_rdivmod(self, space, w_other): w_quotient = self.descr_rdiv(space, w_other) @@ -1256,9 +1256,9 @@ __pow__ = interp2app(BaseArray.descr_pow), __lshift__ = interp2app(BaseArray.descr_lshift), __rshift__ = interp2app(BaseArray.descr_rshift), - __and__ = interp2app(BaseArray.descr_and), __or__ = interp2app(BaseArray.descr_or), + __xor__ = interp2app(BaseArray.descr_xor), __radd__ = interp2app(BaseArray.descr_radd), __rsub__ = interp2app(BaseArray.descr_rsub), @@ -1269,9 +1269,9 @@ __rpow__ = interp2app(BaseArray.descr_rpow), __rlshift__ = interp2app(BaseArray.descr_rlshift), __rrshift__ = interp2app(BaseArray.descr_rrshift), - __rand__ = interp2app(BaseArray.descr_rand), __ror__ = interp2app(BaseArray.descr_ror), + __rxor__ = interp2app(BaseArray.descr_rxor), __eq__ = interp2app(BaseArray.descr_eq), __ne__ = interp2app(BaseArray.descr_ne), diff --git a/pypy/module/micronumpy/test/test_dtypes.py b/pypy/module/micronumpy/test/test_dtypes.py --- a/pypy/module/micronumpy/test/test_dtypes.py +++ b/pypy/module/micronumpy/test/test_dtypes.py @@ -423,6 +423,7 @@ assert 2 | int_(1) == int_(3) assert int_(3) ^ int_(5) == int_(6) assert True_ ^ False_ is True_ + assert 5 ^ int_(3) == int_(6) assert +int_(3) == int_(3) assert ~int_(3) == int_(-4) diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -734,8 +734,19 @@ from _numpypy import arange a = arange(5) + assert (3 | a == [3, 3, 3, 3, 7]).all() - assert (3 | a == [3, 3, 3, 3, 7]).all() + def test_xor(self): + from _numpypy import arange + + a = arange(5) + assert (a ^ 3 == [3, 2, 1, 0, 7]).all() + + def test_rxor(self): + from _numpypy import arange + + a = arange(5) + assert (3 ^ a == [3, 2, 1, 0, 7]).all() def test_pos(self): from _numpypy import array From noreply at buildbot.pypy.org Thu Feb 9 16:12:08 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Thu, 9 Feb 2012 16:12:08 +0100 (CET) Subject: [pypy-commit] pypy py3k: save the source of applevel direct tests in a temporary file: this way, we get nicer tracebacks Message-ID: <20120209151208.A21BE82B1E@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52297:5c66b92f0617 Date: 2012-02-09 15:35 +0100 http://bitbucket.org/pypy/pypy/changeset/5c66b92f0617/ Log: save the source of applevel direct tests in a temporary file: this way, we get nicer tracebacks diff --git a/pypy/conftest.py b/pypy/conftest.py --- a/pypy/conftest.py +++ b/pypy/conftest.py @@ -213,8 +213,10 @@ raise AssertionError("DID NOT RAISE") """ source = py.code.Source(target)[1:].deindent() + pyfile = udir.join('src.py') + pyfile.write(helpers + str(source)) res, stdout, stderr = runsubprocess.run_subprocess( - python, ["-c", helpers + str(source)]) + python, [str(pyfile)]) print source print >> sys.stdout, stdout print >> sys.stderr, stderr From noreply at buildbot.pypy.org Thu Feb 9 16:12:09 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Thu, 9 Feb 2012 16:12:09 +0100 (CET) Subject: [pypy-commit] pypy default: review the release announcement. 
Tiny fixes, and move the paragraphs about the numpypy and py3k funded project in the 'ongoing work' section Message-ID: <20120209151209.D4FB782B1E@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: Changeset: r52298:58f2412f38e7 Date: 2012-02-09 16:11 +0100 http://bitbucket.org/pypy/pypy/changeset/58f2412f38e7/ Log: review the release announcement. Tiny fixes, and move the paragraphs about the numpypy and py3k funded project in the 'ongoing work' section diff --git a/pypy/doc/release-1.8.0.rst b/pypy/doc/release-1.8.0.rst --- a/pypy/doc/release-1.8.0.rst +++ b/pypy/doc/release-1.8.0.rst @@ -2,16 +2,19 @@ PyPy 1.8 - business as usual ============================ -We're pleased to announce the 1.8 release of PyPy. As has become a habit, this -release brings a lot of bugfixes, and performance and memory improvements over +We're pleased to announce the 1.8 release of PyPy. As habitual this +release brings a lot of bugfixes, together with performance and memory improvements over the 1.7 release. The main highlight of the release is the introduction of -list strategies which makes homogenous lists more efficient both in terms +`list strategies`_ which makes homogenous lists more efficient both in terms of performance and memory. This release also upgrades us from Python 2.7.1 compatibility to 2.7.2. Otherwise it's "business as usual" in the sense that performance improved roughly 10% on average since the previous release. + You can download the PyPy 1.8 release here: http://pypy.org/download.html +.. _`list strategies`: http://morepypy.blogspot.com/2011/10/more-compact-lists-with-list-strategies.html + What is PyPy? ============= @@ -60,13 +63,6 @@ * New JIT hooks that allow you to hook into the JIT process from your python program. There is a `brief overview`_ of what they offer. -* Since the last release there was a significant breakthrough in PyPy's - fundraising. We now have enough funds to work on first stages of `numpypy`_ - and `py3k`_. We would like to thank again to everyone who donated. - - It's also probably worth noting, we're considering donations for the STM - project. - * Standard library upgrade from 2.7.1 to 2.7.2. Ongoing work @@ -82,7 +78,12 @@ * More numpy work -* Software Transactional Memory, you can read more about `our plans`_ +* Since the last release there was a significant breakthrough in PyPy's + fundraising. We now have enough funds to work on first stages of `numpypy`_ + and `py3k`_. We would like to thank again to everyone who donated. + +* It's also probably worth noting, we're considering donations for the + Software Transactional Memory project. You can read more about `our plans`_ .. _`brief overview`: http://doc.pypy.org/en/latest/jit-hooks.html .. 
_`numpy status page`: http://buildbot.pypy.org/numpy-status/latest.html From noreply at buildbot.pypy.org Thu Feb 9 16:19:22 2012 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 9 Feb 2012 16:19:22 +0100 (CET) Subject: [pypy-commit] pypy stm-gc: In-progress Message-ID: <20120209151922.1AE4E82B1E@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-gc Changeset: r52299:298267a49c01 Date: 2012-02-09 12:04 +0100 http://bitbucket.org/pypy/pypy/changeset/298267a49c01/ Log: In-progress diff --git a/pypy/rpython/memory/gc/stmgc.py b/pypy/rpython/memory/gc/stmgc.py --- a/pypy/rpython/memory/gc/stmgc.py +++ b/pypy/rpython/memory/gc/stmgc.py @@ -1,4 +1,4 @@ -from pypy.rpython.lltypesystem import lltype, llmemory, llarena, rffi +from pypy.rpython.lltypesystem import lltype, llmemory, llarena, llgroup, rffi from pypy.rpython.lltypesystem.lloperation import llop from pypy.rpython.lltypesystem.llmemory import raw_malloc_usage from pypy.rpython.memory.gc.base import GCBase @@ -15,6 +15,7 @@ GCFLAG_GLOBAL = first_gcflag << 0 # keep in sync with et.c GCFLAG_WAS_COPIED = first_gcflag << 1 # keep in sync with et.c +GCFLAG_HAS_HASH = first_gcflag << 2 PRIMITIVE_SIZES = {1: lltype.Char, 2: rffi.SHORT, @@ -41,7 +42,7 @@ HDR = lltype.Struct('header', ('tid', lltype.Signed), ('version', llmemory.Address)) typeid_is_in_field = 'tid' - withhash_flag_is_in_field = 'tid', 'XXX' + withhash_flag_is_in_field = 'tid', GCFLAG_HAS_HASH GCTLS = lltype.Struct('GCTLS', ('nursery_free', llmemory.Address), ('nursery_top', llmemory.Address), @@ -80,12 +81,13 @@ self.declare_reader(size, TYPE) self.declare_write_barrier() + GETSIZE = lltype.Ptr(lltype.FuncType([llmemory.Address], lltype.Signed)) + def setup(self): """Called at run-time to initialize the GC.""" GCBase.setup(self) - GETSIZE = lltype.Ptr(lltype.FuncType([llmemory.Address],lltype.Signed)) self.stm_operations.setup_size_getter( - llhelper(GETSIZE, self._getsize_fn)) + llhelper(self.GETSIZE, self._getsize_fn)) self.main_thread_tls = self.setup_thread(True) self.mutex_lock = ll_thread.allocate_ll_lock() @@ -201,6 +203,11 @@ @always_inline + def get_type_id(self, obj): + tid = self.header(obj).tid + return llop.extract_ushort(llgroup.HALFWORD, tid) + + @always_inline def combine(self, typeid16, flags): return llop.combine_ushort(lltype.Signed, typeid16, flags) @@ -209,6 +216,10 @@ hdr = llmemory.cast_adr_to_ptr(addr, lltype.Ptr(self.HDR)) hdr.tid = self.combine(typeid16, flags) + def init_gc_object_immortal(self, addr, typeid16, flags=0): + flags |= GCFLAG_GLOBAL + self.init_gc_object(addr, typeid16, flags) + # ---------- def declare_reader(self, size, TYPE): @@ -317,6 +328,10 @@ def release(self, lock): ll_thread.c_thread_releaselock(lock) + # ---------- + + def identityhash(self, gcobj): + raise NotImplementedError("XXX") # ------------------------------------------------------------ diff --git a/pypy/rpython/memory/gctransform/framework.py b/pypy/rpython/memory/gctransform/framework.py --- a/pypy/rpython/memory/gctransform/framework.py +++ b/pypy/rpython/memory/gctransform/framework.py @@ -318,7 +318,7 @@ getfn(GCClass.writebarrier_before_copy.im_func, [s_gc] + [annmodel.SomeAddress()] * 2 + [annmodel.SomeInteger()] * 3, annmodel.SomeBool()) - elif GCClass.needs_write_barrier: + elif GCClass.needs_write_barrier and GCClass.needs_write_barrier != 'stm': raise NotImplementedError("GC needs write barrier, but does not provide writebarrier_before_copy functionality") # in some GCs we can inline the common case of diff --git 
a/pypy/translator/stm/test/targetdemo.py b/pypy/translator/stm/test/targetdemo.py --- a/pypy/translator/stm/test/targetdemo.py +++ b/pypy/translator/stm/test/targetdemo.py @@ -57,10 +57,10 @@ glob.done += 1 def run_me(): - debug_print("thread starting...") - arg = Arg() rstm.descriptor_init() try: + debug_print("thread starting...") + arg = Arg() for i in range(glob.LENGTH): arg.anchor = glob.anchor arg.value = i From noreply at buildbot.pypy.org Thu Feb 9 16:19:23 2012 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 9 Feb 2012 16:19:23 +0100 (CET) Subject: [pypy-commit] pypy stm-gc: Skip ExcData.exc_value. Message-ID: <20120209151923.528F482B1E@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-gc Changeset: r52300:ec5f429098a2 Date: 2012-02-09 12:15 +0100 http://bitbucket.org/pypy/pypy/changeset/ec5f429098a2/ Log: Skip ExcData.exc_value. diff --git a/pypy/rpython/memory/gctypelayout.py b/pypy/rpython/memory/gctypelayout.py --- a/pypy/rpython/memory/gctypelayout.py +++ b/pypy/rpython/memory/gctypelayout.py @@ -428,6 +428,13 @@ appendto = self.addresses_of_static_ptrs else: return + elif hasattr(TYPE, "_hints") and TYPE._hints.get('thread_local'): + # The exception data's value object is skipped: it's a thread- + # local data structure. We assume that objects are stored + # only temporarily there, so it is always cleared at the point + # where we collect the roots. + assert TYPE._name == 'ExcData' + return else: appendto = self.addresses_of_static_ptrs_in_nongc for a in gc_pointers_inside(value, adr, mutable_only=True): From noreply at buildbot.pypy.org Thu Feb 9 16:19:24 2012 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 9 Feb 2012 16:19:24 +0100 (CET) Subject: [pypy-commit] pypy stm-gc: In-progress: hack at all files until targetdemo.py at least compiles. Message-ID: <20120209151924.9478B82B1E@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-gc Changeset: r52301:c14db797e8a3 Date: 2012-02-09 16:17 +0100 http://bitbucket.org/pypy/pypy/changeset/c14db797e8a3/ Log: In-progress: hack at all files until targetdemo.py at least compiles. Doesn't run at all so far. 
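# --- editor's note (illustration only, not part of changeset r52301) --------
# Rough sketch of the per-thread pattern that targetdemo.py above follows:
# each worker thread registers an STM descriptor before doing transactional
# work and unregisters it when done.  Only the descriptor_init() side is
# visible in the hunk above; putting descriptor_done() in a finally block is
# an assumption based on the surrounding code, not something shown here.
from pypy.rlib import rstm

def thread_body(do_work):
    rstm.descriptor_init()      # register this thread with the STM runtime
    try:
        do_work()               # transactional work happens here
    finally:
        rstm.descriptor_done()  # unregister, even if do_work() raises
# ----------------------------------------------------------------------------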
diff --git a/pypy/config/translationoption.py b/pypy/config/translationoption.py --- a/pypy/config/translationoption.py +++ b/pypy/config/translationoption.py @@ -75,7 +75,7 @@ "markcompact": [("translation.gctransformer", "framework")], "minimark": [("translation.gctransformer", "framework")], "stmgc": [("translation.gctransformer", "framework"), - ("translation.gcrootfinder", "none")], # XXX + ("translation.gcrootfinder", "stm")], }, cmdline="--gc"), ChoiceOption("gctransformer", "GC transformer that is used - internal", @@ -93,7 +93,7 @@ default=IS_64_BITS, cmdline="--gcremovetypeptr"), ChoiceOption("gcrootfinder", "Strategy for finding GC Roots (framework GCs only)", - ["n/a", "shadowstack", "asmgcc", "none"], + ["n/a", "shadowstack", "asmgcc", "stm"], "shadowstack", cmdline="--gcrootfinder", requires={ diff --git a/pypy/objspace/flow/model.py b/pypy/objspace/flow/model.py --- a/pypy/objspace/flow/model.py +++ b/pypy/objspace/flow/model.py @@ -166,7 +166,7 @@ def show(self): from pypy.translator.tool.graphpage import try_show - try_show(self) + return try_show(self) class Block(object): @@ -241,7 +241,7 @@ def show(self): from pypy.translator.tool.graphpage import try_show - try_show(self) + return try_show(self) class Variable(object): diff --git a/pypy/rlib/rstm.py b/pypy/rlib/rstm.py --- a/pypy/rlib/rstm.py +++ b/pypy/rlib/rstm.py @@ -1,6 +1,7 @@ import thread from pypy.rlib.objectmodel import specialize, we_are_translated, keepalive_until_here from pypy.rpython.lltypesystem import rffi, lltype, rclass +from pypy.rpython.lltypesystem.lloperation import llop from pypy.rpython.annlowlevel import (cast_base_ptr_to_instance, cast_instance_to_base_ptr, llhelper) @@ -45,12 +46,12 @@ def descriptor_init(): if not we_are_translated(): _global_lock.acquire() - _rffi_stm.stm_descriptor_init() + llop.stm_descriptor_init(lltype.Void) if not we_are_translated(): _global_lock.release() def descriptor_done(): if not we_are_translated(): _global_lock.acquire() - _rffi_stm.stm_descriptor_done() + llop.stm_descriptor_done(lltype.Void) if not we_are_translated(): _global_lock.release() def debug_get_state(): diff --git a/pypy/rpython/lltypesystem/lloperation.py b/pypy/rpython/lltypesystem/lloperation.py --- a/pypy/rpython/lltypesystem/lloperation.py +++ b/pypy/rpython/lltypesystem/lloperation.py @@ -396,12 +396,11 @@ # to keep them as operations until the genc stage) 'stm_getfield': LLOp(sideeffects=False, canrun=True), - 'stm_setfield': LLOp(), 'stm_getarrayitem': LLOp(sideeffects=False, canrun=True), - 'stm_setarrayitem': LLOp(), 'stm_getinteriorfield': LLOp(sideeffects=False, canrun=True), - 'stm_setinteriorfield': LLOp(), 'stm_become_inevitable':LLOp(), + 'stm_descriptor_init': LLOp(), + 'stm_descriptor_done': LLOp(), # __________ address operations __________ diff --git a/pypy/rpython/memory/gc/stmgc.py b/pypy/rpython/memory/gc/stmgc.py --- a/pypy/rpython/memory/gc/stmgc.py +++ b/pypy/rpython/memory/gc/stmgc.py @@ -77,8 +77,8 @@ return self.get_size(obj) self._getsize_fn = _get_size # - for size, TYPE in PRIMITIVE_SIZES.items(): - self.declare_reader(size, TYPE) + ##for size, TYPE in PRIMITIVE_SIZES.items(): + ## self.declare_reader(size, TYPE) self.declare_write_barrier() GETSIZE = lltype.Ptr(lltype.FuncType([llmemory.Address], lltype.Signed)) @@ -121,6 +121,9 @@ tls.malloc_flags = 0 return tls + def _setup_secondary_thread(self): + self.setup_thread(False) + @staticmethod def reset_nursery(tls): """Clear and forget all locally allocated objects.""" @@ -222,23 +225,25 @@ # ---------- - def 
declare_reader(self, size, TYPE): - # Reading functions. Defined here to avoid the extra burden of - # passing 'self' explicitly. - assert rffi.sizeof(TYPE) == size - PTYPE = rffi.CArrayPtr(TYPE) - stm_read_int = getattr(self.stm_operations, 'stm_read_int%d' % size) - # - @always_inline - def reader(obj, offset): - if self.header(obj).tid & GCFLAG_GLOBAL == 0: - adr = rffi.cast(PTYPE, obj + offset) - return adr[0] # local obj: read directly - else: - return stm_read_int(obj, offset) # else: call a helper - setattr(self, 'read_int%d' % size, reader) - # - # the following logic was moved to et.c to avoid a double call +## TURNED OFF, maybe temporarily: the following logic is now entirely +## done by C macros and functions. +## +## def declare_reader(self, size, TYPE): +## # Reading functions. Defined here to avoid the extra burden of +## # passing 'self' explicitly. +## assert rffi.sizeof(TYPE) == size +## PTYPE = rffi.CArrayPtr(TYPE) +## stm_read_int = getattr(self.stm_operations, 'stm_read_int%d' % size) +## # +## @always_inline +## def reader(obj, offset): +## if self.header(obj).tid & GCFLAG_GLOBAL == 0: +## adr = rffi.cast(PTYPE, obj + offset) +## return adr[0] # local obj: read directly +## else: +## return stm_read_int(obj, offset) # else: call a helper +## setattr(self, 'read_int%d' % size, reader) +## # ## @dont_inline ## def _read_word_global(obj, offset): ## hdr = self.header(obj) diff --git a/pypy/rpython/memory/gctransform/framework.py b/pypy/rpython/memory/gctransform/framework.py --- a/pypy/rpython/memory/gctransform/framework.py +++ b/pypy/rpython/memory/gctransform/framework.py @@ -138,7 +138,6 @@ def __init__(self, translator): from pypy.rpython.memory.gc.base import choose_gc_from_config from pypy.rpython.memory.gc.base import ARRAY_TYPEID_MAP - from pypy.rpython.memory.gc import inspector super(FrameworkGCTransformer, self).__init__(translator, inline=True) if hasattr(self, 'GC_PARAMS'): @@ -251,7 +250,47 @@ classdef = bk.getuniqueclassdef(GCClass) s_gc = annmodel.SomeInstance(classdef) + + self._declare_functions(GCClass, getfn, s_gc, s_typeid16) + + # thread support + if translator.config.translation.continuation: + root_walker.need_stacklet_support(self, getfn) + if translator.config.translation.thread: + root_walker.need_thread_support(self, getfn) + + self.layoutbuilder.encode_type_shapes_now() + + annhelper.finish() # at this point, annotate all mix-level helpers + annhelper.backend_optimize() + + self.collect_analyzer = CollectAnalyzer(self.translator) + self.collect_analyzer.analyze_all() + + s_gc = self.translator.annotator.bookkeeper.valueoftype(GCClass) + r_gc = self.translator.rtyper.getrepr(s_gc) + self.c_const_gc = rmodel.inputconst(r_gc, self.gcdata.gc) + s_gc_data = self.translator.annotator.bookkeeper.valueoftype( + gctypelayout.GCData) + r_gc_data = self.translator.rtyper.getrepr(s_gc_data) + self.c_const_gcdata = rmodel.inputconst(r_gc_data, self.gcdata) + self.malloc_zero_filled = GCClass.malloc_zero_filled + + HDR = self.HDR = self.gcdata.gc.gcheaderbuilder.HDR + + size_gc_header = self.gcdata.gc.gcheaderbuilder.size_gc_header + vtableinfo = (HDR, size_gc_header, self.gcdata.gc.typeid_is_in_field) + self.c_vtableinfo = rmodel.inputconst(lltype.Void, vtableinfo) + tig = self.layoutbuilder.type_info_group._as_ptr() + self.c_type_info_group = rmodel.inputconst(lltype.typeOf(tig), tig) + sko = llmemory.sizeof(gcdata.TYPE_INFO) + self.c_vtinfo_skip_offset = rmodel.inputconst(lltype.typeOf(sko), sko) + + + def _declare_functions(self, GCClass, getfn, s_gc, 
s_typeid16): s_gcref = annmodel.SomePtr(llmemory.GCREF) + gcdata = self.gcdata + translator = self.translator malloc_fixedsize_clear_meth = GCClass.malloc_fixedsize_clear.im_func self.malloc_fixedsize_clear_ptr = getfn( @@ -412,6 +451,7 @@ else: self.id_ptr = None + from pypy.rpython.memory.gc import inspector self.get_rpy_roots_ptr = getfn(inspector.get_rpy_roots, [s_gc], rgc.s_list_of_gcrefs(), @@ -488,39 +528,6 @@ [s_gc, annmodel.SomeInteger()], annmodel.SomeInteger()) - # thread support - if translator.config.translation.continuation: - root_walker.need_stacklet_support(self, getfn) - if translator.config.translation.thread: - root_walker.need_thread_support(self, getfn) - - self.layoutbuilder.encode_type_shapes_now() - - annhelper.finish() # at this point, annotate all mix-level helpers - annhelper.backend_optimize() - - self.collect_analyzer = CollectAnalyzer(self.translator) - self.collect_analyzer.analyze_all() - - s_gc = self.translator.annotator.bookkeeper.valueoftype(GCClass) - r_gc = self.translator.rtyper.getrepr(s_gc) - self.c_const_gc = rmodel.inputconst(r_gc, self.gcdata.gc) - s_gc_data = self.translator.annotator.bookkeeper.valueoftype( - gctypelayout.GCData) - r_gc_data = self.translator.rtyper.getrepr(s_gc_data) - self.c_const_gcdata = rmodel.inputconst(r_gc_data, self.gcdata) - self.malloc_zero_filled = GCClass.malloc_zero_filled - - HDR = self.HDR = self.gcdata.gc.gcheaderbuilder.HDR - - size_gc_header = self.gcdata.gc.gcheaderbuilder.size_gc_header - vtableinfo = (HDR, size_gc_header, self.gcdata.gc.typeid_is_in_field) - self.c_vtableinfo = rmodel.inputconst(lltype.Void, vtableinfo) - tig = self.layoutbuilder.type_info_group._as_ptr() - self.c_type_info_group = rmodel.inputconst(lltype.typeOf(tig), tig) - sko = llmemory.sizeof(gcdata.TYPE_INFO) - self.c_vtinfo_skip_offset = rmodel.inputconst(lltype.typeOf(sko), sko) - def build_root_walker(self): from pypy.rpython.memory.gctransform import shadowstack return shadowstack.ShadowStackRootWalker(self) diff --git a/pypy/rpython/memory/gctransform/stmframework.py b/pypy/rpython/memory/gctransform/stmframework.py new file mode 100644 --- /dev/null +++ b/pypy/rpython/memory/gctransform/stmframework.py @@ -0,0 +1,38 @@ +from pypy.rpython.memory.gctransform.framework import FrameworkGCTransformer +from pypy.rpython.memory.gctransform.framework import BaseRootWalker +from pypy.annotation import model as annmodel + + +class StmFrameworkGCTransformer(FrameworkGCTransformer): + + def _declare_functions(self, GCClass, getfn, s_gc, *args): + super(StmFrameworkGCTransformer, self)._declare_functions( + GCClass, getfn, s_gc, *args) + self.setup_secondary_thread_ptr = getfn( + GCClass._setup_secondary_thread.im_func, + [s_gc], annmodel.s_None) + self.teardown_thread_ptr = getfn( + GCClass.teardown_thread.im_func, + [s_gc], annmodel.s_None) + + def push_roots(self, hop, keep_current_args=False): + pass + + def pop_roots(self, hop, livevars): + pass + + def build_root_walker(self): + return StmStackRootWalker(self) + + def gct_stm_descriptor_init(self, hop): + hop.genop("direct_call", [self.setup_secondary_thread_ptr, + self.c_const_gc]) + + def gct_stm_descriptor_done(self, hop): + hop.genop("direct_call", [self.teardown_thread_ptr, self.c_const_gc]) + + +class StmStackRootWalker(BaseRootWalker): + + def walk_stack_roots(self, collect_stack_root): + raise NotImplementedError diff --git a/pypy/translator/c/gc.py b/pypy/translator/c/gc.py --- a/pypy/translator/c/gc.py +++ b/pypy/translator/c/gc.py @@ -6,7 +6,7 @@ typeOf, Ptr, 
ContainerType, RttiStruct, \ RuntimeTypeInfo, getRuntimeTypeInfo, top_container from pypy.rpython.memory.gctransform import \ - refcounting, boehm, framework, asmgcroot + refcounting, boehm, framework, asmgcroot, stmframework from pypy.rpython.lltypesystem import lltype, llmemory from pypy.translator.tool.cbuild import ExternalCompilationInfo @@ -404,6 +404,9 @@ def OP_GC_STACK_BOTTOM(self, funcgen, op): return 'pypy_asm_stack_bottom();' +class StmFrameworkGcPolicy(FrameworkGcPolicy): + transformerclass = stmframework.StmFrameworkGCTransformer + name_to_gcpolicy = { 'boehm': BoehmGcPolicy, @@ -411,6 +414,5 @@ 'none': NoneGcPolicy, 'framework': FrameworkGcPolicy, 'framework+asmgcroot': AsmGcRootFrameworkGcPolicy, + 'framework+stm': StmFrameworkGcPolicy, } - - diff --git a/pypy/translator/c/genc.py b/pypy/translator/c/genc.py --- a/pypy/translator/c/genc.py +++ b/pypy/translator/c/genc.py @@ -199,8 +199,10 @@ def get_gcpolicyclass(self): if self.gcpolicy is None: name = self.config.translation.gctransformer - if self.config.translation.gcrootfinder == "asmgcc": - name = "%s+asmgcroot" % (name,) + extended_name = "%s+%s" % ( + name, self.config.translation.gcrootfinder) + if extended_name in gc.name_to_gcpolicy: + name = extended_name return gc.name_to_gcpolicy[name] return self.gcpolicy diff --git a/pypy/translator/stm/funcgen.py b/pypy/translator/stm/funcgen.py --- a/pypy/translator/stm/funcgen.py +++ b/pypy/translator/stm/funcgen.py @@ -4,7 +4,7 @@ from pypy.translator.stm.llstm import size_of_voidp -def _stm_generic_get(funcgen, op, expr, simple_struct=False): +def _stm_generic_get(funcgen, op, (expr_type, expr_ptr, expr_field)): T = funcgen.lltypemap(op.result) resulttypename = funcgen.db.gettype(T) cresulttypename = cdecl(resulttypename, '') @@ -12,117 +12,50 @@ # assert T is not lltype.Void # XXX fieldsize = rffi.sizeof(T) - if fieldsize >= size_of_voidp or T == lltype.SingleFloat: - assert 1 # xxx assert somehow that the field is aligned - if T == lltype.Float: - funcname = 'stm_read_double' - elif T == lltype.SingleFloat: - funcname = 'stm_read_float' - elif fieldsize == size_of_voidp: - funcname = 'stm_read_word' - elif fieldsize == 8: # 32-bit only: read a 64-bit field - funcname = 'stm_read_doubleword' - else: - raise NotImplementedError(fieldsize) - return '%s = (%s)%s((long*)&%s);' % ( - newvalue, cresulttypename, funcname, expr) + assert fieldsize in (1, 2, 4, 8) + if T == lltype.Float: + assert fieldsize == 8 + fieldsize = '8f' + elif T == lltype.SingleFloat: + assert fieldsize == 4 + fieldsize = '4f' + if expr_type is not None: # optimization for the common case + return '%s = RPY_STM_FIELD(%s, %s, %s, %s, %s);' % ( + newvalue, cresulttypename, fieldsize, + expr_type, expr_ptr, expr_field) else: - assert fieldsize in (1, 2, 4) - if simple_struct: - # assume that the object is aligned, and any possible misalignment - # comes from the field offset, so that it can be resolved at - # compile-time (by using C macros) - STRUCT = funcgen.lltypemap(op.args[0]).TO - structdef = funcgen.db.gettypedefnode(STRUCT) - basename = funcgen.expr(op.args[0]) - fieldname = op.args[1].value - trailing = '' - if T == lltype.Bool: - trailing = ' & 1' # needed in this case, otherwise casting - # a several-bytes value to bool_t would - # take into account all the several bytes - return '%s = (%s)(stm_fx_read_partial(%s, offsetof(%s, %s))%s);'% ( - newvalue, cresulttypename, basename, - cdecl(funcgen.db.gettype(STRUCT), ''), - structdef.c_struct_field_name(fieldname), - trailing) - # - else: - return 
'%s = (%s)stm_read_partial_%d(&%s);' % ( - newvalue, cresulttypename, fieldsize, expr) - -def _stm_generic_set(funcgen, op, targetexpr, T): - basename = funcgen.expr(op.args[0]) - newvalue = funcgen.expr(op.args[-1], special_case_void=False) - # - assert T is not lltype.Void # XXX - fieldsize = rffi.sizeof(T) - if fieldsize >= size_of_voidp or T == lltype.SingleFloat: - assert 1 # xxx assert somehow that the field is aligned - if T == lltype.Float: - funcname = 'stm_write_double' - newtype = 'double' - elif T == lltype.SingleFloat: - funcname = 'stm_write_float' - newtype = 'float' - elif fieldsize == size_of_voidp: - funcname = 'stm_write_word' - newtype = 'long' - elif fieldsize == 8: # 32-bit only: read a 64-bit field - funcname = 'stm_write_doubleword' - newtype = 'long long' - else: - raise NotImplementedError(fieldsize) - return '%s((long*)&%s, (%s)%s);' % ( - funcname, targetexpr, newtype, newvalue) - else: - assert fieldsize in (1, 2, 4) - return ('stm_write_partial_%d(&%s, (unsigned long)%s);' % ( - fieldsize, targetexpr, newvalue)) + return '%s = RPY_STM_ARRAY(%s, %s, %s, %s);' % ( + newvalue, cresulttypename, fieldsize, + expr_ptr, expr_field) def field_expr(funcgen, args): STRUCT = funcgen.lltypemap(args[0]).TO structdef = funcgen.db.gettypedefnode(STRUCT) - baseexpr_is_const = isinstance(args[0], Constant) - return structdef.ptr_access_expr(funcgen.expr(args[0]), - args[1].value, - baseexpr_is_const) + fldname = structdef.c_struct_field_name(args[1].value) + ptr = funcgen.expr(args[0]) + return ('%s %s' % (structdef.typetag, structdef.name), ptr, fldname) def stm_getfield(funcgen, op): - expr = field_expr(funcgen, op.args) - return _stm_generic_get(funcgen, op, expr, simple_struct=True) - -def stm_setfield(funcgen, op): - expr = field_expr(funcgen, op.args) - T = op.args[2].concretetype - return _stm_generic_set(funcgen, op, expr, T) + access_info = field_expr(funcgen, op.args) + return _stm_generic_get(funcgen, op, access_info) def array_expr(funcgen, args): ARRAY = funcgen.lltypemap(args[0]).TO ptr = funcgen.expr(args[0]) index = funcgen.expr(args[1]) arraydef = funcgen.db.gettypedefnode(ARRAY) - return arraydef.itemindex_access_expr(ptr, index) + return (None, ptr, arraydef.itemindex_access_expr(ptr, index)) def stm_getarrayitem(funcgen, op): - expr = array_expr(funcgen, op.args) - return _stm_generic_get(funcgen, op, expr) - -def stm_setarrayitem(funcgen, op): - expr = array_expr(funcgen, op.args) - T = op.args[2].concretetype - return _stm_generic_set(funcgen, op, expr, T) + access_info = array_expr(funcgen, op.args) + return _stm_generic_get(funcgen, op, access_info) def stm_getinteriorfield(funcgen, op): + xxx expr = funcgen.interior_expr(op.args) return _stm_generic_get(funcgen, op, expr) -def stm_setinteriorfield(funcgen, op): - expr = funcgen.interior_expr(op.args[:-1]) - T = op.args[-1].concretetype - return _stm_generic_set(funcgen, op, expr, T) - def stm_become_inevitable(funcgen, op): info = op.args[0].value diff --git a/pypy/translator/stm/src_stm/et.c b/pypy/translator/stm/src_stm/et.c --- a/pypy/translator/stm/src_stm/et.c +++ b/pypy/translator/stm/src_stm/et.c @@ -38,19 +38,13 @@ /************************************************************/ /* This is the same as the object header structure HDR - * declared in stmgc.py, and the same two flags */ + * declared in stmgc.py */ typedef struct { long tid; long version; } orec_t; -enum { - first_gcflag = 1L << (PYPY_LONG_BIT / 2), - GCFLAG_GLOBAL = first_gcflag << 0, - GCFLAG_WAS_COPIED = first_gcflag << 1 -}; - 
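/* --- editor's note (illustration only, not part of this changeset) --------
 * The two GC flags removed from et.c here are not dropped: they reappear in
 * et.h (see the header diff further down), so the C read-barrier macros and
 * the header flags declared in pypy/rpython/memory/gc/stmgc.py agree on the
 * same bit values.  A minimal, self-contained sketch of the bit layout in
 * the object header's tid word, assuming a 64-bit build (PYPY_LONG_BIT 64
 * is an assumption made only for this example).
 */
#include <stdio.h>

#define PYPY_LONG_BIT       64                       /* assumption: 64-bit */
#define first_gcflag        (1L << (PYPY_LONG_BIT / 2))
#define GCFLAG_GLOBAL       (first_gcflag << 0)
#define GCFLAG_WAS_COPIED   (first_gcflag << 1)

int main(void)
{
    long tid = GCFLAG_GLOBAL;        /* e.g. a header marked as global */
    printf("GLOBAL bit 0x%lx, WAS_COPIED bit 0x%lx, is_global=%d\n",
           GCFLAG_GLOBAL, GCFLAG_WAS_COPIED, (tid & GCFLAG_GLOBAL) != 0);
    return 0;
}
/* ------------------------------------------------------------------------- */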
/************************************************************/ #define IS_LOCKED(num) ((num) < 0) @@ -681,7 +675,6 @@ return result; } -#if 0 void stm_try_inevitable(STM_CCHARP1(why)) { /* when a transaction is inevitable, its start_time is equal to @@ -689,7 +682,7 @@ by another thread. We set the lowest bit in global_timestamp to 1. */ struct tx_descriptor *d = thread_descriptor; - if (!d->transaction_active) + if (is_main_thread(d)) return; #ifdef RPY_STM_ASSERT @@ -697,17 +690,16 @@ if (PYPY_HAVE_DEBUG_PRINTS) { fprintf(PYPY_DEBUG_FILE, "%s%s\n", why, - (!d->transaction_active) ? " (inactive)" : - is_inevitable(d) ? " (already inevitable)" : ""); + is_inevitable(d) ? "" : " <===="); } #endif - if (is_inevitable_or_inactive(d)) + if (is_inevitable(d)) { #ifdef RPY_STM_ASSERT PYPY_DEBUG_STOP("stm-inevitable"); #endif - return; /* I am already inevitable, or not in a transaction at all */ + return; /* I am already inevitable */ } while (1) @@ -738,7 +730,6 @@ PYPY_DEBUG_STOP("stm-inevitable"); #endif } -#endif void stm_abort_and_retry(void) { diff --git a/pypy/translator/stm/src_stm/et.h b/pypy/translator/stm/src_stm/et.h --- a/pypy/translator/stm/src_stm/et.h +++ b/pypy/translator/stm/src_stm/et.h @@ -22,11 +22,12 @@ void stm_tldict_add(void *, void *); void stm_tlidct_enum(void(*)(void*, void*)); -long stm_read_word(void *, long); +char stm_read_int1(void *, long); +short stm_read_int2(void *, long); +int stm_read_int4(void *, long); +long long stm_read_int8(void *, long); -#if 0 - #ifdef RPY_STM_ASSERT # define STM_CCHARP1(arg) char* arg # define STM_EXPLAIN1(info) info @@ -37,8 +38,6 @@ void* stm_perform_transaction(void*(*)(void*, long), void*); -long stm_read_word(long* addr); -void stm_write_word(long* addr, long val); void stm_try_inevitable(STM_CCHARP1(why)); void stm_abort_and_retry(void); long stm_debug_get_state(void); /* -1: descriptor_init() was not called @@ -48,33 +47,27 @@ long stm_thread_id(void); /* returns a unique thread id, or 0 if descriptor_init() was not called */ -// XXX little-endian only! 
-/* this macro is used if 'base' is a word-aligned pointer and 'offset' - is a compile-time constant */ -#define stm_fx_read_partial(base, offset) \ - (stm_read_word( \ - (long*)(((char*)(base)) + ((offset) & ~(sizeof(void*)-1)))) \ - >> (8 * ((offset) & (sizeof(void*)-1)))) -unsigned char stm_read_partial_1(void *addr); -unsigned short stm_read_partial_2(void *addr); -void stm_write_partial_1(void *addr, unsigned char nval); -void stm_write_partial_2(void *addr, unsigned short nval); -#if PYPY_LONG_BIT == 64 -unsigned int stm_read_partial_4(void *addr); -void stm_write_partial_4(void *addr, unsigned int nval); -#endif +/************************************************************/ -double stm_read_double(long *addr); -void stm_write_double(long *addr, double val); -float stm_read_float(long *addr); -void stm_write_float(long *addr, float val); -#if PYPY_LONG_BIT == 32 -long long stm_read_doubleword(long *addr); -void stm_write_doubleword(long *addr, long long val); -#endif +/* These are the same two flags as defined in stmgc.py */ -#endif /* 0 */ +enum { + first_gcflag = 1L << (PYPY_LONG_BIT / 2), + GCFLAG_GLOBAL = first_gcflag << 0, + GCFLAG_WAS_COPIED = first_gcflag << 1 +}; + + +#define RPY_STM_ARRAY(T, size, ptr, field) \ + _RPY_STM(T, size, ptr, ((char*)&field)-((char*)ptr), field) + +#define RPY_STM_FIELD(T, size, STRUCT, ptr, field) \ + _RPY_STM(T, size, ptr, offsetof(STRUCT, field), ptr->field) + +#define _RPY_STM(T, size, ptr, offset, field) \ + (*(long*)ptr & GCFLAG_GLOBAL ? field : \ + (T)stm_read_int##size(ptr, offset)) #endif /* _ET_H */ diff --git a/pypy/translator/stm/transform.py b/pypy/translator/stm/transform.py --- a/pypy/translator/stm/transform.py +++ b/pypy/translator/stm/transform.py @@ -141,11 +141,9 @@ elif (STRUCT._immutable_field(op.args[1].value) or 'stm_access_directly' in STRUCT._hints): op1 = op - elif STRUCT._gckind == 'raw': + else: turn_inevitable(newoperations, "setfield-raw") op1 = op - else: - op1 = SpaceOperation('stm_setfield', op.args, op.result) newoperations.append(op1) def stt_getarrayitem(self, newoperations, op): @@ -171,11 +169,9 @@ op1 = op #elif op.args[0] in self.access_directly: # op1 = op - elif ARRAY._gckind == 'raw': + else: turn_inevitable(newoperations, "setarrayitem-raw") op1 = op - else: - op1 = SpaceOperation('stm_setarrayitem', op.args, op.result) newoperations.append(op1) def stt_getinteriorfield(self, newoperations, op): diff --git a/pypy/translator/tool/graphpage.py b/pypy/translator/tool/graphpage.py --- a/pypy/translator/tool/graphpage.py +++ b/pypy/translator/tool/graphpage.py @@ -439,7 +439,7 @@ for y in gc.get_referrers(x): if isinstance(y, FunctionGraph): y.show() - return + return y elif isinstance(y, Link): block = y.prevblock if block not in seen: From noreply at buildbot.pypy.org Thu Feb 9 16:47:36 2012 From: noreply at buildbot.pypy.org (fijal) Date: Thu, 9 Feb 2012 16:47:36 +0100 (CET) Subject: [pypy-commit] pypy numpy-record-dtypes: start writing a test Message-ID: <20120209154736.C32DB82B1E@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-record-dtypes Changeset: r52302:842dac94f7e3 Date: 2012-02-09 15:39 +0200 http://bitbucket.org/pypy/pypy/changeset/842dac94f7e3/ Log: start writing a test diff --git a/pypy/module/micronumpy/test/test_dtypes.py b/pypy/module/micronumpy/test/test_dtypes.py --- a/pypy/module/micronumpy/test/test_dtypes.py +++ b/pypy/module/micronumpy/test/test_dtypes.py @@ -507,3 +507,6 @@ raises(KeyError, 'd["xyz"]') raises(KeyError, 'd.fields["xyz"]') + def 
test_create_from_dict(self): + from _numpypy import dtype + d = dtype({...}) From noreply at buildbot.pypy.org Thu Feb 9 16:47:38 2012 From: noreply at buildbot.pypy.org (fijal) Date: Thu, 9 Feb 2012 16:47:38 +0100 (CET) Subject: [pypy-commit] pypy sse-vectorization: close long forgotten branch Message-ID: <20120209154738.2A1DF82B1E@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: sse-vectorization Changeset: r52303:028a592738b8 Date: 2012-02-09 16:19 +0200 http://bitbucket.org/pypy/pypy/changeset/028a592738b8/ Log: close long forgotten branch From noreply at buildbot.pypy.org Thu Feb 9 16:47:39 2012 From: noreply at buildbot.pypy.org (fijal) Date: Thu, 9 Feb 2012 16:47:39 +0100 (CET) Subject: [pypy-commit] pypy backend-vector-ops: make assert_aligned a call Message-ID: <20120209154739.84DDD82B1E@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: backend-vector-ops Changeset: r52304:e61829326b21 Date: 2012-02-09 16:30 +0200 http://bitbucket.org/pypy/pypy/changeset/e61829326b21/ Log: make assert_aligned a call diff --git a/pypy/jit/codewriter/effectinfo.py b/pypy/jit/codewriter/effectinfo.py --- a/pypy/jit/codewriter/effectinfo.py +++ b/pypy/jit/codewriter/effectinfo.py @@ -79,6 +79,8 @@ OS_LLONG_U_TO_FLOAT = 94 # OS_MATH_SQRT = 100 + # + OS_ASSERT_ALIGNED = 200 # for debugging: _OS_CANRAISE = set([OS_NONE, OS_STR2UNICODE, OS_LIBFFI_CALL]) diff --git a/pypy/jit/metainterp/optimizeopt/fficall.py b/pypy/jit/metainterp/optimizeopt/fficall.py --- a/pypy/jit/metainterp/optimizeopt/fficall.py +++ b/pypy/jit/metainterp/optimizeopt/fficall.py @@ -110,7 +110,7 @@ Optimization.emit_operation(self, op) def optimize_CALL(self, op): - oopspec = self._get_oopspec(op) + oopspec = self.get_oopspec(op) ops = [op] if oopspec == EffectInfo.OS_LIBFFI_PREPARE: ops = self.do_prepare_call(op) @@ -250,10 +250,6 @@ debug_print(self.logops.repr_of_resop(op)) dispatch_opt(self, op) - def _get_oopspec(self, op): - effectinfo = op.getdescr().get_extra_info() - return effectinfo.oopspecindex - def _get_funcval(self, op): return self.getvalue(op.getarg(1)) diff --git a/pypy/jit/metainterp/optimizeopt/optimizer.py b/pypy/jit/metainterp/optimizeopt/optimizer.py --- a/pypy/jit/metainterp/optimizeopt/optimizer.py +++ b/pypy/jit/metainterp/optimizeopt/optimizer.py @@ -330,6 +330,9 @@ def forget_numberings(self, box): self.optimizer.forget_numberings(box) + def get_oopspec(self, op): + effectinfo = op.getdescr().get_extra_info() + return effectinfo.oopspecindex class Optimizer(Optimization): diff --git a/pypy/jit/metainterp/optimizeopt/test/test_vectorize.py b/pypy/jit/metainterp/optimizeopt/test/test_vectorize.py --- a/pypy/jit/metainterp/optimizeopt/test/test_vectorize.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_vectorize.py @@ -1,15 +1,35 @@ from pypy.jit.metainterp.optimizeopt.test.test_optimizebasic import BaseTestBasic, LLtypeMixin +from pypy.rpython.lltypesystem import lltype +from pypy.jit.codewriter.effectinfo import EffectInfo class TestVectorize(BaseTestBasic, LLtypeMixin): enable_opts = "intbounds:rewrite:virtualize:string:earlyforce:pure:heap:unroll:vectorize" + class namespace: + cpu = LLtypeMixin.cpu + FUNC = LLtypeMixin.FUNC + arraydescr = cpu.arraydescrof(lltype.GcArray(lltype.Signed)) + + def calldescr(cpu, FUNC, oopspecindex, extraeffect=None): + if extraeffect == EffectInfo.EF_RANDOM_EFFECTS: + f = None # means "can force all" really + else: + f = [] + einfo = EffectInfo(f, f, f, f, oopspecindex=oopspecindex, + extraeffect=extraeffect) + return cpu.calldescrof(FUNC, 
FUNC.ARGS, FUNC.RESULT, einfo) + # + assert_aligned = calldescr(cpu, FUNC, EffectInfo.OS_ASSERT_ALIGNED) + + namespace = namespace.__dict__ + def test_basic(self): ops = """ [p0, p1, p2, i0, i1, i2] - assert_aligned(p0, i0) - assert_aligned(p1, i1) - assert_aligned(p1, i2) + call(p0, i0, descr=assert_aligned) + call(p1, i1, descr=assert_aligned) + call(p1, i2, descr=assert_aligned) f0 = getarrayitem_raw(p0, i0, descr=arraydescr) f1 = getarrayitem_raw(p1, i1, descr=arraydescr) f2 = float_add(f0, f1) @@ -39,9 +59,9 @@ def test_basic_sub(self): ops = """ [p0, p1, p2, i0, i1, i2] - assert_aligned(p0, i0) - assert_aligned(p1, i1) - assert_aligned(p1, i2) + call(p0, i0, descr=assert_aligned) + call(p1, i1, descr=assert_aligned) + call(p1, i2, descr=assert_aligned) f0 = getarrayitem_raw(p0, i0, descr=arraydescr) f1 = getarrayitem_raw(p1, i1, descr=arraydescr) f2 = float_sub(f0, f1) @@ -73,9 +93,9 @@ def test_unfit_trees(self): ops = """ [p0, p1, p2, i0, i1, i2] - assert_aligned(p0, i0) - assert_aligned(p1, i1) - assert_aligned(p1, i2) + call(p0, i0, descr=assert_aligned) + call(p1, i1, descr=assert_aligned) + call(p1, i2, descr=assert_aligned) f0 = getarrayitem_raw(p0, i0, descr=arraydescr) f1 = getarrayitem_raw(p1, i1, descr=arraydescr) f2 = float_add(f0, f1) @@ -111,9 +131,9 @@ def test_unfit_trees_2(self): ops = """ [p0, p1, p2, i0, i1, i2] - assert_aligned(p0, i0) - assert_aligned(p1, i1) - assert_aligned(p1, i2) + call(p0, i0, descr=assert_aligned) + call(p1, i1, descr=assert_aligned) + call(p1, i2, descr=assert_aligned) f0 = getarrayitem_raw(p0, i0, descr=arraydescr) f1 = getarrayitem_raw(p1, i1, descr=arraydescr) f2 = float_add(f0, f1) @@ -143,9 +163,9 @@ def test_unfit_trees_3(self): ops = """ [p0, p1, p2, i0, i1, i2] - assert_aligned(p0, i0) - assert_aligned(p1, i1) - assert_aligned(p1, i2) + call(p0, i0, descr=assert_aligned) + call(p1, i1, descr=assert_aligned) + call(p1, i2, descr=assert_aligned) f0 = getarrayitem_raw(p0, i0, descr=arraydescr) f1 = getarrayitem_raw(p1, i1, descr=arraydescr) f2 = float_add(f0, f1) @@ -179,9 +199,9 @@ def test_guard_forces(self): ops = """ [p0, p1, p2, i0, i1, i2] - assert_aligned(p0, i0) - assert_aligned(p1, i1) - assert_aligned(p1, i2) + call(p0, i0, descr=assert_aligned) + call(p1, i1, descr=assert_aligned) + call(p1, i2, descr=assert_aligned) f0 = getarrayitem_raw(p0, i0, descr=arraydescr) f1 = getarrayitem_raw(p1, i1, descr=arraydescr) f2 = float_add(f0, f1) @@ -213,9 +233,9 @@ def test_guard_prevents(self): ops = """ [p0, p1, p2, i0, i1, i2] - assert_aligned(p0, i0) - assert_aligned(p1, i1) - assert_aligned(p1, i2) + call(p0, i0, descr=assert_aligned) + call(p1, i1, descr=assert_aligned) + call(p1, i2, descr=assert_aligned) f0 = getarrayitem_raw(p0, i0, descr=arraydescr) f1 = getarrayitem_raw(p1, i1, descr=arraydescr) f2 = float_add(f0, f1) diff --git a/pypy/jit/metainterp/optimizeopt/vectorize.py b/pypy/jit/metainterp/optimizeopt/vectorize.py --- a/pypy/jit/metainterp/optimizeopt/vectorize.py +++ b/pypy/jit/metainterp/optimizeopt/vectorize.py @@ -3,6 +3,7 @@ from pypy.jit.metainterp.optimizeopt.util import make_dispatcher_method from pypy.jit.metainterp.resoperation import rop, ResOperation from pypy.jit.metainterp.history import BoxVector +from pypy.jit.codewriter.effectinfo import EffectInfo VECTOR_SIZE = 2 VEC_MAP = {rop.FLOAT_ADD: rop.FLOAT_VECTOR_ADD, @@ -97,9 +98,13 @@ def new(self): return OptVectorize() - def optimize_ASSERT_ALIGNED(self, op): - index = self.getvalue(op.getarg(1)) - self.tracked_indexes[index] = TrackIndex(index, 0) + def 
optimize_CALL(self, op): + oopspec = self.get_oopspec(op) + if oopspec == EffectInfo.OS_ASSERT_ALIGNED: + index = self.getvalue(op.getarg(1)) + self.tracked_indexes[index] = TrackIndex(index, 0) + else: + self.optimize_default(op) def optimize_GETARRAYITEM_RAW(self, op): arr = self.getvalue(op.getarg(0)) From noreply at buildbot.pypy.org Thu Feb 9 16:47:40 2012 From: noreply at buildbot.pypy.org (fijal) Date: Thu, 9 Feb 2012 16:47:40 +0100 (CET) Subject: [pypy-commit] pypy backend-vector-ops: enable assert aligned from rlib.jit Message-ID: <20120209154740.BEC9782B1E@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: backend-vector-ops Changeset: r52305:c6740ea44dcc Date: 2012-02-09 16:38 +0200 http://bitbucket.org/pypy/pypy/changeset/c6740ea44dcc/ Log: enable assert aligned from rlib.jit diff --git a/pypy/jit/codewriter/jtransform.py b/pypy/jit/codewriter/jtransform.py --- a/pypy/jit/codewriter/jtransform.py +++ b/pypy/jit/codewriter/jtransform.py @@ -377,6 +377,8 @@ prepare = self._handle_jit_call elif oopspec_name.startswith('libffi_'): prepare = self._handle_libffi_call + elif oopspec_name.startswith('assert_aligned'): + prepare = self._handle_assert_aligned_call elif oopspec_name.startswith('math.sqrt'): prepare = self._handle_math_sqrt_call else: @@ -1692,6 +1694,10 @@ return self._handle_oopspec_call(op, args, EffectInfo.OS_MATH_SQRT, EffectInfo.EF_ELIDABLE_CANNOT_RAISE) + def _handle_assert_aligned_call(self, op, oopspec_name, args): + return self._handle_oopspec_call(op, args, EffectInfo.OS_ASSERT_ALIGNED, + EffectInfo.EF_CANNOT_RAISE) + def rewrite_op_jit_force_quasi_immutable(self, op): v_inst, c_fieldname = op.args descr1 = self.cpu.fielddescrof(v_inst.concretetype.TO, diff --git a/pypy/rlib/jit.py b/pypy/rlib/jit.py --- a/pypy/rlib/jit.py +++ b/pypy/rlib/jit.py @@ -871,3 +871,7 @@ v_cls = hop.inputarg(classrepr, arg=1) return hop.genop('jit_record_known_class', [v_inst, v_cls], resulttype=lltype.Void) + + at oopspec('assert_aligned(arg)') +def assert_aligned(arg): + pass From noreply at buildbot.pypy.org Thu Feb 9 16:47:42 2012 From: noreply at buildbot.pypy.org (fijal) Date: Thu, 9 Feb 2012 16:47:42 +0100 (CET) Subject: [pypy-commit] pypy backend-vector-ops: merge default Message-ID: <20120209154742.DC75B82B1E@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: backend-vector-ops Changeset: r52306:c32fc826d45e Date: 2012-02-09 16:39 +0200 http://bitbucket.org/pypy/pypy/changeset/c32fc826d45e/ Log: merge default diff --git a/lib_pypy/numpy.py b/lib_pypy/numpy.py new file mode 100644 --- /dev/null +++ b/lib_pypy/numpy.py @@ -0,0 +1,5 @@ +raise ImportError( + "The 'numpy' module of PyPy is in-development and not complete. 
" + "To try it out anyway, you can either import from 'numpypy', " + "or just write 'import numpypy' first in your program and then " + "import from 'numpy' as usual.") diff --git a/lib_pypy/numpypy/__init__.py b/lib_pypy/numpypy/__init__.py --- a/lib_pypy/numpypy/__init__.py +++ b/lib_pypy/numpypy/__init__.py @@ -1,2 +1,5 @@ from _numpypy import * from .core import * + +import sys +sys.modules.setdefault('numpy', sys.modules['numpypy']) diff --git a/lib_pypy/numpypy/core/numeric.py b/lib_pypy/numpypy/core/numeric.py --- a/lib_pypy/numpypy/core/numeric.py +++ b/lib_pypy/numpypy/core/numeric.py @@ -1,6 +1,7 @@ -from _numpypy import array, ndarray, int_, float_ #, complex_# , longlong +from _numpypy import array, ndarray, int_, float_, bool_ #, complex_# , longlong from _numpypy import concatenate +import math import sys import _numpypy as multiarray # ARGH from numpypy.core.arrayprint import array2string @@ -309,3 +310,13 @@ set_string_function(array_repr, 1) little_endian = (sys.byteorder == 'little') + +Inf = inf = infty = Infinity = PINF = float('inf') +NINF = float('-inf') +PZERO = 0.0 +NZERO = -0.0 +nan = NaN = NAN = float('nan') +False_ = bool_(False) +True_ = bool_(True) +e = math.e +pi = math.pi \ No newline at end of file diff --git a/py/_io/terminalwriter.py b/py/_io/terminalwriter.py --- a/py/_io/terminalwriter.py +++ b/py/_io/terminalwriter.py @@ -271,16 +271,24 @@ ('srWindow', SMALL_RECT), ('dwMaximumWindowSize', COORD)] + _GetStdHandle = ctypes.windll.kernel32.GetStdHandle + _GetStdHandle.argtypes = [wintypes.DWORD] + _GetStdHandle.restype = wintypes.HANDLE def GetStdHandle(kind): - return ctypes.windll.kernel32.GetStdHandle(kind) + return _GetStdHandle(kind) - SetConsoleTextAttribute = \ - ctypes.windll.kernel32.SetConsoleTextAttribute - + SetConsoleTextAttribute = ctypes.windll.kernel32.SetConsoleTextAttribute + SetConsoleTextAttribute.argtypes = [wintypes.HANDLE, wintypes.WORD] + SetConsoleTextAttribute.restype = wintypes.BOOL + + _GetConsoleScreenBufferInfo = \ + ctypes.windll.kernel32.GetConsoleScreenBufferInfo + _GetConsoleScreenBufferInfo.argtypes = [wintypes.HANDLE, + ctypes.POINTER(CONSOLE_SCREEN_BUFFER_INFO)] + _GetConsoleScreenBufferInfo.restype = wintypes.BOOL def GetConsoleInfo(handle): info = CONSOLE_SCREEN_BUFFER_INFO() - ctypes.windll.kernel32.GetConsoleScreenBufferInfo(\ - handle, ctypes.byref(info)) + _GetConsoleScreenBufferInfo(handle, ctypes.byref(info)) return info def _getdimensions(): diff --git a/pypy/annotation/annrpython.py b/pypy/annotation/annrpython.py --- a/pypy/annotation/annrpython.py +++ b/pypy/annotation/annrpython.py @@ -93,6 +93,10 @@ # make input arguments and set their type args_s = [self.typeannotation(t) for t in input_arg_types] + # XXX hack + annmodel.TLS.check_str_without_nul = ( + self.translator.config.translation.check_str_without_nul) + flowgraph, inputcells = self.get_call_parameters(function, args_s, policy) if not isinstance(flowgraph, FunctionGraph): assert isinstance(flowgraph, annmodel.SomeObject) diff --git a/pypy/annotation/binaryop.py b/pypy/annotation/binaryop.py --- a/pypy/annotation/binaryop.py +++ b/pypy/annotation/binaryop.py @@ -434,11 +434,13 @@ class __extend__(pairtype(SomeString, SomeString)): def union((str1, str2)): - return SomeString(can_be_None=str1.can_be_None or str2.can_be_None) + can_be_None = str1.can_be_None or str2.can_be_None + no_nul = str1.no_nul and str2.no_nul + return SomeString(can_be_None=can_be_None, no_nul=no_nul) def add((str1, str2)): # propagate const-ness to help getattr(obj, 'prefix' + 
const_name) - result = SomeString() + result = SomeString(no_nul=str1.no_nul and str2.no_nul) if str1.is_immutable_constant() and str2.is_immutable_constant(): result.const = str1.const + str2.const return result @@ -475,7 +477,16 @@ raise NotImplementedError( "string formatting mixing strings and unicode not supported") getbookkeeper().count('strformat', str, s_tuple) - return SomeString() + no_nul = str.no_nul + for s_item in s_tuple.items: + if isinstance(s_item, SomeFloat): + pass # or s_item is a subclass, like SomeInteger + elif isinstance(s_item, SomeString) and s_item.no_nul: + pass + else: + no_nul = False + break + return SomeString(no_nul=no_nul) class __extend__(pairtype(SomeString, SomeObject)): @@ -828,7 +839,7 @@ exec source.compile() in glob _make_none_union('SomeInstance', 'classdef=obj.classdef, can_be_None=True') -_make_none_union('SomeString', 'can_be_None=True') +_make_none_union('SomeString', 'no_nul=obj.no_nul, can_be_None=True') _make_none_union('SomeUnicodeString', 'can_be_None=True') _make_none_union('SomeList', 'obj.listdef') _make_none_union('SomeDict', 'obj.dictdef') diff --git a/pypy/annotation/bookkeeper.py b/pypy/annotation/bookkeeper.py --- a/pypy/annotation/bookkeeper.py +++ b/pypy/annotation/bookkeeper.py @@ -342,10 +342,11 @@ else: raise Exception("seeing a prebuilt long (value %s)" % hex(x)) elif issubclass(tp, str): # py.lib uses annotated str subclasses + no_nul = not '\x00' in x if len(x) == 1: - result = SomeChar() + result = SomeChar(no_nul=no_nul) else: - result = SomeString() + result = SomeString(no_nul=no_nul) elif tp is unicode: if len(x) == 1: result = SomeUnicodeCodePoint() diff --git a/pypy/annotation/listdef.py b/pypy/annotation/listdef.py --- a/pypy/annotation/listdef.py +++ b/pypy/annotation/listdef.py @@ -86,18 +86,19 @@ read_locations = self.read_locations.copy() other_read_locations = other.read_locations.copy() self.read_locations.update(other.read_locations) - self.patch() # which should patch all refs to 'other' s_value = self.s_value s_other_value = other.s_value s_new_value = unionof(s_value, s_other_value) + if s_new_value != s_value: + if self.dont_change_any_more: + raise TooLateForChange if isdegenerated(s_new_value): if self.bookkeeper: self.bookkeeper.ondegenerated(self, s_new_value) elif other.bookkeeper: other.bookkeeper.ondegenerated(other, s_new_value) + self.patch() # which should patch all refs to 'other' if s_new_value != s_value: - if self.dont_change_any_more: - raise TooLateForChange self.s_value = s_new_value # reflow from reading points for position_key in read_locations: @@ -222,4 +223,5 @@ MOST_GENERAL_LISTDEF = ListDef(None, SomeObject()) -s_list_of_strings = SomeList(ListDef(None, SomeString(), resized = True)) +s_list_of_strings = SomeList(ListDef(None, SomeString(no_nul=True), + resized = True)) diff --git a/pypy/annotation/model.py b/pypy/annotation/model.py --- a/pypy/annotation/model.py +++ b/pypy/annotation/model.py @@ -39,7 +39,9 @@ DEBUG = False # set to False to disable recording of debugging information class State(object): - pass + # A global attribute :-( Patch it with 'True' to enable checking of + # the no_nul attribute... + check_str_without_nul = False TLS = State() class SomeObject(object): @@ -225,43 +227,57 @@ def __init__(self): pass -class SomeString(SomeObject): - "Stands for an object which is known to be a string." 
- knowntype = str +class SomeStringOrUnicode(SomeObject): immutable = True - def __init__(self, can_be_None=False): - self.can_be_None = can_be_None + can_be_None=False + no_nul = False # No NUL character in the string. + + def __init__(self, can_be_None=False, no_nul=False): + if can_be_None: + self.can_be_None = True + if no_nul: + self.no_nul = True def can_be_none(self): return self.can_be_None + def __eq__(self, other): + if self.__class__ is not other.__class__: + return False + d1 = self.__dict__ + d2 = other.__dict__ + if not TLS.check_str_without_nul: + d1 = d1.copy(); d1['no_nul'] = 0 # ignored + d2 = d2.copy(); d2['no_nul'] = 0 # ignored + return d1 == d2 + +class SomeString(SomeStringOrUnicode): + "Stands for an object which is known to be a string." + knowntype = str + def nonnoneify(self): - return SomeString(can_be_None=False) + return SomeString(can_be_None=False, no_nul=self.no_nul) -class SomeUnicodeString(SomeObject): +class SomeUnicodeString(SomeStringOrUnicode): "Stands for an object which is known to be an unicode string" knowntype = unicode - immutable = True - def __init__(self, can_be_None=False): - self.can_be_None = can_be_None - - def can_be_none(self): - return self.can_be_None def nonnoneify(self): - return SomeUnicodeString(can_be_None=False) + return SomeUnicodeString(can_be_None=False, no_nul=self.no_nul) class SomeChar(SomeString): "Stands for an object known to be a string of length 1." can_be_None = False - def __init__(self): # no 'can_be_None' argument here - pass + def __init__(self, no_nul=False): # no 'can_be_None' argument here + if no_nul: + self.no_nul = True class SomeUnicodeCodePoint(SomeUnicodeString): "Stands for an object known to be a unicode codepoint." can_be_None = False - def __init__(self): # no 'can_be_None' argument here - pass + def __init__(self, no_nul=False): # no 'can_be_None' argument here + if no_nul: + self.no_nul = True SomeString.basestringclass = SomeString SomeString.basecharclass = SomeChar @@ -502,6 +518,7 @@ s_None = SomePBC([], can_be_None=True) s_Bool = SomeBool() s_ImpossibleValue = SomeImpossibleValue() +s_Str0 = SomeString(no_nul=True) # ____________________________________________________________ # weakrefs @@ -716,8 +733,7 @@ def not_const(s_obj): if s_obj.is_constant(): - new_s_obj = SomeObject() - new_s_obj.__class__ = s_obj.__class__ + new_s_obj = SomeObject.__new__(s_obj.__class__) dic = new_s_obj.__dict__ = s_obj.__dict__.copy() if 'const' in dic: del new_s_obj.const diff --git a/pypy/annotation/test/test_annrpython.py b/pypy/annotation/test/test_annrpython.py --- a/pypy/annotation/test/test_annrpython.py +++ b/pypy/annotation/test/test_annrpython.py @@ -456,6 +456,20 @@ return ''.join(g(n)) s = a.build_types(f, [int]) assert s.knowntype == str + assert s.no_nul + + def test_str_split(self): + a = self.RPythonAnnotator() + def g(n): + if n: + return "test string" + def f(n): + if n: + return g(n).split(' ') + s = a.build_types(f, [int]) + assert isinstance(s, annmodel.SomeList) + s_item = s.listdef.listitem.s_value + assert s_item.no_nul def test_str_splitlines(self): a = self.RPythonAnnotator() @@ -465,6 +479,18 @@ assert isinstance(s, annmodel.SomeList) assert s.listdef.listitem.resized + def test_str_strip(self): + a = self.RPythonAnnotator() + def f(n, a_str): + if n == 0: + return a_str.strip(' ') + elif n == 1: + return a_str.rstrip(' ') + else: + return a_str.lstrip(' ') + s = a.build_types(f, [int, annmodel.SomeString(no_nul=True)]) + assert s.no_nul + def test_str_mul(self): a = 
self.RPythonAnnotator() def f(a_str): @@ -1841,7 +1867,7 @@ return obj.indirect() a = self.RPythonAnnotator() s = a.build_types(f, [bool]) - assert s == annmodel.SomeString(can_be_None=True) + assert annmodel.SomeString(can_be_None=True).contains(s) def test_dont_see_AttributeError_clause(self): class Stuff: @@ -2018,6 +2044,37 @@ s = a.build_types(g, [int]) assert not s.can_be_None + def test_string_noNUL_canbeNone(self): + def f(a): + if a: + return "abc" + else: + return None + a = self.RPythonAnnotator() + s = a.build_types(f, [int]) + assert s.can_be_None + assert s.no_nul + + def test_str_or_None(self): + def f(a): + if a: + return "abc" + else: + return None + def g(a): + x = f(a) + #assert x is not None + if x is None: + return "abcd" + return x + if isinstance(x, str): + return x + return "impossible" + a = self.RPythonAnnotator() + s = a.build_types(f, [int]) + assert s.can_be_None + assert s.no_nul + def test_emulated_pbc_call_simple(self): def f(a,b): return a + b @@ -2071,6 +2128,19 @@ assert isinstance(s, annmodel.SomeIterator) assert s.variant == ('items',) + def test_iteritems_str0(self): + def it(d): + return d.iteritems() + def f(): + d0 = {'1a': '2a', '3': '4'} + for item in it(d0): + return "%s=%s" % item + raise ValueError + a = self.RPythonAnnotator() + s = a.build_types(f, []) + assert isinstance(s, annmodel.SomeString) + assert s.no_nul + def test_non_none_and_none_with_isinstance(self): class A(object): pass diff --git a/pypy/annotation/unaryop.py b/pypy/annotation/unaryop.py --- a/pypy/annotation/unaryop.py +++ b/pypy/annotation/unaryop.py @@ -480,13 +480,13 @@ return SomeInteger(nonneg=True) def method_strip(str, chr): - return str.basestringclass() + return str.basestringclass(no_nul=str.no_nul) def method_lstrip(str, chr): - return str.basestringclass() + return str.basestringclass(no_nul=str.no_nul) def method_rstrip(str, chr): - return str.basestringclass() + return str.basestringclass(no_nul=str.no_nul) def method_join(str, s_list): if s_None.contains(s_list): @@ -497,7 +497,8 @@ if isinstance(str, SomeUnicodeString): return immutablevalue(u"") return immutablevalue("") - return str.basestringclass() + no_nul = str.no_nul and s_item.no_nul + return str.basestringclass(no_nul=no_nul) def iter(str): return SomeIterator(str) @@ -508,18 +509,21 @@ def method_split(str, patt, max=-1): getbookkeeper().count("str_split", str, patt) - return getbookkeeper().newlist(str.basestringclass()) + s_item = str.basestringclass(no_nul=str.no_nul) + return getbookkeeper().newlist(s_item) def method_rsplit(str, patt, max=-1): getbookkeeper().count("str_rsplit", str, patt) - return getbookkeeper().newlist(str.basestringclass()) + s_item = str.basestringclass(no_nul=str.no_nul) + return getbookkeeper().newlist(s_item) def method_replace(str, s1, s2): return str.basestringclass() def getslice(str, s_start, s_stop): check_negative_slice(s_start, s_stop) - return str.basestringclass() + result = str.basestringclass(no_nul=str.no_nul) + return result class __extend__(SomeUnicodeString): def method_encode(uni, s_enc): diff --git a/pypy/config/translationoption.py b/pypy/config/translationoption.py --- a/pypy/config/translationoption.py +++ b/pypy/config/translationoption.py @@ -123,6 +123,9 @@ default="off"), # jit_ffi is automatically turned on by withmod-_ffi (which is enabled by default) BoolOption("jit_ffi", "optimize libffi calls", default=False, cmdline=None), + BoolOption("check_str_without_nul", + "Forbid NUL chars in strings in some external function calls", + default=False, 
cmdline=None), # misc BoolOption("verbose", "Print extra information", default=False), diff --git a/pypy/doc/Makefile b/pypy/doc/Makefile --- a/pypy/doc/Makefile +++ b/pypy/doc/Makefile @@ -81,6 +81,7 @@ "run these through (pdf)latex." man: + python config/generate.py $(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man @echo @echo "Build finished. The manual pages are in $(BUILDDIR)/man" diff --git a/pypy/doc/commandline_ref.rst b/pypy/doc/commandline_ref.rst new file mode 100644 --- /dev/null +++ b/pypy/doc/commandline_ref.rst @@ -0,0 +1,10 @@ +Command line reference +====================== + +Manual pages +------------ + +.. toctree:: + :maxdepth: 1 + + man/pypy.1.rst diff --git a/pypy/doc/conf.py b/pypy/doc/conf.py --- a/pypy/doc/conf.py +++ b/pypy/doc/conf.py @@ -45,9 +45,9 @@ # built documents. # # The short X.Y version. -version = '1.7' +version = '1.8' # The full version, including alpha/beta/rc tags. -release = '1.7' +release = '1.8' # The language for content autogenerated by Sphinx. Refer to documentation # for a list of supported languages. diff --git a/pypy/doc/config/translation.check_str_without_nul.txt b/pypy/doc/config/translation.check_str_without_nul.txt new file mode 100644 --- /dev/null +++ b/pypy/doc/config/translation.check_str_without_nul.txt @@ -0,0 +1,5 @@ +If turned on, the annotator will keep track of which strings can +potentially contain NUL characters, and complain if one such string +is passed to some external functions --- e.g. if it is used as a +filename in os.open(). Defaults to False because it is usually more +pain than benefit, but turned on by targetpypystandalone. diff --git a/pypy/doc/config/translation.log.txt b/pypy/doc/config/translation.log.txt --- a/pypy/doc/config/translation.log.txt +++ b/pypy/doc/config/translation.log.txt @@ -2,4 +2,4 @@ These must be enabled by setting the PYPYLOG environment variable. The exact set of features supported by PYPYLOG is described in -pypy/translation/c/src/debug.h. +pypy/translation/c/src/debug_print.h. diff --git a/pypy/doc/garbage_collection.rst b/pypy/doc/garbage_collection.rst --- a/pypy/doc/garbage_collection.rst +++ b/pypy/doc/garbage_collection.rst @@ -142,10 +142,9 @@ So as a first approximation, when compared to the Hybrid GC, the Minimark GC saves one word of memory per old object. -There are a number of environment variables that can be tweaked to -influence the GC. (Their default value should be ok for most usages.) -You can read more about them at the start of -`pypy/rpython/memory/gc/minimark.py`_. +There are :ref:`a number of environment variables +` that can be tweaked to influence the +GC. (Their default value should be ok for most usages.) In more detail: @@ -211,5 +210,4 @@ are preserved. If the object dies then the pre-reserved location becomes free garbage, to be collected at the next major collection. - .. include:: _ref.txt diff --git a/pypy/doc/gc_info.rst b/pypy/doc/gc_info.rst new file mode 100644 --- /dev/null +++ b/pypy/doc/gc_info.rst @@ -0,0 +1,53 @@ +Garbage collector configuration +=============================== + +.. _minimark-environment-variables: + +Minimark +-------- + +PyPy's default ``minimark`` garbage collector is configurable through +several environment variables: + +``PYPY_GC_NURSERY`` + The nursery size. + Defaults to ``4MB``. + Small values (like 1 or 1KB) are useful for debugging. + +``PYPY_GC_MAJOR_COLLECT`` + Major collection memory factor. 
+ Default is ``1.82``, which means trigger a major collection when the + memory consumed equals 1.82 times the memory really used at the end + of the previous major collection. + +``PYPY_GC_GROWTH`` + Major collection threshold's max growth rate. + Default is ``1.4``. + Useful to collect more often than normally on sudden memory growth, + e.g. when there is a temporary peak in memory usage. + +``PYPY_GC_MAX`` + The max heap size. + If coming near this limit, it will first collect more often, then + raise an RPython MemoryError, and if that is not enough, crash the + program with a fatal error. + Try values like ``1.6GB``. + +``PYPY_GC_MAX_DELTA`` + The major collection threshold will never be set to more than + ``PYPY_GC_MAX_DELTA`` the amount really used after a collection. + Defaults to 1/8th of the total RAM size (which is constrained to be + at most 2/3/4GB on 32-bit systems). + Try values like ``200MB``. + +``PYPY_GC_MIN`` + Don't collect while the memory size is below this limit. + Useful to avoid spending all the time in the GC in very small + programs. + Defaults to 8 times the nursery. + +``PYPY_GC_DEBUG`` + Enable extra checks around collections that are too slow for normal + use. + Values are ``0`` (off), ``1`` (on major collections) or ``2`` (also + on minor collections). diff --git a/pypy/doc/index.rst b/pypy/doc/index.rst --- a/pypy/doc/index.rst +++ b/pypy/doc/index.rst @@ -353,10 +353,12 @@ getting-started-dev.rst windows.rst faq.rst + commandline_ref.rst architecture.rst coding-guide.rst cpython_differences.rst garbage_collection.rst + gc_info.rst interpreter.rst objspace.rst __pypy__-module.rst diff --git a/pypy/doc/jit-hooks.rst b/pypy/doc/jit-hooks.rst new file mode 100644 --- /dev/null +++ b/pypy/doc/jit-hooks.rst @@ -0,0 +1,66 @@ +JIT hooks in PyPy +================= + +There are several hooks in the `pypyjit` module that may help you with +understanding what's pypy's JIT doing while running your program. There +are three functions related to that coming from the `pypyjit` module: + +* `set_optimize_hook`:: + + Set a compiling hook that will be called each time a loop is optimized, + but before assembler compilation. This allows to add additional + optimizations on Python level. + + The hook will be called with the following signature: + hook(jitdriver_name, loop_type, greenkey or guard_number, operations) + + jitdriver_name is the name of this particular jitdriver, 'pypyjit' is + the main interpreter loop + + loop_type can be either `loop` `entry_bridge` or `bridge` + in case loop is not `bridge`, greenkey will be a tuple of constants + or a string describing it. + + for the interpreter loop` it'll be a tuple + (code, offset, is_being_profiled) + + Note that jit hook is not reentrant. It means that if the code + inside the jit hook is itself jitted, it will get compiled, but the + jit hook won't be called for that. + + Result value will be the resulting list of operations, or None + +* `set_compile_hook`:: + + Set a compiling hook that will be called each time a loop is compiled. + The hook will be called with the following signature: + hook(jitdriver_name, loop_type, greenkey or guard_number, operations, + assembler_addr, assembler_length) + + jitdriver_name is the name of this particular jitdriver, 'pypyjit' is + the main interpreter loop + + loop_type can be either `loop` `entry_bridge` or `bridge` + in case loop is not `bridge`, greenkey will be a tuple of constants + or a string describing it. 
+ + for the interpreter loop` it'll be a tuple + (code, offset, is_being_profiled) + + assembler_addr is an integer describing where assembler starts, + can be accessed via ctypes, assembler_lenght is the lenght of compiled + asm + + Note that jit hook is not reentrant. It means that if the code + inside the jit hook is itself jitted, it will get compiled, but the + jit hook won't be called for that. + +* `set_abort_hook`:: + + Set a hook (callable) that will be called each time there is tracing + aborted due to some reason. + + The hook will be called as in: hook(jitdriver_name, greenkey, reason) + + Where reason is the reason for abort, see documentation for set_compile_hook + for descriptions of other arguments. diff --git a/pypy/doc/jit/index.rst b/pypy/doc/jit/index.rst --- a/pypy/doc/jit/index.rst +++ b/pypy/doc/jit/index.rst @@ -21,6 +21,9 @@ - Notes_ about the current work in PyPy +- Hooks_ debugging facilities available to a python programmer + .. _Overview: overview.html .. _Notes: pyjitpl5.html +.. _Hooks: ../jit-hooks.html diff --git a/pypy/doc/man/pypy.1.rst b/pypy/doc/man/pypy.1.rst --- a/pypy/doc/man/pypy.1.rst +++ b/pypy/doc/man/pypy.1.rst @@ -24,6 +24,9 @@ -S Do not ``import site`` on initialization. +-s + Don't add the user site directory to `sys.path`. + -u Unbuffered binary ``stdout`` and ``stderr``. @@ -39,6 +42,9 @@ -E Ignore environment variables (such as ``PYTHONPATH``). +-B + Disable writing bytecode (``.pyc``) files. + --version Print the PyPy version. @@ -84,6 +90,64 @@ Optimizations to enabled or ``all``. Warning, this option is dangerous, and should be avoided. +ENVIRONMENT +=========== + +``PYTHONPATH`` + Add directories to pypy's module search path. + The format is the same as shell's ``PATH``. + +``PYTHONSTARTUP`` + A script referenced by this variable will be executed before the + first prompt is displayed, in interactive mode. + +``PYTHONDONTWRITEBYTECODE`` + If set to a non-empty value, equivalent to the ``-B`` option. + Disable writing ``.pyc`` files. + +``PYTHONINSPECT`` + If set to a non-empty value, equivalent to the ``-i`` option. + Inspect interactively after running the specified script. + +``PYTHONIOENCODING`` + If this is set, it overrides the encoding used for + *stdin*/*stdout*/*stderr*. + The syntax is *encodingname*:*errorhandler* + The *errorhandler* part is optional and has the same meaning as in + `str.encode`. + +``PYTHONNOUSERSITE`` + If set to a non-empty value, equivalent to the ``-s`` option. + Don't add the user site directory to `sys.path`. + +``PYTHONWARNINGS`` + If set, equivalent to the ``-W`` option (warning control). + The value should be a comma-separated list of ``-W`` parameters. + +``PYPYLOG`` + If set to a non-empty value, enable logging, the format is: + + *fname* + logging for profiling: includes all + ``debug_start``/``debug_stop`` but not any nested + ``debug_print``. + *fname* can be ``-`` to log to *stderr*. + + ``:``\ *fname* + Full logging, including ``debug_print``. + + *prefix*\ ``:``\ *fname* + Conditional logging. + Multiple prefixes can be specified, comma-separated. + Only sections whose name match the prefix will be logged. + + ``PYPYLOG``\ =\ ``jit-log-opt,jit-backend:``\ *logfile* will + generate a log suitable for *jitviewer*, a tool for debugging + performance issues under PyPy. + +.. 
include:: ../gc_info.rst + :start-line: 7 + SEE ALSO ======== diff --git a/pypy/doc/release-1.8.0.rst b/pypy/doc/release-1.8.0.rst new file mode 100644 --- /dev/null +++ b/pypy/doc/release-1.8.0.rst @@ -0,0 +1,92 @@ +============================ +PyPy 1.8 - business as usual +============================ + +We're pleased to announce the 1.8 release of PyPy. As has become a habit, this +release brings a lot of bugfixes, and performance and memory improvements over +the 1.7 release. The main highlight of the release is the introduction of +list strategies which makes homogenous lists more efficient both in terms +of performance and memory. This release also upgrades us from Python 2.7.1 compatibility to 2.7.2. Otherwise it's "business as usual" in the sense +that performance improved roughly 10% on average since the previous release. +You can download the PyPy 1.8 release here: + + http://pypy.org/download.html + +What is PyPy? +============= + +PyPy is a very compliant Python interpreter, almost a drop-in replacement for +CPython 2.7. It's fast (`pypy 1.8 and cpython 2.7.1`_ performance comparison) +due to its integrated tracing JIT compiler. + +This release supports x86 machines running Linux 32/64, Mac OS X 32/64 or +Windows 32. Windows 64 work has been stalled, we would welcome a volunteer +to handle that. + +.. _`pypy 1.8 and cpython 2.7.1`: http://speed.pypy.org + + +Highlights +========== + +* List strategies. Now lists that contain only ints or only floats should + be as efficient as storing them in a binary-packed array. It also improves + the JIT performance in places that use such lists. There are also special + strategies for unicode and string lists. + +* As usual, numerous performance improvements. There are many examples + of python constructs that now should be faster; too many to list them. + +* Bugfixes and compatibility fixes with CPython. + +* Windows fixes. + +* NumPy effort progress; for the exact list of things that have been done, + consult the `numpy status page`_. A tentative list of things that has + been done: + + * multi dimensional arrays + + * various sizes of dtypes + + * a lot of ufuncs + + * a lot of other minor changes + + Right now the `numpy` module is available under both `numpy` and `numpypy` + names. However, because it's incomplete, you have to `import numpypy` first + before doing any imports from `numpy`. + +* New JIT hooks that allow you to hook into the JIT process from your python + program. There is a `brief overview`_ of what they offer. + +* Since the last release there was a significant breakthrough in PyPy's + fundraising. We now have enough funds to work on first stages of `numpypy`_ + and `py3k`_. We would like to thank again to everyone who donated. + + It's also probably worth noting, we're considering donations for the STM + project. + +* Standard library upgrade from 2.7.1 to 2.7.2. + +Ongoing work +============ + +As usual, there is quite a bit of ongoing work that either didn't make it to +the release or is not ready yet. Highlights include: + +* Non-x86 backends for the JIT: ARMv7 (almost ready) and PPC64 (in progress) + +* Specialized type instances - allocate instances as efficient as C structs, + including type specialization + +* More numpy work + +* Software Transactional Memory, you can read more about `our plans`_ + +.. _`brief overview`: http://doc.pypy.org/en/latest/jit-hooks.html +.. _`numpy status page`: http://buildbot.pypy.org/numpy-status/latest.html +.. 
_`numpy status update blog report`: http://morepypy.blogspot.com/2012/01/numpypy-status-update.html +.. _`numpypy`: http://pypy.org/numpydonate.html +.. _`py3k`: http://pypy.org/py3donate.html +.. _`our plans`: http://morepypy.blogspot.com/2012/01/transactional-memory-ii.html diff --git a/pypy/interpreter/astcompiler/optimize.py b/pypy/interpreter/astcompiler/optimize.py --- a/pypy/interpreter/astcompiler/optimize.py +++ b/pypy/interpreter/astcompiler/optimize.py @@ -302,8 +302,7 @@ # narrow builds will return a surrogate. In both # the cases skip the optimization in order to # produce compatible pycs. - if (self.space.isinstance_w(w_obj, self.space.w_unicode) - and + if (self.space.isinstance_w(w_obj, self.space.w_unicode) and self.space.isinstance_w(w_const, self.space.w_unicode)): unistr = self.space.unicode_w(w_const) if len(unistr) == 1: @@ -311,7 +310,7 @@ else: ch = 0 if (ch > 0xFFFF or - (MAXUNICODE == 0xFFFF and 0xD800 <= ch <= 0xDFFFF)): + (MAXUNICODE == 0xFFFF and 0xD800 <= ch <= 0xDFFF)): return subs return ast.Const(w_const, subs.lineno, subs.col_offset) diff --git a/pypy/interpreter/astcompiler/test/test_compiler.py b/pypy/interpreter/astcompiler/test/test_compiler.py --- a/pypy/interpreter/astcompiler/test/test_compiler.py +++ b/pypy/interpreter/astcompiler/test/test_compiler.py @@ -838,7 +838,7 @@ # Just checking this doesn't crash out self.count_instructions(source) - def test_const_fold_unicode_subscr(self): + def test_const_fold_unicode_subscr(self, monkeypatch): source = """def f(): return u"abc"[0] """ @@ -853,6 +853,14 @@ assert counts == {ops.LOAD_CONST: 2, ops.BINARY_SUBSCR: 1, ops.RETURN_VALUE: 1} + monkeypatch.setattr(optimize, "MAXUNICODE", 0xFFFF) + source = """def f(): + return u"\uE01F"[0] + """ + counts = self.count_instructions(source) + assert counts == {ops.LOAD_CONST: 1, ops.RETURN_VALUE: 1} + monkeypatch.undo() + # getslice is not yet optimized. # Still, check a case which yields the empty string. source = """def f(): diff --git a/pypy/interpreter/baseobjspace.py b/pypy/interpreter/baseobjspace.py --- a/pypy/interpreter/baseobjspace.py +++ b/pypy/interpreter/baseobjspace.py @@ -1312,6 +1312,15 @@ def str_w(self, w_obj): return w_obj.str_w(self) + def str0_w(self, w_obj): + "Like str_w, but rejects strings with NUL bytes." + from pypy.rlib import rstring + result = w_obj.str_w(self) + if '\x00' in result: + raise OperationError(self.w_TypeError, self.wrap( + 'argument must be a string without NUL characters')) + return rstring.assert_str0(result) + def int_w(self, w_obj): return w_obj.int_w(self) @@ -1331,6 +1340,15 @@ def unicode_w(self, w_obj): return w_obj.unicode_w(self) + def unicode0_w(self, w_obj): + "Like unicode_w, but rejects strings with NUL bytes." + from pypy.rlib import rstring + result = w_obj.unicode_w(self) + if u'\x00' in result: + raise OperationError(self.w_TypeError, self.wrap( + 'argument must be a unicode string without NUL characters')) + return rstring.assert_str0(result) + def realunicode_w(self, w_obj): # Like unicode_w, but only works if w_obj is really of type # 'unicode'. 
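
The str0_w() / unicode0_w() helpers added in the hunk above implement one simple rule: refuse any string that carries an embedded NUL byte before it can reach an external call such as os.open(), where the underlying C string would otherwise be silently truncated at the first NUL. A minimal stand-alone sketch of that check in plain Python (the name check_no_nul is invented here for illustration and is not part of the PyPy sources):

    def check_no_nul(s):
        # Reject strings with an embedded NUL byte; a C-level call such as
        # open() would otherwise silently truncate the name at '\x00'.
        if '\x00' in s:
            raise TypeError('argument must be a string without NUL characters')
        return s

    check_no_nul('/tmp/ok.txt')          # returns '/tmp/ok.txt' unchanged
    # check_no_nul('/tmp/bad\x00name')   # would raise TypeError
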
@@ -1629,6 +1647,9 @@ 'UnicodeEncodeError', 'UnicodeDecodeError', ] + +if sys.platform.startswith("win"): + ObjSpace.ExceptionTable += ['WindowsError'] ## Irregular part of the interface: # diff --git a/pypy/interpreter/gateway.py b/pypy/interpreter/gateway.py --- a/pypy/interpreter/gateway.py +++ b/pypy/interpreter/gateway.py @@ -130,6 +130,9 @@ def visit_str_or_None(self, el, app_sig): self.checked_space_method(el, app_sig) + def visit_str0(self, el, app_sig): + self.checked_space_method(el, app_sig) + def visit_nonnegint(self, el, app_sig): self.checked_space_method(el, app_sig) @@ -249,6 +252,9 @@ def visit_str_or_None(self, typ): self.run_args.append("space.str_or_None_w(%s)" % (self.scopenext(),)) + def visit_str0(self, typ): + self.run_args.append("space.str0_w(%s)" % (self.scopenext(),)) + def visit_nonnegint(self, typ): self.run_args.append("space.gateway_nonnegint_w(%s)" % ( self.scopenext(),)) @@ -383,6 +389,9 @@ def visit_str_or_None(self, typ): self.unwrap.append("space.str_or_None_w(%s)" % (self.nextarg(),)) + def visit_str0(self, typ): + self.unwrap.append("space.str0_w(%s)" % (self.nextarg(),)) + def visit_nonnegint(self, typ): self.unwrap.append("space.gateway_nonnegint_w(%s)" % (self.nextarg(),)) diff --git a/pypy/interpreter/mixedmodule.py b/pypy/interpreter/mixedmodule.py --- a/pypy/interpreter/mixedmodule.py +++ b/pypy/interpreter/mixedmodule.py @@ -50,7 +50,7 @@ space.call_method(self.w_dict, 'update', self.w_initialdict) for w_submodule in self.submodules_w: - name = space.str_w(w_submodule.w_name) + name = space.str0_w(w_submodule.w_name) space.setitem(self.w_dict, space.wrap(name.split(".")[-1]), w_submodule) space.getbuiltinmodule(name) diff --git a/pypy/interpreter/module.py b/pypy/interpreter/module.py --- a/pypy/interpreter/module.py +++ b/pypy/interpreter/module.py @@ -31,7 +31,8 @@ def install(self): """NOT_RPYTHON: installs this module into space.builtin_modules""" w_mod = self.space.wrap(self) - self.space.builtin_modules[self.space.unwrap(self.w_name)] = w_mod + modulename = self.space.str0_w(self.w_name) + self.space.builtin_modules[modulename] = w_mod def setup_after_space_initialization(self): """NOT_RPYTHON: to allow built-in modules to do some more setup diff --git a/pypy/interpreter/test/test_objspace.py b/pypy/interpreter/test/test_objspace.py --- a/pypy/interpreter/test/test_objspace.py +++ b/pypy/interpreter/test/test_objspace.py @@ -178,6 +178,14 @@ res = self.space.interp_w(Function, w(None), can_be_None=True) assert res is None + def test_str0_w(self): + space = self.space + w = space.wrap + assert space.str0_w(w("123")) == "123" + exc = space.raises_w(space.w_TypeError, space.str0_w, w("123\x004")) + assert space.unicode0_w(w(u"123")) == u"123" + exc = space.raises_w(space.w_TypeError, space.unicode0_w, w(u"123\x004")) + def test_getindex_w(self): w_instance1 = self.space.appexec([], """(): class X(object): diff --git a/pypy/jit/backend/llgraph/llimpl.py b/pypy/jit/backend/llgraph/llimpl.py --- a/pypy/jit/backend/llgraph/llimpl.py +++ b/pypy/jit/backend/llgraph/llimpl.py @@ -780,6 +780,9 @@ self.overflow_flag = ovf return z + def op_keepalive(self, _, x): + pass + # ---------- # delegating to the builtins do_xxx() (done automatically for simple cases) diff --git a/pypy/jit/backend/x86/regalloc.py b/pypy/jit/backend/x86/regalloc.py --- a/pypy/jit/backend/x86/regalloc.py +++ b/pypy/jit/backend/x86/regalloc.py @@ -1486,6 +1486,9 @@ if jump_op is not None and jump_op.getdescr() is descr: self._compute_hint_frame_locations_from_descr(descr) + def 
consider_keepalive(self, op): + pass + def not_implemented_op(self, op): not_implemented("not implemented operation: %s" % op.getopname()) diff --git a/pypy/jit/codewriter/flatten.py b/pypy/jit/codewriter/flatten.py --- a/pypy/jit/codewriter/flatten.py +++ b/pypy/jit/codewriter/flatten.py @@ -162,7 +162,9 @@ if len(block.exits) == 1: # A single link, fall-through link = block.exits[0] - assert link.exitcase is None + assert link.exitcase in (None, False, True) + # the cases False or True should not really occur, but can show + # up in the manually hacked graphs for generators... self.make_link(link) # elif block.exitswitch is c_last_exception: diff --git a/pypy/jit/codewriter/policy.py b/pypy/jit/codewriter/policy.py --- a/pypy/jit/codewriter/policy.py +++ b/pypy/jit/codewriter/policy.py @@ -48,7 +48,7 @@ mod = func.__module__ or '?' if mod.startswith('pypy.rpython.module.'): return True - if mod.startswith('pypy.translator.'): # XXX wtf? + if mod == 'pypy.translator.goal.nanos': # more helpers return True return False diff --git a/pypy/jit/metainterp/executor.py b/pypy/jit/metainterp/executor.py --- a/pypy/jit/metainterp/executor.py +++ b/pypy/jit/metainterp/executor.py @@ -254,6 +254,9 @@ assert isinstance(x, r_longlong) # 32-bit return BoxFloat(x) +def do_keepalive(cpu, _, x): + pass + # ____________________________________________________________ ##def do_force_token(cpu): diff --git a/pypy/jit/metainterp/pyjitpl.py b/pypy/jit/metainterp/pyjitpl.py --- a/pypy/jit/metainterp/pyjitpl.py +++ b/pypy/jit/metainterp/pyjitpl.py @@ -974,13 +974,13 @@ any_operation = len(self.metainterp.history.operations) > 0 jitdriver_sd = self.metainterp.staticdata.jitdrivers_sd[jdindex] self.verify_green_args(jitdriver_sd, greenboxes) - self.debug_merge_point(jitdriver_sd, jdindex, self.metainterp.in_recursion, + self.debug_merge_point(jitdriver_sd, jdindex, self.metainterp.portal_call_depth, greenboxes) if self.metainterp.seen_loop_header_for_jdindex < 0: if not any_operation: return - if self.metainterp.in_recursion or not self.metainterp.get_procedure_token(greenboxes, True): + if self.metainterp.portal_call_depth or not self.metainterp.get_procedure_token(greenboxes, True): if not jitdriver_sd.no_loop_header: return # automatically add a loop_header if there is none @@ -992,7 +992,7 @@ self.metainterp.seen_loop_header_for_jdindex = -1 # - if not self.metainterp.in_recursion: + if not self.metainterp.portal_call_depth: assert jitdriver_sd is self.metainterp.jitdriver_sd # Set self.pc to point to jit_merge_point instead of just after: # if reached_loop_header() raises SwitchToBlackhole, then the @@ -1028,11 +1028,11 @@ assembler_call=True) raise ChangeFrame - def debug_merge_point(self, jitdriver_sd, jd_index, in_recursion, greenkey): + def debug_merge_point(self, jitdriver_sd, jd_index, portal_call_depth, greenkey): # debugging: produce a DEBUG_MERGE_POINT operation loc = jitdriver_sd.warmstate.get_location_str(greenkey) debug_print(loc) - args = [ConstInt(jd_index), ConstInt(in_recursion)] + greenkey + args = [ConstInt(jd_index), ConstInt(portal_call_depth)] + greenkey self.metainterp.history.record(rop.DEBUG_MERGE_POINT, args, None) @arguments("box", "label") @@ -1346,12 +1346,16 @@ resbox = self.metainterp.execute_and_record_varargs( rop.CALL_MAY_FORCE, allboxes, descr=descr) self.metainterp.vrefs_after_residual_call() + vablebox = None if assembler_call: - self.metainterp.direct_assembler_call(assembler_call_jd) + vablebox = self.metainterp.direct_assembler_call( + assembler_call_jd) if resbox is not 
None: self.make_result_of_lastop(resbox) self.metainterp.vable_after_residual_call() self.generate_guard(rop.GUARD_NOT_FORCED, None) + if vablebox is not None: + self.metainterp.history.record(rop.KEEPALIVE, [vablebox], None) self.metainterp.handle_possible_exception() return resbox else: @@ -1552,7 +1556,7 @@ # ____________________________________________________________ class MetaInterp(object): - in_recursion = 0 + portal_call_depth = 0 cancel_count = 0 def __init__(self, staticdata, jitdriver_sd): @@ -1587,7 +1591,7 @@ def newframe(self, jitcode, greenkey=None): if jitcode.is_portal: - self.in_recursion += 1 + self.portal_call_depth += 1 if greenkey is not None and self.is_main_jitcode(jitcode): self.portal_trace_positions.append( (greenkey, len(self.history.operations))) @@ -1603,7 +1607,7 @@ frame = self.framestack.pop() jitcode = frame.jitcode if jitcode.is_portal: - self.in_recursion -= 1 + self.portal_call_depth -= 1 if frame.greenkey is not None and self.is_main_jitcode(jitcode): self.portal_trace_positions.append( (None, len(self.history.operations))) @@ -1662,17 +1666,17 @@ raise self.staticdata.ExitFrameWithExceptionRef(self.cpu, excvaluebox.getref_base()) def check_recursion_invariant(self): - in_recursion = -1 + portal_call_depth = -1 for frame in self.framestack: jitcode = frame.jitcode assert jitcode.is_portal == len([ jd for jd in self.staticdata.jitdrivers_sd if jd.mainjitcode is jitcode]) if jitcode.is_portal: - in_recursion += 1 - if in_recursion != self.in_recursion: - print "in_recursion problem!!!" - print in_recursion, self.in_recursion + portal_call_depth += 1 + if portal_call_depth != self.portal_call_depth: + print "portal_call_depth problem!!!" + print portal_call_depth, self.portal_call_depth for frame in self.framestack: jitcode = frame.jitcode if jitcode.is_portal: @@ -2183,11 +2187,11 @@ def initialize_state_from_start(self, original_boxes): # ----- make a new frame ----- - self.in_recursion = -1 # always one portal around + self.portal_call_depth = -1 # always one portal around self.framestack = [] f = self.newframe(self.jitdriver_sd.mainjitcode) f.setup_call(original_boxes) - assert self.in_recursion == 0 + assert self.portal_call_depth == 0 self.virtualref_boxes = [] self.initialize_withgreenfields(original_boxes) self.initialize_virtualizable(original_boxes) @@ -2198,7 +2202,7 @@ # otherwise the jit_virtual_refs are left in a dangling state. rstack._stack_criticalcode_start() try: - self.in_recursion = -1 # always one portal around + self.portal_call_depth = -1 # always one portal around self.history = history.History() inputargs_and_holes = self.rebuild_state_after_failure(resumedescr) self.history.inputargs = [box for box in inputargs_and_holes if box] @@ -2478,6 +2482,15 @@ token = warmrunnerstate.get_assembler_token(greenargs) op = op.copy_and_change(rop.CALL_ASSEMBLER, args=args, descr=token) self.history.operations.append(op) + # + # To fix an obscure issue, make sure the vable stays alive + # longer than the CALL_ASSEMBLER operation. We do it by + # inserting explicitly an extra KEEPALIVE operation. 
+ jd = token.outermost_jitdriver_sd + if jd.index_of_virtualizable >= 0: + return args[jd.index_of_virtualizable] + else: + return None # ____________________________________________________________ diff --git a/pypy/jit/metainterp/resoperation.py b/pypy/jit/metainterp/resoperation.py --- a/pypy/jit/metainterp/resoperation.py +++ b/pypy/jit/metainterp/resoperation.py @@ -508,6 +508,7 @@ 'COPYUNICODECONTENT/5', 'QUASIIMMUT_FIELD/1d', # [objptr], descr=SlowMutateDescr 'RECORD_KNOWN_CLASS/2', # [objptr, clsptr] + 'KEEPALIVE/1', '_CANRAISE_FIRST', # ----- start of can_raise operations ----- '_CALL_FIRST', diff --git a/pypy/jit/metainterp/test/test_ajit.py b/pypy/jit/metainterp/test/test_ajit.py --- a/pypy/jit/metainterp/test/test_ajit.py +++ b/pypy/jit/metainterp/test/test_ajit.py @@ -322,6 +322,17 @@ res = self.interp_operations(f, [42]) assert res == ord(u"?") + def test_char_in_constant_string(self): + def g(string): + return '\x00' in string + def f(): + if g('abcdef'): return -60 + if not g('abc\x00ef'): return -61 + return 42 + res = self.interp_operations(f, []) + assert res == 42 + self.check_operations_history({'finish': 1}) # nothing else + def test_residual_call(self): @dont_look_inside def externfn(x, y): @@ -3695,6 +3706,18 @@ # here it works again self.check_operations_history(guard_class=0, record_known_class=1) + def test_generator(self): + def g(n): + yield n+1 + yield n+2 + yield n+3 + def f(n): + gen = g(n) + return gen.next() * gen.next() * gen.next() + res = self.interp_operations(f, [10]) + assert res == 11 * 12 * 13 + self.check_operations_history(int_add=3, int_mul=2) + class TestLLtype(BaseLLtypeTests, LLJitMixin): def test_tagged(self): diff --git a/pypy/module/_ffi/test/test__ffi.py b/pypy/module/_ffi/test/test__ffi.py --- a/pypy/module/_ffi/test/test__ffi.py +++ b/pypy/module/_ffi/test/test__ffi.py @@ -190,6 +190,7 @@ def test_convert_strings_to_char_p(self): """ + DLLEXPORT long mystrlen(char* s) { long len = 0; @@ -215,6 +216,7 @@ def test_convert_unicode_to_unichar_p(self): """ #include + DLLEXPORT long mystrlen_u(wchar_t* s) { long len = 0; @@ -241,6 +243,7 @@ def test_keepalive_temp_buffer(self): """ + DLLEXPORT char* do_nothing(char* s) { return s; @@ -525,5 +528,7 @@ from _ffi import CDLL, types libfoo = CDLL(self.libfoo_name) raises(AttributeError, "libfoo.getfunc('I_do_not_exist', [], types.void)") + if self.iswin32: + skip("unix specific") libnone = CDLL(None) raises(AttributeError, "libnone.getfunc('I_do_not_exist', [], types.void)") diff --git a/pypy/module/_file/test/test_file.py b/pypy/module/_file/test/test_file.py --- a/pypy/module/_file/test/test_file.py +++ b/pypy/module/_file/test/test_file.py @@ -265,6 +265,13 @@ if option.runappdirect: py.test.skip("works with internals of _file impl on py.py") + import platform + if platform.system() == 'Windows': + # XXX This test crashes until someone implements something like + # XXX verify_fd from + # XXX http://hg.python.org/cpython/file/80ddbd822227/Modules/posixmodule.c#l434 + # XXX and adds it to fopen + assert False state = [0] def read(fd, n=None): diff --git a/pypy/module/bz2/interp_bz2.py b/pypy/module/bz2/interp_bz2.py --- a/pypy/module/bz2/interp_bz2.py +++ b/pypy/module/bz2/interp_bz2.py @@ -328,7 +328,7 @@ if basemode == "a": raise OperationError(space.w_ValueError, space.wrap("cannot append to bz2 file")) - stream = open_path_helper(space.str_w(w_path), os_flags, False) + stream = open_path_helper(space.str0_w(w_path), os_flags, False) if reading: bz2stream = ReadBZ2Filter(space, stream, 
buffering) buffering = 0 # by construction, the ReadBZ2Filter acts like diff --git a/pypy/module/cpyext/include/pythonrun.h b/pypy/module/cpyext/include/pythonrun.h --- a/pypy/module/cpyext/include/pythonrun.h +++ b/pypy/module/cpyext/include/pythonrun.h @@ -13,6 +13,7 @@ #define Py_FrozenFlag 0 #define Py_VerboseFlag 0 +#define Py_DebugFlag 1 typedef struct { int cf_flags; /* bitmask of CO_xxx flags relevant to future */ diff --git a/pypy/module/gc/interp_gc.py b/pypy/module/gc/interp_gc.py --- a/pypy/module/gc/interp_gc.py +++ b/pypy/module/gc/interp_gc.py @@ -49,7 +49,7 @@ # ____________________________________________________________ - at unwrap_spec(filename=str) + at unwrap_spec(filename='str0') def dump_heap_stats(space, filename): tb = rgc._heap_stats() if not tb: diff --git a/pypy/module/imp/importing.py b/pypy/module/imp/importing.py --- a/pypy/module/imp/importing.py +++ b/pypy/module/imp/importing.py @@ -138,7 +138,7 @@ ctxt_package = None if ctxt_w_package is not None and ctxt_w_package is not space.w_None: try: - ctxt_package = space.str_w(ctxt_w_package) + ctxt_package = space.str0_w(ctxt_w_package) except OperationError, e: if not e.match(space, space.w_TypeError): raise @@ -187,7 +187,7 @@ ctxt_name = None if ctxt_w_name is not None: try: - ctxt_name = space.str_w(ctxt_w_name) + ctxt_name = space.str0_w(ctxt_w_name) except OperationError, e: if not e.match(space, space.w_TypeError): raise @@ -230,7 +230,7 @@ return rel_modulename, rel_level - at unwrap_spec(name=str, level=int) + at unwrap_spec(name='str0', level=int) def importhook(space, name, w_globals=None, w_locals=None, w_fromlist=None, level=-1): modulename = name @@ -377,8 +377,8 @@ fromlist_w = space.fixedview(w_all) for w_name in fromlist_w: if try_getattr(space, w_mod, w_name) is None: - load_part(space, w_path, prefix, space.str_w(w_name), w_mod, - tentative=1) + load_part(space, w_path, prefix, space.str0_w(w_name), + w_mod, tentative=1) return w_mod else: return first @@ -432,7 +432,7 @@ def __init__(self, space): pass - @unwrap_spec(path=str) + @unwrap_spec(path='str0') def descr_init(self, space, path): if not path: raise OperationError(space.w_ImportError, space.wrap( @@ -513,7 +513,7 @@ if w_loader: return FindInfo.fromLoader(w_loader) - path = space.str_w(w_pathitem) + path = space.str0_w(w_pathitem) filepart = os.path.join(path, partname) if os.path.isdir(filepart) and case_ok(filepart): initfile = os.path.join(filepart, '__init__') @@ -671,7 +671,7 @@ space.wrap("reload() argument must be module")) w_modulename = space.getattr(w_module, space.wrap("__name__")) - modulename = space.str_w(w_modulename) + modulename = space.str0_w(w_modulename) if not space.is_w(check_sys_modules(space, w_modulename), w_module): raise operationerrfmt( space.w_ImportError, diff --git a/pypy/module/imp/interp_imp.py b/pypy/module/imp/interp_imp.py --- a/pypy/module/imp/interp_imp.py +++ b/pypy/module/imp/interp_imp.py @@ -44,7 +44,7 @@ return space.interp_w(W_File, w_file).stream def find_module(space, w_name, w_path=None): - name = space.str_w(w_name) + name = space.str0_w(w_name) if space.is_w(w_path, space.w_None): w_path = None @@ -75,7 +75,7 @@ def load_module(space, w_name, w_file, w_filename, w_info): w_suffix, w_filemode, w_modtype = space.unpackiterable(w_info) - filename = space.str_w(w_filename) + filename = space.str0_w(w_filename) filemode = space.str_w(w_filemode) if space.is_w(w_file, space.w_None): stream = None @@ -92,7 +92,7 @@ space, w_name, find_info, reuse=True) def load_source(space, w_modulename, 
w_filename, w_file=None): - filename = space.str_w(w_filename) + filename = space.str0_w(w_filename) stream = get_file(space, w_file, filename, 'U') @@ -105,7 +105,7 @@ stream.close() return w_mod - at unwrap_spec(filename=str) + at unwrap_spec(filename='str0') def _run_compiled_module(space, w_modulename, filename, w_file, w_module): # the function 'imp._run_compiled_module' is a pypy-only extension stream = get_file(space, w_file, filename, 'rb') @@ -119,7 +119,7 @@ if space.is_w(w_file, space.w_None): stream.close() - at unwrap_spec(filename=str) + at unwrap_spec(filename='str0') def load_compiled(space, w_modulename, filename, w_file=None): w_mod = space.wrap(Module(space, w_modulename)) importing._prepare_module(space, w_mod, filename, None) @@ -138,7 +138,7 @@ return space.wrap(Module(space, w_name, add_package=False)) def init_builtin(space, w_name): - name = space.str_w(w_name) + name = space.str0_w(w_name) if name not in space.builtin_modules: return if space.finditem(space.sys.get('modules'), w_name) is not None: @@ -151,7 +151,7 @@ return None def is_builtin(space, w_name): - name = space.str_w(w_name) + name = space.str0_w(w_name) if name not in space.builtin_modules: return space.wrap(0) if space.finditem(space.sys.get('modules'), w_name) is not None: diff --git a/pypy/module/micronumpy/__init__.py b/pypy/module/micronumpy/__init__.py --- a/pypy/module/micronumpy/__init__.py +++ b/pypy/module/micronumpy/__init__.py @@ -31,7 +31,7 @@ 'concatenate': 'interp_numarray.concatenate', 'set_string_function': 'appbridge.set_string_function', - + 'count_reduce_items': 'interp_numarray.count_reduce_items', 'True_': 'types.Bool.True', @@ -98,6 +98,10 @@ ('bitwise_not', 'invert'), ('isnan', 'isnan'), ('isinf', 'isinf'), + ('logical_and', 'logical_and'), + ('logical_xor', 'logical_xor'), + ('logical_not', 'logical_not'), + ('logical_or', 'logical_or'), ]: interpleveldefs[exposed] = "interp_ufuncs.get(space).%s" % impl @@ -107,8 +111,5 @@ 'min': 'app_numpy.min', 'identity': 'app_numpy.identity', 'max': 'app_numpy.max', - 'inf': 'app_numpy.inf', - 'e': 'app_numpy.e', - 'pi': 'app_numpy.pi', 'arange': 'app_numpy.arange', } diff --git a/pypy/module/micronumpy/app_numpy.py b/pypy/module/micronumpy/app_numpy.py --- a/pypy/module/micronumpy/app_numpy.py +++ b/pypy/module/micronumpy/app_numpy.py @@ -3,11 +3,6 @@ import _numpypy -inf = float("inf") -e = math.e -pi = math.pi - - def average(a): # This implements a weighted average, for now we don't implement the # weighting, just the average part! @@ -59,7 +54,7 @@ if not hasattr(a, "max"): a = _numpypy.array(a) return a.max(axis) - + def arange(start, stop=None, step=1, dtype=None): '''arange([start], stop[, step], dtype=None) Generate values in the half-interval [start, stop). 
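
Taken together, the micronumpy hunks in this merge expose logical_and / logical_or / logical_xor / logical_not as interpreter-level ufuncs, move the inf / e / pi constants into numpypy.core.numeric, and (in the interp_numarray.py hunk further below) reroute ndarray.all() / any() through logical_and / logical_or reductions while adding itemsize / nbytes accessors. A rough application-level sketch of what that enables on a PyPy build carrying these changes, assuming the default float64 dtype (exact reprs may differ):

    import numpypy as np

    a = np.array([0.0, 1.5, 2.0, 0.0])
    b = np.array([1.0, 0.0, 3.0, 0.0])

    np.logical_and(a, b)   # elementwise boolean AND
    np.logical_or(a, b)    # elementwise boolean OR
    np.logical_not(a)      # True exactly where a is zero
    a.any()                # True,  reduction through logical_or
    a.all()                # False, reduction through logical_and
    a.itemsize             # 8, bytes per float64 element
    a.nbytes               # 32, itemsize times the number of elements

    from numpypy.core import numeric
    numeric.inf, numeric.e, numeric.pi   # constants now live here (plus nan, True_, False_)
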
diff --git a/pypy/module/micronumpy/interp_boxes.py b/pypy/module/micronumpy/interp_boxes.py --- a/pypy/module/micronumpy/interp_boxes.py +++ b/pypy/module/micronumpy/interp_boxes.py @@ -80,7 +80,13 @@ descr_sub = _binop_impl("subtract") descr_mul = _binop_impl("multiply") descr_div = _binop_impl("divide") + descr_truediv = _binop_impl("true_divide") + descr_mod = _binop_impl("mod") descr_pow = _binop_impl("power") + descr_and = _binop_impl("bitwise_and") + descr_or = _binop_impl("bitwise_or") + descr_xor = _binop_impl("bitwise_xor") + descr_eq = _binop_impl("equal") descr_ne = _binop_impl("not_equal") descr_lt = _binop_impl("less") @@ -91,9 +97,12 @@ descr_radd = _binop_right_impl("add") descr_rsub = _binop_right_impl("subtract") descr_rmul = _binop_right_impl("multiply") + descr_rpow = _binop_right_impl("power") + descr_pos = _unaryop_impl("positive") descr_neg = _unaryop_impl("negative") descr_abs = _unaryop_impl("absolute") + descr_invert = _unaryop_impl("invert") def item(self, space): return self.get_dtype(space).itemtype.to_builtin_type(space, self) @@ -174,11 +183,17 @@ __sub__ = interp2app(W_GenericBox.descr_sub), __mul__ = interp2app(W_GenericBox.descr_mul), __div__ = interp2app(W_GenericBox.descr_div), + __truediv__ = interp2app(W_GenericBox.descr_truediv), + __mod__ = interp2app(W_GenericBox.descr_mod), __pow__ = interp2app(W_GenericBox.descr_pow), + __and__ = interp2app(W_GenericBox.descr_and), + __or__ = interp2app(W_GenericBox.descr_or), + __xor__ = interp2app(W_GenericBox.descr_xor), __radd__ = interp2app(W_GenericBox.descr_radd), __rsub__ = interp2app(W_GenericBox.descr_rsub), __rmul__ = interp2app(W_GenericBox.descr_rmul), + __rpow__ = interp2app(W_GenericBox.descr_rpow), __eq__ = interp2app(W_GenericBox.descr_eq), __ne__ = interp2app(W_GenericBox.descr_ne), @@ -187,8 +202,10 @@ __gt__ = interp2app(W_GenericBox.descr_gt), __ge__ = interp2app(W_GenericBox.descr_ge), + __pos__ = interp2app(W_GenericBox.descr_pos), __neg__ = interp2app(W_GenericBox.descr_neg), __abs__ = interp2app(W_GenericBox.descr_abs), + __invert__ = interp2app(W_GenericBox.descr_invert), tolist = interp2app(W_GenericBox.item), ) diff --git a/pypy/module/micronumpy/interp_iter.py b/pypy/module/micronumpy/interp_iter.py --- a/pypy/module/micronumpy/interp_iter.py +++ b/pypy/module/micronumpy/interp_iter.py @@ -86,8 +86,9 @@ def apply_transformations(self, arr, transformations): v = self - for transform in transformations: - v = v.transform(arr, transform) + if transformations is not None: + for transform in transformations: + v = v.transform(arr, transform) return v def transform(self, arr, t): diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -3,7 +3,7 @@ from pypy.interpreter.gateway import interp2app, unwrap_spec from pypy.interpreter.typedef import TypeDef, GetSetProperty from pypy.module.micronumpy import (interp_ufuncs, interp_dtype, interp_boxes, - signature, support) + signature, support, loop) from pypy.module.micronumpy.strides import (calculate_slice_strides, shape_agreement, find_shape_and_elems, get_shape_from_iterable, calc_new_strides, to_coords) @@ -12,39 +12,11 @@ from pypy.rpython.lltypesystem import lltype, rffi from pypy.tool.sourcetools import func_with_new_name from pypy.rlib.rstring import StringBuilder -from pypy.module.micronumpy.interp_iter import (ArrayIterator, OneDimIterator, +from pypy.module.micronumpy.interp_iter import 
(ArrayIterator, SkipLastAxisIterator, Chunk, ViewIterator) from pypy.module.micronumpy.appbridge import get_appbridge_cache -numpy_driver = jit.JitDriver( - greens=['shapelen', 'sig'], - virtualizables=['frame'], - reds=['result_size', 'frame', 'ri', 'self', 'result'], - get_printable_location=signature.new_printable_location('numpy'), - name='numpy', -) -all_driver = jit.JitDriver( - greens=['shapelen', 'sig'], - virtualizables=['frame'], - reds=['frame', 'self', 'dtype'], - get_printable_location=signature.new_printable_location('all'), - name='numpy_all', -) -any_driver = jit.JitDriver( - greens=['shapelen', 'sig'], - virtualizables=['frame'], - reds=['frame', 'self', 'dtype'], - get_printable_location=signature.new_printable_location('any'), - name='numpy_any', -) -slice_driver = jit.JitDriver( - greens=['shapelen', 'sig'], - virtualizables=['frame'], - reds=['self', 'frame', 'arr'], - get_printable_location=signature.new_printable_location('slice'), - name='numpy_slice', -) count_driver = jit.JitDriver( greens=['shapelen'], virtualizables=['frame'], @@ -173,6 +145,8 @@ descr_prod = _reduce_ufunc_impl("multiply", True) descr_max = _reduce_ufunc_impl("maximum") descr_min = _reduce_ufunc_impl("minimum") + descr_all = _reduce_ufunc_impl('logical_and') + descr_any = _reduce_ufunc_impl('logical_or') def _reduce_argmax_argmin_impl(op_name): reduce_driver = jit.JitDriver( @@ -212,40 +186,6 @@ return space.wrap(loop(self)) return func_with_new_name(impl, "reduce_arg%s_impl" % op_name) - def _all(self): - dtype = self.find_dtype() - sig = self.find_sig() - frame = sig.create_frame(self) - shapelen = len(self.shape) - while not frame.done(): - all_driver.jit_merge_point(sig=sig, - shapelen=shapelen, self=self, - dtype=dtype, frame=frame) - if not dtype.itemtype.bool(sig.eval(frame, self)): - return False - frame.next(shapelen) - return True - - def descr_all(self, space): - return space.wrap(self._all()) - - def _any(self): - dtype = self.find_dtype() - sig = self.find_sig() - frame = sig.create_frame(self) - shapelen = len(self.shape) - while not frame.done(): - any_driver.jit_merge_point(sig=sig, frame=frame, - shapelen=shapelen, self=self, - dtype=dtype) - if dtype.itemtype.bool(sig.eval(frame, self)): - return True - frame.next(shapelen) - return False - - def descr_any(self, space): - return space.wrap(self._any()) - descr_argmax = _reduce_argmax_argmin_impl("max") descr_argmin = _reduce_argmax_argmin_impl("min") @@ -267,7 +207,7 @@ out_size = support.product(out_shape) result = W_NDimArray(out_size, out_shape, dtype) # This is the place to add fpypy and blas - return multidim_dot(space, self.get_concrete(), + return multidim_dot(space, self.get_concrete(), other.get_concrete(), result, dtype, other_critical_dim) @@ -280,6 +220,12 @@ def descr_get_ndim(self, space): return space.wrap(len(self.shape)) + def descr_get_itemsize(self, space): + return space.wrap(self.find_dtype().itemtype.get_element_size()) + + def descr_get_nbytes(self, space): + return space.wrap(self.size * self.find_dtype().itemtype.get_element_size()) + @jit.unroll_safe def descr_get_shape(self, space): return space.newtuple([space.wrap(i) for i in self.shape]) @@ -507,7 +453,7 @@ w_shape = space.newtuple(args_w) new_shape = get_shape_from_iterable(space, self.size, w_shape) return self.reshape(space, new_shape) - + def reshape(self, space, new_shape): concrete = self.get_concrete() # Since we got to here, prod(new_shape) == self.size @@ -679,6 +625,9 @@ raise OperationError(space.w_NotImplementedError, space.wrap( 
"non-int arg not supported")) + def compute_first_step(self, sig, frame): + pass + def convert_to_array(space, w_obj): if isinstance(w_obj, BaseArray): return w_obj @@ -744,22 +693,9 @@ raise NotImplementedError def compute(self): - result = W_NDimArray(self.size, self.shape, self.find_dtype()) - shapelen = len(self.shape) - sig = self.find_sig() - frame = sig.create_frame(self) - ri = ArrayIterator(self.size) - while not ri.done(): - numpy_driver.jit_merge_point(sig=sig, - shapelen=shapelen, - result_size=self.size, - frame=frame, - ri=ri, - self=self, result=result) - result.setitem(ri.offset, sig.eval(frame, self)) - frame.next(shapelen) - ri = ri.next(shapelen) - return result + ra = ResultArray(self, self.size, self.shape, self.res_dtype) + loop.compute(ra) + return ra.left def force_if_needed(self): if self.forced_result is None: @@ -817,7 +753,8 @@ def create_sig(self): if self.forced_result is not None: return self.forced_result.create_sig() - return signature.Call1(self.ufunc, self.name, self.values.create_sig()) + return signature.Call1(self.ufunc, self.name, self.calc_dtype, + self.values.create_sig()) class Call2(VirtualArray): """ @@ -858,6 +795,66 @@ return signature.Call2(self.ufunc, self.name, self.calc_dtype, self.left.create_sig(), self.right.create_sig()) +class ResultArray(Call2): + def __init__(self, child, size, shape, dtype, res=None, order='C'): + if res is None: + res = W_NDimArray(size, shape, dtype, order) + Call2.__init__(self, None, 'assign', shape, dtype, dtype, res, child) + + def create_sig(self): + return signature.ResultSignature(self.res_dtype, self.left.create_sig(), + self.right.create_sig()) + +def done_if_true(dtype, val): + return dtype.itemtype.bool(val) + +def done_if_false(dtype, val): + return not dtype.itemtype.bool(val) + +class ReduceArray(Call2): + def __init__(self, func, name, identity, child, dtype): + self.identity = identity + Call2.__init__(self, func, name, [1], dtype, dtype, None, child) + + def compute_first_step(self, sig, frame): + assert isinstance(sig, signature.ReduceSignature) + if self.identity is None: + frame.cur_value = sig.right.eval(frame, self.right).convert_to( + self.calc_dtype) + frame.next(len(self.right.shape)) + else: + frame.cur_value = self.identity.convert_to(self.calc_dtype) + + def create_sig(self): + if self.name == 'logical_and': + done_func = done_if_false + elif self.name == 'logical_or': + done_func = done_if_true + else: + done_func = None + return signature.ReduceSignature(self.ufunc, self.name, self.res_dtype, + signature.ScalarSignature(self.res_dtype), + self.right.create_sig(), done_func) + +class AxisReduce(Call2): + _immutable_fields_ = ['left', 'right'] + + def __init__(self, ufunc, name, identity, shape, dtype, left, right, dim): + Call2.__init__(self, ufunc, name, shape, dtype, dtype, + left, right) + self.dim = dim + self.identity = identity + + def compute_first_step(self, sig, frame): + if self.identity is not None: + frame.identity = self.identity.convert_to(self.calc_dtype) + + def create_sig(self): + return signature.AxisReduceSignature(self.ufunc, self.name, + self.res_dtype, + signature.ScalarSignature(self.res_dtype), + self.right.create_sig()) + class SliceArray(Call2): def __init__(self, shape, dtype, left, right, no_broadcast=False): self.no_broadcast = no_broadcast @@ -876,18 +873,6 @@ self.calc_dtype, lsig, rsig) -class AxisReduce(Call2): - """ NOTE: this is only used as a container, you should never - encounter such things in the wild. 
Remove this comment - when we'll make AxisReduce lazy - """ - _immutable_fields_ = ['left', 'right'] - - def __init__(self, ufunc, name, shape, dtype, left, right, dim): - Call2.__init__(self, ufunc, name, shape, dtype, dtype, - left, right) - self.dim = dim - class ConcreteArray(BaseArray): """ An array that have actual storage, whether owned or not """ @@ -973,7 +958,7 @@ self._fast_setslice(space, w_value) else: arr = SliceArray(self.shape, self.dtype, self, w_value) - self._sliceloop(arr) + loop.compute(arr) def _fast_setslice(self, space, w_value): assert isinstance(w_value, ConcreteArray) @@ -997,17 +982,6 @@ source.next() dest.next() - def _sliceloop(self, arr): - sig = arr.find_sig() - frame = sig.create_frame(arr) - shapelen = len(self.shape) - while not frame.done(): - slice_driver.jit_merge_point(sig=sig, frame=frame, self=self, - arr=arr, - shapelen=shapelen) - sig.eval(frame, arr) - frame.next(shapelen) - def copy(self, space): array = W_NDimArray(self.size, self.shape[:], self.dtype, self.order) array.setslice(space, self) @@ -1033,9 +1007,9 @@ parent.order, parent) self.start = start - def create_iter(self): + def create_iter(self, transforms=None): return ViewIterator(self.start, self.strides, self.backstrides, - self.shape) + self.shape).apply_transformations(self, transforms) def setshape(self, space, new_shape): if len(self.shape) < 1: @@ -1084,8 +1058,8 @@ self.shape = new_shape self.calc_strides(new_shape) - def create_iter(self): - return ArrayIterator(self.size) + def create_iter(self, transforms=None): + return ArrayIterator(self.size).apply_transformations(self, transforms) def create_sig(self): return signature.ArraySignature(self.dtype) @@ -1289,11 +1263,13 @@ BaseArray.descr_set_shape), size = GetSetProperty(BaseArray.descr_get_size), ndim = GetSetProperty(BaseArray.descr_get_ndim), - item = interp2app(BaseArray.descr_item), + itemsize = GetSetProperty(BaseArray.descr_get_itemsize), + nbytes = GetSetProperty(BaseArray.descr_get_nbytes), T = GetSetProperty(BaseArray.descr_get_transpose), flat = GetSetProperty(BaseArray.descr_get_flatiter), ravel = interp2app(BaseArray.descr_ravel), + item = interp2app(BaseArray.descr_item), mean = interp2app(BaseArray.descr_mean), sum = interp2app(BaseArray.descr_sum), @@ -1345,12 +1321,15 @@ def descr_iter(self): return self + def descr_len(self, space): + return space.wrap(self.size) + def descr_index(self, space): return space.wrap(self.index) def descr_coords(self, space): - coords, step, lngth = to_coords(space, self.base.shape, - self.base.size, self.base.order, + coords, step, lngth = to_coords(space, self.base.shape, + self.base.size, self.base.order, space.wrap(self.index)) return space.newtuple([space.wrap(c) for c in coords]) @@ -1380,7 +1359,7 @@ step=step, res=res, ri=ri, - ) + ) w_val = base.getitem(basei.offset) res.setitem(ri.offset,w_val) basei = basei.next_skip_x(shapelen, step) @@ -1408,7 +1387,7 @@ arr=arr, ai=ai, lngth=lngth, - ) + ) v = arr.getitem(ai).convert_to(base.dtype) base.setitem(basei.offset, v) # need to repeat input values until all assignments are done @@ -1419,22 +1398,29 @@ def create_sig(self): return signature.FlatSignature(self.base.dtype) + def create_iter(self, transforms=None): + return ViewIterator(self.base.start, self.base.strides, + self.base.backstrides, + self.base.shape).apply_transformations(self.base, + transforms) + def descr_base(self, space): return space.wrap(self.base) W_FlatIterator.typedef = TypeDef( 'flatiter', - #__array__ = #MISSING __iter__ = 
interp2app(W_FlatIterator.descr_iter), + __len__ = interp2app(W_FlatIterator.descr_len), __getitem__ = interp2app(W_FlatIterator.descr_getitem), __setitem__ = interp2app(W_FlatIterator.descr_setitem), + __eq__ = interp2app(BaseArray.descr_eq), __ne__ = interp2app(BaseArray.descr_ne), __lt__ = interp2app(BaseArray.descr_lt), __le__ = interp2app(BaseArray.descr_le), __gt__ = interp2app(BaseArray.descr_gt), __ge__ = interp2app(BaseArray.descr_ge), - #__sizeof__ #MISSING + base = GetSetProperty(W_FlatIterator.descr_base), index = GetSetProperty(W_FlatIterator.descr_index), coords = GetSetProperty(W_FlatIterator.descr_coords), diff --git a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py --- a/pypy/module/micronumpy/interp_ufuncs.py +++ b/pypy/module/micronumpy/interp_ufuncs.py @@ -2,31 +2,10 @@ from pypy.interpreter.error import OperationError, operationerrfmt from pypy.interpreter.gateway import interp2app, unwrap_spec, NoneNotWrapped from pypy.interpreter.typedef import TypeDef, GetSetProperty, interp_attrproperty -from pypy.module.micronumpy import interp_boxes, interp_dtype, support -from pypy.module.micronumpy.signature import (ReduceSignature, find_sig, - new_printable_location, AxisReduceSignature, ScalarSignature) -from pypy.rlib import jit +from pypy.module.micronumpy import interp_boxes, interp_dtype, support, loop from pypy.rlib.rarithmetic import LONG_BIT from pypy.tool.sourcetools import func_with_new_name - -reduce_driver = jit.JitDriver( - greens=['shapelen', "sig"], - virtualizables=["frame"], - reds=["frame", "self", "dtype", "value", "obj"], - get_printable_location=new_printable_location('reduce'), - name='numpy_reduce', -) - -axisreduce_driver = jit.JitDriver( - greens=['shapelen', 'sig'], - virtualizables=['frame'], - reds=['self','arr', 'identity', 'frame'], - name='numpy_axisreduce', - get_printable_location=new_printable_location('axisreduce'), -) - - class W_Ufunc(Wrappable): _attrs_ = ["name", "promote_to_float", "promote_bools", "identity"] _immutable_fields_ = ["promote_to_float", "promote_bools", "name"] @@ -140,7 +119,7 @@ def reduce(self, space, w_obj, multidim, promote_to_largest, dim, keepdims=False): from pypy.module.micronumpy.interp_numarray import convert_to_array, \ - Scalar + Scalar, ReduceArray if self.argcount != 2: raise OperationError(space.w_ValueError, space.wrap("reduce only " "supported for binary functions")) @@ -151,96 +130,37 @@ if isinstance(obj, Scalar): raise OperationError(space.w_TypeError, space.wrap("cannot reduce " "on a scalar")) - size = obj.size - dtype = find_unaryop_result_dtype( - space, obj.find_dtype(), - promote_to_float=self.promote_to_float, - promote_to_largest=promote_to_largest, - promote_bools=True - ) + if self.comparison_func: + dtype = interp_dtype.get_dtype_cache(space).w_booldtype + else: + dtype = find_unaryop_result_dtype( + space, obj.find_dtype(), + promote_to_float=self.promote_to_float, + promote_to_largest=promote_to_largest, + promote_bools=True + ) shapelen = len(obj.shape) if self.identity is None and size == 0: raise operationerrfmt(space.w_ValueError, "zero-size array to " "%s.reduce without identity", self.name) if shapelen > 1 and dim >= 0: - res = self.do_axis_reduce(obj, dtype, dim, keepdims) - return space.wrap(res) - scalarsig = ScalarSignature(dtype) - sig = find_sig(ReduceSignature(self.func, self.name, dtype, - scalarsig, - obj.create_sig()), obj) - frame = sig.create_frame(obj) - if self.identity is None: - value = sig.eval(frame, obj).convert_to(dtype) - 
frame.next(shapelen) - else: - value = self.identity.convert_to(dtype) - return self.reduce_loop(shapelen, sig, frame, value, obj, dtype) + return self.do_axis_reduce(obj, dtype, dim, keepdims) + arr = ReduceArray(self.func, self.name, self.identity, obj, dtype) + return loop.compute(arr) def do_axis_reduce(self, obj, dtype, dim, keepdims): from pypy.module.micronumpy.interp_numarray import AxisReduce,\ W_NDimArray - if keepdims: shape = obj.shape[:dim] + [1] + obj.shape[dim + 1:] else: shape = obj.shape[:dim] + obj.shape[dim + 1:] result = W_NDimArray(support.product(shape), shape, dtype) - rightsig = obj.create_sig() - # note - this is just a wrapper so signature can fetch - # both left and right, nothing more, especially - # this is not a true virtual array, because shapes - # don't quite match - arr = AxisReduce(self.func, self.name, obj.shape, dtype, + arr = AxisReduce(self.func, self.name, self.identity, obj.shape, dtype, result, obj, dim) - scalarsig = ScalarSignature(dtype) - sig = find_sig(AxisReduceSignature(self.func, self.name, dtype, - scalarsig, rightsig), arr) - assert isinstance(sig, AxisReduceSignature) - frame = sig.create_frame(arr) - shapelen = len(obj.shape) - if self.identity is not None: - identity = self.identity.convert_to(dtype) - else: - identity = None - self.reduce_axis_loop(frame, sig, shapelen, arr, identity) - return result - - def reduce_axis_loop(self, frame, sig, shapelen, arr, identity): - # note - we can be advanterous here, depending on the exact field - # layout. For now let's say we iterate the original way and - # simply follow the original iteration order - while not frame.done(): - axisreduce_driver.jit_merge_point(frame=frame, self=self, - sig=sig, - identity=identity, - shapelen=shapelen, arr=arr) - iterator = frame.get_final_iter() - v = sig.eval(frame, arr).convert_to(sig.calc_dtype) - if iterator.first_line: - if identity is not None: - value = self.func(sig.calc_dtype, identity, v) - else: - value = v - else: - cur = arr.left.getitem(iterator.offset) - value = self.func(sig.calc_dtype, cur, v) - arr.left.setitem(iterator.offset, value) - frame.next(shapelen) - - def reduce_loop(self, shapelen, sig, frame, value, obj, dtype): - while not frame.done(): - reduce_driver.jit_merge_point(sig=sig, - shapelen=shapelen, self=self, - value=value, obj=obj, frame=frame, - dtype=dtype) - assert isinstance(sig, ReduceSignature) - value = sig.binfunc(dtype, value, - sig.eval(frame, obj).convert_to(dtype)) - frame.next(shapelen) - return value - + loop.compute(arr) + return arr.left class W_Ufunc1(W_Ufunc): argcount = 1 @@ -312,7 +232,6 @@ w_lhs.value.convert_to(calc_dtype), w_rhs.value.convert_to(calc_dtype) )) - new_shape = shape_agreement(space, w_lhs.shape, w_rhs.shape) w_res = Call2(self.func, self.name, new_shape, calc_dtype, @@ -464,9 +383,10 @@ ("subtract", "sub", 2), ("multiply", "mul", 2, {"identity": 1}), ("bitwise_and", "bitwise_and", 2, {"identity": 1, - 'int_only': True}), + "int_only": True}), ("bitwise_or", "bitwise_or", 2, {"identity": 0, - 'int_only': True}), + "int_only": True}), + ("bitwise_xor", "bitwise_xor", 2, {"int_only": True}), ("invert", "invert", 1, {"int_only": True}), ("divide", "div", 2, {"promote_bools": True}), ("true_divide", "div", 2, {"promote_to_float": True}), @@ -482,6 +402,13 @@ ("isnan", "isnan", 1, {"bool_result": True}), ("isinf", "isinf", 1, {"bool_result": True}), + ('logical_and', 'logical_and', 2, {'comparison_func': True, + 'identity': 1}), + ('logical_or', 'logical_or', 2, {'comparison_func': True, + 
'identity': 0}), + ('logical_xor', 'logical_xor', 2, {'comparison_func': True}), + ('logical_not', 'logical_not', 1, {'bool_result': True}), + ("maximum", "max", 2), ("minimum", "min", 2), diff --git a/pypy/module/micronumpy/loop.py b/pypy/module/micronumpy/loop.py new file mode 100644 --- /dev/null +++ b/pypy/module/micronumpy/loop.py @@ -0,0 +1,83 @@ + +""" This file is the main run loop as well as evaluation loops for various +signatures +""" + +from pypy.rlib.jit import JitDriver, hint, unroll_safe, promote +from pypy.module.micronumpy.interp_iter import ConstantIterator + +class NumpyEvalFrame(object): + _virtualizable2_ = ['iterators[*]', 'final_iter', 'arraylist[*]', + 'value', 'identity', 'cur_value'] + + @unroll_safe + def __init__(self, iterators, arrays): + self = hint(self, access_directly=True, fresh_virtualizable=True) + self.iterators = iterators[:] + self.arrays = arrays[:] + for i in range(len(self.iterators)): + iter = self.iterators[i] + if not isinstance(iter, ConstantIterator): + self.final_iter = i + break + else: + self.final_iter = -1 + self.cur_value = None + self.identity = None + + def done(self): + final_iter = promote(self.final_iter) + if final_iter < 0: + assert False + return self.iterators[final_iter].done() + + @unroll_safe + def next(self, shapelen): + for i in range(len(self.iterators)): + self.iterators[i] = self.iterators[i].next(shapelen) + + @unroll_safe + def next_from_second(self, shapelen): + """ Don't increase the first iterator + """ + for i in range(1, len(self.iterators)): + self.iterators[i] = self.iterators[i].next(shapelen) + + def next_first(self, shapelen): + self.iterators[0] = self.iterators[0].next(shapelen) + + def get_final_iter(self): + final_iter = promote(self.final_iter) + if final_iter < 0: + assert False + return self.iterators[final_iter] + +def get_printable_location(shapelen, sig): + return 'numpy ' + sig.debug_repr() + ' [%d dims]' % (shapelen,) + +numpy_driver = JitDriver( + greens=['shapelen', 'sig'], + virtualizables=['frame'], + reds=['frame', 'arr'], + get_printable_location=get_printable_location, + name='numpy', +) + +class ComputationDone(Exception): + def __init__(self, value): + self.value = value + +def compute(arr): + sig = arr.find_sig() + shapelen = len(arr.shape) + frame = sig.create_frame(arr) + try: + while not frame.done(): + numpy_driver.jit_merge_point(sig=sig, + shapelen=shapelen, + frame=frame, arr=arr) + sig.eval(frame, arr) + frame.next(shapelen) + return frame.cur_value + except ComputationDone, e: + return e.value diff --git a/pypy/module/micronumpy/signature.py b/pypy/module/micronumpy/signature.py --- a/pypy/module/micronumpy/signature.py +++ b/pypy/module/micronumpy/signature.py @@ -1,9 +1,9 @@ from pypy.rlib.objectmodel import r_dict, compute_identity_hash, compute_hash from pypy.rlib.rarithmetic import intmask -from pypy.module.micronumpy.interp_iter import ViewIterator, ArrayIterator, \ - ConstantIterator, AxisIterator, ViewTransform,\ - BroadcastTransform -from pypy.rlib.jit import hint, unroll_safe, promote +from pypy.module.micronumpy.interp_iter import ConstantIterator, AxisIterator,\ + ViewTransform, BroadcastTransform +from pypy.tool.pairtype import extendabletype +from pypy.module.micronumpy.loop import ComputationDone """ Signature specifies both the numpy expression that has been constructed and the assembler to be compiled. 
This is a very important observation - @@ -54,50 +54,6 @@ known_sigs[sig] = sig return sig -class NumpyEvalFrame(object): - _virtualizable2_ = ['iterators[*]', 'final_iter', 'arraylist[*]', - 'value', 'identity'] - - @unroll_safe - def __init__(self, iterators, arrays): - self = hint(self, access_directly=True, fresh_virtualizable=True) - self.iterators = iterators[:] - self.arrays = arrays[:] - for i in range(len(self.iterators)): - iter = self.iterators[i] - if not isinstance(iter, ConstantIterator): - self.final_iter = i - break - else: - self.final_iter = -1 - - def done(self): - final_iter = promote(self.final_iter) - if final_iter < 0: - assert False - return self.iterators[final_iter].done() - - @unroll_safe - def next(self, shapelen): - for i in range(len(self.iterators)): - self.iterators[i] = self.iterators[i].next(shapelen) - - @unroll_safe - def next_from_second(self, shapelen): - """ Don't increase the first iterator - """ - for i in range(1, len(self.iterators)): - self.iterators[i] = self.iterators[i].next(shapelen) - - def next_first(self, shapelen): - self.iterators[0] = self.iterators[0].next(shapelen) - - def get_final_iter(self): - final_iter = promote(self.final_iter) - if final_iter < 0: - assert False - return self.iterators[final_iter] - def _add_ptr_to_cache(ptr, cache): i = 0 for p in cache: @@ -113,6 +69,8 @@ return r_dict(sigeq_no_numbering, sighash) class Signature(object): + __metaclass_ = extendabletype + _attrs_ = ['iter_no', 'array_no'] _immutable_fields_ = ['iter_no', 'array_no'] @@ -138,11 +96,15 @@ self.iter_no = no def create_frame(self, arr): + from pypy.module.micronumpy.loop import NumpyEvalFrame + iterlist = [] arraylist = [] self._create_iter(iterlist, arraylist, arr, []) - return NumpyEvalFrame(iterlist, arraylist) - + f = NumpyEvalFrame(iterlist, arraylist) + # hook for cur_value being used by reduce + arr.compute_first_step(self, f) + return f class ConcreteSignature(Signature): _immutable_fields_ = ['dtype'] @@ -182,13 +144,10 @@ assert isinstance(concr, ConcreteArray) storage = concr.storage if self.iter_no >= len(iterlist): - iterlist.append(self.allocate_iter(concr, transforms)) + iterlist.append(concr.create_iter(transforms)) if self.array_no >= len(arraylist): arraylist.append(storage) - def allocate_iter(self, arr, transforms): - return ArrayIterator(arr.size).apply_transformations(arr, transforms) - def eval(self, frame, arr): iter = frame.iterators[self.iter_no] return self.dtype.getitem(frame.arrays[self.array_no], iter.offset) @@ -220,22 +179,10 @@ allnumbers.append(no) self.iter_no = no - def allocate_iter(self, arr, transforms): - return ViewIterator(arr.start, arr.strides, arr.backstrides, - arr.shape).apply_transformations(arr, transforms) - class FlatSignature(ViewSignature): def debug_repr(self): return 'Flat' - def allocate_iter(self, arr, transforms): - from pypy.module.micronumpy.interp_numarray import W_FlatIterator - assert isinstance(arr, W_FlatIterator) - return ViewIterator(arr.base.start, arr.base.strides, - arr.base.backstrides, - arr.base.shape).apply_transformations(arr.base, - transforms) - class VirtualSliceSignature(Signature): def __init__(self, child): self.child = child @@ -269,12 +216,13 @@ return self.child.eval(frame, arr.child) class Call1(Signature): - _immutable_fields_ = ['unfunc', 'name', 'child'] + _immutable_fields_ = ['unfunc', 'name', 'child', 'dtype'] - def __init__(self, func, name, child): + def __init__(self, func, name, dtype, child): self.unfunc = func self.child = child self.name = name + 
self.dtype = dtype def hash(self): return compute_hash(self.name) ^ intmask(self.child.hash() << 1) @@ -359,6 +307,17 @@ return 'Call2(%s, %s, %s)' % (self.name, self.left.debug_repr(), self.right.debug_repr()) +class ResultSignature(Call2): + def __init__(self, dtype, left, right): + Call2.__init__(self, None, 'assign', dtype, left, right) + + def eval(self, frame, arr): + from pypy.module.micronumpy.interp_numarray import ResultArray + + assert isinstance(arr, ResultArray) + offset = frame.get_final_iter().offset + arr.left.setitem(offset, self.right.eval(frame, arr.right)) + class BroadcastLeft(Call2): def _invent_numbering(self, cache, allnumbers): self.left._invent_numbering(new_cache(), allnumbers) @@ -400,20 +359,24 @@ self.right._create_iter(iterlist, arraylist, arr.right, rtransforms) class ReduceSignature(Call2): - def _create_iter(self, iterlist, arraylist, arr, transforms): - self.right._create_iter(iterlist, arraylist, arr, transforms) - - def _invent_numbering(self, cache, allnumbers): - self.right._invent_numbering(cache, allnumbers) - - def _invent_array_numbering(self, arr, cache): - self.right._invent_array_numbering(arr, cache) - + _immutable_fields_ = ['binfunc', 'name', 'calc_dtype', + 'left', 'right', 'done_func'] + + def __init__(self, func, name, calc_dtype, left, right, + done_func): + Call2.__init__(self, func, name, calc_dtype, left, right) + self.done_func = done_func + def eval(self, frame, arr): - return self.right.eval(frame, arr) + from pypy.module.micronumpy.interp_numarray import ReduceArray + assert isinstance(arr, ReduceArray) + rval = self.right.eval(frame, arr.right).convert_to(self.calc_dtype) + if self.done_func is not None and self.done_func(self.calc_dtype, rval): + raise ComputationDone(rval) + frame.cur_value = self.binfunc(self.calc_dtype, frame.cur_value, rval) def debug_repr(self): - return 'ReduceSig(%s, %s)' % (self.name, self.right.debug_repr()) + return 'ReduceSig(%s)' % (self.name, self.right.debug_repr()) class SliceloopSignature(Call2): def eval(self, frame, arr): @@ -467,7 +430,17 @@ from pypy.module.micronumpy.interp_numarray import AxisReduce assert isinstance(arr, AxisReduce) - return self.right.eval(frame, arr.right).convert_to(self.calc_dtype) + iterator = frame.get_final_iter() + v = self.right.eval(frame, arr.right).convert_to(self.calc_dtype) + if iterator.first_line: + if frame.identity is not None: + value = self.binfunc(self.calc_dtype, frame.identity, v) + else: + value = v + else: + cur = arr.left.getitem(iterator.offset) + value = self.binfunc(self.calc_dtype, cur, v) + arr.left.setitem(iterator.offset, value) def debug_repr(self): return 'AxisReduceSig(%s, %s)' % (self.name, self.right.debug_repr()) diff --git a/pypy/module/micronumpy/test/test_dtypes.py b/pypy/module/micronumpy/test/test_dtypes.py --- a/pypy/module/micronumpy/test/test_dtypes.py +++ b/pypy/module/micronumpy/test/test_dtypes.py @@ -401,3 +401,20 @@ else: assert issubclass(int64, int) assert int_ is int64 + + def test_operators(self): + from operator import truediv + from _numpypy import float64, int_, True_, False_ + + assert truediv(int_(3), int_(2)) == float64(1.5) + assert 2 ** int_(3) == int_(8) + assert int_(3) & int_(1) == int_(1) + raises(TypeError, lambda: float64(3) & 1) + assert int_(8) % int_(3) == int_(2) + assert int_(2) | int_(1) == int_(3) + assert int_(3) ^ int_(5) == int_(6) + assert True_ ^ False_ is True_ + + assert +int_(3) == int_(3) + assert ~int_(3) == int_(-4) + diff --git a/pypy/module/micronumpy/test/test_module.py 
b/pypy/module/micronumpy/test/test_module.py --- a/pypy/module/micronumpy/test/test_module.py +++ b/pypy/module/micronumpy/test/test_module.py @@ -21,13 +21,3 @@ from _numpypy import array, max assert max(range(10)) == 9 assert max(array(range(10))) == 9 - - def test_constants(self): - import math - from _numpypy import inf, e, pi - assert type(inf) is float - assert inf == float("inf") - assert e == math.e - assert type(e) is float - assert pi == math.pi - assert type(pi) is float diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -1,11 +1,13 @@ import py + +from pypy.conftest import gettestobjspace, option +from pypy.interpreter.error import OperationError +from pypy.module.micronumpy import signature +from pypy.module.micronumpy.appbridge import get_appbridge_cache +from pypy.module.micronumpy.interp_iter import Chunk +from pypy.module.micronumpy.interp_numarray import W_NDimArray, shape_agreement from pypy.module.micronumpy.test.test_base import BaseNumpyAppTest -from pypy.module.micronumpy.interp_numarray import W_NDimArray, shape_agreement -from pypy.module.micronumpy.interp_iter import Chunk -from pypy.module.micronumpy import signature -from pypy.interpreter.error import OperationError -from pypy.conftest import gettestobjspace class MockDtype(object): @@ -173,7 +175,7 @@ def _to_coords(index, order): return to_coords(self.space, [2, 3, 4], 24, order, self.space.wrap(index))[0] - + assert _to_coords(0, 'C') == [0, 0, 0] assert _to_coords(1, 'C') == [0, 0, 1] assert _to_coords(-1, 'C') == [1, 2, 3] @@ -306,7 +308,7 @@ from _numpypy import arange a = arange(15).reshape(3, 5) assert a[1, 3] == 8 - assert a.T[1, 2] == 11 + assert a.T[1, 2] == 11 def test_setitem(self): from _numpypy import array @@ -577,7 +579,7 @@ def test_div(self): from math import isnan - from _numpypy import array, dtype, inf + from _numpypy import array, dtype a = array(range(1, 6)) b = a / a @@ -598,15 +600,15 @@ a = array([-1.0, 0.0, 1.0]) b = array([0.0, 0.0, 0.0]) c = a / b - assert c[0] == -inf + assert c[0] == float('-inf') assert isnan(c[1]) - assert c[2] == inf + assert c[2] == float('inf') b = array([-0.0, -0.0, -0.0]) c = a / b - assert c[0] == inf + assert c[0] == float('inf') assert isnan(c[1]) - assert c[2] == -inf + assert c[2] == float('-inf') def test_div_other(self): from _numpypy import array @@ -936,10 +938,9 @@ [[86, 302, 518], [110, 390, 670], [134, 478, 822]]]).all() c = dot(a, b[:, 2]) assert (c == [[62, 214, 366], [518, 670, 822]]).all() - a = arange(3*4*5*6).reshape((3,4,5,6)) - b = arange(3*4*5*6)[::-1].reshape((5,4,6,3)) - assert dot(a, b)[2,3,2,1,2,2] == 499128 - assert sum(a[2,3,2,:] * b[1,2,:,2]) == 499128 + a = arange(3*2*6).reshape((3,2,6)) + b = arange(3*2*6)[::-1].reshape((2,6,3)) + assert dot(a, b)[2,0,1,2] == 1140 def test_dot_constant(self): from _numpypy import array, dot @@ -1121,14 +1122,14 @@ f1 = array([0,1]) f = concatenate((f1, [2], f1, [7])) assert (f == [0,1,2,0,1,7]).all() - + bad_axis = raises(ValueError, concatenate, (a1,a2), axis=1) assert str(bad_axis.value) == "bad axis argument" - + concat_zero = raises(ValueError, concatenate, ()) assert str(concat_zero.value) == \ "concatenation of zero-length sequences is impossible" - + dims_disagree = raises(ValueError, concatenate, (a1, b1), axis=0) assert str(dims_disagree.value) == \ "array dimensions must agree except for axis being concatenated" @@ -1163,6 
+1164,25 @@ a = array([[1, 2], [3, 4]]) assert (a.T.flatten() == [1, 3, 2, 4]).all() + def test_itemsize(self): + from _numpypy import ones, dtype, array + + for obj in [float, bool, int]: + assert ones(1, dtype=obj).itemsize == dtype(obj).itemsize + assert (ones(1) + ones(1)).itemsize == 8 + assert array(1.0).itemsize == 8 + assert ones(1)[:].itemsize == 8 + + def test_nbytes(self): + from _numpypy import array, ones + + assert ones(1).nbytes == 8 + assert ones((2, 2)).nbytes == 32 + assert ones((2, 2))[1:,].nbytes == 16 + assert (ones(1) + ones(1)).nbytes == 8 + assert array(3.0).nbytes == 8 + + class AppTestMultiDim(BaseNumpyAppTest): def test_init(self): import _numpypy @@ -1458,35 +1478,37 @@ b = a.T.flat assert (b == [0, 4, 8, 1, 5, 9, 2, 6, 10, 3, 7, 11]).all() assert not (b != [0, 4, 8, 1, 5, 9, 2, 6, 10, 3, 7, 11]).any() - assert ((b >= range(12)) == [True, True, True,False, True, True, + assert ((b >= range(12)) == [True, True, True,False, True, True, False, False, True, False, False, True]).all() - assert ((b < range(12)) != [True, True, True,False, True, True, + assert ((b < range(12)) != [True, True, True,False, True, True, False, False, True, False, False, True]).all() - assert ((b <= range(12)) != [False, True, True,False, True, True, + assert ((b <= range(12)) != [False, True, True,False, True, True, False, False, True, False, False, False]).all() - assert ((b > range(12)) == [False, True, True,False, True, True, + assert ((b > range(12)) == [False, True, True,False, True, True, False, False, True, False, False, False]).all() def test_flatiter_view(self): from _numpypy import arange a = arange(10).reshape(5, 2) - #no == yet. - # a[::2].flat == [0, 1, 4, 5, 8, 9] - isequal = True - for y,z in zip(a[::2].flat, [0, 1, 4, 5, 8, 9]): - if y != z: - isequal = False - assert isequal == True + assert (a[::2].flat == [0, 1, 4, 5, 8, 9]).all() def test_flatiter_transpose(self): from _numpypy import arange - a = arange(10).reshape(2,5).T + a = arange(10).reshape(2, 5).T b = a.flat assert (b[:5] == [0, 5, 1, 6, 2]).all() b.next() b.next() b.next() assert b.index == 3 - assert b.coords == (1,1) + assert b.coords == (1, 1) + + def test_flatiter_len(self): + from _numpypy import arange + + assert len(arange(10).flat) == 10 + assert len(arange(10).reshape(2, 5).flat) == 10 + assert len(arange(10)[:2].flat) == 2 + assert len((arange(2) + arange(2)).flat) == 2 def test_slice_copy(self): from _numpypy import zeros @@ -1740,10 +1762,11 @@ assert len(a) == 8 assert arange(False, True, True).dtype is dtype(int) -from pypy.module.micronumpy.appbridge import get_appbridge_cache class AppTestRepr(BaseNumpyAppTest): def setup_class(cls): + if option.runappdirect: + py.test.skip("Can't be run directly.") BaseNumpyAppTest.setup_class.im_func(cls) cache = get_appbridge_cache(cls.space) cls.old_array_repr = cache.w_array_repr @@ -1757,6 +1780,8 @@ assert str(array([1, 2, 3])) == 'array([1, 2, 3])' def teardown_class(cls): + if option.runappdirect: + return cache = get_appbridge_cache(cls.space) cache.w_array_repr = cls.old_array_repr cache.w_array_str = cls.old_array_str diff --git a/pypy/module/micronumpy/test/test_ufuncs.py b/pypy/module/micronumpy/test/test_ufuncs.py --- a/pypy/module/micronumpy/test/test_ufuncs.py +++ b/pypy/module/micronumpy/test/test_ufuncs.py @@ -312,9 +312,9 @@ def test_arcsinh(self): import math - from _numpypy import arcsinh, inf + from _numpypy import arcsinh - for v in [inf, -inf, 1.0, math.e]: + for v in [float('inf'), float('-inf'), 1.0, math.e]: assert math.asinh(v) == 
arcsinh(v) assert math.isnan(arcsinh(float("nan"))) @@ -347,8 +347,9 @@ raises((ValueError, TypeError), add.reduce, 1) def test_reduce_1d(self): - from _numpypy import add, maximum + from _numpypy import add, maximum, less + assert less.reduce([5, 4, 3, 2, 1]) assert add.reduce([1, 2, 3]) == 6 assert maximum.reduce([1]) == 1 assert maximum.reduce([1, 2, 3]) == 3 @@ -366,7 +367,7 @@ b = add.reduce(a, 0, keepdims=True) assert b.shape == (1, 4) assert (add.reduce(a, 0, keepdims=True) == [12, 15, 18, 21]).all() - + def test_bitwise(self): from _numpypy import bitwise_and, bitwise_or, arange, array @@ -415,7 +416,7 @@ assert count_reduce_items(a) == 24 assert count_reduce_items(a, 1) == 3 assert count_reduce_items(a, (1, 2)) == 3 * 4 - + def test_true_divide(self): from _numpypy import arange, array, true_divide assert (true_divide(arange(3), array([2, 2, 2])) == array([0, 0.5, 1])).all() @@ -433,3 +434,14 @@ assert (isnan(array([0.2, float('inf'), float('nan')])) == [False, False, True]).all() assert (isinf(array([0.2, float('inf'), float('nan')])) == [False, True, False]).all() assert isinf(array([0.2])).dtype.kind == 'b' + + def test_logical_ops(self): + from _numpypy import logical_and, logical_or, logical_xor, logical_not + + assert (logical_and([True, False , True, True], [1, 1, 3, 0]) + == [True, False, True, False]).all() + assert (logical_or([True, False, True, False], [1, 2, 0, 0]) + == [True, True, True, False]).all() + assert (logical_xor([True, False, True, False], [1, 2, 0, 0]) + == [False, True, True, False]).all() + assert (logical_not([True, False]) == [False, True]).all() diff --git a/pypy/module/micronumpy/test/test_zjit.py b/pypy/module/micronumpy/test/test_zjit.py --- a/pypy/module/micronumpy/test/test_zjit.py +++ b/pypy/module/micronumpy/test/test_zjit.py @@ -84,7 +84,7 @@ def test_add(self): result = self.run("add") self.check_simple_loop({'getinteriorfield_raw': 2, 'float_add': 1, - 'setinteriorfield_raw': 1, 'int_add': 2, + 'setinteriorfield_raw': 1, 'int_add': 1, 'int_ge': 1, 'guard_false': 1, 'jump': 1, 'arraylen_gc': 1}) assert result == 3 + 3 @@ -99,7 +99,7 @@ result = self.run("float_add") assert result == 3 + 3 self.check_simple_loop({"getinteriorfield_raw": 1, "float_add": 1, - "setinteriorfield_raw": 1, "int_add": 2, + "setinteriorfield_raw": 1, "int_add": 1, "int_ge": 1, "guard_false": 1, "jump": 1, 'arraylen_gc': 1}) @@ -198,7 +198,8 @@ result = self.run("any") assert result == 1 self.check_simple_loop({"getinteriorfield_raw": 2, "float_add": 1, - "float_ne": 1, "int_add": 1, + "int_and": 1, "int_add": 1, + 'cast_float_to_int': 1, "int_ge": 1, "jump": 1, "guard_false": 2, 'arraylen_gc': 1}) @@ -239,7 +240,7 @@ assert result == -6 self.check_simple_loop({"getinteriorfield_raw": 2, "float_add": 1, "float_neg": 1, - "setinteriorfield_raw": 1, "int_add": 2, + "setinteriorfield_raw": 1, "int_add": 1, "int_ge": 1, "guard_false": 1, "jump": 1, 'arraylen_gc': 1}) @@ -321,7 +322,7 @@ # int_add might be 1 here if we try slightly harder with # reusing indexes or some optimization self.check_simple_loop({'float_add': 1, 'getinteriorfield_raw': 2, - 'guard_false': 1, 'int_add': 2, 'int_ge': 1, + 'guard_false': 1, 'int_add': 1, 'int_ge': 1, 'jump': 1, 'setinteriorfield_raw': 1, 'arraylen_gc': 1}) @@ -387,7 +388,7 @@ assert result == 4 self.check_trace_count(1) self.check_simple_loop({'getinteriorfield_raw': 2, 'float_add': 1, - 'setinteriorfield_raw': 1, 'int_add': 2, + 'setinteriorfield_raw': 1, 'int_add': 1, 'int_ge': 1, 'guard_false': 1, 'jump': 1, 'arraylen_gc': 1}) def 
define_flat_iter(): @@ -403,7 +404,7 @@ assert result == 6 self.check_trace_count(1) self.check_simple_loop({'getinteriorfield_raw': 2, 'float_add': 1, - 'setinteriorfield_raw': 1, 'int_add': 3, + 'setinteriorfield_raw': 1, 'int_add': 2, 'int_ge': 1, 'guard_false': 1, 'arraylen_gc': 1, 'jump': 1}) diff --git a/pypy/module/micronumpy/types.py b/pypy/module/micronumpy/types.py --- a/pypy/module/micronumpy/types.py +++ b/pypy/module/micronumpy/types.py @@ -59,10 +59,6 @@ class BaseType(object): def _unimplemented_ufunc(self, *args): raise NotImplementedError - # add = sub = mul = div = mod = pow = eq = ne = lt = le = gt = ge = max = \ - # min = copysign = pos = neg = abs = sign = reciprocal = fabs = floor = \ - # exp = sin = cos = tan = arcsin = arccos = arctan = arcsinh = \ - # arctanh = _unimplemented_ufunc class Primitive(object): _mixin_ = True @@ -181,6 +177,22 @@ def ge(self, v1, v2): return v1 >= v2 + @raw_binary_op + def logical_and(self, v1, v2): + return bool(v1) and bool(v2) + + @raw_binary_op + def logical_or(self, v1, v2): + return bool(v1) or bool(v2) + + @raw_unary_op + def logical_not(self, v): + return not bool(v) + + @raw_binary_op + def logical_xor(self, v1, v2): + return bool(v1) ^ bool(v2) + def bool(self, v): return bool(self.for_computation(self.unbox(v))) @@ -237,6 +249,10 @@ def bitwise_or(self, v1, v2): return v1 | v2 + @simple_binary_op + def bitwise_xor(self, v1, v2): + return v1 ^ v2 + @simple_unary_op def invert(self, v): return ~v @@ -297,6 +313,10 @@ def bitwise_or(self, v1, v2): return v1 | v2 + @simple_binary_op + def bitwise_xor(self, v1, v2): + return v1 ^ v2 + @simple_unary_op def invert(self, v): return ~v diff --git a/pypy/module/posix/interp_posix.py b/pypy/module/posix/interp_posix.py --- a/pypy/module/posix/interp_posix.py +++ b/pypy/module/posix/interp_posix.py @@ -37,7 +37,7 @@ if space.isinstance_w(w_obj, space.w_unicode): w_obj = space.call_method(w_obj, 'encode', getfilesystemencoding(space)) - return space.str_w(w_obj) + return space.str0_w(w_obj) class FileEncoder(object): def __init__(self, space, w_obj): @@ -48,7 +48,7 @@ return fsencode_w(self.space, self.w_obj) def as_unicode(self): - return self.space.unicode_w(self.w_obj) + return self.space.unicode0_w(self.w_obj) class FileDecoder(object): def __init__(self, space, w_obj): @@ -56,13 +56,13 @@ self.w_obj = w_obj def as_bytes(self): - return self.space.str_w(self.w_obj) + return self.space.str0_w(self.w_obj) def as_unicode(self): space = self.space w_unicode = space.call_method(self.w_obj, 'decode', getfilesystemencoding(space)) - return space.unicode_w(w_unicode) + return space.unicode0_w(w_unicode) @specialize.memo() def dispatch_filename(func, tag=0): @@ -71,7 +71,7 @@ fname = FileEncoder(space, w_fname) return func(fname, *args) else: - fname = space.str_w(w_fname) + fname = space.str0_w(w_fname) return func(fname, *args) return dispatch @@ -369,7 +369,7 @@ space.wrap(times[3]), space.wrap(times[4])]) - at unwrap_spec(cmd=str) + at unwrap_spec(cmd='str0') def system(space, cmd): """Execute the command (a string) in a subshell.""" try: @@ -401,7 +401,7 @@ fullpath = rposix._getfullpathname(path) w_fullpath = space.wrap(fullpath) else: - path = space.str_w(w_path) + path = space.str0_w(w_path) fullpath = rposix._getfullpathname(path) w_fullpath = space.wrap(fullpath) except OSError, e: @@ -512,7 +512,7 @@ for key, value in os.environ.items(): space.setitem(w_env, space.wrap(key), space.wrap(value)) - at unwrap_spec(name=str, value=str) + at unwrap_spec(name='str0', value='str0') def 
putenv(space, name, value): """Change or add an environment variable.""" try: @@ -520,7 +520,7 @@ except OSError, e: raise wrap_oserror(space, e) - at unwrap_spec(name=str) + at unwrap_spec(name='str0') def unsetenv(space, name): """Delete an environment variable.""" try: @@ -543,12 +543,18 @@ dirname = FileEncoder(space, w_dirname) result = rposix.listdir(dirname) w_fs_encoding = getfilesystemencoding(space) - result_w = [ - space.call_method(space.wrap(s), "decode", w_fs_encoding) - for s in result - ] + len_result = len(result) + result_w = [None] * len_result + for i in range(len_result): + w_bytes = space.wrap(result[i]) + try: + result_w[i] = space.call_method(w_bytes, + "decode", w_fs_encoding) + except OperationError, e: + # fall back to the original byte string + result_w[i] = w_bytes else: - dirname = space.str_w(w_dirname) + dirname = space.str0_w(w_dirname) result = rposix.listdir(dirname) result_w = [space.wrap(s) for s in result] except OSError, e: @@ -635,7 +641,7 @@ import signal os.kill(os.getpid(), signal.SIGABRT) - at unwrap_spec(src=str, dst=str) + at unwrap_spec(src='str0', dst='str0') def link(space, src, dst): "Create a hard link to a file." try: @@ -650,7 +656,7 @@ except OSError, e: raise wrap_oserror(space, e) - at unwrap_spec(path=str) + at unwrap_spec(path='str0') def readlink(space, path): "Return a string representing the path to which the symbolic link points." try: @@ -765,7 +771,7 @@ w_keys = space.call_method(w_env, 'keys') for w_key in space.unpackiterable(w_keys): w_value = space.getitem(w_env, w_key) - env[space.str_w(w_key)] = space.str_w(w_value) + env[space.str0_w(w_key)] = space.str0_w(w_value) return env def execve(space, w_command, w_args, w_env): @@ -785,18 +791,18 @@ except OSError, e: raise wrap_oserror(space, e) - at unwrap_spec(mode=int, path=str) + at unwrap_spec(mode=int, path='str0') def spawnv(space, mode, path, w_args): - args = [space.str_w(w_arg) for w_arg in space.unpackiterable(w_args)] + args = [space.str0_w(w_arg) for w_arg in space.unpackiterable(w_args)] try: ret = os.spawnv(mode, path, args) except OSError, e: raise wrap_oserror(space, e) return space.wrap(ret) - at unwrap_spec(mode=int, path=str) + at unwrap_spec(mode=int, path='str0') def spawnve(space, mode, path, w_args, w_env): - args = [space.str_w(w_arg) for w_arg in space.unpackiterable(w_args)] + args = [space.str0_w(w_arg) for w_arg in space.unpackiterable(w_args)] env = _env2interp(space, w_env) try: ret = os.spawnve(mode, path, args, env) @@ -914,7 +920,7 @@ raise wrap_oserror(space, e) return space.w_None - at unwrap_spec(path=str) + at unwrap_spec(path='str0') def chroot(space, path): """ chroot(path) @@ -1103,7 +1109,7 @@ except OSError, e: raise wrap_oserror(space, e) - at unwrap_spec(path=str, uid=c_uid_t, gid=c_gid_t) + at unwrap_spec(path='str0', uid=c_uid_t, gid=c_gid_t) def chown(space, path, uid, gid): check_uid_range(space, uid) check_uid_range(space, gid) @@ -1113,7 +1119,7 @@ raise wrap_oserror(space, e, path) return space.w_None - at unwrap_spec(path=str, uid=c_uid_t, gid=c_gid_t) + at unwrap_spec(path='str0', uid=c_uid_t, gid=c_gid_t) def lchown(space, path, uid, gid): check_uid_range(space, uid) check_uid_range(space, gid) diff --git a/pypy/module/posix/test/test_posix2.py b/pypy/module/posix/test/test_posix2.py --- a/pypy/module/posix/test/test_posix2.py +++ b/pypy/module/posix/test/test_posix2.py @@ -29,6 +29,7 @@ mod.pdir = pdir unicode_dir = udir.ensure('fi\xc5\x9fier.txt', dir=True) unicode_dir.join('somefile').write('who cares?') + 
unicode_dir.join('caf\xe9').write('who knows?') mod.unicode_dir = unicode_dir # in applevel tests, os.stat uses the CPython os.stat. @@ -308,14 +309,22 @@ 'file2'] def test_listdir_unicode(self): + import sys unicode_dir = self.unicode_dir if unicode_dir is None: skip("encoding not good enough") posix = self.posix result = posix.listdir(unicode_dir) - result.sort() - assert result == [u'somefile'] - assert type(result[0]) is unicode + typed_result = [(type(x), x) for x in result] + assert (unicode, u'somefile') in typed_result + try: + u = "caf\xe9".decode(sys.getfilesystemencoding()) + except UnicodeDecodeError: + # Could not decode, listdir returned the byte string + assert (str, "caf\xe9") in typed_result + else: + assert (unicode, u) in typed_result + def test_access(self): pdir = self.pdir + '/file1' diff --git a/pypy/module/posix/test/test_ztranslation.py b/pypy/module/posix/test/test_ztranslation.py new file mode 100644 --- /dev/null +++ b/pypy/module/posix/test/test_ztranslation.py @@ -0,0 +1,4 @@ +from pypy.objspace.fake.checkmodule import checkmodule + +def test_posix_translates(): + checkmodule('posix') \ No newline at end of file diff --git a/pypy/module/pypyjit/interp_resop.py b/pypy/module/pypyjit/interp_resop.py --- a/pypy/module/pypyjit/interp_resop.py +++ b/pypy/module/pypyjit/interp_resop.py @@ -127,6 +127,7 @@ l_w.append(DebugMergePoint(space, jit_hooks._cast_to_gcref(op), logops.repr_of_resop(op), jd_sd.jitdriver.name, + op.getarg(1).getint(), w_greenkey)) else: l_w.append(WrappedOp(jit_hooks._cast_to_gcref(op), ofs, @@ -163,14 +164,14 @@ llres = res.llbox return WrappedOp(jit_hooks.resop_new(num, args, llres), offset, repr) - at unwrap_spec(repr=str, jd_name=str) -def descr_new_dmp(space, w_tp, w_args, repr, jd_name, w_greenkey): + at unwrap_spec(repr=str, jd_name=str, call_depth=int) +def descr_new_dmp(space, w_tp, w_args, repr, jd_name, call_depth, w_greenkey): args = [space.interp_w(WrappedBox, w_arg).llbox for w_arg in space.listview(w_args)] num = rop.DEBUG_MERGE_POINT return DebugMergePoint(space, jit_hooks.resop_new(num, args, jit_hooks.emptyval()), - repr, jd_name, w_greenkey) + repr, jd_name, call_depth, w_greenkey) class WrappedOp(Wrappable): """ A class representing a single ResOperation, wrapped nicely @@ -205,10 +206,11 @@ jit_hooks.resop_setresult(self.op, box.llbox) class DebugMergePoint(WrappedOp): - def __init__(self, space, op, repr_of_resop, jd_name, w_greenkey): + def __init__(self, space, op, repr_of_resop, jd_name, call_depth, w_greenkey): WrappedOp.__init__(self, op, -1, repr_of_resop) + self.jd_name = jd_name + self.call_depth = call_depth self.w_greenkey = w_greenkey - self.jd_name = jd_name def get_pycode(self, space): if self.jd_name == pypyjitdriver.name: @@ -243,6 +245,7 @@ greenkey = interp_attrproperty_w("w_greenkey", cls=DebugMergePoint), pycode = GetSetProperty(DebugMergePoint.get_pycode), bytecode_no = GetSetProperty(DebugMergePoint.get_bytecode_no), + call_depth = interp_attrproperty("call_depth", cls=DebugMergePoint), jitdriver_name = GetSetProperty(DebugMergePoint.get_jitdriver_name), ) DebugMergePoint.acceptable_as_base_class = False diff --git a/pypy/module/pypyjit/test/test_jit_hook.py b/pypy/module/pypyjit/test/test_jit_hook.py --- a/pypy/module/pypyjit/test/test_jit_hook.py +++ b/pypy/module/pypyjit/test/test_jit_hook.py @@ -122,7 +122,8 @@ assert isinstance(dmp, pypyjit.DebugMergePoint) assert dmp.pycode is self.f.func_code assert dmp.greenkey == (self.f.func_code, 0, False) - #assert int_add.name == 'int_add' + assert 
dmp.call_depth == 0 + assert int_add.name == 'int_add' assert int_add.num == self.int_add_num self.on_compile_bridge() assert len(all) == 2 @@ -223,11 +224,13 @@ def f(): pass - op = DebugMergePoint([Box(0)], 'repr', 'pypyjit', (f.func_code, 0, 0)) + op = DebugMergePoint([Box(0)], 'repr', 'pypyjit', 2, (f.func_code, 0, 0)) assert op.bytecode_no == 0 assert op.pycode is f.func_code assert repr(op) == 'repr' assert op.jitdriver_name == 'pypyjit' assert op.num == self.dmp_num - op = DebugMergePoint([Box(0)], 'repr', 'notmain', ('str',)) + assert op.call_depth == 2 + op = DebugMergePoint([Box(0)], 'repr', 'notmain', 5, ('str',)) raises(AttributeError, 'op.pycode') + assert op.call_depth == 5 diff --git a/pypy/module/pypyjit/test_pypy_c/test_call.py b/pypy/module/pypyjit/test_pypy_c/test_call.py --- a/pypy/module/pypyjit/test_pypy_c/test_call.py +++ b/pypy/module/pypyjit/test_pypy_c/test_call.py @@ -27,6 +27,7 @@ ... p53 = call_assembler(..., descr=...) guard_not_forced(descr=...) + keepalive(...) guard_no_exception(descr=...) ... """) diff --git a/pypy/module/select/__init__.py b/pypy/module/select/__init__.py --- a/pypy/module/select/__init__.py +++ b/pypy/module/select/__init__.py @@ -1,7 +1,6 @@ # Package initialisation from pypy.interpreter.mixedmodule import MixedModule -import select import sys @@ -15,18 +14,13 @@ 'error' : 'space.fromcache(interp_select.Cache).w_error' } - # TODO: this doesn't feel right... - if hasattr(select, "epoll"): + if sys.platform.startswith('linux'): interpleveldefs['epoll'] = 'interp_epoll.W_Epoll' - symbols = [ - "EPOLLIN", "EPOLLOUT", "EPOLLPRI", "EPOLLERR", "EPOLLHUP", - "EPOLLET", "EPOLLONESHOT", "EPOLLRDNORM", "EPOLLRDBAND", - "EPOLLWRNORM", "EPOLLWRBAND", "EPOLLMSG" - ] - for symbol in symbols: - if hasattr(select, symbol): - interpleveldefs[symbol] = "space.wrap(%s)" % getattr(select, symbol) - + from pypy.module.select.interp_epoll import cconfig, public_symbols + for symbol in public_symbols: + value = cconfig[symbol] + if value is not None: + interpleveldefs[symbol] = "space.wrap(%r)" % value def buildloaders(cls): from pypy.rlib import rpoll diff --git a/pypy/module/select/interp_epoll.py b/pypy/module/select/interp_epoll.py --- a/pypy/module/select/interp_epoll.py +++ b/pypy/module/select/interp_epoll.py @@ -29,8 +29,16 @@ ("data", CConfig.epoll_data) ]) +public_symbols = [ + "EPOLLIN", "EPOLLOUT", "EPOLLPRI", "EPOLLERR", "EPOLLHUP", + "EPOLLET", "EPOLLONESHOT", "EPOLLRDNORM", "EPOLLRDBAND", + "EPOLLWRNORM", "EPOLLWRBAND", "EPOLLMSG" + ] +for symbol in public_symbols: + setattr(CConfig, symbol, rffi_platform.DefinedConstantInteger(symbol)) + for symbol in ["EPOLL_CTL_ADD", "EPOLL_CTL_MOD", "EPOLL_CTL_DEL"]: - setattr(CConfig, symbol, rffi_platform.DefinedConstantInteger(symbol)) + setattr(CConfig, symbol, rffi_platform.ConstantInteger(symbol)) cconfig = rffi_platform.configure(CConfig) diff --git a/pypy/module/select/test/test_epoll.py b/pypy/module/select/test/test_epoll.py --- a/pypy/module/select/test/test_epoll.py +++ b/pypy/module/select/test/test_epoll.py @@ -1,23 +1,17 @@ import py +import sys from pypy.conftest import gettestobjspace class AppTestEpoll(object): def setup_class(cls): + # NB. 
we should ideally py.test.skip() if running on an old linux + # where the kernel doesn't support epoll() + if not sys.platform.startswith('linux'): + py.test.skip("test requires linux (assumed >= 2.6)") cls.space = gettestobjspace(usemodules=["select", "_socket", "posix"]) - import errno - import select - - if not hasattr(select, "epoll"): - py.test.skip("test requires linux 2.6") - try: - select.epoll() - except IOError, e: - if e.errno == errno.ENOSYS: - py.test.skip("kernel doesn't support epoll()") - def setup_method(self, meth): self.w_sockets = self.space.wrap([]) diff --git a/pypy/module/sys/state.py b/pypy/module/sys/state.py --- a/pypy/module/sys/state.py +++ b/pypy/module/sys/state.py @@ -74,7 +74,7 @@ # return importlist - at unwrap_spec(srcdir=str) + at unwrap_spec(srcdir='str0') def pypy_initial_path(space, srcdir): try: path = getinitialpath(get(space), srcdir) diff --git a/pypy/module/test_lib_pypy/numpypy/test_numpy.py b/pypy/module/test_lib_pypy/numpypy/test_numpy.py new file mode 100644 --- /dev/null +++ b/pypy/module/test_lib_pypy/numpypy/test_numpy.py @@ -0,0 +1,13 @@ +from pypy.conftest import gettestobjspace + +class AppTestNumpy: + def setup_class(cls): + cls.space = gettestobjspace(usemodules=['micronumpy']) + + def test_imports(self): + try: + import numpy # fails if 'numpypy' was not imported so far + except ImportError: + pass + import numpypy + import numpy # works after 'numpypy' has been imported diff --git a/pypy/module/zipimport/interp_zipimport.py b/pypy/module/zipimport/interp_zipimport.py --- a/pypy/module/zipimport/interp_zipimport.py +++ b/pypy/module/zipimport/interp_zipimport.py @@ -123,7 +123,9 @@ self.prefix = prefix def getprefix(self, space): - return space.wrap(self.prefix) + if ZIPSEP == os.path.sep: + return space.wrap(self.prefix) + return space.wrap(self.prefix.replace(ZIPSEP, os.path.sep)) def _find_relative_path(self, filename): if filename.startswith(self.filename): @@ -342,7 +344,7 @@ space = self.space return space.wrap(self.filename) - at unwrap_spec(name=str) + at unwrap_spec(name='str0') def descr_new_zipimporter(space, w_type, name): w = space.wrap ok = False @@ -381,7 +383,7 @@ prefix = name[len(filename):] if prefix.startswith(os.path.sep) or prefix.startswith(ZIPSEP): prefix = prefix[1:] - if prefix and not prefix.endswith(ZIPSEP): + if prefix and not prefix.endswith(ZIPSEP) and not prefix.endswith(os.path.sep): prefix += ZIPSEP w_result = space.wrap(W_ZipImporter(space, name, filename, zip_file, prefix)) zip_cache.set(filename, w_result) diff --git a/pypy/module/zipimport/test/test_undocumented.py b/pypy/module/zipimport/test/test_undocumented.py --- a/pypy/module/zipimport/test/test_undocumented.py +++ b/pypy/module/zipimport/test/test_undocumented.py @@ -119,7 +119,7 @@ zip_importer = zipimport.zipimporter(path) assert isinstance(zip_importer, zipimport.zipimporter) assert zip_importer.archive == zip_path - assert zip_importer.prefix == prefix + assert zip_importer.prefix == prefix.replace('/', os.path.sep) assert zip_path in zipimport._zip_directory_cache finally: self.cleanup_zipfile(self.created_paths) diff --git a/pypy/module/zipimport/test/test_zipimport.py b/pypy/module/zipimport/test/test_zipimport.py --- a/pypy/module/zipimport/test/test_zipimport.py +++ b/pypy/module/zipimport/test/test_zipimport.py @@ -15,7 +15,7 @@ cpy's regression tests """ compression = ZIP_STORED - pathsep = '/' + pathsep = os.path.sep def make_pyc(cls, space, co, mtime): data = marshal.dumps(co) @@ -129,7 +129,7 @@ 
self.writefile('sub/__init__.py', '') self.writefile('sub/yy.py', '') from zipimport import _zip_directory_cache, zipimporter - sub_importer = zipimporter(self.zipfile + '/sub') + sub_importer = zipimporter(self.zipfile + os.path.sep + 'sub') main_importer = zipimporter(self.zipfile) assert main_importer is not sub_importer diff --git a/pypy/rlib/ropenssl.py b/pypy/rlib/ropenssl.py --- a/pypy/rlib/ropenssl.py +++ b/pypy/rlib/ropenssl.py @@ -54,6 +54,7 @@ ASN1_STRING = lltype.Ptr(lltype.ForwardReference()) ASN1_ITEM = rffi.COpaquePtr('ASN1_ITEM') +ASN1_ITEM_EXP = lltype.Ptr(lltype.FuncType([], ASN1_ITEM)) X509_NAME = rffi.COpaquePtr('X509_NAME') class CConfig: @@ -101,12 +102,11 @@ X509_extension_st = rffi_platform.Struct( 'struct X509_extension_st', [('value', ASN1_STRING)]) - ASN1_ITEM_EXP = lltype.FuncType([], ASN1_ITEM) X509V3_EXT_D2I = lltype.FuncType([rffi.VOIDP, rffi.CCHARPP, rffi.LONG], rffi.VOIDP) v3_ext_method = rffi_platform.Struct( 'struct v3_ext_method', - [('it', lltype.Ptr(ASN1_ITEM_EXP)), + [('it', ASN1_ITEM_EXP), ('d2i', lltype.Ptr(X509V3_EXT_D2I))]) GENERAL_NAME_st = rffi_platform.Struct( 'struct GENERAL_NAME_st', @@ -118,6 +118,8 @@ ('block_size', rffi.INT)]) EVP_MD_SIZE = rffi_platform.SizeOf('EVP_MD') EVP_MD_CTX_SIZE = rffi_platform.SizeOf('EVP_MD_CTX') + OPENSSL_EXPORT_VAR_AS_FUNCTION = rffi_platform.Defined( + "OPENSSL_EXPORT_VAR_AS_FUNCTION") for k, v in rffi_platform.configure(CConfig).items(): @@ -224,7 +226,10 @@ ssl_external('i2a_ASN1_INTEGER', [BIO, ASN1_INTEGER], rffi.INT) ssl_external('ASN1_item_d2i', [rffi.VOIDP, rffi.CCHARPP, rffi.LONG, ASN1_ITEM], rffi.VOIDP) -ssl_external('ASN1_ITEM_ptr', [rffi.VOIDP], ASN1_ITEM, macro=True) +if OPENSSL_EXPORT_VAR_AS_FUNCTION: + ssl_external('ASN1_ITEM_ptr', [ASN1_ITEM_EXP], ASN1_ITEM, macro=True) +else: + ssl_external('ASN1_ITEM_ptr', [rffi.VOIDP], ASN1_ITEM, macro=True) ssl_external('sk_GENERAL_NAME_num', [GENERAL_NAMES], rffi.INT, macro=True) diff --git a/pypy/rlib/rstring.py b/pypy/rlib/rstring.py --- a/pypy/rlib/rstring.py +++ b/pypy/rlib/rstring.py @@ -205,3 +205,45 @@ assert p.const is None return SomeUnicodeBuilder(can_be_None=True) +#___________________________________________________________________ +# Support functions for SomeString.no_nul + +def assert_str0(fname): + assert '\x00' not in fname, "NUL byte in string" + return fname + +class Entry(ExtRegistryEntry): + _about_ = assert_str0 + + def compute_result_annotation(self, s_obj): + if s_None.contains(s_obj): + return s_obj + assert isinstance(s_obj, (SomeString, SomeUnicodeString)) + if s_obj.no_nul: + return s_obj + new_s_obj = SomeObject.__new__(s_obj.__class__) + new_s_obj.__dict__ = s_obj.__dict__.copy() + new_s_obj.no_nul = True + return new_s_obj + + def specialize_call(self, hop): + hop.exception_cannot_occur() + return hop.inputarg(hop.args_r[0], arg=0) + +def check_str0(fname): + """A 'probe' to trigger a failure at translation time, if the + string was not proved to not contain NUL characters.""" + assert '\x00' not in fname, "NUL byte in string" + +class Entry(ExtRegistryEntry): + _about_ = check_str0 + + def compute_result_annotation(self, s_obj): + if not isinstance(s_obj, (SomeString, SomeUnicodeString)): + return s_obj + if not s_obj.no_nul: + raise ValueError("Value is not no_nul") + + def specialize_call(self, hop): + pass + diff --git a/pypy/rlib/test/test_rmarshal.py b/pypy/rlib/test/test_rmarshal.py --- a/pypy/rlib/test/test_rmarshal.py +++ b/pypy/rlib/test/test_rmarshal.py @@ -169,7 +169,7 @@ assert st2.st_mode == st.st_mode assert 
st2[9] == st[9] return buf - fn = compile(f, [str]) + fn = compile(f, [annmodel.s_Str0]) res = fn('.') st = os.stat('.') sttuple = marshal.loads(res) diff --git a/pypy/rpython/extfunc.py b/pypy/rpython/extfunc.py --- a/pypy/rpython/extfunc.py +++ b/pypy/rpython/extfunc.py @@ -2,7 +2,7 @@ from pypy.rpython.extregistry import ExtRegistryEntry from pypy.rpython.lltypesystem.lltype import typeOf from pypy.objspace.flow.model import Constant -from pypy.annotation.model import unionof +from pypy.annotation import model as annmodel from pypy.annotation.signature import annotation import py, sys @@ -138,7 +138,6 @@ # we defer a bit annotation here def compute_result_annotation(self): - from pypy.annotation import model as annmodel return annmodel.SomeGenericCallable([annotation(i, self.bookkeeper) for i in self.instance.args], annotation(self.instance.result, self.bookkeeper)) @@ -152,8 +151,9 @@ signature_args = [annotation(arg, None) for arg in args] assert len(args_s) == len(signature_args),\ "Argument number mismatch" + for i, expected in enumerate(signature_args): - arg = unionof(args_s[i], expected) + arg = annmodel.unionof(args_s[i], expected) if not expected.contains(arg): name = getattr(self, 'name', None) if not name: diff --git a/pypy/rpython/extfuncregistry.py b/pypy/rpython/extfuncregistry.py --- a/pypy/rpython/extfuncregistry.py +++ b/pypy/rpython/extfuncregistry.py @@ -85,7 +85,8 @@ # llinterpreter path_functions = [ - ('join', [str, str], str), + ('join', [ll_os.str0, ll_os.str0], ll_os.str0), + ('dirname', [ll_os.str0], ll_os.str0), ] for name, args, res in path_functions: diff --git a/pypy/rpython/lltypesystem/ll2ctypes.py b/pypy/rpython/lltypesystem/ll2ctypes.py --- a/pypy/rpython/lltypesystem/ll2ctypes.py +++ b/pypy/rpython/lltypesystem/ll2ctypes.py @@ -1040,13 +1040,8 @@ libraries = eci.testonly_libraries + eci.libraries + eci.frameworks FUNCTYPE = lltype.typeOf(funcptr).TO - if not libraries: - cfunc = get_on_lib(standard_c_lib, funcname) - # XXX magic: on Windows try to load the function from 'kernel32' too - if cfunc is None and hasattr(ctypes, 'windll'): - cfunc = get_on_lib(ctypes.windll.kernel32, funcname) - else: - cfunc = None + cfunc = None + if libraries: not_found = [] for libname in libraries: libpath = None @@ -1079,6 +1074,12 @@ not_found.append(libname) if cfunc is None: + cfunc = get_on_lib(standard_c_lib, funcname) + # XXX magic: on Windows try to load the function from 'kernel32' too + if cfunc is None and hasattr(ctypes, 'windll'): + cfunc = get_on_lib(ctypes.windll.kernel32, funcname) + + if cfunc is None: # function name not found in any of the libraries if not libraries: place = 'the standard C library (missing libraries=...?)' diff --git a/pypy/rpython/lltypesystem/rffi.py b/pypy/rpython/lltypesystem/rffi.py --- a/pypy/rpython/lltypesystem/rffi.py +++ b/pypy/rpython/lltypesystem/rffi.py @@ -15,7 +15,7 @@ from pypy.translator.tool.cbuild import ExternalCompilationInfo from pypy.rpython.annlowlevel import llhelper from pypy.rlib.objectmodel import we_are_translated -from pypy.rlib.rstring import StringBuilder, UnicodeBuilder +from pypy.rlib.rstring import StringBuilder, UnicodeBuilder, assert_str0 from pypy.rlib import jit from pypy.rpython.lltypesystem import llmemory import os, sys @@ -698,7 +698,7 @@ while cp[i] != lastchar: b.append(cp[i]) i += 1 - return b.build() + return assert_str0(b.build()) # str -> char* # Can't inline this because of the raw address manipulation. 
@@ -804,7 +804,7 @@ while i < maxlen and cp[i] != lastchar: b.append(cp[i]) i += 1 - return b.build() + return assert_str0(b.build()) # char* and size -> str (which can contain null bytes) def charpsize2str(cp, size): @@ -842,6 +842,7 @@ array[i] = str2charp(l[i]) array[len(l)] = lltype.nullptr(CCHARP.TO) return array +liststr2charpp._annenforceargs_ = [[annmodel.s_Str0]] # List of strings def free_charpp(ref): """ frees list of char**, NULL terminated diff --git a/pypy/rpython/module/ll_os.py b/pypy/rpython/module/ll_os.py --- a/pypy/rpython/module/ll_os.py +++ b/pypy/rpython/module/ll_os.py @@ -31,6 +31,10 @@ from pypy.rlib import rgc from pypy.rlib.objectmodel import specialize +str0 = SomeString(no_nul=True) +unicode0 = SomeUnicodeString(no_nul=True) + + def monkeypatch_rposix(posixfunc, unicodefunc, signature): func_name = posixfunc.__name__ @@ -39,12 +43,15 @@ arglist = ['arg%d' % (i,) for i in range(len(signature))] transformed_arglist = arglist[:] for i, arg in enumerate(signature): - if arg is unicode: + if arg in (unicode, unicode0): transformed_arglist[i] = transformed_arglist[i] + '.as_unicode()' args = ', '.join(arglist) transformed_args = ', '.join(transformed_arglist) - main_arg = 'arg%d' % (signature.index(unicode),) + try: + main_arg = 'arg%d' % (signature.index(unicode0),) + except ValueError: + main_arg = 'arg%d' % (signature.index(unicode),) source = py.code.Source(""" def %(func_name)s(%(args)s): @@ -60,7 +67,7 @@ exec source.compile() in miniglobals new_func = miniglobals[func_name] specialized_args = [i for i in range(len(signature)) - if signature[i] in (unicode, None)] + if signature[i] in (unicode, unicode0, None)] new_func = specialize.argtype(*specialized_args)(new_func) # Monkeypatch the function in pypy.rlib.rposix @@ -68,6 +75,7 @@ class StringTraits: str = str + str0 = str0 CHAR = rffi.CHAR CCHARP = rffi.CCHARP charp2str = staticmethod(rffi.charp2str) @@ -85,6 +93,7 @@ class UnicodeTraits: str = unicode + str0 = unicode0 CHAR = rffi.WCHAR_T CCHARP = rffi.CWCHARP charp2str = staticmethod(rffi.wcharp2unicode) @@ -301,7 +310,7 @@ rffi.free_charpp(l_args) raise OSError(rposix.get_errno(), "execv failed") - return extdef([str, [str]], s_ImpossibleValue, llimpl=execv_llimpl, + return extdef([str0, [str0]], s_ImpossibleValue, llimpl=execv_llimpl, export_name="ll_os.ll_os_execv") @@ -319,7 +328,8 @@ # appropriate envstrs = [] for item in env.iteritems(): - envstrs.append("%s=%s" % item) + envstr = "%s=%s" % item + envstrs.append(envstr) l_args = rffi.liststr2charpp(args) l_env = rffi.liststr2charpp(envstrs) @@ -332,7 +342,7 @@ raise OSError(rposix.get_errno(), "execve failed") return extdef( - [str, [str], {str: str}], + [str0, [str0], {str0: str0}], s_ImpossibleValue, llimpl=execve_llimpl, export_name="ll_os.ll_os_execve") @@ -353,7 +363,7 @@ raise OSError(rposix.get_errno(), "os_spawnv failed") return rffi.cast(lltype.Signed, childpid) - return extdef([int, str, [str]], int, llimpl=spawnv_llimpl, + return extdef([int, str0, [str0]], int, llimpl=spawnv_llimpl, export_name="ll_os.ll_os_spawnv") @registering_if(os, 'spawnve') @@ -378,7 +388,7 @@ raise OSError(rposix.get_errno(), "os_spawnve failed") return rffi.cast(lltype.Signed, childpid) - return extdef([int, str, [str], {str: str}], int, + return extdef([int, str0, [str0], {str0: str0}], int, llimpl=spawnve_llimpl, export_name="ll_os.ll_os_spawnve") @@ -517,7 +527,7 @@ else: raise Exception("os.utime() arg 2 must be None or a tuple of " "2 floats, got %s" % (s_times,)) - 
os_utime_normalize_args._default_signature_ = [traits.str, None] + os_utime_normalize_args._default_signature_ = [traits.str0, None] return extdef(os_utime_normalize_args, s_None, "ll_os.ll_os_utime", @@ -612,7 +622,7 @@ if result == -1: raise OSError(rposix.get_errno(), "os_chroot failed") - return extdef([str], None, export_name="ll_os.ll_os_chroot", + return extdef([str0], None, export_name="ll_os.ll_os_chroot", llimpl=chroot_llimpl) @registering_if(os, 'uname') @@ -816,7 +826,7 @@ def os_open_oofakeimpl(path, flags, mode): return os.open(OOSupport.from_rstr(path), flags, mode) - return extdef([traits.str, int, int], int, traits.ll_os_name('open'), + return extdef([traits.str0, int, int], int, traits.ll_os_name('open'), llimpl=os_open_llimpl, oofakeimpl=os_open_oofakeimpl) @registering_if(os, 'getloadavg') @@ -1050,7 +1060,7 @@ def os_access_oofakeimpl(path, mode): return os.access(OOSupport.from_rstr(path), mode) - return extdef([traits.str, int], s_Bool, llimpl=access_llimpl, + return extdef([traits.str0, int], s_Bool, llimpl=access_llimpl, export_name=traits.ll_os_name("access"), oofakeimpl=os_access_oofakeimpl) @@ -1062,8 +1072,8 @@ from pypy.rpython.module.ll_win32file import make_getfullpathname_impl getfullpathname_llimpl = make_getfullpathname_impl(traits) - return extdef([traits.str], # a single argument which is a str - traits.str, # returns a string + return extdef([traits.str0], # a single argument which is a str + traits.str0, # returns a string traits.ll_os_name('_getfullpathname'), llimpl=getfullpathname_llimpl) @@ -1174,8 +1184,8 @@ raise OSError(error, "os_readdir failed") return result - return extdef([traits.str], # a single argument which is a str - [traits.str], # returns a list of strings + return extdef([traits.str0], # a single argument which is a str + [traits.str0], # returns a list of strings traits.ll_os_name('listdir'), llimpl=os_listdir_llimpl) @@ -1241,7 +1251,7 @@ if res == -1: raise OSError(rposix.get_errno(), "os_chown failed") - return extdef([str, int, int], None, "ll_os.ll_os_chown", + return extdef([str0, int, int], None, "ll_os.ll_os_chown", llimpl=os_chown_llimpl) @registering_if(os, 'lchown') @@ -1254,7 +1264,7 @@ if res == -1: raise OSError(rposix.get_errno(), "os_lchown failed") - return extdef([str, int, int], None, "ll_os.ll_os_lchown", + return extdef([str0, int, int], None, "ll_os.ll_os_lchown", llimpl=os_lchown_llimpl) @registering_if(os, 'readlink') @@ -1283,12 +1293,11 @@ lltype.free(buf, flavor='raw') bufsize *= 4 # convert the result to a string - l = [buf[i] for i in range(res)] - result = ''.join(l) + result = rffi.charp2strn(buf, res) lltype.free(buf, flavor='raw') return result - return extdef([str], str, + return extdef([str0], str0, "ll_os.ll_os_readlink", llimpl=os_readlink_llimpl) @@ -1361,7 +1370,7 @@ res = os_system(command) return rffi.cast(lltype.Signed, res) - return extdef([str], int, llimpl=system_llimpl, + return extdef([str0], int, llimpl=system_llimpl, export_name="ll_os.ll_os_system") @registering_str_unicode(os.unlink) @@ -1383,7 +1392,7 @@ if not win32traits.DeleteFile(path): raise rwin32.lastWindowsError() - return extdef([traits.str], s_None, llimpl=unlink_llimpl, + return extdef([traits.str0], s_None, llimpl=unlink_llimpl, export_name=traits.ll_os_name('unlink')) @registering_str_unicode(os.chdir) @@ -1401,7 +1410,7 @@ from pypy.rpython.module.ll_win32file import make_chdir_impl os_chdir_llimpl = make_chdir_impl(traits) - return extdef([traits.str], s_None, llimpl=os_chdir_llimpl, + return extdef([traits.str0], 
s_None, llimpl=os_chdir_llimpl, export_name=traits.ll_os_name('chdir')) @registering_str_unicode(os.mkdir) @@ -1424,7 +1433,7 @@ if res < 0: raise OSError(rposix.get_errno(), "os_mkdir failed") - return extdef([traits.str, int], s_None, llimpl=os_mkdir_llimpl, + return extdef([traits.str0, int], s_None, llimpl=os_mkdir_llimpl, export_name=traits.ll_os_name('mkdir')) @registering_str_unicode(os.rmdir) @@ -1437,7 +1446,7 @@ if res < 0: raise OSError(rposix.get_errno(), "os_rmdir failed") - return extdef([traits.str], s_None, llimpl=rmdir_llimpl, + return extdef([traits.str0], s_None, llimpl=rmdir_llimpl, export_name=traits.ll_os_name('rmdir')) @registering_str_unicode(os.chmod) @@ -1454,7 +1463,7 @@ from pypy.rpython.module.ll_win32file import make_chmod_impl chmod_llimpl = make_chmod_impl(traits) - return extdef([traits.str, int], s_None, llimpl=chmod_llimpl, + return extdef([traits.str0, int], s_None, llimpl=chmod_llimpl, export_name=traits.ll_os_name('chmod')) @registering_str_unicode(os.rename) @@ -1476,7 +1485,7 @@ if not win32traits.MoveFile(oldpath, newpath): raise rwin32.lastWindowsError() - return extdef([traits.str, traits.str], s_None, llimpl=rename_llimpl, + return extdef([traits.str0, traits.str0], s_None, llimpl=rename_llimpl, export_name=traits.ll_os_name('rename')) @registering_str_unicode(getattr(os, 'mkfifo', None)) @@ -1489,7 +1498,7 @@ if res < 0: raise OSError(rposix.get_errno(), "os_mkfifo failed") - return extdef([traits.str, int], s_None, llimpl=mkfifo_llimpl, + return extdef([traits.str0, int], s_None, llimpl=mkfifo_llimpl, export_name=traits.ll_os_name('mkfifo')) @registering_str_unicode(getattr(os, 'mknod', None)) @@ -1503,7 +1512,7 @@ if res < 0: raise OSError(rposix.get_errno(), "os_mknod failed") - return extdef([traits.str, int, int], s_None, llimpl=mknod_llimpl, + return extdef([traits.str0, int, int], s_None, llimpl=mknod_llimpl, export_name=traits.ll_os_name('mknod')) @registering(os.umask) @@ -1555,7 +1564,7 @@ if res < 0: raise OSError(rposix.get_errno(), "os_link failed") - return extdef([str, str], s_None, llimpl=link_llimpl, + return extdef([str0, str0], s_None, llimpl=link_llimpl, export_name="ll_os.ll_os_link") @registering_if(os, 'symlink') @@ -1568,7 +1577,7 @@ if res < 0: raise OSError(rposix.get_errno(), "os_symlink failed") - return extdef([str, str], s_None, llimpl=symlink_llimpl, + return extdef([str0, str0], s_None, llimpl=symlink_llimpl, export_name="ll_os.ll_os_symlink") @registering_if(os, 'fork') diff --git a/pypy/rpython/module/ll_os_environ.py b/pypy/rpython/module/ll_os_environ.py --- a/pypy/rpython/module/ll_os_environ.py +++ b/pypy/rpython/module/ll_os_environ.py @@ -3,8 +3,11 @@ from pypy.rpython.controllerentry import Controller from pypy.rpython.extfunc import register_external from pypy.rpython.lltypesystem import rffi, lltype +from pypy.rpython.module import ll_os from pypy.rlib import rposix +str0 = ll_os.str0 + # ____________________________________________________________ # # Annotation support to control access to 'os.environ' in the RPython program @@ -64,7 +67,7 @@ rffi.free_charp(l_name) return result -register_external(r_getenv, [str], annmodel.SomeString(can_be_None=True), +register_external(r_getenv, [str0], annmodel.SomeString(can_be_None=True), export_name='ll_os.ll_os_getenv', llimpl=getenv_llimpl) @@ -93,7 +96,7 @@ if l_oldstring: rffi.free_charp(l_oldstring) -register_external(r_putenv, [str, str], annmodel.s_None, +register_external(r_putenv, [str0, str0], annmodel.s_None, export_name='ll_os.ll_os_putenv', 
llimpl=putenv_llimpl) @@ -128,7 +131,7 @@ del envkeepalive.byname[name] rffi.free_charp(l_oldstring) - register_external(r_unsetenv, [str], annmodel.s_None, + register_external(r_unsetenv, [str0], annmodel.s_None, export_name='ll_os.ll_os_unsetenv', llimpl=unsetenv_llimpl) @@ -172,7 +175,7 @@ i += 1 return result -register_external(r_envkeys, [], [str], # returns a list of strings +register_external(r_envkeys, [], [str0], # returns a list of strings export_name='ll_os.ll_os_envkeys', llimpl=envkeys_llimpl) @@ -193,6 +196,6 @@ i += 1 return result -register_external(r_envitems, [], [(str, str)], +register_external(r_envitems, [], [(str0, str0)], export_name='ll_os.ll_os_envitems', llimpl=envitems_llimpl) diff --git a/pypy/rpython/module/ll_os_stat.py b/pypy/rpython/module/ll_os_stat.py --- a/pypy/rpython/module/ll_os_stat.py +++ b/pypy/rpython/module/ll_os_stat.py @@ -236,7 +236,7 @@ def register_stat_variant(name, traits): if name != 'fstat': arg_is_path = True - s_arg = traits.str + s_arg = traits.str0 ARG1 = traits.CCHARP else: arg_is_path = False @@ -251,8 +251,6 @@ [s_arg], s_StatResult, traits.ll_os_name(name), llimpl=posix_stat_llimpl) - assert traits.str is str - if sys.platform.startswith('linux'): # because we always use _FILE_OFFSET_BITS 64 - this helps things work that are not a c compiler _functions = {'stat': 'stat64', @@ -283,7 +281,7 @@ @func_renamer('os_%s_fake' % (name,)) def posix_fakeimpl(arg): - if s_arg == str: + if s_arg == traits.str0: arg = hlstr(arg) st = getattr(os, name)(arg) fields = [TYPE for fieldname, TYPE in STAT_FIELDS] diff --git a/pypy/rpython/ootypesystem/test/test_ooann.py b/pypy/rpython/ootypesystem/test/test_ooann.py --- a/pypy/rpython/ootypesystem/test/test_ooann.py +++ b/pypy/rpython/ootypesystem/test/test_ooann.py @@ -231,7 +231,7 @@ a = RPythonAnnotator() s = a.build_types(oof, [bool]) - assert s == annmodel.SomeString(can_be_None=True) + assert annmodel.SomeString(can_be_None=True).contains(s) def test_oostring(): def oof(): diff --git a/pypy/rpython/test/test_extfunc.py b/pypy/rpython/test/test_extfunc.py --- a/pypy/rpython/test/test_extfunc.py +++ b/pypy/rpython/test/test_extfunc.py @@ -167,3 +167,43 @@ a = RPythonAnnotator(policy=policy) s = a.build_types(f, []) assert isinstance(s, annmodel.SomeString) + + def test_str0(self): + str0 = annmodel.SomeString(no_nul=True) + def os_open(s): + pass + register_external(os_open, [str0], None) + def f(s): + return os_open(s) + policy = AnnotatorPolicy() + policy.allow_someobjects = False + a = RPythonAnnotator(policy=policy) + a.build_types(f, [str]) # Does not raise + assert a.translator.config.translation.check_str_without_nul == False + # Now enable the str0 check, and try again with a similar function + a.translator.config.translation.check_str_without_nul=True + def g(s): + return os_open(s) + raises(Exception, a.build_types, g, [str]) + a.build_types(g, [str0]) # Does not raise + + def test_list_of_str0(self): + str0 = annmodel.SomeString(no_nul=True) + def os_execve(l): + pass + register_external(os_execve, [[str0]], None) + def f(l): + return os_execve(l) + policy = AnnotatorPolicy() + policy.allow_someobjects = False + a = RPythonAnnotator(policy=policy) + a.build_types(f, [[str]]) # Does not raise + assert a.translator.config.translation.check_str_without_nul == False + # Now enable the str0 check, and try again with a similar function + a.translator.config.translation.check_str_without_nul=True + def g(l): + return os_execve(l) + raises(Exception, a.build_types, g, [[str]]) + 
a.build_types(g, [[str0]]) # Does not raise + + diff --git a/pypy/translator/c/gc.py b/pypy/translator/c/gc.py --- a/pypy/translator/c/gc.py +++ b/pypy/translator/c/gc.py @@ -11,7 +11,6 @@ from pypy.translator.tool.cbuild import ExternalCompilationInfo class BasicGcPolicy(object): - stores_hash_at_the_end = False def __init__(self, db, thread_enabled=False): self.db = db @@ -47,8 +46,7 @@ return ExternalCompilationInfo( pre_include_bits=['/* using %s */' % (gct.__class__.__name__,), '#define MALLOC_ZERO_FILLED %d' % (gct.malloc_zero_filled,), - ], - post_include_bits=['typedef void *GC_hidden_pointer;'] + ] ) def get_prebuilt_hash(self, obj): @@ -308,7 +306,6 @@ class FrameworkGcPolicy(BasicGcPolicy): transformerclass = framework.FrameworkGCTransformer - stores_hash_at_the_end = True def struct_setup(self, structdefnode, rtti): if rtti is not None and hasattr(rtti._obj, 'destructor_funcptr'): diff --git a/pypy/translator/c/test/test_extfunc.py b/pypy/translator/c/test/test_extfunc.py --- a/pypy/translator/c/test/test_extfunc.py +++ b/pypy/translator/c/test/test_extfunc.py @@ -3,6 +3,7 @@ import os, time, sys from pypy.tool.udir import udir from pypy.rlib.rarithmetic import r_longlong +from pypy.annotation import model as annmodel from pypy.translator.c.test.test_genc import compile from pypy.translator.c.test.test_standalone import StandaloneTests posix = __import__(os.name) @@ -145,7 +146,7 @@ filename = str(py.path.local(__file__)) def call_access(path, mode): return os.access(path, mode) - f = compile(call_access, [str, int]) + f = compile(call_access, [annmodel.s_Str0, int]) for mode in os.R_OK, os.W_OK, os.X_OK, (os.R_OK | os.W_OK | os.X_OK): assert f(filename, mode) == os.access(filename, mode) @@ -225,7 +226,7 @@ def test_system(): def does_stuff(cmd): return os.system(cmd) - f1 = compile(does_stuff, [str]) + f1 = compile(does_stuff, [annmodel.s_Str0]) res = f1("echo hello") assert res == 0 @@ -311,7 +312,7 @@ def test_chdir(): def does_stuff(path): os.chdir(path) - f1 = compile(does_stuff, [str]) + f1 = compile(does_stuff, [annmodel.s_Str0]) curdir = os.getcwd() try: os.chdir('..') @@ -325,7 +326,7 @@ os.rmdir(path) else: os.mkdir(path, 0777) - f1 = compile(does_stuff, [str, bool]) + f1 = compile(does_stuff, [annmodel.s_Str0, bool]) dirname = str(udir.join('test_mkdir_rmdir')) f1(dirname, False) assert os.path.exists(dirname) and os.path.isdir(dirname) @@ -628,7 +629,7 @@ return os.environ[s] except KeyError: return '--missing--' - func = compile(fn, [str]) + func = compile(fn, [annmodel.s_Str0]) os.environ.setdefault('USER', 'UNNAMED_USER') result = func('USER') assert result == os.environ['USER'] @@ -640,7 +641,7 @@ res = os.environ.get(s) if res is None: res = '--missing--' return res - func = compile(fn, [str]) + func = compile(fn, [annmodel.s_Str0]) os.environ.setdefault('USER', 'UNNAMED_USER') result = func('USER') assert result == os.environ['USER'] @@ -654,7 +655,7 @@ os.environ[s] = t3 os.environ[s] = t4 os.environ[s] = t5 - func = compile(fn, [str, str, str, str, str, str]) + func = compile(fn, [annmodel.s_Str0] * 6) func('PYPY_TEST_DICTLIKE_ENVIRON', 'a', 'b', 'c', 'FOOBAR', '42', expected_extra_mallocs = (2, 3, 4)) # at least two, less than 5 assert _real_getenv('PYPY_TEST_DICTLIKE_ENVIRON') == '42' @@ -678,7 +679,7 @@ else: raise Exception("should have raised!") # os.environ[s5] stays - func = compile(fn, [str, str, str, str, str]) + func = compile(fn, [annmodel.s_Str0] * 5) if hasattr(__import__(os.name), 'unsetenv'): expected_extra_mallocs = range(2, 10) # at least 
2, less than 10: memory for s1, s2, s3, s4 should be freed @@ -743,7 +744,7 @@ raise AssertionError("should have failed!") result = os.listdir(s) return '/'.join(result) - func = compile(mylistdir, [str]) + func = compile(mylistdir, [annmodel.s_Str0]) for testdir in [str(udir), os.curdir]: result = func(testdir) result = result.split('/') diff --git a/pypy/translator/cli/test/runtest.py b/pypy/translator/cli/test/runtest.py --- a/pypy/translator/cli/test/runtest.py +++ b/pypy/translator/cli/test/runtest.py @@ -276,7 +276,7 @@ def get_annotation(x): if isinstance(x, basestring) and len(x) > 1: - return SomeString() + return SomeString(no_nul='\x00' not in x) else: return lltype_to_annotation(typeOf(x)) diff --git a/pypy/translator/driver.py b/pypy/translator/driver.py --- a/pypy/translator/driver.py +++ b/pypy/translator/driver.py @@ -184,6 +184,7 @@ self.standalone = standalone if standalone: + # the 'argv' parameter inputtypes = [s_list_of_strings] self.inputtypes = inputtypes diff --git a/pypy/translator/goal/nanos.py b/pypy/translator/goal/nanos.py --- a/pypy/translator/goal/nanos.py +++ b/pypy/translator/goal/nanos.py @@ -266,7 +266,7 @@ raise NotImplementedError("os.name == %r" % (os.name,)) def getenv(space, w_name): - name = space.str_w(w_name) + name = space.str0_w(w_name) return space.wrap(os.environ.get(name)) getenv_w = interp2app(getenv) diff --git a/pypy/translator/goal/targetpypystandalone.py b/pypy/translator/goal/targetpypystandalone.py --- a/pypy/translator/goal/targetpypystandalone.py +++ b/pypy/translator/goal/targetpypystandalone.py @@ -159,6 +159,8 @@ ## if config.translation.type_system == 'ootype': ## config.objspace.usemodules.suggest(rbench=True) + config.translation.suggest(check_str_without_nul=True) + if config.translation.thread: config.objspace.usemodules.thread = True elif config.objspace.usemodules.thread: From noreply at buildbot.pypy.org Thu Feb 9 16:47:44 2012 From: noreply at buildbot.pypy.org (fijal) Date: Thu, 9 Feb 2012 16:47:44 +0100 (CET) Subject: [pypy-commit] pypy backend-vector-ops: enough to run vector ops on the simplest thing in numpy Message-ID: <20120209154744.2462382B1E@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: backend-vector-ops Changeset: r52307:922b890924eb Date: 2012-02-09 17:47 +0200 http://bitbucket.org/pypy/pypy/changeset/922b890924eb/ Log: enough to run vector ops on the simplest thing in numpy diff --git a/pypy/jit/codewriter/test/test_jtransform.py b/pypy/jit/codewriter/test/test_jtransform.py --- a/pypy/jit/codewriter/test/test_jtransform.py +++ b/pypy/jit/codewriter/test/test_jtransform.py @@ -139,12 +139,15 @@ EI.OS_UNIEQ_NONNULL_CHAR: ([PUNICODE, UNICHAR], INT), EI.OS_UNIEQ_CHECKNULL_CHAR: ([PUNICODE, UNICHAR], INT), EI.OS_UNIEQ_LENGTHOK: ([PUNICODE, PUNICODE], INT), + EI.OS_ASSERT_ALIGNED: ([INT], lltype.Void), } argtypes = argtypes[oopspecindex] assert argtypes[0] == [v.concretetype for v in op.args[1:]] assert argtypes[1] == op.result.concretetype if oopspecindex == EI.OS_STR2UNICODE: assert extraeffect == EI.EF_ELIDABLE_CAN_RAISE + elif oopspecindex == EI.OS_ASSERT_ALIGNED: + assert extraeffect == EI.EF_CANNOT_RAISE else: assert extraeffect == EI.EF_ELIDABLE_CANNOT_RAISE return 'calldescr-%d' % oopspecindex @@ -1079,6 +1082,18 @@ assert op1.args[2] == ListOfKind('ref', [v1]) assert op1.result == v2 +def test_assert_aligned(): + from pypy.rlib import jit + + v = varoftype(lltype.Signed) # does not matter + FUNC = lltype.FuncType([lltype.Signed], lltype.Void) + func = lltype.functionptr(FUNC, 
'assert_aligned', + _callable=jit.assert_aligned) + op = SpaceOperation('direct_call', [const(func), v], varoftype(lltype.Void)) + tr = Transformer(FakeCPU(), FakeBuiltinCallControl()) + op1 = tr.rewrite_operation(op) + assert op1.args[1] == 'calldescr-%d' % effectinfo.EffectInfo.OS_ASSERT_ALIGNED + def test_unicode_eq_checknull_char(): # test that the oopspec is present and correctly transformed PUNICODE = lltype.Ptr(rstr.UNICODE) diff --git a/pypy/jit/metainterp/optimizeopt/__init__.py b/pypy/jit/metainterp/optimizeopt/__init__.py --- a/pypy/jit/metainterp/optimizeopt/__init__.py +++ b/pypy/jit/metainterp/optimizeopt/__init__.py @@ -22,9 +22,10 @@ ('earlyforce', OptEarlyForce), ('pure', OptPure), ('heap', OptHeap), + ('ffi', None), ('vectorize', OptVectorize), # XXX check if CPU supports that maybe - ('ffi', None), - ('unroll', None)] + ('unroll', None), + ] # no direct instantiation of unroll unroll_all_opts = unrolling_iterable(ALL_OPTS) diff --git a/pypy/jit/metainterp/optimizeopt/vectorize.py b/pypy/jit/metainterp/optimizeopt/vectorize.py --- a/pypy/jit/metainterp/optimizeopt/vectorize.py +++ b/pypy/jit/metainterp/optimizeopt/vectorize.py @@ -116,6 +116,8 @@ self.ops_so_far.append(op) self.track[self.getvalue(op.result)] = Read(arr, track, op) + optimize_GETINTERIORFIELD_RAW = optimize_GETARRAYITEM_RAW + def optimize_INT_ADD(self, op): # only for += 1 one = self.getvalue(op.getarg(0)) @@ -159,6 +161,8 @@ self.full[arr] = [None] * VECTOR_SIZE self.full[arr][ti.index] = Write(arr, index, v, op) + optimize_SETINTERIORFIELD_RAW = optimize_SETARRAYITEM_RAW + def emit_vector_ops(self, forbidden_boxes): for arg in forbidden_boxes: if arg in self.track: @@ -184,7 +188,10 @@ if op.opnum in [rop.JUMP, rop.FINISH, rop.LABEL]: self.emit_vector_ops(op.getarglist()) elif op.is_guard(): - self.emit_vector_ops(op.getarglist() + op.getfailargs()) + lst = op.getarglist() + if op.getfailargs() is not None: + lst = lst + op.getfailargs() + self.emit_vector_ops(lst) elif op.is_always_pure(): # in theory no side effect ops, but stuff like malloc # can go in the way diff --git a/pypy/module/micronumpy/loop.py b/pypy/module/micronumpy/loop.py --- a/pypy/module/micronumpy/loop.py +++ b/pypy/module/micronumpy/loop.py @@ -8,7 +8,7 @@ class NumpyEvalFrame(object): _virtualizable2_ = ['iterators[*]', 'final_iter', 'arraylist[*]', - 'value', 'identity', 'cur_value'] + 'value', 'identity', 'cur_value', 'first_iteration'] @unroll_safe def __init__(self, iterators, arrays): @@ -24,6 +24,7 @@ self.final_iter = -1 self.cur_value = None self.identity = None + self.first_iteration = False def done(self): final_iter = promote(self.final_iter) @@ -76,6 +77,10 @@ numpy_driver.jit_merge_point(sig=sig, shapelen=shapelen, frame=frame, arr=arr) + frame.first_iteration = True # vectorization hint + sig.eval(frame, arr) + frame.first_iteration = False + frame.next(shapelen) sig.eval(frame, arr) frame.next(shapelen) return frame.cur_value diff --git a/pypy/module/micronumpy/signature.py b/pypy/module/micronumpy/signature.py --- a/pypy/module/micronumpy/signature.py +++ b/pypy/module/micronumpy/signature.py @@ -4,6 +4,7 @@ ViewTransform, BroadcastTransform from pypy.tool.pairtype import extendabletype from pypy.module.micronumpy.loop import ComputationDone +from pypy.rlib import jit """ Signature specifies both the numpy expression that has been constructed and the assembler to be compiled. 
This is a very important observation - @@ -150,7 +151,10 @@ def eval(self, frame, arr): iter = frame.iterators[self.iter_no] - return self.dtype.getitem(frame.arrays[self.array_no], iter.offset) + offset = iter.offset + if frame.first_iteration: + jit.assert_aligned(offset) + return self.dtype.getitem(frame.arrays[self.array_no], offset) class ScalarSignature(ConcreteSignature): def debug_repr(self): diff --git a/pypy/rlib/jit.py b/pypy/rlib/jit.py --- a/pypy/rlib/jit.py +++ b/pypy/rlib/jit.py @@ -874,4 +874,4 @@ @oopspec('assert_aligned(arg)') def assert_aligned(arg): - pass + keepalive_until_here(arg) From noreply at buildbot.pypy.org Thu Feb 9 16:54:51 2012 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 9 Feb 2012 16:54:51 +0100 (CET) Subject: [pypy-commit] pypy stm-gc: Hack hack hack. Message-ID: <20120209155451.1768682B1E@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-gc Changeset: r52308:ac82cd981a40 Date: 2012-02-09 16:54 +0100 http://bitbucket.org/pypy/pypy/changeset/ac82cd981a40/ Log: Hack hack hack. diff --git a/pypy/rpython/lltypesystem/lloperation.py b/pypy/rpython/lltypesystem/lloperation.py --- a/pypy/rpython/lltypesystem/lloperation.py +++ b/pypy/rpython/lltypesystem/lloperation.py @@ -401,6 +401,7 @@ 'stm_become_inevitable':LLOp(), 'stm_descriptor_init': LLOp(), 'stm_descriptor_done': LLOp(), + 'stm_writebarrier': LLOp(sideeffects=False), # __________ address operations __________ diff --git a/pypy/rpython/memory/gc/stmgc.py b/pypy/rpython/memory/gc/stmgc.py --- a/pypy/rpython/memory/gc/stmgc.py +++ b/pypy/rpython/memory/gc/stmgc.py @@ -4,7 +4,7 @@ from pypy.rpython.memory.gc.base import GCBase from pypy.rpython.annlowlevel import llhelper from pypy.rlib.rarithmetic import LONG_BIT -from pypy.rlib.debug import ll_assert, debug_start, debug_stop +from pypy.rlib.debug import ll_assert, debug_start, debug_stop, fatalerror from pypy.module.thread import ll_thread @@ -35,7 +35,7 @@ _alloc_flavor_ = "raw" inline_simple_malloc = True inline_simple_malloc_varsize = True - needs_write_barrier = "stm" + #needs_write_barrier = "stm" prebuilt_gc_objects_are_static_roots = False malloc_zero_filled = True # xxx? 
@@ -265,11 +265,11 @@ stm_operations = self.stm_operations # @always_inline - def write_barrier(obj): + def stm_writebarrier(obj): if self.header(obj).tid & GCFLAG_GLOBAL != 0: obj = _stm_write_barrier_global(obj) return obj - self.write_barrier = write_barrier + self.stm_writebarrier = stm_writebarrier # @dont_inline def _stm_write_barrier_global(obj): diff --git a/pypy/rpython/memory/gctransform/framework.py b/pypy/rpython/memory/gctransform/framework.py --- a/pypy/rpython/memory/gctransform/framework.py +++ b/pypy/rpython/memory/gctransform/framework.py @@ -357,7 +357,7 @@ getfn(GCClass.writebarrier_before_copy.im_func, [s_gc] + [annmodel.SomeAddress()] * 2 + [annmodel.SomeInteger()] * 3, annmodel.SomeBool()) - elif GCClass.needs_write_barrier and GCClass.needs_write_barrier != 'stm': + elif GCClass.needs_write_barrier: raise NotImplementedError("GC needs write barrier, but does not provide writebarrier_before_copy functionality") # in some GCs we can inline the common case of diff --git a/pypy/rpython/memory/gctransform/stmframework.py b/pypy/rpython/memory/gctransform/stmframework.py --- a/pypy/rpython/memory/gctransform/stmframework.py +++ b/pypy/rpython/memory/gctransform/stmframework.py @@ -1,5 +1,6 @@ from pypy.rpython.memory.gctransform.framework import FrameworkGCTransformer from pypy.rpython.memory.gctransform.framework import BaseRootWalker +from pypy.rpython.lltypesystem import llmemory from pypy.annotation import model as annmodel @@ -14,6 +15,9 @@ self.teardown_thread_ptr = getfn( GCClass.teardown_thread.im_func, [s_gc], annmodel.s_None) + self.stm_writebarrier_ptr = getfn( + self.gcdata.gc.stm_writebarrier, + [annmodel.SomeAddress()], annmodel.SomeAddress()) def push_roots(self, hop, keep_current_args=False): pass @@ -31,6 +35,15 @@ def gct_stm_descriptor_done(self, hop): hop.genop("direct_call", [self.teardown_thread_ptr, self.c_const_gc]) + def gct_stm_writebarrier(self, hop): + op = hop.spaceop + v_adr = hop.genop('cast_ptr_to_adr', + [op.args[0]], resulttype=llmemory.Address) + v_localadr = hop.genop("direct_call", + [self.stm_writebarrier_ptr, v_adr], + resulttype=llmemory.Address) + hop.genop('cast_adr_to_ptr', [v_localadr], resultvar=op.result) + class StmStackRootWalker(BaseRootWalker): diff --git a/pypy/translator/stm/transform.py b/pypy/translator/stm/transform.py --- a/pypy/translator/stm/transform.py +++ b/pypy/translator/stm/transform.py @@ -134,6 +134,15 @@ op1 = SpaceOperation('stm_getfield', op.args, op.result) newoperations.append(op1) + def with_writebarrier(self, newoperations, op): + v_arg = op.args[0] + v_local = varoftype(v_arg.concretetype) + op0 = SpaceOperation('stm_writebarrier', [v_arg], v_local) + newoperations.append(op0) + op1 = SpaceOperation('bare_' + op.opname, [v_local] + op.args[1:], + op.result) + return op1 + def stt_setfield(self, newoperations, op): STRUCT = op.args[0].concretetype.TO if op.args[2].concretetype is lltype.Void: @@ -141,9 +150,11 @@ elif (STRUCT._immutable_field(op.args[1].value) or 'stm_access_directly' in STRUCT._hints): op1 = op - else: + elif STRUCT._gckind == 'raw': turn_inevitable(newoperations, "setfield-raw") op1 = op + else: + op1 = self.with_writebarrier(newoperations, op) newoperations.append(op1) def stt_getarrayitem(self, newoperations, op): @@ -169,9 +180,11 @@ op1 = op #elif op.args[0] in self.access_directly: # op1 = op - else: + elif ARRAY._gckind == 'raw': turn_inevitable(newoperations, "setarrayitem-raw") op1 = op + else: + op1 = self.with_writebarrier(newoperations, op) newoperations.append(op1) def 
stt_getinteriorfield(self, newoperations, op): @@ -197,7 +210,7 @@ turn_inevitable(newoperations, "setinteriorfield-raw") op1 = op else: - op1 = SpaceOperation('stm_setinteriorfield', op.args, op.result) + op1 = self.with_writebarrier(newoperations, op) newoperations.append(op1) ## def stt_stm_transaction_boundary(self, newoperations, op): From noreply at buildbot.pypy.org Thu Feb 9 17:09:42 2012 From: noreply at buildbot.pypy.org (fijal) Date: Thu, 9 Feb 2012 17:09:42 +0100 (CET) Subject: [pypy-commit] pypy release-1.8.x: Added tag release-1.8 for changeset 48ebdce33e1b Message-ID: <20120209160942.DD31682B1E@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: release-1.8.x Changeset: r52309:241dd7b92608 Date: 2012-02-09 18:09 +0200 http://bitbucket.org/pypy/pypy/changeset/241dd7b92608/ Log: Added tag release-1.8 for changeset 48ebdce33e1b diff --git a/.hgtags b/.hgtags --- a/.hgtags +++ b/.hgtags @@ -2,3 +2,4 @@ b48df0bf4e75b81d98f19ce89d4a7dc3e1dab5e5 benchmarked d8ac7d23d3ec5f9a0fa1264972f74a010dbfd07f release-1.6 ff4af8f318821f7f5ca998613a60fca09aa137da release-1.7 +48ebdce33e1b33710f7336a1dbd2a9b0d32b1f89 release-1.8 From noreply at buildbot.pypy.org Thu Feb 9 17:13:44 2012 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 9 Feb 2012 17:13:44 +0100 (CET) Subject: [pypy-commit] pypy stm-gc: "Fix" the tests. Message-ID: <20120209161344.A608182B1E@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-gc Changeset: r52310:9d9ea531a26d Date: 2012-02-09 17:03 +0100 http://bitbucket.org/pypy/pypy/changeset/9d9ea531a26d/ Log: "Fix" the tests. diff --git a/pypy/rpython/memory/gc/test/test_stmgc.py b/pypy/rpython/memory/gc/test/test_stmgc.py --- a/pypy/rpython/memory/gc/test/test_stmgc.py +++ b/pypy/rpython/memory/gc/test/test_stmgc.py @@ -1,3 +1,4 @@ +import py from pypy.rpython.lltypesystem import lltype, llmemory, llarena, rffi from pypy.rpython.memory.gc.stmgc import StmGC, PRIMITIVE_SIZES, WORD from pypy.rpython.memory.gc.stmgc import GCFLAG_GLOBAL, GCFLAG_WAS_COPIED @@ -214,6 +215,7 @@ assert self.gc.header(obj).tid & GCFLAG_GLOBAL == 0 def test_reader_direct(self): + py.test.skip("xxx") s, s_adr = self.malloc(S) assert self.gc.header(s_adr).tid & GCFLAG_GLOBAL != 0 s.a = 42 @@ -229,6 +231,7 @@ assert value == 42 def test_reader_through_dict(self): + py.test.skip("xxx") s, s_adr = self.malloc(S) s.a = 42 # @@ -243,6 +246,7 @@ assert value == 84 def test_reader_sizes(self): + py.test.skip("xxx") for size, TYPE in PRIMITIVE_SIZES.items(): T = lltype.GcStruct('T', ('a', TYPE)) ofs_a = llmemory.offsetof(T, 'a') @@ -266,7 +270,7 @@ def test_write_barrier_exists(self): self.select_thread(1) t, t_adr = self.malloc(S) - obj = self.gc.write_barrier(t_adr) # local object + obj = self.gc.stm_writebarrier(t_adr) # local object assert obj == t_adr # self.select_thread(0) @@ -276,7 +280,7 @@ self.gc.header(s_adr).tid |= GCFLAG_WAS_COPIED self.gc.header(t_adr).tid |= GCFLAG_WAS_COPIED self.gc.stm_operations._tldicts[1][s_adr] = t_adr - obj = self.gc.write_barrier(s_adr) # global copied object + obj = self.gc.stm_writebarrier(s_adr) # global copied object assert obj == t_adr assert self.gc.stm_operations._transactional_copies == [] @@ -286,18 +290,18 @@ s.a = 12 s.b = 34 # - self.select_thread(1) - t_adr = self.gc.write_barrier(s_adr) # global object, not copied so far + self.select_thread(1) # global object, not copied so far + t_adr = self.gc.stm_writebarrier(s_adr) assert t_adr != s_adr t = t_adr.ptr assert t.a == 12 assert t.b == 34 assert 
self.gc.stm_operations._transactional_copies == [(s_adr, t_adr)] # - u_adr = self.gc.write_barrier(s_adr) # again + u_adr = self.gc.stm_writebarrier(s_adr) # again assert u_adr == t_adr # - u_adr = self.gc.write_barrier(u_adr) # local object + u_adr = self.gc.stm_writebarrier(u_adr) # local object assert u_adr == t_adr def test_commit_transaction_empty(self): @@ -312,7 +316,7 @@ s, s_adr = self.malloc(S) s.b = 12345 self.select_thread(1) - t_adr = self.gc.write_barrier(s_adr) # make a local copy + t_adr = self.gc.stm_writebarrier(s_adr) # make a local copy t = llmemory.cast_adr_to_ptr(t_adr, lltype.Ptr(S)) assert s != t assert self.gc.header(t_adr).version == s_adr @@ -333,7 +337,7 @@ assert sr.s1 == lltype.nullptr(S) assert sr.sr2 == lltype.nullptr(SR) self.select_thread(1) - tr_adr = self.gc.write_barrier(sr_adr) # make a local copy + tr_adr = self.gc.stm_writebarrier(sr_adr) # make a local copy tr = llmemory.cast_adr_to_ptr(tr_adr, lltype.Ptr(SR)) assert sr != tr t, t_adr = self.malloc(S) @@ -353,8 +357,8 @@ sr1, sr1_adr = self.malloc(SR) sr2, sr2_adr = self.malloc(SR) self.select_thread(1) - tr1_adr = self.gc.write_barrier(sr1_adr) # make a local copy - tr2_adr = self.gc.write_barrier(sr2_adr) # make a local copy + tr1_adr = self.gc.stm_writebarrier(sr1_adr) # make a local copy + tr2_adr = self.gc.stm_writebarrier(sr2_adr) # make a local copy tr1 = llmemory.cast_adr_to_ptr(tr1_adr, lltype.Ptr(SR)) tr2 = llmemory.cast_adr_to_ptr(tr2_adr, lltype.Ptr(SR)) tr3, tr3_adr = self.malloc(SR) From noreply at buildbot.pypy.org Thu Feb 9 17:13:45 2012 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 9 Feb 2012 17:13:45 +0100 (CET) Subject: [pypy-commit] pypy stm-gc: Piece together malloc_varsize_clear(). Message-ID: <20120209161345.DBD6F82B1E@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-gc Changeset: r52311:1106dbbc343f Date: 2012-02-09 17:03 +0100 http://bitbucket.org/pypy/pypy/changeset/1106dbbc343f/ Log: Piece together malloc_varsize_clear(). diff --git a/pypy/rpython/memory/gc/stmgc.py b/pypy/rpython/memory/gc/stmgc.py --- a/pypy/rpython/memory/gc/stmgc.py +++ b/pypy/rpython/memory/gc/stmgc.py @@ -187,7 +187,20 @@ def malloc_varsize_clear(self, typeid, length, size, itemsize, offset_to_length): - raise NotImplementedError + # XXX blindly copied from malloc_fixedsize_clear() for now. + # XXX Be more subtle, e.g. detecting overflows, at least + tls = self.collector.get_tls() + flags = tls.malloc_flags + size_gc_header = self.gcheaderbuilder.size_gc_header + nonvarsize = size_gc_header + size + totalsize = nonvarsize + itemsize * length + totalsize = llarena.round_up_for_allocation(totalsize) + result = self._allocate_bump_pointer(tls, totalsize) + llarena.arena_reserve(result, totalsize) + obj = result + size_gc_header + self.init_gc_object(result, typeid, flags=flags) + (obj + offset_to_length).signed[0] = length + return llmemory.cast_adr_to_ptr(obj, llmemory.GCREF) def _malloc_local_raw(self, tls, size): From noreply at buildbot.pypy.org Thu Feb 9 17:13:47 2012 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 9 Feb 2012 17:13:47 +0100 (CET) Subject: [pypy-commit] pypy stm-gc: Fix the tests and add in_main_thread() as an stm call. Message-ID: <20120209161347.20F8B82B1E@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-gc Changeset: r52312:75b3f0c7b338 Date: 2012-02-09 17:12 +0100 http://bitbucket.org/pypy/pypy/changeset/75b3f0c7b338/ Log: Fix the tests and add in_main_thread() as an stm call. 
diff --git a/pypy/rpython/memory/gc/stmgc.py b/pypy/rpython/memory/gc/stmgc.py --- a/pypy/rpython/memory/gc/stmgc.py +++ b/pypy/rpython/memory/gc/stmgc.py @@ -286,6 +286,8 @@ # @dont_inline def _stm_write_barrier_global(obj): + if stm_operations.in_main_thread(): + return obj # we need to find of make a local copy hdr = self.header(obj) if hdr.tid & GCFLAG_WAS_COPIED == 0: diff --git a/pypy/rpython/memory/gc/test/test_stmgc.py b/pypy/rpython/memory/gc/test/test_stmgc.py --- a/pypy/rpython/memory/gc/test/test_stmgc.py +++ b/pypy/rpython/memory/gc/test/test_stmgc.py @@ -26,6 +26,9 @@ def setup_size_getter(self, getsize_fn): self._getsize_fn = getsize_fn + def in_main_thread(self): + return self.threadnum == 0 + def set_tls(self, tls, in_main_thread): assert lltype.typeOf(tls) == llmemory.Address assert tls @@ -304,6 +307,11 @@ u_adr = self.gc.stm_writebarrier(u_adr) # local object assert u_adr == t_adr + def test_write_barrier_main_thread(self): + t, t_adr = self.malloc(S) + obj = self.gc.stm_writebarrier(t_adr) # main thread + assert obj == t_adr + def test_commit_transaction_empty(self): self.select_thread(1) s, s_adr = self.malloc(S) diff --git a/pypy/translator/stm/src_stm/et.c b/pypy/translator/stm/src_stm/et.c --- a/pypy/translator/stm/src_stm/et.c +++ b/pypy/translator/stm/src_stm/et.c @@ -685,7 +685,7 @@ if (is_main_thread(d)) return; -#ifdef RPY_STM_ASSERT +#ifdef RPY_STM_DEBUG_PRINT PYPY_DEBUG_START("stm-inevitable"); if (PYPY_HAVE_DEBUG_PRINTS) { @@ -696,7 +696,7 @@ if (is_inevitable(d)) { -#ifdef RPY_STM_ASSERT +#ifdef RPY_STM_DEBUG_PRINT PYPY_DEBUG_STOP("stm-inevitable"); #endif return; /* I am already inevitable */ @@ -726,7 +726,7 @@ mutex_unlock(); } d->setjmp_buf = NULL; /* inevitable from now on */ -#ifdef RPY_STM_ASSERT +#ifdef RPY_STM_DEBUG_PRINT PYPY_DEBUG_STOP("stm-inevitable"); #endif } @@ -803,4 +803,10 @@ rpython_get_size = getsize_fn; } +long stm_in_main_thread(void) +{ + struct tx_descriptor *d = thread_descriptor; + return is_main_thread(d); +} + #endif /* PYPY_NOT_MAIN_FILE */ diff --git a/pypy/translator/stm/stmgcintf.py b/pypy/translator/stm/stmgcintf.py --- a/pypy/translator/stm/stmgcintf.py +++ b/pypy/translator/stm/stmgcintf.py @@ -18,6 +18,8 @@ setup_size_getter = smexternal('stm_setup_size_getter', [GETSIZE], lltype.Void) + in_main_thread = smexternal('stm_in_main_thread', [], lltype.Signed) + set_tls = smexternal('stm_set_tls', [llmemory.Address, lltype.Signed], lltype.Void) get_tls = smexternal('stm_get_tls', [], llmemory.Address) diff --git a/pypy/translator/stm/test/test_stmgcintf.py b/pypy/translator/stm/test/test_stmgcintf.py --- a/pypy/translator/stm/test/test_stmgcintf.py +++ b/pypy/translator/stm/test/test_stmgcintf.py @@ -173,3 +173,10 @@ lltype.free(s2, flavor='raw') lltype.free(s1, flavor='raw') test_stm_copy_transactional_to_raw.in_main_thread = False + + def test_in_main_thread(self): + assert stm_operations.in_main_thread() + + def test_not_in_main_thread(self): + assert not stm_operations.in_main_thread() + test_not_in_main_thread.in_main_thread = False From noreply at buildbot.pypy.org Thu Feb 9 17:13:48 2012 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 9 Feb 2012 17:13:48 +0100 (CET) Subject: [pypy-commit] pypy stm-gc: Fix Message-ID: <20120209161348.55C3C82B1E@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-gc Changeset: r52313:13f64ca8b78c Date: 2012-02-09 17:13 +0100 http://bitbucket.org/pypy/pypy/changeset/13f64ca8b78c/ Log: Fix diff --git a/pypy/translator/stm/src_stm/et.c b/pypy/translator/stm/src_stm/et.c --- 
a/pypy/translator/stm/src_stm/et.c +++ b/pypy/translator/stm/src_stm/et.c @@ -687,11 +687,13 @@ #ifdef RPY_STM_DEBUG_PRINT PYPY_DEBUG_START("stm-inevitable"); +# ifdef RPY_STM_ASSERT if (PYPY_HAVE_DEBUG_PRINTS) { fprintf(PYPY_DEBUG_FILE, "%s%s\n", why, is_inevitable(d) ? "" : " <===="); } +# endif #endif if (is_inevitable(d)) From noreply at buildbot.pypy.org Thu Feb 9 17:19:12 2012 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 9 Feb 2012 17:19:12 +0100 (CET) Subject: [pypy-commit] pypy stm-gc: Fixes Message-ID: <20120209161912.B360C82B1E@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-gc Changeset: r52314:de00dab5772e Date: 2012-02-09 17:18 +0100 http://bitbucket.org/pypy/pypy/changeset/de00dab5772e/ Log: Fixes diff --git a/pypy/rpython/lltypesystem/rffi.py b/pypy/rpython/lltypesystem/rffi.py --- a/pypy/rpython/lltypesystem/rffi.py +++ b/pypy/rpython/lltypesystem/rffi.py @@ -326,6 +326,7 @@ aroundstate._freeze_() class StackCounter: + _alloc_flavor_ = "raw" def _freeze_(self): self.stacks_counter = 1 # number of "stack pieces": callbacks return False # and threads increase it by one diff --git a/pypy/translator/stm/src_stm/et.c b/pypy/translator/stm/src_stm/et.c --- a/pypy/translator/stm/src_stm/et.c +++ b/pypy/translator/stm/src_stm/et.c @@ -682,7 +682,7 @@ by another thread. We set the lowest bit in global_timestamp to 1. */ struct tx_descriptor *d = thread_descriptor; - if (is_main_thread(d)) + if (d == NULL || is_main_thread(d)) return; #ifdef RPY_STM_DEBUG_PRINT @@ -696,7 +696,7 @@ # endif #endif - if (is_inevitable(d)) + if (is_inevitable(d)) /* also when the transaction is inactive */ { #ifdef RPY_STM_DEBUG_PRINT PYPY_DEBUG_STOP("stm-inevitable"); From noreply at buildbot.pypy.org Thu Feb 9 17:21:58 2012 From: noreply at buildbot.pypy.org (fijal) Date: Thu, 9 Feb 2012 17:21:58 +0100 (CET) Subject: [pypy-commit] pypy.org extradoc: update the website Message-ID: <20120209162158.BD7A782B1E@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: extradoc Changeset: r311:d8d3269d1561 Date: 2012-02-09 18:21 +0200 http://bitbucket.org/pypy/pypy.org/changeset/d8d3269d1561/ Log: update the website diff --git a/compat.html b/compat.html --- a/compat.html +++ b/compat.html @@ -45,11 +45,11 @@

Python compatibility

-

PyPy implements the Python language version 2.7.1. It supports all of the core +

PyPy implements the Python language version 2.7.2. It supports all of the core language, passing Python test suite (with minor modifications that were already accepted in the main python in newer versions). It supports most of the commonly used Python standard library modules; details below.

-

PyPy has alpha/beta-level support for the CPython C API, however, as of 1.7 +

PyPy has alpha/beta-level support for the CPython C API, however, as of 1.8 release this feature is not yet complete. Many libraries will require a bit of effort to work, but there are known success stories. Check out PyPy blog for updates, as well as the Compatibility Wiki.

diff --git a/download.html b/download.html --- a/download.html +++ b/download.html @@ -48,7 +48,7 @@

There are nightly binary builds available. Those builds are not always as stable as the release, but they contain numerous bugfixes and performance improvements.

-

Here are the various binaries of PyPy 1.7 that we provide for x86 Linux, +

Here are the various binaries of PyPy 1.8 that we provide for x86 Linux, Mac OS/X or Windows.

@@ -94,7 +94,7 @@ complicated, this reduces a bit the level of confidence we can put in the result.) -

These versions are not officially part of the release 1.7, which focuses +

These versions are not officially part of the release 1.8, which focuses on the JIT. You can find prebuilt binaries for them on our nightly build, or translate them yourself.

@@ -104,7 +104,7 @@ uncompressed, they run in-place. For now you can uncompress them either somewhere in your home directory or, say, in /opt, and if you want, put a symlink from somewhere like -/usr/local/bin/pypy to /path/to/pypy-1.7/bin/pypy. Do +/usr/local/bin/pypy to /path/to/pypy-1.8/bin/pypy. Do not move or copy the executable pypy outside the tree – put a symlink to it, otherwise it will not find its libraries.

@@ -123,8 +123,8 @@
  • Get the source code. The following packages contain the source at the same revision as the above binaries:

    Or you can checkout the current trunk using Mercurial (the trunk usually works and is of course more up-to-date):

    @@ -192,13 +192,13 @@

    Checksums

    Here are the checksums for each of the downloads (md5 and sha1):

    -ceb8dfe7d9d1aeb558553b91b381a1a8  pypy-1.7-linux64.tar.bz2
    -8a6e2583902bc6f2661eb3c96b45f4e3  pypy-1.7-linux.tar.bz2
    -ff979054fc8e17b4973ffebb9844b159  pypy-1.7-osx64.tar.bz2
    +6dd134f20c0038f63f506a8192b1cfed  pypy-1.8-linux64.tar.bz2
    +b8563942704531374f121eca0b5f643b  pypy-1.8-linux.tar.bz2
    +65391dc681362bf9f911956d113ff79a  pypy-1.8-osx64.tar.bz2
     fd0ad58b92ca0933c087bb93a82fda9e  release-1.7.tar.bz2
    -d364e3aa0dd5e0e1ad7f1800a0bfa7e87250c8bb  pypy-1.7-linux64.tar.bz2
    -68554c4cbcc20b03ff56b6a1495a6ecf8f24b23a  pypy-1.7-linux.tar.bz2
    -cedeb1d6bf0431589f62e8c95b71fbfe6c4e7b96  pypy-1.7-osx64.tar.bz2
    +b9d2f69af4f8427685f49ad1558744f7a3f3f1b8  pypy-1.8-linux64.tar.bz2
    +eb5af839cfc22c625b77b645324c8bf3f1b7e03b  pypy-1.8-linux.tar.bz2
    +636ef5fd43478d23cf5faaffc92cb8dc187b2df1  pypy-1.8-osx64.tar.bz2
     b4be3a8dc69cd838a49382867db3c41864b9e8d9  release-1.7.tar.bz2
     
    diff --git a/features.html b/features.html --- a/features.html +++ b/features.html @@ -45,8 +45,8 @@

    Features

    -

    PyPy 1.7 implements Python 2.7.1 and runs on Intel -x86 (IA-32) and x86_64 platforms, with ARM being underway. +

    PyPy 1.8 implements Python 2.7.2 and runs on Intel +x86 (IA-32) and x86_64 platforms, with ARM and PPC being underway. It supports all of the core language, passing the Python test suite (with minor modifications that were already accepted in the main python in newer versions). It supports most of the commonly used Python diff --git a/index.html b/index.html --- a/index.html +++ b/index.html @@ -46,7 +46,7 @@

    PyPy

    PyPy is a fast, compliant alternative implementation of the Python -language (2.7.1). It has several advantages and distinct features:

    +language (2.7.2). It has several advantages and distinct features:

    • Speed: thanks to its Just-in-Time compiler, Python programs @@ -54,7 +54,7 @@
    • Memory usage: large, memory-hungry Python programs might end up taking less space than they do in CPython.
    • Compatibility: PyPy is highly compatible with existing python code. -It supports ctypes and can run popular python libraries like twisted +It supports ctypes and can run popular python libraries like twisted and django.
    • Sandboxing: PyPy provides the ability to run untrusted code in a fully secure way.
    • @@ -63,7 +63,7 @@
    • As well as other features.
    -

    Download and try out the PyPy release 1.7!

    +

    Download and try out the PyPy release 1.8!

    Want to know more? A good place to start is our detailed speed and compatibility reports!

    diff --git a/source/compat.txt b/source/compat.txt --- a/source/compat.txt +++ b/source/compat.txt @@ -3,12 +3,12 @@ title: Python compatibility --- -PyPy implements the Python language version 2.7.1. It supports all of the core +PyPy implements the Python language version 2.7.2. It supports all of the core language, passing Python test suite (with minor modifications that were already accepted in the main python in newer versions). It supports most of the commonly used Python `standard library modules`_; details below. -PyPy has **alpha/beta-level** support for the `CPython C API`_, however, as of 1.7 +PyPy has **alpha/beta-level** support for the `CPython C API`_, however, as of 1.8 release this feature is not yet complete. Many libraries will require a bit of effort to work, but there are known success stories. Check out PyPy blog for updates, as well as the `Compatibility Wiki`__. diff --git a/source/download.txt b/source/download.txt --- a/source/download.txt +++ b/source/download.txt @@ -12,7 +12,7 @@ as stable as the release, but they contain numerous bugfixes and performance improvements. -Here are the various binaries of **PyPy 1.7** that we provide for x86 Linux, +Here are the various binaries of **PyPy 1.8** that we provide for x86 Linux, Mac OS/X or Windows. .. class:: download_menu @@ -39,7 +39,7 @@ x86 CPUs that have the SSE2_ instruction set (most of them do, nowadays), or on x86-64 CPUs. They also contain `stackless`_ extensions, like `greenlets`_. -(This is the official release 1.7; +(This is the official release 1.8; for the most up-to-date version see below.) * `Linux binary (32bit)`__ (`openssl0.9.8 notes`_) @@ -47,10 +47,10 @@ * `Mac OS/X binary (64bit)`__ * `Windows binary (32bit)`__ -.. __: https://bitbucket.org/pypy/pypy/downloads/pypy-1.7-linux.tar.bz2 -.. __: https://bitbucket.org/pypy/pypy/downloads/pypy-1.7-linux64.tar.bz2 -.. __: https://bitbucket.org/pypy/pypy/downloads/pypy-1.7-osx64.tar.bz2 -.. __: https://bitbucket.org/pypy/pypy/downloads/pypy-1.7-win32.zip +.. __: https://bitbucket.org/pypy/pypy/downloads/pypy-1.8-linux.tar.bz2 +.. __: https://bitbucket.org/pypy/pypy/downloads/pypy-1.8-linux64.tar.bz2 +.. __: https://bitbucket.org/pypy/pypy/downloads/pypy-1.8-osx64.tar.bz2 +.. __: https://bitbucket.org/pypy/pypy/downloads/pypy-1.8-win32.zip .. VS 2010 runtime libraries: http://www.microsoft.com/downloads/en/details.aspx?familyid=A7B7A05E-6DE6-4D3A-A423-37BF0912DB84 If your CPU is really old, it may not have SSE2. In this case, you need @@ -75,7 +75,7 @@ complicated, this reduces a bit the level of confidence we can put in the result.) -These versions are not officially part of the release 1.7, which focuses +These versions are not officially part of the release 1.8, which focuses on the JIT. You can find prebuilt binaries for them on our `nightly build`_, or translate_ them yourself. @@ -89,7 +89,7 @@ uncompressed, they run in-place. For now you can uncompress them either somewhere in your home directory or, say, in ``/opt``, and if you want, put a symlink from somewhere like -``/usr/local/bin/pypy`` to ``/path/to/pypy-1.7/bin/pypy``. Do +``/usr/local/bin/pypy`` to ``/path/to/pypy-1.8/bin/pypy``. Do not move or copy the executable ``pypy`` outside the tree --- put a symlink to it, otherwise it will not find its libraries. @@ -117,11 +117,11 @@ 1. Get the source code. 
The following packages contain the source at the same revision as the above binaries: - * `pypy-1.7-src.tar.bz2`__ (sources, Unix line endings) - * `pypy-1.7-src.zip`__ (sources, Unix line endings too, sorry) + * `pypy-1.8-src.tar.bz2`__ (sources, Unix line endings) + * `pypy-1.8-src.zip`__ (sources, Unix line endings too, sorry) - .. __: https://bitbucket.org/pypy/pypy/get/release-1.7.tar.bz2 - .. __: https://bitbucket.org/pypy/pypy/get/release-1.7.zip + .. __: https://bitbucket.org/pypy/pypy/get/release-1.8.tar.bz2 + .. __: https://bitbucket.org/pypy/pypy/get/release-1.8.zip Or you can checkout the current trunk using Mercurial_ (the trunk usually works and is of course more up-to-date):: @@ -197,12 +197,14 @@ Here are the checksums for each of the downloads (md5 and sha1):: - ceb8dfe7d9d1aeb558553b91b381a1a8 pypy-1.7-linux64.tar.bz2 - 8a6e2583902bc6f2661eb3c96b45f4e3 pypy-1.7-linux.tar.bz2 - ff979054fc8e17b4973ffebb9844b159 pypy-1.7-osx64.tar.bz2 + 6dd134f20c0038f63f506a8192b1cfed pypy-1.8-linux64.tar.bz2 + b8563942704531374f121eca0b5f643b pypy-1.8-linux.tar.bz2 + 65391dc681362bf9f911956d113ff79a pypy-1.8-osx64.tar.bz2 + fd0ad58b92ca0933c087bb93a82fda9e release-1.7.tar.bz2 - d364e3aa0dd5e0e1ad7f1800a0bfa7e87250c8bb pypy-1.7-linux64.tar.bz2 - 68554c4cbcc20b03ff56b6a1495a6ecf8f24b23a pypy-1.7-linux.tar.bz2 - cedeb1d6bf0431589f62e8c95b71fbfe6c4e7b96 pypy-1.7-osx64.tar.bz2 + b9d2f69af4f8427685f49ad1558744f7a3f3f1b8 pypy-1.8-linux64.tar.bz2 + eb5af839cfc22c625b77b645324c8bf3f1b7e03b pypy-1.8-linux.tar.bz2 + 636ef5fd43478d23cf5faaffc92cb8dc187b2df1 pypy-1.8-osx64.tar.bz2 + b4be3a8dc69cd838a49382867db3c41864b9e8d9 release-1.7.tar.bz2 diff --git a/source/features.txt b/source/features.txt --- a/source/features.txt +++ b/source/features.txt @@ -6,8 +6,8 @@ PyPy features =========================================================== -**PyPy 1.7** implements **Python 2.7.1** and runs on Intel -`x86 (IA-32)`_ and `x86_64`_ platforms, with ARM being underway. +**PyPy 1.8** implements **Python 2.7.2** and runs on Intel +`x86 (IA-32)`_ and `x86_64`_ platforms, with ARM and PPC being underway. It supports all of the core language, passing the Python test suite (with minor modifications that were already accepted in the main python in newer versions). It supports most of the commonly used Python diff --git a/source/index.txt b/source/index.txt --- a/source/index.txt +++ b/source/index.txt @@ -4,7 +4,7 @@ --- PyPy is a `fast`_, `compliant`_ alternative implementation of the `Python`_ -language (2.7.1). It has several advantages and distinct features: +language (2.7.2). It has several advantages and distinct features: * **Speed:** thanks to its Just-in-Time compiler, Python programs often run `faster`_ on PyPy. `(What is a JIT compiler?)`_ @@ -26,7 +26,7 @@ .. class:: download -`Download and try out the PyPy release 1.7!`__ +`Download and try out the PyPy release 1.8!`__ .. __: download.html @@ -40,10 +40,10 @@ .. _`(What is a JIT compiler?)`: http://en.wikipedia.org/wiki/Just-in-time_compilation .. _`run untrusted code`: features.html#sandboxing .. _`compliant`: compat.html -.. _`Python docs`: http://docs.python.org/release/2.7.1/ +.. _`Python docs`: http://docs.python.org/release/2.7.2/ .. _`twisted`: http://twistedmatrix.com/ .. _`django`: http://www.djangoproject.com/ -.. _`ctypes`: http://docs.python.org/release/2.7.1/library/ctypes.html +.. _`ctypes`: http://docs.python.org/release/2.7.2/library/ctypes.html .. _`features`: features.html .. 
_`less space`: http://morepypy.blogspot.com/2009/10/gc-improvements.html .. _`highly compatible`: compat.html From noreply at buildbot.pypy.org Thu Feb 9 17:31:17 2012 From: noreply at buildbot.pypy.org (fijal) Date: Thu, 9 Feb 2012 17:31:17 +0100 (CET) Subject: [pypy-commit] pypy default: update docs & version Message-ID: <20120209163117.E1ABC82B1E@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: Changeset: r52315:a4aa0c1a1241 Date: 2012-02-09 18:27 +0200 http://bitbucket.org/pypy/pypy/changeset/a4aa0c1a1241/ Log: update docs & version diff --git a/pypy/doc/getting-started.rst b/pypy/doc/getting-started.rst --- a/pypy/doc/getting-started.rst +++ b/pypy/doc/getting-started.rst @@ -53,11 +53,10 @@ PyPy is ready to be executed as soon as you unpack the tarball or the zip file, with no need to install it in any specific location:: - $ tar xf pypy-1.7-linux.tar.bz2 - - $ ./pypy-1.7/bin/pypy - Python 2.7.1 (?, Apr 27 2011, 12:44:21) - [PyPy 1.7.0 with GCC 4.4.3] on linux2 + $ tar xf pypy-1.8-linux.tar.bz2 + $ ./pypy-1.8/bin/pypy + Python 2.7.1 (48ebdce33e1b, Feb 09 2012, 00:55:31) + [PyPy 1.8.0 with GCC 4.4.3] on linux2 Type "help", "copyright", "credits" or "license" for more information. And now for something completely different: ``implementing LOGO in LOGO: "turtles all the way down"'' @@ -75,14 +74,14 @@ $ curl -O https://raw.github.com/pypa/pip/master/contrib/get-pip.py - $ ./pypy-1.7/bin/pypy distribute_setup.py + $ ./pypy-1.8/bin/pypy distribute_setup.py - $ ./pypy-1.7/bin/pypy get-pip.py + $ ./pypy-1.8/bin/pypy get-pip.py - $ ./pypy-1.7/bin/pip install pygments # for example + $ ./pypy-1.8/bin/pip install pygments # for example -3rd party libraries will be installed in ``pypy-1.7/site-packages``, and -the scripts in ``pypy-1.7/bin``. +3rd party libraries will be installed in ``pypy-1.8/site-packages``, and +the scripts in ``pypy-1.8/bin``. Installing using virtualenv --------------------------- diff --git a/pypy/doc/index.rst b/pypy/doc/index.rst --- a/pypy/doc/index.rst +++ b/pypy/doc/index.rst @@ -15,7 +15,7 @@ * `FAQ`_: some frequently asked questions. -* `Release 1.7`_: the latest official release +* `Release 1.8`_: the latest official release * `PyPy Blog`_: news and status info about PyPy @@ -75,7 +75,7 @@ .. _`Getting Started`: getting-started.html .. _`Papers`: extradoc.html .. _`Videos`: video-index.html -.. _`Release 1.7`: http://pypy.org/download.html +.. _`Release 1.8`: http://pypy.org/download.html .. _`speed.pypy.org`: http://speed.pypy.org .. _`RPython toolchain`: translation.html .. _`potential project ideas`: project-ideas.html @@ -120,9 +120,9 @@ Windows, on top of .NET, and on top of Java. To dig into PyPy it is recommended to try out the current Mercurial default branch, which is always working or mostly working, -instead of the latest release, which is `1.7`__. +instead of the latest release, which is `1.8`__. -.. __: release-1.7.0.html +.. __: release-1.8.0.html PyPy is mainly developed on Linux and Mac OS X. 
Windows is supported, but platform-specific bugs tend to take longer before we notice and fix diff --git a/pypy/module/sys/version.py b/pypy/module/sys/version.py --- a/pypy/module/sys/version.py +++ b/pypy/module/sys/version.py @@ -7,7 +7,7 @@ from pypy.interpreter import gateway #XXX # the release serial 42 is not in range(16) -CPYTHON_VERSION = (2, 7, 1, "final", 42) #XXX # sync patchlevel.h +CPYTHON_VERSION = (2, 7, 2, "final", 42) #XXX # sync patchlevel.h CPYTHON_API_VERSION = 1013 #XXX # sync with include/modsupport.h PYPY_VERSION = (1, 8, 1, "dev", 0) #XXX # sync patchlevel.h From noreply at buildbot.pypy.org Thu Feb 9 17:31:19 2012 From: noreply at buildbot.pypy.org (fijal) Date: Thu, 9 Feb 2012 17:31:19 +0100 (CET) Subject: [pypy-commit] pypy release-1.8.x: update docs & version Message-ID: <20120209163119.2370F82B1E@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: release-1.8.x Changeset: r52316:d2363496b90e Date: 2012-02-09 18:27 +0200 http://bitbucket.org/pypy/pypy/changeset/d2363496b90e/ Log: update docs & version diff --git a/pypy/doc/getting-started.rst b/pypy/doc/getting-started.rst --- a/pypy/doc/getting-started.rst +++ b/pypy/doc/getting-started.rst @@ -53,11 +53,10 @@ PyPy is ready to be executed as soon as you unpack the tarball or the zip file, with no need to install it in any specific location:: - $ tar xf pypy-1.7-linux.tar.bz2 - - $ ./pypy-1.7/bin/pypy - Python 2.7.1 (?, Apr 27 2011, 12:44:21) - [PyPy 1.7.0 with GCC 4.4.3] on linux2 + $ tar xf pypy-1.8-linux.tar.bz2 + $ ./pypy-1.8/bin/pypy + Python 2.7.1 (48ebdce33e1b, Feb 09 2012, 00:55:31) + [PyPy 1.8.0 with GCC 4.4.3] on linux2 Type "help", "copyright", "credits" or "license" for more information. And now for something completely different: ``implementing LOGO in LOGO: "turtles all the way down"'' @@ -75,14 +74,14 @@ $ curl -O https://raw.github.com/pypa/pip/master/contrib/get-pip.py - $ ./pypy-1.7/bin/pypy distribute_setup.py + $ ./pypy-1.8/bin/pypy distribute_setup.py - $ ./pypy-1.7/bin/pypy get-pip.py + $ ./pypy-1.8/bin/pypy get-pip.py - $ ./pypy-1.7/bin/pip install pygments # for example + $ ./pypy-1.8/bin/pip install pygments # for example -3rd party libraries will be installed in ``pypy-1.7/site-packages``, and -the scripts in ``pypy-1.7/bin``. +3rd party libraries will be installed in ``pypy-1.8/site-packages``, and +the scripts in ``pypy-1.8/bin``. Installing using virtualenv --------------------------- diff --git a/pypy/doc/index.rst b/pypy/doc/index.rst --- a/pypy/doc/index.rst +++ b/pypy/doc/index.rst @@ -15,7 +15,7 @@ * `FAQ`_: some frequently asked questions. -* `Release 1.7`_: the latest official release +* `Release 1.8`_: the latest official release * `PyPy Blog`_: news and status info about PyPy @@ -75,7 +75,7 @@ .. _`Getting Started`: getting-started.html .. _`Papers`: extradoc.html .. _`Videos`: video-index.html -.. _`Release 1.7`: http://pypy.org/download.html +.. _`Release 1.8`: http://pypy.org/download.html .. _`speed.pypy.org`: http://speed.pypy.org .. _`RPython toolchain`: translation.html .. _`potential project ideas`: project-ideas.html @@ -120,9 +120,9 @@ Windows, on top of .NET, and on top of Java. To dig into PyPy it is recommended to try out the current Mercurial default branch, which is always working or mostly working, -instead of the latest release, which is `1.7`__. +instead of the latest release, which is `1.8`__. -.. __: release-1.7.0.html +.. __: release-1.8.0.html PyPy is mainly developed on Linux and Mac OS X. 
Windows is supported, but platform-specific bugs tend to take longer before we notice and fix diff --git a/pypy/module/sys/version.py b/pypy/module/sys/version.py --- a/pypy/module/sys/version.py +++ b/pypy/module/sys/version.py @@ -7,7 +7,7 @@ from pypy.interpreter import gateway #XXX # the release serial 42 is not in range(16) -CPYTHON_VERSION = (2, 7, 1, "final", 42) #XXX # sync patchlevel.h +CPYTHON_VERSION = (2, 7, 2, "final", 42) #XXX # sync patchlevel.h CPYTHON_API_VERSION = 1013 #XXX # sync with include/modsupport.h PYPY_VERSION = (1, 8, 0, "final", 0) #XXX # sync patchlevel.h From noreply at buildbot.pypy.org Thu Feb 9 17:31:20 2012 From: noreply at buildbot.pypy.org (fijal) Date: Thu, 9 Feb 2012 17:31:20 +0100 (CET) Subject: [pypy-commit] pypy release-1.8.x: Added tag release-1.8 for changeset d2363496b90e Message-ID: <20120209163120.5502582B1E@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: release-1.8.x Changeset: r52317:3d0ca347cc21 Date: 2012-02-09 18:29 +0200 http://bitbucket.org/pypy/pypy/changeset/3d0ca347cc21/ Log: Added tag release-1.8 for changeset d2363496b90e diff --git a/.hgtags b/.hgtags --- a/.hgtags +++ b/.hgtags @@ -3,3 +3,5 @@ d8ac7d23d3ec5f9a0fa1264972f74a010dbfd07f release-1.6 ff4af8f318821f7f5ca998613a60fca09aa137da release-1.7 48ebdce33e1b33710f7336a1dbd2a9b0d32b1f89 release-1.8 +48ebdce33e1b33710f7336a1dbd2a9b0d32b1f89 release-1.8 +d2363496b90e2cfd214a839f2030c2f12691a6bd release-1.8 From noreply at buildbot.pypy.org Thu Feb 9 17:31:21 2012 From: noreply at buildbot.pypy.org (fijal) Date: Thu, 9 Feb 2012 17:31:21 +0100 (CET) Subject: [pypy-commit] pypy release-1.8.x: Added tag release-1.8 for changeset 3d0ca347cc21 Message-ID: <20120209163121.85EDF82B1E@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: release-1.8.x Changeset: r52318:79a9b6c10bec Date: 2012-02-09 18:29 +0200 http://bitbucket.org/pypy/pypy/changeset/79a9b6c10bec/ Log: Added tag release-1.8 for changeset 3d0ca347cc21 diff --git a/.hgtags b/.hgtags --- a/.hgtags +++ b/.hgtags @@ -5,3 +5,5 @@ 48ebdce33e1b33710f7336a1dbd2a9b0d32b1f89 release-1.8 48ebdce33e1b33710f7336a1dbd2a9b0d32b1f89 release-1.8 d2363496b90e2cfd214a839f2030c2f12691a6bd release-1.8 +d2363496b90e2cfd214a839f2030c2f12691a6bd release-1.8 +3d0ca347cc217e4346ccdb82eb100f8e0b06c761 release-1.8 From noreply at buildbot.pypy.org Thu Feb 9 17:31:22 2012 From: noreply at buildbot.pypy.org (fijal) Date: Thu, 9 Feb 2012 17:31:22 +0100 (CET) Subject: [pypy-commit] pypy default: merge Message-ID: <20120209163122.C495782B1E@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: Changeset: r52319:0576ef4b9865 Date: 2012-02-09 18:30 +0200 http://bitbucket.org/pypy/pypy/changeset/0576ef4b9865/ Log: merge diff --git a/pypy/doc/release-1.8.0.rst b/pypy/doc/release-1.8.0.rst --- a/pypy/doc/release-1.8.0.rst +++ b/pypy/doc/release-1.8.0.rst @@ -2,16 +2,19 @@ PyPy 1.8 - business as usual ============================ -We're pleased to announce the 1.8 release of PyPy. As has become a habit, this -release brings a lot of bugfixes, and performance and memory improvements over +We're pleased to announce the 1.8 release of PyPy. As habitual this +release brings a lot of bugfixes, together with performance and memory improvements over the 1.7 release. The main highlight of the release is the introduction of -list strategies which makes homogenous lists more efficient both in terms +`list strategies`_ which makes homogenous lists more efficient both in terms of performance and memory. 
This release also upgrades us from Python 2.7.1 compatibility to 2.7.2. Otherwise it's "business as usual" in the sense that performance improved roughly 10% on average since the previous release. + You can download the PyPy 1.8 release here: http://pypy.org/download.html +.. _`list strategies`: http://morepypy.blogspot.com/2011/10/more-compact-lists-with-list-strategies.html + What is PyPy? ============= @@ -60,13 +63,6 @@ * New JIT hooks that allow you to hook into the JIT process from your python program. There is a `brief overview`_ of what they offer. -* Since the last release there was a significant breakthrough in PyPy's - fundraising. We now have enough funds to work on first stages of `numpypy`_ - and `py3k`_. We would like to thank again to everyone who donated. - - It's also probably worth noting, we're considering donations for the STM - project. - * Standard library upgrade from 2.7.1 to 2.7.2. Ongoing work @@ -82,7 +78,12 @@ * More numpy work -* Software Transactional Memory, you can read more about `our plans`_ +* Since the last release there was a significant breakthrough in PyPy's + fundraising. We now have enough funds to work on first stages of `numpypy`_ + and `py3k`_. We would like to thank again to everyone who donated. + +* It's also probably worth noting, we're considering donations for the + Software Transactional Memory project. You can read more about `our plans`_ .. _`brief overview`: http://doc.pypy.org/en/latest/jit-hooks.html .. _`numpy status page`: http://buildbot.pypy.org/numpy-status/latest.html diff --git a/pypy/module/micronumpy/interp_boxes.py b/pypy/module/micronumpy/interp_boxes.py --- a/pypy/module/micronumpy/interp_boxes.py +++ b/pypy/module/micronumpy/interp_boxes.py @@ -83,6 +83,8 @@ descr_truediv = _binop_impl("true_divide") descr_mod = _binop_impl("mod") descr_pow = _binop_impl("power") + descr_lshift = _binop_impl("left_shift") + descr_rshift = _binop_impl("right_shift") descr_and = _binop_impl("bitwise_and") descr_or = _binop_impl("bitwise_or") descr_xor = _binop_impl("bitwise_xor") @@ -97,13 +99,30 @@ descr_radd = _binop_right_impl("add") descr_rsub = _binop_right_impl("subtract") descr_rmul = _binop_right_impl("multiply") + descr_rdiv = _binop_right_impl("divide") + descr_rmod = _binop_right_impl("mod") descr_rpow = _binop_right_impl("power") + descr_rlshift = _binop_right_impl("left_shift") + descr_rrshift = _binop_right_impl("right_shift") + descr_rand = _binop_right_impl("bitwise_and") + descr_ror = _binop_right_impl("bitwise_or") + descr_rxor = _binop_right_impl("bitwise_xor") descr_pos = _unaryop_impl("positive") descr_neg = _unaryop_impl("negative") descr_abs = _unaryop_impl("absolute") descr_invert = _unaryop_impl("invert") + def descr_divmod(self, space, w_other): + w_quotient = self.descr_div(space, w_other) + w_remainder = self.descr_mod(space, w_other) + return space.newtuple([w_quotient, w_remainder]) + + def descr_rdivmod(self, space, w_other): + w_quotient = self.descr_rdiv(space, w_other) + w_remainder = self.descr_rmod(space, w_other) + return space.newtuple([w_quotient, w_remainder]) + def item(self, space): return self.get_dtype(space).itemtype.to_builtin_type(space, self) @@ -185,7 +204,10 @@ __div__ = interp2app(W_GenericBox.descr_div), __truediv__ = interp2app(W_GenericBox.descr_truediv), __mod__ = interp2app(W_GenericBox.descr_mod), + __divmod__ = interp2app(W_GenericBox.descr_divmod), __pow__ = interp2app(W_GenericBox.descr_pow), + __lshift__ = interp2app(W_GenericBox.descr_lshift), + __rshift__ = 
interp2app(W_GenericBox.descr_rshift), __and__ = interp2app(W_GenericBox.descr_and), __or__ = interp2app(W_GenericBox.descr_or), __xor__ = interp2app(W_GenericBox.descr_xor), @@ -193,7 +215,15 @@ __radd__ = interp2app(W_GenericBox.descr_radd), __rsub__ = interp2app(W_GenericBox.descr_rsub), __rmul__ = interp2app(W_GenericBox.descr_rmul), + __rdiv__ = interp2app(W_GenericBox.descr_rdiv), + __rmod__ = interp2app(W_GenericBox.descr_rmod), + __rdivmod__ = interp2app(W_GenericBox.descr_rdivmod), __rpow__ = interp2app(W_GenericBox.descr_rpow), + __rlshift__ = interp2app(W_GenericBox.descr_rlshift), + __rrshift__ = interp2app(W_GenericBox.descr_rrshift), + __rand__ = interp2app(W_GenericBox.descr_rand), + __ror__ = interp2app(W_GenericBox.descr_ror), + __rxor__ = interp2app(W_GenericBox.descr_rxor), __eq__ = interp2app(W_GenericBox.descr_eq), __ne__ = interp2app(W_GenericBox.descr_ne), diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -101,8 +101,13 @@ descr_sub = _binop_impl("subtract") descr_mul = _binop_impl("multiply") descr_div = _binop_impl("divide") + descr_mod = _binop_impl("mod") descr_pow = _binop_impl("power") - descr_mod = _binop_impl("mod") + descr_lshift = _binop_impl("left_shift") + descr_rshift = _binop_impl("right_shift") + descr_and = _binop_impl("bitwise_and") + descr_or = _binop_impl("bitwise_or") + descr_xor = _binop_impl("bitwise_xor") descr_eq = _binop_impl("equal") descr_ne = _binop_impl("not_equal") @@ -111,8 +116,10 @@ descr_gt = _binop_impl("greater") descr_ge = _binop_impl("greater_equal") - descr_and = _binop_impl("bitwise_and") - descr_or = _binop_impl("bitwise_or") + def descr_divmod(self, space, w_other): + w_quotient = self.descr_div(space, w_other) + w_remainder = self.descr_mod(space, w_other) + return space.newtuple([w_quotient, w_remainder]) def _binop_right_impl(ufunc_name): def impl(self, space, w_other): @@ -127,8 +134,18 @@ descr_rsub = _binop_right_impl("subtract") descr_rmul = _binop_right_impl("multiply") descr_rdiv = _binop_right_impl("divide") + descr_rmod = _binop_right_impl("mod") descr_rpow = _binop_right_impl("power") - descr_rmod = _binop_right_impl("mod") + descr_rlshift = _binop_right_impl("left_shift") + descr_rrshift = _binop_right_impl("right_shift") + descr_rand = _binop_right_impl("bitwise_and") + descr_ror = _binop_right_impl("bitwise_or") + descr_rxor = _binop_right_impl("bitwise_xor") + + def descr_rdivmod(self, space, w_other): + w_quotient = self.descr_rdiv(space, w_other) + w_remainder = self.descr_rmod(space, w_other) + return space.newtuple([w_quotient, w_remainder]) def _reduce_ufunc_impl(ufunc_name, promote_to_largest=False): def impl(self, space, w_axis=None): @@ -1227,21 +1244,34 @@ __pos__ = interp2app(BaseArray.descr_pos), __neg__ = interp2app(BaseArray.descr_neg), __abs__ = interp2app(BaseArray.descr_abs), + __invert__ = interp2app(BaseArray.descr_invert), __nonzero__ = interp2app(BaseArray.descr_nonzero), __add__ = interp2app(BaseArray.descr_add), __sub__ = interp2app(BaseArray.descr_sub), __mul__ = interp2app(BaseArray.descr_mul), __div__ = interp2app(BaseArray.descr_div), + __mod__ = interp2app(BaseArray.descr_mod), + __divmod__ = interp2app(BaseArray.descr_divmod), __pow__ = interp2app(BaseArray.descr_pow), - __mod__ = interp2app(BaseArray.descr_mod), + __lshift__ = interp2app(BaseArray.descr_lshift), + __rshift__ = interp2app(BaseArray.descr_rshift), + __and__ = 
interp2app(BaseArray.descr_and), + __or__ = interp2app(BaseArray.descr_or), + __xor__ = interp2app(BaseArray.descr_xor), __radd__ = interp2app(BaseArray.descr_radd), __rsub__ = interp2app(BaseArray.descr_rsub), __rmul__ = interp2app(BaseArray.descr_rmul), __rdiv__ = interp2app(BaseArray.descr_rdiv), + __rmod__ = interp2app(BaseArray.descr_rmod), + __rdivmod__ = interp2app(BaseArray.descr_rdivmod), __rpow__ = interp2app(BaseArray.descr_rpow), - __rmod__ = interp2app(BaseArray.descr_rmod), + __rlshift__ = interp2app(BaseArray.descr_rlshift), + __rrshift__ = interp2app(BaseArray.descr_rrshift), + __rand__ = interp2app(BaseArray.descr_rand), + __ror__ = interp2app(BaseArray.descr_ror), + __rxor__ = interp2app(BaseArray.descr_rxor), __eq__ = interp2app(BaseArray.descr_eq), __ne__ = interp2app(BaseArray.descr_ne), @@ -1250,10 +1280,6 @@ __gt__ = interp2app(BaseArray.descr_gt), __ge__ = interp2app(BaseArray.descr_ge), - __and__ = interp2app(BaseArray.descr_and), - __or__ = interp2app(BaseArray.descr_or), - __invert__ = interp2app(BaseArray.descr_invert), - __repr__ = interp2app(BaseArray.descr_repr), __str__ = interp2app(BaseArray.descr_str), __array_interface__ = GetSetProperty(BaseArray.descr_array_iface), diff --git a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py --- a/pypy/module/micronumpy/interp_ufuncs.py +++ b/pypy/module/micronumpy/interp_ufuncs.py @@ -392,6 +392,8 @@ ("true_divide", "div", 2, {"promote_to_float": True}), ("mod", "mod", 2, {"promote_bools": True}), ("power", "pow", 2, {"promote_bools": True}), + ("left_shift", "lshift", 2, {"int_only": True}), + ("right_shift", "rshift", 2, {"int_only": True}), ("equal", "eq", 2, {"comparison_func": True}), ("not_equal", "ne", 2, {"comparison_func": True}), diff --git a/pypy/module/micronumpy/test/test_dtypes.py b/pypy/module/micronumpy/test/test_dtypes.py --- a/pypy/module/micronumpy/test/test_dtypes.py +++ b/pypy/module/micronumpy/test/test_dtypes.py @@ -406,15 +406,27 @@ from operator import truediv from _numpypy import float64, int_, True_, False_ + assert 5 / int_(2) == int_(2) assert truediv(int_(3), int_(2)) == float64(1.5) + assert int_(8) % int_(3) == int_(2) + assert 8 % int_(3) == int_(2) + assert divmod(int_(8), int_(3)) == (int_(2), int_(2)) + assert divmod(8, int_(3)) == (int_(2), int_(2)) assert 2 ** int_(3) == int_(8) + assert int_(3) << int_(2) == int_(12) + assert 3 << int_(2) == int_(12) + assert int_(8) >> int_(2) == int_(2) + assert 8 >> int_(2) == int_(2) assert int_(3) & int_(1) == int_(1) - raises(TypeError, lambda: float64(3) & 1) - assert int_(8) % int_(3) == int_(2) + assert 2 & int_(3) == int_(2) assert int_(2) | int_(1) == int_(3) + assert 2 | int_(1) == int_(3) assert int_(3) ^ int_(5) == int_(6) assert True_ ^ False_ is True_ + assert 5 ^ int_(3) == int_(6) assert +int_(3) == int_(3) assert ~int_(3) == int_(-4) + raises(TypeError, lambda: float64(3) & 1) + diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -625,6 +625,52 @@ for i in range(5): assert b[i] == i / 5.0 + def test_divmod(self): + from _numpypy import arange + + a, b = divmod(arange(10), 3) + assert (a == [0, 0, 0, 1, 1, 1, 2, 2, 2, 3]).all() + assert (b == [0, 1, 2, 0, 1, 2, 0, 1, 2, 0]).all() + + def test_rdivmod(self): + from _numpypy import arange + + a, b = divmod(3, arange(1, 5)) + assert (a == [3, 1, 1, 0]).all() + assert (b == [0, 1, 0, 3]).all() + 
+ def test_lshift(self): + from _numpypy import array + + a = array([0, 1, 2, 3]) + assert (a << 2 == [0, 4, 8, 12]).all() + a = array([True, False]) + assert (a << 2 == [4, 0]).all() + a = array([1.0]) + raises(TypeError, lambda: a << 2) + + def test_rlshift(self): + from _numpypy import arange + + a = arange(3) + assert (2 << a == [2, 4, 8]).all() + + def test_rshift(self): + from _numpypy import arange, array + + a = arange(10) + assert (a >> 2 == [0, 0, 0, 0, 1, 1, 1, 1, 2, 2]).all() + a = array([True, False]) + assert (a >> 1 == [0, 0]).all() + a = arange(3, dtype=float) + raises(TypeError, lambda: a >> 1) + + def test_rrshift(self): + from _numpypy import arange + + a = arange(5) + assert (2 >> a == [2, 1, 0, 0, 0]).all() + def test_pow(self): from _numpypy import array a = array(range(5), float) @@ -678,6 +724,30 @@ for i in range(5): assert b[i] == i % 2 + def test_rand(self): + from _numpypy import arange + + a = arange(5) + assert (3 & a == [0, 1, 2, 3, 0]).all() + + def test_ror(self): + from _numpypy import arange + + a = arange(5) + assert (3 | a == [3, 3, 3, 3, 7]).all() + + def test_xor(self): + from _numpypy import arange + + a = arange(5) + assert (a ^ 3 == [3, 2, 1, 0, 7]).all() + + def test_rxor(self): + from _numpypy import arange + + a = arange(5) + assert (3 ^ a == [3, 2, 1, 0, 7]).all() + def test_pos(self): from _numpypy import array a = array([1., -2., 3., -4., -5.]) diff --git a/pypy/module/micronumpy/types.py b/pypy/module/micronumpy/types.py --- a/pypy/module/micronumpy/types.py +++ b/pypy/module/micronumpy/types.py @@ -295,6 +295,14 @@ v1 *= v1 return res + @simple_binary_op + def lshift(self, v1, v2): + return v1 << v2 + + @simple_binary_op + def rshift(self, v1, v2): + return v1 >> v2 + @simple_unary_op def sign(self, v): if v > 0: From noreply at buildbot.pypy.org Thu Feb 9 17:34:36 2012 From: noreply at buildbot.pypy.org (fijal) Date: Thu, 9 Feb 2012 17:34:36 +0100 (CET) Subject: [pypy-commit] pypy default: I think those are libraries we need on windows for libssl Message-ID: <20120209163436.7A16C82B1E@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: Changeset: r52320:503715708fe9 Date: 2012-02-09 18:33 +0200 http://bitbucket.org/pypy/pypy/changeset/503715708fe9/ Log: I think those are libraries we need on windows for libssl diff --git a/pypy/tool/release/package.py b/pypy/tool/release/package.py --- a/pypy/tool/release/package.py +++ b/pypy/tool/release/package.py @@ -60,7 +60,8 @@ if sys.platform == 'win32': # Can't rename a DLL: it is always called 'libpypy-c.dll' for extra in ['libpypy-c.dll', - 'libexpat.dll', 'sqlite3.dll', 'msvcr90.dll']: + 'libexpat.dll', 'sqlite3.dll', 'msvcr90.dll', + 'libeay32.dll', 'ssleay32.dll']: p = pypy_c.dirpath().join(extra) if not p.check(): p = py.path.local.sysfind(extra) From noreply at buildbot.pypy.org Thu Feb 9 17:34:37 2012 From: noreply at buildbot.pypy.org (fijal) Date: Thu, 9 Feb 2012 17:34:37 +0100 (CET) Subject: [pypy-commit] pypy release-1.8.x: I think those are libraries we need on windows for libssl Message-ID: <20120209163437.AD6EC82B1E@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: release-1.8.x Changeset: r52321:ef564984db64 Date: 2012-02-09 18:33 +0200 http://bitbucket.org/pypy/pypy/changeset/ef564984db64/ Log: I think those are libraries we need on windows for libssl diff --git a/pypy/tool/release/package.py b/pypy/tool/release/package.py --- a/pypy/tool/release/package.py +++ b/pypy/tool/release/package.py @@ -60,7 +60,8 @@ if sys.platform == 'win32': # Can't rename a 
DLL: it is always called 'libpypy-c.dll' for extra in ['libpypy-c.dll', - 'libexpat.dll', 'sqlite3.dll', 'msvcr90.dll']: + 'libexpat.dll', 'sqlite3.dll', 'msvcr90.dll', + 'libeay32.dll', 'ssleay32.dll']: p = pypy_c.dirpath().join(extra) if not p.check(): p = py.path.local.sysfind(extra) From noreply at buildbot.pypy.org Thu Feb 9 17:36:04 2012 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 9 Feb 2012 17:36:04 +0100 (CET) Subject: [pypy-commit] pypy stm-gc: Ah bah Message-ID: <20120209163604.7525982B1E@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-gc Changeset: r52322:d5104b01f3ed Date: 2012-02-09 17:26 +0100 http://bitbucket.org/pypy/pypy/changeset/d5104b01f3ed/ Log: Ah bah diff --git a/pypy/translator/stm/src_stm/et.h b/pypy/translator/stm/src_stm/et.h --- a/pypy/translator/stm/src_stm/et.h +++ b/pypy/translator/stm/src_stm/et.h @@ -65,8 +65,8 @@ #define RPY_STM_FIELD(T, size, STRUCT, ptr, field) \ _RPY_STM(T, size, ptr, offsetof(STRUCT, field), ptr->field) -#define _RPY_STM(T, size, ptr, offset, field) \ - (*(long*)ptr & GCFLAG_GLOBAL ? field : \ +#define _RPY_STM(T, size, ptr, offset, field) \ + (((*(long*)ptr) & GCFLAG_GLOBAL) == 0 ? field : \ (T)stm_read_int##size(ptr, offset)) From noreply at buildbot.pypy.org Thu Feb 9 17:43:27 2012 From: noreply at buildbot.pypy.org (fijal) Date: Thu, 9 Feb 2012 17:43:27 +0100 (CET) Subject: [pypy-commit] pypy release-1.8.x: Added tag release-1.8 for changeset ef564984db64 Message-ID: <20120209164327.EAB9382B1E@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: release-1.8.x Changeset: r52323:0d846fa3ebba Date: 2012-02-09 18:42 +0200 http://bitbucket.org/pypy/pypy/changeset/0d846fa3ebba/ Log: Added tag release-1.8 for changeset ef564984db64 diff --git a/.hgtags b/.hgtags --- a/.hgtags +++ b/.hgtags @@ -7,3 +7,5 @@ d2363496b90e2cfd214a839f2030c2f12691a6bd release-1.8 d2363496b90e2cfd214a839f2030c2f12691a6bd release-1.8 3d0ca347cc217e4346ccdb82eb100f8e0b06c761 release-1.8 +3d0ca347cc217e4346ccdb82eb100f8e0b06c761 release-1.8 +ef564984db64d738aabf315c841e7697748c3c3d release-1.8 From noreply at buildbot.pypy.org Thu Feb 9 18:16:31 2012 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 9 Feb 2012 18:16:31 +0100 (CET) Subject: [pypy-commit] pypy stm-gc: Fix fix fix. Message-ID: <20120209171631.9DFD682B1E@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-gc Changeset: r52324:1509515879ae Date: 2012-02-09 18:16 +0100 http://bitbucket.org/pypy/pypy/changeset/1509515879ae/ Log: Fix fix fix. 
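
The et.h hunk in the "Ah bah" changeset above inverts the test in the _RPY_STM
read macro: an object whose header does not carry GCFLAG_GLOBAL is local to the
current transaction and can be read directly, and only global objects go through
stm_read_intN.  The "Fix fix fix." diff that follows builds on the same model,
wrapping each transactional callback between stm_start_transaction and
stm_commit_transaction and handing the thread's tls to the tldict enumeration
callback.  A rough Python sketch of the read-barrier dispatch only, where the
flag value, header layout and helper names are made up for illustration and are
not the real stm-gc definitions::

    GCFLAG_GLOBAL = 1 << 0        # assumed bit position, illustration only

    class FakeHeader(object):
        def __init__(self, tid):
            self.tid = tid

    def stm_read_field(hdr, plain_read, stm_read):
        # Local objects (GCFLAG_GLOBAL not set) are private to the running
        # transaction, so the plain read is safe; global objects must be
        # read through the STM read barrier, as the corrected macro does.
        if hdr.tid & GCFLAG_GLOBAL == 0:
            return plain_read()
        return stm_read()

    # stm_read_field(FakeHeader(tid=0), ...)              -> direct read
    # stm_read_field(FakeHeader(tid=GCFLAG_GLOBAL), ...)  -> read barrier
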
diff --git a/pypy/rlib/rstm.py b/pypy/rlib/rstm.py --- a/pypy/rlib/rstm.py +++ b/pypy/rlib/rstm.py @@ -12,6 +12,7 @@ @specialize.memo() def _get_stm_callback(func, argcls): def _stm_callback(llarg, retry_counter): + llop.stm_start_transaction(lltype.Void) if we_are_translated(): llarg = rffi.cast(rclass.OBJECTPTR, llarg) arg = cast_base_ptr_to_instance(argcls, llarg) @@ -19,6 +20,7 @@ arg = lltype.TLS.stm_callback_arg res = func(arg, retry_counter) assert res is None + llop.stm_commit_transaction(lltype.Void) return lltype.nullptr(rffi.VOIDP.TO) return _stm_callback diff --git a/pypy/rpython/lltypesystem/lloperation.py b/pypy/rpython/lltypesystem/lloperation.py --- a/pypy/rpython/lltypesystem/lloperation.py +++ b/pypy/rpython/lltypesystem/lloperation.py @@ -395,13 +395,15 @@ # direct_calls and maybe several casts, but it looks less heavy-weight # to keep them as operations until the genc stage) - 'stm_getfield': LLOp(sideeffects=False, canrun=True), - 'stm_getarrayitem': LLOp(sideeffects=False, canrun=True), - 'stm_getinteriorfield': LLOp(sideeffects=False, canrun=True), - 'stm_become_inevitable':LLOp(), - 'stm_descriptor_init': LLOp(), - 'stm_descriptor_done': LLOp(), - 'stm_writebarrier': LLOp(sideeffects=False), + 'stm_getfield': LLOp(sideeffects=False, canrun=True), + 'stm_getarrayitem': LLOp(sideeffects=False, canrun=True), + 'stm_getinteriorfield': LLOp(sideeffects=False, canrun=True), + 'stm_become_inevitable': LLOp(), + 'stm_descriptor_init': LLOp(), + 'stm_descriptor_done': LLOp(), + 'stm_writebarrier': LLOp(sideeffects=False), + 'stm_start_transaction': LLOp(), + 'stm_commit_transaction': LLOp(), # __________ address operations __________ diff --git a/pypy/rpython/memory/gc/stmgc.py b/pypy/rpython/memory/gc/stmgc.py --- a/pypy/rpython/memory/gc/stmgc.py +++ b/pypy/rpython/memory/gc/stmgc.py @@ -22,6 +22,9 @@ 4: rffi.INT, 8: lltype.SignedLongLong} +CALLBACK = lltype.Ptr(lltype.FuncType([llmemory.Address] * 3, lltype.Void)) +GETSIZE = lltype.Ptr(lltype.FuncType([llmemory.Address], lltype.Signed)) + def always_inline(fn): fn._always_inline_ = True @@ -81,13 +84,11 @@ ## self.declare_reader(size, TYPE) self.declare_write_barrier() - GETSIZE = lltype.Ptr(lltype.FuncType([llmemory.Address], lltype.Signed)) - def setup(self): """Called at run-time to initialize the GC.""" GCBase.setup(self) self.stm_operations.setup_size_getter( - llhelper(self.GETSIZE, self._getsize_fn)) + llhelper(GETSIZE, self._getsize_fn)) self.main_thread_tls = self.setup_thread(True) self.mutex_lock = ll_thread.allocate_ll_lock() @@ -217,6 +218,12 @@ def collect(self, gen=0): raise NotImplementedError + def start_transaction(self): + self.collector.start_transaction() + + def commit_transaction(self): + self.collector.commit_transaction() + @always_inline def get_type_id(self, obj): @@ -428,20 +435,27 @@ tls.pending_list = NULL # Enumerate the roots, which are the local copies of global objects. # For each root, trace it. 
- self.stm_operations.enum_tldict_start() - while self.stm_operations.enum_tldict_find_next(): - globalobj = self.stm_operations.enum_tldict_globalobj() - localobj = self.stm_operations.enum_tldict_localobj() - # - localhdr = self.header(localobj) - ll_assert(localhdr.version == globalobj, - "in a root: localobj.version != globalobj") - ll_assert(localhdr.tid & GCFLAG_GLOBAL == 0, - "in a root: unexpected GCFLAG_GLOBAL") - ll_assert(localhdr.tid & GCFLAG_WAS_COPIED != 0, - "in a root: missing GCFLAG_WAS_COPIED") - # - self.trace_and_drag_out_of_nursery(tls, localobj) + callback = llhelper(CALLBACK, self._enum_entries) + # xxx hack hack hack! Stores 'self' in a global place... but it's + # pointless after translation because 'self' is a Void. + _global_collector.collector = self + self.stm_operations.tldict_enum(callback) + + + @staticmethod + def _enum_entries(tls_addr, globalobj, localobj): + self = _global_collector.collector + tls = llmemory.cast_adr_to_ptr(tls_addr, lltype.Ptr(StmGC.GCTLS)) + # + localhdr = self.header(localobj) + ll_assert(localhdr.version == globalobj, + "in a root: localobj.version != globalobj") + ll_assert(localhdr.tid & GCFLAG_GLOBAL == 0, + "in a root: unexpected GCFLAG_GLOBAL") + ll_assert(localhdr.tid & GCFLAG_WAS_COPIED != 0, + "in a root: missing GCFLAG_WAS_COPIED") + # + self.trace_and_drag_out_of_nursery(tls, localobj) def collect_from_pending_list(self, tls): @@ -519,3 +533,8 @@ # # Fix the original root.address[0] to point to the globalobj root.address[0] = globalobj + + +class _GlobalCollector(object): + pass +_global_collector = _GlobalCollector() diff --git a/pypy/rpython/memory/gc/test/test_stmgc.py b/pypy/rpython/memory/gc/test/test_stmgc.py --- a/pypy/rpython/memory/gc/test/test_stmgc.py +++ b/pypy/rpython/memory/gc/test/test_stmgc.py @@ -1,6 +1,6 @@ import py from pypy.rpython.lltypesystem import lltype, llmemory, llarena, rffi -from pypy.rpython.memory.gc.stmgc import StmGC, PRIMITIVE_SIZES, WORD +from pypy.rpython.memory.gc.stmgc import StmGC, PRIMITIVE_SIZES, WORD, CALLBACK from pypy.rpython.memory.gc.stmgc import GCFLAG_GLOBAL, GCFLAG_WAS_COPIED @@ -37,7 +37,6 @@ assert not hasattr(self, '_tls_dict') self._tls_dict = {0: tls} self._tldicts = {0: {}} - self._tldicts_iterators = {} self._transactional_copies = [] else: assert in_main_thread == 0 @@ -64,32 +63,11 @@ assert obj not in tldict tldict[obj] = localobj - def enum_tldict_start(self): - it = self._tldicts[self.threadnum].iteritems() - self._tldicts_iterators[self.threadnum] = [it, None, None] - - def enum_tldict_find_next(self): - state = self._tldicts_iterators[self.threadnum] - try: - next_key, next_value = state[0].next() - except StopIteration: - state[1] = None - state[2] = None - del self._tldicts_iterators[self.threadnum] - return False - state[1] = next_key - state[2] = next_value - return True - - def enum_tldict_globalobj(self): - state = self._tldicts_iterators[self.threadnum] - assert state[1] is not None - return state[1] - - def enum_tldict_localobj(self): - state = self._tldicts_iterators[self.threadnum] - assert state[2] is not None - return state[2] + def tldict_enum(self, callback): + assert lltype.typeOf(callback) == CALLBACK + tls = self.get_tls() + for key, value in self._tldicts[self.threadnum].iteritems(): + callback(tls, key, value) def _get_stm_reader(size, TYPE): assert rffi.sizeof(TYPE) == size diff --git a/pypy/rpython/memory/gctransform/stmframework.py b/pypy/rpython/memory/gctransform/stmframework.py --- a/pypy/rpython/memory/gctransform/stmframework.py +++ 
b/pypy/rpython/memory/gctransform/stmframework.py @@ -18,6 +18,12 @@ self.stm_writebarrier_ptr = getfn( self.gcdata.gc.stm_writebarrier, [annmodel.SomeAddress()], annmodel.SomeAddress()) + self.stm_start_ptr = getfn( + self.gcdata.gc.start_transaction.im_func, + [s_gc], annmodel.s_None) + self.stm_commit_ptr = getfn( + self.gcdata.gc.commit_transaction.im_func, + [s_gc], annmodel.s_None) def push_roots(self, hop, keep_current_args=False): pass @@ -44,6 +50,12 @@ resulttype=llmemory.Address) hop.genop('cast_adr_to_ptr', [v_localadr], resultvar=op.result) + def gct_stm_start_transaction(self, hop): + hop.genop("direct_call", [self.stm_start_ptr, self.c_const_gc]) + + def gct_stm_commit_transaction(self, hop): + hop.genop("direct_call", [self.stm_commit_ptr, self.c_const_gc]) + class StmStackRootWalker(BaseRootWalker): diff --git a/pypy/translator/stm/src_stm/et.c b/pypy/translator/stm/src_stm/et.c --- a/pypy/translator/stm/src_stm/et.c +++ b/pypy/translator/stm/src_stm/et.c @@ -789,14 +789,15 @@ redolog_insert(&d->redolog, key, value); } -void stm_tldict_enum(void(*callback)(void*, void*)) +void stm_tldict_enum(void(*callback)(void*, void*, void*)) { struct tx_descriptor *d = thread_descriptor; wlog_t *item; + void *tls = stm_get_tls(); REDOLOG_LOOP_FORWARD(d->redolog, item) { - callback(item->addr, item->val); + callback(tls, item->addr, item->val); } REDOLOG_LOOP_END; } diff --git a/pypy/translator/stm/src_stm/et.h b/pypy/translator/stm/src_stm/et.h --- a/pypy/translator/stm/src_stm/et.h +++ b/pypy/translator/stm/src_stm/et.h @@ -20,7 +20,7 @@ void *stm_tldict_lookup(void *); void stm_tldict_add(void *, void *); -void stm_tlidct_enum(void(*)(void*, void*)); +void stm_tldict_enum(void(*)(void*, void*, void*)); char stm_read_int1(void *, long); short stm_read_int2(void *, long); diff --git a/pypy/translator/stm/stmgcintf.py b/pypy/translator/stm/stmgcintf.py --- a/pypy/translator/stm/stmgcintf.py +++ b/pypy/translator/stm/stmgcintf.py @@ -1,14 +1,11 @@ from pypy.rpython.lltypesystem import lltype, llmemory -from pypy.rpython.memory.gc.stmgc import PRIMITIVE_SIZES +from pypy.rpython.memory.gc.stmgc import PRIMITIVE_SIZES, GETSIZE, CALLBACK from pypy.translator.stm import _rffi_stm def smexternal(name, args, result): return staticmethod(_rffi_stm.llexternal(name, args, result)) -CALLBACK = lltype.Ptr(lltype.FuncType([llmemory.Address] * 2, lltype.Void)) -GETSIZE = lltype.Ptr(lltype.FuncType([llmemory.Address], lltype.Signed)) - class StmOperations(object): diff --git a/pypy/translator/stm/test/test_stmgcintf.py b/pypy/translator/stm/test/test_stmgcintf.py --- a/pypy/translator/stm/test/test_stmgcintf.py +++ b/pypy/translator/stm/test/test_stmgcintf.py @@ -77,7 +77,8 @@ return content def get_callback(self): - def callback(key, value): + def callback(tls, key, value): + assert tls == llmemory.cast_ptr_to_adr(self.tls) seen.append((key, value)) seen = [] p_callback = llhelper(CALLBACK, callback) @@ -88,6 +89,19 @@ stm_operations.tldict_enum(p_callback) assert seen == [] + def test_enum_tldict_nonempty(self): + a1 = rffi.cast(llmemory.Address, 0x4020) + a2 = rffi.cast(llmemory.Address, 10002) + a3 = rffi.cast(llmemory.Address, 0x4028) + a4 = rffi.cast(llmemory.Address, 10004) + # + stm_operations.tldict_add(a1, a2) + stm_operations.tldict_add(a3, a4) + p_callback, seen = self.get_callback() + stm_operations.tldict_enum(p_callback) + assert (seen == [(a1, a2), (a3, a4)] or + seen == [(a3, a4), (a1, a2)]) + def stm_read_case(self, flags, copied=False): # doesn't test STM behavior, but just that it 
appears to work s1 = lltype.malloc(S1, flavor='raw') From noreply at buildbot.pypy.org Thu Feb 9 19:28:24 2012 From: noreply at buildbot.pypy.org (fijal) Date: Thu, 9 Feb 2012 19:28:24 +0100 (CET) Subject: [pypy-commit] pypy release-1.8.x: another update of versions Message-ID: <20120209182824.B6A3682B69@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: release-1.8.x Changeset: r52326:2346207d9946 Date: 2012-02-09 20:27 +0200 http://bitbucket.org/pypy/pypy/changeset/2346207d9946/ Log: another update of versions diff --git a/pypy/module/cpyext/include/patchlevel.h b/pypy/module/cpyext/include/patchlevel.h --- a/pypy/module/cpyext/include/patchlevel.h +++ b/pypy/module/cpyext/include/patchlevel.h @@ -21,12 +21,12 @@ /* Version parsed out into numeric values */ #define PY_MAJOR_VERSION 2 #define PY_MINOR_VERSION 7 -#define PY_MICRO_VERSION 1 +#define PY_MICRO_VERSION 2 #define PY_RELEASE_LEVEL PY_RELEASE_LEVEL_FINAL #define PY_RELEASE_SERIAL 0 /* Version as a string */ -#define PY_VERSION "2.7.1" +#define PY_VERSION "2.7.2" /* PyPy version as a string */ #define PYPY_VERSION "1.8.0" From noreply at buildbot.pypy.org Thu Feb 9 19:28:23 2012 From: noreply at buildbot.pypy.org (fijal) Date: Thu, 9 Feb 2012 19:28:23 +0100 (CET) Subject: [pypy-commit] pypy default: another update of versions Message-ID: <20120209182823.82EF082B1E@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: Changeset: r52325:84919a1a15d6 Date: 2012-02-09 20:27 +0200 http://bitbucket.org/pypy/pypy/changeset/84919a1a15d6/ Log: another update of versions diff --git a/pypy/module/cpyext/include/patchlevel.h b/pypy/module/cpyext/include/patchlevel.h --- a/pypy/module/cpyext/include/patchlevel.h +++ b/pypy/module/cpyext/include/patchlevel.h @@ -21,12 +21,12 @@ /* Version parsed out into numeric values */ #define PY_MAJOR_VERSION 2 #define PY_MINOR_VERSION 7 -#define PY_MICRO_VERSION 1 +#define PY_MICRO_VERSION 2 #define PY_RELEASE_LEVEL PY_RELEASE_LEVEL_FINAL #define PY_RELEASE_SERIAL 0 /* Version as a string */ -#define PY_VERSION "2.7.1" +#define PY_VERSION "2.7.2" /* PyPy version as a string */ #define PYPY_VERSION "1.8.1" From noreply at buildbot.pypy.org Thu Feb 9 19:29:25 2012 From: noreply at buildbot.pypy.org (fijal) Date: Thu, 9 Feb 2012 19:29:25 +0100 (CET) Subject: [pypy-commit] pypy release-1.8.x: Added tag release-1.8 for changeset 2346207d9946 Message-ID: <20120209182925.DC45F82B1E@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: release-1.8.x Changeset: r52327:0e28b379d8b3 Date: 2012-02-09 20:28 +0200 http://bitbucket.org/pypy/pypy/changeset/0e28b379d8b3/ Log: Added tag release-1.8 for changeset 2346207d9946 diff --git a/.hgtags b/.hgtags --- a/.hgtags +++ b/.hgtags @@ -9,3 +9,5 @@ 3d0ca347cc217e4346ccdb82eb100f8e0b06c761 release-1.8 3d0ca347cc217e4346ccdb82eb100f8e0b06c761 release-1.8 ef564984db64d738aabf315c841e7697748c3c3d release-1.8 +ef564984db64d738aabf315c841e7697748c3c3d release-1.8 +2346207d99463f299f09f3e151c9d5fa9158f71b release-1.8 From noreply at buildbot.pypy.org Thu Feb 9 21:14:31 2012 From: noreply at buildbot.pypy.org (wlav) Date: Thu, 9 Feb 2012 21:14:31 +0100 (CET) Subject: [pypy-commit] pypy reflex-support: minor cleanup Message-ID: <20120209201431.0914F82B1E@wyvern.cs.uni-duesseldorf.de> Author: Wim Lavrijsen Branch: reflex-support Changeset: r52328:546264233337 Date: 2012-02-08 15:12 -0800 http://bitbucket.org/pypy/pypy/changeset/546264233337/ Log: minor cleanup diff --git a/pypy/module/cppyy/converter.py b/pypy/module/cppyy/converter.py --- 
a/pypy/module/cppyy/converter.py +++ b/pypy/module/cppyy/converter.py @@ -34,12 +34,10 @@ def __init__(self, space, extra): pass - @jit.dont_look_inside def _get_raw_address(self, space, w_obj, offset): rawobject = get_rawobject(space, w_obj) assert lltype.typeOf(rawobject) == capi.C_OBJECT if rawobject: - fieldptr = _direct_ptradd(rawobject, offset) else: fieldptr = rffi.cast(capi.C_OBJECT, offset) diff --git a/pypy/module/cppyy/interp_cppyy.py b/pypy/module/cppyy/interp_cppyy.py --- a/pypy/module/cppyy/interp_cppyy.py +++ b/pypy/module/cppyy/interp_cppyy.py @@ -112,10 +112,9 @@ @jit.unroll_safe def call(self, cppthis, w_type, args_w): assert lltype.typeOf(cppthis) == capi.C_OBJECT - if self.executor is None: - raise OperationError(self.space.w_TypeError, - self.space.wrap("return type not handled")) - if len(self.arg_defs) < len(args_w) or len(args_w) < self.args_required: + args_expected = len(self.arg_defs) + args_given = len(args_w) + if args_expected < args_given or args_given < self.args_required: raise OperationError(self.space.w_TypeError, self.space.wrap("wrong number of arguments")) if self.methgetter and cppthis: # only for methods From noreply at buildbot.pypy.org Thu Feb 9 21:14:39 2012 From: noreply at buildbot.pypy.org (wlav) Date: Thu, 9 Feb 2012 21:14:39 +0100 (CET) Subject: [pypy-commit] pypy reflex-support: o) factored out direct_ptradd Message-ID: <20120209201439.9E5CC82B69@wyvern.cs.uni-duesseldorf.de> Author: Wim Lavrijsen Branch: reflex-support Changeset: r52330:985018b0e02c Date: 2012-02-09 12:14 -0800 http://bitbucket.org/pypy/pypy/changeset/985018b0e02c/ Log: o) factored out direct_ptradd o) optimization to only calculate offsets if necessary (w/o guards) diff --git a/pypy/module/cppyy/capi/__init__.py b/pypy/module/cppyy/capi/__init__.py --- a/pypy/module/cppyy/capi/__init__.py +++ b/pypy/module/cppyy/capi/__init__.py @@ -15,6 +15,13 @@ C_METHPTRGETTER = lltype.FuncType([C_OBJECT], rffi.VOIDP) C_METHPTRGETTER_PTR = lltype.Ptr(C_METHPTRGETTER) +def direct_ptradd(ptr, offset): + offset = rffi.cast(rffi.SIZE_T, offset) + jit.promote(offset) + assert lltype.typeOf(ptr) == C_OBJECT + address = rffi.cast(rffi.CCHARP, ptr) + return rffi.cast(C_OBJECT, lltype.direct_ptradd(address, offset)) + c_load_dictionary = backend.c_load_dictionary c_get_typehandle = rffi.llexternal( @@ -48,6 +55,10 @@ [C_TYPEHANDLE], rffi.CCHARP, compilation_info=backend.eci) +c_has_complex_hierarchy = rffi.llexternal( + "cppyy_has_complex_hierarchy", + [C_TYPEHANDLE], rffi.INT, + compilation_info=backend.eci) c_num_bases = rffi.llexternal( "cppyy_num_bases", [C_TYPEHANDLE], rffi.INT, diff --git a/pypy/module/cppyy/converter.py b/pypy/module/cppyy/converter.py --- a/pypy/module/cppyy/converter.py +++ b/pypy/module/cppyy/converter.py @@ -19,11 +19,6 @@ return cppinstance.rawobject return capi.C_NULL_OBJECT -def _direct_ptradd(ptr, offset): # TODO: factor out with interp_cppyy.py - assert lltype.typeOf(ptr) == capi.C_OBJECT - address = rffi.cast(rffi.CCHARP, ptr) - return rffi.cast(capi.C_OBJECT, lltype.direct_ptradd(address, offset)) - class TypeConverter(object): _immutable_ = True @@ -38,7 +33,7 @@ rawobject = get_rawobject(space, w_obj) assert lltype.typeOf(rawobject) == capi.C_OBJECT if rawobject: - fieldptr = _direct_ptradd(rawobject, offset) + fieldptr = capi.direct_ptradd(rawobject, offset) else: fieldptr = rffi.cast(capi.C_OBJECT, offset) return fieldptr @@ -129,7 +124,7 @@ def to_memory(self, space, w_obj, w_value, offset): # copy only the pointer value rawobject = get_rawobject(space, 
w_obj) - byteptr = rffi.cast(rffi.CCHARPP, _direct_ptradd(rawobject, offset)) + byteptr = rffi.cast(rffi.CCHARPP, capi.direct_ptradd(rawobject, offset)) buf = space.buffer_w(w_value) try: byteptr[0] = buf.get_raw_address() @@ -174,7 +169,7 @@ x = rffi.cast(self.rffiptype, address) x[0] = self._unwrap_object(space, w_obj) typecode = rffi.cast(rffi.CCHARP, - _direct_ptradd(address, capi.c_function_arg_typeoffset())) + capi.direct_ptradd(address, capi.c_function_arg_typeoffset())) typecode[0] = self.typecode @@ -371,7 +366,7 @@ arg = space.str_w(w_obj) x[0] = rffi.cast(rffi.LONG, rffi.str2charp(arg)) typecode = rffi.cast(rffi.CCHARP, - _direct_ptradd(address, capi.c_function_arg_typeoffset())) + capi.direct_ptradd(address, capi.c_function_arg_typeoffset())) typecode[0] = 'a' def from_memory(self, space, w_obj, w_type, offset): @@ -390,7 +385,7 @@ x = rffi.cast(rffi.VOIDPP, address) x[0] = rffi.cast(rffi.VOIDP, get_rawobject(space, w_obj)) typecode = rffi.cast(rffi.CCHARP, - _direct_ptradd(address, capi.c_function_arg_typeoffset())) + capi.direct_ptradd(address, capi.c_function_arg_typeoffset())) typecode[0] = 'a' def convert_argument_libffi(self, space, w_obj, argchain): @@ -404,7 +399,7 @@ x = rffi.cast(rffi.VOIDPP, address) x[0] = rffi.cast(rffi.VOIDP, get_rawobject(space, w_obj)) typecode = rffi.cast(rffi.CCHARP, - _direct_ptradd(address, capi.c_function_arg_typeoffset())) + capi.direct_ptradd(address, capi.c_function_arg_typeoffset())) typecode[0] = 'p' @@ -415,7 +410,7 @@ x = rffi.cast(rffi.VOIDPP, address) x[0] = rffi.cast(rffi.VOIDP, get_rawobject(space, w_obj)) typecode = rffi.cast(rffi.CCHARP, - _direct_ptradd(address, capi.c_function_arg_typeoffset())) + capi.direct_ptradd(address, capi.c_function_arg_typeoffset())) typecode[0] = 'r' @@ -518,7 +513,7 @@ if capi.c_is_subtype(obj.cppclass.handle, self.cpptype.handle): offset = capi.c_base_offset( obj.cppclass.handle, self.cpptype.handle, obj.rawobject) - obj_address = _direct_ptradd(obj.rawobject, offset) + obj_address = capi.direct_ptradd(obj.rawobject, offset) return rffi.cast(capi.C_OBJECT, obj_address) raise OperationError(space.w_TypeError, space.wrap("cannot pass %s as %s" % ( @@ -530,7 +525,7 @@ x[0] = rffi.cast(rffi.VOIDP, self._unwrap_object(space, w_obj)) address = rffi.cast(capi.C_OBJECT, address) typecode = rffi.cast(rffi.CCHARP, - _direct_ptradd(address, capi.c_function_arg_typeoffset())) + capi.direct_ptradd(address, capi.c_function_arg_typeoffset())) typecode[0] = 'o' def convert_argument_libffi(self, space, w_obj, argchain): diff --git a/pypy/module/cppyy/include/capi.h b/pypy/module/cppyy/include/capi.h --- a/pypy/module/cppyy/include/capi.h +++ b/pypy/module/cppyy/include/capi.h @@ -43,6 +43,7 @@ /* type/class reflection information -------------------------------------- */ char* cppyy_final_name(cppyy_typehandle_t handle); + int cppyy_has_complex_hierarchy(cppyy_typehandle_t handle); int cppyy_num_bases(cppyy_typehandle_t handle); char* cppyy_base_name(cppyy_typehandle_t handle, int base_index); int cppyy_is_subtype(cppyy_typehandle_t dh, cppyy_typehandle_t bh); diff --git a/pypy/module/cppyy/interp_cppyy.py b/pypy/module/cppyy/interp_cppyy.py --- a/pypy/module/cppyy/interp_cppyy.py +++ b/pypy/module/cppyy/interp_cppyy.py @@ -16,10 +16,6 @@ class FastCallNotPossible(Exception): pass -def _direct_ptradd(ptr, offset): # TODO: factor out with convert.py - assert lltype.typeOf(ptr) == capi.C_OBJECT - address = rffi.cast(rffi.CCHARP, ptr) - return rffi.cast(capi.C_OBJECT, lltype.direct_ptradd(address, offset)) 
@unwrap_spec(name=str) def load_dictionary(space, name): @@ -48,8 +44,10 @@ final_name = capi.charp2str_free(capi.c_final_name(handle)) if capi.c_is_namespace(handle): cpptype = W_CPPNamespace(space, final_name, handle) - else: - cpptype = W_CPPType(space, final_name, handle) + elif capi.c_has_complex_hierarchy(handle): + cpptype = W_ComplexCPPType(space, final_name, handle) + else: + cpptype = W_CPPType(space, final_name, handle) state.cpptype_cache[name] = cpptype cpptype._find_methods() cpptype._find_data_members() @@ -249,22 +247,14 @@ def get_returntype(self): return self.space.wrap(self.functions[0].executor.name) - @jit.elidable_promote() - def _get_cppthis(self, cppinstance): - if cppinstance is not None: - cppinstance._nullcheck() - offset = capi.c_base_offset( - cppinstance.cppclass.handle, self.scope_handle, cppinstance.rawobject) - cppthis = _direct_ptradd(cppinstance.rawobject, offset) - assert lltype.typeOf(cppthis) == capi.C_OBJECT - else: - cppthis = capi.C_NULL_OBJECT - return cppthis - @jit.unroll_safe def call(self, w_cppinstance, w_type, args_w): cppinstance = self.space.interp_w(W_CPPInstance, w_cppinstance, can_be_None=True) - cppthis = self._get_cppthis(cppinstance) + if cppinstance is not None: + cppinstance._nullcheck() + cppthis = cppinstance.cppclass.get_cppthis(cppinstance, self.scope_handle) + else: + cppthis = capi.C_NULL_OBJECT assert lltype.typeOf(cppthis) == capi.C_OBJECT space = self.space @@ -491,6 +481,10 @@ data_member = W_CPPDataMember(self.space, self.handle, type_name, offset, is_static) self.data_members[data_member_name] = data_member + @jit.elidable_promote() + def get_cppthis(self, cppinstance, scope_handle): + return cppinstance.rawobject + def is_namespace(self): return self.space.w_False @@ -515,6 +509,26 @@ W_CPPType.typedef.acceptable_as_base_class = False +class W_ComplexCPPType(W_CPPType): + @jit.elidable_promote() + def get_cppthis(self, cppinstance, scope_handle): + offset = capi.c_base_offset( + cppinstance.cppclass.handle, scope_handle, cppinstance.rawobject) + return capi.direct_ptradd(cppinstance.rawobject, offset) + +W_ComplexCPPType.typedef = TypeDef( + 'ComplexCPPType', + type_name = interp_attrproperty('name', W_CPPType), + get_base_names = interp2app(W_ComplexCPPType.get_base_names, unwrap_spec=['self']), + get_method_names = interp2app(W_ComplexCPPType.get_method_names, unwrap_spec=['self']), + get_overload = interp2app(W_ComplexCPPType.get_overload, unwrap_spec=['self', str]), + get_data_member_names = interp2app(W_ComplexCPPType.get_data_member_names, unwrap_spec=['self']), + get_data_member = interp2app(W_ComplexCPPType.get_data_member, unwrap_spec=['self', str]), + is_namespace = interp2app(W_ComplexCPPType.is_namespace, unwrap_spec=['self']), +) +W_ComplexCPPType.typedef.acceptable_as_base_class = False + + class W_CPPTemplateType(Wrappable): _immutable_fields_ = ["name", "handle"] diff --git a/pypy/module/cppyy/src/cintcwrapper.cxx b/pypy/module/cppyy/src/cintcwrapper.cxx --- a/pypy/module/cppyy/src/cintcwrapper.cxx +++ b/pypy/module/cppyy/src/cintcwrapper.cxx @@ -300,6 +300,12 @@ return cppstring_to_cstring(cr.GetClassName()); } +int cppyy_has_complex_hierarchy(cppyy_typehandle_t handle) { +// as long as no fast path is supported for CINT, calculating offsets (which +// are cached by the JIT) is not going to hurt + return 1; +} + int cppyy_num_bases(cppyy_typehandle_t handle) { TClassRef cr = type_from_handle(handle); if (cr.GetClass() && cr->GetListOfBases() != 0) diff --git a/pypy/module/cppyy/src/reflexcwrapper.cxx 
b/pypy/module/cppyy/src/reflexcwrapper.cxx --- a/pypy/module/cppyy/src/reflexcwrapper.cxx +++ b/pypy/module/cppyy/src/reflexcwrapper.cxx @@ -9,7 +9,6 @@ #include "Reflex/PropertyList.h" #include "Reflex/TypeTemplate.h" -#include #include #include #include @@ -208,6 +207,30 @@ return cppstring_to_cstring(name); } +static int cppyy_has_complex_hierarchy(const Reflex::Type& t) { + int is_complex = 1; + + size_t nbases = t.BaseSize(); + if (1 < nbases) + is_complex = 1; + else if (nbases == 0) + is_complex = 0; + else { // one base class only + Reflex::Base b = t.BaseAt(0); + if (b.IsVirtual()) + is_complex = 1; // TODO: verify; can be complex, need not be. + else + is_complex = cppyy_has_complex_hierarchy(t.BaseAt(0).ToType()); + } + + return is_complex; +} + +int cppyy_has_complex_hierarchy(cppyy_typehandle_t handle) { + Reflex::Type t = type_from_handle(handle); + return cppyy_has_complex_hierarchy(t); +} + int cppyy_num_bases(cppyy_typehandle_t handle) { Reflex::Type t = type_from_handle(handle); return t.BaseSize(); diff --git a/pypy/module/cppyy/test/test_zjit.py b/pypy/module/cppyy/test/test_zjit.py --- a/pypy/module/cppyy/test/test_zjit.py +++ b/pypy/module/cppyy/test/test_zjit.py @@ -32,7 +32,7 @@ def _opaque_direct_ptradd(ptr, offset): address = rffi.cast(rffi.CCHARP, ptr) return rffi.cast(capi.C_OBJECT, lltype.direct_ptradd(address, offset)) -interp_cppyy._direct_ptradd = _opaque_direct_ptradd +capi.direct_ptradd = _opaque_direct_ptradd class FakeUserDelAction(object): def __init__(self, space): From noreply at buildbot.pypy.org Thu Feb 9 21:14:38 2012 From: noreply at buildbot.pypy.org (wlav) Date: Thu, 9 Feb 2012 21:14:38 +0100 (CET) Subject: [pypy-commit] pypy reflex-support: merge default into branch Message-ID: <20120209201438.5642282B1E@wyvern.cs.uni-duesseldorf.de> Author: Wim Lavrijsen Branch: reflex-support Changeset: r52329:6739ecca6d65 Date: 2012-02-08 15:18 -0800 http://bitbucket.org/pypy/pypy/changeset/6739ecca6d65/ Log: merge default into branch diff too long, truncating to 10000 out of 153011 lines diff --git a/lib-python/2.7/BaseHTTPServer.py b/lib-python/2.7/BaseHTTPServer.py --- a/lib-python/2.7/BaseHTTPServer.py +++ b/lib-python/2.7/BaseHTTPServer.py @@ -310,7 +310,13 @@ """ try: - self.raw_requestline = self.rfile.readline() + self.raw_requestline = self.rfile.readline(65537) + if len(self.raw_requestline) > 65536: + self.requestline = '' + self.request_version = '' + self.command = '' + self.send_error(414) + return if not self.raw_requestline: self.close_connection = 1 return diff --git a/lib-python/2.7/ConfigParser.py b/lib-python/2.7/ConfigParser.py --- a/lib-python/2.7/ConfigParser.py +++ b/lib-python/2.7/ConfigParser.py @@ -545,6 +545,38 @@ if isinstance(val, list): options[name] = '\n'.join(val) +import UserDict as _UserDict + +class _Chainmap(_UserDict.DictMixin): + """Combine multiple mappings for successive lookups. 
+ + For example, to emulate Python's normal lookup sequence: + + import __builtin__ + pylookup = _Chainmap(locals(), globals(), vars(__builtin__)) + """ + + def __init__(self, *maps): + self._maps = maps + + def __getitem__(self, key): + for mapping in self._maps: + try: + return mapping[key] + except KeyError: + pass + raise KeyError(key) + + def keys(self): + result = [] + seen = set() + for mapping in self_maps: + for key in mapping: + if key not in seen: + result.append(key) + seen.add(key) + return result + class ConfigParser(RawConfigParser): def get(self, section, option, raw=False, vars=None): @@ -559,16 +591,18 @@ The section DEFAULT is special. """ - d = self._defaults.copy() + sectiondict = {} try: - d.update(self._sections[section]) + sectiondict = self._sections[section] except KeyError: if section != DEFAULTSECT: raise NoSectionError(section) # Update with the entry specific variables + vardict = {} if vars: for key, value in vars.items(): - d[self.optionxform(key)] = value + vardict[self.optionxform(key)] = value + d = _Chainmap(vardict, sectiondict, self._defaults) option = self.optionxform(option) try: value = d[option] diff --git a/lib-python/2.7/Cookie.py b/lib-python/2.7/Cookie.py --- a/lib-python/2.7/Cookie.py +++ b/lib-python/2.7/Cookie.py @@ -258,6 +258,11 @@ '\033' : '\\033', '\034' : '\\034', '\035' : '\\035', '\036' : '\\036', '\037' : '\\037', + # Because of the way browsers really handle cookies (as opposed + # to what the RFC says) we also encode , and ; + + ',' : '\\054', ';' : '\\073', + '"' : '\\"', '\\' : '\\\\', '\177' : '\\177', '\200' : '\\200', '\201' : '\\201', diff --git a/lib-python/2.7/HTMLParser.py b/lib-python/2.7/HTMLParser.py --- a/lib-python/2.7/HTMLParser.py +++ b/lib-python/2.7/HTMLParser.py @@ -26,7 +26,7 @@ tagfind = re.compile('[a-zA-Z][-.a-zA-Z0-9:_]*') attrfind = re.compile( r'\s*([a-zA-Z_][-.:a-zA-Z_0-9]*)(\s*=\s*' - r'(\'[^\']*\'|"[^"]*"|[-a-zA-Z0-9./,:;+*%?!&$\(\)_#=~@]*))?') + r'(\'[^\']*\'|"[^"]*"|[^\s"\'=<>`]*))?') locatestarttagend = re.compile(r""" <[a-zA-Z][-.a-zA-Z0-9:_]* # tag name @@ -99,7 +99,7 @@ markupbase.ParserBase.reset(self) def feed(self, data): - """Feed data to the parser. + r"""Feed data to the parser. Call this as often as you want, with as little or as much text as you want (may include '\n'). 
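The _Chainmap helper added above lets ConfigParser.get() consult the caller-supplied vars, then the section, then the defaults without merging dictionaries. A stripped-down sketch of the same first-match-wins lookup, using plain dicts and an invented helper name:

    defaults = {'timeout': '10', 'colour': 'blue'}
    section = {'colour': 'green'}
    override = {'timeout': '1'}

    def chained_get(key, *maps):
        # the first mapping that knows the key wins, as in the _Chainmap lookup
        for mapping in maps:
            if key in mapping:
                return mapping[key]
        raise KeyError(key)

    print(chained_get('colour', override, section, defaults))   # 'green' (from the section)
    print(chained_get('timeout', override, section, defaults))  # '1' (the override wins)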
@@ -367,13 +367,16 @@ return s def replaceEntities(s): s = s.groups()[0] - if s[0] == "#": - s = s[1:] - if s[0] in ['x','X']: - c = int(s[1:], 16) - else: - c = int(s) - return unichr(c) + try: + if s[0] == "#": + s = s[1:] + if s[0] in ['x','X']: + c = int(s[1:], 16) + else: + c = int(s) + return unichr(c) + except ValueError: + return '&#'+s+';' else: # Cannot use name2codepoint directly, because HTMLParser supports apos, # which is not part of HTML 4 diff --git a/lib-python/2.7/SimpleHTTPServer.py b/lib-python/2.7/SimpleHTTPServer.py --- a/lib-python/2.7/SimpleHTTPServer.py +++ b/lib-python/2.7/SimpleHTTPServer.py @@ -15,6 +15,7 @@ import BaseHTTPServer import urllib import cgi +import sys import shutil import mimetypes try: @@ -131,7 +132,8 @@ length = f.tell() f.seek(0) self.send_response(200) - self.send_header("Content-type", "text/html") + encoding = sys.getfilesystemencoding() + self.send_header("Content-type", "text/html; charset=%s" % encoding) self.send_header("Content-Length", str(length)) self.end_headers() return f diff --git a/lib-python/2.7/SimpleXMLRPCServer.py b/lib-python/2.7/SimpleXMLRPCServer.py --- a/lib-python/2.7/SimpleXMLRPCServer.py +++ b/lib-python/2.7/SimpleXMLRPCServer.py @@ -246,7 +246,7 @@ marshalled data. For backwards compatibility, a dispatch function can be provided as an argument (see comment in SimpleXMLRPCRequestHandler.do_POST) but overriding the - existing method through subclassing is the prefered means + existing method through subclassing is the preferred means of changing method dispatch behavior. """ diff --git a/lib-python/2.7/SocketServer.py b/lib-python/2.7/SocketServer.py --- a/lib-python/2.7/SocketServer.py +++ b/lib-python/2.7/SocketServer.py @@ -675,7 +675,7 @@ # A timeout to apply to the request socket, if not None. timeout = None - # Disable nagle algoritm for this socket, if True. + # Disable nagle algorithm for this socket, if True. # Use only when wbufsize != 0, to avoid small packets. disable_nagle_algorithm = False diff --git a/lib-python/2.7/StringIO.py b/lib-python/2.7/StringIO.py --- a/lib-python/2.7/StringIO.py +++ b/lib-python/2.7/StringIO.py @@ -266,6 +266,7 @@ 8th bit) will cause a UnicodeError to be raised when getvalue() is called. """ + _complain_ifclosed(self.closed) if self.buflist: self.buf += ''.join(self.buflist) self.buflist = [] diff --git a/lib-python/2.7/_abcoll.py b/lib-python/2.7/_abcoll.py --- a/lib-python/2.7/_abcoll.py +++ b/lib-python/2.7/_abcoll.py @@ -82,7 +82,7 @@ @classmethod def __subclasshook__(cls, C): if cls is Iterator: - if _hasattr(C, "next"): + if _hasattr(C, "next") and _hasattr(C, "__iter__"): return True return NotImplemented diff --git a/lib-python/2.7/_pyio.py b/lib-python/2.7/_pyio.py --- a/lib-python/2.7/_pyio.py +++ b/lib-python/2.7/_pyio.py @@ -16,6 +16,7 @@ import io from io import (__all__, SEEK_SET, SEEK_CUR, SEEK_END) +from errno import EINTR __metaclass__ = type @@ -559,7 +560,11 @@ if not data: break res += data - return bytes(res) + if res: + return bytes(res) + else: + # b'' or None + return data def readinto(self, b): """Read up to len(b) bytes into b. 
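The _abcoll change above tightens the duck-typing check behind collections.Iterator: with the stricter __subclasshook__, an object needs both next() and __iter__() to pass isinstance(). A small illustration of the patched behaviour (the class names are made up):

    import collections

    class HasNextOnly(object):
        def next(self):
            return 1

    class CountDown(object):
        def __init__(self, n):
            self.n = n
        def __iter__(self):
            return self
        def next(self):
            if self.n <= 0:
                raise StopIteration
            self.n -= 1
            return self.n

    print(isinstance(HasNextOnly(), collections.Iterator))  # False: next() alone no longer passes
    print(isinstance(CountDown(3), collections.Iterator))   # True: next() plus __iter__()
    print(list(CountDown(3)))                               # [2, 1, 0]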
@@ -678,7 +683,7 @@ """ def __init__(self, raw): - self.raw = raw + self._raw = raw ### Positioning ### @@ -722,8 +727,8 @@ if self.raw is None: raise ValueError("raw stream already detached") self.flush() - raw = self.raw - self.raw = None + raw = self._raw + self._raw = None return raw ### Inquiries ### @@ -738,6 +743,10 @@ return self.raw.writable() @property + def raw(self): + return self._raw + + @property def closed(self): return self.raw.closed @@ -933,7 +942,12 @@ current_size = 0 while True: # Read until EOF or until read() would block. - chunk = self.raw.read() + try: + chunk = self.raw.read() + except IOError as e: + if e.errno != EINTR: + raise + continue if chunk in empty_values: nodata_val = chunk break @@ -952,7 +966,12 @@ chunks = [buf[pos:]] wanted = max(self.buffer_size, n) while avail < n: - chunk = self.raw.read(wanted) + try: + chunk = self.raw.read(wanted) + except IOError as e: + if e.errno != EINTR: + raise + continue if chunk in empty_values: nodata_val = chunk break @@ -981,7 +1000,14 @@ have = len(self._read_buf) - self._read_pos if have < want or have <= 0: to_read = self.buffer_size - have - current = self.raw.read(to_read) + while True: + try: + current = self.raw.read(to_read) + except IOError as e: + if e.errno != EINTR: + raise + continue + break if current: self._read_buf = self._read_buf[self._read_pos:] + current self._read_pos = 0 @@ -1088,7 +1114,12 @@ written = 0 try: while self._write_buf: - n = self.raw.write(self._write_buf) + try: + n = self.raw.write(self._write_buf) + except IOError as e: + if e.errno != EINTR: + raise + continue if n > len(self._write_buf) or n < 0: raise IOError("write() returned incorrect number of bytes") del self._write_buf[:n] @@ -1456,7 +1487,7 @@ if not isinstance(errors, basestring): raise ValueError("invalid errors: %r" % errors) - self.buffer = buffer + self._buffer = buffer self._line_buffering = line_buffering self._encoding = encoding self._errors = errors @@ -1511,6 +1542,10 @@ def line_buffering(self): return self._line_buffering + @property + def buffer(self): + return self._buffer + def seekable(self): return self._seekable @@ -1724,8 +1759,8 @@ if self.buffer is None: raise ValueError("buffer is already detached") self.flush() - buffer = self.buffer - self.buffer = None + buffer = self._buffer + self._buffer = None return buffer def seek(self, cookie, whence=0): diff --git a/lib-python/2.7/_weakrefset.py b/lib-python/2.7/_weakrefset.py --- a/lib-python/2.7/_weakrefset.py +++ b/lib-python/2.7/_weakrefset.py @@ -66,7 +66,11 @@ return sum(x() is not None for x in self.data) def __contains__(self, item): - return ref(item) in self.data + try: + wr = ref(item) + except TypeError: + return False + return wr in self.data def __reduce__(self): return (self.__class__, (list(self),), diff --git a/lib-python/2.7/anydbm.py b/lib-python/2.7/anydbm.py --- a/lib-python/2.7/anydbm.py +++ b/lib-python/2.7/anydbm.py @@ -29,17 +29,8 @@ list = d.keys() # return a list of all existing keys (slow!) Future versions may change the order in which implementations are -tested for existence, add interfaces to other dbm-like +tested for existence, and add interfaces to other dbm-like implementations. - -The open function has an optional second argument. This can be 'r', -for read-only access, 'w', for read-write access of an existing -database, 'c' for read-write access to a new or existing database, and -'n' for read-write access to a new database. The default is 'r'. 
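Several of the _pyio hunks above wrap raw reads and writes in retry loops so a signal arriving mid-call (EINTR) no longer aborts buffered I/O. A minimal sketch of that retry pattern, assuming a raw read callable that raises IOError with errno EINTR when interrupted (the helper name is invented):

    from errno import EINTR

    def read_retrying_on_eintr(raw_read, size):
        # Retry a raw read() that is interrupted by a signal before any data
        # arrives, mirroring the loops added to the buffered I/O classes above.
        # usage: data = read_retrying_on_eintr(fileobj.read, 4096)
        while True:
            try:
                return raw_read(size)
            except IOError as e:
                if e.errno != EINTR:
                    raise
                # interrupted system call: just try again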
- -Note: 'r' and 'w' fail if the database doesn't exist; 'c' creates it -only if it doesn't exist; and 'n' always creates a new database. - """ class error(Exception): @@ -63,7 +54,18 @@ error = tuple(_errors) -def open(file, flag = 'r', mode = 0666): +def open(file, flag='r', mode=0666): + """Open or create database at path given by *file*. + + Optional argument *flag* can be 'r' (default) for read-only access, 'w' + for read-write access of an existing database, 'c' for read-write access + to a new or existing database, and 'n' for read-write access to a new + database. + + Note: 'r' and 'w' fail if the database doesn't exist; 'c' creates it + only if it doesn't exist; and 'n' always creates a new database. + """ + # guess the type of an existing database from whichdb import whichdb result=whichdb(file) diff --git a/lib-python/2.7/argparse.py b/lib-python/2.7/argparse.py --- a/lib-python/2.7/argparse.py +++ b/lib-python/2.7/argparse.py @@ -82,6 +82,7 @@ ] +import collections as _collections import copy as _copy import os as _os import re as _re @@ -1037,7 +1038,7 @@ self._prog_prefix = prog self._parser_class = parser_class - self._name_parser_map = {} + self._name_parser_map = _collections.OrderedDict() self._choices_actions = [] super(_SubParsersAction, self).__init__( @@ -1080,7 +1081,7 @@ parser = self._name_parser_map[parser_name] except KeyError: tup = parser_name, ', '.join(self._name_parser_map) - msg = _('unknown parser %r (choices: %s)' % tup) + msg = _('unknown parser %r (choices: %s)') % tup raise ArgumentError(self, msg) # parse all the remaining options into the namespace @@ -1109,7 +1110,7 @@ the builtin open() function. """ - def __init__(self, mode='r', bufsize=None): + def __init__(self, mode='r', bufsize=-1): self._mode = mode self._bufsize = bufsize @@ -1121,18 +1122,19 @@ elif 'w' in self._mode: return _sys.stdout else: - msg = _('argument "-" with mode %r' % self._mode) + msg = _('argument "-" with mode %r') % self._mode raise ValueError(msg) # all other arguments are used as file names - if self._bufsize: + try: return open(string, self._mode, self._bufsize) - else: - return open(string, self._mode) + except IOError as e: + message = _("can't open '%s': %s") + raise ArgumentTypeError(message % (string, e)) def __repr__(self): - args = [self._mode, self._bufsize] - args_str = ', '.join([repr(arg) for arg in args if arg is not None]) + args = self._mode, self._bufsize + args_str = ', '.join(repr(arg) for arg in args if arg != -1) return '%s(%s)' % (type(self).__name__, args_str) # =========================== @@ -1275,13 +1277,20 @@ # create the action object, and add it to the parser action_class = self._pop_action_class(kwargs) if not _callable(action_class): - raise ValueError('unknown action "%s"' % action_class) + raise ValueError('unknown action "%s"' % (action_class,)) action = action_class(**kwargs) # raise an error if the action type is not callable type_func = self._registry_get('type', action.type, action.type) if not _callable(type_func): - raise ValueError('%r is not callable' % type_func) + raise ValueError('%r is not callable' % (type_func,)) + + # raise an error if the metavar does not match the type + if hasattr(self, "_get_formatter"): + try: + self._get_formatter()._format_args(action, None) + except TypeError: + raise ValueError("length of metavar tuple does not match nargs") return self._add_action(action) @@ -1481,6 +1490,7 @@ self._defaults = container._defaults self._has_negative_number_optionals = \ container._has_negative_number_optionals + 
self._mutually_exclusive_groups = container._mutually_exclusive_groups def _add_action(self, action): action = super(_ArgumentGroup, self)._add_action(action) diff --git a/lib-python/2.7/ast.py b/lib-python/2.7/ast.py --- a/lib-python/2.7/ast.py +++ b/lib-python/2.7/ast.py @@ -29,12 +29,12 @@ from _ast import __version__ -def parse(expr, filename='', mode='exec'): +def parse(source, filename='', mode='exec'): """ - Parse an expression into an AST node. - Equivalent to compile(expr, filename, mode, PyCF_ONLY_AST). + Parse the source into an AST node. + Equivalent to compile(source, filename, mode, PyCF_ONLY_AST). """ - return compile(expr, filename, mode, PyCF_ONLY_AST) + return compile(source, filename, mode, PyCF_ONLY_AST) def literal_eval(node_or_string): @@ -152,8 +152,6 @@ Increment the line number of each node in the tree starting at *node* by *n*. This is useful to "move code" to a different location in a file. """ - if 'lineno' in node._attributes: - node.lineno = getattr(node, 'lineno', 0) + n for child in walk(node): if 'lineno' in child._attributes: child.lineno = getattr(child, 'lineno', 0) + n @@ -204,9 +202,9 @@ def walk(node): """ - Recursively yield all child nodes of *node*, in no specified order. This is - useful if you only want to modify nodes in place and don't care about the - context. + Recursively yield all descendant nodes in the tree starting at *node* + (including *node* itself), in no specified order. This is useful if you + only want to modify nodes in place and don't care about the context. """ from collections import deque todo = deque([node]) diff --git a/lib-python/2.7/asyncore.py b/lib-python/2.7/asyncore.py --- a/lib-python/2.7/asyncore.py +++ b/lib-python/2.7/asyncore.py @@ -54,7 +54,11 @@ import os from errno import EALREADY, EINPROGRESS, EWOULDBLOCK, ECONNRESET, EINVAL, \ - ENOTCONN, ESHUTDOWN, EINTR, EISCONN, EBADF, ECONNABORTED, errorcode + ENOTCONN, ESHUTDOWN, EINTR, EISCONN, EBADF, ECONNABORTED, EPIPE, EAGAIN, \ + errorcode + +_DISCONNECTED = frozenset((ECONNRESET, ENOTCONN, ESHUTDOWN, ECONNABORTED, EPIPE, + EBADF)) try: socket_map @@ -109,7 +113,7 @@ if flags & (select.POLLHUP | select.POLLERR | select.POLLNVAL): obj.handle_close() except socket.error, e: - if e.args[0] not in (EBADF, ECONNRESET, ENOTCONN, ESHUTDOWN, ECONNABORTED): + if e.args[0] not in _DISCONNECTED: obj.handle_error() else: obj.handle_close() @@ -353,7 +357,7 @@ except TypeError: return None except socket.error as why: - if why.args[0] in (EWOULDBLOCK, ECONNABORTED): + if why.args[0] in (EWOULDBLOCK, ECONNABORTED, EAGAIN): return None else: raise @@ -367,7 +371,7 @@ except socket.error, why: if why.args[0] == EWOULDBLOCK: return 0 - elif why.args[0] in (ECONNRESET, ENOTCONN, ESHUTDOWN, ECONNABORTED): + elif why.args[0] in _DISCONNECTED: self.handle_close() return 0 else: @@ -385,7 +389,7 @@ return data except socket.error, why: # winsock sometimes throws ENOTCONN - if why.args[0] in [ECONNRESET, ENOTCONN, ESHUTDOWN, ECONNABORTED]: + if why.args[0] in _DISCONNECTED: self.handle_close() return '' else: diff --git a/lib-python/2.7/bdb.py b/lib-python/2.7/bdb.py --- a/lib-python/2.7/bdb.py +++ b/lib-python/2.7/bdb.py @@ -250,6 +250,12 @@ list.append(lineno) bp = Breakpoint(filename, lineno, temporary, cond, funcname) + def _prune_breaks(self, filename, lineno): + if (filename, lineno) not in Breakpoint.bplist: + self.breaks[filename].remove(lineno) + if not self.breaks[filename]: + del self.breaks[filename] + def clear_break(self, filename, lineno): filename = self.canonic(filename) 
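The ast.py fix above stops increment_lineno() from bumping the root node separately, since ast.walk() already yields the node itself. A short usage sketch showing every node shifted exactly once:

    import ast

    tree = ast.parse("x = 1\ny = 2\n")
    ast.increment_lineno(tree, 10)      # shift the whole tree; each node is touched once
    linenos = [node.lineno for node in ast.walk(tree) if hasattr(node, 'lineno')]
    print(sorted(set(linenos)))         # [11, 12] -- both statements moved by 10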
if not filename in self.breaks: @@ -261,10 +267,7 @@ # pair, then remove the breaks entry for bp in Breakpoint.bplist[filename, lineno][:]: bp.deleteMe() - if (filename, lineno) not in Breakpoint.bplist: - self.breaks[filename].remove(lineno) - if not self.breaks[filename]: - del self.breaks[filename] + self._prune_breaks(filename, lineno) def clear_bpbynumber(self, arg): try: @@ -277,7 +280,8 @@ return 'Breakpoint number (%d) out of range' % number if not bp: return 'Breakpoint (%d) already deleted' % number - self.clear_break(bp.file, bp.line) + bp.deleteMe() + self._prune_breaks(bp.file, bp.line) def clear_all_file_breaks(self, filename): filename = self.canonic(filename) diff --git a/lib-python/2.7/collections.py b/lib-python/2.7/collections.py --- a/lib-python/2.7/collections.py +++ b/lib-python/2.7/collections.py @@ -6,59 +6,38 @@ __all__ += _abcoll.__all__ from _collections import deque, defaultdict -from operator import itemgetter as _itemgetter, eq as _eq +from operator import itemgetter as _itemgetter from keyword import iskeyword as _iskeyword import sys as _sys import heapq as _heapq -from itertools import repeat as _repeat, chain as _chain, starmap as _starmap, \ - ifilter as _ifilter, imap as _imap +from itertools import repeat as _repeat, chain as _chain, starmap as _starmap + try: - from thread import get_ident + from thread import get_ident as _get_ident except ImportError: - from dummy_thread import get_ident - -def _recursive_repr(user_function): - 'Decorator to make a repr function return "..." for a recursive call' - repr_running = set() - - def wrapper(self): - key = id(self), get_ident() - if key in repr_running: - return '...' - repr_running.add(key) - try: - result = user_function(self) - finally: - repr_running.discard(key) - return result - - # Can't use functools.wraps() here because of bootstrap issues - wrapper.__module__ = getattr(user_function, '__module__') - wrapper.__doc__ = getattr(user_function, '__doc__') - wrapper.__name__ = getattr(user_function, '__name__') - return wrapper + from dummy_thread import get_ident as _get_ident ################################################################################ ### OrderedDict ################################################################################ -class OrderedDict(dict, MutableMapping): +class OrderedDict(dict): 'Dictionary that remembers insertion order' # An inherited dict maps keys to values. # The inherited dict provides __getitem__, __len__, __contains__, and get. # The remaining methods are order-aware. - # Big-O running times for all methods are the same as for regular dictionaries. + # Big-O running times for all methods are the same as regular dictionaries. - # The internal self.__map dictionary maps keys to links in a doubly linked list. + # The internal self.__map dict maps keys to links in a doubly linked list. # The circular doubly linked list starts and ends with a sentinel element. # The sentinel element never gets deleted (this simplifies the algorithm). # Each link is stored as a list of length three: [PREV, NEXT, KEY]. def __init__(self, *args, **kwds): - '''Initialize an ordered dictionary. Signature is the same as for - regular dictionaries, but keyword arguments are not recommended - because their insertion order is arbitrary. + '''Initialize an ordered dictionary. The signature is the same as + regular dictionaries, but keyword arguments are not recommended because + their insertion order is arbitrary. 
''' if len(args) > 1: @@ -66,17 +45,15 @@ try: self.__root except AttributeError: - self.__root = root = [None, None, None] # sentinel node - PREV = 0 - NEXT = 1 - root[PREV] = root[NEXT] = root + self.__root = root = [] # sentinel node + root[:] = [root, root, None] self.__map = {} - self.update(*args, **kwds) + self.__update(*args, **kwds) def __setitem__(self, key, value, PREV=0, NEXT=1, dict_setitem=dict.__setitem__): 'od.__setitem__(i, y) <==> od[i]=y' - # Setting a new item creates a new link which goes at the end of the linked - # list, and the inherited dictionary is updated with the new key/value pair. + # Setting a new item creates a new link at the end of the linked list, + # and the inherited dictionary is updated with the new key/value pair. if key not in self: root = self.__root last = root[PREV] @@ -85,65 +62,160 @@ def __delitem__(self, key, PREV=0, NEXT=1, dict_delitem=dict.__delitem__): 'od.__delitem__(y) <==> del od[y]' - # Deleting an existing item uses self.__map to find the link which is - # then removed by updating the links in the predecessor and successor nodes. + # Deleting an existing item uses self.__map to find the link which gets + # removed by updating the links in the predecessor and successor nodes. dict_delitem(self, key) - link = self.__map.pop(key) - link_prev = link[PREV] - link_next = link[NEXT] + link_prev, link_next, key = self.__map.pop(key) link_prev[NEXT] = link_next link_next[PREV] = link_prev - def __iter__(self, NEXT=1, KEY=2): + def __iter__(self): 'od.__iter__() <==> iter(od)' # Traverse the linked list in order. + NEXT, KEY = 1, 2 root = self.__root curr = root[NEXT] while curr is not root: yield curr[KEY] curr = curr[NEXT] - def __reversed__(self, PREV=0, KEY=2): + def __reversed__(self): 'od.__reversed__() <==> reversed(od)' # Traverse the linked list in reverse order. + PREV, KEY = 0, 2 root = self.__root curr = root[PREV] while curr is not root: yield curr[KEY] curr = curr[PREV] + def clear(self): + 'od.clear() -> None. Remove all items from od.' + for node in self.__map.itervalues(): + del node[:] + root = self.__root + root[:] = [root, root, None] + self.__map.clear() + dict.clear(self) + + # -- the following methods do not depend on the internal structure -- + + def keys(self): + 'od.keys() -> list of keys in od' + return list(self) + + def values(self): + 'od.values() -> list of values in od' + return [self[key] for key in self] + + def items(self): + 'od.items() -> list of (key, value) pairs in od' + return [(key, self[key]) for key in self] + + def iterkeys(self): + 'od.iterkeys() -> an iterator over the keys in od' + return iter(self) + + def itervalues(self): + 'od.itervalues -> an iterator over the values in od' + for k in self: + yield self[k] + + def iteritems(self): + 'od.iteritems -> an iterator over the (key, value) pairs in od' + for k in self: + yield (k, self[k]) + + update = MutableMapping.update + + __update = update # let subclasses override update without breaking __init__ + + __marker = object() + + def pop(self, key, default=__marker): + '''od.pop(k[,d]) -> v, remove specified key and return the corresponding + value. If key is not found, d is returned if given, otherwise KeyError + is raised. 
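The rewritten OrderedDict above keeps insertion order in a circular doubly linked list of [PREV, NEXT, KEY] cells plus a key-to-cell map. A toy sketch of just that bookkeeping (not the stdlib class) to show why insertion, deletion and ordered iteration all stay cheap:

    class TinyOrder(object):
        # minimal order-tracker: sentinel-based circular list plus key -> cell map
        def __init__(self):
            self.root = root = []
            root[:] = [root, root, None]          # sentinel cell: PREV, NEXT, KEY
            self.map = {}

        def remember(self, key):
            if key not in self.map:
                root = self.root
                last = root[0]
                last[1] = root[0] = self.map[key] = [last, root, key]

        def forget(self, key):
            prev_cell, next_cell, _ = self.map.pop(key)
            prev_cell[1] = next_cell              # unlink in O(1)
            next_cell[0] = prev_cell

        def __iter__(self):
            curr = self.root[1]
            while curr is not self.root:
                yield curr[2]
                curr = curr[1]

    order = TinyOrder()
    for k in 'abc':
        order.remember(k)
    order.forget('b')
    print(list(order))                            # ['a', 'c']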
+ + ''' + if key in self: + result = self[key] + del self[key] + return result + if default is self.__marker: + raise KeyError(key) + return default + + def setdefault(self, key, default=None): + 'od.setdefault(k[,d]) -> od.get(k,d), also set od[k]=d if k not in od' + if key in self: + return self[key] + self[key] = default + return default + + def popitem(self, last=True): + '''od.popitem() -> (k, v), return and remove a (key, value) pair. + Pairs are returned in LIFO order if last is true or FIFO order if false. + + ''' + if not self: + raise KeyError('dictionary is empty') + key = next(reversed(self) if last else iter(self)) + value = self.pop(key) + return key, value + + def __repr__(self, _repr_running={}): + 'od.__repr__() <==> repr(od)' + call_key = id(self), _get_ident() + if call_key in _repr_running: + return '...' + _repr_running[call_key] = 1 + try: + if not self: + return '%s()' % (self.__class__.__name__,) + return '%s(%r)' % (self.__class__.__name__, self.items()) + finally: + del _repr_running[call_key] + def __reduce__(self): 'Return state information for pickling' items = [[k, self[k]] for k in self] - tmp = self.__map, self.__root - del self.__map, self.__root inst_dict = vars(self).copy() - self.__map, self.__root = tmp + for k in vars(OrderedDict()): + inst_dict.pop(k, None) if inst_dict: return (self.__class__, (items,), inst_dict) return self.__class__, (items,) - def clear(self): - 'od.clear() -> None. Remove all items from od.' - try: - for node in self.__map.itervalues(): - del node[:] - self.__root[:] = [self.__root, self.__root, None] - self.__map.clear() - except AttributeError: - pass - dict.clear(self) + def copy(self): + 'od.copy() -> a shallow copy of od' + return self.__class__(self) - setdefault = MutableMapping.setdefault - update = MutableMapping.update - pop = MutableMapping.pop - keys = MutableMapping.keys - values = MutableMapping.values - items = MutableMapping.items - iterkeys = MutableMapping.iterkeys - itervalues = MutableMapping.itervalues - iteritems = MutableMapping.iteritems - __ne__ = MutableMapping.__ne__ + @classmethod + def fromkeys(cls, iterable, value=None): + '''OD.fromkeys(S[, v]) -> New ordered dictionary with keys from S. + If not specified, the value defaults to None. + + ''' + self = cls() + for key in iterable: + self[key] = value + return self + + def __eq__(self, other): + '''od.__eq__(y) <==> od==y. Comparison to another OD is order-sensitive + while comparison to a regular mapping is order-insensitive. + + ''' + if isinstance(other, OrderedDict): + return len(self)==len(other) and self.items() == other.items() + return dict.__eq__(self, other) + + def __ne__(self, other): + 'od.__ne__(y) <==> od!=y' + return not self == other + + # -- the following methods support python 3.x style dictionary views -- def viewkeys(self): "od.viewkeys() -> a set-like object providing a view on od's keys" @@ -157,49 +229,6 @@ "od.viewitems() -> a set-like object providing a view on od's items" return ItemsView(self) - def popitem(self, last=True): - '''od.popitem() -> (k, v), return and remove a (key, value) pair. - Pairs are returned in LIFO order if last is true or FIFO order if false. 
- - ''' - if not self: - raise KeyError('dictionary is empty') - key = next(reversed(self) if last else iter(self)) - value = self.pop(key) - return key, value - - @_recursive_repr - def __repr__(self): - 'od.__repr__() <==> repr(od)' - if not self: - return '%s()' % (self.__class__.__name__,) - return '%s(%r)' % (self.__class__.__name__, self.items()) - - def copy(self): - 'od.copy() -> a shallow copy of od' - return self.__class__(self) - - @classmethod - def fromkeys(cls, iterable, value=None): - '''OD.fromkeys(S[, v]) -> New ordered dictionary with keys from S - and values equal to v (which defaults to None). - - ''' - d = cls() - for key in iterable: - d[key] = value - return d - - def __eq__(self, other): - '''od.__eq__(y) <==> od==y. Comparison to another OD is order-sensitive - while comparison to a regular mapping is order-insensitive. - - ''' - if isinstance(other, OrderedDict): - return len(self)==len(other) and \ - all(_imap(_eq, self.iteritems(), other.iteritems())) - return dict.__eq__(self, other) - ################################################################################ ### namedtuple @@ -328,16 +357,16 @@ or multiset. Elements are stored as dictionary keys and their counts are stored as dictionary values. - >>> c = Counter('abracadabra') # count elements from a string + >>> c = Counter('abcdeabcdabcaba') # count elements from a string >>> c.most_common(3) # three most common elements - [('a', 5), ('r', 2), ('b', 2)] + [('a', 5), ('b', 4), ('c', 3)] >>> sorted(c) # list all unique elements - ['a', 'b', 'c', 'd', 'r'] + ['a', 'b', 'c', 'd', 'e'] >>> ''.join(sorted(c.elements())) # list elements with repetitions - 'aaaaabbcdrr' + 'aaaaabbbbcccdde' >>> sum(c.values()) # total of all counts - 11 + 15 >>> c['a'] # count of letter 'a' 5 @@ -345,8 +374,8 @@ ... c[elem] += 1 # by adding 1 to each element's count >>> c['a'] # now there are seven 'a' 7 - >>> del c['r'] # remove all 'r' - >>> c['r'] # now there are zero 'r' + >>> del c['b'] # remove all 'b' + >>> c['b'] # now there are zero 'b' 0 >>> d = Counter('simsalabim') # make another counter @@ -385,6 +414,7 @@ >>> c = Counter(a=4, b=2) # a new counter from keyword args ''' + super(Counter, self).__init__() self.update(iterable, **kwds) def __missing__(self, key): @@ -396,8 +426,8 @@ '''List the n most common elements and their counts from the most common to the least. If n is None, then list all element counts. - >>> Counter('abracadabra').most_common(3) - [('a', 5), ('r', 2), ('b', 2)] + >>> Counter('abcdeabcdabcaba').most_common(3) + [('a', 5), ('b', 4), ('c', 3)] ''' # Emulate Bag.sortedByCount from Smalltalk @@ -463,7 +493,7 @@ for elem, count in iterable.iteritems(): self[elem] = self_get(elem, 0) + count else: - dict.update(self, iterable) # fast path when counter is empty + super(Counter, self).update(iterable) # fast path when counter is empty else: self_get = self.get for elem in iterable: @@ -499,13 +529,16 @@ self.subtract(kwds) def copy(self): - 'Like dict.copy() but returns a Counter instance instead of a dict.' - return Counter(self) + 'Return a shallow copy.' + return self.__class__(self) + + def __reduce__(self): + return self.__class__, (dict(self),) def __delitem__(self, elem): 'Like dict.__delitem__() but does not raise KeyError for missing values.' 
if elem in self: - dict.__delitem__(self, elem) + super(Counter, self).__delitem__(elem) def __repr__(self): if not self: @@ -532,10 +565,13 @@ if not isinstance(other, Counter): return NotImplemented result = Counter() - for elem in set(self) | set(other): - newcount = self[elem] + other[elem] + for elem, count in self.items(): + newcount = count + other[elem] if newcount > 0: result[elem] = newcount + for elem, count in other.items(): + if elem not in self and count > 0: + result[elem] = count return result def __sub__(self, other): @@ -548,10 +584,13 @@ if not isinstance(other, Counter): return NotImplemented result = Counter() - for elem in set(self) | set(other): - newcount = self[elem] - other[elem] + for elem, count in self.items(): + newcount = count - other[elem] if newcount > 0: result[elem] = newcount + for elem, count in other.items(): + if elem not in self and count < 0: + result[elem] = 0 - count return result def __or__(self, other): @@ -564,11 +603,14 @@ if not isinstance(other, Counter): return NotImplemented result = Counter() - for elem in set(self) | set(other): - p, q = self[elem], other[elem] - newcount = q if p < q else p + for elem, count in self.items(): + other_count = other[elem] + newcount = other_count if count < other_count else count if newcount > 0: result[elem] = newcount + for elem, count in other.items(): + if elem not in self and count > 0: + result[elem] = count return result def __and__(self, other): @@ -581,11 +623,9 @@ if not isinstance(other, Counter): return NotImplemented result = Counter() - if len(self) < len(other): - self, other = other, self - for elem in _ifilter(self.__contains__, other): - p, q = self[elem], other[elem] - newcount = p if p < q else q + for elem, count in self.items(): + other_count = other[elem] + newcount = count if count < other_count else other_count if newcount > 0: result[elem] = newcount return result diff --git a/lib-python/2.7/compileall.py b/lib-python/2.7/compileall.py --- a/lib-python/2.7/compileall.py +++ b/lib-python/2.7/compileall.py @@ -9,7 +9,6 @@ packages -- for now, you'll have to deal with packages separately.) See module py_compile for details of the actual byte-compilation. - """ import os import sys @@ -31,7 +30,6 @@ directory name that will show up in error messages) force: if 1, force compilation, even if timestamps are up-to-date quiet: if 1, be quiet during compilation - """ if not quiet: print 'Listing', dir, '...' @@ -61,15 +59,16 @@ return success def compile_file(fullname, ddir=None, force=0, rx=None, quiet=0): - """Byte-compile file. - file: the file to byte-compile + """Byte-compile one file. + + Arguments (only fullname is required): + + fullname: the file to byte-compile ddir: if given, purported directory name (this is the directory name that will show up in error messages) force: if 1, force compilation, even if timestamps are up-to-date quiet: if 1, be quiet during compilation - """ - success = 1 name = os.path.basename(fullname) if ddir is not None: @@ -120,7 +119,6 @@ maxlevels: max recursion level (default 0) force: as for compile_dir() (default 0) quiet: as for compile_dir() (default 0) - """ success = 1 for dir in sys.path: diff --git a/lib-python/2.7/csv.py b/lib-python/2.7/csv.py --- a/lib-python/2.7/csv.py +++ b/lib-python/2.7/csv.py @@ -281,7 +281,7 @@ an all or nothing approach, so we allow for small variations in this number. 1) build a table of the frequency of each character on every line. 
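The Counter operator rewrites above make +, -, | and & behave as multiset operations that also account for keys present only in the right operand and drop non-positive counts. For example:

    from collections import Counter

    a = Counter(x=3, y=1)
    b = Counter(x=1, z=2)

    print(sorted((a + b).items()))   # [('x', 4), ('y', 1), ('z', 2)]  counts add
    print(sorted((a - b).items()))   # [('x', 2), ('y', 1)]            non-positive results dropped
    print(sorted((a | b).items()))   # [('x', 3), ('y', 1), ('z', 2)]  per-key maximum
    print(sorted((a & b).items()))   # [('x', 1)]                      per-key minimum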
- 2) build a table of freqencies of this frequency (meta-frequency?), + 2) build a table of frequencies of this frequency (meta-frequency?), e.g. 'x occurred 5 times in 10 rows, 6 times in 1000 rows, 7 times in 2 rows' 3) use the mode of the meta-frequency to determine the /expected/ diff --git a/lib-python/2.7/ctypes/test/test_arrays.py b/lib-python/2.7/ctypes/test/test_arrays.py --- a/lib-python/2.7/ctypes/test/test_arrays.py +++ b/lib-python/2.7/ctypes/test/test_arrays.py @@ -37,7 +37,7 @@ values = [ia[i] for i in range(len(init))] self.assertEqual(values, [0] * len(init)) - # Too many in itializers should be caught + # Too many initializers should be caught self.assertRaises(IndexError, int_array, *range(alen*2)) CharArray = ARRAY(c_char, 3) diff --git a/lib-python/2.7/ctypes/test/test_as_parameter.py b/lib-python/2.7/ctypes/test/test_as_parameter.py --- a/lib-python/2.7/ctypes/test/test_as_parameter.py +++ b/lib-python/2.7/ctypes/test/test_as_parameter.py @@ -187,6 +187,18 @@ self.assertEqual((s8i.a, s8i.b, s8i.c, s8i.d, s8i.e, s8i.f, s8i.g, s8i.h), (9*2, 8*3, 7*4, 6*5, 5*6, 4*7, 3*8, 2*9)) + def test_recursive_as_param(self): + from ctypes import c_int + + class A(object): + pass + + a = A() + a._as_parameter_ = a + with self.assertRaises(RuntimeError): + c_int.from_param(a) + + #~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ class AsParamWrapper(object): diff --git a/lib-python/2.7/ctypes/test/test_callbacks.py b/lib-python/2.7/ctypes/test/test_callbacks.py --- a/lib-python/2.7/ctypes/test/test_callbacks.py +++ b/lib-python/2.7/ctypes/test/test_callbacks.py @@ -206,6 +206,42 @@ windll.user32.EnumWindows(EnumWindowsCallbackFunc, 0) + def test_callback_register_int(self): + # Issue #8275: buggy handling of callback args under Win64 + # NOTE: should be run on release builds as well + dll = CDLL(_ctypes_test.__file__) + CALLBACK = CFUNCTYPE(c_int, c_int, c_int, c_int, c_int, c_int) + # All this function does is call the callback with its args squared + func = dll._testfunc_cbk_reg_int + func.argtypes = (c_int, c_int, c_int, c_int, c_int, CALLBACK) + func.restype = c_int + + def callback(a, b, c, d, e): + return a + b + c + d + e + + result = func(2, 3, 4, 5, 6, CALLBACK(callback)) + self.assertEqual(result, callback(2*2, 3*3, 4*4, 5*5, 6*6)) + + def test_callback_register_double(self): + # Issue #8275: buggy handling of callback args under Win64 + # NOTE: should be run on release builds as well + dll = CDLL(_ctypes_test.__file__) + CALLBACK = CFUNCTYPE(c_double, c_double, c_double, c_double, + c_double, c_double) + # All this function does is call the callback with its args squared + func = dll._testfunc_cbk_reg_double + func.argtypes = (c_double, c_double, c_double, + c_double, c_double, CALLBACK) + func.restype = c_double + + def callback(a, b, c, d, e): + return a + b + c + d + e + + result = func(1.1, 2.2, 3.3, 4.4, 5.5, CALLBACK(callback)) + self.assertEqual(result, + callback(1.1*1.1, 2.2*2.2, 3.3*3.3, 4.4*4.4, 5.5*5.5)) + + ################################################################ if __name__ == '__main__': diff --git a/lib-python/2.7/ctypes/test/test_functions.py b/lib-python/2.7/ctypes/test/test_functions.py --- a/lib-python/2.7/ctypes/test/test_functions.py +++ b/lib-python/2.7/ctypes/test/test_functions.py @@ -116,7 +116,7 @@ self.assertEqual(result, 21) self.assertEqual(type(result), int) - # You cannot assing character format codes as restype any longer + # You cannot assign character format codes as restype any longer self.assertRaises(TypeError, setattr, f, 
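The new ctypes tests above exercise callbacks that take several register-passed arguments (the Win64 issue 8275 regression). A minimal callback of the same shape, with an invented callback type name, just to show the CFUNCTYPE wrapping:

    from ctypes import CFUNCTYPE, c_int

    # a callback type taking five ints and returning an int
    FIVE_INT_CALLBACK = CFUNCTYPE(c_int, c_int, c_int, c_int, c_int, c_int)

    def add_all(a, b, c, d, e):
        return a + b + c + d + e

    cb = FIVE_INT_CALLBACK(add_all)   # wrap the Python function as a C-callable pointer
    print(cb(1, 2, 3, 4, 5))          # 15; C code expecting this signature could call it too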
"restype", "i") def test_floatresult(self): diff --git a/lib-python/2.7/ctypes/test/test_init.py b/lib-python/2.7/ctypes/test/test_init.py --- a/lib-python/2.7/ctypes/test/test_init.py +++ b/lib-python/2.7/ctypes/test/test_init.py @@ -27,7 +27,7 @@ self.assertEqual((y.x.a, y.x.b), (0, 0)) self.assertEqual(y.x.new_was_called, False) - # But explicitely creating an X structure calls __new__ and __init__, of course. + # But explicitly creating an X structure calls __new__ and __init__, of course. x = X() self.assertEqual((x.a, x.b), (9, 12)) self.assertEqual(x.new_was_called, True) diff --git a/lib-python/2.7/ctypes/test/test_numbers.py b/lib-python/2.7/ctypes/test/test_numbers.py --- a/lib-python/2.7/ctypes/test/test_numbers.py +++ b/lib-python/2.7/ctypes/test/test_numbers.py @@ -157,7 +157,7 @@ def test_int_from_address(self): from array import array for t in signed_types + unsigned_types: - # the array module doesn't suppport all format codes + # the array module doesn't support all format codes # (no 'q' or 'Q') try: array(t._type_) diff --git a/lib-python/2.7/ctypes/test/test_win32.py b/lib-python/2.7/ctypes/test/test_win32.py --- a/lib-python/2.7/ctypes/test/test_win32.py +++ b/lib-python/2.7/ctypes/test/test_win32.py @@ -17,7 +17,7 @@ # ValueError: Procedure probably called with not enough arguments (4 bytes missing) self.assertRaises(ValueError, IsWindow) - # This one should succeeed... + # This one should succeed... self.assertEqual(0, IsWindow(0)) # ValueError: Procedure probably called with too many arguments (8 bytes in excess) diff --git a/lib-python/2.7/curses/wrapper.py b/lib-python/2.7/curses/wrapper.py --- a/lib-python/2.7/curses/wrapper.py +++ b/lib-python/2.7/curses/wrapper.py @@ -43,7 +43,8 @@ return func(stdscr, *args, **kwds) finally: # Set everything back to normal - stdscr.keypad(0) - curses.echo() - curses.nocbreak() - curses.endwin() + if 'stdscr' in locals(): + stdscr.keypad(0) + curses.echo() + curses.nocbreak() + curses.endwin() diff --git a/lib-python/2.7/decimal.py b/lib-python/2.7/decimal.py --- a/lib-python/2.7/decimal.py +++ b/lib-python/2.7/decimal.py @@ -1068,14 +1068,16 @@ if ans: return ans - if not self: - # -Decimal('0') is Decimal('0'), not Decimal('-0') + if context is None: + context = getcontext() + + if not self and context.rounding != ROUND_FLOOR: + # -Decimal('0') is Decimal('0'), not Decimal('-0'), except + # in ROUND_FLOOR rounding mode. ans = self.copy_abs() else: ans = self.copy_negate() - if context is None: - context = getcontext() return ans._fix(context) def __pos__(self, context=None): @@ -1088,14 +1090,15 @@ if ans: return ans - if not self: - # + (-0) = 0 + if context is None: + context = getcontext() + + if not self and context.rounding != ROUND_FLOOR: + # + (-0) = 0, except in ROUND_FLOOR rounding mode. 
ans = self.copy_abs() else: ans = Decimal(self) - if context is None: - context = getcontext() return ans._fix(context) def __abs__(self, round=True, context=None): @@ -1680,7 +1683,7 @@ self = _dec_from_triple(self._sign, '1', exp_min-1) digits = 0 rounding_method = self._pick_rounding_function[context.rounding] - changed = getattr(self, rounding_method)(digits) + changed = rounding_method(self, digits) coeff = self._int[:digits] or '0' if changed > 0: coeff = str(int(coeff)+1) @@ -1720,8 +1723,6 @@ # here self was representable to begin with; return unchanged return Decimal(self) - _pick_rounding_function = {} - # for each of the rounding functions below: # self is a finite, nonzero Decimal # prec is an integer satisfying 0 <= prec < len(self._int) @@ -1788,6 +1789,17 @@ else: return -self._round_down(prec) + _pick_rounding_function = dict( + ROUND_DOWN = _round_down, + ROUND_UP = _round_up, + ROUND_HALF_UP = _round_half_up, + ROUND_HALF_DOWN = _round_half_down, + ROUND_HALF_EVEN = _round_half_even, + ROUND_CEILING = _round_ceiling, + ROUND_FLOOR = _round_floor, + ROUND_05UP = _round_05up, + ) + def fma(self, other, third, context=None): """Fused multiply-add. @@ -2492,8 +2504,8 @@ if digits < 0: self = _dec_from_triple(self._sign, '1', exp-1) digits = 0 - this_function = getattr(self, self._pick_rounding_function[rounding]) - changed = this_function(digits) + this_function = self._pick_rounding_function[rounding] + changed = this_function(self, digits) coeff = self._int[:digits] or '0' if changed == 1: coeff = str(int(coeff)+1) @@ -3705,18 +3717,6 @@ ##### Context class ####################################################### - -# get rounding method function: -rounding_functions = [name for name in Decimal.__dict__.keys() - if name.startswith('_round_')] -for name in rounding_functions: - # name is like _round_half_even, goes to the global ROUND_HALF_EVEN value. - globalname = name[1:].upper() - val = globals()[globalname] - Decimal._pick_rounding_function[val] = name - -del name, val, globalname, rounding_functions - class _ContextManager(object): """Context manager class to support localcontext(). @@ -5990,7 +5990,7 @@ def _format_align(sign, body, spec): """Given an unpadded, non-aligned numeric string 'body' and sign - string 'sign', add padding and aligment conforming to the given + string 'sign', add padding and alignment conforming to the given format specifier dictionary 'spec' (as produced by parse_format_specifier). 
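The decimal changes above make unary minus and plus respect ROUND_FLOOR for zeros, so a negated zero keeps its sign only in that rounding mode. Assuming the patched module and an otherwise default context:

    from decimal import Decimal, getcontext, ROUND_FLOOR

    print(-Decimal('0'))              # 0   (default rounding: a negated zero comes out positive)
    getcontext().rounding = ROUND_FLOOR   # note: mutates the thread's current context
    print(-Decimal('0'))              # -0  (ROUND_FLOOR preserves the sign, as in the fix above)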
diff --git a/lib-python/2.7/difflib.py b/lib-python/2.7/difflib.py --- a/lib-python/2.7/difflib.py +++ b/lib-python/2.7/difflib.py @@ -1140,6 +1140,21 @@ return ch in ws +######################################################################## +### Unified Diff +######################################################################## + +def _format_range_unified(start, stop): + 'Convert range to the "ed" format' + # Per the diff spec at http://www.unix.org/single_unix_specification/ + beginning = start + 1 # lines start numbering with one + length = stop - start + if length == 1: + return '{}'.format(beginning) + if not length: + beginning -= 1 # empty ranges begin at line just before the range + return '{},{}'.format(beginning, length) + def unified_diff(a, b, fromfile='', tofile='', fromfiledate='', tofiledate='', n=3, lineterm='\n'): r""" @@ -1184,25 +1199,45 @@ started = False for group in SequenceMatcher(None,a,b).get_grouped_opcodes(n): if not started: - fromdate = '\t%s' % fromfiledate if fromfiledate else '' - todate = '\t%s' % tofiledate if tofiledate else '' - yield '--- %s%s%s' % (fromfile, fromdate, lineterm) - yield '+++ %s%s%s' % (tofile, todate, lineterm) started = True - i1, i2, j1, j2 = group[0][1], group[-1][2], group[0][3], group[-1][4] - yield "@@ -%d,%d +%d,%d @@%s" % (i1+1, i2-i1, j1+1, j2-j1, lineterm) + fromdate = '\t{}'.format(fromfiledate) if fromfiledate else '' + todate = '\t{}'.format(tofiledate) if tofiledate else '' + yield '--- {}{}{}'.format(fromfile, fromdate, lineterm) + yield '+++ {}{}{}'.format(tofile, todate, lineterm) + + first, last = group[0], group[-1] + file1_range = _format_range_unified(first[1], last[2]) + file2_range = _format_range_unified(first[3], last[4]) + yield '@@ -{} +{} @@{}'.format(file1_range, file2_range, lineterm) + for tag, i1, i2, j1, j2 in group: if tag == 'equal': for line in a[i1:i2]: yield ' ' + line continue - if tag == 'replace' or tag == 'delete': + if tag in ('replace', 'delete'): for line in a[i1:i2]: yield '-' + line - if tag == 'replace' or tag == 'insert': + if tag in ('replace', 'insert'): for line in b[j1:j2]: yield '+' + line + +######################################################################## +### Context Diff +######################################################################## + +def _format_range_context(start, stop): + 'Convert range to the "ed" format' + # Per the diff spec at http://www.unix.org/single_unix_specification/ + beginning = start + 1 # lines start numbering with one + length = stop - start + if not length: + beginning -= 1 # empty ranges begin at line just before the range + if length <= 1: + return '{}'.format(beginning) + return '{},{}'.format(beginning, beginning + length - 1) + # See http://www.unix.org/single_unix_specification/ def context_diff(a, b, fromfile='', tofile='', fromfiledate='', tofiledate='', n=3, lineterm='\n'): @@ -1247,38 +1282,36 @@ four """ + prefix = dict(insert='+ ', delete='- ', replace='! ', equal=' ') started = False - prefixmap = {'insert':'+ ', 'delete':'- ', 'replace':'! 
', 'equal':' '} for group in SequenceMatcher(None,a,b).get_grouped_opcodes(n): if not started: - fromdate = '\t%s' % fromfiledate if fromfiledate else '' - todate = '\t%s' % tofiledate if tofiledate else '' - yield '*** %s%s%s' % (fromfile, fromdate, lineterm) - yield '--- %s%s%s' % (tofile, todate, lineterm) started = True + fromdate = '\t{}'.format(fromfiledate) if fromfiledate else '' + todate = '\t{}'.format(tofiledate) if tofiledate else '' + yield '*** {}{}{}'.format(fromfile, fromdate, lineterm) + yield '--- {}{}{}'.format(tofile, todate, lineterm) - yield '***************%s' % (lineterm,) - if group[-1][2] - group[0][1] >= 2: - yield '*** %d,%d ****%s' % (group[0][1]+1, group[-1][2], lineterm) - else: - yield '*** %d ****%s' % (group[-1][2], lineterm) - visiblechanges = [e for e in group if e[0] in ('replace', 'delete')] - if visiblechanges: + first, last = group[0], group[-1] + yield '***************' + lineterm + + file1_range = _format_range_context(first[1], last[2]) + yield '*** {} ****{}'.format(file1_range, lineterm) + + if any(tag in ('replace', 'delete') for tag, _, _, _, _ in group): for tag, i1, i2, _, _ in group: if tag != 'insert': for line in a[i1:i2]: - yield prefixmap[tag] + line + yield prefix[tag] + line - if group[-1][4] - group[0][3] >= 2: - yield '--- %d,%d ----%s' % (group[0][3]+1, group[-1][4], lineterm) - else: - yield '--- %d ----%s' % (group[-1][4], lineterm) - visiblechanges = [e for e in group if e[0] in ('replace', 'insert')] - if visiblechanges: + file2_range = _format_range_context(first[3], last[4]) + yield '--- {} ----{}'.format(file2_range, lineterm) + + if any(tag in ('replace', 'insert') for tag, _, _, _, _ in group): for tag, _, _, j1, j2 in group: if tag != 'delete': for line in b[j1:j2]: - yield prefixmap[tag] + line + yield prefix[tag] + line def ndiff(a, b, linejunk=None, charjunk=IS_CHARACTER_JUNK): r""" @@ -1714,7 +1747,7 @@ line = line.replace(' ','\0') # expand tabs into spaces line = line.expandtabs(self._tabsize) - # relace spaces from expanded tabs back into tab characters + # replace spaces from expanded tabs back into tab characters # (we'll replace them with markup after we do differencing) line = line.replace(' ','\t') return line.replace('\0',' ').rstrip('\n') diff --git a/lib-python/2.7/distutils/__init__.py b/lib-python/2.7/distutils/__init__.py --- a/lib-python/2.7/distutils/__init__.py +++ b/lib-python/2.7/distutils/__init__.py @@ -15,5 +15,5 @@ # Updated automatically by the Python release process. # #--start constants-- -__version__ = "2.7.1" +__version__ = "2.7.2" #--end constants-- diff --git a/lib-python/2.7/distutils/archive_util.py b/lib-python/2.7/distutils/archive_util.py --- a/lib-python/2.7/distutils/archive_util.py +++ b/lib-python/2.7/distutils/archive_util.py @@ -121,7 +121,7 @@ def make_zipfile(base_name, base_dir, verbose=0, dry_run=0): """Create a zip file from all the files under 'base_dir'. - The output zip file will be named 'base_dir' + ".zip". Uses either the + The output zip file will be named 'base_name' + ".zip". Uses either the "zipfile" Python module (if available) or the InfoZIP "zip" utility (if installed and found on the default search path). If neither tool is available, raises DistutilsExecError. 
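The difflib rewrite above centralises the "start,length" hunk ranges in _format_range_unified and _format_range_context. A small unified_diff usage example showing those headers in context:

    import difflib

    old = ['one\n', 'two\n', 'three\n']
    new = ['one\n', 'three\n', 'four\n']
    for line in difflib.unified_diff(old, new, fromfile='old.txt', tofile='new.txt'):
        print(line.rstrip('\n'))
    # prints the '--- old.txt' / '+++ new.txt' headers, a '@@ -1,3 +1,3 @@' range
    # line, then ' one', '-two', ' three', '+four'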
Returns the name of the output zip diff --git a/lib-python/2.7/distutils/cmd.py b/lib-python/2.7/distutils/cmd.py --- a/lib-python/2.7/distutils/cmd.py +++ b/lib-python/2.7/distutils/cmd.py @@ -377,7 +377,7 @@ dry_run=self.dry_run) def move_file (self, src, dst, level=1): - """Move a file respectin dry-run flag.""" + """Move a file respecting dry-run flag.""" return file_util.move_file(src, dst, dry_run = self.dry_run) def spawn (self, cmd, search_path=1, level=1): diff --git a/lib-python/2.7/distutils/command/build_ext.py b/lib-python/2.7/distutils/command/build_ext.py --- a/lib-python/2.7/distutils/command/build_ext.py +++ b/lib-python/2.7/distutils/command/build_ext.py @@ -207,7 +207,7 @@ elif MSVC_VERSION == 8: self.library_dirs.append(os.path.join(sys.exec_prefix, - 'PC', 'VS8.0', 'win32release')) + 'PC', 'VS8.0')) elif MSVC_VERSION == 7: self.library_dirs.append(os.path.join(sys.exec_prefix, 'PC', 'VS7.1')) diff --git a/lib-python/2.7/distutils/command/sdist.py b/lib-python/2.7/distutils/command/sdist.py --- a/lib-python/2.7/distutils/command/sdist.py +++ b/lib-python/2.7/distutils/command/sdist.py @@ -306,17 +306,20 @@ rstrip_ws=1, collapse_join=1) - while 1: - line = template.readline() - if line is None: # end of file - break + try: + while 1: + line = template.readline() + if line is None: # end of file + break - try: - self.filelist.process_template_line(line) - except DistutilsTemplateError, msg: - self.warn("%s, line %d: %s" % (template.filename, - template.current_line, - msg)) + try: + self.filelist.process_template_line(line) + except DistutilsTemplateError, msg: + self.warn("%s, line %d: %s" % (template.filename, + template.current_line, + msg)) + finally: + template.close() def prune_file_list(self): """Prune off branches that might slip into the file list as created diff --git a/lib-python/2.7/distutils/command/upload.py b/lib-python/2.7/distutils/command/upload.py --- a/lib-python/2.7/distutils/command/upload.py +++ b/lib-python/2.7/distutils/command/upload.py @@ -176,6 +176,9 @@ result = urlopen(request) status = result.getcode() reason = result.msg + if self.show_response: + msg = '\n'.join(('-' * 75, r.read(), '-' * 75)) + self.announce(msg, log.INFO) except socket.error, e: self.announce(str(e), log.ERROR) return @@ -189,6 +192,3 @@ else: self.announce('Upload failed (%s): %s' % (status, reason), log.ERROR) - if self.show_response: - msg = '\n'.join(('-' * 75, r.read(), '-' * 75)) - self.announce(msg, log.INFO) diff --git a/lib-python/2.7/distutils/sysconfig.py b/lib-python/2.7/distutils/sysconfig.py --- a/lib-python/2.7/distutils/sysconfig.py +++ b/lib-python/2.7/distutils/sysconfig.py @@ -389,7 +389,7 @@ cur_target = os.getenv('MACOSX_DEPLOYMENT_TARGET', '') if cur_target == '': cur_target = cfg_target - os.putenv('MACOSX_DEPLOYMENT_TARGET', cfg_target) + os.environ['MACOSX_DEPLOYMENT_TARGET'] = cfg_target elif map(int, cfg_target.split('.')) > map(int, cur_target.split('.')): my_msg = ('$MACOSX_DEPLOYMENT_TARGET mismatch: now "%s" but "%s" during configure' % (cur_target, cfg_target)) diff --git a/lib-python/2.7/distutils/tests/__init__.py b/lib-python/2.7/distutils/tests/__init__.py --- a/lib-python/2.7/distutils/tests/__init__.py +++ b/lib-python/2.7/distutils/tests/__init__.py @@ -15,9 +15,10 @@ import os import sys import unittest +from test.test_support import run_unittest -here = os.path.dirname(__file__) +here = os.path.dirname(__file__) or os.curdir def test_suite(): @@ -32,4 +33,4 @@ if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + 
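The sdist change above wraps manifest-template reading in try/finally so the template file is closed even when a line raises DistutilsTemplateError. The same pattern in isolation (function and callback names are invented for this sketch):

    from distutils.errors import DistutilsTemplateError

    def read_template_lines(path, handle_line):
        # close the template file even if one of its lines raises,
        # mirroring the try/finally added to sdist.read_template above
        template = open(path)
        try:
            for line in template:
                line = line.strip()
                if not line or line.startswith('#'):
                    continue
                try:
                    handle_line(line)
                except DistutilsTemplateError as msg:
                    print('skipping bad template line: %s' % msg)
        finally:
            template.close()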
run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_archive_util.py b/lib-python/2.7/distutils/tests/test_archive_util.py --- a/lib-python/2.7/distutils/tests/test_archive_util.py +++ b/lib-python/2.7/distutils/tests/test_archive_util.py @@ -12,7 +12,7 @@ ARCHIVE_FORMATS) from distutils.spawn import find_executable, spawn from distutils.tests import support -from test.test_support import check_warnings +from test.test_support import check_warnings, run_unittest try: import grp @@ -281,4 +281,4 @@ return unittest.makeSuite(ArchiveUtilTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_bdist_msi.py b/lib-python/2.7/distutils/tests/test_bdist_msi.py --- a/lib-python/2.7/distutils/tests/test_bdist_msi.py +++ b/lib-python/2.7/distutils/tests/test_bdist_msi.py @@ -11,7 +11,7 @@ support.LoggingSilencer, unittest.TestCase): - def test_minial(self): + def test_minimal(self): # minimal test XXX need more tests from distutils.command.bdist_msi import bdist_msi pkg_pth, dist = self.create_dist() diff --git a/lib-python/2.7/distutils/tests/test_build.py b/lib-python/2.7/distutils/tests/test_build.py --- a/lib-python/2.7/distutils/tests/test_build.py +++ b/lib-python/2.7/distutils/tests/test_build.py @@ -2,6 +2,7 @@ import unittest import os import sys +from test.test_support import run_unittest from distutils.command.build import build from distutils.tests import support @@ -51,4 +52,4 @@ return unittest.makeSuite(BuildTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_build_clib.py b/lib-python/2.7/distutils/tests/test_build_clib.py --- a/lib-python/2.7/distutils/tests/test_build_clib.py +++ b/lib-python/2.7/distutils/tests/test_build_clib.py @@ -3,6 +3,8 @@ import os import sys +from test.test_support import run_unittest + from distutils.command.build_clib import build_clib from distutils.errors import DistutilsSetupError from distutils.tests import support @@ -140,4 +142,4 @@ return unittest.makeSuite(BuildCLibTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_build_ext.py b/lib-python/2.7/distutils/tests/test_build_ext.py --- a/lib-python/2.7/distutils/tests/test_build_ext.py +++ b/lib-python/2.7/distutils/tests/test_build_ext.py @@ -3,12 +3,13 @@ import tempfile import shutil from StringIO import StringIO +import textwrap from distutils.core import Extension, Distribution from distutils.command.build_ext import build_ext from distutils import sysconfig from distutils.tests import support -from distutils.errors import DistutilsSetupError +from distutils.errors import DistutilsSetupError, CompileError import unittest from test import test_support @@ -430,6 +431,59 @@ wanted = os.path.join(cmd.build_lib, 'UpdateManager', 'fdsend' + ext) self.assertEqual(ext_path, wanted) + @unittest.skipUnless(sys.platform == 'darwin', 'test only relevant for MacOSX') + def test_deployment_target(self): + self._try_compile_deployment_target() + + orig_environ = os.environ + os.environ = orig_environ.copy() + self.addCleanup(setattr, os, 'environ', orig_environ) + + os.environ['MACOSX_DEPLOYMENT_TARGET']='10.1' + self._try_compile_deployment_target() + + + def _try_compile_deployment_target(self): + deptarget_c = os.path.join(self.tmp_dir, 'deptargetmodule.c') + + with 
open(deptarget_c, 'w') as fp: + fp.write(textwrap.dedent('''\ + #include + + int dummy; + + #if TARGET != MAC_OS_X_VERSION_MIN_REQUIRED + #error "Unexpected target" + #endif + + ''')) + + target = sysconfig.get_config_var('MACOSX_DEPLOYMENT_TARGET') + target = tuple(map(int, target.split('.'))) + target = '%02d%01d0' % target + + deptarget_ext = Extension( + 'deptarget', + [deptarget_c], + extra_compile_args=['-DTARGET=%s'%(target,)], + ) + dist = Distribution({ + 'name': 'deptarget', + 'ext_modules': [deptarget_ext] + }) + dist.package_dir = self.tmp_dir + cmd = build_ext(dist) + cmd.build_lib = self.tmp_dir + cmd.build_temp = self.tmp_dir + + try: + old_stdout = sys.stdout + cmd.ensure_finalized() + cmd.run() + + except CompileError: + self.fail("Wrong deployment target during compilation") + def test_suite(): return unittest.makeSuite(BuildExtTestCase) diff --git a/lib-python/2.7/distutils/tests/test_build_py.py b/lib-python/2.7/distutils/tests/test_build_py.py --- a/lib-python/2.7/distutils/tests/test_build_py.py +++ b/lib-python/2.7/distutils/tests/test_build_py.py @@ -10,13 +10,14 @@ from distutils.errors import DistutilsFileError from distutils.tests import support +from test.test_support import run_unittest class BuildPyTestCase(support.TempdirManager, support.LoggingSilencer, unittest.TestCase): - def _setup_package_data(self): + def test_package_data(self): sources = self.mkdtemp() f = open(os.path.join(sources, "__init__.py"), "w") try: @@ -56,20 +57,15 @@ self.assertEqual(len(cmd.get_outputs()), 3) pkgdest = os.path.join(destination, "pkg") files = os.listdir(pkgdest) - return files + self.assertIn("__init__.py", files) + self.assertIn("README.txt", files) + # XXX even with -O, distutils writes pyc, not pyo; bug? + if sys.dont_write_bytecode: + self.assertNotIn("__init__.pyc", files) + else: + self.assertIn("__init__.pyc", files) - def test_package_data(self): - files = self._setup_package_data() - self.assertTrue("__init__.py" in files) - self.assertTrue("README.txt" in files) - - @unittest.skipIf(sys.flags.optimize >= 2, - "pyc files are not written with -O2 and above") - def test_package_data_pyc(self): - files = self._setup_package_data() - self.assertTrue("__init__.pyc" in files) - - def test_empty_package_dir (self): + def test_empty_package_dir(self): # See SF 1668596/1720897. 
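The distutils test modules touched in this part of the merge are all switched from unittest.main(defaultTest="test_suite") to test.test_support.run_unittest(test_suite()). A minimal module following that convention (the test case itself is made up):

    import unittest
    from test.test_support import run_unittest

    class ExampleTestCase(unittest.TestCase):
        # made-up test, only to show the runner convention used above
        def test_trivial(self):
            self.assertEqual(1 + 1, 2)

    def test_suite():
        return unittest.makeSuite(ExampleTestCase)

    if __name__ == "__main__":
        run_unittest(test_suite())   # regrtest-friendly runner, as in the distutils tests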
cwd = os.getcwd() @@ -117,10 +113,10 @@ finally: sys.dont_write_bytecode = old_dont_write_bytecode - self.assertTrue('byte-compiling is disabled' in self.logs[0][1]) + self.assertIn('byte-compiling is disabled', self.logs[0][1]) def test_suite(): return unittest.makeSuite(BuildPyTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_build_scripts.py b/lib-python/2.7/distutils/tests/test_build_scripts.py --- a/lib-python/2.7/distutils/tests/test_build_scripts.py +++ b/lib-python/2.7/distutils/tests/test_build_scripts.py @@ -8,6 +8,7 @@ import sysconfig from distutils.tests import support +from test.test_support import run_unittest class BuildScriptsTestCase(support.TempdirManager, @@ -108,4 +109,4 @@ return unittest.makeSuite(BuildScriptsTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_check.py b/lib-python/2.7/distutils/tests/test_check.py --- a/lib-python/2.7/distutils/tests/test_check.py +++ b/lib-python/2.7/distutils/tests/test_check.py @@ -1,5 +1,6 @@ """Tests for distutils.command.check.""" import unittest +from test.test_support import run_unittest from distutils.command.check import check, HAS_DOCUTILS from distutils.tests import support @@ -95,4 +96,4 @@ return unittest.makeSuite(CheckTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_clean.py b/lib-python/2.7/distutils/tests/test_clean.py --- a/lib-python/2.7/distutils/tests/test_clean.py +++ b/lib-python/2.7/distutils/tests/test_clean.py @@ -6,6 +6,7 @@ from distutils.command.clean import clean from distutils.tests import support +from test.test_support import run_unittest class cleanTestCase(support.TempdirManager, support.LoggingSilencer, @@ -38,7 +39,7 @@ self.assertTrue(not os.path.exists(path), '%s was not removed' % path) - # let's run the command again (should spit warnings but suceed) + # let's run the command again (should spit warnings but succeed) cmd.all = 1 cmd.ensure_finalized() cmd.run() @@ -47,4 +48,4 @@ return unittest.makeSuite(cleanTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_cmd.py b/lib-python/2.7/distutils/tests/test_cmd.py --- a/lib-python/2.7/distutils/tests/test_cmd.py +++ b/lib-python/2.7/distutils/tests/test_cmd.py @@ -99,7 +99,7 @@ def test_ensure_dirname(self): cmd = self.cmd - cmd.option1 = os.path.dirname(__file__) + cmd.option1 = os.path.dirname(__file__) or os.curdir cmd.ensure_dirname('option1') cmd.option2 = 'xxx' self.assertRaises(DistutilsOptionError, cmd.ensure_dirname, 'option2') diff --git a/lib-python/2.7/distutils/tests/test_config.py b/lib-python/2.7/distutils/tests/test_config.py --- a/lib-python/2.7/distutils/tests/test_config.py +++ b/lib-python/2.7/distutils/tests/test_config.py @@ -11,6 +11,7 @@ from distutils.log import WARN from distutils.tests import support +from test.test_support import run_unittest PYPIRC = """\ [distutils] @@ -119,4 +120,4 @@ return unittest.makeSuite(PyPIRCCommandTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_config_cmd.py b/lib-python/2.7/distutils/tests/test_config_cmd.py --- a/lib-python/2.7/distutils/tests/test_config_cmd.py 
+++ b/lib-python/2.7/distutils/tests/test_config_cmd.py @@ -2,6 +2,7 @@ import unittest import os import sys +from test.test_support import run_unittest from distutils.command.config import dump_file, config from distutils.tests import support @@ -86,4 +87,4 @@ return unittest.makeSuite(ConfigTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_core.py b/lib-python/2.7/distutils/tests/test_core.py --- a/lib-python/2.7/distutils/tests/test_core.py +++ b/lib-python/2.7/distutils/tests/test_core.py @@ -6,7 +6,7 @@ import shutil import sys import test.test_support -from test.test_support import captured_stdout +from test.test_support import captured_stdout, run_unittest import unittest from distutils.tests import support @@ -105,4 +105,4 @@ return unittest.makeSuite(CoreTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_dep_util.py b/lib-python/2.7/distutils/tests/test_dep_util.py --- a/lib-python/2.7/distutils/tests/test_dep_util.py +++ b/lib-python/2.7/distutils/tests/test_dep_util.py @@ -6,6 +6,7 @@ from distutils.dep_util import newer, newer_pairwise, newer_group from distutils.errors import DistutilsFileError from distutils.tests import support +from test.test_support import run_unittest class DepUtilTestCase(support.TempdirManager, unittest.TestCase): @@ -77,4 +78,4 @@ return unittest.makeSuite(DepUtilTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_dir_util.py b/lib-python/2.7/distutils/tests/test_dir_util.py --- a/lib-python/2.7/distutils/tests/test_dir_util.py +++ b/lib-python/2.7/distutils/tests/test_dir_util.py @@ -10,6 +10,7 @@ from distutils import log from distutils.tests import support +from test.test_support import run_unittest class DirUtilTestCase(support.TempdirManager, unittest.TestCase): @@ -112,4 +113,4 @@ return unittest.makeSuite(DirUtilTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_dist.py b/lib-python/2.7/distutils/tests/test_dist.py --- a/lib-python/2.7/distutils/tests/test_dist.py +++ b/lib-python/2.7/distutils/tests/test_dist.py @@ -11,7 +11,7 @@ from distutils.dist import Distribution, fix_help_options, DistributionMetadata from distutils.cmd import Command import distutils.dist -from test.test_support import TESTFN, captured_stdout +from test.test_support import TESTFN, captured_stdout, run_unittest from distutils.tests import support class test_dist(Command): @@ -433,4 +433,4 @@ return suite if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_file_util.py b/lib-python/2.7/distutils/tests/test_file_util.py --- a/lib-python/2.7/distutils/tests/test_file_util.py +++ b/lib-python/2.7/distutils/tests/test_file_util.py @@ -6,6 +6,7 @@ from distutils.file_util import move_file, write_file, copy_file from distutils import log from distutils.tests import support +from test.test_support import run_unittest class FileUtilTestCase(support.TempdirManager, unittest.TestCase): @@ -77,4 +78,4 @@ return unittest.makeSuite(FileUtilTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git 
a/lib-python/2.7/distutils/tests/test_filelist.py b/lib-python/2.7/distutils/tests/test_filelist.py --- a/lib-python/2.7/distutils/tests/test_filelist.py +++ b/lib-python/2.7/distutils/tests/test_filelist.py @@ -1,7 +1,7 @@ """Tests for distutils.filelist.""" from os.path import join import unittest -from test.test_support import captured_stdout +from test.test_support import captured_stdout, run_unittest from distutils.filelist import glob_to_re, FileList from distutils import debug @@ -82,4 +82,4 @@ return unittest.makeSuite(FileListTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_install.py b/lib-python/2.7/distutils/tests/test_install.py --- a/lib-python/2.7/distutils/tests/test_install.py +++ b/lib-python/2.7/distutils/tests/test_install.py @@ -3,6 +3,8 @@ import os import unittest +from test.test_support import run_unittest + from distutils.command.install import install from distutils.core import Distribution @@ -52,4 +54,4 @@ return unittest.makeSuite(InstallTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_install_data.py b/lib-python/2.7/distutils/tests/test_install_data.py --- a/lib-python/2.7/distutils/tests/test_install_data.py +++ b/lib-python/2.7/distutils/tests/test_install_data.py @@ -6,6 +6,7 @@ from distutils.command.install_data import install_data from distutils.tests import support +from test.test_support import run_unittest class InstallDataTestCase(support.TempdirManager, support.LoggingSilencer, @@ -73,4 +74,4 @@ return unittest.makeSuite(InstallDataTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_install_headers.py b/lib-python/2.7/distutils/tests/test_install_headers.py --- a/lib-python/2.7/distutils/tests/test_install_headers.py +++ b/lib-python/2.7/distutils/tests/test_install_headers.py @@ -6,6 +6,7 @@ from distutils.command.install_headers import install_headers from distutils.tests import support +from test.test_support import run_unittest class InstallHeadersTestCase(support.TempdirManager, support.LoggingSilencer, @@ -37,4 +38,4 @@ return unittest.makeSuite(InstallHeadersTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_install_lib.py b/lib-python/2.7/distutils/tests/test_install_lib.py --- a/lib-python/2.7/distutils/tests/test_install_lib.py +++ b/lib-python/2.7/distutils/tests/test_install_lib.py @@ -7,6 +7,7 @@ from distutils.extension import Extension from distutils.tests import support from distutils.errors import DistutilsOptionError +from test.test_support import run_unittest class InstallLibTestCase(support.TempdirManager, support.LoggingSilencer, @@ -103,4 +104,4 @@ return unittest.makeSuite(InstallLibTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_install_scripts.py b/lib-python/2.7/distutils/tests/test_install_scripts.py --- a/lib-python/2.7/distutils/tests/test_install_scripts.py +++ b/lib-python/2.7/distutils/tests/test_install_scripts.py @@ -7,6 +7,7 @@ from distutils.core import Distribution from distutils.tests import support +from test.test_support import run_unittest class 
InstallScriptsTestCase(support.TempdirManager, @@ -78,4 +79,4 @@ return unittest.makeSuite(InstallScriptsTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_msvc9compiler.py b/lib-python/2.7/distutils/tests/test_msvc9compiler.py --- a/lib-python/2.7/distutils/tests/test_msvc9compiler.py +++ b/lib-python/2.7/distutils/tests/test_msvc9compiler.py @@ -5,6 +5,7 @@ from distutils.errors import DistutilsPlatformError from distutils.tests import support +from test.test_support import run_unittest _MANIFEST = """\ @@ -137,4 +138,4 @@ return unittest.makeSuite(msvc9compilerTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_register.py b/lib-python/2.7/distutils/tests/test_register.py --- a/lib-python/2.7/distutils/tests/test_register.py +++ b/lib-python/2.7/distutils/tests/test_register.py @@ -7,7 +7,7 @@ import urllib2 import warnings -from test.test_support import check_warnings +from test.test_support import check_warnings, run_unittest from distutils.command import register as register_module from distutils.command.register import register @@ -138,7 +138,7 @@ # let's see what the server received : we should # have 2 similar requests - self.assertTrue(self.conn.reqs, 2) + self.assertEqual(len(self.conn.reqs), 2) req1 = dict(self.conn.reqs[0].headers) req2 = dict(self.conn.reqs[1].headers) self.assertEqual(req2['Content-length'], req1['Content-length']) @@ -168,7 +168,7 @@ del register_module.raw_input # we should have send a request - self.assertTrue(self.conn.reqs, 1) + self.assertEqual(len(self.conn.reqs), 1) req = self.conn.reqs[0] headers = dict(req.headers) self.assertEqual(headers['Content-length'], '608') @@ -186,7 +186,7 @@ del register_module.raw_input # we should have send a request - self.assertTrue(self.conn.reqs, 1) + self.assertEqual(len(self.conn.reqs), 1) req = self.conn.reqs[0] headers = dict(req.headers) self.assertEqual(headers['Content-length'], '290') @@ -258,4 +258,4 @@ return unittest.makeSuite(RegisterTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_sdist.py b/lib-python/2.7/distutils/tests/test_sdist.py --- a/lib-python/2.7/distutils/tests/test_sdist.py +++ b/lib-python/2.7/distutils/tests/test_sdist.py @@ -24,11 +24,9 @@ import tempfile import warnings -from test.test_support import check_warnings -from test.test_support import captured_stdout +from test.test_support import captured_stdout, check_warnings, run_unittest -from distutils.command.sdist import sdist -from distutils.command.sdist import show_formats +from distutils.command.sdist import sdist, show_formats from distutils.core import Distribution from distutils.tests.test_config import PyPIRCCommandTestCase from distutils.errors import DistutilsExecError, DistutilsOptionError @@ -372,7 +370,7 @@ # adding a file self.write_file((self.tmp_dir, 'somecode', 'doc2.txt'), '#') - # make sure build_py is reinitinialized, like a fresh run + # make sure build_py is reinitialized, like a fresh run build_py = dist.get_command_obj('build_py') build_py.finalized = False build_py.ensure_finalized() @@ -390,6 +388,7 @@ self.assertEqual(len(manifest2), 6) self.assertIn('doc2.txt', manifest2[-1]) + @unittest.skipUnless(zlib, "requires zlib") def test_manifest_marker(self): # check that autogenerated MANIFESTs have a 
marker dist, cmd = self.get_cmd() @@ -406,6 +405,7 @@ self.assertEqual(manifest[0], '# file GENERATED by distutils, do NOT edit') + @unittest.skipUnless(zlib, "requires zlib") def test_manual_manifest(self): # check that a MANIFEST without a marker is left alone dist, cmd = self.get_cmd() @@ -426,4 +426,4 @@ return unittest.makeSuite(SDistTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_spawn.py b/lib-python/2.7/distutils/tests/test_spawn.py --- a/lib-python/2.7/distutils/tests/test_spawn.py +++ b/lib-python/2.7/distutils/tests/test_spawn.py @@ -2,7 +2,7 @@ import unittest import os import time -from test.test_support import captured_stdout +from test.test_support import captured_stdout, run_unittest from distutils.spawn import _nt_quote_args from distutils.spawn import spawn, find_executable @@ -57,4 +57,4 @@ return unittest.makeSuite(SpawnTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_text_file.py b/lib-python/2.7/distutils/tests/test_text_file.py --- a/lib-python/2.7/distutils/tests/test_text_file.py +++ b/lib-python/2.7/distutils/tests/test_text_file.py @@ -3,6 +3,7 @@ import unittest from distutils.text_file import TextFile from distutils.tests import support +from test.test_support import run_unittest TEST_DATA = """# test file @@ -103,4 +104,4 @@ return unittest.makeSuite(TextFileTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_unixccompiler.py b/lib-python/2.7/distutils/tests/test_unixccompiler.py --- a/lib-python/2.7/distutils/tests/test_unixccompiler.py +++ b/lib-python/2.7/distutils/tests/test_unixccompiler.py @@ -1,6 +1,7 @@ """Tests for distutils.unixccompiler.""" import sys import unittest +from test.test_support import run_unittest from distutils import sysconfig from distutils.unixccompiler import UnixCCompiler @@ -126,4 +127,4 @@ return unittest.makeSuite(UnixCCompilerTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_upload.py b/lib-python/2.7/distutils/tests/test_upload.py --- a/lib-python/2.7/distutils/tests/test_upload.py +++ b/lib-python/2.7/distutils/tests/test_upload.py @@ -1,14 +1,13 @@ +# -*- encoding: utf8 -*- """Tests for distutils.command.upload.""" -# -*- encoding: utf8 -*- -import sys import os import unittest +from test.test_support import run_unittest from distutils.command import upload as upload_mod from distutils.command.upload import upload from distutils.core import Distribution -from distutils.tests import support from distutils.tests.test_config import PYPIRC, PyPIRCCommandTestCase PYPIRC_LONG_PASSWORD = """\ @@ -129,4 +128,4 @@ return unittest.makeSuite(uploadTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_util.py b/lib-python/2.7/distutils/tests/test_util.py --- a/lib-python/2.7/distutils/tests/test_util.py +++ b/lib-python/2.7/distutils/tests/test_util.py @@ -1,6 +1,7 @@ """Tests for distutils.util.""" import sys import unittest +from test.test_support import run_unittest from distutils.errors import DistutilsPlatformError, DistutilsByteCompileError from distutils.util import byte_compile @@ -21,4 +22,4 @@ return 
unittest.makeSuite(UtilTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_version.py b/lib-python/2.7/distutils/tests/test_version.py --- a/lib-python/2.7/distutils/tests/test_version.py +++ b/lib-python/2.7/distutils/tests/test_version.py @@ -2,6 +2,7 @@ import unittest from distutils.version import LooseVersion from distutils.version import StrictVersion +from test.test_support import run_unittest class VersionTestCase(unittest.TestCase): @@ -67,4 +68,4 @@ return unittest.makeSuite(VersionTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_versionpredicate.py b/lib-python/2.7/distutils/tests/test_versionpredicate.py --- a/lib-python/2.7/distutils/tests/test_versionpredicate.py +++ b/lib-python/2.7/distutils/tests/test_versionpredicate.py @@ -4,6 +4,10 @@ import distutils.versionpredicate import doctest +from test.test_support import run_unittest def test_suite(): return doctest.DocTestSuite(distutils.versionpredicate) + +if __name__ == '__main__': + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/util.py b/lib-python/2.7/distutils/util.py --- a/lib-python/2.7/distutils/util.py +++ b/lib-python/2.7/distutils/util.py @@ -97,9 +97,7 @@ from distutils.sysconfig import get_config_vars cfgvars = get_config_vars() - macver = os.environ.get('MACOSX_DEPLOYMENT_TARGET') - if not macver: - macver = cfgvars.get('MACOSX_DEPLOYMENT_TARGET') + macver = cfgvars.get('MACOSX_DEPLOYMENT_TARGET') if 1: # Always calculate the release of the running machine, diff --git a/lib-python/2.7/doctest.py b/lib-python/2.7/doctest.py --- a/lib-python/2.7/doctest.py +++ b/lib-python/2.7/doctest.py @@ -1217,7 +1217,7 @@ # Process each example. for examplenum, example in enumerate(test.examples): - # If REPORT_ONLY_FIRST_FAILURE is set, then supress + # If REPORT_ONLY_FIRST_FAILURE is set, then suppress # reporting after the first failure. quiet = (self.optionflags & REPORT_ONLY_FIRST_FAILURE and failures > 0) @@ -2186,7 +2186,7 @@ caller can catch the errors and initiate post-mortem debugging. The DocTestCase provides a debug method that raises - UnexpectedException errors if there is an unexepcted + UnexpectedException errors if there is an unexpected exception: >>> test = DocTestParser().get_doctest('>>> raise KeyError\n42', diff --git a/lib-python/2.7/email/charset.py b/lib-python/2.7/email/charset.py --- a/lib-python/2.7/email/charset.py +++ b/lib-python/2.7/email/charset.py @@ -209,7 +209,7 @@ input_charset = unicode(input_charset, 'ascii') except UnicodeError: raise errors.CharsetError(input_charset) - input_charset = input_charset.lower() + input_charset = input_charset.lower().encode('ascii') # Set the input charset after filtering through the aliases and/or codecs if not (input_charset in ALIASES or input_charset in CHARSETS): try: diff --git a/lib-python/2.7/email/generator.py b/lib-python/2.7/email/generator.py --- a/lib-python/2.7/email/generator.py +++ b/lib-python/2.7/email/generator.py @@ -202,18 +202,13 @@ g = self.clone(s) g.flatten(part, unixfrom=False) msgtexts.append(s.getvalue()) - # Now make sure the boundary we've selected doesn't appear in any of - # the message texts. - alltext = NL.join(msgtexts) # BAW: What about boundaries that are wrapped in double-quotes? 
- boundary = msg.get_boundary(failobj=_make_boundary(alltext)) - # If we had to calculate a new boundary because the body text - # contained that string, set the new boundary. We don't do it - # unconditionally because, while set_boundary() preserves order, it - # doesn't preserve newlines/continuations in headers. This is no big - # deal in practice, but turns out to be inconvenient for the unittest - # suite. - if msg.get_boundary() != boundary: + boundary = msg.get_boundary() + if not boundary: + # Create a boundary that doesn't appear in any of the + # message texts. + alltext = NL.join(msgtexts) + boundary = _make_boundary(alltext) msg.set_boundary(boundary) # If there's a preamble, write it out, with a trailing CRLF if msg.preamble is not None: @@ -292,7 +287,7 @@ _FMT = '[Non-text (%(type)s) part of message omitted, filename %(filename)s]' class DecodedGenerator(Generator): - """Generator a text representation of a message. + """Generates a text representation of a message. Like the Generator base class, except that non-text parts are substituted with a format string representing the part. diff --git a/lib-python/2.7/email/header.py b/lib-python/2.7/email/header.py --- a/lib-python/2.7/email/header.py +++ b/lib-python/2.7/email/header.py @@ -47,6 +47,10 @@ # For use with .match() fcre = re.compile(r'[\041-\176]+:$') +# Find a header embedded in a putative header value. Used to check for +# header injection attack. +_embeded_header = re.compile(r'\n[^ \t]+:') + # Helpers @@ -403,7 +407,11 @@ newchunks += self._split(s, charset, targetlen, splitchars) lastchunk, lastcharset = newchunks[-1] lastlen = lastcharset.encoded_header_len(lastchunk) - return self._encode_chunks(newchunks, maxlinelen) + value = self._encode_chunks(newchunks, maxlinelen) + if _embeded_header.search(value): + raise HeaderParseError("header value appears to contain " + "an embedded header: {!r}".format(value)) + return value diff --git a/lib-python/2.7/email/message.py b/lib-python/2.7/email/message.py --- a/lib-python/2.7/email/message.py +++ b/lib-python/2.7/email/message.py @@ -38,7 +38,9 @@ def _formatparam(param, value=None, quote=True): """Convenience function to format and return a key=value pair. - This will quote the value if needed or if quote is true. + This will quote the value if needed or if quote is true. If value is a + three tuple (charset, language, value), it will be encoded according + to RFC2231 rules. """ if value is not None and len(value) > 0: # A tuple is used for RFC 2231 encoded parameter values where items @@ -97,7 +99,7 @@ objects, otherwise it is a string. Message objects implement part of the `mapping' interface, which assumes - there is exactly one occurrance of the header per message. Some headers + there is exactly one occurrence of the header per message. Some headers do in fact appear multiple times (e.g. Received) and for those headers, you must use the explicit API to set or get all the headers. Not all of the mapping methods are implemented. @@ -286,7 +288,7 @@ Return None if the header is missing instead of raising an exception. Note that if the header appeared multiple times, exactly which - occurrance gets returned is undefined. Use get_all() to get all + occurrence gets returned is undefined. Use get_all() to get all the values matching a header field name. """ return self.get(name) @@ -389,7 +391,10 @@ name is the header field to add. keyword arguments can be used to set additional parameters for the header field, with underscores converted to dashes. 
Normally the parameter will be added as key="value" unless - value is None, in which case only the key will be added. + value is None, in which case only the key will be added. If a + parameter value contains non-ASCII characters it must be specified as a + three-tuple of (charset, language, value), in which case it will be + encoded according to RFC2231 rules. Example: diff --git a/lib-python/2.7/email/mime/application.py b/lib-python/2.7/email/mime/application.py --- a/lib-python/2.7/email/mime/application.py +++ b/lib-python/2.7/email/mime/application.py @@ -17,7 +17,7 @@ _encoder=encoders.encode_base64, **_params): """Create an application/* type MIME document. - _data is a string containing the raw applicatoin data. + _data is a string containing the raw application data. _subtype is the MIME content type subtype, defaulting to 'octet-stream'. diff --git a/lib-python/2.7/email/test/data/msg_26.txt b/lib-python/2.7/email/test/data/msg_26.txt --- a/lib-python/2.7/email/test/data/msg_26.txt +++ b/lib-python/2.7/email/test/data/msg_26.txt @@ -42,4 +42,4 @@ MzMAAAAACH97tzAAAAALu3c3gAAAAAAL+7tzDABAu7f7cAAAAAAACA+3MA7EQAv/sIAA AAAAAAAIAAAAAAAAAIAAAAAA ---1618492860--2051301190--113853680-- +--1618492860--2051301190--113853680-- \ No newline at end of file diff --git a/lib-python/2.7/email/test/test_email.py b/lib-python/2.7/email/test/test_email.py --- a/lib-python/2.7/email/test/test_email.py +++ b/lib-python/2.7/email/test/test_email.py @@ -179,6 +179,17 @@ self.assertRaises(Errors.HeaderParseError, msg.set_boundary, 'BOUNDARY') + def test_make_boundary(self): + msg = MIMEMultipart('form-data') + # Note that when the boundary gets created is an implementation + # detail and might change. + self.assertEqual(msg.items()[0][1], 'multipart/form-data') + # Trigger creation of boundary + msg.as_string() + self.assertEqual(msg.items()[0][1][:33], + 'multipart/form-data; boundary="==') + # XXX: there ought to be tests of the uniqueness of the boundary, too. + def test_message_rfc822_only(self): # Issue 7970: message/rfc822 not in multipart parsed by # HeaderParser caused an exception when flattened. @@ -542,6 +553,17 @@ msg.set_charset(u'us-ascii') self.assertEqual('us-ascii', msg.get_content_charset()) + # Issue 5871: reject an attempt to embed a header inside a header value + # (header injection attack). 
+ def test_embeded_header_via_Header_rejected(self): + msg = Message() + msg['Dummy'] = Header('dummy\nX-Injected-Header: test') + self.assertRaises(Errors.HeaderParseError, msg.as_string) + + def test_embeded_header_via_string_rejected(self): + msg = Message() + msg['Dummy'] = 'dummy\nX-Injected-Header: test' + self.assertRaises(Errors.HeaderParseError, msg.as_string) # Test the email.Encoders module @@ -3113,6 +3135,28 @@ s = 'Subject: =?EUC-KR?B?CSixpLDtKSC/7Liuvsax4iC6uLmwMcijIKHaILzSwd/H0SC8+LCjwLsgv7W/+Mj3I ?=' raises(Errors.HeaderParseError, decode_header, s) + # Issue 1078919 + def test_ascii_add_header(self): + msg = Message() + msg.add_header('Content-Disposition', 'attachment', + filename='bud.gif') + self.assertEqual('attachment; filename="bud.gif"', + msg['Content-Disposition']) + + def test_nonascii_add_header_via_triple(self): + msg = Message() + msg.add_header('Content-Disposition', 'attachment', + filename=('iso-8859-1', '', 'Fu\xdfballer.ppt')) + self.assertEqual( + 'attachment; filename*="iso-8859-1\'\'Fu%DFballer.ppt"', + msg['Content-Disposition']) + + def test_encode_unaliased_charset(self): + # Issue 1379416: when the charset has no output conversion, + # output was accidentally getting coerced to unicode. + res = Header('abc','iso-8859-2').encode() + self.assertEqual(res, '=?iso-8859-2?q?abc?=') + self.assertIsInstance(res, str) # Test RFC 2231 header parameters (en/de)coding diff --git a/lib-python/2.7/ftplib.py b/lib-python/2.7/ftplib.py --- a/lib-python/2.7/ftplib.py +++ b/lib-python/2.7/ftplib.py @@ -599,7 +599,7 @@ Usage example: >>> from ftplib import FTP_TLS >>> ftps = FTP_TLS('ftp.python.org') - >>> ftps.login() # login anonimously previously securing control channel + >>> ftps.login() # login anonymously previously securing control channel '230 Guest login ok, access restrictions apply.' 
>>> ftps.prot_p() # switch to secure data connection '200 Protection level set to P' diff --git a/lib-python/2.7/functools.py b/lib-python/2.7/functools.py --- a/lib-python/2.7/functools.py +++ b/lib-python/2.7/functools.py @@ -53,17 +53,17 @@ def total_ordering(cls): """Class decorator that fills in missing ordering methods""" convert = { - '__lt__': [('__gt__', lambda self, other: other < self), - ('__le__', lambda self, other: not other < self), + '__lt__': [('__gt__', lambda self, other: not (self < other or self == other)), + ('__le__', lambda self, other: self < other or self == other), ('__ge__', lambda self, other: not self < other)], - '__le__': [('__ge__', lambda self, other: other <= self), - ('__lt__', lambda self, other: not other <= self), + '__le__': [('__ge__', lambda self, other: not self <= other or self == other), + ('__lt__', lambda self, other: self <= other and not self == other), ('__gt__', lambda self, other: not self <= other)], - '__gt__': [('__lt__', lambda self, other: other > self), - ('__ge__', lambda self, other: not other > self), + '__gt__': [('__lt__', lambda self, other: not (self > other or self == other)), + ('__ge__', lambda self, other: self > other or self == other), ('__le__', lambda self, other: not self > other)], - '__ge__': [('__le__', lambda self, other: other >= self), - ('__gt__', lambda self, other: not other >= self), + '__ge__': [('__le__', lambda self, other: (not self >= other) or self == other), + ('__gt__', lambda self, other: self >= other and not self == other), ('__lt__', lambda self, other: not self >= other)] } roots = set(dir(cls)) & set(convert) @@ -80,6 +80,7 @@ def cmp_to_key(mycmp): """Convert a cmp= function into a key= function""" class K(object): + __slots__ = ['obj'] def __init__(self, obj, *args): self.obj = obj def __lt__(self, other): diff --git a/lib-python/2.7/getpass.py b/lib-python/2.7/getpass.py --- a/lib-python/2.7/getpass.py +++ b/lib-python/2.7/getpass.py @@ -62,7 +62,7 @@ try: old = termios.tcgetattr(fd) # a copy to save new = old[:] - new[3] &= ~(termios.ECHO|termios.ISIG) # 3 == 'lflags' + new[3] &= ~termios.ECHO # 3 == 'lflags' tcsetattr_flags = termios.TCSAFLUSH if hasattr(termios, 'TCSASOFT'): tcsetattr_flags |= termios.TCSASOFT diff --git a/lib-python/2.7/gettext.py b/lib-python/2.7/gettext.py --- a/lib-python/2.7/gettext.py +++ b/lib-python/2.7/gettext.py @@ -316,7 +316,7 @@ # Note: we unconditionally convert both msgids and msgstrs to # Unicode using the character encoding specified in the charset # parameter of the Content-Type header. The gettext documentation - # strongly encourages msgids to be us-ascii, but some appliations + # strongly encourages msgids to be us-ascii, but some applications # require alternative encodings (e.g. Zope's ZCML and ZPT). 
For # traditional gettext applications, the msgid conversion will # cause no problems since us-ascii should always be a subset of diff --git a/lib-python/2.7/hashlib.py b/lib-python/2.7/hashlib.py --- a/lib-python/2.7/hashlib.py +++ b/lib-python/2.7/hashlib.py @@ -64,26 +64,29 @@ def __get_builtin_constructor(name): - if name in ('SHA1', 'sha1'): - import _sha - return _sha.new - elif name in ('MD5', 'md5'): - import _md5 - return _md5.new - elif name in ('SHA256', 'sha256', 'SHA224', 'sha224'): - import _sha256 - bs = name[3:] - if bs == '256': - return _sha256.sha256 - elif bs == '224': - return _sha256.sha224 - elif name in ('SHA512', 'sha512', 'SHA384', 'sha384'): - import _sha512 - bs = name[3:] - if bs == '512': - return _sha512.sha512 - elif bs == '384': - return _sha512.sha384 + try: + if name in ('SHA1', 'sha1'): + import _sha + return _sha.new + elif name in ('MD5', 'md5'): + import _md5 + return _md5.new + elif name in ('SHA256', 'sha256', 'SHA224', 'sha224'): + import _sha256 + bs = name[3:] + if bs == '256': + return _sha256.sha256 + elif bs == '224': + return _sha256.sha224 + elif name in ('SHA512', 'sha512', 'SHA384', 'sha384'): + import _sha512 + bs = name[3:] + if bs == '512': + return _sha512.sha512 + elif bs == '384': + return _sha512.sha384 + except ImportError: + pass # no extension module, this hash is unsupported. raise ValueError('unsupported hash type %s' % name) diff --git a/lib-python/2.7/heapq.py b/lib-python/2.7/heapq.py --- a/lib-python/2.7/heapq.py +++ b/lib-python/2.7/heapq.py @@ -133,6 +133,11 @@ from operator import itemgetter import bisect +def cmp_lt(x, y): + # Use __lt__ if available; otherwise, try __le__. + # In Py3.x, only __lt__ will be called. + return (x < y) if hasattr(x, '__lt__') else (not y <= x) + def heappush(heap, item): """Push item onto heap, maintaining the heap invariant.""" heap.append(item) @@ -167,13 +172,13 @@ def heappushpop(heap, item): """Fast version of a heappush followed by a heappop.""" - if heap and heap[0] < item: + if heap and cmp_lt(heap[0], item): item, heap[0] = heap[0], item _siftup(heap, 0) return item def heapify(x): - """Transform list into a heap, in-place, in O(len(heap)) time.""" + """Transform list into a heap, in-place, in O(len(x)) time.""" n = len(x) # Transform bottom-up. The largest index there's any point to looking at # is the largest with a child index in-range, so must have 2*i + 1 < n, @@ -215,11 +220,10 @@ pop = result.pop los = result[-1] # los --> Largest of the nsmallest for elem in it: - if los <= elem: - continue - insort(result, elem) - pop() - los = result[-1] + if cmp_lt(elem, los): + insort(result, elem) + pop() + los = result[-1] return result # An alternative approach manifests the whole iterable in memory but # saves comparisons by heapifying all at once. Also, saves time @@ -240,7 +244,7 @@ while pos > startpos: parentpos = (pos - 1) >> 1 parent = heap[parentpos] - if newitem < parent: + if cmp_lt(newitem, parent): heap[pos] = parent pos = parentpos continue @@ -295,7 +299,7 @@ while childpos < endpos: # Set childpos to index of smaller child. rightpos = childpos + 1 - if rightpos < endpos and not heap[childpos] < heap[rightpos]: + if rightpos < endpos and not cmp_lt(heap[childpos], heap[rightpos]): childpos = rightpos # Move the smaller child up. 
heap[pos] = heap[childpos] @@ -364,7 +368,7 @@ return [min(chain(head, it))] return [min(chain(head, it), key=key)] - # When n>=size, it's faster to use sort() + # When n>=size, it's faster to use sorted() try: size = len(iterable) except (TypeError, AttributeError): @@ -402,7 +406,7 @@ return [max(chain(head, it))] return [max(chain(head, it), key=key)] - # When n>=size, it's faster to use sort() + # When n>=size, it's faster to use sorted() try: size = len(iterable) except (TypeError, AttributeError): diff --git a/lib-python/2.7/httplib.py b/lib-python/2.7/httplib.py --- a/lib-python/2.7/httplib.py +++ b/lib-python/2.7/httplib.py @@ -212,6 +212,9 @@ # maximal amount of data to read at one time in _safe_read MAXAMOUNT = 1048576 +# maximal line length when calling readline(). +_MAXLINE = 65536 + class HTTPMessage(mimetools.Message): def addheader(self, key, value): @@ -274,7 +277,9 @@ except IOError: startofline = tell = None self.seekable = 0 - line = self.fp.readline() + line = self.fp.readline(_MAXLINE + 1) + if len(line) > _MAXLINE: + raise LineTooLong("header line") if not line: self.status = 'EOF in headers' break @@ -404,7 +409,10 @@ break # skip the header from the 100 response while True: - skip = self.fp.readline().strip() + skip = self.fp.readline(_MAXLINE + 1) + if len(skip) > _MAXLINE: + raise LineTooLong("header line") + skip = skip.strip() if not skip: break if self.debuglevel > 0: @@ -563,7 +571,9 @@ value = [] while True: if chunk_left is None: - line = self.fp.readline() + line = self.fp.readline(_MAXLINE + 1) + if len(line) > _MAXLINE: + raise LineTooLong("chunk size") i = line.find(';') if i >= 0: line = line[:i] # strip chunk-extensions @@ -598,7 +608,9 @@ # read and discard trailer up to the CRLF terminator ### note: we shouldn't have any trailers! while True: - line = self.fp.readline() + line = self.fp.readline(_MAXLINE + 1) + if len(line) > _MAXLINE: + raise LineTooLong("trailer line") if not line: # a vanishingly small number of sites EOF without # sending the trailer @@ -730,7 +742,9 @@ raise socket.error("Tunnel connection failed: %d %s" % (code, message.strip())) while True: - line = response.fp.readline() + line = response.fp.readline(_MAXLINE + 1) + if len(line) > _MAXLINE: + raise LineTooLong("header line") if line == '\r\n': break @@ -790,7 +804,7 @@ del self._buffer[:] # If msg and message_body are sent in a single send() call, # it will avoid performance problems caused by the interaction - # between delayed ack and the Nagle algorithim. + # between delayed ack and the Nagle algorithm. 
if isinstance(message_body, str): msg += message_body message_body = None @@ -1233,6 +1247,11 @@ self.args = line, self.line = line +class LineTooLong(HTTPException): + def __init__(self, line_type): + HTTPException.__init__(self, "got more than %d bytes when reading %s" + % (_MAXLINE, line_type)) + # for backwards compatibility error = HTTPException diff --git a/lib-python/2.7/idlelib/Bindings.py b/lib-python/2.7/idlelib/Bindings.py --- a/lib-python/2.7/idlelib/Bindings.py +++ b/lib-python/2.7/idlelib/Bindings.py @@ -98,14 +98,6 @@ # menu del menudefs[-1][1][0:2] - menudefs.insert(0, - ('application', [ - ('About IDLE', '<>'), - None, - ('_Preferences....', '<>'), - ])) - - default_keydefs = idleConf.GetCurrentKeySet() del sys diff --git a/lib-python/2.7/idlelib/EditorWindow.py b/lib-python/2.7/idlelib/EditorWindow.py --- a/lib-python/2.7/idlelib/EditorWindow.py +++ b/lib-python/2.7/idlelib/EditorWindow.py @@ -48,6 +48,21 @@ path = module.__path__ except AttributeError: raise ImportError, 'No source for module ' + module.__name__ + if descr[2] != imp.PY_SOURCE: + # If all of the above fails and didn't raise an exception,fallback + # to a straight import which can find __init__.py in a package. + m = __import__(fullname) + try: + filename = m.__file__ + except AttributeError: + pass + else: + file = None + base, ext = os.path.splitext(filename) + if ext == '.pyc': + ext = '.py' + filename = base + ext + descr = filename, None, imp.PY_SOURCE return file, filename, descr class EditorWindow(object): @@ -102,8 +117,8 @@ self.top = top = WindowList.ListedToplevel(root, menu=self.menubar) if flist: self.tkinter_vars = flist.vars - #self.top.instance_dict makes flist.inversedict avalable to - #configDialog.py so it can access all EditorWindow instaces + #self.top.instance_dict makes flist.inversedict available to + #configDialog.py so it can access all EditorWindow instances self.top.instance_dict = flist.inversedict else: self.tkinter_vars = {} # keys: Tkinter event names @@ -136,6 +151,14 @@ if macosxSupport.runningAsOSXApp(): # Command-W on editorwindows doesn't work without this. text.bind('<>', self.close_event) + # Some OS X systems have only one mouse button, + # so use control-click for pulldown menus there. + # (Note, AquaTk defines <2> as the right button if + # present and the Tk Text widget already binds <2>.) + text.bind("",self.right_menu_event) + else: + # Elsewhere, use right-click for pulldown menus. + text.bind("<3>",self.right_menu_event) text.bind("<>", self.cut) text.bind("<>", self.copy) text.bind("<>", self.paste) @@ -154,7 +177,6 @@ text.bind("<>", self.find_selection_event) text.bind("<>", self.replace_event) text.bind("<>", self.goto_line_event) - text.bind("<3>", self.right_menu_event) text.bind("<>",self.smart_backspace_event) text.bind("<>",self.newline_and_indent_event) text.bind("<>",self.smart_indent_event) @@ -300,13 +322,13 @@ return "break" def home_callback(self, event): - if (event.state & 12) != 0 and event.keysym == "Home": - # state&1==shift, state&4==control, state&8==alt - return # ; fall back to class binding - + if (event.state & 4) != 0 and event.keysym == "Home": + # state&4==Control. If , use the Tk binding. 
+ return if self.text.index("iomark") and \ self.text.compare("iomark", "<=", "insert lineend") and \ self.text.compare("insert linestart", "<=", "iomark"): + # In Shell on input line, go to just after prompt insertpt = int(self.text.index("iomark").split(".")[1]) else: line = self.text.get("insert linestart", "insert lineend") @@ -315,30 +337,27 @@ break else: insertpt=len(line) - lineat = int(self.text.index("insert").split('.')[1]) - if insertpt == lineat: insertpt = 0 - dest = "insert linestart+"+str(insertpt)+"c" - if (event.state&1) == 0: - # shift not pressed + # shift was not pressed self.text.tag_remove("sel", "1.0", "end") else: if not self.text.index("sel.first"): - self.text.mark_set("anchor","insert") - + self.text.mark_set("my_anchor", "insert") # there was no previous selection + else: + if self.text.compare(self.text.index("sel.first"), "<", self.text.index("insert")): + self.text.mark_set("my_anchor", "sel.first") # extend back + else: + self.text.mark_set("my_anchor", "sel.last") # extend forward first = self.text.index(dest) - last = self.text.index("anchor") - + last = self.text.index("my_anchor") if self.text.compare(first,">",last): first,last = last,first - self.text.tag_remove("sel", "1.0", "end") self.text.tag_add("sel", first, last) - self.text.mark_set("insert", dest) self.text.see("insert") return "break" @@ -385,7 +404,7 @@ menudict[name] = menu = Menu(mbar, name=name) mbar.add_cascade(label=label, menu=menu, underline=underline) - if macosxSupport.runningAsOSXApp(): + if macosxSupport.isCarbonAquaTk(self.root): # Insert the application menu menudict['application'] = menu = Menu(mbar, name='apple') mbar.add_cascade(label='IDLE', menu=menu) @@ -445,7 +464,11 @@ def python_docs(self, event=None): if sys.platform[:3] == 'win': - os.startfile(self.help_url) + try: + os.startfile(self.help_url) + except WindowsError as why: + tkMessageBox.showerror(title='Document Start Failure', + message=str(why), parent=self.text) else: webbrowser.open(self.help_url) return "break" @@ -740,9 +763,13 @@ "Create a callback with the helpfile value frozen at definition time" def display_extra_help(helpfile=helpfile): if not helpfile.startswith(('www', 'http')): - url = os.path.normpath(helpfile) + helpfile = os.path.normpath(helpfile) if sys.platform[:3] == 'win': - os.startfile(helpfile) + try: + os.startfile(helpfile) + except WindowsError as why: + tkMessageBox.showerror(title='Document Start Failure', + message=str(why), parent=self.text) else: webbrowser.open(helpfile) return display_extra_help @@ -1526,7 +1553,12 @@ def get_accelerator(keydefs, eventname): keylist = keydefs.get(eventname) - if not keylist: + # issue10940: temporary workaround to prevent hang with OS X Cocoa Tk 8.5 + # if not keylist: + if (not keylist) or (macosxSupport.runningAsOSXApp() and eventname in { + "<>", + "<>", + "<>"}): return "" s = keylist[0] s = re.sub(r"-[a-z]\b", lambda m: m.group().upper(), s) diff --git a/lib-python/2.7/idlelib/FileList.py b/lib-python/2.7/idlelib/FileList.py --- a/lib-python/2.7/idlelib/FileList.py +++ b/lib-python/2.7/idlelib/FileList.py @@ -43,7 +43,7 @@ def new(self, filename=None): return self.EditorWindow(self, filename) - def close_all_callback(self, event): + def close_all_callback(self, *args, **kwds): for edit in self.inversedict.keys(): reply = edit.close() if reply == "cancel": diff --git a/lib-python/2.7/idlelib/FormatParagraph.py b/lib-python/2.7/idlelib/FormatParagraph.py --- a/lib-python/2.7/idlelib/FormatParagraph.py +++ 
b/lib-python/2.7/idlelib/FormatParagraph.py @@ -54,7 +54,7 @@ # If the block ends in a \n, we dont want the comment # prefix inserted after it. (Im not sure it makes sense to # reformat a comment block that isnt made of complete - # lines, but whatever!) Can't think of a clean soltution, + # lines, but whatever!) Can't think of a clean solution, # so we hack away block_suffix = "" if not newdata[-1]: diff --git a/lib-python/2.7/idlelib/HISTORY.txt b/lib-python/2.7/idlelib/HISTORY.txt --- a/lib-python/2.7/idlelib/HISTORY.txt +++ b/lib-python/2.7/idlelib/HISTORY.txt @@ -13,7 +13,7 @@ - New tarball released as a result of the 'revitalisation' of the IDLEfork project. -- This release requires python 2.1 or better. Compatability with earlier +- This release requires python 2.1 or better. Compatibility with earlier versions of python (especially ancient ones like 1.5x) is no longer a priority in IDLEfork development. diff --git a/lib-python/2.7/idlelib/IOBinding.py b/lib-python/2.7/idlelib/IOBinding.py --- a/lib-python/2.7/idlelib/IOBinding.py +++ b/lib-python/2.7/idlelib/IOBinding.py @@ -320,17 +320,20 @@ return "yes" message = "Do you want to save %s before closing?" % ( self.filename or "this untitled document") - m = tkMessageBox.Message( - title="Save On Close", - message=message, - icon=tkMessageBox.QUESTION, - type=tkMessageBox.YESNOCANCEL, - master=self.text) - reply = m.show() - if reply == "yes": + confirm = tkMessageBox.askyesnocancel( + title="Save On Close", + message=message, + default=tkMessageBox.YES, + master=self.text) + if confirm: + reply = "yes" self.save(None) if not self.get_saved(): reply = "cancel" + elif confirm is None: + reply = "cancel" + else: + reply = "no" self.text.focus_set() return reply @@ -339,7 +342,7 @@ self.save_as(event) else: if self.writefile(self.filename): - self.set_saved(1) + self.set_saved(True) try: self.editwin.store_file_breaks() except AttributeError: # may be a PyShell @@ -465,15 +468,12 @@ self.text.insert("end-1c", "\n") def print_window(self, event): - m = tkMessageBox.Message( - title="Print", - message="Print to Default Printer", - icon=tkMessageBox.QUESTION, - type=tkMessageBox.OKCANCEL, - default=tkMessageBox.OK, - master=self.text) - reply = m.show() - if reply != tkMessageBox.OK: + confirm = tkMessageBox.askokcancel( + title="Print", + message="Print to Default Printer", + default=tkMessageBox.OK, + master=self.text) + if not confirm: self.text.focus_set() return "break" tempfilename = None @@ -488,8 +488,8 @@ if not self.writefile(tempfilename): os.unlink(tempfilename) return "break" - platform=os.name - printPlatform=1 + platform = os.name + printPlatform = True if platform == 'posix': #posix platform command = idleConf.GetOption('main','General', 'print-command-posix') @@ -497,7 +497,7 @@ elif platform == 'nt': #win32 platform command = idleConf.GetOption('main','General','print-command-win') else: #no printing for this platform - printPlatform=0 + printPlatform = False if printPlatform: #we can try to print for this platform command = command % filename pipe = os.popen(command, "r") @@ -511,7 +511,7 @@ output = "Printing command: %s\n" % repr(command) + output tkMessageBox.showerror("Print status", output, master=self.text) else: #no printing for this platform - message="Printing is not enabled for this platform: %s" % platform + message = "Printing is not enabled for this platform: %s" % platform tkMessageBox.showinfo("Print status", message, master=self.text) if tempfilename: os.unlink(tempfilename) diff --git 
a/lib-python/2.7/idlelib/NEWS.txt b/lib-python/2.7/idlelib/NEWS.txt --- a/lib-python/2.7/idlelib/NEWS.txt +++ b/lib-python/2.7/idlelib/NEWS.txt @@ -1,3 +1,18 @@ +What's New in IDLE 2.7.2? +======================= + +*Release date: 29-May-2011* + +- Issue #6378: Further adjust idle.bat to start associated Python + +- Issue #11896: Save on Close failed despite selecting "Yes" in dialog. + +- toggle failing on Tk 8.5, causing IDLE exits and strange selection + behavior. Issue 4676. Improve selection extension behaviour. + +- toggle non-functional when NumLock set on Windows. Issue 3851. + + What's New in IDLE 2.7? ======================= @@ -21,7 +36,7 @@ - Tk 8.5 Text widget requires 'wordprocessor' tabstyle attr to handle mixed space/tab properly. Issue 5129, patch by Guilherme Polo. - + - Issue #3549: On MacOS the preferences menu was not present diff --git a/lib-python/2.7/idlelib/PyShell.py b/lib-python/2.7/idlelib/PyShell.py --- a/lib-python/2.7/idlelib/PyShell.py +++ b/lib-python/2.7/idlelib/PyShell.py @@ -1432,6 +1432,13 @@ shell.interp.prepend_syspath(script) shell.interp.execfile(script) + # Check for problematic OS X Tk versions and print a warning message + # in the IDLE shell window; this is less intrusive than always opening + # a separate window. + tkversionwarning = macosxSupport.tkVersionWarning(root) + if tkversionwarning: + shell.interp.runcommand(''.join(("print('", tkversionwarning, "')"))) + root.mainloop() root.destroy() diff --git a/lib-python/2.7/idlelib/ScriptBinding.py b/lib-python/2.7/idlelib/ScriptBinding.py --- a/lib-python/2.7/idlelib/ScriptBinding.py +++ b/lib-python/2.7/idlelib/ScriptBinding.py @@ -26,6 +26,7 @@ from idlelib import PyShell from idlelib.configHandler import idleConf +from idlelib import macosxSupport IDENTCHARS = string.ascii_letters + string.digits + "_" @@ -53,6 +54,9 @@ self.flist = self.editwin.flist self.root = self.editwin.root + if macosxSupport.runningAsOSXApp(): + self.editwin.text_frame.bind('<>', self._run_module_event) + def check_module_event(self, event): filename = self.getfilename() if not filename: @@ -166,6 +170,19 @@ interp.runcode(code) return 'break' + if macosxSupport.runningAsOSXApp(): + # Tk-Cocoa in MacOSX is broken until at least + # Tk 8.5.9, and without this rather + # crude workaround IDLE would hang when a user + # tries to run a module using the keyboard shortcut + # (the menu item works fine). + _run_module_event = run_module_event + + def run_module_event(self, event): + self.editwin.text_frame.after(200, + lambda: self.editwin.text_frame.event_generate('<>')) + return 'break' + def getfilename(self): """Get source filename. If not saved, offer to save (or create) file @@ -184,9 +201,9 @@ if autosave and filename: self.editwin.io.save(None) else: - reply = self.ask_save_dialog() + confirm = self.ask_save_dialog() self.editwin.text.focus_set() - if reply == "ok": + if confirm: self.editwin.io.save(None) filename = self.editwin.io.filename else: @@ -195,13 +212,11 @@ def ask_save_dialog(self): msg = "Source Must Be Saved\n" + 5*' ' + "OK to Save?" - mb = tkMessageBox.Message(title="Save Before Run or Check", - message=msg, - icon=tkMessageBox.QUESTION, - type=tkMessageBox.OKCANCEL, - default=tkMessageBox.OK, - master=self.editwin.text) - return mb.show() + confirm = tkMessageBox.askokcancel(title="Save Before Run or Check", + message=msg, + default=tkMessageBox.OK, + master=self.editwin.text) + return confirm def errorbox(self, title, message): # XXX This should really be a function of EditorWindow... 
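
The IOBinding and ScriptBinding hunks above drop the old tkMessageBox.Message(...).show() idiom, which returned the string labels "ok"/"yes"/"no"/"cancel", in favour of the askokcancel/askyesnocancel convenience functions, which return True/False (or None when the dialog is cancelled). A minimal standalone sketch of that calling pattern follows; it is not part of the patch, it assumes a plain Python 2 Tkinter root, and maybe_save and the save callback are hypothetical names used only for illustration, not IDLE's actual API.

# Sketch only: illustrates the True/False/None protocol of askyesnocancel
# that the patched IOBinding.maybesave() relies on.
# maybe_save and save are hypothetical names, not IDLE code.
import Tkinter
import tkMessageBox

def maybe_save(master, save, filename=None):
    message = "Do you want to save %s before closing?" % (
        filename or "this untitled document")
    # askyesnocancel returns True for "Yes", False for "No" and None for
    # "Cancel", unlike the old Message(...).show(), which returned the
    # strings "yes"/"no"/"cancel".
    confirm = tkMessageBox.askyesnocancel(title="Save On Close",
                                          message=message,
                                          default=tkMessageBox.YES,
                                          master=master)
    if confirm:            # True: save first, then allow the close
        save()
        return "yes"
    elif confirm is None:  # Cancel: abort the close
        return "cancel"
    else:                  # False: close without saving
        return "no"

if __name__ == "__main__":
    root = Tkinter.Tk()
    root.withdraw()
    print maybe_save(root, save=lambda: None, filename="example.py")

The same shift explains the simpler ask_save_dialog in ScriptBinding.py above: askokcancel already returns a boolean, so the caller can test "if confirm:" instead of comparing against the string "ok".
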
diff --git a/lib-python/2.7/idlelib/config-keys.def b/lib-python/2.7/idlelib/config-keys.def --- a/lib-python/2.7/idlelib/config-keys.def +++ b/lib-python/2.7/idlelib/config-keys.def @@ -176,7 +176,7 @@ redo = close-window = restart-shell = -save-window-as-file = +save-window-as-file = close-all-windows = view-restart = tabify-region = @@ -208,7 +208,7 @@ open-module = find-selection = python-context-help = -save-copy-of-window-as-file = +save-copy-of-window-as-file = open-window-from-file = python-docs = diff --git a/lib-python/2.7/idlelib/extend.txt b/lib-python/2.7/idlelib/extend.txt --- a/lib-python/2.7/idlelib/extend.txt +++ b/lib-python/2.7/idlelib/extend.txt @@ -18,7 +18,7 @@ An IDLE extension class is instantiated with a single argument, `editwin', an EditorWindow instance. The extension cannot assume much -about this argument, but it is guarateed to have the following instance +about this argument, but it is guaranteed to have the following instance variables: text a Text instance (a widget) diff --git a/lib-python/2.7/idlelib/idle.bat b/lib-python/2.7/idlelib/idle.bat --- a/lib-python/2.7/idlelib/idle.bat +++ b/lib-python/2.7/idlelib/idle.bat @@ -1,4 +1,4 @@ @echo off rem Start IDLE using the appropriate Python interpreter set CURRDIR=%~dp0 -start "%CURRDIR%..\..\pythonw.exe" "%CURRDIR%idle.pyw" %1 %2 %3 %4 %5 %6 %7 %8 %9 +start "IDLE" "%CURRDIR%..\..\pythonw.exe" "%CURRDIR%idle.pyw" %1 %2 %3 %4 %5 %6 %7 %8 %9 diff --git a/lib-python/2.7/idlelib/idlever.py b/lib-python/2.7/idlelib/idlever.py --- a/lib-python/2.7/idlelib/idlever.py +++ b/lib-python/2.7/idlelib/idlever.py @@ -1,1 +1,1 @@ -IDLE_VERSION = "2.7.1" +IDLE_VERSION = "2.7.2" diff --git a/lib-python/2.7/idlelib/macosxSupport.py b/lib-python/2.7/idlelib/macosxSupport.py --- a/lib-python/2.7/idlelib/macosxSupport.py +++ b/lib-python/2.7/idlelib/macosxSupport.py @@ -4,6 +4,7 @@ """ import sys import Tkinter +from os import path _appbundle = None @@ -19,10 +20,41 @@ _appbundle = (sys.platform == 'darwin' and '.app' in sys.executable) return _appbundle +_carbonaquatk = None + +def isCarbonAquaTk(root): + """ + Returns True if IDLE is using a Carbon Aqua Tk (instead of the + newer Cocoa Aqua Tk). + """ + global _carbonaquatk + if _carbonaquatk is None: + _carbonaquatk = (runningAsOSXApp() and + 'aqua' in root.tk.call('tk', 'windowingsystem') and + 'AppKit' not in root.tk.call('winfo', 'server', '.')) + return _carbonaquatk + +def tkVersionWarning(root): + """ + Returns a string warning message if the Tk version in use appears to + be one known to cause problems with IDLE. The Apple Cocoa-based Tk 8.5 + that was shipped with Mac OS X 10.6. + """ + + if (runningAsOSXApp() and + ('AppKit' in root.tk.call('winfo', 'server', '.')) and + (root.tk.call('info', 'patchlevel') == '8.5.7') ): + return (r"WARNING: The version of Tcl/Tk (8.5.7) in use may" + r" be unstable.\n" + r"Visit http://www.python.org/download/mac/tcltk/" + r" for current information.") + else: + return False + def addOpenEventSupport(root, flist): """ - This ensures that the application will respont to open AppleEvents, which - makes is feaseable to use IDLE as the default application for python files. + This ensures that the application will respond to open AppleEvents, which + makes is feasible to use IDLE as the default application for python files. 
""" def doOpenFile(*args): for fn in args: @@ -79,9 +111,6 @@ WindowList.add_windows_to_menu(menu) WindowList.register_callback(postwindowsmenu) - menudict['application'] = menu = Menu(menubar, name='apple') - menubar.add_cascade(label='IDLE', menu=menu) - def about_dialog(event=None): from idlelib import aboutDialog aboutDialog.AboutDialog(root, 'About IDLE') @@ -91,41 +120,45 @@ root.instance_dict = flist.inversedict configDialog.ConfigDialog(root, 'Settings') + def help_dialog(event=None): + from idlelib import textView + fn = path.join(path.abspath(path.dirname(__file__)), 'help.txt') + textView.view_file(root, 'Help', fn) root.bind('<>', about_dialog) root.bind('<>', config_dialog) + root.createcommand('::tk::mac::ShowPreferences', config_dialog) if flist: root.bind('<>', flist.close_all_callback) + # The binding above doesn't reliably work on all versions of Tk + # on MacOSX. Adding command definition below does seem to do the + # right thing for now. + root.createcommand('exit', flist.close_all_callback) - ###check if Tk version >= 8.4.14; if so, use hard-coded showprefs binding - tkversion = root.tk.eval('info patchlevel') - # Note: we cannot check if the string tkversion >= '8.4.14', because - # the string '8.4.7' is greater than the string '8.4.14'. - if tuple(map(int, tkversion.split('.'))) >= (8, 4, 14): - Bindings.menudefs[0] = ('application', [ + if isCarbonAquaTk(root): + # for Carbon AquaTk, replace the default Tk apple menu + menudict['application'] = menu = Menu(menubar, name='apple') + menubar.add_cascade(label='IDLE', menu=menu) + Bindings.menudefs.insert(0, + ('application', [ ('About IDLE', '<>'), - None, - ]) - root.createcommand('::tk::mac::ShowPreferences', config_dialog) + None, + ])) + tkversion = root.tk.eval('info patchlevel') + if tuple(map(int, tkversion.split('.'))) < (8, 4, 14): + # for earlier AquaTk versions, supply a Preferences menu item + Bindings.menudefs[0][1].append( + ('_Preferences....', '<>'), + ) else: - for mname, entrylist in Bindings.menudefs: - menu = menudict.get(mname) - if not menu: - continue - else: - for entry in entrylist: - if not entry: - menu.add_separator() - else: - label, eventname = entry - underline, label = prepstr(label) - accelerator = get_accelerator(Bindings.default_keydefs, - eventname) - def command(text=root, eventname=eventname): - text.event_generate(eventname) - menu.add_command(label=label, underline=underline, - command=command, accelerator=accelerator) + # assume Cocoa AquaTk + # replace default About dialog with About IDLE one + root.createcommand('tkAboutDialog', about_dialog) + # replace default "Help" item in Help menu + root.createcommand('::tk::mac::ShowHelp', help_dialog) + # remove redundant "IDLE Help" from menu + del Bindings.menudefs[-1][1][0] def setupApp(root, flist): """ diff --git a/lib-python/2.7/imaplib.py b/lib-python/2.7/imaplib.py --- a/lib-python/2.7/imaplib.py +++ b/lib-python/2.7/imaplib.py @@ -1158,28 +1158,17 @@ self.port = port self.sock = socket.create_connection((host, port)) self.sslobj = ssl.wrap_socket(self.sock, self.keyfile, self.certfile) + self.file = self.sslobj.makefile('rb') def read(self, size): """Read 'size' bytes from remote.""" - # sslobj.read() sometimes returns < size bytes - chunks = [] - read = 0 - while read < size: - data = self.sslobj.read(min(size-read, 16384)) - read += len(data) - chunks.append(data) - - return ''.join(chunks) + return self.file.read(size) def readline(self): """Read line from remote.""" - line = [] - while 1: - char = self.sslobj.read(1) - 
line.append(char) - if char in ("\n", ""): return ''.join(line) + return self.file.readline() def send(self, data): @@ -1195,6 +1184,7 @@ def shutdown(self): """Close I/O established in "open".""" + self.file.close() self.sock.close() @@ -1321,9 +1311,10 @@ 'Jul': 7, 'Aug': 8, 'Sep': 9, 'Oct': 10, 'Nov': 11, 'Dec': 12} def Internaldate2tuple(resp): - """Convert IMAP4 INTERNALDATE to UT. + """Parse an IMAP4 INTERNALDATE string. - Returns Python time module tuple. + Return corresponding local time. The return value is a + time.struct_time instance or None if the string has wrong format. """ mo = InternalDate.match(resp) @@ -1390,9 +1381,14 @@ def Time2Internaldate(date_time): - """Convert 'date_time' to IMAP4 INTERNALDATE representation. + """Convert date_time to IMAP4 INTERNALDATE representation. - Return string in form: '"DD-Mmm-YYYY HH:MM:SS +HHMM"' + Return string in form: '"DD-Mmm-YYYY HH:MM:SS +HHMM"'. The + date_time argument can be a number (int or float) representing + seconds since epoch (as returned by time.time()), a 9-tuple + representing local time (as returned by time.localtime()), or a + double-quoted string. In the last case, it is assumed to already + be in the correct format. """ if isinstance(date_time, (int, float)): diff --git a/lib-python/2.7/inspect.py b/lib-python/2.7/inspect.py --- a/lib-python/2.7/inspect.py +++ b/lib-python/2.7/inspect.py @@ -943,8 +943,14 @@ f_name, 'at most' if defaults else 'exactly', num_args, 'arguments' if num_args > 1 else 'argument', num_total)) elif num_args == 0 and num_total: - raise TypeError('%s() takes no arguments (%d given)' % - (f_name, num_total)) + if varkw: + if num_pos: + # XXX: We should use num_pos, but Python also uses num_total: + raise TypeError('%s() takes exactly 0 arguments ' + '(%d given)' % (f_name, num_total)) + else: + raise TypeError('%s() takes no arguments (%d given)' % + (f_name, num_total)) for arg in args: if isinstance(arg, str) and arg in named: if is_assigned(arg): diff --git a/lib-python/2.7/json/decoder.py b/lib-python/2.7/json/decoder.py --- a/lib-python/2.7/json/decoder.py +++ b/lib-python/2.7/json/decoder.py @@ -4,7 +4,7 @@ import sys import struct -from json.scanner import make_scanner +from json import scanner try: from _json import scanstring as c_scanstring except ImportError: @@ -161,6 +161,12 @@ nextchar = s[end:end + 1] # Trivial empty object if nextchar == '}': + if object_pairs_hook is not None: + result = object_pairs_hook(pairs) + return result, end + pairs = {} + if object_hook is not None: + pairs = object_hook(pairs) return pairs, end + 1 elif nextchar != '"': raise ValueError(errmsg("Expecting property name", s, end)) @@ -350,7 +356,7 @@ self.parse_object = JSONObject self.parse_array = JSONArray self.parse_string = scanstring - self.scan_once = make_scanner(self) + self.scan_once = scanner.make_scanner(self) def decode(self, s, _w=WHITESPACE.match): """Return the Python representation of ``s`` (a ``str`` or ``unicode`` diff --git a/lib-python/2.7/json/encoder.py b/lib-python/2.7/json/encoder.py --- a/lib-python/2.7/json/encoder.py +++ b/lib-python/2.7/json/encoder.py @@ -251,7 +251,7 @@ if (_one_shot and c_make_encoder is not None - and not self.indent and not self.sort_keys): + and self.indent is None and not self.sort_keys): _iterencode = c_make_encoder( markers, self.default, _encoder, self.indent, self.key_separator, self.item_separator, self.sort_keys, diff --git a/lib-python/2.7/json/tests/__init__.py b/lib-python/2.7/json/tests/__init__.py --- 
a/lib-python/2.7/json/tests/__init__.py +++ b/lib-python/2.7/json/tests/__init__.py @@ -1,7 +1,46 @@ import os import sys +import json +import doctest import unittest -import doctest + +from test import test_support + +# import json with and without accelerations +cjson = test_support.import_fresh_module('json', fresh=['_json']) +pyjson = test_support.import_fresh_module('json', blocked=['_json']) + +# create two base classes that will be used by the other tests +class PyTest(unittest.TestCase): + json = pyjson + loads = staticmethod(pyjson.loads) + dumps = staticmethod(pyjson.dumps) + + at unittest.skipUnless(cjson, 'requires _json') +class CTest(unittest.TestCase): + if cjson is not None: + json = cjson + loads = staticmethod(cjson.loads) + dumps = staticmethod(cjson.dumps) + +# test PyTest and CTest checking if the functions come from the right module +class TestPyTest(PyTest): + def test_pyjson(self): + self.assertEqual(self.json.scanner.make_scanner.__module__, + 'json.scanner') + self.assertEqual(self.json.decoder.scanstring.__module__, + 'json.decoder') + self.assertEqual(self.json.encoder.encode_basestring_ascii.__module__, + 'json.encoder') + +class TestCTest(CTest): + def test_cjson(self): + self.assertEqual(self.json.scanner.make_scanner.__module__, '_json') + self.assertEqual(self.json.decoder.scanstring.__module__, '_json') + self.assertEqual(self.json.encoder.c_make_encoder.__module__, '_json') + self.assertEqual(self.json.encoder.encode_basestring_ascii.__module__, + '_json') + here = os.path.dirname(__file__) @@ -17,12 +56,11 @@ return suite def additional_tests(): - import json - import json.encoder - import json.decoder suite = unittest.TestSuite() for mod in (json, json.encoder, json.decoder): suite.addTest(doctest.DocTestSuite(mod)) + suite.addTest(TestPyTest('test_pyjson')) + suite.addTest(TestCTest('test_cjson')) return suite def main(): diff --git a/lib-python/2.7/json/tests/test_check_circular.py b/lib-python/2.7/json/tests/test_check_circular.py --- a/lib-python/2.7/json/tests/test_check_circular.py +++ b/lib-python/2.7/json/tests/test_check_circular.py @@ -1,30 +1,34 @@ -from unittest import TestCase -import json +from json.tests import PyTest, CTest + def default_iterable(obj): return list(obj) -class TestCheckCircular(TestCase): +class TestCheckCircular(object): def test_circular_dict(self): dct = {} dct['a'] = dct - self.assertRaises(ValueError, json.dumps, dct) + self.assertRaises(ValueError, self.dumps, dct) def test_circular_list(self): lst = [] lst.append(lst) - self.assertRaises(ValueError, json.dumps, lst) + self.assertRaises(ValueError, self.dumps, lst) def test_circular_composite(self): dct2 = {} dct2['a'] = [] dct2['a'].append(dct2) - self.assertRaises(ValueError, json.dumps, dct2) + self.assertRaises(ValueError, self.dumps, dct2) def test_circular_default(self): - json.dumps([set()], default=default_iterable) - self.assertRaises(TypeError, json.dumps, [set()]) + self.dumps([set()], default=default_iterable) + self.assertRaises(TypeError, self.dumps, [set()]) def test_circular_off_default(self): - json.dumps([set()], default=default_iterable, check_circular=False) - self.assertRaises(TypeError, json.dumps, [set()], check_circular=False) + self.dumps([set()], default=default_iterable, check_circular=False) + self.assertRaises(TypeError, self.dumps, [set()], check_circular=False) + + +class TestPyCheckCircular(TestCheckCircular, PyTest): pass +class TestCCheckCircular(TestCheckCircular, CTest): pass diff --git a/lib-python/2.7/json/tests/test_decode.py 
b/lib-python/2.7/json/tests/test_decode.py --- a/lib-python/2.7/json/tests/test_decode.py +++ b/lib-python/2.7/json/tests/test_decode.py @@ -1,18 +1,17 @@ import decimal -from unittest import TestCase from StringIO import StringIO +from collections import OrderedDict +from json.tests import PyTest, CTest -import json -from collections import OrderedDict -class TestDecode(TestCase): +class TestDecode(object): def test_decimal(self): - rval = json.loads('1.1', parse_float=decimal.Decimal) + rval = self.loads('1.1', parse_float=decimal.Decimal) self.assertTrue(isinstance(rval, decimal.Decimal)) self.assertEqual(rval, decimal.Decimal('1.1')) def test_float(self): - rval = json.loads('1', parse_int=float) + rval = self.loads('1', parse_int=float) self.assertTrue(isinstance(rval, float)) self.assertEqual(rval, 1.0) @@ -20,22 +19,32 @@ # Several optimizations were made that skip over calls to # the whitespace regex, so this test is designed to try and # exercise the uncommon cases. The array cases are already covered. - rval = json.loads('{ "key" : "value" , "k":"v" }') + rval = self.loads('{ "key" : "value" , "k":"v" }') self.assertEqual(rval, {"key":"value", "k":"v"}) + def test_empty_objects(self): + self.assertEqual(self.loads('{}'), {}) + self.assertEqual(self.loads('[]'), []) + self.assertEqual(self.loads('""'), u"") + self.assertIsInstance(self.loads('""'), unicode) + def test_object_pairs_hook(self): s = '{"xkd":1, "kcw":2, "art":3, "hxm":4, "qrt":5, "pad":6, "hoy":7}' p = [("xkd", 1), ("kcw", 2), ("art", 3), ("hxm", 4), ("qrt", 5), ("pad", 6), ("hoy", 7)] - self.assertEqual(json.loads(s), eval(s)) - self.assertEqual(json.loads(s, object_pairs_hook=lambda x: x), p) - self.assertEqual(json.load(StringIO(s), - object_pairs_hook=lambda x: x), p) - od = json.loads(s, object_pairs_hook=OrderedDict) + self.assertEqual(self.loads(s), eval(s)) + self.assertEqual(self.loads(s, object_pairs_hook=lambda x: x), p) + self.assertEqual(self.json.load(StringIO(s), + object_pairs_hook=lambda x: x), p) + od = self.loads(s, object_pairs_hook=OrderedDict) self.assertEqual(od, OrderedDict(p)) self.assertEqual(type(od), OrderedDict) # the object_pairs_hook takes priority over the object_hook - self.assertEqual(json.loads(s, + self.assertEqual(self.loads(s, object_pairs_hook=OrderedDict, object_hook=lambda x: None), OrderedDict(p)) + + +class TestPyDecode(TestDecode, PyTest): pass +class TestCDecode(TestDecode, CTest): pass diff --git a/lib-python/2.7/json/tests/test_default.py b/lib-python/2.7/json/tests/test_default.py --- a/lib-python/2.7/json/tests/test_default.py +++ b/lib-python/2.7/json/tests/test_default.py @@ -1,9 +1,12 @@ -from unittest import TestCase +from json.tests import PyTest, CTest -import json -class TestDefault(TestCase): +class TestDefault(object): def test_default(self): self.assertEqual( - json.dumps(type, default=repr), - json.dumps(repr(type))) + self.dumps(type, default=repr), + self.dumps(repr(type))) + + +class TestPyDefault(TestDefault, PyTest): pass +class TestCDefault(TestDefault, CTest): pass diff --git a/lib-python/2.7/json/tests/test_dump.py b/lib-python/2.7/json/tests/test_dump.py --- a/lib-python/2.7/json/tests/test_dump.py +++ b/lib-python/2.7/json/tests/test_dump.py @@ -1,21 +1,23 @@ -from unittest import TestCase from cStringIO import StringIO +from json.tests import PyTest, CTest -import json -class TestDump(TestCase): +class TestDump(object): def test_dump(self): sio = StringIO() - json.dump({}, sio) + self.json.dump({}, sio) self.assertEqual(sio.getvalue(), '{}') def 
test_dumps(self): - self.assertEqual(json.dumps({}), '{}') + self.assertEqual(self.dumps({}), '{}') def test_encode_truefalse(self): - self.assertEqual(json.dumps( + self.assertEqual(self.dumps( {True: False, False: True}, sort_keys=True), '{"false": true, "true": false}') - self.assertEqual(json.dumps( + self.assertEqual(self.dumps( {2: 3.0, 4.0: 5L, False: 1, 6L: True}, sort_keys=True), '{"false": 1, "2": 3.0, "4.0": 5, "6": true}') + +class TestPyDump(TestDump, PyTest): pass +class TestCDump(TestDump, CTest): pass diff --git a/lib-python/2.7/json/tests/test_encode_basestring_ascii.py b/lib-python/2.7/json/tests/test_encode_basestring_ascii.py --- a/lib-python/2.7/json/tests/test_encode_basestring_ascii.py +++ b/lib-python/2.7/json/tests/test_encode_basestring_ascii.py @@ -1,8 +1,6 @@ -from unittest import TestCase +from collections import OrderedDict +from json.tests import PyTest, CTest -import json.encoder -from json import dumps -from collections import OrderedDict CASES = [ (u'/\\"\ucafe\ubabe\uab98\ufcde\ubcda\uef4a\x08\x0c\n\r\t`1~!@#$%^&*()_+-=[]{}|;:\',./<>?', '"/\\\\\\"\\ucafe\\ubabe\\uab98\\ufcde\\ubcda\\uef4a\\b\\f\\n\\r\\t`1~!@#$%^&*()_+-=[]{}|;:\',./<>?"'), @@ -23,19 +21,11 @@ (u'\u0123\u4567\u89ab\ucdef\uabcd\uef4a', '"\\u0123\\u4567\\u89ab\\ucdef\\uabcd\\uef4a"'), ] -class TestEncodeBaseStringAscii(TestCase): - def test_py_encode_basestring_ascii(self): - self._test_encode_basestring_ascii(json.encoder.py_encode_basestring_ascii) - - def test_c_encode_basestring_ascii(self): - if not json.encoder.c_encode_basestring_ascii: - return - self._test_encode_basestring_ascii(json.encoder.c_encode_basestring_ascii) - - def _test_encode_basestring_ascii(self, encode_basestring_ascii): - fname = encode_basestring_ascii.__name__ +class TestEncodeBasestringAscii(object): + def test_encode_basestring_ascii(self): + fname = self.json.encoder.encode_basestring_ascii.__name__ for input_string, expect in CASES: - result = encode_basestring_ascii(input_string) + result = self.json.encoder.encode_basestring_ascii(input_string) self.assertEqual(result, expect, '{0!r} != {1!r} for {2}({3!r})'.format( result, expect, fname, input_string)) @@ -43,5 +33,9 @@ def test_ordered_dict(self): # See issue 6105 items = [('one', 1), ('two', 2), ('three', 3), ('four', 4), ('five', 5)] - s = json.dumps(OrderedDict(items)) + s = self.dumps(OrderedDict(items)) self.assertEqual(s, '{"one": 1, "two": 2, "three": 3, "four": 4, "five": 5}') + + +class TestPyEncodeBasestringAscii(TestEncodeBasestringAscii, PyTest): pass +class TestCEncodeBasestringAscii(TestEncodeBasestringAscii, CTest): pass diff --git a/lib-python/2.7/json/tests/test_fail.py b/lib-python/2.7/json/tests/test_fail.py --- a/lib-python/2.7/json/tests/test_fail.py +++ b/lib-python/2.7/json/tests/test_fail.py @@ -1,6 +1,4 @@ -from unittest import TestCase - -import json +from json.tests import PyTest, CTest # Fri Dec 30 18:57:26 2005 JSONDOCS = [ @@ -61,15 +59,15 @@ 18: "spec doesn't specify any nesting limitations", } -class TestFail(TestCase): +class TestFail(object): def test_failures(self): for idx, doc in enumerate(JSONDOCS): idx = idx + 1 if idx in SKIPS: - json.loads(doc) + self.loads(doc) continue try: - json.loads(doc) + self.loads(doc) except ValueError: pass else: @@ -79,7 +77,11 @@ data = {'a' : 1, (1, 2) : 2} #This is for c encoder - self.assertRaises(TypeError, json.dumps, data) + self.assertRaises(TypeError, self.dumps, data) #This is for python encoder - self.assertRaises(TypeError, json.dumps, data, indent=True) + 
self.assertRaises(TypeError, self.dumps, data, indent=True) + + +class TestPyFail(TestFail, PyTest): pass +class TestCFail(TestFail, CTest): pass diff --git a/lib-python/2.7/json/tests/test_float.py b/lib-python/2.7/json/tests/test_float.py --- a/lib-python/2.7/json/tests/test_float.py +++ b/lib-python/2.7/json/tests/test_float.py @@ -1,19 +1,22 @@ import math -from unittest import TestCase +from json.tests import PyTest, CTest -import json -class TestFloat(TestCase): +class TestFloat(object): def test_floats(self): for num in [1617161771.7650001, math.pi, math.pi**100, math.pi**-100, 3.1]: - self.assertEqual(float(json.dumps(num)), num) - self.assertEqual(json.loads(json.dumps(num)), num) - self.assertEqual(json.loads(unicode(json.dumps(num))), num) + self.assertEqual(float(self.dumps(num)), num) + self.assertEqual(self.loads(self.dumps(num)), num) + self.assertEqual(self.loads(unicode(self.dumps(num))), num) def test_ints(self): for num in [1, 1L, 1<<32, 1<<64]: - self.assertEqual(json.dumps(num), str(num)) - self.assertEqual(int(json.dumps(num)), num) - self.assertEqual(json.loads(json.dumps(num)), num) - self.assertEqual(json.loads(unicode(json.dumps(num))), num) + self.assertEqual(self.dumps(num), str(num)) + self.assertEqual(int(self.dumps(num)), num) + self.assertEqual(self.loads(self.dumps(num)), num) + self.assertEqual(self.loads(unicode(self.dumps(num))), num) + + +class TestPyFloat(TestFloat, PyTest): pass +class TestCFloat(TestFloat, CTest): pass diff --git a/lib-python/2.7/json/tests/test_indent.py b/lib-python/2.7/json/tests/test_indent.py --- a/lib-python/2.7/json/tests/test_indent.py +++ b/lib-python/2.7/json/tests/test_indent.py @@ -1,9 +1,9 @@ -from unittest import TestCase +import textwrap +from StringIO import StringIO +from json.tests import PyTest, CTest -import json -import textwrap -class TestIndent(TestCase): +class TestIndent(object): def test_indent(self): h = [['blorpie'], ['whoops'], [], 'd-shtaeou', 'd-nthiouh', 'i-vhbjkhnth', {'nifty': 87}, {'field': 'yes', 'morefield': False} ] @@ -30,12 +30,31 @@ ]""") - d1 = json.dumps(h) - d2 = json.dumps(h, indent=2, sort_keys=True, separators=(',', ': ')) + d1 = self.dumps(h) + d2 = self.dumps(h, indent=2, sort_keys=True, separators=(',', ': ')) - h1 = json.loads(d1) - h2 = json.loads(d2) + h1 = self.loads(d1) + h2 = self.loads(d2) self.assertEqual(h1, h) self.assertEqual(h2, h) self.assertEqual(d2, expect) + + def test_indent0(self): + h = {3: 1} + def check(indent, expected): + d1 = self.dumps(h, indent=indent) + self.assertEqual(d1, expected) + + sio = StringIO() + self.json.dump(h, sio, indent=indent) + self.assertEqual(sio.getvalue(), expected) + + # indent=0 should emit newlines + check(0, '{\n"3": 1\n}') + # indent=None is more compact + check(None, '{"3": 1}') + + +class TestPyIndent(TestIndent, PyTest): pass +class TestCIndent(TestIndent, CTest): pass diff --git a/lib-python/2.7/json/tests/test_pass1.py b/lib-python/2.7/json/tests/test_pass1.py --- a/lib-python/2.7/json/tests/test_pass1.py +++ b/lib-python/2.7/json/tests/test_pass1.py @@ -1,6 +1,5 @@ -from unittest import TestCase +from json.tests import PyTest, CTest -import json # from http://json.org/JSON_checker/test/pass1.json JSON = r''' @@ -62,15 +61,19 @@ ,"rosebud"] ''' -class TestPass1(TestCase): +class TestPass1(object): def test_parse(self): # test in/out equivalence and parsing - res = json.loads(JSON) - out = json.dumps(res) - self.assertEqual(res, json.loads(out)) + res = self.loads(JSON) + out = self.dumps(res) + self.assertEqual(res, 
self.loads(out)) try: - json.dumps(res, allow_nan=False) + self.dumps(res, allow_nan=False) except ValueError: pass else: self.fail("23456789012E666 should be out of range") + + +class TestPyPass1(TestPass1, PyTest): pass +class TestCPass1(TestPass1, CTest): pass diff --git a/lib-python/2.7/json/tests/test_pass2.py b/lib-python/2.7/json/tests/test_pass2.py --- a/lib-python/2.7/json/tests/test_pass2.py +++ b/lib-python/2.7/json/tests/test_pass2.py @@ -1,14 +1,18 @@ -from unittest import TestCase -import json +from json.tests import PyTest, CTest + # from http://json.org/JSON_checker/test/pass2.json JSON = r''' [[[[[[[[[[[[[[[[[[["Not too deep"]]]]]]]]]]]]]]]]]]] ''' -class TestPass2(TestCase): +class TestPass2(object): def test_parse(self): # test in/out equivalence and parsing - res = json.loads(JSON) - out = json.dumps(res) - self.assertEqual(res, json.loads(out)) + res = self.loads(JSON) + out = self.dumps(res) + self.assertEqual(res, self.loads(out)) + + +class TestPyPass2(TestPass2, PyTest): pass +class TestCPass2(TestPass2, CTest): pass diff --git a/lib-python/2.7/json/tests/test_pass3.py b/lib-python/2.7/json/tests/test_pass3.py --- a/lib-python/2.7/json/tests/test_pass3.py +++ b/lib-python/2.7/json/tests/test_pass3.py @@ -1,6 +1,5 @@ -from unittest import TestCase +from json.tests import PyTest, CTest -import json # from http://json.org/JSON_checker/test/pass3.json JSON = r''' @@ -12,9 +11,14 @@ } ''' -class TestPass3(TestCase): + +class TestPass3(object): def test_parse(self): # test in/out equivalence and parsing - res = json.loads(JSON) - out = json.dumps(res) - self.assertEqual(res, json.loads(out)) + res = self.loads(JSON) + out = self.dumps(res) + self.assertEqual(res, self.loads(out)) + + +class TestPyPass3(TestPass3, PyTest): pass +class TestCPass3(TestPass3, CTest): pass diff --git a/lib-python/2.7/json/tests/test_recursion.py b/lib-python/2.7/json/tests/test_recursion.py --- a/lib-python/2.7/json/tests/test_recursion.py +++ b/lib-python/2.7/json/tests/test_recursion.py @@ -1,28 +1,16 @@ -from unittest import TestCase +from json.tests import PyTest, CTest -import json class JSONTestObject: pass -class RecursiveJSONEncoder(json.JSONEncoder): - recurse = False - def default(self, o): - if o is JSONTestObject: - if self.recurse: - return [JSONTestObject] - else: - return 'JSONTestObject' - return json.JSONEncoder.default(o) - - -class TestRecursion(TestCase): +class TestRecursion(object): def test_listrecursion(self): x = [] x.append(x) try: - json.dumps(x) + self.dumps(x) except ValueError: pass else: @@ -31,7 +19,7 @@ y = [x] x.append(y) try: - json.dumps(x) + self.dumps(x) except ValueError: pass else: @@ -39,13 +27,13 @@ y = [] x = [y, y] # ensure that the marker is cleared - json.dumps(x) + self.dumps(x) def test_dictrecursion(self): x = {} x["test"] = x try: - json.dumps(x) + self.dumps(x) except ValueError: pass else: @@ -53,9 +41,19 @@ x = {} y = {"a": x, "b": x} # ensure that the marker is cleared - json.dumps(x) + self.dumps(x) def test_defaultrecursion(self): + class RecursiveJSONEncoder(self.json.JSONEncoder): + recurse = False + def default(self, o): + if o is JSONTestObject: + if self.recurse: + return [JSONTestObject] + else: + return 'JSONTestObject' + return pyjson.JSONEncoder.default(o) + enc = RecursiveJSONEncoder() self.assertEqual(enc.encode(JSONTestObject), '"JSONTestObject"') enc.recurse = True @@ -65,3 +63,46 @@ pass else: self.fail("didn't raise ValueError on default recursion") + + + def test_highly_nested_objects_decoding(self): + # test that loading 
highly-nested objects doesn't segfault when C + # accelerations are used. See #12017 + # str + with self.assertRaises(RuntimeError): + self.loads('{"a":' * 100000 + '1' + '}' * 100000) + with self.assertRaises(RuntimeError): + self.loads('{"a":' * 100000 + '[1]' + '}' * 100000) + with self.assertRaises(RuntimeError): + self.loads('[' * 100000 + '1' + ']' * 100000) + # unicode + with self.assertRaises(RuntimeError): + self.loads(u'{"a":' * 100000 + u'1' + u'}' * 100000) + with self.assertRaises(RuntimeError): + self.loads(u'{"a":' * 100000 + u'[1]' + u'}' * 100000) + with self.assertRaises(RuntimeError): + self.loads(u'[' * 100000 + u'1' + u']' * 100000) + + def test_highly_nested_objects_encoding(self): + # See #12051 + l, d = [], {} + for x in xrange(100000): + l, d = [l], {'k':d} + with self.assertRaises(RuntimeError): + self.dumps(l) + with self.assertRaises(RuntimeError): + self.dumps(d) + + def test_endless_recursion(self): + # See #12051 + class EndlessJSONEncoder(self.json.JSONEncoder): + def default(self, o): + """If check_circular is False, this will keep adding another list.""" + return [o] + + with self.assertRaises(RuntimeError): + EndlessJSONEncoder(check_circular=False).encode(5j) + + +class TestPyRecursion(TestRecursion, PyTest): pass +class TestCRecursion(TestRecursion, CTest): pass diff --git a/lib-python/2.7/json/tests/test_scanstring.py b/lib-python/2.7/json/tests/test_scanstring.py --- a/lib-python/2.7/json/tests/test_scanstring.py +++ b/lib-python/2.7/json/tests/test_scanstring.py @@ -1,18 +1,10 @@ import sys -import decimal -from unittest import TestCase +from json.tests import PyTest, CTest -import json -import json.decoder -class TestScanString(TestCase): - def test_py_scanstring(self): - self._test_scanstring(json.decoder.py_scanstring) - - def test_c_scanstring(self): - self._test_scanstring(json.decoder.c_scanstring) - - def _test_scanstring(self, scanstring): +class TestScanstring(object): + def test_scanstring(self): + scanstring = self.json.decoder.scanstring self.assertEqual( scanstring('"z\\ud834\\udd20x"', 1, None, True), (u'z\U0001d120x', 16)) @@ -103,10 +95,15 @@ (u'Bad value', 12)) def test_issue3623(self): - self.assertRaises(ValueError, json.decoder.scanstring, b"xxx", 1, + self.assertRaises(ValueError, self.json.decoder.scanstring, b"xxx", 1, "xxx") self.assertRaises(UnicodeDecodeError, - json.encoder.encode_basestring_ascii, b"xx\xff") + self.json.encoder.encode_basestring_ascii, b"xx\xff") def test_overflow(self): - self.assertRaises(OverflowError, json.decoder.scanstring, b"xxx", sys.maxsize+1) + with self.assertRaises(OverflowError): + self.json.decoder.scanstring(b"xxx", sys.maxsize+1) + + +class TestPyScanstring(TestScanstring, PyTest): pass +class TestCScanstring(TestScanstring, CTest): pass diff --git a/lib-python/2.7/json/tests/test_separators.py b/lib-python/2.7/json/tests/test_separators.py --- a/lib-python/2.7/json/tests/test_separators.py +++ b/lib-python/2.7/json/tests/test_separators.py @@ -1,10 +1,8 @@ import textwrap -from unittest import TestCase +from json.tests import PyTest, CTest -import json - -class TestSeparators(TestCase): +class TestSeparators(object): def test_separators(self): h = [['blorpie'], ['whoops'], [], 'd-shtaeou', 'd-nthiouh', 'i-vhbjkhnth', {'nifty': 87}, {'field': 'yes', 'morefield': False} ] @@ -31,12 +29,16 @@ ]""") - d1 = json.dumps(h) - d2 = json.dumps(h, indent=2, sort_keys=True, separators=(' ,', ' : ')) + d1 = self.dumps(h) + d2 = self.dumps(h, indent=2, sort_keys=True, separators=(' ,', ' : ')) - h1 = 
json.loads(d1) - h2 = json.loads(d2) + h1 = self.loads(d1) + h2 = self.loads(d2) self.assertEqual(h1, h) self.assertEqual(h2, h) self.assertEqual(d2, expect) + + +class TestPySeparators(TestSeparators, PyTest): pass +class TestCSeparators(TestSeparators, CTest): pass diff --git a/lib-python/2.7/json/tests/test_speedups.py b/lib-python/2.7/json/tests/test_speedups.py --- a/lib-python/2.7/json/tests/test_speedups.py +++ b/lib-python/2.7/json/tests/test_speedups.py @@ -1,24 +1,23 @@ -import decimal -from unittest import TestCase +from json.tests import CTest -from json import decoder, encoder, scanner -class TestSpeedups(TestCase): +class TestSpeedups(CTest): def test_scanstring(self): - self.assertEqual(decoder.scanstring.__module__, "_json") - self.assertTrue(decoder.scanstring is decoder.c_scanstring) + self.assertEqual(self.json.decoder.scanstring.__module__, "_json") + self.assertIs(self.json.decoder.scanstring, self.json.decoder.c_scanstring) def test_encode_basestring_ascii(self): - self.assertEqual(encoder.encode_basestring_ascii.__module__, "_json") - self.assertTrue(encoder.encode_basestring_ascii is - encoder.c_encode_basestring_ascii) + self.assertEqual(self.json.encoder.encode_basestring_ascii.__module__, + "_json") + self.assertIs(self.json.encoder.encode_basestring_ascii, + self.json.encoder.c_encode_basestring_ascii) -class TestDecode(TestCase): +class TestDecode(CTest): def test_make_scanner(self): - self.assertRaises(AttributeError, scanner.c_make_scanner, 1) + self.assertRaises(AttributeError, self.json.scanner.c_make_scanner, 1) def test_make_encoder(self): - self.assertRaises(TypeError, encoder.c_make_encoder, + self.assertRaises(TypeError, self.json.encoder.c_make_encoder, None, "\xCD\x7D\x3D\x4E\x12\x4C\xF9\x79\xD7\x52\xBA\x82\xF2\x27\x4A\x7D\xA0\xCA\x75", None) diff --git a/lib-python/2.7/json/tests/test_unicode.py b/lib-python/2.7/json/tests/test_unicode.py --- a/lib-python/2.7/json/tests/test_unicode.py +++ b/lib-python/2.7/json/tests/test_unicode.py @@ -1,11 +1,10 @@ -from unittest import TestCase +from collections import OrderedDict +from json.tests import PyTest, CTest -import json -from collections import OrderedDict -class TestUnicode(TestCase): +class TestUnicode(object): def test_encoding1(self): - encoder = json.JSONEncoder(encoding='utf-8') + encoder = self.json.JSONEncoder(encoding='utf-8') u = u'\N{GREEK SMALL LETTER ALPHA}\N{GREEK CAPITAL LETTER OMEGA}' s = u.encode('utf-8') ju = encoder.encode(u) @@ -15,68 +14,72 @@ def test_encoding2(self): u = u'\N{GREEK SMALL LETTER ALPHA}\N{GREEK CAPITAL LETTER OMEGA}' s = u.encode('utf-8') - ju = json.dumps(u, encoding='utf-8') - js = json.dumps(s, encoding='utf-8') + ju = self.dumps(u, encoding='utf-8') + js = self.dumps(s, encoding='utf-8') self.assertEqual(ju, js) def test_encoding3(self): u = u'\N{GREEK SMALL LETTER ALPHA}\N{GREEK CAPITAL LETTER OMEGA}' - j = json.dumps(u) + j = self.dumps(u) self.assertEqual(j, '"\\u03b1\\u03a9"') def test_encoding4(self): u = u'\N{GREEK SMALL LETTER ALPHA}\N{GREEK CAPITAL LETTER OMEGA}' - j = json.dumps([u]) + j = self.dumps([u]) self.assertEqual(j, '["\\u03b1\\u03a9"]') def test_encoding5(self): u = u'\N{GREEK SMALL LETTER ALPHA}\N{GREEK CAPITAL LETTER OMEGA}' - j = json.dumps(u, ensure_ascii=False) + j = self.dumps(u, ensure_ascii=False) self.assertEqual(j, u'"{0}"'.format(u)) def test_encoding6(self): u = u'\N{GREEK SMALL LETTER ALPHA}\N{GREEK CAPITAL LETTER OMEGA}' - j = json.dumps([u], ensure_ascii=False) + j = self.dumps([u], ensure_ascii=False) self.assertEqual(j, 
u'["{0}"]'.format(u)) def test_big_unicode_encode(self): u = u'\U0001d120' - self.assertEqual(json.dumps(u), '"\\ud834\\udd20"') - self.assertEqual(json.dumps(u, ensure_ascii=False), u'"\U0001d120"') + self.assertEqual(self.dumps(u), '"\\ud834\\udd20"') + self.assertEqual(self.dumps(u, ensure_ascii=False), u'"\U0001d120"') def test_big_unicode_decode(self): u = u'z\U0001d120x' - self.assertEqual(json.loads('"' + u + '"'), u) - self.assertEqual(json.loads('"z\\ud834\\udd20x"'), u) + self.assertEqual(self.loads('"' + u + '"'), u) + self.assertEqual(self.loads('"z\\ud834\\udd20x"'), u) def test_unicode_decode(self): for i in range(0, 0xd7ff): u = unichr(i) s = '"\\u{0:04x}"'.format(i) - self.assertEqual(json.loads(s), u) + self.assertEqual(self.loads(s), u) def test_object_pairs_hook_with_unicode(self): s = u'{"xkd":1, "kcw":2, "art":3, "hxm":4, "qrt":5, "pad":6, "hoy":7}' p = [(u"xkd", 1), (u"kcw", 2), (u"art", 3), (u"hxm", 4), (u"qrt", 5), (u"pad", 6), (u"hoy", 7)] - self.assertEqual(json.loads(s), eval(s)) - self.assertEqual(json.loads(s, object_pairs_hook = lambda x: x), p) - od = json.loads(s, object_pairs_hook = OrderedDict) + self.assertEqual(self.loads(s), eval(s)) + self.assertEqual(self.loads(s, object_pairs_hook = lambda x: x), p) + od = self.loads(s, object_pairs_hook = OrderedDict) self.assertEqual(od, OrderedDict(p)) self.assertEqual(type(od), OrderedDict) # the object_pairs_hook takes priority over the object_hook - self.assertEqual(json.loads(s, + self.assertEqual(self.loads(s, object_pairs_hook = OrderedDict, object_hook = lambda x: None), OrderedDict(p)) def test_default_encoding(self): - self.assertEqual(json.loads(u'{"a": "\xe9"}'.encode('utf-8')), + self.assertEqual(self.loads(u'{"a": "\xe9"}'.encode('utf-8')), {'a': u'\xe9'}) def test_unicode_preservation(self): - self.assertEqual(type(json.loads(u'""')), unicode) - self.assertEqual(type(json.loads(u'"a"')), unicode) - self.assertEqual(type(json.loads(u'["a"]')[0]), unicode) + self.assertEqual(type(self.loads(u'""')), unicode) + self.assertEqual(type(self.loads(u'"a"')), unicode) + self.assertEqual(type(self.loads(u'["a"]')[0]), unicode) # Issue 10038. - self.assertEqual(type(json.loads('"foo"')), unicode) + self.assertEqual(type(self.loads('"foo"')), unicode) + + +class TestPyUnicode(TestUnicode, PyTest): pass +class TestCUnicode(TestUnicode, CTest): pass diff --git a/lib-python/2.7/lib-tk/Tix.py b/lib-python/2.7/lib-tk/Tix.py --- a/lib-python/2.7/lib-tk/Tix.py +++ b/lib-python/2.7/lib-tk/Tix.py @@ -163,7 +163,7 @@ extensions) exist, then the image type is chosen according to the depth of the X display: xbm images are chosen on monochrome displays and color images are chosen on color displays. By using - tix_ getimage, you can advoid hard coding the pathnames of the + tix_ getimage, you can avoid hard coding the pathnames of the image files in your application. When successful, this command returns the name of the newly created image, which can be used to configure the -image option of the Tk and Tix widgets. @@ -171,7 +171,7 @@ return self.tk.call('tix', 'getimage', name) def tix_option_get(self, name): - """Gets the options manitained by the Tix + """Gets the options maintained by the Tix scheme mechanism. Available options include: active_bg active_fg bg @@ -576,7 +576,7 @@ class ComboBox(TixWidget): """ComboBox - an Entry field with a dropdown menu. 
The user can select a - choice by either typing in the entry subwdget or selecting from the + choice by either typing in the entry subwidget or selecting from the listbox subwidget. Subwidget Class @@ -869,7 +869,7 @@ """HList - Hierarchy display widget can be used to display any data that have a hierarchical structure, for example, file system directory trees. The list entries are indented and connected by branch lines - according to their places in the hierachy. + according to their places in the hierarchy. Subwidgets - None""" @@ -1520,7 +1520,7 @@ self.tk.call(self._w, 'selection', 'set', first, last) class Tree(TixWidget): - """Tree - The tixTree widget can be used to display hierachical + """Tree - The tixTree widget can be used to display hierarchical data in a tree form. The user can adjust the view of the tree by opening or closing parts of the tree.""" diff --git a/lib-python/2.7/lib-tk/Tkinter.py b/lib-python/2.7/lib-tk/Tkinter.py --- a/lib-python/2.7/lib-tk/Tkinter.py +++ b/lib-python/2.7/lib-tk/Tkinter.py @@ -1660,7 +1660,7 @@ class Tk(Misc, Wm): """Toplevel widget of Tk which represents mostly the main window - of an appliation. It has an associated Tcl interpreter.""" + of an application. It has an associated Tcl interpreter.""" _w = '.' def __init__(self, screenName=None, baseName=None, className='Tk', useTk=1, sync=0, use=None): diff --git a/lib-python/2.7/lib-tk/test/test_ttk/test_functions.py b/lib-python/2.7/lib-tk/test/test_ttk/test_functions.py --- a/lib-python/2.7/lib-tk/test/test_ttk/test_functions.py +++ b/lib-python/2.7/lib-tk/test/test_ttk/test_functions.py @@ -136,7 +136,7 @@ # minimum acceptable for image type self.assertEqual(ttk._format_elemcreate('image', False, 'test'), ("test ", ())) - # specifiyng a state spec + # specifying a state spec self.assertEqual(ttk._format_elemcreate('image', False, 'test', ('', 'a')), ("test {} a", ())) # state spec with multiple states diff --git a/lib-python/2.7/lib-tk/ttk.py b/lib-python/2.7/lib-tk/ttk.py --- a/lib-python/2.7/lib-tk/ttk.py +++ b/lib-python/2.7/lib-tk/ttk.py @@ -707,7 +707,7 @@ textvariable, values, width """ # The "values" option may need special formatting, so leave to - # _format_optdict the responsability to format it + # _format_optdict the responsibility to format it if "values" in kw: kw["values"] = _format_optdict({'v': kw["values"]})[1] @@ -993,7 +993,7 @@ pane is either an integer index or the name of a managed subwindow. If kw is not given, returns a dict of the pane option values. If option is specified then the value for that option is returned. - Otherwise, sets the options to the correspoding values.""" + Otherwise, sets the options to the corresponding values.""" if option is not None: kw[option] = None return _val_or_dict(kw, self.tk.call, self._w, "pane", pane) diff --git a/lib-python/2.7/lib-tk/turtle.py b/lib-python/2.7/lib-tk/turtle.py --- a/lib-python/2.7/lib-tk/turtle.py +++ b/lib-python/2.7/lib-tk/turtle.py @@ -1385,7 +1385,7 @@ Optional argument: picname -- a string, name of a gif-file or "nopic". - If picname is a filename, set the corresponing image as background. + If picname is a filename, set the corresponding image as background. If picname is "nopic", delete backgroundimage, if present. If picname is None, return the filename of the current backgroundimage. 
@@ -1409,7 +1409,7 @@ Optional arguments: canvwidth -- positive integer, new width of canvas in pixels canvheight -- positive integer, new height of canvas in pixels - bg -- colorstring or color-tupel, new backgroundcolor + bg -- colorstring or color-tuple, new backgroundcolor If no arguments are given, return current (canvaswidth, canvasheight) Do not alter the drawing window. To observe hidden parts of @@ -3079,9 +3079,9 @@ fill="", width=ps) # Turtle now at position old, self._position = old - ## if undo is done during crating a polygon, the last vertex - ## will be deleted. if the polygon is entirel deleted, - ## creatigPoly will be set to False. + ## if undo is done during creating a polygon, the last vertex + ## will be deleted. if the polygon is entirely deleted, + ## creatingPoly will be set to False. ## Polygons created before the last one will not be affected by undo() if self._creatingPoly: if len(self._poly) > 0: @@ -3221,7 +3221,7 @@ def dot(self, size=None, *color): """Draw a dot with diameter size, using color. - Optional argumentS: + Optional arguments: size -- an integer >= 1 (if given) color -- a colorstring or a numeric color tuple @@ -3691,7 +3691,7 @@ class Turtle(RawTurtle): - """RawTurtle auto-crating (scrolled) canvas. + """RawTurtle auto-creating (scrolled) canvas. When a Turtle object is created or a function derived from some Turtle method is called a TurtleScreen object is automatically created. @@ -3731,7 +3731,7 @@ filename -- a string, used as filename default value is turtle_docstringdict - Has to be called explicitely, (not used by the turtle-graphics classes) + Has to be called explicitly, (not used by the turtle-graphics classes) The docstring dictionary will be written to the Python script .py It is intended to serve as a template for translation of the docstrings into different languages. 
diff --git a/lib-python/2.7/lib2to3/__main__.py b/lib-python/2.7/lib2to3/__main__.py new file mode 100644 --- /dev/null +++ b/lib-python/2.7/lib2to3/__main__.py @@ -0,0 +1,4 @@ +import sys +from .main import main + +sys.exit(main("lib2to3.fixes")) diff --git a/lib-python/2.7/lib2to3/fixes/fix_itertools.py b/lib-python/2.7/lib2to3/fixes/fix_itertools.py --- a/lib-python/2.7/lib2to3/fixes/fix_itertools.py +++ b/lib-python/2.7/lib2to3/fixes/fix_itertools.py @@ -13,7 +13,7 @@ class FixItertools(fixer_base.BaseFix): BM_compatible = True - it_funcs = "('imap'|'ifilter'|'izip'|'ifilterfalse')" + it_funcs = "('imap'|'ifilter'|'izip'|'izip_longest'|'ifilterfalse')" PATTERN = """ power< it='itertools' trailer< @@ -28,7 +28,8 @@ def transform(self, node, results): prefix = None func = results['func'][0] - if 'it' in results and func.value != u'ifilterfalse': + if ('it' in results and + func.value not in (u'ifilterfalse', u'izip_longest')): dot, it = (results['dot'], results['it']) # Remove the 'itertools' prefix = it.prefix diff --git a/lib-python/2.7/lib2to3/fixes/fix_itertools_imports.py b/lib-python/2.7/lib2to3/fixes/fix_itertools_imports.py --- a/lib-python/2.7/lib2to3/fixes/fix_itertools_imports.py +++ b/lib-python/2.7/lib2to3/fixes/fix_itertools_imports.py @@ -31,9 +31,10 @@ if member_name in (u'imap', u'izip', u'ifilter'): child.value = None child.remove() - elif member_name == u'ifilterfalse': + elif member_name in (u'ifilterfalse', u'izip_longest'): node.changed() - name_node.value = u'filterfalse' + name_node.value = (u'filterfalse' if member_name[1] == u'f' + else u'zip_longest') # Make sure the import statement is still sane children = imports.children[:] or [imports] diff --git a/lib-python/2.7/lib2to3/fixes/fix_metaclass.py b/lib-python/2.7/lib2to3/fixes/fix_metaclass.py --- a/lib-python/2.7/lib2to3/fixes/fix_metaclass.py +++ b/lib-python/2.7/lib2to3/fixes/fix_metaclass.py @@ -48,7 +48,7 @@ """ for node in cls_node.children: if node.type == syms.suite: - # already in the prefered format, do nothing + # already in the preferred format, do nothing return # !%@#! 
oneliners have no suite node, we have to fake one up diff --git a/lib-python/2.7/lib2to3/fixes/fix_urllib.py b/lib-python/2.7/lib2to3/fixes/fix_urllib.py --- a/lib-python/2.7/lib2to3/fixes/fix_urllib.py +++ b/lib-python/2.7/lib2to3/fixes/fix_urllib.py @@ -12,7 +12,7 @@ MAPPING = {"urllib": [ ("urllib.request", - ["URLOpener", "FancyURLOpener", "urlretrieve", + ["URLopener", "FancyURLopener", "urlretrieve", "_urlopener", "urlopen", "urlcleanup", "pathname2url", "url2pathname"]), ("urllib.parse", diff --git a/lib-python/2.7/lib2to3/main.py b/lib-python/2.7/lib2to3/main.py --- a/lib-python/2.7/lib2to3/main.py +++ b/lib-python/2.7/lib2to3/main.py @@ -101,7 +101,7 @@ parser.add_option("-j", "--processes", action="store", default=1, type="int", help="Run 2to3 concurrently") parser.add_option("-x", "--nofix", action="append", default=[], - help="Prevent a fixer from being run.") + help="Prevent a transformation from being run") parser.add_option("-l", "--list-fixes", action="store_true", help="List available transformations") parser.add_option("-p", "--print-function", action="store_true", @@ -113,7 +113,7 @@ parser.add_option("-w", "--write", action="store_true", help="Write back modified files") parser.add_option("-n", "--nobackups", action="store_true", default=False, - help="Don't write backups for modified files.") + help="Don't write backups for modified files") # Parse command line arguments refactor_stdin = False diff --git a/lib-python/2.7/lib2to3/patcomp.py b/lib-python/2.7/lib2to3/patcomp.py --- a/lib-python/2.7/lib2to3/patcomp.py +++ b/lib-python/2.7/lib2to3/patcomp.py @@ -12,6 +12,7 @@ # Python imports import os +import StringIO # Fairly local imports from .pgen2 import driver, literals, token, tokenize, parse, grammar @@ -32,7 +33,7 @@ def tokenize_wrapper(input): """Tokenizes a string suppressing significant whitespace.""" skip = set((token.NEWLINE, token.INDENT, token.DEDENT)) - tokens = tokenize.generate_tokens(driver.generate_lines(input).next) + tokens = tokenize.generate_tokens(StringIO.StringIO(input).readline) for quintuple in tokens: type, value, start, end, line_text = quintuple if type not in skip: diff --git a/lib-python/2.7/lib2to3/pgen2/conv.py b/lib-python/2.7/lib2to3/pgen2/conv.py --- a/lib-python/2.7/lib2to3/pgen2/conv.py +++ b/lib-python/2.7/lib2to3/pgen2/conv.py @@ -51,7 +51,7 @@ self.finish_off() def parse_graminit_h(self, filename): - """Parse the .h file writen by pgen. (Internal) + """Parse the .h file written by pgen. (Internal) This file is a sequence of #define statements defining the nonterminals of the grammar as numbers. We build two tables @@ -82,7 +82,7 @@ return True def parse_graminit_c(self, filename): - """Parse the .c file writen by pgen. (Internal) + """Parse the .c file written by pgen. (Internal) The file looks as follows. 
The first two lines are always this: diff --git a/lib-python/2.7/lib2to3/pgen2/driver.py b/lib-python/2.7/lib2to3/pgen2/driver.py --- a/lib-python/2.7/lib2to3/pgen2/driver.py +++ b/lib-python/2.7/lib2to3/pgen2/driver.py @@ -19,6 +19,7 @@ import codecs import os import logging +import StringIO import sys # Pgen imports @@ -101,18 +102,10 @@ def parse_string(self, text, debug=False): """Parse a string and return the syntax tree.""" - tokens = tokenize.generate_tokens(generate_lines(text).next) + tokens = tokenize.generate_tokens(StringIO.StringIO(text).readline) return self.parse_tokens(tokens, debug) -def generate_lines(text): - """Generator that behaves like readline without using StringIO.""" - for line in text.splitlines(True): - yield line - while True: - yield "" - - def load_grammar(gt="Grammar.txt", gp=None, save=True, force=False, logger=None): """Load the grammar (maybe from a pickle).""" diff --git a/lib-python/2.7/lib2to3/pytree.py b/lib-python/2.7/lib2to3/pytree.py --- a/lib-python/2.7/lib2to3/pytree.py +++ b/lib-python/2.7/lib2to3/pytree.py @@ -658,8 +658,8 @@ content: optional sequence of subsequences of patterns; if absent, matches one node; if present, each subsequence is an alternative [*] - min: optinal minumum number of times to match, default 0 - max: optional maximum number of times tro match, default HUGE + min: optional minimum number of times to match, default 0 + max: optional maximum number of times to match, default HUGE name: optional name assigned to this match [*] Thus, if content is [[a, b, c], [d, e], [f, g, h]] this is @@ -743,9 +743,11 @@ else: # The reason for this is that hitting the recursion limit usually # results in some ugly messages about how RuntimeErrors are being - # ignored. - save_stderr = sys.stderr - sys.stderr = StringIO() + # ignored. We don't do this on non-CPython implementation because + # they don't have this problem. + if hasattr(sys, "getrefcount"): + save_stderr = sys.stderr + sys.stderr = StringIO() try: for count, r in self._recursive_matches(nodes, 0): if self.name: @@ -759,7 +761,8 @@ r[self.name] = nodes[:count] yield count, r finally: - sys.stderr = save_stderr + if hasattr(sys, "getrefcount"): + sys.stderr = save_stderr def _iterative_matches(self, nodes): """Helper to iteratively yield the matches.""" diff --git a/lib-python/2.7/lib2to3/refactor.py b/lib-python/2.7/lib2to3/refactor.py --- a/lib-python/2.7/lib2to3/refactor.py +++ b/lib-python/2.7/lib2to3/refactor.py @@ -302,13 +302,14 @@ Files and subdirectories starting with '.' are skipped. 
""" + py_ext = os.extsep + "py" for dirpath, dirnames, filenames in os.walk(dir_name): self.log_debug("Descending into %s", dirpath) dirnames.sort() filenames.sort() for name in filenames: - if not name.startswith(".") and \ - os.path.splitext(name)[1].endswith("py"): + if (not name.startswith(".") and + os.path.splitext(name)[1] == py_ext): fullname = os.path.join(dirpath, name) self.refactor_file(fullname, write, doctests_only) # Modify dirnames in-place to remove subdirs with leading dots diff --git a/lib-python/2.7/lib2to3/tests/data/py2_test_grammar.py b/lib-python/2.7/lib2to3/tests/data/py2_test_grammar.py --- a/lib-python/2.7/lib2to3/tests/data/py2_test_grammar.py +++ b/lib-python/2.7/lib2to3/tests/data/py2_test_grammar.py @@ -316,7 +316,7 @@ ### simple_stmt: small_stmt (';' small_stmt)* [';'] x = 1; pass; del x def foo(): - # verify statments that end with semi-colons + # verify statements that end with semi-colons x = 1; pass; del x; foo() diff --git a/lib-python/2.7/lib2to3/tests/data/py3_test_grammar.py b/lib-python/2.7/lib2to3/tests/data/py3_test_grammar.py --- a/lib-python/2.7/lib2to3/tests/data/py3_test_grammar.py +++ b/lib-python/2.7/lib2to3/tests/data/py3_test_grammar.py @@ -356,7 +356,7 @@ ### simple_stmt: small_stmt (';' small_stmt)* [';'] x = 1; pass; del x def foo(): - # verify statments that end with semi-colons + # verify statements that end with semi-colons x = 1; pass; del x; foo() diff --git a/lib-python/2.7/lib2to3/tests/test_fixers.py b/lib-python/2.7/lib2to3/tests/test_fixers.py --- a/lib-python/2.7/lib2to3/tests/test_fixers.py +++ b/lib-python/2.7/lib2to3/tests/test_fixers.py @@ -3623,16 +3623,24 @@ a = """%s(f, a)""" self.checkall(b, a) - def test_2(self): + def test_qualified(self): b = """itertools.ifilterfalse(a, b)""" a = """itertools.filterfalse(a, b)""" self.check(b, a) - def test_4(self): + b = """itertools.izip_longest(a, b)""" + a = """itertools.zip_longest(a, b)""" + self.check(b, a) + + def test_2(self): b = """ifilterfalse(a, b)""" a = """filterfalse(a, b)""" self.check(b, a) + b = """izip_longest(a, b)""" + a = """zip_longest(a, b)""" + self.check(b, a) + def test_space_1(self): b = """ %s(f, a)""" a = """ %s(f, a)""" @@ -3643,9 +3651,14 @@ a = """ itertools.filterfalse(a, b)""" self.check(b, a) + b = """ itertools.izip_longest(a, b)""" + a = """ itertools.zip_longest(a, b)""" + self.check(b, a) + def test_run_order(self): self.assert_runs_after('map', 'zip', 'filter') + class Test_itertools_imports(FixerTestCase): fixer = 'itertools_imports' @@ -3696,18 +3709,19 @@ s = "from itertools import bar as bang" self.unchanged(s) - def test_ifilter(self): - b = "from itertools import ifilterfalse" - a = "from itertools import filterfalse" - self.check(b, a) - - b = "from itertools import imap, ifilterfalse, foo" - a = "from itertools import filterfalse, foo" - self.check(b, a) - - b = "from itertools import bar, ifilterfalse, foo" - a = "from itertools import bar, filterfalse, foo" - self.check(b, a) + def test_ifilter_and_zip_longest(self): + for name in "filterfalse", "zip_longest": + b = "from itertools import i%s" % (name,) + a = "from itertools import %s" % (name,) + self.check(b, a) + + b = "from itertools import imap, i%s, foo" % (name,) + a = "from itertools import %s, foo" % (name,) + self.check(b, a) + + b = "from itertools import bar, i%s, foo" % (name,) + a = "from itertools import bar, %s, foo" % (name,) + self.check(b, a) def test_import_star(self): s = "from itertools import *" diff --git a/lib-python/2.7/lib2to3/tests/test_parser.py 
b/lib-python/2.7/lib2to3/tests/test_parser.py --- a/lib-python/2.7/lib2to3/tests/test_parser.py +++ b/lib-python/2.7/lib2to3/tests/test_parser.py @@ -19,6 +19,16 @@ # Local imports from lib2to3.pgen2 import tokenize from ..pgen2.parse import ParseError +from lib2to3.pygram import python_symbols as syms + + +class TestDriver(support.TestCase): + + def test_formfeed(self): + s = """print 1\n\x0Cprint 2\n""" + t = driver.parse_string(s) + self.assertEqual(t.children[0].children[0].type, syms.print_stmt) + self.assertEqual(t.children[1].children[0].type, syms.print_stmt) class GrammarTest(support.TestCase): diff --git a/lib-python/2.7/lib2to3/tests/test_refactor.py b/lib-python/2.7/lib2to3/tests/test_refactor.py --- a/lib-python/2.7/lib2to3/tests/test_refactor.py +++ b/lib-python/2.7/lib2to3/tests/test_refactor.py @@ -223,6 +223,7 @@ "hi.py", ".dumb", ".after.py", + "notpy.npy", "sappy"] expected = ["hi.py"] check(tree, expected) diff --git a/lib-python/2.7/lib2to3/tests/test_util.py b/lib-python/2.7/lib2to3/tests/test_util.py --- a/lib-python/2.7/lib2to3/tests/test_util.py +++ b/lib-python/2.7/lib2to3/tests/test_util.py @@ -568,8 +568,8 @@ def test_from_import(self): node = parse('bar()') - fixer_util.touch_import("cgi", "escape", node) - self.assertEqual(str(node), 'from cgi import escape\nbar()\n\n') + fixer_util.touch_import("html", "escape", node) + self.assertEqual(str(node), 'from html import escape\nbar()\n\n') def test_name_import(self): node = parse('bar()') diff --git a/lib-python/2.7/locale.py b/lib-python/2.7/locale.py --- a/lib-python/2.7/locale.py +++ b/lib-python/2.7/locale.py @@ -621,7 +621,7 @@ 'tactis': 'TACTIS', 'euc_jp': 'eucJP', 'euc_kr': 'eucKR', - 'utf_8': 'UTF8', + 'utf_8': 'UTF-8', 'koi8_r': 'KOI8-R', 'koi8_u': 'KOI8-U', # XXX This list is still incomplete. If you know more diff --git a/lib-python/2.7/logging/__init__.py b/lib-python/2.7/logging/__init__.py --- a/lib-python/2.7/logging/__init__.py +++ b/lib-python/2.7/logging/__init__.py @@ -1627,6 +1627,7 @@ h = wr() if h: try: + h.acquire() h.flush() h.close() except (IOError, ValueError): @@ -1635,6 +1636,8 @@ # references to them are still around at # application exit. pass + finally: + h.release() except: if raiseExceptions: raise diff --git a/lib-python/2.7/logging/config.py b/lib-python/2.7/logging/config.py --- a/lib-python/2.7/logging/config.py +++ b/lib-python/2.7/logging/config.py @@ -226,14 +226,14 @@ propagate = 1 logger = logging.getLogger(qn) if qn in existing: - i = existing.index(qn) + i = existing.index(qn) + 1 # start with the entry after qn prefixed = qn + "." 
pflen = len(prefixed) num_existing = len(existing) - i = i + 1 # look at the entry after qn - while (i < num_existing) and (existing[i][:pflen] == prefixed): - child_loggers.append(existing[i]) - i = i + 1 + while i < num_existing: + if existing[i][:pflen] == prefixed: + child_loggers.append(existing[i]) + i += 1 existing.remove(qn) if "level" in opts: level = cp.get(sectname, "level") diff --git a/lib-python/2.7/logging/handlers.py b/lib-python/2.7/logging/handlers.py --- a/lib-python/2.7/logging/handlers.py +++ b/lib-python/2.7/logging/handlers.py @@ -125,6 +125,7 @@ """ if self.stream: self.stream.close() + self.stream = None if self.backupCount > 0: for i in range(self.backupCount - 1, 0, -1): sfn = "%s.%d" % (self.baseFilename, i) @@ -324,6 +325,7 @@ """ if self.stream: self.stream.close() + self.stream = None # get the time that this sequence started at and make it a TimeTuple t = self.rolloverAt - self.interval if self.utc: diff --git a/lib-python/2.7/mailbox.py b/lib-python/2.7/mailbox.py --- a/lib-python/2.7/mailbox.py +++ b/lib-python/2.7/mailbox.py @@ -234,27 +234,35 @@ def __init__(self, dirname, factory=rfc822.Message, create=True): """Initialize a Maildir instance.""" Mailbox.__init__(self, dirname, factory, create) + self._paths = { + 'tmp': os.path.join(self._path, 'tmp'), + 'new': os.path.join(self._path, 'new'), + 'cur': os.path.join(self._path, 'cur'), + } if not os.path.exists(self._path): if create: os.mkdir(self._path, 0700) - os.mkdir(os.path.join(self._path, 'tmp'), 0700) - os.mkdir(os.path.join(self._path, 'new'), 0700) - os.mkdir(os.path.join(self._path, 'cur'), 0700) + for path in self._paths.values(): + os.mkdir(path, 0o700) else: raise NoSuchMailboxError(self._path) self._toc = {} - self._last_read = None # Records last time we read cur/new - # NOTE: we manually invalidate _last_read each time we do any - # modifications ourselves, otherwise we might get tripped up by - # bogus mtime behaviour on some systems (see issue #6896). 
+ self._toc_mtimes = {} + for subdir in ('cur', 'new'): + self._toc_mtimes[subdir] = os.path.getmtime(self._paths[subdir]) + self._last_read = time.time() # Records last time we read cur/new + self._skewfactor = 0.1 # Adjust if os/fs clocks are skewing def add(self, message): """Add message and return assigned key.""" tmp_file = self._create_tmp() try: self._dump_message(message, tmp_file) - finally: - _sync_close(tmp_file) + except BaseException: + tmp_file.close() + os.remove(tmp_file.name) + raise + _sync_close(tmp_file) if isinstance(message, MaildirMessage): subdir = message.get_subdir() suffix = self.colon + message.get_info() @@ -280,15 +288,11 @@ raise if isinstance(message, MaildirMessage): os.utime(dest, (os.path.getatime(dest), message.get_date())) - # Invalidate cached toc - self._last_read = None return uniq def remove(self, key): """Remove the keyed message; raise KeyError if it doesn't exist.""" os.remove(os.path.join(self._path, self._lookup(key))) - # Invalidate cached toc (only on success) - self._last_read = None def discard(self, key): """If the keyed message exists, remove it.""" @@ -323,8 +327,6 @@ if isinstance(message, MaildirMessage): os.utime(new_path, (os.path.getatime(new_path), message.get_date())) - # Invalidate cached toc - self._last_read = None def get_message(self, key): """Return a Message representation or raise a KeyError.""" @@ -380,8 +382,8 @@ def flush(self): """Write any pending changes to disk.""" # Maildir changes are always written immediately, so there's nothing - # to do except invalidate our cached toc. - self._last_read = None + # to do. + pass def lock(self): """Lock the mailbox.""" @@ -479,36 +481,39 @@ def _refresh(self): """Update table of contents mapping.""" - if self._last_read is not None: - for subdir in ('new', 'cur'): - mtime = os.path.getmtime(os.path.join(self._path, subdir)) - if mtime > self._last_read: - break - else: + # If it has been less than two seconds since the last _refresh() call, + # we have to unconditionally re-read the mailbox just in case it has + # been modified, because os.path.mtime() has a 2 sec resolution in the + # most common worst case (FAT) and a 1 sec resolution typically. This + # results in a few unnecessary re-reads when _refresh() is called + # multiple times in that interval, but once the clock ticks over, we + # will only re-read as needed. Because the filesystem might be being + # served by an independent system with its own clock, we record and + # compare with the mtimes from the filesystem. Because the other + # system's clock might be skewing relative to our clock, we add an + # extra delta to our wait. The default is one tenth second, but is an + # instance variable and so can be adjusted if dealing with a + # particularly skewed or irregular system. + if time.time() - self._last_read > 2 + self._skewfactor: + refresh = False + for subdir in self._toc_mtimes: + mtime = os.path.getmtime(self._paths[subdir]) + if mtime > self._toc_mtimes[subdir]: + refresh = True + self._toc_mtimes[subdir] = mtime + if not refresh: return - - # We record the current time - 1sec so that, if _refresh() is called - # again in the same second, we will always re-read the mailbox - # just in case it's been modified. (os.path.mtime() only has - # 1sec resolution.) This results in a few unnecessary re-reads - # when _refresh() is called multiple times in the same second, - # but once the clock ticks over, we will only re-read as needed. 
- now = time.time() - 1 - + # Refresh toc self._toc = {} - def update_dir (subdir): - path = os.path.join(self._path, subdir) + for subdir in self._toc_mtimes: + path = self._paths[subdir] for entry in os.listdir(path): p = os.path.join(path, entry) if os.path.isdir(p): continue uniq = entry.split(self.colon)[0] self._toc[uniq] = os.path.join(subdir, entry) - - update_dir('new') - update_dir('cur') - - self._last_read = now + self._last_read = time.time() def _lookup(self, key): """Use TOC to return subpath for given key, or raise a KeyError.""" @@ -551,7 +556,7 @@ f = open(self._path, 'wb+') else: raise NoSuchMailboxError(self._path) - elif e.errno == errno.EACCES: + elif e.errno in (errno.EACCES, errno.EROFS): f = open(self._path, 'rb') else: raise @@ -700,9 +705,14 @@ def _append_message(self, message): """Append message to mailbox and return (start, stop) offsets.""" self._file.seek(0, 2) - self._pre_message_hook(self._file) - offsets = self._install_message(message) - self._post_message_hook(self._file) + before = self._file.tell() + try: + self._pre_message_hook(self._file) + offsets = self._install_message(message) + self._post_message_hook(self._file) + except BaseException: + self._file.truncate(before) + raise self._file.flush() self._file_length = self._file.tell() # Record current length of mailbox return offsets @@ -868,18 +878,29 @@ new_key = max(keys) + 1 new_path = os.path.join(self._path, str(new_key)) f = _create_carefully(new_path) + closed = False try: if self._locked: _lock_file(f) try: - self._dump_message(message, f) + try: + self._dump_message(message, f) + except BaseException: + # Unlock and close so it can be deleted on Windows + if self._locked: + _unlock_file(f) + _sync_close(f) + closed = True + os.remove(new_path) + raise if isinstance(message, MHMessage): self._dump_sequences(message, new_key) finally: if self._locked: _unlock_file(f) finally: - _sync_close(f) + if not closed: + _sync_close(f) return new_key def remove(self, key): @@ -1886,7 +1907,7 @@ try: fcntl.lockf(f, fcntl.LOCK_EX | fcntl.LOCK_NB) except IOError, e: - if e.errno in (errno.EAGAIN, errno.EACCES): + if e.errno in (errno.EAGAIN, errno.EACCES, errno.EROFS): raise ExternalClashError('lockf: lock unavailable: %s' % f.name) else: @@ -1896,7 +1917,7 @@ pre_lock = _create_temporary(f.name + '.lock') pre_lock.close() except IOError, e: - if e.errno == errno.EACCES: + if e.errno in (errno.EACCES, errno.EROFS): return # Without write access, just skip dotlocking. 
else: raise diff --git a/lib-python/2.7/msilib/__init__.py b/lib-python/2.7/msilib/__init__.py --- a/lib-python/2.7/msilib/__init__.py +++ b/lib-python/2.7/msilib/__init__.py @@ -173,11 +173,10 @@ add_data(db, table, getattr(module, table)) def make_id(str): - #str = str.replace(".", "_") # colons are allowed - str = str.replace(" ", "_") - str = str.replace("-", "_") - if str[0] in string.digits: - str = "_"+str + identifier_chars = string.ascii_letters + string.digits + "._" + str = "".join([c if c in identifier_chars else "_" for c in str]) + if str[0] in (string.digits + "."): + str = "_" + str assert re.match("^[A-Za-z_][A-Za-z0-9_.]*$", str), "FILE"+str return str @@ -285,19 +284,28 @@ [(feature.id, component)]) def make_short(self, file): + oldfile = file + file = file.replace('+', '_') + file = ''.join(c for c in file if not c in ' "/\[]:;=,') parts = file.split(".") - if len(parts)>1: + if len(parts) > 1: + prefix = "".join(parts[:-1]).upper() suffix = parts[-1].upper() + if not prefix: + prefix = suffix + suffix = None else: + prefix = file.upper() suffix = None - prefix = parts[0].upper() - if len(prefix) <= 8 and (not suffix or len(suffix)<=3): + if len(parts) < 3 and len(prefix) <= 8 and file == oldfile and ( + not suffix or len(suffix) <= 3): if suffix: file = prefix+"."+suffix else: file = prefix - assert file not in self.short_names else: + file = None + if file is None or file in self.short_names: prefix = prefix[:6] if suffix: suffix = suffix[:3] diff --git a/lib-python/2.7/multiprocessing/__init__.py b/lib-python/2.7/multiprocessing/__init__.py --- a/lib-python/2.7/multiprocessing/__init__.py +++ b/lib-python/2.7/multiprocessing/__init__.py @@ -38,6 +38,7 @@ # HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT # LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY # OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __version__ = '0.70a1' @@ -115,8 +116,11 @@ except (ValueError, KeyError): num = 0 elif 'bsd' in sys.platform or sys.platform == 'darwin': + comm = '/sbin/sysctl -n hw.ncpu' + if sys.platform == 'darwin': + comm = '/usr' + comm try: - with os.popen('sysctl -n hw.ncpu') as p: + with os.popen(comm) as p: num = int(p.read()) except ValueError: num = 0 diff --git a/lib-python/2.7/multiprocessing/connection.py b/lib-python/2.7/multiprocessing/connection.py --- a/lib-python/2.7/multiprocessing/connection.py +++ b/lib-python/2.7/multiprocessing/connection.py @@ -3,7 +3,33 @@ # # multiprocessing/connection.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. 
+# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __all__ = [ 'Client', 'Listener', 'Pipe' ] diff --git a/lib-python/2.7/multiprocessing/dummy/__init__.py b/lib-python/2.7/multiprocessing/dummy/__init__.py --- a/lib-python/2.7/multiprocessing/dummy/__init__.py +++ b/lib-python/2.7/multiprocessing/dummy/__init__.py @@ -3,7 +3,33 @@ # # multiprocessing/dummy/__init__.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __all__ = [ diff --git a/lib-python/2.7/multiprocessing/dummy/connection.py b/lib-python/2.7/multiprocessing/dummy/connection.py --- a/lib-python/2.7/multiprocessing/dummy/connection.py +++ b/lib-python/2.7/multiprocessing/dummy/connection.py @@ -3,7 +3,33 @@ # # multiprocessing/dummy/connection.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. 
Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __all__ = [ 'Client', 'Listener', 'Pipe' ] diff --git a/lib-python/2.7/multiprocessing/forking.py b/lib-python/2.7/multiprocessing/forking.py --- a/lib-python/2.7/multiprocessing/forking.py +++ b/lib-python/2.7/multiprocessing/forking.py @@ -3,7 +3,33 @@ # # multiprocessing/forking.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # import os @@ -172,6 +198,7 @@ TERMINATE = 0x10000 WINEXE = (sys.platform == 'win32' and getattr(sys, 'frozen', False)) + WINSERVICE = sys.executable.lower().endswith("pythonservice.exe") exit = win32.ExitProcess close = win32.CloseHandle @@ -181,7 +208,7 @@ # People embedding Python want to modify it. 
# - if sys.executable.lower().endswith('pythonservice.exe'): + if WINSERVICE: _python_exe = os.path.join(sys.exec_prefix, 'python.exe') else: _python_exe = sys.executable @@ -371,7 +398,7 @@ if _logger is not None: d['log_level'] = _logger.getEffectiveLevel() - if not WINEXE: + if not WINEXE and not WINSERVICE: main_path = getattr(sys.modules['__main__'], '__file__', None) if not main_path and sys.argv[0] not in ('', '-c'): main_path = sys.argv[0] diff --git a/lib-python/2.7/multiprocessing/heap.py b/lib-python/2.7/multiprocessing/heap.py --- a/lib-python/2.7/multiprocessing/heap.py +++ b/lib-python/2.7/multiprocessing/heap.py @@ -3,7 +3,33 @@ # # multiprocessing/heap.py # -# Copyright (c) 2007-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # import bisect diff --git a/lib-python/2.7/multiprocessing/managers.py b/lib-python/2.7/multiprocessing/managers.py --- a/lib-python/2.7/multiprocessing/managers.py +++ b/lib-python/2.7/multiprocessing/managers.py @@ -4,7 +4,33 @@ # # multiprocessing/managers.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. 
+# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __all__ = [ 'BaseManager', 'SyncManager', 'BaseProxy', 'Token' ] diff --git a/lib-python/2.7/multiprocessing/pool.py b/lib-python/2.7/multiprocessing/pool.py --- a/lib-python/2.7/multiprocessing/pool.py +++ b/lib-python/2.7/multiprocessing/pool.py @@ -3,7 +3,33 @@ # # multiprocessing/pool.py # -# Copyright (c) 2007-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __all__ = ['Pool'] @@ -269,6 +295,8 @@ while pool._worker_handler._state == RUN and pool._state == RUN: pool._maintain_pool() time.sleep(0.1) + # send sentinel to stop workers + pool._taskqueue.put(None) debug('worker handler exiting') @staticmethod @@ -387,7 +415,6 @@ if self._state == RUN: self._state = CLOSE self._worker_handler._state = CLOSE - self._taskqueue.put(None) def terminate(self): debug('terminating pool') @@ -421,7 +448,6 @@ worker_handler._state = TERMINATE task_handler._state = TERMINATE - taskqueue.put(None) # sentinel debug('helping task handler/workers to finish') cls._help_stuff_finish(inqueue, task_handler, len(pool)) @@ -431,6 +457,11 @@ result_handler._state = TERMINATE outqueue.put(None) # sentinel + # We must wait for the worker handler to exit before terminating + # workers because we don't want workers to be restarted behind our back. 
+ debug('joining worker handler') + worker_handler.join() + # Terminate workers which haven't already finished. if pool and hasattr(pool[0], 'terminate'): debug('terminating workers') diff --git a/lib-python/2.7/multiprocessing/process.py b/lib-python/2.7/multiprocessing/process.py --- a/lib-python/2.7/multiprocessing/process.py +++ b/lib-python/2.7/multiprocessing/process.py @@ -3,7 +3,33 @@ # # multiprocessing/process.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __all__ = ['Process', 'current_process', 'active_children'] diff --git a/lib-python/2.7/multiprocessing/queues.py b/lib-python/2.7/multiprocessing/queues.py --- a/lib-python/2.7/multiprocessing/queues.py +++ b/lib-python/2.7/multiprocessing/queues.py @@ -3,7 +3,33 @@ # # multiprocessing/queues.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. 
IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __all__ = ['Queue', 'SimpleQueue', 'JoinableQueue'] diff --git a/lib-python/2.7/multiprocessing/reduction.py b/lib-python/2.7/multiprocessing/reduction.py --- a/lib-python/2.7/multiprocessing/reduction.py +++ b/lib-python/2.7/multiprocessing/reduction.py @@ -4,7 +4,33 @@ # # multiprocessing/reduction.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __all__ = [] diff --git a/lib-python/2.7/multiprocessing/sharedctypes.py b/lib-python/2.7/multiprocessing/sharedctypes.py --- a/lib-python/2.7/multiprocessing/sharedctypes.py +++ b/lib-python/2.7/multiprocessing/sharedctypes.py @@ -3,7 +3,33 @@ # # multiprocessing/sharedctypes.py # -# Copyright (c) 2007-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. 
+# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # import sys @@ -52,9 +78,11 @@ Returns a ctypes array allocated from shared memory ''' type_ = typecode_to_type.get(typecode_or_type, typecode_or_type) - if isinstance(size_or_initializer, int): + if isinstance(size_or_initializer, (int, long)): type_ = type_ * size_or_initializer - return _new_value(type_) + obj = _new_value(type_) + ctypes.memset(ctypes.addressof(obj), 0, ctypes.sizeof(obj)) + return obj else: type_ = type_ * len(size_or_initializer) result = _new_value(type_) diff --git a/lib-python/2.7/multiprocessing/synchronize.py b/lib-python/2.7/multiprocessing/synchronize.py --- a/lib-python/2.7/multiprocessing/synchronize.py +++ b/lib-python/2.7/multiprocessing/synchronize.py @@ -3,7 +3,33 @@ # # multiprocessing/synchronize.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __all__ = [ diff --git a/lib-python/2.7/multiprocessing/util.py b/lib-python/2.7/multiprocessing/util.py --- a/lib-python/2.7/multiprocessing/util.py +++ b/lib-python/2.7/multiprocessing/util.py @@ -3,7 +3,33 @@ # # multiprocessing/util.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. 
+# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # import itertools diff --git a/lib-python/2.7/netrc.py b/lib-python/2.7/netrc.py --- a/lib-python/2.7/netrc.py +++ b/lib-python/2.7/netrc.py @@ -34,11 +34,19 @@ def _parse(self, file, fp): lexer = shlex.shlex(fp) lexer.wordchars += r"""!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~""" + lexer.commenters = lexer.commenters.replace('#', '') while 1: # Look for a machine, default, or macdef top-level keyword toplevel = tt = lexer.get_token() if not tt: break + elif tt[0] == '#': + # seek to beginning of comment, in case reading the token put + # us on a new line, and then skip the rest of the line. + pos = len(tt) + 1 + lexer.instream.seek(-pos, 1) + lexer.instream.readline() + continue elif tt == 'machine': entryname = lexer.get_token() elif tt == 'default': @@ -64,8 +72,8 @@ self.hosts[entryname] = {} while 1: tt = lexer.get_token() - if (tt=='' or tt == 'machine' or - tt == 'default' or tt =='macdef'): + if (tt.startswith('#') or + tt in {'', 'machine', 'default', 'macdef'}): if password: self.hosts[entryname] = (login, account, password) lexer.push_token(tt) diff --git a/lib-python/2.7/nntplib.py b/lib-python/2.7/nntplib.py --- a/lib-python/2.7/nntplib.py +++ b/lib-python/2.7/nntplib.py @@ -103,7 +103,7 @@ readermode is sometimes necessary if you are connecting to an NNTP server on the local machine and intend to call - reader-specific comamnds, such as `group'. If you get + reader-specific commands, such as `group'. If you get unexpected NNTPPermanentErrors, you might need to set readermode. """ diff --git a/lib-python/2.7/ntpath.py b/lib-python/2.7/ntpath.py --- a/lib-python/2.7/ntpath.py +++ b/lib-python/2.7/ntpath.py @@ -310,7 +310,7 @@ # - $varname is accepted. # - %varname% is accepted. # - varnames can be made out of letters, digits and the characters '_-' -# (though is not verifed in the ${varname} and %varname% cases) +# (though is not verified in the ${varname} and %varname% cases) # XXX With COMMAND.COM you can use any characters in a variable name, # XXX except '^|<>='. 
diff --git a/lib-python/2.7/nturl2path.py b/lib-python/2.7/nturl2path.py --- a/lib-python/2.7/nturl2path.py +++ b/lib-python/2.7/nturl2path.py @@ -25,11 +25,14 @@ error = 'Bad URL: ' + url raise IOError, error drive = comp[0][-1].upper() + path = drive + ':' components = comp[1].split('/') - path = drive + ':' - for comp in components: + for comp in components: if comp: path = path + '\\' + urllib.unquote(comp) + # Issue #11474: url like '/C|/' should convert into 'C:\\' + if path.endswith(':') and url.endswith('/'): + path += '\\' return path def pathname2url(p): diff --git a/lib-python/2.7/numbers.py b/lib-python/2.7/numbers.py --- a/lib-python/2.7/numbers.py +++ b/lib-python/2.7/numbers.py @@ -63,7 +63,7 @@ @abstractproperty def imag(self): - """Retrieve the real component of this number. + """Retrieve the imaginary component of this number. This should subclass Real. """ diff --git a/lib-python/2.7/optparse.py b/lib-python/2.7/optparse.py --- a/lib-python/2.7/optparse.py +++ b/lib-python/2.7/optparse.py @@ -1131,6 +1131,11 @@ prog : string the name of the current program (to override os.path.basename(sys.argv[0])). + description : string + A paragraph of text giving a brief overview of your program. + optparse reformats this paragraph to fit the current terminal + width and prints it when the user requests help (after usage, + but before the list of options). epilog : string paragraph of help text to print after option help diff --git a/lib-python/2.7/pickletools.py b/lib-python/2.7/pickletools.py --- a/lib-python/2.7/pickletools.py +++ b/lib-python/2.7/pickletools.py @@ -1370,7 +1370,7 @@ proto=0, doc="""Read an object from the memo and push it on the stack. - The index of the memo object to push is given by the newline-teriminated + The index of the memo object to push is given by the newline-terminated decimal string following. BINGET and LONG_BINGET are space-optimized versions. """), diff --git a/lib-python/2.7/pkgutil.py b/lib-python/2.7/pkgutil.py --- a/lib-python/2.7/pkgutil.py +++ b/lib-python/2.7/pkgutil.py @@ -11,7 +11,7 @@ __all__ = [ 'get_importer', 'iter_importers', 'get_loader', 'find_loader', - 'walk_packages', 'iter_modules', + 'walk_packages', 'iter_modules', 'get_data', 'ImpImporter', 'ImpLoader', 'read_code', 'extend_path', ] diff --git a/lib-python/2.7/platform.py b/lib-python/2.7/platform.py --- a/lib-python/2.7/platform.py +++ b/lib-python/2.7/platform.py @@ -503,7 +503,7 @@ info = pipe.read() if pipe.close(): raise os.error,'command failed' - # XXX How can I supress shell errors from being written + # XXX How can I suppress shell errors from being written # to stderr ? except os.error,why: #print 'Command %s failed: %s' % (cmd,why) @@ -1448,9 +1448,10 @@ """ Returns a string identifying the Python implementation. Currently, the following implementations are identified: - 'CPython' (C implementation of Python), - 'IronPython' (.NET implementation of Python), - 'Jython' (Java implementation of Python). + 'CPython' (C implementation of Python), + 'IronPython' (.NET implementation of Python), + 'Jython' (Java implementation of Python), + 'PyPy' (Python implementation of Python). """ return _sys_version()[0] diff --git a/lib-python/2.7/pydoc.py b/lib-python/2.7/pydoc.py --- a/lib-python/2.7/pydoc.py +++ b/lib-python/2.7/pydoc.py @@ -156,7 +156,7 @@ no.append(x) return yes, no -def visiblename(name, all=None): +def visiblename(name, all=None, obj=None): """Decide whether to show documentation on a variable.""" # Certain special names are redundant. 
_hidden_names = ('__builtins__', '__doc__', '__file__', '__path__', @@ -164,6 +164,9 @@ if name in _hidden_names: return 0 # Private names are hidden, but special names are displayed. if name.startswith('__') and name.endswith('__'): return 1 + # Namedtuples have public fields and methods with a single leading underscore + if name.startswith('_') and hasattr(obj, '_fields'): + return 1 if all is not None: # only document that which the programmer exported in __all__ return name in all @@ -475,9 +478,9 @@ def multicolumn(self, list, format, cols=4): """Format a list of items into a multi-column list.""" result = '' - rows = (len(list)+cols-1)/cols + rows = (len(list)+cols-1)//cols for col in range(cols): - result = result + '<td width="%d%%" valign=top>' % (100/cols) + result = result + '<td width="%d%%" valign=top>' % (100//cols) for i in range(rows*col, rows*col+rows): if i < len(list): result = result + format(list[i]) + '<br>\n'
@@ -627,7 +630,7 @@ # if __all__ exists, believe it. Otherwise use old heuristic. if (all is not None or (inspect.getmodule(value) or object) is object): - if visiblename(key, all): + if visiblename(key, all, object): classes.append((key, value)) cdict[key] = cdict[value] = '#' + key for key, value in classes: @@ -643,13 +646,13 @@ # if __all__ exists, believe it. Otherwise use old heuristic. if (all is not None or inspect.isbuiltin(value) or inspect.getmodule(value) is object): - if visiblename(key, all): + if visiblename(key, all, object): funcs.append((key, value)) fdict[key] = '#-' + key if inspect.isfunction(value): fdict[value] = fdict[key] data = [] for key, value in inspect.getmembers(object, isdata): - if visiblename(key, all): + if visiblename(key, all, object): data.append((key, value)) doc = self.markup(getdoc(object), self.preformat, fdict, cdict) @@ -773,7 +776,7 @@ push('\n') return attrs - attrs = filter(lambda data: visiblename(data[0]), + attrs = filter(lambda data: visiblename(data[0], obj=object), classify_class_attrs(object)) mdict = {} for key, kind, homecls, value in attrs: @@ -1042,18 +1045,18 @@ # if __all__ exists, believe it. Otherwise use old heuristic. if (all is not None or (inspect.getmodule(value) or object) is object): - if visiblename(key, all): + if visiblename(key, all, object): classes.append((key, value)) funcs = [] for key, value in inspect.getmembers(object, inspect.isroutine): # if __all__ exists, believe it. Otherwise use old heuristic. if (all is not None or inspect.isbuiltin(value) or inspect.getmodule(value) is object): - if visiblename(key, all): + if visiblename(key, all, object): funcs.append((key, value)) data = [] for key, value in inspect.getmembers(object, isdata): - if visiblename(key, all): + if visiblename(key, all, object): data.append((key, value)) modpkgs = [] @@ -1113,7 +1116,7 @@ result = result + self.section('CREDITS', str(object.__credits__)) return result - def docclass(self, object, name=None, mod=None): + def docclass(self, object, name=None, mod=None, *ignored): """Produce text documentation for a given class object.""" realname = object.__name__ name = name or realname @@ -1186,7 +1189,7 @@ name, mod, maxlen=70, doc=doc) + '\n') return attrs - attrs = filter(lambda data: visiblename(data[0]), + attrs = filter(lambda data: visiblename(data[0], obj=object), classify_class_attrs(object)) while attrs: if mro: @@ -1718,8 +1721,9 @@ return '' return '<pydoc.Helper instance>' - def __call__(self, request=None): - if request is not None: + _GoInteractive = object() + def __call__(self, request=_GoInteractive): + if request is not self._GoInteractive: self.help(request) else: self.intro() diff --git a/lib-python/2.7/pydoc_data/topics.py b/lib-python/2.7/pydoc_data/topics.py --- a/lib-python/2.7/pydoc_data/topics.py +++ b/lib-python/2.7/pydoc_data/topics.py @@ -1,16 +1,16 @@ -# Autogenerated by Sphinx on Sat Jul 3 08:52:04 2010 +# Autogenerated by Sphinx on Sat Jun 11 09:49:30 2011 topics = {'assert': u'\nThe ``assert`` statement\n************************\n\nAssert statements are a convenient way to insert debugging assertions\ninto a program:\n\n assert_stmt ::= "assert" expression ["," expression]\n\nThe simple form, ``assert expression``, is equivalent to\n\n if __debug__:\n if not expression: raise AssertionError\n\nThe extended form, ``assert expression1, expression2``, is equivalent\nto\n\n if __debug__:\n if not expression1: raise AssertionError(expression2)\n\nThese equivalences assume that ``__debug__`` and ``AssertionError``\nrefer to
the built-in variables with those names. In the current\nimplementation, the built-in variable ``__debug__`` is ``True`` under\nnormal circumstances, ``False`` when optimization is requested\n(command line option -O). The current code generator emits no code\nfor an assert statement when optimization is requested at compile\ntime. Note that it is unnecessary to include the source code for the\nexpression that failed in the error message; it will be displayed as\npart of the stack trace.\n\nAssignments to ``__debug__`` are illegal. The value for the built-in\nvariable is determined when the interpreter starts.\n', - 'assignment': u'\nAssignment statements\n*********************\n\nAssignment statements are used to (re)bind names to values and to\nmodify attributes or items of mutable objects:\n\n assignment_stmt ::= (target_list "=")+ (expression_list | yield_expression)\n target_list ::= target ("," target)* [","]\n target ::= identifier\n | "(" target_list ")"\n | "[" target_list "]"\n | attributeref\n | subscription\n | slicing\n\n(See section *Primaries* for the syntax definitions for the last three\nsymbols.)\n\nAn assignment statement evaluates the expression list (remember that\nthis can be a single expression or a comma-separated list, the latter\nyielding a tuple) and assigns the single resulting object to each of\nthe target lists, from left to right.\n\nAssignment is defined recursively depending on the form of the target\n(list). When a target is part of a mutable object (an attribute\nreference, subscription or slicing), the mutable object must\nultimately perform the assignment and decide about its validity, and\nmay raise an exception if the assignment is unacceptable. The rules\nobserved by various types and the exceptions raised are given with the\ndefinition of the object types (see section *The standard type\nhierarchy*).\n\nAssignment of an object to a target list is recursively defined as\nfollows.\n\n* If the target list is a single target: The object is assigned to\n that target.\n\n* If the target list is a comma-separated list of targets: The object\n must be an iterable with the same number of items as there are\n targets in the target list, and the items are assigned, from left to\n right, to the corresponding targets. (This rule is relaxed as of\n Python 1.5; in earlier versions, the object had to be a tuple.\n Since strings are sequences, an assignment like ``a, b = "xy"`` is\n now legal as long as the string has the right length.)\n\nAssignment of an object to a single target is recursively defined as\nfollows.\n\n* If the target is an identifier (name):\n\n * If the name does not occur in a ``global`` statement in the\n current code block: the name is bound to the object in the current\n local namespace.\n\n * Otherwise: the name is bound to the object in the current global\n namespace.\n\n The name is rebound if it was already bound. This may cause the\n reference count for the object previously bound to the name to reach\n zero, causing the object to be deallocated and its destructor (if it\n has one) to be called.\n\n* If the target is a target list enclosed in parentheses or in square\n brackets: The object must be an iterable with the same number of\n items as there are targets in the target list, and its items are\n assigned, from left to right, to the corresponding targets.\n\n* If the target is an attribute reference: The primary expression in\n the reference is evaluated. 
It should yield an object with\n assignable attributes; if this is not the case, ``TypeError`` is\n raised. That object is then asked to assign the assigned object to\n the given attribute; if it cannot perform the assignment, it raises\n an exception (usually but not necessarily ``AttributeError``).\n\n Note: If the object is a class instance and the attribute reference\n occurs on both sides of the assignment operator, the RHS expression,\n ``a.x`` can access either an instance attribute or (if no instance\n attribute exists) a class attribute. The LHS target ``a.x`` is\n always set as an instance attribute, creating it if necessary.\n Thus, the two occurrences of ``a.x`` do not necessarily refer to the\n same attribute: if the RHS expression refers to a class attribute,\n the LHS creates a new instance attribute as the target of the\n assignment:\n\n class Cls:\n x = 3 # class variable\n inst = Cls()\n inst.x = inst.x + 1 # writes inst.x as 4 leaving Cls.x as 3\n\n This description does not necessarily apply to descriptor\n attributes, such as properties created with ``property()``.\n\n* If the target is a subscription: The primary expression in the\n reference is evaluated. It should yield either a mutable sequence\n object (such as a list) or a mapping object (such as a dictionary).\n Next, the subscript expression is evaluated.\n\n If the primary is a mutable sequence object (such as a list), the\n subscript must yield a plain integer. If it is negative, the\n sequence\'s length is added to it. The resulting value must be a\n nonnegative integer less than the sequence\'s length, and the\n sequence is asked to assign the assigned object to its item with\n that index. If the index is out of range, ``IndexError`` is raised\n (assignment to a subscripted sequence cannot add new items to a\n list).\n\n If the primary is a mapping object (such as a dictionary), the\n subscript must have a type compatible with the mapping\'s key type,\n and the mapping is then asked to create a key/datum pair which maps\n the subscript to the assigned object. This can either replace an\n existing key/value pair with the same key value, or insert a new\n key/value pair (if no key with the same value existed).\n\n* If the target is a slicing: The primary expression in the reference\n is evaluated. It should yield a mutable sequence object (such as a\n list). The assigned object should be a sequence object of the same\n type. Next, the lower and upper bound expressions are evaluated,\n insofar they are present; defaults are zero and the sequence\'s\n length. The bounds should evaluate to (small) integers. If either\n bound is negative, the sequence\'s length is added to it. The\n resulting bounds are clipped to lie between zero and the sequence\'s\n length, inclusive. Finally, the sequence object is asked to replace\n the slice with the items of the assigned sequence. 
The length of\n the slice may be different from the length of the assigned sequence,\n thus changing the length of the target sequence, if the object\n allows it.\n\n**CPython implementation detail:** In the current implementation, the\nsyntax for targets is taken to be the same as for expressions, and\ninvalid syntax is rejected during the code generation phase, causing\nless detailed error messages.\n\nWARNING: Although the definition of assignment implies that overlaps\nbetween the left-hand side and the right-hand side are \'safe\' (for\nexample ``a, b = b, a`` swaps two variables), overlaps *within* the\ncollection of assigned-to variables are not safe! For instance, the\nfollowing program prints ``[0, 2]``:\n\n x = [0, 1]\n i = 0\n i, x[i] = 1, 2\n print x\n\n\nAugmented assignment statements\n===============================\n\nAugmented assignment is the combination, in a single statement, of a\nbinary operation and an assignment statement:\n\n augmented_assignment_stmt ::= augtarget augop (expression_list | yield_expression)\n augtarget ::= identifier | attributeref | subscription | slicing\n augop ::= "+=" | "-=" | "*=" | "/=" | "//=" | "%=" | "**="\n | ">>=" | "<<=" | "&=" | "^=" | "|="\n\n(See section *Primaries* for the syntax definitions for the last three\nsymbols.)\n\nAn augmented assignment evaluates the target (which, unlike normal\nassignment statements, cannot be an unpacking) and the expression\nlist, performs the binary operation specific to the type of assignment\non the two operands, and assigns the result to the original target.\nThe target is only evaluated once.\n\nAn augmented assignment expression like ``x += 1`` can be rewritten as\n``x = x + 1`` to achieve a similar, but not exactly equal effect. In\nthe augmented version, ``x`` is only evaluated once. Also, when\npossible, the actual operation is performed *in-place*, meaning that\nrather than creating a new object and assigning that to the target,\nthe old object is modified instead.\n\nWith the exception of assigning to tuples and multiple targets in a\nsingle statement, the assignment done by augmented assignment\nstatements is handled the same way as normal assignments. Similarly,\nwith the exception of the possible *in-place* behavior, the binary\noperation performed by augmented assignment is the same as the normal\nbinary operations.\n\nFor targets which are attribute references, the same *caveat about\nclass and instance attributes* applies as for regular assignments.\n', + 'assignment': u'\nAssignment statements\n*********************\n\nAssignment statements are used to (re)bind names to values and to\nmodify attributes or items of mutable objects:\n\n assignment_stmt ::= (target_list "=")+ (expression_list | yield_expression)\n target_list ::= target ("," target)* [","]\n target ::= identifier\n | "(" target_list ")"\n | "[" target_list "]"\n | attributeref\n | subscription\n | slicing\n\n(See section *Primaries* for the syntax definitions for the last three\nsymbols.)\n\nAn assignment statement evaluates the expression list (remember that\nthis can be a single expression or a comma-separated list, the latter\nyielding a tuple) and assigns the single resulting object to each of\nthe target lists, from left to right.\n\nAssignment is defined recursively depending on the form of the target\n(list). 
When a target is part of a mutable object (an attribute\nreference, subscription or slicing), the mutable object must\nultimately perform the assignment and decide about its validity, and\nmay raise an exception if the assignment is unacceptable. The rules\nobserved by various types and the exceptions raised are given with the\ndefinition of the object types (see section *The standard type\nhierarchy*).\n\nAssignment of an object to a target list is recursively defined as\nfollows.\n\n* If the target list is a single target: The object is assigned to\n that target.\n\n* If the target list is a comma-separated list of targets: The object\n must be an iterable with the same number of items as there are\n targets in the target list, and the items are assigned, from left to\n right, to the corresponding targets.\n\nAssignment of an object to a single target is recursively defined as\nfollows.\n\n* If the target is an identifier (name):\n\n * If the name does not occur in a ``global`` statement in the\n current code block: the name is bound to the object in the current\n local namespace.\n\n * Otherwise: the name is bound to the object in the current global\n namespace.\n\n The name is rebound if it was already bound. This may cause the\n reference count for the object previously bound to the name to reach\n zero, causing the object to be deallocated and its destructor (if it\n has one) to be called.\n\n* If the target is a target list enclosed in parentheses or in square\n brackets: The object must be an iterable with the same number of\n items as there are targets in the target list, and its items are\n assigned, from left to right, to the corresponding targets.\n\n* If the target is an attribute reference: The primary expression in\n the reference is evaluated. It should yield an object with\n assignable attributes; if this is not the case, ``TypeError`` is\n raised. That object is then asked to assign the assigned object to\n the given attribute; if it cannot perform the assignment, it raises\n an exception (usually but not necessarily ``AttributeError``).\n\n Note: If the object is a class instance and the attribute reference\n occurs on both sides of the assignment operator, the RHS expression,\n ``a.x`` can access either an instance attribute or (if no instance\n attribute exists) a class attribute. The LHS target ``a.x`` is\n always set as an instance attribute, creating it if necessary.\n Thus, the two occurrences of ``a.x`` do not necessarily refer to the\n same attribute: if the RHS expression refers to a class attribute,\n the LHS creates a new instance attribute as the target of the\n assignment:\n\n class Cls:\n x = 3 # class variable\n inst = Cls()\n inst.x = inst.x + 1 # writes inst.x as 4 leaving Cls.x as 3\n\n This description does not necessarily apply to descriptor\n attributes, such as properties created with ``property()``.\n\n* If the target is a subscription: The primary expression in the\n reference is evaluated. It should yield either a mutable sequence\n object (such as a list) or a mapping object (such as a dictionary).\n Next, the subscript expression is evaluated.\n\n If the primary is a mutable sequence object (such as a list), the\n subscript must yield a plain integer. If it is negative, the\n sequence\'s length is added to it. The resulting value must be a\n nonnegative integer less than the sequence\'s length, and the\n sequence is asked to assign the assigned object to its item with\n that index. 
If the index is out of range, ``IndexError`` is raised\n (assignment to a subscripted sequence cannot add new items to a\n list).\n\n If the primary is a mapping object (such as a dictionary), the\n subscript must have a type compatible with the mapping\'s key type,\n and the mapping is then asked to create a key/datum pair which maps\n the subscript to the assigned object. This can either replace an\n existing key/value pair with the same key value, or insert a new\n key/value pair (if no key with the same value existed).\n\n* If the target is a slicing: The primary expression in the reference\n is evaluated. It should yield a mutable sequence object (such as a\n list). The assigned object should be a sequence object of the same\n type. Next, the lower and upper bound expressions are evaluated,\n insofar they are present; defaults are zero and the sequence\'s\n length. The bounds should evaluate to (small) integers. If either\n bound is negative, the sequence\'s length is added to it. The\n resulting bounds are clipped to lie between zero and the sequence\'s\n length, inclusive. Finally, the sequence object is asked to replace\n the slice with the items of the assigned sequence. The length of\n the slice may be different from the length of the assigned sequence,\n thus changing the length of the target sequence, if the object\n allows it.\n\n**CPython implementation detail:** In the current implementation, the\nsyntax for targets is taken to be the same as for expressions, and\ninvalid syntax is rejected during the code generation phase, causing\nless detailed error messages.\n\nWARNING: Although the definition of assignment implies that overlaps\nbetween the left-hand side and the right-hand side are \'safe\' (for\nexample ``a, b = b, a`` swaps two variables), overlaps *within* the\ncollection of assigned-to variables are not safe! For instance, the\nfollowing program prints ``[0, 2]``:\n\n x = [0, 1]\n i = 0\n i, x[i] = 1, 2\n print x\n\n\nAugmented assignment statements\n===============================\n\nAugmented assignment is the combination, in a single statement, of a\nbinary operation and an assignment statement:\n\n augmented_assignment_stmt ::= augtarget augop (expression_list | yield_expression)\n augtarget ::= identifier | attributeref | subscription | slicing\n augop ::= "+=" | "-=" | "*=" | "/=" | "//=" | "%=" | "**="\n | ">>=" | "<<=" | "&=" | "^=" | "|="\n\n(See section *Primaries* for the syntax definitions for the last three\nsymbols.)\n\nAn augmented assignment evaluates the target (which, unlike normal\nassignment statements, cannot be an unpacking) and the expression\nlist, performs the binary operation specific to the type of assignment\non the two operands, and assigns the result to the original target.\nThe target is only evaluated once.\n\nAn augmented assignment expression like ``x += 1`` can be rewritten as\n``x = x + 1`` to achieve a similar, but not exactly equal effect. In\nthe augmented version, ``x`` is only evaluated once. Also, when\npossible, the actual operation is performed *in-place*, meaning that\nrather than creating a new object and assigning that to the target,\nthe old object is modified instead.\n\nWith the exception of assigning to tuples and multiple targets in a\nsingle statement, the assignment done by augmented assignment\nstatements is handled the same way as normal assignments. 
Similarly,\nwith the exception of the possible *in-place* behavior, the binary\noperation performed by augmented assignment is the same as the normal\nbinary operations.\n\nFor targets which are attribute references, the same *caveat about\nclass and instance attributes* applies as for regular assignments.\n', 'atom-identifiers': u'\nIdentifiers (Names)\n*******************\n\nAn identifier occurring as an atom is a name. See section\n*Identifiers and keywords* for lexical definition and section *Naming\nand binding* for documentation of naming and binding.\n\nWhen the name is bound to an object, evaluation of the atom yields\nthat object. When a name is not bound, an attempt to evaluate it\nraises a ``NameError`` exception.\n\n**Private name mangling:** When an identifier that textually occurs in\na class definition begins with two or more underscore characters and\ndoes not end in two or more underscores, it is considered a *private\nname* of that class. Private names are transformed to a longer form\nbefore code is generated for them. The transformation inserts the\nclass name in front of the name, with leading underscores removed, and\na single underscore inserted in front of the class name. For example,\nthe identifier ``__spam`` occurring in a class named ``Ham`` will be\ntransformed to ``_Ham__spam``. This transformation is independent of\nthe syntactical context in which the identifier is used. If the\ntransformed name is extremely long (longer than 255 characters),\nimplementation defined truncation may happen. If the class name\nconsists only of underscores, no transformation is done.\n', 'atom-literals': u"\nLiterals\n********\n\nPython supports string literals and various numeric literals:\n\n literal ::= stringliteral | integer | longinteger\n | floatnumber | imagnumber\n\nEvaluation of a literal yields an object of the given type (string,\ninteger, long integer, floating point number, complex number) with the\ngiven value. The value may be approximated in the case of floating\npoint and imaginary (complex) literals. See section *Literals* for\ndetails.\n\nAll literals correspond to immutable data types, and hence the\nobject's identity is less important than its value. Multiple\nevaluations of literals with the same value (either the same\noccurrence in the program text or a different occurrence) may obtain\nthe same object or a different object with the same value.\n", - 'attribute-access': u'\nCustomizing attribute access\n****************************\n\nThe following methods can be defined to customize the meaning of\nattribute access (use of, assignment to, or deletion of ``x.name``)\nfor class instances.\n\nobject.__getattr__(self, name)\n\n Called when an attribute lookup has not found the attribute in the\n usual places (i.e. it is not an instance attribute nor is it found\n in the class tree for ``self``). ``name`` is the attribute name.\n This method should return the (computed) attribute value or raise\n an ``AttributeError`` exception.\n\n Note that if the attribute is found through the normal mechanism,\n ``__getattr__()`` is not called. (This is an intentional asymmetry\n between ``__getattr__()`` and ``__setattr__()``.) This is done both\n for efficiency reasons and because otherwise ``__getattr__()``\n would have no way to access other attributes of the instance. Note\n that at least for instance variables, you can fake total control by\n not inserting any values in the instance attribute dictionary (but\n instead inserting them in another object). 
See the\n ``__getattribute__()`` method below for a way to actually get total\n control in new-style classes.\n\nobject.__setattr__(self, name, value)\n\n Called when an attribute assignment is attempted. This is called\n instead of the normal mechanism (i.e. store the value in the\n instance dictionary). *name* is the attribute name, *value* is the\n value to be assigned to it.\n\n If ``__setattr__()`` wants to assign to an instance attribute, it\n should not simply execute ``self.name = value`` --- this would\n cause a recursive call to itself. Instead, it should insert the\n value in the dictionary of instance attributes, e.g.,\n ``self.__dict__[name] = value``. For new-style classes, rather\n than accessing the instance dictionary, it should call the base\n class method with the same name, for example,\n ``object.__setattr__(self, name, value)``.\n\nobject.__delattr__(self, name)\n\n Like ``__setattr__()`` but for attribute deletion instead of\n assignment. This should only be implemented if ``del obj.name`` is\n meaningful for the object.\n\n\nMore attribute access for new-style classes\n===========================================\n\nThe following methods only apply to new-style classes.\n\nobject.__getattribute__(self, name)\n\n Called unconditionally to implement attribute accesses for\n instances of the class. If the class also defines\n ``__getattr__()``, the latter will not be called unless\n ``__getattribute__()`` either calls it explicitly or raises an\n ``AttributeError``. This method should return the (computed)\n attribute value or raise an ``AttributeError`` exception. In order\n to avoid infinite recursion in this method, its implementation\n should always call the base class method with the same name to\n access any attributes it needs, for example,\n ``object.__getattribute__(self, name)``.\n\n Note: This method may still be bypassed when looking up special methods\n as the result of implicit invocation via language syntax or\n built-in functions. See *Special method lookup for new-style\n classes*.\n\n\nImplementing Descriptors\n========================\n\nThe following methods only apply when an instance of the class\ncontaining the method (a so-called *descriptor* class) appears in the\nclass dictionary of another new-style class, known as the *owner*\nclass. In the examples below, "the attribute" refers to the attribute\nwhose name is the key of the property in the owner class\'\n``__dict__``. Descriptors can only be implemented as new-style\nclasses themselves.\n\nobject.__get__(self, instance, owner)\n\n Called to get the attribute of the owner class (class attribute\n access) or of an instance of that class (instance attribute\n access). *owner* is always the owner class, while *instance* is the\n instance that the attribute was accessed through, or ``None`` when\n the attribute is accessed through the *owner*. This method should\n return the (computed) attribute value or raise an\n ``AttributeError`` exception.\n\nobject.__set__(self, instance, value)\n\n Called to set the attribute on an instance *instance* of the owner\n class to a new value, *value*.\n\nobject.__delete__(self, instance)\n\n Called to delete the attribute on an instance *instance* of the\n owner class.\n\n\nInvoking Descriptors\n====================\n\nIn general, a descriptor is an object attribute with "binding\nbehavior", one whose attribute access has been overridden by methods\nin the descriptor protocol: ``__get__()``, ``__set__()``, and\n``__delete__()``. 
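A short illustration of the ``__getattribute__()`` recursion-avoidance advice above; the ``Traced`` class is hypothetical:

    class Traced(object):
        def __getattribute__(self, name):
            print "getting %r" % name
            # delegate to the base class to avoid infinite recursion
            return object.__getattribute__(self, name)

    t = Traced()
    t.x = 1
    print t.x            # prints "getting 'x'" and then 1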
If any of those methods are defined for an object,\nit is said to be a descriptor.\n\nThe default behavior for attribute access is to get, set, or delete\nthe attribute from an object\'s dictionary. For instance, ``a.x`` has a\nlookup chain starting with ``a.__dict__[\'x\']``, then\n``type(a).__dict__[\'x\']``, and continuing through the base classes of\n``type(a)`` excluding metaclasses.\n\nHowever, if the looked-up value is an object defining one of the\ndescriptor methods, then Python may override the default behavior and\ninvoke the descriptor method instead. Where this occurs in the\nprecedence chain depends on which descriptor methods were defined and\nhow they were called. Note that descriptors are only invoked for new\nstyle objects or classes (ones that subclass ``object()`` or\n``type()``).\n\nThe starting point for descriptor invocation is a binding, ``a.x``.\nHow the arguments are assembled depends on ``a``:\n\nDirect Call\n The simplest and least common call is when user code directly\n invokes a descriptor method: ``x.__get__(a)``.\n\nInstance Binding\n If binding to a new-style object instance, ``a.x`` is transformed\n into the call: ``type(a).__dict__[\'x\'].__get__(a, type(a))``.\n\nClass Binding\n If binding to a new-style class, ``A.x`` is transformed into the\n call: ``A.__dict__[\'x\'].__get__(None, A)``.\n\nSuper Binding\n If ``a`` is an instance of ``super``, then the binding ``super(B,\n obj).m()`` searches ``obj.__class__.__mro__`` for the base class\n ``A`` immediately preceding ``B`` and then invokes the descriptor\n with the call: ``A.__dict__[\'m\'].__get__(obj, A)``.\n\nFor instance bindings, the precedence of descriptor invocation depends\non the which descriptor methods are defined. A descriptor can define\nany combination of ``__get__()``, ``__set__()`` and ``__delete__()``.\nIf it does not define ``__get__()``, then accessing the attribute will\nreturn the descriptor object itself unless there is a value in the\nobject\'s instance dictionary. If the descriptor defines ``__set__()``\nand/or ``__delete__()``, it is a data descriptor; if it defines\nneither, it is a non-data descriptor. Normally, data descriptors\ndefine both ``__get__()`` and ``__set__()``, while non-data\ndescriptors have just the ``__get__()`` method. Data descriptors with\n``__set__()`` and ``__get__()`` defined always override a redefinition\nin an instance dictionary. In contrast, non-data descriptors can be\noverridden by instances.\n\nPython methods (including ``staticmethod()`` and ``classmethod()``)\nare implemented as non-data descriptors. Accordingly, instances can\nredefine and override methods. This allows individual instances to\nacquire behaviors that differ from other instances of the same class.\n\nThe ``property()`` function is implemented as a data descriptor.\nAccordingly, instances cannot override the behavior of a property.\n\n\n__slots__\n=========\n\nBy default, instances of both old and new-style classes have a\ndictionary for attribute storage. This wastes space for objects\nhaving very few instance variables. The space consumption can become\nacute when creating large numbers of instances.\n\nThe default can be overridden by defining *__slots__* in a new-style\nclass definition. The *__slots__* declaration takes a sequence of\ninstance variables and reserves just enough space in each instance to\nhold a value for each variable. 
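A minimal sketch of a data descriptor and the precedence rules described above; ``Positive`` and ``Account`` are made-up names:

    class Positive(object):
        """A data descriptor: it defines both __get__ and __set__."""
        def __init__(self, default=0):
            self.default = default
        def __get__(self, instance, owner):
            if instance is None:
                return self                      # accessed on the class itself
            return instance.__dict__.get('_value', self.default)
        def __set__(self, instance, value):
            if value < 0:
                raise ValueError("must be non-negative")
            instance.__dict__['_value'] = value

    class Account(object):
        balance = Positive()

    a = Account()
    a.balance = 10       # goes through Positive.__set__
    print a.balance      # 10, via Positive.__get__ (a data descriptor always
                         # wins over the instance dictionary)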
Space is saved because *__dict__* is\nnot created for each instance.\n\n__slots__\n\n This class variable can be assigned a string, iterable, or sequence\n of strings with variable names used by instances. If defined in a\n new-style class, *__slots__* reserves space for the declared\n variables and prevents the automatic creation of *__dict__* and\n *__weakref__* for each instance.\n\n New in version 2.2.\n\nNotes on using *__slots__*\n\n* When inheriting from a class without *__slots__*, the *__dict__*\n attribute of that class will always be accessible, so a *__slots__*\n definition in the subclass is meaningless.\n\n* Without a *__dict__* variable, instances cannot be assigned new\n variables not listed in the *__slots__* definition. Attempts to\n assign to an unlisted variable name raises ``AttributeError``. If\n dynamic assignment of new variables is desired, then add\n ``\'__dict__\'`` to the sequence of strings in the *__slots__*\n declaration.\n\n Changed in version 2.3: Previously, adding ``\'__dict__\'`` to the\n *__slots__* declaration would not enable the assignment of new\n attributes not specifically listed in the sequence of instance\n variable names.\n\n* Without a *__weakref__* variable for each instance, classes defining\n *__slots__* do not support weak references to its instances. If weak\n reference support is needed, then add ``\'__weakref__\'`` to the\n sequence of strings in the *__slots__* declaration.\n\n Changed in version 2.3: Previously, adding ``\'__weakref__\'`` to the\n *__slots__* declaration would not enable support for weak\n references.\n\n* *__slots__* are implemented at the class level by creating\n descriptors (*Implementing Descriptors*) for each variable name. As\n a result, class attributes cannot be used to set default values for\n instance variables defined by *__slots__*; otherwise, the class\n attribute would overwrite the descriptor assignment.\n\n* The action of a *__slots__* declaration is limited to the class\n where it is defined. As a result, subclasses will have a *__dict__*\n unless they also define *__slots__* (which must only contain names\n of any *additional* slots).\n\n* If a class defines a slot also defined in a base class, the instance\n variable defined by the base class slot is inaccessible (except by\n retrieving its descriptor directly from the base class). This\n renders the meaning of the program undefined. In the future, a\n check may be added to prevent this.\n\n* Nonempty *__slots__* does not work for classes derived from\n "variable-length" built-in types such as ``long``, ``str`` and\n ``tuple``.\n\n* Any non-string iterable may be assigned to *__slots__*. Mappings may\n also be used; however, in the future, special meaning may be\n assigned to the values corresponding to each key.\n\n* *__class__* assignment works only if both classes have the same\n *__slots__*.\n\n Changed in version 2.6: Previously, *__class__* assignment raised an\n error if either new or old class had *__slots__*.\n', + 'attribute-access': u'\nCustomizing attribute access\n****************************\n\nThe following methods can be defined to customize the meaning of\nattribute access (use of, assignment to, or deletion of ``x.name``)\nfor class instances.\n\nobject.__getattr__(self, name)\n\n Called when an attribute lookup has not found the attribute in the\n usual places (i.e. it is not an instance attribute nor is it found\n in the class tree for ``self``). 
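A small example of the *__slots__* behaviour summarised in the notes above, using a made-up ``Point`` class:

    class Point(object):
        __slots__ = ('x', 'y')       # no per-instance __dict__ is created

    p = Point()
    p.x = 1.0
    try:
        p.z = 2.0                    # 'z' is not listed in __slots__
    except AttributeError as e:
        print e                      # e.g. 'Point' object has no attribute 'z'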
``name`` is the attribute name.\n This method should return the (computed) attribute value or raise\n an ``AttributeError`` exception.\n\n Note that if the attribute is found through the normal mechanism,\n ``__getattr__()`` is not called. (This is an intentional asymmetry\n between ``__getattr__()`` and ``__setattr__()``.) This is done both\n for efficiency reasons and because otherwise ``__getattr__()``\n would have no way to access other attributes of the instance. Note\n that at least for instance variables, you can fake total control by\n not inserting any values in the instance attribute dictionary (but\n instead inserting them in another object). See the\n ``__getattribute__()`` method below for a way to actually get total\n control in new-style classes.\n\nobject.__setattr__(self, name, value)\n\n Called when an attribute assignment is attempted. This is called\n instead of the normal mechanism (i.e. store the value in the\n instance dictionary). *name* is the attribute name, *value* is the\n value to be assigned to it.\n\n If ``__setattr__()`` wants to assign to an instance attribute, it\n should not simply execute ``self.name = value`` --- this would\n cause a recursive call to itself. Instead, it should insert the\n value in the dictionary of instance attributes, e.g.,\n ``self.__dict__[name] = value``. For new-style classes, rather\n than accessing the instance dictionary, it should call the base\n class method with the same name, for example,\n ``object.__setattr__(self, name, value)``.\n\nobject.__delattr__(self, name)\n\n Like ``__setattr__()`` but for attribute deletion instead of\n assignment. This should only be implemented if ``del obj.name`` is\n meaningful for the object.\n\n\nMore attribute access for new-style classes\n===========================================\n\nThe following methods only apply to new-style classes.\n\nobject.__getattribute__(self, name)\n\n Called unconditionally to implement attribute accesses for\n instances of the class. If the class also defines\n ``__getattr__()``, the latter will not be called unless\n ``__getattribute__()`` either calls it explicitly or raises an\n ``AttributeError``. This method should return the (computed)\n attribute value or raise an ``AttributeError`` exception. In order\n to avoid infinite recursion in this method, its implementation\n should always call the base class method with the same name to\n access any attributes it needs, for example,\n ``object.__getattribute__(self, name)``.\n\n Note: This method may still be bypassed when looking up special methods\n as the result of implicit invocation via language syntax or\n built-in functions. See *Special method lookup for new-style\n classes*.\n\n\nImplementing Descriptors\n========================\n\nThe following methods only apply when an instance of the class\ncontaining the method (a so-called *descriptor* class) appears in an\n*owner* class (the descriptor must be in either the owner\'s class\ndictionary or in the class dictionary for one of its parents). In the\nexamples below, "the attribute" refers to the attribute whose name is\nthe key of the property in the owner class\' ``__dict__``.\n\nobject.__get__(self, instance, owner)\n\n Called to get the attribute of the owner class (class attribute\n access) or of an instance of that class (instance attribute\n access). *owner* is always the owner class, while *instance* is the\n instance that the attribute was accessed through, or ``None`` when\n the attribute is accessed through the *owner*. 
This method should\n return the (computed) attribute value or raise an\n ``AttributeError`` exception.\n\nobject.__set__(self, instance, value)\n\n Called to set the attribute on an instance *instance* of the owner\n class to a new value, *value*.\n\nobject.__delete__(self, instance)\n\n Called to delete the attribute on an instance *instance* of the\n owner class.\n\n\nInvoking Descriptors\n====================\n\nIn general, a descriptor is an object attribute with "binding\nbehavior", one whose attribute access has been overridden by methods\nin the descriptor protocol: ``__get__()``, ``__set__()``, and\n``__delete__()``. If any of those methods are defined for an object,\nit is said to be a descriptor.\n\nThe default behavior for attribute access is to get, set, or delete\nthe attribute from an object\'s dictionary. For instance, ``a.x`` has a\nlookup chain starting with ``a.__dict__[\'x\']``, then\n``type(a).__dict__[\'x\']``, and continuing through the base classes of\n``type(a)`` excluding metaclasses.\n\nHowever, if the looked-up value is an object defining one of the\ndescriptor methods, then Python may override the default behavior and\ninvoke the descriptor method instead. Where this occurs in the\nprecedence chain depends on which descriptor methods were defined and\nhow they were called. Note that descriptors are only invoked for new\nstyle objects or classes (ones that subclass ``object()`` or\n``type()``).\n\nThe starting point for descriptor invocation is a binding, ``a.x``.\nHow the arguments are assembled depends on ``a``:\n\nDirect Call\n The simplest and least common call is when user code directly\n invokes a descriptor method: ``x.__get__(a)``.\n\nInstance Binding\n If binding to a new-style object instance, ``a.x`` is transformed\n into the call: ``type(a).__dict__[\'x\'].__get__(a, type(a))``.\n\nClass Binding\n If binding to a new-style class, ``A.x`` is transformed into the\n call: ``A.__dict__[\'x\'].__get__(None, A)``.\n\nSuper Binding\n If ``a`` is an instance of ``super``, then the binding ``super(B,\n obj).m()`` searches ``obj.__class__.__mro__`` for the base class\n ``A`` immediately preceding ``B`` and then invokes the descriptor\n with the call: ``A.__dict__[\'m\'].__get__(obj, obj.__class__)``.\n\nFor instance bindings, the precedence of descriptor invocation depends\non the which descriptor methods are defined. A descriptor can define\nany combination of ``__get__()``, ``__set__()`` and ``__delete__()``.\nIf it does not define ``__get__()``, then accessing the attribute will\nreturn the descriptor object itself unless there is a value in the\nobject\'s instance dictionary. If the descriptor defines ``__set__()``\nand/or ``__delete__()``, it is a data descriptor; if it defines\nneither, it is a non-data descriptor. Normally, data descriptors\ndefine both ``__get__()`` and ``__set__()``, while non-data\ndescriptors have just the ``__get__()`` method. Data descriptors with\n``__set__()`` and ``__get__()`` defined always override a redefinition\nin an instance dictionary. In contrast, non-data descriptors can be\noverridden by instances.\n\nPython methods (including ``staticmethod()`` and ``classmethod()``)\nare implemented as non-data descriptors. Accordingly, instances can\nredefine and override methods. 
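A short sketch of an instance overriding a method (a non-data descriptor), as described above; ``Greeter`` is a made-up name:

    class Greeter(object):
        def hello(self):
            return "hello"

    g = Greeter()
    print g.hello()                  # hello, found via the non-data descriptor
    g.hello = lambda: "hi there"     # the instance dictionary now shadows the method
    print g.hello()                  # hi there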
This allows individual instances to\nacquire behaviors that differ from other instances of the same class.\n\nThe ``property()`` function is implemented as a data descriptor.\nAccordingly, instances cannot override the behavior of a property.\n\n\n__slots__\n=========\n\nBy default, instances of both old and new-style classes have a\ndictionary for attribute storage. This wastes space for objects\nhaving very few instance variables. The space consumption can become\nacute when creating large numbers of instances.\n\nThe default can be overridden by defining *__slots__* in a new-style\nclass definition. The *__slots__* declaration takes a sequence of\ninstance variables and reserves just enough space in each instance to\nhold a value for each variable. Space is saved because *__dict__* is\nnot created for each instance.\n\n__slots__\n\n This class variable can be assigned a string, iterable, or sequence\n of strings with variable names used by instances. If defined in a\n new-style class, *__slots__* reserves space for the declared\n variables and prevents the automatic creation of *__dict__* and\n *__weakref__* for each instance.\n\n New in version 2.2.\n\nNotes on using *__slots__*\n\n* When inheriting from a class without *__slots__*, the *__dict__*\n attribute of that class will always be accessible, so a *__slots__*\n definition in the subclass is meaningless.\n\n* Without a *__dict__* variable, instances cannot be assigned new\n variables not listed in the *__slots__* definition. Attempts to\n assign to an unlisted variable name raises ``AttributeError``. If\n dynamic assignment of new variables is desired, then add\n ``\'__dict__\'`` to the sequence of strings in the *__slots__*\n declaration.\n\n Changed in version 2.3: Previously, adding ``\'__dict__\'`` to the\n *__slots__* declaration would not enable the assignment of new\n attributes not specifically listed in the sequence of instance\n variable names.\n\n* Without a *__weakref__* variable for each instance, classes defining\n *__slots__* do not support weak references to its instances. If weak\n reference support is needed, then add ``\'__weakref__\'`` to the\n sequence of strings in the *__slots__* declaration.\n\n Changed in version 2.3: Previously, adding ``\'__weakref__\'`` to the\n *__slots__* declaration would not enable support for weak\n references.\n\n* *__slots__* are implemented at the class level by creating\n descriptors (*Implementing Descriptors*) for each variable name. As\n a result, class attributes cannot be used to set default values for\n instance variables defined by *__slots__*; otherwise, the class\n attribute would overwrite the descriptor assignment.\n\n* The action of a *__slots__* declaration is limited to the class\n where it is defined. As a result, subclasses will have a *__dict__*\n unless they also define *__slots__* (which must only contain names\n of any *additional* slots).\n\n* If a class defines a slot also defined in a base class, the instance\n variable defined by the base class slot is inaccessible (except by\n retrieving its descriptor directly from the base class). This\n renders the meaning of the program undefined. In the future, a\n check may be added to prevent this.\n\n* Nonempty *__slots__* does not work for classes derived from\n "variable-length" built-in types such as ``long``, ``str`` and\n ``tuple``.\n\n* Any non-string iterable may be assigned to *__slots__*. 
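A brief sketch of the ``property()`` behaviour mentioned above (a property is a data descriptor, so instances cannot override it); the ``Circle`` class is invented for illustration:

    class Circle(object):
        def __init__(self, radius):
            self.radius = radius

        @property
        def area(self):
            return 3.14159 * self.radius ** 2

    c = Circle(2.0)
    print c.area                     # 12.56636
    # c.area = 1.0                   # would raise AttributeError: can't set attribute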
Mappings may\n also be used; however, in the future, special meaning may be\n assigned to the values corresponding to each key.\n\n* *__class__* assignment works only if both classes have the same\n *__slots__*.\n\n Changed in version 2.6: Previously, *__class__* assignment raised an\n error if either new or old class had *__slots__*.\n', 'attribute-references': u'\nAttribute references\n********************\n\nAn attribute reference is a primary followed by a period and a name:\n\n attributeref ::= primary "." identifier\n\nThe primary must evaluate to an object of a type that supports\nattribute references, e.g., a module, list, or an instance. This\nobject is then asked to produce the attribute whose name is the\nidentifier. If this attribute is not available, the exception\n``AttributeError`` is raised. Otherwise, the type and value of the\nobject produced is determined by the object. Multiple evaluations of\nthe same attribute reference may yield different objects.\n', 'augassign': u'\nAugmented assignment statements\n*******************************\n\nAugmented assignment is the combination, in a single statement, of a\nbinary operation and an assignment statement:\n\n augmented_assignment_stmt ::= augtarget augop (expression_list | yield_expression)\n augtarget ::= identifier | attributeref | subscription | slicing\n augop ::= "+=" | "-=" | "*=" | "/=" | "//=" | "%=" | "**="\n | ">>=" | "<<=" | "&=" | "^=" | "|="\n\n(See section *Primaries* for the syntax definitions for the last three\nsymbols.)\n\nAn augmented assignment evaluates the target (which, unlike normal\nassignment statements, cannot be an unpacking) and the expression\nlist, performs the binary operation specific to the type of assignment\non the two operands, and assigns the result to the original target.\nThe target is only evaluated once.\n\nAn augmented assignment expression like ``x += 1`` can be rewritten as\n``x = x + 1`` to achieve a similar, but not exactly equal effect. In\nthe augmented version, ``x`` is only evaluated once. Also, when\npossible, the actual operation is performed *in-place*, meaning that\nrather than creating a new object and assigning that to the target,\nthe old object is modified instead.\n\nWith the exception of assigning to tuples and multiple targets in a\nsingle statement, the assignment done by augmented assignment\nstatements is handled the same way as normal assignments. Similarly,\nwith the exception of the possible *in-place* behavior, the binary\noperation performed by augmented assignment is the same as the normal\nbinary operations.\n\nFor targets which are attribute references, the same *caveat about\nclass and instance attributes* applies as for regular assignments.\n', 'binary': u'\nBinary arithmetic operations\n****************************\n\nThe binary arithmetic operations have the conventional priority\nlevels. Note that some of these operations also apply to certain non-\nnumeric types. Apart from the power operator, there are only two\nlevels, one for multiplicative operators and one for additive\noperators:\n\n m_expr ::= u_expr | m_expr "*" u_expr | m_expr "//" u_expr | m_expr "/" u_expr\n | m_expr "%" u_expr\n a_expr ::= m_expr | a_expr "+" m_expr | a_expr "-" m_expr\n\nThe ``*`` (multiplication) operator yields the product of its\narguments. The arguments must either both be numbers, or one argument\nmust be an integer (plain or long) and the other must be a sequence.\nIn the former case, the numbers are converted to a common type and\nthen multiplied together. 
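A few concrete cases of the ``*`` operator rules just described:

    print 2 * 3.5                # 7.0, the int is converted to float first
    print 'ab' * 3               # ababab (sequence repetition)
    print [1, 2] * -1            # [] (a negative factor yields an empty sequence)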
In the latter case, sequence repetition is\nperformed; a negative repetition factor yields an empty sequence.\n\nThe ``/`` (division) and ``//`` (floor division) operators yield the\nquotient of their arguments. The numeric arguments are first\nconverted to a common type. Plain or long integer division yields an\ninteger of the same type; the result is that of mathematical division\nwith the \'floor\' function applied to the result. Division by zero\nraises the ``ZeroDivisionError`` exception.\n\nThe ``%`` (modulo) operator yields the remainder from the division of\nthe first argument by the second. The numeric arguments are first\nconverted to a common type. A zero right argument raises the\n``ZeroDivisionError`` exception. The arguments may be floating point\nnumbers, e.g., ``3.14%0.7`` equals ``0.34`` (since ``3.14`` equals\n``4*0.7 + 0.34``.) The modulo operator always yields a result with\nthe same sign as its second operand (or zero); the absolute value of\nthe result is strictly smaller than the absolute value of the second\noperand [2].\n\nThe integer division and modulo operators are connected by the\nfollowing identity: ``x == (x/y)*y + (x%y)``. Integer division and\nmodulo are also connected with the built-in function ``divmod()``:\n``divmod(x, y) == (x/y, x%y)``. These identities don\'t hold for\nfloating point numbers; there similar identities hold approximately\nwhere ``x/y`` is replaced by ``floor(x/y)`` or ``floor(x/y) - 1`` [3].\n\nIn addition to performing the modulo operation on numbers, the ``%``\noperator is also overloaded by string and unicode objects to perform\nstring formatting (also known as interpolation). The syntax for string\nformatting is described in the Python Library Reference, section\n*String Formatting Operations*.\n\nDeprecated since version 2.3: The floor division operator, the modulo\noperator, and the ``divmod()`` function are no longer defined for\ncomplex numbers. Instead, convert to a floating point number using\nthe ``abs()`` function if appropriate.\n\nThe ``+`` (addition) operator yields the sum of its arguments. The\narguments must either both be numbers or both sequences of the same\ntype. In the former case, the numbers are converted to a common type\nand then added together. In the latter case, the sequences are\nconcatenated.\n\nThe ``-`` (subtraction) operator yields the difference of its\narguments. The numeric arguments are first converted to a common\ntype.\n', 'bitwise': u'\nBinary bitwise operations\n*************************\n\nEach of the three bitwise operations has a different priority level:\n\n and_expr ::= shift_expr | and_expr "&" shift_expr\n xor_expr ::= and_expr | xor_expr "^" and_expr\n or_expr ::= xor_expr | or_expr "|" xor_expr\n\nThe ``&`` operator yields the bitwise AND of its arguments, which must\nbe plain or long integers. The arguments are converted to a common\ntype.\n\nThe ``^`` operator yields the bitwise XOR (exclusive OR) of its\narguments, which must be plain or long integers. The arguments are\nconverted to a common type.\n\nThe ``|`` operator yields the bitwise (inclusive) OR of its arguments,\nwhich must be plain or long integers. The arguments are converted to\na common type.\n', 'bltin-code-objects': u'\nCode Objects\n************\n\nCode objects are used by the implementation to represent "pseudo-\ncompiled" executable Python code such as a function body. They differ\nfrom function objects because they don\'t contain a reference to their\nglobal execution environment. 
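A minimal sketch of working with code objects as described in this entry; the expressions are arbitrary examples:

    code = compile("print 6 * 7", "<string>", "exec")
    exec code                                         # prints 42
    print eval(compile("6 * 7", "<string>", "eval"))  # 42

    def f():
        pass
    print f.func_code.co_name                         # f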
Code objects are returned by the built-\nin ``compile()`` function and can be extracted from function objects\nthrough their ``func_code`` attribute. See also the ``code`` module.\n\nA code object can be executed or evaluated by passing it (instead of a\nsource string) to the ``exec`` statement or the built-in ``eval()``\nfunction.\n\nSee *The standard type hierarchy* for more information.\n', 'bltin-ellipsis-object': u'\nThe Ellipsis Object\n*******************\n\nThis object is used by extended slice notation (see *Slicings*). It\nsupports no special operations. There is exactly one ellipsis object,\nnamed ``Ellipsis`` (a built-in name).\n\nIt is written as ``Ellipsis``.\n', - 'bltin-file-objects': u'\nFile Objects\n************\n\nFile objects are implemented using C\'s ``stdio`` package and can be\ncreated with the built-in ``open()`` function. File objects are also\nreturned by some other built-in functions and methods, such as\n``os.popen()`` and ``os.fdopen()`` and the ``makefile()`` method of\nsocket objects. Temporary files can be created using the ``tempfile``\nmodule, and high-level file operations such as copying, moving, and\ndeleting files and directories can be achieved with the ``shutil``\nmodule.\n\nWhen a file operation fails for an I/O-related reason, the exception\n``IOError`` is raised. This includes situations where the operation\nis not defined for some reason, like ``seek()`` on a tty device or\nwriting a file opened for reading.\n\nFiles have the following methods:\n\nfile.close()\n\n Close the file. A closed file cannot be read or written any more.\n Any operation which requires that the file be open will raise a\n ``ValueError`` after the file has been closed. Calling ``close()``\n more than once is allowed.\n\n As of Python 2.5, you can avoid having to call this method\n explicitly if you use the ``with`` statement. For example, the\n following code will automatically close *f* when the ``with`` block\n is exited:\n\n from __future__ import with_statement # This isn\'t required in Python 2.6\n\n with open("hello.txt") as f:\n for line in f:\n print line\n\n In older versions of Python, you would have needed to do this to\n get the same effect:\n\n f = open("hello.txt")\n try:\n for line in f:\n print line\n finally:\n f.close()\n\n Note: Not all "file-like" types in Python support use as a context\n manager for the ``with`` statement. If your code is intended to\n work with any file-like object, you can use the function\n ``contextlib.closing()`` instead of using the object directly.\n\nfile.flush()\n\n Flush the internal buffer, like ``stdio``\'s ``fflush()``. 
This may\n be a no-op on some file-like objects.\n\n Note: ``flush()`` does not necessarily write the file\'s data to disk.\n Use ``flush()`` followed by ``os.fsync()`` to ensure this\n behavior.\n\nfile.fileno()\n\n Return the integer "file descriptor" that is used by the underlying\n implementation to request I/O operations from the operating system.\n This can be useful for other, lower level interfaces that use file\n descriptors, such as the ``fcntl`` module or ``os.read()`` and\n friends.\n\n Note: File-like objects which do not have a real file descriptor should\n *not* provide this method!\n\nfile.isatty()\n\n Return ``True`` if the file is connected to a tty(-like) device,\n else ``False``.\n\n Note: If a file-like object is not associated with a real file, this\n method should *not* be implemented.\n\nfile.next()\n\n A file object is its own iterator, for example ``iter(f)`` returns\n *f* (unless *f* is closed). When a file is used as an iterator,\n typically in a ``for`` loop (for example, ``for line in f: print\n line``), the ``next()`` method is called repeatedly. This method\n returns the next input line, or raises ``StopIteration`` when EOF\n is hit when the file is open for reading (behavior is undefined\n when the file is open for writing). In order to make a ``for``\n loop the most efficient way of looping over the lines of a file (a\n very common operation), the ``next()`` method uses a hidden read-\n ahead buffer. As a consequence of using a read-ahead buffer,\n combining ``next()`` with other file methods (like ``readline()``)\n does not work right. However, using ``seek()`` to reposition the\n file to an absolute position will flush the read-ahead buffer.\n\n New in version 2.3.\n\nfile.read([size])\n\n Read at most *size* bytes from the file (less if the read hits EOF\n before obtaining *size* bytes). If the *size* argument is negative\n or omitted, read all data until EOF is reached. The bytes are\n returned as a string object. An empty string is returned when EOF\n is encountered immediately. (For certain files, like ttys, it\n makes sense to continue reading after an EOF is hit.) Note that\n this method may call the underlying C function ``fread()`` more\n than once in an effort to acquire as close to *size* bytes as\n possible. Also note that when in non-blocking mode, less data than\n was requested may be returned, even if no *size* parameter was\n given.\n\n Note: This function is simply a wrapper for the underlying ``fread()``\n C function, and will behave the same in corner cases, such as\n whether the EOF value is cached.\n\nfile.readline([size])\n\n Read one entire line from the file. A trailing newline character\n is kept in the string (but may be absent when a file ends with an\n incomplete line). [5] If the *size* argument is present and non-\n negative, it is a maximum byte count (including the trailing\n newline) and an incomplete line may be returned. An empty string is\n returned *only* when EOF is encountered immediately.\n\n Note: Unlike ``stdio``\'s ``fgets()``, the returned string contains null\n characters (``\'\\0\'``) if they occurred in the input.\n\nfile.readlines([sizehint])\n\n Read until EOF using ``readline()`` and return a list containing\n the lines thus read. If the optional *sizehint* argument is\n present, instead of reading up to EOF, whole lines totalling\n approximately *sizehint* bytes (possibly after rounding up to an\n internal buffer size) are read. 
Objects implementing a file-like\n interface may choose to ignore *sizehint* if it cannot be\n implemented, or cannot be implemented efficiently.\n\nfile.xreadlines()\n\n This method returns the same thing as ``iter(f)``.\n\n New in version 2.1.\n\n Deprecated since version 2.3: Use ``for line in file`` instead.\n\nfile.seek(offset[, whence])\n\n Set the file\'s current position, like ``stdio``\'s ``fseek()``. The\n *whence* argument is optional and defaults to ``os.SEEK_SET`` or\n ``0`` (absolute file positioning); other values are ``os.SEEK_CUR``\n or ``1`` (seek relative to the current position) and\n ``os.SEEK_END`` or ``2`` (seek relative to the file\'s end). There\n is no return value.\n\n For example, ``f.seek(2, os.SEEK_CUR)`` advances the position by\n two and ``f.seek(-3, os.SEEK_END)`` sets the position to the third\n to last.\n\n Note that if the file is opened for appending (mode ``\'a\'`` or\n ``\'a+\'``), any ``seek()`` operations will be undone at the next\n write. If the file is only opened for writing in append mode (mode\n ``\'a\'``), this method is essentially a no-op, but it remains useful\n for files opened in append mode with reading enabled (mode\n ``\'a+\'``). If the file is opened in text mode (without ``\'b\'``),\n only offsets returned by ``tell()`` are legal. Use of other\n offsets causes undefined behavior.\n\n Note that not all file objects are seekable.\n\n Changed in version 2.6: Passing float values as offset has been\n deprecated.\n\nfile.tell()\n\n Return the file\'s current position, like ``stdio``\'s ``ftell()``.\n\n Note: On Windows, ``tell()`` can return illegal values (after an\n ``fgets()``) when reading files with Unix-style line-endings. Use\n binary mode (``\'rb\'``) to circumvent this problem.\n\nfile.truncate([size])\n\n Truncate the file\'s size. If the optional *size* argument is\n present, the file is truncated to (at most) that size. The size\n defaults to the current position. The current file position is not\n changed. Note that if a specified size exceeds the file\'s current\n size, the result is platform-dependent: possibilities include that\n the file may remain unchanged, increase to the specified size as if\n zero-filled, or increase to the specified size with undefined new\n content. Availability: Windows, many Unix variants.\n\nfile.write(str)\n\n Write a string to the file. There is no return value. Due to\n buffering, the string may not actually show up in the file until\n the ``flush()`` or ``close()`` method is called.\n\nfile.writelines(sequence)\n\n Write a sequence of strings to the file. The sequence can be any\n iterable object producing strings, typically a list of strings.\n There is no return value. (The name is intended to match\n ``readlines()``; ``writelines()`` does not add line separators.)\n\nFiles support the iterator protocol. Each iteration returns the same\nresult as ``file.readline()``, and iteration ends when the\n``readline()`` method returns an empty string.\n\nFile objects also offer a number of other interesting attributes.\nThese are not required for file-like objects, but should be\nimplemented if they make sense for the particular object.\n\nfile.closed\n\n bool indicating the current state of the file object. This is a\n read-only attribute; the ``close()`` method changes the value. It\n may not be available on all file-like objects.\n\nfile.encoding\n\n The encoding that this file uses. When Unicode strings are written\n to a file, they will be converted to byte strings using this\n encoding. 
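A small sketch of the ``seek()``/``tell()`` positioning calls described above; it assumes writing a throw-away ``hello.txt`` in the current directory:

    import os

    f = open("hello.txt", "w+b")     # binary mode, so arbitrary offsets are safe
    f.write("hello world\n")
    f.seek(0)                        # back to the start, like fseek()
    print f.read(5)                  # hello
    print f.tell()                   # 5
    f.seek(-6, os.SEEK_END)          # six bytes before the end of the file
    print f.read()                   # world (followed by the newline)
    f.close()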
In addition, when the file is connected to a terminal,\n the attribute gives the encoding that the terminal is likely to use\n (that information might be incorrect if the user has misconfigured\n the terminal). The attribute is read-only and may not be present\n on all file-like objects. It may also be ``None``, in which case\n the file uses the system default encoding for converting Unicode\n strings.\n\n New in version 2.3.\n\nfile.errors\n\n The Unicode error handler used along with the encoding.\n\n New in version 2.6.\n\nfile.mode\n\n The I/O mode for the file. If the file was created using the\n ``open()`` built-in function, this will be the value of the *mode*\n parameter. This is a read-only attribute and may not be present on\n all file-like objects.\n\nfile.name\n\n If the file object was created using ``open()``, the name of the\n file. Otherwise, some string that indicates the source of the file\n object, of the form ``<...>``. This is a read-only attribute and\n may not be present on all file-like objects.\n\nfile.newlines\n\n If Python was built with the *--with-universal-newlines* option to\n **configure** (the default) this read-only attribute exists, and\n for files opened in universal newline read mode it keeps track of\n the types of newlines encountered while reading the file. The\n values it can take are ``\'\\r\'``, ``\'\\n\'``, ``\'\\r\\n\'``, ``None``\n (unknown, no newlines read yet) or a tuple containing all the\n newline types seen, to indicate that multiple newline conventions\n were encountered. For files not opened in universal newline read\n mode the value of this attribute will be ``None``.\n\nfile.softspace\n\n Boolean that indicates whether a space character needs to be\n printed before another value when using the ``print`` statement.\n Classes that are trying to simulate a file object should also have\n a writable ``softspace`` attribute, which should be initialized to\n zero. This will be automatic for most classes implemented in\n Python (care may be needed for objects that override attribute\n access); types implemented in C will have to provide a writable\n ``softspace`` attribute.\n\n Note: This attribute is not used to control the ``print`` statement,\n but to allow the implementation of ``print`` to keep track of its\n internal state.\n', + 'bltin-file-objects': u'\nFile Objects\n************\n\nFile objects are implemented using C\'s ``stdio`` package and can be\ncreated with the built-in ``open()`` function. File objects are also\nreturned by some other built-in functions and methods, such as\n``os.popen()`` and ``os.fdopen()`` and the ``makefile()`` method of\nsocket objects. Temporary files can be created using the ``tempfile``\nmodule, and high-level file operations such as copying, moving, and\ndeleting files and directories can be achieved with the ``shutil``\nmodule.\n\nWhen a file operation fails for an I/O-related reason, the exception\n``IOError`` is raised. This includes situations where the operation\nis not defined for some reason, like ``seek()`` on a tty device or\nwriting a file opened for reading.\n\nFiles have the following methods:\n\nfile.close()\n\n Close the file. A closed file cannot be read or written any more.\n Any operation which requires that the file be open will raise a\n ``ValueError`` after the file has been closed. Calling ``close()``\n more than once is allowed.\n\n As of Python 2.5, you can avoid having to call this method\n explicitly if you use the ``with`` statement. 
For example, the\n following code will automatically close *f* when the ``with`` block\n is exited:\n\n from __future__ import with_statement # This isn\'t required in Python 2.6\n\n with open("hello.txt") as f:\n for line in f:\n print line\n\n In older versions of Python, you would have needed to do this to\n get the same effect:\n\n f = open("hello.txt")\n try:\n for line in f:\n print line\n finally:\n f.close()\n\n Note: Not all "file-like" types in Python support use as a context\n manager for the ``with`` statement. If your code is intended to\n work with any file-like object, you can use the function\n ``contextlib.closing()`` instead of using the object directly.\n\nfile.flush()\n\n Flush the internal buffer, like ``stdio``\'s ``fflush()``. This may\n be a no-op on some file-like objects.\n\n Note: ``flush()`` does not necessarily write the file\'s data to disk.\n Use ``flush()`` followed by ``os.fsync()`` to ensure this\n behavior.\n\nfile.fileno()\n\n Return the integer "file descriptor" that is used by the underlying\n implementation to request I/O operations from the operating system.\n This can be useful for other, lower level interfaces that use file\n descriptors, such as the ``fcntl`` module or ``os.read()`` and\n friends.\n\n Note: File-like objects which do not have a real file descriptor should\n *not* provide this method!\n\nfile.isatty()\n\n Return ``True`` if the file is connected to a tty(-like) device,\n else ``False``.\n\n Note: If a file-like object is not associated with a real file, this\n method should *not* be implemented.\n\nfile.next()\n\n A file object is its own iterator, for example ``iter(f)`` returns\n *f* (unless *f* is closed). When a file is used as an iterator,\n typically in a ``for`` loop (for example, ``for line in f: print\n line``), the ``next()`` method is called repeatedly. This method\n returns the next input line, or raises ``StopIteration`` when EOF\n is hit when the file is open for reading (behavior is undefined\n when the file is open for writing). In order to make a ``for``\n loop the most efficient way of looping over the lines of a file (a\n very common operation), the ``next()`` method uses a hidden read-\n ahead buffer. As a consequence of using a read-ahead buffer,\n combining ``next()`` with other file methods (like ``readline()``)\n does not work right. However, using ``seek()`` to reposition the\n file to an absolute position will flush the read-ahead buffer.\n\n New in version 2.3.\n\nfile.read([size])\n\n Read at most *size* bytes from the file (less if the read hits EOF\n before obtaining *size* bytes). If the *size* argument is negative\n or omitted, read all data until EOF is reached. The bytes are\n returned as a string object. An empty string is returned when EOF\n is encountered immediately. (For certain files, like ttys, it\n makes sense to continue reading after an EOF is hit.) Note that\n this method may call the underlying C function ``fread()`` more\n than once in an effort to acquire as close to *size* bytes as\n possible. Also note that when in non-blocking mode, less data than\n was requested may be returned, even if no *size* parameter was\n given.\n\n Note: This function is simply a wrapper for the underlying ``fread()``\n C function, and will behave the same in corner cases, such as\n whether the EOF value is cached.\n\nfile.readline([size])\n\n Read one entire line from the file. A trailing newline character\n is kept in the string (but may be absent when a file ends with an\n incomplete line). 
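A minimal sketch of reading a file in fixed-size chunks with ``read()`` as described above; it assumes a ``hello.txt`` file such as the one used in the earlier examples:

    total = 0
    f = open("hello.txt", "rb")
    while True:
        chunk = f.read(4096)         # at most 4096 bytes per call
        if not chunk:                # an empty string signals EOF
            break
        total += len(chunk)
    f.close()
    print total                      # number of bytes in the file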
[5] If the *size* argument is present and non-\n negative, it is a maximum byte count (including the trailing\n newline) and an incomplete line may be returned. When *size* is not\n 0, an empty string is returned *only* when EOF is encountered\n immediately.\n\n Note: Unlike ``stdio``\'s ``fgets()``, the returned string contains null\n characters (``\'\\0\'``) if they occurred in the input.\n\nfile.readlines([sizehint])\n\n Read until EOF using ``readline()`` and return a list containing\n the lines thus read. If the optional *sizehint* argument is\n present, instead of reading up to EOF, whole lines totalling\n approximately *sizehint* bytes (possibly after rounding up to an\n internal buffer size) are read. Objects implementing a file-like\n interface may choose to ignore *sizehint* if it cannot be\n implemented, or cannot be implemented efficiently.\n\nfile.xreadlines()\n\n This method returns the same thing as ``iter(f)``.\n\n New in version 2.1.\n\n Deprecated since version 2.3: Use ``for line in file`` instead.\n\nfile.seek(offset[, whence])\n\n Set the file\'s current position, like ``stdio``\'s ``fseek()``. The\n *whence* argument is optional and defaults to ``os.SEEK_SET`` or\n ``0`` (absolute file positioning); other values are ``os.SEEK_CUR``\n or ``1`` (seek relative to the current position) and\n ``os.SEEK_END`` or ``2`` (seek relative to the file\'s end). There\n is no return value.\n\n For example, ``f.seek(2, os.SEEK_CUR)`` advances the position by\n two and ``f.seek(-3, os.SEEK_END)`` sets the position to the third\n to last.\n\n Note that if the file is opened for appending (mode ``\'a\'`` or\n ``\'a+\'``), any ``seek()`` operations will be undone at the next\n write. If the file is only opened for writing in append mode (mode\n ``\'a\'``), this method is essentially a no-op, but it remains useful\n for files opened in append mode with reading enabled (mode\n ``\'a+\'``). If the file is opened in text mode (without ``\'b\'``),\n only offsets returned by ``tell()`` are legal. Use of other\n offsets causes undefined behavior.\n\n Note that not all file objects are seekable.\n\n Changed in version 2.6: Passing float values as offset has been\n deprecated.\n\nfile.tell()\n\n Return the file\'s current position, like ``stdio``\'s ``ftell()``.\n\n Note: On Windows, ``tell()`` can return illegal values (after an\n ``fgets()``) when reading files with Unix-style line-endings. Use\n binary mode (``\'rb\'``) to circumvent this problem.\n\nfile.truncate([size])\n\n Truncate the file\'s size. If the optional *size* argument is\n present, the file is truncated to (at most) that size. The size\n defaults to the current position. The current file position is not\n changed. Note that if a specified size exceeds the file\'s current\n size, the result is platform-dependent: possibilities include that\n the file may remain unchanged, increase to the specified size as if\n zero-filled, or increase to the specified size with undefined new\n content. Availability: Windows, many Unix variants.\n\nfile.write(str)\n\n Write a string to the file. There is no return value. Due to\n buffering, the string may not actually show up in the file until\n the ``flush()`` or ``close()`` method is called.\n\nfile.writelines(sequence)\n\n Write a sequence of strings to the file. The sequence can be any\n iterable object producing strings, typically a list of strings.\n There is no return value. 
(The name is intended to match\n ``readlines()``; ``writelines()`` does not add line separators.)\n\nFiles support the iterator protocol. Each iteration returns the same\nresult as ``file.readline()``, and iteration ends when the\n``readline()`` method returns an empty string.\n\nFile objects also offer a number of other interesting attributes.\nThese are not required for file-like objects, but should be\nimplemented if they make sense for the particular object.\n\nfile.closed\n\n bool indicating the current state of the file object. This is a\n read-only attribute; the ``close()`` method changes the value. It\n may not be available on all file-like objects.\n\nfile.encoding\n\n The encoding that this file uses. When Unicode strings are written\n to a file, they will be converted to byte strings using this\n encoding. In addition, when the file is connected to a terminal,\n the attribute gives the encoding that the terminal is likely to use\n (that information might be incorrect if the user has misconfigured\n the terminal). The attribute is read-only and may not be present\n on all file-like objects. It may also be ``None``, in which case\n the file uses the system default encoding for converting Unicode\n strings.\n\n New in version 2.3.\n\nfile.errors\n\n The Unicode error handler used along with the encoding.\n\n New in version 2.6.\n\nfile.mode\n\n The I/O mode for the file. If the file was created using the\n ``open()`` built-in function, this will be the value of the *mode*\n parameter. This is a read-only attribute and may not be present on\n all file-like objects.\n\nfile.name\n\n If the file object was created using ``open()``, the name of the\n file. Otherwise, some string that indicates the source of the file\n object, of the form ``<...>``. This is a read-only attribute and\n may not be present on all file-like objects.\n\nfile.newlines\n\n If Python was built with universal newlines enabled (the default)\n this read-only attribute exists, and for files opened in universal\n newline read mode it keeps track of the types of newlines\n encountered while reading the file. The values it can take are\n ``\'\\r\'``, ``\'\\n\'``, ``\'\\r\\n\'``, ``None`` (unknown, no newlines read\n yet) or a tuple containing all the newline types seen, to indicate\n that multiple newline conventions were encountered. For files not\n opened in universal newline read mode the value of this attribute\n will be ``None``.\n\nfile.softspace\n\n Boolean that indicates whether a space character needs to be\n printed before another value when using the ``print`` statement.\n Classes that are trying to simulate a file object should also have\n a writable ``softspace`` attribute, which should be initialized to\n zero. This will be automatic for most classes implemented in\n Python (care may be needed for objects that override attribute\n access); types implemented in C will have to provide a writable\n ``softspace`` attribute.\n\n Note: This attribute is not used to control the ``print`` statement,\n but to allow the implementation of ``print`` to keep track of its\n internal state.\n', 'bltin-null-object': u"\nThe Null Object\n***************\n\nThis object is returned by functions that don't explicitly return a\nvalue. It supports no special operations. There is exactly one null\nobject, named ``None`` (a built-in name).\n\nIt is written as ``None``.\n", 'bltin-type-objects': u"\nType Objects\n************\n\nType objects represent the various object types. 
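A brief illustration of the read-only file attributes listed above, again using a throw-away ``hello.txt``:

    f = open("hello.txt", "w")
    print f.name, f.mode, f.closed   # hello.txt w False
    f.close()
    print f.closed                   # True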
An object's type is\naccessed by the built-in function ``type()``. There are no special\noperations on types. The standard module ``types`` defines names for\nall standard built-in types.\n\nTypes are written like this: ````.\n", 'booleans': u'\nBoolean operations\n******************\n\n or_test ::= and_test | or_test "or" and_test\n and_test ::= not_test | and_test "and" not_test\n not_test ::= comparison | "not" not_test\n\nIn the context of Boolean operations, and also when expressions are\nused by control flow statements, the following values are interpreted\nas false: ``False``, ``None``, numeric zero of all types, and empty\nstrings and containers (including strings, tuples, lists,\ndictionaries, sets and frozensets). All other values are interpreted\nas true. (See the ``__nonzero__()`` special method for a way to\nchange this.)\n\nThe operator ``not`` yields ``True`` if its argument is false,\n``False`` otherwise.\n\nThe expression ``x and y`` first evaluates *x*; if *x* is false, its\nvalue is returned; otherwise, *y* is evaluated and the resulting value\nis returned.\n\nThe expression ``x or y`` first evaluates *x*; if *x* is true, its\nvalue is returned; otherwise, *y* is evaluated and the resulting value\nis returned.\n\n(Note that neither ``and`` nor ``or`` restrict the value and type they\nreturn to ``False`` and ``True``, but rather return the last evaluated\nargument. This is sometimes useful, e.g., if ``s`` is a string that\nshould be replaced by a default value if it is empty, the expression\n``s or \'foo\'`` yields the desired value. Because ``not`` has to\ninvent a value anyway, it does not bother to return a value of the\nsame type as its argument, so e.g., ``not \'foo\'`` yields ``False``,\nnot ``\'\'``.)\n', @@ -20,39 +20,39 @@ 'class': u'\nClass definitions\n*****************\n\nA class definition defines a class object (see section *The standard\ntype hierarchy*):\n\n classdef ::= "class" classname [inheritance] ":" suite\n inheritance ::= "(" [expression_list] ")"\n classname ::= identifier\n\nA class definition is an executable statement. It first evaluates the\ninheritance list, if present. Each item in the inheritance list\nshould evaluate to a class object or class type which allows\nsubclassing. The class\'s suite is then executed in a new execution\nframe (see section *Naming and binding*), using a newly created local\nnamespace and the original global namespace. (Usually, the suite\ncontains only function definitions.) When the class\'s suite finishes\nexecution, its execution frame is discarded but its local namespace is\nsaved. [4] A class object is then created using the inheritance list\nfor the base classes and the saved local namespace for the attribute\ndictionary. The class name is bound to this class object in the\noriginal local namespace.\n\n**Programmer\'s note:** Variables defined in the class definition are\nclass variables; they are shared by all instances. To create instance\nvariables, they can be set in a method with ``self.name = value``.\nBoth class and instance variables are accessible through the notation\n"``self.name``", and an instance variable hides a class variable with\nthe same name when accessed in this way. Class variables can be used\nas defaults for instance variables, but using mutable values there can\nlead to unexpected results. 
For *new-style class*es, descriptors can\nbe used to create instance variables with different implementation\ndetails.\n\nClass definitions, like function definitions, may be wrapped by one or\nmore *decorator* expressions. The evaluation rules for the decorator\nexpressions are the same as for functions. The result must be a class\nobject, which is then bound to the class name.\n\n-[ Footnotes ]-\n\n[1] The exception is propagated to the invocation stack only if there\n is no ``finally`` clause that negates the exception.\n\n[2] Currently, control "flows off the end" except in the case of an\n exception or the execution of a ``return``, ``continue``, or\n ``break`` statement.\n\n[3] A string literal appearing as the first statement in the function\n body is transformed into the function\'s ``__doc__`` attribute and\n therefore the function\'s *docstring*.\n\n[4] A string literal appearing as the first statement in the class\n body is transformed into the namespace\'s ``__doc__`` item and\n therefore the class\'s *docstring*.\n', 'coercion-rules': u"\nCoercion rules\n**************\n\nThis section used to document the rules for coercion. As the language\nhas evolved, the coercion rules have become hard to document\nprecisely; documenting what one version of one particular\nimplementation does is undesirable. Instead, here are some informal\nguidelines regarding coercion. In Python 3.0, coercion will not be\nsupported.\n\n* If the left operand of a % operator is a string or Unicode object,\n no coercion takes place and the string formatting operation is\n invoked instead.\n\n* It is no longer recommended to define a coercion operation. Mixed-\n mode operations on types that don't define coercion pass the\n original arguments to the operation.\n\n* New-style classes (those derived from ``object``) never invoke the\n ``__coerce__()`` method in response to a binary operator; the only\n time ``__coerce__()`` is invoked is when the built-in function\n ``coerce()`` is called.\n\n* For most intents and purposes, an operator that returns\n ``NotImplemented`` is treated the same as one that is not\n implemented at all.\n\n* Below, ``__op__()`` and ``__rop__()`` are used to signify the\n generic method names corresponding to an operator; ``__iop__()`` is\n used for the corresponding in-place operator. For example, for the\n operator '``+``', ``__add__()`` and ``__radd__()`` are used for the\n left and right variant of the binary operator, and ``__iadd__()``\n for the in-place variant.\n\n* For objects *x* and *y*, first ``x.__op__(y)`` is tried. If this is\n not implemented or returns ``NotImplemented``, ``y.__rop__(x)`` is\n tried. If this is also not implemented or returns\n ``NotImplemented``, a ``TypeError`` exception is raised. But see\n the following exception:\n\n* Exception to the previous item: if the left operand is an instance\n of a built-in type or a new-style class, and the right operand is an\n instance of a proper subclass of that type or class and overrides\n the base's ``__rop__()`` method, the right operand's ``__rop__()``\n method is tried *before* the left operand's ``__op__()`` method.\n\n This is done so that a subclass can completely override binary\n operators. 
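A small sketch of the mutable-class-variable pitfall mentioned above; ``Bag`` is a made-up class:

    class Bag(object):
        items = []                   # class variable, shared by every instance

        def add(self, thing):
            self.items.append(thing) # mutates the shared list

    a = Bag()
    b = Bag()
    a.add('x')
    print b.items                    # ['x'] -- the "default" leaked across instances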
Otherwise, the left operand's ``__op__()`` method would\n always accept the right operand: when an instance of a given class\n is expected, an instance of a subclass of that class is always\n acceptable.\n\n* When either operand type defines a coercion, this coercion is called\n before that type's ``__op__()`` or ``__rop__()`` method is called,\n but no sooner. If the coercion returns an object of a different\n type for the operand whose coercion is invoked, part of the process\n is redone using the new object.\n\n* When an in-place operator (like '``+=``') is used, if the left\n operand implements ``__iop__()``, it is invoked without any\n coercion. When the operation falls back to ``__op__()`` and/or\n ``__rop__()``, the normal coercion rules apply.\n\n* In ``x + y``, if *x* is a sequence that implements sequence\n concatenation, sequence concatenation is invoked.\n\n* In ``x * y``, if one operator is a sequence that implements sequence\n repetition, and the other is an integer (``int`` or ``long``),\n sequence repetition is invoked.\n\n* Rich comparisons (implemented by methods ``__eq__()`` and so on)\n never use coercion. Three-way comparison (implemented by\n ``__cmp__()``) does use coercion under the same conditions as other\n binary operations use it.\n\n* In the current implementation, the built-in numeric types ``int``,\n ``long``, ``float``, and ``complex`` do not use coercion. All these\n types implement a ``__coerce__()`` method, for use by the built-in\n ``coerce()`` function.\n\n Changed in version 2.7.\n", 'comparisons': u'\nComparisons\n***********\n\nUnlike C, all comparison operations in Python have the same priority,\nwhich is lower than that of any arithmetic, shifting or bitwise\noperation. Also unlike C, expressions like ``a < b < c`` have the\ninterpretation that is conventional in mathematics:\n\n comparison ::= or_expr ( comp_operator or_expr )*\n comp_operator ::= "<" | ">" | "==" | ">=" | "<=" | "<>" | "!="\n | "is" ["not"] | ["not"] "in"\n\nComparisons yield boolean values: ``True`` or ``False``.\n\nComparisons can be chained arbitrarily, e.g., ``x < y <= z`` is\nequivalent to ``x < y and y <= z``, except that ``y`` is evaluated\nonly once (but in both cases ``z`` is not evaluated at all when ``x <\ny`` is found to be false).\n\nFormally, if *a*, *b*, *c*, ..., *y*, *z* are expressions and *op1*,\n*op2*, ..., *opN* are comparison operators, then ``a op1 b op2 c ... y\nopN z`` is equivalent to ``a op1 b and b op2 c and ... y opN z``,\nexcept that each expression is evaluated at most once.\n\nNote that ``a op1 b op2 c`` doesn\'t imply any kind of comparison\nbetween *a* and *c*, so that, e.g., ``x < y > z`` is perfectly legal\n(though perhaps not pretty).\n\nThe forms ``<>`` and ``!=`` are equivalent; for consistency with C,\n``!=`` is preferred; where ``!=`` is mentioned below ``<>`` is also\naccepted. The ``<>`` spelling is considered obsolescent.\n\nThe operators ``<``, ``>``, ``==``, ``>=``, ``<=``, and ``!=`` compare\nthe values of two objects. The objects need not have the same type.\nIf both are numbers, they are converted to a common type. Otherwise,\nobjects of different types *always* compare unequal, and are ordered\nconsistently but arbitrarily. 
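To make the single-evaluation rule for chained comparisons above concrete, a tiny sketch (``middle()`` is an invented helper):

    def middle():
        print 'middle evaluated'
        return 2

    print 1 < middle() <= 3             # 'middle evaluated' appears once; then True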
You can control comparison behavior of\nobjects of non-built-in types by defining a ``__cmp__`` method or rich\ncomparison methods like ``__gt__``, described in section *Special\nmethod names*.\n\n(This unusual definition of comparison was used to simplify the\ndefinition of operations like sorting and the ``in`` and ``not in``\noperators. In the future, the comparison rules for objects of\ndifferent types are likely to change.)\n\nComparison of objects of the same type depends on the type:\n\n* Numbers are compared arithmetically.\n\n* Strings are compared lexicographically using the numeric equivalents\n (the result of the built-in function ``ord()``) of their characters.\n Unicode and 8-bit strings are fully interoperable in this behavior.\n [4]\n\n* Tuples and lists are compared lexicographically using comparison of\n corresponding elements. This means that to compare equal, each\n element must compare equal and the two sequences must be of the same\n type and have the same length.\n\n If not equal, the sequences are ordered the same as their first\n differing elements. For example, ``cmp([1,2,x], [1,2,y])`` returns\n the same as ``cmp(x,y)``. If the corresponding element does not\n exist, the shorter sequence is ordered first (for example, ``[1,2] <\n [1,2,3]``).\n\n* Mappings (dictionaries) compare equal if and only if their sorted\n (key, value) lists compare equal. [5] Outcomes other than equality\n are resolved consistently, but are not otherwise defined. [6]\n\n* Most other objects of built-in types compare unequal unless they are\n the same object; the choice whether one object is considered smaller\n or larger than another one is made arbitrarily but consistently\n within one execution of a program.\n\nThe operators ``in`` and ``not in`` test for collection membership.\n``x in s`` evaluates to true if *x* is a member of the collection *s*,\nand false otherwise. ``x not in s`` returns the negation of ``x in\ns``. The collection membership test has traditionally been bound to\nsequences; an object is a member of a collection if the collection is\na sequence and contains an element equal to that object. However, it\nmake sense for many other object types to support membership tests\nwithout being a sequence. In particular, dictionaries (for keys) and\nsets support membership testing.\n\nFor the list and tuple types, ``x in y`` is true if and only if there\nexists an index *i* such that ``x == y[i]`` is true.\n\nFor the Unicode and string types, ``x in y`` is true if and only if\n*x* is a substring of *y*. An equivalent test is ``y.find(x) != -1``.\nNote, *x* and *y* need not be the same type; consequently, ``u\'ab\' in\n\'abc\'`` will return ``True``. Empty strings are always considered to\nbe a substring of any other string, so ``"" in "abc"`` will return\n``True``.\n\nChanged in version 2.3: Previously, *x* was required to be a string of\nlength ``1``.\n\nFor user-defined classes which define the ``__contains__()`` method,\n``x in y`` is true if and only if ``y.__contains__(x)`` is true.\n\nFor user-defined classes which do not define ``__contains__()`` but do\ndefine ``__iter__()``, ``x in y`` is true if some value ``z`` with ``x\n== z`` is produced while iterating over ``y``. 
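A minimal sketch of membership testing without a sequence, via ``__contains__()`` as described above (``Evens`` is an invented class):

    class Evens(object):
        def __contains__(self, x):
            return x % 2 == 0           # supports 'in' without being a sequence

    print 4 in Evens()                  # True
    print 5 not in Evens()              # True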
If an exception is\nraised during the iteration, it is as if ``in`` raised that exception.\n\nLastly, the old-style iteration protocol is tried: if a class defines\n``__getitem__()``, ``x in y`` is true if and only if there is a non-\nnegative integer index *i* such that ``x == y[i]``, and all lower\ninteger indices do not raise ``IndexError`` exception. (If any other\nexception is raised, it is as if ``in`` raised that exception).\n\nThe operator ``not in`` is defined to have the inverse true value of\n``in``.\n\nThe operators ``is`` and ``is not`` test for object identity: ``x is\ny`` is true if and only if *x* and *y* are the same object. ``x is\nnot y`` yields the inverse truth value. [7]\n', - 'compound': u'\nCompound statements\n*******************\n\nCompound statements contain (groups of) other statements; they affect\nor control the execution of those other statements in some way. In\ngeneral, compound statements span multiple lines, although in simple\nincarnations a whole compound statement may be contained in one line.\n\nThe ``if``, ``while`` and ``for`` statements implement traditional\ncontrol flow constructs. ``try`` specifies exception handlers and/or\ncleanup code for a group of statements. Function and class\ndefinitions are also syntactically compound statements.\n\nCompound statements consist of one or more \'clauses.\' A clause\nconsists of a header and a \'suite.\' The clause headers of a\nparticular compound statement are all at the same indentation level.\nEach clause header begins with a uniquely identifying keyword and ends\nwith a colon. A suite is a group of statements controlled by a\nclause. A suite can be one or more semicolon-separated simple\nstatements on the same line as the header, following the header\'s\ncolon, or it can be one or more indented statements on subsequent\nlines. Only the latter form of suite can contain nested compound\nstatements; the following is illegal, mostly because it wouldn\'t be\nclear to which ``if`` clause a following ``else`` clause would belong:\n\n if test1: if test2: print x\n\nAlso note that the semicolon binds tighter than the colon in this\ncontext, so that in the following example, either all or none of the\n``print`` statements are executed:\n\n if x < y < z: print x; print y; print z\n\nSummarizing:\n\n compound_stmt ::= if_stmt\n | while_stmt\n | for_stmt\n | try_stmt\n | with_stmt\n | funcdef\n | classdef\n | decorated\n suite ::= stmt_list NEWLINE | NEWLINE INDENT statement+ DEDENT\n statement ::= stmt_list NEWLINE | compound_stmt\n stmt_list ::= simple_stmt (";" simple_stmt)* [";"]\n\nNote that statements always end in a ``NEWLINE`` possibly followed by\na ``DEDENT``. 
Also note that optional continuation clauses always\nbegin with a keyword that cannot start a statement, thus there are no\nambiguities (the \'dangling ``else``\' problem is solved in Python by\nrequiring nested ``if`` statements to be indented).\n\nThe formatting of the grammar rules in the following sections places\neach clause on a separate line for clarity.\n\n\nThe ``if`` statement\n====================\n\nThe ``if`` statement is used for conditional execution:\n\n if_stmt ::= "if" expression ":" suite\n ( "elif" expression ":" suite )*\n ["else" ":" suite]\n\nIt selects exactly one of the suites by evaluating the expressions one\nby one until one is found to be true (see section *Boolean operations*\nfor the definition of true and false); then that suite is executed\n(and no other part of the ``if`` statement is executed or evaluated).\nIf all expressions are false, the suite of the ``else`` clause, if\npresent, is executed.\n\n\nThe ``while`` statement\n=======================\n\nThe ``while`` statement is used for repeated execution as long as an\nexpression is true:\n\n while_stmt ::= "while" expression ":" suite\n ["else" ":" suite]\n\nThis repeatedly tests the expression and, if it is true, executes the\nfirst suite; if the expression is false (which may be the first time\nit is tested) the suite of the ``else`` clause, if present, is\nexecuted and the loop terminates.\n\nA ``break`` statement executed in the first suite terminates the loop\nwithout executing the ``else`` clause\'s suite. A ``continue``\nstatement executed in the first suite skips the rest of the suite and\ngoes back to testing the expression.\n\n\nThe ``for`` statement\n=====================\n\nThe ``for`` statement is used to iterate over the elements of a\nsequence (such as a string, tuple or list) or other iterable object:\n\n for_stmt ::= "for" target_list "in" expression_list ":" suite\n ["else" ":" suite]\n\nThe expression list is evaluated once; it should yield an iterable\nobject. An iterator is created for the result of the\n``expression_list``. The suite is then executed once for each item\nprovided by the iterator, in the order of ascending indices. Each\nitem in turn is assigned to the target list using the standard rules\nfor assignments, and then the suite is executed. When the items are\nexhausted (which is immediately when the sequence is empty), the suite\nin the ``else`` clause, if present, is executed, and the loop\nterminates.\n\nA ``break`` statement executed in the first suite terminates the loop\nwithout executing the ``else`` clause\'s suite. A ``continue``\nstatement executed in the first suite skips the rest of the suite and\ncontinues with the next item, or with the ``else`` clause if there was\nno next item.\n\nThe suite may assign to the variable(s) in the target list; this does\nnot affect the next item assigned to it.\n\nThe target list is not deleted when the loop is finished, but if the\nsequence is empty, it will not have been assigned to at all by the\nloop. Hint: the built-in function ``range()`` returns a sequence of\nintegers suitable to emulate the effect of Pascal\'s ``for i := a to b\ndo``; e.g., ``range(3)`` returns the list ``[0, 1, 2]``.\n\nNote: There is a subtlety when the sequence is being modified by the loop\n (this can only occur for mutable sequences, i.e. lists). An internal\n counter is used to keep track of which item is used next, and this\n is incremented on each iteration. When this counter has reached the\n length of the sequence the loop terminates. 
This means that if the\n suite deletes the current (or a previous) item from the sequence,\n the next item will be skipped (since it gets the index of the\n current item which has already been treated). Likewise, if the\n suite inserts an item in the sequence before the current item, the\n current item will be treated again the next time through the loop.\n This can lead to nasty bugs that can be avoided by making a\n temporary copy using a slice of the whole sequence, e.g.,\n\n for x in a[:]:\n if x < 0: a.remove(x)\n\n\nThe ``try`` statement\n=====================\n\nThe ``try`` statement specifies exception handlers and/or cleanup code\nfor a group of statements:\n\n try_stmt ::= try1_stmt | try2_stmt\n try1_stmt ::= "try" ":" suite\n ("except" [expression [("as" | ",") target]] ":" suite)+\n ["else" ":" suite]\n ["finally" ":" suite]\n try2_stmt ::= "try" ":" suite\n "finally" ":" suite\n\nChanged in version 2.5: In previous versions of Python,\n``try``...``except``...``finally`` did not work. ``try``...``except``\nhad to be nested in ``try``...``finally``.\n\nThe ``except`` clause(s) specify one or more exception handlers. When\nno exception occurs in the ``try`` clause, no exception handler is\nexecuted. When an exception occurs in the ``try`` suite, a search for\nan exception handler is started. This search inspects the except\nclauses in turn until one is found that matches the exception. An\nexpression-less except clause, if present, must be last; it matches\nany exception. For an except clause with an expression, that\nexpression is evaluated, and the clause matches the exception if the\nresulting object is "compatible" with the exception. An object is\ncompatible with an exception if it is the class or a base class of the\nexception object, a tuple containing an item compatible with the\nexception, or, in the (deprecated) case of string exceptions, is the\nraised string itself (note that the object identities must match, i.e.\nit must be the same string object, not just a string with the same\nvalue).\n\nIf no except clause matches the exception, the search for an exception\nhandler continues in the surrounding code and on the invocation stack.\n[1]\n\nIf the evaluation of an expression in the header of an except clause\nraises an exception, the original search for a handler is canceled and\na search starts for the new exception in the surrounding code and on\nthe call stack (it is treated as if the entire ``try`` statement\nraised the exception).\n\nWhen a matching except clause is found, the exception is assigned to\nthe target specified in that except clause, if present, and the except\nclause\'s suite is executed. All except clauses must have an\nexecutable block. When the end of this block is reached, execution\ncontinues normally after the entire try statement. (This means that\nif two nested handlers exist for the same exception, and the exception\noccurs in the try clause of the inner handler, the outer handler will\nnot handle the exception.)\n\nBefore an except clause\'s suite is executed, details about the\nexception are assigned to three variables in the ``sys`` module:\n``sys.exc_type`` receives the object identifying the exception;\n``sys.exc_value`` receives the exception\'s parameter;\n``sys.exc_traceback`` receives a traceback object (see section *The\nstandard type hierarchy*) identifying the point in the program where\nthe exception occurred. 
These details are also available through the\n``sys.exc_info()`` function, which returns a tuple ``(exc_type,\nexc_value, exc_traceback)``. Use of the corresponding variables is\ndeprecated in favor of this function, since their use is unsafe in a\nthreaded program. As of Python 1.5, the variables are restored to\ntheir previous values (before the call) when returning from a function\nthat handled an exception.\n\nThe optional ``else`` clause is executed if and when control flows off\nthe end of the ``try`` clause. [2] Exceptions in the ``else`` clause\nare not handled by the preceding ``except`` clauses.\n\nIf ``finally`` is present, it specifies a \'cleanup\' handler. The\n``try`` clause is executed, including any ``except`` and ``else``\nclauses. If an exception occurs in any of the clauses and is not\nhandled, the exception is temporarily saved. The ``finally`` clause is\nexecuted. If there is a saved exception, it is re-raised at the end\nof the ``finally`` clause. If the ``finally`` clause raises another\nexception or executes a ``return`` or ``break`` statement, the saved\nexception is lost. The exception information is not available to the\nprogram during execution of the ``finally`` clause.\n\nWhen a ``return``, ``break`` or ``continue`` statement is executed in\nthe ``try`` suite of a ``try``...``finally`` statement, the\n``finally`` clause is also executed \'on the way out.\' A ``continue``\nstatement is illegal in the ``finally`` clause. (The reason is a\nproblem with the current implementation --- this restriction may be\nlifted in the future).\n\nAdditional information on exceptions can be found in section\n*Exceptions*, and information on using the ``raise`` statement to\ngenerate exceptions may be found in section *The raise statement*.\n\n\nThe ``with`` statement\n======================\n\nNew in version 2.5.\n\nThe ``with`` statement is used to wrap the execution of a block with\nmethods defined by a context manager (see section *With Statement\nContext Managers*). This allows common\n``try``...``except``...``finally`` usage patterns to be encapsulated\nfor convenient reuse.\n\n with_stmt ::= "with" with_item ("," with_item)* ":" suite\n with_item ::= expression ["as" target]\n\nThe execution of the ``with`` statement with one "item" proceeds as\nfollows:\n\n1. The context expression is evaluated to obtain a context manager.\n\n2. The context manager\'s ``__exit__()`` is loaded for later use.\n\n3. The context manager\'s ``__enter__()`` method is invoked.\n\n4. If a target was included in the ``with`` statement, the return\n value from ``__enter__()`` is assigned to it.\n\n Note: The ``with`` statement guarantees that if the ``__enter__()``\n method returns without an error, then ``__exit__()`` will always\n be called. Thus, if an error occurs during the assignment to the\n target list, it will be treated the same as an error occurring\n within the suite would be. See step 6 below.\n\n5. The suite is executed.\n\n6. The context manager\'s ``__exit__()`` method is invoked. If an\n exception caused the suite to be exited, its type, value, and\n traceback are passed as arguments to ``__exit__()``. Otherwise,\n three ``None`` arguments are supplied.\n\n If the suite was exited due to an exception, and the return value\n from the ``__exit__()`` method was false, the exception is\n reraised. 
If the return value was true, the exception is\n suppressed, and execution continues with the statement following\n the ``with`` statement.\n\n If the suite was exited for any reason other than an exception, the\n return value from ``__exit__()`` is ignored, and execution proceeds\n at the normal location for the kind of exit that was taken.\n\nWith more than one item, the context managers are processed as if\nmultiple ``with`` statements were nested:\n\n with A() as a, B() as b:\n suite\n\nis equivalent to\n\n with A() as a:\n with B() as b:\n suite\n\nNote: In Python 2.5, the ``with`` statement is only allowed when the\n ``with_statement`` feature has been enabled. It is always enabled\n in Python 2.6.\n\nChanged in version 2.7: Support for multiple context expressions.\n\nSee also:\n\n **PEP 0343** - The "with" statement\n The specification, background, and examples for the Python\n ``with`` statement.\n\n\nFunction definitions\n====================\n\nA function definition defines a user-defined function object (see\nsection *The standard type hierarchy*):\n\n decorated ::= decorators (classdef | funcdef)\n decorators ::= decorator+\n decorator ::= "@" dotted_name ["(" [argument_list [","]] ")"] NEWLINE\n funcdef ::= "def" funcname "(" [parameter_list] ")" ":" suite\n dotted_name ::= identifier ("." identifier)*\n parameter_list ::= (defparameter ",")*\n ( "*" identifier [, "**" identifier]\n | "**" identifier\n | defparameter [","] )\n defparameter ::= parameter ["=" expression]\n sublist ::= parameter ("," parameter)* [","]\n parameter ::= identifier | "(" sublist ")"\n funcname ::= identifier\n\nA function definition is an executable statement. Its execution binds\nthe function name in the current local namespace to a function object\n(a wrapper around the executable code for the function). This\nfunction object contains a reference to the current global namespace\nas the global namespace to be used when the function is called.\n\nThe function definition does not execute the function body; this gets\nexecuted only when the function is called. [3]\n\nA function definition may be wrapped by one or more *decorator*\nexpressions. Decorator expressions are evaluated when the function is\ndefined, in the scope that contains the function definition. The\nresult must be a callable, which is invoked with the function object\nas the only argument. The returned value is bound to the function name\ninstead of the function object. Multiple decorators are applied in\nnested fashion. For example, the following code:\n\n @f1(arg)\n @f2\n def func(): pass\n\nis equivalent to:\n\n def func(): pass\n func = f1(arg)(f2(func))\n\nWhen one or more top-level parameters have the form *parameter* ``=``\n*expression*, the function is said to have "default parameter values."\nFor a parameter with a default value, the corresponding argument may\nbe omitted from a call, in which case the parameter\'s default value is\nsubstituted. If a parameter has a default value, all following\nparameters must also have a default value --- this is a syntactic\nrestriction that is not expressed by the grammar.\n\n**Default parameter values are evaluated when the function definition\nis executed.** This means that the expression is evaluated once, when\nthe function is defined, and that that same "pre-computed" value is\nused for each call. This is especially important to understand when a\ndefault parameter is a mutable object, such as a list or a dictionary:\nif the function modifies the object (e.g. 
by appending an item to a\nlist), the default value is in effect modified. This is generally not\nwhat was intended. A way around this is to use ``None`` as the\ndefault, and explicitly test for it in the body of the function, e.g.:\n\n def whats_on_the_telly(penguin=None):\n if penguin is None:\n penguin = []\n penguin.append("property of the zoo")\n return penguin\n\nFunction call semantics are described in more detail in section\n*Calls*. A function call always assigns values to all parameters\nmentioned in the parameter list, either from position arguments, from\nkeyword arguments, or from default values. If the form\n"``*identifier``" is present, it is initialized to a tuple receiving\nany excess positional parameters, defaulting to the empty tuple. If\nthe form "``**identifier``" is present, it is initialized to a new\ndictionary receiving any excess keyword arguments, defaulting to a new\nempty dictionary.\n\nIt is also possible to create anonymous functions (functions not bound\nto a name), for immediate use in expressions. This uses lambda forms,\ndescribed in section *Lambdas*. Note that the lambda form is merely a\nshorthand for a simplified function definition; a function defined in\na "``def``" statement can be passed around or assigned to another name\njust like a function defined by a lambda form. The "``def``" form is\nactually more powerful since it allows the execution of multiple\nstatements.\n\n**Programmer\'s note:** Functions are first-class objects. A "``def``"\nform executed inside a function definition defines a local function\nthat can be returned or passed around. Free variables used in the\nnested function can access the local variables of the function\ncontaining the def. See section *Naming and binding* for details.\n\n\nClass definitions\n=================\n\nA class definition defines a class object (see section *The standard\ntype hierarchy*):\n\n classdef ::= "class" classname [inheritance] ":" suite\n inheritance ::= "(" [expression_list] ")"\n classname ::= identifier\n\nA class definition is an executable statement. It first evaluates the\ninheritance list, if present. Each item in the inheritance list\nshould evaluate to a class object or class type which allows\nsubclassing. The class\'s suite is then executed in a new execution\nframe (see section *Naming and binding*), using a newly created local\nnamespace and the original global namespace. (Usually, the suite\ncontains only function definitions.) When the class\'s suite finishes\nexecution, its execution frame is discarded but its local namespace is\nsaved. [4] A class object is then created using the inheritance list\nfor the base classes and the saved local namespace for the attribute\ndictionary. The class name is bound to this class object in the\noriginal local namespace.\n\n**Programmer\'s note:** Variables defined in the class definition are\nclass variables; they are shared by all instances. To create instance\nvariables, they can be set in a method with ``self.name = value``.\nBoth class and instance variables are accessible through the notation\n"``self.name``", and an instance variable hides a class variable with\nthe same name when accessed in this way. Class variables can be used\nas defaults for instance variables, but using mutable values there can\nlead to unexpected results. 
For *new-style class*es, descriptors can\nbe used to create instance variables with different implementation\ndetails.\n\nClass definitions, like function definitions, may be wrapped by one or\nmore *decorator* expressions. The evaluation rules for the decorator\nexpressions are the same as for functions. The result must be a class\nobject, which is then bound to the class name.\n\n-[ Footnotes ]-\n\n[1] The exception is propagated to the invocation stack only if there\n is no ``finally`` clause that negates the exception.\n\n[2] Currently, control "flows off the end" except in the case of an\n exception or the execution of a ``return``, ``continue``, or\n ``break`` statement.\n\n[3] A string literal appearing as the first statement in the function\n body is transformed into the function\'s ``__doc__`` attribute and\n therefore the function\'s *docstring*.\n\n[4] A string literal appearing as the first statement in the class\n body is transformed into the namespace\'s ``__doc__`` item and\n therefore the class\'s *docstring*.\n', + 'compound': u'\nCompound statements\n*******************\n\nCompound statements contain (groups of) other statements; they affect\nor control the execution of those other statements in some way. In\ngeneral, compound statements span multiple lines, although in simple\nincarnations a whole compound statement may be contained in one line.\n\nThe ``if``, ``while`` and ``for`` statements implement traditional\ncontrol flow constructs. ``try`` specifies exception handlers and/or\ncleanup code for a group of statements. Function and class\ndefinitions are also syntactically compound statements.\n\nCompound statements consist of one or more \'clauses.\' A clause\nconsists of a header and a \'suite.\' The clause headers of a\nparticular compound statement are all at the same indentation level.\nEach clause header begins with a uniquely identifying keyword and ends\nwith a colon. A suite is a group of statements controlled by a\nclause. A suite can be one or more semicolon-separated simple\nstatements on the same line as the header, following the header\'s\ncolon, or it can be one or more indented statements on subsequent\nlines. Only the latter form of suite can contain nested compound\nstatements; the following is illegal, mostly because it wouldn\'t be\nclear to which ``if`` clause a following ``else`` clause would belong:\n\n if test1: if test2: print x\n\nAlso note that the semicolon binds tighter than the colon in this\ncontext, so that in the following example, either all or none of the\n``print`` statements are executed:\n\n if x < y < z: print x; print y; print z\n\nSummarizing:\n\n compound_stmt ::= if_stmt\n | while_stmt\n | for_stmt\n | try_stmt\n | with_stmt\n | funcdef\n | classdef\n | decorated\n suite ::= stmt_list NEWLINE | NEWLINE INDENT statement+ DEDENT\n statement ::= stmt_list NEWLINE | compound_stmt\n stmt_list ::= simple_stmt (";" simple_stmt)* [";"]\n\nNote that statements always end in a ``NEWLINE`` possibly followed by\na ``DEDENT``. 
Also note that optional continuation clauses always\nbegin with a keyword that cannot start a statement, thus there are no\nambiguities (the \'dangling ``else``\' problem is solved in Python by\nrequiring nested ``if`` statements to be indented).\n\nThe formatting of the grammar rules in the following sections places\neach clause on a separate line for clarity.\n\n\nThe ``if`` statement\n====================\n\nThe ``if`` statement is used for conditional execution:\n\n if_stmt ::= "if" expression ":" suite\n ( "elif" expression ":" suite )*\n ["else" ":" suite]\n\nIt selects exactly one of the suites by evaluating the expressions one\nby one until one is found to be true (see section *Boolean operations*\nfor the definition of true and false); then that suite is executed\n(and no other part of the ``if`` statement is executed or evaluated).\nIf all expressions are false, the suite of the ``else`` clause, if\npresent, is executed.\n\n\nThe ``while`` statement\n=======================\n\nThe ``while`` statement is used for repeated execution as long as an\nexpression is true:\n\n while_stmt ::= "while" expression ":" suite\n ["else" ":" suite]\n\nThis repeatedly tests the expression and, if it is true, executes the\nfirst suite; if the expression is false (which may be the first time\nit is tested) the suite of the ``else`` clause, if present, is\nexecuted and the loop terminates.\n\nA ``break`` statement executed in the first suite terminates the loop\nwithout executing the ``else`` clause\'s suite. A ``continue``\nstatement executed in the first suite skips the rest of the suite and\ngoes back to testing the expression.\n\n\nThe ``for`` statement\n=====================\n\nThe ``for`` statement is used to iterate over the elements of a\nsequence (such as a string, tuple or list) or other iterable object:\n\n for_stmt ::= "for" target_list "in" expression_list ":" suite\n ["else" ":" suite]\n\nThe expression list is evaluated once; it should yield an iterable\nobject. An iterator is created for the result of the\n``expression_list``. The suite is then executed once for each item\nprovided by the iterator, in the order of ascending indices. Each\nitem in turn is assigned to the target list using the standard rules\nfor assignments, and then the suite is executed. When the items are\nexhausted (which is immediately when the sequence is empty), the suite\nin the ``else`` clause, if present, is executed, and the loop\nterminates.\n\nA ``break`` statement executed in the first suite terminates the loop\nwithout executing the ``else`` clause\'s suite. A ``continue``\nstatement executed in the first suite skips the rest of the suite and\ncontinues with the next item, or with the ``else`` clause if there was\nno next item.\n\nThe suite may assign to the variable(s) in the target list; this does\nnot affect the next item assigned to it.\n\nThe target list is not deleted when the loop is finished, but if the\nsequence is empty, it will not have been assigned to at all by the\nloop. Hint: the built-in function ``range()`` returns a sequence of\nintegers suitable to emulate the effect of Pascal\'s ``for i := a to b\ndo``; e.g., ``range(3)`` returns the list ``[0, 1, 2]``.\n\nNote: There is a subtlety when the sequence is being modified by the loop\n (this can only occur for mutable sequences, i.e. lists). An internal\n counter is used to keep track of which item is used next, and this\n is incremented on each iteration. When this counter has reached the\n length of the sequence the loop terminates. 
This means that if the\n suite deletes the current (or a previous) item from the sequence,\n the next item will be skipped (since it gets the index of the\n current item which has already been treated). Likewise, if the\n suite inserts an item in the sequence before the current item, the\n current item will be treated again the next time through the loop.\n This can lead to nasty bugs that can be avoided by making a\n temporary copy using a slice of the whole sequence, e.g.,\n\n for x in a[:]:\n if x < 0: a.remove(x)\n\n\nThe ``try`` statement\n=====================\n\nThe ``try`` statement specifies exception handlers and/or cleanup code\nfor a group of statements:\n\n try_stmt ::= try1_stmt | try2_stmt\n try1_stmt ::= "try" ":" suite\n ("except" [expression [("as" | ",") target]] ":" suite)+\n ["else" ":" suite]\n ["finally" ":" suite]\n try2_stmt ::= "try" ":" suite\n "finally" ":" suite\n\nChanged in version 2.5: In previous versions of Python,\n``try``...``except``...``finally`` did not work. ``try``...``except``\nhad to be nested in ``try``...``finally``.\n\nThe ``except`` clause(s) specify one or more exception handlers. When\nno exception occurs in the ``try`` clause, no exception handler is\nexecuted. When an exception occurs in the ``try`` suite, a search for\nan exception handler is started. This search inspects the except\nclauses in turn until one is found that matches the exception. An\nexpression-less except clause, if present, must be last; it matches\nany exception. For an except clause with an expression, that\nexpression is evaluated, and the clause matches the exception if the\nresulting object is "compatible" with the exception. An object is\ncompatible with an exception if it is the class or a base class of the\nexception object, a tuple containing an item compatible with the\nexception, or, in the (deprecated) case of string exceptions, is the\nraised string itself (note that the object identities must match, i.e.\nit must be the same string object, not just a string with the same\nvalue).\n\nIf no except clause matches the exception, the search for an exception\nhandler continues in the surrounding code and on the invocation stack.\n[1]\n\nIf the evaluation of an expression in the header of an except clause\nraises an exception, the original search for a handler is canceled and\na search starts for the new exception in the surrounding code and on\nthe call stack (it is treated as if the entire ``try`` statement\nraised the exception).\n\nWhen a matching except clause is found, the exception is assigned to\nthe target specified in that except clause, if present, and the except\nclause\'s suite is executed. All except clauses must have an\nexecutable block. When the end of this block is reached, execution\ncontinues normally after the entire try statement. (This means that\nif two nested handlers exist for the same exception, and the exception\noccurs in the try clause of the inner handler, the outer handler will\nnot handle the exception.)\n\nBefore an except clause\'s suite is executed, details about the\nexception are assigned to three variables in the ``sys`` module:\n``sys.exc_type`` receives the object identifying the exception;\n``sys.exc_value`` receives the exception\'s parameter;\n``sys.exc_traceback`` receives a traceback object (see section *The\nstandard type hierarchy*) identifying the point in the program where\nthe exception occurred. 
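The ``try``/``except``/``else``/``finally`` ordering described above, as a small sketch (``divide()`` is an invented function):

    def divide(a, b):
        try:
            result = a / b
        except ZeroDivisionError:
            print 'caught division by zero'
        else:
            print 'no exception, result is', result
        finally:
            print 'finally clause always runs'

    divide(6, 3)
    divide(6, 0)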
These details are also available through the\n``sys.exc_info()`` function, which returns a tuple ``(exc_type,\nexc_value, exc_traceback)``. Use of the corresponding variables is\ndeprecated in favor of this function, since their use is unsafe in a\nthreaded program. As of Python 1.5, the variables are restored to\ntheir previous values (before the call) when returning from a function\nthat handled an exception.\n\nThe optional ``else`` clause is executed if and when control flows off\nthe end of the ``try`` clause. [2] Exceptions in the ``else`` clause\nare not handled by the preceding ``except`` clauses.\n\nIf ``finally`` is present, it specifies a \'cleanup\' handler. The\n``try`` clause is executed, including any ``except`` and ``else``\nclauses. If an exception occurs in any of the clauses and is not\nhandled, the exception is temporarily saved. The ``finally`` clause is\nexecuted. If there is a saved exception, it is re-raised at the end\nof the ``finally`` clause. If the ``finally`` clause raises another\nexception or executes a ``return`` or ``break`` statement, the saved\nexception is lost. The exception information is not available to the\nprogram during execution of the ``finally`` clause.\n\nWhen a ``return``, ``break`` or ``continue`` statement is executed in\nthe ``try`` suite of a ``try``...``finally`` statement, the\n``finally`` clause is also executed \'on the way out.\' A ``continue``\nstatement is illegal in the ``finally`` clause. (The reason is a\nproblem with the current implementation --- this restriction may be\nlifted in the future).\n\nAdditional information on exceptions can be found in section\n*Exceptions*, and information on using the ``raise`` statement to\ngenerate exceptions may be found in section *The raise statement*.\n\n\nThe ``with`` statement\n======================\n\nNew in version 2.5.\n\nThe ``with`` statement is used to wrap the execution of a block with\nmethods defined by a context manager (see section *With Statement\nContext Managers*). This allows common\n``try``...``except``...``finally`` usage patterns to be encapsulated\nfor convenient reuse.\n\n with_stmt ::= "with" with_item ("," with_item)* ":" suite\n with_item ::= expression ["as" target]\n\nThe execution of the ``with`` statement with one "item" proceeds as\nfollows:\n\n1. The context expression (the expression given in the **with_item**)\n is evaluated to obtain a context manager.\n\n2. The context manager\'s ``__exit__()`` is loaded for later use.\n\n3. The context manager\'s ``__enter__()`` method is invoked.\n\n4. If a target was included in the ``with`` statement, the return\n value from ``__enter__()`` is assigned to it.\n\n Note: The ``with`` statement guarantees that if the ``__enter__()``\n method returns without an error, then ``__exit__()`` will always\n be called. Thus, if an error occurs during the assignment to the\n target list, it will be treated the same as an error occurring\n within the suite would be. See step 6 below.\n\n5. The suite is executed.\n\n6. The context manager\'s ``__exit__()`` method is invoked. If an\n exception caused the suite to be exited, its type, value, and\n traceback are passed as arguments to ``__exit__()``. Otherwise,\n three ``None`` arguments are supplied.\n\n If the suite was exited due to an exception, and the return value\n from the ``__exit__()`` method was false, the exception is\n reraised. 
If the return value was true, the exception is\n suppressed, and execution continues with the statement following\n the ``with`` statement.\n\n If the suite was exited for any reason other than an exception, the\n return value from ``__exit__()`` is ignored, and execution proceeds\n at the normal location for the kind of exit that was taken.\n\nWith more than one item, the context managers are processed as if\nmultiple ``with`` statements were nested:\n\n with A() as a, B() as b:\n suite\n\nis equivalent to\n\n with A() as a:\n with B() as b:\n suite\n\nNote: In Python 2.5, the ``with`` statement is only allowed when the\n ``with_statement`` feature has been enabled. It is always enabled\n in Python 2.6.\n\nChanged in version 2.7: Support for multiple context expressions.\n\nSee also:\n\n **PEP 0343** - The "with" statement\n The specification, background, and examples for the Python\n ``with`` statement.\n\n\nFunction definitions\n====================\n\nA function definition defines a user-defined function object (see\nsection *The standard type hierarchy*):\n\n decorated ::= decorators (classdef | funcdef)\n decorators ::= decorator+\n decorator ::= "@" dotted_name ["(" [argument_list [","]] ")"] NEWLINE\n funcdef ::= "def" funcname "(" [parameter_list] ")" ":" suite\n dotted_name ::= identifier ("." identifier)*\n parameter_list ::= (defparameter ",")*\n ( "*" identifier [, "**" identifier]\n | "**" identifier\n | defparameter [","] )\n defparameter ::= parameter ["=" expression]\n sublist ::= parameter ("," parameter)* [","]\n parameter ::= identifier | "(" sublist ")"\n funcname ::= identifier\n\nA function definition is an executable statement. Its execution binds\nthe function name in the current local namespace to a function object\n(a wrapper around the executable code for the function). This\nfunction object contains a reference to the current global namespace\nas the global namespace to be used when the function is called.\n\nThe function definition does not execute the function body; this gets\nexecuted only when the function is called. [3]\n\nA function definition may be wrapped by one or more *decorator*\nexpressions. Decorator expressions are evaluated when the function is\ndefined, in the scope that contains the function definition. The\nresult must be a callable, which is invoked with the function object\nas the only argument. The returned value is bound to the function name\ninstead of the function object. Multiple decorators are applied in\nnested fashion. For example, the following code:\n\n @f1(arg)\n @f2\n def func(): pass\n\nis equivalent to:\n\n def func(): pass\n func = f1(arg)(f2(func))\n\nWhen one or more top-level parameters have the form *parameter* ``=``\n*expression*, the function is said to have "default parameter values."\nFor a parameter with a default value, the corresponding argument may\nbe omitted from a call, in which case the parameter\'s default value is\nsubstituted. If a parameter has a default value, all following\nparameters must also have a default value --- this is a syntactic\nrestriction that is not expressed by the grammar.\n\n**Default parameter values are evaluated when the function definition\nis executed.** This means that the expression is evaluated once, when\nthe function is defined, and that that same "pre-computed" value is\nused for each call. This is especially important to understand when a\ndefault parameter is a mutable object, such as a list or a dictionary:\nif the function modifies the object (e.g. 
by appending an item to a\nlist), the default value is in effect modified. This is generally not\nwhat was intended. A way around this is to use ``None`` as the\ndefault, and explicitly test for it in the body of the function, e.g.:\n\n def whats_on_the_telly(penguin=None):\n if penguin is None:\n penguin = []\n penguin.append("property of the zoo")\n return penguin\n\nFunction call semantics are described in more detail in section\n*Calls*. A function call always assigns values to all parameters\nmentioned in the parameter list, either from position arguments, from\nkeyword arguments, or from default values. If the form\n"``*identifier``" is present, it is initialized to a tuple receiving\nany excess positional parameters, defaulting to the empty tuple. If\nthe form "``**identifier``" is present, it is initialized to a new\ndictionary receiving any excess keyword arguments, defaulting to a new\nempty dictionary.\n\nIt is also possible to create anonymous functions (functions not bound\nto a name), for immediate use in expressions. This uses lambda forms,\ndescribed in section *Lambdas*. Note that the lambda form is merely a\nshorthand for a simplified function definition; a function defined in\na "``def``" statement can be passed around or assigned to another name\njust like a function defined by a lambda form. The "``def``" form is\nactually more powerful since it allows the execution of multiple\nstatements.\n\n**Programmer\'s note:** Functions are first-class objects. A "``def``"\nform executed inside a function definition defines a local function\nthat can be returned or passed around. Free variables used in the\nnested function can access the local variables of the function\ncontaining the def. See section *Naming and binding* for details.\n\n\nClass definitions\n=================\n\nA class definition defines a class object (see section *The standard\ntype hierarchy*):\n\n classdef ::= "class" classname [inheritance] ":" suite\n inheritance ::= "(" [expression_list] ")"\n classname ::= identifier\n\nA class definition is an executable statement. It first evaluates the\ninheritance list, if present. Each item in the inheritance list\nshould evaluate to a class object or class type which allows\nsubclassing. The class\'s suite is then executed in a new execution\nframe (see section *Naming and binding*), using a newly created local\nnamespace and the original global namespace. (Usually, the suite\ncontains only function definitions.) When the class\'s suite finishes\nexecution, its execution frame is discarded but its local namespace is\nsaved. [4] A class object is then created using the inheritance list\nfor the base classes and the saved local namespace for the attribute\ndictionary. The class name is bound to this class object in the\noriginal local namespace.\n\n**Programmer\'s note:** Variables defined in the class definition are\nclass variables; they are shared by all instances. To create instance\nvariables, they can be set in a method with ``self.name = value``.\nBoth class and instance variables are accessible through the notation\n"``self.name``", and an instance variable hides a class variable with\nthe same name when accessed in this way. Class variables can be used\nas defaults for instance variables, but using mutable values there can\nlead to unexpected results. 
For *new-style class*es, descriptors can\nbe used to create instance variables with different implementation\ndetails.\n\nClass definitions, like function definitions, may be wrapped by one or\nmore *decorator* expressions. The evaluation rules for the decorator\nexpressions are the same as for functions. The result must be a class\nobject, which is then bound to the class name.\n\n-[ Footnotes ]-\n\n[1] The exception is propagated to the invocation stack only if there\n is no ``finally`` clause that negates the exception.\n\n[2] Currently, control "flows off the end" except in the case of an\n exception or the execution of a ``return``, ``continue``, or\n ``break`` statement.\n\n[3] A string literal appearing as the first statement in the function\n body is transformed into the function\'s ``__doc__`` attribute and\n therefore the function\'s *docstring*.\n\n[4] A string literal appearing as the first statement in the class\n body is transformed into the namespace\'s ``__doc__`` item and\n therefore the class\'s *docstring*.\n', 'context-managers': u'\nWith Statement Context Managers\n*******************************\n\nNew in version 2.5.\n\nA *context manager* is an object that defines the runtime context to\nbe established when executing a ``with`` statement. The context\nmanager handles the entry into, and the exit from, the desired runtime\ncontext for the execution of the block of code. Context managers are\nnormally invoked using the ``with`` statement (described in section\n*The with statement*), but can also be used by directly invoking their\nmethods.\n\nTypical uses of context managers include saving and restoring various\nkinds of global state, locking and unlocking resources, closing opened\nfiles, etc.\n\nFor more information on context managers, see *Context Manager Types*.\n\nobject.__enter__(self)\n\n Enter the runtime context related to this object. The ``with``\n statement will bind this method\'s return value to the target(s)\n specified in the ``as`` clause of the statement, if any.\n\nobject.__exit__(self, exc_type, exc_value, traceback)\n\n Exit the runtime context related to this object. The parameters\n describe the exception that caused the context to be exited. If the\n context was exited without an exception, all three arguments will\n be ``None``.\n\n If an exception is supplied, and the method wishes to suppress the\n exception (i.e., prevent it from being propagated), it should\n return a true value. Otherwise, the exception will be processed\n normally upon exit from this method.\n\n Note that ``__exit__()`` methods should not reraise the passed-in\n exception; this is the caller\'s responsibility.\n\nSee also:\n\n **PEP 0343** - The "with" statement\n The specification, background, and examples for the Python\n ``with`` statement.\n', 'continue': u'\nThe ``continue`` statement\n**************************\n\n continue_stmt ::= "continue"\n\n``continue`` may only occur syntactically nested in a ``for`` or\n``while`` loop, but not nested in a function or class definition or\n``finally`` clause within that loop. 
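The ``__enter__()``/``__exit__()`` protocol from the context-manager section above, sketched with an invented ``Timer`` class:

    import time

    class Timer(object):
        def __enter__(self):
            self.start = time.time()
            return self                 # bound to the 'as' target, if any
        def __exit__(self, exc_type, exc_value, traceback):
            print 'took %.3f seconds' % (time.time() - self.start)
            return False                # do not suppress exceptions

    with Timer() as t:
        sum(xrange(1000000))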
It continues with the next cycle\nof the nearest enclosing loop.\n\nWhen ``continue`` passes control out of a ``try`` statement with a\n``finally`` clause, that ``finally`` clause is executed before really\nstarting the next loop cycle.\n', 'conversions': u'\nArithmetic conversions\n**********************\n\nWhen a description of an arithmetic operator below uses the phrase\n"the numeric arguments are converted to a common type," the arguments\nare coerced using the coercion rules listed at *Coercion rules*. If\nboth arguments are standard numeric types, the following coercions are\napplied:\n\n* If either argument is a complex number, the other is converted to\n complex;\n\n* otherwise, if either argument is a floating point number, the other\n is converted to floating point;\n\n* otherwise, if either argument is a long integer, the other is\n converted to long integer;\n\n* otherwise, both must be plain integers and no conversion is\n necessary.\n\nSome additional rules apply for certain operators (e.g., a string left\nargument to the \'%\' operator). Extensions can define their own\ncoercions.\n', 'customization': u'\nBasic customization\n*******************\n\nobject.__new__(cls[, ...])\n\n Called to create a new instance of class *cls*. ``__new__()`` is a\n static method (special-cased so you need not declare it as such)\n that takes the class of which an instance was requested as its\n first argument. The remaining arguments are those passed to the\n object constructor expression (the call to the class). The return\n value of ``__new__()`` should be the new object instance (usually\n an instance of *cls*).\n\n Typical implementations create a new instance of the class by\n invoking the superclass\'s ``__new__()`` method using\n ``super(currentclass, cls).__new__(cls[, ...])`` with appropriate\n arguments and then modifying the newly-created instance as\n necessary before returning it.\n\n If ``__new__()`` returns an instance of *cls*, then the new\n instance\'s ``__init__()`` method will be invoked like\n ``__init__(self[, ...])``, where *self* is the new instance and the\n remaining arguments are the same as were passed to ``__new__()``.\n\n If ``__new__()`` does not return an instance of *cls*, then the new\n instance\'s ``__init__()`` method will not be invoked.\n\n ``__new__()`` is intended mainly to allow subclasses of immutable\n types (like int, str, or tuple) to customize instance creation. It\n is also commonly overridden in custom metaclasses in order to\n customize class creation.\n\nobject.__init__(self[, ...])\n\n Called when the instance is created. The arguments are those\n passed to the class constructor expression. If a base class has an\n ``__init__()`` method, the derived class\'s ``__init__()`` method,\n if any, must explicitly call it to ensure proper initialization of\n the base class part of the instance; for example:\n ``BaseClass.__init__(self, [args...])``. As a special constraint\n on constructors, no value may be returned; doing so will cause a\n ``TypeError`` to be raised at runtime.\n\nobject.__del__(self)\n\n Called when the instance is about to be destroyed. This is also\n called a destructor. If a base class has a ``__del__()`` method,\n the derived class\'s ``__del__()`` method, if any, must explicitly\n call it to ensure proper deletion of the base class part of the\n instance. Note that it is possible (though not recommended!) for\n the ``__del__()`` method to postpone destruction of the instance by\n creating a new reference to it. 
It may then be called at a later\n time when this new reference is deleted. It is not guaranteed that\n ``__del__()`` methods are called for objects that still exist when\n the interpreter exits.\n\n Note: ``del x`` doesn\'t directly call ``x.__del__()`` --- the former\n decrements the reference count for ``x`` by one, and the latter\n is only called when ``x``\'s reference count reaches zero. Some\n common situations that may prevent the reference count of an\n object from going to zero include: circular references between\n objects (e.g., a doubly-linked list or a tree data structure with\n parent and child pointers); a reference to the object on the\n stack frame of a function that caught an exception (the traceback\n stored in ``sys.exc_traceback`` keeps the stack frame alive); or\n a reference to the object on the stack frame that raised an\n unhandled exception in interactive mode (the traceback stored in\n ``sys.last_traceback`` keeps the stack frame alive). The first\n situation can only be remedied by explicitly breaking the cycles;\n the latter two situations can be resolved by storing ``None`` in\n ``sys.exc_traceback`` or ``sys.last_traceback``. Circular\n references which are garbage are detected when the option cycle\n detector is enabled (it\'s on by default), but can only be cleaned\n up if there are no Python-level ``__del__()`` methods involved.\n Refer to the documentation for the ``gc`` module for more\n information about how ``__del__()`` methods are handled by the\n cycle detector, particularly the description of the ``garbage``\n value.\n\n Warning: Due to the precarious circumstances under which ``__del__()``\n methods are invoked, exceptions that occur during their execution\n are ignored, and a warning is printed to ``sys.stderr`` instead.\n Also, when ``__del__()`` is invoked in response to a module being\n deleted (e.g., when execution of the program is done), other\n globals referenced by the ``__del__()`` method may already have\n been deleted or in the process of being torn down (e.g. the\n import machinery shutting down). For this reason, ``__del__()``\n methods should do the absolute minimum needed to maintain\n external invariants. Starting with version 1.5, Python\n guarantees that globals whose name begins with a single\n underscore are deleted from their module before other globals are\n deleted; if no other references to such globals exist, this may\n help in assuring that imported modules are still available at the\n time when the ``__del__()`` method is called.\n\nobject.__repr__(self)\n\n Called by the ``repr()`` built-in function and by string\n conversions (reverse quotes) to compute the "official" string\n representation of an object. If at all possible, this should look\n like a valid Python expression that could be used to recreate an\n object with the same value (given an appropriate environment). If\n this is not possible, a string of the form ``<...some useful\n description...>`` should be returned. The return value must be a\n string object. If a class defines ``__repr__()`` but not\n ``__str__()``, then ``__repr__()`` is also used when an "informal"\n string representation of instances of that class is required.\n\n This is typically used for debugging, so it is important that the\n representation is information-rich and unambiguous.\n\nobject.__str__(self)\n\n Called by the ``str()`` built-in function and by the ``print``\n statement to compute the "informal" string representation of an\n object. 
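A short sketch of the ``__repr__()``/``__str__()`` distinction described above (``Point`` is an invented class):

    class Point(object):
        def __init__(self, x, y):
            self.x, self.y = x, y
        def __repr__(self):
            return 'Point(%r, %r)' % (self.x, self.y)   # unambiguous, debug-friendly
        def __str__(self):
            return '(%s, %s)' % (self.x, self.y)        # informal form

    p = Point(1, 2)
    print repr(p)                       # Point(1, 2)
    print p                             # (1, 2)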
This differs from ``__repr__()`` in that it does not have\n to be a valid Python expression: a more convenient or concise\n representation may be used instead. The return value must be a\n string object.\n\nobject.__lt__(self, other)\nobject.__le__(self, other)\nobject.__eq__(self, other)\nobject.__ne__(self, other)\nobject.__gt__(self, other)\nobject.__ge__(self, other)\n\n New in version 2.1.\n\n These are the so-called "rich comparison" methods, and are called\n for comparison operators in preference to ``__cmp__()`` below. The\n correspondence between operator symbols and method names is as\n follows: ``xy`` call ``x.__ne__(y)``, ``x>y`` calls ``x.__gt__(y)``, and\n ``x>=y`` calls ``x.__ge__(y)``.\n\n A rich comparison method may return the singleton\n ``NotImplemented`` if it does not implement the operation for a\n given pair of arguments. By convention, ``False`` and ``True`` are\n returned for a successful comparison. However, these methods can\n return any value, so if the comparison operator is used in a\n Boolean context (e.g., in the condition of an ``if`` statement),\n Python will call ``bool()`` on the value to determine if the result\n is true or false.\n\n There are no implied relationships among the comparison operators.\n The truth of ``x==y`` does not imply that ``x!=y`` is false.\n Accordingly, when defining ``__eq__()``, one should also define\n ``__ne__()`` so that the operators will behave as expected. See\n the paragraph on ``__hash__()`` for some important notes on\n creating *hashable* objects which support custom comparison\n operations and are usable as dictionary keys.\n\n There are no swapped-argument versions of these methods (to be used\n when the left argument does not support the operation but the right\n argument does); rather, ``__lt__()`` and ``__gt__()`` are each\n other\'s reflection, ``__le__()`` and ``__ge__()`` are each other\'s\n reflection, and ``__eq__()`` and ``__ne__()`` are their own\n reflection.\n\n Arguments to rich comparison methods are never coerced.\n\n To automatically generate ordering operations from a single root\n operation, see ``functools.total_ordering()``.\n\nobject.__cmp__(self, other)\n\n Called by comparison operations if rich comparison (see above) is\n not defined. Should return a negative integer if ``self < other``,\n zero if ``self == other``, a positive integer if ``self > other``.\n If no ``__cmp__()``, ``__eq__()`` or ``__ne__()`` operation is\n defined, class instances are compared by object identity\n ("address"). See also the description of ``__hash__()`` for some\n important notes on creating *hashable* objects which support custom\n comparison operations and are usable as dictionary keys. (Note: the\n restriction that exceptions are not propagated by ``__cmp__()`` has\n been removed since Python 1.5.)\n\nobject.__rcmp__(self, other)\n\n Changed in version 2.1: No longer supported.\n\nobject.__hash__(self)\n\n Called by built-in function ``hash()`` and for operations on\n members of hashed collections including ``set``, ``frozenset``, and\n ``dict``. ``__hash__()`` should return an integer. The only\n required property is that objects which compare equal have the same\n hash value; it is advised to somehow mix together (e.g. 
using\n exclusive or) the hash values for the components of the object that\n also play a part in comparison of objects.\n\n If a class does not define a ``__cmp__()`` or ``__eq__()`` method\n it should not define a ``__hash__()`` operation either; if it\n defines ``__cmp__()`` or ``__eq__()`` but not ``__hash__()``, its\n instances will not be usable in hashed collections. If a class\n defines mutable objects and implements a ``__cmp__()`` or\n ``__eq__()`` method, it should not implement ``__hash__()``, since\n hashable collection implementations require that a object\'s hash\n value is immutable (if the object\'s hash value changes, it will be\n in the wrong hash bucket).\n\n User-defined classes have ``__cmp__()`` and ``__hash__()`` methods\n by default; with them, all objects compare unequal (except with\n themselves) and ``x.__hash__()`` returns ``id(x)``.\n\n Classes which inherit a ``__hash__()`` method from a parent class\n but change the meaning of ``__cmp__()`` or ``__eq__()`` such that\n the hash value returned is no longer appropriate (e.g. by switching\n to a value-based concept of equality instead of the default\n identity based equality) can explicitly flag themselves as being\n unhashable by setting ``__hash__ = None`` in the class definition.\n Doing so means that not only will instances of the class raise an\n appropriate ``TypeError`` when a program attempts to retrieve their\n hash value, but they will also be correctly identified as\n unhashable when checking ``isinstance(obj, collections.Hashable)``\n (unlike classes which define their own ``__hash__()`` to explicitly\n raise ``TypeError``).\n\n Changed in version 2.5: ``__hash__()`` may now also return a long\n integer object; the 32-bit integer is then derived from the hash of\n that object.\n\n Changed in version 2.6: ``__hash__`` may now be set to ``None`` to\n explicitly flag instances of a class as unhashable.\n\nobject.__nonzero__(self)\n\n Called to implement truth value testing and the built-in operation\n ``bool()``; should return ``False`` or ``True``, or their integer\n equivalents ``0`` or ``1``. When this method is not defined,\n ``__len__()`` is called, if it is defined, and the object is\n considered true if its result is nonzero. If a class defines\n neither ``__len__()`` nor ``__nonzero__()``, all its instances are\n considered true.\n\nobject.__unicode__(self)\n\n Called to implement ``unicode()`` built-in; should return a Unicode\n object. When this method is not defined, string conversion is\n attempted, and the result of string conversion is converted to\n Unicode using the system default encoding.\n', - 'debugger': u'\n``pdb`` --- The Python Debugger\n*******************************\n\nThe module ``pdb`` defines an interactive source code debugger for\nPython programs. It supports setting (conditional) breakpoints and\nsingle stepping at the source line level, inspection of stack frames,\nsource code listing, and evaluation of arbitrary Python code in the\ncontext of any stack frame. It also supports post-mortem debugging\nand can be called under program control.\n\nThe debugger is extensible --- it is actually defined as the class\n``Pdb``. This is currently undocumented but easily understood by\nreading the source. The extension interface uses the modules ``bdb``\nand ``cmd``.\n\nThe debugger\'s prompt is ``(Pdb)``. 
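A small sketch may help make the comparison and hashing rules above concrete (the ``Account`` class is hypothetical; Python 2.7 semantics as documented). ``x == y`` calls ``x.__eq__(y)`` and ``x < y`` calls ``x.__lt__(y)``; as recommended above, ``__ne__()`` and ``__hash__()`` are kept consistent with ``__eq__()``:

    class Account(object):
        def __init__(self, number):
            self.number = number

        def __eq__(self, other):               # a == b calls a.__eq__(b)
            if not isinstance(other, Account):
                return NotImplemented
            return self.number == other.number

        def __ne__(self, other):               # defined alongside __eq__
            result = self.__eq__(other)
            if result is NotImplemented:
                return result
            return not result

        def __lt__(self, other):               # a < b calls a.__lt__(b)
            if not isinstance(other, Account):
                return NotImplemented
            return self.number < other.number

        def __hash__(self):
            # Objects that compare equal must hash equal, so hash only
            # the value that takes part in comparison.
            return hash(self.number)

    Account(7) == Account(7)                   # True
    Account(3) < Account(5)                    # True
    hash(Account(7)) == hash(Account(7))       # True

    class CachedAccount(Account):
        # A subclass whose instances should not be hashable can flag
        # itself as described above:
        __hash__ = None

``functools.total_ordering()`` can supply the remaining ordering methods from ``__eq__()`` and ``__lt__()``.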
Typical usage to run a program\nunder control of the debugger is:\n\n >>> import pdb\n >>> import mymodule\n >>> pdb.run(\'mymodule.test()\')\n > (0)?()\n (Pdb) continue\n > (1)?()\n (Pdb) continue\n NameError: \'spam\'\n > (1)?()\n (Pdb)\n\n``pdb.py`` can also be invoked as a script to debug other scripts.\nFor example:\n\n python -m pdb myscript.py\n\nWhen invoked as a script, pdb will automatically enter post-mortem\ndebugging if the program being debugged exits abnormally. After post-\nmortem debugging (or after normal exit of the program), pdb will\nrestart the program. Automatic restarting preserves pdb\'s state (such\nas breakpoints) and in most cases is more useful than quitting the\ndebugger upon program\'s exit.\n\nNew in version 2.4: Restarting post-mortem behavior added.\n\nThe typical usage to break into the debugger from a running program is\nto insert\n\n import pdb; pdb.set_trace()\n\nat the location you want to break into the debugger. You can then\nstep through the code following this statement, and continue running\nwithout the debugger using the ``c`` command.\n\nThe typical usage to inspect a crashed program is:\n\n >>> import pdb\n >>> import mymodule\n >>> mymodule.test()\n Traceback (most recent call last):\n File "", line 1, in ?\n File "./mymodule.py", line 4, in test\n test2()\n File "./mymodule.py", line 3, in test2\n print spam\n NameError: spam\n >>> pdb.pm()\n > ./mymodule.py(3)test2()\n -> print spam\n (Pdb)\n\nThe module defines the following functions; each enters the debugger\nin a slightly different way:\n\npdb.run(statement[, globals[, locals]])\n\n Execute the *statement* (given as a string) under debugger control.\n The debugger prompt appears before any code is executed; you can\n set breakpoints and type ``continue``, or you can step through the\n statement using ``step`` or ``next`` (all these commands are\n explained below). The optional *globals* and *locals* arguments\n specify the environment in which the code is executed; by default\n the dictionary of the module ``__main__`` is used. (See the\n explanation of the ``exec`` statement or the ``eval()`` built-in\n function.)\n\npdb.runeval(expression[, globals[, locals]])\n\n Evaluate the *expression* (given as a string) under debugger\n control. When ``runeval()`` returns, it returns the value of the\n expression. Otherwise this function is similar to ``run()``.\n\npdb.runcall(function[, argument, ...])\n\n Call the *function* (a function or method object, not a string)\n with the given arguments. When ``runcall()`` returns, it returns\n whatever the function call returned. The debugger prompt appears\n as soon as the function is entered.\n\npdb.set_trace()\n\n Enter the debugger at the calling stack frame. This is useful to\n hard-code a breakpoint at a given point in a program, even if the\n code is not otherwise being debugged (e.g. when an assertion\n fails).\n\npdb.post_mortem([traceback])\n\n Enter post-mortem debugging of the given *traceback* object. If no\n *traceback* is given, it uses the one of the exception that is\n currently being handled (an exception must be being handled if the\n default is to be used).\n\npdb.pm()\n\n Enter post-mortem debugging of the traceback found in\n ``sys.last_traceback``.\n\nThe ``run_*`` functions and ``set_trace()`` are aliases for\ninstantiating the ``Pdb`` class and calling the method of the same\nname. 
If you want to access further features, you have to do this\nyourself:\n\nclass class pdb.Pdb(completekey=\'tab\', stdin=None, stdout=None, skip=None)\n\n ``Pdb`` is the debugger class.\n\n The *completekey*, *stdin* and *stdout* arguments are passed to the\n underlying ``cmd.Cmd`` class; see the description there.\n\n The *skip* argument, if given, must be an iterable of glob-style\n module name patterns. The debugger will not step into frames that\n originate in a module that matches one of these patterns. [1]\n\n Example call to enable tracing with *skip*:\n\n import pdb; pdb.Pdb(skip=[\'django.*\']).set_trace()\n\n New in version 2.7: The *skip* argument.\n\n run(statement[, globals[, locals]])\n runeval(expression[, globals[, locals]])\n runcall(function[, argument, ...])\n set_trace()\n\n See the documentation for the functions explained above.\n', + 'debugger': u'\n``pdb`` --- The Python Debugger\n*******************************\n\nThe module ``pdb`` defines an interactive source code debugger for\nPython programs. It supports setting (conditional) breakpoints and\nsingle stepping at the source line level, inspection of stack frames,\nsource code listing, and evaluation of arbitrary Python code in the\ncontext of any stack frame. It also supports post-mortem debugging\nand can be called under program control.\n\nThe debugger is extensible --- it is actually defined as the class\n``Pdb``. This is currently undocumented but easily understood by\nreading the source. The extension interface uses the modules ``bdb``\nand ``cmd``.\n\nThe debugger\'s prompt is ``(Pdb)``. Typical usage to run a program\nunder control of the debugger is:\n\n >>> import pdb\n >>> import mymodule\n >>> pdb.run(\'mymodule.test()\')\n > (0)?()\n (Pdb) continue\n > (1)?()\n (Pdb) continue\n NameError: \'spam\'\n > (1)?()\n (Pdb)\n\n``pdb.py`` can also be invoked as a script to debug other scripts.\nFor example:\n\n python -m pdb myscript.py\n\nWhen invoked as a script, pdb will automatically enter post-mortem\ndebugging if the program being debugged exits abnormally. After post-\nmortem debugging (or after normal exit of the program), pdb will\nrestart the program. Automatic restarting preserves pdb\'s state (such\nas breakpoints) and in most cases is more useful than quitting the\ndebugger upon program\'s exit.\n\nNew in version 2.4: Restarting post-mortem behavior added.\n\nThe typical usage to break into the debugger from a running program is\nto insert\n\n import pdb; pdb.set_trace()\n\nat the location you want to break into the debugger. You can then\nstep through the code following this statement, and continue running\nwithout the debugger using the ``c`` command.\n\nThe typical usage to inspect a crashed program is:\n\n >>> import pdb\n >>> import mymodule\n >>> mymodule.test()\n Traceback (most recent call last):\n File "", line 1, in ?\n File "./mymodule.py", line 4, in test\n test2()\n File "./mymodule.py", line 3, in test2\n print spam\n NameError: spam\n >>> pdb.pm()\n > ./mymodule.py(3)test2()\n -> print spam\n (Pdb)\n\nThe module defines the following functions; each enters the debugger\nin a slightly different way:\n\npdb.run(statement[, globals[, locals]])\n\n Execute the *statement* (given as a string) under debugger control.\n The debugger prompt appears before any code is executed; you can\n set breakpoints and type ``continue``, or you can step through the\n statement using ``step`` or ``next`` (all these commands are\n explained below). 
The optional *globals* and *locals* arguments\n specify the environment in which the code is executed; by default\n the dictionary of the module ``__main__`` is used. (See the\n explanation of the ``exec`` statement or the ``eval()`` built-in\n function.)\n\npdb.runeval(expression[, globals[, locals]])\n\n Evaluate the *expression* (given as a string) under debugger\n control. When ``runeval()`` returns, it returns the value of the\n expression. Otherwise this function is similar to ``run()``.\n\npdb.runcall(function[, argument, ...])\n\n Call the *function* (a function or method object, not a string)\n with the given arguments. When ``runcall()`` returns, it returns\n whatever the function call returned. The debugger prompt appears\n as soon as the function is entered.\n\npdb.set_trace()\n\n Enter the debugger at the calling stack frame. This is useful to\n hard-code a breakpoint at a given point in a program, even if the\n code is not otherwise being debugged (e.g. when an assertion\n fails).\n\npdb.post_mortem([traceback])\n\n Enter post-mortem debugging of the given *traceback* object. If no\n *traceback* is given, it uses the one of the exception that is\n currently being handled (an exception must be being handled if the\n default is to be used).\n\npdb.pm()\n\n Enter post-mortem debugging of the traceback found in\n ``sys.last_traceback``.\n\nThe ``run*`` functions and ``set_trace()`` are aliases for\ninstantiating the ``Pdb`` class and calling the method of the same\nname. If you want to access further features, you have to do this\nyourself:\n\nclass class pdb.Pdb(completekey=\'tab\', stdin=None, stdout=None, skip=None)\n\n ``Pdb`` is the debugger class.\n\n The *completekey*, *stdin* and *stdout* arguments are passed to the\n underlying ``cmd.Cmd`` class; see the description there.\n\n The *skip* argument, if given, must be an iterable of glob-style\n module name patterns. The debugger will not step into frames that\n originate in a module that matches one of these patterns. [1]\n\n Example call to enable tracing with *skip*:\n\n import pdb; pdb.Pdb(skip=[\'django.*\']).set_trace()\n\n New in version 2.7: The *skip* argument.\n\n run(statement[, globals[, locals]])\n runeval(expression[, globals[, locals]])\n runcall(function[, argument, ...])\n set_trace()\n\n See the documentation for the functions explained above.\n', 'del': u'\nThe ``del`` statement\n*********************\n\n del_stmt ::= "del" target_list\n\nDeletion is recursively defined very similar to the way assignment is\ndefined. Rather that spelling it out in full details, here are some\nhints.\n\nDeletion of a target list recursively deletes each target, from left\nto right.\n\nDeletion of a name removes the binding of that name from the local or\nglobal namespace, depending on whether the name occurs in a ``global``\nstatement in the same code block. 
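A minimal sketch of these deletion rules (the names are arbitrary):

    x = [1, 2, 3]
    y = x
    del x          # unbinds the name 'x'; the list object survives through 'y'
    del y[0:2]     # deletion of a slicing is passed to the list: y == [3]
    del y[0]       # deletion of a subscription: y == []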
If the name is unbound, a\n``NameError`` exception will be raised.\n\nIt is illegal to delete a name from the local namespace if it occurs\nas a free variable in a nested block.\n\nDeletion of attribute references, subscriptions and slicings is passed\nto the primary object involved; deletion of a slicing is in general\nequivalent to assignment of an empty slice of the right type (but even\nthis is determined by the sliced object).\n', 'dict': u'\nDictionary displays\n*******************\n\nA dictionary display is a possibly empty series of key/datum pairs\nenclosed in curly braces:\n\n dict_display ::= "{" [key_datum_list | dict_comprehension] "}"\n key_datum_list ::= key_datum ("," key_datum)* [","]\n key_datum ::= expression ":" expression\n dict_comprehension ::= expression ":" expression comp_for\n\nA dictionary display yields a new dictionary object.\n\nIf a comma-separated sequence of key/datum pairs is given, they are\nevaluated from left to right to define the entries of the dictionary:\neach key object is used as a key into the dictionary to store the\ncorresponding datum. This means that you can specify the same key\nmultiple times in the key/datum list, and the final dictionary\'s value\nfor that key will be the last one given.\n\nA dict comprehension, in contrast to list and set comprehensions,\nneeds two expressions separated with a colon followed by the usual\n"for" and "if" clauses. When the comprehension is run, the resulting\nkey and value elements are inserted in the new dictionary in the order\nthey are produced.\n\nRestrictions on the types of the key values are listed earlier in\nsection *The standard type hierarchy*. (To summarize, the key type\nshould be *hashable*, which excludes all mutable objects.) Clashes\nbetween duplicate keys are not detected; the last datum (textually\nrightmost in the display) stored for a given key value prevails.\n', 'dynamic-features': u'\nInteraction with dynamic features\n*********************************\n\nThere are several cases where Python statements are illegal when used\nin conjunction with nested scopes that contain free variables.\n\nIf a variable is referenced in an enclosing scope, it is illegal to\ndelete the name. An error will be reported at compile time.\n\nIf the wild card form of import --- ``import *`` --- is used in a\nfunction and the function contains or is a nested block with free\nvariables, the compiler will raise a ``SyntaxError``.\n\nIf ``exec`` is used in a function and the function contains or is a\nnested block with free variables, the compiler will raise a\n``SyntaxError`` unless the exec explicitly specifies the local\nnamespace for the ``exec``. (In other words, ``exec obj`` would be\nillegal, but ``exec obj in ns`` would be legal.)\n\nThe ``eval()``, ``execfile()``, and ``input()`` functions and the\n``exec`` statement do not have access to the full environment for\nresolving names. Names may be resolved in the local and global\nnamespaces of the caller. Free variables are not resolved in the\nnearest enclosing namespace, but in the global namespace. 
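The dictionary-display rules above (duplicate keys, dict comprehensions) in a short sketch, assuming Python 2.7 (the values are invented):

    d = {'a': 1, 'a': 2}        # duplicate key: the last datum prevails, d == {'a': 2}
    squares = {n: n * n for n in range(4)}
    # squares == {0: 0, 1: 1, 2: 4, 3: 9}
    # {[1]: 'x'} would raise TypeError, because a list key is not hashable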
[1] The\n``exec`` statement and the ``eval()`` and ``execfile()`` functions\nhave optional arguments to override the global and local namespace.\nIf only one namespace is specified, it is used for both.\n', 'else': u'\nThe ``if`` statement\n********************\n\nThe ``if`` statement is used for conditional execution:\n\n if_stmt ::= "if" expression ":" suite\n ( "elif" expression ":" suite )*\n ["else" ":" suite]\n\nIt selects exactly one of the suites by evaluating the expressions one\nby one until one is found to be true (see section *Boolean operations*\nfor the definition of true and false); then that suite is executed\n(and no other part of the ``if`` statement is executed or evaluated).\nIf all expressions are false, the suite of the ``else`` clause, if\npresent, is executed.\n', 'exceptions': u'\nExceptions\n**********\n\nExceptions are a means of breaking out of the normal flow of control\nof a code block in order to handle errors or other exceptional\nconditions. An exception is *raised* at the point where the error is\ndetected; it may be *handled* by the surrounding code block or by any\ncode block that directly or indirectly invoked the code block where\nthe error occurred.\n\nThe Python interpreter raises an exception when it detects a run-time\nerror (such as division by zero). A Python program can also\nexplicitly raise an exception with the ``raise`` statement. Exception\nhandlers are specified with the ``try`` ... ``except`` statement. The\n``finally`` clause of such a statement can be used to specify cleanup\ncode which does not handle the exception, but is executed whether an\nexception occurred or not in the preceding code.\n\nPython uses the "termination" model of error handling: an exception\nhandler can find out what happened and continue execution at an outer\nlevel, but it cannot repair the cause of the error and retry the\nfailing operation (except by re-entering the offending piece of code\nfrom the top).\n\nWhen an exception is not handled at all, the interpreter terminates\nexecution of the program, or returns to its interactive main loop. In\neither case, it prints a stack backtrace, except when the exception is\n``SystemExit``.\n\nExceptions are identified by class instances. The ``except`` clause\nis selected depending on the class of the instance: it must reference\nthe class of the instance or a base class thereof. The instance can\nbe received by the handler and can carry additional information about\nthe exceptional condition.\n\nExceptions can also be identified by strings, in which case the\n``except`` clause is selected by object identity. An arbitrary value\ncan be raised along with the identifying string which can be passed to\nthe handler.\n\nNote: Messages to exceptions are not part of the Python API. Their\n contents may change from one version of Python to the next without\n warning and should not be relied on by code which will run under\n multiple versions of the interpreter.\n\nSee also the description of the ``try`` statement in section *The try\nstatement* and ``raise`` statement in section *The raise statement*.\n\n-[ Footnotes ]-\n\n[1] This limitation occurs because the code that is executed by these\n operations is not available at the time the module is compiled.\n', 'exec': u'\nThe ``exec`` statement\n**********************\n\n exec_stmt ::= "exec" or_expr ["in" expression ["," expression]]\n\nThis statement supports dynamic execution of Python code. 
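A short sketch of the optional ``in`` clauses shown in the grammar above, assuming Python 2.7 (the namespace dictionaries and names are invented):

    ns = {}
    exec "value = 6 * 7" in ns          # one mapping: used as both globals and locals
    ns['value']                         # 42

    g, l = {'base': 10}, {}
    exec "result = base + 1" in g, l    # two mappings: globals and locals respectively
    l['result']                         # 11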
The first\nexpression should evaluate to either a string, an open file object, or\na code object. If it is a string, the string is parsed as a suite of\nPython statements which is then executed (unless a syntax error\noccurs). [1] If it is an open file, the file is parsed until EOF and\nexecuted. If it is a code object, it is simply executed. In all\ncases, the code that\'s executed is expected to be valid as file input\n(see section *File input*). Be aware that the ``return`` and\n``yield`` statements may not be used outside of function definitions\neven within the context of code passed to the ``exec`` statement.\n\nIn all cases, if the optional parts are omitted, the code is executed\nin the current scope. If only the first expression after ``in`` is\nspecified, it should be a dictionary, which will be used for both the\nglobal and the local variables. If two expressions are given, they\nare used for the global and local variables, respectively. If\nprovided, *locals* can be any mapping object.\n\nChanged in version 2.4: Formerly, *locals* was required to be a\ndictionary.\n\nAs a side effect, an implementation may insert additional keys into\nthe dictionaries given besides those corresponding to variable names\nset by the executed code. For example, the current implementation may\nadd a reference to the dictionary of the built-in module\n``__builtin__`` under the key ``__builtins__`` (!).\n\n**Programmer\'s hints:** dynamic evaluation of expressions is supported\nby the built-in function ``eval()``. The built-in functions\n``globals()`` and ``locals()`` return the current global and local\ndictionary, respectively, which may be useful to pass around for use\nby ``exec``.\n\n-[ Footnotes ]-\n\n[1] Note that the parser only accepts the Unix-style end of line\n convention. If you are reading the code from a file, make sure to\n use universal newline mode to convert Windows or Mac-style\n newlines.\n', - 'execmodel': u'\nExecution model\n***************\n\n\nNaming and binding\n==================\n\n*Names* refer to objects. Names are introduced by name binding\noperations. Each occurrence of a name in the program text refers to\nthe *binding* of that name established in the innermost function block\ncontaining the use.\n\nA *block* is a piece of Python program text that is executed as a\nunit. The following are blocks: a module, a function body, and a class\ndefinition. Each command typed interactively is a block. A script\nfile (a file given as standard input to the interpreter or specified\non the interpreter command line the first argument) is a code block.\nA script command (a command specified on the interpreter command line\nwith the \'**-c**\' option) is a code block. The file read by the\nbuilt-in function ``execfile()`` is a code block. The string argument\npassed to the built-in function ``eval()`` and to the ``exec``\nstatement is a code block. The expression read and evaluated by the\nbuilt-in function ``input()`` is a code block.\n\nA code block is executed in an *execution frame*. A frame contains\nsome administrative information (used for debugging) and determines\nwhere and how execution continues after the code block\'s execution has\ncompleted.\n\nA *scope* defines the visibility of a name within a block. If a local\nvariable is defined in a block, its scope includes that block. If the\ndefinition occurs in a function block, the scope extends to any blocks\ncontained within the defining one, unless a contained block introduces\na different binding for the name. 
The scope of names defined in a\nclass block is limited to the class block; it does not extend to the\ncode blocks of methods -- this includes generator expressions since\nthey are implemented using a function scope. This means that the\nfollowing will fail:\n\n class A:\n a = 42\n b = list(a + i for i in range(10))\n\nWhen a name is used in a code block, it is resolved using the nearest\nenclosing scope. The set of all such scopes visible to a code block\nis called the block\'s *environment*.\n\nIf a name is bound in a block, it is a local variable of that block.\nIf a name is bound at the module level, it is a global variable. (The\nvariables of the module code block are local and global.) If a\nvariable is used in a code block but not defined there, it is a *free\nvariable*.\n\nWhen a name is not found at all, a ``NameError`` exception is raised.\nIf the name refers to a local variable that has not been bound, a\n``UnboundLocalError`` exception is raised. ``UnboundLocalError`` is a\nsubclass of ``NameError``.\n\nThe following constructs bind names: formal parameters to functions,\n``import`` statements, class and function definitions (these bind the\nclass or function name in the defining block), and targets that are\nidentifiers if occurring in an assignment, ``for`` loop header, in the\nsecond position of an ``except`` clause header or after ``as`` in a\n``with`` statement. The ``import`` statement of the form ``from ...\nimport *`` binds all names defined in the imported module, except\nthose beginning with an underscore. This form may only be used at the\nmodule level.\n\nA target occurring in a ``del`` statement is also considered bound for\nthis purpose (though the actual semantics are to unbind the name). It\nis illegal to unbind a name that is referenced by an enclosing scope;\nthe compiler will report a ``SyntaxError``.\n\nEach assignment or import statement occurs within a block defined by a\nclass or function definition or at the module level (the top-level\ncode block).\n\nIf a name binding operation occurs anywhere within a code block, all\nuses of the name within the block are treated as references to the\ncurrent block. This can lead to errors when a name is used within a\nblock before it is bound. This rule is subtle. Python lacks\ndeclarations and allows name binding operations to occur anywhere\nwithin a code block. The local variables of a code block can be\ndetermined by scanning the entire text of the block for name binding\noperations.\n\nIf the global statement occurs within a block, all uses of the name\nspecified in the statement refer to the binding of that name in the\ntop-level namespace. Names are resolved in the top-level namespace by\nsearching the global namespace, i.e. the namespace of the module\ncontaining the code block, and the builtins namespace, the namespace\nof the module ``__builtin__``. The global namespace is searched\nfirst. If the name is not found there, the builtins namespace is\nsearched. The global statement must precede all uses of the name.\n\nThe builtins namespace associated with the execution of a code block\nis actually found by looking up the name ``__builtins__`` in its\nglobal namespace; this should be a dictionary or a module (in the\nlatter case the module\'s dictionary is used). By default, when in the\n``__main__`` module, ``__builtins__`` is the built-in module\n``__builtin__`` (note: no \'s\'); when in any other module,\n``__builtins__`` is an alias for the dictionary of the ``__builtin__``\nmodule itself. 
``__builtins__`` can be set to a user-created\ndictionary to create a weak form of restricted execution.\n\n**CPython implementation detail:** Users should not touch\n``__builtins__``; it is strictly an implementation detail. Users\nwanting to override values in the builtins namespace should ``import``\nthe ``__builtin__`` (no \'s\') module and modify its attributes\nappropriately.\n\nThe namespace for a module is automatically created the first time a\nmodule is imported. The main module for a script is always called\n``__main__``.\n\nThe global statement has the same scope as a name binding operation in\nthe same block. If the nearest enclosing scope for a free variable\ncontains a global statement, the free variable is treated as a global.\n\nA class definition is an executable statement that may use and define\nnames. These references follow the normal rules for name resolution.\nThe namespace of the class definition becomes the attribute dictionary\nof the class. Names defined at the class scope are not visible in\nmethods.\n\n\nInteraction with dynamic features\n---------------------------------\n\nThere are several cases where Python statements are illegal when used\nin conjunction with nested scopes that contain free variables.\n\nIf a variable is referenced in an enclosing scope, it is illegal to\ndelete the name. An error will be reported at compile time.\n\nIf the wild card form of import --- ``import *`` --- is used in a\nfunction and the function contains or is a nested block with free\nvariables, the compiler will raise a ``SyntaxError``.\n\nIf ``exec`` is used in a function and the function contains or is a\nnested block with free variables, the compiler will raise a\n``SyntaxError`` unless the exec explicitly specifies the local\nnamespace for the ``exec``. (In other words, ``exec obj`` would be\nillegal, but ``exec obj in ns`` would be legal.)\n\nThe ``eval()``, ``execfile()``, and ``input()`` functions and the\n``exec`` statement do not have access to the full environment for\nresolving names. Names may be resolved in the local and global\nnamespaces of the caller. Free variables are not resolved in the\nnearest enclosing namespace, but in the global namespace. [1] The\n``exec`` statement and the ``eval()`` and ``execfile()`` functions\nhave optional arguments to override the global and local namespace.\nIf only one namespace is specified, it is used for both.\n\n\nExceptions\n==========\n\nExceptions are a means of breaking out of the normal flow of control\nof a code block in order to handle errors or other exceptional\nconditions. An exception is *raised* at the point where the error is\ndetected; it may be *handled* by the surrounding code block or by any\ncode block that directly or indirectly invoked the code block where\nthe error occurred.\n\nThe Python interpreter raises an exception when it detects a run-time\nerror (such as division by zero). A Python program can also\nexplicitly raise an exception with the ``raise`` statement. Exception\nhandlers are specified with the ``try`` ... ``except`` statement. 
The\n``finally`` clause of such a statement can be used to specify cleanup\ncode which does not handle the exception, but is executed whether an\nexception occurred or not in the preceding code.\n\nPython uses the "termination" model of error handling: an exception\nhandler can find out what happened and continue execution at an outer\nlevel, but it cannot repair the cause of the error and retry the\nfailing operation (except by re-entering the offending piece of code\nfrom the top).\n\nWhen an exception is not handled at all, the interpreter terminates\nexecution of the program, or returns to its interactive main loop. In\neither case, it prints a stack backtrace, except when the exception is\n``SystemExit``.\n\nExceptions are identified by class instances. The ``except`` clause\nis selected depending on the class of the instance: it must reference\nthe class of the instance or a base class thereof. The instance can\nbe received by the handler and can carry additional information about\nthe exceptional condition.\n\nExceptions can also be identified by strings, in which case the\n``except`` clause is selected by object identity. An arbitrary value\ncan be raised along with the identifying string which can be passed to\nthe handler.\n\nNote: Messages to exceptions are not part of the Python API. Their\n contents may change from one version of Python to the next without\n warning and should not be relied on by code which will run under\n multiple versions of the interpreter.\n\nSee also the description of the ``try`` statement in section *The try\nstatement* and ``raise`` statement in section *The raise statement*.\n\n-[ Footnotes ]-\n\n[1] This limitation occurs because the code that is executed by these\n operations is not available at the time the module is compiled.\n', + 'execmodel': u'\nExecution model\n***************\n\n\nNaming and binding\n==================\n\n*Names* refer to objects. Names are introduced by name binding\noperations. Each occurrence of a name in the program text refers to\nthe *binding* of that name established in the innermost function block\ncontaining the use.\n\nA *block* is a piece of Python program text that is executed as a\nunit. The following are blocks: a module, a function body, and a class\ndefinition. Each command typed interactively is a block. A script\nfile (a file given as standard input to the interpreter or specified\non the interpreter command line the first argument) is a code block.\nA script command (a command specified on the interpreter command line\nwith the \'**-c**\' option) is a code block. The file read by the\nbuilt-in function ``execfile()`` is a code block. The string argument\npassed to the built-in function ``eval()`` and to the ``exec``\nstatement is a code block. The expression read and evaluated by the\nbuilt-in function ``input()`` is a code block.\n\nA code block is executed in an *execution frame*. A frame contains\nsome administrative information (used for debugging) and determines\nwhere and how execution continues after the code block\'s execution has\ncompleted.\n\nA *scope* defines the visibility of a name within a block. If a local\nvariable is defined in a block, its scope includes that block. If the\ndefinition occurs in a function block, the scope extends to any blocks\ncontained within the defining one, unless a contained block introduces\na different binding for the name. 
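For instance (a sketch; the function names are invented):

    def outer():
        y = 1                      # defined in outer(): its scope includes inner()
        def inner():
            return y               # resolved in the defining (enclosing) block
        def rebinds():
            y = 2                  # a different binding: outer()'s 'y' is not seen here
            return y
        return inner(), rebinds()  # (1, 2)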
The scope of names defined in a\nclass block is limited to the class block; it does not extend to the\ncode blocks of methods -- this includes generator expressions since\nthey are implemented using a function scope. This means that the\nfollowing will fail:\n\n class A:\n a = 42\n b = list(a + i for i in range(10))\n\nWhen a name is used in a code block, it is resolved using the nearest\nenclosing scope. The set of all such scopes visible to a code block\nis called the block\'s *environment*.\n\nIf a name is bound in a block, it is a local variable of that block.\nIf a name is bound at the module level, it is a global variable. (The\nvariables of the module code block are local and global.) If a\nvariable is used in a code block but not defined there, it is a *free\nvariable*.\n\nWhen a name is not found at all, a ``NameError`` exception is raised.\nIf the name refers to a local variable that has not been bound, a\n``UnboundLocalError`` exception is raised. ``UnboundLocalError`` is a\nsubclass of ``NameError``.\n\nThe following constructs bind names: formal parameters to functions,\n``import`` statements, class and function definitions (these bind the\nclass or function name in the defining block), and targets that are\nidentifiers if occurring in an assignment, ``for`` loop header, in the\nsecond position of an ``except`` clause header or after ``as`` in a\n``with`` statement. The ``import`` statement of the form ``from ...\nimport *`` binds all names defined in the imported module, except\nthose beginning with an underscore. This form may only be used at the\nmodule level.\n\nA target occurring in a ``del`` statement is also considered bound for\nthis purpose (though the actual semantics are to unbind the name). It\nis illegal to unbind a name that is referenced by an enclosing scope;\nthe compiler will report a ``SyntaxError``.\n\nEach assignment or import statement occurs within a block defined by a\nclass or function definition or at the module level (the top-level\ncode block).\n\nIf a name binding operation occurs anywhere within a code block, all\nuses of the name within the block are treated as references to the\ncurrent block. This can lead to errors when a name is used within a\nblock before it is bound. This rule is subtle. Python lacks\ndeclarations and allows name binding operations to occur anywhere\nwithin a code block. The local variables of a code block can be\ndetermined by scanning the entire text of the block for name binding\noperations.\n\nIf the global statement occurs within a block, all uses of the name\nspecified in the statement refer to the binding of that name in the\ntop-level namespace. Names are resolved in the top-level namespace by\nsearching the global namespace, i.e. the namespace of the module\ncontaining the code block, and the builtins namespace, the namespace\nof the module ``__builtin__``. The global namespace is searched\nfirst. If the name is not found there, the builtins namespace is\nsearched. The global statement must precede all uses of the name.\n\nThe builtins namespace associated with the execution of a code block\nis actually found by looking up the name ``__builtins__`` in its\nglobal namespace; this should be a dictionary or a module (in the\nlatter case the module\'s dictionary is used). By default, when in the\n``__main__`` module, ``__builtins__`` is the built-in module\n``__builtin__`` (note: no \'s\'); when in any other module,\n``__builtins__`` is an alias for the dictionary of the ``__builtin__``\nmodule itself. 
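A small sketch of the ``global`` statement and the unbound-local case described above (the names are invented):

    count = 0                    # bound at module level: a global variable

    def bump():
        global count             # uses of 'count' below refer to the module-level binding
        count = count + 1

    def broken_bump():
        count = count + 1        # 'count' is local here (it is assigned), but it is
                                 # unbound when read, so this raises UnboundLocalError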
``__builtins__`` can be set to a user-created\ndictionary to create a weak form of restricted execution.\n\n**CPython implementation detail:** Users should not touch\n``__builtins__``; it is strictly an implementation detail. Users\nwanting to override values in the builtins namespace should ``import``\nthe ``__builtin__`` (no \'s\') module and modify its attributes\nappropriately.\n\nThe namespace for a module is automatically created the first time a\nmodule is imported. The main module for a script is always called\n``__main__``.\n\nThe ``global`` statement has the same scope as a name binding\noperation in the same block. If the nearest enclosing scope for a\nfree variable contains a global statement, the free variable is\ntreated as a global.\n\nA class definition is an executable statement that may use and define\nnames. These references follow the normal rules for name resolution.\nThe namespace of the class definition becomes the attribute dictionary\nof the class. Names defined at the class scope are not visible in\nmethods.\n\n\nInteraction with dynamic features\n---------------------------------\n\nThere are several cases where Python statements are illegal when used\nin conjunction with nested scopes that contain free variables.\n\nIf a variable is referenced in an enclosing scope, it is illegal to\ndelete the name. An error will be reported at compile time.\n\nIf the wild card form of import --- ``import *`` --- is used in a\nfunction and the function contains or is a nested block with free\nvariables, the compiler will raise a ``SyntaxError``.\n\nIf ``exec`` is used in a function and the function contains or is a\nnested block with free variables, the compiler will raise a\n``SyntaxError`` unless the exec explicitly specifies the local\nnamespace for the ``exec``. (In other words, ``exec obj`` would be\nillegal, but ``exec obj in ns`` would be legal.)\n\nThe ``eval()``, ``execfile()``, and ``input()`` functions and the\n``exec`` statement do not have access to the full environment for\nresolving names. Names may be resolved in the local and global\nnamespaces of the caller. Free variables are not resolved in the\nnearest enclosing namespace, but in the global namespace. [1] The\n``exec`` statement and the ``eval()`` and ``execfile()`` functions\nhave optional arguments to override the global and local namespace.\nIf only one namespace is specified, it is used for both.\n\n\nExceptions\n==========\n\nExceptions are a means of breaking out of the normal flow of control\nof a code block in order to handle errors or other exceptional\nconditions. An exception is *raised* at the point where the error is\ndetected; it may be *handled* by the surrounding code block or by any\ncode block that directly or indirectly invoked the code block where\nthe error occurred.\n\nThe Python interpreter raises an exception when it detects a run-time\nerror (such as division by zero). A Python program can also\nexplicitly raise an exception with the ``raise`` statement. Exception\nhandlers are specified with the ``try`` ... ``except`` statement. 
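A brief sketch of a handler of the kind described here (the function, file name, and exception type are illustrative):

    def read_first_line(path):
        f = open(path)
        try:
            return f.readline()
        except IOError:          # the clause is selected by the class of the raised instance
            return ''
        finally:
            f.close()            # cleanup code: runs whether or not an exception occurred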
The\n``finally`` clause of such a statement can be used to specify cleanup\ncode which does not handle the exception, but is executed whether an\nexception occurred or not in the preceding code.\n\nPython uses the "termination" model of error handling: an exception\nhandler can find out what happened and continue execution at an outer\nlevel, but it cannot repair the cause of the error and retry the\nfailing operation (except by re-entering the offending piece of code\nfrom the top).\n\nWhen an exception is not handled at all, the interpreter terminates\nexecution of the program, or returns to its interactive main loop. In\neither case, it prints a stack backtrace, except when the exception is\n``SystemExit``.\n\nExceptions are identified by class instances. The ``except`` clause\nis selected depending on the class of the instance: it must reference\nthe class of the instance or a base class thereof. The instance can\nbe received by the handler and can carry additional information about\nthe exceptional condition.\n\nExceptions can also be identified by strings, in which case the\n``except`` clause is selected by object identity. An arbitrary value\ncan be raised along with the identifying string which can be passed to\nthe handler.\n\nNote: Messages to exceptions are not part of the Python API. Their\n contents may change from one version of Python to the next without\n warning and should not be relied on by code which will run under\n multiple versions of the interpreter.\n\nSee also the description of the ``try`` statement in section *The try\nstatement* and ``raise`` statement in section *The raise statement*.\n\n-[ Footnotes ]-\n\n[1] This limitation occurs because the code that is executed by these\n operations is not available at the time the module is compiled.\n', 'exprlists': u'\nExpression lists\n****************\n\n expression_list ::= expression ( "," expression )* [","]\n\nAn expression list containing at least one comma yields a tuple. The\nlength of the tuple is the number of expressions in the list. The\nexpressions are evaluated from left to right.\n\nThe trailing comma is required only to create a single tuple (a.k.a. a\n*singleton*); it is optional in all other cases. A single expression\nwithout a trailing comma doesn\'t create a tuple, but rather yields the\nvalue of that expression. (To create an empty tuple, use an empty pair\nof parentheses: ``()``.)\n', 'floating': u'\nFloating point literals\n***********************\n\nFloating point literals are described by the following lexical\ndefinitions:\n\n floatnumber ::= pointfloat | exponentfloat\n pointfloat ::= [intpart] fraction | intpart "."\n exponentfloat ::= (intpart | pointfloat) exponent\n intpart ::= digit+\n fraction ::= "." digit+\n exponent ::= ("e" | "E") ["+" | "-"] digit+\n\nNote that the integer and exponent parts of floating point numbers can\nlook like octal integers, but are interpreted using radix 10. For\nexample, ``077e010`` is legal, and denotes the same number as\n``77e10``. The allowed range of floating point literals is\nimplementation-dependent. Some examples of floating point literals:\n\n 3.14 10. 
.001 1e100 3.14e-10 0e0\n\nNote that numeric literals do not include a sign; a phrase like ``-1``\nis actually an expression composed of the unary operator ``-`` and the\nliteral ``1``.\n', 'for': u'\nThe ``for`` statement\n*********************\n\nThe ``for`` statement is used to iterate over the elements of a\nsequence (such as a string, tuple or list) or other iterable object:\n\n for_stmt ::= "for" target_list "in" expression_list ":" suite\n ["else" ":" suite]\n\nThe expression list is evaluated once; it should yield an iterable\nobject. An iterator is created for the result of the\n``expression_list``. The suite is then executed once for each item\nprovided by the iterator, in the order of ascending indices. Each\nitem in turn is assigned to the target list using the standard rules\nfor assignments, and then the suite is executed. When the items are\nexhausted (which is immediately when the sequence is empty), the suite\nin the ``else`` clause, if present, is executed, and the loop\nterminates.\n\nA ``break`` statement executed in the first suite terminates the loop\nwithout executing the ``else`` clause\'s suite. A ``continue``\nstatement executed in the first suite skips the rest of the suite and\ncontinues with the next item, or with the ``else`` clause if there was\nno next item.\n\nThe suite may assign to the variable(s) in the target list; this does\nnot affect the next item assigned to it.\n\nThe target list is not deleted when the loop is finished, but if the\nsequence is empty, it will not have been assigned to at all by the\nloop. Hint: the built-in function ``range()`` returns a sequence of\nintegers suitable to emulate the effect of Pascal\'s ``for i := a to b\ndo``; e.g., ``range(3)`` returns the list ``[0, 1, 2]``.\n\nNote: There is a subtlety when the sequence is being modified by the loop\n (this can only occur for mutable sequences, i.e. lists). An internal\n counter is used to keep track of which item is used next, and this\n is incremented on each iteration. When this counter has reached the\n length of the sequence the loop terminates. This means that if the\n suite deletes the current (or a previous) item from the sequence,\n the next item will be skipped (since it gets the index of the\n current item which has already been treated). Likewise, if the\n suite inserts an item in the sequence before the current item, the\n current item will be treated again the next time through the loop.\n This can lead to nasty bugs that can be avoided by making a\n temporary copy using a slice of the whole sequence, e.g.,\n\n for x in a[:]:\n if x < 0: a.remove(x)\n', - 'formatstrings': u'\nFormat String Syntax\n********************\n\nThe ``str.format()`` method and the ``Formatter`` class share the same\nsyntax for format strings (although in the case of ``Formatter``,\nsubclasses can define their own format string syntax).\n\nFormat strings contain "replacement fields" surrounded by curly braces\n``{}``. Anything that is not contained in braces is considered literal\ntext, which is copied unchanged to the output. If you need to include\na brace character in the literal text, it can be escaped by doubling:\n``{{`` and ``}}``.\n\nThe grammar for a replacement field is as follows:\n\n replacement_field ::= "{" [field_name] ["!" conversion] [":" format_spec] "}"\n field_name ::= arg_name ("." 
attribute_name | "[" element_index "]")*\n arg_name ::= [identifier | integer]\n attribute_name ::= identifier\n element_index ::= integer | index_string\n index_string ::= +\n conversion ::= "r" | "s"\n format_spec ::= \n\nIn less formal terms, the replacement field can start with a\n*field_name* that specifies the object whose value is to be formatted\nand inserted into the output instead of the replacement field. The\n*field_name* is optionally followed by a *conversion* field, which is\npreceded by an exclamation point ``\'!\'``, and a *format_spec*, which\nis preceded by a colon ``\':\'``. These specify a non-default format\nfor the replacement value.\n\nSee also the *Format Specification Mini-Language* section.\n\nThe *field_name* itself begins with an *arg_name* that is either\neither a number or a keyword. If it\'s a number, it refers to a\npositional argument, and if it\'s a keyword, it refers to a named\nkeyword argument. If the numerical arg_names in a format string are\n0, 1, 2, ... in sequence, they can all be omitted (not just some) and\nthe numbers 0, 1, 2, ... will be automatically inserted in that order.\nThe *arg_name* can be followed by any number of index or attribute\nexpressions. An expression of the form ``\'.name\'`` selects the named\nattribute using ``getattr()``, while an expression of the form\n``\'[index]\'`` does an index lookup using ``__getitem__()``.\n\nChanged in version 2.7: The positional argument specifiers can be\nomitted, so ``\'{} {}\'`` is equivalent to ``\'{0} {1}\'``.\n\nSome simple format string examples:\n\n "First, thou shalt count to {0}" # References first positional argument\n "Bring me a {}" # Implicitly references the first positional argument\n "From {} to {}" # Same as "From {0} to {1}"\n "My quest is {name}" # References keyword argument \'name\'\n "Weight in tons {0.weight}" # \'weight\' attribute of first positional arg\n "Units destroyed: {players[0]}" # First element of keyword argument \'players\'.\n\nThe *conversion* field causes a type coercion before formatting.\nNormally, the job of formatting a value is done by the\n``__format__()`` method of the value itself. However, in some cases\nit is desirable to force a type to be formatted as a string,\noverriding its own definition of formatting. By converting the value\nto a string before calling ``__format__()``, the normal formatting\nlogic is bypassed.\n\nTwo conversion flags are currently supported: ``\'!s\'`` which calls\n``str()`` on the value, and ``\'!r\'`` which calls ``repr()``.\n\nSome examples:\n\n "Harold\'s a clever {0!s}" # Calls str() on the argument first\n "Bring out the holy {name!r}" # Calls repr() on the argument first\n\nThe *format_spec* field contains a specification of how the value\nshould be presented, including such details as field width, alignment,\npadding, decimal precision and so on. Each value type can define its\nown "formatting mini-language" or interpretation of the *format_spec*.\n\nMost built-in types support a common formatting mini-language, which\nis described in the next section.\n\nA *format_spec* field can also include nested replacement fields\nwithin it. These nested replacement fields can contain only a field\nname; conversion flags and format specifications are not allowed. The\nreplacement fields within the format_spec are substituted before the\n*format_spec* string is interpreted. 
This allows the formatting of a\nvalue to be dynamically specified.\n\nSee the *Format examples* section for some examples.\n\n\nFormat Specification Mini-Language\n==================================\n\n"Format specifications" are used within replacement fields contained\nwithin a format string to define how individual values are presented\n(see *Format String Syntax*). They can also be passed directly to the\nbuilt-in ``format()`` function. Each formattable type may define how\nthe format specification is to be interpreted.\n\nMost built-in types implement the following options for format\nspecifications, although some of the formatting options are only\nsupported by the numeric types.\n\nA general convention is that an empty format string (``""``) produces\nthe same result as if you had called ``str()`` on the value. A non-\nempty format string typically modifies the result.\n\nThe general form of a *standard format specifier* is:\n\n format_spec ::= [[fill]align][sign][#][0][width][,][.precision][type]\n fill ::= \n align ::= "<" | ">" | "=" | "^"\n sign ::= "+" | "-" | " "\n width ::= integer\n precision ::= integer\n type ::= "b" | "c" | "d" | "e" | "E" | "f" | "F" | "g" | "G" | "n" | "o" | "s" | "x" | "X" | "%"\n\nThe *fill* character can be any character other than \'}\' (which\nsignifies the end of the field). The presence of a fill character is\nsignaled by the *next* character, which must be one of the alignment\noptions. If the second character of *format_spec* is not a valid\nalignment option, then it is assumed that both the fill character and\nthe alignment option are absent.\n\nThe meaning of the various alignment options is as follows:\n\n +-----------+------------------------------------------------------------+\n | Option | Meaning |\n +===========+============================================================+\n | ``\'<\'`` | Forces the field to be left-aligned within the available |\n | | space (this is the default). |\n +-----------+------------------------------------------------------------+\n | ``\'>\'`` | Forces the field to be right-aligned within the available |\n | | space. |\n +-----------+------------------------------------------------------------+\n | ``\'=\'`` | Forces the padding to be placed after the sign (if any) |\n | | but before the digits. This is used for printing fields |\n | | in the form \'+000000120\'. This alignment option is only |\n | | valid for numeric types. |\n +-----------+------------------------------------------------------------+\n | ``\'^\'`` | Forces the field to be centered within the available |\n | | space. |\n +-----------+------------------------------------------------------------+\n\nNote that unless a minimum field width is defined, the field width\nwill always be the same size as the data to fill it, so that the\nalignment option has no meaning in this case.\n\nThe *sign* option is only valid for number types, and can be one of\nthe following:\n\n +-----------+------------------------------------------------------------+\n | Option | Meaning |\n +===========+============================================================+\n | ``\'+\'`` | indicates that a sign should be used for both positive as |\n | | well as negative numbers. |\n +-----------+------------------------------------------------------------+\n | ``\'-\'`` | indicates that a sign should be used only for negative |\n | | numbers (this is the default behavior). 
|\n +-----------+------------------------------------------------------------+\n | space | indicates that a leading space should be used on positive |\n | | numbers, and a minus sign on negative numbers. |\n +-----------+------------------------------------------------------------+\n\nThe ``\'#\'`` option is only valid for integers, and only for binary,\noctal, or hexadecimal output. If present, it specifies that the\noutput will be prefixed by ``\'0b\'``, ``\'0o\'``, or ``\'0x\'``,\nrespectively.\n\nThe ``\',\'`` option signals the use of a comma for a thousands\nseparator. For a locale aware separator, use the ``\'n\'`` integer\npresentation type instead.\n\nChanged in version 2.7: Added the ``\',\'`` option (see also **PEP\n378**).\n\n*width* is a decimal integer defining the minimum field width. If not\nspecified, then the field width will be determined by the content.\n\nIf the *width* field is preceded by a zero (``\'0\'``) character, this\nenables zero-padding. This is equivalent to an *alignment* type of\n``\'=\'`` and a *fill* character of ``\'0\'``.\n\nThe *precision* is a decimal number indicating how many digits should\nbe displayed after the decimal point for a floating point value\nformatted with ``\'f\'`` and ``\'F\'``, or before and after the decimal\npoint for a floating point value formatted with ``\'g\'`` or ``\'G\'``.\nFor non-number types the field indicates the maximum field size - in\nother words, how many characters will be used from the field content.\nThe *precision* is not allowed for integer values.\n\nFinally, the *type* determines how the data should be presented.\n\nThe available string presentation types are:\n\n +-----------+------------------------------------------------------------+\n | Type | Meaning |\n +===========+============================================================+\n | ``\'s\'`` | String format. This is the default type for strings and |\n | | may be omitted. |\n +-----------+------------------------------------------------------------+\n | None | The same as ``\'s\'``. |\n +-----------+------------------------------------------------------------+\n\nThe available integer presentation types are:\n\n +-----------+------------------------------------------------------------+\n | Type | Meaning |\n +===========+============================================================+\n | ``\'b\'`` | Binary format. Outputs the number in base 2. |\n +-----------+------------------------------------------------------------+\n | ``\'c\'`` | Character. Converts the integer to the corresponding |\n | | unicode character before printing. |\n +-----------+------------------------------------------------------------+\n | ``\'d\'`` | Decimal Integer. Outputs the number in base 10. |\n +-----------+------------------------------------------------------------+\n | ``\'o\'`` | Octal format. Outputs the number in base 8. |\n +-----------+------------------------------------------------------------+\n | ``\'x\'`` | Hex format. Outputs the number in base 16, using lower- |\n | | case letters for the digits above 9. |\n +-----------+------------------------------------------------------------+\n | ``\'X\'`` | Hex format. Outputs the number in base 16, using upper- |\n | | case letters for the digits above 9. |\n +-----------+------------------------------------------------------------+\n | ``\'n\'`` | Number. This is the same as ``\'d\'``, except that it uses |\n | | the current locale setting to insert the appropriate |\n | | number separator characters. 
|\n +-----------+------------------------------------------------------------+\n | None | The same as ``\'d\'``. |\n +-----------+------------------------------------------------------------+\n\nIn addition to the above presentation types, integers can be formatted\nwith the floating point presentation types listed below (except\n``\'n\'`` and None). When doing so, ``float()`` is used to convert the\ninteger to a floating point number before formatting.\n\nThe available presentation types for floating point and decimal values\nare:\n\n +-----------+------------------------------------------------------------+\n | Type | Meaning |\n +===========+============================================================+\n | ``\'e\'`` | Exponent notation. Prints the number in scientific |\n | | notation using the letter \'e\' to indicate the exponent. |\n +-----------+------------------------------------------------------------+\n | ``\'E\'`` | Exponent notation. Same as ``\'e\'`` except it uses an upper |\n | | case \'E\' as the separator character. |\n +-----------+------------------------------------------------------------+\n | ``\'f\'`` | Fixed point. Displays the number as a fixed-point number. |\n +-----------+------------------------------------------------------------+\n | ``\'F\'`` | Fixed point. Same as ``\'f\'``. |\n +-----------+------------------------------------------------------------+\n | ``\'g\'`` | General format. For a given precision ``p >= 1``, this |\n | | rounds the number to ``p`` significant digits and then |\n | | formats the result in either fixed-point format or in |\n | | scientific notation, depending on its magnitude. The |\n | | precise rules are as follows: suppose that the result |\n | | formatted with presentation type ``\'e\'`` and precision |\n | | ``p-1`` would have exponent ``exp``. Then if ``-4 <= exp |\n | | < p``, the number is formatted with presentation type |\n | | ``\'f\'`` and precision ``p-1-exp``. Otherwise, the number |\n | | is formatted with presentation type ``\'e\'`` and precision |\n | | ``p-1``. In both cases insignificant trailing zeros are |\n | | removed from the significand, and the decimal point is |\n | | also removed if there are no remaining digits following |\n | | it. Postive and negative infinity, positive and negative |\n | | zero, and nans, are formatted as ``inf``, ``-inf``, ``0``, |\n | | ``-0`` and ``nan`` respectively, regardless of the |\n | | precision. A precision of ``0`` is treated as equivalent |\n | | to a precision of ``1``. |\n +-----------+------------------------------------------------------------+\n | ``\'G\'`` | General format. Same as ``\'g\'`` except switches to ``\'E\'`` |\n | | if the number gets too large. The representations of |\n | | infinity and NaN are uppercased, too. |\n +-----------+------------------------------------------------------------+\n | ``\'n\'`` | Number. This is the same as ``\'g\'``, except that it uses |\n | | the current locale setting to insert the appropriate |\n | | number separator characters. |\n +-----------+------------------------------------------------------------+\n | ``\'%\'`` | Percentage. Multiplies the number by 100 and displays in |\n | | fixed (``\'f\'``) format, followed by a percent sign. |\n +-----------+------------------------------------------------------------+\n | None | The same as ``\'g\'``. 
|\n +-----------+------------------------------------------------------------+\n\n\nFormat examples\n===============\n\nThis section contains examples of the new format syntax and comparison\nwith the old ``%``-formatting.\n\nIn most of the cases the syntax is similar to the old\n``%``-formatting, with the addition of the ``{}`` and with ``:`` used\ninstead of ``%``. For example, ``\'%03.2f\'`` can be translated to\n``\'{:03.2f}\'``.\n\nThe new format syntax also supports new and different options, shown\nin the follow examples.\n\nAccessing arguments by position:\n\n >>> \'{0}, {1}, {2}\'.format(\'a\', \'b\', \'c\')\n \'a, b, c\'\n >>> \'{}, {}, {}\'.format(\'a\', \'b\', \'c\') # 2.7+ only\n \'a, b, c\'\n >>> \'{2}, {1}, {0}\'.format(\'a\', \'b\', \'c\')\n \'c, b, a\'\n >>> \'{2}, {1}, {0}\'.format(*\'abc\') # unpacking argument sequence\n \'c, b, a\'\n >>> \'{0}{1}{0}\'.format(\'abra\', \'cad\') # arguments\' indices can be repeated\n \'abracadabra\'\n\nAccessing arguments by name:\n\n >>> \'Coordinates: {latitude}, {longitude}\'.format(latitude=\'37.24N\', longitude=\'-115.81W\')\n \'Coordinates: 37.24N, -115.81W\'\n >>> coord = {\'latitude\': \'37.24N\', \'longitude\': \'-115.81W\'}\n >>> \'Coordinates: {latitude}, {longitude}\'.format(**coord)\n \'Coordinates: 37.24N, -115.81W\'\n\nAccessing arguments\' attributes:\n\n >>> c = 3-5j\n >>> (\'The complex number {0} is formed from the real part {0.real} \'\n ... \'and the imaginary part {0.imag}.\').format(c)\n \'The complex number (3-5j) is formed from the real part 3.0 and the imaginary part -5.0.\'\n >>> class Point(object):\n ... def __init__(self, x, y):\n ... self.x, self.y = x, y\n ... def __str__(self):\n ... return \'Point({self.x}, {self.y})\'.format(self=self)\n ...\n >>> str(Point(4, 2))\n \'Point(4, 2)\'\n\nAccessing arguments\' items:\n\n >>> coord = (3, 5)\n >>> \'X: {0[0]}; Y: {0[1]}\'.format(coord)\n \'X: 3; Y: 5\'\n\nReplacing ``%s`` and ``%r``:\n\n >>> "repr() shows quotes: {!r}; str() doesn\'t: {!s}".format(\'test1\', \'test2\')\n "repr() shows quotes: \'test1\'; str() doesn\'t: test2"\n\nAligning the text and specifying a width:\n\n >>> \'{:<30}\'.format(\'left aligned\')\n \'left aligned \'\n >>> \'{:>30}\'.format(\'right aligned\')\n \' right aligned\'\n >>> \'{:^30}\'.format(\'centered\')\n \' centered \'\n >>> \'{:*^30}\'.format(\'centered\') # use \'*\' as a fill char\n \'***********centered***********\'\n\nReplacing ``%+f``, ``%-f``, and ``% f`` and specifying a sign:\n\n >>> \'{:+f}; {:+f}\'.format(3.14, -3.14) # show it always\n \'+3.140000; -3.140000\'\n >>> \'{: f}; {: f}\'.format(3.14, -3.14) # show a space for positive numbers\n \' 3.140000; -3.140000\'\n >>> \'{:-f}; {:-f}\'.format(3.14, -3.14) # show only the minus -- same as \'{:f}; {:f}\'\n \'3.140000; -3.140000\'\n\nReplacing ``%x`` and ``%o`` and converting the value to different\nbases:\n\n >>> # format also supports binary numbers\n >>> "int: {0:d}; hex: {0:x}; oct: {0:o}; bin: {0:b}".format(42)\n \'int: 42; hex: 2a; oct: 52; bin: 101010\'\n >>> # with 0x, 0o, or 0b as prefix:\n >>> "int: {0:d}; hex: {0:#x}; oct: {0:#o}; bin: {0:#b}".format(42)\n \'int: 42; hex: 0x2a; oct: 0o52; bin: 0b101010\'\n\nUsing the comma as a thousands separator:\n\n >>> \'{:,}\'.format(1234567890)\n \'1,234,567,890\'\n\nExpressing a percentage:\n\n >>> points = 19.5\n >>> total = 22\n >>> \'Correct answers: {:.2%}.\'.format(points/total)\n \'Correct answers: 88.64%\'\n\nUsing type-specific formatting:\n\n >>> import datetime\n >>> d = datetime.datetime(2010, 7, 4, 12, 
15, 58)\n >>> \'{:%Y-%m-%d %H:%M:%S}\'.format(d)\n \'2010-07-04 12:15:58\'\n\nNesting arguments and more complex examples:\n\n >>> for align, text in zip(\'<^>\', [\'left\', \'center\', \'right\']):\n ... \'{0:{align}{fill}16}\'.format(text, fill=align, align=align)\n ...\n \'left<<<<<<<<<<<<\'\n \'^^^^^center^^^^^\'\n \'>>>>>>>>>>>right\'\n >>>\n >>> octets = [192, 168, 0, 1]\n >>> \'{:02X}{:02X}{:02X}{:02X}\'.format(*octets)\n \'C0A80001\'\n >>> int(_, 16)\n 3232235521\n >>>\n >>> width = 5\n >>> for num in range(5,12):\n ... for base in \'dXob\':\n ... print \'{0:{width}{base}}\'.format(num, base=base, width=width),\n ... print\n ...\n 5 5 5 101\n 6 6 6 110\n 7 7 7 111\n 8 8 10 1000\n 9 9 11 1001\n 10 A 12 1010\n 11 B 13 1011\n', + 'formatstrings': u'\nFormat String Syntax\n********************\n\nThe ``str.format()`` method and the ``Formatter`` class share the same\nsyntax for format strings (although in the case of ``Formatter``,\nsubclasses can define their own format string syntax).\n\nFormat strings contain "replacement fields" surrounded by curly braces\n``{}``. Anything that is not contained in braces is considered literal\ntext, which is copied unchanged to the output. If you need to include\na brace character in the literal text, it can be escaped by doubling:\n``{{`` and ``}}``.\n\nThe grammar for a replacement field is as follows:\n\n replacement_field ::= "{" [field_name] ["!" conversion] [":" format_spec] "}"\n field_name ::= arg_name ("." attribute_name | "[" element_index "]")*\n arg_name ::= [identifier | integer]\n attribute_name ::= identifier\n element_index ::= integer | index_string\n index_string ::= +\n conversion ::= "r" | "s"\n format_spec ::= \n\nIn less formal terms, the replacement field can start with a\n*field_name* that specifies the object whose value is to be formatted\nand inserted into the output instead of the replacement field. The\n*field_name* is optionally followed by a *conversion* field, which is\npreceded by an exclamation point ``\'!\'``, and a *format_spec*, which\nis preceded by a colon ``\':\'``. These specify a non-default format\nfor the replacement value.\n\nSee also the *Format Specification Mini-Language* section.\n\nThe *field_name* itself begins with an *arg_name* that is either\neither a number or a keyword. If it\'s a number, it refers to a\npositional argument, and if it\'s a keyword, it refers to a named\nkeyword argument. If the numerical arg_names in a format string are\n0, 1, 2, ... in sequence, they can all be omitted (not just some) and\nthe numbers 0, 1, 2, ... will be automatically inserted in that order.\nThe *arg_name* can be followed by any number of index or attribute\nexpressions. 
An expression of the form ``\'.name\'`` selects the named\nattribute using ``getattr()``, while an expression of the form\n``\'[index]\'`` does an index lookup using ``__getitem__()``.\n\nChanged in version 2.7: The positional argument specifiers can be\nomitted, so ``\'{} {}\'`` is equivalent to ``\'{0} {1}\'``.\n\nSome simple format string examples:\n\n "First, thou shalt count to {0}" # References first positional argument\n "Bring me a {}" # Implicitly references the first positional argument\n "From {} to {}" # Same as "From {0} to {1}"\n "My quest is {name}" # References keyword argument \'name\'\n "Weight in tons {0.weight}" # \'weight\' attribute of first positional arg\n "Units destroyed: {players[0]}" # First element of keyword argument \'players\'.\n\nThe *conversion* field causes a type coercion before formatting.\nNormally, the job of formatting a value is done by the\n``__format__()`` method of the value itself. However, in some cases\nit is desirable to force a type to be formatted as a string,\noverriding its own definition of formatting. By converting the value\nto a string before calling ``__format__()``, the normal formatting\nlogic is bypassed.\n\nTwo conversion flags are currently supported: ``\'!s\'`` which calls\n``str()`` on the value, and ``\'!r\'`` which calls ``repr()``.\n\nSome examples:\n\n "Harold\'s a clever {0!s}" # Calls str() on the argument first\n "Bring out the holy {name!r}" # Calls repr() on the argument first\n\nThe *format_spec* field contains a specification of how the value\nshould be presented, including such details as field width, alignment,\npadding, decimal precision and so on. Each value type can define its\nown "formatting mini-language" or interpretation of the *format_spec*.\n\nMost built-in types support a common formatting mini-language, which\nis described in the next section.\n\nA *format_spec* field can also include nested replacement fields\nwithin it. These nested replacement fields can contain only a field\nname; conversion flags and format specifications are not allowed. The\nreplacement fields within the format_spec are substituted before the\n*format_spec* string is interpreted. This allows the formatting of a\nvalue to be dynamically specified.\n\nSee the *Format examples* section for some examples.\n\n\nFormat Specification Mini-Language\n==================================\n\n"Format specifications" are used within replacement fields contained\nwithin a format string to define how individual values are presented\n(see *Format String Syntax*). They can also be passed directly to the\nbuilt-in ``format()`` function. Each formattable type may define how\nthe format specification is to be interpreted.\n\nMost built-in types implement the following options for format\nspecifications, although some of the formatting options are only\nsupported by the numeric types.\n\nA general convention is that an empty format string (``""``) produces\nthe same result as if you had called ``str()`` on the value. A non-\nempty format string typically modifies the result.\n\nThe general form of a *standard format specifier* is:\n\n format_spec ::= [[fill]align][sign][#][0][width][,][.precision][type]\n fill ::= \n align ::= "<" | ">" | "=" | "^"\n sign ::= "+" | "-" | " "\n width ::= integer\n precision ::= integer\n type ::= "b" | "c" | "d" | "e" | "E" | "f" | "F" | "g" | "G" | "n" | "o" | "s" | "x" | "X" | "%"\n\nThe *fill* character can be any character other than \'{\' or \'}\'. 
The\npresence of a fill character is signaled by the character following\nit, which must be one of the alignment options. If the second\ncharacter of *format_spec* is not a valid alignment option, then it is\nassumed that both the fill character and the alignment option are\nabsent.\n\nThe meaning of the various alignment options is as follows:\n\n +-----------+------------------------------------------------------------+\n | Option | Meaning |\n +===========+============================================================+\n | ``\'<\'`` | Forces the field to be left-aligned within the available |\n | | space (this is the default for most objects). |\n +-----------+------------------------------------------------------------+\n | ``\'>\'`` | Forces the field to be right-aligned within the available |\n | | space (this is the default for numbers). |\n +-----------+------------------------------------------------------------+\n | ``\'=\'`` | Forces the padding to be placed after the sign (if any) |\n | | but before the digits. This is used for printing fields |\n | | in the form \'+000000120\'. This alignment option is only |\n | | valid for numeric types. |\n +-----------+------------------------------------------------------------+\n | ``\'^\'`` | Forces the field to be centered within the available |\n | | space. |\n +-----------+------------------------------------------------------------+\n\nNote that unless a minimum field width is defined, the field width\nwill always be the same size as the data to fill it, so that the\nalignment option has no meaning in this case.\n\nThe *sign* option is only valid for number types, and can be one of\nthe following:\n\n +-----------+------------------------------------------------------------+\n | Option | Meaning |\n +===========+============================================================+\n | ``\'+\'`` | indicates that a sign should be used for both positive as |\n | | well as negative numbers. |\n +-----------+------------------------------------------------------------+\n | ``\'-\'`` | indicates that a sign should be used only for negative |\n | | numbers (this is the default behavior). |\n +-----------+------------------------------------------------------------+\n | space | indicates that a leading space should be used on positive |\n | | numbers, and a minus sign on negative numbers. |\n +-----------+------------------------------------------------------------+\n\nThe ``\'#\'`` option is only valid for integers, and only for binary,\noctal, or hexadecimal output. If present, it specifies that the\noutput will be prefixed by ``\'0b\'``, ``\'0o\'``, or ``\'0x\'``,\nrespectively.\n\nThe ``\',\'`` option signals the use of a comma for a thousands\nseparator. For a locale aware separator, use the ``\'n\'`` integer\npresentation type instead.\n\nChanged in version 2.7: Added the ``\',\'`` option (see also **PEP\n378**).\n\n*width* is a decimal integer defining the minimum field width. If not\nspecified, then the field width will be determined by the content.\n\nIf the *width* field is preceded by a zero (``\'0\'``) character, this\nenables zero-padding. 
This is equivalent to an *alignment* type of\n``\'=\'`` and a *fill* character of ``\'0\'``.\n\nThe *precision* is a decimal number indicating how many digits should\nbe displayed after the decimal point for a floating point value\nformatted with ``\'f\'`` and ``\'F\'``, or before and after the decimal\npoint for a floating point value formatted with ``\'g\'`` or ``\'G\'``.\nFor non-number types the field indicates the maximum field size - in\nother words, how many characters will be used from the field content.\nThe *precision* is not allowed for integer values.\n\nFinally, the *type* determines how the data should be presented.\n\nThe available string presentation types are:\n\n +-----------+------------------------------------------------------------+\n | Type | Meaning |\n +===========+============================================================+\n | ``\'s\'`` | String format. This is the default type for strings and |\n | | may be omitted. |\n +-----------+------------------------------------------------------------+\n | None | The same as ``\'s\'``. |\n +-----------+------------------------------------------------------------+\n\nThe available integer presentation types are:\n\n +-----------+------------------------------------------------------------+\n | Type | Meaning |\n +===========+============================================================+\n | ``\'b\'`` | Binary format. Outputs the number in base 2. |\n +-----------+------------------------------------------------------------+\n | ``\'c\'`` | Character. Converts the integer to the corresponding |\n | | unicode character before printing. |\n +-----------+------------------------------------------------------------+\n | ``\'d\'`` | Decimal Integer. Outputs the number in base 10. |\n +-----------+------------------------------------------------------------+\n | ``\'o\'`` | Octal format. Outputs the number in base 8. |\n +-----------+------------------------------------------------------------+\n | ``\'x\'`` | Hex format. Outputs the number in base 16, using lower- |\n | | case letters for the digits above 9. |\n +-----------+------------------------------------------------------------+\n | ``\'X\'`` | Hex format. Outputs the number in base 16, using upper- |\n | | case letters for the digits above 9. |\n +-----------+------------------------------------------------------------+\n | ``\'n\'`` | Number. This is the same as ``\'d\'``, except that it uses |\n | | the current locale setting to insert the appropriate |\n | | number separator characters. |\n +-----------+------------------------------------------------------------+\n | None | The same as ``\'d\'``. |\n +-----------+------------------------------------------------------------+\n\nIn addition to the above presentation types, integers can be formatted\nwith the floating point presentation types listed below (except\n``\'n\'`` and None). When doing so, ``float()`` is used to convert the\ninteger to a floating point number before formatting.\n\nThe available presentation types for floating point and decimal values\nare:\n\n +-----------+------------------------------------------------------------+\n | Type | Meaning |\n +===========+============================================================+\n | ``\'e\'`` | Exponent notation. Prints the number in scientific |\n | | notation using the letter \'e\' to indicate the exponent. |\n +-----------+------------------------------------------------------------+\n | ``\'E\'`` | Exponent notation. 
Same as ``\'e\'`` except it uses an upper |\n | | case \'E\' as the separator character. |\n +-----------+------------------------------------------------------------+\n | ``\'f\'`` | Fixed point. Displays the number as a fixed-point number. |\n +-----------+------------------------------------------------------------+\n | ``\'F\'`` | Fixed point. Same as ``\'f\'``. |\n +-----------+------------------------------------------------------------+\n | ``\'g\'`` | General format. For a given precision ``p >= 1``, this |\n | | rounds the number to ``p`` significant digits and then |\n | | formats the result in either fixed-point format or in |\n | | scientific notation, depending on its magnitude. The |\n | | precise rules are as follows: suppose that the result |\n | | formatted with presentation type ``\'e\'`` and precision |\n | | ``p-1`` would have exponent ``exp``. Then if ``-4 <= exp |\n | | < p``, the number is formatted with presentation type |\n | | ``\'f\'`` and precision ``p-1-exp``. Otherwise, the number |\n | | is formatted with presentation type ``\'e\'`` and precision |\n | | ``p-1``. In both cases insignificant trailing zeros are |\n | | removed from the significand, and the decimal point is |\n | | also removed if there are no remaining digits following |\n | | it. Positive and negative infinity, positive and negative |\n | | zero, and nans, are formatted as ``inf``, ``-inf``, ``0``, |\n | | ``-0`` and ``nan`` respectively, regardless of the |\n | | precision. A precision of ``0`` is treated as equivalent |\n | | to a precision of ``1``. |\n +-----------+------------------------------------------------------------+\n | ``\'G\'`` | General format. Same as ``\'g\'`` except switches to ``\'E\'`` |\n | | if the number gets too large. The representations of |\n | | infinity and NaN are uppercased, too. |\n +-----------+------------------------------------------------------------+\n | ``\'n\'`` | Number. This is the same as ``\'g\'``, except that it uses |\n | | the current locale setting to insert the appropriate |\n | | number separator characters. |\n +-----------+------------------------------------------------------------+\n | ``\'%\'`` | Percentage. Multiplies the number by 100 and displays in |\n | | fixed (``\'f\'``) format, followed by a percent sign. |\n +-----------+------------------------------------------------------------+\n | None | The same as ``\'g\'``. |\n +-----------+------------------------------------------------------------+\n\n\nFormat examples\n===============\n\nThis section contains examples of the new format syntax and comparison\nwith the old ``%``-formatting.\n\nIn most of the cases the syntax is similar to the old\n``%``-formatting, with the addition of the ``{}`` and with ``:`` used\ninstead of ``%``. 
For example, ``\'%03.2f\'`` can be translated to\n``\'{:03.2f}\'``.\n\nThe new format syntax also supports new and different options, shown\nin the follow examples.\n\nAccessing arguments by position:\n\n >>> \'{0}, {1}, {2}\'.format(\'a\', \'b\', \'c\')\n \'a, b, c\'\n >>> \'{}, {}, {}\'.format(\'a\', \'b\', \'c\') # 2.7+ only\n \'a, b, c\'\n >>> \'{2}, {1}, {0}\'.format(\'a\', \'b\', \'c\')\n \'c, b, a\'\n >>> \'{2}, {1}, {0}\'.format(*\'abc\') # unpacking argument sequence\n \'c, b, a\'\n >>> \'{0}{1}{0}\'.format(\'abra\', \'cad\') # arguments\' indices can be repeated\n \'abracadabra\'\n\nAccessing arguments by name:\n\n >>> \'Coordinates: {latitude}, {longitude}\'.format(latitude=\'37.24N\', longitude=\'-115.81W\')\n \'Coordinates: 37.24N, -115.81W\'\n >>> coord = {\'latitude\': \'37.24N\', \'longitude\': \'-115.81W\'}\n >>> \'Coordinates: {latitude}, {longitude}\'.format(**coord)\n \'Coordinates: 37.24N, -115.81W\'\n\nAccessing arguments\' attributes:\n\n >>> c = 3-5j\n >>> (\'The complex number {0} is formed from the real part {0.real} \'\n ... \'and the imaginary part {0.imag}.\').format(c)\n \'The complex number (3-5j) is formed from the real part 3.0 and the imaginary part -5.0.\'\n >>> class Point(object):\n ... def __init__(self, x, y):\n ... self.x, self.y = x, y\n ... def __str__(self):\n ... return \'Point({self.x}, {self.y})\'.format(self=self)\n ...\n >>> str(Point(4, 2))\n \'Point(4, 2)\'\n\nAccessing arguments\' items:\n\n >>> coord = (3, 5)\n >>> \'X: {0[0]}; Y: {0[1]}\'.format(coord)\n \'X: 3; Y: 5\'\n\nReplacing ``%s`` and ``%r``:\n\n >>> "repr() shows quotes: {!r}; str() doesn\'t: {!s}".format(\'test1\', \'test2\')\n "repr() shows quotes: \'test1\'; str() doesn\'t: test2"\n\nAligning the text and specifying a width:\n\n >>> \'{:<30}\'.format(\'left aligned\')\n \'left aligned \'\n >>> \'{:>30}\'.format(\'right aligned\')\n \' right aligned\'\n >>> \'{:^30}\'.format(\'centered\')\n \' centered \'\n >>> \'{:*^30}\'.format(\'centered\') # use \'*\' as a fill char\n \'***********centered***********\'\n\nReplacing ``%+f``, ``%-f``, and ``% f`` and specifying a sign:\n\n >>> \'{:+f}; {:+f}\'.format(3.14, -3.14) # show it always\n \'+3.140000; -3.140000\'\n >>> \'{: f}; {: f}\'.format(3.14, -3.14) # show a space for positive numbers\n \' 3.140000; -3.140000\'\n >>> \'{:-f}; {:-f}\'.format(3.14, -3.14) # show only the minus -- same as \'{:f}; {:f}\'\n \'3.140000; -3.140000\'\n\nReplacing ``%x`` and ``%o`` and converting the value to different\nbases:\n\n >>> # format also supports binary numbers\n >>> "int: {0:d}; hex: {0:x}; oct: {0:o}; bin: {0:b}".format(42)\n \'int: 42; hex: 2a; oct: 52; bin: 101010\'\n >>> # with 0x, 0o, or 0b as prefix:\n >>> "int: {0:d}; hex: {0:#x}; oct: {0:#o}; bin: {0:#b}".format(42)\n \'int: 42; hex: 0x2a; oct: 0o52; bin: 0b101010\'\n\nUsing the comma as a thousands separator:\n\n >>> \'{:,}\'.format(1234567890)\n \'1,234,567,890\'\n\nExpressing a percentage:\n\n >>> points = 19.5\n >>> total = 22\n >>> \'Correct answers: {:.2%}.\'.format(points/total)\n \'Correct answers: 88.64%\'\n\nUsing type-specific formatting:\n\n >>> import datetime\n >>> d = datetime.datetime(2010, 7, 4, 12, 15, 58)\n >>> \'{:%Y-%m-%d %H:%M:%S}\'.format(d)\n \'2010-07-04 12:15:58\'\n\nNesting arguments and more complex examples:\n\n >>> for align, text in zip(\'<^>\', [\'left\', \'center\', \'right\']):\n ... 
\'{0:{fill}{align}16}\'.format(text, fill=align, align=align)\n ...\n \'left<<<<<<<<<<<<\'\n \'^^^^^center^^^^^\'\n \'>>>>>>>>>>>right\'\n >>>\n >>> octets = [192, 168, 0, 1]\n >>> \'{:02X}{:02X}{:02X}{:02X}\'.format(*octets)\n \'C0A80001\'\n >>> int(_, 16)\n 3232235521\n >>>\n >>> width = 5\n >>> for num in range(5,12):\n ... for base in \'dXob\':\n ... print \'{0:{width}{base}}\'.format(num, base=base, width=width),\n ... print\n ...\n 5 5 5 101\n 6 6 6 110\n 7 7 7 111\n 8 8 10 1000\n 9 9 11 1001\n 10 A 12 1010\n 11 B 13 1011\n', 'function': u'\nFunction definitions\n********************\n\nA function definition defines a user-defined function object (see\nsection *The standard type hierarchy*):\n\n decorated ::= decorators (classdef | funcdef)\n decorators ::= decorator+\n decorator ::= "@" dotted_name ["(" [argument_list [","]] ")"] NEWLINE\n funcdef ::= "def" funcname "(" [parameter_list] ")" ":" suite\n dotted_name ::= identifier ("." identifier)*\n parameter_list ::= (defparameter ",")*\n ( "*" identifier [, "**" identifier]\n | "**" identifier\n | defparameter [","] )\n defparameter ::= parameter ["=" expression]\n sublist ::= parameter ("," parameter)* [","]\n parameter ::= identifier | "(" sublist ")"\n funcname ::= identifier\n\nA function definition is an executable statement. Its execution binds\nthe function name in the current local namespace to a function object\n(a wrapper around the executable code for the function). This\nfunction object contains a reference to the current global namespace\nas the global namespace to be used when the function is called.\n\nThe function definition does not execute the function body; this gets\nexecuted only when the function is called. [3]\n\nA function definition may be wrapped by one or more *decorator*\nexpressions. Decorator expressions are evaluated when the function is\ndefined, in the scope that contains the function definition. The\nresult must be a callable, which is invoked with the function object\nas the only argument. The returned value is bound to the function name\ninstead of the function object. Multiple decorators are applied in\nnested fashion. For example, the following code:\n\n @f1(arg)\n @f2\n def func(): pass\n\nis equivalent to:\n\n def func(): pass\n func = f1(arg)(f2(func))\n\nWhen one or more top-level parameters have the form *parameter* ``=``\n*expression*, the function is said to have "default parameter values."\nFor a parameter with a default value, the corresponding argument may\nbe omitted from a call, in which case the parameter\'s default value is\nsubstituted. If a parameter has a default value, all following\nparameters must also have a default value --- this is a syntactic\nrestriction that is not expressed by the grammar.\n\n**Default parameter values are evaluated when the function definition\nis executed.** This means that the expression is evaluated once, when\nthe function is defined, and that that same "pre-computed" value is\nused for each call. This is especially important to understand when a\ndefault parameter is a mutable object, such as a list or a dictionary:\nif the function modifies the object (e.g. by appending an item to a\nlist), the default value is in effect modified. This is generally not\nwhat was intended. 
A way around this is to use ``None`` as the\ndefault, and explicitly test for it in the body of the function, e.g.:\n\n def whats_on_the_telly(penguin=None):\n if penguin is None:\n penguin = []\n penguin.append("property of the zoo")\n return penguin\n\nFunction call semantics are described in more detail in section\n*Calls*. A function call always assigns values to all parameters\nmentioned in the parameter list, either from position arguments, from\nkeyword arguments, or from default values. If the form\n"``*identifier``" is present, it is initialized to a tuple receiving\nany excess positional parameters, defaulting to the empty tuple. If\nthe form "``**identifier``" is present, it is initialized to a new\ndictionary receiving any excess keyword arguments, defaulting to a new\nempty dictionary.\n\nIt is also possible to create anonymous functions (functions not bound\nto a name), for immediate use in expressions. This uses lambda forms,\ndescribed in section *Lambdas*. Note that the lambda form is merely a\nshorthand for a simplified function definition; a function defined in\na "``def``" statement can be passed around or assigned to another name\njust like a function defined by a lambda form. The "``def``" form is\nactually more powerful since it allows the execution of multiple\nstatements.\n\n**Programmer\'s note:** Functions are first-class objects. A "``def``"\nform executed inside a function definition defines a local function\nthat can be returned or passed around. Free variables used in the\nnested function can access the local variables of the function\ncontaining the def. See section *Naming and binding* for details.\n', 'global': u'\nThe ``global`` statement\n************************\n\n global_stmt ::= "global" identifier ("," identifier)*\n\nThe ``global`` statement is a declaration which holds for the entire\ncurrent code block. It means that the listed identifiers are to be\ninterpreted as globals. It would be impossible to assign to a global\nvariable without ``global``, although free variables may refer to\nglobals without being declared global.\n\nNames listed in a ``global`` statement must not be used in the same\ncode block textually preceding that ``global`` statement.\n\nNames listed in a ``global`` statement must not be defined as formal\nparameters or in a ``for`` loop control target, ``class`` definition,\nfunction definition, or ``import`` statement.\n\n**CPython implementation detail:** The current implementation does not\nenforce the latter two restrictions, but programs should not abuse\nthis freedom, as future implementations may enforce them or silently\nchange the meaning of the program.\n\n**Programmer\'s note:** the ``global`` is a directive to the parser.\nIt applies only to code parsed at the same time as the ``global``\nstatement. In particular, a ``global`` statement contained in an\n``exec`` statement does not affect the code block *containing* the\n``exec`` statement, and code contained in an ``exec`` statement is\nunaffected by ``global`` statements in the code containing the\n``exec`` statement. The same applies to the ``eval()``,\n``execfile()`` and ``compile()`` functions.\n', - 'id-classes': u'\nReserved classes of identifiers\n*******************************\n\nCertain classes of identifiers (besides keywords) have special\nmeanings. These classes are identified by the patterns of leading and\ntrailing underscore characters:\n\n``_*``\n Not imported by ``from module import *``. 
The special identifier\n ``_`` is used in the interactive interpreter to store the result of\n the last evaluation; it is stored in the ``__builtin__`` module.\n When not in interactive mode, ``_`` has no special meaning and is\n not defined. See section *The import statement*.\n\n Note: The name ``_`` is often used in conjunction with\n internationalization; refer to the documentation for the\n ``gettext`` module for more information on this convention.\n\n``__*__``\n System-defined names. These names are defined by the interpreter\n and its implementation (including the standard library);\n applications should not expect to define additional names using\n this convention. The set of names of this class defined by Python\n may be extended in future versions. See section *Special method\n names*.\n\n``__*``\n Class-private names. Names in this category, when used within the\n context of a class definition, are re-written to use a mangled form\n to help avoid name clashes between "private" attributes of base and\n derived classes. See section *Identifiers (Names)*.\n', - 'identifiers': u'\nIdentifiers and keywords\n************************\n\nIdentifiers (also referred to as *names*) are described by the\nfollowing lexical definitions:\n\n identifier ::= (letter|"_") (letter | digit | "_")*\n letter ::= lowercase | uppercase\n lowercase ::= "a"..."z"\n uppercase ::= "A"..."Z"\n digit ::= "0"..."9"\n\nIdentifiers are unlimited in length. Case is significant.\n\n\nKeywords\n========\n\nThe following identifiers are used as reserved words, or *keywords* of\nthe language, and cannot be used as ordinary identifiers. They must\nbe spelled exactly as written here:\n\n and del from not while\n as elif global or with\n assert else if pass yield\n break except import print\n class exec in raise\n continue finally is return\n def for lambda try\n\nChanged in version 2.4: ``None`` became a constant and is now\nrecognized by the compiler as a name for the built-in object ``None``.\nAlthough it is not a keyword, you cannot assign a different object to\nit.\n\nChanged in version 2.5: Both ``as`` and ``with`` are only recognized\nwhen the ``with_statement`` future feature has been enabled. It will\nalways be enabled in Python 2.6. See section *The with statement* for\ndetails. Note that using ``as`` and ``with`` as identifiers will\nalways issue a warning, even when the ``with_statement`` future\ndirective is not in effect.\n\n\nReserved classes of identifiers\n===============================\n\nCertain classes of identifiers (besides keywords) have special\nmeanings. These classes are identified by the patterns of leading and\ntrailing underscore characters:\n\n``_*``\n Not imported by ``from module import *``. The special identifier\n ``_`` is used in the interactive interpreter to store the result of\n the last evaluation; it is stored in the ``__builtin__`` module.\n When not in interactive mode, ``_`` has no special meaning and is\n not defined. See section *The import statement*.\n\n Note: The name ``_`` is often used in conjunction with\n internationalization; refer to the documentation for the\n ``gettext`` module for more information on this convention.\n\n``__*__``\n System-defined names. These names are defined by the interpreter\n and its implementation (including the standard library);\n applications should not expect to define additional names using\n this convention. The set of names of this class defined by Python\n may be extended in future versions. 
See section *Special method\n names*.\n\n``__*``\n Class-private names. Names in this category, when used within the\n context of a class definition, are re-written to use a mangled form\n to help avoid name clashes between "private" attributes of base and\n derived classes. See section *Identifiers (Names)*.\n', + 'id-classes': u'\nReserved classes of identifiers\n*******************************\n\nCertain classes of identifiers (besides keywords) have special\nmeanings. These classes are identified by the patterns of leading and\ntrailing underscore characters:\n\n``_*``\n Not imported by ``from module import *``. The special identifier\n ``_`` is used in the interactive interpreter to store the result of\n the last evaluation; it is stored in the ``__builtin__`` module.\n When not in interactive mode, ``_`` has no special meaning and is\n not defined. See section *The import statement*.\n\n Note: The name ``_`` is often used in conjunction with\n internationalization; refer to the documentation for the\n ``gettext`` module for more information on this convention.\n\n``__*__``\n System-defined names. These names are defined by the interpreter\n and its implementation (including the standard library). Current\n system names are discussed in the *Special method names* section\n and elsewhere. More will likely be defined in future versions of\n Python. *Any* use of ``__*__`` names, in any context, that does\n not follow explicitly documented use, is subject to breakage\n without warning.\n\n``__*``\n Class-private names. Names in this category, when used within the\n context of a class definition, are re-written to use a mangled form\n to help avoid name clashes between "private" attributes of base and\n derived classes. See section *Identifiers (Names)*.\n', + 'identifiers': u'\nIdentifiers and keywords\n************************\n\nIdentifiers (also referred to as *names*) are described by the\nfollowing lexical definitions:\n\n identifier ::= (letter|"_") (letter | digit | "_")*\n letter ::= lowercase | uppercase\n lowercase ::= "a"..."z"\n uppercase ::= "A"..."Z"\n digit ::= "0"..."9"\n\nIdentifiers are unlimited in length. Case is significant.\n\n\nKeywords\n========\n\nThe following identifiers are used as reserved words, or *keywords* of\nthe language, and cannot be used as ordinary identifiers. They must\nbe spelled exactly as written here:\n\n and del from not while\n as elif global or with\n assert else if pass yield\n break except import print\n class exec in raise\n continue finally is return\n def for lambda try\n\nChanged in version 2.4: ``None`` became a constant and is now\nrecognized by the compiler as a name for the built-in object ``None``.\nAlthough it is not a keyword, you cannot assign a different object to\nit.\n\nChanged in version 2.5: Both ``as`` and ``with`` are only recognized\nwhen the ``with_statement`` future feature has been enabled. It will\nalways be enabled in Python 2.6. See section *The with statement* for\ndetails. Note that using ``as`` and ``with`` as identifiers will\nalways issue a warning, even when the ``with_statement`` future\ndirective is not in effect.\n\n\nReserved classes of identifiers\n===============================\n\nCertain classes of identifiers (besides keywords) have special\nmeanings. These classes are identified by the patterns of leading and\ntrailing underscore characters:\n\n``_*``\n Not imported by ``from module import *``. 
The special identifier\n ``_`` is used in the interactive interpreter to store the result of\n the last evaluation; it is stored in the ``__builtin__`` module.\n When not in interactive mode, ``_`` has no special meaning and is\n not defined. See section *The import statement*.\n\n Note: The name ``_`` is often used in conjunction with\n internationalization; refer to the documentation for the\n ``gettext`` module for more information on this convention.\n\n``__*__``\n System-defined names. These names are defined by the interpreter\n and its implementation (including the standard library). Current\n system names are discussed in the *Special method names* section\n and elsewhere. More will likely be defined in future versions of\n Python. *Any* use of ``__*__`` names, in any context, that does\n not follow explicitly documented use, is subject to breakage\n without warning.\n\n``__*``\n Class-private names. Names in this category, when used within the\n context of a class definition, are re-written to use a mangled form\n to help avoid name clashes between "private" attributes of base and\n derived classes. See section *Identifiers (Names)*.\n', 'if': u'\nThe ``if`` statement\n********************\n\nThe ``if`` statement is used for conditional execution:\n\n if_stmt ::= "if" expression ":" suite\n ( "elif" expression ":" suite )*\n ["else" ":" suite]\n\nIt selects exactly one of the suites by evaluating the expressions one\nby one until one is found to be true (see section *Boolean operations*\nfor the definition of true and false); then that suite is executed\n(and no other part of the ``if`` statement is executed or evaluated).\nIf all expressions are false, the suite of the ``else`` clause, if\npresent, is executed.\n', 'imaginary': u'\nImaginary literals\n******************\n\nImaginary literals are described by the following lexical definitions:\n\n imagnumber ::= (floatnumber | intpart) ("j" | "J")\n\nAn imaginary literal yields a complex number with a real part of 0.0.\nComplex numbers are represented as a pair of floating point numbers\nand have the same restrictions on their range. To create a complex\nnumber with a nonzero real part, add a floating point number to it,\ne.g., ``(3+4j)``. Some examples of imaginary literals:\n\n 3.14j 10.j 10j .001j 1e100j 3.14e-10j\n', - 'import': u'\nThe ``import`` statement\n************************\n\n import_stmt ::= "import" module ["as" name] ( "," module ["as" name] )*\n | "from" relative_module "import" identifier ["as" name]\n ( "," identifier ["as" name] )*\n | "from" relative_module "import" "(" identifier ["as" name]\n ( "," identifier ["as" name] )* [","] ")"\n | "from" module "import" "*"\n module ::= (identifier ".")* identifier\n relative_module ::= "."* module | "."+\n name ::= identifier\n\nImport statements are executed in two steps: (1) find a module, and\ninitialize it if necessary; (2) define a name or names in the local\nnamespace (of the scope where the ``import`` statement occurs). The\nstatement comes in two forms differing on whether it uses the ``from``\nkeyword. The first form (without ``from``) repeats these steps for\neach identifier in the list. The form with ``from`` performs step (1)\nonce, and then performs step (2) repeatedly.\n\nTo understand how step (1) occurs, one must first understand how\nPython handles hierarchical naming of modules. To help organize\nmodules and provide a hierarchy in naming, Python has a concept of\npackages. 
A package can contain other packages and modules while\nmodules cannot contain other modules or packages. From a file system\nperspective, packages are directories and modules are files. The\noriginal specification for packages is still available to read,\nalthough minor details have changed since the writing of that\ndocument.\n\nOnce the name of the module is known (unless otherwise specified, the\nterm "module" will refer to both packages and modules), searching for\nthe module or package can begin. The first place checked is\n``sys.modules``, the cache of all modules that have been imported\npreviously. If the module is found there then it is used in step (2)\nof import.\n\nIf the module is not found in the cache, then ``sys.meta_path`` is\nsearched (the specification for ``sys.meta_path`` can be found in\n**PEP 302**). The object is a list of *finder* objects which are\nqueried in order as to whether they know how to load the module by\ncalling their ``find_module()`` method with the name of the module. If\nthe module happens to be contained within a package (as denoted by the\nexistence of a dot in the name), then a second argument to\n``find_module()`` is given as the value of the ``__path__`` attribute\nfrom the parent package (everything up to the last dot in the name of\nthe module being imported). If a finder can find the module it returns\na *loader* (discussed later) or returns ``None``.\n\nIf none of the finders on ``sys.meta_path`` are able to find the\nmodule then some implicitly defined finders are queried.\nImplementations of Python vary in what implicit meta path finders are\ndefined. The one they all do define, though, is one that handles\n``sys.path_hooks``, ``sys.path_importer_cache``, and ``sys.path``.\n\nThe implicit finder searches for the requested module in the "paths"\nspecified in one of two places ("paths" do not have to be file system\npaths). If the module being imported is supposed to be contained\nwithin a package then the second argument passed to ``find_module()``,\n``__path__`` on the parent package, is used as the source of paths. If\nthe module is not contained in a package then ``sys.path`` is used as\nthe source of paths.\n\nOnce the source of paths is chosen it is iterated over to find a\nfinder that can handle that path. The dict at\n``sys.path_importer_cache`` caches finders for paths and is checked\nfor a finder. If the path does not have a finder cached then\n``sys.path_hooks`` is searched by calling each object in the list with\na single argument of the path, returning a finder or raises\n``ImportError``. If a finder is returned then it is cached in\n``sys.path_importer_cache`` and then used for that path entry. If no\nfinder can be found but the path exists then a value of ``None`` is\nstored in ``sys.path_importer_cache`` to signify that an implicit,\nfile-based finder that handles modules stored as individual files\nshould be used for that path. If the path does not exist then a finder\nwhich always returns ``None`` is placed in the cache for the path.\n\nIf no finder can find the module then ``ImportError`` is raised.\nOtherwise some finder returned a loader whose ``load_module()`` method\nis called with the name of the module to load (see **PEP 302** for the\noriginal definition of loaders). A loader has several responsibilities\nto perform on a module it loads. 
First, if the module already exists\nin ``sys.modules`` (a possibility if the loader is called outside of\nthe import machinery) then it is to use that module for initialization\nand not a new module. But if the module does not exist in\n``sys.modules`` then it is to be added to that dict before\ninitialization begins. If an error occurs during loading of the module\nand it was added to ``sys.modules`` it is to be removed from the dict.\nIf an error occurs but the module was already in ``sys.modules`` it is\nleft in the dict.\n\nThe loader must set several attributes on the module. ``__name__`` is\nto be set to the name of the module. ``__file__`` is to be the "path"\nto the file unless the module is built-in (and thus listed in\n``sys.builtin_module_names``) in which case the attribute is not set.\nIf what is being imported is a package then ``__path__`` is to be set\nto a list of paths to be searched when looking for modules and\npackages contained within the package being imported. ``__package__``\nis optional but should be set to the name of package that contains the\nmodule or package (the empty string is used for module not contained\nin a package). ``__loader__`` is also optional but should be set to\nthe loader object that is loading the module.\n\nIf an error occurs during loading then the loader raises\n``ImportError`` if some other exception is not already being\npropagated. Otherwise the loader returns the module that was loaded\nand initialized.\n\nWhen step (1) finishes without raising an exception, step (2) can\nbegin.\n\nThe first form of ``import`` statement binds the module name in the\nlocal namespace to the module object, and then goes on to import the\nnext identifier, if any. If the module name is followed by ``as``,\nthe name following ``as`` is used as the local name for the module.\n\nThe ``from`` form does not bind the module name: it goes through the\nlist of identifiers, looks each one of them up in the module found in\nstep (1), and binds the name in the local namespace to the object thus\nfound. As with the first form of ``import``, an alternate local name\ncan be supplied by specifying "``as`` localname". If a name is not\nfound, ``ImportError`` is raised. If the list of identifiers is\nreplaced by a star (``\'*\'``), all public names defined in the module\nare bound in the local namespace of the ``import`` statement..\n\nThe *public names* defined by a module are determined by checking the\nmodule\'s namespace for a variable named ``__all__``; if defined, it\nmust be a sequence of strings which are names defined or imported by\nthat module. The names given in ``__all__`` are all considered public\nand are required to exist. If ``__all__`` is not defined, the set of\npublic names includes all names found in the module\'s namespace which\ndo not begin with an underscore character (``\'_\'``). ``__all__``\nshould contain the entire public API. It is intended to avoid\naccidentally exporting items that are not part of the API (such as\nlibrary modules which were imported and used within the module).\n\nThe ``from`` form with ``*`` may only occur in a module scope. If the\nwild card form of import --- ``import *`` --- is used in a function\nand the function contains or is a nested block with free variables,\nthe compiler will raise a ``SyntaxError``.\n\nWhen specifying what module to import you do not have to specify the\nabsolute name of the module. 
When a module or package is contained\nwithin another package it is possible to make a relative import within\nthe same top package without having to mention the package name. By\nusing leading dots in the specified module or package after ``from``\nyou can specify how high to traverse up the current package hierarchy\nwithout specifying exact names. One leading dot means the current\npackage where the module making the import exists. Two dots means up\none package level. Three dots is up two levels, etc. So if you execute\n``from . import mod`` from a module in the ``pkg`` package then you\nwill end up importing ``pkg.mod``. If you execute ``from ..subpkg2\nimprt mod`` from within ``pkg.subpkg1`` you will import\n``pkg.subpkg2.mod``. The specification for relative imports is\ncontained within **PEP 328**.\n\n``importlib.import_module()`` is provided to support applications that\ndetermine which modules need to be loaded dynamically.\n\n\nFuture statements\n=================\n\nA *future statement* is a directive to the compiler that a particular\nmodule should be compiled using syntax or semantics that will be\navailable in a specified future release of Python. The future\nstatement is intended to ease migration to future versions of Python\nthat introduce incompatible changes to the language. It allows use of\nthe new features on a per-module basis before the release in which the\nfeature becomes standard.\n\n future_statement ::= "from" "__future__" "import" feature ["as" name]\n ("," feature ["as" name])*\n | "from" "__future__" "import" "(" feature ["as" name]\n ("," feature ["as" name])* [","] ")"\n feature ::= identifier\n name ::= identifier\n\nA future statement must appear near the top of the module. The only\nlines that can appear before a future statement are:\n\n* the module docstring (if any),\n\n* comments,\n\n* blank lines, and\n\n* other future statements.\n\nThe features recognized by Python 2.6 are ``unicode_literals``,\n``print_function``, ``absolute_import``, ``division``, ``generators``,\n``nested_scopes`` and ``with_statement``. ``generators``,\n``with_statement``, ``nested_scopes`` are redundant in Python version\n2.6 and above because they are always enabled.\n\nA future statement is recognized and treated specially at compile\ntime: Changes to the semantics of core constructs are often\nimplemented by generating different code. It may even be the case\nthat a new feature introduces new incompatible syntax (such as a new\nreserved word), in which case the compiler may need to parse the\nmodule differently. 
Such decisions cannot be pushed off until\nruntime.\n\nFor any given release, the compiler knows which feature names have\nbeen defined, and raises a compile-time error if a future statement\ncontains a feature not known to it.\n\nThe direct runtime semantics are the same as for any import statement:\nthere is a standard module ``__future__``, described later, and it\nwill be imported in the usual way at the time the future statement is\nexecuted.\n\nThe interesting runtime semantics depend on the specific feature\nenabled by the future statement.\n\nNote that there is nothing special about the statement:\n\n import __future__ [as name]\n\nThat is not a future statement; it\'s an ordinary import statement with\nno special semantics or syntax restrictions.\n\nCode compiled by an ``exec`` statement or calls to the built-in\nfunctions ``compile()`` and ``execfile()`` that occur in a module\n``M`` containing a future statement will, by default, use the new\nsyntax or semantics associated with the future statement. This can,\nstarting with Python 2.2 be controlled by optional arguments to\n``compile()`` --- see the documentation of that function for details.\n\nA future statement typed at an interactive interpreter prompt will\ntake effect for the rest of the interpreter session. If an\ninterpreter is started with the *-i* option, is passed a script name\nto execute, and the script includes a future statement, it will be in\neffect in the interactive session started after the script is\nexecuted.\n\nSee also:\n\n **PEP 236** - Back to the __future__\n The original proposal for the __future__ mechanism.\n', + 'import': u'\nThe ``import`` statement\n************************\n\n import_stmt ::= "import" module ["as" name] ( "," module ["as" name] )*\n | "from" relative_module "import" identifier ["as" name]\n ( "," identifier ["as" name] )*\n | "from" relative_module "import" "(" identifier ["as" name]\n ( "," identifier ["as" name] )* [","] ")"\n | "from" module "import" "*"\n module ::= (identifier ".")* identifier\n relative_module ::= "."* module | "."+\n name ::= identifier\n\nImport statements are executed in two steps: (1) find a module, and\ninitialize it if necessary; (2) define a name or names in the local\nnamespace (of the scope where the ``import`` statement occurs). The\nstatement comes in two forms differing on whether it uses the ``from``\nkeyword. The first form (without ``from``) repeats these steps for\neach identifier in the list. The form with ``from`` performs step (1)\nonce, and then performs step (2) repeatedly.\n\nTo understand how step (1) occurs, one must first understand how\nPython handles hierarchical naming of modules. To help organize\nmodules and provide a hierarchy in naming, Python has a concept of\npackages. A package can contain other packages and modules while\nmodules cannot contain other modules or packages. From a file system\nperspective, packages are directories and modules are files. The\noriginal specification for packages is still available to read,\nalthough minor details have changed since the writing of that\ndocument.\n\nOnce the name of the module is known (unless otherwise specified, the\nterm "module" will refer to both packages and modules), searching for\nthe module or package can begin. The first place checked is\n``sys.modules``, the cache of all modules that have been imported\npreviously. 
If the module is found there then it is used in step (2)\nof import.\n\nIf the module is not found in the cache, then ``sys.meta_path`` is\nsearched (the specification for ``sys.meta_path`` can be found in\n**PEP 302**). The object is a list of *finder* objects which are\nqueried in order as to whether they know how to load the module by\ncalling their ``find_module()`` method with the name of the module. If\nthe module happens to be contained within a package (as denoted by the\nexistence of a dot in the name), then a second argument to\n``find_module()`` is given as the value of the ``__path__`` attribute\nfrom the parent package (everything up to the last dot in the name of\nthe module being imported). If a finder can find the module it returns\na *loader* (discussed later) or returns ``None``.\n\nIf none of the finders on ``sys.meta_path`` are able to find the\nmodule then some implicitly defined finders are queried.\nImplementations of Python vary in what implicit meta path finders are\ndefined. The one they all do define, though, is one that handles\n``sys.path_hooks``, ``sys.path_importer_cache``, and ``sys.path``.\n\nThe implicit finder searches for the requested module in the "paths"\nspecified in one of two places ("paths" do not have to be file system\npaths). If the module being imported is supposed to be contained\nwithin a package then the second argument passed to ``find_module()``,\n``__path__`` on the parent package, is used as the source of paths. If\nthe module is not contained in a package then ``sys.path`` is used as\nthe source of paths.\n\nOnce the source of paths is chosen it is iterated over to find a\nfinder that can handle that path. The dict at\n``sys.path_importer_cache`` caches finders for paths and is checked\nfor a finder. If the path does not have a finder cached then\n``sys.path_hooks`` is searched by calling each object in the list with\na single argument of the path, returning a finder or raises\n``ImportError``. If a finder is returned then it is cached in\n``sys.path_importer_cache`` and then used for that path entry. If no\nfinder can be found but the path exists then a value of ``None`` is\nstored in ``sys.path_importer_cache`` to signify that an implicit,\nfile-based finder that handles modules stored as individual files\nshould be used for that path. If the path does not exist then a finder\nwhich always returns ``None`` is placed in the cache for the path.\n\nIf no finder can find the module then ``ImportError`` is raised.\nOtherwise some finder returned a loader whose ``load_module()`` method\nis called with the name of the module to load (see **PEP 302** for the\noriginal definition of loaders). A loader has several responsibilities\nto perform on a module it loads. First, if the module already exists\nin ``sys.modules`` (a possibility if the loader is called outside of\nthe import machinery) then it is to use that module for initialization\nand not a new module. But if the module does not exist in\n``sys.modules`` then it is to be added to that dict before\ninitialization begins. If an error occurs during loading of the module\nand it was added to ``sys.modules`` it is to be removed from the dict.\nIf an error occurs but the module was already in ``sys.modules`` it is\nleft in the dict.\n\nThe loader must set several attributes on the module. ``__name__`` is\nto be set to the name of the module. 
``__file__`` is to be the "path"\nto the file unless the module is built-in (and thus listed in\n``sys.builtin_module_names``) in which case the attribute is not set.\nIf what is being imported is a package then ``__path__`` is to be set\nto a list of paths to be searched when looking for modules and\npackages contained within the package being imported. ``__package__``\nis optional but should be set to the name of package that contains the\nmodule or package (the empty string is used for module not contained\nin a package). ``__loader__`` is also optional but should be set to\nthe loader object that is loading the module.\n\nIf an error occurs during loading then the loader raises\n``ImportError`` if some other exception is not already being\npropagated. Otherwise the loader returns the module that was loaded\nand initialized.\n\nWhen step (1) finishes without raising an exception, step (2) can\nbegin.\n\nThe first form of ``import`` statement binds the module name in the\nlocal namespace to the module object, and then goes on to import the\nnext identifier, if any. If the module name is followed by ``as``,\nthe name following ``as`` is used as the local name for the module.\n\nThe ``from`` form does not bind the module name: it goes through the\nlist of identifiers, looks each one of them up in the module found in\nstep (1), and binds the name in the local namespace to the object thus\nfound. As with the first form of ``import``, an alternate local name\ncan be supplied by specifying "``as`` localname". If a name is not\nfound, ``ImportError`` is raised. If the list of identifiers is\nreplaced by a star (``\'*\'``), all public names defined in the module\nare bound in the local namespace of the ``import`` statement..\n\nThe *public names* defined by a module are determined by checking the\nmodule\'s namespace for a variable named ``__all__``; if defined, it\nmust be a sequence of strings which are names defined or imported by\nthat module. The names given in ``__all__`` are all considered public\nand are required to exist. If ``__all__`` is not defined, the set of\npublic names includes all names found in the module\'s namespace which\ndo not begin with an underscore character (``\'_\'``). ``__all__``\nshould contain the entire public API. It is intended to avoid\naccidentally exporting items that are not part of the API (such as\nlibrary modules which were imported and used within the module).\n\nThe ``from`` form with ``*`` may only occur in a module scope. If the\nwild card form of import --- ``import *`` --- is used in a function\nand the function contains or is a nested block with free variables,\nthe compiler will raise a ``SyntaxError``.\n\nWhen specifying what module to import you do not have to specify the\nabsolute name of the module. When a module or package is contained\nwithin another package it is possible to make a relative import within\nthe same top package without having to mention the package name. By\nusing leading dots in the specified module or package after ``from``\nyou can specify how high to traverse up the current package hierarchy\nwithout specifying exact names. One leading dot means the current\npackage where the module making the import exists. Two dots means up\none package level. Three dots is up two levels, etc. So if you execute\n``from . import mod`` from a module in the ``pkg`` package then you\nwill end up importing ``pkg.mod``. If you execute ``from ..subpkg2\nimport mod`` from within ``pkg.subpkg1`` you will import\n``pkg.subpkg2.mod``. 
The specification for relative imports is\ncontained within **PEP 328**.\n\n``importlib.import_module()`` is provided to support applications that\ndetermine which modules need to be loaded dynamically.\n\n\nFuture statements\n=================\n\nA *future statement* is a directive to the compiler that a particular\nmodule should be compiled using syntax or semantics that will be\navailable in a specified future release of Python. The future\nstatement is intended to ease migration to future versions of Python\nthat introduce incompatible changes to the language. It allows use of\nthe new features on a per-module basis before the release in which the\nfeature becomes standard.\n\n future_statement ::= "from" "__future__" "import" feature ["as" name]\n ("," feature ["as" name])*\n | "from" "__future__" "import" "(" feature ["as" name]\n ("," feature ["as" name])* [","] ")"\n feature ::= identifier\n name ::= identifier\n\nA future statement must appear near the top of the module. The only\nlines that can appear before a future statement are:\n\n* the module docstring (if any),\n\n* comments,\n\n* blank lines, and\n\n* other future statements.\n\nThe features recognized by Python 2.6 are ``unicode_literals``,\n``print_function``, ``absolute_import``, ``division``, ``generators``,\n``nested_scopes`` and ``with_statement``. ``generators``,\n``with_statement``, ``nested_scopes`` are redundant in Python version\n2.6 and above because they are always enabled.\n\nA future statement is recognized and treated specially at compile\ntime: Changes to the semantics of core constructs are often\nimplemented by generating different code. It may even be the case\nthat a new feature introduces new incompatible syntax (such as a new\nreserved word), in which case the compiler may need to parse the\nmodule differently. Such decisions cannot be pushed off until\nruntime.\n\nFor any given release, the compiler knows which feature names have\nbeen defined, and raises a compile-time error if a future statement\ncontains a feature not known to it.\n\nThe direct runtime semantics are the same as for any import statement:\nthere is a standard module ``__future__``, described later, and it\nwill be imported in the usual way at the time the future statement is\nexecuted.\n\nThe interesting runtime semantics depend on the specific feature\nenabled by the future statement.\n\nNote that there is nothing special about the statement:\n\n import __future__ [as name]\n\nThat is not a future statement; it\'s an ordinary import statement with\nno special semantics or syntax restrictions.\n\nCode compiled by an ``exec`` statement or calls to the built-in\nfunctions ``compile()`` and ``execfile()`` that occur in a module\n``M`` containing a future statement will, by default, use the new\nsyntax or semantics associated with the future statement. This can,\nstarting with Python 2.2 be controlled by optional arguments to\n``compile()`` --- see the documentation of that function for details.\n\nA future statement typed at an interactive interpreter prompt will\ntake effect for the rest of the interpreter session. 
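As a small illustration of the runtime effect of a future statement, a module that begins as follows gets true division and the ``print()`` function for that module only (a minimal sketch; both features are recognized from Python 2.6):

   from __future__ import division, print_function

   print(1 / 2)      # 0.5: true division is enabled for this module
   print(1 // 2)     # 0:   floor division is unchanged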
If an\ninterpreter is started with the *-i* option, is passed a script name\nto execute, and the script includes a future statement, it will be in\neffect in the interactive session started after the script is\nexecuted.\n\nSee also:\n\n **PEP 236** - Back to the __future__\n The original proposal for the __future__ mechanism.\n', 'in': u'\nComparisons\n***********\n\nUnlike C, all comparison operations in Python have the same priority,\nwhich is lower than that of any arithmetic, shifting or bitwise\noperation. Also unlike C, expressions like ``a < b < c`` have the\ninterpretation that is conventional in mathematics:\n\n comparison ::= or_expr ( comp_operator or_expr )*\n comp_operator ::= "<" | ">" | "==" | ">=" | "<=" | "<>" | "!="\n | "is" ["not"] | ["not"] "in"\n\nComparisons yield boolean values: ``True`` or ``False``.\n\nComparisons can be chained arbitrarily, e.g., ``x < y <= z`` is\nequivalent to ``x < y and y <= z``, except that ``y`` is evaluated\nonly once (but in both cases ``z`` is not evaluated at all when ``x <\ny`` is found to be false).\n\nFormally, if *a*, *b*, *c*, ..., *y*, *z* are expressions and *op1*,\n*op2*, ..., *opN* are comparison operators, then ``a op1 b op2 c ... y\nopN z`` is equivalent to ``a op1 b and b op2 c and ... y opN z``,\nexcept that each expression is evaluated at most once.\n\nNote that ``a op1 b op2 c`` doesn\'t imply any kind of comparison\nbetween *a* and *c*, so that, e.g., ``x < y > z`` is perfectly legal\n(though perhaps not pretty).\n\nThe forms ``<>`` and ``!=`` are equivalent; for consistency with C,\n``!=`` is preferred; where ``!=`` is mentioned below ``<>`` is also\naccepted. The ``<>`` spelling is considered obsolescent.\n\nThe operators ``<``, ``>``, ``==``, ``>=``, ``<=``, and ``!=`` compare\nthe values of two objects. The objects need not have the same type.\nIf both are numbers, they are converted to a common type. Otherwise,\nobjects of different types *always* compare unequal, and are ordered\nconsistently but arbitrarily. You can control comparison behavior of\nobjects of non-built-in types by defining a ``__cmp__`` method or rich\ncomparison methods like ``__gt__``, described in section *Special\nmethod names*.\n\n(This unusual definition of comparison was used to simplify the\ndefinition of operations like sorting and the ``in`` and ``not in``\noperators. In the future, the comparison rules for objects of\ndifferent types are likely to change.)\n\nComparison of objects of the same type depends on the type:\n\n* Numbers are compared arithmetically.\n\n* Strings are compared lexicographically using the numeric equivalents\n (the result of the built-in function ``ord()``) of their characters.\n Unicode and 8-bit strings are fully interoperable in this behavior.\n [4]\n\n* Tuples and lists are compared lexicographically using comparison of\n corresponding elements. This means that to compare equal, each\n element must compare equal and the two sequences must be of the same\n type and have the same length.\n\n If not equal, the sequences are ordered the same as their first\n differing elements. For example, ``cmp([1,2,x], [1,2,y])`` returns\n the same as ``cmp(x,y)``. If the corresponding element does not\n exist, the shorter sequence is ordered first (for example, ``[1,2] <\n [1,2,3]``).\n\n* Mappings (dictionaries) compare equal if and only if their sorted\n (key, value) lists compare equal. [5] Outcomes other than equality\n are resolved consistently, but are not otherwise defined. 
[6]\n\n* Most other objects of built-in types compare unequal unless they are\n the same object; the choice whether one object is considered smaller\n or larger than another one is made arbitrarily but consistently\n within one execution of a program.\n\nThe operators ``in`` and ``not in`` test for collection membership.\n``x in s`` evaluates to true if *x* is a member of the collection *s*,\nand false otherwise. ``x not in s`` returns the negation of ``x in\ns``. The collection membership test has traditionally been bound to\nsequences; an object is a member of a collection if the collection is\na sequence and contains an element equal to that object. However, it\nmake sense for many other object types to support membership tests\nwithout being a sequence. In particular, dictionaries (for keys) and\nsets support membership testing.\n\nFor the list and tuple types, ``x in y`` is true if and only if there\nexists an index *i* such that ``x == y[i]`` is true.\n\nFor the Unicode and string types, ``x in y`` is true if and only if\n*x* is a substring of *y*. An equivalent test is ``y.find(x) != -1``.\nNote, *x* and *y* need not be the same type; consequently, ``u\'ab\' in\n\'abc\'`` will return ``True``. Empty strings are always considered to\nbe a substring of any other string, so ``"" in "abc"`` will return\n``True``.\n\nChanged in version 2.3: Previously, *x* was required to be a string of\nlength ``1``.\n\nFor user-defined classes which define the ``__contains__()`` method,\n``x in y`` is true if and only if ``y.__contains__(x)`` is true.\n\nFor user-defined classes which do not define ``__contains__()`` but do\ndefine ``__iter__()``, ``x in y`` is true if some value ``z`` with ``x\n== z`` is produced while iterating over ``y``. If an exception is\nraised during the iteration, it is as if ``in`` raised that exception.\n\nLastly, the old-style iteration protocol is tried: if a class defines\n``__getitem__()``, ``x in y`` is true if and only if there is a non-\nnegative integer index *i* such that ``x == y[i]``, and all lower\ninteger indices do not raise ``IndexError`` exception. (If any other\nexception is raised, it is as if ``in`` raised that exception).\n\nThe operator ``not in`` is defined to have the inverse true value of\n``in``.\n\nThe operators ``is`` and ``is not`` test for object identity: ``x is\ny`` is true if and only if *x* and *y* are the same object. ``x is\nnot y`` yields the inverse truth value. [7]\n', 'integers': u'\nInteger and long integer literals\n*********************************\n\nInteger and long integer literals are described by the following\nlexical definitions:\n\n longinteger ::= integer ("l" | "L")\n integer ::= decimalinteger | octinteger | hexinteger | bininteger\n decimalinteger ::= nonzerodigit digit* | "0"\n octinteger ::= "0" ("o" | "O") octdigit+ | "0" octdigit+\n hexinteger ::= "0" ("x" | "X") hexdigit+\n bininteger ::= "0" ("b" | "B") bindigit+\n nonzerodigit ::= "1"..."9"\n octdigit ::= "0"..."7"\n bindigit ::= "0" | "1"\n hexdigit ::= digit | "a"..."f" | "A"..."F"\n\nAlthough both lower case ``\'l\'`` and upper case ``\'L\'`` are allowed as\nsuffix for long integers, it is strongly recommended to always use\n``\'L\'``, since the letter ``\'l\'`` looks too much like the digit\n``\'1\'``.\n\nPlain integer literals that are above the largest representable plain\ninteger (e.g., 2147483647 when using 32-bit arithmetic) are accepted\nas if they were long integers instead. 
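A few of the comparison and membership rules described above, spelled out with built-in types only:

   x, z = 2, 10
   print 1 <= x < z                   # True: same as (1 <= x) and (x < z), x evaluated once
   print "ab" in "abc"                # True: substring test
   print "" in "abc"                  # True: the empty string is a substring of any string
   print 2 in {1: "one", 2: "two"}    # True: dictionaries test membership on keys
   print [1, 2] < [1, 2, 3]           # True: the shorter sequence orders first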
[1] There is no limit for long\ninteger literals apart from what can be stored in available memory.\n\nSome examples of plain integer literals (first row) and long integer\nliterals (second and third rows):\n\n 7 2147483647 0177\n 3L 79228162514264337593543950336L 0377L 0x100000000L\n 79228162514264337593543950336 0xdeadbeef\n', 'lambda': u'\nLambdas\n*******\n\n lambda_form ::= "lambda" [parameter_list]: expression\n old_lambda_form ::= "lambda" [parameter_list]: old_expression\n\nLambda forms (lambda expressions) have the same syntactic position as\nexpressions. They are a shorthand to create anonymous functions; the\nexpression ``lambda arguments: expression`` yields a function object.\nThe unnamed object behaves like a function object defined with\n\n def name(arguments):\n return expression\n\nSee section *Function definitions* for the syntax of parameter lists.\nNote that functions created with lambda forms cannot contain\nstatements.\n', 'lists': u'\nList displays\n*************\n\nA list display is a possibly empty series of expressions enclosed in\nsquare brackets:\n\n list_display ::= "[" [expression_list | list_comprehension] "]"\n list_comprehension ::= expression list_for\n list_for ::= "for" target_list "in" old_expression_list [list_iter]\n old_expression_list ::= old_expression [("," old_expression)+ [","]]\n old_expression ::= or_test | old_lambda_form\n list_iter ::= list_for | list_if\n list_if ::= "if" old_expression [list_iter]\n\nA list display yields a new list object. Its contents are specified\nby providing either a list of expressions or a list comprehension.\nWhen a comma-separated list of expressions is supplied, its elements\nare evaluated from left to right and placed into the list object in\nthat order. When a list comprehension is supplied, it consists of a\nsingle expression followed by at least one ``for`` clause and zero or\nmore ``for`` or ``if`` clauses. In this case, the elements of the new\nlist are those that would be produced by considering each of the\n``for`` or ``if`` clauses a block, nesting from left to right, and\nevaluating the expression to produce a list element each time the\ninnermost block is reached [1].\n', - 'naming': u"\nNaming and binding\n******************\n\n*Names* refer to objects. Names are introduced by name binding\noperations. Each occurrence of a name in the program text refers to\nthe *binding* of that name established in the innermost function block\ncontaining the use.\n\nA *block* is a piece of Python program text that is executed as a\nunit. The following are blocks: a module, a function body, and a class\ndefinition. Each command typed interactively is a block. A script\nfile (a file given as standard input to the interpreter or specified\non the interpreter command line the first argument) is a code block.\nA script command (a command specified on the interpreter command line\nwith the '**-c**' option) is a code block. The file read by the\nbuilt-in function ``execfile()`` is a code block. The string argument\npassed to the built-in function ``eval()`` and to the ``exec``\nstatement is a code block. The expression read and evaluated by the\nbuilt-in function ``input()`` is a code block.\n\nA code block is executed in an *execution frame*. A frame contains\nsome administrative information (used for debugging) and determines\nwhere and how execution continues after the code block's execution has\ncompleted.\n\nA *scope* defines the visibility of a name within a block. 
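A short sketch of the lambda and list display forms described above:

   add = lambda a, b: a + b           # shorthand for: def add(a, b): return a + b
   print add(2, 3)                    # 5

   print [i * i for i in range(5) if i % 2 == 0]    # [0, 4, 16]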
If a local\nvariable is defined in a block, its scope includes that block. If the\ndefinition occurs in a function block, the scope extends to any blocks\ncontained within the defining one, unless a contained block introduces\na different binding for the name. The scope of names defined in a\nclass block is limited to the class block; it does not extend to the\ncode blocks of methods -- this includes generator expressions since\nthey are implemented using a function scope. This means that the\nfollowing will fail:\n\n class A:\n a = 42\n b = list(a + i for i in range(10))\n\nWhen a name is used in a code block, it is resolved using the nearest\nenclosing scope. The set of all such scopes visible to a code block\nis called the block's *environment*.\n\nIf a name is bound in a block, it is a local variable of that block.\nIf a name is bound at the module level, it is a global variable. (The\nvariables of the module code block are local and global.) If a\nvariable is used in a code block but not defined there, it is a *free\nvariable*.\n\nWhen a name is not found at all, a ``NameError`` exception is raised.\nIf the name refers to a local variable that has not been bound, a\n``UnboundLocalError`` exception is raised. ``UnboundLocalError`` is a\nsubclass of ``NameError``.\n\nThe following constructs bind names: formal parameters to functions,\n``import`` statements, class and function definitions (these bind the\nclass or function name in the defining block), and targets that are\nidentifiers if occurring in an assignment, ``for`` loop header, in the\nsecond position of an ``except`` clause header or after ``as`` in a\n``with`` statement. The ``import`` statement of the form ``from ...\nimport *`` binds all names defined in the imported module, except\nthose beginning with an underscore. This form may only be used at the\nmodule level.\n\nA target occurring in a ``del`` statement is also considered bound for\nthis purpose (though the actual semantics are to unbind the name). It\nis illegal to unbind a name that is referenced by an enclosing scope;\nthe compiler will report a ``SyntaxError``.\n\nEach assignment or import statement occurs within a block defined by a\nclass or function definition or at the module level (the top-level\ncode block).\n\nIf a name binding operation occurs anywhere within a code block, all\nuses of the name within the block are treated as references to the\ncurrent block. This can lead to errors when a name is used within a\nblock before it is bound. This rule is subtle. Python lacks\ndeclarations and allows name binding operations to occur anywhere\nwithin a code block. The local variables of a code block can be\ndetermined by scanning the entire text of the block for name binding\noperations.\n\nIf the global statement occurs within a block, all uses of the name\nspecified in the statement refer to the binding of that name in the\ntop-level namespace. Names are resolved in the top-level namespace by\nsearching the global namespace, i.e. the namespace of the module\ncontaining the code block, and the builtins namespace, the namespace\nof the module ``__builtin__``. The global namespace is searched\nfirst. If the name is not found there, the builtins namespace is\nsearched. 
The global statement must precede all uses of the name.\n\nThe builtins namespace associated with the execution of a code block\nis actually found by looking up the name ``__builtins__`` in its\nglobal namespace; this should be a dictionary or a module (in the\nlatter case the module's dictionary is used). By default, when in the\n``__main__`` module, ``__builtins__`` is the built-in module\n``__builtin__`` (note: no 's'); when in any other module,\n``__builtins__`` is an alias for the dictionary of the ``__builtin__``\nmodule itself. ``__builtins__`` can be set to a user-created\ndictionary to create a weak form of restricted execution.\n\n**CPython implementation detail:** Users should not touch\n``__builtins__``; it is strictly an implementation detail. Users\nwanting to override values in the builtins namespace should ``import``\nthe ``__builtin__`` (no 's') module and modify its attributes\nappropriately.\n\nThe namespace for a module is automatically created the first time a\nmodule is imported. The main module for a script is always called\n``__main__``.\n\nThe global statement has the same scope as a name binding operation in\nthe same block. If the nearest enclosing scope for a free variable\ncontains a global statement, the free variable is treated as a global.\n\nA class definition is an executable statement that may use and define\nnames. These references follow the normal rules for name resolution.\nThe namespace of the class definition becomes the attribute dictionary\nof the class. Names defined at the class scope are not visible in\nmethods.\n\n\nInteraction with dynamic features\n=================================\n\nThere are several cases where Python statements are illegal when used\nin conjunction with nested scopes that contain free variables.\n\nIf a variable is referenced in an enclosing scope, it is illegal to\ndelete the name. An error will be reported at compile time.\n\nIf the wild card form of import --- ``import *`` --- is used in a\nfunction and the function contains or is a nested block with free\nvariables, the compiler will raise a ``SyntaxError``.\n\nIf ``exec`` is used in a function and the function contains or is a\nnested block with free variables, the compiler will raise a\n``SyntaxError`` unless the exec explicitly specifies the local\nnamespace for the ``exec``. (In other words, ``exec obj`` would be\nillegal, but ``exec obj in ns`` would be legal.)\n\nThe ``eval()``, ``execfile()``, and ``input()`` functions and the\n``exec`` statement do not have access to the full environment for\nresolving names. Names may be resolved in the local and global\nnamespaces of the caller. Free variables are not resolved in the\nnearest enclosing namespace, but in the global namespace. [1] The\n``exec`` statement and the ``eval()`` and ``execfile()`` functions\nhave optional arguments to override the global and local namespace.\nIf only one namespace is specified, it is used for both.\n", + 'naming': u"\nNaming and binding\n******************\n\n*Names* refer to objects. Names are introduced by name binding\noperations. Each occurrence of a name in the program text refers to\nthe *binding* of that name established in the innermost function block\ncontaining the use.\n\nA *block* is a piece of Python program text that is executed as a\nunit. The following are blocks: a module, a function body, and a class\ndefinition. Each command typed interactively is a block. 
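Two of the binding rules above, condensed into a deliberately broken sketch; defining these is harmless, and the errors only appear when the function or method is actually called:

   x = "global"

   def shadow():
       print x            # UnboundLocalError when shadow() is called: the assignment
       x = "local"        # on the next line makes ``x`` local to the whole block

   class A:
       a = 42
       def method(self):
           return a       # NameError when called: class-scope names are not
                          # visible inside methods; use ``A.a`` or ``self.a``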
A script\nfile (a file given as standard input to the interpreter or specified\non the interpreter command line the first argument) is a code block.\nA script command (a command specified on the interpreter command line\nwith the '**-c**' option) is a code block. The file read by the\nbuilt-in function ``execfile()`` is a code block. The string argument\npassed to the built-in function ``eval()`` and to the ``exec``\nstatement is a code block. The expression read and evaluated by the\nbuilt-in function ``input()`` is a code block.\n\nA code block is executed in an *execution frame*. A frame contains\nsome administrative information (used for debugging) and determines\nwhere and how execution continues after the code block's execution has\ncompleted.\n\nA *scope* defines the visibility of a name within a block. If a local\nvariable is defined in a block, its scope includes that block. If the\ndefinition occurs in a function block, the scope extends to any blocks\ncontained within the defining one, unless a contained block introduces\na different binding for the name. The scope of names defined in a\nclass block is limited to the class block; it does not extend to the\ncode blocks of methods -- this includes generator expressions since\nthey are implemented using a function scope. This means that the\nfollowing will fail:\n\n class A:\n a = 42\n b = list(a + i for i in range(10))\n\nWhen a name is used in a code block, it is resolved using the nearest\nenclosing scope. The set of all such scopes visible to a code block\nis called the block's *environment*.\n\nIf a name is bound in a block, it is a local variable of that block.\nIf a name is bound at the module level, it is a global variable. (The\nvariables of the module code block are local and global.) If a\nvariable is used in a code block but not defined there, it is a *free\nvariable*.\n\nWhen a name is not found at all, a ``NameError`` exception is raised.\nIf the name refers to a local variable that has not been bound, a\n``UnboundLocalError`` exception is raised. ``UnboundLocalError`` is a\nsubclass of ``NameError``.\n\nThe following constructs bind names: formal parameters to functions,\n``import`` statements, class and function definitions (these bind the\nclass or function name in the defining block), and targets that are\nidentifiers if occurring in an assignment, ``for`` loop header, in the\nsecond position of an ``except`` clause header or after ``as`` in a\n``with`` statement. The ``import`` statement of the form ``from ...\nimport *`` binds all names defined in the imported module, except\nthose beginning with an underscore. This form may only be used at the\nmodule level.\n\nA target occurring in a ``del`` statement is also considered bound for\nthis purpose (though the actual semantics are to unbind the name). It\nis illegal to unbind a name that is referenced by an enclosing scope;\nthe compiler will report a ``SyntaxError``.\n\nEach assignment or import statement occurs within a block defined by a\nclass or function definition or at the module level (the top-level\ncode block).\n\nIf a name binding operation occurs anywhere within a code block, all\nuses of the name within the block are treated as references to the\ncurrent block. This can lead to errors when a name is used within a\nblock before it is bound. This rule is subtle. Python lacks\ndeclarations and allows name binding operations to occur anywhere\nwithin a code block. 
The local variables of a code block can be\ndetermined by scanning the entire text of the block for name binding\noperations.\n\nIf the global statement occurs within a block, all uses of the name\nspecified in the statement refer to the binding of that name in the\ntop-level namespace. Names are resolved in the top-level namespace by\nsearching the global namespace, i.e. the namespace of the module\ncontaining the code block, and the builtins namespace, the namespace\nof the module ``__builtin__``. The global namespace is searched\nfirst. If the name is not found there, the builtins namespace is\nsearched. The global statement must precede all uses of the name.\n\nThe builtins namespace associated with the execution of a code block\nis actually found by looking up the name ``__builtins__`` in its\nglobal namespace; this should be a dictionary or a module (in the\nlatter case the module's dictionary is used). By default, when in the\n``__main__`` module, ``__builtins__`` is the built-in module\n``__builtin__`` (note: no 's'); when in any other module,\n``__builtins__`` is an alias for the dictionary of the ``__builtin__``\nmodule itself. ``__builtins__`` can be set to a user-created\ndictionary to create a weak form of restricted execution.\n\n**CPython implementation detail:** Users should not touch\n``__builtins__``; it is strictly an implementation detail. Users\nwanting to override values in the builtins namespace should ``import``\nthe ``__builtin__`` (no 's') module and modify its attributes\nappropriately.\n\nThe namespace for a module is automatically created the first time a\nmodule is imported. The main module for a script is always called\n``__main__``.\n\nThe ``global`` statement has the same scope as a name binding\noperation in the same block. If the nearest enclosing scope for a\nfree variable contains a global statement, the free variable is\ntreated as a global.\n\nA class definition is an executable statement that may use and define\nnames. These references follow the normal rules for name resolution.\nThe namespace of the class definition becomes the attribute dictionary\nof the class. Names defined at the class scope are not visible in\nmethods.\n\n\nInteraction with dynamic features\n=================================\n\nThere are several cases where Python statements are illegal when used\nin conjunction with nested scopes that contain free variables.\n\nIf a variable is referenced in an enclosing scope, it is illegal to\ndelete the name. An error will be reported at compile time.\n\nIf the wild card form of import --- ``import *`` --- is used in a\nfunction and the function contains or is a nested block with free\nvariables, the compiler will raise a ``SyntaxError``.\n\nIf ``exec`` is used in a function and the function contains or is a\nnested block with free variables, the compiler will raise a\n``SyntaxError`` unless the exec explicitly specifies the local\nnamespace for the ``exec``. (In other words, ``exec obj`` would be\nillegal, but ``exec obj in ns`` would be legal.)\n\nThe ``eval()``, ``execfile()``, and ``input()`` functions and the\n``exec`` statement do not have access to the full environment for\nresolving names. Names may be resolved in the local and global\nnamespaces of the caller. Free variables are not resolved in the\nnearest enclosing namespace, but in the global namespace. 
[1] The\n``exec`` statement and the ``eval()`` and ``execfile()`` functions\nhave optional arguments to override the global and local namespace.\nIf only one namespace is specified, it is used for both.\n", 'numbers': u"\nNumeric literals\n****************\n\nThere are four types of numeric literals: plain integers, long\nintegers, floating point numbers, and imaginary numbers. There are no\ncomplex literals (complex numbers can be formed by adding a real\nnumber and an imaginary number).\n\nNote that numeric literals do not include a sign; a phrase like ``-1``\nis actually an expression composed of the unary operator '``-``' and\nthe literal ``1``.\n", 'numeric-types': u'\nEmulating numeric types\n***********************\n\nThe following methods can be defined to emulate numeric objects.\nMethods corresponding to operations that are not supported by the\nparticular kind of number implemented (e.g., bitwise operations for\nnon-integral numbers) should be left undefined.\n\nobject.__add__(self, other)\nobject.__sub__(self, other)\nobject.__mul__(self, other)\nobject.__floordiv__(self, other)\nobject.__mod__(self, other)\nobject.__divmod__(self, other)\nobject.__pow__(self, other[, modulo])\nobject.__lshift__(self, other)\nobject.__rshift__(self, other)\nobject.__and__(self, other)\nobject.__xor__(self, other)\nobject.__or__(self, other)\n\n These methods are called to implement the binary arithmetic\n operations (``+``, ``-``, ``*``, ``//``, ``%``, ``divmod()``,\n ``pow()``, ``**``, ``<<``, ``>>``, ``&``, ``^``, ``|``). For\n instance, to evaluate the expression ``x + y``, where *x* is an\n instance of a class that has an ``__add__()`` method,\n ``x.__add__(y)`` is called. The ``__divmod__()`` method should be\n the equivalent to using ``__floordiv__()`` and ``__mod__()``; it\n should not be related to ``__truediv__()`` (described below). Note\n that ``__pow__()`` should be defined to accept an optional third\n argument if the ternary version of the built-in ``pow()`` function\n is to be supported.\n\n If one of those methods does not support the operation with the\n supplied arguments, it should return ``NotImplemented``.\n\nobject.__div__(self, other)\nobject.__truediv__(self, other)\n\n The division operator (``/``) is implemented by these methods. The\n ``__truediv__()`` method is used when ``__future__.division`` is in\n effect, otherwise ``__div__()`` is used. If only one of these two\n methods is defined, the object will not support division in the\n alternate context; ``TypeError`` will be raised instead.\n\nobject.__radd__(self, other)\nobject.__rsub__(self, other)\nobject.__rmul__(self, other)\nobject.__rdiv__(self, other)\nobject.__rtruediv__(self, other)\nobject.__rfloordiv__(self, other)\nobject.__rmod__(self, other)\nobject.__rdivmod__(self, other)\nobject.__rpow__(self, other)\nobject.__rlshift__(self, other)\nobject.__rrshift__(self, other)\nobject.__rand__(self, other)\nobject.__rxor__(self, other)\nobject.__ror__(self, other)\n\n These methods are called to implement the binary arithmetic\n operations (``+``, ``-``, ``*``, ``/``, ``%``, ``divmod()``,\n ``pow()``, ``**``, ``<<``, ``>>``, ``&``, ``^``, ``|``) with\n reflected (swapped) operands. These functions are only called if\n the left operand does not support the corresponding operation and\n the operands are of different types. 
[2] For instance, to evaluate\n the expression ``x - y``, where *y* is an instance of a class that\n has an ``__rsub__()`` method, ``y.__rsub__(x)`` is called if\n ``x.__sub__(y)`` returns *NotImplemented*.\n\n Note that ternary ``pow()`` will not try calling ``__rpow__()``\n (the coercion rules would become too complicated).\n\n Note: If the right operand\'s type is a subclass of the left operand\'s\n type and that subclass provides the reflected method for the\n operation, this method will be called before the left operand\'s\n non-reflected method. This behavior allows subclasses to\n override their ancestors\' operations.\n\nobject.__iadd__(self, other)\nobject.__isub__(self, other)\nobject.__imul__(self, other)\nobject.__idiv__(self, other)\nobject.__itruediv__(self, other)\nobject.__ifloordiv__(self, other)\nobject.__imod__(self, other)\nobject.__ipow__(self, other[, modulo])\nobject.__ilshift__(self, other)\nobject.__irshift__(self, other)\nobject.__iand__(self, other)\nobject.__ixor__(self, other)\nobject.__ior__(self, other)\n\n These methods are called to implement the augmented arithmetic\n assignments (``+=``, ``-=``, ``*=``, ``/=``, ``//=``, ``%=``,\n ``**=``, ``<<=``, ``>>=``, ``&=``, ``^=``, ``|=``). These methods\n should attempt to do the operation in-place (modifying *self*) and\n return the result (which could be, but does not have to be,\n *self*). If a specific method is not defined, the augmented\n assignment falls back to the normal methods. For instance, to\n execute the statement ``x += y``, where *x* is an instance of a\n class that has an ``__iadd__()`` method, ``x.__iadd__(y)`` is\n called. If *x* is an instance of a class that does not define a\n ``__iadd__()`` method, ``x.__add__(y)`` and ``y.__radd__(x)`` are\n considered, as with the evaluation of ``x + y``.\n\nobject.__neg__(self)\nobject.__pos__(self)\nobject.__abs__(self)\nobject.__invert__(self)\n\n Called to implement the unary arithmetic operations (``-``, ``+``,\n ``abs()`` and ``~``).\n\nobject.__complex__(self)\nobject.__int__(self)\nobject.__long__(self)\nobject.__float__(self)\n\n Called to implement the built-in functions ``complex()``,\n ``int()``, ``long()``, and ``float()``. Should return a value of\n the appropriate type.\n\nobject.__oct__(self)\nobject.__hex__(self)\n\n Called to implement the built-in functions ``oct()`` and ``hex()``.\n Should return a string value.\n\nobject.__index__(self)\n\n Called to implement ``operator.index()``. Also called whenever\n Python needs an integer object (such as in slicing). Must return\n an integer (int or long).\n\n New in version 2.5.\n\nobject.__coerce__(self, other)\n\n Called to implement "mixed-mode" numeric arithmetic. Should either\n return a 2-tuple containing *self* and *other* converted to a\n common numeric type, or ``None`` if conversion is impossible. When\n the common type would be the type of ``other``, it is sufficient to\n return ``None``, since the interpreter will also ask the other\n object to attempt a coercion (but sometimes, if the implementation\n of the other type cannot be changed, it is useful to do the\n conversion to the other type here). A return value of\n ``NotImplemented`` is equivalent to returning ``None``.\n', - 'objects': u'\nObjects, values and types\n*************************\n\n*Objects* are Python\'s abstraction for data. All data in a Python\nprogram is represented by objects or by relations between objects. 
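A minimal sketch of the binary and reflected protocol described above; ``Meters`` is an invented example class, not part of any library:

   class Meters(object):
       def __init__(self, n):
           self.n = n
       def __add__(self, other):
           if isinstance(other, Meters):
               return Meters(self.n + other.n)
           if isinstance(other, (int, long, float)):
               return Meters(self.n + other)
           return NotImplemented              # let the other operand have a try
       __radd__ = __add__                     # reflected form handles 4 + Meters(1)
       def __repr__(self):
           return "Meters(%r)" % self.n

   print Meters(2) + Meters(3)    # Meters(5)
   print 4 + Meters(1)            # Meters(5): int.__add__ fails, Meters.__radd__ runs

Returning ``NotImplemented`` rather than raising is what allows the interpreter to fall back to the other operand's method.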
(In\na sense, and in conformance to Von Neumann\'s model of a "stored\nprogram computer," code is also represented by objects.)\n\nEvery object has an identity, a type and a value. An object\'s\n*identity* never changes once it has been created; you may think of it\nas the object\'s address in memory. The \'``is``\' operator compares the\nidentity of two objects; the ``id()`` function returns an integer\nrepresenting its identity (currently implemented as its address). An\nobject\'s *type* is also unchangeable. [1] An object\'s type determines\nthe operations that the object supports (e.g., "does it have a\nlength?") and also defines the possible values for objects of that\ntype. The ``type()`` function returns an object\'s type (which is an\nobject itself). The *value* of some objects can change. Objects\nwhose value can change are said to be *mutable*; objects whose value\nis unchangeable once they are created are called *immutable*. (The\nvalue of an immutable container object that contains a reference to a\nmutable object can change when the latter\'s value is changed; however\nthe container is still considered immutable, because the collection of\nobjects it contains cannot be changed. So, immutability is not\nstrictly the same as having an unchangeable value, it is more subtle.)\nAn object\'s mutability is determined by its type; for instance,\nnumbers, strings and tuples are immutable, while dictionaries and\nlists are mutable.\n\nObjects are never explicitly destroyed; however, when they become\nunreachable they may be garbage-collected. An implementation is\nallowed to postpone garbage collection or omit it altogether --- it is\na matter of implementation quality how garbage collection is\nimplemented, as long as no objects are collected that are still\nreachable.\n\n**CPython implementation detail:** CPython currently uses a reference-\ncounting scheme with (optional) delayed detection of cyclically linked\ngarbage, which collects most objects as soon as they become\nunreachable, but is not guaranteed to collect garbage containing\ncircular references. See the documentation of the ``gc`` module for\ninformation on controlling the collection of cyclic garbage. Other\nimplementations act differently and CPython may change.\n\nNote that the use of the implementation\'s tracing or debugging\nfacilities may keep objects alive that would normally be collectable.\nAlso note that catching an exception with a \'``try``...``except``\'\nstatement may keep objects alive.\n\nSome objects contain references to "external" resources such as open\nfiles or windows. It is understood that these resources are freed\nwhen the object is garbage-collected, but since garbage collection is\nnot guaranteed to happen, such objects also provide an explicit way to\nrelease the external resource, usually a ``close()`` method. Programs\nare strongly recommended to explicitly close such objects. The\n\'``try``...``finally``\' statement provides a convenient way to do\nthis.\n\nSome objects contain references to other objects; these are called\n*containers*. Examples of containers are tuples, lists and\ndictionaries. The references are part of a container\'s value. In\nmost cases, when we talk about the value of a container, we imply the\nvalues, not the identities of the contained objects; however, when we\ntalk about the mutability of a container, only the identities of the\nimmediately contained objects are implied. 
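The identity and mutability points above, condensed into a short runnable fragment (plain built-in types only):

   a = 1; b = 1
   print a is b        # may be True or False: sharing of small ints is an implementation detail
   c = []; d = []
   print c is d        # False: two distinct, newly created lists
   print c == d        # True:  equal values, different identities

   t = (1, [2, 3])
   t[1].append(4)      # the tuple object itself is unchanged ...
   print t             # (1, [2, 3, 4]) ... but its value changes through the mutable element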
So, if an immutable\ncontainer (like a tuple) contains a reference to a mutable object, its\nvalue changes if that mutable object is changed.\n\nTypes affect almost all aspects of object behavior. Even the\nimportance of object identity is affected in some sense: for immutable\ntypes, operations that compute new values may actually return a\nreference to any existing object with the same type and value, while\nfor mutable objects this is not allowed. E.g., after ``a = 1; b =\n1``, ``a`` and ``b`` may or may not refer to the same object with the\nvalue one, depending on the implementation, but after ``c = []; d =\n[]``, ``c`` and ``d`` are guaranteed to refer to two different,\nunique, newly created empty lists. (Note that ``c = d = []`` assigns\nthe same object to both ``c`` and ``d``.)\n', - 'operator-summary': u'\nSummary\n*******\n\nThe following table summarizes the operator precedences in Python,\nfrom lowest precedence (least binding) to highest precedence (most\nbinding). Operators in the same box have the same precedence. Unless\nthe syntax is explicitly given, operators are binary. Operators in\nthe same box group left to right (except for comparisons, including\ntests, which all have the same precedence and chain from left to right\n--- see section *Comparisons* --- and exponentiation, which groups\nfrom right to left).\n\n+-------------------------------------------------+---------------------------------------+\n| Operator | Description |\n+=================================================+=======================================+\n| ``lambda`` | Lambda expression |\n+-------------------------------------------------+---------------------------------------+\n| ``if`` -- ``else`` | Conditional expression |\n+-------------------------------------------------+---------------------------------------+\n| ``or`` | Boolean OR |\n+-------------------------------------------------+---------------------------------------+\n| ``and`` | Boolean AND |\n+-------------------------------------------------+---------------------------------------+\n| ``not`` *x* | Boolean NOT |\n+-------------------------------------------------+---------------------------------------+\n| ``in``, ``not`` ``in``, ``is``, ``is not``, | Comparisons, including membership |\n| ``<``, ``<=``, ``>``, ``>=``, ``<>``, ``!=``, | tests and identity tests, |\n| ``==`` | |\n+-------------------------------------------------+---------------------------------------+\n| ``|`` | Bitwise OR |\n+-------------------------------------------------+---------------------------------------+\n| ``^`` | Bitwise XOR |\n+-------------------------------------------------+---------------------------------------+\n| ``&`` | Bitwise AND |\n+-------------------------------------------------+---------------------------------------+\n| ``<<``, ``>>`` | Shifts |\n+-------------------------------------------------+---------------------------------------+\n| ``+``, ``-`` | Addition and subtraction |\n+-------------------------------------------------+---------------------------------------+\n| ``*``, ``/``, ``//``, ``%`` | Multiplication, division, remainder |\n+-------------------------------------------------+---------------------------------------+\n| ``+x``, ``-x``, ``~x`` | Positive, negative, bitwise NOT |\n+-------------------------------------------------+---------------------------------------+\n| ``**`` | Exponentiation [8] |\n+-------------------------------------------------+---------------------------------------+\n| ``x[index]``, 
``x[index:index]``, | Subscription, slicing, call, |\n| ``x(arguments...)``, ``x.attribute`` | attribute reference |\n+-------------------------------------------------+---------------------------------------+\n| ``(expressions...)``, ``[expressions...]``, | Binding or tuple display, list |\n| ``{key:datum...}``, ```expressions...``` | display, dictionary display, string |\n| | conversion |\n+-------------------------------------------------+---------------------------------------+\n\n-[ Footnotes ]-\n\n[1] In Python 2.3 and later releases, a list comprehension "leaks" the\n control variables of each ``for`` it contains into the containing\n scope. However, this behavior is deprecated, and relying on it\n will not work in Python 3.0\n\n[2] While ``abs(x%y) < abs(y)`` is true mathematically, for floats it\n may not be true numerically due to roundoff. For example, and\n assuming a platform on which a Python float is an IEEE 754 double-\n precision number, in order that ``-1e-100 % 1e100`` have the same\n sign as ``1e100``, the computed result is ``-1e-100 + 1e100``,\n which is numerically exactly equal to ``1e100``. Function\n ``fmod()`` in the ``math`` module returns a result whose sign\n matches the sign of the first argument instead, and so returns\n ``-1e-100`` in this case. Which approach is more appropriate\n depends on the application.\n\n[3] If x is very close to an exact integer multiple of y, it\'s\n possible for ``floor(x/y)`` to be one larger than ``(x-x%y)/y``\n due to rounding. In such cases, Python returns the latter result,\n in order to preserve that ``divmod(x,y)[0] * y + x % y`` be very\n close to ``x``.\n\n[4] While comparisons between unicode strings make sense at the byte\n level, they may be counter-intuitive to users. For example, the\n strings ``u"\\u00C7"`` and ``u"\\u0043\\u0327"`` compare differently,\n even though they both represent the same unicode character (LATIN\n CAPITAL LETTER C WITH CEDILLA). To compare strings in a human\n recognizable way, compare using ``unicodedata.normalize()``.\n\n[5] The implementation computes this efficiently, without constructing\n lists or sorting.\n\n[6] Earlier versions of Python used lexicographic comparison of the\n sorted (key, value) lists, but this was very expensive for the\n common case of comparing for equality. An even earlier version of\n Python compared dictionaries by identity only, but this caused\n surprises because people expected to be able to test a dictionary\n for emptiness by comparing it to ``{}``.\n\n[7] Due to automatic garbage-collection, free lists, and the dynamic\n nature of descriptors, you may notice seemingly unusual behaviour\n in certain uses of the ``is`` operator, like those involving\n comparisons between instance methods, or constants. Check their\n documentation for more info.\n\n[8] The power operator ``**`` binds less tightly than an arithmetic or\n bitwise unary operator on its right, that is, ``2**-1`` is\n ``0.5``.\n', + 'objects': u'\nObjects, values and types\n*************************\n\n*Objects* are Python\'s abstraction for data. All data in a Python\nprogram is represented by objects or by relations between objects. (In\na sense, and in conformance to Von Neumann\'s model of a "stored\nprogram computer," code is also represented by objects.)\n\nEvery object has an identity, a type and a value. An object\'s\n*identity* never changes once it has been created; you may think of it\nas the object\'s address in memory. 
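A few precedence cases from the table and footnotes above, spelled out:

   print 2 ** -1       # 0.5: ``**`` binds less tightly than the unary minus on its right
   print -1 ** 2       # -1:  parsed as -(1 ** 2), since ``**`` binds more tightly
                       #      than the unary minus on its left
   print not 1 == 2    # True: comparisons bind more tightly than ``not``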
The \'``is``\' operator compares the\nidentity of two objects; the ``id()`` function returns an integer\nrepresenting its identity (currently implemented as its address). An\nobject\'s *type* is also unchangeable. [1] An object\'s type determines\nthe operations that the object supports (e.g., "does it have a\nlength?") and also defines the possible values for objects of that\ntype. The ``type()`` function returns an object\'s type (which is an\nobject itself). The *value* of some objects can change. Objects\nwhose value can change are said to be *mutable*; objects whose value\nis unchangeable once they are created are called *immutable*. (The\nvalue of an immutable container object that contains a reference to a\nmutable object can change when the latter\'s value is changed; however\nthe container is still considered immutable, because the collection of\nobjects it contains cannot be changed. So, immutability is not\nstrictly the same as having an unchangeable value, it is more subtle.)\nAn object\'s mutability is determined by its type; for instance,\nnumbers, strings and tuples are immutable, while dictionaries and\nlists are mutable.\n\nObjects are never explicitly destroyed; however, when they become\nunreachable they may be garbage-collected. An implementation is\nallowed to postpone garbage collection or omit it altogether --- it is\na matter of implementation quality how garbage collection is\nimplemented, as long as no objects are collected that are still\nreachable.\n\n**CPython implementation detail:** CPython currently uses a reference-\ncounting scheme with (optional) delayed detection of cyclically linked\ngarbage, which collects most objects as soon as they become\nunreachable, but is not guaranteed to collect garbage containing\ncircular references. See the documentation of the ``gc`` module for\ninformation on controlling the collection of cyclic garbage. Other\nimplementations act differently and CPython may change. Do not depend\non immediate finalization of objects when they become unreachable (ex:\nalways close files).\n\nNote that the use of the implementation\'s tracing or debugging\nfacilities may keep objects alive that would normally be collectable.\nAlso note that catching an exception with a \'``try``...``except``\'\nstatement may keep objects alive.\n\nSome objects contain references to "external" resources such as open\nfiles or windows. It is understood that these resources are freed\nwhen the object is garbage-collected, but since garbage collection is\nnot guaranteed to happen, such objects also provide an explicit way to\nrelease the external resource, usually a ``close()`` method. Programs\nare strongly recommended to explicitly close such objects. The\n\'``try``...``finally``\' statement provides a convenient way to do\nthis.\n\nSome objects contain references to other objects; these are called\n*containers*. Examples of containers are tuples, lists and\ndictionaries. The references are part of a container\'s value. In\nmost cases, when we talk about the value of a container, we imply the\nvalues, not the identities of the contained objects; however, when we\ntalk about the mutability of a container, only the identities of the\nimmediately contained objects are implied. So, if an immutable\ncontainer (like a tuple) contains a reference to a mutable object, its\nvalue changes if that mutable object is changed.\n\nTypes affect almost all aspects of object behavior. 
Even the\nimportance of object identity is affected in some sense: for immutable\ntypes, operations that compute new values may actually return a\nreference to any existing object with the same type and value, while\nfor mutable objects this is not allowed. E.g., after ``a = 1; b =\n1``, ``a`` and ``b`` may or may not refer to the same object with the\nvalue one, depending on the implementation, but after ``c = []; d =\n[]``, ``c`` and ``d`` are guaranteed to refer to two different,\nunique, newly created empty lists. (Note that ``c = d = []`` assigns\nthe same object to both ``c`` and ``d``.)\n', + 'operator-summary': u'\nSummary\n*******\n\nThe following table summarizes the operator precedences in Python,\nfrom lowest precedence (least binding) to highest precedence (most\nbinding). Operators in the same box have the same precedence. Unless\nthe syntax is explicitly given, operators are binary. Operators in\nthe same box group left to right (except for comparisons, including\ntests, which all have the same precedence and chain from left to right\n--- see section *Comparisons* --- and exponentiation, which groups\nfrom right to left).\n\n+-------------------------------------------------+---------------------------------------+\n| Operator | Description |\n+=================================================+=======================================+\n| ``lambda`` | Lambda expression |\n+-------------------------------------------------+---------------------------------------+\n| ``if`` -- ``else`` | Conditional expression |\n+-------------------------------------------------+---------------------------------------+\n| ``or`` | Boolean OR |\n+-------------------------------------------------+---------------------------------------+\n| ``and`` | Boolean AND |\n+-------------------------------------------------+---------------------------------------+\n| ``not`` *x* | Boolean NOT |\n+-------------------------------------------------+---------------------------------------+\n| ``in``, ``not`` ``in``, ``is``, ``is not``, | Comparisons, including membership |\n| ``<``, ``<=``, ``>``, ``>=``, ``<>``, ``!=``, | tests and identity tests, |\n| ``==`` | |\n+-------------------------------------------------+---------------------------------------+\n| ``|`` | Bitwise OR |\n+-------------------------------------------------+---------------------------------------+\n| ``^`` | Bitwise XOR |\n+-------------------------------------------------+---------------------------------------+\n| ``&`` | Bitwise AND |\n+-------------------------------------------------+---------------------------------------+\n| ``<<``, ``>>`` | Shifts |\n+-------------------------------------------------+---------------------------------------+\n| ``+``, ``-`` | Addition and subtraction |\n+-------------------------------------------------+---------------------------------------+\n| ``*``, ``/``, ``//``, ``%`` | Multiplication, division, remainder |\n| | [8] |\n+-------------------------------------------------+---------------------------------------+\n| ``+x``, ``-x``, ``~x`` | Positive, negative, bitwise NOT |\n+-------------------------------------------------+---------------------------------------+\n| ``**`` | Exponentiation [9] |\n+-------------------------------------------------+---------------------------------------+\n| ``x[index]``, ``x[index:index]``, | Subscription, slicing, call, |\n| ``x(arguments...)``, ``x.attribute`` | attribute reference 
|\n+-------------------------------------------------+---------------------------------------+\n| ``(expressions...)``, ``[expressions...]``, | Binding or tuple display, list |\n| ``{key:datum...}``, ```expressions...``` | display, dictionary display, string |\n| | conversion |\n+-------------------------------------------------+---------------------------------------+\n\n-[ Footnotes ]-\n\n[1] In Python 2.3 and later releases, a list comprehension "leaks" the\n control variables of each ``for`` it contains into the containing\n scope. However, this behavior is deprecated, and relying on it\n will not work in Python 3.0\n\n[2] While ``abs(x%y) < abs(y)`` is true mathematically, for floats it\n may not be true numerically due to roundoff. For example, and\n assuming a platform on which a Python float is an IEEE 754 double-\n precision number, in order that ``-1e-100 % 1e100`` have the same\n sign as ``1e100``, the computed result is ``-1e-100 + 1e100``,\n which is numerically exactly equal to ``1e100``. The function\n ``math.fmod()`` returns a result whose sign matches the sign of\n the first argument instead, and so returns ``-1e-100`` in this\n case. Which approach is more appropriate depends on the\n application.\n\n[3] If x is very close to an exact integer multiple of y, it\'s\n possible for ``floor(x/y)`` to be one larger than ``(x-x%y)/y``\n due to rounding. In such cases, Python returns the latter result,\n in order to preserve that ``divmod(x,y)[0] * y + x % y`` be very\n close to ``x``.\n\n[4] While comparisons between unicode strings make sense at the byte\n level, they may be counter-intuitive to users. For example, the\n strings ``u"\\u00C7"`` and ``u"\\u0043\\u0327"`` compare differently,\n even though they both represent the same unicode character (LATIN\n CAPITAL LETTER C WITH CEDILLA). To compare strings in a human\n recognizable way, compare using ``unicodedata.normalize()``.\n\n[5] The implementation computes this efficiently, without constructing\n lists or sorting.\n\n[6] Earlier versions of Python used lexicographic comparison of the\n sorted (key, value) lists, but this was very expensive for the\n common case of comparing for equality. An even earlier version of\n Python compared dictionaries by identity only, but this caused\n surprises because people expected to be able to test a dictionary\n for emptiness by comparing it to ``{}``.\n\n[7] Due to automatic garbage-collection, free lists, and the dynamic\n nature of descriptors, you may notice seemingly unusual behaviour\n in certain uses of the ``is`` operator, like those involving\n comparisons between instance methods, or constants. Check their\n documentation for more info.\n\n[8] The ``%`` operator is also used for string formatting; the same\n precedence applies.\n\n[9] The power operator ``**`` binds less tightly than an arithmetic or\n bitwise unary operator on its right, that is, ``2**-1`` is\n ``0.5``.\n', 'pass': u'\nThe ``pass`` statement\n**********************\n\n pass_stmt ::= "pass"\n\n``pass`` is a null operation --- when it is executed, nothing happens.\nIt is useful as a placeholder when a statement is required\nsyntactically, but no code needs to be executed, for example:\n\n def f(arg): pass # a function that does nothing (yet)\n\n class C: pass # a class with no methods (yet)\n', 'power': u'\nThe power operator\n******************\n\nThe power operator binds more tightly than unary operators on its\nleft; it binds less tightly than unary operators on its right. 
The\nsyntax is:\n\n power ::= primary ["**" u_expr]\n\nThus, in an unparenthesized sequence of power and unary operators, the\noperators are evaluated from right to left (this does not constrain\nthe evaluation order for the operands): ``-1**2`` results in ``-1``.\n\nThe power operator has the same semantics as the built-in ``pow()``\nfunction, when called with two arguments: it yields its left argument\nraised to the power of its right argument. The numeric arguments are\nfirst converted to a common type. The result type is that of the\narguments after coercion.\n\nWith mixed operand types, the coercion rules for binary arithmetic\noperators apply. For int and long int operands, the result has the\nsame type as the operands (after coercion) unless the second argument\nis negative; in that case, all arguments are converted to float and a\nfloat result is delivered. For example, ``10**2`` returns ``100``, but\n``10**-2`` returns ``0.01``. (This last feature was added in Python\n2.2. In Python 2.1 and before, if both arguments were of integer types\nand the second argument was negative, an exception was raised).\n\nRaising ``0.0`` to a negative power results in a\n``ZeroDivisionError``. Raising a negative number to a fractional power\nresults in a ``ValueError``.\n', 'print': u'\nThe ``print`` statement\n***********************\n\n print_stmt ::= "print" ([expression ("," expression)* [","]]\n | ">>" expression [("," expression)+ [","]])\n\n``print`` evaluates each expression in turn and writes the resulting\nobject to standard output (see below). If an object is not a string,\nit is first converted to a string using the rules for string\nconversions. The (resulting or original) string is then written. A\nspace is written before each object is (converted and) written, unless\nthe output system believes it is positioned at the beginning of a\nline. This is the case (1) when no characters have yet been written\nto standard output, (2) when the last character written to standard\noutput is a whitespace character except ``\' \'``, or (3) when the last\nwrite operation on standard output was not a ``print`` statement. (In\nsome cases it may be functional to write an empty string to standard\noutput for this reason.)\n\nNote: Objects which act like file objects but which are not the built-in\n file objects often do not properly emulate this aspect of the file\n object\'s behavior, so it is best not to rely on this.\n\nA ``\'\\n\'`` character is written at the end, unless the ``print``\nstatement ends with a comma. This is the only action if the statement\ncontains just the keyword ``print``.\n\nStandard output is defined as the file object named ``stdout`` in the\nbuilt-in module ``sys``. If no such object exists, or if it does not\nhave a ``write()`` method, a ``RuntimeError`` exception is raised.\n\n``print`` also has an extended form, defined by the second portion of\nthe syntax described above. This form is sometimes referred to as\n"``print`` chevron." In this form, the first expression after the\n``>>`` must evaluate to a "file-like" object, specifically an object\nthat has a ``write()`` method as described above. With this extended\nform, the subsequent expressions are printed to this file object. 
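The chevron form and the trailing comma described above, as a quick sketch:

   import sys

   print >> sys.stderr, "warning:", 42    # extended form: writes to sys.stderr
   print "one",                           # trailing comma: no newline is appended
   print "line"                           # continues on the same output line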
If\nthe first expression evaluates to ``None``, then ``sys.stdout`` is\nused as the file for output.\n', @@ -63,21 +63,21 @@ 'shifting': u'\nShifting operations\n*******************\n\nThe shifting operations have lower priority than the arithmetic\noperations:\n\n shift_expr ::= a_expr | shift_expr ( "<<" | ">>" ) a_expr\n\nThese operators accept plain or long integers as arguments. The\narguments are converted to a common type. They shift the first\nargument to the left or right by the number of bits given by the\nsecond argument.\n\nA right shift by *n* bits is defined as division by ``pow(2, n)``. A\nleft shift by *n* bits is defined as multiplication with ``pow(2,\nn)``. Negative shift counts raise a ``ValueError`` exception.\n\nNote: In the current implementation, the right-hand operand is required to\n be at most ``sys.maxsize``. If the right-hand operand is larger\n than ``sys.maxsize`` an ``OverflowError`` exception is raised.\n', 'slicings': u'\nSlicings\n********\n\nA slicing selects a range of items in a sequence object (e.g., a\nstring, tuple or list). Slicings may be used as expressions or as\ntargets in assignment or ``del`` statements. The syntax for a\nslicing:\n\n slicing ::= simple_slicing | extended_slicing\n simple_slicing ::= primary "[" short_slice "]"\n extended_slicing ::= primary "[" slice_list "]"\n slice_list ::= slice_item ("," slice_item)* [","]\n slice_item ::= expression | proper_slice | ellipsis\n proper_slice ::= short_slice | long_slice\n short_slice ::= [lower_bound] ":" [upper_bound]\n long_slice ::= short_slice ":" [stride]\n lower_bound ::= expression\n upper_bound ::= expression\n stride ::= expression\n ellipsis ::= "..."\n\nThere is ambiguity in the formal syntax here: anything that looks like\nan expression list also looks like a slice list, so any subscription\ncan be interpreted as a slicing. Rather than further complicating the\nsyntax, this is disambiguated by defining that in this case the\ninterpretation as a subscription takes priority over the\ninterpretation as a slicing (this is the case if the slice list\ncontains no proper slice nor ellipses). Similarly, when the slice\nlist has exactly one short slice and no trailing comma, the\ninterpretation as a simple slicing takes priority over that as an\nextended slicing.\n\nThe semantics for a simple slicing are as follows. The primary must\nevaluate to a sequence object. The lower and upper bound expressions,\nif present, must evaluate to plain integers; defaults are zero and the\n``sys.maxint``, respectively. If either bound is negative, the\nsequence\'s length is added to it. The slicing now selects all items\nwith index *k* such that ``i <= k < j`` where *i* and *j* are the\nspecified lower and upper bounds. This may be an empty sequence. It\nis not an error if *i* or *j* lie outside the range of valid indexes\n(such items don\'t exist so they aren\'t selected).\n\nThe semantics for an extended slicing are as follows. The primary\nmust evaluate to a mapping object, and it is indexed with a key that\nis constructed from the slice list, as follows. If the slice list\ncontains at least one comma, the key is a tuple containing the\nconversion of the slice items; otherwise, the conversion of the lone\nslice item is the key. The conversion of a slice item that is an\nexpression is that expression. The conversion of an ellipsis slice\nitem is the built-in ``Ellipsis`` object. 
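The shifting rules quoted above can be restated with a few concrete values; a small sketch assuming CPython 2 (the exact exception text may differ between versions):

    >>> 1 << 4           # same as 1 * pow(2, 4)
    16
    >>> 100 >> 3         # same as 100 // pow(2, 3)
    12
    >>> -1 >> 3          # right shift is a floor division, so this stays -1
    -1
    >>> 1 << -1
    Traceback (most recent call last):
      ...
    ValueError: negative shift count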
The conversion of a proper\nslice is a slice object (see section *The standard type hierarchy*)\nwhose ``start``, ``stop`` and ``step`` attributes are the values of\nthe expressions given as lower bound, upper bound and stride,\nrespectively, substituting ``None`` for missing expressions.\n', 'specialattrs': u"\nSpecial Attributes\n******************\n\nThe implementation adds a few special read-only attributes to several\nobject types, where they are relevant. Some of these are not reported\nby the ``dir()`` built-in function.\n\nobject.__dict__\n\n A dictionary or other mapping object used to store an object's\n (writable) attributes.\n\nobject.__methods__\n\n Deprecated since version 2.2: Use the built-in function ``dir()``\n to get a list of an object's attributes. This attribute is no\n longer available.\n\nobject.__members__\n\n Deprecated since version 2.2: Use the built-in function ``dir()``\n to get a list of an object's attributes. This attribute is no\n longer available.\n\ninstance.__class__\n\n The class to which a class instance belongs.\n\nclass.__bases__\n\n The tuple of base classes of a class object.\n\nclass.__name__\n\n The name of the class or type.\n\nThe following attributes are only supported by *new-style class*es.\n\nclass.__mro__\n\n This attribute is a tuple of classes that are considered when\n looking for base classes during method resolution.\n\nclass.mro()\n\n This method can be overridden by a metaclass to customize the\n method resolution order for its instances. It is called at class\n instantiation, and its result is stored in ``__mro__``.\n\nclass.__subclasses__()\n\n Each new-style class keeps a list of weak references to its\n immediate subclasses. This method returns a list of all those\n references still alive. Example:\n\n >>> int.__subclasses__()\n []\n\n-[ Footnotes ]-\n\n[1] Additional information on these special methods may be found in\n the Python Reference Manual (*Basic customization*).\n\n[2] As a consequence, the list ``[1, 2]`` is considered equal to\n ``[1.0, 2.0]``, and similarly for tuples.\n\n[3] They must have since the parser can't tell the type of the\n operands.\n\n[4] To format only a tuple you should therefore provide a singleton\n tuple whose only element is the tuple to be formatted.\n\n[5] The advantage of leaving the newline on is that returning an empty\n string is then an unambiguous EOF indication. It is also possible\n (in cases where it might matter, for example, if you want to make\n an exact copy of a file while scanning its lines) to tell whether\n the last line of a file ended in a newline or not (yes this\n happens!).\n", - 'specialnames': u'\nSpecial method names\n********************\n\nA class can implement certain operations that are invoked by special\nsyntax (such as arithmetic operations or subscripting and slicing) by\ndefining methods with special names. This is Python\'s approach to\n*operator overloading*, allowing classes to define their own behavior\nwith respect to language operators. For instance, if a class defines\na method named ``__getitem__()``, and ``x`` is an instance of this\nclass, then ``x[i]`` is roughly equivalent to ``x.__getitem__(i)`` for\nold-style classes and ``type(x).__getitem__(x, i)`` for new-style\nclasses. 
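To make the key-construction rules of the *Slicings* section above concrete, a hypothetical class (``ShowKey`` is an invented name) can simply return whatever key its ``__getitem__()`` receives; this is an added sketch assuming a Python 2 prompt:

    >>> class ShowKey(object):
    ...     def __getitem__(self, key):
    ...         return key                  # report the constructed key
    ...
    >>> s = ShowKey()
    >>> s[1:10:2]                           # a proper slice becomes a slice object
    slice(1, 10, 2)
    >>> s[1:2, ..., 5]                      # a slice list with a comma becomes a tuple
    (slice(1, 2, None), Ellipsis, 5)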
Except where mentioned, attempts to execute an operation\nraise an exception when no appropriate method is defined (typically\n``AttributeError`` or ``TypeError``).\n\nWhen implementing a class that emulates any built-in type, it is\nimportant that the emulation only be implemented to the degree that it\nmakes sense for the object being modelled. For example, some\nsequences may work well with retrieval of individual elements, but\nextracting a slice may not make sense. (One example of this is the\n``NodeList`` interface in the W3C\'s Document Object Model.)\n\n\nBasic customization\n===================\n\nobject.__new__(cls[, ...])\n\n Called to create a new instance of class *cls*. ``__new__()`` is a\n static method (special-cased so you need not declare it as such)\n that takes the class of which an instance was requested as its\n first argument. The remaining arguments are those passed to the\n object constructor expression (the call to the class). The return\n value of ``__new__()`` should be the new object instance (usually\n an instance of *cls*).\n\n Typical implementations create a new instance of the class by\n invoking the superclass\'s ``__new__()`` method using\n ``super(currentclass, cls).__new__(cls[, ...])`` with appropriate\n arguments and then modifying the newly-created instance as\n necessary before returning it.\n\n If ``__new__()`` returns an instance of *cls*, then the new\n instance\'s ``__init__()`` method will be invoked like\n ``__init__(self[, ...])``, where *self* is the new instance and the\n remaining arguments are the same as were passed to ``__new__()``.\n\n If ``__new__()`` does not return an instance of *cls*, then the new\n instance\'s ``__init__()`` method will not be invoked.\n\n ``__new__()`` is intended mainly to allow subclasses of immutable\n types (like int, str, or tuple) to customize instance creation. It\n is also commonly overridden in custom metaclasses in order to\n customize class creation.\n\nobject.__init__(self[, ...])\n\n Called when the instance is created. The arguments are those\n passed to the class constructor expression. If a base class has an\n ``__init__()`` method, the derived class\'s ``__init__()`` method,\n if any, must explicitly call it to ensure proper initialization of\n the base class part of the instance; for example:\n ``BaseClass.__init__(self, [args...])``. As a special constraint\n on constructors, no value may be returned; doing so will cause a\n ``TypeError`` to be raised at runtime.\n\nobject.__del__(self)\n\n Called when the instance is about to be destroyed. This is also\n called a destructor. If a base class has a ``__del__()`` method,\n the derived class\'s ``__del__()`` method, if any, must explicitly\n call it to ensure proper deletion of the base class part of the\n instance. Note that it is possible (though not recommended!) for\n the ``__del__()`` method to postpone destruction of the instance by\n creating a new reference to it. It may then be called at a later\n time when this new reference is deleted. It is not guaranteed that\n ``__del__()`` methods are called for objects that still exist when\n the interpreter exits.\n\n Note: ``del x`` doesn\'t directly call ``x.__del__()`` --- the former\n decrements the reference count for ``x`` by one, and the latter\n is only called when ``x``\'s reference count reaches zero. 
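A minimal sketch of the ``__new__()`` use case mentioned above, customizing creation of an immutable subclass; ``UpperStr`` is an invented name and the example assumes Python 2:

    class UpperStr(str):
        # str is immutable, so the value must be chosen in __new__();
        # by the time __init__() runs the contents are already fixed.
        def __new__(cls, value=""):
            return str.__new__(cls, value.upper())

    s = UpperStr("hello")
    print s, type(s).__name__        # -> HELLO UpperStr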
Some\n common situations that may prevent the reference count of an\n object from going to zero include: circular references between\n objects (e.g., a doubly-linked list or a tree data structure with\n parent and child pointers); a reference to the object on the\n stack frame of a function that caught an exception (the traceback\n stored in ``sys.exc_traceback`` keeps the stack frame alive); or\n a reference to the object on the stack frame that raised an\n unhandled exception in interactive mode (the traceback stored in\n ``sys.last_traceback`` keeps the stack frame alive). The first\n situation can only be remedied by explicitly breaking the cycles;\n the latter two situations can be resolved by storing ``None`` in\n ``sys.exc_traceback`` or ``sys.last_traceback``. Circular\n references which are garbage are detected when the option cycle\n detector is enabled (it\'s on by default), but can only be cleaned\n up if there are no Python-level ``__del__()`` methods involved.\n Refer to the documentation for the ``gc`` module for more\n information about how ``__del__()`` methods are handled by the\n cycle detector, particularly the description of the ``garbage``\n value.\n\n Warning: Due to the precarious circumstances under which ``__del__()``\n methods are invoked, exceptions that occur during their execution\n are ignored, and a warning is printed to ``sys.stderr`` instead.\n Also, when ``__del__()`` is invoked in response to a module being\n deleted (e.g., when execution of the program is done), other\n globals referenced by the ``__del__()`` method may already have\n been deleted or in the process of being torn down (e.g. the\n import machinery shutting down). For this reason, ``__del__()``\n methods should do the absolute minimum needed to maintain\n external invariants. Starting with version 1.5, Python\n guarantees that globals whose name begins with a single\n underscore are deleted from their module before other globals are\n deleted; if no other references to such globals exist, this may\n help in assuring that imported modules are still available at the\n time when the ``__del__()`` method is called.\n\nobject.__repr__(self)\n\n Called by the ``repr()`` built-in function and by string\n conversions (reverse quotes) to compute the "official" string\n representation of an object. If at all possible, this should look\n like a valid Python expression that could be used to recreate an\n object with the same value (given an appropriate environment). If\n this is not possible, a string of the form ``<...some useful\n description...>`` should be returned. The return value must be a\n string object. If a class defines ``__repr__()`` but not\n ``__str__()``, then ``__repr__()`` is also used when an "informal"\n string representation of instances of that class is required.\n\n This is typically used for debugging, so it is important that the\n representation is information-rich and unambiguous.\n\nobject.__str__(self)\n\n Called by the ``str()`` built-in function and by the ``print``\n statement to compute the "informal" string representation of an\n object. This differs from ``__repr__()`` in that it does not have\n to be a valid Python expression: a more convenient or concise\n representation may be used instead. 
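A small illustration of the ``__repr__()``/``__str__()`` split just described; ``Point`` is an invented name and this is only a sketch, not a canonical recipe:

    class Point(object):
        def __init__(self, x, y):
            self.x, self.y = x, y
        def __repr__(self):
            # unambiguous, looks like an expression that recreates the object
            return "Point(%r, %r)" % (self.x, self.y)
        def __str__(self):
            # friendlier form used by print and str()
            return "(%s, %s)" % (self.x, self.y)

    p = Point(1, 2)
    print repr(p)      # -> Point(1, 2)
    print p            # -> (1, 2)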
The return value must be a\n string object.\n\nobject.__lt__(self, other)\nobject.__le__(self, other)\nobject.__eq__(self, other)\nobject.__ne__(self, other)\nobject.__gt__(self, other)\nobject.__ge__(self, other)\n\n New in version 2.1.\n\n These are the so-called "rich comparison" methods, and are called\n for comparison operators in preference to ``__cmp__()`` below. The\n correspondence between operator symbols and method names is as\n follows: ``xy`` call ``x.__ne__(y)``, ``x>y`` calls ``x.__gt__(y)``, and\n ``x>=y`` calls ``x.__ge__(y)``.\n\n A rich comparison method may return the singleton\n ``NotImplemented`` if it does not implement the operation for a\n given pair of arguments. By convention, ``False`` and ``True`` are\n returned for a successful comparison. However, these methods can\n return any value, so if the comparison operator is used in a\n Boolean context (e.g., in the condition of an ``if`` statement),\n Python will call ``bool()`` on the value to determine if the result\n is true or false.\n\n There are no implied relationships among the comparison operators.\n The truth of ``x==y`` does not imply that ``x!=y`` is false.\n Accordingly, when defining ``__eq__()``, one should also define\n ``__ne__()`` so that the operators will behave as expected. See\n the paragraph on ``__hash__()`` for some important notes on\n creating *hashable* objects which support custom comparison\n operations and are usable as dictionary keys.\n\n There are no swapped-argument versions of these methods (to be used\n when the left argument does not support the operation but the right\n argument does); rather, ``__lt__()`` and ``__gt__()`` are each\n other\'s reflection, ``__le__()`` and ``__ge__()`` are each other\'s\n reflection, and ``__eq__()`` and ``__ne__()`` are their own\n reflection.\n\n Arguments to rich comparison methods are never coerced.\n\n To automatically generate ordering operations from a single root\n operation, see ``functools.total_ordering()``.\n\nobject.__cmp__(self, other)\n\n Called by comparison operations if rich comparison (see above) is\n not defined. Should return a negative integer if ``self < other``,\n zero if ``self == other``, a positive integer if ``self > other``.\n If no ``__cmp__()``, ``__eq__()`` or ``__ne__()`` operation is\n defined, class instances are compared by object identity\n ("address"). See also the description of ``__hash__()`` for some\n important notes on creating *hashable* objects which support custom\n comparison operations and are usable as dictionary keys. (Note: the\n restriction that exceptions are not propagated by ``__cmp__()`` has\n been removed since Python 1.5.)\n\nobject.__rcmp__(self, other)\n\n Changed in version 2.1: No longer supported.\n\nobject.__hash__(self)\n\n Called by built-in function ``hash()`` and for operations on\n members of hashed collections including ``set``, ``frozenset``, and\n ``dict``. ``__hash__()`` should return an integer. The only\n required property is that objects which compare equal have the same\n hash value; it is advised to somehow mix together (e.g. using\n exclusive or) the hash values for the components of the object that\n also play a part in comparison of objects.\n\n If a class does not define a ``__cmp__()`` or ``__eq__()`` method\n it should not define a ``__hash__()`` operation either; if it\n defines ``__cmp__()`` or ``__eq__()`` but not ``__hash__()``, its\n instances will not be usable in hashed collections. 
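A sketch of pairing rich comparisons with a consistent ``__hash__()``, as recommended above; ``Account`` is an invented name and the example assumes Python 2:

    class Account(object):
        def __init__(self, number):
            self.number = number
        def __eq__(self, other):
            if not isinstance(other, Account):
                return NotImplemented
            return self.number == other.number
        def __ne__(self, other):
            # __eq__ does not imply __ne__, so define both explicitly
            result = self.__eq__(other)
            return result if result is NotImplemented else not result
        def __hash__(self):
            # hash the same component that __eq__() compares,
            # so equal objects hash equal
            return hash(self.number)

    a, b = Account(42), Account(42)
    print a == b, a != b             # -> True False
    print len(set([a, b]))           # -> 1 (usable as dict/set keys)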
If a class\n defines mutable objects and implements a ``__cmp__()`` or\n ``__eq__()`` method, it should not implement ``__hash__()``, since\n hashable collection implementations require that a object\'s hash\n value is immutable (if the object\'s hash value changes, it will be\n in the wrong hash bucket).\n\n User-defined classes have ``__cmp__()`` and ``__hash__()`` methods\n by default; with them, all objects compare unequal (except with\n themselves) and ``x.__hash__()`` returns ``id(x)``.\n\n Classes which inherit a ``__hash__()`` method from a parent class\n but change the meaning of ``__cmp__()`` or ``__eq__()`` such that\n the hash value returned is no longer appropriate (e.g. by switching\n to a value-based concept of equality instead of the default\n identity based equality) can explicitly flag themselves as being\n unhashable by setting ``__hash__ = None`` in the class definition.\n Doing so means that not only will instances of the class raise an\n appropriate ``TypeError`` when a program attempts to retrieve their\n hash value, but they will also be correctly identified as\n unhashable when checking ``isinstance(obj, collections.Hashable)``\n (unlike classes which define their own ``__hash__()`` to explicitly\n raise ``TypeError``).\n\n Changed in version 2.5: ``__hash__()`` may now also return a long\n integer object; the 32-bit integer is then derived from the hash of\n that object.\n\n Changed in version 2.6: ``__hash__`` may now be set to ``None`` to\n explicitly flag instances of a class as unhashable.\n\nobject.__nonzero__(self)\n\n Called to implement truth value testing and the built-in operation\n ``bool()``; should return ``False`` or ``True``, or their integer\n equivalents ``0`` or ``1``. When this method is not defined,\n ``__len__()`` is called, if it is defined, and the object is\n considered true if its result is nonzero. If a class defines\n neither ``__len__()`` nor ``__nonzero__()``, all its instances are\n considered true.\n\nobject.__unicode__(self)\n\n Called to implement ``unicode()`` built-in; should return a Unicode\n object. When this method is not defined, string conversion is\n attempted, and the result of string conversion is converted to\n Unicode using the system default encoding.\n\n\nCustomizing attribute access\n============================\n\nThe following methods can be defined to customize the meaning of\nattribute access (use of, assignment to, or deletion of ``x.name``)\nfor class instances.\n\nobject.__getattr__(self, name)\n\n Called when an attribute lookup has not found the attribute in the\n usual places (i.e. it is not an instance attribute nor is it found\n in the class tree for ``self``). ``name`` is the attribute name.\n This method should return the (computed) attribute value or raise\n an ``AttributeError`` exception.\n\n Note that if the attribute is found through the normal mechanism,\n ``__getattr__()`` is not called. (This is an intentional asymmetry\n between ``__getattr__()`` and ``__setattr__()``.) This is done both\n for efficiency reasons and because otherwise ``__getattr__()``\n would have no way to access other attributes of the instance. Note\n that at least for instance variables, you can fake total control by\n not inserting any values in the instance attribute dictionary (but\n instead inserting them in another object). 
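A minimal sketch of the ``__getattr__()`` fallback described above; ``Defaults`` is an invented name:

    class Defaults(object):
        def __init__(self):
            self.present = 1
        def __getattr__(self, name):
            # only reached when normal attribute lookup fails
            if name.startswith("_"):
                raise AttributeError(name)
            return "default for %s" % name

    d = Defaults()
    print d.present        # -> 1 (found normally, __getattr__ not called)
    print d.missing        # -> default for missing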
See the\n ``__getattribute__()`` method below for a way to actually get total\n control in new-style classes.\n\nobject.__setattr__(self, name, value)\n\n Called when an attribute assignment is attempted. This is called\n instead of the normal mechanism (i.e. store the value in the\n instance dictionary). *name* is the attribute name, *value* is the\n value to be assigned to it.\n\n If ``__setattr__()`` wants to assign to an instance attribute, it\n should not simply execute ``self.name = value`` --- this would\n cause a recursive call to itself. Instead, it should insert the\n value in the dictionary of instance attributes, e.g.,\n ``self.__dict__[name] = value``. For new-style classes, rather\n than accessing the instance dictionary, it should call the base\n class method with the same name, for example,\n ``object.__setattr__(self, name, value)``.\n\nobject.__delattr__(self, name)\n\n Like ``__setattr__()`` but for attribute deletion instead of\n assignment. This should only be implemented if ``del obj.name`` is\n meaningful for the object.\n\n\nMore attribute access for new-style classes\n-------------------------------------------\n\nThe following methods only apply to new-style classes.\n\nobject.__getattribute__(self, name)\n\n Called unconditionally to implement attribute accesses for\n instances of the class. If the class also defines\n ``__getattr__()``, the latter will not be called unless\n ``__getattribute__()`` either calls it explicitly or raises an\n ``AttributeError``. This method should return the (computed)\n attribute value or raise an ``AttributeError`` exception. In order\n to avoid infinite recursion in this method, its implementation\n should always call the base class method with the same name to\n access any attributes it needs, for example,\n ``object.__getattribute__(self, name)``.\n\n Note: This method may still be bypassed when looking up special methods\n as the result of implicit invocation via language syntax or\n built-in functions. See *Special method lookup for new-style\n classes*.\n\n\nImplementing Descriptors\n------------------------\n\nThe following methods only apply when an instance of the class\ncontaining the method (a so-called *descriptor* class) appears in the\nclass dictionary of another new-style class, known as the *owner*\nclass. In the examples below, "the attribute" refers to the attribute\nwhose name is the key of the property in the owner class\'\n``__dict__``. Descriptors can only be implemented as new-style\nclasses themselves.\n\nobject.__get__(self, instance, owner)\n\n Called to get the attribute of the owner class (class attribute\n access) or of an instance of that class (instance attribute\n access). *owner* is always the owner class, while *instance* is the\n instance that the attribute was accessed through, or ``None`` when\n the attribute is accessed through the *owner*. This method should\n return the (computed) attribute value or raise an\n ``AttributeError`` exception.\n\nobject.__set__(self, instance, value)\n\n Called to set the attribute on an instance *instance* of the owner\n class to a new value, *value*.\n\nobject.__delete__(self, instance)\n\n Called to delete the attribute on an instance *instance* of the\n owner class.\n\n\nInvoking Descriptors\n--------------------\n\nIn general, a descriptor is an object attribute with "binding\nbehavior", one whose attribute access has been overridden by methods\nin the descriptor protocol: ``__get__()``, ``__set__()``, and\n``__delete__()``. 
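A small data descriptor along the lines described above (it defines both ``__get__()`` and ``__set__()``); the names ``Positive`` and ``Order`` are invented and error handling is kept minimal:

    class Positive(object):
        # a data descriptor, stored on the owner class
        def __init__(self, name):
            self.name = name
        def __get__(self, instance, owner):
            if instance is None:
                return self                       # accessed on the class itself
            return instance.__dict__[self.name]   # KeyError if never set (kept simple)
        def __set__(self, instance, value):
            if value <= 0:
                raise ValueError("%s must be positive" % self.name)
            instance.__dict__[self.name] = value

    class Order(object):
        quantity = Positive("quantity")

    o = Order()
    o.quantity = 5
    print o.quantity        # -> 5
    o.quantity = -1         # raises ValueError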
If any of those methods are defined for an object,\nit is said to be a descriptor.\n\nThe default behavior for attribute access is to get, set, or delete\nthe attribute from an object\'s dictionary. For instance, ``a.x`` has a\nlookup chain starting with ``a.__dict__[\'x\']``, then\n``type(a).__dict__[\'x\']``, and continuing through the base classes of\n``type(a)`` excluding metaclasses.\n\nHowever, if the looked-up value is an object defining one of the\ndescriptor methods, then Python may override the default behavior and\ninvoke the descriptor method instead. Where this occurs in the\nprecedence chain depends on which descriptor methods were defined and\nhow they were called. Note that descriptors are only invoked for new\nstyle objects or classes (ones that subclass ``object()`` or\n``type()``).\n\nThe starting point for descriptor invocation is a binding, ``a.x``.\nHow the arguments are assembled depends on ``a``:\n\nDirect Call\n The simplest and least common call is when user code directly\n invokes a descriptor method: ``x.__get__(a)``.\n\nInstance Binding\n If binding to a new-style object instance, ``a.x`` is transformed\n into the call: ``type(a).__dict__[\'x\'].__get__(a, type(a))``.\n\nClass Binding\n If binding to a new-style class, ``A.x`` is transformed into the\n call: ``A.__dict__[\'x\'].__get__(None, A)``.\n\nSuper Binding\n If ``a`` is an instance of ``super``, then the binding ``super(B,\n obj).m()`` searches ``obj.__class__.__mro__`` for the base class\n ``A`` immediately preceding ``B`` and then invokes the descriptor\n with the call: ``A.__dict__[\'m\'].__get__(obj, A)``.\n\nFor instance bindings, the precedence of descriptor invocation depends\non the which descriptor methods are defined. A descriptor can define\nany combination of ``__get__()``, ``__set__()`` and ``__delete__()``.\nIf it does not define ``__get__()``, then accessing the attribute will\nreturn the descriptor object itself unless there is a value in the\nobject\'s instance dictionary. If the descriptor defines ``__set__()``\nand/or ``__delete__()``, it is a data descriptor; if it defines\nneither, it is a non-data descriptor. Normally, data descriptors\ndefine both ``__get__()`` and ``__set__()``, while non-data\ndescriptors have just the ``__get__()`` method. Data descriptors with\n``__set__()`` and ``__get__()`` defined always override a redefinition\nin an instance dictionary. In contrast, non-data descriptors can be\noverridden by instances.\n\nPython methods (including ``staticmethod()`` and ``classmethod()``)\nare implemented as non-data descriptors. Accordingly, instances can\nredefine and override methods. This allows individual instances to\nacquire behaviors that differ from other instances of the same class.\n\nThe ``property()`` function is implemented as a data descriptor.\nAccordingly, instances cannot override the behavior of a property.\n\n\n__slots__\n---------\n\nBy default, instances of both old and new-style classes have a\ndictionary for attribute storage. This wastes space for objects\nhaving very few instance variables. The space consumption can become\nacute when creating large numbers of instances.\n\nThe default can be overridden by defining *__slots__* in a new-style\nclass definition. The *__slots__* declaration takes a sequence of\ninstance variables and reserves just enough space in each instance to\nhold a value for each variable. 
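A quick sketch of the ``__slots__`` mechanism just introduced (``Vec`` is an invented name); with no per-instance ``__dict__``, assigning an attribute that is not listed fails:

    class Vec(object):
        __slots__ = ('x', 'y')          # no per-instance __dict__ is created
        def __init__(self, x, y):
            self.x = x
            self.y = y

    v = Vec(1, 2)
    print v.x, v.y                      # -> 1 2
    try:
        v.z = 3                         # not listed in __slots__
    except AttributeError as e:
        print "rejected:", e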
Space is saved because *__dict__* is\nnot created for each instance.\n\n__slots__\n\n This class variable can be assigned a string, iterable, or sequence\n of strings with variable names used by instances. If defined in a\n new-style class, *__slots__* reserves space for the declared\n variables and prevents the automatic creation of *__dict__* and\n *__weakref__* for each instance.\n\n New in version 2.2.\n\nNotes on using *__slots__*\n\n* When inheriting from a class without *__slots__*, the *__dict__*\n attribute of that class will always be accessible, so a *__slots__*\n definition in the subclass is meaningless.\n\n* Without a *__dict__* variable, instances cannot be assigned new\n variables not listed in the *__slots__* definition. Attempts to\n assign to an unlisted variable name raises ``AttributeError``. If\n dynamic assignment of new variables is desired, then add\n ``\'__dict__\'`` to the sequence of strings in the *__slots__*\n declaration.\n\n Changed in version 2.3: Previously, adding ``\'__dict__\'`` to the\n *__slots__* declaration would not enable the assignment of new\n attributes not specifically listed in the sequence of instance\n variable names.\n\n* Without a *__weakref__* variable for each instance, classes defining\n *__slots__* do not support weak references to its instances. If weak\n reference support is needed, then add ``\'__weakref__\'`` to the\n sequence of strings in the *__slots__* declaration.\n\n Changed in version 2.3: Previously, adding ``\'__weakref__\'`` to the\n *__slots__* declaration would not enable support for weak\n references.\n\n* *__slots__* are implemented at the class level by creating\n descriptors (*Implementing Descriptors*) for each variable name. As\n a result, class attributes cannot be used to set default values for\n instance variables defined by *__slots__*; otherwise, the class\n attribute would overwrite the descriptor assignment.\n\n* The action of a *__slots__* declaration is limited to the class\n where it is defined. As a result, subclasses will have a *__dict__*\n unless they also define *__slots__* (which must only contain names\n of any *additional* slots).\n\n* If a class defines a slot also defined in a base class, the instance\n variable defined by the base class slot is inaccessible (except by\n retrieving its descriptor directly from the base class). This\n renders the meaning of the program undefined. In the future, a\n check may be added to prevent this.\n\n* Nonempty *__slots__* does not work for classes derived from\n "variable-length" built-in types such as ``long``, ``str`` and\n ``tuple``.\n\n* Any non-string iterable may be assigned to *__slots__*. Mappings may\n also be used; however, in the future, special meaning may be\n assigned to the values corresponding to each key.\n\n* *__class__* assignment works only if both classes have the same\n *__slots__*.\n\n Changed in version 2.6: Previously, *__class__* assignment raised an\n error if either new or old class had *__slots__*.\n\n\nCustomizing class creation\n==========================\n\nBy default, new-style classes are constructed using ``type()``. A\nclass definition is read into a separate namespace and the value of\nclass name is bound to the result of ``type(name, bases, dict)``.\n\nWhen the class definition is read, if *__metaclass__* is defined then\nthe callable assigned to it will be called instead of ``type()``. 
This\nallows classes or functions to be written which monitor or alter the\nclass creation process:\n\n* Modifying the class dictionary prior to the class being created.\n\n* Returning an instance of another class -- essentially performing the\n role of a factory function.\n\nThese steps will have to be performed in the metaclass\'s ``__new__()``\nmethod -- ``type.__new__()`` can then be called from this method to\ncreate a class with different properties. This example adds a new\nelement to the class dictionary before creating the class:\n\n class metacls(type):\n def __new__(mcs, name, bases, dict):\n dict[\'foo\'] = \'metacls was here\'\n return type.__new__(mcs, name, bases, dict)\n\nYou can of course also override other class methods (or add new\nmethods); for example defining a custom ``__call__()`` method in the\nmetaclass allows custom behavior when the class is called, e.g. not\nalways creating a new instance.\n\n__metaclass__\n\n This variable can be any callable accepting arguments for ``name``,\n ``bases``, and ``dict``. Upon class creation, the callable is used\n instead of the built-in ``type()``.\n\n New in version 2.2.\n\nThe appropriate metaclass is determined by the following precedence\nrules:\n\n* If ``dict[\'__metaclass__\']`` exists, it is used.\n\n* Otherwise, if there is at least one base class, its metaclass is\n used (this looks for a *__class__* attribute first and if not found,\n uses its type).\n\n* Otherwise, if a global variable named __metaclass__ exists, it is\n used.\n\n* Otherwise, the old-style, classic metaclass (types.ClassType) is\n used.\n\nThe potential uses for metaclasses are boundless. Some ideas that have\nbeen explored including logging, interface checking, automatic\ndelegation, automatic property creation, proxies, frameworks, and\nautomatic resource locking/synchronization.\n\n\nCustomizing instance and subclass checks\n========================================\n\nNew in version 2.6.\n\nThe following methods are used to override the default behavior of the\n``isinstance()`` and ``issubclass()`` built-in functions.\n\nIn particular, the metaclass ``abc.ABCMeta`` implements these methods\nin order to allow the addition of Abstract Base Classes (ABCs) as\n"virtual base classes" to any class or type (including built-in\ntypes), including other ABCs.\n\nclass.__instancecheck__(self, instance)\n\n Return true if *instance* should be considered a (direct or\n indirect) instance of *class*. If defined, called to implement\n ``isinstance(instance, class)``.\n\nclass.__subclasscheck__(self, subclass)\n\n Return true if *subclass* should be considered a (direct or\n indirect) subclass of *class*. If defined, called to implement\n ``issubclass(subclass, class)``.\n\nNote that these methods are looked up on the type (metaclass) of a\nclass. 
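A rough sketch of overriding the instance and subclass checks described above with a metaclass; ``DuckMeta``, ``Duck`` and ``Mallard`` are invented names, and this is far less careful than ``abc.ABCMeta``:

    class DuckMeta(type):
        # looked up on the metaclass, as noted above
        def __instancecheck__(cls, instance):
            return hasattr(instance, "quack")
        def __subclasscheck__(cls, subclass):
            return hasattr(subclass, "quack")

    class Duck(object):
        __metaclass__ = DuckMeta

    class Mallard(object):              # unrelated class that happens to quack
        def quack(self):
            return "quack"

    print isinstance(Mallard(), Duck)   # -> True
    print issubclass(Mallard, Duck)     # -> True
    print isinstance(42, Duck)          # -> False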
They cannot be defined as class methods in the actual class.\nThis is consistent with the lookup of special methods that are called\non instances, only in this case the instance is itself a class.\n\nSee also:\n\n **PEP 3119** - Introducing Abstract Base Classes\n Includes the specification for customizing ``isinstance()`` and\n ``issubclass()`` behavior through ``__instancecheck__()`` and\n ``__subclasscheck__()``, with motivation for this functionality\n in the context of adding Abstract Base Classes (see the ``abc``\n module) to the language.\n\n\nEmulating callable objects\n==========================\n\nobject.__call__(self[, args...])\n\n Called when the instance is "called" as a function; if this method\n is defined, ``x(arg1, arg2, ...)`` is a shorthand for\n ``x.__call__(arg1, arg2, ...)``.\n\n\nEmulating container types\n=========================\n\nThe following methods can be defined to implement container objects.\nContainers usually are sequences (such as lists or tuples) or mappings\n(like dictionaries), but can represent other containers as well. The\nfirst set of methods is used either to emulate a sequence or to\nemulate a mapping; the difference is that for a sequence, the\nallowable keys should be the integers *k* for which ``0 <= k < N``\nwhere *N* is the length of the sequence, or slice objects, which\ndefine a range of items. (For backwards compatibility, the method\n``__getslice__()`` (see below) can also be defined to handle simple,\nbut not extended slices.) It is also recommended that mappings provide\nthe methods ``keys()``, ``values()``, ``items()``, ``has_key()``,\n``get()``, ``clear()``, ``setdefault()``, ``iterkeys()``,\n``itervalues()``, ``iteritems()``, ``pop()``, ``popitem()``,\n``copy()``, and ``update()`` behaving similar to those for Python\'s\nstandard dictionary objects. The ``UserDict`` module provides a\n``DictMixin`` class to help create those methods from a base set of\n``__getitem__()``, ``__setitem__()``, ``__delitem__()``, and\n``keys()``. Mutable sequences should provide methods ``append()``,\n``count()``, ``index()``, ``extend()``, ``insert()``, ``pop()``,\n``remove()``, ``reverse()`` and ``sort()``, like Python standard list\nobjects. Finally, sequence types should implement addition (meaning\nconcatenation) and multiplication (meaning repetition) by defining the\nmethods ``__add__()``, ``__radd__()``, ``__iadd__()``, ``__mul__()``,\n``__rmul__()`` and ``__imul__()`` described below; they should not\ndefine ``__coerce__()`` or other numerical operators. It is\nrecommended that both mappings and sequences implement the\n``__contains__()`` method to allow efficient use of the ``in``\noperator; for mappings, ``in`` should be equivalent of ``has_key()``;\nfor sequences, it should search through the values. It is further\nrecommended that both mappings and sequences implement the\n``__iter__()`` method to allow efficient iteration through the\ncontainer; for mappings, ``__iter__()`` should be the same as\n``iterkeys()``; for sequences, it should iterate through the values.\n\nobject.__len__(self)\n\n Called to implement the built-in function ``len()``. Should return\n the length of the object, an integer ``>=`` 0. Also, an object\n that doesn\'t define a ``__nonzero__()`` method and whose\n ``__len__()`` method returns zero is considered to be false in a\n Boolean context.\n\nobject.__getitem__(self, key)\n\n Called to implement evaluation of ``self[key]``. 
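A small read-only sequence illustrating ``__len__()`` and ``__getitem__()`` as described above; ``Squares`` is an invented name, and slice objects are handled explicitly since they may be passed as keys:

    class Squares(object):
        def __init__(self, n):
            self.n = n
        def __len__(self):
            return self.n
        def __getitem__(self, index):
            if isinstance(index, slice):
                return [self[i] for i in range(*index.indices(self.n))]
            if index < 0:
                index += self.n                 # emulate negative indexing
            if not 0 <= index < self.n:
                raise IndexError(index)         # lets for-loops detect the end
            return index * index

    sq = Squares(5)
    print len(sq), sq[2], sq[-1]        # -> 5 4 16
    print list(sq)                      # -> [0, 1, 4, 9, 16]
    print sq[1:4]                       # -> [1, 4, 9]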
For sequence\n types, the accepted keys should be integers and slice objects.\n Note that the special interpretation of negative indexes (if the\n class wishes to emulate a sequence type) is up to the\n ``__getitem__()`` method. If *key* is of an inappropriate type,\n ``TypeError`` may be raised; if of a value outside the set of\n indexes for the sequence (after any special interpretation of\n negative values), ``IndexError`` should be raised. For mapping\n types, if *key* is missing (not in the container), ``KeyError``\n should be raised.\n\n Note: ``for`` loops expect that an ``IndexError`` will be raised for\n illegal indexes to allow proper detection of the end of the\n sequence.\n\nobject.__setitem__(self, key, value)\n\n Called to implement assignment to ``self[key]``. Same note as for\n ``__getitem__()``. This should only be implemented for mappings if\n the objects support changes to the values for keys, or if new keys\n can be added, or for sequences if elements can be replaced. The\n same exceptions should be raised for improper *key* values as for\n the ``__getitem__()`` method.\n\nobject.__delitem__(self, key)\n\n Called to implement deletion of ``self[key]``. Same note as for\n ``__getitem__()``. This should only be implemented for mappings if\n the objects support removal of keys, or for sequences if elements\n can be removed from the sequence. The same exceptions should be\n raised for improper *key* values as for the ``__getitem__()``\n method.\n\nobject.__iter__(self)\n\n This method is called when an iterator is required for a container.\n This method should return a new iterator object that can iterate\n over all the objects in the container. For mappings, it should\n iterate over the keys of the container, and should also be made\n available as the method ``iterkeys()``.\n\n Iterator objects also need to implement this method; they are\n required to return themselves. For more information on iterator\n objects, see *Iterator Types*.\n\nobject.__reversed__(self)\n\n Called (if present) by the ``reversed()`` built-in to implement\n reverse iteration. It should return a new iterator object that\n iterates over all the objects in the container in reverse order.\n\n If the ``__reversed__()`` method is not provided, the\n ``reversed()`` built-in will fall back to using the sequence\n protocol (``__len__()`` and ``__getitem__()``). Objects that\n support the sequence protocol should only provide\n ``__reversed__()`` if they can provide an implementation that is\n more efficient than the one provided by ``reversed()``.\n\n New in version 2.6.\n\nThe membership test operators (``in`` and ``not in``) are normally\nimplemented as an iteration through a sequence. However, container\nobjects can supply the following special method with a more efficient\nimplementation, which also does not require the object be a sequence.\n\nobject.__contains__(self, item)\n\n Called to implement membership test operators. Should return true\n if *item* is in *self*, false otherwise. 
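``__contains__()`` can answer a membership test without any iteration, as described above; a tiny sketch (``EvenNumbers`` is an invented name for a conceptually unbounded container):

    class EvenNumbers(object):
        def __contains__(self, item):
            # no iteration needed: answer the membership test directly
            return isinstance(item, (int, long)) and item % 2 == 0

    evens = EvenNumbers()
    print 4 in evens           # -> True
    print 7 in evens           # -> False
    print 7 not in evens       # -> True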
For mapping objects, this\n should consider the keys of the mapping rather than the values or\n the key-item pairs.\n\n For objects that don\'t define ``__contains__()``, the membership\n test first tries iteration via ``__iter__()``, then the old\n sequence iteration protocol via ``__getitem__()``, see *this\n section in the language reference*.\n\n\nAdditional methods for emulation of sequence types\n==================================================\n\nThe following optional methods can be defined to further emulate\nsequence objects. Immutable sequences methods should at most only\ndefine ``__getslice__()``; mutable sequences might define all three\nmethods.\n\nobject.__getslice__(self, i, j)\n\n Deprecated since version 2.0: Support slice objects as parameters\n to the ``__getitem__()`` method. (However, built-in types in\n CPython currently still implement ``__getslice__()``. Therefore,\n you have to override it in derived classes when implementing\n slicing.)\n\n Called to implement evaluation of ``self[i:j]``. The returned\n object should be of the same type as *self*. Note that missing *i*\n or *j* in the slice expression are replaced by zero or\n ``sys.maxint``, respectively. If negative indexes are used in the\n slice, the length of the sequence is added to that index. If the\n instance does not implement the ``__len__()`` method, an\n ``AttributeError`` is raised. No guarantee is made that indexes\n adjusted this way are not still negative. Indexes which are\n greater than the length of the sequence are not modified. If no\n ``__getslice__()`` is found, a slice object is created instead, and\n passed to ``__getitem__()`` instead.\n\nobject.__setslice__(self, i, j, sequence)\n\n Called to implement assignment to ``self[i:j]``. Same notes for *i*\n and *j* as for ``__getslice__()``.\n\n This method is deprecated. If no ``__setslice__()`` is found, or\n for extended slicing of the form ``self[i:j:k]``, a slice object is\n created, and passed to ``__setitem__()``, instead of\n ``__setslice__()`` being called.\n\nobject.__delslice__(self, i, j)\n\n Called to implement deletion of ``self[i:j]``. Same notes for *i*\n and *j* as for ``__getslice__()``. This method is deprecated. If no\n ``__delslice__()`` is found, or for extended slicing of the form\n ``self[i:j:k]``, a slice object is created, and passed to\n ``__delitem__()``, instead of ``__delslice__()`` being called.\n\nNotice that these methods are only invoked when a single slice with a\nsingle colon is used, and the slice method is available. 
For slice\noperations involving extended slice notation, or in absence of the\nslice methods, ``__getitem__()``, ``__setitem__()`` or\n``__delitem__()`` is called with a slice object as argument.\n\nThe following example demonstrate how to make your program or module\ncompatible with earlier versions of Python (assuming that methods\n``__getitem__()``, ``__setitem__()`` and ``__delitem__()`` support\nslice objects as arguments):\n\n class MyClass:\n ...\n def __getitem__(self, index):\n ...\n def __setitem__(self, index, value):\n ...\n def __delitem__(self, index):\n ...\n\n if sys.version_info < (2, 0):\n # They won\'t be defined if version is at least 2.0 final\n\n def __getslice__(self, i, j):\n return self[max(0, i):max(0, j):]\n def __setslice__(self, i, j, seq):\n self[max(0, i):max(0, j):] = seq\n def __delslice__(self, i, j):\n del self[max(0, i):max(0, j):]\n ...\n\nNote the calls to ``max()``; these are necessary because of the\nhandling of negative indices before the ``__*slice__()`` methods are\ncalled. When negative indexes are used, the ``__*item__()`` methods\nreceive them as provided, but the ``__*slice__()`` methods get a\n"cooked" form of the index values. For each negative index value, the\nlength of the sequence is added to the index before calling the method\n(which may still result in a negative index); this is the customary\nhandling of negative indexes by the built-in sequence types, and the\n``__*item__()`` methods are expected to do this as well. However,\nsince they should already be doing that, negative indexes cannot be\npassed in; they must be constrained to the bounds of the sequence\nbefore being passed to the ``__*item__()`` methods. Calling ``max(0,\ni)`` conveniently returns the proper value.\n\n\nEmulating numeric types\n=======================\n\nThe following methods can be defined to emulate numeric objects.\nMethods corresponding to operations that are not supported by the\nparticular kind of number implemented (e.g., bitwise operations for\nnon-integral numbers) should be left undefined.\n\nobject.__add__(self, other)\nobject.__sub__(self, other)\nobject.__mul__(self, other)\nobject.__floordiv__(self, other)\nobject.__mod__(self, other)\nobject.__divmod__(self, other)\nobject.__pow__(self, other[, modulo])\nobject.__lshift__(self, other)\nobject.__rshift__(self, other)\nobject.__and__(self, other)\nobject.__xor__(self, other)\nobject.__or__(self, other)\n\n These methods are called to implement the binary arithmetic\n operations (``+``, ``-``, ``*``, ``//``, ``%``, ``divmod()``,\n ``pow()``, ``**``, ``<<``, ``>>``, ``&``, ``^``, ``|``). For\n instance, to evaluate the expression ``x + y``, where *x* is an\n instance of a class that has an ``__add__()`` method,\n ``x.__add__(y)`` is called. The ``__divmod__()`` method should be\n the equivalent to using ``__floordiv__()`` and ``__mod__()``; it\n should not be related to ``__truediv__()`` (described below). Note\n that ``__pow__()`` should be defined to accept an optional third\n argument if the ternary version of the built-in ``pow()`` function\n is to be supported.\n\n If one of those methods does not support the operation with the\n supplied arguments, it should return ``NotImplemented``.\n\nobject.__div__(self, other)\nobject.__truediv__(self, other)\n\n The division operator (``/``) is implemented by these methods. The\n ``__truediv__()`` method is used when ``__future__.division`` is in\n effect, otherwise ``__div__()`` is used. 
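A sketch of a binary arithmetic method returning ``NotImplemented`` for unsupported operands, as described above; ``Money`` is an invented name:

    class Money(object):
        def __init__(self, cents):
            self.cents = cents
        def __repr__(self):
            return "Money(%d)" % self.cents
        def __add__(self, other):
            if isinstance(other, Money):
                return Money(self.cents + other.cents)
            return NotImplemented       # let the other operand (or a TypeError) take over

    print Money(100) + Money(50)        # -> Money(150)
    Money(100) + "x"                    # raises TypeError (unsupported operand types)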
If only one of these two\n methods is defined, the object will not support division in the\n alternate context; ``TypeError`` will be raised instead.\n\nobject.__radd__(self, other)\nobject.__rsub__(self, other)\nobject.__rmul__(self, other)\nobject.__rdiv__(self, other)\nobject.__rtruediv__(self, other)\nobject.__rfloordiv__(self, other)\nobject.__rmod__(self, other)\nobject.__rdivmod__(self, other)\nobject.__rpow__(self, other)\nobject.__rlshift__(self, other)\nobject.__rrshift__(self, other)\nobject.__rand__(self, other)\nobject.__rxor__(self, other)\nobject.__ror__(self, other)\n\n These methods are called to implement the binary arithmetic\n operations (``+``, ``-``, ``*``, ``/``, ``%``, ``divmod()``,\n ``pow()``, ``**``, ``<<``, ``>>``, ``&``, ``^``, ``|``) with\n reflected (swapped) operands. These functions are only called if\n the left operand does not support the corresponding operation and\n the operands are of different types. [2] For instance, to evaluate\n the expression ``x - y``, where *y* is an instance of a class that\n has an ``__rsub__()`` method, ``y.__rsub__(x)`` is called if\n ``x.__sub__(y)`` returns *NotImplemented*.\n\n Note that ternary ``pow()`` will not try calling ``__rpow__()``\n (the coercion rules would become too complicated).\n\n Note: If the right operand\'s type is a subclass of the left operand\'s\n type and that subclass provides the reflected method for the\n operation, this method will be called before the left operand\'s\n non-reflected method. This behavior allows subclasses to\n override their ancestors\' operations.\n\nobject.__iadd__(self, other)\nobject.__isub__(self, other)\nobject.__imul__(self, other)\nobject.__idiv__(self, other)\nobject.__itruediv__(self, other)\nobject.__ifloordiv__(self, other)\nobject.__imod__(self, other)\nobject.__ipow__(self, other[, modulo])\nobject.__ilshift__(self, other)\nobject.__irshift__(self, other)\nobject.__iand__(self, other)\nobject.__ixor__(self, other)\nobject.__ior__(self, other)\n\n These methods are called to implement the augmented arithmetic\n assignments (``+=``, ``-=``, ``*=``, ``/=``, ``//=``, ``%=``,\n ``**=``, ``<<=``, ``>>=``, ``&=``, ``^=``, ``|=``). These methods\n should attempt to do the operation in-place (modifying *self*) and\n return the result (which could be, but does not have to be,\n *self*). If a specific method is not defined, the augmented\n assignment falls back to the normal methods. For instance, to\n execute the statement ``x += y``, where *x* is an instance of a\n class that has an ``__iadd__()`` method, ``x.__iadd__(y)`` is\n called. If *x* is an instance of a class that does not define a\n ``__iadd__()`` method, ``x.__add__(y)`` and ``y.__radd__(x)`` are\n considered, as with the evaluation of ``x + y``.\n\nobject.__neg__(self)\nobject.__pos__(self)\nobject.__abs__(self)\nobject.__invert__(self)\n\n Called to implement the unary arithmetic operations (``-``, ``+``,\n ``abs()`` and ``~``).\n\nobject.__complex__(self)\nobject.__int__(self)\nobject.__long__(self)\nobject.__float__(self)\n\n Called to implement the built-in functions ``complex()``,\n ``int()``, ``long()``, and ``float()``. Should return a value of\n the appropriate type.\n\nobject.__oct__(self)\nobject.__hex__(self)\n\n Called to implement the built-in functions ``oct()`` and ``hex()``.\n Should return a string value.\n\nobject.__index__(self)\n\n Called to implement ``operator.index()``. Also called whenever\n Python needs an integer object (such as in slicing). 
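A sketch of the reflected and in-place variants described above; ``Tally`` is an invented name. ``3 + t`` works because ``int.__add__()`` returns ``NotImplemented`` and ``Tally.__radd__()`` is then tried, while ``t += n`` keeps the same object because ``__iadd__()`` updates in place and returns ``self``:

    class Tally(object):
        def __init__(self, total=0):
            self.total = total
        def __add__(self, other):
            return Tally(self.total + other)
        __radd__ = __add__              # addition is symmetric here
        def __iadd__(self, other):
            self.total += other         # in-place update
            return self

    t = Tally(10)
    print (3 + t).total                 # -> 13, via Tally.__radd__
    before = id(t)
    t += 5
    print t.total, id(t) == before      # -> 15 True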
Must return\n an integer (int or long).\n\n New in version 2.5.\n\nobject.__coerce__(self, other)\n\n Called to implement "mixed-mode" numeric arithmetic. Should either\n return a 2-tuple containing *self* and *other* converted to a\n common numeric type, or ``None`` if conversion is impossible. When\n the common type would be the type of ``other``, it is sufficient to\n return ``None``, since the interpreter will also ask the other\n object to attempt a coercion (but sometimes, if the implementation\n of the other type cannot be changed, it is useful to do the\n conversion to the other type here). A return value of\n ``NotImplemented`` is equivalent to returning ``None``.\n\n\nCoercion rules\n==============\n\nThis section used to document the rules for coercion. As the language\nhas evolved, the coercion rules have become hard to document\nprecisely; documenting what one version of one particular\nimplementation does is undesirable. Instead, here are some informal\nguidelines regarding coercion. In Python 3.0, coercion will not be\nsupported.\n\n* If the left operand of a % operator is a string or Unicode object,\n no coercion takes place and the string formatting operation is\n invoked instead.\n\n* It is no longer recommended to define a coercion operation. Mixed-\n mode operations on types that don\'t define coercion pass the\n original arguments to the operation.\n\n* New-style classes (those derived from ``object``) never invoke the\n ``__coerce__()`` method in response to a binary operator; the only\n time ``__coerce__()`` is invoked is when the built-in function\n ``coerce()`` is called.\n\n* For most intents and purposes, an operator that returns\n ``NotImplemented`` is treated the same as one that is not\n implemented at all.\n\n* Below, ``__op__()`` and ``__rop__()`` are used to signify the\n generic method names corresponding to an operator; ``__iop__()`` is\n used for the corresponding in-place operator. For example, for the\n operator \'``+``\', ``__add__()`` and ``__radd__()`` are used for the\n left and right variant of the binary operator, and ``__iadd__()``\n for the in-place variant.\n\n* For objects *x* and *y*, first ``x.__op__(y)`` is tried. If this is\n not implemented or returns ``NotImplemented``, ``y.__rop__(x)`` is\n tried. If this is also not implemented or returns\n ``NotImplemented``, a ``TypeError`` exception is raised. But see\n the following exception:\n\n* Exception to the previous item: if the left operand is an instance\n of a built-in type or a new-style class, and the right operand is an\n instance of a proper subclass of that type or class and overrides\n the base\'s ``__rop__()`` method, the right operand\'s ``__rop__()``\n method is tried *before* the left operand\'s ``__op__()`` method.\n\n This is done so that a subclass can completely override binary\n operators. Otherwise, the left operand\'s ``__op__()`` method would\n always accept the right operand: when an instance of a given class\n is expected, an instance of a subclass of that class is always\n acceptable.\n\n* When either operand type defines a coercion, this coercion is called\n before that type\'s ``__op__()`` or ``__rop__()`` method is called,\n but no sooner. If the coercion returns an object of a different\n type for the operand whose coercion is invoked, part of the process\n is redone using the new object.\n\n* When an in-place operator (like \'``+=``\') is used, if the left\n operand implements ``__iop__()``, it is invoked without any\n coercion. 
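The ``__index__()`` hook described a little earlier can be sketched as follows (``Nth`` is an invented name); any place Python needs an integer index will accept such an object:

    import operator

    class Nth(object):
        def __init__(self, n):
            self.n = n
        def __index__(self):
            return self.n               # the lossless integer conversion

    letters = "abcdefg"
    print letters[Nth(2)]               # -> c
    print letters[Nth(1):Nth(4)]        # -> bcd
    print operator.index(Nth(5))        # -> 5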
When the operation falls back to ``__op__()`` and/or\n ``__rop__()``, the normal coercion rules apply.\n\n* In ``x + y``, if *x* is a sequence that implements sequence\n concatenation, sequence concatenation is invoked.\n\n* In ``x * y``, if one operator is a sequence that implements sequence\n repetition, and the other is an integer (``int`` or ``long``),\n sequence repetition is invoked.\n\n* Rich comparisons (implemented by methods ``__eq__()`` and so on)\n never use coercion. Three-way comparison (implemented by\n ``__cmp__()``) does use coercion under the same conditions as other\n binary operations use it.\n\n* In the current implementation, the built-in numeric types ``int``,\n ``long``, ``float``, and ``complex`` do not use coercion. All these\n types implement a ``__coerce__()`` method, for use by the built-in\n ``coerce()`` function.\n\n Changed in version 2.7.\n\n\nWith Statement Context Managers\n===============================\n\nNew in version 2.5.\n\nA *context manager* is an object that defines the runtime context to\nbe established when executing a ``with`` statement. The context\nmanager handles the entry into, and the exit from, the desired runtime\ncontext for the execution of the block of code. Context managers are\nnormally invoked using the ``with`` statement (described in section\n*The with statement*), but can also be used by directly invoking their\nmethods.\n\nTypical uses of context managers include saving and restoring various\nkinds of global state, locking and unlocking resources, closing opened\nfiles, etc.\n\nFor more information on context managers, see *Context Manager Types*.\n\nobject.__enter__(self)\n\n Enter the runtime context related to this object. The ``with``\n statement will bind this method\'s return value to the target(s)\n specified in the ``as`` clause of the statement, if any.\n\nobject.__exit__(self, exc_type, exc_value, traceback)\n\n Exit the runtime context related to this object. The parameters\n describe the exception that caused the context to be exited. If the\n context was exited without an exception, all three arguments will\n be ``None``.\n\n If an exception is supplied, and the method wishes to suppress the\n exception (i.e., prevent it from being propagated), it should\n return a true value. Otherwise, the exception will be processed\n normally upon exit from this method.\n\n Note that ``__exit__()`` methods should not reraise the passed-in\n exception; this is the caller\'s responsibility.\n\nSee also:\n\n **PEP 0343** - The "with" statement\n The specification, background, and examples for the Python\n ``with`` statement.\n\n\nSpecial method lookup for old-style classes\n===========================================\n\nFor old-style classes, special methods are always looked up in exactly\nthe same way as any other method or attribute. This is the case\nregardless of whether the method is being looked up explicitly as in\n``x.__getitem__(i)`` or implicitly as in ``x[i]``.\n\nThis behaviour means that special methods may exhibit different\nbehaviour for different instances of a single old-style class if the\nappropriate special attributes are set differently:\n\n >>> class C:\n ... 
pass\n ...\n >>> c1 = C()\n >>> c2 = C()\n >>> c1.__len__ = lambda: 5\n >>> c2.__len__ = lambda: 9\n >>> len(c1)\n 5\n >>> len(c2)\n 9\n\n\nSpecial method lookup for new-style classes\n===========================================\n\nFor new-style classes, implicit invocations of special methods are\nonly guaranteed to work correctly if defined on an object\'s type, not\nin the object\'s instance dictionary. That behaviour is the reason why\nthe following code raises an exception (unlike the equivalent example\nwith old-style classes):\n\n >>> class C(object):\n ... pass\n ...\n >>> c = C()\n >>> c.__len__ = lambda: 5\n >>> len(c)\n Traceback (most recent call last):\n File "", line 1, in \n TypeError: object of type \'C\' has no len()\n\nThe rationale behind this behaviour lies with a number of special\nmethods such as ``__hash__()`` and ``__repr__()`` that are implemented\nby all objects, including type objects. If the implicit lookup of\nthese methods used the conventional lookup process, they would fail\nwhen invoked on the type object itself:\n\n >>> 1 .__hash__() == hash(1)\n True\n >>> int.__hash__() == hash(int)\n Traceback (most recent call last):\n File "", line 1, in \n TypeError: descriptor \'__hash__\' of \'int\' object needs an argument\n\nIncorrectly attempting to invoke an unbound method of a class in this\nway is sometimes referred to as \'metaclass confusion\', and is avoided\nby bypassing the instance when looking up special methods:\n\n >>> type(1).__hash__(1) == hash(1)\n True\n >>> type(int).__hash__(int) == hash(int)\n True\n\nIn addition to bypassing any instance attributes in the interest of\ncorrectness, implicit special method lookup generally also bypasses\nthe ``__getattribute__()`` method even of the object\'s metaclass:\n\n >>> class Meta(type):\n ... def __getattribute__(*args):\n ... print "Metaclass getattribute invoked"\n ... return type.__getattribute__(*args)\n ...\n >>> class C(object):\n ... __metaclass__ = Meta\n ... def __len__(self):\n ... return 10\n ... def __getattribute__(*args):\n ... print "Class getattribute invoked"\n ... return object.__getattribute__(*args)\n ...\n >>> c = C()\n >>> c.__len__() # Explicit lookup via instance\n Class getattribute invoked\n 10\n >>> type(c).__len__(c) # Explicit lookup via type\n Metaclass getattribute invoked\n 10\n >>> len(c) # Implicit lookup\n 10\n\nBypassing the ``__getattribute__()`` machinery in this fashion\nprovides significant scope for speed optimisations within the\ninterpreter, at the cost of some flexibility in the handling of\nspecial methods (the special method *must* be set on the class object\nitself in order to be consistently invoked by the interpreter).\n\n-[ Footnotes ]-\n\n[1] It *is* possible in some cases to change an object\'s type, under\n certain controlled conditions. It generally isn\'t a good idea\n though, since it can lead to some very strange behaviour if it is\n handled incorrectly.\n\n[2] For operands of the same type, it is assumed that if the non-\n reflected method (such as ``__add__()``) fails the operation is\n not supported, which is why the reflected method is not called.\n', + 'specialnames': u'\nSpecial method names\n********************\n\nA class can implement certain operations that are invoked by special\nsyntax (such as arithmetic operations or subscripting and slicing) by\ndefining methods with special names. This is Python\'s approach to\n*operator overloading*, allowing classes to define their own behavior\nwith respect to language operators. 
For instance, if a class defines\na method named ``__getitem__()``, and ``x`` is an instance of this\nclass, then ``x[i]`` is roughly equivalent to ``x.__getitem__(i)`` for\nold-style classes and ``type(x).__getitem__(x, i)`` for new-style\nclasses. Except where mentioned, attempts to execute an operation\nraise an exception when no appropriate method is defined (typically\n``AttributeError`` or ``TypeError``).\n\nWhen implementing a class that emulates any built-in type, it is\nimportant that the emulation only be implemented to the degree that it\nmakes sense for the object being modelled. For example, some\nsequences may work well with retrieval of individual elements, but\nextracting a slice may not make sense. (One example of this is the\n``NodeList`` interface in the W3C\'s Document Object Model.)\n\n\nBasic customization\n===================\n\nobject.__new__(cls[, ...])\n\n Called to create a new instance of class *cls*. ``__new__()`` is a\n static method (special-cased so you need not declare it as such)\n that takes the class of which an instance was requested as its\n first argument. The remaining arguments are those passed to the\n object constructor expression (the call to the class). The return\n value of ``__new__()`` should be the new object instance (usually\n an instance of *cls*).\n\n Typical implementations create a new instance of the class by\n invoking the superclass\'s ``__new__()`` method using\n ``super(currentclass, cls).__new__(cls[, ...])`` with appropriate\n arguments and then modifying the newly-created instance as\n necessary before returning it.\n\n If ``__new__()`` returns an instance of *cls*, then the new\n instance\'s ``__init__()`` method will be invoked like\n ``__init__(self[, ...])``, where *self* is the new instance and the\n remaining arguments are the same as were passed to ``__new__()``.\n\n If ``__new__()`` does not return an instance of *cls*, then the new\n instance\'s ``__init__()`` method will not be invoked.\n\n ``__new__()`` is intended mainly to allow subclasses of immutable\n types (like int, str, or tuple) to customize instance creation. It\n is also commonly overridden in custom metaclasses in order to\n customize class creation.\n\nobject.__init__(self[, ...])\n\n Called when the instance is created. The arguments are those\n passed to the class constructor expression. If a base class has an\n ``__init__()`` method, the derived class\'s ``__init__()`` method,\n if any, must explicitly call it to ensure proper initialization of\n the base class part of the instance; for example:\n ``BaseClass.__init__(self, [args...])``. As a special constraint\n on constructors, no value may be returned; doing so will cause a\n ``TypeError`` to be raised at runtime.\n\nobject.__del__(self)\n\n Called when the instance is about to be destroyed. This is also\n called a destructor. If a base class has a ``__del__()`` method,\n the derived class\'s ``__del__()`` method, if any, must explicitly\n call it to ensure proper deletion of the base class part of the\n instance. Note that it is possible (though not recommended!) for\n the ``__del__()`` method to postpone destruction of the instance by\n creating a new reference to it. It may then be called at a later\n time when this new reference is deleted. 
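As a small illustrative sketch (not from the original text; the exact timing shown assumes CPython's reference counting, and a tracing GC such as PyPy's may run the method later), ``del`` only removes one reference, and ``__del__()`` runs once the last reference disappears:

   class Tracked(object):
       def __del__(self):
           print "collected"

   x = Tracked()
   y = x    # a second reference to the same instance
   del x    # nothing printed: the object is still referenced by y
   del y    # reference count reaches zero; "collected" is printed (on CPython)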
It is not guaranteed that\n ``__del__()`` methods are called for objects that still exist when\n the interpreter exits.\n\n Note: ``del x`` doesn\'t directly call ``x.__del__()`` --- the former\n decrements the reference count for ``x`` by one, and the latter\n is only called when ``x``\'s reference count reaches zero. Some\n common situations that may prevent the reference count of an\n object from going to zero include: circular references between\n objects (e.g., a doubly-linked list or a tree data structure with\n parent and child pointers); a reference to the object on the\n stack frame of a function that caught an exception (the traceback\n stored in ``sys.exc_traceback`` keeps the stack frame alive); or\n a reference to the object on the stack frame that raised an\n unhandled exception in interactive mode (the traceback stored in\n ``sys.last_traceback`` keeps the stack frame alive). The first\n situation can only be remedied by explicitly breaking the cycles;\n the latter two situations can be resolved by storing ``None`` in\n ``sys.exc_traceback`` or ``sys.last_traceback``. Circular\n references which are garbage are detected when the option cycle\n detector is enabled (it\'s on by default), but can only be cleaned\n up if there are no Python-level ``__del__()`` methods involved.\n Refer to the documentation for the ``gc`` module for more\n information about how ``__del__()`` methods are handled by the\n cycle detector, particularly the description of the ``garbage``\n value.\n\n Warning: Due to the precarious circumstances under which ``__del__()``\n methods are invoked, exceptions that occur during their execution\n are ignored, and a warning is printed to ``sys.stderr`` instead.\n Also, when ``__del__()`` is invoked in response to a module being\n deleted (e.g., when execution of the program is done), other\n globals referenced by the ``__del__()`` method may already have\n been deleted or in the process of being torn down (e.g. the\n import machinery shutting down). For this reason, ``__del__()``\n methods should do the absolute minimum needed to maintain\n external invariants. Starting with version 1.5, Python\n guarantees that globals whose name begins with a single\n underscore are deleted from their module before other globals are\n deleted; if no other references to such globals exist, this may\n help in assuring that imported modules are still available at the\n time when the ``__del__()`` method is called.\n\nobject.__repr__(self)\n\n Called by the ``repr()`` built-in function and by string\n conversions (reverse quotes) to compute the "official" string\n representation of an object. If at all possible, this should look\n like a valid Python expression that could be used to recreate an\n object with the same value (given an appropriate environment). If\n this is not possible, a string of the form ``<...some useful\n description...>`` should be returned. The return value must be a\n string object. If a class defines ``__repr__()`` but not\n ``__str__()``, then ``__repr__()`` is also used when an "informal"\n string representation of instances of that class is required.\n\n This is typically used for debugging, so it is important that the\n representation is information-rich and unambiguous.\n\nobject.__str__(self)\n\n Called by the ``str()`` built-in function and by the ``print``\n statement to compute the "informal" string representation of an\n object. 
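For example (an illustrative class, not taken from the original text), ``__repr__()`` and ``__str__()`` are often defined together:

   class Point(object):
       def __init__(self, x, y):
           self.x, self.y = x, y
       def __repr__(self):
           # aims to look like a valid constructor expression
           return "Point(%r, %r)" % (self.x, self.y)
       def __str__(self):
           # a more readable, "informal" form
           return "(%s, %s)" % (self.x, self.y)

   p = Point(2, 3)
   print repr(p)   # Point(2, 3)
   print p         # (2, 3) -- the print statement uses __str__()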
This differs from ``__repr__()`` in that it does not have\n to be a valid Python expression: a more convenient or concise\n representation may be used instead. The return value must be a\n string object.\n\nobject.__lt__(self, other)\nobject.__le__(self, other)\nobject.__eq__(self, other)\nobject.__ne__(self, other)\nobject.__gt__(self, other)\nobject.__ge__(self, other)\n\n New in version 2.1.\n\n These are the so-called "rich comparison" methods, and are called\n for comparison operators in preference to ``__cmp__()`` below. The\n correspondence between operator symbols and method names is as\n follows: ``x<y`` calls ``x.__lt__(y)``, ``x<=y`` calls\n ``x.__le__(y)``, ``x==y`` calls ``x.__eq__(y)``, ``x!=y`` and\n ``x<>y`` call ``x.__ne__(y)``, ``x>y`` calls ``x.__gt__(y)``, and\n ``x>=y`` calls ``x.__ge__(y)``.\n\n A rich comparison method may return the singleton\n ``NotImplemented`` if it does not implement the operation for a\n given pair of arguments. By convention, ``False`` and ``True`` are\n returned for a successful comparison. However, these methods can\n return any value, so if the comparison operator is used in a\n Boolean context (e.g., in the condition of an ``if`` statement),\n Python will call ``bool()`` on the value to determine if the result\n is true or false.\n\n There are no implied relationships among the comparison operators.\n The truth of ``x==y`` does not imply that ``x!=y`` is false.\n Accordingly, when defining ``__eq__()``, one should also define\n ``__ne__()`` so that the operators will behave as expected. See\n the paragraph on ``__hash__()`` for some important notes on\n creating *hashable* objects which support custom comparison\n operations and are usable as dictionary keys.\n\n There are no swapped-argument versions of these methods (to be used\n when the left argument does not support the operation but the right\n argument does); rather, ``__lt__()`` and ``__gt__()`` are each\n other\'s reflection, ``__le__()`` and ``__ge__()`` are each other\'s\n reflection, and ``__eq__()`` and ``__ne__()`` are their own\n reflection.\n\n Arguments to rich comparison methods are never coerced.\n\n To automatically generate ordering operations from a single root\n operation, see ``functools.total_ordering()``.\n\nobject.__cmp__(self, other)\n\n Called by comparison operations if rich comparison (see above) is\n not defined. Should return a negative integer if ``self < other``,\n zero if ``self == other``, a positive integer if ``self > other``.\n If no ``__cmp__()``, ``__eq__()`` or ``__ne__()`` operation is\n defined, class instances are compared by object identity\n ("address"). See also the description of ``__hash__()`` for some\n important notes on creating *hashable* objects which support custom\n comparison operations and are usable as dictionary keys. (Note: the\n restriction that exceptions are not propagated by ``__cmp__()`` has\n been removed since Python 1.5.)\n\nobject.__rcmp__(self, other)\n\n Changed in version 2.1: No longer supported.\n\nobject.__hash__(self)\n\n Called by built-in function ``hash()`` and for operations on\n members of hashed collections including ``set``, ``frozenset``, and\n ``dict``. ``__hash__()`` should return an integer. The only\n required property is that objects which compare equal have the same\n hash value; it is advised to somehow mix together (e.g. 
using\n exclusive or) the hash values for the components of the object that\n also play a part in comparison of objects.\n\n If a class does not define a ``__cmp__()`` or ``__eq__()`` method\n it should not define a ``__hash__()`` operation either; if it\n defines ``__cmp__()`` or ``__eq__()`` but not ``__hash__()``, its\n instances will not be usable in hashed collections. If a class\n defines mutable objects and implements a ``__cmp__()`` or\n ``__eq__()`` method, it should not implement ``__hash__()``, since\n hashable collection implementations require that a object\'s hash\n value is immutable (if the object\'s hash value changes, it will be\n in the wrong hash bucket).\n\n User-defined classes have ``__cmp__()`` and ``__hash__()`` methods\n by default; with them, all objects compare unequal (except with\n themselves) and ``x.__hash__()`` returns ``id(x)``.\n\n Classes which inherit a ``__hash__()`` method from a parent class\n but change the meaning of ``__cmp__()`` or ``__eq__()`` such that\n the hash value returned is no longer appropriate (e.g. by switching\n to a value-based concept of equality instead of the default\n identity based equality) can explicitly flag themselves as being\n unhashable by setting ``__hash__ = None`` in the class definition.\n Doing so means that not only will instances of the class raise an\n appropriate ``TypeError`` when a program attempts to retrieve their\n hash value, but they will also be correctly identified as\n unhashable when checking ``isinstance(obj, collections.Hashable)``\n (unlike classes which define their own ``__hash__()`` to explicitly\n raise ``TypeError``).\n\n Changed in version 2.5: ``__hash__()`` may now also return a long\n integer object; the 32-bit integer is then derived from the hash of\n that object.\n\n Changed in version 2.6: ``__hash__`` may now be set to ``None`` to\n explicitly flag instances of a class as unhashable.\n\nobject.__nonzero__(self)\n\n Called to implement truth value testing and the built-in operation\n ``bool()``; should return ``False`` or ``True``, or their integer\n equivalents ``0`` or ``1``. When this method is not defined,\n ``__len__()`` is called, if it is defined, and the object is\n considered true if its result is nonzero. If a class defines\n neither ``__len__()`` nor ``__nonzero__()``, all its instances are\n considered true.\n\nobject.__unicode__(self)\n\n Called to implement ``unicode()`` built-in; should return a Unicode\n object. When this method is not defined, string conversion is\n attempted, and the result of string conversion is converted to\n Unicode using the system default encoding.\n\n\nCustomizing attribute access\n============================\n\nThe following methods can be defined to customize the meaning of\nattribute access (use of, assignment to, or deletion of ``x.name``)\nfor class instances.\n\nobject.__getattr__(self, name)\n\n Called when an attribute lookup has not found the attribute in the\n usual places (i.e. it is not an instance attribute nor is it found\n in the class tree for ``self``). ``name`` is the attribute name.\n This method should return the (computed) attribute value or raise\n an ``AttributeError`` exception.\n\n Note that if the attribute is found through the normal mechanism,\n ``__getattr__()`` is not called. (This is an intentional asymmetry\n between ``__getattr__()`` and ``__setattr__()``.) This is done both\n for efficiency reasons and because otherwise ``__getattr__()``\n would have no way to access other attributes of the instance. 
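A minimal sketch of ``__getattr__()`` (the ``Proxy`` class and its attribute names are invented for illustration): because the method is only called when normal lookup fails, attributes that do live in the instance dictionary can be read inside it without recursion:

   class Proxy(object):
       def __init__(self, target):
           self._target = target   # stored normally, found by ordinary lookup
       def __getattr__(self, name):
           # called only for attributes *not* found the usual way
           return getattr(self._target, name)

   p = Proxy([10, 20, 30])
   print p._target     # [10, 20, 30] -- ordinary lookup, __getattr__ not called
   print p.index(20)   # 1 -- unknown attribute forwarded to the wrapped list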
Note\n that at least for instance variables, you can fake total control by\n not inserting any values in the instance attribute dictionary (but\n instead inserting them in another object). See the\n ``__getattribute__()`` method below for a way to actually get total\n control in new-style classes.\n\nobject.__setattr__(self, name, value)\n\n Called when an attribute assignment is attempted. This is called\n instead of the normal mechanism (i.e. store the value in the\n instance dictionary). *name* is the attribute name, *value* is the\n value to be assigned to it.\n\n If ``__setattr__()`` wants to assign to an instance attribute, it\n should not simply execute ``self.name = value`` --- this would\n cause a recursive call to itself. Instead, it should insert the\n value in the dictionary of instance attributes, e.g.,\n ``self.__dict__[name] = value``. For new-style classes, rather\n than accessing the instance dictionary, it should call the base\n class method with the same name, for example,\n ``object.__setattr__(self, name, value)``.\n\nobject.__delattr__(self, name)\n\n Like ``__setattr__()`` but for attribute deletion instead of\n assignment. This should only be implemented if ``del obj.name`` is\n meaningful for the object.\n\n\nMore attribute access for new-style classes\n-------------------------------------------\n\nThe following methods only apply to new-style classes.\n\nobject.__getattribute__(self, name)\n\n Called unconditionally to implement attribute accesses for\n instances of the class. If the class also defines\n ``__getattr__()``, the latter will not be called unless\n ``__getattribute__()`` either calls it explicitly or raises an\n ``AttributeError``. This method should return the (computed)\n attribute value or raise an ``AttributeError`` exception. In order\n to avoid infinite recursion in this method, its implementation\n should always call the base class method with the same name to\n access any attributes it needs, for example,\n ``object.__getattribute__(self, name)``.\n\n Note: This method may still be bypassed when looking up special methods\n as the result of implicit invocation via language syntax or\n built-in functions. See *Special method lookup for new-style\n classes*.\n\n\nImplementing Descriptors\n------------------------\n\nThe following methods only apply when an instance of the class\ncontaining the method (a so-called *descriptor* class) appears in an\n*owner* class (the descriptor must be in either the owner\'s class\ndictionary or in the class dictionary for one of its parents). In the\nexamples below, "the attribute" refers to the attribute whose name is\nthe key of the property in the owner class\' ``__dict__``.\n\nobject.__get__(self, instance, owner)\n\n Called to get the attribute of the owner class (class attribute\n access) or of an instance of that class (instance attribute\n access). *owner* is always the owner class, while *instance* is the\n instance that the attribute was accessed through, or ``None`` when\n the attribute is accessed through the *owner*. 
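The following invented data descriptor is a minimal sketch of this protocol; it stores the real value in the instance dictionary and checks a type on assignment:

   class Typed(object):
       # a data descriptor: defines both __get__() and __set__()
       def __init__(self, name, kind):
           self.name, self.kind = name, kind
       def __get__(self, instance, owner):
           if instance is None:
               return self   # accessed on the owner class itself
           return instance.__dict__[self.name]
       def __set__(self, instance, value):
           if not isinstance(value, self.kind):
               raise TypeError("%s must be %s" % (self.name, self.kind.__name__))
           instance.__dict__[self.name] = value

   class Account(object):
       balance = Typed('balance', int)

   a = Account()
   a.balance = 10    # goes through Typed.__set__()
   print a.balance   # 10, via Typed.__get__()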
This method should\n return the (computed) attribute value or raise an\n ``AttributeError`` exception.\n\nobject.__set__(self, instance, value)\n\n Called to set the attribute on an instance *instance* of the owner\n class to a new value, *value*.\n\nobject.__delete__(self, instance)\n\n Called to delete the attribute on an instance *instance* of the\n owner class.\n\n\nInvoking Descriptors\n--------------------\n\nIn general, a descriptor is an object attribute with "binding\nbehavior", one whose attribute access has been overridden by methods\nin the descriptor protocol: ``__get__()``, ``__set__()``, and\n``__delete__()``. If any of those methods are defined for an object,\nit is said to be a descriptor.\n\nThe default behavior for attribute access is to get, set, or delete\nthe attribute from an object\'s dictionary. For instance, ``a.x`` has a\nlookup chain starting with ``a.__dict__[\'x\']``, then\n``type(a).__dict__[\'x\']``, and continuing through the base classes of\n``type(a)`` excluding metaclasses.\n\nHowever, if the looked-up value is an object defining one of the\ndescriptor methods, then Python may override the default behavior and\ninvoke the descriptor method instead. Where this occurs in the\nprecedence chain depends on which descriptor methods were defined and\nhow they were called. Note that descriptors are only invoked for new\nstyle objects or classes (ones that subclass ``object()`` or\n``type()``).\n\nThe starting point for descriptor invocation is a binding, ``a.x``.\nHow the arguments are assembled depends on ``a``:\n\nDirect Call\n The simplest and least common call is when user code directly\n invokes a descriptor method: ``x.__get__(a)``.\n\nInstance Binding\n If binding to a new-style object instance, ``a.x`` is transformed\n into the call: ``type(a).__dict__[\'x\'].__get__(a, type(a))``.\n\nClass Binding\n If binding to a new-style class, ``A.x`` is transformed into the\n call: ``A.__dict__[\'x\'].__get__(None, A)``.\n\nSuper Binding\n If ``a`` is an instance of ``super``, then the binding ``super(B,\n obj).m()`` searches ``obj.__class__.__mro__`` for the base class\n ``A`` immediately preceding ``B`` and then invokes the descriptor\n with the call: ``A.__dict__[\'m\'].__get__(obj, obj.__class__)``.\n\nFor instance bindings, the precedence of descriptor invocation depends\non the which descriptor methods are defined. A descriptor can define\nany combination of ``__get__()``, ``__set__()`` and ``__delete__()``.\nIf it does not define ``__get__()``, then accessing the attribute will\nreturn the descriptor object itself unless there is a value in the\nobject\'s instance dictionary. If the descriptor defines ``__set__()``\nand/or ``__delete__()``, it is a data descriptor; if it defines\nneither, it is a non-data descriptor. Normally, data descriptors\ndefine both ``__get__()`` and ``__set__()``, while non-data\ndescriptors have just the ``__get__()`` method. Data descriptors with\n``__set__()`` and ``__get__()`` defined always override a redefinition\nin an instance dictionary. In contrast, non-data descriptors can be\noverridden by instances.\n\nPython methods (including ``staticmethod()`` and ``classmethod()``)\nare implemented as non-data descriptors. Accordingly, instances can\nredefine and override methods. 
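For instance (illustrative code, not from the original text), an instance attribute can shadow a plain method, which is a non-data descriptor, but not a ``property``, which is a data descriptor:

   class C(object):
       def greet(self):
           return "hello"
       @property
       def answer(self):
           return 42

   c = C()
   c.greet = lambda: "overridden"   # shadows the non-data descriptor
   print c.greet()                  # overridden
   try:
       c.answer = 7                 # the property defines __set__(), so it wins
   except AttributeError as e:
       print "cannot override:", e  # can't set attribute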
This allows individual instances to\nacquire behaviors that differ from other instances of the same class.\n\nThe ``property()`` function is implemented as a data descriptor.\nAccordingly, instances cannot override the behavior of a property.\n\n\n__slots__\n---------\n\nBy default, instances of both old and new-style classes have a\ndictionary for attribute storage. This wastes space for objects\nhaving very few instance variables. The space consumption can become\nacute when creating large numbers of instances.\n\nThe default can be overridden by defining *__slots__* in a new-style\nclass definition. The *__slots__* declaration takes a sequence of\ninstance variables and reserves just enough space in each instance to\nhold a value for each variable. Space is saved because *__dict__* is\nnot created for each instance.\n\n__slots__\n\n This class variable can be assigned a string, iterable, or sequence\n of strings with variable names used by instances. If defined in a\n new-style class, *__slots__* reserves space for the declared\n variables and prevents the automatic creation of *__dict__* and\n *__weakref__* for each instance.\n\n New in version 2.2.\n\nNotes on using *__slots__*\n\n* When inheriting from a class without *__slots__*, the *__dict__*\n attribute of that class will always be accessible, so a *__slots__*\n definition in the subclass is meaningless.\n\n* Without a *__dict__* variable, instances cannot be assigned new\n variables not listed in the *__slots__* definition. Attempts to\n assign to an unlisted variable name raises ``AttributeError``. If\n dynamic assignment of new variables is desired, then add\n ``\'__dict__\'`` to the sequence of strings in the *__slots__*\n declaration.\n\n Changed in version 2.3: Previously, adding ``\'__dict__\'`` to the\n *__slots__* declaration would not enable the assignment of new\n attributes not specifically listed in the sequence of instance\n variable names.\n\n* Without a *__weakref__* variable for each instance, classes defining\n *__slots__* do not support weak references to its instances. If weak\n reference support is needed, then add ``\'__weakref__\'`` to the\n sequence of strings in the *__slots__* declaration.\n\n Changed in version 2.3: Previously, adding ``\'__weakref__\'`` to the\n *__slots__* declaration would not enable support for weak\n references.\n\n* *__slots__* are implemented at the class level by creating\n descriptors (*Implementing Descriptors*) for each variable name. As\n a result, class attributes cannot be used to set default values for\n instance variables defined by *__slots__*; otherwise, the class\n attribute would overwrite the descriptor assignment.\n\n* The action of a *__slots__* declaration is limited to the class\n where it is defined. As a result, subclasses will have a *__dict__*\n unless they also define *__slots__* (which must only contain names\n of any *additional* slots).\n\n* If a class defines a slot also defined in a base class, the instance\n variable defined by the base class slot is inaccessible (except by\n retrieving its descriptor directly from the base class). This\n renders the meaning of the program undefined. In the future, a\n check may be added to prevent this.\n\n* Nonempty *__slots__* does not work for classes derived from\n "variable-length" built-in types such as ``long``, ``str`` and\n ``tuple``.\n\n* Any non-string iterable may be assigned to *__slots__*. 
Mappings may\n also be used; however, in the future, special meaning may be\n assigned to the values corresponding to each key.\n\n* *__class__* assignment works only if both classes have the same\n *__slots__*.\n\n Changed in version 2.6: Previously, *__class__* assignment raised an\n error if either new or old class had *__slots__*.\n\n\nCustomizing class creation\n==========================\n\nBy default, new-style classes are constructed using ``type()``. A\nclass definition is read into a separate namespace and the value of\nclass name is bound to the result of ``type(name, bases, dict)``.\n\nWhen the class definition is read, if *__metaclass__* is defined then\nthe callable assigned to it will be called instead of ``type()``. This\nallows classes or functions to be written which monitor or alter the\nclass creation process:\n\n* Modifying the class dictionary prior to the class being created.\n\n* Returning an instance of another class -- essentially performing the\n role of a factory function.\n\nThese steps will have to be performed in the metaclass\'s ``__new__()``\nmethod -- ``type.__new__()`` can then be called from this method to\ncreate a class with different properties. This example adds a new\nelement to the class dictionary before creating the class:\n\n class metacls(type):\n def __new__(mcs, name, bases, dict):\n dict[\'foo\'] = \'metacls was here\'\n return type.__new__(mcs, name, bases, dict)\n\nYou can of course also override other class methods (or add new\nmethods); for example defining a custom ``__call__()`` method in the\nmetaclass allows custom behavior when the class is called, e.g. not\nalways creating a new instance.\n\n__metaclass__\n\n This variable can be any callable accepting arguments for ``name``,\n ``bases``, and ``dict``. Upon class creation, the callable is used\n instead of the built-in ``type()``.\n\n New in version 2.2.\n\nThe appropriate metaclass is determined by the following precedence\nrules:\n\n* If ``dict[\'__metaclass__\']`` exists, it is used.\n\n* Otherwise, if there is at least one base class, its metaclass is\n used (this looks for a *__class__* attribute first and if not found,\n uses its type).\n\n* Otherwise, if a global variable named __metaclass__ exists, it is\n used.\n\n* Otherwise, the old-style, classic metaclass (types.ClassType) is\n used.\n\nThe potential uses for metaclasses are boundless. Some ideas that have\nbeen explored including logging, interface checking, automatic\ndelegation, automatic property creation, proxies, frameworks, and\nautomatic resource locking/synchronization.\n\n\nCustomizing instance and subclass checks\n========================================\n\nNew in version 2.6.\n\nThe following methods are used to override the default behavior of the\n``isinstance()`` and ``issubclass()`` built-in functions.\n\nIn particular, the metaclass ``abc.ABCMeta`` implements these methods\nin order to allow the addition of Abstract Base Classes (ABCs) as\n"virtual base classes" to any class or type (including built-in\ntypes), including other ABCs.\n\nclass.__instancecheck__(self, instance)\n\n Return true if *instance* should be considered a (direct or\n indirect) instance of *class*. If defined, called to implement\n ``isinstance(instance, class)``.\n\nclass.__subclasscheck__(self, subclass)\n\n Return true if *subclass* should be considered a (direct or\n indirect) subclass of *class*. 
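A small sketch (the class names are invented) of customizing these checks through a metaclass, in the spirit of ``abc.ABCMeta``:

   class DuckMeta(type):
       # defined on the metaclass, as noted below
       def __instancecheck__(cls, instance):
           return hasattr(instance, 'quack')
       def __subclasscheck__(cls, subclass):
           return hasattr(subclass, 'quack')

   class Duck(object):
       __metaclass__ = DuckMeta

   class Mallard(object):
       def quack(self):
           return "quack!"

   print isinstance(Mallard(), Duck)   # True
   print issubclass(Mallard, Duck)     # True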
If defined, called to implement\n ``issubclass(subclass, class)``.\n\nNote that these methods are looked up on the type (metaclass) of a\nclass. They cannot be defined as class methods in the actual class.\nThis is consistent with the lookup of special methods that are called\non instances, only in this case the instance is itself a class.\n\nSee also:\n\n **PEP 3119** - Introducing Abstract Base Classes\n Includes the specification for customizing ``isinstance()`` and\n ``issubclass()`` behavior through ``__instancecheck__()`` and\n ``__subclasscheck__()``, with motivation for this functionality\n in the context of adding Abstract Base Classes (see the ``abc``\n module) to the language.\n\n\nEmulating callable objects\n==========================\n\nobject.__call__(self[, args...])\n\n Called when the instance is "called" as a function; if this method\n is defined, ``x(arg1, arg2, ...)`` is a shorthand for\n ``x.__call__(arg1, arg2, ...)``.\n\n\nEmulating container types\n=========================\n\nThe following methods can be defined to implement container objects.\nContainers usually are sequences (such as lists or tuples) or mappings\n(like dictionaries), but can represent other containers as well. The\nfirst set of methods is used either to emulate a sequence or to\nemulate a mapping; the difference is that for a sequence, the\nallowable keys should be the integers *k* for which ``0 <= k < N``\nwhere *N* is the length of the sequence, or slice objects, which\ndefine a range of items. (For backwards compatibility, the method\n``__getslice__()`` (see below) can also be defined to handle simple,\nbut not extended slices.) It is also recommended that mappings provide\nthe methods ``keys()``, ``values()``, ``items()``, ``has_key()``,\n``get()``, ``clear()``, ``setdefault()``, ``iterkeys()``,\n``itervalues()``, ``iteritems()``, ``pop()``, ``popitem()``,\n``copy()``, and ``update()`` behaving similar to those for Python\'s\nstandard dictionary objects. The ``UserDict`` module provides a\n``DictMixin`` class to help create those methods from a base set of\n``__getitem__()``, ``__setitem__()``, ``__delitem__()``, and\n``keys()``. Mutable sequences should provide methods ``append()``,\n``count()``, ``index()``, ``extend()``, ``insert()``, ``pop()``,\n``remove()``, ``reverse()`` and ``sort()``, like Python standard list\nobjects. Finally, sequence types should implement addition (meaning\nconcatenation) and multiplication (meaning repetition) by defining the\nmethods ``__add__()``, ``__radd__()``, ``__iadd__()``, ``__mul__()``,\n``__rmul__()`` and ``__imul__()`` described below; they should not\ndefine ``__coerce__()`` or other numerical operators. It is\nrecommended that both mappings and sequences implement the\n``__contains__()`` method to allow efficient use of the ``in``\noperator; for mappings, ``in`` should be equivalent of ``has_key()``;\nfor sequences, it should search through the values. It is further\nrecommended that both mappings and sequences implement the\n``__iter__()`` method to allow efficient iteration through the\ncontainer; for mappings, ``__iter__()`` should be the same as\n``iterkeys()``; for sequences, it should iterate through the values.\n\nobject.__len__(self)\n\n Called to implement the built-in function ``len()``. Should return\n the length of the object, an integer ``>=`` 0. 
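A minimal read-only sequence sketch (the ``Squares`` class is invented for illustration; it ignores slices and negative indexes):

   class Squares(object):
       def __init__(self, n):
           self.n = n
       def __len__(self):
           return self.n
       def __getitem__(self, index):
           # only plain, non-negative integer indexes are handled here
           if not 0 <= index < self.n:
               raise IndexError(index)
           return index * index

   s = Squares(5)
   print len(s)    # 5
   print s[3]      # 9
   print list(s)   # [0, 1, 4, 9, 16] -- iteration falls back to __getitem__()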
Also, an object\n that doesn\'t define a ``__nonzero__()`` method and whose\n ``__len__()`` method returns zero is considered to be false in a\n Boolean context.\n\nobject.__getitem__(self, key)\n\n Called to implement evaluation of ``self[key]``. For sequence\n types, the accepted keys should be integers and slice objects.\n Note that the special interpretation of negative indexes (if the\n class wishes to emulate a sequence type) is up to the\n ``__getitem__()`` method. If *key* is of an inappropriate type,\n ``TypeError`` may be raised; if of a value outside the set of\n indexes for the sequence (after any special interpretation of\n negative values), ``IndexError`` should be raised. For mapping\n types, if *key* is missing (not in the container), ``KeyError``\n should be raised.\n\n Note: ``for`` loops expect that an ``IndexError`` will be raised for\n illegal indexes to allow proper detection of the end of the\n sequence.\n\nobject.__setitem__(self, key, value)\n\n Called to implement assignment to ``self[key]``. Same note as for\n ``__getitem__()``. This should only be implemented for mappings if\n the objects support changes to the values for keys, or if new keys\n can be added, or for sequences if elements can be replaced. The\n same exceptions should be raised for improper *key* values as for\n the ``__getitem__()`` method.\n\nobject.__delitem__(self, key)\n\n Called to implement deletion of ``self[key]``. Same note as for\n ``__getitem__()``. This should only be implemented for mappings if\n the objects support removal of keys, or for sequences if elements\n can be removed from the sequence. The same exceptions should be\n raised for improper *key* values as for the ``__getitem__()``\n method.\n\nobject.__iter__(self)\n\n This method is called when an iterator is required for a container.\n This method should return a new iterator object that can iterate\n over all the objects in the container. For mappings, it should\n iterate over the keys of the container, and should also be made\n available as the method ``iterkeys()``.\n\n Iterator objects also need to implement this method; they are\n required to return themselves. For more information on iterator\n objects, see *Iterator Types*.\n\nobject.__reversed__(self)\n\n Called (if present) by the ``reversed()`` built-in to implement\n reverse iteration. It should return a new iterator object that\n iterates over all the objects in the container in reverse order.\n\n If the ``__reversed__()`` method is not provided, the\n ``reversed()`` built-in will fall back to using the sequence\n protocol (``__len__()`` and ``__getitem__()``). Objects that\n support the sequence protocol should only provide\n ``__reversed__()`` if they can provide an implementation that is\n more efficient than the one provided by ``reversed()``.\n\n New in version 2.6.\n\nThe membership test operators (``in`` and ``not in``) are normally\nimplemented as an iteration through a sequence. However, container\nobjects can supply the following special method with a more efficient\nimplementation, which also does not require the object be a sequence.\n\nobject.__contains__(self, item)\n\n Called to implement membership test operators. Should return true\n if *item* is in *self*, false otherwise. 
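For example (an invented class), membership testing can be supported without the object being a sequence at all:

   class Interval(object):
       def __init__(self, low, high):
           self.low, self.high = low, high
       def __contains__(self, item):
           return self.low <= item <= self.high

   box = Interval(1, 10)
   print 5 in box       # True
   print 42 in box      # False
   print 0 not in box   # True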
For mapping objects, this\n should consider the keys of the mapping rather than the values or\n the key-item pairs.\n\n For objects that don\'t define ``__contains__()``, the membership\n test first tries iteration via ``__iter__()``, then the old\n sequence iteration protocol via ``__getitem__()``, see *this\n section in the language reference*.\n\n\nAdditional methods for emulation of sequence types\n==================================================\n\nThe following optional methods can be defined to further emulate\nsequence objects. Immutable sequences methods should at most only\ndefine ``__getslice__()``; mutable sequences might define all three\nmethods.\n\nobject.__getslice__(self, i, j)\n\n Deprecated since version 2.0: Support slice objects as parameters\n to the ``__getitem__()`` method. (However, built-in types in\n CPython currently still implement ``__getslice__()``. Therefore,\n you have to override it in derived classes when implementing\n slicing.)\n\n Called to implement evaluation of ``self[i:j]``. The returned\n object should be of the same type as *self*. Note that missing *i*\n or *j* in the slice expression are replaced by zero or\n ``sys.maxint``, respectively. If negative indexes are used in the\n slice, the length of the sequence is added to that index. If the\n instance does not implement the ``__len__()`` method, an\n ``AttributeError`` is raised. No guarantee is made that indexes\n adjusted this way are not still negative. Indexes which are\n greater than the length of the sequence are not modified. If no\n ``__getslice__()`` is found, a slice object is created instead, and\n passed to ``__getitem__()`` instead.\n\nobject.__setslice__(self, i, j, sequence)\n\n Called to implement assignment to ``self[i:j]``. Same notes for *i*\n and *j* as for ``__getslice__()``.\n\n This method is deprecated. If no ``__setslice__()`` is found, or\n for extended slicing of the form ``self[i:j:k]``, a slice object is\n created, and passed to ``__setitem__()``, instead of\n ``__setslice__()`` being called.\n\nobject.__delslice__(self, i, j)\n\n Called to implement deletion of ``self[i:j]``. Same notes for *i*\n and *j* as for ``__getslice__()``. This method is deprecated. If no\n ``__delslice__()`` is found, or for extended slicing of the form\n ``self[i:j:k]``, a slice object is created, and passed to\n ``__delitem__()``, instead of ``__delslice__()`` being called.\n\nNotice that these methods are only invoked when a single slice with a\nsingle colon is used, and the slice method is available. 
For slice\noperations involving extended slice notation, or in absence of the\nslice methods, ``__getitem__()``, ``__setitem__()`` or\n``__delitem__()`` is called with a slice object as argument.\n\nThe following example demonstrate how to make your program or module\ncompatible with earlier versions of Python (assuming that methods\n``__getitem__()``, ``__setitem__()`` and ``__delitem__()`` support\nslice objects as arguments):\n\n class MyClass:\n ...\n def __getitem__(self, index):\n ...\n def __setitem__(self, index, value):\n ...\n def __delitem__(self, index):\n ...\n\n if sys.version_info < (2, 0):\n # They won\'t be defined if version is at least 2.0 final\n\n def __getslice__(self, i, j):\n return self[max(0, i):max(0, j):]\n def __setslice__(self, i, j, seq):\n self[max(0, i):max(0, j):] = seq\n def __delslice__(self, i, j):\n del self[max(0, i):max(0, j):]\n ...\n\nNote the calls to ``max()``; these are necessary because of the\nhandling of negative indices before the ``__*slice__()`` methods are\ncalled. When negative indexes are used, the ``__*item__()`` methods\nreceive them as provided, but the ``__*slice__()`` methods get a\n"cooked" form of the index values. For each negative index value, the\nlength of the sequence is added to the index before calling the method\n(which may still result in a negative index); this is the customary\nhandling of negative indexes by the built-in sequence types, and the\n``__*item__()`` methods are expected to do this as well. However,\nsince they should already be doing that, negative indexes cannot be\npassed in; they must be constrained to the bounds of the sequence\nbefore being passed to the ``__*item__()`` methods. Calling ``max(0,\ni)`` conveniently returns the proper value.\n\n\nEmulating numeric types\n=======================\n\nThe following methods can be defined to emulate numeric objects.\nMethods corresponding to operations that are not supported by the\nparticular kind of number implemented (e.g., bitwise operations for\nnon-integral numbers) should be left undefined.\n\nobject.__add__(self, other)\nobject.__sub__(self, other)\nobject.__mul__(self, other)\nobject.__floordiv__(self, other)\nobject.__mod__(self, other)\nobject.__divmod__(self, other)\nobject.__pow__(self, other[, modulo])\nobject.__lshift__(self, other)\nobject.__rshift__(self, other)\nobject.__and__(self, other)\nobject.__xor__(self, other)\nobject.__or__(self, other)\n\n These methods are called to implement the binary arithmetic\n operations (``+``, ``-``, ``*``, ``//``, ``%``, ``divmod()``,\n ``pow()``, ``**``, ``<<``, ``>>``, ``&``, ``^``, ``|``). For\n instance, to evaluate the expression ``x + y``, where *x* is an\n instance of a class that has an ``__add__()`` method,\n ``x.__add__(y)`` is called. The ``__divmod__()`` method should be\n the equivalent to using ``__floordiv__()`` and ``__mod__()``; it\n should not be related to ``__truediv__()`` (described below). Note\n that ``__pow__()`` should be defined to accept an optional third\n argument if the ternary version of the built-in ``pow()`` function\n is to be supported.\n\n If one of those methods does not support the operation with the\n supplied arguments, it should return ``NotImplemented``.\n\nobject.__div__(self, other)\nobject.__truediv__(self, other)\n\n The division operator (``/``) is implemented by these methods. The\n ``__truediv__()`` method is used when ``__future__.division`` is in\n effect, otherwise ``__div__()`` is used. 
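A sketch of the binary-operator protocol described above (the ``Money`` class is invented): returning ``NotImplemented`` lets Python try the other operand, or raise ``TypeError`` if nothing handles the operation:

   class Money(object):
       def __init__(self, cents):
           self.cents = cents
       def __add__(self, other):
           if isinstance(other, Money):
               return Money(self.cents + other.cents)
           return NotImplemented   # let Python try other.__radd__()
       __radd__ = __add__          # addition is symmetric here
       def __repr__(self):
           return "Money(%d)" % self.cents

   print Money(150) + Money(25)   # Money(175)
   try:
       Money(150) + "tip"
   except TypeError as e:
       print e                    # unsupported operand type(s) for +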
If only one of these two\n methods is defined, the object will not support division in the\n alternate context; ``TypeError`` will be raised instead.\n\nobject.__radd__(self, other)\nobject.__rsub__(self, other)\nobject.__rmul__(self, other)\nobject.__rdiv__(self, other)\nobject.__rtruediv__(self, other)\nobject.__rfloordiv__(self, other)\nobject.__rmod__(self, other)\nobject.__rdivmod__(self, other)\nobject.__rpow__(self, other)\nobject.__rlshift__(self, other)\nobject.__rrshift__(self, other)\nobject.__rand__(self, other)\nobject.__rxor__(self, other)\nobject.__ror__(self, other)\n\n These methods are called to implement the binary arithmetic\n operations (``+``, ``-``, ``*``, ``/``, ``%``, ``divmod()``,\n ``pow()``, ``**``, ``<<``, ``>>``, ``&``, ``^``, ``|``) with\n reflected (swapped) operands. These functions are only called if\n the left operand does not support the corresponding operation and\n the operands are of different types. [2] For instance, to evaluate\n the expression ``x - y``, where *y* is an instance of a class that\n has an ``__rsub__()`` method, ``y.__rsub__(x)`` is called if\n ``x.__sub__(y)`` returns *NotImplemented*.\n\n Note that ternary ``pow()`` will not try calling ``__rpow__()``\n (the coercion rules would become too complicated).\n\n Note: If the right operand\'s type is a subclass of the left operand\'s\n type and that subclass provides the reflected method for the\n operation, this method will be called before the left operand\'s\n non-reflected method. This behavior allows subclasses to\n override their ancestors\' operations.\n\nobject.__iadd__(self, other)\nobject.__isub__(self, other)\nobject.__imul__(self, other)\nobject.__idiv__(self, other)\nobject.__itruediv__(self, other)\nobject.__ifloordiv__(self, other)\nobject.__imod__(self, other)\nobject.__ipow__(self, other[, modulo])\nobject.__ilshift__(self, other)\nobject.__irshift__(self, other)\nobject.__iand__(self, other)\nobject.__ixor__(self, other)\nobject.__ior__(self, other)\n\n These methods are called to implement the augmented arithmetic\n assignments (``+=``, ``-=``, ``*=``, ``/=``, ``//=``, ``%=``,\n ``**=``, ``<<=``, ``>>=``, ``&=``, ``^=``, ``|=``). These methods\n should attempt to do the operation in-place (modifying *self*) and\n return the result (which could be, but does not have to be,\n *self*). If a specific method is not defined, the augmented\n assignment falls back to the normal methods. For instance, to\n execute the statement ``x += y``, where *x* is an instance of a\n class that has an ``__iadd__()`` method, ``x.__iadd__(y)`` is\n called. If *x* is an instance of a class that does not define a\n ``__iadd__()`` method, ``x.__add__(y)`` and ``y.__radd__(x)`` are\n considered, as with the evaluation of ``x + y``.\n\nobject.__neg__(self)\nobject.__pos__(self)\nobject.__abs__(self)\nobject.__invert__(self)\n\n Called to implement the unary arithmetic operations (``-``, ``+``,\n ``abs()`` and ``~``).\n\nobject.__complex__(self)\nobject.__int__(self)\nobject.__long__(self)\nobject.__float__(self)\n\n Called to implement the built-in functions ``complex()``,\n ``int()``, ``long()``, and ``float()``. Should return a value of\n the appropriate type.\n\nobject.__oct__(self)\nobject.__hex__(self)\n\n Called to implement the built-in functions ``oct()`` and ``hex()``.\n Should return a string value.\n\nobject.__index__(self)\n\n Called to implement ``operator.index()``. Also called whenever\n Python needs an integer object (such as in slicing). 
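For instance (an invented class), anything defining ``__index__()`` can be used where Python expects an index:

   import operator

   class Nth(object):
       def __init__(self, n):
           self.n = n
       def __index__(self):
           return self.n

   three = Nth(3)
   print operator.index(three)   # 3
   print range(10)[:three]       # [0, 1, 2] -- slicing uses __index__()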
Must return\n an integer (int or long).\n\n New in version 2.5.\n\nobject.__coerce__(self, other)\n\n Called to implement "mixed-mode" numeric arithmetic. Should either\n return a 2-tuple containing *self* and *other* converted to a\n common numeric type, or ``None`` if conversion is impossible. When\n the common type would be the type of ``other``, it is sufficient to\n return ``None``, since the interpreter will also ask the other\n object to attempt a coercion (but sometimes, if the implementation\n of the other type cannot be changed, it is useful to do the\n conversion to the other type here). A return value of\n ``NotImplemented`` is equivalent to returning ``None``.\n\n\nCoercion rules\n==============\n\nThis section used to document the rules for coercion. As the language\nhas evolved, the coercion rules have become hard to document\nprecisely; documenting what one version of one particular\nimplementation does is undesirable. Instead, here are some informal\nguidelines regarding coercion. In Python 3.0, coercion will not be\nsupported.\n\n* If the left operand of a % operator is a string or Unicode object,\n no coercion takes place and the string formatting operation is\n invoked instead.\n\n* It is no longer recommended to define a coercion operation. Mixed-\n mode operations on types that don\'t define coercion pass the\n original arguments to the operation.\n\n* New-style classes (those derived from ``object``) never invoke the\n ``__coerce__()`` method in response to a binary operator; the only\n time ``__coerce__()`` is invoked is when the built-in function\n ``coerce()`` is called.\n\n* For most intents and purposes, an operator that returns\n ``NotImplemented`` is treated the same as one that is not\n implemented at all.\n\n* Below, ``__op__()`` and ``__rop__()`` are used to signify the\n generic method names corresponding to an operator; ``__iop__()`` is\n used for the corresponding in-place operator. For example, for the\n operator \'``+``\', ``__add__()`` and ``__radd__()`` are used for the\n left and right variant of the binary operator, and ``__iadd__()``\n for the in-place variant.\n\n* For objects *x* and *y*, first ``x.__op__(y)`` is tried. If this is\n not implemented or returns ``NotImplemented``, ``y.__rop__(x)`` is\n tried. If this is also not implemented or returns\n ``NotImplemented``, a ``TypeError`` exception is raised. But see\n the following exception:\n\n* Exception to the previous item: if the left operand is an instance\n of a built-in type or a new-style class, and the right operand is an\n instance of a proper subclass of that type or class and overrides\n the base\'s ``__rop__()`` method, the right operand\'s ``__rop__()``\n method is tried *before* the left operand\'s ``__op__()`` method.\n\n This is done so that a subclass can completely override binary\n operators. Otherwise, the left operand\'s ``__op__()`` method would\n always accept the right operand: when an instance of a given class\n is expected, an instance of a subclass of that class is always\n acceptable.\n\n* When either operand type defines a coercion, this coercion is called\n before that type\'s ``__op__()`` or ``__rop__()`` method is called,\n but no sooner. If the coercion returns an object of a different\n type for the operand whose coercion is invoked, part of the process\n is redone using the new object.\n\n* When an in-place operator (like \'``+=``\') is used, if the left\n operand implements ``__iop__()``, it is invoked without any\n coercion. 
When the operation falls back to ``__op__()`` and/or\n ``__rop__()``, the normal coercion rules apply.\n\n* In ``x + y``, if *x* is a sequence that implements sequence\n concatenation, sequence concatenation is invoked.\n\n* In ``x * y``, if one operator is a sequence that implements sequence\n repetition, and the other is an integer (``int`` or ``long``),\n sequence repetition is invoked.\n\n* Rich comparisons (implemented by methods ``__eq__()`` and so on)\n never use coercion. Three-way comparison (implemented by\n ``__cmp__()``) does use coercion under the same conditions as other\n binary operations use it.\n\n* In the current implementation, the built-in numeric types ``int``,\n ``long``, ``float``, and ``complex`` do not use coercion. All these\n types implement a ``__coerce__()`` method, for use by the built-in\n ``coerce()`` function.\n\n Changed in version 2.7.\n\n\nWith Statement Context Managers\n===============================\n\nNew in version 2.5.\n\nA *context manager* is an object that defines the runtime context to\nbe established when executing a ``with`` statement. The context\nmanager handles the entry into, and the exit from, the desired runtime\ncontext for the execution of the block of code. Context managers are\nnormally invoked using the ``with`` statement (described in section\n*The with statement*), but can also be used by directly invoking their\nmethods.\n\nTypical uses of context managers include saving and restoring various\nkinds of global state, locking and unlocking resources, closing opened\nfiles, etc.\n\nFor more information on context managers, see *Context Manager Types*.\n\nobject.__enter__(self)\n\n Enter the runtime context related to this object. The ``with``\n statement will bind this method\'s return value to the target(s)\n specified in the ``as`` clause of the statement, if any.\n\nobject.__exit__(self, exc_type, exc_value, traceback)\n\n Exit the runtime context related to this object. The parameters\n describe the exception that caused the context to be exited. If the\n context was exited without an exception, all three arguments will\n be ``None``.\n\n If an exception is supplied, and the method wishes to suppress the\n exception (i.e., prevent it from being propagated), it should\n return a true value. Otherwise, the exception will be processed\n normally upon exit from this method.\n\n Note that ``__exit__()`` methods should not reraise the passed-in\n exception; this is the caller\'s responsibility.\n\nSee also:\n\n **PEP 0343** - The "with" statement\n The specification, background, and examples for the Python\n ``with`` statement.\n\n\nSpecial method lookup for old-style classes\n===========================================\n\nFor old-style classes, special methods are always looked up in exactly\nthe same way as any other method or attribute. This is the case\nregardless of whether the method is being looked up explicitly as in\n``x.__getitem__(i)`` or implicitly as in ``x[i]``.\n\nThis behaviour means that special methods may exhibit different\nbehaviour for different instances of a single old-style class if the\nappropriate special attributes are set differently:\n\n >>> class C:\n ... 
pass\n ...\n >>> c1 = C()\n >>> c2 = C()\n >>> c1.__len__ = lambda: 5\n >>> c2.__len__ = lambda: 9\n >>> len(c1)\n 5\n >>> len(c2)\n 9\n\n\nSpecial method lookup for new-style classes\n===========================================\n\nFor new-style classes, implicit invocations of special methods are\nonly guaranteed to work correctly if defined on an object\'s type, not\nin the object\'s instance dictionary. That behaviour is the reason why\nthe following code raises an exception (unlike the equivalent example\nwith old-style classes):\n\n >>> class C(object):\n ... pass\n ...\n >>> c = C()\n >>> c.__len__ = lambda: 5\n >>> len(c)\n Traceback (most recent call last):\n File "", line 1, in \n TypeError: object of type \'C\' has no len()\n\nThe rationale behind this behaviour lies with a number of special\nmethods such as ``__hash__()`` and ``__repr__()`` that are implemented\nby all objects, including type objects. If the implicit lookup of\nthese methods used the conventional lookup process, they would fail\nwhen invoked on the type object itself:\n\n >>> 1 .__hash__() == hash(1)\n True\n >>> int.__hash__() == hash(int)\n Traceback (most recent call last):\n File "", line 1, in \n TypeError: descriptor \'__hash__\' of \'int\' object needs an argument\n\nIncorrectly attempting to invoke an unbound method of a class in this\nway is sometimes referred to as \'metaclass confusion\', and is avoided\nby bypassing the instance when looking up special methods:\n\n >>> type(1).__hash__(1) == hash(1)\n True\n >>> type(int).__hash__(int) == hash(int)\n True\n\nIn addition to bypassing any instance attributes in the interest of\ncorrectness, implicit special method lookup generally also bypasses\nthe ``__getattribute__()`` method even of the object\'s metaclass:\n\n >>> class Meta(type):\n ... def __getattribute__(*args):\n ... print "Metaclass getattribute invoked"\n ... return type.__getattribute__(*args)\n ...\n >>> class C(object):\n ... __metaclass__ = Meta\n ... def __len__(self):\n ... return 10\n ... def __getattribute__(*args):\n ... print "Class getattribute invoked"\n ... return object.__getattribute__(*args)\n ...\n >>> c = C()\n >>> c.__len__() # Explicit lookup via instance\n Class getattribute invoked\n 10\n >>> type(c).__len__(c) # Explicit lookup via type\n Metaclass getattribute invoked\n 10\n >>> len(c) # Implicit lookup\n 10\n\nBypassing the ``__getattribute__()`` machinery in this fashion\nprovides significant scope for speed optimisations within the\ninterpreter, at the cost of some flexibility in the handling of\nspecial methods (the special method *must* be set on the class object\nitself in order to be consistently invoked by the interpreter).\n\n-[ Footnotes ]-\n\n[1] It *is* possible in some cases to change an object\'s type, under\n certain controlled conditions. 
It generally isn\'t a good idea\n though, since it can lead to some very strange behaviour if it is\n handled incorrectly.\n\n[2] For operands of the same type, it is assumed that if the non-\n reflected method (such as ``__add__()``) fails the operation is\n not supported, which is why the reflected method is not called.\n', 'string-conversions': u'\nString conversions\n******************\n\nA string conversion is an expression list enclosed in reverse (a.k.a.\nbackward) quotes:\n\n string_conversion ::= "\'" expression_list "\'"\n\nA string conversion evaluates the contained expression list and\nconverts the resulting object into a string according to rules\nspecific to its type.\n\nIf the object is a string, a number, ``None``, or a tuple, list or\ndictionary containing only objects whose type is one of these, the\nresulting string is a valid Python expression which can be passed to\nthe built-in function ``eval()`` to yield an expression with the same\nvalue (or an approximation, if floating point numbers are involved).\n\n(In particular, converting a string adds quotes around it and converts\n"funny" characters to escape sequences that are safe to print.)\n\nRecursive objects (for example, lists or dictionaries that contain a\nreference to themselves, directly or indirectly) use ``...`` to\nindicate a recursive reference, and the result cannot be passed to\n``eval()`` to get an equal value (``SyntaxError`` will be raised\ninstead).\n\nThe built-in function ``repr()`` performs exactly the same conversion\nin its argument as enclosing it in parentheses and reverse quotes\ndoes. The built-in function ``str()`` performs a similar but more\nuser-friendly conversion.\n', - 'string-methods': u'\nString Methods\n**************\n\nBelow are listed the string methods which both 8-bit strings and\nUnicode objects support.\n\nIn addition, Python\'s strings support the sequence type methods\ndescribed in the *Sequence Types --- str, unicode, list, tuple,\nbuffer, xrange* section. To output formatted strings use template\nstrings or the ``%`` operator described in the *String Formatting\nOperations* section. Also, see the ``re`` module for string functions\nbased on regular expressions.\n\nstr.capitalize()\n\n Return a copy of the string with only its first character\n capitalized.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.center(width[, fillchar])\n\n Return centered in a string of length *width*. Padding is done\n using the specified *fillchar* (default is a space).\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.count(sub[, start[, end]])\n\n Return the number of non-overlapping occurrences of substring *sub*\n in the range [*start*, *end*]. Optional arguments *start* and\n *end* are interpreted as in slice notation.\n\nstr.decode([encoding[, errors]])\n\n Decodes the string using the codec registered for *encoding*.\n *encoding* defaults to the default string encoding. *errors* may\n be given to set a different error handling scheme. The default is\n ``\'strict\'``, meaning that encoding errors raise ``UnicodeError``.\n Other possible values are ``\'ignore\'``, ``\'replace\'`` and any other\n name registered via ``codecs.register_error()``, see section *Codec\n Base Classes*.\n\n New in version 2.2.\n\n Changed in version 2.3: Support for other error handling schemes\n added.\n\n Changed in version 2.7: Support for keyword arguments added.\n\nstr.encode([encoding[, errors]])\n\n Return an encoded version of the string. 
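For example (an illustrative session assuming Python 2 semantics, where ``str`` is a byte string and ``unicode`` is a separate type):

   u = u'caf\xe9'                      # a unicode string with one non-ASCII character
   data = u.encode('utf-8')            # the byte string 'caf\xc3\xa9'
   print type(data)                    # <type 'str'>
   print data.decode('utf-8') == u     # True
   print u.encode('ascii', 'replace')  # 'caf?'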
Default encoding is the\n current default string encoding. *errors* may be given to set a\n different error handling scheme. The default for *errors* is\n ``\'strict\'``, meaning that encoding errors raise a\n ``UnicodeError``. Other possible values are ``\'ignore\'``,\n ``\'replace\'``, ``\'xmlcharrefreplace\'``, ``\'backslashreplace\'`` and\n any other name registered via ``codecs.register_error()``, see\n section *Codec Base Classes*. For a list of possible encodings, see\n section *Standard Encodings*.\n\n New in version 2.0.\n\n Changed in version 2.3: Support for ``\'xmlcharrefreplace\'`` and\n ``\'backslashreplace\'`` and other error handling schemes added.\n\n Changed in version 2.7: Support for keyword arguments added.\n\nstr.endswith(suffix[, start[, end]])\n\n Return ``True`` if the string ends with the specified *suffix*,\n otherwise return ``False``. *suffix* can also be a tuple of\n suffixes to look for. With optional *start*, test beginning at\n that position. With optional *end*, stop comparing at that\n position.\n\n Changed in version 2.5: Accept tuples as *suffix*.\n\nstr.expandtabs([tabsize])\n\n Return a copy of the string where all tab characters are replaced\n by one or more spaces, depending on the current column and the\n given tab size. The column number is reset to zero after each\n newline occurring in the string. If *tabsize* is not given, a tab\n size of ``8`` characters is assumed. This doesn\'t understand other\n non-printing characters or escape sequences.\n\nstr.find(sub[, start[, end]])\n\n Return the lowest index in the string where substring *sub* is\n found, such that *sub* is contained in the slice ``s[start:end]``.\n Optional arguments *start* and *end* are interpreted as in slice\n notation. Return ``-1`` if *sub* is not found.\n\nstr.format(*args, **kwargs)\n\n Perform a string formatting operation. The string on which this\n method is called can contain literal text or replacement fields\n delimited by braces ``{}``. Each replacement field contains either\n the numeric index of a positional argument, or the name of a\n keyword argument. 
Returns a copy of the string where each\n replacement field is replaced with the string value of the\n corresponding argument.\n\n >>> "The sum of 1 + 2 is {0}".format(1+2)\n \'The sum of 1 + 2 is 3\'\n\n See *Format String Syntax* for a description of the various\n formatting options that can be specified in format strings.\n\n This method of string formatting is the new standard in Python 3.0,\n and should be preferred to the ``%`` formatting described in\n *String Formatting Operations* in new code.\n\n New in version 2.6.\n\nstr.index(sub[, start[, end]])\n\n Like ``find()``, but raise ``ValueError`` when the substring is not\n found.\n\nstr.isalnum()\n\n Return true if all characters in the string are alphanumeric and\n there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isalpha()\n\n Return true if all characters in the string are alphabetic and\n there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isdigit()\n\n Return true if all characters in the string are digits and there is\n at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.islower()\n\n Return true if all cased characters in the string are lowercase and\n there is at least one cased character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isspace()\n\n Return true if there are only whitespace characters in the string\n and there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.istitle()\n\n Return true if the string is a titlecased string and there is at\n least one character, for example uppercase characters may only\n follow uncased characters and lowercase characters only cased ones.\n Return false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isupper()\n\n Return true if all cased characters in the string are uppercase and\n there is at least one cased character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.join(iterable)\n\n Return a string which is the concatenation of the strings in the\n *iterable* *iterable*. The separator between elements is the\n string providing this method.\n\nstr.ljust(width[, fillchar])\n\n Return the string left justified in a string of length *width*.\n Padding is done using the specified *fillchar* (default is a\n space). The original string is returned if *width* is less than\n ``len(s)``.\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.lower()\n\n Return a copy of the string converted to lowercase.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.lstrip([chars])\n\n Return a copy of the string with leading characters removed. The\n *chars* argument is a string specifying the set of characters to be\n removed. If omitted or ``None``, the *chars* argument defaults to\n removing whitespace. The *chars* argument is not a prefix; rather,\n all combinations of its values are stripped:\n\n >>> \' spacious \'.lstrip()\n \'spacious \'\n >>> \'www.example.com\'.lstrip(\'cmowz.\')\n \'example.com\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.partition(sep)\n\n Split the string at the first occurrence of *sep*, and return a\n 3-tuple containing the part before the separator, the separator\n itself, and the part after the separator. 
If the separator is not\n found, return a 3-tuple containing the string itself, followed by\n two empty strings.\n\n New in version 2.5.\n\nstr.replace(old, new[, count])\n\n Return a copy of the string with all occurrences of substring *old*\n replaced by *new*. If the optional argument *count* is given, only\n the first *count* occurrences are replaced.\n\nstr.rfind(sub[, start[, end]])\n\n Return the highest index in the string where substring *sub* is\n found, such that *sub* is contained within ``s[start:end]``.\n Optional arguments *start* and *end* are interpreted as in slice\n notation. Return ``-1`` on failure.\n\nstr.rindex(sub[, start[, end]])\n\n Like ``rfind()`` but raises ``ValueError`` when the substring *sub*\n is not found.\n\nstr.rjust(width[, fillchar])\n\n Return the string right justified in a string of length *width*.\n Padding is done using the specified *fillchar* (default is a\n space). The original string is returned if *width* is less than\n ``len(s)``.\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.rpartition(sep)\n\n Split the string at the last occurrence of *sep*, and return a\n 3-tuple containing the part before the separator, the separator\n itself, and the part after the separator. If the separator is not\n found, return a 3-tuple containing two empty strings, followed by\n the string itself.\n\n New in version 2.5.\n\nstr.rsplit([sep[, maxsplit]])\n\n Return a list of the words in the string, using *sep* as the\n delimiter string. If *maxsplit* is given, at most *maxsplit* splits\n are done, the *rightmost* ones. If *sep* is not specified or\n ``None``, any whitespace string is a separator. Except for\n splitting from the right, ``rsplit()`` behaves like ``split()``\n which is described in detail below.\n\n New in version 2.4.\n\nstr.rstrip([chars])\n\n Return a copy of the string with trailing characters removed. The\n *chars* argument is a string specifying the set of characters to be\n removed. If omitted or ``None``, the *chars* argument defaults to\n removing whitespace. The *chars* argument is not a suffix; rather,\n all combinations of its values are stripped:\n\n >>> \' spacious \'.rstrip()\n \' spacious\'\n >>> \'mississippi\'.rstrip(\'ipz\')\n \'mississ\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.split([sep[, maxsplit]])\n\n Return a list of the words in the string, using *sep* as the\n delimiter string. If *maxsplit* is given, at most *maxsplit*\n splits are done (thus, the list will have at most ``maxsplit+1``\n elements). If *maxsplit* is not specified, then there is no limit\n on the number of splits (all possible splits are made).\n\n If *sep* is given, consecutive delimiters are not grouped together\n and are deemed to delimit empty strings (for example,\n ``\'1,,2\'.split(\',\')`` returns ``[\'1\', \'\', \'2\']``). The *sep*\n argument may consist of multiple characters (for example,\n ``\'1<>2<>3\'.split(\'<>\')`` returns ``[\'1\', \'2\', \'3\']``). Splitting\n an empty string with a specified separator returns ``[\'\']``.\n\n If *sep* is not specified or is ``None``, a different splitting\n algorithm is applied: runs of consecutive whitespace are regarded\n as a single separator, and the result will contain no empty strings\n at the start or end if the string has leading or trailing\n whitespace. 
Consequently, splitting an empty string or a string\n consisting of just whitespace with a ``None`` separator returns\n ``[]``.\n\n For example, ``\' 1 2 3 \'.split()`` returns ``[\'1\', \'2\', \'3\']``,\n and ``\' 1 2 3 \'.split(None, 1)`` returns ``[\'1\', \'2 3 \']``.\n\nstr.splitlines([keepends])\n\n Return a list of the lines in the string, breaking at line\n boundaries. Line breaks are not included in the resulting list\n unless *keepends* is given and true.\n\nstr.startswith(prefix[, start[, end]])\n\n Return ``True`` if string starts with the *prefix*, otherwise\n return ``False``. *prefix* can also be a tuple of prefixes to look\n for. With optional *start*, test string beginning at that\n position. With optional *end*, stop comparing string at that\n position.\n\n Changed in version 2.5: Accept tuples as *prefix*.\n\nstr.strip([chars])\n\n Return a copy of the string with the leading and trailing\n characters removed. The *chars* argument is a string specifying the\n set of characters to be removed. If omitted or ``None``, the\n *chars* argument defaults to removing whitespace. The *chars*\n argument is not a prefix or suffix; rather, all combinations of its\n values are stripped:\n\n >>> \' spacious \'.strip()\n \'spacious\'\n >>> \'www.example.com\'.strip(\'cmowz.\')\n \'example\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.swapcase()\n\n Return a copy of the string with uppercase characters converted to\n lowercase and vice versa.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.title()\n\n Return a titlecased version of the string where words start with an\n uppercase character and the remaining characters are lowercase.\n\n The algorithm uses a simple language-independent definition of a\n word as groups of consecutive letters. The definition works in\n many contexts but it means that apostrophes in contractions and\n possessives form word boundaries, which may not be the desired\n result:\n\n >>> "they\'re bill\'s friends from the UK".title()\n "They\'Re Bill\'S Friends From The Uk"\n\n A workaround for apostrophes can be constructed using regular\n expressions:\n\n >>> import re\n >>> def titlecase(s):\n return re.sub(r"[A-Za-z]+(\'[A-Za-z]+)?",\n lambda mo: mo.group(0)[0].upper() +\n mo.group(0)[1:].lower(),\n s)\n\n >>> titlecase("they\'re bill\'s friends.")\n "They\'re Bill\'s Friends."\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.translate(table[, deletechars])\n\n Return a copy of the string where all characters occurring in the\n optional argument *deletechars* are removed, and the remaining\n characters have been mapped through the given translation table,\n which must be a string of length 256.\n\n You can use the ``maketrans()`` helper function in the ``string``\n module to create a translation table. For string objects, set the\n *table* argument to ``None`` for translations that only delete\n characters:\n\n >>> \'read this short text\'.translate(None, \'aeiou\')\n \'rd ths shrt txt\'\n\n New in version 2.6: Support for a ``None`` *table* argument.\n\n For Unicode objects, the ``translate()`` method does not accept the\n optional *deletechars* argument. Instead, it returns a copy of the\n *s* where all characters have been mapped through the given\n translation table which must be a mapping of Unicode ordinals to\n Unicode ordinals, Unicode strings or ``None``. Unmapped characters\n are left untouched. 
Characters mapped to ``None`` are deleted.\n Note, a more flexible approach is to create a custom character\n mapping codec using the ``codecs`` module (see ``encodings.cp1251``\n for an example).\n\nstr.upper()\n\n Return a copy of the string converted to uppercase.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.zfill(width)\n\n Return the numeric string left filled with zeros in a string of\n length *width*. A sign prefix is handled correctly. The original\n string is returned if *width* is less than ``len(s)``.\n\n New in version 2.2.2.\n\nThe following methods are present only on unicode objects:\n\nunicode.isnumeric()\n\n Return ``True`` if there are only numeric characters in S,\n ``False`` otherwise. Numeric characters include digit characters,\n and all characters that have the Unicode numeric value property,\n e.g. U+2155, VULGAR FRACTION ONE FIFTH.\n\nunicode.isdecimal()\n\n Return ``True`` if there are only decimal characters in S,\n ``False`` otherwise. Decimal characters include digit characters,\n and all characters that that can be used to form decimal-radix\n numbers, e.g. U+0660, ARABIC-INDIC DIGIT ZERO.\n', - 'strings': u'\nString literals\n***************\n\nString literals are described by the following lexical definitions:\n\n stringliteral ::= [stringprefix](shortstring | longstring)\n stringprefix ::= "r" | "u" | "ur" | "R" | "U" | "UR" | "Ur" | "uR"\n shortstring ::= "\'" shortstringitem* "\'" | \'"\' shortstringitem* \'"\'\n longstring ::= "\'\'\'" longstringitem* "\'\'\'"\n | \'"""\' longstringitem* \'"""\'\n shortstringitem ::= shortstringchar | escapeseq\n longstringitem ::= longstringchar | escapeseq\n shortstringchar ::= \n longstringchar ::= \n escapeseq ::= "\\" \n\nOne syntactic restriction not indicated by these productions is that\nwhitespace is not allowed between the **stringprefix** and the rest of\nthe string literal. The source character set is defined by the\nencoding declaration; it is ASCII if no encoding declaration is given\nin the source file; see section *Encoding declarations*.\n\nIn plain English: String literals can be enclosed in matching single\nquotes (``\'``) or double quotes (``"``). They can also be enclosed in\nmatching groups of three single or double quotes (these are generally\nreferred to as *triple-quoted strings*). The backslash (``\\``)\ncharacter is used to escape characters that otherwise have a special\nmeaning, such as newline, backslash itself, or the quote character.\nString literals may optionally be prefixed with a letter ``\'r\'`` or\n``\'R\'``; such strings are called *raw strings* and use different rules\nfor interpreting backslash escape sequences. A prefix of ``\'u\'`` or\n``\'U\'`` makes the string a Unicode string. Unicode strings use the\nUnicode character set as defined by the Unicode Consortium and ISO\n10646. Some additional escape sequences, described below, are\navailable in Unicode strings. The two prefix characters may be\ncombined; in this case, ``\'u\'`` must appear before ``\'r\'``.\n\nIn triple-quoted strings, unescaped newlines and quotes are allowed\n(and are retained), except that three unescaped quotes in a row\nterminate the string. (A "quote" is the character used to open the\nstring, i.e. either ``\'`` or ``"``.)\n\nUnless an ``\'r\'`` or ``\'R\'`` prefix is present, escape sequences in\nstrings are interpreted according to rules similar to those used by\nStandard C. 
The recognized escape sequences are:\n\n+-------------------+-----------------------------------+---------+\n| Escape Sequence | Meaning | Notes |\n+===================+===================================+=========+\n| ``\\newline`` | Ignored | |\n+-------------------+-----------------------------------+---------+\n| ``\\\\`` | Backslash (``\\``) | |\n+-------------------+-----------------------------------+---------+\n| ``\\\'`` | Single quote (``\'``) | |\n+-------------------+-----------------------------------+---------+\n| ``\\"`` | Double quote (``"``) | |\n+-------------------+-----------------------------------+---------+\n| ``\\a`` | ASCII Bell (BEL) | |\n+-------------------+-----------------------------------+---------+\n| ``\\b`` | ASCII Backspace (BS) | |\n+-------------------+-----------------------------------+---------+\n| ``\\f`` | ASCII Formfeed (FF) | |\n+-------------------+-----------------------------------+---------+\n| ``\\n`` | ASCII Linefeed (LF) | |\n+-------------------+-----------------------------------+---------+\n| ``\\N{name}`` | Character named *name* in the | |\n| | Unicode database (Unicode only) | |\n+-------------------+-----------------------------------+---------+\n| ``\\r`` | ASCII Carriage Return (CR) | |\n+-------------------+-----------------------------------+---------+\n| ``\\t`` | ASCII Horizontal Tab (TAB) | |\n+-------------------+-----------------------------------+---------+\n| ``\\uxxxx`` | Character with 16-bit hex value | (1) |\n| | *xxxx* (Unicode only) | |\n+-------------------+-----------------------------------+---------+\n| ``\\Uxxxxxxxx`` | Character with 32-bit hex value | (2) |\n| | *xxxxxxxx* (Unicode only) | |\n+-------------------+-----------------------------------+---------+\n| ``\\v`` | ASCII Vertical Tab (VT) | |\n+-------------------+-----------------------------------+---------+\n| ``\\ooo`` | Character with octal value *ooo* | (3,5) |\n+-------------------+-----------------------------------+---------+\n| ``\\xhh`` | Character with hex value *hh* | (4,5) |\n+-------------------+-----------------------------------+---------+\n\nNotes:\n\n1. Individual code units which form parts of a surrogate pair can be\n encoded using this escape sequence.\n\n2. Any Unicode character can be encoded this way, but characters\n outside the Basic Multilingual Plane (BMP) will be encoded using a\n surrogate pair if Python is compiled to use 16-bit code units (the\n default). Individual code units which form parts of a surrogate\n pair can be encoded using this escape sequence.\n\n3. As in Standard C, up to three octal digits are accepted.\n\n4. Unlike in Standard C, exactly two hex digits are required.\n\n5. In a string literal, hexadecimal and octal escapes denote the byte\n with the given value; it is not necessary that the byte encodes a\n character in the source character set. In a Unicode literal, these\n escapes denote a Unicode character with the given value.\n\nUnlike Standard C, all unrecognized escape sequences are left in the\nstring unchanged, i.e., *the backslash is left in the string*. (This\nbehavior is useful when debugging: if an escape sequence is mistyped,\nthe resulting output is more easily recognized as broken.) 
It is also\nimportant to note that the escape sequences marked as "(Unicode only)"\nin the table above fall into the category of unrecognized escapes for\nnon-Unicode string literals.\n\nWhen an ``\'r\'`` or ``\'R\'`` prefix is present, a character following a\nbackslash is included in the string without change, and *all\nbackslashes are left in the string*. For example, the string literal\n``r"\\n"`` consists of two characters: a backslash and a lowercase\n``\'n\'``. String quotes can be escaped with a backslash, but the\nbackslash remains in the string; for example, ``r"\\""`` is a valid\nstring literal consisting of two characters: a backslash and a double\nquote; ``r"\\"`` is not a valid string literal (even a raw string\ncannot end in an odd number of backslashes). Specifically, *a raw\nstring cannot end in a single backslash* (since the backslash would\nescape the following quote character). Note also that a single\nbackslash followed by a newline is interpreted as those two characters\nas part of the string, *not* as a line continuation.\n\nWhen an ``\'r\'`` or ``\'R\'`` prefix is used in conjunction with a\n``\'u\'`` or ``\'U\'`` prefix, then the ``\\uXXXX`` and ``\\UXXXXXXXX``\nescape sequences are processed while *all other backslashes are left\nin the string*. For example, the string literal ``ur"\\u0062\\n"``\nconsists of three Unicode characters: \'LATIN SMALL LETTER B\', \'REVERSE\nSOLIDUS\', and \'LATIN SMALL LETTER N\'. Backslashes can be escaped with\na preceding backslash; however, both remain in the string. As a\nresult, ``\\uXXXX`` escape sequences are only recognized when there are\nan odd number of backslashes.\n', + 'string-methods': u'\nString Methods\n**************\n\nBelow are listed the string methods which both 8-bit strings and\nUnicode objects support. Some of them are also available on\n``bytearray`` objects.\n\nIn addition, Python\'s strings support the sequence type methods\ndescribed in the *Sequence Types --- str, unicode, list, tuple,\nbytearray, buffer, xrange* section. To output formatted strings use\ntemplate strings or the ``%`` operator described in the *String\nFormatting Operations* section. Also, see the ``re`` module for string\nfunctions based on regular expressions.\n\nstr.capitalize()\n\n Return a copy of the string with its first character capitalized\n and the rest lowercased.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.center(width[, fillchar])\n\n Return centered in a string of length *width*. Padding is done\n using the specified *fillchar* (default is a space).\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.count(sub[, start[, end]])\n\n Return the number of non-overlapping occurrences of substring *sub*\n in the range [*start*, *end*]. Optional arguments *start* and\n *end* are interpreted as in slice notation.\n\nstr.decode([encoding[, errors]])\n\n Decodes the string using the codec registered for *encoding*.\n *encoding* defaults to the default string encoding. *errors* may\n be given to set a different error handling scheme. 
The default is\n ``\'strict\'``, meaning that encoding errors raise ``UnicodeError``.\n Other possible values are ``\'ignore\'``, ``\'replace\'`` and any other\n name registered via ``codecs.register_error()``, see section *Codec\n Base Classes*.\n\n New in version 2.2.\n\n Changed in version 2.3: Support for other error handling schemes\n added.\n\n Changed in version 2.7: Support for keyword arguments added.\n\nstr.encode([encoding[, errors]])\n\n Return an encoded version of the string. Default encoding is the\n current default string encoding. *errors* may be given to set a\n different error handling scheme. The default for *errors* is\n ``\'strict\'``, meaning that encoding errors raise a\n ``UnicodeError``. Other possible values are ``\'ignore\'``,\n ``\'replace\'``, ``\'xmlcharrefreplace\'``, ``\'backslashreplace\'`` and\n any other name registered via ``codecs.register_error()``, see\n section *Codec Base Classes*. For a list of possible encodings, see\n section *Standard Encodings*.\n\n New in version 2.0.\n\n Changed in version 2.3: Support for ``\'xmlcharrefreplace\'`` and\n ``\'backslashreplace\'`` and other error handling schemes added.\n\n Changed in version 2.7: Support for keyword arguments added.\n\nstr.endswith(suffix[, start[, end]])\n\n Return ``True`` if the string ends with the specified *suffix*,\n otherwise return ``False``. *suffix* can also be a tuple of\n suffixes to look for. With optional *start*, test beginning at\n that position. With optional *end*, stop comparing at that\n position.\n\n Changed in version 2.5: Accept tuples as *suffix*.\n\nstr.expandtabs([tabsize])\n\n Return a copy of the string where all tab characters are replaced\n by one or more spaces, depending on the current column and the\n given tab size. The column number is reset to zero after each\n newline occurring in the string. If *tabsize* is not given, a tab\n size of ``8`` characters is assumed. This doesn\'t understand other\n non-printing characters or escape sequences.\n\nstr.find(sub[, start[, end]])\n\n Return the lowest index in the string where substring *sub* is\n found, such that *sub* is contained in the slice ``s[start:end]``.\n Optional arguments *start* and *end* are interpreted as in slice\n notation. Return ``-1`` if *sub* is not found.\n\n Note: The ``find()`` method should be used only if you need to know the\n position of *sub*. To check if *sub* is a substring or not, use\n the ``in`` operator:\n\n >>> \'Py\' in \'Python\'\n True\n\nstr.format(*args, **kwargs)\n\n Perform a string formatting operation. The string on which this\n method is called can contain literal text or replacement fields\n delimited by braces ``{}``. Each replacement field contains either\n the numeric index of a positional argument, or the name of a\n keyword argument. 
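As an illustration of the ``endswith()``, ``find()`` and ``expandtabs()`` behaviour described above, a minimal interactive sketch (assuming a Python 2 interpreter):

    >>> 'example.pyc'.endswith(('.py', '.pyc'))   # a tuple of suffixes is accepted
    True
    >>> 'Python'.find('th'), 'Python'.find('xy')  # index of sub, or -1 if absent
    (2, -1)
    >>> 'Py' in 'Python'                          # preferred pure membership test
    True
    >>> '01\t012\t0123'.expandtabs(4)             # pad each tab to a 4-column stop
    '01  012 0123'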
Returns a copy of the string where each\n replacement field is replaced with the string value of the\n corresponding argument.\n\n >>> "The sum of 1 + 2 is {0}".format(1+2)\n \'The sum of 1 + 2 is 3\'\n\n See *Format String Syntax* for a description of the various\n formatting options that can be specified in format strings.\n\n This method of string formatting is the new standard in Python 3.0,\n and should be preferred to the ``%`` formatting described in\n *String Formatting Operations* in new code.\n\n New in version 2.6.\n\nstr.index(sub[, start[, end]])\n\n Like ``find()``, but raise ``ValueError`` when the substring is not\n found.\n\nstr.isalnum()\n\n Return true if all characters in the string are alphanumeric and\n there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isalpha()\n\n Return true if all characters in the string are alphabetic and\n there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isdigit()\n\n Return true if all characters in the string are digits and there is\n at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.islower()\n\n Return true if all cased characters in the string are lowercase and\n there is at least one cased character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isspace()\n\n Return true if there are only whitespace characters in the string\n and there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.istitle()\n\n Return true if the string is a titlecased string and there is at\n least one character, for example uppercase characters may only\n follow uncased characters and lowercase characters only cased ones.\n Return false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isupper()\n\n Return true if all cased characters in the string are uppercase and\n there is at least one cased character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.join(iterable)\n\n Return a string which is the concatenation of the strings in the\n *iterable* *iterable*. The separator between elements is the\n string providing this method.\n\nstr.ljust(width[, fillchar])\n\n Return the string left justified in a string of length *width*.\n Padding is done using the specified *fillchar* (default is a\n space). The original string is returned if *width* is less than\n ``len(s)``.\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.lower()\n\n Return a copy of the string converted to lowercase.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.lstrip([chars])\n\n Return a copy of the string with leading characters removed. The\n *chars* argument is a string specifying the set of characters to be\n removed. If omitted or ``None``, the *chars* argument defaults to\n removing whitespace. The *chars* argument is not a prefix; rather,\n all combinations of its values are stripped:\n\n >>> \' spacious \'.lstrip()\n \'spacious \'\n >>> \'www.example.com\'.lstrip(\'cmowz.\')\n \'example.com\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.partition(sep)\n\n Split the string at the first occurrence of *sep*, and return a\n 3-tuple containing the part before the separator, the separator\n itself, and the part after the separator. 
If the separator is not\n found, return a 3-tuple containing the string itself, followed by\n two empty strings.\n\n New in version 2.5.\n\nstr.replace(old, new[, count])\n\n Return a copy of the string with all occurrences of substring *old*\n replaced by *new*. If the optional argument *count* is given, only\n the first *count* occurrences are replaced.\n\nstr.rfind(sub[, start[, end]])\n\n Return the highest index in the string where substring *sub* is\n found, such that *sub* is contained within ``s[start:end]``.\n Optional arguments *start* and *end* are interpreted as in slice\n notation. Return ``-1`` on failure.\n\nstr.rindex(sub[, start[, end]])\n\n Like ``rfind()`` but raises ``ValueError`` when the substring *sub*\n is not found.\n\nstr.rjust(width[, fillchar])\n\n Return the string right justified in a string of length *width*.\n Padding is done using the specified *fillchar* (default is a\n space). The original string is returned if *width* is less than\n ``len(s)``.\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.rpartition(sep)\n\n Split the string at the last occurrence of *sep*, and return a\n 3-tuple containing the part before the separator, the separator\n itself, and the part after the separator. If the separator is not\n found, return a 3-tuple containing two empty strings, followed by\n the string itself.\n\n New in version 2.5.\n\nstr.rsplit([sep[, maxsplit]])\n\n Return a list of the words in the string, using *sep* as the\n delimiter string. If *maxsplit* is given, at most *maxsplit* splits\n are done, the *rightmost* ones. If *sep* is not specified or\n ``None``, any whitespace string is a separator. Except for\n splitting from the right, ``rsplit()`` behaves like ``split()``\n which is described in detail below.\n\n New in version 2.4.\n\nstr.rstrip([chars])\n\n Return a copy of the string with trailing characters removed. The\n *chars* argument is a string specifying the set of characters to be\n removed. If omitted or ``None``, the *chars* argument defaults to\n removing whitespace. The *chars* argument is not a suffix; rather,\n all combinations of its values are stripped:\n\n >>> \' spacious \'.rstrip()\n \' spacious\'\n >>> \'mississippi\'.rstrip(\'ipz\')\n \'mississ\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.split([sep[, maxsplit]])\n\n Return a list of the words in the string, using *sep* as the\n delimiter string. If *maxsplit* is given, at most *maxsplit*\n splits are done (thus, the list will have at most ``maxsplit+1``\n elements). If *maxsplit* is not specified, then there is no limit\n on the number of splits (all possible splits are made).\n\n If *sep* is given, consecutive delimiters are not grouped together\n and are deemed to delimit empty strings (for example,\n ``\'1,,2\'.split(\',\')`` returns ``[\'1\', \'\', \'2\']``). The *sep*\n argument may consist of multiple characters (for example,\n ``\'1<>2<>3\'.split(\'<>\')`` returns ``[\'1\', \'2\', \'3\']``). Splitting\n an empty string with a specified separator returns ``[\'\']``.\n\n If *sep* is not specified or is ``None``, a different splitting\n algorithm is applied: runs of consecutive whitespace are regarded\n as a single separator, and the result will contain no empty strings\n at the start or end if the string has leading or trailing\n whitespace. 
Consequently, splitting an empty string or a string\n consisting of just whitespace with a ``None`` separator returns\n ``[]``.\n\n For example, ``\' 1 2 3 \'.split()`` returns ``[\'1\', \'2\', \'3\']``,\n and ``\' 1 2 3 \'.split(None, 1)`` returns ``[\'1\', \'2 3 \']``.\n\nstr.splitlines([keepends])\n\n Return a list of the lines in the string, breaking at line\n boundaries. Line breaks are not included in the resulting list\n unless *keepends* is given and true.\n\nstr.startswith(prefix[, start[, end]])\n\n Return ``True`` if string starts with the *prefix*, otherwise\n return ``False``. *prefix* can also be a tuple of prefixes to look\n for. With optional *start*, test string beginning at that\n position. With optional *end*, stop comparing string at that\n position.\n\n Changed in version 2.5: Accept tuples as *prefix*.\n\nstr.strip([chars])\n\n Return a copy of the string with the leading and trailing\n characters removed. The *chars* argument is a string specifying the\n set of characters to be removed. If omitted or ``None``, the\n *chars* argument defaults to removing whitespace. The *chars*\n argument is not a prefix or suffix; rather, all combinations of its\n values are stripped:\n\n >>> \' spacious \'.strip()\n \'spacious\'\n >>> \'www.example.com\'.strip(\'cmowz.\')\n \'example\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.swapcase()\n\n Return a copy of the string with uppercase characters converted to\n lowercase and vice versa.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.title()\n\n Return a titlecased version of the string where words start with an\n uppercase character and the remaining characters are lowercase.\n\n The algorithm uses a simple language-independent definition of a\n word as groups of consecutive letters. The definition works in\n many contexts but it means that apostrophes in contractions and\n possessives form word boundaries, which may not be the desired\n result:\n\n >>> "they\'re bill\'s friends from the UK".title()\n "They\'Re Bill\'S Friends From The Uk"\n\n A workaround for apostrophes can be constructed using regular\n expressions:\n\n >>> import re\n >>> def titlecase(s):\n return re.sub(r"[A-Za-z]+(\'[A-Za-z]+)?",\n lambda mo: mo.group(0)[0].upper() +\n mo.group(0)[1:].lower(),\n s)\n\n >>> titlecase("they\'re bill\'s friends.")\n "They\'re Bill\'s Friends."\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.translate(table[, deletechars])\n\n Return a copy of the string where all characters occurring in the\n optional argument *deletechars* are removed, and the remaining\n characters have been mapped through the given translation table,\n which must be a string of length 256.\n\n You can use the ``maketrans()`` helper function in the ``string``\n module to create a translation table. For string objects, set the\n *table* argument to ``None`` for translations that only delete\n characters:\n\n >>> \'read this short text\'.translate(None, \'aeiou\')\n \'rd ths shrt txt\'\n\n New in version 2.6: Support for a ``None`` *table* argument.\n\n For Unicode objects, the ``translate()`` method does not accept the\n optional *deletechars* argument. Instead, it returns a copy of the\n *s* where all characters have been mapped through the given\n translation table which must be a mapping of Unicode ordinals to\n Unicode ordinals, Unicode strings or ``None``. Unmapped characters\n are left untouched. 
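As an illustration of the ``partition()``, ``rpartition()``, ``split()`` and ``rsplit()`` behaviour described above, a minimal interactive sketch (assuming a Python 2 interpreter):

    >>> 'key=value=more'.partition('=')      # split once, from the left
    ('key', '=', 'value=more')
    >>> 'key=value=more'.rpartition('=')     # split once, from the right
    ('key=value', '=', 'more')
    >>> '1,,2'.split(',')                    # explicit sep keeps empty strings
    ['1', '', '2']
    >>> '  1  2   3  '.split()               # sep=None collapses whitespace runs
    ['1', '2', '3']
    >>> 'a.b.c'.rsplit('.', 1)               # maxsplit counted from the right
    ['a.b', 'c']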
Characters mapped to ``None`` are deleted.\n Note, a more flexible approach is to create a custom character\n mapping codec using the ``codecs`` module (see ``encodings.cp1251``\n for an example).\n\nstr.upper()\n\n Return a copy of the string converted to uppercase.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.zfill(width)\n\n Return the numeric string left filled with zeros in a string of\n length *width*. A sign prefix is handled correctly. The original\n string is returned if *width* is less than ``len(s)``.\n\n New in version 2.2.2.\n\nThe following methods are present only on unicode objects:\n\nunicode.isnumeric()\n\n Return ``True`` if there are only numeric characters in S,\n ``False`` otherwise. Numeric characters include digit characters,\n and all characters that have the Unicode numeric value property,\n e.g. U+2155, VULGAR FRACTION ONE FIFTH.\n\nunicode.isdecimal()\n\n Return ``True`` if there are only decimal characters in S,\n ``False`` otherwise. Decimal characters include digit characters,\n and all characters that that can be used to form decimal-radix\n numbers, e.g. U+0660, ARABIC-INDIC DIGIT ZERO.\n', + 'strings': u'\nString literals\n***************\n\nString literals are described by the following lexical definitions:\n\n stringliteral ::= [stringprefix](shortstring | longstring)\n stringprefix ::= "r" | "u" | "ur" | "R" | "U" | "UR" | "Ur" | "uR"\n | "b" | "B" | "br" | "Br" | "bR" | "BR"\n shortstring ::= "\'" shortstringitem* "\'" | \'"\' shortstringitem* \'"\'\n longstring ::= "\'\'\'" longstringitem* "\'\'\'"\n | \'"""\' longstringitem* \'"""\'\n shortstringitem ::= shortstringchar | escapeseq\n longstringitem ::= longstringchar | escapeseq\n shortstringchar ::= \n longstringchar ::= \n escapeseq ::= "\\" \n\nOne syntactic restriction not indicated by these productions is that\nwhitespace is not allowed between the **stringprefix** and the rest of\nthe string literal. The source character set is defined by the\nencoding declaration; it is ASCII if no encoding declaration is given\nin the source file; see section *Encoding declarations*.\n\nIn plain English: String literals can be enclosed in matching single\nquotes (``\'``) or double quotes (``"``). They can also be enclosed in\nmatching groups of three single or double quotes (these are generally\nreferred to as *triple-quoted strings*). The backslash (``\\``)\ncharacter is used to escape characters that otherwise have a special\nmeaning, such as newline, backslash itself, or the quote character.\nString literals may optionally be prefixed with a letter ``\'r\'`` or\n``\'R\'``; such strings are called *raw strings* and use different rules\nfor interpreting backslash escape sequences. A prefix of ``\'u\'`` or\n``\'U\'`` makes the string a Unicode string. Unicode strings use the\nUnicode character set as defined by the Unicode Consortium and ISO\n10646. Some additional escape sequences, described below, are\navailable in Unicode strings. A prefix of ``\'b\'`` or ``\'B\'`` is\nignored in Python 2; it indicates that the literal should become a\nbytes literal in Python 3 (e.g. when code is automatically converted\nwith 2to3). A ``\'u\'`` or ``\'b\'`` prefix may be followed by an ``\'r\'``\nprefix.\n\nIn triple-quoted strings, unescaped newlines and quotes are allowed\n(and are retained), except that three unescaped quotes in a row\nterminate the string. (A "quote" is the character used to open the\nstring, i.e. 
either ``\'`` or ``"``.)\n\nUnless an ``\'r\'`` or ``\'R\'`` prefix is present, escape sequences in\nstrings are interpreted according to rules similar to those used by\nStandard C. The recognized escape sequences are:\n\n+-------------------+-----------------------------------+---------+\n| Escape Sequence | Meaning | Notes |\n+===================+===================================+=========+\n| ``\\newline`` | Ignored | |\n+-------------------+-----------------------------------+---------+\n| ``\\\\`` | Backslash (``\\``) | |\n+-------------------+-----------------------------------+---------+\n| ``\\\'`` | Single quote (``\'``) | |\n+-------------------+-----------------------------------+---------+\n| ``\\"`` | Double quote (``"``) | |\n+-------------------+-----------------------------------+---------+\n| ``\\a`` | ASCII Bell (BEL) | |\n+-------------------+-----------------------------------+---------+\n| ``\\b`` | ASCII Backspace (BS) | |\n+-------------------+-----------------------------------+---------+\n| ``\\f`` | ASCII Formfeed (FF) | |\n+-------------------+-----------------------------------+---------+\n| ``\\n`` | ASCII Linefeed (LF) | |\n+-------------------+-----------------------------------+---------+\n| ``\\N{name}`` | Character named *name* in the | |\n| | Unicode database (Unicode only) | |\n+-------------------+-----------------------------------+---------+\n| ``\\r`` | ASCII Carriage Return (CR) | |\n+-------------------+-----------------------------------+---------+\n| ``\\t`` | ASCII Horizontal Tab (TAB) | |\n+-------------------+-----------------------------------+---------+\n| ``\\uxxxx`` | Character with 16-bit hex value | (1) |\n| | *xxxx* (Unicode only) | |\n+-------------------+-----------------------------------+---------+\n| ``\\Uxxxxxxxx`` | Character with 32-bit hex value | (2) |\n| | *xxxxxxxx* (Unicode only) | |\n+-------------------+-----------------------------------+---------+\n| ``\\v`` | ASCII Vertical Tab (VT) | |\n+-------------------+-----------------------------------+---------+\n| ``\\ooo`` | Character with octal value *ooo* | (3,5) |\n+-------------------+-----------------------------------+---------+\n| ``\\xhh`` | Character with hex value *hh* | (4,5) |\n+-------------------+-----------------------------------+---------+\n\nNotes:\n\n1. Individual code units which form parts of a surrogate pair can be\n encoded using this escape sequence.\n\n2. Any Unicode character can be encoded this way, but characters\n outside the Basic Multilingual Plane (BMP) will be encoded using a\n surrogate pair if Python is compiled to use 16-bit code units (the\n default). Individual code units which form parts of a surrogate\n pair can be encoded using this escape sequence.\n\n3. As in Standard C, up to three octal digits are accepted.\n\n4. Unlike in Standard C, exactly two hex digits are required.\n\n5. In a string literal, hexadecimal and octal escapes denote the byte\n with the given value; it is not necessary that the byte encodes a\n character in the source character set. In a Unicode literal, these\n escapes denote a Unicode character with the given value.\n\nUnlike Standard C, all unrecognized escape sequences are left in the\nstring unchanged, i.e., *the backslash is left in the string*. (This\nbehavior is useful when debugging: if an escape sequence is mistyped,\nthe resulting output is more easily recognized as broken.) 
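As an illustration of the escape-sequence and raw-string rules tabulated above, a minimal interactive sketch (assuming a Python 2 interpreter):

    >>> len('\n'), len(r'\n')      # recognized escape vs. raw string
    (1, 2)
    >>> '\x41' + '\101'            # hex and octal escapes both give 'A'
    'AA'
    >>> '\q'                       # unrecognized escape: the backslash stays
    '\\q'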
It is also\nimportant to note that the escape sequences marked as "(Unicode only)"\nin the table above fall into the category of unrecognized escapes for\nnon-Unicode string literals.\n\nWhen an ``\'r\'`` or ``\'R\'`` prefix is present, a character following a\nbackslash is included in the string without change, and *all\nbackslashes are left in the string*. For example, the string literal\n``r"\\n"`` consists of two characters: a backslash and a lowercase\n``\'n\'``. String quotes can be escaped with a backslash, but the\nbackslash remains in the string; for example, ``r"\\""`` is a valid\nstring literal consisting of two characters: a backslash and a double\nquote; ``r"\\"`` is not a valid string literal (even a raw string\ncannot end in an odd number of backslashes). Specifically, *a raw\nstring cannot end in a single backslash* (since the backslash would\nescape the following quote character). Note also that a single\nbackslash followed by a newline is interpreted as those two characters\nas part of the string, *not* as a line continuation.\n\nWhen an ``\'r\'`` or ``\'R\'`` prefix is used in conjunction with a\n``\'u\'`` or ``\'U\'`` prefix, then the ``\\uXXXX`` and ``\\UXXXXXXXX``\nescape sequences are processed while *all other backslashes are left\nin the string*. For example, the string literal ``ur"\\u0062\\n"``\nconsists of three Unicode characters: \'LATIN SMALL LETTER B\', \'REVERSE\nSOLIDUS\', and \'LATIN SMALL LETTER N\'. Backslashes can be escaped with\na preceding backslash; however, both remain in the string. As a\nresult, ``\\uXXXX`` escape sequences are only recognized when there are\nan odd number of backslashes.\n', 'subscriptions': u'\nSubscriptions\n*************\n\nA subscription selects an item of a sequence (string, tuple or list)\nor mapping (dictionary) object:\n\n subscription ::= primary "[" expression_list "]"\n\nThe primary must evaluate to an object of a sequence or mapping type.\n\nIf the primary is a mapping, the expression list must evaluate to an\nobject whose value is one of the keys of the mapping, and the\nsubscription selects the value in the mapping that corresponds to that\nkey. (The expression list is a tuple except if it has exactly one\nitem.)\n\nIf the primary is a sequence, the expression (list) must evaluate to a\nplain integer. If this value is negative, the length of the sequence\nis added to it (so that, e.g., ``x[-1]`` selects the last item of\n``x``.) The resulting value must be a nonnegative integer less than\nthe number of items in the sequence, and the subscription selects the\nitem whose index is that value (counting from zero).\n\nA string\'s items are characters. A character is not a separate data\ntype but a string of exactly one character.\n', 'truth': u"\nTruth Value Testing\n*******************\n\nAny object can be tested for truth value, for use in an ``if`` or\n``while`` condition or as operand of the Boolean operations below. The\nfollowing values are considered false:\n\n* ``None``\n\n* ``False``\n\n* zero of any numeric type, for example, ``0``, ``0L``, ``0.0``,\n ``0j``.\n\n* any empty sequence, for example, ``''``, ``()``, ``[]``.\n\n* any empty mapping, for example, ``{}``.\n\n* instances of user-defined classes, if the class defines a\n ``__nonzero__()`` or ``__len__()`` method, when that method returns\n the integer zero or ``bool`` value ``False``. 
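As an illustration of the truth-value rules listed above, a minimal interactive sketch (assuming a Python 2 interpreter; ``Empty`` is a hypothetical class made false through ``__len__()``):

    >>> bool(0), bool(0.0), bool(''), bool([]), bool({}), bool(None)
    (False, False, False, False, False, False)
    >>> class Empty(object):
    ...     def __len__(self): return 0
    ...
    >>> bool(Empty())                  # __len__() returning zero makes it false
    False
    >>> 1 or 2, 0 or 2, 1 and 2        # and/or return one of their operands
    (1, 2, 2)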
[1]\n\nAll other values are considered true --- so objects of many types are\nalways true.\n\nOperations and built-in functions that have a Boolean result always\nreturn ``0`` or ``False`` for false and ``1`` or ``True`` for true,\nunless otherwise stated. (Important exception: the Boolean operations\n``or`` and ``and`` always return one of their operands.)\n", 'try': u'\nThe ``try`` statement\n*********************\n\nThe ``try`` statement specifies exception handlers and/or cleanup code\nfor a group of statements:\n\n try_stmt ::= try1_stmt | try2_stmt\n try1_stmt ::= "try" ":" suite\n ("except" [expression [("as" | ",") target]] ":" suite)+\n ["else" ":" suite]\n ["finally" ":" suite]\n try2_stmt ::= "try" ":" suite\n "finally" ":" suite\n\nChanged in version 2.5: In previous versions of Python,\n``try``...``except``...``finally`` did not work. ``try``...``except``\nhad to be nested in ``try``...``finally``.\n\nThe ``except`` clause(s) specify one or more exception handlers. When\nno exception occurs in the ``try`` clause, no exception handler is\nexecuted. When an exception occurs in the ``try`` suite, a search for\nan exception handler is started. This search inspects the except\nclauses in turn until one is found that matches the exception. An\nexpression-less except clause, if present, must be last; it matches\nany exception. For an except clause with an expression, that\nexpression is evaluated, and the clause matches the exception if the\nresulting object is "compatible" with the exception. An object is\ncompatible with an exception if it is the class or a base class of the\nexception object, a tuple containing an item compatible with the\nexception, or, in the (deprecated) case of string exceptions, is the\nraised string itself (note that the object identities must match, i.e.\nit must be the same string object, not just a string with the same\nvalue).\n\nIf no except clause matches the exception, the search for an exception\nhandler continues in the surrounding code and on the invocation stack.\n[1]\n\nIf the evaluation of an expression in the header of an except clause\nraises an exception, the original search for a handler is canceled and\na search starts for the new exception in the surrounding code and on\nthe call stack (it is treated as if the entire ``try`` statement\nraised the exception).\n\nWhen a matching except clause is found, the exception is assigned to\nthe target specified in that except clause, if present, and the except\nclause\'s suite is executed. All except clauses must have an\nexecutable block. When the end of this block is reached, execution\ncontinues normally after the entire try statement. (This means that\nif two nested handlers exist for the same exception, and the exception\noccurs in the try clause of the inner handler, the outer handler will\nnot handle the exception.)\n\nBefore an except clause\'s suite is executed, details about the\nexception are assigned to three variables in the ``sys`` module:\n``sys.exc_type`` receives the object identifying the exception;\n``sys.exc_value`` receives the exception\'s parameter;\n``sys.exc_traceback`` receives a traceback object (see section *The\nstandard type hierarchy*) identifying the point in the program where\nthe exception occurred. These details are also available through the\n``sys.exc_info()`` function, which returns a tuple ``(exc_type,\nexc_value, exc_traceback)``. Use of the corresponding variables is\ndeprecated in favor of this function, since their use is unsafe in a\nthreaded program. 
As of Python 1.5, the variables are restored to\ntheir previous values (before the call) when returning from a function\nthat handled an exception.\n\nThe optional ``else`` clause is executed if and when control flows off\nthe end of the ``try`` clause. [2] Exceptions in the ``else`` clause\nare not handled by the preceding ``except`` clauses.\n\nIf ``finally`` is present, it specifies a \'cleanup\' handler. The\n``try`` clause is executed, including any ``except`` and ``else``\nclauses. If an exception occurs in any of the clauses and is not\nhandled, the exception is temporarily saved. The ``finally`` clause is\nexecuted. If there is a saved exception, it is re-raised at the end\nof the ``finally`` clause. If the ``finally`` clause raises another\nexception or executes a ``return`` or ``break`` statement, the saved\nexception is lost. The exception information is not available to the\nprogram during execution of the ``finally`` clause.\n\nWhen a ``return``, ``break`` or ``continue`` statement is executed in\nthe ``try`` suite of a ``try``...``finally`` statement, the\n``finally`` clause is also executed \'on the way out.\' A ``continue``\nstatement is illegal in the ``finally`` clause. (The reason is a\nproblem with the current implementation --- this restriction may be\nlifted in the future).\n\nAdditional information on exceptions can be found in section\n*Exceptions*, and information on using the ``raise`` statement to\ngenerate exceptions may be found in section *The raise statement*.\n', - 'types': u'\nThe standard type hierarchy\n***************************\n\nBelow is a list of the types that are built into Python. Extension\nmodules (written in C, Java, or other languages, depending on the\nimplementation) can define additional types. Future versions of\nPython may add types to the type hierarchy (e.g., rational numbers,\nefficiently stored arrays of integers, etc.).\n\nSome of the type descriptions below contain a paragraph listing\n\'special attributes.\' These are attributes that provide access to the\nimplementation and are not intended for general use. Their definition\nmay change in the future.\n\nNone\n This type has a single value. There is a single object with this\n value. This object is accessed through the built-in name ``None``.\n It is used to signify the absence of a value in many situations,\n e.g., it is returned from functions that don\'t explicitly return\n anything. Its truth value is false.\n\nNotImplemented\n This type has a single value. There is a single object with this\n value. This object is accessed through the built-in name\n ``NotImplemented``. Numeric methods and rich comparison methods may\n return this value if they do not implement the operation for the\n operands provided. (The interpreter will then try the reflected\n operation, or some other fallback, depending on the operator.) Its\n truth value is true.\n\nEllipsis\n This type has a single value. There is a single object with this\n value. This object is accessed through the built-in name\n ``Ellipsis``. It is used to indicate the presence of the ``...``\n syntax in a slice. Its truth value is true.\n\n``numbers.Number``\n These are created by numeric literals and returned as results by\n arithmetic operators and arithmetic built-in functions. 
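As an illustration of the ``try``/``except``/``finally`` control flow and ``sys.exc_info()`` usage described in the ``try`` entry above, a minimal sketch (assuming a Python 2 interpreter; ``parse()`` is a hypothetical helper):

    >>> import sys
    >>> def parse(text):
    ...     try:
    ...         return int(text)
    ...     except ValueError:
    ...         print 'bad value:', sys.exc_info()[0].__name__
    ...         return None
    ...     finally:
    ...         print 'cleanup runs either way'
    ...
    >>> parse('42')
    cleanup runs either way
    42
    >>> parse('x')
    bad value: ValueError
    cleanup runs either way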
Numeric\n objects are immutable; once created their value never changes.\n Python numbers are of course strongly related to mathematical\n numbers, but subject to the limitations of numerical representation\n in computers.\n\n Python distinguishes between integers, floating point numbers, and\n complex numbers:\n\n ``numbers.Integral``\n These represent elements from the mathematical set of integers\n (positive and negative).\n\n There are three types of integers:\n\n Plain integers\n These represent numbers in the range -2147483648 through\n 2147483647. (The range may be larger on machines with a\n larger natural word size, but not smaller.) When the result\n of an operation would fall outside this range, the result is\n normally returned as a long integer (in some cases, the\n exception ``OverflowError`` is raised instead). For the\n purpose of shift and mask operations, integers are assumed to\n have a binary, 2\'s complement notation using 32 or more bits,\n and hiding no bits from the user (i.e., all 4294967296\n different bit patterns correspond to different values).\n\n Long integers\n These represent numbers in an unlimited range, subject to\n available (virtual) memory only. For the purpose of shift\n and mask operations, a binary representation is assumed, and\n negative numbers are represented in a variant of 2\'s\n complement which gives the illusion of an infinite string of\n sign bits extending to the left.\n\n Booleans\n These represent the truth values False and True. The two\n objects representing the values False and True are the only\n Boolean objects. The Boolean type is a subtype of plain\n integers, and Boolean values behave like the values 0 and 1,\n respectively, in almost all contexts, the exception being\n that when converted to a string, the strings ``"False"`` or\n ``"True"`` are returned, respectively.\n\n The rules for integer representation are intended to give the\n most meaningful interpretation of shift and mask operations\n involving negative integers and the least surprises when\n switching between the plain and long integer domains. Any\n operation, if it yields a result in the plain integer domain,\n will yield the same result in the long integer domain or when\n using mixed operands. The switch between domains is transparent\n to the programmer.\n\n ``numbers.Real`` (``float``)\n These represent machine-level double precision floating point\n numbers. You are at the mercy of the underlying machine\n architecture (and C or Java implementation) for the accepted\n range and handling of overflow. Python does not support single-\n precision floating point numbers; the savings in processor and\n memory usage that are usually the reason for using these is\n dwarfed by the overhead of using objects in Python, so there is\n no reason to complicate the language with two kinds of floating\n point numbers.\n\n ``numbers.Complex``\n These represent complex numbers as a pair of machine-level\n double precision floating point numbers. The same caveats apply\n as for floating point numbers. The real and imaginary parts of a\n complex number ``z`` can be retrieved through the read-only\n attributes ``z.real`` and ``z.imag``.\n\nSequences\n These represent finite ordered sets indexed by non-negative\n numbers. The built-in function ``len()`` returns the number of\n items of a sequence. When the length of a sequence is *n*, the\n index set contains the numbers 0, 1, ..., *n*-1. 
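As an illustration of the plain-integer/long-integer promotion and the Boolean subtype behaviour described above, a minimal interactive sketch (assuming a Python 2 interpreter):

    >>> import sys
    >>> type(sys.maxint), type(sys.maxint + 1)   # overflow promotes int to long
    (<type 'int'>, <type 'long'>)
    >>> isinstance(True, int), True + True       # bool is a subtype of plain int
    (True, 2)
    >>> (-1) >> 1                                # shifts assume 2's complement
    -1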
Item *i* of\n sequence *a* is selected by ``a[i]``.\n\n Sequences also support slicing: ``a[i:j]`` selects all items with\n index *k* such that *i* ``<=`` *k* ``<`` *j*. When used as an\n expression, a slice is a sequence of the same type. This implies\n that the index set is renumbered so that it starts at 0.\n\n Some sequences also support "extended slicing" with a third "step"\n parameter: ``a[i:j:k]`` selects all items of *a* with index *x*\n where ``x = i + n*k``, *n* ``>=`` ``0`` and *i* ``<=`` *x* ``<``\n *j*.\n\n Sequences are distinguished according to their mutability:\n\n Immutable sequences\n An object of an immutable sequence type cannot change once it is\n created. (If the object contains references to other objects,\n these other objects may be mutable and may be changed; however,\n the collection of objects directly referenced by an immutable\n object cannot change.)\n\n The following types are immutable sequences:\n\n Strings\n The items of a string are characters. There is no separate\n character type; a character is represented by a string of one\n item. Characters represent (at least) 8-bit bytes. The\n built-in functions ``chr()`` and ``ord()`` convert between\n characters and nonnegative integers representing the byte\n values. Bytes with the values 0-127 usually represent the\n corresponding ASCII values, but the interpretation of values\n is up to the program. The string data type is also used to\n represent arrays of bytes, e.g., to hold data read from a\n file.\n\n (On systems whose native character set is not ASCII, strings\n may use EBCDIC in their internal representation, provided the\n functions ``chr()`` and ``ord()`` implement a mapping between\n ASCII and EBCDIC, and string comparison preserves the ASCII\n order. Or perhaps someone can propose a better rule?)\n\n Unicode\n The items of a Unicode object are Unicode code units. A\n Unicode code unit is represented by a Unicode object of one\n item and can hold either a 16-bit or 32-bit value\n representing a Unicode ordinal (the maximum value for the\n ordinal is given in ``sys.maxunicode``, and depends on how\n Python is configured at compile time). Surrogate pairs may\n be present in the Unicode object, and will be reported as two\n separate items. The built-in functions ``unichr()`` and\n ``ord()`` convert between code units and nonnegative integers\n representing the Unicode ordinals as defined in the Unicode\n Standard 3.0. Conversion from and to other encodings are\n possible through the Unicode method ``encode()`` and the\n built-in function ``unicode()``.\n\n Tuples\n The items of a tuple are arbitrary Python objects. Tuples of\n two or more items are formed by comma-separated lists of\n expressions. A tuple of one item (a \'singleton\') can be\n formed by affixing a comma to an expression (an expression by\n itself does not create a tuple, since parentheses must be\n usable for grouping of expressions). An empty tuple can be\n formed by an empty pair of parentheses.\n\n Mutable sequences\n Mutable sequences can be changed after they are created. The\n subscription and slicing notations can be used as the target of\n assignment and ``del`` (delete) statements.\n\n There are currently two intrinsic mutable sequence types:\n\n Lists\n The items of a list are arbitrary Python objects. Lists are\n formed by placing a comma-separated list of expressions in\n square brackets. 
(Note that there are no special cases needed\n to form lists of length 0 or 1.)\n\n Byte Arrays\n A bytearray object is a mutable array. They are created by\n the built-in ``bytearray()`` constructor. Aside from being\n mutable (and hence unhashable), byte arrays otherwise provide\n the same interface and functionality as immutable bytes\n objects.\n\n The extension module ``array`` provides an additional example of\n a mutable sequence type.\n\nSet types\n These represent unordered, finite sets of unique, immutable\n objects. As such, they cannot be indexed by any subscript. However,\n they can be iterated over, and the built-in function ``len()``\n returns the number of items in a set. Common uses for sets are fast\n membership testing, removing duplicates from a sequence, and\n computing mathematical operations such as intersection, union,\n difference, and symmetric difference.\n\n For set elements, the same immutability rules apply as for\n dictionary keys. Note that numeric types obey the normal rules for\n numeric comparison: if two numbers compare equal (e.g., ``1`` and\n ``1.0``), only one of them can be contained in a set.\n\n There are currently two intrinsic set types:\n\n Sets\n These represent a mutable set. They are created by the built-in\n ``set()`` constructor and can be modified afterwards by several\n methods, such as ``add()``.\n\n Frozen sets\n These represent an immutable set. They are created by the\n built-in ``frozenset()`` constructor. As a frozenset is\n immutable and *hashable*, it can be used again as an element of\n another set, or as a dictionary key.\n\nMappings\n These represent finite sets of objects indexed by arbitrary index\n sets. The subscript notation ``a[k]`` selects the item indexed by\n ``k`` from the mapping ``a``; this can be used in expressions and\n as the target of assignments or ``del`` statements. The built-in\n function ``len()`` returns the number of items in a mapping.\n\n There is currently a single intrinsic mapping type:\n\n Dictionaries\n These represent finite sets of objects indexed by nearly\n arbitrary values. The only types of values not acceptable as\n keys are values containing lists or dictionaries or other\n mutable types that are compared by value rather than by object\n identity, the reason being that the efficient implementation of\n dictionaries requires a key\'s hash value to remain constant.\n Numeric types used for keys obey the normal rules for numeric\n comparison: if two numbers compare equal (e.g., ``1`` and\n ``1.0``) then they can be used interchangeably to index the same\n dictionary entry.\n\n Dictionaries are mutable; they can be created by the ``{...}``\n notation (see section *Dictionary displays*).\n\n The extension modules ``dbm``, ``gdbm``, and ``bsddb`` provide\n additional examples of mapping types.\n\nCallable types\n These are the types to which the function call operation (see\n section *Calls*) can be applied:\n\n User-defined functions\n A user-defined function object is created by a function\n definition (see section *Function definitions*). 
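The set and dictionary-key rules described above can be sketched like this (Python 2.x assumed; the values are arbitrary):

   >>> sorted(set([1, 1.0, 2]))        # 1 and 1.0 compare equal, only one survives
   [1, 2]
   >>> frozenset([1, 2]) in set([frozenset([1, 2])])   # hashable, so usable as an element
   True
   >>> d = {1: 'one'}
   >>> d[1.0]                          # equal numbers index the same entry
   'one'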
It should be\n called with an argument list containing the same number of items\n as the function\'s formal parameter list.\n\n Special attributes:\n\n +-------------------------+---------------------------------+-------------+\n | Attribute | Meaning | |\n +=========================+=================================+=============+\n | ``func_doc`` | The function\'s documentation | Writable |\n | | string, or ``None`` if | |\n | | unavailable | |\n +-------------------------+---------------------------------+-------------+\n | ``__doc__`` | Another way of spelling | Writable |\n | | ``func_doc`` | |\n +-------------------------+---------------------------------+-------------+\n | ``func_name`` | The function\'s name | Writable |\n +-------------------------+---------------------------------+-------------+\n | ``__name__`` | Another way of spelling | Writable |\n | | ``func_name`` | |\n +-------------------------+---------------------------------+-------------+\n | ``__module__`` | The name of the module the | Writable |\n | | function was defined in, or | |\n | | ``None`` if unavailable. | |\n +-------------------------+---------------------------------+-------------+\n | ``func_defaults`` | A tuple containing default | Writable |\n | | argument values for those | |\n | | arguments that have defaults, | |\n | | or ``None`` if no arguments | |\n | | have a default value | |\n +-------------------------+---------------------------------+-------------+\n | ``func_code`` | The code object representing | Writable |\n | | the compiled function body. | |\n +-------------------------+---------------------------------+-------------+\n | ``func_globals`` | A reference to the dictionary | Read-only |\n | | that holds the function\'s | |\n | | global variables --- the global | |\n | | namespace of the module in | |\n | | which the function was defined. | |\n +-------------------------+---------------------------------+-------------+\n | ``func_dict`` | The namespace supporting | Writable |\n | | arbitrary function attributes. | |\n +-------------------------+---------------------------------+-------------+\n | ``func_closure`` | ``None`` or a tuple of cells | Read-only |\n | | that contain bindings for the | |\n | | function\'s free variables. | |\n +-------------------------+---------------------------------+-------------+\n\n Most of the attributes labelled "Writable" check the type of the\n assigned value.\n\n Changed in version 2.4: ``func_name`` is now writable.\n\n Function objects also support getting and setting arbitrary\n attributes, which can be used, for example, to attach metadata\n to functions. Regular attribute dot-notation is used to get and\n set such attributes. *Note that the current implementation only\n supports function attributes on user-defined functions. 
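For example, the writable function attributes listed in the table above, and the arbitrary attributes kept in ``func_dict``, can be inspected like this (a minimal sketch; ``f`` and its ``units`` attribute are made-up names):

   >>> def f(x, y=10):
   ...     "add two numbers"
   ...     return x + y
   ...
   >>> f.__name__, f.func_defaults
   ('f', (10,))
   >>> f.func_doc
   'add two numbers'
   >>> f.units = 'apples'              # arbitrary attribute, stored in func_dict
   >>> f.func_dict
   {'units': 'apples'}
   >>> f.func_code.co_argcount
   2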
Function\n attributes on built-in functions may be supported in the\n future.*\n\n Additional information about a function\'s definition can be\n retrieved from its code object; see the description of internal\n types below.\n\n User-defined methods\n A user-defined method object combines a class, a class instance\n (or ``None``) and any callable object (normally a user-defined\n function).\n\n Special read-only attributes: ``im_self`` is the class instance\n object, ``im_func`` is the function object; ``im_class`` is the\n class of ``im_self`` for bound methods or the class that asked\n for the method for unbound methods; ``__doc__`` is the method\'s\n documentation (same as ``im_func.__doc__``); ``__name__`` is the\n method name (same as ``im_func.__name__``); ``__module__`` is\n the name of the module the method was defined in, or ``None`` if\n unavailable.\n\n Changed in version 2.2: ``im_self`` used to refer to the class\n that defined the method.\n\n Changed in version 2.6: For 3.0 forward-compatibility,\n ``im_func`` is also available as ``__func__``, and ``im_self``\n as ``__self__``.\n\n Methods also support accessing (but not setting) the arbitrary\n function attributes on the underlying function object.\n\n User-defined method objects may be created when getting an\n attribute of a class (perhaps via an instance of that class), if\n that attribute is a user-defined function object, an unbound\n user-defined method object, or a class method object. When the\n attribute is a user-defined method object, a new method object\n is only created if the class from which it is being retrieved is\n the same as, or a derived class of, the class stored in the\n original method object; otherwise, the original method object is\n used as it is.\n\n When a user-defined method object is created by retrieving a\n user-defined function object from a class, its ``im_self``\n attribute is ``None`` and the method object is said to be\n unbound. When one is created by retrieving a user-defined\n function object from a class via one of its instances, its\n ``im_self`` attribute is the instance, and the method object is\n said to be bound. In either case, the new method\'s ``im_class``\n attribute is the class from which the retrieval takes place, and\n its ``im_func`` attribute is the original function object.\n\n When a user-defined method object is created by retrieving\n another method object from a class or instance, the behaviour is\n the same as for a function object, except that the ``im_func``\n attribute of the new instance is not the original method object\n but its ``im_func`` attribute.\n\n When a user-defined method object is created by retrieving a\n class method object from a class or instance, its ``im_self``\n attribute is the class itself (the same as the ``im_class``\n attribute), and its ``im_func`` attribute is the function object\n underlying the class method.\n\n When an unbound user-defined method object is called, the\n underlying function (``im_func``) is called, with the\n restriction that the first argument must be an instance of the\n proper class (``im_class``) or of a derived class thereof.\n\n When a bound user-defined method object is called, the\n underlying function (``im_func``) is called, inserting the class\n instance (``im_self``) in front of the argument list. 
For\n instance, when ``C`` is a class which contains a definition for\n a function ``f()``, and ``x`` is an instance of ``C``, calling\n ``x.f(1)`` is equivalent to calling ``C.f(x, 1)``.\n\n When a user-defined method object is derived from a class method\n object, the "class instance" stored in ``im_self`` will actually\n be the class itself, so that calling either ``x.f(1)`` or\n ``C.f(1)`` is equivalent to calling ``f(C,1)`` where ``f`` is\n the underlying function.\n\n Note that the transformation from function object to (unbound or\n bound) method object happens each time the attribute is\n retrieved from the class or instance. In some cases, a fruitful\n optimization is to assign the attribute to a local variable and\n call that local variable. Also notice that this transformation\n only happens for user-defined functions; other callable objects\n (and all non-callable objects) are retrieved without\n transformation. It is also important to note that user-defined\n functions which are attributes of a class instance are not\n converted to bound methods; this *only* happens when the\n function is an attribute of the class.\n\n Generator functions\n A function or method which uses the ``yield`` statement (see\n section *The yield statement*) is called a *generator function*.\n Such a function, when called, always returns an iterator object\n which can be used to execute the body of the function: calling\n the iterator\'s ``next()`` method will cause the function to\n execute until it provides a value using the ``yield`` statement.\n When the function executes a ``return`` statement or falls off\n the end, a ``StopIteration`` exception is raised and the\n iterator will have reached the end of the set of values to be\n returned.\n\n Built-in functions\n A built-in function object is a wrapper around a C function.\n Examples of built-in functions are ``len()`` and ``math.sin()``\n (``math`` is a standard built-in module). The number and type of\n the arguments are determined by the C function. Special read-\n only attributes: ``__doc__`` is the function\'s documentation\n string, or ``None`` if unavailable; ``__name__`` is the\n function\'s name; ``__self__`` is set to ``None`` (but see the\n next item); ``__module__`` is the name of the module the\n function was defined in or ``None`` if unavailable.\n\n Built-in methods\n This is really a different disguise of a built-in function, this\n time containing an object passed to the C function as an\n implicit extra argument. An example of a built-in method is\n ``alist.append()``, assuming *alist* is a list object. In this\n case, the special read-only attribute ``__self__`` is set to the\n object denoted by *list*.\n\n Class Types\n Class types, or "new-style classes," are callable. These\n objects normally act as factories for new instances of\n themselves, but variations are possible for class types that\n override ``__new__()``. The arguments of the call are passed to\n ``__new__()`` and, in the typical case, to ``__init__()`` to\n initialize the new instance.\n\n Classic Classes\n Class objects are described below. When a class object is\n called, a new class instance (also described below) is created\n and returned. This implies a call to the class\'s ``__init__()``\n method if it has one. Any arguments are passed on to the\n ``__init__()`` method. If there is no ``__init__()`` method,\n the class must be called without arguments.\n\n Class instances\n Class instances are described below. 
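A short sketch (Python 2.x assumed; ``gen`` is a made-up name) of the generator protocol and of the ``__self__`` attribute of built-in methods described above:

   >>> def gen():
   ...     yield 1
   ...     yield 2
   ...
   >>> it = gen()                      # calling a generator function returns an iterator
   >>> it.next(), it.next()
   (1, 2)
   >>> it.next()
   Traceback (most recent call last):
     ...
   StopIteration
   >>> alist = []
   >>> alist.append.__self__ is alist  # a built-in method remembers its object
   True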
Class instances are\n callable only when the class has a ``__call__()`` method;\n ``x(arguments)`` is a shorthand for ``x.__call__(arguments)``.\n\nModules\n Modules are imported by the ``import`` statement (see section *The\n import statement*). A module object has a namespace implemented by\n a dictionary object (this is the dictionary referenced by the\n func_globals attribute of functions defined in the module).\n Attribute references are translated to lookups in this dictionary,\n e.g., ``m.x`` is equivalent to ``m.__dict__["x"]``. A module object\n does not contain the code object used to initialize the module\n (since it isn\'t needed once the initialization is done).\n\n Attribute assignment updates the module\'s namespace dictionary,\n e.g., ``m.x = 1`` is equivalent to ``m.__dict__["x"] = 1``.\n\n Special read-only attribute: ``__dict__`` is the module\'s namespace\n as a dictionary object.\n\n Predefined (writable) attributes: ``__name__`` is the module\'s\n name; ``__doc__`` is the module\'s documentation string, or ``None``\n if unavailable; ``__file__`` is the pathname of the file from which\n the module was loaded, if it was loaded from a file. The\n ``__file__`` attribute is not present for C modules that are\n statically linked into the interpreter; for extension modules\n loaded dynamically from a shared library, it is the pathname of the\n shared library file.\n\nClasses\n Both class types (new-style classes) and class objects (old-\n style/classic classes) are typically created by class definitions\n (see section *Class definitions*). A class has a namespace\n implemented by a dictionary object. Class attribute references are\n translated to lookups in this dictionary, e.g., ``C.x`` is\n translated to ``C.__dict__["x"]`` (although for new-style classes\n in particular there are a number of hooks which allow for other\n means of locating attributes). When the attribute name is not found\n there, the attribute search continues in the base classes. For\n old-style classes, the search is depth-first, left-to-right in the\n order of occurrence in the base class list. New-style classes use\n the more complex C3 method resolution order which behaves correctly\n even in the presence of \'diamond\' inheritance structures where\n there are multiple inheritance paths leading back to a common\n ancestor. Additional details on the C3 MRO used by new-style\n classes can be found in the documentation accompanying the 2.3\n release at http://www.python.org/download/releases/2.3/mro/.\n\n When a class attribute reference (for class ``C``, say) would yield\n a user-defined function object or an unbound user-defined method\n object whose associated class is either ``C`` or one of its base\n classes, it is transformed into an unbound user-defined method\n object whose ``im_class`` attribute is ``C``. When it would yield a\n class method object, it is transformed into a bound user-defined\n method object whose ``im_class`` and ``im_self`` attributes are\n both ``C``. 
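The module behaviour described above (attribute access as a lookup in ``__dict__``, attribute assignment updating that dictionary, and ``__file__`` being absent for modules built into the interpreter) can be checked interactively, for instance with the standard ``os`` and ``sys`` modules:

   >>> import os, sys
   >>> os.__name__
   'os'
   >>> os.path is os.__dict__['path']  # attribute access is a dict lookup
   True
   >>> os.answer = 42                  # assignment updates the namespace dict
   >>> os.__dict__['answer']
   42
   >>> hasattr(sys, '__file__')        # sys is compiled in, so no __file__
   False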
When it would yield a static method object, it is\n transformed into the object wrapped by the static method object.\n See section *Implementing Descriptors* for another way in which\n attributes retrieved from a class may differ from those actually\n contained in its ``__dict__`` (note that only new-style classes\n support descriptors).\n\n Class attribute assignments update the class\'s dictionary, never\n the dictionary of a base class.\n\n A class object can be called (see above) to yield a class instance\n (see below).\n\n Special attributes: ``__name__`` is the class name; ``__module__``\n is the module name in which the class was defined; ``__dict__`` is\n the dictionary containing the class\'s namespace; ``__bases__`` is a\n tuple (possibly empty or a singleton) containing the base classes,\n in the order of their occurrence in the base class list;\n ``__doc__`` is the class\'s documentation string, or None if\n undefined.\n\nClass instances\n A class instance is created by calling a class object (see above).\n A class instance has a namespace implemented as a dictionary which\n is the first place in which attribute references are searched.\n When an attribute is not found there, and the instance\'s class has\n an attribute by that name, the search continues with the class\n attributes. If a class attribute is found that is a user-defined\n function object or an unbound user-defined method object whose\n associated class is the class (call it ``C``) of the instance for\n which the attribute reference was initiated or one of its bases, it\n is transformed into a bound user-defined method object whose\n ``im_class`` attribute is ``C`` and whose ``im_self`` attribute is\n the instance. Static method and class method objects are also\n transformed, as if they had been retrieved from class ``C``; see\n above under "Classes". See section *Implementing Descriptors* for\n another way in which attributes of a class retrieved via its\n instances may differ from the objects actually stored in the\n class\'s ``__dict__``. If no class attribute is found, and the\n object\'s class has a ``__getattr__()`` method, that is called to\n satisfy the lookup.\n\n Attribute assignments and deletions update the instance\'s\n dictionary, never a class\'s dictionary. If the class has a\n ``__setattr__()`` or ``__delattr__()`` method, this is called\n instead of updating the instance dictionary directly.\n\n Class instances can pretend to be numbers, sequences, or mappings\n if they have methods with certain special names. See section\n *Special method names*.\n\n Special attributes: ``__dict__`` is the attribute dictionary;\n ``__class__`` is the instance\'s class.\n\nFiles\n A file object represents an open file. File objects are created by\n the ``open()`` built-in function, and also by ``os.popen()``,\n ``os.fdopen()``, and the ``makefile()`` method of socket objects\n (and perhaps by other functions or methods provided by extension\n modules). The objects ``sys.stdin``, ``sys.stdout`` and\n ``sys.stderr`` are initialized to file objects corresponding to the\n interpreter\'s standard input, output and error streams. See *File\n Objects* for complete documentation of file objects.\n\nInternal types\n A few types used internally by the interpreter are exposed to the\n user. Their definitions may change with future versions of the\n interpreter, but they are mentioned here for completeness.\n\n Code objects\n Code objects represent *byte-compiled* executable Python code,\n or *bytecode*. 
The difference between a code object and a\n function object is that the function object contains an explicit\n reference to the function\'s globals (the module in which it was\n defined), while a code object contains no context; also the\n default argument values are stored in the function object, not\n in the code object (because they represent values calculated at\n run-time). Unlike function objects, code objects are immutable\n and contain no references (directly or indirectly) to mutable\n objects.\n\n Special read-only attributes: ``co_name`` gives the function\n name; ``co_argcount`` is the number of positional arguments\n (including arguments with default values); ``co_nlocals`` is the\n number of local variables used by the function (including\n arguments); ``co_varnames`` is a tuple containing the names of\n the local variables (starting with the argument names);\n ``co_cellvars`` is a tuple containing the names of local\n variables that are referenced by nested functions;\n ``co_freevars`` is a tuple containing the names of free\n variables; ``co_code`` is a string representing the sequence of\n bytecode instructions; ``co_consts`` is a tuple containing the\n literals used by the bytecode; ``co_names`` is a tuple\n containing the names used by the bytecode; ``co_filename`` is\n the filename from which the code was compiled;\n ``co_firstlineno`` is the first line number of the function;\n ``co_lnotab`` is a string encoding the mapping from bytecode\n offsets to line numbers (for details see the source code of the\n interpreter); ``co_stacksize`` is the required stack size\n (including local variables); ``co_flags`` is an integer encoding\n a number of flags for the interpreter.\n\n The following flag bits are defined for ``co_flags``: bit\n ``0x04`` is set if the function uses the ``*arguments`` syntax\n to accept an arbitrary number of positional arguments; bit\n ``0x08`` is set if the function uses the ``**keywords`` syntax\n to accept arbitrary keyword arguments; bit ``0x20`` is set if\n the function is a generator.\n\n Future feature declarations (``from __future__ import\n division``) also use bits in ``co_flags`` to indicate whether a\n code object was compiled with a particular feature enabled: bit\n ``0x2000`` is set if the function was compiled with future\n division enabled; bits ``0x10`` and ``0x1000`` were used in\n earlier versions of Python.\n\n Other bits in ``co_flags`` are reserved for internal use.\n\n If a code object represents a function, the first item in\n ``co_consts`` is the documentation string of the function, or\n ``None`` if undefined.\n\n Frame objects\n Frame objects represent execution frames. 
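For instance, the ``co_flags`` bits and the docstring-in-``co_consts`` rule described above show up as follows (a quick sketch; ``g`` is a made-up function):

   >>> def g(a, *args, **kwargs):
   ...     "docstring"
   ...     yield a
   ...
   >>> co = g.func_code
   >>> co.co_name, co.co_argcount      # *args/**kwargs are not counted
   ('g', 1)
   >>> bool(co.co_flags & 0x04), bool(co.co_flags & 0x08), bool(co.co_flags & 0x20)
   (True, True, True)
   >>> co.co_consts[0]                 # first constant is the docstring
   'docstring'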
They may occur in\n traceback objects (see below).\n\n Special read-only attributes: ``f_back`` is to the previous\n stack frame (towards the caller), or ``None`` if this is the\n bottom stack frame; ``f_code`` is the code object being executed\n in this frame; ``f_locals`` is the dictionary used to look up\n local variables; ``f_globals`` is used for global variables;\n ``f_builtins`` is used for built-in (intrinsic) names;\n ``f_restricted`` is a flag indicating whether the function is\n executing in restricted execution mode; ``f_lasti`` gives the\n precise instruction (this is an index into the bytecode string\n of the code object).\n\n Special writable attributes: ``f_trace``, if not ``None``, is a\n function called at the start of each source code line (this is\n used by the debugger); ``f_exc_type``, ``f_exc_value``,\n ``f_exc_traceback`` represent the last exception raised in the\n parent frame provided another exception was ever raised in the\n current frame (in all other cases they are None); ``f_lineno``\n is the current line number of the frame --- writing to this from\n within a trace function jumps to the given line (only for the\n bottom-most frame). A debugger can implement a Jump command\n (aka Set Next Statement) by writing to f_lineno.\n\n Traceback objects\n Traceback objects represent a stack trace of an exception. A\n traceback object is created when an exception occurs. When the\n search for an exception handler unwinds the execution stack, at\n each unwound level a traceback object is inserted in front of\n the current traceback. When an exception handler is entered,\n the stack trace is made available to the program. (See section\n *The try statement*.) It is accessible as ``sys.exc_traceback``,\n and also as the third item of the tuple returned by\n ``sys.exc_info()``. The latter is the preferred interface,\n since it works correctly when the program is using multiple\n threads. When the program contains no suitable handler, the\n stack trace is written (nicely formatted) to the standard error\n stream; if the interpreter is interactive, it is also made\n available to the user as ``sys.last_traceback``.\n\n Special read-only attributes: ``tb_next`` is the next level in\n the stack trace (towards the frame where the exception\n occurred), or ``None`` if there is no next level; ``tb_frame``\n points to the execution frame of the current level;\n ``tb_lineno`` gives the line number where the exception\n occurred; ``tb_lasti`` indicates the precise instruction. The\n line number and last instruction in the traceback may differ\n from the line number of its frame object if the exception\n occurred in a ``try`` statement with no matching except clause\n or with a finally clause.\n\n Slice objects\n Slice objects are used to represent slices when *extended slice\n syntax* is used. This is a slice using two colons, or multiple\n slices or ellipses separated by commas, e.g., ``a[i:j:step]``,\n ``a[i:j, k:l]``, or ``a[..., i:j]``. They are also created by\n the built-in ``slice()`` function.\n\n Special read-only attributes: ``start`` is the lower bound;\n ``stop`` is the upper bound; ``step`` is the step value; each is\n ``None`` if omitted. These attributes can have any type.\n\n Slice objects support one method:\n\n slice.indices(self, length)\n\n This method takes a single integer argument *length* and\n computes information about the extended slice that the slice\n object would describe if applied to a sequence of *length*\n items. 
It returns a tuple of three integers; respectively\n these are the *start* and *stop* indices and the *step* or\n stride length of the slice. Missing or out-of-bounds indices\n are handled in a manner consistent with regular slices.\n\n New in version 2.3.\n\n Static method objects\n Static method objects provide a way of defeating the\n transformation of function objects to method objects described\n above. A static method object is a wrapper around any other\n object, usually a user-defined method object. When a static\n method object is retrieved from a class or a class instance, the\n object actually returned is the wrapped object, which is not\n subject to any further transformation. Static method objects are\n not themselves callable, although the objects they wrap usually\n are. Static method objects are created by the built-in\n ``staticmethod()`` constructor.\n\n Class method objects\n A class method object, like a static method object, is a wrapper\n around another object that alters the way in which that object\n is retrieved from classes and class instances. The behaviour of\n class method objects upon such retrieval is described above,\n under "User-defined methods". Class method objects are created\n by the built-in ``classmethod()`` constructor.\n', + 'types': u'\nThe standard type hierarchy\n***************************\n\nBelow is a list of the types that are built into Python. Extension\nmodules (written in C, Java, or other languages, depending on the\nimplementation) can define additional types. Future versions of\nPython may add types to the type hierarchy (e.g., rational numbers,\nefficiently stored arrays of integers, etc.).\n\nSome of the type descriptions below contain a paragraph listing\n\'special attributes.\' These are attributes that provide access to the\nimplementation and are not intended for general use. Their definition\nmay change in the future.\n\nNone\n This type has a single value. There is a single object with this\n value. This object is accessed through the built-in name ``None``.\n It is used to signify the absence of a value in many situations,\n e.g., it is returned from functions that don\'t explicitly return\n anything. Its truth value is false.\n\nNotImplemented\n This type has a single value. There is a single object with this\n value. This object is accessed through the built-in name\n ``NotImplemented``. Numeric methods and rich comparison methods may\n return this value if they do not implement the operation for the\n operands provided. (The interpreter will then try the reflected\n operation, or some other fallback, depending on the operator.) Its\n truth value is true.\n\nEllipsis\n This type has a single value. There is a single object with this\n value. This object is accessed through the built-in name\n ``Ellipsis``. It is used to indicate the presence of the ``...``\n syntax in a slice. Its truth value is true.\n\n``numbers.Number``\n These are created by numeric literals and returned as results by\n arithmetic operators and arithmetic built-in functions. 
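The ``slice.indices()`` method and the ``staticmethod()``/``classmethod()`` wrappers described above behave, for example, like this (Python 2.x assumed; ``C`` is a made-up class):

   >>> s = slice(1, None, 2)
   >>> s.start, s.stop, s.step
   (1, None, 2)
   >>> s.indices(6)                    # resolved against a sequence of length 6
   (1, 6, 2)
   >>> range(10)[s] == range(10)[slice(*s.indices(10))]
   True
   >>> class C(object):
   ...     @staticmethod
   ...     def sm():
   ...         return 'no implicit argument'
   ...     @classmethod
   ...     def cm(cls):
   ...         return cls.__name__
   ...
   >>> C.sm(), C().cm()
   ('no implicit argument', 'C')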
Numeric\n objects are immutable; once created their value never changes.\n Python numbers are of course strongly related to mathematical\n numbers, but subject to the limitations of numerical representation\n in computers.\n\n Python distinguishes between integers, floating point numbers, and\n complex numbers:\n\n ``numbers.Integral``\n These represent elements from the mathematical set of integers\n (positive and negative).\n\n There are three types of integers:\n\n Plain integers\n These represent numbers in the range -2147483648 through\n 2147483647. (The range may be larger on machines with a\n larger natural word size, but not smaller.) When the result\n of an operation would fall outside this range, the result is\n normally returned as a long integer (in some cases, the\n exception ``OverflowError`` is raised instead). For the\n purpose of shift and mask operations, integers are assumed to\n have a binary, 2\'s complement notation using 32 or more bits,\n and hiding no bits from the user (i.e., all 4294967296\n different bit patterns correspond to different values).\n\n Long integers\n These represent numbers in an unlimited range, subject to\n available (virtual) memory only. For the purpose of shift\n and mask operations, a binary representation is assumed, and\n negative numbers are represented in a variant of 2\'s\n complement which gives the illusion of an infinite string of\n sign bits extending to the left.\n\n Booleans\n These represent the truth values False and True. The two\n objects representing the values False and True are the only\n Boolean objects. The Boolean type is a subtype of plain\n integers, and Boolean values behave like the values 0 and 1,\n respectively, in almost all contexts, the exception being\n that when converted to a string, the strings ``"False"`` or\n ``"True"`` are returned, respectively.\n\n The rules for integer representation are intended to give the\n most meaningful interpretation of shift and mask operations\n involving negative integers and the least surprises when\n switching between the plain and long integer domains. Any\n operation, if it yields a result in the plain integer domain,\n will yield the same result in the long integer domain or when\n using mixed operands. The switch between domains is transparent\n to the programmer.\n\n ``numbers.Real`` (``float``)\n These represent machine-level double precision floating point\n numbers. You are at the mercy of the underlying machine\n architecture (and C or Java implementation) for the accepted\n range and handling of overflow. Python does not support single-\n precision floating point numbers; the savings in processor and\n memory usage that are usually the reason for using these is\n dwarfed by the overhead of using objects in Python, so there is\n no reason to complicate the language with two kinds of floating\n point numbers.\n\n ``numbers.Complex``\n These represent complex numbers as a pair of machine-level\n double precision floating point numbers. The same caveats apply\n as for floating point numbers. The real and imaginary parts of a\n complex number ``z`` can be retrieved through the read-only\n attributes ``z.real`` and ``z.imag``.\n\nSequences\n These represent finite ordered sets indexed by non-negative\n numbers. The built-in function ``len()`` returns the number of\n items of a sequence. When the length of a sequence is *n*, the\n index set contains the numbers 0, 1, ..., *n*-1. 
Item *i* of\n sequence *a* is selected by ``a[i]``.\n\n Sequences also support slicing: ``a[i:j]`` selects all items with\n index *k* such that *i* ``<=`` *k* ``<`` *j*. When used as an\n expression, a slice is a sequence of the same type. This implies\n that the index set is renumbered so that it starts at 0.\n\n Some sequences also support "extended slicing" with a third "step"\n parameter: ``a[i:j:k]`` selects all items of *a* with index *x*\n where ``x = i + n*k``, *n* ``>=`` ``0`` and *i* ``<=`` *x* ``<``\n *j*.\n\n Sequences are distinguished according to their mutability:\n\n Immutable sequences\n An object of an immutable sequence type cannot change once it is\n created. (If the object contains references to other objects,\n these other objects may be mutable and may be changed; however,\n the collection of objects directly referenced by an immutable\n object cannot change.)\n\n The following types are immutable sequences:\n\n Strings\n The items of a string are characters. There is no separate\n character type; a character is represented by a string of one\n item. Characters represent (at least) 8-bit bytes. The\n built-in functions ``chr()`` and ``ord()`` convert between\n characters and nonnegative integers representing the byte\n values. Bytes with the values 0-127 usually represent the\n corresponding ASCII values, but the interpretation of values\n is up to the program. The string data type is also used to\n represent arrays of bytes, e.g., to hold data read from a\n file.\n\n (On systems whose native character set is not ASCII, strings\n may use EBCDIC in their internal representation, provided the\n functions ``chr()`` and ``ord()`` implement a mapping between\n ASCII and EBCDIC, and string comparison preserves the ASCII\n order. Or perhaps someone can propose a better rule?)\n\n Unicode\n The items of a Unicode object are Unicode code units. A\n Unicode code unit is represented by a Unicode object of one\n item and can hold either a 16-bit or 32-bit value\n representing a Unicode ordinal (the maximum value for the\n ordinal is given in ``sys.maxunicode``, and depends on how\n Python is configured at compile time). Surrogate pairs may\n be present in the Unicode object, and will be reported as two\n separate items. The built-in functions ``unichr()`` and\n ``ord()`` convert between code units and nonnegative integers\n representing the Unicode ordinals as defined in the Unicode\n Standard 3.0. Conversion from and to other encodings are\n possible through the Unicode method ``encode()`` and the\n built-in function ``unicode()``.\n\n Tuples\n The items of a tuple are arbitrary Python objects. Tuples of\n two or more items are formed by comma-separated lists of\n expressions. A tuple of one item (a \'singleton\') can be\n formed by affixing a comma to an expression (an expression by\n itself does not create a tuple, since parentheses must be\n usable for grouping of expressions). An empty tuple can be\n formed by an empty pair of parentheses.\n\n Mutable sequences\n Mutable sequences can be changed after they are created. The\n subscription and slicing notations can be used as the target of\n assignment and ``del`` (delete) statements.\n\n There are currently two intrinsic mutable sequence types:\n\n Lists\n The items of a list are arbitrary Python objects. Lists are\n formed by placing a comma-separated list of expressions in\n square brackets. 
(Note that there are no special cases needed\n to form lists of length 0 or 1.)\n\n Byte Arrays\n A bytearray object is a mutable array. They are created by\n the built-in ``bytearray()`` constructor. Aside from being\n mutable (and hence unhashable), byte arrays otherwise provide\n the same interface and functionality as immutable bytes\n objects.\n\n The extension module ``array`` provides an additional example of\n a mutable sequence type.\n\nSet types\n These represent unordered, finite sets of unique, immutable\n objects. As such, they cannot be indexed by any subscript. However,\n they can be iterated over, and the built-in function ``len()``\n returns the number of items in a set. Common uses for sets are fast\n membership testing, removing duplicates from a sequence, and\n computing mathematical operations such as intersection, union,\n difference, and symmetric difference.\n\n For set elements, the same immutability rules apply as for\n dictionary keys. Note that numeric types obey the normal rules for\n numeric comparison: if two numbers compare equal (e.g., ``1`` and\n ``1.0``), only one of them can be contained in a set.\n\n There are currently two intrinsic set types:\n\n Sets\n These represent a mutable set. They are created by the built-in\n ``set()`` constructor and can be modified afterwards by several\n methods, such as ``add()``.\n\n Frozen sets\n These represent an immutable set. They are created by the\n built-in ``frozenset()`` constructor. As a frozenset is\n immutable and *hashable*, it can be used again as an element of\n another set, or as a dictionary key.\n\nMappings\n These represent finite sets of objects indexed by arbitrary index\n sets. The subscript notation ``a[k]`` selects the item indexed by\n ``k`` from the mapping ``a``; this can be used in expressions and\n as the target of assignments or ``del`` statements. The built-in\n function ``len()`` returns the number of items in a mapping.\n\n There is currently a single intrinsic mapping type:\n\n Dictionaries\n These represent finite sets of objects indexed by nearly\n arbitrary values. The only types of values not acceptable as\n keys are values containing lists or dictionaries or other\n mutable types that are compared by value rather than by object\n identity, the reason being that the efficient implementation of\n dictionaries requires a key\'s hash value to remain constant.\n Numeric types used for keys obey the normal rules for numeric\n comparison: if two numbers compare equal (e.g., ``1`` and\n ``1.0``) then they can be used interchangeably to index the same\n dictionary entry.\n\n Dictionaries are mutable; they can be created by the ``{...}``\n notation (see section *Dictionary displays*).\n\n The extension modules ``dbm``, ``gdbm``, and ``bsddb`` provide\n additional examples of mapping types.\n\nCallable types\n These are the types to which the function call operation (see\n section *Calls*) can be applied:\n\n User-defined functions\n A user-defined function object is created by a function\n definition (see section *Function definitions*). 
It should be\n called with an argument list containing the same number of items\n as the function\'s formal parameter list.\n\n Special attributes:\n\n +-------------------------+---------------------------------+-------------+\n | Attribute | Meaning | |\n +=========================+=================================+=============+\n | ``func_doc`` | The function\'s documentation | Writable |\n | | string, or ``None`` if | |\n | | unavailable | |\n +-------------------------+---------------------------------+-------------+\n | ``__doc__`` | Another way of spelling | Writable |\n | | ``func_doc`` | |\n +-------------------------+---------------------------------+-------------+\n | ``func_name`` | The function\'s name | Writable |\n +-------------------------+---------------------------------+-------------+\n | ``__name__`` | Another way of spelling | Writable |\n | | ``func_name`` | |\n +-------------------------+---------------------------------+-------------+\n | ``__module__`` | The name of the module the | Writable |\n | | function was defined in, or | |\n | | ``None`` if unavailable. | |\n +-------------------------+---------------------------------+-------------+\n | ``func_defaults`` | A tuple containing default | Writable |\n | | argument values for those | |\n | | arguments that have defaults, | |\n | | or ``None`` if no arguments | |\n | | have a default value | |\n +-------------------------+---------------------------------+-------------+\n | ``func_code`` | The code object representing | Writable |\n | | the compiled function body. | |\n +-------------------------+---------------------------------+-------------+\n | ``func_globals`` | A reference to the dictionary | Read-only |\n | | that holds the function\'s | |\n | | global variables --- the global | |\n | | namespace of the module in | |\n | | which the function was defined. | |\n +-------------------------+---------------------------------+-------------+\n | ``func_dict`` | The namespace supporting | Writable |\n | | arbitrary function attributes. | |\n +-------------------------+---------------------------------+-------------+\n | ``func_closure`` | ``None`` or a tuple of cells | Read-only |\n | | that contain bindings for the | |\n | | function\'s free variables. | |\n +-------------------------+---------------------------------+-------------+\n\n Most of the attributes labelled "Writable" check the type of the\n assigned value.\n\n Changed in version 2.4: ``func_name`` is now writable.\n\n Function objects also support getting and setting arbitrary\n attributes, which can be used, for example, to attach metadata\n to functions. Regular attribute dot-notation is used to get and\n set such attributes. *Note that the current implementation only\n supports function attributes on user-defined functions. 
Function\n attributes on built-in functions may be supported in the\n future.*\n\n Additional information about a function\'s definition can be\n retrieved from its code object; see the description of internal\n types below.\n\n User-defined methods\n A user-defined method object combines a class, a class instance\n (or ``None``) and any callable object (normally a user-defined\n function).\n\n Special read-only attributes: ``im_self`` is the class instance\n object, ``im_func`` is the function object; ``im_class`` is the\n class of ``im_self`` for bound methods or the class that asked\n for the method for unbound methods; ``__doc__`` is the method\'s\n documentation (same as ``im_func.__doc__``); ``__name__`` is the\n method name (same as ``im_func.__name__``); ``__module__`` is\n the name of the module the method was defined in, or ``None`` if\n unavailable.\n\n Changed in version 2.2: ``im_self`` used to refer to the class\n that defined the method.\n\n Changed in version 2.6: For 3.0 forward-compatibility,\n ``im_func`` is also available as ``__func__``, and ``im_self``\n as ``__self__``.\n\n Methods also support accessing (but not setting) the arbitrary\n function attributes on the underlying function object.\n\n User-defined method objects may be created when getting an\n attribute of a class (perhaps via an instance of that class), if\n that attribute is a user-defined function object, an unbound\n user-defined method object, or a class method object. When the\n attribute is a user-defined method object, a new method object\n is only created if the class from which it is being retrieved is\n the same as, or a derived class of, the class stored in the\n original method object; otherwise, the original method object is\n used as it is.\n\n When a user-defined method object is created by retrieving a\n user-defined function object from a class, its ``im_self``\n attribute is ``None`` and the method object is said to be\n unbound. When one is created by retrieving a user-defined\n function object from a class via one of its instances, its\n ``im_self`` attribute is the instance, and the method object is\n said to be bound. In either case, the new method\'s ``im_class``\n attribute is the class from which the retrieval takes place, and\n its ``im_func`` attribute is the original function object.\n\n When a user-defined method object is created by retrieving\n another method object from a class or instance, the behaviour is\n the same as for a function object, except that the ``im_func``\n attribute of the new instance is not the original method object\n but its ``im_func`` attribute.\n\n When a user-defined method object is created by retrieving a\n class method object from a class or instance, its ``im_self``\n attribute is the class itself (the same as the ``im_class``\n attribute), and its ``im_func`` attribute is the function object\n underlying the class method.\n\n When an unbound user-defined method object is called, the\n underlying function (``im_func``) is called, with the\n restriction that the first argument must be an instance of the\n proper class (``im_class``) or of a derived class thereof.\n\n When a bound user-defined method object is called, the\n underlying function (``im_func``) is called, inserting the class\n instance (``im_self``) in front of the argument list. 
For\n instance, when ``C`` is a class which contains a definition for\n a function ``f()``, and ``x`` is an instance of ``C``, calling\n ``x.f(1)`` is equivalent to calling ``C.f(x, 1)``.\n\n When a user-defined method object is derived from a class method\n object, the "class instance" stored in ``im_self`` will actually\n be the class itself, so that calling either ``x.f(1)`` or\n ``C.f(1)`` is equivalent to calling ``f(C,1)`` where ``f`` is\n the underlying function.\n\n Note that the transformation from function object to (unbound or\n bound) method object happens each time the attribute is\n retrieved from the class or instance. In some cases, a fruitful\n optimization is to assign the attribute to a local variable and\n call that local variable. Also notice that this transformation\n only happens for user-defined functions; other callable objects\n (and all non-callable objects) are retrieved without\n transformation. It is also important to note that user-defined\n functions which are attributes of a class instance are not\n converted to bound methods; this *only* happens when the\n function is an attribute of the class.\n\n Generator functions\n A function or method which uses the ``yield`` statement (see\n section *The yield statement*) is called a *generator function*.\n Such a function, when called, always returns an iterator object\n which can be used to execute the body of the function: calling\n the iterator\'s ``next()`` method will cause the function to\n execute until it provides a value using the ``yield`` statement.\n When the function executes a ``return`` statement or falls off\n the end, a ``StopIteration`` exception is raised and the\n iterator will have reached the end of the set of values to be\n returned.\n\n Built-in functions\n A built-in function object is a wrapper around a C function.\n Examples of built-in functions are ``len()`` and ``math.sin()``\n (``math`` is a standard built-in module). The number and type of\n the arguments are determined by the C function. Special read-\n only attributes: ``__doc__`` is the function\'s documentation\n string, or ``None`` if unavailable; ``__name__`` is the\n function\'s name; ``__self__`` is set to ``None`` (but see the\n next item); ``__module__`` is the name of the module the\n function was defined in or ``None`` if unavailable.\n\n Built-in methods\n This is really a different disguise of a built-in function, this\n time containing an object passed to the C function as an\n implicit extra argument. An example of a built-in method is\n ``alist.append()``, assuming *alist* is a list object. In this\n case, the special read-only attribute ``__self__`` is set to the\n object denoted by *alist*.\n\n Class Types\n Class types, or "new-style classes," are callable. These\n objects normally act as factories for new instances of\n themselves, but variations are possible for class types that\n override ``__new__()``. The arguments of the call are passed to\n ``__new__()`` and, in the typical case, to ``__init__()`` to\n initialize the new instance.\n\n Classic Classes\n Class objects are described below. When a class object is\n called, a new class instance (also described below) is created\n and returned. This implies a call to the class\'s ``__init__()``\n method if it has one. Any arguments are passed on to the\n ``__init__()`` method. If there is no ``__init__()`` method,\n the class must be called without arguments.\n\n Class instances\n Class instances are described below. 
Class instances are\n callable only when the class has a ``__call__()`` method;\n ``x(arguments)`` is a shorthand for ``x.__call__(arguments)``.\n\nModules\n Modules are imported by the ``import`` statement (see section *The\n import statement*). A module object has a namespace implemented by\n a dictionary object (this is the dictionary referenced by the\n func_globals attribute of functions defined in the module).\n Attribute references are translated to lookups in this dictionary,\n e.g., ``m.x`` is equivalent to ``m.__dict__["x"]``. A module object\n does not contain the code object used to initialize the module\n (since it isn\'t needed once the initialization is done).\n\n Attribute assignment updates the module\'s namespace dictionary,\n e.g., ``m.x = 1`` is equivalent to ``m.__dict__["x"] = 1``.\n\n Special read-only attribute: ``__dict__`` is the module\'s namespace\n as a dictionary object.\n\n **CPython implementation detail:** Because of the way CPython\n clears module dictionaries, the module dictionary will be cleared\n when the module falls out of scope even if the dictionary still has\n live references. To avoid this, copy the dictionary or keep the\n module around while using its dictionary directly.\n\n Predefined (writable) attributes: ``__name__`` is the module\'s\n name; ``__doc__`` is the module\'s documentation string, or ``None``\n if unavailable; ``__file__`` is the pathname of the file from which\n the module was loaded, if it was loaded from a file. The\n ``__file__`` attribute is not present for C modules that are\n statically linked into the interpreter; for extension modules\n loaded dynamically from a shared library, it is the pathname of the\n shared library file.\n\nClasses\n Both class types (new-style classes) and class objects (old-\n style/classic classes) are typically created by class definitions\n (see section *Class definitions*). A class has a namespace\n implemented by a dictionary object. Class attribute references are\n translated to lookups in this dictionary, e.g., ``C.x`` is\n translated to ``C.__dict__["x"]`` (although for new-style classes\n in particular there are a number of hooks which allow for other\n means of locating attributes). When the attribute name is not found\n there, the attribute search continues in the base classes. For\n old-style classes, the search is depth-first, left-to-right in the\n order of occurrence in the base class list. New-style classes use\n the more complex C3 method resolution order which behaves correctly\n even in the presence of \'diamond\' inheritance structures where\n there are multiple inheritance paths leading back to a common\n ancestor. Additional details on the C3 MRO used by new-style\n classes can be found in the documentation accompanying the 2.3\n release at http://www.python.org/download/releases/2.3/mro/.\n\n When a class attribute reference (for class ``C``, say) would yield\n a user-defined function object or an unbound user-defined method\n object whose associated class is either ``C`` or one of its base\n classes, it is transformed into an unbound user-defined method\n object whose ``im_class`` attribute is ``C``. When it would yield a\n class method object, it is transformed into a bound user-defined\n method object whose ``im_class`` and ``im_self`` attributes are\n both ``C``. 
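The difference between the old-style depth-first lookup and the new-style C3 method resolution order described above can be seen on a small diamond hierarchy (made-up classes ``A`` through ``D``, Python 2.x assumed):

   >>> class A(object):
   ...     attr = 'A'
   ...
   >>> class B(A):
   ...     pass
   ...
   >>> class C(A):
   ...     attr = 'C'
   ...
   >>> class D(B, C):
   ...     pass
   ...
   >>> D.attr                          # the C3 order reaches C before A
   'C'
   >>> [cls.__name__ for cls in D.__mro__]
   ['D', 'B', 'C', 'A', 'object']

With classic classes (dropping the ``object`` base) the depth-first, left-to-right search reaches ``A`` through ``B`` first, so the same lookup yields ``'A'`` instead.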
When it would yield a static method object, it is\n transformed into the object wrapped by the static method object.\n See section *Implementing Descriptors* for another way in which\n attributes retrieved from a class may differ from those actually\n contained in its ``__dict__`` (note that only new-style classes\n support descriptors).\n\n Class attribute assignments update the class\'s dictionary, never\n the dictionary of a base class.\n\n A class object can be called (see above) to yield a class instance\n (see below).\n\n Special attributes: ``__name__`` is the class name; ``__module__``\n is the module name in which the class was defined; ``__dict__`` is\n the dictionary containing the class\'s namespace; ``__bases__`` is a\n tuple (possibly empty or a singleton) containing the base classes,\n in the order of their occurrence in the base class list;\n ``__doc__`` is the class\'s documentation string, or None if\n undefined.\n\nClass instances\n A class instance is created by calling a class object (see above).\n A class instance has a namespace implemented as a dictionary which\n is the first place in which attribute references are searched.\n When an attribute is not found there, and the instance\'s class has\n an attribute by that name, the search continues with the class\n attributes. If a class attribute is found that is a user-defined\n function object or an unbound user-defined method object whose\n associated class is the class (call it ``C``) of the instance for\n which the attribute reference was initiated or one of its bases, it\n is transformed into a bound user-defined method object whose\n ``im_class`` attribute is ``C`` and whose ``im_self`` attribute is\n the instance. Static method and class method objects are also\n transformed, as if they had been retrieved from class ``C``; see\n above under "Classes". See section *Implementing Descriptors* for\n another way in which attributes of a class retrieved via its\n instances may differ from the objects actually stored in the\n class\'s ``__dict__``. If no class attribute is found, and the\n object\'s class has a ``__getattr__()`` method, that is called to\n satisfy the lookup.\n\n Attribute assignments and deletions update the instance\'s\n dictionary, never a class\'s dictionary. If the class has a\n ``__setattr__()`` or ``__delattr__()`` method, this is called\n instead of updating the instance dictionary directly.\n\n Class instances can pretend to be numbers, sequences, or mappings\n if they have methods with certain special names. See section\n *Special method names*.\n\n Special attributes: ``__dict__`` is the attribute dictionary;\n ``__class__`` is the instance\'s class.\n\nFiles\n A file object represents an open file. File objects are created by\n the ``open()`` built-in function, and also by ``os.popen()``,\n ``os.fdopen()``, and the ``makefile()`` method of socket objects\n (and perhaps by other functions or methods provided by extension\n modules). The objects ``sys.stdin``, ``sys.stdout`` and\n ``sys.stderr`` are initialized to file objects corresponding to the\n interpreter\'s standard input, output and error streams. See *File\n Objects* for complete documentation of file objects.\n\nInternal types\n A few types used internally by the interpreter are exposed to the\n user. Their definitions may change with future versions of the\n interpreter, but they are mentioned here for completeness.\n\n Code objects\n Code objects represent *byte-compiled* executable Python code,\n or *bytecode*. 
The difference between a code object and a\n function object is that the function object contains an explicit\n reference to the function\'s globals (the module in which it was\n defined), while a code object contains no context; also the\n default argument values are stored in the function object, not\n in the code object (because they represent values calculated at\n run-time). Unlike function objects, code objects are immutable\n and contain no references (directly or indirectly) to mutable\n objects.\n\n Special read-only attributes: ``co_name`` gives the function\n name; ``co_argcount`` is the number of positional arguments\n (including arguments with default values); ``co_nlocals`` is the\n number of local variables used by the function (including\n arguments); ``co_varnames`` is a tuple containing the names of\n the local variables (starting with the argument names);\n ``co_cellvars`` is a tuple containing the names of local\n variables that are referenced by nested functions;\n ``co_freevars`` is a tuple containing the names of free\n variables; ``co_code`` is a string representing the sequence of\n bytecode instructions; ``co_consts`` is a tuple containing the\n literals used by the bytecode; ``co_names`` is a tuple\n containing the names used by the bytecode; ``co_filename`` is\n the filename from which the code was compiled;\n ``co_firstlineno`` is the first line number of the function;\n ``co_lnotab`` is a string encoding the mapping from bytecode\n offsets to line numbers (for details see the source code of the\n interpreter); ``co_stacksize`` is the required stack size\n (including local variables); ``co_flags`` is an integer encoding\n a number of flags for the interpreter.\n\n The following flag bits are defined for ``co_flags``: bit\n ``0x04`` is set if the function uses the ``*arguments`` syntax\n to accept an arbitrary number of positional arguments; bit\n ``0x08`` is set if the function uses the ``**keywords`` syntax\n to accept arbitrary keyword arguments; bit ``0x20`` is set if\n the function is a generator.\n\n Future feature declarations (``from __future__ import\n division``) also use bits in ``co_flags`` to indicate whether a\n code object was compiled with a particular feature enabled: bit\n ``0x2000`` is set if the function was compiled with future\n division enabled; bits ``0x10`` and ``0x1000`` were used in\n earlier versions of Python.\n\n Other bits in ``co_flags`` are reserved for internal use.\n\n If a code object represents a function, the first item in\n ``co_consts`` is the documentation string of the function, or\n ``None`` if undefined.\n\n Frame objects\n Frame objects represent execution frames. 
They may occur in\n traceback objects (see below).\n\n Special read-only attributes: ``f_back`` is to the previous\n stack frame (towards the caller), or ``None`` if this is the\n bottom stack frame; ``f_code`` is the code object being executed\n in this frame; ``f_locals`` is the dictionary used to look up\n local variables; ``f_globals`` is used for global variables;\n ``f_builtins`` is used for built-in (intrinsic) names;\n ``f_restricted`` is a flag indicating whether the function is\n executing in restricted execution mode; ``f_lasti`` gives the\n precise instruction (this is an index into the bytecode string\n of the code object).\n\n Special writable attributes: ``f_trace``, if not ``None``, is a\n function called at the start of each source code line (this is\n used by the debugger); ``f_exc_type``, ``f_exc_value``,\n ``f_exc_traceback`` represent the last exception raised in the\n parent frame provided another exception was ever raised in the\n current frame (in all other cases they are None); ``f_lineno``\n is the current line number of the frame --- writing to this from\n within a trace function jumps to the given line (only for the\n bottom-most frame). A debugger can implement a Jump command\n (aka Set Next Statement) by writing to f_lineno.\n\n Traceback objects\n Traceback objects represent a stack trace of an exception. A\n traceback object is created when an exception occurs. When the\n search for an exception handler unwinds the execution stack, at\n each unwound level a traceback object is inserted in front of\n the current traceback. When an exception handler is entered,\n the stack trace is made available to the program. (See section\n *The try statement*.) It is accessible as ``sys.exc_traceback``,\n and also as the third item of the tuple returned by\n ``sys.exc_info()``. The latter is the preferred interface,\n since it works correctly when the program is using multiple\n threads. When the program contains no suitable handler, the\n stack trace is written (nicely formatted) to the standard error\n stream; if the interpreter is interactive, it is also made\n available to the user as ``sys.last_traceback``.\n\n Special read-only attributes: ``tb_next`` is the next level in\n the stack trace (towards the frame where the exception\n occurred), or ``None`` if there is no next level; ``tb_frame``\n points to the execution frame of the current level;\n ``tb_lineno`` gives the line number where the exception\n occurred; ``tb_lasti`` indicates the precise instruction. The\n line number and last instruction in the traceback may differ\n from the line number of its frame object if the exception\n occurred in a ``try`` statement with no matching except clause\n or with a finally clause.\n\n Slice objects\n Slice objects are used to represent slices when *extended slice\n syntax* is used. This is a slice using two colons, or multiple\n slices or ellipses separated by commas, e.g., ``a[i:j:step]``,\n ``a[i:j, k:l]``, or ``a[..., i:j]``. They are also created by\n the built-in ``slice()`` function.\n\n Special read-only attributes: ``start`` is the lower bound;\n ``stop`` is the upper bound; ``step`` is the step value; each is\n ``None`` if omitted. These attributes can have any type.\n\n Slice objects support one method:\n\n slice.indices(self, length)\n\n This method takes a single integer argument *length* and\n computes information about the extended slice that the slice\n object would describe if applied to a sequence of *length*\n items. 
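
A short sketch of ``slice.indices()`` under the rules described here (Python 2.3 and later); the slice values are illustrative only:

    >>> s = slice(None, None, 2)
    >>> s.start, s.stop, s.step
    (None, None, 2)
    >>> s.indices(10)                     # as if applied to a sequence of length 10
    (0, 10, 2)
    >>> slice(-3, None).indices(5)        # negative start is taken from the end
    (2, 5, 1)
    >>> slice(None, None, -1).indices(4)  # "end" values depend on the sign of the step
    (3, -1, -1)
    >>> range(10)[slice(*s.indices(10))] == range(10)[::2]
    True
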
It returns a tuple of three integers; respectively\n these are the *start* and *stop* indices and the *step* or\n stride length of the slice. Missing or out-of-bounds indices\n are handled in a manner consistent with regular slices.\n\n New in version 2.3.\n\n Static method objects\n Static method objects provide a way of defeating the\n transformation of function objects to method objects described\n above. A static method object is a wrapper around any other\n object, usually a user-defined method object. When a static\n method object is retrieved from a class or a class instance, the\n object actually returned is the wrapped object, which is not\n subject to any further transformation. Static method objects are\n not themselves callable, although the objects they wrap usually\n are. Static method objects are created by the built-in\n ``staticmethod()`` constructor.\n\n Class method objects\n A class method object, like a static method object, is a wrapper\n around another object that alters the way in which that object\n is retrieved from classes and class instances. The behaviour of\n class method objects upon such retrieval is described above,\n under "User-defined methods". Class method objects are created\n by the built-in ``classmethod()`` constructor.\n', 'typesfunctions': u'\nFunctions\n*********\n\nFunction objects are created by function definitions. The only\noperation on a function object is to call it: ``func(argument-list)``.\n\nThere are really two flavors of function objects: built-in functions\nand user-defined functions. Both support the same operation (to call\nthe function), but the implementation is different, hence the\ndifferent object types.\n\nSee *Function definitions* for more information.\n', - 'typesmapping': u'\nMapping Types --- ``dict``\n**************************\n\nA *mapping* object maps *hashable* values to arbitrary objects.\nMappings are mutable objects. There is currently only one standard\nmapping type, the *dictionary*. (For other containers see the built\nin ``list``, ``set``, and ``tuple`` classes, and the ``collections``\nmodule.)\n\nA dictionary\'s keys are *almost* arbitrary values. Values that are\nnot *hashable*, that is, values containing lists, dictionaries or\nother mutable types (that are compared by value rather than by object\nidentity) may not be used as keys. Numeric types used for keys obey\nthe normal rules for numeric comparison: if two numbers compare equal\n(such as ``1`` and ``1.0``) then they can be used interchangeably to\nindex the same dictionary entry. (Note however, that since computers\nstore floating-point numbers as approximations it is usually unwise to\nuse them as dictionary keys.)\n\nDictionaries can be created by placing a comma-separated list of\n``key: value`` pairs within braces, for example: ``{\'jack\': 4098,\n\'sjoerd\': 4127}`` or ``{4098: \'jack\', 4127: \'sjoerd\'}``, or by the\n``dict`` constructor.\n\nclass class dict([arg])\n\n Return a new dictionary initialized from an optional positional\n argument or from a set of keyword arguments. If no arguments are\n given, return a new empty dictionary. If the positional argument\n *arg* is a mapping object, return a dictionary mapping the same\n keys to the same values as does the mapping object. Otherwise the\n positional argument must be a sequence, a container that supports\n iteration, or an iterator object. The elements of the argument\n must each also be of one of those kinds, and each must in turn\n contain exactly two objects. 
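
A doctest-style sketch of the static method and class method wrappers described above; the class ``C`` and its return values are invented for the example:

    >>> class C(object):
    ...     @staticmethod
    ...     def s():
    ...         return 'static'
    ...     @classmethod
    ...     def c(cls):
    ...         return cls.__name__
    ...
    >>> type(C.__dict__['s'])             # the wrapper stored in the class dict
    <type 'staticmethod'>
    >>> type(C.s)                         # retrieval yields the wrapped function itself
    <type 'function'>
    >>> C.s(), C().s()                    # no implicit first argument either way
    ('static', 'static')
    >>> C.c(), C().c()                    # classmethod binds the class, not the instance
    ('C', 'C')
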
The first is used as a key in the new\n dictionary, and the second as the key\'s value. If a given key is\n seen more than once, the last value associated with it is retained\n in the new dictionary.\n\n If keyword arguments are given, the keywords themselves with their\n associated values are added as items to the dictionary. If a key is\n specified both in the positional argument and as a keyword\n argument, the value associated with the keyword is retained in the\n dictionary. For example, these all return a dictionary equal to\n ``{"one": 2, "two": 3}``:\n\n * ``dict(one=2, two=3)``\n\n * ``dict({\'one\': 2, \'two\': 3})``\n\n * ``dict(zip((\'one\', \'two\'), (2, 3)))``\n\n * ``dict([[\'two\', 3], [\'one\', 2]])``\n\n The first example only works for keys that are valid Python\n identifiers; the others work with any valid keys.\n\n New in version 2.2.\n\n Changed in version 2.3: Support for building a dictionary from\n keyword arguments added.\n\n These are the operations that dictionaries support (and therefore,\n custom mapping types should support too):\n\n len(d)\n\n Return the number of items in the dictionary *d*.\n\n d[key]\n\n Return the item of *d* with key *key*. Raises a ``KeyError`` if\n *key* is not in the map.\n\n New in version 2.5: If a subclass of dict defines a method\n ``__missing__()``, if the key *key* is not present, the\n ``d[key]`` operation calls that method with the key *key* as\n argument. The ``d[key]`` operation then returns or raises\n whatever is returned or raised by the ``__missing__(key)`` call\n if the key is not present. No other operations or methods invoke\n ``__missing__()``. If ``__missing__()`` is not defined,\n ``KeyError`` is raised. ``__missing__()`` must be a method; it\n cannot be an instance variable. For an example, see\n ``collections.defaultdict``.\n\n d[key] = value\n\n Set ``d[key]`` to *value*.\n\n del d[key]\n\n Remove ``d[key]`` from *d*. Raises a ``KeyError`` if *key* is\n not in the map.\n\n key in d\n\n Return ``True`` if *d* has a key *key*, else ``False``.\n\n New in version 2.2.\n\n key not in d\n\n Equivalent to ``not key in d``.\n\n New in version 2.2.\n\n iter(d)\n\n Return an iterator over the keys of the dictionary. This is a\n shortcut for ``iterkeys()``.\n\n clear()\n\n Remove all items from the dictionary.\n\n copy()\n\n Return a shallow copy of the dictionary.\n\n fromkeys(seq[, value])\n\n Create a new dictionary with keys from *seq* and values set to\n *value*.\n\n ``fromkeys()`` is a class method that returns a new dictionary.\n *value* defaults to ``None``.\n\n New in version 2.3.\n\n get(key[, default])\n\n Return the value for *key* if *key* is in the dictionary, else\n *default*. If *default* is not given, it defaults to ``None``,\n so that this method never raises a ``KeyError``.\n\n has_key(key)\n\n Test for the presence of *key* in the dictionary. ``has_key()``\n is deprecated in favor of ``key in d``.\n\n items()\n\n Return a copy of the dictionary\'s list of ``(key, value)``\n pairs.\n\n **CPython implementation detail:** Keys and values are listed in\n an arbitrary order which is non-random, varies across Python\n implementations, and depends on the dictionary\'s history of\n insertions and deletions.\n\n If ``items()``, ``keys()``, ``values()``, ``iteritems()``,\n ``iterkeys()``, and ``itervalues()`` are called with no\n intervening modifications to the dictionary, the lists will\n directly correspond. 
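
A minimal sketch of the ``__missing__()`` hook mentioned above (Python 2.5 and later); the ``Tally`` subclass is invented for the example:

    >>> class Tally(dict):
    ...     def __missing__(self, key):   # called by d[key] when the key is absent
    ...         self[key] = 0
    ...         return 0
    ...
    >>> t = Tally()
    >>> t['spam']                         # no KeyError: __missing__ supplies a default
    0
    >>> t['spam'] += 1
    >>> t
    {'spam': 1}
    >>> t.get('eggs', 'absent')           # get() does not invoke __missing__
    'absent'
    >>> 'eggs' in t
    False
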
This allows the creation of ``(value,\n key)`` pairs using ``zip()``: ``pairs = zip(d.values(),\n d.keys())``. The same relationship holds for the ``iterkeys()``\n and ``itervalues()`` methods: ``pairs = zip(d.itervalues(),\n d.iterkeys())`` provides the same value for ``pairs``. Another\n way to create the same list is ``pairs = [(v, k) for (k, v) in\n d.iteritems()]``.\n\n iteritems()\n\n Return an iterator over the dictionary\'s ``(key, value)`` pairs.\n See the note for ``dict.items()``.\n\n Using ``iteritems()`` while adding or deleting entries in the\n dictionary may raise a ``RuntimeError`` or fail to iterate over\n all entries.\n\n New in version 2.2.\n\n iterkeys()\n\n Return an iterator over the dictionary\'s keys. See the note for\n ``dict.items()``.\n\n Using ``iterkeys()`` while adding or deleting entries in the\n dictionary may raise a ``RuntimeError`` or fail to iterate over\n all entries.\n\n New in version 2.2.\n\n itervalues()\n\n Return an iterator over the dictionary\'s values. See the note\n for ``dict.items()``.\n\n Using ``itervalues()`` while adding or deleting entries in the\n dictionary may raise a ``RuntimeError`` or fail to iterate over\n all entries.\n\n New in version 2.2.\n\n keys()\n\n Return a copy of the dictionary\'s list of keys. See the note\n for ``dict.items()``.\n\n pop(key[, default])\n\n If *key* is in the dictionary, remove it and return its value,\n else return *default*. If *default* is not given and *key* is\n not in the dictionary, a ``KeyError`` is raised.\n\n New in version 2.3.\n\n popitem()\n\n Remove and return an arbitrary ``(key, value)`` pair from the\n dictionary.\n\n ``popitem()`` is useful to destructively iterate over a\n dictionary, as often used in set algorithms. If the dictionary\n is empty, calling ``popitem()`` raises a ``KeyError``.\n\n setdefault(key[, default])\n\n If *key* is in the dictionary, return its value. If not, insert\n *key* with a value of *default* and return *default*. *default*\n defaults to ``None``.\n\n update([other])\n\n Update the dictionary with the key/value pairs from *other*,\n overwriting existing keys. Return ``None``.\n\n ``update()`` accepts either another dictionary object or an\n iterable of key/value pairs (as a tuple or other iterable of\n length two). If keyword arguments are specified, the dictionary\n is then updated with those key/value pairs: ``d.update(red=1,\n blue=2)``.\n\n Changed in version 2.4: Allowed the argument to be an iterable\n of key/value pairs and allowed keyword arguments.\n\n values()\n\n Return a copy of the dictionary\'s list of values. See the note\n for ``dict.items()``.\n\n viewitems()\n\n Return a new view of the dictionary\'s items (``(key, value)``\n pairs). See below for documentation of view objects.\n\n New in version 2.7.\n\n viewkeys()\n\n Return a new view of the dictionary\'s keys. See below for\n documentation of view objects.\n\n New in version 2.7.\n\n viewvalues()\n\n Return a new view of the dictionary\'s values. See below for\n documentation of view objects.\n\n New in version 2.7.\n\n\nDictionary view objects\n=======================\n\nThe objects returned by ``dict.viewkeys()``, ``dict.viewvalues()`` and\n``dict.viewitems()`` are *view objects*. 
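
A short doctest-style sketch of ``setdefault()``, ``pop()`` and ``update()`` as described above; the dictionary contents are illustrative only:

    >>> d = {'red': 1}
    >>> d.setdefault('red', 99)           # key present: existing value is returned
    1
    >>> d.setdefault('blue', 2)           # key absent: inserted and returned
    2
    >>> d.pop('red')
    1
    >>> d.pop('red', 'gone')              # default avoids the KeyError
    'gone'
    >>> d.update([('green', 3)], blue=20) # iterable of pairs plus keyword arguments
    >>> sorted(d.items())
    [('blue', 20), ('green', 3)]
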
They provide a dynamic view\non the dictionary\'s entries, which means that when the dictionary\nchanges, the view reflects these changes.\n\nDictionary views can be iterated over to yield their respective data,\nand support membership tests:\n\nlen(dictview)\n\n Return the number of entries in the dictionary.\n\niter(dictview)\n\n Return an iterator over the keys, values or items (represented as\n tuples of ``(key, value)``) in the dictionary.\n\n Keys and values are iterated over in an arbitrary order which is\n non-random, varies across Python implementations, and depends on\n the dictionary\'s history of insertions and deletions. If keys,\n values and items views are iterated over with no intervening\n modifications to the dictionary, the order of items will directly\n correspond. This allows the creation of ``(value, key)`` pairs\n using ``zip()``: ``pairs = zip(d.values(), d.keys())``. Another\n way to create the same list is ``pairs = [(v, k) for (k, v) in\n d.items()]``.\n\n Iterating views while adding or deleting entries in the dictionary\n may raise a ``RuntimeError`` or fail to iterate over all entries.\n\nx in dictview\n\n Return ``True`` if *x* is in the underlying dictionary\'s keys,\n values or items (in the latter case, *x* should be a ``(key,\n value)`` tuple).\n\nKeys views are set-like since their entries are unique and hashable.\nIf all values are hashable, so that (key, value) pairs are unique and\nhashable, then the items view is also set-like. (Values views are not\ntreated as set-like since the entries are generally not unique.) Then\nthese set operations are available ("other" refers either to another\nview or a set):\n\ndictview & other\n\n Return the intersection of the dictview and the other object as a\n new set.\n\ndictview | other\n\n Return the union of the dictview and the other object as a new set.\n\ndictview - other\n\n Return the difference between the dictview and the other object\n (all elements in *dictview* that aren\'t in *other*) as a new set.\n\ndictview ^ other\n\n Return the symmetric difference (all elements either in *dictview*\n or *other*, but not in both) of the dictview and the other object\n as a new set.\n\nAn example of dictionary view usage:\n\n >>> dishes = {\'eggs\': 2, \'sausage\': 1, \'bacon\': 1, \'spam\': 500}\n >>> keys = dishes.viewkeys()\n >>> values = dishes.viewvalues()\n\n >>> # iteration\n >>> n = 0\n >>> for val in values:\n ... n += val\n >>> print(n)\n 504\n\n >>> # keys and values are iterated over in the same order\n >>> list(keys)\n [\'eggs\', \'bacon\', \'sausage\', \'spam\']\n >>> list(values)\n [2, 1, 1, 500]\n\n >>> # view objects are dynamic and reflect dict changes\n >>> del dishes[\'eggs\']\n >>> del dishes[\'sausage\']\n >>> list(keys)\n [\'spam\', \'bacon\']\n\n >>> # set operations\n >>> keys & {\'eggs\', \'bacon\', \'salad\'}\n {\'bacon\'}\n', + 'typesmapping': u'\nMapping Types --- ``dict``\n**************************\n\nA *mapping* object maps *hashable* values to arbitrary objects.\nMappings are mutable objects. There is currently only one standard\nmapping type, the *dictionary*. (For other containers see the built\nin ``list``, ``set``, and ``tuple`` classes, and the ``collections``\nmodule.)\n\nA dictionary\'s keys are *almost* arbitrary values. Values that are\nnot *hashable*, that is, values containing lists, dictionaries or\nother mutable types (that are compared by value rather than by object\nidentity) may not be used as keys. 
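
A small sketch of the hashability requirement just stated; the keys and values are illustrative only:

    >>> d = {}
    >>> d[('a', 1)] = 'tuples are hashable'
    >>> d[frozenset([1, 2])] = 'so are frozensets'
    >>> try:
    ...     d[[1, 2]] = 'lists are mutable'
    ... except TypeError as e:
    ...     print e
    ...
    unhashable type: 'list'
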
Numeric types used for keys obey\nthe normal rules for numeric comparison: if two numbers compare equal\n(such as ``1`` and ``1.0``) then they can be used interchangeably to\nindex the same dictionary entry. (Note however, that since computers\nstore floating-point numbers as approximations it is usually unwise to\nuse them as dictionary keys.)\n\nDictionaries can be created by placing a comma-separated list of\n``key: value`` pairs within braces, for example: ``{\'jack\': 4098,\n\'sjoerd\': 4127}`` or ``{4098: \'jack\', 4127: \'sjoerd\'}``, or by the\n``dict`` constructor.\n\nclass class dict([arg])\n\n Return a new dictionary initialized from an optional positional\n argument or from a set of keyword arguments. If no arguments are\n given, return a new empty dictionary. If the positional argument\n *arg* is a mapping object, return a dictionary mapping the same\n keys to the same values as does the mapping object. Otherwise the\n positional argument must be a sequence, a container that supports\n iteration, or an iterator object. The elements of the argument\n must each also be of one of those kinds, and each must in turn\n contain exactly two objects. The first is used as a key in the new\n dictionary, and the second as the key\'s value. If a given key is\n seen more than once, the last value associated with it is retained\n in the new dictionary.\n\n If keyword arguments are given, the keywords themselves with their\n associated values are added as items to the dictionary. If a key is\n specified both in the positional argument and as a keyword\n argument, the value associated with the keyword is retained in the\n dictionary. For example, these all return a dictionary equal to\n ``{"one": 1, "two": 2}``:\n\n * ``dict(one=1, two=2)``\n\n * ``dict({\'one\': 1, \'two\': 2})``\n\n * ``dict(zip((\'one\', \'two\'), (1, 2)))``\n\n * ``dict([[\'two\', 2], [\'one\', 1]])``\n\n The first example only works for keys that are valid Python\n identifiers; the others work with any valid keys.\n\n New in version 2.2.\n\n Changed in version 2.3: Support for building a dictionary from\n keyword arguments added.\n\n These are the operations that dictionaries support (and therefore,\n custom mapping types should support too):\n\n len(d)\n\n Return the number of items in the dictionary *d*.\n\n d[key]\n\n Return the item of *d* with key *key*. Raises a ``KeyError`` if\n *key* is not in the map.\n\n New in version 2.5: If a subclass of dict defines a method\n ``__missing__()``, if the key *key* is not present, the\n ``d[key]`` operation calls that method with the key *key* as\n argument. The ``d[key]`` operation then returns or raises\n whatever is returned or raised by the ``__missing__(key)`` call\n if the key is not present. No other operations or methods invoke\n ``__missing__()``. If ``__missing__()`` is not defined,\n ``KeyError`` is raised. ``__missing__()`` must be a method; it\n cannot be an instance variable. For an example, see\n ``collections.defaultdict``.\n\n d[key] = value\n\n Set ``d[key]`` to *value*.\n\n del d[key]\n\n Remove ``d[key]`` from *d*. Raises a ``KeyError`` if *key* is\n not in the map.\n\n key in d\n\n Return ``True`` if *d* has a key *key*, else ``False``.\n\n New in version 2.2.\n\n key not in d\n\n Equivalent to ``not key in d``.\n\n New in version 2.2.\n\n iter(d)\n\n Return an iterator over the keys of the dictionary. 
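
A doctest-style sketch of the numeric-key rule quoted above; the contents are illustrative only:

    >>> d = {1: 'one'}
    >>> d[1.0]                            # 1 and 1.0 index the same entry
    'one'
    >>> d[1.0] = 'ONE'
    >>> d                                 # still one entry; the original key is kept
    {1: 'ONE'}
    >>> True in d                         # bool is a subclass of int, so True == 1
    True
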
This is a\n shortcut for ``iterkeys()``.\n\n clear()\n\n Remove all items from the dictionary.\n\n copy()\n\n Return a shallow copy of the dictionary.\n\n fromkeys(seq[, value])\n\n Create a new dictionary with keys from *seq* and values set to\n *value*.\n\n ``fromkeys()`` is a class method that returns a new dictionary.\n *value* defaults to ``None``.\n\n New in version 2.3.\n\n get(key[, default])\n\n Return the value for *key* if *key* is in the dictionary, else\n *default*. If *default* is not given, it defaults to ``None``,\n so that this method never raises a ``KeyError``.\n\n has_key(key)\n\n Test for the presence of *key* in the dictionary. ``has_key()``\n is deprecated in favor of ``key in d``.\n\n items()\n\n Return a copy of the dictionary\'s list of ``(key, value)``\n pairs.\n\n **CPython implementation detail:** Keys and values are listed in\n an arbitrary order which is non-random, varies across Python\n implementations, and depends on the dictionary\'s history of\n insertions and deletions.\n\n If ``items()``, ``keys()``, ``values()``, ``iteritems()``,\n ``iterkeys()``, and ``itervalues()`` are called with no\n intervening modifications to the dictionary, the lists will\n directly correspond. This allows the creation of ``(value,\n key)`` pairs using ``zip()``: ``pairs = zip(d.values(),\n d.keys())``. The same relationship holds for the ``iterkeys()``\n and ``itervalues()`` methods: ``pairs = zip(d.itervalues(),\n d.iterkeys())`` provides the same value for ``pairs``. Another\n way to create the same list is ``pairs = [(v, k) for (k, v) in\n d.iteritems()]``.\n\n iteritems()\n\n Return an iterator over the dictionary\'s ``(key, value)`` pairs.\n See the note for ``dict.items()``.\n\n Using ``iteritems()`` while adding or deleting entries in the\n dictionary may raise a ``RuntimeError`` or fail to iterate over\n all entries.\n\n New in version 2.2.\n\n iterkeys()\n\n Return an iterator over the dictionary\'s keys. See the note for\n ``dict.items()``.\n\n Using ``iterkeys()`` while adding or deleting entries in the\n dictionary may raise a ``RuntimeError`` or fail to iterate over\n all entries.\n\n New in version 2.2.\n\n itervalues()\n\n Return an iterator over the dictionary\'s values. See the note\n for ``dict.items()``.\n\n Using ``itervalues()`` while adding or deleting entries in the\n dictionary may raise a ``RuntimeError`` or fail to iterate over\n all entries.\n\n New in version 2.2.\n\n keys()\n\n Return a copy of the dictionary\'s list of keys. See the note\n for ``dict.items()``.\n\n pop(key[, default])\n\n If *key* is in the dictionary, remove it and return its value,\n else return *default*. If *default* is not given and *key* is\n not in the dictionary, a ``KeyError`` is raised.\n\n New in version 2.3.\n\n popitem()\n\n Remove and return an arbitrary ``(key, value)`` pair from the\n dictionary.\n\n ``popitem()`` is useful to destructively iterate over a\n dictionary, as often used in set algorithms. If the dictionary\n is empty, calling ``popitem()`` raises a ``KeyError``.\n\n setdefault(key[, default])\n\n If *key* is in the dictionary, return its value. If not, insert\n *key* with a value of *default* and return *default*. *default*\n defaults to ``None``.\n\n update([other])\n\n Update the dictionary with the key/value pairs from *other*,\n overwriting existing keys. Return ``None``.\n\n ``update()`` accepts either another dictionary object or an\n iterable of key/value pairs (as tuples or other iterables of\n length two). 
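
A short sketch of ``fromkeys()``, the shallow ``copy()`` and the deprecated ``has_key()`` noted above; the contents are illustrative only:

    >>> d = dict.fromkeys(['a', 'b'], [])   # the *same* list is shared by both keys
    >>> d['a'].append(1)
    >>> sorted(d.items())
    [('a', [1]), ('b', [1])]
    >>> e = d.copy()                        # shallow copy: values are still shared
    >>> e['a'] is d['a']
    True
    >>> d.has_key('a'), 'a' in d            # has_key() is deprecated in favour of ``in``
    (True, True)
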
If keyword arguments are specified, the dictionary\n is then updated with those key/value pairs: ``d.update(red=1,\n blue=2)``.\n\n Changed in version 2.4: Allowed the argument to be an iterable\n of key/value pairs and allowed keyword arguments.\n\n values()\n\n Return a copy of the dictionary\'s list of values. See the note\n for ``dict.items()``.\n\n viewitems()\n\n Return a new view of the dictionary\'s items (``(key, value)``\n pairs). See below for documentation of view objects.\n\n New in version 2.7.\n\n viewkeys()\n\n Return a new view of the dictionary\'s keys. See below for\n documentation of view objects.\n\n New in version 2.7.\n\n viewvalues()\n\n Return a new view of the dictionary\'s values. See below for\n documentation of view objects.\n\n New in version 2.7.\n\n\nDictionary view objects\n=======================\n\nThe objects returned by ``dict.viewkeys()``, ``dict.viewvalues()`` and\n``dict.viewitems()`` are *view objects*. They provide a dynamic view\non the dictionary\'s entries, which means that when the dictionary\nchanges, the view reflects these changes.\n\nDictionary views can be iterated over to yield their respective data,\nand support membership tests:\n\nlen(dictview)\n\n Return the number of entries in the dictionary.\n\niter(dictview)\n\n Return an iterator over the keys, values or items (represented as\n tuples of ``(key, value)``) in the dictionary.\n\n Keys and values are iterated over in an arbitrary order which is\n non-random, varies across Python implementations, and depends on\n the dictionary\'s history of insertions and deletions. If keys,\n values and items views are iterated over with no intervening\n modifications to the dictionary, the order of items will directly\n correspond. This allows the creation of ``(value, key)`` pairs\n using ``zip()``: ``pairs = zip(d.values(), d.keys())``. Another\n way to create the same list is ``pairs = [(v, k) for (k, v) in\n d.items()]``.\n\n Iterating views while adding or deleting entries in the dictionary\n may raise a ``RuntimeError`` or fail to iterate over all entries.\n\nx in dictview\n\n Return ``True`` if *x* is in the underlying dictionary\'s keys,\n values or items (in the latter case, *x* should be a ``(key,\n value)`` tuple).\n\nKeys views are set-like since their entries are unique and hashable.\nIf all values are hashable, so that (key, value) pairs are unique and\nhashable, then the items view is also set-like. (Values views are not\ntreated as set-like since the entries are generally not unique.) Then\nthese set operations are available ("other" refers either to another\nview or a set):\n\ndictview & other\n\n Return the intersection of the dictview and the other object as a\n new set.\n\ndictview | other\n\n Return the union of the dictview and the other object as a new set.\n\ndictview - other\n\n Return the difference between the dictview and the other object\n (all elements in *dictview* that aren\'t in *other*) as a new set.\n\ndictview ^ other\n\n Return the symmetric difference (all elements either in *dictview*\n or *other*, but not in both) of the dictview and the other object\n as a new set.\n\nAn example of dictionary view usage:\n\n >>> dishes = {\'eggs\': 2, \'sausage\': 1, \'bacon\': 1, \'spam\': 500}\n >>> keys = dishes.viewkeys()\n >>> values = dishes.viewvalues()\n\n >>> # iteration\n >>> n = 0\n >>> for val in values:\n ... 
n += val\n >>> print(n)\n 504\n\n >>> # keys and values are iterated over in the same order\n >>> list(keys)\n [\'eggs\', \'bacon\', \'sausage\', \'spam\']\n >>> list(values)\n [2, 1, 1, 500]\n\n >>> # view objects are dynamic and reflect dict changes\n >>> del dishes[\'eggs\']\n >>> del dishes[\'sausage\']\n >>> list(keys)\n [\'spam\', \'bacon\']\n\n >>> # set operations\n >>> keys & {\'eggs\', \'bacon\', \'salad\'}\n {\'bacon\'}\n', 'typesmethods': u"\nMethods\n*******\n\nMethods are functions that are called using the attribute notation.\nThere are two flavors: built-in methods (such as ``append()`` on\nlists) and class instance methods. Built-in methods are described\nwith the types that support them.\n\nThe implementation adds two special read-only attributes to class\ninstance methods: ``m.im_self`` is the object on which the method\noperates, and ``m.im_func`` is the function implementing the method.\nCalling ``m(arg-1, arg-2, ..., arg-n)`` is completely equivalent to\ncalling ``m.im_func(m.im_self, arg-1, arg-2, ..., arg-n)``.\n\nClass instance methods are either *bound* or *unbound*, referring to\nwhether the method was accessed through an instance or a class,\nrespectively. When a method is unbound, its ``im_self`` attribute\nwill be ``None`` and if called, an explicit ``self`` object must be\npassed as the first argument. In this case, ``self`` must be an\ninstance of the unbound method's class (or a subclass of that class),\notherwise a ``TypeError`` is raised.\n\nLike function objects, methods objects support getting arbitrary\nattributes. However, since method attributes are actually stored on\nthe underlying function object (``meth.im_func``), setting method\nattributes on either bound or unbound methods is disallowed.\nAttempting to set a method attribute results in a ``TypeError`` being\nraised. In order to set a method attribute, you need to explicitly\nset it on the underlying function object:\n\n class C:\n def method(self):\n pass\n\n c = C()\n c.method.im_func.whoami = 'my name is c'\n\nSee *The standard type hierarchy* for more information.\n", 'typesmodules': u"\nModules\n*******\n\nThe only special operation on a module is attribute access:\n``m.name``, where *m* is a module and *name* accesses a name defined\nin *m*'s symbol table. Module attributes can be assigned to. (Note\nthat the ``import`` statement is not, strictly speaking, an operation\non a module object; ``import foo`` does not require a module object\nnamed *foo* to exist, rather it requires an (external) *definition*\nfor a module named *foo* somewhere.)\n\nA special member of every module is ``__dict__``. This is the\ndictionary containing the module's symbol table. Modifying this\ndictionary will actually change the module's symbol table, but direct\nassignment to the ``__dict__`` attribute is not possible (you can\nwrite ``m.__dict__['a'] = 1``, which defines ``m.a`` to be ``1``, but\nyou can't write ``m.__dict__ = {}``). Modifying ``__dict__`` directly\nis not recommended.\n\nModules built into the interpreter are written like this: ````. 
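
A minimal sketch of the ``im_self``/``im_func`` equivalence described under *Methods*; the class ``C`` is invented for the example:

    >>> class C(object):
    ...     def method(self, x):
    ...         return (self, x)
    ...
    >>> c = C()
    >>> bound = c.method
    >>> bound.im_self is c and bound.im_func is C.__dict__['method']
    True
    >>> bound(1) == bound.im_func(bound.im_self, 1)   # the documented equivalence
    True
    >>> unbound = C.method
    >>> unbound.im_self is None                       # unbound: explicit self required
    True
    >>> unbound(c, 2) == (c, 2)
    True
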
If loaded from a file, they are written as\n````.\n", - 'typesseq': u'\nSequence Types --- ``str``, ``unicode``, ``list``, ``tuple``, ``buffer``, ``xrange``\n************************************************************************************\n\nThere are six sequence types: strings, Unicode strings, lists, tuples,\nbuffers, and xrange objects.\n\nFor other containers see the built in ``dict`` and ``set`` classes,\nand the ``collections`` module.\n\nString literals are written in single or double quotes: ``\'xyzzy\'``,\n``"frobozz"``. See *String literals* for more about string literals.\nUnicode strings are much like strings, but are specified in the syntax\nusing a preceding ``\'u\'`` character: ``u\'abc\'``, ``u"def"``. In\naddition to the functionality described here, there are also string-\nspecific methods described in the *String Methods* section. Lists are\nconstructed with square brackets, separating items with commas: ``[a,\nb, c]``. Tuples are constructed by the comma operator (not within\nsquare brackets), with or without enclosing parentheses, but an empty\ntuple must have the enclosing parentheses, such as ``a, b, c`` or\n``()``. A single item tuple must have a trailing comma, such as\n``(d,)``.\n\nBuffer objects are not directly supported by Python syntax, but can be\ncreated by calling the built-in function ``buffer()``. They don\'t\nsupport concatenation or repetition.\n\nObjects of type xrange are similar to buffers in that there is no\nspecific syntax to create them, but they are created using the\n``xrange()`` function. They don\'t support slicing, concatenation or\nrepetition, and using ``in``, ``not in``, ``min()`` or ``max()`` on\nthem is inefficient.\n\nMost sequence types support the following operations. The ``in`` and\n``not in`` operations have the same priorities as the comparison\noperations. The ``+`` and ``*`` operations have the same priority as\nthe corresponding numeric operations. [3] Additional methods are\nprovided for *Mutable Sequence Types*.\n\nThis table lists the sequence operations sorted in ascending priority\n(operations in the same box have the same priority). 
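
A doctest-style illustration of several of the sequence operations listed here (Python 2.x); the tuple ``s`` is invented for the example:

    >>> s = (1, 2, 3)              # a tuple; note ``(d,)`` for a single item
    >>> 2 in s, 5 not in s
    (True, True)
    >>> s + (4,), s * 2            # concatenation and repetition
    ((1, 2, 3, 4), (1, 2, 3, 1, 2, 3))
    >>> s[0], s[-1], s[1:], s[::2]
    (1, 3, (2, 3), (1, 3))
    >>> len(s), min(s), max(s)
    (3, 1, 3)
    >>> 'zz' in 'xyzzy'            # for strings, ``in`` is a substring test
    True
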
In the table,\n*s* and *t* are sequences of the same type; *n*, *i* and *j* are\nintegers:\n\n+--------------------+----------------------------------+------------+\n| Operation | Result | Notes |\n+====================+==================================+============+\n| ``x in s`` | ``True`` if an item of *s* is | (1) |\n| | equal to *x*, else ``False`` | |\n+--------------------+----------------------------------+------------+\n| ``x not in s`` | ``False`` if an item of *s* is | (1) |\n| | equal to *x*, else ``True`` | |\n+--------------------+----------------------------------+------------+\n| ``s + t`` | the concatenation of *s* and *t* | (6) |\n+--------------------+----------------------------------+------------+\n| ``s * n, n * s`` | *n* shallow copies of *s* | (2) |\n| | concatenated | |\n+--------------------+----------------------------------+------------+\n| ``s[i]`` | *i*\'th item of *s*, origin 0 | (3) |\n+--------------------+----------------------------------+------------+\n| ``s[i:j]`` | slice of *s* from *i* to *j* | (3)(4) |\n+--------------------+----------------------------------+------------+\n| ``s[i:j:k]`` | slice of *s* from *i* to *j* | (3)(5) |\n| | with step *k* | |\n+--------------------+----------------------------------+------------+\n| ``len(s)`` | length of *s* | |\n+--------------------+----------------------------------+------------+\n| ``min(s)`` | smallest item of *s* | |\n+--------------------+----------------------------------+------------+\n| ``max(s)`` | largest item of *s* | |\n+--------------------+----------------------------------+------------+\n\nSequence types also support comparisons. In particular, tuples and\nlists are compared lexicographically by comparing corresponding\nelements. This means that to compare equal, every element must compare\nequal and the two sequences must be of the same type and have the same\nlength. (For full details see *Comparisons* in the language\nreference.)\n\nNotes:\n\n1. When *s* is a string or Unicode string object the ``in`` and ``not\n in`` operations act like a substring test. In Python versions\n before 2.3, *x* had to be a string of length 1. In Python 2.3 and\n beyond, *x* may be a string of any length.\n\n2. Values of *n* less than ``0`` are treated as ``0`` (which yields an\n empty sequence of the same type as *s*). Note also that the copies\n are shallow; nested structures are not copied. This often haunts\n new Python programmers; consider:\n\n >>> lists = [[]] * 3\n >>> lists\n [[], [], []]\n >>> lists[0].append(3)\n >>> lists\n [[3], [3], [3]]\n\n What has happened is that ``[[]]`` is a one-element list containing\n an empty list, so all three elements of ``[[]] * 3`` are (pointers\n to) this single empty list. Modifying any of the elements of\n ``lists`` modifies this single list. You can create a list of\n different lists this way:\n\n >>> lists = [[] for i in range(3)]\n >>> lists[0].append(3)\n >>> lists[1].append(5)\n >>> lists[2].append(7)\n >>> lists\n [[3], [5], [7]]\n\n3. If *i* or *j* is negative, the index is relative to the end of the\n string: ``len(s) + i`` or ``len(s) + j`` is substituted. But note\n that ``-0`` is still ``0``.\n\n4. The slice of *s* from *i* to *j* is defined as the sequence of\n items with index *k* such that ``i <= k < j``. If *i* or *j* is\n greater than ``len(s)``, use ``len(s)``. If *i* is omitted or\n ``None``, use ``0``. If *j* is omitted or ``None``, use\n ``len(s)``. If *i* is greater than or equal to *j*, the slice is\n empty.\n\n5. 
The slice of *s* from *i* to *j* with step *k* is defined as the\n sequence of items with index ``x = i + n*k`` such that ``0 <= n <\n (j-i)/k``. In other words, the indices are ``i``, ``i+k``,\n ``i+2*k``, ``i+3*k`` and so on, stopping when *j* is reached (but\n never including *j*). If *i* or *j* is greater than ``len(s)``,\n use ``len(s)``. If *i* or *j* are omitted or ``None``, they become\n "end" values (which end depends on the sign of *k*). Note, *k*\n cannot be zero. If *k* is ``None``, it is treated like ``1``.\n\n6. **CPython implementation detail:** If *s* and *t* are both strings,\n some Python implementations such as CPython can usually perform an\n in-place optimization for assignments of the form ``s = s + t`` or\n ``s += t``. When applicable, this optimization makes quadratic\n run-time much less likely. This optimization is both version and\n implementation dependent. For performance sensitive code, it is\n preferable to use the ``str.join()`` method which assures\n consistent linear concatenation performance across versions and\n implementations.\n\n Changed in version 2.4: Formerly, string concatenation never\n occurred in-place.\n\n\nString Methods\n==============\n\nBelow are listed the string methods which both 8-bit strings and\nUnicode objects support.\n\nIn addition, Python\'s strings support the sequence type methods\ndescribed in the *Sequence Types --- str, unicode, list, tuple,\nbuffer, xrange* section. To output formatted strings use template\nstrings or the ``%`` operator described in the *String Formatting\nOperations* section. Also, see the ``re`` module for string functions\nbased on regular expressions.\n\nstr.capitalize()\n\n Return a copy of the string with only its first character\n capitalized.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.center(width[, fillchar])\n\n Return centered in a string of length *width*. Padding is done\n using the specified *fillchar* (default is a space).\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.count(sub[, start[, end]])\n\n Return the number of non-overlapping occurrences of substring *sub*\n in the range [*start*, *end*]. Optional arguments *start* and\n *end* are interpreted as in slice notation.\n\nstr.decode([encoding[, errors]])\n\n Decodes the string using the codec registered for *encoding*.\n *encoding* defaults to the default string encoding. *errors* may\n be given to set a different error handling scheme. The default is\n ``\'strict\'``, meaning that encoding errors raise ``UnicodeError``.\n Other possible values are ``\'ignore\'``, ``\'replace\'`` and any other\n name registered via ``codecs.register_error()``, see section *Codec\n Base Classes*.\n\n New in version 2.2.\n\n Changed in version 2.3: Support for other error handling schemes\n added.\n\n Changed in version 2.7: Support for keyword arguments added.\n\nstr.encode([encoding[, errors]])\n\n Return an encoded version of the string. Default encoding is the\n current default string encoding. *errors* may be given to set a\n different error handling scheme. The default for *errors* is\n ``\'strict\'``, meaning that encoding errors raise a\n ``UnicodeError``. Other possible values are ``\'ignore\'``,\n ``\'replace\'``, ``\'xmlcharrefreplace\'``, ``\'backslashreplace\'`` and\n any other name registered via ``codecs.register_error()``, see\n section *Codec Base Classes*. 
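
A short sketch of the ``decode()``/``encode()`` round trip and the error handlers named above (Python 2.x); the byte values are illustrative only:

    >>> u = 'caf\xc3\xa9'.decode('utf-8')        # byte string -> unicode
    >>> u
    u'caf\xe9'
    >>> u.encode('utf-8')                        # unicode -> byte string
    'caf\xc3\xa9'
    >>> u.encode('ascii', 'replace')             # lossy, but no UnicodeError
    'caf?'
    >>> u.encode('ascii', 'xmlcharrefreplace')
    'caf&#233;'
    >>> u.encode('ascii', 'ignore')
    'caf'
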
For a list of possible encodings, see\n section *Standard Encodings*.\n\n New in version 2.0.\n\n Changed in version 2.3: Support for ``\'xmlcharrefreplace\'`` and\n ``\'backslashreplace\'`` and other error handling schemes added.\n\n Changed in version 2.7: Support for keyword arguments added.\n\nstr.endswith(suffix[, start[, end]])\n\n Return ``True`` if the string ends with the specified *suffix*,\n otherwise return ``False``. *suffix* can also be a tuple of\n suffixes to look for. With optional *start*, test beginning at\n that position. With optional *end*, stop comparing at that\n position.\n\n Changed in version 2.5: Accept tuples as *suffix*.\n\nstr.expandtabs([tabsize])\n\n Return a copy of the string where all tab characters are replaced\n by one or more spaces, depending on the current column and the\n given tab size. The column number is reset to zero after each\n newline occurring in the string. If *tabsize* is not given, a tab\n size of ``8`` characters is assumed. This doesn\'t understand other\n non-printing characters or escape sequences.\n\nstr.find(sub[, start[, end]])\n\n Return the lowest index in the string where substring *sub* is\n found, such that *sub* is contained in the slice ``s[start:end]``.\n Optional arguments *start* and *end* are interpreted as in slice\n notation. Return ``-1`` if *sub* is not found.\n\nstr.format(*args, **kwargs)\n\n Perform a string formatting operation. The string on which this\n method is called can contain literal text or replacement fields\n delimited by braces ``{}``. Each replacement field contains either\n the numeric index of a positional argument, or the name of a\n keyword argument. Returns a copy of the string where each\n replacement field is replaced with the string value of the\n corresponding argument.\n\n >>> "The sum of 1 + 2 is {0}".format(1+2)\n \'The sum of 1 + 2 is 3\'\n\n See *Format String Syntax* for a description of the various\n formatting options that can be specified in format strings.\n\n This method of string formatting is the new standard in Python 3.0,\n and should be preferred to the ``%`` formatting described in\n *String Formatting Operations* in new code.\n\n New in version 2.6.\n\nstr.index(sub[, start[, end]])\n\n Like ``find()``, but raise ``ValueError`` when the substring is not\n found.\n\nstr.isalnum()\n\n Return true if all characters in the string are alphanumeric and\n there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isalpha()\n\n Return true if all characters in the string are alphabetic and\n there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isdigit()\n\n Return true if all characters in the string are digits and there is\n at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.islower()\n\n Return true if all cased characters in the string are lowercase and\n there is at least one cased character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isspace()\n\n Return true if there are only whitespace characters in the string\n and there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.istitle()\n\n Return true if the string is a titlecased string and there is at\n least one character, for example uppercase characters may only\n follow uncased characters and lowercase characters only cased ones.\n Return false otherwise.\n\n For 
8-bit strings, this method is locale-dependent.\n\nstr.isupper()\n\n Return true if all cased characters in the string are uppercase and\n there is at least one cased character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.join(iterable)\n\n Return a string which is the concatenation of the strings in the\n *iterable* *iterable*. The separator between elements is the\n string providing this method.\n\nstr.ljust(width[, fillchar])\n\n Return the string left justified in a string of length *width*.\n Padding is done using the specified *fillchar* (default is a\n space). The original string is returned if *width* is less than\n ``len(s)``.\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.lower()\n\n Return a copy of the string converted to lowercase.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.lstrip([chars])\n\n Return a copy of the string with leading characters removed. The\n *chars* argument is a string specifying the set of characters to be\n removed. If omitted or ``None``, the *chars* argument defaults to\n removing whitespace. The *chars* argument is not a prefix; rather,\n all combinations of its values are stripped:\n\n >>> \' spacious \'.lstrip()\n \'spacious \'\n >>> \'www.example.com\'.lstrip(\'cmowz.\')\n \'example.com\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.partition(sep)\n\n Split the string at the first occurrence of *sep*, and return a\n 3-tuple containing the part before the separator, the separator\n itself, and the part after the separator. If the separator is not\n found, return a 3-tuple containing the string itself, followed by\n two empty strings.\n\n New in version 2.5.\n\nstr.replace(old, new[, count])\n\n Return a copy of the string with all occurrences of substring *old*\n replaced by *new*. If the optional argument *count* is given, only\n the first *count* occurrences are replaced.\n\nstr.rfind(sub[, start[, end]])\n\n Return the highest index in the string where substring *sub* is\n found, such that *sub* is contained within ``s[start:end]``.\n Optional arguments *start* and *end* are interpreted as in slice\n notation. Return ``-1`` on failure.\n\nstr.rindex(sub[, start[, end]])\n\n Like ``rfind()`` but raises ``ValueError`` when the substring *sub*\n is not found.\n\nstr.rjust(width[, fillchar])\n\n Return the string right justified in a string of length *width*.\n Padding is done using the specified *fillchar* (default is a\n space). The original string is returned if *width* is less than\n ``len(s)``.\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.rpartition(sep)\n\n Split the string at the last occurrence of *sep*, and return a\n 3-tuple containing the part before the separator, the separator\n itself, and the part after the separator. If the separator is not\n found, return a 3-tuple containing two empty strings, followed by\n the string itself.\n\n New in version 2.5.\n\nstr.rsplit([sep[, maxsplit]])\n\n Return a list of the words in the string, using *sep* as the\n delimiter string. If *maxsplit* is given, at most *maxsplit* splits\n are done, the *rightmost* ones. If *sep* is not specified or\n ``None``, any whitespace string is a separator. Except for\n splitting from the right, ``rsplit()`` behaves like ``split()``\n which is described in detail below.\n\n New in version 2.4.\n\nstr.rstrip([chars])\n\n Return a copy of the string with trailing characters removed. 
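
A doctest-style sketch of ``partition()``, ``rpartition()`` and ``rsplit()`` as described above; the strings are illustrative only:

    >>> 'key=value=extra'.partition('=')      # split at the *first* separator
    ('key', '=', 'value=extra')
    >>> 'key=value=extra'.rpartition('=')     # split at the *last* separator
    ('key=value', '=', 'extra')
    >>> 'no separator'.partition('=')         # separator missing: string comes first
    ('no separator', '', '')
    >>> 'a,b,c,d'.rsplit(',', 1)              # maxsplit counts from the right
    ['a,b,c', 'd']
    >>> 'a,b,c,d'.split(',', 1)
    ['a', 'b,c,d']
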
The\n *chars* argument is a string specifying the set of characters to be\n removed. If omitted or ``None``, the *chars* argument defaults to\n removing whitespace. The *chars* argument is not a suffix; rather,\n all combinations of its values are stripped:\n\n >>> \' spacious \'.rstrip()\n \' spacious\'\n >>> \'mississippi\'.rstrip(\'ipz\')\n \'mississ\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.split([sep[, maxsplit]])\n\n Return a list of the words in the string, using *sep* as the\n delimiter string. If *maxsplit* is given, at most *maxsplit*\n splits are done (thus, the list will have at most ``maxsplit+1``\n elements). If *maxsplit* is not specified, then there is no limit\n on the number of splits (all possible splits are made).\n\n If *sep* is given, consecutive delimiters are not grouped together\n and are deemed to delimit empty strings (for example,\n ``\'1,,2\'.split(\',\')`` returns ``[\'1\', \'\', \'2\']``). The *sep*\n argument may consist of multiple characters (for example,\n ``\'1<>2<>3\'.split(\'<>\')`` returns ``[\'1\', \'2\', \'3\']``). Splitting\n an empty string with a specified separator returns ``[\'\']``.\n\n If *sep* is not specified or is ``None``, a different splitting\n algorithm is applied: runs of consecutive whitespace are regarded\n as a single separator, and the result will contain no empty strings\n at the start or end if the string has leading or trailing\n whitespace. Consequently, splitting an empty string or a string\n consisting of just whitespace with a ``None`` separator returns\n ``[]``.\n\n For example, ``\' 1 2 3 \'.split()`` returns ``[\'1\', \'2\', \'3\']``,\n and ``\' 1 2 3 \'.split(None, 1)`` returns ``[\'1\', \'2 3 \']``.\n\nstr.splitlines([keepends])\n\n Return a list of the lines in the string, breaking at line\n boundaries. Line breaks are not included in the resulting list\n unless *keepends* is given and true.\n\nstr.startswith(prefix[, start[, end]])\n\n Return ``True`` if string starts with the *prefix*, otherwise\n return ``False``. *prefix* can also be a tuple of prefixes to look\n for. With optional *start*, test string beginning at that\n position. With optional *end*, stop comparing string at that\n position.\n\n Changed in version 2.5: Accept tuples as *prefix*.\n\nstr.strip([chars])\n\n Return a copy of the string with the leading and trailing\n characters removed. The *chars* argument is a string specifying the\n set of characters to be removed. If omitted or ``None``, the\n *chars* argument defaults to removing whitespace. The *chars*\n argument is not a prefix or suffix; rather, all combinations of its\n values are stripped:\n\n >>> \' spacious \'.strip()\n \'spacious\'\n >>> \'www.example.com\'.strip(\'cmowz.\')\n \'example\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.swapcase()\n\n Return a copy of the string with uppercase characters converted to\n lowercase and vice versa.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.title()\n\n Return a titlecased version of the string where words start with an\n uppercase character and the remaining characters are lowercase.\n\n The algorithm uses a simple language-independent definition of a\n word as groups of consecutive letters. 
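
A short sketch of ``splitlines()``, the tuple form of ``startswith()``/``endswith()`` and ``swapcase()``; the strings are illustrative only:

    >>> 'one\ntwo\r\nthree'.splitlines()
    ['one', 'two', 'three']
    >>> 'one\ntwo\r\nthree'.splitlines(True)      # keepends
    ['one\n', 'two\r\n', 'three']
    >>> 'photo.jpeg'.endswith(('.jpg', '.jpeg'))  # a tuple of suffixes (2.5+)
    True
    >>> 'photo.jpeg'.startswith('photo', 0, 5)    # optional start/end positions
    True
    >>> 'aBc'.swapcase()
    'AbC'
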
The definition works in\n many contexts but it means that apostrophes in contractions and\n possessives form word boundaries, which may not be the desired\n result:\n\n >>> "they\'re bill\'s friends from the UK".title()\n "They\'Re Bill\'S Friends From The Uk"\n\n A workaround for apostrophes can be constructed using regular\n expressions:\n\n >>> import re\n >>> def titlecase(s):\n return re.sub(r"[A-Za-z]+(\'[A-Za-z]+)?",\n lambda mo: mo.group(0)[0].upper() +\n mo.group(0)[1:].lower(),\n s)\n\n >>> titlecase("they\'re bill\'s friends.")\n "They\'re Bill\'s Friends."\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.translate(table[, deletechars])\n\n Return a copy of the string where all characters occurring in the\n optional argument *deletechars* are removed, and the remaining\n characters have been mapped through the given translation table,\n which must be a string of length 256.\n\n You can use the ``maketrans()`` helper function in the ``string``\n module to create a translation table. For string objects, set the\n *table* argument to ``None`` for translations that only delete\n characters:\n\n >>> \'read this short text\'.translate(None, \'aeiou\')\n \'rd ths shrt txt\'\n\n New in version 2.6: Support for a ``None`` *table* argument.\n\n For Unicode objects, the ``translate()`` method does not accept the\n optional *deletechars* argument. Instead, it returns a copy of the\n *s* where all characters have been mapped through the given\n translation table which must be a mapping of Unicode ordinals to\n Unicode ordinals, Unicode strings or ``None``. Unmapped characters\n are left untouched. Characters mapped to ``None`` are deleted.\n Note, a more flexible approach is to create a custom character\n mapping codec using the ``codecs`` module (see ``encodings.cp1251``\n for an example).\n\nstr.upper()\n\n Return a copy of the string converted to uppercase.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.zfill(width)\n\n Return the numeric string left filled with zeros in a string of\n length *width*. A sign prefix is handled correctly. The original\n string is returned if *width* is less than ``len(s)``.\n\n New in version 2.2.2.\n\nThe following methods are present only on unicode objects:\n\nunicode.isnumeric()\n\n Return ``True`` if there are only numeric characters in S,\n ``False`` otherwise. Numeric characters include digit characters,\n and all characters that have the Unicode numeric value property,\n e.g. U+2155, VULGAR FRACTION ONE FIFTH.\n\nunicode.isdecimal()\n\n Return ``True`` if there are only decimal characters in S,\n ``False`` otherwise. Decimal characters include digit characters,\n and all characters that that can be used to form decimal-radix\n numbers, e.g. U+0660, ARABIC-INDIC DIGIT ZERO.\n\n\nString Formatting Operations\n============================\n\nString and Unicode objects have one unique built-in operation: the\n``%`` operator (modulo). This is also known as the string\n*formatting* or *interpolation* operator. Given ``format % values``\n(where *format* is a string or Unicode object), ``%`` conversion\nspecifications in *format* are replaced with zero or more elements of\n*values*. The effect is similar to the using ``sprintf()`` in the C\nlanguage. If *format* is a Unicode object, or if any of the objects\nbeing converted using the ``%s`` conversion are Unicode objects, the\nresult will also be a Unicode object.\n\nIf *format* requires a single argument, *values* may be a single non-\ntuple object. 
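
A sketch of ``translate()`` with a ``string.maketrans()`` table and of ``zfill()`` (Python 2.x); the strings are illustrative only:

    >>> import string
    >>> table = string.maketrans('abc', 'xyz')   # 256-character translation table
    >>> 'abracadabra'.translate(table)
    'xyrxzxdxyrx'
    >>> 'abracadabra'.translate(table, 'r')      # deletechars removes 'r' first
    'xyxzxdxyx'
    >>> '42'.zfill(5), '-42'.zfill(5)            # sign prefix handled correctly
    ('00042', '-0042')
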
[4] Otherwise, *values* must be a tuple with exactly\nthe number of items specified by the format string, or a single\nmapping object (for example, a dictionary).\n\nA conversion specifier contains two or more characters and has the\nfollowing components, which must occur in this order:\n\n1. The ``\'%\'`` character, which marks the start of the specifier.\n\n2. Mapping key (optional), consisting of a parenthesised sequence of\n characters (for example, ``(somename)``).\n\n3. Conversion flags (optional), which affect the result of some\n conversion types.\n\n4. Minimum field width (optional). If specified as an ``\'*\'``\n (asterisk), the actual width is read from the next element of the\n tuple in *values*, and the object to convert comes after the\n minimum field width and optional precision.\n\n5. Precision (optional), given as a ``\'.\'`` (dot) followed by the\n precision. If specified as ``\'*\'`` (an asterisk), the actual width\n is read from the next element of the tuple in *values*, and the\n value to convert comes after the precision.\n\n6. Length modifier (optional).\n\n7. Conversion type.\n\nWhen the right argument is a dictionary (or other mapping type), then\nthe formats in the string *must* include a parenthesised mapping key\ninto that dictionary inserted immediately after the ``\'%\'`` character.\nThe mapping key selects the value to be formatted from the mapping.\nFor example:\n\n>>> print \'%(language)s has %(#)03d quote types.\' % \\\n... {\'language\': "Python", "#": 2}\nPython has 002 quote types.\n\nIn this case no ``*`` specifiers may occur in a format (since they\nrequire a sequential parameter list).\n\nThe conversion flag characters are:\n\n+-----------+-----------------------------------------------------------------------+\n| Flag | Meaning |\n+===========+=======================================================================+\n| ``\'#\'`` | The value conversion will use the "alternate form" (where defined |\n| | below). |\n+-----------+-----------------------------------------------------------------------+\n| ``\'0\'`` | The conversion will be zero padded for numeric values. |\n+-----------+-----------------------------------------------------------------------+\n| ``\'-\'`` | The converted value is left adjusted (overrides the ``\'0\'`` |\n| | conversion if both are given). |\n+-----------+-----------------------------------------------------------------------+\n| ``\' \'`` | (a space) A blank should be left before a positive number (or empty |\n| | string) produced by a signed conversion. |\n+-----------+-----------------------------------------------------------------------+\n| ``\'+\'`` | A sign character (``\'+\'`` or ``\'-\'``) will precede the conversion |\n| | (overrides a "space" flag). |\n+-----------+-----------------------------------------------------------------------+\n\nA length modifier (``h``, ``l``, or ``L``) may be present, but is\nignored as it is not necessary for Python -- so e.g. ``%ld`` is\nidentical to ``%d``.\n\nThe conversion types are:\n\n+--------------+-------------------------------------------------------+---------+\n| Conversion | Meaning | Notes |\n+==============+=======================================================+=========+\n| ``\'d\'`` | Signed integer decimal. | |\n+--------------+-------------------------------------------------------+---------+\n| ``\'i\'`` | Signed integer decimal. | |\n+--------------+-------------------------------------------------------+---------+\n| ``\'o\'`` | Signed octal value. 
| (1) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'u\'`` | Obsolete type -- it is identical to ``\'d\'``. | (7) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'x\'`` | Signed hexadecimal (lowercase). | (2) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'X\'`` | Signed hexadecimal (uppercase). | (2) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'e\'`` | Floating point exponential format (lowercase). | (3) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'E\'`` | Floating point exponential format (uppercase). | (3) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'f\'`` | Floating point decimal format. | (3) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'F\'`` | Floating point decimal format. | (3) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'g\'`` | Floating point format. Uses lowercase exponential | (4) |\n| | format if exponent is less than -4 or not less than | |\n| | precision, decimal format otherwise. | |\n+--------------+-------------------------------------------------------+---------+\n| ``\'G\'`` | Floating point format. Uses uppercase exponential | (4) |\n| | format if exponent is less than -4 or not less than | |\n| | precision, decimal format otherwise. | |\n+--------------+-------------------------------------------------------+---------+\n| ``\'c\'`` | Single character (accepts integer or single character | |\n| | string). | |\n+--------------+-------------------------------------------------------+---------+\n| ``\'r\'`` | String (converts any Python object using ``repr()``). | (5) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'s\'`` | String (converts any Python object using ``str()``). | (6) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'%\'`` | No argument is converted, results in a ``\'%\'`` | |\n| | character in the result. | |\n+--------------+-------------------------------------------------------+---------+\n\nNotes:\n\n1. The alternate form causes a leading zero (``\'0\'``) to be inserted\n between left-hand padding and the formatting of the number if the\n leading character of the result is not already a zero.\n\n2. The alternate form causes a leading ``\'0x\'`` or ``\'0X\'`` (depending\n on whether the ``\'x\'`` or ``\'X\'`` format was used) to be inserted\n between left-hand padding and the formatting of the number if the\n leading character of the result is not already a zero.\n\n3. The alternate form causes the result to always contain a decimal\n point, even if no digits follow it.\n\n The precision determines the number of digits after the decimal\n point and defaults to 6.\n\n4. The alternate form causes the result to always contain a decimal\n point, and trailing zeroes are not removed as they would otherwise\n be.\n\n The precision determines the number of significant digits before\n and after the decimal point and defaults to 6.\n\n5. The ``%r`` conversion was added in Python 2.0.\n\n The precision determines the maximal number of characters used.\n\n6. 
If the object or format provided is a ``unicode`` string, the\n resulting string will also be ``unicode``.\n\n The precision determines the maximal number of characters used.\n\n7. See **PEP 237**.\n\nSince Python strings have an explicit length, ``%s`` conversions do\nnot assume that ``\'\\0\'`` is the end of the string.\n\nChanged in version 2.7: ``%f`` conversions for numbers whose absolute\nvalue is over 1e50 are no longer replaced by ``%g`` conversions.\n\nAdditional string operations are defined in standard modules\n``string`` and ``re``.\n\n\nXRange Type\n===========\n\nThe ``xrange`` type is an immutable sequence which is commonly used\nfor looping. The advantage of the ``xrange`` type is that an\n``xrange`` object will always take the same amount of memory, no\nmatter the size of the range it represents. There are no consistent\nperformance advantages.\n\nXRange objects have very little behavior: they only support indexing,\niteration, and the ``len()`` function.\n\n\nMutable Sequence Types\n======================\n\nList objects support additional operations that allow in-place\nmodification of the object. Other mutable sequence types (when added\nto the language) should also support these operations. Strings and\ntuples are immutable sequence types: such objects cannot be modified\nonce created. The following operations are defined on mutable sequence\ntypes (where *x* is an arbitrary object):\n\n+--------------------------------+----------------------------------+-----------------------+\n| Operation | Result | Notes |\n+================================+==================================+=======================+\n| ``s[i] = x`` | item *i* of *s* is replaced by | |\n| | *x* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s[i:j] = t`` | slice of *s* from *i* to *j* is | |\n| | replaced by the contents of the | |\n| | iterable *t* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``del s[i:j]`` | same as ``s[i:j] = []`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s[i:j:k] = t`` | the elements of ``s[i:j:k]`` are | (1) |\n| | replaced by those of *t* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``del s[i:j:k]`` | removes the elements of | |\n| | ``s[i:j:k]`` from the list | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.append(x)`` | same as ``s[len(s):len(s)] = | (2) |\n| | [x]`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.extend(x)`` | same as ``s[len(s):len(s)] = x`` | (3) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.count(x)`` | return number of *i*\'s for which | |\n| | ``s[i] == x`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.index(x[, i[, j]])`` | return smallest *k* such that | (4) |\n| | ``s[k] == x`` and ``i <= k < j`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.insert(i, x)`` | same as ``s[i:i] = [x]`` | (5) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.pop([i])`` | same as ``x = s[i]; del s[i]; | (6) |\n| | return x`` | 
|\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.remove(x)`` | same as ``del s[s.index(x)]`` | (4) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.reverse()`` | reverses the items of *s* in | (7) |\n| | place | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.sort([cmp[, key[, | sort the items of *s* in place | (7)(8)(9)(10) |\n| reverse]]])`` | | |\n+--------------------------------+----------------------------------+-----------------------+\n\nNotes:\n\n1. *t* must have the same length as the slice it is replacing.\n\n2. The C implementation of Python has historically accepted multiple\n parameters and implicitly joined them into a tuple; this no longer\n works in Python 2.0. Use of this misfeature has been deprecated\n since Python 1.4.\n\n3. *x* can be any iterable object.\n\n4. Raises ``ValueError`` when *x* is not found in *s*. When a negative\n index is passed as the second or third parameter to the ``index()``\n method, the list length is added, as for slice indices. If it is\n still negative, it is truncated to zero, as for slice indices.\n\n Changed in version 2.3: Previously, ``index()`` didn\'t have\n arguments for specifying start and stop positions.\n\n5. When a negative index is passed as the first parameter to the\n ``insert()`` method, the list length is added, as for slice\n indices. If it is still negative, it is truncated to zero, as for\n slice indices.\n\n Changed in version 2.3: Previously, all negative indices were\n truncated to zero.\n\n6. The ``pop()`` method is only supported by the list and array types.\n The optional argument *i* defaults to ``-1``, so that by default\n the last item is removed and returned.\n\n7. The ``sort()`` and ``reverse()`` methods modify the list in place\n for economy of space when sorting or reversing a large list. To\n remind you that they operate by side effect, they don\'t return the\n sorted or reversed list.\n\n8. The ``sort()`` method takes optional arguments for controlling the\n comparisons.\n\n *cmp* specifies a custom comparison function of two arguments (list\n items) which should return a negative, zero or positive number\n depending on whether the first argument is considered smaller than,\n equal to, or larger than the second argument: ``cmp=lambda x,y:\n cmp(x.lower(), y.lower())``. The default value is ``None``.\n\n *key* specifies a function of one argument that is used to extract\n a comparison key from each list element: ``key=str.lower``. The\n default value is ``None``.\n\n *reverse* is a boolean value. If set to ``True``, then the list\n elements are sorted as if each comparison were reversed.\n\n In general, the *key* and *reverse* conversion processes are much\n faster than specifying an equivalent *cmp* function. This is\n because *cmp* is called multiple times for each list element while\n *key* and *reverse* touch each element only once. Use\n ``functools.cmp_to_key()`` to convert an old-style *cmp* function\n to a *key* function.\n\n Changed in version 2.3: Support for ``None`` as an equivalent to\n omitting *cmp* was added.\n\n Changed in version 2.4: Support for *key* and *reverse* was added.\n\n9. Starting with Python 2.3, the ``sort()`` method is guaranteed to be\n stable. 
A sort is stable if it guarantees not to change the\n relative order of elements that compare equal --- this is helpful\n for sorting in multiple passes (for example, sort by department,\n then by salary grade).\n\n10. **CPython implementation detail:** While a list is being sorted,\n the effect of attempting to mutate, or even inspect, the list is\n undefined. The C implementation of Python 2.3 and newer makes the\n list appear empty for the duration, and raises ``ValueError`` if\n it can detect that the list has been mutated during a sort.\n', - 'typesseq-mutable': u"\nMutable Sequence Types\n**********************\n\nList objects support additional operations that allow in-place\nmodification of the object. Other mutable sequence types (when added\nto the language) should also support these operations. Strings and\ntuples are immutable sequence types: such objects cannot be modified\nonce created. The following operations are defined on mutable sequence\ntypes (where *x* is an arbitrary object):\n\n+--------------------------------+----------------------------------+-----------------------+\n| Operation | Result | Notes |\n+================================+==================================+=======================+\n| ``s[i] = x`` | item *i* of *s* is replaced by | |\n| | *x* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s[i:j] = t`` | slice of *s* from *i* to *j* is | |\n| | replaced by the contents of the | |\n| | iterable *t* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``del s[i:j]`` | same as ``s[i:j] = []`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s[i:j:k] = t`` | the elements of ``s[i:j:k]`` are | (1) |\n| | replaced by those of *t* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``del s[i:j:k]`` | removes the elements of | |\n| | ``s[i:j:k]`` from the list | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.append(x)`` | same as ``s[len(s):len(s)] = | (2) |\n| | [x]`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.extend(x)`` | same as ``s[len(s):len(s)] = x`` | (3) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.count(x)`` | return number of *i*'s for which | |\n| | ``s[i] == x`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.index(x[, i[, j]])`` | return smallest *k* such that | (4) |\n| | ``s[k] == x`` and ``i <= k < j`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.insert(i, x)`` | same as ``s[i:i] = [x]`` | (5) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.pop([i])`` | same as ``x = s[i]; del s[i]; | (6) |\n| | return x`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.remove(x)`` | same as ``del s[s.index(x)]`` | (4) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.reverse()`` | reverses the items of *s* in | (7) |\n| | place | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.sort([cmp[, key[, | sort the items of *s* in place | (7)(8)(9)(10) 
|\n| reverse]]])`` | | |\n+--------------------------------+----------------------------------+-----------------------+\n\nNotes:\n\n1. *t* must have the same length as the slice it is replacing.\n\n2. The C implementation of Python has historically accepted multiple\n parameters and implicitly joined them into a tuple; this no longer\n works in Python 2.0. Use of this misfeature has been deprecated\n since Python 1.4.\n\n3. *x* can be any iterable object.\n\n4. Raises ``ValueError`` when *x* is not found in *s*. When a negative\n index is passed as the second or third parameter to the ``index()``\n method, the list length is added, as for slice indices. If it is\n still negative, it is truncated to zero, as for slice indices.\n\n Changed in version 2.3: Previously, ``index()`` didn't have\n arguments for specifying start and stop positions.\n\n5. When a negative index is passed as the first parameter to the\n ``insert()`` method, the list length is added, as for slice\n indices. If it is still negative, it is truncated to zero, as for\n slice indices.\n\n Changed in version 2.3: Previously, all negative indices were\n truncated to zero.\n\n6. The ``pop()`` method is only supported by the list and array types.\n The optional argument *i* defaults to ``-1``, so that by default\n the last item is removed and returned.\n\n7. The ``sort()`` and ``reverse()`` methods modify the list in place\n for economy of space when sorting or reversing a large list. To\n remind you that they operate by side effect, they don't return the\n sorted or reversed list.\n\n8. The ``sort()`` method takes optional arguments for controlling the\n comparisons.\n\n *cmp* specifies a custom comparison function of two arguments (list\n items) which should return a negative, zero or positive number\n depending on whether the first argument is considered smaller than,\n equal to, or larger than the second argument: ``cmp=lambda x,y:\n cmp(x.lower(), y.lower())``. The default value is ``None``.\n\n *key* specifies a function of one argument that is used to extract\n a comparison key from each list element: ``key=str.lower``. The\n default value is ``None``.\n\n *reverse* is a boolean value. If set to ``True``, then the list\n elements are sorted as if each comparison were reversed.\n\n In general, the *key* and *reverse* conversion processes are much\n faster than specifying an equivalent *cmp* function. This is\n because *cmp* is called multiple times for each list element while\n *key* and *reverse* touch each element only once. Use\n ``functools.cmp_to_key()`` to convert an old-style *cmp* function\n to a *key* function.\n\n Changed in version 2.3: Support for ``None`` as an equivalent to\n omitting *cmp* was added.\n\n Changed in version 2.4: Support for *key* and *reverse* was added.\n\n9. Starting with Python 2.3, the ``sort()`` method is guaranteed to be\n stable. A sort is stable if it guarantees not to change the\n relative order of elements that compare equal --- this is helpful\n for sorting in multiple passes (for example, sort by department,\n then by salary grade).\n\n10. **CPython implementation detail:** While a list is being sorted,\n the effect of attempting to mutate, or even inspect, the list is\n undefined. 
The C implementation of Python 2.3 and newer makes the\n list appear empty for the duration, and raises ``ValueError`` if\n it can detect that the list has been mutated during a sort.\n", + 'typesseq': u'\nSequence Types --- ``str``, ``unicode``, ``list``, ``tuple``, ``bytearray``, ``buffer``, ``xrange``\n***************************************************************************************************\n\nThere are seven sequence types: strings, Unicode strings, lists,\ntuples, bytearrays, buffers, and xrange objects.\n\nFor other containers see the built in ``dict`` and ``set`` classes,\nand the ``collections`` module.\n\nString literals are written in single or double quotes: ``\'xyzzy\'``,\n``"frobozz"``. See *String literals* for more about string literals.\nUnicode strings are much like strings, but are specified in the syntax\nusing a preceding ``\'u\'`` character: ``u\'abc\'``, ``u"def"``. In\naddition to the functionality described here, there are also string-\nspecific methods described in the *String Methods* section. Lists are\nconstructed with square brackets, separating items with commas: ``[a,\nb, c]``. Tuples are constructed by the comma operator (not within\nsquare brackets), with or without enclosing parentheses, but an empty\ntuple must have the enclosing parentheses, such as ``a, b, c`` or\n``()``. A single item tuple must have a trailing comma, such as\n``(d,)``.\n\nBytearray objects are created with the built-in function\n``bytearray()``.\n\nBuffer objects are not directly supported by Python syntax, but can be\ncreated by calling the built-in function ``buffer()``. They don\'t\nsupport concatenation or repetition.\n\nObjects of type xrange are similar to buffers in that there is no\nspecific syntax to create them, but they are created using the\n``xrange()`` function. They don\'t support slicing, concatenation or\nrepetition, and using ``in``, ``not in``, ``min()`` or ``max()`` on\nthem is inefficient.\n\nMost sequence types support the following operations. The ``in`` and\n``not in`` operations have the same priorities as the comparison\noperations. The ``+`` and ``*`` operations have the same priority as\nthe corresponding numeric operations. [3] Additional methods are\nprovided for *Mutable Sequence Types*.\n\nThis table lists the sequence operations sorted in ascending priority\n(operations in the same box have the same priority). 
In the table,\n*s* and *t* are sequences of the same type; *n*, *i* and *j* are\nintegers:\n\n+--------------------+----------------------------------+------------+\n| Operation | Result | Notes |\n+====================+==================================+============+\n| ``x in s`` | ``True`` if an item of *s* is | (1) |\n| | equal to *x*, else ``False`` | |\n+--------------------+----------------------------------+------------+\n| ``x not in s`` | ``False`` if an item of *s* is | (1) |\n| | equal to *x*, else ``True`` | |\n+--------------------+----------------------------------+------------+\n| ``s + t`` | the concatenation of *s* and *t* | (6) |\n+--------------------+----------------------------------+------------+\n| ``s * n, n * s`` | *n* shallow copies of *s* | (2) |\n| | concatenated | |\n+--------------------+----------------------------------+------------+\n| ``s[i]`` | *i*\'th item of *s*, origin 0 | (3) |\n+--------------------+----------------------------------+------------+\n| ``s[i:j]`` | slice of *s* from *i* to *j* | (3)(4) |\n+--------------------+----------------------------------+------------+\n| ``s[i:j:k]`` | slice of *s* from *i* to *j* | (3)(5) |\n| | with step *k* | |\n+--------------------+----------------------------------+------------+\n| ``len(s)`` | length of *s* | |\n+--------------------+----------------------------------+------------+\n| ``min(s)`` | smallest item of *s* | |\n+--------------------+----------------------------------+------------+\n| ``max(s)`` | largest item of *s* | |\n+--------------------+----------------------------------+------------+\n| ``s.index(i)`` | index of the first occurence of | |\n| | *i* in *s* | |\n+--------------------+----------------------------------+------------+\n| ``s.count(i)`` | total number of occurences of | |\n| | *i* in *s* | |\n+--------------------+----------------------------------+------------+\n\nSequence types also support comparisons. In particular, tuples and\nlists are compared lexicographically by comparing corresponding\nelements. This means that to compare equal, every element must compare\nequal and the two sequences must be of the same type and have the same\nlength. (For full details see *Comparisons* in the language\nreference.)\n\nNotes:\n\n1. When *s* is a string or Unicode string object the ``in`` and ``not\n in`` operations act like a substring test. In Python versions\n before 2.3, *x* had to be a string of length 1. In Python 2.3 and\n beyond, *x* may be a string of any length.\n\n2. Values of *n* less than ``0`` are treated as ``0`` (which yields an\n empty sequence of the same type as *s*). Note also that the copies\n are shallow; nested structures are not copied. This often haunts\n new Python programmers; consider:\n\n >>> lists = [[]] * 3\n >>> lists\n [[], [], []]\n >>> lists[0].append(3)\n >>> lists\n [[3], [3], [3]]\n\n What has happened is that ``[[]]`` is a one-element list containing\n an empty list, so all three elements of ``[[]] * 3`` are (pointers\n to) this single empty list. Modifying any of the elements of\n ``lists`` modifies this single list. You can create a list of\n different lists this way:\n\n >>> lists = [[] for i in range(3)]\n >>> lists[0].append(3)\n >>> lists[1].append(5)\n >>> lists[2].append(7)\n >>> lists\n [[3], [5], [7]]\n\n3. If *i* or *j* is negative, the index is relative to the end of the\n string: ``len(s) + i`` or ``len(s) + j`` is substituted. But note\n that ``-0`` is still ``0``.\n\n4. 
The slice of *s* from *i* to *j* is defined as the sequence of\n items with index *k* such that ``i <= k < j``. If *i* or *j* is\n greater than ``len(s)``, use ``len(s)``. If *i* is omitted or\n ``None``, use ``0``. If *j* is omitted or ``None``, use\n ``len(s)``. If *i* is greater than or equal to *j*, the slice is\n empty.\n\n5. The slice of *s* from *i* to *j* with step *k* is defined as the\n sequence of items with index ``x = i + n*k`` such that ``0 <= n <\n (j-i)/k``. In other words, the indices are ``i``, ``i+k``,\n ``i+2*k``, ``i+3*k`` and so on, stopping when *j* is reached (but\n never including *j*). If *i* or *j* is greater than ``len(s)``,\n use ``len(s)``. If *i* or *j* are omitted or ``None``, they become\n "end" values (which end depends on the sign of *k*). Note, *k*\n cannot be zero. If *k* is ``None``, it is treated like ``1``.\n\n6. **CPython implementation detail:** If *s* and *t* are both strings,\n some Python implementations such as CPython can usually perform an\n in-place optimization for assignments of the form ``s = s + t`` or\n ``s += t``. When applicable, this optimization makes quadratic\n run-time much less likely. This optimization is both version and\n implementation dependent. For performance sensitive code, it is\n preferable to use the ``str.join()`` method which assures\n consistent linear concatenation performance across versions and\n implementations.\n\n Changed in version 2.4: Formerly, string concatenation never\n occurred in-place.\n\n\nString Methods\n==============\n\nBelow are listed the string methods which both 8-bit strings and\nUnicode objects support. Some of them are also available on\n``bytearray`` objects.\n\nIn addition, Python\'s strings support the sequence type methods\ndescribed in the *Sequence Types --- str, unicode, list, tuple,\nbytearray, buffer, xrange* section. To output formatted strings use\ntemplate strings or the ``%`` operator described in the *String\nFormatting Operations* section. Also, see the ``re`` module for string\nfunctions based on regular expressions.\n\nstr.capitalize()\n\n Return a copy of the string with its first character capitalized\n and the rest lowercased.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.center(width[, fillchar])\n\n Return centered in a string of length *width*. Padding is done\n using the specified *fillchar* (default is a space).\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.count(sub[, start[, end]])\n\n Return the number of non-overlapping occurrences of substring *sub*\n in the range [*start*, *end*]. Optional arguments *start* and\n *end* are interpreted as in slice notation.\n\nstr.decode([encoding[, errors]])\n\n Decodes the string using the codec registered for *encoding*.\n *encoding* defaults to the default string encoding. *errors* may\n be given to set a different error handling scheme. The default is\n ``\'strict\'``, meaning that encoding errors raise ``UnicodeError``.\n Other possible values are ``\'ignore\'``, ``\'replace\'`` and any other\n name registered via ``codecs.register_error()``, see section *Codec\n Base Classes*.\n\n New in version 2.2.\n\n Changed in version 2.3: Support for other error handling schemes\n added.\n\n Changed in version 2.7: Support for keyword arguments added.\n\nstr.encode([encoding[, errors]])\n\n Return an encoded version of the string. Default encoding is the\n current default string encoding. *errors* may be given to set a\n different error handling scheme. 
The default for *errors* is\n ``\'strict\'``, meaning that encoding errors raise a\n ``UnicodeError``. Other possible values are ``\'ignore\'``,\n ``\'replace\'``, ``\'xmlcharrefreplace\'``, ``\'backslashreplace\'`` and\n any other name registered via ``codecs.register_error()``, see\n section *Codec Base Classes*. For a list of possible encodings, see\n section *Standard Encodings*.\n\n New in version 2.0.\n\n Changed in version 2.3: Support for ``\'xmlcharrefreplace\'`` and\n ``\'backslashreplace\'`` and other error handling schemes added.\n\n Changed in version 2.7: Support for keyword arguments added.\n\nstr.endswith(suffix[, start[, end]])\n\n Return ``True`` if the string ends with the specified *suffix*,\n otherwise return ``False``. *suffix* can also be a tuple of\n suffixes to look for. With optional *start*, test beginning at\n that position. With optional *end*, stop comparing at that\n position.\n\n Changed in version 2.5: Accept tuples as *suffix*.\n\nstr.expandtabs([tabsize])\n\n Return a copy of the string where all tab characters are replaced\n by one or more spaces, depending on the current column and the\n given tab size. The column number is reset to zero after each\n newline occurring in the string. If *tabsize* is not given, a tab\n size of ``8`` characters is assumed. This doesn\'t understand other\n non-printing characters or escape sequences.\n\nstr.find(sub[, start[, end]])\n\n Return the lowest index in the string where substring *sub* is\n found, such that *sub* is contained in the slice ``s[start:end]``.\n Optional arguments *start* and *end* are interpreted as in slice\n notation. Return ``-1`` if *sub* is not found.\n\n Note: The ``find()`` method should be used only if you need to know the\n position of *sub*. To check if *sub* is a substring or not, use\n the ``in`` operator:\n\n >>> \'Py\' in \'Python\'\n True\n\nstr.format(*args, **kwargs)\n\n Perform a string formatting operation. The string on which this\n method is called can contain literal text or replacement fields\n delimited by braces ``{}``. Each replacement field contains either\n the numeric index of a positional argument, or the name of a\n keyword argument. 
Returns a copy of the string where each\n replacement field is replaced with the string value of the\n corresponding argument.\n\n >>> "The sum of 1 + 2 is {0}".format(1+2)\n \'The sum of 1 + 2 is 3\'\n\n See *Format String Syntax* for a description of the various\n formatting options that can be specified in format strings.\n\n This method of string formatting is the new standard in Python 3.0,\n and should be preferred to the ``%`` formatting described in\n *String Formatting Operations* in new code.\n\n New in version 2.6.\n\nstr.index(sub[, start[, end]])\n\n Like ``find()``, but raise ``ValueError`` when the substring is not\n found.\n\nstr.isalnum()\n\n Return true if all characters in the string are alphanumeric and\n there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isalpha()\n\n Return true if all characters in the string are alphabetic and\n there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isdigit()\n\n Return true if all characters in the string are digits and there is\n at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.islower()\n\n Return true if all cased characters in the string are lowercase and\n there is at least one cased character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isspace()\n\n Return true if there are only whitespace characters in the string\n and there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.istitle()\n\n Return true if the string is a titlecased string and there is at\n least one character, for example uppercase characters may only\n follow uncased characters and lowercase characters only cased ones.\n Return false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isupper()\n\n Return true if all cased characters in the string are uppercase and\n there is at least one cased character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.join(iterable)\n\n Return a string which is the concatenation of the strings in the\n *iterable* *iterable*. The separator between elements is the\n string providing this method.\n\nstr.ljust(width[, fillchar])\n\n Return the string left justified in a string of length *width*.\n Padding is done using the specified *fillchar* (default is a\n space). The original string is returned if *width* is less than\n ``len(s)``.\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.lower()\n\n Return a copy of the string converted to lowercase.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.lstrip([chars])\n\n Return a copy of the string with leading characters removed. The\n *chars* argument is a string specifying the set of characters to be\n removed. If omitted or ``None``, the *chars* argument defaults to\n removing whitespace. The *chars* argument is not a prefix; rather,\n all combinations of its values are stripped:\n\n >>> \' spacious \'.lstrip()\n \'spacious \'\n >>> \'www.example.com\'.lstrip(\'cmowz.\')\n \'example.com\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.partition(sep)\n\n Split the string at the first occurrence of *sep*, and return a\n 3-tuple containing the part before the separator, the separator\n itself, and the part after the separator. 
If the separator is not\n found, return a 3-tuple containing the string itself, followed by\n two empty strings.\n\n New in version 2.5.\n\nstr.replace(old, new[, count])\n\n Return a copy of the string with all occurrences of substring *old*\n replaced by *new*. If the optional argument *count* is given, only\n the first *count* occurrences are replaced.\n\nstr.rfind(sub[, start[, end]])\n\n Return the highest index in the string where substring *sub* is\n found, such that *sub* is contained within ``s[start:end]``.\n Optional arguments *start* and *end* are interpreted as in slice\n notation. Return ``-1`` on failure.\n\nstr.rindex(sub[, start[, end]])\n\n Like ``rfind()`` but raises ``ValueError`` when the substring *sub*\n is not found.\n\nstr.rjust(width[, fillchar])\n\n Return the string right justified in a string of length *width*.\n Padding is done using the specified *fillchar* (default is a\n space). The original string is returned if *width* is less than\n ``len(s)``.\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.rpartition(sep)\n\n Split the string at the last occurrence of *sep*, and return a\n 3-tuple containing the part before the separator, the separator\n itself, and the part after the separator. If the separator is not\n found, return a 3-tuple containing two empty strings, followed by\n the string itself.\n\n New in version 2.5.\n\nstr.rsplit([sep[, maxsplit]])\n\n Return a list of the words in the string, using *sep* as the\n delimiter string. If *maxsplit* is given, at most *maxsplit* splits\n are done, the *rightmost* ones. If *sep* is not specified or\n ``None``, any whitespace string is a separator. Except for\n splitting from the right, ``rsplit()`` behaves like ``split()``\n which is described in detail below.\n\n New in version 2.4.\n\nstr.rstrip([chars])\n\n Return a copy of the string with trailing characters removed. The\n *chars* argument is a string specifying the set of characters to be\n removed. If omitted or ``None``, the *chars* argument defaults to\n removing whitespace. The *chars* argument is not a suffix; rather,\n all combinations of its values are stripped:\n\n >>> \' spacious \'.rstrip()\n \' spacious\'\n >>> \'mississippi\'.rstrip(\'ipz\')\n \'mississ\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.split([sep[, maxsplit]])\n\n Return a list of the words in the string, using *sep* as the\n delimiter string. If *maxsplit* is given, at most *maxsplit*\n splits are done (thus, the list will have at most ``maxsplit+1``\n elements). If *maxsplit* is not specified, then there is no limit\n on the number of splits (all possible splits are made).\n\n If *sep* is given, consecutive delimiters are not grouped together\n and are deemed to delimit empty strings (for example,\n ``\'1,,2\'.split(\',\')`` returns ``[\'1\', \'\', \'2\']``). The *sep*\n argument may consist of multiple characters (for example,\n ``\'1<>2<>3\'.split(\'<>\')`` returns ``[\'1\', \'2\', \'3\']``). Splitting\n an empty string with a specified separator returns ``[\'\']``.\n\n If *sep* is not specified or is ``None``, a different splitting\n algorithm is applied: runs of consecutive whitespace are regarded\n as a single separator, and the result will contain no empty strings\n at the start or end if the string has leading or trailing\n whitespace. 
Consequently, splitting an empty string or a string\n consisting of just whitespace with a ``None`` separator returns\n ``[]``.\n\n For example, ``\' 1 2 3 \'.split()`` returns ``[\'1\', \'2\', \'3\']``,\n and ``\' 1 2 3 \'.split(None, 1)`` returns ``[\'1\', \'2 3 \']``.\n\nstr.splitlines([keepends])\n\n Return a list of the lines in the string, breaking at line\n boundaries. Line breaks are not included in the resulting list\n unless *keepends* is given and true.\n\nstr.startswith(prefix[, start[, end]])\n\n Return ``True`` if string starts with the *prefix*, otherwise\n return ``False``. *prefix* can also be a tuple of prefixes to look\n for. With optional *start*, test string beginning at that\n position. With optional *end*, stop comparing string at that\n position.\n\n Changed in version 2.5: Accept tuples as *prefix*.\n\nstr.strip([chars])\n\n Return a copy of the string with the leading and trailing\n characters removed. The *chars* argument is a string specifying the\n set of characters to be removed. If omitted or ``None``, the\n *chars* argument defaults to removing whitespace. The *chars*\n argument is not a prefix or suffix; rather, all combinations of its\n values are stripped:\n\n >>> \' spacious \'.strip()\n \'spacious\'\n >>> \'www.example.com\'.strip(\'cmowz.\')\n \'example\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.swapcase()\n\n Return a copy of the string with uppercase characters converted to\n lowercase and vice versa.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.title()\n\n Return a titlecased version of the string where words start with an\n uppercase character and the remaining characters are lowercase.\n\n The algorithm uses a simple language-independent definition of a\n word as groups of consecutive letters. The definition works in\n many contexts but it means that apostrophes in contractions and\n possessives form word boundaries, which may not be the desired\n result:\n\n >>> "they\'re bill\'s friends from the UK".title()\n "They\'Re Bill\'S Friends From The Uk"\n\n A workaround for apostrophes can be constructed using regular\n expressions:\n\n >>> import re\n >>> def titlecase(s):\n return re.sub(r"[A-Za-z]+(\'[A-Za-z]+)?",\n lambda mo: mo.group(0)[0].upper() +\n mo.group(0)[1:].lower(),\n s)\n\n >>> titlecase("they\'re bill\'s friends.")\n "They\'re Bill\'s Friends."\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.translate(table[, deletechars])\n\n Return a copy of the string where all characters occurring in the\n optional argument *deletechars* are removed, and the remaining\n characters have been mapped through the given translation table,\n which must be a string of length 256.\n\n You can use the ``maketrans()`` helper function in the ``string``\n module to create a translation table. For string objects, set the\n *table* argument to ``None`` for translations that only delete\n characters:\n\n >>> \'read this short text\'.translate(None, \'aeiou\')\n \'rd ths shrt txt\'\n\n New in version 2.6: Support for a ``None`` *table* argument.\n\n For Unicode objects, the ``translate()`` method does not accept the\n optional *deletechars* argument. Instead, it returns a copy of the\n *s* where all characters have been mapped through the given\n translation table which must be a mapping of Unicode ordinals to\n Unicode ordinals, Unicode strings or ``None``. Unmapped characters\n are left untouched. 
Characters mapped to ``None`` are deleted.\n Note, a more flexible approach is to create a custom character\n mapping codec using the ``codecs`` module (see ``encodings.cp1251``\n for an example).\n\nstr.upper()\n\n Return a copy of the string converted to uppercase.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.zfill(width)\n\n Return the numeric string left filled with zeros in a string of\n length *width*. A sign prefix is handled correctly. The original\n string is returned if *width* is less than ``len(s)``.\n\n New in version 2.2.2.\n\nThe following methods are present only on unicode objects:\n\nunicode.isnumeric()\n\n Return ``True`` if there are only numeric characters in S,\n ``False`` otherwise. Numeric characters include digit characters,\n and all characters that have the Unicode numeric value property,\n e.g. U+2155, VULGAR FRACTION ONE FIFTH.\n\nunicode.isdecimal()\n\n Return ``True`` if there are only decimal characters in S,\n ``False`` otherwise. Decimal characters include digit characters,\n and all characters that that can be used to form decimal-radix\n numbers, e.g. U+0660, ARABIC-INDIC DIGIT ZERO.\n\n\nString Formatting Operations\n============================\n\nString and Unicode objects have one unique built-in operation: the\n``%`` operator (modulo). This is also known as the string\n*formatting* or *interpolation* operator. Given ``format % values``\n(where *format* is a string or Unicode object), ``%`` conversion\nspecifications in *format* are replaced with zero or more elements of\n*values*. The effect is similar to the using ``sprintf()`` in the C\nlanguage. If *format* is a Unicode object, or if any of the objects\nbeing converted using the ``%s`` conversion are Unicode objects, the\nresult will also be a Unicode object.\n\nIf *format* requires a single argument, *values* may be a single non-\ntuple object. [4] Otherwise, *values* must be a tuple with exactly\nthe number of items specified by the format string, or a single\nmapping object (for example, a dictionary).\n\nA conversion specifier contains two or more characters and has the\nfollowing components, which must occur in this order:\n\n1. The ``\'%\'`` character, which marks the start of the specifier.\n\n2. Mapping key (optional), consisting of a parenthesised sequence of\n characters (for example, ``(somename)``).\n\n3. Conversion flags (optional), which affect the result of some\n conversion types.\n\n4. Minimum field width (optional). If specified as an ``\'*\'``\n (asterisk), the actual width is read from the next element of the\n tuple in *values*, and the object to convert comes after the\n minimum field width and optional precision.\n\n5. Precision (optional), given as a ``\'.\'`` (dot) followed by the\n precision. If specified as ``\'*\'`` (an asterisk), the actual width\n is read from the next element of the tuple in *values*, and the\n value to convert comes after the precision.\n\n6. Length modifier (optional).\n\n7. Conversion type.\n\nWhen the right argument is a dictionary (or other mapping type), then\nthe formats in the string *must* include a parenthesised mapping key\ninto that dictionary inserted immediately after the ``\'%\'`` character.\nThe mapping key selects the value to be formatted from the mapping.\nFor example:\n\n>>> print \'%(language)s has %(number)03d quote types.\' % \\\n... 
{"language": "Python", "number": 2}\nPython has 002 quote types.\n\nIn this case no ``*`` specifiers may occur in a format (since they\nrequire a sequential parameter list).\n\nThe conversion flag characters are:\n\n+-----------+-----------------------------------------------------------------------+\n| Flag | Meaning |\n+===========+=======================================================================+\n| ``\'#\'`` | The value conversion will use the "alternate form" (where defined |\n| | below). |\n+-----------+-----------------------------------------------------------------------+\n| ``\'0\'`` | The conversion will be zero padded for numeric values. |\n+-----------+-----------------------------------------------------------------------+\n| ``\'-\'`` | The converted value is left adjusted (overrides the ``\'0\'`` |\n| | conversion if both are given). |\n+-----------+-----------------------------------------------------------------------+\n| ``\' \'`` | (a space) A blank should be left before a positive number (or empty |\n| | string) produced by a signed conversion. |\n+-----------+-----------------------------------------------------------------------+\n| ``\'+\'`` | A sign character (``\'+\'`` or ``\'-\'``) will precede the conversion |\n| | (overrides a "space" flag). |\n+-----------+-----------------------------------------------------------------------+\n\nA length modifier (``h``, ``l``, or ``L``) may be present, but is\nignored as it is not necessary for Python -- so e.g. ``%ld`` is\nidentical to ``%d``.\n\nThe conversion types are:\n\n+--------------+-------------------------------------------------------+---------+\n| Conversion | Meaning | Notes |\n+==============+=======================================================+=========+\n| ``\'d\'`` | Signed integer decimal. | |\n+--------------+-------------------------------------------------------+---------+\n| ``\'i\'`` | Signed integer decimal. | |\n+--------------+-------------------------------------------------------+---------+\n| ``\'o\'`` | Signed octal value. | (1) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'u\'`` | Obsolete type -- it is identical to ``\'d\'``. | (7) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'x\'`` | Signed hexadecimal (lowercase). | (2) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'X\'`` | Signed hexadecimal (uppercase). | (2) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'e\'`` | Floating point exponential format (lowercase). | (3) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'E\'`` | Floating point exponential format (uppercase). | (3) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'f\'`` | Floating point decimal format. | (3) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'F\'`` | Floating point decimal format. | (3) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'g\'`` | Floating point format. Uses lowercase exponential | (4) |\n| | format if exponent is less than -4 or not less than | |\n| | precision, decimal format otherwise. | |\n+--------------+-------------------------------------------------------+---------+\n| ``\'G\'`` | Floating point format. 
Uses uppercase exponential | (4) |\n| | format if exponent is less than -4 or not less than | |\n| | precision, decimal format otherwise. | |\n+--------------+-------------------------------------------------------+---------+\n| ``\'c\'`` | Single character (accepts integer or single character | |\n| | string). | |\n+--------------+-------------------------------------------------------+---------+\n| ``\'r\'`` | String (converts any Python object using ``repr()``). | (5) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'s\'`` | String (converts any Python object using ``str()``). | (6) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'%\'`` | No argument is converted, results in a ``\'%\'`` | |\n| | character in the result. | |\n+--------------+-------------------------------------------------------+---------+\n\nNotes:\n\n1. The alternate form causes a leading zero (``\'0\'``) to be inserted\n between left-hand padding and the formatting of the number if the\n leading character of the result is not already a zero.\n\n2. The alternate form causes a leading ``\'0x\'`` or ``\'0X\'`` (depending\n on whether the ``\'x\'`` or ``\'X\'`` format was used) to be inserted\n between left-hand padding and the formatting of the number if the\n leading character of the result is not already a zero.\n\n3. The alternate form causes the result to always contain a decimal\n point, even if no digits follow it.\n\n The precision determines the number of digits after the decimal\n point and defaults to 6.\n\n4. The alternate form causes the result to always contain a decimal\n point, and trailing zeroes are not removed as they would otherwise\n be.\n\n The precision determines the number of significant digits before\n and after the decimal point and defaults to 6.\n\n5. The ``%r`` conversion was added in Python 2.0.\n\n The precision determines the maximal number of characters used.\n\n6. If the object or format provided is a ``unicode`` string, the\n resulting string will also be ``unicode``.\n\n The precision determines the maximal number of characters used.\n\n7. See **PEP 237**.\n\nSince Python strings have an explicit length, ``%s`` conversions do\nnot assume that ``\'\\0\'`` is the end of the string.\n\nChanged in version 2.7: ``%f`` conversions for numbers whose absolute\nvalue is over 1e50 are no longer replaced by ``%g`` conversions.\n\nAdditional string operations are defined in standard modules\n``string`` and ``re``.\n\n\nXRange Type\n===========\n\nThe ``xrange`` type is an immutable sequence which is commonly used\nfor looping. The advantage of the ``xrange`` type is that an\n``xrange`` object will always take the same amount of memory, no\nmatter the size of the range it represents. There are no consistent\nperformance advantages.\n\nXRange objects have very little behavior: they only support indexing,\niteration, and the ``len()`` function.\n\n\nMutable Sequence Types\n======================\n\nList and ``bytearray`` objects support additional operations that\nallow in-place modification of the object. Other mutable sequence\ntypes (when added to the language) should also support these\noperations. Strings and tuples are immutable sequence types: such\nobjects cannot be modified once created. 
The following operations are\ndefined on mutable sequence types (where *x* is an arbitrary object):\n\n+--------------------------------+----------------------------------+-----------------------+\n| Operation | Result | Notes |\n+================================+==================================+=======================+\n| ``s[i] = x`` | item *i* of *s* is replaced by | |\n| | *x* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s[i:j] = t`` | slice of *s* from *i* to *j* is | |\n| | replaced by the contents of the | |\n| | iterable *t* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``del s[i:j]`` | same as ``s[i:j] = []`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s[i:j:k] = t`` | the elements of ``s[i:j:k]`` are | (1) |\n| | replaced by those of *t* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``del s[i:j:k]`` | removes the elements of | |\n| | ``s[i:j:k]`` from the list | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.append(x)`` | same as ``s[len(s):len(s)] = | (2) |\n| | [x]`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.extend(x)`` | same as ``s[len(s):len(s)] = x`` | (3) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.count(x)`` | return number of *i*\'s for which | |\n| | ``s[i] == x`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.index(x[, i[, j]])`` | return smallest *k* such that | (4) |\n| | ``s[k] == x`` and ``i <= k < j`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.insert(i, x)`` | same as ``s[i:i] = [x]`` | (5) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.pop([i])`` | same as ``x = s[i]; del s[i]; | (6) |\n| | return x`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.remove(x)`` | same as ``del s[s.index(x)]`` | (4) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.reverse()`` | reverses the items of *s* in | (7) |\n| | place | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.sort([cmp[, key[, | sort the items of *s* in place | (7)(8)(9)(10) |\n| reverse]]])`` | | |\n+--------------------------------+----------------------------------+-----------------------+\n\nNotes:\n\n1. *t* must have the same length as the slice it is replacing.\n\n2. The C implementation of Python has historically accepted multiple\n parameters and implicitly joined them into a tuple; this no longer\n works in Python 2.0. Use of this misfeature has been deprecated\n since Python 1.4.\n\n3. *x* can be any iterable object.\n\n4. Raises ``ValueError`` when *x* is not found in *s*. When a negative\n index is passed as the second or third parameter to the ``index()``\n method, the list length is added, as for slice indices. If it is\n still negative, it is truncated to zero, as for slice indices.\n\n Changed in version 2.3: Previously, ``index()`` didn\'t have\n arguments for specifying start and stop positions.\n\n5. 
When a negative index is passed as the first parameter to the\n ``insert()`` method, the list length is added, as for slice\n indices. If it is still negative, it is truncated to zero, as for\n slice indices.\n\n Changed in version 2.3: Previously, all negative indices were\n truncated to zero.\n\n6. The ``pop()`` method is only supported by the list and array types.\n The optional argument *i* defaults to ``-1``, so that by default\n the last item is removed and returned.\n\n7. The ``sort()`` and ``reverse()`` methods modify the list in place\n for economy of space when sorting or reversing a large list. To\n remind you that they operate by side effect, they don\'t return the\n sorted or reversed list.\n\n8. The ``sort()`` method takes optional arguments for controlling the\n comparisons.\n\n *cmp* specifies a custom comparison function of two arguments (list\n items) which should return a negative, zero or positive number\n depending on whether the first argument is considered smaller than,\n equal to, or larger than the second argument: ``cmp=lambda x,y:\n cmp(x.lower(), y.lower())``. The default value is ``None``.\n\n *key* specifies a function of one argument that is used to extract\n a comparison key from each list element: ``key=str.lower``. The\n default value is ``None``.\n\n *reverse* is a boolean value. If set to ``True``, then the list\n elements are sorted as if each comparison were reversed.\n\n In general, the *key* and *reverse* conversion processes are much\n faster than specifying an equivalent *cmp* function. This is\n because *cmp* is called multiple times for each list element while\n *key* and *reverse* touch each element only once. Use\n ``functools.cmp_to_key()`` to convert an old-style *cmp* function\n to a *key* function.\n\n Changed in version 2.3: Support for ``None`` as an equivalent to\n omitting *cmp* was added.\n\n Changed in version 2.4: Support for *key* and *reverse* was added.\n\n9. Starting with Python 2.3, the ``sort()`` method is guaranteed to be\n stable. A sort is stable if it guarantees not to change the\n relative order of elements that compare equal --- this is helpful\n for sorting in multiple passes (for example, sort by department,\n then by salary grade).\n\n10. **CPython implementation detail:** While a list is being sorted,\n the effect of attempting to mutate, or even inspect, the list is\n undefined. The C implementation of Python 2.3 and newer makes the\n list appear empty for the duration, and raises ``ValueError`` if\n it can detect that the list has been mutated during a sort.\n', + 'typesseq-mutable': u"\nMutable Sequence Types\n**********************\n\nList and ``bytearray`` objects support additional operations that\nallow in-place modification of the object. Other mutable sequence\ntypes (when added to the language) should also support these\noperations. Strings and tuples are immutable sequence types: such\nobjects cannot be modified once created. 
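As an illustrative aside on the conversion-specifier components listed in the typesseq entry above, here is a small doctest-style sketch, assuming a Python 2 interpreter; the ``'*'`` width and precision are read from the argument tuple, and the ``'0'``, ``'-'`` and ``'+'`` flags behave as in the flag table (the values shown are just example inputs):

   >>> '%-*.*f|' % (8, 2, 3.14159)   # '-' flag; width 8 and precision 2 come from the tuple
   '3.14    |'
   >>> '%+05d' % 42                  # '0' pads with zeros, '+' forces a sign
   '+0042'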
The following operations are\ndefined on mutable sequence types (where *x* is an arbitrary object):\n\n+--------------------------------+----------------------------------+-----------------------+\n| Operation | Result | Notes |\n+================================+==================================+=======================+\n| ``s[i] = x`` | item *i* of *s* is replaced by | |\n| | *x* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s[i:j] = t`` | slice of *s* from *i* to *j* is | |\n| | replaced by the contents of the | |\n| | iterable *t* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``del s[i:j]`` | same as ``s[i:j] = []`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s[i:j:k] = t`` | the elements of ``s[i:j:k]`` are | (1) |\n| | replaced by those of *t* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``del s[i:j:k]`` | removes the elements of | |\n| | ``s[i:j:k]`` from the list | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.append(x)`` | same as ``s[len(s):len(s)] = | (2) |\n| | [x]`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.extend(x)`` | same as ``s[len(s):len(s)] = x`` | (3) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.count(x)`` | return number of *i*'s for which | |\n| | ``s[i] == x`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.index(x[, i[, j]])`` | return smallest *k* such that | (4) |\n| | ``s[k] == x`` and ``i <= k < j`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.insert(i, x)`` | same as ``s[i:i] = [x]`` | (5) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.pop([i])`` | same as ``x = s[i]; del s[i]; | (6) |\n| | return x`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.remove(x)`` | same as ``del s[s.index(x)]`` | (4) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.reverse()`` | reverses the items of *s* in | (7) |\n| | place | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.sort([cmp[, key[, | sort the items of *s* in place | (7)(8)(9)(10) |\n| reverse]]])`` | | |\n+--------------------------------+----------------------------------+-----------------------+\n\nNotes:\n\n1. *t* must have the same length as the slice it is replacing.\n\n2. The C implementation of Python has historically accepted multiple\n parameters and implicitly joined them into a tuple; this no longer\n works in Python 2.0. Use of this misfeature has been deprecated\n since Python 1.4.\n\n3. *x* can be any iterable object.\n\n4. Raises ``ValueError`` when *x* is not found in *s*. When a negative\n index is passed as the second or third parameter to the ``index()``\n method, the list length is added, as for slice indices. If it is\n still negative, it is truncated to zero, as for slice indices.\n\n Changed in version 2.3: Previously, ``index()`` didn't have\n arguments for specifying start and stop positions.\n\n5. 
When a negative index is passed as the first parameter to the\n ``insert()`` method, the list length is added, as for slice\n indices. If it is still negative, it is truncated to zero, as for\n slice indices.\n\n Changed in version 2.3: Previously, all negative indices were\n truncated to zero.\n\n6. The ``pop()`` method is only supported by the list and array types.\n The optional argument *i* defaults to ``-1``, so that by default\n the last item is removed and returned.\n\n7. The ``sort()`` and ``reverse()`` methods modify the list in place\n for economy of space when sorting or reversing a large list. To\n remind you that they operate by side effect, they don't return the\n sorted or reversed list.\n\n8. The ``sort()`` method takes optional arguments for controlling the\n comparisons.\n\n *cmp* specifies a custom comparison function of two arguments (list\n items) which should return a negative, zero or positive number\n depending on whether the first argument is considered smaller than,\n equal to, or larger than the second argument: ``cmp=lambda x,y:\n cmp(x.lower(), y.lower())``. The default value is ``None``.\n\n *key* specifies a function of one argument that is used to extract\n a comparison key from each list element: ``key=str.lower``. The\n default value is ``None``.\n\n *reverse* is a boolean value. If set to ``True``, then the list\n elements are sorted as if each comparison were reversed.\n\n In general, the *key* and *reverse* conversion processes are much\n faster than specifying an equivalent *cmp* function. This is\n because *cmp* is called multiple times for each list element while\n *key* and *reverse* touch each element only once. Use\n ``functools.cmp_to_key()`` to convert an old-style *cmp* function\n to a *key* function.\n\n Changed in version 2.3: Support for ``None`` as an equivalent to\n omitting *cmp* was added.\n\n Changed in version 2.4: Support for *key* and *reverse* was added.\n\n9. Starting with Python 2.3, the ``sort()`` method is guaranteed to be\n stable. A sort is stable if it guarantees not to change the\n relative order of elements that compare equal --- this is helpful\n for sorting in multiple passes (for example, sort by department,\n then by salary grade).\n\n10. **CPython implementation detail:** While a list is being sorted,\n the effect of attempting to mutate, or even inspect, the list is\n undefined. The C implementation of Python 2.3 and newer makes the\n list appear empty for the duration, and raises ``ValueError`` if\n it can detect that the list has been mutated during a sort.\n", 'unary': u'\nUnary arithmetic and bitwise operations\n***************************************\n\nAll unary arithmetic and bitwise operations have the same priority:\n\n u_expr ::= power | "-" u_expr | "+" u_expr | "~" u_expr\n\nThe unary ``-`` (minus) operator yields the negation of its numeric\nargument.\n\nThe unary ``+`` (plus) operator yields its numeric argument unchanged.\n\nThe unary ``~`` (invert) operator yields the bitwise inversion of its\nplain or long integer argument. The bitwise inversion of ``x`` is\ndefined as ``-(x+1)``. 
It only applies to integral numbers.\n\nIn all three cases, if the argument does not have the proper type, a\n``TypeError`` exception is raised.\n', 'while': u'\nThe ``while`` statement\n***********************\n\nThe ``while`` statement is used for repeated execution as long as an\nexpression is true:\n\n while_stmt ::= "while" expression ":" suite\n ["else" ":" suite]\n\nThis repeatedly tests the expression and, if it is true, executes the\nfirst suite; if the expression is false (which may be the first time\nit is tested) the suite of the ``else`` clause, if present, is\nexecuted and the loop terminates.\n\nA ``break`` statement executed in the first suite terminates the loop\nwithout executing the ``else`` clause\'s suite. A ``continue``\nstatement executed in the first suite skips the rest of the suite and\ngoes back to testing the expression.\n', - 'with': u'\nThe ``with`` statement\n**********************\n\nNew in version 2.5.\n\nThe ``with`` statement is used to wrap the execution of a block with\nmethods defined by a context manager (see section *With Statement\nContext Managers*). This allows common\n``try``...``except``...``finally`` usage patterns to be encapsulated\nfor convenient reuse.\n\n with_stmt ::= "with" with_item ("," with_item)* ":" suite\n with_item ::= expression ["as" target]\n\nThe execution of the ``with`` statement with one "item" proceeds as\nfollows:\n\n1. The context expression is evaluated to obtain a context manager.\n\n2. The context manager\'s ``__exit__()`` is loaded for later use.\n\n3. The context manager\'s ``__enter__()`` method is invoked.\n\n4. If a target was included in the ``with`` statement, the return\n value from ``__enter__()`` is assigned to it.\n\n Note: The ``with`` statement guarantees that if the ``__enter__()``\n method returns without an error, then ``__exit__()`` will always\n be called. Thus, if an error occurs during the assignment to the\n target list, it will be treated the same as an error occurring\n within the suite would be. See step 6 below.\n\n5. The suite is executed.\n\n6. The context manager\'s ``__exit__()`` method is invoked. If an\n exception caused the suite to be exited, its type, value, and\n traceback are passed as arguments to ``__exit__()``. Otherwise,\n three ``None`` arguments are supplied.\n\n If the suite was exited due to an exception, and the return value\n from the ``__exit__()`` method was false, the exception is\n reraised. If the return value was true, the exception is\n suppressed, and execution continues with the statement following\n the ``with`` statement.\n\n If the suite was exited for any reason other than an exception, the\n return value from ``__exit__()`` is ignored, and execution proceeds\n at the normal location for the kind of exit that was taken.\n\nWith more than one item, the context managers are processed as if\nmultiple ``with`` statements were nested:\n\n with A() as a, B() as b:\n suite\n\nis equivalent to\n\n with A() as a:\n with B() as b:\n suite\n\nNote: In Python 2.5, the ``with`` statement is only allowed when the\n ``with_statement`` feature has been enabled. 
It is always enabled\n in Python 2.6.\n\nChanged in version 2.7: Support for multiple context expressions.\n\nSee also:\n\n **PEP 0343** - The "with" statement\n The specification, background, and examples for the Python\n ``with`` statement.\n', + 'with': u'\nThe ``with`` statement\n**********************\n\nNew in version 2.5.\n\nThe ``with`` statement is used to wrap the execution of a block with\nmethods defined by a context manager (see section *With Statement\nContext Managers*). This allows common\n``try``...``except``...``finally`` usage patterns to be encapsulated\nfor convenient reuse.\n\n with_stmt ::= "with" with_item ("," with_item)* ":" suite\n with_item ::= expression ["as" target]\n\nThe execution of the ``with`` statement with one "item" proceeds as\nfollows:\n\n1. The context expression (the expression given in the **with_item**)\n is evaluated to obtain a context manager.\n\n2. The context manager\'s ``__exit__()`` is loaded for later use.\n\n3. The context manager\'s ``__enter__()`` method is invoked.\n\n4. If a target was included in the ``with`` statement, the return\n value from ``__enter__()`` is assigned to it.\n\n Note: The ``with`` statement guarantees that if the ``__enter__()``\n method returns without an error, then ``__exit__()`` will always\n be called. Thus, if an error occurs during the assignment to the\n target list, it will be treated the same as an error occurring\n within the suite would be. See step 6 below.\n\n5. The suite is executed.\n\n6. The context manager\'s ``__exit__()`` method is invoked. If an\n exception caused the suite to be exited, its type, value, and\n traceback are passed as arguments to ``__exit__()``. Otherwise,\n three ``None`` arguments are supplied.\n\n If the suite was exited due to an exception, and the return value\n from the ``__exit__()`` method was false, the exception is\n reraised. If the return value was true, the exception is\n suppressed, and execution continues with the statement following\n the ``with`` statement.\n\n If the suite was exited for any reason other than an exception, the\n return value from ``__exit__()`` is ignored, and execution proceeds\n at the normal location for the kind of exit that was taken.\n\nWith more than one item, the context managers are processed as if\nmultiple ``with`` statements were nested:\n\n with A() as a, B() as b:\n suite\n\nis equivalent to\n\n with A() as a:\n with B() as b:\n suite\n\nNote: In Python 2.5, the ``with`` statement is only allowed when the\n ``with_statement`` feature has been enabled. It is always enabled\n in Python 2.6.\n\nChanged in version 2.7: Support for multiple context expressions.\n\nSee also:\n\n **PEP 0343** - The "with" statement\n The specification, background, and examples for the Python\n ``with`` statement.\n', 'yield': u'\nThe ``yield`` statement\n***********************\n\n yield_stmt ::= yield_expression\n\nThe ``yield`` statement is only used when defining a generator\nfunction, and is only used in the body of the generator function.\nUsing a ``yield`` statement in a function definition is sufficient to\ncause that definition to create a generator function instead of a\nnormal function.\n\nWhen a generator function is called, it returns an iterator known as a\ngenerator iterator, or more commonly, a generator. 
The body of the\ngenerator function is executed by calling the generator\'s ``next()``\nmethod repeatedly until it raises an exception.\n\nWhen a ``yield`` statement is executed, the state of the generator is\nfrozen and the value of **expression_list** is returned to\n``next()``\'s caller. By "frozen" we mean that all local state is\nretained, including the current bindings of local variables, the\ninstruction pointer, and the internal evaluation stack: enough\ninformation is saved so that the next time ``next()`` is invoked, the\nfunction can proceed exactly as if the ``yield`` statement were just\nanother external call.\n\nAs of Python version 2.5, the ``yield`` statement is now allowed in\nthe ``try`` clause of a ``try`` ... ``finally`` construct. If the\ngenerator is not resumed before it is finalized (by reaching a zero\nreference count or by being garbage collected), the generator-\niterator\'s ``close()`` method will be called, allowing any pending\n``finally`` clauses to execute.\n\nNote: In Python 2.2, the ``yield`` statement was only allowed when the\n ``generators`` feature has been enabled. This ``__future__`` import\n statement was used to enable the feature:\n\n from __future__ import generators\n\nSee also:\n\n **PEP 0255** - Simple Generators\n The proposal for adding generators and the ``yield`` statement\n to Python.\n\n **PEP 0342** - Coroutines via Enhanced Generators\n The proposal that, among other generator enhancements, proposed\n allowing ``yield`` to appear inside a ``try`` ... ``finally``\n block.\n'} diff --git a/lib-python/2.7/random.py b/lib-python/2.7/random.py --- a/lib-python/2.7/random.py +++ b/lib-python/2.7/random.py @@ -317,7 +317,7 @@ n = len(population) if not 0 <= k <= n: - raise ValueError, "sample larger than population" + raise ValueError("sample larger than population") random = self.random _int = int result = [None] * k @@ -490,6 +490,12 @@ Conditions on the parameters are alpha > 0 and beta > 0. + The probability distribution function is: + + x ** (alpha - 1) * math.exp(-x / beta) + pdf(x) = -------------------------------------- + math.gamma(alpha) * beta ** alpha + """ # alpha > 0, beta > 0, mean is alpha*beta, variance is alpha*beta**2 @@ -592,7 +598,7 @@ ## -------------------- beta -------------------- ## See -## http://sourceforge.net/bugs/?func=detailbug&bug_id=130030&group_id=5470 +## http://mail.python.org/pipermail/python-bugs-list/2001-January/003752.html ## for Ivan Frohne's insightful analysis of why the original implementation: ## ## def betavariate(self, alpha, beta): diff --git a/lib-python/2.7/re.py b/lib-python/2.7/re.py --- a/lib-python/2.7/re.py +++ b/lib-python/2.7/re.py @@ -207,8 +207,7 @@ "Escape all non-alphanumeric characters in pattern." s = list(pattern) alphanum = _alphanum - for i in range(len(pattern)): - c = pattern[i] + for i, c in enumerate(pattern): if c not in alphanum: if c == "\000": s[i] = "\\000" diff --git a/lib-python/2.7/shutil.py b/lib-python/2.7/shutil.py --- a/lib-python/2.7/shutil.py +++ b/lib-python/2.7/shutil.py @@ -277,6 +277,12 @@ """ real_dst = dst if os.path.isdir(dst): + if _samefile(src, dst): + # We might be on a case insensitive filesystem, + # perform the rename anyway. + os.rename(src, dst) + return + real_dst = os.path.join(dst, _basename(src)) if os.path.exists(real_dst): raise Error, "Destination path '%s' already exists" % real_dst @@ -336,7 +342,7 @@ archive that is being built. If not provided, the current owner and group will be used. 
- The output tar file will be named 'base_dir' + ".tar", possibly plus + The output tar file will be named 'base_name' + ".tar", possibly plus the appropriate compression extension (".gz", or ".bz2"). Returns the output filename. @@ -406,7 +412,7 @@ def _make_zipfile(base_name, base_dir, verbose=0, dry_run=0, logger=None): """Create a zip file from all the files under 'base_dir'. - The output zip file will be named 'base_dir' + ".zip". Uses either the + The output zip file will be named 'base_name' + ".zip". Uses either the "zipfile" Python module (if available) or the InfoZIP "zip" utility (if installed and found on the default search path). If neither tool is available, raises ExecError. Returns the name of the output zip diff --git a/lib-python/2.7/site.py b/lib-python/2.7/site.py --- a/lib-python/2.7/site.py +++ b/lib-python/2.7/site.py @@ -61,6 +61,7 @@ import sys import os import __builtin__ +import traceback # Prefixes for site-packages; add additional prefixes like /usr/local here PREFIXES = [sys.prefix, sys.exec_prefix] @@ -155,17 +156,26 @@ except IOError: return with f: - for line in f: + for n, line in enumerate(f): if line.startswith("#"): continue - if line.startswith(("import ", "import\t")): - exec line - continue - line = line.rstrip() - dir, dircase = makepath(sitedir, line) - if not dircase in known_paths and os.path.exists(dir): - sys.path.append(dir) - known_paths.add(dircase) + try: + if line.startswith(("import ", "import\t")): + exec line + continue + line = line.rstrip() + dir, dircase = makepath(sitedir, line) + if not dircase in known_paths and os.path.exists(dir): + sys.path.append(dir) + known_paths.add(dircase) + except Exception as err: + print >>sys.stderr, "Error processing line {:d} of {}:\n".format( + n+1, fullname) + for record in traceback.format_exception(*sys.exc_info()): + for line in record.splitlines(): + print >>sys.stderr, ' '+line + print >>sys.stderr, "\nRemainder of file ignored" + break if reset: known_paths = None return known_paths diff --git a/lib-python/2.7/smtplib.py b/lib-python/2.7/smtplib.py --- a/lib-python/2.7/smtplib.py +++ b/lib-python/2.7/smtplib.py @@ -49,17 +49,18 @@ from email.base64mime import encode as encode_base64 from sys import stderr -__all__ = ["SMTPException","SMTPServerDisconnected","SMTPResponseException", - "SMTPSenderRefused","SMTPRecipientsRefused","SMTPDataError", - "SMTPConnectError","SMTPHeloError","SMTPAuthenticationError", - "quoteaddr","quotedata","SMTP"] +__all__ = ["SMTPException", "SMTPServerDisconnected", "SMTPResponseException", + "SMTPSenderRefused", "SMTPRecipientsRefused", "SMTPDataError", + "SMTPConnectError", "SMTPHeloError", "SMTPAuthenticationError", + "quoteaddr", "quotedata", "SMTP"] SMTP_PORT = 25 SMTP_SSL_PORT = 465 -CRLF="\r\n" +CRLF = "\r\n" OLDSTYLE_AUTH = re.compile(r"auth=(.*)", re.I) + # Exception classes used by this module. class SMTPException(Exception): """Base class for all exceptions raised by this module.""" @@ -109,7 +110,7 @@ def __init__(self, recipients): self.recipients = recipients - self.args = ( recipients,) + self.args = (recipients,) class SMTPDataError(SMTPResponseException): @@ -128,6 +129,7 @@ combination provided. """ + def quoteaddr(addr): """Quote a subset of the email addresses defined by RFC 821. @@ -138,7 +140,7 @@ m = email.utils.parseaddr(addr)[1] except AttributeError: pass - if m == (None, None): # Indicates parse failure or AttributeError + if m == (None, None): # Indicates parse failure or AttributeError # something weird here.. 
punt -ddm return "<%s>" % addr elif m is None: @@ -175,7 +177,8 @@ chr = None while chr != "\n": chr = self.sslobj.read(1) - if not chr: break + if not chr: + break str += chr return str @@ -219,6 +222,7 @@ ehlo_msg = "ehlo" ehlo_resp = None does_esmtp = 0 + default_port = SMTP_PORT def __init__(self, host='', port=0, local_hostname=None, timeout=socket._GLOBAL_DEFAULT_TIMEOUT): @@ -234,7 +238,6 @@ """ self.timeout = timeout self.esmtp_features = {} - self.default_port = SMTP_PORT if host: (code, msg) = self.connect(host, port) if code != 220: @@ -269,10 +272,11 @@ def _get_socket(self, port, host, timeout): # This makes it simpler for SMTP_SSL to use the SMTP connect code # and just alter the socket connection bit. - if self.debuglevel > 0: print>>stderr, 'connect:', (host, port) + if self.debuglevel > 0: + print>>stderr, 'connect:', (host, port) return socket.create_connection((port, host), timeout) - def connect(self, host='localhost', port = 0): + def connect(self, host='localhost', port=0): """Connect to a host on a given port. If the hostname ends with a colon (`:') followed by a number, and @@ -286,20 +290,25 @@ if not port and (host.find(':') == host.rfind(':')): i = host.rfind(':') if i >= 0: - host, port = host[:i], host[i+1:] - try: port = int(port) + host, port = host[:i], host[i + 1:] + try: + port = int(port) except ValueError: raise socket.error, "nonnumeric port" - if not port: port = self.default_port - if self.debuglevel > 0: print>>stderr, 'connect:', (host, port) + if not port: + port = self.default_port + if self.debuglevel > 0: + print>>stderr, 'connect:', (host, port) self.sock = self._get_socket(host, port, self.timeout) (code, msg) = self.getreply() - if self.debuglevel > 0: print>>stderr, "connect:", msg + if self.debuglevel > 0: + print>>stderr, "connect:", msg return (code, msg) def send(self, str): """Send `str' to the server.""" - if self.debuglevel > 0: print>>stderr, 'send:', repr(str) + if self.debuglevel > 0: + print>>stderr, 'send:', repr(str) if hasattr(self, 'sock') and self.sock: try: self.sock.sendall(str) @@ -330,7 +339,7 @@ Raises SMTPServerDisconnected if end-of-file is reached. """ - resp=[] + resp = [] if self.file is None: self.file = self.sock.makefile('rb') while 1: @@ -341,9 +350,10 @@ if line == '': self.close() raise SMTPServerDisconnected("Connection unexpectedly closed") - if self.debuglevel > 0: print>>stderr, 'reply:', repr(line) + if self.debuglevel > 0: + print>>stderr, 'reply:', repr(line) resp.append(line[4:].strip()) - code=line[:3] + code = line[:3] # Check that the error code is syntactically correct. # Don't attempt to read a continuation line if it is broken. try: @@ -352,17 +362,17 @@ errcode = -1 break # Check if multiline response. - if line[3:4]!="-": + if line[3:4] != "-": break errmsg = "\n".join(resp) if self.debuglevel > 0: - print>>stderr, 'reply: retcode (%s); Msg: %s' % (errcode,errmsg) + print>>stderr, 'reply: retcode (%s); Msg: %s' % (errcode, errmsg) return errcode, errmsg def docmd(self, cmd, args=""): """Send a command, and return its response code.""" - self.putcmd(cmd,args) + self.putcmd(cmd, args) return self.getreply() # std smtp commands @@ -372,9 +382,9 @@ host. """ self.putcmd("helo", name or self.local_hostname) - (code,msg)=self.getreply() - self.helo_resp=msg - return (code,msg) + (code, msg) = self.getreply() + self.helo_resp = msg + return (code, msg) def ehlo(self, name=''): """ SMTP 'ehlo' command. 
@@ -383,19 +393,19 @@ """ self.esmtp_features = {} self.putcmd(self.ehlo_msg, name or self.local_hostname) - (code,msg)=self.getreply() + (code, msg) = self.getreply() # According to RFC1869 some (badly written) # MTA's will disconnect on an ehlo. Toss an exception if # that happens -ddm if code == -1 and len(msg) == 0: self.close() raise SMTPServerDisconnected("Server not connected") - self.ehlo_resp=msg + self.ehlo_resp = msg if code != 250: - return (code,msg) - self.does_esmtp=1 + return (code, msg) + self.does_esmtp = 1 #parse the ehlo response -ddm - resp=self.ehlo_resp.split('\n') + resp = self.ehlo_resp.split('\n') del resp[0] for each in resp: # To be able to communicate with as many SMTP servers as possible, @@ -415,16 +425,16 @@ # It's actually stricter, in that only spaces are allowed between # parameters, but were not going to check for that here. Note # that the space isn't present if there are no parameters. - m=re.match(r'(?P[A-Za-z0-9][A-Za-z0-9\-]*) ?',each) + m = re.match(r'(?P[A-Za-z0-9][A-Za-z0-9\-]*) ?', each) if m: - feature=m.group("feature").lower() - params=m.string[m.end("feature"):].strip() + feature = m.group("feature").lower() + params = m.string[m.end("feature"):].strip() if feature == "auth": self.esmtp_features[feature] = self.esmtp_features.get(feature, "") \ + " " + params else: - self.esmtp_features[feature]=params - return (code,msg) + self.esmtp_features[feature] = params + return (code, msg) def has_extn(self, opt): """Does the server support a given SMTP service extension?""" @@ -444,23 +454,23 @@ """SMTP 'noop' command -- doesn't do anything :>""" return self.docmd("noop") - def mail(self,sender,options=[]): + def mail(self, sender, options=[]): """SMTP 'mail' command -- begins mail xfer session.""" optionlist = '' if options and self.does_esmtp: optionlist = ' ' + ' '.join(options) - self.putcmd("mail", "FROM:%s%s" % (quoteaddr(sender) ,optionlist)) + self.putcmd("mail", "FROM:%s%s" % (quoteaddr(sender), optionlist)) return self.getreply() - def rcpt(self,recip,options=[]): + def rcpt(self, recip, options=[]): """SMTP 'rcpt' command -- indicates 1 recipient for this mail.""" optionlist = '' if options and self.does_esmtp: optionlist = ' ' + ' '.join(options) - self.putcmd("rcpt","TO:%s%s" % (quoteaddr(recip),optionlist)) + self.putcmd("rcpt", "TO:%s%s" % (quoteaddr(recip), optionlist)) return self.getreply() - def data(self,msg): + def data(self, msg): """SMTP 'DATA' command -- sends message data to server. Automatically quotes lines beginning with a period per rfc821. @@ -469,26 +479,28 @@ response code received when the all data is sent. """ self.putcmd("data") - (code,repl)=self.getreply() - if self.debuglevel >0 : print>>stderr, "data:", (code,repl) + (code, repl) = self.getreply() + if self.debuglevel > 0: + print>>stderr, "data:", (code, repl) if code != 354: - raise SMTPDataError(code,repl) + raise SMTPDataError(code, repl) else: q = quotedata(msg) if q[-2:] != CRLF: q = q + CRLF q = q + "." + CRLF self.send(q) - (code,msg)=self.getreply() - if self.debuglevel >0 : print>>stderr, "data:", (code,msg) - return (code,msg) + (code, msg) = self.getreply() + if self.debuglevel > 0: + print>>stderr, "data:", (code, msg) + return (code, msg) def verify(self, address): """SMTP 'verify' command -- checks for address validity.""" self.putcmd("vrfy", quoteaddr(address)) return self.getreply() # a.k.a. 
- vrfy=verify + vrfy = verify def expn(self, address): """SMTP 'expn' command -- expands a mailing list.""" @@ -592,7 +604,7 @@ raise SMTPAuthenticationError(code, resp) return (code, resp) - def starttls(self, keyfile = None, certfile = None): + def starttls(self, keyfile=None, certfile=None): """Puts the connection to the SMTP server into TLS mode. If there has been no previous EHLO or HELO command this session, this @@ -695,22 +707,22 @@ for option in mail_options: esmtp_opts.append(option) - (code,resp) = self.mail(from_addr, esmtp_opts) + (code, resp) = self.mail(from_addr, esmtp_opts) if code != 250: self.rset() raise SMTPSenderRefused(code, resp, from_addr) - senderrs={} + senderrs = {} if isinstance(to_addrs, basestring): to_addrs = [to_addrs] for each in to_addrs: - (code,resp)=self.rcpt(each, rcpt_options) + (code, resp) = self.rcpt(each, rcpt_options) if (code != 250) and (code != 251): - senderrs[each]=(code,resp) - if len(senderrs)==len(to_addrs): + senderrs[each] = (code, resp) + if len(senderrs) == len(to_addrs): # the server refused all our recipients self.rset() raise SMTPRecipientsRefused(senderrs) - (code,resp) = self.data(msg) + (code, resp) = self.data(msg) if code != 250: self.rset() raise SMTPDataError(code, resp) @@ -744,16 +756,19 @@ are also optional - they can contain a PEM formatted private key and certificate chain file for the SSL connection. """ + + default_port = SMTP_SSL_PORT + def __init__(self, host='', port=0, local_hostname=None, keyfile=None, certfile=None, timeout=socket._GLOBAL_DEFAULT_TIMEOUT): self.keyfile = keyfile self.certfile = certfile SMTP.__init__(self, host, port, local_hostname, timeout) - self.default_port = SMTP_SSL_PORT def _get_socket(self, host, port, timeout): - if self.debuglevel > 0: print>>stderr, 'connect:', (host, port) + if self.debuglevel > 0: + print>>stderr, 'connect:', (host, port) new_socket = socket.create_connection((host, port), timeout) new_socket = ssl.wrap_socket(new_socket, self.keyfile, self.certfile) self.file = SSLFakeFile(new_socket) @@ -781,11 +796,11 @@ ehlo_msg = "lhlo" - def __init__(self, host = '', port = LMTP_PORT, local_hostname = None): + def __init__(self, host='', port=LMTP_PORT, local_hostname=None): """Initialize a new instance.""" SMTP.__init__(self, host, port, local_hostname) - def connect(self, host = 'localhost', port = 0): + def connect(self, host='localhost', port=0): """Connect to the LMTP daemon, on either a Unix or a TCP socket.""" if host[0] != '/': return SMTP.connect(self, host, port) @@ -795,13 +810,15 @@ self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) self.sock.connect(host) except socket.error, msg: - if self.debuglevel > 0: print>>stderr, 'connect fail:', host + if self.debuglevel > 0: + print>>stderr, 'connect fail:', host if self.sock: self.sock.close() self.sock = None raise socket.error, msg (code, msg) = self.getreply() - if self.debuglevel > 0: print>>stderr, "connect:", msg + if self.debuglevel > 0: + print>>stderr, "connect:", msg return (code, msg) @@ -815,7 +832,7 @@ return sys.stdin.readline().strip() fromaddr = prompt("From") - toaddrs = prompt("To").split(',') + toaddrs = prompt("To").split(',') print "Enter message, end with ^D:" msg = '' while 1: diff --git a/lib-python/2.7/ssl.py b/lib-python/2.7/ssl.py --- a/lib-python/2.7/ssl.py +++ b/lib-python/2.7/ssl.py @@ -121,9 +121,11 @@ if e.errno != errno.ENOTCONN: raise # no, no connection yet + self._connected = False self._sslobj = None else: # yes, create the SSL object + self._connected = True 
self._sslobj = _ssl.sslwrap(self._sock, server_side, keyfile, certfile, cert_reqs, ssl_version, ca_certs, @@ -293,21 +295,36 @@ self._sslobj.do_handshake() - def connect(self, addr): - - """Connects to remote ADDR, and then wraps the connection in - an SSL channel.""" - + def _real_connect(self, addr, return_errno): # Here we assume that the socket is client-side, and not # connected at the time of the call. We connect it, then wrap it. - if self._sslobj: + if self._connected: raise ValueError("attempt to connect already-connected SSLSocket!") - socket.connect(self, addr) self._sslobj = _ssl.sslwrap(self._sock, False, self.keyfile, self.certfile, self.cert_reqs, self.ssl_version, self.ca_certs, self.ciphers) - if self.do_handshake_on_connect: - self.do_handshake() + try: + socket.connect(self, addr) + if self.do_handshake_on_connect: + self.do_handshake() + except socket_error as e: + if return_errno: + return e.errno + else: + self._sslobj = None + raise e + self._connected = True + return 0 + + def connect(self, addr): + """Connects to remote ADDR, and then wraps the connection in + an SSL channel.""" + self._real_connect(addr, False) + + def connect_ex(self, addr): + """Connects to remote ADDR, and then wraps the connection in + an SSL channel.""" + return self._real_connect(addr, True) def accept(self): diff --git a/lib-python/2.7/subprocess.py b/lib-python/2.7/subprocess.py --- a/lib-python/2.7/subprocess.py +++ b/lib-python/2.7/subprocess.py @@ -396,6 +396,7 @@ import traceback import gc import signal +import errno # Exception classes used by this module. class CalledProcessError(Exception): @@ -427,7 +428,6 @@ else: import select _has_poll = hasattr(select, 'poll') - import errno import fcntl import pickle @@ -441,8 +441,15 @@ "check_output", "CalledProcessError"] if mswindows: - from _subprocess import CREATE_NEW_CONSOLE, CREATE_NEW_PROCESS_GROUP - __all__.extend(["CREATE_NEW_CONSOLE", "CREATE_NEW_PROCESS_GROUP"]) + from _subprocess import (CREATE_NEW_CONSOLE, CREATE_NEW_PROCESS_GROUP, + STD_INPUT_HANDLE, STD_OUTPUT_HANDLE, + STD_ERROR_HANDLE, SW_HIDE, + STARTF_USESTDHANDLES, STARTF_USESHOWWINDOW) + + __all__.extend(["CREATE_NEW_CONSOLE", "CREATE_NEW_PROCESS_GROUP", + "STD_INPUT_HANDLE", "STD_OUTPUT_HANDLE", + "STD_ERROR_HANDLE", "SW_HIDE", + "STARTF_USESTDHANDLES", "STARTF_USESHOWWINDOW"]) try: MAXFD = os.sysconf("SC_OPEN_MAX") except: @@ -726,7 +733,11 @@ stderr = None if self.stdin: if input: - self.stdin.write(input) + try: + self.stdin.write(input) + except IOError as e: + if e.errno != errno.EPIPE and e.errno != errno.EINVAL: + raise self.stdin.close() elif self.stdout: stdout = self.stdout.read() @@ -883,7 +894,7 @@ except pywintypes.error, e: # Translate pywintypes.error to WindowsError, which is # a subclass of OSError. FIXME: We should really - # translate errno using _sys_errlist (or simliar), but + # translate errno using _sys_errlist (or similar), but # how can this be done from Python? 
raise WindowsError(*e.args) finally: @@ -956,7 +967,11 @@ if self.stdin: if input is not None: - self.stdin.write(input) + try: + self.stdin.write(input) + except IOError as e: + if e.errno != errno.EPIPE: + raise self.stdin.close() if self.stdout: @@ -1051,14 +1066,17 @@ errread, errwrite) - def _set_cloexec_flag(self, fd): + def _set_cloexec_flag(self, fd, cloexec=True): try: cloexec_flag = fcntl.FD_CLOEXEC except AttributeError: cloexec_flag = 1 old = fcntl.fcntl(fd, fcntl.F_GETFD) - fcntl.fcntl(fd, fcntl.F_SETFD, old | cloexec_flag) + if cloexec: + fcntl.fcntl(fd, fcntl.F_SETFD, old | cloexec_flag) + else: + fcntl.fcntl(fd, fcntl.F_SETFD, old & ~cloexec_flag) def _close_fds(self, but): @@ -1128,21 +1146,25 @@ os.close(errpipe_read) # Dup fds for child - if p2cread is not None: - os.dup2(p2cread, 0) - if c2pwrite is not None: - os.dup2(c2pwrite, 1) - if errwrite is not None: - os.dup2(errwrite, 2) + def _dup2(a, b): + # dup2() removes the CLOEXEC flag but + # we must do it ourselves if dup2() + # would be a no-op (issue #10806). + if a == b: + self._set_cloexec_flag(a, False) + elif a is not None: + os.dup2(a, b) + _dup2(p2cread, 0) + _dup2(c2pwrite, 1) + _dup2(errwrite, 2) - # Close pipe fds. Make sure we don't close the same - # fd more than once, or standard fds. - if p2cread is not None and p2cread not in (0,): - os.close(p2cread) - if c2pwrite is not None and c2pwrite not in (p2cread, 1): - os.close(c2pwrite) - if errwrite is not None and errwrite not in (p2cread, c2pwrite, 2): - os.close(errwrite) + # Close pipe fds. Make sure we don't close the + # same fd more than once, or standard fds. + closed = { None } + for fd in [p2cread, c2pwrite, errwrite]: + if fd not in closed and fd > 2: + os.close(fd) + closed.add(fd) # Close all other fds, if asked for if close_fds: @@ -1194,7 +1216,11 @@ os.close(errpipe_read) if data != "": - _eintr_retry_call(os.waitpid, self.pid, 0) + try: + _eintr_retry_call(os.waitpid, self.pid, 0) + except OSError as e: + if e.errno != errno.ECHILD: + raise child_exception = pickle.loads(data) for fd in (p2cwrite, c2pread, errread): if fd is not None: @@ -1240,7 +1266,15 @@ """Wait for child process to terminate. Returns returncode attribute.""" if self.returncode is None: - pid, sts = _eintr_retry_call(os.waitpid, self.pid, 0) + try: + pid, sts = _eintr_retry_call(os.waitpid, self.pid, 0) + except OSError as e: + if e.errno != errno.ECHILD: + raise + # This happens if SIGCLD is set to be ignored or waiting + # for child processes has otherwise been disabled for our + # process. This child is dead, we can't get the status. 
+ sts = 0 self._handle_exitstatus(sts) return self.returncode @@ -1317,9 +1351,16 @@ for fd, mode in ready: if mode & select.POLLOUT: chunk = input[input_offset : input_offset + _PIPE_BUF] - input_offset += os.write(fd, chunk) - if input_offset >= len(input): - close_unregister_and_remove(fd) + try: + input_offset += os.write(fd, chunk) + except OSError as e: + if e.errno == errno.EPIPE: + close_unregister_and_remove(fd) + else: + raise + else: + if input_offset >= len(input): + close_unregister_and_remove(fd) elif mode & select_POLLIN_POLLPRI: data = os.read(fd, 4096) if not data: @@ -1358,11 +1399,19 @@ if self.stdin in wlist: chunk = input[input_offset : input_offset + _PIPE_BUF] - bytes_written = os.write(self.stdin.fileno(), chunk) - input_offset += bytes_written - if input_offset >= len(input): - self.stdin.close() - write_set.remove(self.stdin) + try: + bytes_written = os.write(self.stdin.fileno(), chunk) + except OSError as e: + if e.errno == errno.EPIPE: + self.stdin.close() + write_set.remove(self.stdin) + else: + raise + else: + input_offset += bytes_written + if input_offset >= len(input): + self.stdin.close() + write_set.remove(self.stdin) if self.stdout in rlist: data = os.read(self.stdout.fileno(), 1024) diff --git a/lib-python/2.7/symbol.py b/lib-python/2.7/symbol.py --- a/lib-python/2.7/symbol.py +++ b/lib-python/2.7/symbol.py @@ -82,20 +82,19 @@ sliceop = 325 exprlist = 326 testlist = 327 -dictmaker = 328 -dictorsetmaker = 329 -classdef = 330 -arglist = 331 -argument = 332 -list_iter = 333 -list_for = 334 -list_if = 335 -comp_iter = 336 -comp_for = 337 -comp_if = 338 -testlist1 = 339 -encoding_decl = 340 -yield_expr = 341 +dictorsetmaker = 328 +classdef = 329 +arglist = 330 +argument = 331 +list_iter = 332 +list_for = 333 +list_if = 334 +comp_iter = 335 +comp_for = 336 +comp_if = 337 +testlist1 = 338 +encoding_decl = 339 +yield_expr = 340 #--end constants-- sym_name = {} diff --git a/lib-python/2.7/sysconfig.py b/lib-python/2.7/sysconfig.py --- a/lib-python/2.7/sysconfig.py +++ b/lib-python/2.7/sysconfig.py @@ -271,7 +271,7 @@ def _get_makefile_filename(): if _PYTHON_BUILD: return os.path.join(_PROJECT_BASE, "Makefile") - return os.path.join(get_path('stdlib'), "config", "Makefile") + return os.path.join(get_path('platstdlib'), "config", "Makefile") def _init_posix(vars): @@ -297,21 +297,6 @@ msg = msg + " (%s)" % e.strerror raise IOError(msg) - # On MacOSX we need to check the setting of the environment variable - # MACOSX_DEPLOYMENT_TARGET: configure bases some choices on it so - # it needs to be compatible. - # If it isn't set we set it to the configure-time value - if sys.platform == 'darwin' and 'MACOSX_DEPLOYMENT_TARGET' in vars: - cfg_target = vars['MACOSX_DEPLOYMENT_TARGET'] - cur_target = os.getenv('MACOSX_DEPLOYMENT_TARGET', '') - if cur_target == '': - cur_target = cfg_target - os.putenv('MACOSX_DEPLOYMENT_TARGET', cfg_target) - elif map(int, cfg_target.split('.')) > map(int, cur_target.split('.')): - msg = ('$MACOSX_DEPLOYMENT_TARGET mismatch: now "%s" but "%s" ' - 'during configure' % (cur_target, cfg_target)) - raise IOError(msg) - # On AIX, there are wrong paths to the linker scripts in the Makefile # -- these paths are relative to the Python source, but when installed # the scripts are in another directory. @@ -616,9 +601,7 @@ # machine is going to compile and link as if it were # MACOSX_DEPLOYMENT_TARGET. 
cfgvars = get_config_vars() - macver = os.environ.get('MACOSX_DEPLOYMENT_TARGET') - if not macver: - macver = cfgvars.get('MACOSX_DEPLOYMENT_TARGET') + macver = cfgvars.get('MACOSX_DEPLOYMENT_TARGET') if 1: # Always calculate the release of the running machine, @@ -639,7 +622,6 @@ m = re.search( r'ProductUserVisibleVersion\s*' + r'(.*?)', f.read()) - f.close() if m is not None: macrelease = '.'.join(m.group(1).split('.')[:2]) # else: fall back to the default behaviour diff --git a/lib-python/2.7/tarfile.py b/lib-python/2.7/tarfile.py --- a/lib-python/2.7/tarfile.py +++ b/lib-python/2.7/tarfile.py @@ -2239,10 +2239,14 @@ if hasattr(os, "symlink") and hasattr(os, "link"): # For systems that support symbolic and hard links. if tarinfo.issym(): + if os.path.lexists(targetpath): + os.unlink(targetpath) os.symlink(tarinfo.linkname, targetpath) else: # See extract(). if os.path.exists(tarinfo._link_target): + if os.path.lexists(targetpath): + os.unlink(targetpath) os.link(tarinfo._link_target, targetpath) else: self._extract_member(self._find_link_target(tarinfo), targetpath) diff --git a/lib-python/2.7/telnetlib.py b/lib-python/2.7/telnetlib.py --- a/lib-python/2.7/telnetlib.py +++ b/lib-python/2.7/telnetlib.py @@ -236,7 +236,7 @@ """ if self.debuglevel > 0: - print 'Telnet(%s,%d):' % (self.host, self.port), + print 'Telnet(%s,%s):' % (self.host, self.port), if args: print msg % args else: diff --git a/lib-python/2.7/test/cjkencodings/big5-utf8.txt b/lib-python/2.7/test/cjkencodings/big5-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/big5-utf8.txt @@ -0,0 +1,9 @@ +如何在 Python 中使用既有的 C library? + 在資訊科技快速發展的今天, 開發及測試軟體的速度是不容忽視的 +課題. 為加快開發及測試的速度, 我們便常希望能利用一些已開發好的 +library, 並有一個 fast prototyping 的 programming language 可 +供使用. 目前有許許多多的 library 是以 C 寫成, 而 Python 是一個 +fast prototyping 的 programming language. 故我們希望能將既有的 +C library 拿到 Python 的環境中測試及整合. 其中最主要也是我們所 +要討論的問題就是: + diff --git a/lib-python/2.7/test/cjkencodings/big5.txt b/lib-python/2.7/test/cjkencodings/big5.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/big5.txt @@ -0,0 +1,9 @@ +�p��b Python ���ϥάJ���� C library? +�@�b��T��ާֳt�o�i������, �}�o�δ��ճn�骺�t�׬O���e������ +���D. ���[�ֶ}�o�δ��ժ��t��, �ڭ̫K�`�Ʊ��Q�Τ@�Ǥw�}�o�n�� +library, �æ��@�� fast prototyping �� programming language �i +�Ѩϥ�. �ثe���\�\�h�h�� library �O�H C �g��, �� Python �O�@�� +fast prototyping �� programming language. �G�ڭ̧Ʊ��N�J���� +C library ���� Python �����Ҥ����դξ�X. �䤤�̥D�n�]�O�ڭ̩� +�n�Q�ת����D�N�O: + diff --git a/lib-python/2.7/test/cjkencodings/big5hkscs-utf8.txt b/lib-python/2.7/test/cjkencodings/big5hkscs-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/big5hkscs-utf8.txt @@ -0,0 +1,2 @@ +𠄌Ě鵮罓洆 +ÊÊ̄ê êê̄ diff --git a/lib-python/2.7/test/cjkencodings/big5hkscs.txt b/lib-python/2.7/test/cjkencodings/big5hkscs.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/big5hkscs.txt @@ -0,0 +1,2 @@ +�E�\�s�ڍ� +�f�b�� ���� diff --git a/lib-python/2.7/test/cjkencodings/cp949-utf8.txt b/lib-python/2.7/test/cjkencodings/cp949-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/cp949-utf8.txt @@ -0,0 +1,9 @@ +똠방각하 펲시콜라 + +㉯㉯납!! 因九月패믤릔궈 ⓡⓖ훀¿¿¿ 긍뒙 ⓔ뎨 ㉯. . +亞영ⓔ능횹 . . . . 서울뤄 뎐학乙 家훀 ! ! !ㅠ.ㅠ +흐흐흐 ㄱㄱㄱ☆ㅠ_ㅠ 어릨 탸콰긐 뎌응 칑九들乙 ㉯드긐 +설릌 家훀 . . . . 굴애쉌 ⓔ궈 ⓡ릘㉱긐 因仁川女中까즼 +와쒀훀 ! ! 亞영ⓔ 家능궈 ☆上관 없능궈능 亞능뒈훀 글애듴 +ⓡ려듀九 싀풔숴훀 어릨 因仁川女中싁⑨들앜!! 
㉯㉯납♡ ⌒⌒* + diff --git a/lib-python/2.7/test/cjkencodings/cp949.txt b/lib-python/2.7/test/cjkencodings/cp949.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/cp949.txt @@ -0,0 +1,9 @@ +�c�氢�� �����ݶ� + +������!! �������В�p�� �ި��R������ ���� �ѵ� ��. . +䬿��Ѵ��� . . . . ����� ������ ʫ�R ! ! !��.�� +������ �������٤�_�� � ����O ���� �h������ ����O +���j ʫ�R . . . . ���֚f �ѱ� �ސt�ƒO ���������� +�;��R ! ! 䬿��� ʫ�ɱ� ��߾�� ���ɱŴ� 䬴ɵ��R �۾֊� +�޷����� ��Ǵ���R � ����������Ĩ���!! �������� �ҡ�* + diff --git a/lib-python/2.7/test/cjkencodings/euc_jisx0213-utf8.txt b/lib-python/2.7/test/cjkencodings/euc_jisx0213-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/euc_jisx0213-utf8.txt @@ -0,0 +1,8 @@ +Python の開発は、1990 年ごろから開始されています。 +開発者の Guido van Rossum は教育用のプログラミング言語「ABC」の開発に参加していましたが、ABC は実用上の目的にはあまり適していませんでした。 +このため、Guido はより実用的なプログラミング言語の開発を開始し、英国 BBS 放送のコメディ番組「モンティ パイソン」のファンである Guido はこの言語を「Python」と名づけました。 +このような背景から生まれた Python の言語設計は、「シンプル」で「習得が容易」という目標に重点が置かれています。 +多くのスクリプト系言語ではユーザの目先の利便性を優先して色々な機能を言語要素として取り入れる場合が多いのですが、Python ではそういった小細工が追加されることはあまりありません。 +言語自体の機能は最小限に押さえ、必要な機能は拡張モジュールとして追加する、というのが Python のポリシーです。 + +ノか゚ ト゚ トキ喝塀 𡚴𪎌 麀齁𩛰 diff --git a/lib-python/2.7/test/cjkencodings/euc_jisx0213.txt b/lib-python/2.7/test/cjkencodings/euc_jisx0213.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/euc_jisx0213.txt @@ -0,0 +1,8 @@ +Python �γ�ȯ�ϡ�1990 ǯ�����鳫�Ϥ���Ƥ��ޤ��� +��ȯ�Ԥ� Guido van Rossum �϶����ѤΥץ���ߥ󥰸����ABC�פγ�ȯ�˻��ä��Ƥ��ޤ�������ABC �ϼ��Ѿ����Ū�ˤϤ��ޤ�Ŭ���Ƥ��ޤ���Ǥ����� +���Τ��ᡢGuido �Ϥ�����Ū�ʥץ���ߥ󥰸���γ�ȯ�򳫻Ϥ����ѹ� BBS �����Υ���ǥ����ȡ֥��ƥ� �ѥ�����פΥե���Ǥ��� Guido �Ϥ��θ�����Python�פ�̾�Ť��ޤ����� +���Τ褦���طʤ������ޤ줿 Python �θ����߷פϡ��֥���ץ�פǡֽ������ưספȤ�����ɸ�˽������֤���Ƥ��ޤ��� +¿���Υ�����ץȷϸ���Ǥϥ桼�����������������ͥ�褷�ƿ����ʵ�ǽ��������ǤȤ��Ƽ��������礬¿���ΤǤ�����Python �ǤϤ������ä����ٹ����ɲä���뤳�ȤϤ��ޤꤢ��ޤ��� +���켫�Τε�ǽ�ϺǾ��¤˲�������ɬ�פʵ�ǽ�ϳ�ĥ�⥸�塼��Ȥ����ɲä��롢�Ȥ����Τ� Python �Υݥꥷ���Ǥ��� + +�Τ� �� �ȥ����� ���� ��ԏ���� diff --git a/lib-python/2.7/test/cjkencodings/euc_jp-utf8.txt b/lib-python/2.7/test/cjkencodings/euc_jp-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/euc_jp-utf8.txt @@ -0,0 +1,7 @@ +Python の開発は、1990 年ごろから開始されています。 +開発者の Guido van Rossum は教育用のプログラミング言語「ABC」の開発に参加していましたが、ABC は実用上の目的にはあまり適していませんでした。 +このため、Guido はより実用的なプログラミング言語の開発を開始し、英国 BBS 放送のコメディ番組「モンティ パイソン」のファンである Guido はこの言語を「Python」と名づけました。 +このような背景から生まれた Python の言語設計は、「シンプル」で「習得が容易」という目標に重点が置かれています。 +多くのスクリプト系言語ではユーザの目先の利便性を優先して色々な機能を言語要素として取り入れる場合が多いのですが、Python ではそういった小細工が追加されることはあまりありません。 +言語自体の機能は最小限に押さえ、必要な機能は拡張モジュールとして追加する、というのが Python のポリシーです。 + diff --git a/lib-python/2.7/test/cjkencodings/euc_jp.txt b/lib-python/2.7/test/cjkencodings/euc_jp.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/euc_jp.txt @@ -0,0 +1,7 @@ +Python �γ�ȯ�ϡ�1990 ǯ�����鳫�Ϥ���Ƥ��ޤ��� +��ȯ�Ԥ� Guido van Rossum �϶����ѤΥץ���ߥ󥰸����ABC�פγ�ȯ�˻��ä��Ƥ��ޤ�������ABC �ϼ��Ѿ����Ū�ˤϤ��ޤ�Ŭ���Ƥ��ޤ���Ǥ����� +���Τ��ᡢGuido �Ϥ�����Ū�ʥץ���ߥ󥰸���γ�ȯ�򳫻Ϥ����ѹ� BBS �����Υ���ǥ����ȡ֥��ƥ� �ѥ�����פΥե���Ǥ��� Guido �Ϥ��θ�����Python�פ�̾�Ť��ޤ����� +���Τ褦���طʤ������ޤ줿 Python �θ����߷פϡ��֥���ץ�פǡֽ������ưספȤ�����ɸ�˽������֤���Ƥ��ޤ��� +¿���Υ�����ץȷϸ���Ǥϥ桼�����������������ͥ�褷�ƿ����ʵ�ǽ��������ǤȤ��Ƽ��������礬¿���ΤǤ�����Python �ǤϤ������ä����ٹ����ɲä���뤳�ȤϤ��ޤꤢ��ޤ��� +���켫�Τε�ǽ�ϺǾ��¤˲�������ɬ�פʵ�ǽ�ϳ�ĥ�⥸�塼��Ȥ����ɲä��롢�Ȥ����Τ� Python �Υݥꥷ���Ǥ��� + diff --git a/lib-python/2.7/test/cjkencodings/euc_kr-utf8.txt 
b/lib-python/2.7/test/cjkencodings/euc_kr-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/euc_kr-utf8.txt @@ -0,0 +1,7 @@ +◎ 파이썬(Python)은 배우기 쉽고, 강력한 프로그래밍 언어입니다. 파이썬은 +효율적인 고수준 데이터 구조와 간단하지만 효율적인 객체지향프로그래밍을 +지원합니다. 파이썬의 우아(優雅)한 문법과 동적 타이핑, 그리고 인터프리팅 +환경은 파이썬을 스크립팅과 여러 분야에서와 대부분의 플랫폼에서의 빠른 +애플리케이션 개발을 할 수 있는 이상적인 언어로 만들어줍니다. + +☆첫가끝: 날아라 쓔쓔쓩~ 닁큼! 뜽금없이 전홥니다. 뷁. 그런거 읎다. diff --git a/lib-python/2.7/test/cjkencodings/euc_kr.txt b/lib-python/2.7/test/cjkencodings/euc_kr.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/euc_kr.txt @@ -0,0 +1,7 @@ +�� ���̽�(Python)�� ���� ����, ������ ���α׷��� ����Դϴ�. ���̽��� +ȿ������ ����� ������ ������ ���������� ȿ������ ��ü�������α׷����� +�����մϴ�. ���̽��� ���(���)�� ������ ���� Ÿ����, �׸��� ���������� +ȯ���� ���̽��� ��ũ���ð� ���� �о߿����� ��κ��� �÷��������� ���� +���ø����̼� ������ �� �� �ִ� �̻����� ���� ������ݴϴ�. + +��ù����: ���ƶ� �Ԥ��ФԤԤ��ФԾ�~ �Ԥ��Ҥ�ŭ! �Ԥ��Ѥ��ݾ��� ���Ԥ��Ȥ��ϴ�. �Ԥ��Τ�. �׷��� �Ԥ��Ѥ���. diff --git a/lib-python/2.7/test/cjkencodings/gb18030-utf8.txt b/lib-python/2.7/test/cjkencodings/gb18030-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/gb18030-utf8.txt @@ -0,0 +1,15 @@ +Python(派森)语言是一种功能强大而完善的通用型计算机程序设计语言, +已经具有十多年的发展历史,成熟且稳定。这种语言具有非常简捷而清晰 +的语法特点,适合完成各种高层任务,几乎可以在所有的操作系统中 +运行。这种语言简单而强大,适合各种人士学习使用。目前,基于这 +种语言的相关技术正在飞速的发展,用户数量急剧扩大,相关的资源非常多。 +如何在 Python 中使用既有的 C library? + 在資訊科技快速發展的今天, 開發及測試軟體的速度是不容忽視的 +課題. 為加快開發及測試的速度, 我們便常希望能利用一些已開發好的 +library, 並有一個 fast prototyping 的 programming language 可 +供使用. 目前有許許多多的 library 是以 C 寫成, 而 Python 是一個 +fast prototyping 的 programming language. 故我們希望能將既有的 +C library 拿到 Python 的環境中測試及整合. 其中最主要也是我們所 +要討論的問題就是: +파이썬은 강력한 기능을 지닌 범용 컴퓨터 프로그래밍 언어다. + diff --git a/lib-python/2.7/test/cjkencodings/gb18030.txt b/lib-python/2.7/test/cjkencodings/gb18030.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/gb18030.txt @@ -0,0 +1,15 @@ +Python����ɭ��������һ�ֹ���ǿ������Ƶ�ͨ���ͼ��������������ԣ� +�Ѿ�����ʮ����ķ�չ��ʷ���������ȶ����������Ծ��зdz���ݶ����� +���﷨�ص㣬�ʺ���ɸ��ָ߲����񣬼������������еIJ���ϵͳ�� +���С��������Լ򵥶�ǿ���ʺϸ�����ʿѧϰʹ�á�Ŀǰ�������� +�����Ե���ؼ������ڷ��ٵķ�չ���û���������������ص���Դ�dz��ࡣ +����� Python ��ʹ�ü��е� C library? +�����YӍ�Ƽ����ٰlչ�Ľ���, �_�l���yԇܛ�w���ٶ��Dz��ݺ�ҕ�� +�n�}. ��ӿ��_�l���yԇ���ٶ�, �҂��㳣ϣ��������һЩ���_�l�õ� +library, �K��һ�� fast prototyping �� programming language �� +��ʹ��. Ŀǰ���S�S���� library ���� C ����, �� Python ��һ�� +fast prototyping �� programming language. ���҂�ϣ���܌����е� +C library �õ� Python �ĭh���Мyԇ������. ��������ҪҲ���҂��� +ҪӑՓ�Ć��}����: +�5�1�3�3�2�1�3�1 �7�6�0�4�6�3 �8�5�8�6�3�5 �3�1�9�5 �0�9�3�0 �4�3�5�7�5�5 �5�5�0�9�8�9�9�3�0�4 �2�9�2�5�9�9. 
+ diff --git a/lib-python/2.7/test/cjkencodings/gb2312-utf8.txt b/lib-python/2.7/test/cjkencodings/gb2312-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/gb2312-utf8.txt @@ -0,0 +1,6 @@ +Python(派森)语言是一种功能强大而完善的通用型计算机程序设计语言, +已经具有十多年的发展历史,成熟且稳定。这种语言具有非常简捷而清晰 +的语法特点,适合完成各种高层任务,几乎可以在所有的操作系统中 +运行。这种语言简单而强大,适合各种人士学习使用。目前,基于这 +种语言的相关技术正在飞速的发展,用户数量急剧扩大,相关的资源非常多。 + diff --git a/lib-python/2.7/test/cjkencodings/gb2312.txt b/lib-python/2.7/test/cjkencodings/gb2312.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/gb2312.txt @@ -0,0 +1,6 @@ +Python����ɭ��������һ�ֹ���ǿ������Ƶ�ͨ���ͼ��������������ԣ� +�Ѿ�����ʮ����ķ�չ��ʷ���������ȶ����������Ծ��зdz���ݶ����� +���﷨�ص㣬�ʺ���ɸ��ָ߲����񣬼������������еIJ���ϵͳ�� +���С��������Լ򵥶�ǿ���ʺϸ�����ʿѧϰʹ�á�Ŀǰ�������� +�����Ե���ؼ������ڷ��ٵķ�չ���û���������������ص���Դ�dz��ࡣ + diff --git a/lib-python/2.7/test/cjkencodings/gbk-utf8.txt b/lib-python/2.7/test/cjkencodings/gbk-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/gbk-utf8.txt @@ -0,0 +1,14 @@ +Python(派森)语言是一种功能强大而完善的通用型计算机程序设计语言, +已经具有十多年的发展历史,成熟且稳定。这种语言具有非常简捷而清晰 +的语法特点,适合完成各种高层任务,几乎可以在所有的操作系统中 +运行。这种语言简单而强大,适合各种人士学习使用。目前,基于这 +种语言的相关技术正在飞速的发展,用户数量急剧扩大,相关的资源非常多。 +如何在 Python 中使用既有的 C library? + 在資訊科技快速發展的今天, 開發及測試軟體的速度是不容忽視的 +課題. 為加快開發及測試的速度, 我們便常希望能利用一些已開發好的 +library, 並有一個 fast prototyping 的 programming language 可 +供使用. 目前有許許多多的 library 是以 C 寫成, 而 Python 是一個 +fast prototyping 的 programming language. 故我們希望能將既有的 +C library 拿到 Python 的環境中測試及整合. 其中最主要也是我們所 +要討論的問題就是: + diff --git a/lib-python/2.7/test/cjkencodings/gbk.txt b/lib-python/2.7/test/cjkencodings/gbk.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/gbk.txt @@ -0,0 +1,14 @@ +Python����ɭ��������һ�ֹ���ǿ������Ƶ�ͨ���ͼ��������������ԣ� +�Ѿ�����ʮ����ķ�չ��ʷ���������ȶ����������Ծ��зdz���ݶ����� +���﷨�ص㣬�ʺ���ɸ��ָ߲����񣬼������������еIJ���ϵͳ�� +���С��������Լ򵥶�ǿ���ʺϸ�����ʿѧϰʹ�á�Ŀǰ�������� +�����Ե���ؼ������ڷ��ٵķ�չ���û���������������ص���Դ�dz��ࡣ +����� Python ��ʹ�ü��е� C library? +�����YӍ�Ƽ����ٰlչ�Ľ���, �_�l���yԇܛ�w���ٶ��Dz��ݺ�ҕ�� +�n�}. ��ӿ��_�l���yԇ���ٶ�, �҂��㳣ϣ��������һЩ���_�l�õ� +library, �K��һ�� fast prototyping �� programming language �� +��ʹ��. Ŀǰ���S�S���� library ���� C ����, �� Python ��һ�� +fast prototyping �� programming language. ���҂�ϣ���܌����е� +C library �õ� Python �ĭh���Мyԇ������. ��������ҪҲ���҂��� +ҪӑՓ�Ć��}����: + diff --git a/lib-python/2.7/test/cjkencodings/hz-utf8.txt b/lib-python/2.7/test/cjkencodings/hz-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/hz-utf8.txt @@ -0,0 +1,2 @@ +This sentence is in ASCII. +The next sentence is in GB.己所不欲,勿施於人。Bye. diff --git a/lib-python/2.7/test/cjkencodings/hz.txt b/lib-python/2.7/test/cjkencodings/hz.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/hz.txt @@ -0,0 +1,2 @@ +This sentence is in ASCII. +The next sentence is in GB.~{<:Ky2;S{#,NpJ)l6HK!#~}Bye. diff --git a/lib-python/2.7/test/cjkencodings/johab-utf8.txt b/lib-python/2.7/test/cjkencodings/johab-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/johab-utf8.txt @@ -0,0 +1,9 @@ +똠방각하 펲시콜라 + +㉯㉯납!! 因九月패믤릔궈 ⓡⓖ훀¿¿¿ 긍뒙 ⓔ뎨 ㉯. . +亞영ⓔ능횹 . . . . 서울뤄 뎐학乙 家훀 ! ! !ㅠ.ㅠ +흐흐흐 ㄱㄱㄱ☆ㅠ_ㅠ 어릨 탸콰긐 뎌응 칑九들乙 ㉯드긐 +설릌 家훀 . . . . 굴애쉌 ⓔ궈 ⓡ릘㉱긐 因仁川女中까즼 +와쒀훀 ! ! 亞영ⓔ 家능궈 ☆上관 없능궈능 亞능뒈훀 글애듴 +ⓡ려듀九 싀풔숴훀 어릨 因仁川女中싁⑨들앜!! 
㉯㉯납♡ ⌒⌒* + diff --git a/lib-python/2.7/test/cjkencodings/johab.txt b/lib-python/2.7/test/cjkencodings/johab.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/johab.txt @@ -0,0 +1,9 @@ +���w�b�a �\��ũ�a + +�����s!! �g��Ú������ �����zٯٯٯ �w�� �ѕ� ��. . +�<�w�ѓw�s . . . . �ᶉ�� �e�b�� �;�z ! ! !�A.�A +�a�a�a �A�A�A�i�A_�A �៚ ȡ���z �a�w ×✗i�� ���a�z +��z �;�z . . . . ������ �ъ� �ޟ��‹z �g�b�I����a�� +�����z ! ! �<�w�� �;�w�� �i꾉� ���w���w �<�w���z �i���z +�ޝa�A� ��Ρ���z �៚ �g�b�I���鯂��i�z!! �����sٽ �b�b* + diff --git a/lib-python/2.7/test/cjkencodings/shift_jis-utf8.txt b/lib-python/2.7/test/cjkencodings/shift_jis-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/shift_jis-utf8.txt @@ -0,0 +1,7 @@ +Python の開発は、1990 年ごろから開始されています。 +開発者の Guido van Rossum は教育用のプログラミング言語「ABC」の開発に参加していましたが、ABC は実用上の目的にはあまり適していませんでした。 +このため、Guido はより実用的なプログラミング言語の開発を開始し、英国 BBS 放送のコメディ番組「モンティ パイソン」のファンである Guido はこの言語を「Python」と名づけました。 +このような背景から生まれた Python の言語設計は、「シンプル」で「習得が容易」という目標に重点が置かれています。 +多くのスクリプト系言語ではユーザの目先の利便性を優先して色々な機能を言語要素として取り入れる場合が多いのですが、Python ではそういった小細工が追加されることはあまりありません。 +言語自体の機能は最小限に押さえ、必要な機能は拡張モジュールとして追加する、というのが Python のポリシーです。 + diff --git a/lib-python/2.7/test/cjkencodings/shift_jis.txt b/lib-python/2.7/test/cjkencodings/shift_jis.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/shift_jis.txt @@ -0,0 +1,7 @@ +Python �̊J���́A1990 �N���납��J�n����Ă��܂��B +�J���҂� Guido van Rossum �͋���p�̃v���O���~���O����uABC�v�̊J���ɎQ�����Ă��܂������AABC �͎��p��̖ړI�ɂ͂��܂�K���Ă��܂���ł����B +���̂��߁AGuido �͂����p�I�ȃv���O���~���O����̊J�����J�n���A�p�� BBS �����̃R���f�B�ԑg�u�����e�B �p�C�\���v�̃t�@���ł��� Guido �͂��̌�����uPython�v�Ɩ��Â��܂����B +���̂悤�Ȕw�i���琶�܂ꂽ Python �̌���݌v�́A�u�V���v���v�Łu�K�����e�Ձv�Ƃ����ڕW�ɏd�_���u����Ă��܂��B +�����̃X�N���v�g�n����ł̓��[�U�̖ڐ�̗��֐���D�悵�ĐF�X�ȋ@�\������v�f�Ƃ��Ď������ꍇ�������̂ł����APython �ł͂������������׍H���lj�����邱�Ƃ͂��܂肠��܂���B +���ꎩ�̂̋@�\�͍ŏ����ɉ������A�K�v�ȋ@�\�͊g�����W���[���Ƃ��Ēlj�����A�Ƃ����̂� Python �̃|���V�[�ł��B + diff --git a/lib-python/2.7/test/cjkencodings/shift_jisx0213-utf8.txt b/lib-python/2.7/test/cjkencodings/shift_jisx0213-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/shift_jisx0213-utf8.txt @@ -0,0 +1,8 @@ +Python の開発は、1990 年ごろから開始されています。 +開発者の Guido van Rossum は教育用のプログラミング言語「ABC」の開発に参加していましたが、ABC は実用上の目的にはあまり適していませんでした。 +このため、Guido はより実用的なプログラミング言語の開発を開始し、英国 BBS 放送のコメディ番組「モンティ パイソン」のファンである Guido はこの言語を「Python」と名づけました。 +このような背景から生まれた Python の言語設計は、「シンプル」で「習得が容易」という目標に重点が置かれています。 +多くのスクリプト系言語ではユーザの目先の利便性を優先して色々な機能を言語要素として取り入れる場合が多いのですが、Python ではそういった小細工が追加されることはあまりありません。 +言語自体の機能は最小限に押さえ、必要な機能は拡張モジュールとして追加する、というのが Python のポリシーです。 + +ノか゚ ト゚ トキ喝塀 𡚴𪎌 麀齁𩛰 diff --git a/lib-python/2.7/test/cjkencodings/shift_jisx0213.txt b/lib-python/2.7/test/cjkencodings/shift_jisx0213.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/shift_jisx0213.txt @@ -0,0 +1,8 @@ +Python �̊J���́A1990 �N���납��J�n����Ă��܂��B +�J���҂� Guido van Rossum �͋���p�̃v���O���~���O����uABC�v�̊J���ɎQ�����Ă��܂������AABC �͎��p��̖ړI�ɂ͂��܂�K���Ă��܂���ł����B +���̂��߁AGuido �͂����p�I�ȃv���O���~���O����̊J�����J�n���A�p�� BBS �����̃R���f�B�ԑg�u�����e�B �p�C�\���v�̃t�@���ł��� Guido �͂��̌�����uPython�v�Ɩ��Â��܂����B +���̂悤�Ȕw�i���琶�܂ꂽ Python �̌���݌v�́A�u�V���v���v�Łu�K�����e�Ձv�Ƃ����ڕW�ɏd�_���u����Ă��܂��B +�����̃X�N���v�g�n����ł̓��[�U�̖ڐ�̗��֐���D�悵�ĐF�X�ȋ@�\������v�f�Ƃ��Ď������ꍇ�������̂ł����APython �ł͂������������׍H���lj�����邱�Ƃ͂��܂肠��܂���B 
+���ꎩ�̂̋@�\�͍ŏ����ɉ������A�K�v�ȋ@�\�͊g�����W���[���Ƃ��Ēlj�����A�Ƃ����̂� Python �̃|���V�[�ł��B + +�m�� �� �g�L�K�y ���� ������ diff --git a/lib-python/2.7/test/cjkencodings_test.py b/lib-python/2.7/test/cjkencodings_test.py deleted file mode 100644 --- a/lib-python/2.7/test/cjkencodings_test.py +++ /dev/null @@ -1,1019 +0,0 @@ -teststring = { -'big5': ( -"\xa6\x70\xa6\xf3\xa6\x62\x20\x50\x79\x74\x68\x6f\x6e\x20\xa4\xa4" -"\xa8\xcf\xa5\xce\xac\x4a\xa6\xb3\xaa\xba\x20\x43\x20\x6c\x69\x62" -"\x72\x61\x72\x79\x3f\x0a\xa1\x40\xa6\x62\xb8\xea\xb0\x54\xac\xec" -"\xa7\xde\xa7\xd6\xb3\x74\xb5\x6f\xae\x69\xaa\xba\xa4\xb5\xa4\xd1" -"\x2c\x20\xb6\x7d\xb5\x6f\xa4\xce\xb4\xfa\xb8\xd5\xb3\x6e\xc5\xe9" -"\xaa\xba\xb3\x74\xab\xd7\xac\x4f\xa4\xa3\xae\x65\xa9\xbf\xb5\xf8" -"\xaa\xba\x0a\xbd\xd2\xc3\x44\x2e\x20\xac\xb0\xa5\x5b\xa7\xd6\xb6" -"\x7d\xb5\x6f\xa4\xce\xb4\xfa\xb8\xd5\xaa\xba\xb3\x74\xab\xd7\x2c" -"\x20\xa7\xda\xad\xcc\xab\x4b\xb1\x60\xa7\xc6\xb1\xe6\xaf\xe0\xa7" -"\x51\xa5\xce\xa4\x40\xa8\xc7\xa4\x77\xb6\x7d\xb5\x6f\xa6\x6e\xaa" -"\xba\x0a\x6c\x69\x62\x72\x61\x72\x79\x2c\x20\xa8\xc3\xa6\xb3\xa4" -"\x40\xad\xd3\x20\x66\x61\x73\x74\x20\x70\x72\x6f\x74\x6f\x74\x79" -"\x70\x69\x6e\x67\x20\xaa\xba\x20\x70\x72\x6f\x67\x72\x61\x6d\x6d" -"\x69\x6e\x67\x20\x6c\x61\x6e\x67\x75\x61\x67\x65\x20\xa5\x69\x0a" -"\xa8\xd1\xa8\xcf\xa5\xce\x2e\x20\xa5\xd8\xab\x65\xa6\xb3\xb3\x5c" -"\xb3\x5c\xa6\x68\xa6\x68\xaa\xba\x20\x6c\x69\x62\x72\x61\x72\x79" -"\x20\xac\x4f\xa5\x48\x20\x43\x20\xbc\x67\xa6\xa8\x2c\x20\xa6\xd3" -"\x20\x50\x79\x74\x68\x6f\x6e\x20\xac\x4f\xa4\x40\xad\xd3\x0a\x66" -"\x61\x73\x74\x20\x70\x72\x6f\x74\x6f\x74\x79\x70\x69\x6e\x67\x20" -"\xaa\xba\x20\x70\x72\x6f\x67\x72\x61\x6d\x6d\x69\x6e\x67\x20\x6c" -"\x61\x6e\x67\x75\x61\x67\x65\x2e\x20\xac\x47\xa7\xda\xad\xcc\xa7" -"\xc6\xb1\xe6\xaf\xe0\xb1\x4e\xac\x4a\xa6\xb3\xaa\xba\x0a\x43\x20" -"\x6c\x69\x62\x72\x61\x72\x79\x20\xae\xb3\xa8\xec\x20\x50\x79\x74" -"\x68\x6f\x6e\x20\xaa\xba\xc0\xf4\xb9\xd2\xa4\xa4\xb4\xfa\xb8\xd5" -"\xa4\xce\xbe\xe3\xa6\x58\x2e\x20\xa8\xe4\xa4\xa4\xb3\xcc\xa5\x44" -"\xad\x6e\xa4\x5d\xac\x4f\xa7\xda\xad\xcc\xa9\xd2\x0a\xad\x6e\xb0" -"\x51\xbd\xd7\xaa\xba\xb0\xdd\xc3\x44\xb4\x4e\xac\x4f\x3a\x0a\x0a", -"\xe5\xa6\x82\xe4\xbd\x95\xe5\x9c\xa8\x20\x50\x79\x74\x68\x6f\x6e" -"\x20\xe4\xb8\xad\xe4\xbd\xbf\xe7\x94\xa8\xe6\x97\xa2\xe6\x9c\x89" -"\xe7\x9a\x84\x20\x43\x20\x6c\x69\x62\x72\x61\x72\x79\x3f\x0a\xe3" -"\x80\x80\xe5\x9c\xa8\xe8\xb3\x87\xe8\xa8\x8a\xe7\xa7\x91\xe6\x8a" -"\x80\xe5\xbf\xab\xe9\x80\x9f\xe7\x99\xbc\xe5\xb1\x95\xe7\x9a\x84" -"\xe4\xbb\x8a\xe5\xa4\xa9\x2c\x20\xe9\x96\x8b\xe7\x99\xbc\xe5\x8f" -"\x8a\xe6\xb8\xac\xe8\xa9\xa6\xe8\xbb\x9f\xe9\xab\x94\xe7\x9a\x84" -"\xe9\x80\x9f\xe5\xba\xa6\xe6\x98\xaf\xe4\xb8\x8d\xe5\xae\xb9\xe5" -"\xbf\xbd\xe8\xa6\x96\xe7\x9a\x84\x0a\xe8\xaa\xb2\xe9\xa1\x8c\x2e" -"\x20\xe7\x82\xba\xe5\x8a\xa0\xe5\xbf\xab\xe9\x96\x8b\xe7\x99\xbc" -"\xe5\x8f\x8a\xe6\xb8\xac\xe8\xa9\xa6\xe7\x9a\x84\xe9\x80\x9f\xe5" -"\xba\xa6\x2c\x20\xe6\x88\x91\xe5\x80\x91\xe4\xbe\xbf\xe5\xb8\xb8" -"\xe5\xb8\x8c\xe6\x9c\x9b\xe8\x83\xbd\xe5\x88\xa9\xe7\x94\xa8\xe4" -"\xb8\x80\xe4\xba\x9b\xe5\xb7\xb2\xe9\x96\x8b\xe7\x99\xbc\xe5\xa5" -"\xbd\xe7\x9a\x84\x0a\x6c\x69\x62\x72\x61\x72\x79\x2c\x20\xe4\xb8" -"\xa6\xe6\x9c\x89\xe4\xb8\x80\xe5\x80\x8b\x20\x66\x61\x73\x74\x20" -"\x70\x72\x6f\x74\x6f\x74\x79\x70\x69\x6e\x67\x20\xe7\x9a\x84\x20" -"\x70\x72\x6f\x67\x72\x61\x6d\x6d\x69\x6e\x67\x20\x6c\x61\x6e\x67" -"\x75\x61\x67\x65\x20\xe5\x8f\xaf\x0a\xe4\xbe\x9b\xe4\xbd\xbf\xe7" -"\x94\xa8\x2e\x20\xe7\x9b\xae\xe5\x89\x8d\xe6\x9c\x89\xe8\xa8\xb1" 
-"\xe8\xa8\xb1\xe5\xa4\x9a\xe5\xa4\x9a\xe7\x9a\x84\x20\x6c\x69\x62" -"\x72\x61\x72\x79\x20\xe6\x98\xaf\xe4\xbb\xa5\x20\x43\x20\xe5\xaf" -"\xab\xe6\x88\x90\x2c\x20\xe8\x80\x8c\x20\x50\x79\x74\x68\x6f\x6e" -"\x20\xe6\x98\xaf\xe4\xb8\x80\xe5\x80\x8b\x0a\x66\x61\x73\x74\x20" -"\x70\x72\x6f\x74\x6f\x74\x79\x70\x69\x6e\x67\x20\xe7\x9a\x84\x20" -"\x70\x72\x6f\x67\x72\x61\x6d\x6d\x69\x6e\x67\x20\x6c\x61\x6e\x67" -"\x75\x61\x67\x65\x2e\x20\xe6\x95\x85\xe6\x88\x91\xe5\x80\x91\xe5" -"\xb8\x8c\xe6\x9c\x9b\xe8\x83\xbd\xe5\xb0\x87\xe6\x97\xa2\xe6\x9c" -"\x89\xe7\x9a\x84\x0a\x43\x20\x6c\x69\x62\x72\x61\x72\x79\x20\xe6" -"\x8b\xbf\xe5\x88\xb0\x20\x50\x79\x74\x68\x6f\x6e\x20\xe7\x9a\x84" -"\xe7\x92\xb0\xe5\xa2\x83\xe4\xb8\xad\xe6\xb8\xac\xe8\xa9\xa6\xe5" -"\x8f\x8a\xe6\x95\xb4\xe5\x90\x88\x2e\x20\xe5\x85\xb6\xe4\xb8\xad" -"\xe6\x9c\x80\xe4\xb8\xbb\xe8\xa6\x81\xe4\xb9\x9f\xe6\x98\xaf\xe6" -"\x88\x91\xe5\x80\x91\xe6\x89\x80\x0a\xe8\xa6\x81\xe8\xa8\x8e\xe8" -"\xab\x96\xe7\x9a\x84\xe5\x95\x8f\xe9\xa1\x8c\xe5\xb0\xb1\xe6\x98" -"\xaf\x3a\x0a\x0a"), -'big5hkscs': ( -"\x88\x45\x88\x5c\x8a\x73\x8b\xda\x8d\xd8\x0a\x88\x66\x88\x62\x88" -"\xa7\x20\x88\xa7\x88\xa3\x0a", -"\xf0\xa0\x84\x8c\xc4\x9a\xe9\xb5\xae\xe7\xbd\x93\xe6\xb4\x86\x0a" -"\xc3\x8a\xc3\x8a\xcc\x84\xc3\xaa\x20\xc3\xaa\xc3\xaa\xcc\x84\x0a"), -'cp949': ( -"\x8c\x63\xb9\xe6\xb0\xa2\xc7\xcf\x20\xbc\x84\xbd\xc3\xc4\xdd\xb6" -"\xf3\x0a\x0a\xa8\xc0\xa8\xc0\xb3\xb3\x21\x21\x20\xec\xd7\xce\xfa" -"\xea\xc5\xc6\xd0\x92\xe6\x90\x70\xb1\xc5\x20\xa8\xde\xa8\xd3\xc4" -"\x52\xa2\xaf\xa2\xaf\xa2\xaf\x20\xb1\xe0\x8a\x96\x20\xa8\xd1\xb5" -"\xb3\x20\xa8\xc0\x2e\x20\x2e\x0a\xe4\xac\xbf\xb5\xa8\xd1\xb4\xc9" -"\xc8\xc2\x20\x2e\x20\x2e\x20\x2e\x20\x2e\x20\xbc\xad\xbf\xef\xb7" -"\xef\x20\xb5\xaf\xc7\xd0\xeb\xe0\x20\xca\xab\xc4\x52\x20\x21\x20" -"\x21\x20\x21\xa4\xd0\x2e\xa4\xd0\x0a\xc8\xe5\xc8\xe5\xc8\xe5\x20" -"\xa4\xa1\xa4\xa1\xa4\xa1\xa1\xd9\xa4\xd0\x5f\xa4\xd0\x20\xbe\xee" -"\x90\x8a\x20\xc5\xcb\xc4\xe2\x83\x4f\x20\xb5\xae\xc0\xc0\x20\xaf" -"\x68\xce\xfa\xb5\xe9\xeb\xe0\x20\xa8\xc0\xb5\xe5\x83\x4f\x0a\xbc" -"\xb3\x90\x6a\x20\xca\xab\xc4\x52\x20\x2e\x20\x2e\x20\x2e\x20\x2e" -"\x20\xb1\xbc\xbe\xd6\x9a\x66\x20\xa8\xd1\xb1\xc5\x20\xa8\xde\x90" -"\x74\xa8\xc2\x83\x4f\x20\xec\xd7\xec\xd2\xf4\xb9\xe5\xfc\xf1\xe9" -"\xb1\xee\xa3\x8e\x0a\xbf\xcd\xbe\xac\xc4\x52\x20\x21\x20\x21\x20" -"\xe4\xac\xbf\xb5\xa8\xd1\x20\xca\xab\xb4\xc9\xb1\xc5\x20\xa1\xd9" -"\xdf\xbe\xb0\xfc\x20\xbe\xf8\xb4\xc9\xb1\xc5\xb4\xc9\x20\xe4\xac" -"\xb4\xc9\xb5\xd8\xc4\x52\x20\xb1\xdb\xbe\xd6\x8a\xdb\x0a\xa8\xde" -"\xb7\xc1\xb5\xe0\xce\xfa\x20\x9a\xc3\xc7\xb4\xbd\xa4\xc4\x52\x20" -"\xbe\xee\x90\x8a\x20\xec\xd7\xec\xd2\xf4\xb9\xe5\xfc\xf1\xe9\x9a" -"\xc4\xa8\xef\xb5\xe9\x9d\xda\x21\x21\x20\xa8\xc0\xa8\xc0\xb3\xb3" -"\xa2\xbd\x20\xa1\xd2\xa1\xd2\x2a\x0a\x0a", -"\xeb\x98\xa0\xeb\xb0\xa9\xea\xb0\x81\xed\x95\x98\x20\xed\x8e\xb2" -"\xec\x8b\x9c\xec\xbd\x9c\xeb\x9d\xbc\x0a\x0a\xe3\x89\xaf\xe3\x89" -"\xaf\xeb\x82\xa9\x21\x21\x20\xe5\x9b\xa0\xe4\xb9\x9d\xe6\x9c\x88" -"\xed\x8c\xa8\xeb\xaf\xa4\xeb\xa6\x94\xea\xb6\x88\x20\xe2\x93\xa1" -"\xe2\x93\x96\xed\x9b\x80\xc2\xbf\xc2\xbf\xc2\xbf\x20\xea\xb8\x8d" -"\xeb\x92\x99\x20\xe2\x93\x94\xeb\x8e\xa8\x20\xe3\x89\xaf\x2e\x20" -"\x2e\x0a\xe4\xba\x9e\xec\x98\x81\xe2\x93\x94\xeb\x8a\xa5\xed\x9a" -"\xb9\x20\x2e\x20\x2e\x20\x2e\x20\x2e\x20\xec\x84\x9c\xec\x9a\xb8" -"\xeb\xa4\x84\x20\xeb\x8e\x90\xed\x95\x99\xe4\xb9\x99\x20\xe5\xae" -"\xb6\xed\x9b\x80\x20\x21\x20\x21\x20\x21\xe3\x85\xa0\x2e\xe3\x85" -"\xa0\x0a\xed\x9d\x90\xed\x9d\x90\xed\x9d\x90\x20\xe3\x84\xb1\xe3" 
-"\x84\xb1\xe3\x84\xb1\xe2\x98\x86\xe3\x85\xa0\x5f\xe3\x85\xa0\x20" -"\xec\x96\xb4\xeb\xa6\xa8\x20\xed\x83\xb8\xec\xbd\xb0\xea\xb8\x90" -"\x20\xeb\x8e\x8c\xec\x9d\x91\x20\xec\xb9\x91\xe4\xb9\x9d\xeb\x93" -"\xa4\xe4\xb9\x99\x20\xe3\x89\xaf\xeb\x93\x9c\xea\xb8\x90\x0a\xec" -"\x84\xa4\xeb\xa6\x8c\x20\xe5\xae\xb6\xed\x9b\x80\x20\x2e\x20\x2e" -"\x20\x2e\x20\x2e\x20\xea\xb5\xb4\xec\x95\xa0\xec\x89\x8c\x20\xe2" -"\x93\x94\xea\xb6\x88\x20\xe2\x93\xa1\xeb\xa6\x98\xe3\x89\xb1\xea" -"\xb8\x90\x20\xe5\x9b\xa0\xe4\xbb\x81\xe5\xb7\x9d\xef\xa6\x81\xe4" -"\xb8\xad\xea\xb9\x8c\xec\xa6\xbc\x0a\xec\x99\x80\xec\x92\x80\xed" -"\x9b\x80\x20\x21\x20\x21\x20\xe4\xba\x9e\xec\x98\x81\xe2\x93\x94" -"\x20\xe5\xae\xb6\xeb\x8a\xa5\xea\xb6\x88\x20\xe2\x98\x86\xe4\xb8" -"\x8a\xea\xb4\x80\x20\xec\x97\x86\xeb\x8a\xa5\xea\xb6\x88\xeb\x8a" -"\xa5\x20\xe4\xba\x9e\xeb\x8a\xa5\xeb\x92\x88\xed\x9b\x80\x20\xea" -"\xb8\x80\xec\x95\xa0\xeb\x93\xb4\x0a\xe2\x93\xa1\xeb\xa0\xa4\xeb" -"\x93\x80\xe4\xb9\x9d\x20\xec\x8b\x80\xed\x92\x94\xec\x88\xb4\xed" -"\x9b\x80\x20\xec\x96\xb4\xeb\xa6\xa8\x20\xe5\x9b\xa0\xe4\xbb\x81" -"\xe5\xb7\x9d\xef\xa6\x81\xe4\xb8\xad\xec\x8b\x81\xe2\x91\xa8\xeb" -"\x93\xa4\xec\x95\x9c\x21\x21\x20\xe3\x89\xaf\xe3\x89\xaf\xeb\x82" -"\xa9\xe2\x99\xa1\x20\xe2\x8c\x92\xe2\x8c\x92\x2a\x0a\x0a"), -'euc_jisx0213': ( -"\x50\x79\x74\x68\x6f\x6e\x20\xa4\xce\xb3\xab\xc8\xaf\xa4\xcf\xa1" -"\xa2\x31\x39\x39\x30\x20\xc7\xaf\xa4\xb4\xa4\xed\xa4\xab\xa4\xe9" -"\xb3\xab\xbb\xcf\xa4\xb5\xa4\xec\xa4\xc6\xa4\xa4\xa4\xde\xa4\xb9" -"\xa1\xa3\x0a\xb3\xab\xc8\xaf\xbc\xd4\xa4\xce\x20\x47\x75\x69\x64" -"\x6f\x20\x76\x61\x6e\x20\x52\x6f\x73\x73\x75\x6d\x20\xa4\xcf\xb6" -"\xb5\xb0\xe9\xcd\xd1\xa4\xce\xa5\xd7\xa5\xed\xa5\xb0\xa5\xe9\xa5" -"\xdf\xa5\xf3\xa5\xb0\xb8\xc0\xb8\xec\xa1\xd6\x41\x42\x43\xa1\xd7" -"\xa4\xce\xb3\xab\xc8\xaf\xa4\xcb\xbb\xb2\xb2\xc3\xa4\xb7\xa4\xc6" -"\xa4\xa4\xa4\xde\xa4\xb7\xa4\xbf\xa4\xac\xa1\xa2\x41\x42\x43\x20" -"\xa4\xcf\xbc\xc2\xcd\xd1\xbe\xe5\xa4\xce\xcc\xdc\xc5\xaa\xa4\xcb" -"\xa4\xcf\xa4\xa2\xa4\xde\xa4\xea\xc5\xac\xa4\xb7\xa4\xc6\xa4\xa4" -"\xa4\xde\xa4\xbb\xa4\xf3\xa4\xc7\xa4\xb7\xa4\xbf\xa1\xa3\x0a\xa4" -"\xb3\xa4\xce\xa4\xbf\xa4\xe1\xa1\xa2\x47\x75\x69\x64\x6f\x20\xa4" -"\xcf\xa4\xe8\xa4\xea\xbc\xc2\xcd\xd1\xc5\xaa\xa4\xca\xa5\xd7\xa5" -"\xed\xa5\xb0\xa5\xe9\xa5\xdf\xa5\xf3\xa5\xb0\xb8\xc0\xb8\xec\xa4" -"\xce\xb3\xab\xc8\xaf\xa4\xf2\xb3\xab\xbb\xcf\xa4\xb7\xa1\xa2\xb1" -"\xd1\xb9\xf1\x20\x42\x42\x53\x20\xca\xfc\xc1\xf7\xa4\xce\xa5\xb3" -"\xa5\xe1\xa5\xc7\xa5\xa3\xc8\xd6\xc1\xc8\xa1\xd6\xa5\xe2\xa5\xf3" -"\xa5\xc6\xa5\xa3\x20\xa5\xd1\xa5\xa4\xa5\xbd\xa5\xf3\xa1\xd7\xa4" -"\xce\xa5\xd5\xa5\xa1\xa5\xf3\xa4\xc7\xa4\xa2\xa4\xeb\x20\x47\x75" -"\x69\x64\x6f\x20\xa4\xcf\xa4\xb3\xa4\xce\xb8\xc0\xb8\xec\xa4\xf2" -"\xa1\xd6\x50\x79\x74\x68\x6f\x6e\xa1\xd7\xa4\xc8\xcc\xbe\xa4\xc5" -"\xa4\xb1\xa4\xde\xa4\xb7\xa4\xbf\xa1\xa3\x0a\xa4\xb3\xa4\xce\xa4" -"\xe8\xa4\xa6\xa4\xca\xc7\xd8\xb7\xca\xa4\xab\xa4\xe9\xc0\xb8\xa4" -"\xde\xa4\xec\xa4\xbf\x20\x50\x79\x74\x68\x6f\x6e\x20\xa4\xce\xb8" -"\xc0\xb8\xec\xc0\xdf\xb7\xd7\xa4\xcf\xa1\xa2\xa1\xd6\xa5\xb7\xa5" -"\xf3\xa5\xd7\xa5\xeb\xa1\xd7\xa4\xc7\xa1\xd6\xbd\xac\xc6\xc0\xa4" -"\xac\xcd\xc6\xb0\xd7\xa1\xd7\xa4\xc8\xa4\xa4\xa4\xa6\xcc\xdc\xc9" -"\xb8\xa4\xcb\xbd\xc5\xc5\xc0\xa4\xac\xc3\xd6\xa4\xab\xa4\xec\xa4" -"\xc6\xa4\xa4\xa4\xde\xa4\xb9\xa1\xa3\x0a\xc2\xbf\xa4\xaf\xa4\xce" -"\xa5\xb9\xa5\xaf\xa5\xea\xa5\xd7\xa5\xc8\xb7\xcf\xb8\xc0\xb8\xec" -"\xa4\xc7\xa4\xcf\xa5\xe6\xa1\xbc\xa5\xb6\xa4\xce\xcc\xdc\xc0\xe8" -"\xa4\xce\xcd\xf8\xca\xd8\xc0\xad\xa4\xf2\xcd\xa5\xc0\xe8\xa4\xb7" 
-"\xa4\xc6\xbf\xa7\xa1\xb9\xa4\xca\xb5\xa1\xc7\xbd\xa4\xf2\xb8\xc0" -"\xb8\xec\xcd\xd7\xc1\xc7\xa4\xc8\xa4\xb7\xa4\xc6\xbc\xe8\xa4\xea" -"\xc6\xfe\xa4\xec\xa4\xeb\xbe\xec\xb9\xe7\xa4\xac\xc2\xbf\xa4\xa4" -"\xa4\xce\xa4\xc7\xa4\xb9\xa4\xac\xa1\xa2\x50\x79\x74\x68\x6f\x6e" -"\x20\xa4\xc7\xa4\xcf\xa4\xbd\xa4\xa6\xa4\xa4\xa4\xc3\xa4\xbf\xbe" -"\xae\xba\xd9\xb9\xa9\xa4\xac\xc4\xc9\xb2\xc3\xa4\xb5\xa4\xec\xa4" -"\xeb\xa4\xb3\xa4\xc8\xa4\xcf\xa4\xa2\xa4\xde\xa4\xea\xa4\xa2\xa4" -"\xea\xa4\xde\xa4\xbb\xa4\xf3\xa1\xa3\x0a\xb8\xc0\xb8\xec\xbc\xab" -"\xc2\xce\xa4\xce\xb5\xa1\xc7\xbd\xa4\xcf\xba\xc7\xbe\xae\xb8\xc2" -"\xa4\xcb\xb2\xa1\xa4\xb5\xa4\xa8\xa1\xa2\xc9\xac\xcd\xd7\xa4\xca" -"\xb5\xa1\xc7\xbd\xa4\xcf\xb3\xc8\xc4\xa5\xa5\xe2\xa5\xb8\xa5\xe5" -"\xa1\xbc\xa5\xeb\xa4\xc8\xa4\xb7\xa4\xc6\xc4\xc9\xb2\xc3\xa4\xb9" -"\xa4\xeb\xa1\xa2\xa4\xc8\xa4\xa4\xa4\xa6\xa4\xce\xa4\xac\x20\x50" -"\x79\x74\x68\x6f\x6e\x20\xa4\xce\xa5\xdd\xa5\xea\xa5\xb7\xa1\xbc" -"\xa4\xc7\xa4\xb9\xa1\xa3\x0a\x0a\xa5\xce\xa4\xf7\x20\xa5\xfe\x20" -"\xa5\xc8\xa5\xad\xaf\xac\xaf\xda\x20\xcf\xe3\x8f\xfe\xd8\x20\x8f" -"\xfe\xd4\x8f\xfe\xe8\x8f\xfc\xd6\x0a", -"\x50\x79\x74\x68\x6f\x6e\x20\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba" -"\xe3\x81\xaf\xe3\x80\x81\x31\x39\x39\x30\x20\xe5\xb9\xb4\xe3\x81" -"\x94\xe3\x82\x8d\xe3\x81\x8b\xe3\x82\x89\xe9\x96\x8b\xe5\xa7\x8b" -"\xe3\x81\x95\xe3\x82\x8c\xe3\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3" -"\x81\x99\xe3\x80\x82\x0a\xe9\x96\x8b\xe7\x99\xba\xe8\x80\x85\xe3" -"\x81\xae\x20\x47\x75\x69\x64\x6f\x20\x76\x61\x6e\x20\x52\x6f\x73" -"\x73\x75\x6d\x20\xe3\x81\xaf\xe6\x95\x99\xe8\x82\xb2\xe7\x94\xa8" -"\xe3\x81\xae\xe3\x83\x97\xe3\x83\xad\xe3\x82\xb0\xe3\x83\xa9\xe3" -"\x83\x9f\xe3\x83\xb3\xe3\x82\xb0\xe8\xa8\x80\xe8\xaa\x9e\xe3\x80" -"\x8c\x41\x42\x43\xe3\x80\x8d\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba" -"\xe3\x81\xab\xe5\x8f\x82\xe5\x8a\xa0\xe3\x81\x97\xe3\x81\xa6\xe3" -"\x81\x84\xe3\x81\xbe\xe3\x81\x97\xe3\x81\x9f\xe3\x81\x8c\xe3\x80" -"\x81\x41\x42\x43\x20\xe3\x81\xaf\xe5\xae\x9f\xe7\x94\xa8\xe4\xb8" -"\x8a\xe3\x81\xae\xe7\x9b\xae\xe7\x9a\x84\xe3\x81\xab\xe3\x81\xaf" -"\xe3\x81\x82\xe3\x81\xbe\xe3\x82\x8a\xe9\x81\xa9\xe3\x81\x97\xe3" -"\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3\x81\x9b\xe3\x82\x93\xe3\x81" -"\xa7\xe3\x81\x97\xe3\x81\x9f\xe3\x80\x82\x0a\xe3\x81\x93\xe3\x81" -"\xae\xe3\x81\x9f\xe3\x82\x81\xe3\x80\x81\x47\x75\x69\x64\x6f\x20" -"\xe3\x81\xaf\xe3\x82\x88\xe3\x82\x8a\xe5\xae\x9f\xe7\x94\xa8\xe7" -"\x9a\x84\xe3\x81\xaa\xe3\x83\x97\xe3\x83\xad\xe3\x82\xb0\xe3\x83" -"\xa9\xe3\x83\x9f\xe3\x83\xb3\xe3\x82\xb0\xe8\xa8\x80\xe8\xaa\x9e" -"\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba\xe3\x82\x92\xe9\x96\x8b\xe5" -"\xa7\x8b\xe3\x81\x97\xe3\x80\x81\xe8\x8b\xb1\xe5\x9b\xbd\x20\x42" -"\x42\x53\x20\xe6\x94\xbe\xe9\x80\x81\xe3\x81\xae\xe3\x82\xb3\xe3" -"\x83\xa1\xe3\x83\x87\xe3\x82\xa3\xe7\x95\xaa\xe7\xb5\x84\xe3\x80" -"\x8c\xe3\x83\xa2\xe3\x83\xb3\xe3\x83\x86\xe3\x82\xa3\x20\xe3\x83" -"\x91\xe3\x82\xa4\xe3\x82\xbd\xe3\x83\xb3\xe3\x80\x8d\xe3\x81\xae" -"\xe3\x83\x95\xe3\x82\xa1\xe3\x83\xb3\xe3\x81\xa7\xe3\x81\x82\xe3" -"\x82\x8b\x20\x47\x75\x69\x64\x6f\x20\xe3\x81\xaf\xe3\x81\x93\xe3" -"\x81\xae\xe8\xa8\x80\xe8\xaa\x9e\xe3\x82\x92\xe3\x80\x8c\x50\x79" -"\x74\x68\x6f\x6e\xe3\x80\x8d\xe3\x81\xa8\xe5\x90\x8d\xe3\x81\xa5" -"\xe3\x81\x91\xe3\x81\xbe\xe3\x81\x97\xe3\x81\x9f\xe3\x80\x82\x0a" -"\xe3\x81\x93\xe3\x81\xae\xe3\x82\x88\xe3\x81\x86\xe3\x81\xaa\xe8" -"\x83\x8c\xe6\x99\xaf\xe3\x81\x8b\xe3\x82\x89\xe7\x94\x9f\xe3\x81" -"\xbe\xe3\x82\x8c\xe3\x81\x9f\x20\x50\x79\x74\x68\x6f\x6e\x20\xe3" 
-"\x81\xae\xe8\xa8\x80\xe8\xaa\x9e\xe8\xa8\xad\xe8\xa8\x88\xe3\x81" -"\xaf\xe3\x80\x81\xe3\x80\x8c\xe3\x82\xb7\xe3\x83\xb3\xe3\x83\x97" -"\xe3\x83\xab\xe3\x80\x8d\xe3\x81\xa7\xe3\x80\x8c\xe7\xbf\x92\xe5" -"\xbe\x97\xe3\x81\x8c\xe5\xae\xb9\xe6\x98\x93\xe3\x80\x8d\xe3\x81" -"\xa8\xe3\x81\x84\xe3\x81\x86\xe7\x9b\xae\xe6\xa8\x99\xe3\x81\xab" -"\xe9\x87\x8d\xe7\x82\xb9\xe3\x81\x8c\xe7\xbd\xae\xe3\x81\x8b\xe3" -"\x82\x8c\xe3\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3\x81\x99\xe3\x80" -"\x82\x0a\xe5\xa4\x9a\xe3\x81\x8f\xe3\x81\xae\xe3\x82\xb9\xe3\x82" -"\xaf\xe3\x83\xaa\xe3\x83\x97\xe3\x83\x88\xe7\xb3\xbb\xe8\xa8\x80" -"\xe8\xaa\x9e\xe3\x81\xa7\xe3\x81\xaf\xe3\x83\xa6\xe3\x83\xbc\xe3" -"\x82\xb6\xe3\x81\xae\xe7\x9b\xae\xe5\x85\x88\xe3\x81\xae\xe5\x88" -"\xa9\xe4\xbe\xbf\xe6\x80\xa7\xe3\x82\x92\xe5\x84\xaa\xe5\x85\x88" -"\xe3\x81\x97\xe3\x81\xa6\xe8\x89\xb2\xe3\x80\x85\xe3\x81\xaa\xe6" -"\xa9\x9f\xe8\x83\xbd\xe3\x82\x92\xe8\xa8\x80\xe8\xaa\x9e\xe8\xa6" -"\x81\xe7\xb4\xa0\xe3\x81\xa8\xe3\x81\x97\xe3\x81\xa6\xe5\x8f\x96" -"\xe3\x82\x8a\xe5\x85\xa5\xe3\x82\x8c\xe3\x82\x8b\xe5\xa0\xb4\xe5" -"\x90\x88\xe3\x81\x8c\xe5\xa4\x9a\xe3\x81\x84\xe3\x81\xae\xe3\x81" -"\xa7\xe3\x81\x99\xe3\x81\x8c\xe3\x80\x81\x50\x79\x74\x68\x6f\x6e" -"\x20\xe3\x81\xa7\xe3\x81\xaf\xe3\x81\x9d\xe3\x81\x86\xe3\x81\x84" -"\xe3\x81\xa3\xe3\x81\x9f\xe5\xb0\x8f\xe7\xb4\xb0\xe5\xb7\xa5\xe3" -"\x81\x8c\xe8\xbf\xbd\xe5\x8a\xa0\xe3\x81\x95\xe3\x82\x8c\xe3\x82" -"\x8b\xe3\x81\x93\xe3\x81\xa8\xe3\x81\xaf\xe3\x81\x82\xe3\x81\xbe" -"\xe3\x82\x8a\xe3\x81\x82\xe3\x82\x8a\xe3\x81\xbe\xe3\x81\x9b\xe3" -"\x82\x93\xe3\x80\x82\x0a\xe8\xa8\x80\xe8\xaa\x9e\xe8\x87\xaa\xe4" -"\xbd\x93\xe3\x81\xae\xe6\xa9\x9f\xe8\x83\xbd\xe3\x81\xaf\xe6\x9c" -"\x80\xe5\xb0\x8f\xe9\x99\x90\xe3\x81\xab\xe6\x8a\xbc\xe3\x81\x95" -"\xe3\x81\x88\xe3\x80\x81\xe5\xbf\x85\xe8\xa6\x81\xe3\x81\xaa\xe6" -"\xa9\x9f\xe8\x83\xbd\xe3\x81\xaf\xe6\x8b\xa1\xe5\xbc\xb5\xe3\x83" -"\xa2\xe3\x82\xb8\xe3\x83\xa5\xe3\x83\xbc\xe3\x83\xab\xe3\x81\xa8" -"\xe3\x81\x97\xe3\x81\xa6\xe8\xbf\xbd\xe5\x8a\xa0\xe3\x81\x99\xe3" -"\x82\x8b\xe3\x80\x81\xe3\x81\xa8\xe3\x81\x84\xe3\x81\x86\xe3\x81" -"\xae\xe3\x81\x8c\x20\x50\x79\x74\x68\x6f\x6e\x20\xe3\x81\xae\xe3" -"\x83\x9d\xe3\x83\xaa\xe3\x82\xb7\xe3\x83\xbc\xe3\x81\xa7\xe3\x81" -"\x99\xe3\x80\x82\x0a\x0a\xe3\x83\x8e\xe3\x81\x8b\xe3\x82\x9a\x20" -"\xe3\x83\x88\xe3\x82\x9a\x20\xe3\x83\x88\xe3\x82\xad\xef\xa8\xb6" -"\xef\xa8\xb9\x20\xf0\xa1\x9a\xb4\xf0\xaa\x8e\x8c\x20\xe9\xba\x80" -"\xe9\xbd\x81\xf0\xa9\x9b\xb0\x0a"), -'euc_jp': ( -"\x50\x79\x74\x68\x6f\x6e\x20\xa4\xce\xb3\xab\xc8\xaf\xa4\xcf\xa1" -"\xa2\x31\x39\x39\x30\x20\xc7\xaf\xa4\xb4\xa4\xed\xa4\xab\xa4\xe9" -"\xb3\xab\xbb\xcf\xa4\xb5\xa4\xec\xa4\xc6\xa4\xa4\xa4\xde\xa4\xb9" -"\xa1\xa3\x0a\xb3\xab\xc8\xaf\xbc\xd4\xa4\xce\x20\x47\x75\x69\x64" -"\x6f\x20\x76\x61\x6e\x20\x52\x6f\x73\x73\x75\x6d\x20\xa4\xcf\xb6" -"\xb5\xb0\xe9\xcd\xd1\xa4\xce\xa5\xd7\xa5\xed\xa5\xb0\xa5\xe9\xa5" -"\xdf\xa5\xf3\xa5\xb0\xb8\xc0\xb8\xec\xa1\xd6\x41\x42\x43\xa1\xd7" -"\xa4\xce\xb3\xab\xc8\xaf\xa4\xcb\xbb\xb2\xb2\xc3\xa4\xb7\xa4\xc6" -"\xa4\xa4\xa4\xde\xa4\xb7\xa4\xbf\xa4\xac\xa1\xa2\x41\x42\x43\x20" -"\xa4\xcf\xbc\xc2\xcd\xd1\xbe\xe5\xa4\xce\xcc\xdc\xc5\xaa\xa4\xcb" -"\xa4\xcf\xa4\xa2\xa4\xde\xa4\xea\xc5\xac\xa4\xb7\xa4\xc6\xa4\xa4" -"\xa4\xde\xa4\xbb\xa4\xf3\xa4\xc7\xa4\xb7\xa4\xbf\xa1\xa3\x0a\xa4" -"\xb3\xa4\xce\xa4\xbf\xa4\xe1\xa1\xa2\x47\x75\x69\x64\x6f\x20\xa4" -"\xcf\xa4\xe8\xa4\xea\xbc\xc2\xcd\xd1\xc5\xaa\xa4\xca\xa5\xd7\xa5" -"\xed\xa5\xb0\xa5\xe9\xa5\xdf\xa5\xf3\xa5\xb0\xb8\xc0\xb8\xec\xa4" 
-"\xce\xb3\xab\xc8\xaf\xa4\xf2\xb3\xab\xbb\xcf\xa4\xb7\xa1\xa2\xb1" -"\xd1\xb9\xf1\x20\x42\x42\x53\x20\xca\xfc\xc1\xf7\xa4\xce\xa5\xb3" -"\xa5\xe1\xa5\xc7\xa5\xa3\xc8\xd6\xc1\xc8\xa1\xd6\xa5\xe2\xa5\xf3" -"\xa5\xc6\xa5\xa3\x20\xa5\xd1\xa5\xa4\xa5\xbd\xa5\xf3\xa1\xd7\xa4" -"\xce\xa5\xd5\xa5\xa1\xa5\xf3\xa4\xc7\xa4\xa2\xa4\xeb\x20\x47\x75" -"\x69\x64\x6f\x20\xa4\xcf\xa4\xb3\xa4\xce\xb8\xc0\xb8\xec\xa4\xf2" -"\xa1\xd6\x50\x79\x74\x68\x6f\x6e\xa1\xd7\xa4\xc8\xcc\xbe\xa4\xc5" -"\xa4\xb1\xa4\xde\xa4\xb7\xa4\xbf\xa1\xa3\x0a\xa4\xb3\xa4\xce\xa4" -"\xe8\xa4\xa6\xa4\xca\xc7\xd8\xb7\xca\xa4\xab\xa4\xe9\xc0\xb8\xa4" -"\xde\xa4\xec\xa4\xbf\x20\x50\x79\x74\x68\x6f\x6e\x20\xa4\xce\xb8" -"\xc0\xb8\xec\xc0\xdf\xb7\xd7\xa4\xcf\xa1\xa2\xa1\xd6\xa5\xb7\xa5" -"\xf3\xa5\xd7\xa5\xeb\xa1\xd7\xa4\xc7\xa1\xd6\xbd\xac\xc6\xc0\xa4" -"\xac\xcd\xc6\xb0\xd7\xa1\xd7\xa4\xc8\xa4\xa4\xa4\xa6\xcc\xdc\xc9" -"\xb8\xa4\xcb\xbd\xc5\xc5\xc0\xa4\xac\xc3\xd6\xa4\xab\xa4\xec\xa4" -"\xc6\xa4\xa4\xa4\xde\xa4\xb9\xa1\xa3\x0a\xc2\xbf\xa4\xaf\xa4\xce" -"\xa5\xb9\xa5\xaf\xa5\xea\xa5\xd7\xa5\xc8\xb7\xcf\xb8\xc0\xb8\xec" -"\xa4\xc7\xa4\xcf\xa5\xe6\xa1\xbc\xa5\xb6\xa4\xce\xcc\xdc\xc0\xe8" -"\xa4\xce\xcd\xf8\xca\xd8\xc0\xad\xa4\xf2\xcd\xa5\xc0\xe8\xa4\xb7" -"\xa4\xc6\xbf\xa7\xa1\xb9\xa4\xca\xb5\xa1\xc7\xbd\xa4\xf2\xb8\xc0" -"\xb8\xec\xcd\xd7\xc1\xc7\xa4\xc8\xa4\xb7\xa4\xc6\xbc\xe8\xa4\xea" -"\xc6\xfe\xa4\xec\xa4\xeb\xbe\xec\xb9\xe7\xa4\xac\xc2\xbf\xa4\xa4" -"\xa4\xce\xa4\xc7\xa4\xb9\xa4\xac\xa1\xa2\x50\x79\x74\x68\x6f\x6e" -"\x20\xa4\xc7\xa4\xcf\xa4\xbd\xa4\xa6\xa4\xa4\xa4\xc3\xa4\xbf\xbe" -"\xae\xba\xd9\xb9\xa9\xa4\xac\xc4\xc9\xb2\xc3\xa4\xb5\xa4\xec\xa4" -"\xeb\xa4\xb3\xa4\xc8\xa4\xcf\xa4\xa2\xa4\xde\xa4\xea\xa4\xa2\xa4" -"\xea\xa4\xde\xa4\xbb\xa4\xf3\xa1\xa3\x0a\xb8\xc0\xb8\xec\xbc\xab" -"\xc2\xce\xa4\xce\xb5\xa1\xc7\xbd\xa4\xcf\xba\xc7\xbe\xae\xb8\xc2" -"\xa4\xcb\xb2\xa1\xa4\xb5\xa4\xa8\xa1\xa2\xc9\xac\xcd\xd7\xa4\xca" -"\xb5\xa1\xc7\xbd\xa4\xcf\xb3\xc8\xc4\xa5\xa5\xe2\xa5\xb8\xa5\xe5" -"\xa1\xbc\xa5\xeb\xa4\xc8\xa4\xb7\xa4\xc6\xc4\xc9\xb2\xc3\xa4\xb9" -"\xa4\xeb\xa1\xa2\xa4\xc8\xa4\xa4\xa4\xa6\xa4\xce\xa4\xac\x20\x50" -"\x79\x74\x68\x6f\x6e\x20\xa4\xce\xa5\xdd\xa5\xea\xa5\xb7\xa1\xbc" -"\xa4\xc7\xa4\xb9\xa1\xa3\x0a\x0a", -"\x50\x79\x74\x68\x6f\x6e\x20\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba" -"\xe3\x81\xaf\xe3\x80\x81\x31\x39\x39\x30\x20\xe5\xb9\xb4\xe3\x81" -"\x94\xe3\x82\x8d\xe3\x81\x8b\xe3\x82\x89\xe9\x96\x8b\xe5\xa7\x8b" -"\xe3\x81\x95\xe3\x82\x8c\xe3\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3" -"\x81\x99\xe3\x80\x82\x0a\xe9\x96\x8b\xe7\x99\xba\xe8\x80\x85\xe3" -"\x81\xae\x20\x47\x75\x69\x64\x6f\x20\x76\x61\x6e\x20\x52\x6f\x73" -"\x73\x75\x6d\x20\xe3\x81\xaf\xe6\x95\x99\xe8\x82\xb2\xe7\x94\xa8" -"\xe3\x81\xae\xe3\x83\x97\xe3\x83\xad\xe3\x82\xb0\xe3\x83\xa9\xe3" -"\x83\x9f\xe3\x83\xb3\xe3\x82\xb0\xe8\xa8\x80\xe8\xaa\x9e\xe3\x80" -"\x8c\x41\x42\x43\xe3\x80\x8d\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba" -"\xe3\x81\xab\xe5\x8f\x82\xe5\x8a\xa0\xe3\x81\x97\xe3\x81\xa6\xe3" -"\x81\x84\xe3\x81\xbe\xe3\x81\x97\xe3\x81\x9f\xe3\x81\x8c\xe3\x80" -"\x81\x41\x42\x43\x20\xe3\x81\xaf\xe5\xae\x9f\xe7\x94\xa8\xe4\xb8" -"\x8a\xe3\x81\xae\xe7\x9b\xae\xe7\x9a\x84\xe3\x81\xab\xe3\x81\xaf" -"\xe3\x81\x82\xe3\x81\xbe\xe3\x82\x8a\xe9\x81\xa9\xe3\x81\x97\xe3" -"\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3\x81\x9b\xe3\x82\x93\xe3\x81" -"\xa7\xe3\x81\x97\xe3\x81\x9f\xe3\x80\x82\x0a\xe3\x81\x93\xe3\x81" -"\xae\xe3\x81\x9f\xe3\x82\x81\xe3\x80\x81\x47\x75\x69\x64\x6f\x20" -"\xe3\x81\xaf\xe3\x82\x88\xe3\x82\x8a\xe5\xae\x9f\xe7\x94\xa8\xe7" 
-"\x9a\x84\xe3\x81\xaa\xe3\x83\x97\xe3\x83\xad\xe3\x82\xb0\xe3\x83" -"\xa9\xe3\x83\x9f\xe3\x83\xb3\xe3\x82\xb0\xe8\xa8\x80\xe8\xaa\x9e" -"\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba\xe3\x82\x92\xe9\x96\x8b\xe5" -"\xa7\x8b\xe3\x81\x97\xe3\x80\x81\xe8\x8b\xb1\xe5\x9b\xbd\x20\x42" -"\x42\x53\x20\xe6\x94\xbe\xe9\x80\x81\xe3\x81\xae\xe3\x82\xb3\xe3" -"\x83\xa1\xe3\x83\x87\xe3\x82\xa3\xe7\x95\xaa\xe7\xb5\x84\xe3\x80" -"\x8c\xe3\x83\xa2\xe3\x83\xb3\xe3\x83\x86\xe3\x82\xa3\x20\xe3\x83" -"\x91\xe3\x82\xa4\xe3\x82\xbd\xe3\x83\xb3\xe3\x80\x8d\xe3\x81\xae" -"\xe3\x83\x95\xe3\x82\xa1\xe3\x83\xb3\xe3\x81\xa7\xe3\x81\x82\xe3" -"\x82\x8b\x20\x47\x75\x69\x64\x6f\x20\xe3\x81\xaf\xe3\x81\x93\xe3" -"\x81\xae\xe8\xa8\x80\xe8\xaa\x9e\xe3\x82\x92\xe3\x80\x8c\x50\x79" -"\x74\x68\x6f\x6e\xe3\x80\x8d\xe3\x81\xa8\xe5\x90\x8d\xe3\x81\xa5" -"\xe3\x81\x91\xe3\x81\xbe\xe3\x81\x97\xe3\x81\x9f\xe3\x80\x82\x0a" -"\xe3\x81\x93\xe3\x81\xae\xe3\x82\x88\xe3\x81\x86\xe3\x81\xaa\xe8" -"\x83\x8c\xe6\x99\xaf\xe3\x81\x8b\xe3\x82\x89\xe7\x94\x9f\xe3\x81" -"\xbe\xe3\x82\x8c\xe3\x81\x9f\x20\x50\x79\x74\x68\x6f\x6e\x20\xe3" -"\x81\xae\xe8\xa8\x80\xe8\xaa\x9e\xe8\xa8\xad\xe8\xa8\x88\xe3\x81" -"\xaf\xe3\x80\x81\xe3\x80\x8c\xe3\x82\xb7\xe3\x83\xb3\xe3\x83\x97" -"\xe3\x83\xab\xe3\x80\x8d\xe3\x81\xa7\xe3\x80\x8c\xe7\xbf\x92\xe5" -"\xbe\x97\xe3\x81\x8c\xe5\xae\xb9\xe6\x98\x93\xe3\x80\x8d\xe3\x81" -"\xa8\xe3\x81\x84\xe3\x81\x86\xe7\x9b\xae\xe6\xa8\x99\xe3\x81\xab" -"\xe9\x87\x8d\xe7\x82\xb9\xe3\x81\x8c\xe7\xbd\xae\xe3\x81\x8b\xe3" -"\x82\x8c\xe3\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3\x81\x99\xe3\x80" -"\x82\x0a\xe5\xa4\x9a\xe3\x81\x8f\xe3\x81\xae\xe3\x82\xb9\xe3\x82" -"\xaf\xe3\x83\xaa\xe3\x83\x97\xe3\x83\x88\xe7\xb3\xbb\xe8\xa8\x80" -"\xe8\xaa\x9e\xe3\x81\xa7\xe3\x81\xaf\xe3\x83\xa6\xe3\x83\xbc\xe3" -"\x82\xb6\xe3\x81\xae\xe7\x9b\xae\xe5\x85\x88\xe3\x81\xae\xe5\x88" -"\xa9\xe4\xbe\xbf\xe6\x80\xa7\xe3\x82\x92\xe5\x84\xaa\xe5\x85\x88" -"\xe3\x81\x97\xe3\x81\xa6\xe8\x89\xb2\xe3\x80\x85\xe3\x81\xaa\xe6" -"\xa9\x9f\xe8\x83\xbd\xe3\x82\x92\xe8\xa8\x80\xe8\xaa\x9e\xe8\xa6" -"\x81\xe7\xb4\xa0\xe3\x81\xa8\xe3\x81\x97\xe3\x81\xa6\xe5\x8f\x96" -"\xe3\x82\x8a\xe5\x85\xa5\xe3\x82\x8c\xe3\x82\x8b\xe5\xa0\xb4\xe5" -"\x90\x88\xe3\x81\x8c\xe5\xa4\x9a\xe3\x81\x84\xe3\x81\xae\xe3\x81" -"\xa7\xe3\x81\x99\xe3\x81\x8c\xe3\x80\x81\x50\x79\x74\x68\x6f\x6e" -"\x20\xe3\x81\xa7\xe3\x81\xaf\xe3\x81\x9d\xe3\x81\x86\xe3\x81\x84" -"\xe3\x81\xa3\xe3\x81\x9f\xe5\xb0\x8f\xe7\xb4\xb0\xe5\xb7\xa5\xe3" -"\x81\x8c\xe8\xbf\xbd\xe5\x8a\xa0\xe3\x81\x95\xe3\x82\x8c\xe3\x82" -"\x8b\xe3\x81\x93\xe3\x81\xa8\xe3\x81\xaf\xe3\x81\x82\xe3\x81\xbe" -"\xe3\x82\x8a\xe3\x81\x82\xe3\x82\x8a\xe3\x81\xbe\xe3\x81\x9b\xe3" -"\x82\x93\xe3\x80\x82\x0a\xe8\xa8\x80\xe8\xaa\x9e\xe8\x87\xaa\xe4" -"\xbd\x93\xe3\x81\xae\xe6\xa9\x9f\xe8\x83\xbd\xe3\x81\xaf\xe6\x9c" -"\x80\xe5\xb0\x8f\xe9\x99\x90\xe3\x81\xab\xe6\x8a\xbc\xe3\x81\x95" -"\xe3\x81\x88\xe3\x80\x81\xe5\xbf\x85\xe8\xa6\x81\xe3\x81\xaa\xe6" -"\xa9\x9f\xe8\x83\xbd\xe3\x81\xaf\xe6\x8b\xa1\xe5\xbc\xb5\xe3\x83" -"\xa2\xe3\x82\xb8\xe3\x83\xa5\xe3\x83\xbc\xe3\x83\xab\xe3\x81\xa8" -"\xe3\x81\x97\xe3\x81\xa6\xe8\xbf\xbd\xe5\x8a\xa0\xe3\x81\x99\xe3" -"\x82\x8b\xe3\x80\x81\xe3\x81\xa8\xe3\x81\x84\xe3\x81\x86\xe3\x81" -"\xae\xe3\x81\x8c\x20\x50\x79\x74\x68\x6f\x6e\x20\xe3\x81\xae\xe3" -"\x83\x9d\xe3\x83\xaa\xe3\x82\xb7\xe3\x83\xbc\xe3\x81\xa7\xe3\x81" -"\x99\xe3\x80\x82\x0a\x0a"), -'euc_kr': ( -"\xa1\xdd\x20\xc6\xc4\xc0\xcc\xbd\xe3\x28\x50\x79\x74\x68\x6f\x6e" -"\x29\xc0\xba\x20\xb9\xe8\xbf\xec\xb1\xe2\x20\xbd\xb1\xb0\xed\x2c" 
-"\x20\xb0\xad\xb7\xc2\xc7\xd1\x20\xc7\xc1\xb7\xce\xb1\xd7\xb7\xa1" -"\xb9\xd6\x20\xbe\xf0\xbe\xee\xc0\xd4\xb4\xcf\xb4\xd9\x2e\x20\xc6" -"\xc4\xc0\xcc\xbd\xe3\xc0\xba\x0a\xc8\xbf\xc0\xb2\xc0\xfb\xc0\xce" -"\x20\xb0\xed\xbc\xf6\xc1\xd8\x20\xb5\xa5\xc0\xcc\xc5\xcd\x20\xb1" -"\xb8\xc1\xb6\xbf\xcd\x20\xb0\xa3\xb4\xdc\xc7\xcf\xc1\xf6\xb8\xb8" -"\x20\xc8\xbf\xc0\xb2\xc0\xfb\xc0\xce\x20\xb0\xb4\xc3\xbc\xc1\xf6" -"\xc7\xe2\xc7\xc1\xb7\xce\xb1\xd7\xb7\xa1\xb9\xd6\xc0\xbb\x0a\xc1" -"\xf6\xbf\xf8\xc7\xd5\xb4\xcf\xb4\xd9\x2e\x20\xc6\xc4\xc0\xcc\xbd" -"\xe3\xc0\xc7\x20\xbf\xec\xbe\xc6\x28\xe9\xd0\xe4\xba\x29\xc7\xd1" -"\x20\xb9\xae\xb9\xfd\xb0\xfa\x20\xb5\xbf\xc0\xfb\x20\xc5\xb8\xc0" -"\xcc\xc7\xce\x2c\x20\xb1\xd7\xb8\xae\xb0\xed\x20\xc0\xce\xc5\xcd" -"\xc7\xc1\xb8\xae\xc6\xc3\x0a\xc8\xaf\xb0\xe6\xc0\xba\x20\xc6\xc4" -"\xc0\xcc\xbd\xe3\xc0\xbb\x20\xbd\xba\xc5\xa9\xb8\xb3\xc6\xc3\xb0" -"\xfa\x20\xbf\xa9\xb7\xaf\x20\xba\xd0\xbe\xdf\xbf\xa1\xbc\xad\xbf" -"\xcd\x20\xb4\xeb\xba\xce\xba\xd0\xc0\xc7\x20\xc7\xc3\xb7\xa7\xc6" -"\xfb\xbf\xa1\xbc\xad\xc0\xc7\x20\xba\xfc\xb8\xa5\x0a\xbe\xd6\xc7" -"\xc3\xb8\xae\xc4\xc9\xc0\xcc\xbc\xc7\x20\xb0\xb3\xb9\xdf\xc0\xbb" -"\x20\xc7\xd2\x20\xbc\xf6\x20\xc0\xd6\xb4\xc2\x20\xc0\xcc\xbb\xf3" -"\xc0\xfb\xc0\xce\x20\xbe\xf0\xbe\xee\xb7\xce\x20\xb8\xb8\xb5\xe9" -"\xbe\xee\xc1\xdd\xb4\xcf\xb4\xd9\x2e\x0a\x0a\xa1\xd9\xc3\xb9\xb0" -"\xa1\xb3\xa1\x3a\x20\xb3\xaf\xbe\xc6\xb6\xf3\x20\xa4\xd4\xa4\xb6" -"\xa4\xd0\xa4\xd4\xa4\xd4\xa4\xb6\xa4\xd0\xa4\xd4\xbe\xb1\x7e\x20" -"\xa4\xd4\xa4\xa4\xa4\xd2\xa4\xb7\xc5\xad\x21\x20\xa4\xd4\xa4\xa8" -"\xa4\xd1\xa4\xb7\xb1\xdd\xbe\xf8\xc0\xcc\x20\xc0\xfc\xa4\xd4\xa4" -"\xbe\xa4\xc8\xa4\xb2\xb4\xcf\xb4\xd9\x2e\x20\xa4\xd4\xa4\xb2\xa4" -"\xce\xa4\xaa\x2e\x20\xb1\xd7\xb7\xb1\xb0\xc5\x20\xa4\xd4\xa4\xb7" -"\xa4\xd1\xa4\xb4\xb4\xd9\x2e\x0a", -"\xe2\x97\x8e\x20\xed\x8c\x8c\xec\x9d\xb4\xec\x8d\xac\x28\x50\x79" -"\x74\x68\x6f\x6e\x29\xec\x9d\x80\x20\xeb\xb0\xb0\xec\x9a\xb0\xea" -"\xb8\xb0\x20\xec\x89\xbd\xea\xb3\xa0\x2c\x20\xea\xb0\x95\xeb\xa0" -"\xa5\xed\x95\x9c\x20\xed\x94\x84\xeb\xa1\x9c\xea\xb7\xb8\xeb\x9e" -"\x98\xeb\xb0\x8d\x20\xec\x96\xb8\xec\x96\xb4\xec\x9e\x85\xeb\x8b" -"\x88\xeb\x8b\xa4\x2e\x20\xed\x8c\x8c\xec\x9d\xb4\xec\x8d\xac\xec" -"\x9d\x80\x0a\xed\x9a\xa8\xec\x9c\xa8\xec\xa0\x81\xec\x9d\xb8\x20" -"\xea\xb3\xa0\xec\x88\x98\xec\xa4\x80\x20\xeb\x8d\xb0\xec\x9d\xb4" -"\xed\x84\xb0\x20\xea\xb5\xac\xec\xa1\xb0\xec\x99\x80\x20\xea\xb0" -"\x84\xeb\x8b\xa8\xed\x95\x98\xec\xa7\x80\xeb\xa7\x8c\x20\xed\x9a" -"\xa8\xec\x9c\xa8\xec\xa0\x81\xec\x9d\xb8\x20\xea\xb0\x9d\xec\xb2" -"\xb4\xec\xa7\x80\xed\x96\xa5\xed\x94\x84\xeb\xa1\x9c\xea\xb7\xb8" -"\xeb\x9e\x98\xeb\xb0\x8d\xec\x9d\x84\x0a\xec\xa7\x80\xec\x9b\x90" -"\xed\x95\xa9\xeb\x8b\x88\xeb\x8b\xa4\x2e\x20\xed\x8c\x8c\xec\x9d" -"\xb4\xec\x8d\xac\xec\x9d\x98\x20\xec\x9a\xb0\xec\x95\x84\x28\xe5" -"\x84\xaa\xe9\x9b\x85\x29\xed\x95\x9c\x20\xeb\xac\xb8\xeb\xb2\x95" -"\xea\xb3\xbc\x20\xeb\x8f\x99\xec\xa0\x81\x20\xed\x83\x80\xec\x9d" -"\xb4\xed\x95\x91\x2c\x20\xea\xb7\xb8\xeb\xa6\xac\xea\xb3\xa0\x20" -"\xec\x9d\xb8\xed\x84\xb0\xed\x94\x84\xeb\xa6\xac\xed\x8c\x85\x0a" -"\xed\x99\x98\xea\xb2\xbd\xec\x9d\x80\x20\xed\x8c\x8c\xec\x9d\xb4" -"\xec\x8d\xac\xec\x9d\x84\x20\xec\x8a\xa4\xed\x81\xac\xeb\xa6\xbd" -"\xed\x8c\x85\xea\xb3\xbc\x20\xec\x97\xac\xeb\x9f\xac\x20\xeb\xb6" -"\x84\xec\x95\xbc\xec\x97\x90\xec\x84\x9c\xec\x99\x80\x20\xeb\x8c" -"\x80\xeb\xb6\x80\xeb\xb6\x84\xec\x9d\x98\x20\xed\x94\x8c\xeb\x9e" -"\xab\xed\x8f\xbc\xec\x97\x90\xec\x84\x9c\xec\x9d\x98\x20\xeb\xb9" 
-"\xa0\xeb\xa5\xb8\x0a\xec\x95\xa0\xed\x94\x8c\xeb\xa6\xac\xec\xbc" -"\x80\xec\x9d\xb4\xec\x85\x98\x20\xea\xb0\x9c\xeb\xb0\x9c\xec\x9d" -"\x84\x20\xed\x95\xa0\x20\xec\x88\x98\x20\xec\x9e\x88\xeb\x8a\x94" -"\x20\xec\x9d\xb4\xec\x83\x81\xec\xa0\x81\xec\x9d\xb8\x20\xec\x96" -"\xb8\xec\x96\xb4\xeb\xa1\x9c\x20\xeb\xa7\x8c\xeb\x93\xa4\xec\x96" -"\xb4\xec\xa4\x8d\xeb\x8b\x88\xeb\x8b\xa4\x2e\x0a\x0a\xe2\x98\x86" -"\xec\xb2\xab\xea\xb0\x80\xeb\x81\x9d\x3a\x20\xeb\x82\xa0\xec\x95" -"\x84\xeb\x9d\xbc\x20\xec\x93\x94\xec\x93\x94\xec\x93\xa9\x7e\x20" -"\xeb\x8b\x81\xed\x81\xbc\x21\x20\xeb\x9c\xbd\xea\xb8\x88\xec\x97" -"\x86\xec\x9d\xb4\x20\xec\xa0\x84\xed\x99\xa5\xeb\x8b\x88\xeb\x8b" -"\xa4\x2e\x20\xeb\xb7\x81\x2e\x20\xea\xb7\xb8\xeb\x9f\xb0\xea\xb1" -"\xb0\x20\xec\x9d\x8e\xeb\x8b\xa4\x2e\x0a"), -'gb18030': ( -"\x50\x79\x74\x68\x6f\x6e\xa3\xa8\xc5\xc9\xc9\xad\xa3\xa9\xd3\xef" -"\xd1\xd4\xca\xc7\xd2\xbb\xd6\xd6\xb9\xa6\xc4\xdc\xc7\xbf\xb4\xf3" -"\xb6\xf8\xcd\xea\xc9\xc6\xb5\xc4\xcd\xa8\xd3\xc3\xd0\xcd\xbc\xc6" -"\xcb\xe3\xbb\xfa\xb3\xcc\xd0\xf2\xc9\xe8\xbc\xc6\xd3\xef\xd1\xd4" -"\xa3\xac\x0a\xd2\xd1\xbe\xad\xbe\xdf\xd3\xd0\xca\xae\xb6\xe0\xc4" -"\xea\xb5\xc4\xb7\xa2\xd5\xb9\xc0\xfa\xca\xb7\xa3\xac\xb3\xc9\xca" -"\xec\xc7\xd2\xce\xc8\xb6\xa8\xa1\xa3\xd5\xe2\xd6\xd6\xd3\xef\xd1" -"\xd4\xbe\xdf\xd3\xd0\xb7\xc7\xb3\xa3\xbc\xf2\xbd\xdd\xb6\xf8\xc7" -"\xe5\xce\xfa\x0a\xb5\xc4\xd3\xef\xb7\xa8\xcc\xd8\xb5\xe3\xa3\xac" -"\xca\xca\xba\xcf\xcd\xea\xb3\xc9\xb8\xf7\xd6\xd6\xb8\xdf\xb2\xe3" -"\xc8\xce\xce\xf1\xa3\xac\xbc\xb8\xba\xf5\xbf\xc9\xd2\xd4\xd4\xda" -"\xcb\xf9\xd3\xd0\xb5\xc4\xb2\xd9\xd7\xf7\xcf\xb5\xcd\xb3\xd6\xd0" -"\x0a\xd4\xcb\xd0\xd0\xa1\xa3\xd5\xe2\xd6\xd6\xd3\xef\xd1\xd4\xbc" -"\xf2\xb5\xa5\xb6\xf8\xc7\xbf\xb4\xf3\xa3\xac\xca\xca\xba\xcf\xb8" -"\xf7\xd6\xd6\xc8\xcb\xca\xbf\xd1\xa7\xcf\xb0\xca\xb9\xd3\xc3\xa1" -"\xa3\xc4\xbf\xc7\xb0\xa3\xac\xbb\xf9\xd3\xda\xd5\xe2\x0a\xd6\xd6" -"\xd3\xef\xd1\xd4\xb5\xc4\xcf\xe0\xb9\xd8\xbc\xbc\xca\xf5\xd5\xfd" -"\xd4\xda\xb7\xc9\xcb\xd9\xb5\xc4\xb7\xa2\xd5\xb9\xa3\xac\xd3\xc3" -"\xbb\xa7\xca\xfd\xc1\xbf\xbc\xb1\xbe\xe7\xc0\xa9\xb4\xf3\xa3\xac" -"\xcf\xe0\xb9\xd8\xb5\xc4\xd7\xca\xd4\xb4\xb7\xc7\xb3\xa3\xb6\xe0" -"\xa1\xa3\x0a\xc8\xe7\xba\xce\xd4\xda\x20\x50\x79\x74\x68\x6f\x6e" -"\x20\xd6\xd0\xca\xb9\xd3\xc3\xbc\xc8\xd3\xd0\xb5\xc4\x20\x43\x20" -"\x6c\x69\x62\x72\x61\x72\x79\x3f\x0a\xa1\xa1\xd4\xda\xd9\x59\xd3" -"\x8d\xbf\xc6\xbc\xbc\xbf\xec\xcb\xd9\xb0\x6c\xd5\xb9\xb5\xc4\xbd" -"\xf1\xcc\xec\x2c\x20\xe9\x5f\xb0\x6c\xbc\xb0\x9c\x79\xd4\x87\xdc" -"\x9b\xf3\x77\xb5\xc4\xcb\xd9\xb6\xc8\xca\xc7\xb2\xbb\xc8\xdd\xba" -"\xf6\xd2\x95\xb5\xc4\x0a\xd5\x6e\xee\x7d\x2e\x20\x9e\xe9\xbc\xd3" -"\xbf\xec\xe9\x5f\xb0\x6c\xbc\xb0\x9c\x79\xd4\x87\xb5\xc4\xcb\xd9" -"\xb6\xc8\x2c\x20\xce\xd2\x82\x83\xb1\xe3\xb3\xa3\xcf\xa3\xcd\xfb" -"\xc4\xdc\xc0\xfb\xd3\xc3\xd2\xbb\xd0\xa9\xd2\xd1\xe9\x5f\xb0\x6c" -"\xba\xc3\xb5\xc4\x0a\x6c\x69\x62\x72\x61\x72\x79\x2c\x20\x81\x4b" -"\xd3\xd0\xd2\xbb\x82\x80\x20\x66\x61\x73\x74\x20\x70\x72\x6f\x74" -"\x6f\x74\x79\x70\x69\x6e\x67\x20\xb5\xc4\x20\x70\x72\x6f\x67\x72" -"\x61\x6d\x6d\x69\x6e\x67\x20\x6c\x61\x6e\x67\x75\x61\x67\x65\x20" -"\xbf\xc9\x0a\xb9\xa9\xca\xb9\xd3\xc3\x2e\x20\xc4\xbf\xc7\xb0\xd3" -"\xd0\xd4\x53\xd4\x53\xb6\xe0\xb6\xe0\xb5\xc4\x20\x6c\x69\x62\x72" -"\x61\x72\x79\x20\xca\xc7\xd2\xd4\x20\x43\x20\x8c\x91\xb3\xc9\x2c" -"\x20\xb6\xf8\x20\x50\x79\x74\x68\x6f\x6e\x20\xca\xc7\xd2\xbb\x82" -"\x80\x0a\x66\x61\x73\x74\x20\x70\x72\x6f\x74\x6f\x74\x79\x70\x69" -"\x6e\x67\x20\xb5\xc4\x20\x70\x72\x6f\x67\x72\x61\x6d\x6d\x69\x6e" 
-"\x67\x20\x6c\x61\x6e\x67\x75\x61\x67\x65\x2e\x20\xb9\xca\xce\xd2" -"\x82\x83\xcf\xa3\xcd\xfb\xc4\xdc\x8c\xa2\xbc\xc8\xd3\xd0\xb5\xc4" -"\x0a\x43\x20\x6c\x69\x62\x72\x61\x72\x79\x20\xc4\xc3\xb5\xbd\x20" -"\x50\x79\x74\x68\x6f\x6e\x20\xb5\xc4\xad\x68\xbe\xb3\xd6\xd0\x9c" -"\x79\xd4\x87\xbc\xb0\xd5\xfb\xba\xcf\x2e\x20\xc6\xe4\xd6\xd0\xd7" -"\xee\xd6\xf7\xd2\xaa\xd2\xb2\xca\xc7\xce\xd2\x82\x83\xcb\xf9\x0a" -"\xd2\xaa\xd3\x91\xd5\x93\xb5\xc4\x86\x96\xee\x7d\xbe\xcd\xca\xc7" -"\x3a\x0a\x83\x35\xc7\x31\x83\x33\x9a\x33\x83\x32\xb1\x31\x83\x33" -"\x95\x31\x20\x82\x37\xd1\x36\x83\x30\x8c\x34\x83\x36\x84\x33\x20" -"\x82\x38\x89\x35\x82\x38\xfb\x36\x83\x33\x95\x35\x20\x83\x33\xd5" -"\x31\x82\x39\x81\x35\x20\x83\x30\xfd\x39\x83\x33\x86\x30\x20\x83" -"\x34\xdc\x33\x83\x35\xf6\x37\x83\x35\x97\x35\x20\x83\x35\xf9\x35" -"\x83\x30\x91\x39\x82\x38\x83\x39\x82\x39\xfc\x33\x83\x30\xf0\x34" -"\x20\x83\x32\xeb\x39\x83\x32\xeb\x35\x82\x39\x83\x39\x2e\x0a\x0a", -"\x50\x79\x74\x68\x6f\x6e\xef\xbc\x88\xe6\xb4\xbe\xe6\xa3\xae\xef" -"\xbc\x89\xe8\xaf\xad\xe8\xa8\x80\xe6\x98\xaf\xe4\xb8\x80\xe7\xa7" -"\x8d\xe5\x8a\x9f\xe8\x83\xbd\xe5\xbc\xba\xe5\xa4\xa7\xe8\x80\x8c" -"\xe5\xae\x8c\xe5\x96\x84\xe7\x9a\x84\xe9\x80\x9a\xe7\x94\xa8\xe5" -"\x9e\x8b\xe8\xae\xa1\xe7\xae\x97\xe6\x9c\xba\xe7\xa8\x8b\xe5\xba" -"\x8f\xe8\xae\xbe\xe8\xae\xa1\xe8\xaf\xad\xe8\xa8\x80\xef\xbc\x8c" -"\x0a\xe5\xb7\xb2\xe7\xbb\x8f\xe5\x85\xb7\xe6\x9c\x89\xe5\x8d\x81" -"\xe5\xa4\x9a\xe5\xb9\xb4\xe7\x9a\x84\xe5\x8f\x91\xe5\xb1\x95\xe5" -"\x8e\x86\xe5\x8f\xb2\xef\xbc\x8c\xe6\x88\x90\xe7\x86\x9f\xe4\xb8" -"\x94\xe7\xa8\xb3\xe5\xae\x9a\xe3\x80\x82\xe8\xbf\x99\xe7\xa7\x8d" -"\xe8\xaf\xad\xe8\xa8\x80\xe5\x85\xb7\xe6\x9c\x89\xe9\x9d\x9e\xe5" -"\xb8\xb8\xe7\xae\x80\xe6\x8d\xb7\xe8\x80\x8c\xe6\xb8\x85\xe6\x99" -"\xb0\x0a\xe7\x9a\x84\xe8\xaf\xad\xe6\xb3\x95\xe7\x89\xb9\xe7\x82" -"\xb9\xef\xbc\x8c\xe9\x80\x82\xe5\x90\x88\xe5\xae\x8c\xe6\x88\x90" -"\xe5\x90\x84\xe7\xa7\x8d\xe9\xab\x98\xe5\xb1\x82\xe4\xbb\xbb\xe5" -"\x8a\xa1\xef\xbc\x8c\xe5\x87\xa0\xe4\xb9\x8e\xe5\x8f\xaf\xe4\xbb" -"\xa5\xe5\x9c\xa8\xe6\x89\x80\xe6\x9c\x89\xe7\x9a\x84\xe6\x93\x8d" -"\xe4\xbd\x9c\xe7\xb3\xbb\xe7\xbb\x9f\xe4\xb8\xad\x0a\xe8\xbf\x90" -"\xe8\xa1\x8c\xe3\x80\x82\xe8\xbf\x99\xe7\xa7\x8d\xe8\xaf\xad\xe8" -"\xa8\x80\xe7\xae\x80\xe5\x8d\x95\xe8\x80\x8c\xe5\xbc\xba\xe5\xa4" -"\xa7\xef\xbc\x8c\xe9\x80\x82\xe5\x90\x88\xe5\x90\x84\xe7\xa7\x8d" -"\xe4\xba\xba\xe5\xa3\xab\xe5\xad\xa6\xe4\xb9\xa0\xe4\xbd\xbf\xe7" -"\x94\xa8\xe3\x80\x82\xe7\x9b\xae\xe5\x89\x8d\xef\xbc\x8c\xe5\x9f" -"\xba\xe4\xba\x8e\xe8\xbf\x99\x0a\xe7\xa7\x8d\xe8\xaf\xad\xe8\xa8" -"\x80\xe7\x9a\x84\xe7\x9b\xb8\xe5\x85\xb3\xe6\x8a\x80\xe6\x9c\xaf" -"\xe6\xad\xa3\xe5\x9c\xa8\xe9\xa3\x9e\xe9\x80\x9f\xe7\x9a\x84\xe5" -"\x8f\x91\xe5\xb1\x95\xef\xbc\x8c\xe7\x94\xa8\xe6\x88\xb7\xe6\x95" -"\xb0\xe9\x87\x8f\xe6\x80\xa5\xe5\x89\xa7\xe6\x89\xa9\xe5\xa4\xa7" -"\xef\xbc\x8c\xe7\x9b\xb8\xe5\x85\xb3\xe7\x9a\x84\xe8\xb5\x84\xe6" -"\xba\x90\xe9\x9d\x9e\xe5\xb8\xb8\xe5\xa4\x9a\xe3\x80\x82\x0a\xe5" -"\xa6\x82\xe4\xbd\x95\xe5\x9c\xa8\x20\x50\x79\x74\x68\x6f\x6e\x20" -"\xe4\xb8\xad\xe4\xbd\xbf\xe7\x94\xa8\xe6\x97\xa2\xe6\x9c\x89\xe7" -"\x9a\x84\x20\x43\x20\x6c\x69\x62\x72\x61\x72\x79\x3f\x0a\xe3\x80" -"\x80\xe5\x9c\xa8\xe8\xb3\x87\xe8\xa8\x8a\xe7\xa7\x91\xe6\x8a\x80" -"\xe5\xbf\xab\xe9\x80\x9f\xe7\x99\xbc\xe5\xb1\x95\xe7\x9a\x84\xe4" -"\xbb\x8a\xe5\xa4\xa9\x2c\x20\xe9\x96\x8b\xe7\x99\xbc\xe5\x8f\x8a" -"\xe6\xb8\xac\xe8\xa9\xa6\xe8\xbb\x9f\xe9\xab\x94\xe7\x9a\x84\xe9" -"\x80\x9f\xe5\xba\xa6\xe6\x98\xaf\xe4\xb8\x8d\xe5\xae\xb9\xe5\xbf" 
-"\xbd\xe8\xa6\x96\xe7\x9a\x84\x0a\xe8\xaa\xb2\xe9\xa1\x8c\x2e\x20" -"\xe7\x82\xba\xe5\x8a\xa0\xe5\xbf\xab\xe9\x96\x8b\xe7\x99\xbc\xe5" -"\x8f\x8a\xe6\xb8\xac\xe8\xa9\xa6\xe7\x9a\x84\xe9\x80\x9f\xe5\xba" -"\xa6\x2c\x20\xe6\x88\x91\xe5\x80\x91\xe4\xbe\xbf\xe5\xb8\xb8\xe5" -"\xb8\x8c\xe6\x9c\x9b\xe8\x83\xbd\xe5\x88\xa9\xe7\x94\xa8\xe4\xb8" -"\x80\xe4\xba\x9b\xe5\xb7\xb2\xe9\x96\x8b\xe7\x99\xbc\xe5\xa5\xbd" -"\xe7\x9a\x84\x0a\x6c\x69\x62\x72\x61\x72\x79\x2c\x20\xe4\xb8\xa6" -"\xe6\x9c\x89\xe4\xb8\x80\xe5\x80\x8b\x20\x66\x61\x73\x74\x20\x70" -"\x72\x6f\x74\x6f\x74\x79\x70\x69\x6e\x67\x20\xe7\x9a\x84\x20\x70" -"\x72\x6f\x67\x72\x61\x6d\x6d\x69\x6e\x67\x20\x6c\x61\x6e\x67\x75" -"\x61\x67\x65\x20\xe5\x8f\xaf\x0a\xe4\xbe\x9b\xe4\xbd\xbf\xe7\x94" -"\xa8\x2e\x20\xe7\x9b\xae\xe5\x89\x8d\xe6\x9c\x89\xe8\xa8\xb1\xe8" -"\xa8\xb1\xe5\xa4\x9a\xe5\xa4\x9a\xe7\x9a\x84\x20\x6c\x69\x62\x72" -"\x61\x72\x79\x20\xe6\x98\xaf\xe4\xbb\xa5\x20\x43\x20\xe5\xaf\xab" -"\xe6\x88\x90\x2c\x20\xe8\x80\x8c\x20\x50\x79\x74\x68\x6f\x6e\x20" -"\xe6\x98\xaf\xe4\xb8\x80\xe5\x80\x8b\x0a\x66\x61\x73\x74\x20\x70" -"\x72\x6f\x74\x6f\x74\x79\x70\x69\x6e\x67\x20\xe7\x9a\x84\x20\x70" -"\x72\x6f\x67\x72\x61\x6d\x6d\x69\x6e\x67\x20\x6c\x61\x6e\x67\x75" -"\x61\x67\x65\x2e\x20\xe6\x95\x85\xe6\x88\x91\xe5\x80\x91\xe5\xb8" -"\x8c\xe6\x9c\x9b\xe8\x83\xbd\xe5\xb0\x87\xe6\x97\xa2\xe6\x9c\x89" -"\xe7\x9a\x84\x0a\x43\x20\x6c\x69\x62\x72\x61\x72\x79\x20\xe6\x8b" -"\xbf\xe5\x88\xb0\x20\x50\x79\x74\x68\x6f\x6e\x20\xe7\x9a\x84\xe7" -"\x92\xb0\xe5\xa2\x83\xe4\xb8\xad\xe6\xb8\xac\xe8\xa9\xa6\xe5\x8f" -"\x8a\xe6\x95\xb4\xe5\x90\x88\x2e\x20\xe5\x85\xb6\xe4\xb8\xad\xe6" -"\x9c\x80\xe4\xb8\xbb\xe8\xa6\x81\xe4\xb9\x9f\xe6\x98\xaf\xe6\x88" -"\x91\xe5\x80\x91\xe6\x89\x80\x0a\xe8\xa6\x81\xe8\xa8\x8e\xe8\xab" -"\x96\xe7\x9a\x84\xe5\x95\x8f\xe9\xa1\x8c\xe5\xb0\xb1\xe6\x98\xaf" -"\x3a\x0a\xed\x8c\x8c\xec\x9d\xb4\xec\x8d\xac\xec\x9d\x80\x20\xea" -"\xb0\x95\xeb\xa0\xa5\xed\x95\x9c\x20\xea\xb8\xb0\xeb\x8a\xa5\xec" -"\x9d\x84\x20\xec\xa7\x80\xeb\x8b\x8c\x20\xeb\xb2\x94\xec\x9a\xa9" -"\x20\xec\xbb\xb4\xed\x93\xa8\xed\x84\xb0\x20\xed\x94\x84\xeb\xa1" -"\x9c\xea\xb7\xb8\xeb\x9e\x98\xeb\xb0\x8d\x20\xec\x96\xb8\xec\x96" -"\xb4\xeb\x8b\xa4\x2e\x0a\x0a"), -'gb2312': ( -"\x50\x79\x74\x68\x6f\x6e\xa3\xa8\xc5\xc9\xc9\xad\xa3\xa9\xd3\xef" -"\xd1\xd4\xca\xc7\xd2\xbb\xd6\xd6\xb9\xa6\xc4\xdc\xc7\xbf\xb4\xf3" -"\xb6\xf8\xcd\xea\xc9\xc6\xb5\xc4\xcd\xa8\xd3\xc3\xd0\xcd\xbc\xc6" -"\xcb\xe3\xbb\xfa\xb3\xcc\xd0\xf2\xc9\xe8\xbc\xc6\xd3\xef\xd1\xd4" -"\xa3\xac\x0a\xd2\xd1\xbe\xad\xbe\xdf\xd3\xd0\xca\xae\xb6\xe0\xc4" -"\xea\xb5\xc4\xb7\xa2\xd5\xb9\xc0\xfa\xca\xb7\xa3\xac\xb3\xc9\xca" -"\xec\xc7\xd2\xce\xc8\xb6\xa8\xa1\xa3\xd5\xe2\xd6\xd6\xd3\xef\xd1" -"\xd4\xbe\xdf\xd3\xd0\xb7\xc7\xb3\xa3\xbc\xf2\xbd\xdd\xb6\xf8\xc7" -"\xe5\xce\xfa\x0a\xb5\xc4\xd3\xef\xb7\xa8\xcc\xd8\xb5\xe3\xa3\xac" -"\xca\xca\xba\xcf\xcd\xea\xb3\xc9\xb8\xf7\xd6\xd6\xb8\xdf\xb2\xe3" -"\xc8\xce\xce\xf1\xa3\xac\xbc\xb8\xba\xf5\xbf\xc9\xd2\xd4\xd4\xda" -"\xcb\xf9\xd3\xd0\xb5\xc4\xb2\xd9\xd7\xf7\xcf\xb5\xcd\xb3\xd6\xd0" -"\x0a\xd4\xcb\xd0\xd0\xa1\xa3\xd5\xe2\xd6\xd6\xd3\xef\xd1\xd4\xbc" -"\xf2\xb5\xa5\xb6\xf8\xc7\xbf\xb4\xf3\xa3\xac\xca\xca\xba\xcf\xb8" -"\xf7\xd6\xd6\xc8\xcb\xca\xbf\xd1\xa7\xcf\xb0\xca\xb9\xd3\xc3\xa1" -"\xa3\xc4\xbf\xc7\xb0\xa3\xac\xbb\xf9\xd3\xda\xd5\xe2\x0a\xd6\xd6" -"\xd3\xef\xd1\xd4\xb5\xc4\xcf\xe0\xb9\xd8\xbc\xbc\xca\xf5\xd5\xfd" -"\xd4\xda\xb7\xc9\xcb\xd9\xb5\xc4\xb7\xa2\xd5\xb9\xa3\xac\xd3\xc3" -"\xbb\xa7\xca\xfd\xc1\xbf\xbc\xb1\xbe\xe7\xc0\xa9\xb4\xf3\xa3\xac" 
-"\xcf\xe0\xb9\xd8\xb5\xc4\xd7\xca\xd4\xb4\xb7\xc7\xb3\xa3\xb6\xe0" -"\xa1\xa3\x0a\x0a", -"\x50\x79\x74\x68\x6f\x6e\xef\xbc\x88\xe6\xb4\xbe\xe6\xa3\xae\xef" -"\xbc\x89\xe8\xaf\xad\xe8\xa8\x80\xe6\x98\xaf\xe4\xb8\x80\xe7\xa7" -"\x8d\xe5\x8a\x9f\xe8\x83\xbd\xe5\xbc\xba\xe5\xa4\xa7\xe8\x80\x8c" -"\xe5\xae\x8c\xe5\x96\x84\xe7\x9a\x84\xe9\x80\x9a\xe7\x94\xa8\xe5" -"\x9e\x8b\xe8\xae\xa1\xe7\xae\x97\xe6\x9c\xba\xe7\xa8\x8b\xe5\xba" -"\x8f\xe8\xae\xbe\xe8\xae\xa1\xe8\xaf\xad\xe8\xa8\x80\xef\xbc\x8c" -"\x0a\xe5\xb7\xb2\xe7\xbb\x8f\xe5\x85\xb7\xe6\x9c\x89\xe5\x8d\x81" -"\xe5\xa4\x9a\xe5\xb9\xb4\xe7\x9a\x84\xe5\x8f\x91\xe5\xb1\x95\xe5" -"\x8e\x86\xe5\x8f\xb2\xef\xbc\x8c\xe6\x88\x90\xe7\x86\x9f\xe4\xb8" -"\x94\xe7\xa8\xb3\xe5\xae\x9a\xe3\x80\x82\xe8\xbf\x99\xe7\xa7\x8d" -"\xe8\xaf\xad\xe8\xa8\x80\xe5\x85\xb7\xe6\x9c\x89\xe9\x9d\x9e\xe5" -"\xb8\xb8\xe7\xae\x80\xe6\x8d\xb7\xe8\x80\x8c\xe6\xb8\x85\xe6\x99" -"\xb0\x0a\xe7\x9a\x84\xe8\xaf\xad\xe6\xb3\x95\xe7\x89\xb9\xe7\x82" -"\xb9\xef\xbc\x8c\xe9\x80\x82\xe5\x90\x88\xe5\xae\x8c\xe6\x88\x90" -"\xe5\x90\x84\xe7\xa7\x8d\xe9\xab\x98\xe5\xb1\x82\xe4\xbb\xbb\xe5" -"\x8a\xa1\xef\xbc\x8c\xe5\x87\xa0\xe4\xb9\x8e\xe5\x8f\xaf\xe4\xbb" -"\xa5\xe5\x9c\xa8\xe6\x89\x80\xe6\x9c\x89\xe7\x9a\x84\xe6\x93\x8d" -"\xe4\xbd\x9c\xe7\xb3\xbb\xe7\xbb\x9f\xe4\xb8\xad\x0a\xe8\xbf\x90" -"\xe8\xa1\x8c\xe3\x80\x82\xe8\xbf\x99\xe7\xa7\x8d\xe8\xaf\xad\xe8" -"\xa8\x80\xe7\xae\x80\xe5\x8d\x95\xe8\x80\x8c\xe5\xbc\xba\xe5\xa4" -"\xa7\xef\xbc\x8c\xe9\x80\x82\xe5\x90\x88\xe5\x90\x84\xe7\xa7\x8d" -"\xe4\xba\xba\xe5\xa3\xab\xe5\xad\xa6\xe4\xb9\xa0\xe4\xbd\xbf\xe7" -"\x94\xa8\xe3\x80\x82\xe7\x9b\xae\xe5\x89\x8d\xef\xbc\x8c\xe5\x9f" -"\xba\xe4\xba\x8e\xe8\xbf\x99\x0a\xe7\xa7\x8d\xe8\xaf\xad\xe8\xa8" -"\x80\xe7\x9a\x84\xe7\x9b\xb8\xe5\x85\xb3\xe6\x8a\x80\xe6\x9c\xaf" -"\xe6\xad\xa3\xe5\x9c\xa8\xe9\xa3\x9e\xe9\x80\x9f\xe7\x9a\x84\xe5" -"\x8f\x91\xe5\xb1\x95\xef\xbc\x8c\xe7\x94\xa8\xe6\x88\xb7\xe6\x95" -"\xb0\xe9\x87\x8f\xe6\x80\xa5\xe5\x89\xa7\xe6\x89\xa9\xe5\xa4\xa7" -"\xef\xbc\x8c\xe7\x9b\xb8\xe5\x85\xb3\xe7\x9a\x84\xe8\xb5\x84\xe6" -"\xba\x90\xe9\x9d\x9e\xe5\xb8\xb8\xe5\xa4\x9a\xe3\x80\x82\x0a\x0a"), -'gbk': ( -"\x50\x79\x74\x68\x6f\x6e\xa3\xa8\xc5\xc9\xc9\xad\xa3\xa9\xd3\xef" -"\xd1\xd4\xca\xc7\xd2\xbb\xd6\xd6\xb9\xa6\xc4\xdc\xc7\xbf\xb4\xf3" -"\xb6\xf8\xcd\xea\xc9\xc6\xb5\xc4\xcd\xa8\xd3\xc3\xd0\xcd\xbc\xc6" -"\xcb\xe3\xbb\xfa\xb3\xcc\xd0\xf2\xc9\xe8\xbc\xc6\xd3\xef\xd1\xd4" -"\xa3\xac\x0a\xd2\xd1\xbe\xad\xbe\xdf\xd3\xd0\xca\xae\xb6\xe0\xc4" -"\xea\xb5\xc4\xb7\xa2\xd5\xb9\xc0\xfa\xca\xb7\xa3\xac\xb3\xc9\xca" -"\xec\xc7\xd2\xce\xc8\xb6\xa8\xa1\xa3\xd5\xe2\xd6\xd6\xd3\xef\xd1" -"\xd4\xbe\xdf\xd3\xd0\xb7\xc7\xb3\xa3\xbc\xf2\xbd\xdd\xb6\xf8\xc7" -"\xe5\xce\xfa\x0a\xb5\xc4\xd3\xef\xb7\xa8\xcc\xd8\xb5\xe3\xa3\xac" -"\xca\xca\xba\xcf\xcd\xea\xb3\xc9\xb8\xf7\xd6\xd6\xb8\xdf\xb2\xe3" -"\xc8\xce\xce\xf1\xa3\xac\xbc\xb8\xba\xf5\xbf\xc9\xd2\xd4\xd4\xda" -"\xcb\xf9\xd3\xd0\xb5\xc4\xb2\xd9\xd7\xf7\xcf\xb5\xcd\xb3\xd6\xd0" -"\x0a\xd4\xcb\xd0\xd0\xa1\xa3\xd5\xe2\xd6\xd6\xd3\xef\xd1\xd4\xbc" -"\xf2\xb5\xa5\xb6\xf8\xc7\xbf\xb4\xf3\xa3\xac\xca\xca\xba\xcf\xb8" -"\xf7\xd6\xd6\xc8\xcb\xca\xbf\xd1\xa7\xcf\xb0\xca\xb9\xd3\xc3\xa1" -"\xa3\xc4\xbf\xc7\xb0\xa3\xac\xbb\xf9\xd3\xda\xd5\xe2\x0a\xd6\xd6" -"\xd3\xef\xd1\xd4\xb5\xc4\xcf\xe0\xb9\xd8\xbc\xbc\xca\xf5\xd5\xfd" -"\xd4\xda\xb7\xc9\xcb\xd9\xb5\xc4\xb7\xa2\xd5\xb9\xa3\xac\xd3\xc3" -"\xbb\xa7\xca\xfd\xc1\xbf\xbc\xb1\xbe\xe7\xc0\xa9\xb4\xf3\xa3\xac" -"\xcf\xe0\xb9\xd8\xb5\xc4\xd7\xca\xd4\xb4\xb7\xc7\xb3\xa3\xb6\xe0" 
-"\xa1\xa3\x0a\xc8\xe7\xba\xce\xd4\xda\x20\x50\x79\x74\x68\x6f\x6e" -"\x20\xd6\xd0\xca\xb9\xd3\xc3\xbc\xc8\xd3\xd0\xb5\xc4\x20\x43\x20" -"\x6c\x69\x62\x72\x61\x72\x79\x3f\x0a\xa1\xa1\xd4\xda\xd9\x59\xd3" -"\x8d\xbf\xc6\xbc\xbc\xbf\xec\xcb\xd9\xb0\x6c\xd5\xb9\xb5\xc4\xbd" -"\xf1\xcc\xec\x2c\x20\xe9\x5f\xb0\x6c\xbc\xb0\x9c\x79\xd4\x87\xdc" -"\x9b\xf3\x77\xb5\xc4\xcb\xd9\xb6\xc8\xca\xc7\xb2\xbb\xc8\xdd\xba" -"\xf6\xd2\x95\xb5\xc4\x0a\xd5\x6e\xee\x7d\x2e\x20\x9e\xe9\xbc\xd3" -"\xbf\xec\xe9\x5f\xb0\x6c\xbc\xb0\x9c\x79\xd4\x87\xb5\xc4\xcb\xd9" -"\xb6\xc8\x2c\x20\xce\xd2\x82\x83\xb1\xe3\xb3\xa3\xcf\xa3\xcd\xfb" -"\xc4\xdc\xc0\xfb\xd3\xc3\xd2\xbb\xd0\xa9\xd2\xd1\xe9\x5f\xb0\x6c" -"\xba\xc3\xb5\xc4\x0a\x6c\x69\x62\x72\x61\x72\x79\x2c\x20\x81\x4b" -"\xd3\xd0\xd2\xbb\x82\x80\x20\x66\x61\x73\x74\x20\x70\x72\x6f\x74" -"\x6f\x74\x79\x70\x69\x6e\x67\x20\xb5\xc4\x20\x70\x72\x6f\x67\x72" -"\x61\x6d\x6d\x69\x6e\x67\x20\x6c\x61\x6e\x67\x75\x61\x67\x65\x20" -"\xbf\xc9\x0a\xb9\xa9\xca\xb9\xd3\xc3\x2e\x20\xc4\xbf\xc7\xb0\xd3" -"\xd0\xd4\x53\xd4\x53\xb6\xe0\xb6\xe0\xb5\xc4\x20\x6c\x69\x62\x72" -"\x61\x72\x79\x20\xca\xc7\xd2\xd4\x20\x43\x20\x8c\x91\xb3\xc9\x2c" -"\x20\xb6\xf8\x20\x50\x79\x74\x68\x6f\x6e\x20\xca\xc7\xd2\xbb\x82" -"\x80\x0a\x66\x61\x73\x74\x20\x70\x72\x6f\x74\x6f\x74\x79\x70\x69" -"\x6e\x67\x20\xb5\xc4\x20\x70\x72\x6f\x67\x72\x61\x6d\x6d\x69\x6e" -"\x67\x20\x6c\x61\x6e\x67\x75\x61\x67\x65\x2e\x20\xb9\xca\xce\xd2" -"\x82\x83\xcf\xa3\xcd\xfb\xc4\xdc\x8c\xa2\xbc\xc8\xd3\xd0\xb5\xc4" -"\x0a\x43\x20\x6c\x69\x62\x72\x61\x72\x79\x20\xc4\xc3\xb5\xbd\x20" -"\x50\x79\x74\x68\x6f\x6e\x20\xb5\xc4\xad\x68\xbe\xb3\xd6\xd0\x9c" -"\x79\xd4\x87\xbc\xb0\xd5\xfb\xba\xcf\x2e\x20\xc6\xe4\xd6\xd0\xd7" -"\xee\xd6\xf7\xd2\xaa\xd2\xb2\xca\xc7\xce\xd2\x82\x83\xcb\xf9\x0a" -"\xd2\xaa\xd3\x91\xd5\x93\xb5\xc4\x86\x96\xee\x7d\xbe\xcd\xca\xc7" -"\x3a\x0a\x0a", -"\x50\x79\x74\x68\x6f\x6e\xef\xbc\x88\xe6\xb4\xbe\xe6\xa3\xae\xef" -"\xbc\x89\xe8\xaf\xad\xe8\xa8\x80\xe6\x98\xaf\xe4\xb8\x80\xe7\xa7" -"\x8d\xe5\x8a\x9f\xe8\x83\xbd\xe5\xbc\xba\xe5\xa4\xa7\xe8\x80\x8c" -"\xe5\xae\x8c\xe5\x96\x84\xe7\x9a\x84\xe9\x80\x9a\xe7\x94\xa8\xe5" -"\x9e\x8b\xe8\xae\xa1\xe7\xae\x97\xe6\x9c\xba\xe7\xa8\x8b\xe5\xba" -"\x8f\xe8\xae\xbe\xe8\xae\xa1\xe8\xaf\xad\xe8\xa8\x80\xef\xbc\x8c" -"\x0a\xe5\xb7\xb2\xe7\xbb\x8f\xe5\x85\xb7\xe6\x9c\x89\xe5\x8d\x81" -"\xe5\xa4\x9a\xe5\xb9\xb4\xe7\x9a\x84\xe5\x8f\x91\xe5\xb1\x95\xe5" -"\x8e\x86\xe5\x8f\xb2\xef\xbc\x8c\xe6\x88\x90\xe7\x86\x9f\xe4\xb8" -"\x94\xe7\xa8\xb3\xe5\xae\x9a\xe3\x80\x82\xe8\xbf\x99\xe7\xa7\x8d" -"\xe8\xaf\xad\xe8\xa8\x80\xe5\x85\xb7\xe6\x9c\x89\xe9\x9d\x9e\xe5" -"\xb8\xb8\xe7\xae\x80\xe6\x8d\xb7\xe8\x80\x8c\xe6\xb8\x85\xe6\x99" -"\xb0\x0a\xe7\x9a\x84\xe8\xaf\xad\xe6\xb3\x95\xe7\x89\xb9\xe7\x82" -"\xb9\xef\xbc\x8c\xe9\x80\x82\xe5\x90\x88\xe5\xae\x8c\xe6\x88\x90" -"\xe5\x90\x84\xe7\xa7\x8d\xe9\xab\x98\xe5\xb1\x82\xe4\xbb\xbb\xe5" -"\x8a\xa1\xef\xbc\x8c\xe5\x87\xa0\xe4\xb9\x8e\xe5\x8f\xaf\xe4\xbb" -"\xa5\xe5\x9c\xa8\xe6\x89\x80\xe6\x9c\x89\xe7\x9a\x84\xe6\x93\x8d" -"\xe4\xbd\x9c\xe7\xb3\xbb\xe7\xbb\x9f\xe4\xb8\xad\x0a\xe8\xbf\x90" -"\xe8\xa1\x8c\xe3\x80\x82\xe8\xbf\x99\xe7\xa7\x8d\xe8\xaf\xad\xe8" -"\xa8\x80\xe7\xae\x80\xe5\x8d\x95\xe8\x80\x8c\xe5\xbc\xba\xe5\xa4" -"\xa7\xef\xbc\x8c\xe9\x80\x82\xe5\x90\x88\xe5\x90\x84\xe7\xa7\x8d" -"\xe4\xba\xba\xe5\xa3\xab\xe5\xad\xa6\xe4\xb9\xa0\xe4\xbd\xbf\xe7" -"\x94\xa8\xe3\x80\x82\xe7\x9b\xae\xe5\x89\x8d\xef\xbc\x8c\xe5\x9f" -"\xba\xe4\xba\x8e\xe8\xbf\x99\x0a\xe7\xa7\x8d\xe8\xaf\xad\xe8\xa8" -"\x80\xe7\x9a\x84\xe7\x9b\xb8\xe5\x85\xb3\xe6\x8a\x80\xe6\x9c\xaf" 
-"\xe6\xad\xa3\xe5\x9c\xa8\xe9\xa3\x9e\xe9\x80\x9f\xe7\x9a\x84\xe5" -"\x8f\x91\xe5\xb1\x95\xef\xbc\x8c\xe7\x94\xa8\xe6\x88\xb7\xe6\x95" -"\xb0\xe9\x87\x8f\xe6\x80\xa5\xe5\x89\xa7\xe6\x89\xa9\xe5\xa4\xa7" -"\xef\xbc\x8c\xe7\x9b\xb8\xe5\x85\xb3\xe7\x9a\x84\xe8\xb5\x84\xe6" -"\xba\x90\xe9\x9d\x9e\xe5\xb8\xb8\xe5\xa4\x9a\xe3\x80\x82\x0a\xe5" -"\xa6\x82\xe4\xbd\x95\xe5\x9c\xa8\x20\x50\x79\x74\x68\x6f\x6e\x20" -"\xe4\xb8\xad\xe4\xbd\xbf\xe7\x94\xa8\xe6\x97\xa2\xe6\x9c\x89\xe7" -"\x9a\x84\x20\x43\x20\x6c\x69\x62\x72\x61\x72\x79\x3f\x0a\xe3\x80" -"\x80\xe5\x9c\xa8\xe8\xb3\x87\xe8\xa8\x8a\xe7\xa7\x91\xe6\x8a\x80" -"\xe5\xbf\xab\xe9\x80\x9f\xe7\x99\xbc\xe5\xb1\x95\xe7\x9a\x84\xe4" -"\xbb\x8a\xe5\xa4\xa9\x2c\x20\xe9\x96\x8b\xe7\x99\xbc\xe5\x8f\x8a" -"\xe6\xb8\xac\xe8\xa9\xa6\xe8\xbb\x9f\xe9\xab\x94\xe7\x9a\x84\xe9" -"\x80\x9f\xe5\xba\xa6\xe6\x98\xaf\xe4\xb8\x8d\xe5\xae\xb9\xe5\xbf" -"\xbd\xe8\xa6\x96\xe7\x9a\x84\x0a\xe8\xaa\xb2\xe9\xa1\x8c\x2e\x20" -"\xe7\x82\xba\xe5\x8a\xa0\xe5\xbf\xab\xe9\x96\x8b\xe7\x99\xbc\xe5" -"\x8f\x8a\xe6\xb8\xac\xe8\xa9\xa6\xe7\x9a\x84\xe9\x80\x9f\xe5\xba" -"\xa6\x2c\x20\xe6\x88\x91\xe5\x80\x91\xe4\xbe\xbf\xe5\xb8\xb8\xe5" -"\xb8\x8c\xe6\x9c\x9b\xe8\x83\xbd\xe5\x88\xa9\xe7\x94\xa8\xe4\xb8" -"\x80\xe4\xba\x9b\xe5\xb7\xb2\xe9\x96\x8b\xe7\x99\xbc\xe5\xa5\xbd" -"\xe7\x9a\x84\x0a\x6c\x69\x62\x72\x61\x72\x79\x2c\x20\xe4\xb8\xa6" -"\xe6\x9c\x89\xe4\xb8\x80\xe5\x80\x8b\x20\x66\x61\x73\x74\x20\x70" -"\x72\x6f\x74\x6f\x74\x79\x70\x69\x6e\x67\x20\xe7\x9a\x84\x20\x70" -"\x72\x6f\x67\x72\x61\x6d\x6d\x69\x6e\x67\x20\x6c\x61\x6e\x67\x75" -"\x61\x67\x65\x20\xe5\x8f\xaf\x0a\xe4\xbe\x9b\xe4\xbd\xbf\xe7\x94" -"\xa8\x2e\x20\xe7\x9b\xae\xe5\x89\x8d\xe6\x9c\x89\xe8\xa8\xb1\xe8" -"\xa8\xb1\xe5\xa4\x9a\xe5\xa4\x9a\xe7\x9a\x84\x20\x6c\x69\x62\x72" -"\x61\x72\x79\x20\xe6\x98\xaf\xe4\xbb\xa5\x20\x43\x20\xe5\xaf\xab" -"\xe6\x88\x90\x2c\x20\xe8\x80\x8c\x20\x50\x79\x74\x68\x6f\x6e\x20" -"\xe6\x98\xaf\xe4\xb8\x80\xe5\x80\x8b\x0a\x66\x61\x73\x74\x20\x70" -"\x72\x6f\x74\x6f\x74\x79\x70\x69\x6e\x67\x20\xe7\x9a\x84\x20\x70" -"\x72\x6f\x67\x72\x61\x6d\x6d\x69\x6e\x67\x20\x6c\x61\x6e\x67\x75" -"\x61\x67\x65\x2e\x20\xe6\x95\x85\xe6\x88\x91\xe5\x80\x91\xe5\xb8" -"\x8c\xe6\x9c\x9b\xe8\x83\xbd\xe5\xb0\x87\xe6\x97\xa2\xe6\x9c\x89" -"\xe7\x9a\x84\x0a\x43\x20\x6c\x69\x62\x72\x61\x72\x79\x20\xe6\x8b" -"\xbf\xe5\x88\xb0\x20\x50\x79\x74\x68\x6f\x6e\x20\xe7\x9a\x84\xe7" -"\x92\xb0\xe5\xa2\x83\xe4\xb8\xad\xe6\xb8\xac\xe8\xa9\xa6\xe5\x8f" -"\x8a\xe6\x95\xb4\xe5\x90\x88\x2e\x20\xe5\x85\xb6\xe4\xb8\xad\xe6" -"\x9c\x80\xe4\xb8\xbb\xe8\xa6\x81\xe4\xb9\x9f\xe6\x98\xaf\xe6\x88" -"\x91\xe5\x80\x91\xe6\x89\x80\x0a\xe8\xa6\x81\xe8\xa8\x8e\xe8\xab" -"\x96\xe7\x9a\x84\xe5\x95\x8f\xe9\xa1\x8c\xe5\xb0\xb1\xe6\x98\xaf" -"\x3a\x0a\x0a"), -'johab': ( -"\x99\xb1\xa4\x77\x88\x62\xd0\x61\x20\xcd\x5c\xaf\xa1\xc5\xa9\x9c" -"\x61\x0a\x0a\xdc\xc0\xdc\xc0\x90\x73\x21\x21\x20\xf1\x67\xe2\x9c" -"\xf0\x55\xcc\x81\xa3\x89\x9f\x85\x8a\xa1\x20\xdc\xde\xdc\xd3\xd2" -"\x7a\xd9\xaf\xd9\xaf\xd9\xaf\x20\x8b\x77\x96\xd3\x20\xdc\xd1\x95" -"\x81\x20\xdc\xc0\x2e\x20\x2e\x0a\xed\x3c\xb5\x77\xdc\xd1\x93\x77" -"\xd2\x73\x20\x2e\x20\x2e\x20\x2e\x20\x2e\x20\xac\xe1\xb6\x89\x9e" -"\xa1\x20\x95\x65\xd0\x62\xf0\xe0\x20\xe0\x3b\xd2\x7a\x20\x21\x20" -"\x21\x20\x21\x87\x41\x2e\x87\x41\x0a\xd3\x61\xd3\x61\xd3\x61\x20" -"\x88\x41\x88\x41\x88\x41\xd9\x69\x87\x41\x5f\x87\x41\x20\xb4\xe1" -"\x9f\x9a\x20\xc8\xa1\xc5\xc1\x8b\x7a\x20\x95\x61\xb7\x77\x20\xc3" -"\x97\xe2\x9c\x97\x69\xf0\xe0\x20\xdc\xc0\x97\x61\x8b\x7a\x0a\xac" 
-"\xe9\x9f\x7a\x20\xe0\x3b\xd2\x7a\x20\x2e\x20\x2e\x20\x2e\x20\x2e" -"\x20\x8a\x89\xb4\x81\xae\xba\x20\xdc\xd1\x8a\xa1\x20\xdc\xde\x9f" -"\x89\xdc\xc2\x8b\x7a\x20\xf1\x67\xf1\x62\xf5\x49\xed\xfc\xf3\xe9" -"\x8c\x61\xbb\x9a\x0a\xb5\xc1\xb2\xa1\xd2\x7a\x20\x21\x20\x21\x20" -"\xed\x3c\xb5\x77\xdc\xd1\x20\xe0\x3b\x93\x77\x8a\xa1\x20\xd9\x69" -"\xea\xbe\x89\xc5\x20\xb4\xf4\x93\x77\x8a\xa1\x93\x77\x20\xed\x3c" -"\x93\x77\x96\xc1\xd2\x7a\x20\x8b\x69\xb4\x81\x97\x7a\x0a\xdc\xde" -"\x9d\x61\x97\x41\xe2\x9c\x20\xaf\x81\xce\xa1\xae\xa1\xd2\x7a\x20" -"\xb4\xe1\x9f\x9a\x20\xf1\x67\xf1\x62\xf5\x49\xed\xfc\xf3\xe9\xaf" -"\x82\xdc\xef\x97\x69\xb4\x7a\x21\x21\x20\xdc\xc0\xdc\xc0\x90\x73" -"\xd9\xbd\x20\xd9\x62\xd9\x62\x2a\x0a\x0a", -"\xeb\x98\xa0\xeb\xb0\xa9\xea\xb0\x81\xed\x95\x98\x20\xed\x8e\xb2" -"\xec\x8b\x9c\xec\xbd\x9c\xeb\x9d\xbc\x0a\x0a\xe3\x89\xaf\xe3\x89" -"\xaf\xeb\x82\xa9\x21\x21\x20\xe5\x9b\xa0\xe4\xb9\x9d\xe6\x9c\x88" -"\xed\x8c\xa8\xeb\xaf\xa4\xeb\xa6\x94\xea\xb6\x88\x20\xe2\x93\xa1" -"\xe2\x93\x96\xed\x9b\x80\xc2\xbf\xc2\xbf\xc2\xbf\x20\xea\xb8\x8d" -"\xeb\x92\x99\x20\xe2\x93\x94\xeb\x8e\xa8\x20\xe3\x89\xaf\x2e\x20" -"\x2e\x0a\xe4\xba\x9e\xec\x98\x81\xe2\x93\x94\xeb\x8a\xa5\xed\x9a" -"\xb9\x20\x2e\x20\x2e\x20\x2e\x20\x2e\x20\xec\x84\x9c\xec\x9a\xb8" -"\xeb\xa4\x84\x20\xeb\x8e\x90\xed\x95\x99\xe4\xb9\x99\x20\xe5\xae" -"\xb6\xed\x9b\x80\x20\x21\x20\x21\x20\x21\xe3\x85\xa0\x2e\xe3\x85" -"\xa0\x0a\xed\x9d\x90\xed\x9d\x90\xed\x9d\x90\x20\xe3\x84\xb1\xe3" -"\x84\xb1\xe3\x84\xb1\xe2\x98\x86\xe3\x85\xa0\x5f\xe3\x85\xa0\x20" -"\xec\x96\xb4\xeb\xa6\xa8\x20\xed\x83\xb8\xec\xbd\xb0\xea\xb8\x90" -"\x20\xeb\x8e\x8c\xec\x9d\x91\x20\xec\xb9\x91\xe4\xb9\x9d\xeb\x93" -"\xa4\xe4\xb9\x99\x20\xe3\x89\xaf\xeb\x93\x9c\xea\xb8\x90\x0a\xec" -"\x84\xa4\xeb\xa6\x8c\x20\xe5\xae\xb6\xed\x9b\x80\x20\x2e\x20\x2e" -"\x20\x2e\x20\x2e\x20\xea\xb5\xb4\xec\x95\xa0\xec\x89\x8c\x20\xe2" -"\x93\x94\xea\xb6\x88\x20\xe2\x93\xa1\xeb\xa6\x98\xe3\x89\xb1\xea" -"\xb8\x90\x20\xe5\x9b\xa0\xe4\xbb\x81\xe5\xb7\x9d\xef\xa6\x81\xe4" -"\xb8\xad\xea\xb9\x8c\xec\xa6\xbc\x0a\xec\x99\x80\xec\x92\x80\xed" -"\x9b\x80\x20\x21\x20\x21\x20\xe4\xba\x9e\xec\x98\x81\xe2\x93\x94" -"\x20\xe5\xae\xb6\xeb\x8a\xa5\xea\xb6\x88\x20\xe2\x98\x86\xe4\xb8" -"\x8a\xea\xb4\x80\x20\xec\x97\x86\xeb\x8a\xa5\xea\xb6\x88\xeb\x8a" -"\xa5\x20\xe4\xba\x9e\xeb\x8a\xa5\xeb\x92\x88\xed\x9b\x80\x20\xea" -"\xb8\x80\xec\x95\xa0\xeb\x93\xb4\x0a\xe2\x93\xa1\xeb\xa0\xa4\xeb" -"\x93\x80\xe4\xb9\x9d\x20\xec\x8b\x80\xed\x92\x94\xec\x88\xb4\xed" -"\x9b\x80\x20\xec\x96\xb4\xeb\xa6\xa8\x20\xe5\x9b\xa0\xe4\xbb\x81" -"\xe5\xb7\x9d\xef\xa6\x81\xe4\xb8\xad\xec\x8b\x81\xe2\x91\xa8\xeb" -"\x93\xa4\xec\x95\x9c\x21\x21\x20\xe3\x89\xaf\xe3\x89\xaf\xeb\x82" -"\xa9\xe2\x99\xa1\x20\xe2\x8c\x92\xe2\x8c\x92\x2a\x0a\x0a"), -'shift_jis': ( -"\x50\x79\x74\x68\x6f\x6e\x20\x82\xcc\x8a\x4a\x94\xad\x82\xcd\x81" -"\x41\x31\x39\x39\x30\x20\x94\x4e\x82\xb2\x82\xeb\x82\xa9\x82\xe7" -"\x8a\x4a\x8e\x6e\x82\xb3\x82\xea\x82\xc4\x82\xa2\x82\xdc\x82\xb7" -"\x81\x42\x0a\x8a\x4a\x94\xad\x8e\xd2\x82\xcc\x20\x47\x75\x69\x64" -"\x6f\x20\x76\x61\x6e\x20\x52\x6f\x73\x73\x75\x6d\x20\x82\xcd\x8b" -"\xb3\x88\xe7\x97\x70\x82\xcc\x83\x76\x83\x8d\x83\x4f\x83\x89\x83" -"\x7e\x83\x93\x83\x4f\x8c\xbe\x8c\xea\x81\x75\x41\x42\x43\x81\x76" -"\x82\xcc\x8a\x4a\x94\xad\x82\xc9\x8e\x51\x89\xc1\x82\xb5\x82\xc4" -"\x82\xa2\x82\xdc\x82\xb5\x82\xbd\x82\xaa\x81\x41\x41\x42\x43\x20" -"\x82\xcd\x8e\xc0\x97\x70\x8f\xe3\x82\xcc\x96\xda\x93\x49\x82\xc9" -"\x82\xcd\x82\xa0\x82\xdc\x82\xe8\x93\x4b\x82\xb5\x82\xc4\x82\xa2" 
-"\x82\xdc\x82\xb9\x82\xf1\x82\xc5\x82\xb5\x82\xbd\x81\x42\x0a\x82" -"\xb1\x82\xcc\x82\xbd\x82\xdf\x81\x41\x47\x75\x69\x64\x6f\x20\x82" -"\xcd\x82\xe6\x82\xe8\x8e\xc0\x97\x70\x93\x49\x82\xc8\x83\x76\x83" -"\x8d\x83\x4f\x83\x89\x83\x7e\x83\x93\x83\x4f\x8c\xbe\x8c\xea\x82" -"\xcc\x8a\x4a\x94\xad\x82\xf0\x8a\x4a\x8e\x6e\x82\xb5\x81\x41\x89" -"\x70\x8d\x91\x20\x42\x42\x53\x20\x95\xfa\x91\x97\x82\xcc\x83\x52" -"\x83\x81\x83\x66\x83\x42\x94\xd4\x91\x67\x81\x75\x83\x82\x83\x93" -"\x83\x65\x83\x42\x20\x83\x70\x83\x43\x83\x5c\x83\x93\x81\x76\x82" -"\xcc\x83\x74\x83\x40\x83\x93\x82\xc5\x82\xa0\x82\xe9\x20\x47\x75" -"\x69\x64\x6f\x20\x82\xcd\x82\xb1\x82\xcc\x8c\xbe\x8c\xea\x82\xf0" -"\x81\x75\x50\x79\x74\x68\x6f\x6e\x81\x76\x82\xc6\x96\xbc\x82\xc3" -"\x82\xaf\x82\xdc\x82\xb5\x82\xbd\x81\x42\x0a\x82\xb1\x82\xcc\x82" -"\xe6\x82\xa4\x82\xc8\x94\x77\x8c\x69\x82\xa9\x82\xe7\x90\xb6\x82" -"\xdc\x82\xea\x82\xbd\x20\x50\x79\x74\x68\x6f\x6e\x20\x82\xcc\x8c" -"\xbe\x8c\xea\x90\xdd\x8c\x76\x82\xcd\x81\x41\x81\x75\x83\x56\x83" -"\x93\x83\x76\x83\x8b\x81\x76\x82\xc5\x81\x75\x8f\x4b\x93\xbe\x82" -"\xaa\x97\x65\x88\xd5\x81\x76\x82\xc6\x82\xa2\x82\xa4\x96\xda\x95" -"\x57\x82\xc9\x8f\x64\x93\x5f\x82\xaa\x92\x75\x82\xa9\x82\xea\x82" -"\xc4\x82\xa2\x82\xdc\x82\xb7\x81\x42\x0a\x91\xbd\x82\xad\x82\xcc" -"\x83\x58\x83\x4e\x83\x8a\x83\x76\x83\x67\x8c\x6e\x8c\xbe\x8c\xea" -"\x82\xc5\x82\xcd\x83\x86\x81\x5b\x83\x55\x82\xcc\x96\xda\x90\xe6" -"\x82\xcc\x97\x98\x95\xd6\x90\xab\x82\xf0\x97\x44\x90\xe6\x82\xb5" -"\x82\xc4\x90\x46\x81\x58\x82\xc8\x8b\x40\x94\x5c\x82\xf0\x8c\xbe" -"\x8c\xea\x97\x76\x91\x66\x82\xc6\x82\xb5\x82\xc4\x8e\xe6\x82\xe8" -"\x93\xfc\x82\xea\x82\xe9\x8f\xea\x8d\x87\x82\xaa\x91\xbd\x82\xa2" -"\x82\xcc\x82\xc5\x82\xb7\x82\xaa\x81\x41\x50\x79\x74\x68\x6f\x6e" -"\x20\x82\xc5\x82\xcd\x82\xbb\x82\xa4\x82\xa2\x82\xc1\x82\xbd\x8f" -"\xac\x8d\xd7\x8d\x48\x82\xaa\x92\xc7\x89\xc1\x82\xb3\x82\xea\x82" -"\xe9\x82\xb1\x82\xc6\x82\xcd\x82\xa0\x82\xdc\x82\xe8\x82\xa0\x82" -"\xe8\x82\xdc\x82\xb9\x82\xf1\x81\x42\x0a\x8c\xbe\x8c\xea\x8e\xa9" -"\x91\xcc\x82\xcc\x8b\x40\x94\x5c\x82\xcd\x8d\xc5\x8f\xac\x8c\xc0" -"\x82\xc9\x89\x9f\x82\xb3\x82\xa6\x81\x41\x95\x4b\x97\x76\x82\xc8" -"\x8b\x40\x94\x5c\x82\xcd\x8a\x67\x92\xa3\x83\x82\x83\x57\x83\x85" -"\x81\x5b\x83\x8b\x82\xc6\x82\xb5\x82\xc4\x92\xc7\x89\xc1\x82\xb7" -"\x82\xe9\x81\x41\x82\xc6\x82\xa2\x82\xa4\x82\xcc\x82\xaa\x20\x50" -"\x79\x74\x68\x6f\x6e\x20\x82\xcc\x83\x7c\x83\x8a\x83\x56\x81\x5b" -"\x82\xc5\x82\xb7\x81\x42\x0a\x0a", -"\x50\x79\x74\x68\x6f\x6e\x20\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba" -"\xe3\x81\xaf\xe3\x80\x81\x31\x39\x39\x30\x20\xe5\xb9\xb4\xe3\x81" -"\x94\xe3\x82\x8d\xe3\x81\x8b\xe3\x82\x89\xe9\x96\x8b\xe5\xa7\x8b" -"\xe3\x81\x95\xe3\x82\x8c\xe3\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3" -"\x81\x99\xe3\x80\x82\x0a\xe9\x96\x8b\xe7\x99\xba\xe8\x80\x85\xe3" -"\x81\xae\x20\x47\x75\x69\x64\x6f\x20\x76\x61\x6e\x20\x52\x6f\x73" -"\x73\x75\x6d\x20\xe3\x81\xaf\xe6\x95\x99\xe8\x82\xb2\xe7\x94\xa8" -"\xe3\x81\xae\xe3\x83\x97\xe3\x83\xad\xe3\x82\xb0\xe3\x83\xa9\xe3" -"\x83\x9f\xe3\x83\xb3\xe3\x82\xb0\xe8\xa8\x80\xe8\xaa\x9e\xe3\x80" -"\x8c\x41\x42\x43\xe3\x80\x8d\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba" -"\xe3\x81\xab\xe5\x8f\x82\xe5\x8a\xa0\xe3\x81\x97\xe3\x81\xa6\xe3" -"\x81\x84\xe3\x81\xbe\xe3\x81\x97\xe3\x81\x9f\xe3\x81\x8c\xe3\x80" -"\x81\x41\x42\x43\x20\xe3\x81\xaf\xe5\xae\x9f\xe7\x94\xa8\xe4\xb8" -"\x8a\xe3\x81\xae\xe7\x9b\xae\xe7\x9a\x84\xe3\x81\xab\xe3\x81\xaf" -"\xe3\x81\x82\xe3\x81\xbe\xe3\x82\x8a\xe9\x81\xa9\xe3\x81\x97\xe3" 
-"\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3\x81\x9b\xe3\x82\x93\xe3\x81" -"\xa7\xe3\x81\x97\xe3\x81\x9f\xe3\x80\x82\x0a\xe3\x81\x93\xe3\x81" -"\xae\xe3\x81\x9f\xe3\x82\x81\xe3\x80\x81\x47\x75\x69\x64\x6f\x20" -"\xe3\x81\xaf\xe3\x82\x88\xe3\x82\x8a\xe5\xae\x9f\xe7\x94\xa8\xe7" -"\x9a\x84\xe3\x81\xaa\xe3\x83\x97\xe3\x83\xad\xe3\x82\xb0\xe3\x83" -"\xa9\xe3\x83\x9f\xe3\x83\xb3\xe3\x82\xb0\xe8\xa8\x80\xe8\xaa\x9e" -"\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba\xe3\x82\x92\xe9\x96\x8b\xe5" -"\xa7\x8b\xe3\x81\x97\xe3\x80\x81\xe8\x8b\xb1\xe5\x9b\xbd\x20\x42" -"\x42\x53\x20\xe6\x94\xbe\xe9\x80\x81\xe3\x81\xae\xe3\x82\xb3\xe3" -"\x83\xa1\xe3\x83\x87\xe3\x82\xa3\xe7\x95\xaa\xe7\xb5\x84\xe3\x80" -"\x8c\xe3\x83\xa2\xe3\x83\xb3\xe3\x83\x86\xe3\x82\xa3\x20\xe3\x83" -"\x91\xe3\x82\xa4\xe3\x82\xbd\xe3\x83\xb3\xe3\x80\x8d\xe3\x81\xae" -"\xe3\x83\x95\xe3\x82\xa1\xe3\x83\xb3\xe3\x81\xa7\xe3\x81\x82\xe3" -"\x82\x8b\x20\x47\x75\x69\x64\x6f\x20\xe3\x81\xaf\xe3\x81\x93\xe3" -"\x81\xae\xe8\xa8\x80\xe8\xaa\x9e\xe3\x82\x92\xe3\x80\x8c\x50\x79" -"\x74\x68\x6f\x6e\xe3\x80\x8d\xe3\x81\xa8\xe5\x90\x8d\xe3\x81\xa5" -"\xe3\x81\x91\xe3\x81\xbe\xe3\x81\x97\xe3\x81\x9f\xe3\x80\x82\x0a" -"\xe3\x81\x93\xe3\x81\xae\xe3\x82\x88\xe3\x81\x86\xe3\x81\xaa\xe8" -"\x83\x8c\xe6\x99\xaf\xe3\x81\x8b\xe3\x82\x89\xe7\x94\x9f\xe3\x81" -"\xbe\xe3\x82\x8c\xe3\x81\x9f\x20\x50\x79\x74\x68\x6f\x6e\x20\xe3" -"\x81\xae\xe8\xa8\x80\xe8\xaa\x9e\xe8\xa8\xad\xe8\xa8\x88\xe3\x81" -"\xaf\xe3\x80\x81\xe3\x80\x8c\xe3\x82\xb7\xe3\x83\xb3\xe3\x83\x97" -"\xe3\x83\xab\xe3\x80\x8d\xe3\x81\xa7\xe3\x80\x8c\xe7\xbf\x92\xe5" -"\xbe\x97\xe3\x81\x8c\xe5\xae\xb9\xe6\x98\x93\xe3\x80\x8d\xe3\x81" -"\xa8\xe3\x81\x84\xe3\x81\x86\xe7\x9b\xae\xe6\xa8\x99\xe3\x81\xab" -"\xe9\x87\x8d\xe7\x82\xb9\xe3\x81\x8c\xe7\xbd\xae\xe3\x81\x8b\xe3" -"\x82\x8c\xe3\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3\x81\x99\xe3\x80" -"\x82\x0a\xe5\xa4\x9a\xe3\x81\x8f\xe3\x81\xae\xe3\x82\xb9\xe3\x82" -"\xaf\xe3\x83\xaa\xe3\x83\x97\xe3\x83\x88\xe7\xb3\xbb\xe8\xa8\x80" -"\xe8\xaa\x9e\xe3\x81\xa7\xe3\x81\xaf\xe3\x83\xa6\xe3\x83\xbc\xe3" -"\x82\xb6\xe3\x81\xae\xe7\x9b\xae\xe5\x85\x88\xe3\x81\xae\xe5\x88" -"\xa9\xe4\xbe\xbf\xe6\x80\xa7\xe3\x82\x92\xe5\x84\xaa\xe5\x85\x88" -"\xe3\x81\x97\xe3\x81\xa6\xe8\x89\xb2\xe3\x80\x85\xe3\x81\xaa\xe6" -"\xa9\x9f\xe8\x83\xbd\xe3\x82\x92\xe8\xa8\x80\xe8\xaa\x9e\xe8\xa6" -"\x81\xe7\xb4\xa0\xe3\x81\xa8\xe3\x81\x97\xe3\x81\xa6\xe5\x8f\x96" -"\xe3\x82\x8a\xe5\x85\xa5\xe3\x82\x8c\xe3\x82\x8b\xe5\xa0\xb4\xe5" -"\x90\x88\xe3\x81\x8c\xe5\xa4\x9a\xe3\x81\x84\xe3\x81\xae\xe3\x81" -"\xa7\xe3\x81\x99\xe3\x81\x8c\xe3\x80\x81\x50\x79\x74\x68\x6f\x6e" -"\x20\xe3\x81\xa7\xe3\x81\xaf\xe3\x81\x9d\xe3\x81\x86\xe3\x81\x84" -"\xe3\x81\xa3\xe3\x81\x9f\xe5\xb0\x8f\xe7\xb4\xb0\xe5\xb7\xa5\xe3" -"\x81\x8c\xe8\xbf\xbd\xe5\x8a\xa0\xe3\x81\x95\xe3\x82\x8c\xe3\x82" -"\x8b\xe3\x81\x93\xe3\x81\xa8\xe3\x81\xaf\xe3\x81\x82\xe3\x81\xbe" -"\xe3\x82\x8a\xe3\x81\x82\xe3\x82\x8a\xe3\x81\xbe\xe3\x81\x9b\xe3" -"\x82\x93\xe3\x80\x82\x0a\xe8\xa8\x80\xe8\xaa\x9e\xe8\x87\xaa\xe4" -"\xbd\x93\xe3\x81\xae\xe6\xa9\x9f\xe8\x83\xbd\xe3\x81\xaf\xe6\x9c" -"\x80\xe5\xb0\x8f\xe9\x99\x90\xe3\x81\xab\xe6\x8a\xbc\xe3\x81\x95" -"\xe3\x81\x88\xe3\x80\x81\xe5\xbf\x85\xe8\xa6\x81\xe3\x81\xaa\xe6" -"\xa9\x9f\xe8\x83\xbd\xe3\x81\xaf\xe6\x8b\xa1\xe5\xbc\xb5\xe3\x83" -"\xa2\xe3\x82\xb8\xe3\x83\xa5\xe3\x83\xbc\xe3\x83\xab\xe3\x81\xa8" -"\xe3\x81\x97\xe3\x81\xa6\xe8\xbf\xbd\xe5\x8a\xa0\xe3\x81\x99\xe3" -"\x82\x8b\xe3\x80\x81\xe3\x81\xa8\xe3\x81\x84\xe3\x81\x86\xe3\x81" -"\xae\xe3\x81\x8c\x20\x50\x79\x74\x68\x6f\x6e\x20\xe3\x81\xae\xe3" 
-"\x83\x9d\xe3\x83\xaa\xe3\x82\xb7\xe3\x83\xbc\xe3\x81\xa7\xe3\x81" -"\x99\xe3\x80\x82\x0a\x0a"), -'shift_jisx0213': ( -"\x50\x79\x74\x68\x6f\x6e\x20\x82\xcc\x8a\x4a\x94\xad\x82\xcd\x81" -"\x41\x31\x39\x39\x30\x20\x94\x4e\x82\xb2\x82\xeb\x82\xa9\x82\xe7" -"\x8a\x4a\x8e\x6e\x82\xb3\x82\xea\x82\xc4\x82\xa2\x82\xdc\x82\xb7" -"\x81\x42\x0a\x8a\x4a\x94\xad\x8e\xd2\x82\xcc\x20\x47\x75\x69\x64" -"\x6f\x20\x76\x61\x6e\x20\x52\x6f\x73\x73\x75\x6d\x20\x82\xcd\x8b" -"\xb3\x88\xe7\x97\x70\x82\xcc\x83\x76\x83\x8d\x83\x4f\x83\x89\x83" -"\x7e\x83\x93\x83\x4f\x8c\xbe\x8c\xea\x81\x75\x41\x42\x43\x81\x76" -"\x82\xcc\x8a\x4a\x94\xad\x82\xc9\x8e\x51\x89\xc1\x82\xb5\x82\xc4" -"\x82\xa2\x82\xdc\x82\xb5\x82\xbd\x82\xaa\x81\x41\x41\x42\x43\x20" -"\x82\xcd\x8e\xc0\x97\x70\x8f\xe3\x82\xcc\x96\xda\x93\x49\x82\xc9" -"\x82\xcd\x82\xa0\x82\xdc\x82\xe8\x93\x4b\x82\xb5\x82\xc4\x82\xa2" -"\x82\xdc\x82\xb9\x82\xf1\x82\xc5\x82\xb5\x82\xbd\x81\x42\x0a\x82" -"\xb1\x82\xcc\x82\xbd\x82\xdf\x81\x41\x47\x75\x69\x64\x6f\x20\x82" -"\xcd\x82\xe6\x82\xe8\x8e\xc0\x97\x70\x93\x49\x82\xc8\x83\x76\x83" -"\x8d\x83\x4f\x83\x89\x83\x7e\x83\x93\x83\x4f\x8c\xbe\x8c\xea\x82" -"\xcc\x8a\x4a\x94\xad\x82\xf0\x8a\x4a\x8e\x6e\x82\xb5\x81\x41\x89" -"\x70\x8d\x91\x20\x42\x42\x53\x20\x95\xfa\x91\x97\x82\xcc\x83\x52" -"\x83\x81\x83\x66\x83\x42\x94\xd4\x91\x67\x81\x75\x83\x82\x83\x93" -"\x83\x65\x83\x42\x20\x83\x70\x83\x43\x83\x5c\x83\x93\x81\x76\x82" -"\xcc\x83\x74\x83\x40\x83\x93\x82\xc5\x82\xa0\x82\xe9\x20\x47\x75" -"\x69\x64\x6f\x20\x82\xcd\x82\xb1\x82\xcc\x8c\xbe\x8c\xea\x82\xf0" -"\x81\x75\x50\x79\x74\x68\x6f\x6e\x81\x76\x82\xc6\x96\xbc\x82\xc3" -"\x82\xaf\x82\xdc\x82\xb5\x82\xbd\x81\x42\x0a\x82\xb1\x82\xcc\x82" -"\xe6\x82\xa4\x82\xc8\x94\x77\x8c\x69\x82\xa9\x82\xe7\x90\xb6\x82" -"\xdc\x82\xea\x82\xbd\x20\x50\x79\x74\x68\x6f\x6e\x20\x82\xcc\x8c" -"\xbe\x8c\xea\x90\xdd\x8c\x76\x82\xcd\x81\x41\x81\x75\x83\x56\x83" -"\x93\x83\x76\x83\x8b\x81\x76\x82\xc5\x81\x75\x8f\x4b\x93\xbe\x82" -"\xaa\x97\x65\x88\xd5\x81\x76\x82\xc6\x82\xa2\x82\xa4\x96\xda\x95" -"\x57\x82\xc9\x8f\x64\x93\x5f\x82\xaa\x92\x75\x82\xa9\x82\xea\x82" -"\xc4\x82\xa2\x82\xdc\x82\xb7\x81\x42\x0a\x91\xbd\x82\xad\x82\xcc" -"\x83\x58\x83\x4e\x83\x8a\x83\x76\x83\x67\x8c\x6e\x8c\xbe\x8c\xea" -"\x82\xc5\x82\xcd\x83\x86\x81\x5b\x83\x55\x82\xcc\x96\xda\x90\xe6" -"\x82\xcc\x97\x98\x95\xd6\x90\xab\x82\xf0\x97\x44\x90\xe6\x82\xb5" -"\x82\xc4\x90\x46\x81\x58\x82\xc8\x8b\x40\x94\x5c\x82\xf0\x8c\xbe" -"\x8c\xea\x97\x76\x91\x66\x82\xc6\x82\xb5\x82\xc4\x8e\xe6\x82\xe8" -"\x93\xfc\x82\xea\x82\xe9\x8f\xea\x8d\x87\x82\xaa\x91\xbd\x82\xa2" -"\x82\xcc\x82\xc5\x82\xb7\x82\xaa\x81\x41\x50\x79\x74\x68\x6f\x6e" -"\x20\x82\xc5\x82\xcd\x82\xbb\x82\xa4\x82\xa2\x82\xc1\x82\xbd\x8f" -"\xac\x8d\xd7\x8d\x48\x82\xaa\x92\xc7\x89\xc1\x82\xb3\x82\xea\x82" -"\xe9\x82\xb1\x82\xc6\x82\xcd\x82\xa0\x82\xdc\x82\xe8\x82\xa0\x82" -"\xe8\x82\xdc\x82\xb9\x82\xf1\x81\x42\x0a\x8c\xbe\x8c\xea\x8e\xa9" -"\x91\xcc\x82\xcc\x8b\x40\x94\x5c\x82\xcd\x8d\xc5\x8f\xac\x8c\xc0" -"\x82\xc9\x89\x9f\x82\xb3\x82\xa6\x81\x41\x95\x4b\x97\x76\x82\xc8" -"\x8b\x40\x94\x5c\x82\xcd\x8a\x67\x92\xa3\x83\x82\x83\x57\x83\x85" -"\x81\x5b\x83\x8b\x82\xc6\x82\xb5\x82\xc4\x92\xc7\x89\xc1\x82\xb7" -"\x82\xe9\x81\x41\x82\xc6\x82\xa2\x82\xa4\x82\xcc\x82\xaa\x20\x50" -"\x79\x74\x68\x6f\x6e\x20\x82\xcc\x83\x7c\x83\x8a\x83\x56\x81\x5b" -"\x82\xc5\x82\xb7\x81\x42\x0a\x0a\x83\x6d\x82\xf5\x20\x83\x9e\x20" -"\x83\x67\x83\x4c\x88\x4b\x88\x79\x20\x98\x83\xfc\xd6\x20\xfc\xd2" -"\xfc\xe6\xfb\xd4\x0a", -"\x50\x79\x74\x68\x6f\x6e\x20\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba" 
-"\xe3\x81\xaf\xe3\x80\x81\x31\x39\x39\x30\x20\xe5\xb9\xb4\xe3\x81" -"\x94\xe3\x82\x8d\xe3\x81\x8b\xe3\x82\x89\xe9\x96\x8b\xe5\xa7\x8b" -"\xe3\x81\x95\xe3\x82\x8c\xe3\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3" -"\x81\x99\xe3\x80\x82\x0a\xe9\x96\x8b\xe7\x99\xba\xe8\x80\x85\xe3" -"\x81\xae\x20\x47\x75\x69\x64\x6f\x20\x76\x61\x6e\x20\x52\x6f\x73" -"\x73\x75\x6d\x20\xe3\x81\xaf\xe6\x95\x99\xe8\x82\xb2\xe7\x94\xa8" -"\xe3\x81\xae\xe3\x83\x97\xe3\x83\xad\xe3\x82\xb0\xe3\x83\xa9\xe3" -"\x83\x9f\xe3\x83\xb3\xe3\x82\xb0\xe8\xa8\x80\xe8\xaa\x9e\xe3\x80" -"\x8c\x41\x42\x43\xe3\x80\x8d\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba" -"\xe3\x81\xab\xe5\x8f\x82\xe5\x8a\xa0\xe3\x81\x97\xe3\x81\xa6\xe3" -"\x81\x84\xe3\x81\xbe\xe3\x81\x97\xe3\x81\x9f\xe3\x81\x8c\xe3\x80" -"\x81\x41\x42\x43\x20\xe3\x81\xaf\xe5\xae\x9f\xe7\x94\xa8\xe4\xb8" -"\x8a\xe3\x81\xae\xe7\x9b\xae\xe7\x9a\x84\xe3\x81\xab\xe3\x81\xaf" -"\xe3\x81\x82\xe3\x81\xbe\xe3\x82\x8a\xe9\x81\xa9\xe3\x81\x97\xe3" -"\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3\x81\x9b\xe3\x82\x93\xe3\x81" -"\xa7\xe3\x81\x97\xe3\x81\x9f\xe3\x80\x82\x0a\xe3\x81\x93\xe3\x81" -"\xae\xe3\x81\x9f\xe3\x82\x81\xe3\x80\x81\x47\x75\x69\x64\x6f\x20" -"\xe3\x81\xaf\xe3\x82\x88\xe3\x82\x8a\xe5\xae\x9f\xe7\x94\xa8\xe7" -"\x9a\x84\xe3\x81\xaa\xe3\x83\x97\xe3\x83\xad\xe3\x82\xb0\xe3\x83" -"\xa9\xe3\x83\x9f\xe3\x83\xb3\xe3\x82\xb0\xe8\xa8\x80\xe8\xaa\x9e" -"\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba\xe3\x82\x92\xe9\x96\x8b\xe5" -"\xa7\x8b\xe3\x81\x97\xe3\x80\x81\xe8\x8b\xb1\xe5\x9b\xbd\x20\x42" -"\x42\x53\x20\xe6\x94\xbe\xe9\x80\x81\xe3\x81\xae\xe3\x82\xb3\xe3" -"\x83\xa1\xe3\x83\x87\xe3\x82\xa3\xe7\x95\xaa\xe7\xb5\x84\xe3\x80" -"\x8c\xe3\x83\xa2\xe3\x83\xb3\xe3\x83\x86\xe3\x82\xa3\x20\xe3\x83" -"\x91\xe3\x82\xa4\xe3\x82\xbd\xe3\x83\xb3\xe3\x80\x8d\xe3\x81\xae" -"\xe3\x83\x95\xe3\x82\xa1\xe3\x83\xb3\xe3\x81\xa7\xe3\x81\x82\xe3" -"\x82\x8b\x20\x47\x75\x69\x64\x6f\x20\xe3\x81\xaf\xe3\x81\x93\xe3" -"\x81\xae\xe8\xa8\x80\xe8\xaa\x9e\xe3\x82\x92\xe3\x80\x8c\x50\x79" -"\x74\x68\x6f\x6e\xe3\x80\x8d\xe3\x81\xa8\xe5\x90\x8d\xe3\x81\xa5" -"\xe3\x81\x91\xe3\x81\xbe\xe3\x81\x97\xe3\x81\x9f\xe3\x80\x82\x0a" -"\xe3\x81\x93\xe3\x81\xae\xe3\x82\x88\xe3\x81\x86\xe3\x81\xaa\xe8" -"\x83\x8c\xe6\x99\xaf\xe3\x81\x8b\xe3\x82\x89\xe7\x94\x9f\xe3\x81" -"\xbe\xe3\x82\x8c\xe3\x81\x9f\x20\x50\x79\x74\x68\x6f\x6e\x20\xe3" -"\x81\xae\xe8\xa8\x80\xe8\xaa\x9e\xe8\xa8\xad\xe8\xa8\x88\xe3\x81" -"\xaf\xe3\x80\x81\xe3\x80\x8c\xe3\x82\xb7\xe3\x83\xb3\xe3\x83\x97" -"\xe3\x83\xab\xe3\x80\x8d\xe3\x81\xa7\xe3\x80\x8c\xe7\xbf\x92\xe5" -"\xbe\x97\xe3\x81\x8c\xe5\xae\xb9\xe6\x98\x93\xe3\x80\x8d\xe3\x81" -"\xa8\xe3\x81\x84\xe3\x81\x86\xe7\x9b\xae\xe6\xa8\x99\xe3\x81\xab" -"\xe9\x87\x8d\xe7\x82\xb9\xe3\x81\x8c\xe7\xbd\xae\xe3\x81\x8b\xe3" -"\x82\x8c\xe3\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3\x81\x99\xe3\x80" -"\x82\x0a\xe5\xa4\x9a\xe3\x81\x8f\xe3\x81\xae\xe3\x82\xb9\xe3\x82" -"\xaf\xe3\x83\xaa\xe3\x83\x97\xe3\x83\x88\xe7\xb3\xbb\xe8\xa8\x80" -"\xe8\xaa\x9e\xe3\x81\xa7\xe3\x81\xaf\xe3\x83\xa6\xe3\x83\xbc\xe3" -"\x82\xb6\xe3\x81\xae\xe7\x9b\xae\xe5\x85\x88\xe3\x81\xae\xe5\x88" -"\xa9\xe4\xbe\xbf\xe6\x80\xa7\xe3\x82\x92\xe5\x84\xaa\xe5\x85\x88" -"\xe3\x81\x97\xe3\x81\xa6\xe8\x89\xb2\xe3\x80\x85\xe3\x81\xaa\xe6" -"\xa9\x9f\xe8\x83\xbd\xe3\x82\x92\xe8\xa8\x80\xe8\xaa\x9e\xe8\xa6" -"\x81\xe7\xb4\xa0\xe3\x81\xa8\xe3\x81\x97\xe3\x81\xa6\xe5\x8f\x96" -"\xe3\x82\x8a\xe5\x85\xa5\xe3\x82\x8c\xe3\x82\x8b\xe5\xa0\xb4\xe5" -"\x90\x88\xe3\x81\x8c\xe5\xa4\x9a\xe3\x81\x84\xe3\x81\xae\xe3\x81" -"\xa7\xe3\x81\x99\xe3\x81\x8c\xe3\x80\x81\x50\x79\x74\x68\x6f\x6e" 
-"\x20\xe3\x81\xa7\xe3\x81\xaf\xe3\x81\x9d\xe3\x81\x86\xe3\x81\x84" -"\xe3\x81\xa3\xe3\x81\x9f\xe5\xb0\x8f\xe7\xb4\xb0\xe5\xb7\xa5\xe3" -"\x81\x8c\xe8\xbf\xbd\xe5\x8a\xa0\xe3\x81\x95\xe3\x82\x8c\xe3\x82" -"\x8b\xe3\x81\x93\xe3\x81\xa8\xe3\x81\xaf\xe3\x81\x82\xe3\x81\xbe" -"\xe3\x82\x8a\xe3\x81\x82\xe3\x82\x8a\xe3\x81\xbe\xe3\x81\x9b\xe3" -"\x82\x93\xe3\x80\x82\x0a\xe8\xa8\x80\xe8\xaa\x9e\xe8\x87\xaa\xe4" -"\xbd\x93\xe3\x81\xae\xe6\xa9\x9f\xe8\x83\xbd\xe3\x81\xaf\xe6\x9c" -"\x80\xe5\xb0\x8f\xe9\x99\x90\xe3\x81\xab\xe6\x8a\xbc\xe3\x81\x95" -"\xe3\x81\x88\xe3\x80\x81\xe5\xbf\x85\xe8\xa6\x81\xe3\x81\xaa\xe6" -"\xa9\x9f\xe8\x83\xbd\xe3\x81\xaf\xe6\x8b\xa1\xe5\xbc\xb5\xe3\x83" -"\xa2\xe3\x82\xb8\xe3\x83\xa5\xe3\x83\xbc\xe3\x83\xab\xe3\x81\xa8" -"\xe3\x81\x97\xe3\x81\xa6\xe8\xbf\xbd\xe5\x8a\xa0\xe3\x81\x99\xe3" -"\x82\x8b\xe3\x80\x81\xe3\x81\xa8\xe3\x81\x84\xe3\x81\x86\xe3\x81" -"\xae\xe3\x81\x8c\x20\x50\x79\x74\x68\x6f\x6e\x20\xe3\x81\xae\xe3" -"\x83\x9d\xe3\x83\xaa\xe3\x82\xb7\xe3\x83\xbc\xe3\x81\xa7\xe3\x81" -"\x99\xe3\x80\x82\x0a\x0a\xe3\x83\x8e\xe3\x81\x8b\xe3\x82\x9a\x20" -"\xe3\x83\x88\xe3\x82\x9a\x20\xe3\x83\x88\xe3\x82\xad\xef\xa8\xb6" -"\xef\xa8\xb9\x20\xf0\xa1\x9a\xb4\xf0\xaa\x8e\x8c\x20\xe9\xba\x80" -"\xe9\xbd\x81\xf0\xa9\x9b\xb0\x0a"), -} diff --git a/lib-python/2.7/test/crashers/README b/lib-python/2.7/test/crashers/README --- a/lib-python/2.7/test/crashers/README +++ b/lib-python/2.7/test/crashers/README @@ -1,20 +1,16 @@ -This directory only contains tests for outstanding bugs that cause -the interpreter to segfault. Ideally this directory should always -be empty. Sometimes it may not be easy to fix the underlying cause. +This directory only contains tests for outstanding bugs that cause the +interpreter to segfault. Ideally this directory should always be empty, but +sometimes it may not be easy to fix the underlying cause and the bug is deemed +too obscure to invest the effort. Each test should fail when run from the command line: ./python Lib/test/crashers/weakref_in_del.py -Each test should have a link to the bug report: +Put as much info into a docstring or comments to help determine the cause of the +failure, as well as a bugs.python.org issue number if it exists. Particularly +note if the cause is system or environment dependent and what the variables are. - # http://python.org/sf/BUG# - -Put as much info into a docstring or comments to help determine -the cause of the failure. Particularly note if the cause is -system or environment dependent and what the variables are. - -Once the crash is fixed, the test case should be moved into an appropriate -test (even if it was originally from the test suite). This ensures the -regression doesn't happen again. And if it does, it should be easier -to track down. +Once the crash is fixed, the test case should be moved into an appropriate test +(even if it was originally from the test suite). This ensures the regression +doesn't happen again. And if it does, it should be easier to track down. diff --git a/lib-python/2.7/test/crashers/recursion_limit_too_high.py b/lib-python/2.7/test/crashers/recursion_limit_too_high.py --- a/lib-python/2.7/test/crashers/recursion_limit_too_high.py +++ b/lib-python/2.7/test/crashers/recursion_limit_too_high.py @@ -5,7 +5,7 @@ # file handles. # The point of this example is to show that sys.setrecursionlimit() is a -# hack, and not a robust solution. This example simply exercices a path +# hack, and not a robust solution. 
This example simply exercises a path # where it takes many C-level recursions, consuming a lot of stack # space, for each Python-level recursion. So 1000 times this amount of # stack space may be too much for standard platforms already. diff --git a/lib-python/2.7/test/decimaltestdata/and.decTest b/lib-python/2.7/test/decimaltestdata/and.decTest --- a/lib-python/2.7/test/decimaltestdata/and.decTest +++ b/lib-python/2.7/test/decimaltestdata/and.decTest @@ -1,338 +1,338 @@ ------------------------------------------------------------------------- --- and.decTest -- digitwise logical AND -- --- Copyright (c) IBM Corporation, 1981, 2008. All rights reserved. -- ------------------------------------------------------------------------- --- Please see the document "General Decimal Arithmetic Testcases" -- --- at http://www2.hursley.ibm.com/decimal for the description of -- --- these testcases. -- --- -- --- These testcases are experimental ('beta' versions), and they -- --- may contain errors. They are offered on an as-is basis. In -- --- particular, achieving the same results as the tests here is not -- --- a guarantee that an implementation complies with any Standard -- --- or specification. The tests are not exhaustive. -- --- -- --- Please send comments, suggestions, and corrections to the author: -- --- Mike Cowlishaw, IBM Fellow -- --- IBM UK, PO Box 31, Birmingham Road, Warwick CV34 5JL, UK -- --- mfc at uk.ibm.com -- ------------------------------------------------------------------------- -version: 2.59 - -extended: 1 -precision: 9 -rounding: half_up -maxExponent: 999 -minExponent: -999 - --- Sanity check (truth table) -andx001 and 0 0 -> 0 -andx002 and 0 1 -> 0 -andx003 and 1 0 -> 0 -andx004 and 1 1 -> 1 -andx005 and 1100 1010 -> 1000 -andx006 and 1111 10 -> 10 -andx007 and 1111 1010 -> 1010 - --- and at msd and msd-1 -andx010 and 000000000 000000000 -> 0 -andx011 and 000000000 100000000 -> 0 -andx012 and 100000000 000000000 -> 0 -andx013 and 100000000 100000000 -> 100000000 -andx014 and 000000000 000000000 -> 0 -andx015 and 000000000 010000000 -> 0 -andx016 and 010000000 000000000 -> 0 -andx017 and 010000000 010000000 -> 10000000 - --- Various lengths --- 123456789 123456789 123456789 -andx021 and 111111111 111111111 -> 111111111 -andx022 and 111111111111 111111111 -> 111111111 -andx023 and 111111111111 11111111 -> 11111111 -andx024 and 111111111 11111111 -> 11111111 -andx025 and 111111111 1111111 -> 1111111 -andx026 and 111111111111 111111 -> 111111 -andx027 and 111111111111 11111 -> 11111 -andx028 and 111111111111 1111 -> 1111 -andx029 and 111111111111 111 -> 111 -andx031 and 111111111111 11 -> 11 -andx032 and 111111111111 1 -> 1 -andx033 and 111111111111 1111111111 -> 111111111 -andx034 and 11111111111 11111111111 -> 111111111 -andx035 and 1111111111 111111111111 -> 111111111 -andx036 and 111111111 1111111111111 -> 111111111 - -andx040 and 111111111 111111111111 -> 111111111 -andx041 and 11111111 111111111111 -> 11111111 -andx042 and 11111111 111111111 -> 11111111 -andx043 and 1111111 111111111 -> 1111111 -andx044 and 111111 111111111 -> 111111 -andx045 and 11111 111111111 -> 11111 -andx046 and 1111 111111111 -> 1111 -andx047 and 111 111111111 -> 111 -andx048 and 11 111111111 -> 11 -andx049 and 1 111111111 -> 1 - -andx050 and 1111111111 1 -> 1 -andx051 and 111111111 1 -> 1 -andx052 and 11111111 1 -> 1 -andx053 and 1111111 1 -> 1 -andx054 and 111111 1 -> 1 -andx055 and 11111 1 -> 1 -andx056 and 1111 1 -> 1 -andx057 and 111 1 -> 1 -andx058 and 11 1 -> 1 -andx059 and 1 1 -> 1 - -andx060 
and 1111111111 0 -> 0 -andx061 and 111111111 0 -> 0 -andx062 and 11111111 0 -> 0 -andx063 and 1111111 0 -> 0 -andx064 and 111111 0 -> 0 -andx065 and 11111 0 -> 0 -andx066 and 1111 0 -> 0 -andx067 and 111 0 -> 0 -andx068 and 11 0 -> 0 -andx069 and 1 0 -> 0 - -andx070 and 1 1111111111 -> 1 -andx071 and 1 111111111 -> 1 -andx072 and 1 11111111 -> 1 -andx073 and 1 1111111 -> 1 -andx074 and 1 111111 -> 1 -andx075 and 1 11111 -> 1 -andx076 and 1 1111 -> 1 -andx077 and 1 111 -> 1 -andx078 and 1 11 -> 1 -andx079 and 1 1 -> 1 - -andx080 and 0 1111111111 -> 0 -andx081 and 0 111111111 -> 0 -andx082 and 0 11111111 -> 0 -andx083 and 0 1111111 -> 0 -andx084 and 0 111111 -> 0 -andx085 and 0 11111 -> 0 -andx086 and 0 1111 -> 0 -andx087 and 0 111 -> 0 -andx088 and 0 11 -> 0 -andx089 and 0 1 -> 0 - -andx090 and 011111111 111111111 -> 11111111 -andx091 and 101111111 111111111 -> 101111111 -andx092 and 110111111 111111111 -> 110111111 -andx093 and 111011111 111111111 -> 111011111 -andx094 and 111101111 111111111 -> 111101111 -andx095 and 111110111 111111111 -> 111110111 -andx096 and 111111011 111111111 -> 111111011 -andx097 and 111111101 111111111 -> 111111101 -andx098 and 111111110 111111111 -> 111111110 - -andx100 and 111111111 011111111 -> 11111111 -andx101 and 111111111 101111111 -> 101111111 -andx102 and 111111111 110111111 -> 110111111 -andx103 and 111111111 111011111 -> 111011111 -andx104 and 111111111 111101111 -> 111101111 -andx105 and 111111111 111110111 -> 111110111 -andx106 and 111111111 111111011 -> 111111011 -andx107 and 111111111 111111101 -> 111111101 -andx108 and 111111111 111111110 -> 111111110 - --- non-0/1 should not be accepted, nor should signs -andx220 and 111111112 111111111 -> NaN Invalid_operation -andx221 and 333333333 333333333 -> NaN Invalid_operation -andx222 and 555555555 555555555 -> NaN Invalid_operation -andx223 and 777777777 777777777 -> NaN Invalid_operation -andx224 and 999999999 999999999 -> NaN Invalid_operation -andx225 and 222222222 999999999 -> NaN Invalid_operation -andx226 and 444444444 999999999 -> NaN Invalid_operation -andx227 and 666666666 999999999 -> NaN Invalid_operation -andx228 and 888888888 999999999 -> NaN Invalid_operation -andx229 and 999999999 222222222 -> NaN Invalid_operation -andx230 and 999999999 444444444 -> NaN Invalid_operation -andx231 and 999999999 666666666 -> NaN Invalid_operation -andx232 and 999999999 888888888 -> NaN Invalid_operation --- a few randoms -andx240 and 567468689 -934981942 -> NaN Invalid_operation -andx241 and 567367689 934981942 -> NaN Invalid_operation -andx242 and -631917772 -706014634 -> NaN Invalid_operation -andx243 and -756253257 138579234 -> NaN Invalid_operation -andx244 and 835590149 567435400 -> NaN Invalid_operation --- test MSD -andx250 and 200000000 100000000 -> NaN Invalid_operation -andx251 and 700000000 100000000 -> NaN Invalid_operation -andx252 and 800000000 100000000 -> NaN Invalid_operation -andx253 and 900000000 100000000 -> NaN Invalid_operation -andx254 and 200000000 000000000 -> NaN Invalid_operation -andx255 and 700000000 000000000 -> NaN Invalid_operation -andx256 and 800000000 000000000 -> NaN Invalid_operation -andx257 and 900000000 000000000 -> NaN Invalid_operation -andx258 and 100000000 200000000 -> NaN Invalid_operation -andx259 and 100000000 700000000 -> NaN Invalid_operation -andx260 and 100000000 800000000 -> NaN Invalid_operation -andx261 and 100000000 900000000 -> NaN Invalid_operation -andx262 and 000000000 200000000 -> NaN Invalid_operation -andx263 and 000000000 700000000 -> NaN 
Invalid_operation -andx264 and 000000000 800000000 -> NaN Invalid_operation -andx265 and 000000000 900000000 -> NaN Invalid_operation --- test MSD-1 -andx270 and 020000000 100000000 -> NaN Invalid_operation -andx271 and 070100000 100000000 -> NaN Invalid_operation -andx272 and 080010000 100000001 -> NaN Invalid_operation -andx273 and 090001000 100000010 -> NaN Invalid_operation -andx274 and 100000100 020010100 -> NaN Invalid_operation -andx275 and 100000000 070001000 -> NaN Invalid_operation -andx276 and 100000010 080010100 -> NaN Invalid_operation -andx277 and 100000000 090000010 -> NaN Invalid_operation --- test LSD -andx280 and 001000002 100000000 -> NaN Invalid_operation -andx281 and 000000007 100000000 -> NaN Invalid_operation -andx282 and 000000008 100000000 -> NaN Invalid_operation -andx283 and 000000009 100000000 -> NaN Invalid_operation -andx284 and 100000000 000100002 -> NaN Invalid_operation -andx285 and 100100000 001000007 -> NaN Invalid_operation -andx286 and 100010000 010000008 -> NaN Invalid_operation -andx287 and 100001000 100000009 -> NaN Invalid_operation --- test Middie -andx288 and 001020000 100000000 -> NaN Invalid_operation -andx289 and 000070001 100000000 -> NaN Invalid_operation -andx290 and 000080000 100010000 -> NaN Invalid_operation -andx291 and 000090000 100001000 -> NaN Invalid_operation -andx292 and 100000010 000020100 -> NaN Invalid_operation -andx293 and 100100000 000070010 -> NaN Invalid_operation -andx294 and 100010100 000080001 -> NaN Invalid_operation -andx295 and 100001000 000090000 -> NaN Invalid_operation --- signs -andx296 and -100001000 -000000000 -> NaN Invalid_operation -andx297 and -100001000 000010000 -> NaN Invalid_operation -andx298 and 100001000 -000000000 -> NaN Invalid_operation -andx299 and 100001000 000011000 -> 1000 - --- Nmax, Nmin, Ntiny -andx331 and 2 9.99999999E+999 -> NaN Invalid_operation -andx332 and 3 1E-999 -> NaN Invalid_operation -andx333 and 4 1.00000000E-999 -> NaN Invalid_operation -andx334 and 5 1E-1007 -> NaN Invalid_operation -andx335 and 6 -1E-1007 -> NaN Invalid_operation -andx336 and 7 -1.00000000E-999 -> NaN Invalid_operation -andx337 and 8 -1E-999 -> NaN Invalid_operation -andx338 and 9 -9.99999999E+999 -> NaN Invalid_operation -andx341 and 9.99999999E+999 -18 -> NaN Invalid_operation -andx342 and 1E-999 01 -> NaN Invalid_operation -andx343 and 1.00000000E-999 -18 -> NaN Invalid_operation -andx344 and 1E-1007 18 -> NaN Invalid_operation -andx345 and -1E-1007 -10 -> NaN Invalid_operation -andx346 and -1.00000000E-999 18 -> NaN Invalid_operation -andx347 and -1E-999 10 -> NaN Invalid_operation -andx348 and -9.99999999E+999 -18 -> NaN Invalid_operation - --- A few other non-integers -andx361 and 1.0 1 -> NaN Invalid_operation -andx362 and 1E+1 1 -> NaN Invalid_operation -andx363 and 0.0 1 -> NaN Invalid_operation -andx364 and 0E+1 1 -> NaN Invalid_operation -andx365 and 9.9 1 -> NaN Invalid_operation -andx366 and 9E+1 1 -> NaN Invalid_operation -andx371 and 0 1.0 -> NaN Invalid_operation -andx372 and 0 1E+1 -> NaN Invalid_operation -andx373 and 0 0.0 -> NaN Invalid_operation -andx374 and 0 0E+1 -> NaN Invalid_operation -andx375 and 0 9.9 -> NaN Invalid_operation -andx376 and 0 9E+1 -> NaN Invalid_operation - --- All Specials are in error -andx780 and -Inf -Inf -> NaN Invalid_operation -andx781 and -Inf -1000 -> NaN Invalid_operation -andx782 and -Inf -1 -> NaN Invalid_operation -andx783 and -Inf -0 -> NaN Invalid_operation -andx784 and -Inf 0 -> NaN Invalid_operation -andx785 and -Inf 1 -> NaN Invalid_operation 
-andx786 and -Inf 1000 -> NaN Invalid_operation -andx787 and -1000 -Inf -> NaN Invalid_operation -andx788 and -Inf -Inf -> NaN Invalid_operation -andx789 and -1 -Inf -> NaN Invalid_operation -andx790 and -0 -Inf -> NaN Invalid_operation -andx791 and 0 -Inf -> NaN Invalid_operation -andx792 and 1 -Inf -> NaN Invalid_operation -andx793 and 1000 -Inf -> NaN Invalid_operation -andx794 and Inf -Inf -> NaN Invalid_operation - -andx800 and Inf -Inf -> NaN Invalid_operation -andx801 and Inf -1000 -> NaN Invalid_operation -andx802 and Inf -1 -> NaN Invalid_operation -andx803 and Inf -0 -> NaN Invalid_operation -andx804 and Inf 0 -> NaN Invalid_operation -andx805 and Inf 1 -> NaN Invalid_operation -andx806 and Inf 1000 -> NaN Invalid_operation -andx807 and Inf Inf -> NaN Invalid_operation -andx808 and -1000 Inf -> NaN Invalid_operation -andx809 and -Inf Inf -> NaN Invalid_operation -andx810 and -1 Inf -> NaN Invalid_operation -andx811 and -0 Inf -> NaN Invalid_operation -andx812 and 0 Inf -> NaN Invalid_operation -andx813 and 1 Inf -> NaN Invalid_operation -andx814 and 1000 Inf -> NaN Invalid_operation -andx815 and Inf Inf -> NaN Invalid_operation - -andx821 and NaN -Inf -> NaN Invalid_operation -andx822 and NaN -1000 -> NaN Invalid_operation -andx823 and NaN -1 -> NaN Invalid_operation -andx824 and NaN -0 -> NaN Invalid_operation -andx825 and NaN 0 -> NaN Invalid_operation -andx826 and NaN 1 -> NaN Invalid_operation -andx827 and NaN 1000 -> NaN Invalid_operation -andx828 and NaN Inf -> NaN Invalid_operation -andx829 and NaN NaN -> NaN Invalid_operation -andx830 and -Inf NaN -> NaN Invalid_operation -andx831 and -1000 NaN -> NaN Invalid_operation -andx832 and -1 NaN -> NaN Invalid_operation -andx833 and -0 NaN -> NaN Invalid_operation -andx834 and 0 NaN -> NaN Invalid_operation -andx835 and 1 NaN -> NaN Invalid_operation -andx836 and 1000 NaN -> NaN Invalid_operation -andx837 and Inf NaN -> NaN Invalid_operation - -andx841 and sNaN -Inf -> NaN Invalid_operation -andx842 and sNaN -1000 -> NaN Invalid_operation -andx843 and sNaN -1 -> NaN Invalid_operation -andx844 and sNaN -0 -> NaN Invalid_operation -andx845 and sNaN 0 -> NaN Invalid_operation -andx846 and sNaN 1 -> NaN Invalid_operation -andx847 and sNaN 1000 -> NaN Invalid_operation -andx848 and sNaN NaN -> NaN Invalid_operation -andx849 and sNaN sNaN -> NaN Invalid_operation -andx850 and NaN sNaN -> NaN Invalid_operation -andx851 and -Inf sNaN -> NaN Invalid_operation -andx852 and -1000 sNaN -> NaN Invalid_operation -andx853 and -1 sNaN -> NaN Invalid_operation -andx854 and -0 sNaN -> NaN Invalid_operation -andx855 and 0 sNaN -> NaN Invalid_operation -andx856 and 1 sNaN -> NaN Invalid_operation -andx857 and 1000 sNaN -> NaN Invalid_operation -andx858 and Inf sNaN -> NaN Invalid_operation -andx859 and NaN sNaN -> NaN Invalid_operation - --- propagating NaNs -andx861 and NaN1 -Inf -> NaN Invalid_operation -andx862 and +NaN2 -1000 -> NaN Invalid_operation -andx863 and NaN3 1000 -> NaN Invalid_operation -andx864 and NaN4 Inf -> NaN Invalid_operation -andx865 and NaN5 +NaN6 -> NaN Invalid_operation -andx866 and -Inf NaN7 -> NaN Invalid_operation -andx867 and -1000 NaN8 -> NaN Invalid_operation -andx868 and 1000 NaN9 -> NaN Invalid_operation -andx869 and Inf +NaN10 -> NaN Invalid_operation -andx871 and sNaN11 -Inf -> NaN Invalid_operation -andx872 and sNaN12 -1000 -> NaN Invalid_operation -andx873 and sNaN13 1000 -> NaN Invalid_operation -andx874 and sNaN14 NaN17 -> NaN Invalid_operation -andx875 and sNaN15 sNaN18 -> NaN Invalid_operation -andx876 and 
NaN16 sNaN19 -> NaN Invalid_operation -andx877 and -Inf +sNaN20 -> NaN Invalid_operation -andx878 and -1000 sNaN21 -> NaN Invalid_operation -andx879 and 1000 sNaN22 -> NaN Invalid_operation -andx880 and Inf sNaN23 -> NaN Invalid_operation -andx881 and +NaN25 +sNaN24 -> NaN Invalid_operation -andx882 and -NaN26 NaN28 -> NaN Invalid_operation -andx883 and -sNaN27 sNaN29 -> NaN Invalid_operation -andx884 and 1000 -NaN30 -> NaN Invalid_operation -andx885 and 1000 -sNaN31 -> NaN Invalid_operation +------------------------------------------------------------------------ +-- and.decTest -- digitwise logical AND -- +-- Copyright (c) IBM Corporation, 1981, 2008. All rights reserved. -- +------------------------------------------------------------------------ +-- Please see the document "General Decimal Arithmetic Testcases" -- +-- at http://www2.hursley.ibm.com/decimal for the description of -- +-- these testcases. -- +-- -- +-- These testcases are experimental ('beta' versions), and they -- +-- may contain errors. They are offered on an as-is basis. In -- +-- particular, achieving the same results as the tests here is not -- +-- a guarantee that an implementation complies with any Standard -- +-- or specification. The tests are not exhaustive. -- +-- -- +-- Please send comments, suggestions, and corrections to the author: -- +-- Mike Cowlishaw, IBM Fellow -- +-- IBM UK, PO Box 31, Birmingham Road, Warwick CV34 5JL, UK -- +-- mfc at uk.ibm.com -- +------------------------------------------------------------------------ +version: 2.59 + +extended: 1 +precision: 9 +rounding: half_up +maxExponent: 999 +minExponent: -999 + +-- Sanity check (truth table) +andx001 and 0 0 -> 0 +andx002 and 0 1 -> 0 +andx003 and 1 0 -> 0 +andx004 and 1 1 -> 1 +andx005 and 1100 1010 -> 1000 +andx006 and 1111 10 -> 10 +andx007 and 1111 1010 -> 1010 + +-- and at msd and msd-1 +andx010 and 000000000 000000000 -> 0 +andx011 and 000000000 100000000 -> 0 +andx012 and 100000000 000000000 -> 0 +andx013 and 100000000 100000000 -> 100000000 +andx014 and 000000000 000000000 -> 0 +andx015 and 000000000 010000000 -> 0 +andx016 and 010000000 000000000 -> 0 +andx017 and 010000000 010000000 -> 10000000 + +-- Various lengths +-- 123456789 123456789 123456789 +andx021 and 111111111 111111111 -> 111111111 +andx022 and 111111111111 111111111 -> 111111111 +andx023 and 111111111111 11111111 -> 11111111 +andx024 and 111111111 11111111 -> 11111111 +andx025 and 111111111 1111111 -> 1111111 +andx026 and 111111111111 111111 -> 111111 +andx027 and 111111111111 11111 -> 11111 +andx028 and 111111111111 1111 -> 1111 +andx029 and 111111111111 111 -> 111 +andx031 and 111111111111 11 -> 11 +andx032 and 111111111111 1 -> 1 +andx033 and 111111111111 1111111111 -> 111111111 +andx034 and 11111111111 11111111111 -> 111111111 +andx035 and 1111111111 111111111111 -> 111111111 +andx036 and 111111111 1111111111111 -> 111111111 + +andx040 and 111111111 111111111111 -> 111111111 +andx041 and 11111111 111111111111 -> 11111111 +andx042 and 11111111 111111111 -> 11111111 +andx043 and 1111111 111111111 -> 1111111 +andx044 and 111111 111111111 -> 111111 +andx045 and 11111 111111111 -> 11111 +andx046 and 1111 111111111 -> 1111 +andx047 and 111 111111111 -> 111 +andx048 and 11 111111111 -> 11 +andx049 and 1 111111111 -> 1 + +andx050 and 1111111111 1 -> 1 +andx051 and 111111111 1 -> 1 +andx052 and 11111111 1 -> 1 +andx053 and 1111111 1 -> 1 +andx054 and 111111 1 -> 1 +andx055 and 11111 1 -> 1 +andx056 and 1111 1 -> 1 +andx057 and 111 1 -> 1 +andx058 and 11 1 -> 1 +andx059 
and 1 1 -> 1 + +andx060 and 1111111111 0 -> 0 +andx061 and 111111111 0 -> 0 +andx062 and 11111111 0 -> 0 +andx063 and 1111111 0 -> 0 +andx064 and 111111 0 -> 0 +andx065 and 11111 0 -> 0 +andx066 and 1111 0 -> 0 +andx067 and 111 0 -> 0 +andx068 and 11 0 -> 0 +andx069 and 1 0 -> 0 + +andx070 and 1 1111111111 -> 1 +andx071 and 1 111111111 -> 1 +andx072 and 1 11111111 -> 1 +andx073 and 1 1111111 -> 1 +andx074 and 1 111111 -> 1 +andx075 and 1 11111 -> 1 +andx076 and 1 1111 -> 1 +andx077 and 1 111 -> 1 +andx078 and 1 11 -> 1 +andx079 and 1 1 -> 1 + +andx080 and 0 1111111111 -> 0 +andx081 and 0 111111111 -> 0 +andx082 and 0 11111111 -> 0 +andx083 and 0 1111111 -> 0 +andx084 and 0 111111 -> 0 +andx085 and 0 11111 -> 0 +andx086 and 0 1111 -> 0 +andx087 and 0 111 -> 0 +andx088 and 0 11 -> 0 +andx089 and 0 1 -> 0 + +andx090 and 011111111 111111111 -> 11111111 +andx091 and 101111111 111111111 -> 101111111 +andx092 and 110111111 111111111 -> 110111111 +andx093 and 111011111 111111111 -> 111011111 +andx094 and 111101111 111111111 -> 111101111 +andx095 and 111110111 111111111 -> 111110111 +andx096 and 111111011 111111111 -> 111111011 +andx097 and 111111101 111111111 -> 111111101 +andx098 and 111111110 111111111 -> 111111110 + +andx100 and 111111111 011111111 -> 11111111 +andx101 and 111111111 101111111 -> 101111111 +andx102 and 111111111 110111111 -> 110111111 +andx103 and 111111111 111011111 -> 111011111 +andx104 and 111111111 111101111 -> 111101111 +andx105 and 111111111 111110111 -> 111110111 +andx106 and 111111111 111111011 -> 111111011 +andx107 and 111111111 111111101 -> 111111101 +andx108 and 111111111 111111110 -> 111111110 + +-- non-0/1 should not be accepted, nor should signs +andx220 and 111111112 111111111 -> NaN Invalid_operation +andx221 and 333333333 333333333 -> NaN Invalid_operation +andx222 and 555555555 555555555 -> NaN Invalid_operation +andx223 and 777777777 777777777 -> NaN Invalid_operation +andx224 and 999999999 999999999 -> NaN Invalid_operation +andx225 and 222222222 999999999 -> NaN Invalid_operation +andx226 and 444444444 999999999 -> NaN Invalid_operation +andx227 and 666666666 999999999 -> NaN Invalid_operation +andx228 and 888888888 999999999 -> NaN Invalid_operation +andx229 and 999999999 222222222 -> NaN Invalid_operation +andx230 and 999999999 444444444 -> NaN Invalid_operation +andx231 and 999999999 666666666 -> NaN Invalid_operation +andx232 and 999999999 888888888 -> NaN Invalid_operation +-- a few randoms +andx240 and 567468689 -934981942 -> NaN Invalid_operation +andx241 and 567367689 934981942 -> NaN Invalid_operation +andx242 and -631917772 -706014634 -> NaN Invalid_operation +andx243 and -756253257 138579234 -> NaN Invalid_operation +andx244 and 835590149 567435400 -> NaN Invalid_operation +-- test MSD +andx250 and 200000000 100000000 -> NaN Invalid_operation +andx251 and 700000000 100000000 -> NaN Invalid_operation +andx252 and 800000000 100000000 -> NaN Invalid_operation +andx253 and 900000000 100000000 -> NaN Invalid_operation +andx254 and 200000000 000000000 -> NaN Invalid_operation +andx255 and 700000000 000000000 -> NaN Invalid_operation +andx256 and 800000000 000000000 -> NaN Invalid_operation +andx257 and 900000000 000000000 -> NaN Invalid_operation +andx258 and 100000000 200000000 -> NaN Invalid_operation +andx259 and 100000000 700000000 -> NaN Invalid_operation +andx260 and 100000000 800000000 -> NaN Invalid_operation +andx261 and 100000000 900000000 -> NaN Invalid_operation +andx262 and 000000000 200000000 -> NaN Invalid_operation +andx263 and 000000000 
700000000 -> NaN Invalid_operation +andx264 and 000000000 800000000 -> NaN Invalid_operation +andx265 and 000000000 900000000 -> NaN Invalid_operation +-- test MSD-1 +andx270 and 020000000 100000000 -> NaN Invalid_operation +andx271 and 070100000 100000000 -> NaN Invalid_operation +andx272 and 080010000 100000001 -> NaN Invalid_operation +andx273 and 090001000 100000010 -> NaN Invalid_operation +andx274 and 100000100 020010100 -> NaN Invalid_operation +andx275 and 100000000 070001000 -> NaN Invalid_operation +andx276 and 100000010 080010100 -> NaN Invalid_operation +andx277 and 100000000 090000010 -> NaN Invalid_operation +-- test LSD +andx280 and 001000002 100000000 -> NaN Invalid_operation +andx281 and 000000007 100000000 -> NaN Invalid_operation +andx282 and 000000008 100000000 -> NaN Invalid_operation +andx283 and 000000009 100000000 -> NaN Invalid_operation +andx284 and 100000000 000100002 -> NaN Invalid_operation +andx285 and 100100000 001000007 -> NaN Invalid_operation +andx286 and 100010000 010000008 -> NaN Invalid_operation +andx287 and 100001000 100000009 -> NaN Invalid_operation +-- test Middie +andx288 and 001020000 100000000 -> NaN Invalid_operation +andx289 and 000070001 100000000 -> NaN Invalid_operation +andx290 and 000080000 100010000 -> NaN Invalid_operation +andx291 and 000090000 100001000 -> NaN Invalid_operation +andx292 and 100000010 000020100 -> NaN Invalid_operation +andx293 and 100100000 000070010 -> NaN Invalid_operation +andx294 and 100010100 000080001 -> NaN Invalid_operation +andx295 and 100001000 000090000 -> NaN Invalid_operation +-- signs +andx296 and -100001000 -000000000 -> NaN Invalid_operation +andx297 and -100001000 000010000 -> NaN Invalid_operation +andx298 and 100001000 -000000000 -> NaN Invalid_operation +andx299 and 100001000 000011000 -> 1000 + +-- Nmax, Nmin, Ntiny +andx331 and 2 9.99999999E+999 -> NaN Invalid_operation +andx332 and 3 1E-999 -> NaN Invalid_operation +andx333 and 4 1.00000000E-999 -> NaN Invalid_operation +andx334 and 5 1E-1007 -> NaN Invalid_operation +andx335 and 6 -1E-1007 -> NaN Invalid_operation +andx336 and 7 -1.00000000E-999 -> NaN Invalid_operation +andx337 and 8 -1E-999 -> NaN Invalid_operation +andx338 and 9 -9.99999999E+999 -> NaN Invalid_operation +andx341 and 9.99999999E+999 -18 -> NaN Invalid_operation +andx342 and 1E-999 01 -> NaN Invalid_operation +andx343 and 1.00000000E-999 -18 -> NaN Invalid_operation +andx344 and 1E-1007 18 -> NaN Invalid_operation +andx345 and -1E-1007 -10 -> NaN Invalid_operation +andx346 and -1.00000000E-999 18 -> NaN Invalid_operation +andx347 and -1E-999 10 -> NaN Invalid_operation +andx348 and -9.99999999E+999 -18 -> NaN Invalid_operation + +-- A few other non-integers +andx361 and 1.0 1 -> NaN Invalid_operation +andx362 and 1E+1 1 -> NaN Invalid_operation +andx363 and 0.0 1 -> NaN Invalid_operation +andx364 and 0E+1 1 -> NaN Invalid_operation +andx365 and 9.9 1 -> NaN Invalid_operation +andx366 and 9E+1 1 -> NaN Invalid_operation +andx371 and 0 1.0 -> NaN Invalid_operation +andx372 and 0 1E+1 -> NaN Invalid_operation +andx373 and 0 0.0 -> NaN Invalid_operation +andx374 and 0 0E+1 -> NaN Invalid_operation +andx375 and 0 9.9 -> NaN Invalid_operation +andx376 and 0 9E+1 -> NaN Invalid_operation + +-- All Specials are in error +andx780 and -Inf -Inf -> NaN Invalid_operation +andx781 and -Inf -1000 -> NaN Invalid_operation +andx782 and -Inf -1 -> NaN Invalid_operation +andx783 and -Inf -0 -> NaN Invalid_operation +andx784 and -Inf 0 -> NaN Invalid_operation +andx785 and -Inf 1 -> NaN 
Invalid_operation +andx786 and -Inf 1000 -> NaN Invalid_operation +andx787 and -1000 -Inf -> NaN Invalid_operation +andx788 and -Inf -Inf -> NaN Invalid_operation +andx789 and -1 -Inf -> NaN Invalid_operation +andx790 and -0 -Inf -> NaN Invalid_operation +andx791 and 0 -Inf -> NaN Invalid_operation +andx792 and 1 -Inf -> NaN Invalid_operation +andx793 and 1000 -Inf -> NaN Invalid_operation +andx794 and Inf -Inf -> NaN Invalid_operation + +andx800 and Inf -Inf -> NaN Invalid_operation +andx801 and Inf -1000 -> NaN Invalid_operation +andx802 and Inf -1 -> NaN Invalid_operation +andx803 and Inf -0 -> NaN Invalid_operation +andx804 and Inf 0 -> NaN Invalid_operation +andx805 and Inf 1 -> NaN Invalid_operation +andx806 and Inf 1000 -> NaN Invalid_operation +andx807 and Inf Inf -> NaN Invalid_operation +andx808 and -1000 Inf -> NaN Invalid_operation +andx809 and -Inf Inf -> NaN Invalid_operation +andx810 and -1 Inf -> NaN Invalid_operation +andx811 and -0 Inf -> NaN Invalid_operation +andx812 and 0 Inf -> NaN Invalid_operation +andx813 and 1 Inf -> NaN Invalid_operation +andx814 and 1000 Inf -> NaN Invalid_operation +andx815 and Inf Inf -> NaN Invalid_operation + +andx821 and NaN -Inf -> NaN Invalid_operation +andx822 and NaN -1000 -> NaN Invalid_operation +andx823 and NaN -1 -> NaN Invalid_operation +andx824 and NaN -0 -> NaN Invalid_operation +andx825 and NaN 0 -> NaN Invalid_operation +andx826 and NaN 1 -> NaN Invalid_operation +andx827 and NaN 1000 -> NaN Invalid_operation +andx828 and NaN Inf -> NaN Invalid_operation +andx829 and NaN NaN -> NaN Invalid_operation +andx830 and -Inf NaN -> NaN Invalid_operation +andx831 and -1000 NaN -> NaN Invalid_operation +andx832 and -1 NaN -> NaN Invalid_operation +andx833 and -0 NaN -> NaN Invalid_operation +andx834 and 0 NaN -> NaN Invalid_operation +andx835 and 1 NaN -> NaN Invalid_operation +andx836 and 1000 NaN -> NaN Invalid_operation +andx837 and Inf NaN -> NaN Invalid_operation + +andx841 and sNaN -Inf -> NaN Invalid_operation +andx842 and sNaN -1000 -> NaN Invalid_operation +andx843 and sNaN -1 -> NaN Invalid_operation +andx844 and sNaN -0 -> NaN Invalid_operation +andx845 and sNaN 0 -> NaN Invalid_operation +andx846 and sNaN 1 -> NaN Invalid_operation +andx847 and sNaN 1000 -> NaN Invalid_operation +andx848 and sNaN NaN -> NaN Invalid_operation +andx849 and sNaN sNaN -> NaN Invalid_operation +andx850 and NaN sNaN -> NaN Invalid_operation +andx851 and -Inf sNaN -> NaN Invalid_operation +andx852 and -1000 sNaN -> NaN Invalid_operation +andx853 and -1 sNaN -> NaN Invalid_operation +andx854 and -0 sNaN -> NaN Invalid_operation +andx855 and 0 sNaN -> NaN Invalid_operation +andx856 and 1 sNaN -> NaN Invalid_operation +andx857 and 1000 sNaN -> NaN Invalid_operation +andx858 and Inf sNaN -> NaN Invalid_operation +andx859 and NaN sNaN -> NaN Invalid_operation + +-- propagating NaNs +andx861 and NaN1 -Inf -> NaN Invalid_operation +andx862 and +NaN2 -1000 -> NaN Invalid_operation +andx863 and NaN3 1000 -> NaN Invalid_operation +andx864 and NaN4 Inf -> NaN Invalid_operation +andx865 and NaN5 +NaN6 -> NaN Invalid_operation +andx866 and -Inf NaN7 -> NaN Invalid_operation +andx867 and -1000 NaN8 -> NaN Invalid_operation +andx868 and 1000 NaN9 -> NaN Invalid_operation +andx869 and Inf +NaN10 -> NaN Invalid_operation +andx871 and sNaN11 -Inf -> NaN Invalid_operation +andx872 and sNaN12 -1000 -> NaN Invalid_operation +andx873 and sNaN13 1000 -> NaN Invalid_operation +andx874 and sNaN14 NaN17 -> NaN Invalid_operation +andx875 and sNaN15 sNaN18 -> NaN 
Invalid_operation +andx876 and NaN16 sNaN19 -> NaN Invalid_operation +andx877 and -Inf +sNaN20 -> NaN Invalid_operation +andx878 and -1000 sNaN21 -> NaN Invalid_operation +andx879 and 1000 sNaN22 -> NaN Invalid_operation +andx880 and Inf sNaN23 -> NaN Invalid_operation +andx881 and +NaN25 +sNaN24 -> NaN Invalid_operation +andx882 and -NaN26 NaN28 -> NaN Invalid_operation +andx883 and -sNaN27 sNaN29 -> NaN Invalid_operation +andx884 and 1000 -NaN30 -> NaN Invalid_operation +andx885 and 1000 -sNaN31 -> NaN Invalid_operation diff --git a/lib-python/2.7/test/decimaltestdata/class.decTest b/lib-python/2.7/test/decimaltestdata/class.decTest --- a/lib-python/2.7/test/decimaltestdata/class.decTest +++ b/lib-python/2.7/test/decimaltestdata/class.decTest @@ -1,131 +1,131 @@ ------------------------------------------------------------------------- --- class.decTest -- Class operations -- --- Copyright (c) IBM Corporation, 1981, 2008. All rights reserved. -- ------------------------------------------------------------------------- --- Please see the document "General Decimal Arithmetic Testcases" -- --- at http://www2.hursley.ibm.com/decimal for the description of -- --- these testcases. -- --- -- --- These testcases are experimental ('beta' versions), and they -- --- may contain errors. They are offered on an as-is basis. In -- --- particular, achieving the same results as the tests here is not -- --- a guarantee that an implementation complies with any Standard -- --- or specification. The tests are not exhaustive. -- --- -- --- Please send comments, suggestions, and corrections to the author: -- --- Mike Cowlishaw, IBM Fellow -- --- IBM UK, PO Box 31, Birmingham Road, Warwick CV34 5JL, UK -- --- mfc at uk.ibm.com -- ------------------------------------------------------------------------- -version: 2.59 - --- [New 2006.11.27] - -precision: 9 -maxExponent: 999 -minExponent: -999 -extended: 1 -clamp: 1 -rounding: half_even - -clasx001 class 0 -> +Zero -clasx002 class 0.00 -> +Zero -clasx003 class 0E+5 -> +Zero -clasx004 class 1E-1007 -> +Subnormal -clasx005 class 0.1E-999 -> +Subnormal -clasx006 class 0.99999999E-999 -> +Subnormal -clasx007 class 1.00000000E-999 -> +Normal -clasx008 class 1E-999 -> +Normal -clasx009 class 1E-100 -> +Normal -clasx010 class 1E-10 -> +Normal -clasx012 class 1E-1 -> +Normal -clasx013 class 1 -> +Normal -clasx014 class 2.50 -> +Normal -clasx015 class 100.100 -> +Normal -clasx016 class 1E+30 -> +Normal -clasx017 class 1E+999 -> +Normal -clasx018 class 9.99999999E+999 -> +Normal -clasx019 class Inf -> +Infinity - -clasx021 class -0 -> -Zero -clasx022 class -0.00 -> -Zero -clasx023 class -0E+5 -> -Zero -clasx024 class -1E-1007 -> -Subnormal -clasx025 class -0.1E-999 -> -Subnormal -clasx026 class -0.99999999E-999 -> -Subnormal -clasx027 class -1.00000000E-999 -> -Normal -clasx028 class -1E-999 -> -Normal -clasx029 class -1E-100 -> -Normal -clasx030 class -1E-10 -> -Normal -clasx032 class -1E-1 -> -Normal -clasx033 class -1 -> -Normal -clasx034 class -2.50 -> -Normal -clasx035 class -100.100 -> -Normal -clasx036 class -1E+30 -> -Normal -clasx037 class -1E+999 -> -Normal -clasx038 class -9.99999999E+999 -> -Normal -clasx039 class -Inf -> -Infinity - -clasx041 class NaN -> NaN -clasx042 class -NaN -> NaN -clasx043 class +NaN12345 -> NaN -clasx044 class sNaN -> sNaN -clasx045 class -sNaN -> sNaN -clasx046 class +sNaN12345 -> sNaN - - --- decimal64 bounds - -precision: 16 -maxExponent: 384 -minExponent: -383 -clamp: 1 -rounding: half_even - -clasx201 class 0 -> +Zero -clasx202 
class 0.00 -> +Zero -clasx203 class 0E+5 -> +Zero -clasx204 class 1E-396 -> +Subnormal -clasx205 class 0.1E-383 -> +Subnormal -clasx206 class 0.999999999999999E-383 -> +Subnormal -clasx207 class 1.000000000000000E-383 -> +Normal -clasx208 class 1E-383 -> +Normal -clasx209 class 1E-100 -> +Normal -clasx210 class 1E-10 -> +Normal -clasx212 class 1E-1 -> +Normal -clasx213 class 1 -> +Normal -clasx214 class 2.50 -> +Normal -clasx215 class 100.100 -> +Normal -clasx216 class 1E+30 -> +Normal -clasx217 class 1E+384 -> +Normal -clasx218 class 9.999999999999999E+384 -> +Normal -clasx219 class Inf -> +Infinity - -clasx221 class -0 -> -Zero -clasx222 class -0.00 -> -Zero -clasx223 class -0E+5 -> -Zero -clasx224 class -1E-396 -> -Subnormal -clasx225 class -0.1E-383 -> -Subnormal -clasx226 class -0.999999999999999E-383 -> -Subnormal -clasx227 class -1.000000000000000E-383 -> -Normal -clasx228 class -1E-383 -> -Normal -clasx229 class -1E-100 -> -Normal -clasx230 class -1E-10 -> -Normal -clasx232 class -1E-1 -> -Normal -clasx233 class -1 -> -Normal -clasx234 class -2.50 -> -Normal -clasx235 class -100.100 -> -Normal -clasx236 class -1E+30 -> -Normal -clasx237 class -1E+384 -> -Normal -clasx238 class -9.999999999999999E+384 -> -Normal -clasx239 class -Inf -> -Infinity - -clasx241 class NaN -> NaN -clasx242 class -NaN -> NaN -clasx243 class +NaN12345 -> NaN -clasx244 class sNaN -> sNaN -clasx245 class -sNaN -> sNaN -clasx246 class +sNaN12345 -> sNaN - - - +------------------------------------------------------------------------ +-- class.decTest -- Class operations -- +-- Copyright (c) IBM Corporation, 1981, 2008. All rights reserved. -- +------------------------------------------------------------------------ +-- Please see the document "General Decimal Arithmetic Testcases" -- +-- at http://www2.hursley.ibm.com/decimal for the description of -- +-- these testcases. -- +-- -- +-- These testcases are experimental ('beta' versions), and they -- +-- may contain errors. They are offered on an as-is basis. In -- +-- particular, achieving the same results as the tests here is not -- +-- a guarantee that an implementation complies with any Standard -- +-- or specification. The tests are not exhaustive. 
-- +-- -- +-- Please send comments, suggestions, and corrections to the author: -- +-- Mike Cowlishaw, IBM Fellow -- +-- IBM UK, PO Box 31, Birmingham Road, Warwick CV34 5JL, UK -- +-- mfc at uk.ibm.com -- +------------------------------------------------------------------------ +version: 2.59 + +-- [New 2006.11.27] + +precision: 9 +maxExponent: 999 +minExponent: -999 +extended: 1 +clamp: 1 +rounding: half_even + +clasx001 class 0 -> +Zero +clasx002 class 0.00 -> +Zero +clasx003 class 0E+5 -> +Zero +clasx004 class 1E-1007 -> +Subnormal +clasx005 class 0.1E-999 -> +Subnormal +clasx006 class 0.99999999E-999 -> +Subnormal +clasx007 class 1.00000000E-999 -> +Normal +clasx008 class 1E-999 -> +Normal +clasx009 class 1E-100 -> +Normal +clasx010 class 1E-10 -> +Normal +clasx012 class 1E-1 -> +Normal +clasx013 class 1 -> +Normal +clasx014 class 2.50 -> +Normal +clasx015 class 100.100 -> +Normal +clasx016 class 1E+30 -> +Normal +clasx017 class 1E+999 -> +Normal +clasx018 class 9.99999999E+999 -> +Normal +clasx019 class Inf -> +Infinity + +clasx021 class -0 -> -Zero +clasx022 class -0.00 -> -Zero +clasx023 class -0E+5 -> -Zero +clasx024 class -1E-1007 -> -Subnormal +clasx025 class -0.1E-999 -> -Subnormal +clasx026 class -0.99999999E-999 -> -Subnormal +clasx027 class -1.00000000E-999 -> -Normal +clasx028 class -1E-999 -> -Normal +clasx029 class -1E-100 -> -Normal +clasx030 class -1E-10 -> -Normal +clasx032 class -1E-1 -> -Normal +clasx033 class -1 -> -Normal +clasx034 class -2.50 -> -Normal +clasx035 class -100.100 -> -Normal +clasx036 class -1E+30 -> -Normal +clasx037 class -1E+999 -> -Normal +clasx038 class -9.99999999E+999 -> -Normal +clasx039 class -Inf -> -Infinity + +clasx041 class NaN -> NaN +clasx042 class -NaN -> NaN +clasx043 class +NaN12345 -> NaN +clasx044 class sNaN -> sNaN +clasx045 class -sNaN -> sNaN +clasx046 class +sNaN12345 -> sNaN + + +-- decimal64 bounds + +precision: 16 +maxExponent: 384 +minExponent: -383 +clamp: 1 +rounding: half_even + +clasx201 class 0 -> +Zero +clasx202 class 0.00 -> +Zero +clasx203 class 0E+5 -> +Zero +clasx204 class 1E-396 -> +Subnormal +clasx205 class 0.1E-383 -> +Subnormal +clasx206 class 0.999999999999999E-383 -> +Subnormal +clasx207 class 1.000000000000000E-383 -> +Normal +clasx208 class 1E-383 -> +Normal +clasx209 class 1E-100 -> +Normal +clasx210 class 1E-10 -> +Normal +clasx212 class 1E-1 -> +Normal +clasx213 class 1 -> +Normal +clasx214 class 2.50 -> +Normal +clasx215 class 100.100 -> +Normal +clasx216 class 1E+30 -> +Normal +clasx217 class 1E+384 -> +Normal +clasx218 class 9.999999999999999E+384 -> +Normal +clasx219 class Inf -> +Infinity + +clasx221 class -0 -> -Zero +clasx222 class -0.00 -> -Zero +clasx223 class -0E+5 -> -Zero +clasx224 class -1E-396 -> -Subnormal +clasx225 class -0.1E-383 -> -Subnormal +clasx226 class -0.999999999999999E-383 -> -Subnormal +clasx227 class -1.000000000000000E-383 -> -Normal +clasx228 class -1E-383 -> -Normal +clasx229 class -1E-100 -> -Normal +clasx230 class -1E-10 -> -Normal +clasx232 class -1E-1 -> -Normal +clasx233 class -1 -> -Normal +clasx234 class -2.50 -> -Normal +clasx235 class -100.100 -> -Normal +clasx236 class -1E+30 -> -Normal +clasx237 class -1E+384 -> -Normal +clasx238 class -9.999999999999999E+384 -> -Normal +clasx239 class -Inf -> -Infinity + +clasx241 class NaN -> NaN +clasx242 class -NaN -> NaN +clasx243 class +NaN12345 -> NaN +clasx244 class sNaN -> sNaN +clasx245 class -sNaN -> sNaN +clasx246 class +sNaN12345 -> sNaN + + + diff --git a/lib-python/2.7/test/decimaltestdata/comparetotal.decTest 
b/lib-python/2.7/test/decimaltestdata/comparetotal.decTest --- a/lib-python/2.7/test/decimaltestdata/comparetotal.decTest +++ b/lib-python/2.7/test/decimaltestdata/comparetotal.decTest @@ -1,798 +1,798 @@ ------------------------------------------------------------------------- --- comparetotal.decTest -- decimal comparison using total ordering -- --- Copyright (c) IBM Corporation, 1981, 2008. All rights reserved. -- ------------------------------------------------------------------------- --- Please see the document "General Decimal Arithmetic Testcases" -- --- at http://www2.hursley.ibm.com/decimal for the description of -- --- these testcases. -- --- -- --- These testcases are experimental ('beta' versions), and they -- --- may contain errors. They are offered on an as-is basis. In -- --- particular, achieving the same results as the tests here is not -- --- a guarantee that an implementation complies with any Standard -- --- or specification. The tests are not exhaustive. -- --- -- --- Please send comments, suggestions, and corrections to the author: -- --- Mike Cowlishaw, IBM Fellow -- --- IBM UK, PO Box 31, Birmingham Road, Warwick CV34 5JL, UK -- --- mfc at uk.ibm.com -- ------------------------------------------------------------------------- -version: 2.59 - --- Note that we cannot assume add/subtract tests cover paths adequately, --- here, because the code might be quite different (comparison cannot --- overflow or underflow, so actual subtractions are not necessary). --- Similarly, comparetotal will have some radically different paths --- than compare. - -extended: 1 -precision: 16 -rounding: half_up -maxExponent: 384 -minExponent: -383 - --- sanity checks -cotx001 comparetotal -2 -2 -> 0 -cotx002 comparetotal -2 -1 -> -1 -cotx003 comparetotal -2 0 -> -1 -cotx004 comparetotal -2 1 -> -1 -cotx005 comparetotal -2 2 -> -1 -cotx006 comparetotal -1 -2 -> 1 -cotx007 comparetotal -1 -1 -> 0 -cotx008 comparetotal -1 0 -> -1 -cotx009 comparetotal -1 1 -> -1 -cotx010 comparetotal -1 2 -> -1 -cotx011 comparetotal 0 -2 -> 1 -cotx012 comparetotal 0 -1 -> 1 -cotx013 comparetotal 0 0 -> 0 -cotx014 comparetotal 0 1 -> -1 -cotx015 comparetotal 0 2 -> -1 -cotx016 comparetotal 1 -2 -> 1 -cotx017 comparetotal 1 -1 -> 1 -cotx018 comparetotal 1 0 -> 1 -cotx019 comparetotal 1 1 -> 0 -cotx020 comparetotal 1 2 -> -1 -cotx021 comparetotal 2 -2 -> 1 -cotx022 comparetotal 2 -1 -> 1 -cotx023 comparetotal 2 0 -> 1 -cotx025 comparetotal 2 1 -> 1 -cotx026 comparetotal 2 2 -> 0 - -cotx031 comparetotal -20 -20 -> 0 -cotx032 comparetotal -20 -10 -> -1 -cotx033 comparetotal -20 00 -> -1 -cotx034 comparetotal -20 10 -> -1 -cotx035 comparetotal -20 20 -> -1 -cotx036 comparetotal -10 -20 -> 1 -cotx037 comparetotal -10 -10 -> 0 -cotx038 comparetotal -10 00 -> -1 -cotx039 comparetotal -10 10 -> -1 -cotx040 comparetotal -10 20 -> -1 -cotx041 comparetotal 00 -20 -> 1 -cotx042 comparetotal 00 -10 -> 1 -cotx043 comparetotal 00 00 -> 0 -cotx044 comparetotal 00 10 -> -1 -cotx045 comparetotal 00 20 -> -1 -cotx046 comparetotal 10 -20 -> 1 -cotx047 comparetotal 10 -10 -> 1 -cotx048 comparetotal 10 00 -> 1 -cotx049 comparetotal 10 10 -> 0 -cotx050 comparetotal 10 20 -> -1 -cotx051 comparetotal 20 -20 -> 1 -cotx052 comparetotal 20 -10 -> 1 -cotx053 comparetotal 20 00 -> 1 -cotx055 comparetotal 20 10 -> 1 -cotx056 comparetotal 20 20 -> 0 - -cotx061 comparetotal -2.0 -2.0 -> 0 -cotx062 comparetotal -2.0 -1.0 -> -1 -cotx063 comparetotal -2.0 0.0 -> -1 -cotx064 comparetotal -2.0 1.0 -> -1 -cotx065 comparetotal -2.0 2.0 -> -1 -cotx066 
comparetotal -1.0 -2.0 -> 1 -cotx067 comparetotal -1.0 -1.0 -> 0 -cotx068 comparetotal -1.0 0.0 -> -1 -cotx069 comparetotal -1.0 1.0 -> -1 -cotx070 comparetotal -1.0 2.0 -> -1 -cotx071 comparetotal 0.0 -2.0 -> 1 -cotx072 comparetotal 0.0 -1.0 -> 1 -cotx073 comparetotal 0.0 0.0 -> 0 -cotx074 comparetotal 0.0 1.0 -> -1 -cotx075 comparetotal 0.0 2.0 -> -1 -cotx076 comparetotal 1.0 -2.0 -> 1 -cotx077 comparetotal 1.0 -1.0 -> 1 -cotx078 comparetotal 1.0 0.0 -> 1 -cotx079 comparetotal 1.0 1.0 -> 0 -cotx080 comparetotal 1.0 2.0 -> -1 -cotx081 comparetotal 2.0 -2.0 -> 1 -cotx082 comparetotal 2.0 -1.0 -> 1 -cotx083 comparetotal 2.0 0.0 -> 1 -cotx085 comparetotal 2.0 1.0 -> 1 -cotx086 comparetotal 2.0 2.0 -> 0 - --- now some cases which might overflow if subtract were used -maxexponent: 999999999 -minexponent: -999999999 -cotx090 comparetotal 9.99999999E+999999999 9.99999999E+999999999 -> 0 -cotx091 comparetotal -9.99999999E+999999999 9.99999999E+999999999 -> -1 -cotx092 comparetotal 9.99999999E+999999999 -9.99999999E+999999999 -> 1 -cotx093 comparetotal -9.99999999E+999999999 -9.99999999E+999999999 -> 0 - --- Examples -cotx094 comparetotal 12.73 127.9 -> -1 -cotx095 comparetotal -127 12 -> -1 -cotx096 comparetotal 12.30 12.3 -> -1 -cotx097 comparetotal 12.30 12.30 -> 0 -cotx098 comparetotal 12.3 12.300 -> 1 -cotx099 comparetotal 12.3 NaN -> -1 - --- some differing length/exponent cases --- in this first group, compare would compare all equal -cotx100 comparetotal 7.0 7.0 -> 0 -cotx101 comparetotal 7.0 7 -> -1 -cotx102 comparetotal 7 7.0 -> 1 -cotx103 comparetotal 7E+0 7.0 -> 1 -cotx104 comparetotal 70E-1 7.0 -> 0 -cotx105 comparetotal 0.7E+1 7 -> 0 -cotx106 comparetotal 70E-1 7 -> -1 -cotx107 comparetotal 7.0 7E+0 -> -1 -cotx108 comparetotal 7.0 70E-1 -> 0 -cotx109 comparetotal 7 0.7E+1 -> 0 -cotx110 comparetotal 7 70E-1 -> 1 - -cotx120 comparetotal 8.0 7.0 -> 1 -cotx121 comparetotal 8.0 7 -> 1 -cotx122 comparetotal 8 7.0 -> 1 -cotx123 comparetotal 8E+0 7.0 -> 1 -cotx124 comparetotal 80E-1 7.0 -> 1 -cotx125 comparetotal 0.8E+1 7 -> 1 -cotx126 comparetotal 80E-1 7 -> 1 -cotx127 comparetotal 8.0 7E+0 -> 1 -cotx128 comparetotal 8.0 70E-1 -> 1 -cotx129 comparetotal 8 0.7E+1 -> 1 -cotx130 comparetotal 8 70E-1 -> 1 - -cotx140 comparetotal 8.0 9.0 -> -1 -cotx141 comparetotal 8.0 9 -> -1 -cotx142 comparetotal 8 9.0 -> -1 -cotx143 comparetotal 8E+0 9.0 -> -1 -cotx144 comparetotal 80E-1 9.0 -> -1 -cotx145 comparetotal 0.8E+1 9 -> -1 -cotx146 comparetotal 80E-1 9 -> -1 -cotx147 comparetotal 8.0 9E+0 -> -1 -cotx148 comparetotal 8.0 90E-1 -> -1 -cotx149 comparetotal 8 0.9E+1 -> -1 -cotx150 comparetotal 8 90E-1 -> -1 - --- and again, with sign changes -+ .. 
-cotx200 comparetotal -7.0 7.0 -> -1 -cotx201 comparetotal -7.0 7 -> -1 -cotx202 comparetotal -7 7.0 -> -1 -cotx203 comparetotal -7E+0 7.0 -> -1 -cotx204 comparetotal -70E-1 7.0 -> -1 -cotx205 comparetotal -0.7E+1 7 -> -1 -cotx206 comparetotal -70E-1 7 -> -1 -cotx207 comparetotal -7.0 7E+0 -> -1 -cotx208 comparetotal -7.0 70E-1 -> -1 -cotx209 comparetotal -7 0.7E+1 -> -1 -cotx210 comparetotal -7 70E-1 -> -1 - -cotx220 comparetotal -8.0 7.0 -> -1 -cotx221 comparetotal -8.0 7 -> -1 -cotx222 comparetotal -8 7.0 -> -1 -cotx223 comparetotal -8E+0 7.0 -> -1 -cotx224 comparetotal -80E-1 7.0 -> -1 -cotx225 comparetotal -0.8E+1 7 -> -1 -cotx226 comparetotal -80E-1 7 -> -1 -cotx227 comparetotal -8.0 7E+0 -> -1 -cotx228 comparetotal -8.0 70E-1 -> -1 -cotx229 comparetotal -8 0.7E+1 -> -1 -cotx230 comparetotal -8 70E-1 -> -1 - -cotx240 comparetotal -8.0 9.0 -> -1 -cotx241 comparetotal -8.0 9 -> -1 -cotx242 comparetotal -8 9.0 -> -1 -cotx243 comparetotal -8E+0 9.0 -> -1 -cotx244 comparetotal -80E-1 9.0 -> -1 -cotx245 comparetotal -0.8E+1 9 -> -1 -cotx246 comparetotal -80E-1 9 -> -1 -cotx247 comparetotal -8.0 9E+0 -> -1 -cotx248 comparetotal -8.0 90E-1 -> -1 -cotx249 comparetotal -8 0.9E+1 -> -1 -cotx250 comparetotal -8 90E-1 -> -1 - --- and again, with sign changes +- .. -cotx300 comparetotal 7.0 -7.0 -> 1 -cotx301 comparetotal 7.0 -7 -> 1 -cotx302 comparetotal 7 -7.0 -> 1 -cotx303 comparetotal 7E+0 -7.0 -> 1 -cotx304 comparetotal 70E-1 -7.0 -> 1 -cotx305 comparetotal .7E+1 -7 -> 1 -cotx306 comparetotal 70E-1 -7 -> 1 -cotx307 comparetotal 7.0 -7E+0 -> 1 -cotx308 comparetotal 7.0 -70E-1 -> 1 -cotx309 comparetotal 7 -.7E+1 -> 1 -cotx310 comparetotal 7 -70E-1 -> 1 - -cotx320 comparetotal 8.0 -7.0 -> 1 -cotx321 comparetotal 8.0 -7 -> 1 -cotx322 comparetotal 8 -7.0 -> 1 -cotx323 comparetotal 8E+0 -7.0 -> 1 -cotx324 comparetotal 80E-1 -7.0 -> 1 -cotx325 comparetotal .8E+1 -7 -> 1 -cotx326 comparetotal 80E-1 -7 -> 1 -cotx327 comparetotal 8.0 -7E+0 -> 1 -cotx328 comparetotal 8.0 -70E-1 -> 1 -cotx329 comparetotal 8 -.7E+1 -> 1 -cotx330 comparetotal 8 -70E-1 -> 1 - -cotx340 comparetotal 8.0 -9.0 -> 1 -cotx341 comparetotal 8.0 -9 -> 1 -cotx342 comparetotal 8 -9.0 -> 1 -cotx343 comparetotal 8E+0 -9.0 -> 1 -cotx344 comparetotal 80E-1 -9.0 -> 1 -cotx345 comparetotal .8E+1 -9 -> 1 -cotx346 comparetotal 80E-1 -9 -> 1 -cotx347 comparetotal 8.0 -9E+0 -> 1 -cotx348 comparetotal 8.0 -90E-1 -> 1 -cotx349 comparetotal 8 -.9E+1 -> 1 -cotx350 comparetotal 8 -90E-1 -> 1 - --- and again, with sign changes -- .. -cotx400 comparetotal -7.0 -7.0 -> 0 -cotx401 comparetotal -7.0 -7 -> 1 -cotx402 comparetotal -7 -7.0 -> -1 -cotx403 comparetotal -7E+0 -7.0 -> -1 -cotx404 comparetotal -70E-1 -7.0 -> 0 -cotx405 comparetotal -.7E+1 -7 -> 0 -cotx406 comparetotal -70E-1 -7 -> 1 -cotx407 comparetotal -7.0 -7E+0 -> 1 -cotx408 comparetotal -7.0 -70E-1 -> 0 -cotx409 comparetotal -7 -.7E+1 -> 0 From noreply at buildbot.pypy.org Thu Feb 9 21:38:06 2012 From: noreply at buildbot.pypy.org (edelsohn) Date: Thu, 9 Feb 2012 21:38:06 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: Add redirect call assembler. Message-ID: <20120209203806.A6E3A82B1E@wyvern.cs.uni-duesseldorf.de> Author: edelsohn Branch: ppc-jit-backend Changeset: r52331:a8c0140a9443 Date: 2012-02-09 15:33 -0500 http://bitbucket.org/pypy/pypy/changeset/a8c0140a9443/ Log: Add redirect call assembler. 
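Redirecting a call-assembler target means making every existing caller of the
old loop end up in the newly compiled one; the diff below does this at the
machine-code level by assembling an absolute branch (PPCBuilder.b_abs) to the
new address and copying it over the start of the old loop.  As a rough, purely
illustrative sketch of the idea -- plain Python, invented names, no real
machine code involved:

    # Toy model: "code memory" maps an entry address to a list of
    # pseudo-instructions.  Redirecting the old loop overwrites its first
    # instruction with an absolute branch to the new entry point.
    code_memory = {}

    def compile_loop(addr, instructions):
        code_memory[addr] = list(instructions)

    def redirect_call_assembler(old_addr, new_addr):
        code_memory[old_addr][0] = ("b_abs", new_addr)

    def run(addr):
        pc, code = 0, code_memory[addr]
        while pc < len(code):
            op = code[pc]
            if op[0] == "b_abs":            # unconditional jump into another loop
                code, pc = code_memory[op[1]], 0
                continue
            if op[0] == "ret":
                return op[1]
            pc += 1

    compile_loop(0x1000, [("nop",), ("ret", "old loop")])
    compile_loop(0x2000, [("nop",), ("ret", "new loop")])
    redirect_call_assembler(0x1000, 0x2000)
    assert run(0x1000) == "new loop"        # old callers now reach the new code
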
diff --git a/pypy/jit/backend/ppc/opassembler.py b/pypy/jit/backend/ppc/opassembler.py --- a/pypy/jit/backend/ppc/opassembler.py +++ b/pypy/jit/backend/ppc/opassembler.py @@ -1078,6 +1078,21 @@ self._emit_guard(guard_op, regalloc._prepare_guard(guard_op), c.LT) + # ../x86/assembler.py:668 + def redirect_call_assembler(self, oldlooptoken, newlooptoken): + # some minimal sanity checking + old_nbargs = oldlooptoken.compiled_loop_token._debug_nbargs + new_nbargs = newlooptoken.compiled_loop_token._debug_nbargs + assert old_nbargs == new_nbargs + # we overwrite the instructions at the old _ppc_func_addr + # to start with a JMP to the new _ppc_func_addr. + # Ideally we should rather patch all existing CALLs, but well. + oldadr = oldlooptoken._ppc_func_addr + target = newlooptoken._ppc_func_addr + mc = PPCBuilder() + mc.b_abs(target) + mc.copy_to_raw_memory(oldadr) + def emit_guard_call_may_force(self, op, guard_op, arglocs, regalloc): ENCODING_AREA = len(r.MANAGED_REGS) * WORD self.mc.alloc_scratch_reg() From noreply at buildbot.pypy.org Thu Feb 9 21:38:08 2012 From: noreply at buildbot.pypy.org (edelsohn) Date: Thu, 9 Feb 2012 21:38:08 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: Add redirect_call_assembler. Message-ID: <20120209203808.42B1282B1E@wyvern.cs.uni-duesseldorf.de> Author: edelsohn Branch: ppc-jit-backend Changeset: r52332:9b7d5fc58a7f Date: 2012-02-09 15:34 -0500 http://bitbucket.org/pypy/pypy/changeset/9b7d5fc58a7f/ Log: Add redirect_call_assembler. diff --git a/pypy/jit/backend/ppc/runner.py b/pypy/jit/backend/ppc/runner.py --- a/pypy/jit/backend/ppc/runner.py +++ b/pypy/jit/backend/ppc/runner.py @@ -134,6 +134,9 @@ self.patch_list = None self.reg_map = None + def redirect_call_assembler(self, oldlooptoken, newlooptoken): + self.assembler.redirect_call_assembler(oldlooptoken, newlooptoken) + def invalidate_loop(self, looptoken): """Activate all GUARD_NOT_INVALIDATED in the loop and its attached bridges. Before this call, all GUARD_NOT_INVALIDATED do nothing; From noreply at buildbot.pypy.org Thu Feb 9 21:42:37 2012 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 9 Feb 2012 21:42:37 +0100 (CET) Subject: [pypy-commit] pypy stm-gc: Improve error reporting. Message-ID: <20120209204237.1921B82B1E@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-gc Changeset: r52333:77cae4242592 Date: 2012-02-09 18:21 +0100 http://bitbucket.org/pypy/pypy/changeset/77cae4242592/ Log: Improve error reporting. 
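The change below lets check_chained_list() in targetdemo.py walk the whole
list and remember the smallest value at which the invariant breaks, reporting
it once at the end, instead of raising as soon as the first mismatch is seen.
A minimal sketch of that reporting pattern, with invented names and data (not
the actual test):

    # Remember the earliest failing index rather than aborting at the first
    # mismatch: the earliest failure is usually the informative one when
    # later entries are wrong only as a consequence of it.
    def check_counts(seen, num_threads):
        errors = len(seen)                  # sentinel meaning "no error found"
        prev = num_threads
        for i, count in enumerate(seen):
            if count > prev:                # invariant: counts never increase
                errors = min(errors, i)
            prev = count
        if errors < len(seen):
            raise AssertionError("first inconsistency at index %d" % errors)

    check_counts([3, 3, 2, 1, 1], num_threads=3)      # invariant holds
    try:
        check_counts([3, 2, 3, 1, 1], num_threads=3)  # index 2 breaks it
    except AssertionError as e:
        print(e)
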
diff --git a/pypy/translator/stm/test/targetdemo.py b/pypy/translator/stm/test/targetdemo.py --- a/pypy/translator/stm/test/targetdemo.py +++ b/pypy/translator/stm/test/targetdemo.py @@ -34,6 +34,7 @@ def check_chained_list(node): seen = [0] * (glob.LENGTH+1) seen[-1] = glob.NUM_THREADS + errors = glob.LENGTH while node is not None: value = node.value #print value @@ -42,10 +43,14 @@ raise AssertionError seen[value] += 1 if seen[value] > seen[value-1]: - print "seen[%d] = %d, seen[%d] = %d" % (value-1, seen[value-1], - value, seen[value]) - raise AssertionError + errors = min(errors, value) node = node.next + if errors < glob.LENGTH: + value = errors + print "seen[%d] = %d, seen[%d] = %d" % (value-1, seen[value-1], + value, seen[value]) + raise AssertionError + if seen[glob.LENGTH-1] != glob.NUM_THREADS: print "seen[LENGTH-1] != NUM_THREADS" raise AssertionError From noreply at buildbot.pypy.org Thu Feb 9 22:00:36 2012 From: noreply at buildbot.pypy.org (fijal) Date: Thu, 9 Feb 2012 22:00:36 +0100 (CET) Subject: [pypy-commit] pypy numpy-record-dtypes: a no-progress checkin Message-ID: <20120209210036.8B73A82B1E@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-record-dtypes Changeset: r52334:0901f6dc83de Date: 2012-02-09 22:59 +0200 http://bitbucket.org/pypy/pypy/changeset/0901f6dc83de/ Log: a no-progress checkin diff --git a/pypy/module/micronumpy/interp_dtype.py b/pypy/module/micronumpy/interp_dtype.py --- a/pypy/module/micronumpy/interp_dtype.py +++ b/pypy/module/micronumpy/interp_dtype.py @@ -140,6 +140,9 @@ "V", space.gettypefor(interp_boxes.W_VoidBox), fields=fields, fieldnames=fieldnames) +def dtype_from_dict(space, w_dict): + xxx + def variable_dtype(space, name): if name[0] in '<>': # ignore byte order, not sure if it's worth it for unicode only diff --git a/pypy/module/micronumpy/test/test_dtypes.py b/pypy/module/micronumpy/test/test_dtypes.py --- a/pypy/module/micronumpy/test/test_dtypes.py +++ b/pypy/module/micronumpy/test/test_dtypes.py @@ -509,4 +509,5 @@ def test_create_from_dict(self): from _numpypy import dtype - d = dtype({...}) + d = dtype({'names': ['a', 'b', 'c'], + }) From noreply at buildbot.pypy.org Thu Feb 9 22:16:58 2012 From: noreply at buildbot.pypy.org (fijal) Date: Thu, 9 Feb 2012 22:16:58 +0100 (CET) Subject: [pypy-commit] pypy default: update docs Message-ID: <20120209211658.AF43182B1E@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: Changeset: r52335:5e8590c7ed02 Date: 2012-02-09 23:16 +0200 http://bitbucket.org/pypy/pypy/changeset/5e8590c7ed02/ Log: update docs diff --git a/pypy/doc/getting-started-python.rst b/pypy/doc/getting-started-python.rst --- a/pypy/doc/getting-started-python.rst +++ b/pypy/doc/getting-started-python.rst @@ -103,18 +103,22 @@ executable. The executable behaves mostly like a normal Python interpreter:: $ ./pypy-c - Python 2.7.0 (61ef2a11b56a, Mar 02 2011, 03:00:11) - [PyPy 1.6.0 with GCC 4.4.3] on linux2 + Python 2.7.2 (0e28b379d8b3, Feb 09 2012, 19:41:03) + [PyPy 1.8.0 with GCC 4.4.3] on linux2 Type "help", "copyright", "credits" or "license" for more information. 
And now for something completely different: ``this sentence is false'' >>>> 46 - 4 42 >>>> from test import pystone >>>> pystone.main() - Pystone(1.1) time for 50000 passes = 0.280017 - This machine benchmarks at 178561 pystones/second - >>>> + Pystone(1.1) time for 50000 passes = 0.220015 + This machine benchmarks at 227257 pystones/second + >>>> pystone.main() + Pystone(1.1) time for 50000 passes = 0.060004 + This machine benchmarks at 833278 pystones/second + >>>> +Note that pystone gets faster as the JIT kicks in. This executable can be moved around or copied on other machines; see Installation_ below. diff --git a/pypy/doc/getting-started.rst b/pypy/doc/getting-started.rst --- a/pypy/doc/getting-started.rst +++ b/pypy/doc/getting-started.rst @@ -55,11 +55,13 @@ $ tar xf pypy-1.8-linux.tar.bz2 $ ./pypy-1.8/bin/pypy - Python 2.7.1 (48ebdce33e1b, Feb 09 2012, 00:55:31) + Python 2.7.2 (0e28b379d8b3, Feb 09 2012, 19:41:03) [PyPy 1.8.0 with GCC 4.4.3] on linux2 Type "help", "copyright", "credits" or "license" for more information. - And now for something completely different: ``implementing LOGO in LOGO: - "turtles all the way down"'' + And now for something completely different: ``it seems to me that once you + settle on an execution / object model and / or bytecode format, you've already + decided what languages (where the 's' seems superfluous) support is going to be + first class for'' >>>> If you want to make PyPy available system-wide, you can put a symlink to the From noreply at buildbot.pypy.org Thu Feb 9 22:23:44 2012 From: noreply at buildbot.pypy.org (fijal) Date: Thu, 9 Feb 2012 22:23:44 +0100 (CET) Subject: [pypy-commit] pypy default: ignore more ops Message-ID: <20120209212344.7FBD382B1E@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: Changeset: r52336:4b90bae5c842 Date: 2012-02-09 23:23 +0200 http://bitbucket.org/pypy/pypy/changeset/4b90bae5c842/ Log: ignore more ops diff --git a/pypy/translator/c/gcc/trackgcroot.py b/pypy/translator/c/gcc/trackgcroot.py --- a/pypy/translator/c/gcc/trackgcroot.py +++ b/pypy/translator/c/gcc/trackgcroot.py @@ -478,6 +478,7 @@ 'cvt', 'ucomi', 'comi', 'subs', 'subp' , 'adds', 'addp', 'xorp', 'movap', 'movd', 'movlp', 'sqrtsd', 'movhpd', 'mins', 'minp', 'maxs', 'maxp', 'unpck', 'pxor', 'por', # sse2 + 'shufps', 'shufpd', # arithmetic operations should not produce GC pointers 'inc', 'dec', 'not', 'neg', 'or', 'and', 'sbb', 'adc', 'shl', 'shr', 'sal', 'sar', 'rol', 'ror', 'mul', 'imul', 'div', 'idiv', From noreply at buildbot.pypy.org Thu Feb 9 23:07:46 2012 From: noreply at buildbot.pypy.org (edelsohn) Date: Thu, 9 Feb 2012 23:07:46 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: Change PPC64 redirect_call_assembler to overwrite function descriptor at old address. Message-ID: <20120209220746.7126082B1E@wyvern.cs.uni-duesseldorf.de> Author: edelsohn Branch: ppc-jit-backend Changeset: r52337:4d8c2ac62599 Date: 2012-02-09 17:07 -0500 http://bitbucket.org/pypy/pypy/changeset/4d8c2ac62599/ Log: Change PPC64 redirect_call_assembler to overwrite function descriptor at old address. 
diff --git a/pypy/jit/backend/ppc/opassembler.py b/pypy/jit/backend/ppc/opassembler.py --- a/pypy/jit/backend/ppc/opassembler.py +++ b/pypy/jit/backend/ppc/opassembler.py @@ -1084,14 +1084,22 @@ old_nbargs = oldlooptoken.compiled_loop_token._debug_nbargs new_nbargs = newlooptoken.compiled_loop_token._debug_nbargs assert old_nbargs == new_nbargs - # we overwrite the instructions at the old _ppc_func_addr - # to start with a JMP to the new _ppc_func_addr. - # Ideally we should rather patch all existing CALLs, but well. oldadr = oldlooptoken._ppc_func_addr target = newlooptoken._ppc_func_addr - mc = PPCBuilder() - mc.b_abs(target) - mc.copy_to_raw_memory(oldadr) + if IS_PPC_64: + # PPC64 trampolines are data so overwrite the code address + # in the function descriptor at the old address + # (TOC and static chain pointer are the same). + odata = rffi.cast(rffi.CArrayPtr(lltype.Signed), oldadr) + tdata = rffi.cast(rffi.CArrayPtr(lltype.Signed), target) + odata[0] = tdata[0] + else: + # we overwrite the instructions at the old _ppc_func_addr + # to start with a JMP to the new _ppc_func_addr. + # Ideally we should rather patch all existing CALLs, but well. + mc = PPCBuilder() + mc.b_abs(target) + mc.copy_to_raw_memory(oldadr) def emit_guard_call_may_force(self, op, guard_op, arglocs, regalloc): ENCODING_AREA = len(r.MANAGED_REGS) * WORD From noreply at buildbot.pypy.org Fri Feb 10 00:59:30 2012 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Fri, 10 Feb 2012 00:59:30 +0100 (CET) Subject: [pypy-commit] pypy default: Added truediv to arrays. Message-ID: <20120209235930.BEEC682B1E@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: Changeset: r52338:70d922c2ef77 Date: 2012-02-09 18:59 -0500 http://bitbucket.org/pypy/pypy/changeset/70d922c2ef77/ Log: Added truediv to arrays. 
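As a usage note, the new __truediv__/__rtruediv__ hooks make "a / b" produce float results under true division. A small sketch drawn from the tests added below; it assumes an interpreter that provides the _numpypy module (plain numpy has the same semantics):

    from __future__ import division    # route / through __truediv__ on Python 2
    from _numpypy import arange        # assumption: a PyPy build with _numpypy

    a = arange(5)
    assert (a / 2 == [0.0, 0.5, 1.0, 1.5, 2.0]).all()         # __truediv__
    assert (2 / arange(3) == [float("inf"), 2.0, 1.0]).all()  # __rtruediv__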
diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -101,6 +101,7 @@ descr_sub = _binop_impl("subtract") descr_mul = _binop_impl("multiply") descr_div = _binop_impl("divide") + descr_truediv = _binop_impl("true_divide") descr_mod = _binop_impl("mod") descr_pow = _binop_impl("power") descr_lshift = _binop_impl("left_shift") @@ -134,6 +135,7 @@ descr_rsub = _binop_right_impl("subtract") descr_rmul = _binop_right_impl("multiply") descr_rdiv = _binop_right_impl("divide") + descr_rtruediv = _binop_right_impl("true_divide") descr_rmod = _binop_right_impl("mod") descr_rpow = _binop_right_impl("power") descr_rlshift = _binop_right_impl("left_shift") @@ -1251,6 +1253,7 @@ __sub__ = interp2app(BaseArray.descr_sub), __mul__ = interp2app(BaseArray.descr_mul), __div__ = interp2app(BaseArray.descr_div), + __truediv__ = interp2app(BaseArray.descr_truediv), __mod__ = interp2app(BaseArray.descr_mod), __divmod__ = interp2app(BaseArray.descr_divmod), __pow__ = interp2app(BaseArray.descr_pow), @@ -1264,6 +1267,7 @@ __rsub__ = interp2app(BaseArray.descr_rsub), __rmul__ = interp2app(BaseArray.descr_rmul), __rdiv__ = interp2app(BaseArray.descr_rdiv), + __rtruediv__ = interp2app(BaseArray.descr_rtruediv), __rmod__ = interp2app(BaseArray.descr_rmod), __rdivmod__ = interp2app(BaseArray.descr_rdivmod), __rpow__ = interp2app(BaseArray.descr_rpow), diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -625,6 +625,13 @@ for i in range(5): assert b[i] == i / 5.0 + def test_truediv(self): + from operator import truediv + from _numpypy import arange + + assert (truediv(arange(5), 2) == [0., .5, 1., 1.5, 2.]).all() + assert (truediv(2, arange(3)) == [float("inf"), 2., 1.]).all() + def test_divmod(self): from _numpypy import arange From noreply at buildbot.pypy.org Fri Feb 10 01:05:27 2012 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Fri, 10 Feb 2012 01:05:27 +0100 (CET) Subject: [pypy-commit] pypy default: expose bitwise xor ufunc Message-ID: <20120210000527.26AFD82B1E@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: Changeset: r52339:402f6ef96ef0 Date: 2012-02-09 19:05 -0500 http://bitbucket.org/pypy/pypy/changeset/402f6ef96ef0/ Log: expose bitwise xor ufunc diff --git a/pypy/module/micronumpy/__init__.py b/pypy/module/micronumpy/__init__.py --- a/pypy/module/micronumpy/__init__.py +++ b/pypy/module/micronumpy/__init__.py @@ -95,6 +95,7 @@ ("tan", "tan"), ('bitwise_and', 'bitwise_and'), ('bitwise_or', 'bitwise_or'), + ('bitwise_xor', 'bitwise_xor'), ('bitwise_not', 'invert'), ('isnan', 'isnan'), ('isinf', 'isinf'), diff --git a/pypy/module/micronumpy/test/test_ufuncs.py b/pypy/module/micronumpy/test/test_ufuncs.py --- a/pypy/module/micronumpy/test/test_ufuncs.py +++ b/pypy/module/micronumpy/test/test_ufuncs.py @@ -368,14 +368,14 @@ assert b.shape == (1, 4) assert (add.reduce(a, 0, keepdims=True) == [12, 15, 18, 21]).all() - def test_bitwise(self): - from _numpypy import bitwise_and, bitwise_or, arange, array + from _numpypy import bitwise_and, bitwise_or, bitwise_xor, arange, array a = arange(6).reshape(2, 3) assert (a & 1 == [[0, 1, 0], [1, 0, 1]]).all() assert (a & 1 == bitwise_and(a, 1)).all() assert (a | 1 == [[1, 1, 3], [3, 5, 5]]).all() assert (a | 1 == bitwise_or(a, 1)).all() + assert (a ^ 3 == 
bitwise_xor(a, 3)).all() raises(TypeError, 'array([1.0]) & 1') def test_unary_bitops(self): From noreply at buildbot.pypy.org Fri Feb 10 01:19:02 2012 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Fri, 10 Feb 2012 01:19:02 +0100 (CET) Subject: [pypy-commit] pypy default: rtruediv on numpy boxes Message-ID: <20120210001902.28E4982B1E@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: Changeset: r52340:a33b00bbf1cb Date: 2012-02-09 19:18 -0500 http://bitbucket.org/pypy/pypy/changeset/a33b00bbf1cb/ Log: rtruediv on numpy boxes diff --git a/pypy/module/micronumpy/interp_boxes.py b/pypy/module/micronumpy/interp_boxes.py --- a/pypy/module/micronumpy/interp_boxes.py +++ b/pypy/module/micronumpy/interp_boxes.py @@ -100,6 +100,7 @@ descr_rsub = _binop_right_impl("subtract") descr_rmul = _binop_right_impl("multiply") descr_rdiv = _binop_right_impl("divide") + descr_rtruediv = _binop_right_impl("true_divide") descr_rmod = _binop_right_impl("mod") descr_rpow = _binop_right_impl("power") descr_rlshift = _binop_right_impl("left_shift") @@ -216,6 +217,7 @@ __rsub__ = interp2app(W_GenericBox.descr_rsub), __rmul__ = interp2app(W_GenericBox.descr_rmul), __rdiv__ = interp2app(W_GenericBox.descr_rdiv), + __rtruediv__ = interp2app(W_GenericBox.descr_rtruediv), __rmod__ = interp2app(W_GenericBox.descr_rmod), __rdivmod__ = interp2app(W_GenericBox.descr_rdivmod), __rpow__ = interp2app(W_GenericBox.descr_rpow), diff --git a/pypy/module/micronumpy/test/test_dtypes.py b/pypy/module/micronumpy/test/test_dtypes.py --- a/pypy/module/micronumpy/test/test_dtypes.py +++ b/pypy/module/micronumpy/test/test_dtypes.py @@ -408,6 +408,7 @@ assert 5 / int_(2) == int_(2) assert truediv(int_(3), int_(2)) == float64(1.5) + assert truediv(3, int_(2)) == float64(1.5) assert int_(8) % int_(3) == int_(2) assert 8 % int_(3) == int_(2) assert divmod(int_(8), int_(3)) == (int_(2), int_(2)) From noreply at buildbot.pypy.org Fri Feb 10 01:22:05 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Fri, 10 Feb 2012 01:22:05 +0100 (CET) Subject: [pypy-commit] pypy default: datetime.utcfromtimestamp() used to store microseconds as floats. Message-ID: <20120210002205.735B582B1E@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: Changeset: r52341:8a9843bb99ad Date: 2012-02-10 01:19 +0100 http://bitbucket.org/pypy/pypy/changeset/8a9843bb99ad/ Log: datetime.utcfromtimestamp() used to store microseconds as floats. diff --git a/lib_pypy/datetime.py b/lib_pypy/datetime.py --- a/lib_pypy/datetime.py +++ b/lib_pypy/datetime.py @@ -1520,7 +1520,7 @@ def utcfromtimestamp(cls, t): "Construct a UTC datetime from a POSIX timestamp (like time.time())." t, frac = divmod(t, 1.0) - us = round(frac * 1e6) + us = int(round(frac * 1e6)) # If timestamp is less than one microsecond smaller than a # full second, us can be rounded up to 1000000. In this case, @@ -2125,3 +2125,7 @@ pretty bizarre, and a tzinfo subclass can override fromutc() if it is. 
""" +if not hasattr(_time, '__datetime__dict'): + _time.__datetime__dict = globals() +else: + globals().update(_time.__datetime__dict) diff --git a/pypy/module/test_lib_pypy/test_datetime.py b/pypy/module/test_lib_pypy/test_datetime.py --- a/pypy/module/test_lib_pypy/test_datetime.py +++ b/pypy/module/test_lib_pypy/test_datetime.py @@ -22,3 +22,7 @@ del os.environ["TZ"] else: os.environ["TZ"] = prev_tz + +def test_utcfromtimestamp_microsecond(): + dt = datetime.datetime.utcfromtimestamp(0) + assert isinstance(dt.microsecond, int) From noreply at buildbot.pypy.org Fri Feb 10 01:22:06 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Fri, 10 Feb 2012 01:22:06 +0100 (CET) Subject: [pypy-commit] pypy default: Skip this test on Windows Message-ID: <20120210002206.A19E582B1E@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: Changeset: r52342:12a8ddcd4671 Date: 2012-02-10 01:20 +0100 http://bitbucket.org/pypy/pypy/changeset/12a8ddcd4671/ Log: Skip this test on Windows diff --git a/pypy/module/_io/test/test_fileio.py b/pypy/module/_io/test/test_fileio.py --- a/pypy/module/_io/test/test_fileio.py +++ b/pypy/module/_io/test/test_fileio.py @@ -134,7 +134,10 @@ assert a == 'a\nbxxxxxxx' def test_nonblocking_read(self): - import os, fcntl + try: + import os, fcntl + except ImportError: + skip("need fcntl to set nonblocking mode") r_fd, w_fd = os.pipe() # set nonblocking fcntl.fcntl(r_fd, fcntl.F_SETFL, os.O_NONBLOCK) From noreply at buildbot.pypy.org Fri Feb 10 01:24:44 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Fri, 10 Feb 2012 01:24:44 +0100 (CET) Subject: [pypy-commit] pypy default: Oops, did not mean to commit this Message-ID: <20120210002444.1B11182B1E@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: Changeset: r52343:44bceade8267 Date: 2012-02-10 01:24 +0100 http://bitbucket.org/pypy/pypy/changeset/44bceade8267/ Log: Oops, did not mean to commit this diff --git a/lib_pypy/datetime.py b/lib_pypy/datetime.py --- a/lib_pypy/datetime.py +++ b/lib_pypy/datetime.py @@ -2125,7 +2125,3 @@ pretty bizarre, and a tzinfo subclass can override fromutc() if it is. """ -if not hasattr(_time, '__datetime__dict'): - _time.__datetime__dict = globals() -else: - globals().update(_time.__datetime__dict) From noreply at buildbot.pypy.org Fri Feb 10 01:50:20 2012 From: noreply at buildbot.pypy.org (edelsohn) Date: Fri, 10 Feb 2012 01:50:20 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: Fix typo. Message-ID: <20120210005020.EB4CC82B1E@wyvern.cs.uni-duesseldorf.de> Author: edelsohn Branch: ppc-jit-backend Changeset: r52344:bd11f83e6f9f Date: 2012-02-09 19:50 -0500 http://bitbucket.org/pypy/pypy/changeset/bd11f83e6f9f/ Log: Fix typo. diff --git a/pypy/jit/backend/ppc/runner.py b/pypy/jit/backend/ppc/runner.py --- a/pypy/jit/backend/ppc/runner.py +++ b/pypy/jit/backend/ppc/runner.py @@ -135,7 +135,7 @@ self.reg_map = None def redirect_call_assembler(self, oldlooptoken, newlooptoken): - self.assembler.redirect_call_assembler(oldlooptoken, newlooptoken) + self.asm.redirect_call_assembler(oldlooptoken, newlooptoken) def invalidate_loop(self, looptoken): """Activate all GUARD_NOT_INVALIDATED in the loop and its attached From noreply at buildbot.pypy.org Fri Feb 10 03:03:12 2012 From: noreply at buildbot.pypy.org (edelsohn) Date: Fri, 10 Feb 2012 03:03:12 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: Use IS_PPC_32 instead of IS_PPC_64. 
Message-ID: <20120210020312.3051F82B1E@wyvern.cs.uni-duesseldorf.de> Author: edelsohn Branch: ppc-jit-backend Changeset: r52345:d73e22ec0e6a Date: 2012-02-09 21:02 -0500 http://bitbucket.org/pypy/pypy/changeset/d73e22ec0e6a/ Log: Use IS_PPC_32 instead of IS_PPC_64. diff --git a/pypy/jit/backend/ppc/opassembler.py b/pypy/jit/backend/ppc/opassembler.py --- a/pypy/jit/backend/ppc/opassembler.py +++ b/pypy/jit/backend/ppc/opassembler.py @@ -1086,20 +1086,20 @@ assert old_nbargs == new_nbargs oldadr = oldlooptoken._ppc_func_addr target = newlooptoken._ppc_func_addr - if IS_PPC_64: + if IS_PPC_32: + # we overwrite the instructions at the old _ppc_func_addr + # to start with a JMP to the new _ppc_func_addr. + # Ideally we should rather patch all existing CALLs, but well. + mc = PPCBuilder() + mc.b_abs(target) + mc.copy_to_raw_memory(oldadr) + else: # PPC64 trampolines are data so overwrite the code address # in the function descriptor at the old address # (TOC and static chain pointer are the same). odata = rffi.cast(rffi.CArrayPtr(lltype.Signed), oldadr) tdata = rffi.cast(rffi.CArrayPtr(lltype.Signed), target) odata[0] = tdata[0] - else: - # we overwrite the instructions at the old _ppc_func_addr - # to start with a JMP to the new _ppc_func_addr. - # Ideally we should rather patch all existing CALLs, but well. - mc = PPCBuilder() - mc.b_abs(target) - mc.copy_to_raw_memory(oldadr) def emit_guard_call_may_force(self, op, guard_op, arglocs, regalloc): ENCODING_AREA = len(r.MANAGED_REGS) * WORD From noreply at buildbot.pypy.org Fri Feb 10 10:37:39 2012 From: noreply at buildbot.pypy.org (fijal) Date: Fri, 10 Feb 2012 10:37:39 +0100 (CET) Subject: [pypy-commit] pypy.org extradoc: update checksums Message-ID: <20120210093739.B8F6B8203C@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: extradoc Changeset: r312:f176c6f50ec6 Date: 2012-02-10 11:37 +0200 http://bitbucket.org/pypy/pypy.org/changeset/f176c6f50ec6/ Log: update checksums diff --git a/download.html b/download.html --- a/download.html +++ b/download.html @@ -192,14 +192,14 @@

    Checksums

    Here are the checksums for each of the downloads (md5 and sha1):

    -6dd134f20c0038f63f506a8192b1cfed  pypy-1.8-linux64.tar.bz2
    -b8563942704531374f121eca0b5f643b  pypy-1.8-linux.tar.bz2
    -65391dc681362bf9f911956d113ff79a  pypy-1.8-osx64.tar.bz2
    -fd0ad58b92ca0933c087bb93a82fda9e  release-1.7.tar.bz2
    -b9d2f69af4f8427685f49ad1558744f7a3f3f1b8  pypy-1.8-linux64.tar.bz2
    -eb5af839cfc22c625b77b645324c8bf3f1b7e03b  pypy-1.8-linux.tar.bz2
    -636ef5fd43478d23cf5faaffc92cb8dc187b2df1  pypy-1.8-osx64.tar.bz2
    -b4be3a8dc69cd838a49382867db3c41864b9e8d9  release-1.7.tar.bz2
    +3b81363ccbc042dfdda2fabbf419e788  pypy-1.8-linux64.tar.bz2
    +c4a1d11e0283a390d9e9b801a4633b9f  pypy-1.8-linux.tar.bz2
    +1c293253e8e4df411c3dd59dff82a663  pypy-1.8-osx64.tar.bz2
    +1af8ee722721e9f5fd06b61af530ecb3  pypy-1.8-win32.zip
    +a6bb7b277d5186385fd09b71ec4e35c9e93b380d  pypy-1.8-linux64.tar.bz2
    +089f4269a6079da2eabdeabd614f668f56c4121a  pypy-1.8-linux.tar.bz2
    +15b99f780b9714e3ebd82b2e41577afab232d148  pypy-1.8-osx64.tar.bz2
    +77a565b1cfa4874a0079c17edd1b458b20e67bfd  pypy-1.8-win32.zip
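The same digests are repeated for source/download.txt just below. As an aside, a downloaded tarball can be checked against the md5/sha1 lines above with a short Python sketch (the file name used here is only an example):

    import hashlib

    def digests(path, chunk_size=1 << 20):
        md5, sha1 = hashlib.md5(), hashlib.sha1()
        with open(path, 'rb') as f:
            for chunk in iter(lambda: f.read(chunk_size), ''):
                md5.update(chunk)
                sha1.update(chunk)
        return md5.hexdigest(), sha1.hexdigest()

    # usage (example file name): print digests('pypy-1.8-linux64.tar.bz2')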
     
    diff --git a/source/download.txt b/source/download.txt --- a/source/download.txt +++ b/source/download.txt @@ -197,14 +197,12 @@ Here are the checksums for each of the downloads (md5 and sha1):: - 6dd134f20c0038f63f506a8192b1cfed pypy-1.8-linux64.tar.bz2 - b8563942704531374f121eca0b5f643b pypy-1.8-linux.tar.bz2 - 65391dc681362bf9f911956d113ff79a pypy-1.8-osx64.tar.bz2 + 3b81363ccbc042dfdda2fabbf419e788 pypy-1.8-linux64.tar.bz2 + c4a1d11e0283a390d9e9b801a4633b9f pypy-1.8-linux.tar.bz2 + 1c293253e8e4df411c3dd59dff82a663 pypy-1.8-osx64.tar.bz2 + 1af8ee722721e9f5fd06b61af530ecb3 pypy-1.8-win32.zip - fd0ad58b92ca0933c087bb93a82fda9e release-1.7.tar.bz2 - - b9d2f69af4f8427685f49ad1558744f7a3f3f1b8 pypy-1.8-linux64.tar.bz2 - eb5af839cfc22c625b77b645324c8bf3f1b7e03b pypy-1.8-linux.tar.bz2 - 636ef5fd43478d23cf5faaffc92cb8dc187b2df1 pypy-1.8-osx64.tar.bz2 - - b4be3a8dc69cd838a49382867db3c41864b9e8d9 release-1.7.tar.bz2 + a6bb7b277d5186385fd09b71ec4e35c9e93b380d pypy-1.8-linux64.tar.bz2 + 089f4269a6079da2eabdeabd614f668f56c4121a pypy-1.8-linux.tar.bz2 + 15b99f780b9714e3ebd82b2e41577afab232d148 pypy-1.8-osx64.tar.bz2 + 77a565b1cfa4874a0079c17edd1b458b20e67bfd pypy-1.8-win32.zip From noreply at buildbot.pypy.org Fri Feb 10 10:41:34 2012 From: noreply at buildbot.pypy.org (fijal) Date: Fri, 10 Feb 2012 10:41:34 +0100 (CET) Subject: [pypy-commit] pypy default: reflow the para and add signature Message-ID: <20120210094134.5039E8203C@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: Changeset: r52346:38e0a3ed29c9 Date: 2012-02-10 11:39 +0200 http://bitbucket.org/pypy/pypy/changeset/38e0a3ed29c9/ Log: reflow the para and add signature diff --git a/pypy/doc/release-1.8.0.rst b/pypy/doc/release-1.8.0.rst --- a/pypy/doc/release-1.8.0.rst +++ b/pypy/doc/release-1.8.0.rst @@ -3,13 +3,15 @@ ============================ We're pleased to announce the 1.8 release of PyPy. As habitual this -release brings a lot of bugfixes, together with performance and memory improvements over -the 1.7 release. The main highlight of the release is the introduction of -`list strategies`_ which makes homogenous lists more efficient both in terms -of performance and memory. This release also upgrades us from Python 2.7.1 compatibility to 2.7.2. Otherwise it's "business as usual" in the sense -that performance improved roughly 10% on average since the previous release. +release brings a lot of bugfixes, together with performance and memory +improvements over the 1.7 release. The main highlight of the release +is the introduction of `list strategies`_ which makes homogenous lists +more efficient both in terms of performance and memory. This release +also upgrades us from Python 2.7.1 compatibility to 2.7.2. Otherwise +it's "business as usual" in the sense that performance improved +roughly 10% on average since the previous release. -You can download the PyPy 1.8 release here: +you can download the PyPy 1.8 release here: http://pypy.org/download.html @@ -85,6 +87,9 @@ * It's also probably worth noting, we're considering donations for the Software Transactional Memory project. You can read more about `our plans`_ +Cheers, +The PyPy Team + .. _`brief overview`: http://doc.pypy.org/en/latest/jit-hooks.html .. _`numpy status page`: http://buildbot.pypy.org/numpy-status/latest.html .. 
_`numpy status update blog report`: http://morepypy.blogspot.com/2012/01/numpypy-status-update.html From noreply at buildbot.pypy.org Fri Feb 10 10:41:36 2012 From: noreply at buildbot.pypy.org (fijal) Date: Fri, 10 Feb 2012 10:41:36 +0100 (CET) Subject: [pypy-commit] pypy default: merge Message-ID: <20120210094136.028C18203C@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: Changeset: r52347:78f7cb05d2e8 Date: 2012-02-10 11:41 +0200 http://bitbucket.org/pypy/pypy/changeset/78f7cb05d2e8/ Log: merge diff --git a/lib_pypy/datetime.py b/lib_pypy/datetime.py --- a/lib_pypy/datetime.py +++ b/lib_pypy/datetime.py @@ -1520,7 +1520,7 @@ def utcfromtimestamp(cls, t): "Construct a UTC datetime from a POSIX timestamp (like time.time())." t, frac = divmod(t, 1.0) - us = round(frac * 1e6) + us = int(round(frac * 1e6)) # If timestamp is less than one microsecond smaller than a # full second, us can be rounded up to 1000000. In this case, diff --git a/pypy/module/_io/test/test_fileio.py b/pypy/module/_io/test/test_fileio.py --- a/pypy/module/_io/test/test_fileio.py +++ b/pypy/module/_io/test/test_fileio.py @@ -134,7 +134,10 @@ assert a == 'a\nbxxxxxxx' def test_nonblocking_read(self): - import os, fcntl + try: + import os, fcntl + except ImportError: + skip("need fcntl to set nonblocking mode") r_fd, w_fd = os.pipe() # set nonblocking fcntl.fcntl(r_fd, fcntl.F_SETFL, os.O_NONBLOCK) diff --git a/pypy/module/micronumpy/__init__.py b/pypy/module/micronumpy/__init__.py --- a/pypy/module/micronumpy/__init__.py +++ b/pypy/module/micronumpy/__init__.py @@ -95,6 +95,7 @@ ("tan", "tan"), ('bitwise_and', 'bitwise_and'), ('bitwise_or', 'bitwise_or'), + ('bitwise_xor', 'bitwise_xor'), ('bitwise_not', 'invert'), ('isnan', 'isnan'), ('isinf', 'isinf'), diff --git a/pypy/module/micronumpy/interp_boxes.py b/pypy/module/micronumpy/interp_boxes.py --- a/pypy/module/micronumpy/interp_boxes.py +++ b/pypy/module/micronumpy/interp_boxes.py @@ -100,6 +100,7 @@ descr_rsub = _binop_right_impl("subtract") descr_rmul = _binop_right_impl("multiply") descr_rdiv = _binop_right_impl("divide") + descr_rtruediv = _binop_right_impl("true_divide") descr_rmod = _binop_right_impl("mod") descr_rpow = _binop_right_impl("power") descr_rlshift = _binop_right_impl("left_shift") @@ -216,6 +217,7 @@ __rsub__ = interp2app(W_GenericBox.descr_rsub), __rmul__ = interp2app(W_GenericBox.descr_rmul), __rdiv__ = interp2app(W_GenericBox.descr_rdiv), + __rtruediv__ = interp2app(W_GenericBox.descr_rtruediv), __rmod__ = interp2app(W_GenericBox.descr_rmod), __rdivmod__ = interp2app(W_GenericBox.descr_rdivmod), __rpow__ = interp2app(W_GenericBox.descr_rpow), diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -101,6 +101,7 @@ descr_sub = _binop_impl("subtract") descr_mul = _binop_impl("multiply") descr_div = _binop_impl("divide") + descr_truediv = _binop_impl("true_divide") descr_mod = _binop_impl("mod") descr_pow = _binop_impl("power") descr_lshift = _binop_impl("left_shift") @@ -134,6 +135,7 @@ descr_rsub = _binop_right_impl("subtract") descr_rmul = _binop_right_impl("multiply") descr_rdiv = _binop_right_impl("divide") + descr_rtruediv = _binop_right_impl("true_divide") descr_rmod = _binop_right_impl("mod") descr_rpow = _binop_right_impl("power") descr_rlshift = _binop_right_impl("left_shift") @@ -1251,6 +1253,7 @@ __sub__ = interp2app(BaseArray.descr_sub), __mul__ = interp2app(BaseArray.descr_mul), 
__div__ = interp2app(BaseArray.descr_div), + __truediv__ = interp2app(BaseArray.descr_truediv), __mod__ = interp2app(BaseArray.descr_mod), __divmod__ = interp2app(BaseArray.descr_divmod), __pow__ = interp2app(BaseArray.descr_pow), @@ -1264,6 +1267,7 @@ __rsub__ = interp2app(BaseArray.descr_rsub), __rmul__ = interp2app(BaseArray.descr_rmul), __rdiv__ = interp2app(BaseArray.descr_rdiv), + __rtruediv__ = interp2app(BaseArray.descr_rtruediv), __rmod__ = interp2app(BaseArray.descr_rmod), __rdivmod__ = interp2app(BaseArray.descr_rdivmod), __rpow__ = interp2app(BaseArray.descr_rpow), diff --git a/pypy/module/micronumpy/test/test_dtypes.py b/pypy/module/micronumpy/test/test_dtypes.py --- a/pypy/module/micronumpy/test/test_dtypes.py +++ b/pypy/module/micronumpy/test/test_dtypes.py @@ -408,6 +408,7 @@ assert 5 / int_(2) == int_(2) assert truediv(int_(3), int_(2)) == float64(1.5) + assert truediv(3, int_(2)) == float64(1.5) assert int_(8) % int_(3) == int_(2) assert 8 % int_(3) == int_(2) assert divmod(int_(8), int_(3)) == (int_(2), int_(2)) diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -625,6 +625,13 @@ for i in range(5): assert b[i] == i / 5.0 + def test_truediv(self): + from operator import truediv + from _numpypy import arange + + assert (truediv(arange(5), 2) == [0., .5, 1., 1.5, 2.]).all() + assert (truediv(2, arange(3)) == [float("inf"), 2., 1.]).all() + def test_divmod(self): from _numpypy import arange diff --git a/pypy/module/micronumpy/test/test_ufuncs.py b/pypy/module/micronumpy/test/test_ufuncs.py --- a/pypy/module/micronumpy/test/test_ufuncs.py +++ b/pypy/module/micronumpy/test/test_ufuncs.py @@ -368,14 +368,14 @@ assert b.shape == (1, 4) assert (add.reduce(a, 0, keepdims=True) == [12, 15, 18, 21]).all() - def test_bitwise(self): - from _numpypy import bitwise_and, bitwise_or, arange, array + from _numpypy import bitwise_and, bitwise_or, bitwise_xor, arange, array a = arange(6).reshape(2, 3) assert (a & 1 == [[0, 1, 0], [1, 0, 1]]).all() assert (a & 1 == bitwise_and(a, 1)).all() assert (a | 1 == [[1, 1, 3], [3, 5, 5]]).all() assert (a | 1 == bitwise_or(a, 1)).all() + assert (a ^ 3 == bitwise_xor(a, 3)).all() raises(TypeError, 'array([1.0]) & 1') def test_unary_bitops(self): diff --git a/pypy/module/test_lib_pypy/test_datetime.py b/pypy/module/test_lib_pypy/test_datetime.py --- a/pypy/module/test_lib_pypy/test_datetime.py +++ b/pypy/module/test_lib_pypy/test_datetime.py @@ -22,3 +22,7 @@ del os.environ["TZ"] else: os.environ["TZ"] = prev_tz + +def test_utcfromtimestamp_microsecond(): + dt = datetime.datetime.utcfromtimestamp(0) + assert isinstance(dt.microsecond, int) From noreply at buildbot.pypy.org Fri Feb 10 14:00:34 2012 From: noreply at buildbot.pypy.org (hager) Date: Fri, 10 Feb 2012 14:00:34 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: (bivab, hager): add test to test_runner.py Message-ID: <20120210130034.A7BF88203C@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r52348:7c26a3f31048 Date: 2012-02-10 14:00 +0100 http://bitbucket.org/pypy/pypy/changeset/7c26a3f31048/ Log: (bivab, hager): add test to test_runner.py diff --git a/pypy/jit/backend/ppc/test/test_runner.py b/pypy/jit/backend/ppc/test/test_runner.py --- a/pypy/jit/backend/ppc/test/test_runner.py +++ b/pypy/jit/backend/ppc/test/test_runner.py @@ -54,4 +54,27 @@ args = [i+1 for i in range(numargs)] res = 
self.cpu.execute_token(looptoken, *args) assert self.cpu.get_latest_value_int(0) == sum(args) + + def test_return_spilled_args(self): + numargs = 50 + for _ in range(numargs): + self.cpu.reserve_some_free_fail_descr_number() + ops = [] + arglist = "[%s]\n" % ", ".join(["i%d" % i for i in range(numargs)]) + ops.append(arglist) + # spill every inputarg + for i in range(numargs): + ops.append("force_spill(i%d)\n" % i) + ops.append("guard_value(i0, -1) %s" % arglist) + ops = "".join(ops) + loop = parse(ops) + looptoken = JitCellToken() + done_number = self.cpu.get_fail_descr_number(loop.operations[-1].getdescr()) + self.cpu.compile_loop(loop.inputargs, loop.operations, looptoken) + ARGS = [lltype.Signed] * numargs + RES = lltype.Signed + args = [i+1 for i in range(numargs)] + res = self.cpu.execute_token(looptoken, *args) + for i in range(numargs): + assert self.cpu.get_latest_value_int(i) == i + 1 From noreply at buildbot.pypy.org Fri Feb 10 14:45:53 2012 From: noreply at buildbot.pypy.org (hager) Date: Fri, 10 Feb 2012 14:45:53 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: (bivab, hager): expand test, but unfortunately does not hit the issue Message-ID: <20120210134553.3F34B8203C@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r52349:b4770cc6bd95 Date: 2012-02-10 14:45 +0100 http://bitbucket.org/pypy/pypy/changeset/b4770cc6bd95/ Log: (bivab, hager): expand test, but unfortunately does not hit the issue diff --git a/pypy/jit/backend/ppc/test/test_runner.py b/pypy/jit/backend/ppc/test/test_runner.py --- a/pypy/jit/backend/ppc/test/test_runner.py +++ b/pypy/jit/backend/ppc/test/test_runner.py @@ -62,6 +62,7 @@ ops = [] arglist = "[%s]\n" % ", ".join(["i%d" % i for i in range(numargs)]) ops.append(arglist) + # spill every inputarg for i in range(numargs): ops.append("force_spill(i%d)\n" % i) @@ -69,12 +70,26 @@ ops = "".join(ops) loop = parse(ops) looptoken = JitCellToken() - done_number = self.cpu.get_fail_descr_number(loop.operations[-1].getdescr()) + faildescr = loop.operations[-1].getdescr() + done_number = self.cpu.get_fail_descr_number(faildescr) self.cpu.compile_loop(loop.inputargs, loop.operations, looptoken) ARGS = [lltype.Signed] * numargs RES = lltype.Signed args = [i+1 for i in range(numargs)] res = self.cpu.execute_token(looptoken, *args) + assert res is faildescr for i in range(numargs): assert self.cpu.get_latest_value_int(i) == i + 1 - + + bridgeops = [arglist] + bridgeops.append("guard_value(i1, -5) %s" % arglist) + bridgeops = "".join(bridgeops) + bridge = parse(bridgeops) + faildescr2 = bridge.operations[-1].getdescr() + + self.cpu.compile_bridge(faildescr, bridge.inputargs, bridge.operations, looptoken) + res2 = self.cpu.execute_token(looptoken, *args) + assert res2 is faildescr2 + for i in range(numargs): + assert self.cpu.get_latest_value_int(i) == i + 1 + From noreply at buildbot.pypy.org Fri Feb 10 14:49:47 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Fri, 10 Feb 2012 14:49:47 +0100 (CET) Subject: [pypy-commit] pypy py3k: kill this test: python3 no longer supports the "raise with traceback" form: we Message-ID: <20120210134947.914798203C@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52350:b0752a8aa176 Date: 2012-02-10 10:48 +0100 http://bitbucket.org/pypy/pypy/changeset/b0752a8aa176/ Log: kill this test: python3 no longer supports the "raise with traceback" form: we can do something similar by setting e.__traceback__, but then the raise is traced normally, hence this test will never work 
on python3 (and it fails on cpython) diff --git a/pypy/interpreter/test/test_pyframe.py b/pypy/interpreter/test/test_pyframe.py --- a/pypy/interpreter/test/test_pyframe.py +++ b/pypy/interpreter/test/test_pyframe.py @@ -453,31 +453,6 @@ assert len(l) == 1 assert issubclass(l[0][0], Exception) - def test_dont_trace_on_raise_with_tb(self): - import sys - l = [] - def ltrace(a,b,c): - if b == 'exception': - l.append(c) - return ltrace - def trace(a,b,c): return ltrace - def f(): - try: - raise Exception - except: - return sys.exc_info() - def g(): - exc, val, tb = f() - try: - raise exc, val, tb - except: - pass - sys.settrace(trace) - g() - sys.settrace(None) - assert len(l) == 1 - assert isinstance(l[0][1], Exception) - def test_trace_changes_locals(self): import sys def trace(frame, what, arg): From noreply at buildbot.pypy.org Fri Feb 10 14:49:48 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Fri, 10 Feb 2012 14:49:48 +0100 (CET) Subject: [pypy-commit] pypy py3k: adapt the syntax to py3k, and kill some outdated tests about the 'raise Type, args' form which is no longer valid Message-ID: <20120210134948.CE0328203C@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52351:6928fe527941 Date: 2012-02-10 10:50 +0100 http://bitbucket.org/pypy/pypy/changeset/6928fe527941/ Log: adapt the syntax to py3k, and kill some outdated tests about the 'raise Type, args' form which is no longer valid diff --git a/pypy/interpreter/test/test_raise.py b/pypy/interpreter/test/test_raise.py --- a/pypy/interpreter/test/test_raise.py +++ b/pypy/interpreter/test/test_raise.py @@ -9,43 +9,23 @@ def test_control_flow(self): try: raise Exception - raise AssertionError, "exception failed to raise" + raise AssertionError("exception failed to raise") except: pass else: - raise AssertionError, "exception executing else clause!" + raise AssertionError("exception executing else clause!") - def test_1arg(self): + def test_args(self): try: - raise SystemError, 1 - except Exception, e: - assert e.args[0] == 1 - - def test_2args(self): - try: - raise SystemError, (1, 2) - except Exception, e: - assert e.args[0] == 1 - assert e.args[1] == 2 - - def test_instancearg(self): - try: - raise SystemError, SystemError(1, 2) - except Exception, e: - assert e.args[0] == 1 - assert e.args[1] == 2 - - def test_more_precise_instancearg(self): - try: - raise Exception, SystemError(1, 2) - except SystemError, e: + raise SystemError(1, 2) + except Exception as e: assert e.args[0] == 1 assert e.args[1] == 2 def test_builtin_exc(self): try: [][0] - except IndexError, e: + except IndexError as e: assert isinstance(e, IndexError) def test_raise_cls(self): @@ -68,7 +48,7 @@ except TypeError: pass else: - raise AssertionError, "shouldn't be able to raise 1" + raise AssertionError("shouldn't be able to raise 1") def test_raise_three_args(self): import sys From noreply at buildbot.pypy.org Fri Feb 10 14:49:51 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Fri, 10 Feb 2012 14:49:51 +0100 (CET) Subject: [pypy-commit] pypy py3k: python3 no longer supports the form 'raise Type, value, tb'. Instead, we can use __traceback__. Adapt the test to the new semantics: it passes with -A but still fails on py.py Message-ID: <20120210134951.4C29E8203C@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52352:c369a075fa90 Date: 2012-02-10 11:49 +0100 http://bitbucket.org/pypy/pypy/changeset/c369a075fa90/ Log: python3 no longer supports the form 'raise Type, value, tb'. 
Instead, we can use __traceback__. Adapt the test to the new semantics: it passes with -A but still fails on py.py diff --git a/pypy/interpreter/test/test_raise.py b/pypy/interpreter/test/test_raise.py --- a/pypy/interpreter/test/test_raise.py +++ b/pypy/interpreter/test/test_raise.py @@ -50,19 +50,20 @@ else: raise AssertionError("shouldn't be able to raise 1") - def test_raise_three_args(self): + def test_raise_with___traceback__(self): import sys try: raise ValueError except: exc_type,exc_val,exc_tb = sys.exc_info() try: - raise exc_type,exc_val,exc_tb + exc_val.__traceback__ = exc_tb + raise exc_val except: exc_type2,exc_val2,exc_tb2 = sys.exc_info() - assert exc_type ==exc_type2 - assert exc_val ==exc_val2 - assert exc_tb ==exc_tb2 + assert exc_type is exc_type2 + assert exc_val is exc_val2 + assert exc_tb is exc_tb2.tb_next def test_reraise(self): # some collection of funny code From noreply at buildbot.pypy.org Fri Feb 10 15:36:29 2012 From: noreply at buildbot.pypy.org (hager) Date: Fri, 10 Feb 2012 15:36:29 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: (bivab, hager): port regalloc tests from x86 backend Message-ID: <20120210143629.5B0B98203C@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r52353:e27ea20c9ee3 Date: 2012-02-10 15:36 +0100 http://bitbucket.org/pypy/pypy/changeset/e27ea20c9ee3/ Log: (bivab, hager): port regalloc tests from x86 backend diff --git a/pypy/jit/backend/ppc/test/test_regalloc_2.py b/pypy/jit/backend/ppc/test/test_regalloc_2.py new file mode 100644 --- /dev/null +++ b/pypy/jit/backend/ppc/test/test_regalloc_2.py @@ -0,0 +1,702 @@ + +""" Tests for register allocation for common constructs +""" + +import py +from pypy.jit.metainterp.history import BoxInt, ConstInt,\ + BoxPtr, ConstPtr, BasicFailDescr, JitCellToken, TargetToken +from pypy.jit.metainterp.resoperation import rop, ResOperation +from pypy.jit.backend.llsupport.descr import GcCache +from pypy.jit.backend.detect_cpu import getcpuclass +from pypy.jit.backend.ppc.regalloc import Regalloc, PPCRegisterManager,\ + PPCFrameManager +from pypy.jit.backend.ppc.arch import IS_PPC_32, IS_PPC_64, MAX_REG_PARAMS +from pypy.jit.tool.oparser import parse +from pypy.rpython.lltypesystem import lltype, llmemory, rffi +from pypy.rpython.annlowlevel import llhelper +from pypy.rpython.lltypesystem import rclass, rstr +from pypy.jit.codewriter import longlong +from pypy.jit.codewriter.effectinfo import EffectInfo +from pypy.jit.backend.llsupport.regalloc import is_comparison_or_ovf_op + + +def test_is_comparison_or_ovf_op(): + assert not is_comparison_or_ovf_op(rop.INT_ADD) + assert is_comparison_or_ovf_op(rop.INT_ADD_OVF) + assert is_comparison_or_ovf_op(rop.INT_EQ) + +CPU = getcpuclass() +class MockGcDescr(GcCache): + def get_funcptr_for_new(self): + return 123 + get_funcptr_for_newarray = get_funcptr_for_new + get_funcptr_for_newstr = get_funcptr_for_new + get_funcptr_for_newunicode = get_funcptr_for_new + + def rewrite_assembler(self, cpu, operations): + pass + +class MockAssembler(object): + gcrefs = None + _float_constants = None + + def __init__(self, cpu=None, gc_ll_descr=None): + self.movs = [] + self.performs = [] + self.lea = [] + if cpu is None: + cpu = CPU(None, None) + cpu.setup_once() + self.cpu = cpu + if gc_ll_descr is None: + gc_ll_descr = MockGcDescr(False) + self.cpu.gc_ll_descr = gc_ll_descr + + def dump(self, *args): + pass + + def regalloc_mov(self, from_loc, to_loc): + self.movs.append((from_loc, to_loc)) + + def regalloc_perform(self, op, arglocs, 
resloc): + self.performs.append((op, arglocs, resloc)) + + def regalloc_perform_discard(self, op, arglocs): + self.performs.append((op, arglocs)) + + def load_effective_addr(self, *args): + self.lea.append(args) + +def fill_regs(regalloc, cls=BoxInt): + allboxes = [] + for reg in PPCRegisterManager.all_regs: + box = cls() + allboxes.append(box) + regalloc.rm.try_allocate_reg() + return allboxes + +class RegAllocForTests(Regalloc): + position = 0 + def _compute_next_usage(self, v, _): + return -1 + +class BaseTestRegalloc(object): + cpu = CPU(None, None) + cpu.setup_once() + + def supports_float(self): + return self.cpu.supports_floats + + def raising_func(i): + if i: + raise LLException(zero_division_error, + zero_division_value) + FPTR = lltype.Ptr(lltype.FuncType([lltype.Signed], lltype.Void)) + raising_fptr = llhelper(FPTR, raising_func) + zero_division_tp, zero_division_value = cpu.get_zero_division_error() + zd_addr = cpu.cast_int_to_adr(zero_division_tp) + zero_division_error = llmemory.cast_adr_to_ptr(zd_addr, + lltype.Ptr(rclass.OBJECT_VTABLE)) + raising_calldescr = cpu.calldescrof(FPTR.TO, FPTR.TO.ARGS, FPTR.TO.RESULT, + EffectInfo.MOST_GENERAL) + + targettoken = TargetToken() + targettoken2 = TargetToken() + fdescr1 = BasicFailDescr(1) + fdescr2 = BasicFailDescr(2) + fdescr3 = BasicFailDescr(3) + + def setup_method(self, meth): + self.targettoken._ppc_loop_code = 0 + self.targettoken2._ppc_loop_code = 0 + + def f1(x): + return x+1 + + def f2(x, y): + return x*y + + def f10(*args): + assert len(args) == 10 + return sum(args) + + F1PTR = lltype.Ptr(lltype.FuncType([lltype.Signed], lltype.Signed)) + F2PTR = lltype.Ptr(lltype.FuncType([lltype.Signed]*2, lltype.Signed)) + F10PTR = lltype.Ptr(lltype.FuncType([lltype.Signed]*10, lltype.Signed)) + f1ptr = llhelper(F1PTR, f1) + f2ptr = llhelper(F2PTR, f2) + f10ptr = llhelper(F10PTR, f10) + + f1_calldescr = cpu.calldescrof(F1PTR.TO, F1PTR.TO.ARGS, F1PTR.TO.RESULT, + EffectInfo.MOST_GENERAL) + f2_calldescr = cpu.calldescrof(F2PTR.TO, F2PTR.TO.ARGS, F2PTR.TO.RESULT, + EffectInfo.MOST_GENERAL) + f10_calldescr= cpu.calldescrof(F10PTR.TO, F10PTR.TO.ARGS, F10PTR.TO.RESULT, + EffectInfo.MOST_GENERAL) + + namespace = locals().copy() + type_system = 'lltype' + + def parse(self, s, boxkinds=None): + return parse(s, self.cpu, self.namespace, + type_system=self.type_system, + boxkinds=boxkinds) + + def interpret(self, ops, args, run=True): + loop = self.parse(ops) + looptoken = JitCellToken() + self.cpu.compile_loop(loop.inputargs, loop.operations, looptoken) + arguments = [] + for arg in args: + if isinstance(arg, int): + arguments.append(arg) + elif isinstance(arg, float): + arg = longlong.getfloatstorage(arg) + arguments.append(arg) + else: + assert isinstance(lltype.typeOf(arg), lltype.Ptr) + llgcref = lltype.cast_opaque_ptr(llmemory.GCREF, arg) + arguments.append(llgcref) + loop._jitcelltoken = looptoken + if run: + self.cpu.execute_token(looptoken, *arguments) + return loop + + def prepare_loop(self, ops): + loop = self.parse(ops) + regalloc = Regalloc(assembler=self.cpu.asm, + frame_manager=PPCFrameManager()) + regalloc.prepare_loop(loop.inputargs, loop.operations) + return regalloc + + def getint(self, index): + return self.cpu.get_latest_value_int(index) + + def getfloat(self, index): + return self.cpu.get_latest_value_float(index) + + def getints(self, end): + return [self.cpu.get_latest_value_int(index) for + index in range(0, end)] + + def getfloats(self, end): + return [longlong.getrealfloat(self.cpu.get_latest_value_float(index)) + for 
index in range(0, end)] + + def getptr(self, index, T): + gcref = self.cpu.get_latest_value_ref(index) + return lltype.cast_opaque_ptr(T, gcref) + + def attach_bridge(self, ops, loop, guard_op_index, **kwds): + guard_op = loop.operations[guard_op_index] + assert guard_op.is_guard() + bridge = self.parse(ops, **kwds) + assert ([box.type for box in bridge.inputargs] == + [box.type for box in guard_op.getfailargs()]) + faildescr = guard_op.getdescr() + self.cpu.compile_bridge(faildescr, bridge.inputargs, bridge.operations, + loop._jitcelltoken) + return bridge + + def run(self, loop, *arguments): + return self.cpu.execute_token(loop._jitcelltoken, *arguments) + +class TestRegallocSimple(BaseTestRegalloc): + def test_simple_loop(self): + ops = ''' + [i0] + label(i0, descr=targettoken) + i1 = int_add(i0, 1) + i2 = int_lt(i1, 20) + guard_true(i2) [i1] + jump(i1, descr=targettoken) + ''' + self.interpret(ops, [0]) + assert self.getint(0) == 20 + + def test_two_loops_and_a_bridge(self): + ops = ''' + [i0, i1, i2, i3] + label(i0, i1, i2, i3, descr=targettoken) + i4 = int_add(i0, 1) + i5 = int_lt(i4, 20) + guard_true(i5) [i4, i1, i2, i3] + jump(i4, i1, i2, i3, descr=targettoken) + ''' + loop = self.interpret(ops, [0, 0, 0, 0]) + ops2 = ''' + [i5, i6, i7, i8] + label(i5, descr=targettoken2) + i1 = int_add(i5, 1) + i3 = int_add(i1, 1) + i4 = int_add(i3, 1) + i2 = int_lt(i4, 30) + guard_true(i2) [i4] + jump(i4, descr=targettoken2) + ''' + loop2 = self.interpret(ops2, [0, 0, 0, 0]) + bridge_ops = ''' + [i4] + jump(i4, i4, i4, i4, descr=targettoken) + ''' + bridge = self.attach_bridge(bridge_ops, loop2, 5) + self.run(loop2, 0, 0, 0, 0) + assert self.getint(0) == 31 + assert self.getint(1) == 30 + assert self.getint(2) == 30 + assert self.getint(3) == 30 + + def test_pointer_arg(self): + ops = ''' + [i0, p0] + label(i0, p0, descr=targettoken) + i1 = int_add(i0, 1) + i2 = int_lt(i1, 10) + guard_true(i2) [p0] + jump(i1, p0, descr=targettoken) + ''' + S = lltype.GcStruct('S') + ptr = lltype.malloc(S) + self.cpu.clear_latest_values(2) + self.interpret(ops, [0, ptr]) + assert self.getptr(0, lltype.Ptr(S)) == ptr + + def test_exception_bridge_no_exception(self): + ops = ''' + [i0] + i1 = same_as(1) + call(ConstClass(raising_fptr), i0, descr=raising_calldescr) + guard_exception(ConstClass(zero_division_error)) [i1] + finish(0) + ''' + bridge_ops = ''' + [i3] + i2 = same_as(2) + guard_no_exception() [i2] + finish(1) + ''' + loop = self.interpret(ops, [0]) + assert self.getint(0) == 1 + bridge = self.attach_bridge(bridge_ops, loop, 2) + self.run(loop, 0) + assert self.getint(0) == 1 + + def test_inputarg_unused(self): + ops = ''' + [i0] + finish(1) + ''' + self.interpret(ops, [0]) + # assert did not explode + + def test_nested_guards(self): + ops = ''' + [i0, i1] + guard_true(i0) [i0, i1] + finish(4) + ''' + bridge_ops = ''' + [i0, i1] + guard_true(i0) [i0, i1] + finish(3) + ''' + loop = self.interpret(ops, [0, 10]) + assert self.getint(0) == 0 + assert self.getint(1) == 10 + bridge = self.attach_bridge(bridge_ops, loop, 0) + self.run(loop, 0, 10) + assert self.getint(0) == 0 + assert self.getint(1) == 10 + + def test_nested_unused_arg(self): + ops = ''' + [i0, i1] + guard_true(i0) [i0, i1] + finish(1) + ''' + loop = self.interpret(ops, [0, 1]) + assert self.getint(0) == 0 + bridge_ops = ''' + [i0, i1] + finish(1, 2) + ''' + self.attach_bridge(bridge_ops, loop, 0) + self.run(loop, 0, 1) + + def test_spill_for_constant(self): + ops = ''' + [i0, i1, i2, i3] + label(i0, i1, i2, i3, descr=targettoken) + i4 = 
int_add(3, i1) + i5 = int_lt(i4, 30) + guard_true(i5) [i0, i4, i2, i3] + jump(1, i4, 3, 4, descr=targettoken) + ''' + self.interpret(ops, [0, 0, 0, 0]) + assert self.getints(4) == [1, 30, 3, 4] + + def test_spill_for_constant_lshift(self): + ops = ''' + [i0, i2, i1, i3] + label(i0, i2, i1, i3, descr=targettoken) + i4 = int_lshift(1, i1) + i5 = int_add(1, i1) + i6 = int_lt(i5, 30) + guard_true(i6) [i4, i5, i2, i3] + jump(i4, 3, i5, 4, descr=targettoken) + ''' + self.interpret(ops, [0, 0, 0, 0]) + assert self.getints(4) == [1<<29, 30, 3, 4] + ops = ''' + [i0, i1, i2, i3] + label(i0, i1, i2, i3, descr=targettoken) + i4 = int_lshift(1, i1) + i5 = int_add(1, i1) + i6 = int_lt(i5, 30) + guard_true(i6) [i4, i5, i2, i3] + jump(i4, i5, 3, 4, descr=targettoken) + ''' + self.interpret(ops, [0, 0, 0, 0]) + assert self.getints(4) == [1<<29, 30, 3, 4] + ops = ''' + [i0, i3, i1, i2] + label(i0, i3, i1, i2, descr=targettoken) + i4 = int_lshift(1, i1) + i5 = int_add(1, i1) + i6 = int_lt(i5, 30) + guard_true(i6) [i4, i5, i2, i3] + jump(i4, 4, i5, 3, descr=targettoken) + ''' + self.interpret(ops, [0, 0, 0, 0]) + assert self.getints(4) == [1<<29, 30, 3, 4] + + def test_result_selected_reg_via_neg(self): + ops = ''' + [i0, i1, i2, i3] + label(i0, i1, i2, i3, descr=targettoken) + i6 = int_neg(i2) + i7 = int_add(1, i1) + i4 = int_lt(i7, 10) + guard_true(i4) [i0, i6, i7] + jump(1, i7, i2, i6, descr=targettoken) + ''' + self.interpret(ops, [0, 0, 3, 0]) + assert self.getints(3) == [1, -3, 10] + + def test_compare_memory_result_survives(self): + ops = ''' + [i0, i1, i2, i3] + label(i0, i1, i2, i3, descr=targettoken) + i4 = int_lt(i0, i1) + i5 = int_add(i3, 1) + i6 = int_lt(i5, 30) + guard_true(i6) [i4] + jump(i0, i1, i4, i5, descr=targettoken) + ''' + self.interpret(ops, [0, 10, 0, 0]) + assert self.getint(0) == 1 + + def test_jump_different_args(self): + ops = ''' + [i0, i15, i16, i18, i1, i2, i3] + label(i0, i15, i16, i18, i1, i2, i3, descr=targettoken) + i4 = int_add(i3, 1) + i5 = int_lt(i4, 20) + guard_true(i5) [i2, i1] + jump(i0, i18, i15, i16, i2, i1, i4, descr=targettoken) + ''' + self.interpret(ops, [0, 1, 2, 3, 0, 0, 0]) + + def test_op_result_unused(self): + ops = ''' + [i0, i1] + i2 = int_add(i0, i1) + finish(0) + ''' + self.interpret(ops, [0, 0]) + + def test_guard_value_two_boxes(self): + ops = ''' + [i0, i1, i2, i3, i4, i5, i6, i7] + guard_value(i6, i1) [i0, i2, i3, i4, i5, i6] + finish(i0, i2, i3, i4, i5, i6) + ''' + self.interpret(ops, [0, 0, 0, 0, 0, 0, 0, 0]) + assert self.getint(0) == 0 + + def test_bug_wrong_stack_adj(self): + ops = ''' + [i0, i1, i2, i3, i4, i5, i6, i7, i8] + i9 = same_as(0) + guard_true(i0) [i9, i0, i1, i2, i3, i4, i5, i6, i7, i8] + finish(1, i0, i1, i2, i3, i4, i5, i6, i7, i8) + ''' + loop = self.interpret(ops, [0, 1, 2, 3, 4, 5, 6, 7, 8]) + assert self.getint(0) == 0 + bridge_ops = ''' + [i9, i0, i1, i2, i3, i4, i5, i6, i7, i8] + call(ConstClass(raising_fptr), 0, descr=raising_calldescr) + finish(i0, i1, i2, i3, i4, i5, i6, i7, i8) + ''' + self.attach_bridge(bridge_ops, loop, 1) + self.run(loop, 0, 1, 2, 3, 4, 5, 6, 7, 8) + assert self.getints(9) == range(9) + + def test_loopargs(self): + ops = """ + [i0, i1, i2, i3] + i4 = int_add(i0, i1) + jump(i4, i1, i2, i3) + """ + regalloc = self.prepare_loop(ops) + assert len(regalloc.rm.reg_bindings) == 4 + assert len(regalloc.frame_manager.bindings) == 0 + + +class TestRegallocCompOps(BaseTestRegalloc): + + def test_cmp_op_0(self): + ops = ''' + [i0, i3] + i1 = same_as(1) + i2 = int_lt(i0, 100) + guard_true(i3) [i1, i2] + finish(0, 
i2) + ''' + self.interpret(ops, [0, 1]) + assert self.getint(0) == 0 + +class TestRegallocMoreRegisters(BaseTestRegalloc): + + cpu = BaseTestRegalloc.cpu + targettoken = TargetToken() + + S = lltype.GcStruct('S', ('field', lltype.Char)) + fielddescr = cpu.fielddescrof(S, 'field') + + A = lltype.GcArray(lltype.Char) + arraydescr = cpu.arraydescrof(A) + + namespace = locals().copy() + + def test_int_is_true(self): + ops = ''' + [i0, i1, i2, i3, i4, i5, i6, i7] + i10 = int_is_true(i0) + i11 = int_is_true(i1) + i12 = int_is_true(i2) + i13 = int_is_true(i3) + i14 = int_is_true(i4) + i15 = int_is_true(i5) + i16 = int_is_true(i6) + i17 = int_is_true(i7) + finish(i10, i11, i12, i13, i14, i15, i16, i17) + ''' + self.interpret(ops, [0, 42, 12, 0, 13, 0, 0, 3333]) + assert self.getints(8) == [0, 1, 1, 0, 1, 0, 0, 1] + + def test_comparison_ops(self): + ops = ''' + [i0, i1, i2, i3, i4, i5, i6] + i10 = int_lt(i0, i1) + i11 = int_le(i2, i3) + i12 = int_ge(i4, i5) + i13 = int_eq(i5, i6) + i14 = int_gt(i6, i2) + i15 = int_ne(i2, i6) + finish(i10, i11, i12, i13, i14, i15) + ''' + self.interpret(ops, [0, 1, 2, 3, 4, 5, 6]) + assert self.getints(6) == [1, 1, 0, 0, 1, 1] + + def test_strsetitem(self): + ops = ''' + [p0, i] + strsetitem(p0, 1, i) + finish() + ''' + llstr = rstr.mallocstr(10) + self.interpret(ops, [llstr, ord('a')]) + assert llstr.chars[1] == 'a' + + def test_setfield_char(self): + ops = ''' + [p0, i] + setfield_gc(p0, i, descr=fielddescr) + finish() + ''' + s = lltype.malloc(self.S) + self.interpret(ops, [s, ord('a')]) + assert s.field == 'a' + + def test_setarrayitem_gc(self): + ops = ''' + [p0, i] + setarrayitem_gc(p0, 1, i, descr=arraydescr) + finish() + ''' + s = lltype.malloc(self.A, 3) + self.interpret(ops, [s, ord('a')]) + assert s[1] == 'a' + + def test_division_optimized(self): + ops = ''' + [i7, i6] + label(i7, i6, descr=targettoken) + i18 = int_floordiv(i7, i6) + i19 = int_xor(i7, i6) + i21 = int_lt(i19, 0) + i22 = int_mod(i7, i6) + i23 = int_is_true(i22) + i24 = int_eq(i6, 4) + guard_false(i24) [i18] + jump(i18, i6, descr=targettoken) + ''' + self.interpret(ops, [10, 4]) + assert self.getint(0) == 2 + # FIXME: Verify that i19 - i23 are removed + +class TestRegallocFloats(BaseTestRegalloc): + + def test_float_add(self): + if not self.supports_float(): + py.test.skip("float not supported") + ops = ''' + [f0, f1] + f2 = float_add(f0, f1) + finish(f2, f0, f1) + ''' + self.interpret(ops, [3.0, 1.5]) + assert self.getfloats(3) == [4.5, 3.0, 1.5] + + def test_float_adds_stack(self): + if not self.supports_float(): + py.test.skip("float not supported") + ops = ''' + [f0, f1, f2, f3, f4, f5, f6, f7, f8] + f9 = float_add(f0, f1) + f10 = float_add(f8, 3.5) + finish(f9, f10, f2, f3, f4, f5, f6, f7, f8) + ''' + self.interpret(ops, [0.1, .2, .3, .4, .5, .6, .7, .8, .9]) + assert self.getfloats(9) == [.1+.2, .9+3.5, .3, .4, .5, .6, .7, .8, .9] + + def test_lt_const(self): + if not self.supports_float(): + py.test.skip("float not supported") + ops = ''' + [f0] + i1 = float_lt(3.5, f0) + finish(i1) + ''' + self.interpret(ops, [0.1]) + assert self.getint(0) == 0 + + def test_bug_float_is_true_stack(self): + if not self.supports_float(): + py.test.skip("float not supported") + # NB. float_is_true no longer exists. Unsure if keeping this test + # makes sense any more. 
+ ops = ''' + [f0, f1, f2, f3, f4, f5, f6, f7, f8, f9] + i0 = float_ne(f0, 0.0) + i1 = float_ne(f1, 0.0) + i2 = float_ne(f2, 0.0) + i3 = float_ne(f3, 0.0) + i4 = float_ne(f4, 0.0) + i5 = float_ne(f5, 0.0) + i6 = float_ne(f6, 0.0) + i7 = float_ne(f7, 0.0) + i8 = float_ne(f8, 0.0) + i9 = float_ne(f9, 0.0) + finish(i0, i1, i2, i3, i4, i5, i6, i7, i8, i9) + ''' + loop = self.interpret(ops, [0.0, .1, .2, .3, .4, .5, .6, .7, .8, .9]) + assert self.getints(9) == [0, 1, 1, 1, 1, 1, 1, 1, 1] + +class TestRegAllocCallAndStackDepth(BaseTestRegalloc): + def expected_param_depth(self, num_args): + # Assumes the arguments are all non-float + return max(num_args - MAX_REG_PARAMS, 0) + + def test_one_call(self): + ops = ''' + [i0, i1, i2, i3, i4, i5, i6, i7, i8, i9] + i10 = call(ConstClass(f1ptr), i0, descr=f1_calldescr) + finish(i10, i1, i2, i3, i4, i5, i6, i7, i8, i9) + ''' + loop = self.interpret(ops, [4, 7, 9, 9 ,9, 9, 9, 9, 9, 9]) + assert self.getints(10) == [5, 7, 9, 9, 9, 9, 9, 9, 9, 9] + clt = loop._jitcelltoken.compiled_loop_token + assert clt.param_depth == self.expected_param_depth(1) + + def test_two_calls(self): + ops = ''' + [i0, i1, i2, i3, i4, i5, i6, i7, i8, i9] + i10 = call(ConstClass(f1ptr), i0, descr=f1_calldescr) + i11 = call(ConstClass(f2ptr), i10, i1, descr=f2_calldescr) + finish(i11, i1, i2, i3, i4, i5, i6, i7, i8, i9) + ''' + loop = self.interpret(ops, [4, 7, 9, 9 ,9, 9, 9, 9, 9, 9]) + assert self.getints(10) == [5*7, 7, 9, 9, 9, 9, 9, 9, 9, 9] + clt = loop._jitcelltoken.compiled_loop_token + assert clt.param_depth == self.expected_param_depth(2) + + def test_call_many_arguments(self): + # NB: The first and last arguments in the call are constants. This + # is primarily for x86-64, to ensure that loading a constant to an + # argument register or to the stack works correctly + ops = ''' + [i0, i1, i2, i3, i4, i5, i6, i7] + i8 = call(ConstClass(f10ptr), 1, i0, i1, i2, i3, i4, i5, i6, i7, 10, descr=f10_calldescr) + finish(i8) + ''' + loop = self.interpret(ops, [2, 3, 4, 5, 6, 7, 8, 9]) + assert self.getint(0) == 55 + clt = loop._jitcelltoken.compiled_loop_token + assert clt.param_depth == self.expected_param_depth(10) + + def test_bridge_calls_1(self): + ops = ''' + [i0, i1] + i2 = call(ConstClass(f1ptr), i0, descr=f1_calldescr) + guard_value(i2, 0, descr=fdescr1) [i2, i1] + finish(i1) + ''' + loop = self.interpret(ops, [4, 7]) + assert self.getint(0) == 5 + ops = ''' + [i2, i1] + i3 = call(ConstClass(f2ptr), i2, i1, descr=f2_calldescr) + finish(i3, descr=fdescr2) + ''' + bridge = self.attach_bridge(ops, loop, -2) + + assert loop.operations[-2].getdescr()._ppc_bridge_param_depth\ + == self.expected_param_depth(2) + + self.run(loop, 4, 7) + assert self.getint(0) == 5*7 + + def test_bridge_calls_2(self): + ops = ''' + [i0, i1] + i2 = call(ConstClass(f2ptr), i0, i1, descr=f2_calldescr) + guard_value(i2, 0, descr=fdescr1) [i2] + finish(i1) + ''' + loop = self.interpret(ops, [4, 7]) + assert self.getint(0) == 4*7 + ops = ''' + [i2] + i3 = call(ConstClass(f1ptr), i2, descr=f1_calldescr) + finish(i3, descr=fdescr2) + ''' + bridge = self.attach_bridge(ops, loop, -2) + + assert loop.operations[-2].getdescr()._ppc_bridge_param_depth\ + == self.expected_param_depth(2) + + self.run(loop, 4, 7) + assert self.getint(0) == 29 + From noreply at buildbot.pypy.org Fri Feb 10 16:17:21 2012 From: noreply at buildbot.pypy.org (arigo) Date: Fri, 10 Feb 2012 16:17:21 +0100 (CET) Subject: [pypy-commit] pypy stm-gc: Tweaks Message-ID: <20120210151721.D05FB8203C@wyvern.cs.uni-duesseldorf.de> Author: Armin 
Rigo Branch: stm-gc Changeset: r52354:e71b8bc58097 Date: 2012-02-10 15:39 +0100 http://bitbucket.org/pypy/pypy/changeset/e71b8bc58097/ Log: Tweaks diff --git a/pypy/rlib/rstm.py b/pypy/rlib/rstm.py --- a/pypy/rlib/rstm.py +++ b/pypy/rlib/rstm.py @@ -18,9 +18,11 @@ arg = cast_base_ptr_to_instance(argcls, llarg) else: arg = lltype.TLS.stm_callback_arg - res = func(arg, retry_counter) - assert res is None - llop.stm_commit_transaction(lltype.Void) + try: + res = func(arg, retry_counter) + assert res is None + finally: + llop.stm_commit_transaction(lltype.Void) return lltype.nullptr(rffi.VOIDP.TO) return _stm_callback diff --git a/pypy/rpython/memory/gc/stmgc.py b/pypy/rpython/memory/gc/stmgc.py --- a/pypy/rpython/memory/gc/stmgc.py +++ b/pypy/rpython/memory/gc/stmgc.py @@ -119,7 +119,7 @@ if in_main_thread: tls.malloc_flags = GCFLAG_GLOBAL else: - tls.malloc_flags = 0 + tls.malloc_flags = -1 # don't malloc outside a transaction! return tls def _setup_secondary_thread(self): @@ -172,6 +172,8 @@ # modes, but set different flags. tls = self.collector.get_tls() flags = tls.malloc_flags + ll_assert(flags != -1, "malloc() in a transactional thread but " + "outside a transaction") # # Get the memory from the nursery. size_gc_header = self.gcheaderbuilder.size_gc_header @@ -192,6 +194,8 @@ # XXX Be more subtle, e.g. detecting overflows, at least tls = self.collector.get_tls() flags = tls.malloc_flags + ll_assert(flags != -1, "malloc() in a transactional thread but " + "outside a transaction") size_gc_header = self.gcheaderbuilder.size_gc_header nonvarsize = size_gc_header + size totalsize = nonvarsize + itemsize * length @@ -350,10 +354,10 @@ # ---------- def acquire(self, lock): - ll_thread.c_thread_acquirelock(lock, 1) + ll_thread.c_thread_acquirelock_NOAUTO(lock, 1) def release(self, lock): - ll_thread.c_thread_releaselock(lock) + ll_thread.c_thread_releaselock_NOAUTO(lock) # ---------- @@ -392,6 +396,7 @@ """Start a transaction, by clearing and resetting the tls nursery.""" tls = self.get_tls() self.gc.reset_nursery(tls) + tls.malloc_flags = 0 def commit_transaction(self): @@ -402,6 +407,7 @@ debug_start("gc-collect-commit") # tls = self.get_tls() + tls.malloc_flags = -1 # # Do a mark-and-move minor collection out of the tls' nursery # into the main thread's global area (which is right now also diff --git a/pypy/translator/stm/test/targetdemo.py b/pypy/translator/stm/test/targetdemo.py --- a/pypy/translator/stm/test/targetdemo.py +++ b/pypy/translator/stm/test/targetdemo.py @@ -17,10 +17,11 @@ glob = Global() class Arg: - _alloc_nonmovable_ = True + _alloc_nonmovable_ = True # XXX kill me def add_at_end_of_chained_list(arg, retry_counter): + assert arg.foobar == 42 node = arg.anchor value = arg.value x = Node(value) @@ -66,10 +67,14 @@ try: debug_print("thread starting...") arg = Arg() - for i in range(glob.LENGTH): + arg.foobar = 41 + i = 0 + while i < glob.LENGTH: arg.anchor = glob.anchor arg.value = i + arg.foobar = 42 rstm.perform_transaction(add_at_end_of_chained_list, Arg, arg) + i += 1 rstm.perform_transaction(increment_done, Arg, arg) finally: rstm.descriptor_done() From noreply at buildbot.pypy.org Fri Feb 10 16:17:23 2012 From: noreply at buildbot.pypy.org (arigo) Date: Fri, 10 Feb 2012 16:17:23 +0100 (CET) Subject: [pypy-commit] pypy stm-gc: Replace in_main_thread() with in_transaction(). 
Message-ID: <20120210151723.2219A828F9@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-gc Changeset: r52355:a8cf7f197ec1 Date: 2012-02-10 16:10 +0100 http://bitbucket.org/pypy/pypy/changeset/a8cf7f197ec1/ Log: Replace in_main_thread() with in_transaction(). diff --git a/pypy/rpython/memory/gc/stmgc.py b/pypy/rpython/memory/gc/stmgc.py --- a/pypy/rpython/memory/gc/stmgc.py +++ b/pypy/rpython/memory/gc/stmgc.py @@ -297,7 +297,7 @@ # @dont_inline def _stm_write_barrier_global(obj): - if stm_operations.in_main_thread(): + if not stm_operations.in_transaction(): return obj # we need to find of make a local copy hdr = self.header(obj) diff --git a/pypy/rpython/memory/gc/test/test_stmgc.py b/pypy/rpython/memory/gc/test/test_stmgc.py --- a/pypy/rpython/memory/gc/test/test_stmgc.py +++ b/pypy/rpython/memory/gc/test/test_stmgc.py @@ -26,8 +26,8 @@ def setup_size_getter(self, getsize_fn): self._getsize_fn = getsize_fn - def in_main_thread(self): - return self.threadnum == 0 + def in_transaction(self): + return self.threadnum != 0 def set_tls(self, tls, in_main_thread): assert lltype.typeOf(tls) == llmemory.Address @@ -149,6 +149,7 @@ self.gc.stm_operations.threadnum = threadnum if threadnum not in self.gc.stm_operations._tls_dict: self.gc.setup_thread(False) + self.gc.start_transaction() def gcsize(self, S): return (llmemory.raw_malloc_usage(llmemory.sizeof(self.gc.HDR)) + llmemory.raw_malloc_usage(llmemory.sizeof(S))) diff --git a/pypy/translator/stm/src_stm/et.c b/pypy/translator/stm/src_stm/et.c --- a/pypy/translator/stm/src_stm/et.c +++ b/pypy/translator/stm/src_stm/et.c @@ -81,6 +81,7 @@ if there is an inevitable transaction running */ static volatile unsigned long global_timestamp = 2; static __thread struct tx_descriptor *thread_descriptor = NULL; +static __thread struct tx_descriptor *active_thread_descriptor = NULL; static long (*rpython_get_size)(void*); /************************************************************/ @@ -114,7 +115,7 @@ { unsigned int c; int i; - struct tx_descriptor *d = thread_descriptor; + struct tx_descriptor *d = active_thread_descriptor; d->num_spinloops[num]++; //printf("tx_spinloop(%d)\n", num); @@ -132,11 +133,6 @@ #endif } -static _Bool is_main_thread(struct tx_descriptor *d) -{ - return d->my_lock_word == 0; -} - static _Bool is_inevitable(struct tx_descriptor *d) { return d->setjmp_buf == NULL; @@ -268,7 +264,7 @@ /*** increase the abort count and restart the transaction */ static void tx_abort(int reason) { - struct tx_descriptor *d = thread_descriptor; + struct tx_descriptor *d = active_thread_descriptor; assert(!is_inevitable(d)); d->num_aborts[reason]++; #ifdef RPY_STM_DEBUG_PRINT @@ -459,12 +455,16 @@ #define STM_READ_WORD(SIZE, TYPE) \ TYPE stm_read_int##SIZE(void* addr, long offset) \ { \ - struct tx_descriptor *d = thread_descriptor; \ + struct tx_descriptor *d = active_thread_descriptor; \ volatile orec_t *o = get_orec(addr); \ owner_version_t ovt; \ \ assert(sizeof(TYPE) == SIZE); \ \ + /* XXX try to remove this check from the main path */ \ + if (d == NULL) \ + return *(TYPE *)(((char *)addr) + offset); \ + \ if ((o->tid & GCFLAG_WAS_COPIED) != 0) \ { \ /* Look up in the thread-local dictionary. 
*/ \ @@ -477,10 +477,6 @@ not_found:; \ } \ \ - /* XXX try to remove this check from the main path */ \ - if (is_main_thread(d)) \ - return *(TYPE *)(((char *)addr) + offset); \ - \ STM_DO_READ(TYPE tmp = *(TYPE *)(((char *)addr) + offset)); \ return tmp; \ } @@ -492,11 +488,11 @@ void stm_copy_transactional_to_raw(void *src, void *dst, long size) { - struct tx_descriptor *d = thread_descriptor; + struct tx_descriptor *d = active_thread_descriptor; volatile orec_t *o = get_orec(src); owner_version_t ovt; - assert(!is_main_thread(d)); + assert(d != NULL); /* don't copy the header */ src = ((char *)src) + sizeof(orec_t); @@ -507,9 +503,10 @@ } -static struct tx_descriptor *descriptor_init(_Bool is_main_thread) +static struct tx_descriptor *descriptor_init() { assert(thread_descriptor == NULL); + assert(active_thread_descriptor == NULL); if (1) /* for hg diff */ { struct tx_descriptor *d = malloc(sizeof(struct tx_descriptor)); @@ -519,21 +516,15 @@ PYPY_DEBUG_START("stm-init"); #endif - if (is_main_thread) - { - d->my_lock_word = 0; - } - else - { - /* initialize 'my_lock_word' to be a unique negative number */ - d->my_lock_word = (owner_version_t)d; - if (!IS_LOCKED(d->my_lock_word)) - d->my_lock_word = ~d->my_lock_word; - assert(IS_LOCKED(d->my_lock_word)); - } + /* initialize 'my_lock_word' to be a unique negative number */ + d->my_lock_word = (owner_version_t)d; + if (!IS_LOCKED(d->my_lock_word)) + d->my_lock_word = ~d->my_lock_word; + assert(IS_LOCKED(d->my_lock_word)); /*d->spinloop_counter = (unsigned int)(d->my_lock_word | 1);*/ thread_descriptor = d; + /* active_thread_descriptor stays NULL */ #ifdef RPY_STM_DEBUG_PRINT if (PYPY_HAVE_DEBUG_PRINTS) fprintf(PYPY_DEBUG_FILE, "thread %lx starting\n", @@ -548,6 +539,7 @@ { struct tx_descriptor *d = thread_descriptor; assert(d != NULL); + assert(active_thread_descriptor == NULL); thread_descriptor = NULL; @@ -593,12 +585,13 @@ assert(d != NULL); d->setjmp_buf = buf; d->start_time = (/*d->last_known_global_timestamp*/ global_timestamp) & ~1; + active_thread_descriptor = d; } static long commit_transaction(void) { - struct tx_descriptor *d = thread_descriptor; - assert(!is_main_thread(d)); + struct tx_descriptor *d = active_thread_descriptor; + assert(d != NULL); // if I don't have writes, I'm committed if (!redolog_any_entry(&d->redolog)) @@ -612,6 +605,7 @@ } d->num_commits++; common_cleanup(d); + active_thread_descriptor = NULL; return d->start_time; } @@ -657,6 +651,7 @@ // reset all lists common_cleanup(d); + active_thread_descriptor = NULL; return d->end_time; } @@ -666,6 +661,7 @@ jmp_buf _jmpbuf; volatile long v_counter = 0; long counter; + assert(active_thread_descriptor == NULL); setjmp(_jmpbuf); begin_transaction(&_jmpbuf); counter = v_counter; @@ -681,8 +677,8 @@ global_timestamp and global_timestamp cannot be incremented by another thread. We set the lowest bit in global_timestamp to 1. 
*/ - struct tx_descriptor *d = thread_descriptor; - if (d == NULL || is_main_thread(d)) + struct tx_descriptor *d = active_thread_descriptor; + if (d == NULL) return; #ifdef RPY_STM_DEBUG_PRINT @@ -742,7 +738,10 @@ { struct tx_descriptor *d = thread_descriptor; if (d == NULL) + return -1; + if (active_thread_descriptor == NULL) return 0; + assert(d == active_thread_descriptor); if (!is_inevitable(d)) return 1; else @@ -756,9 +755,10 @@ } -void stm_set_tls(void *newtls, long is_main_thread) +void stm_set_tls(void *newtls, long in_main_thread) { - struct tx_descriptor *d = descriptor_init(is_main_thread); + /* 'in_main_thread' is ignored so far */ + struct tx_descriptor *d = descriptor_init(); d->rpython_tls_object = newtls; } @@ -785,7 +785,8 @@ void stm_tldict_add(void *key, void *value) { - struct tx_descriptor *d = thread_descriptor; + struct tx_descriptor *d = active_thread_descriptor; + assert(d != NULL); redolog_insert(&d->redolog, key, value); } @@ -806,10 +807,25 @@ rpython_get_size = getsize_fn; } -long stm_in_main_thread(void) +long stm_in_transaction(void) { - struct tx_descriptor *d = thread_descriptor; - return is_main_thread(d); + struct tx_descriptor *d = active_thread_descriptor; + return d != NULL; +} + +void _stm_activate_transaction(long activate) +{ + assert(thread_descriptor != NULL); + if (activate) + { + assert(active_thread_descriptor == NULL); + active_thread_descriptor = thread_descriptor; + } + else + { + assert(active_thread_descriptor != NULL); + active_thread_descriptor = NULL; + } } #endif /* PYPY_NOT_MAIN_FILE */ diff --git a/pypy/translator/stm/src_stm/et.h b/pypy/translator/stm/src_stm/et.h --- a/pypy/translator/stm/src_stm/et.h +++ b/pypy/translator/stm/src_stm/et.h @@ -46,6 +46,8 @@ 2: in an inevitable transaction */ long stm_thread_id(void); /* returns a unique thread id, or 0 if descriptor_init() was not called */ +long stm_in_transaction(void); +void _stm_activate_transaction(long); /************************************************************/ diff --git a/pypy/translator/stm/stmgcintf.py b/pypy/translator/stm/stmgcintf.py --- a/pypy/translator/stm/stmgcintf.py +++ b/pypy/translator/stm/stmgcintf.py @@ -15,7 +15,9 @@ setup_size_getter = smexternal('stm_setup_size_getter', [GETSIZE], lltype.Void) - in_main_thread = smexternal('stm_in_main_thread', [], lltype.Signed) + in_transaction = smexternal('stm_in_transaction', [], lltype.Signed) + _activate_transaction = smexternal('_stm_activate_transaction', + [lltype.Signed], lltype.Void) set_tls = smexternal('stm_set_tls', [llmemory.Address, lltype.Signed], lltype.Void) diff --git a/pypy/translator/stm/test/targetdemo.py b/pypy/translator/stm/test/targetdemo.py --- a/pypy/translator/stm/test/targetdemo.py +++ b/pypy/translator/stm/test/targetdemo.py @@ -14,6 +14,7 @@ LENGTH = 5000 USE_MEMORY = False anchor = Node(-1) + lock = ll_thread.allocate_ll_lock() glob = Global() class Arg: diff --git a/pypy/translator/stm/test/test_stmgcintf.py b/pypy/translator/stm/test/test_stmgcintf.py --- a/pypy/translator/stm/test/test_stmgcintf.py +++ b/pypy/translator/stm/test/test_stmgcintf.py @@ -30,16 +30,23 @@ class TestStmGcIntf: + _in_transaction = False def setup_method(self, meth): TLS = getattr(meth, 'TLS', DEFAULT_TLS) s = lltype.malloc(TLS, flavor='raw', immortal=True) self.tls = s a = llmemory.cast_ptr_to_adr(s) - in_main_thread = getattr(meth, 'in_main_thread', True) + in_transaction = getattr(meth, 'in_transaction', False) + in_main_thread = getattr(meth, 'in_main_thread', not in_transaction) 
stm_operations.set_tls(a, int(in_main_thread)) + if in_transaction: + stm_operations._activate_transaction(1) + self._in_transaction = True def teardown_method(self, meth): + if self._in_transaction: + stm_operations._activate_transaction(0) stm_operations.del_tls() def test_set_get_del(self): @@ -60,6 +67,7 @@ stm_operations.tldict_add(a3, a4) assert stm_operations.tldict_lookup(a3) == a4 assert stm_operations.tldict_lookup(a1) == a2 + test_tldict.in_transaction = True def test_tldict_large(self): content = {} @@ -74,7 +82,7 @@ a2 = rffi.cast(llmemory.Address, random.randrange(2000, 9999)) stm_operations.tldict_add(a1, a2) content[key] = a2 - return content + test_tldict_large.in_transaction = True def get_callback(self): def callback(tls, key, value): @@ -101,6 +109,7 @@ stm_operations.tldict_enum(p_callback) assert (seen == [(a1, a2), (a3, a4)] or seen == [(a3, a4), (a1, a2)]) + test_enum_tldict_nonempty.in_transaction = True def stm_read_case(self, flags, copied=False): # doesn't test STM behavior, but just that it appears to work @@ -136,7 +145,7 @@ assert res == 42042 res = self.stm_read_case(stmgc.GCFLAG_WAS_COPIED, copied=True) assert res == 84084 - test_stm_read_word_transactional_thread.in_main_thread = False + test_stm_read_word_transactional_thread.in_transaction = True def test_stm_read_int1(self): S2 = lltype.Struct('S2', ('hdr', stmgc.StmGC.HDR), @@ -186,11 +195,16 @@ # lltype.free(s2, flavor='raw') lltype.free(s1, flavor='raw') - test_stm_copy_transactional_to_raw.in_main_thread = False + test_stm_copy_transactional_to_raw.in_transaction = True - def test_in_main_thread(self): - assert stm_operations.in_main_thread() + def test_in_transaction(self): + assert stm_operations.in_transaction() + test_in_transaction.in_transaction = True - def test_not_in_main_thread(self): - assert not stm_operations.in_main_thread() - test_not_in_main_thread.in_main_thread = False + def test_not_in_transaction(self): + assert not stm_operations.in_transaction() + test_not_in_transaction.in_main_thread = False + + def test_not_in_transaction_main(self): + assert not stm_operations.in_transaction() + test_not_in_transaction.in_main_thread = True From noreply at buildbot.pypy.org Fri Feb 10 16:17:25 2012 From: noreply at buildbot.pypy.org (arigo) Date: Fri, 10 Feb 2012 16:17:25 +0100 (CET) Subject: [pypy-commit] pypy stm-gc: Last fix. Now it runs :-) Message-ID: <20120210151725.322508203C@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-gc Changeset: r52356:c950d5bc52c3 Date: 2012-02-10 16:17 +0100 http://bitbucket.org/pypy/pypy/changeset/c950d5bc52c3/ Log: Last fix. Now it runs :-) diff --git a/pypy/translator/stm/test/targetdemo.py b/pypy/translator/stm/test/targetdemo.py --- a/pypy/translator/stm/test/targetdemo.py +++ b/pypy/translator/stm/test/targetdemo.py @@ -14,7 +14,6 @@ LENGTH = 5000 USE_MEMORY = False anchor = Node(-1) - lock = ll_thread.allocate_ll_lock() glob = Global() class Arg: @@ -67,7 +66,8 @@ rstm.descriptor_init() try: debug_print("thread starting...") - arg = Arg() + arg = glob._arg + ll_thread.release_NOAUTO(glob.lock) arg.foobar = 41 i = 0 while i < glob.LENGTH: @@ -92,8 +92,12 @@ if len(argv) > 3: glob.USE_MEMORY = bool(int(argv[3])) glob.done = 0 + glob.lock = ll_thread.allocate_ll_lock() + ll_thread.acquire_NOAUTO(glob.lock, 1) for i in range(glob.NUM_THREADS): + glob._arg = Arg() ll_thread.start_new_thread(run_me, ()) + ll_thread.acquire_NOAUTO(glob.lock, 1) print "sleeping..." 
while glob.done < glob.NUM_THREADS: # poor man's lock time.sleep(1) From noreply at buildbot.pypy.org Fri Feb 10 16:21:48 2012 From: noreply at buildbot.pypy.org (hager) Date: Fri, 10 Feb 2012 16:21:48 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: port another test from x86 backend Message-ID: <20120210152148.160A38203C@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r52357:e1c7cd8055ec Date: 2012-02-10 16:21 +0100 http://bitbucket.org/pypy/pypy/changeset/e1c7cd8055ec/ Log: port another test from x86 backend diff --git a/pypy/jit/backend/ppc/test/test_regalloc_3.py b/pypy/jit/backend/ppc/test/test_regalloc_3.py new file mode 100644 --- /dev/null +++ b/pypy/jit/backend/ppc/test/test_regalloc_3.py @@ -0,0 +1,272 @@ +import py +from pypy.jit.metainterp.history import ResOperation, BoxInt, ConstInt,\ + BoxPtr, ConstPtr, BasicFailDescr, JitCellToken +from pypy.jit.metainterp.resoperation import rop +from pypy.jit.backend.detect_cpu import getcpuclass +from pypy.jit.backend.ppc.arch import WORD +CPU = getcpuclass() + +def test_bug_rshift(): + v1 = BoxInt() + v2 = BoxInt() + v3 = BoxInt() + v4 = BoxInt() + inputargs = [v1] + operations = [ + ResOperation(rop.INT_ADD, [v1, v1], v2), + ResOperation(rop.INT_INVERT, [v2], v3), + ResOperation(rop.UINT_RSHIFT, [v1, ConstInt(3)], v4), + ResOperation(rop.FINISH, [v4, v3], None, descr=BasicFailDescr()), + ] + cpu = CPU(None, None) + cpu.setup_once() + looptoken = JitCellToken() + cpu.compile_loop(inputargs, operations, looptoken) + cpu.execute_token(looptoken, 9) + assert cpu.get_latest_value_int(0) == (9 >> 3) + assert cpu.get_latest_value_int(1) == (~18) + +def test_bug_int_is_true_1(): + v1 = BoxInt() + v2 = BoxInt() + v3 = BoxInt() + v4 = BoxInt() + tmp5 = BoxInt() + inputargs = [v1] + operations = [ + ResOperation(rop.INT_MUL, [v1, v1], v2), + ResOperation(rop.INT_MUL, [v2, v1], v3), + ResOperation(rop.INT_IS_TRUE, [v2], tmp5), + ResOperation(rop.INT_IS_ZERO, [tmp5], v4), + ResOperation(rop.FINISH, [v4, v3, tmp5], None, descr=BasicFailDescr()), + ] + cpu = CPU(None, None) + cpu.setup_once() + looptoken = JitCellToken() + cpu.compile_loop(inputargs, operations, looptoken) + cpu.execute_token(looptoken, -10) + assert cpu.get_latest_value_int(0) == 0 + assert cpu.get_latest_value_int(1) == -1000 + assert cpu.get_latest_value_int(2) == 1 + +def test_bug_0(): + v1 = BoxInt() + v2 = BoxInt() + v3 = BoxInt() + v4 = BoxInt() + v5 = BoxInt() + v6 = BoxInt() + v7 = BoxInt() + v8 = BoxInt() + v9 = BoxInt() + v10 = BoxInt() + v11 = BoxInt() + v12 = BoxInt() + v13 = BoxInt() + v14 = BoxInt() + v15 = BoxInt() + v16 = BoxInt() + v17 = BoxInt() + v18 = BoxInt() + v19 = BoxInt() + v20 = BoxInt() + v21 = BoxInt() + v22 = BoxInt() + v23 = BoxInt() + v24 = BoxInt() + v25 = BoxInt() + v26 = BoxInt() + v27 = BoxInt() + v28 = BoxInt() + v29 = BoxInt() + v30 = BoxInt() + v31 = BoxInt() + v32 = BoxInt() + v33 = BoxInt() + v34 = BoxInt() + v35 = BoxInt() + v36 = BoxInt() + v37 = BoxInt() + v38 = BoxInt() + v39 = BoxInt() + v40 = BoxInt() + tmp41 = BoxInt() + tmp42 = BoxInt() + tmp43 = BoxInt() + tmp44 = BoxInt() + tmp45 = BoxInt() + tmp46 = BoxInt() + inputargs = [v1, v2, v3, v4, v5, v6, v7, v8, v9, v10] + operations = [ + ResOperation(rop.UINT_GT, [v3, ConstInt(-48)], v11), + ResOperation(rop.INT_XOR, [v8, v1], v12), + ResOperation(rop.INT_GT, [v6, ConstInt(-9)], v13), + ResOperation(rop.INT_LE, [v13, v2], v14), + ResOperation(rop.INT_LE, [v11, v5], v15), + ResOperation(rop.UINT_GE, [v13, v13], v16), + ResOperation(rop.INT_OR, [v9, 
ConstInt(-23)], v17), + ResOperation(rop.INT_LT, [v10, v13], v18), + ResOperation(rop.INT_OR, [v15, v5], v19), + ResOperation(rop.INT_XOR, [v17, ConstInt(54)], v20), + ResOperation(rop.INT_MUL, [v8, v10], v21), + ResOperation(rop.INT_OR, [v3, v9], v22), + ResOperation(rop.INT_AND, [v11, ConstInt(-4)], tmp41), + ResOperation(rop.INT_OR, [tmp41, ConstInt(1)], tmp42), + ResOperation(rop.INT_MOD, [v12, tmp42], v23), + ResOperation(rop.INT_IS_TRUE, [v6], v24), + ResOperation(rop.UINT_RSHIFT, [v15, ConstInt(6)], v25), + ResOperation(rop.INT_OR, [ConstInt(-4), v25], v26), + ResOperation(rop.INT_INVERT, [v8], v27), + ResOperation(rop.INT_SUB, [ConstInt(-113), v11], v28), + ResOperation(rop.INT_NEG, [v7], v29), + ResOperation(rop.INT_NEG, [v24], v30), + ResOperation(rop.INT_FLOORDIV, [v3, ConstInt(53)], v31), + ResOperation(rop.INT_MUL, [v28, v27], v32), + ResOperation(rop.INT_AND, [v18, ConstInt(-4)], tmp43), + ResOperation(rop.INT_OR, [tmp43, ConstInt(1)], tmp44), + ResOperation(rop.INT_MOD, [v26, tmp44], v33), + ResOperation(rop.INT_OR, [v27, v19], v34), + ResOperation(rop.UINT_LT, [v13, ConstInt(1)], v35), + ResOperation(rop.INT_AND, [v21, ConstInt(31)], tmp45), + ResOperation(rop.INT_RSHIFT, [v21, tmp45], v36), + ResOperation(rop.INT_AND, [v20, ConstInt(31)], tmp46), + ResOperation(rop.UINT_RSHIFT, [v4, tmp46], v37), + ResOperation(rop.UINT_GT, [v33, ConstInt(-11)], v38), + ResOperation(rop.INT_NEG, [v7], v39), + ResOperation(rop.INT_GT, [v24, v32], v40), + ResOperation(rop.FINISH, [v40, v36, v37, v31, v16, v34, v35, v23, v22, v29, v14, v39, v30, v38], None, descr=BasicFailDescr()), + ] + cpu = CPU(None, None) + cpu.setup_once() + looptoken = JitCellToken() + cpu.compile_loop(inputargs, operations, looptoken) + cpu.execute_token(looptoken, -13, 10, 10, 8, -8, -16, -18, 46, -12, 26) + assert cpu.get_latest_value_int(0) == 0 + assert cpu.get_latest_value_int(1) == 0 + assert cpu.get_latest_value_int(2) == 0 + assert cpu.get_latest_value_int(3) == 0 + assert cpu.get_latest_value_int(4) == 1 + assert cpu.get_latest_value_int(5) == -7 + assert cpu.get_latest_value_int(6) == 1 + assert cpu.get_latest_value_int(7) == 0 + assert cpu.get_latest_value_int(8) == -2 + assert cpu.get_latest_value_int(9) == 18 + assert cpu.get_latest_value_int(10) == 1 + assert cpu.get_latest_value_int(11) == 18 + assert cpu.get_latest_value_int(12) == -1 + assert cpu.get_latest_value_int(13) == 0 + +def test_bug_1(): + v1 = BoxInt() + v2 = BoxInt() + v3 = BoxInt() + v4 = BoxInt() + v5 = BoxInt() + v6 = BoxInt() + v7 = BoxInt() + v8 = BoxInt() + v9 = BoxInt() + v10 = BoxInt() + v11 = BoxInt() + v12 = BoxInt() + v13 = BoxInt() + v14 = BoxInt() + v15 = BoxInt() + v16 = BoxInt() + v17 = BoxInt() + v18 = BoxInt() + v19 = BoxInt() + v20 = BoxInt() + v21 = BoxInt() + v22 = BoxInt() + v23 = BoxInt() + v24 = BoxInt() + v25 = BoxInt() + v26 = BoxInt() + v27 = BoxInt() + v28 = BoxInt() + v29 = BoxInt() + v30 = BoxInt() + v31 = BoxInt() + v32 = BoxInt() + v33 = BoxInt() + v34 = BoxInt() + v35 = BoxInt() + v36 = BoxInt() + v37 = BoxInt() + v38 = BoxInt() + v39 = BoxInt() + v40 = BoxInt() + tmp41 = BoxInt() + tmp42 = BoxInt() + tmp43 = BoxInt() + tmp44 = BoxInt() + tmp45 = BoxInt() + inputargs = [v1, v2, v3, v4, v5, v6, v7, v8, v9, v10] + operations = [ + ResOperation(rop.UINT_LT, [v6, ConstInt(0)], v11), + ResOperation(rop.INT_AND, [v3, ConstInt(31)], tmp41), + ResOperation(rop.INT_RSHIFT, [v3, tmp41], v12), + ResOperation(rop.INT_NEG, [v2], v13), + ResOperation(rop.INT_ADD, [v11, v7], v14), + ResOperation(rop.INT_OR, [v3, v2], v15), + 
ResOperation(rop.INT_OR, [v12, v12], v16), + ResOperation(rop.INT_NE, [v2, v5], v17), + ResOperation(rop.INT_AND, [v5, ConstInt(31)], tmp42), + ResOperation(rop.UINT_RSHIFT, [v14, tmp42], v18), + ResOperation(rop.INT_AND, [v14, ConstInt(31)], tmp43), + ResOperation(rop.INT_LSHIFT, [ConstInt(7), tmp43], v19), + ResOperation(rop.INT_NEG, [v19], v20), + ResOperation(rop.INT_MOD, [v3, ConstInt(1)], v21), + ResOperation(rop.UINT_GE, [v15, v1], v22), + ResOperation(rop.INT_AND, [v16, ConstInt(31)], tmp44), + ResOperation(rop.INT_LSHIFT, [v8, tmp44], v23), + ResOperation(rop.INT_IS_TRUE, [v17], v24), + ResOperation(rop.INT_AND, [v5, ConstInt(31)], tmp45), + ResOperation(rop.INT_LSHIFT, [v14, tmp45], v25), + ResOperation(rop.INT_LSHIFT, [v5, ConstInt(17)], v26), + ResOperation(rop.INT_EQ, [v9, v15], v27), + ResOperation(rop.INT_GE, [ConstInt(0), v6], v28), + ResOperation(rop.INT_NEG, [v15], v29), + ResOperation(rop.INT_NEG, [v22], v30), + ResOperation(rop.INT_ADD, [v7, v16], v31), + ResOperation(rop.UINT_LT, [v19, v19], v32), + ResOperation(rop.INT_ADD, [v2, ConstInt(1)], v33), + ResOperation(rop.INT_NEG, [v5], v34), + ResOperation(rop.INT_ADD, [v17, v24], v35), + ResOperation(rop.UINT_LT, [ConstInt(2), v16], v36), + ResOperation(rop.INT_NEG, [v9], v37), + ResOperation(rop.INT_GT, [v4, v11], v38), + ResOperation(rop.INT_LT, [v27, v22], v39), + ResOperation(rop.INT_NEG, [v27], v40), + ResOperation(rop.FINISH, [v40, v10, v36, v26, v13, v30, v21, v33, v18, v25, v31, v32, v28, v29, v35, v38, v20, v39, v34, v23, v37], None, descr=BasicFailDescr()), + ] + cpu = CPU(None, None) + cpu.setup_once() + looptoken = JitCellToken() + cpu.compile_loop(inputargs, operations, looptoken) + cpu.execute_token(looptoken, 17, -20, -6, 6, 1, 13, 13, 9, 49, 8) + assert cpu.get_latest_value_int(0) == 0 + assert cpu.get_latest_value_int(1) == 8 + assert cpu.get_latest_value_int(2) == 1 + assert cpu.get_latest_value_int(3) == 131072 + assert cpu.get_latest_value_int(4) == 20 + assert cpu.get_latest_value_int(5) == -1 + assert cpu.get_latest_value_int(6) == 0 + assert cpu.get_latest_value_int(7) == -19 + assert cpu.get_latest_value_int(8) == 6 + assert cpu.get_latest_value_int(9) == 26 + assert cpu.get_latest_value_int(10) == 12 + assert cpu.get_latest_value_int(11) == 0 + assert cpu.get_latest_value_int(12) == 0 + assert cpu.get_latest_value_int(13) == 2 + assert cpu.get_latest_value_int(14) == 2 + assert cpu.get_latest_value_int(15) == 1 + assert cpu.get_latest_value_int(16) == -57344 + assert cpu.get_latest_value_int(17) == 1 + assert cpu.get_latest_value_int(18) == -1 + if WORD == 4: + assert cpu.get_latest_value_int(19) == -2147483648 + elif WORD == 8: + assert cpu.get_latest_value_int(19) == 19327352832 + assert cpu.get_latest_value_int(20) == -49 From noreply at buildbot.pypy.org Fri Feb 10 16:30:07 2012 From: noreply at buildbot.pypy.org (arigo) Date: Fri, 10 Feb 2012 16:30:07 +0100 (CET) Subject: [pypy-commit] pypy stm-gc: Tweak Message-ID: <20120210153007.6AD9B8203C@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-gc Changeset: r52358:b1da95d9b385 Date: 2012-02-10 16:29 +0100 http://bitbucket.org/pypy/pypy/changeset/b1da95d9b385/ Log: Tweak diff --git a/pypy/rpython/memory/gc/stmgc.py b/pypy/rpython/memory/gc/stmgc.py --- a/pypy/rpython/memory/gc/stmgc.py +++ b/pypy/rpython/memory/gc/stmgc.py @@ -164,7 +164,7 @@ needs_finalizer=False, is_finalizer_light=False, contains_weakptr=False): - assert not needs_finalizer, "XXX" + #assert not needs_finalizer, "XXX" --- finalizer is just ignored assert not 
contains_weakptr, "XXX" # # Check the mode: either in a transactional thread, or in @@ -354,13 +354,16 @@ # ---------- def acquire(self, lock): - ll_thread.c_thread_acquirelock_NOAUTO(lock, 1) + ll_thread.acquire_NOAUTO(lock, 1) def release(self, lock): - ll_thread.c_thread_releaselock_NOAUTO(lock) + ll_thread.release_NOAUTO(lock) # ---------- + def id(self, gcobj): + raise NotImplementedError("XXX") + def identityhash(self, gcobj): raise NotImplementedError("XXX") From noreply at buildbot.pypy.org Fri Feb 10 16:32:08 2012 From: noreply at buildbot.pypy.org (hager) Date: Fri, 10 Feb 2012 16:32:08 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: forgot to add these lines Message-ID: <20120210153208.F3C928203C@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r52359:0bd4401f7c46 Date: 2012-02-10 16:31 +0100 http://bitbucket.org/pypy/pypy/changeset/0bd4401f7c46/ Log: forgot to add these lines diff --git a/pypy/jit/backend/ppc/ppc_assembler.py b/pypy/jit/backend/ppc/ppc_assembler.py --- a/pypy/jit/backend/ppc/ppc_assembler.py +++ b/pypy/jit/backend/ppc/ppc_assembler.py @@ -477,7 +477,13 @@ self.current_clt.frame_depth = max(self.current_clt.frame_depth, spilling_area) self.current_clt.param_depth = max(self.current_clt.param_depth, param_depth) - self._patch_sp_offset(sp_patch_location, rawstart) + + if not we_are_translated(): + # for the benefit of tests + faildescr._ppc_bridge_frame_depth = self.current_clt.frame_depth + faildescr._ppc_bridge_param_depth = self.current_clt.param_depth + + self._patch_sp_offset(sp_patch_location, rawstart) if not we_are_translated(): print 'Loop', inputargs, operations self.mc._dump_trace(rawstart, 'bridge_%s.asm' % self.cpu.total_compiled_loops) From noreply at buildbot.pypy.org Fri Feb 10 17:48:00 2012 From: noreply at buildbot.pypy.org (arigo) Date: Fri, 10 Feb 2012 17:48:00 +0100 (CET) Subject: [pypy-commit] pypy stm-gc: id() and identityhash(). Message-ID: <20120210164800.0923D8203C@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-gc Changeset: r52360:07551dfda0ea Date: 2012-02-10 17:47 +0100 http://bitbucket.org/pypy/pypy/changeset/07551dfda0ea/ Log: id() and identityhash(). 
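The changeset below gives the STM GC a working id() and identityhash(): a local, not-yet-committed object that needs a stable identity gets a pre-reserved global "shadow" copy (flag GCFLAG_HAS_SHADOW, with the header's 'version' field pointing at it), and at commit time the local data is copied into that same shadow, so the returned address never changes; identityhash() additionally mangles the value unless GCFLAG_FIXED_HASH is set. A minimal pure-Python model of that idea follows; every name in it (Shadow, LocalObject, stable_id, commit) is invented for illustration, while the real logic operates on GC headers exactly as the diff shows.

    # Simplified model: reserving a global placeholder the first time an
    # identity is requested, and later committing the local data into that
    # same placeholder, keeps id() stable across the commit.
    class Shadow(object):
        """Reserved global slot; its Python id() stands in for a fixed address."""

    class LocalObject(object):
        def __init__(self, data):
            self.data = data
            self.shadow = None              # plays the role of the 'version' field

    def stable_id(obj):
        if obj.shadow is None:
            obj.shadow = Shadow()           # reserve the global copy exactly once
        return id(obj.shadow)

    def commit(local_objects):
        committed = []
        for obj in local_objects:
            target = obj.shadow if obj.shadow is not None else Shadow()
            target.data = obj.data          # copy local contents into the global slot
            committed.append(target)
        return committed

    if __name__ == '__main__':
        x = LocalObject(42)
        i1 = stable_id(x)
        [gx] = commit([x])
        assert id(gx) == i1                 # the identity survives the commit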
diff --git a/pypy/rpython/memory/gc/stmgc.py b/pypy/rpython/memory/gc/stmgc.py --- a/pypy/rpython/memory/gc/stmgc.py +++ b/pypy/rpython/memory/gc/stmgc.py @@ -2,6 +2,7 @@ from pypy.rpython.lltypesystem.lloperation import llop from pypy.rpython.lltypesystem.llmemory import raw_malloc_usage from pypy.rpython.memory.gc.base import GCBase +from pypy.rpython.memory.support import mangle_hash from pypy.rpython.annlowlevel import llhelper from pypy.rlib.rarithmetic import LONG_BIT from pypy.rlib.debug import ll_assert, debug_start, debug_stop, fatalerror @@ -15,7 +16,8 @@ GCFLAG_GLOBAL = first_gcflag << 0 # keep in sync with et.c GCFLAG_WAS_COPIED = first_gcflag << 1 # keep in sync with et.c -GCFLAG_HAS_HASH = first_gcflag << 2 +GCFLAG_HAS_SHADOW = first_gcflag << 2 +GCFLAG_FIXED_HASH = first_gcflag << 3 PRIMITIVE_SIZES = {1: lltype.Char, 2: rffi.SHORT, @@ -45,7 +47,7 @@ HDR = lltype.Struct('header', ('tid', lltype.Signed), ('version', llmemory.Address)) typeid_is_in_field = 'tid' - withhash_flag_is_in_field = 'tid', GCFLAG_HAS_HASH + withhash_flag_is_in_field = 'tid', GCFLAG_FIXED_HASH GCTLS = lltype.Struct('GCTLS', ('nursery_free', llmemory.Address), ('nursery_top', llmemory.Address), @@ -360,12 +362,63 @@ ll_thread.release_NOAUTO(lock) # ---------- + # id() and identityhash() support + + def id_or_identityhash(self, gcobj, is_hash): + """Implement the common logic of id() and identityhash() + of an object, given as a GCREF. + """ + obj = llmemory.cast_ptr_to_adr(gcobj) + hdr = self.header(obj) + # + if hdr.tid & GCFLAG_GLOBAL == 0: + # + # The object is a local object. Find or allocate a corresponding + # global object. + if hdr.tid & (GCFLAG_WAS_COPIED | GCFLAG_HAS_SHADOW) == 0: + # + # We need to allocate a global object here. We only allocate + # it for now; it is left completely uninitialized. + size = self.get_size(obj) + self.acquire(self.mutex_lock) + main_tls = self.main_thread_tls + globalobj = self._malloc_local_raw(main_tls, size) + self.header(globalobj).tid = GCFLAG_GLOBAL + self.release(self.mutex_lock) + # + # Update the header of the local 'obj' + hdr.tid |= GCFLAG_HAS_SHADOW + hdr.version = globalobj + # + else: + # There is already a corresponding globalobj + globalobj = hdr.version + # + obj = globalobj + # + ll_assert(self.header(obj).tid & GCFLAG_GLOBAL != 0, + "id_or_identityhash: unexpected local object") + i = llmemory.cast_adr_to_int(obj) + if is_hash: + # For identityhash(), we need a special case for some + # prebuilt objects: their hash must be the same before + # and after translation. It is stored as an extra word + # after the object. But we cannot use it for id() + # because the stored value might clash with a real one. + if self.header(obj).tid & GCFLAG_FIXED_HASH: + size = self.get_size(obj) + i = (obj + size).signed[0] + else: + # mangle the hash value to increase the dispertion + # on the trailing bits, but only if !GCFLAG_FIXED_HASH + i = mangle_hash(i) + return i def id(self, gcobj): - raise NotImplementedError("XXX") + return self.id_or_identityhash(gcobj, False) def identityhash(self, gcobj): - raise NotImplementedError("XXX") + return self.id_or_identityhash(gcobj, True) # ------------------------------------------------------------ @@ -512,17 +565,31 @@ ll_assert(hdr.tid & GCFLAG_GLOBAL == 0, "trace_and_mark: GLOBAL obj in nursery") # - if hdr.tid & GCFLAG_WAS_COPIED != 0: - # this local object is a root or was already marked. Either - # way, its 'version' field should point to the corresponding - # global object. 
- globalobj = hdr.version - # - else: - # First visit to a local-only 'obj': copy it into the global area + if hdr.tid & (GCFLAG_WAS_COPIED | GCFLAG_HAS_SHADOW) == 0: + # First visit to a local-only 'obj': allocate a corresponding + # global object size = self.gc.get_size(obj) main_tls = self.gc.main_thread_tls globalobj = self.gc._malloc_local_raw(main_tls, size) + need_to_copy = True + # + else: + globalobj = hdr.version + if hdr.tid & GCFLAG_WAS_COPIED != 0: + # this local object is a root or was already marked. Either + # way, its 'version' field should point to the corresponding + # global object. + size = 0 + need_to_copy = False + else: + # this local object has a shadow made by id_or_identityhash(); + # and the 'version' field points to the global shadow. + ll_assert(hdr.tid & GCFLAG_HAS_SHADOW != 0, "uh?") + size = self.gc.get_size(obj) + need_to_copy = True + # + if need_to_copy: + # Copy the data of the object from the local to the global llmemory.raw_memcopy(obj, globalobj, size) # # Initialize the header of the 'globalobj' diff --git a/pypy/rpython/memory/gc/test/test_stmgc.py b/pypy/rpython/memory/gc/test/test_stmgc.py --- a/pypy/rpython/memory/gc/test/test_stmgc.py +++ b/pypy/rpython/memory/gc/test/test_stmgc.py @@ -2,6 +2,7 @@ from pypy.rpython.lltypesystem import lltype, llmemory, llarena, rffi from pypy.rpython.memory.gc.stmgc import StmGC, PRIMITIVE_SIZES, WORD, CALLBACK from pypy.rpython.memory.gc.stmgc import GCFLAG_GLOBAL, GCFLAG_WAS_COPIED +from pypy.rpython.memory.support import mangle_hash S = lltype.GcStruct('S', ('a', lltype.Signed), ('b', lltype.Signed), @@ -392,3 +393,88 @@ s1, s1_adr = self.malloc(S) assert (repr(self.gc.stm_operations._getsize_fn(s1_adr)) == repr(fake_get_size(s1_adr))) + + def test_id_of_global(self): + s, s_adr = self.malloc(S) + i = self.gc.id(s) + assert i == llmemory.cast_adr_to_int(s_adr) + + def test_id_of_globallocal(self): + s, s_adr = self.malloc(S) + self.select_thread(1) + t_adr = self.gc.stm_writebarrier(s_adr) # make a local copy + t = llmemory.cast_adr_to_ptr(t_adr, llmemory.GCREF) + i = self.gc.id(t) + assert i == llmemory.cast_adr_to_int(s_adr) + assert i == self.gc.id(s) + self.gc.commit_transaction() + assert i == self.gc.id(s) + + def test_id_of_local_nonsurviving(self): + self.select_thread(1) + s, s_adr = self.malloc(S) + i = self.gc.id(s) + assert i != llmemory.cast_adr_to_int(s_adr) + assert i == self.gc.id(s) + self.gc.commit_transaction() + + def test_id_of_local_surviving(self): + sr1, sr1_adr = self.malloc(SR) + self.select_thread(1) + t2, t2_adr = self.malloc(S) + tr1_adr = self.gc.stm_writebarrier(sr1_adr) + assert tr1_adr != sr1_adr + tr1 = llmemory.cast_adr_to_ptr(tr1_adr, lltype.Ptr(SR)) + tr1.s1 = t2 + i = self.gc.id(t2) + assert i not in (llmemory.cast_adr_to_int(sr1_adr), + llmemory.cast_adr_to_int(t2_adr), + llmemory.cast_adr_to_int(tr1_adr)) + assert i == self.gc.id(t2) + self.gc.commit_transaction() + s2 = tr1.s1 # tr1 is a root, so not copied yet + assert s2 and s2 != t2 + assert self.gc.id(s2) == i + + def test_hash_of_global(self): + s, s_adr = self.malloc(S) + i = self.gc.identityhash(s) + assert i == mangle_hash(llmemory.cast_adr_to_int(s_adr)) + + def test_hash_of_globallocal(self): + s, s_adr = self.malloc(S) + self.select_thread(1) + t_adr = self.gc.stm_writebarrier(s_adr) # make a local copy + t = llmemory.cast_adr_to_ptr(t_adr, llmemory.GCREF) + i = self.gc.identityhash(t) + assert i == mangle_hash(llmemory.cast_adr_to_int(s_adr)) + assert i == self.gc.identityhash(s) + 
self.gc.commit_transaction() + assert i == self.gc.identityhash(s) + + def test_hash_of_local_nonsurviving(self): + self.select_thread(1) + s, s_adr = self.malloc(S) + i = self.gc.identityhash(s) + assert i != mangle_hash(llmemory.cast_adr_to_int(s_adr)) + assert i == self.gc.identityhash(s) + self.gc.commit_transaction() + + def test_hash_of_local_surviving(self): + sr1, sr1_adr = self.malloc(SR) + self.select_thread(1) + t2, t2_adr = self.malloc(S) + tr1_adr = self.gc.stm_writebarrier(sr1_adr) + assert tr1_adr != sr1_adr + tr1 = llmemory.cast_adr_to_ptr(tr1_adr, lltype.Ptr(SR)) + tr1.s1 = t2 + i = self.gc.identityhash(t2) + assert i not in map(mangle_hash, + (llmemory.cast_adr_to_int(sr1_adr), + llmemory.cast_adr_to_int(t2_adr), + llmemory.cast_adr_to_int(tr1_adr))) + assert i == self.gc.identityhash(t2) + self.gc.commit_transaction() + s2 = tr1.s1 # tr1 is a root, so not copied yet + assert s2 and s2 != t2 + assert self.gc.identityhash(s2) == i From notifications-noreply at bitbucket.org Fri Feb 10 19:56:58 2012 From: notifications-noreply at bitbucket.org (Bitbucket) Date: Fri, 10 Feb 2012 18:56:58 -0000 Subject: [pypy-commit] Notification: pypy Message-ID: <20120210185658.2589.53320@bitbucket03.managed.contegix.com> You have received a notification from Gabriel Lavoie. Hi, I forked pypy. My fork is at https://bitbucket.org/glavoie/pypy. -- Disable notifications at https://bitbucket.org/account/notifications/ From pullrequests-noreply at bitbucket.org Fri Feb 10 20:14:36 2012 From: pullrequests-noreply at bitbucket.org (Bitbucket) Date: Fri, 10 Feb 2012 19:14:36 -0000 Subject: [pypy-commit] [OPEN] Pull request #27 for pypy/pypy: Another change necessary for FreeBSD build, so expat.h and libexpat.so can be found. Message-ID: A new pull request has been opened by Gabriel Lavoie. glavoie/pypy has changes to be pulled into pypy/pypy. https://bitbucket.org/pypy/pypy/pull-request/27/another-change-necessary-for-freebsd-build Title: Another change necessary for FreeBSD build, so expat.h and libexpat.so can be found. compiler.compile and compiler.link_executable should probably use the "augmented" library_dirs and include_dirs. This is necessary to support added dirs for darwin and freebsd (the latter added in this patch). I know the "if" for freebsd is a bit ugly but it seems ctypes_configure doesn't have access to standard PyPy platform management in "pypy/translator/platform/". With this change, "tip" builds without any hiccup under FreeBSD 7 x64. Changes to be pulled: -- This is an issue notification from bitbucket.org. You are receiving this either because you are the participating in a pull request, or you are following it. 
From noreply at buildbot.pypy.org Sat Feb 11 00:48:31 2012 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Sat, 11 Feb 2012 00:48:31 +0100 (CET) Subject: [pypy-commit] pypy default: organize imports better, kill a relative import Message-ID: <20120210234831.026958203C@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: Changeset: r52361:30237f7f4d00 Date: 2012-02-10 18:45 -0500 http://bitbucket.org/pypy/pypy/changeset/30237f7f4d00/ Log: organize imports better, kill a relative import diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -4,17 +4,17 @@ from pypy.interpreter.typedef import TypeDef, GetSetProperty from pypy.module.micronumpy import (interp_ufuncs, interp_dtype, interp_boxes, signature, support, loop) +from pypy.module.micronumpy.appbridge import get_appbridge_cache +from pypy.module.micronumpy.dot import multidim_dot, match_dot_shapes +from pypy.module.micronumpy.interp_iter import (ArrayIterator, + SkipLastAxisIterator, Chunk, ViewIterator) from pypy.module.micronumpy.strides import (calculate_slice_strides, shape_agreement, find_shape_and_elems, get_shape_from_iterable, calc_new_strides, to_coords) -from dot import multidim_dot, match_dot_shapes from pypy.rlib import jit +from pypy.rlib.rstring import StringBuilder from pypy.rpython.lltypesystem import lltype, rffi from pypy.tool.sourcetools import func_with_new_name -from pypy.rlib.rstring import StringBuilder -from pypy.module.micronumpy.interp_iter import (ArrayIterator, - SkipLastAxisIterator, Chunk, ViewIterator) -from pypy.module.micronumpy.appbridge import get_appbridge_cache count_driver = jit.JitDriver( From noreply at buildbot.pypy.org Sat Feb 11 01:37:56 2012 From: noreply at buildbot.pypy.org (wlav) Date: Sat, 11 Feb 2012 01:37:56 +0100 (CET) Subject: [pypy-commit] pypy reflex-support: remove now obsolete identity preserving code (the memory regulator Message-ID: <20120211003756.1E5B28203C@wyvern.cs.uni-duesseldorf.de> Author: Wim Lavrijsen Branch: reflex-support Changeset: r52362:cfa1fbdbca11 Date: 2012-02-10 11:05 -0800 http://bitbucket.org/pypy/pypy/changeset/cfa1fbdbca11/ Log: remove now obsolete identity preserving code (the memory regulator takes care of it alread) diff --git a/pypy/module/cppyy/interp_cppyy.py b/pypy/module/cppyy/interp_cppyy.py --- a/pypy/module/cppyy/interp_cppyy.py +++ b/pypy/module/cppyy/interp_cppyy.py @@ -263,11 +263,7 @@ for i in range(len(self.functions)): cppyyfunc = self.functions[i] try: - cppresult = cppyyfunc.call(cppthis, w_type, args_w) - if cppinstance and isinstance(cppresult, W_CPPInstance): - if cppresult.rawobject == cppinstance.rawobject: - return cppinstance # recycle object to preserve identity - return cppresult + return cppyyfunc.call(cppthis, w_type, args_w) except OperationError, e: if not (e.match(space, space.w_TypeError) or \ e.match(space, space.w_NotImplementedError)): From noreply at buildbot.pypy.org Sat Feb 11 01:37:57 2012 From: noreply at buildbot.pypy.org (wlav) Date: Sat, 11 Feb 2012 01:37:57 +0100 (CET) Subject: [pypy-commit] pypy reflex-support: test to make sure type(this) is checked properly Message-ID: <20120211003757.56E7D8203C@wyvern.cs.uni-duesseldorf.de> Author: Wim Lavrijsen Branch: reflex-support Changeset: r52363:926ef9bae484 Date: 2012-02-10 11:39 -0800 http://bitbucket.org/pypy/pypy/changeset/926ef9bae484/ Log: test to make sure type(this) is checked properly diff --git 
a/pypy/module/cppyy/test/fragile.h b/pypy/module/cppyy/test/fragile.h --- a/pypy/module/cppyy/test/fragile.h +++ b/pypy/module/cppyy/test/fragile.h @@ -5,6 +5,7 @@ class A { public: virtual int check() { return (int)'A'; } + virtual A* gime_null() { return (A*)0; } }; class B { diff --git a/pypy/module/cppyy/test/test_fragile.py b/pypy/module/cppyy/test/test_fragile.py --- a/pypy/module/cppyy/test/test_fragile.py +++ b/pypy/module/cppyy/test/test_fragile.py @@ -106,3 +106,22 @@ raises(TypeError, cppyy.addressof, 0) raises(TypeError, cppyy.addressof, 1) raises(TypeError, cppyy.addressof, None) + + def test06_wrong_this(self): + """Test that using an incorrect self argument raises""" + + import cppyy + + assert cppyy.gbl.fragile == cppyy.gbl.fragile + fragile = cppyy.gbl.fragile + + a = fragile.A() + assert fragile.A.check(a) == ord('A') + + b = fragile.B() + assert fragile.B.check(b) == ord('B') + raises(TypeError, fragile.A.check, b) + raises(TypeError, fragile.B.check, a) + + assert isinstance(a.gime_null(), fragile.A) + raises(ReferenceError, fragile.A.check, a.gime_null()) From noreply at buildbot.pypy.org Sat Feb 11 01:37:58 2012 From: noreply at buildbot.pypy.org (wlav) Date: Sat, 11 Feb 2012 01:37:58 +0100 (CET) Subject: [pypy-commit] pypy reflex-support: enable test that now works thanks to default argument support Message-ID: <20120211003758.886F48203C@wyvern.cs.uni-duesseldorf.de> Author: Wim Lavrijsen Branch: reflex-support Changeset: r52364:b258b4215935 Date: 2012-02-10 11:41 -0800 http://bitbucket.org/pypy/pypy/changeset/b258b4215935/ Log: enable test that now works thanks to default argument support diff --git a/pypy/module/cppyy/test/test_fragile.py b/pypy/module/cppyy/test/test_fragile.py --- a/pypy/module/cppyy/test/test_fragile.py +++ b/pypy/module/cppyy/test/test_fragile.py @@ -67,10 +67,8 @@ raises(TypeError, d.overload, None) raises(TypeError, d.overload, None, None, None) - # TODO: the following fails in the fast path, b/c the default - # arguments are not properly filled - #d.overload('a') - #d.overload(1) + d.overload('a') + d.overload(1) def test04_unsupported_arguments(self): """Test arguments that are yet unsupported""" From noreply at buildbot.pypy.org Sat Feb 11 01:37:59 2012 From: noreply at buildbot.pypy.org (wlav) Date: Sat, 11 Feb 2012 01:37:59 +0100 (CET) Subject: [pypy-commit] pypy reflex-support: removal of spurious tabs Message-ID: <20120211003759.BD7F78203C@wyvern.cs.uni-duesseldorf.de> Author: Wim Lavrijsen Branch: reflex-support Changeset: r52365:6c1e93c659ec Date: 2012-02-10 11:49 -0800 http://bitbucket.org/pypy/pypy/changeset/6c1e93c659ec/ Log: removal of spurious tabs diff --git a/pypy/module/cppyy/interp_cppyy.py b/pypy/module/cppyy/interp_cppyy.py --- a/pypy/module/cppyy/interp_cppyy.py +++ b/pypy/module/cppyy/interp_cppyy.py @@ -46,8 +46,8 @@ cpptype = W_CPPNamespace(space, final_name, handle) elif capi.c_has_complex_hierarchy(handle): cpptype = W_ComplexCPPType(space, final_name, handle) - else: - cpptype = W_CPPType(space, final_name, handle) + else: + cpptype = W_CPPType(space, final_name, handle) state.cpptype_cache[name] = cpptype cpptype._find_methods() cpptype._find_data_members() @@ -110,8 +110,8 @@ @jit.unroll_safe def call(self, cppthis, w_type, args_w): assert lltype.typeOf(cppthis) == capi.C_OBJECT - args_expected = len(self.arg_defs) - args_given = len(args_w) + args_expected = len(self.arg_defs) + args_given = len(args_w) if args_expected < args_given or args_given < self.args_required: raise OperationError(self.space.w_TypeError, 
self.space.wrap("wrong number of arguments")) From noreply at buildbot.pypy.org Sat Feb 11 01:38:00 2012 From: noreply at buildbot.pypy.org (wlav) Date: Sat, 11 Feb 2012 01:38:00 +0100 (CET) Subject: [pypy-commit] pypy reflex-support: cleanup Message-ID: <20120211003800.F3F628203C@wyvern.cs.uni-duesseldorf.de> Author: Wim Lavrijsen Branch: reflex-support Changeset: r52366:7bbc0040ea90 Date: 2012-02-10 16:37 -0800 http://bitbucket.org/pypy/pypy/changeset/7bbc0040ea90/ Log: cleanup diff --git a/pypy/jit/metainterp/optimizeopt/fficall.py b/pypy/jit/metainterp/optimizeopt/fficall.py --- a/pypy/jit/metainterp/optimizeopt/fficall.py +++ b/pypy/jit/metainterp/optimizeopt/fficall.py @@ -128,7 +128,7 @@ optimize_CALL_MAY_FORCE = optimize_CALL def optimize_FORCE_TOKEN(self, op): - # The handling of force_token needs a bit of exaplanation. + # The handling of force_token needs a bit of explanation. # The original trace which is getting optimized looks like this: # i1 = force_token() # setfield_gc(p0, i1, ...) diff --git a/pypy/module/cppyy/interp_cppyy.py b/pypy/module/cppyy/interp_cppyy.py --- a/pypy/module/cppyy/interp_cppyy.py +++ b/pypy/module/cppyy/interp_cppyy.py @@ -73,7 +73,7 @@ class W_CPPLibrary(Wrappable): - _immutable_fields_ = ["cdll"] + _immutable_ = True def __init__(self, space, cdll): self.cdll = cdll @@ -232,7 +232,7 @@ class W_CPPOverload(Wrappable): - _immutable_fields_ = ["scope_handle", "func_name", "functions[*]"] + _immutable_ = True def __init__(self, space, scope_handle, func_name, functions): self.space = space @@ -286,7 +286,7 @@ class W_CPPDataMember(Wrappable): - _immutable_fields_ = ["scope_handle", "converter", "offset", "_is_static"] + _immutable_ = True def __init__(self, space, scope_handle, type_name, offset, is_static): self.space = space @@ -333,7 +333,8 @@ class W_CPPScope(Wrappable): - _immutable_fields_ = ["name", "handle"] + _immutable_ = True + _immutable_fields_ = ["methods[*]", "data_members[*]"] kind = "scope" @@ -399,6 +400,8 @@ # classes for inheritance. Both are python classes, though, and refactoring # may be in order at some point. class W_CPPNamespace(W_CPPScope): + _immutable_ = True + kind = "namespace" def _make_cppfunction(self, method_index): @@ -445,6 +448,8 @@ class W_CPPType(W_CPPScope): + _immutable_ = True + kind = "class" def _make_cppfunction(self, method_index): @@ -506,6 +511,8 @@ class W_ComplexCPPType(W_CPPType): + _immutable_ = True + @jit.elidable_promote() def get_cppthis(self, cppinstance, scope_handle): offset = capi.c_base_offset( @@ -526,7 +533,7 @@ class W_CPPTemplateType(Wrappable): - _immutable_fields_ = ["name", "handle"] + _immutable_ = True def __init__(self, space, name, handle): self.space = space From noreply at buildbot.pypy.org Sat Feb 11 05:56:51 2012 From: noreply at buildbot.pypy.org (edelsohn) Date: Sat, 11 Feb 2012 05:56:51 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: Delete duplicate write_new_force_index. Message-ID: <20120211045651.E5ECD8203C@wyvern.cs.uni-duesseldorf.de> Author: edelsohn Branch: ppc-jit-backend Changeset: r52367:fd3e6f5099f1 Date: 2012-02-10 23:51 -0500 http://bitbucket.org/pypy/pypy/changeset/fd3e6f5099f1/ Log: Delete duplicate write_new_force_index. Implement skeleton for call_reacquire_gil. 
diff --git a/pypy/jit/backend/ppc/opassembler.py b/pypy/jit/backend/ppc/opassembler.py --- a/pypy/jit/backend/ppc/opassembler.py +++ b/pypy/jit/backend/ppc/opassembler.py @@ -865,20 +865,6 @@ self.mc.store(r.SCRATCH.value, r.RES.value, self.cpu.vtable_offset) self.mc.free_scratch_reg() - def write_new_force_index(self): - # for shadowstack only: get a new, unused force_index number and - # write it to FORCE_INDEX_OFS. Used to record the call shape - # (i.e. where the GC pointers are in the stack) around a CALL - # instruction that doesn't already have a force_index. - gcrootmap = self.cpu.gc_ll_descr.gcrootmap - if gcrootmap and gcrootmap.is_shadow_stack: - clt = self.current_clt - force_index = clt.reserve_and_record_some_faildescr_index() - self._write_fail_index(force_index) - return force_index - else: - return 0 - def emit_debug_merge_point(self, op, arglocs, regalloc): pass @@ -1119,6 +1105,14 @@ self._emit_call(NO_FORCE_INDEX, self.releasegil_addr, [], self._regalloc) + def call_reacquire_gil(self, gcrootmap, save_loc): + # save the previous result into the stack temporarily. + # XXX like with call_release_gil(), we assume that we don't need + # to save vfp regs in this case. Besides the result location + assert gcrootmap.is_shadow_stack + with Saved_Volatiles(self.mc): + self._emit_call(NO_FORCE_INDEX, self.reacqgil_addr, + [], self._regalloc) class OpAssembler(IntOpAssembler, GuardOpAssembler, From noreply at buildbot.pypy.org Sat Feb 11 05:56:53 2012 From: noreply at buildbot.pypy.org (edelsohn) Date: Sat, 11 Feb 2012 05:56:53 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: Add support for gcrootmap to prepare_guard_call_release_gil. Message-ID: <20120211045653.29A0B828F9@wyvern.cs.uni-duesseldorf.de> Author: edelsohn Branch: ppc-jit-backend Changeset: r52368:be722eaa302c Date: 2012-02-10 23:52 -0500 http://bitbucket.org/pypy/pypy/changeset/be722eaa302c/ Log: Add support for gcrootmap to prepare_guard_call_release_gil. diff --git a/pypy/jit/backend/ppc/regalloc.py b/pypy/jit/backend/ppc/regalloc.py --- a/pypy/jit/backend/ppc/regalloc.py +++ b/pypy/jit/backend/ppc/regalloc.py @@ -522,8 +522,11 @@ self.assembler.emit_call(op, args, self, fail_index) # then reopen the stack if gcrootmap: - assert 0, "not implemented yet" - # self.assembler.call_reacquire_gil(gcrootmap, registers) + if op.result: + result_loc = self.call_result_location(op.result) + else: + result_loc = None + self.assembler.call_reacquire_gil(gcrootmap, result_loc) locs = self._prepare_guard(guard_op) self.possibly_free_vars(guard_op.getfailargs()) return locs From noreply at buildbot.pypy.org Sat Feb 11 05:56:54 2012 From: noreply at buildbot.pypy.org (edelsohn) Date: Sat, 11 Feb 2012 05:56:54 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: Add _release_gil_shadowstack, _reacquire_gil_shadowstack, _build_release_gil Message-ID: <20120211045654.5B9C28203C@wyvern.cs.uni-duesseldorf.de> Author: edelsohn Branch: ppc-jit-backend Changeset: r52369:029db81d7490 Date: 2012-02-10 23:56 -0500 http://bitbucket.org/pypy/pypy/changeset/029db81d7490/ Log: Add _release_gil_shadowstack, _reacquire_gil_shadowstack, _build_release_gil and initiatlize it in setup_once. 
diff --git a/pypy/jit/backend/ppc/ppc_assembler.py b/pypy/jit/backend/ppc/ppc_assembler.py --- a/pypy/jit/backend/ppc/ppc_assembler.py +++ b/pypy/jit/backend/ppc/ppc_assembler.py @@ -383,11 +383,38 @@ gc_ll_descr = self.cpu.gc_ll_descr gc_ll_descr.initialize() self._build_propagate_exception_path() + #if gc_ll_descr.get_malloc_slowpath_addr is not None: + # self._build_malloc_slowpath() + if gc_ll_descr.gcrootmap and gc_ll_descr.gcrootmap.is_shadow_stack: + self._build_release_gil(gc_ll_descr.gcrootmap) self.memcpy_addr = self.cpu.cast_ptr_to_int(memcpy_fn) self.exit_code_adr = self._gen_exit_path() self._leave_jitted_hook_save_exc = self._gen_leave_jitted_hook_code(True) self._leave_jitted_hook = self._gen_leave_jitted_hook_code(False) + @staticmethod + def _release_gil_shadowstack(): + before = rffi.aroundstate.before + if before: + before() + + @staticmethod + def _reacquire_gil_shadowstack(): + after = rffi.aroundstate.after + if after: + after() + + _NOARG_FUNC = lltype.Ptr(lltype.FuncType([], lltype.Void)) + + def _build_release_gil(self, gcrootmap): + assert gcrootmap.is_shadow_stack + releasegil_func = llhelper(self._NOARG_FUNC, + self._release_gil_shadowstack) + reacqgil_func = llhelper(self._NOARG_FUNC, + self._reacquire_gil_shadowstack) + self.releasegil_addr = rffi.cast(lltype.Signed, releasegil_func) + self.reacqgil_addr = rffi.cast(lltype.Signed, reacqgil_func) + def assemble_loop(self, inputargs, operations, looptoken, log): clt = CompiledLoopToken(self.cpu, looptoken.number) clt.allgcrefs = [] From noreply at buildbot.pypy.org Sat Feb 11 10:20:17 2012 From: noreply at buildbot.pypy.org (arigo) Date: Sat, 11 Feb 2012 10:20:17 +0100 (CET) Subject: [pypy-commit] pypy default: (glavoie, arigo rewrites) Message-ID: <20120211092017.CC27D11B2E77@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r52370:e40ab5c4aac0 Date: 2012-02-11 10:19 +0100 http://bitbucket.org/pypy/pypy/changeset/e40ab5c4aac0/ Log: (glavoie, arigo rewrites) Another change necessary for FreeBSD build, so expat.h and libexpat.so can be found. 
diff --git a/ctypes_configure/cbuild.py b/ctypes_configure/cbuild.py --- a/ctypes_configure/cbuild.py +++ b/ctypes_configure/cbuild.py @@ -206,8 +206,9 @@ cfiles += eci.separate_module_files include_dirs = list(eci.include_dirs) library_dirs = list(eci.library_dirs) - if sys.platform == 'darwin': # support Fink & Darwinports - for s in ('/sw/', '/opt/local/'): + if (sys.platform == 'darwin' or # support Fink & Darwinports + sys.platform.startswith('freebsd')): + for s in ('/sw/', '/opt/local/', '/usr/local/'): if s + 'include' not in include_dirs and \ os.path.exists(s + 'include'): include_dirs.append(s + 'include') @@ -380,9 +381,9 @@ self.link_extra += ['-pthread'] if sys.platform == 'win32': self.link_extra += ['/DEBUG'] # generate .pdb file - if sys.platform == 'darwin': - # support Fink & Darwinports - for s in ('/sw/', '/opt/local/'): + if (sys.platform == 'darwin' or # support Fink & Darwinports + sys.platform.startswith('freebsd')): + for s in ('/sw/', '/opt/local/', '/usr/local/'): if s + 'include' not in self.include_dirs and \ os.path.exists(s + 'include'): self.include_dirs.append(s + 'include') @@ -395,7 +396,6 @@ self.outputfilename = py.path.local(cfilenames[0]).new(ext=ext) else: self.outputfilename = py.path.local(outputfilename) - self.eci = eci def build(self, noerr=False): basename = self.outputfilename.new(ext='') @@ -436,7 +436,7 @@ old = cfile.dirpath().chdir() try: res = compiler.compile([cfile.basename], - include_dirs=self.eci.include_dirs, + include_dirs=self.include_dirs, extra_preargs=self.compile_extra) assert len(res) == 1 cobjfile = py.path.local(res[0]) @@ -445,9 +445,9 @@ finally: old.chdir() compiler.link_executable(objects, str(self.outputfilename), - libraries=self.eci.libraries, + libraries=self.libraries, extra_preargs=self.link_extra, - library_dirs=self.eci.library_dirs) + library_dirs=self.library_dirs) def build_executable(*args, **kwds): noerr = kwds.pop('noerr', False) From notifications-noreply at bitbucket.org Sat Feb 11 10:21:07 2012 From: notifications-noreply at bitbucket.org (Bitbucket) Date: Sat, 11 Feb 2012 09:21:07 -0000 Subject: [pypy-commit] [COMMENT] Pull request #27 for pypy/pypy: Another change necessary for FreeBSD build, so expat.h and libexpat.so can be found. Message-ID: <20120211092107.2374.97997@bitbucket01.managed.contegix.com> New comment on pull request: https://bitbucket.org/pypy/pypy/pull-request/27/another-change-necessary-for-freebsd-build#comment-2781 arigo said: Checked in as e40ab5c4aac0, with minor rewrites. Gabriel: can you check if what I checked in is good too? -- This is a pull request comment notification from bitbucket.org. You are receiving this either because you are participating in a pull request, or you are following it. From noreply at buildbot.pypy.org Sat Feb 11 16:26:36 2012 From: noreply at buildbot.pypy.org (arigo) Date: Sat, 11 Feb 2012 16:26:36 +0100 (CET) Subject: [pypy-commit] pypy stm-gc: Attempt to rewrite things so that there is no GC allocation Message-ID: <20120211152636.A338E11B2E77@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-gc Changeset: r52371:dd0c59caf8e2 Date: 2012-02-11 12:31 +0100 http://bitbucket.org/pypy/pypy/changeset/dd0c59caf8e2/ Log: Attempt to rewrite things so that there is no GC allocation in the non-main threads outside transactions. 
diff --git a/pypy/module/transaction/interp_epoll.py b/pypy/module/transaction/interp_epoll.py --- a/pypy/module/transaction/interp_epoll.py +++ b/pypy/module/transaction/interp_epoll.py @@ -7,13 +7,23 @@ from pypy.rpython.lltypesystem import lltype, rffi from pypy.interpreter.gateway import unwrap_spec from pypy.interpreter.error import OperationError +from pypy.module.select import interp_epoll from pypy.module.select.interp_epoll import W_Epoll, FD_SETSIZE from pypy.module.select.interp_epoll import epoll_event -from pypy.module.select.interp_epoll import epoll_wait from pypy.module.transaction import interp_transaction from pypy.rlib import rposix +# a _nowrapper version, to be sure that it does not allocate anything +_epoll_wait = rffi.llexternal( + "epoll_wait", + [rffi.INT, lltype.Ptr(rffi.CArray(epoll_event)), rffi.INT, rffi.INT], + rffi.INT, + compilation_info=eci, + _nowrapper=True +) + + class EPollPending(interp_transaction.AbstractPending): def __init__(self, space, epoller, w_callback): self.space = space @@ -21,26 +31,48 @@ self.w_callback = w_callback def run(self): - # this code is run non-transactionally + # This code is run non-transactionally. Careful, no GC available. state = interp_transaction.state if state.has_exception(): return maxevents = FD_SETSIZE - 1 # for now - timeout = 500 # for now: half a second - with lltype.scoped_alloc(rffi.CArray(epoll_event), maxevents) as evs: - nfds = epoll_wait(self.epoller.epfd, evs, maxevents, int(timeout)) - if nfds < 0: - errno = rposix.get_errno() - if errno == EINTR: - nfds = 0 # ignore, just wait for more later - else: - state.got_exception_errno = errno - state.must_reraise_exception(_reraise_from_errno) - return - for i in range(nfds): - event = evs[i] - fd = rffi.cast(lltype.Signed, event.c_data.c_fd) - PendingCallback(self.w_callback, fd, event.c_events).register() + evs = lltype.malloc(rffi.CArray(epoll_event), maxevents, flavor='raw') + try: + self.wait_and_process_events(evs, maxevents) + finally: + lltype.free(evs, flavor='raw') + + def wait_and_process_events(self, evs, maxevents): + fd = rffi.cast(rffi.INT, self.epoller.epfd) + maxevents = rffi.cast(rffi.INT, maxevents) + timeout = rffi.cast(rffi.INT, 500) # for now: half a second + nfds = _epoll_wait(fd, evs, maxevents, timeout) + nfds = rffi.cast(lltype.Signed, nfds) + # + if nfds < 0: + errno = rposix.get_errno() + if errno == EINTR: + nfds = 0 # ignore, just wait for more later + else: + state.got_exception_errno = errno + state.must_reraise_exception(_reraise_from_errno) + return + # We have to allocate new PendingCallback objects, but we can't + # allocate anything here because we are not running transactionally. + # Workaround for now: run a new tiny transaction just to create + # and register these PendingCallback's. 
+ self.evs = evs + self.nfds = nfds + rstm.perform_transaction(EPollPending._add_real_transactions, + EPollPending, self) + + @staticmethod + def _add_real_transactions(self, retry_counter): + evs = self.evs + for i in range(self.nfds): + event = evs[i] + fd = rffi.cast(lltype.Signed, event.c_data.c_fd) + PendingCallback(self.w_callback, fd, event.c_events).register() # re-register myself to epoll_wait() for more self.register() diff --git a/pypy/module/transaction/interp_transaction.py b/pypy/module/transaction/interp_transaction.py --- a/pypy/module/transaction/interp_transaction.py +++ b/pypy/module/transaction/interp_transaction.py @@ -3,12 +3,11 @@ from pypy.module.transaction import threadintf from pypy.module.transaction.fifo import Fifo from pypy.rlib import rstm +from pypy.rlib.debug import ll_assert NUM_THREADS_DEFAULT = 4 # by default -MAIN_THREAD_ID = 0 - class State(object): @@ -22,6 +21,7 @@ self.ll_no_tasks_pending_lock = threadintf.null_ll_lock self.ll_unfinished_lock = threadintf.null_ll_lock self.threadobjs = {} # empty during translation + self.main_thread_id = 0 self.pending = Fifo() def _freeze_(self): @@ -69,11 +69,15 @@ def setvalue(self, value): id = rstm.thread_id() - assert id == MAIN_THREAD_ID # should not be used from a transaction + if self.main_thread_id == 0: + self.main_thread_id = id + else: + # this should not be used from a transaction + assert id == self.main_thread_id self.threadobjs[id] = value def getmainthreadvalue(self): - return self.threadobjs[0] + return self.threadobjs[self.main_thread_id] def getallvalues(self): return self.threadobjs @@ -90,28 +94,38 @@ self.num_threads = num def lock(self): - # XXX think about the interaction between locks and the GC + """Acquire the main lock. This plays a role similar to the GIL + in that it must be acquired in order to have the _run_thread() + code execute; but it is released around every execution of a + transaction.""" threadintf.acquire(self.ll_lock, True) def unlock(self): + """Release the main lock.""" threadintf.release(self.ll_lock) def lock_no_tasks_pending(self): + """This lock is acquired when state.pending.is_empty().""" threadintf.acquire(self.ll_no_tasks_pending_lock, True) def unlock_no_tasks_pending(self): + """Release the ll_no_tasks_pending_lock.""" threadintf.release(self.ll_no_tasks_pending_lock) def is_locked_no_tasks_pending(self): + """Test ll_no_tasks_pending_lock for debugging.""" just_locked = threadintf.acquire(self.ll_no_tasks_pending_lock, False) if just_locked: threadintf.release(self.ll_no_tasks_pending_lock) return not just_locked def lock_unfinished(self): + """This lock is normally acquired. It is released when all threads + are done.""" threadintf.acquire(self.ll_unfinished_lock, True) def unlock_unfinished(self): + """Release ll_unfinished_lock.""" threadintf.release(self.ll_unfinished_lock) def init_exceptions(self): @@ -142,6 +156,11 @@ _alloc_nonmovable_ = True def register(self): + """Register this AbstractPending instance in the pending list + belonging to the current thread. If called from the main + thread, it is the global list. If called from a transaction, + it is a thread-local list that will be merged with the global + list when the transaction is done.""" ec = state.getvalue() ec._transaction_pending.append(self) @@ -192,16 +211,27 @@ state.unlock_no_tasks_pending() -def _run_thread(): - state.lock() - rstm.descriptor_init() +def _setup_thread(_, retry_counter): + """Setup a thread. 
Run as a transaction because it allocates.""" my_thread_id = rstm.thread_id() my_ec = state.space.createexecutioncontext() state.add_thread(my_thread_id, my_ec) + + +def _run_thread(): + """The main function running one of the threads.""" + # Note that we cannot allocate any object here outside a transaction, + # so we need to be very careful. + state.lock() + rstm.descriptor_init() + # + rstm.perform_transaction(_setup_thread, AbstractPending, None) + my_transactions_pending = state.getvalue()._transaction_pending # while True: if state.pending.is_empty(): - assert state.is_locked_no_tasks_pending() + ll_assert(state.is_locked_no_tasks_pending(), + "inconsistently unlocked no_tasks_pending") state.num_waiting_threads += 1 if state.num_waiting_threads == state.num_threads: state.finished = True @@ -222,9 +252,9 @@ state.unlock() pending.run() state.lock() - _add_list(my_ec._transaction_pending) + _add_list(my_transactions_pending) # - state.del_thread(my_thread_id) + state.del_thread(rstm.thread_id()) rstm.descriptor_done() if state.num_waiting_threads == 0: # only the last thread to leave state.unlock_unfinished() @@ -240,20 +270,31 @@ assert not state.is_locked_no_tasks_pending() if state.pending.is_empty(): return + # + # 'num_waiting_threads' is the number of threads that are currently + # waiting for more work to do. When it becomes equal to + # 'num_threads' then we are done: we set 'finished' to True and this + # causes all threads to leave. Only accessed during a + # 'state.lock'-protected region. state.num_waiting_threads = 0 state.finished = False + # state.running = True state.init_exceptions() # + # --- start the threads --- don't use the GC here any more! --- for i in range(state.num_threads): threadintf.start_new_thread(_run_thread, ()) # state.lock_unfinished() # wait for all threads to finish + # --- done, we can use the GC again --- # assert state.num_waiting_threads == 0 assert state.pending.is_empty() assert state.threadobjs.keys() == [MAIN_THREAD_ID] assert not state.is_locked_no_tasks_pending() + assert state.threadobjects == [] + state.threadobjects = None state.running = False # # now re-raise the exception that we got in a transaction From noreply at buildbot.pypy.org Sat Feb 11 16:26:37 2012 From: noreply at buildbot.pypy.org (arigo) Date: Sat, 11 Feb 2012 16:26:37 +0100 (CET) Subject: [pypy-commit] pypy stm-gc: Fix fix fix. Message-ID: <20120211152637.D762F11B2E78@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-gc Changeset: r52372:25a1c74bdc71 Date: 2012-02-11 12:47 +0100 http://bitbucket.org/pypy/pypy/changeset/25a1c74bdc71/ Log: Fix fix fix. 
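For readers following the transaction API in the changesets above: rstm.perform_transaction(func, argcls, arg) runs func(arg, retry_counter) inside a transaction and expects it to return None; at this point in the branch argcls still has to carry _alloc_nonmovable_ = True (the "XXX kill me" assert). A minimal usage sketch -- the Counter class and function names below are invented for illustration and are not part of the changesets:

    from pypy.rlib import rstm

    class Counter(object):
        _alloc_nonmovable_ = True      # still required here ("XXX kill me")

        def __init__(self):
            self.value = 0

    def _increment(counter, retry_counter):
        # Runs inside a transaction, so allocating is allowed here.  If the
        # transaction aborts, the callback is re-run with a higher
        # retry_counter.  It must return None.
        counter.value += 1

    def bump(counter):
        rstm.perform_transaction(_increment, Counter, counter)
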
diff --git a/pypy/module/transaction/interp_transaction.py b/pypy/module/transaction/interp_transaction.py --- a/pypy/module/transaction/interp_transaction.py +++ b/pypy/module/transaction/interp_transaction.py @@ -77,7 +77,7 @@ self.threadobjs[id] = value def getmainthreadvalue(self): - return self.threadobjs[self.main_thread_id] + return self.threadobjs.get(self.main_thread_id, None) def getallvalues(self): return self.threadobjs @@ -267,6 +267,7 @@ state.w_error, space.wrap("recursive invocation of transaction.run()")) state.startup_run() + assert state.main_thread_id == rstm.thread_id() assert not state.is_locked_no_tasks_pending() if state.pending.is_empty(): return @@ -291,10 +292,8 @@ # assert state.num_waiting_threads == 0 assert state.pending.is_empty() - assert state.threadobjs.keys() == [MAIN_THREAD_ID] + assert state.threadobjs.keys() == [state.main_thread_id] assert not state.is_locked_no_tasks_pending() - assert state.threadobjects == [] - state.threadobjects = None state.running = False # # now re-raise the exception that we got in a transaction diff --git a/pypy/rlib/rstm.py b/pypy/rlib/rstm.py --- a/pypy/rlib/rstm.py +++ b/pypy/rlib/rstm.py @@ -1,5 +1,7 @@ import thread -from pypy.rlib.objectmodel import specialize, we_are_translated, keepalive_until_here +from pypy.rlib.objectmodel import specialize, we_are_translated +from pypy.rlib.objectmodel import keepalive_until_here +from pypy.rlib.debug import ll_assert from pypy.rpython.lltypesystem import rffi, lltype, rclass from pypy.rpython.lltypesystem.lloperation import llop from pypy.rpython.annlowlevel import (cast_base_ptr_to_instance, @@ -20,7 +22,7 @@ arg = lltype.TLS.stm_callback_arg try: res = func(arg, retry_counter) - assert res is None + ll_assert(res is None, "stm_callback should return None") finally: llop.stm_commit_transaction(lltype.Void) return lltype.nullptr(rffi.VOIDP.TO) @@ -28,8 +30,10 @@ @specialize.arg(0, 1) def perform_transaction(func, argcls, arg): - assert isinstance(arg, argcls) - assert argcls._alloc_nonmovable_ + ll_assert(arg is None or isinstance(arg, argcls), + "perform_transaction: wrong class") + ll_assert(argcls._alloc_nonmovable_, + "perform_transaction: XXX") # XXX kill me if we_are_translated(): llarg = cast_instance_to_base_ptr(arg) llarg = rffi.cast(rffi.VOIDP, llarg) diff --git a/pypy/rpython/lltypesystem/lloperation.py b/pypy/rpython/lltypesystem/lloperation.py --- a/pypy/rpython/lltypesystem/lloperation.py +++ b/pypy/rpython/lltypesystem/lloperation.py @@ -399,11 +399,11 @@ 'stm_getarrayitem': LLOp(sideeffects=False, canrun=True), 'stm_getinteriorfield': LLOp(sideeffects=False, canrun=True), 'stm_become_inevitable': LLOp(), - 'stm_descriptor_init': LLOp(), - 'stm_descriptor_done': LLOp(), + 'stm_descriptor_init': LLOp(canrun=True), + 'stm_descriptor_done': LLOp(canrun=True), 'stm_writebarrier': LLOp(sideeffects=False), - 'stm_start_transaction': LLOp(), - 'stm_commit_transaction': LLOp(), + 'stm_start_transaction': LLOp(canrun=True), + 'stm_commit_transaction': LLOp(canrun=True), # __________ address operations __________ diff --git a/pypy/rpython/lltypesystem/opimpl.py b/pypy/rpython/lltypesystem/opimpl.py --- a/pypy/rpython/lltypesystem/opimpl.py +++ b/pypy/rpython/lltypesystem/opimpl.py @@ -608,6 +608,21 @@ from pypy.rlib.rtimer import read_timestamp return read_timestamp() +def op_stm_descriptor_init(): + # for direct testing only + from pypy.translator.stm import stmgcintf + stmgcintf.StmOperations().set_tls(llmemory.NULL, 0) + +def op_stm_descriptor_done(): + from 
pypy.translator.stm import stmgcintf + stmgcintf.StmOperations().del_tls() + +def op_stm_start_transaction(): + pass + +def op_stm_commit_transaction(): + pass + # ____________________________________________________________ def get_op_impl(opname): diff --git a/pypy/translator/stm/src_stm/et.c b/pypy/translator/stm/src_stm/et.c --- a/pypy/translator/stm/src_stm/et.c +++ b/pypy/translator/stm/src_stm/et.c @@ -503,7 +503,7 @@ } -static struct tx_descriptor *descriptor_init() +static struct tx_descriptor *descriptor_init(long in_main_thread) { assert(thread_descriptor == NULL); assert(active_thread_descriptor == NULL); @@ -516,11 +516,18 @@ PYPY_DEBUG_START("stm-init"); #endif - /* initialize 'my_lock_word' to be a unique negative number */ - d->my_lock_word = (owner_version_t)d; - if (!IS_LOCKED(d->my_lock_word)) - d->my_lock_word = ~d->my_lock_word; - assert(IS_LOCKED(d->my_lock_word)); + if (in_main_thread) + { + d->my_lock_word = 0; /* special value for the main thread */ + } + else + { + /* initialize 'my_lock_word' to be a unique negative number */ + d->my_lock_word = (owner_version_t)d; + if (!IS_LOCKED(d->my_lock_word)) + d->my_lock_word = ~d->my_lock_word; + assert(IS_LOCKED(d->my_lock_word)); + } /*d->spinloop_counter = (unsigned int)(d->my_lock_word | 1);*/ thread_descriptor = d; @@ -751,14 +758,15 @@ long stm_thread_id(void) { struct tx_descriptor *d = thread_descriptor; + if (d == NULL) + return 0; /* no thread_descriptor yet, assume it's the main thread */ return d->my_lock_word; } void stm_set_tls(void *newtls, long in_main_thread) { - /* 'in_main_thread' is ignored so far */ - struct tx_descriptor *d = descriptor_init(); + struct tx_descriptor *d = descriptor_init(in_main_thread); d->rpython_tls_object = newtls; } From noreply at buildbot.pypy.org Sat Feb 11 16:26:39 2012 From: noreply at buildbot.pypy.org (arigo) Date: Sat, 11 Feb 2012 16:26:39 +0100 (CET) Subject: [pypy-commit] pypy stm-gc: Fixes Message-ID: <20120211152639.0FEA311B2E79@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-gc Changeset: r52373:861e498d1d52 Date: 2012-02-11 16:26 +0100 http://bitbucket.org/pypy/pypy/changeset/861e498d1d52/ Log: Fixes diff --git a/pypy/module/transaction/interp_epoll.py b/pypy/module/transaction/interp_epoll.py --- a/pypy/module/transaction/interp_epoll.py +++ b/pypy/module/transaction/interp_epoll.py @@ -11,7 +11,7 @@ from pypy.module.select.interp_epoll import W_Epoll, FD_SETSIZE from pypy.module.select.interp_epoll import epoll_event from pypy.module.transaction import interp_transaction -from pypy.rlib import rposix +from pypy.rlib import rstm, rposix # a _nowrapper version, to be sure that it does not allocate anything @@ -19,8 +19,8 @@ "epoll_wait", [rffi.INT, lltype.Ptr(rffi.CArray(epoll_event)), rffi.INT, rffi.INT], rffi.INT, - compilation_info=eci, - _nowrapper=True + compilation_info = interp_epoll.eci, + _nowrapper = True ) @@ -54,6 +54,8 @@ if errno == EINTR: nfds = 0 # ignore, just wait for more later else: + # unsure how to trigger this case + state = interp_transaction.state state.got_exception_errno = errno state.must_reraise_exception(_reraise_from_errno) return @@ -65,6 +67,8 @@ self.nfds = nfds rstm.perform_transaction(EPollPending._add_real_transactions, EPollPending, self) + # XXX could be avoided in the common case with some pool of + # PendingCallback instances @staticmethod def _add_real_transactions(self, retry_counter): From noreply at buildbot.pypy.org Sat Feb 11 16:30:45 2012 From: noreply at buildbot.pypy.org (arigo) Date: Sat, 11 Feb 2012 
16:30:45 +0100 (CET) Subject: [pypy-commit] pypy stm-gc: main_thread_id is zero after all. Message-ID: <20120211153045.BCD0F11B2E77@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-gc Changeset: r52374:9ba7bb9d0e0a Date: 2012-02-11 16:30 +0100 http://bitbucket.org/pypy/pypy/changeset/9ba7bb9d0e0a/ Log: main_thread_id is zero after all. diff --git a/pypy/module/transaction/interp_transaction.py b/pypy/module/transaction/interp_transaction.py --- a/pypy/module/transaction/interp_transaction.py +++ b/pypy/module/transaction/interp_transaction.py @@ -8,6 +8,8 @@ NUM_THREADS_DEFAULT = 4 # by default +MAIN_THREAD_ID = 0 + class State(object): @@ -21,7 +23,6 @@ self.ll_no_tasks_pending_lock = threadintf.null_ll_lock self.ll_unfinished_lock = threadintf.null_ll_lock self.threadobjs = {} # empty during translation - self.main_thread_id = 0 self.pending = Fifo() def _freeze_(self): @@ -69,15 +70,11 @@ def setvalue(self, value): id = rstm.thread_id() - if self.main_thread_id == 0: - self.main_thread_id = id - else: - # this should not be used from a transaction - assert id == self.main_thread_id + assert id == MAIN_THREAD_ID # should not be used from a transaction self.threadobjs[id] = value def getmainthreadvalue(self): - return self.threadobjs.get(self.main_thread_id, None) + return self.threadobjs.get(MAIN_THREAD_ID, None) def getallvalues(self): return self.threadobjs @@ -267,7 +264,7 @@ state.w_error, space.wrap("recursive invocation of transaction.run()")) state.startup_run() - assert state.main_thread_id == rstm.thread_id() + assert rstm.thread_id() == MAIN_THREAD_ID assert not state.is_locked_no_tasks_pending() if state.pending.is_empty(): return @@ -292,7 +289,7 @@ # assert state.num_waiting_threads == 0 assert state.pending.is_empty() - assert state.threadobjs.keys() == [state.main_thread_id] + assert state.threadobjs.keys() == [MAIN_THREAD_ID] assert not state.is_locked_no_tasks_pending() state.running = False # From noreply at buildbot.pypy.org Sat Feb 11 17:15:54 2012 From: noreply at buildbot.pypy.org (arigo) Date: Sat, 11 Feb 2012 17:15:54 +0100 (CET) Subject: [pypy-commit] pypy stm-gc: Silence this warning, because we get infinitely many of them right Message-ID: <20120211161554.6367C11B2E78@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-gc Changeset: r52376:2cdaa145d8a6 Date: 2012-02-11 16:15 +0000 http://bitbucket.org/pypy/pypy/changeset/2cdaa145d8a6/ Log: Silence this warning, because we get infinitely many of them right now. diff --git a/pypy/translator/c/funcgen.py b/pypy/translator/c/funcgen.py --- a/pypy/translator/c/funcgen.py +++ b/pypy/translator/c/funcgen.py @@ -685,10 +685,10 @@ OP_CAST_OPAQUE_PTR = OP_CAST_POINTER def OP_CAST_PTR_TO_ADR(self, op): - if self.lltypemap(op.args[0]).TO._gckind == 'gc' and self._is_stm(): - from pypy.translator.c.support import log - log.WARNING("cast_ptr_to_adr(gcref) might be a bad idea with STM:") - log.WARNING(" %r" % (self.graph,)) + #if self.lltypemap(op.args[0]).TO._gckind == 'gc' and self._is_stm(): + # from pypy.translator.c.support import log + # log.WARNING("cast_ptr_to_adr(gcref) might be a bad idea with STM:") + # log.WARNING(" %r" % (self.graph,)) return self.OP_CAST_POINTER(op) def OP_CAST_INT_TO_PTR(self, op): From noreply at buildbot.pypy.org Sat Feb 11 17:15:53 2012 From: noreply at buildbot.pypy.org (arigo) Date: Sat, 11 Feb 2012 17:15:53 +0100 (CET) Subject: [pypy-commit] pypy stm-gc: Translation fixes. 
Message-ID: <20120211161553.33B2611B2E77@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-gc Changeset: r52375:c5e09fcf82da Date: 2012-02-11 16:12 +0000 http://bitbucket.org/pypy/pypy/changeset/c5e09fcf82da/ Log: Translation fixes. diff --git a/pypy/rpython/lltypesystem/rffi.py b/pypy/rpython/lltypesystem/rffi.py --- a/pypy/rpython/lltypesystem/rffi.py +++ b/pypy/rpython/lltypesystem/rffi.py @@ -813,6 +813,7 @@ b.append(cp[i]) i += 1 return assert_str0(b.build()) + charp2strn._annenforceargs_ = [None, int] # char* and size -> str (which can contain null bytes) def charpsize2str(cp, size): diff --git a/pypy/rpython/memory/gc/stmgc.py b/pypy/rpython/memory/gc/stmgc.py --- a/pypy/rpython/memory/gc/stmgc.py +++ b/pypy/rpython/memory/gc/stmgc.py @@ -356,7 +356,7 @@ # ---------- def acquire(self, lock): - ll_thread.acquire_NOAUTO(lock, 1) + ll_thread.acquire_NOAUTO(lock, True) def release(self, lock): ll_thread.release_NOAUTO(lock) diff --git a/pypy/translator/stm/test/targetdemo.py b/pypy/translator/stm/test/targetdemo.py --- a/pypy/translator/stm/test/targetdemo.py +++ b/pypy/translator/stm/test/targetdemo.py @@ -93,11 +93,11 @@ glob.USE_MEMORY = bool(int(argv[3])) glob.done = 0 glob.lock = ll_thread.allocate_ll_lock() - ll_thread.acquire_NOAUTO(glob.lock, 1) + ll_thread.acquire_NOAUTO(glob.lock, True) for i in range(glob.NUM_THREADS): glob._arg = Arg() ll_thread.start_new_thread(run_me, ()) - ll_thread.acquire_NOAUTO(glob.lock, 1) + ll_thread.acquire_NOAUTO(glob.lock, True) print "sleeping..." while glob.done < glob.NUM_THREADS: # poor man's lock time.sleep(1) From noreply at buildbot.pypy.org Sat Feb 11 17:37:30 2012 From: noreply at buildbot.pypy.org (arigo) Date: Sat, 11 Feb 2012 17:37:30 +0100 (CET) Subject: [pypy-commit] pypy stm-gc: Support stm_getinteriorfield. Message-ID: <20120211163730.3888C11B2E77@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-gc Changeset: r52377:be298a75c78f Date: 2012-02-11 16:37 +0000 http://bitbucket.org/pypy/pypy/changeset/be298a75c78f/ Log: Support stm_getinteriorfield. 
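An aside on the _nowrapper pattern used for epoll_wait() in the interp_epoll fix above: with _nowrapper=True no RPython wrapper code (and hence no GC allocation) is generated around the call, so the caller has to rffi.cast() every argument, and the raw C result, itself. A generic sketch of that pattern, using close() purely as an illustrative stand-in rather than anything from the changesets:

    from pypy.rpython.lltypesystem import rffi, lltype

    # no wrapper is generated: the raw C function is called directly
    _close = rffi.llexternal("close", [rffi.INT], rffi.INT, _nowrapper=True)

    def raw_close(fd):
        res = _close(rffi.cast(rffi.INT, fd))   # cast each argument explicitly
        return rffi.cast(lltype.Signed, res)    # and cast the C result back
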
diff --git a/pypy/translator/stm/funcgen.py b/pypy/translator/stm/funcgen.py --- a/pypy/translator/stm/funcgen.py +++ b/pypy/translator/stm/funcgen.py @@ -10,7 +10,7 @@ cresulttypename = cdecl(resulttypename, '') newvalue = funcgen.expr(op.result, special_case_void=False) # - assert T is not lltype.Void # XXX + assert T is not lltype.Void fieldsize = rffi.sizeof(T) assert fieldsize in (1, 2, 4, 8) if T == lltype.Float: @@ -52,9 +52,10 @@ return _stm_generic_get(funcgen, op, access_info) def stm_getinteriorfield(funcgen, op): - xxx + ptr = funcgen.expr(op.args[0]) expr = funcgen.interior_expr(op.args) - return _stm_generic_get(funcgen, op, expr) + access_info = (None, ptr, expr) + return _stm_generic_get(funcgen, op, access_info) def stm_become_inevitable(funcgen, op): From noreply at buildbot.pypy.org Sat Feb 11 17:55:29 2012 From: noreply at buildbot.pypy.org (fijal) Date: Sat, 11 Feb 2012 17:55:29 +0100 (CET) Subject: [pypy-commit] pypy.org extradoc: write a draft of performance dissemination Message-ID: <20120211165529.BFA5211B2E78@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: extradoc Changeset: r314:d0cf8619db86 Date: 2012-02-11 18:55 +0200 http://bitbucket.org/pypy/pypy.org/changeset/d0cf8619db86/ Log: write a draft of performance dissemination diff --git a/source/performance.txt b/source/performance.txt new file mode 100644 --- /dev/null +++ b/source/performance.txt @@ -0,0 +1,43 @@ +One of the goals of the PyPy project is to provide a fast and compliant python +interpreter. Part of the way we achieve this is to provide a high-performance +garbage collector and a high performance JIT. Results of comparing PyPy and +CPython can be found on the `speed website`_. Those benchmarks are not a random +collection. They're a combination of real-world Python programs, benchmarks +originally included and benchmarks we found PyPy to be slow on. Consult +descriptions of each for details. + +JIT is however not a magic bullet. There are several characteristics that might +be surprising for people having first encounter with it. JIT is generally good +at speeding up straightforward python code that spends a lot of time in +the bytecode dispatch loop, numerics, heave oo etc. When JIT does not help, +PyPy is generally slower than CPython, those things include: + +* **Tests**: Ideal unit tests would execute each piece of tested code which + leaves no time for the JIT to warm up. + +* **Really short-running scripts**: A rule of thumb is if something runs below + 0.2s JIT has no chance, but it depends a lot on the program in question. In + general, make sure you warm up your program before running benchmarks if + you're measuring something long-running like a server. + +* **Functions in runtime**: Functions that take significant time in runtime. + PyPy's runtime is generally not as optimized as CPython's and expect those + functions to take somewhere between same time as CPython to 2x longer. + XXX explain exactly what runtime is + +Unrelated things that we know PyPy is slow at (note that we're probably working +on it): + +* **Long integers** + +* **Building very large dicts** + +XXX + +We generally consider things that are slower on PyPy than CPython PyPy's bugs. +In case you find a thing that's not documented here, report it to our +`bug tracker`_ for investigation + +.. _`bug tracker`: http://bugs.pypy.org +.. 
_`speed website`: http://speed.pypy.org + From noreply at buildbot.pypy.org Sat Feb 11 17:55:28 2012 From: noreply at buildbot.pypy.org (fijal) Date: Sat, 11 Feb 2012 17:55:28 +0100 (CET) Subject: [pypy-commit] pypy.org extradoc: thanks phreach, update this a bit Message-ID: <20120211165528.A8B6B11B2E77@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: extradoc Changeset: r313:bd50ca0262e2 Date: 2012-02-10 12:10 +0200 http://bitbucket.org/pypy/pypy.org/changeset/bd50ca0262e2/ Log: thanks phreach, update this a bit diff --git a/compat.html b/compat.html --- a/compat.html +++ b/compat.html @@ -67,7 +67,9 @@

    Python libraries known to work under PyPy (the list is not exhaustive):

    We generally consider things that are slower on PyPy than CPython to be bugs of PyPy. If you find some issue that is not documented here, From noreply at buildbot.pypy.org Sun Feb 12 22:10:18 2012 From: noreply at buildbot.pypy.org (fijal) Date: Sun, 12 Feb 2012 22:10:18 +0100 (CET) Subject: [pypy-commit] pypy.org extradoc: oops title Message-ID: <20120212211018.95D8C8203C@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: extradoc Changeset: r325:57e99e68ffb7 Date: 2012-02-12 23:09 +0200 http://bitbucket.org/pypy/pypy.org/changeset/57e99e68ffb7/ Log: oops title diff --git a/performance.html b/performance.html --- a/performance.html +++ b/performance.html @@ -1,7 +1,7 @@ - PyPy :: PyPy + PyPy :: Performance @@ -44,7 +44,7 @@

-PyPy
+Performance

    One of the goals of the PyPy project is to provide a fast and compliant python interpreter. Some of the ways we achieve this are by providing a high-performance garbage collector (GC) and a high-performance diff --git a/source/performance.txt b/source/performance.txt --- a/source/performance.txt +++ b/source/performance.txt @@ -1,6 +1,6 @@ --- layout: page -title: PyPy +title: Performance --- One of the goals of the PyPy project is to provide a fast and compliant From noreply at buildbot.pypy.org Sun Feb 12 22:19:25 2012 From: noreply at buildbot.pypy.org (fijal) Date: Sun, 12 Feb 2012 22:19:25 +0100 (CET) Subject: [pypy-commit] pypy backend-vector-ops: merge default Message-ID: <20120212211925.21D498203C@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: backend-vector-ops Changeset: r52397:e6b7060ae276 Date: 2012-02-11 19:01 +0200 http://bitbucket.org/pypy/pypy/changeset/e6b7060ae276/ Log: merge default diff --git a/lib_pypy/datetime.py b/lib_pypy/datetime.py --- a/lib_pypy/datetime.py +++ b/lib_pypy/datetime.py @@ -1520,7 +1520,7 @@ def utcfromtimestamp(cls, t): "Construct a UTC datetime from a POSIX timestamp (like time.time())." t, frac = divmod(t, 1.0) - us = round(frac * 1e6) + us = int(round(frac * 1e6)) # If timestamp is less than one microsecond smaller than a # full second, us can be rounded up to 1000000. In this case, diff --git a/pypy/doc/getting-started-python.rst b/pypy/doc/getting-started-python.rst --- a/pypy/doc/getting-started-python.rst +++ b/pypy/doc/getting-started-python.rst @@ -103,18 +103,22 @@ executable. The executable behaves mostly like a normal Python interpreter:: $ ./pypy-c - Python 2.7.0 (61ef2a11b56a, Mar 02 2011, 03:00:11) - [PyPy 1.6.0 with GCC 4.4.3] on linux2 + Python 2.7.2 (0e28b379d8b3, Feb 09 2012, 19:41:03) + [PyPy 1.8.0 with GCC 4.4.3] on linux2 Type "help", "copyright", "credits" or "license" for more information. And now for something completely different: ``this sentence is false'' >>>> 46 - 4 42 >>>> from test import pystone >>>> pystone.main() - Pystone(1.1) time for 50000 passes = 0.280017 - This machine benchmarks at 178561 pystones/second - >>>> + Pystone(1.1) time for 50000 passes = 0.220015 + This machine benchmarks at 227257 pystones/second + >>>> pystone.main() + Pystone(1.1) time for 50000 passes = 0.060004 + This machine benchmarks at 833278 pystones/second + >>>> +Note that pystone gets faster as the JIT kicks in. This executable can be moved around or copied on other machines; see Installation_ below. diff --git a/pypy/doc/getting-started.rst b/pypy/doc/getting-started.rst --- a/pypy/doc/getting-started.rst +++ b/pypy/doc/getting-started.rst @@ -53,14 +53,15 @@ PyPy is ready to be executed as soon as you unpack the tarball or the zip file, with no need to install it in any specific location:: - $ tar xf pypy-1.7-linux.tar.bz2 - - $ ./pypy-1.7/bin/pypy - Python 2.7.1 (?, Apr 27 2011, 12:44:21) - [PyPy 1.7.0 with GCC 4.4.3] on linux2 + $ tar xf pypy-1.8-linux.tar.bz2 + $ ./pypy-1.8/bin/pypy + Python 2.7.2 (0e28b379d8b3, Feb 09 2012, 19:41:03) + [PyPy 1.8.0 with GCC 4.4.3] on linux2 Type "help", "copyright", "credits" or "license" for more information. 
- And now for something completely different: ``implementing LOGO in LOGO: - "turtles all the way down"'' + And now for something completely different: ``it seems to me that once you + settle on an execution / object model and / or bytecode format, you've already + decided what languages (where the 's' seems superfluous) support is going to be + first class for'' >>>> If you want to make PyPy available system-wide, you can put a symlink to the @@ -75,14 +76,14 @@ $ curl -O https://raw.github.com/pypa/pip/master/contrib/get-pip.py - $ ./pypy-1.7/bin/pypy distribute_setup.py + $ ./pypy-1.8/bin/pypy distribute_setup.py - $ ./pypy-1.7/bin/pypy get-pip.py + $ ./pypy-1.8/bin/pypy get-pip.py - $ ./pypy-1.7/bin/pip install pygments # for example + $ ./pypy-1.8/bin/pip install pygments # for example -3rd party libraries will be installed in ``pypy-1.7/site-packages``, and -the scripts in ``pypy-1.7/bin``. +3rd party libraries will be installed in ``pypy-1.8/site-packages``, and +the scripts in ``pypy-1.8/bin``. Installing using virtualenv --------------------------- diff --git a/pypy/doc/index.rst b/pypy/doc/index.rst --- a/pypy/doc/index.rst +++ b/pypy/doc/index.rst @@ -15,7 +15,7 @@ * `FAQ`_: some frequently asked questions. -* `Release 1.7`_: the latest official release +* `Release 1.8`_: the latest official release * `PyPy Blog`_: news and status info about PyPy @@ -75,7 +75,7 @@ .. _`Getting Started`: getting-started.html .. _`Papers`: extradoc.html .. _`Videos`: video-index.html -.. _`Release 1.7`: http://pypy.org/download.html +.. _`Release 1.8`: http://pypy.org/download.html .. _`speed.pypy.org`: http://speed.pypy.org .. _`RPython toolchain`: translation.html .. _`potential project ideas`: project-ideas.html @@ -120,9 +120,9 @@ Windows, on top of .NET, and on top of Java. To dig into PyPy it is recommended to try out the current Mercurial default branch, which is always working or mostly working, -instead of the latest release, which is `1.7`__. +instead of the latest release, which is `1.8`__. -.. __: release-1.7.0.html +.. __: release-1.8.0.html PyPy is mainly developed on Linux and Mac OS X. Windows is supported, but platform-specific bugs tend to take longer before we notice and fix diff --git a/pypy/doc/release-1.8.0.rst b/pypy/doc/release-1.8.0.rst --- a/pypy/doc/release-1.8.0.rst +++ b/pypy/doc/release-1.8.0.rst @@ -2,16 +2,21 @@ PyPy 1.8 - business as usual ============================ -We're pleased to announce the 1.8 release of PyPy. As has become a habit, this -release brings a lot of bugfixes, and performance and memory improvements over -the 1.7 release. The main highlight of the release is the introduction of -list strategies which makes homogenous lists more efficient both in terms -of performance and memory. This release also upgrades us from Python 2.7.1 compatibility to 2.7.2. Otherwise it's "business as usual" in the sense -that performance improved roughly 10% on average since the previous release. -You can download the PyPy 1.8 release here: +We're pleased to announce the 1.8 release of PyPy. As habitual this +release brings a lot of bugfixes, together with performance and memory +improvements over the 1.7 release. The main highlight of the release +is the introduction of `list strategies`_ which makes homogenous lists +more efficient both in terms of performance and memory. This release +also upgrades us from Python 2.7.1 compatibility to 2.7.2. 
Otherwise +it's "business as usual" in the sense that performance improved +roughly 10% on average since the previous release. + +you can download the PyPy 1.8 release here: http://pypy.org/download.html +.. _`list strategies`: http://morepypy.blogspot.com/2011/10/more-compact-lists-with-list-strategies.html + What is PyPy? ============= @@ -60,13 +65,6 @@ * New JIT hooks that allow you to hook into the JIT process from your python program. There is a `brief overview`_ of what they offer. -* Since the last release there was a significant breakthrough in PyPy's - fundraising. We now have enough funds to work on first stages of `numpypy`_ - and `py3k`_. We would like to thank again to everyone who donated. - - It's also probably worth noting, we're considering donations for the STM - project. - * Standard library upgrade from 2.7.1 to 2.7.2. Ongoing work @@ -82,7 +80,15 @@ * More numpy work -* Software Transactional Memory, you can read more about `our plans`_ +* Since the last release there was a significant breakthrough in PyPy's + fundraising. We now have enough funds to work on first stages of `numpypy`_ + and `py3k`_. We would like to thank again to everyone who donated. + +* It's also probably worth noting, we're considering donations for the + Software Transactional Memory project. You can read more about `our plans`_ + +Cheers, +The PyPy Team .. _`brief overview`: http://doc.pypy.org/en/latest/jit-hooks.html .. _`numpy status page`: http://buildbot.pypy.org/numpy-status/latest.html diff --git a/pypy/module/_io/test/test_fileio.py b/pypy/module/_io/test/test_fileio.py --- a/pypy/module/_io/test/test_fileio.py +++ b/pypy/module/_io/test/test_fileio.py @@ -134,7 +134,10 @@ assert a == 'a\nbxxxxxxx' def test_nonblocking_read(self): - import os, fcntl + try: + import os, fcntl + except ImportError: + skip("need fcntl to set nonblocking mode") r_fd, w_fd = os.pipe() # set nonblocking fcntl.fcntl(r_fd, fcntl.F_SETFL, os.O_NONBLOCK) diff --git a/pypy/module/cpyext/include/patchlevel.h b/pypy/module/cpyext/include/patchlevel.h --- a/pypy/module/cpyext/include/patchlevel.h +++ b/pypy/module/cpyext/include/patchlevel.h @@ -21,12 +21,12 @@ /* Version parsed out into numeric values */ #define PY_MAJOR_VERSION 2 #define PY_MINOR_VERSION 7 -#define PY_MICRO_VERSION 1 +#define PY_MICRO_VERSION 2 #define PY_RELEASE_LEVEL PY_RELEASE_LEVEL_FINAL #define PY_RELEASE_SERIAL 0 /* Version as a string */ -#define PY_VERSION "2.7.1" +#define PY_VERSION "2.7.2" /* PyPy version as a string */ #define PYPY_VERSION "1.8.1" diff --git a/pypy/module/micronumpy/__init__.py b/pypy/module/micronumpy/__init__.py --- a/pypy/module/micronumpy/__init__.py +++ b/pypy/module/micronumpy/__init__.py @@ -95,6 +95,7 @@ ("tan", "tan"), ('bitwise_and', 'bitwise_and'), ('bitwise_or', 'bitwise_or'), + ('bitwise_xor', 'bitwise_xor'), ('bitwise_not', 'invert'), ('isnan', 'isnan'), ('isinf', 'isinf'), diff --git a/pypy/module/micronumpy/interp_boxes.py b/pypy/module/micronumpy/interp_boxes.py --- a/pypy/module/micronumpy/interp_boxes.py +++ b/pypy/module/micronumpy/interp_boxes.py @@ -83,6 +83,8 @@ descr_truediv = _binop_impl("true_divide") descr_mod = _binop_impl("mod") descr_pow = _binop_impl("power") + descr_lshift = _binop_impl("left_shift") + descr_rshift = _binop_impl("right_shift") descr_and = _binop_impl("bitwise_and") descr_or = _binop_impl("bitwise_or") descr_xor = _binop_impl("bitwise_xor") @@ -97,13 +99,31 @@ descr_radd = _binop_right_impl("add") descr_rsub = _binop_right_impl("subtract") descr_rmul = 
_binop_right_impl("multiply") + descr_rdiv = _binop_right_impl("divide") + descr_rtruediv = _binop_right_impl("true_divide") + descr_rmod = _binop_right_impl("mod") descr_rpow = _binop_right_impl("power") + descr_rlshift = _binop_right_impl("left_shift") + descr_rrshift = _binop_right_impl("right_shift") + descr_rand = _binop_right_impl("bitwise_and") + descr_ror = _binop_right_impl("bitwise_or") + descr_rxor = _binop_right_impl("bitwise_xor") descr_pos = _unaryop_impl("positive") descr_neg = _unaryop_impl("negative") descr_abs = _unaryop_impl("absolute") descr_invert = _unaryop_impl("invert") + def descr_divmod(self, space, w_other): + w_quotient = self.descr_div(space, w_other) + w_remainder = self.descr_mod(space, w_other) + return space.newtuple([w_quotient, w_remainder]) + + def descr_rdivmod(self, space, w_other): + w_quotient = self.descr_rdiv(space, w_other) + w_remainder = self.descr_rmod(space, w_other) + return space.newtuple([w_quotient, w_remainder]) + def item(self, space): return self.get_dtype(space).itemtype.to_builtin_type(space, self) @@ -185,7 +205,10 @@ __div__ = interp2app(W_GenericBox.descr_div), __truediv__ = interp2app(W_GenericBox.descr_truediv), __mod__ = interp2app(W_GenericBox.descr_mod), + __divmod__ = interp2app(W_GenericBox.descr_divmod), __pow__ = interp2app(W_GenericBox.descr_pow), + __lshift__ = interp2app(W_GenericBox.descr_lshift), + __rshift__ = interp2app(W_GenericBox.descr_rshift), __and__ = interp2app(W_GenericBox.descr_and), __or__ = interp2app(W_GenericBox.descr_or), __xor__ = interp2app(W_GenericBox.descr_xor), @@ -193,7 +216,16 @@ __radd__ = interp2app(W_GenericBox.descr_radd), __rsub__ = interp2app(W_GenericBox.descr_rsub), __rmul__ = interp2app(W_GenericBox.descr_rmul), + __rdiv__ = interp2app(W_GenericBox.descr_rdiv), + __rtruediv__ = interp2app(W_GenericBox.descr_rtruediv), + __rmod__ = interp2app(W_GenericBox.descr_rmod), + __rdivmod__ = interp2app(W_GenericBox.descr_rdivmod), __rpow__ = interp2app(W_GenericBox.descr_rpow), + __rlshift__ = interp2app(W_GenericBox.descr_rlshift), + __rrshift__ = interp2app(W_GenericBox.descr_rrshift), + __rand__ = interp2app(W_GenericBox.descr_rand), + __ror__ = interp2app(W_GenericBox.descr_ror), + __rxor__ = interp2app(W_GenericBox.descr_rxor), __eq__ = interp2app(W_GenericBox.descr_eq), __ne__ = interp2app(W_GenericBox.descr_ne), diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -101,8 +101,14 @@ descr_sub = _binop_impl("subtract") descr_mul = _binop_impl("multiply") descr_div = _binop_impl("divide") + descr_truediv = _binop_impl("true_divide") + descr_mod = _binop_impl("mod") descr_pow = _binop_impl("power") - descr_mod = _binop_impl("mod") + descr_lshift = _binop_impl("left_shift") + descr_rshift = _binop_impl("right_shift") + descr_and = _binop_impl("bitwise_and") + descr_or = _binop_impl("bitwise_or") + descr_xor = _binop_impl("bitwise_xor") descr_eq = _binop_impl("equal") descr_ne = _binop_impl("not_equal") @@ -111,8 +117,10 @@ descr_gt = _binop_impl("greater") descr_ge = _binop_impl("greater_equal") - descr_and = _binop_impl("bitwise_and") - descr_or = _binop_impl("bitwise_or") + def descr_divmod(self, space, w_other): + w_quotient = self.descr_div(space, w_other) + w_remainder = self.descr_mod(space, w_other) + return space.newtuple([w_quotient, w_remainder]) def _binop_right_impl(ufunc_name): def impl(self, space, w_other): @@ -127,8 +135,19 @@ 
descr_rsub = _binop_right_impl("subtract") descr_rmul = _binop_right_impl("multiply") descr_rdiv = _binop_right_impl("divide") + descr_rtruediv = _binop_right_impl("true_divide") + descr_rmod = _binop_right_impl("mod") descr_rpow = _binop_right_impl("power") - descr_rmod = _binop_right_impl("mod") + descr_rlshift = _binop_right_impl("left_shift") + descr_rrshift = _binop_right_impl("right_shift") + descr_rand = _binop_right_impl("bitwise_and") + descr_ror = _binop_right_impl("bitwise_or") + descr_rxor = _binop_right_impl("bitwise_xor") + + def descr_rdivmod(self, space, w_other): + w_quotient = self.descr_rdiv(space, w_other) + w_remainder = self.descr_rmod(space, w_other) + return space.newtuple([w_quotient, w_remainder]) def _reduce_ufunc_impl(ufunc_name, promote_to_largest=False): def impl(self, space, w_axis=None): @@ -1227,21 +1246,36 @@ __pos__ = interp2app(BaseArray.descr_pos), __neg__ = interp2app(BaseArray.descr_neg), __abs__ = interp2app(BaseArray.descr_abs), + __invert__ = interp2app(BaseArray.descr_invert), __nonzero__ = interp2app(BaseArray.descr_nonzero), __add__ = interp2app(BaseArray.descr_add), __sub__ = interp2app(BaseArray.descr_sub), __mul__ = interp2app(BaseArray.descr_mul), __div__ = interp2app(BaseArray.descr_div), + __truediv__ = interp2app(BaseArray.descr_truediv), + __mod__ = interp2app(BaseArray.descr_mod), + __divmod__ = interp2app(BaseArray.descr_divmod), __pow__ = interp2app(BaseArray.descr_pow), - __mod__ = interp2app(BaseArray.descr_mod), + __lshift__ = interp2app(BaseArray.descr_lshift), + __rshift__ = interp2app(BaseArray.descr_rshift), + __and__ = interp2app(BaseArray.descr_and), + __or__ = interp2app(BaseArray.descr_or), + __xor__ = interp2app(BaseArray.descr_xor), __radd__ = interp2app(BaseArray.descr_radd), __rsub__ = interp2app(BaseArray.descr_rsub), __rmul__ = interp2app(BaseArray.descr_rmul), __rdiv__ = interp2app(BaseArray.descr_rdiv), + __rtruediv__ = interp2app(BaseArray.descr_rtruediv), + __rmod__ = interp2app(BaseArray.descr_rmod), + __rdivmod__ = interp2app(BaseArray.descr_rdivmod), __rpow__ = interp2app(BaseArray.descr_rpow), - __rmod__ = interp2app(BaseArray.descr_rmod), + __rlshift__ = interp2app(BaseArray.descr_rlshift), + __rrshift__ = interp2app(BaseArray.descr_rrshift), + __rand__ = interp2app(BaseArray.descr_rand), + __ror__ = interp2app(BaseArray.descr_ror), + __rxor__ = interp2app(BaseArray.descr_rxor), __eq__ = interp2app(BaseArray.descr_eq), __ne__ = interp2app(BaseArray.descr_ne), @@ -1250,10 +1284,6 @@ __gt__ = interp2app(BaseArray.descr_gt), __ge__ = interp2app(BaseArray.descr_ge), - __and__ = interp2app(BaseArray.descr_and), - __or__ = interp2app(BaseArray.descr_or), - __invert__ = interp2app(BaseArray.descr_invert), - __repr__ = interp2app(BaseArray.descr_repr), __str__ = interp2app(BaseArray.descr_str), __array_interface__ = GetSetProperty(BaseArray.descr_array_iface), diff --git a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py --- a/pypy/module/micronumpy/interp_ufuncs.py +++ b/pypy/module/micronumpy/interp_ufuncs.py @@ -392,6 +392,8 @@ ("true_divide", "div", 2, {"promote_to_float": True}), ("mod", "mod", 2, {"promote_bools": True}), ("power", "pow", 2, {"promote_bools": True}), + ("left_shift", "lshift", 2, {"int_only": True}), + ("right_shift", "rshift", 2, {"int_only": True}), ("equal", "eq", 2, {"comparison_func": True}), ("not_equal", "ne", 2, {"comparison_func": True}), diff --git a/pypy/module/micronumpy/test/test_dtypes.py b/pypy/module/micronumpy/test/test_dtypes.py --- 
a/pypy/module/micronumpy/test/test_dtypes.py +++ b/pypy/module/micronumpy/test/test_dtypes.py @@ -406,15 +406,28 @@ from operator import truediv from _numpypy import float64, int_, True_, False_ + assert 5 / int_(2) == int_(2) assert truediv(int_(3), int_(2)) == float64(1.5) + assert truediv(3, int_(2)) == float64(1.5) + assert int_(8) % int_(3) == int_(2) + assert 8 % int_(3) == int_(2) + assert divmod(int_(8), int_(3)) == (int_(2), int_(2)) + assert divmod(8, int_(3)) == (int_(2), int_(2)) assert 2 ** int_(3) == int_(8) + assert int_(3) << int_(2) == int_(12) + assert 3 << int_(2) == int_(12) + assert int_(8) >> int_(2) == int_(2) + assert 8 >> int_(2) == int_(2) assert int_(3) & int_(1) == int_(1) - raises(TypeError, lambda: float64(3) & 1) - assert int_(8) % int_(3) == int_(2) + assert 2 & int_(3) == int_(2) assert int_(2) | int_(1) == int_(3) + assert 2 | int_(1) == int_(3) assert int_(3) ^ int_(5) == int_(6) assert True_ ^ False_ is True_ + assert 5 ^ int_(3) == int_(6) assert +int_(3) == int_(3) assert ~int_(3) == int_(-4) + raises(TypeError, lambda: float64(3) & 1) + diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -625,6 +625,59 @@ for i in range(5): assert b[i] == i / 5.0 + def test_truediv(self): + from operator import truediv + from _numpypy import arange + + assert (truediv(arange(5), 2) == [0., .5, 1., 1.5, 2.]).all() + assert (truediv(2, arange(3)) == [float("inf"), 2., 1.]).all() + + def test_divmod(self): + from _numpypy import arange + + a, b = divmod(arange(10), 3) + assert (a == [0, 0, 0, 1, 1, 1, 2, 2, 2, 3]).all() + assert (b == [0, 1, 2, 0, 1, 2, 0, 1, 2, 0]).all() + + def test_rdivmod(self): + from _numpypy import arange + + a, b = divmod(3, arange(1, 5)) + assert (a == [3, 1, 1, 0]).all() + assert (b == [0, 1, 0, 3]).all() + + def test_lshift(self): + from _numpypy import array + + a = array([0, 1, 2, 3]) + assert (a << 2 == [0, 4, 8, 12]).all() + a = array([True, False]) + assert (a << 2 == [4, 0]).all() + a = array([1.0]) + raises(TypeError, lambda: a << 2) + + def test_rlshift(self): + from _numpypy import arange + + a = arange(3) + assert (2 << a == [2, 4, 8]).all() + + def test_rshift(self): + from _numpypy import arange, array + + a = arange(10) + assert (a >> 2 == [0, 0, 0, 0, 1, 1, 1, 1, 2, 2]).all() + a = array([True, False]) + assert (a >> 1 == [0, 0]).all() + a = arange(3, dtype=float) + raises(TypeError, lambda: a >> 1) + + def test_rrshift(self): + from _numpypy import arange + + a = arange(5) + assert (2 >> a == [2, 1, 0, 0, 0]).all() + def test_pow(self): from _numpypy import array a = array(range(5), float) @@ -678,6 +731,30 @@ for i in range(5): assert b[i] == i % 2 + def test_rand(self): + from _numpypy import arange + + a = arange(5) + assert (3 & a == [0, 1, 2, 3, 0]).all() + + def test_ror(self): + from _numpypy import arange + + a = arange(5) + assert (3 | a == [3, 3, 3, 3, 7]).all() + + def test_xor(self): + from _numpypy import arange + + a = arange(5) + assert (a ^ 3 == [3, 2, 1, 0, 7]).all() + + def test_rxor(self): + from _numpypy import arange + + a = arange(5) + assert (3 ^ a == [3, 2, 1, 0, 7]).all() + def test_pos(self): from _numpypy import array a = array([1., -2., 3., -4., -5.]) diff --git a/pypy/module/micronumpy/test/test_ufuncs.py b/pypy/module/micronumpy/test/test_ufuncs.py --- a/pypy/module/micronumpy/test/test_ufuncs.py +++ b/pypy/module/micronumpy/test/test_ufuncs.py 
@@ -368,14 +368,14 @@ assert b.shape == (1, 4) assert (add.reduce(a, 0, keepdims=True) == [12, 15, 18, 21]).all() - def test_bitwise(self): - from _numpypy import bitwise_and, bitwise_or, arange, array + from _numpypy import bitwise_and, bitwise_or, bitwise_xor, arange, array a = arange(6).reshape(2, 3) assert (a & 1 == [[0, 1, 0], [1, 0, 1]]).all() assert (a & 1 == bitwise_and(a, 1)).all() assert (a | 1 == [[1, 1, 3], [3, 5, 5]]).all() assert (a | 1 == bitwise_or(a, 1)).all() + assert (a ^ 3 == bitwise_xor(a, 3)).all() raises(TypeError, 'array([1.0]) & 1') def test_unary_bitops(self): diff --git a/pypy/module/micronumpy/types.py b/pypy/module/micronumpy/types.py --- a/pypy/module/micronumpy/types.py +++ b/pypy/module/micronumpy/types.py @@ -295,6 +295,14 @@ v1 *= v1 return res + @simple_binary_op + def lshift(self, v1, v2): + return v1 << v2 + + @simple_binary_op + def rshift(self, v1, v2): + return v1 >> v2 + @simple_unary_op def sign(self, v): if v > 0: diff --git a/pypy/module/sys/version.py b/pypy/module/sys/version.py --- a/pypy/module/sys/version.py +++ b/pypy/module/sys/version.py @@ -7,7 +7,7 @@ from pypy.interpreter import gateway #XXX # the release serial 42 is not in range(16) -CPYTHON_VERSION = (2, 7, 1, "final", 42) #XXX # sync patchlevel.h +CPYTHON_VERSION = (2, 7, 2, "final", 42) #XXX # sync patchlevel.h CPYTHON_API_VERSION = 1013 #XXX # sync with include/modsupport.h PYPY_VERSION = (1, 8, 1, "dev", 0) #XXX # sync patchlevel.h diff --git a/pypy/module/test_lib_pypy/test_datetime.py b/pypy/module/test_lib_pypy/test_datetime.py --- a/pypy/module/test_lib_pypy/test_datetime.py +++ b/pypy/module/test_lib_pypy/test_datetime.py @@ -22,3 +22,7 @@ del os.environ["TZ"] else: os.environ["TZ"] = prev_tz + +def test_utcfromtimestamp_microsecond(): + dt = datetime.datetime.utcfromtimestamp(0) + assert isinstance(dt.microsecond, int) diff --git a/pypy/tool/release/package.py b/pypy/tool/release/package.py --- a/pypy/tool/release/package.py +++ b/pypy/tool/release/package.py @@ -60,7 +60,8 @@ if sys.platform == 'win32': # Can't rename a DLL: it is always called 'libpypy-c.dll' for extra in ['libpypy-c.dll', - 'libexpat.dll', 'sqlite3.dll', 'msvcr90.dll']: + 'libexpat.dll', 'sqlite3.dll', 'msvcr90.dll', + 'libeay32.dll', 'ssleay32.dll']: p = pypy_c.dirpath().join(extra) if not p.check(): p = py.path.local.sysfind(extra) diff --git a/pypy/translator/c/gcc/trackgcroot.py b/pypy/translator/c/gcc/trackgcroot.py --- a/pypy/translator/c/gcc/trackgcroot.py +++ b/pypy/translator/c/gcc/trackgcroot.py @@ -478,6 +478,7 @@ 'cvt', 'ucomi', 'comi', 'subs', 'subp' , 'adds', 'addp', 'xorp', 'movap', 'movd', 'movlp', 'sqrtsd', 'movhpd', 'mins', 'minp', 'maxs', 'maxp', 'unpck', 'pxor', 'por', # sse2 + 'shufps', 'shufpd', # arithmetic operations should not produce GC pointers 'inc', 'dec', 'not', 'neg', 'or', 'and', 'sbb', 'adc', 'shl', 'shr', 'sal', 'sar', 'rol', 'ror', 'mul', 'imul', 'div', 'idiv', From noreply at buildbot.pypy.org Sun Feb 12 22:19:26 2012 From: noreply at buildbot.pypy.org (fijal) Date: Sun, 12 Feb 2012 22:19:26 +0100 (CET) Subject: [pypy-commit] pypy backend-vector-ops: tests and fixes Message-ID: <20120212211926.50EE38203C@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: backend-vector-ops Changeset: r52398:64ecf22e0e9e Date: 2012-02-12 23:18 +0200 http://bitbucket.org/pypy/pypy/changeset/64ecf22e0e9e/ Log: tests and fixes diff --git a/pypy/jit/metainterp/optimizeopt/test/test_vectorize.py b/pypy/jit/metainterp/optimizeopt/test/test_vectorize.py --- 
a/pypy/jit/metainterp/optimizeopt/test/test_vectorize.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_vectorize.py @@ -27,9 +27,9 @@ def test_basic(self): ops = """ [p0, p1, p2, i0, i1, i2] - call(p0, i0, descr=assert_aligned) - call(p1, i1, descr=assert_aligned) - call(p1, i2, descr=assert_aligned) + call(0, p0, i0, descr=assert_aligned) + call(0, p1, i1, descr=assert_aligned) + call(0, p1, i2, descr=assert_aligned) f0 = getarrayitem_raw(p0, i0, descr=arraydescr) f1 = getarrayitem_raw(p1, i1, descr=arraydescr) f2 = float_add(f0, f1) @@ -59,9 +59,9 @@ def test_basic_sub(self): ops = """ [p0, p1, p2, i0, i1, i2] - call(p0, i0, descr=assert_aligned) - call(p1, i1, descr=assert_aligned) - call(p1, i2, descr=assert_aligned) + call(0, p0, i0, descr=assert_aligned) + call(0, p1, i1, descr=assert_aligned) + call(0, p1, i2, descr=assert_aligned) f0 = getarrayitem_raw(p0, i0, descr=arraydescr) f1 = getarrayitem_raw(p1, i1, descr=arraydescr) f2 = float_sub(f0, f1) @@ -93,9 +93,9 @@ def test_unfit_trees(self): ops = """ [p0, p1, p2, i0, i1, i2] - call(p0, i0, descr=assert_aligned) - call(p1, i1, descr=assert_aligned) - call(p1, i2, descr=assert_aligned) + call(0, p0, i0, descr=assert_aligned) + call(0, p1, i1, descr=assert_aligned) + call(0, p1, i2, descr=assert_aligned) f0 = getarrayitem_raw(p0, i0, descr=arraydescr) f1 = getarrayitem_raw(p1, i1, descr=arraydescr) f2 = float_add(f0, f1) @@ -131,9 +131,9 @@ def test_unfit_trees_2(self): ops = """ [p0, p1, p2, i0, i1, i2] - call(p0, i0, descr=assert_aligned) - call(p1, i1, descr=assert_aligned) - call(p1, i2, descr=assert_aligned) + call(0, p0, i0, descr=assert_aligned) + call(0, p1, i1, descr=assert_aligned) + call(0, p1, i2, descr=assert_aligned) f0 = getarrayitem_raw(p0, i0, descr=arraydescr) f1 = getarrayitem_raw(p1, i1, descr=arraydescr) f2 = float_add(f0, f1) @@ -163,9 +163,9 @@ def test_unfit_trees_3(self): ops = """ [p0, p1, p2, i0, i1, i2] - call(p0, i0, descr=assert_aligned) - call(p1, i1, descr=assert_aligned) - call(p1, i2, descr=assert_aligned) + call(0, p0, i0, descr=assert_aligned) + call(0, p1, i1, descr=assert_aligned) + call(0, p1, i2, descr=assert_aligned) f0 = getarrayitem_raw(p0, i0, descr=arraydescr) f1 = getarrayitem_raw(p1, i1, descr=arraydescr) f2 = float_add(f0, f1) @@ -199,9 +199,9 @@ def test_guard_forces(self): ops = """ [p0, p1, p2, i0, i1, i2] - call(p0, i0, descr=assert_aligned) - call(p1, i1, descr=assert_aligned) - call(p1, i2, descr=assert_aligned) + call(0, p0, i0, descr=assert_aligned) + call(0, p1, i1, descr=assert_aligned) + call(0, p1, i2, descr=assert_aligned) f0 = getarrayitem_raw(p0, i0, descr=arraydescr) f1 = getarrayitem_raw(p1, i1, descr=arraydescr) f2 = float_add(f0, f1) @@ -233,14 +233,14 @@ def test_guard_prevents(self): ops = """ [p0, p1, p2, i0, i1, i2] - call(p0, i0, descr=assert_aligned) - call(p1, i1, descr=assert_aligned) - call(p1, i2, descr=assert_aligned) + call(0, p0, i0, descr=assert_aligned) + call(0, p1, i1, descr=assert_aligned) + call(0, p1, i2, descr=assert_aligned) f0 = getarrayitem_raw(p0, i0, descr=arraydescr) f1 = getarrayitem_raw(p1, i1, descr=arraydescr) f2 = float_add(f0, f1) + guard_true(i1) [p0, p1, p2, i1, i0, i2, f1, f2] setarrayitem_raw(p2, i2, f2, descr=arraydescr) - guard_true(i1) [p0, p1, p2, i1, i0, i2, f1, f2] i0_1 = int_add(i0, 1) i1_1 = int_add(1, i1) i2_1 = int_add(i2, 1) @@ -255,8 +255,8 @@ f0 = getarrayitem_raw(p0, i0, descr=arraydescr) f1 = getarrayitem_raw(p1, i1, descr=arraydescr) f2 = float_add(f0, f1) + guard_true(i1) [p0, p1, p2, i1, i0, i2, f1, f2] 
setarrayitem_raw(p2, i2, f2, descr=arraydescr) - guard_true(i1) [p0, p1, p2, i1, i0, i2, f1, f2] i0_1 = int_add(i0, 1) i2_1 = int_add(i2, 1) f0_1 = getarrayitem_raw(p0, i0_1, descr=arraydescr) @@ -266,3 +266,10 @@ finish(p0, p1, p2, i0_1, i2_1) """ self.optimize_loop(ops, expected) + + def test_force_by_box_usage(self): + ops = """ + [p0, p1, p2, i0, i1, i2] + call(0, p0, i0, descr=assert_aligned) + f0 = getarrayitem_raw(p0, i0, descr=arraydescr) + xxx diff --git a/pypy/jit/metainterp/optimizeopt/vectorize.py b/pypy/jit/metainterp/optimizeopt/vectorize.py --- a/pypy/jit/metainterp/optimizeopt/vectorize.py +++ b/pypy/jit/metainterp/optimizeopt/vectorize.py @@ -101,7 +101,7 @@ def optimize_CALL(self, op): oopspec = self.get_oopspec(op) if oopspec == EffectInfo.OS_ASSERT_ALIGNED: - index = self.getvalue(op.getarg(1)) + index = self.getvalue(op.getarg(2)) self.tracked_indexes[index] = TrackIndex(index, 0) else: self.optimize_default(op) @@ -165,7 +165,7 @@ def emit_vector_ops(self, forbidden_boxes): for arg in forbidden_boxes: - if arg in self.track: + if self.getvalue(arg) in self.track: self.reset() return if self.full: @@ -181,7 +181,7 @@ for arr, items in self.full.iteritems(): items[0].emit(self) self.ops_so_far = [] - self.reset() + self.reset() def optimize_default(self, op): # list operations that are fine, not that many diff --git a/pypy/module/micronumpy/signature.py b/pypy/module/micronumpy/signature.py --- a/pypy/module/micronumpy/signature.py +++ b/pypy/module/micronumpy/signature.py @@ -152,9 +152,10 @@ def eval(self, frame, arr): iter = frame.iterators[self.iter_no] offset = iter.offset + arr = frame.arrays[self.array_no] if frame.first_iteration: - jit.assert_aligned(offset) - return self.dtype.getitem(frame.arrays[self.array_no], offset) + jit.assert_aligned(arr, offset) + return self.dtype.getitem(arr, offset) class ScalarSignature(ConcreteSignature): def debug_repr(self): diff --git a/pypy/rlib/jit.py b/pypy/rlib/jit.py --- a/pypy/rlib/jit.py +++ b/pypy/rlib/jit.py @@ -872,6 +872,7 @@ return hop.genop('jit_record_known_class', [v_inst, v_cls], resulttype=lltype.Void) - at oopspec('assert_aligned(arg)') -def assert_aligned(arg): - keepalive_until_here(arg) + at oopspec('assert_aligned(arr, index)') +def assert_aligned(arr, index): + keepalive_until_here(arr) + keepalive_until_here(index) From noreply at buildbot.pypy.org Sun Feb 12 22:46:00 2012 From: noreply at buildbot.pypy.org (edelsohn) Date: Sun, 12 Feb 2012 22:46:00 +0100 (CET) Subject: [pypy-commit] pypy.org extradoc: Add some verbs to cPickle sentence. Message-ID: <20120212214600.3D7E18203C@wyvern.cs.uni-duesseldorf.de> Author: edelsohn Branch: extradoc Changeset: r326:78c58258887b Date: 2012-02-12 16:45 -0500 http://bitbucket.org/pypy/pypy.org/changeset/78c58258887b/ Log: Add some verbs to cPickle sentence. diff --git a/source/performance.txt b/source/performance.txt --- a/source/performance.txt +++ b/source/performance.txt @@ -63,10 +63,11 @@ that uses something like ``ctypes`` for the interface. * **Missing RPython modules**: A few modules of the standard library - (like ``csv`` and ``cPickle``) are in C in CPython, but in pure Python - in PyPy. Sometimes the JIT is able to do a good job on them, and - sometimes not. In most cases (like ``csv`` and ``cPickle``), we're slower - than cPython, with the notable exception of ``json`` and ``heapq``. + (like ``csv`` and ``cPickle``) are written in C in CPython, but written + natively in pure Python in PyPy. 
Sometimes the JIT is able to do a + good job on them, and sometimes not. In most cases (like ``csv`` and + ``cPickle``), we're slower than cPython, with the notable exception of + ``json`` and ``heapq``. We generally consider things that are slower on PyPy than CPython to be bugs of PyPy. If you find some issue that is not documented here, From noreply at buildbot.pypy.org Sun Feb 12 23:34:04 2012 From: noreply at buildbot.pypy.org (fijal) Date: Sun, 12 Feb 2012 23:34:04 +0100 (CET) Subject: [pypy-commit] pypy numpy-record-dtypes: export typeinfo Message-ID: <20120212223404.B4FAF8203C@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-record-dtypes Changeset: r52399:040a65b61977 Date: 2012-02-13 00:33 +0200 http://bitbucket.org/pypy/pypy/changeset/040a65b61977/ Log: export typeinfo diff --git a/pypy/module/micronumpy/__init__.py b/pypy/module/micronumpy/__init__.py --- a/pypy/module/micronumpy/__init__.py +++ b/pypy/module/micronumpy/__init__.py @@ -37,6 +37,8 @@ 'True_': 'types.Bool.True', 'False_': 'types.Bool.False', + 'typeinfo': 'interp_dtype.get_dtype_cache(space).w_typeinfo', + 'generic': 'interp_boxes.W_GenericBox', 'number': 'interp_boxes.W_NumberBox', 'integer': 'interp_boxes.W_IntegerBox', diff --git a/pypy/module/micronumpy/interp_dtype.py b/pypy/module/micronumpy/interp_dtype.py --- a/pypy/module/micronumpy/interp_dtype.py +++ b/pypy/module/micronumpy/interp_dtype.py @@ -7,7 +7,7 @@ interp_attrproperty, interp_attrproperty_w) from pypy.module.micronumpy import types, interp_boxes from pypy.rlib.objectmodel import specialize -from pypy.rlib.rarithmetic import LONG_BIT +from pypy.rlib.rarithmetic import LONG_BIT, r_longlong, r_ulonglong from pypy.rpython.lltypesystem import lltype @@ -116,6 +116,9 @@ return (self.kind == SIGNEDLTR or self.kind == UNSIGNEDLTR or self.kind == BOOLLTR) + def is_signed(self): + return self.kind == SIGNEDLTR + def is_bool_type(self): return self.kind == BOOLLTR @@ -425,5 +428,73 @@ self.dtypes_by_name[alias] = dtype self.dtypes_by_name[dtype.char] = dtype + typeinfo_full = { + 'LONGLONG': self.w_int64dtype, + 'SHORT': self.w_int16dtype, + 'VOID': self.w_voiddtype, + #'LONGDOUBLE':, + 'UBYTE': self.w_uint8dtype, + 'UINTP': self.w_ulongdtype, + 'ULONG': self.w_ulongdtype, + 'LONG': self.w_longdtype, + 'UNICODE': self.w_unicodedtype, + #'OBJECT', + 'ULONGLONG': self.w_ulonglongdtype, + 'STRING': self.w_stringdtype, + #'CDOUBLE', + #'DATETIME', + 'UINT': self.w_uint32dtype, + 'INTP': self.w_longdtype, + #'HALF', + 'BYTE': self.w_int8dtype, + #'CFLOAT': , + #'TIMEDELTA', + 'INT': self.w_int32dtype, + 'DOUBLE': self.w_float64dtype, + 'USHORT': self.w_uint16dtype, + 'FLOAT': self.w_float32dtype, + 'BOOL': self.w_booldtype, + #, 'CLONGDOUBLE'] + } + typeinfo_partial = { + 'Generic': interp_boxes.W_GenericBox, + 'Character': interp_boxes.W_CharacterBox, + 'Flexible': interp_boxes.W_FlexibleBox, + 'Inexact': interp_boxes.W_InexactBox, + 'Integer': interp_boxes.W_IntegerBox, + 'SignedInteger': interp_boxes.W_SignedIntegerBox, + 'UnsignedInteger': interp_boxes.W_UnsignedIntegerBox, + #'ComplexFloating', + 'Number': interp_boxes.W_NumberBox, + 'Floating': interp_boxes.W_FloatingBox + } + w_typeinfo = space.newdict() + for k, v in typeinfo_partial.iteritems(): + space.setitem(w_typeinfo, space.wrap(k), space.gettypefor(v)) + for k, dtype in typeinfo_full.iteritems(): + itemsize = dtype.itemtype.get_element_size() + items_w = [space.wrap(dtype.char), + space.wrap(dtype.num), + space.wrap(itemsize * 8), # in case of changing + # number of bits 
per byte in the future + space.wrap(itemsize or 1)] + if dtype.is_int_type(): + if dtype.kind == BOOLLTR: + w_maxobj = space.wrap(1) + w_minobj = space.wrap(0) + elif dtype.is_signed(): + w_maxobj = space.wrap(r_longlong((1 << (itemsize*8 - 1)) + - 1)) + w_minobj = space.wrap(r_longlong(-1) << (itemsize*8 - 1)) + else: + w_maxobj = space.wrap(r_ulonglong(1 << (itemsize*8)) - 1) + w_minobj = space.wrap(0) + items_w = items_w + [w_maxobj, w_minobj] + items_w = items_w + [dtype.w_box_type] + + w_tuple = space.newtuple(items_w) + space.setitem(w_typeinfo, space.wrap(k), w_tuple) + self.w_typeinfo = w_typeinfo + def get_dtype_cache(space): return space.fromcache(DtypeCache) diff --git a/pypy/module/micronumpy/test/test_dtypes.py b/pypy/module/micronumpy/test/test_dtypes.py --- a/pypy/module/micronumpy/test/test_dtypes.py +++ b/pypy/module/micronumpy/test/test_dtypes.py @@ -457,6 +457,13 @@ from _numpypy import dtype assert dtype('i4').alignment == 4 + def test_typeinfo(self): + from _numpypy import typeinfo, void, number, int64, bool_ + assert typeinfo['Number'] == number + assert typeinfo['LONGLONG'] == ('q', 9, 64, 8, 9223372036854775807L, -9223372036854775808L, int64) + assert typeinfo['VOID'] == ('V', 20, 0, 1, void) + assert typeinfo['BOOL'] == ('?', 0, 8, 1, 1, 0, bool_) + class AppTestStrUnicodeDtypes(BaseNumpyAppTest): def test_str_unicode(self): from _numpypy import str_, unicode_, character, flexible, generic @@ -511,3 +518,4 @@ from _numpypy import dtype d = dtype({'names': ['a', 'b', 'c'], }) + From noreply at buildbot.pypy.org Mon Feb 13 00:52:42 2012 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Mon, 13 Feb 2012 00:52:42 +0100 (CET) Subject: [pypy-commit] pypy numpy-record-type-pure-python: Start working towards converting dtype constructor to use more of the pure python stuff, incomplete and not fully working. Message-ID: <20120212235242.84B2A8203C@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: numpy-record-type-pure-python Changeset: r52400:8dbc11e0657e Date: 2012-02-12 18:52 -0500 http://bitbucket.org/pypy/pypy/changeset/8dbc11e0657e/ Log: Start working towards converting dtype constructor to use more of the pure python stuff, incomplete and not fully working. 
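For reference, the minimum/maximum entries exported by the typeinfo changeset above follow the usual two's-complement bounds for an item size given in bytes; a plain-Python sketch of the same arithmetic (not part of the changeset, bool is special-cased to 0/1 in the real code):

    def int_bounds(itemsize, signed):
        bits = itemsize * 8
        if signed:
            return -(1 << (bits - 1)), (1 << (bits - 1)) - 1
        return 0, (1 << bits) - 1

    assert int_bounds(8, True) == (-9223372036854775808, 9223372036854775807)  # LONGLONG
    assert int_bounds(1, False) == (0, 255)                                     # UBYTE
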
diff --git a/lib_pypy/numpypy/core/__init__.py b/lib_pypy/numpypy/core/__init__.py --- a/lib_pypy/numpypy/core/__init__.py +++ b/lib_pypy/numpypy/core/__init__.py @@ -1,2 +1,9 @@ from .fromnumeric import * from .numeric import * + +import _numpypy +from .numerictypes import sctypeDict +_numpypy.set_typeDict(sctypeDict) + +del _numpypy +del sctypeDict diff --git a/pypy/module/micronumpy/__init__.py b/pypy/module/micronumpy/__init__.py --- a/pypy/module/micronumpy/__init__.py +++ b/pypy/module/micronumpy/__init__.py @@ -31,7 +31,8 @@ 'concatenate': 'interp_numarray.concatenate', 'set_string_function': 'appbridge.set_string_function', - + 'set_typeDict': 'interp_dtype.set_typeDict', + 'count_reduce_items': 'interp_numarray.count_reduce_items', 'True_': 'types.Bool.True', diff --git a/pypy/module/micronumpy/interp_dtype.py b/pypy/module/micronumpy/interp_dtype.py --- a/pypy/module/micronumpy/interp_dtype.py +++ b/pypy/module/micronumpy/interp_dtype.py @@ -1,7 +1,8 @@ +import sys -import sys from pypy.interpreter.baseobjspace import Wrappable -from pypy.interpreter.error import OperationError +from pypy.interpreter.buffer import Buffer +from pypy.interpreter.error import OperationError, operationerrfmt from pypy.interpreter.gateway import interp2app, unwrap_spec from pypy.interpreter.typedef import (TypeDef, GetSetProperty, interp_attrproperty, interp_attrproperty_w) @@ -24,16 +25,16 @@ class W_Dtype(Wrappable): _immutable_fields_ = ["itemtype", "num", "kind"] - def __init__(self, itemtype, num, kind, name, char, w_box_type, alternate_constructors=[], aliases=[], - fields=None, fieldnames=None): + def __init__(self, itemtype, num, kind, name, char, w_box_type, + byteorder="=", builtin_type=None, fields=None, fieldnames=None): self.itemtype = itemtype self.num = num self.kind = kind self.name = name self.char = char self.w_box_type = w_box_type - self.alternate_constructors = alternate_constructors - self.aliases = aliases + self.byteorder = byteorder + self.builtin_type = builtin_type self.fields = fields self.fieldnames = fieldnames @@ -103,6 +104,12 @@ return space.w_None return space.newtuple([space.wrap(name) for name in self.fieldnames]) + def descr_get_str(self, space): + byteorder = self.byteorder + if self.byteorder == "=": + byteorder = "<" + return space.wrap("%s%s%d" % (byteorder, self.kind, self.itemtype.get_element_size())) + @unwrap_spec(item=str) def descr_getitem(self, space, item): if self.fields is None: @@ -122,93 +129,136 @@ def is_bool_type(self): return self.kind == BOOLLTR -def dtype_from_list(space, w_lst): - lst_w = space.listview(w_lst) - fields = {} - offset = 0 - ofs_and_items = [] - fieldnames = [] - for w_elem in lst_w: - w_fldname, w_flddesc = space.fixedview(w_elem, 2) - subdtype = descr__new__(space, space.gettypefor(W_Dtype), w_flddesc) - fldname = space.str_w(w_fldname) - if fldname in fields: - raise OperationError(space.w_ValueError, space.wrap("two fields with the same name")) - fields[fldname] = (offset, subdtype) - ofs_and_items.append((offset, subdtype.itemtype)) - offset += subdtype.itemtype.get_element_size() - fieldnames.append(fldname) - itemtype = types.RecordType(ofs_and_items) - return W_Dtype(itemtype, 20, VOIDLTR, "void" + str(8 * itemtype.get_element_size()), - "V", space.gettypefor(interp_boxes.W_VoidBox), fields=fields, - fieldnames=fieldnames) -def dtype_from_dict(space, w_dict): - xxx +def invalid_dtype(space, w_obj): + if space.isinstance_w(w_obj, space.w_str): + raise operationerrfmt(space.w_TypeError, + 'data type "%s" not understood', 
space.str_w(w_obj) + ) + else: + raise OperationError(space.w_TypeError, space.wrap("data type not understood")) -def variable_dtype(space, name): - if name[0] in '<>': - # ignore byte order, not sure if it's worth it for unicode only - if name[0] != byteorder_prefix and name[1] == 'U': - xxx - name = name[1:] - char = name[0] - if len(name) == 1: - size = 0 - else: +def is_byteorder(ch): + return ch == ">" or ch == "<" or ch == "|" or ch == "=" + +def is_commastring(typestr): + # Number at the start of the string. + if ((typestr[0] >= "0" and typestr[0] <= "9") or + (len(typestr) > 1 and is_byteorder(typestr[0]) and + (typestr[1] >= "0" and typestr[1] <= "9"))): + return True + + # Starts with an empty tuple. + if ((len(typestr) > 1 and typestr[0] == "(" and typestr[1] == ")") or + (len(typestr) > 3 and is_byteorder(typestr[0]) and + (typestr[1] == "(" and typestr[2] == ")"))): + return True + + # Commas outside of [] + sqbracket = 0 + for i in xrange(1, len(typestr)): + ch = typestr[i] + if ch == ",": + if not sqbracket: + return True + elif ch == "[": + sqbracket += 1 + elif ch == "]": + sqbracket -= 1 + return False + +def dtype_from_object(space, w_obj): + cache = get_dtype_cache(space) + + if space.is_w(w_obj, space.w_None): + return cache.w_float64dtype + + if space.isinstance_w(w_obj, space.gettypefor(W_Dtype)): + return w_obj + + if space.isinstance_w(w_obj, space.w_type): + for dtype in cache.builtin_dtypes: + if (space.is_w(w_obj, dtype.w_box_type) or + dtype.builtin_type is not None and space.is_w(w_obj, dtype.builtin_type)): + return dtype + raise invalid_dtype(space, w_obj) + + if (space.isinstance_w(w_obj, space.w_str) or + space.isinstance_w(w_obj, space.w_unicode)): + + typestr = space.str_w(w_obj) + + if not typestr: + raise invalid_dtype(space, w_obj) + + if is_commastring(typestr): + return dtype_from_commastring(space, typestr) + + if is_byteorder(typestr[0]): + endian = typestr[0] + if endian == "|": + endian = "=" + typestr = typestr[1:] + + if not typestr: + raise invalid_dtype(space, w_obj) + + if len(typestr) == 1: + try: + return cache.dtypes_by_name[typestr] + except KeyError: + raise invalid_dtype(space, w_obj) + else: + # Something like f8 + try: + elsize = int(typestr[1:]) + except ValueError: + pass + else: + kind = typestr[0] + if kind == STRINGLTR: + w_base_dtype = cache.w_stringdtype + elif kind == UNICODELTR: + w_base_dtype = cache.w_unicodedtype + elif kind == VOIDLTR: + w_base_dtype = cache.w_voiddtype + else: + for dtype in cache.builtin_dtypes: + if (dtype.kind == kind and + dtype.itemtype.get_element_size() == elsize): + return dtype + raise invalid_dtype(space, w_obj) + + if space.isinstance_w(w_obj, space.w_tuple): + return dtype_from_tuple(space, space.listview(w_obj)) + + if space.isinstance_w(w_obj, space.w_list): + return dtype_from_list(space, space.listview(w_obj)) + + if space.isinstance_w(w_obj, space.w_dict): + return dtype_from_dict(space, w_obj) + + w_type_dict = cache.w_type_dict + w_result = None + if w_type_dict is not None: try: - size = int(name[1:]) - except ValueError: - raise OperationError(space.w_TypeError, space.wrap("data type not understood")) - if char == 'S': - itemtype = types.StringType(size) - basename = 'string' - num = 18 - w_box_type = space.gettypefor(interp_boxes.W_StringBox) - elif char == 'V': - num = 20 - basename = 'void' - w_box_type = space.gettypefor(interp_boxes.W_VoidBox) - xxx - else: - assert char == 'U' - basename = 'unicode' - itemtype = types.UnicodeType(size) - num = 19 - w_box_type = 
space.gettypefor(interp_boxes.W_UnicodeBox) - return W_Dtype(itemtype, num, char, - basename + str(8 * itemtype.get_element_size()), - char, w_box_type) + w_result = space.getitem(w_type_dict, w_obj) + except OperationError, e: + if not e.match(space, space.w_KeyError): + raise + if space.isinstance_w(w_obj, space.w_str): + w_key = space.call_method(w_obj, "decode", space.wrap("ascii")) + w_result = space.getitem(w_type_dict, w_key) + if w_result is not None: + return dtype_from_object(space, w_result) + + raise invalid_dtype(space, w_obj) def descr__new__(space, w_subtype, w_dtype): - cache = get_dtype_cache(space) + w_dtype = dtype_from_object(space, w_dtype) + return w_dtype - if space.is_w(w_dtype, space.w_None): - return cache.w_float64dtype - elif space.isinstance_w(w_dtype, w_subtype): - return w_dtype - elif space.isinstance_w(w_dtype, space.w_str): - name = space.str_w(w_dtype) - if ',' in name: - return dtype_from_spec(space, name) - try: - return cache.dtypes_by_name[name] - except KeyError: - pass - if name[0] in 'VSU' or name[0] in '<>' and name[1] in 'VSU': - return variable_dtype(space, name) - elif space.isinstance_w(w_dtype, space.w_list): - return dtype_from_list(space, w_dtype) - elif space.isinstance_w(w_dtype, space.w_dict): - return dtype_from_dict(space, w_dtype) - else: - for dtype in cache.builtin_dtypes: - if w_dtype in dtype.alternate_constructors: - return dtype - if w_dtype is dtype.w_box_type: - return dtype - raise OperationError(space.w_TypeError, space.wrap("data type not understood")) W_Dtype.typedef = TypeDef("dtype", __module__ = "numpypy", @@ -230,6 +280,7 @@ name = interp_attrproperty('name', cls=W_Dtype), fields = GetSetProperty(W_Dtype.descr_get_fields), names = GetSetProperty(W_Dtype.descr_get_names), + str = GetSetProperty(W_Dtype.descr_get_str), ) W_Dtype.typedef.acceptable_as_base_class = False @@ -240,7 +291,14 @@ byteorder_prefix = '>' nonnative_byteorder_prefix = '<' + +def set_typeDict(space, w_type_dict): + cache = get_dtype_cache(space) + cache.w_type_dict = w_type_dict + class DtypeCache(object): + w_type_dict = None + def __init__(self, space): self.w_booldtype = W_Dtype( types.Bool(), @@ -249,7 +307,7 @@ name="bool", char="?", w_box_type=space.gettypefor(interp_boxes.W_BoolBox), - alternate_constructors=[space.w_bool], + builtin_type=space.w_bool, ) self.w_int8dtype = W_Dtype( types.Int8(), @@ -310,7 +368,7 @@ name=name, char="l", w_box_type=space.gettypefor(interp_boxes.W_LongBox), - alternate_constructors=[space.w_int], + builtin_type=space.w_int, ) self.w_ulongdtype = W_Dtype( types.ULong(), @@ -351,8 +409,7 @@ name="float64", char="d", w_box_type = space.gettypefor(interp_boxes.W_Float64Box), - alternate_constructors=[space.w_float], - aliases=["float"], + builtin_type=space.w_float, ) self.w_longlongdtype = W_Dtype( types.Int64(), @@ -361,7 +418,7 @@ name='int64', char='q', w_box_type = space.gettypefor(interp_boxes.W_LongLongBox), - alternate_constructors=[space.w_long], + builtin_type=space.w_long, ) self.w_ulonglongdtype = W_Dtype( types.UInt64(), @@ -378,7 +435,7 @@ name='string', char='S', w_box_type = space.gettypefor(interp_boxes.W_StringBox), - alternate_constructors=[space.w_str], + builtin_type=space.w_str, ) self.w_unicodedtype = W_Dtype( types.UnicodeType(0), @@ -387,7 +444,7 @@ name='unicode', char='U', w_box_type = space.gettypefor(interp_boxes.W_UnicodeBox), - alternate_constructors=[space.w_unicode], + builtin_type=space.w_unicode, ) self.w_voiddtype = W_Dtype( types.VoidType(0), @@ -396,8 +453,7 @@ name='void', 
char='V', w_box_type = space.gettypefor(interp_boxes.W_VoidBox), - #alternate_constructors=[space.w_buffer], - # XXX no buffer in space + builtin_type=space.gettypefor(Buffer), ) self.builtin_dtypes = [ self.w_booldtype, self.w_int8dtype, self.w_uint8dtype, @@ -414,19 +470,8 @@ ) self.dtypes_by_name = {} for dtype in self.builtin_dtypes: - self.dtypes_by_name[dtype.name] = dtype - can_name = dtype.kind + str(dtype.itemtype.get_element_size()) - self.dtypes_by_name[can_name] = dtype - self.dtypes_by_name[byteorder_prefix + can_name] = dtype - new_name = nonnative_byteorder_prefix + can_name - itemtypename = dtype.itemtype.__class__.__name__ - itemtype = getattr(types, 'NonNative' + itemtypename)() - self.dtypes_by_name[new_name] = W_Dtype( - itemtype, - dtype.num, dtype.kind, new_name, dtype.char, dtype.w_box_type) - for alias in dtype.aliases: - self.dtypes_by_name[alias] = dtype self.dtypes_by_name[dtype.char] = dtype + self.dtypes_by_name["p"] = self.w_longdtype typeinfo_full = { 'LONGLONG': self.w_int64dtype, @@ -491,7 +536,7 @@ w_minobj = space.wrap(0) items_w = items_w + [w_maxobj, w_minobj] items_w = items_w + [dtype.w_box_type] - + w_tuple = space.newtuple(items_w) space.setitem(w_typeinfo, space.wrap(k), w_tuple) self.w_typeinfo = w_typeinfo diff --git a/pypy/module/micronumpy/test/test_dtypes.py b/pypy/module/micronumpy/test/test_dtypes.py --- a/pypy/module/micronumpy/test/test_dtypes.py +++ b/pypy/module/micronumpy/test/test_dtypes.py @@ -1,10 +1,11 @@ +from pypy.interpreter.gateway import interp2app +from pypy.module.micronumpy.interp_dtype import nonnative_byteorder_prefix from pypy.module.micronumpy.test.test_base import BaseNumpyAppTest -from pypy.module.micronumpy.interp_dtype import nonnative_byteorder_prefix -from pypy.interpreter.gateway import interp2app + class AppTestDtypes(BaseNumpyAppTest): def test_dtype(self): - from _numpypy import dtype + from numpypy import dtype d = dtype('?') assert d.num == 0 @@ -17,6 +18,9 @@ assert dtype(int).names is None raises(TypeError, dtype, 1042) raises(KeyError, 'dtype(int)["asdasd"]') + assert dtype("i4").str == "i4").str == ">i4" def test_dtype_eq(self): from _numpypy import dtype @@ -199,7 +203,7 @@ assert w_obj2.storage[1] == '\x01' assert w_obj2.storage[0] == '\x00' cls.w_check_non_native = cls.space.wrap(interp2app(check_non_native)) - + def test_abstract_types(self): import _numpypy as numpy raises(TypeError, numpy.generic, 0) @@ -424,7 +428,7 @@ def test_various_types(self): import _numpypy as numpy import sys - + assert numpy.int16 is numpy.short assert numpy.int8 is numpy.byte assert numpy.bool_ is numpy.bool8 @@ -435,7 +439,7 @@ def test_mro(self): import _numpypy as numpy - + assert numpy.int16.__mro__ == (numpy.int16, numpy.signedinteger, numpy.integer, numpy.number, numpy.generic, object) @@ -467,7 +471,7 @@ class AppTestStrUnicodeDtypes(BaseNumpyAppTest): def test_str_unicode(self): from _numpypy import str_, unicode_, character, flexible, generic - + assert str_.mro() == [str_, str, basestring, character, flexible, generic, object] assert unicode_.mro() == [unicode_, unicode, basestring, character, flexible, generic, object] @@ -518,4 +522,4 @@ from _numpypy import dtype d = dtype({'names': ['a', 'b', 'c'], }) - + From noreply at buildbot.pypy.org Mon Feb 13 00:52:44 2012 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Mon, 13 Feb 2012 00:52:44 +0100 (CET) Subject: [pypy-commit] pypy numpy-record-type-pure-python: forgotten file Message-ID: <20120212235244.141F58203C@wyvern.cs.uni-duesseldorf.de> Author: Alex 
Gaynor Branch: numpy-record-type-pure-python Changeset: r52401:2702f00a8382 Date: 2012-02-12 18:52 -0500 http://bitbucket.org/pypy/pypy/changeset/2702f00a8382/ Log: forgotten file diff --git a/lib_pypy/numpypy/core/numerictypes.py b/lib_pypy/numpypy/core/numerictypes.py new file mode 100644 --- /dev/null +++ b/lib_pypy/numpypy/core/numerictypes.py @@ -0,0 +1,1034 @@ +""" +numerictypes: Define the numeric type objects + +This module is designed so "from numerictypes import \\*" is safe. +Exported symbols include: + + Dictionary with all registered number types (including aliases): + typeDict + + Type objects (not all will be available, depends on platform): + see variable sctypes for which ones you have + + Bit-width names + + int8 int16 int32 int64 int128 + uint8 uint16 uint32 uint64 uint128 + float16 float32 float64 float96 float128 float256 + complex32 complex64 complex128 complex192 complex256 complex512 + datetime64 timedelta64 + + c-based names + + bool_ + + object_ + + void, str_, unicode_ + + byte, ubyte, + short, ushort + intc, uintc, + intp, uintp, + int_, uint, + longlong, ulonglong, + + single, csingle, + float_, complex_, + longfloat, clongfloat, + + As part of the type-hierarchy: xx -- is bit-width + + generic + +-> bool_ (kind=b) + +-> number (kind=i) + | integer + | signedinteger (intxx) + | byte + | short + | intc + | intp int0 + | int_ + | longlong + +-> unsignedinteger (uintxx) (kind=u) + | ubyte + | ushort + | uintc + | uintp uint0 + | uint_ + | ulonglong + +-> inexact + | +-> floating (floatxx) (kind=f) + | | half + | | single + | | float_ (double) + | | longfloat + | \\-> complexfloating (complexxx) (kind=c) + | csingle (singlecomplex) + | complex_ (cfloat, cdouble) + | clongfloat (longcomplex) + +-> flexible + | character + | void (kind=V) + | + | str_ (string_, bytes_) (kind=S) [Python 2] + | unicode_ (kind=U) [Python 2] + | + | bytes_ (string_) (kind=S) [Python 3] + | str_ (unicode_) (kind=U) [Python 3] + | + \\-> object_ (not used much) (kind=O) + +""" + +# we add more at the bottom +__all__ = ['sctypeDict', 'sctypeNA', 'typeDict', 'typeNA', 'sctypes', + 'ScalarType', 'obj2sctype', 'cast', 'nbytes', 'sctype2char', + 'maximum_sctype', 'issctype', 'typecodes', 'find_common_type', + 'issubdtype', 'datetime_data','datetime_as_string', + 'busday_offset', 'busday_count', 'is_busday', 'busdaycalendar', + 'NA', 'NAType'] + +from numpypy import ndarray, array, empty, dtype, typeinfo + +import types as _types +import sys + +# we don't export these for import *, but we do want them accessible +# as numerictypes.bool, etc. +from __builtin__ import bool, int, long, float, complex, object, unicode, str, str as bytes + +if sys.version_info[0] >= 3: + # Py3K + class long(int): + # Placeholder class -- this will not escape outside numerictypes.py + pass + +# String-handling utilities to avoid locale-dependence. + +# "import string" is costly to import! 
+# Construct the translation tables directly +# "A" = chr(65), "a" = chr(97) +_all_chars = map(chr, range(256)) +_ascii_upper = _all_chars[65:65+26] +_ascii_lower = _all_chars[97:97+26] +LOWER_TABLE="".join(_all_chars[:65] + _ascii_lower + _all_chars[65+26:]) +UPPER_TABLE="".join(_all_chars[:97] + _ascii_upper + _all_chars[97+26:]) + +#import string +# assert (string.maketrans(string.ascii_uppercase, string.ascii_lowercase) == \ +# LOWER_TABLE) +# assert (string.maketrnas(string_ascii_lowercase, string.ascii_uppercase) == \ +# UPPER_TABLE) +#LOWER_TABLE = string.maketrans(string.ascii_uppercase, string.ascii_lowercase) +#UPPER_TABLE = string.maketrans(string.ascii_lowercase, string.ascii_uppercase) + +def english_lower(s): + """ Apply English case rules to convert ASCII strings to all lower case. + + This is an internal utility function to replace calls to str.lower() such + that we can avoid changing behavior with changing locales. In particular, + Turkish has distinct dotted and dotless variants of the Latin letter "I" in + both lowercase and uppercase. Thus, "I".lower() != "i" in a "tr" locale. + + Parameters + ---------- + s : str + + Returns + ------- + lowered : str + + Examples + -------- + >>> from numpy.core.numerictypes import english_lower + >>> english_lower('ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789_') + 'abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz0123456789_' + >>> english_lower('') + '' + """ + lowered = s.translate(LOWER_TABLE) + return lowered + +def english_upper(s): + """ Apply English case rules to convert ASCII strings to all upper case. + + This is an internal utility function to replace calls to str.upper() such + that we can avoid changing behavior with changing locales. In particular, + Turkish has distinct dotted and dotless variants of the Latin letter "I" in + both lowercase and uppercase. Thus, "i".upper() != "I" in a "tr" locale. + + Parameters + ---------- + s : str + + Returns + ------- + uppered : str + + Examples + -------- + >>> from numpy.core.numerictypes import english_upper + >>> english_upper('ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789_') + 'ABCDEFGHIJKLMNOPQRSTUVWXYZABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789_' + >>> english_upper('') + '' + """ + uppered = s.translate(UPPER_TABLE) + return uppered + +def english_capitalize(s): + """ Apply English case rules to convert the first character of an ASCII + string to upper case. + + This is an internal utility function to replace calls to str.capitalize() + such that we can avoid changing behavior with changing locales. 
+ + Parameters + ---------- + s : str + + Returns + ------- + capitalized : str + + Examples + -------- + >>> from numpy.core.numerictypes import english_capitalize + >>> english_capitalize('int8') + 'Int8' + >>> english_capitalize('Int8') + 'Int8' + >>> english_capitalize('') + '' + """ + if s: + return english_upper(s[0]) + s[1:] + else: + return s + + +sctypeDict = {} # Contains all leaf-node scalar types with aliases +sctypeNA = {} # Contails all leaf-node types -> numarray type equivalences +allTypes = {} # Collect the types we will add to the module here + +def _evalname(name): + k = 0 + for ch in name: + if ch in '0123456789': + break + k += 1 + try: + bits = int(name[k:]) + except ValueError: + bits = 0 + base = name[:k] + return base, bits + +def bitname(obj): + """Return a bit-width name for a given type object""" + name = obj.__name__ + base = '' + char = '' + try: + if name[-1] == '_': + newname = name[:-1] + else: + newname = name + info = typeinfo[english_upper(newname)] + assert(info[-1] == obj) # sanity check + bits = info[2] + + except KeyError: # bit-width name + base, bits = _evalname(name) + char = base[0] + + if name == 'bool_': + char = 'b' + base = 'bool' + elif name=='void': + char = 'V' + base = 'void' + elif name=='object_': + char = 'O' + base = 'object' + bits = 0 + elif name=='datetime64': + char = 'M' + elif name=='timedelta64': + char = 'm' + + if sys.version_info[0] >= 3: + if name=='bytes_': + char = 'S' + base = 'bytes' + elif name=='str_': + char = 'U' + base = 'str' + else: + if name=='string_': + char = 'S' + base = 'string' + elif name=='unicode_': + char = 'U' + base = 'unicode' + + bytes = bits // 8 + + if char != '' and bytes != 0: + char = "%s%d" % (char, bytes) + + return base, bits, char + + +def _add_types(): + for a in typeinfo.keys(): + name = english_lower(a) + if isinstance(typeinfo[a], tuple): + typeobj = typeinfo[a][-1] + + # define C-name and insert typenum and typechar references also + allTypes[name] = typeobj + sctypeDict[name] = typeobj + sctypeDict[typeinfo[a][0]] = typeobj + sctypeDict[typeinfo[a][1]] = typeobj + + else: # generic class + allTypes[name] = typeinfo[a] +_add_types() + +def _add_aliases(): + for a in typeinfo.keys(): + name = english_lower(a) + if not isinstance(typeinfo[a], tuple): + continue + typeobj = typeinfo[a][-1] + # insert bit-width version for this class (if relevant) + base, bit, char = bitname(typeobj) + if base[-3:] == 'int' or char[0] in 'ui': continue + if base != '': + myname = "%s%d" % (base, bit) + if (name != 'longdouble' and name != 'clongdouble') or \ + myname not in allTypes.keys(): + allTypes[myname] = typeobj + sctypeDict[myname] = typeobj + if base == 'complex': + na_name = '%s%d' % (english_capitalize(base), bit//2) + elif base == 'bool': + na_name = english_capitalize(base) + sctypeDict[na_name] = typeobj + else: + na_name = "%s%d" % (english_capitalize(base), bit) + sctypeDict[na_name] = typeobj + sctypeNA[na_name] = typeobj + sctypeDict[na_name] = typeobj + sctypeNA[typeobj] = na_name + sctypeNA[typeinfo[a][0]] = na_name + if char != '': + sctypeDict[char] = typeobj + sctypeNA[char] = na_name +_add_aliases() + +# Integers handled so that +# The int32, int64 types should agree exactly with +# PyArray_INT32, PyArray_INT64 in C +# We need to enforce the same checking as is done +# in arrayobject.h where the order of getting a +# bit-width match is: +# long, longlong, int, short, char +# for int8, int16, int32, int64, int128 + +def _add_integer_aliases(): + _ctypes = ['LONG', 'LONGLONG', 'INT', 
'SHORT', 'BYTE'] + for ctype in _ctypes: + val = typeinfo[ctype] + bits = val[2] + charname = 'i%d' % (bits//8,) + ucharname = 'u%d' % (bits//8,) + intname = 'int%d' % bits + UIntname = 'UInt%d' % bits + Intname = 'Int%d' % bits + uval = typeinfo['U'+ctype] + typeobj = val[-1] + utypeobj = uval[-1] + if intname not in allTypes.keys(): + uintname = 'uint%d' % bits + allTypes[intname] = typeobj + allTypes[uintname] = utypeobj + sctypeDict[intname] = typeobj + sctypeDict[uintname] = utypeobj + sctypeDict[Intname] = typeobj + sctypeDict[UIntname] = utypeobj + sctypeDict[charname] = typeobj + sctypeDict[ucharname] = utypeobj + sctypeNA[Intname] = typeobj + sctypeNA[UIntname] = utypeobj + sctypeNA[charname] = typeobj + sctypeNA[ucharname] = utypeobj + sctypeNA[typeobj] = Intname + sctypeNA[utypeobj] = UIntname + sctypeNA[val[0]] = Intname + sctypeNA[uval[0]] = UIntname +_add_integer_aliases() + +# We use these later +void = allTypes['void'] +generic = allTypes['generic'] + +# +# Rework the Python names (so that float and complex and int are consistent +# with Python usage) +# +def _set_up_aliases(): + type_pairs = [#('complex_', 'cdouble'), + ('int0', 'intp'), + ('uint0', 'uintp'), + ('single', 'float'), + #('csingle', 'cfloat'), + #('singlecomplex', 'cfloat'), + ('float_', 'double'), + ('intc', 'int'), + ('uintc', 'uint'), + ('int_', 'long'), + ('uint', 'ulong'), + #('cfloat', 'cdouble'), + #('longfloat', 'longdouble'), + #('clongfloat', 'clongdouble'), + #('longcomplex', 'clongdouble'), + ('bool_', 'bool'), + ('unicode_', 'unicode'),] + #('object_', 'object')] + if sys.version_info[0] >= 3: + type_pairs.extend([('bytes_', 'string'), + ('str_', 'unicode'), + ('string_', 'string')]) + else: + type_pairs.extend([('str_', 'string'), + ('string_', 'string'), + ('bytes_', 'string')]) + for alias, t in type_pairs: + allTypes[alias] = allTypes[t] + sctypeDict[alias] = sctypeDict[t] + # Remove aliases overriding python types and modules + to_remove = ['ulong', 'object', 'unicode', 'int', 'long', 'float', + 'complex', 'bool', 'string', 'datetime', 'timedelta'] + if sys.version_info[0] >= 3: + # Py3K + to_remove.append('bytes') + to_remove.append('str') + to_remove.remove('unicode') + to_remove.remove('long') + for t in to_remove: + try: + del allTypes[t] + del sctypeDict[t] + except KeyError: + pass +_set_up_aliases() + +# Now, construct dictionary to lookup character codes from types +_sctype2char_dict = {} +def _construct_char_code_lookup(): + for name in typeinfo.keys(): + tup = typeinfo[name] + if isinstance(tup, tuple): + if tup[0] not in ['p','P']: + _sctype2char_dict[tup[-1]] = tup[0] +_construct_char_code_lookup() + + +sctypes = {'int': [], + 'uint':[], + 'float':[], + 'complex':[], + 'others':[bool,object,str,unicode,void]} + +def _add_array_type(typename, bits): + try: + t = allTypes['%s%d' % (typename, bits)] + except KeyError: + pass + else: + sctypes[typename].append(t) + +def _set_array_types(): + ibytes = [1, 2, 4, 8, 16, 32, 64] + fbytes = [2, 4, 8, 10, 12, 16, 32, 64] + for bytes in ibytes: + bits = 8*bytes + _add_array_type('int', bits) + _add_array_type('uint', bits) + for bytes in fbytes: + bits = 8*bytes + _add_array_type('float', bits) + _add_array_type('complex', 2*bits) + _gi = dtype('p') + if _gi.type not in sctypes['int']: + indx = 0 + sz = _gi.itemsize + _lst = sctypes['int'] + while (indx < len(_lst) and sz >= _lst[indx](0).itemsize): + indx += 1 + sctypes['int'].insert(indx, _gi.type) + sctypes['uint'].insert(indx, dtype('P').type) +_set_array_types() + + +genericTypeRank = 
['bool', 'int8', 'uint8', 'int16', 'uint16', + 'int32', 'uint32', 'int64', 'uint64', 'int128', + 'uint128', 'float16', + 'float32', 'float64', 'float80', 'float96', 'float128', + 'float256', + 'complex32', 'complex64', 'complex128', 'complex160', + 'complex192', 'complex256', 'complex512', 'object'] + +def maximum_sctype(t): + """ + Return the scalar type of highest precision of the same kind as the input. + + Parameters + ---------- + t : dtype or dtype specifier + The input data type. This can be a `dtype` object or an object that + is convertible to a `dtype`. + + Returns + ------- + out : dtype + The highest precision data type of the same kind (`dtype.kind`) as `t`. + + See Also + -------- + obj2sctype, mintypecode, sctype2char + dtype + + Examples + -------- + >>> np.maximum_sctype(np.int) + + >>> np.maximum_sctype(np.uint8) + + >>> np.maximum_sctype(np.complex) + + + >>> np.maximum_sctype(str) + + + >>> np.maximum_sctype('i2') + + >>> np.maximum_sctype('f4') + + + """ + g = obj2sctype(t) + if g is None: + return t + t = g + name = t.__name__ + base, bits = _evalname(name) + if bits == 0: + return t + else: + return sctypes[base][-1] + +try: + buffer_type = _types.BufferType +except AttributeError: + # Py3K + buffer_type = memoryview + +_python_types = {int : 'int_', + float: 'float_', + complex: 'complex_', + bool: 'bool_', + bytes: 'bytes_', + unicode: 'unicode_', + buffer_type: 'void', + } + +if sys.version_info[0] >= 3: + def _python_type(t): + """returns the type corresponding to a certain Python type""" + if not isinstance(t, type): + t = type(t) + return allTypes[_python_types.get(t, 'object_')] +else: + def _python_type(t): + """returns the type corresponding to a certain Python type""" + if not isinstance(t, _types.TypeType): + t = type(t) + return allTypes[_python_types.get(t, 'object_')] + +def issctype(rep): + """ + Determines whether the given object represents a scalar data-type. + + Parameters + ---------- + rep : any + If `rep` is an instance of a scalar dtype, True is returned. If not, + False is returned. + + Returns + ------- + out : bool + Boolean result of check whether `rep` is a scalar dtype. + + See Also + -------- + issubsctype, issubdtype, obj2sctype, sctype2char + + Examples + -------- + >>> np.issctype(np.int32) + True + >>> np.issctype(list) + False + >>> np.issctype(1.1) + False + + Strings are also a scalar type: + + >>> np.issctype(np.dtype('str')) + True + + """ + if not isinstance(rep, (type, dtype)): + return False + try: + res = obj2sctype(rep) + if res and res != object_: + return True + return False + except: + return False + +def obj2sctype(rep, default=None): + """ + Return the scalar dtype or NumPy equivalent of Python type of an object. + + Parameters + ---------- + rep : any + The object of which the type is returned. + default : any, optional + If given, this is returned for objects whose types can not be + determined. If not given, None is returned for those objects. + + Returns + ------- + dtype : dtype or Python type + The data type of `rep`. 
+ + See Also + -------- + sctype2char, issctype, issubsctype, issubdtype, maximum_sctype + + Examples + -------- + >>> np.obj2sctype(np.int32) + + >>> np.obj2sctype(np.array([1., 2.])) + + >>> np.obj2sctype(np.array([1.j])) + + + >>> np.obj2sctype(dict) + + >>> np.obj2sctype('string') + + + >>> np.obj2sctype(1, default=list) + + + """ + try: + if issubclass(rep, generic): + return rep + except TypeError: + pass + if isinstance(rep, dtype): + return rep.type + if isinstance(rep, type): + return _python_type(rep) + if isinstance(rep, ndarray): + return rep.dtype.type + try: + res = dtype(rep) + except: + return default + return res.type + + +def issubclass_(arg1, arg2): + """ + Determine if a class is a subclass of a second class. + + `issubclass_` is equivalent to the Python built-in ``issubclass``, + except that it returns False instead of raising a TypeError is one + of the arguments is not a class. + + Parameters + ---------- + arg1 : class + Input class. True is returned if `arg1` is a subclass of `arg2`. + arg2 : class or tuple of classes. + Input class. If a tuple of classes, True is returned if `arg1` is a + subclass of any of the tuple elements. + + Returns + ------- + out : bool + Whether `arg1` is a subclass of `arg2` or not. + + See Also + -------- + issubsctype, issubdtype, issctype + + Examples + -------- + >>> np.issubclass_(np.int32, np.int) + True + >>> np.issubclass_(np.int32, np.float) + False + + """ + try: + return issubclass(arg1, arg2) + except TypeError: + return False + +def issubsctype(arg1, arg2): + """ + Determine if the first argument is a subclass of the second argument. + + Parameters + ---------- + arg1, arg2 : dtype or dtype specifier + Data-types. + + Returns + ------- + out : bool + The result. + + See Also + -------- + issctype, issubdtype,obj2sctype + + Examples + -------- + >>> np.issubsctype('S8', str) + True + >>> np.issubsctype(np.array([1]), np.int) + True + >>> np.issubsctype(np.array([1]), np.float) + False + + """ + return issubclass(obj2sctype(arg1), obj2sctype(arg2)) + +def issubdtype(arg1, arg2): + """ + Returns True if first argument is a typecode lower/equal in type hierarchy. + + Parameters + ---------- + arg1, arg2 : dtype_like + dtype or string representing a typecode. + + Returns + ------- + out : bool + + See Also + -------- + issubsctype, issubclass_ + numpy.core.numerictypes : Overview of numpy type hierarchy. + + Examples + -------- + >>> np.issubdtype('S1', str) + True + >>> np.issubdtype(np.float64, np.float32) + False + + """ + if issubclass_(arg2, generic): + return issubclass(dtype(arg1).type, arg2) + mro = dtype(arg2).type.mro() + if len(mro) > 1: + val = mro[1] + else: + val = mro[0] + return issubclass(dtype(arg1).type, val) + + +# This dictionary allows look up based on any alias for an array data-type +class _typedict(dict): + """ + Base object for a dictionary for look-up with any alias for an array dtype. + + Instances of `_typedict` can not be used as dictionaries directly, + first they have to be populated. 
+ + """ + def __getitem__(self, obj): + return dict.__getitem__(self, obj2sctype(obj)) + +nbytes = _typedict() +_alignment = _typedict() +_maxvals = _typedict() +_minvals = _typedict() +def _construct_lookups(): + for name, val in typeinfo.iteritems(): + if not isinstance(val, tuple): + continue + obj = val[-1] + nbytes[obj] = val[2] // 8 + _alignment[obj] = val[3] + if (len(val) > 5): + _maxvals[obj] = val[4] + _minvals[obj] = val[5] + else: + _maxvals[obj] = None + _minvals[obj] = None + +_construct_lookups() + +def sctype2char(sctype): + """ + Return the string representation of a scalar dtype. + + Parameters + ---------- + sctype : scalar dtype or object + If a scalar dtype, the corresponding string character is + returned. If an object, `sctype2char` tries to infer its scalar type + and then return the corresponding string character. + + Returns + ------- + typechar : str + The string character corresponding to the scalar type. + + Raises + ------ + ValueError + If `sctype` is an object for which the type can not be inferred. + + See Also + -------- + obj2sctype, issctype, issubsctype, mintypecode + + Examples + -------- + >>> for sctype in [np.int32, np.float, np.complex, np.string_, np.ndarray]: + ... print np.sctype2char(sctype) + l + d + D + S + O + + >>> x = np.array([1., 2-1.j]) + >>> np.sctype2char(x) + 'D' + >>> np.sctype2char(list) + 'O' + + """ + sctype = obj2sctype(sctype) + if sctype is None: + raise ValueError("unrecognized type") + return _sctype2char_dict[sctype] + +# Create dictionary of casting functions that wrap sequences +# indexed by type or type character + + +cast = _typedict() +try: + ScalarType = [_types.IntType, _types.FloatType, _types.ComplexType, + _types.LongType, _types.BooleanType, + _types.StringType, _types.UnicodeType, _types.BufferType] +except AttributeError: + # Py3K + ScalarType = [int, float, complex, long, bool, bytes, str, memoryview] + +ScalarType.extend(_sctype2char_dict.keys()) +ScalarType = tuple(ScalarType) +for key in _sctype2char_dict.keys(): + cast[key] = lambda x, k=key : array(x, copy=False).astype(k) + +# Create the typestring lookup dictionary +_typestr = _typedict() +for key in _sctype2char_dict.keys(): + if issubclass(key, allTypes['flexible']): + _typestr[key] = _sctype2char_dict[key] + else: + _typestr[key] = empty((1,),key).dtype.str[1:] + +# Make sure all typestrings are in sctypeDict +for key, val in _typestr.items(): + if val not in sctypeDict: + sctypeDict[val] = key + +# Add additional strings to the sctypeDict + +if sys.version_info[0] >= 3: + _toadd = ['int', 'float', 'complex', 'bool', 'object', + 'str', 'bytes', 'object', ('a', allTypes['bytes_'])] +else: + _toadd = ['int', 'float', 'complex', 'bool', 'object', 'string', + ('str', allTypes['string_']), + 'unicode', 'object', ('a', allTypes['string_'])] +_toadd.remove('complex') +_toadd.remove('object') +_toadd.remove('object') + +for name in _toadd: + if isinstance(name, tuple): + sctypeDict[name[0]] = name[1] + else: + sctypeDict[name] = allTypes['%s_' % name] + +del _toadd, name + +# Now add the types we've determined to this module +for key in allTypes: + globals()[key] = allTypes[key] + __all__.append(key) + +del key + +typecodes = {'Character':'c', + 'Integer':'bhilqp', + 'UnsignedInteger':'BHILQP', + 'Float':'efdg', + 'Complex':'FDG', + 'AllInteger':'bBhHiIlLqQpP', + 'AllFloat':'efdgFDG', + 'Datetime': 'Mm', + 'All':'?bhilqpBHILQPefdgFDGSUVOMm'} + +# backwards compatibility --- deprecated name +typeDict = sctypeDict +typeNA = sctypeNA + +# b -> boolean +# u -> 
unsigned integer +# i -> signed integer +# f -> floating point +# c -> complex +# M -> datetime +# m -> timedelta +# S -> string +# U -> Unicode string +# V -> record +# O -> Python object +_kind_list = ['b', 'u', 'i', 'f', 'c', 'S', 'U', 'V', 'O', 'M', 'm'] + +__test_types = '?'+typecodes['AllInteger'][:-2]+typecodes['AllFloat']+'O' +__len_test_types = len(__test_types) + +# Keep incrementing until a common type both can be coerced to +# is found. Otherwise, return None +def _find_common_coerce(a, b): + if a > b: + return a + try: + thisind = __test_types.index(a.char) + except ValueError: + return None + return _can_coerce_all([a,b], start=thisind) + +# Find a data-type that all data-types in a list can be coerced to +def _can_coerce_all(dtypelist, start=0): + N = len(dtypelist) + if N == 0: + return None + if N == 1: + return dtypelist[0] + thisind = start + while thisind < __len_test_types: + newdtype = dtype(__test_types[thisind]) + numcoerce = len([x for x in dtypelist if newdtype >= x]) + if numcoerce == N: + return newdtype + thisind += 1 + return None + +def find_common_type(array_types, scalar_types): + """ + Determine common type following standard coercion rules. + + Parameters + ---------- + array_types : sequence + A list of dtypes or dtype convertible objects representing arrays. + scalar_types : sequence + A list of dtypes or dtype convertible objects representing scalars. + + Returns + ------- + datatype : dtype + The common data type, which is the maximum of `array_types` ignoring + `scalar_types`, unless the maximum of `scalar_types` is of a + different kind (`dtype.kind`). If the kind is not understood, then + None is returned. + + See Also + -------- + dtype, common_type, can_cast, mintypecode + + Examples + -------- + >>> np.find_common_type([], [np.int64, np.float32, np.complex]) + dtype('complex128') + >>> np.find_common_type([np.int64, np.float32], []) + dtype('float64') + + The standard casting rules ensure that a scalar cannot up-cast an + array unless the scalar is of a fundamentally different kind of data + (i.e. 
under a different hierarchy in the data type hierarchy) then + the array: + + >>> np.find_common_type([np.float32], [np.int64, np.float64]) + dtype('float32') + + Complex is of a different type, so it up-casts the float in the + `array_types` argument: + + >>> np.find_common_type([np.float32], [np.complex]) + dtype('complex128') + + Type specifier strings are convertible to dtypes and can therefore + be used instead of dtypes: + + >>> np.find_common_type(['f4', 'f4', 'i4'], ['c8']) + dtype('complex128') + + """ + array_types = [dtype(x) for x in array_types] + scalar_types = [dtype(x) for x in scalar_types] + + maxa = _can_coerce_all(array_types) + maxsc = _can_coerce_all(scalar_types) + + if maxa is None: + return maxsc + + if maxsc is None: + return maxa + + try: + index_a = _kind_list.index(maxa.kind) + index_sc = _kind_list.index(maxsc.kind) + except ValueError: + return None + + if index_sc > index_a: + return _find_common_coerce(maxsc,maxa) + else: + return maxa From noreply at buildbot.pypy.org Mon Feb 13 00:57:51 2012 From: noreply at buildbot.pypy.org (mattip) Date: Mon, 13 Feb 2012 00:57:51 +0100 (CET) Subject: [pypy-commit] pypy numpypy-out: merge with default Message-ID: <20120212235751.1244C8203C@wyvern.cs.uni-duesseldorf.de> Author: mattip Branch: numpypy-out Changeset: r52402:5325f0bba750 Date: 2012-02-09 20:06 +0200 http://bitbucket.org/pypy/pypy/changeset/5325f0bba750/ Log: merge with default diff --git a/lib_pypy/numpy.py b/lib_pypy/numpy.py new file mode 100644 --- /dev/null +++ b/lib_pypy/numpy.py @@ -0,0 +1,5 @@ +raise ImportError( + "The 'numpy' module of PyPy is in-development and not complete. " + "To try it out anyway, you can either import from 'numpypy', " + "or just write 'import numpypy' first in your program and then " + "import from 'numpy' as usual.") diff --git a/lib_pypy/numpypy/__init__.py b/lib_pypy/numpypy/__init__.py --- a/lib_pypy/numpypy/__init__.py +++ b/lib_pypy/numpypy/__init__.py @@ -1,2 +1,5 @@ from _numpypy import * from .core import * + +import sys +sys.modules.setdefault('numpy', sys.modules['numpypy']) diff --git a/lib_pypy/numpypy/core/numeric.py b/lib_pypy/numpypy/core/numeric.py --- a/lib_pypy/numpypy/core/numeric.py +++ b/lib_pypy/numpypy/core/numeric.py @@ -1,6 +1,7 @@ -from _numpypy import array, ndarray, int_, float_ #, complex_# , longlong +from _numpypy import array, ndarray, int_, float_, bool_ #, complex_# , longlong from _numpypy import concatenate +import math import sys import _numpypy as multiarray # ARGH from numpypy.core.arrayprint import array2string @@ -309,3 +310,13 @@ set_string_function(array_repr, 1) little_endian = (sys.byteorder == 'little') + +Inf = inf = infty = Infinity = PINF = float('inf') +NINF = float('-inf') +PZERO = 0.0 +NZERO = -0.0 +nan = NaN = NAN = float('nan') +False_ = bool_(False) +True_ = bool_(True) +e = math.e +pi = math.pi \ No newline at end of file diff --git a/py/_io/terminalwriter.py b/py/_io/terminalwriter.py --- a/py/_io/terminalwriter.py +++ b/py/_io/terminalwriter.py @@ -271,16 +271,24 @@ ('srWindow', SMALL_RECT), ('dwMaximumWindowSize', COORD)] + _GetStdHandle = ctypes.windll.kernel32.GetStdHandle + _GetStdHandle.argtypes = [wintypes.DWORD] + _GetStdHandle.restype = wintypes.HANDLE def GetStdHandle(kind): - return ctypes.windll.kernel32.GetStdHandle(kind) + return _GetStdHandle(kind) - SetConsoleTextAttribute = \ - ctypes.windll.kernel32.SetConsoleTextAttribute - + SetConsoleTextAttribute = ctypes.windll.kernel32.SetConsoleTextAttribute + SetConsoleTextAttribute.argtypes = 
[wintypes.HANDLE, wintypes.WORD] + SetConsoleTextAttribute.restype = wintypes.BOOL + + _GetConsoleScreenBufferInfo = \ + ctypes.windll.kernel32.GetConsoleScreenBufferInfo + _GetConsoleScreenBufferInfo.argtypes = [wintypes.HANDLE, + ctypes.POINTER(CONSOLE_SCREEN_BUFFER_INFO)] + _GetConsoleScreenBufferInfo.restype = wintypes.BOOL def GetConsoleInfo(handle): info = CONSOLE_SCREEN_BUFFER_INFO() - ctypes.windll.kernel32.GetConsoleScreenBufferInfo(\ - handle, ctypes.byref(info)) + _GetConsoleScreenBufferInfo(handle, ctypes.byref(info)) return info def _getdimensions(): diff --git a/pypy/annotation/annrpython.py b/pypy/annotation/annrpython.py --- a/pypy/annotation/annrpython.py +++ b/pypy/annotation/annrpython.py @@ -93,6 +93,10 @@ # make input arguments and set their type args_s = [self.typeannotation(t) for t in input_arg_types] + # XXX hack + annmodel.TLS.check_str_without_nul = ( + self.translator.config.translation.check_str_without_nul) + flowgraph, inputcells = self.get_call_parameters(function, args_s, policy) if not isinstance(flowgraph, FunctionGraph): assert isinstance(flowgraph, annmodel.SomeObject) diff --git a/pypy/annotation/binaryop.py b/pypy/annotation/binaryop.py --- a/pypy/annotation/binaryop.py +++ b/pypy/annotation/binaryop.py @@ -434,11 +434,13 @@ class __extend__(pairtype(SomeString, SomeString)): def union((str1, str2)): - return SomeString(can_be_None=str1.can_be_None or str2.can_be_None) + can_be_None = str1.can_be_None or str2.can_be_None + no_nul = str1.no_nul and str2.no_nul + return SomeString(can_be_None=can_be_None, no_nul=no_nul) def add((str1, str2)): # propagate const-ness to help getattr(obj, 'prefix' + const_name) - result = SomeString() + result = SomeString(no_nul=str1.no_nul and str2.no_nul) if str1.is_immutable_constant() and str2.is_immutable_constant(): result.const = str1.const + str2.const return result @@ -475,7 +477,16 @@ raise NotImplementedError( "string formatting mixing strings and unicode not supported") getbookkeeper().count('strformat', str, s_tuple) - return SomeString() + no_nul = str.no_nul + for s_item in s_tuple.items: + if isinstance(s_item, SomeFloat): + pass # or s_item is a subclass, like SomeInteger + elif isinstance(s_item, SomeString) and s_item.no_nul: + pass + else: + no_nul = False + break + return SomeString(no_nul=no_nul) class __extend__(pairtype(SomeString, SomeObject)): @@ -828,7 +839,7 @@ exec source.compile() in glob _make_none_union('SomeInstance', 'classdef=obj.classdef, can_be_None=True') -_make_none_union('SomeString', 'can_be_None=True') +_make_none_union('SomeString', 'no_nul=obj.no_nul, can_be_None=True') _make_none_union('SomeUnicodeString', 'can_be_None=True') _make_none_union('SomeList', 'obj.listdef') _make_none_union('SomeDict', 'obj.dictdef') diff --git a/pypy/annotation/bookkeeper.py b/pypy/annotation/bookkeeper.py --- a/pypy/annotation/bookkeeper.py +++ b/pypy/annotation/bookkeeper.py @@ -342,10 +342,11 @@ else: raise Exception("seeing a prebuilt long (value %s)" % hex(x)) elif issubclass(tp, str): # py.lib uses annotated str subclasses + no_nul = not '\x00' in x if len(x) == 1: - result = SomeChar() + result = SomeChar(no_nul=no_nul) else: - result = SomeString() + result = SomeString(no_nul=no_nul) elif tp is unicode: if len(x) == 1: result = SomeUnicodeCodePoint() diff --git a/pypy/annotation/listdef.py b/pypy/annotation/listdef.py --- a/pypy/annotation/listdef.py +++ b/pypy/annotation/listdef.py @@ -86,18 +86,19 @@ read_locations = self.read_locations.copy() other_read_locations = 
other.read_locations.copy() self.read_locations.update(other.read_locations) - self.patch() # which should patch all refs to 'other' s_value = self.s_value s_other_value = other.s_value s_new_value = unionof(s_value, s_other_value) + if s_new_value != s_value: + if self.dont_change_any_more: + raise TooLateForChange if isdegenerated(s_new_value): if self.bookkeeper: self.bookkeeper.ondegenerated(self, s_new_value) elif other.bookkeeper: other.bookkeeper.ondegenerated(other, s_new_value) + self.patch() # which should patch all refs to 'other' if s_new_value != s_value: - if self.dont_change_any_more: - raise TooLateForChange self.s_value = s_new_value # reflow from reading points for position_key in read_locations: @@ -222,4 +223,5 @@ MOST_GENERAL_LISTDEF = ListDef(None, SomeObject()) -s_list_of_strings = SomeList(ListDef(None, SomeString(), resized = True)) +s_list_of_strings = SomeList(ListDef(None, SomeString(no_nul=True), + resized = True)) diff --git a/pypy/annotation/model.py b/pypy/annotation/model.py --- a/pypy/annotation/model.py +++ b/pypy/annotation/model.py @@ -39,7 +39,9 @@ DEBUG = False # set to False to disable recording of debugging information class State(object): - pass + # A global attribute :-( Patch it with 'True' to enable checking of + # the no_nul attribute... + check_str_without_nul = False TLS = State() class SomeObject(object): @@ -225,43 +227,57 @@ def __init__(self): pass -class SomeString(SomeObject): - "Stands for an object which is known to be a string." - knowntype = str +class SomeStringOrUnicode(SomeObject): immutable = True - def __init__(self, can_be_None=False): - self.can_be_None = can_be_None + can_be_None=False + no_nul = False # No NUL character in the string. + + def __init__(self, can_be_None=False, no_nul=False): + if can_be_None: + self.can_be_None = True + if no_nul: + self.no_nul = True def can_be_none(self): return self.can_be_None + def __eq__(self, other): + if self.__class__ is not other.__class__: + return False + d1 = self.__dict__ + d2 = other.__dict__ + if not TLS.check_str_without_nul: + d1 = d1.copy(); d1['no_nul'] = 0 # ignored + d2 = d2.copy(); d2['no_nul'] = 0 # ignored + return d1 == d2 + +class SomeString(SomeStringOrUnicode): + "Stands for an object which is known to be a string." + knowntype = str + def nonnoneify(self): - return SomeString(can_be_None=False) + return SomeString(can_be_None=False, no_nul=self.no_nul) -class SomeUnicodeString(SomeObject): +class SomeUnicodeString(SomeStringOrUnicode): "Stands for an object which is known to be an unicode string" knowntype = unicode - immutable = True - def __init__(self, can_be_None=False): - self.can_be_None = can_be_None - - def can_be_none(self): - return self.can_be_None def nonnoneify(self): - return SomeUnicodeString(can_be_None=False) + return SomeUnicodeString(can_be_None=False, no_nul=self.no_nul) class SomeChar(SomeString): "Stands for an object known to be a string of length 1." can_be_None = False - def __init__(self): # no 'can_be_None' argument here - pass + def __init__(self, no_nul=False): # no 'can_be_None' argument here + if no_nul: + self.no_nul = True class SomeUnicodeCodePoint(SomeUnicodeString): "Stands for an object known to be a unicode codepoint." 
can_be_None = False - def __init__(self): # no 'can_be_None' argument here - pass + def __init__(self, no_nul=False): # no 'can_be_None' argument here + if no_nul: + self.no_nul = True SomeString.basestringclass = SomeString SomeString.basecharclass = SomeChar @@ -502,6 +518,7 @@ s_None = SomePBC([], can_be_None=True) s_Bool = SomeBool() s_ImpossibleValue = SomeImpossibleValue() +s_Str0 = SomeString(no_nul=True) # ____________________________________________________________ # weakrefs @@ -716,8 +733,7 @@ def not_const(s_obj): if s_obj.is_constant(): - new_s_obj = SomeObject() - new_s_obj.__class__ = s_obj.__class__ + new_s_obj = SomeObject.__new__(s_obj.__class__) dic = new_s_obj.__dict__ = s_obj.__dict__.copy() if 'const' in dic: del new_s_obj.const diff --git a/pypy/annotation/test/test_annrpython.py b/pypy/annotation/test/test_annrpython.py --- a/pypy/annotation/test/test_annrpython.py +++ b/pypy/annotation/test/test_annrpython.py @@ -456,6 +456,20 @@ return ''.join(g(n)) s = a.build_types(f, [int]) assert s.knowntype == str + assert s.no_nul + + def test_str_split(self): + a = self.RPythonAnnotator() + def g(n): + if n: + return "test string" + def f(n): + if n: + return g(n).split(' ') + s = a.build_types(f, [int]) + assert isinstance(s, annmodel.SomeList) + s_item = s.listdef.listitem.s_value + assert s_item.no_nul def test_str_splitlines(self): a = self.RPythonAnnotator() @@ -465,6 +479,18 @@ assert isinstance(s, annmodel.SomeList) assert s.listdef.listitem.resized + def test_str_strip(self): + a = self.RPythonAnnotator() + def f(n, a_str): + if n == 0: + return a_str.strip(' ') + elif n == 1: + return a_str.rstrip(' ') + else: + return a_str.lstrip(' ') + s = a.build_types(f, [int, annmodel.SomeString(no_nul=True)]) + assert s.no_nul + def test_str_mul(self): a = self.RPythonAnnotator() def f(a_str): @@ -1841,7 +1867,7 @@ return obj.indirect() a = self.RPythonAnnotator() s = a.build_types(f, [bool]) - assert s == annmodel.SomeString(can_be_None=True) + assert annmodel.SomeString(can_be_None=True).contains(s) def test_dont_see_AttributeError_clause(self): class Stuff: @@ -2018,6 +2044,37 @@ s = a.build_types(g, [int]) assert not s.can_be_None + def test_string_noNUL_canbeNone(self): + def f(a): + if a: + return "abc" + else: + return None + a = self.RPythonAnnotator() + s = a.build_types(f, [int]) + assert s.can_be_None + assert s.no_nul + + def test_str_or_None(self): + def f(a): + if a: + return "abc" + else: + return None + def g(a): + x = f(a) + #assert x is not None + if x is None: + return "abcd" + return x + if isinstance(x, str): + return x + return "impossible" + a = self.RPythonAnnotator() + s = a.build_types(f, [int]) + assert s.can_be_None + assert s.no_nul + def test_emulated_pbc_call_simple(self): def f(a,b): return a + b @@ -2071,6 +2128,19 @@ assert isinstance(s, annmodel.SomeIterator) assert s.variant == ('items',) + def test_iteritems_str0(self): + def it(d): + return d.iteritems() + def f(): + d0 = {'1a': '2a', '3': '4'} + for item in it(d0): + return "%s=%s" % item + raise ValueError + a = self.RPythonAnnotator() + s = a.build_types(f, []) + assert isinstance(s, annmodel.SomeString) + assert s.no_nul + def test_non_none_and_none_with_isinstance(self): class A(object): pass diff --git a/pypy/annotation/unaryop.py b/pypy/annotation/unaryop.py --- a/pypy/annotation/unaryop.py +++ b/pypy/annotation/unaryop.py @@ -480,13 +480,13 @@ return SomeInteger(nonneg=True) def method_strip(str, chr): - return str.basestringclass() + return str.basestringclass(no_nul=str.no_nul) 
def method_lstrip(str, chr): - return str.basestringclass() + return str.basestringclass(no_nul=str.no_nul) def method_rstrip(str, chr): - return str.basestringclass() + return str.basestringclass(no_nul=str.no_nul) def method_join(str, s_list): if s_None.contains(s_list): @@ -497,7 +497,8 @@ if isinstance(str, SomeUnicodeString): return immutablevalue(u"") return immutablevalue("") - return str.basestringclass() + no_nul = str.no_nul and s_item.no_nul + return str.basestringclass(no_nul=no_nul) def iter(str): return SomeIterator(str) @@ -508,18 +509,21 @@ def method_split(str, patt, max=-1): getbookkeeper().count("str_split", str, patt) - return getbookkeeper().newlist(str.basestringclass()) + s_item = str.basestringclass(no_nul=str.no_nul) + return getbookkeeper().newlist(s_item) def method_rsplit(str, patt, max=-1): getbookkeeper().count("str_rsplit", str, patt) - return getbookkeeper().newlist(str.basestringclass()) + s_item = str.basestringclass(no_nul=str.no_nul) + return getbookkeeper().newlist(s_item) def method_replace(str, s1, s2): return str.basestringclass() def getslice(str, s_start, s_stop): check_negative_slice(s_start, s_stop) - return str.basestringclass() + result = str.basestringclass(no_nul=str.no_nul) + return result class __extend__(SomeUnicodeString): def method_encode(uni, s_enc): diff --git a/pypy/config/translationoption.py b/pypy/config/translationoption.py --- a/pypy/config/translationoption.py +++ b/pypy/config/translationoption.py @@ -123,6 +123,9 @@ default="off"), # jit_ffi is automatically turned on by withmod-_ffi (which is enabled by default) BoolOption("jit_ffi", "optimize libffi calls", default=False, cmdline=None), + BoolOption("check_str_without_nul", + "Forbid NUL chars in strings in some external function calls", + default=False, cmdline=None), # misc BoolOption("verbose", "Print extra information", default=False), diff --git a/pypy/doc/Makefile b/pypy/doc/Makefile --- a/pypy/doc/Makefile +++ b/pypy/doc/Makefile @@ -81,6 +81,7 @@ "run these through (pdf)latex." man: + python config/generate.py $(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man @echo @echo "Build finished. The manual pages are in $(BUILDDIR)/man" diff --git a/pypy/doc/commandline_ref.rst b/pypy/doc/commandline_ref.rst new file mode 100644 --- /dev/null +++ b/pypy/doc/commandline_ref.rst @@ -0,0 +1,10 @@ +Command line reference +====================== + +Manual pages +------------ + +.. toctree:: + :maxdepth: 1 + + man/pypy.1.rst diff --git a/pypy/doc/conf.py b/pypy/doc/conf.py --- a/pypy/doc/conf.py +++ b/pypy/doc/conf.py @@ -45,9 +45,9 @@ # built documents. # # The short X.Y version. -version = '1.7' +version = '1.8' # The full version, including alpha/beta/rc tags. -release = '1.7' +release = '1.8' # The language for content autogenerated by Sphinx. Refer to documentation # for a list of supported languages. diff --git a/pypy/doc/config/translation.check_str_without_nul.txt b/pypy/doc/config/translation.check_str_without_nul.txt new file mode 100644 --- /dev/null +++ b/pypy/doc/config/translation.check_str_without_nul.txt @@ -0,0 +1,5 @@ +If turned on, the annotator will keep track of which strings can +potentially contain NUL characters, and complain if one such string +is passed to some external functions --- e.g. if it is used as a +filename in os.open(). Defaults to False because it is usually more +pain than benefit, but turned on by targetpypystandalone. 
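To make the option described above a bit more concrete, here is a minimal,
standalone version of the kind of check the new annotator tests in this commit
perform.  It is a sketch only: it assumes a PyPy source checkout on sys.path
and the usual import locations (pypy.annotation.annrpython,
pypy.annotation.model); apart from that, only names appearing in this commit
are used.

    from pypy.annotation.annrpython import RPythonAnnotator
    from pypy.annotation import model as annmodel

    def f(s):
        # strip() keeps the "no NUL characters" property of its argument
        return s.strip(' ')

    a = RPythonAnnotator()
    s_result = a.build_types(f, [annmodel.SomeString(no_nul=True)])
    assert s_result.no_nul   # result is still NUL-free, e.g. usable as a filename
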
diff --git a/pypy/doc/config/translation.log.txt b/pypy/doc/config/translation.log.txt --- a/pypy/doc/config/translation.log.txt +++ b/pypy/doc/config/translation.log.txt @@ -2,4 +2,4 @@ These must be enabled by setting the PYPYLOG environment variable. The exact set of features supported by PYPYLOG is described in -pypy/translation/c/src/debug.h. +pypy/translation/c/src/debug_print.h. diff --git a/pypy/doc/garbage_collection.rst b/pypy/doc/garbage_collection.rst --- a/pypy/doc/garbage_collection.rst +++ b/pypy/doc/garbage_collection.rst @@ -142,10 +142,9 @@ So as a first approximation, when compared to the Hybrid GC, the Minimark GC saves one word of memory per old object. -There are a number of environment variables that can be tweaked to -influence the GC. (Their default value should be ok for most usages.) -You can read more about them at the start of -`pypy/rpython/memory/gc/minimark.py`_. +There are :ref:`a number of environment variables +` that can be tweaked to influence the +GC. (Their default value should be ok for most usages.) In more detail: @@ -211,5 +210,4 @@ are preserved. If the object dies then the pre-reserved location becomes free garbage, to be collected at the next major collection. - .. include:: _ref.txt diff --git a/pypy/doc/gc_info.rst b/pypy/doc/gc_info.rst new file mode 100644 --- /dev/null +++ b/pypy/doc/gc_info.rst @@ -0,0 +1,53 @@ +Garbage collector configuration +=============================== + +.. _minimark-environment-variables: + +Minimark +-------- + +PyPy's default ``minimark`` garbage collector is configurable through +several environment variables: + +``PYPY_GC_NURSERY`` + The nursery size. + Defaults to ``4MB``. + Small values (like 1 or 1KB) are useful for debugging. + +``PYPY_GC_MAJOR_COLLECT`` + Major collection memory factor. + Default is ``1.82``, which means trigger a major collection when the + memory consumed equals 1.82 times the memory really used at the end + of the previous major collection. + +``PYPY_GC_GROWTH`` + Major collection threshold's max growth rate. + Default is ``1.4``. + Useful to collect more often than normally on sudden memory growth, + e.g. when there is a temporary peak in memory usage. + +``PYPY_GC_MAX`` + The max heap size. + If coming near this limit, it will first collect more often, then + raise an RPython MemoryError, and if that is not enough, crash the + program with a fatal error. + Try values like ``1.6GB``. + +``PYPY_GC_MAX_DELTA`` + The major collection threshold will never be set to more than + ``PYPY_GC_MAX_DELTA`` the amount really used after a collection. + Defaults to 1/8th of the total RAM size (which is constrained to be + at most 2/3/4GB on 32-bit systems). + Try values like ``200MB``. + +``PYPY_GC_MIN`` + Don't collect while the memory size is below this limit. + Useful to avoid spending all the time in the GC in very small + programs. + Defaults to 8 times the nursery. + +``PYPY_GC_DEBUG`` + Enable extra checks around collections that are too slow for normal + use. + Values are ``0`` (off), ``1`` (on major collections) or ``2`` (also + on minor collections). 
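The tuning knobs documented above are plain environment variables, so they can
be set from a wrapper script.  A small sketch in Python follows; the script
name and the particular values are made up for the example, and "pypy" is
assumed to be on PATH.

    import os, subprocess

    env = dict(os.environ)
    env['PYPY_GC_NURSERY'] = '8MB'    # larger nursery than the 4MB default
    env['PYPY_GC_MAX'] = '1.6GB'      # hard cap on the heap size
    subprocess.call(['pypy', 'myscript.py'], env=env)
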
diff --git a/pypy/doc/getting-started.rst b/pypy/doc/getting-started.rst --- a/pypy/doc/getting-started.rst +++ b/pypy/doc/getting-started.rst @@ -53,11 +53,10 @@ PyPy is ready to be executed as soon as you unpack the tarball or the zip file, with no need to install it in any specific location:: - $ tar xf pypy-1.7-linux.tar.bz2 - - $ ./pypy-1.7/bin/pypy - Python 2.7.1 (?, Apr 27 2011, 12:44:21) - [PyPy 1.7.0 with GCC 4.4.3] on linux2 + $ tar xf pypy-1.8-linux.tar.bz2 + $ ./pypy-1.8/bin/pypy + Python 2.7.1 (48ebdce33e1b, Feb 09 2012, 00:55:31) + [PyPy 1.8.0 with GCC 4.4.3] on linux2 Type "help", "copyright", "credits" or "license" for more information. And now for something completely different: ``implementing LOGO in LOGO: "turtles all the way down"'' @@ -75,14 +74,14 @@ $ curl -O https://raw.github.com/pypa/pip/master/contrib/get-pip.py - $ ./pypy-1.7/bin/pypy distribute_setup.py + $ ./pypy-1.8/bin/pypy distribute_setup.py - $ ./pypy-1.7/bin/pypy get-pip.py + $ ./pypy-1.8/bin/pypy get-pip.py - $ ./pypy-1.7/bin/pip install pygments # for example + $ ./pypy-1.8/bin/pip install pygments # for example -3rd party libraries will be installed in ``pypy-1.7/site-packages``, and -the scripts in ``pypy-1.7/bin``. +3rd party libraries will be installed in ``pypy-1.8/site-packages``, and +the scripts in ``pypy-1.8/bin``. Installing using virtualenv --------------------------- diff --git a/pypy/doc/index.rst b/pypy/doc/index.rst --- a/pypy/doc/index.rst +++ b/pypy/doc/index.rst @@ -15,7 +15,7 @@ * `FAQ`_: some frequently asked questions. -* `Release 1.7`_: the latest official release +* `Release 1.8`_: the latest official release * `PyPy Blog`_: news and status info about PyPy @@ -75,7 +75,7 @@ .. _`Getting Started`: getting-started.html .. _`Papers`: extradoc.html .. _`Videos`: video-index.html -.. _`Release 1.7`: http://pypy.org/download.html +.. _`Release 1.8`: http://pypy.org/download.html .. _`speed.pypy.org`: http://speed.pypy.org .. _`RPython toolchain`: translation.html .. _`potential project ideas`: project-ideas.html @@ -120,9 +120,9 @@ Windows, on top of .NET, and on top of Java. To dig into PyPy it is recommended to try out the current Mercurial default branch, which is always working or mostly working, -instead of the latest release, which is `1.7`__. +instead of the latest release, which is `1.8`__. -.. __: release-1.7.0.html +.. __: release-1.8.0.html PyPy is mainly developed on Linux and Mac OS X. Windows is supported, but platform-specific bugs tend to take longer before we notice and fix @@ -353,10 +353,12 @@ getting-started-dev.rst windows.rst faq.rst + commandline_ref.rst architecture.rst coding-guide.rst cpython_differences.rst garbage_collection.rst + gc_info.rst interpreter.rst objspace.rst __pypy__-module.rst diff --git a/pypy/doc/jit-hooks.rst b/pypy/doc/jit-hooks.rst new file mode 100644 --- /dev/null +++ b/pypy/doc/jit-hooks.rst @@ -0,0 +1,66 @@ +JIT hooks in PyPy +================= + +There are several hooks in the `pypyjit` module that may help you with +understanding what PyPy's JIT is doing while running your program. There +are three functions related to that coming from the `pypyjit` module: + +* `set_optimize_hook`:: + + Set a compiling hook that will be called each time a loop is optimized, + but before assembler compilation. This allows adding additional + optimizations at the Python level.
+ + The hook will be called with the following signature: + hook(jitdriver_name, loop_type, greenkey or guard_number, operations) + + jitdriver_name is the name of this particular jitdriver, 'pypyjit' is + the main interpreter loop + + loop_type can be either `loop`, `entry_bridge` or `bridge`; + in case loop_type is not `bridge`, greenkey will be a tuple of constants + or a string describing it. + + for the interpreter loop, it'll be a tuple + (code, offset, is_being_profiled) + + Note that the jit hook is not reentrant. It means that if the code + inside the jit hook is itself jitted, it will get compiled, but the + jit hook won't be called for that. + + The result value will be the resulting list of operations, or None + +* `set_compile_hook`:: + + Set a compiling hook that will be called each time a loop is compiled. + The hook will be called with the following signature: + hook(jitdriver_name, loop_type, greenkey or guard_number, operations, + assembler_addr, assembler_length) + + jitdriver_name is the name of this particular jitdriver, 'pypyjit' is + the main interpreter loop + + loop_type can be either `loop`, `entry_bridge` or `bridge`; + in case loop_type is not `bridge`, greenkey will be a tuple of constants + or a string describing it. + + for the interpreter loop, it'll be a tuple + (code, offset, is_being_profiled) + + assembler_addr is an integer describing where the assembler starts, + and can be accessed via ctypes; assembler_length is the length of the compiled + asm + + Note that the jit hook is not reentrant. It means that if the code + inside the jit hook is itself jitted, it will get compiled, but the + jit hook won't be called for that. + +* `set_abort_hook`:: + + Set a hook (callable) that will be called each time tracing is + aborted for some reason. + + The hook will be called as in: hook(jitdriver_name, greenkey, reason) + + Where reason is the reason for the abort; see the documentation for set_compile_hook + for descriptions of the other arguments. diff --git a/pypy/doc/jit/index.rst b/pypy/doc/jit/index.rst --- a/pypy/doc/jit/index.rst +++ b/pypy/doc/jit/index.rst @@ -21,6 +21,9 @@ - Notes_ about the current work in PyPy +- Hooks_ debugging facilities available to a python programmer + .. _Overview: overview.html .. _Notes: pyjitpl5.html +.. _Hooks: ../jit-hooks.html diff --git a/pypy/doc/man/pypy.1.rst b/pypy/doc/man/pypy.1.rst --- a/pypy/doc/man/pypy.1.rst +++ b/pypy/doc/man/pypy.1.rst @@ -24,6 +24,9 @@ -S Do not ``import site`` on initialization. +-s + Don't add the user site directory to `sys.path`. + -u Unbuffered binary ``stdout`` and ``stderr``. @@ -39,6 +42,9 @@ -E Ignore environment variables (such as ``PYTHONPATH``). +-B + Disable writing bytecode (``.pyc``) files. + --version Print the PyPy version. @@ -84,6 +90,64 @@ Optimizations to enabled or ``all``. Warning, this option is dangerous, and should be avoided. +ENVIRONMENT +=========== + +``PYTHONPATH`` + Add directories to pypy's module search path. + The format is the same as shell's ``PATH``. + +``PYTHONSTARTUP`` + A script referenced by this variable will be executed before the + first prompt is displayed, in interactive mode. + +``PYTHONDONTWRITEBYTECODE`` + If set to a non-empty value, equivalent to the ``-B`` option. + Disable writing ``.pyc`` files. + +``PYTHONINSPECT`` + If set to a non-empty value, equivalent to the ``-i`` option. + Inspect interactively after running the specified script. + +``PYTHONIOENCODING`` + If this is set, it overrides the encoding used for + *stdin*/*stdout*/*stderr*.
+ The syntax is *encodingname*:*errorhandler* + The *errorhandler* part is optional and has the same meaning as in + `str.encode`. + +``PYTHONNOUSERSITE`` + If set to a non-empty value, equivalent to the ``-s`` option. + Don't add the user site directory to `sys.path`. + +``PYTHONWARNINGS`` + If set, equivalent to the ``-W`` option (warning control). + The value should be a comma-separated list of ``-W`` parameters. + +``PYPYLOG`` + If set to a non-empty value, enable logging; the format is: + + *fname* + logging for profiling: includes all + ``debug_start``/``debug_stop`` but not any nested + ``debug_print``. + *fname* can be ``-`` to log to *stderr*. + + ``:``\ *fname* + Full logging, including ``debug_print``. + + *prefix*\ ``:``\ *fname* + Conditional logging. + Multiple prefixes can be specified, comma-separated. + Only sections whose name matches the prefix will be logged. + + ``PYPYLOG``\ =\ ``jit-log-opt,jit-backend:``\ *logfile* will + generate a log suitable for *jitviewer*, a tool for debugging + performance issues under PyPy. + +.. include:: ../gc_info.rst + :start-line: 7 + SEE ALSO ======== diff --git a/pypy/doc/release-1.8.0.rst b/pypy/doc/release-1.8.0.rst new file mode 100644 --- /dev/null +++ b/pypy/doc/release-1.8.0.rst @@ -0,0 +1,93 @@ +============================ +PyPy 1.8 - business as usual +============================ + +We're pleased to announce the 1.8 release of PyPy. As usual, this +release brings a lot of bugfixes, together with performance and memory improvements over +the 1.7 release. The main highlight of the release is the introduction of +`list strategies`_ which makes homogeneous lists more efficient both in terms +of performance and memory. This release also upgrades us from Python 2.7.1 compatibility to 2.7.2. Otherwise it's "business as usual" in the sense +that performance improved roughly 10% on average since the previous release. + +You can download the PyPy 1.8 release here: + + http://pypy.org/download.html + +.. _`list strategies`: http://morepypy.blogspot.com/2011/10/more-compact-lists-with-list-strategies.html + +What is PyPy? +============= + +PyPy is a very compliant Python interpreter, almost a drop-in replacement for +CPython 2.7. It's fast (`pypy 1.8 and cpython 2.7.1`_ performance comparison) +due to its integrated tracing JIT compiler. + +This release supports x86 machines running Linux 32/64, Mac OS X 32/64 or +Windows 32. Windows 64 work has been stalled; we would welcome a volunteer +to handle that. + +.. _`pypy 1.8 and cpython 2.7.1`: http://speed.pypy.org + + +Highlights +========== + +* List strategies. Now lists that contain only ints or only floats should + be as efficient as storing them in a binary-packed array. It also improves + the JIT performance in places that use such lists. There are also special + strategies for unicode and string lists. + +* As usual, numerous performance improvements. There are many examples + of python constructs that now should be faster; too many to list. + +* Bugfixes and compatibility fixes with CPython. + +* Windows fixes. + +* NumPy effort progress; for the exact list of things that have been done, + consult the `numpy status page`_. A tentative list of things that have + been done: + + * multi dimensional arrays + + * various sizes of dtypes + + * a lot of ufuncs + + * a lot of other minor changes + + Right now the `numpy` module is available under both `numpy` and `numpypy` + names.
However, because it's incomplete, you have to `import numpypy` first + before doing any imports from `numpy`. + +* New JIT hooks that allow you to hook into the JIT process from your python + program. There is a `brief overview`_ of what they offer. + +* Standard library upgrade from 2.7.1 to 2.7.2. + +Ongoing work +============ + +As usual, there is quite a bit of ongoing work that either didn't make it to +the release or is not ready yet. Highlights include: + +* Non-x86 backends for the JIT: ARMv7 (almost ready) and PPC64 (in progress) + +* Specialized type instances - allocate instances as efficient as C structs, + including type specialization + +* More numpy work + +* Since the last release there was a significant breakthrough in PyPy's + fundraising. We now have enough funds to work on first stages of `numpypy`_ + and `py3k`_. We would like to thank again to everyone who donated. + +* It's also probably worth noting, we're considering donations for the + Software Transactional Memory project. You can read more about `our plans`_ + +.. _`brief overview`: http://doc.pypy.org/en/latest/jit-hooks.html +.. _`numpy status page`: http://buildbot.pypy.org/numpy-status/latest.html +.. _`numpy status update blog report`: http://morepypy.blogspot.com/2012/01/numpypy-status-update.html +.. _`numpypy`: http://pypy.org/numpydonate.html +.. _`py3k`: http://pypy.org/py3donate.html +.. _`our plans`: http://morepypy.blogspot.com/2012/01/transactional-memory-ii.html diff --git a/pypy/interpreter/astcompiler/optimize.py b/pypy/interpreter/astcompiler/optimize.py --- a/pypy/interpreter/astcompiler/optimize.py +++ b/pypy/interpreter/astcompiler/optimize.py @@ -302,8 +302,7 @@ # narrow builds will return a surrogate. In both # the cases skip the optimization in order to # produce compatible pycs. - if (self.space.isinstance_w(w_obj, self.space.w_unicode) - and + if (self.space.isinstance_w(w_obj, self.space.w_unicode) and self.space.isinstance_w(w_const, self.space.w_unicode)): unistr = self.space.unicode_w(w_const) if len(unistr) == 1: @@ -311,7 +310,7 @@ else: ch = 0 if (ch > 0xFFFF or - (MAXUNICODE == 0xFFFF and 0xD800 <= ch <= 0xDFFFF)): + (MAXUNICODE == 0xFFFF and 0xD800 <= ch <= 0xDFFF)): return subs return ast.Const(w_const, subs.lineno, subs.col_offset) diff --git a/pypy/interpreter/astcompiler/test/test_compiler.py b/pypy/interpreter/astcompiler/test/test_compiler.py --- a/pypy/interpreter/astcompiler/test/test_compiler.py +++ b/pypy/interpreter/astcompiler/test/test_compiler.py @@ -838,7 +838,7 @@ # Just checking this doesn't crash out self.count_instructions(source) - def test_const_fold_unicode_subscr(self): + def test_const_fold_unicode_subscr(self, monkeypatch): source = """def f(): return u"abc"[0] """ @@ -853,6 +853,14 @@ assert counts == {ops.LOAD_CONST: 2, ops.BINARY_SUBSCR: 1, ops.RETURN_VALUE: 1} + monkeypatch.setattr(optimize, "MAXUNICODE", 0xFFFF) + source = """def f(): + return u"\uE01F"[0] + """ + counts = self.count_instructions(source) + assert counts == {ops.LOAD_CONST: 1, ops.RETURN_VALUE: 1} + monkeypatch.undo() + # getslice is not yet optimized. # Still, check a case which yields the empty string. source = """def f(): diff --git a/pypy/interpreter/baseobjspace.py b/pypy/interpreter/baseobjspace.py --- a/pypy/interpreter/baseobjspace.py +++ b/pypy/interpreter/baseobjspace.py @@ -1312,6 +1312,15 @@ def str_w(self, w_obj): return w_obj.str_w(self) + def str0_w(self, w_obj): + "Like str_w, but rejects strings with NUL bytes." 
+ from pypy.rlib import rstring + result = w_obj.str_w(self) + if '\x00' in result: + raise OperationError(self.w_TypeError, self.wrap( + 'argument must be a string without NUL characters')) + return rstring.assert_str0(result) + def int_w(self, w_obj): return w_obj.int_w(self) @@ -1331,6 +1340,15 @@ def unicode_w(self, w_obj): return w_obj.unicode_w(self) + def unicode0_w(self, w_obj): + "Like unicode_w, but rejects strings with NUL bytes." + from pypy.rlib import rstring + result = w_obj.unicode_w(self) + if u'\x00' in result: + raise OperationError(self.w_TypeError, self.wrap( + 'argument must be a unicode string without NUL characters')) + return rstring.assert_str0(result) + def realunicode_w(self, w_obj): # Like unicode_w, but only works if w_obj is really of type # 'unicode'. @@ -1629,6 +1647,9 @@ 'UnicodeEncodeError', 'UnicodeDecodeError', ] + +if sys.platform.startswith("win"): + ObjSpace.ExceptionTable += ['WindowsError'] ## Irregular part of the interface: # diff --git a/pypy/interpreter/gateway.py b/pypy/interpreter/gateway.py --- a/pypy/interpreter/gateway.py +++ b/pypy/interpreter/gateway.py @@ -130,6 +130,9 @@ def visit_str_or_None(self, el, app_sig): self.checked_space_method(el, app_sig) + def visit_str0(self, el, app_sig): + self.checked_space_method(el, app_sig) + def visit_nonnegint(self, el, app_sig): self.checked_space_method(el, app_sig) @@ -249,6 +252,9 @@ def visit_str_or_None(self, typ): self.run_args.append("space.str_or_None_w(%s)" % (self.scopenext(),)) + def visit_str0(self, typ): + self.run_args.append("space.str0_w(%s)" % (self.scopenext(),)) + def visit_nonnegint(self, typ): self.run_args.append("space.gateway_nonnegint_w(%s)" % ( self.scopenext(),)) @@ -383,6 +389,9 @@ def visit_str_or_None(self, typ): self.unwrap.append("space.str_or_None_w(%s)" % (self.nextarg(),)) + def visit_str0(self, typ): + self.unwrap.append("space.str0_w(%s)" % (self.nextarg(),)) + def visit_nonnegint(self, typ): self.unwrap.append("space.gateway_nonnegint_w(%s)" % (self.nextarg(),)) diff --git a/pypy/interpreter/mixedmodule.py b/pypy/interpreter/mixedmodule.py --- a/pypy/interpreter/mixedmodule.py +++ b/pypy/interpreter/mixedmodule.py @@ -50,7 +50,7 @@ space.call_method(self.w_dict, 'update', self.w_initialdict) for w_submodule in self.submodules_w: - name = space.str_w(w_submodule.w_name) + name = space.str0_w(w_submodule.w_name) space.setitem(self.w_dict, space.wrap(name.split(".")[-1]), w_submodule) space.getbuiltinmodule(name) diff --git a/pypy/interpreter/module.py b/pypy/interpreter/module.py --- a/pypy/interpreter/module.py +++ b/pypy/interpreter/module.py @@ -31,7 +31,8 @@ def install(self): """NOT_RPYTHON: installs this module into space.builtin_modules""" w_mod = self.space.wrap(self) - self.space.builtin_modules[self.space.unwrap(self.w_name)] = w_mod + modulename = self.space.str0_w(self.w_name) + self.space.builtin_modules[modulename] = w_mod def setup_after_space_initialization(self): """NOT_RPYTHON: to allow built-in modules to do some more setup diff --git a/pypy/interpreter/test/test_objspace.py b/pypy/interpreter/test/test_objspace.py --- a/pypy/interpreter/test/test_objspace.py +++ b/pypy/interpreter/test/test_objspace.py @@ -178,6 +178,14 @@ res = self.space.interp_w(Function, w(None), can_be_None=True) assert res is None + def test_str0_w(self): + space = self.space + w = space.wrap + assert space.str0_w(w("123")) == "123" + exc = space.raises_w(space.w_TypeError, space.str0_w, w("123\x004")) + assert space.unicode0_w(w(u"123")) == u"123" + exc = 
space.raises_w(space.w_TypeError, space.unicode0_w, w(u"123\x004")) + def test_getindex_w(self): w_instance1 = self.space.appexec([], """(): class X(object): diff --git a/pypy/jit/backend/llgraph/llimpl.py b/pypy/jit/backend/llgraph/llimpl.py --- a/pypy/jit/backend/llgraph/llimpl.py +++ b/pypy/jit/backend/llgraph/llimpl.py @@ -780,6 +780,9 @@ self.overflow_flag = ovf return z + def op_keepalive(self, _, x): + pass + # ---------- # delegating to the builtins do_xxx() (done automatically for simple cases) diff --git a/pypy/jit/backend/x86/regalloc.py b/pypy/jit/backend/x86/regalloc.py --- a/pypy/jit/backend/x86/regalloc.py +++ b/pypy/jit/backend/x86/regalloc.py @@ -1463,6 +1463,9 @@ if jump_op is not None and jump_op.getdescr() is descr: self._compute_hint_frame_locations_from_descr(descr) + def consider_keepalive(self, op): + pass + def not_implemented_op(self, op): not_implemented("not implemented operation: %s" % op.getopname()) diff --git a/pypy/jit/codewriter/flatten.py b/pypy/jit/codewriter/flatten.py --- a/pypy/jit/codewriter/flatten.py +++ b/pypy/jit/codewriter/flatten.py @@ -162,7 +162,9 @@ if len(block.exits) == 1: # A single link, fall-through link = block.exits[0] - assert link.exitcase is None + assert link.exitcase in (None, False, True) + # the cases False or True should not really occur, but can show + # up in the manually hacked graphs for generators... self.make_link(link) # elif block.exitswitch is c_last_exception: diff --git a/pypy/jit/codewriter/policy.py b/pypy/jit/codewriter/policy.py --- a/pypy/jit/codewriter/policy.py +++ b/pypy/jit/codewriter/policy.py @@ -48,7 +48,7 @@ mod = func.__module__ or '?' if mod.startswith('pypy.rpython.module.'): return True - if mod.startswith('pypy.translator.'): # XXX wtf? + if mod == 'pypy.translator.goal.nanos': # more helpers return True return False diff --git a/pypy/jit/metainterp/executor.py b/pypy/jit/metainterp/executor.py --- a/pypy/jit/metainterp/executor.py +++ b/pypy/jit/metainterp/executor.py @@ -254,6 +254,9 @@ assert isinstance(x, r_longlong) # 32-bit return BoxFloat(x) +def do_keepalive(cpu, _, x): + pass + # ____________________________________________________________ ##def do_force_token(cpu): diff --git a/pypy/jit/metainterp/pyjitpl.py b/pypy/jit/metainterp/pyjitpl.py --- a/pypy/jit/metainterp/pyjitpl.py +++ b/pypy/jit/metainterp/pyjitpl.py @@ -1346,12 +1346,16 @@ resbox = self.metainterp.execute_and_record_varargs( rop.CALL_MAY_FORCE, allboxes, descr=descr) self.metainterp.vrefs_after_residual_call() + vablebox = None if assembler_call: - self.metainterp.direct_assembler_call(assembler_call_jd) + vablebox = self.metainterp.direct_assembler_call( + assembler_call_jd) if resbox is not None: self.make_result_of_lastop(resbox) self.metainterp.vable_after_residual_call() self.generate_guard(rop.GUARD_NOT_FORCED, None) + if vablebox is not None: + self.metainterp.history.record(rop.KEEPALIVE, [vablebox], None) self.metainterp.handle_possible_exception() return resbox else: @@ -2478,6 +2482,15 @@ token = warmrunnerstate.get_assembler_token(greenargs) op = op.copy_and_change(rop.CALL_ASSEMBLER, args=args, descr=token) self.history.operations.append(op) + # + # To fix an obscure issue, make sure the vable stays alive + # longer than the CALL_ASSEMBLER operation. We do it by + # inserting explicitly an extra KEEPALIVE operation. 
+ jd = token.outermost_jitdriver_sd + if jd.index_of_virtualizable >= 0: + return args[jd.index_of_virtualizable] + else: + return None # ____________________________________________________________ diff --git a/pypy/jit/metainterp/resoperation.py b/pypy/jit/metainterp/resoperation.py --- a/pypy/jit/metainterp/resoperation.py +++ b/pypy/jit/metainterp/resoperation.py @@ -503,6 +503,7 @@ 'COPYUNICODECONTENT/5', 'QUASIIMMUT_FIELD/1d', # [objptr], descr=SlowMutateDescr 'RECORD_KNOWN_CLASS/2', # [objptr, clsptr] + 'KEEPALIVE/1', '_CANRAISE_FIRST', # ----- start of can_raise operations ----- '_CALL_FIRST', diff --git a/pypy/jit/metainterp/test/test_ajit.py b/pypy/jit/metainterp/test/test_ajit.py --- a/pypy/jit/metainterp/test/test_ajit.py +++ b/pypy/jit/metainterp/test/test_ajit.py @@ -322,6 +322,17 @@ res = self.interp_operations(f, [42]) assert res == ord(u"?") + def test_char_in_constant_string(self): + def g(string): + return '\x00' in string + def f(): + if g('abcdef'): return -60 + if not g('abc\x00ef'): return -61 + return 42 + res = self.interp_operations(f, []) + assert res == 42 + self.check_operations_history({'finish': 1}) # nothing else + def test_residual_call(self): @dont_look_inside def externfn(x, y): @@ -3695,6 +3706,18 @@ # here it works again self.check_operations_history(guard_class=0, record_known_class=1) + def test_generator(self): + def g(n): + yield n+1 + yield n+2 + yield n+3 + def f(n): + gen = g(n) + return gen.next() * gen.next() * gen.next() + res = self.interp_operations(f, [10]) + assert res == 11 * 12 * 13 + self.check_operations_history(int_add=3, int_mul=2) + class TestLLtype(BaseLLtypeTests, LLJitMixin): def test_tagged(self): diff --git a/pypy/module/_ffi/test/test__ffi.py b/pypy/module/_ffi/test/test__ffi.py --- a/pypy/module/_ffi/test/test__ffi.py +++ b/pypy/module/_ffi/test/test__ffi.py @@ -190,6 +190,7 @@ def test_convert_strings_to_char_p(self): """ + DLLEXPORT long mystrlen(char* s) { long len = 0; @@ -215,6 +216,7 @@ def test_convert_unicode_to_unichar_p(self): """ #include + DLLEXPORT long mystrlen_u(wchar_t* s) { long len = 0; @@ -241,6 +243,7 @@ def test_keepalive_temp_buffer(self): """ + DLLEXPORT char* do_nothing(char* s) { return s; @@ -525,5 +528,7 @@ from _ffi import CDLL, types libfoo = CDLL(self.libfoo_name) raises(AttributeError, "libfoo.getfunc('I_do_not_exist', [], types.void)") + if self.iswin32: + skip("unix specific") libnone = CDLL(None) raises(AttributeError, "libnone.getfunc('I_do_not_exist', [], types.void)") diff --git a/pypy/module/_file/test/test_file.py b/pypy/module/_file/test/test_file.py --- a/pypy/module/_file/test/test_file.py +++ b/pypy/module/_file/test/test_file.py @@ -265,6 +265,13 @@ if option.runappdirect: py.test.skip("works with internals of _file impl on py.py") + import platform + if platform.system() == 'Windows': + # XXX This test crashes until someone implements something like + # XXX verify_fd from + # XXX http://hg.python.org/cpython/file/80ddbd822227/Modules/posixmodule.c#l434 + # XXX and adds it to fopen + assert False state = [0] def read(fd, n=None): diff --git a/pypy/module/bz2/interp_bz2.py b/pypy/module/bz2/interp_bz2.py --- a/pypy/module/bz2/interp_bz2.py +++ b/pypy/module/bz2/interp_bz2.py @@ -328,7 +328,7 @@ if basemode == "a": raise OperationError(space.w_ValueError, space.wrap("cannot append to bz2 file")) - stream = open_path_helper(space.str_w(w_path), os_flags, False) + stream = open_path_helper(space.str0_w(w_path), os_flags, False) if reading: bz2stream = ReadBZ2Filter(space, stream, 
buffering) buffering = 0 # by construction, the ReadBZ2Filter acts like diff --git a/pypy/module/cpyext/include/pythonrun.h b/pypy/module/cpyext/include/pythonrun.h --- a/pypy/module/cpyext/include/pythonrun.h +++ b/pypy/module/cpyext/include/pythonrun.h @@ -13,6 +13,7 @@ #define Py_FrozenFlag 0 #define Py_VerboseFlag 0 +#define Py_DebugFlag 1 typedef struct { int cf_flags; /* bitmask of CO_xxx flags relevant to future */ diff --git a/pypy/module/gc/interp_gc.py b/pypy/module/gc/interp_gc.py --- a/pypy/module/gc/interp_gc.py +++ b/pypy/module/gc/interp_gc.py @@ -49,7 +49,7 @@ # ____________________________________________________________ - at unwrap_spec(filename=str) + at unwrap_spec(filename='str0') def dump_heap_stats(space, filename): tb = rgc._heap_stats() if not tb: diff --git a/pypy/module/imp/importing.py b/pypy/module/imp/importing.py --- a/pypy/module/imp/importing.py +++ b/pypy/module/imp/importing.py @@ -138,7 +138,7 @@ ctxt_package = None if ctxt_w_package is not None and ctxt_w_package is not space.w_None: try: - ctxt_package = space.str_w(ctxt_w_package) + ctxt_package = space.str0_w(ctxt_w_package) except OperationError, e: if not e.match(space, space.w_TypeError): raise @@ -187,7 +187,7 @@ ctxt_name = None if ctxt_w_name is not None: try: - ctxt_name = space.str_w(ctxt_w_name) + ctxt_name = space.str0_w(ctxt_w_name) except OperationError, e: if not e.match(space, space.w_TypeError): raise @@ -230,7 +230,7 @@ return rel_modulename, rel_level - at unwrap_spec(name=str, level=int) + at unwrap_spec(name='str0', level=int) def importhook(space, name, w_globals=None, w_locals=None, w_fromlist=None, level=-1): modulename = name @@ -377,8 +377,8 @@ fromlist_w = space.fixedview(w_all) for w_name in fromlist_w: if try_getattr(space, w_mod, w_name) is None: - load_part(space, w_path, prefix, space.str_w(w_name), w_mod, - tentative=1) + load_part(space, w_path, prefix, space.str0_w(w_name), + w_mod, tentative=1) return w_mod else: return first @@ -432,7 +432,7 @@ def __init__(self, space): pass - @unwrap_spec(path=str) + @unwrap_spec(path='str0') def descr_init(self, space, path): if not path: raise OperationError(space.w_ImportError, space.wrap( @@ -513,7 +513,7 @@ if w_loader: return FindInfo.fromLoader(w_loader) - path = space.str_w(w_pathitem) + path = space.str0_w(w_pathitem) filepart = os.path.join(path, partname) if os.path.isdir(filepart) and case_ok(filepart): initfile = os.path.join(filepart, '__init__') @@ -671,7 +671,7 @@ space.wrap("reload() argument must be module")) w_modulename = space.getattr(w_module, space.wrap("__name__")) - modulename = space.str_w(w_modulename) + modulename = space.str0_w(w_modulename) if not space.is_w(check_sys_modules(space, w_modulename), w_module): raise operationerrfmt( space.w_ImportError, diff --git a/pypy/module/imp/interp_imp.py b/pypy/module/imp/interp_imp.py --- a/pypy/module/imp/interp_imp.py +++ b/pypy/module/imp/interp_imp.py @@ -44,7 +44,7 @@ return space.interp_w(W_File, w_file).stream def find_module(space, w_name, w_path=None): - name = space.str_w(w_name) + name = space.str0_w(w_name) if space.is_w(w_path, space.w_None): w_path = None @@ -75,7 +75,7 @@ def load_module(space, w_name, w_file, w_filename, w_info): w_suffix, w_filemode, w_modtype = space.unpackiterable(w_info) - filename = space.str_w(w_filename) + filename = space.str0_w(w_filename) filemode = space.str_w(w_filemode) if space.is_w(w_file, space.w_None): stream = None @@ -92,7 +92,7 @@ space, w_name, find_info, reuse=True) def load_source(space, w_modulename, 
w_filename, w_file=None): - filename = space.str_w(w_filename) + filename = space.str0_w(w_filename) stream = get_file(space, w_file, filename, 'U') @@ -105,7 +105,7 @@ stream.close() return w_mod - at unwrap_spec(filename=str) + at unwrap_spec(filename='str0') def _run_compiled_module(space, w_modulename, filename, w_file, w_module): # the function 'imp._run_compiled_module' is a pypy-only extension stream = get_file(space, w_file, filename, 'rb') @@ -119,7 +119,7 @@ if space.is_w(w_file, space.w_None): stream.close() - at unwrap_spec(filename=str) + at unwrap_spec(filename='str0') def load_compiled(space, w_modulename, filename, w_file=None): w_mod = space.wrap(Module(space, w_modulename)) importing._prepare_module(space, w_mod, filename, None) @@ -138,7 +138,7 @@ return space.wrap(Module(space, w_name, add_package=False)) def init_builtin(space, w_name): - name = space.str_w(w_name) + name = space.str0_w(w_name) if name not in space.builtin_modules: return if space.finditem(space.sys.get('modules'), w_name) is not None: @@ -151,7 +151,7 @@ return None def is_builtin(space, w_name): - name = space.str_w(w_name) + name = space.str0_w(w_name) if name not in space.builtin_modules: return space.wrap(0) if space.finditem(space.sys.get('modules'), w_name) is not None: diff --git a/pypy/module/micronumpy/__init__.py b/pypy/module/micronumpy/__init__.py --- a/pypy/module/micronumpy/__init__.py +++ b/pypy/module/micronumpy/__init__.py @@ -31,7 +31,7 @@ 'concatenate': 'interp_numarray.concatenate', 'set_string_function': 'appbridge.set_string_function', - + 'count_reduce_items': 'interp_numarray.count_reduce_items', 'True_': 'types.Bool.True', @@ -111,8 +111,5 @@ 'min': 'app_numpy.min', 'identity': 'app_numpy.identity', 'max': 'app_numpy.max', - 'inf': 'app_numpy.inf', - 'e': 'app_numpy.e', - 'pi': 'app_numpy.pi', 'arange': 'app_numpy.arange', } diff --git a/pypy/module/micronumpy/app_numpy.py b/pypy/module/micronumpy/app_numpy.py --- a/pypy/module/micronumpy/app_numpy.py +++ b/pypy/module/micronumpy/app_numpy.py @@ -3,11 +3,6 @@ import _numpypy -inf = float("inf") -e = math.e -pi = math.pi - - def average(a): # This implements a weighted average, for now we don't implement the # weighting, just the average part! @@ -59,7 +54,7 @@ if not hasattr(a, "max"): a = _numpypy.array(a) return a.max(axis) - + def arange(start, stop=None, step=1, dtype=None): '''arange([start], stop[, step], dtype=None) Generate values in the half-interval [start, stop). 
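For reference, a small usage sketch of the ``arange`` described in the docstring above (not part of the changeset; it assumes an interpreter with the ``micronumpy`` module enabled, as the tests later in this diff do)::

    from _numpypy import arange

    a = arange(5)               # 0, 1, 2, 3, 4   (stop only)
    b = arange(1, 5)            # 1, 2, 3, 4      (start and stop)
    c = arange(0, 10, 3)        # 0, 3, 6, 9      (explicit step)
    d = arange(3, dtype=float)  # 0.0, 1.0, 2.0   (forced dtype)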
diff --git a/pypy/module/micronumpy/interp_boxes.py b/pypy/module/micronumpy/interp_boxes.py --- a/pypy/module/micronumpy/interp_boxes.py +++ b/pypy/module/micronumpy/interp_boxes.py @@ -80,7 +80,15 @@ descr_sub = _binop_impl("subtract") descr_mul = _binop_impl("multiply") descr_div = _binop_impl("divide") + descr_truediv = _binop_impl("true_divide") + descr_mod = _binop_impl("mod") descr_pow = _binop_impl("power") + descr_lshift = _binop_impl("left_shift") + descr_rshift = _binop_impl("right_shift") + descr_and = _binop_impl("bitwise_and") + descr_or = _binop_impl("bitwise_or") + descr_xor = _binop_impl("bitwise_xor") + descr_eq = _binop_impl("equal") descr_ne = _binop_impl("not_equal") descr_lt = _binop_impl("less") @@ -91,9 +99,29 @@ descr_radd = _binop_right_impl("add") descr_rsub = _binop_right_impl("subtract") descr_rmul = _binop_right_impl("multiply") + descr_rdiv = _binop_right_impl("divide") + descr_rmod = _binop_right_impl("mod") + descr_rpow = _binop_right_impl("power") + descr_rlshift = _binop_right_impl("left_shift") + descr_rrshift = _binop_right_impl("right_shift") + descr_rand = _binop_right_impl("bitwise_and") + descr_ror = _binop_right_impl("bitwise_or") + descr_rxor = _binop_right_impl("bitwise_xor") + descr_pos = _unaryop_impl("positive") descr_neg = _unaryop_impl("negative") descr_abs = _unaryop_impl("absolute") + descr_invert = _unaryop_impl("invert") + + def descr_divmod(self, space, w_other): + w_quotient = self.descr_div(space, w_other) + w_remainder = self.descr_mod(space, w_other) + return space.newtuple([w_quotient, w_remainder]) + + def descr_rdivmod(self, space, w_other): + w_quotient = self.descr_rdiv(space, w_other) + w_remainder = self.descr_rmod(space, w_other) + return space.newtuple([w_quotient, w_remainder]) def item(self, space): return self.get_dtype(space).itemtype.to_builtin_type(space, self) @@ -174,11 +202,28 @@ __sub__ = interp2app(W_GenericBox.descr_sub), __mul__ = interp2app(W_GenericBox.descr_mul), __div__ = interp2app(W_GenericBox.descr_div), + __truediv__ = interp2app(W_GenericBox.descr_truediv), + __mod__ = interp2app(W_GenericBox.descr_mod), + __divmod__ = interp2app(W_GenericBox.descr_divmod), __pow__ = interp2app(W_GenericBox.descr_pow), + __lshift__ = interp2app(W_GenericBox.descr_lshift), + __rshift__ = interp2app(W_GenericBox.descr_rshift), + __and__ = interp2app(W_GenericBox.descr_and), + __or__ = interp2app(W_GenericBox.descr_or), + __xor__ = interp2app(W_GenericBox.descr_xor), __radd__ = interp2app(W_GenericBox.descr_radd), __rsub__ = interp2app(W_GenericBox.descr_rsub), __rmul__ = interp2app(W_GenericBox.descr_rmul), + __rdiv__ = interp2app(W_GenericBox.descr_rdiv), + __rmod__ = interp2app(W_GenericBox.descr_rmod), + __rdivmod__ = interp2app(W_GenericBox.descr_rdivmod), + __rpow__ = interp2app(W_GenericBox.descr_rpow), + __rlshift__ = interp2app(W_GenericBox.descr_rlshift), + __rrshift__ = interp2app(W_GenericBox.descr_rrshift), + __rand__ = interp2app(W_GenericBox.descr_rand), + __ror__ = interp2app(W_GenericBox.descr_ror), + __rxor__ = interp2app(W_GenericBox.descr_rxor), __eq__ = interp2app(W_GenericBox.descr_eq), __ne__ = interp2app(W_GenericBox.descr_ne), @@ -187,8 +232,10 @@ __gt__ = interp2app(W_GenericBox.descr_gt), __ge__ = interp2app(W_GenericBox.descr_ge), + __pos__ = interp2app(W_GenericBox.descr_pos), __neg__ = interp2app(W_GenericBox.descr_neg), __abs__ = interp2app(W_GenericBox.descr_abs), + __invert__ = interp2app(W_GenericBox.descr_invert), tolist = interp2app(W_GenericBox.item), ) diff --git 
a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -101,8 +101,13 @@ descr_sub = _binop_impl("subtract") descr_mul = _binop_impl("multiply") descr_div = _binop_impl("divide") + descr_mod = _binop_impl("mod") descr_pow = _binop_impl("power") - descr_mod = _binop_impl("mod") + descr_lshift = _binop_impl("left_shift") + descr_rshift = _binop_impl("right_shift") + descr_and = _binop_impl("bitwise_and") + descr_or = _binop_impl("bitwise_or") + descr_xor = _binop_impl("bitwise_xor") descr_eq = _binop_impl("equal") descr_ne = _binop_impl("not_equal") @@ -111,8 +116,10 @@ descr_gt = _binop_impl("greater") descr_ge = _binop_impl("greater_equal") - descr_and = _binop_impl("bitwise_and") - descr_or = _binop_impl("bitwise_or") + def descr_divmod(self, space, w_other): + w_quotient = self.descr_div(space, w_other) + w_remainder = self.descr_mod(space, w_other) + return space.newtuple([w_quotient, w_remainder]) def _binop_right_impl(ufunc_name): def impl(self, space, w_other): @@ -127,8 +134,18 @@ descr_rsub = _binop_right_impl("subtract") descr_rmul = _binop_right_impl("multiply") descr_rdiv = _binop_right_impl("divide") + descr_rmod = _binop_right_impl("mod") descr_rpow = _binop_right_impl("power") - descr_rmod = _binop_right_impl("mod") + descr_rlshift = _binop_right_impl("left_shift") + descr_rrshift = _binop_right_impl("right_shift") + descr_rand = _binop_right_impl("bitwise_and") + descr_ror = _binop_right_impl("bitwise_or") + descr_rxor = _binop_right_impl("bitwise_xor") + + def descr_rdivmod(self, space, w_other): + w_quotient = self.descr_rdiv(space, w_other) + w_remainder = self.descr_rmod(space, w_other) + return space.newtuple([w_quotient, w_remainder]) def _reduce_ufunc_impl(ufunc_name, promote_to_largest=False): def impl(self, space, w_axis=None, w_out=None): @@ -803,7 +820,7 @@ self.left.create_sig(), self.right.create_sig()) class ResultArray(Call2): - def __init__(self, child, size, shape, dtype, res=None, order='C'): + def __init__(self, child, size, shape, dtype, res=None, order='C'): if res is None: res = W_NDimArray(size, shape, dtype, order) Call2.__init__(self, None, 'assign', shape, dtype, dtype, res, child) @@ -831,7 +848,7 @@ frame.next(len(self.right.shape)) else: frame.cur_value = self.identity.convert_to(self.calc_dtype) - + def create_sig(self): if self.name == 'logical_and': done_func = done_if_false @@ -1234,21 +1251,34 @@ __pos__ = interp2app(BaseArray.descr_pos), __neg__ = interp2app(BaseArray.descr_neg), __abs__ = interp2app(BaseArray.descr_abs), + __invert__ = interp2app(BaseArray.descr_invert), __nonzero__ = interp2app(BaseArray.descr_nonzero), __add__ = interp2app(BaseArray.descr_add), __sub__ = interp2app(BaseArray.descr_sub), __mul__ = interp2app(BaseArray.descr_mul), __div__ = interp2app(BaseArray.descr_div), + __mod__ = interp2app(BaseArray.descr_mod), + __divmod__ = interp2app(BaseArray.descr_divmod), __pow__ = interp2app(BaseArray.descr_pow), - __mod__ = interp2app(BaseArray.descr_mod), + __lshift__ = interp2app(BaseArray.descr_lshift), + __rshift__ = interp2app(BaseArray.descr_rshift), + __and__ = interp2app(BaseArray.descr_and), + __or__ = interp2app(BaseArray.descr_or), + __xor__ = interp2app(BaseArray.descr_xor), __radd__ = interp2app(BaseArray.descr_radd), __rsub__ = interp2app(BaseArray.descr_rsub), __rmul__ = interp2app(BaseArray.descr_rmul), __rdiv__ = interp2app(BaseArray.descr_rdiv), + __rmod__ = 
interp2app(BaseArray.descr_rmod), + __rdivmod__ = interp2app(BaseArray.descr_rdivmod), __rpow__ = interp2app(BaseArray.descr_rpow), - __rmod__ = interp2app(BaseArray.descr_rmod), + __rlshift__ = interp2app(BaseArray.descr_rlshift), + __rrshift__ = interp2app(BaseArray.descr_rrshift), + __rand__ = interp2app(BaseArray.descr_rand), + __ror__ = interp2app(BaseArray.descr_ror), + __rxor__ = interp2app(BaseArray.descr_rxor), __eq__ = interp2app(BaseArray.descr_eq), __ne__ = interp2app(BaseArray.descr_ne), @@ -1257,10 +1287,6 @@ __gt__ = interp2app(BaseArray.descr_gt), __ge__ = interp2app(BaseArray.descr_ge), - __and__ = interp2app(BaseArray.descr_and), - __or__ = interp2app(BaseArray.descr_or), - __invert__ = interp2app(BaseArray.descr_invert), - __repr__ = interp2app(BaseArray.descr_repr), __str__ = interp2app(BaseArray.descr_str), __array_interface__ = GetSetProperty(BaseArray.descr_array_iface), @@ -1328,6 +1354,9 @@ def descr_iter(self): return self + def descr_len(self, space): + return space.wrap(self.size) + def descr_index(self, space): return space.wrap(self.index) @@ -1403,7 +1432,7 @@ return signature.FlatSignature(self.base.dtype) def create_iter(self, transforms=None): - return ViewIterator(self.base.start, self.base.strides, + return ViewIterator(self.base.start, self.base.strides, self.base.backstrides, self.base.shape).apply_transformations(self.base, transforms) @@ -1414,14 +1443,17 @@ W_FlatIterator.typedef = TypeDef( 'flatiter', __iter__ = interp2app(W_FlatIterator.descr_iter), + __len__ = interp2app(W_FlatIterator.descr_len), __getitem__ = interp2app(W_FlatIterator.descr_getitem), __setitem__ = interp2app(W_FlatIterator.descr_setitem), + __eq__ = interp2app(BaseArray.descr_eq), __ne__ = interp2app(BaseArray.descr_ne), __lt__ = interp2app(BaseArray.descr_lt), __le__ = interp2app(BaseArray.descr_le), __gt__ = interp2app(BaseArray.descr_gt), __ge__ = interp2app(BaseArray.descr_ge), + base = GetSetProperty(W_FlatIterator.descr_base), index = GetSetProperty(W_FlatIterator.descr_index), coords = GetSetProperty(W_FlatIterator.descr_coords), diff --git a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py --- a/pypy/module/micronumpy/interp_ufuncs.py +++ b/pypy/module/micronumpy/interp_ufuncs.py @@ -414,14 +414,17 @@ ("subtract", "sub", 2), ("multiply", "mul", 2, {"identity": 1}), ("bitwise_and", "bitwise_and", 2, {"identity": 1, - 'int_only': True}), + "int_only": True}), ("bitwise_or", "bitwise_or", 2, {"identity": 0, - 'int_only': True}), + "int_only": True}), + ("bitwise_xor", "bitwise_xor", 2, {"int_only": True}), ("invert", "invert", 1, {"int_only": True}), ("divide", "div", 2, {"promote_bools": True}), ("true_divide", "div", 2, {"promote_to_float": True}), ("mod", "mod", 2, {"promote_bools": True}), ("power", "pow", 2, {"promote_bools": True}), + ("left_shift", "lshift", 2, {"int_only": True}), + ("right_shift", "rshift", 2, {"int_only": True}), ("equal", "eq", 2, {"comparison_func": True}), ("not_equal", "ne", 2, {"comparison_func": True}), diff --git a/pypy/module/micronumpy/test/test_dtypes.py b/pypy/module/micronumpy/test/test_dtypes.py --- a/pypy/module/micronumpy/test/test_dtypes.py +++ b/pypy/module/micronumpy/test/test_dtypes.py @@ -401,3 +401,32 @@ else: assert issubclass(int64, int) assert int_ is int64 + + def test_operators(self): + from operator import truediv + from _numpypy import float64, int_, True_, False_ + + assert 5 / int_(2) == int_(2) + assert truediv(int_(3), int_(2)) == float64(1.5) + assert int_(8) % int_(3) == int_(2) + 
assert 8 % int_(3) == int_(2) + assert divmod(int_(8), int_(3)) == (int_(2), int_(2)) + assert divmod(8, int_(3)) == (int_(2), int_(2)) + assert 2 ** int_(3) == int_(8) + assert int_(3) << int_(2) == int_(12) + assert 3 << int_(2) == int_(12) + assert int_(8) >> int_(2) == int_(2) + assert 8 >> int_(2) == int_(2) + assert int_(3) & int_(1) == int_(1) + assert 2 & int_(3) == int_(2) + assert int_(2) | int_(1) == int_(3) + assert 2 | int_(1) == int_(3) + assert int_(3) ^ int_(5) == int_(6) + assert True_ ^ False_ is True_ + assert 5 ^ int_(3) == int_(6) + + assert +int_(3) == int_(3) + assert ~int_(3) == int_(-4) + + raises(TypeError, lambda: float64(3) & 1) + diff --git a/pypy/module/micronumpy/test/test_module.py b/pypy/module/micronumpy/test/test_module.py --- a/pypy/module/micronumpy/test/test_module.py +++ b/pypy/module/micronumpy/test/test_module.py @@ -21,13 +21,3 @@ from _numpypy import array, max assert max(range(10)) == 9 assert max(array(range(10))) == 9 - - def test_constants(self): - import math - from _numpypy import inf, e, pi - assert type(inf) is float - assert inf == float("inf") - assert e == math.e - assert type(e) is float - assert pi == math.pi - assert type(pi) is float diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -579,7 +579,7 @@ def test_div(self): from math import isnan - from _numpypy import array, dtype, inf + from _numpypy import array, dtype a = array(range(1, 6)) b = a / a @@ -600,15 +600,15 @@ a = array([-1.0, 0.0, 1.0]) b = array([0.0, 0.0, 0.0]) c = a / b - assert c[0] == -inf + assert c[0] == float('-inf') assert isnan(c[1]) - assert c[2] == inf + assert c[2] == float('inf') b = array([-0.0, -0.0, -0.0]) c = a / b - assert c[0] == inf + assert c[0] == float('inf') assert isnan(c[1]) - assert c[2] == -inf + assert c[2] == float('-inf') def test_div_other(self): from _numpypy import array @@ -625,6 +625,52 @@ for i in range(5): assert b[i] == i / 5.0 + def test_divmod(self): + from _numpypy import arange + + a, b = divmod(arange(10), 3) + assert (a == [0, 0, 0, 1, 1, 1, 2, 2, 2, 3]).all() + assert (b == [0, 1, 2, 0, 1, 2, 0, 1, 2, 0]).all() + + def test_rdivmod(self): + from _numpypy import arange + + a, b = divmod(3, arange(1, 5)) + assert (a == [3, 1, 1, 0]).all() + assert (b == [0, 1, 0, 3]).all() + + def test_lshift(self): + from _numpypy import array + + a = array([0, 1, 2, 3]) + assert (a << 2 == [0, 4, 8, 12]).all() + a = array([True, False]) + assert (a << 2 == [4, 0]).all() + a = array([1.0]) + raises(TypeError, lambda: a << 2) + + def test_rlshift(self): + from _numpypy import arange + + a = arange(3) + assert (2 << a == [2, 4, 8]).all() + + def test_rshift(self): + from _numpypy import arange, array + + a = arange(10) + assert (a >> 2 == [0, 0, 0, 0, 1, 1, 1, 1, 2, 2]).all() + a = array([True, False]) + assert (a >> 1 == [0, 0]).all() + a = arange(3, dtype=float) + raises(TypeError, lambda: a >> 1) + + def test_rrshift(self): + from _numpypy import arange + + a = arange(5) + assert (2 >> a == [2, 1, 0, 0, 0]).all() + def test_pow(self): from _numpypy import array a = array(range(5), float) @@ -678,6 +724,30 @@ for i in range(5): assert b[i] == i % 2 + def test_rand(self): + from _numpypy import arange + + a = arange(5) + assert (3 & a == [0, 1, 2, 3, 0]).all() + + def test_ror(self): + from _numpypy import arange + + a = arange(5) + assert (3 | a == [3, 3, 3, 3, 7]).all() + + def 
test_xor(self): + from _numpypy import arange + + a = arange(5) + assert (a ^ 3 == [3, 2, 1, 0, 7]).all() + + def test_rxor(self): + from _numpypy import arange + + a = arange(5) + assert (3 ^ a == [3, 2, 1, 0, 7]).all() + def test_pos(self): from _numpypy import array a = array([1., -2., 3., -4., -5.]) @@ -1512,24 +1582,26 @@ def test_flatiter_view(self): from _numpypy import arange a = arange(10).reshape(5, 2) - #no == yet. - # a[::2].flat == [0, 1, 4, 5, 8, 9] - isequal = True - for y,z in zip(a[::2].flat, [0, 1, 4, 5, 8, 9]): - if y != z: - isequal = False - assert isequal == True + assert (a[::2].flat == [0, 1, 4, 5, 8, 9]).all() def test_flatiter_transpose(self): from _numpypy import arange - a = arange(10).reshape(2,5).T + a = arange(10).reshape(2, 5).T b = a.flat assert (b[:5] == [0, 5, 1, 6, 2]).all() b.next() b.next() b.next() assert b.index == 3 - assert b.coords == (1,1) + assert b.coords == (1, 1) + + def test_flatiter_len(self): + from _numpypy import arange + + assert len(arange(10).flat) == 10 + assert len(arange(10).reshape(2, 5).flat) == 10 + assert len(arange(10)[:2].flat) == 2 + assert len((arange(2) + arange(2)).flat) == 2 def test_slice_copy(self): from _numpypy import zeros diff --git a/pypy/module/micronumpy/test/test_ufuncs.py b/pypy/module/micronumpy/test/test_ufuncs.py --- a/pypy/module/micronumpy/test/test_ufuncs.py +++ b/pypy/module/micronumpy/test/test_ufuncs.py @@ -312,9 +312,9 @@ def test_arcsinh(self): import math - from _numpypy import arcsinh, inf + from _numpypy import arcsinh - for v in [inf, -inf, 1.0, math.e]: + for v in [float('inf'), float('-inf'), 1.0, math.e]: assert math.asinh(v) == arcsinh(v) assert math.isnan(arcsinh(float("nan"))) @@ -367,7 +367,7 @@ b = add.reduce(a, 0, keepdims=True) assert b.shape == (1, 4) assert (add.reduce(a, 0, keepdims=True) == [12, 15, 18, 21]).all() - + def test_bitwise(self): from _numpypy import bitwise_and, bitwise_or, arange, array @@ -416,7 +416,7 @@ assert count_reduce_items(a) == 24 assert count_reduce_items(a, 1) == 3 assert count_reduce_items(a, (1, 2)) == 3 * 4 - + def test_true_divide(self): from _numpypy import arange, array, true_divide assert (true_divide(arange(3), array([2, 2, 2])) == array([0, 0.5, 1])).all() diff --git a/pypy/module/micronumpy/test/test_zjit.py b/pypy/module/micronumpy/test/test_zjit.py --- a/pypy/module/micronumpy/test/test_zjit.py +++ b/pypy/module/micronumpy/test/test_zjit.py @@ -199,7 +199,7 @@ assert result == 1 self.check_simple_loop({"getinteriorfield_raw": 2, "float_add": 1, "int_and": 1, "int_add": 1, - 'convert_float_to_int': 1, + 'cast_float_to_int': 1, "int_ge": 1, "jump": 1, "guard_false": 2, 'arraylen_gc': 1}) diff --git a/pypy/module/micronumpy/types.py b/pypy/module/micronumpy/types.py --- a/pypy/module/micronumpy/types.py +++ b/pypy/module/micronumpy/types.py @@ -59,10 +59,6 @@ class BaseType(object): def _unimplemented_ufunc(self, *args): raise NotImplementedError - # add = sub = mul = div = mod = pow = eq = ne = lt = le = gt = ge = max = \ - # min = copysign = pos = neg = abs = sign = reciprocal = fabs = floor = \ - # exp = sin = cos = tan = arcsin = arccos = arctan = arcsinh = \ - # arctanh = _unimplemented_ufunc class Primitive(object): _mixin_ = True @@ -253,6 +249,10 @@ def bitwise_or(self, v1, v2): return v1 | v2 + @simple_binary_op + def bitwise_xor(self, v1, v2): + return v1 ^ v2 + @simple_unary_op def invert(self, v): return ~v @@ -295,6 +295,14 @@ v1 *= v1 return res + @simple_binary_op + def lshift(self, v1, v2): + return v1 << v2 + + @simple_binary_op + 
def rshift(self, v1, v2): + return v1 >> v2 + @simple_unary_op def sign(self, v): if v > 0: @@ -313,6 +321,10 @@ def bitwise_or(self, v1, v2): return v1 | v2 + @simple_binary_op + def bitwise_xor(self, v1, v2): + return v1 ^ v2 + @simple_unary_op def invert(self, v): return ~v diff --git a/pypy/module/posix/interp_posix.py b/pypy/module/posix/interp_posix.py --- a/pypy/module/posix/interp_posix.py +++ b/pypy/module/posix/interp_posix.py @@ -37,7 +37,7 @@ if space.isinstance_w(w_obj, space.w_unicode): w_obj = space.call_method(w_obj, 'encode', getfilesystemencoding(space)) - return space.str_w(w_obj) + return space.str0_w(w_obj) class FileEncoder(object): def __init__(self, space, w_obj): @@ -48,7 +48,7 @@ return fsencode_w(self.space, self.w_obj) def as_unicode(self): - return self.space.unicode_w(self.w_obj) + return self.space.unicode0_w(self.w_obj) class FileDecoder(object): def __init__(self, space, w_obj): @@ -56,13 +56,13 @@ self.w_obj = w_obj def as_bytes(self): - return self.space.str_w(self.w_obj) + return self.space.str0_w(self.w_obj) def as_unicode(self): space = self.space w_unicode = space.call_method(self.w_obj, 'decode', getfilesystemencoding(space)) - return space.unicode_w(w_unicode) + return space.unicode0_w(w_unicode) @specialize.memo() def dispatch_filename(func, tag=0): @@ -71,7 +71,7 @@ fname = FileEncoder(space, w_fname) return func(fname, *args) else: - fname = space.str_w(w_fname) + fname = space.str0_w(w_fname) return func(fname, *args) return dispatch @@ -369,7 +369,7 @@ space.wrap(times[3]), space.wrap(times[4])]) - at unwrap_spec(cmd=str) + at unwrap_spec(cmd='str0') def system(space, cmd): """Execute the command (a string) in a subshell.""" try: @@ -401,7 +401,7 @@ fullpath = rposix._getfullpathname(path) w_fullpath = space.wrap(fullpath) else: - path = space.str_w(w_path) + path = space.str0_w(w_path) fullpath = rposix._getfullpathname(path) w_fullpath = space.wrap(fullpath) except OSError, e: @@ -512,7 +512,7 @@ for key, value in os.environ.items(): space.setitem(w_env, space.wrap(key), space.wrap(value)) - at unwrap_spec(name=str, value=str) + at unwrap_spec(name='str0', value='str0') def putenv(space, name, value): """Change or add an environment variable.""" try: @@ -520,7 +520,7 @@ except OSError, e: raise wrap_oserror(space, e) - at unwrap_spec(name=str) + at unwrap_spec(name='str0') def unsetenv(space, name): """Delete an environment variable.""" try: @@ -543,12 +543,18 @@ dirname = FileEncoder(space, w_dirname) result = rposix.listdir(dirname) w_fs_encoding = getfilesystemencoding(space) - result_w = [ - space.call_method(space.wrap(s), "decode", w_fs_encoding) - for s in result - ] + len_result = len(result) + result_w = [None] * len_result + for i in range(len_result): + w_bytes = space.wrap(result[i]) + try: + result_w[i] = space.call_method(w_bytes, + "decode", w_fs_encoding) + except OperationError, e: + # fall back to the original byte string + result_w[i] = w_bytes else: - dirname = space.str_w(w_dirname) + dirname = space.str0_w(w_dirname) result = rposix.listdir(dirname) result_w = [space.wrap(s) for s in result] except OSError, e: @@ -635,7 +641,7 @@ import signal os.kill(os.getpid(), signal.SIGABRT) - at unwrap_spec(src=str, dst=str) + at unwrap_spec(src='str0', dst='str0') def link(space, src, dst): "Create a hard link to a file." 
try: @@ -650,7 +656,7 @@ except OSError, e: raise wrap_oserror(space, e) - at unwrap_spec(path=str) + at unwrap_spec(path='str0') def readlink(space, path): "Return a string representing the path to which the symbolic link points." try: @@ -765,7 +771,7 @@ w_keys = space.call_method(w_env, 'keys') for w_key in space.unpackiterable(w_keys): w_value = space.getitem(w_env, w_key) - env[space.str_w(w_key)] = space.str_w(w_value) + env[space.str0_w(w_key)] = space.str0_w(w_value) return env def execve(space, w_command, w_args, w_env): @@ -785,18 +791,18 @@ except OSError, e: raise wrap_oserror(space, e) - at unwrap_spec(mode=int, path=str) + at unwrap_spec(mode=int, path='str0') def spawnv(space, mode, path, w_args): - args = [space.str_w(w_arg) for w_arg in space.unpackiterable(w_args)] + args = [space.str0_w(w_arg) for w_arg in space.unpackiterable(w_args)] try: ret = os.spawnv(mode, path, args) except OSError, e: raise wrap_oserror(space, e) return space.wrap(ret) - at unwrap_spec(mode=int, path=str) + at unwrap_spec(mode=int, path='str0') def spawnve(space, mode, path, w_args, w_env): - args = [space.str_w(w_arg) for w_arg in space.unpackiterable(w_args)] + args = [space.str0_w(w_arg) for w_arg in space.unpackiterable(w_args)] env = _env2interp(space, w_env) try: ret = os.spawnve(mode, path, args, env) @@ -914,7 +920,7 @@ raise wrap_oserror(space, e) return space.w_None - at unwrap_spec(path=str) + at unwrap_spec(path='str0') def chroot(space, path): """ chroot(path) @@ -1103,7 +1109,7 @@ except OSError, e: raise wrap_oserror(space, e) - at unwrap_spec(path=str, uid=c_uid_t, gid=c_gid_t) + at unwrap_spec(path='str0', uid=c_uid_t, gid=c_gid_t) def chown(space, path, uid, gid): check_uid_range(space, uid) check_uid_range(space, gid) @@ -1113,7 +1119,7 @@ raise wrap_oserror(space, e, path) return space.w_None - at unwrap_spec(path=str, uid=c_uid_t, gid=c_gid_t) + at unwrap_spec(path='str0', uid=c_uid_t, gid=c_gid_t) def lchown(space, path, uid, gid): check_uid_range(space, uid) check_uid_range(space, gid) diff --git a/pypy/module/posix/test/test_posix2.py b/pypy/module/posix/test/test_posix2.py --- a/pypy/module/posix/test/test_posix2.py +++ b/pypy/module/posix/test/test_posix2.py @@ -29,6 +29,7 @@ mod.pdir = pdir unicode_dir = udir.ensure('fi\xc5\x9fier.txt', dir=True) unicode_dir.join('somefile').write('who cares?') + unicode_dir.join('caf\xe9').write('who knows?') mod.unicode_dir = unicode_dir # in applevel tests, os.stat uses the CPython os.stat. 
@@ -308,14 +309,22 @@ 'file2'] def test_listdir_unicode(self): + import sys unicode_dir = self.unicode_dir if unicode_dir is None: skip("encoding not good enough") posix = self.posix result = posix.listdir(unicode_dir) - result.sort() - assert result == [u'somefile'] - assert type(result[0]) is unicode + typed_result = [(type(x), x) for x in result] + assert (unicode, u'somefile') in typed_result + try: + u = "caf\xe9".decode(sys.getfilesystemencoding()) + except UnicodeDecodeError: + # Could not decode, listdir returned the byte string + assert (str, "caf\xe9") in typed_result + else: + assert (unicode, u) in typed_result + def test_access(self): pdir = self.pdir + '/file1' diff --git a/pypy/module/posix/test/test_ztranslation.py b/pypy/module/posix/test/test_ztranslation.py new file mode 100644 --- /dev/null +++ b/pypy/module/posix/test/test_ztranslation.py @@ -0,0 +1,4 @@ +from pypy.objspace.fake.checkmodule import checkmodule + +def test_posix_translates(): + checkmodule('posix') \ No newline at end of file diff --git a/pypy/module/pypyjit/test_pypy_c/test_call.py b/pypy/module/pypyjit/test_pypy_c/test_call.py --- a/pypy/module/pypyjit/test_pypy_c/test_call.py +++ b/pypy/module/pypyjit/test_pypy_c/test_call.py @@ -27,6 +27,7 @@ ... p53 = call_assembler(..., descr=...) guard_not_forced(descr=...) + keepalive(...) guard_no_exception(descr=...) ... """) diff --git a/pypy/module/sys/state.py b/pypy/module/sys/state.py --- a/pypy/module/sys/state.py +++ b/pypy/module/sys/state.py @@ -74,7 +74,7 @@ # return importlist - at unwrap_spec(srcdir=str) + at unwrap_spec(srcdir='str0') def pypy_initial_path(space, srcdir): try: path = getinitialpath(get(space), srcdir) diff --git a/pypy/module/sys/version.py b/pypy/module/sys/version.py --- a/pypy/module/sys/version.py +++ b/pypy/module/sys/version.py @@ -7,7 +7,7 @@ from pypy.interpreter import gateway #XXX # the release serial 42 is not in range(16) -CPYTHON_VERSION = (2, 7, 1, "final", 42) #XXX # sync patchlevel.h +CPYTHON_VERSION = (2, 7, 2, "final", 42) #XXX # sync patchlevel.h CPYTHON_API_VERSION = 1013 #XXX # sync with include/modsupport.h PYPY_VERSION = (1, 8, 1, "dev", 0) #XXX # sync patchlevel.h diff --git a/pypy/module/test_lib_pypy/numpypy/test_numpy.py b/pypy/module/test_lib_pypy/numpypy/test_numpy.py new file mode 100644 --- /dev/null +++ b/pypy/module/test_lib_pypy/numpypy/test_numpy.py @@ -0,0 +1,13 @@ +from pypy.conftest import gettestobjspace + +class AppTestNumpy: + def setup_class(cls): + cls.space = gettestobjspace(usemodules=['micronumpy']) + + def test_imports(self): + try: + import numpy # fails if 'numpypy' was not imported so far + except ImportError: + pass + import numpypy + import numpy # works after 'numpypy' has been imported diff --git a/pypy/module/zipimport/interp_zipimport.py b/pypy/module/zipimport/interp_zipimport.py --- a/pypy/module/zipimport/interp_zipimport.py +++ b/pypy/module/zipimport/interp_zipimport.py @@ -123,7 +123,9 @@ self.prefix = prefix def getprefix(self, space): - return space.wrap(self.prefix) + if ZIPSEP == os.path.sep: + return space.wrap(self.prefix) + return space.wrap(self.prefix.replace(ZIPSEP, os.path.sep)) def _find_relative_path(self, filename): if filename.startswith(self.filename): @@ -342,7 +344,7 @@ space = self.space return space.wrap(self.filename) - at unwrap_spec(name=str) + at unwrap_spec(name='str0') def descr_new_zipimporter(space, w_type, name): w = space.wrap ok = False @@ -381,7 +383,7 @@ prefix = name[len(filename):] if prefix.startswith(os.path.sep) or 
prefix.startswith(ZIPSEP): prefix = prefix[1:] - if prefix and not prefix.endswith(ZIPSEP): + if prefix and not prefix.endswith(ZIPSEP) and not prefix.endswith(os.path.sep): prefix += ZIPSEP w_result = space.wrap(W_ZipImporter(space, name, filename, zip_file, prefix)) zip_cache.set(filename, w_result) diff --git a/pypy/module/zipimport/test/test_undocumented.py b/pypy/module/zipimport/test/test_undocumented.py --- a/pypy/module/zipimport/test/test_undocumented.py +++ b/pypy/module/zipimport/test/test_undocumented.py @@ -119,7 +119,7 @@ zip_importer = zipimport.zipimporter(path) assert isinstance(zip_importer, zipimport.zipimporter) assert zip_importer.archive == zip_path - assert zip_importer.prefix == prefix + assert zip_importer.prefix == prefix.replace('/', os.path.sep) assert zip_path in zipimport._zip_directory_cache finally: self.cleanup_zipfile(self.created_paths) diff --git a/pypy/module/zipimport/test/test_zipimport.py b/pypy/module/zipimport/test/test_zipimport.py --- a/pypy/module/zipimport/test/test_zipimport.py +++ b/pypy/module/zipimport/test/test_zipimport.py @@ -15,7 +15,7 @@ cpy's regression tests """ compression = ZIP_STORED - pathsep = '/' + pathsep = os.path.sep def make_pyc(cls, space, co, mtime): data = marshal.dumps(co) @@ -129,7 +129,7 @@ self.writefile('sub/__init__.py', '') self.writefile('sub/yy.py', '') from zipimport import _zip_directory_cache, zipimporter - sub_importer = zipimporter(self.zipfile + '/sub') + sub_importer = zipimporter(self.zipfile + os.path.sep + 'sub') main_importer = zipimporter(self.zipfile) assert main_importer is not sub_importer diff --git a/pypy/rlib/ropenssl.py b/pypy/rlib/ropenssl.py --- a/pypy/rlib/ropenssl.py +++ b/pypy/rlib/ropenssl.py @@ -54,6 +54,7 @@ ASN1_STRING = lltype.Ptr(lltype.ForwardReference()) ASN1_ITEM = rffi.COpaquePtr('ASN1_ITEM') +ASN1_ITEM_EXP = lltype.Ptr(lltype.FuncType([], ASN1_ITEM)) X509_NAME = rffi.COpaquePtr('X509_NAME') class CConfig: @@ -101,12 +102,11 @@ X509_extension_st = rffi_platform.Struct( 'struct X509_extension_st', [('value', ASN1_STRING)]) - ASN1_ITEM_EXP = lltype.FuncType([], ASN1_ITEM) X509V3_EXT_D2I = lltype.FuncType([rffi.VOIDP, rffi.CCHARPP, rffi.LONG], rffi.VOIDP) v3_ext_method = rffi_platform.Struct( 'struct v3_ext_method', - [('it', lltype.Ptr(ASN1_ITEM_EXP)), + [('it', ASN1_ITEM_EXP), ('d2i', lltype.Ptr(X509V3_EXT_D2I))]) GENERAL_NAME_st = rffi_platform.Struct( 'struct GENERAL_NAME_st', @@ -118,6 +118,8 @@ ('block_size', rffi.INT)]) EVP_MD_SIZE = rffi_platform.SizeOf('EVP_MD') EVP_MD_CTX_SIZE = rffi_platform.SizeOf('EVP_MD_CTX') + OPENSSL_EXPORT_VAR_AS_FUNCTION = rffi_platform.Defined( + "OPENSSL_EXPORT_VAR_AS_FUNCTION") for k, v in rffi_platform.configure(CConfig).items(): @@ -224,7 +226,10 @@ ssl_external('i2a_ASN1_INTEGER', [BIO, ASN1_INTEGER], rffi.INT) ssl_external('ASN1_item_d2i', [rffi.VOIDP, rffi.CCHARPP, rffi.LONG, ASN1_ITEM], rffi.VOIDP) -ssl_external('ASN1_ITEM_ptr', [rffi.VOIDP], ASN1_ITEM, macro=True) +if OPENSSL_EXPORT_VAR_AS_FUNCTION: + ssl_external('ASN1_ITEM_ptr', [ASN1_ITEM_EXP], ASN1_ITEM, macro=True) +else: + ssl_external('ASN1_ITEM_ptr', [rffi.VOIDP], ASN1_ITEM, macro=True) ssl_external('sk_GENERAL_NAME_num', [GENERAL_NAMES], rffi.INT, macro=True) diff --git a/pypy/rlib/rstring.py b/pypy/rlib/rstring.py --- a/pypy/rlib/rstring.py +++ b/pypy/rlib/rstring.py @@ -205,3 +205,45 @@ assert p.const is None return SomeUnicodeBuilder(can_be_None=True) +#___________________________________________________________________ +# Support functions for SomeString.no_nul + +def 
assert_str0(fname): + assert '\x00' not in fname, "NUL byte in string" + return fname + +class Entry(ExtRegistryEntry): + _about_ = assert_str0 + + def compute_result_annotation(self, s_obj): + if s_None.contains(s_obj): + return s_obj + assert isinstance(s_obj, (SomeString, SomeUnicodeString)) + if s_obj.no_nul: + return s_obj + new_s_obj = SomeObject.__new__(s_obj.__class__) + new_s_obj.__dict__ = s_obj.__dict__.copy() + new_s_obj.no_nul = True + return new_s_obj + + def specialize_call(self, hop): + hop.exception_cannot_occur() + return hop.inputarg(hop.args_r[0], arg=0) + +def check_str0(fname): + """A 'probe' to trigger a failure at translation time, if the + string was not proved to not contain NUL characters.""" + assert '\x00' not in fname, "NUL byte in string" + +class Entry(ExtRegistryEntry): + _about_ = check_str0 + + def compute_result_annotation(self, s_obj): + if not isinstance(s_obj, (SomeString, SomeUnicodeString)): + return s_obj + if not s_obj.no_nul: + raise ValueError("Value is not no_nul") + + def specialize_call(self, hop): + pass + diff --git a/pypy/rlib/test/test_rmarshal.py b/pypy/rlib/test/test_rmarshal.py --- a/pypy/rlib/test/test_rmarshal.py +++ b/pypy/rlib/test/test_rmarshal.py @@ -169,7 +169,7 @@ assert st2.st_mode == st.st_mode assert st2[9] == st[9] return buf - fn = compile(f, [str]) + fn = compile(f, [annmodel.s_Str0]) res = fn('.') st = os.stat('.') sttuple = marshal.loads(res) diff --git a/pypy/rpython/extfunc.py b/pypy/rpython/extfunc.py --- a/pypy/rpython/extfunc.py +++ b/pypy/rpython/extfunc.py @@ -2,7 +2,7 @@ from pypy.rpython.extregistry import ExtRegistryEntry from pypy.rpython.lltypesystem.lltype import typeOf from pypy.objspace.flow.model import Constant -from pypy.annotation.model import unionof +from pypy.annotation import model as annmodel from pypy.annotation.signature import annotation import py, sys @@ -138,7 +138,6 @@ # we defer a bit annotation here def compute_result_annotation(self): - from pypy.annotation import model as annmodel return annmodel.SomeGenericCallable([annotation(i, self.bookkeeper) for i in self.instance.args], annotation(self.instance.result, self.bookkeeper)) @@ -152,8 +151,9 @@ signature_args = [annotation(arg, None) for arg in args] assert len(args_s) == len(signature_args),\ "Argument number mismatch" + for i, expected in enumerate(signature_args): - arg = unionof(args_s[i], expected) + arg = annmodel.unionof(args_s[i], expected) if not expected.contains(arg): name = getattr(self, 'name', None) if not name: diff --git a/pypy/rpython/extfuncregistry.py b/pypy/rpython/extfuncregistry.py --- a/pypy/rpython/extfuncregistry.py +++ b/pypy/rpython/extfuncregistry.py @@ -85,7 +85,8 @@ # llinterpreter path_functions = [ - ('join', [str, str], str), + ('join', [ll_os.str0, ll_os.str0], ll_os.str0), + ('dirname', [ll_os.str0], ll_os.str0), ] for name, args, res in path_functions: diff --git a/pypy/rpython/lltypesystem/ll2ctypes.py b/pypy/rpython/lltypesystem/ll2ctypes.py --- a/pypy/rpython/lltypesystem/ll2ctypes.py +++ b/pypy/rpython/lltypesystem/ll2ctypes.py @@ -1036,13 +1036,8 @@ libraries = eci.testonly_libraries + eci.libraries + eci.frameworks FUNCTYPE = lltype.typeOf(funcptr).TO - if not libraries: - cfunc = get_on_lib(standard_c_lib, funcname) - # XXX magic: on Windows try to load the function from 'kernel32' too - if cfunc is None and hasattr(ctypes, 'windll'): - cfunc = get_on_lib(ctypes.windll.kernel32, funcname) - else: - cfunc = None + cfunc = None + if libraries: not_found = [] for libname in libraries: libpath = 
None @@ -1075,6 +1070,12 @@ not_found.append(libname) if cfunc is None: + cfunc = get_on_lib(standard_c_lib, funcname) + # XXX magic: on Windows try to load the function from 'kernel32' too + if cfunc is None and hasattr(ctypes, 'windll'): + cfunc = get_on_lib(ctypes.windll.kernel32, funcname) + + if cfunc is None: # function name not found in any of the libraries if not libraries: place = 'the standard C library (missing libraries=...?)' diff --git a/pypy/rpython/lltypesystem/rffi.py b/pypy/rpython/lltypesystem/rffi.py --- a/pypy/rpython/lltypesystem/rffi.py +++ b/pypy/rpython/lltypesystem/rffi.py @@ -15,7 +15,7 @@ from pypy.translator.tool.cbuild import ExternalCompilationInfo from pypy.rpython.annlowlevel import llhelper from pypy.rlib.objectmodel import we_are_translated -from pypy.rlib.rstring import StringBuilder, UnicodeBuilder +from pypy.rlib.rstring import StringBuilder, UnicodeBuilder, assert_str0 from pypy.rlib import jit from pypy.rpython.lltypesystem import llmemory import os, sys @@ -698,7 +698,7 @@ while cp[i] != lastchar: b.append(cp[i]) i += 1 - return b.build() + return assert_str0(b.build()) # str -> char* # Can't inline this because of the raw address manipulation. @@ -804,7 +804,7 @@ while i < maxlen and cp[i] != lastchar: b.append(cp[i]) i += 1 - return b.build() + return assert_str0(b.build()) # char* and size -> str (which can contain null bytes) def charpsize2str(cp, size): @@ -842,6 +842,7 @@ array[i] = str2charp(l[i]) array[len(l)] = lltype.nullptr(CCHARP.TO) return array +liststr2charpp._annenforceargs_ = [[annmodel.s_Str0]] # List of strings def free_charpp(ref): """ frees list of char**, NULL terminated diff --git a/pypy/rpython/module/ll_os.py b/pypy/rpython/module/ll_os.py --- a/pypy/rpython/module/ll_os.py +++ b/pypy/rpython/module/ll_os.py @@ -31,6 +31,10 @@ from pypy.rlib import rgc from pypy.rlib.objectmodel import specialize +str0 = SomeString(no_nul=True) +unicode0 = SomeUnicodeString(no_nul=True) + + def monkeypatch_rposix(posixfunc, unicodefunc, signature): func_name = posixfunc.__name__ @@ -39,12 +43,15 @@ arglist = ['arg%d' % (i,) for i in range(len(signature))] transformed_arglist = arglist[:] for i, arg in enumerate(signature): - if arg is unicode: + if arg in (unicode, unicode0): transformed_arglist[i] = transformed_arglist[i] + '.as_unicode()' args = ', '.join(arglist) transformed_args = ', '.join(transformed_arglist) - main_arg = 'arg%d' % (signature.index(unicode),) + try: + main_arg = 'arg%d' % (signature.index(unicode0),) + except ValueError: + main_arg = 'arg%d' % (signature.index(unicode),) source = py.code.Source(""" def %(func_name)s(%(args)s): @@ -60,7 +67,7 @@ exec source.compile() in miniglobals new_func = miniglobals[func_name] specialized_args = [i for i in range(len(signature)) - if signature[i] in (unicode, None)] + if signature[i] in (unicode, unicode0, None)] new_func = specialize.argtype(*specialized_args)(new_func) # Monkeypatch the function in pypy.rlib.rposix @@ -68,6 +75,7 @@ class StringTraits: str = str + str0 = str0 CHAR = rffi.CHAR CCHARP = rffi.CCHARP charp2str = staticmethod(rffi.charp2str) @@ -85,6 +93,7 @@ class UnicodeTraits: str = unicode + str0 = unicode0 CHAR = rffi.WCHAR_T CCHARP = rffi.CWCHARP charp2str = staticmethod(rffi.wcharp2unicode) @@ -301,7 +310,7 @@ rffi.free_charpp(l_args) raise OSError(rposix.get_errno(), "execv failed") - return extdef([str, [str]], s_ImpossibleValue, llimpl=execv_llimpl, + return extdef([str0, [str0]], s_ImpossibleValue, llimpl=execv_llimpl, export_name="ll_os.ll_os_execv") @@ 
-319,7 +328,8 @@ # appropriate envstrs = [] for item in env.iteritems(): - envstrs.append("%s=%s" % item) + envstr = "%s=%s" % item + envstrs.append(envstr) l_args = rffi.liststr2charpp(args) l_env = rffi.liststr2charpp(envstrs) @@ -332,7 +342,7 @@ raise OSError(rposix.get_errno(), "execve failed") return extdef( - [str, [str], {str: str}], + [str0, [str0], {str0: str0}], s_ImpossibleValue, llimpl=execve_llimpl, export_name="ll_os.ll_os_execve") @@ -353,7 +363,7 @@ raise OSError(rposix.get_errno(), "os_spawnv failed") return rffi.cast(lltype.Signed, childpid) - return extdef([int, str, [str]], int, llimpl=spawnv_llimpl, + return extdef([int, str0, [str0]], int, llimpl=spawnv_llimpl, export_name="ll_os.ll_os_spawnv") @registering_if(os, 'spawnve') @@ -378,7 +388,7 @@ raise OSError(rposix.get_errno(), "os_spawnve failed") return rffi.cast(lltype.Signed, childpid) - return extdef([int, str, [str], {str: str}], int, + return extdef([int, str0, [str0], {str0: str0}], int, llimpl=spawnve_llimpl, export_name="ll_os.ll_os_spawnve") @@ -517,7 +527,7 @@ else: raise Exception("os.utime() arg 2 must be None or a tuple of " "2 floats, got %s" % (s_times,)) - os_utime_normalize_args._default_signature_ = [traits.str, None] + os_utime_normalize_args._default_signature_ = [traits.str0, None] return extdef(os_utime_normalize_args, s_None, "ll_os.ll_os_utime", @@ -612,7 +622,7 @@ if result == -1: raise OSError(rposix.get_errno(), "os_chroot failed") - return extdef([str], None, export_name="ll_os.ll_os_chroot", + return extdef([str0], None, export_name="ll_os.ll_os_chroot", llimpl=chroot_llimpl) @registering_if(os, 'uname') @@ -816,7 +826,7 @@ def os_open_oofakeimpl(path, flags, mode): return os.open(OOSupport.from_rstr(path), flags, mode) - return extdef([traits.str, int, int], int, traits.ll_os_name('open'), + return extdef([traits.str0, int, int], int, traits.ll_os_name('open'), llimpl=os_open_llimpl, oofakeimpl=os_open_oofakeimpl) @registering_if(os, 'getloadavg') @@ -1050,7 +1060,7 @@ def os_access_oofakeimpl(path, mode): return os.access(OOSupport.from_rstr(path), mode) - return extdef([traits.str, int], s_Bool, llimpl=access_llimpl, + return extdef([traits.str0, int], s_Bool, llimpl=access_llimpl, export_name=traits.ll_os_name("access"), oofakeimpl=os_access_oofakeimpl) @@ -1062,8 +1072,8 @@ from pypy.rpython.module.ll_win32file import make_getfullpathname_impl getfullpathname_llimpl = make_getfullpathname_impl(traits) - return extdef([traits.str], # a single argument which is a str - traits.str, # returns a string + return extdef([traits.str0], # a single argument which is a str + traits.str0, # returns a string traits.ll_os_name('_getfullpathname'), llimpl=getfullpathname_llimpl) @@ -1174,8 +1184,8 @@ raise OSError(error, "os_readdir failed") return result - return extdef([traits.str], # a single argument which is a str - [traits.str], # returns a list of strings + return extdef([traits.str0], # a single argument which is a str + [traits.str0], # returns a list of strings traits.ll_os_name('listdir'), llimpl=os_listdir_llimpl) @@ -1241,7 +1251,7 @@ if res == -1: raise OSError(rposix.get_errno(), "os_chown failed") - return extdef([str, int, int], None, "ll_os.ll_os_chown", + return extdef([str0, int, int], None, "ll_os.ll_os_chown", llimpl=os_chown_llimpl) @registering_if(os, 'lchown') @@ -1254,7 +1264,7 @@ if res == -1: raise OSError(rposix.get_errno(), "os_lchown failed") - return extdef([str, int, int], None, "ll_os.ll_os_lchown", + return extdef([str0, int, int], None, "ll_os.ll_os_lchown", 
llimpl=os_lchown_llimpl) @registering_if(os, 'readlink') @@ -1283,12 +1293,11 @@ lltype.free(buf, flavor='raw') bufsize *= 4 # convert the result to a string - l = [buf[i] for i in range(res)] - result = ''.join(l) + result = rffi.charp2strn(buf, res) lltype.free(buf, flavor='raw') return result - return extdef([str], str, + return extdef([str0], str0, "ll_os.ll_os_readlink", llimpl=os_readlink_llimpl) @@ -1361,7 +1370,7 @@ res = os_system(command) return rffi.cast(lltype.Signed, res) - return extdef([str], int, llimpl=system_llimpl, + return extdef([str0], int, llimpl=system_llimpl, export_name="ll_os.ll_os_system") @registering_str_unicode(os.unlink) @@ -1383,7 +1392,7 @@ if not win32traits.DeleteFile(path): raise rwin32.lastWindowsError() - return extdef([traits.str], s_None, llimpl=unlink_llimpl, + return extdef([traits.str0], s_None, llimpl=unlink_llimpl, export_name=traits.ll_os_name('unlink')) @registering_str_unicode(os.chdir) @@ -1401,7 +1410,7 @@ from pypy.rpython.module.ll_win32file import make_chdir_impl os_chdir_llimpl = make_chdir_impl(traits) - return extdef([traits.str], s_None, llimpl=os_chdir_llimpl, + return extdef([traits.str0], s_None, llimpl=os_chdir_llimpl, export_name=traits.ll_os_name('chdir')) @registering_str_unicode(os.mkdir) @@ -1424,7 +1433,7 @@ if res < 0: raise OSError(rposix.get_errno(), "os_mkdir failed") - return extdef([traits.str, int], s_None, llimpl=os_mkdir_llimpl, + return extdef([traits.str0, int], s_None, llimpl=os_mkdir_llimpl, export_name=traits.ll_os_name('mkdir')) @registering_str_unicode(os.rmdir) @@ -1437,7 +1446,7 @@ if res < 0: raise OSError(rposix.get_errno(), "os_rmdir failed") - return extdef([traits.str], s_None, llimpl=rmdir_llimpl, + return extdef([traits.str0], s_None, llimpl=rmdir_llimpl, export_name=traits.ll_os_name('rmdir')) @registering_str_unicode(os.chmod) @@ -1454,7 +1463,7 @@ from pypy.rpython.module.ll_win32file import make_chmod_impl chmod_llimpl = make_chmod_impl(traits) - return extdef([traits.str, int], s_None, llimpl=chmod_llimpl, + return extdef([traits.str0, int], s_None, llimpl=chmod_llimpl, export_name=traits.ll_os_name('chmod')) @registering_str_unicode(os.rename) @@ -1476,7 +1485,7 @@ if not win32traits.MoveFile(oldpath, newpath): raise rwin32.lastWindowsError() - return extdef([traits.str, traits.str], s_None, llimpl=rename_llimpl, + return extdef([traits.str0, traits.str0], s_None, llimpl=rename_llimpl, export_name=traits.ll_os_name('rename')) @registering_str_unicode(getattr(os, 'mkfifo', None)) @@ -1489,7 +1498,7 @@ if res < 0: raise OSError(rposix.get_errno(), "os_mkfifo failed") - return extdef([traits.str, int], s_None, llimpl=mkfifo_llimpl, + return extdef([traits.str0, int], s_None, llimpl=mkfifo_llimpl, export_name=traits.ll_os_name('mkfifo')) @registering_str_unicode(getattr(os, 'mknod', None)) @@ -1503,7 +1512,7 @@ if res < 0: raise OSError(rposix.get_errno(), "os_mknod failed") - return extdef([traits.str, int, int], s_None, llimpl=mknod_llimpl, + return extdef([traits.str0, int, int], s_None, llimpl=mknod_llimpl, export_name=traits.ll_os_name('mknod')) @registering(os.umask) @@ -1555,7 +1564,7 @@ if res < 0: raise OSError(rposix.get_errno(), "os_link failed") - return extdef([str, str], s_None, llimpl=link_llimpl, + return extdef([str0, str0], s_None, llimpl=link_llimpl, export_name="ll_os.ll_os_link") @registering_if(os, 'symlink') @@ -1568,7 +1577,7 @@ if res < 0: raise OSError(rposix.get_errno(), "os_symlink failed") - return extdef([str, str], s_None, llimpl=symlink_llimpl, + return 
extdef([str0, str0], s_None, llimpl=symlink_llimpl, export_name="ll_os.ll_os_symlink") @registering_if(os, 'fork') diff --git a/pypy/rpython/module/ll_os_environ.py b/pypy/rpython/module/ll_os_environ.py --- a/pypy/rpython/module/ll_os_environ.py +++ b/pypy/rpython/module/ll_os_environ.py @@ -3,8 +3,11 @@ from pypy.rpython.controllerentry import Controller from pypy.rpython.extfunc import register_external from pypy.rpython.lltypesystem import rffi, lltype +from pypy.rpython.module import ll_os from pypy.rlib import rposix +str0 = ll_os.str0 + # ____________________________________________________________ # # Annotation support to control access to 'os.environ' in the RPython program @@ -64,7 +67,7 @@ rffi.free_charp(l_name) return result -register_external(r_getenv, [str], annmodel.SomeString(can_be_None=True), +register_external(r_getenv, [str0], annmodel.SomeString(can_be_None=True), export_name='ll_os.ll_os_getenv', llimpl=getenv_llimpl) @@ -93,7 +96,7 @@ if l_oldstring: rffi.free_charp(l_oldstring) -register_external(r_putenv, [str, str], annmodel.s_None, +register_external(r_putenv, [str0, str0], annmodel.s_None, export_name='ll_os.ll_os_putenv', llimpl=putenv_llimpl) @@ -128,7 +131,7 @@ del envkeepalive.byname[name] rffi.free_charp(l_oldstring) - register_external(r_unsetenv, [str], annmodel.s_None, + register_external(r_unsetenv, [str0], annmodel.s_None, export_name='ll_os.ll_os_unsetenv', llimpl=unsetenv_llimpl) @@ -172,7 +175,7 @@ i += 1 return result -register_external(r_envkeys, [], [str], # returns a list of strings +register_external(r_envkeys, [], [str0], # returns a list of strings export_name='ll_os.ll_os_envkeys', llimpl=envkeys_llimpl) @@ -193,6 +196,6 @@ i += 1 return result -register_external(r_envitems, [], [(str, str)], +register_external(r_envitems, [], [(str0, str0)], export_name='ll_os.ll_os_envitems', llimpl=envitems_llimpl) diff --git a/pypy/rpython/module/ll_os_stat.py b/pypy/rpython/module/ll_os_stat.py --- a/pypy/rpython/module/ll_os_stat.py +++ b/pypy/rpython/module/ll_os_stat.py @@ -236,7 +236,7 @@ def register_stat_variant(name, traits): if name != 'fstat': arg_is_path = True - s_arg = traits.str + s_arg = traits.str0 ARG1 = traits.CCHARP else: arg_is_path = False @@ -251,8 +251,6 @@ [s_arg], s_StatResult, traits.ll_os_name(name), llimpl=posix_stat_llimpl) - assert traits.str is str - if sys.platform.startswith('linux'): # because we always use _FILE_OFFSET_BITS 64 - this helps things work that are not a c compiler _functions = {'stat': 'stat64', @@ -283,7 +281,7 @@ @func_renamer('os_%s_fake' % (name,)) def posix_fakeimpl(arg): - if s_arg == str: + if s_arg == traits.str0: arg = hlstr(arg) st = getattr(os, name)(arg) fields = [TYPE for fieldname, TYPE in STAT_FIELDS] diff --git a/pypy/rpython/ootypesystem/test/test_ooann.py b/pypy/rpython/ootypesystem/test/test_ooann.py --- a/pypy/rpython/ootypesystem/test/test_ooann.py +++ b/pypy/rpython/ootypesystem/test/test_ooann.py @@ -231,7 +231,7 @@ a = RPythonAnnotator() s = a.build_types(oof, [bool]) - assert s == annmodel.SomeString(can_be_None=True) + assert annmodel.SomeString(can_be_None=True).contains(s) def test_oostring(): def oof(): diff --git a/pypy/rpython/test/test_extfunc.py b/pypy/rpython/test/test_extfunc.py --- a/pypy/rpython/test/test_extfunc.py +++ b/pypy/rpython/test/test_extfunc.py @@ -167,3 +167,43 @@ a = RPythonAnnotator(policy=policy) s = a.build_types(f, []) assert isinstance(s, annmodel.SomeString) + + def test_str0(self): + str0 = annmodel.SomeString(no_nul=True) + def os_open(s): + pass + 
register_external(os_open, [str0], None) + def f(s): + return os_open(s) + policy = AnnotatorPolicy() + policy.allow_someobjects = False + a = RPythonAnnotator(policy=policy) + a.build_types(f, [str]) # Does not raise + assert a.translator.config.translation.check_str_without_nul == False + # Now enable the str0 check, and try again with a similar function + a.translator.config.translation.check_str_without_nul=True + def g(s): + return os_open(s) + raises(Exception, a.build_types, g, [str]) + a.build_types(g, [str0]) # Does not raise + + def test_list_of_str0(self): + str0 = annmodel.SomeString(no_nul=True) + def os_execve(l): + pass + register_external(os_execve, [[str0]], None) + def f(l): + return os_execve(l) + policy = AnnotatorPolicy() + policy.allow_someobjects = False + a = RPythonAnnotator(policy=policy) + a.build_types(f, [[str]]) # Does not raise + assert a.translator.config.translation.check_str_without_nul == False + # Now enable the str0 check, and try again with a similar function + a.translator.config.translation.check_str_without_nul=True + def g(l): + return os_execve(l) + raises(Exception, a.build_types, g, [[str]]) + a.build_types(g, [[str0]]) # Does not raise + + diff --git a/pypy/tool/release/package.py b/pypy/tool/release/package.py --- a/pypy/tool/release/package.py +++ b/pypy/tool/release/package.py @@ -60,7 +60,8 @@ if sys.platform == 'win32': # Can't rename a DLL: it is always called 'libpypy-c.dll' for extra in ['libpypy-c.dll', - 'libexpat.dll', 'sqlite3.dll', 'msvcr90.dll']: + 'libexpat.dll', 'sqlite3.dll', 'msvcr90.dll', + 'libeay32.dll', 'ssleay32.dll']: p = pypy_c.dirpath().join(extra) if not p.check(): p = py.path.local.sysfind(extra) diff --git a/pypy/translator/c/gc.py b/pypy/translator/c/gc.py --- a/pypy/translator/c/gc.py +++ b/pypy/translator/c/gc.py @@ -11,7 +11,6 @@ from pypy.translator.tool.cbuild import ExternalCompilationInfo class BasicGcPolicy(object): - stores_hash_at_the_end = False def __init__(self, db, thread_enabled=False): self.db = db @@ -47,8 +46,7 @@ return ExternalCompilationInfo( pre_include_bits=['/* using %s */' % (gct.__class__.__name__,), '#define MALLOC_ZERO_FILLED %d' % (gct.malloc_zero_filled,), - ], - post_include_bits=['typedef void *GC_hidden_pointer;'] + ] ) def get_prebuilt_hash(self, obj): @@ -308,7 +306,6 @@ class FrameworkGcPolicy(BasicGcPolicy): transformerclass = framework.FrameworkGCTransformer - stores_hash_at_the_end = True def struct_setup(self, structdefnode, rtti): if rtti is not None and hasattr(rtti._obj, 'destructor_funcptr'): diff --git a/pypy/translator/c/test/test_extfunc.py b/pypy/translator/c/test/test_extfunc.py --- a/pypy/translator/c/test/test_extfunc.py +++ b/pypy/translator/c/test/test_extfunc.py @@ -3,6 +3,7 @@ import os, time, sys from pypy.tool.udir import udir from pypy.rlib.rarithmetic import r_longlong +from pypy.annotation import model as annmodel from pypy.translator.c.test.test_genc import compile from pypy.translator.c.test.test_standalone import StandaloneTests posix = __import__(os.name) @@ -145,7 +146,7 @@ filename = str(py.path.local(__file__)) def call_access(path, mode): return os.access(path, mode) - f = compile(call_access, [str, int]) + f = compile(call_access, [annmodel.s_Str0, int]) for mode in os.R_OK, os.W_OK, os.X_OK, (os.R_OK | os.W_OK | os.X_OK): assert f(filename, mode) == os.access(filename, mode) @@ -225,7 +226,7 @@ def test_system(): def does_stuff(cmd): return os.system(cmd) - f1 = compile(does_stuff, [str]) + f1 = compile(does_stuff, [annmodel.s_Str0]) res = 
f1("echo hello") assert res == 0 @@ -311,7 +312,7 @@ def test_chdir(): def does_stuff(path): os.chdir(path) - f1 = compile(does_stuff, [str]) + f1 = compile(does_stuff, [annmodel.s_Str0]) curdir = os.getcwd() try: os.chdir('..') @@ -325,7 +326,7 @@ os.rmdir(path) else: os.mkdir(path, 0777) - f1 = compile(does_stuff, [str, bool]) + f1 = compile(does_stuff, [annmodel.s_Str0, bool]) dirname = str(udir.join('test_mkdir_rmdir')) f1(dirname, False) assert os.path.exists(dirname) and os.path.isdir(dirname) @@ -628,7 +629,7 @@ return os.environ[s] except KeyError: return '--missing--' - func = compile(fn, [str]) + func = compile(fn, [annmodel.s_Str0]) os.environ.setdefault('USER', 'UNNAMED_USER') result = func('USER') assert result == os.environ['USER'] @@ -640,7 +641,7 @@ res = os.environ.get(s) if res is None: res = '--missing--' return res - func = compile(fn, [str]) + func = compile(fn, [annmodel.s_Str0]) os.environ.setdefault('USER', 'UNNAMED_USER') result = func('USER') assert result == os.environ['USER'] @@ -654,7 +655,7 @@ os.environ[s] = t3 os.environ[s] = t4 os.environ[s] = t5 - func = compile(fn, [str, str, str, str, str, str]) + func = compile(fn, [annmodel.s_Str0] * 6) func('PYPY_TEST_DICTLIKE_ENVIRON', 'a', 'b', 'c', 'FOOBAR', '42', expected_extra_mallocs = (2, 3, 4)) # at least two, less than 5 assert _real_getenv('PYPY_TEST_DICTLIKE_ENVIRON') == '42' @@ -678,7 +679,7 @@ else: raise Exception("should have raised!") # os.environ[s5] stays - func = compile(fn, [str, str, str, str, str]) + func = compile(fn, [annmodel.s_Str0] * 5) if hasattr(__import__(os.name), 'unsetenv'): expected_extra_mallocs = range(2, 10) # at least 2, less than 10: memory for s1, s2, s3, s4 should be freed @@ -743,7 +744,7 @@ raise AssertionError("should have failed!") result = os.listdir(s) return '/'.join(result) - func = compile(mylistdir, [str]) + func = compile(mylistdir, [annmodel.s_Str0]) for testdir in [str(udir), os.curdir]: result = func(testdir) result = result.split('/') diff --git a/pypy/translator/cli/test/runtest.py b/pypy/translator/cli/test/runtest.py --- a/pypy/translator/cli/test/runtest.py +++ b/pypy/translator/cli/test/runtest.py @@ -276,7 +276,7 @@ def get_annotation(x): if isinstance(x, basestring) and len(x) > 1: - return SomeString() + return SomeString(no_nul='\x00' not in x) else: return lltype_to_annotation(typeOf(x)) diff --git a/pypy/translator/driver.py b/pypy/translator/driver.py --- a/pypy/translator/driver.py +++ b/pypy/translator/driver.py @@ -184,6 +184,7 @@ self.standalone = standalone if standalone: + # the 'argv' parameter inputtypes = [s_list_of_strings] self.inputtypes = inputtypes diff --git a/pypy/translator/goal/nanos.py b/pypy/translator/goal/nanos.py --- a/pypy/translator/goal/nanos.py +++ b/pypy/translator/goal/nanos.py @@ -266,7 +266,7 @@ raise NotImplementedError("os.name == %r" % (os.name,)) def getenv(space, w_name): - name = space.str_w(w_name) + name = space.str0_w(w_name) return space.wrap(os.environ.get(name)) getenv_w = interp2app(getenv) diff --git a/pypy/translator/goal/targetpypystandalone.py b/pypy/translator/goal/targetpypystandalone.py --- a/pypy/translator/goal/targetpypystandalone.py +++ b/pypy/translator/goal/targetpypystandalone.py @@ -159,6 +159,8 @@ ## if config.translation.type_system == 'ootype': ## config.objspace.usemodules.suggest(rbench=True) + config.translation.suggest(check_str_without_nul=True) + if config.translation.thread: config.objspace.usemodules.thread = True elif config.objspace.usemodules.thread: From noreply at 
buildbot.pypy.org Mon Feb 13 00:57:52 2012 From: noreply at buildbot.pypy.org (mattip) Date: Mon, 13 Feb 2012 00:57:52 +0100 (CET) Subject: [pypy-commit] pypy numpypy-out: never create intermediates, add tests to verify Message-ID: <20120212235752.9F58B8203C@wyvern.cs.uni-duesseldorf.de> Author: mattip Branch: numpypy-out Changeset: r52403:4dad485d2e24 Date: 2012-02-12 23:20 +0200 http://bitbucket.org/pypy/pypy/changeset/4dad485d2e24/ Log: never create intermediates, add tests to verify diff --git a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py --- a/pypy/module/micronumpy/interp_ufuncs.py +++ b/pypy/module/micronumpy/interp_ufuncs.py @@ -168,22 +168,23 @@ 'output parameter shape mismatch, expecting %s' + ' , got %s', str(shape), str(out.shape)) #Test for dtype agreement, perhaps create an itermediate - if out.dtype != dtype - raise OperationError(space.w_TypeError, space.wrap( - "mismatched dtypes")) - return self.do_axis_reduce(obj, dtype, axis, out) + #if out.dtype != dtype: + # raise OperationError(space.w_TypeError, space.wrap( + # "mismatched dtypes")) + return self.do_axis_reduce(obj, out.find_dtype(), axis, out) else: result = W_NDimArray(support.product(shape), shape, dtype) return self.do_axis_reduce(obj, dtype, axis, result) - arr = ReduceArray(self.func, self.name, self.identity, obj, dtype) - val = loop.compute(arr) if out: if len(out.shape)>0: raise operationerrfmt(space.w_ValueError, "output parameter " "for reduction operation %s has too many" " dimensions",self.name) - out.value = out.dtype.coerce(space, val) - return out + arr = ReduceArray(self.func, self.name, self.identity, obj, + out.find_dtype()) + else: + arr = ReduceArray(self.func, self.name, self.identity, obj, dtype) + val = loop.compute(arr) return val def do_axis_reduce(self, obj, dtype, axis, result): diff --git a/pypy/module/micronumpy/signature.py b/pypy/module/micronumpy/signature.py --- a/pypy/module/micronumpy/signature.py +++ b/pypy/module/micronumpy/signature.py @@ -441,6 +441,5 @@ cur = arr.left.getitem(iterator.offset) value = self.binfunc(self.calc_dtype, cur, v) arr.left.setitem(iterator.offset, value) - def debug_repr(self): return 'AxisReduceSig(%s, %s)' % (self.name, self.right.debug_repr()) diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -896,7 +896,7 @@ assert (array([[1,2],[3,4]]).prod(1) == [2, 12]).all() def test_reduce_out(self): - from numpypy import arange, array + from numpypy import arange, zeros, array a = arange(15).reshape(5, 3) b = arange(12).reshape(4,3) c = a.sum(0, out=b[1]) @@ -906,6 +906,16 @@ a=arange(12).reshape(3,2,2) raises(ValueError, 'a.sum(0, out=arange(12).reshape(3,2,2))') raises(ValueError, 'a.sum(0, out=arange(3))') + c = array([-1, 0, 1]).sum(out=zeros([], dtype=bool)) + #You could argue that this should product False, but + # that would require an itermediate result. Cpython numpy + # gives True. 
+ assert c == True + a = array([[-1, 0, 1], [1, 0, -1]]) + c = a.sum(0, out=zeros((3,), dtype=bool)) + assert (c == [True, False, True]).all() + c = a.sum(1, out=zeros((2,), dtype=bool)) + assert (c == [True, True]).all() def test_reduce_intermediary(self): from numpypy import arange, array From noreply at buildbot.pypy.org Mon Feb 13 00:57:53 2012 From: noreply at buildbot.pypy.org (mattip) Date: Mon, 13 Feb 2012 00:57:53 +0100 (CET) Subject: [pypy-commit] pypy numpypy-out: merge from default Message-ID: <20120212235753.D9E158203C@wyvern.cs.uni-duesseldorf.de> Author: mattip Branch: numpypy-out Changeset: r52404:b945d10c4adf Date: 2012-02-12 23:21 +0200 http://bitbucket.org/pypy/pypy/changeset/b945d10c4adf/ Log: merge from default diff --git a/ctypes_configure/cbuild.py b/ctypes_configure/cbuild.py --- a/ctypes_configure/cbuild.py +++ b/ctypes_configure/cbuild.py @@ -206,8 +206,9 @@ cfiles += eci.separate_module_files include_dirs = list(eci.include_dirs) library_dirs = list(eci.library_dirs) - if sys.platform == 'darwin': # support Fink & Darwinports - for s in ('/sw/', '/opt/local/'): + if (sys.platform == 'darwin' or # support Fink & Darwinports + sys.platform.startswith('freebsd')): + for s in ('/sw/', '/opt/local/', '/usr/local/'): if s + 'include' not in include_dirs and \ os.path.exists(s + 'include'): include_dirs.append(s + 'include') @@ -380,9 +381,9 @@ self.link_extra += ['-pthread'] if sys.platform == 'win32': self.link_extra += ['/DEBUG'] # generate .pdb file - if sys.platform == 'darwin': - # support Fink & Darwinports - for s in ('/sw/', '/opt/local/'): + if (sys.platform == 'darwin' or # support Fink & Darwinports + sys.platform.startswith('freebsd')): + for s in ('/sw/', '/opt/local/', '/usr/local/'): if s + 'include' not in self.include_dirs and \ os.path.exists(s + 'include'): self.include_dirs.append(s + 'include') @@ -395,7 +396,6 @@ self.outputfilename = py.path.local(cfilenames[0]).new(ext=ext) else: self.outputfilename = py.path.local(outputfilename) - self.eci = eci def build(self, noerr=False): basename = self.outputfilename.new(ext='') @@ -436,7 +436,7 @@ old = cfile.dirpath().chdir() try: res = compiler.compile([cfile.basename], - include_dirs=self.eci.include_dirs, + include_dirs=self.include_dirs, extra_preargs=self.compile_extra) assert len(res) == 1 cobjfile = py.path.local(res[0]) @@ -445,9 +445,9 @@ finally: old.chdir() compiler.link_executable(objects, str(self.outputfilename), - libraries=self.eci.libraries, + libraries=self.libraries, extra_preargs=self.link_extra, - library_dirs=self.eci.library_dirs) + library_dirs=self.library_dirs) def build_executable(*args, **kwds): noerr = kwds.pop('noerr', False) diff --git a/lib-python/modified-2.7/UserDict.py b/lib-python/modified-2.7/UserDict.py --- a/lib-python/modified-2.7/UserDict.py +++ b/lib-python/modified-2.7/UserDict.py @@ -85,8 +85,12 @@ def __iter__(self): return iter(self.data) -import _abcoll -_abcoll.MutableMapping.register(IterableUserDict) +try: + import _abcoll +except ImportError: + pass # e.g. 
no '_weakref' module on this pypy +else: + _abcoll.MutableMapping.register(IterableUserDict) class DictMixin: diff --git a/lib_pypy/_subprocess.py b/lib_pypy/_subprocess.py --- a/lib_pypy/_subprocess.py +++ b/lib_pypy/_subprocess.py @@ -87,7 +87,7 @@ # Now the _subprocess module implementation -from ctypes import c_int as _c_int, byref as _byref +from ctypes import c_int as _c_int, byref as _byref, WinError as _WinError class _handle: def __init__(self, handle): @@ -116,7 +116,7 @@ res = _CreatePipe(_byref(read), _byref(write), None, size) if not res: - raise WindowsError("Error") + raise _WinError() return _handle(read.value), _handle(write.value) @@ -132,7 +132,7 @@ access, inherit, options) if not res: - raise WindowsError("Error") + raise _WinError() return _handle(target.value) DUPLICATE_SAME_ACCESS = 2 @@ -165,7 +165,7 @@ start_dir, _byref(si), _byref(pi)) if not res: - raise WindowsError("Error") + raise _WinError() return _handle(pi.hProcess), _handle(pi.hThread), pi.dwProcessID, pi.dwThreadID STARTF_USESHOWWINDOW = 0x001 @@ -178,7 +178,7 @@ res = _WaitForSingleObject(int(handle), milliseconds) if res < 0: - raise WindowsError("Error") + raise _WinError() return res INFINITE = 0xffffffff @@ -190,7 +190,7 @@ res = _GetExitCodeProcess(int(handle), _byref(code)) if not res: - raise WindowsError("Error") + raise _WinError() return code.value @@ -198,7 +198,7 @@ res = _TerminateProcess(int(handle), exitcode) if not res: - raise WindowsError("Error") + raise _WinError() def GetStdHandle(stdhandle): res = _GetStdHandle(stdhandle) diff --git a/lib_pypy/datetime.py b/lib_pypy/datetime.py --- a/lib_pypy/datetime.py +++ b/lib_pypy/datetime.py @@ -1520,7 +1520,7 @@ def utcfromtimestamp(cls, t): "Construct a UTC datetime from a POSIX timestamp (like time.time())." t, frac = divmod(t, 1.0) - us = round(frac * 1e6) + us = int(round(frac * 1e6)) # If timestamp is less than one microsecond smaller than a # full second, us can be rounded up to 1000000. In this case, diff --git a/pypy/doc/coding-guide.rst b/pypy/doc/coding-guide.rst --- a/pypy/doc/coding-guide.rst +++ b/pypy/doc/coding-guide.rst @@ -388,7 +388,9 @@ In a few cases (e.g. hash table manipulation), we need machine-sized unsigned arithmetic. For these cases there is the r_uint class, which is a pure Python implementation of word-sized unsigned integers that silently wrap - around. The purpose of this class (as opposed to helper functions as above) + around. ("word-sized" and "machine-sized" are used equivalently and mean + the native size, which you get using "unsigned long" in C.) + The purpose of this class (as opposed to helper functions as above) is consistent typing: both Python and the annotator will propagate r_uint instances in the program and interpret all the operations between them as unsigned. Instances of r_uint are special-cased by the code generators to diff --git a/pypy/doc/getting-started-python.rst b/pypy/doc/getting-started-python.rst --- a/pypy/doc/getting-started-python.rst +++ b/pypy/doc/getting-started-python.rst @@ -103,18 +103,22 @@ executable. The executable behaves mostly like a normal Python interpreter:: $ ./pypy-c - Python 2.7.0 (61ef2a11b56a, Mar 02 2011, 03:00:11) - [PyPy 1.6.0 with GCC 4.4.3] on linux2 + Python 2.7.2 (0e28b379d8b3, Feb 09 2012, 19:41:03) + [PyPy 1.8.0 with GCC 4.4.3] on linux2 Type "help", "copyright", "credits" or "license" for more information. 
And now for something completely different: ``this sentence is false'' >>>> 46 - 4 42 >>>> from test import pystone >>>> pystone.main() - Pystone(1.1) time for 50000 passes = 0.280017 - This machine benchmarks at 178561 pystones/second - >>>> + Pystone(1.1) time for 50000 passes = 0.220015 + This machine benchmarks at 227257 pystones/second + >>>> pystone.main() + Pystone(1.1) time for 50000 passes = 0.060004 + This machine benchmarks at 833278 pystones/second + >>>> +Note that pystone gets faster as the JIT kicks in. This executable can be moved around or copied on other machines; see Installation_ below. diff --git a/pypy/doc/getting-started.rst b/pypy/doc/getting-started.rst --- a/pypy/doc/getting-started.rst +++ b/pypy/doc/getting-started.rst @@ -55,11 +55,13 @@ $ tar xf pypy-1.8-linux.tar.bz2 $ ./pypy-1.8/bin/pypy - Python 2.7.1 (48ebdce33e1b, Feb 09 2012, 00:55:31) + Python 2.7.2 (0e28b379d8b3, Feb 09 2012, 19:41:03) [PyPy 1.8.0 with GCC 4.4.3] on linux2 Type "help", "copyright", "credits" or "license" for more information. - And now for something completely different: ``implementing LOGO in LOGO: - "turtles all the way down"'' + And now for something completely different: ``it seems to me that once you + settle on an execution / object model and / or bytecode format, you've already + decided what languages (where the 's' seems superfluous) support is going to be + first class for'' >>>> If you want to make PyPy available system-wide, you can put a symlink to the diff --git a/pypy/doc/release-1.8.0.rst b/pypy/doc/release-1.8.0.rst --- a/pypy/doc/release-1.8.0.rst +++ b/pypy/doc/release-1.8.0.rst @@ -3,13 +3,15 @@ ============================ We're pleased to announce the 1.8 release of PyPy. As habitual this -release brings a lot of bugfixes, together with performance and memory improvements over -the 1.7 release. The main highlight of the release is the introduction of -`list strategies`_ which makes homogenous lists more efficient both in terms -of performance and memory. This release also upgrades us from Python 2.7.1 compatibility to 2.7.2. Otherwise it's "business as usual" in the sense -that performance improved roughly 10% on average since the previous release. +release brings a lot of bugfixes, together with performance and memory +improvements over the 1.7 release. The main highlight of the release +is the introduction of `list strategies`_ which makes homogenous lists +more efficient both in terms of performance and memory. This release +also upgrades us from Python 2.7.1 compatibility to 2.7.2. Otherwise +it's "business as usual" in the sense that performance improved +roughly 10% on average since the previous release. -You can download the PyPy 1.8 release here: +you can download the PyPy 1.8 release here: http://pypy.org/download.html @@ -85,6 +87,9 @@ * It's also probably worth noting, we're considering donations for the Software Transactional Memory project. You can read more about `our plans`_ +Cheers, +The PyPy Team + .. _`brief overview`: http://doc.pypy.org/en/latest/jit-hooks.html .. _`numpy status page`: http://buildbot.pypy.org/numpy-status/latest.html .. 
_`numpy status update blog report`: http://morepypy.blogspot.com/2012/01/numpypy-status-update.html diff --git a/pypy/module/_io/test/test_fileio.py b/pypy/module/_io/test/test_fileio.py --- a/pypy/module/_io/test/test_fileio.py +++ b/pypy/module/_io/test/test_fileio.py @@ -134,7 +134,10 @@ assert a == 'a\nbxxxxxxx' def test_nonblocking_read(self): - import os, fcntl + try: + import os, fcntl + except ImportError: + skip("need fcntl to set nonblocking mode") r_fd, w_fd = os.pipe() # set nonblocking fcntl.fcntl(r_fd, fcntl.F_SETFL, os.O_NONBLOCK) diff --git a/pypy/module/cpyext/include/patchlevel.h b/pypy/module/cpyext/include/patchlevel.h --- a/pypy/module/cpyext/include/patchlevel.h +++ b/pypy/module/cpyext/include/patchlevel.h @@ -21,12 +21,12 @@ /* Version parsed out into numeric values */ #define PY_MAJOR_VERSION 2 #define PY_MINOR_VERSION 7 -#define PY_MICRO_VERSION 1 +#define PY_MICRO_VERSION 2 #define PY_RELEASE_LEVEL PY_RELEASE_LEVEL_FINAL #define PY_RELEASE_SERIAL 0 /* Version as a string */ -#define PY_VERSION "2.7.1" +#define PY_VERSION "2.7.2" /* PyPy version as a string */ #define PYPY_VERSION "1.8.1" diff --git a/pypy/module/micronumpy/__init__.py b/pypy/module/micronumpy/__init__.py --- a/pypy/module/micronumpy/__init__.py +++ b/pypy/module/micronumpy/__init__.py @@ -95,6 +95,7 @@ ("tan", "tan"), ('bitwise_and', 'bitwise_and'), ('bitwise_or', 'bitwise_or'), + ('bitwise_xor', 'bitwise_xor'), ('bitwise_not', 'invert'), ('isnan', 'isnan'), ('isinf', 'isinf'), diff --git a/pypy/module/micronumpy/interp_boxes.py b/pypy/module/micronumpy/interp_boxes.py --- a/pypy/module/micronumpy/interp_boxes.py +++ b/pypy/module/micronumpy/interp_boxes.py @@ -100,6 +100,7 @@ descr_rsub = _binop_right_impl("subtract") descr_rmul = _binop_right_impl("multiply") descr_rdiv = _binop_right_impl("divide") + descr_rtruediv = _binop_right_impl("true_divide") descr_rmod = _binop_right_impl("mod") descr_rpow = _binop_right_impl("power") descr_rlshift = _binop_right_impl("left_shift") @@ -216,6 +217,7 @@ __rsub__ = interp2app(W_GenericBox.descr_rsub), __rmul__ = interp2app(W_GenericBox.descr_rmul), __rdiv__ = interp2app(W_GenericBox.descr_rdiv), + __rtruediv__ = interp2app(W_GenericBox.descr_rtruediv), __rmod__ = interp2app(W_GenericBox.descr_rmod), __rdivmod__ = interp2app(W_GenericBox.descr_rdivmod), __rpow__ = interp2app(W_GenericBox.descr_rpow), diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -4,17 +4,17 @@ from pypy.interpreter.typedef import TypeDef, GetSetProperty from pypy.module.micronumpy import (interp_ufuncs, interp_dtype, interp_boxes, signature, support, loop) +from pypy.module.micronumpy.appbridge import get_appbridge_cache +from pypy.module.micronumpy.dot import multidim_dot, match_dot_shapes +from pypy.module.micronumpy.interp_iter import (ArrayIterator, + SkipLastAxisIterator, Chunk, ViewIterator) from pypy.module.micronumpy.strides import (calculate_slice_strides, shape_agreement, find_shape_and_elems, get_shape_from_iterable, calc_new_strides, to_coords) -from dot import multidim_dot, match_dot_shapes from pypy.rlib import jit +from pypy.rlib.rstring import StringBuilder from pypy.rpython.lltypesystem import lltype, rffi from pypy.tool.sourcetools import func_with_new_name -from pypy.rlib.rstring import StringBuilder -from pypy.module.micronumpy.interp_iter import (ArrayIterator, - SkipLastAxisIterator, Chunk, ViewIterator) -from 
pypy.module.micronumpy.appbridge import get_appbridge_cache count_driver = jit.JitDriver( @@ -101,6 +101,7 @@ descr_sub = _binop_impl("subtract") descr_mul = _binop_impl("multiply") descr_div = _binop_impl("divide") + descr_truediv = _binop_impl("true_divide") descr_mod = _binop_impl("mod") descr_pow = _binop_impl("power") descr_lshift = _binop_impl("left_shift") @@ -134,6 +135,7 @@ descr_rsub = _binop_right_impl("subtract") descr_rmul = _binop_right_impl("multiply") descr_rdiv = _binop_right_impl("divide") + descr_rtruediv = _binop_right_impl("true_divide") descr_rmod = _binop_right_impl("mod") descr_rpow = _binop_right_impl("power") descr_rlshift = _binop_right_impl("left_shift") @@ -1258,6 +1260,7 @@ __sub__ = interp2app(BaseArray.descr_sub), __mul__ = interp2app(BaseArray.descr_mul), __div__ = interp2app(BaseArray.descr_div), + __truediv__ = interp2app(BaseArray.descr_truediv), __mod__ = interp2app(BaseArray.descr_mod), __divmod__ = interp2app(BaseArray.descr_divmod), __pow__ = interp2app(BaseArray.descr_pow), @@ -1271,6 +1274,7 @@ __rsub__ = interp2app(BaseArray.descr_rsub), __rmul__ = interp2app(BaseArray.descr_rmul), __rdiv__ = interp2app(BaseArray.descr_rdiv), + __rtruediv__ = interp2app(BaseArray.descr_rtruediv), __rmod__ = interp2app(BaseArray.descr_rmod), __rdivmod__ = interp2app(BaseArray.descr_rdivmod), __rpow__ = interp2app(BaseArray.descr_rpow), diff --git a/pypy/module/micronumpy/test/test_dtypes.py b/pypy/module/micronumpy/test/test_dtypes.py --- a/pypy/module/micronumpy/test/test_dtypes.py +++ b/pypy/module/micronumpy/test/test_dtypes.py @@ -408,6 +408,7 @@ assert 5 / int_(2) == int_(2) assert truediv(int_(3), int_(2)) == float64(1.5) + assert truediv(3, int_(2)) == float64(1.5) assert int_(8) % int_(3) == int_(2) assert 8 % int_(3) == int_(2) assert divmod(int_(8), int_(3)) == (int_(2), int_(2)) diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -625,6 +625,13 @@ for i in range(5): assert b[i] == i / 5.0 + def test_truediv(self): + from operator import truediv + from _numpypy import arange + + assert (truediv(arange(5), 2) == [0., .5, 1., 1.5, 2.]).all() + assert (truediv(2, arange(3)) == [float("inf"), 2., 1.]).all() + def test_divmod(self): from _numpypy import arange diff --git a/pypy/module/micronumpy/test/test_ufuncs.py b/pypy/module/micronumpy/test/test_ufuncs.py --- a/pypy/module/micronumpy/test/test_ufuncs.py +++ b/pypy/module/micronumpy/test/test_ufuncs.py @@ -368,14 +368,14 @@ assert b.shape == (1, 4) assert (add.reduce(a, 0, keepdims=True) == [12, 15, 18, 21]).all() - def test_bitwise(self): - from _numpypy import bitwise_and, bitwise_or, arange, array + from _numpypy import bitwise_and, bitwise_or, bitwise_xor, arange, array a = arange(6).reshape(2, 3) assert (a & 1 == [[0, 1, 0], [1, 0, 1]]).all() assert (a & 1 == bitwise_and(a, 1)).all() assert (a | 1 == [[1, 1, 3], [3, 5, 5]]).all() assert (a | 1 == bitwise_or(a, 1)).all() + assert (a ^ 3 == bitwise_xor(a, 3)).all() raises(TypeError, 'array([1.0]) & 1') def test_unary_bitops(self): diff --git a/pypy/module/pypyjit/__init__.py b/pypy/module/pypyjit/__init__.py --- a/pypy/module/pypyjit/__init__.py +++ b/pypy/module/pypyjit/__init__.py @@ -13,6 +13,7 @@ 'ResOperation': 'interp_resop.WrappedOp', 'DebugMergePoint': 'interp_resop.DebugMergePoint', 'Box': 'interp_resop.WrappedBox', + 'PARAMETER_DOCS': 'space.wrap(pypy.rlib.jit.PARAMETER_DOCS)', } def 
setup_after_space_initialization(self): diff --git a/pypy/module/pypyjit/test/test_jit_setup.py b/pypy/module/pypyjit/test/test_jit_setup.py --- a/pypy/module/pypyjit/test/test_jit_setup.py +++ b/pypy/module/pypyjit/test/test_jit_setup.py @@ -45,6 +45,12 @@ pypyjit.set_compile_hook(None) pypyjit.set_param('default') + def test_doc(self): + import pypyjit + d = pypyjit.PARAMETER_DOCS + assert type(d) is dict + assert 'threshold' in d + def test_interface_residual_call(): space = gettestobjspace(usemodules=['pypyjit']) diff --git a/pypy/module/test_lib_pypy/test_datetime.py b/pypy/module/test_lib_pypy/test_datetime.py --- a/pypy/module/test_lib_pypy/test_datetime.py +++ b/pypy/module/test_lib_pypy/test_datetime.py @@ -22,3 +22,7 @@ del os.environ["TZ"] else: os.environ["TZ"] = prev_tz + +def test_utcfromtimestamp_microsecond(): + dt = datetime.datetime.utcfromtimestamp(0) + assert isinstance(dt.microsecond, int) diff --git a/pypy/tool/release/package.py b/pypy/tool/release/package.py --- a/pypy/tool/release/package.py +++ b/pypy/tool/release/package.py @@ -126,7 +126,7 @@ zf.close() else: archive = str(builddir.join(name + '.tar.bz2')) - if sys.platform == 'darwin': + if sys.platform == 'darwin' or sys.platform.startswith('freebsd'): e = os.system('tar --numeric-owner -cvjf ' + archive + " " + name) else: e = os.system('tar --owner=root --group=root --numeric-owner -cvjf ' + archive + " " + name) diff --git a/pypy/translator/c/gcc/trackgcroot.py b/pypy/translator/c/gcc/trackgcroot.py --- a/pypy/translator/c/gcc/trackgcroot.py +++ b/pypy/translator/c/gcc/trackgcroot.py @@ -478,6 +478,7 @@ 'cvt', 'ucomi', 'comi', 'subs', 'subp' , 'adds', 'addp', 'xorp', 'movap', 'movd', 'movlp', 'sqrtsd', 'movhpd', 'mins', 'minp', 'maxs', 'maxp', 'unpck', 'pxor', 'por', # sse2 + 'shufps', 'shufpd', # arithmetic operations should not produce GC pointers 'inc', 'dec', 'not', 'neg', 'or', 'and', 'sbb', 'adc', 'shl', 'shr', 'sal', 'sar', 'rol', 'ror', 'mul', 'imul', 'div', 'idiv', diff --git a/pypy/translator/goal/app_main.py b/pypy/translator/goal/app_main.py --- a/pypy/translator/goal/app_main.py +++ b/pypy/translator/goal/app_main.py @@ -139,8 +139,14 @@ items = pypyjit.defaults.items() items.sort() for key, value in items: - print ' --jit %s=N %s%s (default %s)' % ( - key, ' '*(18-len(key)), pypyjit.PARAMETER_DOCS[key], value) + prefix = ' --jit %s=N %s' % (key, ' '*(18-len(key))) + doc = '%s (default %s)' % (pypyjit.PARAMETER_DOCS[key], value) + while len(doc) > 51: + i = doc[:51].rfind(' ') + print prefix + doc[:i] + doc = doc[i+1:] + prefix = ' '*len(prefix) + print prefix + doc print ' --jit off turn off the JIT' def print_version(*args): From noreply at buildbot.pypy.org Mon Feb 13 00:57:55 2012 From: noreply at buildbot.pypy.org (mattip) Date: Mon, 13 Feb 2012 00:57:55 +0100 (CET) Subject: [pypy-commit] pypy numpypy-out: expose 'out' arguments, need lots of tests Message-ID: <20120212235755.1A6958203C@wyvern.cs.uni-duesseldorf.de> Author: mattip Branch: numpypy-out Changeset: r52405:04f3673a1de7 Date: 2012-02-13 01:57 +0200 http://bitbucket.org/pypy/pypy/changeset/04f3673a1de7/ Log: expose 'out' arguments, need lots of tests diff --git a/pypy/module/micronumpy/interp_boxes.py b/pypy/module/micronumpy/interp_boxes.py --- a/pypy/module/micronumpy/interp_boxes.py +++ b/pypy/module/micronumpy/interp_boxes.py @@ -59,21 +59,24 @@ return space.wrap(dtype.itemtype.bool(self)) def _binop_impl(ufunc_name): - def impl(self, space, w_other): + def impl(self, space, w_other, w_out=None): from pypy.module.micronumpy 
import interp_ufuncs - return getattr(interp_ufuncs.get(space), ufunc_name).call(space, [self, w_other]) + return getattr(interp_ufuncs.get(space), ufunc_name).call(space, + [self, w_other, w_out]) return func_with_new_name(impl, "binop_%s_impl" % ufunc_name) def _binop_right_impl(ufunc_name): - def impl(self, space, w_other): + def impl(self, space, w_other, w_out=None): from pypy.module.micronumpy import interp_ufuncs - return getattr(interp_ufuncs.get(space), ufunc_name).call(space, [w_other, self]) + return getattr(interp_ufuncs.get(space), ufunc_name).call(space, + [w_other, self, w_out]) return func_with_new_name(impl, "binop_right_%s_impl" % ufunc_name) def _unaryop_impl(ufunc_name): - def impl(self, space): + def impl(self, space, w_out=None): from pypy.module.micronumpy import interp_ufuncs - return getattr(interp_ufuncs.get(space), ufunc_name).call(space, [self]) + return getattr(interp_ufuncs.get(space), ufunc_name).call(space, + [self, w_out]) return func_with_new_name(impl, "unaryop_%s_impl" % ufunc_name) descr_add = _binop_impl("add") diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -83,8 +83,9 @@ return space.wrap(W_NDimArray(size, shape[:], dtype=dtype)) def _unaryop_impl(ufunc_name): - def impl(self, space): - return getattr(interp_ufuncs.get(space), ufunc_name).call(space, [self]) + def impl(self, space, w_out=None): + return getattr(interp_ufuncs.get(space), ufunc_name).call(space, + [self, w_out]) return func_with_new_name(impl, "unaryop_%s_impl" % ufunc_name) descr_pos = _unaryop_impl("positive") @@ -93,8 +94,9 @@ descr_invert = _unaryop_impl("invert") def _binop_impl(ufunc_name): - def impl(self, space, w_other): - return getattr(interp_ufuncs.get(space), ufunc_name).call(space, [self, w_other]) + def impl(self, space, w_other, w_out=None): + return getattr(interp_ufuncs.get(space), ufunc_name).call(space, + [self, w_other, w_out]) return func_with_new_name(impl, "binop_%s_impl" % ufunc_name) descr_add = _binop_impl("add") @@ -123,12 +125,12 @@ return space.newtuple([w_quotient, w_remainder]) def _binop_right_impl(ufunc_name): - def impl(self, space, w_other): + def impl(self, space, w_other, w_out=None): w_other = scalar_w(space, interp_ufuncs.find_dtype_for_scalar(space, w_other, self.find_dtype()), w_other ) - return getattr(interp_ufuncs.get(space), ufunc_name).call(space, [w_other, self]) + return getattr(interp_ufuncs.get(space), ufunc_name).call(space, [w_other, self, w_out]) return func_with_new_name(impl, "binop_right_%s_impl" % ufunc_name) descr_radd = _binop_right_impl("add") @@ -155,11 +157,11 @@ axis = -1 else: axis = space.int_w(w_axis) - if space.is_w(w_out, space.w_None): + if space.is_w(w_out, space.w_None) or not w_out: out = None elif not isinstance(w_out, BaseArray): - raise OperationError(space.w_TypeError, space.wrap( - 'output must be an array')) + raise OperationError(space.w_TypeError, space.wrap( + 'output must be an array')) else: out = w_out return getattr(interp_ufuncs.get(space), ufunc_name).reduce(space, @@ -215,14 +217,15 @@ descr_argmax = _reduce_argmax_argmin_impl("max") descr_argmin = _reduce_argmax_argmin_impl("min") - def descr_dot(self, space, w_other): + def descr_dot(self, space, w_other, w_out=None): other = convert_to_array(space, w_other) if isinstance(other, Scalar): + #Note: w_out is not modified, this is numpy compliant. 
return self.descr_mul(space, other) elif len(self.shape) < 2 and len(other.shape) < 2: - w_res = self.descr_mul(space, other) + w_res = self.descr_mul(space, other, w_out) assert isinstance(w_res, BaseArray) - return w_res.descr_sum(space, space.wrap(-1)) + return w_res.descr_sum(space, space.wrap(-1), w_out) dtype = interp_ufuncs.find_binop_result_dtype(space, self.find_dtype(), other.find_dtype()) if self.size < 1 and other.size < 1: @@ -707,11 +710,12 @@ """ Class for representing virtual arrays, such as binary ops or ufuncs """ - def __init__(self, name, shape, res_dtype): + def __init__(self, name, shape, res_dtype, out_arg=None): BaseArray.__init__(self, shape) self.forced_result = None self.res_dtype = res_dtype self.name = name + self.res = out_arg def _del_sources(self): # Function for deleting references to source arrays, @@ -719,7 +723,8 @@ raise NotImplementedError def compute(self): - ra = ResultArray(self, self.size, self.shape, self.res_dtype) + ra = ResultArray(self, self.size, self.shape, self.res_dtype, + self.res) loop.compute(ra) return ra.left @@ -766,8 +771,9 @@ class Call1(VirtualArray): - def __init__(self, ufunc, name, shape, calc_dtype, res_dtype, values): - VirtualArray.__init__(self, name, shape, res_dtype) + def __init__(self, ufunc, name, shape, calc_dtype, res_dtype, values, + out_arg=None): + VirtualArray.__init__(self, name, shape, res_dtype, out_arg) self.values = values self.size = values.size self.ufunc = ufunc @@ -788,8 +794,9 @@ """ _immutable_fields_ = ['left', 'right'] - def __init__(self, ufunc, name, shape, calc_dtype, res_dtype, left, right): - VirtualArray.__init__(self, name, shape, res_dtype) + def __init__(self, ufunc, name, shape, calc_dtype, res_dtype, left, right, + out_arg=None): + VirtualArray.__init__(self, name, shape, res_dtype, out_arg) self.ufunc = ufunc self.left = left self.right = right diff --git a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py --- a/pypy/module/micronumpy/interp_ufuncs.py +++ b/pypy/module/micronumpy/interp_ufuncs.py @@ -28,14 +28,18 @@ return self.identity def descr_call(self, space, __args__): + from interp_numarray import BaseArray args_w, kwds_w = __args__.unpack() # it occurs to me that we don't support any datatypes that # require casting, change it later when we do kwds_w.pop('casting', None) w_subok = kwds_w.pop('subok', None) w_out = kwds_w.pop('out', space.w_None) - if ((w_subok is not None and space.is_true(w_subok)) or - not space.is_w(w_out, space.w_None)): + if space.is_w(w_out, space.w_None): + out = None + else: + out = w_out + if (w_subok is not None and space.is_true(w_subok)): raise OperationError(space.w_NotImplementedError, space.wrap("parameters unsupported")) if kwds_w or len(args_w) < self.argcount: @@ -43,11 +47,14 @@ space.wrap("invalid number of arguments") ) elif len(args_w) > self.argcount: - # The extra arguments should actually be the output array, but we - # don't support that yet. 
raise OperationError(space.w_TypeError, space.wrap("invalid number of arguments") ) + elif out is not None: + args_w = args_w[:] + [out] + if args_w[-1] and not isinstance(args_w[-1], BaseArray): + raise OperationError(space.w_TypeError, space.wrap( + 'output must be an array')) return self.call(space, args_w) @unwrap_spec(skipna=bool, keepdims=bool) @@ -105,6 +112,7 @@ array([[ 1, 5], [ 9, 13]]) """ + from pypy.module.micronumpy.interp_numarray import BaseArray if w_axis is None: axis = 0 elif space.is_w(w_axis, space.w_None): @@ -113,7 +121,7 @@ axis = space.int_w(w_axis) if space.is_w(w_out, space.w_None): out = None - elif not isinstance(w_out, W_NDimArray): + elif not isinstance(w_out, BaseArray): raise OperationError(space.w_TypeError, space.wrap( 'output must be an array')) else: @@ -165,8 +173,11 @@ ' does not have enough dimensions', self.name) elif out.shape != shape: raise operationerrfmt(space.w_ValueError, - 'output parameter shape mismatch, expecting %s' + - ' , got %s', str(shape), str(out.shape)) + 'output parameter shape mismatch, expecting [%s]' + + ' , got [%s]', + ",".join([str(x) for x in shape]), + ",".join([str(x) for x in out.shape]), + ) #Test for dtype agreement, perhaps create an itermediate #if out.dtype != dtype: # raise OperationError(space.w_TypeError, space.wrap( @@ -182,9 +193,12 @@ " dimensions",self.name) arr = ReduceArray(self.func, self.name, self.identity, obj, out.find_dtype()) + val = loop.compute(arr) + assert isinstance(out, Scalar) + out.value = val else: arr = ReduceArray(self.func, self.name, self.identity, obj, dtype) - val = loop.compute(arr) + val = loop.compute(arr) return val def do_axis_reduce(self, obj, dtype, axis, result): @@ -211,7 +225,7 @@ from pypy.module.micronumpy.interp_numarray import (Call1, convert_to_array, Scalar) - [w_obj] = args_w + [w_obj, w_out] = args_w w_obj = convert_to_array(space, w_obj) calc_dtype = find_unaryop_result_dtype(space, w_obj.find_dtype(), @@ -244,17 +258,25 @@ def call(self, space, args_w): from pypy.module.micronumpy.interp_numarray import (Call2, - convert_to_array, Scalar, shape_agreement) + convert_to_array, Scalar, shape_agreement, BaseArray) - [w_lhs, w_rhs] = args_w + [w_lhs, w_rhs, w_out] = args_w w_lhs = convert_to_array(space, w_lhs) w_rhs = convert_to_array(space, w_rhs) - calc_dtype = find_binop_result_dtype(space, - w_lhs.find_dtype(), w_rhs.find_dtype(), - int_only=self.int_only, - promote_to_float=self.promote_to_float, - promote_bools=self.promote_bools, - ) + if space.is_w(w_out, space.w_None) or not w_out: + out = None + calc_dtype = find_binop_result_dtype(space, + w_lhs.find_dtype(), w_rhs.find_dtype(), + int_only=self.int_only, + promote_to_float=self.promote_to_float, + promote_bools=self.promote_bools, + ) + elif not isinstance(w_out, BaseArray): + raise OperationError(space.w_TypeError, space.wrap( + 'output must be an array')) + else: + out = w_out + calc_dtype = out.find_dtype() if self.comparison_func: res_dtype = interp_dtype.get_dtype_cache(space).w_booldtype else: @@ -265,9 +287,10 @@ w_rhs.value.convert_to(calc_dtype) )) new_shape = shape_agreement(space, w_lhs.shape, w_rhs.shape) + # Test correctness of out.shape w_res = Call2(self.func, self.name, new_shape, calc_dtype, - res_dtype, w_lhs, w_rhs) + res_dtype, w_lhs, w_rhs, out) w_lhs.add_invalidates(w_res) w_rhs.add_invalidates(w_res) return w_res diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ 
b/pypy/module/micronumpy/test/test_numarray.py @@ -862,7 +862,7 @@ assert (arange(10).reshape(5, 2).mean(axis=1) == [0.5, 2.5, 4.5, 6.5, 8.5]).all() def test_sum(self): - from _numpypy import array,zeros + from _numpypy import array a = array(range(5)) assert a.sum() == 10 assert a[:4].sum() == 6 @@ -874,7 +874,7 @@ d = array(0.) b = a.sum(out=d) assert b == d - assert b.dtype == d.dtype + assert isinstance(b, float) def test_reduce_nd(self): from numpypy import arange, array, multiply From noreply at buildbot.pypy.org Mon Feb 13 02:19:21 2012 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Mon, 13 Feb 2012 02:19:21 +0100 (CET) Subject: [pypy-commit] pypy numpy-record-type-pure-python: TODO -- for fijal :) Message-ID: <20120213011921.2F6F98203C@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: numpy-record-type-pure-python Changeset: r52406:877cf1960916 Date: 2012-02-12 20:19 -0500 http://bitbucket.org/pypy/pypy/changeset/877cf1960916/ Log: TODO -- for fijal :) diff --git a/TODO.rst b/TODO.rst new file mode 100644 --- /dev/null +++ b/TODO.rst @@ -0,0 +1,6 @@ +TODO +==== + +* Do something with endianess markers. +* Actually implement like half the things which are currently ``NameError``. +* Bring in the Python code needed for the above to work. From noreply at buildbot.pypy.org Mon Feb 13 10:22:08 2012 From: noreply at buildbot.pypy.org (arigo) Date: Mon, 13 Feb 2012 10:22:08 +0100 (CET) Subject: [pypy-commit] pypy.org extradoc: Add a paragraph "Abuse of itertools". Message-ID: <20120213092208.B754C8203C@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: extradoc Changeset: r327:aa71b020271b Date: 2012-02-13 10:15 +0100 http://bitbucket.org/pypy/pypy.org/changeset/aa71b020271b/ Log: Add a paragraph "Abuse of itertools". diff --git a/source/performance.txt b/source/performance.txt --- a/source/performance.txt +++ b/source/performance.txt @@ -68,6 +68,15 @@ sometimes not. In most cases (like ``csv`` and ``cPickle``), we're slower than cPython, with the notable exception of ``json`` and ``heapq``. +* **Abuse of itertools**: The itertools module is often "abused" in the + sense that it is used for the wrong purposes. From our point of view, + itertools is great if you have iterations over millions of items, but + not for most other cases. It gives you 3 lines in functional style + that replace 10 lines of Python loops (longer but arguably much easier + to read). The pure Python version is generally not slower even on + CPython, and on PyPy it allows the JIT to work much better --- simple + Python code is fast. + We generally consider things that are slower on PyPy than CPython to be bugs of PyPy. If you find some issue that is not documented here, please report it to our `bug tracker`_ for investigation. From noreply at buildbot.pypy.org Mon Feb 13 10:22:09 2012 From: noreply at buildbot.pypy.org (arigo) Date: Mon, 13 Feb 2012 10:22:09 +0100 (CET) Subject: [pypy-commit] pypy.org extradoc: Complete Message-ID: <20120213092209.C308B8203C@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: extradoc Changeset: r328:ec3e1b912cd9 Date: 2012-02-13 10:21 +0100 http://bitbucket.org/pypy/pypy.org/changeset/ec3e1b912cd9/ Log: Complete diff --git a/source/performance.txt b/source/performance.txt --- a/source/performance.txt +++ b/source/performance.txt @@ -75,7 +75,10 @@ that replace 10 lines of Python loops (longer but arguably much easier to read). 
The pure Python version is generally not slower even on CPython, and on PyPy it allows the JIT to work much better --- simple - Python code is fast. + Python code is fast. The same argument also applies to ``filter()``, + ``reduce()``, and to some extend ``map()`` (although the simple case + is JITted), and to all usages of the ``operator`` module we can think + of. We generally consider things that are slower on PyPy than CPython to be bugs of PyPy. If you find some issue that is not documented here, From noreply at buildbot.pypy.org Mon Feb 13 10:22:10 2012 From: noreply at buildbot.pypy.org (arigo) Date: Mon, 13 Feb 2012 10:22:10 +0100 (CET) Subject: [pypy-commit] pypy.org extradoc: merge heads Message-ID: <20120213092210.D28B58203C@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: extradoc Changeset: r329:656ba44c4a50 Date: 2012-02-13 10:21 +0100 http://bitbucket.org/pypy/pypy.org/changeset/656ba44c4a50/ Log: merge heads diff --git a/performance.html b/performance.html --- a/performance.html +++ b/performance.html @@ -1,7 +1,7 @@ - PyPy :: PyPy + PyPy :: Performance @@ -44,7 +44,7 @@

-PyPy
+Performance

    One of the goals of the PyPy project is to provide a fast and compliant python interpreter. Some of the ways we achieve this are by providing a high-performance garbage collector (GC) and a high-performance diff --git a/source/performance.txt b/source/performance.txt --- a/source/performance.txt +++ b/source/performance.txt @@ -1,6 +1,6 @@ --- layout: page -title: PyPy +title: Performance --- One of the goals of the PyPy project is to provide a fast and compliant @@ -63,10 +63,11 @@ that uses something like ``ctypes`` for the interface. * **Missing RPython modules**: A few modules of the standard library - (like ``csv`` and ``cPickle``) are in C in CPython, but in pure Python - in PyPy. Sometimes the JIT is able to do a good job on them, and - sometimes not. In most cases (like ``csv`` and ``cPickle``), we're slower - than cPython, with the notable exception of ``json`` and ``heapq``. + (like ``csv`` and ``cPickle``) are written in C in CPython, but written + natively in pure Python in PyPy. Sometimes the JIT is able to do a + good job on them, and sometimes not. In most cases (like ``csv`` and + ``cPickle``), we're slower than cPython, with the notable exception of + ``json`` and ``heapq``. * **Abuse of itertools**: The itertools module is often "abused" in the sense that it is used for the wrong purposes. From our point of view, From noreply at buildbot.pypy.org Mon Feb 13 10:33:58 2012 From: noreply at buildbot.pypy.org (arigo) Date: Mon, 13 Feb 2012 10:33:58 +0100 (CET) Subject: [pypy-commit] pypy.org extradoc: Typo in capitalization. Regen. Message-ID: <20120213093358.7345C8203C@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: extradoc Changeset: r330:068ecf60cdf8 Date: 2012-02-13 10:33 +0100 http://bitbucket.org/pypy/pypy.org/changeset/068ecf60cdf8/ Log: Typo in capitalization. Regen. diff --git a/performance.html b/performance.html --- a/performance.html +++ b/performance.html @@ -99,10 +99,22 @@ might be worthwhile to consider rewriting it as a pure Python version that uses something like ctypes for the interface.

 • Missing RPython modules: A few modules of the standard library
-(like csv and cPickle) are in C in CPython, but in pure Python
-in PyPy. Sometimes the JIT is able to do a good job on them, and
-sometimes not. In most cases (like csv and cPickle), we're slower
-than cPython, with the notable exception of json and heapq.
+(like csv and cPickle) are written in C in CPython, but written
+natively in pure Python in PyPy. Sometimes the JIT is able to do a
+good job on them, and sometimes not. In most cases (like csv and
+cPickle), we're slower than CPython, with the notable exception of
+json and heapq.
+
+• Abuse of itertools: The itertools module is often “abused” in the
+sense that it is used for the wrong purposes. From our point of view,
+itertools is great if you have iterations over millions of items, but
+not for most other cases. It gives you 3 lines in functional style
+that replace 10 lines of Python loops (longer but arguably much easier
+to read). The pure Python version is generally not slower even on
+CPython, and on PyPy it allows the JIT to work much better – simple
+Python code is fast. The same argument also applies to filter(),
+reduce(), and to some extend map() (although the simple case
+is JITted), and to all usages of the operator module we can think
+of.
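
As a rough editorial sketch of the advice in the bullet above (not part of
the archived diff; the ``imap``/``itemgetter`` calls below are only a
hypothetical Python 2 illustration of the functional style being
discouraged)::

    import itertools, operator

    data = [(i, i * 2) for i in range(1000)]

    # Functional style: short, but the kind of itertools/operator use the
    # paragraph above argues against on PyPy.
    total_functional = sum(itertools.imap(operator.itemgetter(1), data))

    # Plain loop: a few lines longer, but "simple Python code is fast"
    # under the JIT.
    total_loop = 0
    for _, value in data:
        total_loop += value

    assert total_functional == total_loop

On CPython the two versions perform about the same; the point of the
paragraph is that the explicit loop is the form the JIT handles best.
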
  • We generally consider things that are slower on PyPy than CPython to be bugs of PyPy. If you find some issue that is not documented here, diff --git a/source/performance.txt b/source/performance.txt --- a/source/performance.txt +++ b/source/performance.txt @@ -66,7 +66,7 @@ (like ``csv`` and ``cPickle``) are written in C in CPython, but written natively in pure Python in PyPy. Sometimes the JIT is able to do a good job on them, and sometimes not. In most cases (like ``csv`` and - ``cPickle``), we're slower than cPython, with the notable exception of + ``cPickle``), we're slower than CPython, with the notable exception of ``json`` and ``heapq``. * **Abuse of itertools**: The itertools module is often "abused" in the From noreply at buildbot.pypy.org Mon Feb 13 10:41:45 2012 From: noreply at buildbot.pypy.org (fijal) Date: Mon, 13 Feb 2012 10:41:45 +0100 (CET) Subject: [pypy-commit] pypy numpy-record-dtypes: implement void read Message-ID: <20120213094145.BA97D8203C@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-record-dtypes Changeset: r52407:bb07db6244fc Date: 2012-02-13 11:41 +0200 http://bitbucket.org/pypy/pypy/changeset/bb07db6244fc/ Log: implement void read diff --git a/pypy/module/micronumpy/interp_boxes.py b/pypy/module/micronumpy/interp_boxes.py --- a/pypy/module/micronumpy/interp_boxes.py +++ b/pypy/module/micronumpy/interp_boxes.py @@ -1,6 +1,6 @@ from pypy.interpreter.baseobjspace import Wrappable -from pypy.interpreter.error import operationerrfmt -from pypy.interpreter.gateway import interp2app +from pypy.interpreter.error import operationerrfmt, OperationError +from pypy.interpreter.gateway import interp2app, unwrap_spec from pypy.interpreter.typedef import TypeDef from pypy.objspace.std.floattype import float_typedef from pypy.objspace.std.stringtype import str_typedef @@ -8,6 +8,7 @@ from pypy.objspace.std.inttype import int_typedef from pypy.rlib.rarithmetic import LONG_BIT from pypy.tool.sourcetools import func_with_new_name +from pypy.rpython.lltypesystem import lltype MIXIN_32 = (int_typedef,) if LONG_BIT == 32 else () MIXIN_64 = (int_typedef,) if LONG_BIT == 64 else () @@ -169,7 +170,24 @@ pass class W_VoidBox(W_FlexibleBox): - pass + def __init__(self, dtype, arr): + self.arr = arr + self.dtype = dtype + + def get_dtype(self, space): + return self.dtype + + @unwrap_spec(item=str) + def descr_getitem(self, space, item): + try: + ofs, dtype = self.dtype.fields[item] + except KeyError: + raise OperationError(space.w_KeyError, space.wrap("Field %s does not exist" % item)) + return dtype.itemtype.read(dtype, self.arr, + dtype.itemtype.get_element_size(), 0, ofs) + + def __del__(self): + lltype.free(self.arr, flavor='raw', track_allocation=False) class W_CharacterBox(W_FlexibleBox): pass @@ -309,6 +327,7 @@ W_VoidBox.typedef = TypeDef("void", W_FlexibleBox.typedef, __module__ = "numpypy", + __getitem__ = interp2app(W_VoidBox.descr_getitem), ) W_CharacterBox.typedef = TypeDef("character", W_FlexibleBox.typedef, diff --git a/pypy/module/micronumpy/interp_dtype.py b/pypy/module/micronumpy/interp_dtype.py --- a/pypy/module/micronumpy/interp_dtype.py +++ b/pypy/module/micronumpy/interp_dtype.py @@ -39,10 +39,10 @@ def malloc(self, length): # XXX find out why test_zjit explodes with tracking of allocations - return lltype.malloc(VOID_STORAGE, self.itemtype.get_element_size() * length, - zero=True, flavor="raw", - track_allocation=False, add_memory_pressure=True - ) + return lltype.malloc(VOID_STORAGE, + self.itemtype.get_element_size() * length, + 
zero=True, flavor="raw", + track_allocation=False, add_memory_pressure=True) @specialize.argtype(1) def box(self, value): @@ -52,7 +52,7 @@ return self.itemtype.coerce(space, w_item) def getitem(self, storage, i): - return self.itemtype.read(storage, self.itemtype.get_element_size(), i, 0) + return self.itemtype.read(self, storage, self.itemtype.get_element_size(), i, 0) def getitem_bool(self, storage, i): isize = self.itemtype.get_element_size() diff --git a/pypy/module/micronumpy/test/test_dtypes.py b/pypy/module/micronumpy/test/test_dtypes.py --- a/pypy/module/micronumpy/test/test_dtypes.py +++ b/pypy/module/micronumpy/test/test_dtypes.py @@ -515,6 +515,7 @@ raises(KeyError, 'd.fields["xyz"]') def test_create_from_dict(self): + skip("not yet") from _numpypy import dtype d = dtype({'names': ['a', 'b', 'c'], }) diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -1791,3 +1791,11 @@ cache = get_appbridge_cache(cls.space) cache.w_array_repr = cls.old_array_repr cache.w_array_str = cls.old_array_str + +class AppTestRecordDtype(BaseNumpyAppTest): + def test_zeros(self): + from _numpypy import zeros + a = zeros(2, dtype=[('x', int), ('y', float)]) + raises(KeyError, 'a[0]["xyz"]') + assert a[0]['x'] == 0 + assert a[0]['y'] == 0 diff --git a/pypy/module/micronumpy/types.py b/pypy/module/micronumpy/types.py --- a/pypy/module/micronumpy/types.py +++ b/pypy/module/micronumpy/types.py @@ -104,7 +104,7 @@ return libffi.array_getitem(clibffi.cast_type_to_ffitype(self.T), width, storage, i, offset) - def read(self, storage, width, i, offset): + def read(self, dtype, storage, width, i, offset): return self.box(self._read(storage, width, i, offset)) def read_bool(self, storage, width, i, offset): @@ -624,7 +624,11 @@ NonNativeUnicodeType = UnicodeType class RecordType(CompositeType): - pass + def read(self, dtype, storage, width, i, offset): + arr = dtype.malloc(1) + for j in range(width): + arr[j] = storage[i + j] + return interp_boxes.W_VoidBox(dtype, arr) for tp in [Int32, Int64]: if tp.T == lltype.Signed: From noreply at buildbot.pypy.org Mon Feb 13 11:31:05 2012 From: noreply at buildbot.pypy.org (arigo) Date: Mon, 13 Feb 2012 11:31:05 +0100 (CET) Subject: [pypy-commit] pypy default: Remove the pyexpat module in pure Python. Unless someone Message-ID: <20120213103105.BFDA48203C@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r52408:347ee7a31d2d Date: 2012-02-13 11:30 +0100 http://bitbucket.org/pypy/pypy/changeset/347ee7a31d2d/ Log: Remove the pyexpat module in pure Python. Unless someone wants to resurrect it, it is confusing because it is only a Python-2.5-compatible version. diff --git a/lib_pypy/ctypes_config_cache/pyexpat.ctc.py b/lib_pypy/ctypes_config_cache/pyexpat.ctc.py deleted file mode 100644 --- a/lib_pypy/ctypes_config_cache/pyexpat.ctc.py +++ /dev/null @@ -1,45 +0,0 @@ -""" -'ctypes_configure' source for pyexpat.py. -Run this to rebuild _pyexpat_cache.py. 
-""" - -import ctypes -from ctypes import c_char_p, c_int, c_void_p, c_char -from ctypes_configure import configure -import dumpcache - - -class CConfigure: - _compilation_info_ = configure.ExternalCompilationInfo( - includes = ['expat.h'], - libraries = ['expat'], - pre_include_lines = [ - '#define XML_COMBINED_VERSION (10000*XML_MAJOR_VERSION+100*XML_MINOR_VERSION+XML_MICRO_VERSION)'], - ) - - XML_Char = configure.SimpleType('XML_Char', c_char) - XML_COMBINED_VERSION = configure.ConstantInteger('XML_COMBINED_VERSION') - for name in ['XML_PARAM_ENTITY_PARSING_NEVER', - 'XML_PARAM_ENTITY_PARSING_UNLESS_STANDALONE', - 'XML_PARAM_ENTITY_PARSING_ALWAYS']: - locals()[name] = configure.ConstantInteger(name) - - XML_Encoding = configure.Struct('XML_Encoding',[ - ('data', c_void_p), - ('convert', c_void_p), - ('release', c_void_p), - ('map', c_int * 256)]) - XML_Content = configure.Struct('XML_Content',[ - ('numchildren', c_int), - ('children', c_void_p), - ('name', c_char_p), - ('type', c_int), - ('quant', c_int), - ]) - # this is insanely stupid - XML_FALSE = configure.ConstantInteger('XML_FALSE') - XML_TRUE = configure.ConstantInteger('XML_TRUE') - -config = configure.configure(CConfigure) - -dumpcache.dumpcache2('pyexpat', config) diff --git a/lib_pypy/ctypes_config_cache/test/test_cache.py b/lib_pypy/ctypes_config_cache/test/test_cache.py --- a/lib_pypy/ctypes_config_cache/test/test_cache.py +++ b/lib_pypy/ctypes_config_cache/test/test_cache.py @@ -39,10 +39,6 @@ d = run('resource.ctc.py', '_resource_cache.py') assert 'RLIM_NLIMITS' in d -def test_pyexpat(): - d = run('pyexpat.ctc.py', '_pyexpat_cache.py') - assert 'XML_COMBINED_VERSION' in d - def test_locale(): d = run('locale.ctc.py', '_locale_cache.py') assert 'LC_ALL' in d diff --git a/lib_pypy/pyexpat.py b/lib_pypy/pyexpat.py deleted file mode 100644 --- a/lib_pypy/pyexpat.py +++ /dev/null @@ -1,448 +0,0 @@ - -import ctypes -import ctypes.util -from ctypes import c_char_p, c_int, c_void_p, POINTER, c_char, c_wchar_p -import sys - -# load the platform-specific cache made by running pyexpat.ctc.py -from ctypes_config_cache._pyexpat_cache import * - -try: from __pypy__ import builtinify -except ImportError: builtinify = lambda f: f - - -lib = ctypes.CDLL(ctypes.util.find_library('expat')) - - -XML_Content.children = POINTER(XML_Content) -XML_Parser = ctypes.c_void_p # an opaque pointer -assert XML_Char is ctypes.c_char # this assumption is everywhere in -# cpython's expat, let's explode - -def declare_external(name, args, res): - func = getattr(lib, name) - func.args = args - func.restype = res - globals()[name] = func - -declare_external('XML_ParserCreate', [c_char_p], XML_Parser) -declare_external('XML_ParserCreateNS', [c_char_p, c_char], XML_Parser) -declare_external('XML_Parse', [XML_Parser, c_char_p, c_int, c_int], c_int) -currents = ['CurrentLineNumber', 'CurrentColumnNumber', - 'CurrentByteIndex'] -for name in currents: - func = getattr(lib, 'XML_Get' + name) - func.args = [XML_Parser] - func.restype = c_int - -declare_external('XML_SetReturnNSTriplet', [XML_Parser, c_int], None) -declare_external('XML_GetSpecifiedAttributeCount', [XML_Parser], c_int) -declare_external('XML_SetParamEntityParsing', [XML_Parser, c_int], None) -declare_external('XML_GetErrorCode', [XML_Parser], c_int) -declare_external('XML_StopParser', [XML_Parser, c_int], None) -declare_external('XML_ErrorString', [c_int], c_char_p) -declare_external('XML_SetBase', [XML_Parser, c_char_p], None) -if XML_COMBINED_VERSION >= 19505: - 
declare_external('XML_UseForeignDTD', [XML_Parser, c_int], None) - -declare_external('XML_SetUnknownEncodingHandler', [XML_Parser, c_void_p, - c_void_p], None) -declare_external('XML_FreeContentModel', [XML_Parser, POINTER(XML_Content)], - None) -declare_external('XML_ExternalEntityParserCreate', [XML_Parser,c_char_p, - c_char_p], - XML_Parser) - -handler_names = [ - 'StartElement', - 'EndElement', - 'ProcessingInstruction', - 'CharacterData', - 'UnparsedEntityDecl', - 'NotationDecl', - 'StartNamespaceDecl', - 'EndNamespaceDecl', - 'Comment', - 'StartCdataSection', - 'EndCdataSection', - 'Default', - 'DefaultHandlerExpand', - 'NotStandalone', - 'ExternalEntityRef', - 'StartDoctypeDecl', - 'EndDoctypeDecl', - 'EntityDecl', - 'XmlDecl', - 'ElementDecl', - 'AttlistDecl', - ] -if XML_COMBINED_VERSION >= 19504: - handler_names.append('SkippedEntity') -setters = {} - -for name in handler_names: - if name == 'DefaultHandlerExpand': - newname = 'XML_SetDefaultHandlerExpand' - else: - name += 'Handler' - newname = 'XML_Set' + name - cfunc = getattr(lib, newname) - cfunc.args = [XML_Parser, ctypes.c_void_p] - cfunc.result = ctypes.c_int - setters[name] = cfunc - -class ExpatError(Exception): - def __str__(self): - return self.s - -error = ExpatError - -class XMLParserType(object): - specified_attributes = 0 - ordered_attributes = 0 - returns_unicode = 1 - encoding = 'utf-8' - def __init__(self, encoding, namespace_separator, _hook_external_entity=False): - self.returns_unicode = 1 - if encoding: - self.encoding = encoding - if not _hook_external_entity: - if namespace_separator is None: - self.itself = XML_ParserCreate(encoding) - else: - self.itself = XML_ParserCreateNS(encoding, ord(namespace_separator)) - if not self.itself: - raise RuntimeError("Creating parser failed") - self._set_unknown_encoding_handler() - self.storage = {} - self.buffer = None - self.buffer_size = 8192 - self.character_data_handler = None - self.intern = {} - self.__exc_info = None - - def _flush_character_buffer(self): - if not self.buffer: - return - res = self._call_character_handler(''.join(self.buffer)) - self.buffer = [] - return res - - def _call_character_handler(self, buf): - if self.character_data_handler: - self.character_data_handler(buf) - - def _set_unknown_encoding_handler(self): - def UnknownEncoding(encodingData, name, info_p): - info = info_p.contents - s = ''.join([chr(i) for i in range(256)]) - u = s.decode(self.encoding, 'replace') - for i in range(len(u)): - if u[i] == u'\xfffd': - info.map[i] = -1 - else: - info.map[i] = ord(u[i]) - info.data = None - info.convert = None - info.release = None - return 1 - - CB = ctypes.CFUNCTYPE(c_int, c_void_p, c_char_p, POINTER(XML_Encoding)) - cb = CB(UnknownEncoding) - self._unknown_encoding_handler = (cb, UnknownEncoding) - XML_SetUnknownEncodingHandler(self.itself, cb, None) - - def _set_error(self, code): - e = ExpatError() - e.code = code - lineno = lib.XML_GetCurrentLineNumber(self.itself) - colno = lib.XML_GetCurrentColumnNumber(self.itself) - e.offset = colno - e.lineno = lineno - err = XML_ErrorString(code)[:200] - e.s = "%s: line: %d, column: %d" % (err, lineno, colno) - e.message = e.s - self._error = e - - def Parse(self, data, is_final=0): - res = XML_Parse(self.itself, data, len(data), is_final) - if res == 0: - self._set_error(XML_GetErrorCode(self.itself)) - if self.__exc_info: - exc_info = self.__exc_info - self.__exc_info = None - raise exc_info[0], exc_info[1], exc_info[2] - else: - raise self._error - self._flush_character_buffer() - return res 
- - def _sethandler(self, name, real_cb): - setter = setters[name] - try: - cb = self.storage[(name, real_cb)] - except KeyError: - cb = getattr(self, 'get_cb_for_%s' % name)(real_cb) - self.storage[(name, real_cb)] = cb - except TypeError: - # weellll... - cb = getattr(self, 'get_cb_for_%s' % name)(real_cb) - setter(self.itself, cb) - - def _wrap_cb(self, cb): - def f(*args): - try: - return cb(*args) - except: - self.__exc_info = sys.exc_info() - XML_StopParser(self.itself, XML_FALSE) - return f - - def get_cb_for_StartElementHandler(self, real_cb): - def StartElement(unused, name, attrs): - # unpack name and attrs - conv = self.conv - self._flush_character_buffer() - if self.specified_attributes: - max = XML_GetSpecifiedAttributeCount(self.itself) - else: - max = 0 - while attrs[max]: - max += 2 # copied - if self.ordered_attributes: - res = [attrs[i] for i in range(max)] - else: - res = {} - for i in range(0, max, 2): - res[conv(attrs[i])] = conv(attrs[i + 1]) - real_cb(conv(name), res) - StartElement = self._wrap_cb(StartElement) - CB = ctypes.CFUNCTYPE(None, c_void_p, c_char_p, POINTER(c_char_p)) - return CB(StartElement) - - def get_cb_for_ExternalEntityRefHandler(self, real_cb): - def ExternalEntity(unused, context, base, sysId, pubId): - self._flush_character_buffer() - conv = self.conv - res = real_cb(conv(context), conv(base), conv(sysId), - conv(pubId)) - if res is None: - return 0 - return res - ExternalEntity = self._wrap_cb(ExternalEntity) - CB = ctypes.CFUNCTYPE(c_int, c_void_p, *([c_char_p] * 4)) - return CB(ExternalEntity) - - def get_cb_for_CharacterDataHandler(self, real_cb): - def CharacterData(unused, s, lgt): - if self.buffer is None: - self._call_character_handler(self.conv(s[:lgt])) - else: - if len(self.buffer) + lgt > self.buffer_size: - self._flush_character_buffer() - if self.character_data_handler is None: - return - if lgt >= self.buffer_size: - self._call_character_handler(s[:lgt]) - self.buffer = [] - else: - self.buffer.append(s[:lgt]) - CharacterData = self._wrap_cb(CharacterData) - CB = ctypes.CFUNCTYPE(None, c_void_p, POINTER(c_char), c_int) - return CB(CharacterData) - - def get_cb_for_NotStandaloneHandler(self, real_cb): - def NotStandaloneHandler(unused): - return real_cb() - NotStandaloneHandler = self._wrap_cb(NotStandaloneHandler) - CB = ctypes.CFUNCTYPE(c_int, c_void_p) - return CB(NotStandaloneHandler) - - def get_cb_for_EntityDeclHandler(self, real_cb): - def EntityDecl(unused, ename, is_param, value, value_len, base, - system_id, pub_id, not_name): - self._flush_character_buffer() - if not value: - value = None - else: - value = value[:value_len] - args = [ename, is_param, value, base, system_id, - pub_id, not_name] - args = [self.conv(arg) for arg in args] - real_cb(*args) - EntityDecl = self._wrap_cb(EntityDecl) - CB = ctypes.CFUNCTYPE(None, c_void_p, c_char_p, c_int, c_char_p, - c_int, c_char_p, c_char_p, c_char_p, c_char_p) - return CB(EntityDecl) - - def _conv_content_model(self, model): - children = tuple([self._conv_content_model(model.children[i]) - for i in range(model.numchildren)]) - return (model.type, model.quant, self.conv(model.name), - children) - - def get_cb_for_ElementDeclHandler(self, real_cb): - def ElementDecl(unused, name, model): - self._flush_character_buffer() - modelobj = self._conv_content_model(model[0]) - real_cb(name, modelobj) - XML_FreeContentModel(self.itself, model) - - ElementDecl = self._wrap_cb(ElementDecl) - CB = ctypes.CFUNCTYPE(None, c_void_p, c_char_p, POINTER(XML_Content)) - return CB(ElementDecl) - - 
def _new_callback_for_string_len(name, sign): - def get_callback_for_(self, real_cb): - def func(unused, s, len): - self._flush_character_buffer() - arg = self.conv(s[:len]) - real_cb(arg) - func.func_name = name - func = self._wrap_cb(func) - CB = ctypes.CFUNCTYPE(*sign) - return CB(func) - get_callback_for_.func_name = 'get_cb_for_' + name - return get_callback_for_ - - for name in ['DefaultHandlerExpand', - 'DefaultHandler']: - sign = [None, c_void_p, POINTER(c_char), c_int] - name = 'get_cb_for_' + name - locals()[name] = _new_callback_for_string_len(name, sign) - - def _new_callback_for_starargs(name, sign): - def get_callback_for_(self, real_cb): - def func(unused, *args): - self._flush_character_buffer() - args = [self.conv(arg) for arg in args] - real_cb(*args) - func.func_name = name - func = self._wrap_cb(func) - CB = ctypes.CFUNCTYPE(*sign) - return CB(func) - get_callback_for_.func_name = 'get_cb_for_' + name - return get_callback_for_ - - for name, num_or_sign in [ - ('EndElementHandler', 1), - ('ProcessingInstructionHandler', 2), - ('UnparsedEntityDeclHandler', 5), - ('NotationDeclHandler', 4), - ('StartNamespaceDeclHandler', 2), - ('EndNamespaceDeclHandler', 1), - ('CommentHandler', 1), - ('StartCdataSectionHandler', 0), - ('EndCdataSectionHandler', 0), - ('StartDoctypeDeclHandler', [None, c_void_p] + [c_char_p] * 3 + [c_int]), - ('XmlDeclHandler', [None, c_void_p, c_char_p, c_char_p, c_int]), - ('AttlistDeclHandler', [None, c_void_p] + [c_char_p] * 4 + [c_int]), - ('EndDoctypeDeclHandler', 0), - ('SkippedEntityHandler', [None, c_void_p, c_char_p, c_int]), - ]: - if isinstance(num_or_sign, int): - sign = [None, c_void_p] + [c_char_p] * num_or_sign - else: - sign = num_or_sign - name = 'get_cb_for_' + name - locals()[name] = _new_callback_for_starargs(name, sign) - - def conv_unicode(self, s): - if s is None or isinstance(s, int): - return s - return s.decode(self.encoding, "strict") - - def __setattr__(self, name, value): - # forest of ifs... 
- if name in ['ordered_attributes', - 'returns_unicode', 'specified_attributes']: - if value: - if name == 'returns_unicode': - self.conv = self.conv_unicode - self.__dict__[name] = 1 - else: - if name == 'returns_unicode': - self.conv = lambda s: s - self.__dict__[name] = 0 - elif name == 'buffer_text': - if value: - self.buffer = [] - else: - self._flush_character_buffer() - self.buffer = None - elif name == 'buffer_size': - if not isinstance(value, int): - raise TypeError("Expected int") - if value <= 0: - raise ValueError("Expected positive int") - self.__dict__[name] = value - elif name == 'namespace_prefixes': - XML_SetReturnNSTriplet(self.itself, int(bool(value))) - elif name in setters: - if name == 'CharacterDataHandler': - # XXX we need to flush buffer here - self._flush_character_buffer() - self.character_data_handler = value - #print name - #print value - #print - self._sethandler(name, value) - else: - self.__dict__[name] = value - - def SetParamEntityParsing(self, arg): - XML_SetParamEntityParsing(self.itself, arg) - - if XML_COMBINED_VERSION >= 19505: - def UseForeignDTD(self, arg=True): - if arg: - flag = XML_TRUE - else: - flag = XML_FALSE - XML_UseForeignDTD(self.itself, flag) - - def __getattr__(self, name): - if name == 'buffer_text': - return self.buffer is not None - elif name in currents: - return getattr(lib, 'XML_Get' + name)(self.itself) - elif name == 'ErrorColumnNumber': - return lib.XML_GetCurrentColumnNumber(self.itself) - elif name == 'ErrorLineNumber': - return lib.XML_GetCurrentLineNumber(self.itself) - return self.__dict__[name] - - def ParseFile(self, file): - return self.Parse(file.read(), False) - - def SetBase(self, base): - XML_SetBase(self.itself, base) - - def ExternalEntityParserCreate(self, context, encoding=None): - """ExternalEntityParserCreate(context[, encoding]) - Create a parser for parsing an external entity based on the - information passed to the ExternalEntityRefHandler.""" - new_parser = XMLParserType(encoding, None, True) - new_parser.itself = XML_ExternalEntityParserCreate(self.itself, - context, encoding) - new_parser._set_unknown_encoding_handler() - return new_parser - - at builtinify -def ErrorString(errno): - return XML_ErrorString(errno)[:200] - - at builtinify -def ParserCreate(encoding=None, namespace_separator=None, intern=None): - if (not isinstance(encoding, str) and - not encoding is None): - raise TypeError("ParserCreate() argument 1 must be string or None, not %s" % encoding.__class__.__name__) - if (not isinstance(namespace_separator, str) and - not namespace_separator is None): - raise TypeError("ParserCreate() argument 2 must be string or None, not %s" % namespace_separator.__class__.__name__) - if namespace_separator is not None: - if len(namespace_separator) > 1: - raise ValueError('namespace_separator must be at most one character, omitted, or None') - if len(namespace_separator) == 0: - namespace_separator = None - return XMLParserType(encoding, namespace_separator) diff --git a/lib_pypy/pypy_test/test_pyexpat.py b/lib_pypy/pypy_test/test_pyexpat.py deleted file mode 100644 --- a/lib_pypy/pypy_test/test_pyexpat.py +++ /dev/null @@ -1,665 +0,0 @@ -# XXX TypeErrors on calling handlers, or on bad return values from a -# handler, are obscure and unhelpful. 
- -from __future__ import absolute_import -import StringIO, sys -import unittest, py - -from lib_pypy.ctypes_config_cache import rebuild -rebuild.rebuild_one('pyexpat.ctc.py') - -from lib_pypy import pyexpat -#from xml.parsers import expat -expat = pyexpat - -from test.test_support import sortdict, run_unittest - - -class TestSetAttribute: - def setup_method(self, meth): - self.parser = expat.ParserCreate(namespace_separator='!') - self.set_get_pairs = [ - [0, 0], - [1, 1], - [2, 1], - [0, 0], - ] - - def test_returns_unicode(self): - for x, y in self.set_get_pairs: - self.parser.returns_unicode = x - assert self.parser.returns_unicode == y - - def test_ordered_attributes(self): - for x, y in self.set_get_pairs: - self.parser.ordered_attributes = x - assert self.parser.ordered_attributes == y - - def test_specified_attributes(self): - for x, y in self.set_get_pairs: - self.parser.specified_attributes = x - assert self.parser.specified_attributes == y - - -data = '''\ - - - - - - - - - -%unparsed_entity; -]> - - - - Contents of subelements - - -&external_entity; -&skipped_entity; - -''' - - -# Produce UTF-8 output -class TestParse: - class Outputter: - def __init__(self): - self.out = [] - - def StartElementHandler(self, name, attrs): - self.out.append('Start element: ' + repr(name) + ' ' + - sortdict(attrs)) - - def EndElementHandler(self, name): - self.out.append('End element: ' + repr(name)) - - def CharacterDataHandler(self, data): - data = data.strip() - if data: - self.out.append('Character data: ' + repr(data)) - - def ProcessingInstructionHandler(self, target, data): - self.out.append('PI: ' + repr(target) + ' ' + repr(data)) - - def StartNamespaceDeclHandler(self, prefix, uri): - self.out.append('NS decl: ' + repr(prefix) + ' ' + repr(uri)) - - def EndNamespaceDeclHandler(self, prefix): - self.out.append('End of NS decl: ' + repr(prefix)) - - def StartCdataSectionHandler(self): - self.out.append('Start of CDATA section') - - def EndCdataSectionHandler(self): - self.out.append('End of CDATA section') - - def CommentHandler(self, text): - self.out.append('Comment: ' + repr(text)) - - def NotationDeclHandler(self, *args): - name, base, sysid, pubid = args - self.out.append('Notation declared: %s' %(args,)) - - def UnparsedEntityDeclHandler(self, *args): - entityName, base, systemId, publicId, notationName = args - self.out.append('Unparsed entity decl: %s' %(args,)) - - def NotStandaloneHandler(self): - self.out.append('Not standalone') - return 1 - - def ExternalEntityRefHandler(self, *args): - context, base, sysId, pubId = args - self.out.append('External entity ref: %s' %(args[1:],)) - return 1 - - def StartDoctypeDeclHandler(self, *args): - self.out.append(('Start doctype', args)) - return 1 - - def EndDoctypeDeclHandler(self): - self.out.append("End doctype") - return 1 - - def EntityDeclHandler(self, *args): - self.out.append(('Entity declaration', args)) - return 1 - - def XmlDeclHandler(self, *args): - self.out.append(('XML declaration', args)) - return 1 - - def ElementDeclHandler(self, *args): - self.out.append(('Element declaration', args)) - return 1 - - def AttlistDeclHandler(self, *args): - self.out.append(('Attribute list declaration', args)) - return 1 - - def SkippedEntityHandler(self, *args): - self.out.append(("Skipped entity", args)) - return 1 - - def DefaultHandler(self, userData): - pass - - def DefaultHandlerExpand(self, userData): - pass - - handler_names = [ - 'StartElementHandler', 'EndElementHandler', 'CharacterDataHandler', - 
'ProcessingInstructionHandler', 'UnparsedEntityDeclHandler', - 'NotationDeclHandler', 'StartNamespaceDeclHandler', - 'EndNamespaceDeclHandler', 'CommentHandler', - 'StartCdataSectionHandler', 'EndCdataSectionHandler', 'DefaultHandler', - 'DefaultHandlerExpand', 'NotStandaloneHandler', - 'ExternalEntityRefHandler', 'StartDoctypeDeclHandler', - 'EndDoctypeDeclHandler', 'EntityDeclHandler', 'XmlDeclHandler', - 'ElementDeclHandler', 'AttlistDeclHandler', 'SkippedEntityHandler', - ] - - def test_utf8(self): - - out = self.Outputter() - parser = expat.ParserCreate(namespace_separator='!') - for name in self.handler_names: - setattr(parser, name, getattr(out, name)) - parser.returns_unicode = 0 - parser.Parse(data, 1) - - # Verify output - operations = out.out - expected_operations = [ - ('XML declaration', (u'1.0', u'iso-8859-1', 0)), - 'PI: \'xml-stylesheet\' \'href="stylesheet.css"\'', - "Comment: ' comment data '", - "Not standalone", - ("Start doctype", ('quotations', 'quotations.dtd', None, 1)), - ('Element declaration', (u'root', (2, 0, None, ()))), - ('Attribute list declaration', ('root', 'attr1', 'CDATA', None, - 1)), - ('Attribute list declaration', ('root', 'attr2', 'CDATA', None, - 0)), - "Notation declared: ('notation', None, 'notation.jpeg', None)", - ('Entity declaration', ('acirc', 0, '\xc3\xa2', None, None, None, None)), - ('Entity declaration', ('external_entity', 0, None, None, - 'entity.file', None, None)), - "Unparsed entity decl: ('unparsed_entity', None, 'entity.file', None, 'notation')", - "Not standalone", - "End doctype", - "Start element: 'root' {'attr1': 'value1', 'attr2': 'value2\\xe1\\xbd\\x80'}", - "NS decl: 'myns' 'http://www.python.org/namespace'", - "Start element: 'http://www.python.org/namespace!subelement' {}", - "Character data: 'Contents of subelements'", - "End element: 'http://www.python.org/namespace!subelement'", - "End of NS decl: 'myns'", - "Start element: 'sub2' {}", - 'Start of CDATA section', - "Character data: 'contents of CDATA section'", - 'End of CDATA section', - "End element: 'sub2'", - "External entity ref: (None, 'entity.file', None)", - ('Skipped entity', ('skipped_entity', 0)), - "End element: 'root'", - ] - for operation, expected_operation in zip(operations, expected_operations): - assert operation == expected_operation - - def test_unicode(self): - # Try the parse again, this time producing Unicode output - out = self.Outputter() - parser = expat.ParserCreate(namespace_separator='!') - parser.returns_unicode = 1 - for name in self.handler_names: - setattr(parser, name, getattr(out, name)) - - parser.Parse(data, 1) - - operations = out.out - expected_operations = [ - ('XML declaration', (u'1.0', u'iso-8859-1', 0)), - 'PI: u\'xml-stylesheet\' u\'href="stylesheet.css"\'', - "Comment: u' comment data '", - "Not standalone", - ("Start doctype", ('quotations', 'quotations.dtd', None, 1)), - ('Element declaration', (u'root', (2, 0, None, ()))), - ('Attribute list declaration', ('root', 'attr1', 'CDATA', None, - 1)), - ('Attribute list declaration', ('root', 'attr2', 'CDATA', None, - 0)), - "Notation declared: (u'notation', None, u'notation.jpeg', None)", - ('Entity declaration', (u'acirc', 0, u'\xe2', None, None, None, - None)), - ('Entity declaration', (u'external_entity', 0, None, None, - u'entity.file', None, None)), - "Unparsed entity decl: (u'unparsed_entity', None, u'entity.file', None, u'notation')", - "Not standalone", - "End doctype", - "Start element: u'root' {u'attr1': u'value1', u'attr2': u'value2\\u1f40'}", - "NS decl: u'myns' 
u'http://www.python.org/namespace'", - "Start element: u'http://www.python.org/namespace!subelement' {}", - "Character data: u'Contents of subelements'", - "End element: u'http://www.python.org/namespace!subelement'", - "End of NS decl: u'myns'", - "Start element: u'sub2' {}", - 'Start of CDATA section', - "Character data: u'contents of CDATA section'", - 'End of CDATA section', - "End element: u'sub2'", - "External entity ref: (None, u'entity.file', None)", - ('Skipped entity', ('skipped_entity', 0)), - "End element: u'root'", - ] - for operation, expected_operation in zip(operations, expected_operations): - assert operation == expected_operation - - def test_parse_file(self): - # Try parsing a file - out = self.Outputter() - parser = expat.ParserCreate(namespace_separator='!') - parser.returns_unicode = 1 - for name in self.handler_names: - setattr(parser, name, getattr(out, name)) - file = StringIO.StringIO(data) - - parser.ParseFile(file) - - operations = out.out - expected_operations = [ - ('XML declaration', (u'1.0', u'iso-8859-1', 0)), - 'PI: u\'xml-stylesheet\' u\'href="stylesheet.css"\'', - "Comment: u' comment data '", - "Not standalone", - ("Start doctype", ('quotations', 'quotations.dtd', None, 1)), - ('Element declaration', (u'root', (2, 0, None, ()))), - ('Attribute list declaration', ('root', 'attr1', 'CDATA', None, - 1)), - ('Attribute list declaration', ('root', 'attr2', 'CDATA', None, - 0)), - "Notation declared: (u'notation', None, u'notation.jpeg', None)", - ('Entity declaration', ('acirc', 0, u'\xe2', None, None, None, None)), - ('Entity declaration', (u'external_entity', 0, None, None, u'entity.file', None, None)), - "Unparsed entity decl: (u'unparsed_entity', None, u'entity.file', None, u'notation')", - "Not standalone", - "End doctype", - "Start element: u'root' {u'attr1': u'value1', u'attr2': u'value2\\u1f40'}", - "NS decl: u'myns' u'http://www.python.org/namespace'", - "Start element: u'http://www.python.org/namespace!subelement' {}", - "Character data: u'Contents of subelements'", - "End element: u'http://www.python.org/namespace!subelement'", - "End of NS decl: u'myns'", - "Start element: u'sub2' {}", - 'Start of CDATA section', - "Character data: u'contents of CDATA section'", - 'End of CDATA section', - "End element: u'sub2'", - "External entity ref: (None, u'entity.file', None)", - ('Skipped entity', ('skipped_entity', 0)), - "End element: u'root'", - ] - for operation, expected_operation in zip(operations, expected_operations): - assert operation == expected_operation - - -class TestNamespaceSeparator: - def test_legal(self): - # Tests that make sure we get errors when the namespace_separator value - # is illegal, and that we don't for good values: - expat.ParserCreate() - expat.ParserCreate(namespace_separator=None) - expat.ParserCreate(namespace_separator=' ') - - def test_illegal(self): - try: - expat.ParserCreate(namespace_separator=42) - raise AssertionError - except TypeError, e: - assert str(e) == ( - 'ParserCreate() argument 2 must be string or None, not int') - - try: - expat.ParserCreate(namespace_separator='too long') - raise AssertionError - except ValueError, e: - assert str(e) == ( - 'namespace_separator must be at most one character, omitted, or None') - - def test_zero_length(self): - # ParserCreate() needs to accept a namespace_separator of zero length - # to satisfy the requirements of RDF applications that are required - # to simply glue together the namespace URI and the localname. 
Though - # considered a wart of the RDF specifications, it needs to be supported. - # - # See XML-SIG mailing list thread starting with - # http://mail.python.org/pipermail/xml-sig/2001-April/005202.html - # - expat.ParserCreate(namespace_separator='') # too short - - -class TestInterning: - def test(self): - py.test.skip("Not working") - # Test the interning machinery. - p = expat.ParserCreate() - L = [] - def collector(name, *args): - L.append(name) - p.StartElementHandler = collector - p.EndElementHandler = collector - p.Parse(" ", 1) - tag = L[0] - assert len(L) == 6 - for entry in L: - # L should have the same string repeated over and over. - assert tag is entry - - -class TestBufferText: - def setup_method(self, meth): - self.stuff = [] - self.parser = expat.ParserCreate() - self.parser.buffer_text = 1 - self.parser.CharacterDataHandler = self.CharacterDataHandler - - def check(self, expected, label): - assert self.stuff == expected, ( - "%s\nstuff = %r\nexpected = %r" - % (label, self.stuff, map(unicode, expected))) - - def CharacterDataHandler(self, text): - self.stuff.append(text) - - def StartElementHandler(self, name, attrs): - self.stuff.append("<%s>" % name) - bt = attrs.get("buffer-text") - if bt == "yes": - self.parser.buffer_text = 1 - elif bt == "no": - self.parser.buffer_text = 0 - - def EndElementHandler(self, name): - self.stuff.append("" % name) - - def CommentHandler(self, data): - self.stuff.append("" % data) - - def setHandlers(self, handlers=[]): - for name in handlers: - setattr(self.parser, name, getattr(self, name)) - - def test_default_to_disabled(self): - parser = expat.ParserCreate() - assert not parser.buffer_text - - def test_buffering_enabled(self): - # Make sure buffering is turned on - assert self.parser.buffer_text - self.parser.Parse("123", 1) - assert self.stuff == ['123'], ( - "buffered text not properly collapsed") - - def test1(self): - # XXX This test exposes more detail of Expat's text chunking than we - # XXX like, but it tests what we need to concisely. 
- self.setHandlers(["StartElementHandler"]) - self.parser.Parse("12\n34\n5", 1) - assert self.stuff == ( - ["", "1", "", "2", "\n", "3", "", "4\n5"]), ( - "buffering control not reacting as expected") - - def test2(self): - self.parser.Parse("1<2> \n 3", 1) - assert self.stuff == ["1<2> \n 3"], ( - "buffered text not properly collapsed") - - def test3(self): - self.setHandlers(["StartElementHandler"]) - self.parser.Parse("123", 1) - assert self.stuff == ["", "1", "", "2", "", "3"], ( - "buffered text not properly split") - - def test4(self): - self.setHandlers(["StartElementHandler", "EndElementHandler"]) - self.parser.CharacterDataHandler = None - self.parser.Parse("123", 1) - assert self.stuff == ( - ["", "", "", "", "", ""]) - - def test5(self): - self.setHandlers(["StartElementHandler", "EndElementHandler"]) - self.parser.Parse("123", 1) - assert self.stuff == ( - ["", "1", "", "", "2", "", "", "3", ""]) - - def test6(self): - self.setHandlers(["CommentHandler", "EndElementHandler", - "StartElementHandler"]) - self.parser.Parse("12345 ", 1) - assert self.stuff == ( - ["", "1", "", "", "2", "", "", "345", ""]), ( - "buffered text not properly split") - - def test7(self): - self.setHandlers(["CommentHandler", "EndElementHandler", - "StartElementHandler"]) - self.parser.Parse("12345 ", 1) - assert self.stuff == ( - ["", "1", "", "", "2", "", "", "3", - "", "4", "", "5", ""]), ( - "buffered text not properly split") - - -# Test handling of exception from callback: -class TestHandlerException: - def StartElementHandler(self, name, attrs): - raise RuntimeError(name) - - def test(self): - parser = expat.ParserCreate() - parser.StartElementHandler = self.StartElementHandler - try: - parser.Parse("", 1) - raise AssertionError - except RuntimeError, e: - assert e.args[0] == 'a', ( - "Expected RuntimeError for element 'a', but" + \ - " found %r" % e.args[0]) - - -# Test Current* members: -class TestPosition: - def StartElementHandler(self, name, attrs): - self.check_pos('s') - - def EndElementHandler(self, name): - self.check_pos('e') - - def check_pos(self, event): - pos = (event, - self.parser.CurrentByteIndex, - self.parser.CurrentLineNumber, - self.parser.CurrentColumnNumber) - assert self.upto < len(self.expected_list) - expected = self.expected_list[self.upto] - assert pos == expected, ( - 'Expected position %s, got position %s' %(pos, expected)) - self.upto += 1 - - def test(self): - self.parser = expat.ParserCreate() - self.parser.StartElementHandler = self.StartElementHandler - self.parser.EndElementHandler = self.EndElementHandler - self.upto = 0 - self.expected_list = [('s', 0, 1, 0), ('s', 5, 2, 1), ('s', 11, 3, 2), - ('e', 15, 3, 6), ('e', 17, 4, 1), ('e', 22, 5, 0)] - - xml = '\n \n \n \n' - self.parser.Parse(xml, 1) - - -class Testsf1296433: - def test_parse_only_xml_data(self): - # http://python.org/sf/1296433 - # - xml = "%s" % ('a' * 1025) - # this one doesn't crash - #xml = "%s" % ('a' * 10000) - - class SpecificException(Exception): - pass - - def handler(text): - raise SpecificException - - parser = expat.ParserCreate() - parser.CharacterDataHandler = handler - - py.test.raises(Exception, parser.Parse, xml) - -class TestChardataBuffer: - """ - test setting of chardata buffer size - """ - - def test_1025_bytes(self): - assert self.small_buffer_test(1025) == 2 - - def test_1000_bytes(self): - assert self.small_buffer_test(1000) == 1 - - def test_wrong_size(self): - parser = expat.ParserCreate() - parser.buffer_text = 1 - def f(size): - parser.buffer_size = size - - 
py.test.raises(TypeError, f, sys.maxint+1) - py.test.raises(ValueError, f, -1) - py.test.raises(ValueError, f, 0) - - def test_unchanged_size(self): - xml1 = ("%s" % ('a' * 512)) - xml2 = 'a'*512 + '' - parser = expat.ParserCreate() - parser.CharacterDataHandler = self.counting_handler - parser.buffer_size = 512 - parser.buffer_text = 1 - - # Feed 512 bytes of character data: the handler should be called - # once. - self.n = 0 - parser.Parse(xml1) - assert self.n == 1 - - # Reassign to buffer_size, but assign the same size. - parser.buffer_size = parser.buffer_size - assert self.n == 1 - - # Try parsing rest of the document - parser.Parse(xml2) - assert self.n == 2 - - - def test_disabling_buffer(self): - xml1 = "%s" % ('a' * 512) - xml2 = ('b' * 1024) - xml3 = "%s" % ('c' * 1024) - parser = expat.ParserCreate() - parser.CharacterDataHandler = self.counting_handler - parser.buffer_text = 1 - parser.buffer_size = 1024 - assert parser.buffer_size == 1024 - - # Parse one chunk of XML - self.n = 0 - parser.Parse(xml1, 0) - assert parser.buffer_size == 1024 - assert self.n == 1 - - # Turn off buffering and parse the next chunk. - parser.buffer_text = 0 - assert not parser.buffer_text - assert parser.buffer_size == 1024 - for i in range(10): - parser.Parse(xml2, 0) - assert self.n == 11 - - parser.buffer_text = 1 - assert parser.buffer_text - assert parser.buffer_size == 1024 - parser.Parse(xml3, 1) - assert self.n == 12 - - - - def make_document(self, bytes): - return ("" + bytes * 'a' + '') - - def counting_handler(self, text): - self.n += 1 - - def small_buffer_test(self, buffer_len): - xml = "%s" % ('a' * buffer_len) - parser = expat.ParserCreate() - parser.CharacterDataHandler = self.counting_handler - parser.buffer_size = 1024 - parser.buffer_text = 1 - - self.n = 0 - parser.Parse(xml) - return self.n - - def test_change_size_1(self): - xml1 = "%s" % ('a' * 1024) - xml2 = "aaa%s" % ('a' * 1025) - parser = expat.ParserCreate() - parser.CharacterDataHandler = self.counting_handler - parser.buffer_text = 1 - parser.buffer_size = 1024 - assert parser.buffer_size == 1024 - - self.n = 0 - parser.Parse(xml1, 0) - parser.buffer_size *= 2 - assert parser.buffer_size == 2048 - parser.Parse(xml2, 1) - assert self.n == 2 - - def test_change_size_2(self): - xml1 = "a%s" % ('a' * 1023) - xml2 = "aaa%s" % ('a' * 1025) - parser = expat.ParserCreate() - parser.CharacterDataHandler = self.counting_handler - parser.buffer_text = 1 - parser.buffer_size = 2048 - assert parser.buffer_size == 2048 - - self.n=0 - parser.Parse(xml1, 0) - parser.buffer_size /= 2 - assert parser.buffer_size == 1024 - parser.Parse(xml2, 1) - assert self.n == 4 - - def test_segfault(self): - py.test.raises(TypeError, expat.ParserCreate, 1234123123) - -def test_invalid_data(): - parser = expat.ParserCreate() - parser.Parse('invalid.xml', 0) - try: - parser.Parse("", 1) - except expat.ExpatError, e: - assert e.code == 2 # XXX is this reliable? - assert e.lineno == 1 - assert e.message.startswith('syntax error') - else: - py.test.fail("Did not raise") - diff --git a/pypy/doc/config/objspace.usemodules.pyexpat.txt b/pypy/doc/config/objspace.usemodules.pyexpat.txt --- a/pypy/doc/config/objspace.usemodules.pyexpat.txt +++ b/pypy/doc/config/objspace.usemodules.pyexpat.txt @@ -1,2 +1,1 @@ -Use (experimental) pyexpat module written in RPython, instead of CTypes -version which is used by default. +Use the pyexpat module, written in RPython. 
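
As context for changeset r52408 above, a minimal usage sketch (editorial,
not code from the repository) of the expat-style interface that both the
deleted ctypes shim and PyPy's built-in RPython ``pyexpat`` module expose::

    import pyexpat   # also reachable as xml.parsers.expat

    events = []
    p = pyexpat.ParserCreate()
    p.StartElementHandler = lambda name, attrs: events.append(("start", name, attrs))
    p.CharacterDataHandler = lambda data: events.append(("chars", data))
    p.EndElementHandler = lambda name: events.append(("end", name))
    p.Parse("<root a='1'>hi</root>", True)

    assert events[0] == ("start", "root", {"a": "1"})
    assert events[-1] == ("end", "root")
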
From noreply at buildbot.pypy.org Mon Feb 13 15:26:56 2012 From: noreply at buildbot.pypy.org (edelsohn) Date: Mon, 13 Feb 2012 15:26:56 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: Add _build_malloc_slowpath, call it from setup_once. Add malloc_cond. Message-ID: <20120213142656.39E158203C@wyvern.cs.uni-duesseldorf.de> Author: edelsohn Branch: ppc-jit-backend Changeset: r52409:da5a4a161d9c Date: 2012-02-13 09:26 -0500 http://bitbucket.org/pypy/pypy/changeset/da5a4a161d9c/ Log: Add _build_malloc_slowpath, call it from setup_once. Add malloc_cond. diff --git a/pypy/jit/backend/ppc/ppc_assembler.py b/pypy/jit/backend/ppc/ppc_assembler.py --- a/pypy/jit/backend/ppc/ppc_assembler.py +++ b/pypy/jit/backend/ppc/ppc_assembler.py @@ -7,7 +7,7 @@ from pypy.jit.backend.ppc.assembler import Assembler from pypy.jit.backend.ppc.opassembler import OpAssembler from pypy.jit.backend.ppc.symbol_lookup import lookup -from pypy.jit.backend.ppc.codebuilder import PPCBuilder +from pypy.jit.backend.ppc.codebuilder import PPCBuilder, OverwritingBuilder from pypy.jit.backend.ppc.jump import remap_frame_layout from pypy.jit.backend.ppc.arch import (IS_PPC_32, IS_PPC_64, WORD, NONVOLATILES, MAX_REG_PARAMS, @@ -20,6 +20,7 @@ decode32, decode64, count_reg_args, Saved_Volatiles) +from pypy.jit.backend.ppc.helper.regalloc import _check_imm_arg import pypy.jit.backend.ppc.register as r import pypy.jit.backend.ppc.condition as c from pypy.jit.metainterp.history import (Const, ConstPtr, JitCellToken, @@ -279,6 +280,28 @@ locs.append(loc) return locs + def _build_malloc_slowpath(self): + mc = PPCBuilder() + with Saved_Volatiles(mc): + # Values to compute size stored in r3 and r4 + mc.subf(r.r3.value, r.r3.value, r.r4.value) + addr = self.cpu.gc_ll_descr.get_malloc_slowpath_addr() + mc.call(addr) + + mc.cmp_op(0, r.r3.value, 0, imm=True) + jmp_pos = mc.currpos() + mc.nop() + nursery_free_adr = self.cpu.gc_ll_descr.get_nursery_free_addr() + mc.load_imm(r.r4, nursery_free_adr) + mc.load(r.r4.value, r.r4.value, 0) + + pmc = OverwritingBuilder(mc, jmp_pos, 1) + pmc.bc(4, 2, jmp_pos) # jump if the two values are equal + pmc.overwrite() + mc.b_abs(self.propagate_exception_path) + rawstart = mc.materialize(self.cpu.asmmemmgr, []) + self.malloc_slowpath = rawstart + def _build_propagate_exception_path(self): if self.cpu.propagate_exception_v < 0: return @@ -383,8 +406,8 @@ gc_ll_descr = self.cpu.gc_ll_descr gc_ll_descr.initialize() self._build_propagate_exception_path() - #if gc_ll_descr.get_malloc_slowpath_addr is not None: - # self._build_malloc_slowpath() + if gc_ll_descr.get_malloc_slowpath_addr is not None: + self._build_malloc_slowpath() if gc_ll_descr.gcrootmap and gc_ll_descr.gcrootmap.is_shadow_stack: self._build_release_gil(gc_ll_descr.gcrootmap) self.memcpy_addr = self.cpu.cast_ptr_to_int(memcpy_fn) @@ -892,6 +915,51 @@ else: self.mc.extsw(resloc.value, resloc.value) + def malloc_cond(self, nursery_free_adr, nursery_top_adr, size): + assert size & (WORD-1) == 0 # must be correctly aligned + size = max(size, self.cpu.gc_ll_descr.minimal_size_in_nursery) + size = (size + WORD - 1) & ~(WORD - 1) # round up + + self.mc.load_imm(r.r3, nursery_free_adr) + self.mc.load(r.r3.value, r.r3.value, 0) + + if _check_imm_arg(size): + self.mc.addi(r.r4.value, r.r3.value, size) + else: + self.mc.load_imm(r.r4, size) + self.mc.add(r.r4,value, r.r3.value, r.r4.value) + + # XXX maybe use an offset from the value nursery_free_addr + self.mc.load_imm(r.r3.value, nursery_top_adr) + self.mc.load(r.r3.value, r.r3.value, 0) + + 
self.mc.cmp_op(0, r.r4.value, r.r3.value, signed=False) + + fast_jmp_pos = self.mc.currpos() + self.mc.nop() + + # XXX update + # See comments in _build_malloc_slowpath for the + # details of the two helper functions that we are calling below. + # First, we need to call two of them and not just one because we + # need to have a mark_gc_roots() in between. Then the calling + # convention of slowpath_addr{1,2} are tweaked a lot to allow + # the code here to be just two CALLs: slowpath_addr1 gets the + # size of the object to allocate from (EDX-EAX) and returns the + # result in EAX; self.malloc_slowpath additionally returns in EDX a + # copy of heap(nursery_free_adr), so that the final MOV below is + # a no-op. + self.mark_gc_roots(self.write_new_force_index(), + use_copy_area=True) + self.mc.call(self.malloc_slowpath) + + offset = self.mc.currpos() - fast_jmp_pos + pmc = OverwritingBuilder(self.mc, fast_jmp_pos, 1) + pmc.bc(4, 1, offset) # jump if LE (not GT) + + self.mc.gen_imm(r.r3, nursery_free_adr) + self.mc.store(r.r4.value, r.r3.value, 0) + def mark_gc_roots(self, force_index, use_copy_area=False): if force_index < 0: return # not needed From noreply at buildbot.pypy.org Mon Feb 13 15:26:57 2012 From: noreply at buildbot.pypy.org (edelsohn) Date: Mon, 13 Feb 2012 15:26:57 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: Add prepare_call_malloc_nursery, get_mark_gc_roots. Message-ID: <20120213142657.A16168204C@wyvern.cs.uni-duesseldorf.de> Author: edelsohn Branch: ppc-jit-backend Changeset: r52410:268d435de9ca Date: 2012-02-13 09:26 -0500 http://bitbucket.org/pypy/pypy/changeset/268d435de9ca/ Log: Add prepare_call_malloc_nursery, get_mark_gc_roots. diff --git a/pypy/jit/backend/ppc/regalloc.py b/pypy/jit/backend/ppc/regalloc.py --- a/pypy/jit/backend/ppc/regalloc.py +++ b/pypy/jit/backend/ppc/regalloc.py @@ -777,6 +777,43 @@ args = [imm(rffi.cast(lltype.Signed, op.getarg(0).getint()))] return args + def prepare_call_malloc_nursery(self, op): + size_box = op.getarg(0) + assert isinstance(size_box, ConstInt) + size = size_box.getint() + + self.rm.force_allocate_reg(op.result, selected_reg=r.r3) + t = TempInt() + self.rm.force_allocate_reg(t, selected_reg=r.r4) + self.possibly_free_var(op.result) + self.possibly_free_var(t) + + gc_ll_descr = self.assembler.cpu.gc_ll_descr + self.assembler.malloc_cond( + gc_ll_descr.get_nursery_free_addr(), + gc_ll_descr.get_nursery_top_addr(), + size + ) + + def get_mark_gc_roots(self, gcrootmap, use_copy_area=False): + shape = gcrootmap.get_basic_shape(False) + for v, val in self.frame_manager.bindings.items(): + if (isinstance(v, BoxPtr) and self.rm.stays_alive(v)): + assert val.is_stack() + gcrootmap.add_frame_offset(shape, val.position * -WORD) + for v, reg in self.rm.reg_bindings.items(): + if reg is r.r3: + continue + if (isinstance(v, BoxPtr) and self.rm.stays_alive(v)): + if use_copy_area: + assert reg in self.rm.REGLOC_TO_COPY_AREA_OFS + area_offset = self.rm.REGLOC_TO_COPY_AREA_OFS[reg] + gcrootmap.add_frame_offset(shape, area_offset) + else: + assert 0, 'sure??' + return gcrootmap.compress_callshape(shape, + self.assembler.datablockwrapper) + prepare_debug_merge_point = void prepare_jit_debug = void From noreply at buildbot.pypy.org Mon Feb 13 16:23:43 2012 From: noreply at buildbot.pypy.org (edelsohn) Date: Mon, 13 Feb 2012 16:23:43 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: (arigo): Fix comma. 
Message-ID: <20120213152343.B34EF8203C@wyvern.cs.uni-duesseldorf.de> Author: edelsohn Branch: ppc-jit-backend Changeset: r52411:c1cfc516dd53 Date: 2012-02-13 09:40 -0500 http://bitbucket.org/pypy/pypy/changeset/c1cfc516dd53/ Log: (arigo): Fix comma. diff --git a/pypy/jit/backend/ppc/ppc_assembler.py b/pypy/jit/backend/ppc/ppc_assembler.py --- a/pypy/jit/backend/ppc/ppc_assembler.py +++ b/pypy/jit/backend/ppc/ppc_assembler.py @@ -927,7 +927,7 @@ self.mc.addi(r.r4.value, r.r3.value, size) else: self.mc.load_imm(r.r4, size) - self.mc.add(r.r4,value, r.r3.value, r.r4.value) + self.mc.add(r.r4.value, r.r3.value, r.r4.value) # XXX maybe use an offset from the value nursery_free_addr self.mc.load_imm(r.r3.value, nursery_top_adr) From noreply at buildbot.pypy.org Mon Feb 13 16:24:57 2012 From: noreply at buildbot.pypy.org (edelsohn) Date: Mon, 13 Feb 2012 16:24:57 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: Fix typos. Message-ID: <20120213152457.6CCEC8203C@wyvern.cs.uni-duesseldorf.de> Author: edelsohn Branch: ppc-jit-backend Changeset: r52412:9db342318ddb Date: 2012-02-13 10:19 -0500 http://bitbucket.org/pypy/pypy/changeset/9db342318ddb/ Log: Fix typos. diff --git a/pypy/jit/backend/ppc/ppc_assembler.py b/pypy/jit/backend/ppc/ppc_assembler.py --- a/pypy/jit/backend/ppc/ppc_assembler.py +++ b/pypy/jit/backend/ppc/ppc_assembler.py @@ -930,7 +930,7 @@ self.mc.add(r.r4.value, r.r3.value, r.r4.value) # XXX maybe use an offset from the value nursery_free_addr - self.mc.load_imm(r.r3.value, nursery_top_adr) + self.mc.load_imm(r.r3, nursery_top_adr) self.mc.load(r.r3.value, r.r3.value, 0) self.mc.cmp_op(0, r.r4.value, r.r3.value, signed=False) @@ -957,7 +957,7 @@ pmc = OverwritingBuilder(self.mc, fast_jmp_pos, 1) pmc.bc(4, 1, offset) # jump if LE (not GT) - self.mc.gen_imm(r.r3, nursery_free_adr) + self.mc.load_imm(r.r3, nursery_free_adr) self.mc.store(r.r4.value, r.r3.value, 0) def mark_gc_roots(self, force_index, use_copy_area=False): From noreply at buildbot.pypy.org Mon Feb 13 16:24:59 2012 From: noreply at buildbot.pypy.org (edelsohn) Date: Mon, 13 Feb 2012 16:24:59 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: Import BoxPtr. Message-ID: <20120213152459.0D65D8203C@wyvern.cs.uni-duesseldorf.de> Author: edelsohn Branch: ppc-jit-backend Changeset: r52413:49ee8745859b Date: 2012-02-13 10:19 -0500 http://bitbucket.org/pypy/pypy/changeset/49ee8745859b/ Log: Import BoxPtr. diff --git a/pypy/jit/backend/ppc/regalloc.py b/pypy/jit/backend/ppc/regalloc.py --- a/pypy/jit/backend/ppc/regalloc.py +++ b/pypy/jit/backend/ppc/regalloc.py @@ -11,8 +11,9 @@ prepare_binary_int_op, prepare_binary_int_op_with_imm, prepare_unary_cmp) -from pypy.jit.metainterp.history import (INT, REF, FLOAT, Const, ConstInt, - ConstPtr, Box) +from pypy.jit.metainterp.history import (Const, ConstInt, ConstFloat, ConstPtr, + Box, BoxPtr, + INT, REF, FLOAT) from pypy.jit.metainterp.history import JitCellToken, TargetToken from pypy.jit.metainterp.resoperation import rop from pypy.jit.backend.ppc import locations From noreply at buildbot.pypy.org Mon Feb 13 17:08:22 2012 From: noreply at buildbot.pypy.org (edelsohn) Date: Mon, 13 Feb 2012 17:08:22 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: Remove argboxes from prepare_guard_call_release_gil and prepare_cond_call_gc_wb. 
Message-ID: <20120213160822.06BA98203C@wyvern.cs.uni-duesseldorf.de> Author: edelsohn Branch: ppc-jit-backend Changeset: r52414:3a24d21c440b Date: 2012-02-13 11:08 -0500 http://bitbucket.org/pypy/pypy/changeset/3a24d21c440b/ Log: Remove argboxes from prepare_guard_call_release_gil and prepare_cond_call_gc_wb. diff --git a/pypy/jit/backend/ppc/regalloc.py b/pypy/jit/backend/ppc/regalloc.py --- a/pypy/jit/backend/ppc/regalloc.py +++ b/pypy/jit/backend/ppc/regalloc.py @@ -508,17 +508,14 @@ gcrootmap = self.cpu.gc_ll_descr.gcrootmap if gcrootmap: arglocs = [] - argboxes = [] + args = op.getarglist() for i in range(op.numargs()): - loc, box = self._ensure_value_is_boxed(op.getarg(i), argboxes) + loc = self._ensure_value_is_boxed(op.getarg(i), args) arglocs.append(loc) - argboxes.append(box) self.assembler.call_release_gil(gcrootmap, arglocs) - self.possibly_free_vars(argboxes) # do the call faildescr = guard_op.getdescr() fail_index = self.cpu.get_fail_descr_number(faildescr) - self.assembler._write_fail_index(fail_index) args = [imm(rffi.cast(lltype.Signed, op.getarg(0).getint()))] self.assembler.emit_call(op, args, self, fail_index) # then reopen the stack @@ -825,11 +822,10 @@ # because it will be needed anyway by the following setfield_gc # or setarrayitem_gc. It avoids loading it twice from the memory. arglocs = [] - argboxes = [] + args = op.getarglist() for i in range(N): - loc = self._ensure_value_is_boxed(op.getarg(i), argboxes) + loc = self._ensure_value_is_boxed(op.getarg(i), args) arglocs.append(loc) - self.rm.possibly_free_vars(argboxes) return arglocs prepare_cond_call_gc_wb_array = prepare_cond_call_gc_wb From noreply at buildbot.pypy.org Mon Feb 13 18:27:03 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Mon, 13 Feb 2012 18:27:03 +0100 (CET) Subject: [pypy-commit] pypy py3k: - allow to pass string to raises when we run tests with -A Message-ID: <20120213172703.440F78203C@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52415:689ab9a80822 Date: 2012-02-10 18:03 +0100 http://bitbucket.org/pypy/pypy/changeset/689ab9a80822/ Log: - allow to pass string to raises when we run tests with -A - split test_reraise into multiple tests (most of them failing) diff --git a/pypy/conftest.py b/pypy/conftest.py --- a/pypy/conftest.py +++ b/pypy/conftest.py @@ -206,7 +206,10 @@ raise SystemExit(0) def raises(exc, func, *args, **kwargs): try: - func(*args, **kwargs) + if isinstance(func, str): + exec("if 1:\\n" + func) + else: + func(*args, **kwargs) except exc: pass else: diff --git a/pypy/interpreter/test/test_raise.py b/pypy/interpreter/test/test_raise.py --- a/pypy/interpreter/test/test_raise.py +++ b/pypy/interpreter/test/test_raise.py @@ -65,9 +65,7 @@ assert exc_val is exc_val2 assert exc_tb is exc_tb2.tb_next - def test_reraise(self): - # some collection of funny code - import sys + def test_reraise_1(self): raises(ValueError, """ import sys try: @@ -79,6 +77,8 @@ assert sys.exc_info()[0] is ValueError raise """) + + def test_reraise_2(self): raises(ValueError, """ def foo(): import sys @@ -92,6 +92,8 @@ finally: foo() """) + + def test_reraise_3(self): raises(IndexError, """ def spam(): import sys @@ -109,6 +111,8 @@ spam() """) + def test_reraise_4(self): + import sys try: raise ValueError except: @@ -118,6 +122,7 @@ ok = sys.exc_info()[0] is KeyError assert ok + def test_reraise_5(self): raises(IndexError, """ import sys try: From noreply at buildbot.pypy.org Mon Feb 13 18:27:09 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Mon, 13 Feb 2012 
18:27:09 +0100 (CET) Subject: [pypy-commit] pypy py3k: kill this test, we can no longer raise tuples Message-ID: <20120213172709.6A92882B69@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52419:3e27a580412d Date: 2012-02-11 11:50 +0100 http://bitbucket.org/pypy/pypy/changeset/3e27a580412d/ Log: kill this test, we can no longer raise tuples diff --git a/pypy/interpreter/test/test_raise.py b/pypy/interpreter/test/test_raise.py --- a/pypy/interpreter/test/test_raise.py +++ b/pypy/interpreter/test/test_raise.py @@ -139,11 +139,6 @@ assert sys.exc_info()[2].tb_next is some_traceback """) - def test_tuple_type(self): - def f(): - raise ((StopIteration, 123), 456, 789) - raises(StopIteration, f) - def test_userclass(self): # new-style classes can't be raised unless they inherit from # BaseException From noreply at buildbot.pypy.org Mon Feb 13 18:27:04 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Mon, 13 Feb 2012 18:27:04 +0100 (CET) Subject: [pypy-commit] pypy py3k: python3 changed the behavior in case we raise an exception from within a finally block: in python2 it was ignored and the main exception went through, in python3 the new exception is raised and the old one is set as __context__. Fix two tests to account for this change, they now pass with -A (but still fail on py.py) Message-ID: <20120213172704.CED908204C@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52416:c5a89c7f55c5 Date: 2012-02-11 02:16 +0100 http://bitbucket.org/pypy/pypy/changeset/c5a89c7f55c5/ Log: python3 changed the behavior in case we raise an exception from within a finally block: in python2 it was ignored and the main exception went through, in python3 the new exception is raised and the old one is set as __context__. 
Fix two tests to account for this change, they now pass with -A (but still fail on py.py) diff --git a/pypy/interpreter/test/test_raise.py b/pypy/interpreter/test/test_raise.py --- a/pypy/interpreter/test/test_raise.py +++ b/pypy/interpreter/test/test_raise.py @@ -66,7 +66,7 @@ assert exc_tb is exc_tb2.tb_next def test_reraise_1(self): - raises(ValueError, """ + raises(IndexError, """ import sys try: raise ValueError @@ -74,15 +74,15 @@ try: raise IndexError finally: - assert sys.exc_info()[0] is ValueError + assert sys.exc_info()[0] is IndexError raise """) def test_reraise_2(self): - raises(ValueError, """ + raises(IndexError, """ def foo(): import sys - assert sys.exc_info()[0] is ValueError + assert sys.exc_info()[0] is IndexError raise try: raise ValueError From noreply at buildbot.pypy.org Mon Feb 13 18:27:10 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Mon, 13 Feb 2012 18:27:10 +0100 (CET) Subject: [pypy-commit] pypy py3k: kill this test as well, we can only raise subclasses of BaseException nowadays Message-ID: <20120213172710.EB1388203C@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52420:58bf7b8c4517 Date: 2012-02-11 11:51 +0100 http://bitbucket.org/pypy/pypy/changeset/58bf7b8c4517/ Log: kill this test as well, we can only raise subclasses of BaseException nowadays diff --git a/pypy/interpreter/test/test_raise.py b/pypy/interpreter/test/test_raise.py --- a/pypy/interpreter/test/test_raise.py +++ b/pypy/interpreter/test/test_raise.py @@ -164,51 +164,6 @@ except KeyError: pass - def test_oldstyle_userclass(self): - class A: - def __init__(self, val=None): - self.val = val - class Sub(A): - pass - - try: - raise Sub - except IndexError: - assert 0 - except A, a: - assert a.__class__ is Sub - - sub = Sub() - try: - raise sub - except IndexError: - assert 0 - except A, a: - assert a is sub - - try: - raise A, sub - except IndexError: - assert 0 - except A, a: - assert a is sub - assert sub.val is None - - try: - raise Sub, 42 - except IndexError: - assert 0 - except A, a: - assert a.__class__ is Sub - assert a.val == 42 - - try: - {}[5] - except A, a: - assert 0 - except KeyError: - pass - def test_catch_tuple(self): class A: pass From noreply at buildbot.pypy.org Mon Feb 13 18:27:06 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Mon, 13 Feb 2012 18:27:06 +0100 (CET) Subject: [pypy-commit] pypy py3k: f_exc_type no longer exists. Rewrite the test using exc_info(), and also the type of the exception is different Message-ID: <20120213172706.5CB2982B1F@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52417:46a8ec490951 Date: 2012-02-11 11:33 +0100 http://bitbucket.org/pypy/pypy/changeset/46a8ec490951/ Log: f_exc_type no longer exists. 
Rewrite the test using exc_info(), and also the type of the exception is different diff --git a/pypy/interpreter/test/test_raise.py b/pypy/interpreter/test/test_raise.py --- a/pypy/interpreter/test/test_raise.py +++ b/pypy/interpreter/test/test_raise.py @@ -101,7 +101,7 @@ raise KeyError except KeyError: pass - assert sys._getframe().f_exc_type is ValueError + assert sys.exc_info()[0] is IndexError try: raise ValueError except: From noreply at buildbot.pypy.org Mon Feb 13 18:27:12 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Mon, 13 Feb 2012 18:27:12 +0100 (CET) Subject: [pypy-commit] pypy py3k: again, we need to subclass Exception to raise/catch exceptions Message-ID: <20120213172712.2101D8203C@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52421:6e48dcb29be4 Date: 2012-02-11 11:53 +0100 http://bitbucket.org/pypy/pypy/changeset/6e48dcb29be4/ Log: again, we need to subclass Exception to raise/catch exceptions diff --git a/pypy/interpreter/test/test_raise.py b/pypy/interpreter/test/test_raise.py --- a/pypy/interpreter/test/test_raise.py +++ b/pypy/interpreter/test/test_raise.py @@ -165,7 +165,7 @@ pass def test_catch_tuple(self): - class A: + class A(Exception): pass try: From noreply at buildbot.pypy.org Mon Feb 13 18:27:07 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Mon, 13 Feb 2012 18:27:07 +0100 (CET) Subject: [pypy-commit] pypy py3k: try to refactor this test to pass on python3 with -A Message-ID: <20120213172707.DFA5382B68@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52418:86bdb9ba2780 Date: 2012-02-11 11:49 +0100 http://bitbucket.org/pypy/pypy/changeset/86bdb9ba2780/ Log: try to refactor this test to pass on python3 with -A diff --git a/pypy/interpreter/test/test_raise.py b/pypy/interpreter/test/test_raise.py --- a/pypy/interpreter/test/test_raise.py +++ b/pypy/interpreter/test/test_raise.py @@ -133,10 +133,10 @@ raise KeyError except: try: - raise IndexError, IndexError(), some_traceback + raise IndexError().with_traceback(some_traceback) finally: - assert sys.exc_info()[0] is KeyError - assert sys.exc_info()[2] is not some_traceback + assert sys.exc_info()[0] is IndexError + assert sys.exc_info()[2].tb_next is some_traceback """) def test_tuple_type(self): From noreply at buildbot.pypy.org Mon Feb 13 18:27:13 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Mon, 13 Feb 2012 18:27:13 +0100 (CET) Subject: [pypy-commit] pypy py3k: we cannot catch '42' in py3. Not sure whether it's essential for the point of the test, removing this except clause does not change anything even on the default branch Message-ID: <20120213172713.462D88203C@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52422:cc89478806f3 Date: 2012-02-11 11:58 +0100 http://bitbucket.org/pypy/pypy/changeset/cc89478806f3/ Log: we cannot catch '42' in py3. Not sure whether it's essential for the point of the test, removing this except clause does not change anything even on the default branch diff --git a/pypy/interpreter/test/test_raise.py b/pypy/interpreter/test/test_raise.py --- a/pypy/interpreter/test/test_raise.py +++ b/pypy/interpreter/test/test_raise.py @@ -206,8 +206,6 @@ a = A() flag = True raise a - except 42: - pass except A: pass From noreply at buildbot.pypy.org Mon Feb 13 18:27:14 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Mon, 13 Feb 2012 18:27:14 +0100 (CET) Subject: [pypy-commit] pypy py3k: improve the raises() function which is used inside -A tests. 
Now test_raises -A passes on cpython3 Message-ID: <20120213172714.C41438203C@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52423:aaee3698b598 Date: 2012-02-13 11:35 +0100 http://bitbucket.org/pypy/pypy/changeset/aaee3698b598/ Log: improve the raises() function which is used inside -A tests. Now test_raises -A passes on cpython3 diff --git a/pypy/conftest.py b/pypy/conftest.py --- a/pypy/conftest.py +++ b/pypy/conftest.py @@ -200,14 +200,18 @@ def run_with_python(python, target): if python is None: py.test.skip("Cannot find the default python3 interpreter to run with -A") - helpers = """if 1: + helpers = r"""if 1: def skip(message): print(message) raise SystemExit(0) def raises(exc, func, *args, **kwargs): try: if isinstance(func, str): - exec("if 1:\\n" + func) + if func.startswith(" ") or func.startswith("\n"): + # it's probably an indented block, so we prefix if True: + # to avoid SyntaxError + func = "if True:\n" + func + exec(func) else: func(*args, **kwargs) except exc: From noreply at buildbot.pypy.org Mon Feb 13 18:27:15 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Mon, 13 Feb 2012 18:27:15 +0100 (CET) Subject: [pypy-commit] pypy py3k: simplify the code in RAISE_VARARGS now that we no longer support the form with three arguments Message-ID: <20120213172715.F03D88203C@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52424:fd1a90055027 Date: 2012-02-13 12:01 +0100 http://bitbucket.org/pypy/pypy/changeset/fd1a90055027/ Log: simplify the code in RAISE_VARARGS now that we no longer support the form with three arguments diff --git a/pypy/interpreter/pyopcode.py b/pypy/interpreter/pyopcode.py --- a/pypy/interpreter/pyopcode.py +++ b/pypy/interpreter/pyopcode.py @@ -467,6 +467,8 @@ @jit.unroll_safe def RAISE_VARARGS(self, nbargs, next_instr): space = self.space + if nbargs > 2: + raise BytecodeCorruption("bad RAISE_VARARGS oparg") if nbargs == 0: frame = self ec = self.space.getexecutioncontext() @@ -481,12 +483,10 @@ # re-raise, no new traceback obj will be attached self.last_exception = operror raise Reraise - w_value = w_cause = space.w_None - if nbargs >= 2: + if nbargs == 2: w_cause = self.popvalue() - if 1: - w_value = self.popvalue() + w_value = self.popvalue() if space.exception_is_valid_obj_as_class_w(w_value): w_type = w_value w_value = space.call_function(w_type) @@ -494,16 +494,7 @@ w_type = space.type(w_value) operror = OperationError(w_type, w_value, w_cause=w_cause) operror.normalize_exception(space) - w_traceback = space.w_None # XXX with_traceback? - if not space.full_exceptions or space.is_w(w_traceback, space.w_None): - # common case - raise operror - else: - msg = "raise: arg 3 must be a traceback or None" - tb = pytraceback.check_traceback(space, w_traceback, msg) - operror.set_traceback(tb) - # special 3-arguments raise, no new traceback obj will be attached - raise RaiseWithExplicitTraceback(operror) + raise operror def LOAD_LOCALS(self, oparg, next_instr): self.pushvalue(self.w_locals) From noreply at buildbot.pypy.org Mon Feb 13 18:27:17 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Mon, 13 Feb 2012 18:27:17 +0100 (CET) Subject: [pypy-commit] pypy py3k: in python3, when we enter the finally: block because of an exception sys.exc_info() returns the current one, as it happens for except: blocks. 
In python2, the current exception was updated only inside the latters Message-ID: <20120213172717.239D88203C@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52425:7ab281a6cae3 Date: 2012-02-13 13:46 +0100 http://bitbucket.org/pypy/pypy/changeset/7ab281a6cae3/ Log: in python3, when we enter the finally: block because of an exception sys.exc_info() returns the current one, as it happens for except: blocks. In python2, the current exception was updated only inside the latters diff --git a/pypy/interpreter/pyopcode.py b/pypy/interpreter/pyopcode.py --- a/pypy/interpreter/pyopcode.py +++ b/pypy/interpreter/pyopcode.py @@ -1298,9 +1298,16 @@ # the block unrolling and the entering the finally: handler. # see comments in cleanup(). self.cleanupstack(frame) + operationerr = None + if isinstance(unroller, SApplicationException): + operationerr = unroller.operr + if frame.space.full_exceptions: + operationerr.normalize_exception(frame.space) frame.pushvalue(frame.space.wrap(unroller)) frame.pushvalue(frame.space.w_None) frame.pushvalue(frame.space.w_None) + if operationerr: + frame.last_exception = operationerr return self.handlerposition # jump to the handler From noreply at buildbot.pypy.org Mon Feb 13 18:27:18 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Mon, 13 Feb 2012 18:27:18 +0100 (CET) Subject: [pypy-commit] pypy py3k: make sure that .dump() works again Message-ID: <20120213172718.4E2628203C@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52426:9e5364a377d1 Date: 2012-02-13 16:56 +0100 http://bitbucket.org/pypy/pypy/changeset/9e5364a377d1/ Log: make sure that .dump() works again diff --git a/pypy/interpreter/pycode.py b/pypy/interpreter/pycode.py --- a/pypy/interpreter/pycode.py +++ b/pypy/interpreter/pycode.py @@ -262,8 +262,9 @@ else: consts[num] = self.space.unwrap(w) num += 1 + assert self.co_kwonlyargcount == 0, 'kwonlyargcount is py3k only, cannot turn this code object into a Python2 one' return new.code( self.co_argcount, - self.co_kwonlyargcount, + #self.co_kwonlyargcount, # this does not exists in python2 self.co_nlocals, self.co_stacksize, self.co_flags, @@ -276,7 +277,7 @@ self.co_firstlineno, self.co_lnotab, tuple(self.co_freevars), - tuple(self.co_cellvars) ) + tuple(self.co_cellvars)) def exec_host_bytecode(self, w_globals, w_locals): from pypy.interpreter.pyframe import CPythonFrame From noreply at buildbot.pypy.org Mon Feb 13 18:27:19 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Mon, 13 Feb 2012 18:27:19 +0100 (CET) Subject: [pypy-commit] pypy py3k: add two tests which fails because we don't emit/implement POP_EXCEPT Message-ID: <20120213172719.763DC8203C@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52427:b4da0d057eac Date: 2012-02-13 18:26 +0100 http://bitbucket.org/pypy/pypy/changeset/b4da0d057eac/ Log: add two tests which fails because we don't emit/implement POP_EXCEPT diff --git a/pypy/interpreter/test/test_raise.py b/pypy/interpreter/test/test_raise.py --- a/pypy/interpreter/test/test_raise.py +++ b/pypy/interpreter/test/test_raise.py @@ -50,6 +50,29 @@ else: raise AssertionError("shouldn't be able to raise 1") + def test_revert_exc_info_1(self): + import sys + assert sys.exc_info() == (None, None, None) + try: + raise ValueError + except: + pass + assert sys.exc_info() == (None, None, None) + + def test_revert_exc_info_2(self): + import sys + assert sys.exc_info() == (None, None, None) + try: + raise ValueError + except: + try: + raise IndexError + except: + 
assert sys.exc_info()[0] is IndexError + assert sys.exc_info()[0] is ValueError + assert sys.exc_info() == (None, None, None) + + def test_raise_with___traceback__(self): import sys try: From noreply at buildbot.pypy.org Mon Feb 13 18:38:54 2012 From: noreply at buildbot.pypy.org (hager) Date: Mon, 13 Feb 2012 18:38:54 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: (bivab, hager): Fix bug in emit_unicodesetitem. Do not overwrite managed locations! Message-ID: <20120213173854.C772A8203C@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r52428:1dfbe17803ed Date: 2012-02-13 18:34 +0100 http://bitbucket.org/pypy/pypy/changeset/1dfbe17803ed/ Log: (bivab, hager): Fix bug in emit_unicodesetitem. Do not overwrite managed locations! diff --git a/pypy/jit/backend/ppc/opassembler.py b/pypy/jit/backend/ppc/opassembler.py --- a/pypy/jit/backend/ppc/opassembler.py +++ b/pypy/jit/backend/ppc/opassembler.py @@ -814,15 +814,16 @@ emit_unicodelen = StrOpAssembler.emit_strlen - # XXX 64 bit adjustment def emit_unicodegetitem(self, op, arglocs, regalloc): + # res is used as a temporary location + # => it is save to use it before loading the result res, base_loc, ofs_loc, scale, basesize, itemsize = arglocs if IS_PPC_32: - self.mc.slwi(ofs_loc.value, ofs_loc.value, scale.value) + self.mc.slwi(res.value, ofs_loc.value, scale.value) else: - self.mc.sldi(ofs_loc.value, ofs_loc.value, scale.value) - self.mc.add(res.value, base_loc.value, ofs_loc.value) + self.mc.sldi(res.value, ofs_loc.value, scale.value) + self.mc.add(res.value, base_loc.value, res.value) if scale.value == 2: self.mc.lwz(res.value, res.value, basesize.value) @@ -831,20 +832,19 @@ else: assert 0, itemsize.value - # XXX 64 bit adjustment def emit_unicodesetitem(self, op, arglocs, regalloc): - value_loc, base_loc, ofs_loc, scale, basesize, itemsize = arglocs + value_loc, base_loc, ofs_loc, temp_loc, scale, basesize, itemsize = arglocs if IS_PPC_32: - self.mc.slwi(ofs_loc.value, ofs_loc.value, scale.value) + self.mc.slwi(temp_loc.value, ofs_loc.value, scale.value) else: - self.mc.sldi(ofs_loc.value, ofs_loc.value, scale.value) - self.mc.add(base_loc.value, base_loc.value, ofs_loc.value) + self.mc.sldi(temp_loc.value, ofs_loc.value, scale.value) + self.mc.add(temp_loc.value, base_loc.value, temp_loc.value) if scale.value == 2: - self.mc.stw(value_loc.value, base_loc.value, basesize.value) + self.mc.stw(value_loc.value, temp_loc.value, basesize.value) elif scale.value == 1: - self.mc.sth(value_loc.value, base_loc.value, basesize.value) + self.mc.sth(value_loc.value, temp_loc.value, basesize.value) else: assert 0, itemsize.value diff --git a/pypy/jit/backend/ppc/regalloc.py b/pypy/jit/backend/ppc/regalloc.py --- a/pypy/jit/backend/ppc/regalloc.py +++ b/pypy/jit/backend/ppc/regalloc.py @@ -747,10 +747,11 @@ base_loc = self._ensure_value_is_boxed(boxes[0], boxes) ofs_loc = self._ensure_value_is_boxed(boxes[1], boxes) value_loc = self._ensure_value_is_boxed(boxes[2], boxes) + temp_loc = self.get_scratch_reg(INT, boxes) basesize, itemsize, ofs_length = symbolic.get_array_token(rstr.UNICODE, self.cpu.translate_support_code) scale = itemsize / 2 - return [value_loc, base_loc, ofs_loc, + return [value_loc, base_loc, ofs_loc, temp_loc, imm(scale), imm(basesize), imm(itemsize)] def prepare_same_as(self, op): From noreply at buildbot.pypy.org Mon Feb 13 18:38:56 2012 From: noreply at buildbot.pypy.org (hager) Date: Mon, 13 Feb 2012 18:38:56 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: merge Message-ID: 
<20120213173856.5886F8203C@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r52429:68416fec227f Date: 2012-02-13 18:38 +0100 http://bitbucket.org/pypy/pypy/changeset/68416fec227f/ Log: merge diff --git a/pypy/jit/backend/ppc/ppc_assembler.py b/pypy/jit/backend/ppc/ppc_assembler.py --- a/pypy/jit/backend/ppc/ppc_assembler.py +++ b/pypy/jit/backend/ppc/ppc_assembler.py @@ -7,7 +7,7 @@ from pypy.jit.backend.ppc.assembler import Assembler from pypy.jit.backend.ppc.opassembler import OpAssembler from pypy.jit.backend.ppc.symbol_lookup import lookup -from pypy.jit.backend.ppc.codebuilder import PPCBuilder +from pypy.jit.backend.ppc.codebuilder import PPCBuilder, OverwritingBuilder from pypy.jit.backend.ppc.jump import remap_frame_layout from pypy.jit.backend.ppc.arch import (IS_PPC_32, IS_PPC_64, WORD, NONVOLATILES, MAX_REG_PARAMS, @@ -20,6 +20,7 @@ decode32, decode64, count_reg_args, Saved_Volatiles) +from pypy.jit.backend.ppc.helper.regalloc import _check_imm_arg import pypy.jit.backend.ppc.register as r import pypy.jit.backend.ppc.condition as c from pypy.jit.metainterp.history import (Const, ConstPtr, JitCellToken, @@ -279,6 +280,28 @@ locs.append(loc) return locs + def _build_malloc_slowpath(self): + mc = PPCBuilder() + with Saved_Volatiles(mc): + # Values to compute size stored in r3 and r4 + mc.subf(r.r3.value, r.r3.value, r.r4.value) + addr = self.cpu.gc_ll_descr.get_malloc_slowpath_addr() + mc.call(addr) + + mc.cmp_op(0, r.r3.value, 0, imm=True) + jmp_pos = mc.currpos() + mc.nop() + nursery_free_adr = self.cpu.gc_ll_descr.get_nursery_free_addr() + mc.load_imm(r.r4, nursery_free_adr) + mc.load(r.r4.value, r.r4.value, 0) + + pmc = OverwritingBuilder(mc, jmp_pos, 1) + pmc.bc(4, 2, jmp_pos) # jump if the two values are equal + pmc.overwrite() + mc.b_abs(self.propagate_exception_path) + rawstart = mc.materialize(self.cpu.asmmemmgr, []) + self.malloc_slowpath = rawstart + def _build_propagate_exception_path(self): if self.cpu.propagate_exception_v < 0: return @@ -383,8 +406,8 @@ gc_ll_descr = self.cpu.gc_ll_descr gc_ll_descr.initialize() self._build_propagate_exception_path() - #if gc_ll_descr.get_malloc_slowpath_addr is not None: - # self._build_malloc_slowpath() + if gc_ll_descr.get_malloc_slowpath_addr is not None: + self._build_malloc_slowpath() if gc_ll_descr.gcrootmap and gc_ll_descr.gcrootmap.is_shadow_stack: self._build_release_gil(gc_ll_descr.gcrootmap) self.memcpy_addr = self.cpu.cast_ptr_to_int(memcpy_fn) @@ -892,6 +915,51 @@ else: self.mc.extsw(resloc.value, resloc.value) + def malloc_cond(self, nursery_free_adr, nursery_top_adr, size): + assert size & (WORD-1) == 0 # must be correctly aligned + size = max(size, self.cpu.gc_ll_descr.minimal_size_in_nursery) + size = (size + WORD - 1) & ~(WORD - 1) # round up + + self.mc.load_imm(r.r3, nursery_free_adr) + self.mc.load(r.r3.value, r.r3.value, 0) + + if _check_imm_arg(size): + self.mc.addi(r.r4.value, r.r3.value, size) + else: + self.mc.load_imm(r.r4, size) + self.mc.add(r.r4.value, r.r3.value, r.r4.value) + + # XXX maybe use an offset from the value nursery_free_addr + self.mc.load_imm(r.r3, nursery_top_adr) + self.mc.load(r.r3.value, r.r3.value, 0) + + self.mc.cmp_op(0, r.r4.value, r.r3.value, signed=False) + + fast_jmp_pos = self.mc.currpos() + self.mc.nop() + + # XXX update + # See comments in _build_malloc_slowpath for the + # details of the two helper functions that we are calling below. 
+ # First, we need to call two of them and not just one because we + # need to have a mark_gc_roots() in between. Then the calling + # convention of slowpath_addr{1,2} are tweaked a lot to allow + # the code here to be just two CALLs: slowpath_addr1 gets the + # size of the object to allocate from (EDX-EAX) and returns the + # result in EAX; self.malloc_slowpath additionally returns in EDX a + # copy of heap(nursery_free_adr), so that the final MOV below is + # a no-op. + self.mark_gc_roots(self.write_new_force_index(), + use_copy_area=True) + self.mc.call(self.malloc_slowpath) + + offset = self.mc.currpos() - fast_jmp_pos + pmc = OverwritingBuilder(self.mc, fast_jmp_pos, 1) + pmc.bc(4, 1, offset) # jump if LE (not GT) + + self.mc.load_imm(r.r3, nursery_free_adr) + self.mc.store(r.r4.value, r.r3.value, 0) + def mark_gc_roots(self, force_index, use_copy_area=False): if force_index < 0: return # not needed diff --git a/pypy/jit/backend/ppc/regalloc.py b/pypy/jit/backend/ppc/regalloc.py --- a/pypy/jit/backend/ppc/regalloc.py +++ b/pypy/jit/backend/ppc/regalloc.py @@ -11,8 +11,9 @@ prepare_binary_int_op, prepare_binary_int_op_with_imm, prepare_unary_cmp) -from pypy.jit.metainterp.history import (INT, REF, FLOAT, Const, ConstInt, - ConstPtr, Box) +from pypy.jit.metainterp.history import (Const, ConstInt, ConstFloat, ConstPtr, + Box, BoxPtr, + INT, REF, FLOAT) from pypy.jit.metainterp.history import JitCellToken, TargetToken from pypy.jit.metainterp.resoperation import rop from pypy.jit.backend.ppc import locations @@ -507,17 +508,14 @@ gcrootmap = self.cpu.gc_ll_descr.gcrootmap if gcrootmap: arglocs = [] - argboxes = [] + args = op.getarglist() for i in range(op.numargs()): - loc, box = self._ensure_value_is_boxed(op.getarg(i), argboxes) + loc = self._ensure_value_is_boxed(op.getarg(i), args) arglocs.append(loc) - argboxes.append(box) self.assembler.call_release_gil(gcrootmap, arglocs) - self.possibly_free_vars(argboxes) # do the call faildescr = guard_op.getdescr() fail_index = self.cpu.get_fail_descr_number(faildescr) - self.assembler._write_fail_index(fail_index) args = [imm(rffi.cast(lltype.Signed, op.getarg(0).getint()))] self.assembler.emit_call(op, args, self, fail_index) # then reopen the stack @@ -778,6 +776,43 @@ args = [imm(rffi.cast(lltype.Signed, op.getarg(0).getint()))] return args + def prepare_call_malloc_nursery(self, op): + size_box = op.getarg(0) + assert isinstance(size_box, ConstInt) + size = size_box.getint() + + self.rm.force_allocate_reg(op.result, selected_reg=r.r3) + t = TempInt() + self.rm.force_allocate_reg(t, selected_reg=r.r4) + self.possibly_free_var(op.result) + self.possibly_free_var(t) + + gc_ll_descr = self.assembler.cpu.gc_ll_descr + self.assembler.malloc_cond( + gc_ll_descr.get_nursery_free_addr(), + gc_ll_descr.get_nursery_top_addr(), + size + ) + + def get_mark_gc_roots(self, gcrootmap, use_copy_area=False): + shape = gcrootmap.get_basic_shape(False) + for v, val in self.frame_manager.bindings.items(): + if (isinstance(v, BoxPtr) and self.rm.stays_alive(v)): + assert val.is_stack() + gcrootmap.add_frame_offset(shape, val.position * -WORD) + for v, reg in self.rm.reg_bindings.items(): + if reg is r.r3: + continue + if (isinstance(v, BoxPtr) and self.rm.stays_alive(v)): + if use_copy_area: + assert reg in self.rm.REGLOC_TO_COPY_AREA_OFS + area_offset = self.rm.REGLOC_TO_COPY_AREA_OFS[reg] + gcrootmap.add_frame_offset(shape, area_offset) + else: + assert 0, 'sure??' 
+ return gcrootmap.compress_callshape(shape, + self.assembler.datablockwrapper) + prepare_debug_merge_point = void prepare_jit_debug = void @@ -788,11 +823,10 @@ # because it will be needed anyway by the following setfield_gc # or setarrayitem_gc. It avoids loading it twice from the memory. arglocs = [] - argboxes = [] + args = op.getarglist() for i in range(N): - loc = self._ensure_value_is_boxed(op.getarg(i), argboxes) + loc = self._ensure_value_is_boxed(op.getarg(i), args) arglocs.append(loc) - self.rm.possibly_free_vars(argboxes) return arglocs prepare_cond_call_gc_wb_array = prepare_cond_call_gc_wb From noreply at buildbot.pypy.org Mon Feb 13 19:05:46 2012 From: noreply at buildbot.pypy.org (edelsohn) Date: Mon, 13 Feb 2012 19:05:46 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: Comment out Loop start message. Message-ID: <20120213180546.E1BE28203C@wyvern.cs.uni-duesseldorf.de> Author: edelsohn Branch: ppc-jit-backend Changeset: r52430:7cd5de17030d Date: 2012-02-13 13:04 -0500 http://bitbucket.org/pypy/pypy/changeset/7cd5de17030d/ Log: Comment out Loop start message. diff --git a/pypy/jit/backend/ppc/ppc_assembler.py b/pypy/jit/backend/ppc/ppc_assembler.py --- a/pypy/jit/backend/ppc/ppc_assembler.py +++ b/pypy/jit/backend/ppc/ppc_assembler.py @@ -715,8 +715,8 @@ allblocks = self.get_asmmemmgr_blocks(looptoken) start = self.mc.materialize(self.cpu.asmmemmgr, allblocks, self.cpu.gc_ll_descr.gcrootmap) - from pypy.rlib.rarithmetic import r_uint - print "=== Loop start is at %s ===" % hex(r_uint(start)) + #from pypy.rlib.rarithmetic import r_uint + #print "=== Loop start is at %s ===" % hex(r_uint(start)) return start def write_pending_failure_recoveries(self): From noreply at buildbot.pypy.org Mon Feb 13 19:05:48 2012 From: noreply at buildbot.pypy.org (edelsohn) Date: Mon, 13 Feb 2012 19:05:48 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: Don't use for loop. Message-ID: <20120213180548.1508F8203C@wyvern.cs.uni-duesseldorf.de> Author: edelsohn Branch: ppc-jit-backend Changeset: r52431:53603cb4101c Date: 2012-02-13 13:05 -0500 http://bitbucket.org/pypy/pypy/changeset/53603cb4101c/ Log: Don't use for loop. diff --git a/pypy/jit/backend/ppc/helper/assembler.py b/pypy/jit/backend/ppc/helper/assembler.py --- a/pypy/jit/backend/ppc/helper/assembler.py +++ b/pypy/jit/backend/ppc/helper/assembler.py @@ -82,9 +82,14 @@ mem[i] = chr((n >> 56) & 0xFF) def decode64(mem, index): - value = 0 - for x in range(8): - value |= (ord(mem[index + x]) << (56 - x * 8)) + value = ( ord(mem[index+7]) + | ord(mem[index+6]) << 8 + | ord(mem[index+5]) << 16 + | ord(mem[index+4]) << 24 + | ord(mem[index+3]) << 32 + | ord(mem[index+2]) << 40 + | ord(mem[index+1]) << 48 + | ord(mem[index]) << 56) return intmask(value) def count_reg_args(args): From noreply at buildbot.pypy.org Mon Feb 13 21:54:16 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Mon, 13 Feb 2012 21:54:16 +0100 (CET) Subject: [pypy-commit] pypy default: cpyext: expose PyClass_Type. Message-ID: <20120213205416.73BD68203C@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: Changeset: r52432:fd14bc0aec12 Date: 2012-02-13 21:53 +0100 http://bitbucket.org/pypy/pypy/changeset/fd14bc0aec12/ Log: cpyext: expose PyClass_Type. 
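Some context for the PyClass_Type change in the patch that follows: PyClass_Type is the C-level type of old-style (classic) Python 2 classes, i.e. types.ClassType at application level, as opposed to type, which backs new-style classes. A small Python 2 sketch of that distinction; the class names here are made up for illustration and are not part of the patch:

    import types

    class Classic:              # no base class: an old-style class
        pass                    # type(Classic) is what PyClass_Type exposes

    class NewStyle(object):     # new-style class, backed by PyType_Type
        pass

    assert type(Classic) is types.ClassType
    assert type(NewStyle) is type

The test added in the patch below performs the same check from C: the extension function returns &PyClass_Type, and the app-level assertion compares it with type(C) for a classic class C.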
diff --git a/pypy/module/cpyext/api.py b/pypy/module/cpyext/api.py --- a/pypy/module/cpyext/api.py +++ b/pypy/module/cpyext/api.py @@ -23,6 +23,7 @@ from pypy.interpreter.function import StaticMethod from pypy.objspace.std.sliceobject import W_SliceObject from pypy.module.__builtin__.descriptor import W_Property +from pypy.module.__builtin__.interp_classobj import W_ClassObject from pypy.module.__builtin__.interp_memoryview import W_MemoryView from pypy.rlib.entrypoint import entrypoint from pypy.rlib.unroll import unrolling_iterable @@ -397,6 +398,7 @@ 'Module': 'space.gettypeobject(Module.typedef)', 'Property': 'space.gettypeobject(W_Property.typedef)', 'Slice': 'space.gettypeobject(W_SliceObject.typedef)', + 'Class': 'space.gettypeobject(W_ClassObject.typedef)', 'StaticMethod': 'space.gettypeobject(StaticMethod.typedef)', 'CFunction': 'space.gettypeobject(cpyext.methodobject.W_PyCFunctionObject.typedef)', 'WrapperDescr': 'space.gettypeobject(cpyext.methodobject.W_PyCMethodObject.typedef)' diff --git a/pypy/module/cpyext/test/test_classobject.py b/pypy/module/cpyext/test/test_classobject.py --- a/pypy/module/cpyext/test/test_classobject.py +++ b/pypy/module/cpyext/test/test_classobject.py @@ -1,4 +1,5 @@ from pypy.module.cpyext.test.test_api import BaseApiTest +from pypy.module.cpyext.test.test_cpyext import AppTestCpythonExtensionBase from pypy.interpreter.function import Function, Method class TestClassObject(BaseApiTest): @@ -51,3 +52,14 @@ assert api.PyInstance_Check(w_instance) assert space.is_true(space.call_method(space.builtin, "isinstance", w_instance, w_class)) + +class AppTestStringObject(AppTestCpythonExtensionBase): + def test_class_type(self): + module = self.import_extension('foo', [ + ("get_classtype", "METH_NOARGS", + """ + Py_INCREF(&PyClass_Type); + return &PyClass_Type; + """)]) + class C: pass + assert module.get_classtype() is type(C) From notifications-noreply at bitbucket.org Mon Feb 13 22:20:39 2012 From: notifications-noreply at bitbucket.org (Bitbucket) Date: Mon, 13 Feb 2012 21:20:39 -0000 Subject: [pypy-commit] Notification: pypy Message-ID: <20120213212039.5003.98448@bitbucket03.managed.contegix.com> You have received a notification from Manuel Jacob. Hi, I forked pypy. My fork is at https://bitbucket.org/mjacob/pypy. -- Disable notifications at https://bitbucket.org/account/notifications/ From noreply at buildbot.pypy.org Mon Feb 13 22:40:36 2012 From: noreply at buildbot.pypy.org (fijal) Date: Mon, 13 Feb 2012 22:40:36 +0100 (CET) Subject: [pypy-commit] pypy numpy-record-dtypes: progress on storing record boxes (and reading). Not quite working , pdb left Message-ID: <20120213214036.F00668203C@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-record-dtypes Changeset: r52433:bd3a6f174333 Date: 2012-02-13 23:40 +0200 http://bitbucket.org/pypy/pypy/changeset/bd3a6f174333/ Log: progress on storing record boxes (and reading). 
Not quite working , pdb left where it's left diff --git a/pypy/module/micronumpy/interp_boxes.py b/pypy/module/micronumpy/interp_boxes.py --- a/pypy/module/micronumpy/interp_boxes.py +++ b/pypy/module/micronumpy/interp_boxes.py @@ -170,24 +170,35 @@ pass class W_VoidBox(W_FlexibleBox): - def __init__(self, dtype, arr): - self.arr = arr - self.dtype = dtype + def __init__(self, arr, i): + self.arr = arr # we have to keep array alive + self.i = i def get_dtype(self, space): - return self.dtype + return self.arr.dtype @unwrap_spec(item=str) def descr_getitem(self, space, item): try: - ofs, dtype = self.dtype.fields[item] + ofs, dtype = self.arr.dtype.fields[item] except KeyError: - raise OperationError(space.w_KeyError, space.wrap("Field %s does not exist" % item)) - return dtype.itemtype.read(dtype, self.arr, - dtype.itemtype.get_element_size(), 0, ofs) + raise OperationError(space.w_IndexError, + space.wrap("Field %s does not exist" % item)) + self.arr.dtype.itemtype.get_element_size() + return dtype.itemtype.read(self.arr, + dtype.itemtype.get_element_size(), self.i, + ofs) - def __del__(self): - lltype.free(self.arr, flavor='raw', track_allocation=False) + @unwrap_spec(item=str) + def descr_setitem(self, space, item, w_value): + try: + ofs, dtype = self.arr.dtype.fields[item] + except KeyError: + raise OperationError(space.w_IndexError, + space.wrap("Field %s does not exist" % item)) + dtype.itemtype.store(self.arr, + dtype.itemtype.get_element_size(), 0, ofs, + dtype.coerce(space, w_value)) class W_CharacterBox(W_FlexibleBox): pass @@ -328,6 +339,7 @@ W_VoidBox.typedef = TypeDef("void", W_FlexibleBox.typedef, __module__ = "numpypy", __getitem__ = interp2app(W_VoidBox.descr_getitem), + __setitem__ = interp2app(W_VoidBox.descr_setitem), ) W_CharacterBox.typedef = TypeDef("character", W_FlexibleBox.typedef, diff --git a/pypy/module/micronumpy/interp_dtype.py b/pypy/module/micronumpy/interp_dtype.py --- a/pypy/module/micronumpy/interp_dtype.py +++ b/pypy/module/micronumpy/interp_dtype.py @@ -8,7 +8,6 @@ from pypy.module.micronumpy import types, interp_boxes from pypy.rlib.objectmodel import specialize from pypy.rlib.rarithmetic import LONG_BIT, r_longlong, r_ulonglong -from pypy.rpython.lltypesystem import lltype UNSIGNEDLTR = "u" @@ -19,8 +18,6 @@ STRINGLTR = 'S' UNICODELTR = 'U' -VOID_STORAGE = lltype.Array(lltype.Char, hints={'nolength': True, 'render_as_void': True}) - class W_Dtype(Wrappable): _immutable_fields_ = ["itemtype", "num", "kind"] @@ -37,29 +34,22 @@ self.fields = fields self.fieldnames = fieldnames - def malloc(self, length): - # XXX find out why test_zjit explodes with tracking of allocations - return lltype.malloc(VOID_STORAGE, - self.itemtype.get_element_size() * length, - zero=True, flavor="raw", - track_allocation=False, add_memory_pressure=True) - @specialize.argtype(1) def box(self, value): return self.itemtype.box(value) def coerce(self, space, w_item): - return self.itemtype.coerce(space, w_item) + return self.itemtype.coerce(space, self, w_item) - def getitem(self, storage, i): - return self.itemtype.read(self, storage, self.itemtype.get_element_size(), i, 0) + def getitem(self, arr, i): + return self.itemtype.read(arr, self.itemtype.get_element_size(), i, 0) - def getitem_bool(self, storage, i): + def getitem_bool(self, arr, i): isize = self.itemtype.get_element_size() - return self.itemtype.read_bool(storage, isize, i, 0) + return self.itemtype.read_bool(arr.storage, isize, i, 0) - def setitem(self, storage, i, box): - self.itemtype.store(storage, 
self.itemtype.get_element_size(), i, 0, box) + def setitem(self, arr, i, box): + self.itemtype.store(arr, self.itemtype.get_element_size(), i, 0, box) def fill(self, storage, box, start, stop): self.itemtype.fill(storage, self.itemtype.get_element_size(), box, start, stop, 0) @@ -138,7 +128,7 @@ ofs_and_items.append((offset, subdtype.itemtype)) offset += subdtype.itemtype.get_element_size() fieldnames.append(fldname) - itemtype = types.RecordType(ofs_and_items) + itemtype = types.RecordType(ofs_and_items, offset) return W_Dtype(itemtype, 20, VOIDLTR, "void" + str(8 * itemtype.get_element_size()), "V", space.gettypefor(interp_boxes.W_VoidBox), fields=fields, fieldnames=fieldnames) diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -340,7 +340,7 @@ count_driver.jit_merge_point(arr=arr, frame=frame, iter=iter, s=s, shapelen=shapelen) iter = frame.get_final_iter() - s += arr.dtype.getitem_bool(arr.storage, iter.offset) + s += arr.dtype.getitem_bool(arr, iter.offset) frame.next(shapelen) return s @@ -361,7 +361,7 @@ filter_driver.jit_merge_point(concr=concr, argi=argi, ri=ri, frame=frame, v=v, res=res, sig=sig, shapelen=shapelen, self=self) - if concr.dtype.getitem_bool(concr.storage, argi.offset): + if concr.dtype.getitem_bool(concr, argi.offset): v = sig.eval(frame, self) res.setitem(ri.offset, v) ri = ri.next(1) @@ -382,7 +382,7 @@ filter_set_driver.jit_merge_point(idx=idx, idxi=idxi, sig=sig, frame=frame, arr=arr, shapelen=shapelen) - if idx.dtype.getitem_bool(idx.storage, idxi.offset): + if idx.dtype.getitem_bool(idx, idxi.offset): sig.eval(frame, arr) frame.next_from_second(1) frame.next_first(shapelen) @@ -904,7 +904,7 @@ if parent is not None: self.storage = parent.storage else: - self.storage = dtype.malloc(size) + self.storage = dtype.itemtype.malloc(size) self.order = order self.dtype = dtype if self.strides is None: @@ -923,7 +923,7 @@ return self.dtype def getitem(self, item): - return self.dtype.getitem(self.storage, item) + return self.dtype.getitem(self, item) def setitem(self, item, value): self.invalidated() diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -11,8 +11,10 @@ class MockDtype(object): - def malloc(self, size): - return None + class itemtype(object): + @classmethod + def malloc(size): + return None class TestNumArrayDirect(object): @@ -1796,6 +1798,11 @@ def test_zeros(self): from _numpypy import zeros a = zeros(2, dtype=[('x', int), ('y', float)]) - raises(KeyError, 'a[0]["xyz"]') + raises(IndexError, 'a[0]["xyz"]') assert a[0]['x'] == 0 assert a[0]['y'] == 0 + raises(ValueError, "a[0] = (1, 2, 3)") + a[0]['x'] = 13 + assert a[0]['x'] == 13 + a[1] = (1, 2) + assert a[1]['y'] == 2 diff --git a/pypy/module/micronumpy/types.py b/pypy/module/micronumpy/types.py --- a/pypy/module/micronumpy/types.py +++ b/pypy/module/micronumpy/types.py @@ -11,6 +11,10 @@ from pypy.rpython.lltypesystem import lltype, rffi from pypy.rlib.rstruct.runpack import runpack from pypy.tool.sourcetools import func_with_new_name +from pypy.rlib import jit + +VOID_STORAGE = lltype.Array(lltype.Char, hints={'nolength': True, + 'render_as_void': True}) def simple_unary_op(func): specialize.argtype(1)(func) @@ -65,6 +69,13 @@ # exp = sin = cos = tan = arcsin = arccos = arctan = arcsinh = \ # 
arctanh = _unimplemented_ufunc + def malloc(self, length): + # XXX find out why test_zjit explodes with tracking of allocations + return lltype.malloc(VOID_STORAGE, + self.get_element_size() * length, + zero=True, flavor="raw", + track_allocation=False, add_memory_pressure=True) + class Primitive(object): _mixin_ = True @@ -79,7 +90,7 @@ assert isinstance(box, self.BoxType) return box.value - def coerce(self, space, w_item): + def coerce(self, space, dtype, w_item): if isinstance(w_item, self.BoxType): return w_item return self.coerce_subtype(space, space.gettypefor(self.BoxType), w_item) @@ -104,19 +115,19 @@ return libffi.array_getitem(clibffi.cast_type_to_ffitype(self.T), width, storage, i, offset) - def read(self, dtype, storage, width, i, offset): - return self.box(self._read(storage, width, i, offset)) + def read(self, arr, width, i, offset): + return self.box(self._read(arr.storage, width, i, offset)) - def read_bool(self, storage, width, i, offset): - return bool(self.for_computation(self._read(storage, width, i, offset))) + def read_bool(self, arr, width, i, offset): + return bool(self.for_computation(self._read(arr.storage, width, i, offset))) def _write(self, storage, width, i, offset, value): libffi.array_setitem(clibffi.cast_type_to_ffitype(self.T), width, storage, i, offset, value) - def store(self, storage, width, i, offset, box): - self._write(storage, width, i, offset, self.unbox(box)) + def store(self, arr, width, i, offset, box): + self._write(arr.storage, width, i, offset, self.unbox(box)) def fill(self, storage, width, box, start, stop, offset): value = self.unbox(box) @@ -595,10 +606,9 @@ format_code = "d" class CompositeType(BaseType): - def __init__(self, offsets_and_types): - self.offsets_and_types = offsets_and_types - last_item = offsets_and_types[-1] - self.size = last_item[0] + last_item[1].get_element_size() + def __init__(self, offsets_and_fields, size): + self.offsets_and_fields = offsets_and_fields + self.size = size def get_element_size(self): return self.size @@ -614,7 +624,10 @@ class StringType(BaseType, BaseStringType): T = lltype.Char -VoidType = StringType # why not? 
+ +class VoidType(BaseType, BaseStringType): + T = lltype.Char + NonNativeVoidType = VoidType NonNativeStringType = StringType @@ -624,11 +637,41 @@ NonNativeUnicodeType = UnicodeType class RecordType(CompositeType): - def read(self, dtype, storage, width, i, offset): - arr = dtype.malloc(1) - for j in range(width): - arr[j] = storage[i + j] - return interp_boxes.W_VoidBox(dtype, arr) + T = lltype.Char + + def read(self, arr, width, i, offset): + return interp_boxes.W_VoidBox(arr, i) + + @jit.unroll_safe + def coerce(self, space, dtype, w_item): + from pypy.module.micronumpy.interp_numarray import W_NDimArray + # we treat every sequence as sequence, no special support + # for arrays + if not space.issequence_w(w_item): + raise OperationError(space.w_TypeError, space.wrap( + "expected sequence")) + if len(self.offsets_and_fields) != space.int_w(space.len(w_item)): + raise OperationError(space.w_ValueError, space.wrap( + "wrong length")) + items_w = space.fixedview(w_item) + # XXX optimize it out one day, but for now we just allocate an + # array + arr = W_NDimArray(1, [1], dtype) + for i in range(len(items_w)): + subdtype = dtype.fields[dtype.fieldnames[i]][1] + ofs, itemtype = self.offsets_and_fields[i] + w_item = items_w[i] + w_box = itemtype.coerce(space, subdtype, w_item) + width = itemtype.get_element_size() + import pdb + pdb.set_trace() + itemtype.store(arr, width, 0, ofs, w_box) + return interp_boxes.W_VoidBox(arr, 0) + + @jit.unroll_safe + def store(self, arr, width, i, ofs, box): + for k in range(width): + arr[k + i] = box.arr.storage[k + box.i] for tp in [Int32, Int64]: if tp.T == lltype.Signed: From noreply at buildbot.pypy.org Mon Feb 13 23:23:42 2012 From: noreply at buildbot.pypy.org (fijal) Date: Mon, 13 Feb 2012 23:23:42 +0100 (CET) Subject: [pypy-commit] extradoc extradoc: (fijal, agaynor) start writing slides Message-ID: <20120213222342.B66288203C@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: extradoc Changeset: r4086:0847e4b90c62 Date: 2012-02-13 23:23 +0100 http://bitbucket.org/pypy/extradoc/changeset/0847e4b90c62/ Log: (fijal, agaynor) start writing slides diff --git a/talk/sea2012/talk.rst b/talk/sea2012/talk.rst new file mode 100644 --- /dev/null +++ b/talk/sea2012/talk.rst @@ -0,0 +1,103 @@ +Fast numeric in Python - NumPy and PyPy +======================================= + +What is this talk about? +------------------------ + +* what is pypy and why +* numeric landscape in python +* what we achieved in pypy +* where we're going + +What is PyPy? +------------- + +* **An efficient implementation of Python language** + +* A framework for writing efficient dynamic language implementations + +* An open source project with a lot of volunteer effort + +* I'll talk today about the first part (mostly) + +PyPy status right now +--------------------- + +* An efficient just in time compiler for the Python language + +* Relatively "good" on numerics (compared to other dynamic languages) + +* Example - real time video processing + +* Some comparisons + +Why would you care? +------------------- + +* "If I write this stuff in C it'll be faster anyway" + +* maybe, but ... + +Why would you care (2) +---------------------- + +* Experimentation is important + +* Implementing something faster, in human time, leaves more time for optimizations and improvements + +* For novel algorithms, being clearly expressed in code makes them easier to evaluate (Python is cleaner than C often) + +* Example - memcached server (?) 
XXX think about it + +Numerics in Python +------------------ + +XXX numeric expressions, plots etc. + +Problems with numerics in python +-------------------------------- + +* Stuff is reasonably fast, but... + +* Only if you don't actually write much Python + +* Array operations are fine as long as they're vectorized + +* Not everything is expressable that way + +* Numpy allocates intermediates for each operation, trashing caches + +Our approach +------------ + +* Build a tree of operations + +* Compile assembler specialized for aliasing and operations + +* Execute the specialized assembler + +Examples +-------- + +* ``a + a`` would generate different code than ``a + b`` + +* ``a + b * c`` is as fast as a loop + +Status +------ + +* This works reasonably well + +* Far from implementing the entire numpy, although it's in progress + +* Assembler generation backend needs works + +* No vectorization yet + +Status benchmarks +----------------- + +This is just the beginning... +----------------------------- + +* PyPy is an easy platform to experiment with From noreply at buildbot.pypy.org Tue Feb 14 00:16:05 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Tue, 14 Feb 2012 00:16:05 +0100 (CET) Subject: [pypy-commit] pypy revive-dlltool: A branch to make dlltool work again: build .so that are not extension modules Message-ID: <20120213231605.28BA78203C@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: revive-dlltool Changeset: r52434:1922fd51a332 Date: 2012-02-12 23:17 +0100 http://bitbucket.org/pypy/pypy/changeset/1922fd51a332/ Log: A branch to make dlltool work again: build .so that are not extension modules From noreply at buildbot.pypy.org Tue Feb 14 00:16:06 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Tue, 14 Feb 2012 00:16:06 +0100 (CET) Subject: [pypy-commit] pypy revive-dlltool: Fix dlltool: a way to build a .so or .dll which is not Message-ID: <20120213231606.B006A8203C@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: revive-dlltool Changeset: r52435:a1819d03772d Date: 2012-02-14 00:14 +0100 http://bitbucket.org/pypy/pypy/changeset/a1819d03772d/ Log: Fix dlltool: a way to build a .so or .dll which is not a CPython extension module. 
diff --git a/pypy/translator/c/database.py b/pypy/translator/c/database.py --- a/pypy/translator/c/database.py +++ b/pypy/translator/c/database.py @@ -28,11 +28,13 @@ gctransformer = None def __init__(self, translator=None, standalone=False, + cpython_extension=False, gcpolicyclass=None, thread_enabled=False, sandbox=False): self.translator = translator self.standalone = standalone + self.cpython_extension = cpython_extension self.sandbox = sandbox if gcpolicyclass is None: gcpolicyclass = gc.RefcountingGcPolicy diff --git a/pypy/translator/c/dlltool.py b/pypy/translator/c/dlltool.py --- a/pypy/translator/c/dlltool.py +++ b/pypy/translator/c/dlltool.py @@ -14,11 +14,14 @@ CBuilder.__init__(self, *args, **kwds) def getentrypointptr(self): + entrypoints = [] bk = self.translator.annotator.bookkeeper - graphs = [bk.getdesc(f).cachedgraph(None) for f, _ in self.functions] - return [getfunctionptr(graph) for graph in graphs] + for f, _ in self.functions: + graph = bk.getdesc(f).getuniquegraph() + entrypoints.append(getfunctionptr(graph)) + return entrypoints - def gen_makefile(self, targetdir): + def gen_makefile(self, targetdir, exe_name=None): pass # XXX finish def compile(self): diff --git a/pypy/translator/c/extfunc.py b/pypy/translator/c/extfunc.py --- a/pypy/translator/c/extfunc.py +++ b/pypy/translator/c/extfunc.py @@ -106,7 +106,7 @@ yield ('RPYTHON_EXCEPTION_MATCH', exceptiondata.fn_exception_match) yield ('RPYTHON_TYPE_OF_EXC_INST', exceptiondata.fn_type_of_exc_inst) yield ('RPYTHON_RAISE_OSERROR', exceptiondata.fn_raise_OSError) - if not db.standalone: + if db.cpython_extension: yield ('RPYTHON_PYEXCCLASS2EXC', exceptiondata.fn_pyexcclass2exc) yield ('RPyExceptionOccurred1', exctransformer.rpyexc_occured_ptr.value) diff --git a/pypy/translator/c/genc.py b/pypy/translator/c/genc.py --- a/pypy/translator/c/genc.py +++ b/pypy/translator/c/genc.py @@ -111,6 +111,7 @@ _compiled = False modulename = None split = False + cpython_extension = False def __init__(self, translator, entrypoint, config, gcpolicy=None, secondary_entrypoints=()): @@ -138,6 +139,7 @@ raise NotImplementedError("--gcrootfinder=asmgcc requires standalone") db = LowLevelDatabase(translator, standalone=self.standalone, + cpython_extension=self.cpython_extension, gcpolicyclass=gcpolicyclass, thread_enabled=self.config.translation.thread, sandbox=self.config.translation.sandbox) @@ -236,6 +238,8 @@ CBuilder.have___thread = self.translator.platform.check___thread() if not self.standalone: assert not self.config.translation.instrument + if self.cpython_extension: + defines['PYPY_CPYTHON_EXTENSION'] = 1 else: defines['PYPY_STANDALONE'] = db.get(pf) if self.config.translation.instrument: @@ -307,13 +311,18 @@ class CExtModuleBuilder(CBuilder): standalone = False + cpython_extension = True _module = None _wrapper = None def get_eci(self): from distutils import sysconfig python_inc = sysconfig.get_python_inc() - eci = ExternalCompilationInfo(include_dirs=[python_inc]) + eci = ExternalCompilationInfo( + include_dirs=[python_inc], + includes=["Python.h", + ], + ) return eci.merge(CBuilder.get_eci(self)) def getentrypointptr(self): # xxx diff --git a/pypy/translator/c/src/exception.h b/pypy/translator/c/src/exception.h --- a/pypy/translator/c/src/exception.h +++ b/pypy/translator/c/src/exception.h @@ -2,7 +2,7 @@ /************************************************************/ /*** C header subsection: exceptions ***/ -#if !defined(PYPY_STANDALONE) && !defined(PYPY_NOT_MAIN_FILE) +#if defined(PYPY_CPYTHON_EXTENSION) && 
!defined(PYPY_NOT_MAIN_FILE) PyObject *RPythonError; #endif @@ -74,7 +74,7 @@ RPyRaiseException(RPYTHON_TYPE_OF_EXC_INST(rexc), rexc); } -#ifndef PYPY_STANDALONE +#ifdef PYPY_CPYTHON_EXTENSION void RPyConvertExceptionFromCPython(void) { /* convert the CPython exception to an RPython one */ diff --git a/pypy/translator/c/src/g_include.h b/pypy/translator/c/src/g_include.h --- a/pypy/translator/c/src/g_include.h +++ b/pypy/translator/c/src/g_include.h @@ -2,7 +2,7 @@ /************************************************************/ /*** C header file for code produced by genc.py ***/ -#ifndef PYPY_STANDALONE +#ifdef PYPY_CPYTHON_EXTENSION # include "Python.h" # include "compile.h" # include "frameobject.h" diff --git a/pypy/translator/c/src/g_prerequisite.h b/pypy/translator/c/src/g_prerequisite.h --- a/pypy/translator/c/src/g_prerequisite.h +++ b/pypy/translator/c/src/g_prerequisite.h @@ -5,8 +5,6 @@ #ifdef PYPY_STANDALONE # include "src/commondefs.h" -#else -# include "Python.h" #endif #ifdef _WIN32 diff --git a/pypy/translator/c/src/pyobj.h b/pypy/translator/c/src/pyobj.h --- a/pypy/translator/c/src/pyobj.h +++ b/pypy/translator/c/src/pyobj.h @@ -2,7 +2,7 @@ /************************************************************/ /*** C header subsection: untyped operations ***/ /*** as OP_XXX() macros calling the CPython API ***/ - +#ifdef PYPY_CPYTHON_EXTENSION #define op_bool(r,what) { \ int _retval = what; \ @@ -261,3 +261,5 @@ } #endif + +#endif /* PYPY_CPYTHON_EXTENSION */ diff --git a/pypy/translator/c/src/support.h b/pypy/translator/c/src/support.h --- a/pypy/translator/c/src/support.h +++ b/pypy/translator/c/src/support.h @@ -104,7 +104,7 @@ # define RPyBareItem(array, index) ((array)[index]) #endif -#ifndef PYPY_STANDALONE +#ifdef PYPY_CPYTHON_EXTENSION /* prototypes */ diff --git a/pypy/translator/c/test/test_dlltool.py b/pypy/translator/c/test/test_dlltool.py --- a/pypy/translator/c/test/test_dlltool.py +++ b/pypy/translator/c/test/test_dlltool.py @@ -2,7 +2,6 @@ from pypy.translator.c.dlltool import DLLDef from ctypes import CDLL import py -py.test.skip("fix this if needed") class TestDLLTool(object): def test_basic(self): @@ -16,8 +15,8 @@ d = DLLDef('lib', [(f, [int]), (b, [int])]) so = d.compile() dll = CDLL(str(so)) - assert dll.f(3) == 3 - assert dll.b(10) == 12 + assert dll.pypy_g_f(3) == 3 + assert dll.pypy_g_b(10) == 12 def test_split_criteria(self): def f(x): @@ -28,4 +27,5 @@ d = DLLDef('lib', [(f, [int]), (b, [int])]) so = d.compile() - assert py.path.local(so).dirpath().join('implement.c').check() + dirpath = py.path.local(so).dirpath() + assert dirpath.join('translator_c_test_test_dlltool.c').check() diff --git a/pypy/translator/driver.py b/pypy/translator/driver.py --- a/pypy/translator/driver.py +++ b/pypy/translator/driver.py @@ -331,6 +331,7 @@ raise Exception("stand-alone program entry point must return an " "int (and not, e.g., None or always raise an " "exception).") + annotator.complete() annotator.simplify() return s From noreply at buildbot.pypy.org Tue Feb 14 02:00:37 2012 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Tue, 14 Feb 2012 02:00:37 +0100 (CET) Subject: [pypy-commit] pypy default: datetime shouldn't allow float arguments for various things Message-ID: <20120214010037.F1FA58203C@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: Changeset: r52436:ea7cd308d573 Date: 2012-02-13 20:00 -0500 http://bitbucket.org/pypy/pypy/changeset/ea7cd308d573/ Log: datetime shouldn't allow float arguments for various things diff --git a/lib_pypy/datetime.py 
b/lib_pypy/datetime.py --- a/lib_pypy/datetime.py +++ b/lib_pypy/datetime.py @@ -271,8 +271,9 @@ raise ValueError("%s()=%d, must be in -1439..1439" % (name, offset)) def _check_date_fields(year, month, day): - if not isinstance(year, (int, long)): - raise TypeError('int expected') + for value in [year, day]: + if not isinstance(value, (int, long)): + raise TypeError('int expected') if not MINYEAR <= year <= MAXYEAR: raise ValueError('year must be in %d..%d' % (MINYEAR, MAXYEAR), year) if not 1 <= month <= 12: @@ -282,8 +283,9 @@ raise ValueError('day must be in 1..%d' % dim, day) def _check_time_fields(hour, minute, second, microsecond): - if not isinstance(hour, (int, long)): - raise TypeError('int expected') + for value in [hour, minute, second, microsecond]: + if not isinstance(value, (int, long)): + raise TypeError('int expected') if not 0 <= hour <= 23: raise ValueError('hour must be in 0..23', hour) if not 0 <= minute <= 59: diff --git a/pypy/module/test_lib_pypy/test_datetime.py b/pypy/module/test_lib_pypy/test_datetime.py --- a/pypy/module/test_lib_pypy/test_datetime.py +++ b/pypy/module/test_lib_pypy/test_datetime.py @@ -1,7 +1,10 @@ """Additional tests for datetime.""" +import py + import time import datetime +import copy import os def test_utcfromtimestamp(): @@ -26,3 +29,18 @@ def test_utcfromtimestamp_microsecond(): dt = datetime.datetime.utcfromtimestamp(0) assert isinstance(dt.microsecond, int) + + +def test_integer_args(): + with py.test.raises(TypeError): + datetime.datetime(10, 10, 10.) + with py.test.raises(TypeError): + datetime.datetime(10, 10, 10, 10, 10.) + with py.test.raises(TypeError): + datetime.datetime(10, 10, 10, 10, 10, 10.) + +def test_utcnow_microsecond(): + dt = datetime.datetime.utcnow() + assert type(dt.microsecond) is int + + copy.copy(dt) \ No newline at end of file From noreply at buildbot.pypy.org Tue Feb 14 09:38:11 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Tue, 14 Feb 2012 09:38:11 +0100 (CET) Subject: [pypy-commit] pypy default: (wlav) fix the error message Message-ID: <20120214083811.83CD88203C@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: Changeset: r52437:71ccfad2a7cd Date: 2012-02-14 09:37 +0100 http://bitbucket.org/pypy/pypy/changeset/71ccfad2a7cd/ Log: (wlav) fix the error message diff --git a/pypy/rlib/libffi.py b/pypy/rlib/libffi.py --- a/pypy/rlib/libffi.py +++ b/pypy/rlib/libffi.py @@ -238,7 +238,7 @@ self = jit.promote(self) if argchain.numargs != len(self.argtypes): raise TypeError, 'Wrong number of arguments: %d expected, got %d' %\ - (argchain.numargs, len(self.argtypes)) + (len(self.argtypes), argchain.numargs) ll_args = self._prepare() i = 0 arg = argchain.first From noreply at buildbot.pypy.org Tue Feb 14 10:50:28 2012 From: noreply at buildbot.pypy.org (hager) Date: Tue, 14 Feb 2012 10:50:28 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: add test to show that the temporay location in emit_unicodesetitem is really needed. Message-ID: <20120214095028.5AD7B8203C@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r52438:ec29f1a1aa90 Date: 2012-02-13 20:19 +0100 http://bitbucket.org/pypy/pypy/changeset/ec29f1a1aa90/ Log: add test to show that the temporay location in emit_unicodesetitem is really needed. For some reason, it still fails. 
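The str/unicode setitem commits in this batch (this test, the follow-ups below, and the emit_strsetitem fix further down) all deal with the same pitfall: if the backend adds the item offset into the base register to form the address, the base pointer is destroyed even though the register allocator may still consider that register live. A schematic sketch of the difference, with invented names standing in for the real PPC machine-code builder::

    # Illustration only: FakeBuilder and its methods are made-up stand-ins,
    # not the API of pypy/jit/backend/ppc.
    class FakeBuilder(object):
        def __init__(self):
            self.ops = []
        def add(self, dst, a, b):          # dst = a + b
            self.ops.append(('add', dst, a, b))
        def stb(self, value, base, ofs):   # store byte at base + ofs
            self.ops.append(('stb', value, base, ofs))

    def setitem_clobbering(mc, value, base, ofs):
        mc.add(base, base, ofs)   # BUG: 'base' no longer holds the string
        mc.stb(value, base, 0)    # pointer, but later code may still use it

    def setitem_with_temp(mc, value, base, ofs, temp):
        mc.add(temp, base, ofs)   # address goes into a scratch register,
        mc.stb(value, temp, 0)    # so 'base' survives for later uses

    mc = FakeBuilder()
    setitem_with_temp(mc, 'r5', 'r3', 'r4', 'r0')
    assert mc.ops == [('add', 'r0', 'r3', 'r4'), ('stb', 'r5', 'r0', 0)]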
diff --git a/pypy/jit/backend/ppc/test/test_runner.py b/pypy/jit/backend/ppc/test/test_runner.py --- a/pypy/jit/backend/ppc/test/test_runner.py +++ b/pypy/jit/backend/ppc/test/test_runner.py @@ -11,6 +11,7 @@ ConstObj, BoxFloat, ConstFloat) from pypy.rpython.lltypesystem import lltype, llmemory, rstr, rffi, rclass from pypy.jit.codewriter.effectinfo import EffectInfo +from pypy.jit.metainterp.resoperation import ResOperation, rop import py class FakeStats(object): @@ -93,3 +94,36 @@ for i in range(numargs): assert self.cpu.get_latest_value_int(i) == i + 1 + def test_unicodesetitem_really_needs_temploc(self): + py.test.xfail("problems with longevity") + u_box = self.alloc_unicode(u"abcdefg") + + i0 = BoxInt() + i1 = BoxInt() + i2 = BoxInt() + i3 = BoxInt() + i4 = BoxInt() + i5 = BoxInt() + i6 = BoxInt() + i7 = BoxInt() + i8 = BoxInt() + i9 = BoxInt() + + inputargs = [i0,i1,i2,i3,i4,i5,i6,i7,i8,i9] + looptoken = JitCellToken() + targettoken = TargetToken() + faildescr = BasicFailDescr(1) + + operations = [ + ResOperation(rop.LABEL, inputargs, None, descr=targettoken), + ResOperation(rop.UNICODESETITEM, + [u_box, BoxInt(4), BoxInt(123)], None), + ResOperation(rop.FINISH, inputargs, None, descr=faildescr) + ] + + args = [(i + 1) for i in range(10)] + self.cpu.compile_loop(inputargs, operations, looptoken) + fail = self.cpu.execute_token(looptoken, *args) + assert fail.identifier == 1 + for i in range(10): + assert self.cpu.get_latest_value_int(i) == args[i] From noreply at buildbot.pypy.org Tue Feb 14 10:50:29 2012 From: noreply at buildbot.pypy.org (hager) Date: Tue, 14 Feb 2012 10:50:29 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: merge heads Message-ID: <20120214095029.D7B658204C@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r52439:f2eb1e6f9cc1 Date: 2012-02-14 10:45 +0100 http://bitbucket.org/pypy/pypy/changeset/f2eb1e6f9cc1/ Log: merge heads diff --git a/pypy/jit/backend/ppc/test/test_runner.py b/pypy/jit/backend/ppc/test/test_runner.py --- a/pypy/jit/backend/ppc/test/test_runner.py +++ b/pypy/jit/backend/ppc/test/test_runner.py @@ -11,6 +11,7 @@ ConstObj, BoxFloat, ConstFloat) from pypy.rpython.lltypesystem import lltype, llmemory, rstr, rffi, rclass from pypy.jit.codewriter.effectinfo import EffectInfo +from pypy.jit.metainterp.resoperation import ResOperation, rop import py class FakeStats(object): @@ -93,3 +94,36 @@ for i in range(numargs): assert self.cpu.get_latest_value_int(i) == i + 1 + def test_unicodesetitem_really_needs_temploc(self): + py.test.xfail("problems with longevity") + u_box = self.alloc_unicode(u"abcdefg") + + i0 = BoxInt() + i1 = BoxInt() + i2 = BoxInt() + i3 = BoxInt() + i4 = BoxInt() + i5 = BoxInt() + i6 = BoxInt() + i7 = BoxInt() + i8 = BoxInt() + i9 = BoxInt() + + inputargs = [i0,i1,i2,i3,i4,i5,i6,i7,i8,i9] + looptoken = JitCellToken() + targettoken = TargetToken() + faildescr = BasicFailDescr(1) + + operations = [ + ResOperation(rop.LABEL, inputargs, None, descr=targettoken), + ResOperation(rop.UNICODESETITEM, + [u_box, BoxInt(4), BoxInt(123)], None), + ResOperation(rop.FINISH, inputargs, None, descr=faildescr) + ] + + args = [(i + 1) for i in range(10)] + self.cpu.compile_loop(inputargs, operations, looptoken) + fail = self.cpu.execute_token(looptoken, *args) + assert fail.identifier == 1 + for i in range(10): + assert self.cpu.get_latest_value_int(i) == args[i] From noreply at buildbot.pypy.org Tue Feb 14 10:50:31 2012 From: noreply at buildbot.pypy.org (hager) Date: Tue, 14 Feb 2012 10:50:31 +0100 (CET) Subject: 
[pypy-commit] pypy ppc-jit-backend: (bivab, hager): make test hit the issue Message-ID: <20120214095031.5FCD68203C@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r52440:b9747ad2f590 Date: 2012-02-14 10:50 +0100 http://bitbucket.org/pypy/pypy/changeset/b9747ad2f590/ Log: (bivab, hager): make test hit the issue diff --git a/pypy/jit/backend/ppc/test/test_runner.py b/pypy/jit/backend/ppc/test/test_runner.py --- a/pypy/jit/backend/ppc/test/test_runner.py +++ b/pypy/jit/backend/ppc/test/test_runner.py @@ -95,8 +95,7 @@ assert self.cpu.get_latest_value_int(i) == i + 1 def test_unicodesetitem_really_needs_temploc(self): - py.test.xfail("problems with longevity") - u_box = self.alloc_unicode(u"abcdefg") + u_box = self.alloc_unicode(u"abcdsdasdsaddefg") i0 = BoxInt() i1 = BoxInt() @@ -108,8 +107,9 @@ i7 = BoxInt() i8 = BoxInt() i9 = BoxInt() + p10 = BoxPtr() - inputargs = [i0,i1,i2,i3,i4,i5,i6,i7,i8,i9] + inputargs = [i0,i1,i2,i3,i4,i5,i6,i7,i8,i9,p10] looptoken = JitCellToken() targettoken = TargetToken() faildescr = BasicFailDescr(1) @@ -117,11 +117,11 @@ operations = [ ResOperation(rop.LABEL, inputargs, None, descr=targettoken), ResOperation(rop.UNICODESETITEM, - [u_box, BoxInt(4), BoxInt(123)], None), + [p10, i6, ConstInt(123)], None), ResOperation(rop.FINISH, inputargs, None, descr=faildescr) ] - args = [(i + 1) for i in range(10)] + args = [(i + 1) for i in range(10)] + [u_box.getref_base()] self.cpu.compile_loop(inputargs, operations, looptoken) fail = self.cpu.execute_token(looptoken, *args) assert fail.identifier == 1 From noreply at buildbot.pypy.org Tue Feb 14 10:52:43 2012 From: noreply at buildbot.pypy.org (fijal) Date: Tue, 14 Feb 2012 10:52:43 +0100 (CET) Subject: [pypy-commit] pypy.org extradoc: a para about ctypes, add to progress bar Message-ID: <20120214095243.881CD8203C@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: extradoc Changeset: r331:b18b2d4b52f3 Date: 2012-02-14 11:52 +0200 http://bitbucket.org/pypy/pypy.org/changeset/b18b2d4b52f3/ Log: a para about ctypes, add to progress bar diff --git a/don3.html b/don3.html --- a/don3.html +++ b/don3.html @@ -8,12 +8,12 @@ - $41329 of $60000 (68.0%) + $41480 of $60000 (69.1%)
    diff --git a/performance.html b/performance.html --- a/performance.html +++ b/performance.html @@ -115,6 +115,10 @@ reduce(), and to some extend map() (although the simple case is JITted), and to all usages of the operator module we can think of. +
  • Ctypes: Ctypes is a mixed bunch. If you're lucky you'll hit the +sweetspot and be really fast. If you're unlucky, you'll miss the +sweetspot and hit the slowpath which is much slower than CPython (2-10x +has been reported).
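The Ctypes paragraph above names a fast path and a slow path but does not show what either looks like. As a hedged illustration (exactly which call patterns hit PyPy's ctypes fast path depends on the PyPy version, so treat this as an assumption rather than a guarantee), a plain C function called with primitive argument and result types is the kind of code that has a chance of landing in the sweet spot, whereas callbacks into Python or structures passed by value have commonly been reported to fall onto the slow path::

    # Assumes a C math library can be located; everything used here is
    # standard ctypes, but whether it is actually fast on a given PyPy
    # has to be measured, not assumed.
    import ctypes
    import ctypes.util

    libm = ctypes.CDLL(ctypes.util.find_library('m'))
    libm.sqrt.argtypes = [ctypes.c_double]
    libm.sqrt.restype = ctypes.c_double

    def sum_of_roots(n):
        total = 0.0
        for i in range(n):
            total += libm.sqrt(float(i))   # simple scalar call in a hot loop
        return total

    print(sum_of_roots(10000))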
  • We generally consider things that are slower on PyPy than CPython to be bugs of PyPy. If you find some issue that is not documented here, diff --git a/source/performance.txt b/source/performance.txt --- a/source/performance.txt +++ b/source/performance.txt @@ -81,6 +81,11 @@ is JITted), and to all usages of the ``operator`` module we can think of. +* **Ctypes**: Ctypes is a mixed bunch. If you're lucky you'll hit the + sweetspot and be **really** fast. If you're unlucky, you'll miss the + sweetspot and hit the slowpath which is much slower than CPython (2-10x + has been reported). + We generally consider things that are slower on PyPy than CPython to be bugs of PyPy. If you find some issue that is not documented here, please report it to our `bug tracker`_ for investigation. From noreply at buildbot.pypy.org Tue Feb 14 11:21:27 2012 From: noreply at buildbot.pypy.org (hager) Date: Tue, 14 Feb 2012 11:21:27 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: use temp_loc in emit_STRSETITEM Message-ID: <20120214102127.59FE88203C@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r52441:971b484130cc Date: 2012-02-14 11:21 +0100 http://bitbucket.org/pypy/pypy/changeset/971b484130cc/ Log: use temp_loc in emit_STRSETITEM diff --git a/pypy/jit/backend/ppc/opassembler.py b/pypy/jit/backend/ppc/opassembler.py --- a/pypy/jit/backend/ppc/opassembler.py +++ b/pypy/jit/backend/ppc/opassembler.py @@ -688,12 +688,12 @@ self.mc.lbz(res.value, res.value, basesize.value) def emit_strsetitem(self, op, arglocs, regalloc): - value_loc, base_loc, ofs_loc, basesize = arglocs + value_loc, base_loc, ofs_loc, temp_loc, basesize = arglocs if ofs_loc.is_imm(): - self.mc.addi(base_loc.value, base_loc.value, ofs_loc.getint()) + self.mc.addi(temp_loc.value, base_loc.value, ofs_loc.getint()) else: - self.mc.add(base_loc.value, base_loc.value, ofs_loc.value) - self.mc.stb(value_loc.value, base_loc.value, basesize.value) + self.mc.add(temp_loc.value, base_loc.value, ofs_loc.value) + self.mc.stb(value_loc.value, temp_loc.value, basesize.value) #from ../x86/regalloc.py:928 ff. 
def emit_copystrcontent(self, op, arglocs, regalloc): diff --git a/pypy/jit/backend/ppc/regalloc.py b/pypy/jit/backend/ppc/regalloc.py --- a/pypy/jit/backend/ppc/regalloc.py +++ b/pypy/jit/backend/ppc/regalloc.py @@ -701,10 +701,11 @@ base_loc = self._ensure_value_is_boxed(boxes[0], boxes) ofs_loc = self._ensure_value_is_boxed(boxes[1], boxes) value_loc = self._ensure_value_is_boxed(boxes[2], boxes) + temp_loc = self.get_scratch_reg(INT, boxes) basesize, itemsize, ofs_length = symbolic.get_array_token(rstr.STR, self.cpu.translate_support_code) assert itemsize == 1 - return [value_loc, base_loc, ofs_loc, imm(basesize)] + return [value_loc, base_loc, ofs_loc, temp_loc, imm(basesize)] prepare_copystrcontent = void prepare_copyunicodecontent = void From noreply at buildbot.pypy.org Tue Feb 14 11:26:45 2012 From: noreply at buildbot.pypy.org (fijal) Date: Tue, 14 Feb 2012 11:26:45 +0100 (CET) Subject: [pypy-commit] pypy default: expose transpose under a yet-different name Message-ID: <20120214102645.1EC3C8203C@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: Changeset: r52442:44066f86ba12 Date: 2012-02-14 12:25 +0200 http://bitbucket.org/pypy/pypy/changeset/44066f86ba12/ Log: expose transpose under a yet-different name diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -1297,6 +1297,7 @@ nbytes = GetSetProperty(BaseArray.descr_get_nbytes), T = GetSetProperty(BaseArray.descr_get_transpose), + transpose = interp2app(BaseArray.descr_get_transpose), flat = GetSetProperty(BaseArray.descr_get_flatiter), ravel = interp2app(BaseArray.descr_ravel), item = interp2app(BaseArray.descr_item), diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -1487,6 +1487,7 @@ a = array((range(10), range(20, 30))) b = a.T assert(b[:, 0] == a[0, :]).all() + assert (a.transpose() == b).all() def test_flatiter(self): from _numpypy import array, flatiter, arange From noreply at buildbot.pypy.org Tue Feb 14 11:26:46 2012 From: noreply at buildbot.pypy.org (fijal) Date: Tue, 14 Feb 2012 11:26:46 +0100 (CET) Subject: [pypy-commit] pypy default: merge Message-ID: <20120214102646.F282A8203C@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: Changeset: r52443:c8ebd9df585a Date: 2012-02-14 12:26 +0200 http://bitbucket.org/pypy/pypy/changeset/c8ebd9df585a/ Log: merge diff --git a/lib_pypy/ctypes_config_cache/pyexpat.ctc.py b/lib_pypy/ctypes_config_cache/pyexpat.ctc.py deleted file mode 100644 --- a/lib_pypy/ctypes_config_cache/pyexpat.ctc.py +++ /dev/null @@ -1,45 +0,0 @@ -""" -'ctypes_configure' source for pyexpat.py. -Run this to rebuild _pyexpat_cache.py. 
-""" - -import ctypes -from ctypes import c_char_p, c_int, c_void_p, c_char -from ctypes_configure import configure -import dumpcache - - -class CConfigure: - _compilation_info_ = configure.ExternalCompilationInfo( - includes = ['expat.h'], - libraries = ['expat'], - pre_include_lines = [ - '#define XML_COMBINED_VERSION (10000*XML_MAJOR_VERSION+100*XML_MINOR_VERSION+XML_MICRO_VERSION)'], - ) - - XML_Char = configure.SimpleType('XML_Char', c_char) - XML_COMBINED_VERSION = configure.ConstantInteger('XML_COMBINED_VERSION') - for name in ['XML_PARAM_ENTITY_PARSING_NEVER', - 'XML_PARAM_ENTITY_PARSING_UNLESS_STANDALONE', - 'XML_PARAM_ENTITY_PARSING_ALWAYS']: - locals()[name] = configure.ConstantInteger(name) - - XML_Encoding = configure.Struct('XML_Encoding',[ - ('data', c_void_p), - ('convert', c_void_p), - ('release', c_void_p), - ('map', c_int * 256)]) - XML_Content = configure.Struct('XML_Content',[ - ('numchildren', c_int), - ('children', c_void_p), - ('name', c_char_p), - ('type', c_int), - ('quant', c_int), - ]) - # this is insanely stupid - XML_FALSE = configure.ConstantInteger('XML_FALSE') - XML_TRUE = configure.ConstantInteger('XML_TRUE') - -config = configure.configure(CConfigure) - -dumpcache.dumpcache2('pyexpat', config) diff --git a/lib_pypy/ctypes_config_cache/test/test_cache.py b/lib_pypy/ctypes_config_cache/test/test_cache.py --- a/lib_pypy/ctypes_config_cache/test/test_cache.py +++ b/lib_pypy/ctypes_config_cache/test/test_cache.py @@ -39,10 +39,6 @@ d = run('resource.ctc.py', '_resource_cache.py') assert 'RLIM_NLIMITS' in d -def test_pyexpat(): - d = run('pyexpat.ctc.py', '_pyexpat_cache.py') - assert 'XML_COMBINED_VERSION' in d - def test_locale(): d = run('locale.ctc.py', '_locale_cache.py') assert 'LC_ALL' in d diff --git a/lib_pypy/datetime.py b/lib_pypy/datetime.py --- a/lib_pypy/datetime.py +++ b/lib_pypy/datetime.py @@ -271,8 +271,9 @@ raise ValueError("%s()=%d, must be in -1439..1439" % (name, offset)) def _check_date_fields(year, month, day): - if not isinstance(year, (int, long)): - raise TypeError('int expected') + for value in [year, day]: + if not isinstance(value, (int, long)): + raise TypeError('int expected') if not MINYEAR <= year <= MAXYEAR: raise ValueError('year must be in %d..%d' % (MINYEAR, MAXYEAR), year) if not 1 <= month <= 12: @@ -282,8 +283,9 @@ raise ValueError('day must be in 1..%d' % dim, day) def _check_time_fields(hour, minute, second, microsecond): - if not isinstance(hour, (int, long)): - raise TypeError('int expected') + for value in [hour, minute, second, microsecond]: + if not isinstance(value, (int, long)): + raise TypeError('int expected') if not 0 <= hour <= 23: raise ValueError('hour must be in 0..23', hour) if not 0 <= minute <= 59: diff --git a/lib_pypy/pyexpat.py b/lib_pypy/pyexpat.py deleted file mode 100644 --- a/lib_pypy/pyexpat.py +++ /dev/null @@ -1,448 +0,0 @@ - -import ctypes -import ctypes.util -from ctypes import c_char_p, c_int, c_void_p, POINTER, c_char, c_wchar_p -import sys - -# load the platform-specific cache made by running pyexpat.ctc.py -from ctypes_config_cache._pyexpat_cache import * - -try: from __pypy__ import builtinify -except ImportError: builtinify = lambda f: f - - -lib = ctypes.CDLL(ctypes.util.find_library('expat')) - - -XML_Content.children = POINTER(XML_Content) -XML_Parser = ctypes.c_void_p # an opaque pointer -assert XML_Char is ctypes.c_char # this assumption is everywhere in -# cpython's expat, let's explode - -def declare_external(name, args, res): - func = getattr(lib, name) - func.args = args - 
func.restype = res - globals()[name] = func - -declare_external('XML_ParserCreate', [c_char_p], XML_Parser) -declare_external('XML_ParserCreateNS', [c_char_p, c_char], XML_Parser) -declare_external('XML_Parse', [XML_Parser, c_char_p, c_int, c_int], c_int) -currents = ['CurrentLineNumber', 'CurrentColumnNumber', - 'CurrentByteIndex'] -for name in currents: - func = getattr(lib, 'XML_Get' + name) - func.args = [XML_Parser] - func.restype = c_int - -declare_external('XML_SetReturnNSTriplet', [XML_Parser, c_int], None) -declare_external('XML_GetSpecifiedAttributeCount', [XML_Parser], c_int) -declare_external('XML_SetParamEntityParsing', [XML_Parser, c_int], None) -declare_external('XML_GetErrorCode', [XML_Parser], c_int) -declare_external('XML_StopParser', [XML_Parser, c_int], None) -declare_external('XML_ErrorString', [c_int], c_char_p) -declare_external('XML_SetBase', [XML_Parser, c_char_p], None) -if XML_COMBINED_VERSION >= 19505: - declare_external('XML_UseForeignDTD', [XML_Parser, c_int], None) - -declare_external('XML_SetUnknownEncodingHandler', [XML_Parser, c_void_p, - c_void_p], None) -declare_external('XML_FreeContentModel', [XML_Parser, POINTER(XML_Content)], - None) -declare_external('XML_ExternalEntityParserCreate', [XML_Parser,c_char_p, - c_char_p], - XML_Parser) - -handler_names = [ - 'StartElement', - 'EndElement', - 'ProcessingInstruction', - 'CharacterData', - 'UnparsedEntityDecl', - 'NotationDecl', - 'StartNamespaceDecl', - 'EndNamespaceDecl', - 'Comment', - 'StartCdataSection', - 'EndCdataSection', - 'Default', - 'DefaultHandlerExpand', - 'NotStandalone', - 'ExternalEntityRef', - 'StartDoctypeDecl', - 'EndDoctypeDecl', - 'EntityDecl', - 'XmlDecl', - 'ElementDecl', - 'AttlistDecl', - ] -if XML_COMBINED_VERSION >= 19504: - handler_names.append('SkippedEntity') -setters = {} - -for name in handler_names: - if name == 'DefaultHandlerExpand': - newname = 'XML_SetDefaultHandlerExpand' - else: - name += 'Handler' - newname = 'XML_Set' + name - cfunc = getattr(lib, newname) - cfunc.args = [XML_Parser, ctypes.c_void_p] - cfunc.result = ctypes.c_int - setters[name] = cfunc - -class ExpatError(Exception): - def __str__(self): - return self.s - -error = ExpatError - -class XMLParserType(object): - specified_attributes = 0 - ordered_attributes = 0 - returns_unicode = 1 - encoding = 'utf-8' - def __init__(self, encoding, namespace_separator, _hook_external_entity=False): - self.returns_unicode = 1 - if encoding: - self.encoding = encoding - if not _hook_external_entity: - if namespace_separator is None: - self.itself = XML_ParserCreate(encoding) - else: - self.itself = XML_ParserCreateNS(encoding, ord(namespace_separator)) - if not self.itself: - raise RuntimeError("Creating parser failed") - self._set_unknown_encoding_handler() - self.storage = {} - self.buffer = None - self.buffer_size = 8192 - self.character_data_handler = None - self.intern = {} - self.__exc_info = None - - def _flush_character_buffer(self): - if not self.buffer: - return - res = self._call_character_handler(''.join(self.buffer)) - self.buffer = [] - return res - - def _call_character_handler(self, buf): - if self.character_data_handler: - self.character_data_handler(buf) - - def _set_unknown_encoding_handler(self): - def UnknownEncoding(encodingData, name, info_p): - info = info_p.contents - s = ''.join([chr(i) for i in range(256)]) - u = s.decode(self.encoding, 'replace') - for i in range(len(u)): - if u[i] == u'\xfffd': - info.map[i] = -1 - else: - info.map[i] = ord(u[i]) - info.data = None - info.convert = None - 
info.release = None - return 1 - - CB = ctypes.CFUNCTYPE(c_int, c_void_p, c_char_p, POINTER(XML_Encoding)) - cb = CB(UnknownEncoding) - self._unknown_encoding_handler = (cb, UnknownEncoding) - XML_SetUnknownEncodingHandler(self.itself, cb, None) - - def _set_error(self, code): - e = ExpatError() - e.code = code - lineno = lib.XML_GetCurrentLineNumber(self.itself) - colno = lib.XML_GetCurrentColumnNumber(self.itself) - e.offset = colno - e.lineno = lineno - err = XML_ErrorString(code)[:200] - e.s = "%s: line: %d, column: %d" % (err, lineno, colno) - e.message = e.s - self._error = e - - def Parse(self, data, is_final=0): - res = XML_Parse(self.itself, data, len(data), is_final) - if res == 0: - self._set_error(XML_GetErrorCode(self.itself)) - if self.__exc_info: - exc_info = self.__exc_info - self.__exc_info = None - raise exc_info[0], exc_info[1], exc_info[2] - else: - raise self._error - self._flush_character_buffer() - return res - - def _sethandler(self, name, real_cb): - setter = setters[name] - try: - cb = self.storage[(name, real_cb)] - except KeyError: - cb = getattr(self, 'get_cb_for_%s' % name)(real_cb) - self.storage[(name, real_cb)] = cb - except TypeError: - # weellll... - cb = getattr(self, 'get_cb_for_%s' % name)(real_cb) - setter(self.itself, cb) - - def _wrap_cb(self, cb): - def f(*args): - try: - return cb(*args) - except: - self.__exc_info = sys.exc_info() - XML_StopParser(self.itself, XML_FALSE) - return f - - def get_cb_for_StartElementHandler(self, real_cb): - def StartElement(unused, name, attrs): - # unpack name and attrs - conv = self.conv - self._flush_character_buffer() - if self.specified_attributes: - max = XML_GetSpecifiedAttributeCount(self.itself) - else: - max = 0 - while attrs[max]: - max += 2 # copied - if self.ordered_attributes: - res = [attrs[i] for i in range(max)] - else: - res = {} - for i in range(0, max, 2): - res[conv(attrs[i])] = conv(attrs[i + 1]) - real_cb(conv(name), res) - StartElement = self._wrap_cb(StartElement) - CB = ctypes.CFUNCTYPE(None, c_void_p, c_char_p, POINTER(c_char_p)) - return CB(StartElement) - - def get_cb_for_ExternalEntityRefHandler(self, real_cb): - def ExternalEntity(unused, context, base, sysId, pubId): - self._flush_character_buffer() - conv = self.conv - res = real_cb(conv(context), conv(base), conv(sysId), - conv(pubId)) - if res is None: - return 0 - return res - ExternalEntity = self._wrap_cb(ExternalEntity) - CB = ctypes.CFUNCTYPE(c_int, c_void_p, *([c_char_p] * 4)) - return CB(ExternalEntity) - - def get_cb_for_CharacterDataHandler(self, real_cb): - def CharacterData(unused, s, lgt): - if self.buffer is None: - self._call_character_handler(self.conv(s[:lgt])) - else: - if len(self.buffer) + lgt > self.buffer_size: - self._flush_character_buffer() - if self.character_data_handler is None: - return - if lgt >= self.buffer_size: - self._call_character_handler(s[:lgt]) - self.buffer = [] - else: - self.buffer.append(s[:lgt]) - CharacterData = self._wrap_cb(CharacterData) - CB = ctypes.CFUNCTYPE(None, c_void_p, POINTER(c_char), c_int) - return CB(CharacterData) - - def get_cb_for_NotStandaloneHandler(self, real_cb): - def NotStandaloneHandler(unused): - return real_cb() - NotStandaloneHandler = self._wrap_cb(NotStandaloneHandler) - CB = ctypes.CFUNCTYPE(c_int, c_void_p) - return CB(NotStandaloneHandler) - - def get_cb_for_EntityDeclHandler(self, real_cb): - def EntityDecl(unused, ename, is_param, value, value_len, base, - system_id, pub_id, not_name): - self._flush_character_buffer() - if not value: - value = None - 
else: - value = value[:value_len] - args = [ename, is_param, value, base, system_id, - pub_id, not_name] - args = [self.conv(arg) for arg in args] - real_cb(*args) - EntityDecl = self._wrap_cb(EntityDecl) - CB = ctypes.CFUNCTYPE(None, c_void_p, c_char_p, c_int, c_char_p, - c_int, c_char_p, c_char_p, c_char_p, c_char_p) - return CB(EntityDecl) - - def _conv_content_model(self, model): - children = tuple([self._conv_content_model(model.children[i]) - for i in range(model.numchildren)]) - return (model.type, model.quant, self.conv(model.name), - children) - - def get_cb_for_ElementDeclHandler(self, real_cb): - def ElementDecl(unused, name, model): - self._flush_character_buffer() - modelobj = self._conv_content_model(model[0]) - real_cb(name, modelobj) - XML_FreeContentModel(self.itself, model) - - ElementDecl = self._wrap_cb(ElementDecl) - CB = ctypes.CFUNCTYPE(None, c_void_p, c_char_p, POINTER(XML_Content)) - return CB(ElementDecl) - - def _new_callback_for_string_len(name, sign): - def get_callback_for_(self, real_cb): - def func(unused, s, len): - self._flush_character_buffer() - arg = self.conv(s[:len]) - real_cb(arg) - func.func_name = name - func = self._wrap_cb(func) - CB = ctypes.CFUNCTYPE(*sign) - return CB(func) - get_callback_for_.func_name = 'get_cb_for_' + name - return get_callback_for_ - - for name in ['DefaultHandlerExpand', - 'DefaultHandler']: - sign = [None, c_void_p, POINTER(c_char), c_int] - name = 'get_cb_for_' + name - locals()[name] = _new_callback_for_string_len(name, sign) - - def _new_callback_for_starargs(name, sign): - def get_callback_for_(self, real_cb): - def func(unused, *args): - self._flush_character_buffer() - args = [self.conv(arg) for arg in args] - real_cb(*args) - func.func_name = name - func = self._wrap_cb(func) - CB = ctypes.CFUNCTYPE(*sign) - return CB(func) - get_callback_for_.func_name = 'get_cb_for_' + name - return get_callback_for_ - - for name, num_or_sign in [ - ('EndElementHandler', 1), - ('ProcessingInstructionHandler', 2), - ('UnparsedEntityDeclHandler', 5), - ('NotationDeclHandler', 4), - ('StartNamespaceDeclHandler', 2), - ('EndNamespaceDeclHandler', 1), - ('CommentHandler', 1), - ('StartCdataSectionHandler', 0), - ('EndCdataSectionHandler', 0), - ('StartDoctypeDeclHandler', [None, c_void_p] + [c_char_p] * 3 + [c_int]), - ('XmlDeclHandler', [None, c_void_p, c_char_p, c_char_p, c_int]), - ('AttlistDeclHandler', [None, c_void_p] + [c_char_p] * 4 + [c_int]), - ('EndDoctypeDeclHandler', 0), - ('SkippedEntityHandler', [None, c_void_p, c_char_p, c_int]), - ]: - if isinstance(num_or_sign, int): - sign = [None, c_void_p] + [c_char_p] * num_or_sign - else: - sign = num_or_sign - name = 'get_cb_for_' + name - locals()[name] = _new_callback_for_starargs(name, sign) - - def conv_unicode(self, s): - if s is None or isinstance(s, int): - return s - return s.decode(self.encoding, "strict") - - def __setattr__(self, name, value): - # forest of ifs... 
- if name in ['ordered_attributes', - 'returns_unicode', 'specified_attributes']: - if value: - if name == 'returns_unicode': - self.conv = self.conv_unicode - self.__dict__[name] = 1 - else: - if name == 'returns_unicode': - self.conv = lambda s: s - self.__dict__[name] = 0 - elif name == 'buffer_text': - if value: - self.buffer = [] - else: - self._flush_character_buffer() - self.buffer = None - elif name == 'buffer_size': - if not isinstance(value, int): - raise TypeError("Expected int") - if value <= 0: - raise ValueError("Expected positive int") - self.__dict__[name] = value - elif name == 'namespace_prefixes': - XML_SetReturnNSTriplet(self.itself, int(bool(value))) - elif name in setters: - if name == 'CharacterDataHandler': - # XXX we need to flush buffer here - self._flush_character_buffer() - self.character_data_handler = value - #print name - #print value - #print - self._sethandler(name, value) - else: - self.__dict__[name] = value - - def SetParamEntityParsing(self, arg): - XML_SetParamEntityParsing(self.itself, arg) - - if XML_COMBINED_VERSION >= 19505: - def UseForeignDTD(self, arg=True): - if arg: - flag = XML_TRUE - else: - flag = XML_FALSE - XML_UseForeignDTD(self.itself, flag) - - def __getattr__(self, name): - if name == 'buffer_text': - return self.buffer is not None - elif name in currents: - return getattr(lib, 'XML_Get' + name)(self.itself) - elif name == 'ErrorColumnNumber': - return lib.XML_GetCurrentColumnNumber(self.itself) - elif name == 'ErrorLineNumber': - return lib.XML_GetCurrentLineNumber(self.itself) - return self.__dict__[name] - - def ParseFile(self, file): - return self.Parse(file.read(), False) - - def SetBase(self, base): - XML_SetBase(self.itself, base) - - def ExternalEntityParserCreate(self, context, encoding=None): - """ExternalEntityParserCreate(context[, encoding]) - Create a parser for parsing an external entity based on the - information passed to the ExternalEntityRefHandler.""" - new_parser = XMLParserType(encoding, None, True) - new_parser.itself = XML_ExternalEntityParserCreate(self.itself, - context, encoding) - new_parser._set_unknown_encoding_handler() - return new_parser - - at builtinify -def ErrorString(errno): - return XML_ErrorString(errno)[:200] - - at builtinify -def ParserCreate(encoding=None, namespace_separator=None, intern=None): - if (not isinstance(encoding, str) and - not encoding is None): - raise TypeError("ParserCreate() argument 1 must be string or None, not %s" % encoding.__class__.__name__) - if (not isinstance(namespace_separator, str) and - not namespace_separator is None): - raise TypeError("ParserCreate() argument 2 must be string or None, not %s" % namespace_separator.__class__.__name__) - if namespace_separator is not None: - if len(namespace_separator) > 1: - raise ValueError('namespace_separator must be at most one character, omitted, or None') - if len(namespace_separator) == 0: - namespace_separator = None - return XMLParserType(encoding, namespace_separator) diff --git a/lib_pypy/pypy_test/test_pyexpat.py b/lib_pypy/pypy_test/test_pyexpat.py deleted file mode 100644 --- a/lib_pypy/pypy_test/test_pyexpat.py +++ /dev/null @@ -1,665 +0,0 @@ -# XXX TypeErrors on calling handlers, or on bad return values from a -# handler, are obscure and unhelpful. 
- -from __future__ import absolute_import -import StringIO, sys -import unittest, py - -from lib_pypy.ctypes_config_cache import rebuild -rebuild.rebuild_one('pyexpat.ctc.py') - -from lib_pypy import pyexpat -#from xml.parsers import expat -expat = pyexpat - -from test.test_support import sortdict, run_unittest - - -class TestSetAttribute: - def setup_method(self, meth): - self.parser = expat.ParserCreate(namespace_separator='!') - self.set_get_pairs = [ - [0, 0], - [1, 1], - [2, 1], - [0, 0], - ] - - def test_returns_unicode(self): - for x, y in self.set_get_pairs: - self.parser.returns_unicode = x - assert self.parser.returns_unicode == y - - def test_ordered_attributes(self): - for x, y in self.set_get_pairs: - self.parser.ordered_attributes = x - assert self.parser.ordered_attributes == y - - def test_specified_attributes(self): - for x, y in self.set_get_pairs: - self.parser.specified_attributes = x - assert self.parser.specified_attributes == y - - -data = '''\ - - - - - - - - - -%unparsed_entity; -]> - - - - Contents of subelements - - -&external_entity; -&skipped_entity; - -''' - - -# Produce UTF-8 output -class TestParse: - class Outputter: - def __init__(self): - self.out = [] - - def StartElementHandler(self, name, attrs): - self.out.append('Start element: ' + repr(name) + ' ' + - sortdict(attrs)) - - def EndElementHandler(self, name): - self.out.append('End element: ' + repr(name)) - - def CharacterDataHandler(self, data): - data = data.strip() - if data: - self.out.append('Character data: ' + repr(data)) - - def ProcessingInstructionHandler(self, target, data): - self.out.append('PI: ' + repr(target) + ' ' + repr(data)) - - def StartNamespaceDeclHandler(self, prefix, uri): - self.out.append('NS decl: ' + repr(prefix) + ' ' + repr(uri)) - - def EndNamespaceDeclHandler(self, prefix): - self.out.append('End of NS decl: ' + repr(prefix)) - - def StartCdataSectionHandler(self): - self.out.append('Start of CDATA section') - - def EndCdataSectionHandler(self): - self.out.append('End of CDATA section') - - def CommentHandler(self, text): - self.out.append('Comment: ' + repr(text)) - - def NotationDeclHandler(self, *args): - name, base, sysid, pubid = args - self.out.append('Notation declared: %s' %(args,)) - - def UnparsedEntityDeclHandler(self, *args): - entityName, base, systemId, publicId, notationName = args - self.out.append('Unparsed entity decl: %s' %(args,)) - - def NotStandaloneHandler(self): - self.out.append('Not standalone') - return 1 - - def ExternalEntityRefHandler(self, *args): - context, base, sysId, pubId = args - self.out.append('External entity ref: %s' %(args[1:],)) - return 1 - - def StartDoctypeDeclHandler(self, *args): - self.out.append(('Start doctype', args)) - return 1 - - def EndDoctypeDeclHandler(self): - self.out.append("End doctype") - return 1 - - def EntityDeclHandler(self, *args): - self.out.append(('Entity declaration', args)) - return 1 - - def XmlDeclHandler(self, *args): - self.out.append(('XML declaration', args)) - return 1 - - def ElementDeclHandler(self, *args): - self.out.append(('Element declaration', args)) - return 1 - - def AttlistDeclHandler(self, *args): - self.out.append(('Attribute list declaration', args)) - return 1 - - def SkippedEntityHandler(self, *args): - self.out.append(("Skipped entity", args)) - return 1 - - def DefaultHandler(self, userData): - pass - - def DefaultHandlerExpand(self, userData): - pass - - handler_names = [ - 'StartElementHandler', 'EndElementHandler', 'CharacterDataHandler', - 
'ProcessingInstructionHandler', 'UnparsedEntityDeclHandler', - 'NotationDeclHandler', 'StartNamespaceDeclHandler', - 'EndNamespaceDeclHandler', 'CommentHandler', - 'StartCdataSectionHandler', 'EndCdataSectionHandler', 'DefaultHandler', - 'DefaultHandlerExpand', 'NotStandaloneHandler', - 'ExternalEntityRefHandler', 'StartDoctypeDeclHandler', - 'EndDoctypeDeclHandler', 'EntityDeclHandler', 'XmlDeclHandler', - 'ElementDeclHandler', 'AttlistDeclHandler', 'SkippedEntityHandler', - ] - - def test_utf8(self): - - out = self.Outputter() - parser = expat.ParserCreate(namespace_separator='!') - for name in self.handler_names: - setattr(parser, name, getattr(out, name)) - parser.returns_unicode = 0 - parser.Parse(data, 1) - - # Verify output - operations = out.out - expected_operations = [ - ('XML declaration', (u'1.0', u'iso-8859-1', 0)), - 'PI: \'xml-stylesheet\' \'href="stylesheet.css"\'', - "Comment: ' comment data '", - "Not standalone", - ("Start doctype", ('quotations', 'quotations.dtd', None, 1)), - ('Element declaration', (u'root', (2, 0, None, ()))), - ('Attribute list declaration', ('root', 'attr1', 'CDATA', None, - 1)), - ('Attribute list declaration', ('root', 'attr2', 'CDATA', None, - 0)), - "Notation declared: ('notation', None, 'notation.jpeg', None)", - ('Entity declaration', ('acirc', 0, '\xc3\xa2', None, None, None, None)), - ('Entity declaration', ('external_entity', 0, None, None, - 'entity.file', None, None)), - "Unparsed entity decl: ('unparsed_entity', None, 'entity.file', None, 'notation')", - "Not standalone", - "End doctype", - "Start element: 'root' {'attr1': 'value1', 'attr2': 'value2\\xe1\\xbd\\x80'}", - "NS decl: 'myns' 'http://www.python.org/namespace'", - "Start element: 'http://www.python.org/namespace!subelement' {}", - "Character data: 'Contents of subelements'", - "End element: 'http://www.python.org/namespace!subelement'", - "End of NS decl: 'myns'", - "Start element: 'sub2' {}", - 'Start of CDATA section', - "Character data: 'contents of CDATA section'", - 'End of CDATA section', - "End element: 'sub2'", - "External entity ref: (None, 'entity.file', None)", - ('Skipped entity', ('skipped_entity', 0)), - "End element: 'root'", - ] - for operation, expected_operation in zip(operations, expected_operations): - assert operation == expected_operation - - def test_unicode(self): - # Try the parse again, this time producing Unicode output - out = self.Outputter() - parser = expat.ParserCreate(namespace_separator='!') - parser.returns_unicode = 1 - for name in self.handler_names: - setattr(parser, name, getattr(out, name)) - - parser.Parse(data, 1) - - operations = out.out - expected_operations = [ - ('XML declaration', (u'1.0', u'iso-8859-1', 0)), - 'PI: u\'xml-stylesheet\' u\'href="stylesheet.css"\'', - "Comment: u' comment data '", - "Not standalone", - ("Start doctype", ('quotations', 'quotations.dtd', None, 1)), - ('Element declaration', (u'root', (2, 0, None, ()))), - ('Attribute list declaration', ('root', 'attr1', 'CDATA', None, - 1)), - ('Attribute list declaration', ('root', 'attr2', 'CDATA', None, - 0)), - "Notation declared: (u'notation', None, u'notation.jpeg', None)", - ('Entity declaration', (u'acirc', 0, u'\xe2', None, None, None, - None)), - ('Entity declaration', (u'external_entity', 0, None, None, - u'entity.file', None, None)), - "Unparsed entity decl: (u'unparsed_entity', None, u'entity.file', None, u'notation')", - "Not standalone", - "End doctype", - "Start element: u'root' {u'attr1': u'value1', u'attr2': u'value2\\u1f40'}", - "NS decl: u'myns' 
u'http://www.python.org/namespace'", - "Start element: u'http://www.python.org/namespace!subelement' {}", - "Character data: u'Contents of subelements'", - "End element: u'http://www.python.org/namespace!subelement'", - "End of NS decl: u'myns'", - "Start element: u'sub2' {}", - 'Start of CDATA section', - "Character data: u'contents of CDATA section'", - 'End of CDATA section', - "End element: u'sub2'", - "External entity ref: (None, u'entity.file', None)", - ('Skipped entity', ('skipped_entity', 0)), - "End element: u'root'", - ] - for operation, expected_operation in zip(operations, expected_operations): - assert operation == expected_operation - - def test_parse_file(self): - # Try parsing a file - out = self.Outputter() - parser = expat.ParserCreate(namespace_separator='!') - parser.returns_unicode = 1 - for name in self.handler_names: - setattr(parser, name, getattr(out, name)) - file = StringIO.StringIO(data) - - parser.ParseFile(file) - - operations = out.out - expected_operations = [ - ('XML declaration', (u'1.0', u'iso-8859-1', 0)), - 'PI: u\'xml-stylesheet\' u\'href="stylesheet.css"\'', - "Comment: u' comment data '", - "Not standalone", - ("Start doctype", ('quotations', 'quotations.dtd', None, 1)), - ('Element declaration', (u'root', (2, 0, None, ()))), - ('Attribute list declaration', ('root', 'attr1', 'CDATA', None, - 1)), - ('Attribute list declaration', ('root', 'attr2', 'CDATA', None, - 0)), - "Notation declared: (u'notation', None, u'notation.jpeg', None)", - ('Entity declaration', ('acirc', 0, u'\xe2', None, None, None, None)), - ('Entity declaration', (u'external_entity', 0, None, None, u'entity.file', None, None)), - "Unparsed entity decl: (u'unparsed_entity', None, u'entity.file', None, u'notation')", - "Not standalone", - "End doctype", - "Start element: u'root' {u'attr1': u'value1', u'attr2': u'value2\\u1f40'}", - "NS decl: u'myns' u'http://www.python.org/namespace'", - "Start element: u'http://www.python.org/namespace!subelement' {}", - "Character data: u'Contents of subelements'", - "End element: u'http://www.python.org/namespace!subelement'", - "End of NS decl: u'myns'", - "Start element: u'sub2' {}", - 'Start of CDATA section', - "Character data: u'contents of CDATA section'", - 'End of CDATA section', - "End element: u'sub2'", - "External entity ref: (None, u'entity.file', None)", - ('Skipped entity', ('skipped_entity', 0)), - "End element: u'root'", - ] - for operation, expected_operation in zip(operations, expected_operations): - assert operation == expected_operation - - -class TestNamespaceSeparator: - def test_legal(self): - # Tests that make sure we get errors when the namespace_separator value - # is illegal, and that we don't for good values: - expat.ParserCreate() - expat.ParserCreate(namespace_separator=None) - expat.ParserCreate(namespace_separator=' ') - - def test_illegal(self): - try: - expat.ParserCreate(namespace_separator=42) - raise AssertionError - except TypeError, e: - assert str(e) == ( - 'ParserCreate() argument 2 must be string or None, not int') - - try: - expat.ParserCreate(namespace_separator='too long') - raise AssertionError - except ValueError, e: - assert str(e) == ( - 'namespace_separator must be at most one character, omitted, or None') - - def test_zero_length(self): - # ParserCreate() needs to accept a namespace_separator of zero length - # to satisfy the requirements of RDF applications that are required - # to simply glue together the namespace URI and the localname. 
Though - # considered a wart of the RDF specifications, it needs to be supported. - # - # See XML-SIG mailing list thread starting with - # http://mail.python.org/pipermail/xml-sig/2001-April/005202.html - # - expat.ParserCreate(namespace_separator='') # too short - - -class TestInterning: - def test(self): - py.test.skip("Not working") - # Test the interning machinery. - p = expat.ParserCreate() - L = [] - def collector(name, *args): - L.append(name) - p.StartElementHandler = collector - p.EndElementHandler = collector - p.Parse(" ", 1) - tag = L[0] - assert len(L) == 6 - for entry in L: - # L should have the same string repeated over and over. - assert tag is entry - - -class TestBufferText: - def setup_method(self, meth): - self.stuff = [] - self.parser = expat.ParserCreate() - self.parser.buffer_text = 1 - self.parser.CharacterDataHandler = self.CharacterDataHandler - - def check(self, expected, label): - assert self.stuff == expected, ( - "%s\nstuff = %r\nexpected = %r" - % (label, self.stuff, map(unicode, expected))) - - def CharacterDataHandler(self, text): - self.stuff.append(text) - - def StartElementHandler(self, name, attrs): - self.stuff.append("<%s>" % name) - bt = attrs.get("buffer-text") - if bt == "yes": - self.parser.buffer_text = 1 - elif bt == "no": - self.parser.buffer_text = 0 - - def EndElementHandler(self, name): - self.stuff.append("" % name) - - def CommentHandler(self, data): - self.stuff.append("" % data) - - def setHandlers(self, handlers=[]): - for name in handlers: - setattr(self.parser, name, getattr(self, name)) - - def test_default_to_disabled(self): - parser = expat.ParserCreate() - assert not parser.buffer_text - - def test_buffering_enabled(self): - # Make sure buffering is turned on - assert self.parser.buffer_text - self.parser.Parse("123", 1) - assert self.stuff == ['123'], ( - "buffered text not properly collapsed") - - def test1(self): - # XXX This test exposes more detail of Expat's text chunking than we - # XXX like, but it tests what we need to concisely. 
- self.setHandlers(["StartElementHandler"]) - self.parser.Parse("12\n34\n5", 1) - assert self.stuff == ( - ["", "1", "", "2", "\n", "3", "", "4\n5"]), ( - "buffering control not reacting as expected") - - def test2(self): - self.parser.Parse("1<2> \n 3", 1) - assert self.stuff == ["1<2> \n 3"], ( - "buffered text not properly collapsed") - - def test3(self): - self.setHandlers(["StartElementHandler"]) - self.parser.Parse("123", 1) - assert self.stuff == ["", "1", "", "2", "", "3"], ( - "buffered text not properly split") - - def test4(self): - self.setHandlers(["StartElementHandler", "EndElementHandler"]) - self.parser.CharacterDataHandler = None - self.parser.Parse("123", 1) - assert self.stuff == ( - ["", "", "", "", "", ""]) - - def test5(self): - self.setHandlers(["StartElementHandler", "EndElementHandler"]) - self.parser.Parse("123", 1) - assert self.stuff == ( - ["", "1", "", "", "2", "", "", "3", ""]) - - def test6(self): - self.setHandlers(["CommentHandler", "EndElementHandler", - "StartElementHandler"]) - self.parser.Parse("12345 ", 1) - assert self.stuff == ( - ["", "1", "", "", "2", "", "", "345", ""]), ( - "buffered text not properly split") - - def test7(self): - self.setHandlers(["CommentHandler", "EndElementHandler", - "StartElementHandler"]) - self.parser.Parse("12345 ", 1) - assert self.stuff == ( - ["", "1", "", "", "2", "", "", "3", - "", "4", "", "5", ""]), ( - "buffered text not properly split") - - -# Test handling of exception from callback: -class TestHandlerException: - def StartElementHandler(self, name, attrs): - raise RuntimeError(name) - - def test(self): - parser = expat.ParserCreate() - parser.StartElementHandler = self.StartElementHandler - try: - parser.Parse("", 1) - raise AssertionError - except RuntimeError, e: - assert e.args[0] == 'a', ( - "Expected RuntimeError for element 'a', but" + \ - " found %r" % e.args[0]) - - -# Test Current* members: -class TestPosition: - def StartElementHandler(self, name, attrs): - self.check_pos('s') - - def EndElementHandler(self, name): - self.check_pos('e') - - def check_pos(self, event): - pos = (event, - self.parser.CurrentByteIndex, - self.parser.CurrentLineNumber, - self.parser.CurrentColumnNumber) - assert self.upto < len(self.expected_list) - expected = self.expected_list[self.upto] - assert pos == expected, ( - 'Expected position %s, got position %s' %(pos, expected)) - self.upto += 1 - - def test(self): - self.parser = expat.ParserCreate() - self.parser.StartElementHandler = self.StartElementHandler - self.parser.EndElementHandler = self.EndElementHandler - self.upto = 0 - self.expected_list = [('s', 0, 1, 0), ('s', 5, 2, 1), ('s', 11, 3, 2), - ('e', 15, 3, 6), ('e', 17, 4, 1), ('e', 22, 5, 0)] - - xml = '\n \n \n \n' - self.parser.Parse(xml, 1) - - -class Testsf1296433: - def test_parse_only_xml_data(self): - # http://python.org/sf/1296433 - # - xml = "%s" % ('a' * 1025) - # this one doesn't crash - #xml = "%s" % ('a' * 10000) - - class SpecificException(Exception): - pass - - def handler(text): - raise SpecificException - - parser = expat.ParserCreate() - parser.CharacterDataHandler = handler - - py.test.raises(Exception, parser.Parse, xml) - -class TestChardataBuffer: - """ - test setting of chardata buffer size - """ - - def test_1025_bytes(self): - assert self.small_buffer_test(1025) == 2 - - def test_1000_bytes(self): - assert self.small_buffer_test(1000) == 1 - - def test_wrong_size(self): - parser = expat.ParserCreate() - parser.buffer_text = 1 - def f(size): - parser.buffer_size = size - - 
py.test.raises(TypeError, f, sys.maxint+1) - py.test.raises(ValueError, f, -1) - py.test.raises(ValueError, f, 0) - - def test_unchanged_size(self): - xml1 = ("%s" % ('a' * 512)) - xml2 = 'a'*512 + '' - parser = expat.ParserCreate() - parser.CharacterDataHandler = self.counting_handler - parser.buffer_size = 512 - parser.buffer_text = 1 - - # Feed 512 bytes of character data: the handler should be called - # once. - self.n = 0 - parser.Parse(xml1) - assert self.n == 1 - - # Reassign to buffer_size, but assign the same size. - parser.buffer_size = parser.buffer_size - assert self.n == 1 - - # Try parsing rest of the document - parser.Parse(xml2) - assert self.n == 2 - - - def test_disabling_buffer(self): - xml1 = "%s" % ('a' * 512) - xml2 = ('b' * 1024) - xml3 = "%s" % ('c' * 1024) - parser = expat.ParserCreate() - parser.CharacterDataHandler = self.counting_handler - parser.buffer_text = 1 - parser.buffer_size = 1024 - assert parser.buffer_size == 1024 - - # Parse one chunk of XML - self.n = 0 - parser.Parse(xml1, 0) - assert parser.buffer_size == 1024 - assert self.n == 1 - - # Turn off buffering and parse the next chunk. - parser.buffer_text = 0 - assert not parser.buffer_text - assert parser.buffer_size == 1024 - for i in range(10): - parser.Parse(xml2, 0) - assert self.n == 11 - - parser.buffer_text = 1 - assert parser.buffer_text - assert parser.buffer_size == 1024 - parser.Parse(xml3, 1) - assert self.n == 12 - - - - def make_document(self, bytes): - return ("" + bytes * 'a' + '') - - def counting_handler(self, text): - self.n += 1 - - def small_buffer_test(self, buffer_len): - xml = "%s" % ('a' * buffer_len) - parser = expat.ParserCreate() - parser.CharacterDataHandler = self.counting_handler - parser.buffer_size = 1024 - parser.buffer_text = 1 - - self.n = 0 - parser.Parse(xml) - return self.n - - def test_change_size_1(self): - xml1 = "%s" % ('a' * 1024) - xml2 = "aaa%s" % ('a' * 1025) - parser = expat.ParserCreate() - parser.CharacterDataHandler = self.counting_handler - parser.buffer_text = 1 - parser.buffer_size = 1024 - assert parser.buffer_size == 1024 - - self.n = 0 - parser.Parse(xml1, 0) - parser.buffer_size *= 2 - assert parser.buffer_size == 2048 - parser.Parse(xml2, 1) - assert self.n == 2 - - def test_change_size_2(self): - xml1 = "a%s" % ('a' * 1023) - xml2 = "aaa%s" % ('a' * 1025) - parser = expat.ParserCreate() - parser.CharacterDataHandler = self.counting_handler - parser.buffer_text = 1 - parser.buffer_size = 2048 - assert parser.buffer_size == 2048 - - self.n=0 - parser.Parse(xml1, 0) - parser.buffer_size /= 2 - assert parser.buffer_size == 1024 - parser.Parse(xml2, 1) - assert self.n == 4 - - def test_segfault(self): - py.test.raises(TypeError, expat.ParserCreate, 1234123123) - -def test_invalid_data(): - parser = expat.ParserCreate() - parser.Parse('invalid.xml', 0) - try: - parser.Parse("", 1) - except expat.ExpatError, e: - assert e.code == 2 # XXX is this reliable? - assert e.lineno == 1 - assert e.message.startswith('syntax error') - else: - py.test.fail("Did not raise") - diff --git a/pypy/doc/config/objspace.usemodules.pyexpat.txt b/pypy/doc/config/objspace.usemodules.pyexpat.txt --- a/pypy/doc/config/objspace.usemodules.pyexpat.txt +++ b/pypy/doc/config/objspace.usemodules.pyexpat.txt @@ -1,2 +1,1 @@ -Use (experimental) pyexpat module written in RPython, instead of CTypes -version which is used by default. +Use the pyexpat module, written in RPython. 
diff --git a/pypy/module/cpyext/api.py b/pypy/module/cpyext/api.py --- a/pypy/module/cpyext/api.py +++ b/pypy/module/cpyext/api.py @@ -23,6 +23,7 @@ from pypy.interpreter.function import StaticMethod from pypy.objspace.std.sliceobject import W_SliceObject from pypy.module.__builtin__.descriptor import W_Property +from pypy.module.__builtin__.interp_classobj import W_ClassObject from pypy.module.__builtin__.interp_memoryview import W_MemoryView from pypy.rlib.entrypoint import entrypoint from pypy.rlib.unroll import unrolling_iterable @@ -397,6 +398,7 @@ 'Module': 'space.gettypeobject(Module.typedef)', 'Property': 'space.gettypeobject(W_Property.typedef)', 'Slice': 'space.gettypeobject(W_SliceObject.typedef)', + 'Class': 'space.gettypeobject(W_ClassObject.typedef)', 'StaticMethod': 'space.gettypeobject(StaticMethod.typedef)', 'CFunction': 'space.gettypeobject(cpyext.methodobject.W_PyCFunctionObject.typedef)', 'WrapperDescr': 'space.gettypeobject(cpyext.methodobject.W_PyCMethodObject.typedef)' diff --git a/pypy/module/cpyext/test/test_classobject.py b/pypy/module/cpyext/test/test_classobject.py --- a/pypy/module/cpyext/test/test_classobject.py +++ b/pypy/module/cpyext/test/test_classobject.py @@ -1,4 +1,5 @@ from pypy.module.cpyext.test.test_api import BaseApiTest +from pypy.module.cpyext.test.test_cpyext import AppTestCpythonExtensionBase from pypy.interpreter.function import Function, Method class TestClassObject(BaseApiTest): @@ -51,3 +52,14 @@ assert api.PyInstance_Check(w_instance) assert space.is_true(space.call_method(space.builtin, "isinstance", w_instance, w_class)) + +class AppTestStringObject(AppTestCpythonExtensionBase): + def test_class_type(self): + module = self.import_extension('foo', [ + ("get_classtype", "METH_NOARGS", + """ + Py_INCREF(&PyClass_Type); + return &PyClass_Type; + """)]) + class C: pass + assert module.get_classtype() is type(C) diff --git a/pypy/module/test_lib_pypy/test_datetime.py b/pypy/module/test_lib_pypy/test_datetime.py --- a/pypy/module/test_lib_pypy/test_datetime.py +++ b/pypy/module/test_lib_pypy/test_datetime.py @@ -1,7 +1,10 @@ """Additional tests for datetime.""" +import py + import time import datetime +import copy import os def test_utcfromtimestamp(): @@ -26,3 +29,18 @@ def test_utcfromtimestamp_microsecond(): dt = datetime.datetime.utcfromtimestamp(0) assert isinstance(dt.microsecond, int) + + +def test_integer_args(): + with py.test.raises(TypeError): + datetime.datetime(10, 10, 10.) + with py.test.raises(TypeError): + datetime.datetime(10, 10, 10, 10, 10.) + with py.test.raises(TypeError): + datetime.datetime(10, 10, 10, 10, 10, 10.) 
+ +def test_utcnow_microsecond(): + dt = datetime.datetime.utcnow() + assert type(dt.microsecond) is int + + copy.copy(dt) \ No newline at end of file diff --git a/pypy/rlib/libffi.py b/pypy/rlib/libffi.py --- a/pypy/rlib/libffi.py +++ b/pypy/rlib/libffi.py @@ -238,7 +238,7 @@ self = jit.promote(self) if argchain.numargs != len(self.argtypes): raise TypeError, 'Wrong number of arguments: %d expected, got %d' %\ - (argchain.numargs, len(self.argtypes)) + (len(self.argtypes), argchain.numargs) ll_args = self._prepare() i = 0 arg = argchain.first From noreply at buildbot.pypy.org Tue Feb 14 11:47:14 2012 From: noreply at buildbot.pypy.org (hager) Date: Tue, 14 Feb 2012 11:47:14 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: alloc SCRATCH reg in emit_getinteriorfield_gc Message-ID: <20120214104714.900E38203C@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r52444:da57d1e43b4c Date: 2012-02-14 11:46 +0100 http://bitbucket.org/pypy/pypy/changeset/da57d1e43b4c/ Log: alloc SCRATCH reg in emit_getinteriorfield_gc diff --git a/pypy/jit/backend/ppc/opassembler.py b/pypy/jit/backend/ppc/opassembler.py --- a/pypy/jit/backend/ppc/opassembler.py +++ b/pypy/jit/backend/ppc/opassembler.py @@ -542,6 +542,7 @@ def emit_getinteriorfield_gc(self, op, arglocs, regalloc): (base_loc, index_loc, res_loc, ofs_loc, ofs, itemsize, fieldsize) = arglocs + self.mc.alloc_scratch_reg() self.mc.load_imm(r.SCRATCH, itemsize.value) self.mc.mullw(r.SCRATCH.value, index_loc.value, r.SCRATCH.value) if ofs.value > 0: @@ -560,6 +561,7 @@ self.mc.lbzx(res_loc.value, base_loc.value, r.SCRATCH.value) else: assert 0 + self.mc.free_scratch_reg() #XXX Hack, Hack, Hack if not we_are_translated(): From noreply at buildbot.pypy.org Tue Feb 14 13:26:50 2012 From: noreply at buildbot.pypy.org (hager) Date: Tue, 14 Feb 2012 13:26:50 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: (bivab, hager): do not omit call to _write_fail_index because it crashed on PPC32 Message-ID: <20120214122650.9C8708203C@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r52445:274177b422bf Date: 2012-02-14 13:24 +0100 http://bitbucket.org/pypy/pypy/changeset/274177b422bf/ Log: (bivab, hager): do not omit call to _write_fail_index because it crashed on PPC32 diff --git a/pypy/jit/backend/ppc/regalloc.py b/pypy/jit/backend/ppc/regalloc.py --- a/pypy/jit/backend/ppc/regalloc.py +++ b/pypy/jit/backend/ppc/regalloc.py @@ -516,6 +516,7 @@ # do the call faildescr = guard_op.getdescr() fail_index = self.cpu.get_fail_descr_number(faildescr) + self.assembler._write_fail_index(fail_index) args = [imm(rffi.cast(lltype.Signed, op.getarg(0).getint()))] self.assembler.emit_call(op, args, self, fail_index) # then reopen the stack From noreply at buildbot.pypy.org Tue Feb 14 13:26:52 2012 From: noreply at buildbot.pypy.org (hager) Date: Tue, 14 Feb 2012 13:26:52 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: (bivab, hager): set save_exc in emit_guard_call_may_force Message-ID: <20120214122652.27C808203C@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r52446:6c5b993e84ac Date: 2012-02-14 13:26 +0100 http://bitbucket.org/pypy/pypy/changeset/6c5b993e84ac/ Log: (bivab, hager): set save_exc in emit_guard_call_may_force diff --git a/pypy/jit/backend/ppc/opassembler.py b/pypy/jit/backend/ppc/opassembler.py --- a/pypy/jit/backend/ppc/opassembler.py +++ b/pypy/jit/backend/ppc/opassembler.py @@ -1095,7 +1095,7 @@ self.mc.load(r.SCRATCH.value, r.SPP.value, ENCODING_AREA) 
self.mc.cmp_op(0, r.SCRATCH.value, 0, imm=True) self.mc.free_scratch_reg() - self._emit_guard(guard_op, arglocs, c.LT) + self._emit_guard(guard_op, arglocs, c.LT, save_exc=True) emit_guard_call_release_gil = emit_guard_call_may_force From noreply at buildbot.pypy.org Tue Feb 14 14:27:33 2012 From: noreply at buildbot.pypy.org (hager) Date: Tue, 14 Feb 2012 14:27:33 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: merge with arm-backend-2 Message-ID: <20120214132733.EB6CD8203C@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r52447:f8300fcc9545 Date: 2012-02-14 14:26 +0100 http://bitbucket.org/pypy/pypy/changeset/f8300fcc9545/ Log: merge with arm-backend-2 diff too long, truncating to 10000 out of 165250 lines diff --git a/.hgignore b/.hgignore --- a/.hgignore +++ b/.hgignore @@ -3,6 +3,9 @@ *.sw[po] *~ .*.swp +.idea +.project +.pydevproject syntax: regexp ^testresult$ diff --git a/LICENSE b/LICENSE --- a/LICENSE +++ b/LICENSE @@ -37,43 +37,47 @@ Armin Rigo Maciej Fijalkowski Carl Friedrich Bolz + Amaury Forgeot d'Arc Antonio Cuni - Amaury Forgeot d'Arc Samuele Pedroni Michael Hudson Holger Krekel - Benjamin Peterson + Alex Gaynor Christian Tismer Hakan Ardo - Alex Gaynor + Benjamin Peterson + David Schneider Eric van Riet Paap Anders Chrigstrom - David Schneider Richard Emslie Dan Villiom Podlaski Christiansen Alexander Schremmer + Lukas Diekmann Aurelien Campeas Anders Lehmann Camillo Bruni Niklaus Haldimann + Sven Hager Leonardo Santagada Toon Verwaest Seo Sanghyeon + Justin Peel Lawrence Oluyede Bartosz Skowron Jakub Gustak Guido Wesdorp Daniel Roberts + Laura Creighton Adrien Di Mascio - Laura Creighton Ludovic Aubry Niko Matsakis + Wim Lavrijsen + Matti Picus Jason Creighton Jacob Hallen Alex Martelli Anders Hammarquist Jan de Mooij - Wim Lavrijsen Stephan Diehl Michael Foord Stefan Schwarzer @@ -84,34 +88,36 @@ Alexandre Fayolle Marius Gedminas Simon Burton - Justin Peel + David Edelsohn Jean-Paul Calderone John Witulski - Lukas Diekmann + Timo Paulssen holger krekel - Wim Lavrijsen Dario Bertini + Mark Pearse Andreas Stührk Jean-Philippe St. 
Pierre Guido van Rossum Pavel Vinogradov Valentino Volonghi Paul deGrandis + Ilya Osadchiy + Ronny Pfannschmidt Adrian Kuhn tav Georg Brandl + Philip Jenvey Gerald Klix Wanja Saatkamp - Ronny Pfannschmidt Boris Feigin Oscar Nierstrasz David Malcolm Eugene Oden Henry Mason - Sven Hager + Jeff Terrace Lukas Renggli - Ilya Osadchiy Guenter Jantzen + Ned Batchelder Bert Freudenberg Amit Regmi Ben Young @@ -142,7 +148,6 @@ Anders Qvist Beatrice During Alexander Sedov - Timo Paulssen Corbin Simpson Vincent Legoll Romain Guillebert @@ -165,9 +170,10 @@ Lucio Torre Lene Wagner Miguel de Val Borro + Artur Lisiecki + Bruno Gola Ignas Mikalajunas - Artur Lisiecki - Philip Jenvey + Stefano Rivera Joshua Gilbert Godefroid Chappelle Yusei Tahara @@ -179,17 +185,17 @@ Kristjan Valur Jonsson Bobby Impollonia Michael Hudson-Doyle + Laurence Tratt + Yasir Suhail Andrew Thompson Anders Sigfridsson Floris Bruynooghe Jacek Generowicz Dan Colish Zooko Wilcox-O Hearn - Dan Villiom Podlaski Christiansen - Anders Hammarquist + Dan Loewenherz Chris Lambacher Dinu Gherman - Dan Colish Brett Cannon Daniel Neuhäuser Michael Chermside diff --git a/lib-python/2.7/BaseHTTPServer.py b/lib-python/2.7/BaseHTTPServer.py --- a/lib-python/2.7/BaseHTTPServer.py +++ b/lib-python/2.7/BaseHTTPServer.py @@ -310,7 +310,13 @@ """ try: - self.raw_requestline = self.rfile.readline() + self.raw_requestline = self.rfile.readline(65537) + if len(self.raw_requestline) > 65536: + self.requestline = '' + self.request_version = '' + self.command = '' + self.send_error(414) + return if not self.raw_requestline: self.close_connection = 1 return diff --git a/lib-python/2.7/ConfigParser.py b/lib-python/2.7/ConfigParser.py --- a/lib-python/2.7/ConfigParser.py +++ b/lib-python/2.7/ConfigParser.py @@ -545,6 +545,38 @@ if isinstance(val, list): options[name] = '\n'.join(val) +import UserDict as _UserDict + +class _Chainmap(_UserDict.DictMixin): + """Combine multiple mappings for successive lookups. + + For example, to emulate Python's normal lookup sequence: + + import __builtin__ + pylookup = _Chainmap(locals(), globals(), vars(__builtin__)) + """ + + def __init__(self, *maps): + self._maps = maps + + def __getitem__(self, key): + for mapping in self._maps: + try: + return mapping[key] + except KeyError: + pass + raise KeyError(key) + + def keys(self): + result = [] + seen = set() + for mapping in self_maps: + for key in mapping: + if key not in seen: + result.append(key) + seen.add(key) + return result + class ConfigParser(RawConfigParser): def get(self, section, option, raw=False, vars=None): @@ -559,16 +591,18 @@ The section DEFAULT is special. 
""" - d = self._defaults.copy() + sectiondict = {} try: - d.update(self._sections[section]) + sectiondict = self._sections[section] except KeyError: if section != DEFAULTSECT: raise NoSectionError(section) # Update with the entry specific variables + vardict = {} if vars: for key, value in vars.items(): - d[self.optionxform(key)] = value + vardict[self.optionxform(key)] = value + d = _Chainmap(vardict, sectiondict, self._defaults) option = self.optionxform(option) try: value = d[option] diff --git a/lib-python/2.7/Cookie.py b/lib-python/2.7/Cookie.py --- a/lib-python/2.7/Cookie.py +++ b/lib-python/2.7/Cookie.py @@ -258,6 +258,11 @@ '\033' : '\\033', '\034' : '\\034', '\035' : '\\035', '\036' : '\\036', '\037' : '\\037', + # Because of the way browsers really handle cookies (as opposed + # to what the RFC says) we also encode , and ; + + ',' : '\\054', ';' : '\\073', + '"' : '\\"', '\\' : '\\\\', '\177' : '\\177', '\200' : '\\200', '\201' : '\\201', diff --git a/lib-python/2.7/HTMLParser.py b/lib-python/2.7/HTMLParser.py --- a/lib-python/2.7/HTMLParser.py +++ b/lib-python/2.7/HTMLParser.py @@ -26,7 +26,7 @@ tagfind = re.compile('[a-zA-Z][-.a-zA-Z0-9:_]*') attrfind = re.compile( r'\s*([a-zA-Z_][-.:a-zA-Z_0-9]*)(\s*=\s*' - r'(\'[^\']*\'|"[^"]*"|[-a-zA-Z0-9./,:;+*%?!&$\(\)_#=~@]*))?') + r'(\'[^\']*\'|"[^"]*"|[^\s"\'=<>`]*))?') locatestarttagend = re.compile(r""" <[a-zA-Z][-.a-zA-Z0-9:_]* # tag name @@ -99,7 +99,7 @@ markupbase.ParserBase.reset(self) def feed(self, data): - """Feed data to the parser. + r"""Feed data to the parser. Call this as often as you want, with as little or as much text as you want (may include '\n'). @@ -367,13 +367,16 @@ return s def replaceEntities(s): s = s.groups()[0] - if s[0] == "#": - s = s[1:] - if s[0] in ['x','X']: - c = int(s[1:], 16) - else: - c = int(s) - return unichr(c) + try: + if s[0] == "#": + s = s[1:] + if s[0] in ['x','X']: + c = int(s[1:], 16) + else: + c = int(s) + return unichr(c) + except ValueError: + return '&#'+s+';' else: # Cannot use name2codepoint directly, because HTMLParser supports apos, # which is not part of HTML 4 diff --git a/lib-python/2.7/SimpleHTTPServer.py b/lib-python/2.7/SimpleHTTPServer.py --- a/lib-python/2.7/SimpleHTTPServer.py +++ b/lib-python/2.7/SimpleHTTPServer.py @@ -15,6 +15,7 @@ import BaseHTTPServer import urllib import cgi +import sys import shutil import mimetypes try: @@ -131,7 +132,8 @@ length = f.tell() f.seek(0) self.send_response(200) - self.send_header("Content-type", "text/html") + encoding = sys.getfilesystemencoding() + self.send_header("Content-type", "text/html; charset=%s" % encoding) self.send_header("Content-Length", str(length)) self.end_headers() return f diff --git a/lib-python/2.7/SimpleXMLRPCServer.py b/lib-python/2.7/SimpleXMLRPCServer.py --- a/lib-python/2.7/SimpleXMLRPCServer.py +++ b/lib-python/2.7/SimpleXMLRPCServer.py @@ -246,7 +246,7 @@ marshalled data. For backwards compatibility, a dispatch function can be provided as an argument (see comment in SimpleXMLRPCRequestHandler.do_POST) but overriding the - existing method through subclassing is the prefered means + existing method through subclassing is the preferred means of changing method dispatch behavior. """ diff --git a/lib-python/2.7/SocketServer.py b/lib-python/2.7/SocketServer.py --- a/lib-python/2.7/SocketServer.py +++ b/lib-python/2.7/SocketServer.py @@ -675,7 +675,7 @@ # A timeout to apply to the request socket, if not None. timeout = None - # Disable nagle algoritm for this socket, if True. 
+ # Disable nagle algorithm for this socket, if True. # Use only when wbufsize != 0, to avoid small packets. disable_nagle_algorithm = False diff --git a/lib-python/2.7/StringIO.py b/lib-python/2.7/StringIO.py --- a/lib-python/2.7/StringIO.py +++ b/lib-python/2.7/StringIO.py @@ -266,6 +266,7 @@ 8th bit) will cause a UnicodeError to be raised when getvalue() is called. """ + _complain_ifclosed(self.closed) if self.buflist: self.buf += ''.join(self.buflist) self.buflist = [] diff --git a/lib-python/2.7/_abcoll.py b/lib-python/2.7/_abcoll.py --- a/lib-python/2.7/_abcoll.py +++ b/lib-python/2.7/_abcoll.py @@ -82,7 +82,7 @@ @classmethod def __subclasshook__(cls, C): if cls is Iterator: - if _hasattr(C, "next"): + if _hasattr(C, "next") and _hasattr(C, "__iter__"): return True return NotImplemented diff --git a/lib-python/2.7/_pyio.py b/lib-python/2.7/_pyio.py --- a/lib-python/2.7/_pyio.py +++ b/lib-python/2.7/_pyio.py @@ -16,6 +16,7 @@ import io from io import (__all__, SEEK_SET, SEEK_CUR, SEEK_END) +from errno import EINTR __metaclass__ = type @@ -559,7 +560,11 @@ if not data: break res += data - return bytes(res) + if res: + return bytes(res) + else: + # b'' or None + return data def readinto(self, b): """Read up to len(b) bytes into b. @@ -678,7 +683,7 @@ """ def __init__(self, raw): - self.raw = raw + self._raw = raw ### Positioning ### @@ -722,8 +727,8 @@ if self.raw is None: raise ValueError("raw stream already detached") self.flush() - raw = self.raw - self.raw = None + raw = self._raw + self._raw = None return raw ### Inquiries ### @@ -738,6 +743,10 @@ return self.raw.writable() @property + def raw(self): + return self._raw + + @property def closed(self): return self.raw.closed @@ -933,7 +942,12 @@ current_size = 0 while True: # Read until EOF or until read() would block. 
- chunk = self.raw.read() + try: + chunk = self.raw.read() + except IOError as e: + if e.errno != EINTR: + raise + continue if chunk in empty_values: nodata_val = chunk break @@ -952,7 +966,12 @@ chunks = [buf[pos:]] wanted = max(self.buffer_size, n) while avail < n: - chunk = self.raw.read(wanted) + try: + chunk = self.raw.read(wanted) + except IOError as e: + if e.errno != EINTR: + raise + continue if chunk in empty_values: nodata_val = chunk break @@ -981,7 +1000,14 @@ have = len(self._read_buf) - self._read_pos if have < want or have <= 0: to_read = self.buffer_size - have - current = self.raw.read(to_read) + while True: + try: + current = self.raw.read(to_read) + except IOError as e: + if e.errno != EINTR: + raise + continue + break if current: self._read_buf = self._read_buf[self._read_pos:] + current self._read_pos = 0 @@ -1088,7 +1114,12 @@ written = 0 try: while self._write_buf: - n = self.raw.write(self._write_buf) + try: + n = self.raw.write(self._write_buf) + except IOError as e: + if e.errno != EINTR: + raise + continue if n > len(self._write_buf) or n < 0: raise IOError("write() returned incorrect number of bytes") del self._write_buf[:n] @@ -1456,7 +1487,7 @@ if not isinstance(errors, basestring): raise ValueError("invalid errors: %r" % errors) - self.buffer = buffer + self._buffer = buffer self._line_buffering = line_buffering self._encoding = encoding self._errors = errors @@ -1511,6 +1542,10 @@ def line_buffering(self): return self._line_buffering + @property + def buffer(self): + return self._buffer + def seekable(self): return self._seekable @@ -1724,8 +1759,8 @@ if self.buffer is None: raise ValueError("buffer is already detached") self.flush() - buffer = self.buffer - self.buffer = None + buffer = self._buffer + self._buffer = None return buffer def seek(self, cookie, whence=0): diff --git a/lib-python/2.7/_weakrefset.py b/lib-python/2.7/_weakrefset.py --- a/lib-python/2.7/_weakrefset.py +++ b/lib-python/2.7/_weakrefset.py @@ -66,7 +66,11 @@ return sum(x() is not None for x in self.data) def __contains__(self, item): - return ref(item) in self.data + try: + wr = ref(item) + except TypeError: + return False + return wr in self.data def __reduce__(self): return (self.__class__, (list(self),), diff --git a/lib-python/2.7/anydbm.py b/lib-python/2.7/anydbm.py --- a/lib-python/2.7/anydbm.py +++ b/lib-python/2.7/anydbm.py @@ -29,17 +29,8 @@ list = d.keys() # return a list of all existing keys (slow!) Future versions may change the order in which implementations are -tested for existence, add interfaces to other dbm-like +tested for existence, and add interfaces to other dbm-like implementations. - -The open function has an optional second argument. This can be 'r', -for read-only access, 'w', for read-write access of an existing -database, 'c' for read-write access to a new or existing database, and -'n' for read-write access to a new database. The default is 'r'. - -Note: 'r' and 'w' fail if the database doesn't exist; 'c' creates it -only if it doesn't exist; and 'n' always creates a new database. - """ class error(Exception): @@ -63,7 +54,18 @@ error = tuple(_errors) -def open(file, flag = 'r', mode = 0666): +def open(file, flag='r', mode=0666): + """Open or create database at path given by *file*. + + Optional argument *flag* can be 'r' (default) for read-only access, 'w' + for read-write access of an existing database, 'c' for read-write access + to a new or existing database, and 'n' for read-write access to a new + database. 
+ + Note: 'r' and 'w' fail if the database doesn't exist; 'c' creates it + only if it doesn't exist; and 'n' always creates a new database. + """ + # guess the type of an existing database from whichdb import whichdb result=whichdb(file) diff --git a/lib-python/2.7/argparse.py b/lib-python/2.7/argparse.py --- a/lib-python/2.7/argparse.py +++ b/lib-python/2.7/argparse.py @@ -82,6 +82,7 @@ ] +import collections as _collections import copy as _copy import os as _os import re as _re @@ -1037,7 +1038,7 @@ self._prog_prefix = prog self._parser_class = parser_class - self._name_parser_map = {} + self._name_parser_map = _collections.OrderedDict() self._choices_actions = [] super(_SubParsersAction, self).__init__( @@ -1080,7 +1081,7 @@ parser = self._name_parser_map[parser_name] except KeyError: tup = parser_name, ', '.join(self._name_parser_map) - msg = _('unknown parser %r (choices: %s)' % tup) + msg = _('unknown parser %r (choices: %s)') % tup raise ArgumentError(self, msg) # parse all the remaining options into the namespace @@ -1109,7 +1110,7 @@ the builtin open() function. """ - def __init__(self, mode='r', bufsize=None): + def __init__(self, mode='r', bufsize=-1): self._mode = mode self._bufsize = bufsize @@ -1121,18 +1122,19 @@ elif 'w' in self._mode: return _sys.stdout else: - msg = _('argument "-" with mode %r' % self._mode) + msg = _('argument "-" with mode %r') % self._mode raise ValueError(msg) # all other arguments are used as file names - if self._bufsize: + try: return open(string, self._mode, self._bufsize) - else: - return open(string, self._mode) + except IOError as e: + message = _("can't open '%s': %s") + raise ArgumentTypeError(message % (string, e)) def __repr__(self): - args = [self._mode, self._bufsize] - args_str = ', '.join([repr(arg) for arg in args if arg is not None]) + args = self._mode, self._bufsize + args_str = ', '.join(repr(arg) for arg in args if arg != -1) return '%s(%s)' % (type(self).__name__, args_str) # =========================== @@ -1275,13 +1277,20 @@ # create the action object, and add it to the parser action_class = self._pop_action_class(kwargs) if not _callable(action_class): - raise ValueError('unknown action "%s"' % action_class) + raise ValueError('unknown action "%s"' % (action_class,)) action = action_class(**kwargs) # raise an error if the action type is not callable type_func = self._registry_get('type', action.type, action.type) if not _callable(type_func): - raise ValueError('%r is not callable' % type_func) + raise ValueError('%r is not callable' % (type_func,)) + + # raise an error if the metavar does not match the type + if hasattr(self, "_get_formatter"): + try: + self._get_formatter()._format_args(action, None) + except TypeError: + raise ValueError("length of metavar tuple does not match nargs") return self._add_action(action) @@ -1481,6 +1490,7 @@ self._defaults = container._defaults self._has_negative_number_optionals = \ container._has_negative_number_optionals + self._mutually_exclusive_groups = container._mutually_exclusive_groups def _add_action(self, action): action = super(_ArgumentGroup, self)._add_action(action) diff --git a/lib-python/2.7/ast.py b/lib-python/2.7/ast.py --- a/lib-python/2.7/ast.py +++ b/lib-python/2.7/ast.py @@ -29,12 +29,12 @@ from _ast import __version__ -def parse(expr, filename='', mode='exec'): +def parse(source, filename='', mode='exec'): """ - Parse an expression into an AST node. - Equivalent to compile(expr, filename, mode, PyCF_ONLY_AST). + Parse the source into an AST node. 
+ Equivalent to compile(source, filename, mode, PyCF_ONLY_AST). """ - return compile(expr, filename, mode, PyCF_ONLY_AST) + return compile(source, filename, mode, PyCF_ONLY_AST) def literal_eval(node_or_string): @@ -152,8 +152,6 @@ Increment the line number of each node in the tree starting at *node* by *n*. This is useful to "move code" to a different location in a file. """ - if 'lineno' in node._attributes: - node.lineno = getattr(node, 'lineno', 0) + n for child in walk(node): if 'lineno' in child._attributes: child.lineno = getattr(child, 'lineno', 0) + n @@ -204,9 +202,9 @@ def walk(node): """ - Recursively yield all child nodes of *node*, in no specified order. This is - useful if you only want to modify nodes in place and don't care about the - context. + Recursively yield all descendant nodes in the tree starting at *node* + (including *node* itself), in no specified order. This is useful if you + only want to modify nodes in place and don't care about the context. """ from collections import deque todo = deque([node]) diff --git a/lib-python/2.7/asyncore.py b/lib-python/2.7/asyncore.py --- a/lib-python/2.7/asyncore.py +++ b/lib-python/2.7/asyncore.py @@ -54,7 +54,11 @@ import os from errno import EALREADY, EINPROGRESS, EWOULDBLOCK, ECONNRESET, EINVAL, \ - ENOTCONN, ESHUTDOWN, EINTR, EISCONN, EBADF, ECONNABORTED, errorcode + ENOTCONN, ESHUTDOWN, EINTR, EISCONN, EBADF, ECONNABORTED, EPIPE, EAGAIN, \ + errorcode + +_DISCONNECTED = frozenset((ECONNRESET, ENOTCONN, ESHUTDOWN, ECONNABORTED, EPIPE, + EBADF)) try: socket_map @@ -109,7 +113,7 @@ if flags & (select.POLLHUP | select.POLLERR | select.POLLNVAL): obj.handle_close() except socket.error, e: - if e.args[0] not in (EBADF, ECONNRESET, ENOTCONN, ESHUTDOWN, ECONNABORTED): + if e.args[0] not in _DISCONNECTED: obj.handle_error() else: obj.handle_close() @@ -353,7 +357,7 @@ except TypeError: return None except socket.error as why: - if why.args[0] in (EWOULDBLOCK, ECONNABORTED): + if why.args[0] in (EWOULDBLOCK, ECONNABORTED, EAGAIN): return None else: raise @@ -367,7 +371,7 @@ except socket.error, why: if why.args[0] == EWOULDBLOCK: return 0 - elif why.args[0] in (ECONNRESET, ENOTCONN, ESHUTDOWN, ECONNABORTED): + elif why.args[0] in _DISCONNECTED: self.handle_close() return 0 else: @@ -385,7 +389,7 @@ return data except socket.error, why: # winsock sometimes throws ENOTCONN - if why.args[0] in [ECONNRESET, ENOTCONN, ESHUTDOWN, ECONNABORTED]: + if why.args[0] in _DISCONNECTED: self.handle_close() return '' else: diff --git a/lib-python/2.7/bdb.py b/lib-python/2.7/bdb.py --- a/lib-python/2.7/bdb.py +++ b/lib-python/2.7/bdb.py @@ -250,6 +250,12 @@ list.append(lineno) bp = Breakpoint(filename, lineno, temporary, cond, funcname) + def _prune_breaks(self, filename, lineno): + if (filename, lineno) not in Breakpoint.bplist: + self.breaks[filename].remove(lineno) + if not self.breaks[filename]: + del self.breaks[filename] + def clear_break(self, filename, lineno): filename = self.canonic(filename) if not filename in self.breaks: @@ -261,10 +267,7 @@ # pair, then remove the breaks entry for bp in Breakpoint.bplist[filename, lineno][:]: bp.deleteMe() - if (filename, lineno) not in Breakpoint.bplist: - self.breaks[filename].remove(lineno) - if not self.breaks[filename]: - del self.breaks[filename] + self._prune_breaks(filename, lineno) def clear_bpbynumber(self, arg): try: @@ -277,7 +280,8 @@ return 'Breakpoint number (%d) out of range' % number if not bp: return 'Breakpoint (%d) already deleted' % number - self.clear_break(bp.file, bp.line) + 
bp.deleteMe() + self._prune_breaks(bp.file, bp.line) def clear_all_file_breaks(self, filename): filename = self.canonic(filename) diff --git a/lib-python/2.7/collections.py b/lib-python/2.7/collections.py --- a/lib-python/2.7/collections.py +++ b/lib-python/2.7/collections.py @@ -6,59 +6,38 @@ __all__ += _abcoll.__all__ from _collections import deque, defaultdict -from operator import itemgetter as _itemgetter, eq as _eq +from operator import itemgetter as _itemgetter from keyword import iskeyword as _iskeyword import sys as _sys import heapq as _heapq -from itertools import repeat as _repeat, chain as _chain, starmap as _starmap, \ - ifilter as _ifilter, imap as _imap +from itertools import repeat as _repeat, chain as _chain, starmap as _starmap + try: - from thread import get_ident + from thread import get_ident as _get_ident except ImportError: - from dummy_thread import get_ident - -def _recursive_repr(user_function): - 'Decorator to make a repr function return "..." for a recursive call' - repr_running = set() - - def wrapper(self): - key = id(self), get_ident() - if key in repr_running: - return '...' - repr_running.add(key) - try: - result = user_function(self) - finally: - repr_running.discard(key) - return result - - # Can't use functools.wraps() here because of bootstrap issues - wrapper.__module__ = getattr(user_function, '__module__') - wrapper.__doc__ = getattr(user_function, '__doc__') - wrapper.__name__ = getattr(user_function, '__name__') - return wrapper + from dummy_thread import get_ident as _get_ident ################################################################################ ### OrderedDict ################################################################################ -class OrderedDict(dict, MutableMapping): +class OrderedDict(dict): 'Dictionary that remembers insertion order' # An inherited dict maps keys to values. # The inherited dict provides __getitem__, __len__, __contains__, and get. # The remaining methods are order-aware. - # Big-O running times for all methods are the same as for regular dictionaries. + # Big-O running times for all methods are the same as regular dictionaries. - # The internal self.__map dictionary maps keys to links in a doubly linked list. + # The internal self.__map dict maps keys to links in a doubly linked list. # The circular doubly linked list starts and ends with a sentinel element. # The sentinel element never gets deleted (this simplifies the algorithm). # Each link is stored as a list of length three: [PREV, NEXT, KEY]. def __init__(self, *args, **kwds): - '''Initialize an ordered dictionary. Signature is the same as for - regular dictionaries, but keyword arguments are not recommended - because their insertion order is arbitrary. + '''Initialize an ordered dictionary. The signature is the same as + regular dictionaries, but keyword arguments are not recommended because + their insertion order is arbitrary. ''' if len(args) > 1: @@ -66,17 +45,15 @@ try: self.__root except AttributeError: - self.__root = root = [None, None, None] # sentinel node - PREV = 0 - NEXT = 1 - root[PREV] = root[NEXT] = root + self.__root = root = [] # sentinel node + root[:] = [root, root, None] self.__map = {} - self.update(*args, **kwds) + self.__update(*args, **kwds) def __setitem__(self, key, value, PREV=0, NEXT=1, dict_setitem=dict.__setitem__): 'od.__setitem__(i, y) <==> od[i]=y' - # Setting a new item creates a new link which goes at the end of the linked - # list, and the inherited dictionary is updated with the new key/value pair. 
+ # Setting a new item creates a new link at the end of the linked list, + # and the inherited dictionary is updated with the new key/value pair. if key not in self: root = self.__root last = root[PREV] @@ -85,65 +62,160 @@ def __delitem__(self, key, PREV=0, NEXT=1, dict_delitem=dict.__delitem__): 'od.__delitem__(y) <==> del od[y]' - # Deleting an existing item uses self.__map to find the link which is - # then removed by updating the links in the predecessor and successor nodes. + # Deleting an existing item uses self.__map to find the link which gets + # removed by updating the links in the predecessor and successor nodes. dict_delitem(self, key) - link = self.__map.pop(key) - link_prev = link[PREV] - link_next = link[NEXT] + link_prev, link_next, key = self.__map.pop(key) link_prev[NEXT] = link_next link_next[PREV] = link_prev - def __iter__(self, NEXT=1, KEY=2): + def __iter__(self): 'od.__iter__() <==> iter(od)' # Traverse the linked list in order. + NEXT, KEY = 1, 2 root = self.__root curr = root[NEXT] while curr is not root: yield curr[KEY] curr = curr[NEXT] - def __reversed__(self, PREV=0, KEY=2): + def __reversed__(self): 'od.__reversed__() <==> reversed(od)' # Traverse the linked list in reverse order. + PREV, KEY = 0, 2 root = self.__root curr = root[PREV] while curr is not root: yield curr[KEY] curr = curr[PREV] + def clear(self): + 'od.clear() -> None. Remove all items from od.' + for node in self.__map.itervalues(): + del node[:] + root = self.__root + root[:] = [root, root, None] + self.__map.clear() + dict.clear(self) + + # -- the following methods do not depend on the internal structure -- + + def keys(self): + 'od.keys() -> list of keys in od' + return list(self) + + def values(self): + 'od.values() -> list of values in od' + return [self[key] for key in self] + + def items(self): + 'od.items() -> list of (key, value) pairs in od' + return [(key, self[key]) for key in self] + + def iterkeys(self): + 'od.iterkeys() -> an iterator over the keys in od' + return iter(self) + + def itervalues(self): + 'od.itervalues -> an iterator over the values in od' + for k in self: + yield self[k] + + def iteritems(self): + 'od.iteritems -> an iterator over the (key, value) pairs in od' + for k in self: + yield (k, self[k]) + + update = MutableMapping.update + + __update = update # let subclasses override update without breaking __init__ + + __marker = object() + + def pop(self, key, default=__marker): + '''od.pop(k[,d]) -> v, remove specified key and return the corresponding + value. If key is not found, d is returned if given, otherwise KeyError + is raised. + + ''' + if key in self: + result = self[key] + del self[key] + return result + if default is self.__marker: + raise KeyError(key) + return default + + def setdefault(self, key, default=None): + 'od.setdefault(k[,d]) -> od.get(k,d), also set od[k]=d if k not in od' + if key in self: + return self[key] + self[key] = default + return default + + def popitem(self, last=True): + '''od.popitem() -> (k, v), return and remove a (key, value) pair. + Pairs are returned in LIFO order if last is true or FIFO order if false. + + ''' + if not self: + raise KeyError('dictionary is empty') + key = next(reversed(self) if last else iter(self)) + value = self.pop(key) + return key, value + + def __repr__(self, _repr_running={}): + 'od.__repr__() <==> repr(od)' + call_key = id(self), _get_ident() + if call_key in _repr_running: + return '...' 
+ _repr_running[call_key] = 1 + try: + if not self: + return '%s()' % (self.__class__.__name__,) + return '%s(%r)' % (self.__class__.__name__, self.items()) + finally: + del _repr_running[call_key] + def __reduce__(self): 'Return state information for pickling' items = [[k, self[k]] for k in self] - tmp = self.__map, self.__root - del self.__map, self.__root inst_dict = vars(self).copy() - self.__map, self.__root = tmp + for k in vars(OrderedDict()): + inst_dict.pop(k, None) if inst_dict: return (self.__class__, (items,), inst_dict) return self.__class__, (items,) - def clear(self): - 'od.clear() -> None. Remove all items from od.' - try: - for node in self.__map.itervalues(): - del node[:] - self.__root[:] = [self.__root, self.__root, None] - self.__map.clear() - except AttributeError: - pass - dict.clear(self) + def copy(self): + 'od.copy() -> a shallow copy of od' + return self.__class__(self) - setdefault = MutableMapping.setdefault - update = MutableMapping.update - pop = MutableMapping.pop - keys = MutableMapping.keys - values = MutableMapping.values - items = MutableMapping.items - iterkeys = MutableMapping.iterkeys - itervalues = MutableMapping.itervalues - iteritems = MutableMapping.iteritems - __ne__ = MutableMapping.__ne__ + @classmethod + def fromkeys(cls, iterable, value=None): + '''OD.fromkeys(S[, v]) -> New ordered dictionary with keys from S. + If not specified, the value defaults to None. + + ''' + self = cls() + for key in iterable: + self[key] = value + return self + + def __eq__(self, other): + '''od.__eq__(y) <==> od==y. Comparison to another OD is order-sensitive + while comparison to a regular mapping is order-insensitive. + + ''' + if isinstance(other, OrderedDict): + return len(self)==len(other) and self.items() == other.items() + return dict.__eq__(self, other) + + def __ne__(self, other): + 'od.__ne__(y) <==> od!=y' + return not self == other + + # -- the following methods support python 3.x style dictionary views -- def viewkeys(self): "od.viewkeys() -> a set-like object providing a view on od's keys" @@ -157,49 +229,6 @@ "od.viewitems() -> a set-like object providing a view on od's items" return ItemsView(self) - def popitem(self, last=True): - '''od.popitem() -> (k, v), return and remove a (key, value) pair. - Pairs are returned in LIFO order if last is true or FIFO order if false. - - ''' - if not self: - raise KeyError('dictionary is empty') - key = next(reversed(self) if last else iter(self)) - value = self.pop(key) - return key, value - - @_recursive_repr - def __repr__(self): - 'od.__repr__() <==> repr(od)' - if not self: - return '%s()' % (self.__class__.__name__,) - return '%s(%r)' % (self.__class__.__name__, self.items()) - - def copy(self): - 'od.copy() -> a shallow copy of od' - return self.__class__(self) - - @classmethod - def fromkeys(cls, iterable, value=None): - '''OD.fromkeys(S[, v]) -> New ordered dictionary with keys from S - and values equal to v (which defaults to None). - - ''' - d = cls() - for key in iterable: - d[key] = value - return d - - def __eq__(self, other): - '''od.__eq__(y) <==> od==y. Comparison to another OD is order-sensitive - while comparison to a regular mapping is order-insensitive. - - ''' - if isinstance(other, OrderedDict): - return len(self)==len(other) and \ - all(_imap(_eq, self.iteritems(), other.iteritems())) - return dict.__eq__(self, other) - ################################################################################ ### namedtuple @@ -328,16 +357,16 @@ or multiset. 
Elements are stored as dictionary keys and their counts are stored as dictionary values. - >>> c = Counter('abracadabra') # count elements from a string + >>> c = Counter('abcdeabcdabcaba') # count elements from a string >>> c.most_common(3) # three most common elements - [('a', 5), ('r', 2), ('b', 2)] + [('a', 5), ('b', 4), ('c', 3)] >>> sorted(c) # list all unique elements - ['a', 'b', 'c', 'd', 'r'] + ['a', 'b', 'c', 'd', 'e'] >>> ''.join(sorted(c.elements())) # list elements with repetitions - 'aaaaabbcdrr' + 'aaaaabbbbcccdde' >>> sum(c.values()) # total of all counts - 11 + 15 >>> c['a'] # count of letter 'a' 5 @@ -345,8 +374,8 @@ ... c[elem] += 1 # by adding 1 to each element's count >>> c['a'] # now there are seven 'a' 7 - >>> del c['r'] # remove all 'r' - >>> c['r'] # now there are zero 'r' + >>> del c['b'] # remove all 'b' + >>> c['b'] # now there are zero 'b' 0 >>> d = Counter('simsalabim') # make another counter @@ -385,6 +414,7 @@ >>> c = Counter(a=4, b=2) # a new counter from keyword args ''' + super(Counter, self).__init__() self.update(iterable, **kwds) def __missing__(self, key): @@ -396,8 +426,8 @@ '''List the n most common elements and their counts from the most common to the least. If n is None, then list all element counts. - >>> Counter('abracadabra').most_common(3) - [('a', 5), ('r', 2), ('b', 2)] + >>> Counter('abcdeabcdabcaba').most_common(3) + [('a', 5), ('b', 4), ('c', 3)] ''' # Emulate Bag.sortedByCount from Smalltalk @@ -463,7 +493,7 @@ for elem, count in iterable.iteritems(): self[elem] = self_get(elem, 0) + count else: - dict.update(self, iterable) # fast path when counter is empty + super(Counter, self).update(iterable) # fast path when counter is empty else: self_get = self.get for elem in iterable: @@ -499,13 +529,16 @@ self.subtract(kwds) def copy(self): - 'Like dict.copy() but returns a Counter instance instead of a dict.' - return Counter(self) + 'Return a shallow copy.' + return self.__class__(self) + + def __reduce__(self): + return self.__class__, (dict(self),) def __delitem__(self, elem): 'Like dict.__delitem__() but does not raise KeyError for missing values.' 
if elem in self: - dict.__delitem__(self, elem) + super(Counter, self).__delitem__(elem) def __repr__(self): if not self: @@ -532,10 +565,13 @@ if not isinstance(other, Counter): return NotImplemented result = Counter() - for elem in set(self) | set(other): - newcount = self[elem] + other[elem] + for elem, count in self.items(): + newcount = count + other[elem] if newcount > 0: result[elem] = newcount + for elem, count in other.items(): + if elem not in self and count > 0: + result[elem] = count return result def __sub__(self, other): @@ -548,10 +584,13 @@ if not isinstance(other, Counter): return NotImplemented result = Counter() - for elem in set(self) | set(other): - newcount = self[elem] - other[elem] + for elem, count in self.items(): + newcount = count - other[elem] if newcount > 0: result[elem] = newcount + for elem, count in other.items(): + if elem not in self and count < 0: + result[elem] = 0 - count return result def __or__(self, other): @@ -564,11 +603,14 @@ if not isinstance(other, Counter): return NotImplemented result = Counter() - for elem in set(self) | set(other): - p, q = self[elem], other[elem] - newcount = q if p < q else p + for elem, count in self.items(): + other_count = other[elem] + newcount = other_count if count < other_count else count if newcount > 0: result[elem] = newcount + for elem, count in other.items(): + if elem not in self and count > 0: + result[elem] = count return result def __and__(self, other): @@ -581,11 +623,9 @@ if not isinstance(other, Counter): return NotImplemented result = Counter() - if len(self) < len(other): - self, other = other, self - for elem in _ifilter(self.__contains__, other): - p, q = self[elem], other[elem] - newcount = p if p < q else q + for elem, count in self.items(): + other_count = other[elem] + newcount = count if count < other_count else other_count if newcount > 0: result[elem] = newcount return result diff --git a/lib-python/2.7/compileall.py b/lib-python/2.7/compileall.py --- a/lib-python/2.7/compileall.py +++ b/lib-python/2.7/compileall.py @@ -9,7 +9,6 @@ packages -- for now, you'll have to deal with packages separately.) See module py_compile for details of the actual byte-compilation. - """ import os import sys @@ -31,7 +30,6 @@ directory name that will show up in error messages) force: if 1, force compilation, even if timestamps are up-to-date quiet: if 1, be quiet during compilation - """ if not quiet: print 'Listing', dir, '...' @@ -61,15 +59,16 @@ return success def compile_file(fullname, ddir=None, force=0, rx=None, quiet=0): - """Byte-compile file. - file: the file to byte-compile + """Byte-compile one file. + + Arguments (only fullname is required): + + fullname: the file to byte-compile ddir: if given, purported directory name (this is the directory name that will show up in error messages) force: if 1, force compilation, even if timestamps are up-to-date quiet: if 1, be quiet during compilation - """ - success = 1 name = os.path.basename(fullname) if ddir is not None: @@ -120,7 +119,6 @@ maxlevels: max recursion level (default 0) force: as for compile_dir() (default 0) quiet: as for compile_dir() (default 0) - """ success = 1 for dir in sys.path: diff --git a/lib-python/2.7/csv.py b/lib-python/2.7/csv.py --- a/lib-python/2.7/csv.py +++ b/lib-python/2.7/csv.py @@ -281,7 +281,7 @@ an all or nothing approach, so we allow for small variations in this number. 1) build a table of the frequency of each character on every line. 
- 2) build a table of freqencies of this frequency (meta-frequency?), + 2) build a table of frequencies of this frequency (meta-frequency?), e.g. 'x occurred 5 times in 10 rows, 6 times in 1000 rows, 7 times in 2 rows' 3) use the mode of the meta-frequency to determine the /expected/ diff --git a/lib-python/2.7/ctypes/test/test_arrays.py b/lib-python/2.7/ctypes/test/test_arrays.py --- a/lib-python/2.7/ctypes/test/test_arrays.py +++ b/lib-python/2.7/ctypes/test/test_arrays.py @@ -37,7 +37,7 @@ values = [ia[i] for i in range(len(init))] self.assertEqual(values, [0] * len(init)) - # Too many in itializers should be caught + # Too many initializers should be caught self.assertRaises(IndexError, int_array, *range(alen*2)) CharArray = ARRAY(c_char, 3) diff --git a/lib-python/2.7/ctypes/test/test_as_parameter.py b/lib-python/2.7/ctypes/test/test_as_parameter.py --- a/lib-python/2.7/ctypes/test/test_as_parameter.py +++ b/lib-python/2.7/ctypes/test/test_as_parameter.py @@ -187,6 +187,18 @@ self.assertEqual((s8i.a, s8i.b, s8i.c, s8i.d, s8i.e, s8i.f, s8i.g, s8i.h), (9*2, 8*3, 7*4, 6*5, 5*6, 4*7, 3*8, 2*9)) + def test_recursive_as_param(self): + from ctypes import c_int + + class A(object): + pass + + a = A() + a._as_parameter_ = a + with self.assertRaises(RuntimeError): + c_int.from_param(a) + + #~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ class AsParamWrapper(object): diff --git a/lib-python/2.7/ctypes/test/test_callbacks.py b/lib-python/2.7/ctypes/test/test_callbacks.py --- a/lib-python/2.7/ctypes/test/test_callbacks.py +++ b/lib-python/2.7/ctypes/test/test_callbacks.py @@ -206,6 +206,42 @@ windll.user32.EnumWindows(EnumWindowsCallbackFunc, 0) + def test_callback_register_int(self): + # Issue #8275: buggy handling of callback args under Win64 + # NOTE: should be run on release builds as well + dll = CDLL(_ctypes_test.__file__) + CALLBACK = CFUNCTYPE(c_int, c_int, c_int, c_int, c_int, c_int) + # All this function does is call the callback with its args squared + func = dll._testfunc_cbk_reg_int + func.argtypes = (c_int, c_int, c_int, c_int, c_int, CALLBACK) + func.restype = c_int + + def callback(a, b, c, d, e): + return a + b + c + d + e + + result = func(2, 3, 4, 5, 6, CALLBACK(callback)) + self.assertEqual(result, callback(2*2, 3*3, 4*4, 5*5, 6*6)) + + def test_callback_register_double(self): + # Issue #8275: buggy handling of callback args under Win64 + # NOTE: should be run on release builds as well + dll = CDLL(_ctypes_test.__file__) + CALLBACK = CFUNCTYPE(c_double, c_double, c_double, c_double, + c_double, c_double) + # All this function does is call the callback with its args squared + func = dll._testfunc_cbk_reg_double + func.argtypes = (c_double, c_double, c_double, + c_double, c_double, CALLBACK) + func.restype = c_double + + def callback(a, b, c, d, e): + return a + b + c + d + e + + result = func(1.1, 2.2, 3.3, 4.4, 5.5, CALLBACK(callback)) + self.assertEqual(result, + callback(1.1*1.1, 2.2*2.2, 3.3*3.3, 4.4*4.4, 5.5*5.5)) + + ################################################################ if __name__ == '__main__': diff --git a/lib-python/2.7/ctypes/test/test_functions.py b/lib-python/2.7/ctypes/test/test_functions.py --- a/lib-python/2.7/ctypes/test/test_functions.py +++ b/lib-python/2.7/ctypes/test/test_functions.py @@ -116,7 +116,7 @@ self.assertEqual(result, 21) self.assertEqual(type(result), int) - # You cannot assing character format codes as restype any longer + # You cannot assign character format codes as restype any longer self.assertRaises(TypeError, setattr, f, 
"restype", "i") def test_floatresult(self): diff --git a/lib-python/2.7/ctypes/test/test_init.py b/lib-python/2.7/ctypes/test/test_init.py --- a/lib-python/2.7/ctypes/test/test_init.py +++ b/lib-python/2.7/ctypes/test/test_init.py @@ -27,7 +27,7 @@ self.assertEqual((y.x.a, y.x.b), (0, 0)) self.assertEqual(y.x.new_was_called, False) - # But explicitely creating an X structure calls __new__ and __init__, of course. + # But explicitly creating an X structure calls __new__ and __init__, of course. x = X() self.assertEqual((x.a, x.b), (9, 12)) self.assertEqual(x.new_was_called, True) diff --git a/lib-python/2.7/ctypes/test/test_numbers.py b/lib-python/2.7/ctypes/test/test_numbers.py --- a/lib-python/2.7/ctypes/test/test_numbers.py +++ b/lib-python/2.7/ctypes/test/test_numbers.py @@ -157,7 +157,7 @@ def test_int_from_address(self): from array import array for t in signed_types + unsigned_types: - # the array module doesn't suppport all format codes + # the array module doesn't support all format codes # (no 'q' or 'Q') try: array(t._type_) diff --git a/lib-python/2.7/ctypes/test/test_win32.py b/lib-python/2.7/ctypes/test/test_win32.py --- a/lib-python/2.7/ctypes/test/test_win32.py +++ b/lib-python/2.7/ctypes/test/test_win32.py @@ -17,7 +17,7 @@ # ValueError: Procedure probably called with not enough arguments (4 bytes missing) self.assertRaises(ValueError, IsWindow) - # This one should succeeed... + # This one should succeed... self.assertEqual(0, IsWindow(0)) # ValueError: Procedure probably called with too many arguments (8 bytes in excess) diff --git a/lib-python/2.7/curses/wrapper.py b/lib-python/2.7/curses/wrapper.py --- a/lib-python/2.7/curses/wrapper.py +++ b/lib-python/2.7/curses/wrapper.py @@ -43,7 +43,8 @@ return func(stdscr, *args, **kwds) finally: # Set everything back to normal - stdscr.keypad(0) - curses.echo() - curses.nocbreak() - curses.endwin() + if 'stdscr' in locals(): + stdscr.keypad(0) + curses.echo() + curses.nocbreak() + curses.endwin() diff --git a/lib-python/2.7/decimal.py b/lib-python/2.7/decimal.py --- a/lib-python/2.7/decimal.py +++ b/lib-python/2.7/decimal.py @@ -1068,14 +1068,16 @@ if ans: return ans - if not self: - # -Decimal('0') is Decimal('0'), not Decimal('-0') + if context is None: + context = getcontext() + + if not self and context.rounding != ROUND_FLOOR: + # -Decimal('0') is Decimal('0'), not Decimal('-0'), except + # in ROUND_FLOOR rounding mode. ans = self.copy_abs() else: ans = self.copy_negate() - if context is None: - context = getcontext() return ans._fix(context) def __pos__(self, context=None): @@ -1088,14 +1090,15 @@ if ans: return ans - if not self: - # + (-0) = 0 + if context is None: + context = getcontext() + + if not self and context.rounding != ROUND_FLOOR: + # + (-0) = 0, except in ROUND_FLOOR rounding mode. 
ans = self.copy_abs() else: ans = Decimal(self) - if context is None: - context = getcontext() return ans._fix(context) def __abs__(self, round=True, context=None): @@ -1680,7 +1683,7 @@ self = _dec_from_triple(self._sign, '1', exp_min-1) digits = 0 rounding_method = self._pick_rounding_function[context.rounding] - changed = getattr(self, rounding_method)(digits) + changed = rounding_method(self, digits) coeff = self._int[:digits] or '0' if changed > 0: coeff = str(int(coeff)+1) @@ -1720,8 +1723,6 @@ # here self was representable to begin with; return unchanged return Decimal(self) - _pick_rounding_function = {} - # for each of the rounding functions below: # self is a finite, nonzero Decimal # prec is an integer satisfying 0 <= prec < len(self._int) @@ -1788,6 +1789,17 @@ else: return -self._round_down(prec) + _pick_rounding_function = dict( + ROUND_DOWN = _round_down, + ROUND_UP = _round_up, + ROUND_HALF_UP = _round_half_up, + ROUND_HALF_DOWN = _round_half_down, + ROUND_HALF_EVEN = _round_half_even, + ROUND_CEILING = _round_ceiling, + ROUND_FLOOR = _round_floor, + ROUND_05UP = _round_05up, + ) + def fma(self, other, third, context=None): """Fused multiply-add. @@ -2492,8 +2504,8 @@ if digits < 0: self = _dec_from_triple(self._sign, '1', exp-1) digits = 0 - this_function = getattr(self, self._pick_rounding_function[rounding]) - changed = this_function(digits) + this_function = self._pick_rounding_function[rounding] + changed = this_function(self, digits) coeff = self._int[:digits] or '0' if changed == 1: coeff = str(int(coeff)+1) @@ -3705,18 +3717,6 @@ ##### Context class ####################################################### - -# get rounding method function: -rounding_functions = [name for name in Decimal.__dict__.keys() - if name.startswith('_round_')] -for name in rounding_functions: - # name is like _round_half_even, goes to the global ROUND_HALF_EVEN value. - globalname = name[1:].upper() - val = globals()[globalname] - Decimal._pick_rounding_function[val] = name - -del name, val, globalname, rounding_functions - class _ContextManager(object): """Context manager class to support localcontext(). @@ -5990,7 +5990,7 @@ def _format_align(sign, body, spec): """Given an unpadded, non-aligned numeric string 'body' and sign - string 'sign', add padding and aligment conforming to the given + string 'sign', add padding and alignment conforming to the given format specifier dictionary 'spec' (as produced by parse_format_specifier). 
diff --git a/lib-python/2.7/difflib.py b/lib-python/2.7/difflib.py --- a/lib-python/2.7/difflib.py +++ b/lib-python/2.7/difflib.py @@ -1140,6 +1140,21 @@ return ch in ws +######################################################################## +### Unified Diff +######################################################################## + +def _format_range_unified(start, stop): + 'Convert range to the "ed" format' + # Per the diff spec at http://www.unix.org/single_unix_specification/ + beginning = start + 1 # lines start numbering with one + length = stop - start + if length == 1: + return '{}'.format(beginning) + if not length: + beginning -= 1 # empty ranges begin at line just before the range + return '{},{}'.format(beginning, length) + def unified_diff(a, b, fromfile='', tofile='', fromfiledate='', tofiledate='', n=3, lineterm='\n'): r""" @@ -1184,25 +1199,45 @@ started = False for group in SequenceMatcher(None,a,b).get_grouped_opcodes(n): if not started: - fromdate = '\t%s' % fromfiledate if fromfiledate else '' - todate = '\t%s' % tofiledate if tofiledate else '' - yield '--- %s%s%s' % (fromfile, fromdate, lineterm) - yield '+++ %s%s%s' % (tofile, todate, lineterm) started = True - i1, i2, j1, j2 = group[0][1], group[-1][2], group[0][3], group[-1][4] - yield "@@ -%d,%d +%d,%d @@%s" % (i1+1, i2-i1, j1+1, j2-j1, lineterm) + fromdate = '\t{}'.format(fromfiledate) if fromfiledate else '' + todate = '\t{}'.format(tofiledate) if tofiledate else '' + yield '--- {}{}{}'.format(fromfile, fromdate, lineterm) + yield '+++ {}{}{}'.format(tofile, todate, lineterm) + + first, last = group[0], group[-1] + file1_range = _format_range_unified(first[1], last[2]) + file2_range = _format_range_unified(first[3], last[4]) + yield '@@ -{} +{} @@{}'.format(file1_range, file2_range, lineterm) + for tag, i1, i2, j1, j2 in group: if tag == 'equal': for line in a[i1:i2]: yield ' ' + line continue - if tag == 'replace' or tag == 'delete': + if tag in ('replace', 'delete'): for line in a[i1:i2]: yield '-' + line - if tag == 'replace' or tag == 'insert': + if tag in ('replace', 'insert'): for line in b[j1:j2]: yield '+' + line + +######################################################################## +### Context Diff +######################################################################## + +def _format_range_context(start, stop): + 'Convert range to the "ed" format' + # Per the diff spec at http://www.unix.org/single_unix_specification/ + beginning = start + 1 # lines start numbering with one + length = stop - start + if not length: + beginning -= 1 # empty ranges begin at line just before the range + if length <= 1: + return '{}'.format(beginning) + return '{},{}'.format(beginning, beginning + length - 1) + # See http://www.unix.org/single_unix_specification/ def context_diff(a, b, fromfile='', tofile='', fromfiledate='', tofiledate='', n=3, lineterm='\n'): @@ -1247,38 +1282,36 @@ four """ + prefix = dict(insert='+ ', delete='- ', replace='! ', equal=' ') started = False - prefixmap = {'insert':'+ ', 'delete':'- ', 'replace':'! 
', 'equal':' '} for group in SequenceMatcher(None,a,b).get_grouped_opcodes(n): if not started: - fromdate = '\t%s' % fromfiledate if fromfiledate else '' - todate = '\t%s' % tofiledate if tofiledate else '' - yield '*** %s%s%s' % (fromfile, fromdate, lineterm) - yield '--- %s%s%s' % (tofile, todate, lineterm) started = True + fromdate = '\t{}'.format(fromfiledate) if fromfiledate else '' + todate = '\t{}'.format(tofiledate) if tofiledate else '' + yield '*** {}{}{}'.format(fromfile, fromdate, lineterm) + yield '--- {}{}{}'.format(tofile, todate, lineterm) - yield '***************%s' % (lineterm,) - if group[-1][2] - group[0][1] >= 2: - yield '*** %d,%d ****%s' % (group[0][1]+1, group[-1][2], lineterm) - else: - yield '*** %d ****%s' % (group[-1][2], lineterm) - visiblechanges = [e for e in group if e[0] in ('replace', 'delete')] - if visiblechanges: + first, last = group[0], group[-1] + yield '***************' + lineterm + + file1_range = _format_range_context(first[1], last[2]) + yield '*** {} ****{}'.format(file1_range, lineterm) + + if any(tag in ('replace', 'delete') for tag, _, _, _, _ in group): for tag, i1, i2, _, _ in group: if tag != 'insert': for line in a[i1:i2]: - yield prefixmap[tag] + line + yield prefix[tag] + line - if group[-1][4] - group[0][3] >= 2: - yield '--- %d,%d ----%s' % (group[0][3]+1, group[-1][4], lineterm) - else: - yield '--- %d ----%s' % (group[-1][4], lineterm) - visiblechanges = [e for e in group if e[0] in ('replace', 'insert')] - if visiblechanges: + file2_range = _format_range_context(first[3], last[4]) + yield '--- {} ----{}'.format(file2_range, lineterm) + + if any(tag in ('replace', 'insert') for tag, _, _, _, _ in group): for tag, _, _, j1, j2 in group: if tag != 'delete': for line in b[j1:j2]: - yield prefixmap[tag] + line + yield prefix[tag] + line def ndiff(a, b, linejunk=None, charjunk=IS_CHARACTER_JUNK): r""" @@ -1714,7 +1747,7 @@ line = line.replace(' ','\0') # expand tabs into spaces line = line.expandtabs(self._tabsize) - # relace spaces from expanded tabs back into tab characters + # replace spaces from expanded tabs back into tab characters # (we'll replace them with markup after we do differencing) line = line.replace(' ','\t') return line.replace('\0',' ').rstrip('\n') diff --git a/lib-python/2.7/distutils/__init__.py b/lib-python/2.7/distutils/__init__.py --- a/lib-python/2.7/distutils/__init__.py +++ b/lib-python/2.7/distutils/__init__.py @@ -15,5 +15,5 @@ # Updated automatically by the Python release process. # #--start constants-- -__version__ = "2.7.1" +__version__ = "2.7.2" #--end constants-- diff --git a/lib-python/2.7/distutils/archive_util.py b/lib-python/2.7/distutils/archive_util.py --- a/lib-python/2.7/distutils/archive_util.py +++ b/lib-python/2.7/distutils/archive_util.py @@ -121,7 +121,7 @@ def make_zipfile(base_name, base_dir, verbose=0, dry_run=0): """Create a zip file from all the files under 'base_dir'. - The output zip file will be named 'base_dir' + ".zip". Uses either the + The output zip file will be named 'base_name' + ".zip". Uses either the "zipfile" Python module (if available) or the InfoZIP "zip" utility (if installed and found on the default search path). If neither tool is available, raises DistutilsExecError. 
Returns the name of the output zip diff --git a/lib-python/2.7/distutils/cmd.py b/lib-python/2.7/distutils/cmd.py --- a/lib-python/2.7/distutils/cmd.py +++ b/lib-python/2.7/distutils/cmd.py @@ -377,7 +377,7 @@ dry_run=self.dry_run) def move_file (self, src, dst, level=1): - """Move a file respectin dry-run flag.""" + """Move a file respecting dry-run flag.""" return file_util.move_file(src, dst, dry_run = self.dry_run) def spawn (self, cmd, search_path=1, level=1): diff --git a/lib-python/2.7/distutils/command/build_ext.py b/lib-python/2.7/distutils/command/build_ext.py --- a/lib-python/2.7/distutils/command/build_ext.py +++ b/lib-python/2.7/distutils/command/build_ext.py @@ -207,7 +207,7 @@ elif MSVC_VERSION == 8: self.library_dirs.append(os.path.join(sys.exec_prefix, - 'PC', 'VS8.0', 'win32release')) + 'PC', 'VS8.0')) elif MSVC_VERSION == 7: self.library_dirs.append(os.path.join(sys.exec_prefix, 'PC', 'VS7.1')) diff --git a/lib-python/2.7/distutils/command/sdist.py b/lib-python/2.7/distutils/command/sdist.py --- a/lib-python/2.7/distutils/command/sdist.py +++ b/lib-python/2.7/distutils/command/sdist.py @@ -306,17 +306,20 @@ rstrip_ws=1, collapse_join=1) - while 1: - line = template.readline() - if line is None: # end of file - break + try: + while 1: + line = template.readline() + if line is None: # end of file + break - try: - self.filelist.process_template_line(line) - except DistutilsTemplateError, msg: - self.warn("%s, line %d: %s" % (template.filename, - template.current_line, - msg)) + try: + self.filelist.process_template_line(line) + except DistutilsTemplateError, msg: + self.warn("%s, line %d: %s" % (template.filename, + template.current_line, + msg)) + finally: + template.close() def prune_file_list(self): """Prune off branches that might slip into the file list as created diff --git a/lib-python/2.7/distutils/command/upload.py b/lib-python/2.7/distutils/command/upload.py --- a/lib-python/2.7/distutils/command/upload.py +++ b/lib-python/2.7/distutils/command/upload.py @@ -176,6 +176,9 @@ result = urlopen(request) status = result.getcode() reason = result.msg + if self.show_response: + msg = '\n'.join(('-' * 75, r.read(), '-' * 75)) + self.announce(msg, log.INFO) except socket.error, e: self.announce(str(e), log.ERROR) return @@ -189,6 +192,3 @@ else: self.announce('Upload failed (%s): %s' % (status, reason), log.ERROR) - if self.show_response: - msg = '\n'.join(('-' * 75, r.read(), '-' * 75)) - self.announce(msg, log.INFO) diff --git a/lib-python/2.7/distutils/sysconfig.py b/lib-python/2.7/distutils/sysconfig.py --- a/lib-python/2.7/distutils/sysconfig.py +++ b/lib-python/2.7/distutils/sysconfig.py @@ -389,7 +389,7 @@ cur_target = os.getenv('MACOSX_DEPLOYMENT_TARGET', '') if cur_target == '': cur_target = cfg_target - os.putenv('MACOSX_DEPLOYMENT_TARGET', cfg_target) + os.environ['MACOSX_DEPLOYMENT_TARGET'] = cfg_target elif map(int, cfg_target.split('.')) > map(int, cur_target.split('.')): my_msg = ('$MACOSX_DEPLOYMENT_TARGET mismatch: now "%s" but "%s" during configure' % (cur_target, cfg_target)) diff --git a/lib-python/2.7/distutils/tests/__init__.py b/lib-python/2.7/distutils/tests/__init__.py --- a/lib-python/2.7/distutils/tests/__init__.py +++ b/lib-python/2.7/distutils/tests/__init__.py @@ -15,9 +15,10 @@ import os import sys import unittest +from test.test_support import run_unittest -here = os.path.dirname(__file__) +here = os.path.dirname(__file__) or os.curdir def test_suite(): @@ -32,4 +33,4 @@ if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + 
run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_archive_util.py b/lib-python/2.7/distutils/tests/test_archive_util.py --- a/lib-python/2.7/distutils/tests/test_archive_util.py +++ b/lib-python/2.7/distutils/tests/test_archive_util.py @@ -12,7 +12,7 @@ ARCHIVE_FORMATS) from distutils.spawn import find_executable, spawn from distutils.tests import support -from test.test_support import check_warnings +from test.test_support import check_warnings, run_unittest try: import grp @@ -281,4 +281,4 @@ return unittest.makeSuite(ArchiveUtilTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_bdist_msi.py b/lib-python/2.7/distutils/tests/test_bdist_msi.py --- a/lib-python/2.7/distutils/tests/test_bdist_msi.py +++ b/lib-python/2.7/distutils/tests/test_bdist_msi.py @@ -11,7 +11,7 @@ support.LoggingSilencer, unittest.TestCase): - def test_minial(self): + def test_minimal(self): # minimal test XXX need more tests from distutils.command.bdist_msi import bdist_msi pkg_pth, dist = self.create_dist() diff --git a/lib-python/2.7/distutils/tests/test_build.py b/lib-python/2.7/distutils/tests/test_build.py --- a/lib-python/2.7/distutils/tests/test_build.py +++ b/lib-python/2.7/distutils/tests/test_build.py @@ -2,6 +2,7 @@ import unittest import os import sys +from test.test_support import run_unittest from distutils.command.build import build from distutils.tests import support @@ -51,4 +52,4 @@ return unittest.makeSuite(BuildTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_build_clib.py b/lib-python/2.7/distutils/tests/test_build_clib.py --- a/lib-python/2.7/distutils/tests/test_build_clib.py +++ b/lib-python/2.7/distutils/tests/test_build_clib.py @@ -3,6 +3,8 @@ import os import sys +from test.test_support import run_unittest + from distutils.command.build_clib import build_clib from distutils.errors import DistutilsSetupError from distutils.tests import support @@ -140,4 +142,4 @@ return unittest.makeSuite(BuildCLibTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_build_ext.py b/lib-python/2.7/distutils/tests/test_build_ext.py --- a/lib-python/2.7/distutils/tests/test_build_ext.py +++ b/lib-python/2.7/distutils/tests/test_build_ext.py @@ -3,12 +3,13 @@ import tempfile import shutil from StringIO import StringIO +import textwrap from distutils.core import Extension, Distribution from distutils.command.build_ext import build_ext from distutils import sysconfig from distutils.tests import support -from distutils.errors import DistutilsSetupError +from distutils.errors import DistutilsSetupError, CompileError import unittest from test import test_support @@ -430,6 +431,59 @@ wanted = os.path.join(cmd.build_lib, 'UpdateManager', 'fdsend' + ext) self.assertEqual(ext_path, wanted) + @unittest.skipUnless(sys.platform == 'darwin', 'test only relevant for MacOSX') + def test_deployment_target(self): + self._try_compile_deployment_target() + + orig_environ = os.environ + os.environ = orig_environ.copy() + self.addCleanup(setattr, os, 'environ', orig_environ) + + os.environ['MACOSX_DEPLOYMENT_TARGET']='10.1' + self._try_compile_deployment_target() + + + def _try_compile_deployment_target(self): + deptarget_c = os.path.join(self.tmp_dir, 'deptargetmodule.c') + + with 
open(deptarget_c, 'w') as fp: + fp.write(textwrap.dedent('''\ + #include + + int dummy; + + #if TARGET != MAC_OS_X_VERSION_MIN_REQUIRED + #error "Unexpected target" + #endif + + ''')) + + target = sysconfig.get_config_var('MACOSX_DEPLOYMENT_TARGET') + target = tuple(map(int, target.split('.'))) + target = '%02d%01d0' % target + + deptarget_ext = Extension( + 'deptarget', + [deptarget_c], + extra_compile_args=['-DTARGET=%s'%(target,)], + ) + dist = Distribution({ + 'name': 'deptarget', + 'ext_modules': [deptarget_ext] + }) + dist.package_dir = self.tmp_dir + cmd = build_ext(dist) + cmd.build_lib = self.tmp_dir + cmd.build_temp = self.tmp_dir + + try: + old_stdout = sys.stdout + cmd.ensure_finalized() + cmd.run() + + except CompileError: + self.fail("Wrong deployment target during compilation") + def test_suite(): return unittest.makeSuite(BuildExtTestCase) diff --git a/lib-python/2.7/distutils/tests/test_build_py.py b/lib-python/2.7/distutils/tests/test_build_py.py --- a/lib-python/2.7/distutils/tests/test_build_py.py +++ b/lib-python/2.7/distutils/tests/test_build_py.py @@ -10,13 +10,14 @@ from distutils.errors import DistutilsFileError from distutils.tests import support +from test.test_support import run_unittest class BuildPyTestCase(support.TempdirManager, support.LoggingSilencer, unittest.TestCase): - def _setup_package_data(self): + def test_package_data(self): sources = self.mkdtemp() f = open(os.path.join(sources, "__init__.py"), "w") try: @@ -56,20 +57,15 @@ self.assertEqual(len(cmd.get_outputs()), 3) pkgdest = os.path.join(destination, "pkg") files = os.listdir(pkgdest) - return files + self.assertIn("__init__.py", files) + self.assertIn("README.txt", files) + # XXX even with -O, distutils writes pyc, not pyo; bug? + if sys.dont_write_bytecode: + self.assertNotIn("__init__.pyc", files) + else: + self.assertIn("__init__.pyc", files) - def test_package_data(self): - files = self._setup_package_data() - self.assertTrue("__init__.py" in files) - self.assertTrue("README.txt" in files) - - @unittest.skipIf(sys.flags.optimize >= 2, - "pyc files are not written with -O2 and above") - def test_package_data_pyc(self): - files = self._setup_package_data() - self.assertTrue("__init__.pyc" in files) - - def test_empty_package_dir (self): + def test_empty_package_dir(self): # See SF 1668596/1720897. 
cwd = os.getcwd() @@ -117,10 +113,10 @@ finally: sys.dont_write_bytecode = old_dont_write_bytecode - self.assertTrue('byte-compiling is disabled' in self.logs[0][1]) + self.assertIn('byte-compiling is disabled', self.logs[0][1]) def test_suite(): return unittest.makeSuite(BuildPyTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_build_scripts.py b/lib-python/2.7/distutils/tests/test_build_scripts.py --- a/lib-python/2.7/distutils/tests/test_build_scripts.py +++ b/lib-python/2.7/distutils/tests/test_build_scripts.py @@ -8,6 +8,7 @@ import sysconfig from distutils.tests import support +from test.test_support import run_unittest class BuildScriptsTestCase(support.TempdirManager, @@ -108,4 +109,4 @@ return unittest.makeSuite(BuildScriptsTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_check.py b/lib-python/2.7/distutils/tests/test_check.py --- a/lib-python/2.7/distutils/tests/test_check.py +++ b/lib-python/2.7/distutils/tests/test_check.py @@ -1,5 +1,6 @@ """Tests for distutils.command.check.""" import unittest +from test.test_support import run_unittest from distutils.command.check import check, HAS_DOCUTILS from distutils.tests import support @@ -95,4 +96,4 @@ return unittest.makeSuite(CheckTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_clean.py b/lib-python/2.7/distutils/tests/test_clean.py --- a/lib-python/2.7/distutils/tests/test_clean.py +++ b/lib-python/2.7/distutils/tests/test_clean.py @@ -6,6 +6,7 @@ from distutils.command.clean import clean from distutils.tests import support +from test.test_support import run_unittest class cleanTestCase(support.TempdirManager, support.LoggingSilencer, @@ -38,7 +39,7 @@ self.assertTrue(not os.path.exists(path), '%s was not removed' % path) - # let's run the command again (should spit warnings but suceed) + # let's run the command again (should spit warnings but succeed) cmd.all = 1 cmd.ensure_finalized() cmd.run() @@ -47,4 +48,4 @@ return unittest.makeSuite(cleanTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_cmd.py b/lib-python/2.7/distutils/tests/test_cmd.py --- a/lib-python/2.7/distutils/tests/test_cmd.py +++ b/lib-python/2.7/distutils/tests/test_cmd.py @@ -99,7 +99,7 @@ def test_ensure_dirname(self): cmd = self.cmd - cmd.option1 = os.path.dirname(__file__) + cmd.option1 = os.path.dirname(__file__) or os.curdir cmd.ensure_dirname('option1') cmd.option2 = 'xxx' self.assertRaises(DistutilsOptionError, cmd.ensure_dirname, 'option2') diff --git a/lib-python/2.7/distutils/tests/test_config.py b/lib-python/2.7/distutils/tests/test_config.py --- a/lib-python/2.7/distutils/tests/test_config.py +++ b/lib-python/2.7/distutils/tests/test_config.py @@ -11,6 +11,7 @@ from distutils.log import WARN from distutils.tests import support +from test.test_support import run_unittest PYPIRC = """\ [distutils] @@ -119,4 +120,4 @@ return unittest.makeSuite(PyPIRCCommandTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_config_cmd.py b/lib-python/2.7/distutils/tests/test_config_cmd.py --- a/lib-python/2.7/distutils/tests/test_config_cmd.py 
+++ b/lib-python/2.7/distutils/tests/test_config_cmd.py @@ -2,6 +2,7 @@ import unittest import os import sys +from test.test_support import run_unittest from distutils.command.config import dump_file, config from distutils.tests import support @@ -86,4 +87,4 @@ return unittest.makeSuite(ConfigTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_core.py b/lib-python/2.7/distutils/tests/test_core.py --- a/lib-python/2.7/distutils/tests/test_core.py +++ b/lib-python/2.7/distutils/tests/test_core.py @@ -6,7 +6,7 @@ import shutil import sys import test.test_support -from test.test_support import captured_stdout +from test.test_support import captured_stdout, run_unittest import unittest from distutils.tests import support @@ -105,4 +105,4 @@ return unittest.makeSuite(CoreTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_dep_util.py b/lib-python/2.7/distutils/tests/test_dep_util.py --- a/lib-python/2.7/distutils/tests/test_dep_util.py +++ b/lib-python/2.7/distutils/tests/test_dep_util.py @@ -6,6 +6,7 @@ from distutils.dep_util import newer, newer_pairwise, newer_group from distutils.errors import DistutilsFileError from distutils.tests import support +from test.test_support import run_unittest class DepUtilTestCase(support.TempdirManager, unittest.TestCase): @@ -77,4 +78,4 @@ return unittest.makeSuite(DepUtilTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_dir_util.py b/lib-python/2.7/distutils/tests/test_dir_util.py --- a/lib-python/2.7/distutils/tests/test_dir_util.py +++ b/lib-python/2.7/distutils/tests/test_dir_util.py @@ -10,6 +10,7 @@ from distutils import log from distutils.tests import support +from test.test_support import run_unittest class DirUtilTestCase(support.TempdirManager, unittest.TestCase): @@ -112,4 +113,4 @@ return unittest.makeSuite(DirUtilTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_dist.py b/lib-python/2.7/distutils/tests/test_dist.py --- a/lib-python/2.7/distutils/tests/test_dist.py +++ b/lib-python/2.7/distutils/tests/test_dist.py @@ -11,7 +11,7 @@ from distutils.dist import Distribution, fix_help_options, DistributionMetadata from distutils.cmd import Command import distutils.dist -from test.test_support import TESTFN, captured_stdout +from test.test_support import TESTFN, captured_stdout, run_unittest from distutils.tests import support class test_dist(Command): @@ -433,4 +433,4 @@ return suite if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_file_util.py b/lib-python/2.7/distutils/tests/test_file_util.py --- a/lib-python/2.7/distutils/tests/test_file_util.py +++ b/lib-python/2.7/distutils/tests/test_file_util.py @@ -6,6 +6,7 @@ from distutils.file_util import move_file, write_file, copy_file from distutils import log from distutils.tests import support +from test.test_support import run_unittest class FileUtilTestCase(support.TempdirManager, unittest.TestCase): @@ -77,4 +78,4 @@ return unittest.makeSuite(FileUtilTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git 
a/lib-python/2.7/distutils/tests/test_filelist.py b/lib-python/2.7/distutils/tests/test_filelist.py --- a/lib-python/2.7/distutils/tests/test_filelist.py +++ b/lib-python/2.7/distutils/tests/test_filelist.py @@ -1,7 +1,7 @@ """Tests for distutils.filelist.""" from os.path import join import unittest -from test.test_support import captured_stdout +from test.test_support import captured_stdout, run_unittest from distutils.filelist import glob_to_re, FileList from distutils import debug @@ -82,4 +82,4 @@ return unittest.makeSuite(FileListTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_install.py b/lib-python/2.7/distutils/tests/test_install.py --- a/lib-python/2.7/distutils/tests/test_install.py +++ b/lib-python/2.7/distutils/tests/test_install.py @@ -3,6 +3,8 @@ import os import unittest +from test.test_support import run_unittest + from distutils.command.install import install from distutils.core import Distribution @@ -52,4 +54,4 @@ return unittest.makeSuite(InstallTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_install_data.py b/lib-python/2.7/distutils/tests/test_install_data.py --- a/lib-python/2.7/distutils/tests/test_install_data.py +++ b/lib-python/2.7/distutils/tests/test_install_data.py @@ -6,6 +6,7 @@ from distutils.command.install_data import install_data from distutils.tests import support +from test.test_support import run_unittest class InstallDataTestCase(support.TempdirManager, support.LoggingSilencer, @@ -73,4 +74,4 @@ return unittest.makeSuite(InstallDataTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_install_headers.py b/lib-python/2.7/distutils/tests/test_install_headers.py --- a/lib-python/2.7/distutils/tests/test_install_headers.py +++ b/lib-python/2.7/distutils/tests/test_install_headers.py @@ -6,6 +6,7 @@ from distutils.command.install_headers import install_headers from distutils.tests import support +from test.test_support import run_unittest class InstallHeadersTestCase(support.TempdirManager, support.LoggingSilencer, @@ -37,4 +38,4 @@ return unittest.makeSuite(InstallHeadersTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_install_lib.py b/lib-python/2.7/distutils/tests/test_install_lib.py --- a/lib-python/2.7/distutils/tests/test_install_lib.py +++ b/lib-python/2.7/distutils/tests/test_install_lib.py @@ -7,6 +7,7 @@ from distutils.extension import Extension from distutils.tests import support from distutils.errors import DistutilsOptionError +from test.test_support import run_unittest class InstallLibTestCase(support.TempdirManager, support.LoggingSilencer, @@ -103,4 +104,4 @@ return unittest.makeSuite(InstallLibTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_install_scripts.py b/lib-python/2.7/distutils/tests/test_install_scripts.py --- a/lib-python/2.7/distutils/tests/test_install_scripts.py +++ b/lib-python/2.7/distutils/tests/test_install_scripts.py @@ -7,6 +7,7 @@ from distutils.core import Distribution from distutils.tests import support +from test.test_support import run_unittest class 
InstallScriptsTestCase(support.TempdirManager, @@ -78,4 +79,4 @@ return unittest.makeSuite(InstallScriptsTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_msvc9compiler.py b/lib-python/2.7/distutils/tests/test_msvc9compiler.py --- a/lib-python/2.7/distutils/tests/test_msvc9compiler.py +++ b/lib-python/2.7/distutils/tests/test_msvc9compiler.py @@ -5,6 +5,7 @@ from distutils.errors import DistutilsPlatformError from distutils.tests import support +from test.test_support import run_unittest _MANIFEST = """\ @@ -137,4 +138,4 @@ return unittest.makeSuite(msvc9compilerTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_register.py b/lib-python/2.7/distutils/tests/test_register.py --- a/lib-python/2.7/distutils/tests/test_register.py +++ b/lib-python/2.7/distutils/tests/test_register.py @@ -7,7 +7,7 @@ import urllib2 import warnings -from test.test_support import check_warnings +from test.test_support import check_warnings, run_unittest from distutils.command import register as register_module from distutils.command.register import register @@ -138,7 +138,7 @@ # let's see what the server received : we should # have 2 similar requests - self.assertTrue(self.conn.reqs, 2) + self.assertEqual(len(self.conn.reqs), 2) req1 = dict(self.conn.reqs[0].headers) req2 = dict(self.conn.reqs[1].headers) self.assertEqual(req2['Content-length'], req1['Content-length']) @@ -168,7 +168,7 @@ del register_module.raw_input # we should have send a request - self.assertTrue(self.conn.reqs, 1) + self.assertEqual(len(self.conn.reqs), 1) req = self.conn.reqs[0] headers = dict(req.headers) self.assertEqual(headers['Content-length'], '608') @@ -186,7 +186,7 @@ del register_module.raw_input # we should have send a request - self.assertTrue(self.conn.reqs, 1) + self.assertEqual(len(self.conn.reqs), 1) req = self.conn.reqs[0] headers = dict(req.headers) self.assertEqual(headers['Content-length'], '290') @@ -258,4 +258,4 @@ return unittest.makeSuite(RegisterTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_sdist.py b/lib-python/2.7/distutils/tests/test_sdist.py --- a/lib-python/2.7/distutils/tests/test_sdist.py +++ b/lib-python/2.7/distutils/tests/test_sdist.py @@ -24,11 +24,9 @@ import tempfile import warnings -from test.test_support import check_warnings -from test.test_support import captured_stdout +from test.test_support import captured_stdout, check_warnings, run_unittest -from distutils.command.sdist import sdist -from distutils.command.sdist import show_formats +from distutils.command.sdist import sdist, show_formats from distutils.core import Distribution from distutils.tests.test_config import PyPIRCCommandTestCase from distutils.errors import DistutilsExecError, DistutilsOptionError @@ -372,7 +370,7 @@ # adding a file self.write_file((self.tmp_dir, 'somecode', 'doc2.txt'), '#') - # make sure build_py is reinitinialized, like a fresh run + # make sure build_py is reinitialized, like a fresh run build_py = dist.get_command_obj('build_py') build_py.finalized = False build_py.ensure_finalized() @@ -390,6 +388,7 @@ self.assertEqual(len(manifest2), 6) self.assertIn('doc2.txt', manifest2[-1]) + @unittest.skipUnless(zlib, "requires zlib") def test_manifest_marker(self): # check that autogenerated MANIFESTs have a 
marker dist, cmd = self.get_cmd() @@ -406,6 +405,7 @@ self.assertEqual(manifest[0], '# file GENERATED by distutils, do NOT edit') + @unittest.skipUnless(zlib, "requires zlib") def test_manual_manifest(self): # check that a MANIFEST without a marker is left alone dist, cmd = self.get_cmd() @@ -426,4 +426,4 @@ return unittest.makeSuite(SDistTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_spawn.py b/lib-python/2.7/distutils/tests/test_spawn.py --- a/lib-python/2.7/distutils/tests/test_spawn.py +++ b/lib-python/2.7/distutils/tests/test_spawn.py @@ -2,7 +2,7 @@ import unittest import os import time -from test.test_support import captured_stdout +from test.test_support import captured_stdout, run_unittest from distutils.spawn import _nt_quote_args from distutils.spawn import spawn, find_executable @@ -57,4 +57,4 @@ return unittest.makeSuite(SpawnTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_text_file.py b/lib-python/2.7/distutils/tests/test_text_file.py --- a/lib-python/2.7/distutils/tests/test_text_file.py +++ b/lib-python/2.7/distutils/tests/test_text_file.py @@ -3,6 +3,7 @@ import unittest from distutils.text_file import TextFile from distutils.tests import support +from test.test_support import run_unittest TEST_DATA = """# test file @@ -103,4 +104,4 @@ return unittest.makeSuite(TextFileTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_unixccompiler.py b/lib-python/2.7/distutils/tests/test_unixccompiler.py --- a/lib-python/2.7/distutils/tests/test_unixccompiler.py +++ b/lib-python/2.7/distutils/tests/test_unixccompiler.py @@ -1,6 +1,7 @@ """Tests for distutils.unixccompiler.""" import sys import unittest +from test.test_support import run_unittest from distutils import sysconfig from distutils.unixccompiler import UnixCCompiler @@ -126,4 +127,4 @@ return unittest.makeSuite(UnixCCompilerTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_upload.py b/lib-python/2.7/distutils/tests/test_upload.py --- a/lib-python/2.7/distutils/tests/test_upload.py +++ b/lib-python/2.7/distutils/tests/test_upload.py @@ -1,14 +1,13 @@ +# -*- encoding: utf8 -*- """Tests for distutils.command.upload.""" -# -*- encoding: utf8 -*- -import sys import os import unittest +from test.test_support import run_unittest from distutils.command import upload as upload_mod from distutils.command.upload import upload from distutils.core import Distribution -from distutils.tests import support from distutils.tests.test_config import PYPIRC, PyPIRCCommandTestCase PYPIRC_LONG_PASSWORD = """\ @@ -129,4 +128,4 @@ return unittest.makeSuite(uploadTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_util.py b/lib-python/2.7/distutils/tests/test_util.py --- a/lib-python/2.7/distutils/tests/test_util.py +++ b/lib-python/2.7/distutils/tests/test_util.py @@ -1,6 +1,7 @@ """Tests for distutils.util.""" import sys import unittest +from test.test_support import run_unittest from distutils.errors import DistutilsPlatformError, DistutilsByteCompileError from distutils.util import byte_compile @@ -21,4 +22,4 @@ return 
unittest.makeSuite(UtilTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_version.py b/lib-python/2.7/distutils/tests/test_version.py --- a/lib-python/2.7/distutils/tests/test_version.py +++ b/lib-python/2.7/distutils/tests/test_version.py @@ -2,6 +2,7 @@ import unittest from distutils.version import LooseVersion from distutils.version import StrictVersion +from test.test_support import run_unittest class VersionTestCase(unittest.TestCase): @@ -67,4 +68,4 @@ return unittest.makeSuite(VersionTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_versionpredicate.py b/lib-python/2.7/distutils/tests/test_versionpredicate.py --- a/lib-python/2.7/distutils/tests/test_versionpredicate.py +++ b/lib-python/2.7/distutils/tests/test_versionpredicate.py @@ -4,6 +4,10 @@ import distutils.versionpredicate import doctest +from test.test_support import run_unittest def test_suite(): return doctest.DocTestSuite(distutils.versionpredicate) + +if __name__ == '__main__': + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/util.py b/lib-python/2.7/distutils/util.py --- a/lib-python/2.7/distutils/util.py +++ b/lib-python/2.7/distutils/util.py @@ -97,9 +97,7 @@ from distutils.sysconfig import get_config_vars cfgvars = get_config_vars() - macver = os.environ.get('MACOSX_DEPLOYMENT_TARGET') - if not macver: - macver = cfgvars.get('MACOSX_DEPLOYMENT_TARGET') + macver = cfgvars.get('MACOSX_DEPLOYMENT_TARGET') if 1: # Always calculate the release of the running machine, diff --git a/lib-python/2.7/doctest.py b/lib-python/2.7/doctest.py --- a/lib-python/2.7/doctest.py +++ b/lib-python/2.7/doctest.py @@ -1217,7 +1217,7 @@ # Process each example. for examplenum, example in enumerate(test.examples): - # If REPORT_ONLY_FIRST_FAILURE is set, then supress + # If REPORT_ONLY_FIRST_FAILURE is set, then suppress # reporting after the first failure. quiet = (self.optionflags & REPORT_ONLY_FIRST_FAILURE and failures > 0) @@ -2186,7 +2186,7 @@ caller can catch the errors and initiate post-mortem debugging. The DocTestCase provides a debug method that raises - UnexpectedException errors if there is an unexepcted + UnexpectedException errors if there is an unexpected exception: >>> test = DocTestParser().get_doctest('>>> raise KeyError\n42', diff --git a/lib-python/2.7/email/charset.py b/lib-python/2.7/email/charset.py --- a/lib-python/2.7/email/charset.py +++ b/lib-python/2.7/email/charset.py @@ -209,7 +209,7 @@ input_charset = unicode(input_charset, 'ascii') except UnicodeError: raise errors.CharsetError(input_charset) - input_charset = input_charset.lower() + input_charset = input_charset.lower().encode('ascii') # Set the input charset after filtering through the aliases and/or codecs if not (input_charset in ALIASES or input_charset in CHARSETS): try: diff --git a/lib-python/2.7/email/generator.py b/lib-python/2.7/email/generator.py --- a/lib-python/2.7/email/generator.py +++ b/lib-python/2.7/email/generator.py @@ -202,18 +202,13 @@ g = self.clone(s) g.flatten(part, unixfrom=False) msgtexts.append(s.getvalue()) - # Now make sure the boundary we've selected doesn't appear in any of - # the message texts. - alltext = NL.join(msgtexts) # BAW: What about boundaries that are wrapped in double-quotes? 
- boundary = msg.get_boundary(failobj=_make_boundary(alltext)) - # If we had to calculate a new boundary because the body text - # contained that string, set the new boundary. We don't do it - # unconditionally because, while set_boundary() preserves order, it - # doesn't preserve newlines/continuations in headers. This is no big - # deal in practice, but turns out to be inconvenient for the unittest - # suite. - if msg.get_boundary() != boundary: + boundary = msg.get_boundary() + if not boundary: + # Create a boundary that doesn't appear in any of the + # message texts. + alltext = NL.join(msgtexts) + boundary = _make_boundary(alltext) msg.set_boundary(boundary) # If there's a preamble, write it out, with a trailing CRLF if msg.preamble is not None: @@ -292,7 +287,7 @@ _FMT = '[Non-text (%(type)s) part of message omitted, filename %(filename)s]' class DecodedGenerator(Generator): - """Generator a text representation of a message. + """Generates a text representation of a message. Like the Generator base class, except that non-text parts are substituted with a format string representing the part. diff --git a/lib-python/2.7/email/header.py b/lib-python/2.7/email/header.py --- a/lib-python/2.7/email/header.py +++ b/lib-python/2.7/email/header.py @@ -47,6 +47,10 @@ # For use with .match() fcre = re.compile(r'[\041-\176]+:$') +# Find a header embedded in a putative header value. Used to check for +# header injection attack. +_embeded_header = re.compile(r'\n[^ \t]+:') + # Helpers @@ -403,7 +407,11 @@ newchunks += self._split(s, charset, targetlen, splitchars) lastchunk, lastcharset = newchunks[-1] lastlen = lastcharset.encoded_header_len(lastchunk) - return self._encode_chunks(newchunks, maxlinelen) + value = self._encode_chunks(newchunks, maxlinelen) + if _embeded_header.search(value): + raise HeaderParseError("header value appears to contain " + "an embedded header: {!r}".format(value)) + return value diff --git a/lib-python/2.7/email/message.py b/lib-python/2.7/email/message.py --- a/lib-python/2.7/email/message.py +++ b/lib-python/2.7/email/message.py @@ -38,7 +38,9 @@ def _formatparam(param, value=None, quote=True): """Convenience function to format and return a key=value pair. - This will quote the value if needed or if quote is true. + This will quote the value if needed or if quote is true. If value is a + three tuple (charset, language, value), it will be encoded according + to RFC2231 rules. """ if value is not None and len(value) > 0: # A tuple is used for RFC 2231 encoded parameter values where items @@ -97,7 +99,7 @@ objects, otherwise it is a string. Message objects implement part of the `mapping' interface, which assumes - there is exactly one occurrance of the header per message. Some headers + there is exactly one occurrence of the header per message. Some headers do in fact appear multiple times (e.g. Received) and for those headers, you must use the explicit API to set or get all the headers. Not all of the mapping methods are implemented. @@ -286,7 +288,7 @@ Return None if the header is missing instead of raising an exception. Note that if the header appeared multiple times, exactly which - occurrance gets returned is undefined. Use get_all() to get all + occurrence gets returned is undefined. Use get_all() to get all the values matching a header field name. """ return self.get(name) @@ -389,7 +391,10 @@ name is the header field to add. keyword arguments can be used to set additional parameters for the header field, with underscores converted to dashes. 
Normally the parameter will be added as key="value" unless - value is None, in which case only the key will be added. + value is None, in which case only the key will be added. If a + parameter value contains non-ASCII characters it must be specified as a + three-tuple of (charset, language, value), in which case it will be + encoded according to RFC2231 rules. Example: diff --git a/lib-python/2.7/email/mime/application.py b/lib-python/2.7/email/mime/application.py --- a/lib-python/2.7/email/mime/application.py +++ b/lib-python/2.7/email/mime/application.py @@ -17,7 +17,7 @@ _encoder=encoders.encode_base64, **_params): """Create an application/* type MIME document. - _data is a string containing the raw applicatoin data. + _data is a string containing the raw application data. _subtype is the MIME content type subtype, defaulting to 'octet-stream'. diff --git a/lib-python/2.7/email/test/data/msg_26.txt b/lib-python/2.7/email/test/data/msg_26.txt --- a/lib-python/2.7/email/test/data/msg_26.txt +++ b/lib-python/2.7/email/test/data/msg_26.txt @@ -42,4 +42,4 @@ MzMAAAAACH97tzAAAAALu3c3gAAAAAAL+7tzDABAu7f7cAAAAAAACA+3MA7EQAv/sIAA AAAAAAAIAAAAAAAAAIAAAAAA ---1618492860--2051301190--113853680-- +--1618492860--2051301190--113853680-- \ No newline at end of file diff --git a/lib-python/2.7/email/test/test_email.py b/lib-python/2.7/email/test/test_email.py --- a/lib-python/2.7/email/test/test_email.py +++ b/lib-python/2.7/email/test/test_email.py @@ -179,6 +179,17 @@ self.assertRaises(Errors.HeaderParseError, msg.set_boundary, 'BOUNDARY') + def test_make_boundary(self): + msg = MIMEMultipart('form-data') + # Note that when the boundary gets created is an implementation + # detail and might change. + self.assertEqual(msg.items()[0][1], 'multipart/form-data') + # Trigger creation of boundary + msg.as_string() + self.assertEqual(msg.items()[0][1][:33], + 'multipart/form-data; boundary="==') + # XXX: there ought to be tests of the uniqueness of the boundary, too. + def test_message_rfc822_only(self): # Issue 7970: message/rfc822 not in multipart parsed by # HeaderParser caused an exception when flattened. @@ -542,6 +553,17 @@ msg.set_charset(u'us-ascii') self.assertEqual('us-ascii', msg.get_content_charset()) + # Issue 5871: reject an attempt to embed a header inside a header value + # (header injection attack). 
+ def test_embeded_header_via_Header_rejected(self): + msg = Message() + msg['Dummy'] = Header('dummy\nX-Injected-Header: test') + self.assertRaises(Errors.HeaderParseError, msg.as_string) + + def test_embeded_header_via_string_rejected(self): + msg = Message() + msg['Dummy'] = 'dummy\nX-Injected-Header: test' + self.assertRaises(Errors.HeaderParseError, msg.as_string) # Test the email.Encoders module @@ -3113,6 +3135,28 @@ s = 'Subject: =?EUC-KR?B?CSixpLDtKSC/7Liuvsax4iC6uLmwMcijIKHaILzSwd/H0SC8+LCjwLsgv7W/+Mj3I ?=' raises(Errors.HeaderParseError, decode_header, s) + # Issue 1078919 + def test_ascii_add_header(self): + msg = Message() + msg.add_header('Content-Disposition', 'attachment', + filename='bud.gif') + self.assertEqual('attachment; filename="bud.gif"', + msg['Content-Disposition']) + + def test_nonascii_add_header_via_triple(self): + msg = Message() + msg.add_header('Content-Disposition', 'attachment', + filename=('iso-8859-1', '', 'Fu\xdfballer.ppt')) + self.assertEqual( + 'attachment; filename*="iso-8859-1\'\'Fu%DFballer.ppt"', + msg['Content-Disposition']) + + def test_encode_unaliased_charset(self): + # Issue 1379416: when the charset has no output conversion, + # output was accidentally getting coerced to unicode. + res = Header('abc','iso-8859-2').encode() + self.assertEqual(res, '=?iso-8859-2?q?abc?=') + self.assertIsInstance(res, str) # Test RFC 2231 header parameters (en/de)coding diff --git a/lib-python/2.7/ftplib.py b/lib-python/2.7/ftplib.py --- a/lib-python/2.7/ftplib.py +++ b/lib-python/2.7/ftplib.py @@ -599,7 +599,7 @@ Usage example: >>> from ftplib import FTP_TLS >>> ftps = FTP_TLS('ftp.python.org') - >>> ftps.login() # login anonimously previously securing control channel + >>> ftps.login() # login anonymously previously securing control channel '230 Guest login ok, access restrictions apply.' 
>>> ftps.prot_p() # switch to secure data connection '200 Protection level set to P' diff --git a/lib-python/2.7/functools.py b/lib-python/2.7/functools.py --- a/lib-python/2.7/functools.py +++ b/lib-python/2.7/functools.py @@ -53,17 +53,17 @@ def total_ordering(cls): """Class decorator that fills in missing ordering methods""" convert = { - '__lt__': [('__gt__', lambda self, other: other < self), - ('__le__', lambda self, other: not other < self), + '__lt__': [('__gt__', lambda self, other: not (self < other or self == other)), + ('__le__', lambda self, other: self < other or self == other), ('__ge__', lambda self, other: not self < other)], - '__le__': [('__ge__', lambda self, other: other <= self), - ('__lt__', lambda self, other: not other <= self), + '__le__': [('__ge__', lambda self, other: not self <= other or self == other), + ('__lt__', lambda self, other: self <= other and not self == other), ('__gt__', lambda self, other: not self <= other)], - '__gt__': [('__lt__', lambda self, other: other > self), - ('__ge__', lambda self, other: not other > self), + '__gt__': [('__lt__', lambda self, other: not (self > other or self == other)), + ('__ge__', lambda self, other: self > other or self == other), ('__le__', lambda self, other: not self > other)], - '__ge__': [('__le__', lambda self, other: other >= self), - ('__gt__', lambda self, other: not other >= self), + '__ge__': [('__le__', lambda self, other: (not self >= other) or self == other), + ('__gt__', lambda self, other: self >= other and not self == other), ('__lt__', lambda self, other: not self >= other)] } roots = set(dir(cls)) & set(convert) @@ -80,6 +80,7 @@ def cmp_to_key(mycmp): """Convert a cmp= function into a key= function""" class K(object): + __slots__ = ['obj'] def __init__(self, obj, *args): self.obj = obj def __lt__(self, other): diff --git a/lib-python/2.7/getpass.py b/lib-python/2.7/getpass.py --- a/lib-python/2.7/getpass.py +++ b/lib-python/2.7/getpass.py @@ -62,7 +62,7 @@ try: old = termios.tcgetattr(fd) # a copy to save new = old[:] - new[3] &= ~(termios.ECHO|termios.ISIG) # 3 == 'lflags' + new[3] &= ~termios.ECHO # 3 == 'lflags' tcsetattr_flags = termios.TCSAFLUSH if hasattr(termios, 'TCSASOFT'): tcsetattr_flags |= termios.TCSASOFT diff --git a/lib-python/2.7/gettext.py b/lib-python/2.7/gettext.py --- a/lib-python/2.7/gettext.py +++ b/lib-python/2.7/gettext.py @@ -316,7 +316,7 @@ # Note: we unconditionally convert both msgids and msgstrs to # Unicode using the character encoding specified in the charset # parameter of the Content-Type header. The gettext documentation - # strongly encourages msgids to be us-ascii, but some appliations + # strongly encourages msgids to be us-ascii, but some applications # require alternative encodings (e.g. Zope's ZCML and ZPT). 
For # traditional gettext applications, the msgid conversion will # cause no problems since us-ascii should always be a subset of diff --git a/lib-python/2.7/hashlib.py b/lib-python/2.7/hashlib.py --- a/lib-python/2.7/hashlib.py +++ b/lib-python/2.7/hashlib.py @@ -64,26 +64,29 @@ def __get_builtin_constructor(name): - if name in ('SHA1', 'sha1'): - import _sha - return _sha.new - elif name in ('MD5', 'md5'): - import _md5 - return _md5.new - elif name in ('SHA256', 'sha256', 'SHA224', 'sha224'): - import _sha256 - bs = name[3:] - if bs == '256': - return _sha256.sha256 - elif bs == '224': - return _sha256.sha224 - elif name in ('SHA512', 'sha512', 'SHA384', 'sha384'): - import _sha512 - bs = name[3:] - if bs == '512': - return _sha512.sha512 - elif bs == '384': - return _sha512.sha384 + try: + if name in ('SHA1', 'sha1'): + import _sha + return _sha.new + elif name in ('MD5', 'md5'): + import _md5 + return _md5.new + elif name in ('SHA256', 'sha256', 'SHA224', 'sha224'): + import _sha256 + bs = name[3:] + if bs == '256': + return _sha256.sha256 + elif bs == '224': + return _sha256.sha224 + elif name in ('SHA512', 'sha512', 'SHA384', 'sha384'): + import _sha512 + bs = name[3:] + if bs == '512': + return _sha512.sha512 + elif bs == '384': + return _sha512.sha384 + except ImportError: + pass # no extension module, this hash is unsupported. raise ValueError('unsupported hash type %s' % name) diff --git a/lib-python/2.7/heapq.py b/lib-python/2.7/heapq.py --- a/lib-python/2.7/heapq.py +++ b/lib-python/2.7/heapq.py @@ -133,6 +133,11 @@ from operator import itemgetter import bisect +def cmp_lt(x, y): + # Use __lt__ if available; otherwise, try __le__. + # In Py3.x, only __lt__ will be called. + return (x < y) if hasattr(x, '__lt__') else (not y <= x) + def heappush(heap, item): """Push item onto heap, maintaining the heap invariant.""" heap.append(item) @@ -167,13 +172,13 @@ def heappushpop(heap, item): """Fast version of a heappush followed by a heappop.""" - if heap and heap[0] < item: + if heap and cmp_lt(heap[0], item): item, heap[0] = heap[0], item _siftup(heap, 0) return item def heapify(x): - """Transform list into a heap, in-place, in O(len(heap)) time.""" + """Transform list into a heap, in-place, in O(len(x)) time.""" n = len(x) # Transform bottom-up. The largest index there's any point to looking at # is the largest with a child index in-range, so must have 2*i + 1 < n, @@ -215,11 +220,10 @@ pop = result.pop los = result[-1] # los --> Largest of the nsmallest for elem in it: - if los <= elem: - continue - insort(result, elem) - pop() - los = result[-1] + if cmp_lt(elem, los): + insort(result, elem) + pop() + los = result[-1] return result # An alternative approach manifests the whole iterable in memory but # saves comparisons by heapifying all at once. Also, saves time @@ -240,7 +244,7 @@ while pos > startpos: parentpos = (pos - 1) >> 1 parent = heap[parentpos] - if newitem < parent: + if cmp_lt(newitem, parent): heap[pos] = parent pos = parentpos continue @@ -295,7 +299,7 @@ while childpos < endpos: # Set childpos to index of smaller child. rightpos = childpos + 1 - if rightpos < endpos and not heap[childpos] < heap[rightpos]: + if rightpos < endpos and not cmp_lt(heap[childpos], heap[rightpos]): childpos = rightpos # Move the smaller child up. 
heap[pos] = heap[childpos] @@ -364,7 +368,7 @@ return [min(chain(head, it))] return [min(chain(head, it), key=key)] - # When n>=size, it's faster to use sort() + # When n>=size, it's faster to use sorted() try: size = len(iterable) except (TypeError, AttributeError): @@ -402,7 +406,7 @@ return [max(chain(head, it))] return [max(chain(head, it), key=key)] - # When n>=size, it's faster to use sort() + # When n>=size, it's faster to use sorted() try: size = len(iterable) except (TypeError, AttributeError): diff --git a/lib-python/2.7/httplib.py b/lib-python/2.7/httplib.py --- a/lib-python/2.7/httplib.py +++ b/lib-python/2.7/httplib.py @@ -212,6 +212,9 @@ # maximal amount of data to read at one time in _safe_read MAXAMOUNT = 1048576 +# maximal line length when calling readline(). +_MAXLINE = 65536 + class HTTPMessage(mimetools.Message): def addheader(self, key, value): @@ -274,7 +277,9 @@ except IOError: startofline = tell = None self.seekable = 0 - line = self.fp.readline() + line = self.fp.readline(_MAXLINE + 1) + if len(line) > _MAXLINE: + raise LineTooLong("header line") if not line: self.status = 'EOF in headers' break @@ -404,7 +409,10 @@ break # skip the header from the 100 response while True: - skip = self.fp.readline().strip() + skip = self.fp.readline(_MAXLINE + 1) + if len(skip) > _MAXLINE: + raise LineTooLong("header line") + skip = skip.strip() if not skip: break if self.debuglevel > 0: @@ -563,7 +571,9 @@ value = [] while True: if chunk_left is None: - line = self.fp.readline() + line = self.fp.readline(_MAXLINE + 1) + if len(line) > _MAXLINE: + raise LineTooLong("chunk size") i = line.find(';') if i >= 0: line = line[:i] # strip chunk-extensions @@ -598,7 +608,9 @@ # read and discard trailer up to the CRLF terminator ### note: we shouldn't have any trailers! while True: - line = self.fp.readline() + line = self.fp.readline(_MAXLINE + 1) + if len(line) > _MAXLINE: + raise LineTooLong("trailer line") if not line: # a vanishingly small number of sites EOF without # sending the trailer @@ -730,7 +742,9 @@ raise socket.error("Tunnel connection failed: %d %s" % (code, message.strip())) while True: - line = response.fp.readline() + line = response.fp.readline(_MAXLINE + 1) + if len(line) > _MAXLINE: + raise LineTooLong("header line") if line == '\r\n': break @@ -790,7 +804,7 @@ del self._buffer[:] # If msg and message_body are sent in a single send() call, # it will avoid performance problems caused by the interaction - # between delayed ack and the Nagle algorithim. + # between delayed ack and the Nagle algorithm. 
if isinstance(message_body, str): msg += message_body message_body = None @@ -1233,6 +1247,11 @@ self.args = line, self.line = line +class LineTooLong(HTTPException): + def __init__(self, line_type): + HTTPException.__init__(self, "got more than %d bytes when reading %s" + % (_MAXLINE, line_type)) + # for backwards compatibility error = HTTPException diff --git a/lib-python/2.7/idlelib/Bindings.py b/lib-python/2.7/idlelib/Bindings.py --- a/lib-python/2.7/idlelib/Bindings.py +++ b/lib-python/2.7/idlelib/Bindings.py @@ -98,14 +98,6 @@ # menu del menudefs[-1][1][0:2] - menudefs.insert(0, - ('application', [ - ('About IDLE', '<>'), - None, - ('_Preferences....', '<>'), - ])) - - default_keydefs = idleConf.GetCurrentKeySet() del sys diff --git a/lib-python/2.7/idlelib/EditorWindow.py b/lib-python/2.7/idlelib/EditorWindow.py --- a/lib-python/2.7/idlelib/EditorWindow.py +++ b/lib-python/2.7/idlelib/EditorWindow.py @@ -48,6 +48,21 @@ path = module.__path__ except AttributeError: raise ImportError, 'No source for module ' + module.__name__ + if descr[2] != imp.PY_SOURCE: + # If all of the above fails and didn't raise an exception,fallback + # to a straight import which can find __init__.py in a package. + m = __import__(fullname) + try: + filename = m.__file__ + except AttributeError: + pass + else: + file = None + base, ext = os.path.splitext(filename) + if ext == '.pyc': + ext = '.py' + filename = base + ext + descr = filename, None, imp.PY_SOURCE return file, filename, descr class EditorWindow(object): @@ -102,8 +117,8 @@ self.top = top = WindowList.ListedToplevel(root, menu=self.menubar) if flist: self.tkinter_vars = flist.vars - #self.top.instance_dict makes flist.inversedict avalable to - #configDialog.py so it can access all EditorWindow instaces + #self.top.instance_dict makes flist.inversedict available to + #configDialog.py so it can access all EditorWindow instances self.top.instance_dict = flist.inversedict else: self.tkinter_vars = {} # keys: Tkinter event names @@ -136,6 +151,14 @@ if macosxSupport.runningAsOSXApp(): # Command-W on editorwindows doesn't work without this. text.bind('<>', self.close_event) + # Some OS X systems have only one mouse button, + # so use control-click for pulldown menus there. + # (Note, AquaTk defines <2> as the right button if + # present and the Tk Text widget already binds <2>.) + text.bind("",self.right_menu_event) + else: + # Elsewhere, use right-click for pulldown menus. + text.bind("<3>",self.right_menu_event) text.bind("<>", self.cut) text.bind("<>", self.copy) text.bind("<>", self.paste) @@ -154,7 +177,6 @@ text.bind("<>", self.find_selection_event) text.bind("<>", self.replace_event) text.bind("<>", self.goto_line_event) - text.bind("<3>", self.right_menu_event) text.bind("<>",self.smart_backspace_event) text.bind("<>",self.newline_and_indent_event) text.bind("<>",self.smart_indent_event) @@ -300,13 +322,13 @@ return "break" def home_callback(self, event): - if (event.state & 12) != 0 and event.keysym == "Home": - # state&1==shift, state&4==control, state&8==alt - return # ; fall back to class binding - + if (event.state & 4) != 0 and event.keysym == "Home": + # state&4==Control. If , use the Tk binding. 
+ return if self.text.index("iomark") and \ self.text.compare("iomark", "<=", "insert lineend") and \ self.text.compare("insert linestart", "<=", "iomark"): + # In Shell on input line, go to just after prompt insertpt = int(self.text.index("iomark").split(".")[1]) else: line = self.text.get("insert linestart", "insert lineend") @@ -315,30 +337,27 @@ break else: insertpt=len(line) - lineat = int(self.text.index("insert").split('.')[1]) - if insertpt == lineat: insertpt = 0 - dest = "insert linestart+"+str(insertpt)+"c" - if (event.state&1) == 0: - # shift not pressed + # shift was not pressed self.text.tag_remove("sel", "1.0", "end") else: if not self.text.index("sel.first"): - self.text.mark_set("anchor","insert") - + self.text.mark_set("my_anchor", "insert") # there was no previous selection + else: + if self.text.compare(self.text.index("sel.first"), "<", self.text.index("insert")): + self.text.mark_set("my_anchor", "sel.first") # extend back + else: + self.text.mark_set("my_anchor", "sel.last") # extend forward first = self.text.index(dest) - last = self.text.index("anchor") - + last = self.text.index("my_anchor") if self.text.compare(first,">",last): first,last = last,first - self.text.tag_remove("sel", "1.0", "end") self.text.tag_add("sel", first, last) - self.text.mark_set("insert", dest) self.text.see("insert") return "break" @@ -385,7 +404,7 @@ menudict[name] = menu = Menu(mbar, name=name) mbar.add_cascade(label=label, menu=menu, underline=underline) - if macosxSupport.runningAsOSXApp(): + if macosxSupport.isCarbonAquaTk(self.root): # Insert the application menu menudict['application'] = menu = Menu(mbar, name='apple') mbar.add_cascade(label='IDLE', menu=menu) @@ -445,7 +464,11 @@ def python_docs(self, event=None): if sys.platform[:3] == 'win': - os.startfile(self.help_url) + try: + os.startfile(self.help_url) + except WindowsError as why: + tkMessageBox.showerror(title='Document Start Failure', + message=str(why), parent=self.text) else: webbrowser.open(self.help_url) return "break" @@ -740,9 +763,13 @@ "Create a callback with the helpfile value frozen at definition time" def display_extra_help(helpfile=helpfile): if not helpfile.startswith(('www', 'http')): - url = os.path.normpath(helpfile) + helpfile = os.path.normpath(helpfile) if sys.platform[:3] == 'win': - os.startfile(helpfile) + try: + os.startfile(helpfile) + except WindowsError as why: + tkMessageBox.showerror(title='Document Start Failure', + message=str(why), parent=self.text) else: webbrowser.open(helpfile) return display_extra_help @@ -1526,7 +1553,12 @@ def get_accelerator(keydefs, eventname): keylist = keydefs.get(eventname) - if not keylist: + # issue10940: temporary workaround to prevent hang with OS X Cocoa Tk 8.5 + # if not keylist: + if (not keylist) or (macosxSupport.runningAsOSXApp() and eventname in { + "<>", + "<>", + "<>"}): return "" s = keylist[0] s = re.sub(r"-[a-z]\b", lambda m: m.group().upper(), s) diff --git a/lib-python/2.7/idlelib/FileList.py b/lib-python/2.7/idlelib/FileList.py --- a/lib-python/2.7/idlelib/FileList.py +++ b/lib-python/2.7/idlelib/FileList.py @@ -43,7 +43,7 @@ def new(self, filename=None): return self.EditorWindow(self, filename) - def close_all_callback(self, event): + def close_all_callback(self, *args, **kwds): for edit in self.inversedict.keys(): reply = edit.close() if reply == "cancel": diff --git a/lib-python/2.7/idlelib/FormatParagraph.py b/lib-python/2.7/idlelib/FormatParagraph.py --- a/lib-python/2.7/idlelib/FormatParagraph.py +++ 
b/lib-python/2.7/idlelib/FormatParagraph.py @@ -54,7 +54,7 @@ # If the block ends in a \n, we dont want the comment # prefix inserted after it. (Im not sure it makes sense to # reformat a comment block that isnt made of complete - # lines, but whatever!) Can't think of a clean soltution, + # lines, but whatever!) Can't think of a clean solution, # so we hack away block_suffix = "" if not newdata[-1]: diff --git a/lib-python/2.7/idlelib/HISTORY.txt b/lib-python/2.7/idlelib/HISTORY.txt --- a/lib-python/2.7/idlelib/HISTORY.txt +++ b/lib-python/2.7/idlelib/HISTORY.txt @@ -13,7 +13,7 @@ - New tarball released as a result of the 'revitalisation' of the IDLEfork project. -- This release requires python 2.1 or better. Compatability with earlier +- This release requires python 2.1 or better. Compatibility with earlier versions of python (especially ancient ones like 1.5x) is no longer a priority in IDLEfork development. diff --git a/lib-python/2.7/idlelib/IOBinding.py b/lib-python/2.7/idlelib/IOBinding.py --- a/lib-python/2.7/idlelib/IOBinding.py +++ b/lib-python/2.7/idlelib/IOBinding.py @@ -320,17 +320,20 @@ return "yes" message = "Do you want to save %s before closing?" % ( self.filename or "this untitled document") - m = tkMessageBox.Message( - title="Save On Close", - message=message, - icon=tkMessageBox.QUESTION, - type=tkMessageBox.YESNOCANCEL, - master=self.text) - reply = m.show() - if reply == "yes": + confirm = tkMessageBox.askyesnocancel( + title="Save On Close", + message=message, + default=tkMessageBox.YES, + master=self.text) + if confirm: + reply = "yes" self.save(None) if not self.get_saved(): reply = "cancel" + elif confirm is None: + reply = "cancel" + else: + reply = "no" self.text.focus_set() return reply @@ -339,7 +342,7 @@ self.save_as(event) else: if self.writefile(self.filename): - self.set_saved(1) + self.set_saved(True) try: self.editwin.store_file_breaks() except AttributeError: # may be a PyShell @@ -465,15 +468,12 @@ self.text.insert("end-1c", "\n") def print_window(self, event): - m = tkMessageBox.Message( - title="Print", - message="Print to Default Printer", - icon=tkMessageBox.QUESTION, - type=tkMessageBox.OKCANCEL, - default=tkMessageBox.OK, - master=self.text) - reply = m.show() - if reply != tkMessageBox.OK: + confirm = tkMessageBox.askokcancel( + title="Print", + message="Print to Default Printer", + default=tkMessageBox.OK, + master=self.text) + if not confirm: self.text.focus_set() return "break" tempfilename = None @@ -488,8 +488,8 @@ if not self.writefile(tempfilename): os.unlink(tempfilename) return "break" - platform=os.name - printPlatform=1 + platform = os.name + printPlatform = True if platform == 'posix': #posix platform command = idleConf.GetOption('main','General', 'print-command-posix') @@ -497,7 +497,7 @@ elif platform == 'nt': #win32 platform command = idleConf.GetOption('main','General','print-command-win') else: #no printing for this platform - printPlatform=0 + printPlatform = False if printPlatform: #we can try to print for this platform command = command % filename pipe = os.popen(command, "r") @@ -511,7 +511,7 @@ output = "Printing command: %s\n" % repr(command) + output tkMessageBox.showerror("Print status", output, master=self.text) else: #no printing for this platform - message="Printing is not enabled for this platform: %s" % platform + message = "Printing is not enabled for this platform: %s" % platform tkMessageBox.showinfo("Print status", message, master=self.text) if tempfilename: os.unlink(tempfilename) diff --git 
a/lib-python/2.7/idlelib/NEWS.txt b/lib-python/2.7/idlelib/NEWS.txt --- a/lib-python/2.7/idlelib/NEWS.txt +++ b/lib-python/2.7/idlelib/NEWS.txt @@ -1,3 +1,18 @@ +What's New in IDLE 2.7.2? +======================= + +*Release date: 29-May-2011* + +- Issue #6378: Further adjust idle.bat to start associated Python + +- Issue #11896: Save on Close failed despite selecting "Yes" in dialog. + +- toggle failing on Tk 8.5, causing IDLE exits and strange selection + behavior. Issue 4676. Improve selection extension behaviour. + +- toggle non-functional when NumLock set on Windows. Issue 3851. + + What's New in IDLE 2.7? ======================= @@ -21,7 +36,7 @@ - Tk 8.5 Text widget requires 'wordprocessor' tabstyle attr to handle mixed space/tab properly. Issue 5129, patch by Guilherme Polo. - + - Issue #3549: On MacOS the preferences menu was not present diff --git a/lib-python/2.7/idlelib/PyShell.py b/lib-python/2.7/idlelib/PyShell.py --- a/lib-python/2.7/idlelib/PyShell.py +++ b/lib-python/2.7/idlelib/PyShell.py @@ -1432,6 +1432,13 @@ shell.interp.prepend_syspath(script) shell.interp.execfile(script) + # Check for problematic OS X Tk versions and print a warning message + # in the IDLE shell window; this is less intrusive than always opening + # a separate window. + tkversionwarning = macosxSupport.tkVersionWarning(root) + if tkversionwarning: + shell.interp.runcommand(''.join(("print('", tkversionwarning, "')"))) + root.mainloop() root.destroy() diff --git a/lib-python/2.7/idlelib/ScriptBinding.py b/lib-python/2.7/idlelib/ScriptBinding.py --- a/lib-python/2.7/idlelib/ScriptBinding.py +++ b/lib-python/2.7/idlelib/ScriptBinding.py @@ -26,6 +26,7 @@ from idlelib import PyShell from idlelib.configHandler import idleConf +from idlelib import macosxSupport IDENTCHARS = string.ascii_letters + string.digits + "_" @@ -53,6 +54,9 @@ self.flist = self.editwin.flist self.root = self.editwin.root + if macosxSupport.runningAsOSXApp(): + self.editwin.text_frame.bind('<>', self._run_module_event) + def check_module_event(self, event): filename = self.getfilename() if not filename: @@ -166,6 +170,19 @@ interp.runcode(code) return 'break' + if macosxSupport.runningAsOSXApp(): + # Tk-Cocoa in MacOSX is broken until at least + # Tk 8.5.9, and without this rather + # crude workaround IDLE would hang when a user + # tries to run a module using the keyboard shortcut + # (the menu item works fine). + _run_module_event = run_module_event + + def run_module_event(self, event): + self.editwin.text_frame.after(200, + lambda: self.editwin.text_frame.event_generate('<>')) + return 'break' + def getfilename(self): """Get source filename. If not saved, offer to save (or create) file @@ -184,9 +201,9 @@ if autosave and filename: self.editwin.io.save(None) else: - reply = self.ask_save_dialog() + confirm = self.ask_save_dialog() self.editwin.text.focus_set() - if reply == "ok": + if confirm: self.editwin.io.save(None) filename = self.editwin.io.filename else: @@ -195,13 +212,11 @@ def ask_save_dialog(self): msg = "Source Must Be Saved\n" + 5*' ' + "OK to Save?" - mb = tkMessageBox.Message(title="Save Before Run or Check", - message=msg, - icon=tkMessageBox.QUESTION, - type=tkMessageBox.OKCANCEL, - default=tkMessageBox.OK, - master=self.editwin.text) - return mb.show() + confirm = tkMessageBox.askokcancel(title="Save Before Run or Check", + message=msg, + default=tkMessageBox.OK, + master=self.editwin.text) + return confirm def errorbox(self, title, message): # XXX This should really be a function of EditorWindow... 
diff --git a/lib-python/2.7/idlelib/config-keys.def b/lib-python/2.7/idlelib/config-keys.def --- a/lib-python/2.7/idlelib/config-keys.def +++ b/lib-python/2.7/idlelib/config-keys.def @@ -176,7 +176,7 @@ redo = close-window = restart-shell = -save-window-as-file = +save-window-as-file = close-all-windows = view-restart = tabify-region = @@ -208,7 +208,7 @@ open-module = find-selection = python-context-help = -save-copy-of-window-as-file = +save-copy-of-window-as-file = open-window-from-file = python-docs = diff --git a/lib-python/2.7/idlelib/extend.txt b/lib-python/2.7/idlelib/extend.txt --- a/lib-python/2.7/idlelib/extend.txt +++ b/lib-python/2.7/idlelib/extend.txt @@ -18,7 +18,7 @@ An IDLE extension class is instantiated with a single argument, `editwin', an EditorWindow instance. The extension cannot assume much -about this argument, but it is guarateed to have the following instance +about this argument, but it is guaranteed to have the following instance variables: text a Text instance (a widget) diff --git a/lib-python/2.7/idlelib/idle.bat b/lib-python/2.7/idlelib/idle.bat --- a/lib-python/2.7/idlelib/idle.bat +++ b/lib-python/2.7/idlelib/idle.bat @@ -1,4 +1,4 @@ @echo off rem Start IDLE using the appropriate Python interpreter set CURRDIR=%~dp0 -start "%CURRDIR%..\..\pythonw.exe" "%CURRDIR%idle.pyw" %1 %2 %3 %4 %5 %6 %7 %8 %9 +start "IDLE" "%CURRDIR%..\..\pythonw.exe" "%CURRDIR%idle.pyw" %1 %2 %3 %4 %5 %6 %7 %8 %9 diff --git a/lib-python/2.7/idlelib/idlever.py b/lib-python/2.7/idlelib/idlever.py --- a/lib-python/2.7/idlelib/idlever.py +++ b/lib-python/2.7/idlelib/idlever.py @@ -1,1 +1,1 @@ -IDLE_VERSION = "2.7.1" +IDLE_VERSION = "2.7.2" diff --git a/lib-python/2.7/idlelib/macosxSupport.py b/lib-python/2.7/idlelib/macosxSupport.py --- a/lib-python/2.7/idlelib/macosxSupport.py +++ b/lib-python/2.7/idlelib/macosxSupport.py @@ -4,6 +4,7 @@ """ import sys import Tkinter +from os import path _appbundle = None @@ -19,10 +20,41 @@ _appbundle = (sys.platform == 'darwin' and '.app' in sys.executable) return _appbundle +_carbonaquatk = None + +def isCarbonAquaTk(root): + """ + Returns True if IDLE is using a Carbon Aqua Tk (instead of the + newer Cocoa Aqua Tk). + """ + global _carbonaquatk + if _carbonaquatk is None: + _carbonaquatk = (runningAsOSXApp() and + 'aqua' in root.tk.call('tk', 'windowingsystem') and + 'AppKit' not in root.tk.call('winfo', 'server', '.')) + return _carbonaquatk + +def tkVersionWarning(root): + """ + Returns a string warning message if the Tk version in use appears to + be one known to cause problems with IDLE. The Apple Cocoa-based Tk 8.5 + that was shipped with Mac OS X 10.6. + """ + + if (runningAsOSXApp() and + ('AppKit' in root.tk.call('winfo', 'server', '.')) and + (root.tk.call('info', 'patchlevel') == '8.5.7') ): + return (r"WARNING: The version of Tcl/Tk (8.5.7) in use may" + r" be unstable.\n" + r"Visit http://www.python.org/download/mac/tcltk/" + r" for current information.") + else: + return False + def addOpenEventSupport(root, flist): """ - This ensures that the application will respont to open AppleEvents, which - makes is feaseable to use IDLE as the default application for python files. + This ensures that the application will respond to open AppleEvents, which + makes is feasible to use IDLE as the default application for python files. 
""" def doOpenFile(*args): for fn in args: @@ -79,9 +111,6 @@ WindowList.add_windows_to_menu(menu) WindowList.register_callback(postwindowsmenu) - menudict['application'] = menu = Menu(menubar, name='apple') - menubar.add_cascade(label='IDLE', menu=menu) - def about_dialog(event=None): from idlelib import aboutDialog aboutDialog.AboutDialog(root, 'About IDLE') @@ -91,41 +120,45 @@ root.instance_dict = flist.inversedict configDialog.ConfigDialog(root, 'Settings') + def help_dialog(event=None): + from idlelib import textView + fn = path.join(path.abspath(path.dirname(__file__)), 'help.txt') + textView.view_file(root, 'Help', fn) root.bind('<>', about_dialog) root.bind('<>', config_dialog) + root.createcommand('::tk::mac::ShowPreferences', config_dialog) if flist: root.bind('<>', flist.close_all_callback) + # The binding above doesn't reliably work on all versions of Tk + # on MacOSX. Adding command definition below does seem to do the + # right thing for now. + root.createcommand('exit', flist.close_all_callback) - ###check if Tk version >= 8.4.14; if so, use hard-coded showprefs binding - tkversion = root.tk.eval('info patchlevel') - # Note: we cannot check if the string tkversion >= '8.4.14', because - # the string '8.4.7' is greater than the string '8.4.14'. - if tuple(map(int, tkversion.split('.'))) >= (8, 4, 14): - Bindings.menudefs[0] = ('application', [ + if isCarbonAquaTk(root): + # for Carbon AquaTk, replace the default Tk apple menu + menudict['application'] = menu = Menu(menubar, name='apple') + menubar.add_cascade(label='IDLE', menu=menu) + Bindings.menudefs.insert(0, + ('application', [ ('About IDLE', '<>'), - None, - ]) - root.createcommand('::tk::mac::ShowPreferences', config_dialog) + None, + ])) + tkversion = root.tk.eval('info patchlevel') + if tuple(map(int, tkversion.split('.'))) < (8, 4, 14): + # for earlier AquaTk versions, supply a Preferences menu item + Bindings.menudefs[0][1].append( + ('_Preferences....', '<>'), + ) else: - for mname, entrylist in Bindings.menudefs: - menu = menudict.get(mname) - if not menu: - continue - else: - for entry in entrylist: - if not entry: - menu.add_separator() - else: - label, eventname = entry - underline, label = prepstr(label) - accelerator = get_accelerator(Bindings.default_keydefs, - eventname) - def command(text=root, eventname=eventname): - text.event_generate(eventname) - menu.add_command(label=label, underline=underline, - command=command, accelerator=accelerator) + # assume Cocoa AquaTk + # replace default About dialog with About IDLE one + root.createcommand('tkAboutDialog', about_dialog) + # replace default "Help" item in Help menu + root.createcommand('::tk::mac::ShowHelp', help_dialog) + # remove redundant "IDLE Help" from menu + del Bindings.menudefs[-1][1][0] def setupApp(root, flist): """ diff --git a/lib-python/2.7/imaplib.py b/lib-python/2.7/imaplib.py --- a/lib-python/2.7/imaplib.py +++ b/lib-python/2.7/imaplib.py @@ -1158,28 +1158,17 @@ self.port = port self.sock = socket.create_connection((host, port)) self.sslobj = ssl.wrap_socket(self.sock, self.keyfile, self.certfile) + self.file = self.sslobj.makefile('rb') def read(self, size): """Read 'size' bytes from remote.""" - # sslobj.read() sometimes returns < size bytes - chunks = [] - read = 0 - while read < size: - data = self.sslobj.read(min(size-read, 16384)) - read += len(data) - chunks.append(data) - - return ''.join(chunks) + return self.file.read(size) def readline(self): """Read line from remote.""" - line = [] - while 1: - char = self.sslobj.read(1) - 
line.append(char) - if char in ("\n", ""): return ''.join(line) + return self.file.readline() def send(self, data): @@ -1195,6 +1184,7 @@ def shutdown(self): """Close I/O established in "open".""" + self.file.close() self.sock.close() @@ -1321,9 +1311,10 @@ 'Jul': 7, 'Aug': 8, 'Sep': 9, 'Oct': 10, 'Nov': 11, 'Dec': 12} def Internaldate2tuple(resp): - """Convert IMAP4 INTERNALDATE to UT. + """Parse an IMAP4 INTERNALDATE string. - Returns Python time module tuple. + Return corresponding local time. The return value is a + time.struct_time instance or None if the string has wrong format. """ mo = InternalDate.match(resp) @@ -1390,9 +1381,14 @@ def Time2Internaldate(date_time): - """Convert 'date_time' to IMAP4 INTERNALDATE representation. + """Convert date_time to IMAP4 INTERNALDATE representation. - Return string in form: '"DD-Mmm-YYYY HH:MM:SS +HHMM"' + Return string in form: '"DD-Mmm-YYYY HH:MM:SS +HHMM"'. The + date_time argument can be a number (int or float) representing + seconds since epoch (as returned by time.time()), a 9-tuple + representing local time (as returned by time.localtime()), or a + double-quoted string. In the last case, it is assumed to already + be in the correct format. """ if isinstance(date_time, (int, float)): diff --git a/lib-python/2.7/inspect.py b/lib-python/2.7/inspect.py --- a/lib-python/2.7/inspect.py +++ b/lib-python/2.7/inspect.py @@ -943,8 +943,14 @@ f_name, 'at most' if defaults else 'exactly', num_args, 'arguments' if num_args > 1 else 'argument', num_total)) elif num_args == 0 and num_total: - raise TypeError('%s() takes no arguments (%d given)' % - (f_name, num_total)) + if varkw: + if num_pos: + # XXX: We should use num_pos, but Python also uses num_total: + raise TypeError('%s() takes exactly 0 arguments ' + '(%d given)' % (f_name, num_total)) + else: + raise TypeError('%s() takes no arguments (%d given)' % + (f_name, num_total)) for arg in args: if isinstance(arg, str) and arg in named: if is_assigned(arg): diff --git a/lib-python/2.7/json/decoder.py b/lib-python/2.7/json/decoder.py --- a/lib-python/2.7/json/decoder.py +++ b/lib-python/2.7/json/decoder.py @@ -4,7 +4,7 @@ import sys import struct -from json.scanner import make_scanner +from json import scanner try: from _json import scanstring as c_scanstring except ImportError: @@ -161,6 +161,12 @@ nextchar = s[end:end + 1] # Trivial empty object if nextchar == '}': + if object_pairs_hook is not None: + result = object_pairs_hook(pairs) + return result, end + pairs = {} + if object_hook is not None: + pairs = object_hook(pairs) return pairs, end + 1 elif nextchar != '"': raise ValueError(errmsg("Expecting property name", s, end)) @@ -350,7 +356,7 @@ self.parse_object = JSONObject self.parse_array = JSONArray self.parse_string = scanstring - self.scan_once = make_scanner(self) + self.scan_once = scanner.make_scanner(self) def decode(self, s, _w=WHITESPACE.match): """Return the Python representation of ``s`` (a ``str`` or ``unicode`` diff --git a/lib-python/2.7/json/encoder.py b/lib-python/2.7/json/encoder.py --- a/lib-python/2.7/json/encoder.py +++ b/lib-python/2.7/json/encoder.py @@ -251,7 +251,7 @@ if (_one_shot and c_make_encoder is not None - and not self.indent and not self.sort_keys): + and self.indent is None and not self.sort_keys): _iterencode = c_make_encoder( markers, self.default, _encoder, self.indent, self.key_separator, self.item_separator, self.sort_keys, diff --git a/lib-python/2.7/json/tests/__init__.py b/lib-python/2.7/json/tests/__init__.py --- 
a/lib-python/2.7/json/tests/__init__.py +++ b/lib-python/2.7/json/tests/__init__.py @@ -1,7 +1,46 @@ import os import sys +import json +import doctest import unittest -import doctest + +from test import test_support + +# import json with and without accelerations +cjson = test_support.import_fresh_module('json', fresh=['_json']) +pyjson = test_support.import_fresh_module('json', blocked=['_json']) + +# create two base classes that will be used by the other tests +class PyTest(unittest.TestCase): + json = pyjson + loads = staticmethod(pyjson.loads) + dumps = staticmethod(pyjson.dumps) + + at unittest.skipUnless(cjson, 'requires _json') +class CTest(unittest.TestCase): + if cjson is not None: + json = cjson + loads = staticmethod(cjson.loads) + dumps = staticmethod(cjson.dumps) + +# test PyTest and CTest checking if the functions come from the right module +class TestPyTest(PyTest): + def test_pyjson(self): + self.assertEqual(self.json.scanner.make_scanner.__module__, + 'json.scanner') + self.assertEqual(self.json.decoder.scanstring.__module__, + 'json.decoder') + self.assertEqual(self.json.encoder.encode_basestring_ascii.__module__, + 'json.encoder') + +class TestCTest(CTest): + def test_cjson(self): + self.assertEqual(self.json.scanner.make_scanner.__module__, '_json') + self.assertEqual(self.json.decoder.scanstring.__module__, '_json') + self.assertEqual(self.json.encoder.c_make_encoder.__module__, '_json') + self.assertEqual(self.json.encoder.encode_basestring_ascii.__module__, + '_json') + here = os.path.dirname(__file__) @@ -17,12 +56,11 @@ return suite def additional_tests(): - import json - import json.encoder - import json.decoder suite = unittest.TestSuite() for mod in (json, json.encoder, json.decoder): suite.addTest(doctest.DocTestSuite(mod)) + suite.addTest(TestPyTest('test_pyjson')) + suite.addTest(TestCTest('test_cjson')) return suite def main(): diff --git a/lib-python/2.7/json/tests/test_check_circular.py b/lib-python/2.7/json/tests/test_check_circular.py --- a/lib-python/2.7/json/tests/test_check_circular.py +++ b/lib-python/2.7/json/tests/test_check_circular.py @@ -1,30 +1,34 @@ -from unittest import TestCase -import json +from json.tests import PyTest, CTest + def default_iterable(obj): return list(obj) -class TestCheckCircular(TestCase): +class TestCheckCircular(object): def test_circular_dict(self): dct = {} dct['a'] = dct - self.assertRaises(ValueError, json.dumps, dct) + self.assertRaises(ValueError, self.dumps, dct) def test_circular_list(self): lst = [] lst.append(lst) - self.assertRaises(ValueError, json.dumps, lst) + self.assertRaises(ValueError, self.dumps, lst) def test_circular_composite(self): dct2 = {} dct2['a'] = [] dct2['a'].append(dct2) - self.assertRaises(ValueError, json.dumps, dct2) + self.assertRaises(ValueError, self.dumps, dct2) def test_circular_default(self): - json.dumps([set()], default=default_iterable) - self.assertRaises(TypeError, json.dumps, [set()]) + self.dumps([set()], default=default_iterable) + self.assertRaises(TypeError, self.dumps, [set()]) def test_circular_off_default(self): - json.dumps([set()], default=default_iterable, check_circular=False) - self.assertRaises(TypeError, json.dumps, [set()], check_circular=False) + self.dumps([set()], default=default_iterable, check_circular=False) + self.assertRaises(TypeError, self.dumps, [set()], check_circular=False) + + +class TestPyCheckCircular(TestCheckCircular, PyTest): pass +class TestCCheckCircular(TestCheckCircular, CTest): pass diff --git a/lib-python/2.7/json/tests/test_decode.py 
b/lib-python/2.7/json/tests/test_decode.py --- a/lib-python/2.7/json/tests/test_decode.py +++ b/lib-python/2.7/json/tests/test_decode.py @@ -1,18 +1,17 @@ import decimal -from unittest import TestCase from StringIO import StringIO +from collections import OrderedDict +from json.tests import PyTest, CTest -import json -from collections import OrderedDict -class TestDecode(TestCase): +class TestDecode(object): def test_decimal(self): - rval = json.loads('1.1', parse_float=decimal.Decimal) + rval = self.loads('1.1', parse_float=decimal.Decimal) self.assertTrue(isinstance(rval, decimal.Decimal)) self.assertEqual(rval, decimal.Decimal('1.1')) def test_float(self): - rval = json.loads('1', parse_int=float) + rval = self.loads('1', parse_int=float) self.assertTrue(isinstance(rval, float)) self.assertEqual(rval, 1.0) @@ -20,22 +19,32 @@ # Several optimizations were made that skip over calls to # the whitespace regex, so this test is designed to try and # exercise the uncommon cases. The array cases are already covered. - rval = json.loads('{ "key" : "value" , "k":"v" }') + rval = self.loads('{ "key" : "value" , "k":"v" }') self.assertEqual(rval, {"key":"value", "k":"v"}) + def test_empty_objects(self): + self.assertEqual(self.loads('{}'), {}) + self.assertEqual(self.loads('[]'), []) + self.assertEqual(self.loads('""'), u"") + self.assertIsInstance(self.loads('""'), unicode) + def test_object_pairs_hook(self): s = '{"xkd":1, "kcw":2, "art":3, "hxm":4, "qrt":5, "pad":6, "hoy":7}' p = [("xkd", 1), ("kcw", 2), ("art", 3), ("hxm", 4), ("qrt", 5), ("pad", 6), ("hoy", 7)] - self.assertEqual(json.loads(s), eval(s)) - self.assertEqual(json.loads(s, object_pairs_hook=lambda x: x), p) - self.assertEqual(json.load(StringIO(s), - object_pairs_hook=lambda x: x), p) - od = json.loads(s, object_pairs_hook=OrderedDict) + self.assertEqual(self.loads(s), eval(s)) + self.assertEqual(self.loads(s, object_pairs_hook=lambda x: x), p) + self.assertEqual(self.json.load(StringIO(s), + object_pairs_hook=lambda x: x), p) + od = self.loads(s, object_pairs_hook=OrderedDict) self.assertEqual(od, OrderedDict(p)) self.assertEqual(type(od), OrderedDict) # the object_pairs_hook takes priority over the object_hook - self.assertEqual(json.loads(s, + self.assertEqual(self.loads(s, object_pairs_hook=OrderedDict, object_hook=lambda x: None), OrderedDict(p)) + + +class TestPyDecode(TestDecode, PyTest): pass +class TestCDecode(TestDecode, CTest): pass diff --git a/lib-python/2.7/json/tests/test_default.py b/lib-python/2.7/json/tests/test_default.py --- a/lib-python/2.7/json/tests/test_default.py +++ b/lib-python/2.7/json/tests/test_default.py @@ -1,9 +1,12 @@ -from unittest import TestCase +from json.tests import PyTest, CTest -import json -class TestDefault(TestCase): +class TestDefault(object): def test_default(self): self.assertEqual( - json.dumps(type, default=repr), - json.dumps(repr(type))) + self.dumps(type, default=repr), + self.dumps(repr(type))) + + +class TestPyDefault(TestDefault, PyTest): pass +class TestCDefault(TestDefault, CTest): pass diff --git a/lib-python/2.7/json/tests/test_dump.py b/lib-python/2.7/json/tests/test_dump.py --- a/lib-python/2.7/json/tests/test_dump.py +++ b/lib-python/2.7/json/tests/test_dump.py @@ -1,21 +1,23 @@ -from unittest import TestCase from cStringIO import StringIO +from json.tests import PyTest, CTest -import json -class TestDump(TestCase): +class TestDump(object): def test_dump(self): sio = StringIO() - json.dump({}, sio) + self.json.dump({}, sio) self.assertEqual(sio.getvalue(), '{}') def 
test_dumps(self): - self.assertEqual(json.dumps({}), '{}') + self.assertEqual(self.dumps({}), '{}') def test_encode_truefalse(self): - self.assertEqual(json.dumps( + self.assertEqual(self.dumps( {True: False, False: True}, sort_keys=True), '{"false": true, "true": false}') - self.assertEqual(json.dumps( + self.assertEqual(self.dumps( {2: 3.0, 4.0: 5L, False: 1, 6L: True}, sort_keys=True), '{"false": 1, "2": 3.0, "4.0": 5, "6": true}') + +class TestPyDump(TestDump, PyTest): pass +class TestCDump(TestDump, CTest): pass diff --git a/lib-python/2.7/json/tests/test_encode_basestring_ascii.py b/lib-python/2.7/json/tests/test_encode_basestring_ascii.py --- a/lib-python/2.7/json/tests/test_encode_basestring_ascii.py +++ b/lib-python/2.7/json/tests/test_encode_basestring_ascii.py @@ -1,8 +1,6 @@ -from unittest import TestCase +from collections import OrderedDict +from json.tests import PyTest, CTest -import json.encoder -from json import dumps -from collections import OrderedDict CASES = [ (u'/\\"\ucafe\ubabe\uab98\ufcde\ubcda\uef4a\x08\x0c\n\r\t`1~!@#$%^&*()_+-=[]{}|;:\',./<>?', '"/\\\\\\"\\ucafe\\ubabe\\uab98\\ufcde\\ubcda\\uef4a\\b\\f\\n\\r\\t`1~!@#$%^&*()_+-=[]{}|;:\',./<>?"'), @@ -23,19 +21,11 @@ (u'\u0123\u4567\u89ab\ucdef\uabcd\uef4a', '"\\u0123\\u4567\\u89ab\\ucdef\\uabcd\\uef4a"'), ] -class TestEncodeBaseStringAscii(TestCase): - def test_py_encode_basestring_ascii(self): - self._test_encode_basestring_ascii(json.encoder.py_encode_basestring_ascii) - - def test_c_encode_basestring_ascii(self): - if not json.encoder.c_encode_basestring_ascii: - return - self._test_encode_basestring_ascii(json.encoder.c_encode_basestring_ascii) - - def _test_encode_basestring_ascii(self, encode_basestring_ascii): - fname = encode_basestring_ascii.__name__ +class TestEncodeBasestringAscii(object): + def test_encode_basestring_ascii(self): + fname = self.json.encoder.encode_basestring_ascii.__name__ for input_string, expect in CASES: - result = encode_basestring_ascii(input_string) + result = self.json.encoder.encode_basestring_ascii(input_string) self.assertEqual(result, expect, '{0!r} != {1!r} for {2}({3!r})'.format( result, expect, fname, input_string)) @@ -43,5 +33,9 @@ def test_ordered_dict(self): # See issue 6105 items = [('one', 1), ('two', 2), ('three', 3), ('four', 4), ('five', 5)] - s = json.dumps(OrderedDict(items)) + s = self.dumps(OrderedDict(items)) self.assertEqual(s, '{"one": 1, "two": 2, "three": 3, "four": 4, "five": 5}') + + +class TestPyEncodeBasestringAscii(TestEncodeBasestringAscii, PyTest): pass +class TestCEncodeBasestringAscii(TestEncodeBasestringAscii, CTest): pass diff --git a/lib-python/2.7/json/tests/test_fail.py b/lib-python/2.7/json/tests/test_fail.py --- a/lib-python/2.7/json/tests/test_fail.py +++ b/lib-python/2.7/json/tests/test_fail.py @@ -1,6 +1,4 @@ -from unittest import TestCase - -import json +from json.tests import PyTest, CTest # Fri Dec 30 18:57:26 2005 JSONDOCS = [ @@ -61,15 +59,15 @@ 18: "spec doesn't specify any nesting limitations", } -class TestFail(TestCase): +class TestFail(object): def test_failures(self): for idx, doc in enumerate(JSONDOCS): idx = idx + 1 if idx in SKIPS: - json.loads(doc) + self.loads(doc) continue try: - json.loads(doc) + self.loads(doc) except ValueError: pass else: @@ -79,7 +77,11 @@ data = {'a' : 1, (1, 2) : 2} #This is for c encoder - self.assertRaises(TypeError, json.dumps, data) + self.assertRaises(TypeError, self.dumps, data) #This is for python encoder - self.assertRaises(TypeError, json.dumps, data, indent=True) + 
self.assertRaises(TypeError, self.dumps, data, indent=True) + + +class TestPyFail(TestFail, PyTest): pass +class TestCFail(TestFail, CTest): pass diff --git a/lib-python/2.7/json/tests/test_float.py b/lib-python/2.7/json/tests/test_float.py --- a/lib-python/2.7/json/tests/test_float.py +++ b/lib-python/2.7/json/tests/test_float.py @@ -1,19 +1,22 @@ import math -from unittest import TestCase +from json.tests import PyTest, CTest -import json -class TestFloat(TestCase): +class TestFloat(object): def test_floats(self): for num in [1617161771.7650001, math.pi, math.pi**100, math.pi**-100, 3.1]: - self.assertEqual(float(json.dumps(num)), num) - self.assertEqual(json.loads(json.dumps(num)), num) - self.assertEqual(json.loads(unicode(json.dumps(num))), num) + self.assertEqual(float(self.dumps(num)), num) + self.assertEqual(self.loads(self.dumps(num)), num) + self.assertEqual(self.loads(unicode(self.dumps(num))), num) def test_ints(self): for num in [1, 1L, 1<<32, 1<<64]: - self.assertEqual(json.dumps(num), str(num)) - self.assertEqual(int(json.dumps(num)), num) - self.assertEqual(json.loads(json.dumps(num)), num) - self.assertEqual(json.loads(unicode(json.dumps(num))), num) + self.assertEqual(self.dumps(num), str(num)) + self.assertEqual(int(self.dumps(num)), num) + self.assertEqual(self.loads(self.dumps(num)), num) + self.assertEqual(self.loads(unicode(self.dumps(num))), num) + + +class TestPyFloat(TestFloat, PyTest): pass +class TestCFloat(TestFloat, CTest): pass diff --git a/lib-python/2.7/json/tests/test_indent.py b/lib-python/2.7/json/tests/test_indent.py --- a/lib-python/2.7/json/tests/test_indent.py +++ b/lib-python/2.7/json/tests/test_indent.py @@ -1,9 +1,9 @@ -from unittest import TestCase +import textwrap +from StringIO import StringIO +from json.tests import PyTest, CTest -import json -import textwrap -class TestIndent(TestCase): +class TestIndent(object): def test_indent(self): h = [['blorpie'], ['whoops'], [], 'd-shtaeou', 'd-nthiouh', 'i-vhbjkhnth', {'nifty': 87}, {'field': 'yes', 'morefield': False} ] @@ -30,12 +30,31 @@ ]""") - d1 = json.dumps(h) - d2 = json.dumps(h, indent=2, sort_keys=True, separators=(',', ': ')) + d1 = self.dumps(h) + d2 = self.dumps(h, indent=2, sort_keys=True, separators=(',', ': ')) - h1 = json.loads(d1) - h2 = json.loads(d2) + h1 = self.loads(d1) + h2 = self.loads(d2) self.assertEqual(h1, h) self.assertEqual(h2, h) self.assertEqual(d2, expect) + + def test_indent0(self): + h = {3: 1} + def check(indent, expected): + d1 = self.dumps(h, indent=indent) + self.assertEqual(d1, expected) + + sio = StringIO() + self.json.dump(h, sio, indent=indent) + self.assertEqual(sio.getvalue(), expected) + + # indent=0 should emit newlines + check(0, '{\n"3": 1\n}') + # indent=None is more compact + check(None, '{"3": 1}') + + +class TestPyIndent(TestIndent, PyTest): pass +class TestCIndent(TestIndent, CTest): pass diff --git a/lib-python/2.7/json/tests/test_pass1.py b/lib-python/2.7/json/tests/test_pass1.py --- a/lib-python/2.7/json/tests/test_pass1.py +++ b/lib-python/2.7/json/tests/test_pass1.py @@ -1,6 +1,5 @@ -from unittest import TestCase +from json.tests import PyTest, CTest -import json # from http://json.org/JSON_checker/test/pass1.json JSON = r''' @@ -62,15 +61,19 @@ ,"rosebud"] ''' -class TestPass1(TestCase): +class TestPass1(object): def test_parse(self): # test in/out equivalence and parsing - res = json.loads(JSON) - out = json.dumps(res) - self.assertEqual(res, json.loads(out)) + res = self.loads(JSON) + out = self.dumps(res) + self.assertEqual(res, 
self.loads(out)) try: - json.dumps(res, allow_nan=False) + self.dumps(res, allow_nan=False) except ValueError: pass else: self.fail("23456789012E666 should be out of range") + + +class TestPyPass1(TestPass1, PyTest): pass +class TestCPass1(TestPass1, CTest): pass diff --git a/lib-python/2.7/json/tests/test_pass2.py b/lib-python/2.7/json/tests/test_pass2.py --- a/lib-python/2.7/json/tests/test_pass2.py +++ b/lib-python/2.7/json/tests/test_pass2.py @@ -1,14 +1,18 @@ -from unittest import TestCase -import json +from json.tests import PyTest, CTest + # from http://json.org/JSON_checker/test/pass2.json JSON = r''' [[[[[[[[[[[[[[[[[[["Not too deep"]]]]]]]]]]]]]]]]]]] ''' -class TestPass2(TestCase): +class TestPass2(object): def test_parse(self): # test in/out equivalence and parsing - res = json.loads(JSON) - out = json.dumps(res) - self.assertEqual(res, json.loads(out)) + res = self.loads(JSON) + out = self.dumps(res) + self.assertEqual(res, self.loads(out)) + + +class TestPyPass2(TestPass2, PyTest): pass +class TestCPass2(TestPass2, CTest): pass diff --git a/lib-python/2.7/json/tests/test_pass3.py b/lib-python/2.7/json/tests/test_pass3.py --- a/lib-python/2.7/json/tests/test_pass3.py +++ b/lib-python/2.7/json/tests/test_pass3.py @@ -1,6 +1,5 @@ -from unittest import TestCase +from json.tests import PyTest, CTest -import json # from http://json.org/JSON_checker/test/pass3.json JSON = r''' @@ -12,9 +11,14 @@ } ''' -class TestPass3(TestCase): + +class TestPass3(object): def test_parse(self): # test in/out equivalence and parsing - res = json.loads(JSON) - out = json.dumps(res) - self.assertEqual(res, json.loads(out)) + res = self.loads(JSON) + out = self.dumps(res) + self.assertEqual(res, self.loads(out)) + + +class TestPyPass3(TestPass3, PyTest): pass +class TestCPass3(TestPass3, CTest): pass diff --git a/lib-python/2.7/json/tests/test_recursion.py b/lib-python/2.7/json/tests/test_recursion.py --- a/lib-python/2.7/json/tests/test_recursion.py +++ b/lib-python/2.7/json/tests/test_recursion.py @@ -1,28 +1,16 @@ -from unittest import TestCase +from json.tests import PyTest, CTest -import json class JSONTestObject: pass -class RecursiveJSONEncoder(json.JSONEncoder): - recurse = False - def default(self, o): - if o is JSONTestObject: - if self.recurse: - return [JSONTestObject] - else: - return 'JSONTestObject' - return json.JSONEncoder.default(o) - - -class TestRecursion(TestCase): +class TestRecursion(object): def test_listrecursion(self): x = [] x.append(x) try: - json.dumps(x) + self.dumps(x) except ValueError: pass else: @@ -31,7 +19,7 @@ y = [x] x.append(y) try: - json.dumps(x) + self.dumps(x) except ValueError: pass else: @@ -39,13 +27,13 @@ y = [] x = [y, y] # ensure that the marker is cleared - json.dumps(x) + self.dumps(x) def test_dictrecursion(self): x = {} x["test"] = x try: - json.dumps(x) + self.dumps(x) except ValueError: pass else: @@ -53,9 +41,19 @@ x = {} y = {"a": x, "b": x} # ensure that the marker is cleared - json.dumps(x) + self.dumps(x) def test_defaultrecursion(self): + class RecursiveJSONEncoder(self.json.JSONEncoder): + recurse = False + def default(self, o): + if o is JSONTestObject: + if self.recurse: + return [JSONTestObject] + else: + return 'JSONTestObject' + return pyjson.JSONEncoder.default(o) + enc = RecursiveJSONEncoder() self.assertEqual(enc.encode(JSONTestObject), '"JSONTestObject"') enc.recurse = True @@ -65,3 +63,46 @@ pass else: self.fail("didn't raise ValueError on default recursion") + + + def test_highly_nested_objects_decoding(self): + # test that loading 
highly-nested objects doesn't segfault when C + # accelerations are used. See #12017 + # str + with self.assertRaises(RuntimeError): + self.loads('{"a":' * 100000 + '1' + '}' * 100000) + with self.assertRaises(RuntimeError): + self.loads('{"a":' * 100000 + '[1]' + '}' * 100000) + with self.assertRaises(RuntimeError): + self.loads('[' * 100000 + '1' + ']' * 100000) + # unicode + with self.assertRaises(RuntimeError): + self.loads(u'{"a":' * 100000 + u'1' + u'}' * 100000) + with self.assertRaises(RuntimeError): + self.loads(u'{"a":' * 100000 + u'[1]' + u'}' * 100000) + with self.assertRaises(RuntimeError): + self.loads(u'[' * 100000 + u'1' + u']' * 100000) + + def test_highly_nested_objects_encoding(self): + # See #12051 + l, d = [], {} + for x in xrange(100000): + l, d = [l], {'k':d} + with self.assertRaises(RuntimeError): + self.dumps(l) + with self.assertRaises(RuntimeError): + self.dumps(d) + + def test_endless_recursion(self): + # See #12051 + class EndlessJSONEncoder(self.json.JSONEncoder): + def default(self, o): + """If check_circular is False, this will keep adding another list.""" + return [o] + + with self.assertRaises(RuntimeError): + EndlessJSONEncoder(check_circular=False).encode(5j) + + +class TestPyRecursion(TestRecursion, PyTest): pass +class TestCRecursion(TestRecursion, CTest): pass diff --git a/lib-python/2.7/json/tests/test_scanstring.py b/lib-python/2.7/json/tests/test_scanstring.py --- a/lib-python/2.7/json/tests/test_scanstring.py +++ b/lib-python/2.7/json/tests/test_scanstring.py @@ -1,18 +1,10 @@ import sys -import decimal -from unittest import TestCase +from json.tests import PyTest, CTest -import json -import json.decoder -class TestScanString(TestCase): - def test_py_scanstring(self): - self._test_scanstring(json.decoder.py_scanstring) - - def test_c_scanstring(self): - self._test_scanstring(json.decoder.c_scanstring) - - def _test_scanstring(self, scanstring): +class TestScanstring(object): + def test_scanstring(self): + scanstring = self.json.decoder.scanstring self.assertEqual( scanstring('"z\\ud834\\udd20x"', 1, None, True), (u'z\U0001d120x', 16)) @@ -103,10 +95,15 @@ (u'Bad value', 12)) def test_issue3623(self): - self.assertRaises(ValueError, json.decoder.scanstring, b"xxx", 1, + self.assertRaises(ValueError, self.json.decoder.scanstring, b"xxx", 1, "xxx") self.assertRaises(UnicodeDecodeError, - json.encoder.encode_basestring_ascii, b"xx\xff") + self.json.encoder.encode_basestring_ascii, b"xx\xff") def test_overflow(self): - self.assertRaises(OverflowError, json.decoder.scanstring, b"xxx", sys.maxsize+1) + with self.assertRaises(OverflowError): + self.json.decoder.scanstring(b"xxx", sys.maxsize+1) + + +class TestPyScanstring(TestScanstring, PyTest): pass +class TestCScanstring(TestScanstring, CTest): pass diff --git a/lib-python/2.7/json/tests/test_separators.py b/lib-python/2.7/json/tests/test_separators.py --- a/lib-python/2.7/json/tests/test_separators.py +++ b/lib-python/2.7/json/tests/test_separators.py @@ -1,10 +1,8 @@ import textwrap -from unittest import TestCase +from json.tests import PyTest, CTest -import json - -class TestSeparators(TestCase): +class TestSeparators(object): def test_separators(self): h = [['blorpie'], ['whoops'], [], 'd-shtaeou', 'd-nthiouh', 'i-vhbjkhnth', {'nifty': 87}, {'field': 'yes', 'morefield': False} ] @@ -31,12 +29,16 @@ ]""") - d1 = json.dumps(h) - d2 = json.dumps(h, indent=2, sort_keys=True, separators=(' ,', ' : ')) + d1 = self.dumps(h) + d2 = self.dumps(h, indent=2, sort_keys=True, separators=(' ,', ' : ')) - h1 = 
json.loads(d1) - h2 = json.loads(d2) + h1 = self.loads(d1) + h2 = self.loads(d2) self.assertEqual(h1, h) self.assertEqual(h2, h) self.assertEqual(d2, expect) + + +class TestPySeparators(TestSeparators, PyTest): pass +class TestCSeparators(TestSeparators, CTest): pass diff --git a/lib-python/2.7/json/tests/test_speedups.py b/lib-python/2.7/json/tests/test_speedups.py --- a/lib-python/2.7/json/tests/test_speedups.py +++ b/lib-python/2.7/json/tests/test_speedups.py @@ -1,24 +1,23 @@ -import decimal -from unittest import TestCase +from json.tests import CTest -from json import decoder, encoder, scanner -class TestSpeedups(TestCase): +class TestSpeedups(CTest): def test_scanstring(self): - self.assertEqual(decoder.scanstring.__module__, "_json") - self.assertTrue(decoder.scanstring is decoder.c_scanstring) + self.assertEqual(self.json.decoder.scanstring.__module__, "_json") + self.assertIs(self.json.decoder.scanstring, self.json.decoder.c_scanstring) def test_encode_basestring_ascii(self): - self.assertEqual(encoder.encode_basestring_ascii.__module__, "_json") - self.assertTrue(encoder.encode_basestring_ascii is - encoder.c_encode_basestring_ascii) + self.assertEqual(self.json.encoder.encode_basestring_ascii.__module__, + "_json") + self.assertIs(self.json.encoder.encode_basestring_ascii, + self.json.encoder.c_encode_basestring_ascii) -class TestDecode(TestCase): +class TestDecode(CTest): def test_make_scanner(self): - self.assertRaises(AttributeError, scanner.c_make_scanner, 1) + self.assertRaises(AttributeError, self.json.scanner.c_make_scanner, 1) def test_make_encoder(self): - self.assertRaises(TypeError, encoder.c_make_encoder, + self.assertRaises(TypeError, self.json.encoder.c_make_encoder, None, "\xCD\x7D\x3D\x4E\x12\x4C\xF9\x79\xD7\x52\xBA\x82\xF2\x27\x4A\x7D\xA0\xCA\x75", None) diff --git a/lib-python/2.7/json/tests/test_unicode.py b/lib-python/2.7/json/tests/test_unicode.py --- a/lib-python/2.7/json/tests/test_unicode.py +++ b/lib-python/2.7/json/tests/test_unicode.py @@ -1,11 +1,10 @@ -from unittest import TestCase +from collections import OrderedDict +from json.tests import PyTest, CTest -import json -from collections import OrderedDict -class TestUnicode(TestCase): +class TestUnicode(object): def test_encoding1(self): - encoder = json.JSONEncoder(encoding='utf-8') + encoder = self.json.JSONEncoder(encoding='utf-8') u = u'\N{GREEK SMALL LETTER ALPHA}\N{GREEK CAPITAL LETTER OMEGA}' s = u.encode('utf-8') ju = encoder.encode(u) @@ -15,68 +14,72 @@ def test_encoding2(self): u = u'\N{GREEK SMALL LETTER ALPHA}\N{GREEK CAPITAL LETTER OMEGA}' s = u.encode('utf-8') - ju = json.dumps(u, encoding='utf-8') - js = json.dumps(s, encoding='utf-8') + ju = self.dumps(u, encoding='utf-8') + js = self.dumps(s, encoding='utf-8') self.assertEqual(ju, js) def test_encoding3(self): u = u'\N{GREEK SMALL LETTER ALPHA}\N{GREEK CAPITAL LETTER OMEGA}' - j = json.dumps(u) + j = self.dumps(u) self.assertEqual(j, '"\\u03b1\\u03a9"') def test_encoding4(self): u = u'\N{GREEK SMALL LETTER ALPHA}\N{GREEK CAPITAL LETTER OMEGA}' - j = json.dumps([u]) + j = self.dumps([u]) self.assertEqual(j, '["\\u03b1\\u03a9"]') def test_encoding5(self): u = u'\N{GREEK SMALL LETTER ALPHA}\N{GREEK CAPITAL LETTER OMEGA}' - j = json.dumps(u, ensure_ascii=False) + j = self.dumps(u, ensure_ascii=False) self.assertEqual(j, u'"{0}"'.format(u)) def test_encoding6(self): u = u'\N{GREEK SMALL LETTER ALPHA}\N{GREEK CAPITAL LETTER OMEGA}' - j = json.dumps([u], ensure_ascii=False) + j = self.dumps([u], ensure_ascii=False) self.assertEqual(j, 
u'["{0}"]'.format(u)) def test_big_unicode_encode(self): u = u'\U0001d120' - self.assertEqual(json.dumps(u), '"\\ud834\\udd20"') - self.assertEqual(json.dumps(u, ensure_ascii=False), u'"\U0001d120"') + self.assertEqual(self.dumps(u), '"\\ud834\\udd20"') + self.assertEqual(self.dumps(u, ensure_ascii=False), u'"\U0001d120"') def test_big_unicode_decode(self): u = u'z\U0001d120x' - self.assertEqual(json.loads('"' + u + '"'), u) - self.assertEqual(json.loads('"z\\ud834\\udd20x"'), u) + self.assertEqual(self.loads('"' + u + '"'), u) + self.assertEqual(self.loads('"z\\ud834\\udd20x"'), u) def test_unicode_decode(self): for i in range(0, 0xd7ff): u = unichr(i) s = '"\\u{0:04x}"'.format(i) - self.assertEqual(json.loads(s), u) + self.assertEqual(self.loads(s), u) def test_object_pairs_hook_with_unicode(self): s = u'{"xkd":1, "kcw":2, "art":3, "hxm":4, "qrt":5, "pad":6, "hoy":7}' p = [(u"xkd", 1), (u"kcw", 2), (u"art", 3), (u"hxm", 4), (u"qrt", 5), (u"pad", 6), (u"hoy", 7)] - self.assertEqual(json.loads(s), eval(s)) - self.assertEqual(json.loads(s, object_pairs_hook = lambda x: x), p) - od = json.loads(s, object_pairs_hook = OrderedDict) + self.assertEqual(self.loads(s), eval(s)) + self.assertEqual(self.loads(s, object_pairs_hook = lambda x: x), p) + od = self.loads(s, object_pairs_hook = OrderedDict) self.assertEqual(od, OrderedDict(p)) self.assertEqual(type(od), OrderedDict) # the object_pairs_hook takes priority over the object_hook - self.assertEqual(json.loads(s, + self.assertEqual(self.loads(s, object_pairs_hook = OrderedDict, object_hook = lambda x: None), OrderedDict(p)) def test_default_encoding(self): - self.assertEqual(json.loads(u'{"a": "\xe9"}'.encode('utf-8')), + self.assertEqual(self.loads(u'{"a": "\xe9"}'.encode('utf-8')), {'a': u'\xe9'}) def test_unicode_preservation(self): - self.assertEqual(type(json.loads(u'""')), unicode) - self.assertEqual(type(json.loads(u'"a"')), unicode) - self.assertEqual(type(json.loads(u'["a"]')[0]), unicode) + self.assertEqual(type(self.loads(u'""')), unicode) + self.assertEqual(type(self.loads(u'"a"')), unicode) + self.assertEqual(type(self.loads(u'["a"]')[0]), unicode) # Issue 10038. - self.assertEqual(type(json.loads('"foo"')), unicode) + self.assertEqual(type(self.loads('"foo"')), unicode) + + +class TestPyUnicode(TestUnicode, PyTest): pass +class TestCUnicode(TestUnicode, CTest): pass diff --git a/lib-python/2.7/lib-tk/Tix.py b/lib-python/2.7/lib-tk/Tix.py --- a/lib-python/2.7/lib-tk/Tix.py +++ b/lib-python/2.7/lib-tk/Tix.py @@ -163,7 +163,7 @@ extensions) exist, then the image type is chosen according to the depth of the X display: xbm images are chosen on monochrome displays and color images are chosen on color displays. By using - tix_ getimage, you can advoid hard coding the pathnames of the + tix_ getimage, you can avoid hard coding the pathnames of the image files in your application. When successful, this command returns the name of the newly created image, which can be used to configure the -image option of the Tk and Tix widgets. @@ -171,7 +171,7 @@ return self.tk.call('tix', 'getimage', name) def tix_option_get(self, name): - """Gets the options manitained by the Tix + """Gets the options maintained by the Tix scheme mechanism. Available options include: active_bg active_fg bg @@ -576,7 +576,7 @@ class ComboBox(TixWidget): """ComboBox - an Entry field with a dropdown menu. 
The user can select a - choice by either typing in the entry subwdget or selecting from the + choice by either typing in the entry subwidget or selecting from the listbox subwidget. Subwidget Class @@ -869,7 +869,7 @@ """HList - Hierarchy display widget can be used to display any data that have a hierarchical structure, for example, file system directory trees. The list entries are indented and connected by branch lines - according to their places in the hierachy. + according to their places in the hierarchy. Subwidgets - None""" @@ -1520,7 +1520,7 @@ self.tk.call(self._w, 'selection', 'set', first, last) class Tree(TixWidget): - """Tree - The tixTree widget can be used to display hierachical + """Tree - The tixTree widget can be used to display hierarchical data in a tree form. The user can adjust the view of the tree by opening or closing parts of the tree.""" diff --git a/lib-python/2.7/lib-tk/Tkinter.py b/lib-python/2.7/lib-tk/Tkinter.py --- a/lib-python/2.7/lib-tk/Tkinter.py +++ b/lib-python/2.7/lib-tk/Tkinter.py @@ -1660,7 +1660,7 @@ class Tk(Misc, Wm): """Toplevel widget of Tk which represents mostly the main window - of an appliation. It has an associated Tcl interpreter.""" + of an application. It has an associated Tcl interpreter.""" _w = '.' def __init__(self, screenName=None, baseName=None, className='Tk', useTk=1, sync=0, use=None): diff --git a/lib-python/2.7/lib-tk/test/test_ttk/test_functions.py b/lib-python/2.7/lib-tk/test/test_ttk/test_functions.py --- a/lib-python/2.7/lib-tk/test/test_ttk/test_functions.py +++ b/lib-python/2.7/lib-tk/test/test_ttk/test_functions.py @@ -136,7 +136,7 @@ # minimum acceptable for image type self.assertEqual(ttk._format_elemcreate('image', False, 'test'), ("test ", ())) - # specifiyng a state spec + # specifying a state spec self.assertEqual(ttk._format_elemcreate('image', False, 'test', ('', 'a')), ("test {} a", ())) # state spec with multiple states diff --git a/lib-python/2.7/lib-tk/ttk.py b/lib-python/2.7/lib-tk/ttk.py --- a/lib-python/2.7/lib-tk/ttk.py +++ b/lib-python/2.7/lib-tk/ttk.py @@ -707,7 +707,7 @@ textvariable, values, width """ # The "values" option may need special formatting, so leave to - # _format_optdict the responsability to format it + # _format_optdict the responsibility to format it if "values" in kw: kw["values"] = _format_optdict({'v': kw["values"]})[1] @@ -993,7 +993,7 @@ pane is either an integer index or the name of a managed subwindow. If kw is not given, returns a dict of the pane option values. If option is specified then the value for that option is returned. - Otherwise, sets the options to the correspoding values.""" + Otherwise, sets the options to the corresponding values.""" if option is not None: kw[option] = None return _val_or_dict(kw, self.tk.call, self._w, "pane", pane) diff --git a/lib-python/2.7/lib-tk/turtle.py b/lib-python/2.7/lib-tk/turtle.py --- a/lib-python/2.7/lib-tk/turtle.py +++ b/lib-python/2.7/lib-tk/turtle.py @@ -1385,7 +1385,7 @@ Optional argument: picname -- a string, name of a gif-file or "nopic". - If picname is a filename, set the corresponing image as background. + If picname is a filename, set the corresponding image as background. If picname is "nopic", delete backgroundimage, if present. If picname is None, return the filename of the current backgroundimage. 
@@ -1409,7 +1409,7 @@ Optional arguments: canvwidth -- positive integer, new width of canvas in pixels canvheight -- positive integer, new height of canvas in pixels - bg -- colorstring or color-tupel, new backgroundcolor + bg -- colorstring or color-tuple, new backgroundcolor If no arguments are given, return current (canvaswidth, canvasheight) Do not alter the drawing window. To observe hidden parts of @@ -3079,9 +3079,9 @@ fill="", width=ps) # Turtle now at position old, self._position = old - ## if undo is done during crating a polygon, the last vertex - ## will be deleted. if the polygon is entirel deleted, - ## creatigPoly will be set to False. + ## if undo is done during creating a polygon, the last vertex + ## will be deleted. if the polygon is entirely deleted, + ## creatingPoly will be set to False. ## Polygons created before the last one will not be affected by undo() if self._creatingPoly: if len(self._poly) > 0: @@ -3221,7 +3221,7 @@ def dot(self, size=None, *color): """Draw a dot with diameter size, using color. - Optional argumentS: + Optional arguments: size -- an integer >= 1 (if given) color -- a colorstring or a numeric color tuple @@ -3691,7 +3691,7 @@ class Turtle(RawTurtle): - """RawTurtle auto-crating (scrolled) canvas. + """RawTurtle auto-creating (scrolled) canvas. When a Turtle object is created or a function derived from some Turtle method is called a TurtleScreen object is automatically created. @@ -3731,7 +3731,7 @@ filename -- a string, used as filename default value is turtle_docstringdict - Has to be called explicitely, (not used by the turtle-graphics classes) + Has to be called explicitly, (not used by the turtle-graphics classes) The docstring dictionary will be written to the Python script .py It is intended to serve as a template for translation of the docstrings into different languages. 
diff --git a/lib-python/2.7/lib2to3/__main__.py b/lib-python/2.7/lib2to3/__main__.py new file mode 100644 --- /dev/null +++ b/lib-python/2.7/lib2to3/__main__.py @@ -0,0 +1,4 @@ +import sys +from .main import main + +sys.exit(main("lib2to3.fixes")) diff --git a/lib-python/2.7/lib2to3/fixes/fix_itertools.py b/lib-python/2.7/lib2to3/fixes/fix_itertools.py --- a/lib-python/2.7/lib2to3/fixes/fix_itertools.py +++ b/lib-python/2.7/lib2to3/fixes/fix_itertools.py @@ -13,7 +13,7 @@ class FixItertools(fixer_base.BaseFix): BM_compatible = True - it_funcs = "('imap'|'ifilter'|'izip'|'ifilterfalse')" + it_funcs = "('imap'|'ifilter'|'izip'|'izip_longest'|'ifilterfalse')" PATTERN = """ power< it='itertools' trailer< @@ -28,7 +28,8 @@ def transform(self, node, results): prefix = None func = results['func'][0] - if 'it' in results and func.value != u'ifilterfalse': + if ('it' in results and + func.value not in (u'ifilterfalse', u'izip_longest')): dot, it = (results['dot'], results['it']) # Remove the 'itertools' prefix = it.prefix diff --git a/lib-python/2.7/lib2to3/fixes/fix_itertools_imports.py b/lib-python/2.7/lib2to3/fixes/fix_itertools_imports.py --- a/lib-python/2.7/lib2to3/fixes/fix_itertools_imports.py +++ b/lib-python/2.7/lib2to3/fixes/fix_itertools_imports.py @@ -31,9 +31,10 @@ if member_name in (u'imap', u'izip', u'ifilter'): child.value = None child.remove() - elif member_name == u'ifilterfalse': + elif member_name in (u'ifilterfalse', u'izip_longest'): node.changed() - name_node.value = u'filterfalse' + name_node.value = (u'filterfalse' if member_name[1] == u'f' + else u'zip_longest') # Make sure the import statement is still sane children = imports.children[:] or [imports] diff --git a/lib-python/2.7/lib2to3/fixes/fix_metaclass.py b/lib-python/2.7/lib2to3/fixes/fix_metaclass.py --- a/lib-python/2.7/lib2to3/fixes/fix_metaclass.py +++ b/lib-python/2.7/lib2to3/fixes/fix_metaclass.py @@ -48,7 +48,7 @@ """ for node in cls_node.children: if node.type == syms.suite: - # already in the prefered format, do nothing + # already in the preferred format, do nothing return # !%@#! 
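
The fix_itertools / fix_itertools_imports hunks above extend 2to3 so that izip_longest is rewritten to zip_longest (both the itertools.izip_longest(...) call form and the "from itertools import izip_longest" form), and the new lib2to3/__main__.py makes the package runnable as python -m lib2to3. A rough sketch of driving just those two fixers from Python; the RefactoringTool usage here is illustrative and not part of the patch:

    # Illustration only: run the two itertools fixers over a small snippet.
    from lib2to3.refactor import RefactoringTool

    fixers = ["lib2to3.fixes.fix_itertools",
              "lib2to3.fixes.fix_itertools_imports"]
    tool = RefactoringTool(fixers)

    source = ("from itertools import izip_longest, imap\n"
              "import itertools\n"
              "pairs = itertools.izip_longest([1, 2, 3], [4, 5])\n")
    tree = tool.refactor_string(source, "<example>")   # input must end in \n
    print str(tree)
    # roughly:
    #   from itertools import zip_longest
    #   import itertools
    #   pairs = itertools.zip_longest([1, 2, 3], [4, 5])

From the shell the same transformation is roughly: 2to3 -f itertools -f itertools_imports example.py
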
oneliners have no suite node, we have to fake one up diff --git a/lib-python/2.7/lib2to3/fixes/fix_urllib.py b/lib-python/2.7/lib2to3/fixes/fix_urllib.py --- a/lib-python/2.7/lib2to3/fixes/fix_urllib.py +++ b/lib-python/2.7/lib2to3/fixes/fix_urllib.py @@ -12,7 +12,7 @@ MAPPING = {"urllib": [ ("urllib.request", - ["URLOpener", "FancyURLOpener", "urlretrieve", + ["URLopener", "FancyURLopener", "urlretrieve", "_urlopener", "urlopen", "urlcleanup", "pathname2url", "url2pathname"]), ("urllib.parse", diff --git a/lib-python/2.7/lib2to3/main.py b/lib-python/2.7/lib2to3/main.py --- a/lib-python/2.7/lib2to3/main.py +++ b/lib-python/2.7/lib2to3/main.py @@ -101,7 +101,7 @@ parser.add_option("-j", "--processes", action="store", default=1, type="int", help="Run 2to3 concurrently") parser.add_option("-x", "--nofix", action="append", default=[], - help="Prevent a fixer from being run.") + help="Prevent a transformation from being run") parser.add_option("-l", "--list-fixes", action="store_true", help="List available transformations") parser.add_option("-p", "--print-function", action="store_true", @@ -113,7 +113,7 @@ parser.add_option("-w", "--write", action="store_true", help="Write back modified files") parser.add_option("-n", "--nobackups", action="store_true", default=False, - help="Don't write backups for modified files.") + help="Don't write backups for modified files") # Parse command line arguments refactor_stdin = False diff --git a/lib-python/2.7/lib2to3/patcomp.py b/lib-python/2.7/lib2to3/patcomp.py --- a/lib-python/2.7/lib2to3/patcomp.py +++ b/lib-python/2.7/lib2to3/patcomp.py @@ -12,6 +12,7 @@ # Python imports import os +import StringIO # Fairly local imports from .pgen2 import driver, literals, token, tokenize, parse, grammar @@ -32,7 +33,7 @@ def tokenize_wrapper(input): """Tokenizes a string suppressing significant whitespace.""" skip = set((token.NEWLINE, token.INDENT, token.DEDENT)) - tokens = tokenize.generate_tokens(driver.generate_lines(input).next) + tokens = tokenize.generate_tokens(StringIO.StringIO(input).readline) for quintuple in tokens: type, value, start, end, line_text = quintuple if type not in skip: diff --git a/lib-python/2.7/lib2to3/pgen2/conv.py b/lib-python/2.7/lib2to3/pgen2/conv.py --- a/lib-python/2.7/lib2to3/pgen2/conv.py +++ b/lib-python/2.7/lib2to3/pgen2/conv.py @@ -51,7 +51,7 @@ self.finish_off() def parse_graminit_h(self, filename): - """Parse the .h file writen by pgen. (Internal) + """Parse the .h file written by pgen. (Internal) This file is a sequence of #define statements defining the nonterminals of the grammar as numbers. We build two tables @@ -82,7 +82,7 @@ return True def parse_graminit_c(self, filename): - """Parse the .c file writen by pgen. (Internal) + """Parse the .c file written by pgen. (Internal) The file looks as follows. 
The first two lines are always this: diff --git a/lib-python/2.7/lib2to3/pgen2/driver.py b/lib-python/2.7/lib2to3/pgen2/driver.py --- a/lib-python/2.7/lib2to3/pgen2/driver.py +++ b/lib-python/2.7/lib2to3/pgen2/driver.py @@ -19,6 +19,7 @@ import codecs import os import logging +import StringIO import sys # Pgen imports @@ -101,18 +102,10 @@ def parse_string(self, text, debug=False): """Parse a string and return the syntax tree.""" - tokens = tokenize.generate_tokens(generate_lines(text).next) + tokens = tokenize.generate_tokens(StringIO.StringIO(text).readline) return self.parse_tokens(tokens, debug) -def generate_lines(text): - """Generator that behaves like readline without using StringIO.""" - for line in text.splitlines(True): - yield line - while True: - yield "" - - def load_grammar(gt="Grammar.txt", gp=None, save=True, force=False, logger=None): """Load the grammar (maybe from a pickle).""" diff --git a/lib-python/2.7/lib2to3/pytree.py b/lib-python/2.7/lib2to3/pytree.py --- a/lib-python/2.7/lib2to3/pytree.py +++ b/lib-python/2.7/lib2to3/pytree.py @@ -658,8 +658,8 @@ content: optional sequence of subsequences of patterns; if absent, matches one node; if present, each subsequence is an alternative [*] - min: optinal minumum number of times to match, default 0 - max: optional maximum number of times tro match, default HUGE + min: optional minimum number of times to match, default 0 + max: optional maximum number of times to match, default HUGE name: optional name assigned to this match [*] Thus, if content is [[a, b, c], [d, e], [f, g, h]] this is @@ -743,9 +743,11 @@ else: # The reason for this is that hitting the recursion limit usually # results in some ugly messages about how RuntimeErrors are being - # ignored. - save_stderr = sys.stderr - sys.stderr = StringIO() + # ignored. We don't do this on non-CPython implementation because + # they don't have this problem. + if hasattr(sys, "getrefcount"): + save_stderr = sys.stderr + sys.stderr = StringIO() try: for count, r in self._recursive_matches(nodes, 0): if self.name: @@ -759,7 +761,8 @@ r[self.name] = nodes[:count] yield count, r finally: - sys.stderr = save_stderr + if hasattr(sys, "getrefcount"): + sys.stderr = save_stderr def _iterative_matches(self, nodes): """Helper to iteratively yield the matches.""" diff --git a/lib-python/2.7/lib2to3/refactor.py b/lib-python/2.7/lib2to3/refactor.py --- a/lib-python/2.7/lib2to3/refactor.py +++ b/lib-python/2.7/lib2to3/refactor.py @@ -302,13 +302,14 @@ Files and subdirectories starting with '.' are skipped. 
""" + py_ext = os.extsep + "py" for dirpath, dirnames, filenames in os.walk(dir_name): self.log_debug("Descending into %s", dirpath) dirnames.sort() filenames.sort() for name in filenames: - if not name.startswith(".") and \ - os.path.splitext(name)[1].endswith("py"): + if (not name.startswith(".") and + os.path.splitext(name)[1] == py_ext): fullname = os.path.join(dirpath, name) self.refactor_file(fullname, write, doctests_only) # Modify dirnames in-place to remove subdirs with leading dots diff --git a/lib-python/2.7/lib2to3/tests/data/py2_test_grammar.py b/lib-python/2.7/lib2to3/tests/data/py2_test_grammar.py --- a/lib-python/2.7/lib2to3/tests/data/py2_test_grammar.py +++ b/lib-python/2.7/lib2to3/tests/data/py2_test_grammar.py @@ -316,7 +316,7 @@ ### simple_stmt: small_stmt (';' small_stmt)* [';'] x = 1; pass; del x def foo(): - # verify statments that end with semi-colons + # verify statements that end with semi-colons x = 1; pass; del x; foo() diff --git a/lib-python/2.7/lib2to3/tests/data/py3_test_grammar.py b/lib-python/2.7/lib2to3/tests/data/py3_test_grammar.py --- a/lib-python/2.7/lib2to3/tests/data/py3_test_grammar.py +++ b/lib-python/2.7/lib2to3/tests/data/py3_test_grammar.py @@ -356,7 +356,7 @@ ### simple_stmt: small_stmt (';' small_stmt)* [';'] x = 1; pass; del x def foo(): - # verify statments that end with semi-colons + # verify statements that end with semi-colons x = 1; pass; del x; foo() diff --git a/lib-python/2.7/lib2to3/tests/test_fixers.py b/lib-python/2.7/lib2to3/tests/test_fixers.py --- a/lib-python/2.7/lib2to3/tests/test_fixers.py +++ b/lib-python/2.7/lib2to3/tests/test_fixers.py @@ -3623,16 +3623,24 @@ a = """%s(f, a)""" self.checkall(b, a) - def test_2(self): + def test_qualified(self): b = """itertools.ifilterfalse(a, b)""" a = """itertools.filterfalse(a, b)""" self.check(b, a) - def test_4(self): + b = """itertools.izip_longest(a, b)""" + a = """itertools.zip_longest(a, b)""" + self.check(b, a) + + def test_2(self): b = """ifilterfalse(a, b)""" a = """filterfalse(a, b)""" self.check(b, a) + b = """izip_longest(a, b)""" + a = """zip_longest(a, b)""" + self.check(b, a) + def test_space_1(self): b = """ %s(f, a)""" a = """ %s(f, a)""" @@ -3643,9 +3651,14 @@ a = """ itertools.filterfalse(a, b)""" self.check(b, a) + b = """ itertools.izip_longest(a, b)""" + a = """ itertools.zip_longest(a, b)""" + self.check(b, a) + def test_run_order(self): self.assert_runs_after('map', 'zip', 'filter') + class Test_itertools_imports(FixerTestCase): fixer = 'itertools_imports' @@ -3696,18 +3709,19 @@ s = "from itertools import bar as bang" self.unchanged(s) - def test_ifilter(self): - b = "from itertools import ifilterfalse" - a = "from itertools import filterfalse" - self.check(b, a) - - b = "from itertools import imap, ifilterfalse, foo" - a = "from itertools import filterfalse, foo" - self.check(b, a) - - b = "from itertools import bar, ifilterfalse, foo" - a = "from itertools import bar, filterfalse, foo" - self.check(b, a) + def test_ifilter_and_zip_longest(self): + for name in "filterfalse", "zip_longest": + b = "from itertools import i%s" % (name,) + a = "from itertools import %s" % (name,) + self.check(b, a) + + b = "from itertools import imap, i%s, foo" % (name,) + a = "from itertools import %s, foo" % (name,) + self.check(b, a) + + b = "from itertools import bar, i%s, foo" % (name,) + a = "from itertools import bar, %s, foo" % (name,) + self.check(b, a) def test_import_star(self): s = "from itertools import *" diff --git a/lib-python/2.7/lib2to3/tests/test_parser.py 
b/lib-python/2.7/lib2to3/tests/test_parser.py --- a/lib-python/2.7/lib2to3/tests/test_parser.py +++ b/lib-python/2.7/lib2to3/tests/test_parser.py @@ -19,6 +19,16 @@ # Local imports from lib2to3.pgen2 import tokenize from ..pgen2.parse import ParseError +from lib2to3.pygram import python_symbols as syms + + +class TestDriver(support.TestCase): + + def test_formfeed(self): + s = """print 1\n\x0Cprint 2\n""" + t = driver.parse_string(s) + self.assertEqual(t.children[0].children[0].type, syms.print_stmt) + self.assertEqual(t.children[1].children[0].type, syms.print_stmt) class GrammarTest(support.TestCase): diff --git a/lib-python/2.7/lib2to3/tests/test_refactor.py b/lib-python/2.7/lib2to3/tests/test_refactor.py --- a/lib-python/2.7/lib2to3/tests/test_refactor.py +++ b/lib-python/2.7/lib2to3/tests/test_refactor.py @@ -223,6 +223,7 @@ "hi.py", ".dumb", ".after.py", + "notpy.npy", "sappy"] expected = ["hi.py"] check(tree, expected) diff --git a/lib-python/2.7/lib2to3/tests/test_util.py b/lib-python/2.7/lib2to3/tests/test_util.py --- a/lib-python/2.7/lib2to3/tests/test_util.py +++ b/lib-python/2.7/lib2to3/tests/test_util.py @@ -568,8 +568,8 @@ def test_from_import(self): node = parse('bar()') - fixer_util.touch_import("cgi", "escape", node) - self.assertEqual(str(node), 'from cgi import escape\nbar()\n\n') + fixer_util.touch_import("html", "escape", node) + self.assertEqual(str(node), 'from html import escape\nbar()\n\n') def test_name_import(self): node = parse('bar()') diff --git a/lib-python/2.7/locale.py b/lib-python/2.7/locale.py --- a/lib-python/2.7/locale.py +++ b/lib-python/2.7/locale.py @@ -621,7 +621,7 @@ 'tactis': 'TACTIS', 'euc_jp': 'eucJP', 'euc_kr': 'eucKR', - 'utf_8': 'UTF8', + 'utf_8': 'UTF-8', 'koi8_r': 'KOI8-R', 'koi8_u': 'KOI8-U', # XXX This list is still incomplete. If you know more diff --git a/lib-python/2.7/logging/__init__.py b/lib-python/2.7/logging/__init__.py --- a/lib-python/2.7/logging/__init__.py +++ b/lib-python/2.7/logging/__init__.py @@ -1627,6 +1627,7 @@ h = wr() if h: try: + h.acquire() h.flush() h.close() except (IOError, ValueError): @@ -1635,6 +1636,8 @@ # references to them are still around at # application exit. pass + finally: + h.release() except: if raiseExceptions: raise diff --git a/lib-python/2.7/logging/config.py b/lib-python/2.7/logging/config.py --- a/lib-python/2.7/logging/config.py +++ b/lib-python/2.7/logging/config.py @@ -226,14 +226,14 @@ propagate = 1 logger = logging.getLogger(qn) if qn in existing: - i = existing.index(qn) + i = existing.index(qn) + 1 # start with the entry after qn prefixed = qn + "." 
pflen = len(prefixed) num_existing = len(existing) - i = i + 1 # look at the entry after qn - while (i < num_existing) and (existing[i][:pflen] == prefixed): - child_loggers.append(existing[i]) - i = i + 1 + while i < num_existing: + if existing[i][:pflen] == prefixed: + child_loggers.append(existing[i]) + i += 1 existing.remove(qn) if "level" in opts: level = cp.get(sectname, "level") diff --git a/lib-python/2.7/logging/handlers.py b/lib-python/2.7/logging/handlers.py --- a/lib-python/2.7/logging/handlers.py +++ b/lib-python/2.7/logging/handlers.py @@ -125,6 +125,7 @@ """ if self.stream: self.stream.close() + self.stream = None if self.backupCount > 0: for i in range(self.backupCount - 1, 0, -1): sfn = "%s.%d" % (self.baseFilename, i) @@ -324,6 +325,7 @@ """ if self.stream: self.stream.close() + self.stream = None # get the time that this sequence started at and make it a TimeTuple t = self.rolloverAt - self.interval if self.utc: diff --git a/lib-python/2.7/mailbox.py b/lib-python/2.7/mailbox.py --- a/lib-python/2.7/mailbox.py +++ b/lib-python/2.7/mailbox.py @@ -234,27 +234,35 @@ def __init__(self, dirname, factory=rfc822.Message, create=True): """Initialize a Maildir instance.""" Mailbox.__init__(self, dirname, factory, create) + self._paths = { + 'tmp': os.path.join(self._path, 'tmp'), + 'new': os.path.join(self._path, 'new'), + 'cur': os.path.join(self._path, 'cur'), + } if not os.path.exists(self._path): if create: os.mkdir(self._path, 0700) - os.mkdir(os.path.join(self._path, 'tmp'), 0700) - os.mkdir(os.path.join(self._path, 'new'), 0700) - os.mkdir(os.path.join(self._path, 'cur'), 0700) + for path in self._paths.values(): + os.mkdir(path, 0o700) else: raise NoSuchMailboxError(self._path) self._toc = {} - self._last_read = None # Records last time we read cur/new - # NOTE: we manually invalidate _last_read each time we do any - # modifications ourselves, otherwise we might get tripped up by - # bogus mtime behaviour on some systems (see issue #6896). 
+ self._toc_mtimes = {} + for subdir in ('cur', 'new'): + self._toc_mtimes[subdir] = os.path.getmtime(self._paths[subdir]) + self._last_read = time.time() # Records last time we read cur/new + self._skewfactor = 0.1 # Adjust if os/fs clocks are skewing def add(self, message): """Add message and return assigned key.""" tmp_file = self._create_tmp() try: self._dump_message(message, tmp_file) - finally: - _sync_close(tmp_file) + except BaseException: + tmp_file.close() + os.remove(tmp_file.name) + raise + _sync_close(tmp_file) if isinstance(message, MaildirMessage): subdir = message.get_subdir() suffix = self.colon + message.get_info() @@ -280,15 +288,11 @@ raise if isinstance(message, MaildirMessage): os.utime(dest, (os.path.getatime(dest), message.get_date())) - # Invalidate cached toc - self._last_read = None return uniq def remove(self, key): """Remove the keyed message; raise KeyError if it doesn't exist.""" os.remove(os.path.join(self._path, self._lookup(key))) - # Invalidate cached toc (only on success) - self._last_read = None def discard(self, key): """If the keyed message exists, remove it.""" @@ -323,8 +327,6 @@ if isinstance(message, MaildirMessage): os.utime(new_path, (os.path.getatime(new_path), message.get_date())) - # Invalidate cached toc - self._last_read = None def get_message(self, key): """Return a Message representation or raise a KeyError.""" @@ -380,8 +382,8 @@ def flush(self): """Write any pending changes to disk.""" # Maildir changes are always written immediately, so there's nothing - # to do except invalidate our cached toc. - self._last_read = None + # to do. + pass def lock(self): """Lock the mailbox.""" @@ -479,36 +481,39 @@ def _refresh(self): """Update table of contents mapping.""" - if self._last_read is not None: - for subdir in ('new', 'cur'): - mtime = os.path.getmtime(os.path.join(self._path, subdir)) - if mtime > self._last_read: - break - else: + # If it has been less than two seconds since the last _refresh() call, + # we have to unconditionally re-read the mailbox just in case it has + # been modified, because os.path.mtime() has a 2 sec resolution in the + # most common worst case (FAT) and a 1 sec resolution typically. This + # results in a few unnecessary re-reads when _refresh() is called + # multiple times in that interval, but once the clock ticks over, we + # will only re-read as needed. Because the filesystem might be being + # served by an independent system with its own clock, we record and + # compare with the mtimes from the filesystem. Because the other + # system's clock might be skewing relative to our clock, we add an + # extra delta to our wait. The default is one tenth second, but is an + # instance variable and so can be adjusted if dealing with a + # particularly skewed or irregular system. + if time.time() - self._last_read > 2 + self._skewfactor: + refresh = False + for subdir in self._toc_mtimes: + mtime = os.path.getmtime(self._paths[subdir]) + if mtime > self._toc_mtimes[subdir]: + refresh = True + self._toc_mtimes[subdir] = mtime + if not refresh: return - - # We record the current time - 1sec so that, if _refresh() is called - # again in the same second, we will always re-read the mailbox - # just in case it's been modified. (os.path.mtime() only has - # 1sec resolution.) This results in a few unnecessary re-reads - # when _refresh() is called multiple times in the same second, - # but once the clock ticks over, we will only re-read as needed. 
- now = time.time() - 1 - + # Refresh toc self._toc = {} - def update_dir (subdir): - path = os.path.join(self._path, subdir) + for subdir in self._toc_mtimes: + path = self._paths[subdir] for entry in os.listdir(path): p = os.path.join(path, entry) if os.path.isdir(p): continue uniq = entry.split(self.colon)[0] self._toc[uniq] = os.path.join(subdir, entry) - - update_dir('new') - update_dir('cur') - - self._last_read = now + self._last_read = time.time() def _lookup(self, key): """Use TOC to return subpath for given key, or raise a KeyError.""" @@ -551,7 +556,7 @@ f = open(self._path, 'wb+') else: raise NoSuchMailboxError(self._path) - elif e.errno == errno.EACCES: + elif e.errno in (errno.EACCES, errno.EROFS): f = open(self._path, 'rb') else: raise @@ -700,9 +705,14 @@ def _append_message(self, message): """Append message to mailbox and return (start, stop) offsets.""" self._file.seek(0, 2) - self._pre_message_hook(self._file) - offsets = self._install_message(message) - self._post_message_hook(self._file) + before = self._file.tell() + try: + self._pre_message_hook(self._file) + offsets = self._install_message(message) + self._post_message_hook(self._file) + except BaseException: + self._file.truncate(before) + raise self._file.flush() self._file_length = self._file.tell() # Record current length of mailbox return offsets @@ -868,18 +878,29 @@ new_key = max(keys) + 1 new_path = os.path.join(self._path, str(new_key)) f = _create_carefully(new_path) + closed = False try: if self._locked: _lock_file(f) try: - self._dump_message(message, f) + try: + self._dump_message(message, f) + except BaseException: + # Unlock and close so it can be deleted on Windows + if self._locked: + _unlock_file(f) + _sync_close(f) + closed = True + os.remove(new_path) + raise if isinstance(message, MHMessage): self._dump_sequences(message, new_key) finally: if self._locked: _unlock_file(f) finally: - _sync_close(f) + if not closed: + _sync_close(f) return new_key def remove(self, key): @@ -1886,7 +1907,7 @@ try: fcntl.lockf(f, fcntl.LOCK_EX | fcntl.LOCK_NB) except IOError, e: - if e.errno in (errno.EAGAIN, errno.EACCES): + if e.errno in (errno.EAGAIN, errno.EACCES, errno.EROFS): raise ExternalClashError('lockf: lock unavailable: %s' % f.name) else: @@ -1896,7 +1917,7 @@ pre_lock = _create_temporary(f.name + '.lock') pre_lock.close() except IOError, e: - if e.errno == errno.EACCES: + if e.errno in (errno.EACCES, errno.EROFS): return # Without write access, just skip dotlocking. 
else: raise diff --git a/lib-python/2.7/msilib/__init__.py b/lib-python/2.7/msilib/__init__.py --- a/lib-python/2.7/msilib/__init__.py +++ b/lib-python/2.7/msilib/__init__.py @@ -173,11 +173,10 @@ add_data(db, table, getattr(module, table)) def make_id(str): - #str = str.replace(".", "_") # colons are allowed - str = str.replace(" ", "_") - str = str.replace("-", "_") - if str[0] in string.digits: - str = "_"+str + identifier_chars = string.ascii_letters + string.digits + "._" + str = "".join([c if c in identifier_chars else "_" for c in str]) + if str[0] in (string.digits + "."): + str = "_" + str assert re.match("^[A-Za-z_][A-Za-z0-9_.]*$", str), "FILE"+str return str @@ -285,19 +284,28 @@ [(feature.id, component)]) def make_short(self, file): + oldfile = file + file = file.replace('+', '_') + file = ''.join(c for c in file if not c in ' "/\[]:;=,') parts = file.split(".") - if len(parts)>1: + if len(parts) > 1: + prefix = "".join(parts[:-1]).upper() suffix = parts[-1].upper() + if not prefix: + prefix = suffix + suffix = None else: + prefix = file.upper() suffix = None - prefix = parts[0].upper() - if len(prefix) <= 8 and (not suffix or len(suffix)<=3): + if len(parts) < 3 and len(prefix) <= 8 and file == oldfile and ( + not suffix or len(suffix) <= 3): if suffix: file = prefix+"."+suffix else: file = prefix - assert file not in self.short_names else: + file = None + if file is None or file in self.short_names: prefix = prefix[:6] if suffix: suffix = suffix[:3] diff --git a/lib-python/2.7/multiprocessing/__init__.py b/lib-python/2.7/multiprocessing/__init__.py --- a/lib-python/2.7/multiprocessing/__init__.py +++ b/lib-python/2.7/multiprocessing/__init__.py @@ -38,6 +38,7 @@ # HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT # LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY # OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __version__ = '0.70a1' @@ -115,8 +116,11 @@ except (ValueError, KeyError): num = 0 elif 'bsd' in sys.platform or sys.platform == 'darwin': + comm = '/sbin/sysctl -n hw.ncpu' + if sys.platform == 'darwin': + comm = '/usr' + comm try: - with os.popen('sysctl -n hw.ncpu') as p: + with os.popen(comm) as p: num = int(p.read()) except ValueError: num = 0 diff --git a/lib-python/2.7/multiprocessing/connection.py b/lib-python/2.7/multiprocessing/connection.py --- a/lib-python/2.7/multiprocessing/connection.py +++ b/lib-python/2.7/multiprocessing/connection.py @@ -3,7 +3,33 @@ # # multiprocessing/connection.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. 
+# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __all__ = [ 'Client', 'Listener', 'Pipe' ] diff --git a/lib-python/2.7/multiprocessing/dummy/__init__.py b/lib-python/2.7/multiprocessing/dummy/__init__.py --- a/lib-python/2.7/multiprocessing/dummy/__init__.py +++ b/lib-python/2.7/multiprocessing/dummy/__init__.py @@ -3,7 +3,33 @@ # # multiprocessing/dummy/__init__.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __all__ = [ diff --git a/lib-python/2.7/multiprocessing/dummy/connection.py b/lib-python/2.7/multiprocessing/dummy/connection.py --- a/lib-python/2.7/multiprocessing/dummy/connection.py +++ b/lib-python/2.7/multiprocessing/dummy/connection.py @@ -3,7 +3,33 @@ # # multiprocessing/dummy/connection.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. 
Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __all__ = [ 'Client', 'Listener', 'Pipe' ] diff --git a/lib-python/2.7/multiprocessing/forking.py b/lib-python/2.7/multiprocessing/forking.py --- a/lib-python/2.7/multiprocessing/forking.py +++ b/lib-python/2.7/multiprocessing/forking.py @@ -3,7 +3,33 @@ # # multiprocessing/forking.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # import os @@ -172,6 +198,7 @@ TERMINATE = 0x10000 WINEXE = (sys.platform == 'win32' and getattr(sys, 'frozen', False)) + WINSERVICE = sys.executable.lower().endswith("pythonservice.exe") exit = win32.ExitProcess close = win32.CloseHandle @@ -181,7 +208,7 @@ # People embedding Python want to modify it. 
# - if sys.executable.lower().endswith('pythonservice.exe'): + if WINSERVICE: _python_exe = os.path.join(sys.exec_prefix, 'python.exe') else: _python_exe = sys.executable @@ -371,7 +398,7 @@ if _logger is not None: d['log_level'] = _logger.getEffectiveLevel() - if not WINEXE: + if not WINEXE and not WINSERVICE: main_path = getattr(sys.modules['__main__'], '__file__', None) if not main_path and sys.argv[0] not in ('', '-c'): main_path = sys.argv[0] diff --git a/lib-python/2.7/multiprocessing/heap.py b/lib-python/2.7/multiprocessing/heap.py --- a/lib-python/2.7/multiprocessing/heap.py +++ b/lib-python/2.7/multiprocessing/heap.py @@ -3,7 +3,33 @@ # # multiprocessing/heap.py # -# Copyright (c) 2007-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # import bisect diff --git a/lib-python/2.7/multiprocessing/managers.py b/lib-python/2.7/multiprocessing/managers.py --- a/lib-python/2.7/multiprocessing/managers.py +++ b/lib-python/2.7/multiprocessing/managers.py @@ -4,7 +4,33 @@ # # multiprocessing/managers.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. 
+# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __all__ = [ 'BaseManager', 'SyncManager', 'BaseProxy', 'Token' ] diff --git a/lib-python/2.7/multiprocessing/pool.py b/lib-python/2.7/multiprocessing/pool.py --- a/lib-python/2.7/multiprocessing/pool.py +++ b/lib-python/2.7/multiprocessing/pool.py @@ -3,7 +3,33 @@ # # multiprocessing/pool.py # -# Copyright (c) 2007-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __all__ = ['Pool'] @@ -269,6 +295,8 @@ while pool._worker_handler._state == RUN and pool._state == RUN: pool._maintain_pool() time.sleep(0.1) + # send sentinel to stop workers + pool._taskqueue.put(None) debug('worker handler exiting') @staticmethod @@ -387,7 +415,6 @@ if self._state == RUN: self._state = CLOSE self._worker_handler._state = CLOSE - self._taskqueue.put(None) def terminate(self): debug('terminating pool') @@ -421,7 +448,6 @@ worker_handler._state = TERMINATE task_handler._state = TERMINATE - taskqueue.put(None) # sentinel debug('helping task handler/workers to finish') cls._help_stuff_finish(inqueue, task_handler, len(pool)) @@ -431,6 +457,11 @@ result_handler._state = TERMINATE outqueue.put(None) # sentinel + # We must wait for the worker handler to exit before terminating + # workers because we don't want workers to be restarted behind our back. 
+ debug('joining worker handler') + worker_handler.join() + # Terminate workers which haven't already finished. if pool and hasattr(pool[0], 'terminate'): debug('terminating workers') diff --git a/lib-python/2.7/multiprocessing/process.py b/lib-python/2.7/multiprocessing/process.py --- a/lib-python/2.7/multiprocessing/process.py +++ b/lib-python/2.7/multiprocessing/process.py @@ -3,7 +3,33 @@ # # multiprocessing/process.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __all__ = ['Process', 'current_process', 'active_children'] diff --git a/lib-python/2.7/multiprocessing/queues.py b/lib-python/2.7/multiprocessing/queues.py --- a/lib-python/2.7/multiprocessing/queues.py +++ b/lib-python/2.7/multiprocessing/queues.py @@ -3,7 +3,33 @@ # # multiprocessing/queues.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. 
IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __all__ = ['Queue', 'SimpleQueue', 'JoinableQueue'] diff --git a/lib-python/2.7/multiprocessing/reduction.py b/lib-python/2.7/multiprocessing/reduction.py --- a/lib-python/2.7/multiprocessing/reduction.py +++ b/lib-python/2.7/multiprocessing/reduction.py @@ -4,7 +4,33 @@ # # multiprocessing/reduction.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __all__ = [] diff --git a/lib-python/2.7/multiprocessing/sharedctypes.py b/lib-python/2.7/multiprocessing/sharedctypes.py --- a/lib-python/2.7/multiprocessing/sharedctypes.py +++ b/lib-python/2.7/multiprocessing/sharedctypes.py @@ -3,7 +3,33 @@ # # multiprocessing/sharedctypes.py # -# Copyright (c) 2007-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. 
+# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # import sys @@ -52,9 +78,11 @@ Returns a ctypes array allocated from shared memory ''' type_ = typecode_to_type.get(typecode_or_type, typecode_or_type) - if isinstance(size_or_initializer, int): + if isinstance(size_or_initializer, (int, long)): type_ = type_ * size_or_initializer - return _new_value(type_) + obj = _new_value(type_) + ctypes.memset(ctypes.addressof(obj), 0, ctypes.sizeof(obj)) + return obj else: type_ = type_ * len(size_or_initializer) result = _new_value(type_) diff --git a/lib-python/2.7/multiprocessing/synchronize.py b/lib-python/2.7/multiprocessing/synchronize.py --- a/lib-python/2.7/multiprocessing/synchronize.py +++ b/lib-python/2.7/multiprocessing/synchronize.py @@ -3,7 +3,33 @@ # # multiprocessing/synchronize.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __all__ = [ diff --git a/lib-python/2.7/multiprocessing/util.py b/lib-python/2.7/multiprocessing/util.py --- a/lib-python/2.7/multiprocessing/util.py +++ b/lib-python/2.7/multiprocessing/util.py @@ -3,7 +3,33 @@ # # multiprocessing/util.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. 
+# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # import itertools diff --git a/lib-python/2.7/netrc.py b/lib-python/2.7/netrc.py --- a/lib-python/2.7/netrc.py +++ b/lib-python/2.7/netrc.py @@ -34,11 +34,19 @@ def _parse(self, file, fp): lexer = shlex.shlex(fp) lexer.wordchars += r"""!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~""" + lexer.commenters = lexer.commenters.replace('#', '') while 1: # Look for a machine, default, or macdef top-level keyword toplevel = tt = lexer.get_token() if not tt: break + elif tt[0] == '#': + # seek to beginning of comment, in case reading the token put + # us on a new line, and then skip the rest of the line. + pos = len(tt) + 1 + lexer.instream.seek(-pos, 1) + lexer.instream.readline() + continue elif tt == 'machine': entryname = lexer.get_token() elif tt == 'default': @@ -64,8 +72,8 @@ self.hosts[entryname] = {} while 1: tt = lexer.get_token() - if (tt=='' or tt == 'machine' or - tt == 'default' or tt =='macdef'): + if (tt.startswith('#') or + tt in {'', 'machine', 'default', 'macdef'}): if password: self.hosts[entryname] = (login, account, password) lexer.push_token(tt) diff --git a/lib-python/2.7/nntplib.py b/lib-python/2.7/nntplib.py --- a/lib-python/2.7/nntplib.py +++ b/lib-python/2.7/nntplib.py @@ -103,7 +103,7 @@ readermode is sometimes necessary if you are connecting to an NNTP server on the local machine and intend to call - reader-specific comamnds, such as `group'. If you get + reader-specific commands, such as `group'. If you get unexpected NNTPPermanentErrors, you might need to set readermode. """ diff --git a/lib-python/2.7/ntpath.py b/lib-python/2.7/ntpath.py --- a/lib-python/2.7/ntpath.py +++ b/lib-python/2.7/ntpath.py @@ -310,7 +310,7 @@ # - $varname is accepted. # - %varname% is accepted. # - varnames can be made out of letters, digits and the characters '_-' -# (though is not verifed in the ${varname} and %varname% cases) +# (though is not verified in the ${varname} and %varname% cases) # XXX With COMMAND.COM you can use any characters in a variable name, # XXX except '^|<>='. 
diff --git a/lib-python/2.7/nturl2path.py b/lib-python/2.7/nturl2path.py --- a/lib-python/2.7/nturl2path.py +++ b/lib-python/2.7/nturl2path.py @@ -25,11 +25,14 @@ error = 'Bad URL: ' + url raise IOError, error drive = comp[0][-1].upper() + path = drive + ':' components = comp[1].split('/') - path = drive + ':' - for comp in components: + for comp in components: if comp: path = path + '\\' + urllib.unquote(comp) + # Issue #11474: url like '/C|/' should convert into 'C:\\' + if path.endswith(':') and url.endswith('/'): + path += '\\' return path def pathname2url(p): diff --git a/lib-python/2.7/numbers.py b/lib-python/2.7/numbers.py --- a/lib-python/2.7/numbers.py +++ b/lib-python/2.7/numbers.py @@ -63,7 +63,7 @@ @abstractproperty def imag(self): - """Retrieve the real component of this number. + """Retrieve the imaginary component of this number. This should subclass Real. """ diff --git a/lib-python/2.7/optparse.py b/lib-python/2.7/optparse.py --- a/lib-python/2.7/optparse.py +++ b/lib-python/2.7/optparse.py @@ -1131,6 +1131,11 @@ prog : string the name of the current program (to override os.path.basename(sys.argv[0])). + description : string + A paragraph of text giving a brief overview of your program. + optparse reformats this paragraph to fit the current terminal + width and prints it when the user requests help (after usage, + but before the list of options). epilog : string paragraph of help text to print after option help diff --git a/lib-python/2.7/pickletools.py b/lib-python/2.7/pickletools.py --- a/lib-python/2.7/pickletools.py +++ b/lib-python/2.7/pickletools.py @@ -1370,7 +1370,7 @@ proto=0, doc="""Read an object from the memo and push it on the stack. - The index of the memo object to push is given by the newline-teriminated + The index of the memo object to push is given by the newline-terminated decimal string following. BINGET and LONG_BINGET are space-optimized versions. """), diff --git a/lib-python/2.7/pkgutil.py b/lib-python/2.7/pkgutil.py --- a/lib-python/2.7/pkgutil.py +++ b/lib-python/2.7/pkgutil.py @@ -11,7 +11,7 @@ __all__ = [ 'get_importer', 'iter_importers', 'get_loader', 'find_loader', - 'walk_packages', 'iter_modules', + 'walk_packages', 'iter_modules', 'get_data', 'ImpImporter', 'ImpLoader', 'read_code', 'extend_path', ] diff --git a/lib-python/2.7/platform.py b/lib-python/2.7/platform.py --- a/lib-python/2.7/platform.py +++ b/lib-python/2.7/platform.py @@ -503,7 +503,7 @@ info = pipe.read() if pipe.close(): raise os.error,'command failed' - # XXX How can I supress shell errors from being written + # XXX How can I suppress shell errors from being written # to stderr ? except os.error,why: #print 'Command %s failed: %s' % (cmd,why) @@ -1448,9 +1448,10 @@ """ Returns a string identifying the Python implementation. Currently, the following implementations are identified: - 'CPython' (C implementation of Python), - 'IronPython' (.NET implementation of Python), - 'Jython' (Java implementation of Python). + 'CPython' (C implementation of Python), + 'IronPython' (.NET implementation of Python), + 'Jython' (Java implementation of Python), + 'PyPy' (Python implementation of Python). """ return _sys_version()[0] diff --git a/lib-python/2.7/pydoc.py b/lib-python/2.7/pydoc.py --- a/lib-python/2.7/pydoc.py +++ b/lib-python/2.7/pydoc.py @@ -156,7 +156,7 @@ no.append(x) return yes, no -def visiblename(name, all=None): +def visiblename(name, all=None, obj=None): """Decide whether to show documentation on a variable.""" # Certain special names are redundant. 
_hidden_names = ('__builtins__', '__doc__', '__file__', '__path__', @@ -164,6 +164,9 @@ if name in _hidden_names: return 0 # Private names are hidden, but special names are displayed. if name.startswith('__') and name.endswith('__'): return 1 + # Namedtuples have public fields and methods with a single leading underscore + if name.startswith('_') and hasattr(obj, '_fields'): + return 1 if all is not None: # only document that which the programmer exported in __all__ return name in all @@ -475,9 +478,9 @@ def multicolumn(self, list, format, cols=4): """Format a list of items into a multi-column list.""" result = '' - rows = (len(list)+cols-1)/cols + rows = (len(list)+cols-1)//cols for col in range(cols): - result = result + '' % (100/cols) + result = result + '' % (100//cols) for i in range(rows*col, rows*col+rows): if i < len(list): result = result + format(list[i]) + '
    \n' @@ -627,7 +630,7 @@ # if __all__ exists, believe it. Otherwise use old heuristic. if (all is not None or (inspect.getmodule(value) or object) is object): - if visiblename(key, all): + if visiblename(key, all, object): classes.append((key, value)) cdict[key] = cdict[value] = '#' + key for key, value in classes: @@ -643,13 +646,13 @@ # if __all__ exists, believe it. Otherwise use old heuristic. if (all is not None or inspect.isbuiltin(value) or inspect.getmodule(value) is object): - if visiblename(key, all): + if visiblename(key, all, object): funcs.append((key, value)) fdict[key] = '#-' + key if inspect.isfunction(value): fdict[value] = fdict[key] data = [] for key, value in inspect.getmembers(object, isdata): - if visiblename(key, all): + if visiblename(key, all, object): data.append((key, value)) doc = self.markup(getdoc(object), self.preformat, fdict, cdict) @@ -773,7 +776,7 @@ push('\n') return attrs - attrs = filter(lambda data: visiblename(data[0]), + attrs = filter(lambda data: visiblename(data[0], obj=object), classify_class_attrs(object)) mdict = {} for key, kind, homecls, value in attrs: @@ -1042,18 +1045,18 @@ # if __all__ exists, believe it. Otherwise use old heuristic. if (all is not None or (inspect.getmodule(value) or object) is object): - if visiblename(key, all): + if visiblename(key, all, object): classes.append((key, value)) funcs = [] for key, value in inspect.getmembers(object, inspect.isroutine): # if __all__ exists, believe it. Otherwise use old heuristic. if (all is not None or inspect.isbuiltin(value) or inspect.getmodule(value) is object): - if visiblename(key, all): + if visiblename(key, all, object): funcs.append((key, value)) data = [] for key, value in inspect.getmembers(object, isdata): - if visiblename(key, all): + if visiblename(key, all, object): data.append((key, value)) modpkgs = [] @@ -1113,7 +1116,7 @@ result = result + self.section('CREDITS', str(object.__credits__)) return result - def docclass(self, object, name=None, mod=None): + def docclass(self, object, name=None, mod=None, *ignored): """Produce text documentation for a given class object.""" realname = object.__name__ name = name or realname @@ -1186,7 +1189,7 @@ name, mod, maxlen=70, doc=doc) + '\n') return attrs - attrs = filter(lambda data: visiblename(data[0]), + attrs = filter(lambda data: visiblename(data[0], obj=object), classify_class_attrs(object)) while attrs: if mro: @@ -1718,8 +1721,9 @@ return '' return '' - def __call__(self, request=None): - if request is not None: + _GoInteractive = object() + def __call__(self, request=_GoInteractive): + if request is not self._GoInteractive: self.help(request) else: self.intro() diff --git a/lib-python/2.7/pydoc_data/topics.py b/lib-python/2.7/pydoc_data/topics.py --- a/lib-python/2.7/pydoc_data/topics.py +++ b/lib-python/2.7/pydoc_data/topics.py @@ -1,16 +1,16 @@ -# Autogenerated by Sphinx on Sat Jul 3 08:52:04 2010 +# Autogenerated by Sphinx on Sat Jun 11 09:49:30 2011 topics = {'assert': u'\nThe ``assert`` statement\n************************\n\nAssert statements are a convenient way to insert debugging assertions\ninto a program:\n\n assert_stmt ::= "assert" expression ["," expression]\n\nThe simple form, ``assert expression``, is equivalent to\n\n if __debug__:\n if not expression: raise AssertionError\n\nThe extended form, ``assert expression1, expression2``, is equivalent\nto\n\n if __debug__:\n if not expression1: raise AssertionError(expression2)\n\nThese equivalences assume that ``__debug__`` and ``AssertionError``\nrefer to 
the built-in variables with those names. In the current\nimplementation, the built-in variable ``__debug__`` is ``True`` under\nnormal circumstances, ``False`` when optimization is requested\n(command line option -O). The current code generator emits no code\nfor an assert statement when optimization is requested at compile\ntime. Note that it is unnecessary to include the source code for the\nexpression that failed in the error message; it will be displayed as\npart of the stack trace.\n\nAssignments to ``__debug__`` are illegal. The value for the built-in\nvariable is determined when the interpreter starts.\n', - 'assignment': u'\nAssignment statements\n*********************\n\nAssignment statements are used to (re)bind names to values and to\nmodify attributes or items of mutable objects:\n\n assignment_stmt ::= (target_list "=")+ (expression_list | yield_expression)\n target_list ::= target ("," target)* [","]\n target ::= identifier\n | "(" target_list ")"\n | "[" target_list "]"\n | attributeref\n | subscription\n | slicing\n\n(See section *Primaries* for the syntax definitions for the last three\nsymbols.)\n\nAn assignment statement evaluates the expression list (remember that\nthis can be a single expression or a comma-separated list, the latter\nyielding a tuple) and assigns the single resulting object to each of\nthe target lists, from left to right.\n\nAssignment is defined recursively depending on the form of the target\n(list). When a target is part of a mutable object (an attribute\nreference, subscription or slicing), the mutable object must\nultimately perform the assignment and decide about its validity, and\nmay raise an exception if the assignment is unacceptable. The rules\nobserved by various types and the exceptions raised are given with the\ndefinition of the object types (see section *The standard type\nhierarchy*).\n\nAssignment of an object to a target list is recursively defined as\nfollows.\n\n* If the target list is a single target: The object is assigned to\n that target.\n\n* If the target list is a comma-separated list of targets: The object\n must be an iterable with the same number of items as there are\n targets in the target list, and the items are assigned, from left to\n right, to the corresponding targets. (This rule is relaxed as of\n Python 1.5; in earlier versions, the object had to be a tuple.\n Since strings are sequences, an assignment like ``a, b = "xy"`` is\n now legal as long as the string has the right length.)\n\nAssignment of an object to a single target is recursively defined as\nfollows.\n\n* If the target is an identifier (name):\n\n * If the name does not occur in a ``global`` statement in the\n current code block: the name is bound to the object in the current\n local namespace.\n\n * Otherwise: the name is bound to the object in the current global\n namespace.\n\n The name is rebound if it was already bound. This may cause the\n reference count for the object previously bound to the name to reach\n zero, causing the object to be deallocated and its destructor (if it\n has one) to be called.\n\n* If the target is a target list enclosed in parentheses or in square\n brackets: The object must be an iterable with the same number of\n items as there are targets in the target list, and its items are\n assigned, from left to right, to the corresponding targets.\n\n* If the target is an attribute reference: The primary expression in\n the reference is evaluated. 
It should yield an object with\n assignable attributes; if this is not the case, ``TypeError`` is\n raised. That object is then asked to assign the assigned object to\n the given attribute; if it cannot perform the assignment, it raises\n an exception (usually but not necessarily ``AttributeError``).\n\n Note: If the object is a class instance and the attribute reference\n occurs on both sides of the assignment operator, the RHS expression,\n ``a.x`` can access either an instance attribute or (if no instance\n attribute exists) a class attribute. The LHS target ``a.x`` is\n always set as an instance attribute, creating it if necessary.\n Thus, the two occurrences of ``a.x`` do not necessarily refer to the\n same attribute: if the RHS expression refers to a class attribute,\n the LHS creates a new instance attribute as the target of the\n assignment:\n\n class Cls:\n x = 3 # class variable\n inst = Cls()\n inst.x = inst.x + 1 # writes inst.x as 4 leaving Cls.x as 3\n\n This description does not necessarily apply to descriptor\n attributes, such as properties created with ``property()``.\n\n* If the target is a subscription: The primary expression in the\n reference is evaluated. It should yield either a mutable sequence\n object (such as a list) or a mapping object (such as a dictionary).\n Next, the subscript expression is evaluated.\n\n If the primary is a mutable sequence object (such as a list), the\n subscript must yield a plain integer. If it is negative, the\n sequence\'s length is added to it. The resulting value must be a\n nonnegative integer less than the sequence\'s length, and the\n sequence is asked to assign the assigned object to its item with\n that index. If the index is out of range, ``IndexError`` is raised\n (assignment to a subscripted sequence cannot add new items to a\n list).\n\n If the primary is a mapping object (such as a dictionary), the\n subscript must have a type compatible with the mapping\'s key type,\n and the mapping is then asked to create a key/datum pair which maps\n the subscript to the assigned object. This can either replace an\n existing key/value pair with the same key value, or insert a new\n key/value pair (if no key with the same value existed).\n\n* If the target is a slicing: The primary expression in the reference\n is evaluated. It should yield a mutable sequence object (such as a\n list). The assigned object should be a sequence object of the same\n type. Next, the lower and upper bound expressions are evaluated,\n insofar they are present; defaults are zero and the sequence\'s\n length. The bounds should evaluate to (small) integers. If either\n bound is negative, the sequence\'s length is added to it. The\n resulting bounds are clipped to lie between zero and the sequence\'s\n length, inclusive. Finally, the sequence object is asked to replace\n the slice with the items of the assigned sequence. 
The length of\n the slice may be different from the length of the assigned sequence,\n thus changing the length of the target sequence, if the object\n allows it.\n\n**CPython implementation detail:** In the current implementation, the\nsyntax for targets is taken to be the same as for expressions, and\ninvalid syntax is rejected during the code generation phase, causing\nless detailed error messages.\n\nWARNING: Although the definition of assignment implies that overlaps\nbetween the left-hand side and the right-hand side are \'safe\' (for\nexample ``a, b = b, a`` swaps two variables), overlaps *within* the\ncollection of assigned-to variables are not safe! For instance, the\nfollowing program prints ``[0, 2]``:\n\n x = [0, 1]\n i = 0\n i, x[i] = 1, 2\n print x\n\n\nAugmented assignment statements\n===============================\n\nAugmented assignment is the combination, in a single statement, of a\nbinary operation and an assignment statement:\n\n augmented_assignment_stmt ::= augtarget augop (expression_list | yield_expression)\n augtarget ::= identifier | attributeref | subscription | slicing\n augop ::= "+=" | "-=" | "*=" | "/=" | "//=" | "%=" | "**="\n | ">>=" | "<<=" | "&=" | "^=" | "|="\n\n(See section *Primaries* for the syntax definitions for the last three\nsymbols.)\n\nAn augmented assignment evaluates the target (which, unlike normal\nassignment statements, cannot be an unpacking) and the expression\nlist, performs the binary operation specific to the type of assignment\non the two operands, and assigns the result to the original target.\nThe target is only evaluated once.\n\nAn augmented assignment expression like ``x += 1`` can be rewritten as\n``x = x + 1`` to achieve a similar, but not exactly equal effect. In\nthe augmented version, ``x`` is only evaluated once. Also, when\npossible, the actual operation is performed *in-place*, meaning that\nrather than creating a new object and assigning that to the target,\nthe old object is modified instead.\n\nWith the exception of assigning to tuples and multiple targets in a\nsingle statement, the assignment done by augmented assignment\nstatements is handled the same way as normal assignments. Similarly,\nwith the exception of the possible *in-place* behavior, the binary\noperation performed by augmented assignment is the same as the normal\nbinary operations.\n\nFor targets which are attribute references, the same *caveat about\nclass and instance attributes* applies as for regular assignments.\n', + 'assignment': u'\nAssignment statements\n*********************\n\nAssignment statements are used to (re)bind names to values and to\nmodify attributes or items of mutable objects:\n\n assignment_stmt ::= (target_list "=")+ (expression_list | yield_expression)\n target_list ::= target ("," target)* [","]\n target ::= identifier\n | "(" target_list ")"\n | "[" target_list "]"\n | attributeref\n | subscription\n | slicing\n\n(See section *Primaries* for the syntax definitions for the last three\nsymbols.)\n\nAn assignment statement evaluates the expression list (remember that\nthis can be a single expression or a comma-separated list, the latter\nyielding a tuple) and assigns the single resulting object to each of\nthe target lists, from left to right.\n\nAssignment is defined recursively depending on the form of the target\n(list). 
When a target is part of a mutable object (an attribute\nreference, subscription or slicing), the mutable object must\nultimately perform the assignment and decide about its validity, and\nmay raise an exception if the assignment is unacceptable. The rules\nobserved by various types and the exceptions raised are given with the\ndefinition of the object types (see section *The standard type\nhierarchy*).\n\nAssignment of an object to a target list is recursively defined as\nfollows.\n\n* If the target list is a single target: The object is assigned to\n that target.\n\n* If the target list is a comma-separated list of targets: The object\n must be an iterable with the same number of items as there are\n targets in the target list, and the items are assigned, from left to\n right, to the corresponding targets.\n\nAssignment of an object to a single target is recursively defined as\nfollows.\n\n* If the target is an identifier (name):\n\n * If the name does not occur in a ``global`` statement in the\n current code block: the name is bound to the object in the current\n local namespace.\n\n * Otherwise: the name is bound to the object in the current global\n namespace.\n\n The name is rebound if it was already bound. This may cause the\n reference count for the object previously bound to the name to reach\n zero, causing the object to be deallocated and its destructor (if it\n has one) to be called.\n\n* If the target is a target list enclosed in parentheses or in square\n brackets: The object must be an iterable with the same number of\n items as there are targets in the target list, and its items are\n assigned, from left to right, to the corresponding targets.\n\n* If the target is an attribute reference: The primary expression in\n the reference is evaluated. It should yield an object with\n assignable attributes; if this is not the case, ``TypeError`` is\n raised. That object is then asked to assign the assigned object to\n the given attribute; if it cannot perform the assignment, it raises\n an exception (usually but not necessarily ``AttributeError``).\n\n Note: If the object is a class instance and the attribute reference\n occurs on both sides of the assignment operator, the RHS expression,\n ``a.x`` can access either an instance attribute or (if no instance\n attribute exists) a class attribute. The LHS target ``a.x`` is\n always set as an instance attribute, creating it if necessary.\n Thus, the two occurrences of ``a.x`` do not necessarily refer to the\n same attribute: if the RHS expression refers to a class attribute,\n the LHS creates a new instance attribute as the target of the\n assignment:\n\n class Cls:\n x = 3 # class variable\n inst = Cls()\n inst.x = inst.x + 1 # writes inst.x as 4 leaving Cls.x as 3\n\n This description does not necessarily apply to descriptor\n attributes, such as properties created with ``property()``.\n\n* If the target is a subscription: The primary expression in the\n reference is evaluated. It should yield either a mutable sequence\n object (such as a list) or a mapping object (such as a dictionary).\n Next, the subscript expression is evaluated.\n\n If the primary is a mutable sequence object (such as a list), the\n subscript must yield a plain integer. If it is negative, the\n sequence\'s length is added to it. The resulting value must be a\n nonnegative integer less than the sequence\'s length, and the\n sequence is asked to assign the assigned object to its item with\n that index. 
If the index is out of range, ``IndexError`` is raised\n (assignment to a subscripted sequence cannot add new items to a\n list).\n\n If the primary is a mapping object (such as a dictionary), the\n subscript must have a type compatible with the mapping\'s key type,\n and the mapping is then asked to create a key/datum pair which maps\n the subscript to the assigned object. This can either replace an\n existing key/value pair with the same key value, or insert a new\n key/value pair (if no key with the same value existed).\n\n* If the target is a slicing: The primary expression in the reference\n is evaluated. It should yield a mutable sequence object (such as a\n list). The assigned object should be a sequence object of the same\n type. Next, the lower and upper bound expressions are evaluated,\n insofar they are present; defaults are zero and the sequence\'s\n length. The bounds should evaluate to (small) integers. If either\n bound is negative, the sequence\'s length is added to it. The\n resulting bounds are clipped to lie between zero and the sequence\'s\n length, inclusive. Finally, the sequence object is asked to replace\n the slice with the items of the assigned sequence. The length of\n the slice may be different from the length of the assigned sequence,\n thus changing the length of the target sequence, if the object\n allows it.\n\n**CPython implementation detail:** In the current implementation, the\nsyntax for targets is taken to be the same as for expressions, and\ninvalid syntax is rejected during the code generation phase, causing\nless detailed error messages.\n\nWARNING: Although the definition of assignment implies that overlaps\nbetween the left-hand side and the right-hand side are \'safe\' (for\nexample ``a, b = b, a`` swaps two variables), overlaps *within* the\ncollection of assigned-to variables are not safe! For instance, the\nfollowing program prints ``[0, 2]``:\n\n x = [0, 1]\n i = 0\n i, x[i] = 1, 2\n print x\n\n\nAugmented assignment statements\n===============================\n\nAugmented assignment is the combination, in a single statement, of a\nbinary operation and an assignment statement:\n\n augmented_assignment_stmt ::= augtarget augop (expression_list | yield_expression)\n augtarget ::= identifier | attributeref | subscription | slicing\n augop ::= "+=" | "-=" | "*=" | "/=" | "//=" | "%=" | "**="\n | ">>=" | "<<=" | "&=" | "^=" | "|="\n\n(See section *Primaries* for the syntax definitions for the last three\nsymbols.)\n\nAn augmented assignment evaluates the target (which, unlike normal\nassignment statements, cannot be an unpacking) and the expression\nlist, performs the binary operation specific to the type of assignment\non the two operands, and assigns the result to the original target.\nThe target is only evaluated once.\n\nAn augmented assignment expression like ``x += 1`` can be rewritten as\n``x = x + 1`` to achieve a similar, but not exactly equal effect. In\nthe augmented version, ``x`` is only evaluated once. Also, when\npossible, the actual operation is performed *in-place*, meaning that\nrather than creating a new object and assigning that to the target,\nthe old object is modified instead.\n\nWith the exception of assigning to tuples and multiple targets in a\nsingle statement, the assignment done by augmented assignment\nstatements is handled the same way as normal assignments. 
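# A minimal sketch of the in-place behaviour described above: augmented
# assignment mutates a mutable object where possible, plain assignment rebinds.
lst = [1, 2]
alias = lst
lst += [3]                # list.__iadd__ extends the list in place; alias sees it
assert alias == [1, 2, 3]
lst = lst + [4]           # creates a new list and rebinds only 'lst'
assert alias == [1, 2, 3] and lst == [1, 2, 3, 4]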
Similarly,\nwith the exception of the possible *in-place* behavior, the binary\noperation performed by augmented assignment is the same as the normal\nbinary operations.\n\nFor targets which are attribute references, the same *caveat about\nclass and instance attributes* applies as for regular assignments.\n', 'atom-identifiers': u'\nIdentifiers (Names)\n*******************\n\nAn identifier occurring as an atom is a name. See section\n*Identifiers and keywords* for lexical definition and section *Naming\nand binding* for documentation of naming and binding.\n\nWhen the name is bound to an object, evaluation of the atom yields\nthat object. When a name is not bound, an attempt to evaluate it\nraises a ``NameError`` exception.\n\n**Private name mangling:** When an identifier that textually occurs in\na class definition begins with two or more underscore characters and\ndoes not end in two or more underscores, it is considered a *private\nname* of that class. Private names are transformed to a longer form\nbefore code is generated for them. The transformation inserts the\nclass name in front of the name, with leading underscores removed, and\na single underscore inserted in front of the class name. For example,\nthe identifier ``__spam`` occurring in a class named ``Ham`` will be\ntransformed to ``_Ham__spam``. This transformation is independent of\nthe syntactical context in which the identifier is used. If the\ntransformed name is extremely long (longer than 255 characters),\nimplementation defined truncation may happen. If the class name\nconsists only of underscores, no transformation is done.\n', 'atom-literals': u"\nLiterals\n********\n\nPython supports string literals and various numeric literals:\n\n literal ::= stringliteral | integer | longinteger\n | floatnumber | imagnumber\n\nEvaluation of a literal yields an object of the given type (string,\ninteger, long integer, floating point number, complex number) with the\ngiven value. The value may be approximated in the case of floating\npoint and imaginary (complex) literals. See section *Literals* for\ndetails.\n\nAll literals correspond to immutable data types, and hence the\nobject's identity is less important than its value. Multiple\nevaluations of literals with the same value (either the same\noccurrence in the program text or a different occurrence) may obtain\nthe same object or a different object with the same value.\n", - 'attribute-access': u'\nCustomizing attribute access\n****************************\n\nThe following methods can be defined to customize the meaning of\nattribute access (use of, assignment to, or deletion of ``x.name``)\nfor class instances.\n\nobject.__getattr__(self, name)\n\n Called when an attribute lookup has not found the attribute in the\n usual places (i.e. it is not an instance attribute nor is it found\n in the class tree for ``self``). ``name`` is the attribute name.\n This method should return the (computed) attribute value or raise\n an ``AttributeError`` exception.\n\n Note that if the attribute is found through the normal mechanism,\n ``__getattr__()`` is not called. (This is an intentional asymmetry\n between ``__getattr__()`` and ``__setattr__()``.) This is done both\n for efficiency reasons and because otherwise ``__getattr__()``\n would have no way to access other attributes of the instance. Note\n that at least for instance variables, you can fake total control by\n not inserting any values in the instance attribute dictionary (but\n instead inserting them in another object). 
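# A minimal sketch of the pattern described above: attribute values are kept
# in a separate mapping rather than the instance __dict__ ('Recorder' is an
# illustrative name, not from the documentation).
class Recorder(object):
    def __init__(self):
        object.__setattr__(self, '_store', {})   # bypass our own __setattr__
    def __setattr__(self, name, value):
        self._store[name] = value
    def __getattr__(self, name):                 # only called on failed lookups
        try:
            return self._store[name]
        except KeyError:
            raise AttributeError(name)

r = Recorder()
r.x = 42
assert r.x == 42 and '_store' in r.__dict__ and 'x' not in r.__dict__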
See the\n ``__getattribute__()`` method below for a way to actually get total\n control in new-style classes.\n\nobject.__setattr__(self, name, value)\n\n Called when an attribute assignment is attempted. This is called\n instead of the normal mechanism (i.e. store the value in the\n instance dictionary). *name* is the attribute name, *value* is the\n value to be assigned to it.\n\n If ``__setattr__()`` wants to assign to an instance attribute, it\n should not simply execute ``self.name = value`` --- this would\n cause a recursive call to itself. Instead, it should insert the\n value in the dictionary of instance attributes, e.g.,\n ``self.__dict__[name] = value``. For new-style classes, rather\n than accessing the instance dictionary, it should call the base\n class method with the same name, for example,\n ``object.__setattr__(self, name, value)``.\n\nobject.__delattr__(self, name)\n\n Like ``__setattr__()`` but for attribute deletion instead of\n assignment. This should only be implemented if ``del obj.name`` is\n meaningful for the object.\n\n\nMore attribute access for new-style classes\n===========================================\n\nThe following methods only apply to new-style classes.\n\nobject.__getattribute__(self, name)\n\n Called unconditionally to implement attribute accesses for\n instances of the class. If the class also defines\n ``__getattr__()``, the latter will not be called unless\n ``__getattribute__()`` either calls it explicitly or raises an\n ``AttributeError``. This method should return the (computed)\n attribute value or raise an ``AttributeError`` exception. In order\n to avoid infinite recursion in this method, its implementation\n should always call the base class method with the same name to\n access any attributes it needs, for example,\n ``object.__getattribute__(self, name)``.\n\n Note: This method may still be bypassed when looking up special methods\n as the result of implicit invocation via language syntax or\n built-in functions. See *Special method lookup for new-style\n classes*.\n\n\nImplementing Descriptors\n========================\n\nThe following methods only apply when an instance of the class\ncontaining the method (a so-called *descriptor* class) appears in the\nclass dictionary of another new-style class, known as the *owner*\nclass. In the examples below, "the attribute" refers to the attribute\nwhose name is the key of the property in the owner class\'\n``__dict__``. Descriptors can only be implemented as new-style\nclasses themselves.\n\nobject.__get__(self, instance, owner)\n\n Called to get the attribute of the owner class (class attribute\n access) or of an instance of that class (instance attribute\n access). *owner* is always the owner class, while *instance* is the\n instance that the attribute was accessed through, or ``None`` when\n the attribute is accessed through the *owner*. This method should\n return the (computed) attribute value or raise an\n ``AttributeError`` exception.\n\nobject.__set__(self, instance, value)\n\n Called to set the attribute on an instance *instance* of the owner\n class to a new value, *value*.\n\nobject.__delete__(self, instance)\n\n Called to delete the attribute on an instance *instance* of the\n owner class.\n\n\nInvoking Descriptors\n====================\n\nIn general, a descriptor is an object attribute with "binding\nbehavior", one whose attribute access has been overridden by methods\nin the descriptor protocol: ``__get__()``, ``__set__()``, and\n``__delete__()``. 
If any of those methods are defined for an object,\nit is said to be a descriptor.\n\nThe default behavior for attribute access is to get, set, or delete\nthe attribute from an object\'s dictionary. For instance, ``a.x`` has a\nlookup chain starting with ``a.__dict__[\'x\']``, then\n``type(a).__dict__[\'x\']``, and continuing through the base classes of\n``type(a)`` excluding metaclasses.\n\nHowever, if the looked-up value is an object defining one of the\ndescriptor methods, then Python may override the default behavior and\ninvoke the descriptor method instead. Where this occurs in the\nprecedence chain depends on which descriptor methods were defined and\nhow they were called. Note that descriptors are only invoked for new\nstyle objects or classes (ones that subclass ``object()`` or\n``type()``).\n\nThe starting point for descriptor invocation is a binding, ``a.x``.\nHow the arguments are assembled depends on ``a``:\n\nDirect Call\n The simplest and least common call is when user code directly\n invokes a descriptor method: ``x.__get__(a)``.\n\nInstance Binding\n If binding to a new-style object instance, ``a.x`` is transformed\n into the call: ``type(a).__dict__[\'x\'].__get__(a, type(a))``.\n\nClass Binding\n If binding to a new-style class, ``A.x`` is transformed into the\n call: ``A.__dict__[\'x\'].__get__(None, A)``.\n\nSuper Binding\n If ``a`` is an instance of ``super``, then the binding ``super(B,\n obj).m()`` searches ``obj.__class__.__mro__`` for the base class\n ``A`` immediately preceding ``B`` and then invokes the descriptor\n with the call: ``A.__dict__[\'m\'].__get__(obj, A)``.\n\nFor instance bindings, the precedence of descriptor invocation depends\non the which descriptor methods are defined. A descriptor can define\nany combination of ``__get__()``, ``__set__()`` and ``__delete__()``.\nIf it does not define ``__get__()``, then accessing the attribute will\nreturn the descriptor object itself unless there is a value in the\nobject\'s instance dictionary. If the descriptor defines ``__set__()``\nand/or ``__delete__()``, it is a data descriptor; if it defines\nneither, it is a non-data descriptor. Normally, data descriptors\ndefine both ``__get__()`` and ``__set__()``, while non-data\ndescriptors have just the ``__get__()`` method. Data descriptors with\n``__set__()`` and ``__get__()`` defined always override a redefinition\nin an instance dictionary. In contrast, non-data descriptors can be\noverridden by instances.\n\nPython methods (including ``staticmethod()`` and ``classmethod()``)\nare implemented as non-data descriptors. Accordingly, instances can\nredefine and override methods. This allows individual instances to\nacquire behaviors that differ from other instances of the same class.\n\nThe ``property()`` function is implemented as a data descriptor.\nAccordingly, instances cannot override the behavior of a property.\n\n\n__slots__\n=========\n\nBy default, instances of both old and new-style classes have a\ndictionary for attribute storage. This wastes space for objects\nhaving very few instance variables. The space consumption can become\nacute when creating large numbers of instances.\n\nThe default can be overridden by defining *__slots__* in a new-style\nclass definition. The *__slots__* declaration takes a sequence of\ninstance variables and reserves just enough space in each instance to\nhold a value for each variable. 
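# A minimal sketch of *__slots__* as described above, for a new-style class
# ('Point' is an illustrative name).
class Point(object):
    __slots__ = ('x', 'y')    # no per-instance __dict__ is created

p = Point()
p.x, p.y = 1.0, 2.0
assert not hasattr(p, '__dict__')
try:
    p.z = 3.0                 # not listed in __slots__
except AttributeError:
    pass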
Space is saved because *__dict__* is\nnot created for each instance.\n\n__slots__\n\n This class variable can be assigned a string, iterable, or sequence\n of strings with variable names used by instances. If defined in a\n new-style class, *__slots__* reserves space for the declared\n variables and prevents the automatic creation of *__dict__* and\n *__weakref__* for each instance.\n\n New in version 2.2.\n\nNotes on using *__slots__*\n\n* When inheriting from a class without *__slots__*, the *__dict__*\n attribute of that class will always be accessible, so a *__slots__*\n definition in the subclass is meaningless.\n\n* Without a *__dict__* variable, instances cannot be assigned new\n variables not listed in the *__slots__* definition. Attempts to\n assign to an unlisted variable name raises ``AttributeError``. If\n dynamic assignment of new variables is desired, then add\n ``\'__dict__\'`` to the sequence of strings in the *__slots__*\n declaration.\n\n Changed in version 2.3: Previously, adding ``\'__dict__\'`` to the\n *__slots__* declaration would not enable the assignment of new\n attributes not specifically listed in the sequence of instance\n variable names.\n\n* Without a *__weakref__* variable for each instance, classes defining\n *__slots__* do not support weak references to its instances. If weak\n reference support is needed, then add ``\'__weakref__\'`` to the\n sequence of strings in the *__slots__* declaration.\n\n Changed in version 2.3: Previously, adding ``\'__weakref__\'`` to the\n *__slots__* declaration would not enable support for weak\n references.\n\n* *__slots__* are implemented at the class level by creating\n descriptors (*Implementing Descriptors*) for each variable name. As\n a result, class attributes cannot be used to set default values for\n instance variables defined by *__slots__*; otherwise, the class\n attribute would overwrite the descriptor assignment.\n\n* The action of a *__slots__* declaration is limited to the class\n where it is defined. As a result, subclasses will have a *__dict__*\n unless they also define *__slots__* (which must only contain names\n of any *additional* slots).\n\n* If a class defines a slot also defined in a base class, the instance\n variable defined by the base class slot is inaccessible (except by\n retrieving its descriptor directly from the base class). This\n renders the meaning of the program undefined. In the future, a\n check may be added to prevent this.\n\n* Nonempty *__slots__* does not work for classes derived from\n "variable-length" built-in types such as ``long``, ``str`` and\n ``tuple``.\n\n* Any non-string iterable may be assigned to *__slots__*. Mappings may\n also be used; however, in the future, special meaning may be\n assigned to the values corresponding to each key.\n\n* *__class__* assignment works only if both classes have the same\n *__slots__*.\n\n Changed in version 2.6: Previously, *__class__* assignment raised an\n error if either new or old class had *__slots__*.\n', + 'attribute-access': u'\nCustomizing attribute access\n****************************\n\nThe following methods can be defined to customize the meaning of\nattribute access (use of, assignment to, or deletion of ``x.name``)\nfor class instances.\n\nobject.__getattr__(self, name)\n\n Called when an attribute lookup has not found the attribute in the\n usual places (i.e. it is not an instance attribute nor is it found\n in the class tree for ``self``). 
``name`` is the attribute name.\n This method should return the (computed) attribute value or raise\n an ``AttributeError`` exception.\n\n Note that if the attribute is found through the normal mechanism,\n ``__getattr__()`` is not called. (This is an intentional asymmetry\n between ``__getattr__()`` and ``__setattr__()``.) This is done both\n for efficiency reasons and because otherwise ``__getattr__()``\n would have no way to access other attributes of the instance. Note\n that at least for instance variables, you can fake total control by\n not inserting any values in the instance attribute dictionary (but\n instead inserting them in another object). See the\n ``__getattribute__()`` method below for a way to actually get total\n control in new-style classes.\n\nobject.__setattr__(self, name, value)\n\n Called when an attribute assignment is attempted. This is called\n instead of the normal mechanism (i.e. store the value in the\n instance dictionary). *name* is the attribute name, *value* is the\n value to be assigned to it.\n\n If ``__setattr__()`` wants to assign to an instance attribute, it\n should not simply execute ``self.name = value`` --- this would\n cause a recursive call to itself. Instead, it should insert the\n value in the dictionary of instance attributes, e.g.,\n ``self.__dict__[name] = value``. For new-style classes, rather\n than accessing the instance dictionary, it should call the base\n class method with the same name, for example,\n ``object.__setattr__(self, name, value)``.\n\nobject.__delattr__(self, name)\n\n Like ``__setattr__()`` but for attribute deletion instead of\n assignment. This should only be implemented if ``del obj.name`` is\n meaningful for the object.\n\n\nMore attribute access for new-style classes\n===========================================\n\nThe following methods only apply to new-style classes.\n\nobject.__getattribute__(self, name)\n\n Called unconditionally to implement attribute accesses for\n instances of the class. If the class also defines\n ``__getattr__()``, the latter will not be called unless\n ``__getattribute__()`` either calls it explicitly or raises an\n ``AttributeError``. This method should return the (computed)\n attribute value or raise an ``AttributeError`` exception. In order\n to avoid infinite recursion in this method, its implementation\n should always call the base class method with the same name to\n access any attributes it needs, for example,\n ``object.__getattribute__(self, name)``.\n\n Note: This method may still be bypassed when looking up special methods\n as the result of implicit invocation via language syntax or\n built-in functions. See *Special method lookup for new-style\n classes*.\n\n\nImplementing Descriptors\n========================\n\nThe following methods only apply when an instance of the class\ncontaining the method (a so-called *descriptor* class) appears in an\n*owner* class (the descriptor must be in either the owner\'s class\ndictionary or in the class dictionary for one of its parents). In the\nexamples below, "the attribute" refers to the attribute whose name is\nthe key of the property in the owner class\' ``__dict__``.\n\nobject.__get__(self, instance, owner)\n\n Called to get the attribute of the owner class (class attribute\n access) or of an instance of that class (instance attribute\n access). *owner* is always the owner class, while *instance* is the\n instance that the attribute was accessed through, or ``None`` when\n the attribute is accessed through the *owner*. 
This method should\n return the (computed) attribute value or raise an\n ``AttributeError`` exception.\n\nobject.__set__(self, instance, value)\n\n Called to set the attribute on an instance *instance* of the owner\n class to a new value, *value*.\n\nobject.__delete__(self, instance)\n\n Called to delete the attribute on an instance *instance* of the\n owner class.\n\n\nInvoking Descriptors\n====================\n\nIn general, a descriptor is an object attribute with "binding\nbehavior", one whose attribute access has been overridden by methods\nin the descriptor protocol: ``__get__()``, ``__set__()``, and\n``__delete__()``. If any of those methods are defined for an object,\nit is said to be a descriptor.\n\nThe default behavior for attribute access is to get, set, or delete\nthe attribute from an object\'s dictionary. For instance, ``a.x`` has a\nlookup chain starting with ``a.__dict__[\'x\']``, then\n``type(a).__dict__[\'x\']``, and continuing through the base classes of\n``type(a)`` excluding metaclasses.\n\nHowever, if the looked-up value is an object defining one of the\ndescriptor methods, then Python may override the default behavior and\ninvoke the descriptor method instead. Where this occurs in the\nprecedence chain depends on which descriptor methods were defined and\nhow they were called. Note that descriptors are only invoked for new\nstyle objects or classes (ones that subclass ``object()`` or\n``type()``).\n\nThe starting point for descriptor invocation is a binding, ``a.x``.\nHow the arguments are assembled depends on ``a``:\n\nDirect Call\n The simplest and least common call is when user code directly\n invokes a descriptor method: ``x.__get__(a)``.\n\nInstance Binding\n If binding to a new-style object instance, ``a.x`` is transformed\n into the call: ``type(a).__dict__[\'x\'].__get__(a, type(a))``.\n\nClass Binding\n If binding to a new-style class, ``A.x`` is transformed into the\n call: ``A.__dict__[\'x\'].__get__(None, A)``.\n\nSuper Binding\n If ``a`` is an instance of ``super``, then the binding ``super(B,\n obj).m()`` searches ``obj.__class__.__mro__`` for the base class\n ``A`` immediately preceding ``B`` and then invokes the descriptor\n with the call: ``A.__dict__[\'m\'].__get__(obj, obj.__class__)``.\n\nFor instance bindings, the precedence of descriptor invocation depends\non the which descriptor methods are defined. A descriptor can define\nany combination of ``__get__()``, ``__set__()`` and ``__delete__()``.\nIf it does not define ``__get__()``, then accessing the attribute will\nreturn the descriptor object itself unless there is a value in the\nobject\'s instance dictionary. If the descriptor defines ``__set__()``\nand/or ``__delete__()``, it is a data descriptor; if it defines\nneither, it is a non-data descriptor. Normally, data descriptors\ndefine both ``__get__()`` and ``__set__()``, while non-data\ndescriptors have just the ``__get__()`` method. Data descriptors with\n``__set__()`` and ``__get__()`` defined always override a redefinition\nin an instance dictionary. In contrast, non-data descriptors can be\noverridden by instances.\n\nPython methods (including ``staticmethod()`` and ``classmethod()``)\nare implemented as non-data descriptors. Accordingly, instances can\nredefine and override methods. 
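# A minimal sketch of a data descriptor (it defines both __get__ and __set__),
# which therefore takes precedence over the instance dictionary as described
# above ('Positive' and 'Account' are illustrative names).
class Positive(object):
    def __get__(self, instance, owner):
        if instance is None:
            return self
        return instance.__dict__.get('value', 0)
    def __set__(self, instance, value):
        if value < 0:
            raise ValueError('negative')
        instance.__dict__['value'] = value

class Account(object):
    value = Positive()

acct = Account()
acct.value = 10
assert acct.value == 10       # __get__ is used even though __dict__ holds 'value'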
This allows individual instances to\nacquire behaviors that differ from other instances of the same class.\n\nThe ``property()`` function is implemented as a data descriptor.\nAccordingly, instances cannot override the behavior of a property.\n\n\n__slots__\n=========\n\nBy default, instances of both old and new-style classes have a\ndictionary for attribute storage. This wastes space for objects\nhaving very few instance variables. The space consumption can become\nacute when creating large numbers of instances.\n\nThe default can be overridden by defining *__slots__* in a new-style\nclass definition. The *__slots__* declaration takes a sequence of\ninstance variables and reserves just enough space in each instance to\nhold a value for each variable. Space is saved because *__dict__* is\nnot created for each instance.\n\n__slots__\n\n This class variable can be assigned a string, iterable, or sequence\n of strings with variable names used by instances. If defined in a\n new-style class, *__slots__* reserves space for the declared\n variables and prevents the automatic creation of *__dict__* and\n *__weakref__* for each instance.\n\n New in version 2.2.\n\nNotes on using *__slots__*\n\n* When inheriting from a class without *__slots__*, the *__dict__*\n attribute of that class will always be accessible, so a *__slots__*\n definition in the subclass is meaningless.\n\n* Without a *__dict__* variable, instances cannot be assigned new\n variables not listed in the *__slots__* definition. Attempts to\n assign to an unlisted variable name raises ``AttributeError``. If\n dynamic assignment of new variables is desired, then add\n ``\'__dict__\'`` to the sequence of strings in the *__slots__*\n declaration.\n\n Changed in version 2.3: Previously, adding ``\'__dict__\'`` to the\n *__slots__* declaration would not enable the assignment of new\n attributes not specifically listed in the sequence of instance\n variable names.\n\n* Without a *__weakref__* variable for each instance, classes defining\n *__slots__* do not support weak references to its instances. If weak\n reference support is needed, then add ``\'__weakref__\'`` to the\n sequence of strings in the *__slots__* declaration.\n\n Changed in version 2.3: Previously, adding ``\'__weakref__\'`` to the\n *__slots__* declaration would not enable support for weak\n references.\n\n* *__slots__* are implemented at the class level by creating\n descriptors (*Implementing Descriptors*) for each variable name. As\n a result, class attributes cannot be used to set default values for\n instance variables defined by *__slots__*; otherwise, the class\n attribute would overwrite the descriptor assignment.\n\n* The action of a *__slots__* declaration is limited to the class\n where it is defined. As a result, subclasses will have a *__dict__*\n unless they also define *__slots__* (which must only contain names\n of any *additional* slots).\n\n* If a class defines a slot also defined in a base class, the instance\n variable defined by the base class slot is inaccessible (except by\n retrieving its descriptor directly from the base class). This\n renders the meaning of the program undefined. In the future, a\n check may be added to prevent this.\n\n* Nonempty *__slots__* does not work for classes derived from\n "variable-length" built-in types such as ``long``, ``str`` and\n ``tuple``.\n\n* Any non-string iterable may be assigned to *__slots__*. 
Mappings may\n also be used; however, in the future, special meaning may be\n assigned to the values corresponding to each key.\n\n* *__class__* assignment works only if both classes have the same\n *__slots__*.\n\n Changed in version 2.6: Previously, *__class__* assignment raised an\n error if either new or old class had *__slots__*.\n', 'attribute-references': u'\nAttribute references\n********************\n\nAn attribute reference is a primary followed by a period and a name:\n\n attributeref ::= primary "." identifier\n\nThe primary must evaluate to an object of a type that supports\nattribute references, e.g., a module, list, or an instance. This\nobject is then asked to produce the attribute whose name is the\nidentifier. If this attribute is not available, the exception\n``AttributeError`` is raised. Otherwise, the type and value of the\nobject produced is determined by the object. Multiple evaluations of\nthe same attribute reference may yield different objects.\n', 'augassign': u'\nAugmented assignment statements\n*******************************\n\nAugmented assignment is the combination, in a single statement, of a\nbinary operation and an assignment statement:\n\n augmented_assignment_stmt ::= augtarget augop (expression_list | yield_expression)\n augtarget ::= identifier | attributeref | subscription | slicing\n augop ::= "+=" | "-=" | "*=" | "/=" | "//=" | "%=" | "**="\n | ">>=" | "<<=" | "&=" | "^=" | "|="\n\n(See section *Primaries* for the syntax definitions for the last three\nsymbols.)\n\nAn augmented assignment evaluates the target (which, unlike normal\nassignment statements, cannot be an unpacking) and the expression\nlist, performs the binary operation specific to the type of assignment\non the two operands, and assigns the result to the original target.\nThe target is only evaluated once.\n\nAn augmented assignment expression like ``x += 1`` can be rewritten as\n``x = x + 1`` to achieve a similar, but not exactly equal effect. In\nthe augmented version, ``x`` is only evaluated once. Also, when\npossible, the actual operation is performed *in-place*, meaning that\nrather than creating a new object and assigning that to the target,\nthe old object is modified instead.\n\nWith the exception of assigning to tuples and multiple targets in a\nsingle statement, the assignment done by augmented assignment\nstatements is handled the same way as normal assignments. Similarly,\nwith the exception of the possible *in-place* behavior, the binary\noperation performed by augmented assignment is the same as the normal\nbinary operations.\n\nFor targets which are attribute references, the same *caveat about\nclass and instance attributes* applies as for regular assignments.\n', 'binary': u'\nBinary arithmetic operations\n****************************\n\nThe binary arithmetic operations have the conventional priority\nlevels. Note that some of these operations also apply to certain non-\nnumeric types. Apart from the power operator, there are only two\nlevels, one for multiplicative operators and one for additive\noperators:\n\n m_expr ::= u_expr | m_expr "*" u_expr | m_expr "//" u_expr | m_expr "/" u_expr\n | m_expr "%" u_expr\n a_expr ::= m_expr | a_expr "+" m_expr | a_expr "-" m_expr\n\nThe ``*`` (multiplication) operator yields the product of its\narguments. The arguments must either both be numbers, or one argument\nmust be an integer (plain or long) and the other must be a sequence.\nIn the former case, the numbers are converted to a common type and\nthen multiplied together. 
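# A minimal sketch of the '*' operator cases described above: number * number
# versus integer * sequence.
assert 3 * 4 == 12
assert 'ab' * 3 == 'ababab'
assert 2 * [0] == [0, 0]      # either operand may be the integer
assert 'ab' * -1 == ''        # a negative repetition factor gives an empty sequence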
In the latter case, sequence repetition is\nperformed; a negative repetition factor yields an empty sequence.\n\nThe ``/`` (division) and ``//`` (floor division) operators yield the\nquotient of their arguments. The numeric arguments are first\nconverted to a common type. Plain or long integer division yields an\ninteger of the same type; the result is that of mathematical division\nwith the \'floor\' function applied to the result. Division by zero\nraises the ``ZeroDivisionError`` exception.\n\nThe ``%`` (modulo) operator yields the remainder from the division of\nthe first argument by the second. The numeric arguments are first\nconverted to a common type. A zero right argument raises the\n``ZeroDivisionError`` exception. The arguments may be floating point\nnumbers, e.g., ``3.14%0.7`` equals ``0.34`` (since ``3.14`` equals\n``4*0.7 + 0.34``.) The modulo operator always yields a result with\nthe same sign as its second operand (or zero); the absolute value of\nthe result is strictly smaller than the absolute value of the second\noperand [2].\n\nThe integer division and modulo operators are connected by the\nfollowing identity: ``x == (x/y)*y + (x%y)``. Integer division and\nmodulo are also connected with the built-in function ``divmod()``:\n``divmod(x, y) == (x/y, x%y)``. These identities don\'t hold for\nfloating point numbers; there similar identities hold approximately\nwhere ``x/y`` is replaced by ``floor(x/y)`` or ``floor(x/y) - 1`` [3].\n\nIn addition to performing the modulo operation on numbers, the ``%``\noperator is also overloaded by string and unicode objects to perform\nstring formatting (also known as interpolation). The syntax for string\nformatting is described in the Python Library Reference, section\n*String Formatting Operations*.\n\nDeprecated since version 2.3: The floor division operator, the modulo\noperator, and the ``divmod()`` function are no longer defined for\ncomplex numbers. Instead, convert to a floating point number using\nthe ``abs()`` function if appropriate.\n\nThe ``+`` (addition) operator yields the sum of its arguments. The\narguments must either both be numbers or both sequences of the same\ntype. In the former case, the numbers are converted to a common type\nand then added together. In the latter case, the sequences are\nconcatenated.\n\nThe ``-`` (subtraction) operator yields the difference of its\narguments. The numeric arguments are first converted to a common\ntype.\n', 'bitwise': u'\nBinary bitwise operations\n*************************\n\nEach of the three bitwise operations has a different priority level:\n\n and_expr ::= shift_expr | and_expr "&" shift_expr\n xor_expr ::= and_expr | xor_expr "^" and_expr\n or_expr ::= xor_expr | or_expr "|" xor_expr\n\nThe ``&`` operator yields the bitwise AND of its arguments, which must\nbe plain or long integers. The arguments are converted to a common\ntype.\n\nThe ``^`` operator yields the bitwise XOR (exclusive OR) of its\narguments, which must be plain or long integers. The arguments are\nconverted to a common type.\n\nThe ``|`` operator yields the bitwise (inclusive) OR of its arguments,\nwhich must be plain or long integers. The arguments are converted to\na common type.\n', 'bltin-code-objects': u'\nCode Objects\n************\n\nCode objects are used by the implementation to represent "pseudo-\ncompiled" executable Python code such as a function body. They differ\nfrom function objects because they don\'t contain a reference to their\nglobal execution environment. 
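# A minimal sketch of code objects as described above (Python 2 attribute
# spelling, 'func_code').
code = compile('a + 1', '<string>', 'eval')
assert eval(code, {'a': 41}) == 42

def f():
    return None
assert f.func_code.co_name == 'f'     # the function body's code object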
Code objects are returned by the built-\nin ``compile()`` function and can be extracted from function objects\nthrough their ``func_code`` attribute. See also the ``code`` module.\n\nA code object can be executed or evaluated by passing it (instead of a\nsource string) to the ``exec`` statement or the built-in ``eval()``\nfunction.\n\nSee *The standard type hierarchy* for more information.\n', 'bltin-ellipsis-object': u'\nThe Ellipsis Object\n*******************\n\nThis object is used by extended slice notation (see *Slicings*). It\nsupports no special operations. There is exactly one ellipsis object,\nnamed ``Ellipsis`` (a built-in name).\n\nIt is written as ``Ellipsis``.\n', - 'bltin-file-objects': u'\nFile Objects\n************\n\nFile objects are implemented using C\'s ``stdio`` package and can be\ncreated with the built-in ``open()`` function. File objects are also\nreturned by some other built-in functions and methods, such as\n``os.popen()`` and ``os.fdopen()`` and the ``makefile()`` method of\nsocket objects. Temporary files can be created using the ``tempfile``\nmodule, and high-level file operations such as copying, moving, and\ndeleting files and directories can be achieved with the ``shutil``\nmodule.\n\nWhen a file operation fails for an I/O-related reason, the exception\n``IOError`` is raised. This includes situations where the operation\nis not defined for some reason, like ``seek()`` on a tty device or\nwriting a file opened for reading.\n\nFiles have the following methods:\n\nfile.close()\n\n Close the file. A closed file cannot be read or written any more.\n Any operation which requires that the file be open will raise a\n ``ValueError`` after the file has been closed. Calling ``close()``\n more than once is allowed.\n\n As of Python 2.5, you can avoid having to call this method\n explicitly if you use the ``with`` statement. For example, the\n following code will automatically close *f* when the ``with`` block\n is exited:\n\n from __future__ import with_statement # This isn\'t required in Python 2.6\n\n with open("hello.txt") as f:\n for line in f:\n print line\n\n In older versions of Python, you would have needed to do this to\n get the same effect:\n\n f = open("hello.txt")\n try:\n for line in f:\n print line\n finally:\n f.close()\n\n Note: Not all "file-like" types in Python support use as a context\n manager for the ``with`` statement. If your code is intended to\n work with any file-like object, you can use the function\n ``contextlib.closing()`` instead of using the object directly.\n\nfile.flush()\n\n Flush the internal buffer, like ``stdio``\'s ``fflush()``. 
This may\n be a no-op on some file-like objects.\n\n Note: ``flush()`` does not necessarily write the file\'s data to disk.\n Use ``flush()`` followed by ``os.fsync()`` to ensure this\n behavior.\n\nfile.fileno()\n\n Return the integer "file descriptor" that is used by the underlying\n implementation to request I/O operations from the operating system.\n This can be useful for other, lower level interfaces that use file\n descriptors, such as the ``fcntl`` module or ``os.read()`` and\n friends.\n\n Note: File-like objects which do not have a real file descriptor should\n *not* provide this method!\n\nfile.isatty()\n\n Return ``True`` if the file is connected to a tty(-like) device,\n else ``False``.\n\n Note: If a file-like object is not associated with a real file, this\n method should *not* be implemented.\n\nfile.next()\n\n A file object is its own iterator, for example ``iter(f)`` returns\n *f* (unless *f* is closed). When a file is used as an iterator,\n typically in a ``for`` loop (for example, ``for line in f: print\n line``), the ``next()`` method is called repeatedly. This method\n returns the next input line, or raises ``StopIteration`` when EOF\n is hit when the file is open for reading (behavior is undefined\n when the file is open for writing). In order to make a ``for``\n loop the most efficient way of looping over the lines of a file (a\n very common operation), the ``next()`` method uses a hidden read-\n ahead buffer. As a consequence of using a read-ahead buffer,\n combining ``next()`` with other file methods (like ``readline()``)\n does not work right. However, using ``seek()`` to reposition the\n file to an absolute position will flush the read-ahead buffer.\n\n New in version 2.3.\n\nfile.read([size])\n\n Read at most *size* bytes from the file (less if the read hits EOF\n before obtaining *size* bytes). If the *size* argument is negative\n or omitted, read all data until EOF is reached. The bytes are\n returned as a string object. An empty string is returned when EOF\n is encountered immediately. (For certain files, like ttys, it\n makes sense to continue reading after an EOF is hit.) Note that\n this method may call the underlying C function ``fread()`` more\n than once in an effort to acquire as close to *size* bytes as\n possible. Also note that when in non-blocking mode, less data than\n was requested may be returned, even if no *size* parameter was\n given.\n\n Note: This function is simply a wrapper for the underlying ``fread()``\n C function, and will behave the same in corner cases, such as\n whether the EOF value is cached.\n\nfile.readline([size])\n\n Read one entire line from the file. A trailing newline character\n is kept in the string (but may be absent when a file ends with an\n incomplete line). [5] If the *size* argument is present and non-\n negative, it is a maximum byte count (including the trailing\n newline) and an incomplete line may be returned. An empty string is\n returned *only* when EOF is encountered immediately.\n\n Note: Unlike ``stdio``\'s ``fgets()``, the returned string contains null\n characters (``\'\\0\'``) if they occurred in the input.\n\nfile.readlines([sizehint])\n\n Read until EOF using ``readline()`` and return a list containing\n the lines thus read. If the optional *sizehint* argument is\n present, instead of reading up to EOF, whole lines totalling\n approximately *sizehint* bytes (possibly after rounding up to an\n internal buffer size) are read. 
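# A minimal sketch of read()/readline() and the empty-string EOF convention
# described above, using a throwaway temporary file.
import os, tempfile
fd, path = tempfile.mkstemp()
os.write(fd, 'one\ntwo\n')
os.close(fd)
f = open(path)
assert f.readline() == 'one\n'
assert f.read() == 'two\n'
assert f.read() == ''                 # empty string signals EOF
f.close()
os.remove(path)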
Objects implementing a file-like\n interface may choose to ignore *sizehint* if it cannot be\n implemented, or cannot be implemented efficiently.\n\nfile.xreadlines()\n\n This method returns the same thing as ``iter(f)``.\n\n New in version 2.1.\n\n Deprecated since version 2.3: Use ``for line in file`` instead.\n\nfile.seek(offset[, whence])\n\n Set the file\'s current position, like ``stdio``\'s ``fseek()``. The\n *whence* argument is optional and defaults to ``os.SEEK_SET`` or\n ``0`` (absolute file positioning); other values are ``os.SEEK_CUR``\n or ``1`` (seek relative to the current position) and\n ``os.SEEK_END`` or ``2`` (seek relative to the file\'s end). There\n is no return value.\n\n For example, ``f.seek(2, os.SEEK_CUR)`` advances the position by\n two and ``f.seek(-3, os.SEEK_END)`` sets the position to the third\n to last.\n\n Note that if the file is opened for appending (mode ``\'a\'`` or\n ``\'a+\'``), any ``seek()`` operations will be undone at the next\n write. If the file is only opened for writing in append mode (mode\n ``\'a\'``), this method is essentially a no-op, but it remains useful\n for files opened in append mode with reading enabled (mode\n ``\'a+\'``). If the file is opened in text mode (without ``\'b\'``),\n only offsets returned by ``tell()`` are legal. Use of other\n offsets causes undefined behavior.\n\n Note that not all file objects are seekable.\n\n Changed in version 2.6: Passing float values as offset has been\n deprecated.\n\nfile.tell()\n\n Return the file\'s current position, like ``stdio``\'s ``ftell()``.\n\n Note: On Windows, ``tell()`` can return illegal values (after an\n ``fgets()``) when reading files with Unix-style line-endings. Use\n binary mode (``\'rb\'``) to circumvent this problem.\n\nfile.truncate([size])\n\n Truncate the file\'s size. If the optional *size* argument is\n present, the file is truncated to (at most) that size. The size\n defaults to the current position. The current file position is not\n changed. Note that if a specified size exceeds the file\'s current\n size, the result is platform-dependent: possibilities include that\n the file may remain unchanged, increase to the specified size as if\n zero-filled, or increase to the specified size with undefined new\n content. Availability: Windows, many Unix variants.\n\nfile.write(str)\n\n Write a string to the file. There is no return value. Due to\n buffering, the string may not actually show up in the file until\n the ``flush()`` or ``close()`` method is called.\n\nfile.writelines(sequence)\n\n Write a sequence of strings to the file. The sequence can be any\n iterable object producing strings, typically a list of strings.\n There is no return value. (The name is intended to match\n ``readlines()``; ``writelines()`` does not add line separators.)\n\nFiles support the iterator protocol. Each iteration returns the same\nresult as ``file.readline()``, and iteration ends when the\n``readline()`` method returns an empty string.\n\nFile objects also offer a number of other interesting attributes.\nThese are not required for file-like objects, but should be\nimplemented if they make sense for the particular object.\n\nfile.closed\n\n bool indicating the current state of the file object. This is a\n read-only attribute; the ``close()`` method changes the value. It\n may not be available on all file-like objects.\n\nfile.encoding\n\n The encoding that this file uses. When Unicode strings are written\n to a file, they will be converted to byte strings using this\n encoding. 
In addition, when the file is connected to a terminal,\n the attribute gives the encoding that the terminal is likely to use\n (that information might be incorrect if the user has misconfigured\n the terminal). The attribute is read-only and may not be present\n on all file-like objects. It may also be ``None``, in which case\n the file uses the system default encoding for converting Unicode\n strings.\n\n New in version 2.3.\n\nfile.errors\n\n The Unicode error handler used along with the encoding.\n\n New in version 2.6.\n\nfile.mode\n\n The I/O mode for the file. If the file was created using the\n ``open()`` built-in function, this will be the value of the *mode*\n parameter. This is a read-only attribute and may not be present on\n all file-like objects.\n\nfile.name\n\n If the file object was created using ``open()``, the name of the\n file. Otherwise, some string that indicates the source of the file\n object, of the form ``<...>``. This is a read-only attribute and\n may not be present on all file-like objects.\n\nfile.newlines\n\n If Python was built with the *--with-universal-newlines* option to\n **configure** (the default) this read-only attribute exists, and\n for files opened in universal newline read mode it keeps track of\n the types of newlines encountered while reading the file. The\n values it can take are ``\'\\r\'``, ``\'\\n\'``, ``\'\\r\\n\'``, ``None``\n (unknown, no newlines read yet) or a tuple containing all the\n newline types seen, to indicate that multiple newline conventions\n were encountered. For files not opened in universal newline read\n mode the value of this attribute will be ``None``.\n\nfile.softspace\n\n Boolean that indicates whether a space character needs to be\n printed before another value when using the ``print`` statement.\n Classes that are trying to simulate a file object should also have\n a writable ``softspace`` attribute, which should be initialized to\n zero. This will be automatic for most classes implemented in\n Python (care may be needed for objects that override attribute\n access); types implemented in C will have to provide a writable\n ``softspace`` attribute.\n\n Note: This attribute is not used to control the ``print`` statement,\n but to allow the implementation of ``print`` to keep track of its\n internal state.\n', + 'bltin-file-objects': u'\nFile Objects\n************\n\nFile objects are implemented using C\'s ``stdio`` package and can be\ncreated with the built-in ``open()`` function. File objects are also\nreturned by some other built-in functions and methods, such as\n``os.popen()`` and ``os.fdopen()`` and the ``makefile()`` method of\nsocket objects. Temporary files can be created using the ``tempfile``\nmodule, and high-level file operations such as copying, moving, and\ndeleting files and directories can be achieved with the ``shutil``\nmodule.\n\nWhen a file operation fails for an I/O-related reason, the exception\n``IOError`` is raised. This includes situations where the operation\nis not defined for some reason, like ``seek()`` on a tty device or\nwriting a file opened for reading.\n\nFiles have the following methods:\n\nfile.close()\n\n Close the file. A closed file cannot be read or written any more.\n Any operation which requires that the file be open will raise a\n ``ValueError`` after the file has been closed. Calling ``close()``\n more than once is allowed.\n\n As of Python 2.5, you can avoid having to call this method\n explicitly if you use the ``with`` statement. 
For example, the\n following code will automatically close *f* when the ``with`` block\n is exited:\n\n from __future__ import with_statement # This isn\'t required in Python 2.6\n\n with open("hello.txt") as f:\n for line in f:\n print line\n\n In older versions of Python, you would have needed to do this to\n get the same effect:\n\n f = open("hello.txt")\n try:\n for line in f:\n print line\n finally:\n f.close()\n\n Note: Not all "file-like" types in Python support use as a context\n manager for the ``with`` statement. If your code is intended to\n work with any file-like object, you can use the function\n ``contextlib.closing()`` instead of using the object directly.\n\nfile.flush()\n\n Flush the internal buffer, like ``stdio``\'s ``fflush()``. This may\n be a no-op on some file-like objects.\n\n Note: ``flush()`` does not necessarily write the file\'s data to disk.\n Use ``flush()`` followed by ``os.fsync()`` to ensure this\n behavior.\n\nfile.fileno()\n\n Return the integer "file descriptor" that is used by the underlying\n implementation to request I/O operations from the operating system.\n This can be useful for other, lower level interfaces that use file\n descriptors, such as the ``fcntl`` module or ``os.read()`` and\n friends.\n\n Note: File-like objects which do not have a real file descriptor should\n *not* provide this method!\n\nfile.isatty()\n\n Return ``True`` if the file is connected to a tty(-like) device,\n else ``False``.\n\n Note: If a file-like object is not associated with a real file, this\n method should *not* be implemented.\n\nfile.next()\n\n A file object is its own iterator, for example ``iter(f)`` returns\n *f* (unless *f* is closed). When a file is used as an iterator,\n typically in a ``for`` loop (for example, ``for line in f: print\n line``), the ``next()`` method is called repeatedly. This method\n returns the next input line, or raises ``StopIteration`` when EOF\n is hit when the file is open for reading (behavior is undefined\n when the file is open for writing). In order to make a ``for``\n loop the most efficient way of looping over the lines of a file (a\n very common operation), the ``next()`` method uses a hidden read-\n ahead buffer. As a consequence of using a read-ahead buffer,\n combining ``next()`` with other file methods (like ``readline()``)\n does not work right. However, using ``seek()`` to reposition the\n file to an absolute position will flush the read-ahead buffer.\n\n New in version 2.3.\n\nfile.read([size])\n\n Read at most *size* bytes from the file (less if the read hits EOF\n before obtaining *size* bytes). If the *size* argument is negative\n or omitted, read all data until EOF is reached. The bytes are\n returned as a string object. An empty string is returned when EOF\n is encountered immediately. (For certain files, like ttys, it\n makes sense to continue reading after an EOF is hit.) Note that\n this method may call the underlying C function ``fread()`` more\n than once in an effort to acquire as close to *size* bytes as\n possible. Also note that when in non-blocking mode, less data than\n was requested may be returned, even if no *size* parameter was\n given.\n\n Note: This function is simply a wrapper for the underlying ``fread()``\n C function, and will behave the same in corner cases, such as\n whether the EOF value is cached.\n\nfile.readline([size])\n\n Read one entire line from the file. A trailing newline character\n is kept in the string (but may be absent when a file ends with an\n incomplete line). 
[5] If the *size* argument is present and non-\n negative, it is a maximum byte count (including the trailing\n newline) and an incomplete line may be returned. When *size* is not\n 0, an empty string is returned *only* when EOF is encountered\n immediately.\n\n Note: Unlike ``stdio``\'s ``fgets()``, the returned string contains null\n characters (``\'\\0\'``) if they occurred in the input.\n\nfile.readlines([sizehint])\n\n Read until EOF using ``readline()`` and return a list containing\n the lines thus read. If the optional *sizehint* argument is\n present, instead of reading up to EOF, whole lines totalling\n approximately *sizehint* bytes (possibly after rounding up to an\n internal buffer size) are read. Objects implementing a file-like\n interface may choose to ignore *sizehint* if it cannot be\n implemented, or cannot be implemented efficiently.\n\nfile.xreadlines()\n\n This method returns the same thing as ``iter(f)``.\n\n New in version 2.1.\n\n Deprecated since version 2.3: Use ``for line in file`` instead.\n\nfile.seek(offset[, whence])\n\n Set the file\'s current position, like ``stdio``\'s ``fseek()``. The\n *whence* argument is optional and defaults to ``os.SEEK_SET`` or\n ``0`` (absolute file positioning); other values are ``os.SEEK_CUR``\n or ``1`` (seek relative to the current position) and\n ``os.SEEK_END`` or ``2`` (seek relative to the file\'s end). There\n is no return value.\n\n For example, ``f.seek(2, os.SEEK_CUR)`` advances the position by\n two and ``f.seek(-3, os.SEEK_END)`` sets the position to the third\n to last.\n\n Note that if the file is opened for appending (mode ``\'a\'`` or\n ``\'a+\'``), any ``seek()`` operations will be undone at the next\n write. If the file is only opened for writing in append mode (mode\n ``\'a\'``), this method is essentially a no-op, but it remains useful\n for files opened in append mode with reading enabled (mode\n ``\'a+\'``). If the file is opened in text mode (without ``\'b\'``),\n only offsets returned by ``tell()`` are legal. Use of other\n offsets causes undefined behavior.\n\n Note that not all file objects are seekable.\n\n Changed in version 2.6: Passing float values as offset has been\n deprecated.\n\nfile.tell()\n\n Return the file\'s current position, like ``stdio``\'s ``ftell()``.\n\n Note: On Windows, ``tell()`` can return illegal values (after an\n ``fgets()``) when reading files with Unix-style line-endings. Use\n binary mode (``\'rb\'``) to circumvent this problem.\n\nfile.truncate([size])\n\n Truncate the file\'s size. If the optional *size* argument is\n present, the file is truncated to (at most) that size. The size\n defaults to the current position. The current file position is not\n changed. Note that if a specified size exceeds the file\'s current\n size, the result is platform-dependent: possibilities include that\n the file may remain unchanged, increase to the specified size as if\n zero-filled, or increase to the specified size with undefined new\n content. Availability: Windows, many Unix variants.\n\nfile.write(str)\n\n Write a string to the file. There is no return value. Due to\n buffering, the string may not actually show up in the file until\n the ``flush()`` or ``close()`` method is called.\n\nfile.writelines(sequence)\n\n Write a sequence of strings to the file. The sequence can be any\n iterable object producing strings, typically a list of strings.\n There is no return value. 
(The name is intended to match\n ``readlines()``; ``writelines()`` does not add line separators.)\n\nFiles support the iterator protocol. Each iteration returns the same\nresult as ``file.readline()``, and iteration ends when the\n``readline()`` method returns an empty string.\n\nFile objects also offer a number of other interesting attributes.\nThese are not required for file-like objects, but should be\nimplemented if they make sense for the particular object.\n\nfile.closed\n\n bool indicating the current state of the file object. This is a\n read-only attribute; the ``close()`` method changes the value. It\n may not be available on all file-like objects.\n\nfile.encoding\n\n The encoding that this file uses. When Unicode strings are written\n to a file, they will be converted to byte strings using this\n encoding. In addition, when the file is connected to a terminal,\n the attribute gives the encoding that the terminal is likely to use\n (that information might be incorrect if the user has misconfigured\n the terminal). The attribute is read-only and may not be present\n on all file-like objects. It may also be ``None``, in which case\n the file uses the system default encoding for converting Unicode\n strings.\n\n New in version 2.3.\n\nfile.errors\n\n The Unicode error handler used along with the encoding.\n\n New in version 2.6.\n\nfile.mode\n\n The I/O mode for the file. If the file was created using the\n ``open()`` built-in function, this will be the value of the *mode*\n parameter. This is a read-only attribute and may not be present on\n all file-like objects.\n\nfile.name\n\n If the file object was created using ``open()``, the name of the\n file. Otherwise, some string that indicates the source of the file\n object, of the form ``<...>``. This is a read-only attribute and\n may not be present on all file-like objects.\n\nfile.newlines\n\n If Python was built with universal newlines enabled (the default)\n this read-only attribute exists, and for files opened in universal\n newline read mode it keeps track of the types of newlines\n encountered while reading the file. The values it can take are\n ``\'\\r\'``, ``\'\\n\'``, ``\'\\r\\n\'``, ``None`` (unknown, no newlines read\n yet) or a tuple containing all the newline types seen, to indicate\n that multiple newline conventions were encountered. For files not\n opened in universal newline read mode the value of this attribute\n will be ``None``.\n\nfile.softspace\n\n Boolean that indicates whether a space character needs to be\n printed before another value when using the ``print`` statement.\n Classes that are trying to simulate a file object should also have\n a writable ``softspace`` attribute, which should be initialized to\n zero. This will be automatic for most classes implemented in\n Python (care may be needed for objects that override attribute\n access); types implemented in C will have to provide a writable\n ``softspace`` attribute.\n\n Note: This attribute is not used to control the ``print`` statement,\n but to allow the implementation of ``print`` to keep track of its\n internal state.\n', 'bltin-null-object': u"\nThe Null Object\n***************\n\nThis object is returned by functions that don't explicitly return a\nvalue. It supports no special operations. There is exactly one null\nobject, named ``None`` (a built-in name).\n\nIt is written as ``None``.\n", 'bltin-type-objects': u"\nType Objects\n************\n\nType objects represent the various object types. 
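# A minimal sketch of the null object and type objects described above, using
# the standard ``types`` module (Python 2 names).
import types
assert type(None) is types.NoneType   # the type of the null object None
assert type(1) is int is types.IntType
def g():
    pass
assert isinstance(g, types.FunctionType)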
An object's type is\naccessed by the built-in function ``type()``. There are no special\noperations on types. The standard module ``types`` defines names for\nall standard built-in types.\n\nTypes are written like this: ``<type 'int'>``.\n", 'booleans': u'\nBoolean operations\n******************\n\n or_test ::= and_test | or_test "or" and_test\n and_test ::= not_test | and_test "and" not_test\n not_test ::= comparison | "not" not_test\n\nIn the context of Boolean operations, and also when expressions are\nused by control flow statements, the following values are interpreted\nas false: ``False``, ``None``, numeric zero of all types, and empty\nstrings and containers (including strings, tuples, lists,\ndictionaries, sets and frozensets). All other values are interpreted\nas true. (See the ``__nonzero__()`` special method for a way to\nchange this.)\n\nThe operator ``not`` yields ``True`` if its argument is false,\n``False`` otherwise.\n\nThe expression ``x and y`` first evaluates *x*; if *x* is false, its\nvalue is returned; otherwise, *y* is evaluated and the resulting value\nis returned.\n\nThe expression ``x or y`` first evaluates *x*; if *x* is true, its\nvalue is returned; otherwise, *y* is evaluated and the resulting value\nis returned.\n\n(Note that neither ``and`` nor ``or`` restrict the value and type they\nreturn to ``False`` and ``True``, but rather return the last evaluated\nargument. This is sometimes useful, e.g., if ``s`` is a string that\nshould be replaced by a default value if it is empty, the expression\n``s or \'foo\'`` yields the desired value. Because ``not`` has to\ninvent a value anyway, it does not bother to return a value of the\nsame type as its argument, so e.g., ``not \'foo\'`` yields ``False``,\nnot ``\'\'``.)\n', @@ -20,39 +20,39 @@ 'class': u'\nClass definitions\n*****************\n\nA class definition defines a class object (see section *The standard\ntype hierarchy*):\n\n classdef ::= "class" classname [inheritance] ":" suite\n inheritance ::= "(" [expression_list] ")"\n classname ::= identifier\n\nA class definition is an executable statement. It first evaluates the\ninheritance list, if present. Each item in the inheritance list\nshould evaluate to a class object or class type which allows\nsubclassing. The class\'s suite is then executed in a new execution\nframe (see section *Naming and binding*), using a newly created local\nnamespace and the original global namespace. (Usually, the suite\ncontains only function definitions.) When the class\'s suite finishes\nexecution, its execution frame is discarded but its local namespace is\nsaved. [4] A class object is then created using the inheritance list\nfor the base classes and the saved local namespace for the attribute\ndictionary. The class name is bound to this class object in the\noriginal local namespace.\n\n**Programmer\'s note:** Variables defined in the class definition are\nclass variables; they are shared by all instances. To create instance\nvariables, they can be set in a method with ``self.name = value``.\nBoth class and instance variables are accessible through the notation\n"``self.name``", and an instance variable hides a class variable with\nthe same name when accessed in this way. Class variables can be used\nas defaults for instance variables, but using mutable values there can\nlead to unexpected results. 
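As a minimal sketch of the shared-class-variable pitfall mentioned above (class and attribute names are invented for illustration; Python 2 syntax):

    class Bag(object):
        items = []                     # class variable: one list shared by every instance

        def add(self, thing):
            self.items.append(thing)   # mutates the shared class-level list

    a = Bag()
    b = Bag()
    a.add('apple')
    print b.items                      # prints ['apple']: b sees a's addition

    class FixedBag(object):
        def __init__(self):
            self.items = []            # instance variable: a fresh list per instance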
For *new-style class*es, descriptors can\nbe used to create instance variables with different implementation\ndetails.\n\nClass definitions, like function definitions, may be wrapped by one or\nmore *decorator* expressions. The evaluation rules for the decorator\nexpressions are the same as for functions. The result must be a class\nobject, which is then bound to the class name.\n\n-[ Footnotes ]-\n\n[1] The exception is propagated to the invocation stack only if there\n is no ``finally`` clause that negates the exception.\n\n[2] Currently, control "flows off the end" except in the case of an\n exception or the execution of a ``return``, ``continue``, or\n ``break`` statement.\n\n[3] A string literal appearing as the first statement in the function\n body is transformed into the function\'s ``__doc__`` attribute and\n therefore the function\'s *docstring*.\n\n[4] A string literal appearing as the first statement in the class\n body is transformed into the namespace\'s ``__doc__`` item and\n therefore the class\'s *docstring*.\n', 'coercion-rules': u"\nCoercion rules\n**************\n\nThis section used to document the rules for coercion. As the language\nhas evolved, the coercion rules have become hard to document\nprecisely; documenting what one version of one particular\nimplementation does is undesirable. Instead, here are some informal\nguidelines regarding coercion. In Python 3.0, coercion will not be\nsupported.\n\n* If the left operand of a % operator is a string or Unicode object,\n no coercion takes place and the string formatting operation is\n invoked instead.\n\n* It is no longer recommended to define a coercion operation. Mixed-\n mode operations on types that don't define coercion pass the\n original arguments to the operation.\n\n* New-style classes (those derived from ``object``) never invoke the\n ``__coerce__()`` method in response to a binary operator; the only\n time ``__coerce__()`` is invoked is when the built-in function\n ``coerce()`` is called.\n\n* For most intents and purposes, an operator that returns\n ``NotImplemented`` is treated the same as one that is not\n implemented at all.\n\n* Below, ``__op__()`` and ``__rop__()`` are used to signify the\n generic method names corresponding to an operator; ``__iop__()`` is\n used for the corresponding in-place operator. For example, for the\n operator '``+``', ``__add__()`` and ``__radd__()`` are used for the\n left and right variant of the binary operator, and ``__iadd__()``\n for the in-place variant.\n\n* For objects *x* and *y*, first ``x.__op__(y)`` is tried. If this is\n not implemented or returns ``NotImplemented``, ``y.__rop__(x)`` is\n tried. If this is also not implemented or returns\n ``NotImplemented``, a ``TypeError`` exception is raised. But see\n the following exception:\n\n* Exception to the previous item: if the left operand is an instance\n of a built-in type or a new-style class, and the right operand is an\n instance of a proper subclass of that type or class and overrides\n the base's ``__rop__()`` method, the right operand's ``__rop__()``\n method is tried *before* the left operand's ``__op__()`` method.\n\n This is done so that a subclass can completely override binary\n operators. 
Otherwise, the left operand's ``__op__()`` method would\n always accept the right operand: when an instance of a given class\n is expected, an instance of a subclass of that class is always\n acceptable.\n\n* When either operand type defines a coercion, this coercion is called\n before that type's ``__op__()`` or ``__rop__()`` method is called,\n but no sooner. If the coercion returns an object of a different\n type for the operand whose coercion is invoked, part of the process\n is redone using the new object.\n\n* When an in-place operator (like '``+=``') is used, if the left\n operand implements ``__iop__()``, it is invoked without any\n coercion. When the operation falls back to ``__op__()`` and/or\n ``__rop__()``, the normal coercion rules apply.\n\n* In ``x + y``, if *x* is a sequence that implements sequence\n concatenation, sequence concatenation is invoked.\n\n* In ``x * y``, if one operator is a sequence that implements sequence\n repetition, and the other is an integer (``int`` or ``long``),\n sequence repetition is invoked.\n\n* Rich comparisons (implemented by methods ``__eq__()`` and so on)\n never use coercion. Three-way comparison (implemented by\n ``__cmp__()``) does use coercion under the same conditions as other\n binary operations use it.\n\n* In the current implementation, the built-in numeric types ``int``,\n ``long``, ``float``, and ``complex`` do not use coercion. All these\n types implement a ``__coerce__()`` method, for use by the built-in\n ``coerce()`` function.\n\n Changed in version 2.7.\n", 'comparisons': u'\nComparisons\n***********\n\nUnlike C, all comparison operations in Python have the same priority,\nwhich is lower than that of any arithmetic, shifting or bitwise\noperation. Also unlike C, expressions like ``a < b < c`` have the\ninterpretation that is conventional in mathematics:\n\n comparison ::= or_expr ( comp_operator or_expr )*\n comp_operator ::= "<" | ">" | "==" | ">=" | "<=" | "<>" | "!="\n | "is" ["not"] | ["not"] "in"\n\nComparisons yield boolean values: ``True`` or ``False``.\n\nComparisons can be chained arbitrarily, e.g., ``x < y <= z`` is\nequivalent to ``x < y and y <= z``, except that ``y`` is evaluated\nonly once (but in both cases ``z`` is not evaluated at all when ``x <\ny`` is found to be false).\n\nFormally, if *a*, *b*, *c*, ..., *y*, *z* are expressions and *op1*,\n*op2*, ..., *opN* are comparison operators, then ``a op1 b op2 c ... y\nopN z`` is equivalent to ``a op1 b and b op2 c and ... y opN z``,\nexcept that each expression is evaluated at most once.\n\nNote that ``a op1 b op2 c`` doesn\'t imply any kind of comparison\nbetween *a* and *c*, so that, e.g., ``x < y > z`` is perfectly legal\n(though perhaps not pretty).\n\nThe forms ``<>`` and ``!=`` are equivalent; for consistency with C,\n``!=`` is preferred; where ``!=`` is mentioned below ``<>`` is also\naccepted. The ``<>`` spelling is considered obsolescent.\n\nThe operators ``<``, ``>``, ``==``, ``>=``, ``<=``, and ``!=`` compare\nthe values of two objects. The objects need not have the same type.\nIf both are numbers, they are converted to a common type. Otherwise,\nobjects of different types *always* compare unequal, and are ordered\nconsistently but arbitrarily. 
You can control comparison behavior of\nobjects of non-built-in types by defining a ``__cmp__`` method or rich\ncomparison methods like ``__gt__``, described in section *Special\nmethod names*.\n\n(This unusual definition of comparison was used to simplify the\ndefinition of operations like sorting and the ``in`` and ``not in``\noperators. In the future, the comparison rules for objects of\ndifferent types are likely to change.)\n\nComparison of objects of the same type depends on the type:\n\n* Numbers are compared arithmetically.\n\n* Strings are compared lexicographically using the numeric equivalents\n (the result of the built-in function ``ord()``) of their characters.\n Unicode and 8-bit strings are fully interoperable in this behavior.\n [4]\n\n* Tuples and lists are compared lexicographically using comparison of\n corresponding elements. This means that to compare equal, each\n element must compare equal and the two sequences must be of the same\n type and have the same length.\n\n If not equal, the sequences are ordered the same as their first\n differing elements. For example, ``cmp([1,2,x], [1,2,y])`` returns\n the same as ``cmp(x,y)``. If the corresponding element does not\n exist, the shorter sequence is ordered first (for example, ``[1,2] <\n [1,2,3]``).\n\n* Mappings (dictionaries) compare equal if and only if their sorted\n (key, value) lists compare equal. [5] Outcomes other than equality\n are resolved consistently, but are not otherwise defined. [6]\n\n* Most other objects of built-in types compare unequal unless they are\n the same object; the choice whether one object is considered smaller\n or larger than another one is made arbitrarily but consistently\n within one execution of a program.\n\nThe operators ``in`` and ``not in`` test for collection membership.\n``x in s`` evaluates to true if *x* is a member of the collection *s*,\nand false otherwise. ``x not in s`` returns the negation of ``x in\ns``. The collection membership test has traditionally been bound to\nsequences; an object is a member of a collection if the collection is\na sequence and contains an element equal to that object. However, it\nmake sense for many other object types to support membership tests\nwithout being a sequence. In particular, dictionaries (for keys) and\nsets support membership testing.\n\nFor the list and tuple types, ``x in y`` is true if and only if there\nexists an index *i* such that ``x == y[i]`` is true.\n\nFor the Unicode and string types, ``x in y`` is true if and only if\n*x* is a substring of *y*. An equivalent test is ``y.find(x) != -1``.\nNote, *x* and *y* need not be the same type; consequently, ``u\'ab\' in\n\'abc\'`` will return ``True``. Empty strings are always considered to\nbe a substring of any other string, so ``"" in "abc"`` will return\n``True``.\n\nChanged in version 2.3: Previously, *x* was required to be a string of\nlength ``1``.\n\nFor user-defined classes which define the ``__contains__()`` method,\n``x in y`` is true if and only if ``y.__contains__(x)`` is true.\n\nFor user-defined classes which do not define ``__contains__()`` but do\ndefine ``__iter__()``, ``x in y`` is true if some value ``z`` with ``x\n== z`` is produced while iterating over ``y``. 
If an exception is\nraised during the iteration, it is as if ``in`` raised that exception.\n\nLastly, the old-style iteration protocol is tried: if a class defines\n``__getitem__()``, ``x in y`` is true if and only if there is a non-\nnegative integer index *i* such that ``x == y[i]``, and all lower\ninteger indices do not raise ``IndexError`` exception. (If any other\nexception is raised, it is as if ``in`` raised that exception).\n\nThe operator ``not in`` is defined to have the inverse true value of\n``in``.\n\nThe operators ``is`` and ``is not`` test for object identity: ``x is\ny`` is true if and only if *x* and *y* are the same object. ``x is\nnot y`` yields the inverse truth value. [7]\n', - 'compound': u'\nCompound statements\n*******************\n\nCompound statements contain (groups of) other statements; they affect\nor control the execution of those other statements in some way. In\ngeneral, compound statements span multiple lines, although in simple\nincarnations a whole compound statement may be contained in one line.\n\nThe ``if``, ``while`` and ``for`` statements implement traditional\ncontrol flow constructs. ``try`` specifies exception handlers and/or\ncleanup code for a group of statements. Function and class\ndefinitions are also syntactically compound statements.\n\nCompound statements consist of one or more \'clauses.\' A clause\nconsists of a header and a \'suite.\' The clause headers of a\nparticular compound statement are all at the same indentation level.\nEach clause header begins with a uniquely identifying keyword and ends\nwith a colon. A suite is a group of statements controlled by a\nclause. A suite can be one or more semicolon-separated simple\nstatements on the same line as the header, following the header\'s\ncolon, or it can be one or more indented statements on subsequent\nlines. Only the latter form of suite can contain nested compound\nstatements; the following is illegal, mostly because it wouldn\'t be\nclear to which ``if`` clause a following ``else`` clause would belong:\n\n if test1: if test2: print x\n\nAlso note that the semicolon binds tighter than the colon in this\ncontext, so that in the following example, either all or none of the\n``print`` statements are executed:\n\n if x < y < z: print x; print y; print z\n\nSummarizing:\n\n compound_stmt ::= if_stmt\n | while_stmt\n | for_stmt\n | try_stmt\n | with_stmt\n | funcdef\n | classdef\n | decorated\n suite ::= stmt_list NEWLINE | NEWLINE INDENT statement+ DEDENT\n statement ::= stmt_list NEWLINE | compound_stmt\n stmt_list ::= simple_stmt (";" simple_stmt)* [";"]\n\nNote that statements always end in a ``NEWLINE`` possibly followed by\na ``DEDENT``. 
Also note that optional continuation clauses always\nbegin with a keyword that cannot start a statement, thus there are no\nambiguities (the \'dangling ``else``\' problem is solved in Python by\nrequiring nested ``if`` statements to be indented).\n\nThe formatting of the grammar rules in the following sections places\neach clause on a separate line for clarity.\n\n\nThe ``if`` statement\n====================\n\nThe ``if`` statement is used for conditional execution:\n\n if_stmt ::= "if" expression ":" suite\n ( "elif" expression ":" suite )*\n ["else" ":" suite]\n\nIt selects exactly one of the suites by evaluating the expressions one\nby one until one is found to be true (see section *Boolean operations*\nfor the definition of true and false); then that suite is executed\n(and no other part of the ``if`` statement is executed or evaluated).\nIf all expressions are false, the suite of the ``else`` clause, if\npresent, is executed.\n\n\nThe ``while`` statement\n=======================\n\nThe ``while`` statement is used for repeated execution as long as an\nexpression is true:\n\n while_stmt ::= "while" expression ":" suite\n ["else" ":" suite]\n\nThis repeatedly tests the expression and, if it is true, executes the\nfirst suite; if the expression is false (which may be the first time\nit is tested) the suite of the ``else`` clause, if present, is\nexecuted and the loop terminates.\n\nA ``break`` statement executed in the first suite terminates the loop\nwithout executing the ``else`` clause\'s suite. A ``continue``\nstatement executed in the first suite skips the rest of the suite and\ngoes back to testing the expression.\n\n\nThe ``for`` statement\n=====================\n\nThe ``for`` statement is used to iterate over the elements of a\nsequence (such as a string, tuple or list) or other iterable object:\n\n for_stmt ::= "for" target_list "in" expression_list ":" suite\n ["else" ":" suite]\n\nThe expression list is evaluated once; it should yield an iterable\nobject. An iterator is created for the result of the\n``expression_list``. The suite is then executed once for each item\nprovided by the iterator, in the order of ascending indices. Each\nitem in turn is assigned to the target list using the standard rules\nfor assignments, and then the suite is executed. When the items are\nexhausted (which is immediately when the sequence is empty), the suite\nin the ``else`` clause, if present, is executed, and the loop\nterminates.\n\nA ``break`` statement executed in the first suite terminates the loop\nwithout executing the ``else`` clause\'s suite. A ``continue``\nstatement executed in the first suite skips the rest of the suite and\ncontinues with the next item, or with the ``else`` clause if there was\nno next item.\n\nThe suite may assign to the variable(s) in the target list; this does\nnot affect the next item assigned to it.\n\nThe target list is not deleted when the loop is finished, but if the\nsequence is empty, it will not have been assigned to at all by the\nloop. Hint: the built-in function ``range()`` returns a sequence of\nintegers suitable to emulate the effect of Pascal\'s ``for i := a to b\ndo``; e.g., ``range(3)`` returns the list ``[0, 1, 2]``.\n\nNote: There is a subtlety when the sequence is being modified by the loop\n (this can only occur for mutable sequences, i.e. lists). An internal\n counter is used to keep track of which item is used next, and this\n is incremented on each iteration. When this counter has reached the\n length of the sequence the loop terminates. 
This means that if the\n suite deletes the current (or a previous) item from the sequence,\n the next item will be skipped (since it gets the index of the\n current item which has already been treated). Likewise, if the\n suite inserts an item in the sequence before the current item, the\n current item will be treated again the next time through the loop.\n This can lead to nasty bugs that can be avoided by making a\n temporary copy using a slice of the whole sequence, e.g.,\n\n for x in a[:]:\n if x < 0: a.remove(x)\n\n\nThe ``try`` statement\n=====================\n\nThe ``try`` statement specifies exception handlers and/or cleanup code\nfor a group of statements:\n\n try_stmt ::= try1_stmt | try2_stmt\n try1_stmt ::= "try" ":" suite\n ("except" [expression [("as" | ",") target]] ":" suite)+\n ["else" ":" suite]\n ["finally" ":" suite]\n try2_stmt ::= "try" ":" suite\n "finally" ":" suite\n\nChanged in version 2.5: In previous versions of Python,\n``try``...``except``...``finally`` did not work. ``try``...``except``\nhad to be nested in ``try``...``finally``.\n\nThe ``except`` clause(s) specify one or more exception handlers. When\nno exception occurs in the ``try`` clause, no exception handler is\nexecuted. When an exception occurs in the ``try`` suite, a search for\nan exception handler is started. This search inspects the except\nclauses in turn until one is found that matches the exception. An\nexpression-less except clause, if present, must be last; it matches\nany exception. For an except clause with an expression, that\nexpression is evaluated, and the clause matches the exception if the\nresulting object is "compatible" with the exception. An object is\ncompatible with an exception if it is the class or a base class of the\nexception object, a tuple containing an item compatible with the\nexception, or, in the (deprecated) case of string exceptions, is the\nraised string itself (note that the object identities must match, i.e.\nit must be the same string object, not just a string with the same\nvalue).\n\nIf no except clause matches the exception, the search for an exception\nhandler continues in the surrounding code and on the invocation stack.\n[1]\n\nIf the evaluation of an expression in the header of an except clause\nraises an exception, the original search for a handler is canceled and\na search starts for the new exception in the surrounding code and on\nthe call stack (it is treated as if the entire ``try`` statement\nraised the exception).\n\nWhen a matching except clause is found, the exception is assigned to\nthe target specified in that except clause, if present, and the except\nclause\'s suite is executed. All except clauses must have an\nexecutable block. When the end of this block is reached, execution\ncontinues normally after the entire try statement. (This means that\nif two nested handlers exist for the same exception, and the exception\noccurs in the try clause of the inner handler, the outer handler will\nnot handle the exception.)\n\nBefore an except clause\'s suite is executed, details about the\nexception are assigned to three variables in the ``sys`` module:\n``sys.exc_type`` receives the object identifying the exception;\n``sys.exc_value`` receives the exception\'s parameter;\n``sys.exc_traceback`` receives a traceback object (see section *The\nstandard type hierarchy*) identifying the point in the program where\nthe exception occurred. 
These details are also available through the\n``sys.exc_info()`` function, which returns a tuple ``(exc_type,\nexc_value, exc_traceback)``. Use of the corresponding variables is\ndeprecated in favor of this function, since their use is unsafe in a\nthreaded program. As of Python 1.5, the variables are restored to\ntheir previous values (before the call) when returning from a function\nthat handled an exception.\n\nThe optional ``else`` clause is executed if and when control flows off\nthe end of the ``try`` clause. [2] Exceptions in the ``else`` clause\nare not handled by the preceding ``except`` clauses.\n\nIf ``finally`` is present, it specifies a \'cleanup\' handler. The\n``try`` clause is executed, including any ``except`` and ``else``\nclauses. If an exception occurs in any of the clauses and is not\nhandled, the exception is temporarily saved. The ``finally`` clause is\nexecuted. If there is a saved exception, it is re-raised at the end\nof the ``finally`` clause. If the ``finally`` clause raises another\nexception or executes a ``return`` or ``break`` statement, the saved\nexception is lost. The exception information is not available to the\nprogram during execution of the ``finally`` clause.\n\nWhen a ``return``, ``break`` or ``continue`` statement is executed in\nthe ``try`` suite of a ``try``...``finally`` statement, the\n``finally`` clause is also executed \'on the way out.\' A ``continue``\nstatement is illegal in the ``finally`` clause. (The reason is a\nproblem with the current implementation --- this restriction may be\nlifted in the future).\n\nAdditional information on exceptions can be found in section\n*Exceptions*, and information on using the ``raise`` statement to\ngenerate exceptions may be found in section *The raise statement*.\n\n\nThe ``with`` statement\n======================\n\nNew in version 2.5.\n\nThe ``with`` statement is used to wrap the execution of a block with\nmethods defined by a context manager (see section *With Statement\nContext Managers*). This allows common\n``try``...``except``...``finally`` usage patterns to be encapsulated\nfor convenient reuse.\n\n with_stmt ::= "with" with_item ("," with_item)* ":" suite\n with_item ::= expression ["as" target]\n\nThe execution of the ``with`` statement with one "item" proceeds as\nfollows:\n\n1. The context expression is evaluated to obtain a context manager.\n\n2. The context manager\'s ``__exit__()`` is loaded for later use.\n\n3. The context manager\'s ``__enter__()`` method is invoked.\n\n4. If a target was included in the ``with`` statement, the return\n value from ``__enter__()`` is assigned to it.\n\n Note: The ``with`` statement guarantees that if the ``__enter__()``\n method returns without an error, then ``__exit__()`` will always\n be called. Thus, if an error occurs during the assignment to the\n target list, it will be treated the same as an error occurring\n within the suite would be. See step 6 below.\n\n5. The suite is executed.\n\n6. The context manager\'s ``__exit__()`` method is invoked. If an\n exception caused the suite to be exited, its type, value, and\n traceback are passed as arguments to ``__exit__()``. Otherwise,\n three ``None`` arguments are supplied.\n\n If the suite was exited due to an exception, and the return value\n from the ``__exit__()`` method was false, the exception is\n reraised. 
If the return value was true, the exception is\n suppressed, and execution continues with the statement following\n the ``with`` statement.\n\n If the suite was exited for any reason other than an exception, the\n return value from ``__exit__()`` is ignored, and execution proceeds\n at the normal location for the kind of exit that was taken.\n\nWith more than one item, the context managers are processed as if\nmultiple ``with`` statements were nested:\n\n with A() as a, B() as b:\n suite\n\nis equivalent to\n\n with A() as a:\n with B() as b:\n suite\n\nNote: In Python 2.5, the ``with`` statement is only allowed when the\n ``with_statement`` feature has been enabled. It is always enabled\n in Python 2.6.\n\nChanged in version 2.7: Support for multiple context expressions.\n\nSee also:\n\n **PEP 0343** - The "with" statement\n The specification, background, and examples for the Python\n ``with`` statement.\n\n\nFunction definitions\n====================\n\nA function definition defines a user-defined function object (see\nsection *The standard type hierarchy*):\n\n decorated ::= decorators (classdef | funcdef)\n decorators ::= decorator+\n decorator ::= "@" dotted_name ["(" [argument_list [","]] ")"] NEWLINE\n funcdef ::= "def" funcname "(" [parameter_list] ")" ":" suite\n dotted_name ::= identifier ("." identifier)*\n parameter_list ::= (defparameter ",")*\n ( "*" identifier [, "**" identifier]\n | "**" identifier\n | defparameter [","] )\n defparameter ::= parameter ["=" expression]\n sublist ::= parameter ("," parameter)* [","]\n parameter ::= identifier | "(" sublist ")"\n funcname ::= identifier\n\nA function definition is an executable statement. Its execution binds\nthe function name in the current local namespace to a function object\n(a wrapper around the executable code for the function). This\nfunction object contains a reference to the current global namespace\nas the global namespace to be used when the function is called.\n\nThe function definition does not execute the function body; this gets\nexecuted only when the function is called. [3]\n\nA function definition may be wrapped by one or more *decorator*\nexpressions. Decorator expressions are evaluated when the function is\ndefined, in the scope that contains the function definition. The\nresult must be a callable, which is invoked with the function object\nas the only argument. The returned value is bound to the function name\ninstead of the function object. Multiple decorators are applied in\nnested fashion. For example, the following code:\n\n @f1(arg)\n @f2\n def func(): pass\n\nis equivalent to:\n\n def func(): pass\n func = f1(arg)(f2(func))\n\nWhen one or more top-level parameters have the form *parameter* ``=``\n*expression*, the function is said to have "default parameter values."\nFor a parameter with a default value, the corresponding argument may\nbe omitted from a call, in which case the parameter\'s default value is\nsubstituted. If a parameter has a default value, all following\nparameters must also have a default value --- this is a syntactic\nrestriction that is not expressed by the grammar.\n\n**Default parameter values are evaluated when the function definition\nis executed.** This means that the expression is evaluated once, when\nthe function is defined, and that that same "pre-computed" value is\nused for each call. This is especially important to understand when a\ndefault parameter is a mutable object, such as a list or a dictionary:\nif the function modifies the object (e.g. 
by appending an item to a\nlist), the default value is in effect modified. This is generally not\nwhat was intended. A way around this is to use ``None`` as the\ndefault, and explicitly test for it in the body of the function, e.g.:\n\n def whats_on_the_telly(penguin=None):\n if penguin is None:\n penguin = []\n penguin.append("property of the zoo")\n return penguin\n\nFunction call semantics are described in more detail in section\n*Calls*. A function call always assigns values to all parameters\nmentioned in the parameter list, either from position arguments, from\nkeyword arguments, or from default values. If the form\n"``*identifier``" is present, it is initialized to a tuple receiving\nany excess positional parameters, defaulting to the empty tuple. If\nthe form "``**identifier``" is present, it is initialized to a new\ndictionary receiving any excess keyword arguments, defaulting to a new\nempty dictionary.\n\nIt is also possible to create anonymous functions (functions not bound\nto a name), for immediate use in expressions. This uses lambda forms,\ndescribed in section *Lambdas*. Note that the lambda form is merely a\nshorthand for a simplified function definition; a function defined in\na "``def``" statement can be passed around or assigned to another name\njust like a function defined by a lambda form. The "``def``" form is\nactually more powerful since it allows the execution of multiple\nstatements.\n\n**Programmer\'s note:** Functions are first-class objects. A "``def``"\nform executed inside a function definition defines a local function\nthat can be returned or passed around. Free variables used in the\nnested function can access the local variables of the function\ncontaining the def. See section *Naming and binding* for details.\n\n\nClass definitions\n=================\n\nA class definition defines a class object (see section *The standard\ntype hierarchy*):\n\n classdef ::= "class" classname [inheritance] ":" suite\n inheritance ::= "(" [expression_list] ")"\n classname ::= identifier\n\nA class definition is an executable statement. It first evaluates the\ninheritance list, if present. Each item in the inheritance list\nshould evaluate to a class object or class type which allows\nsubclassing. The class\'s suite is then executed in a new execution\nframe (see section *Naming and binding*), using a newly created local\nnamespace and the original global namespace. (Usually, the suite\ncontains only function definitions.) When the class\'s suite finishes\nexecution, its execution frame is discarded but its local namespace is\nsaved. [4] A class object is then created using the inheritance list\nfor the base classes and the saved local namespace for the attribute\ndictionary. The class name is bound to this class object in the\noriginal local namespace.\n\n**Programmer\'s note:** Variables defined in the class definition are\nclass variables; they are shared by all instances. To create instance\nvariables, they can be set in a method with ``self.name = value``.\nBoth class and instance variables are accessible through the notation\n"``self.name``", and an instance variable hides a class variable with\nthe same name when accessed in this way. Class variables can be used\nas defaults for instance variables, but using mutable values there can\nlead to unexpected results. 
For *new-style class*es, descriptors can\nbe used to create instance variables with different implementation\ndetails.\n\nClass definitions, like function definitions, may be wrapped by one or\nmore *decorator* expressions. The evaluation rules for the decorator\nexpressions are the same as for functions. The result must be a class\nobject, which is then bound to the class name.\n\n-[ Footnotes ]-\n\n[1] The exception is propagated to the invocation stack only if there\n is no ``finally`` clause that negates the exception.\n\n[2] Currently, control "flows off the end" except in the case of an\n exception or the execution of a ``return``, ``continue``, or\n ``break`` statement.\n\n[3] A string literal appearing as the first statement in the function\n body is transformed into the function\'s ``__doc__`` attribute and\n therefore the function\'s *docstring*.\n\n[4] A string literal appearing as the first statement in the class\n body is transformed into the namespace\'s ``__doc__`` item and\n therefore the class\'s *docstring*.\n', + 'compound': u'\nCompound statements\n*******************\n\nCompound statements contain (groups of) other statements; they affect\nor control the execution of those other statements in some way. In\ngeneral, compound statements span multiple lines, although in simple\nincarnations a whole compound statement may be contained in one line.\n\nThe ``if``, ``while`` and ``for`` statements implement traditional\ncontrol flow constructs. ``try`` specifies exception handlers and/or\ncleanup code for a group of statements. Function and class\ndefinitions are also syntactically compound statements.\n\nCompound statements consist of one or more \'clauses.\' A clause\nconsists of a header and a \'suite.\' The clause headers of a\nparticular compound statement are all at the same indentation level.\nEach clause header begins with a uniquely identifying keyword and ends\nwith a colon. A suite is a group of statements controlled by a\nclause. A suite can be one or more semicolon-separated simple\nstatements on the same line as the header, following the header\'s\ncolon, or it can be one or more indented statements on subsequent\nlines. Only the latter form of suite can contain nested compound\nstatements; the following is illegal, mostly because it wouldn\'t be\nclear to which ``if`` clause a following ``else`` clause would belong:\n\n if test1: if test2: print x\n\nAlso note that the semicolon binds tighter than the colon in this\ncontext, so that in the following example, either all or none of the\n``print`` statements are executed:\n\n if x < y < z: print x; print y; print z\n\nSummarizing:\n\n compound_stmt ::= if_stmt\n | while_stmt\n | for_stmt\n | try_stmt\n | with_stmt\n | funcdef\n | classdef\n | decorated\n suite ::= stmt_list NEWLINE | NEWLINE INDENT statement+ DEDENT\n statement ::= stmt_list NEWLINE | compound_stmt\n stmt_list ::= simple_stmt (";" simple_stmt)* [";"]\n\nNote that statements always end in a ``NEWLINE`` possibly followed by\na ``DEDENT``. 
Also note that optional continuation clauses always\nbegin with a keyword that cannot start a statement, thus there are no\nambiguities (the \'dangling ``else``\' problem is solved in Python by\nrequiring nested ``if`` statements to be indented).\n\nThe formatting of the grammar rules in the following sections places\neach clause on a separate line for clarity.\n\n\nThe ``if`` statement\n====================\n\nThe ``if`` statement is used for conditional execution:\n\n if_stmt ::= "if" expression ":" suite\n ( "elif" expression ":" suite )*\n ["else" ":" suite]\n\nIt selects exactly one of the suites by evaluating the expressions one\nby one until one is found to be true (see section *Boolean operations*\nfor the definition of true and false); then that suite is executed\n(and no other part of the ``if`` statement is executed or evaluated).\nIf all expressions are false, the suite of the ``else`` clause, if\npresent, is executed.\n\n\nThe ``while`` statement\n=======================\n\nThe ``while`` statement is used for repeated execution as long as an\nexpression is true:\n\n while_stmt ::= "while" expression ":" suite\n ["else" ":" suite]\n\nThis repeatedly tests the expression and, if it is true, executes the\nfirst suite; if the expression is false (which may be the first time\nit is tested) the suite of the ``else`` clause, if present, is\nexecuted and the loop terminates.\n\nA ``break`` statement executed in the first suite terminates the loop\nwithout executing the ``else`` clause\'s suite. A ``continue``\nstatement executed in the first suite skips the rest of the suite and\ngoes back to testing the expression.\n\n\nThe ``for`` statement\n=====================\n\nThe ``for`` statement is used to iterate over the elements of a\nsequence (such as a string, tuple or list) or other iterable object:\n\n for_stmt ::= "for" target_list "in" expression_list ":" suite\n ["else" ":" suite]\n\nThe expression list is evaluated once; it should yield an iterable\nobject. An iterator is created for the result of the\n``expression_list``. The suite is then executed once for each item\nprovided by the iterator, in the order of ascending indices. Each\nitem in turn is assigned to the target list using the standard rules\nfor assignments, and then the suite is executed. When the items are\nexhausted (which is immediately when the sequence is empty), the suite\nin the ``else`` clause, if present, is executed, and the loop\nterminates.\n\nA ``break`` statement executed in the first suite terminates the loop\nwithout executing the ``else`` clause\'s suite. A ``continue``\nstatement executed in the first suite skips the rest of the suite and\ncontinues with the next item, or with the ``else`` clause if there was\nno next item.\n\nThe suite may assign to the variable(s) in the target list; this does\nnot affect the next item assigned to it.\n\nThe target list is not deleted when the loop is finished, but if the\nsequence is empty, it will not have been assigned to at all by the\nloop. Hint: the built-in function ``range()`` returns a sequence of\nintegers suitable to emulate the effect of Pascal\'s ``for i := a to b\ndo``; e.g., ``range(3)`` returns the list ``[0, 1, 2]``.\n\nNote: There is a subtlety when the sequence is being modified by the loop\n (this can only occur for mutable sequences, i.e. lists). An internal\n counter is used to keep track of which item is used next, and this\n is incremented on each iteration. When this counter has reached the\n length of the sequence the loop terminates. 
This means that if the\n suite deletes the current (or a previous) item from the sequence,\n the next item will be skipped (since it gets the index of the\n current item which has already been treated). Likewise, if the\n suite inserts an item in the sequence before the current item, the\n current item will be treated again the next time through the loop.\n This can lead to nasty bugs that can be avoided by making a\n temporary copy using a slice of the whole sequence, e.g.,\n\n for x in a[:]:\n if x < 0: a.remove(x)\n\n\nThe ``try`` statement\n=====================\n\nThe ``try`` statement specifies exception handlers and/or cleanup code\nfor a group of statements:\n\n try_stmt ::= try1_stmt | try2_stmt\n try1_stmt ::= "try" ":" suite\n ("except" [expression [("as" | ",") target]] ":" suite)+\n ["else" ":" suite]\n ["finally" ":" suite]\n try2_stmt ::= "try" ":" suite\n "finally" ":" suite\n\nChanged in version 2.5: In previous versions of Python,\n``try``...``except``...``finally`` did not work. ``try``...``except``\nhad to be nested in ``try``...``finally``.\n\nThe ``except`` clause(s) specify one or more exception handlers. When\nno exception occurs in the ``try`` clause, no exception handler is\nexecuted. When an exception occurs in the ``try`` suite, a search for\nan exception handler is started. This search inspects the except\nclauses in turn until one is found that matches the exception. An\nexpression-less except clause, if present, must be last; it matches\nany exception. For an except clause with an expression, that\nexpression is evaluated, and the clause matches the exception if the\nresulting object is "compatible" with the exception. An object is\ncompatible with an exception if it is the class or a base class of the\nexception object, a tuple containing an item compatible with the\nexception, or, in the (deprecated) case of string exceptions, is the\nraised string itself (note that the object identities must match, i.e.\nit must be the same string object, not just a string with the same\nvalue).\n\nIf no except clause matches the exception, the search for an exception\nhandler continues in the surrounding code and on the invocation stack.\n[1]\n\nIf the evaluation of an expression in the header of an except clause\nraises an exception, the original search for a handler is canceled and\na search starts for the new exception in the surrounding code and on\nthe call stack (it is treated as if the entire ``try`` statement\nraised the exception).\n\nWhen a matching except clause is found, the exception is assigned to\nthe target specified in that except clause, if present, and the except\nclause\'s suite is executed. All except clauses must have an\nexecutable block. When the end of this block is reached, execution\ncontinues normally after the entire try statement. (This means that\nif two nested handlers exist for the same exception, and the exception\noccurs in the try clause of the inner handler, the outer handler will\nnot handle the exception.)\n\nBefore an except clause\'s suite is executed, details about the\nexception are assigned to three variables in the ``sys`` module:\n``sys.exc_type`` receives the object identifying the exception;\n``sys.exc_value`` receives the exception\'s parameter;\n``sys.exc_traceback`` receives a traceback object (see section *The\nstandard type hierarchy*) identifying the point in the program where\nthe exception occurred. 
These details are also available through the\n``sys.exc_info()`` function, which returns a tuple ``(exc_type,\nexc_value, exc_traceback)``. Use of the corresponding variables is\ndeprecated in favor of this function, since their use is unsafe in a\nthreaded program. As of Python 1.5, the variables are restored to\ntheir previous values (before the call) when returning from a function\nthat handled an exception.\n\nThe optional ``else`` clause is executed if and when control flows off\nthe end of the ``try`` clause. [2] Exceptions in the ``else`` clause\nare not handled by the preceding ``except`` clauses.\n\nIf ``finally`` is present, it specifies a \'cleanup\' handler. The\n``try`` clause is executed, including any ``except`` and ``else``\nclauses. If an exception occurs in any of the clauses and is not\nhandled, the exception is temporarily saved. The ``finally`` clause is\nexecuted. If there is a saved exception, it is re-raised at the end\nof the ``finally`` clause. If the ``finally`` clause raises another\nexception or executes a ``return`` or ``break`` statement, the saved\nexception is lost. The exception information is not available to the\nprogram during execution of the ``finally`` clause.\n\nWhen a ``return``, ``break`` or ``continue`` statement is executed in\nthe ``try`` suite of a ``try``...``finally`` statement, the\n``finally`` clause is also executed \'on the way out.\' A ``continue``\nstatement is illegal in the ``finally`` clause. (The reason is a\nproblem with the current implementation --- this restriction may be\nlifted in the future).\n\nAdditional information on exceptions can be found in section\n*Exceptions*, and information on using the ``raise`` statement to\ngenerate exceptions may be found in section *The raise statement*.\n\n\nThe ``with`` statement\n======================\n\nNew in version 2.5.\n\nThe ``with`` statement is used to wrap the execution of a block with\nmethods defined by a context manager (see section *With Statement\nContext Managers*). This allows common\n``try``...``except``...``finally`` usage patterns to be encapsulated\nfor convenient reuse.\n\n with_stmt ::= "with" with_item ("," with_item)* ":" suite\n with_item ::= expression ["as" target]\n\nThe execution of the ``with`` statement with one "item" proceeds as\nfollows:\n\n1. The context expression (the expression given in the **with_item**)\n is evaluated to obtain a context manager.\n\n2. The context manager\'s ``__exit__()`` is loaded for later use.\n\n3. The context manager\'s ``__enter__()`` method is invoked.\n\n4. If a target was included in the ``with`` statement, the return\n value from ``__enter__()`` is assigned to it.\n\n Note: The ``with`` statement guarantees that if the ``__enter__()``\n method returns without an error, then ``__exit__()`` will always\n be called. Thus, if an error occurs during the assignment to the\n target list, it will be treated the same as an error occurring\n within the suite would be. See step 6 below.\n\n5. The suite is executed.\n\n6. The context manager\'s ``__exit__()`` method is invoked. If an\n exception caused the suite to be exited, its type, value, and\n traceback are passed as arguments to ``__exit__()``. Otherwise,\n three ``None`` arguments are supplied.\n\n If the suite was exited due to an exception, and the return value\n from the ``__exit__()`` method was false, the exception is\n reraised. 
If the return value was true, the exception is\n suppressed, and execution continues with the statement following\n the ``with`` statement.\n\n If the suite was exited for any reason other than an exception, the\n return value from ``__exit__()`` is ignored, and execution proceeds\n at the normal location for the kind of exit that was taken.\n\nWith more than one item, the context managers are processed as if\nmultiple ``with`` statements were nested:\n\n with A() as a, B() as b:\n suite\n\nis equivalent to\n\n with A() as a:\n with B() as b:\n suite\n\nNote: In Python 2.5, the ``with`` statement is only allowed when the\n ``with_statement`` feature has been enabled. It is always enabled\n in Python 2.6.\n\nChanged in version 2.7: Support for multiple context expressions.\n\nSee also:\n\n **PEP 0343** - The "with" statement\n The specification, background, and examples for the Python\n ``with`` statement.\n\n\nFunction definitions\n====================\n\nA function definition defines a user-defined function object (see\nsection *The standard type hierarchy*):\n\n decorated ::= decorators (classdef | funcdef)\n decorators ::= decorator+\n decorator ::= "@" dotted_name ["(" [argument_list [","]] ")"] NEWLINE\n funcdef ::= "def" funcname "(" [parameter_list] ")" ":" suite\n dotted_name ::= identifier ("." identifier)*\n parameter_list ::= (defparameter ",")*\n ( "*" identifier [, "**" identifier]\n | "**" identifier\n | defparameter [","] )\n defparameter ::= parameter ["=" expression]\n sublist ::= parameter ("," parameter)* [","]\n parameter ::= identifier | "(" sublist ")"\n funcname ::= identifier\n\nA function definition is an executable statement. Its execution binds\nthe function name in the current local namespace to a function object\n(a wrapper around the executable code for the function). This\nfunction object contains a reference to the current global namespace\nas the global namespace to be used when the function is called.\n\nThe function definition does not execute the function body; this gets\nexecuted only when the function is called. [3]\n\nA function definition may be wrapped by one or more *decorator*\nexpressions. Decorator expressions are evaluated when the function is\ndefined, in the scope that contains the function definition. The\nresult must be a callable, which is invoked with the function object\nas the only argument. The returned value is bound to the function name\ninstead of the function object. Multiple decorators are applied in\nnested fashion. For example, the following code:\n\n @f1(arg)\n @f2\n def func(): pass\n\nis equivalent to:\n\n def func(): pass\n func = f1(arg)(f2(func))\n\nWhen one or more top-level parameters have the form *parameter* ``=``\n*expression*, the function is said to have "default parameter values."\nFor a parameter with a default value, the corresponding argument may\nbe omitted from a call, in which case the parameter\'s default value is\nsubstituted. If a parameter has a default value, all following\nparameters must also have a default value --- this is a syntactic\nrestriction that is not expressed by the grammar.\n\n**Default parameter values are evaluated when the function definition\nis executed.** This means that the expression is evaluated once, when\nthe function is defined, and that that same "pre-computed" value is\nused for each call. This is especially important to understand when a\ndefault parameter is a mutable object, such as a list or a dictionary:\nif the function modifies the object (e.g. 
by appending an item to a\nlist), the default value is in effect modified. This is generally not\nwhat was intended. A way around this is to use ``None`` as the\ndefault, and explicitly test for it in the body of the function, e.g.:\n\n def whats_on_the_telly(penguin=None):\n if penguin is None:\n penguin = []\n penguin.append("property of the zoo")\n return penguin\n\nFunction call semantics are described in more detail in section\n*Calls*. A function call always assigns values to all parameters\nmentioned in the parameter list, either from position arguments, from\nkeyword arguments, or from default values. If the form\n"``*identifier``" is present, it is initialized to a tuple receiving\nany excess positional parameters, defaulting to the empty tuple. If\nthe form "``**identifier``" is present, it is initialized to a new\ndictionary receiving any excess keyword arguments, defaulting to a new\nempty dictionary.\n\nIt is also possible to create anonymous functions (functions not bound\nto a name), for immediate use in expressions. This uses lambda forms,\ndescribed in section *Lambdas*. Note that the lambda form is merely a\nshorthand for a simplified function definition; a function defined in\na "``def``" statement can be passed around or assigned to another name\njust like a function defined by a lambda form. The "``def``" form is\nactually more powerful since it allows the execution of multiple\nstatements.\n\n**Programmer\'s note:** Functions are first-class objects. A "``def``"\nform executed inside a function definition defines a local function\nthat can be returned or passed around. Free variables used in the\nnested function can access the local variables of the function\ncontaining the def. See section *Naming and binding* for details.\n\n\nClass definitions\n=================\n\nA class definition defines a class object (see section *The standard\ntype hierarchy*):\n\n classdef ::= "class" classname [inheritance] ":" suite\n inheritance ::= "(" [expression_list] ")"\n classname ::= identifier\n\nA class definition is an executable statement. It first evaluates the\ninheritance list, if present. Each item in the inheritance list\nshould evaluate to a class object or class type which allows\nsubclassing. The class\'s suite is then executed in a new execution\nframe (see section *Naming and binding*), using a newly created local\nnamespace and the original global namespace. (Usually, the suite\ncontains only function definitions.) When the class\'s suite finishes\nexecution, its execution frame is discarded but its local namespace is\nsaved. [4] A class object is then created using the inheritance list\nfor the base classes and the saved local namespace for the attribute\ndictionary. The class name is bound to this class object in the\noriginal local namespace.\n\n**Programmer\'s note:** Variables defined in the class definition are\nclass variables; they are shared by all instances. To create instance\nvariables, they can be set in a method with ``self.name = value``.\nBoth class and instance variables are accessible through the notation\n"``self.name``", and an instance variable hides a class variable with\nthe same name when accessed in this way. Class variables can be used\nas defaults for instance variables, but using mutable values there can\nlead to unexpected results. 
For *new-style class*es, descriptors can\nbe used to create instance variables with different implementation\ndetails.\n\nClass definitions, like function definitions, may be wrapped by one or\nmore *decorator* expressions. The evaluation rules for the decorator\nexpressions are the same as for functions. The result must be a class\nobject, which is then bound to the class name.\n\n-[ Footnotes ]-\n\n[1] The exception is propagated to the invocation stack only if there\n is no ``finally`` clause that negates the exception.\n\n[2] Currently, control "flows off the end" except in the case of an\n exception or the execution of a ``return``, ``continue``, or\n ``break`` statement.\n\n[3] A string literal appearing as the first statement in the function\n body is transformed into the function\'s ``__doc__`` attribute and\n therefore the function\'s *docstring*.\n\n[4] A string literal appearing as the first statement in the class\n body is transformed into the namespace\'s ``__doc__`` item and\n therefore the class\'s *docstring*.\n', 'context-managers': u'\nWith Statement Context Managers\n*******************************\n\nNew in version 2.5.\n\nA *context manager* is an object that defines the runtime context to\nbe established when executing a ``with`` statement. The context\nmanager handles the entry into, and the exit from, the desired runtime\ncontext for the execution of the block of code. Context managers are\nnormally invoked using the ``with`` statement (described in section\n*The with statement*), but can also be used by directly invoking their\nmethods.\n\nTypical uses of context managers include saving and restoring various\nkinds of global state, locking and unlocking resources, closing opened\nfiles, etc.\n\nFor more information on context managers, see *Context Manager Types*.\n\nobject.__enter__(self)\n\n Enter the runtime context related to this object. The ``with``\n statement will bind this method\'s return value to the target(s)\n specified in the ``as`` clause of the statement, if any.\n\nobject.__exit__(self, exc_type, exc_value, traceback)\n\n Exit the runtime context related to this object. The parameters\n describe the exception that caused the context to be exited. If the\n context was exited without an exception, all three arguments will\n be ``None``.\n\n If an exception is supplied, and the method wishes to suppress the\n exception (i.e., prevent it from being propagated), it should\n return a true value. Otherwise, the exception will be processed\n normally upon exit from this method.\n\n Note that ``__exit__()`` methods should not reraise the passed-in\n exception; this is the caller\'s responsibility.\n\nSee also:\n\n **PEP 0343** - The "with" statement\n The specification, background, and examples for the Python\n ``with`` statement.\n', 'continue': u'\nThe ``continue`` statement\n**************************\n\n continue_stmt ::= "continue"\n\n``continue`` may only occur syntactically nested in a ``for`` or\n``while`` loop, but not nested in a function or class definition or\n``finally`` clause within that loop. 
It continues with the next cycle\nof the nearest enclosing loop.\n\nWhen ``continue`` passes control out of a ``try`` statement with a\n``finally`` clause, that ``finally`` clause is executed before really\nstarting the next loop cycle.\n', 'conversions': u'\nArithmetic conversions\n**********************\n\nWhen a description of an arithmetic operator below uses the phrase\n"the numeric arguments are converted to a common type," the arguments\nare coerced using the coercion rules listed at *Coercion rules*. If\nboth arguments are standard numeric types, the following coercions are\napplied:\n\n* If either argument is a complex number, the other is converted to\n complex;\n\n* otherwise, if either argument is a floating point number, the other\n is converted to floating point;\n\n* otherwise, if either argument is a long integer, the other is\n converted to long integer;\n\n* otherwise, both must be plain integers and no conversion is\n necessary.\n\nSome additional rules apply for certain operators (e.g., a string left\nargument to the \'%\' operator). Extensions can define their own\ncoercions.\n', 'customization': u'\nBasic customization\n*******************\n\nobject.__new__(cls[, ...])\n\n Called to create a new instance of class *cls*. ``__new__()`` is a\n static method (special-cased so you need not declare it as such)\n that takes the class of which an instance was requested as its\n first argument. The remaining arguments are those passed to the\n object constructor expression (the call to the class). The return\n value of ``__new__()`` should be the new object instance (usually\n an instance of *cls*).\n\n Typical implementations create a new instance of the class by\n invoking the superclass\'s ``__new__()`` method using\n ``super(currentclass, cls).__new__(cls[, ...])`` with appropriate\n arguments and then modifying the newly-created instance as\n necessary before returning it.\n\n If ``__new__()`` returns an instance of *cls*, then the new\n instance\'s ``__init__()`` method will be invoked like\n ``__init__(self[, ...])``, where *self* is the new instance and the\n remaining arguments are the same as were passed to ``__new__()``.\n\n If ``__new__()`` does not return an instance of *cls*, then the new\n instance\'s ``__init__()`` method will not be invoked.\n\n ``__new__()`` is intended mainly to allow subclasses of immutable\n types (like int, str, or tuple) to customize instance creation. It\n is also commonly overridden in custom metaclasses in order to\n customize class creation.\n\nobject.__init__(self[, ...])\n\n Called when the instance is created. The arguments are those\n passed to the class constructor expression. If a base class has an\n ``__init__()`` method, the derived class\'s ``__init__()`` method,\n if any, must explicitly call it to ensure proper initialization of\n the base class part of the instance; for example:\n ``BaseClass.__init__(self, [args...])``. As a special constraint\n on constructors, no value may be returned; doing so will cause a\n ``TypeError`` to be raised at runtime.\n\nobject.__del__(self)\n\n Called when the instance is about to be destroyed. This is also\n called a destructor. If a base class has a ``__del__()`` method,\n the derived class\'s ``__del__()`` method, if any, must explicitly\n call it to ensure proper deletion of the base class part of the\n instance. Note that it is possible (though not recommended!) for\n the ``__del__()`` method to postpone destruction of the instance by\n creating a new reference to it. 
It may then be called at a later\n time when this new reference is deleted. It is not guaranteed that\n ``__del__()`` methods are called for objects that still exist when\n the interpreter exits.\n\n Note: ``del x`` doesn\'t directly call ``x.__del__()`` --- the former\n decrements the reference count for ``x`` by one, and the latter\n is only called when ``x``\'s reference count reaches zero. Some\n common situations that may prevent the reference count of an\n object from going to zero include: circular references between\n objects (e.g., a doubly-linked list or a tree data structure with\n parent and child pointers); a reference to the object on the\n stack frame of a function that caught an exception (the traceback\n stored in ``sys.exc_traceback`` keeps the stack frame alive); or\n a reference to the object on the stack frame that raised an\n unhandled exception in interactive mode (the traceback stored in\n ``sys.last_traceback`` keeps the stack frame alive). The first\n situation can only be remedied by explicitly breaking the cycles;\n the latter two situations can be resolved by storing ``None`` in\n ``sys.exc_traceback`` or ``sys.last_traceback``. Circular\n references which are garbage are detected when the option cycle\n detector is enabled (it\'s on by default), but can only be cleaned\n up if there are no Python-level ``__del__()`` methods involved.\n Refer to the documentation for the ``gc`` module for more\n information about how ``__del__()`` methods are handled by the\n cycle detector, particularly the description of the ``garbage``\n value.\n\n Warning: Due to the precarious circumstances under which ``__del__()``\n methods are invoked, exceptions that occur during their execution\n are ignored, and a warning is printed to ``sys.stderr`` instead.\n Also, when ``__del__()`` is invoked in response to a module being\n deleted (e.g., when execution of the program is done), other\n globals referenced by the ``__del__()`` method may already have\n been deleted or in the process of being torn down (e.g. the\n import machinery shutting down). For this reason, ``__del__()``\n methods should do the absolute minimum needed to maintain\n external invariants. Starting with version 1.5, Python\n guarantees that globals whose name begins with a single\n underscore are deleted from their module before other globals are\n deleted; if no other references to such globals exist, this may\n help in assuring that imported modules are still available at the\n time when the ``__del__()`` method is called.\n\nobject.__repr__(self)\n\n Called by the ``repr()`` built-in function and by string\n conversions (reverse quotes) to compute the "official" string\n representation of an object. If at all possible, this should look\n like a valid Python expression that could be used to recreate an\n object with the same value (given an appropriate environment). If\n this is not possible, a string of the form ``<...some useful\n description...>`` should be returned. The return value must be a\n string object. If a class defines ``__repr__()`` but not\n ``__str__()``, then ``__repr__()`` is also used when an "informal"\n string representation of instances of that class is required.\n\n This is typically used for debugging, so it is important that the\n representation is information-rich and unambiguous.\n\nobject.__str__(self)\n\n Called by the ``str()`` built-in function and by the ``print``\n statement to compute the "informal" string representation of an\n object. 
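A minimal sketch of the ``__repr__()``/``__str__()`` distinction described above; the ``Point`` class is invented for illustration and the ``print`` statement is Python 2.7 syntax:

    class Point(object):
        def __init__(self, x, y):
            self.x, self.y = x, y

        def __repr__(self):
            # Aim for something that could recreate the object.
            return 'Point(%r, %r)' % (self.x, self.y)

        def __str__(self):
            # A more convenient, informal representation.
            return '(%s, %s)' % (self.x, self.y)

    p = Point(1, 2)
    repr(p)      # 'Point(1, 2)'
    print p      # prints (1, 2), via __str__()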
This differs from ``__repr__()`` in that it does not have\n to be a valid Python expression: a more convenient or concise\n representation may be used instead. The return value must be a\n string object.\n\nobject.__lt__(self, other)\nobject.__le__(self, other)\nobject.__eq__(self, other)\nobject.__ne__(self, other)\nobject.__gt__(self, other)\nobject.__ge__(self, other)\n\n New in version 2.1.\n\n These are the so-called "rich comparison" methods, and are called\n for comparison operators in preference to ``__cmp__()`` below. The\n correspondence between operator symbols and method names is as\n follows: ``xy`` call ``x.__ne__(y)``, ``x>y`` calls ``x.__gt__(y)``, and\n ``x>=y`` calls ``x.__ge__(y)``.\n\n A rich comparison method may return the singleton\n ``NotImplemented`` if it does not implement the operation for a\n given pair of arguments. By convention, ``False`` and ``True`` are\n returned for a successful comparison. However, these methods can\n return any value, so if the comparison operator is used in a\n Boolean context (e.g., in the condition of an ``if`` statement),\n Python will call ``bool()`` on the value to determine if the result\n is true or false.\n\n There are no implied relationships among the comparison operators.\n The truth of ``x==y`` does not imply that ``x!=y`` is false.\n Accordingly, when defining ``__eq__()``, one should also define\n ``__ne__()`` so that the operators will behave as expected. See\n the paragraph on ``__hash__()`` for some important notes on\n creating *hashable* objects which support custom comparison\n operations and are usable as dictionary keys.\n\n There are no swapped-argument versions of these methods (to be used\n when the left argument does not support the operation but the right\n argument does); rather, ``__lt__()`` and ``__gt__()`` are each\n other\'s reflection, ``__le__()`` and ``__ge__()`` are each other\'s\n reflection, and ``__eq__()`` and ``__ne__()`` are their own\n reflection.\n\n Arguments to rich comparison methods are never coerced.\n\n To automatically generate ordering operations from a single root\n operation, see ``functools.total_ordering()``.\n\nobject.__cmp__(self, other)\n\n Called by comparison operations if rich comparison (see above) is\n not defined. Should return a negative integer if ``self < other``,\n zero if ``self == other``, a positive integer if ``self > other``.\n If no ``__cmp__()``, ``__eq__()`` or ``__ne__()`` operation is\n defined, class instances are compared by object identity\n ("address"). See also the description of ``__hash__()`` for some\n important notes on creating *hashable* objects which support custom\n comparison operations and are usable as dictionary keys. (Note: the\n restriction that exceptions are not propagated by ``__cmp__()`` has\n been removed since Python 1.5.)\n\nobject.__rcmp__(self, other)\n\n Changed in version 2.1: No longer supported.\n\nobject.__hash__(self)\n\n Called by built-in function ``hash()`` and for operations on\n members of hashed collections including ``set``, ``frozenset``, and\n ``dict``. ``__hash__()`` should return an integer. The only\n required property is that objects which compare equal have the same\n hash value; it is advised to somehow mix together (e.g. 
using\n exclusive or) the hash values for the components of the object that\n also play a part in comparison of objects.\n\n If a class does not define a ``__cmp__()`` or ``__eq__()`` method\n it should not define a ``__hash__()`` operation either; if it\n defines ``__cmp__()`` or ``__eq__()`` but not ``__hash__()``, its\n instances will not be usable in hashed collections. If a class\n defines mutable objects and implements a ``__cmp__()`` or\n ``__eq__()`` method, it should not implement ``__hash__()``, since\n hashable collection implementations require that a object\'s hash\n value is immutable (if the object\'s hash value changes, it will be\n in the wrong hash bucket).\n\n User-defined classes have ``__cmp__()`` and ``__hash__()`` methods\n by default; with them, all objects compare unequal (except with\n themselves) and ``x.__hash__()`` returns ``id(x)``.\n\n Classes which inherit a ``__hash__()`` method from a parent class\n but change the meaning of ``__cmp__()`` or ``__eq__()`` such that\n the hash value returned is no longer appropriate (e.g. by switching\n to a value-based concept of equality instead of the default\n identity based equality) can explicitly flag themselves as being\n unhashable by setting ``__hash__ = None`` in the class definition.\n Doing so means that not only will instances of the class raise an\n appropriate ``TypeError`` when a program attempts to retrieve their\n hash value, but they will also be correctly identified as\n unhashable when checking ``isinstance(obj, collections.Hashable)``\n (unlike classes which define their own ``__hash__()`` to explicitly\n raise ``TypeError``).\n\n Changed in version 2.5: ``__hash__()`` may now also return a long\n integer object; the 32-bit integer is then derived from the hash of\n that object.\n\n Changed in version 2.6: ``__hash__`` may now be set to ``None`` to\n explicitly flag instances of a class as unhashable.\n\nobject.__nonzero__(self)\n\n Called to implement truth value testing and the built-in operation\n ``bool()``; should return ``False`` or ``True``, or their integer\n equivalents ``0`` or ``1``. When this method is not defined,\n ``__len__()`` is called, if it is defined, and the object is\n considered true if its result is nonzero. If a class defines\n neither ``__len__()`` nor ``__nonzero__()``, all its instances are\n considered true.\n\nobject.__unicode__(self)\n\n Called to implement ``unicode()`` built-in; should return a Unicode\n object. When this method is not defined, string conversion is\n attempted, and the result of string conversion is converted to\n Unicode using the system default encoding.\n', - 'debugger': u'\n``pdb`` --- The Python Debugger\n*******************************\n\nThe module ``pdb`` defines an interactive source code debugger for\nPython programs. It supports setting (conditional) breakpoints and\nsingle stepping at the source line level, inspection of stack frames,\nsource code listing, and evaluation of arbitrary Python code in the\ncontext of any stack frame. It also supports post-mortem debugging\nand can be called under program control.\n\nThe debugger is extensible --- it is actually defined as the class\n``Pdb``. This is currently undocumented but easily understood by\nreading the source. The extension interface uses the modules ``bdb``\nand ``cmd``.\n\nThe debugger\'s prompt is ``(Pdb)``. 
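Returning briefly to the ``__eq__()``, ``__ne__()`` and ``__hash__()`` requirements described earlier, here is a hedged sketch of a hashable value object; the ``Color`` class is invented, and hashing the same tuple that takes part in comparison is just one reasonable way to keep equal objects hashing equally:

    class Color(object):
        def __init__(self, r, g, b):
            self._rgb = (r, g, b)

        def __eq__(self, other):
            if not isinstance(other, Color):
                return NotImplemented
            return self._rgb == other._rgb

        def __ne__(self, other):
            result = self.__eq__(other)
            if result is NotImplemented:
                return result
            return not result

        def __hash__(self):
            # Mix together exactly the components used in comparison.
            return hash(self._rgb)

    assert Color(1, 2, 3) == Color(1, 2, 3)
    assert len(set([Color(1, 2, 3), Color(1, 2, 3)])) == 1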
Typical usage to run a program\nunder control of the debugger is:\n\n >>> import pdb\n >>> import mymodule\n >>> pdb.run(\'mymodule.test()\')\n > (0)?()\n (Pdb) continue\n > (1)?()\n (Pdb) continue\n NameError: \'spam\'\n > (1)?()\n (Pdb)\n\n``pdb.py`` can also be invoked as a script to debug other scripts.\nFor example:\n\n python -m pdb myscript.py\n\nWhen invoked as a script, pdb will automatically enter post-mortem\ndebugging if the program being debugged exits abnormally. After post-\nmortem debugging (or after normal exit of the program), pdb will\nrestart the program. Automatic restarting preserves pdb\'s state (such\nas breakpoints) and in most cases is more useful than quitting the\ndebugger upon program\'s exit.\n\nNew in version 2.4: Restarting post-mortem behavior added.\n\nThe typical usage to break into the debugger from a running program is\nto insert\n\n import pdb; pdb.set_trace()\n\nat the location you want to break into the debugger. You can then\nstep through the code following this statement, and continue running\nwithout the debugger using the ``c`` command.\n\nThe typical usage to inspect a crashed program is:\n\n >>> import pdb\n >>> import mymodule\n >>> mymodule.test()\n Traceback (most recent call last):\n File "", line 1, in ?\n File "./mymodule.py", line 4, in test\n test2()\n File "./mymodule.py", line 3, in test2\n print spam\n NameError: spam\n >>> pdb.pm()\n > ./mymodule.py(3)test2()\n -> print spam\n (Pdb)\n\nThe module defines the following functions; each enters the debugger\nin a slightly different way:\n\npdb.run(statement[, globals[, locals]])\n\n Execute the *statement* (given as a string) under debugger control.\n The debugger prompt appears before any code is executed; you can\n set breakpoints and type ``continue``, or you can step through the\n statement using ``step`` or ``next`` (all these commands are\n explained below). The optional *globals* and *locals* arguments\n specify the environment in which the code is executed; by default\n the dictionary of the module ``__main__`` is used. (See the\n explanation of the ``exec`` statement or the ``eval()`` built-in\n function.)\n\npdb.runeval(expression[, globals[, locals]])\n\n Evaluate the *expression* (given as a string) under debugger\n control. When ``runeval()`` returns, it returns the value of the\n expression. Otherwise this function is similar to ``run()``.\n\npdb.runcall(function[, argument, ...])\n\n Call the *function* (a function or method object, not a string)\n with the given arguments. When ``runcall()`` returns, it returns\n whatever the function call returned. The debugger prompt appears\n as soon as the function is entered.\n\npdb.set_trace()\n\n Enter the debugger at the calling stack frame. This is useful to\n hard-code a breakpoint at a given point in a program, even if the\n code is not otherwise being debugged (e.g. when an assertion\n fails).\n\npdb.post_mortem([traceback])\n\n Enter post-mortem debugging of the given *traceback* object. If no\n *traceback* is given, it uses the one of the exception that is\n currently being handled (an exception must be being handled if the\n default is to be used).\n\npdb.pm()\n\n Enter post-mortem debugging of the traceback found in\n ``sys.last_traceback``.\n\nThe ``run_*`` functions and ``set_trace()`` are aliases for\ninstantiating the ``Pdb`` class and calling the method of the same\nname. 
If you want to access further features, you have to do this\nyourself:\n\nclass class pdb.Pdb(completekey=\'tab\', stdin=None, stdout=None, skip=None)\n\n ``Pdb`` is the debugger class.\n\n The *completekey*, *stdin* and *stdout* arguments are passed to the\n underlying ``cmd.Cmd`` class; see the description there.\n\n The *skip* argument, if given, must be an iterable of glob-style\n module name patterns. The debugger will not step into frames that\n originate in a module that matches one of these patterns. [1]\n\n Example call to enable tracing with *skip*:\n\n import pdb; pdb.Pdb(skip=[\'django.*\']).set_trace()\n\n New in version 2.7: The *skip* argument.\n\n run(statement[, globals[, locals]])\n runeval(expression[, globals[, locals]])\n runcall(function[, argument, ...])\n set_trace()\n\n See the documentation for the functions explained above.\n', + 'debugger': u'\n``pdb`` --- The Python Debugger\n*******************************\n\nThe module ``pdb`` defines an interactive source code debugger for\nPython programs. It supports setting (conditional) breakpoints and\nsingle stepping at the source line level, inspection of stack frames,\nsource code listing, and evaluation of arbitrary Python code in the\ncontext of any stack frame. It also supports post-mortem debugging\nand can be called under program control.\n\nThe debugger is extensible --- it is actually defined as the class\n``Pdb``. This is currently undocumented but easily understood by\nreading the source. The extension interface uses the modules ``bdb``\nand ``cmd``.\n\nThe debugger\'s prompt is ``(Pdb)``. Typical usage to run a program\nunder control of the debugger is:\n\n >>> import pdb\n >>> import mymodule\n >>> pdb.run(\'mymodule.test()\')\n > (0)?()\n (Pdb) continue\n > (1)?()\n (Pdb) continue\n NameError: \'spam\'\n > (1)?()\n (Pdb)\n\n``pdb.py`` can also be invoked as a script to debug other scripts.\nFor example:\n\n python -m pdb myscript.py\n\nWhen invoked as a script, pdb will automatically enter post-mortem\ndebugging if the program being debugged exits abnormally. After post-\nmortem debugging (or after normal exit of the program), pdb will\nrestart the program. Automatic restarting preserves pdb\'s state (such\nas breakpoints) and in most cases is more useful than quitting the\ndebugger upon program\'s exit.\n\nNew in version 2.4: Restarting post-mortem behavior added.\n\nThe typical usage to break into the debugger from a running program is\nto insert\n\n import pdb; pdb.set_trace()\n\nat the location you want to break into the debugger. You can then\nstep through the code following this statement, and continue running\nwithout the debugger using the ``c`` command.\n\nThe typical usage to inspect a crashed program is:\n\n >>> import pdb\n >>> import mymodule\n >>> mymodule.test()\n Traceback (most recent call last):\n File "", line 1, in ?\n File "./mymodule.py", line 4, in test\n test2()\n File "./mymodule.py", line 3, in test2\n print spam\n NameError: spam\n >>> pdb.pm()\n > ./mymodule.py(3)test2()\n -> print spam\n (Pdb)\n\nThe module defines the following functions; each enters the debugger\nin a slightly different way:\n\npdb.run(statement[, globals[, locals]])\n\n Execute the *statement* (given as a string) under debugger control.\n The debugger prompt appears before any code is executed; you can\n set breakpoints and type ``continue``, or you can step through the\n statement using ``step`` or ``next`` (all these commands are\n explained below). 
The optional *globals* and *locals* arguments\n specify the environment in which the code is executed; by default\n the dictionary of the module ``__main__`` is used. (See the\n explanation of the ``exec`` statement or the ``eval()`` built-in\n function.)\n\npdb.runeval(expression[, globals[, locals]])\n\n Evaluate the *expression* (given as a string) under debugger\n control. When ``runeval()`` returns, it returns the value of the\n expression. Otherwise this function is similar to ``run()``.\n\npdb.runcall(function[, argument, ...])\n\n Call the *function* (a function or method object, not a string)\n with the given arguments. When ``runcall()`` returns, it returns\n whatever the function call returned. The debugger prompt appears\n as soon as the function is entered.\n\npdb.set_trace()\n\n Enter the debugger at the calling stack frame. This is useful to\n hard-code a breakpoint at a given point in a program, even if the\n code is not otherwise being debugged (e.g. when an assertion\n fails).\n\npdb.post_mortem([traceback])\n\n Enter post-mortem debugging of the given *traceback* object. If no\n *traceback* is given, it uses the one of the exception that is\n currently being handled (an exception must be being handled if the\n default is to be used).\n\npdb.pm()\n\n Enter post-mortem debugging of the traceback found in\n ``sys.last_traceback``.\n\nThe ``run*`` functions and ``set_trace()`` are aliases for\ninstantiating the ``Pdb`` class and calling the method of the same\nname. If you want to access further features, you have to do this\nyourself:\n\nclass class pdb.Pdb(completekey=\'tab\', stdin=None, stdout=None, skip=None)\n\n ``Pdb`` is the debugger class.\n\n The *completekey*, *stdin* and *stdout* arguments are passed to the\n underlying ``cmd.Cmd`` class; see the description there.\n\n The *skip* argument, if given, must be an iterable of glob-style\n module name patterns. The debugger will not step into frames that\n originate in a module that matches one of these patterns. [1]\n\n Example call to enable tracing with *skip*:\n\n import pdb; pdb.Pdb(skip=[\'django.*\']).set_trace()\n\n New in version 2.7: The *skip* argument.\n\n run(statement[, globals[, locals]])\n runeval(expression[, globals[, locals]])\n runcall(function[, argument, ...])\n set_trace()\n\n See the documentation for the functions explained above.\n', 'del': u'\nThe ``del`` statement\n*********************\n\n del_stmt ::= "del" target_list\n\nDeletion is recursively defined very similar to the way assignment is\ndefined. Rather that spelling it out in full details, here are some\nhints.\n\nDeletion of a target list recursively deletes each target, from left\nto right.\n\nDeletion of a name removes the binding of that name from the local or\nglobal namespace, depending on whether the name occurs in a ``global``\nstatement in the same code block. 
If the name is unbound, a\n``NameError`` exception will be raised.\n\nIt is illegal to delete a name from the local namespace if it occurs\nas a free variable in a nested block.\n\nDeletion of attribute references, subscriptions and slicings is passed\nto the primary object involved; deletion of a slicing is in general\nequivalent to assignment of an empty slice of the right type (but even\nthis is determined by the sliced object).\n', 'dict': u'\nDictionary displays\n*******************\n\nA dictionary display is a possibly empty series of key/datum pairs\nenclosed in curly braces:\n\n dict_display ::= "{" [key_datum_list | dict_comprehension] "}"\n key_datum_list ::= key_datum ("," key_datum)* [","]\n key_datum ::= expression ":" expression\n dict_comprehension ::= expression ":" expression comp_for\n\nA dictionary display yields a new dictionary object.\n\nIf a comma-separated sequence of key/datum pairs is given, they are\nevaluated from left to right to define the entries of the dictionary:\neach key object is used as a key into the dictionary to store the\ncorresponding datum. This means that you can specify the same key\nmultiple times in the key/datum list, and the final dictionary\'s value\nfor that key will be the last one given.\n\nA dict comprehension, in contrast to list and set comprehensions,\nneeds two expressions separated with a colon followed by the usual\n"for" and "if" clauses. When the comprehension is run, the resulting\nkey and value elements are inserted in the new dictionary in the order\nthey are produced.\n\nRestrictions on the types of the key values are listed earlier in\nsection *The standard type hierarchy*. (To summarize, the key type\nshould be *hashable*, which excludes all mutable objects.) Clashes\nbetween duplicate keys are not detected; the last datum (textually\nrightmost in the display) stored for a given key value prevails.\n', 'dynamic-features': u'\nInteraction with dynamic features\n*********************************\n\nThere are several cases where Python statements are illegal when used\nin conjunction with nested scopes that contain free variables.\n\nIf a variable is referenced in an enclosing scope, it is illegal to\ndelete the name. An error will be reported at compile time.\n\nIf the wild card form of import --- ``import *`` --- is used in a\nfunction and the function contains or is a nested block with free\nvariables, the compiler will raise a ``SyntaxError``.\n\nIf ``exec`` is used in a function and the function contains or is a\nnested block with free variables, the compiler will raise a\n``SyntaxError`` unless the exec explicitly specifies the local\nnamespace for the ``exec``. (In other words, ``exec obj`` would be\nillegal, but ``exec obj in ns`` would be legal.)\n\nThe ``eval()``, ``execfile()``, and ``input()`` functions and the\n``exec`` statement do not have access to the full environment for\nresolving names. Names may be resolved in the local and global\nnamespaces of the caller. Free variables are not resolved in the\nnearest enclosing namespace, but in the global namespace. 
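The dictionary display rules described earlier are easy to check interactively; a small sketch (values chosen arbitrarily):

    # The last datum for a duplicated key prevails:
    d = {'a': 1, 'b': 2, 'a': 3}
    assert d == {'a': 3, 'b': 2}

    # A dict comprehension pairs a key and a value with a colon:
    squares = {n: n * n for n in range(5)}
    assert squares[4] == 16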
[1] The\n``exec`` statement and the ``eval()`` and ``execfile()`` functions\nhave optional arguments to override the global and local namespace.\nIf only one namespace is specified, it is used for both.\n', 'else': u'\nThe ``if`` statement\n********************\n\nThe ``if`` statement is used for conditional execution:\n\n if_stmt ::= "if" expression ":" suite\n ( "elif" expression ":" suite )*\n ["else" ":" suite]\n\nIt selects exactly one of the suites by evaluating the expressions one\nby one until one is found to be true (see section *Boolean operations*\nfor the definition of true and false); then that suite is executed\n(and no other part of the ``if`` statement is executed or evaluated).\nIf all expressions are false, the suite of the ``else`` clause, if\npresent, is executed.\n', 'exceptions': u'\nExceptions\n**********\n\nExceptions are a means of breaking out of the normal flow of control\nof a code block in order to handle errors or other exceptional\nconditions. An exception is *raised* at the point where the error is\ndetected; it may be *handled* by the surrounding code block or by any\ncode block that directly or indirectly invoked the code block where\nthe error occurred.\n\nThe Python interpreter raises an exception when it detects a run-time\nerror (such as division by zero). A Python program can also\nexplicitly raise an exception with the ``raise`` statement. Exception\nhandlers are specified with the ``try`` ... ``except`` statement. The\n``finally`` clause of such a statement can be used to specify cleanup\ncode which does not handle the exception, but is executed whether an\nexception occurred or not in the preceding code.\n\nPython uses the "termination" model of error handling: an exception\nhandler can find out what happened and continue execution at an outer\nlevel, but it cannot repair the cause of the error and retry the\nfailing operation (except by re-entering the offending piece of code\nfrom the top).\n\nWhen an exception is not handled at all, the interpreter terminates\nexecution of the program, or returns to its interactive main loop. In\neither case, it prints a stack backtrace, except when the exception is\n``SystemExit``.\n\nExceptions are identified by class instances. The ``except`` clause\nis selected depending on the class of the instance: it must reference\nthe class of the instance or a base class thereof. The instance can\nbe received by the handler and can carry additional information about\nthe exceptional condition.\n\nExceptions can also be identified by strings, in which case the\n``except`` clause is selected by object identity. An arbitrary value\ncan be raised along with the identifying string which can be passed to\nthe handler.\n\nNote: Messages to exceptions are not part of the Python API. Their\n contents may change from one version of Python to the next without\n warning and should not be relied on by code which will run under\n multiple versions of the interpreter.\n\nSee also the description of the ``try`` statement in section *The try\nstatement* and ``raise`` statement in section *The raise statement*.\n\n-[ Footnotes ]-\n\n[1] This limitation occurs because the code that is executed by these\n operations is not available at the time the module is compiled.\n', 'exec': u'\nThe ``exec`` statement\n**********************\n\n exec_stmt ::= "exec" or_expr ["in" expression ["," expression]]\n\nThis statement supports dynamic execution of Python code. 
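Returning briefly to the exceptions section above, a short sketch of the termination model and the ``finally`` cleanup clause; ``divide()`` is a made-up helper and the ``print`` statement is Python 2.7 syntax:

    def divide(a, b):
        try:
            return a / b
        except ZeroDivisionError:
            # Handled at this outer level; the failing operation
            # is not retried.
            return None
        finally:
            # Runs whether or not an exception occurred.
            print 'divide() finished'

    divide(1, 0)    # prints 'divide() finished', returns None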
The first\nexpression should evaluate to either a string, an open file object, or\na code object. If it is a string, the string is parsed as a suite of\nPython statements which is then executed (unless a syntax error\noccurs). [1] If it is an open file, the file is parsed until EOF and\nexecuted. If it is a code object, it is simply executed. In all\ncases, the code that\'s executed is expected to be valid as file input\n(see section *File input*). Be aware that the ``return`` and\n``yield`` statements may not be used outside of function definitions\neven within the context of code passed to the ``exec`` statement.\n\nIn all cases, if the optional parts are omitted, the code is executed\nin the current scope. If only the first expression after ``in`` is\nspecified, it should be a dictionary, which will be used for both the\nglobal and the local variables. If two expressions are given, they\nare used for the global and local variables, respectively. If\nprovided, *locals* can be any mapping object.\n\nChanged in version 2.4: Formerly, *locals* was required to be a\ndictionary.\n\nAs a side effect, an implementation may insert additional keys into\nthe dictionaries given besides those corresponding to variable names\nset by the executed code. For example, the current implementation may\nadd a reference to the dictionary of the built-in module\n``__builtin__`` under the key ``__builtins__`` (!).\n\n**Programmer\'s hints:** dynamic evaluation of expressions is supported\nby the built-in function ``eval()``. The built-in functions\n``globals()`` and ``locals()`` return the current global and local\ndictionary, respectively, which may be useful to pass around for use\nby ``exec``.\n\n-[ Footnotes ]-\n\n[1] Note that the parser only accepts the Unix-style end of line\n convention. If you are reading the code from a file, make sure to\n use universal newline mode to convert Windows or Mac-style\n newlines.\n', - 'execmodel': u'\nExecution model\n***************\n\n\nNaming and binding\n==================\n\n*Names* refer to objects. Names are introduced by name binding\noperations. Each occurrence of a name in the program text refers to\nthe *binding* of that name established in the innermost function block\ncontaining the use.\n\nA *block* is a piece of Python program text that is executed as a\nunit. The following are blocks: a module, a function body, and a class\ndefinition. Each command typed interactively is a block. A script\nfile (a file given as standard input to the interpreter or specified\non the interpreter command line the first argument) is a code block.\nA script command (a command specified on the interpreter command line\nwith the \'**-c**\' option) is a code block. The file read by the\nbuilt-in function ``execfile()`` is a code block. The string argument\npassed to the built-in function ``eval()`` and to the ``exec``\nstatement is a code block. The expression read and evaluated by the\nbuilt-in function ``input()`` is a code block.\n\nA code block is executed in an *execution frame*. A frame contains\nsome administrative information (used for debugging) and determines\nwhere and how execution continues after the code block\'s execution has\ncompleted.\n\nA *scope* defines the visibility of a name within a block. If a local\nvariable is defined in a block, its scope includes that block. If the\ndefinition occurs in a function block, the scope extends to any blocks\ncontained within the defining one, unless a contained block introduces\na different binding for the name. 
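Returning to the ``exec`` statement described just above, a minimal sketch of its explicit-namespace forms (the dictionaries and code strings are invented for illustration):

    # One namespace serves as both globals and locals:
    ns = {}
    exec "x = 6 * 7" in ns
    assert ns['x'] == 42

    # Two namespaces: globals first, then locals:
    g, l = {'y': 10}, {}
    exec "z = y + 1" in g, l
    assert l['z'] == 11

As noted above, the implementation may also insert extra keys such as ``__builtins__`` into these dictionaries.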
The scope of names defined in a\nclass block is limited to the class block; it does not extend to the\ncode blocks of methods -- this includes generator expressions since\nthey are implemented using a function scope. This means that the\nfollowing will fail:\n\n class A:\n a = 42\n b = list(a + i for i in range(10))\n\nWhen a name is used in a code block, it is resolved using the nearest\nenclosing scope. The set of all such scopes visible to a code block\nis called the block\'s *environment*.\n\nIf a name is bound in a block, it is a local variable of that block.\nIf a name is bound at the module level, it is a global variable. (The\nvariables of the module code block are local and global.) If a\nvariable is used in a code block but not defined there, it is a *free\nvariable*.\n\nWhen a name is not found at all, a ``NameError`` exception is raised.\nIf the name refers to a local variable that has not been bound, a\n``UnboundLocalError`` exception is raised. ``UnboundLocalError`` is a\nsubclass of ``NameError``.\n\nThe following constructs bind names: formal parameters to functions,\n``import`` statements, class and function definitions (these bind the\nclass or function name in the defining block), and targets that are\nidentifiers if occurring in an assignment, ``for`` loop header, in the\nsecond position of an ``except`` clause header or after ``as`` in a\n``with`` statement. The ``import`` statement of the form ``from ...\nimport *`` binds all names defined in the imported module, except\nthose beginning with an underscore. This form may only be used at the\nmodule level.\n\nA target occurring in a ``del`` statement is also considered bound for\nthis purpose (though the actual semantics are to unbind the name). It\nis illegal to unbind a name that is referenced by an enclosing scope;\nthe compiler will report a ``SyntaxError``.\n\nEach assignment or import statement occurs within a block defined by a\nclass or function definition or at the module level (the top-level\ncode block).\n\nIf a name binding operation occurs anywhere within a code block, all\nuses of the name within the block are treated as references to the\ncurrent block. This can lead to errors when a name is used within a\nblock before it is bound. This rule is subtle. Python lacks\ndeclarations and allows name binding operations to occur anywhere\nwithin a code block. The local variables of a code block can be\ndetermined by scanning the entire text of the block for name binding\noperations.\n\nIf the global statement occurs within a block, all uses of the name\nspecified in the statement refer to the binding of that name in the\ntop-level namespace. Names are resolved in the top-level namespace by\nsearching the global namespace, i.e. the namespace of the module\ncontaining the code block, and the builtins namespace, the namespace\nof the module ``__builtin__``. The global namespace is searched\nfirst. If the name is not found there, the builtins namespace is\nsearched. The global statement must precede all uses of the name.\n\nThe builtins namespace associated with the execution of a code block\nis actually found by looking up the name ``__builtins__`` in its\nglobal namespace; this should be a dictionary or a module (in the\nlatter case the module\'s dictionary is used). By default, when in the\n``__main__`` module, ``__builtins__`` is the built-in module\n``__builtin__`` (note: no \'s\'); when in any other module,\n``__builtins__`` is an alias for the dictionary of the ``__builtin__``\nmodule itself. 
``__builtins__`` can be set to a user-created\ndictionary to create a weak form of restricted execution.\n\n**CPython implementation detail:** Users should not touch\n``__builtins__``; it is strictly an implementation detail. Users\nwanting to override values in the builtins namespace should ``import``\nthe ``__builtin__`` (no \'s\') module and modify its attributes\nappropriately.\n\nThe namespace for a module is automatically created the first time a\nmodule is imported. The main module for a script is always called\n``__main__``.\n\nThe global statement has the same scope as a name binding operation in\nthe same block. If the nearest enclosing scope for a free variable\ncontains a global statement, the free variable is treated as a global.\n\nA class definition is an executable statement that may use and define\nnames. These references follow the normal rules for name resolution.\nThe namespace of the class definition becomes the attribute dictionary\nof the class. Names defined at the class scope are not visible in\nmethods.\n\n\nInteraction with dynamic features\n---------------------------------\n\nThere are several cases where Python statements are illegal when used\nin conjunction with nested scopes that contain free variables.\n\nIf a variable is referenced in an enclosing scope, it is illegal to\ndelete the name. An error will be reported at compile time.\n\nIf the wild card form of import --- ``import *`` --- is used in a\nfunction and the function contains or is a nested block with free\nvariables, the compiler will raise a ``SyntaxError``.\n\nIf ``exec`` is used in a function and the function contains or is a\nnested block with free variables, the compiler will raise a\n``SyntaxError`` unless the exec explicitly specifies the local\nnamespace for the ``exec``. (In other words, ``exec obj`` would be\nillegal, but ``exec obj in ns`` would be legal.)\n\nThe ``eval()``, ``execfile()``, and ``input()`` functions and the\n``exec`` statement do not have access to the full environment for\nresolving names. Names may be resolved in the local and global\nnamespaces of the caller. Free variables are not resolved in the\nnearest enclosing namespace, but in the global namespace. [1] The\n``exec`` statement and the ``eval()`` and ``execfile()`` functions\nhave optional arguments to override the global and local namespace.\nIf only one namespace is specified, it is used for both.\n\n\nExceptions\n==========\n\nExceptions are a means of breaking out of the normal flow of control\nof a code block in order to handle errors or other exceptional\nconditions. An exception is *raised* at the point where the error is\ndetected; it may be *handled* by the surrounding code block or by any\ncode block that directly or indirectly invoked the code block where\nthe error occurred.\n\nThe Python interpreter raises an exception when it detects a run-time\nerror (such as division by zero). A Python program can also\nexplicitly raise an exception with the ``raise`` statement. Exception\nhandlers are specified with the ``try`` ... ``except`` statement. 
The\n``finally`` clause of such a statement can be used to specify cleanup\ncode which does not handle the exception, but is executed whether an\nexception occurred or not in the preceding code.\n\nPython uses the "termination" model of error handling: an exception\nhandler can find out what happened and continue execution at an outer\nlevel, but it cannot repair the cause of the error and retry the\nfailing operation (except by re-entering the offending piece of code\nfrom the top).\n\nWhen an exception is not handled at all, the interpreter terminates\nexecution of the program, or returns to its interactive main loop. In\neither case, it prints a stack backtrace, except when the exception is\n``SystemExit``.\n\nExceptions are identified by class instances. The ``except`` clause\nis selected depending on the class of the instance: it must reference\nthe class of the instance or a base class thereof. The instance can\nbe received by the handler and can carry additional information about\nthe exceptional condition.\n\nExceptions can also be identified by strings, in which case the\n``except`` clause is selected by object identity. An arbitrary value\ncan be raised along with the identifying string which can be passed to\nthe handler.\n\nNote: Messages to exceptions are not part of the Python API. Their\n contents may change from one version of Python to the next without\n warning and should not be relied on by code which will run under\n multiple versions of the interpreter.\n\nSee also the description of the ``try`` statement in section *The try\nstatement* and ``raise`` statement in section *The raise statement*.\n\n-[ Footnotes ]-\n\n[1] This limitation occurs because the code that is executed by these\n operations is not available at the time the module is compiled.\n', + 'execmodel': u'\nExecution model\n***************\n\n\nNaming and binding\n==================\n\n*Names* refer to objects. Names are introduced by name binding\noperations. Each occurrence of a name in the program text refers to\nthe *binding* of that name established in the innermost function block\ncontaining the use.\n\nA *block* is a piece of Python program text that is executed as a\nunit. The following are blocks: a module, a function body, and a class\ndefinition. Each command typed interactively is a block. A script\nfile (a file given as standard input to the interpreter or specified\non the interpreter command line the first argument) is a code block.\nA script command (a command specified on the interpreter command line\nwith the \'**-c**\' option) is a code block. The file read by the\nbuilt-in function ``execfile()`` is a code block. The string argument\npassed to the built-in function ``eval()`` and to the ``exec``\nstatement is a code block. The expression read and evaluated by the\nbuilt-in function ``input()`` is a code block.\n\nA code block is executed in an *execution frame*. A frame contains\nsome administrative information (used for debugging) and determines\nwhere and how execution continues after the code block\'s execution has\ncompleted.\n\nA *scope* defines the visibility of a name within a block. If a local\nvariable is defined in a block, its scope includes that block. If the\ndefinition occurs in a function block, the scope extends to any blocks\ncontained within the defining one, unless a contained block introduces\na different binding for the name. 
The scope of names defined in a\nclass block is limited to the class block; it does not extend to the\ncode blocks of methods -- this includes generator expressions since\nthey are implemented using a function scope. This means that the\nfollowing will fail:\n\n class A:\n a = 42\n b = list(a + i for i in range(10))\n\nWhen a name is used in a code block, it is resolved using the nearest\nenclosing scope. The set of all such scopes visible to a code block\nis called the block\'s *environment*.\n\nIf a name is bound in a block, it is a local variable of that block.\nIf a name is bound at the module level, it is a global variable. (The\nvariables of the module code block are local and global.) If a\nvariable is used in a code block but not defined there, it is a *free\nvariable*.\n\nWhen a name is not found at all, a ``NameError`` exception is raised.\nIf the name refers to a local variable that has not been bound, a\n``UnboundLocalError`` exception is raised. ``UnboundLocalError`` is a\nsubclass of ``NameError``.\n\nThe following constructs bind names: formal parameters to functions,\n``import`` statements, class and function definitions (these bind the\nclass or function name in the defining block), and targets that are\nidentifiers if occurring in an assignment, ``for`` loop header, in the\nsecond position of an ``except`` clause header or after ``as`` in a\n``with`` statement. The ``import`` statement of the form ``from ...\nimport *`` binds all names defined in the imported module, except\nthose beginning with an underscore. This form may only be used at the\nmodule level.\n\nA target occurring in a ``del`` statement is also considered bound for\nthis purpose (though the actual semantics are to unbind the name). It\nis illegal to unbind a name that is referenced by an enclosing scope;\nthe compiler will report a ``SyntaxError``.\n\nEach assignment or import statement occurs within a block defined by a\nclass or function definition or at the module level (the top-level\ncode block).\n\nIf a name binding operation occurs anywhere within a code block, all\nuses of the name within the block are treated as references to the\ncurrent block. This can lead to errors when a name is used within a\nblock before it is bound. This rule is subtle. Python lacks\ndeclarations and allows name binding operations to occur anywhere\nwithin a code block. The local variables of a code block can be\ndetermined by scanning the entire text of the block for name binding\noperations.\n\nIf the global statement occurs within a block, all uses of the name\nspecified in the statement refer to the binding of that name in the\ntop-level namespace. Names are resolved in the top-level namespace by\nsearching the global namespace, i.e. the namespace of the module\ncontaining the code block, and the builtins namespace, the namespace\nof the module ``__builtin__``. The global namespace is searched\nfirst. If the name is not found there, the builtins namespace is\nsearched. The global statement must precede all uses of the name.\n\nThe builtins namespace associated with the execution of a code block\nis actually found by looking up the name ``__builtins__`` in its\nglobal namespace; this should be a dictionary or a module (in the\nlatter case the module\'s dictionary is used). By default, when in the\n``__main__`` module, ``__builtins__`` is the built-in module\n``__builtin__`` (note: no \'s\'); when in any other module,\n``__builtins__`` is an alias for the dictionary of the ``__builtin__``\nmodule itself. 
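The rule described earlier that a binding anywhere in a code block makes the name local throughout that block is the usual source of ``UnboundLocalError``; a hedged sketch (function names are invented):

    counter = 0

    def bump():
        # 'counter' is assigned later in this block, so it is local to
        # the whole block; calling bump() raises UnboundLocalError here.
        print counter
        counter = 1

    def bump_global():
        global counter      # refer to the module-level binding instead
        counter = counter + 1

    bump_global()
    assert counter == 1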
``__builtins__`` can be set to a user-created\ndictionary to create a weak form of restricted execution.\n\n**CPython implementation detail:** Users should not touch\n``__builtins__``; it is strictly an implementation detail. Users\nwanting to override values in the builtins namespace should ``import``\nthe ``__builtin__`` (no \'s\') module and modify its attributes\nappropriately.\n\nThe namespace for a module is automatically created the first time a\nmodule is imported. The main module for a script is always called\n``__main__``.\n\nThe ``global`` statement has the same scope as a name binding\noperation in the same block. If the nearest enclosing scope for a\nfree variable contains a global statement, the free variable is\ntreated as a global.\n\nA class definition is an executable statement that may use and define\nnames. These references follow the normal rules for name resolution.\nThe namespace of the class definition becomes the attribute dictionary\nof the class. Names defined at the class scope are not visible in\nmethods.\n\n\nInteraction with dynamic features\n---------------------------------\n\nThere are several cases where Python statements are illegal when used\nin conjunction with nested scopes that contain free variables.\n\nIf a variable is referenced in an enclosing scope, it is illegal to\ndelete the name. An error will be reported at compile time.\n\nIf the wild card form of import --- ``import *`` --- is used in a\nfunction and the function contains or is a nested block with free\nvariables, the compiler will raise a ``SyntaxError``.\n\nIf ``exec`` is used in a function and the function contains or is a\nnested block with free variables, the compiler will raise a\n``SyntaxError`` unless the exec explicitly specifies the local\nnamespace for the ``exec``. (In other words, ``exec obj`` would be\nillegal, but ``exec obj in ns`` would be legal.)\n\nThe ``eval()``, ``execfile()``, and ``input()`` functions and the\n``exec`` statement do not have access to the full environment for\nresolving names. Names may be resolved in the local and global\nnamespaces of the caller. Free variables are not resolved in the\nnearest enclosing namespace, but in the global namespace. [1] The\n``exec`` statement and the ``eval()`` and ``execfile()`` functions\nhave optional arguments to override the global and local namespace.\nIf only one namespace is specified, it is used for both.\n\n\nExceptions\n==========\n\nExceptions are a means of breaking out of the normal flow of control\nof a code block in order to handle errors or other exceptional\nconditions. An exception is *raised* at the point where the error is\ndetected; it may be *handled* by the surrounding code block or by any\ncode block that directly or indirectly invoked the code block where\nthe error occurred.\n\nThe Python interpreter raises an exception when it detects a run-time\nerror (such as division by zero). A Python program can also\nexplicitly raise an exception with the ``raise`` statement. Exception\nhandlers are specified with the ``try`` ... ``except`` statement. 
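As a sketch of the earlier rule that names defined at class scope are not visible in methods (identifiers invented for illustration):

    x = 'module'

    class Demo(object):
        x = 'class'             # lives in the class namespace only

        def which(self):
            # Resolved through the normal scoping rules, so this finds
            # the module-level binding, not the class attribute.
            return x

    assert Demo().which() == 'module'
    assert Demo.x == 'class'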
The\n``finally`` clause of such a statement can be used to specify cleanup\ncode which does not handle the exception, but is executed whether an\nexception occurred or not in the preceding code.\n\nPython uses the "termination" model of error handling: an exception\nhandler can find out what happened and continue execution at an outer\nlevel, but it cannot repair the cause of the error and retry the\nfailing operation (except by re-entering the offending piece of code\nfrom the top).\n\nWhen an exception is not handled at all, the interpreter terminates\nexecution of the program, or returns to its interactive main loop. In\neither case, it prints a stack backtrace, except when the exception is\n``SystemExit``.\n\nExceptions are identified by class instances. The ``except`` clause\nis selected depending on the class of the instance: it must reference\nthe class of the instance or a base class thereof. The instance can\nbe received by the handler and can carry additional information about\nthe exceptional condition.\n\nExceptions can also be identified by strings, in which case the\n``except`` clause is selected by object identity. An arbitrary value\ncan be raised along with the identifying string which can be passed to\nthe handler.\n\nNote: Messages to exceptions are not part of the Python API. Their\n contents may change from one version of Python to the next without\n warning and should not be relied on by code which will run under\n multiple versions of the interpreter.\n\nSee also the description of the ``try`` statement in section *The try\nstatement* and ``raise`` statement in section *The raise statement*.\n\n-[ Footnotes ]-\n\n[1] This limitation occurs because the code that is executed by these\n operations is not available at the time the module is compiled.\n', 'exprlists': u'\nExpression lists\n****************\n\n expression_list ::= expression ( "," expression )* [","]\n\nAn expression list containing at least one comma yields a tuple. The\nlength of the tuple is the number of expressions in the list. The\nexpressions are evaluated from left to right.\n\nThe trailing comma is required only to create a single tuple (a.k.a. a\n*singleton*); it is optional in all other cases. A single expression\nwithout a trailing comma doesn\'t create a tuple, but rather yields the\nvalue of that expression. (To create an empty tuple, use an empty pair\nof parentheses: ``()``.)\n', 'floating': u'\nFloating point literals\n***********************\n\nFloating point literals are described by the following lexical\ndefinitions:\n\n floatnumber ::= pointfloat | exponentfloat\n pointfloat ::= [intpart] fraction | intpart "."\n exponentfloat ::= (intpart | pointfloat) exponent\n intpart ::= digit+\n fraction ::= "." digit+\n exponent ::= ("e" | "E") ["+" | "-"] digit+\n\nNote that the integer and exponent parts of floating point numbers can\nlook like octal integers, but are interpreted using radix 10. For\nexample, ``077e010`` is legal, and denotes the same number as\n``77e10``. The allowed range of floating point literals is\nimplementation-dependent. Some examples of floating point literals:\n\n 3.14 10. 
.001 1e100 3.14e-10 0e0\n\nNote that numeric literals do not include a sign; a phrase like ``-1``\nis actually an expression composed of the unary operator ``-`` and the\nliteral ``1``.\n', 'for': u'\nThe ``for`` statement\n*********************\n\nThe ``for`` statement is used to iterate over the elements of a\nsequence (such as a string, tuple or list) or other iterable object:\n\n for_stmt ::= "for" target_list "in" expression_list ":" suite\n ["else" ":" suite]\n\nThe expression list is evaluated once; it should yield an iterable\nobject. An iterator is created for the result of the\n``expression_list``. The suite is then executed once for each item\nprovided by the iterator, in the order of ascending indices. Each\nitem in turn is assigned to the target list using the standard rules\nfor assignments, and then the suite is executed. When the items are\nexhausted (which is immediately when the sequence is empty), the suite\nin the ``else`` clause, if present, is executed, and the loop\nterminates.\n\nA ``break`` statement executed in the first suite terminates the loop\nwithout executing the ``else`` clause\'s suite. A ``continue``\nstatement executed in the first suite skips the rest of the suite and\ncontinues with the next item, or with the ``else`` clause if there was\nno next item.\n\nThe suite may assign to the variable(s) in the target list; this does\nnot affect the next item assigned to it.\n\nThe target list is not deleted when the loop is finished, but if the\nsequence is empty, it will not have been assigned to at all by the\nloop. Hint: the built-in function ``range()`` returns a sequence of\nintegers suitable to emulate the effect of Pascal\'s ``for i := a to b\ndo``; e.g., ``range(3)`` returns the list ``[0, 1, 2]``.\n\nNote: There is a subtlety when the sequence is being modified by the loop\n (this can only occur for mutable sequences, i.e. lists). An internal\n counter is used to keep track of which item is used next, and this\n is incremented on each iteration. When this counter has reached the\n length of the sequence the loop terminates. This means that if the\n suite deletes the current (or a previous) item from the sequence,\n the next item will be skipped (since it gets the index of the\n current item which has already been treated). Likewise, if the\n suite inserts an item in the sequence before the current item, the\n current item will be treated again the next time through the loop.\n This can lead to nasty bugs that can be avoided by making a\n temporary copy using a slice of the whole sequence, e.g.,\n\n for x in a[:]:\n if x < 0: a.remove(x)\n', - 'formatstrings': u'\nFormat String Syntax\n********************\n\nThe ``str.format()`` method and the ``Formatter`` class share the same\nsyntax for format strings (although in the case of ``Formatter``,\nsubclasses can define their own format string syntax).\n\nFormat strings contain "replacement fields" surrounded by curly braces\n``{}``. Anything that is not contained in braces is considered literal\ntext, which is copied unchanged to the output. If you need to include\na brace character in the literal text, it can be escaped by doubling:\n``{{`` and ``}}``.\n\nThe grammar for a replacement field is as follows:\n\n replacement_field ::= "{" [field_name] ["!" conversion] [":" format_spec] "}"\n field_name ::= arg_name ("." 
attribute_name | "[" element_index "]")*\n arg_name ::= [identifier | integer]\n attribute_name ::= identifier\n element_index ::= integer | index_string\n index_string ::= +\n conversion ::= "r" | "s"\n format_spec ::= \n\nIn less formal terms, the replacement field can start with a\n*field_name* that specifies the object whose value is to be formatted\nand inserted into the output instead of the replacement field. The\n*field_name* is optionally followed by a *conversion* field, which is\npreceded by an exclamation point ``\'!\'``, and a *format_spec*, which\nis preceded by a colon ``\':\'``. These specify a non-default format\nfor the replacement value.\n\nSee also the *Format Specification Mini-Language* section.\n\nThe *field_name* itself begins with an *arg_name* that is either\neither a number or a keyword. If it\'s a number, it refers to a\npositional argument, and if it\'s a keyword, it refers to a named\nkeyword argument. If the numerical arg_names in a format string are\n0, 1, 2, ... in sequence, they can all be omitted (not just some) and\nthe numbers 0, 1, 2, ... will be automatically inserted in that order.\nThe *arg_name* can be followed by any number of index or attribute\nexpressions. An expression of the form ``\'.name\'`` selects the named\nattribute using ``getattr()``, while an expression of the form\n``\'[index]\'`` does an index lookup using ``__getitem__()``.\n\nChanged in version 2.7: The positional argument specifiers can be\nomitted, so ``\'{} {}\'`` is equivalent to ``\'{0} {1}\'``.\n\nSome simple format string examples:\n\n "First, thou shalt count to {0}" # References first positional argument\n "Bring me a {}" # Implicitly references the first positional argument\n "From {} to {}" # Same as "From {0} to {1}"\n "My quest is {name}" # References keyword argument \'name\'\n "Weight in tons {0.weight}" # \'weight\' attribute of first positional arg\n "Units destroyed: {players[0]}" # First element of keyword argument \'players\'.\n\nThe *conversion* field causes a type coercion before formatting.\nNormally, the job of formatting a value is done by the\n``__format__()`` method of the value itself. However, in some cases\nit is desirable to force a type to be formatted as a string,\noverriding its own definition of formatting. By converting the value\nto a string before calling ``__format__()``, the normal formatting\nlogic is bypassed.\n\nTwo conversion flags are currently supported: ``\'!s\'`` which calls\n``str()`` on the value, and ``\'!r\'`` which calls ``repr()``.\n\nSome examples:\n\n "Harold\'s a clever {0!s}" # Calls str() on the argument first\n "Bring out the holy {name!r}" # Calls repr() on the argument first\n\nThe *format_spec* field contains a specification of how the value\nshould be presented, including such details as field width, alignment,\npadding, decimal precision and so on. Each value type can define its\nown "formatting mini-language" or interpretation of the *format_spec*.\n\nMost built-in types support a common formatting mini-language, which\nis described in the next section.\n\nA *format_spec* field can also include nested replacement fields\nwithin it. These nested replacement fields can contain only a field\nname; conversion flags and format specifications are not allowed. The\nreplacement fields within the format_spec are substituted before the\n*format_spec* string is interpreted. 
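The replacement-field syntax described above can be exercised with a few hedged one-liners (the argument values are arbitrary):

    'From {} to {}'.format('here', 'there')        # 'From here to there'
    'My quest is {name}'.format(name='grail')      # 'My quest is grail'
    'He said {0!r}.'.format('spam')                # "He said 'spam'."

    # A nested replacement field lets the width be supplied at run time:
    '{0:{width}}'.format('pi', width=6)            # 'pi    '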
This allows the formatting of a\nvalue to be dynamically specified.\n\nSee the *Format examples* section for some examples.\n\n\nFormat Specification Mini-Language\n==================================\n\n"Format specifications" are used within replacement fields contained\nwithin a format string to define how individual values are presented\n(see *Format String Syntax*). They can also be passed directly to the\nbuilt-in ``format()`` function. Each formattable type may define how\nthe format specification is to be interpreted.\n\nMost built-in types implement the following options for format\nspecifications, although some of the formatting options are only\nsupported by the numeric types.\n\nA general convention is that an empty format string (``""``) produces\nthe same result as if you had called ``str()`` on the value. A non-\nempty format string typically modifies the result.\n\nThe general form of a *standard format specifier* is:\n\n format_spec ::= [[fill]align][sign][#][0][width][,][.precision][type]\n fill ::= \n align ::= "<" | ">" | "=" | "^"\n sign ::= "+" | "-" | " "\n width ::= integer\n precision ::= integer\n type ::= "b" | "c" | "d" | "e" | "E" | "f" | "F" | "g" | "G" | "n" | "o" | "s" | "x" | "X" | "%"\n\nThe *fill* character can be any character other than \'}\' (which\nsignifies the end of the field). The presence of a fill character is\nsignaled by the *next* character, which must be one of the alignment\noptions. If the second character of *format_spec* is not a valid\nalignment option, then it is assumed that both the fill character and\nthe alignment option are absent.\n\nThe meaning of the various alignment options is as follows:\n\n +-----------+------------------------------------------------------------+\n | Option | Meaning |\n +===========+============================================================+\n | ``\'<\'`` | Forces the field to be left-aligned within the available |\n | | space (this is the default). |\n +-----------+------------------------------------------------------------+\n | ``\'>\'`` | Forces the field to be right-aligned within the available |\n | | space. |\n +-----------+------------------------------------------------------------+\n | ``\'=\'`` | Forces the padding to be placed after the sign (if any) |\n | | but before the digits. This is used for printing fields |\n | | in the form \'+000000120\'. This alignment option is only |\n | | valid for numeric types. |\n +-----------+------------------------------------------------------------+\n | ``\'^\'`` | Forces the field to be centered within the available |\n | | space. |\n +-----------+------------------------------------------------------------+\n\nNote that unless a minimum field width is defined, the field width\nwill always be the same size as the data to fill it, so that the\nalignment option has no meaning in this case.\n\nThe *sign* option is only valid for number types, and can be one of\nthe following:\n\n +-----------+------------------------------------------------------------+\n | Option | Meaning |\n +===========+============================================================+\n | ``\'+\'`` | indicates that a sign should be used for both positive as |\n | | well as negative numbers. |\n +-----------+------------------------------------------------------------+\n | ``\'-\'`` | indicates that a sign should be used only for negative |\n | | numbers (this is the default behavior). 
|\n +-----------+------------------------------------------------------------+\n | space | indicates that a leading space should be used on positive |\n | | numbers, and a minus sign on negative numbers. |\n +-----------+------------------------------------------------------------+\n\nThe ``\'#\'`` option is only valid for integers, and only for binary,\noctal, or hexadecimal output. If present, it specifies that the\noutput will be prefixed by ``\'0b\'``, ``\'0o\'``, or ``\'0x\'``,\nrespectively.\n\nThe ``\',\'`` option signals the use of a comma for a thousands\nseparator. For a locale aware separator, use the ``\'n\'`` integer\npresentation type instead.\n\nChanged in version 2.7: Added the ``\',\'`` option (see also **PEP\n378**).\n\n*width* is a decimal integer defining the minimum field width. If not\nspecified, then the field width will be determined by the content.\n\nIf the *width* field is preceded by a zero (``\'0\'``) character, this\nenables zero-padding. This is equivalent to an *alignment* type of\n``\'=\'`` and a *fill* character of ``\'0\'``.\n\nThe *precision* is a decimal number indicating how many digits should\nbe displayed after the decimal point for a floating point value\nformatted with ``\'f\'`` and ``\'F\'``, or before and after the decimal\npoint for a floating point value formatted with ``\'g\'`` or ``\'G\'``.\nFor non-number types the field indicates the maximum field size - in\nother words, how many characters will be used from the field content.\nThe *precision* is not allowed for integer values.\n\nFinally, the *type* determines how the data should be presented.\n\nThe available string presentation types are:\n\n +-----------+------------------------------------------------------------+\n | Type | Meaning |\n +===========+============================================================+\n | ``\'s\'`` | String format. This is the default type for strings and |\n | | may be omitted. |\n +-----------+------------------------------------------------------------+\n | None | The same as ``\'s\'``. |\n +-----------+------------------------------------------------------------+\n\nThe available integer presentation types are:\n\n +-----------+------------------------------------------------------------+\n | Type | Meaning |\n +===========+============================================================+\n | ``\'b\'`` | Binary format. Outputs the number in base 2. |\n +-----------+------------------------------------------------------------+\n | ``\'c\'`` | Character. Converts the integer to the corresponding |\n | | unicode character before printing. |\n +-----------+------------------------------------------------------------+\n | ``\'d\'`` | Decimal Integer. Outputs the number in base 10. |\n +-----------+------------------------------------------------------------+\n | ``\'o\'`` | Octal format. Outputs the number in base 8. |\n +-----------+------------------------------------------------------------+\n | ``\'x\'`` | Hex format. Outputs the number in base 16, using lower- |\n | | case letters for the digits above 9. |\n +-----------+------------------------------------------------------------+\n | ``\'X\'`` | Hex format. Outputs the number in base 16, using upper- |\n | | case letters for the digits above 9. |\n +-----------+------------------------------------------------------------+\n | ``\'n\'`` | Number. This is the same as ``\'d\'``, except that it uses |\n | | the current locale setting to insert the appropriate |\n | | number separator characters. 
|\n +-----------+------------------------------------------------------------+\n | None | The same as ``\'d\'``. |\n +-----------+------------------------------------------------------------+\n\nIn addition to the above presentation types, integers can be formatted\nwith the floating point presentation types listed below (except\n``\'n\'`` and None). When doing so, ``float()`` is used to convert the\ninteger to a floating point number before formatting.\n\nThe available presentation types for floating point and decimal values\nare:\n\n +-----------+------------------------------------------------------------+\n | Type | Meaning |\n +===========+============================================================+\n | ``\'e\'`` | Exponent notation. Prints the number in scientific |\n | | notation using the letter \'e\' to indicate the exponent. |\n +-----------+------------------------------------------------------------+\n | ``\'E\'`` | Exponent notation. Same as ``\'e\'`` except it uses an upper |\n | | case \'E\' as the separator character. |\n +-----------+------------------------------------------------------------+\n | ``\'f\'`` | Fixed point. Displays the number as a fixed-point number. |\n +-----------+------------------------------------------------------------+\n | ``\'F\'`` | Fixed point. Same as ``\'f\'``. |\n +-----------+------------------------------------------------------------+\n | ``\'g\'`` | General format. For a given precision ``p >= 1``, this |\n | | rounds the number to ``p`` significant digits and then |\n | | formats the result in either fixed-point format or in |\n | | scientific notation, depending on its magnitude. The |\n | | precise rules are as follows: suppose that the result |\n | | formatted with presentation type ``\'e\'`` and precision |\n | | ``p-1`` would have exponent ``exp``. Then if ``-4 <= exp |\n | | < p``, the number is formatted with presentation type |\n | | ``\'f\'`` and precision ``p-1-exp``. Otherwise, the number |\n | | is formatted with presentation type ``\'e\'`` and precision |\n | | ``p-1``. In both cases insignificant trailing zeros are |\n | | removed from the significand, and the decimal point is |\n | | also removed if there are no remaining digits following |\n | | it. Postive and negative infinity, positive and negative |\n | | zero, and nans, are formatted as ``inf``, ``-inf``, ``0``, |\n | | ``-0`` and ``nan`` respectively, regardless of the |\n | | precision. A precision of ``0`` is treated as equivalent |\n | | to a precision of ``1``. |\n +-----------+------------------------------------------------------------+\n | ``\'G\'`` | General format. Same as ``\'g\'`` except switches to ``\'E\'`` |\n | | if the number gets too large. The representations of |\n | | infinity and NaN are uppercased, too. |\n +-----------+------------------------------------------------------------+\n | ``\'n\'`` | Number. This is the same as ``\'g\'``, except that it uses |\n | | the current locale setting to insert the appropriate |\n | | number separator characters. |\n +-----------+------------------------------------------------------------+\n | ``\'%\'`` | Percentage. Multiplies the number by 100 and displays in |\n | | fixed (``\'f\'``) format, followed by a percent sign. |\n +-----------+------------------------------------------------------------+\n | None | The same as ``\'g\'``. 
|\n +-----------+------------------------------------------------------------+\n\n\nFormat examples\n===============\n\nThis section contains examples of the new format syntax and comparison\nwith the old ``%``-formatting.\n\nIn most of the cases the syntax is similar to the old\n``%``-formatting, with the addition of the ``{}`` and with ``:`` used\ninstead of ``%``. For example, ``\'%03.2f\'`` can be translated to\n``\'{:03.2f}\'``.\n\nThe new format syntax also supports new and different options, shown\nin the follow examples.\n\nAccessing arguments by position:\n\n >>> \'{0}, {1}, {2}\'.format(\'a\', \'b\', \'c\')\n \'a, b, c\'\n >>> \'{}, {}, {}\'.format(\'a\', \'b\', \'c\') # 2.7+ only\n \'a, b, c\'\n >>> \'{2}, {1}, {0}\'.format(\'a\', \'b\', \'c\')\n \'c, b, a\'\n >>> \'{2}, {1}, {0}\'.format(*\'abc\') # unpacking argument sequence\n \'c, b, a\'\n >>> \'{0}{1}{0}\'.format(\'abra\', \'cad\') # arguments\' indices can be repeated\n \'abracadabra\'\n\nAccessing arguments by name:\n\n >>> \'Coordinates: {latitude}, {longitude}\'.format(latitude=\'37.24N\', longitude=\'-115.81W\')\n \'Coordinates: 37.24N, -115.81W\'\n >>> coord = {\'latitude\': \'37.24N\', \'longitude\': \'-115.81W\'}\n >>> \'Coordinates: {latitude}, {longitude}\'.format(**coord)\n \'Coordinates: 37.24N, -115.81W\'\n\nAccessing arguments\' attributes:\n\n >>> c = 3-5j\n >>> (\'The complex number {0} is formed from the real part {0.real} \'\n ... \'and the imaginary part {0.imag}.\').format(c)\n \'The complex number (3-5j) is formed from the real part 3.0 and the imaginary part -5.0.\'\n >>> class Point(object):\n ... def __init__(self, x, y):\n ... self.x, self.y = x, y\n ... def __str__(self):\n ... return \'Point({self.x}, {self.y})\'.format(self=self)\n ...\n >>> str(Point(4, 2))\n \'Point(4, 2)\'\n\nAccessing arguments\' items:\n\n >>> coord = (3, 5)\n >>> \'X: {0[0]}; Y: {0[1]}\'.format(coord)\n \'X: 3; Y: 5\'\n\nReplacing ``%s`` and ``%r``:\n\n >>> "repr() shows quotes: {!r}; str() doesn\'t: {!s}".format(\'test1\', \'test2\')\n "repr() shows quotes: \'test1\'; str() doesn\'t: test2"\n\nAligning the text and specifying a width:\n\n >>> \'{:<30}\'.format(\'left aligned\')\n \'left aligned \'\n >>> \'{:>30}\'.format(\'right aligned\')\n \' right aligned\'\n >>> \'{:^30}\'.format(\'centered\')\n \' centered \'\n >>> \'{:*^30}\'.format(\'centered\') # use \'*\' as a fill char\n \'***********centered***********\'\n\nReplacing ``%+f``, ``%-f``, and ``% f`` and specifying a sign:\n\n >>> \'{:+f}; {:+f}\'.format(3.14, -3.14) # show it always\n \'+3.140000; -3.140000\'\n >>> \'{: f}; {: f}\'.format(3.14, -3.14) # show a space for positive numbers\n \' 3.140000; -3.140000\'\n >>> \'{:-f}; {:-f}\'.format(3.14, -3.14) # show only the minus -- same as \'{:f}; {:f}\'\n \'3.140000; -3.140000\'\n\nReplacing ``%x`` and ``%o`` and converting the value to different\nbases:\n\n >>> # format also supports binary numbers\n >>> "int: {0:d}; hex: {0:x}; oct: {0:o}; bin: {0:b}".format(42)\n \'int: 42; hex: 2a; oct: 52; bin: 101010\'\n >>> # with 0x, 0o, or 0b as prefix:\n >>> "int: {0:d}; hex: {0:#x}; oct: {0:#o}; bin: {0:#b}".format(42)\n \'int: 42; hex: 0x2a; oct: 0o52; bin: 0b101010\'\n\nUsing the comma as a thousands separator:\n\n >>> \'{:,}\'.format(1234567890)\n \'1,234,567,890\'\n\nExpressing a percentage:\n\n >>> points = 19.5\n >>> total = 22\n >>> \'Correct answers: {:.2%}.\'.format(points/total)\n \'Correct answers: 88.64%\'\n\nUsing type-specific formatting:\n\n >>> import datetime\n >>> d = datetime.datetime(2010, 7, 4, 12, 
15, 58)\n >>> \'{:%Y-%m-%d %H:%M:%S}\'.format(d)\n \'2010-07-04 12:15:58\'\n\nNesting arguments and more complex examples:\n\n >>> for align, text in zip(\'<^>\', [\'left\', \'center\', \'right\']):\n ... \'{0:{align}{fill}16}\'.format(text, fill=align, align=align)\n ...\n \'left<<<<<<<<<<<<\'\n \'^^^^^center^^^^^\'\n \'>>>>>>>>>>>right\'\n >>>\n >>> octets = [192, 168, 0, 1]\n >>> \'{:02X}{:02X}{:02X}{:02X}\'.format(*octets)\n \'C0A80001\'\n >>> int(_, 16)\n 3232235521\n >>>\n >>> width = 5\n >>> for num in range(5,12):\n ... for base in \'dXob\':\n ... print \'{0:{width}{base}}\'.format(num, base=base, width=width),\n ... print\n ...\n 5 5 5 101\n 6 6 6 110\n 7 7 7 111\n 8 8 10 1000\n 9 9 11 1001\n 10 A 12 1010\n 11 B 13 1011\n', + 'formatstrings': u'\nFormat String Syntax\n********************\n\nThe ``str.format()`` method and the ``Formatter`` class share the same\nsyntax for format strings (although in the case of ``Formatter``,\nsubclasses can define their own format string syntax).\n\nFormat strings contain "replacement fields" surrounded by curly braces\n``{}``. Anything that is not contained in braces is considered literal\ntext, which is copied unchanged to the output. If you need to include\na brace character in the literal text, it can be escaped by doubling:\n``{{`` and ``}}``.\n\nThe grammar for a replacement field is as follows:\n\n replacement_field ::= "{" [field_name] ["!" conversion] [":" format_spec] "}"\n field_name ::= arg_name ("." attribute_name | "[" element_index "]")*\n arg_name ::= [identifier | integer]\n attribute_name ::= identifier\n element_index ::= integer | index_string\n index_string ::= +\n conversion ::= "r" | "s"\n format_spec ::= \n\nIn less formal terms, the replacement field can start with a\n*field_name* that specifies the object whose value is to be formatted\nand inserted into the output instead of the replacement field. The\n*field_name* is optionally followed by a *conversion* field, which is\npreceded by an exclamation point ``\'!\'``, and a *format_spec*, which\nis preceded by a colon ``\':\'``. These specify a non-default format\nfor the replacement value.\n\nSee also the *Format Specification Mini-Language* section.\n\nThe *field_name* itself begins with an *arg_name* that is either\neither a number or a keyword. If it\'s a number, it refers to a\npositional argument, and if it\'s a keyword, it refers to a named\nkeyword argument. If the numerical arg_names in a format string are\n0, 1, 2, ... in sequence, they can all be omitted (not just some) and\nthe numbers 0, 1, 2, ... will be automatically inserted in that order.\nThe *arg_name* can be followed by any number of index or attribute\nexpressions. 
An expression of the form ``\'.name\'`` selects the named\nattribute using ``getattr()``, while an expression of the form\n``\'[index]\'`` does an index lookup using ``__getitem__()``.\n\nChanged in version 2.7: The positional argument specifiers can be\nomitted, so ``\'{} {}\'`` is equivalent to ``\'{0} {1}\'``.\n\nSome simple format string examples:\n\n "First, thou shalt count to {0}" # References first positional argument\n "Bring me a {}" # Implicitly references the first positional argument\n "From {} to {}" # Same as "From {0} to {1}"\n "My quest is {name}" # References keyword argument \'name\'\n "Weight in tons {0.weight}" # \'weight\' attribute of first positional arg\n "Units destroyed: {players[0]}" # First element of keyword argument \'players\'.\n\nThe *conversion* field causes a type coercion before formatting.\nNormally, the job of formatting a value is done by the\n``__format__()`` method of the value itself. However, in some cases\nit is desirable to force a type to be formatted as a string,\noverriding its own definition of formatting. By converting the value\nto a string before calling ``__format__()``, the normal formatting\nlogic is bypassed.\n\nTwo conversion flags are currently supported: ``\'!s\'`` which calls\n``str()`` on the value, and ``\'!r\'`` which calls ``repr()``.\n\nSome examples:\n\n "Harold\'s a clever {0!s}" # Calls str() on the argument first\n "Bring out the holy {name!r}" # Calls repr() on the argument first\n\nThe *format_spec* field contains a specification of how the value\nshould be presented, including such details as field width, alignment,\npadding, decimal precision and so on. Each value type can define its\nown "formatting mini-language" or interpretation of the *format_spec*.\n\nMost built-in types support a common formatting mini-language, which\nis described in the next section.\n\nA *format_spec* field can also include nested replacement fields\nwithin it. These nested replacement fields can contain only a field\nname; conversion flags and format specifications are not allowed. The\nreplacement fields within the format_spec are substituted before the\n*format_spec* string is interpreted. This allows the formatting of a\nvalue to be dynamically specified.\n\nSee the *Format examples* section for some examples.\n\n\nFormat Specification Mini-Language\n==================================\n\n"Format specifications" are used within replacement fields contained\nwithin a format string to define how individual values are presented\n(see *Format String Syntax*). They can also be passed directly to the\nbuilt-in ``format()`` function. Each formattable type may define how\nthe format specification is to be interpreted.\n\nMost built-in types implement the following options for format\nspecifications, although some of the formatting options are only\nsupported by the numeric types.\n\nA general convention is that an empty format string (``""``) produces\nthe same result as if you had called ``str()`` on the value. A non-\nempty format string typically modifies the result.\n\nThe general form of a *standard format specifier* is:\n\n format_spec ::= [[fill]align][sign][#][0][width][,][.precision][type]\n fill ::= \n align ::= "<" | ">" | "=" | "^"\n sign ::= "+" | "-" | " "\n width ::= integer\n precision ::= integer\n type ::= "b" | "c" | "d" | "e" | "E" | "f" | "F" | "g" | "G" | "n" | "o" | "s" | "x" | "X" | "%"\n\nThe *fill* character can be any character other than \'{\' or \'}\'. 
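For instance (an illustrative sketch, output assuming CPython 2.7), '*' serves here as the fill character together with the '^' and '>' alignment options:

   >>> '{:*^12}'.format('pypy')
   '****pypy****'
   >>> '{:*>12}'.format('pypy')
   '********pypy'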
The\npresence of a fill character is signaled by the character following\nit, which must be one of the alignment options. If the second\ncharacter of *format_spec* is not a valid alignment option, then it is\nassumed that both the fill character and the alignment option are\nabsent.\n\nThe meaning of the various alignment options is as follows:\n\n +-----------+------------------------------------------------------------+\n | Option | Meaning |\n +===========+============================================================+\n | ``\'<\'`` | Forces the field to be left-aligned within the available |\n | | space (this is the default for most objects). |\n +-----------+------------------------------------------------------------+\n | ``\'>\'`` | Forces the field to be right-aligned within the available |\n | | space (this is the default for numbers). |\n +-----------+------------------------------------------------------------+\n | ``\'=\'`` | Forces the padding to be placed after the sign (if any) |\n | | but before the digits. This is used for printing fields |\n | | in the form \'+000000120\'. This alignment option is only |\n | | valid for numeric types. |\n +-----------+------------------------------------------------------------+\n | ``\'^\'`` | Forces the field to be centered within the available |\n | | space. |\n +-----------+------------------------------------------------------------+\n\nNote that unless a minimum field width is defined, the field width\nwill always be the same size as the data to fill it, so that the\nalignment option has no meaning in this case.\n\nThe *sign* option is only valid for number types, and can be one of\nthe following:\n\n +-----------+------------------------------------------------------------+\n | Option | Meaning |\n +===========+============================================================+\n | ``\'+\'`` | indicates that a sign should be used for both positive as |\n | | well as negative numbers. |\n +-----------+------------------------------------------------------------+\n | ``\'-\'`` | indicates that a sign should be used only for negative |\n | | numbers (this is the default behavior). |\n +-----------+------------------------------------------------------------+\n | space | indicates that a leading space should be used on positive |\n | | numbers, and a minus sign on negative numbers. |\n +-----------+------------------------------------------------------------+\n\nThe ``\'#\'`` option is only valid for integers, and only for binary,\noctal, or hexadecimal output. If present, it specifies that the\noutput will be prefixed by ``\'0b\'``, ``\'0o\'``, or ``\'0x\'``,\nrespectively.\n\nThe ``\',\'`` option signals the use of a comma for a thousands\nseparator. For a locale aware separator, use the ``\'n\'`` integer\npresentation type instead.\n\nChanged in version 2.7: Added the ``\',\'`` option (see also **PEP\n378**).\n\n*width* is a decimal integer defining the minimum field width. If not\nspecified, then the field width will be determined by the content.\n\nIf the *width* field is preceded by a zero (``\'0\'``) character, this\nenables zero-padding. 
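A couple of illustrative sketches of the sign, '#' and zero-padding options just described (output assumes CPython 2.7):

   >>> '{:#x}'.format(255)
   '0xff'
   >>> '{:+08d}'.format(42)
   '+0000042'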
This is equivalent to an *alignment* type of\n``\'=\'`` and a *fill* character of ``\'0\'``.\n\nThe *precision* is a decimal number indicating how many digits should\nbe displayed after the decimal point for a floating point value\nformatted with ``\'f\'`` and ``\'F\'``, or before and after the decimal\npoint for a floating point value formatted with ``\'g\'`` or ``\'G\'``.\nFor non-number types the field indicates the maximum field size - in\nother words, how many characters will be used from the field content.\nThe *precision* is not allowed for integer values.\n\nFinally, the *type* determines how the data should be presented.\n\nThe available string presentation types are:\n\n +-----------+------------------------------------------------------------+\n | Type | Meaning |\n +===========+============================================================+\n | ``\'s\'`` | String format. This is the default type for strings and |\n | | may be omitted. |\n +-----------+------------------------------------------------------------+\n | None | The same as ``\'s\'``. |\n +-----------+------------------------------------------------------------+\n\nThe available integer presentation types are:\n\n +-----------+------------------------------------------------------------+\n | Type | Meaning |\n +===========+============================================================+\n | ``\'b\'`` | Binary format. Outputs the number in base 2. |\n +-----------+------------------------------------------------------------+\n | ``\'c\'`` | Character. Converts the integer to the corresponding |\n | | unicode character before printing. |\n +-----------+------------------------------------------------------------+\n | ``\'d\'`` | Decimal Integer. Outputs the number in base 10. |\n +-----------+------------------------------------------------------------+\n | ``\'o\'`` | Octal format. Outputs the number in base 8. |\n +-----------+------------------------------------------------------------+\n | ``\'x\'`` | Hex format. Outputs the number in base 16, using lower- |\n | | case letters for the digits above 9. |\n +-----------+------------------------------------------------------------+\n | ``\'X\'`` | Hex format. Outputs the number in base 16, using upper- |\n | | case letters for the digits above 9. |\n +-----------+------------------------------------------------------------+\n | ``\'n\'`` | Number. This is the same as ``\'d\'``, except that it uses |\n | | the current locale setting to insert the appropriate |\n | | number separator characters. |\n +-----------+------------------------------------------------------------+\n | None | The same as ``\'d\'``. |\n +-----------+------------------------------------------------------------+\n\nIn addition to the above presentation types, integers can be formatted\nwith the floating point presentation types listed below (except\n``\'n\'`` and None). When doing so, ``float()`` is used to convert the\ninteger to a floating point number before formatting.\n\nThe available presentation types for floating point and decimal values\nare:\n\n +-----------+------------------------------------------------------------+\n | Type | Meaning |\n +===========+============================================================+\n | ``\'e\'`` | Exponent notation. Prints the number in scientific |\n | | notation using the letter \'e\' to indicate the exponent. |\n +-----------+------------------------------------------------------------+\n | ``\'E\'`` | Exponent notation. 
Same as ``\'e\'`` except it uses an upper |\n | | case \'E\' as the separator character. |\n +-----------+------------------------------------------------------------+\n | ``\'f\'`` | Fixed point. Displays the number as a fixed-point number. |\n +-----------+------------------------------------------------------------+\n | ``\'F\'`` | Fixed point. Same as ``\'f\'``. |\n +-----------+------------------------------------------------------------+\n | ``\'g\'`` | General format. For a given precision ``p >= 1``, this |\n | | rounds the number to ``p`` significant digits and then |\n | | formats the result in either fixed-point format or in |\n | | scientific notation, depending on its magnitude. The |\n | | precise rules are as follows: suppose that the result |\n | | formatted with presentation type ``\'e\'`` and precision |\n | | ``p-1`` would have exponent ``exp``. Then if ``-4 <= exp |\n | | < p``, the number is formatted with presentation type |\n | | ``\'f\'`` and precision ``p-1-exp``. Otherwise, the number |\n | | is formatted with presentation type ``\'e\'`` and precision |\n | | ``p-1``. In both cases insignificant trailing zeros are |\n | | removed from the significand, and the decimal point is |\n | | also removed if there are no remaining digits following |\n | | it. Positive and negative infinity, positive and negative |\n | | zero, and nans, are formatted as ``inf``, ``-inf``, ``0``, |\n | | ``-0`` and ``nan`` respectively, regardless of the |\n | | precision. A precision of ``0`` is treated as equivalent |\n | | to a precision of ``1``. |\n +-----------+------------------------------------------------------------+\n | ``\'G\'`` | General format. Same as ``\'g\'`` except switches to ``\'E\'`` |\n | | if the number gets too large. The representations of |\n | | infinity and NaN are uppercased, too. |\n +-----------+------------------------------------------------------------+\n | ``\'n\'`` | Number. This is the same as ``\'g\'``, except that it uses |\n | | the current locale setting to insert the appropriate |\n | | number separator characters. |\n +-----------+------------------------------------------------------------+\n | ``\'%\'`` | Percentage. Multiplies the number by 100 and displays in |\n | | fixed (``\'f\'``) format, followed by a percent sign. |\n +-----------+------------------------------------------------------------+\n | None | The same as ``\'g\'``. |\n +-----------+------------------------------------------------------------+\n\n\nFormat examples\n===============\n\nThis section contains examples of the new format syntax and comparison\nwith the old ``%``-formatting.\n\nIn most of the cases the syntax is similar to the old\n``%``-formatting, with the addition of the ``{}`` and with ``:`` used\ninstead of ``%``. 
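Before the side-by-side comparisons, a small illustrative sketch of the 'g' switching rule from the table above (output assumes CPython 2.7):

   >>> format(0.00012345, '.3g')   # exponent is -4, so fixed-point is chosen
   '0.000123'
   >>> format(1234567, '.3g')      # exponent is 6 >= 3, so scientific notation
   '1.23e+06'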
For example, ``\'%03.2f\'`` can be translated to\n``\'{:03.2f}\'``.\n\nThe new format syntax also supports new and different options, shown\nin the follow examples.\n\nAccessing arguments by position:\n\n >>> \'{0}, {1}, {2}\'.format(\'a\', \'b\', \'c\')\n \'a, b, c\'\n >>> \'{}, {}, {}\'.format(\'a\', \'b\', \'c\') # 2.7+ only\n \'a, b, c\'\n >>> \'{2}, {1}, {0}\'.format(\'a\', \'b\', \'c\')\n \'c, b, a\'\n >>> \'{2}, {1}, {0}\'.format(*\'abc\') # unpacking argument sequence\n \'c, b, a\'\n >>> \'{0}{1}{0}\'.format(\'abra\', \'cad\') # arguments\' indices can be repeated\n \'abracadabra\'\n\nAccessing arguments by name:\n\n >>> \'Coordinates: {latitude}, {longitude}\'.format(latitude=\'37.24N\', longitude=\'-115.81W\')\n \'Coordinates: 37.24N, -115.81W\'\n >>> coord = {\'latitude\': \'37.24N\', \'longitude\': \'-115.81W\'}\n >>> \'Coordinates: {latitude}, {longitude}\'.format(**coord)\n \'Coordinates: 37.24N, -115.81W\'\n\nAccessing arguments\' attributes:\n\n >>> c = 3-5j\n >>> (\'The complex number {0} is formed from the real part {0.real} \'\n ... \'and the imaginary part {0.imag}.\').format(c)\n \'The complex number (3-5j) is formed from the real part 3.0 and the imaginary part -5.0.\'\n >>> class Point(object):\n ... def __init__(self, x, y):\n ... self.x, self.y = x, y\n ... def __str__(self):\n ... return \'Point({self.x}, {self.y})\'.format(self=self)\n ...\n >>> str(Point(4, 2))\n \'Point(4, 2)\'\n\nAccessing arguments\' items:\n\n >>> coord = (3, 5)\n >>> \'X: {0[0]}; Y: {0[1]}\'.format(coord)\n \'X: 3; Y: 5\'\n\nReplacing ``%s`` and ``%r``:\n\n >>> "repr() shows quotes: {!r}; str() doesn\'t: {!s}".format(\'test1\', \'test2\')\n "repr() shows quotes: \'test1\'; str() doesn\'t: test2"\n\nAligning the text and specifying a width:\n\n >>> \'{:<30}\'.format(\'left aligned\')\n \'left aligned \'\n >>> \'{:>30}\'.format(\'right aligned\')\n \' right aligned\'\n >>> \'{:^30}\'.format(\'centered\')\n \' centered \'\n >>> \'{:*^30}\'.format(\'centered\') # use \'*\' as a fill char\n \'***********centered***********\'\n\nReplacing ``%+f``, ``%-f``, and ``% f`` and specifying a sign:\n\n >>> \'{:+f}; {:+f}\'.format(3.14, -3.14) # show it always\n \'+3.140000; -3.140000\'\n >>> \'{: f}; {: f}\'.format(3.14, -3.14) # show a space for positive numbers\n \' 3.140000; -3.140000\'\n >>> \'{:-f}; {:-f}\'.format(3.14, -3.14) # show only the minus -- same as \'{:f}; {:f}\'\n \'3.140000; -3.140000\'\n\nReplacing ``%x`` and ``%o`` and converting the value to different\nbases:\n\n >>> # format also supports binary numbers\n >>> "int: {0:d}; hex: {0:x}; oct: {0:o}; bin: {0:b}".format(42)\n \'int: 42; hex: 2a; oct: 52; bin: 101010\'\n >>> # with 0x, 0o, or 0b as prefix:\n >>> "int: {0:d}; hex: {0:#x}; oct: {0:#o}; bin: {0:#b}".format(42)\n \'int: 42; hex: 0x2a; oct: 0o52; bin: 0b101010\'\n\nUsing the comma as a thousands separator:\n\n >>> \'{:,}\'.format(1234567890)\n \'1,234,567,890\'\n\nExpressing a percentage:\n\n >>> points = 19.5\n >>> total = 22\n >>> \'Correct answers: {:.2%}.\'.format(points/total)\n \'Correct answers: 88.64%\'\n\nUsing type-specific formatting:\n\n >>> import datetime\n >>> d = datetime.datetime(2010, 7, 4, 12, 15, 58)\n >>> \'{:%Y-%m-%d %H:%M:%S}\'.format(d)\n \'2010-07-04 12:15:58\'\n\nNesting arguments and more complex examples:\n\n >>> for align, text in zip(\'<^>\', [\'left\', \'center\', \'right\']):\n ... 
\'{0:{fill}{align}16}\'.format(text, fill=align, align=align)\n ...\n \'left<<<<<<<<<<<<\'\n \'^^^^^center^^^^^\'\n \'>>>>>>>>>>>right\'\n >>>\n >>> octets = [192, 168, 0, 1]\n >>> \'{:02X}{:02X}{:02X}{:02X}\'.format(*octets)\n \'C0A80001\'\n >>> int(_, 16)\n 3232235521\n >>>\n >>> width = 5\n >>> for num in range(5,12):\n ... for base in \'dXob\':\n ... print \'{0:{width}{base}}\'.format(num, base=base, width=width),\n ... print\n ...\n 5 5 5 101\n 6 6 6 110\n 7 7 7 111\n 8 8 10 1000\n 9 9 11 1001\n 10 A 12 1010\n 11 B 13 1011\n', 'function': u'\nFunction definitions\n********************\n\nA function definition defines a user-defined function object (see\nsection *The standard type hierarchy*):\n\n decorated ::= decorators (classdef | funcdef)\n decorators ::= decorator+\n decorator ::= "@" dotted_name ["(" [argument_list [","]] ")"] NEWLINE\n funcdef ::= "def" funcname "(" [parameter_list] ")" ":" suite\n dotted_name ::= identifier ("." identifier)*\n parameter_list ::= (defparameter ",")*\n ( "*" identifier [, "**" identifier]\n | "**" identifier\n | defparameter [","] )\n defparameter ::= parameter ["=" expression]\n sublist ::= parameter ("," parameter)* [","]\n parameter ::= identifier | "(" sublist ")"\n funcname ::= identifier\n\nA function definition is an executable statement. Its execution binds\nthe function name in the current local namespace to a function object\n(a wrapper around the executable code for the function). This\nfunction object contains a reference to the current global namespace\nas the global namespace to be used when the function is called.\n\nThe function definition does not execute the function body; this gets\nexecuted only when the function is called. [3]\n\nA function definition may be wrapped by one or more *decorator*\nexpressions. Decorator expressions are evaluated when the function is\ndefined, in the scope that contains the function definition. The\nresult must be a callable, which is invoked with the function object\nas the only argument. The returned value is bound to the function name\ninstead of the function object. Multiple decorators are applied in\nnested fashion. For example, the following code:\n\n @f1(arg)\n @f2\n def func(): pass\n\nis equivalent to:\n\n def func(): pass\n func = f1(arg)(f2(func))\n\nWhen one or more top-level parameters have the form *parameter* ``=``\n*expression*, the function is said to have "default parameter values."\nFor a parameter with a default value, the corresponding argument may\nbe omitted from a call, in which case the parameter\'s default value is\nsubstituted. If a parameter has a default value, all following\nparameters must also have a default value --- this is a syntactic\nrestriction that is not expressed by the grammar.\n\n**Default parameter values are evaluated when the function definition\nis executed.** This means that the expression is evaluated once, when\nthe function is defined, and that that same "pre-computed" value is\nused for each call. This is especially important to understand when a\ndefault parameter is a mutable object, such as a list or a dictionary:\nif the function modifies the object (e.g. by appending an item to a\nlist), the default value is in effect modified. This is generally not\nwhat was intended. 
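For instance (an illustrative sketch), every call below mutates the single list that was created when the ``def`` statement ran:

   >>> def register(name, seen=[]):
   ...     seen.append(name)
   ...     return seen
   ...
   >>> register('a')
   ['a']
   >>> register('b')   # the same list object is reused
   ['a', 'b']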
A way around this is to use ``None`` as the\ndefault, and explicitly test for it in the body of the function, e.g.:\n\n def whats_on_the_telly(penguin=None):\n if penguin is None:\n penguin = []\n penguin.append("property of the zoo")\n return penguin\n\nFunction call semantics are described in more detail in section\n*Calls*. A function call always assigns values to all parameters\nmentioned in the parameter list, either from position arguments, from\nkeyword arguments, or from default values. If the form\n"``*identifier``" is present, it is initialized to a tuple receiving\nany excess positional parameters, defaulting to the empty tuple. If\nthe form "``**identifier``" is present, it is initialized to a new\ndictionary receiving any excess keyword arguments, defaulting to a new\nempty dictionary.\n\nIt is also possible to create anonymous functions (functions not bound\nto a name), for immediate use in expressions. This uses lambda forms,\ndescribed in section *Lambdas*. Note that the lambda form is merely a\nshorthand for a simplified function definition; a function defined in\na "``def``" statement can be passed around or assigned to another name\njust like a function defined by a lambda form. The "``def``" form is\nactually more powerful since it allows the execution of multiple\nstatements.\n\n**Programmer\'s note:** Functions are first-class objects. A "``def``"\nform executed inside a function definition defines a local function\nthat can be returned or passed around. Free variables used in the\nnested function can access the local variables of the function\ncontaining the def. See section *Naming and binding* for details.\n', 'global': u'\nThe ``global`` statement\n************************\n\n global_stmt ::= "global" identifier ("," identifier)*\n\nThe ``global`` statement is a declaration which holds for the entire\ncurrent code block. It means that the listed identifiers are to be\ninterpreted as globals. It would be impossible to assign to a global\nvariable without ``global``, although free variables may refer to\nglobals without being declared global.\n\nNames listed in a ``global`` statement must not be used in the same\ncode block textually preceding that ``global`` statement.\n\nNames listed in a ``global`` statement must not be defined as formal\nparameters or in a ``for`` loop control target, ``class`` definition,\nfunction definition, or ``import`` statement.\n\n**CPython implementation detail:** The current implementation does not\nenforce the latter two restrictions, but programs should not abuse\nthis freedom, as future implementations may enforce them or silently\nchange the meaning of the program.\n\n**Programmer\'s note:** the ``global`` is a directive to the parser.\nIt applies only to code parsed at the same time as the ``global``\nstatement. In particular, a ``global`` statement contained in an\n``exec`` statement does not affect the code block *containing* the\n``exec`` statement, and code contained in an ``exec`` statement is\nunaffected by ``global`` statements in the code containing the\n``exec`` statement. The same applies to the ``eval()``,\n``execfile()`` and ``compile()`` functions.\n', - 'id-classes': u'\nReserved classes of identifiers\n*******************************\n\nCertain classes of identifiers (besides keywords) have special\nmeanings. These classes are identified by the patterns of leading and\ntrailing underscore characters:\n\n``_*``\n Not imported by ``from module import *``. 
The special identifier\n ``_`` is used in the interactive interpreter to store the result of\n the last evaluation; it is stored in the ``__builtin__`` module.\n When not in interactive mode, ``_`` has no special meaning and is\n not defined. See section *The import statement*.\n\n Note: The name ``_`` is often used in conjunction with\n internationalization; refer to the documentation for the\n ``gettext`` module for more information on this convention.\n\n``__*__``\n System-defined names. These names are defined by the interpreter\n and its implementation (including the standard library);\n applications should not expect to define additional names using\n this convention. The set of names of this class defined by Python\n may be extended in future versions. See section *Special method\n names*.\n\n``__*``\n Class-private names. Names in this category, when used within the\n context of a class definition, are re-written to use a mangled form\n to help avoid name clashes between "private" attributes of base and\n derived classes. See section *Identifiers (Names)*.\n', - 'identifiers': u'\nIdentifiers and keywords\n************************\n\nIdentifiers (also referred to as *names*) are described by the\nfollowing lexical definitions:\n\n identifier ::= (letter|"_") (letter | digit | "_")*\n letter ::= lowercase | uppercase\n lowercase ::= "a"..."z"\n uppercase ::= "A"..."Z"\n digit ::= "0"..."9"\n\nIdentifiers are unlimited in length. Case is significant.\n\n\nKeywords\n========\n\nThe following identifiers are used as reserved words, or *keywords* of\nthe language, and cannot be used as ordinary identifiers. They must\nbe spelled exactly as written here:\n\n and del from not while\n as elif global or with\n assert else if pass yield\n break except import print\n class exec in raise\n continue finally is return\n def for lambda try\n\nChanged in version 2.4: ``None`` became a constant and is now\nrecognized by the compiler as a name for the built-in object ``None``.\nAlthough it is not a keyword, you cannot assign a different object to\nit.\n\nChanged in version 2.5: Both ``as`` and ``with`` are only recognized\nwhen the ``with_statement`` future feature has been enabled. It will\nalways be enabled in Python 2.6. See section *The with statement* for\ndetails. Note that using ``as`` and ``with`` as identifiers will\nalways issue a warning, even when the ``with_statement`` future\ndirective is not in effect.\n\n\nReserved classes of identifiers\n===============================\n\nCertain classes of identifiers (besides keywords) have special\nmeanings. These classes are identified by the patterns of leading and\ntrailing underscore characters:\n\n``_*``\n Not imported by ``from module import *``. The special identifier\n ``_`` is used in the interactive interpreter to store the result of\n the last evaluation; it is stored in the ``__builtin__`` module.\n When not in interactive mode, ``_`` has no special meaning and is\n not defined. See section *The import statement*.\n\n Note: The name ``_`` is often used in conjunction with\n internationalization; refer to the documentation for the\n ``gettext`` module for more information on this convention.\n\n``__*__``\n System-defined names. These names are defined by the interpreter\n and its implementation (including the standard library);\n applications should not expect to define additional names using\n this convention. The set of names of this class defined by Python\n may be extended in future versions. 
See section *Special method\n names*.\n\n``__*``\n Class-private names. Names in this category, when used within the\n context of a class definition, are re-written to use a mangled form\n to help avoid name clashes between "private" attributes of base and\n derived classes. See section *Identifiers (Names)*.\n', + 'id-classes': u'\nReserved classes of identifiers\n*******************************\n\nCertain classes of identifiers (besides keywords) have special\nmeanings. These classes are identified by the patterns of leading and\ntrailing underscore characters:\n\n``_*``\n Not imported by ``from module import *``. The special identifier\n ``_`` is used in the interactive interpreter to store the result of\n the last evaluation; it is stored in the ``__builtin__`` module.\n When not in interactive mode, ``_`` has no special meaning and is\n not defined. See section *The import statement*.\n\n Note: The name ``_`` is often used in conjunction with\n internationalization; refer to the documentation for the\n ``gettext`` module for more information on this convention.\n\n``__*__``\n System-defined names. These names are defined by the interpreter\n and its implementation (including the standard library). Current\n system names are discussed in the *Special method names* section\n and elsewhere. More will likely be defined in future versions of\n Python. *Any* use of ``__*__`` names, in any context, that does\n not follow explicitly documented use, is subject to breakage\n without warning.\n\n``__*``\n Class-private names. Names in this category, when used within the\n context of a class definition, are re-written to use a mangled form\n to help avoid name clashes between "private" attributes of base and\n derived classes. See section *Identifiers (Names)*.\n', + 'identifiers': u'\nIdentifiers and keywords\n************************\n\nIdentifiers (also referred to as *names*) are described by the\nfollowing lexical definitions:\n\n identifier ::= (letter|"_") (letter | digit | "_")*\n letter ::= lowercase | uppercase\n lowercase ::= "a"..."z"\n uppercase ::= "A"..."Z"\n digit ::= "0"..."9"\n\nIdentifiers are unlimited in length. Case is significant.\n\n\nKeywords\n========\n\nThe following identifiers are used as reserved words, or *keywords* of\nthe language, and cannot be used as ordinary identifiers. They must\nbe spelled exactly as written here:\n\n and del from not while\n as elif global or with\n assert else if pass yield\n break except import print\n class exec in raise\n continue finally is return\n def for lambda try\n\nChanged in version 2.4: ``None`` became a constant and is now\nrecognized by the compiler as a name for the built-in object ``None``.\nAlthough it is not a keyword, you cannot assign a different object to\nit.\n\nChanged in version 2.5: Both ``as`` and ``with`` are only recognized\nwhen the ``with_statement`` future feature has been enabled. It will\nalways be enabled in Python 2.6. See section *The with statement* for\ndetails. Note that using ``as`` and ``with`` as identifiers will\nalways issue a warning, even when the ``with_statement`` future\ndirective is not in effect.\n\n\nReserved classes of identifiers\n===============================\n\nCertain classes of identifiers (besides keywords) have special\nmeanings. These classes are identified by the patterns of leading and\ntrailing underscore characters:\n\n``_*``\n Not imported by ``from module import *``. 
The special identifier\n ``_`` is used in the interactive interpreter to store the result of\n the last evaluation; it is stored in the ``__builtin__`` module.\n When not in interactive mode, ``_`` has no special meaning and is\n not defined. See section *The import statement*.\n\n Note: The name ``_`` is often used in conjunction with\n internationalization; refer to the documentation for the\n ``gettext`` module for more information on this convention.\n\n``__*__``\n System-defined names. These names are defined by the interpreter\n and its implementation (including the standard library). Current\n system names are discussed in the *Special method names* section\n and elsewhere. More will likely be defined in future versions of\n Python. *Any* use of ``__*__`` names, in any context, that does\n not follow explicitly documented use, is subject to breakage\n without warning.\n\n``__*``\n Class-private names. Names in this category, when used within the\n context of a class definition, are re-written to use a mangled form\n to help avoid name clashes between "private" attributes of base and\n derived classes. See section *Identifiers (Names)*.\n', 'if': u'\nThe ``if`` statement\n********************\n\nThe ``if`` statement is used for conditional execution:\n\n if_stmt ::= "if" expression ":" suite\n ( "elif" expression ":" suite )*\n ["else" ":" suite]\n\nIt selects exactly one of the suites by evaluating the expressions one\nby one until one is found to be true (see section *Boolean operations*\nfor the definition of true and false); then that suite is executed\n(and no other part of the ``if`` statement is executed or evaluated).\nIf all expressions are false, the suite of the ``else`` clause, if\npresent, is executed.\n', 'imaginary': u'\nImaginary literals\n******************\n\nImaginary literals are described by the following lexical definitions:\n\n imagnumber ::= (floatnumber | intpart) ("j" | "J")\n\nAn imaginary literal yields a complex number with a real part of 0.0.\nComplex numbers are represented as a pair of floating point numbers\nand have the same restrictions on their range. To create a complex\nnumber with a nonzero real part, add a floating point number to it,\ne.g., ``(3+4j)``. Some examples of imaginary literals:\n\n 3.14j 10.j 10j .001j 1e100j 3.14e-10j\n', - 'import': u'\nThe ``import`` statement\n************************\n\n import_stmt ::= "import" module ["as" name] ( "," module ["as" name] )*\n | "from" relative_module "import" identifier ["as" name]\n ( "," identifier ["as" name] )*\n | "from" relative_module "import" "(" identifier ["as" name]\n ( "," identifier ["as" name] )* [","] ")"\n | "from" module "import" "*"\n module ::= (identifier ".")* identifier\n relative_module ::= "."* module | "."+\n name ::= identifier\n\nImport statements are executed in two steps: (1) find a module, and\ninitialize it if necessary; (2) define a name or names in the local\nnamespace (of the scope where the ``import`` statement occurs). The\nstatement comes in two forms differing on whether it uses the ``from``\nkeyword. The first form (without ``from``) repeats these steps for\neach identifier in the list. The form with ``from`` performs step (1)\nonce, and then performs step (2) repeatedly.\n\nTo understand how step (1) occurs, one must first understand how\nPython handles hierarchical naming of modules. To help organize\nmodules and provide a hierarchy in naming, Python has a concept of\npackages. 
A package can contain other packages and modules while\nmodules cannot contain other modules or packages. From a file system\nperspective, packages are directories and modules are files. The\noriginal specification for packages is still available to read,\nalthough minor details have changed since the writing of that\ndocument.\n\nOnce the name of the module is known (unless otherwise specified, the\nterm "module" will refer to both packages and modules), searching for\nthe module or package can begin. The first place checked is\n``sys.modules``, the cache of all modules that have been imported\npreviously. If the module is found there then it is used in step (2)\nof import.\n\nIf the module is not found in the cache, then ``sys.meta_path`` is\nsearched (the specification for ``sys.meta_path`` can be found in\n**PEP 302**). The object is a list of *finder* objects which are\nqueried in order as to whether they know how to load the module by\ncalling their ``find_module()`` method with the name of the module. If\nthe module happens to be contained within a package (as denoted by the\nexistence of a dot in the name), then a second argument to\n``find_module()`` is given as the value of the ``__path__`` attribute\nfrom the parent package (everything up to the last dot in the name of\nthe module being imported). If a finder can find the module it returns\na *loader* (discussed later) or returns ``None``.\n\nIf none of the finders on ``sys.meta_path`` are able to find the\nmodule then some implicitly defined finders are queried.\nImplementations of Python vary in what implicit meta path finders are\ndefined. The one they all do define, though, is one that handles\n``sys.path_hooks``, ``sys.path_importer_cache``, and ``sys.path``.\n\nThe implicit finder searches for the requested module in the "paths"\nspecified in one of two places ("paths" do not have to be file system\npaths). If the module being imported is supposed to be contained\nwithin a package then the second argument passed to ``find_module()``,\n``__path__`` on the parent package, is used as the source of paths. If\nthe module is not contained in a package then ``sys.path`` is used as\nthe source of paths.\n\nOnce the source of paths is chosen it is iterated over to find a\nfinder that can handle that path. The dict at\n``sys.path_importer_cache`` caches finders for paths and is checked\nfor a finder. If the path does not have a finder cached then\n``sys.path_hooks`` is searched by calling each object in the list with\na single argument of the path, returning a finder or raises\n``ImportError``. If a finder is returned then it is cached in\n``sys.path_importer_cache`` and then used for that path entry. If no\nfinder can be found but the path exists then a value of ``None`` is\nstored in ``sys.path_importer_cache`` to signify that an implicit,\nfile-based finder that handles modules stored as individual files\nshould be used for that path. If the path does not exist then a finder\nwhich always returns ``None`` is placed in the cache for the path.\n\nIf no finder can find the module then ``ImportError`` is raised.\nOtherwise some finder returned a loader whose ``load_module()`` method\nis called with the name of the module to load (see **PEP 302** for the\noriginal definition of loaders). A loader has several responsibilities\nto perform on a module it loads. 
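A minimal sketch of such a finder/loader pair (illustrative only; the module name ``hello_world`` and its single attribute are invented for the example) could look like this on Python 2:

   import imp
   import sys

   class HelloFinder(object):
       def find_module(self, fullname, path=None):
           # Claim only the one module this finder knows how to load.
           return self if fullname == 'hello_world' else None

       def load_module(self, fullname):
           # Reuse the cached module if the import machinery already has it.
           if fullname in sys.modules:
               return sys.modules[fullname]
           mod = imp.new_module(fullname)
           mod.__file__ = '<hello_world>'
           mod.__loader__ = self
           sys.modules[fullname] = mod
           mod.greeting = 'loaded via sys.meta_path'
           return mod

   sys.meta_path.append(HelloFinder())

After the ``append()`` call, ``import hello_world`` succeeds and the module exposes ``hello_world.greeting``.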
First, if the module already exists\nin ``sys.modules`` (a possibility if the loader is called outside of\nthe import machinery) then it is to use that module for initialization\nand not a new module. But if the module does not exist in\n``sys.modules`` then it is to be added to that dict before\ninitialization begins. If an error occurs during loading of the module\nand it was added to ``sys.modules`` it is to be removed from the dict.\nIf an error occurs but the module was already in ``sys.modules`` it is\nleft in the dict.\n\nThe loader must set several attributes on the module. ``__name__`` is\nto be set to the name of the module. ``__file__`` is to be the "path"\nto the file unless the module is built-in (and thus listed in\n``sys.builtin_module_names``) in which case the attribute is not set.\nIf what is being imported is a package then ``__path__`` is to be set\nto a list of paths to be searched when looking for modules and\npackages contained within the package being imported. ``__package__``\nis optional but should be set to the name of package that contains the\nmodule or package (the empty string is used for module not contained\nin a package). ``__loader__`` is also optional but should be set to\nthe loader object that is loading the module.\n\nIf an error occurs during loading then the loader raises\n``ImportError`` if some other exception is not already being\npropagated. Otherwise the loader returns the module that was loaded\nand initialized.\n\nWhen step (1) finishes without raising an exception, step (2) can\nbegin.\n\nThe first form of ``import`` statement binds the module name in the\nlocal namespace to the module object, and then goes on to import the\nnext identifier, if any. If the module name is followed by ``as``,\nthe name following ``as`` is used as the local name for the module.\n\nThe ``from`` form does not bind the module name: it goes through the\nlist of identifiers, looks each one of them up in the module found in\nstep (1), and binds the name in the local namespace to the object thus\nfound. As with the first form of ``import``, an alternate local name\ncan be supplied by specifying "``as`` localname". If a name is not\nfound, ``ImportError`` is raised. If the list of identifiers is\nreplaced by a star (``\'*\'``), all public names defined in the module\nare bound in the local namespace of the ``import`` statement..\n\nThe *public names* defined by a module are determined by checking the\nmodule\'s namespace for a variable named ``__all__``; if defined, it\nmust be a sequence of strings which are names defined or imported by\nthat module. The names given in ``__all__`` are all considered public\nand are required to exist. If ``__all__`` is not defined, the set of\npublic names includes all names found in the module\'s namespace which\ndo not begin with an underscore character (``\'_\'``). ``__all__``\nshould contain the entire public API. It is intended to avoid\naccidentally exporting items that are not part of the API (such as\nlibrary modules which were imported and used within the module).\n\nThe ``from`` form with ``*`` may only occur in a module scope. If the\nwild card form of import --- ``import *`` --- is used in a function\nand the function contains or is a nested block with free variables,\nthe compiler will raise a ``SyntaxError``.\n\nWhen specifying what module to import you do not have to specify the\nabsolute name of the module. 
When a module or package is contained\nwithin another package it is possible to make a relative import within\nthe same top package without having to mention the package name. By\nusing leading dots in the specified module or package after ``from``\nyou can specify how high to traverse up the current package hierarchy\nwithout specifying exact names. One leading dot means the current\npackage where the module making the import exists. Two dots means up\none package level. Three dots is up two levels, etc. So if you execute\n``from . import mod`` from a module in the ``pkg`` package then you\nwill end up importing ``pkg.mod``. If you execute ``from ..subpkg2\nimprt mod`` from within ``pkg.subpkg1`` you will import\n``pkg.subpkg2.mod``. The specification for relative imports is\ncontained within **PEP 328**.\n\n``importlib.import_module()`` is provided to support applications that\ndetermine which modules need to be loaded dynamically.\n\n\nFuture statements\n=================\n\nA *future statement* is a directive to the compiler that a particular\nmodule should be compiled using syntax or semantics that will be\navailable in a specified future release of Python. The future\nstatement is intended to ease migration to future versions of Python\nthat introduce incompatible changes to the language. It allows use of\nthe new features on a per-module basis before the release in which the\nfeature becomes standard.\n\n future_statement ::= "from" "__future__" "import" feature ["as" name]\n ("," feature ["as" name])*\n | "from" "__future__" "import" "(" feature ["as" name]\n ("," feature ["as" name])* [","] ")"\n feature ::= identifier\n name ::= identifier\n\nA future statement must appear near the top of the module. The only\nlines that can appear before a future statement are:\n\n* the module docstring (if any),\n\n* comments,\n\n* blank lines, and\n\n* other future statements.\n\nThe features recognized by Python 2.6 are ``unicode_literals``,\n``print_function``, ``absolute_import``, ``division``, ``generators``,\n``nested_scopes`` and ``with_statement``. ``generators``,\n``with_statement``, ``nested_scopes`` are redundant in Python version\n2.6 and above because they are always enabled.\n\nA future statement is recognized and treated specially at compile\ntime: Changes to the semantics of core constructs are often\nimplemented by generating different code. It may even be the case\nthat a new feature introduces new incompatible syntax (such as a new\nreserved word), in which case the compiler may need to parse the\nmodule differently. 
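As an illustrative sketch, a module that opts into the new ``print`` semantics before they become the default begins with the future statement ahead of any other code:

   from __future__ import print_function

   print('hello', 'world', sep=', ')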
Such decisions cannot be pushed off until\nruntime.\n\nFor any given release, the compiler knows which feature names have\nbeen defined, and raises a compile-time error if a future statement\ncontains a feature not known to it.\n\nThe direct runtime semantics are the same as for any import statement:\nthere is a standard module ``__future__``, described later, and it\nwill be imported in the usual way at the time the future statement is\nexecuted.\n\nThe interesting runtime semantics depend on the specific feature\nenabled by the future statement.\n\nNote that there is nothing special about the statement:\n\n import __future__ [as name]\n\nThat is not a future statement; it\'s an ordinary import statement with\nno special semantics or syntax restrictions.\n\nCode compiled by an ``exec`` statement or calls to the built-in\nfunctions ``compile()`` and ``execfile()`` that occur in a module\n``M`` containing a future statement will, by default, use the new\nsyntax or semantics associated with the future statement. This can,\nstarting with Python 2.2 be controlled by optional arguments to\n``compile()`` --- see the documentation of that function for details.\n\nA future statement typed at an interactive interpreter prompt will\ntake effect for the rest of the interpreter session. If an\ninterpreter is started with the *-i* option, is passed a script name\nto execute, and the script includes a future statement, it will be in\neffect in the interactive session started after the script is\nexecuted.\n\nSee also:\n\n **PEP 236** - Back to the __future__\n The original proposal for the __future__ mechanism.\n', + 'import': u'\nThe ``import`` statement\n************************\n\n import_stmt ::= "import" module ["as" name] ( "," module ["as" name] )*\n | "from" relative_module "import" identifier ["as" name]\n ( "," identifier ["as" name] )*\n | "from" relative_module "import" "(" identifier ["as" name]\n ( "," identifier ["as" name] )* [","] ")"\n | "from" module "import" "*"\n module ::= (identifier ".")* identifier\n relative_module ::= "."* module | "."+\n name ::= identifier\n\nImport statements are executed in two steps: (1) find a module, and\ninitialize it if necessary; (2) define a name or names in the local\nnamespace (of the scope where the ``import`` statement occurs). The\nstatement comes in two forms differing on whether it uses the ``from``\nkeyword. The first form (without ``from``) repeats these steps for\neach identifier in the list. The form with ``from`` performs step (1)\nonce, and then performs step (2) repeatedly.\n\nTo understand how step (1) occurs, one must first understand how\nPython handles hierarchical naming of modules. To help organize\nmodules and provide a hierarchy in naming, Python has a concept of\npackages. A package can contain other packages and modules while\nmodules cannot contain other modules or packages. From a file system\nperspective, packages are directories and modules are files. The\noriginal specification for packages is still available to read,\nalthough minor details have changed since the writing of that\ndocument.\n\nOnce the name of the module is known (unless otherwise specified, the\nterm "module" will refer to both packages and modules), searching for\nthe module or package can begin. The first place checked is\n``sys.modules``, the cache of all modules that have been imported\npreviously. 
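For instance (an illustrative sketch), importing a module a second time simply hands back the object cached in ``sys.modules``:

   >>> import string
   >>> import sys
   >>> sys.modules['string'] is string
   True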
If the module is found there then it is used in step (2)\nof import.\n\nIf the module is not found in the cache, then ``sys.meta_path`` is\nsearched (the specification for ``sys.meta_path`` can be found in\n**PEP 302**). The object is a list of *finder* objects which are\nqueried in order as to whether they know how to load the module by\ncalling their ``find_module()`` method with the name of the module. If\nthe module happens to be contained within a package (as denoted by the\nexistence of a dot in the name), then a second argument to\n``find_module()`` is given as the value of the ``__path__`` attribute\nfrom the parent package (everything up to the last dot in the name of\nthe module being imported). If a finder can find the module it returns\na *loader* (discussed later) or returns ``None``.\n\nIf none of the finders on ``sys.meta_path`` are able to find the\nmodule then some implicitly defined finders are queried.\nImplementations of Python vary in what implicit meta path finders are\ndefined. The one they all do define, though, is one that handles\n``sys.path_hooks``, ``sys.path_importer_cache``, and ``sys.path``.\n\nThe implicit finder searches for the requested module in the "paths"\nspecified in one of two places ("paths" do not have to be file system\npaths). If the module being imported is supposed to be contained\nwithin a package then the second argument passed to ``find_module()``,\n``__path__`` on the parent package, is used as the source of paths. If\nthe module is not contained in a package then ``sys.path`` is used as\nthe source of paths.\n\nOnce the source of paths is chosen it is iterated over to find a\nfinder that can handle that path. The dict at\n``sys.path_importer_cache`` caches finders for paths and is checked\nfor a finder. If the path does not have a finder cached then\n``sys.path_hooks`` is searched by calling each object in the list with\na single argument of the path, returning a finder or raises\n``ImportError``. If a finder is returned then it is cached in\n``sys.path_importer_cache`` and then used for that path entry. If no\nfinder can be found but the path exists then a value of ``None`` is\nstored in ``sys.path_importer_cache`` to signify that an implicit,\nfile-based finder that handles modules stored as individual files\nshould be used for that path. If the path does not exist then a finder\nwhich always returns ``None`` is placed in the cache for the path.\n\nIf no finder can find the module then ``ImportError`` is raised.\nOtherwise some finder returned a loader whose ``load_module()`` method\nis called with the name of the module to load (see **PEP 302** for the\noriginal definition of loaders). A loader has several responsibilities\nto perform on a module it loads. First, if the module already exists\nin ``sys.modules`` (a possibility if the loader is called outside of\nthe import machinery) then it is to use that module for initialization\nand not a new module. But if the module does not exist in\n``sys.modules`` then it is to be added to that dict before\ninitialization begins. If an error occurs during loading of the module\nand it was added to ``sys.modules`` it is to be removed from the dict.\nIf an error occurs but the module was already in ``sys.modules`` it is\nleft in the dict.\n\nThe loader must set several attributes on the module. ``__name__`` is\nto be set to the name of the module. 
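The protocol can be illustrated with a toy finder/loader placed on ``sys.meta_path``; the module name ``hello_pep302`` and the class name are invented for this sketch:

    import imp
    import sys

    class HelloFinder(object):
        def find_module(self, fullname, path=None):
            if fullname == 'hello_pep302':
                return self            # this object also acts as the loader
            return None                # let the remaining finders try

        def load_module(self, fullname):
            if fullname in sys.modules:        # reuse an existing module
                return sys.modules[fullname]
            mod = imp.new_module(fullname)
            sys.modules[fullname] = mod        # register before initializing
            try:
                mod.__loader__ = self
                mod.greeting = 'hello'
            except Exception:
                del sys.modules[fullname]      # undo registration on failure
                raise
            return mod

    sys.meta_path.append(HelloFinder())
    import hello_pep302
    print hello_pep302.greeting               # 'hello'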
``__file__`` is to be the "path"\nto the file unless the module is built-in (and thus listed in\n``sys.builtin_module_names``) in which case the attribute is not set.\nIf what is being imported is a package then ``__path__`` is to be set\nto a list of paths to be searched when looking for modules and\npackages contained within the package being imported. ``__package__``\nis optional but should be set to the name of package that contains the\nmodule or package (the empty string is used for module not contained\nin a package). ``__loader__`` is also optional but should be set to\nthe loader object that is loading the module.\n\nIf an error occurs during loading then the loader raises\n``ImportError`` if some other exception is not already being\npropagated. Otherwise the loader returns the module that was loaded\nand initialized.\n\nWhen step (1) finishes without raising an exception, step (2) can\nbegin.\n\nThe first form of ``import`` statement binds the module name in the\nlocal namespace to the module object, and then goes on to import the\nnext identifier, if any. If the module name is followed by ``as``,\nthe name following ``as`` is used as the local name for the module.\n\nThe ``from`` form does not bind the module name: it goes through the\nlist of identifiers, looks each one of them up in the module found in\nstep (1), and binds the name in the local namespace to the object thus\nfound. As with the first form of ``import``, an alternate local name\ncan be supplied by specifying "``as`` localname". If a name is not\nfound, ``ImportError`` is raised. If the list of identifiers is\nreplaced by a star (``\'*\'``), all public names defined in the module\nare bound in the local namespace of the ``import`` statement..\n\nThe *public names* defined by a module are determined by checking the\nmodule\'s namespace for a variable named ``__all__``; if defined, it\nmust be a sequence of strings which are names defined or imported by\nthat module. The names given in ``__all__`` are all considered public\nand are required to exist. If ``__all__`` is not defined, the set of\npublic names includes all names found in the module\'s namespace which\ndo not begin with an underscore character (``\'_\'``). ``__all__``\nshould contain the entire public API. It is intended to avoid\naccidentally exporting items that are not part of the API (such as\nlibrary modules which were imported and used within the module).\n\nThe ``from`` form with ``*`` may only occur in a module scope. If the\nwild card form of import --- ``import *`` --- is used in a function\nand the function contains or is a nested block with free variables,\nthe compiler will raise a ``SyntaxError``.\n\nWhen specifying what module to import you do not have to specify the\nabsolute name of the module. When a module or package is contained\nwithin another package it is possible to make a relative import within\nthe same top package without having to mention the package name. By\nusing leading dots in the specified module or package after ``from``\nyou can specify how high to traverse up the current package hierarchy\nwithout specifying exact names. One leading dot means the current\npackage where the module making the import exists. Two dots means up\none package level. Three dots is up two levels, etc. So if you execute\n``from . import mod`` from a module in the ``pkg`` package then you\nwill end up importing ``pkg.mod``. If you execute ``from ..subpkg2\nimport mod`` from within ``pkg.subpkg1`` you will import\n``pkg.subpkg2.mod``. 
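``__all__`` is easiest to see with a toy module; the file name ``shapes.py`` is hypothetical:

    # shapes.py
    __all__ = ['Circle', 'Square']        # the intended public API

    class Circle(object): pass
    class Square(object): pass
    class _Helper(object): pass           # leading underscore: not public

    # In another module, ``from shapes import *`` binds Circle and Square
    # but not _Helper (and nothing else missing from __all__).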
The specification for relative imports is\ncontained within **PEP 328**.\n\n``importlib.import_module()`` is provided to support applications that\ndetermine which modules need to be loaded dynamically.\n\n\nFuture statements\n=================\n\nA *future statement* is a directive to the compiler that a particular\nmodule should be compiled using syntax or semantics that will be\navailable in a specified future release of Python. The future\nstatement is intended to ease migration to future versions of Python\nthat introduce incompatible changes to the language. It allows use of\nthe new features on a per-module basis before the release in which the\nfeature becomes standard.\n\n future_statement ::= "from" "__future__" "import" feature ["as" name]\n ("," feature ["as" name])*\n | "from" "__future__" "import" "(" feature ["as" name]\n ("," feature ["as" name])* [","] ")"\n feature ::= identifier\n name ::= identifier\n\nA future statement must appear near the top of the module. The only\nlines that can appear before a future statement are:\n\n* the module docstring (if any),\n\n* comments,\n\n* blank lines, and\n\n* other future statements.\n\nThe features recognized by Python 2.6 are ``unicode_literals``,\n``print_function``, ``absolute_import``, ``division``, ``generators``,\n``nested_scopes`` and ``with_statement``. ``generators``,\n``with_statement``, ``nested_scopes`` are redundant in Python version\n2.6 and above because they are always enabled.\n\nA future statement is recognized and treated specially at compile\ntime: Changes to the semantics of core constructs are often\nimplemented by generating different code. It may even be the case\nthat a new feature introduces new incompatible syntax (such as a new\nreserved word), in which case the compiler may need to parse the\nmodule differently. Such decisions cannot be pushed off until\nruntime.\n\nFor any given release, the compiler knows which feature names have\nbeen defined, and raises a compile-time error if a future statement\ncontains a feature not known to it.\n\nThe direct runtime semantics are the same as for any import statement:\nthere is a standard module ``__future__``, described later, and it\nwill be imported in the usual way at the time the future statement is\nexecuted.\n\nThe interesting runtime semantics depend on the specific feature\nenabled by the future statement.\n\nNote that there is nothing special about the statement:\n\n import __future__ [as name]\n\nThat is not a future statement; it\'s an ordinary import statement with\nno special semantics or syntax restrictions.\n\nCode compiled by an ``exec`` statement or calls to the built-in\nfunctions ``compile()`` and ``execfile()`` that occur in a module\n``M`` containing a future statement will, by default, use the new\nsyntax or semantics associated with the future statement. This can,\nstarting with Python 2.2 be controlled by optional arguments to\n``compile()`` --- see the documentation of that function for details.\n\nA future statement typed at an interactive interpreter prompt will\ntake effect for the rest of the interpreter session. 
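For example (a sketch that assumes the surrounding module does *not* itself enable ``division``), the compiler flag exported by ``__future__`` can be passed to ``compile()``:

    import __future__

    expr = '1/2'
    print eval(compile(expr, '<test>', 'eval'))        # 0: classic division
    print eval(compile(expr, '<test>', 'eval',
                       __future__.division.compiler_flag))   # 0.5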
If an\ninterpreter is started with the *-i* option, is passed a script name\nto execute, and the script includes a future statement, it will be in\neffect in the interactive session started after the script is\nexecuted.\n\nSee also:\n\n **PEP 236** - Back to the __future__\n The original proposal for the __future__ mechanism.\n', 'in': u'\nComparisons\n***********\n\nUnlike C, all comparison operations in Python have the same priority,\nwhich is lower than that of any arithmetic, shifting or bitwise\noperation. Also unlike C, expressions like ``a < b < c`` have the\ninterpretation that is conventional in mathematics:\n\n comparison ::= or_expr ( comp_operator or_expr )*\n comp_operator ::= "<" | ">" | "==" | ">=" | "<=" | "<>" | "!="\n | "is" ["not"] | ["not"] "in"\n\nComparisons yield boolean values: ``True`` or ``False``.\n\nComparisons can be chained arbitrarily, e.g., ``x < y <= z`` is\nequivalent to ``x < y and y <= z``, except that ``y`` is evaluated\nonly once (but in both cases ``z`` is not evaluated at all when ``x <\ny`` is found to be false).\n\nFormally, if *a*, *b*, *c*, ..., *y*, *z* are expressions and *op1*,\n*op2*, ..., *opN* are comparison operators, then ``a op1 b op2 c ... y\nopN z`` is equivalent to ``a op1 b and b op2 c and ... y opN z``,\nexcept that each expression is evaluated at most once.\n\nNote that ``a op1 b op2 c`` doesn\'t imply any kind of comparison\nbetween *a* and *c*, so that, e.g., ``x < y > z`` is perfectly legal\n(though perhaps not pretty).\n\nThe forms ``<>`` and ``!=`` are equivalent; for consistency with C,\n``!=`` is preferred; where ``!=`` is mentioned below ``<>`` is also\naccepted. The ``<>`` spelling is considered obsolescent.\n\nThe operators ``<``, ``>``, ``==``, ``>=``, ``<=``, and ``!=`` compare\nthe values of two objects. The objects need not have the same type.\nIf both are numbers, they are converted to a common type. Otherwise,\nobjects of different types *always* compare unequal, and are ordered\nconsistently but arbitrarily. You can control comparison behavior of\nobjects of non-built-in types by defining a ``__cmp__`` method or rich\ncomparison methods like ``__gt__``, described in section *Special\nmethod names*.\n\n(This unusual definition of comparison was used to simplify the\ndefinition of operations like sorting and the ``in`` and ``not in``\noperators. In the future, the comparison rules for objects of\ndifferent types are likely to change.)\n\nComparison of objects of the same type depends on the type:\n\n* Numbers are compared arithmetically.\n\n* Strings are compared lexicographically using the numeric equivalents\n (the result of the built-in function ``ord()``) of their characters.\n Unicode and 8-bit strings are fully interoperable in this behavior.\n [4]\n\n* Tuples and lists are compared lexicographically using comparison of\n corresponding elements. This means that to compare equal, each\n element must compare equal and the two sequences must be of the same\n type and have the same length.\n\n If not equal, the sequences are ordered the same as their first\n differing elements. For example, ``cmp([1,2,x], [1,2,y])`` returns\n the same as ``cmp(x,y)``. If the corresponding element does not\n exist, the shorter sequence is ordered first (for example, ``[1,2] <\n [1,2,3]``).\n\n* Mappings (dictionaries) compare equal if and only if their sorted\n (key, value) lists compare equal. [5] Outcomes other than equality\n are resolved consistently, but are not otherwise defined. 
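A short sketch of comparison chaining and the single-evaluation rule (the helper name is illustrative):

    x, y, z = 1, 5, 10
    assert (x < y <= z) == (x < y and y <= z)

    def middle():
        print 'evaluated once'
        return y

    x < middle() <= z        # prints the message only once
    x < y > z                # legal, if not pretty: y is compared to both sides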
[6]\n\n* Most other objects of built-in types compare unequal unless they are\n the same object; the choice whether one object is considered smaller\n or larger than another one is made arbitrarily but consistently\n within one execution of a program.\n\nThe operators ``in`` and ``not in`` test for collection membership.\n``x in s`` evaluates to true if *x* is a member of the collection *s*,\nand false otherwise. ``x not in s`` returns the negation of ``x in\ns``. The collection membership test has traditionally been bound to\nsequences; an object is a member of a collection if the collection is\na sequence and contains an element equal to that object. However, it\nmake sense for many other object types to support membership tests\nwithout being a sequence. In particular, dictionaries (for keys) and\nsets support membership testing.\n\nFor the list and tuple types, ``x in y`` is true if and only if there\nexists an index *i* such that ``x == y[i]`` is true.\n\nFor the Unicode and string types, ``x in y`` is true if and only if\n*x* is a substring of *y*. An equivalent test is ``y.find(x) != -1``.\nNote, *x* and *y* need not be the same type; consequently, ``u\'ab\' in\n\'abc\'`` will return ``True``. Empty strings are always considered to\nbe a substring of any other string, so ``"" in "abc"`` will return\n``True``.\n\nChanged in version 2.3: Previously, *x* was required to be a string of\nlength ``1``.\n\nFor user-defined classes which define the ``__contains__()`` method,\n``x in y`` is true if and only if ``y.__contains__(x)`` is true.\n\nFor user-defined classes which do not define ``__contains__()`` but do\ndefine ``__iter__()``, ``x in y`` is true if some value ``z`` with ``x\n== z`` is produced while iterating over ``y``. If an exception is\nraised during the iteration, it is as if ``in`` raised that exception.\n\nLastly, the old-style iteration protocol is tried: if a class defines\n``__getitem__()``, ``x in y`` is true if and only if there is a non-\nnegative integer index *i* such that ``x == y[i]``, and all lower\ninteger indices do not raise ``IndexError`` exception. (If any other\nexception is raised, it is as if ``in`` raised that exception).\n\nThe operator ``not in`` is defined to have the inverse true value of\n``in``.\n\nThe operators ``is`` and ``is not`` test for object identity: ``x is\ny`` is true if and only if *x* and *y* are the same object. ``x is\nnot y`` yields the inverse truth value. [7]\n', 'integers': u'\nInteger and long integer literals\n*********************************\n\nInteger and long integer literals are described by the following\nlexical definitions:\n\n longinteger ::= integer ("l" | "L")\n integer ::= decimalinteger | octinteger | hexinteger | bininteger\n decimalinteger ::= nonzerodigit digit* | "0"\n octinteger ::= "0" ("o" | "O") octdigit+ | "0" octdigit+\n hexinteger ::= "0" ("x" | "X") hexdigit+\n bininteger ::= "0" ("b" | "B") bindigit+\n nonzerodigit ::= "1"..."9"\n octdigit ::= "0"..."7"\n bindigit ::= "0" | "1"\n hexdigit ::= digit | "a"..."f" | "A"..."F"\n\nAlthough both lower case ``\'l\'`` and upper case ``\'L\'`` are allowed as\nsuffix for long integers, it is strongly recommended to always use\n``\'L\'``, since the letter ``\'l\'`` looks too much like the digit\n``\'1\'``.\n\nPlain integer literals that are above the largest representable plain\ninteger (e.g., 2147483647 when using 32-bit arithmetic) are accepted\nas if they were long integers instead. 
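Membership testing for a user-defined type can be sketched with a toy class (the name ``EvenNumbers`` is made up):

    class EvenNumbers(object):
        def __contains__(self, n):
            return n % 2 == 0         # defines what ``in`` means here

    print 4 in EvenNumbers()          # True
    print 5 not in EvenNumbers()      # True

    print u'ab' in 'abc'              # True: substring test, str/unicode mix
    print '' in 'abc'                 # True: the empty string is a substring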
[1] There is no limit for long\ninteger literals apart from what can be stored in available memory.\n\nSome examples of plain integer literals (first row) and long integer\nliterals (second and third rows):\n\n 7 2147483647 0177\n 3L 79228162514264337593543950336L 0377L 0x100000000L\n 79228162514264337593543950336 0xdeadbeef\n', 'lambda': u'\nLambdas\n*******\n\n lambda_form ::= "lambda" [parameter_list]: expression\n old_lambda_form ::= "lambda" [parameter_list]: old_expression\n\nLambda forms (lambda expressions) have the same syntactic position as\nexpressions. They are a shorthand to create anonymous functions; the\nexpression ``lambda arguments: expression`` yields a function object.\nThe unnamed object behaves like a function object defined with\n\n def name(arguments):\n return expression\n\nSee section *Function definitions* for the syntax of parameter lists.\nNote that functions created with lambda forms cannot contain\nstatements.\n', 'lists': u'\nList displays\n*************\n\nA list display is a possibly empty series of expressions enclosed in\nsquare brackets:\n\n list_display ::= "[" [expression_list | list_comprehension] "]"\n list_comprehension ::= expression list_for\n list_for ::= "for" target_list "in" old_expression_list [list_iter]\n old_expression_list ::= old_expression [("," old_expression)+ [","]]\n old_expression ::= or_test | old_lambda_form\n list_iter ::= list_for | list_if\n list_if ::= "if" old_expression [list_iter]\n\nA list display yields a new list object. Its contents are specified\nby providing either a list of expressions or a list comprehension.\nWhen a comma-separated list of expressions is supplied, its elements\nare evaluated from left to right and placed into the list object in\nthat order. When a list comprehension is supplied, it consists of a\nsingle expression followed by at least one ``for`` clause and zero or\nmore ``for`` or ``if`` clauses. In this case, the elements of the new\nlist are those that would be produced by considering each of the\n``for`` or ``if`` clauses a block, nesting from left to right, and\nevaluating the expression to produce a list element each time the\ninnermost block is reached [1].\n', - 'naming': u"\nNaming and binding\n******************\n\n*Names* refer to objects. Names are introduced by name binding\noperations. Each occurrence of a name in the program text refers to\nthe *binding* of that name established in the innermost function block\ncontaining the use.\n\nA *block* is a piece of Python program text that is executed as a\nunit. The following are blocks: a module, a function body, and a class\ndefinition. Each command typed interactively is a block. A script\nfile (a file given as standard input to the interpreter or specified\non the interpreter command line the first argument) is a code block.\nA script command (a command specified on the interpreter command line\nwith the '**-c**' option) is a code block. The file read by the\nbuilt-in function ``execfile()`` is a code block. The string argument\npassed to the built-in function ``eval()`` and to the ``exec``\nstatement is a code block. The expression read and evaluated by the\nbuilt-in function ``input()`` is a code block.\n\nA code block is executed in an *execution frame*. A frame contains\nsome administrative information (used for debugging) and determines\nwhere and how execution continues after the code block's execution has\ncompleted.\n\nA *scope* defines the visibility of a name within a block. 
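Two small sketches, one for lambda forms and one for list comprehensions:

    add = lambda a, b: a + b          # shorthand for a one-expression function
    def add_def(a, b):
        return a + b
    assert add(2, 3) == add_def(2, 3) == 5

    # for/if clauses nest from left to right:
    pairs = [(x, y) for x in range(3) for y in range(3) if x != y]
    print pairs                       # [(0, 1), (0, 2), (1, 0), ...]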
If a local\nvariable is defined in a block, its scope includes that block. If the\ndefinition occurs in a function block, the scope extends to any blocks\ncontained within the defining one, unless a contained block introduces\na different binding for the name. The scope of names defined in a\nclass block is limited to the class block; it does not extend to the\ncode blocks of methods -- this includes generator expressions since\nthey are implemented using a function scope. This means that the\nfollowing will fail:\n\n class A:\n a = 42\n b = list(a + i for i in range(10))\n\nWhen a name is used in a code block, it is resolved using the nearest\nenclosing scope. The set of all such scopes visible to a code block\nis called the block's *environment*.\n\nIf a name is bound in a block, it is a local variable of that block.\nIf a name is bound at the module level, it is a global variable. (The\nvariables of the module code block are local and global.) If a\nvariable is used in a code block but not defined there, it is a *free\nvariable*.\n\nWhen a name is not found at all, a ``NameError`` exception is raised.\nIf the name refers to a local variable that has not been bound, a\n``UnboundLocalError`` exception is raised. ``UnboundLocalError`` is a\nsubclass of ``NameError``.\n\nThe following constructs bind names: formal parameters to functions,\n``import`` statements, class and function definitions (these bind the\nclass or function name in the defining block), and targets that are\nidentifiers if occurring in an assignment, ``for`` loop header, in the\nsecond position of an ``except`` clause header or after ``as`` in a\n``with`` statement. The ``import`` statement of the form ``from ...\nimport *`` binds all names defined in the imported module, except\nthose beginning with an underscore. This form may only be used at the\nmodule level.\n\nA target occurring in a ``del`` statement is also considered bound for\nthis purpose (though the actual semantics are to unbind the name). It\nis illegal to unbind a name that is referenced by an enclosing scope;\nthe compiler will report a ``SyntaxError``.\n\nEach assignment or import statement occurs within a block defined by a\nclass or function definition or at the module level (the top-level\ncode block).\n\nIf a name binding operation occurs anywhere within a code block, all\nuses of the name within the block are treated as references to the\ncurrent block. This can lead to errors when a name is used within a\nblock before it is bound. This rule is subtle. Python lacks\ndeclarations and allows name binding operations to occur anywhere\nwithin a code block. The local variables of a code block can be\ndetermined by scanning the entire text of the block for name binding\noperations.\n\nIf the global statement occurs within a block, all uses of the name\nspecified in the statement refer to the binding of that name in the\ntop-level namespace. Names are resolved in the top-level namespace by\nsearching the global namespace, i.e. the namespace of the module\ncontaining the code block, and the builtins namespace, the namespace\nof the module ``__builtin__``. The global namespace is searched\nfirst. If the name is not found there, the builtins namespace is\nsearched. 
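The local-versus-global distinction is easiest to see in a small sketch:

    x = 10

    def broken():
        print x        # raises UnboundLocalError when called: the assignment
        x = 20         # below makes x local to the whole function body

    def fixed():
        global x       # every use of x now refers to the module-level binding
        print x
        x = 20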
The global statement must precede all uses of the name.\n\nThe builtins namespace associated with the execution of a code block\nis actually found by looking up the name ``__builtins__`` in its\nglobal namespace; this should be a dictionary or a module (in the\nlatter case the module's dictionary is used). By default, when in the\n``__main__`` module, ``__builtins__`` is the built-in module\n``__builtin__`` (note: no 's'); when in any other module,\n``__builtins__`` is an alias for the dictionary of the ``__builtin__``\nmodule itself. ``__builtins__`` can be set to a user-created\ndictionary to create a weak form of restricted execution.\n\n**CPython implementation detail:** Users should not touch\n``__builtins__``; it is strictly an implementation detail. Users\nwanting to override values in the builtins namespace should ``import``\nthe ``__builtin__`` (no 's') module and modify its attributes\nappropriately.\n\nThe namespace for a module is automatically created the first time a\nmodule is imported. The main module for a script is always called\n``__main__``.\n\nThe global statement has the same scope as a name binding operation in\nthe same block. If the nearest enclosing scope for a free variable\ncontains a global statement, the free variable is treated as a global.\n\nA class definition is an executable statement that may use and define\nnames. These references follow the normal rules for name resolution.\nThe namespace of the class definition becomes the attribute dictionary\nof the class. Names defined at the class scope are not visible in\nmethods.\n\n\nInteraction with dynamic features\n=================================\n\nThere are several cases where Python statements are illegal when used\nin conjunction with nested scopes that contain free variables.\n\nIf a variable is referenced in an enclosing scope, it is illegal to\ndelete the name. An error will be reported at compile time.\n\nIf the wild card form of import --- ``import *`` --- is used in a\nfunction and the function contains or is a nested block with free\nvariables, the compiler will raise a ``SyntaxError``.\n\nIf ``exec`` is used in a function and the function contains or is a\nnested block with free variables, the compiler will raise a\n``SyntaxError`` unless the exec explicitly specifies the local\nnamespace for the ``exec``. (In other words, ``exec obj`` would be\nillegal, but ``exec obj in ns`` would be legal.)\n\nThe ``eval()``, ``execfile()``, and ``input()`` functions and the\n``exec`` statement do not have access to the full environment for\nresolving names. Names may be resolved in the local and global\nnamespaces of the caller. Free variables are not resolved in the\nnearest enclosing namespace, but in the global namespace. [1] The\n``exec`` statement and the ``eval()`` and ``execfile()`` functions\nhave optional arguments to override the global and local namespace.\nIf only one namespace is specified, it is used for both.\n", + 'naming': u"\nNaming and binding\n******************\n\n*Names* refer to objects. Names are introduced by name binding\noperations. Each occurrence of a name in the program text refers to\nthe *binding* of that name established in the innermost function block\ncontaining the use.\n\nA *block* is a piece of Python program text that is executed as a\nunit. The following are blocks: a module, a function body, and a class\ndefinition. Each command typed interactively is a block. 
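The class-scope rule can be sketched as follows (class and attribute names are illustrative):

    class Config(object):
        default = 42                  # bound in the class block

        def get(self):
            # 'default' is not visible here as a bare name; reach it
            # through the instance or the class instead.
            return self.default       # or Config.default

    print Config().get()              # 42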
A script\nfile (a file given as standard input to the interpreter or specified\non the interpreter command line the first argument) is a code block.\nA script command (a command specified on the interpreter command line\nwith the '**-c**' option) is a code block. The file read by the\nbuilt-in function ``execfile()`` is a code block. The string argument\npassed to the built-in function ``eval()`` and to the ``exec``\nstatement is a code block. The expression read and evaluated by the\nbuilt-in function ``input()`` is a code block.\n\nA code block is executed in an *execution frame*. A frame contains\nsome administrative information (used for debugging) and determines\nwhere and how execution continues after the code block's execution has\ncompleted.\n\nA *scope* defines the visibility of a name within a block. If a local\nvariable is defined in a block, its scope includes that block. If the\ndefinition occurs in a function block, the scope extends to any blocks\ncontained within the defining one, unless a contained block introduces\na different binding for the name. The scope of names defined in a\nclass block is limited to the class block; it does not extend to the\ncode blocks of methods -- this includes generator expressions since\nthey are implemented using a function scope. This means that the\nfollowing will fail:\n\n class A:\n a = 42\n b = list(a + i for i in range(10))\n\nWhen a name is used in a code block, it is resolved using the nearest\nenclosing scope. The set of all such scopes visible to a code block\nis called the block's *environment*.\n\nIf a name is bound in a block, it is a local variable of that block.\nIf a name is bound at the module level, it is a global variable. (The\nvariables of the module code block are local and global.) If a\nvariable is used in a code block but not defined there, it is a *free\nvariable*.\n\nWhen a name is not found at all, a ``NameError`` exception is raised.\nIf the name refers to a local variable that has not been bound, a\n``UnboundLocalError`` exception is raised. ``UnboundLocalError`` is a\nsubclass of ``NameError``.\n\nThe following constructs bind names: formal parameters to functions,\n``import`` statements, class and function definitions (these bind the\nclass or function name in the defining block), and targets that are\nidentifiers if occurring in an assignment, ``for`` loop header, in the\nsecond position of an ``except`` clause header or after ``as`` in a\n``with`` statement. The ``import`` statement of the form ``from ...\nimport *`` binds all names defined in the imported module, except\nthose beginning with an underscore. This form may only be used at the\nmodule level.\n\nA target occurring in a ``del`` statement is also considered bound for\nthis purpose (though the actual semantics are to unbind the name). It\nis illegal to unbind a name that is referenced by an enclosing scope;\nthe compiler will report a ``SyntaxError``.\n\nEach assignment or import statement occurs within a block defined by a\nclass or function definition or at the module level (the top-level\ncode block).\n\nIf a name binding operation occurs anywhere within a code block, all\nuses of the name within the block are treated as references to the\ncurrent block. This can lead to errors when a name is used within a\nblock before it is bound. This rule is subtle. Python lacks\ndeclarations and allows name binding operations to occur anywhere\nwithin a code block. 
The local variables of a code block can be\ndetermined by scanning the entire text of the block for name binding\noperations.\n\nIf the global statement occurs within a block, all uses of the name\nspecified in the statement refer to the binding of that name in the\ntop-level namespace. Names are resolved in the top-level namespace by\nsearching the global namespace, i.e. the namespace of the module\ncontaining the code block, and the builtins namespace, the namespace\nof the module ``__builtin__``. The global namespace is searched\nfirst. If the name is not found there, the builtins namespace is\nsearched. The global statement must precede all uses of the name.\n\nThe builtins namespace associated with the execution of a code block\nis actually found by looking up the name ``__builtins__`` in its\nglobal namespace; this should be a dictionary or a module (in the\nlatter case the module's dictionary is used). By default, when in the\n``__main__`` module, ``__builtins__`` is the built-in module\n``__builtin__`` (note: no 's'); when in any other module,\n``__builtins__`` is an alias for the dictionary of the ``__builtin__``\nmodule itself. ``__builtins__`` can be set to a user-created\ndictionary to create a weak form of restricted execution.\n\n**CPython implementation detail:** Users should not touch\n``__builtins__``; it is strictly an implementation detail. Users\nwanting to override values in the builtins namespace should ``import``\nthe ``__builtin__`` (no 's') module and modify its attributes\nappropriately.\n\nThe namespace for a module is automatically created the first time a\nmodule is imported. The main module for a script is always called\n``__main__``.\n\nThe ``global`` statement has the same scope as a name binding\noperation in the same block. If the nearest enclosing scope for a\nfree variable contains a global statement, the free variable is\ntreated as a global.\n\nA class definition is an executable statement that may use and define\nnames. These references follow the normal rules for name resolution.\nThe namespace of the class definition becomes the attribute dictionary\nof the class. Names defined at the class scope are not visible in\nmethods.\n\n\nInteraction with dynamic features\n=================================\n\nThere are several cases where Python statements are illegal when used\nin conjunction with nested scopes that contain free variables.\n\nIf a variable is referenced in an enclosing scope, it is illegal to\ndelete the name. An error will be reported at compile time.\n\nIf the wild card form of import --- ``import *`` --- is used in a\nfunction and the function contains or is a nested block with free\nvariables, the compiler will raise a ``SyntaxError``.\n\nIf ``exec`` is used in a function and the function contains or is a\nnested block with free variables, the compiler will raise a\n``SyntaxError`` unless the exec explicitly specifies the local\nnamespace for the ``exec``. (In other words, ``exec obj`` would be\nillegal, but ``exec obj in ns`` would be legal.)\n\nThe ``eval()``, ``execfile()``, and ``input()`` functions and the\n``exec`` statement do not have access to the full environment for\nresolving names. Names may be resolved in the local and global\nnamespaces of the caller. Free variables are not resolved in the\nnearest enclosing namespace, but in the global namespace. 
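A sketch of overriding the namespaces used by ``eval()`` and the ``exec`` statement:

    glb = {'x': 2}
    loc = {'y': 3}
    print eval('x * y', glb, loc)     # 6: names come from the given mappings

    ns = {}
    exec "answer = 6 * 7" in ns       # explicit namespace for the exec'd code
    print ns['answer']                # 42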
[1] The\n``exec`` statement and the ``eval()`` and ``execfile()`` functions\nhave optional arguments to override the global and local namespace.\nIf only one namespace is specified, it is used for both.\n", 'numbers': u"\nNumeric literals\n****************\n\nThere are four types of numeric literals: plain integers, long\nintegers, floating point numbers, and imaginary numbers. There are no\ncomplex literals (complex numbers can be formed by adding a real\nnumber and an imaginary number).\n\nNote that numeric literals do not include a sign; a phrase like ``-1``\nis actually an expression composed of the unary operator '``-``' and\nthe literal ``1``.\n", 'numeric-types': u'\nEmulating numeric types\n***********************\n\nThe following methods can be defined to emulate numeric objects.\nMethods corresponding to operations that are not supported by the\nparticular kind of number implemented (e.g., bitwise operations for\nnon-integral numbers) should be left undefined.\n\nobject.__add__(self, other)\nobject.__sub__(self, other)\nobject.__mul__(self, other)\nobject.__floordiv__(self, other)\nobject.__mod__(self, other)\nobject.__divmod__(self, other)\nobject.__pow__(self, other[, modulo])\nobject.__lshift__(self, other)\nobject.__rshift__(self, other)\nobject.__and__(self, other)\nobject.__xor__(self, other)\nobject.__or__(self, other)\n\n These methods are called to implement the binary arithmetic\n operations (``+``, ``-``, ``*``, ``//``, ``%``, ``divmod()``,\n ``pow()``, ``**``, ``<<``, ``>>``, ``&``, ``^``, ``|``). For\n instance, to evaluate the expression ``x + y``, where *x* is an\n instance of a class that has an ``__add__()`` method,\n ``x.__add__(y)`` is called. The ``__divmod__()`` method should be\n the equivalent to using ``__floordiv__()`` and ``__mod__()``; it\n should not be related to ``__truediv__()`` (described below). Note\n that ``__pow__()`` should be defined to accept an optional third\n argument if the ternary version of the built-in ``pow()`` function\n is to be supported.\n\n If one of those methods does not support the operation with the\n supplied arguments, it should return ``NotImplemented``.\n\nobject.__div__(self, other)\nobject.__truediv__(self, other)\n\n The division operator (``/``) is implemented by these methods. The\n ``__truediv__()`` method is used when ``__future__.division`` is in\n effect, otherwise ``__div__()`` is used. If only one of these two\n methods is defined, the object will not support division in the\n alternate context; ``TypeError`` will be raised instead.\n\nobject.__radd__(self, other)\nobject.__rsub__(self, other)\nobject.__rmul__(self, other)\nobject.__rdiv__(self, other)\nobject.__rtruediv__(self, other)\nobject.__rfloordiv__(self, other)\nobject.__rmod__(self, other)\nobject.__rdivmod__(self, other)\nobject.__rpow__(self, other)\nobject.__rlshift__(self, other)\nobject.__rrshift__(self, other)\nobject.__rand__(self, other)\nobject.__rxor__(self, other)\nobject.__ror__(self, other)\n\n These methods are called to implement the binary arithmetic\n operations (``+``, ``-``, ``*``, ``/``, ``%``, ``divmod()``,\n ``pow()``, ``**``, ``<<``, ``>>``, ``&``, ``^``, ``|``) with\n reflected (swapped) operands. These functions are only called if\n the left operand does not support the corresponding operation and\n the operands are of different types. 
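A toy numeric type (the name ``Metres`` is invented) showing the forward and reflected methods and the ``NotImplemented`` convention:

    class Metres(object):
        def __init__(self, value):
            self.value = value
        def __add__(self, other):
            if isinstance(other, (int, long, float)):
                return Metres(self.value + other)
            return NotImplemented          # let the other operand handle it
        __radd__ = __add__                 # so 3 + Metres(2) also works
        def __repr__(self):
            return 'Metres(%r)' % self.value

    print Metres(2) + 3                    # Metres(5)
    print 3 + Metres(2)                    # Metres(5), via __radd__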
[2] For instance, to evaluate\n the expression ``x - y``, where *y* is an instance of a class that\n has an ``__rsub__()`` method, ``y.__rsub__(x)`` is called if\n ``x.__sub__(y)`` returns *NotImplemented*.\n\n Note that ternary ``pow()`` will not try calling ``__rpow__()``\n (the coercion rules would become too complicated).\n\n Note: If the right operand\'s type is a subclass of the left operand\'s\n type and that subclass provides the reflected method for the\n operation, this method will be called before the left operand\'s\n non-reflected method. This behavior allows subclasses to\n override their ancestors\' operations.\n\nobject.__iadd__(self, other)\nobject.__isub__(self, other)\nobject.__imul__(self, other)\nobject.__idiv__(self, other)\nobject.__itruediv__(self, other)\nobject.__ifloordiv__(self, other)\nobject.__imod__(self, other)\nobject.__ipow__(self, other[, modulo])\nobject.__ilshift__(self, other)\nobject.__irshift__(self, other)\nobject.__iand__(self, other)\nobject.__ixor__(self, other)\nobject.__ior__(self, other)\n\n These methods are called to implement the augmented arithmetic\n assignments (``+=``, ``-=``, ``*=``, ``/=``, ``//=``, ``%=``,\n ``**=``, ``<<=``, ``>>=``, ``&=``, ``^=``, ``|=``). These methods\n should attempt to do the operation in-place (modifying *self*) and\n return the result (which could be, but does not have to be,\n *self*). If a specific method is not defined, the augmented\n assignment falls back to the normal methods. For instance, to\n execute the statement ``x += y``, where *x* is an instance of a\n class that has an ``__iadd__()`` method, ``x.__iadd__(y)`` is\n called. If *x* is an instance of a class that does not define a\n ``__iadd__()`` method, ``x.__add__(y)`` and ``y.__radd__(x)`` are\n considered, as with the evaluation of ``x + y``.\n\nobject.__neg__(self)\nobject.__pos__(self)\nobject.__abs__(self)\nobject.__invert__(self)\n\n Called to implement the unary arithmetic operations (``-``, ``+``,\n ``abs()`` and ``~``).\n\nobject.__complex__(self)\nobject.__int__(self)\nobject.__long__(self)\nobject.__float__(self)\n\n Called to implement the built-in functions ``complex()``,\n ``int()``, ``long()``, and ``float()``. Should return a value of\n the appropriate type.\n\nobject.__oct__(self)\nobject.__hex__(self)\n\n Called to implement the built-in functions ``oct()`` and ``hex()``.\n Should return a string value.\n\nobject.__index__(self)\n\n Called to implement ``operator.index()``. Also called whenever\n Python needs an integer object (such as in slicing). Must return\n an integer (int or long).\n\n New in version 2.5.\n\nobject.__coerce__(self, other)\n\n Called to implement "mixed-mode" numeric arithmetic. Should either\n return a 2-tuple containing *self* and *other* converted to a\n common numeric type, or ``None`` if conversion is impossible. When\n the common type would be the type of ``other``, it is sufficient to\n return ``None``, since the interpreter will also ask the other\n object to attempt a coercion (but sometimes, if the implementation\n of the other type cannot be changed, it is useful to do the\n conversion to the other type here). A return value of\n ``NotImplemented`` is equivalent to returning ``None``.\n', - 'objects': u'\nObjects, values and types\n*************************\n\n*Objects* are Python\'s abstraction for data. All data in a Python\nprogram is represented by objects or by relations between objects. 
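Augmented assignment can be sketched with an illustrative class that performs the operation in place:

    class Accumulator(object):
        def __init__(self):
            self.items = []
        def __iadd__(self, other):
            self.items.append(other)       # modify self in place ...
            return self                    # ... and return the result

    acc = Accumulator()
    acc += 'a'
    acc += 'b'
    print acc.items                        # ['a', 'b']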
(In\na sense, and in conformance to Von Neumann\'s model of a "stored\nprogram computer," code is also represented by objects.)\n\nEvery object has an identity, a type and a value. An object\'s\n*identity* never changes once it has been created; you may think of it\nas the object\'s address in memory. The \'``is``\' operator compares the\nidentity of two objects; the ``id()`` function returns an integer\nrepresenting its identity (currently implemented as its address). An\nobject\'s *type* is also unchangeable. [1] An object\'s type determines\nthe operations that the object supports (e.g., "does it have a\nlength?") and also defines the possible values for objects of that\ntype. The ``type()`` function returns an object\'s type (which is an\nobject itself). The *value* of some objects can change. Objects\nwhose value can change are said to be *mutable*; objects whose value\nis unchangeable once they are created are called *immutable*. (The\nvalue of an immutable container object that contains a reference to a\nmutable object can change when the latter\'s value is changed; however\nthe container is still considered immutable, because the collection of\nobjects it contains cannot be changed. So, immutability is not\nstrictly the same as having an unchangeable value, it is more subtle.)\nAn object\'s mutability is determined by its type; for instance,\nnumbers, strings and tuples are immutable, while dictionaries and\nlists are mutable.\n\nObjects are never explicitly destroyed; however, when they become\nunreachable they may be garbage-collected. An implementation is\nallowed to postpone garbage collection or omit it altogether --- it is\na matter of implementation quality how garbage collection is\nimplemented, as long as no objects are collected that are still\nreachable.\n\n**CPython implementation detail:** CPython currently uses a reference-\ncounting scheme with (optional) delayed detection of cyclically linked\ngarbage, which collects most objects as soon as they become\nunreachable, but is not guaranteed to collect garbage containing\ncircular references. See the documentation of the ``gc`` module for\ninformation on controlling the collection of cyclic garbage. Other\nimplementations act differently and CPython may change.\n\nNote that the use of the implementation\'s tracing or debugging\nfacilities may keep objects alive that would normally be collectable.\nAlso note that catching an exception with a \'``try``...``except``\'\nstatement may keep objects alive.\n\nSome objects contain references to "external" resources such as open\nfiles or windows. It is understood that these resources are freed\nwhen the object is garbage-collected, but since garbage collection is\nnot guaranteed to happen, such objects also provide an explicit way to\nrelease the external resource, usually a ``close()`` method. Programs\nare strongly recommended to explicitly close such objects. The\n\'``try``...``finally``\' statement provides a convenient way to do\nthis.\n\nSome objects contain references to other objects; these are called\n*containers*. Examples of containers are tuples, lists and\ndictionaries. The references are part of a container\'s value. In\nmost cases, when we talk about the value of a container, we imply the\nvalues, not the identities of the contained objects; however, when we\ntalk about the mutability of a container, only the identities of the\nimmediately contained objects are implied. 
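For instance, the value of an immutable container can still change through a mutable element:

    t = ([1, 2], 'fixed')     # a tuple (immutable) holding a list (mutable)
    t[0].append(3)
    print t                   # ([1, 2, 3], 'fixed')
    # t[0] = []               # would raise TypeError: the tuple itself cannot change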
So, if an immutable\ncontainer (like a tuple) contains a reference to a mutable object, its\nvalue changes if that mutable object is changed.\n\nTypes affect almost all aspects of object behavior. Even the\nimportance of object identity is affected in some sense: for immutable\ntypes, operations that compute new values may actually return a\nreference to any existing object with the same type and value, while\nfor mutable objects this is not allowed. E.g., after ``a = 1; b =\n1``, ``a`` and ``b`` may or may not refer to the same object with the\nvalue one, depending on the implementation, but after ``c = []; d =\n[]``, ``c`` and ``d`` are guaranteed to refer to two different,\nunique, newly created empty lists. (Note that ``c = d = []`` assigns\nthe same object to both ``c`` and ``d``.)\n', - 'operator-summary': u'\nSummary\n*******\n\nThe following table summarizes the operator precedences in Python,\nfrom lowest precedence (least binding) to highest precedence (most\nbinding). Operators in the same box have the same precedence. Unless\nthe syntax is explicitly given, operators are binary. Operators in\nthe same box group left to right (except for comparisons, including\ntests, which all have the same precedence and chain from left to right\n--- see section *Comparisons* --- and exponentiation, which groups\nfrom right to left).\n\n+-------------------------------------------------+---------------------------------------+\n| Operator | Description |\n+=================================================+=======================================+\n| ``lambda`` | Lambda expression |\n+-------------------------------------------------+---------------------------------------+\n| ``if`` -- ``else`` | Conditional expression |\n+-------------------------------------------------+---------------------------------------+\n| ``or`` | Boolean OR |\n+-------------------------------------------------+---------------------------------------+\n| ``and`` | Boolean AND |\n+-------------------------------------------------+---------------------------------------+\n| ``not`` *x* | Boolean NOT |\n+-------------------------------------------------+---------------------------------------+\n| ``in``, ``not`` ``in``, ``is``, ``is not``, | Comparisons, including membership |\n| ``<``, ``<=``, ``>``, ``>=``, ``<>``, ``!=``, | tests and identity tests, |\n| ``==`` | |\n+-------------------------------------------------+---------------------------------------+\n| ``|`` | Bitwise OR |\n+-------------------------------------------------+---------------------------------------+\n| ``^`` | Bitwise XOR |\n+-------------------------------------------------+---------------------------------------+\n| ``&`` | Bitwise AND |\n+-------------------------------------------------+---------------------------------------+\n| ``<<``, ``>>`` | Shifts |\n+-------------------------------------------------+---------------------------------------+\n| ``+``, ``-`` | Addition and subtraction |\n+-------------------------------------------------+---------------------------------------+\n| ``*``, ``/``, ``//``, ``%`` | Multiplication, division, remainder |\n+-------------------------------------------------+---------------------------------------+\n| ``+x``, ``-x``, ``~x`` | Positive, negative, bitwise NOT |\n+-------------------------------------------------+---------------------------------------+\n| ``**`` | Exponentiation [8] |\n+-------------------------------------------------+---------------------------------------+\n| ``x[index]``, 
``x[index:index]``, | Subscription, slicing, call, |\n| ``x(arguments...)``, ``x.attribute`` | attribute reference |\n+-------------------------------------------------+---------------------------------------+\n| ``(expressions...)``, ``[expressions...]``, | Binding or tuple display, list |\n| ``{key:datum...}``, ```expressions...``` | display, dictionary display, string |\n| | conversion |\n+-------------------------------------------------+---------------------------------------+\n\n-[ Footnotes ]-\n\n[1] In Python 2.3 and later releases, a list comprehension "leaks" the\n control variables of each ``for`` it contains into the containing\n scope. However, this behavior is deprecated, and relying on it\n will not work in Python 3.0\n\n[2] While ``abs(x%y) < abs(y)`` is true mathematically, for floats it\n may not be true numerically due to roundoff. For example, and\n assuming a platform on which a Python float is an IEEE 754 double-\n precision number, in order that ``-1e-100 % 1e100`` have the same\n sign as ``1e100``, the computed result is ``-1e-100 + 1e100``,\n which is numerically exactly equal to ``1e100``. Function\n ``fmod()`` in the ``math`` module returns a result whose sign\n matches the sign of the first argument instead, and so returns\n ``-1e-100`` in this case. Which approach is more appropriate\n depends on the application.\n\n[3] If x is very close to an exact integer multiple of y, it\'s\n possible for ``floor(x/y)`` to be one larger than ``(x-x%y)/y``\n due to rounding. In such cases, Python returns the latter result,\n in order to preserve that ``divmod(x,y)[0] * y + x % y`` be very\n close to ``x``.\n\n[4] While comparisons between unicode strings make sense at the byte\n level, they may be counter-intuitive to users. For example, the\n strings ``u"\\u00C7"`` and ``u"\\u0043\\u0327"`` compare differently,\n even though they both represent the same unicode character (LATIN\n CAPITAL LETTER C WITH CEDILLA). To compare strings in a human\n recognizable way, compare using ``unicodedata.normalize()``.\n\n[5] The implementation computes this efficiently, without constructing\n lists or sorting.\n\n[6] Earlier versions of Python used lexicographic comparison of the\n sorted (key, value) lists, but this was very expensive for the\n common case of comparing for equality. An even earlier version of\n Python compared dictionaries by identity only, but this caused\n surprises because people expected to be able to test a dictionary\n for emptiness by comparing it to ``{}``.\n\n[7] Due to automatic garbage-collection, free lists, and the dynamic\n nature of descriptors, you may notice seemingly unusual behaviour\n in certain uses of the ``is`` operator, like those involving\n comparisons between instance methods, or constants. Check their\n documentation for more info.\n\n[8] The power operator ``**`` binds less tightly than an arithmetic or\n bitwise unary operator on its right, that is, ``2**-1`` is\n ``0.5``.\n', + 'objects': u'\nObjects, values and types\n*************************\n\n*Objects* are Python\'s abstraction for data. All data in a Python\nprogram is represented by objects or by relations between objects. (In\na sense, and in conformance to Von Neumann\'s model of a "stored\nprogram computer," code is also represented by objects.)\n\nEvery object has an identity, a type and a value. An object\'s\n*identity* never changes once it has been created; you may think of it\nas the object\'s address in memory. 
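A short sketch of identity versus equality:

    a = [1, 2]
    b = a                           # a second name for the same object
    c = [1, 2]                      # an equal but distinct object
    print a is b, id(a) == id(b)    # True True
    print a == c, a is c            # True False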
The \'``is``\' operator compares the\nidentity of two objects; the ``id()`` function returns an integer\nrepresenting its identity (currently implemented as its address). An\nobject\'s *type* is also unchangeable. [1] An object\'s type determines\nthe operations that the object supports (e.g., "does it have a\nlength?") and also defines the possible values for objects of that\ntype. The ``type()`` function returns an object\'s type (which is an\nobject itself). The *value* of some objects can change. Objects\nwhose value can change are said to be *mutable*; objects whose value\nis unchangeable once they are created are called *immutable*. (The\nvalue of an immutable container object that contains a reference to a\nmutable object can change when the latter\'s value is changed; however\nthe container is still considered immutable, because the collection of\nobjects it contains cannot be changed. So, immutability is not\nstrictly the same as having an unchangeable value, it is more subtle.)\nAn object\'s mutability is determined by its type; for instance,\nnumbers, strings and tuples are immutable, while dictionaries and\nlists are mutable.\n\nObjects are never explicitly destroyed; however, when they become\nunreachable they may be garbage-collected. An implementation is\nallowed to postpone garbage collection or omit it altogether --- it is\na matter of implementation quality how garbage collection is\nimplemented, as long as no objects are collected that are still\nreachable.\n\n**CPython implementation detail:** CPython currently uses a reference-\ncounting scheme with (optional) delayed detection of cyclically linked\ngarbage, which collects most objects as soon as they become\nunreachable, but is not guaranteed to collect garbage containing\ncircular references. See the documentation of the ``gc`` module for\ninformation on controlling the collection of cyclic garbage. Other\nimplementations act differently and CPython may change. Do not depend\non immediate finalization of objects when they become unreachable (ex:\nalways close files).\n\nNote that the use of the implementation\'s tracing or debugging\nfacilities may keep objects alive that would normally be collectable.\nAlso note that catching an exception with a \'``try``...``except``\'\nstatement may keep objects alive.\n\nSome objects contain references to "external" resources such as open\nfiles or windows. It is understood that these resources are freed\nwhen the object is garbage-collected, but since garbage collection is\nnot guaranteed to happen, such objects also provide an explicit way to\nrelease the external resource, usually a ``close()`` method. Programs\nare strongly recommended to explicitly close such objects. The\n\'``try``...``finally``\' statement provides a convenient way to do\nthis.\n\nSome objects contain references to other objects; these are called\n*containers*. Examples of containers are tuples, lists and\ndictionaries. The references are part of a container\'s value. In\nmost cases, when we talk about the value of a container, we imply the\nvalues, not the identities of the contained objects; however, when we\ntalk about the mutability of a container, only the identities of the\nimmediately contained objects are implied. So, if an immutable\ncontainer (like a tuple) contains a reference to a mutable object, its\nvalue changes if that mutable object is changed.\n\nTypes affect almost all aspects of object behavior. 
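For example (the file name is illustrative), releasing an external resource explicitly rather than waiting for garbage collection:

    f = open('example.txt', 'w')
    try:
        f.write('do not rely on finalization to flush this\n')
    finally:
        f.close()              # the resource is released deterministically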
Even the\nimportance of object identity is affected in some sense: for immutable\ntypes, operations that compute new values may actually return a\nreference to any existing object with the same type and value, while\nfor mutable objects this is not allowed. E.g., after ``a = 1; b =\n1``, ``a`` and ``b`` may or may not refer to the same object with the\nvalue one, depending on the implementation, but after ``c = []; d =\n[]``, ``c`` and ``d`` are guaranteed to refer to two different,\nunique, newly created empty lists. (Note that ``c = d = []`` assigns\nthe same object to both ``c`` and ``d``.)\n', + 'operator-summary': u'\nSummary\n*******\n\nThe following table summarizes the operator precedences in Python,\nfrom lowest precedence (least binding) to highest precedence (most\nbinding). Operators in the same box have the same precedence. Unless\nthe syntax is explicitly given, operators are binary. Operators in\nthe same box group left to right (except for comparisons, including\ntests, which all have the same precedence and chain from left to right\n--- see section *Comparisons* --- and exponentiation, which groups\nfrom right to left).\n\n+-------------------------------------------------+---------------------------------------+\n| Operator | Description |\n+=================================================+=======================================+\n| ``lambda`` | Lambda expression |\n+-------------------------------------------------+---------------------------------------+\n| ``if`` -- ``else`` | Conditional expression |\n+-------------------------------------------------+---------------------------------------+\n| ``or`` | Boolean OR |\n+-------------------------------------------------+---------------------------------------+\n| ``and`` | Boolean AND |\n+-------------------------------------------------+---------------------------------------+\n| ``not`` *x* | Boolean NOT |\n+-------------------------------------------------+---------------------------------------+\n| ``in``, ``not`` ``in``, ``is``, ``is not``, | Comparisons, including membership |\n| ``<``, ``<=``, ``>``, ``>=``, ``<>``, ``!=``, | tests and identity tests, |\n| ``==`` | |\n+-------------------------------------------------+---------------------------------------+\n| ``|`` | Bitwise OR |\n+-------------------------------------------------+---------------------------------------+\n| ``^`` | Bitwise XOR |\n+-------------------------------------------------+---------------------------------------+\n| ``&`` | Bitwise AND |\n+-------------------------------------------------+---------------------------------------+\n| ``<<``, ``>>`` | Shifts |\n+-------------------------------------------------+---------------------------------------+\n| ``+``, ``-`` | Addition and subtraction |\n+-------------------------------------------------+---------------------------------------+\n| ``*``, ``/``, ``//``, ``%`` | Multiplication, division, remainder |\n| | [8] |\n+-------------------------------------------------+---------------------------------------+\n| ``+x``, ``-x``, ``~x`` | Positive, negative, bitwise NOT |\n+-------------------------------------------------+---------------------------------------+\n| ``**`` | Exponentiation [9] |\n+-------------------------------------------------+---------------------------------------+\n| ``x[index]``, ``x[index:index]``, | Subscription, slicing, call, |\n| ``x(arguments...)``, ``x.attribute`` | attribute reference 
|\n+-------------------------------------------------+---------------------------------------+\n| ``(expressions...)``, ``[expressions...]``, | Binding or tuple display, list |\n| ``{key:datum...}``, ```expressions...``` | display, dictionary display, string |\n| | conversion |\n+-------------------------------------------------+---------------------------------------+\n\n-[ Footnotes ]-\n\n[1] In Python 2.3 and later releases, a list comprehension "leaks" the\n control variables of each ``for`` it contains into the containing\n scope. However, this behavior is deprecated, and relying on it\n will not work in Python 3.0\n\n[2] While ``abs(x%y) < abs(y)`` is true mathematically, for floats it\n may not be true numerically due to roundoff. For example, and\n assuming a platform on which a Python float is an IEEE 754 double-\n precision number, in order that ``-1e-100 % 1e100`` have the same\n sign as ``1e100``, the computed result is ``-1e-100 + 1e100``,\n which is numerically exactly equal to ``1e100``. The function\n ``math.fmod()`` returns a result whose sign matches the sign of\n the first argument instead, and so returns ``-1e-100`` in this\n case. Which approach is more appropriate depends on the\n application.\n\n[3] If x is very close to an exact integer multiple of y, it\'s\n possible for ``floor(x/y)`` to be one larger than ``(x-x%y)/y``\n due to rounding. In such cases, Python returns the latter result,\n in order to preserve that ``divmod(x,y)[0] * y + x % y`` be very\n close to ``x``.\n\n[4] While comparisons between unicode strings make sense at the byte\n level, they may be counter-intuitive to users. For example, the\n strings ``u"\\u00C7"`` and ``u"\\u0043\\u0327"`` compare differently,\n even though they both represent the same unicode character (LATIN\n CAPITAL LETTER C WITH CEDILLA). To compare strings in a human\n recognizable way, compare using ``unicodedata.normalize()``.\n\n[5] The implementation computes this efficiently, without constructing\n lists or sorting.\n\n[6] Earlier versions of Python used lexicographic comparison of the\n sorted (key, value) lists, but this was very expensive for the\n common case of comparing for equality. An even earlier version of\n Python compared dictionaries by identity only, but this caused\n surprises because people expected to be able to test a dictionary\n for emptiness by comparing it to ``{}``.\n\n[7] Due to automatic garbage-collection, free lists, and the dynamic\n nature of descriptors, you may notice seemingly unusual behaviour\n in certain uses of the ``is`` operator, like those involving\n comparisons between instance methods, or constants. Check their\n documentation for more info.\n\n[8] The ``%`` operator is also used for string formatting; the same\n precedence applies.\n\n[9] The power operator ``**`` binds less tightly than an arithmetic or\n bitwise unary operator on its right, that is, ``2**-1`` is\n ``0.5``.\n', 'pass': u'\nThe ``pass`` statement\n**********************\n\n pass_stmt ::= "pass"\n\n``pass`` is a null operation --- when it is executed, nothing happens.\nIt is useful as a placeholder when a statement is required\nsyntactically, but no code needs to be executed, for example:\n\n def f(arg): pass # a function that does nothing (yet)\n\n class C: pass # a class with no methods (yet)\n', 'power': u'\nThe power operator\n******************\n\nThe power operator binds more tightly than unary operators on its\nleft; it binds less tightly than unary operators on its right. 
The\nsyntax is:\n\n power ::= primary ["**" u_expr]\n\nThus, in an unparenthesized sequence of power and unary operators, the\noperators are evaluated from right to left (this does not constrain\nthe evaluation order for the operands): ``-1**2`` results in ``-1``.\n\nThe power operator has the same semantics as the built-in ``pow()``\nfunction, when called with two arguments: it yields its left argument\nraised to the power of its right argument. The numeric arguments are\nfirst converted to a common type. The result type is that of the\narguments after coercion.\n\nWith mixed operand types, the coercion rules for binary arithmetic\noperators apply. For int and long int operands, the result has the\nsame type as the operands (after coercion) unless the second argument\nis negative; in that case, all arguments are converted to float and a\nfloat result is delivered. For example, ``10**2`` returns ``100``, but\n``10**-2`` returns ``0.01``. (This last feature was added in Python\n2.2. In Python 2.1 and before, if both arguments were of integer types\nand the second argument was negative, an exception was raised).\n\nRaising ``0.0`` to a negative power results in a\n``ZeroDivisionError``. Raising a negative number to a fractional power\nresults in a ``ValueError``.\n', 'print': u'\nThe ``print`` statement\n***********************\n\n print_stmt ::= "print" ([expression ("," expression)* [","]]\n | ">>" expression [("," expression)+ [","]])\n\n``print`` evaluates each expression in turn and writes the resulting\nobject to standard output (see below). If an object is not a string,\nit is first converted to a string using the rules for string\nconversions. The (resulting or original) string is then written. A\nspace is written before each object is (converted and) written, unless\nthe output system believes it is positioned at the beginning of a\nline. This is the case (1) when no characters have yet been written\nto standard output, (2) when the last character written to standard\noutput is a whitespace character except ``\' \'``, or (3) when the last\nwrite operation on standard output was not a ``print`` statement. (In\nsome cases it may be functional to write an empty string to standard\noutput for this reason.)\n\nNote: Objects which act like file objects but which are not the built-in\n file objects often do not properly emulate this aspect of the file\n object\'s behavior, so it is best not to rely on this.\n\nA ``\'\\n\'`` character is written at the end, unless the ``print``\nstatement ends with a comma. This is the only action if the statement\ncontains just the keyword ``print``.\n\nStandard output is defined as the file object named ``stdout`` in the\nbuilt-in module ``sys``. If no such object exists, or if it does not\nhave a ``write()`` method, a ``RuntimeError`` exception is raised.\n\n``print`` also has an extended form, defined by the second portion of\nthe syntax described above. This form is sometimes referred to as\n"``print`` chevron." In this form, the first expression after the\n``>>`` must evaluate to a "file-like" object, specifically an object\nthat has a ``write()`` method as described above. With this extended\nform, the subsequent expressions are printed to this file object. 
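A hedged illustration of the extended ("chevron") form just described; the ``log`` name and file name are invented for this sketch:

   import sys

   print >>sys.stderr, "diagnostics go to standard error"

   log = open("events.log", "a")   # any object with a write() method will do
   print >>log, "processed", 42    # items separated by spaces, newline appended
   print >>None, "None falls back to sys.stdout"
   log.close()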
If\nthe first expression evaluates to ``None``, then ``sys.stdout`` is\nused as the file for output.\n', @@ -63,21 +63,21 @@ 'shifting': u'\nShifting operations\n*******************\n\nThe shifting operations have lower priority than the arithmetic\noperations:\n\n shift_expr ::= a_expr | shift_expr ( "<<" | ">>" ) a_expr\n\nThese operators accept plain or long integers as arguments. The\narguments are converted to a common type. They shift the first\nargument to the left or right by the number of bits given by the\nsecond argument.\n\nA right shift by *n* bits is defined as division by ``pow(2, n)``. A\nleft shift by *n* bits is defined as multiplication with ``pow(2,\nn)``. Negative shift counts raise a ``ValueError`` exception.\n\nNote: In the current implementation, the right-hand operand is required to\n be at most ``sys.maxsize``. If the right-hand operand is larger\n than ``sys.maxsize`` an ``OverflowError`` exception is raised.\n', 'slicings': u'\nSlicings\n********\n\nA slicing selects a range of items in a sequence object (e.g., a\nstring, tuple or list). Slicings may be used as expressions or as\ntargets in assignment or ``del`` statements. The syntax for a\nslicing:\n\n slicing ::= simple_slicing | extended_slicing\n simple_slicing ::= primary "[" short_slice "]"\n extended_slicing ::= primary "[" slice_list "]"\n slice_list ::= slice_item ("," slice_item)* [","]\n slice_item ::= expression | proper_slice | ellipsis\n proper_slice ::= short_slice | long_slice\n short_slice ::= [lower_bound] ":" [upper_bound]\n long_slice ::= short_slice ":" [stride]\n lower_bound ::= expression\n upper_bound ::= expression\n stride ::= expression\n ellipsis ::= "..."\n\nThere is ambiguity in the formal syntax here: anything that looks like\nan expression list also looks like a slice list, so any subscription\ncan be interpreted as a slicing. Rather than further complicating the\nsyntax, this is disambiguated by defining that in this case the\ninterpretation as a subscription takes priority over the\ninterpretation as a slicing (this is the case if the slice list\ncontains no proper slice nor ellipses). Similarly, when the slice\nlist has exactly one short slice and no trailing comma, the\ninterpretation as a simple slicing takes priority over that as an\nextended slicing.\n\nThe semantics for a simple slicing are as follows. The primary must\nevaluate to a sequence object. The lower and upper bound expressions,\nif present, must evaluate to plain integers; defaults are zero and the\n``sys.maxint``, respectively. If either bound is negative, the\nsequence\'s length is added to it. The slicing now selects all items\nwith index *k* such that ``i <= k < j`` where *i* and *j* are the\nspecified lower and upper bounds. This may be an empty sequence. It\nis not an error if *i* or *j* lie outside the range of valid indexes\n(such items don\'t exist so they aren\'t selected).\n\nThe semantics for an extended slicing are as follows. The primary\nmust evaluate to a mapping object, and it is indexed with a key that\nis constructed from the slice list, as follows. If the slice list\ncontains at least one comma, the key is a tuple containing the\nconversion of the slice items; otherwise, the conversion of the lone\nslice item is the key. The conversion of a slice item that is an\nexpression is that expression. The conversion of an ellipsis slice\nitem is the built-in ``Ellipsis`` object. 
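The conversion of a slice list into a key can be observed with a throwaway class whose ``__getitem__()`` simply returns its argument (``Show`` is an invented name for this sketch):

   >>> class Show(object):
   ...     def __getitem__(self, key):
   ...         return key
   ...
   >>> s = Show()
   >>> s[1:2]                 # lone short slice: a slice object
   slice(1, 2, None)
   >>> s[1:2:3]               # long slice: start, stop, stride
   slice(1, 2, 3)
   >>> s[1:2, ..., 'a']       # slice list with commas: key is a tuple
   (slice(1, 2, None), Ellipsis, 'a')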
The conversion of a proper\nslice is a slice object (see section *The standard type hierarchy*)\nwhose ``start``, ``stop`` and ``step`` attributes are the values of\nthe expressions given as lower bound, upper bound and stride,\nrespectively, substituting ``None`` for missing expressions.\n', 'specialattrs': u"\nSpecial Attributes\n******************\n\nThe implementation adds a few special read-only attributes to several\nobject types, where they are relevant. Some of these are not reported\nby the ``dir()`` built-in function.\n\nobject.__dict__\n\n A dictionary or other mapping object used to store an object's\n (writable) attributes.\n\nobject.__methods__\n\n Deprecated since version 2.2: Use the built-in function ``dir()``\n to get a list of an object's attributes. This attribute is no\n longer available.\n\nobject.__members__\n\n Deprecated since version 2.2: Use the built-in function ``dir()``\n to get a list of an object's attributes. This attribute is no\n longer available.\n\ninstance.__class__\n\n The class to which a class instance belongs.\n\nclass.__bases__\n\n The tuple of base classes of a class object.\n\nclass.__name__\n\n The name of the class or type.\n\nThe following attributes are only supported by *new-style class*es.\n\nclass.__mro__\n\n This attribute is a tuple of classes that are considered when\n looking for base classes during method resolution.\n\nclass.mro()\n\n This method can be overridden by a metaclass to customize the\n method resolution order for its instances. It is called at class\n instantiation, and its result is stored in ``__mro__``.\n\nclass.__subclasses__()\n\n Each new-style class keeps a list of weak references to its\n immediate subclasses. This method returns a list of all those\n references still alive. Example:\n\n >>> int.__subclasses__()\n []\n\n-[ Footnotes ]-\n\n[1] Additional information on these special methods may be found in\n the Python Reference Manual (*Basic customization*).\n\n[2] As a consequence, the list ``[1, 2]`` is considered equal to\n ``[1.0, 2.0]``, and similarly for tuples.\n\n[3] They must have since the parser can't tell the type of the\n operands.\n\n[4] To format only a tuple you should therefore provide a singleton\n tuple whose only element is the tuple to be formatted.\n\n[5] The advantage of leaving the newline on is that returning an empty\n string is then an unambiguous EOF indication. It is also possible\n (in cases where it might matter, for example, if you want to make\n an exact copy of a file while scanning its lines) to tell whether\n the last line of a file ended in a newline or not (yes this\n happens!).\n", - 'specialnames': u'\nSpecial method names\n********************\n\nA class can implement certain operations that are invoked by special\nsyntax (such as arithmetic operations or subscripting and slicing) by\ndefining methods with special names. This is Python\'s approach to\n*operator overloading*, allowing classes to define their own behavior\nwith respect to language operators. For instance, if a class defines\na method named ``__getitem__()``, and ``x`` is an instance of this\nclass, then ``x[i]`` is roughly equivalent to ``x.__getitem__(i)`` for\nold-style classes and ``type(x).__getitem__(x, i)`` for new-style\nclasses. 
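For example, the rough equivalence just stated can be checked directly (``Seq`` is an arbitrary new-style class invented for this sketch):

   >>> class Seq(object):
   ...     def __getitem__(self, i):
   ...         return i * 10
   ...
   >>> x = Seq()
   >>> x[3]                        # implicit invocation via subscription
   30
   >>> type(x).__getitem__(x, 3)   # what the implicit form roughly does
   30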
Except where mentioned, attempts to execute an operation\nraise an exception when no appropriate method is defined (typically\n``AttributeError`` or ``TypeError``).\n\nWhen implementing a class that emulates any built-in type, it is\nimportant that the emulation only be implemented to the degree that it\nmakes sense for the object being modelled. For example, some\nsequences may work well with retrieval of individual elements, but\nextracting a slice may not make sense. (One example of this is the\n``NodeList`` interface in the W3C\'s Document Object Model.)\n\n\nBasic customization\n===================\n\nobject.__new__(cls[, ...])\n\n Called to create a new instance of class *cls*. ``__new__()`` is a\n static method (special-cased so you need not declare it as such)\n that takes the class of which an instance was requested as its\n first argument. The remaining arguments are those passed to the\n object constructor expression (the call to the class). The return\n value of ``__new__()`` should be the new object instance (usually\n an instance of *cls*).\n\n Typical implementations create a new instance of the class by\n invoking the superclass\'s ``__new__()`` method using\n ``super(currentclass, cls).__new__(cls[, ...])`` with appropriate\n arguments and then modifying the newly-created instance as\n necessary before returning it.\n\n If ``__new__()`` returns an instance of *cls*, then the new\n instance\'s ``__init__()`` method will be invoked like\n ``__init__(self[, ...])``, where *self* is the new instance and the\n remaining arguments are the same as were passed to ``__new__()``.\n\n If ``__new__()`` does not return an instance of *cls*, then the new\n instance\'s ``__init__()`` method will not be invoked.\n\n ``__new__()`` is intended mainly to allow subclasses of immutable\n types (like int, str, or tuple) to customize instance creation. It\n is also commonly overridden in custom metaclasses in order to\n customize class creation.\n\nobject.__init__(self[, ...])\n\n Called when the instance is created. The arguments are those\n passed to the class constructor expression. If a base class has an\n ``__init__()`` method, the derived class\'s ``__init__()`` method,\n if any, must explicitly call it to ensure proper initialization of\n the base class part of the instance; for example:\n ``BaseClass.__init__(self, [args...])``. As a special constraint\n on constructors, no value may be returned; doing so will cause a\n ``TypeError`` to be raised at runtime.\n\nobject.__del__(self)\n\n Called when the instance is about to be destroyed. This is also\n called a destructor. If a base class has a ``__del__()`` method,\n the derived class\'s ``__del__()`` method, if any, must explicitly\n call it to ensure proper deletion of the base class part of the\n instance. Note that it is possible (though not recommended!) for\n the ``__del__()`` method to postpone destruction of the instance by\n creating a new reference to it. It may then be called at a later\n time when this new reference is deleted. It is not guaranteed that\n ``__del__()`` methods are called for objects that still exist when\n the interpreter exits.\n\n Note: ``del x`` doesn\'t directly call ``x.__del__()`` --- the former\n decrements the reference count for ``x`` by one, and the latter\n is only called when ``x``\'s reference count reaches zero. 
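To recap the creation protocol described above with a hedged sketch: for a subclass of an immutable type the interesting work has to happen in ``__new__()``, because by the time ``__init__()`` runs the value can no longer be changed (``Celsius`` is an invented example class):

   class Celsius(float):
       """A float subclass constructed from a Fahrenheit reading."""
       def __new__(cls, fahrenheit):
           # the float value is fixed here; __init__ would be too late
           return super(Celsius, cls).__new__(cls, (fahrenheit - 32) / 1.8)

   >>> Celsius(212.0)
   100.0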
Some\n common situations that may prevent the reference count of an\n object from going to zero include: circular references between\n objects (e.g., a doubly-linked list or a tree data structure with\n parent and child pointers); a reference to the object on the\n stack frame of a function that caught an exception (the traceback\n stored in ``sys.exc_traceback`` keeps the stack frame alive); or\n a reference to the object on the stack frame that raised an\n unhandled exception in interactive mode (the traceback stored in\n ``sys.last_traceback`` keeps the stack frame alive). The first\n situation can only be remedied by explicitly breaking the cycles;\n the latter two situations can be resolved by storing ``None`` in\n ``sys.exc_traceback`` or ``sys.last_traceback``. Circular\n references which are garbage are detected when the option cycle\n detector is enabled (it\'s on by default), but can only be cleaned\n up if there are no Python-level ``__del__()`` methods involved.\n Refer to the documentation for the ``gc`` module for more\n information about how ``__del__()`` methods are handled by the\n cycle detector, particularly the description of the ``garbage``\n value.\n\n Warning: Due to the precarious circumstances under which ``__del__()``\n methods are invoked, exceptions that occur during their execution\n are ignored, and a warning is printed to ``sys.stderr`` instead.\n Also, when ``__del__()`` is invoked in response to a module being\n deleted (e.g., when execution of the program is done), other\n globals referenced by the ``__del__()`` method may already have\n been deleted or in the process of being torn down (e.g. the\n import machinery shutting down). For this reason, ``__del__()``\n methods should do the absolute minimum needed to maintain\n external invariants. Starting with version 1.5, Python\n guarantees that globals whose name begins with a single\n underscore are deleted from their module before other globals are\n deleted; if no other references to such globals exist, this may\n help in assuring that imported modules are still available at the\n time when the ``__del__()`` method is called.\n\nobject.__repr__(self)\n\n Called by the ``repr()`` built-in function and by string\n conversions (reverse quotes) to compute the "official" string\n representation of an object. If at all possible, this should look\n like a valid Python expression that could be used to recreate an\n object with the same value (given an appropriate environment). If\n this is not possible, a string of the form ``<...some useful\n description...>`` should be returned. The return value must be a\n string object. If a class defines ``__repr__()`` but not\n ``__str__()``, then ``__repr__()`` is also used when an "informal"\n string representation of instances of that class is required.\n\n This is typically used for debugging, so it is important that the\n representation is information-rich and unambiguous.\n\nobject.__str__(self)\n\n Called by the ``str()`` built-in function and by the ``print``\n statement to compute the "informal" string representation of an\n object. This differs from ``__repr__()`` in that it does not have\n to be a valid Python expression: a more convenient or concise\n representation may be used instead. 
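A common pattern following the guidance above, with ``Point`` as an invented example class:

   class Point(object):
       def __init__(self, x, y):
           self.x, self.y = x, y
       def __repr__(self):
           # unambiguous; looks like an expression that recreates the object
           return "Point(%r, %r)" % (self.x, self.y)
       def __str__(self):
           # informal, more readable form used by print and str()
           return "(%s, %s)" % (self.x, self.y)

   >>> p = Point(1, 2)
   >>> repr(p)
   'Point(1, 2)'
   >>> print p
   (1, 2)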
The return value must be a\n string object.\n\nobject.__lt__(self, other)\nobject.__le__(self, other)\nobject.__eq__(self, other)\nobject.__ne__(self, other)\nobject.__gt__(self, other)\nobject.__ge__(self, other)\n\n New in version 2.1.\n\n These are the so-called "rich comparison" methods, and are called\n for comparison operators in preference to ``__cmp__()`` below. The\n correspondence between operator symbols and method names is as\n follows: ``xy`` call ``x.__ne__(y)``, ``x>y`` calls ``x.__gt__(y)``, and\n ``x>=y`` calls ``x.__ge__(y)``.\n\n A rich comparison method may return the singleton\n ``NotImplemented`` if it does not implement the operation for a\n given pair of arguments. By convention, ``False`` and ``True`` are\n returned for a successful comparison. However, these methods can\n return any value, so if the comparison operator is used in a\n Boolean context (e.g., in the condition of an ``if`` statement),\n Python will call ``bool()`` on the value to determine if the result\n is true or false.\n\n There are no implied relationships among the comparison operators.\n The truth of ``x==y`` does not imply that ``x!=y`` is false.\n Accordingly, when defining ``__eq__()``, one should also define\n ``__ne__()`` so that the operators will behave as expected. See\n the paragraph on ``__hash__()`` for some important notes on\n creating *hashable* objects which support custom comparison\n operations and are usable as dictionary keys.\n\n There are no swapped-argument versions of these methods (to be used\n when the left argument does not support the operation but the right\n argument does); rather, ``__lt__()`` and ``__gt__()`` are each\n other\'s reflection, ``__le__()`` and ``__ge__()`` are each other\'s\n reflection, and ``__eq__()`` and ``__ne__()`` are their own\n reflection.\n\n Arguments to rich comparison methods are never coerced.\n\n To automatically generate ordering operations from a single root\n operation, see ``functools.total_ordering()``.\n\nobject.__cmp__(self, other)\n\n Called by comparison operations if rich comparison (see above) is\n not defined. Should return a negative integer if ``self < other``,\n zero if ``self == other``, a positive integer if ``self > other``.\n If no ``__cmp__()``, ``__eq__()`` or ``__ne__()`` operation is\n defined, class instances are compared by object identity\n ("address"). See also the description of ``__hash__()`` for some\n important notes on creating *hashable* objects which support custom\n comparison operations and are usable as dictionary keys. (Note: the\n restriction that exceptions are not propagated by ``__cmp__()`` has\n been removed since Python 1.5.)\n\nobject.__rcmp__(self, other)\n\n Changed in version 2.1: No longer supported.\n\nobject.__hash__(self)\n\n Called by built-in function ``hash()`` and for operations on\n members of hashed collections including ``set``, ``frozenset``, and\n ``dict``. ``__hash__()`` should return an integer. The only\n required property is that objects which compare equal have the same\n hash value; it is advised to somehow mix together (e.g. using\n exclusive or) the hash values for the components of the object that\n also play a part in comparison of objects.\n\n If a class does not define a ``__cmp__()`` or ``__eq__()`` method\n it should not define a ``__hash__()`` operation either; if it\n defines ``__cmp__()`` or ``__eq__()`` but not ``__hash__()``, its\n instances will not be usable in hashed collections. 
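Tying the comparison and hashing rules together, a hashable value class might look like the following sketch (``Color`` is an invented name):

   class Color(object):
       def __init__(self, r, g, b):
           self.r, self.g, self.b = r, g, b
       def __eq__(self, other):
           if not isinstance(other, Color):
               return NotImplemented
           return (self.r, self.g, self.b) == (other.r, other.g, other.b)
       def __ne__(self, other):
           result = self.__eq__(other)
           if result is NotImplemented:
               return result
           return not result
       def __hash__(self):
           # objects that compare equal must hash equal, so mix exactly
           # the components that take part in __eq__()
           return hash((self.r, self.g, self.b))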
If a class\n defines mutable objects and implements a ``__cmp__()`` or\n ``__eq__()`` method, it should not implement ``__hash__()``, since\n hashable collection implementations require that a object\'s hash\n value is immutable (if the object\'s hash value changes, it will be\n in the wrong hash bucket).\n\n User-defined classes have ``__cmp__()`` and ``__hash__()`` methods\n by default; with them, all objects compare unequal (except with\n themselves) and ``x.__hash__()`` returns ``id(x)``.\n\n Classes which inherit a ``__hash__()`` method from a parent class\n but change the meaning of ``__cmp__()`` or ``__eq__()`` such that\n the hash value returned is no longer appropriate (e.g. by switching\n to a value-based concept of equality instead of the default\n identity based equality) can explicitly flag themselves as being\n unhashable by setting ``__hash__ = None`` in the class definition.\n Doing so means that not only will instances of the class raise an\n appropriate ``TypeError`` when a program attempts to retrieve their\n hash value, but they will also be correctly identified as\n unhashable when checking ``isinstance(obj, collections.Hashable)``\n (unlike classes which define their own ``__hash__()`` to explicitly\n raise ``TypeError``).\n\n Changed in version 2.5: ``__hash__()`` may now also return a long\n integer object; the 32-bit integer is then derived from the hash of\n that object.\n\n Changed in version 2.6: ``__hash__`` may now be set to ``None`` to\n explicitly flag instances of a class as unhashable.\n\nobject.__nonzero__(self)\n\n Called to implement truth value testing and the built-in operation\n ``bool()``; should return ``False`` or ``True``, or their integer\n equivalents ``0`` or ``1``. When this method is not defined,\n ``__len__()`` is called, if it is defined, and the object is\n considered true if its result is nonzero. If a class defines\n neither ``__len__()`` nor ``__nonzero__()``, all its instances are\n considered true.\n\nobject.__unicode__(self)\n\n Called to implement ``unicode()`` built-in; should return a Unicode\n object. When this method is not defined, string conversion is\n attempted, and the result of string conversion is converted to\n Unicode using the system default encoding.\n\n\nCustomizing attribute access\n============================\n\nThe following methods can be defined to customize the meaning of\nattribute access (use of, assignment to, or deletion of ``x.name``)\nfor class instances.\n\nobject.__getattr__(self, name)\n\n Called when an attribute lookup has not found the attribute in the\n usual places (i.e. it is not an instance attribute nor is it found\n in the class tree for ``self``). ``name`` is the attribute name.\n This method should return the (computed) attribute value or raise\n an ``AttributeError`` exception.\n\n Note that if the attribute is found through the normal mechanism,\n ``__getattr__()`` is not called. (This is an intentional asymmetry\n between ``__getattr__()`` and ``__setattr__()``.) This is done both\n for efficiency reasons and because otherwise ``__getattr__()``\n would have no way to access other attributes of the instance. Note\n that at least for instance variables, you can fake total control by\n not inserting any values in the instance attribute dictionary (but\n instead inserting them in another object). 
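A minimal sketch of the technique just mentioned, keeping the real data in a separate dictionary so that ``__getattr__()`` sees every access to it (``Record`` is an invented name):

   class Record(object):
       def __init__(self, data):
           self._data = dict(data)     # stored in the instance dict as '_data'
       def __getattr__(self, name):
           # only called when normal attribute lookup has already failed
           try:
               return self._data[name]
           except KeyError:
               raise AttributeError(name)

   >>> r = Record({'colour': 'blue'})
   >>> r.colour
   'blue'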
See the\n ``__getattribute__()`` method below for a way to actually get total\n control in new-style classes.\n\nobject.__setattr__(self, name, value)\n\n Called when an attribute assignment is attempted. This is called\n instead of the normal mechanism (i.e. store the value in the\n instance dictionary). *name* is the attribute name, *value* is the\n value to be assigned to it.\n\n If ``__setattr__()`` wants to assign to an instance attribute, it\n should not simply execute ``self.name = value`` --- this would\n cause a recursive call to itself. Instead, it should insert the\n value in the dictionary of instance attributes, e.g.,\n ``self.__dict__[name] = value``. For new-style classes, rather\n than accessing the instance dictionary, it should call the base\n class method with the same name, for example,\n ``object.__setattr__(self, name, value)``.\n\nobject.__delattr__(self, name)\n\n Like ``__setattr__()`` but for attribute deletion instead of\n assignment. This should only be implemented if ``del obj.name`` is\n meaningful for the object.\n\n\nMore attribute access for new-style classes\n-------------------------------------------\n\nThe following methods only apply to new-style classes.\n\nobject.__getattribute__(self, name)\n\n Called unconditionally to implement attribute accesses for\n instances of the class. If the class also defines\n ``__getattr__()``, the latter will not be called unless\n ``__getattribute__()`` either calls it explicitly or raises an\n ``AttributeError``. This method should return the (computed)\n attribute value or raise an ``AttributeError`` exception. In order\n to avoid infinite recursion in this method, its implementation\n should always call the base class method with the same name to\n access any attributes it needs, for example,\n ``object.__getattribute__(self, name)``.\n\n Note: This method may still be bypassed when looking up special methods\n as the result of implicit invocation via language syntax or\n built-in functions. See *Special method lookup for new-style\n classes*.\n\n\nImplementing Descriptors\n------------------------\n\nThe following methods only apply when an instance of the class\ncontaining the method (a so-called *descriptor* class) appears in the\nclass dictionary of another new-style class, known as the *owner*\nclass. In the examples below, "the attribute" refers to the attribute\nwhose name is the key of the property in the owner class\'\n``__dict__``. Descriptors can only be implemented as new-style\nclasses themselves.\n\nobject.__get__(self, instance, owner)\n\n Called to get the attribute of the owner class (class attribute\n access) or of an instance of that class (instance attribute\n access). *owner* is always the owner class, while *instance* is the\n instance that the attribute was accessed through, or ``None`` when\n the attribute is accessed through the *owner*. This method should\n return the (computed) attribute value or raise an\n ``AttributeError`` exception.\n\nobject.__set__(self, instance, value)\n\n Called to set the attribute on an instance *instance* of the owner\n class to a new value, *value*.\n\nobject.__delete__(self, instance)\n\n Called to delete the attribute on an instance *instance* of the\n owner class.\n\n\nInvoking Descriptors\n--------------------\n\nIn general, a descriptor is an object attribute with "binding\nbehavior", one whose attribute access has been overridden by methods\nin the descriptor protocol: ``__get__()``, ``__set__()``, and\n``__delete__()``. 
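A small data descriptor along the lines just described; ``Positive`` and ``Account`` are invented names for this sketch:

   class Positive(object):
       """A data descriptor that only accepts values greater than zero."""
       def __init__(self, name):
           self.name = name                 # key used in the owner's instances
       def __get__(self, instance, owner):
           if instance is None:
               return self                  # attribute accessed on the class
           return instance.__dict__[self.name]
       def __set__(self, instance, value):
           if value <= 0:
               raise ValueError("%s must be positive" % self.name)
           instance.__dict__[self.name] = value

   class Account(object):
       balance = Positive('balance')

   >>> a = Account()
   >>> a.balance = 10
   >>> a.balance
   10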
If any of those methods are defined for an object,\nit is said to be a descriptor.\n\nThe default behavior for attribute access is to get, set, or delete\nthe attribute from an object\'s dictionary. For instance, ``a.x`` has a\nlookup chain starting with ``a.__dict__[\'x\']``, then\n``type(a).__dict__[\'x\']``, and continuing through the base classes of\n``type(a)`` excluding metaclasses.\n\nHowever, if the looked-up value is an object defining one of the\ndescriptor methods, then Python may override the default behavior and\ninvoke the descriptor method instead. Where this occurs in the\nprecedence chain depends on which descriptor methods were defined and\nhow they were called. Note that descriptors are only invoked for new\nstyle objects or classes (ones that subclass ``object()`` or\n``type()``).\n\nThe starting point for descriptor invocation is a binding, ``a.x``.\nHow the arguments are assembled depends on ``a``:\n\nDirect Call\n The simplest and least common call is when user code directly\n invokes a descriptor method: ``x.__get__(a)``.\n\nInstance Binding\n If binding to a new-style object instance, ``a.x`` is transformed\n into the call: ``type(a).__dict__[\'x\'].__get__(a, type(a))``.\n\nClass Binding\n If binding to a new-style class, ``A.x`` is transformed into the\n call: ``A.__dict__[\'x\'].__get__(None, A)``.\n\nSuper Binding\n If ``a`` is an instance of ``super``, then the binding ``super(B,\n obj).m()`` searches ``obj.__class__.__mro__`` for the base class\n ``A`` immediately preceding ``B`` and then invokes the descriptor\n with the call: ``A.__dict__[\'m\'].__get__(obj, A)``.\n\nFor instance bindings, the precedence of descriptor invocation depends\non the which descriptor methods are defined. A descriptor can define\nany combination of ``__get__()``, ``__set__()`` and ``__delete__()``.\nIf it does not define ``__get__()``, then accessing the attribute will\nreturn the descriptor object itself unless there is a value in the\nobject\'s instance dictionary. If the descriptor defines ``__set__()``\nand/or ``__delete__()``, it is a data descriptor; if it defines\nneither, it is a non-data descriptor. Normally, data descriptors\ndefine both ``__get__()`` and ``__set__()``, while non-data\ndescriptors have just the ``__get__()`` method. Data descriptors with\n``__set__()`` and ``__get__()`` defined always override a redefinition\nin an instance dictionary. In contrast, non-data descriptors can be\noverridden by instances.\n\nPython methods (including ``staticmethod()`` and ``classmethod()``)\nare implemented as non-data descriptors. Accordingly, instances can\nredefine and override methods. This allows individual instances to\nacquire behaviors that differ from other instances of the same class.\n\nThe ``property()`` function is implemented as a data descriptor.\nAccordingly, instances cannot override the behavior of a property.\n\n\n__slots__\n---------\n\nBy default, instances of both old and new-style classes have a\ndictionary for attribute storage. This wastes space for objects\nhaving very few instance variables. The space consumption can become\nacute when creating large numbers of instances.\n\nThe default can be overridden by defining *__slots__* in a new-style\nclass definition. The *__slots__* declaration takes a sequence of\ninstance variables and reserves just enough space in each instance to\nhold a value for each variable. 
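A sketch of the space-saving declaration just described (``Vertex`` is an invented class):

   class Vertex(object):
       __slots__ = ('x', 'y')          # no per-instance __dict__ is created
       def __init__(self, x, y):
           self.x = x
           self.y = y

   >>> v = Vertex(1, 2)
   >>> v.z = 3                         # not listed in __slots__
   Traceback (most recent call last):
     ...
   AttributeError: 'Vertex' object has no attribute 'z'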
Space is saved because *__dict__* is\nnot created for each instance.\n\n__slots__\n\n This class variable can be assigned a string, iterable, or sequence\n of strings with variable names used by instances. If defined in a\n new-style class, *__slots__* reserves space for the declared\n variables and prevents the automatic creation of *__dict__* and\n *__weakref__* for each instance.\n\n New in version 2.2.\n\nNotes on using *__slots__*\n\n* When inheriting from a class without *__slots__*, the *__dict__*\n attribute of that class will always be accessible, so a *__slots__*\n definition in the subclass is meaningless.\n\n* Without a *__dict__* variable, instances cannot be assigned new\n variables not listed in the *__slots__* definition. Attempts to\n assign to an unlisted variable name raises ``AttributeError``. If\n dynamic assignment of new variables is desired, then add\n ``\'__dict__\'`` to the sequence of strings in the *__slots__*\n declaration.\n\n Changed in version 2.3: Previously, adding ``\'__dict__\'`` to the\n *__slots__* declaration would not enable the assignment of new\n attributes not specifically listed in the sequence of instance\n variable names.\n\n* Without a *__weakref__* variable for each instance, classes defining\n *__slots__* do not support weak references to its instances. If weak\n reference support is needed, then add ``\'__weakref__\'`` to the\n sequence of strings in the *__slots__* declaration.\n\n Changed in version 2.3: Previously, adding ``\'__weakref__\'`` to the\n *__slots__* declaration would not enable support for weak\n references.\n\n* *__slots__* are implemented at the class level by creating\n descriptors (*Implementing Descriptors*) for each variable name. As\n a result, class attributes cannot be used to set default values for\n instance variables defined by *__slots__*; otherwise, the class\n attribute would overwrite the descriptor assignment.\n\n* The action of a *__slots__* declaration is limited to the class\n where it is defined. As a result, subclasses will have a *__dict__*\n unless they also define *__slots__* (which must only contain names\n of any *additional* slots).\n\n* If a class defines a slot also defined in a base class, the instance\n variable defined by the base class slot is inaccessible (except by\n retrieving its descriptor directly from the base class). This\n renders the meaning of the program undefined. In the future, a\n check may be added to prevent this.\n\n* Nonempty *__slots__* does not work for classes derived from\n "variable-length" built-in types such as ``long``, ``str`` and\n ``tuple``.\n\n* Any non-string iterable may be assigned to *__slots__*. Mappings may\n also be used; however, in the future, special meaning may be\n assigned to the values corresponding to each key.\n\n* *__class__* assignment works only if both classes have the same\n *__slots__*.\n\n Changed in version 2.6: Previously, *__class__* assignment raised an\n error if either new or old class had *__slots__*.\n\n\nCustomizing class creation\n==========================\n\nBy default, new-style classes are constructed using ``type()``. A\nclass definition is read into a separate namespace and the value of\nclass name is bound to the result of ``type(name, bases, dict)``.\n\nWhen the class definition is read, if *__metaclass__* is defined then\nthe callable assigned to it will be called instead of ``type()``. 
This\nallows classes or functions to be written which monitor or alter the\nclass creation process:\n\n* Modifying the class dictionary prior to the class being created.\n\n* Returning an instance of another class -- essentially performing the\n role of a factory function.\n\nThese steps will have to be performed in the metaclass\'s ``__new__()``\nmethod -- ``type.__new__()`` can then be called from this method to\ncreate a class with different properties. This example adds a new\nelement to the class dictionary before creating the class:\n\n class metacls(type):\n def __new__(mcs, name, bases, dict):\n dict[\'foo\'] = \'metacls was here\'\n return type.__new__(mcs, name, bases, dict)\n\nYou can of course also override other class methods (or add new\nmethods); for example defining a custom ``__call__()`` method in the\nmetaclass allows custom behavior when the class is called, e.g. not\nalways creating a new instance.\n\n__metaclass__\n\n This variable can be any callable accepting arguments for ``name``,\n ``bases``, and ``dict``. Upon class creation, the callable is used\n instead of the built-in ``type()``.\n\n New in version 2.2.\n\nThe appropriate metaclass is determined by the following precedence\nrules:\n\n* If ``dict[\'__metaclass__\']`` exists, it is used.\n\n* Otherwise, if there is at least one base class, its metaclass is\n used (this looks for a *__class__* attribute first and if not found,\n uses its type).\n\n* Otherwise, if a global variable named __metaclass__ exists, it is\n used.\n\n* Otherwise, the old-style, classic metaclass (types.ClassType) is\n used.\n\nThe potential uses for metaclasses are boundless. Some ideas that have\nbeen explored including logging, interface checking, automatic\ndelegation, automatic property creation, proxies, frameworks, and\nautomatic resource locking/synchronization.\n\n\nCustomizing instance and subclass checks\n========================================\n\nNew in version 2.6.\n\nThe following methods are used to override the default behavior of the\n``isinstance()`` and ``issubclass()`` built-in functions.\n\nIn particular, the metaclass ``abc.ABCMeta`` implements these methods\nin order to allow the addition of Abstract Base Classes (ABCs) as\n"virtual base classes" to any class or type (including built-in\ntypes), including other ABCs.\n\nclass.__instancecheck__(self, instance)\n\n Return true if *instance* should be considered a (direct or\n indirect) instance of *class*. If defined, called to implement\n ``isinstance(instance, class)``.\n\nclass.__subclasscheck__(self, subclass)\n\n Return true if *subclass* should be considered a (direct or\n indirect) subclass of *class*. If defined, called to implement\n ``issubclass(subclass, class)``.\n\nNote that these methods are looked up on the type (metaclass) of a\nclass. 
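As a hedged sketch of such a metaclass-level hook (the real-world user of this machinery is ``abc.ABCMeta``; the names below are invented):

   class DuckMeta(type):
       def __instancecheck__(cls, instance):
           # anything with a quack() method is treated as an instance
           return hasattr(instance, 'quack')

   class Duck(object):
       __metaclass__ = DuckMeta

   class Mallard(object):
       def quack(self):
           return "quack"

   >>> isinstance(Mallard(), Duck)
   True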
They cannot be defined as class methods in the actual class.\nThis is consistent with the lookup of special methods that are called\non instances, only in this case the instance is itself a class.\n\nSee also:\n\n **PEP 3119** - Introducing Abstract Base Classes\n Includes the specification for customizing ``isinstance()`` and\n ``issubclass()`` behavior through ``__instancecheck__()`` and\n ``__subclasscheck__()``, with motivation for this functionality\n in the context of adding Abstract Base Classes (see the ``abc``\n module) to the language.\n\n\nEmulating callable objects\n==========================\n\nobject.__call__(self[, args...])\n\n Called when the instance is "called" as a function; if this method\n is defined, ``x(arg1, arg2, ...)`` is a shorthand for\n ``x.__call__(arg1, arg2, ...)``.\n\n\nEmulating container types\n=========================\n\nThe following methods can be defined to implement container objects.\nContainers usually are sequences (such as lists or tuples) or mappings\n(like dictionaries), but can represent other containers as well. The\nfirst set of methods is used either to emulate a sequence or to\nemulate a mapping; the difference is that for a sequence, the\nallowable keys should be the integers *k* for which ``0 <= k < N``\nwhere *N* is the length of the sequence, or slice objects, which\ndefine a range of items. (For backwards compatibility, the method\n``__getslice__()`` (see below) can also be defined to handle simple,\nbut not extended slices.) It is also recommended that mappings provide\nthe methods ``keys()``, ``values()``, ``items()``, ``has_key()``,\n``get()``, ``clear()``, ``setdefault()``, ``iterkeys()``,\n``itervalues()``, ``iteritems()``, ``pop()``, ``popitem()``,\n``copy()``, and ``update()`` behaving similar to those for Python\'s\nstandard dictionary objects. The ``UserDict`` module provides a\n``DictMixin`` class to help create those methods from a base set of\n``__getitem__()``, ``__setitem__()``, ``__delitem__()``, and\n``keys()``. Mutable sequences should provide methods ``append()``,\n``count()``, ``index()``, ``extend()``, ``insert()``, ``pop()``,\n``remove()``, ``reverse()`` and ``sort()``, like Python standard list\nobjects. Finally, sequence types should implement addition (meaning\nconcatenation) and multiplication (meaning repetition) by defining the\nmethods ``__add__()``, ``__radd__()``, ``__iadd__()``, ``__mul__()``,\n``__rmul__()`` and ``__imul__()`` described below; they should not\ndefine ``__coerce__()`` or other numerical operators. It is\nrecommended that both mappings and sequences implement the\n``__contains__()`` method to allow efficient use of the ``in``\noperator; for mappings, ``in`` should be equivalent of ``has_key()``;\nfor sequences, it should search through the values. It is further\nrecommended that both mappings and sequences implement the\n``__iter__()`` method to allow efficient iteration through the\ncontainer; for mappings, ``__iter__()`` should be the same as\n``iterkeys()``; for sequences, it should iterate through the values.\n\nobject.__len__(self)\n\n Called to implement the built-in function ``len()``. Should return\n the length of the object, an integer ``>=`` 0. Also, an object\n that doesn\'t define a ``__nonzero__()`` method and whose\n ``__len__()`` method returns zero is considered to be false in a\n Boolean context.\n\nobject.__getitem__(self, key)\n\n Called to implement evaluation of ``self[key]``. 
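Several of the container methods described here can be combined in one small read-only sequence type (``Squares`` is an invented example):

   class Squares(object):
       """A read-only sequence of the first n square numbers."""
       def __init__(self, n):
           self.n = n
       def __len__(self):
           return self.n
       def __getitem__(self, index):
           # this sketch only accepts plain non-negative integer keys
           if not 0 <= index < self.n:
               raise IndexError(index)     # lets iteration terminate cleanly
           return index * index
       def __contains__(self, value):
           return any(value == i * i for i in range(self.n))

   >>> s = Squares(5)
   >>> len(s), s[3], 16 in s
   (5, 9, True)
   >>> list(s)                 # falls back to the __getitem__ protocol
   [0, 1, 4, 9, 16]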
For sequence\n types, the accepted keys should be integers and slice objects.\n Note that the special interpretation of negative indexes (if the\n class wishes to emulate a sequence type) is up to the\n ``__getitem__()`` method. If *key* is of an inappropriate type,\n ``TypeError`` may be raised; if of a value outside the set of\n indexes for the sequence (after any special interpretation of\n negative values), ``IndexError`` should be raised. For mapping\n types, if *key* is missing (not in the container), ``KeyError``\n should be raised.\n\n Note: ``for`` loops expect that an ``IndexError`` will be raised for\n illegal indexes to allow proper detection of the end of the\n sequence.\n\nobject.__setitem__(self, key, value)\n\n Called to implement assignment to ``self[key]``. Same note as for\n ``__getitem__()``. This should only be implemented for mappings if\n the objects support changes to the values for keys, or if new keys\n can be added, or for sequences if elements can be replaced. The\n same exceptions should be raised for improper *key* values as for\n the ``__getitem__()`` method.\n\nobject.__delitem__(self, key)\n\n Called to implement deletion of ``self[key]``. Same note as for\n ``__getitem__()``. This should only be implemented for mappings if\n the objects support removal of keys, or for sequences if elements\n can be removed from the sequence. The same exceptions should be\n raised for improper *key* values as for the ``__getitem__()``\n method.\n\nobject.__iter__(self)\n\n This method is called when an iterator is required for a container.\n This method should return a new iterator object that can iterate\n over all the objects in the container. For mappings, it should\n iterate over the keys of the container, and should also be made\n available as the method ``iterkeys()``.\n\n Iterator objects also need to implement this method; they are\n required to return themselves. For more information on iterator\n objects, see *Iterator Types*.\n\nobject.__reversed__(self)\n\n Called (if present) by the ``reversed()`` built-in to implement\n reverse iteration. It should return a new iterator object that\n iterates over all the objects in the container in reverse order.\n\n If the ``__reversed__()`` method is not provided, the\n ``reversed()`` built-in will fall back to using the sequence\n protocol (``__len__()`` and ``__getitem__()``). Objects that\n support the sequence protocol should only provide\n ``__reversed__()`` if they can provide an implementation that is\n more efficient than the one provided by ``reversed()``.\n\n New in version 2.6.\n\nThe membership test operators (``in`` and ``not in``) are normally\nimplemented as an iteration through a sequence. However, container\nobjects can supply the following special method with a more efficient\nimplementation, which also does not require the object be a sequence.\n\nobject.__contains__(self, item)\n\n Called to implement membership test operators. Should return true\n if *item* is in *self*, false otherwise. 
For mapping objects, this\n should consider the keys of the mapping rather than the values or\n the key-item pairs.\n\n For objects that don\'t define ``__contains__()``, the membership\n test first tries iteration via ``__iter__()``, then the old\n sequence iteration protocol via ``__getitem__()``, see *this\n section in the language reference*.\n\n\nAdditional methods for emulation of sequence types\n==================================================\n\nThe following optional methods can be defined to further emulate\nsequence objects. Immutable sequences methods should at most only\ndefine ``__getslice__()``; mutable sequences might define all three\nmethods.\n\nobject.__getslice__(self, i, j)\n\n Deprecated since version 2.0: Support slice objects as parameters\n to the ``__getitem__()`` method. (However, built-in types in\n CPython currently still implement ``__getslice__()``. Therefore,\n you have to override it in derived classes when implementing\n slicing.)\n\n Called to implement evaluation of ``self[i:j]``. The returned\n object should be of the same type as *self*. Note that missing *i*\n or *j* in the slice expression are replaced by zero or\n ``sys.maxint``, respectively. If negative indexes are used in the\n slice, the length of the sequence is added to that index. If the\n instance does not implement the ``__len__()`` method, an\n ``AttributeError`` is raised. No guarantee is made that indexes\n adjusted this way are not still negative. Indexes which are\n greater than the length of the sequence are not modified. If no\n ``__getslice__()`` is found, a slice object is created instead, and\n passed to ``__getitem__()`` instead.\n\nobject.__setslice__(self, i, j, sequence)\n\n Called to implement assignment to ``self[i:j]``. Same notes for *i*\n and *j* as for ``__getslice__()``.\n\n This method is deprecated. If no ``__setslice__()`` is found, or\n for extended slicing of the form ``self[i:j:k]``, a slice object is\n created, and passed to ``__setitem__()``, instead of\n ``__setslice__()`` being called.\n\nobject.__delslice__(self, i, j)\n\n Called to implement deletion of ``self[i:j]``. Same notes for *i*\n and *j* as for ``__getslice__()``. This method is deprecated. If no\n ``__delslice__()`` is found, or for extended slicing of the form\n ``self[i:j:k]``, a slice object is created, and passed to\n ``__delitem__()``, instead of ``__delslice__()`` being called.\n\nNotice that these methods are only invoked when a single slice with a\nsingle colon is used, and the slice method is available. 
For slice\noperations involving extended slice notation, or in absence of the\nslice methods, ``__getitem__()``, ``__setitem__()`` or\n``__delitem__()`` is called with a slice object as argument.\n\nThe following example demonstrate how to make your program or module\ncompatible with earlier versions of Python (assuming that methods\n``__getitem__()``, ``__setitem__()`` and ``__delitem__()`` support\nslice objects as arguments):\n\n class MyClass:\n ...\n def __getitem__(self, index):\n ...\n def __setitem__(self, index, value):\n ...\n def __delitem__(self, index):\n ...\n\n if sys.version_info < (2, 0):\n # They won\'t be defined if version is at least 2.0 final\n\n def __getslice__(self, i, j):\n return self[max(0, i):max(0, j):]\n def __setslice__(self, i, j, seq):\n self[max(0, i):max(0, j):] = seq\n def __delslice__(self, i, j):\n del self[max(0, i):max(0, j):]\n ...\n\nNote the calls to ``max()``; these are necessary because of the\nhandling of negative indices before the ``__*slice__()`` methods are\ncalled. When negative indexes are used, the ``__*item__()`` methods\nreceive them as provided, but the ``__*slice__()`` methods get a\n"cooked" form of the index values. For each negative index value, the\nlength of the sequence is added to the index before calling the method\n(which may still result in a negative index); this is the customary\nhandling of negative indexes by the built-in sequence types, and the\n``__*item__()`` methods are expected to do this as well. However,\nsince they should already be doing that, negative indexes cannot be\npassed in; they must be constrained to the bounds of the sequence\nbefore being passed to the ``__*item__()`` methods. Calling ``max(0,\ni)`` conveniently returns the proper value.\n\n\nEmulating numeric types\n=======================\n\nThe following methods can be defined to emulate numeric objects.\nMethods corresponding to operations that are not supported by the\nparticular kind of number implemented (e.g., bitwise operations for\nnon-integral numbers) should be left undefined.\n\nobject.__add__(self, other)\nobject.__sub__(self, other)\nobject.__mul__(self, other)\nobject.__floordiv__(self, other)\nobject.__mod__(self, other)\nobject.__divmod__(self, other)\nobject.__pow__(self, other[, modulo])\nobject.__lshift__(self, other)\nobject.__rshift__(self, other)\nobject.__and__(self, other)\nobject.__xor__(self, other)\nobject.__or__(self, other)\n\n These methods are called to implement the binary arithmetic\n operations (``+``, ``-``, ``*``, ``//``, ``%``, ``divmod()``,\n ``pow()``, ``**``, ``<<``, ``>>``, ``&``, ``^``, ``|``). For\n instance, to evaluate the expression ``x + y``, where *x* is an\n instance of a class that has an ``__add__()`` method,\n ``x.__add__(y)`` is called. The ``__divmod__()`` method should be\n the equivalent to using ``__floordiv__()`` and ``__mod__()``; it\n should not be related to ``__truediv__()`` (described below). Note\n that ``__pow__()`` should be defined to accept an optional third\n argument if the ternary version of the built-in ``pow()`` function\n is to be supported.\n\n If one of those methods does not support the operation with the\n supplied arguments, it should return ``NotImplemented``.\n\nobject.__div__(self, other)\nobject.__truediv__(self, other)\n\n The division operator (``/``) is implemented by these methods. The\n ``__truediv__()`` method is used when ``__future__.division`` is in\n effect, otherwise ``__div__()`` is used. 
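An illustrative sketch of the binary-operator protocol described above (``Money`` is an invented class; only same-type addition is supported here):

   class Money(object):
       def __init__(self, cents):
           self.cents = cents
       def __add__(self, other):
           if not isinstance(other, Money):
               return NotImplemented    # give the other operand a chance
           return Money(self.cents + other.cents)
       def __repr__(self):
           return "Money(%d)" % self.cents

   >>> Money(150) + Money(50)
   Money(200)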
If only one of these two\n methods is defined, the object will not support division in the\n alternate context; ``TypeError`` will be raised instead.\n\nobject.__radd__(self, other)\nobject.__rsub__(self, other)\nobject.__rmul__(self, other)\nobject.__rdiv__(self, other)\nobject.__rtruediv__(self, other)\nobject.__rfloordiv__(self, other)\nobject.__rmod__(self, other)\nobject.__rdivmod__(self, other)\nobject.__rpow__(self, other)\nobject.__rlshift__(self, other)\nobject.__rrshift__(self, other)\nobject.__rand__(self, other)\nobject.__rxor__(self, other)\nobject.__ror__(self, other)\n\n These methods are called to implement the binary arithmetic\n operations (``+``, ``-``, ``*``, ``/``, ``%``, ``divmod()``,\n ``pow()``, ``**``, ``<<``, ``>>``, ``&``, ``^``, ``|``) with\n reflected (swapped) operands. These functions are only called if\n the left operand does not support the corresponding operation and\n the operands are of different types. [2] For instance, to evaluate\n the expression ``x - y``, where *y* is an instance of a class that\n has an ``__rsub__()`` method, ``y.__rsub__(x)`` is called if\n ``x.__sub__(y)`` returns *NotImplemented*.\n\n Note that ternary ``pow()`` will not try calling ``__rpow__()``\n (the coercion rules would become too complicated).\n\n Note: If the right operand\'s type is a subclass of the left operand\'s\n type and that subclass provides the reflected method for the\n operation, this method will be called before the left operand\'s\n non-reflected method. This behavior allows subclasses to\n override their ancestors\' operations.\n\nobject.__iadd__(self, other)\nobject.__isub__(self, other)\nobject.__imul__(self, other)\nobject.__idiv__(self, other)\nobject.__itruediv__(self, other)\nobject.__ifloordiv__(self, other)\nobject.__imod__(self, other)\nobject.__ipow__(self, other[, modulo])\nobject.__ilshift__(self, other)\nobject.__irshift__(self, other)\nobject.__iand__(self, other)\nobject.__ixor__(self, other)\nobject.__ior__(self, other)\n\n These methods are called to implement the augmented arithmetic\n assignments (``+=``, ``-=``, ``*=``, ``/=``, ``//=``, ``%=``,\n ``**=``, ``<<=``, ``>>=``, ``&=``, ``^=``, ``|=``). These methods\n should attempt to do the operation in-place (modifying *self*) and\n return the result (which could be, but does not have to be,\n *self*). If a specific method is not defined, the augmented\n assignment falls back to the normal methods. For instance, to\n execute the statement ``x += y``, where *x* is an instance of a\n class that has an ``__iadd__()`` method, ``x.__iadd__(y)`` is\n called. If *x* is an instance of a class that does not define a\n ``__iadd__()`` method, ``x.__add__(y)`` and ``y.__radd__(x)`` are\n considered, as with the evaluation of ``x + y``.\n\nobject.__neg__(self)\nobject.__pos__(self)\nobject.__abs__(self)\nobject.__invert__(self)\n\n Called to implement the unary arithmetic operations (``-``, ``+``,\n ``abs()`` and ``~``).\n\nobject.__complex__(self)\nobject.__int__(self)\nobject.__long__(self)\nobject.__float__(self)\n\n Called to implement the built-in functions ``complex()``,\n ``int()``, ``long()``, and ``float()``. Should return a value of\n the appropriate type.\n\nobject.__oct__(self)\nobject.__hex__(self)\n\n Called to implement the built-in functions ``oct()`` and ``hex()``.\n Should return a string value.\n\nobject.__index__(self)\n\n Called to implement ``operator.index()``. Also called whenever\n Python needs an integer object (such as in slicing). 
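For instance, a tiny (invented) wrapper type can be made acceptable wherever Python expects an index by defining ``__index__()``:

   class Nibble(object):
       def __init__(self, value):
           self.value = value
       def __index__(self):
           return self.value            # must return an int or long

   >>> 'abcdef'[Nibble(2)]
   'c'
   >>> range(10)[Nibble(1):Nibble(4)]
   [1, 2, 3]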
Must return\n an integer (int or long).\n\n New in version 2.5.\n\nobject.__coerce__(self, other)\n\n Called to implement "mixed-mode" numeric arithmetic. Should either\n return a 2-tuple containing *self* and *other* converted to a\n common numeric type, or ``None`` if conversion is impossible. When\n the common type would be the type of ``other``, it is sufficient to\n return ``None``, since the interpreter will also ask the other\n object to attempt a coercion (but sometimes, if the implementation\n of the other type cannot be changed, it is useful to do the\n conversion to the other type here). A return value of\n ``NotImplemented`` is equivalent to returning ``None``.\n\n\nCoercion rules\n==============\n\nThis section used to document the rules for coercion. As the language\nhas evolved, the coercion rules have become hard to document\nprecisely; documenting what one version of one particular\nimplementation does is undesirable. Instead, here are some informal\nguidelines regarding coercion. In Python 3.0, coercion will not be\nsupported.\n\n* If the left operand of a % operator is a string or Unicode object,\n no coercion takes place and the string formatting operation is\n invoked instead.\n\n* It is no longer recommended to define a coercion operation. Mixed-\n mode operations on types that don\'t define coercion pass the\n original arguments to the operation.\n\n* New-style classes (those derived from ``object``) never invoke the\n ``__coerce__()`` method in response to a binary operator; the only\n time ``__coerce__()`` is invoked is when the built-in function\n ``coerce()`` is called.\n\n* For most intents and purposes, an operator that returns\n ``NotImplemented`` is treated the same as one that is not\n implemented at all.\n\n* Below, ``__op__()`` and ``__rop__()`` are used to signify the\n generic method names corresponding to an operator; ``__iop__()`` is\n used for the corresponding in-place operator. For example, for the\n operator \'``+``\', ``__add__()`` and ``__radd__()`` are used for the\n left and right variant of the binary operator, and ``__iadd__()``\n for the in-place variant.\n\n* For objects *x* and *y*, first ``x.__op__(y)`` is tried. If this is\n not implemented or returns ``NotImplemented``, ``y.__rop__(x)`` is\n tried. If this is also not implemented or returns\n ``NotImplemented``, a ``TypeError`` exception is raised. But see\n the following exception:\n\n* Exception to the previous item: if the left operand is an instance\n of a built-in type or a new-style class, and the right operand is an\n instance of a proper subclass of that type or class and overrides\n the base\'s ``__rop__()`` method, the right operand\'s ``__rop__()``\n method is tried *before* the left operand\'s ``__op__()`` method.\n\n This is done so that a subclass can completely override binary\n operators. Otherwise, the left operand\'s ``__op__()`` method would\n always accept the right operand: when an instance of a given class\n is expected, an instance of a subclass of that class is always\n acceptable.\n\n* When either operand type defines a coercion, this coercion is called\n before that type\'s ``__op__()`` or ``__rop__()`` method is called,\n but no sooner. If the coercion returns an object of a different\n type for the operand whose coercion is invoked, part of the process\n is redone using the new object.\n\n* When an in-place operator (like \'``+=``\') is used, if the left\n operand implements ``__iop__()``, it is invoked without any\n coercion. 
When the operation falls back to ``__op__()`` and/or\n ``__rop__()``, the normal coercion rules apply.\n\n* In ``x + y``, if *x* is a sequence that implements sequence\n concatenation, sequence concatenation is invoked.\n\n* In ``x * y``, if one operator is a sequence that implements sequence\n repetition, and the other is an integer (``int`` or ``long``),\n sequence repetition is invoked.\n\n* Rich comparisons (implemented by methods ``__eq__()`` and so on)\n never use coercion. Three-way comparison (implemented by\n ``__cmp__()``) does use coercion under the same conditions as other\n binary operations use it.\n\n* In the current implementation, the built-in numeric types ``int``,\n ``long``, ``float``, and ``complex`` do not use coercion. All these\n types implement a ``__coerce__()`` method, for use by the built-in\n ``coerce()`` function.\n\n Changed in version 2.7.\n\n\nWith Statement Context Managers\n===============================\n\nNew in version 2.5.\n\nA *context manager* is an object that defines the runtime context to\nbe established when executing a ``with`` statement. The context\nmanager handles the entry into, and the exit from, the desired runtime\ncontext for the execution of the block of code. Context managers are\nnormally invoked using the ``with`` statement (described in section\n*The with statement*), but can also be used by directly invoking their\nmethods.\n\nTypical uses of context managers include saving and restoring various\nkinds of global state, locking and unlocking resources, closing opened\nfiles, etc.\n\nFor more information on context managers, see *Context Manager Types*.\n\nobject.__enter__(self)\n\n Enter the runtime context related to this object. The ``with``\n statement will bind this method\'s return value to the target(s)\n specified in the ``as`` clause of the statement, if any.\n\nobject.__exit__(self, exc_type, exc_value, traceback)\n\n Exit the runtime context related to this object. The parameters\n describe the exception that caused the context to be exited. If the\n context was exited without an exception, all three arguments will\n be ``None``.\n\n If an exception is supplied, and the method wishes to suppress the\n exception (i.e., prevent it from being propagated), it should\n return a true value. Otherwise, the exception will be processed\n normally upon exit from this method.\n\n Note that ``__exit__()`` methods should not reraise the passed-in\n exception; this is the caller\'s responsibility.\n\nSee also:\n\n **PEP 0343** - The "with" statement\n The specification, background, and examples for the Python\n ``with`` statement.\n\n\nSpecial method lookup for old-style classes\n===========================================\n\nFor old-style classes, special methods are always looked up in exactly\nthe same way as any other method or attribute. This is the case\nregardless of whether the method is being looked up explicitly as in\n``x.__getitem__(i)`` or implicitly as in ``x[i]``.\n\nThis behaviour means that special methods may exhibit different\nbehaviour for different instances of a single old-style class if the\nappropriate special attributes are set differently:\n\n >>> class C:\n ... 
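A minimal context manager along the lines of the ``__enter__()``/``__exit__()`` description above; the class name and the '/tmp' path are only examples:

    import os

    class ChangedDir(object):
        """Saves and restores the current working directory around a with block."""
        def __init__(self, path):
            self.path = path
        def __enter__(self):
            self._old = os.getcwd()
            os.chdir(self.path)
            return self.path                 # bound to the target after ``as``, if any
        def __exit__(self, exc_type, exc_value, traceback):
            os.chdir(self._old)
            return False                     # do not suppress exceptions

    with ChangedDir('/tmp'):
        pass                                 # exceptions still propagate after the restore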
pass\n ...\n >>> c1 = C()\n >>> c2 = C()\n >>> c1.__len__ = lambda: 5\n >>> c2.__len__ = lambda: 9\n >>> len(c1)\n 5\n >>> len(c2)\n 9\n\n\nSpecial method lookup for new-style classes\n===========================================\n\nFor new-style classes, implicit invocations of special methods are\nonly guaranteed to work correctly if defined on an object\'s type, not\nin the object\'s instance dictionary. That behaviour is the reason why\nthe following code raises an exception (unlike the equivalent example\nwith old-style classes):\n\n >>> class C(object):\n ... pass\n ...\n >>> c = C()\n >>> c.__len__ = lambda: 5\n >>> len(c)\n Traceback (most recent call last):\n File "", line 1, in \n TypeError: object of type \'C\' has no len()\n\nThe rationale behind this behaviour lies with a number of special\nmethods such as ``__hash__()`` and ``__repr__()`` that are implemented\nby all objects, including type objects. If the implicit lookup of\nthese methods used the conventional lookup process, they would fail\nwhen invoked on the type object itself:\n\n >>> 1 .__hash__() == hash(1)\n True\n >>> int.__hash__() == hash(int)\n Traceback (most recent call last):\n File "", line 1, in \n TypeError: descriptor \'__hash__\' of \'int\' object needs an argument\n\nIncorrectly attempting to invoke an unbound method of a class in this\nway is sometimes referred to as \'metaclass confusion\', and is avoided\nby bypassing the instance when looking up special methods:\n\n >>> type(1).__hash__(1) == hash(1)\n True\n >>> type(int).__hash__(int) == hash(int)\n True\n\nIn addition to bypassing any instance attributes in the interest of\ncorrectness, implicit special method lookup generally also bypasses\nthe ``__getattribute__()`` method even of the object\'s metaclass:\n\n >>> class Meta(type):\n ... def __getattribute__(*args):\n ... print "Metaclass getattribute invoked"\n ... return type.__getattribute__(*args)\n ...\n >>> class C(object):\n ... __metaclass__ = Meta\n ... def __len__(self):\n ... return 10\n ... def __getattribute__(*args):\n ... print "Class getattribute invoked"\n ... return object.__getattribute__(*args)\n ...\n >>> c = C()\n >>> c.__len__() # Explicit lookup via instance\n Class getattribute invoked\n 10\n >>> type(c).__len__(c) # Explicit lookup via type\n Metaclass getattribute invoked\n 10\n >>> len(c) # Implicit lookup\n 10\n\nBypassing the ``__getattribute__()`` machinery in this fashion\nprovides significant scope for speed optimisations within the\ninterpreter, at the cost of some flexibility in the handling of\nspecial methods (the special method *must* be set on the class object\nitself in order to be consistently invoked by the interpreter).\n\n-[ Footnotes ]-\n\n[1] It *is* possible in some cases to change an object\'s type, under\n certain controlled conditions. It generally isn\'t a good idea\n though, since it can lead to some very strange behaviour if it is\n handled incorrectly.\n\n[2] For operands of the same type, it is assumed that if the non-\n reflected method (such as ``__add__()``) fails the operation is\n not supported, which is why the reflected method is not called.\n', + 'specialnames': u'\nSpecial method names\n********************\n\nA class can implement certain operations that are invoked by special\nsyntax (such as arithmetic operations or subscripting and slicing) by\ndefining methods with special names. This is Python\'s approach to\n*operator overloading*, allowing classes to define their own behavior\nwith respect to language operators. 
For instance, if a class defines\na method named ``__getitem__()``, and ``x`` is an instance of this\nclass, then ``x[i]`` is roughly equivalent to ``x.__getitem__(i)`` for\nold-style classes and ``type(x).__getitem__(x, i)`` for new-style\nclasses. Except where mentioned, attempts to execute an operation\nraise an exception when no appropriate method is defined (typically\n``AttributeError`` or ``TypeError``).\n\nWhen implementing a class that emulates any built-in type, it is\nimportant that the emulation only be implemented to the degree that it\nmakes sense for the object being modelled. For example, some\nsequences may work well with retrieval of individual elements, but\nextracting a slice may not make sense. (One example of this is the\n``NodeList`` interface in the W3C\'s Document Object Model.)\n\n\nBasic customization\n===================\n\nobject.__new__(cls[, ...])\n\n Called to create a new instance of class *cls*. ``__new__()`` is a\n static method (special-cased so you need not declare it as such)\n that takes the class of which an instance was requested as its\n first argument. The remaining arguments are those passed to the\n object constructor expression (the call to the class). The return\n value of ``__new__()`` should be the new object instance (usually\n an instance of *cls*).\n\n Typical implementations create a new instance of the class by\n invoking the superclass\'s ``__new__()`` method using\n ``super(currentclass, cls).__new__(cls[, ...])`` with appropriate\n arguments and then modifying the newly-created instance as\n necessary before returning it.\n\n If ``__new__()`` returns an instance of *cls*, then the new\n instance\'s ``__init__()`` method will be invoked like\n ``__init__(self[, ...])``, where *self* is the new instance and the\n remaining arguments are the same as were passed to ``__new__()``.\n\n If ``__new__()`` does not return an instance of *cls*, then the new\n instance\'s ``__init__()`` method will not be invoked.\n\n ``__new__()`` is intended mainly to allow subclasses of immutable\n types (like int, str, or tuple) to customize instance creation. It\n is also commonly overridden in custom metaclasses in order to\n customize class creation.\n\nobject.__init__(self[, ...])\n\n Called when the instance is created. The arguments are those\n passed to the class constructor expression. If a base class has an\n ``__init__()`` method, the derived class\'s ``__init__()`` method,\n if any, must explicitly call it to ensure proper initialization of\n the base class part of the instance; for example:\n ``BaseClass.__init__(self, [args...])``. As a special constraint\n on constructors, no value may be returned; doing so will cause a\n ``TypeError`` to be raised at runtime.\n\nobject.__del__(self)\n\n Called when the instance is about to be destroyed. This is also\n called a destructor. If a base class has a ``__del__()`` method,\n the derived class\'s ``__del__()`` method, if any, must explicitly\n call it to ensure proper deletion of the base class part of the\n instance. Note that it is possible (though not recommended!) for\n the ``__del__()`` method to postpone destruction of the instance by\n creating a new reference to it. It may then be called at a later\n time when this new reference is deleted. 
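A small sketch of the main use case of ``__new__()`` mentioned above, subclassing an immutable type; UpperStr is an invented name:

    class UpperStr(str):
        """Customization of an immutable built-in must happen in __new__, because
        the value is already fixed by the time __init__ runs."""
        def __new__(cls, value):
            return super(UpperStr, cls).__new__(cls, value.upper())

    s = UpperStr("pypy")
    assert s == "PYPY" and isinstance(s, UpperStr)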
It is not guaranteed that\n ``__del__()`` methods are called for objects that still exist when\n the interpreter exits.\n\n Note: ``del x`` doesn\'t directly call ``x.__del__()`` --- the former\n decrements the reference count for ``x`` by one, and the latter\n is only called when ``x``\'s reference count reaches zero. Some\n common situations that may prevent the reference count of an\n object from going to zero include: circular references between\n objects (e.g., a doubly-linked list or a tree data structure with\n parent and child pointers); a reference to the object on the\n stack frame of a function that caught an exception (the traceback\n stored in ``sys.exc_traceback`` keeps the stack frame alive); or\n a reference to the object on the stack frame that raised an\n unhandled exception in interactive mode (the traceback stored in\n ``sys.last_traceback`` keeps the stack frame alive). The first\n situation can only be remedied by explicitly breaking the cycles;\n the latter two situations can be resolved by storing ``None`` in\n ``sys.exc_traceback`` or ``sys.last_traceback``. Circular\n references which are garbage are detected when the option cycle\n detector is enabled (it\'s on by default), but can only be cleaned\n up if there are no Python-level ``__del__()`` methods involved.\n Refer to the documentation for the ``gc`` module for more\n information about how ``__del__()`` methods are handled by the\n cycle detector, particularly the description of the ``garbage``\n value.\n\n Warning: Due to the precarious circumstances under which ``__del__()``\n methods are invoked, exceptions that occur during their execution\n are ignored, and a warning is printed to ``sys.stderr`` instead.\n Also, when ``__del__()`` is invoked in response to a module being\n deleted (e.g., when execution of the program is done), other\n globals referenced by the ``__del__()`` method may already have\n been deleted or in the process of being torn down (e.g. the\n import machinery shutting down). For this reason, ``__del__()``\n methods should do the absolute minimum needed to maintain\n external invariants. Starting with version 1.5, Python\n guarantees that globals whose name begins with a single\n underscore are deleted from their module before other globals are\n deleted; if no other references to such globals exist, this may\n help in assuring that imported modules are still available at the\n time when the ``__del__()`` method is called.\n\nobject.__repr__(self)\n\n Called by the ``repr()`` built-in function and by string\n conversions (reverse quotes) to compute the "official" string\n representation of an object. If at all possible, this should look\n like a valid Python expression that could be used to recreate an\n object with the same value (given an appropriate environment). If\n this is not possible, a string of the form ``<...some useful\n description...>`` should be returned. The return value must be a\n string object. If a class defines ``__repr__()`` but not\n ``__str__()``, then ``__repr__()`` is also used when an "informal"\n string representation of instances of that class is required.\n\n This is typically used for debugging, so it is important that the\n representation is information-rich and unambiguous.\n\nobject.__str__(self)\n\n Called by the ``str()`` built-in function and by the ``print``\n statement to compute the "informal" string representation of an\n object. 
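A brief example of the ``__repr__()``/``__str__()`` distinction; Point is a hypothetical class:

    class Point(object):
        def __init__(self, x, y):
            self.x, self.y = x, y
        def __repr__(self):
            # "official" representation: looks like an expression recreating the object
            return "Point(%r, %r)" % (self.x, self.y)
        def __str__(self):
            # "informal" representation used by str() and the print statement
            return "(%s, %s)" % (self.x, self.y)

    p = Point(1, 2)
    assert repr(p) == "Point(1, 2)" and str(p) == "(1, 2)"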
This differs from ``__repr__()`` in that it does not have\n to be a valid Python expression: a more convenient or concise\n representation may be used instead. The return value must be a\n string object.\n\nobject.__lt__(self, other)\nobject.__le__(self, other)\nobject.__eq__(self, other)\nobject.__ne__(self, other)\nobject.__gt__(self, other)\nobject.__ge__(self, other)\n\n New in version 2.1.\n\n These are the so-called "rich comparison" methods, and are called\n for comparison operators in preference to ``__cmp__()`` below. The\n correspondence between operator symbols and method names is as\n follows: ``xy`` call ``x.__ne__(y)``, ``x>y`` calls ``x.__gt__(y)``, and\n ``x>=y`` calls ``x.__ge__(y)``.\n\n A rich comparison method may return the singleton\n ``NotImplemented`` if it does not implement the operation for a\n given pair of arguments. By convention, ``False`` and ``True`` are\n returned for a successful comparison. However, these methods can\n return any value, so if the comparison operator is used in a\n Boolean context (e.g., in the condition of an ``if`` statement),\n Python will call ``bool()`` on the value to determine if the result\n is true or false.\n\n There are no implied relationships among the comparison operators.\n The truth of ``x==y`` does not imply that ``x!=y`` is false.\n Accordingly, when defining ``__eq__()``, one should also define\n ``__ne__()`` so that the operators will behave as expected. See\n the paragraph on ``__hash__()`` for some important notes on\n creating *hashable* objects which support custom comparison\n operations and are usable as dictionary keys.\n\n There are no swapped-argument versions of these methods (to be used\n when the left argument does not support the operation but the right\n argument does); rather, ``__lt__()`` and ``__gt__()`` are each\n other\'s reflection, ``__le__()`` and ``__ge__()`` are each other\'s\n reflection, and ``__eq__()`` and ``__ne__()`` are their own\n reflection.\n\n Arguments to rich comparison methods are never coerced.\n\n To automatically generate ordering operations from a single root\n operation, see ``functools.total_ordering()``.\n\nobject.__cmp__(self, other)\n\n Called by comparison operations if rich comparison (see above) is\n not defined. Should return a negative integer if ``self < other``,\n zero if ``self == other``, a positive integer if ``self > other``.\n If no ``__cmp__()``, ``__eq__()`` or ``__ne__()`` operation is\n defined, class instances are compared by object identity\n ("address"). See also the description of ``__hash__()`` for some\n important notes on creating *hashable* objects which support custom\n comparison operations and are usable as dictionary keys. (Note: the\n restriction that exceptions are not propagated by ``__cmp__()`` has\n been removed since Python 1.5.)\n\nobject.__rcmp__(self, other)\n\n Changed in version 2.1: No longer supported.\n\nobject.__hash__(self)\n\n Called by built-in function ``hash()`` and for operations on\n members of hashed collections including ``set``, ``frozenset``, and\n ``dict``. ``__hash__()`` should return an integer. The only\n required property is that objects which compare equal have the same\n hash value; it is advised to somehow mix together (e.g. 
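A sketch of rich comparison methods combined with ``functools.total_ordering()`` as suggested above; requires Python 2.7, and the Version class is invented:

    import functools

    @functools.total_ordering
    class Version(object):
        """Defines __eq__ and __lt__; total_ordering supplies the other operators."""
        def __init__(self, parts):
            self.parts = tuple(parts)
        def __eq__(self, other):
            if not isinstance(other, Version):
                return NotImplemented
            return self.parts == other.parts
        def __ne__(self, other):             # __eq__ does not imply __ne__, so define it
            result = self.__eq__(other)
            return result if result is NotImplemented else not result
        def __lt__(self, other):
            if not isinstance(other, Version):
                return NotImplemented
            return self.parts < other.parts

    assert Version([1, 9]) < Version([1, 10])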
using\n exclusive or) the hash values for the components of the object that\n also play a part in comparison of objects.\n\n If a class does not define a ``__cmp__()`` or ``__eq__()`` method\n it should not define a ``__hash__()`` operation either; if it\n defines ``__cmp__()`` or ``__eq__()`` but not ``__hash__()``, its\n instances will not be usable in hashed collections. If a class\n defines mutable objects and implements a ``__cmp__()`` or\n ``__eq__()`` method, it should not implement ``__hash__()``, since\n hashable collection implementations require that a object\'s hash\n value is immutable (if the object\'s hash value changes, it will be\n in the wrong hash bucket).\n\n User-defined classes have ``__cmp__()`` and ``__hash__()`` methods\n by default; with them, all objects compare unequal (except with\n themselves) and ``x.__hash__()`` returns ``id(x)``.\n\n Classes which inherit a ``__hash__()`` method from a parent class\n but change the meaning of ``__cmp__()`` or ``__eq__()`` such that\n the hash value returned is no longer appropriate (e.g. by switching\n to a value-based concept of equality instead of the default\n identity based equality) can explicitly flag themselves as being\n unhashable by setting ``__hash__ = None`` in the class definition.\n Doing so means that not only will instances of the class raise an\n appropriate ``TypeError`` when a program attempts to retrieve their\n hash value, but they will also be correctly identified as\n unhashable when checking ``isinstance(obj, collections.Hashable)``\n (unlike classes which define their own ``__hash__()`` to explicitly\n raise ``TypeError``).\n\n Changed in version 2.5: ``__hash__()`` may now also return a long\n integer object; the 32-bit integer is then derived from the hash of\n that object.\n\n Changed in version 2.6: ``__hash__`` may now be set to ``None`` to\n explicitly flag instances of a class as unhashable.\n\nobject.__nonzero__(self)\n\n Called to implement truth value testing and the built-in operation\n ``bool()``; should return ``False`` or ``True``, or their integer\n equivalents ``0`` or ``1``. When this method is not defined,\n ``__len__()`` is called, if it is defined, and the object is\n considered true if its result is nonzero. If a class defines\n neither ``__len__()`` nor ``__nonzero__()``, all its instances are\n considered true.\n\nobject.__unicode__(self)\n\n Called to implement ``unicode()`` built-in; should return a Unicode\n object. When this method is not defined, string conversion is\n attempted, and the result of string conversion is converted to\n Unicode using the system default encoding.\n\n\nCustomizing attribute access\n============================\n\nThe following methods can be defined to customize the meaning of\nattribute access (use of, assignment to, or deletion of ``x.name``)\nfor class instances.\n\nobject.__getattr__(self, name)\n\n Called when an attribute lookup has not found the attribute in the\n usual places (i.e. it is not an instance attribute nor is it found\n in the class tree for ``self``). ``name`` is the attribute name.\n This method should return the (computed) attribute value or raise\n an ``AttributeError`` exception.\n\n Note that if the attribute is found through the normal mechanism,\n ``__getattr__()`` is not called. (This is an intentional asymmetry\n between ``__getattr__()`` and ``__setattr__()``.) This is done both\n for efficiency reasons and because otherwise ``__getattr__()``\n would have no way to access other attributes of the instance. 
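A minimal example of keeping ``__eq__()`` and ``__hash__()`` consistent so instances stay usable in hashed collections; Color and its field are invented and treated as immutable:

    class Color(object):
        """Value-based equality paired with a matching hash value."""
        def __init__(self, name):
            self._name = name
        def __eq__(self, other):
            if not isinstance(other, Color):
                return NotImplemented
            return self._name == other._name
        def __ne__(self, other):
            result = self.__eq__(other)
            return result if result is NotImplemented else not result
        def __hash__(self):
            return hash(self._name)          # equal objects hash equal

    assert len(set([Color("red"), Color("red"), Color("blue")])) == 2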
Note\n that at least for instance variables, you can fake total control by\n not inserting any values in the instance attribute dictionary (but\n instead inserting them in another object). See the\n ``__getattribute__()`` method below for a way to actually get total\n control in new-style classes.\n\nobject.__setattr__(self, name, value)\n\n Called when an attribute assignment is attempted. This is called\n instead of the normal mechanism (i.e. store the value in the\n instance dictionary). *name* is the attribute name, *value* is the\n value to be assigned to it.\n\n If ``__setattr__()`` wants to assign to an instance attribute, it\n should not simply execute ``self.name = value`` --- this would\n cause a recursive call to itself. Instead, it should insert the\n value in the dictionary of instance attributes, e.g.,\n ``self.__dict__[name] = value``. For new-style classes, rather\n than accessing the instance dictionary, it should call the base\n class method with the same name, for example,\n ``object.__setattr__(self, name, value)``.\n\nobject.__delattr__(self, name)\n\n Like ``__setattr__()`` but for attribute deletion instead of\n assignment. This should only be implemented if ``del obj.name`` is\n meaningful for the object.\n\n\nMore attribute access for new-style classes\n-------------------------------------------\n\nThe following methods only apply to new-style classes.\n\nobject.__getattribute__(self, name)\n\n Called unconditionally to implement attribute accesses for\n instances of the class. If the class also defines\n ``__getattr__()``, the latter will not be called unless\n ``__getattribute__()`` either calls it explicitly or raises an\n ``AttributeError``. This method should return the (computed)\n attribute value or raise an ``AttributeError`` exception. In order\n to avoid infinite recursion in this method, its implementation\n should always call the base class method with the same name to\n access any attributes it needs, for example,\n ``object.__getattribute__(self, name)``.\n\n Note: This method may still be bypassed when looking up special methods\n as the result of implicit invocation via language syntax or\n built-in functions. See *Special method lookup for new-style\n classes*.\n\n\nImplementing Descriptors\n------------------------\n\nThe following methods only apply when an instance of the class\ncontaining the method (a so-called *descriptor* class) appears in an\n*owner* class (the descriptor must be in either the owner\'s class\ndictionary or in the class dictionary for one of its parents). In the\nexamples below, "the attribute" refers to the attribute whose name is\nthe key of the property in the owner class\' ``__dict__``.\n\nobject.__get__(self, instance, owner)\n\n Called to get the attribute of the owner class (class attribute\n access) or of an instance of that class (instance attribute\n access). *owner* is always the owner class, while *instance* is the\n instance that the attribute was accessed through, or ``None`` when\n the attribute is accessed through the *owner*. 
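A short sketch of ``__setattr__()`` and ``__getattr__()`` for a new-style class, showing the recommended ``object.__setattr__`` call to avoid recursion; Record is an invented name:

    class Record(object):
        """Tracks attribute assignments without recursing into __setattr__."""
        def __init__(self):
            object.__setattr__(self, '_log', [])
        def __setattr__(self, name, value):
            self._log.append(name)                 # normal reads are safe here
            object.__setattr__(self, name, value)
        def __getattr__(self, name):
            # reached only when normal lookup fails
            raise AttributeError("no attribute %r" % (name,))

    r = Record()
    r.x = 1
    assert r.x == 1 and r._log == ['x']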
This method should\n return the (computed) attribute value or raise an\n ``AttributeError`` exception.\n\nobject.__set__(self, instance, value)\n\n Called to set the attribute on an instance *instance* of the owner\n class to a new value, *value*.\n\nobject.__delete__(self, instance)\n\n Called to delete the attribute on an instance *instance* of the\n owner class.\n\n\nInvoking Descriptors\n--------------------\n\nIn general, a descriptor is an object attribute with "binding\nbehavior", one whose attribute access has been overridden by methods\nin the descriptor protocol: ``__get__()``, ``__set__()``, and\n``__delete__()``. If any of those methods are defined for an object,\nit is said to be a descriptor.\n\nThe default behavior for attribute access is to get, set, or delete\nthe attribute from an object\'s dictionary. For instance, ``a.x`` has a\nlookup chain starting with ``a.__dict__[\'x\']``, then\n``type(a).__dict__[\'x\']``, and continuing through the base classes of\n``type(a)`` excluding metaclasses.\n\nHowever, if the looked-up value is an object defining one of the\ndescriptor methods, then Python may override the default behavior and\ninvoke the descriptor method instead. Where this occurs in the\nprecedence chain depends on which descriptor methods were defined and\nhow they were called. Note that descriptors are only invoked for new\nstyle objects or classes (ones that subclass ``object()`` or\n``type()``).\n\nThe starting point for descriptor invocation is a binding, ``a.x``.\nHow the arguments are assembled depends on ``a``:\n\nDirect Call\n The simplest and least common call is when user code directly\n invokes a descriptor method: ``x.__get__(a)``.\n\nInstance Binding\n If binding to a new-style object instance, ``a.x`` is transformed\n into the call: ``type(a).__dict__[\'x\'].__get__(a, type(a))``.\n\nClass Binding\n If binding to a new-style class, ``A.x`` is transformed into the\n call: ``A.__dict__[\'x\'].__get__(None, A)``.\n\nSuper Binding\n If ``a`` is an instance of ``super``, then the binding ``super(B,\n obj).m()`` searches ``obj.__class__.__mro__`` for the base class\n ``A`` immediately preceding ``B`` and then invokes the descriptor\n with the call: ``A.__dict__[\'m\'].__get__(obj, obj.__class__)``.\n\nFor instance bindings, the precedence of descriptor invocation depends\non the which descriptor methods are defined. A descriptor can define\nany combination of ``__get__()``, ``__set__()`` and ``__delete__()``.\nIf it does not define ``__get__()``, then accessing the attribute will\nreturn the descriptor object itself unless there is a value in the\nobject\'s instance dictionary. If the descriptor defines ``__set__()``\nand/or ``__delete__()``, it is a data descriptor; if it defines\nneither, it is a non-data descriptor. Normally, data descriptors\ndefine both ``__get__()`` and ``__set__()``, while non-data\ndescriptors have just the ``__get__()`` method. Data descriptors with\n``__set__()`` and ``__get__()`` defined always override a redefinition\nin an instance dictionary. In contrast, non-data descriptors can be\noverridden by instances.\n\nPython methods (including ``staticmethod()`` and ``classmethod()``)\nare implemented as non-data descriptors. Accordingly, instances can\nredefine and override methods. 
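A sketch of a data descriptor (it defines both ``__get__()`` and ``__set__()``); the Typed and Account names are invented:

    class Typed(object):
        """Data descriptor enforcing a type on an attribute of the owner class."""
        def __init__(self, name, kind):
            self.name, self.kind = name, kind
        def __get__(self, instance, owner):
            if instance is None:
                return self                        # accessed on the class itself
            return instance.__dict__[self.name]
        def __set__(self, instance, value):
            if not isinstance(value, self.kind):
                raise TypeError("%s must be %s" % (self.name, self.kind.__name__))
            instance.__dict__[self.name] = value

    class Account(object):
        balance = Typed('balance', int)            # descriptor lives in the owner class

    a = Account()
    a.balance = 10                                 # Account.__dict__['balance'].__set__(a, 10)
    assert a.balance == 10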
This allows individual instances to\nacquire behaviors that differ from other instances of the same class.\n\nThe ``property()`` function is implemented as a data descriptor.\nAccordingly, instances cannot override the behavior of a property.\n\n\n__slots__\n---------\n\nBy default, instances of both old and new-style classes have a\ndictionary for attribute storage. This wastes space for objects\nhaving very few instance variables. The space consumption can become\nacute when creating large numbers of instances.\n\nThe default can be overridden by defining *__slots__* in a new-style\nclass definition. The *__slots__* declaration takes a sequence of\ninstance variables and reserves just enough space in each instance to\nhold a value for each variable. Space is saved because *__dict__* is\nnot created for each instance.\n\n__slots__\n\n This class variable can be assigned a string, iterable, or sequence\n of strings with variable names used by instances. If defined in a\n new-style class, *__slots__* reserves space for the declared\n variables and prevents the automatic creation of *__dict__* and\n *__weakref__* for each instance.\n\n New in version 2.2.\n\nNotes on using *__slots__*\n\n* When inheriting from a class without *__slots__*, the *__dict__*\n attribute of that class will always be accessible, so a *__slots__*\n definition in the subclass is meaningless.\n\n* Without a *__dict__* variable, instances cannot be assigned new\n variables not listed in the *__slots__* definition. Attempts to\n assign to an unlisted variable name raises ``AttributeError``. If\n dynamic assignment of new variables is desired, then add\n ``\'__dict__\'`` to the sequence of strings in the *__slots__*\n declaration.\n\n Changed in version 2.3: Previously, adding ``\'__dict__\'`` to the\n *__slots__* declaration would not enable the assignment of new\n attributes not specifically listed in the sequence of instance\n variable names.\n\n* Without a *__weakref__* variable for each instance, classes defining\n *__slots__* do not support weak references to its instances. If weak\n reference support is needed, then add ``\'__weakref__\'`` to the\n sequence of strings in the *__slots__* declaration.\n\n Changed in version 2.3: Previously, adding ``\'__weakref__\'`` to the\n *__slots__* declaration would not enable support for weak\n references.\n\n* *__slots__* are implemented at the class level by creating\n descriptors (*Implementing Descriptors*) for each variable name. As\n a result, class attributes cannot be used to set default values for\n instance variables defined by *__slots__*; otherwise, the class\n attribute would overwrite the descriptor assignment.\n\n* The action of a *__slots__* declaration is limited to the class\n where it is defined. As a result, subclasses will have a *__dict__*\n unless they also define *__slots__* (which must only contain names\n of any *additional* slots).\n\n* If a class defines a slot also defined in a base class, the instance\n variable defined by the base class slot is inaccessible (except by\n retrieving its descriptor directly from the base class). This\n renders the meaning of the program undefined. In the future, a\n check may be added to prevent this.\n\n* Nonempty *__slots__* does not work for classes derived from\n "variable-length" built-in types such as ``long``, ``str`` and\n ``tuple``.\n\n* Any non-string iterable may be assigned to *__slots__*. 
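A minimal illustration of the *__slots__* behaviour described above; Point2D is an invented name:

    class Point2D(object):
        __slots__ = ('x', 'y')                     # no per-instance __dict__ is created
        def __init__(self, x, y):
            self.x, self.y = x, y

    p = Point2D(1, 2)
    try:
        p.z = 3                                    # not listed in __slots__
    except AttributeError:
        pass
    else:
        raise AssertionError("expected AttributeError")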
Mappings may\n also be used; however, in the future, special meaning may be\n assigned to the values corresponding to each key.\n\n* *__class__* assignment works only if both classes have the same\n *__slots__*.\n\n Changed in version 2.6: Previously, *__class__* assignment raised an\n error if either new or old class had *__slots__*.\n\n\nCustomizing class creation\n==========================\n\nBy default, new-style classes are constructed using ``type()``. A\nclass definition is read into a separate namespace and the value of\nclass name is bound to the result of ``type(name, bases, dict)``.\n\nWhen the class definition is read, if *__metaclass__* is defined then\nthe callable assigned to it will be called instead of ``type()``. This\nallows classes or functions to be written which monitor or alter the\nclass creation process:\n\n* Modifying the class dictionary prior to the class being created.\n\n* Returning an instance of another class -- essentially performing the\n role of a factory function.\n\nThese steps will have to be performed in the metaclass\'s ``__new__()``\nmethod -- ``type.__new__()`` can then be called from this method to\ncreate a class with different properties. This example adds a new\nelement to the class dictionary before creating the class:\n\n class metacls(type):\n def __new__(mcs, name, bases, dict):\n dict[\'foo\'] = \'metacls was here\'\n return type.__new__(mcs, name, bases, dict)\n\nYou can of course also override other class methods (or add new\nmethods); for example defining a custom ``__call__()`` method in the\nmetaclass allows custom behavior when the class is called, e.g. not\nalways creating a new instance.\n\n__metaclass__\n\n This variable can be any callable accepting arguments for ``name``,\n ``bases``, and ``dict``. Upon class creation, the callable is used\n instead of the built-in ``type()``.\n\n New in version 2.2.\n\nThe appropriate metaclass is determined by the following precedence\nrules:\n\n* If ``dict[\'__metaclass__\']`` exists, it is used.\n\n* Otherwise, if there is at least one base class, its metaclass is\n used (this looks for a *__class__* attribute first and if not found,\n uses its type).\n\n* Otherwise, if a global variable named __metaclass__ exists, it is\n used.\n\n* Otherwise, the old-style, classic metaclass (types.ClassType) is\n used.\n\nThe potential uses for metaclasses are boundless. Some ideas that have\nbeen explored including logging, interface checking, automatic\ndelegation, automatic property creation, proxies, frameworks, and\nautomatic resource locking/synchronization.\n\n\nCustomizing instance and subclass checks\n========================================\n\nNew in version 2.6.\n\nThe following methods are used to override the default behavior of the\n``isinstance()`` and ``issubclass()`` built-in functions.\n\nIn particular, the metaclass ``abc.ABCMeta`` implements these methods\nin order to allow the addition of Abstract Base Classes (ABCs) as\n"virtual base classes" to any class or type (including built-in\ntypes), including other ABCs.\n\nclass.__instancecheck__(self, instance)\n\n Return true if *instance* should be considered a (direct or\n indirect) instance of *class*. If defined, called to implement\n ``isinstance(instance, class)``.\n\nclass.__subclasscheck__(self, subclass)\n\n Return true if *subclass* should be considered a (direct or\n indirect) subclass of *class*. 
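As a sketch of the point about overriding ``__call__()`` in a metaclass (so that calling the class does not always create a new instance), assuming Python 2 *__metaclass__* syntax; the names are invented:

    class Singleton(type):
        """Metaclass whose __call__ reuses a single cached instance."""
        def __call__(cls, *args, **kwargs):
            if not hasattr(cls, '_instance'):
                cls._instance = type.__call__(cls, *args, **kwargs)
            return cls._instance

    class Registry(object):
        __metaclass__ = Singleton                  # Python 2 spelling of metaclass selection

    assert Registry() is Registry()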
If defined, called to implement\n ``issubclass(subclass, class)``.\n\nNote that these methods are looked up on the type (metaclass) of a\nclass. They cannot be defined as class methods in the actual class.\nThis is consistent with the lookup of special methods that are called\non instances, only in this case the instance is itself a class.\n\nSee also:\n\n **PEP 3119** - Introducing Abstract Base Classes\n Includes the specification for customizing ``isinstance()`` and\n ``issubclass()`` behavior through ``__instancecheck__()`` and\n ``__subclasscheck__()``, with motivation for this functionality\n in the context of adding Abstract Base Classes (see the ``abc``\n module) to the language.\n\n\nEmulating callable objects\n==========================\n\nobject.__call__(self[, args...])\n\n Called when the instance is "called" as a function; if this method\n is defined, ``x(arg1, arg2, ...)`` is a shorthand for\n ``x.__call__(arg1, arg2, ...)``.\n\n\nEmulating container types\n=========================\n\nThe following methods can be defined to implement container objects.\nContainers usually are sequences (such as lists or tuples) or mappings\n(like dictionaries), but can represent other containers as well. The\nfirst set of methods is used either to emulate a sequence or to\nemulate a mapping; the difference is that for a sequence, the\nallowable keys should be the integers *k* for which ``0 <= k < N``\nwhere *N* is the length of the sequence, or slice objects, which\ndefine a range of items. (For backwards compatibility, the method\n``__getslice__()`` (see below) can also be defined to handle simple,\nbut not extended slices.) It is also recommended that mappings provide\nthe methods ``keys()``, ``values()``, ``items()``, ``has_key()``,\n``get()``, ``clear()``, ``setdefault()``, ``iterkeys()``,\n``itervalues()``, ``iteritems()``, ``pop()``, ``popitem()``,\n``copy()``, and ``update()`` behaving similar to those for Python\'s\nstandard dictionary objects. The ``UserDict`` module provides a\n``DictMixin`` class to help create those methods from a base set of\n``__getitem__()``, ``__setitem__()``, ``__delitem__()``, and\n``keys()``. Mutable sequences should provide methods ``append()``,\n``count()``, ``index()``, ``extend()``, ``insert()``, ``pop()``,\n``remove()``, ``reverse()`` and ``sort()``, like Python standard list\nobjects. Finally, sequence types should implement addition (meaning\nconcatenation) and multiplication (meaning repetition) by defining the\nmethods ``__add__()``, ``__radd__()``, ``__iadd__()``, ``__mul__()``,\n``__rmul__()`` and ``__imul__()`` described below; they should not\ndefine ``__coerce__()`` or other numerical operators. It is\nrecommended that both mappings and sequences implement the\n``__contains__()`` method to allow efficient use of the ``in``\noperator; for mappings, ``in`` should be equivalent of ``has_key()``;\nfor sequences, it should search through the values. It is further\nrecommended that both mappings and sequences implement the\n``__iter__()`` method to allow efficient iteration through the\ncontainer; for mappings, ``__iter__()`` should be the same as\n``iterkeys()``; for sequences, it should iterate through the values.\n\nobject.__len__(self)\n\n Called to implement the built-in function ``len()``. Should return\n the length of the object, an integer ``>=`` 0. 
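A small example of an object made callable through ``__call__()``; Adder is a hypothetical class:

    class Adder(object):
        """Instances behave like functions; x(arg) is shorthand for x.__call__(arg)."""
        def __init__(self, n):
            self.n = n
        def __call__(self, value):
            return value + self.n

    add3 = Adder(3)
    assert add3(4) == 7                            # same as add3.__call__(4)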
Also, an object\n that doesn\'t define a ``__nonzero__()`` method and whose\n ``__len__()`` method returns zero is considered to be false in a\n Boolean context.\n\nobject.__getitem__(self, key)\n\n Called to implement evaluation of ``self[key]``. For sequence\n types, the accepted keys should be integers and slice objects.\n Note that the special interpretation of negative indexes (if the\n class wishes to emulate a sequence type) is up to the\n ``__getitem__()`` method. If *key* is of an inappropriate type,\n ``TypeError`` may be raised; if of a value outside the set of\n indexes for the sequence (after any special interpretation of\n negative values), ``IndexError`` should be raised. For mapping\n types, if *key* is missing (not in the container), ``KeyError``\n should be raised.\n\n Note: ``for`` loops expect that an ``IndexError`` will be raised for\n illegal indexes to allow proper detection of the end of the\n sequence.\n\nobject.__setitem__(self, key, value)\n\n Called to implement assignment to ``self[key]``. Same note as for\n ``__getitem__()``. This should only be implemented for mappings if\n the objects support changes to the values for keys, or if new keys\n can be added, or for sequences if elements can be replaced. The\n same exceptions should be raised for improper *key* values as for\n the ``__getitem__()`` method.\n\nobject.__delitem__(self, key)\n\n Called to implement deletion of ``self[key]``. Same note as for\n ``__getitem__()``. This should only be implemented for mappings if\n the objects support removal of keys, or for sequences if elements\n can be removed from the sequence. The same exceptions should be\n raised for improper *key* values as for the ``__getitem__()``\n method.\n\nobject.__iter__(self)\n\n This method is called when an iterator is required for a container.\n This method should return a new iterator object that can iterate\n over all the objects in the container. For mappings, it should\n iterate over the keys of the container, and should also be made\n available as the method ``iterkeys()``.\n\n Iterator objects also need to implement this method; they are\n required to return themselves. For more information on iterator\n objects, see *Iterator Types*.\n\nobject.__reversed__(self)\n\n Called (if present) by the ``reversed()`` built-in to implement\n reverse iteration. It should return a new iterator object that\n iterates over all the objects in the container in reverse order.\n\n If the ``__reversed__()`` method is not provided, the\n ``reversed()`` built-in will fall back to using the sequence\n protocol (``__len__()`` and ``__getitem__()``). Objects that\n support the sequence protocol should only provide\n ``__reversed__()`` if they can provide an implementation that is\n more efficient than the one provided by ``reversed()``.\n\n New in version 2.6.\n\nThe membership test operators (``in`` and ``not in``) are normally\nimplemented as an iteration through a sequence. However, container\nobjects can supply the following special method with a more efficient\nimplementation, which also does not require the object be a sequence.\n\nobject.__contains__(self, item)\n\n Called to implement membership test operators. Should return true\n if *item* is in *self*, false otherwise. 
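A sketch of a minimal mapping built from the container hooks described above; LowerDict is an invented name and implements only a handful of the recommended methods:

    class LowerDict(object):
        """Toy mapping whose keys are normalized to lower case."""
        def __init__(self):
            self._data = {}
        def __len__(self):
            return len(self._data)
        def __getitem__(self, key):
            try:
                return self._data[key.lower()]
            except KeyError:
                raise KeyError(key)                # missing keys raise KeyError
        def __setitem__(self, key, value):
            self._data[key.lower()] = value
        def __delitem__(self, key):
            del self._data[key.lower()]
        def __iter__(self):
            return iter(self._data)                # iterate over keys, like a dict
        def __contains__(self, key):
            return key.lower() in self._data       # backs the ``in`` operator

    d = LowerDict()
    d['Name'] = 'pypy'
    assert d['NAME'] == 'pypy' and 'name' in d and len(d) == 1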
For mapping objects, this\n should consider the keys of the mapping rather than the values or\n the key-item pairs.\n\n For objects that don\'t define ``__contains__()``, the membership\n test first tries iteration via ``__iter__()``, then the old\n sequence iteration protocol via ``__getitem__()``, see *this\n section in the language reference*.\n\n\nAdditional methods for emulation of sequence types\n==================================================\n\nThe following optional methods can be defined to further emulate\nsequence objects. Immutable sequences methods should at most only\ndefine ``__getslice__()``; mutable sequences might define all three\nmethods.\n\nobject.__getslice__(self, i, j)\n\n Deprecated since version 2.0: Support slice objects as parameters\n to the ``__getitem__()`` method. (However, built-in types in\n CPython currently still implement ``__getslice__()``. Therefore,\n you have to override it in derived classes when implementing\n slicing.)\n\n Called to implement evaluation of ``self[i:j]``. The returned\n object should be of the same type as *self*. Note that missing *i*\n or *j* in the slice expression are replaced by zero or\n ``sys.maxint``, respectively. If negative indexes are used in the\n slice, the length of the sequence is added to that index. If the\n instance does not implement the ``__len__()`` method, an\n ``AttributeError`` is raised. No guarantee is made that indexes\n adjusted this way are not still negative. Indexes which are\n greater than the length of the sequence are not modified. If no\n ``__getslice__()`` is found, a slice object is created instead, and\n passed to ``__getitem__()`` instead.\n\nobject.__setslice__(self, i, j, sequence)\n\n Called to implement assignment to ``self[i:j]``. Same notes for *i*\n and *j* as for ``__getslice__()``.\n\n This method is deprecated. If no ``__setslice__()`` is found, or\n for extended slicing of the form ``self[i:j:k]``, a slice object is\n created, and passed to ``__setitem__()``, instead of\n ``__setslice__()`` being called.\n\nobject.__delslice__(self, i, j)\n\n Called to implement deletion of ``self[i:j]``. Same notes for *i*\n and *j* as for ``__getslice__()``. This method is deprecated. If no\n ``__delslice__()`` is found, or for extended slicing of the form\n ``self[i:j:k]``, a slice object is created, and passed to\n ``__delitem__()``, instead of ``__delslice__()`` being called.\n\nNotice that these methods are only invoked when a single slice with a\nsingle colon is used, and the slice method is available. 
For slice\noperations involving extended slice notation, or in absence of the\nslice methods, ``__getitem__()``, ``__setitem__()`` or\n``__delitem__()`` is called with a slice object as argument.\n\nThe following example demonstrate how to make your program or module\ncompatible with earlier versions of Python (assuming that methods\n``__getitem__()``, ``__setitem__()`` and ``__delitem__()`` support\nslice objects as arguments):\n\n class MyClass:\n ...\n def __getitem__(self, index):\n ...\n def __setitem__(self, index, value):\n ...\n def __delitem__(self, index):\n ...\n\n if sys.version_info < (2, 0):\n # They won\'t be defined if version is at least 2.0 final\n\n def __getslice__(self, i, j):\n return self[max(0, i):max(0, j):]\n def __setslice__(self, i, j, seq):\n self[max(0, i):max(0, j):] = seq\n def __delslice__(self, i, j):\n del self[max(0, i):max(0, j):]\n ...\n\nNote the calls to ``max()``; these are necessary because of the\nhandling of negative indices before the ``__*slice__()`` methods are\ncalled. When negative indexes are used, the ``__*item__()`` methods\nreceive them as provided, but the ``__*slice__()`` methods get a\n"cooked" form of the index values. For each negative index value, the\nlength of the sequence is added to the index before calling the method\n(which may still result in a negative index); this is the customary\nhandling of negative indexes by the built-in sequence types, and the\n``__*item__()`` methods are expected to do this as well. However,\nsince they should already be doing that, negative indexes cannot be\npassed in; they must be constrained to the bounds of the sequence\nbefore being passed to the ``__*item__()`` methods. Calling ``max(0,\ni)`` conveniently returns the proper value.\n\n\nEmulating numeric types\n=======================\n\nThe following methods can be defined to emulate numeric objects.\nMethods corresponding to operations that are not supported by the\nparticular kind of number implemented (e.g., bitwise operations for\nnon-integral numbers) should be left undefined.\n\nobject.__add__(self, other)\nobject.__sub__(self, other)\nobject.__mul__(self, other)\nobject.__floordiv__(self, other)\nobject.__mod__(self, other)\nobject.__divmod__(self, other)\nobject.__pow__(self, other[, modulo])\nobject.__lshift__(self, other)\nobject.__rshift__(self, other)\nobject.__and__(self, other)\nobject.__xor__(self, other)\nobject.__or__(self, other)\n\n These methods are called to implement the binary arithmetic\n operations (``+``, ``-``, ``*``, ``//``, ``%``, ``divmod()``,\n ``pow()``, ``**``, ``<<``, ``>>``, ``&``, ``^``, ``|``). For\n instance, to evaluate the expression ``x + y``, where *x* is an\n instance of a class that has an ``__add__()`` method,\n ``x.__add__(y)`` is called. The ``__divmod__()`` method should be\n the equivalent to using ``__floordiv__()`` and ``__mod__()``; it\n should not be related to ``__truediv__()`` (described below). Note\n that ``__pow__()`` should be defined to accept an optional third\n argument if the ternary version of the built-in ``pow()`` function\n is to be supported.\n\n If one of those methods does not support the operation with the\n supplied arguments, it should return ``NotImplemented``.\n\nobject.__div__(self, other)\nobject.__truediv__(self, other)\n\n The division operator (``/``) is implemented by these methods. The\n ``__truediv__()`` method is used when ``__future__.division`` is in\n effect, otherwise ``__div__()`` is used. 
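A sketch of a new-style sequence whose ``__getitem__()`` accepts slice objects directly (no ``__getslice__()`` is defined, so simple and extended slices both arrive here); Window is an invented name:

    class Window(object):
        """Sequence wrapper handling both integer indexes and slice objects."""
        def __init__(self, items):
            self._items = list(items)
        def __len__(self):
            return len(self._items)
        def __getitem__(self, index):
            if isinstance(index, slice):
                return Window(self._items[index])  # slice object: return a new Window
            return self._items[index]              # plain integer indexing

    w = Window([0, 1, 2, 3, 4])
    assert w[1] == 1                               # integer index
    assert w[1:3]._items == [1, 2]                 # simple slice, no __getslice__ defined
    assert w[::2]._items == [0, 2, 4]              # extended slice always uses __getitem__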
If only one of these two\n methods is defined, the object will not support division in the\n alternate context; ``TypeError`` will be raised instead.\n\nobject.__radd__(self, other)\nobject.__rsub__(self, other)\nobject.__rmul__(self, other)\nobject.__rdiv__(self, other)\nobject.__rtruediv__(self, other)\nobject.__rfloordiv__(self, other)\nobject.__rmod__(self, other)\nobject.__rdivmod__(self, other)\nobject.__rpow__(self, other)\nobject.__rlshift__(self, other)\nobject.__rrshift__(self, other)\nobject.__rand__(self, other)\nobject.__rxor__(self, other)\nobject.__ror__(self, other)\n\n These methods are called to implement the binary arithmetic\n operations (``+``, ``-``, ``*``, ``/``, ``%``, ``divmod()``,\n ``pow()``, ``**``, ``<<``, ``>>``, ``&``, ``^``, ``|``) with\n reflected (swapped) operands. These functions are only called if\n the left operand does not support the corresponding operation and\n the operands are of different types. [2] For instance, to evaluate\n the expression ``x - y``, where *y* is an instance of a class that\n has an ``__rsub__()`` method, ``y.__rsub__(x)`` is called if\n ``x.__sub__(y)`` returns *NotImplemented*.\n\n Note that ternary ``pow()`` will not try calling ``__rpow__()``\n (the coercion rules would become too complicated).\n\n Note: If the right operand\'s type is a subclass of the left operand\'s\n type and that subclass provides the reflected method for the\n operation, this method will be called before the left operand\'s\n non-reflected method. This behavior allows subclasses to\n override their ancestors\' operations.\n\nobject.__iadd__(self, other)\nobject.__isub__(self, other)\nobject.__imul__(self, other)\nobject.__idiv__(self, other)\nobject.__itruediv__(self, other)\nobject.__ifloordiv__(self, other)\nobject.__imod__(self, other)\nobject.__ipow__(self, other[, modulo])\nobject.__ilshift__(self, other)\nobject.__irshift__(self, other)\nobject.__iand__(self, other)\nobject.__ixor__(self, other)\nobject.__ior__(self, other)\n\n These methods are called to implement the augmented arithmetic\n assignments (``+=``, ``-=``, ``*=``, ``/=``, ``//=``, ``%=``,\n ``**=``, ``<<=``, ``>>=``, ``&=``, ``^=``, ``|=``). These methods\n should attempt to do the operation in-place (modifying *self*) and\n return the result (which could be, but does not have to be,\n *self*). If a specific method is not defined, the augmented\n assignment falls back to the normal methods. For instance, to\n execute the statement ``x += y``, where *x* is an instance of a\n class that has an ``__iadd__()`` method, ``x.__iadd__(y)`` is\n called. If *x* is an instance of a class that does not define a\n ``__iadd__()`` method, ``x.__add__(y)`` and ``y.__radd__(x)`` are\n considered, as with the evaluation of ``x + y``.\n\nobject.__neg__(self)\nobject.__pos__(self)\nobject.__abs__(self)\nobject.__invert__(self)\n\n Called to implement the unary arithmetic operations (``-``, ``+``,\n ``abs()`` and ``~``).\n\nobject.__complex__(self)\nobject.__int__(self)\nobject.__long__(self)\nobject.__float__(self)\n\n Called to implement the built-in functions ``complex()``,\n ``int()``, ``long()``, and ``float()``. Should return a value of\n the appropriate type.\n\nobject.__oct__(self)\nobject.__hex__(self)\n\n Called to implement the built-in functions ``oct()`` and ``hex()``.\n Should return a string value.\n\nobject.__index__(self)\n\n Called to implement ``operator.index()``. Also called whenever\n Python needs an integer object (such as in slicing). 
Must return\n an integer (int or long).\n\n New in version 2.5.\n\nobject.__coerce__(self, other)\n\n Called to implement "mixed-mode" numeric arithmetic. Should either\n return a 2-tuple containing *self* and *other* converted to a\n common numeric type, or ``None`` if conversion is impossible. When\n the common type would be the type of ``other``, it is sufficient to\n return ``None``, since the interpreter will also ask the other\n object to attempt a coercion (but sometimes, if the implementation\n of the other type cannot be changed, it is useful to do the\n conversion to the other type here). A return value of\n ``NotImplemented`` is equivalent to returning ``None``.\n\n\nCoercion rules\n==============\n\nThis section used to document the rules for coercion. As the language\nhas evolved, the coercion rules have become hard to document\nprecisely; documenting what one version of one particular\nimplementation does is undesirable. Instead, here are some informal\nguidelines regarding coercion. In Python 3.0, coercion will not be\nsupported.\n\n* If the left operand of a % operator is a string or Unicode object,\n no coercion takes place and the string formatting operation is\n invoked instead.\n\n* It is no longer recommended to define a coercion operation. Mixed-\n mode operations on types that don\'t define coercion pass the\n original arguments to the operation.\n\n* New-style classes (those derived from ``object``) never invoke the\n ``__coerce__()`` method in response to a binary operator; the only\n time ``__coerce__()`` is invoked is when the built-in function\n ``coerce()`` is called.\n\n* For most intents and purposes, an operator that returns\n ``NotImplemented`` is treated the same as one that is not\n implemented at all.\n\n* Below, ``__op__()`` and ``__rop__()`` are used to signify the\n generic method names corresponding to an operator; ``__iop__()`` is\n used for the corresponding in-place operator. For example, for the\n operator \'``+``\', ``__add__()`` and ``__radd__()`` are used for the\n left and right variant of the binary operator, and ``__iadd__()``\n for the in-place variant.\n\n* For objects *x* and *y*, first ``x.__op__(y)`` is tried. If this is\n not implemented or returns ``NotImplemented``, ``y.__rop__(x)`` is\n tried. If this is also not implemented or returns\n ``NotImplemented``, a ``TypeError`` exception is raised. But see\n the following exception:\n\n* Exception to the previous item: if the left operand is an instance\n of a built-in type or a new-style class, and the right operand is an\n instance of a proper subclass of that type or class and overrides\n the base\'s ``__rop__()`` method, the right operand\'s ``__rop__()``\n method is tried *before* the left operand\'s ``__op__()`` method.\n\n This is done so that a subclass can completely override binary\n operators. Otherwise, the left operand\'s ``__op__()`` method would\n always accept the right operand: when an instance of a given class\n is expected, an instance of a subclass of that class is always\n acceptable.\n\n* When either operand type defines a coercion, this coercion is called\n before that type\'s ``__op__()`` or ``__rop__()`` method is called,\n but no sooner. If the coercion returns an object of a different\n type for the operand whose coercion is invoked, part of the process\n is redone using the new object.\n\n* When an in-place operator (like \'``+=``\') is used, if the left\n operand implements ``__iop__()``, it is invoked without any\n coercion. 
When the operation falls back to ``__op__()`` and/or\n ``__rop__()``, the normal coercion rules apply.\n\n* In ``x + y``, if *x* is a sequence that implements sequence\n concatenation, sequence concatenation is invoked.\n\n* In ``x * y``, if one operator is a sequence that implements sequence\n repetition, and the other is an integer (``int`` or ``long``),\n sequence repetition is invoked.\n\n* Rich comparisons (implemented by methods ``__eq__()`` and so on)\n never use coercion. Three-way comparison (implemented by\n ``__cmp__()``) does use coercion under the same conditions as other\n binary operations use it.\n\n* In the current implementation, the built-in numeric types ``int``,\n ``long``, ``float``, and ``complex`` do not use coercion. All these\n types implement a ``__coerce__()`` method, for use by the built-in\n ``coerce()`` function.\n\n Changed in version 2.7.\n\n\nWith Statement Context Managers\n===============================\n\nNew in version 2.5.\n\nA *context manager* is an object that defines the runtime context to\nbe established when executing a ``with`` statement. The context\nmanager handles the entry into, and the exit from, the desired runtime\ncontext for the execution of the block of code. Context managers are\nnormally invoked using the ``with`` statement (described in section\n*The with statement*), but can also be used by directly invoking their\nmethods.\n\nTypical uses of context managers include saving and restoring various\nkinds of global state, locking and unlocking resources, closing opened\nfiles, etc.\n\nFor more information on context managers, see *Context Manager Types*.\n\nobject.__enter__(self)\n\n Enter the runtime context related to this object. The ``with``\n statement will bind this method\'s return value to the target(s)\n specified in the ``as`` clause of the statement, if any.\n\nobject.__exit__(self, exc_type, exc_value, traceback)\n\n Exit the runtime context related to this object. The parameters\n describe the exception that caused the context to be exited. If the\n context was exited without an exception, all three arguments will\n be ``None``.\n\n If an exception is supplied, and the method wishes to suppress the\n exception (i.e., prevent it from being propagated), it should\n return a true value. Otherwise, the exception will be processed\n normally upon exit from this method.\n\n Note that ``__exit__()`` methods should not reraise the passed-in\n exception; this is the caller\'s responsibility.\n\nSee also:\n\n **PEP 0343** - The "with" statement\n The specification, background, and examples for the Python\n ``with`` statement.\n\n\nSpecial method lookup for old-style classes\n===========================================\n\nFor old-style classes, special methods are always looked up in exactly\nthe same way as any other method or attribute. This is the case\nregardless of whether the method is being looked up explicitly as in\n``x.__getitem__(i)`` or implicitly as in ``x[i]``.\n\nThis behaviour means that special methods may exhibit different\nbehaviour for different instances of a single old-style class if the\nappropriate special attributes are set differently:\n\n >>> class C:\n ... 
pass\n ...\n >>> c1 = C()\n >>> c2 = C()\n >>> c1.__len__ = lambda: 5\n >>> c2.__len__ = lambda: 9\n >>> len(c1)\n 5\n >>> len(c2)\n 9\n\n\nSpecial method lookup for new-style classes\n===========================================\n\nFor new-style classes, implicit invocations of special methods are\nonly guaranteed to work correctly if defined on an object\'s type, not\nin the object\'s instance dictionary. That behaviour is the reason why\nthe following code raises an exception (unlike the equivalent example\nwith old-style classes):\n\n >>> class C(object):\n ... pass\n ...\n >>> c = C()\n >>> c.__len__ = lambda: 5\n >>> len(c)\n Traceback (most recent call last):\n File "", line 1, in \n TypeError: object of type \'C\' has no len()\n\nThe rationale behind this behaviour lies with a number of special\nmethods such as ``__hash__()`` and ``__repr__()`` that are implemented\nby all objects, including type objects. If the implicit lookup of\nthese methods used the conventional lookup process, they would fail\nwhen invoked on the type object itself:\n\n >>> 1 .__hash__() == hash(1)\n True\n >>> int.__hash__() == hash(int)\n Traceback (most recent call last):\n File "", line 1, in \n TypeError: descriptor \'__hash__\' of \'int\' object needs an argument\n\nIncorrectly attempting to invoke an unbound method of a class in this\nway is sometimes referred to as \'metaclass confusion\', and is avoided\nby bypassing the instance when looking up special methods:\n\n >>> type(1).__hash__(1) == hash(1)\n True\n >>> type(int).__hash__(int) == hash(int)\n True\n\nIn addition to bypassing any instance attributes in the interest of\ncorrectness, implicit special method lookup generally also bypasses\nthe ``__getattribute__()`` method even of the object\'s metaclass:\n\n >>> class Meta(type):\n ... def __getattribute__(*args):\n ... print "Metaclass getattribute invoked"\n ... return type.__getattribute__(*args)\n ...\n >>> class C(object):\n ... __metaclass__ = Meta\n ... def __len__(self):\n ... return 10\n ... def __getattribute__(*args):\n ... print "Class getattribute invoked"\n ... return object.__getattribute__(*args)\n ...\n >>> c = C()\n >>> c.__len__() # Explicit lookup via instance\n Class getattribute invoked\n 10\n >>> type(c).__len__(c) # Explicit lookup via type\n Metaclass getattribute invoked\n 10\n >>> len(c) # Implicit lookup\n 10\n\nBypassing the ``__getattribute__()`` machinery in this fashion\nprovides significant scope for speed optimisations within the\ninterpreter, at the cost of some flexibility in the handling of\nspecial methods (the special method *must* be set on the class object\nitself in order to be consistently invoked by the interpreter).\n\n-[ Footnotes ]-\n\n[1] It *is* possible in some cases to change an object\'s type, under\n certain controlled conditions. 
It generally isn\'t a good idea\n though, since it can lead to some very strange behaviour if it is\n handled incorrectly.\n\n[2] For operands of the same type, it is assumed that if the non-\n reflected method (such as ``__add__()``) fails the operation is\n not supported, which is why the reflected method is not called.\n', 'string-conversions': u'\nString conversions\n******************\n\nA string conversion is an expression list enclosed in reverse (a.k.a.\nbackward) quotes:\n\n string_conversion ::= "\'" expression_list "\'"\n\nA string conversion evaluates the contained expression list and\nconverts the resulting object into a string according to rules\nspecific to its type.\n\nIf the object is a string, a number, ``None``, or a tuple, list or\ndictionary containing only objects whose type is one of these, the\nresulting string is a valid Python expression which can be passed to\nthe built-in function ``eval()`` to yield an expression with the same\nvalue (or an approximation, if floating point numbers are involved).\n\n(In particular, converting a string adds quotes around it and converts\n"funny" characters to escape sequences that are safe to print.)\n\nRecursive objects (for example, lists or dictionaries that contain a\nreference to themselves, directly or indirectly) use ``...`` to\nindicate a recursive reference, and the result cannot be passed to\n``eval()`` to get an equal value (``SyntaxError`` will be raised\ninstead).\n\nThe built-in function ``repr()`` performs exactly the same conversion\nin its argument as enclosing it in parentheses and reverse quotes\ndoes. The built-in function ``str()`` performs a similar but more\nuser-friendly conversion.\n', - 'string-methods': u'\nString Methods\n**************\n\nBelow are listed the string methods which both 8-bit strings and\nUnicode objects support.\n\nIn addition, Python\'s strings support the sequence type methods\ndescribed in the *Sequence Types --- str, unicode, list, tuple,\nbuffer, xrange* section. To output formatted strings use template\nstrings or the ``%`` operator described in the *String Formatting\nOperations* section. Also, see the ``re`` module for string functions\nbased on regular expressions.\n\nstr.capitalize()\n\n Return a copy of the string with only its first character\n capitalized.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.center(width[, fillchar])\n\n Return centered in a string of length *width*. Padding is done\n using the specified *fillchar* (default is a space).\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.count(sub[, start[, end]])\n\n Return the number of non-overlapping occurrences of substring *sub*\n in the range [*start*, *end*]. Optional arguments *start* and\n *end* are interpreted as in slice notation.\n\nstr.decode([encoding[, errors]])\n\n Decodes the string using the codec registered for *encoding*.\n *encoding* defaults to the default string encoding. *errors* may\n be given to set a different error handling scheme. The default is\n ``\'strict\'``, meaning that encoding errors raise ``UnicodeError``.\n Other possible values are ``\'ignore\'``, ``\'replace\'`` and any other\n name registered via ``codecs.register_error()``, see section *Codec\n Base Classes*.\n\n New in version 2.2.\n\n Changed in version 2.3: Support for other error handling schemes\n added.\n\n Changed in version 2.7: Support for keyword arguments added.\n\nstr.encode([encoding[, errors]])\n\n Return an encoded version of the string. 
Default encoding is the\n current default string encoding. *errors* may be given to set a\n different error handling scheme. The default for *errors* is\n ``\'strict\'``, meaning that encoding errors raise a\n ``UnicodeError``. Other possible values are ``\'ignore\'``,\n ``\'replace\'``, ``\'xmlcharrefreplace\'``, ``\'backslashreplace\'`` and\n any other name registered via ``codecs.register_error()``, see\n section *Codec Base Classes*. For a list of possible encodings, see\n section *Standard Encodings*.\n\n New in version 2.0.\n\n Changed in version 2.3: Support for ``\'xmlcharrefreplace\'`` and\n ``\'backslashreplace\'`` and other error handling schemes added.\n\n Changed in version 2.7: Support for keyword arguments added.\n\nstr.endswith(suffix[, start[, end]])\n\n Return ``True`` if the string ends with the specified *suffix*,\n otherwise return ``False``. *suffix* can also be a tuple of\n suffixes to look for. With optional *start*, test beginning at\n that position. With optional *end*, stop comparing at that\n position.\n\n Changed in version 2.5: Accept tuples as *suffix*.\n\nstr.expandtabs([tabsize])\n\n Return a copy of the string where all tab characters are replaced\n by one or more spaces, depending on the current column and the\n given tab size. The column number is reset to zero after each\n newline occurring in the string. If *tabsize* is not given, a tab\n size of ``8`` characters is assumed. This doesn\'t understand other\n non-printing characters or escape sequences.\n\nstr.find(sub[, start[, end]])\n\n Return the lowest index in the string where substring *sub* is\n found, such that *sub* is contained in the slice ``s[start:end]``.\n Optional arguments *start* and *end* are interpreted as in slice\n notation. Return ``-1`` if *sub* is not found.\n\nstr.format(*args, **kwargs)\n\n Perform a string formatting operation. The string on which this\n method is called can contain literal text or replacement fields\n delimited by braces ``{}``. Each replacement field contains either\n the numeric index of a positional argument, or the name of a\n keyword argument. 
Returns a copy of the string where each\n replacement field is replaced with the string value of the\n corresponding argument.\n\n >>> "The sum of 1 + 2 is {0}".format(1+2)\n \'The sum of 1 + 2 is 3\'\n\n See *Format String Syntax* for a description of the various\n formatting options that can be specified in format strings.\n\n This method of string formatting is the new standard in Python 3.0,\n and should be preferred to the ``%`` formatting described in\n *String Formatting Operations* in new code.\n\n New in version 2.6.\n\nstr.index(sub[, start[, end]])\n\n Like ``find()``, but raise ``ValueError`` when the substring is not\n found.\n\nstr.isalnum()\n\n Return true if all characters in the string are alphanumeric and\n there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isalpha()\n\n Return true if all characters in the string are alphabetic and\n there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isdigit()\n\n Return true if all characters in the string are digits and there is\n at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.islower()\n\n Return true if all cased characters in the string are lowercase and\n there is at least one cased character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isspace()\n\n Return true if there are only whitespace characters in the string\n and there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.istitle()\n\n Return true if the string is a titlecased string and there is at\n least one character, for example uppercase characters may only\n follow uncased characters and lowercase characters only cased ones.\n Return false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isupper()\n\n Return true if all cased characters in the string are uppercase and\n there is at least one cased character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.join(iterable)\n\n Return a string which is the concatenation of the strings in the\n *iterable* *iterable*. The separator between elements is the\n string providing this method.\n\nstr.ljust(width[, fillchar])\n\n Return the string left justified in a string of length *width*.\n Padding is done using the specified *fillchar* (default is a\n space). The original string is returned if *width* is less than\n ``len(s)``.\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.lower()\n\n Return a copy of the string converted to lowercase.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.lstrip([chars])\n\n Return a copy of the string with leading characters removed. The\n *chars* argument is a string specifying the set of characters to be\n removed. If omitted or ``None``, the *chars* argument defaults to\n removing whitespace. The *chars* argument is not a prefix; rather,\n all combinations of its values are stripped:\n\n >>> \' spacious \'.lstrip()\n \'spacious \'\n >>> \'www.example.com\'.lstrip(\'cmowz.\')\n \'example.com\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.partition(sep)\n\n Split the string at the first occurrence of *sep*, and return a\n 3-tuple containing the part before the separator, the separator\n itself, and the part after the separator. 
If the separator is not\n found, return a 3-tuple containing the string itself, followed by\n two empty strings.\n\n New in version 2.5.\n\nstr.replace(old, new[, count])\n\n Return a copy of the string with all occurrences of substring *old*\n replaced by *new*. If the optional argument *count* is given, only\n the first *count* occurrences are replaced.\n\nstr.rfind(sub[, start[, end]])\n\n Return the highest index in the string where substring *sub* is\n found, such that *sub* is contained within ``s[start:end]``.\n Optional arguments *start* and *end* are interpreted as in slice\n notation. Return ``-1`` on failure.\n\nstr.rindex(sub[, start[, end]])\n\n Like ``rfind()`` but raises ``ValueError`` when the substring *sub*\n is not found.\n\nstr.rjust(width[, fillchar])\n\n Return the string right justified in a string of length *width*.\n Padding is done using the specified *fillchar* (default is a\n space). The original string is returned if *width* is less than\n ``len(s)``.\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.rpartition(sep)\n\n Split the string at the last occurrence of *sep*, and return a\n 3-tuple containing the part before the separator, the separator\n itself, and the part after the separator. If the separator is not\n found, return a 3-tuple containing two empty strings, followed by\n the string itself.\n\n New in version 2.5.\n\nstr.rsplit([sep[, maxsplit]])\n\n Return a list of the words in the string, using *sep* as the\n delimiter string. If *maxsplit* is given, at most *maxsplit* splits\n are done, the *rightmost* ones. If *sep* is not specified or\n ``None``, any whitespace string is a separator. Except for\n splitting from the right, ``rsplit()`` behaves like ``split()``\n which is described in detail below.\n\n New in version 2.4.\n\nstr.rstrip([chars])\n\n Return a copy of the string with trailing characters removed. The\n *chars* argument is a string specifying the set of characters to be\n removed. If omitted or ``None``, the *chars* argument defaults to\n removing whitespace. The *chars* argument is not a suffix; rather,\n all combinations of its values are stripped:\n\n >>> \' spacious \'.rstrip()\n \' spacious\'\n >>> \'mississippi\'.rstrip(\'ipz\')\n \'mississ\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.split([sep[, maxsplit]])\n\n Return a list of the words in the string, using *sep* as the\n delimiter string. If *maxsplit* is given, at most *maxsplit*\n splits are done (thus, the list will have at most ``maxsplit+1``\n elements). If *maxsplit* is not specified, then there is no limit\n on the number of splits (all possible splits are made).\n\n If *sep* is given, consecutive delimiters are not grouped together\n and are deemed to delimit empty strings (for example,\n ``\'1,,2\'.split(\',\')`` returns ``[\'1\', \'\', \'2\']``). The *sep*\n argument may consist of multiple characters (for example,\n ``\'1<>2<>3\'.split(\'<>\')`` returns ``[\'1\', \'2\', \'3\']``). Splitting\n an empty string with a specified separator returns ``[\'\']``.\n\n If *sep* is not specified or is ``None``, a different splitting\n algorithm is applied: runs of consecutive whitespace are regarded\n as a single separator, and the result will contain no empty strings\n at the start or end if the string has leading or trailing\n whitespace. 
Consequently, splitting an empty string or a string\n consisting of just whitespace with a ``None`` separator returns\n ``[]``.\n\n For example, ``\' 1 2 3 \'.split()`` returns ``[\'1\', \'2\', \'3\']``,\n and ``\' 1 2 3 \'.split(None, 1)`` returns ``[\'1\', \'2 3 \']``.\n\nstr.splitlines([keepends])\n\n Return a list of the lines in the string, breaking at line\n boundaries. Line breaks are not included in the resulting list\n unless *keepends* is given and true.\n\nstr.startswith(prefix[, start[, end]])\n\n Return ``True`` if string starts with the *prefix*, otherwise\n return ``False``. *prefix* can also be a tuple of prefixes to look\n for. With optional *start*, test string beginning at that\n position. With optional *end*, stop comparing string at that\n position.\n\n Changed in version 2.5: Accept tuples as *prefix*.\n\nstr.strip([chars])\n\n Return a copy of the string with the leading and trailing\n characters removed. The *chars* argument is a string specifying the\n set of characters to be removed. If omitted or ``None``, the\n *chars* argument defaults to removing whitespace. The *chars*\n argument is not a prefix or suffix; rather, all combinations of its\n values are stripped:\n\n >>> \' spacious \'.strip()\n \'spacious\'\n >>> \'www.example.com\'.strip(\'cmowz.\')\n \'example\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.swapcase()\n\n Return a copy of the string with uppercase characters converted to\n lowercase and vice versa.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.title()\n\n Return a titlecased version of the string where words start with an\n uppercase character and the remaining characters are lowercase.\n\n The algorithm uses a simple language-independent definition of a\n word as groups of consecutive letters. The definition works in\n many contexts but it means that apostrophes in contractions and\n possessives form word boundaries, which may not be the desired\n result:\n\n >>> "they\'re bill\'s friends from the UK".title()\n "They\'Re Bill\'S Friends From The Uk"\n\n A workaround for apostrophes can be constructed using regular\n expressions:\n\n >>> import re\n >>> def titlecase(s):\n return re.sub(r"[A-Za-z]+(\'[A-Za-z]+)?",\n lambda mo: mo.group(0)[0].upper() +\n mo.group(0)[1:].lower(),\n s)\n\n >>> titlecase("they\'re bill\'s friends.")\n "They\'re Bill\'s Friends."\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.translate(table[, deletechars])\n\n Return a copy of the string where all characters occurring in the\n optional argument *deletechars* are removed, and the remaining\n characters have been mapped through the given translation table,\n which must be a string of length 256.\n\n You can use the ``maketrans()`` helper function in the ``string``\n module to create a translation table. For string objects, set the\n *table* argument to ``None`` for translations that only delete\n characters:\n\n >>> \'read this short text\'.translate(None, \'aeiou\')\n \'rd ths shrt txt\'\n\n New in version 2.6: Support for a ``None`` *table* argument.\n\n For Unicode objects, the ``translate()`` method does not accept the\n optional *deletechars* argument. Instead, it returns a copy of the\n *s* where all characters have been mapped through the given\n translation table which must be a mapping of Unicode ordinals to\n Unicode ordinals, Unicode strings or ``None``. Unmapped characters\n are left untouched. 
Characters mapped to ``None`` are deleted.\n Note, a more flexible approach is to create a custom character\n mapping codec using the ``codecs`` module (see ``encodings.cp1251``\n for an example).\n\nstr.upper()\n\n Return a copy of the string converted to uppercase.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.zfill(width)\n\n Return the numeric string left filled with zeros in a string of\n length *width*. A sign prefix is handled correctly. The original\n string is returned if *width* is less than ``len(s)``.\n\n New in version 2.2.2.\n\nThe following methods are present only on unicode objects:\n\nunicode.isnumeric()\n\n Return ``True`` if there are only numeric characters in S,\n ``False`` otherwise. Numeric characters include digit characters,\n and all characters that have the Unicode numeric value property,\n e.g. U+2155, VULGAR FRACTION ONE FIFTH.\n\nunicode.isdecimal()\n\n Return ``True`` if there are only decimal characters in S,\n ``False`` otherwise. Decimal characters include digit characters,\n and all characters that that can be used to form decimal-radix\n numbers, e.g. U+0660, ARABIC-INDIC DIGIT ZERO.\n', - 'strings': u'\nString literals\n***************\n\nString literals are described by the following lexical definitions:\n\n stringliteral ::= [stringprefix](shortstring | longstring)\n stringprefix ::= "r" | "u" | "ur" | "R" | "U" | "UR" | "Ur" | "uR"\n shortstring ::= "\'" shortstringitem* "\'" | \'"\' shortstringitem* \'"\'\n longstring ::= "\'\'\'" longstringitem* "\'\'\'"\n | \'"""\' longstringitem* \'"""\'\n shortstringitem ::= shortstringchar | escapeseq\n longstringitem ::= longstringchar | escapeseq\n shortstringchar ::= \n longstringchar ::= \n escapeseq ::= "\\" \n\nOne syntactic restriction not indicated by these productions is that\nwhitespace is not allowed between the **stringprefix** and the rest of\nthe string literal. The source character set is defined by the\nencoding declaration; it is ASCII if no encoding declaration is given\nin the source file; see section *Encoding declarations*.\n\nIn plain English: String literals can be enclosed in matching single\nquotes (``\'``) or double quotes (``"``). They can also be enclosed in\nmatching groups of three single or double quotes (these are generally\nreferred to as *triple-quoted strings*). The backslash (``\\``)\ncharacter is used to escape characters that otherwise have a special\nmeaning, such as newline, backslash itself, or the quote character.\nString literals may optionally be prefixed with a letter ``\'r\'`` or\n``\'R\'``; such strings are called *raw strings* and use different rules\nfor interpreting backslash escape sequences. A prefix of ``\'u\'`` or\n``\'U\'`` makes the string a Unicode string. Unicode strings use the\nUnicode character set as defined by the Unicode Consortium and ISO\n10646. Some additional escape sequences, described below, are\navailable in Unicode strings. The two prefix characters may be\ncombined; in this case, ``\'u\'`` must appear before ``\'r\'``.\n\nIn triple-quoted strings, unescaped newlines and quotes are allowed\n(and are retained), except that three unescaped quotes in a row\nterminate the string. (A "quote" is the character used to open the\nstring, i.e. either ``\'`` or ``"``.)\n\nUnless an ``\'r\'`` or ``\'R\'`` prefix is present, escape sequences in\nstrings are interpreted according to rules similar to those used by\nStandard C. 
The recognized escape sequences are:\n\n+-------------------+-----------------------------------+---------+\n| Escape Sequence | Meaning | Notes |\n+===================+===================================+=========+\n| ``\\newline`` | Ignored | |\n+-------------------+-----------------------------------+---------+\n| ``\\\\`` | Backslash (``\\``) | |\n+-------------------+-----------------------------------+---------+\n| ``\\\'`` | Single quote (``\'``) | |\n+-------------------+-----------------------------------+---------+\n| ``\\"`` | Double quote (``"``) | |\n+-------------------+-----------------------------------+---------+\n| ``\\a`` | ASCII Bell (BEL) | |\n+-------------------+-----------------------------------+---------+\n| ``\\b`` | ASCII Backspace (BS) | |\n+-------------------+-----------------------------------+---------+\n| ``\\f`` | ASCII Formfeed (FF) | |\n+-------------------+-----------------------------------+---------+\n| ``\\n`` | ASCII Linefeed (LF) | |\n+-------------------+-----------------------------------+---------+\n| ``\\N{name}`` | Character named *name* in the | |\n| | Unicode database (Unicode only) | |\n+-------------------+-----------------------------------+---------+\n| ``\\r`` | ASCII Carriage Return (CR) | |\n+-------------------+-----------------------------------+---------+\n| ``\\t`` | ASCII Horizontal Tab (TAB) | |\n+-------------------+-----------------------------------+---------+\n| ``\\uxxxx`` | Character with 16-bit hex value | (1) |\n| | *xxxx* (Unicode only) | |\n+-------------------+-----------------------------------+---------+\n| ``\\Uxxxxxxxx`` | Character with 32-bit hex value | (2) |\n| | *xxxxxxxx* (Unicode only) | |\n+-------------------+-----------------------------------+---------+\n| ``\\v`` | ASCII Vertical Tab (VT) | |\n+-------------------+-----------------------------------+---------+\n| ``\\ooo`` | Character with octal value *ooo* | (3,5) |\n+-------------------+-----------------------------------+---------+\n| ``\\xhh`` | Character with hex value *hh* | (4,5) |\n+-------------------+-----------------------------------+---------+\n\nNotes:\n\n1. Individual code units which form parts of a surrogate pair can be\n encoded using this escape sequence.\n\n2. Any Unicode character can be encoded this way, but characters\n outside the Basic Multilingual Plane (BMP) will be encoded using a\n surrogate pair if Python is compiled to use 16-bit code units (the\n default). Individual code units which form parts of a surrogate\n pair can be encoded using this escape sequence.\n\n3. As in Standard C, up to three octal digits are accepted.\n\n4. Unlike in Standard C, exactly two hex digits are required.\n\n5. In a string literal, hexadecimal and octal escapes denote the byte\n with the given value; it is not necessary that the byte encodes a\n character in the source character set. In a Unicode literal, these\n escapes denote a Unicode character with the given value.\n\nUnlike Standard C, all unrecognized escape sequences are left in the\nstring unchanged, i.e., *the backslash is left in the string*. (This\nbehavior is useful when debugging: if an escape sequence is mistyped,\nthe resulting output is more easily recognized as broken.) 
It is also\nimportant to note that the escape sequences marked as "(Unicode only)"\nin the table above fall into the category of unrecognized escapes for\nnon-Unicode string literals.\n\nWhen an ``\'r\'`` or ``\'R\'`` prefix is present, a character following a\nbackslash is included in the string without change, and *all\nbackslashes are left in the string*. For example, the string literal\n``r"\\n"`` consists of two characters: a backslash and a lowercase\n``\'n\'``. String quotes can be escaped with a backslash, but the\nbackslash remains in the string; for example, ``r"\\""`` is a valid\nstring literal consisting of two characters: a backslash and a double\nquote; ``r"\\"`` is not a valid string literal (even a raw string\ncannot end in an odd number of backslashes). Specifically, *a raw\nstring cannot end in a single backslash* (since the backslash would\nescape the following quote character). Note also that a single\nbackslash followed by a newline is interpreted as those two characters\nas part of the string, *not* as a line continuation.\n\nWhen an ``\'r\'`` or ``\'R\'`` prefix is used in conjunction with a\n``\'u\'`` or ``\'U\'`` prefix, then the ``\\uXXXX`` and ``\\UXXXXXXXX``\nescape sequences are processed while *all other backslashes are left\nin the string*. For example, the string literal ``ur"\\u0062\\n"``\nconsists of three Unicode characters: \'LATIN SMALL LETTER B\', \'REVERSE\nSOLIDUS\', and \'LATIN SMALL LETTER N\'. Backslashes can be escaped with\na preceding backslash; however, both remain in the string. As a\nresult, ``\\uXXXX`` escape sequences are only recognized when there are\nan odd number of backslashes.\n', + 'string-methods': u'\nString Methods\n**************\n\nBelow are listed the string methods which both 8-bit strings and\nUnicode objects support. Some of them are also available on\n``bytearray`` objects.\n\nIn addition, Python\'s strings support the sequence type methods\ndescribed in the *Sequence Types --- str, unicode, list, tuple,\nbytearray, buffer, xrange* section. To output formatted strings use\ntemplate strings or the ``%`` operator described in the *String\nFormatting Operations* section. Also, see the ``re`` module for string\nfunctions based on regular expressions.\n\nstr.capitalize()\n\n Return a copy of the string with its first character capitalized\n and the rest lowercased.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.center(width[, fillchar])\n\n Return centered in a string of length *width*. Padding is done\n using the specified *fillchar* (default is a space).\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.count(sub[, start[, end]])\n\n Return the number of non-overlapping occurrences of substring *sub*\n in the range [*start*, *end*]. Optional arguments *start* and\n *end* are interpreted as in slice notation.\n\nstr.decode([encoding[, errors]])\n\n Decodes the string using the codec registered for *encoding*.\n *encoding* defaults to the default string encoding. *errors* may\n be given to set a different error handling scheme. 
The default is\n ``\'strict\'``, meaning that encoding errors raise ``UnicodeError``.\n Other possible values are ``\'ignore\'``, ``\'replace\'`` and any other\n name registered via ``codecs.register_error()``, see section *Codec\n Base Classes*.\n\n New in version 2.2.\n\n Changed in version 2.3: Support for other error handling schemes\n added.\n\n Changed in version 2.7: Support for keyword arguments added.\n\nstr.encode([encoding[, errors]])\n\n Return an encoded version of the string. Default encoding is the\n current default string encoding. *errors* may be given to set a\n different error handling scheme. The default for *errors* is\n ``\'strict\'``, meaning that encoding errors raise a\n ``UnicodeError``. Other possible values are ``\'ignore\'``,\n ``\'replace\'``, ``\'xmlcharrefreplace\'``, ``\'backslashreplace\'`` and\n any other name registered via ``codecs.register_error()``, see\n section *Codec Base Classes*. For a list of possible encodings, see\n section *Standard Encodings*.\n\n New in version 2.0.\n\n Changed in version 2.3: Support for ``\'xmlcharrefreplace\'`` and\n ``\'backslashreplace\'`` and other error handling schemes added.\n\n Changed in version 2.7: Support for keyword arguments added.\n\nstr.endswith(suffix[, start[, end]])\n\n Return ``True`` if the string ends with the specified *suffix*,\n otherwise return ``False``. *suffix* can also be a tuple of\n suffixes to look for. With optional *start*, test beginning at\n that position. With optional *end*, stop comparing at that\n position.\n\n Changed in version 2.5: Accept tuples as *suffix*.\n\nstr.expandtabs([tabsize])\n\n Return a copy of the string where all tab characters are replaced\n by one or more spaces, depending on the current column and the\n given tab size. The column number is reset to zero after each\n newline occurring in the string. If *tabsize* is not given, a tab\n size of ``8`` characters is assumed. This doesn\'t understand other\n non-printing characters or escape sequences.\n\nstr.find(sub[, start[, end]])\n\n Return the lowest index in the string where substring *sub* is\n found, such that *sub* is contained in the slice ``s[start:end]``.\n Optional arguments *start* and *end* are interpreted as in slice\n notation. Return ``-1`` if *sub* is not found.\n\n Note: The ``find()`` method should be used only if you need to know the\n position of *sub*. To check if *sub* is a substring or not, use\n the ``in`` operator:\n\n >>> \'Py\' in \'Python\'\n True\n\nstr.format(*args, **kwargs)\n\n Perform a string formatting operation. The string on which this\n method is called can contain literal text or replacement fields\n delimited by braces ``{}``. Each replacement field contains either\n the numeric index of a positional argument, or the name of a\n keyword argument. 
Returns a copy of the string where each\n replacement field is replaced with the string value of the\n corresponding argument.\n\n >>> "The sum of 1 + 2 is {0}".format(1+2)\n \'The sum of 1 + 2 is 3\'\n\n See *Format String Syntax* for a description of the various\n formatting options that can be specified in format strings.\n\n This method of string formatting is the new standard in Python 3.0,\n and should be preferred to the ``%`` formatting described in\n *String Formatting Operations* in new code.\n\n New in version 2.6.\n\nstr.index(sub[, start[, end]])\n\n Like ``find()``, but raise ``ValueError`` when the substring is not\n found.\n\nstr.isalnum()\n\n Return true if all characters in the string are alphanumeric and\n there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isalpha()\n\n Return true if all characters in the string are alphabetic and\n there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isdigit()\n\n Return true if all characters in the string are digits and there is\n at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.islower()\n\n Return true if all cased characters in the string are lowercase and\n there is at least one cased character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isspace()\n\n Return true if there are only whitespace characters in the string\n and there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.istitle()\n\n Return true if the string is a titlecased string and there is at\n least one character, for example uppercase characters may only\n follow uncased characters and lowercase characters only cased ones.\n Return false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isupper()\n\n Return true if all cased characters in the string are uppercase and\n there is at least one cased character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.join(iterable)\n\n Return a string which is the concatenation of the strings in the\n *iterable* *iterable*. The separator between elements is the\n string providing this method.\n\nstr.ljust(width[, fillchar])\n\n Return the string left justified in a string of length *width*.\n Padding is done using the specified *fillchar* (default is a\n space). The original string is returned if *width* is less than\n ``len(s)``.\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.lower()\n\n Return a copy of the string converted to lowercase.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.lstrip([chars])\n\n Return a copy of the string with leading characters removed. The\n *chars* argument is a string specifying the set of characters to be\n removed. If omitted or ``None``, the *chars* argument defaults to\n removing whitespace. The *chars* argument is not a prefix; rather,\n all combinations of its values are stripped:\n\n >>> \' spacious \'.lstrip()\n \'spacious \'\n >>> \'www.example.com\'.lstrip(\'cmowz.\')\n \'example.com\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.partition(sep)\n\n Split the string at the first occurrence of *sep*, and return a\n 3-tuple containing the part before the separator, the separator\n itself, and the part after the separator. 
If the separator is not\n found, return a 3-tuple containing the string itself, followed by\n two empty strings.\n\n New in version 2.5.\n\nstr.replace(old, new[, count])\n\n Return a copy of the string with all occurrences of substring *old*\n replaced by *new*. If the optional argument *count* is given, only\n the first *count* occurrences are replaced.\n\nstr.rfind(sub[, start[, end]])\n\n Return the highest index in the string where substring *sub* is\n found, such that *sub* is contained within ``s[start:end]``.\n Optional arguments *start* and *end* are interpreted as in slice\n notation. Return ``-1`` on failure.\n\nstr.rindex(sub[, start[, end]])\n\n Like ``rfind()`` but raises ``ValueError`` when the substring *sub*\n is not found.\n\nstr.rjust(width[, fillchar])\n\n Return the string right justified in a string of length *width*.\n Padding is done using the specified *fillchar* (default is a\n space). The original string is returned if *width* is less than\n ``len(s)``.\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.rpartition(sep)\n\n Split the string at the last occurrence of *sep*, and return a\n 3-tuple containing the part before the separator, the separator\n itself, and the part after the separator. If the separator is not\n found, return a 3-tuple containing two empty strings, followed by\n the string itself.\n\n New in version 2.5.\n\nstr.rsplit([sep[, maxsplit]])\n\n Return a list of the words in the string, using *sep* as the\n delimiter string. If *maxsplit* is given, at most *maxsplit* splits\n are done, the *rightmost* ones. If *sep* is not specified or\n ``None``, any whitespace string is a separator. Except for\n splitting from the right, ``rsplit()`` behaves like ``split()``\n which is described in detail below.\n\n New in version 2.4.\n\nstr.rstrip([chars])\n\n Return a copy of the string with trailing characters removed. The\n *chars* argument is a string specifying the set of characters to be\n removed. If omitted or ``None``, the *chars* argument defaults to\n removing whitespace. The *chars* argument is not a suffix; rather,\n all combinations of its values are stripped:\n\n >>> \' spacious \'.rstrip()\n \' spacious\'\n >>> \'mississippi\'.rstrip(\'ipz\')\n \'mississ\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.split([sep[, maxsplit]])\n\n Return a list of the words in the string, using *sep* as the\n delimiter string. If *maxsplit* is given, at most *maxsplit*\n splits are done (thus, the list will have at most ``maxsplit+1``\n elements). If *maxsplit* is not specified, then there is no limit\n on the number of splits (all possible splits are made).\n\n If *sep* is given, consecutive delimiters are not grouped together\n and are deemed to delimit empty strings (for example,\n ``\'1,,2\'.split(\',\')`` returns ``[\'1\', \'\', \'2\']``). The *sep*\n argument may consist of multiple characters (for example,\n ``\'1<>2<>3\'.split(\'<>\')`` returns ``[\'1\', \'2\', \'3\']``). Splitting\n an empty string with a specified separator returns ``[\'\']``.\n\n If *sep* is not specified or is ``None``, a different splitting\n algorithm is applied: runs of consecutive whitespace are regarded\n as a single separator, and the result will contain no empty strings\n at the start or end if the string has leading or trailing\n whitespace. 
Consequently, splitting an empty string or a string\n consisting of just whitespace with a ``None`` separator returns\n ``[]``.\n\n For example, ``\' 1 2 3 \'.split()`` returns ``[\'1\', \'2\', \'3\']``,\n and ``\' 1 2 3 \'.split(None, 1)`` returns ``[\'1\', \'2 3 \']``.\n\nstr.splitlines([keepends])\n\n Return a list of the lines in the string, breaking at line\n boundaries. Line breaks are not included in the resulting list\n unless *keepends* is given and true.\n\nstr.startswith(prefix[, start[, end]])\n\n Return ``True`` if string starts with the *prefix*, otherwise\n return ``False``. *prefix* can also be a tuple of prefixes to look\n for. With optional *start*, test string beginning at that\n position. With optional *end*, stop comparing string at that\n position.\n\n Changed in version 2.5: Accept tuples as *prefix*.\n\nstr.strip([chars])\n\n Return a copy of the string with the leading and trailing\n characters removed. The *chars* argument is a string specifying the\n set of characters to be removed. If omitted or ``None``, the\n *chars* argument defaults to removing whitespace. The *chars*\n argument is not a prefix or suffix; rather, all combinations of its\n values are stripped:\n\n >>> \' spacious \'.strip()\n \'spacious\'\n >>> \'www.example.com\'.strip(\'cmowz.\')\n \'example\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.swapcase()\n\n Return a copy of the string with uppercase characters converted to\n lowercase and vice versa.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.title()\n\n Return a titlecased version of the string where words start with an\n uppercase character and the remaining characters are lowercase.\n\n The algorithm uses a simple language-independent definition of a\n word as groups of consecutive letters. The definition works in\n many contexts but it means that apostrophes in contractions and\n possessives form word boundaries, which may not be the desired\n result:\n\n >>> "they\'re bill\'s friends from the UK".title()\n "They\'Re Bill\'S Friends From The Uk"\n\n A workaround for apostrophes can be constructed using regular\n expressions:\n\n >>> import re\n >>> def titlecase(s):\n return re.sub(r"[A-Za-z]+(\'[A-Za-z]+)?",\n lambda mo: mo.group(0)[0].upper() +\n mo.group(0)[1:].lower(),\n s)\n\n >>> titlecase("they\'re bill\'s friends.")\n "They\'re Bill\'s Friends."\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.translate(table[, deletechars])\n\n Return a copy of the string where all characters occurring in the\n optional argument *deletechars* are removed, and the remaining\n characters have been mapped through the given translation table,\n which must be a string of length 256.\n\n You can use the ``maketrans()`` helper function in the ``string``\n module to create a translation table. For string objects, set the\n *table* argument to ``None`` for translations that only delete\n characters:\n\n >>> \'read this short text\'.translate(None, \'aeiou\')\n \'rd ths shrt txt\'\n\n New in version 2.6: Support for a ``None`` *table* argument.\n\n For Unicode objects, the ``translate()`` method does not accept the\n optional *deletechars* argument. Instead, it returns a copy of the\n *s* where all characters have been mapped through the given\n translation table which must be a mapping of Unicode ordinals to\n Unicode ordinals, Unicode strings or ``None``. Unmapped characters\n are left untouched. 
Characters mapped to ``None`` are deleted.\n Note, a more flexible approach is to create a custom character\n mapping codec using the ``codecs`` module (see ``encodings.cp1251``\n for an example).\n\nstr.upper()\n\n Return a copy of the string converted to uppercase.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.zfill(width)\n\n Return the numeric string left filled with zeros in a string of\n length *width*. A sign prefix is handled correctly. The original\n string is returned if *width* is less than ``len(s)``.\n\n New in version 2.2.2.\n\nThe following methods are present only on unicode objects:\n\nunicode.isnumeric()\n\n Return ``True`` if there are only numeric characters in S,\n ``False`` otherwise. Numeric characters include digit characters,\n and all characters that have the Unicode numeric value property,\n e.g. U+2155, VULGAR FRACTION ONE FIFTH.\n\nunicode.isdecimal()\n\n Return ``True`` if there are only decimal characters in S,\n ``False`` otherwise. Decimal characters include digit characters,\n and all characters that that can be used to form decimal-radix\n numbers, e.g. U+0660, ARABIC-INDIC DIGIT ZERO.\n', + 'strings': u'\nString literals\n***************\n\nString literals are described by the following lexical definitions:\n\n stringliteral ::= [stringprefix](shortstring | longstring)\n stringprefix ::= "r" | "u" | "ur" | "R" | "U" | "UR" | "Ur" | "uR"\n | "b" | "B" | "br" | "Br" | "bR" | "BR"\n shortstring ::= "\'" shortstringitem* "\'" | \'"\' shortstringitem* \'"\'\n longstring ::= "\'\'\'" longstringitem* "\'\'\'"\n | \'"""\' longstringitem* \'"""\'\n shortstringitem ::= shortstringchar | escapeseq\n longstringitem ::= longstringchar | escapeseq\n shortstringchar ::= \n longstringchar ::= \n escapeseq ::= "\\" \n\nOne syntactic restriction not indicated by these productions is that\nwhitespace is not allowed between the **stringprefix** and the rest of\nthe string literal. The source character set is defined by the\nencoding declaration; it is ASCII if no encoding declaration is given\nin the source file; see section *Encoding declarations*.\n\nIn plain English: String literals can be enclosed in matching single\nquotes (``\'``) or double quotes (``"``). They can also be enclosed in\nmatching groups of three single or double quotes (these are generally\nreferred to as *triple-quoted strings*). The backslash (``\\``)\ncharacter is used to escape characters that otherwise have a special\nmeaning, such as newline, backslash itself, or the quote character.\nString literals may optionally be prefixed with a letter ``\'r\'`` or\n``\'R\'``; such strings are called *raw strings* and use different rules\nfor interpreting backslash escape sequences. A prefix of ``\'u\'`` or\n``\'U\'`` makes the string a Unicode string. Unicode strings use the\nUnicode character set as defined by the Unicode Consortium and ISO\n10646. Some additional escape sequences, described below, are\navailable in Unicode strings. A prefix of ``\'b\'`` or ``\'B\'`` is\nignored in Python 2; it indicates that the literal should become a\nbytes literal in Python 3 (e.g. when code is automatically converted\nwith 2to3). A ``\'u\'`` or ``\'b\'`` prefix may be followed by an ``\'r\'``\nprefix.\n\nIn triple-quoted strings, unescaped newlines and quotes are allowed\n(and are retained), except that three unescaped quotes in a row\nterminate the string. (A "quote" is the character used to open the\nstring, i.e. 
either ``\'`` or ``"``.)\n\nUnless an ``\'r\'`` or ``\'R\'`` prefix is present, escape sequences in\nstrings are interpreted according to rules similar to those used by\nStandard C. The recognized escape sequences are:\n\n+-------------------+-----------------------------------+---------+\n| Escape Sequence | Meaning | Notes |\n+===================+===================================+=========+\n| ``\\newline`` | Ignored | |\n+-------------------+-----------------------------------+---------+\n| ``\\\\`` | Backslash (``\\``) | |\n+-------------------+-----------------------------------+---------+\n| ``\\\'`` | Single quote (``\'``) | |\n+-------------------+-----------------------------------+---------+\n| ``\\"`` | Double quote (``"``) | |\n+-------------------+-----------------------------------+---------+\n| ``\\a`` | ASCII Bell (BEL) | |\n+-------------------+-----------------------------------+---------+\n| ``\\b`` | ASCII Backspace (BS) | |\n+-------------------+-----------------------------------+---------+\n| ``\\f`` | ASCII Formfeed (FF) | |\n+-------------------+-----------------------------------+---------+\n| ``\\n`` | ASCII Linefeed (LF) | |\n+-------------------+-----------------------------------+---------+\n| ``\\N{name}`` | Character named *name* in the | |\n| | Unicode database (Unicode only) | |\n+-------------------+-----------------------------------+---------+\n| ``\\r`` | ASCII Carriage Return (CR) | |\n+-------------------+-----------------------------------+---------+\n| ``\\t`` | ASCII Horizontal Tab (TAB) | |\n+-------------------+-----------------------------------+---------+\n| ``\\uxxxx`` | Character with 16-bit hex value | (1) |\n| | *xxxx* (Unicode only) | |\n+-------------------+-----------------------------------+---------+\n| ``\\Uxxxxxxxx`` | Character with 32-bit hex value | (2) |\n| | *xxxxxxxx* (Unicode only) | |\n+-------------------+-----------------------------------+---------+\n| ``\\v`` | ASCII Vertical Tab (VT) | |\n+-------------------+-----------------------------------+---------+\n| ``\\ooo`` | Character with octal value *ooo* | (3,5) |\n+-------------------+-----------------------------------+---------+\n| ``\\xhh`` | Character with hex value *hh* | (4,5) |\n+-------------------+-----------------------------------+---------+\n\nNotes:\n\n1. Individual code units which form parts of a surrogate pair can be\n encoded using this escape sequence.\n\n2. Any Unicode character can be encoded this way, but characters\n outside the Basic Multilingual Plane (BMP) will be encoded using a\n surrogate pair if Python is compiled to use 16-bit code units (the\n default). Individual code units which form parts of a surrogate\n pair can be encoded using this escape sequence.\n\n3. As in Standard C, up to three octal digits are accepted.\n\n4. Unlike in Standard C, exactly two hex digits are required.\n\n5. In a string literal, hexadecimal and octal escapes denote the byte\n with the given value; it is not necessary that the byte encodes a\n character in the source character set. In a Unicode literal, these\n escapes denote a Unicode character with the given value.\n\nUnlike Standard C, all unrecognized escape sequences are left in the\nstring unchanged, i.e., *the backslash is left in the string*. (This\nbehavior is useful when debugging: if an escape sequence is mistyped,\nthe resulting output is more easily recognized as broken.) 
It is also\nimportant to note that the escape sequences marked as "(Unicode only)"\nin the table above fall into the category of unrecognized escapes for\nnon-Unicode string literals.\n\nWhen an ``\'r\'`` or ``\'R\'`` prefix is present, a character following a\nbackslash is included in the string without change, and *all\nbackslashes are left in the string*. For example, the string literal\n``r"\\n"`` consists of two characters: a backslash and a lowercase\n``\'n\'``. String quotes can be escaped with a backslash, but the\nbackslash remains in the string; for example, ``r"\\""`` is a valid\nstring literal consisting of two characters: a backslash and a double\nquote; ``r"\\"`` is not a valid string literal (even a raw string\ncannot end in an odd number of backslashes). Specifically, *a raw\nstring cannot end in a single backslash* (since the backslash would\nescape the following quote character). Note also that a single\nbackslash followed by a newline is interpreted as those two characters\nas part of the string, *not* as a line continuation.\n\nWhen an ``\'r\'`` or ``\'R\'`` prefix is used in conjunction with a\n``\'u\'`` or ``\'U\'`` prefix, then the ``\\uXXXX`` and ``\\UXXXXXXXX``\nescape sequences are processed while *all other backslashes are left\nin the string*. For example, the string literal ``ur"\\u0062\\n"``\nconsists of three Unicode characters: \'LATIN SMALL LETTER B\', \'REVERSE\nSOLIDUS\', and \'LATIN SMALL LETTER N\'. Backslashes can be escaped with\na preceding backslash; however, both remain in the string. As a\nresult, ``\\uXXXX`` escape sequences are only recognized when there are\nan odd number of backslashes.\n', 'subscriptions': u'\nSubscriptions\n*************\n\nA subscription selects an item of a sequence (string, tuple or list)\nor mapping (dictionary) object:\n\n subscription ::= primary "[" expression_list "]"\n\nThe primary must evaluate to an object of a sequence or mapping type.\n\nIf the primary is a mapping, the expression list must evaluate to an\nobject whose value is one of the keys of the mapping, and the\nsubscription selects the value in the mapping that corresponds to that\nkey. (The expression list is a tuple except if it has exactly one\nitem.)\n\nIf the primary is a sequence, the expression (list) must evaluate to a\nplain integer. If this value is negative, the length of the sequence\nis added to it (so that, e.g., ``x[-1]`` selects the last item of\n``x``.) The resulting value must be a nonnegative integer less than\nthe number of items in the sequence, and the subscription selects the\nitem whose index is that value (counting from zero).\n\nA string\'s items are characters. A character is not a separate data\ntype but a string of exactly one character.\n', 'truth': u"\nTruth Value Testing\n*******************\n\nAny object can be tested for truth value, for use in an ``if`` or\n``while`` condition or as operand of the Boolean operations below. The\nfollowing values are considered false:\n\n* ``None``\n\n* ``False``\n\n* zero of any numeric type, for example, ``0``, ``0L``, ``0.0``,\n ``0j``.\n\n* any empty sequence, for example, ``''``, ``()``, ``[]``.\n\n* any empty mapping, for example, ``{}``.\n\n* instances of user-defined classes, if the class defines a\n ``__nonzero__()`` or ``__len__()`` method, when that method returns\n the integer zero or ``bool`` value ``False``. 
[1]\n\nAll other values are considered true --- so objects of many types are\nalways true.\n\nOperations and built-in functions that have a Boolean result always\nreturn ``0`` or ``False`` for false and ``1`` or ``True`` for true,\nunless otherwise stated. (Important exception: the Boolean operations\n``or`` and ``and`` always return one of their operands.)\n", 'try': u'\nThe ``try`` statement\n*********************\n\nThe ``try`` statement specifies exception handlers and/or cleanup code\nfor a group of statements:\n\n try_stmt ::= try1_stmt | try2_stmt\n try1_stmt ::= "try" ":" suite\n ("except" [expression [("as" | ",") target]] ":" suite)+\n ["else" ":" suite]\n ["finally" ":" suite]\n try2_stmt ::= "try" ":" suite\n "finally" ":" suite\n\nChanged in version 2.5: In previous versions of Python,\n``try``...``except``...``finally`` did not work. ``try``...``except``\nhad to be nested in ``try``...``finally``.\n\nThe ``except`` clause(s) specify one or more exception handlers. When\nno exception occurs in the ``try`` clause, no exception handler is\nexecuted. When an exception occurs in the ``try`` suite, a search for\nan exception handler is started. This search inspects the except\nclauses in turn until one is found that matches the exception. An\nexpression-less except clause, if present, must be last; it matches\nany exception. For an except clause with an expression, that\nexpression is evaluated, and the clause matches the exception if the\nresulting object is "compatible" with the exception. An object is\ncompatible with an exception if it is the class or a base class of the\nexception object, a tuple containing an item compatible with the\nexception, or, in the (deprecated) case of string exceptions, is the\nraised string itself (note that the object identities must match, i.e.\nit must be the same string object, not just a string with the same\nvalue).\n\nIf no except clause matches the exception, the search for an exception\nhandler continues in the surrounding code and on the invocation stack.\n[1]\n\nIf the evaluation of an expression in the header of an except clause\nraises an exception, the original search for a handler is canceled and\na search starts for the new exception in the surrounding code and on\nthe call stack (it is treated as if the entire ``try`` statement\nraised the exception).\n\nWhen a matching except clause is found, the exception is assigned to\nthe target specified in that except clause, if present, and the except\nclause\'s suite is executed. All except clauses must have an\nexecutable block. When the end of this block is reached, execution\ncontinues normally after the entire try statement. (This means that\nif two nested handlers exist for the same exception, and the exception\noccurs in the try clause of the inner handler, the outer handler will\nnot handle the exception.)\n\nBefore an except clause\'s suite is executed, details about the\nexception are assigned to three variables in the ``sys`` module:\n``sys.exc_type`` receives the object identifying the exception;\n``sys.exc_value`` receives the exception\'s parameter;\n``sys.exc_traceback`` receives a traceback object (see section *The\nstandard type hierarchy*) identifying the point in the program where\nthe exception occurred. These details are also available through the\n``sys.exc_info()`` function, which returns a tuple ``(exc_type,\nexc_value, exc_traceback)``. Use of the corresponding variables is\ndeprecated in favor of this function, since their use is unsafe in a\nthreaded program. 
As of Python 1.5, the variables are restored to\ntheir previous values (before the call) when returning from a function\nthat handled an exception.\n\nThe optional ``else`` clause is executed if and when control flows off\nthe end of the ``try`` clause. [2] Exceptions in the ``else`` clause\nare not handled by the preceding ``except`` clauses.\n\nIf ``finally`` is present, it specifies a \'cleanup\' handler. The\n``try`` clause is executed, including any ``except`` and ``else``\nclauses. If an exception occurs in any of the clauses and is not\nhandled, the exception is temporarily saved. The ``finally`` clause is\nexecuted. If there is a saved exception, it is re-raised at the end\nof the ``finally`` clause. If the ``finally`` clause raises another\nexception or executes a ``return`` or ``break`` statement, the saved\nexception is lost. The exception information is not available to the\nprogram during execution of the ``finally`` clause.\n\nWhen a ``return``, ``break`` or ``continue`` statement is executed in\nthe ``try`` suite of a ``try``...``finally`` statement, the\n``finally`` clause is also executed \'on the way out.\' A ``continue``\nstatement is illegal in the ``finally`` clause. (The reason is a\nproblem with the current implementation --- this restriction may be\nlifted in the future).\n\nAdditional information on exceptions can be found in section\n*Exceptions*, and information on using the ``raise`` statement to\ngenerate exceptions may be found in section *The raise statement*.\n', - 'types': u'\nThe standard type hierarchy\n***************************\n\nBelow is a list of the types that are built into Python. Extension\nmodules (written in C, Java, or other languages, depending on the\nimplementation) can define additional types. Future versions of\nPython may add types to the type hierarchy (e.g., rational numbers,\nefficiently stored arrays of integers, etc.).\n\nSome of the type descriptions below contain a paragraph listing\n\'special attributes.\' These are attributes that provide access to the\nimplementation and are not intended for general use. Their definition\nmay change in the future.\n\nNone\n This type has a single value. There is a single object with this\n value. This object is accessed through the built-in name ``None``.\n It is used to signify the absence of a value in many situations,\n e.g., it is returned from functions that don\'t explicitly return\n anything. Its truth value is false.\n\nNotImplemented\n This type has a single value. There is a single object with this\n value. This object is accessed through the built-in name\n ``NotImplemented``. Numeric methods and rich comparison methods may\n return this value if they do not implement the operation for the\n operands provided. (The interpreter will then try the reflected\n operation, or some other fallback, depending on the operator.) Its\n truth value is true.\n\nEllipsis\n This type has a single value. There is a single object with this\n value. This object is accessed through the built-in name\n ``Ellipsis``. It is used to indicate the presence of the ``...``\n syntax in a slice. Its truth value is true.\n\n``numbers.Number``\n These are created by numeric literals and returned as results by\n arithmetic operators and arithmetic built-in functions. 
Numeric\n objects are immutable; once created their value never changes.\n Python numbers are of course strongly related to mathematical\n numbers, but subject to the limitations of numerical representation\n in computers.\n\n Python distinguishes between integers, floating point numbers, and\n complex numbers:\n\n ``numbers.Integral``\n These represent elements from the mathematical set of integers\n (positive and negative).\n\n There are three types of integers:\n\n Plain integers\n These represent numbers in the range -2147483648 through\n 2147483647. (The range may be larger on machines with a\n larger natural word size, but not smaller.) When the result\n of an operation would fall outside this range, the result is\n normally returned as a long integer (in some cases, the\n exception ``OverflowError`` is raised instead). For the\n purpose of shift and mask operations, integers are assumed to\n have a binary, 2\'s complement notation using 32 or more bits,\n and hiding no bits from the user (i.e., all 4294967296\n different bit patterns correspond to different values).\n\n Long integers\n These represent numbers in an unlimited range, subject to\n available (virtual) memory only. For the purpose of shift\n and mask operations, a binary representation is assumed, and\n negative numbers are represented in a variant of 2\'s\n complement which gives the illusion of an infinite string of\n sign bits extending to the left.\n\n Booleans\n These represent the truth values False and True. The two\n objects representing the values False and True are the only\n Boolean objects. The Boolean type is a subtype of plain\n integers, and Boolean values behave like the values 0 and 1,\n respectively, in almost all contexts, the exception being\n that when converted to a string, the strings ``"False"`` or\n ``"True"`` are returned, respectively.\n\n The rules for integer representation are intended to give the\n most meaningful interpretation of shift and mask operations\n involving negative integers and the least surprises when\n switching between the plain and long integer domains. Any\n operation, if it yields a result in the plain integer domain,\n will yield the same result in the long integer domain or when\n using mixed operands. The switch between domains is transparent\n to the programmer.\n\n ``numbers.Real`` (``float``)\n These represent machine-level double precision floating point\n numbers. You are at the mercy of the underlying machine\n architecture (and C or Java implementation) for the accepted\n range and handling of overflow. Python does not support single-\n precision floating point numbers; the savings in processor and\n memory usage that are usually the reason for using these is\n dwarfed by the overhead of using objects in Python, so there is\n no reason to complicate the language with two kinds of floating\n point numbers.\n\n ``numbers.Complex``\n These represent complex numbers as a pair of machine-level\n double precision floating point numbers. The same caveats apply\n as for floating point numbers. The real and imaginary parts of a\n complex number ``z`` can be retrieved through the read-only\n attributes ``z.real`` and ``z.imag``.\n\nSequences\n These represent finite ordered sets indexed by non-negative\n numbers. The built-in function ``len()`` returns the number of\n items of a sequence. When the length of a sequence is *n*, the\n index set contains the numbers 0, 1, ..., *n*-1. 
Item *i* of\n sequence *a* is selected by ``a[i]``.\n\n Sequences also support slicing: ``a[i:j]`` selects all items with\n index *k* such that *i* ``<=`` *k* ``<`` *j*. When used as an\n expression, a slice is a sequence of the same type. This implies\n that the index set is renumbered so that it starts at 0.\n\n Some sequences also support "extended slicing" with a third "step"\n parameter: ``a[i:j:k]`` selects all items of *a* with index *x*\n where ``x = i + n*k``, *n* ``>=`` ``0`` and *i* ``<=`` *x* ``<``\n *j*.\n\n Sequences are distinguished according to their mutability:\n\n Immutable sequences\n An object of an immutable sequence type cannot change once it is\n created. (If the object contains references to other objects,\n these other objects may be mutable and may be changed; however,\n the collection of objects directly referenced by an immutable\n object cannot change.)\n\n The following types are immutable sequences:\n\n Strings\n The items of a string are characters. There is no separate\n character type; a character is represented by a string of one\n item. Characters represent (at least) 8-bit bytes. The\n built-in functions ``chr()`` and ``ord()`` convert between\n characters and nonnegative integers representing the byte\n values. Bytes with the values 0-127 usually represent the\n corresponding ASCII values, but the interpretation of values\n is up to the program. The string data type is also used to\n represent arrays of bytes, e.g., to hold data read from a\n file.\n\n (On systems whose native character set is not ASCII, strings\n may use EBCDIC in their internal representation, provided the\n functions ``chr()`` and ``ord()`` implement a mapping between\n ASCII and EBCDIC, and string comparison preserves the ASCII\n order. Or perhaps someone can propose a better rule?)\n\n Unicode\n The items of a Unicode object are Unicode code units. A\n Unicode code unit is represented by a Unicode object of one\n item and can hold either a 16-bit or 32-bit value\n representing a Unicode ordinal (the maximum value for the\n ordinal is given in ``sys.maxunicode``, and depends on how\n Python is configured at compile time). Surrogate pairs may\n be present in the Unicode object, and will be reported as two\n separate items. The built-in functions ``unichr()`` and\n ``ord()`` convert between code units and nonnegative integers\n representing the Unicode ordinals as defined in the Unicode\n Standard 3.0. Conversion from and to other encodings are\n possible through the Unicode method ``encode()`` and the\n built-in function ``unicode()``.\n\n Tuples\n The items of a tuple are arbitrary Python objects. Tuples of\n two or more items are formed by comma-separated lists of\n expressions. A tuple of one item (a \'singleton\') can be\n formed by affixing a comma to an expression (an expression by\n itself does not create a tuple, since parentheses must be\n usable for grouping of expressions). An empty tuple can be\n formed by an empty pair of parentheses.\n\n Mutable sequences\n Mutable sequences can be changed after they are created. The\n subscription and slicing notations can be used as the target of\n assignment and ``del`` (delete) statements.\n\n There are currently two intrinsic mutable sequence types:\n\n Lists\n The items of a list are arbitrary Python objects. Lists are\n formed by placing a comma-separated list of expressions in\n square brackets. 
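A short illustrative session for the slicing and tuple rules above (the list ``a`` is a made-up example value)::

    >>> a = [0, 1, 2, 3, 4, 5]
    >>> a[1:4]                    # items with index k such that 1 <= k < 4
    [1, 2, 3]
    >>> a[0:6:2]                  # extended slicing with a step of 2
    [0, 2, 4]
    >>> (42,)                     # a one-item tuple needs the trailing comma
    (42,)
    >>> chr(65), ord('A')         # characters are just one-item strings
    ('A', 65)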
(Note that there are no special cases needed\n to form lists of length 0 or 1.)\n\n Byte Arrays\n A bytearray object is a mutable array. They are created by\n the built-in ``bytearray()`` constructor. Aside from being\n mutable (and hence unhashable), byte arrays otherwise provide\n the same interface and functionality as immutable bytes\n objects.\n\n The extension module ``array`` provides an additional example of\n a mutable sequence type.\n\nSet types\n These represent unordered, finite sets of unique, immutable\n objects. As such, they cannot be indexed by any subscript. However,\n they can be iterated over, and the built-in function ``len()``\n returns the number of items in a set. Common uses for sets are fast\n membership testing, removing duplicates from a sequence, and\n computing mathematical operations such as intersection, union,\n difference, and symmetric difference.\n\n For set elements, the same immutability rules apply as for\n dictionary keys. Note that numeric types obey the normal rules for\n numeric comparison: if two numbers compare equal (e.g., ``1`` and\n ``1.0``), only one of them can be contained in a set.\n\n There are currently two intrinsic set types:\n\n Sets\n These represent a mutable set. They are created by the built-in\n ``set()`` constructor and can be modified afterwards by several\n methods, such as ``add()``.\n\n Frozen sets\n These represent an immutable set. They are created by the\n built-in ``frozenset()`` constructor. As a frozenset is\n immutable and *hashable*, it can be used again as an element of\n another set, or as a dictionary key.\n\nMappings\n These represent finite sets of objects indexed by arbitrary index\n sets. The subscript notation ``a[k]`` selects the item indexed by\n ``k`` from the mapping ``a``; this can be used in expressions and\n as the target of assignments or ``del`` statements. The built-in\n function ``len()`` returns the number of items in a mapping.\n\n There is currently a single intrinsic mapping type:\n\n Dictionaries\n These represent finite sets of objects indexed by nearly\n arbitrary values. The only types of values not acceptable as\n keys are values containing lists or dictionaries or other\n mutable types that are compared by value rather than by object\n identity, the reason being that the efficient implementation of\n dictionaries requires a key\'s hash value to remain constant.\n Numeric types used for keys obey the normal rules for numeric\n comparison: if two numbers compare equal (e.g., ``1`` and\n ``1.0``) then they can be used interchangeably to index the same\n dictionary entry.\n\n Dictionaries are mutable; they can be created by the ``{...}``\n notation (see section *Dictionary displays*).\n\n The extension modules ``dbm``, ``gdbm``, and ``bsddb`` provide\n additional examples of mapping types.\n\nCallable types\n These are the types to which the function call operation (see\n section *Calls*) can be applied:\n\n User-defined functions\n A user-defined function object is created by a function\n definition (see section *Function definitions*). 
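The mutability and hashing rules above might look like this in a CPython 2.x session (``b``, ``s`` and the sample dictionaries are invented for illustration)::

    >>> b = bytearray(b'spam')
    >>> b[0] = ord('S'); b        # mutable, unlike a plain string
    bytearray(b'Spam')
    >>> s = set([1, 2, 2, 3])
    >>> s.add(4); sorted(s)
    [1, 2, 3, 4]
    >>> {frozenset([1, 2]): 'ok'}[frozenset([2, 1])]   # frozensets are hashable
    'ok'
    >>> {1: 'one'}[1.0]           # 1 and 1.0 index the same dictionary entry
    'one'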
It should be\n called with an argument list containing the same number of items\n as the function\'s formal parameter list.\n\n Special attributes:\n\n +-------------------------+---------------------------------+-------------+\n | Attribute | Meaning | |\n +=========================+=================================+=============+\n | ``func_doc`` | The function\'s documentation | Writable |\n | | string, or ``None`` if | |\n | | unavailable | |\n +-------------------------+---------------------------------+-------------+\n | ``__doc__`` | Another way of spelling | Writable |\n | | ``func_doc`` | |\n +-------------------------+---------------------------------+-------------+\n | ``func_name`` | The function\'s name | Writable |\n +-------------------------+---------------------------------+-------------+\n | ``__name__`` | Another way of spelling | Writable |\n | | ``func_name`` | |\n +-------------------------+---------------------------------+-------------+\n | ``__module__`` | The name of the module the | Writable |\n | | function was defined in, or | |\n | | ``None`` if unavailable. | |\n +-------------------------+---------------------------------+-------------+\n | ``func_defaults`` | A tuple containing default | Writable |\n | | argument values for those | |\n | | arguments that have defaults, | |\n | | or ``None`` if no arguments | |\n | | have a default value | |\n +-------------------------+---------------------------------+-------------+\n | ``func_code`` | The code object representing | Writable |\n | | the compiled function body. | |\n +-------------------------+---------------------------------+-------------+\n | ``func_globals`` | A reference to the dictionary | Read-only |\n | | that holds the function\'s | |\n | | global variables --- the global | |\n | | namespace of the module in | |\n | | which the function was defined. | |\n +-------------------------+---------------------------------+-------------+\n | ``func_dict`` | The namespace supporting | Writable |\n | | arbitrary function attributes. | |\n +-------------------------+---------------------------------+-------------+\n | ``func_closure`` | ``None`` or a tuple of cells | Read-only |\n | | that contain bindings for the | |\n | | function\'s free variables. | |\n +-------------------------+---------------------------------+-------------+\n\n Most of the attributes labelled "Writable" check the type of the\n assigned value.\n\n Changed in version 2.4: ``func_name`` is now writable.\n\n Function objects also support getting and setting arbitrary\n attributes, which can be used, for example, to attach metadata\n to functions. Regular attribute dot-notation is used to get and\n set such attributes. *Note that the current implementation only\n supports function attributes on user-defined functions. 
Function\n attributes on built-in functions may be supported in the\n future.*\n\n Additional information about a function\'s definition can be\n retrieved from its code object; see the description of internal\n types below.\n\n User-defined methods\n A user-defined method object combines a class, a class instance\n (or ``None``) and any callable object (normally a user-defined\n function).\n\n Special read-only attributes: ``im_self`` is the class instance\n object, ``im_func`` is the function object; ``im_class`` is the\n class of ``im_self`` for bound methods or the class that asked\n for the method for unbound methods; ``__doc__`` is the method\'s\n documentation (same as ``im_func.__doc__``); ``__name__`` is the\n method name (same as ``im_func.__name__``); ``__module__`` is\n the name of the module the method was defined in, or ``None`` if\n unavailable.\n\n Changed in version 2.2: ``im_self`` used to refer to the class\n that defined the method.\n\n Changed in version 2.6: For 3.0 forward-compatibility,\n ``im_func`` is also available as ``__func__``, and ``im_self``\n as ``__self__``.\n\n Methods also support accessing (but not setting) the arbitrary\n function attributes on the underlying function object.\n\n User-defined method objects may be created when getting an\n attribute of a class (perhaps via an instance of that class), if\n that attribute is a user-defined function object, an unbound\n user-defined method object, or a class method object. When the\n attribute is a user-defined method object, a new method object\n is only created if the class from which it is being retrieved is\n the same as, or a derived class of, the class stored in the\n original method object; otherwise, the original method object is\n used as it is.\n\n When a user-defined method object is created by retrieving a\n user-defined function object from a class, its ``im_self``\n attribute is ``None`` and the method object is said to be\n unbound. When one is created by retrieving a user-defined\n function object from a class via one of its instances, its\n ``im_self`` attribute is the instance, and the method object is\n said to be bound. In either case, the new method\'s ``im_class``\n attribute is the class from which the retrieval takes place, and\n its ``im_func`` attribute is the original function object.\n\n When a user-defined method object is created by retrieving\n another method object from a class or instance, the behaviour is\n the same as for a function object, except that the ``im_func``\n attribute of the new instance is not the original method object\n but its ``im_func`` attribute.\n\n When a user-defined method object is created by retrieving a\n class method object from a class or instance, its ``im_self``\n attribute is the class itself (the same as the ``im_class``\n attribute), and its ``im_func`` attribute is the function object\n underlying the class method.\n\n When an unbound user-defined method object is called, the\n underlying function (``im_func``) is called, with the\n restriction that the first argument must be an instance of the\n proper class (``im_class``) or of a derived class thereof.\n\n When a bound user-defined method object is called, the\n underlying function (``im_func``) is called, inserting the class\n instance (``im_self``) in front of the argument list. 
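A hedged sketch of the function attributes and the bound/unbound distinction described above; the function ``f`` and class ``C`` are invented examples, assuming CPython 2.x::

    >>> def f(x, y=10):
    ...     "add two numbers"
    ...     return x + y
    ...
    >>> f.func_name, f.func_defaults, f.__doc__
    ('f', (10,), 'add two numbers')
    >>> f.units = 'apples'        # arbitrary attributes live in func_dict
    >>> f.func_dict
    {'units': 'apples'}
    >>> class C(object):
    ...     def m(self):
    ...         return 'hi'
    ...
    >>> C.m.im_self is None       # unbound: fetched from the class itself
    True
    >>> c = C()
    >>> c.m.im_self is c, c.m.im_func is C.__dict__['m']
    (True, True)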
For\n instance, when ``C`` is a class which contains a definition for\n a function ``f()``, and ``x`` is an instance of ``C``, calling\n ``x.f(1)`` is equivalent to calling ``C.f(x, 1)``.\n\n When a user-defined method object is derived from a class method\n object, the "class instance" stored in ``im_self`` will actually\n be the class itself, so that calling either ``x.f(1)`` or\n ``C.f(1)`` is equivalent to calling ``f(C,1)`` where ``f`` is\n the underlying function.\n\n Note that the transformation from function object to (unbound or\n bound) method object happens each time the attribute is\n retrieved from the class or instance. In some cases, a fruitful\n optimization is to assign the attribute to a local variable and\n call that local variable. Also notice that this transformation\n only happens for user-defined functions; other callable objects\n (and all non-callable objects) are retrieved without\n transformation. It is also important to note that user-defined\n functions which are attributes of a class instance are not\n converted to bound methods; this *only* happens when the\n function is an attribute of the class.\n\n Generator functions\n A function or method which uses the ``yield`` statement (see\n section *The yield statement*) is called a *generator function*.\n Such a function, when called, always returns an iterator object\n which can be used to execute the body of the function: calling\n the iterator\'s ``next()`` method will cause the function to\n execute until it provides a value using the ``yield`` statement.\n When the function executes a ``return`` statement or falls off\n the end, a ``StopIteration`` exception is raised and the\n iterator will have reached the end of the set of values to be\n returned.\n\n Built-in functions\n A built-in function object is a wrapper around a C function.\n Examples of built-in functions are ``len()`` and ``math.sin()``\n (``math`` is a standard built-in module). The number and type of\n the arguments are determined by the C function. Special read-\n only attributes: ``__doc__`` is the function\'s documentation\n string, or ``None`` if unavailable; ``__name__`` is the\n function\'s name; ``__self__`` is set to ``None`` (but see the\n next item); ``__module__`` is the name of the module the\n function was defined in or ``None`` if unavailable.\n\n Built-in methods\n This is really a different disguise of a built-in function, this\n time containing an object passed to the C function as an\n implicit extra argument. An example of a built-in method is\n ``alist.append()``, assuming *alist* is a list object. In this\n case, the special read-only attribute ``__self__`` is set to the\n object denoted by *list*.\n\n Class Types\n Class types, or "new-style classes," are callable. These\n objects normally act as factories for new instances of\n themselves, but variations are possible for class types that\n override ``__new__()``. The arguments of the call are passed to\n ``__new__()`` and, in the typical case, to ``__init__()`` to\n initialize the new instance.\n\n Classic Classes\n Class objects are described below. When a class object is\n called, a new class instance (also described below) is created\n and returned. This implies a call to the class\'s ``__init__()``\n method if it has one. Any arguments are passed on to the\n ``__init__()`` method. If there is no ``__init__()`` method,\n the class must be called without arguments.\n\n Class instances\n Class instances are described below. 
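An illustrative CPython 2.x session for generator functions and callable instances; ``countdown`` and ``Greeter`` are made-up names::

    >>> def countdown(n):
    ...     while n:
    ...         yield n
    ...         n -= 1
    ...
    >>> it = countdown(3)
    >>> it.next(), it.next(), it.next()
    (3, 2, 1)
    >>> class Greeter(object):
    ...     def __call__(self, name):
    ...         return 'hello ' + name
    ...
    >>> Greeter()('world')        # x(arguments) is x.__call__(arguments)
    'hello world'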
Class instances are\n callable only when the class has a ``__call__()`` method;\n ``x(arguments)`` is a shorthand for ``x.__call__(arguments)``.\n\nModules\n Modules are imported by the ``import`` statement (see section *The\n import statement*). A module object has a namespace implemented by\n a dictionary object (this is the dictionary referenced by the\n func_globals attribute of functions defined in the module).\n Attribute references are translated to lookups in this dictionary,\n e.g., ``m.x`` is equivalent to ``m.__dict__["x"]``. A module object\n does not contain the code object used to initialize the module\n (since it isn\'t needed once the initialization is done).\n\n Attribute assignment updates the module\'s namespace dictionary,\n e.g., ``m.x = 1`` is equivalent to ``m.__dict__["x"] = 1``.\n\n Special read-only attribute: ``__dict__`` is the module\'s namespace\n as a dictionary object.\n\n Predefined (writable) attributes: ``__name__`` is the module\'s\n name; ``__doc__`` is the module\'s documentation string, or ``None``\n if unavailable; ``__file__`` is the pathname of the file from which\n the module was loaded, if it was loaded from a file. The\n ``__file__`` attribute is not present for C modules that are\n statically linked into the interpreter; for extension modules\n loaded dynamically from a shared library, it is the pathname of the\n shared library file.\n\nClasses\n Both class types (new-style classes) and class objects (old-\n style/classic classes) are typically created by class definitions\n (see section *Class definitions*). A class has a namespace\n implemented by a dictionary object. Class attribute references are\n translated to lookups in this dictionary, e.g., ``C.x`` is\n translated to ``C.__dict__["x"]`` (although for new-style classes\n in particular there are a number of hooks which allow for other\n means of locating attributes). When the attribute name is not found\n there, the attribute search continues in the base classes. For\n old-style classes, the search is depth-first, left-to-right in the\n order of occurrence in the base class list. New-style classes use\n the more complex C3 method resolution order which behaves correctly\n even in the presence of \'diamond\' inheritance structures where\n there are multiple inheritance paths leading back to a common\n ancestor. Additional details on the C3 MRO used by new-style\n classes can be found in the documentation accompanying the 2.3\n release at http://www.python.org/download/releases/2.3/mro/.\n\n When a class attribute reference (for class ``C``, say) would yield\n a user-defined function object or an unbound user-defined method\n object whose associated class is either ``C`` or one of its base\n classes, it is transformed into an unbound user-defined method\n object whose ``im_class`` attribute is ``C``. When it would yield a\n class method object, it is transformed into a bound user-defined\n method object whose ``im_class`` and ``im_self`` attributes are\n both ``C``. 
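A small sketch of module and class namespaces as described above; ``Base`` and ``Derived`` are invented classes, and the session assumes CPython 2.x::

    >>> import math
    >>> math.__name__, math.__dict__['pi'] is math.pi
    ('math', True)
    >>> class Base(object):
    ...     x = 1
    ...
    >>> class Derived(Base):
    ...     pass
    ...
    >>> Derived.x                 # not in Derived.__dict__, found on the base class
    1
    >>> Derived.__bases__
    (<class '__main__.Base'>,)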
When it would yield a static method object, it is\n transformed into the object wrapped by the static method object.\n See section *Implementing Descriptors* for another way in which\n attributes retrieved from a class may differ from those actually\n contained in its ``__dict__`` (note that only new-style classes\n support descriptors).\n\n Class attribute assignments update the class\'s dictionary, never\n the dictionary of a base class.\n\n A class object can be called (see above) to yield a class instance\n (see below).\n\n Special attributes: ``__name__`` is the class name; ``__module__``\n is the module name in which the class was defined; ``__dict__`` is\n the dictionary containing the class\'s namespace; ``__bases__`` is a\n tuple (possibly empty or a singleton) containing the base classes,\n in the order of their occurrence in the base class list;\n ``__doc__`` is the class\'s documentation string, or None if\n undefined.\n\nClass instances\n A class instance is created by calling a class object (see above).\n A class instance has a namespace implemented as a dictionary which\n is the first place in which attribute references are searched.\n When an attribute is not found there, and the instance\'s class has\n an attribute by that name, the search continues with the class\n attributes. If a class attribute is found that is a user-defined\n function object or an unbound user-defined method object whose\n associated class is the class (call it ``C``) of the instance for\n which the attribute reference was initiated or one of its bases, it\n is transformed into a bound user-defined method object whose\n ``im_class`` attribute is ``C`` and whose ``im_self`` attribute is\n the instance. Static method and class method objects are also\n transformed, as if they had been retrieved from class ``C``; see\n above under "Classes". See section *Implementing Descriptors* for\n another way in which attributes of a class retrieved via its\n instances may differ from the objects actually stored in the\n class\'s ``__dict__``. If no class attribute is found, and the\n object\'s class has a ``__getattr__()`` method, that is called to\n satisfy the lookup.\n\n Attribute assignments and deletions update the instance\'s\n dictionary, never a class\'s dictionary. If the class has a\n ``__setattr__()`` or ``__delattr__()`` method, this is called\n instead of updating the instance dictionary directly.\n\n Class instances can pretend to be numbers, sequences, or mappings\n if they have methods with certain special names. See section\n *Special method names*.\n\n Special attributes: ``__dict__`` is the attribute dictionary;\n ``__class__`` is the instance\'s class.\n\nFiles\n A file object represents an open file. File objects are created by\n the ``open()`` built-in function, and also by ``os.popen()``,\n ``os.fdopen()``, and the ``makefile()`` method of socket objects\n (and perhaps by other functions or methods provided by extension\n modules). The objects ``sys.stdin``, ``sys.stdout`` and\n ``sys.stderr`` are initialized to file objects corresponding to the\n interpreter\'s standard input, output and error streams. See *File\n Objects* for complete documentation of file objects.\n\nInternal types\n A few types used internally by the interpreter are exposed to the\n user. Their definitions may change with future versions of the\n interpreter, but they are mentioned here for completeness.\n\n Code objects\n Code objects represent *byte-compiled* executable Python code,\n or *bytecode*. 
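The instance attribute lookup described above, sketched with an invented ``Point`` class in a CPython 2.x session::

    >>> class Point(object):
    ...     def __init__(self, x, y):
    ...         self.x, self.y = x, y
    ...     def __getattr__(self, name):
    ...         return 'no attribute %r' % name
    ...
    >>> p = Point(1, 2)
    >>> sorted(p.__dict__.items())    # the instance's own namespace
    [('x', 1), ('y', 2)]
    >>> p.__class__ is Point
    True
    >>> p.colour                      # missing everywhere, so __getattr__ answers
    "no attribute 'colour'"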
The difference between a code object and a\n function object is that the function object contains an explicit\n reference to the function\'s globals (the module in which it was\n defined), while a code object contains no context; also the\n default argument values are stored in the function object, not\n in the code object (because they represent values calculated at\n run-time). Unlike function objects, code objects are immutable\n and contain no references (directly or indirectly) to mutable\n objects.\n\n Special read-only attributes: ``co_name`` gives the function\n name; ``co_argcount`` is the number of positional arguments\n (including arguments with default values); ``co_nlocals`` is the\n number of local variables used by the function (including\n arguments); ``co_varnames`` is a tuple containing the names of\n the local variables (starting with the argument names);\n ``co_cellvars`` is a tuple containing the names of local\n variables that are referenced by nested functions;\n ``co_freevars`` is a tuple containing the names of free\n variables; ``co_code`` is a string representing the sequence of\n bytecode instructions; ``co_consts`` is a tuple containing the\n literals used by the bytecode; ``co_names`` is a tuple\n containing the names used by the bytecode; ``co_filename`` is\n the filename from which the code was compiled;\n ``co_firstlineno`` is the first line number of the function;\n ``co_lnotab`` is a string encoding the mapping from bytecode\n offsets to line numbers (for details see the source code of the\n interpreter); ``co_stacksize`` is the required stack size\n (including local variables); ``co_flags`` is an integer encoding\n a number of flags for the interpreter.\n\n The following flag bits are defined for ``co_flags``: bit\n ``0x04`` is set if the function uses the ``*arguments`` syntax\n to accept an arbitrary number of positional arguments; bit\n ``0x08`` is set if the function uses the ``**keywords`` syntax\n to accept arbitrary keyword arguments; bit ``0x20`` is set if\n the function is a generator.\n\n Future feature declarations (``from __future__ import\n division``) also use bits in ``co_flags`` to indicate whether a\n code object was compiled with a particular feature enabled: bit\n ``0x2000`` is set if the function was compiled with future\n division enabled; bits ``0x10`` and ``0x1000`` were used in\n earlier versions of Python.\n\n Other bits in ``co_flags`` are reserved for internal use.\n\n If a code object represents a function, the first item in\n ``co_consts`` is the documentation string of the function, or\n ``None`` if undefined.\n\n Frame objects\n Frame objects represent execution frames. 
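A sketch of the code-object attributes and ``co_flags`` bits listed above, with invented functions ``average`` and ``gen`` (CPython 2.x)::

    >>> def average(*args):
    ...     return sum(args) / float(len(args))
    ...
    >>> co = average.func_code
    >>> co.co_name, co.co_argcount, co.co_varnames
    ('average', 0, ('args',))
    >>> bool(co.co_flags & 0x04)      # *arguments syntax sets bit 0x04
    True
    >>> def gen():
    ...     yield 1
    ...
    >>> bool(gen.func_code.co_flags & 0x20)   # generators set bit 0x20
    True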
They may occur in\n traceback objects (see below).\n\n Special read-only attributes: ``f_back`` is to the previous\n stack frame (towards the caller), or ``None`` if this is the\n bottom stack frame; ``f_code`` is the code object being executed\n in this frame; ``f_locals`` is the dictionary used to look up\n local variables; ``f_globals`` is used for global variables;\n ``f_builtins`` is used for built-in (intrinsic) names;\n ``f_restricted`` is a flag indicating whether the function is\n executing in restricted execution mode; ``f_lasti`` gives the\n precise instruction (this is an index into the bytecode string\n of the code object).\n\n Special writable attributes: ``f_trace``, if not ``None``, is a\n function called at the start of each source code line (this is\n used by the debugger); ``f_exc_type``, ``f_exc_value``,\n ``f_exc_traceback`` represent the last exception raised in the\n parent frame provided another exception was ever raised in the\n current frame (in all other cases they are None); ``f_lineno``\n is the current line number of the frame --- writing to this from\n within a trace function jumps to the given line (only for the\n bottom-most frame). A debugger can implement a Jump command\n (aka Set Next Statement) by writing to f_lineno.\n\n Traceback objects\n Traceback objects represent a stack trace of an exception. A\n traceback object is created when an exception occurs. When the\n search for an exception handler unwinds the execution stack, at\n each unwound level a traceback object is inserted in front of\n the current traceback. When an exception handler is entered,\n the stack trace is made available to the program. (See section\n *The try statement*.) It is accessible as ``sys.exc_traceback``,\n and also as the third item of the tuple returned by\n ``sys.exc_info()``. The latter is the preferred interface,\n since it works correctly when the program is using multiple\n threads. When the program contains no suitable handler, the\n stack trace is written (nicely formatted) to the standard error\n stream; if the interpreter is interactive, it is also made\n available to the user as ``sys.last_traceback``.\n\n Special read-only attributes: ``tb_next`` is the next level in\n the stack trace (towards the frame where the exception\n occurred), or ``None`` if there is no next level; ``tb_frame``\n points to the execution frame of the current level;\n ``tb_lineno`` gives the line number where the exception\n occurred; ``tb_lasti`` indicates the precise instruction. The\n line number and last instruction in the traceback may differ\n from the line number of its frame object if the exception\n occurred in a ``try`` statement with no matching except clause\n or with a finally clause.\n\n Slice objects\n Slice objects are used to represent slices when *extended slice\n syntax* is used. This is a slice using two colons, or multiple\n slices or ellipses separated by commas, e.g., ``a[i:j:step]``,\n ``a[i:j, k:l]``, or ``a[..., i:j]``. They are also created by\n the built-in ``slice()`` function.\n\n Special read-only attributes: ``start`` is the lower bound;\n ``stop`` is the upper bound; ``step`` is the step value; each is\n ``None`` if omitted. These attributes can have any type.\n\n Slice objects support one method:\n\n slice.indices(self, length)\n\n This method takes a single integer argument *length* and\n computes information about the extended slice that the slice\n object would describe if applied to a sequence of *length*\n items. 
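A hedged sketch of traceback and slice objects as described above, assuming an interactive CPython 2.x session (which is why ``co_filename`` comes out as ``'<stdin>'``)::

    >>> import sys
    >>> try:
    ...     1 / 0
    ... except ZeroDivisionError:
    ...     tb = sys.exc_info()[2]
    ...
    >>> tb.tb_lineno                  # line within the block where the exception occurred
    2
    >>> tb.tb_frame.f_code.co_filename
    '<stdin>'
    >>> slice(None, None, 2).indices(5)   # (start, stop, step) for a length-5 sequence
    (0, 5, 2)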
It returns a tuple of three integers; respectively\n these are the *start* and *stop* indices and the *step* or\n stride length of the slice. Missing or out-of-bounds indices\n are handled in a manner consistent with regular slices.\n\n New in version 2.3.\n\n Static method objects\n Static method objects provide a way of defeating the\n transformation of function objects to method objects described\n above. A static method object is a wrapper around any other\n object, usually a user-defined method object. When a static\n method object is retrieved from a class or a class instance, the\n object actually returned is the wrapped object, which is not\n subject to any further transformation. Static method objects are\n not themselves callable, although the objects they wrap usually\n are. Static method objects are created by the built-in\n ``staticmethod()`` constructor.\n\n Class method objects\n A class method object, like a static method object, is a wrapper\n around another object that alters the way in which that object\n is retrieved from classes and class instances. The behaviour of\n class method objects upon such retrieval is described above,\n under "User-defined methods". Class method objects are created\n by the built-in ``classmethod()`` constructor.\n', + 'types': u'\nThe standard type hierarchy\n***************************\n\nBelow is a list of the types that are built into Python. Extension\nmodules (written in C, Java, or other languages, depending on the\nimplementation) can define additional types. Future versions of\nPython may add types to the type hierarchy (e.g., rational numbers,\nefficiently stored arrays of integers, etc.).\n\nSome of the type descriptions below contain a paragraph listing\n\'special attributes.\' These are attributes that provide access to the\nimplementation and are not intended for general use. Their definition\nmay change in the future.\n\nNone\n This type has a single value. There is a single object with this\n value. This object is accessed through the built-in name ``None``.\n It is used to signify the absence of a value in many situations,\n e.g., it is returned from functions that don\'t explicitly return\n anything. Its truth value is false.\n\nNotImplemented\n This type has a single value. There is a single object with this\n value. This object is accessed through the built-in name\n ``NotImplemented``. Numeric methods and rich comparison methods may\n return this value if they do not implement the operation for the\n operands provided. (The interpreter will then try the reflected\n operation, or some other fallback, depending on the operator.) Its\n truth value is true.\n\nEllipsis\n This type has a single value. There is a single object with this\n value. This object is accessed through the built-in name\n ``Ellipsis``. It is used to indicate the presence of the ``...``\n syntax in a slice. Its truth value is true.\n\n``numbers.Number``\n These are created by numeric literals and returned as results by\n arithmetic operators and arithmetic built-in functions. 
Numeric\n objects are immutable; once created their value never changes.\n Python numbers are of course strongly related to mathematical\n numbers, but subject to the limitations of numerical representation\n in computers.\n\n Python distinguishes between integers, floating point numbers, and\n complex numbers:\n\n ``numbers.Integral``\n These represent elements from the mathematical set of integers\n (positive and negative).\n\n There are three types of integers:\n\n Plain integers\n These represent numbers in the range -2147483648 through\n 2147483647. (The range may be larger on machines with a\n larger natural word size, but not smaller.) When the result\n of an operation would fall outside this range, the result is\n normally returned as a long integer (in some cases, the\n exception ``OverflowError`` is raised instead). For the\n purpose of shift and mask operations, integers are assumed to\n have a binary, 2\'s complement notation using 32 or more bits,\n and hiding no bits from the user (i.e., all 4294967296\n different bit patterns correspond to different values).\n\n Long integers\n These represent numbers in an unlimited range, subject to\n available (virtual) memory only. For the purpose of shift\n and mask operations, a binary representation is assumed, and\n negative numbers are represented in a variant of 2\'s\n complement which gives the illusion of an infinite string of\n sign bits extending to the left.\n\n Booleans\n These represent the truth values False and True. The two\n objects representing the values False and True are the only\n Boolean objects. The Boolean type is a subtype of plain\n integers, and Boolean values behave like the values 0 and 1,\n respectively, in almost all contexts, the exception being\n that when converted to a string, the strings ``"False"`` or\n ``"True"`` are returned, respectively.\n\n The rules for integer representation are intended to give the\n most meaningful interpretation of shift and mask operations\n involving negative integers and the least surprises when\n switching between the plain and long integer domains. Any\n operation, if it yields a result in the plain integer domain,\n will yield the same result in the long integer domain or when\n using mixed operands. The switch between domains is transparent\n to the programmer.\n\n ``numbers.Real`` (``float``)\n These represent machine-level double precision floating point\n numbers. You are at the mercy of the underlying machine\n architecture (and C or Java implementation) for the accepted\n range and handling of overflow. Python does not support single-\n precision floating point numbers; the savings in processor and\n memory usage that are usually the reason for using these is\n dwarfed by the overhead of using objects in Python, so there is\n no reason to complicate the language with two kinds of floating\n point numbers.\n\n ``numbers.Complex``\n These represent complex numbers as a pair of machine-level\n double precision floating point numbers. The same caveats apply\n as for floating point numbers. The real and imaginary parts of a\n complex number ``z`` can be retrieved through the read-only\n attributes ``z.real`` and ``z.imag``.\n\nSequences\n These represent finite ordered sets indexed by non-negative\n numbers. The built-in function ``len()`` returns the number of\n items of a sequence. When the length of a sequence is *n*, the\n index set contains the numbers 0, 1, ..., *n*-1. 
Item *i* of\n sequence *a* is selected by ``a[i]``.\n\n Sequences also support slicing: ``a[i:j]`` selects all items with\n index *k* such that *i* ``<=`` *k* ``<`` *j*. When used as an\n expression, a slice is a sequence of the same type. This implies\n that the index set is renumbered so that it starts at 0.\n\n Some sequences also support "extended slicing" with a third "step"\n parameter: ``a[i:j:k]`` selects all items of *a* with index *x*\n where ``x = i + n*k``, *n* ``>=`` ``0`` and *i* ``<=`` *x* ``<``\n *j*.\n\n Sequences are distinguished according to their mutability:\n\n Immutable sequences\n An object of an immutable sequence type cannot change once it is\n created. (If the object contains references to other objects,\n these other objects may be mutable and may be changed; however,\n the collection of objects directly referenced by an immutable\n object cannot change.)\n\n The following types are immutable sequences:\n\n Strings\n The items of a string are characters. There is no separate\n character type; a character is represented by a string of one\n item. Characters represent (at least) 8-bit bytes. The\n built-in functions ``chr()`` and ``ord()`` convert between\n characters and nonnegative integers representing the byte\n values. Bytes with the values 0-127 usually represent the\n corresponding ASCII values, but the interpretation of values\n is up to the program. The string data type is also used to\n represent arrays of bytes, e.g., to hold data read from a\n file.\n\n (On systems whose native character set is not ASCII, strings\n may use EBCDIC in their internal representation, provided the\n functions ``chr()`` and ``ord()`` implement a mapping between\n ASCII and EBCDIC, and string comparison preserves the ASCII\n order. Or perhaps someone can propose a better rule?)\n\n Unicode\n The items of a Unicode object are Unicode code units. A\n Unicode code unit is represented by a Unicode object of one\n item and can hold either a 16-bit or 32-bit value\n representing a Unicode ordinal (the maximum value for the\n ordinal is given in ``sys.maxunicode``, and depends on how\n Python is configured at compile time). Surrogate pairs may\n be present in the Unicode object, and will be reported as two\n separate items. The built-in functions ``unichr()`` and\n ``ord()`` convert between code units and nonnegative integers\n representing the Unicode ordinals as defined in the Unicode\n Standard 3.0. Conversion from and to other encodings are\n possible through the Unicode method ``encode()`` and the\n built-in function ``unicode()``.\n\n Tuples\n The items of a tuple are arbitrary Python objects. Tuples of\n two or more items are formed by comma-separated lists of\n expressions. A tuple of one item (a \'singleton\') can be\n formed by affixing a comma to an expression (an expression by\n itself does not create a tuple, since parentheses must be\n usable for grouping of expressions). An empty tuple can be\n formed by an empty pair of parentheses.\n\n Mutable sequences\n Mutable sequences can be changed after they are created. The\n subscription and slicing notations can be used as the target of\n assignment and ``del`` (delete) statements.\n\n There are currently two intrinsic mutable sequence types:\n\n Lists\n The items of a list are arbitrary Python objects. Lists are\n formed by placing a comma-separated list of expressions in\n square brackets. 
(Note that there are no special cases needed\n to form lists of length 0 or 1.)\n\n Byte Arrays\n A bytearray object is a mutable array. They are created by\n the built-in ``bytearray()`` constructor. Aside from being\n mutable (and hence unhashable), byte arrays otherwise provide\n the same interface and functionality as immutable bytes\n objects.\n\n The extension module ``array`` provides an additional example of\n a mutable sequence type.\n\nSet types\n These represent unordered, finite sets of unique, immutable\n objects. As such, they cannot be indexed by any subscript. However,\n they can be iterated over, and the built-in function ``len()``\n returns the number of items in a set. Common uses for sets are fast\n membership testing, removing duplicates from a sequence, and\n computing mathematical operations such as intersection, union,\n difference, and symmetric difference.\n\n For set elements, the same immutability rules apply as for\n dictionary keys. Note that numeric types obey the normal rules for\n numeric comparison: if two numbers compare equal (e.g., ``1`` and\n ``1.0``), only one of them can be contained in a set.\n\n There are currently two intrinsic set types:\n\n Sets\n These represent a mutable set. They are created by the built-in\n ``set()`` constructor and can be modified afterwards by several\n methods, such as ``add()``.\n\n Frozen sets\n These represent an immutable set. They are created by the\n built-in ``frozenset()`` constructor. As a frozenset is\n immutable and *hashable*, it can be used again as an element of\n another set, or as a dictionary key.\n\nMappings\n These represent finite sets of objects indexed by arbitrary index\n sets. The subscript notation ``a[k]`` selects the item indexed by\n ``k`` from the mapping ``a``; this can be used in expressions and\n as the target of assignments or ``del`` statements. The built-in\n function ``len()`` returns the number of items in a mapping.\n\n There is currently a single intrinsic mapping type:\n\n Dictionaries\n These represent finite sets of objects indexed by nearly\n arbitrary values. The only types of values not acceptable as\n keys are values containing lists or dictionaries or other\n mutable types that are compared by value rather than by object\n identity, the reason being that the efficient implementation of\n dictionaries requires a key\'s hash value to remain constant.\n Numeric types used for keys obey the normal rules for numeric\n comparison: if two numbers compare equal (e.g., ``1`` and\n ``1.0``) then they can be used interchangeably to index the same\n dictionary entry.\n\n Dictionaries are mutable; they can be created by the ``{...}``\n notation (see section *Dictionary displays*).\n\n The extension modules ``dbm``, ``gdbm``, and ``bsddb`` provide\n additional examples of mapping types.\n\nCallable types\n These are the types to which the function call operation (see\n section *Calls*) can be applied:\n\n User-defined functions\n A user-defined function object is created by a function\n definition (see section *Function definitions*). 
It should be\n called with an argument list containing the same number of items\n as the function\'s formal parameter list.\n\n Special attributes:\n\n +-------------------------+---------------------------------+-------------+\n | Attribute | Meaning | |\n +=========================+=================================+=============+\n | ``func_doc`` | The function\'s documentation | Writable |\n | | string, or ``None`` if | |\n | | unavailable | |\n +-------------------------+---------------------------------+-------------+\n | ``__doc__`` | Another way of spelling | Writable |\n | | ``func_doc`` | |\n +-------------------------+---------------------------------+-------------+\n | ``func_name`` | The function\'s name | Writable |\n +-------------------------+---------------------------------+-------------+\n | ``__name__`` | Another way of spelling | Writable |\n | | ``func_name`` | |\n +-------------------------+---------------------------------+-------------+\n | ``__module__`` | The name of the module the | Writable |\n | | function was defined in, or | |\n | | ``None`` if unavailable. | |\n +-------------------------+---------------------------------+-------------+\n | ``func_defaults`` | A tuple containing default | Writable |\n | | argument values for those | |\n | | arguments that have defaults, | |\n | | or ``None`` if no arguments | |\n | | have a default value | |\n +-------------------------+---------------------------------+-------------+\n | ``func_code`` | The code object representing | Writable |\n | | the compiled function body. | |\n +-------------------------+---------------------------------+-------------+\n | ``func_globals`` | A reference to the dictionary | Read-only |\n | | that holds the function\'s | |\n | | global variables --- the global | |\n | | namespace of the module in | |\n | | which the function was defined. | |\n +-------------------------+---------------------------------+-------------+\n | ``func_dict`` | The namespace supporting | Writable |\n | | arbitrary function attributes. | |\n +-------------------------+---------------------------------+-------------+\n | ``func_closure`` | ``None`` or a tuple of cells | Read-only |\n | | that contain bindings for the | |\n | | function\'s free variables. | |\n +-------------------------+---------------------------------+-------------+\n\n Most of the attributes labelled "Writable" check the type of the\n assigned value.\n\n Changed in version 2.4: ``func_name`` is now writable.\n\n Function objects also support getting and setting arbitrary\n attributes, which can be used, for example, to attach metadata\n to functions. Regular attribute dot-notation is used to get and\n set such attributes. *Note that the current implementation only\n supports function attributes on user-defined functions. 
Function\n attributes on built-in functions may be supported in the\n future.*\n\n Additional information about a function\'s definition can be\n retrieved from its code object; see the description of internal\n types below.\n\n User-defined methods\n A user-defined method object combines a class, a class instance\n (or ``None``) and any callable object (normally a user-defined\n function).\n\n Special read-only attributes: ``im_self`` is the class instance\n object, ``im_func`` is the function object; ``im_class`` is the\n class of ``im_self`` for bound methods or the class that asked\n for the method for unbound methods; ``__doc__`` is the method\'s\n documentation (same as ``im_func.__doc__``); ``__name__`` is the\n method name (same as ``im_func.__name__``); ``__module__`` is\n the name of the module the method was defined in, or ``None`` if\n unavailable.\n\n Changed in version 2.2: ``im_self`` used to refer to the class\n that defined the method.\n\n Changed in version 2.6: For 3.0 forward-compatibility,\n ``im_func`` is also available as ``__func__``, and ``im_self``\n as ``__self__``.\n\n Methods also support accessing (but not setting) the arbitrary\n function attributes on the underlying function object.\n\n User-defined method objects may be created when getting an\n attribute of a class (perhaps via an instance of that class), if\n that attribute is a user-defined function object, an unbound\n user-defined method object, or a class method object. When the\n attribute is a user-defined method object, a new method object\n is only created if the class from which it is being retrieved is\n the same as, or a derived class of, the class stored in the\n original method object; otherwise, the original method object is\n used as it is.\n\n When a user-defined method object is created by retrieving a\n user-defined function object from a class, its ``im_self``\n attribute is ``None`` and the method object is said to be\n unbound. When one is created by retrieving a user-defined\n function object from a class via one of its instances, its\n ``im_self`` attribute is the instance, and the method object is\n said to be bound. In either case, the new method\'s ``im_class``\n attribute is the class from which the retrieval takes place, and\n its ``im_func`` attribute is the original function object.\n\n When a user-defined method object is created by retrieving\n another method object from a class or instance, the behaviour is\n the same as for a function object, except that the ``im_func``\n attribute of the new instance is not the original method object\n but its ``im_func`` attribute.\n\n When a user-defined method object is created by retrieving a\n class method object from a class or instance, its ``im_self``\n attribute is the class itself (the same as the ``im_class``\n attribute), and its ``im_func`` attribute is the function object\n underlying the class method.\n\n When an unbound user-defined method object is called, the\n underlying function (``im_func``) is called, with the\n restriction that the first argument must be an instance of the\n proper class (``im_class``) or of a derived class thereof.\n\n When a bound user-defined method object is called, the\n underlying function (``im_func``) is called, inserting the class\n instance (``im_self``) in front of the argument list. 
For\n instance, when ``C`` is a class which contains a definition for\n a function ``f()``, and ``x`` is an instance of ``C``, calling\n ``x.f(1)`` is equivalent to calling ``C.f(x, 1)``.\n\n When a user-defined method object is derived from a class method\n object, the "class instance" stored in ``im_self`` will actually\n be the class itself, so that calling either ``x.f(1)`` or\n ``C.f(1)`` is equivalent to calling ``f(C,1)`` where ``f`` is\n the underlying function.\n\n Note that the transformation from function object to (unbound or\n bound) method object happens each time the attribute is\n retrieved from the class or instance. In some cases, a fruitful\n optimization is to assign the attribute to a local variable and\n call that local variable. Also notice that this transformation\n only happens for user-defined functions; other callable objects\n (and all non-callable objects) are retrieved without\n transformation. It is also important to note that user-defined\n functions which are attributes of a class instance are not\n converted to bound methods; this *only* happens when the\n function is an attribute of the class.\n\n Generator functions\n A function or method which uses the ``yield`` statement (see\n section *The yield statement*) is called a *generator function*.\n Such a function, when called, always returns an iterator object\n which can be used to execute the body of the function: calling\n the iterator\'s ``next()`` method will cause the function to\n execute until it provides a value using the ``yield`` statement.\n When the function executes a ``return`` statement or falls off\n the end, a ``StopIteration`` exception is raised and the\n iterator will have reached the end of the set of values to be\n returned.\n\n Built-in functions\n A built-in function object is a wrapper around a C function.\n Examples of built-in functions are ``len()`` and ``math.sin()``\n (``math`` is a standard built-in module). The number and type of\n the arguments are determined by the C function. Special read-\n only attributes: ``__doc__`` is the function\'s documentation\n string, or ``None`` if unavailable; ``__name__`` is the\n function\'s name; ``__self__`` is set to ``None`` (but see the\n next item); ``__module__`` is the name of the module the\n function was defined in or ``None`` if unavailable.\n\n Built-in methods\n This is really a different disguise of a built-in function, this\n time containing an object passed to the C function as an\n implicit extra argument. An example of a built-in method is\n ``alist.append()``, assuming *alist* is a list object. In this\n case, the special read-only attribute ``__self__`` is set to the\n object denoted by *alist*.\n\n Class Types\n Class types, or "new-style classes," are callable. These\n objects normally act as factories for new instances of\n themselves, but variations are possible for class types that\n override ``__new__()``. The arguments of the call are passed to\n ``__new__()`` and, in the typical case, to ``__init__()`` to\n initialize the new instance.\n\n Classic Classes\n Class objects are described below. When a class object is\n called, a new class instance (also described below) is created\n and returned. This implies a call to the class\'s ``__init__()``\n method if it has one. Any arguments are passed on to the\n ``__init__()`` method. If there is no ``__init__()`` method,\n the class must be called without arguments.\n\n Class instances\n Class instances are described below. 
Class instances are\n callable only when the class has a ``__call__()`` method;\n ``x(arguments)`` is a shorthand for ``x.__call__(arguments)``.\n\nModules\n Modules are imported by the ``import`` statement (see section *The\n import statement*). A module object has a namespace implemented by\n a dictionary object (this is the dictionary referenced by the\n func_globals attribute of functions defined in the module).\n Attribute references are translated to lookups in this dictionary,\n e.g., ``m.x`` is equivalent to ``m.__dict__["x"]``. A module object\n does not contain the code object used to initialize the module\n (since it isn\'t needed once the initialization is done).\n\n Attribute assignment updates the module\'s namespace dictionary,\n e.g., ``m.x = 1`` is equivalent to ``m.__dict__["x"] = 1``.\n\n Special read-only attribute: ``__dict__`` is the module\'s namespace\n as a dictionary object.\n\n **CPython implementation detail:** Because of the way CPython\n clears module dictionaries, the module dictionary will be cleared\n when the module falls out of scope even if the dictionary still has\n live references. To avoid this, copy the dictionary or keep the\n module around while using its dictionary directly.\n\n Predefined (writable) attributes: ``__name__`` is the module\'s\n name; ``__doc__`` is the module\'s documentation string, or ``None``\n if unavailable; ``__file__`` is the pathname of the file from which\n the module was loaded, if it was loaded from a file. The\n ``__file__`` attribute is not present for C modules that are\n statically linked into the interpreter; for extension modules\n loaded dynamically from a shared library, it is the pathname of the\n shared library file.\n\nClasses\n Both class types (new-style classes) and class objects (old-\n style/classic classes) are typically created by class definitions\n (see section *Class definitions*). A class has a namespace\n implemented by a dictionary object. Class attribute references are\n translated to lookups in this dictionary, e.g., ``C.x`` is\n translated to ``C.__dict__["x"]`` (although for new-style classes\n in particular there are a number of hooks which allow for other\n means of locating attributes). When the attribute name is not found\n there, the attribute search continues in the base classes. For\n old-style classes, the search is depth-first, left-to-right in the\n order of occurrence in the base class list. New-style classes use\n the more complex C3 method resolution order which behaves correctly\n even in the presence of \'diamond\' inheritance structures where\n there are multiple inheritance paths leading back to a common\n ancestor. Additional details on the C3 MRO used by new-style\n classes can be found in the documentation accompanying the 2.3\n release at http://www.python.org/download/releases/2.3/mro/.\n\n When a class attribute reference (for class ``C``, say) would yield\n a user-defined function object or an unbound user-defined method\n object whose associated class is either ``C`` or one of its base\n classes, it is transformed into an unbound user-defined method\n object whose ``im_class`` attribute is ``C``. When it would yield a\n class method object, it is transformed into a bound user-defined\n method object whose ``im_class`` and ``im_self`` attributes are\n both ``C``. 
When it would yield a static method object, it is\n transformed into the object wrapped by the static method object.\n See section *Implementing Descriptors* for another way in which\n attributes retrieved from a class may differ from those actually\n contained in its ``__dict__`` (note that only new-style classes\n support descriptors).\n\n Class attribute assignments update the class\'s dictionary, never\n the dictionary of a base class.\n\n A class object can be called (see above) to yield a class instance\n (see below).\n\n Special attributes: ``__name__`` is the class name; ``__module__``\n is the module name in which the class was defined; ``__dict__`` is\n the dictionary containing the class\'s namespace; ``__bases__`` is a\n tuple (possibly empty or a singleton) containing the base classes,\n in the order of their occurrence in the base class list;\n ``__doc__`` is the class\'s documentation string, or None if\n undefined.\n\nClass instances\n A class instance is created by calling a class object (see above).\n A class instance has a namespace implemented as a dictionary which\n is the first place in which attribute references are searched.\n When an attribute is not found there, and the instance\'s class has\n an attribute by that name, the search continues with the class\n attributes. If a class attribute is found that is a user-defined\n function object or an unbound user-defined method object whose\n associated class is the class (call it ``C``) of the instance for\n which the attribute reference was initiated or one of its bases, it\n is transformed into a bound user-defined method object whose\n ``im_class`` attribute is ``C`` and whose ``im_self`` attribute is\n the instance. Static method and class method objects are also\n transformed, as if they had been retrieved from class ``C``; see\n above under "Classes". See section *Implementing Descriptors* for\n another way in which attributes of a class retrieved via its\n instances may differ from the objects actually stored in the\n class\'s ``__dict__``. If no class attribute is found, and the\n object\'s class has a ``__getattr__()`` method, that is called to\n satisfy the lookup.\n\n Attribute assignments and deletions update the instance\'s\n dictionary, never a class\'s dictionary. If the class has a\n ``__setattr__()`` or ``__delattr__()`` method, this is called\n instead of updating the instance dictionary directly.\n\n Class instances can pretend to be numbers, sequences, or mappings\n if they have methods with certain special names. See section\n *Special method names*.\n\n Special attributes: ``__dict__`` is the attribute dictionary;\n ``__class__`` is the instance\'s class.\n\nFiles\n A file object represents an open file. File objects are created by\n the ``open()`` built-in function, and also by ``os.popen()``,\n ``os.fdopen()``, and the ``makefile()`` method of socket objects\n (and perhaps by other functions or methods provided by extension\n modules). The objects ``sys.stdin``, ``sys.stdout`` and\n ``sys.stderr`` are initialized to file objects corresponding to the\n interpreter\'s standard input, output and error streams. See *File\n Objects* for complete documentation of file objects.\n\nInternal types\n A few types used internally by the interpreter are exposed to the\n user. Their definitions may change with future versions of the\n interpreter, but they are mentioned here for completeness.\n\n Code objects\n Code objects represent *byte-compiled* executable Python code,\n or *bytecode*. 
The difference between a code object and a\n function object is that the function object contains an explicit\n reference to the function\'s globals (the module in which it was\n defined), while a code object contains no context; also the\n default argument values are stored in the function object, not\n in the code object (because they represent values calculated at\n run-time). Unlike function objects, code objects are immutable\n and contain no references (directly or indirectly) to mutable\n objects.\n\n Special read-only attributes: ``co_name`` gives the function\n name; ``co_argcount`` is the number of positional arguments\n (including arguments with default values); ``co_nlocals`` is the\n number of local variables used by the function (including\n arguments); ``co_varnames`` is a tuple containing the names of\n the local variables (starting with the argument names);\n ``co_cellvars`` is a tuple containing the names of local\n variables that are referenced by nested functions;\n ``co_freevars`` is a tuple containing the names of free\n variables; ``co_code`` is a string representing the sequence of\n bytecode instructions; ``co_consts`` is a tuple containing the\n literals used by the bytecode; ``co_names`` is a tuple\n containing the names used by the bytecode; ``co_filename`` is\n the filename from which the code was compiled;\n ``co_firstlineno`` is the first line number of the function;\n ``co_lnotab`` is a string encoding the mapping from bytecode\n offsets to line numbers (for details see the source code of the\n interpreter); ``co_stacksize`` is the required stack size\n (including local variables); ``co_flags`` is an integer encoding\n a number of flags for the interpreter.\n\n The following flag bits are defined for ``co_flags``: bit\n ``0x04`` is set if the function uses the ``*arguments`` syntax\n to accept an arbitrary number of positional arguments; bit\n ``0x08`` is set if the function uses the ``**keywords`` syntax\n to accept arbitrary keyword arguments; bit ``0x20`` is set if\n the function is a generator.\n\n Future feature declarations (``from __future__ import\n division``) also use bits in ``co_flags`` to indicate whether a\n code object was compiled with a particular feature enabled: bit\n ``0x2000`` is set if the function was compiled with future\n division enabled; bits ``0x10`` and ``0x1000`` were used in\n earlier versions of Python.\n\n Other bits in ``co_flags`` are reserved for internal use.\n\n If a code object represents a function, the first item in\n ``co_consts`` is the documentation string of the function, or\n ``None`` if undefined.\n\n Frame objects\n Frame objects represent execution frames. 
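The code-object attributes and co_flags bits above can be inspected directly; a small sketch (Python 2.7; f is an invented example function):

    def f(a, b=1, *args, **kwargs):
        """doc"""
        x = a + b
        return x

    co = f.func_code
    print co.co_name, co.co_argcount     # -> f 2
    print co.co_varnames                 # -> ('a', 'b', 'args', 'kwargs', 'x')
    print co.co_consts[0]                # -> doc   (the docstring comes first)
    print bool(co.co_flags & 0x04)       # -> True  (*args is used)
    print bool(co.co_flags & 0x08)       # -> True  (**kwargs is used)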
They may occur in\n traceback objects (see below).\n\n Special read-only attributes: ``f_back`` is to the previous\n stack frame (towards the caller), or ``None`` if this is the\n bottom stack frame; ``f_code`` is the code object being executed\n in this frame; ``f_locals`` is the dictionary used to look up\n local variables; ``f_globals`` is used for global variables;\n ``f_builtins`` is used for built-in (intrinsic) names;\n ``f_restricted`` is a flag indicating whether the function is\n executing in restricted execution mode; ``f_lasti`` gives the\n precise instruction (this is an index into the bytecode string\n of the code object).\n\n Special writable attributes: ``f_trace``, if not ``None``, is a\n function called at the start of each source code line (this is\n used by the debugger); ``f_exc_type``, ``f_exc_value``,\n ``f_exc_traceback`` represent the last exception raised in the\n parent frame provided another exception was ever raised in the\n current frame (in all other cases they are None); ``f_lineno``\n is the current line number of the frame --- writing to this from\n within a trace function jumps to the given line (only for the\n bottom-most frame). A debugger can implement a Jump command\n (aka Set Next Statement) by writing to f_lineno.\n\n Traceback objects\n Traceback objects represent a stack trace of an exception. A\n traceback object is created when an exception occurs. When the\n search for an exception handler unwinds the execution stack, at\n each unwound level a traceback object is inserted in front of\n the current traceback. When an exception handler is entered,\n the stack trace is made available to the program. (See section\n *The try statement*.) It is accessible as ``sys.exc_traceback``,\n and also as the third item of the tuple returned by\n ``sys.exc_info()``. The latter is the preferred interface,\n since it works correctly when the program is using multiple\n threads. When the program contains no suitable handler, the\n stack trace is written (nicely formatted) to the standard error\n stream; if the interpreter is interactive, it is also made\n available to the user as ``sys.last_traceback``.\n\n Special read-only attributes: ``tb_next`` is the next level in\n the stack trace (towards the frame where the exception\n occurred), or ``None`` if there is no next level; ``tb_frame``\n points to the execution frame of the current level;\n ``tb_lineno`` gives the line number where the exception\n occurred; ``tb_lasti`` indicates the precise instruction. The\n line number and last instruction in the traceback may differ\n from the line number of its frame object if the exception\n occurred in a ``try`` statement with no matching except clause\n or with a finally clause.\n\n Slice objects\n Slice objects are used to represent slices when *extended slice\n syntax* is used. This is a slice using two colons, or multiple\n slices or ellipses separated by commas, e.g., ``a[i:j:step]``,\n ``a[i:j, k:l]``, or ``a[..., i:j]``. They are also created by\n the built-in ``slice()`` function.\n\n Special read-only attributes: ``start`` is the lower bound;\n ``stop`` is the upper bound; ``step`` is the step value; each is\n ``None`` if omitted. These attributes can have any type.\n\n Slice objects support one method:\n\n slice.indices(self, length)\n\n This method takes a single integer argument *length* and\n computes information about the extended slice that the slice\n object would describe if applied to a sequence of *length*\n items. 
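A short sketch of the frame and traceback attributes described above, using sys.exc_info() (Python 2.7; boom() is an invented helper, and the snippet is assumed to run at module level):

    import sys

    def boom():
        raise ValueError("oops")

    try:
        boom()
    except ValueError:
        tb = sys.exc_info()[2]                            # preferred over sys.exc_traceback
        print tb.tb_frame.f_code.co_name                  # -> <module>  (level holding the handler)
        print tb.tb_next.tb_frame.f_code.co_name          # -> boom      (where the exception occurred)
        print tb.tb_next.tb_frame.f_back is tb.tb_frame   # -> True      (f_back points to the caller)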
It returns a tuple of three integers; respectively\n these are the *start* and *stop* indices and the *step* or\n stride length of the slice. Missing or out-of-bounds indices\n are handled in a manner consistent with regular slices.\n\n New in version 2.3.\n\n Static method objects\n Static method objects provide a way of defeating the\n transformation of function objects to method objects described\n above. A static method object is a wrapper around any other\n object, usually a user-defined method object. When a static\n method object is retrieved from a class or a class instance, the\n object actually returned is the wrapped object, which is not\n subject to any further transformation. Static method objects are\n not themselves callable, although the objects they wrap usually\n are. Static method objects are created by the built-in\n ``staticmethod()`` constructor.\n\n Class method objects\n A class method object, like a static method object, is a wrapper\n around another object that alters the way in which that object\n is retrieved from classes and class instances. The behaviour of\n class method objects upon such retrieval is described above,\n under "User-defined methods". Class method objects are created\n by the built-in ``classmethod()`` constructor.\n', 'typesfunctions': u'\nFunctions\n*********\n\nFunction objects are created by function definitions. The only\noperation on a function object is to call it: ``func(argument-list)``.\n\nThere are really two flavors of function objects: built-in functions\nand user-defined functions. Both support the same operation (to call\nthe function), but the implementation is different, hence the\ndifferent object types.\n\nSee *Function definitions* for more information.\n', - 'typesmapping': u'\nMapping Types --- ``dict``\n**************************\n\nA *mapping* object maps *hashable* values to arbitrary objects.\nMappings are mutable objects. There is currently only one standard\nmapping type, the *dictionary*. (For other containers see the built\nin ``list``, ``set``, and ``tuple`` classes, and the ``collections``\nmodule.)\n\nA dictionary\'s keys are *almost* arbitrary values. Values that are\nnot *hashable*, that is, values containing lists, dictionaries or\nother mutable types (that are compared by value rather than by object\nidentity) may not be used as keys. Numeric types used for keys obey\nthe normal rules for numeric comparison: if two numbers compare equal\n(such as ``1`` and ``1.0``) then they can be used interchangeably to\nindex the same dictionary entry. (Note however, that since computers\nstore floating-point numbers as approximations it is usually unwise to\nuse them as dictionary keys.)\n\nDictionaries can be created by placing a comma-separated list of\n``key: value`` pairs within braces, for example: ``{\'jack\': 4098,\n\'sjoerd\': 4127}`` or ``{4098: \'jack\', 4127: \'sjoerd\'}``, or by the\n``dict`` constructor.\n\nclass class dict([arg])\n\n Return a new dictionary initialized from an optional positional\n argument or from a set of keyword arguments. If no arguments are\n given, return a new empty dictionary. If the positional argument\n *arg* is a mapping object, return a dictionary mapping the same\n keys to the same values as does the mapping object. Otherwise the\n positional argument must be a sequence, a container that supports\n iteration, or an iterator object. The elements of the argument\n must each also be of one of those kinds, and each must in turn\n contain exactly two objects. 
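The slice.indices() behaviour described a little further above can be illustrated with a short sketch:

    s = slice(None, None, -2)
    print s.indices(10)                           # -> (9, -1, -2)
    r = slice(2, 100)
    print r.indices(10)                           # -> (2, 10, 1)  (out-of-range stop is clipped)
    print range(10)[r] == range(*r.indices(10))   # -> True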
The first is used as a key in the new\n dictionary, and the second as the key\'s value. If a given key is\n seen more than once, the last value associated with it is retained\n in the new dictionary.\n\n If keyword arguments are given, the keywords themselves with their\n associated values are added as items to the dictionary. If a key is\n specified both in the positional argument and as a keyword\n argument, the value associated with the keyword is retained in the\n dictionary. For example, these all return a dictionary equal to\n ``{"one": 2, "two": 3}``:\n\n * ``dict(one=2, two=3)``\n\n * ``dict({\'one\': 2, \'two\': 3})``\n\n * ``dict(zip((\'one\', \'two\'), (2, 3)))``\n\n * ``dict([[\'two\', 3], [\'one\', 2]])``\n\n The first example only works for keys that are valid Python\n identifiers; the others work with any valid keys.\n\n New in version 2.2.\n\n Changed in version 2.3: Support for building a dictionary from\n keyword arguments added.\n\n These are the operations that dictionaries support (and therefore,\n custom mapping types should support too):\n\n len(d)\n\n Return the number of items in the dictionary *d*.\n\n d[key]\n\n Return the item of *d* with key *key*. Raises a ``KeyError`` if\n *key* is not in the map.\n\n New in version 2.5: If a subclass of dict defines a method\n ``__missing__()``, if the key *key* is not present, the\n ``d[key]`` operation calls that method with the key *key* as\n argument. The ``d[key]`` operation then returns or raises\n whatever is returned or raised by the ``__missing__(key)`` call\n if the key is not present. No other operations or methods invoke\n ``__missing__()``. If ``__missing__()`` is not defined,\n ``KeyError`` is raised. ``__missing__()`` must be a method; it\n cannot be an instance variable. For an example, see\n ``collections.defaultdict``.\n\n d[key] = value\n\n Set ``d[key]`` to *value*.\n\n del d[key]\n\n Remove ``d[key]`` from *d*. Raises a ``KeyError`` if *key* is\n not in the map.\n\n key in d\n\n Return ``True`` if *d* has a key *key*, else ``False``.\n\n New in version 2.2.\n\n key not in d\n\n Equivalent to ``not key in d``.\n\n New in version 2.2.\n\n iter(d)\n\n Return an iterator over the keys of the dictionary. This is a\n shortcut for ``iterkeys()``.\n\n clear()\n\n Remove all items from the dictionary.\n\n copy()\n\n Return a shallow copy of the dictionary.\n\n fromkeys(seq[, value])\n\n Create a new dictionary with keys from *seq* and values set to\n *value*.\n\n ``fromkeys()`` is a class method that returns a new dictionary.\n *value* defaults to ``None``.\n\n New in version 2.3.\n\n get(key[, default])\n\n Return the value for *key* if *key* is in the dictionary, else\n *default*. If *default* is not given, it defaults to ``None``,\n so that this method never raises a ``KeyError``.\n\n has_key(key)\n\n Test for the presence of *key* in the dictionary. ``has_key()``\n is deprecated in favor of ``key in d``.\n\n items()\n\n Return a copy of the dictionary\'s list of ``(key, value)``\n pairs.\n\n **CPython implementation detail:** Keys and values are listed in\n an arbitrary order which is non-random, varies across Python\n implementations, and depends on the dictionary\'s history of\n insertions and deletions.\n\n If ``items()``, ``keys()``, ``values()``, ``iteritems()``,\n ``iterkeys()``, and ``itervalues()`` are called with no\n intervening modifications to the dictionary, the lists will\n directly correspond. 
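A minimal sketch of the __missing__() hook described for d[key] above (the ZeroDict name is invented):

    class ZeroDict(dict):
        def __missing__(self, key):
            return 0

    d = ZeroDict()
    d['spam'] += 1         # the lookup falls back to __missing__ -> 0, then 1 is stored
    print d['spam']        # -> 1
    print d['eggs']        # -> 0
    print 'eggs' in d      # -> False  (__missing__ does not insert the key)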
This allows the creation of ``(value,\n key)`` pairs using ``zip()``: ``pairs = zip(d.values(),\n d.keys())``. The same relationship holds for the ``iterkeys()``\n and ``itervalues()`` methods: ``pairs = zip(d.itervalues(),\n d.iterkeys())`` provides the same value for ``pairs``. Another\n way to create the same list is ``pairs = [(v, k) for (k, v) in\n d.iteritems()]``.\n\n iteritems()\n\n Return an iterator over the dictionary\'s ``(key, value)`` pairs.\n See the note for ``dict.items()``.\n\n Using ``iteritems()`` while adding or deleting entries in the\n dictionary may raise a ``RuntimeError`` or fail to iterate over\n all entries.\n\n New in version 2.2.\n\n iterkeys()\n\n Return an iterator over the dictionary\'s keys. See the note for\n ``dict.items()``.\n\n Using ``iterkeys()`` while adding or deleting entries in the\n dictionary may raise a ``RuntimeError`` or fail to iterate over\n all entries.\n\n New in version 2.2.\n\n itervalues()\n\n Return an iterator over the dictionary\'s values. See the note\n for ``dict.items()``.\n\n Using ``itervalues()`` while adding or deleting entries in the\n dictionary may raise a ``RuntimeError`` or fail to iterate over\n all entries.\n\n New in version 2.2.\n\n keys()\n\n Return a copy of the dictionary\'s list of keys. See the note\n for ``dict.items()``.\n\n pop(key[, default])\n\n If *key* is in the dictionary, remove it and return its value,\n else return *default*. If *default* is not given and *key* is\n not in the dictionary, a ``KeyError`` is raised.\n\n New in version 2.3.\n\n popitem()\n\n Remove and return an arbitrary ``(key, value)`` pair from the\n dictionary.\n\n ``popitem()`` is useful to destructively iterate over a\n dictionary, as often used in set algorithms. If the dictionary\n is empty, calling ``popitem()`` raises a ``KeyError``.\n\n setdefault(key[, default])\n\n If *key* is in the dictionary, return its value. If not, insert\n *key* with a value of *default* and return *default*. *default*\n defaults to ``None``.\n\n update([other])\n\n Update the dictionary with the key/value pairs from *other*,\n overwriting existing keys. Return ``None``.\n\n ``update()`` accepts either another dictionary object or an\n iterable of key/value pairs (as a tuple or other iterable of\n length two). If keyword arguments are specified, the dictionary\n is then updated with those key/value pairs: ``d.update(red=1,\n blue=2)``.\n\n Changed in version 2.4: Allowed the argument to be an iterable\n of key/value pairs and allowed keyword arguments.\n\n values()\n\n Return a copy of the dictionary\'s list of values. See the note\n for ``dict.items()``.\n\n viewitems()\n\n Return a new view of the dictionary\'s items (``(key, value)``\n pairs). See below for documentation of view objects.\n\n New in version 2.7.\n\n viewkeys()\n\n Return a new view of the dictionary\'s keys. See below for\n documentation of view objects.\n\n New in version 2.7.\n\n viewvalues()\n\n Return a new view of the dictionary\'s values. See below for\n documentation of view objects.\n\n New in version 2.7.\n\n\nDictionary view objects\n=======================\n\nThe objects returned by ``dict.viewkeys()``, ``dict.viewvalues()`` and\n``dict.viewitems()`` are *view objects*. 
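A short sketch of the keys()/values() correspondence and of setdefault(), as described above (the sample data is invented):

    d = {'one': 1, 'two': 2, 'three': 3}
    pairs = zip(d.values(), d.keys())
    print sorted(pairs) == sorted((v, k) for k, v in d.iteritems())   # -> True

    groups = {}
    for name in ['ann', 'bob', 'alice']:
        groups.setdefault(name[0], []).append(name)
    print groups['a']      # -> ['ann', 'alice']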
They provide a dynamic view\non the dictionary\'s entries, which means that when the dictionary\nchanges, the view reflects these changes.\n\nDictionary views can be iterated over to yield their respective data,\nand support membership tests:\n\nlen(dictview)\n\n Return the number of entries in the dictionary.\n\niter(dictview)\n\n Return an iterator over the keys, values or items (represented as\n tuples of ``(key, value)``) in the dictionary.\n\n Keys and values are iterated over in an arbitrary order which is\n non-random, varies across Python implementations, and depends on\n the dictionary\'s history of insertions and deletions. If keys,\n values and items views are iterated over with no intervening\n modifications to the dictionary, the order of items will directly\n correspond. This allows the creation of ``(value, key)`` pairs\n using ``zip()``: ``pairs = zip(d.values(), d.keys())``. Another\n way to create the same list is ``pairs = [(v, k) for (k, v) in\n d.items()]``.\n\n Iterating views while adding or deleting entries in the dictionary\n may raise a ``RuntimeError`` or fail to iterate over all entries.\n\nx in dictview\n\n Return ``True`` if *x* is in the underlying dictionary\'s keys,\n values or items (in the latter case, *x* should be a ``(key,\n value)`` tuple).\n\nKeys views are set-like since their entries are unique and hashable.\nIf all values are hashable, so that (key, value) pairs are unique and\nhashable, then the items view is also set-like. (Values views are not\ntreated as set-like since the entries are generally not unique.) Then\nthese set operations are available ("other" refers either to another\nview or a set):\n\ndictview & other\n\n Return the intersection of the dictview and the other object as a\n new set.\n\ndictview | other\n\n Return the union of the dictview and the other object as a new set.\n\ndictview - other\n\n Return the difference between the dictview and the other object\n (all elements in *dictview* that aren\'t in *other*) as a new set.\n\ndictview ^ other\n\n Return the symmetric difference (all elements either in *dictview*\n or *other*, but not in both) of the dictview and the other object\n as a new set.\n\nAn example of dictionary view usage:\n\n >>> dishes = {\'eggs\': 2, \'sausage\': 1, \'bacon\': 1, \'spam\': 500}\n >>> keys = dishes.viewkeys()\n >>> values = dishes.viewvalues()\n\n >>> # iteration\n >>> n = 0\n >>> for val in values:\n ... n += val\n >>> print(n)\n 504\n\n >>> # keys and values are iterated over in the same order\n >>> list(keys)\n [\'eggs\', \'bacon\', \'sausage\', \'spam\']\n >>> list(values)\n [2, 1, 1, 500]\n\n >>> # view objects are dynamic and reflect dict changes\n >>> del dishes[\'eggs\']\n >>> del dishes[\'sausage\']\n >>> list(keys)\n [\'spam\', \'bacon\']\n\n >>> # set operations\n >>> keys & {\'eggs\', \'bacon\', \'salad\'}\n {\'bacon\'}\n', + 'typesmapping': u'\nMapping Types --- ``dict``\n**************************\n\nA *mapping* object maps *hashable* values to arbitrary objects.\nMappings are mutable objects. There is currently only one standard\nmapping type, the *dictionary*. (For other containers see the built\nin ``list``, ``set``, and ``tuple`` classes, and the ``collections``\nmodule.)\n\nA dictionary\'s keys are *almost* arbitrary values. Values that are\nnot *hashable*, that is, values containing lists, dictionaries or\nother mutable types (that are compared by value rather than by object\nidentity) may not be used as keys. 
Numeric types used for keys obey\nthe normal rules for numeric comparison: if two numbers compare equal\n(such as ``1`` and ``1.0``) then they can be used interchangeably to\nindex the same dictionary entry. (Note however, that since computers\nstore floating-point numbers as approximations it is usually unwise to\nuse them as dictionary keys.)\n\nDictionaries can be created by placing a comma-separated list of\n``key: value`` pairs within braces, for example: ``{\'jack\': 4098,\n\'sjoerd\': 4127}`` or ``{4098: \'jack\', 4127: \'sjoerd\'}``, or by the\n``dict`` constructor.\n\nclass class dict([arg])\n\n Return a new dictionary initialized from an optional positional\n argument or from a set of keyword arguments. If no arguments are\n given, return a new empty dictionary. If the positional argument\n *arg* is a mapping object, return a dictionary mapping the same\n keys to the same values as does the mapping object. Otherwise the\n positional argument must be a sequence, a container that supports\n iteration, or an iterator object. The elements of the argument\n must each also be of one of those kinds, and each must in turn\n contain exactly two objects. The first is used as a key in the new\n dictionary, and the second as the key\'s value. If a given key is\n seen more than once, the last value associated with it is retained\n in the new dictionary.\n\n If keyword arguments are given, the keywords themselves with their\n associated values are added as items to the dictionary. If a key is\n specified both in the positional argument and as a keyword\n argument, the value associated with the keyword is retained in the\n dictionary. For example, these all return a dictionary equal to\n ``{"one": 1, "two": 2}``:\n\n * ``dict(one=1, two=2)``\n\n * ``dict({\'one\': 1, \'two\': 2})``\n\n * ``dict(zip((\'one\', \'two\'), (1, 2)))``\n\n * ``dict([[\'two\', 2], [\'one\', 1]])``\n\n The first example only works for keys that are valid Python\n identifiers; the others work with any valid keys.\n\n New in version 2.2.\n\n Changed in version 2.3: Support for building a dictionary from\n keyword arguments added.\n\n These are the operations that dictionaries support (and therefore,\n custom mapping types should support too):\n\n len(d)\n\n Return the number of items in the dictionary *d*.\n\n d[key]\n\n Return the item of *d* with key *key*. Raises a ``KeyError`` if\n *key* is not in the map.\n\n New in version 2.5: If a subclass of dict defines a method\n ``__missing__()``, if the key *key* is not present, the\n ``d[key]`` operation calls that method with the key *key* as\n argument. The ``d[key]`` operation then returns or raises\n whatever is returned or raised by the ``__missing__(key)`` call\n if the key is not present. No other operations or methods invoke\n ``__missing__()``. If ``__missing__()`` is not defined,\n ``KeyError`` is raised. ``__missing__()`` must be a method; it\n cannot be an instance variable. For an example, see\n ``collections.defaultdict``.\n\n d[key] = value\n\n Set ``d[key]`` to *value*.\n\n del d[key]\n\n Remove ``d[key]`` from *d*. Raises a ``KeyError`` if *key* is\n not in the map.\n\n key in d\n\n Return ``True`` if *d* has a key *key*, else ``False``.\n\n New in version 2.2.\n\n key not in d\n\n Equivalent to ``not key in d``.\n\n New in version 2.2.\n\n iter(d)\n\n Return an iterator over the keys of the dictionary. 
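The equivalent dict() construction forms listed above, plus the last-value-wins rule for repeated keys, in a small sketch:

    a = dict(one=1, two=2)
    b = dict({'one': 1, 'two': 2})
    c = dict(zip(('one', 'two'), (1, 2)))
    d = dict([['two', 2], ['one', 1]])
    print a == b == c == d               # -> True
    print dict([('x', 1), ('x', 2)])     # -> {'x': 2}  (the last value for a repeated key is kept)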
This is a\n shortcut for ``iterkeys()``.\n\n clear()\n\n Remove all items from the dictionary.\n\n copy()\n\n Return a shallow copy of the dictionary.\n\n fromkeys(seq[, value])\n\n Create a new dictionary with keys from *seq* and values set to\n *value*.\n\n ``fromkeys()`` is a class method that returns a new dictionary.\n *value* defaults to ``None``.\n\n New in version 2.3.\n\n get(key[, default])\n\n Return the value for *key* if *key* is in the dictionary, else\n *default*. If *default* is not given, it defaults to ``None``,\n so that this method never raises a ``KeyError``.\n\n has_key(key)\n\n Test for the presence of *key* in the dictionary. ``has_key()``\n is deprecated in favor of ``key in d``.\n\n items()\n\n Return a copy of the dictionary\'s list of ``(key, value)``\n pairs.\n\n **CPython implementation detail:** Keys and values are listed in\n an arbitrary order which is non-random, varies across Python\n implementations, and depends on the dictionary\'s history of\n insertions and deletions.\n\n If ``items()``, ``keys()``, ``values()``, ``iteritems()``,\n ``iterkeys()``, and ``itervalues()`` are called with no\n intervening modifications to the dictionary, the lists will\n directly correspond. This allows the creation of ``(value,\n key)`` pairs using ``zip()``: ``pairs = zip(d.values(),\n d.keys())``. The same relationship holds for the ``iterkeys()``\n and ``itervalues()`` methods: ``pairs = zip(d.itervalues(),\n d.iterkeys())`` provides the same value for ``pairs``. Another\n way to create the same list is ``pairs = [(v, k) for (k, v) in\n d.iteritems()]``.\n\n iteritems()\n\n Return an iterator over the dictionary\'s ``(key, value)`` pairs.\n See the note for ``dict.items()``.\n\n Using ``iteritems()`` while adding or deleting entries in the\n dictionary may raise a ``RuntimeError`` or fail to iterate over\n all entries.\n\n New in version 2.2.\n\n iterkeys()\n\n Return an iterator over the dictionary\'s keys. See the note for\n ``dict.items()``.\n\n Using ``iterkeys()`` while adding or deleting entries in the\n dictionary may raise a ``RuntimeError`` or fail to iterate over\n all entries.\n\n New in version 2.2.\n\n itervalues()\n\n Return an iterator over the dictionary\'s values. See the note\n for ``dict.items()``.\n\n Using ``itervalues()`` while adding or deleting entries in the\n dictionary may raise a ``RuntimeError`` or fail to iterate over\n all entries.\n\n New in version 2.2.\n\n keys()\n\n Return a copy of the dictionary\'s list of keys. See the note\n for ``dict.items()``.\n\n pop(key[, default])\n\n If *key* is in the dictionary, remove it and return its value,\n else return *default*. If *default* is not given and *key* is\n not in the dictionary, a ``KeyError`` is raised.\n\n New in version 2.3.\n\n popitem()\n\n Remove and return an arbitrary ``(key, value)`` pair from the\n dictionary.\n\n ``popitem()`` is useful to destructively iterate over a\n dictionary, as often used in set algorithms. If the dictionary\n is empty, calling ``popitem()`` raises a ``KeyError``.\n\n setdefault(key[, default])\n\n If *key* is in the dictionary, return its value. If not, insert\n *key* with a value of *default* and return *default*. *default*\n defaults to ``None``.\n\n update([other])\n\n Update the dictionary with the key/value pairs from *other*,\n overwriting existing keys. Return ``None``.\n\n ``update()`` accepts either another dictionary object or an\n iterable of key/value pairs (as tuples or other iterables of\n length two). 
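A short sketch of fromkeys(), get(), pop() and popitem() as described above (the keys are invented):

    d = dict.fromkeys(['red', 'blue'], 0)
    print d.get('green', -1)      # -> -1     (no KeyError)
    print d.pop('red')            # -> 0
    print d.pop('red', 'gone')    # -> gone   (default instead of KeyError)
    while d:
        d.popitem()               # destructively drain arbitrary (key, value) pairs
    print d                       # -> {}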
If keyword arguments are specified, the dictionary\n is then updated with those key/value pairs: ``d.update(red=1,\n blue=2)``.\n\n Changed in version 2.4: Allowed the argument to be an iterable\n of key/value pairs and allowed keyword arguments.\n\n values()\n\n Return a copy of the dictionary\'s list of values. See the note\n for ``dict.items()``.\n\n viewitems()\n\n Return a new view of the dictionary\'s items (``(key, value)``\n pairs). See below for documentation of view objects.\n\n New in version 2.7.\n\n viewkeys()\n\n Return a new view of the dictionary\'s keys. See below for\n documentation of view objects.\n\n New in version 2.7.\n\n viewvalues()\n\n Return a new view of the dictionary\'s values. See below for\n documentation of view objects.\n\n New in version 2.7.\n\n\nDictionary view objects\n=======================\n\nThe objects returned by ``dict.viewkeys()``, ``dict.viewvalues()`` and\n``dict.viewitems()`` are *view objects*. They provide a dynamic view\non the dictionary\'s entries, which means that when the dictionary\nchanges, the view reflects these changes.\n\nDictionary views can be iterated over to yield their respective data,\nand support membership tests:\n\nlen(dictview)\n\n Return the number of entries in the dictionary.\n\niter(dictview)\n\n Return an iterator over the keys, values or items (represented as\n tuples of ``(key, value)``) in the dictionary.\n\n Keys and values are iterated over in an arbitrary order which is\n non-random, varies across Python implementations, and depends on\n the dictionary\'s history of insertions and deletions. If keys,\n values and items views are iterated over with no intervening\n modifications to the dictionary, the order of items will directly\n correspond. This allows the creation of ``(value, key)`` pairs\n using ``zip()``: ``pairs = zip(d.values(), d.keys())``. Another\n way to create the same list is ``pairs = [(v, k) for (k, v) in\n d.items()]``.\n\n Iterating views while adding or deleting entries in the dictionary\n may raise a ``RuntimeError`` or fail to iterate over all entries.\n\nx in dictview\n\n Return ``True`` if *x* is in the underlying dictionary\'s keys,\n values or items (in the latter case, *x* should be a ``(key,\n value)`` tuple).\n\nKeys views are set-like since their entries are unique and hashable.\nIf all values are hashable, so that (key, value) pairs are unique and\nhashable, then the items view is also set-like. (Values views are not\ntreated as set-like since the entries are generally not unique.) Then\nthese set operations are available ("other" refers either to another\nview or a set):\n\ndictview & other\n\n Return the intersection of the dictview and the other object as a\n new set.\n\ndictview | other\n\n Return the union of the dictview and the other object as a new set.\n\ndictview - other\n\n Return the difference between the dictview and the other object\n (all elements in *dictview* that aren\'t in *other*) as a new set.\n\ndictview ^ other\n\n Return the symmetric difference (all elements either in *dictview*\n or *other*, but not in both) of the dictview and the other object\n as a new set.\n\nAn example of dictionary view usage:\n\n >>> dishes = {\'eggs\': 2, \'sausage\': 1, \'bacon\': 1, \'spam\': 500}\n >>> keys = dishes.viewkeys()\n >>> values = dishes.viewvalues()\n\n >>> # iteration\n >>> n = 0\n >>> for val in values:\n ... 
n += val\n >>> print(n)\n 504\n\n >>> # keys and values are iterated over in the same order\n >>> list(keys)\n [\'eggs\', \'bacon\', \'sausage\', \'spam\']\n >>> list(values)\n [2, 1, 1, 500]\n\n >>> # view objects are dynamic and reflect dict changes\n >>> del dishes[\'eggs\']\n >>> del dishes[\'sausage\']\n >>> list(keys)\n [\'spam\', \'bacon\']\n\n >>> # set operations\n >>> keys & {\'eggs\', \'bacon\', \'salad\'}\n {\'bacon\'}\n', 'typesmethods': u"\nMethods\n*******\n\nMethods are functions that are called using the attribute notation.\nThere are two flavors: built-in methods (such as ``append()`` on\nlists) and class instance methods. Built-in methods are described\nwith the types that support them.\n\nThe implementation adds two special read-only attributes to class\ninstance methods: ``m.im_self`` is the object on which the method\noperates, and ``m.im_func`` is the function implementing the method.\nCalling ``m(arg-1, arg-2, ..., arg-n)`` is completely equivalent to\ncalling ``m.im_func(m.im_self, arg-1, arg-2, ..., arg-n)``.\n\nClass instance methods are either *bound* or *unbound*, referring to\nwhether the method was accessed through an instance or a class,\nrespectively. When a method is unbound, its ``im_self`` attribute\nwill be ``None`` and if called, an explicit ``self`` object must be\npassed as the first argument. In this case, ``self`` must be an\ninstance of the unbound method's class (or a subclass of that class),\notherwise a ``TypeError`` is raised.\n\nLike function objects, methods objects support getting arbitrary\nattributes. However, since method attributes are actually stored on\nthe underlying function object (``meth.im_func``), setting method\nattributes on either bound or unbound methods is disallowed.\nAttempting to set a method attribute results in a ``TypeError`` being\nraised. In order to set a method attribute, you need to explicitly\nset it on the underlying function object:\n\n class C:\n def method(self):\n pass\n\n c = C()\n c.method.im_func.whoami = 'my name is c'\n\nSee *The standard type hierarchy* for more information.\n", 'typesmodules': u"\nModules\n*******\n\nThe only special operation on a module is attribute access:\n``m.name``, where *m* is a module and *name* accesses a name defined\nin *m*'s symbol table. Module attributes can be assigned to. (Note\nthat the ``import`` statement is not, strictly speaking, an operation\non a module object; ``import foo`` does not require a module object\nnamed *foo* to exist, rather it requires an (external) *definition*\nfor a module named *foo* somewhere.)\n\nA special member of every module is ``__dict__``. This is the\ndictionary containing the module's symbol table. Modifying this\ndictionary will actually change the module's symbol table, but direct\nassignment to the ``__dict__`` attribute is not possible (you can\nwrite ``m.__dict__['a'] = 1``, which defines ``m.a`` to be ``1``, but\nyou can't write ``m.__dict__ = {}``). Modifying ``__dict__`` directly\nis not recommended.\n\nModules built into the interpreter are written like this: ````. 
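A minimal sketch of the bound-method call equivalence stated under "Methods" (Python 2.7; the Greeter class is invented):

    class Greeter(object):
        def hello(self, name):
            return 'hello, ' + name

    g = Greeter()
    m = g.hello
    print m('world') == m.im_func(m.im_self, 'world')   # -> True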
If loaded from a file, they are written as\n````.\n", - 'typesseq': u'\nSequence Types --- ``str``, ``unicode``, ``list``, ``tuple``, ``buffer``, ``xrange``\n************************************************************************************\n\nThere are six sequence types: strings, Unicode strings, lists, tuples,\nbuffers, and xrange objects.\n\nFor other containers see the built in ``dict`` and ``set`` classes,\nand the ``collections`` module.\n\nString literals are written in single or double quotes: ``\'xyzzy\'``,\n``"frobozz"``. See *String literals* for more about string literals.\nUnicode strings are much like strings, but are specified in the syntax\nusing a preceding ``\'u\'`` character: ``u\'abc\'``, ``u"def"``. In\naddition to the functionality described here, there are also string-\nspecific methods described in the *String Methods* section. Lists are\nconstructed with square brackets, separating items with commas: ``[a,\nb, c]``. Tuples are constructed by the comma operator (not within\nsquare brackets), with or without enclosing parentheses, but an empty\ntuple must have the enclosing parentheses, such as ``a, b, c`` or\n``()``. A single item tuple must have a trailing comma, such as\n``(d,)``.\n\nBuffer objects are not directly supported by Python syntax, but can be\ncreated by calling the built-in function ``buffer()``. They don\'t\nsupport concatenation or repetition.\n\nObjects of type xrange are similar to buffers in that there is no\nspecific syntax to create them, but they are created using the\n``xrange()`` function. They don\'t support slicing, concatenation or\nrepetition, and using ``in``, ``not in``, ``min()`` or ``max()`` on\nthem is inefficient.\n\nMost sequence types support the following operations. The ``in`` and\n``not in`` operations have the same priorities as the comparison\noperations. The ``+`` and ``*`` operations have the same priority as\nthe corresponding numeric operations. [3] Additional methods are\nprovided for *Mutable Sequence Types*.\n\nThis table lists the sequence operations sorted in ascending priority\n(operations in the same box have the same priority). 
In the table,\n*s* and *t* are sequences of the same type; *n*, *i* and *j* are\nintegers:\n\n+--------------------+----------------------------------+------------+\n| Operation | Result | Notes |\n+====================+==================================+============+\n| ``x in s`` | ``True`` if an item of *s* is | (1) |\n| | equal to *x*, else ``False`` | |\n+--------------------+----------------------------------+------------+\n| ``x not in s`` | ``False`` if an item of *s* is | (1) |\n| | equal to *x*, else ``True`` | |\n+--------------------+----------------------------------+------------+\n| ``s + t`` | the concatenation of *s* and *t* | (6) |\n+--------------------+----------------------------------+------------+\n| ``s * n, n * s`` | *n* shallow copies of *s* | (2) |\n| | concatenated | |\n+--------------------+----------------------------------+------------+\n| ``s[i]`` | *i*\'th item of *s*, origin 0 | (3) |\n+--------------------+----------------------------------+------------+\n| ``s[i:j]`` | slice of *s* from *i* to *j* | (3)(4) |\n+--------------------+----------------------------------+------------+\n| ``s[i:j:k]`` | slice of *s* from *i* to *j* | (3)(5) |\n| | with step *k* | |\n+--------------------+----------------------------------+------------+\n| ``len(s)`` | length of *s* | |\n+--------------------+----------------------------------+------------+\n| ``min(s)`` | smallest item of *s* | |\n+--------------------+----------------------------------+------------+\n| ``max(s)`` | largest item of *s* | |\n+--------------------+----------------------------------+------------+\n\nSequence types also support comparisons. In particular, tuples and\nlists are compared lexicographically by comparing corresponding\nelements. This means that to compare equal, every element must compare\nequal and the two sequences must be of the same type and have the same\nlength. (For full details see *Comparisons* in the language\nreference.)\n\nNotes:\n\n1. When *s* is a string or Unicode string object the ``in`` and ``not\n in`` operations act like a substring test. In Python versions\n before 2.3, *x* had to be a string of length 1. In Python 2.3 and\n beyond, *x* may be a string of any length.\n\n2. Values of *n* less than ``0`` are treated as ``0`` (which yields an\n empty sequence of the same type as *s*). Note also that the copies\n are shallow; nested structures are not copied. This often haunts\n new Python programmers; consider:\n\n >>> lists = [[]] * 3\n >>> lists\n [[], [], []]\n >>> lists[0].append(3)\n >>> lists\n [[3], [3], [3]]\n\n What has happened is that ``[[]]`` is a one-element list containing\n an empty list, so all three elements of ``[[]] * 3`` are (pointers\n to) this single empty list. Modifying any of the elements of\n ``lists`` modifies this single list. You can create a list of\n different lists this way:\n\n >>> lists = [[] for i in range(3)]\n >>> lists[0].append(3)\n >>> lists[1].append(5)\n >>> lists[2].append(7)\n >>> lists\n [[3], [5], [7]]\n\n3. If *i* or *j* is negative, the index is relative to the end of the\n string: ``len(s) + i`` or ``len(s) + j`` is substituted. But note\n that ``-0`` is still ``0``.\n\n4. The slice of *s* from *i* to *j* is defined as the sequence of\n items with index *k* such that ``i <= k < j``. If *i* or *j* is\n greater than ``len(s)``, use ``len(s)``. If *i* is omitted or\n ``None``, use ``0``. If *j* is omitted or ``None``, use\n ``len(s)``. If *i* is greater than or equal to *j*, the slice is\n empty.\n\n5. 
The slice of *s* from *i* to *j* with step *k* is defined as the\n sequence of items with index ``x = i + n*k`` such that ``0 <= n <\n (j-i)/k``. In other words, the indices are ``i``, ``i+k``,\n ``i+2*k``, ``i+3*k`` and so on, stopping when *j* is reached (but\n never including *j*). If *i* or *j* is greater than ``len(s)``,\n use ``len(s)``. If *i* or *j* are omitted or ``None``, they become\n "end" values (which end depends on the sign of *k*). Note, *k*\n cannot be zero. If *k* is ``None``, it is treated like ``1``.\n\n6. **CPython implementation detail:** If *s* and *t* are both strings,\n some Python implementations such as CPython can usually perform an\n in-place optimization for assignments of the form ``s = s + t`` or\n ``s += t``. When applicable, this optimization makes quadratic\n run-time much less likely. This optimization is both version and\n implementation dependent. For performance sensitive code, it is\n preferable to use the ``str.join()`` method which assures\n consistent linear concatenation performance across versions and\n implementations.\n\n Changed in version 2.4: Formerly, string concatenation never\n occurred in-place.\n\n\nString Methods\n==============\n\nBelow are listed the string methods which both 8-bit strings and\nUnicode objects support.\n\nIn addition, Python\'s strings support the sequence type methods\ndescribed in the *Sequence Types --- str, unicode, list, tuple,\nbuffer, xrange* section. To output formatted strings use template\nstrings or the ``%`` operator described in the *String Formatting\nOperations* section. Also, see the ``re`` module for string functions\nbased on regular expressions.\n\nstr.capitalize()\n\n Return a copy of the string with only its first character\n capitalized.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.center(width[, fillchar])\n\n Return centered in a string of length *width*. Padding is done\n using the specified *fillchar* (default is a space).\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.count(sub[, start[, end]])\n\n Return the number of non-overlapping occurrences of substring *sub*\n in the range [*start*, *end*]. Optional arguments *start* and\n *end* are interpreted as in slice notation.\n\nstr.decode([encoding[, errors]])\n\n Decodes the string using the codec registered for *encoding*.\n *encoding* defaults to the default string encoding. *errors* may\n be given to set a different error handling scheme. The default is\n ``\'strict\'``, meaning that encoding errors raise ``UnicodeError``.\n Other possible values are ``\'ignore\'``, ``\'replace\'`` and any other\n name registered via ``codecs.register_error()``, see section *Codec\n Base Classes*.\n\n New in version 2.2.\n\n Changed in version 2.3: Support for other error handling schemes\n added.\n\n Changed in version 2.7: Support for keyword arguments added.\n\nstr.encode([encoding[, errors]])\n\n Return an encoded version of the string. Default encoding is the\n current default string encoding. *errors* may be given to set a\n different error handling scheme. The default for *errors* is\n ``\'strict\'``, meaning that encoding errors raise a\n ``UnicodeError``. Other possible values are ``\'ignore\'``,\n ``\'replace\'``, ``\'xmlcharrefreplace\'``, ``\'backslashreplace\'`` and\n any other name registered via ``codecs.register_error()``, see\n section *Codec Base Classes*. 
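A short sketch of the sequence operations and slicing notes tabulated above (the sample string is invented):

    s = 'bookshop'
    print 'shop' in s                 # -> True    (substring test, note 1)
    print s[-4:]                      # -> shop    (negative indices count from the end, note 3)
    print s[2:100]                    # -> okshop  (an over-large j is clipped to len(s), note 4)
    print s[::2]                      # -> boso    (extended slice with step 2, note 5)
    print ''.join(['book', 'shop'])   # -> bookshop  (str.join for linear concatenation, note 6)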
For a list of possible encodings, see\n section *Standard Encodings*.\n\n New in version 2.0.\n\n Changed in version 2.3: Support for ``\'xmlcharrefreplace\'`` and\n ``\'backslashreplace\'`` and other error handling schemes added.\n\n Changed in version 2.7: Support for keyword arguments added.\n\nstr.endswith(suffix[, start[, end]])\n\n Return ``True`` if the string ends with the specified *suffix*,\n otherwise return ``False``. *suffix* can also be a tuple of\n suffixes to look for. With optional *start*, test beginning at\n that position. With optional *end*, stop comparing at that\n position.\n\n Changed in version 2.5: Accept tuples as *suffix*.\n\nstr.expandtabs([tabsize])\n\n Return a copy of the string where all tab characters are replaced\n by one or more spaces, depending on the current column and the\n given tab size. The column number is reset to zero after each\n newline occurring in the string. If *tabsize* is not given, a tab\n size of ``8`` characters is assumed. This doesn\'t understand other\n non-printing characters or escape sequences.\n\nstr.find(sub[, start[, end]])\n\n Return the lowest index in the string where substring *sub* is\n found, such that *sub* is contained in the slice ``s[start:end]``.\n Optional arguments *start* and *end* are interpreted as in slice\n notation. Return ``-1`` if *sub* is not found.\n\nstr.format(*args, **kwargs)\n\n Perform a string formatting operation. The string on which this\n method is called can contain literal text or replacement fields\n delimited by braces ``{}``. Each replacement field contains either\n the numeric index of a positional argument, or the name of a\n keyword argument. Returns a copy of the string where each\n replacement field is replaced with the string value of the\n corresponding argument.\n\n >>> "The sum of 1 + 2 is {0}".format(1+2)\n \'The sum of 1 + 2 is 3\'\n\n See *Format String Syntax* for a description of the various\n formatting options that can be specified in format strings.\n\n This method of string formatting is the new standard in Python 3.0,\n and should be preferred to the ``%`` formatting described in\n *String Formatting Operations* in new code.\n\n New in version 2.6.\n\nstr.index(sub[, start[, end]])\n\n Like ``find()``, but raise ``ValueError`` when the substring is not\n found.\n\nstr.isalnum()\n\n Return true if all characters in the string are alphanumeric and\n there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isalpha()\n\n Return true if all characters in the string are alphabetic and\n there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isdigit()\n\n Return true if all characters in the string are digits and there is\n at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.islower()\n\n Return true if all cased characters in the string are lowercase and\n there is at least one cased character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isspace()\n\n Return true if there are only whitespace characters in the string\n and there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.istitle()\n\n Return true if the string is a titlecased string and there is at\n least one character, for example uppercase characters may only\n follow uncased characters and lowercase characters only cased ones.\n Return false otherwise.\n\n For 
8-bit strings, this method is locale-dependent.\n\nstr.isupper()\n\n Return true if all cased characters in the string are uppercase and\n there is at least one cased character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.join(iterable)\n\n Return a string which is the concatenation of the strings in the\n *iterable* *iterable*. The separator between elements is the\n string providing this method.\n\nstr.ljust(width[, fillchar])\n\n Return the string left justified in a string of length *width*.\n Padding is done using the specified *fillchar* (default is a\n space). The original string is returned if *width* is less than\n ``len(s)``.\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.lower()\n\n Return a copy of the string converted to lowercase.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.lstrip([chars])\n\n Return a copy of the string with leading characters removed. The\n *chars* argument is a string specifying the set of characters to be\n removed. If omitted or ``None``, the *chars* argument defaults to\n removing whitespace. The *chars* argument is not a prefix; rather,\n all combinations of its values are stripped:\n\n >>> \' spacious \'.lstrip()\n \'spacious \'\n >>> \'www.example.com\'.lstrip(\'cmowz.\')\n \'example.com\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.partition(sep)\n\n Split the string at the first occurrence of *sep*, and return a\n 3-tuple containing the part before the separator, the separator\n itself, and the part after the separator. If the separator is not\n found, return a 3-tuple containing the string itself, followed by\n two empty strings.\n\n New in version 2.5.\n\nstr.replace(old, new[, count])\n\n Return a copy of the string with all occurrences of substring *old*\n replaced by *new*. If the optional argument *count* is given, only\n the first *count* occurrences are replaced.\n\nstr.rfind(sub[, start[, end]])\n\n Return the highest index in the string where substring *sub* is\n found, such that *sub* is contained within ``s[start:end]``.\n Optional arguments *start* and *end* are interpreted as in slice\n notation. Return ``-1`` on failure.\n\nstr.rindex(sub[, start[, end]])\n\n Like ``rfind()`` but raises ``ValueError`` when the substring *sub*\n is not found.\n\nstr.rjust(width[, fillchar])\n\n Return the string right justified in a string of length *width*.\n Padding is done using the specified *fillchar* (default is a\n space). The original string is returned if *width* is less than\n ``len(s)``.\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.rpartition(sep)\n\n Split the string at the last occurrence of *sep*, and return a\n 3-tuple containing the part before the separator, the separator\n itself, and the part after the separator. If the separator is not\n found, return a 3-tuple containing two empty strings, followed by\n the string itself.\n\n New in version 2.5.\n\nstr.rsplit([sep[, maxsplit]])\n\n Return a list of the words in the string, using *sep* as the\n delimiter string. If *maxsplit* is given, at most *maxsplit* splits\n are done, the *rightmost* ones. If *sep* is not specified or\n ``None``, any whitespace string is a separator. Except for\n splitting from the right, ``rsplit()`` behaves like ``split()``\n which is described in detail below.\n\n New in version 2.4.\n\nstr.rstrip([chars])\n\n Return a copy of the string with trailing characters removed. 
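A short sketch of join(), ljust(), partition()/rpartition() and rsplit() as described above:

    print '-'.join(['2012', '02', '01'])      # -> 2012-02-01
    print 'pad'.ljust(6, '.')                 # -> pad...
    print 'key=value=more'.partition('=')     # -> ('key', '=', 'value=more')
    print 'key=value=more'.rpartition('=')    # -> ('key=value', '=', 'more')
    print 'a b c d'.rsplit(None, 1)           # -> ['a b c', 'd']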
The\n *chars* argument is a string specifying the set of characters to be\n removed. If omitted or ``None``, the *chars* argument defaults to\n removing whitespace. The *chars* argument is not a suffix; rather,\n all combinations of its values are stripped:\n\n >>> \' spacious \'.rstrip()\n \' spacious\'\n >>> \'mississippi\'.rstrip(\'ipz\')\n \'mississ\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.split([sep[, maxsplit]])\n\n Return a list of the words in the string, using *sep* as the\n delimiter string. If *maxsplit* is given, at most *maxsplit*\n splits are done (thus, the list will have at most ``maxsplit+1``\n elements). If *maxsplit* is not specified, then there is no limit\n on the number of splits (all possible splits are made).\n\n If *sep* is given, consecutive delimiters are not grouped together\n and are deemed to delimit empty strings (for example,\n ``\'1,,2\'.split(\',\')`` returns ``[\'1\', \'\', \'2\']``). The *sep*\n argument may consist of multiple characters (for example,\n ``\'1<>2<>3\'.split(\'<>\')`` returns ``[\'1\', \'2\', \'3\']``). Splitting\n an empty string with a specified separator returns ``[\'\']``.\n\n If *sep* is not specified or is ``None``, a different splitting\n algorithm is applied: runs of consecutive whitespace are regarded\n as a single separator, and the result will contain no empty strings\n at the start or end if the string has leading or trailing\n whitespace. Consequently, splitting an empty string or a string\n consisting of just whitespace with a ``None`` separator returns\n ``[]``.\n\n For example, ``\' 1 2 3 \'.split()`` returns ``[\'1\', \'2\', \'3\']``,\n and ``\' 1 2 3 \'.split(None, 1)`` returns ``[\'1\', \'2 3 \']``.\n\nstr.splitlines([keepends])\n\n Return a list of the lines in the string, breaking at line\n boundaries. Line breaks are not included in the resulting list\n unless *keepends* is given and true.\n\nstr.startswith(prefix[, start[, end]])\n\n Return ``True`` if string starts with the *prefix*, otherwise\n return ``False``. *prefix* can also be a tuple of prefixes to look\n for. With optional *start*, test string beginning at that\n position. With optional *end*, stop comparing string at that\n position.\n\n Changed in version 2.5: Accept tuples as *prefix*.\n\nstr.strip([chars])\n\n Return a copy of the string with the leading and trailing\n characters removed. The *chars* argument is a string specifying the\n set of characters to be removed. If omitted or ``None``, the\n *chars* argument defaults to removing whitespace. The *chars*\n argument is not a prefix or suffix; rather, all combinations of its\n values are stripped:\n\n >>> \' spacious \'.strip()\n \'spacious\'\n >>> \'www.example.com\'.strip(\'cmowz.\')\n \'example\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.swapcase()\n\n Return a copy of the string with uppercase characters converted to\n lowercase and vice versa.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.title()\n\n Return a titlecased version of the string where words start with an\n uppercase character and the remaining characters are lowercase.\n\n The algorithm uses a simple language-independent definition of a\n word as groups of consecutive letters. 
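A short sketch of the split() and strip() rules spelled out above:

    print '1,,2'.split(',')                   # -> ['1', '', '2']   (an explicit sep keeps empty strings)
    print '1<>2<>3'.split('<>')               # -> ['1', '2', '3']  (multi-character separator)
    print '  1  2   3  '.split()              # -> ['1', '2', '3']  (None collapses whitespace runs)
    print ''.split()                          # -> []
    print ''.split(',')                       # -> ['']
    print 'www.example.com'.strip('cmowz.')   # -> example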
The definition works in\n many contexts but it means that apostrophes in contractions and\n possessives form word boundaries, which may not be the desired\n result:\n\n >>> "they\'re bill\'s friends from the UK".title()\n "They\'Re Bill\'S Friends From The Uk"\n\n A workaround for apostrophes can be constructed using regular\n expressions:\n\n >>> import re\n >>> def titlecase(s):\n return re.sub(r"[A-Za-z]+(\'[A-Za-z]+)?",\n lambda mo: mo.group(0)[0].upper() +\n mo.group(0)[1:].lower(),\n s)\n\n >>> titlecase("they\'re bill\'s friends.")\n "They\'re Bill\'s Friends."\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.translate(table[, deletechars])\n\n Return a copy of the string where all characters occurring in the\n optional argument *deletechars* are removed, and the remaining\n characters have been mapped through the given translation table,\n which must be a string of length 256.\n\n You can use the ``maketrans()`` helper function in the ``string``\n module to create a translation table. For string objects, set the\n *table* argument to ``None`` for translations that only delete\n characters:\n\n >>> \'read this short text\'.translate(None, \'aeiou\')\n \'rd ths shrt txt\'\n\n New in version 2.6: Support for a ``None`` *table* argument.\n\n For Unicode objects, the ``translate()`` method does not accept the\n optional *deletechars* argument. Instead, it returns a copy of the\n *s* where all characters have been mapped through the given\n translation table which must be a mapping of Unicode ordinals to\n Unicode ordinals, Unicode strings or ``None``. Unmapped characters\n are left untouched. Characters mapped to ``None`` are deleted.\n Note, a more flexible approach is to create a custom character\n mapping codec using the ``codecs`` module (see ``encodings.cp1251``\n for an example).\n\nstr.upper()\n\n Return a copy of the string converted to uppercase.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.zfill(width)\n\n Return the numeric string left filled with zeros in a string of\n length *width*. A sign prefix is handled correctly. The original\n string is returned if *width* is less than ``len(s)``.\n\n New in version 2.2.2.\n\nThe following methods are present only on unicode objects:\n\nunicode.isnumeric()\n\n Return ``True`` if there are only numeric characters in S,\n ``False`` otherwise. Numeric characters include digit characters,\n and all characters that have the Unicode numeric value property,\n e.g. U+2155, VULGAR FRACTION ONE FIFTH.\n\nunicode.isdecimal()\n\n Return ``True`` if there are only decimal characters in S,\n ``False`` otherwise. Decimal characters include digit characters,\n and all characters that that can be used to form decimal-radix\n numbers, e.g. U+0660, ARABIC-INDIC DIGIT ZERO.\n\n\nString Formatting Operations\n============================\n\nString and Unicode objects have one unique built-in operation: the\n``%`` operator (modulo). This is also known as the string\n*formatting* or *interpolation* operator. Given ``format % values``\n(where *format* is a string or Unicode object), ``%`` conversion\nspecifications in *format* are replaced with zero or more elements of\n*values*. The effect is similar to the using ``sprintf()`` in the C\nlanguage. If *format* is a Unicode object, or if any of the objects\nbeing converted using the ``%s`` conversion are Unicode objects, the\nresult will also be a Unicode object.\n\nIf *format* requires a single argument, *values* may be a single non-\ntuple object. 
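A short sketch of translate(), zfill() and the % operator introduced above (Python 2.7):

    import string
    table = string.maketrans('abc', 'xyz')
    print 'abcabc'.translate(table)              # -> xyzxyz
    print 'read this'.translate(None, 'aeiou')   # -> rd ths
    print '-42'.zfill(6)                         # -> -00042
    print '%s costs %.2f' % ('spam', 1.5)        # -> spam costs 1.50
    print u'%d quote types' % 2                  # -> 2 quote types  (a unicode format gives a unicode result)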
[4] Otherwise, *values* must be a tuple with exactly\nthe number of items specified by the format string, or a single\nmapping object (for example, a dictionary).\n\nA conversion specifier contains two or more characters and has the\nfollowing components, which must occur in this order:\n\n1. The ``\'%\'`` character, which marks the start of the specifier.\n\n2. Mapping key (optional), consisting of a parenthesised sequence of\n characters (for example, ``(somename)``).\n\n3. Conversion flags (optional), which affect the result of some\n conversion types.\n\n4. Minimum field width (optional). If specified as an ``\'*\'``\n (asterisk), the actual width is read from the next element of the\n tuple in *values*, and the object to convert comes after the\n minimum field width and optional precision.\n\n5. Precision (optional), given as a ``\'.\'`` (dot) followed by the\n precision. If specified as ``\'*\'`` (an asterisk), the actual width\n is read from the next element of the tuple in *values*, and the\n value to convert comes after the precision.\n\n6. Length modifier (optional).\n\n7. Conversion type.\n\nWhen the right argument is a dictionary (or other mapping type), then\nthe formats in the string *must* include a parenthesised mapping key\ninto that dictionary inserted immediately after the ``\'%\'`` character.\nThe mapping key selects the value to be formatted from the mapping.\nFor example:\n\n>>> print \'%(language)s has %(#)03d quote types.\' % \\\n... {\'language\': "Python", "#": 2}\nPython has 002 quote types.\n\nIn this case no ``*`` specifiers may occur in a format (since they\nrequire a sequential parameter list).\n\nThe conversion flag characters are:\n\n+-----------+-----------------------------------------------------------------------+\n| Flag | Meaning |\n+===========+=======================================================================+\n| ``\'#\'`` | The value conversion will use the "alternate form" (where defined |\n| | below). |\n+-----------+-----------------------------------------------------------------------+\n| ``\'0\'`` | The conversion will be zero padded for numeric values. |\n+-----------+-----------------------------------------------------------------------+\n| ``\'-\'`` | The converted value is left adjusted (overrides the ``\'0\'`` |\n| | conversion if both are given). |\n+-----------+-----------------------------------------------------------------------+\n| ``\' \'`` | (a space) A blank should be left before a positive number (or empty |\n| | string) produced by a signed conversion. |\n+-----------+-----------------------------------------------------------------------+\n| ``\'+\'`` | A sign character (``\'+\'`` or ``\'-\'``) will precede the conversion |\n| | (overrides a "space" flag). |\n+-----------+-----------------------------------------------------------------------+\n\nA length modifier (``h``, ``l``, or ``L``) may be present, but is\nignored as it is not necessary for Python -- so e.g. ``%ld`` is\nidentical to ``%d``.\n\nThe conversion types are:\n\n+--------------+-------------------------------------------------------+---------+\n| Conversion | Meaning | Notes |\n+==============+=======================================================+=========+\n| ``\'d\'`` | Signed integer decimal. | |\n+--------------+-------------------------------------------------------+---------+\n| ``\'i\'`` | Signed integer decimal. | |\n+--------------+-------------------------------------------------------+---------+\n| ``\'o\'`` | Signed octal value. 
| (1) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'u\'`` | Obsolete type -- it is identical to ``\'d\'``. | (7) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'x\'`` | Signed hexadecimal (lowercase). | (2) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'X\'`` | Signed hexadecimal (uppercase). | (2) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'e\'`` | Floating point exponential format (lowercase). | (3) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'E\'`` | Floating point exponential format (uppercase). | (3) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'f\'`` | Floating point decimal format. | (3) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'F\'`` | Floating point decimal format. | (3) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'g\'`` | Floating point format. Uses lowercase exponential | (4) |\n| | format if exponent is less than -4 or not less than | |\n| | precision, decimal format otherwise. | |\n+--------------+-------------------------------------------------------+---------+\n| ``\'G\'`` | Floating point format. Uses uppercase exponential | (4) |\n| | format if exponent is less than -4 or not less than | |\n| | precision, decimal format otherwise. | |\n+--------------+-------------------------------------------------------+---------+\n| ``\'c\'`` | Single character (accepts integer or single character | |\n| | string). | |\n+--------------+-------------------------------------------------------+---------+\n| ``\'r\'`` | String (converts any Python object using ``repr()``). | (5) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'s\'`` | String (converts any Python object using ``str()``). | (6) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'%\'`` | No argument is converted, results in a ``\'%\'`` | |\n| | character in the result. | |\n+--------------+-------------------------------------------------------+---------+\n\nNotes:\n\n1. The alternate form causes a leading zero (``\'0\'``) to be inserted\n between left-hand padding and the formatting of the number if the\n leading character of the result is not already a zero.\n\n2. The alternate form causes a leading ``\'0x\'`` or ``\'0X\'`` (depending\n on whether the ``\'x\'`` or ``\'X\'`` format was used) to be inserted\n between left-hand padding and the formatting of the number if the\n leading character of the result is not already a zero.\n\n3. The alternate form causes the result to always contain a decimal\n point, even if no digits follow it.\n\n The precision determines the number of digits after the decimal\n point and defaults to 6.\n\n4. The alternate form causes the result to always contain a decimal\n point, and trailing zeroes are not removed as they would otherwise\n be.\n\n The precision determines the number of significant digits before\n and after the decimal point and defaults to 6.\n\n5. The ``%r`` conversion was added in Python 2.0.\n\n The precision determines the maximal number of characters used.\n\n6. 
If the object or format provided is a ``unicode`` string, the\n resulting string will also be ``unicode``.\n\n The precision determines the maximal number of characters used.\n\n7. See **PEP 237**.\n\nSince Python strings have an explicit length, ``%s`` conversions do\nnot assume that ``\'\\0\'`` is the end of the string.\n\nChanged in version 2.7: ``%f`` conversions for numbers whose absolute\nvalue is over 1e50 are no longer replaced by ``%g`` conversions.\n\nAdditional string operations are defined in standard modules\n``string`` and ``re``.\n\n\nXRange Type\n===========\n\nThe ``xrange`` type is an immutable sequence which is commonly used\nfor looping. The advantage of the ``xrange`` type is that an\n``xrange`` object will always take the same amount of memory, no\nmatter the size of the range it represents. There are no consistent\nperformance advantages.\n\nXRange objects have very little behavior: they only support indexing,\niteration, and the ``len()`` function.\n\n\nMutable Sequence Types\n======================\n\nList objects support additional operations that allow in-place\nmodification of the object. Other mutable sequence types (when added\nto the language) should also support these operations. Strings and\ntuples are immutable sequence types: such objects cannot be modified\nonce created. The following operations are defined on mutable sequence\ntypes (where *x* is an arbitrary object):\n\n+--------------------------------+----------------------------------+-----------------------+\n| Operation | Result | Notes |\n+================================+==================================+=======================+\n| ``s[i] = x`` | item *i* of *s* is replaced by | |\n| | *x* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s[i:j] = t`` | slice of *s* from *i* to *j* is | |\n| | replaced by the contents of the | |\n| | iterable *t* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``del s[i:j]`` | same as ``s[i:j] = []`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s[i:j:k] = t`` | the elements of ``s[i:j:k]`` are | (1) |\n| | replaced by those of *t* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``del s[i:j:k]`` | removes the elements of | |\n| | ``s[i:j:k]`` from the list | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.append(x)`` | same as ``s[len(s):len(s)] = | (2) |\n| | [x]`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.extend(x)`` | same as ``s[len(s):len(s)] = x`` | (3) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.count(x)`` | return number of *i*\'s for which | |\n| | ``s[i] == x`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.index(x[, i[, j]])`` | return smallest *k* such that | (4) |\n| | ``s[k] == x`` and ``i <= k < j`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.insert(i, x)`` | same as ``s[i:i] = [x]`` | (5) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.pop([i])`` | same as ``x = s[i]; del s[i]; | (6) |\n| | return x`` | 
|\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.remove(x)`` | same as ``del s[s.index(x)]`` | (4) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.reverse()`` | reverses the items of *s* in | (7) |\n| | place | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.sort([cmp[, key[, | sort the items of *s* in place | (7)(8)(9)(10) |\n| reverse]]])`` | | |\n+--------------------------------+----------------------------------+-----------------------+\n\nNotes:\n\n1. *t* must have the same length as the slice it is replacing.\n\n2. The C implementation of Python has historically accepted multiple\n parameters and implicitly joined them into a tuple; this no longer\n works in Python 2.0. Use of this misfeature has been deprecated\n since Python 1.4.\n\n3. *x* can be any iterable object.\n\n4. Raises ``ValueError`` when *x* is not found in *s*. When a negative\n index is passed as the second or third parameter to the ``index()``\n method, the list length is added, as for slice indices. If it is\n still negative, it is truncated to zero, as for slice indices.\n\n Changed in version 2.3: Previously, ``index()`` didn\'t have\n arguments for specifying start and stop positions.\n\n5. When a negative index is passed as the first parameter to the\n ``insert()`` method, the list length is added, as for slice\n indices. If it is still negative, it is truncated to zero, as for\n slice indices.\n\n Changed in version 2.3: Previously, all negative indices were\n truncated to zero.\n\n6. The ``pop()`` method is only supported by the list and array types.\n The optional argument *i* defaults to ``-1``, so that by default\n the last item is removed and returned.\n\n7. The ``sort()`` and ``reverse()`` methods modify the list in place\n for economy of space when sorting or reversing a large list. To\n remind you that they operate by side effect, they don\'t return the\n sorted or reversed list.\n\n8. The ``sort()`` method takes optional arguments for controlling the\n comparisons.\n\n *cmp* specifies a custom comparison function of two arguments (list\n items) which should return a negative, zero or positive number\n depending on whether the first argument is considered smaller than,\n equal to, or larger than the second argument: ``cmp=lambda x,y:\n cmp(x.lower(), y.lower())``. The default value is ``None``.\n\n *key* specifies a function of one argument that is used to extract\n a comparison key from each list element: ``key=str.lower``. The\n default value is ``None``.\n\n *reverse* is a boolean value. If set to ``True``, then the list\n elements are sorted as if each comparison were reversed.\n\n In general, the *key* and *reverse* conversion processes are much\n faster than specifying an equivalent *cmp* function. This is\n because *cmp* is called multiple times for each list element while\n *key* and *reverse* touch each element only once. Use\n ``functools.cmp_to_key()`` to convert an old-style *cmp* function\n to a *key* function.\n\n Changed in version 2.3: Support for ``None`` as an equivalent to\n omitting *cmp* was added.\n\n Changed in version 2.4: Support for *key* and *reverse* was added.\n\n9. Starting with Python 2.3, the ``sort()`` method is guaranteed to be\n stable. 
A sort is stable if it guarantees not to change the\n relative order of elements that compare equal --- this is helpful\n for sorting in multiple passes (for example, sort by department,\n then by salary grade).\n\n10. **CPython implementation detail:** While a list is being sorted,\n the effect of attempting to mutate, or even inspect, the list is\n undefined. The C implementation of Python 2.3 and newer makes the\n list appear empty for the duration, and raises ``ValueError`` if\n it can detect that the list has been mutated during a sort.\n', - 'typesseq-mutable': u"\nMutable Sequence Types\n**********************\n\nList objects support additional operations that allow in-place\nmodification of the object. Other mutable sequence types (when added\nto the language) should also support these operations. Strings and\ntuples are immutable sequence types: such objects cannot be modified\nonce created. The following operations are defined on mutable sequence\ntypes (where *x* is an arbitrary object):\n\n+--------------------------------+----------------------------------+-----------------------+\n| Operation | Result | Notes |\n+================================+==================================+=======================+\n| ``s[i] = x`` | item *i* of *s* is replaced by | |\n| | *x* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s[i:j] = t`` | slice of *s* from *i* to *j* is | |\n| | replaced by the contents of the | |\n| | iterable *t* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``del s[i:j]`` | same as ``s[i:j] = []`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s[i:j:k] = t`` | the elements of ``s[i:j:k]`` are | (1) |\n| | replaced by those of *t* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``del s[i:j:k]`` | removes the elements of | |\n| | ``s[i:j:k]`` from the list | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.append(x)`` | same as ``s[len(s):len(s)] = | (2) |\n| | [x]`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.extend(x)`` | same as ``s[len(s):len(s)] = x`` | (3) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.count(x)`` | return number of *i*'s for which | |\n| | ``s[i] == x`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.index(x[, i[, j]])`` | return smallest *k* such that | (4) |\n| | ``s[k] == x`` and ``i <= k < j`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.insert(i, x)`` | same as ``s[i:i] = [x]`` | (5) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.pop([i])`` | same as ``x = s[i]; del s[i]; | (6) |\n| | return x`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.remove(x)`` | same as ``del s[s.index(x)]`` | (4) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.reverse()`` | reverses the items of *s* in | (7) |\n| | place | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.sort([cmp[, key[, | sort the items of *s* in place | (7)(8)(9)(10) 
|\n| reverse]]])`` | | |\n+--------------------------------+----------------------------------+-----------------------+\n\nNotes:\n\n1. *t* must have the same length as the slice it is replacing.\n\n2. The C implementation of Python has historically accepted multiple\n parameters and implicitly joined them into a tuple; this no longer\n works in Python 2.0. Use of this misfeature has been deprecated\n since Python 1.4.\n\n3. *x* can be any iterable object.\n\n4. Raises ``ValueError`` when *x* is not found in *s*. When a negative\n index is passed as the second or third parameter to the ``index()``\n method, the list length is added, as for slice indices. If it is\n still negative, it is truncated to zero, as for slice indices.\n\n Changed in version 2.3: Previously, ``index()`` didn't have\n arguments for specifying start and stop positions.\n\n5. When a negative index is passed as the first parameter to the\n ``insert()`` method, the list length is added, as for slice\n indices. If it is still negative, it is truncated to zero, as for\n slice indices.\n\n Changed in version 2.3: Previously, all negative indices were\n truncated to zero.\n\n6. The ``pop()`` method is only supported by the list and array types.\n The optional argument *i* defaults to ``-1``, so that by default\n the last item is removed and returned.\n\n7. The ``sort()`` and ``reverse()`` methods modify the list in place\n for economy of space when sorting or reversing a large list. To\n remind you that they operate by side effect, they don't return the\n sorted or reversed list.\n\n8. The ``sort()`` method takes optional arguments for controlling the\n comparisons.\n\n *cmp* specifies a custom comparison function of two arguments (list\n items) which should return a negative, zero or positive number\n depending on whether the first argument is considered smaller than,\n equal to, or larger than the second argument: ``cmp=lambda x,y:\n cmp(x.lower(), y.lower())``. The default value is ``None``.\n\n *key* specifies a function of one argument that is used to extract\n a comparison key from each list element: ``key=str.lower``. The\n default value is ``None``.\n\n *reverse* is a boolean value. If set to ``True``, then the list\n elements are sorted as if each comparison were reversed.\n\n In general, the *key* and *reverse* conversion processes are much\n faster than specifying an equivalent *cmp* function. This is\n because *cmp* is called multiple times for each list element while\n *key* and *reverse* touch each element only once. Use\n ``functools.cmp_to_key()`` to convert an old-style *cmp* function\n to a *key* function.\n\n Changed in version 2.3: Support for ``None`` as an equivalent to\n omitting *cmp* was added.\n\n Changed in version 2.4: Support for *key* and *reverse* was added.\n\n9. Starting with Python 2.3, the ``sort()`` method is guaranteed to be\n stable. A sort is stable if it guarantees not to change the\n relative order of elements that compare equal --- this is helpful\n for sorting in multiple passes (for example, sort by department,\n then by salary grade).\n\n10. **CPython implementation detail:** While a list is being sorted,\n the effect of attempting to mutate, or even inspect, the list is\n undefined. 
The C implementation of Python 2.3 and newer makes the\n list appear empty for the duration, and raises ``ValueError`` if\n it can detect that the list has been mutated during a sort.\n", + 'typesseq': u'\nSequence Types --- ``str``, ``unicode``, ``list``, ``tuple``, ``bytearray``, ``buffer``, ``xrange``\n***************************************************************************************************\n\nThere are seven sequence types: strings, Unicode strings, lists,\ntuples, bytearrays, buffers, and xrange objects.\n\nFor other containers see the built in ``dict`` and ``set`` classes,\nand the ``collections`` module.\n\nString literals are written in single or double quotes: ``\'xyzzy\'``,\n``"frobozz"``. See *String literals* for more about string literals.\nUnicode strings are much like strings, but are specified in the syntax\nusing a preceding ``\'u\'`` character: ``u\'abc\'``, ``u"def"``. In\naddition to the functionality described here, there are also string-\nspecific methods described in the *String Methods* section. Lists are\nconstructed with square brackets, separating items with commas: ``[a,\nb, c]``. Tuples are constructed by the comma operator (not within\nsquare brackets), with or without enclosing parentheses, but an empty\ntuple must have the enclosing parentheses, such as ``a, b, c`` or\n``()``. A single item tuple must have a trailing comma, such as\n``(d,)``.\n\nBytearray objects are created with the built-in function\n``bytearray()``.\n\nBuffer objects are not directly supported by Python syntax, but can be\ncreated by calling the built-in function ``buffer()``. They don\'t\nsupport concatenation or repetition.\n\nObjects of type xrange are similar to buffers in that there is no\nspecific syntax to create them, but they are created using the\n``xrange()`` function. They don\'t support slicing, concatenation or\nrepetition, and using ``in``, ``not in``, ``min()`` or ``max()`` on\nthem is inefficient.\n\nMost sequence types support the following operations. The ``in`` and\n``not in`` operations have the same priorities as the comparison\noperations. The ``+`` and ``*`` operations have the same priority as\nthe corresponding numeric operations. [3] Additional methods are\nprovided for *Mutable Sequence Types*.\n\nThis table lists the sequence operations sorted in ascending priority\n(operations in the same box have the same priority). 
In the table,\n*s* and *t* are sequences of the same type; *n*, *i* and *j* are\nintegers:\n\n+--------------------+----------------------------------+------------+\n| Operation | Result | Notes |\n+====================+==================================+============+\n| ``x in s`` | ``True`` if an item of *s* is | (1) |\n| | equal to *x*, else ``False`` | |\n+--------------------+----------------------------------+------------+\n| ``x not in s`` | ``False`` if an item of *s* is | (1) |\n| | equal to *x*, else ``True`` | |\n+--------------------+----------------------------------+------------+\n| ``s + t`` | the concatenation of *s* and *t* | (6) |\n+--------------------+----------------------------------+------------+\n| ``s * n, n * s`` | *n* shallow copies of *s* | (2) |\n| | concatenated | |\n+--------------------+----------------------------------+------------+\n| ``s[i]`` | *i*\'th item of *s*, origin 0 | (3) |\n+--------------------+----------------------------------+------------+\n| ``s[i:j]`` | slice of *s* from *i* to *j* | (3)(4) |\n+--------------------+----------------------------------+------------+\n| ``s[i:j:k]`` | slice of *s* from *i* to *j* | (3)(5) |\n| | with step *k* | |\n+--------------------+----------------------------------+------------+\n| ``len(s)`` | length of *s* | |\n+--------------------+----------------------------------+------------+\n| ``min(s)`` | smallest item of *s* | |\n+--------------------+----------------------------------+------------+\n| ``max(s)`` | largest item of *s* | |\n+--------------------+----------------------------------+------------+\n| ``s.index(i)`` | index of the first occurence of | |\n| | *i* in *s* | |\n+--------------------+----------------------------------+------------+\n| ``s.count(i)`` | total number of occurences of | |\n| | *i* in *s* | |\n+--------------------+----------------------------------+------------+\n\nSequence types also support comparisons. In particular, tuples and\nlists are compared lexicographically by comparing corresponding\nelements. This means that to compare equal, every element must compare\nequal and the two sequences must be of the same type and have the same\nlength. (For full details see *Comparisons* in the language\nreference.)\n\nNotes:\n\n1. When *s* is a string or Unicode string object the ``in`` and ``not\n in`` operations act like a substring test. In Python versions\n before 2.3, *x* had to be a string of length 1. In Python 2.3 and\n beyond, *x* may be a string of any length.\n\n2. Values of *n* less than ``0`` are treated as ``0`` (which yields an\n empty sequence of the same type as *s*). Note also that the copies\n are shallow; nested structures are not copied. This often haunts\n new Python programmers; consider:\n\n >>> lists = [[]] * 3\n >>> lists\n [[], [], []]\n >>> lists[0].append(3)\n >>> lists\n [[3], [3], [3]]\n\n What has happened is that ``[[]]`` is a one-element list containing\n an empty list, so all three elements of ``[[]] * 3`` are (pointers\n to) this single empty list. Modifying any of the elements of\n ``lists`` modifies this single list. You can create a list of\n different lists this way:\n\n >>> lists = [[] for i in range(3)]\n >>> lists[0].append(3)\n >>> lists[1].append(5)\n >>> lists[2].append(7)\n >>> lists\n [[3], [5], [7]]\n\n3. If *i* or *j* is negative, the index is relative to the end of the\n string: ``len(s) + i`` or ``len(s) + j`` is substituted. But note\n that ``-0`` is still ``0``.\n\n4. 
The slice of *s* from *i* to *j* is defined as the sequence of\n items with index *k* such that ``i <= k < j``. If *i* or *j* is\n greater than ``len(s)``, use ``len(s)``. If *i* is omitted or\n ``None``, use ``0``. If *j* is omitted or ``None``, use\n ``len(s)``. If *i* is greater than or equal to *j*, the slice is\n empty.\n\n5. The slice of *s* from *i* to *j* with step *k* is defined as the\n sequence of items with index ``x = i + n*k`` such that ``0 <= n <\n (j-i)/k``. In other words, the indices are ``i``, ``i+k``,\n ``i+2*k``, ``i+3*k`` and so on, stopping when *j* is reached (but\n never including *j*). If *i* or *j* is greater than ``len(s)``,\n use ``len(s)``. If *i* or *j* are omitted or ``None``, they become\n "end" values (which end depends on the sign of *k*). Note, *k*\n cannot be zero. If *k* is ``None``, it is treated like ``1``.\n\n6. **CPython implementation detail:** If *s* and *t* are both strings,\n some Python implementations such as CPython can usually perform an\n in-place optimization for assignments of the form ``s = s + t`` or\n ``s += t``. When applicable, this optimization makes quadratic\n run-time much less likely. This optimization is both version and\n implementation dependent. For performance sensitive code, it is\n preferable to use the ``str.join()`` method which assures\n consistent linear concatenation performance across versions and\n implementations.\n\n Changed in version 2.4: Formerly, string concatenation never\n occurred in-place.\n\n\nString Methods\n==============\n\nBelow are listed the string methods which both 8-bit strings and\nUnicode objects support. Some of them are also available on\n``bytearray`` objects.\n\nIn addition, Python\'s strings support the sequence type methods\ndescribed in the *Sequence Types --- str, unicode, list, tuple,\nbytearray, buffer, xrange* section. To output formatted strings use\ntemplate strings or the ``%`` operator described in the *String\nFormatting Operations* section. Also, see the ``re`` module for string\nfunctions based on regular expressions.\n\nstr.capitalize()\n\n Return a copy of the string with its first character capitalized\n and the rest lowercased.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.center(width[, fillchar])\n\n Return centered in a string of length *width*. Padding is done\n using the specified *fillchar* (default is a space).\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.count(sub[, start[, end]])\n\n Return the number of non-overlapping occurrences of substring *sub*\n in the range [*start*, *end*]. Optional arguments *start* and\n *end* are interpreted as in slice notation.\n\nstr.decode([encoding[, errors]])\n\n Decodes the string using the codec registered for *encoding*.\n *encoding* defaults to the default string encoding. *errors* may\n be given to set a different error handling scheme. The default is\n ``\'strict\'``, meaning that encoding errors raise ``UnicodeError``.\n Other possible values are ``\'ignore\'``, ``\'replace\'`` and any other\n name registered via ``codecs.register_error()``, see section *Codec\n Base Classes*.\n\n New in version 2.2.\n\n Changed in version 2.3: Support for other error handling schemes\n added.\n\n Changed in version 2.7: Support for keyword arguments added.\n\nstr.encode([encoding[, errors]])\n\n Return an encoded version of the string. Default encoding is the\n current default string encoding. *errors* may be given to set a\n different error handling scheme. 
The default for *errors* is\n ``\'strict\'``, meaning that encoding errors raise a\n ``UnicodeError``. Other possible values are ``\'ignore\'``,\n ``\'replace\'``, ``\'xmlcharrefreplace\'``, ``\'backslashreplace\'`` and\n any other name registered via ``codecs.register_error()``, see\n section *Codec Base Classes*. For a list of possible encodings, see\n section *Standard Encodings*.\n\n New in version 2.0.\n\n Changed in version 2.3: Support for ``\'xmlcharrefreplace\'`` and\n ``\'backslashreplace\'`` and other error handling schemes added.\n\n Changed in version 2.7: Support for keyword arguments added.\n\nstr.endswith(suffix[, start[, end]])\n\n Return ``True`` if the string ends with the specified *suffix*,\n otherwise return ``False``. *suffix* can also be a tuple of\n suffixes to look for. With optional *start*, test beginning at\n that position. With optional *end*, stop comparing at that\n position.\n\n Changed in version 2.5: Accept tuples as *suffix*.\n\nstr.expandtabs([tabsize])\n\n Return a copy of the string where all tab characters are replaced\n by one or more spaces, depending on the current column and the\n given tab size. The column number is reset to zero after each\n newline occurring in the string. If *tabsize* is not given, a tab\n size of ``8`` characters is assumed. This doesn\'t understand other\n non-printing characters or escape sequences.\n\nstr.find(sub[, start[, end]])\n\n Return the lowest index in the string where substring *sub* is\n found, such that *sub* is contained in the slice ``s[start:end]``.\n Optional arguments *start* and *end* are interpreted as in slice\n notation. Return ``-1`` if *sub* is not found.\n\n Note: The ``find()`` method should be used only if you need to know the\n position of *sub*. To check if *sub* is a substring or not, use\n the ``in`` operator:\n\n >>> \'Py\' in \'Python\'\n True\n\nstr.format(*args, **kwargs)\n\n Perform a string formatting operation. The string on which this\n method is called can contain literal text or replacement fields\n delimited by braces ``{}``. Each replacement field contains either\n the numeric index of a positional argument, or the name of a\n keyword argument. 
Returns a copy of the string where each\n replacement field is replaced with the string value of the\n corresponding argument.\n\n >>> "The sum of 1 + 2 is {0}".format(1+2)\n \'The sum of 1 + 2 is 3\'\n\n See *Format String Syntax* for a description of the various\n formatting options that can be specified in format strings.\n\n This method of string formatting is the new standard in Python 3.0,\n and should be preferred to the ``%`` formatting described in\n *String Formatting Operations* in new code.\n\n New in version 2.6.\n\nstr.index(sub[, start[, end]])\n\n Like ``find()``, but raise ``ValueError`` when the substring is not\n found.\n\nstr.isalnum()\n\n Return true if all characters in the string are alphanumeric and\n there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isalpha()\n\n Return true if all characters in the string are alphabetic and\n there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isdigit()\n\n Return true if all characters in the string are digits and there is\n at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.islower()\n\n Return true if all cased characters in the string are lowercase and\n there is at least one cased character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isspace()\n\n Return true if there are only whitespace characters in the string\n and there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.istitle()\n\n Return true if the string is a titlecased string and there is at\n least one character, for example uppercase characters may only\n follow uncased characters and lowercase characters only cased ones.\n Return false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isupper()\n\n Return true if all cased characters in the string are uppercase and\n there is at least one cased character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.join(iterable)\n\n Return a string which is the concatenation of the strings in the\n *iterable* *iterable*. The separator between elements is the\n string providing this method.\n\nstr.ljust(width[, fillchar])\n\n Return the string left justified in a string of length *width*.\n Padding is done using the specified *fillchar* (default is a\n space). The original string is returned if *width* is less than\n ``len(s)``.\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.lower()\n\n Return a copy of the string converted to lowercase.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.lstrip([chars])\n\n Return a copy of the string with leading characters removed. The\n *chars* argument is a string specifying the set of characters to be\n removed. If omitted or ``None``, the *chars* argument defaults to\n removing whitespace. The *chars* argument is not a prefix; rather,\n all combinations of its values are stripped:\n\n >>> \' spacious \'.lstrip()\n \'spacious \'\n >>> \'www.example.com\'.lstrip(\'cmowz.\')\n \'example.com\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.partition(sep)\n\n Split the string at the first occurrence of *sep*, and return a\n 3-tuple containing the part before the separator, the separator\n itself, and the part after the separator. 
If the separator is not\n found, return a 3-tuple containing the string itself, followed by\n two empty strings.\n\n New in version 2.5.\n\nstr.replace(old, new[, count])\n\n Return a copy of the string with all occurrences of substring *old*\n replaced by *new*. If the optional argument *count* is given, only\n the first *count* occurrences are replaced.\n\nstr.rfind(sub[, start[, end]])\n\n Return the highest index in the string where substring *sub* is\n found, such that *sub* is contained within ``s[start:end]``.\n Optional arguments *start* and *end* are interpreted as in slice\n notation. Return ``-1`` on failure.\n\nstr.rindex(sub[, start[, end]])\n\n Like ``rfind()`` but raises ``ValueError`` when the substring *sub*\n is not found.\n\nstr.rjust(width[, fillchar])\n\n Return the string right justified in a string of length *width*.\n Padding is done using the specified *fillchar* (default is a\n space). The original string is returned if *width* is less than\n ``len(s)``.\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.rpartition(sep)\n\n Split the string at the last occurrence of *sep*, and return a\n 3-tuple containing the part before the separator, the separator\n itself, and the part after the separator. If the separator is not\n found, return a 3-tuple containing two empty strings, followed by\n the string itself.\n\n New in version 2.5.\n\nstr.rsplit([sep[, maxsplit]])\n\n Return a list of the words in the string, using *sep* as the\n delimiter string. If *maxsplit* is given, at most *maxsplit* splits\n are done, the *rightmost* ones. If *sep* is not specified or\n ``None``, any whitespace string is a separator. Except for\n splitting from the right, ``rsplit()`` behaves like ``split()``\n which is described in detail below.\n\n New in version 2.4.\n\nstr.rstrip([chars])\n\n Return a copy of the string with trailing characters removed. The\n *chars* argument is a string specifying the set of characters to be\n removed. If omitted or ``None``, the *chars* argument defaults to\n removing whitespace. The *chars* argument is not a suffix; rather,\n all combinations of its values are stripped:\n\n >>> \' spacious \'.rstrip()\n \' spacious\'\n >>> \'mississippi\'.rstrip(\'ipz\')\n \'mississ\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.split([sep[, maxsplit]])\n\n Return a list of the words in the string, using *sep* as the\n delimiter string. If *maxsplit* is given, at most *maxsplit*\n splits are done (thus, the list will have at most ``maxsplit+1``\n elements). If *maxsplit* is not specified, then there is no limit\n on the number of splits (all possible splits are made).\n\n If *sep* is given, consecutive delimiters are not grouped together\n and are deemed to delimit empty strings (for example,\n ``\'1,,2\'.split(\',\')`` returns ``[\'1\', \'\', \'2\']``). The *sep*\n argument may consist of multiple characters (for example,\n ``\'1<>2<>3\'.split(\'<>\')`` returns ``[\'1\', \'2\', \'3\']``). Splitting\n an empty string with a specified separator returns ``[\'\']``.\n\n If *sep* is not specified or is ``None``, a different splitting\n algorithm is applied: runs of consecutive whitespace are regarded\n as a single separator, and the result will contain no empty strings\n at the start or end if the string has leading or trailing\n whitespace. 
Consequently, splitting an empty string or a string\n consisting of just whitespace with a ``None`` separator returns\n ``[]``.\n\n For example, ``\' 1 2 3 \'.split()`` returns ``[\'1\', \'2\', \'3\']``,\n and ``\' 1 2 3 \'.split(None, 1)`` returns ``[\'1\', \'2 3 \']``.\n\nstr.splitlines([keepends])\n\n Return a list of the lines in the string, breaking at line\n boundaries. Line breaks are not included in the resulting list\n unless *keepends* is given and true.\n\nstr.startswith(prefix[, start[, end]])\n\n Return ``True`` if string starts with the *prefix*, otherwise\n return ``False``. *prefix* can also be a tuple of prefixes to look\n for. With optional *start*, test string beginning at that\n position. With optional *end*, stop comparing string at that\n position.\n\n Changed in version 2.5: Accept tuples as *prefix*.\n\nstr.strip([chars])\n\n Return a copy of the string with the leading and trailing\n characters removed. The *chars* argument is a string specifying the\n set of characters to be removed. If omitted or ``None``, the\n *chars* argument defaults to removing whitespace. The *chars*\n argument is not a prefix or suffix; rather, all combinations of its\n values are stripped:\n\n >>> \' spacious \'.strip()\n \'spacious\'\n >>> \'www.example.com\'.strip(\'cmowz.\')\n \'example\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.swapcase()\n\n Return a copy of the string with uppercase characters converted to\n lowercase and vice versa.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.title()\n\n Return a titlecased version of the string where words start with an\n uppercase character and the remaining characters are lowercase.\n\n The algorithm uses a simple language-independent definition of a\n word as groups of consecutive letters. The definition works in\n many contexts but it means that apostrophes in contractions and\n possessives form word boundaries, which may not be the desired\n result:\n\n >>> "they\'re bill\'s friends from the UK".title()\n "They\'Re Bill\'S Friends From The Uk"\n\n A workaround for apostrophes can be constructed using regular\n expressions:\n\n >>> import re\n >>> def titlecase(s):\n return re.sub(r"[A-Za-z]+(\'[A-Za-z]+)?",\n lambda mo: mo.group(0)[0].upper() +\n mo.group(0)[1:].lower(),\n s)\n\n >>> titlecase("they\'re bill\'s friends.")\n "They\'re Bill\'s Friends."\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.translate(table[, deletechars])\n\n Return a copy of the string where all characters occurring in the\n optional argument *deletechars* are removed, and the remaining\n characters have been mapped through the given translation table,\n which must be a string of length 256.\n\n You can use the ``maketrans()`` helper function in the ``string``\n module to create a translation table. For string objects, set the\n *table* argument to ``None`` for translations that only delete\n characters:\n\n >>> \'read this short text\'.translate(None, \'aeiou\')\n \'rd ths shrt txt\'\n\n New in version 2.6: Support for a ``None`` *table* argument.\n\n For Unicode objects, the ``translate()`` method does not accept the\n optional *deletechars* argument. Instead, it returns a copy of the\n *s* where all characters have been mapped through the given\n translation table which must be a mapping of Unicode ordinals to\n Unicode ordinals, Unicode strings or ``None``. Unmapped characters\n are left untouched. 
Characters mapped to ``None`` are deleted.\n Note, a more flexible approach is to create a custom character\n mapping codec using the ``codecs`` module (see ``encodings.cp1251``\n for an example).\n\nstr.upper()\n\n Return a copy of the string converted to uppercase.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.zfill(width)\n\n Return the numeric string left filled with zeros in a string of\n length *width*. A sign prefix is handled correctly. The original\n string is returned if *width* is less than ``len(s)``.\n\n New in version 2.2.2.\n\nThe following methods are present only on unicode objects:\n\nunicode.isnumeric()\n\n Return ``True`` if there are only numeric characters in S,\n ``False`` otherwise. Numeric characters include digit characters,\n and all characters that have the Unicode numeric value property,\n e.g. U+2155, VULGAR FRACTION ONE FIFTH.\n\nunicode.isdecimal()\n\n Return ``True`` if there are only decimal characters in S,\n ``False`` otherwise. Decimal characters include digit characters,\n and all characters that that can be used to form decimal-radix\n numbers, e.g. U+0660, ARABIC-INDIC DIGIT ZERO.\n\n\nString Formatting Operations\n============================\n\nString and Unicode objects have one unique built-in operation: the\n``%`` operator (modulo). This is also known as the string\n*formatting* or *interpolation* operator. Given ``format % values``\n(where *format* is a string or Unicode object), ``%`` conversion\nspecifications in *format* are replaced with zero or more elements of\n*values*. The effect is similar to the using ``sprintf()`` in the C\nlanguage. If *format* is a Unicode object, or if any of the objects\nbeing converted using the ``%s`` conversion are Unicode objects, the\nresult will also be a Unicode object.\n\nIf *format* requires a single argument, *values* may be a single non-\ntuple object. [4] Otherwise, *values* must be a tuple with exactly\nthe number of items specified by the format string, or a single\nmapping object (for example, a dictionary).\n\nA conversion specifier contains two or more characters and has the\nfollowing components, which must occur in this order:\n\n1. The ``\'%\'`` character, which marks the start of the specifier.\n\n2. Mapping key (optional), consisting of a parenthesised sequence of\n characters (for example, ``(somename)``).\n\n3. Conversion flags (optional), which affect the result of some\n conversion types.\n\n4. Minimum field width (optional). If specified as an ``\'*\'``\n (asterisk), the actual width is read from the next element of the\n tuple in *values*, and the object to convert comes after the\n minimum field width and optional precision.\n\n5. Precision (optional), given as a ``\'.\'`` (dot) followed by the\n precision. If specified as ``\'*\'`` (an asterisk), the actual width\n is read from the next element of the tuple in *values*, and the\n value to convert comes after the precision.\n\n6. Length modifier (optional).\n\n7. Conversion type.\n\nWhen the right argument is a dictionary (or other mapping type), then\nthe formats in the string *must* include a parenthesised mapping key\ninto that dictionary inserted immediately after the ``\'%\'`` character.\nThe mapping key selects the value to be formatted from the mapping.\nFor example:\n\n>>> print \'%(language)s has %(number)03d quote types.\' % \\\n... 
{"language": "Python", "number": 2}\nPython has 002 quote types.\n\nIn this case no ``*`` specifiers may occur in a format (since they\nrequire a sequential parameter list).\n\nThe conversion flag characters are:\n\n+-----------+-----------------------------------------------------------------------+\n| Flag | Meaning |\n+===========+=======================================================================+\n| ``\'#\'`` | The value conversion will use the "alternate form" (where defined |\n| | below). |\n+-----------+-----------------------------------------------------------------------+\n| ``\'0\'`` | The conversion will be zero padded for numeric values. |\n+-----------+-----------------------------------------------------------------------+\n| ``\'-\'`` | The converted value is left adjusted (overrides the ``\'0\'`` |\n| | conversion if both are given). |\n+-----------+-----------------------------------------------------------------------+\n| ``\' \'`` | (a space) A blank should be left before a positive number (or empty |\n| | string) produced by a signed conversion. |\n+-----------+-----------------------------------------------------------------------+\n| ``\'+\'`` | A sign character (``\'+\'`` or ``\'-\'``) will precede the conversion |\n| | (overrides a "space" flag). |\n+-----------+-----------------------------------------------------------------------+\n\nA length modifier (``h``, ``l``, or ``L``) may be present, but is\nignored as it is not necessary for Python -- so e.g. ``%ld`` is\nidentical to ``%d``.\n\nThe conversion types are:\n\n+--------------+-------------------------------------------------------+---------+\n| Conversion | Meaning | Notes |\n+==============+=======================================================+=========+\n| ``\'d\'`` | Signed integer decimal. | |\n+--------------+-------------------------------------------------------+---------+\n| ``\'i\'`` | Signed integer decimal. | |\n+--------------+-------------------------------------------------------+---------+\n| ``\'o\'`` | Signed octal value. | (1) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'u\'`` | Obsolete type -- it is identical to ``\'d\'``. | (7) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'x\'`` | Signed hexadecimal (lowercase). | (2) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'X\'`` | Signed hexadecimal (uppercase). | (2) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'e\'`` | Floating point exponential format (lowercase). | (3) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'E\'`` | Floating point exponential format (uppercase). | (3) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'f\'`` | Floating point decimal format. | (3) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'F\'`` | Floating point decimal format. | (3) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'g\'`` | Floating point format. Uses lowercase exponential | (4) |\n| | format if exponent is less than -4 or not less than | |\n| | precision, decimal format otherwise. | |\n+--------------+-------------------------------------------------------+---------+\n| ``\'G\'`` | Floating point format. 
Uses uppercase exponential | (4) |\n| | format if exponent is less than -4 or not less than | |\n| | precision, decimal format otherwise. | |\n+--------------+-------------------------------------------------------+---------+\n| ``\'c\'`` | Single character (accepts integer or single character | |\n| | string). | |\n+--------------+-------------------------------------------------------+---------+\n| ``\'r\'`` | String (converts any Python object using ``repr()``). | (5) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'s\'`` | String (converts any Python object using ``str()``). | (6) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'%\'`` | No argument is converted, results in a ``\'%\'`` | |\n| | character in the result. | |\n+--------------+-------------------------------------------------------+---------+\n\nNotes:\n\n1. The alternate form causes a leading zero (``\'0\'``) to be inserted\n between left-hand padding and the formatting of the number if the\n leading character of the result is not already a zero.\n\n2. The alternate form causes a leading ``\'0x\'`` or ``\'0X\'`` (depending\n on whether the ``\'x\'`` or ``\'X\'`` format was used) to be inserted\n between left-hand padding and the formatting of the number if the\n leading character of the result is not already a zero.\n\n3. The alternate form causes the result to always contain a decimal\n point, even if no digits follow it.\n\n The precision determines the number of digits after the decimal\n point and defaults to 6.\n\n4. The alternate form causes the result to always contain a decimal\n point, and trailing zeroes are not removed as they would otherwise\n be.\n\n The precision determines the number of significant digits before\n and after the decimal point and defaults to 6.\n\n5. The ``%r`` conversion was added in Python 2.0.\n\n The precision determines the maximal number of characters used.\n\n6. If the object or format provided is a ``unicode`` string, the\n resulting string will also be ``unicode``.\n\n The precision determines the maximal number of characters used.\n\n7. See **PEP 237**.\n\nSince Python strings have an explicit length, ``%s`` conversions do\nnot assume that ``\'\\0\'`` is the end of the string.\n\nChanged in version 2.7: ``%f`` conversions for numbers whose absolute\nvalue is over 1e50 are no longer replaced by ``%g`` conversions.\n\nAdditional string operations are defined in standard modules\n``string`` and ``re``.\n\n\nXRange Type\n===========\n\nThe ``xrange`` type is an immutable sequence which is commonly used\nfor looping. The advantage of the ``xrange`` type is that an\n``xrange`` object will always take the same amount of memory, no\nmatter the size of the range it represents. There are no consistent\nperformance advantages.\n\nXRange objects have very little behavior: they only support indexing,\niteration, and the ``len()`` function.\n\n\nMutable Sequence Types\n======================\n\nList and ``bytearray`` objects support additional operations that\nallow in-place modification of the object. Other mutable sequence\ntypes (when added to the language) should also support these\noperations. Strings and tuples are immutable sequence types: such\nobjects cannot be modified once created. 
The following operations are\ndefined on mutable sequence types (where *x* is an arbitrary object):\n\n+--------------------------------+----------------------------------+-----------------------+\n| Operation | Result | Notes |\n+================================+==================================+=======================+\n| ``s[i] = x`` | item *i* of *s* is replaced by | |\n| | *x* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s[i:j] = t`` | slice of *s* from *i* to *j* is | |\n| | replaced by the contents of the | |\n| | iterable *t* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``del s[i:j]`` | same as ``s[i:j] = []`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s[i:j:k] = t`` | the elements of ``s[i:j:k]`` are | (1) |\n| | replaced by those of *t* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``del s[i:j:k]`` | removes the elements of | |\n| | ``s[i:j:k]`` from the list | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.append(x)`` | same as ``s[len(s):len(s)] = | (2) |\n| | [x]`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.extend(x)`` | same as ``s[len(s):len(s)] = x`` | (3) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.count(x)`` | return number of *i*\'s for which | |\n| | ``s[i] == x`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.index(x[, i[, j]])`` | return smallest *k* such that | (4) |\n| | ``s[k] == x`` and ``i <= k < j`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.insert(i, x)`` | same as ``s[i:i] = [x]`` | (5) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.pop([i])`` | same as ``x = s[i]; del s[i]; | (6) |\n| | return x`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.remove(x)`` | same as ``del s[s.index(x)]`` | (4) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.reverse()`` | reverses the items of *s* in | (7) |\n| | place | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.sort([cmp[, key[, | sort the items of *s* in place | (7)(8)(9)(10) |\n| reverse]]])`` | | |\n+--------------------------------+----------------------------------+-----------------------+\n\nNotes:\n\n1. *t* must have the same length as the slice it is replacing.\n\n2. The C implementation of Python has historically accepted multiple\n parameters and implicitly joined them into a tuple; this no longer\n works in Python 2.0. Use of this misfeature has been deprecated\n since Python 1.4.\n\n3. *x* can be any iterable object.\n\n4. Raises ``ValueError`` when *x* is not found in *s*. When a negative\n index is passed as the second or third parameter to the ``index()``\n method, the list length is added, as for slice indices. If it is\n still negative, it is truncated to zero, as for slice indices.\n\n Changed in version 2.3: Previously, ``index()`` didn\'t have\n arguments for specifying start and stop positions.\n\n5. 
When a negative index is passed as the first parameter to the\n ``insert()`` method, the list length is added, as for slice\n indices. If it is still negative, it is truncated to zero, as for\n slice indices.\n\n Changed in version 2.3: Previously, all negative indices were\n truncated to zero.\n\n6. The ``pop()`` method is only supported by the list and array types.\n The optional argument *i* defaults to ``-1``, so that by default\n the last item is removed and returned.\n\n7. The ``sort()`` and ``reverse()`` methods modify the list in place\n for economy of space when sorting or reversing a large list. To\n remind you that they operate by side effect, they don\'t return the\n sorted or reversed list.\n\n8. The ``sort()`` method takes optional arguments for controlling the\n comparisons.\n\n *cmp* specifies a custom comparison function of two arguments (list\n items) which should return a negative, zero or positive number\n depending on whether the first argument is considered smaller than,\n equal to, or larger than the second argument: ``cmp=lambda x,y:\n cmp(x.lower(), y.lower())``. The default value is ``None``.\n\n *key* specifies a function of one argument that is used to extract\n a comparison key from each list element: ``key=str.lower``. The\n default value is ``None``.\n\n *reverse* is a boolean value. If set to ``True``, then the list\n elements are sorted as if each comparison were reversed.\n\n In general, the *key* and *reverse* conversion processes are much\n faster than specifying an equivalent *cmp* function. This is\n because *cmp* is called multiple times for each list element while\n *key* and *reverse* touch each element only once. Use\n ``functools.cmp_to_key()`` to convert an old-style *cmp* function\n to a *key* function.\n\n Changed in version 2.3: Support for ``None`` as an equivalent to\n omitting *cmp* was added.\n\n Changed in version 2.4: Support for *key* and *reverse* was added.\n\n9. Starting with Python 2.3, the ``sort()`` method is guaranteed to be\n stable. A sort is stable if it guarantees not to change the\n relative order of elements that compare equal --- this is helpful\n for sorting in multiple passes (for example, sort by department,\n then by salary grade).\n\n10. **CPython implementation detail:** While a list is being sorted,\n the effect of attempting to mutate, or even inspect, the list is\n undefined. The C implementation of Python 2.3 and newer makes the\n list appear empty for the duration, and raises ``ValueError`` if\n it can detect that the list has been mutated during a sort.\n', + 'typesseq-mutable': u"\nMutable Sequence Types\n**********************\n\nList and ``bytearray`` objects support additional operations that\nallow in-place modification of the object. Other mutable sequence\ntypes (when added to the language) should also support these\noperations. Strings and tuples are immutable sequence types: such\nobjects cannot be modified once created. 
The following operations are\ndefined on mutable sequence types (where *x* is an arbitrary object):\n\n+--------------------------------+----------------------------------+-----------------------+\n| Operation | Result | Notes |\n+================================+==================================+=======================+\n| ``s[i] = x`` | item *i* of *s* is replaced by | |\n| | *x* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s[i:j] = t`` | slice of *s* from *i* to *j* is | |\n| | replaced by the contents of the | |\n| | iterable *t* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``del s[i:j]`` | same as ``s[i:j] = []`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s[i:j:k] = t`` | the elements of ``s[i:j:k]`` are | (1) |\n| | replaced by those of *t* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``del s[i:j:k]`` | removes the elements of | |\n| | ``s[i:j:k]`` from the list | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.append(x)`` | same as ``s[len(s):len(s)] = | (2) |\n| | [x]`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.extend(x)`` | same as ``s[len(s):len(s)] = x`` | (3) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.count(x)`` | return number of *i*'s for which | |\n| | ``s[i] == x`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.index(x[, i[, j]])`` | return smallest *k* such that | (4) |\n| | ``s[k] == x`` and ``i <= k < j`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.insert(i, x)`` | same as ``s[i:i] = [x]`` | (5) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.pop([i])`` | same as ``x = s[i]; del s[i]; | (6) |\n| | return x`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.remove(x)`` | same as ``del s[s.index(x)]`` | (4) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.reverse()`` | reverses the items of *s* in | (7) |\n| | place | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.sort([cmp[, key[, | sort the items of *s* in place | (7)(8)(9)(10) |\n| reverse]]])`` | | |\n+--------------------------------+----------------------------------+-----------------------+\n\nNotes:\n\n1. *t* must have the same length as the slice it is replacing.\n\n2. The C implementation of Python has historically accepted multiple\n parameters and implicitly joined them into a tuple; this no longer\n works in Python 2.0. Use of this misfeature has been deprecated\n since Python 1.4.\n\n3. *x* can be any iterable object.\n\n4. Raises ``ValueError`` when *x* is not found in *s*. When a negative\n index is passed as the second or third parameter to the ``index()``\n method, the list length is added, as for slice indices. If it is\n still negative, it is truncated to zero, as for slice indices.\n\n Changed in version 2.3: Previously, ``index()`` didn't have\n arguments for specifying start and stop positions.\n\n5. 
When a negative index is passed as the first parameter to the\n ``insert()`` method, the list length is added, as for slice\n indices. If it is still negative, it is truncated to zero, as for\n slice indices.\n\n Changed in version 2.3: Previously, all negative indices were\n truncated to zero.\n\n6. The ``pop()`` method is only supported by the list and array types.\n The optional argument *i* defaults to ``-1``, so that by default\n the last item is removed and returned.\n\n7. The ``sort()`` and ``reverse()`` methods modify the list in place\n for economy of space when sorting or reversing a large list. To\n remind you that they operate by side effect, they don't return the\n sorted or reversed list.\n\n8. The ``sort()`` method takes optional arguments for controlling the\n comparisons.\n\n *cmp* specifies a custom comparison function of two arguments (list\n items) which should return a negative, zero or positive number\n depending on whether the first argument is considered smaller than,\n equal to, or larger than the second argument: ``cmp=lambda x,y:\n cmp(x.lower(), y.lower())``. The default value is ``None``.\n\n *key* specifies a function of one argument that is used to extract\n a comparison key from each list element: ``key=str.lower``. The\n default value is ``None``.\n\n *reverse* is a boolean value. If set to ``True``, then the list\n elements are sorted as if each comparison were reversed.\n\n In general, the *key* and *reverse* conversion processes are much\n faster than specifying an equivalent *cmp* function. This is\n because *cmp* is called multiple times for each list element while\n *key* and *reverse* touch each element only once. Use\n ``functools.cmp_to_key()`` to convert an old-style *cmp* function\n to a *key* function.\n\n Changed in version 2.3: Support for ``None`` as an equivalent to\n omitting *cmp* was added.\n\n Changed in version 2.4: Support for *key* and *reverse* was added.\n\n9. Starting with Python 2.3, the ``sort()`` method is guaranteed to be\n stable. A sort is stable if it guarantees not to change the\n relative order of elements that compare equal --- this is helpful\n for sorting in multiple passes (for example, sort by department,\n then by salary grade).\n\n10. **CPython implementation detail:** While a list is being sorted,\n the effect of attempting to mutate, or even inspect, the list is\n undefined. The C implementation of Python 2.3 and newer makes the\n list appear empty for the duration, and raises ``ValueError`` if\n it can detect that the list has been mutated during a sort.\n", 'unary': u'\nUnary arithmetic and bitwise operations\n***************************************\n\nAll unary arithmetic and bitwise operations have the same priority:\n\n u_expr ::= power | "-" u_expr | "+" u_expr | "~" u_expr\n\nThe unary ``-`` (minus) operator yields the negation of its numeric\nargument.\n\nThe unary ``+`` (plus) operator yields its numeric argument unchanged.\n\nThe unary ``~`` (invert) operator yields the bitwise inversion of its\nplain or long integer argument. The bitwise inversion of ``x`` is\ndefined as ``-(x+1)``. 
It only applies to integral numbers.\n\nIn all three cases, if the argument does not have the proper type, a\n``TypeError`` exception is raised.\n', 'while': u'\nThe ``while`` statement\n***********************\n\nThe ``while`` statement is used for repeated execution as long as an\nexpression is true:\n\n while_stmt ::= "while" expression ":" suite\n ["else" ":" suite]\n\nThis repeatedly tests the expression and, if it is true, executes the\nfirst suite; if the expression is false (which may be the first time\nit is tested) the suite of the ``else`` clause, if present, is\nexecuted and the loop terminates.\n\nA ``break`` statement executed in the first suite terminates the loop\nwithout executing the ``else`` clause\'s suite. A ``continue``\nstatement executed in the first suite skips the rest of the suite and\ngoes back to testing the expression.\n', - 'with': u'\nThe ``with`` statement\n**********************\n\nNew in version 2.5.\n\nThe ``with`` statement is used to wrap the execution of a block with\nmethods defined by a context manager (see section *With Statement\nContext Managers*). This allows common\n``try``...``except``...``finally`` usage patterns to be encapsulated\nfor convenient reuse.\n\n with_stmt ::= "with" with_item ("," with_item)* ":" suite\n with_item ::= expression ["as" target]\n\nThe execution of the ``with`` statement with one "item" proceeds as\nfollows:\n\n1. The context expression is evaluated to obtain a context manager.\n\n2. The context manager\'s ``__exit__()`` is loaded for later use.\n\n3. The context manager\'s ``__enter__()`` method is invoked.\n\n4. If a target was included in the ``with`` statement, the return\n value from ``__enter__()`` is assigned to it.\n\n Note: The ``with`` statement guarantees that if the ``__enter__()``\n method returns without an error, then ``__exit__()`` will always\n be called. Thus, if an error occurs during the assignment to the\n target list, it will be treated the same as an error occurring\n within the suite would be. See step 6 below.\n\n5. The suite is executed.\n\n6. The context manager\'s ``__exit__()`` method is invoked. If an\n exception caused the suite to be exited, its type, value, and\n traceback are passed as arguments to ``__exit__()``. Otherwise,\n three ``None`` arguments are supplied.\n\n If the suite was exited due to an exception, and the return value\n from the ``__exit__()`` method was false, the exception is\n reraised. If the return value was true, the exception is\n suppressed, and execution continues with the statement following\n the ``with`` statement.\n\n If the suite was exited for any reason other than an exception, the\n return value from ``__exit__()`` is ignored, and execution proceeds\n at the normal location for the kind of exit that was taken.\n\nWith more than one item, the context managers are processed as if\nmultiple ``with`` statements were nested:\n\n with A() as a, B() as b:\n suite\n\nis equivalent to\n\n with A() as a:\n with B() as b:\n suite\n\nNote: In Python 2.5, the ``with`` statement is only allowed when the\n ``with_statement`` feature has been enabled. 
It is always enabled\n in Python 2.6.\n\nChanged in version 2.7: Support for multiple context expressions.\n\nSee also:\n\n **PEP 0343** - The "with" statement\n The specification, background, and examples for the Python\n ``with`` statement.\n', + 'with': u'\nThe ``with`` statement\n**********************\n\nNew in version 2.5.\n\nThe ``with`` statement is used to wrap the execution of a block with\nmethods defined by a context manager (see section *With Statement\nContext Managers*). This allows common\n``try``...``except``...``finally`` usage patterns to be encapsulated\nfor convenient reuse.\n\n with_stmt ::= "with" with_item ("," with_item)* ":" suite\n with_item ::= expression ["as" target]\n\nThe execution of the ``with`` statement with one "item" proceeds as\nfollows:\n\n1. The context expression (the expression given in the **with_item**)\n is evaluated to obtain a context manager.\n\n2. The context manager\'s ``__exit__()`` is loaded for later use.\n\n3. The context manager\'s ``__enter__()`` method is invoked.\n\n4. If a target was included in the ``with`` statement, the return\n value from ``__enter__()`` is assigned to it.\n\n Note: The ``with`` statement guarantees that if the ``__enter__()``\n method returns without an error, then ``__exit__()`` will always\n be called. Thus, if an error occurs during the assignment to the\n target list, it will be treated the same as an error occurring\n within the suite would be. See step 6 below.\n\n5. The suite is executed.\n\n6. The context manager\'s ``__exit__()`` method is invoked. If an\n exception caused the suite to be exited, its type, value, and\n traceback are passed as arguments to ``__exit__()``. Otherwise,\n three ``None`` arguments are supplied.\n\n If the suite was exited due to an exception, and the return value\n from the ``__exit__()`` method was false, the exception is\n reraised. If the return value was true, the exception is\n suppressed, and execution continues with the statement following\n the ``with`` statement.\n\n If the suite was exited for any reason other than an exception, the\n return value from ``__exit__()`` is ignored, and execution proceeds\n at the normal location for the kind of exit that was taken.\n\nWith more than one item, the context managers are processed as if\nmultiple ``with`` statements were nested:\n\n with A() as a, B() as b:\n suite\n\nis equivalent to\n\n with A() as a:\n with B() as b:\n suite\n\nNote: In Python 2.5, the ``with`` statement is only allowed when the\n ``with_statement`` feature has been enabled. It is always enabled\n in Python 2.6.\n\nChanged in version 2.7: Support for multiple context expressions.\n\nSee also:\n\n **PEP 0343** - The "with" statement\n The specification, background, and examples for the Python\n ``with`` statement.\n', 'yield': u'\nThe ``yield`` statement\n***********************\n\n yield_stmt ::= yield_expression\n\nThe ``yield`` statement is only used when defining a generator\nfunction, and is only used in the body of the generator function.\nUsing a ``yield`` statement in a function definition is sufficient to\ncause that definition to create a generator function instead of a\nnormal function.\n\nWhen a generator function is called, it returns an iterator known as a\ngenerator iterator, or more commonly, a generator. 
The body of the\ngenerator function is executed by calling the generator\'s ``next()``\nmethod repeatedly until it raises an exception.\n\nWhen a ``yield`` statement is executed, the state of the generator is\nfrozen and the value of **expression_list** is returned to\n``next()``\'s caller. By "frozen" we mean that all local state is\nretained, including the current bindings of local variables, the\ninstruction pointer, and the internal evaluation stack: enough\ninformation is saved so that the next time ``next()`` is invoked, the\nfunction can proceed exactly as if the ``yield`` statement were just\nanother external call.\n\nAs of Python version 2.5, the ``yield`` statement is now allowed in\nthe ``try`` clause of a ``try`` ... ``finally`` construct. If the\ngenerator is not resumed before it is finalized (by reaching a zero\nreference count or by being garbage collected), the generator-\niterator\'s ``close()`` method will be called, allowing any pending\n``finally`` clauses to execute.\n\nNote: In Python 2.2, the ``yield`` statement was only allowed when the\n ``generators`` feature has been enabled. This ``__future__`` import\n statement was used to enable the feature:\n\n from __future__ import generators\n\nSee also:\n\n **PEP 0255** - Simple Generators\n The proposal for adding generators and the ``yield`` statement\n to Python.\n\n **PEP 0342** - Coroutines via Enhanced Generators\n The proposal that, among other generator enhancements, proposed\n allowing ``yield`` to appear inside a ``try`` ... ``finally``\n block.\n'} diff --git a/lib-python/2.7/random.py b/lib-python/2.7/random.py --- a/lib-python/2.7/random.py +++ b/lib-python/2.7/random.py @@ -317,7 +317,7 @@ n = len(population) if not 0 <= k <= n: - raise ValueError, "sample larger than population" + raise ValueError("sample larger than population") random = self.random _int = int result = [None] * k @@ -490,6 +490,12 @@ Conditions on the parameters are alpha > 0 and beta > 0. + The probability distribution function is: + + x ** (alpha - 1) * math.exp(-x / beta) + pdf(x) = -------------------------------------- + math.gamma(alpha) * beta ** alpha + """ # alpha > 0, beta > 0, mean is alpha*beta, variance is alpha*beta**2 @@ -592,7 +598,7 @@ ## -------------------- beta -------------------- ## See -## http://sourceforge.net/bugs/?func=detailbug&bug_id=130030&group_id=5470 +## http://mail.python.org/pipermail/python-bugs-list/2001-January/003752.html ## for Ivan Frohne's insightful analysis of why the original implementation: ## ## def betavariate(self, alpha, beta): diff --git a/lib-python/2.7/re.py b/lib-python/2.7/re.py --- a/lib-python/2.7/re.py +++ b/lib-python/2.7/re.py @@ -207,8 +207,7 @@ "Escape all non-alphanumeric characters in pattern." s = list(pattern) alphanum = _alphanum - for i in range(len(pattern)): - c = pattern[i] + for i, c in enumerate(pattern): if c not in alphanum: if c == "\000": s[i] = "\\000" diff --git a/lib-python/2.7/shutil.py b/lib-python/2.7/shutil.py --- a/lib-python/2.7/shutil.py +++ b/lib-python/2.7/shutil.py @@ -277,6 +277,12 @@ """ real_dst = dst if os.path.isdir(dst): + if _samefile(src, dst): + # We might be on a case insensitive filesystem, + # perform the rename anyway. + os.rename(src, dst) + return + real_dst = os.path.join(dst, _basename(src)) if os.path.exists(real_dst): raise Error, "Destination path '%s' already exists" % real_dst @@ -336,7 +342,7 @@ archive that is being built. If not provided, the current owner and group will be used. 
- The output tar file will be named 'base_dir' + ".tar", possibly plus + The output tar file will be named 'base_name' + ".tar", possibly plus the appropriate compression extension (".gz", or ".bz2"). Returns the output filename. @@ -406,7 +412,7 @@ def _make_zipfile(base_name, base_dir, verbose=0, dry_run=0, logger=None): """Create a zip file from all the files under 'base_dir'. - The output zip file will be named 'base_dir' + ".zip". Uses either the + The output zip file will be named 'base_name' + ".zip". Uses either the "zipfile" Python module (if available) or the InfoZIP "zip" utility (if installed and found on the default search path). If neither tool is available, raises ExecError. Returns the name of the output zip diff --git a/lib-python/2.7/site.py b/lib-python/2.7/site.py --- a/lib-python/2.7/site.py +++ b/lib-python/2.7/site.py @@ -61,6 +61,7 @@ import sys import os import __builtin__ +import traceback # Prefixes for site-packages; add additional prefixes like /usr/local here PREFIXES = [sys.prefix, sys.exec_prefix] @@ -155,17 +156,26 @@ except IOError: return with f: - for line in f: + for n, line in enumerate(f): if line.startswith("#"): continue - if line.startswith(("import ", "import\t")): - exec line - continue - line = line.rstrip() - dir, dircase = makepath(sitedir, line) - if not dircase in known_paths and os.path.exists(dir): - sys.path.append(dir) - known_paths.add(dircase) + try: + if line.startswith(("import ", "import\t")): + exec line + continue + line = line.rstrip() + dir, dircase = makepath(sitedir, line) + if not dircase in known_paths and os.path.exists(dir): + sys.path.append(dir) + known_paths.add(dircase) + except Exception as err: + print >>sys.stderr, "Error processing line {:d} of {}:\n".format( + n+1, fullname) + for record in traceback.format_exception(*sys.exc_info()): + for line in record.splitlines(): + print >>sys.stderr, ' '+line + print >>sys.stderr, "\nRemainder of file ignored" + break if reset: known_paths = None return known_paths diff --git a/lib-python/2.7/smtplib.py b/lib-python/2.7/smtplib.py --- a/lib-python/2.7/smtplib.py +++ b/lib-python/2.7/smtplib.py @@ -49,17 +49,18 @@ from email.base64mime import encode as encode_base64 from sys import stderr -__all__ = ["SMTPException","SMTPServerDisconnected","SMTPResponseException", - "SMTPSenderRefused","SMTPRecipientsRefused","SMTPDataError", - "SMTPConnectError","SMTPHeloError","SMTPAuthenticationError", - "quoteaddr","quotedata","SMTP"] +__all__ = ["SMTPException", "SMTPServerDisconnected", "SMTPResponseException", + "SMTPSenderRefused", "SMTPRecipientsRefused", "SMTPDataError", + "SMTPConnectError", "SMTPHeloError", "SMTPAuthenticationError", + "quoteaddr", "quotedata", "SMTP"] SMTP_PORT = 25 SMTP_SSL_PORT = 465 -CRLF="\r\n" +CRLF = "\r\n" OLDSTYLE_AUTH = re.compile(r"auth=(.*)", re.I) + # Exception classes used by this module. class SMTPException(Exception): """Base class for all exceptions raised by this module.""" @@ -109,7 +110,7 @@ def __init__(self, recipients): self.recipients = recipients - self.args = ( recipients,) + self.args = (recipients,) class SMTPDataError(SMTPResponseException): @@ -128,6 +129,7 @@ combination provided. """ + def quoteaddr(addr): """Quote a subset of the email addresses defined by RFC 821. @@ -138,7 +140,7 @@ m = email.utils.parseaddr(addr)[1] except AttributeError: pass - if m == (None, None): # Indicates parse failure or AttributeError + if m == (None, None): # Indicates parse failure or AttributeError # something weird here.. 
punt -ddm return "<%s>" % addr elif m is None: @@ -175,7 +177,8 @@ chr = None while chr != "\n": chr = self.sslobj.read(1) - if not chr: break + if not chr: + break str += chr return str @@ -219,6 +222,7 @@ ehlo_msg = "ehlo" ehlo_resp = None does_esmtp = 0 + default_port = SMTP_PORT def __init__(self, host='', port=0, local_hostname=None, timeout=socket._GLOBAL_DEFAULT_TIMEOUT): @@ -234,7 +238,6 @@ """ self.timeout = timeout self.esmtp_features = {} - self.default_port = SMTP_PORT if host: (code, msg) = self.connect(host, port) if code != 220: @@ -269,10 +272,11 @@ def _get_socket(self, port, host, timeout): # This makes it simpler for SMTP_SSL to use the SMTP connect code # and just alter the socket connection bit. - if self.debuglevel > 0: print>>stderr, 'connect:', (host, port) + if self.debuglevel > 0: + print>>stderr, 'connect:', (host, port) return socket.create_connection((port, host), timeout) - def connect(self, host='localhost', port = 0): + def connect(self, host='localhost', port=0): """Connect to a host on a given port. If the hostname ends with a colon (`:') followed by a number, and @@ -286,20 +290,25 @@ if not port and (host.find(':') == host.rfind(':')): i = host.rfind(':') if i >= 0: - host, port = host[:i], host[i+1:] - try: port = int(port) + host, port = host[:i], host[i + 1:] + try: + port = int(port) except ValueError: raise socket.error, "nonnumeric port" - if not port: port = self.default_port - if self.debuglevel > 0: print>>stderr, 'connect:', (host, port) + if not port: + port = self.default_port + if self.debuglevel > 0: + print>>stderr, 'connect:', (host, port) self.sock = self._get_socket(host, port, self.timeout) (code, msg) = self.getreply() - if self.debuglevel > 0: print>>stderr, "connect:", msg + if self.debuglevel > 0: + print>>stderr, "connect:", msg return (code, msg) def send(self, str): """Send `str' to the server.""" - if self.debuglevel > 0: print>>stderr, 'send:', repr(str) + if self.debuglevel > 0: + print>>stderr, 'send:', repr(str) if hasattr(self, 'sock') and self.sock: try: self.sock.sendall(str) @@ -330,7 +339,7 @@ Raises SMTPServerDisconnected if end-of-file is reached. """ - resp=[] + resp = [] if self.file is None: self.file = self.sock.makefile('rb') while 1: @@ -341,9 +350,10 @@ if line == '': self.close() raise SMTPServerDisconnected("Connection unexpectedly closed") - if self.debuglevel > 0: print>>stderr, 'reply:', repr(line) + if self.debuglevel > 0: + print>>stderr, 'reply:', repr(line) resp.append(line[4:].strip()) - code=line[:3] + code = line[:3] # Check that the error code is syntactically correct. # Don't attempt to read a continuation line if it is broken. try: @@ -352,17 +362,17 @@ errcode = -1 break # Check if multiline response. - if line[3:4]!="-": + if line[3:4] != "-": break errmsg = "\n".join(resp) if self.debuglevel > 0: - print>>stderr, 'reply: retcode (%s); Msg: %s' % (errcode,errmsg) + print>>stderr, 'reply: retcode (%s); Msg: %s' % (errcode, errmsg) return errcode, errmsg def docmd(self, cmd, args=""): """Send a command, and return its response code.""" - self.putcmd(cmd,args) + self.putcmd(cmd, args) return self.getreply() # std smtp commands @@ -372,9 +382,9 @@ host. """ self.putcmd("helo", name or self.local_hostname) - (code,msg)=self.getreply() - self.helo_resp=msg - return (code,msg) + (code, msg) = self.getreply() + self.helo_resp = msg + return (code, msg) def ehlo(self, name=''): """ SMTP 'ehlo' command. 
@@ -383,19 +393,19 @@ """ self.esmtp_features = {} self.putcmd(self.ehlo_msg, name or self.local_hostname) - (code,msg)=self.getreply() + (code, msg) = self.getreply() # According to RFC1869 some (badly written) # MTA's will disconnect on an ehlo. Toss an exception if # that happens -ddm if code == -1 and len(msg) == 0: self.close() raise SMTPServerDisconnected("Server not connected") - self.ehlo_resp=msg + self.ehlo_resp = msg if code != 250: - return (code,msg) - self.does_esmtp=1 + return (code, msg) + self.does_esmtp = 1 #parse the ehlo response -ddm - resp=self.ehlo_resp.split('\n') + resp = self.ehlo_resp.split('\n') del resp[0] for each in resp: # To be able to communicate with as many SMTP servers as possible, @@ -415,16 +425,16 @@ # It's actually stricter, in that only spaces are allowed between # parameters, but were not going to check for that here. Note # that the space isn't present if there are no parameters. - m=re.match(r'(?P[A-Za-z0-9][A-Za-z0-9\-]*) ?',each) + m = re.match(r'(?P[A-Za-z0-9][A-Za-z0-9\-]*) ?', each) if m: - feature=m.group("feature").lower() - params=m.string[m.end("feature"):].strip() + feature = m.group("feature").lower() + params = m.string[m.end("feature"):].strip() if feature == "auth": self.esmtp_features[feature] = self.esmtp_features.get(feature, "") \ + " " + params else: - self.esmtp_features[feature]=params - return (code,msg) + self.esmtp_features[feature] = params + return (code, msg) def has_extn(self, opt): """Does the server support a given SMTP service extension?""" @@ -444,23 +454,23 @@ """SMTP 'noop' command -- doesn't do anything :>""" return self.docmd("noop") - def mail(self,sender,options=[]): + def mail(self, sender, options=[]): """SMTP 'mail' command -- begins mail xfer session.""" optionlist = '' if options and self.does_esmtp: optionlist = ' ' + ' '.join(options) - self.putcmd("mail", "FROM:%s%s" % (quoteaddr(sender) ,optionlist)) + self.putcmd("mail", "FROM:%s%s" % (quoteaddr(sender), optionlist)) return self.getreply() - def rcpt(self,recip,options=[]): + def rcpt(self, recip, options=[]): """SMTP 'rcpt' command -- indicates 1 recipient for this mail.""" optionlist = '' if options and self.does_esmtp: optionlist = ' ' + ' '.join(options) - self.putcmd("rcpt","TO:%s%s" % (quoteaddr(recip),optionlist)) + self.putcmd("rcpt", "TO:%s%s" % (quoteaddr(recip), optionlist)) return self.getreply() - def data(self,msg): + def data(self, msg): """SMTP 'DATA' command -- sends message data to server. Automatically quotes lines beginning with a period per rfc821. @@ -469,26 +479,28 @@ response code received when the all data is sent. """ self.putcmd("data") - (code,repl)=self.getreply() - if self.debuglevel >0 : print>>stderr, "data:", (code,repl) + (code, repl) = self.getreply() + if self.debuglevel > 0: + print>>stderr, "data:", (code, repl) if code != 354: - raise SMTPDataError(code,repl) + raise SMTPDataError(code, repl) else: q = quotedata(msg) if q[-2:] != CRLF: q = q + CRLF q = q + "." + CRLF self.send(q) - (code,msg)=self.getreply() - if self.debuglevel >0 : print>>stderr, "data:", (code,msg) - return (code,msg) + (code, msg) = self.getreply() + if self.debuglevel > 0: + print>>stderr, "data:", (code, msg) + return (code, msg) def verify(self, address): """SMTP 'verify' command -- checks for address validity.""" self.putcmd("vrfy", quoteaddr(address)) return self.getreply() # a.k.a. 
- vrfy=verify + vrfy = verify def expn(self, address): """SMTP 'expn' command -- expands a mailing list.""" @@ -592,7 +604,7 @@ raise SMTPAuthenticationError(code, resp) return (code, resp) - def starttls(self, keyfile = None, certfile = None): + def starttls(self, keyfile=None, certfile=None): """Puts the connection to the SMTP server into TLS mode. If there has been no previous EHLO or HELO command this session, this @@ -695,22 +707,22 @@ for option in mail_options: esmtp_opts.append(option) - (code,resp) = self.mail(from_addr, esmtp_opts) + (code, resp) = self.mail(from_addr, esmtp_opts) if code != 250: self.rset() raise SMTPSenderRefused(code, resp, from_addr) - senderrs={} + senderrs = {} if isinstance(to_addrs, basestring): to_addrs = [to_addrs] for each in to_addrs: - (code,resp)=self.rcpt(each, rcpt_options) + (code, resp) = self.rcpt(each, rcpt_options) if (code != 250) and (code != 251): - senderrs[each]=(code,resp) - if len(senderrs)==len(to_addrs): + senderrs[each] = (code, resp) + if len(senderrs) == len(to_addrs): # the server refused all our recipients self.rset() raise SMTPRecipientsRefused(senderrs) - (code,resp) = self.data(msg) + (code, resp) = self.data(msg) if code != 250: self.rset() raise SMTPDataError(code, resp) @@ -744,16 +756,19 @@ are also optional - they can contain a PEM formatted private key and certificate chain file for the SSL connection. """ + + default_port = SMTP_SSL_PORT + def __init__(self, host='', port=0, local_hostname=None, keyfile=None, certfile=None, timeout=socket._GLOBAL_DEFAULT_TIMEOUT): self.keyfile = keyfile self.certfile = certfile SMTP.__init__(self, host, port, local_hostname, timeout) - self.default_port = SMTP_SSL_PORT def _get_socket(self, host, port, timeout): - if self.debuglevel > 0: print>>stderr, 'connect:', (host, port) + if self.debuglevel > 0: + print>>stderr, 'connect:', (host, port) new_socket = socket.create_connection((host, port), timeout) new_socket = ssl.wrap_socket(new_socket, self.keyfile, self.certfile) self.file = SSLFakeFile(new_socket) @@ -781,11 +796,11 @@ ehlo_msg = "lhlo" - def __init__(self, host = '', port = LMTP_PORT, local_hostname = None): + def __init__(self, host='', port=LMTP_PORT, local_hostname=None): """Initialize a new instance.""" SMTP.__init__(self, host, port, local_hostname) - def connect(self, host = 'localhost', port = 0): + def connect(self, host='localhost', port=0): """Connect to the LMTP daemon, on either a Unix or a TCP socket.""" if host[0] != '/': return SMTP.connect(self, host, port) @@ -795,13 +810,15 @@ self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) self.sock.connect(host) except socket.error, msg: - if self.debuglevel > 0: print>>stderr, 'connect fail:', host + if self.debuglevel > 0: + print>>stderr, 'connect fail:', host if self.sock: self.sock.close() self.sock = None raise socket.error, msg (code, msg) = self.getreply() - if self.debuglevel > 0: print>>stderr, "connect:", msg + if self.debuglevel > 0: + print>>stderr, "connect:", msg return (code, msg) @@ -815,7 +832,7 @@ return sys.stdin.readline().strip() fromaddr = prompt("From") - toaddrs = prompt("To").split(',') + toaddrs = prompt("To").split(',') print "Enter message, end with ^D:" msg = '' while 1: diff --git a/lib-python/2.7/ssl.py b/lib-python/2.7/ssl.py --- a/lib-python/2.7/ssl.py +++ b/lib-python/2.7/ssl.py @@ -121,9 +121,11 @@ if e.errno != errno.ENOTCONN: raise # no, no connection yet + self._connected = False self._sslobj = None else: # yes, create the SSL object + self._connected = True 
self._sslobj = _ssl.sslwrap(self._sock, server_side, keyfile, certfile, cert_reqs, ssl_version, ca_certs, @@ -293,21 +295,36 @@ self._sslobj.do_handshake() - def connect(self, addr): - - """Connects to remote ADDR, and then wraps the connection in - an SSL channel.""" - + def _real_connect(self, addr, return_errno): # Here we assume that the socket is client-side, and not # connected at the time of the call. We connect it, then wrap it. - if self._sslobj: + if self._connected: raise ValueError("attempt to connect already-connected SSLSocket!") - socket.connect(self, addr) self._sslobj = _ssl.sslwrap(self._sock, False, self.keyfile, self.certfile, self.cert_reqs, self.ssl_version, self.ca_certs, self.ciphers) - if self.do_handshake_on_connect: - self.do_handshake() + try: + socket.connect(self, addr) + if self.do_handshake_on_connect: + self.do_handshake() + except socket_error as e: + if return_errno: + return e.errno + else: + self._sslobj = None + raise e + self._connected = True + return 0 + + def connect(self, addr): + """Connects to remote ADDR, and then wraps the connection in + an SSL channel.""" + self._real_connect(addr, False) + + def connect_ex(self, addr): + """Connects to remote ADDR, and then wraps the connection in + an SSL channel.""" + return self._real_connect(addr, True) def accept(self): diff --git a/lib-python/2.7/subprocess.py b/lib-python/2.7/subprocess.py --- a/lib-python/2.7/subprocess.py +++ b/lib-python/2.7/subprocess.py @@ -396,6 +396,7 @@ import traceback import gc import signal +import errno # Exception classes used by this module. class CalledProcessError(Exception): @@ -427,7 +428,6 @@ else: import select _has_poll = hasattr(select, 'poll') - import errno import fcntl import pickle @@ -441,8 +441,15 @@ "check_output", "CalledProcessError"] if mswindows: - from _subprocess import CREATE_NEW_CONSOLE, CREATE_NEW_PROCESS_GROUP - __all__.extend(["CREATE_NEW_CONSOLE", "CREATE_NEW_PROCESS_GROUP"]) + from _subprocess import (CREATE_NEW_CONSOLE, CREATE_NEW_PROCESS_GROUP, + STD_INPUT_HANDLE, STD_OUTPUT_HANDLE, + STD_ERROR_HANDLE, SW_HIDE, + STARTF_USESTDHANDLES, STARTF_USESHOWWINDOW) + + __all__.extend(["CREATE_NEW_CONSOLE", "CREATE_NEW_PROCESS_GROUP", + "STD_INPUT_HANDLE", "STD_OUTPUT_HANDLE", + "STD_ERROR_HANDLE", "SW_HIDE", + "STARTF_USESTDHANDLES", "STARTF_USESHOWWINDOW"]) try: MAXFD = os.sysconf("SC_OPEN_MAX") except: @@ -726,7 +733,11 @@ stderr = None if self.stdin: if input: - self.stdin.write(input) + try: + self.stdin.write(input) + except IOError as e: + if e.errno != errno.EPIPE and e.errno != errno.EINVAL: + raise self.stdin.close() elif self.stdout: stdout = self.stdout.read() @@ -883,7 +894,7 @@ except pywintypes.error, e: # Translate pywintypes.error to WindowsError, which is # a subclass of OSError. FIXME: We should really - # translate errno using _sys_errlist (or simliar), but + # translate errno using _sys_errlist (or similar), but # how can this be done from Python? 
raise WindowsError(*e.args) finally: @@ -956,7 +967,11 @@ if self.stdin: if input is not None: - self.stdin.write(input) + try: + self.stdin.write(input) + except IOError as e: + if e.errno != errno.EPIPE: + raise self.stdin.close() if self.stdout: @@ -1051,14 +1066,17 @@ errread, errwrite) - def _set_cloexec_flag(self, fd): + def _set_cloexec_flag(self, fd, cloexec=True): try: cloexec_flag = fcntl.FD_CLOEXEC except AttributeError: cloexec_flag = 1 old = fcntl.fcntl(fd, fcntl.F_GETFD) - fcntl.fcntl(fd, fcntl.F_SETFD, old | cloexec_flag) + if cloexec: + fcntl.fcntl(fd, fcntl.F_SETFD, old | cloexec_flag) + else: + fcntl.fcntl(fd, fcntl.F_SETFD, old & ~cloexec_flag) def _close_fds(self, but): @@ -1128,21 +1146,25 @@ os.close(errpipe_read) # Dup fds for child - if p2cread is not None: - os.dup2(p2cread, 0) - if c2pwrite is not None: - os.dup2(c2pwrite, 1) - if errwrite is not None: - os.dup2(errwrite, 2) + def _dup2(a, b): + # dup2() removes the CLOEXEC flag but + # we must do it ourselves if dup2() + # would be a no-op (issue #10806). + if a == b: + self._set_cloexec_flag(a, False) + elif a is not None: + os.dup2(a, b) + _dup2(p2cread, 0) + _dup2(c2pwrite, 1) + _dup2(errwrite, 2) - # Close pipe fds. Make sure we don't close the same - # fd more than once, or standard fds. - if p2cread is not None and p2cread not in (0,): - os.close(p2cread) - if c2pwrite is not None and c2pwrite not in (p2cread, 1): - os.close(c2pwrite) - if errwrite is not None and errwrite not in (p2cread, c2pwrite, 2): - os.close(errwrite) + # Close pipe fds. Make sure we don't close the + # same fd more than once, or standard fds. + closed = { None } + for fd in [p2cread, c2pwrite, errwrite]: + if fd not in closed and fd > 2: + os.close(fd) + closed.add(fd) # Close all other fds, if asked for if close_fds: @@ -1194,7 +1216,11 @@ os.close(errpipe_read) if data != "": - _eintr_retry_call(os.waitpid, self.pid, 0) + try: + _eintr_retry_call(os.waitpid, self.pid, 0) + except OSError as e: + if e.errno != errno.ECHILD: + raise child_exception = pickle.loads(data) for fd in (p2cwrite, c2pread, errread): if fd is not None: @@ -1240,7 +1266,15 @@ """Wait for child process to terminate. Returns returncode attribute.""" if self.returncode is None: - pid, sts = _eintr_retry_call(os.waitpid, self.pid, 0) + try: + pid, sts = _eintr_retry_call(os.waitpid, self.pid, 0) + except OSError as e: + if e.errno != errno.ECHILD: + raise + # This happens if SIGCLD is set to be ignored or waiting + # for child processes has otherwise been disabled for our + # process. This child is dead, we can't get the status. 
+ sts = 0 self._handle_exitstatus(sts) return self.returncode @@ -1317,9 +1351,16 @@ for fd, mode in ready: if mode & select.POLLOUT: chunk = input[input_offset : input_offset + _PIPE_BUF] - input_offset += os.write(fd, chunk) - if input_offset >= len(input): - close_unregister_and_remove(fd) + try: + input_offset += os.write(fd, chunk) + except OSError as e: + if e.errno == errno.EPIPE: + close_unregister_and_remove(fd) + else: + raise + else: + if input_offset >= len(input): + close_unregister_and_remove(fd) elif mode & select_POLLIN_POLLPRI: data = os.read(fd, 4096) if not data: @@ -1358,11 +1399,19 @@ if self.stdin in wlist: chunk = input[input_offset : input_offset + _PIPE_BUF] - bytes_written = os.write(self.stdin.fileno(), chunk) - input_offset += bytes_written - if input_offset >= len(input): - self.stdin.close() - write_set.remove(self.stdin) + try: + bytes_written = os.write(self.stdin.fileno(), chunk) + except OSError as e: + if e.errno == errno.EPIPE: + self.stdin.close() + write_set.remove(self.stdin) + else: + raise + else: + input_offset += bytes_written + if input_offset >= len(input): + self.stdin.close() + write_set.remove(self.stdin) if self.stdout in rlist: data = os.read(self.stdout.fileno(), 1024) diff --git a/lib-python/2.7/symbol.py b/lib-python/2.7/symbol.py --- a/lib-python/2.7/symbol.py +++ b/lib-python/2.7/symbol.py @@ -82,20 +82,19 @@ sliceop = 325 exprlist = 326 testlist = 327 -dictmaker = 328 -dictorsetmaker = 329 -classdef = 330 -arglist = 331 -argument = 332 -list_iter = 333 -list_for = 334 -list_if = 335 -comp_iter = 336 -comp_for = 337 -comp_if = 338 -testlist1 = 339 -encoding_decl = 340 -yield_expr = 341 +dictorsetmaker = 328 +classdef = 329 +arglist = 330 +argument = 331 +list_iter = 332 +list_for = 333 +list_if = 334 +comp_iter = 335 +comp_for = 336 +comp_if = 337 +testlist1 = 338 +encoding_decl = 339 +yield_expr = 340 #--end constants-- sym_name = {} diff --git a/lib-python/2.7/sysconfig.py b/lib-python/2.7/sysconfig.py --- a/lib-python/2.7/sysconfig.py +++ b/lib-python/2.7/sysconfig.py @@ -271,7 +271,7 @@ def _get_makefile_filename(): if _PYTHON_BUILD: return os.path.join(_PROJECT_BASE, "Makefile") - return os.path.join(get_path('stdlib'), "config", "Makefile") + return os.path.join(get_path('platstdlib'), "config", "Makefile") def _init_posix(vars): @@ -297,21 +297,6 @@ msg = msg + " (%s)" % e.strerror raise IOError(msg) - # On MacOSX we need to check the setting of the environment variable - # MACOSX_DEPLOYMENT_TARGET: configure bases some choices on it so - # it needs to be compatible. - # If it isn't set we set it to the configure-time value - if sys.platform == 'darwin' and 'MACOSX_DEPLOYMENT_TARGET' in vars: - cfg_target = vars['MACOSX_DEPLOYMENT_TARGET'] - cur_target = os.getenv('MACOSX_DEPLOYMENT_TARGET', '') - if cur_target == '': - cur_target = cfg_target - os.putenv('MACOSX_DEPLOYMENT_TARGET', cfg_target) - elif map(int, cfg_target.split('.')) > map(int, cur_target.split('.')): - msg = ('$MACOSX_DEPLOYMENT_TARGET mismatch: now "%s" but "%s" ' - 'during configure' % (cur_target, cfg_target)) - raise IOError(msg) - # On AIX, there are wrong paths to the linker scripts in the Makefile # -- these paths are relative to the Python source, but when installed # the scripts are in another directory. @@ -616,9 +601,7 @@ # machine is going to compile and link as if it were # MACOSX_DEPLOYMENT_TARGET. 
cfgvars = get_config_vars() - macver = os.environ.get('MACOSX_DEPLOYMENT_TARGET') - if not macver: - macver = cfgvars.get('MACOSX_DEPLOYMENT_TARGET') + macver = cfgvars.get('MACOSX_DEPLOYMENT_TARGET') if 1: # Always calculate the release of the running machine, @@ -639,7 +622,6 @@ m = re.search( r'ProductUserVisibleVersion\s*' + r'(.*?)', f.read()) - f.close() if m is not None: macrelease = '.'.join(m.group(1).split('.')[:2]) # else: fall back to the default behaviour diff --git a/lib-python/2.7/tarfile.py b/lib-python/2.7/tarfile.py --- a/lib-python/2.7/tarfile.py +++ b/lib-python/2.7/tarfile.py @@ -2239,10 +2239,14 @@ if hasattr(os, "symlink") and hasattr(os, "link"): # For systems that support symbolic and hard links. if tarinfo.issym(): + if os.path.lexists(targetpath): + os.unlink(targetpath) os.symlink(tarinfo.linkname, targetpath) else: # See extract(). if os.path.exists(tarinfo._link_target): + if os.path.lexists(targetpath): + os.unlink(targetpath) os.link(tarinfo._link_target, targetpath) else: self._extract_member(self._find_link_target(tarinfo), targetpath) diff --git a/lib-python/2.7/telnetlib.py b/lib-python/2.7/telnetlib.py --- a/lib-python/2.7/telnetlib.py +++ b/lib-python/2.7/telnetlib.py @@ -236,7 +236,7 @@ """ if self.debuglevel > 0: - print 'Telnet(%s,%d):' % (self.host, self.port), + print 'Telnet(%s,%s):' % (self.host, self.port), if args: print msg % args else: diff --git a/lib-python/2.7/test/cjkencodings/big5-utf8.txt b/lib-python/2.7/test/cjkencodings/big5-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/big5-utf8.txt @@ -0,0 +1,9 @@ +如何在 Python 中使用既有的 C library? + 在資訊科技快速發展的今天, 開發及測試軟體的速度是不容忽視的 +課題. 為加快開發及測試的速度, 我們便常希望能利用一些已開發好的 +library, 並有一個 fast prototyping 的 programming language 可 +供使用. 目前有許許多多的 library 是以 C 寫成, 而 Python 是一個 +fast prototyping 的 programming language. 故我們希望能將既有的 +C library 拿到 Python 的環境中測試及整合. 其中最主要也是我們所 +要討論的問題就是: + diff --git a/lib-python/2.7/test/cjkencodings/big5.txt b/lib-python/2.7/test/cjkencodings/big5.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/big5.txt @@ -0,0 +1,9 @@ +�p��b Python ���ϥάJ���� C library? +�@�b��T��ާֳt�o�i������, �}�o�δ��ճn�骺�t�׬O���e������ +���D. ���[�ֶ}�o�δ��ժ��t��, �ڭ̫K�`�Ʊ��Q�Τ@�Ǥw�}�o�n�� +library, �æ��@�� fast prototyping �� programming language �i +�Ѩϥ�. �ثe���\�\�h�h�� library �O�H C �g��, �� Python �O�@�� +fast prototyping �� programming language. �G�ڭ̧Ʊ��N�J���� +C library ���� Python �����Ҥ����դξ�X. �䤤�̥D�n�]�O�ڭ̩� +�n�Q�ת����D�N�O: + diff --git a/lib-python/2.7/test/cjkencodings/big5hkscs-utf8.txt b/lib-python/2.7/test/cjkencodings/big5hkscs-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/big5hkscs-utf8.txt @@ -0,0 +1,2 @@ +𠄌Ě鵮罓洆 +ÊÊ̄ê êê̄ diff --git a/lib-python/2.7/test/cjkencodings/big5hkscs.txt b/lib-python/2.7/test/cjkencodings/big5hkscs.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/big5hkscs.txt @@ -0,0 +1,2 @@ +�E�\�s�ڍ� +�f�b�� ���� diff --git a/lib-python/2.7/test/cjkencodings/cp949-utf8.txt b/lib-python/2.7/test/cjkencodings/cp949-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/cp949-utf8.txt @@ -0,0 +1,9 @@ +똠방각하 펲시콜라 + +㉯㉯납!! 因九月패믤릔궈 ⓡⓖ훀¿¿¿ 긍뒙 ⓔ뎨 ㉯. . +亞영ⓔ능횹 . . . . 서울뤄 뎐학乙 家훀 ! ! !ㅠ.ㅠ +흐흐흐 ㄱㄱㄱ☆ㅠ_ㅠ 어릨 탸콰긐 뎌응 칑九들乙 ㉯드긐 +설릌 家훀 . . . . 굴애쉌 ⓔ궈 ⓡ릘㉱긐 因仁川女中까즼 +와쒀훀 ! ! 亞영ⓔ 家능궈 ☆上관 없능궈능 亞능뒈훀 글애듴 +ⓡ려듀九 싀풔숴훀 어릨 因仁川女中싁⑨들앜!! 
㉯㉯납♡ ⌒⌒* + diff --git a/lib-python/2.7/test/cjkencodings/cp949.txt b/lib-python/2.7/test/cjkencodings/cp949.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/cp949.txt @@ -0,0 +1,9 @@ +�c�氢�� �����ݶ� + +������!! �������В�p�� �ި��R������ ���� �ѵ� ��. . +䬿��Ѵ��� . . . . ����� ������ ʫ�R ! ! !��.�� +������ �������٤�_�� � ����O ���� �h������ ����O +���j ʫ�R . . . . ���֚f �ѱ� �ސt�ƒO ���������� +�;��R ! ! 䬿��� ʫ�ɱ� ��߾�� ���ɱŴ� 䬴ɵ��R �۾֊� +�޷����� ��Ǵ���R � ����������Ĩ���!! �������� �ҡ�* + diff --git a/lib-python/2.7/test/cjkencodings/euc_jisx0213-utf8.txt b/lib-python/2.7/test/cjkencodings/euc_jisx0213-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/euc_jisx0213-utf8.txt @@ -0,0 +1,8 @@ +Python の開発は、1990 年ごろから開始されています。 +開発者の Guido van Rossum は教育用のプログラミング言語「ABC」の開発に参加していましたが、ABC は実用上の目的にはあまり適していませんでした。 +このため、Guido はより実用的なプログラミング言語の開発を開始し、英国 BBS 放送のコメディ番組「モンティ パイソン」のファンである Guido はこの言語を「Python」と名づけました。 +このような背景から生まれた Python の言語設計は、「シンプル」で「習得が容易」という目標に重点が置かれています。 +多くのスクリプト系言語ではユーザの目先の利便性を優先して色々な機能を言語要素として取り入れる場合が多いのですが、Python ではそういった小細工が追加されることはあまりありません。 +言語自体の機能は最小限に押さえ、必要な機能は拡張モジュールとして追加する、というのが Python のポリシーです。 + +ノか゚ ト゚ トキ喝塀 𡚴𪎌 麀齁𩛰 diff --git a/lib-python/2.7/test/cjkencodings/euc_jisx0213.txt b/lib-python/2.7/test/cjkencodings/euc_jisx0213.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/euc_jisx0213.txt @@ -0,0 +1,8 @@ +Python �γ�ȯ�ϡ�1990 ǯ�����鳫�Ϥ���Ƥ��ޤ��� +��ȯ�Ԥ� Guido van Rossum �϶����ѤΥץ���ߥ󥰸����ABC�פγ�ȯ�˻��ä��Ƥ��ޤ�������ABC �ϼ��Ѿ����Ū�ˤϤ��ޤ�Ŭ���Ƥ��ޤ���Ǥ����� +���Τ��ᡢGuido �Ϥ�����Ū�ʥץ���ߥ󥰸���γ�ȯ�򳫻Ϥ����ѹ� BBS �����Υ���ǥ����ȡ֥��ƥ� �ѥ�����פΥե���Ǥ��� Guido �Ϥ��θ�����Python�פ�̾�Ť��ޤ����� +���Τ褦���طʤ������ޤ줿 Python �θ����߷פϡ��֥���ץ�פǡֽ������ưספȤ�����ɸ�˽������֤���Ƥ��ޤ��� +¿���Υ�����ץȷϸ���Ǥϥ桼�����������������ͥ�褷�ƿ����ʵ�ǽ��������ǤȤ��Ƽ��������礬¿���ΤǤ�����Python �ǤϤ������ä����ٹ����ɲä���뤳�ȤϤ��ޤꤢ��ޤ��� +���켫�Τε�ǽ�ϺǾ��¤˲�������ɬ�פʵ�ǽ�ϳ�ĥ�⥸�塼��Ȥ����ɲä��롢�Ȥ����Τ� Python �Υݥꥷ���Ǥ��� + +�Τ� �� �ȥ����� ���� ��ԏ���� diff --git a/lib-python/2.7/test/cjkencodings/euc_jp-utf8.txt b/lib-python/2.7/test/cjkencodings/euc_jp-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/euc_jp-utf8.txt @@ -0,0 +1,7 @@ +Python の開発は、1990 年ごろから開始されています。 +開発者の Guido van Rossum は教育用のプログラミング言語「ABC」の開発に参加していましたが、ABC は実用上の目的にはあまり適していませんでした。 +このため、Guido はより実用的なプログラミング言語の開発を開始し、英国 BBS 放送のコメディ番組「モンティ パイソン」のファンである Guido はこの言語を「Python」と名づけました。 +このような背景から生まれた Python の言語設計は、「シンプル」で「習得が容易」という目標に重点が置かれています。 +多くのスクリプト系言語ではユーザの目先の利便性を優先して色々な機能を言語要素として取り入れる場合が多いのですが、Python ではそういった小細工が追加されることはあまりありません。 +言語自体の機能は最小限に押さえ、必要な機能は拡張モジュールとして追加する、というのが Python のポリシーです。 + diff --git a/lib-python/2.7/test/cjkencodings/euc_jp.txt b/lib-python/2.7/test/cjkencodings/euc_jp.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/euc_jp.txt @@ -0,0 +1,7 @@ +Python �γ�ȯ�ϡ�1990 ǯ�����鳫�Ϥ���Ƥ��ޤ��� +��ȯ�Ԥ� Guido van Rossum �϶����ѤΥץ���ߥ󥰸����ABC�פγ�ȯ�˻��ä��Ƥ��ޤ�������ABC �ϼ��Ѿ����Ū�ˤϤ��ޤ�Ŭ���Ƥ��ޤ���Ǥ����� +���Τ��ᡢGuido �Ϥ�����Ū�ʥץ���ߥ󥰸���γ�ȯ�򳫻Ϥ����ѹ� BBS �����Υ���ǥ����ȡ֥��ƥ� �ѥ�����פΥե���Ǥ��� Guido �Ϥ��θ�����Python�פ�̾�Ť��ޤ����� +���Τ褦���طʤ������ޤ줿 Python �θ����߷פϡ��֥���ץ�פǡֽ������ưספȤ�����ɸ�˽������֤���Ƥ��ޤ��� +¿���Υ�����ץȷϸ���Ǥϥ桼�����������������ͥ�褷�ƿ����ʵ�ǽ��������ǤȤ��Ƽ��������礬¿���ΤǤ�����Python �ǤϤ������ä����ٹ����ɲä���뤳�ȤϤ��ޤꤢ��ޤ��� +���켫�Τε�ǽ�ϺǾ��¤˲�������ɬ�פʵ�ǽ�ϳ�ĥ�⥸�塼��Ȥ����ɲä��롢�Ȥ����Τ� Python �Υݥꥷ���Ǥ��� + diff --git a/lib-python/2.7/test/cjkencodings/euc_kr-utf8.txt 
b/lib-python/2.7/test/cjkencodings/euc_kr-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/euc_kr-utf8.txt @@ -0,0 +1,7 @@ +◎ 파이썬(Python)은 배우기 쉽고, 강력한 프로그래밍 언어입니다. 파이썬은 +효율적인 고수준 데이터 구조와 간단하지만 효율적인 객체지향프로그래밍을 +지원합니다. 파이썬의 우아(優雅)한 문법과 동적 타이핑, 그리고 인터프리팅 +환경은 파이썬을 스크립팅과 여러 분야에서와 대부분의 플랫폼에서의 빠른 +애플리케이션 개발을 할 수 있는 이상적인 언어로 만들어줍니다. + +☆첫가끝: 날아라 쓔쓔쓩~ 닁큼! 뜽금없이 전홥니다. 뷁. 그런거 읎다. diff --git a/lib-python/2.7/test/cjkencodings/euc_kr.txt b/lib-python/2.7/test/cjkencodings/euc_kr.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/euc_kr.txt @@ -0,0 +1,7 @@ +�� ���̽�(Python)�� ���� ����, ������ ���α׷��� ����Դϴ�. ���̽��� +ȿ������ ����� ������ ������ ���������� ȿ������ ��ü�������α׷����� +�����մϴ�. ���̽��� ���(���)�� ������ ���� Ÿ����, �׸��� ���������� +ȯ���� ���̽��� ��ũ���ð� ���� �о߿����� ��κ��� �÷��������� ���� +���ø����̼� ������ �� �� �ִ� �̻����� ���� ������ݴϴ�. + +��ù����: ���ƶ� �Ԥ��ФԤԤ��ФԾ�~ �Ԥ��Ҥ�ŭ! �Ԥ��Ѥ��ݾ��� ���Ԥ��Ȥ��ϴ�. �Ԥ��Τ�. �׷��� �Ԥ��Ѥ���. diff --git a/lib-python/2.7/test/cjkencodings/gb18030-utf8.txt b/lib-python/2.7/test/cjkencodings/gb18030-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/gb18030-utf8.txt @@ -0,0 +1,15 @@ +Python(派森)语言是一种功能强大而完善的通用型计算机程序设计语言, +已经具有十多年的发展历史,成熟且稳定。这种语言具有非常简捷而清晰 +的语法特点,适合完成各种高层任务,几乎可以在所有的操作系统中 +运行。这种语言简单而强大,适合各种人士学习使用。目前,基于这 +种语言的相关技术正在飞速的发展,用户数量急剧扩大,相关的资源非常多。 +如何在 Python 中使用既有的 C library? + 在資訊科技快速發展的今天, 開發及測試軟體的速度是不容忽視的 +課題. 為加快開發及測試的速度, 我們便常希望能利用一些已開發好的 +library, 並有一個 fast prototyping 的 programming language 可 +供使用. 目前有許許多多的 library 是以 C 寫成, 而 Python 是一個 +fast prototyping 的 programming language. 故我們希望能將既有的 +C library 拿到 Python 的環境中測試及整合. 其中最主要也是我們所 +要討論的問題就是: +파이썬은 강력한 기능을 지닌 범용 컴퓨터 프로그래밍 언어다. + diff --git a/lib-python/2.7/test/cjkencodings/gb18030.txt b/lib-python/2.7/test/cjkencodings/gb18030.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/gb18030.txt @@ -0,0 +1,15 @@ +Python����ɭ��������һ�ֹ���ǿ������Ƶ�ͨ���ͼ��������������ԣ� +�Ѿ�����ʮ����ķ�չ��ʷ���������ȶ����������Ծ��зdz���ݶ����� +���﷨�ص㣬�ʺ���ɸ��ָ߲����񣬼������������еIJ���ϵͳ�� +���С��������Լ򵥶�ǿ���ʺϸ�����ʿѧϰʹ�á�Ŀǰ�������� +�����Ե���ؼ������ڷ��ٵķ�չ���û���������������ص���Դ�dz��ࡣ +����� Python ��ʹ�ü��е� C library? +�����YӍ�Ƽ����ٰlչ�Ľ���, �_�l���yԇܛ�w���ٶ��Dz��ݺ�ҕ�� +�n�}. ��ӿ��_�l���yԇ���ٶ�, �҂��㳣ϣ��������һЩ���_�l�õ� +library, �K��һ�� fast prototyping �� programming language �� +��ʹ��. Ŀǰ���S�S���� library ���� C ����, �� Python ��һ�� +fast prototyping �� programming language. ���҂�ϣ���܌����е� +C library �õ� Python �ĭh���Мyԇ������. ��������ҪҲ���҂��� +ҪӑՓ�Ć��}����: +�5�1�3�3�2�1�3�1 �7�6�0�4�6�3 �8�5�8�6�3�5 �3�1�9�5 �0�9�3�0 �4�3�5�7�5�5 �5�5�0�9�8�9�9�3�0�4 �2�9�2�5�9�9. 
+ diff --git a/lib-python/2.7/test/cjkencodings/gb2312-utf8.txt b/lib-python/2.7/test/cjkencodings/gb2312-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/gb2312-utf8.txt @@ -0,0 +1,6 @@ +Python(派森)语言是一种功能强大而完善的通用型计算机程序设计语言, +已经具有十多年的发展历史,成熟且稳定。这种语言具有非常简捷而清晰 +的语法特点,适合完成各种高层任务,几乎可以在所有的操作系统中 +运行。这种语言简单而强大,适合各种人士学习使用。目前,基于这 +种语言的相关技术正在飞速的发展,用户数量急剧扩大,相关的资源非常多。 + diff --git a/lib-python/2.7/test/cjkencodings/gb2312.txt b/lib-python/2.7/test/cjkencodings/gb2312.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/gb2312.txt @@ -0,0 +1,6 @@ +Python����ɭ��������һ�ֹ���ǿ������Ƶ�ͨ���ͼ��������������ԣ� +�Ѿ�����ʮ����ķ�չ��ʷ���������ȶ����������Ծ��зdz���ݶ����� +���﷨�ص㣬�ʺ���ɸ��ָ߲����񣬼������������еIJ���ϵͳ�� +���С��������Լ򵥶�ǿ���ʺϸ�����ʿѧϰʹ�á�Ŀǰ�������� +�����Ե���ؼ������ڷ��ٵķ�չ���û���������������ص���Դ�dz��ࡣ + diff --git a/lib-python/2.7/test/cjkencodings/gbk-utf8.txt b/lib-python/2.7/test/cjkencodings/gbk-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/gbk-utf8.txt @@ -0,0 +1,14 @@ +Python(派森)语言是一种功能强大而完善的通用型计算机程序设计语言, +已经具有十多年的发展历史,成熟且稳定。这种语言具有非常简捷而清晰 +的语法特点,适合完成各种高层任务,几乎可以在所有的操作系统中 +运行。这种语言简单而强大,适合各种人士学习使用。目前,基于这 +种语言的相关技术正在飞速的发展,用户数量急剧扩大,相关的资源非常多。 +如何在 Python 中使用既有的 C library? + 在資訊科技快速發展的今天, 開發及測試軟體的速度是不容忽視的 +課題. 為加快開發及測試的速度, 我們便常希望能利用一些已開發好的 +library, 並有一個 fast prototyping 的 programming language 可 +供使用. 目前有許許多多的 library 是以 C 寫成, 而 Python 是一個 +fast prototyping 的 programming language. 故我們希望能將既有的 +C library 拿到 Python 的環境中測試及整合. 其中最主要也是我們所 +要討論的問題就是: + diff --git a/lib-python/2.7/test/cjkencodings/gbk.txt b/lib-python/2.7/test/cjkencodings/gbk.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/gbk.txt @@ -0,0 +1,14 @@ +Python����ɭ��������һ�ֹ���ǿ������Ƶ�ͨ���ͼ��������������ԣ� +�Ѿ�����ʮ����ķ�չ��ʷ���������ȶ����������Ծ��зdz���ݶ����� +���﷨�ص㣬�ʺ���ɸ��ָ߲����񣬼������������еIJ���ϵͳ�� +���С��������Լ򵥶�ǿ���ʺϸ�����ʿѧϰʹ�á�Ŀǰ�������� +�����Ե���ؼ������ڷ��ٵķ�չ���û���������������ص���Դ�dz��ࡣ +����� Python ��ʹ�ü��е� C library? +�����YӍ�Ƽ����ٰlչ�Ľ���, �_�l���yԇܛ�w���ٶ��Dz��ݺ�ҕ�� +�n�}. ��ӿ��_�l���yԇ���ٶ�, �҂��㳣ϣ��������һЩ���_�l�õ� +library, �K��һ�� fast prototyping �� programming language �� +��ʹ��. Ŀǰ���S�S���� library ���� C ����, �� Python ��һ�� +fast prototyping �� programming language. ���҂�ϣ���܌����е� +C library �õ� Python �ĭh���Мyԇ������. ��������ҪҲ���҂��� +ҪӑՓ�Ć��}����: + diff --git a/lib-python/2.7/test/cjkencodings/hz-utf8.txt b/lib-python/2.7/test/cjkencodings/hz-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/hz-utf8.txt @@ -0,0 +1,2 @@ +This sentence is in ASCII. +The next sentence is in GB.己所不欲,勿施於人。Bye. diff --git a/lib-python/2.7/test/cjkencodings/hz.txt b/lib-python/2.7/test/cjkencodings/hz.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/hz.txt @@ -0,0 +1,2 @@ +This sentence is in ASCII. +The next sentence is in GB.~{<:Ky2;S{#,NpJ)l6HK!#~}Bye. diff --git a/lib-python/2.7/test/cjkencodings/johab-utf8.txt b/lib-python/2.7/test/cjkencodings/johab-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/johab-utf8.txt @@ -0,0 +1,9 @@ +똠방각하 펲시콜라 + +㉯㉯납!! 因九月패믤릔궈 ⓡⓖ훀¿¿¿ 긍뒙 ⓔ뎨 ㉯. . +亞영ⓔ능횹 . . . . 서울뤄 뎐학乙 家훀 ! ! !ㅠ.ㅠ +흐흐흐 ㄱㄱㄱ☆ㅠ_ㅠ 어릨 탸콰긐 뎌응 칑九들乙 ㉯드긐 +설릌 家훀 . . . . 굴애쉌 ⓔ궈 ⓡ릘㉱긐 因仁川女中까즼 +와쒀훀 ! ! 亞영ⓔ 家능궈 ☆上관 없능궈능 亞능뒈훀 글애듴 +ⓡ려듀九 싀풔숴훀 어릨 因仁川女中싁⑨들앜!! 
㉯㉯납♡ ⌒⌒* + diff --git a/lib-python/2.7/test/cjkencodings/johab.txt b/lib-python/2.7/test/cjkencodings/johab.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/johab.txt @@ -0,0 +1,9 @@ +���w�b�a �\��ũ�a + +�����s!! �g��Ú������ �����zٯٯٯ �w�� �ѕ� ��. . +�<�w�ѓw�s . . . . �ᶉ�� �e�b�� �;�z ! ! !�A.�A +�a�a�a �A�A�A�i�A_�A �៚ ȡ���z �a�w ×✗i�� ���a�z +��z �;�z . . . . ������ �ъ� �ޟ��‹z �g�b�I����a�� +�����z ! ! �<�w�� �;�w�� �i꾉� ���w���w �<�w���z �i���z +�ޝa�A� ��Ρ���z �៚ �g�b�I���鯂��i�z!! �����sٽ �b�b* + diff --git a/lib-python/2.7/test/cjkencodings/shift_jis-utf8.txt b/lib-python/2.7/test/cjkencodings/shift_jis-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/shift_jis-utf8.txt @@ -0,0 +1,7 @@ +Python の開発は、1990 年ごろから開始されています。 +開発者の Guido van Rossum は教育用のプログラミング言語「ABC」の開発に参加していましたが、ABC は実用上の目的にはあまり適していませんでした。 +このため、Guido はより実用的なプログラミング言語の開発を開始し、英国 BBS 放送のコメディ番組「モンティ パイソン」のファンである Guido はこの言語を「Python」と名づけました。 +このような背景から生まれた Python の言語設計は、「シンプル」で「習得が容易」という目標に重点が置かれています。 +多くのスクリプト系言語ではユーザの目先の利便性を優先して色々な機能を言語要素として取り入れる場合が多いのですが、Python ではそういった小細工が追加されることはあまりありません。 +言語自体の機能は最小限に押さえ、必要な機能は拡張モジュールとして追加する、というのが Python のポリシーです。 + diff --git a/lib-python/2.7/test/cjkencodings/shift_jis.txt b/lib-python/2.7/test/cjkencodings/shift_jis.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/shift_jis.txt @@ -0,0 +1,7 @@ +Python �̊J���́A1990 �N���납��J�n����Ă��܂��B +�J���҂� Guido van Rossum �͋���p�̃v���O���~���O����uABC�v�̊J���ɎQ�����Ă��܂������AABC �͎��p��̖ړI�ɂ͂��܂�K���Ă��܂���ł����B +���̂��߁AGuido �͂����p�I�ȃv���O���~���O����̊J�����J�n���A�p�� BBS �����̃R���f�B�ԑg�u�����e�B �p�C�\���v�̃t�@���ł��� Guido �͂��̌�����uPython�v�Ɩ��Â��܂����B +���̂悤�Ȕw�i���琶�܂ꂽ Python �̌���݌v�́A�u�V���v���v�Łu�K�����e�Ձv�Ƃ����ڕW�ɏd�_���u����Ă��܂��B +�����̃X�N���v�g�n����ł̓��[�U�̖ڐ�̗��֐���D�悵�ĐF�X�ȋ@�\������v�f�Ƃ��Ď������ꍇ�������̂ł����APython �ł͂������������׍H���lj�����邱�Ƃ͂��܂肠��܂���B +���ꎩ�̂̋@�\�͍ŏ����ɉ������A�K�v�ȋ@�\�͊g�����W���[���Ƃ��Ēlj�����A�Ƃ����̂� Python �̃|���V�[�ł��B + diff --git a/lib-python/2.7/test/cjkencodings/shift_jisx0213-utf8.txt b/lib-python/2.7/test/cjkencodings/shift_jisx0213-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/shift_jisx0213-utf8.txt @@ -0,0 +1,8 @@ +Python の開発は、1990 年ごろから開始されています。 +開発者の Guido van Rossum は教育用のプログラミング言語「ABC」の開発に参加していましたが、ABC は実用上の目的にはあまり適していませんでした。 +このため、Guido はより実用的なプログラミング言語の開発を開始し、英国 BBS 放送のコメディ番組「モンティ パイソン」のファンである Guido はこの言語を「Python」と名づけました。 +このような背景から生まれた Python の言語設計は、「シンプル」で「習得が容易」という目標に重点が置かれています。 +多くのスクリプト系言語ではユーザの目先の利便性を優先して色々な機能を言語要素として取り入れる場合が多いのですが、Python ではそういった小細工が追加されることはあまりありません。 +言語自体の機能は最小限に押さえ、必要な機能は拡張モジュールとして追加する、というのが Python のポリシーです。 + +ノか゚ ト゚ トキ喝塀 𡚴𪎌 麀齁𩛰 diff --git a/lib-python/2.7/test/cjkencodings/shift_jisx0213.txt b/lib-python/2.7/test/cjkencodings/shift_jisx0213.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/shift_jisx0213.txt @@ -0,0 +1,8 @@ +Python �̊J���́A1990 �N���납��J�n����Ă��܂��B +�J���҂� Guido van Rossum �͋���p�̃v���O���~���O����uABC�v�̊J���ɎQ�����Ă��܂������AABC �͎��p��̖ړI�ɂ͂��܂�K���Ă��܂���ł����B +���̂��߁AGuido �͂����p�I�ȃv���O���~���O����̊J�����J�n���A�p�� BBS �����̃R���f�B�ԑg�u�����e�B �p�C�\���v�̃t�@���ł��� Guido �͂��̌�����uPython�v�Ɩ��Â��܂����B +���̂悤�Ȕw�i���琶�܂ꂽ Python �̌���݌v�́A�u�V���v���v�Łu�K�����e�Ձv�Ƃ����ڕW�ɏd�_���u����Ă��܂��B +�����̃X�N���v�g�n����ł̓��[�U�̖ڐ�̗��֐���D�悵�ĐF�X�ȋ@�\������v�f�Ƃ��Ď������ꍇ�������̂ł����APython �ł͂������������׍H���lj�����邱�Ƃ͂��܂肠��܂���B 
+���ꎩ�̂̋@�\�͍ŏ����ɉ������A�K�v�ȋ@�\�͊g�����W���[���Ƃ��Ēlj�����A�Ƃ����̂� Python �̃|���V�[�ł��B + +�m�� �� �g�L�K�y ���� ������ diff --git a/lib-python/2.7/test/cjkencodings_test.py b/lib-python/2.7/test/cjkencodings_test.py deleted file mode 100644 --- a/lib-python/2.7/test/cjkencodings_test.py +++ /dev/null @@ -1,1019 +0,0 @@ -teststring = { -'big5': ( -"\xa6\x70\xa6\xf3\xa6\x62\x20\x50\x79\x74\x68\x6f\x6e\x20\xa4\xa4" -"\xa8\xcf\xa5\xce\xac\x4a\xa6\xb3\xaa\xba\x20\x43\x20\x6c\x69\x62" -"\x72\x61\x72\x79\x3f\x0a\xa1\x40\xa6\x62\xb8\xea\xb0\x54\xac\xec" -"\xa7\xde\xa7\xd6\xb3\x74\xb5\x6f\xae\x69\xaa\xba\xa4\xb5\xa4\xd1" -"\x2c\x20\xb6\x7d\xb5\x6f\xa4\xce\xb4\xfa\xb8\xd5\xb3\x6e\xc5\xe9" -"\xaa\xba\xb3\x74\xab\xd7\xac\x4f\xa4\xa3\xae\x65\xa9\xbf\xb5\xf8" -"\xaa\xba\x0a\xbd\xd2\xc3\x44\x2e\x20\xac\xb0\xa5\x5b\xa7\xd6\xb6" -"\x7d\xb5\x6f\xa4\xce\xb4\xfa\xb8\xd5\xaa\xba\xb3\x74\xab\xd7\x2c" -"\x20\xa7\xda\xad\xcc\xab\x4b\xb1\x60\xa7\xc6\xb1\xe6\xaf\xe0\xa7" -"\x51\xa5\xce\xa4\x40\xa8\xc7\xa4\x77\xb6\x7d\xb5\x6f\xa6\x6e\xaa" -"\xba\x0a\x6c\x69\x62\x72\x61\x72\x79\x2c\x20\xa8\xc3\xa6\xb3\xa4" -"\x40\xad\xd3\x20\x66\x61\x73\x74\x20\x70\x72\x6f\x74\x6f\x74\x79" -"\x70\x69\x6e\x67\x20\xaa\xba\x20\x70\x72\x6f\x67\x72\x61\x6d\x6d" -"\x69\x6e\x67\x20\x6c\x61\x6e\x67\x75\x61\x67\x65\x20\xa5\x69\x0a" -"\xa8\xd1\xa8\xcf\xa5\xce\x2e\x20\xa5\xd8\xab\x65\xa6\xb3\xb3\x5c" -"\xb3\x5c\xa6\x68\xa6\x68\xaa\xba\x20\x6c\x69\x62\x72\x61\x72\x79" -"\x20\xac\x4f\xa5\x48\x20\x43\x20\xbc\x67\xa6\xa8\x2c\x20\xa6\xd3" -"\x20\x50\x79\x74\x68\x6f\x6e\x20\xac\x4f\xa4\x40\xad\xd3\x0a\x66" -"\x61\x73\x74\x20\x70\x72\x6f\x74\x6f\x74\x79\x70\x69\x6e\x67\x20" -"\xaa\xba\x20\x70\x72\x6f\x67\x72\x61\x6d\x6d\x69\x6e\x67\x20\x6c" -"\x61\x6e\x67\x75\x61\x67\x65\x2e\x20\xac\x47\xa7\xda\xad\xcc\xa7" -"\xc6\xb1\xe6\xaf\xe0\xb1\x4e\xac\x4a\xa6\xb3\xaa\xba\x0a\x43\x20" -"\x6c\x69\x62\x72\x61\x72\x79\x20\xae\xb3\xa8\xec\x20\x50\x79\x74" -"\x68\x6f\x6e\x20\xaa\xba\xc0\xf4\xb9\xd2\xa4\xa4\xb4\xfa\xb8\xd5" -"\xa4\xce\xbe\xe3\xa6\x58\x2e\x20\xa8\xe4\xa4\xa4\xb3\xcc\xa5\x44" -"\xad\x6e\xa4\x5d\xac\x4f\xa7\xda\xad\xcc\xa9\xd2\x0a\xad\x6e\xb0" -"\x51\xbd\xd7\xaa\xba\xb0\xdd\xc3\x44\xb4\x4e\xac\x4f\x3a\x0a\x0a", -"\xe5\xa6\x82\xe4\xbd\x95\xe5\x9c\xa8\x20\x50\x79\x74\x68\x6f\x6e" -"\x20\xe4\xb8\xad\xe4\xbd\xbf\xe7\x94\xa8\xe6\x97\xa2\xe6\x9c\x89" -"\xe7\x9a\x84\x20\x43\x20\x6c\x69\x62\x72\x61\x72\x79\x3f\x0a\xe3" -"\x80\x80\xe5\x9c\xa8\xe8\xb3\x87\xe8\xa8\x8a\xe7\xa7\x91\xe6\x8a" -"\x80\xe5\xbf\xab\xe9\x80\x9f\xe7\x99\xbc\xe5\xb1\x95\xe7\x9a\x84" -"\xe4\xbb\x8a\xe5\xa4\xa9\x2c\x20\xe9\x96\x8b\xe7\x99\xbc\xe5\x8f" -"\x8a\xe6\xb8\xac\xe8\xa9\xa6\xe8\xbb\x9f\xe9\xab\x94\xe7\x9a\x84" -"\xe9\x80\x9f\xe5\xba\xa6\xe6\x98\xaf\xe4\xb8\x8d\xe5\xae\xb9\xe5" -"\xbf\xbd\xe8\xa6\x96\xe7\x9a\x84\x0a\xe8\xaa\xb2\xe9\xa1\x8c\x2e" -"\x20\xe7\x82\xba\xe5\x8a\xa0\xe5\xbf\xab\xe9\x96\x8b\xe7\x99\xbc" -"\xe5\x8f\x8a\xe6\xb8\xac\xe8\xa9\xa6\xe7\x9a\x84\xe9\x80\x9f\xe5" -"\xba\xa6\x2c\x20\xe6\x88\x91\xe5\x80\x91\xe4\xbe\xbf\xe5\xb8\xb8" -"\xe5\xb8\x8c\xe6\x9c\x9b\xe8\x83\xbd\xe5\x88\xa9\xe7\x94\xa8\xe4" -"\xb8\x80\xe4\xba\x9b\xe5\xb7\xb2\xe9\x96\x8b\xe7\x99\xbc\xe5\xa5" -"\xbd\xe7\x9a\x84\x0a\x6c\x69\x62\x72\x61\x72\x79\x2c\x20\xe4\xb8" -"\xa6\xe6\x9c\x89\xe4\xb8\x80\xe5\x80\x8b\x20\x66\x61\x73\x74\x20" -"\x70\x72\x6f\x74\x6f\x74\x79\x70\x69\x6e\x67\x20\xe7\x9a\x84\x20" -"\x70\x72\x6f\x67\x72\x61\x6d\x6d\x69\x6e\x67\x20\x6c\x61\x6e\x67" -"\x75\x61\x67\x65\x20\xe5\x8f\xaf\x0a\xe4\xbe\x9b\xe4\xbd\xbf\xe7" -"\x94\xa8\x2e\x20\xe7\x9b\xae\xe5\x89\x8d\xe6\x9c\x89\xe8\xa8\xb1" 
-"\xe8\xa8\xb1\xe5\xa4\x9a\xe5\xa4\x9a\xe7\x9a\x84\x20\x6c\x69\x62" -"\x72\x61\x72\x79\x20\xe6\x98\xaf\xe4\xbb\xa5\x20\x43\x20\xe5\xaf" -"\xab\xe6\x88\x90\x2c\x20\xe8\x80\x8c\x20\x50\x79\x74\x68\x6f\x6e" -"\x20\xe6\x98\xaf\xe4\xb8\x80\xe5\x80\x8b\x0a\x66\x61\x73\x74\x20" -"\x70\x72\x6f\x74\x6f\x74\x79\x70\x69\x6e\x67\x20\xe7\x9a\x84\x20" -"\x70\x72\x6f\x67\x72\x61\x6d\x6d\x69\x6e\x67\x20\x6c\x61\x6e\x67" -"\x75\x61\x67\x65\x2e\x20\xe6\x95\x85\xe6\x88\x91\xe5\x80\x91\xe5" -"\xb8\x8c\xe6\x9c\x9b\xe8\x83\xbd\xe5\xb0\x87\xe6\x97\xa2\xe6\x9c" -"\x89\xe7\x9a\x84\x0a\x43\x20\x6c\x69\x62\x72\x61\x72\x79\x20\xe6" -"\x8b\xbf\xe5\x88\xb0\x20\x50\x79\x74\x68\x6f\x6e\x20\xe7\x9a\x84" -"\xe7\x92\xb0\xe5\xa2\x83\xe4\xb8\xad\xe6\xb8\xac\xe8\xa9\xa6\xe5" -"\x8f\x8a\xe6\x95\xb4\xe5\x90\x88\x2e\x20\xe5\x85\xb6\xe4\xb8\xad" -"\xe6\x9c\x80\xe4\xb8\xbb\xe8\xa6\x81\xe4\xb9\x9f\xe6\x98\xaf\xe6" -"\x88\x91\xe5\x80\x91\xe6\x89\x80\x0a\xe8\xa6\x81\xe8\xa8\x8e\xe8" -"\xab\x96\xe7\x9a\x84\xe5\x95\x8f\xe9\xa1\x8c\xe5\xb0\xb1\xe6\x98" -"\xaf\x3a\x0a\x0a"), -'big5hkscs': ( -"\x88\x45\x88\x5c\x8a\x73\x8b\xda\x8d\xd8\x0a\x88\x66\x88\x62\x88" -"\xa7\x20\x88\xa7\x88\xa3\x0a", -"\xf0\xa0\x84\x8c\xc4\x9a\xe9\xb5\xae\xe7\xbd\x93\xe6\xb4\x86\x0a" -"\xc3\x8a\xc3\x8a\xcc\x84\xc3\xaa\x20\xc3\xaa\xc3\xaa\xcc\x84\x0a"), -'cp949': ( -"\x8c\x63\xb9\xe6\xb0\xa2\xc7\xcf\x20\xbc\x84\xbd\xc3\xc4\xdd\xb6" -"\xf3\x0a\x0a\xa8\xc0\xa8\xc0\xb3\xb3\x21\x21\x20\xec\xd7\xce\xfa" -"\xea\xc5\xc6\xd0\x92\xe6\x90\x70\xb1\xc5\x20\xa8\xde\xa8\xd3\xc4" -"\x52\xa2\xaf\xa2\xaf\xa2\xaf\x20\xb1\xe0\x8a\x96\x20\xa8\xd1\xb5" -"\xb3\x20\xa8\xc0\x2e\x20\x2e\x0a\xe4\xac\xbf\xb5\xa8\xd1\xb4\xc9" -"\xc8\xc2\x20\x2e\x20\x2e\x20\x2e\x20\x2e\x20\xbc\xad\xbf\xef\xb7" -"\xef\x20\xb5\xaf\xc7\xd0\xeb\xe0\x20\xca\xab\xc4\x52\x20\x21\x20" -"\x21\x20\x21\xa4\xd0\x2e\xa4\xd0\x0a\xc8\xe5\xc8\xe5\xc8\xe5\x20" -"\xa4\xa1\xa4\xa1\xa4\xa1\xa1\xd9\xa4\xd0\x5f\xa4\xd0\x20\xbe\xee" -"\x90\x8a\x20\xc5\xcb\xc4\xe2\x83\x4f\x20\xb5\xae\xc0\xc0\x20\xaf" -"\x68\xce\xfa\xb5\xe9\xeb\xe0\x20\xa8\xc0\xb5\xe5\x83\x4f\x0a\xbc" -"\xb3\x90\x6a\x20\xca\xab\xc4\x52\x20\x2e\x20\x2e\x20\x2e\x20\x2e" -"\x20\xb1\xbc\xbe\xd6\x9a\x66\x20\xa8\xd1\xb1\xc5\x20\xa8\xde\x90" -"\x74\xa8\xc2\x83\x4f\x20\xec\xd7\xec\xd2\xf4\xb9\xe5\xfc\xf1\xe9" -"\xb1\xee\xa3\x8e\x0a\xbf\xcd\xbe\xac\xc4\x52\x20\x21\x20\x21\x20" -"\xe4\xac\xbf\xb5\xa8\xd1\x20\xca\xab\xb4\xc9\xb1\xc5\x20\xa1\xd9" -"\xdf\xbe\xb0\xfc\x20\xbe\xf8\xb4\xc9\xb1\xc5\xb4\xc9\x20\xe4\xac" -"\xb4\xc9\xb5\xd8\xc4\x52\x20\xb1\xdb\xbe\xd6\x8a\xdb\x0a\xa8\xde" -"\xb7\xc1\xb5\xe0\xce\xfa\x20\x9a\xc3\xc7\xb4\xbd\xa4\xc4\x52\x20" -"\xbe\xee\x90\x8a\x20\xec\xd7\xec\xd2\xf4\xb9\xe5\xfc\xf1\xe9\x9a" -"\xc4\xa8\xef\xb5\xe9\x9d\xda\x21\x21\x20\xa8\xc0\xa8\xc0\xb3\xb3" -"\xa2\xbd\x20\xa1\xd2\xa1\xd2\x2a\x0a\x0a", -"\xeb\x98\xa0\xeb\xb0\xa9\xea\xb0\x81\xed\x95\x98\x20\xed\x8e\xb2" -"\xec\x8b\x9c\xec\xbd\x9c\xeb\x9d\xbc\x0a\x0a\xe3\x89\xaf\xe3\x89" -"\xaf\xeb\x82\xa9\x21\x21\x20\xe5\x9b\xa0\xe4\xb9\x9d\xe6\x9c\x88" -"\xed\x8c\xa8\xeb\xaf\xa4\xeb\xa6\x94\xea\xb6\x88\x20\xe2\x93\xa1" -"\xe2\x93\x96\xed\x9b\x80\xc2\xbf\xc2\xbf\xc2\xbf\x20\xea\xb8\x8d" -"\xeb\x92\x99\x20\xe2\x93\x94\xeb\x8e\xa8\x20\xe3\x89\xaf\x2e\x20" -"\x2e\x0a\xe4\xba\x9e\xec\x98\x81\xe2\x93\x94\xeb\x8a\xa5\xed\x9a" -"\xb9\x20\x2e\x20\x2e\x20\x2e\x20\x2e\x20\xec\x84\x9c\xec\x9a\xb8" -"\xeb\xa4\x84\x20\xeb\x8e\x90\xed\x95\x99\xe4\xb9\x99\x20\xe5\xae" -"\xb6\xed\x9b\x80\x20\x21\x20\x21\x20\x21\xe3\x85\xa0\x2e\xe3\x85" -"\xa0\x0a\xed\x9d\x90\xed\x9d\x90\xed\x9d\x90\x20\xe3\x84\xb1\xe3" 
-"\x84\xb1\xe3\x84\xb1\xe2\x98\x86\xe3\x85\xa0\x5f\xe3\x85\xa0\x20" -"\xec\x96\xb4\xeb\xa6\xa8\x20\xed\x83\xb8\xec\xbd\xb0\xea\xb8\x90" -"\x20\xeb\x8e\x8c\xec\x9d\x91\x20\xec\xb9\x91\xe4\xb9\x9d\xeb\x93" -"\xa4\xe4\xb9\x99\x20\xe3\x89\xaf\xeb\x93\x9c\xea\xb8\x90\x0a\xec" -"\x84\xa4\xeb\xa6\x8c\x20\xe5\xae\xb6\xed\x9b\x80\x20\x2e\x20\x2e" -"\x20\x2e\x20\x2e\x20\xea\xb5\xb4\xec\x95\xa0\xec\x89\x8c\x20\xe2" -"\x93\x94\xea\xb6\x88\x20\xe2\x93\xa1\xeb\xa6\x98\xe3\x89\xb1\xea" -"\xb8\x90\x20\xe5\x9b\xa0\xe4\xbb\x81\xe5\xb7\x9d\xef\xa6\x81\xe4" -"\xb8\xad\xea\xb9\x8c\xec\xa6\xbc\x0a\xec\x99\x80\xec\x92\x80\xed" -"\x9b\x80\x20\x21\x20\x21\x20\xe4\xba\x9e\xec\x98\x81\xe2\x93\x94" -"\x20\xe5\xae\xb6\xeb\x8a\xa5\xea\xb6\x88\x20\xe2\x98\x86\xe4\xb8" -"\x8a\xea\xb4\x80\x20\xec\x97\x86\xeb\x8a\xa5\xea\xb6\x88\xeb\x8a" -"\xa5\x20\xe4\xba\x9e\xeb\x8a\xa5\xeb\x92\x88\xed\x9b\x80\x20\xea" -"\xb8\x80\xec\x95\xa0\xeb\x93\xb4\x0a\xe2\x93\xa1\xeb\xa0\xa4\xeb" -"\x93\x80\xe4\xb9\x9d\x20\xec\x8b\x80\xed\x92\x94\xec\x88\xb4\xed" -"\x9b\x80\x20\xec\x96\xb4\xeb\xa6\xa8\x20\xe5\x9b\xa0\xe4\xbb\x81" -"\xe5\xb7\x9d\xef\xa6\x81\xe4\xb8\xad\xec\x8b\x81\xe2\x91\xa8\xeb" -"\x93\xa4\xec\x95\x9c\x21\x21\x20\xe3\x89\xaf\xe3\x89\xaf\xeb\x82" -"\xa9\xe2\x99\xa1\x20\xe2\x8c\x92\xe2\x8c\x92\x2a\x0a\x0a"), -'euc_jisx0213': ( -"\x50\x79\x74\x68\x6f\x6e\x20\xa4\xce\xb3\xab\xc8\xaf\xa4\xcf\xa1" -"\xa2\x31\x39\x39\x30\x20\xc7\xaf\xa4\xb4\xa4\xed\xa4\xab\xa4\xe9" -"\xb3\xab\xbb\xcf\xa4\xb5\xa4\xec\xa4\xc6\xa4\xa4\xa4\xde\xa4\xb9" -"\xa1\xa3\x0a\xb3\xab\xc8\xaf\xbc\xd4\xa4\xce\x20\x47\x75\x69\x64" -"\x6f\x20\x76\x61\x6e\x20\x52\x6f\x73\x73\x75\x6d\x20\xa4\xcf\xb6" -"\xb5\xb0\xe9\xcd\xd1\xa4\xce\xa5\xd7\xa5\xed\xa5\xb0\xa5\xe9\xa5" -"\xdf\xa5\xf3\xa5\xb0\xb8\xc0\xb8\xec\xa1\xd6\x41\x42\x43\xa1\xd7" -"\xa4\xce\xb3\xab\xc8\xaf\xa4\xcb\xbb\xb2\xb2\xc3\xa4\xb7\xa4\xc6" -"\xa4\xa4\xa4\xde\xa4\xb7\xa4\xbf\xa4\xac\xa1\xa2\x41\x42\x43\x20" -"\xa4\xcf\xbc\xc2\xcd\xd1\xbe\xe5\xa4\xce\xcc\xdc\xc5\xaa\xa4\xcb" -"\xa4\xcf\xa4\xa2\xa4\xde\xa4\xea\xc5\xac\xa4\xb7\xa4\xc6\xa4\xa4" -"\xa4\xde\xa4\xbb\xa4\xf3\xa4\xc7\xa4\xb7\xa4\xbf\xa1\xa3\x0a\xa4" -"\xb3\xa4\xce\xa4\xbf\xa4\xe1\xa1\xa2\x47\x75\x69\x64\x6f\x20\xa4" -"\xcf\xa4\xe8\xa4\xea\xbc\xc2\xcd\xd1\xc5\xaa\xa4\xca\xa5\xd7\xa5" -"\xed\xa5\xb0\xa5\xe9\xa5\xdf\xa5\xf3\xa5\xb0\xb8\xc0\xb8\xec\xa4" -"\xce\xb3\xab\xc8\xaf\xa4\xf2\xb3\xab\xbb\xcf\xa4\xb7\xa1\xa2\xb1" -"\xd1\xb9\xf1\x20\x42\x42\x53\x20\xca\xfc\xc1\xf7\xa4\xce\xa5\xb3" -"\xa5\xe1\xa5\xc7\xa5\xa3\xc8\xd6\xc1\xc8\xa1\xd6\xa5\xe2\xa5\xf3" -"\xa5\xc6\xa5\xa3\x20\xa5\xd1\xa5\xa4\xa5\xbd\xa5\xf3\xa1\xd7\xa4" -"\xce\xa5\xd5\xa5\xa1\xa5\xf3\xa4\xc7\xa4\xa2\xa4\xeb\x20\x47\x75" -"\x69\x64\x6f\x20\xa4\xcf\xa4\xb3\xa4\xce\xb8\xc0\xb8\xec\xa4\xf2" -"\xa1\xd6\x50\x79\x74\x68\x6f\x6e\xa1\xd7\xa4\xc8\xcc\xbe\xa4\xc5" -"\xa4\xb1\xa4\xde\xa4\xb7\xa4\xbf\xa1\xa3\x0a\xa4\xb3\xa4\xce\xa4" -"\xe8\xa4\xa6\xa4\xca\xc7\xd8\xb7\xca\xa4\xab\xa4\xe9\xc0\xb8\xa4" -"\xde\xa4\xec\xa4\xbf\x20\x50\x79\x74\x68\x6f\x6e\x20\xa4\xce\xb8" -"\xc0\xb8\xec\xc0\xdf\xb7\xd7\xa4\xcf\xa1\xa2\xa1\xd6\xa5\xb7\xa5" -"\xf3\xa5\xd7\xa5\xeb\xa1\xd7\xa4\xc7\xa1\xd6\xbd\xac\xc6\xc0\xa4" -"\xac\xcd\xc6\xb0\xd7\xa1\xd7\xa4\xc8\xa4\xa4\xa4\xa6\xcc\xdc\xc9" -"\xb8\xa4\xcb\xbd\xc5\xc5\xc0\xa4\xac\xc3\xd6\xa4\xab\xa4\xec\xa4" -"\xc6\xa4\xa4\xa4\xde\xa4\xb9\xa1\xa3\x0a\xc2\xbf\xa4\xaf\xa4\xce" -"\xa5\xb9\xa5\xaf\xa5\xea\xa5\xd7\xa5\xc8\xb7\xcf\xb8\xc0\xb8\xec" -"\xa4\xc7\xa4\xcf\xa5\xe6\xa1\xbc\xa5\xb6\xa4\xce\xcc\xdc\xc0\xe8" -"\xa4\xce\xcd\xf8\xca\xd8\xc0\xad\xa4\xf2\xcd\xa5\xc0\xe8\xa4\xb7" 
-"\xa4\xc6\xbf\xa7\xa1\xb9\xa4\xca\xb5\xa1\xc7\xbd\xa4\xf2\xb8\xc0" -"\xb8\xec\xcd\xd7\xc1\xc7\xa4\xc8\xa4\xb7\xa4\xc6\xbc\xe8\xa4\xea" -"\xc6\xfe\xa4\xec\xa4\xeb\xbe\xec\xb9\xe7\xa4\xac\xc2\xbf\xa4\xa4" -"\xa4\xce\xa4\xc7\xa4\xb9\xa4\xac\xa1\xa2\x50\x79\x74\x68\x6f\x6e" -"\x20\xa4\xc7\xa4\xcf\xa4\xbd\xa4\xa6\xa4\xa4\xa4\xc3\xa4\xbf\xbe" -"\xae\xba\xd9\xb9\xa9\xa4\xac\xc4\xc9\xb2\xc3\xa4\xb5\xa4\xec\xa4" -"\xeb\xa4\xb3\xa4\xc8\xa4\xcf\xa4\xa2\xa4\xde\xa4\xea\xa4\xa2\xa4" -"\xea\xa4\xde\xa4\xbb\xa4\xf3\xa1\xa3\x0a\xb8\xc0\xb8\xec\xbc\xab" -"\xc2\xce\xa4\xce\xb5\xa1\xc7\xbd\xa4\xcf\xba\xc7\xbe\xae\xb8\xc2" -"\xa4\xcb\xb2\xa1\xa4\xb5\xa4\xa8\xa1\xa2\xc9\xac\xcd\xd7\xa4\xca" -"\xb5\xa1\xc7\xbd\xa4\xcf\xb3\xc8\xc4\xa5\xa5\xe2\xa5\xb8\xa5\xe5" -"\xa1\xbc\xa5\xeb\xa4\xc8\xa4\xb7\xa4\xc6\xc4\xc9\xb2\xc3\xa4\xb9" -"\xa4\xeb\xa1\xa2\xa4\xc8\xa4\xa4\xa4\xa6\xa4\xce\xa4\xac\x20\x50" -"\x79\x74\x68\x6f\x6e\x20\xa4\xce\xa5\xdd\xa5\xea\xa5\xb7\xa1\xbc" -"\xa4\xc7\xa4\xb9\xa1\xa3\x0a\x0a\xa5\xce\xa4\xf7\x20\xa5\xfe\x20" -"\xa5\xc8\xa5\xad\xaf\xac\xaf\xda\x20\xcf\xe3\x8f\xfe\xd8\x20\x8f" -"\xfe\xd4\x8f\xfe\xe8\x8f\xfc\xd6\x0a", -"\x50\x79\x74\x68\x6f\x6e\x20\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba" -"\xe3\x81\xaf\xe3\x80\x81\x31\x39\x39\x30\x20\xe5\xb9\xb4\xe3\x81" -"\x94\xe3\x82\x8d\xe3\x81\x8b\xe3\x82\x89\xe9\x96\x8b\xe5\xa7\x8b" -"\xe3\x81\x95\xe3\x82\x8c\xe3\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3" -"\x81\x99\xe3\x80\x82\x0a\xe9\x96\x8b\xe7\x99\xba\xe8\x80\x85\xe3" -"\x81\xae\x20\x47\x75\x69\x64\x6f\x20\x76\x61\x6e\x20\x52\x6f\x73" -"\x73\x75\x6d\x20\xe3\x81\xaf\xe6\x95\x99\xe8\x82\xb2\xe7\x94\xa8" -"\xe3\x81\xae\xe3\x83\x97\xe3\x83\xad\xe3\x82\xb0\xe3\x83\xa9\xe3" -"\x83\x9f\xe3\x83\xb3\xe3\x82\xb0\xe8\xa8\x80\xe8\xaa\x9e\xe3\x80" -"\x8c\x41\x42\x43\xe3\x80\x8d\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba" -"\xe3\x81\xab\xe5\x8f\x82\xe5\x8a\xa0\xe3\x81\x97\xe3\x81\xa6\xe3" -"\x81\x84\xe3\x81\xbe\xe3\x81\x97\xe3\x81\x9f\xe3\x81\x8c\xe3\x80" -"\x81\x41\x42\x43\x20\xe3\x81\xaf\xe5\xae\x9f\xe7\x94\xa8\xe4\xb8" -"\x8a\xe3\x81\xae\xe7\x9b\xae\xe7\x9a\x84\xe3\x81\xab\xe3\x81\xaf" -"\xe3\x81\x82\xe3\x81\xbe\xe3\x82\x8a\xe9\x81\xa9\xe3\x81\x97\xe3" -"\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3\x81\x9b\xe3\x82\x93\xe3\x81" -"\xa7\xe3\x81\x97\xe3\x81\x9f\xe3\x80\x82\x0a\xe3\x81\x93\xe3\x81" -"\xae\xe3\x81\x9f\xe3\x82\x81\xe3\x80\x81\x47\x75\x69\x64\x6f\x20" -"\xe3\x81\xaf\xe3\x82\x88\xe3\x82\x8a\xe5\xae\x9f\xe7\x94\xa8\xe7" -"\x9a\x84\xe3\x81\xaa\xe3\x83\x97\xe3\x83\xad\xe3\x82\xb0\xe3\x83" -"\xa9\xe3\x83\x9f\xe3\x83\xb3\xe3\x82\xb0\xe8\xa8\x80\xe8\xaa\x9e" -"\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba\xe3\x82\x92\xe9\x96\x8b\xe5" -"\xa7\x8b\xe3\x81\x97\xe3\x80\x81\xe8\x8b\xb1\xe5\x9b\xbd\x20\x42" -"\x42\x53\x20\xe6\x94\xbe\xe9\x80\x81\xe3\x81\xae\xe3\x82\xb3\xe3" -"\x83\xa1\xe3\x83\x87\xe3\x82\xa3\xe7\x95\xaa\xe7\xb5\x84\xe3\x80" -"\x8c\xe3\x83\xa2\xe3\x83\xb3\xe3\x83\x86\xe3\x82\xa3\x20\xe3\x83" -"\x91\xe3\x82\xa4\xe3\x82\xbd\xe3\x83\xb3\xe3\x80\x8d\xe3\x81\xae" -"\xe3\x83\x95\xe3\x82\xa1\xe3\x83\xb3\xe3\x81\xa7\xe3\x81\x82\xe3" -"\x82\x8b\x20\x47\x75\x69\x64\x6f\x20\xe3\x81\xaf\xe3\x81\x93\xe3" -"\x81\xae\xe8\xa8\x80\xe8\xaa\x9e\xe3\x82\x92\xe3\x80\x8c\x50\x79" -"\x74\x68\x6f\x6e\xe3\x80\x8d\xe3\x81\xa8\xe5\x90\x8d\xe3\x81\xa5" -"\xe3\x81\x91\xe3\x81\xbe\xe3\x81\x97\xe3\x81\x9f\xe3\x80\x82\x0a" -"\xe3\x81\x93\xe3\x81\xae\xe3\x82\x88\xe3\x81\x86\xe3\x81\xaa\xe8" -"\x83\x8c\xe6\x99\xaf\xe3\x81\x8b\xe3\x82\x89\xe7\x94\x9f\xe3\x81" -"\xbe\xe3\x82\x8c\xe3\x81\x9f\x20\x50\x79\x74\x68\x6f\x6e\x20\xe3" 
-"\x81\xae\xe8\xa8\x80\xe8\xaa\x9e\xe8\xa8\xad\xe8\xa8\x88\xe3\x81" -"\xaf\xe3\x80\x81\xe3\x80\x8c\xe3\x82\xb7\xe3\x83\xb3\xe3\x83\x97" -"\xe3\x83\xab\xe3\x80\x8d\xe3\x81\xa7\xe3\x80\x8c\xe7\xbf\x92\xe5" -"\xbe\x97\xe3\x81\x8c\xe5\xae\xb9\xe6\x98\x93\xe3\x80\x8d\xe3\x81" -"\xa8\xe3\x81\x84\xe3\x81\x86\xe7\x9b\xae\xe6\xa8\x99\xe3\x81\xab" -"\xe9\x87\x8d\xe7\x82\xb9\xe3\x81\x8c\xe7\xbd\xae\xe3\x81\x8b\xe3" -"\x82\x8c\xe3\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3\x81\x99\xe3\x80" -"\x82\x0a\xe5\xa4\x9a\xe3\x81\x8f\xe3\x81\xae\xe3\x82\xb9\xe3\x82" -"\xaf\xe3\x83\xaa\xe3\x83\x97\xe3\x83\x88\xe7\xb3\xbb\xe8\xa8\x80" -"\xe8\xaa\x9e\xe3\x81\xa7\xe3\x81\xaf\xe3\x83\xa6\xe3\x83\xbc\xe3" -"\x82\xb6\xe3\x81\xae\xe7\x9b\xae\xe5\x85\x88\xe3\x81\xae\xe5\x88" -"\xa9\xe4\xbe\xbf\xe6\x80\xa7\xe3\x82\x92\xe5\x84\xaa\xe5\x85\x88" -"\xe3\x81\x97\xe3\x81\xa6\xe8\x89\xb2\xe3\x80\x85\xe3\x81\xaa\xe6" -"\xa9\x9f\xe8\x83\xbd\xe3\x82\x92\xe8\xa8\x80\xe8\xaa\x9e\xe8\xa6" -"\x81\xe7\xb4\xa0\xe3\x81\xa8\xe3\x81\x97\xe3\x81\xa6\xe5\x8f\x96" -"\xe3\x82\x8a\xe5\x85\xa5\xe3\x82\x8c\xe3\x82\x8b\xe5\xa0\xb4\xe5" -"\x90\x88\xe3\x81\x8c\xe5\xa4\x9a\xe3\x81\x84\xe3\x81\xae\xe3\x81" -"\xa7\xe3\x81\x99\xe3\x81\x8c\xe3\x80\x81\x50\x79\x74\x68\x6f\x6e" -"\x20\xe3\x81\xa7\xe3\x81\xaf\xe3\x81\x9d\xe3\x81\x86\xe3\x81\x84" -"\xe3\x81\xa3\xe3\x81\x9f\xe5\xb0\x8f\xe7\xb4\xb0\xe5\xb7\xa5\xe3" -"\x81\x8c\xe8\xbf\xbd\xe5\x8a\xa0\xe3\x81\x95\xe3\x82\x8c\xe3\x82" -"\x8b\xe3\x81\x93\xe3\x81\xa8\xe3\x81\xaf\xe3\x81\x82\xe3\x81\xbe" -"\xe3\x82\x8a\xe3\x81\x82\xe3\x82\x8a\xe3\x81\xbe\xe3\x81\x9b\xe3" -"\x82\x93\xe3\x80\x82\x0a\xe8\xa8\x80\xe8\xaa\x9e\xe8\x87\xaa\xe4" -"\xbd\x93\xe3\x81\xae\xe6\xa9\x9f\xe8\x83\xbd\xe3\x81\xaf\xe6\x9c" -"\x80\xe5\xb0\x8f\xe9\x99\x90\xe3\x81\xab\xe6\x8a\xbc\xe3\x81\x95" -"\xe3\x81\x88\xe3\x80\x81\xe5\xbf\x85\xe8\xa6\x81\xe3\x81\xaa\xe6" -"\xa9\x9f\xe8\x83\xbd\xe3\x81\xaf\xe6\x8b\xa1\xe5\xbc\xb5\xe3\x83" -"\xa2\xe3\x82\xb8\xe3\x83\xa5\xe3\x83\xbc\xe3\x83\xab\xe3\x81\xa8" -"\xe3\x81\x97\xe3\x81\xa6\xe8\xbf\xbd\xe5\x8a\xa0\xe3\x81\x99\xe3" -"\x82\x8b\xe3\x80\x81\xe3\x81\xa8\xe3\x81\x84\xe3\x81\x86\xe3\x81" -"\xae\xe3\x81\x8c\x20\x50\x79\x74\x68\x6f\x6e\x20\xe3\x81\xae\xe3" -"\x83\x9d\xe3\x83\xaa\xe3\x82\xb7\xe3\x83\xbc\xe3\x81\xa7\xe3\x81" -"\x99\xe3\x80\x82\x0a\x0a\xe3\x83\x8e\xe3\x81\x8b\xe3\x82\x9a\x20" -"\xe3\x83\x88\xe3\x82\x9a\x20\xe3\x83\x88\xe3\x82\xad\xef\xa8\xb6" -"\xef\xa8\xb9\x20\xf0\xa1\x9a\xb4\xf0\xaa\x8e\x8c\x20\xe9\xba\x80" -"\xe9\xbd\x81\xf0\xa9\x9b\xb0\x0a"), -'euc_jp': ( -"\x50\x79\x74\x68\x6f\x6e\x20\xa4\xce\xb3\xab\xc8\xaf\xa4\xcf\xa1" -"\xa2\x31\x39\x39\x30\x20\xc7\xaf\xa4\xb4\xa4\xed\xa4\xab\xa4\xe9" -"\xb3\xab\xbb\xcf\xa4\xb5\xa4\xec\xa4\xc6\xa4\xa4\xa4\xde\xa4\xb9" -"\xa1\xa3\x0a\xb3\xab\xc8\xaf\xbc\xd4\xa4\xce\x20\x47\x75\x69\x64" -"\x6f\x20\x76\x61\x6e\x20\x52\x6f\x73\x73\x75\x6d\x20\xa4\xcf\xb6" -"\xb5\xb0\xe9\xcd\xd1\xa4\xce\xa5\xd7\xa5\xed\xa5\xb0\xa5\xe9\xa5" -"\xdf\xa5\xf3\xa5\xb0\xb8\xc0\xb8\xec\xa1\xd6\x41\x42\x43\xa1\xd7" -"\xa4\xce\xb3\xab\xc8\xaf\xa4\xcb\xbb\xb2\xb2\xc3\xa4\xb7\xa4\xc6" -"\xa4\xa4\xa4\xde\xa4\xb7\xa4\xbf\xa4\xac\xa1\xa2\x41\x42\x43\x20" -"\xa4\xcf\xbc\xc2\xcd\xd1\xbe\xe5\xa4\xce\xcc\xdc\xc5\xaa\xa4\xcb" -"\xa4\xcf\xa4\xa2\xa4\xde\xa4\xea\xc5\xac\xa4\xb7\xa4\xc6\xa4\xa4" -"\xa4\xde\xa4\xbb\xa4\xf3\xa4\xc7\xa4\xb7\xa4\xbf\xa1\xa3\x0a\xa4" -"\xb3\xa4\xce\xa4\xbf\xa4\xe1\xa1\xa2\x47\x75\x69\x64\x6f\x20\xa4" -"\xcf\xa4\xe8\xa4\xea\xbc\xc2\xcd\xd1\xc5\xaa\xa4\xca\xa5\xd7\xa5" -"\xed\xa5\xb0\xa5\xe9\xa5\xdf\xa5\xf3\xa5\xb0\xb8\xc0\xb8\xec\xa4" 
-"\xce\xb3\xab\xc8\xaf\xa4\xf2\xb3\xab\xbb\xcf\xa4\xb7\xa1\xa2\xb1" -"\xd1\xb9\xf1\x20\x42\x42\x53\x20\xca\xfc\xc1\xf7\xa4\xce\xa5\xb3" -"\xa5\xe1\xa5\xc7\xa5\xa3\xc8\xd6\xc1\xc8\xa1\xd6\xa5\xe2\xa5\xf3" -"\xa5\xc6\xa5\xa3\x20\xa5\xd1\xa5\xa4\xa5\xbd\xa5\xf3\xa1\xd7\xa4" -"\xce\xa5\xd5\xa5\xa1\xa5\xf3\xa4\xc7\xa4\xa2\xa4\xeb\x20\x47\x75" -"\x69\x64\x6f\x20\xa4\xcf\xa4\xb3\xa4\xce\xb8\xc0\xb8\xec\xa4\xf2" -"\xa1\xd6\x50\x79\x74\x68\x6f\x6e\xa1\xd7\xa4\xc8\xcc\xbe\xa4\xc5" -"\xa4\xb1\xa4\xde\xa4\xb7\xa4\xbf\xa1\xa3\x0a\xa4\xb3\xa4\xce\xa4" -"\xe8\xa4\xa6\xa4\xca\xc7\xd8\xb7\xca\xa4\xab\xa4\xe9\xc0\xb8\xa4" -"\xde\xa4\xec\xa4\xbf\x20\x50\x79\x74\x68\x6f\x6e\x20\xa4\xce\xb8" -"\xc0\xb8\xec\xc0\xdf\xb7\xd7\xa4\xcf\xa1\xa2\xa1\xd6\xa5\xb7\xa5" -"\xf3\xa5\xd7\xa5\xeb\xa1\xd7\xa4\xc7\xa1\xd6\xbd\xac\xc6\xc0\xa4" -"\xac\xcd\xc6\xb0\xd7\xa1\xd7\xa4\xc8\xa4\xa4\xa4\xa6\xcc\xdc\xc9" -"\xb8\xa4\xcb\xbd\xc5\xc5\xc0\xa4\xac\xc3\xd6\xa4\xab\xa4\xec\xa4" -"\xc6\xa4\xa4\xa4\xde\xa4\xb9\xa1\xa3\x0a\xc2\xbf\xa4\xaf\xa4\xce" -"\xa5\xb9\xa5\xaf\xa5\xea\xa5\xd7\xa5\xc8\xb7\xcf\xb8\xc0\xb8\xec" -"\xa4\xc7\xa4\xcf\xa5\xe6\xa1\xbc\xa5\xb6\xa4\xce\xcc\xdc\xc0\xe8" -"\xa4\xce\xcd\xf8\xca\xd8\xc0\xad\xa4\xf2\xcd\xa5\xc0\xe8\xa4\xb7" -"\xa4\xc6\xbf\xa7\xa1\xb9\xa4\xca\xb5\xa1\xc7\xbd\xa4\xf2\xb8\xc0" -"\xb8\xec\xcd\xd7\xc1\xc7\xa4\xc8\xa4\xb7\xa4\xc6\xbc\xe8\xa4\xea" -"\xc6\xfe\xa4\xec\xa4\xeb\xbe\xec\xb9\xe7\xa4\xac\xc2\xbf\xa4\xa4" -"\xa4\xce\xa4\xc7\xa4\xb9\xa4\xac\xa1\xa2\x50\x79\x74\x68\x6f\x6e" -"\x20\xa4\xc7\xa4\xcf\xa4\xbd\xa4\xa6\xa4\xa4\xa4\xc3\xa4\xbf\xbe" -"\xae\xba\xd9\xb9\xa9\xa4\xac\xc4\xc9\xb2\xc3\xa4\xb5\xa4\xec\xa4" -"\xeb\xa4\xb3\xa4\xc8\xa4\xcf\xa4\xa2\xa4\xde\xa4\xea\xa4\xa2\xa4" -"\xea\xa4\xde\xa4\xbb\xa4\xf3\xa1\xa3\x0a\xb8\xc0\xb8\xec\xbc\xab" -"\xc2\xce\xa4\xce\xb5\xa1\xc7\xbd\xa4\xcf\xba\xc7\xbe\xae\xb8\xc2" -"\xa4\xcb\xb2\xa1\xa4\xb5\xa4\xa8\xa1\xa2\xc9\xac\xcd\xd7\xa4\xca" -"\xb5\xa1\xc7\xbd\xa4\xcf\xb3\xc8\xc4\xa5\xa5\xe2\xa5\xb8\xa5\xe5" -"\xa1\xbc\xa5\xeb\xa4\xc8\xa4\xb7\xa4\xc6\xc4\xc9\xb2\xc3\xa4\xb9" -"\xa4\xeb\xa1\xa2\xa4\xc8\xa4\xa4\xa4\xa6\xa4\xce\xa4\xac\x20\x50" -"\x79\x74\x68\x6f\x6e\x20\xa4\xce\xa5\xdd\xa5\xea\xa5\xb7\xa1\xbc" -"\xa4\xc7\xa4\xb9\xa1\xa3\x0a\x0a", -"\x50\x79\x74\x68\x6f\x6e\x20\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba" -"\xe3\x81\xaf\xe3\x80\x81\x31\x39\x39\x30\x20\xe5\xb9\xb4\xe3\x81" -"\x94\xe3\x82\x8d\xe3\x81\x8b\xe3\x82\x89\xe9\x96\x8b\xe5\xa7\x8b" -"\xe3\x81\x95\xe3\x82\x8c\xe3\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3" -"\x81\x99\xe3\x80\x82\x0a\xe9\x96\x8b\xe7\x99\xba\xe8\x80\x85\xe3" -"\x81\xae\x20\x47\x75\x69\x64\x6f\x20\x76\x61\x6e\x20\x52\x6f\x73" -"\x73\x75\x6d\x20\xe3\x81\xaf\xe6\x95\x99\xe8\x82\xb2\xe7\x94\xa8" -"\xe3\x81\xae\xe3\x83\x97\xe3\x83\xad\xe3\x82\xb0\xe3\x83\xa9\xe3" -"\x83\x9f\xe3\x83\xb3\xe3\x82\xb0\xe8\xa8\x80\xe8\xaa\x9e\xe3\x80" -"\x8c\x41\x42\x43\xe3\x80\x8d\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba" -"\xe3\x81\xab\xe5\x8f\x82\xe5\x8a\xa0\xe3\x81\x97\xe3\x81\xa6\xe3" -"\x81\x84\xe3\x81\xbe\xe3\x81\x97\xe3\x81\x9f\xe3\x81\x8c\xe3\x80" -"\x81\x41\x42\x43\x20\xe3\x81\xaf\xe5\xae\x9f\xe7\x94\xa8\xe4\xb8" -"\x8a\xe3\x81\xae\xe7\x9b\xae\xe7\x9a\x84\xe3\x81\xab\xe3\x81\xaf" -"\xe3\x81\x82\xe3\x81\xbe\xe3\x82\x8a\xe9\x81\xa9\xe3\x81\x97\xe3" -"\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3\x81\x9b\xe3\x82\x93\xe3\x81" -"\xa7\xe3\x81\x97\xe3\x81\x9f\xe3\x80\x82\x0a\xe3\x81\x93\xe3\x81" -"\xae\xe3\x81\x9f\xe3\x82\x81\xe3\x80\x81\x47\x75\x69\x64\x6f\x20" -"\xe3\x81\xaf\xe3\x82\x88\xe3\x82\x8a\xe5\xae\x9f\xe7\x94\xa8\xe7" 
-"\x9a\x84\xe3\x81\xaa\xe3\x83\x97\xe3\x83\xad\xe3\x82\xb0\xe3\x83" -"\xa9\xe3\x83\x9f\xe3\x83\xb3\xe3\x82\xb0\xe8\xa8\x80\xe8\xaa\x9e" -"\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba\xe3\x82\x92\xe9\x96\x8b\xe5" -"\xa7\x8b\xe3\x81\x97\xe3\x80\x81\xe8\x8b\xb1\xe5\x9b\xbd\x20\x42" -"\x42\x53\x20\xe6\x94\xbe\xe9\x80\x81\xe3\x81\xae\xe3\x82\xb3\xe3" -"\x83\xa1\xe3\x83\x87\xe3\x82\xa3\xe7\x95\xaa\xe7\xb5\x84\xe3\x80" -"\x8c\xe3\x83\xa2\xe3\x83\xb3\xe3\x83\x86\xe3\x82\xa3\x20\xe3\x83" -"\x91\xe3\x82\xa4\xe3\x82\xbd\xe3\x83\xb3\xe3\x80\x8d\xe3\x81\xae" -"\xe3\x83\x95\xe3\x82\xa1\xe3\x83\xb3\xe3\x81\xa7\xe3\x81\x82\xe3" -"\x82\x8b\x20\x47\x75\x69\x64\x6f\x20\xe3\x81\xaf\xe3\x81\x93\xe3" -"\x81\xae\xe8\xa8\x80\xe8\xaa\x9e\xe3\x82\x92\xe3\x80\x8c\x50\x79" -"\x74\x68\x6f\x6e\xe3\x80\x8d\xe3\x81\xa8\xe5\x90\x8d\xe3\x81\xa5" -"\xe3\x81\x91\xe3\x81\xbe\xe3\x81\x97\xe3\x81\x9f\xe3\x80\x82\x0a" -"\xe3\x81\x93\xe3\x81\xae\xe3\x82\x88\xe3\x81\x86\xe3\x81\xaa\xe8" -"\x83\x8c\xe6\x99\xaf\xe3\x81\x8b\xe3\x82\x89\xe7\x94\x9f\xe3\x81" -"\xbe\xe3\x82\x8c\xe3\x81\x9f\x20\x50\x79\x74\x68\x6f\x6e\x20\xe3" -"\x81\xae\xe8\xa8\x80\xe8\xaa\x9e\xe8\xa8\xad\xe8\xa8\x88\xe3\x81" -"\xaf\xe3\x80\x81\xe3\x80\x8c\xe3\x82\xb7\xe3\x83\xb3\xe3\x83\x97" -"\xe3\x83\xab\xe3\x80\x8d\xe3\x81\xa7\xe3\x80\x8c\xe7\xbf\x92\xe5" -"\xbe\x97\xe3\x81\x8c\xe5\xae\xb9\xe6\x98\x93\xe3\x80\x8d\xe3\x81" -"\xa8\xe3\x81\x84\xe3\x81\x86\xe7\x9b\xae\xe6\xa8\x99\xe3\x81\xab" -"\xe9\x87\x8d\xe7\x82\xb9\xe3\x81\x8c\xe7\xbd\xae\xe3\x81\x8b\xe3" -"\x82\x8c\xe3\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3\x81\x99\xe3\x80" -"\x82\x0a\xe5\xa4\x9a\xe3\x81\x8f\xe3\x81\xae\xe3\x82\xb9\xe3\x82" -"\xaf\xe3\x83\xaa\xe3\x83\x97\xe3\x83\x88\xe7\xb3\xbb\xe8\xa8\x80" -"\xe8\xaa\x9e\xe3\x81\xa7\xe3\x81\xaf\xe3\x83\xa6\xe3\x83\xbc\xe3" -"\x82\xb6\xe3\x81\xae\xe7\x9b\xae\xe5\x85\x88\xe3\x81\xae\xe5\x88" -"\xa9\xe4\xbe\xbf\xe6\x80\xa7\xe3\x82\x92\xe5\x84\xaa\xe5\x85\x88" -"\xe3\x81\x97\xe3\x81\xa6\xe8\x89\xb2\xe3\x80\x85\xe3\x81\xaa\xe6" -"\xa9\x9f\xe8\x83\xbd\xe3\x82\x92\xe8\xa8\x80\xe8\xaa\x9e\xe8\xa6" -"\x81\xe7\xb4\xa0\xe3\x81\xa8\xe3\x81\x97\xe3\x81\xa6\xe5\x8f\x96" -"\xe3\x82\x8a\xe5\x85\xa5\xe3\x82\x8c\xe3\x82\x8b\xe5\xa0\xb4\xe5" -"\x90\x88\xe3\x81\x8c\xe5\xa4\x9a\xe3\x81\x84\xe3\x81\xae\xe3\x81" -"\xa7\xe3\x81\x99\xe3\x81\x8c\xe3\x80\x81\x50\x79\x74\x68\x6f\x6e" -"\x20\xe3\x81\xa7\xe3\x81\xaf\xe3\x81\x9d\xe3\x81\x86\xe3\x81\x84" -"\xe3\x81\xa3\xe3\x81\x9f\xe5\xb0\x8f\xe7\xb4\xb0\xe5\xb7\xa5\xe3" -"\x81\x8c\xe8\xbf\xbd\xe5\x8a\xa0\xe3\x81\x95\xe3\x82\x8c\xe3\x82" -"\x8b\xe3\x81\x93\xe3\x81\xa8\xe3\x81\xaf\xe3\x81\x82\xe3\x81\xbe" -"\xe3\x82\x8a\xe3\x81\x82\xe3\x82\x8a\xe3\x81\xbe\xe3\x81\x9b\xe3" -"\x82\x93\xe3\x80\x82\x0a\xe8\xa8\x80\xe8\xaa\x9e\xe8\x87\xaa\xe4" -"\xbd\x93\xe3\x81\xae\xe6\xa9\x9f\xe8\x83\xbd\xe3\x81\xaf\xe6\x9c" -"\x80\xe5\xb0\x8f\xe9\x99\x90\xe3\x81\xab\xe6\x8a\xbc\xe3\x81\x95" -"\xe3\x81\x88\xe3\x80\x81\xe5\xbf\x85\xe8\xa6\x81\xe3\x81\xaa\xe6" -"\xa9\x9f\xe8\x83\xbd\xe3\x81\xaf\xe6\x8b\xa1\xe5\xbc\xb5\xe3\x83" -"\xa2\xe3\x82\xb8\xe3\x83\xa5\xe3\x83\xbc\xe3\x83\xab\xe3\x81\xa8" -"\xe3\x81\x97\xe3\x81\xa6\xe8\xbf\xbd\xe5\x8a\xa0\xe3\x81\x99\xe3" -"\x82\x8b\xe3\x80\x81\xe3\x81\xa8\xe3\x81\x84\xe3\x81\x86\xe3\x81" -"\xae\xe3\x81\x8c\x20\x50\x79\x74\x68\x6f\x6e\x20\xe3\x81\xae\xe3" -"\x83\x9d\xe3\x83\xaa\xe3\x82\xb7\xe3\x83\xbc\xe3\x81\xa7\xe3\x81" -"\x99\xe3\x80\x82\x0a\x0a"), -'euc_kr': ( -"\xa1\xdd\x20\xc6\xc4\xc0\xcc\xbd\xe3\x28\x50\x79\x74\x68\x6f\x6e" -"\x29\xc0\xba\x20\xb9\xe8\xbf\xec\xb1\xe2\x20\xbd\xb1\xb0\xed\x2c" 
-"\x20\xb0\xad\xb7\xc2\xc7\xd1\x20\xc7\xc1\xb7\xce\xb1\xd7\xb7\xa1" -"\xb9\xd6\x20\xbe\xf0\xbe\xee\xc0\xd4\xb4\xcf\xb4\xd9\x2e\x20\xc6" -"\xc4\xc0\xcc\xbd\xe3\xc0\xba\x0a\xc8\xbf\xc0\xb2\xc0\xfb\xc0\xce" -"\x20\xb0\xed\xbc\xf6\xc1\xd8\x20\xb5\xa5\xc0\xcc\xc5\xcd\x20\xb1" -"\xb8\xc1\xb6\xbf\xcd\x20\xb0\xa3\xb4\xdc\xc7\xcf\xc1\xf6\xb8\xb8" -"\x20\xc8\xbf\xc0\xb2\xc0\xfb\xc0\xce\x20\xb0\xb4\xc3\xbc\xc1\xf6" -"\xc7\xe2\xc7\xc1\xb7\xce\xb1\xd7\xb7\xa1\xb9\xd6\xc0\xbb\x0a\xc1" -"\xf6\xbf\xf8\xc7\xd5\xb4\xcf\xb4\xd9\x2e\x20\xc6\xc4\xc0\xcc\xbd" -"\xe3\xc0\xc7\x20\xbf\xec\xbe\xc6\x28\xe9\xd0\xe4\xba\x29\xc7\xd1" -"\x20\xb9\xae\xb9\xfd\xb0\xfa\x20\xb5\xbf\xc0\xfb\x20\xc5\xb8\xc0" -"\xcc\xc7\xce\x2c\x20\xb1\xd7\xb8\xae\xb0\xed\x20\xc0\xce\xc5\xcd" -"\xc7\xc1\xb8\xae\xc6\xc3\x0a\xc8\xaf\xb0\xe6\xc0\xba\x20\xc6\xc4" -"\xc0\xcc\xbd\xe3\xc0\xbb\x20\xbd\xba\xc5\xa9\xb8\xb3\xc6\xc3\xb0" -"\xfa\x20\xbf\xa9\xb7\xaf\x20\xba\xd0\xbe\xdf\xbf\xa1\xbc\xad\xbf" -"\xcd\x20\xb4\xeb\xba\xce\xba\xd0\xc0\xc7\x20\xc7\xc3\xb7\xa7\xc6" -"\xfb\xbf\xa1\xbc\xad\xc0\xc7\x20\xba\xfc\xb8\xa5\x0a\xbe\xd6\xc7" -"\xc3\xb8\xae\xc4\xc9\xc0\xcc\xbc\xc7\x20\xb0\xb3\xb9\xdf\xc0\xbb" -"\x20\xc7\xd2\x20\xbc\xf6\x20\xc0\xd6\xb4\xc2\x20\xc0\xcc\xbb\xf3" -"\xc0\xfb\xc0\xce\x20\xbe\xf0\xbe\xee\xb7\xce\x20\xb8\xb8\xb5\xe9" -"\xbe\xee\xc1\xdd\xb4\xcf\xb4\xd9\x2e\x0a\x0a\xa1\xd9\xc3\xb9\xb0" -"\xa1\xb3\xa1\x3a\x20\xb3\xaf\xbe\xc6\xb6\xf3\x20\xa4\xd4\xa4\xb6" -"\xa4\xd0\xa4\xd4\xa4\xd4\xa4\xb6\xa4\xd0\xa4\xd4\xbe\xb1\x7e\x20" -"\xa4\xd4\xa4\xa4\xa4\xd2\xa4\xb7\xc5\xad\x21\x20\xa4\xd4\xa4\xa8" -"\xa4\xd1\xa4\xb7\xb1\xdd\xbe\xf8\xc0\xcc\x20\xc0\xfc\xa4\xd4\xa4" -"\xbe\xa4\xc8\xa4\xb2\xb4\xcf\xb4\xd9\x2e\x20\xa4\xd4\xa4\xb2\xa4" -"\xce\xa4\xaa\x2e\x20\xb1\xd7\xb7\xb1\xb0\xc5\x20\xa4\xd4\xa4\xb7" -"\xa4\xd1\xa4\xb4\xb4\xd9\x2e\x0a", -"\xe2\x97\x8e\x20\xed\x8c\x8c\xec\x9d\xb4\xec\x8d\xac\x28\x50\x79" -"\x74\x68\x6f\x6e\x29\xec\x9d\x80\x20\xeb\xb0\xb0\xec\x9a\xb0\xea" -"\xb8\xb0\x20\xec\x89\xbd\xea\xb3\xa0\x2c\x20\xea\xb0\x95\xeb\xa0" -"\xa5\xed\x95\x9c\x20\xed\x94\x84\xeb\xa1\x9c\xea\xb7\xb8\xeb\x9e" -"\x98\xeb\xb0\x8d\x20\xec\x96\xb8\xec\x96\xb4\xec\x9e\x85\xeb\x8b" -"\x88\xeb\x8b\xa4\x2e\x20\xed\x8c\x8c\xec\x9d\xb4\xec\x8d\xac\xec" -"\x9d\x80\x0a\xed\x9a\xa8\xec\x9c\xa8\xec\xa0\x81\xec\x9d\xb8\x20" -"\xea\xb3\xa0\xec\x88\x98\xec\xa4\x80\x20\xeb\x8d\xb0\xec\x9d\xb4" -"\xed\x84\xb0\x20\xea\xb5\xac\xec\xa1\xb0\xec\x99\x80\x20\xea\xb0" -"\x84\xeb\x8b\xa8\xed\x95\x98\xec\xa7\x80\xeb\xa7\x8c\x20\xed\x9a" -"\xa8\xec\x9c\xa8\xec\xa0\x81\xec\x9d\xb8\x20\xea\xb0\x9d\xec\xb2" -"\xb4\xec\xa7\x80\xed\x96\xa5\xed\x94\x84\xeb\xa1\x9c\xea\xb7\xb8" -"\xeb\x9e\x98\xeb\xb0\x8d\xec\x9d\x84\x0a\xec\xa7\x80\xec\x9b\x90" -"\xed\x95\xa9\xeb\x8b\x88\xeb\x8b\xa4\x2e\x20\xed\x8c\x8c\xec\x9d" -"\xb4\xec\x8d\xac\xec\x9d\x98\x20\xec\x9a\xb0\xec\x95\x84\x28\xe5" -"\x84\xaa\xe9\x9b\x85\x29\xed\x95\x9c\x20\xeb\xac\xb8\xeb\xb2\x95" -"\xea\xb3\xbc\x20\xeb\x8f\x99\xec\xa0\x81\x20\xed\x83\x80\xec\x9d" -"\xb4\xed\x95\x91\x2c\x20\xea\xb7\xb8\xeb\xa6\xac\xea\xb3\xa0\x20" -"\xec\x9d\xb8\xed\x84\xb0\xed\x94\x84\xeb\xa6\xac\xed\x8c\x85\x0a" -"\xed\x99\x98\xea\xb2\xbd\xec\x9d\x80\x20\xed\x8c\x8c\xec\x9d\xb4" -"\xec\x8d\xac\xec\x9d\x84\x20\xec\x8a\xa4\xed\x81\xac\xeb\xa6\xbd" -"\xed\x8c\x85\xea\xb3\xbc\x20\xec\x97\xac\xeb\x9f\xac\x20\xeb\xb6" -"\x84\xec\x95\xbc\xec\x97\x90\xec\x84\x9c\xec\x99\x80\x20\xeb\x8c" -"\x80\xeb\xb6\x80\xeb\xb6\x84\xec\x9d\x98\x20\xed\x94\x8c\xeb\x9e" -"\xab\xed\x8f\xbc\xec\x97\x90\xec\x84\x9c\xec\x9d\x98\x20\xeb\xb9" 
-"\xa0\xeb\xa5\xb8\x0a\xec\x95\xa0\xed\x94\x8c\xeb\xa6\xac\xec\xbc" -"\x80\xec\x9d\xb4\xec\x85\x98\x20\xea\xb0\x9c\xeb\xb0\x9c\xec\x9d" -"\x84\x20\xed\x95\xa0\x20\xec\x88\x98\x20\xec\x9e\x88\xeb\x8a\x94" -"\x20\xec\x9d\xb4\xec\x83\x81\xec\xa0\x81\xec\x9d\xb8\x20\xec\x96" -"\xb8\xec\x96\xb4\xeb\xa1\x9c\x20\xeb\xa7\x8c\xeb\x93\xa4\xec\x96" -"\xb4\xec\xa4\x8d\xeb\x8b\x88\xeb\x8b\xa4\x2e\x0a\x0a\xe2\x98\x86" -"\xec\xb2\xab\xea\xb0\x80\xeb\x81\x9d\x3a\x20\xeb\x82\xa0\xec\x95" -"\x84\xeb\x9d\xbc\x20\xec\x93\x94\xec\x93\x94\xec\x93\xa9\x7e\x20" -"\xeb\x8b\x81\xed\x81\xbc\x21\x20\xeb\x9c\xbd\xea\xb8\x88\xec\x97" -"\x86\xec\x9d\xb4\x20\xec\xa0\x84\xed\x99\xa5\xeb\x8b\x88\xeb\x8b" -"\xa4\x2e\x20\xeb\xb7\x81\x2e\x20\xea\xb7\xb8\xeb\x9f\xb0\xea\xb1" -"\xb0\x20\xec\x9d\x8e\xeb\x8b\xa4\x2e\x0a"), -'gb18030': ( -"\x50\x79\x74\x68\x6f\x6e\xa3\xa8\xc5\xc9\xc9\xad\xa3\xa9\xd3\xef" -"\xd1\xd4\xca\xc7\xd2\xbb\xd6\xd6\xb9\xa6\xc4\xdc\xc7\xbf\xb4\xf3" -"\xb6\xf8\xcd\xea\xc9\xc6\xb5\xc4\xcd\xa8\xd3\xc3\xd0\xcd\xbc\xc6" -"\xcb\xe3\xbb\xfa\xb3\xcc\xd0\xf2\xc9\xe8\xbc\xc6\xd3\xef\xd1\xd4" -"\xa3\xac\x0a\xd2\xd1\xbe\xad\xbe\xdf\xd3\xd0\xca\xae\xb6\xe0\xc4" -"\xea\xb5\xc4\xb7\xa2\xd5\xb9\xc0\xfa\xca\xb7\xa3\xac\xb3\xc9\xca" -"\xec\xc7\xd2\xce\xc8\xb6\xa8\xa1\xa3\xd5\xe2\xd6\xd6\xd3\xef\xd1" -"\xd4\xbe\xdf\xd3\xd0\xb7\xc7\xb3\xa3\xbc\xf2\xbd\xdd\xb6\xf8\xc7" -"\xe5\xce\xfa\x0a\xb5\xc4\xd3\xef\xb7\xa8\xcc\xd8\xb5\xe3\xa3\xac" -"\xca\xca\xba\xcf\xcd\xea\xb3\xc9\xb8\xf7\xd6\xd6\xb8\xdf\xb2\xe3" -"\xc8\xce\xce\xf1\xa3\xac\xbc\xb8\xba\xf5\xbf\xc9\xd2\xd4\xd4\xda" -"\xcb\xf9\xd3\xd0\xb5\xc4\xb2\xd9\xd7\xf7\xcf\xb5\xcd\xb3\xd6\xd0" -"\x0a\xd4\xcb\xd0\xd0\xa1\xa3\xd5\xe2\xd6\xd6\xd3\xef\xd1\xd4\xbc" -"\xf2\xb5\xa5\xb6\xf8\xc7\xbf\xb4\xf3\xa3\xac\xca\xca\xba\xcf\xb8" -"\xf7\xd6\xd6\xc8\xcb\xca\xbf\xd1\xa7\xcf\xb0\xca\xb9\xd3\xc3\xa1" -"\xa3\xc4\xbf\xc7\xb0\xa3\xac\xbb\xf9\xd3\xda\xd5\xe2\x0a\xd6\xd6" -"\xd3\xef\xd1\xd4\xb5\xc4\xcf\xe0\xb9\xd8\xbc\xbc\xca\xf5\xd5\xfd" -"\xd4\xda\xb7\xc9\xcb\xd9\xb5\xc4\xb7\xa2\xd5\xb9\xa3\xac\xd3\xc3" -"\xbb\xa7\xca\xfd\xc1\xbf\xbc\xb1\xbe\xe7\xc0\xa9\xb4\xf3\xa3\xac" -"\xcf\xe0\xb9\xd8\xb5\xc4\xd7\xca\xd4\xb4\xb7\xc7\xb3\xa3\xb6\xe0" -"\xa1\xa3\x0a\xc8\xe7\xba\xce\xd4\xda\x20\x50\x79\x74\x68\x6f\x6e" -"\x20\xd6\xd0\xca\xb9\xd3\xc3\xbc\xc8\xd3\xd0\xb5\xc4\x20\x43\x20" -"\x6c\x69\x62\x72\x61\x72\x79\x3f\x0a\xa1\xa1\xd4\xda\xd9\x59\xd3" -"\x8d\xbf\xc6\xbc\xbc\xbf\xec\xcb\xd9\xb0\x6c\xd5\xb9\xb5\xc4\xbd" -"\xf1\xcc\xec\x2c\x20\xe9\x5f\xb0\x6c\xbc\xb0\x9c\x79\xd4\x87\xdc" -"\x9b\xf3\x77\xb5\xc4\xcb\xd9\xb6\xc8\xca\xc7\xb2\xbb\xc8\xdd\xba" -"\xf6\xd2\x95\xb5\xc4\x0a\xd5\x6e\xee\x7d\x2e\x20\x9e\xe9\xbc\xd3" -"\xbf\xec\xe9\x5f\xb0\x6c\xbc\xb0\x9c\x79\xd4\x87\xb5\xc4\xcb\xd9" -"\xb6\xc8\x2c\x20\xce\xd2\x82\x83\xb1\xe3\xb3\xa3\xcf\xa3\xcd\xfb" -"\xc4\xdc\xc0\xfb\xd3\xc3\xd2\xbb\xd0\xa9\xd2\xd1\xe9\x5f\xb0\x6c" -"\xba\xc3\xb5\xc4\x0a\x6c\x69\x62\x72\x61\x72\x79\x2c\x20\x81\x4b" -"\xd3\xd0\xd2\xbb\x82\x80\x20\x66\x61\x73\x74\x20\x70\x72\x6f\x74" -"\x6f\x74\x79\x70\x69\x6e\x67\x20\xb5\xc4\x20\x70\x72\x6f\x67\x72" -"\x61\x6d\x6d\x69\x6e\x67\x20\x6c\x61\x6e\x67\x75\x61\x67\x65\x20" -"\xbf\xc9\x0a\xb9\xa9\xca\xb9\xd3\xc3\x2e\x20\xc4\xbf\xc7\xb0\xd3" -"\xd0\xd4\x53\xd4\x53\xb6\xe0\xb6\xe0\xb5\xc4\x20\x6c\x69\x62\x72" -"\x61\x72\x79\x20\xca\xc7\xd2\xd4\x20\x43\x20\x8c\x91\xb3\xc9\x2c" -"\x20\xb6\xf8\x20\x50\x79\x74\x68\x6f\x6e\x20\xca\xc7\xd2\xbb\x82" -"\x80\x0a\x66\x61\x73\x74\x20\x70\x72\x6f\x74\x6f\x74\x79\x70\x69" -"\x6e\x67\x20\xb5\xc4\x20\x70\x72\x6f\x67\x72\x61\x6d\x6d\x69\x6e" 
-"\x67\x20\x6c\x61\x6e\x67\x75\x61\x67\x65\x2e\x20\xb9\xca\xce\xd2" -"\x82\x83\xcf\xa3\xcd\xfb\xc4\xdc\x8c\xa2\xbc\xc8\xd3\xd0\xb5\xc4" -"\x0a\x43\x20\x6c\x69\x62\x72\x61\x72\x79\x20\xc4\xc3\xb5\xbd\x20" -"\x50\x79\x74\x68\x6f\x6e\x20\xb5\xc4\xad\x68\xbe\xb3\xd6\xd0\x9c" -"\x79\xd4\x87\xbc\xb0\xd5\xfb\xba\xcf\x2e\x20\xc6\xe4\xd6\xd0\xd7" -"\xee\xd6\xf7\xd2\xaa\xd2\xb2\xca\xc7\xce\xd2\x82\x83\xcb\xf9\x0a" -"\xd2\xaa\xd3\x91\xd5\x93\xb5\xc4\x86\x96\xee\x7d\xbe\xcd\xca\xc7" -"\x3a\x0a\x83\x35\xc7\x31\x83\x33\x9a\x33\x83\x32\xb1\x31\x83\x33" -"\x95\x31\x20\x82\x37\xd1\x36\x83\x30\x8c\x34\x83\x36\x84\x33\x20" -"\x82\x38\x89\x35\x82\x38\xfb\x36\x83\x33\x95\x35\x20\x83\x33\xd5" -"\x31\x82\x39\x81\x35\x20\x83\x30\xfd\x39\x83\x33\x86\x30\x20\x83" -"\x34\xdc\x33\x83\x35\xf6\x37\x83\x35\x97\x35\x20\x83\x35\xf9\x35" -"\x83\x30\x91\x39\x82\x38\x83\x39\x82\x39\xfc\x33\x83\x30\xf0\x34" -"\x20\x83\x32\xeb\x39\x83\x32\xeb\x35\x82\x39\x83\x39\x2e\x0a\x0a", -"\x50\x79\x74\x68\x6f\x6e\xef\xbc\x88\xe6\xb4\xbe\xe6\xa3\xae\xef" -"\xbc\x89\xe8\xaf\xad\xe8\xa8\x80\xe6\x98\xaf\xe4\xb8\x80\xe7\xa7" -"\x8d\xe5\x8a\x9f\xe8\x83\xbd\xe5\xbc\xba\xe5\xa4\xa7\xe8\x80\x8c" -"\xe5\xae\x8c\xe5\x96\x84\xe7\x9a\x84\xe9\x80\x9a\xe7\x94\xa8\xe5" -"\x9e\x8b\xe8\xae\xa1\xe7\xae\x97\xe6\x9c\xba\xe7\xa8\x8b\xe5\xba" -"\x8f\xe8\xae\xbe\xe8\xae\xa1\xe8\xaf\xad\xe8\xa8\x80\xef\xbc\x8c" -"\x0a\xe5\xb7\xb2\xe7\xbb\x8f\xe5\x85\xb7\xe6\x9c\x89\xe5\x8d\x81" -"\xe5\xa4\x9a\xe5\xb9\xb4\xe7\x9a\x84\xe5\x8f\x91\xe5\xb1\x95\xe5" -"\x8e\x86\xe5\x8f\xb2\xef\xbc\x8c\xe6\x88\x90\xe7\x86\x9f\xe4\xb8" -"\x94\xe7\xa8\xb3\xe5\xae\x9a\xe3\x80\x82\xe8\xbf\x99\xe7\xa7\x8d" -"\xe8\xaf\xad\xe8\xa8\x80\xe5\x85\xb7\xe6\x9c\x89\xe9\x9d\x9e\xe5" -"\xb8\xb8\xe7\xae\x80\xe6\x8d\xb7\xe8\x80\x8c\xe6\xb8\x85\xe6\x99" -"\xb0\x0a\xe7\x9a\x84\xe8\xaf\xad\xe6\xb3\x95\xe7\x89\xb9\xe7\x82" -"\xb9\xef\xbc\x8c\xe9\x80\x82\xe5\x90\x88\xe5\xae\x8c\xe6\x88\x90" -"\xe5\x90\x84\xe7\xa7\x8d\xe9\xab\x98\xe5\xb1\x82\xe4\xbb\xbb\xe5" -"\x8a\xa1\xef\xbc\x8c\xe5\x87\xa0\xe4\xb9\x8e\xe5\x8f\xaf\xe4\xbb" -"\xa5\xe5\x9c\xa8\xe6\x89\x80\xe6\x9c\x89\xe7\x9a\x84\xe6\x93\x8d" -"\xe4\xbd\x9c\xe7\xb3\xbb\xe7\xbb\x9f\xe4\xb8\xad\x0a\xe8\xbf\x90" -"\xe8\xa1\x8c\xe3\x80\x82\xe8\xbf\x99\xe7\xa7\x8d\xe8\xaf\xad\xe8" -"\xa8\x80\xe7\xae\x80\xe5\x8d\x95\xe8\x80\x8c\xe5\xbc\xba\xe5\xa4" -"\xa7\xef\xbc\x8c\xe9\x80\x82\xe5\x90\x88\xe5\x90\x84\xe7\xa7\x8d" -"\xe4\xba\xba\xe5\xa3\xab\xe5\xad\xa6\xe4\xb9\xa0\xe4\xbd\xbf\xe7" -"\x94\xa8\xe3\x80\x82\xe7\x9b\xae\xe5\x89\x8d\xef\xbc\x8c\xe5\x9f" -"\xba\xe4\xba\x8e\xe8\xbf\x99\x0a\xe7\xa7\x8d\xe8\xaf\xad\xe8\xa8" -"\x80\xe7\x9a\x84\xe7\x9b\xb8\xe5\x85\xb3\xe6\x8a\x80\xe6\x9c\xaf" -"\xe6\xad\xa3\xe5\x9c\xa8\xe9\xa3\x9e\xe9\x80\x9f\xe7\x9a\x84\xe5" -"\x8f\x91\xe5\xb1\x95\xef\xbc\x8c\xe7\x94\xa8\xe6\x88\xb7\xe6\x95" -"\xb0\xe9\x87\x8f\xe6\x80\xa5\xe5\x89\xa7\xe6\x89\xa9\xe5\xa4\xa7" -"\xef\xbc\x8c\xe7\x9b\xb8\xe5\x85\xb3\xe7\x9a\x84\xe8\xb5\x84\xe6" -"\xba\x90\xe9\x9d\x9e\xe5\xb8\xb8\xe5\xa4\x9a\xe3\x80\x82\x0a\xe5" -"\xa6\x82\xe4\xbd\x95\xe5\x9c\xa8\x20\x50\x79\x74\x68\x6f\x6e\x20" -"\xe4\xb8\xad\xe4\xbd\xbf\xe7\x94\xa8\xe6\x97\xa2\xe6\x9c\x89\xe7" -"\x9a\x84\x20\x43\x20\x6c\x69\x62\x72\x61\x72\x79\x3f\x0a\xe3\x80" -"\x80\xe5\x9c\xa8\xe8\xb3\x87\xe8\xa8\x8a\xe7\xa7\x91\xe6\x8a\x80" -"\xe5\xbf\xab\xe9\x80\x9f\xe7\x99\xbc\xe5\xb1\x95\xe7\x9a\x84\xe4" -"\xbb\x8a\xe5\xa4\xa9\x2c\x20\xe9\x96\x8b\xe7\x99\xbc\xe5\x8f\x8a" -"\xe6\xb8\xac\xe8\xa9\xa6\xe8\xbb\x9f\xe9\xab\x94\xe7\x9a\x84\xe9" -"\x80\x9f\xe5\xba\xa6\xe6\x98\xaf\xe4\xb8\x8d\xe5\xae\xb9\xe5\xbf" 
-"\xbd\xe8\xa6\x96\xe7\x9a\x84\x0a\xe8\xaa\xb2\xe9\xa1\x8c\x2e\x20" -"\xe7\x82\xba\xe5\x8a\xa0\xe5\xbf\xab\xe9\x96\x8b\xe7\x99\xbc\xe5" -"\x8f\x8a\xe6\xb8\xac\xe8\xa9\xa6\xe7\x9a\x84\xe9\x80\x9f\xe5\xba" -"\xa6\x2c\x20\xe6\x88\x91\xe5\x80\x91\xe4\xbe\xbf\xe5\xb8\xb8\xe5" -"\xb8\x8c\xe6\x9c\x9b\xe8\x83\xbd\xe5\x88\xa9\xe7\x94\xa8\xe4\xb8" -"\x80\xe4\xba\x9b\xe5\xb7\xb2\xe9\x96\x8b\xe7\x99\xbc\xe5\xa5\xbd" -"\xe7\x9a\x84\x0a\x6c\x69\x62\x72\x61\x72\x79\x2c\x20\xe4\xb8\xa6" -"\xe6\x9c\x89\xe4\xb8\x80\xe5\x80\x8b\x20\x66\x61\x73\x74\x20\x70" -"\x72\x6f\x74\x6f\x74\x79\x70\x69\x6e\x67\x20\xe7\x9a\x84\x20\x70" -"\x72\x6f\x67\x72\x61\x6d\x6d\x69\x6e\x67\x20\x6c\x61\x6e\x67\x75" -"\x61\x67\x65\x20\xe5\x8f\xaf\x0a\xe4\xbe\x9b\xe4\xbd\xbf\xe7\x94" -"\xa8\x2e\x20\xe7\x9b\xae\xe5\x89\x8d\xe6\x9c\x89\xe8\xa8\xb1\xe8" -"\xa8\xb1\xe5\xa4\x9a\xe5\xa4\x9a\xe7\x9a\x84\x20\x6c\x69\x62\x72" -"\x61\x72\x79\x20\xe6\x98\xaf\xe4\xbb\xa5\x20\x43\x20\xe5\xaf\xab" -"\xe6\x88\x90\x2c\x20\xe8\x80\x8c\x20\x50\x79\x74\x68\x6f\x6e\x20" -"\xe6\x98\xaf\xe4\xb8\x80\xe5\x80\x8b\x0a\x66\x61\x73\x74\x20\x70" -"\x72\x6f\x74\x6f\x74\x79\x70\x69\x6e\x67\x20\xe7\x9a\x84\x20\x70" -"\x72\x6f\x67\x72\x61\x6d\x6d\x69\x6e\x67\x20\x6c\x61\x6e\x67\x75" -"\x61\x67\x65\x2e\x20\xe6\x95\x85\xe6\x88\x91\xe5\x80\x91\xe5\xb8" -"\x8c\xe6\x9c\x9b\xe8\x83\xbd\xe5\xb0\x87\xe6\x97\xa2\xe6\x9c\x89" -"\xe7\x9a\x84\x0a\x43\x20\x6c\x69\x62\x72\x61\x72\x79\x20\xe6\x8b" -"\xbf\xe5\x88\xb0\x20\x50\x79\x74\x68\x6f\x6e\x20\xe7\x9a\x84\xe7" -"\x92\xb0\xe5\xa2\x83\xe4\xb8\xad\xe6\xb8\xac\xe8\xa9\xa6\xe5\x8f" -"\x8a\xe6\x95\xb4\xe5\x90\x88\x2e\x20\xe5\x85\xb6\xe4\xb8\xad\xe6" -"\x9c\x80\xe4\xb8\xbb\xe8\xa6\x81\xe4\xb9\x9f\xe6\x98\xaf\xe6\x88" -"\x91\xe5\x80\x91\xe6\x89\x80\x0a\xe8\xa6\x81\xe8\xa8\x8e\xe8\xab" -"\x96\xe7\x9a\x84\xe5\x95\x8f\xe9\xa1\x8c\xe5\xb0\xb1\xe6\x98\xaf" -"\x3a\x0a\xed\x8c\x8c\xec\x9d\xb4\xec\x8d\xac\xec\x9d\x80\x20\xea" -"\xb0\x95\xeb\xa0\xa5\xed\x95\x9c\x20\xea\xb8\xb0\xeb\x8a\xa5\xec" -"\x9d\x84\x20\xec\xa7\x80\xeb\x8b\x8c\x20\xeb\xb2\x94\xec\x9a\xa9" -"\x20\xec\xbb\xb4\xed\x93\xa8\xed\x84\xb0\x20\xed\x94\x84\xeb\xa1" -"\x9c\xea\xb7\xb8\xeb\x9e\x98\xeb\xb0\x8d\x20\xec\x96\xb8\xec\x96" -"\xb4\xeb\x8b\xa4\x2e\x0a\x0a"), -'gb2312': ( -"\x50\x79\x74\x68\x6f\x6e\xa3\xa8\xc5\xc9\xc9\xad\xa3\xa9\xd3\xef" -"\xd1\xd4\xca\xc7\xd2\xbb\xd6\xd6\xb9\xa6\xc4\xdc\xc7\xbf\xb4\xf3" -"\xb6\xf8\xcd\xea\xc9\xc6\xb5\xc4\xcd\xa8\xd3\xc3\xd0\xcd\xbc\xc6" -"\xcb\xe3\xbb\xfa\xb3\xcc\xd0\xf2\xc9\xe8\xbc\xc6\xd3\xef\xd1\xd4" -"\xa3\xac\x0a\xd2\xd1\xbe\xad\xbe\xdf\xd3\xd0\xca\xae\xb6\xe0\xc4" -"\xea\xb5\xc4\xb7\xa2\xd5\xb9\xc0\xfa\xca\xb7\xa3\xac\xb3\xc9\xca" -"\xec\xc7\xd2\xce\xc8\xb6\xa8\xa1\xa3\xd5\xe2\xd6\xd6\xd3\xef\xd1" -"\xd4\xbe\xdf\xd3\xd0\xb7\xc7\xb3\xa3\xbc\xf2\xbd\xdd\xb6\xf8\xc7" -"\xe5\xce\xfa\x0a\xb5\xc4\xd3\xef\xb7\xa8\xcc\xd8\xb5\xe3\xa3\xac" -"\xca\xca\xba\xcf\xcd\xea\xb3\xc9\xb8\xf7\xd6\xd6\xb8\xdf\xb2\xe3" -"\xc8\xce\xce\xf1\xa3\xac\xbc\xb8\xba\xf5\xbf\xc9\xd2\xd4\xd4\xda" -"\xcb\xf9\xd3\xd0\xb5\xc4\xb2\xd9\xd7\xf7\xcf\xb5\xcd\xb3\xd6\xd0" -"\x0a\xd4\xcb\xd0\xd0\xa1\xa3\xd5\xe2\xd6\xd6\xd3\xef\xd1\xd4\xbc" -"\xf2\xb5\xa5\xb6\xf8\xc7\xbf\xb4\xf3\xa3\xac\xca\xca\xba\xcf\xb8" -"\xf7\xd6\xd6\xc8\xcb\xca\xbf\xd1\xa7\xcf\xb0\xca\xb9\xd3\xc3\xa1" -"\xa3\xc4\xbf\xc7\xb0\xa3\xac\xbb\xf9\xd3\xda\xd5\xe2\x0a\xd6\xd6" -"\xd3\xef\xd1\xd4\xb5\xc4\xcf\xe0\xb9\xd8\xbc\xbc\xca\xf5\xd5\xfd" -"\xd4\xda\xb7\xc9\xcb\xd9\xb5\xc4\xb7\xa2\xd5\xb9\xa3\xac\xd3\xc3" -"\xbb\xa7\xca\xfd\xc1\xbf\xbc\xb1\xbe\xe7\xc0\xa9\xb4\xf3\xa3\xac" 
-"\xcf\xe0\xb9\xd8\xb5\xc4\xd7\xca\xd4\xb4\xb7\xc7\xb3\xa3\xb6\xe0" -"\xa1\xa3\x0a\x0a", -"\x50\x79\x74\x68\x6f\x6e\xef\xbc\x88\xe6\xb4\xbe\xe6\xa3\xae\xef" -"\xbc\x89\xe8\xaf\xad\xe8\xa8\x80\xe6\x98\xaf\xe4\xb8\x80\xe7\xa7" -"\x8d\xe5\x8a\x9f\xe8\x83\xbd\xe5\xbc\xba\xe5\xa4\xa7\xe8\x80\x8c" -"\xe5\xae\x8c\xe5\x96\x84\xe7\x9a\x84\xe9\x80\x9a\xe7\x94\xa8\xe5" -"\x9e\x8b\xe8\xae\xa1\xe7\xae\x97\xe6\x9c\xba\xe7\xa8\x8b\xe5\xba" -"\x8f\xe8\xae\xbe\xe8\xae\xa1\xe8\xaf\xad\xe8\xa8\x80\xef\xbc\x8c" -"\x0a\xe5\xb7\xb2\xe7\xbb\x8f\xe5\x85\xb7\xe6\x9c\x89\xe5\x8d\x81" -"\xe5\xa4\x9a\xe5\xb9\xb4\xe7\x9a\x84\xe5\x8f\x91\xe5\xb1\x95\xe5" -"\x8e\x86\xe5\x8f\xb2\xef\xbc\x8c\xe6\x88\x90\xe7\x86\x9f\xe4\xb8" -"\x94\xe7\xa8\xb3\xe5\xae\x9a\xe3\x80\x82\xe8\xbf\x99\xe7\xa7\x8d" -"\xe8\xaf\xad\xe8\xa8\x80\xe5\x85\xb7\xe6\x9c\x89\xe9\x9d\x9e\xe5" -"\xb8\xb8\xe7\xae\x80\xe6\x8d\xb7\xe8\x80\x8c\xe6\xb8\x85\xe6\x99" -"\xb0\x0a\xe7\x9a\x84\xe8\xaf\xad\xe6\xb3\x95\xe7\x89\xb9\xe7\x82" -"\xb9\xef\xbc\x8c\xe9\x80\x82\xe5\x90\x88\xe5\xae\x8c\xe6\x88\x90" -"\xe5\x90\x84\xe7\xa7\x8d\xe9\xab\x98\xe5\xb1\x82\xe4\xbb\xbb\xe5" -"\x8a\xa1\xef\xbc\x8c\xe5\x87\xa0\xe4\xb9\x8e\xe5\x8f\xaf\xe4\xbb" -"\xa5\xe5\x9c\xa8\xe6\x89\x80\xe6\x9c\x89\xe7\x9a\x84\xe6\x93\x8d" -"\xe4\xbd\x9c\xe7\xb3\xbb\xe7\xbb\x9f\xe4\xb8\xad\x0a\xe8\xbf\x90" -"\xe8\xa1\x8c\xe3\x80\x82\xe8\xbf\x99\xe7\xa7\x8d\xe8\xaf\xad\xe8" -"\xa8\x80\xe7\xae\x80\xe5\x8d\x95\xe8\x80\x8c\xe5\xbc\xba\xe5\xa4" -"\xa7\xef\xbc\x8c\xe9\x80\x82\xe5\x90\x88\xe5\x90\x84\xe7\xa7\x8d" -"\xe4\xba\xba\xe5\xa3\xab\xe5\xad\xa6\xe4\xb9\xa0\xe4\xbd\xbf\xe7" -"\x94\xa8\xe3\x80\x82\xe7\x9b\xae\xe5\x89\x8d\xef\xbc\x8c\xe5\x9f" -"\xba\xe4\xba\x8e\xe8\xbf\x99\x0a\xe7\xa7\x8d\xe8\xaf\xad\xe8\xa8" -"\x80\xe7\x9a\x84\xe7\x9b\xb8\xe5\x85\xb3\xe6\x8a\x80\xe6\x9c\xaf" -"\xe6\xad\xa3\xe5\x9c\xa8\xe9\xa3\x9e\xe9\x80\x9f\xe7\x9a\x84\xe5" -"\x8f\x91\xe5\xb1\x95\xef\xbc\x8c\xe7\x94\xa8\xe6\x88\xb7\xe6\x95" -"\xb0\xe9\x87\x8f\xe6\x80\xa5\xe5\x89\xa7\xe6\x89\xa9\xe5\xa4\xa7" -"\xef\xbc\x8c\xe7\x9b\xb8\xe5\x85\xb3\xe7\x9a\x84\xe8\xb5\x84\xe6" -"\xba\x90\xe9\x9d\x9e\xe5\xb8\xb8\xe5\xa4\x9a\xe3\x80\x82\x0a\x0a"), -'gbk': ( -"\x50\x79\x74\x68\x6f\x6e\xa3\xa8\xc5\xc9\xc9\xad\xa3\xa9\xd3\xef" -"\xd1\xd4\xca\xc7\xd2\xbb\xd6\xd6\xb9\xa6\xc4\xdc\xc7\xbf\xb4\xf3" -"\xb6\xf8\xcd\xea\xc9\xc6\xb5\xc4\xcd\xa8\xd3\xc3\xd0\xcd\xbc\xc6" -"\xcb\xe3\xbb\xfa\xb3\xcc\xd0\xf2\xc9\xe8\xbc\xc6\xd3\xef\xd1\xd4" -"\xa3\xac\x0a\xd2\xd1\xbe\xad\xbe\xdf\xd3\xd0\xca\xae\xb6\xe0\xc4" -"\xea\xb5\xc4\xb7\xa2\xd5\xb9\xc0\xfa\xca\xb7\xa3\xac\xb3\xc9\xca" -"\xec\xc7\xd2\xce\xc8\xb6\xa8\xa1\xa3\xd5\xe2\xd6\xd6\xd3\xef\xd1" -"\xd4\xbe\xdf\xd3\xd0\xb7\xc7\xb3\xa3\xbc\xf2\xbd\xdd\xb6\xf8\xc7" -"\xe5\xce\xfa\x0a\xb5\xc4\xd3\xef\xb7\xa8\xcc\xd8\xb5\xe3\xa3\xac" -"\xca\xca\xba\xcf\xcd\xea\xb3\xc9\xb8\xf7\xd6\xd6\xb8\xdf\xb2\xe3" -"\xc8\xce\xce\xf1\xa3\xac\xbc\xb8\xba\xf5\xbf\xc9\xd2\xd4\xd4\xda" -"\xcb\xf9\xd3\xd0\xb5\xc4\xb2\xd9\xd7\xf7\xcf\xb5\xcd\xb3\xd6\xd0" -"\x0a\xd4\xcb\xd0\xd0\xa1\xa3\xd5\xe2\xd6\xd6\xd3\xef\xd1\xd4\xbc" -"\xf2\xb5\xa5\xb6\xf8\xc7\xbf\xb4\xf3\xa3\xac\xca\xca\xba\xcf\xb8" -"\xf7\xd6\xd6\xc8\xcb\xca\xbf\xd1\xa7\xcf\xb0\xca\xb9\xd3\xc3\xa1" -"\xa3\xc4\xbf\xc7\xb0\xa3\xac\xbb\xf9\xd3\xda\xd5\xe2\x0a\xd6\xd6" -"\xd3\xef\xd1\xd4\xb5\xc4\xcf\xe0\xb9\xd8\xbc\xbc\xca\xf5\xd5\xfd" -"\xd4\xda\xb7\xc9\xcb\xd9\xb5\xc4\xb7\xa2\xd5\xb9\xa3\xac\xd3\xc3" -"\xbb\xa7\xca\xfd\xc1\xbf\xbc\xb1\xbe\xe7\xc0\xa9\xb4\xf3\xa3\xac" -"\xcf\xe0\xb9\xd8\xb5\xc4\xd7\xca\xd4\xb4\xb7\xc7\xb3\xa3\xb6\xe0" 
-"\xa1\xa3\x0a\xc8\xe7\xba\xce\xd4\xda\x20\x50\x79\x74\x68\x6f\x6e" -"\x20\xd6\xd0\xca\xb9\xd3\xc3\xbc\xc8\xd3\xd0\xb5\xc4\x20\x43\x20" -"\x6c\x69\x62\x72\x61\x72\x79\x3f\x0a\xa1\xa1\xd4\xda\xd9\x59\xd3" -"\x8d\xbf\xc6\xbc\xbc\xbf\xec\xcb\xd9\xb0\x6c\xd5\xb9\xb5\xc4\xbd" -"\xf1\xcc\xec\x2c\x20\xe9\x5f\xb0\x6c\xbc\xb0\x9c\x79\xd4\x87\xdc" -"\x9b\xf3\x77\xb5\xc4\xcb\xd9\xb6\xc8\xca\xc7\xb2\xbb\xc8\xdd\xba" -"\xf6\xd2\x95\xb5\xc4\x0a\xd5\x6e\xee\x7d\x2e\x20\x9e\xe9\xbc\xd3" -"\xbf\xec\xe9\x5f\xb0\x6c\xbc\xb0\x9c\x79\xd4\x87\xb5\xc4\xcb\xd9" -"\xb6\xc8\x2c\x20\xce\xd2\x82\x83\xb1\xe3\xb3\xa3\xcf\xa3\xcd\xfb" -"\xc4\xdc\xc0\xfb\xd3\xc3\xd2\xbb\xd0\xa9\xd2\xd1\xe9\x5f\xb0\x6c" -"\xba\xc3\xb5\xc4\x0a\x6c\x69\x62\x72\x61\x72\x79\x2c\x20\x81\x4b" -"\xd3\xd0\xd2\xbb\x82\x80\x20\x66\x61\x73\x74\x20\x70\x72\x6f\x74" -"\x6f\x74\x79\x70\x69\x6e\x67\x20\xb5\xc4\x20\x70\x72\x6f\x67\x72" -"\x61\x6d\x6d\x69\x6e\x67\x20\x6c\x61\x6e\x67\x75\x61\x67\x65\x20" -"\xbf\xc9\x0a\xb9\xa9\xca\xb9\xd3\xc3\x2e\x20\xc4\xbf\xc7\xb0\xd3" -"\xd0\xd4\x53\xd4\x53\xb6\xe0\xb6\xe0\xb5\xc4\x20\x6c\x69\x62\x72" -"\x61\x72\x79\x20\xca\xc7\xd2\xd4\x20\x43\x20\x8c\x91\xb3\xc9\x2c" -"\x20\xb6\xf8\x20\x50\x79\x74\x68\x6f\x6e\x20\xca\xc7\xd2\xbb\x82" -"\x80\x0a\x66\x61\x73\x74\x20\x70\x72\x6f\x74\x6f\x74\x79\x70\x69" -"\x6e\x67\x20\xb5\xc4\x20\x70\x72\x6f\x67\x72\x61\x6d\x6d\x69\x6e" -"\x67\x20\x6c\x61\x6e\x67\x75\x61\x67\x65\x2e\x20\xb9\xca\xce\xd2" -"\x82\x83\xcf\xa3\xcd\xfb\xc4\xdc\x8c\xa2\xbc\xc8\xd3\xd0\xb5\xc4" -"\x0a\x43\x20\x6c\x69\x62\x72\x61\x72\x79\x20\xc4\xc3\xb5\xbd\x20" -"\x50\x79\x74\x68\x6f\x6e\x20\xb5\xc4\xad\x68\xbe\xb3\xd6\xd0\x9c" -"\x79\xd4\x87\xbc\xb0\xd5\xfb\xba\xcf\x2e\x20\xc6\xe4\xd6\xd0\xd7" -"\xee\xd6\xf7\xd2\xaa\xd2\xb2\xca\xc7\xce\xd2\x82\x83\xcb\xf9\x0a" -"\xd2\xaa\xd3\x91\xd5\x93\xb5\xc4\x86\x96\xee\x7d\xbe\xcd\xca\xc7" -"\x3a\x0a\x0a", -"\x50\x79\x74\x68\x6f\x6e\xef\xbc\x88\xe6\xb4\xbe\xe6\xa3\xae\xef" -"\xbc\x89\xe8\xaf\xad\xe8\xa8\x80\xe6\x98\xaf\xe4\xb8\x80\xe7\xa7" -"\x8d\xe5\x8a\x9f\xe8\x83\xbd\xe5\xbc\xba\xe5\xa4\xa7\xe8\x80\x8c" -"\xe5\xae\x8c\xe5\x96\x84\xe7\x9a\x84\xe9\x80\x9a\xe7\x94\xa8\xe5" -"\x9e\x8b\xe8\xae\xa1\xe7\xae\x97\xe6\x9c\xba\xe7\xa8\x8b\xe5\xba" -"\x8f\xe8\xae\xbe\xe8\xae\xa1\xe8\xaf\xad\xe8\xa8\x80\xef\xbc\x8c" -"\x0a\xe5\xb7\xb2\xe7\xbb\x8f\xe5\x85\xb7\xe6\x9c\x89\xe5\x8d\x81" -"\xe5\xa4\x9a\xe5\xb9\xb4\xe7\x9a\x84\xe5\x8f\x91\xe5\xb1\x95\xe5" -"\x8e\x86\xe5\x8f\xb2\xef\xbc\x8c\xe6\x88\x90\xe7\x86\x9f\xe4\xb8" -"\x94\xe7\xa8\xb3\xe5\xae\x9a\xe3\x80\x82\xe8\xbf\x99\xe7\xa7\x8d" -"\xe8\xaf\xad\xe8\xa8\x80\xe5\x85\xb7\xe6\x9c\x89\xe9\x9d\x9e\xe5" -"\xb8\xb8\xe7\xae\x80\xe6\x8d\xb7\xe8\x80\x8c\xe6\xb8\x85\xe6\x99" -"\xb0\x0a\xe7\x9a\x84\xe8\xaf\xad\xe6\xb3\x95\xe7\x89\xb9\xe7\x82" -"\xb9\xef\xbc\x8c\xe9\x80\x82\xe5\x90\x88\xe5\xae\x8c\xe6\x88\x90" -"\xe5\x90\x84\xe7\xa7\x8d\xe9\xab\x98\xe5\xb1\x82\xe4\xbb\xbb\xe5" -"\x8a\xa1\xef\xbc\x8c\xe5\x87\xa0\xe4\xb9\x8e\xe5\x8f\xaf\xe4\xbb" -"\xa5\xe5\x9c\xa8\xe6\x89\x80\xe6\x9c\x89\xe7\x9a\x84\xe6\x93\x8d" -"\xe4\xbd\x9c\xe7\xb3\xbb\xe7\xbb\x9f\xe4\xb8\xad\x0a\xe8\xbf\x90" -"\xe8\xa1\x8c\xe3\x80\x82\xe8\xbf\x99\xe7\xa7\x8d\xe8\xaf\xad\xe8" -"\xa8\x80\xe7\xae\x80\xe5\x8d\x95\xe8\x80\x8c\xe5\xbc\xba\xe5\xa4" -"\xa7\xef\xbc\x8c\xe9\x80\x82\xe5\x90\x88\xe5\x90\x84\xe7\xa7\x8d" -"\xe4\xba\xba\xe5\xa3\xab\xe5\xad\xa6\xe4\xb9\xa0\xe4\xbd\xbf\xe7" -"\x94\xa8\xe3\x80\x82\xe7\x9b\xae\xe5\x89\x8d\xef\xbc\x8c\xe5\x9f" -"\xba\xe4\xba\x8e\xe8\xbf\x99\x0a\xe7\xa7\x8d\xe8\xaf\xad\xe8\xa8" -"\x80\xe7\x9a\x84\xe7\x9b\xb8\xe5\x85\xb3\xe6\x8a\x80\xe6\x9c\xaf" 
-"\xe6\xad\xa3\xe5\x9c\xa8\xe9\xa3\x9e\xe9\x80\x9f\xe7\x9a\x84\xe5" -"\x8f\x91\xe5\xb1\x95\xef\xbc\x8c\xe7\x94\xa8\xe6\x88\xb7\xe6\x95" -"\xb0\xe9\x87\x8f\xe6\x80\xa5\xe5\x89\xa7\xe6\x89\xa9\xe5\xa4\xa7" -"\xef\xbc\x8c\xe7\x9b\xb8\xe5\x85\xb3\xe7\x9a\x84\xe8\xb5\x84\xe6" -"\xba\x90\xe9\x9d\x9e\xe5\xb8\xb8\xe5\xa4\x9a\xe3\x80\x82\x0a\xe5" -"\xa6\x82\xe4\xbd\x95\xe5\x9c\xa8\x20\x50\x79\x74\x68\x6f\x6e\x20" -"\xe4\xb8\xad\xe4\xbd\xbf\xe7\x94\xa8\xe6\x97\xa2\xe6\x9c\x89\xe7" -"\x9a\x84\x20\x43\x20\x6c\x69\x62\x72\x61\x72\x79\x3f\x0a\xe3\x80" -"\x80\xe5\x9c\xa8\xe8\xb3\x87\xe8\xa8\x8a\xe7\xa7\x91\xe6\x8a\x80" -"\xe5\xbf\xab\xe9\x80\x9f\xe7\x99\xbc\xe5\xb1\x95\xe7\x9a\x84\xe4" -"\xbb\x8a\xe5\xa4\xa9\x2c\x20\xe9\x96\x8b\xe7\x99\xbc\xe5\x8f\x8a" -"\xe6\xb8\xac\xe8\xa9\xa6\xe8\xbb\x9f\xe9\xab\x94\xe7\x9a\x84\xe9" -"\x80\x9f\xe5\xba\xa6\xe6\x98\xaf\xe4\xb8\x8d\xe5\xae\xb9\xe5\xbf" -"\xbd\xe8\xa6\x96\xe7\x9a\x84\x0a\xe8\xaa\xb2\xe9\xa1\x8c\x2e\x20" -"\xe7\x82\xba\xe5\x8a\xa0\xe5\xbf\xab\xe9\x96\x8b\xe7\x99\xbc\xe5" -"\x8f\x8a\xe6\xb8\xac\xe8\xa9\xa6\xe7\x9a\x84\xe9\x80\x9f\xe5\xba" -"\xa6\x2c\x20\xe6\x88\x91\xe5\x80\x91\xe4\xbe\xbf\xe5\xb8\xb8\xe5" -"\xb8\x8c\xe6\x9c\x9b\xe8\x83\xbd\xe5\x88\xa9\xe7\x94\xa8\xe4\xb8" -"\x80\xe4\xba\x9b\xe5\xb7\xb2\xe9\x96\x8b\xe7\x99\xbc\xe5\xa5\xbd" -"\xe7\x9a\x84\x0a\x6c\x69\x62\x72\x61\x72\x79\x2c\x20\xe4\xb8\xa6" -"\xe6\x9c\x89\xe4\xb8\x80\xe5\x80\x8b\x20\x66\x61\x73\x74\x20\x70" -"\x72\x6f\x74\x6f\x74\x79\x70\x69\x6e\x67\x20\xe7\x9a\x84\x20\x70" -"\x72\x6f\x67\x72\x61\x6d\x6d\x69\x6e\x67\x20\x6c\x61\x6e\x67\x75" -"\x61\x67\x65\x20\xe5\x8f\xaf\x0a\xe4\xbe\x9b\xe4\xbd\xbf\xe7\x94" -"\xa8\x2e\x20\xe7\x9b\xae\xe5\x89\x8d\xe6\x9c\x89\xe8\xa8\xb1\xe8" -"\xa8\xb1\xe5\xa4\x9a\xe5\xa4\x9a\xe7\x9a\x84\x20\x6c\x69\x62\x72" -"\x61\x72\x79\x20\xe6\x98\xaf\xe4\xbb\xa5\x20\x43\x20\xe5\xaf\xab" -"\xe6\x88\x90\x2c\x20\xe8\x80\x8c\x20\x50\x79\x74\x68\x6f\x6e\x20" -"\xe6\x98\xaf\xe4\xb8\x80\xe5\x80\x8b\x0a\x66\x61\x73\x74\x20\x70" -"\x72\x6f\x74\x6f\x74\x79\x70\x69\x6e\x67\x20\xe7\x9a\x84\x20\x70" -"\x72\x6f\x67\x72\x61\x6d\x6d\x69\x6e\x67\x20\x6c\x61\x6e\x67\x75" -"\x61\x67\x65\x2e\x20\xe6\x95\x85\xe6\x88\x91\xe5\x80\x91\xe5\xb8" -"\x8c\xe6\x9c\x9b\xe8\x83\xbd\xe5\xb0\x87\xe6\x97\xa2\xe6\x9c\x89" -"\xe7\x9a\x84\x0a\x43\x20\x6c\x69\x62\x72\x61\x72\x79\x20\xe6\x8b" -"\xbf\xe5\x88\xb0\x20\x50\x79\x74\x68\x6f\x6e\x20\xe7\x9a\x84\xe7" -"\x92\xb0\xe5\xa2\x83\xe4\xb8\xad\xe6\xb8\xac\xe8\xa9\xa6\xe5\x8f" -"\x8a\xe6\x95\xb4\xe5\x90\x88\x2e\x20\xe5\x85\xb6\xe4\xb8\xad\xe6" -"\x9c\x80\xe4\xb8\xbb\xe8\xa6\x81\xe4\xb9\x9f\xe6\x98\xaf\xe6\x88" -"\x91\xe5\x80\x91\xe6\x89\x80\x0a\xe8\xa6\x81\xe8\xa8\x8e\xe8\xab" -"\x96\xe7\x9a\x84\xe5\x95\x8f\xe9\xa1\x8c\xe5\xb0\xb1\xe6\x98\xaf" -"\x3a\x0a\x0a"), -'johab': ( -"\x99\xb1\xa4\x77\x88\x62\xd0\x61\x20\xcd\x5c\xaf\xa1\xc5\xa9\x9c" -"\x61\x0a\x0a\xdc\xc0\xdc\xc0\x90\x73\x21\x21\x20\xf1\x67\xe2\x9c" -"\xf0\x55\xcc\x81\xa3\x89\x9f\x85\x8a\xa1\x20\xdc\xde\xdc\xd3\xd2" -"\x7a\xd9\xaf\xd9\xaf\xd9\xaf\x20\x8b\x77\x96\xd3\x20\xdc\xd1\x95" -"\x81\x20\xdc\xc0\x2e\x20\x2e\x0a\xed\x3c\xb5\x77\xdc\xd1\x93\x77" -"\xd2\x73\x20\x2e\x20\x2e\x20\x2e\x20\x2e\x20\xac\xe1\xb6\x89\x9e" -"\xa1\x20\x95\x65\xd0\x62\xf0\xe0\x20\xe0\x3b\xd2\x7a\x20\x21\x20" -"\x21\x20\x21\x87\x41\x2e\x87\x41\x0a\xd3\x61\xd3\x61\xd3\x61\x20" -"\x88\x41\x88\x41\x88\x41\xd9\x69\x87\x41\x5f\x87\x41\x20\xb4\xe1" -"\x9f\x9a\x20\xc8\xa1\xc5\xc1\x8b\x7a\x20\x95\x61\xb7\x77\x20\xc3" -"\x97\xe2\x9c\x97\x69\xf0\xe0\x20\xdc\xc0\x97\x61\x8b\x7a\x0a\xac" 
-"\xe9\x9f\x7a\x20\xe0\x3b\xd2\x7a\x20\x2e\x20\x2e\x20\x2e\x20\x2e" -"\x20\x8a\x89\xb4\x81\xae\xba\x20\xdc\xd1\x8a\xa1\x20\xdc\xde\x9f" -"\x89\xdc\xc2\x8b\x7a\x20\xf1\x67\xf1\x62\xf5\x49\xed\xfc\xf3\xe9" -"\x8c\x61\xbb\x9a\x0a\xb5\xc1\xb2\xa1\xd2\x7a\x20\x21\x20\x21\x20" -"\xed\x3c\xb5\x77\xdc\xd1\x20\xe0\x3b\x93\x77\x8a\xa1\x20\xd9\x69" -"\xea\xbe\x89\xc5\x20\xb4\xf4\x93\x77\x8a\xa1\x93\x77\x20\xed\x3c" -"\x93\x77\x96\xc1\xd2\x7a\x20\x8b\x69\xb4\x81\x97\x7a\x0a\xdc\xde" -"\x9d\x61\x97\x41\xe2\x9c\x20\xaf\x81\xce\xa1\xae\xa1\xd2\x7a\x20" -"\xb4\xe1\x9f\x9a\x20\xf1\x67\xf1\x62\xf5\x49\xed\xfc\xf3\xe9\xaf" -"\x82\xdc\xef\x97\x69\xb4\x7a\x21\x21\x20\xdc\xc0\xdc\xc0\x90\x73" -"\xd9\xbd\x20\xd9\x62\xd9\x62\x2a\x0a\x0a", -"\xeb\x98\xa0\xeb\xb0\xa9\xea\xb0\x81\xed\x95\x98\x20\xed\x8e\xb2" -"\xec\x8b\x9c\xec\xbd\x9c\xeb\x9d\xbc\x0a\x0a\xe3\x89\xaf\xe3\x89" -"\xaf\xeb\x82\xa9\x21\x21\x20\xe5\x9b\xa0\xe4\xb9\x9d\xe6\x9c\x88" -"\xed\x8c\xa8\xeb\xaf\xa4\xeb\xa6\x94\xea\xb6\x88\x20\xe2\x93\xa1" -"\xe2\x93\x96\xed\x9b\x80\xc2\xbf\xc2\xbf\xc2\xbf\x20\xea\xb8\x8d" -"\xeb\x92\x99\x20\xe2\x93\x94\xeb\x8e\xa8\x20\xe3\x89\xaf\x2e\x20" -"\x2e\x0a\xe4\xba\x9e\xec\x98\x81\xe2\x93\x94\xeb\x8a\xa5\xed\x9a" -"\xb9\x20\x2e\x20\x2e\x20\x2e\x20\x2e\x20\xec\x84\x9c\xec\x9a\xb8" -"\xeb\xa4\x84\x20\xeb\x8e\x90\xed\x95\x99\xe4\xb9\x99\x20\xe5\xae" -"\xb6\xed\x9b\x80\x20\x21\x20\x21\x20\x21\xe3\x85\xa0\x2e\xe3\x85" -"\xa0\x0a\xed\x9d\x90\xed\x9d\x90\xed\x9d\x90\x20\xe3\x84\xb1\xe3" -"\x84\xb1\xe3\x84\xb1\xe2\x98\x86\xe3\x85\xa0\x5f\xe3\x85\xa0\x20" -"\xec\x96\xb4\xeb\xa6\xa8\x20\xed\x83\xb8\xec\xbd\xb0\xea\xb8\x90" -"\x20\xeb\x8e\x8c\xec\x9d\x91\x20\xec\xb9\x91\xe4\xb9\x9d\xeb\x93" -"\xa4\xe4\xb9\x99\x20\xe3\x89\xaf\xeb\x93\x9c\xea\xb8\x90\x0a\xec" -"\x84\xa4\xeb\xa6\x8c\x20\xe5\xae\xb6\xed\x9b\x80\x20\x2e\x20\x2e" -"\x20\x2e\x20\x2e\x20\xea\xb5\xb4\xec\x95\xa0\xec\x89\x8c\x20\xe2" -"\x93\x94\xea\xb6\x88\x20\xe2\x93\xa1\xeb\xa6\x98\xe3\x89\xb1\xea" -"\xb8\x90\x20\xe5\x9b\xa0\xe4\xbb\x81\xe5\xb7\x9d\xef\xa6\x81\xe4" -"\xb8\xad\xea\xb9\x8c\xec\xa6\xbc\x0a\xec\x99\x80\xec\x92\x80\xed" -"\x9b\x80\x20\x21\x20\x21\x20\xe4\xba\x9e\xec\x98\x81\xe2\x93\x94" -"\x20\xe5\xae\xb6\xeb\x8a\xa5\xea\xb6\x88\x20\xe2\x98\x86\xe4\xb8" -"\x8a\xea\xb4\x80\x20\xec\x97\x86\xeb\x8a\xa5\xea\xb6\x88\xeb\x8a" -"\xa5\x20\xe4\xba\x9e\xeb\x8a\xa5\xeb\x92\x88\xed\x9b\x80\x20\xea" -"\xb8\x80\xec\x95\xa0\xeb\x93\xb4\x0a\xe2\x93\xa1\xeb\xa0\xa4\xeb" -"\x93\x80\xe4\xb9\x9d\x20\xec\x8b\x80\xed\x92\x94\xec\x88\xb4\xed" -"\x9b\x80\x20\xec\x96\xb4\xeb\xa6\xa8\x20\xe5\x9b\xa0\xe4\xbb\x81" -"\xe5\xb7\x9d\xef\xa6\x81\xe4\xb8\xad\xec\x8b\x81\xe2\x91\xa8\xeb" -"\x93\xa4\xec\x95\x9c\x21\x21\x20\xe3\x89\xaf\xe3\x89\xaf\xeb\x82" -"\xa9\xe2\x99\xa1\x20\xe2\x8c\x92\xe2\x8c\x92\x2a\x0a\x0a"), -'shift_jis': ( -"\x50\x79\x74\x68\x6f\x6e\x20\x82\xcc\x8a\x4a\x94\xad\x82\xcd\x81" -"\x41\x31\x39\x39\x30\x20\x94\x4e\x82\xb2\x82\xeb\x82\xa9\x82\xe7" -"\x8a\x4a\x8e\x6e\x82\xb3\x82\xea\x82\xc4\x82\xa2\x82\xdc\x82\xb7" -"\x81\x42\x0a\x8a\x4a\x94\xad\x8e\xd2\x82\xcc\x20\x47\x75\x69\x64" -"\x6f\x20\x76\x61\x6e\x20\x52\x6f\x73\x73\x75\x6d\x20\x82\xcd\x8b" -"\xb3\x88\xe7\x97\x70\x82\xcc\x83\x76\x83\x8d\x83\x4f\x83\x89\x83" -"\x7e\x83\x93\x83\x4f\x8c\xbe\x8c\xea\x81\x75\x41\x42\x43\x81\x76" -"\x82\xcc\x8a\x4a\x94\xad\x82\xc9\x8e\x51\x89\xc1\x82\xb5\x82\xc4" -"\x82\xa2\x82\xdc\x82\xb5\x82\xbd\x82\xaa\x81\x41\x41\x42\x43\x20" -"\x82\xcd\x8e\xc0\x97\x70\x8f\xe3\x82\xcc\x96\xda\x93\x49\x82\xc9" -"\x82\xcd\x82\xa0\x82\xdc\x82\xe8\x93\x4b\x82\xb5\x82\xc4\x82\xa2" 
-"\x82\xdc\x82\xb9\x82\xf1\x82\xc5\x82\xb5\x82\xbd\x81\x42\x0a\x82" -"\xb1\x82\xcc\x82\xbd\x82\xdf\x81\x41\x47\x75\x69\x64\x6f\x20\x82" -"\xcd\x82\xe6\x82\xe8\x8e\xc0\x97\x70\x93\x49\x82\xc8\x83\x76\x83" -"\x8d\x83\x4f\x83\x89\x83\x7e\x83\x93\x83\x4f\x8c\xbe\x8c\xea\x82" -"\xcc\x8a\x4a\x94\xad\x82\xf0\x8a\x4a\x8e\x6e\x82\xb5\x81\x41\x89" -"\x70\x8d\x91\x20\x42\x42\x53\x20\x95\xfa\x91\x97\x82\xcc\x83\x52" -"\x83\x81\x83\x66\x83\x42\x94\xd4\x91\x67\x81\x75\x83\x82\x83\x93" -"\x83\x65\x83\x42\x20\x83\x70\x83\x43\x83\x5c\x83\x93\x81\x76\x82" -"\xcc\x83\x74\x83\x40\x83\x93\x82\xc5\x82\xa0\x82\xe9\x20\x47\x75" -"\x69\x64\x6f\x20\x82\xcd\x82\xb1\x82\xcc\x8c\xbe\x8c\xea\x82\xf0" -"\x81\x75\x50\x79\x74\x68\x6f\x6e\x81\x76\x82\xc6\x96\xbc\x82\xc3" -"\x82\xaf\x82\xdc\x82\xb5\x82\xbd\x81\x42\x0a\x82\xb1\x82\xcc\x82" -"\xe6\x82\xa4\x82\xc8\x94\x77\x8c\x69\x82\xa9\x82\xe7\x90\xb6\x82" -"\xdc\x82\xea\x82\xbd\x20\x50\x79\x74\x68\x6f\x6e\x20\x82\xcc\x8c" -"\xbe\x8c\xea\x90\xdd\x8c\x76\x82\xcd\x81\x41\x81\x75\x83\x56\x83" -"\x93\x83\x76\x83\x8b\x81\x76\x82\xc5\x81\x75\x8f\x4b\x93\xbe\x82" -"\xaa\x97\x65\x88\xd5\x81\x76\x82\xc6\x82\xa2\x82\xa4\x96\xda\x95" -"\x57\x82\xc9\x8f\x64\x93\x5f\x82\xaa\x92\x75\x82\xa9\x82\xea\x82" -"\xc4\x82\xa2\x82\xdc\x82\xb7\x81\x42\x0a\x91\xbd\x82\xad\x82\xcc" -"\x83\x58\x83\x4e\x83\x8a\x83\x76\x83\x67\x8c\x6e\x8c\xbe\x8c\xea" -"\x82\xc5\x82\xcd\x83\x86\x81\x5b\x83\x55\x82\xcc\x96\xda\x90\xe6" -"\x82\xcc\x97\x98\x95\xd6\x90\xab\x82\xf0\x97\x44\x90\xe6\x82\xb5" -"\x82\xc4\x90\x46\x81\x58\x82\xc8\x8b\x40\x94\x5c\x82\xf0\x8c\xbe" -"\x8c\xea\x97\x76\x91\x66\x82\xc6\x82\xb5\x82\xc4\x8e\xe6\x82\xe8" -"\x93\xfc\x82\xea\x82\xe9\x8f\xea\x8d\x87\x82\xaa\x91\xbd\x82\xa2" -"\x82\xcc\x82\xc5\x82\xb7\x82\xaa\x81\x41\x50\x79\x74\x68\x6f\x6e" -"\x20\x82\xc5\x82\xcd\x82\xbb\x82\xa4\x82\xa2\x82\xc1\x82\xbd\x8f" -"\xac\x8d\xd7\x8d\x48\x82\xaa\x92\xc7\x89\xc1\x82\xb3\x82\xea\x82" -"\xe9\x82\xb1\x82\xc6\x82\xcd\x82\xa0\x82\xdc\x82\xe8\x82\xa0\x82" -"\xe8\x82\xdc\x82\xb9\x82\xf1\x81\x42\x0a\x8c\xbe\x8c\xea\x8e\xa9" -"\x91\xcc\x82\xcc\x8b\x40\x94\x5c\x82\xcd\x8d\xc5\x8f\xac\x8c\xc0" -"\x82\xc9\x89\x9f\x82\xb3\x82\xa6\x81\x41\x95\x4b\x97\x76\x82\xc8" -"\x8b\x40\x94\x5c\x82\xcd\x8a\x67\x92\xa3\x83\x82\x83\x57\x83\x85" -"\x81\x5b\x83\x8b\x82\xc6\x82\xb5\x82\xc4\x92\xc7\x89\xc1\x82\xb7" -"\x82\xe9\x81\x41\x82\xc6\x82\xa2\x82\xa4\x82\xcc\x82\xaa\x20\x50" -"\x79\x74\x68\x6f\x6e\x20\x82\xcc\x83\x7c\x83\x8a\x83\x56\x81\x5b" -"\x82\xc5\x82\xb7\x81\x42\x0a\x0a", -"\x50\x79\x74\x68\x6f\x6e\x20\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba" -"\xe3\x81\xaf\xe3\x80\x81\x31\x39\x39\x30\x20\xe5\xb9\xb4\xe3\x81" -"\x94\xe3\x82\x8d\xe3\x81\x8b\xe3\x82\x89\xe9\x96\x8b\xe5\xa7\x8b" -"\xe3\x81\x95\xe3\x82\x8c\xe3\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3" -"\x81\x99\xe3\x80\x82\x0a\xe9\x96\x8b\xe7\x99\xba\xe8\x80\x85\xe3" -"\x81\xae\x20\x47\x75\x69\x64\x6f\x20\x76\x61\x6e\x20\x52\x6f\x73" -"\x73\x75\x6d\x20\xe3\x81\xaf\xe6\x95\x99\xe8\x82\xb2\xe7\x94\xa8" -"\xe3\x81\xae\xe3\x83\x97\xe3\x83\xad\xe3\x82\xb0\xe3\x83\xa9\xe3" -"\x83\x9f\xe3\x83\xb3\xe3\x82\xb0\xe8\xa8\x80\xe8\xaa\x9e\xe3\x80" -"\x8c\x41\x42\x43\xe3\x80\x8d\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba" -"\xe3\x81\xab\xe5\x8f\x82\xe5\x8a\xa0\xe3\x81\x97\xe3\x81\xa6\xe3" -"\x81\x84\xe3\x81\xbe\xe3\x81\x97\xe3\x81\x9f\xe3\x81\x8c\xe3\x80" -"\x81\x41\x42\x43\x20\xe3\x81\xaf\xe5\xae\x9f\xe7\x94\xa8\xe4\xb8" -"\x8a\xe3\x81\xae\xe7\x9b\xae\xe7\x9a\x84\xe3\x81\xab\xe3\x81\xaf" -"\xe3\x81\x82\xe3\x81\xbe\xe3\x82\x8a\xe9\x81\xa9\xe3\x81\x97\xe3" 
-"\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3\x81\x9b\xe3\x82\x93\xe3\x81" -"\xa7\xe3\x81\x97\xe3\x81\x9f\xe3\x80\x82\x0a\xe3\x81\x93\xe3\x81" -"\xae\xe3\x81\x9f\xe3\x82\x81\xe3\x80\x81\x47\x75\x69\x64\x6f\x20" -"\xe3\x81\xaf\xe3\x82\x88\xe3\x82\x8a\xe5\xae\x9f\xe7\x94\xa8\xe7" -"\x9a\x84\xe3\x81\xaa\xe3\x83\x97\xe3\x83\xad\xe3\x82\xb0\xe3\x83" -"\xa9\xe3\x83\x9f\xe3\x83\xb3\xe3\x82\xb0\xe8\xa8\x80\xe8\xaa\x9e" -"\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba\xe3\x82\x92\xe9\x96\x8b\xe5" -"\xa7\x8b\xe3\x81\x97\xe3\x80\x81\xe8\x8b\xb1\xe5\x9b\xbd\x20\x42" -"\x42\x53\x20\xe6\x94\xbe\xe9\x80\x81\xe3\x81\xae\xe3\x82\xb3\xe3" -"\x83\xa1\xe3\x83\x87\xe3\x82\xa3\xe7\x95\xaa\xe7\xb5\x84\xe3\x80" -"\x8c\xe3\x83\xa2\xe3\x83\xb3\xe3\x83\x86\xe3\x82\xa3\x20\xe3\x83" -"\x91\xe3\x82\xa4\xe3\x82\xbd\xe3\x83\xb3\xe3\x80\x8d\xe3\x81\xae" -"\xe3\x83\x95\xe3\x82\xa1\xe3\x83\xb3\xe3\x81\xa7\xe3\x81\x82\xe3" -"\x82\x8b\x20\x47\x75\x69\x64\x6f\x20\xe3\x81\xaf\xe3\x81\x93\xe3" -"\x81\xae\xe8\xa8\x80\xe8\xaa\x9e\xe3\x82\x92\xe3\x80\x8c\x50\x79" -"\x74\x68\x6f\x6e\xe3\x80\x8d\xe3\x81\xa8\xe5\x90\x8d\xe3\x81\xa5" -"\xe3\x81\x91\xe3\x81\xbe\xe3\x81\x97\xe3\x81\x9f\xe3\x80\x82\x0a" -"\xe3\x81\x93\xe3\x81\xae\xe3\x82\x88\xe3\x81\x86\xe3\x81\xaa\xe8" -"\x83\x8c\xe6\x99\xaf\xe3\x81\x8b\xe3\x82\x89\xe7\x94\x9f\xe3\x81" -"\xbe\xe3\x82\x8c\xe3\x81\x9f\x20\x50\x79\x74\x68\x6f\x6e\x20\xe3" -"\x81\xae\xe8\xa8\x80\xe8\xaa\x9e\xe8\xa8\xad\xe8\xa8\x88\xe3\x81" -"\xaf\xe3\x80\x81\xe3\x80\x8c\xe3\x82\xb7\xe3\x83\xb3\xe3\x83\x97" -"\xe3\x83\xab\xe3\x80\x8d\xe3\x81\xa7\xe3\x80\x8c\xe7\xbf\x92\xe5" -"\xbe\x97\xe3\x81\x8c\xe5\xae\xb9\xe6\x98\x93\xe3\x80\x8d\xe3\x81" -"\xa8\xe3\x81\x84\xe3\x81\x86\xe7\x9b\xae\xe6\xa8\x99\xe3\x81\xab" -"\xe9\x87\x8d\xe7\x82\xb9\xe3\x81\x8c\xe7\xbd\xae\xe3\x81\x8b\xe3" -"\x82\x8c\xe3\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3\x81\x99\xe3\x80" -"\x82\x0a\xe5\xa4\x9a\xe3\x81\x8f\xe3\x81\xae\xe3\x82\xb9\xe3\x82" -"\xaf\xe3\x83\xaa\xe3\x83\x97\xe3\x83\x88\xe7\xb3\xbb\xe8\xa8\x80" -"\xe8\xaa\x9e\xe3\x81\xa7\xe3\x81\xaf\xe3\x83\xa6\xe3\x83\xbc\xe3" -"\x82\xb6\xe3\x81\xae\xe7\x9b\xae\xe5\x85\x88\xe3\x81\xae\xe5\x88" -"\xa9\xe4\xbe\xbf\xe6\x80\xa7\xe3\x82\x92\xe5\x84\xaa\xe5\x85\x88" -"\xe3\x81\x97\xe3\x81\xa6\xe8\x89\xb2\xe3\x80\x85\xe3\x81\xaa\xe6" -"\xa9\x9f\xe8\x83\xbd\xe3\x82\x92\xe8\xa8\x80\xe8\xaa\x9e\xe8\xa6" -"\x81\xe7\xb4\xa0\xe3\x81\xa8\xe3\x81\x97\xe3\x81\xa6\xe5\x8f\x96" -"\xe3\x82\x8a\xe5\x85\xa5\xe3\x82\x8c\xe3\x82\x8b\xe5\xa0\xb4\xe5" -"\x90\x88\xe3\x81\x8c\xe5\xa4\x9a\xe3\x81\x84\xe3\x81\xae\xe3\x81" -"\xa7\xe3\x81\x99\xe3\x81\x8c\xe3\x80\x81\x50\x79\x74\x68\x6f\x6e" -"\x20\xe3\x81\xa7\xe3\x81\xaf\xe3\x81\x9d\xe3\x81\x86\xe3\x81\x84" -"\xe3\x81\xa3\xe3\x81\x9f\xe5\xb0\x8f\xe7\xb4\xb0\xe5\xb7\xa5\xe3" -"\x81\x8c\xe8\xbf\xbd\xe5\x8a\xa0\xe3\x81\x95\xe3\x82\x8c\xe3\x82" -"\x8b\xe3\x81\x93\xe3\x81\xa8\xe3\x81\xaf\xe3\x81\x82\xe3\x81\xbe" -"\xe3\x82\x8a\xe3\x81\x82\xe3\x82\x8a\xe3\x81\xbe\xe3\x81\x9b\xe3" -"\x82\x93\xe3\x80\x82\x0a\xe8\xa8\x80\xe8\xaa\x9e\xe8\x87\xaa\xe4" -"\xbd\x93\xe3\x81\xae\xe6\xa9\x9f\xe8\x83\xbd\xe3\x81\xaf\xe6\x9c" -"\x80\xe5\xb0\x8f\xe9\x99\x90\xe3\x81\xab\xe6\x8a\xbc\xe3\x81\x95" -"\xe3\x81\x88\xe3\x80\x81\xe5\xbf\x85\xe8\xa6\x81\xe3\x81\xaa\xe6" -"\xa9\x9f\xe8\x83\xbd\xe3\x81\xaf\xe6\x8b\xa1\xe5\xbc\xb5\xe3\x83" -"\xa2\xe3\x82\xb8\xe3\x83\xa5\xe3\x83\xbc\xe3\x83\xab\xe3\x81\xa8" -"\xe3\x81\x97\xe3\x81\xa6\xe8\xbf\xbd\xe5\x8a\xa0\xe3\x81\x99\xe3" -"\x82\x8b\xe3\x80\x81\xe3\x81\xa8\xe3\x81\x84\xe3\x81\x86\xe3\x81" -"\xae\xe3\x81\x8c\x20\x50\x79\x74\x68\x6f\x6e\x20\xe3\x81\xae\xe3" 
-"\x83\x9d\xe3\x83\xaa\xe3\x82\xb7\xe3\x83\xbc\xe3\x81\xa7\xe3\x81" -"\x99\xe3\x80\x82\x0a\x0a"), -'shift_jisx0213': ( -"\x50\x79\x74\x68\x6f\x6e\x20\x82\xcc\x8a\x4a\x94\xad\x82\xcd\x81" -"\x41\x31\x39\x39\x30\x20\x94\x4e\x82\xb2\x82\xeb\x82\xa9\x82\xe7" -"\x8a\x4a\x8e\x6e\x82\xb3\x82\xea\x82\xc4\x82\xa2\x82\xdc\x82\xb7" -"\x81\x42\x0a\x8a\x4a\x94\xad\x8e\xd2\x82\xcc\x20\x47\x75\x69\x64" -"\x6f\x20\x76\x61\x6e\x20\x52\x6f\x73\x73\x75\x6d\x20\x82\xcd\x8b" -"\xb3\x88\xe7\x97\x70\x82\xcc\x83\x76\x83\x8d\x83\x4f\x83\x89\x83" -"\x7e\x83\x93\x83\x4f\x8c\xbe\x8c\xea\x81\x75\x41\x42\x43\x81\x76" -"\x82\xcc\x8a\x4a\x94\xad\x82\xc9\x8e\x51\x89\xc1\x82\xb5\x82\xc4" -"\x82\xa2\x82\xdc\x82\xb5\x82\xbd\x82\xaa\x81\x41\x41\x42\x43\x20" -"\x82\xcd\x8e\xc0\x97\x70\x8f\xe3\x82\xcc\x96\xda\x93\x49\x82\xc9" -"\x82\xcd\x82\xa0\x82\xdc\x82\xe8\x93\x4b\x82\xb5\x82\xc4\x82\xa2" -"\x82\xdc\x82\xb9\x82\xf1\x82\xc5\x82\xb5\x82\xbd\x81\x42\x0a\x82" -"\xb1\x82\xcc\x82\xbd\x82\xdf\x81\x41\x47\x75\x69\x64\x6f\x20\x82" -"\xcd\x82\xe6\x82\xe8\x8e\xc0\x97\x70\x93\x49\x82\xc8\x83\x76\x83" -"\x8d\x83\x4f\x83\x89\x83\x7e\x83\x93\x83\x4f\x8c\xbe\x8c\xea\x82" -"\xcc\x8a\x4a\x94\xad\x82\xf0\x8a\x4a\x8e\x6e\x82\xb5\x81\x41\x89" -"\x70\x8d\x91\x20\x42\x42\x53\x20\x95\xfa\x91\x97\x82\xcc\x83\x52" -"\x83\x81\x83\x66\x83\x42\x94\xd4\x91\x67\x81\x75\x83\x82\x83\x93" -"\x83\x65\x83\x42\x20\x83\x70\x83\x43\x83\x5c\x83\x93\x81\x76\x82" -"\xcc\x83\x74\x83\x40\x83\x93\x82\xc5\x82\xa0\x82\xe9\x20\x47\x75" -"\x69\x64\x6f\x20\x82\xcd\x82\xb1\x82\xcc\x8c\xbe\x8c\xea\x82\xf0" -"\x81\x75\x50\x79\x74\x68\x6f\x6e\x81\x76\x82\xc6\x96\xbc\x82\xc3" -"\x82\xaf\x82\xdc\x82\xb5\x82\xbd\x81\x42\x0a\x82\xb1\x82\xcc\x82" -"\xe6\x82\xa4\x82\xc8\x94\x77\x8c\x69\x82\xa9\x82\xe7\x90\xb6\x82" -"\xdc\x82\xea\x82\xbd\x20\x50\x79\x74\x68\x6f\x6e\x20\x82\xcc\x8c" -"\xbe\x8c\xea\x90\xdd\x8c\x76\x82\xcd\x81\x41\x81\x75\x83\x56\x83" -"\x93\x83\x76\x83\x8b\x81\x76\x82\xc5\x81\x75\x8f\x4b\x93\xbe\x82" -"\xaa\x97\x65\x88\xd5\x81\x76\x82\xc6\x82\xa2\x82\xa4\x96\xda\x95" -"\x57\x82\xc9\x8f\x64\x93\x5f\x82\xaa\x92\x75\x82\xa9\x82\xea\x82" -"\xc4\x82\xa2\x82\xdc\x82\xb7\x81\x42\x0a\x91\xbd\x82\xad\x82\xcc" -"\x83\x58\x83\x4e\x83\x8a\x83\x76\x83\x67\x8c\x6e\x8c\xbe\x8c\xea" -"\x82\xc5\x82\xcd\x83\x86\x81\x5b\x83\x55\x82\xcc\x96\xda\x90\xe6" -"\x82\xcc\x97\x98\x95\xd6\x90\xab\x82\xf0\x97\x44\x90\xe6\x82\xb5" -"\x82\xc4\x90\x46\x81\x58\x82\xc8\x8b\x40\x94\x5c\x82\xf0\x8c\xbe" -"\x8c\xea\x97\x76\x91\x66\x82\xc6\x82\xb5\x82\xc4\x8e\xe6\x82\xe8" -"\x93\xfc\x82\xea\x82\xe9\x8f\xea\x8d\x87\x82\xaa\x91\xbd\x82\xa2" -"\x82\xcc\x82\xc5\x82\xb7\x82\xaa\x81\x41\x50\x79\x74\x68\x6f\x6e" -"\x20\x82\xc5\x82\xcd\x82\xbb\x82\xa4\x82\xa2\x82\xc1\x82\xbd\x8f" -"\xac\x8d\xd7\x8d\x48\x82\xaa\x92\xc7\x89\xc1\x82\xb3\x82\xea\x82" -"\xe9\x82\xb1\x82\xc6\x82\xcd\x82\xa0\x82\xdc\x82\xe8\x82\xa0\x82" -"\xe8\x82\xdc\x82\xb9\x82\xf1\x81\x42\x0a\x8c\xbe\x8c\xea\x8e\xa9" -"\x91\xcc\x82\xcc\x8b\x40\x94\x5c\x82\xcd\x8d\xc5\x8f\xac\x8c\xc0" -"\x82\xc9\x89\x9f\x82\xb3\x82\xa6\x81\x41\x95\x4b\x97\x76\x82\xc8" -"\x8b\x40\x94\x5c\x82\xcd\x8a\x67\x92\xa3\x83\x82\x83\x57\x83\x85" -"\x81\x5b\x83\x8b\x82\xc6\x82\xb5\x82\xc4\x92\xc7\x89\xc1\x82\xb7" -"\x82\xe9\x81\x41\x82\xc6\x82\xa2\x82\xa4\x82\xcc\x82\xaa\x20\x50" -"\x79\x74\x68\x6f\x6e\x20\x82\xcc\x83\x7c\x83\x8a\x83\x56\x81\x5b" -"\x82\xc5\x82\xb7\x81\x42\x0a\x0a\x83\x6d\x82\xf5\x20\x83\x9e\x20" -"\x83\x67\x83\x4c\x88\x4b\x88\x79\x20\x98\x83\xfc\xd6\x20\xfc\xd2" -"\xfc\xe6\xfb\xd4\x0a", -"\x50\x79\x74\x68\x6f\x6e\x20\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba" 
-"\xe3\x81\xaf\xe3\x80\x81\x31\x39\x39\x30\x20\xe5\xb9\xb4\xe3\x81" -"\x94\xe3\x82\x8d\xe3\x81\x8b\xe3\x82\x89\xe9\x96\x8b\xe5\xa7\x8b" -"\xe3\x81\x95\xe3\x82\x8c\xe3\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3" -"\x81\x99\xe3\x80\x82\x0a\xe9\x96\x8b\xe7\x99\xba\xe8\x80\x85\xe3" -"\x81\xae\x20\x47\x75\x69\x64\x6f\x20\x76\x61\x6e\x20\x52\x6f\x73" -"\x73\x75\x6d\x20\xe3\x81\xaf\xe6\x95\x99\xe8\x82\xb2\xe7\x94\xa8" -"\xe3\x81\xae\xe3\x83\x97\xe3\x83\xad\xe3\x82\xb0\xe3\x83\xa9\xe3" -"\x83\x9f\xe3\x83\xb3\xe3\x82\xb0\xe8\xa8\x80\xe8\xaa\x9e\xe3\x80" -"\x8c\x41\x42\x43\xe3\x80\x8d\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba" -"\xe3\x81\xab\xe5\x8f\x82\xe5\x8a\xa0\xe3\x81\x97\xe3\x81\xa6\xe3" -"\x81\x84\xe3\x81\xbe\xe3\x81\x97\xe3\x81\x9f\xe3\x81\x8c\xe3\x80" -"\x81\x41\x42\x43\x20\xe3\x81\xaf\xe5\xae\x9f\xe7\x94\xa8\xe4\xb8" -"\x8a\xe3\x81\xae\xe7\x9b\xae\xe7\x9a\x84\xe3\x81\xab\xe3\x81\xaf" -"\xe3\x81\x82\xe3\x81\xbe\xe3\x82\x8a\xe9\x81\xa9\xe3\x81\x97\xe3" -"\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3\x81\x9b\xe3\x82\x93\xe3\x81" -"\xa7\xe3\x81\x97\xe3\x81\x9f\xe3\x80\x82\x0a\xe3\x81\x93\xe3\x81" -"\xae\xe3\x81\x9f\xe3\x82\x81\xe3\x80\x81\x47\x75\x69\x64\x6f\x20" -"\xe3\x81\xaf\xe3\x82\x88\xe3\x82\x8a\xe5\xae\x9f\xe7\x94\xa8\xe7" -"\x9a\x84\xe3\x81\xaa\xe3\x83\x97\xe3\x83\xad\xe3\x82\xb0\xe3\x83" -"\xa9\xe3\x83\x9f\xe3\x83\xb3\xe3\x82\xb0\xe8\xa8\x80\xe8\xaa\x9e" -"\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba\xe3\x82\x92\xe9\x96\x8b\xe5" -"\xa7\x8b\xe3\x81\x97\xe3\x80\x81\xe8\x8b\xb1\xe5\x9b\xbd\x20\x42" -"\x42\x53\x20\xe6\x94\xbe\xe9\x80\x81\xe3\x81\xae\xe3\x82\xb3\xe3" -"\x83\xa1\xe3\x83\x87\xe3\x82\xa3\xe7\x95\xaa\xe7\xb5\x84\xe3\x80" -"\x8c\xe3\x83\xa2\xe3\x83\xb3\xe3\x83\x86\xe3\x82\xa3\x20\xe3\x83" -"\x91\xe3\x82\xa4\xe3\x82\xbd\xe3\x83\xb3\xe3\x80\x8d\xe3\x81\xae" -"\xe3\x83\x95\xe3\x82\xa1\xe3\x83\xb3\xe3\x81\xa7\xe3\x81\x82\xe3" -"\x82\x8b\x20\x47\x75\x69\x64\x6f\x20\xe3\x81\xaf\xe3\x81\x93\xe3" -"\x81\xae\xe8\xa8\x80\xe8\xaa\x9e\xe3\x82\x92\xe3\x80\x8c\x50\x79" -"\x74\x68\x6f\x6e\xe3\x80\x8d\xe3\x81\xa8\xe5\x90\x8d\xe3\x81\xa5" -"\xe3\x81\x91\xe3\x81\xbe\xe3\x81\x97\xe3\x81\x9f\xe3\x80\x82\x0a" -"\xe3\x81\x93\xe3\x81\xae\xe3\x82\x88\xe3\x81\x86\xe3\x81\xaa\xe8" -"\x83\x8c\xe6\x99\xaf\xe3\x81\x8b\xe3\x82\x89\xe7\x94\x9f\xe3\x81" -"\xbe\xe3\x82\x8c\xe3\x81\x9f\x20\x50\x79\x74\x68\x6f\x6e\x20\xe3" -"\x81\xae\xe8\xa8\x80\xe8\xaa\x9e\xe8\xa8\xad\xe8\xa8\x88\xe3\x81" -"\xaf\xe3\x80\x81\xe3\x80\x8c\xe3\x82\xb7\xe3\x83\xb3\xe3\x83\x97" -"\xe3\x83\xab\xe3\x80\x8d\xe3\x81\xa7\xe3\x80\x8c\xe7\xbf\x92\xe5" -"\xbe\x97\xe3\x81\x8c\xe5\xae\xb9\xe6\x98\x93\xe3\x80\x8d\xe3\x81" -"\xa8\xe3\x81\x84\xe3\x81\x86\xe7\x9b\xae\xe6\xa8\x99\xe3\x81\xab" -"\xe9\x87\x8d\xe7\x82\xb9\xe3\x81\x8c\xe7\xbd\xae\xe3\x81\x8b\xe3" -"\x82\x8c\xe3\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3\x81\x99\xe3\x80" -"\x82\x0a\xe5\xa4\x9a\xe3\x81\x8f\xe3\x81\xae\xe3\x82\xb9\xe3\x82" -"\xaf\xe3\x83\xaa\xe3\x83\x97\xe3\x83\x88\xe7\xb3\xbb\xe8\xa8\x80" -"\xe8\xaa\x9e\xe3\x81\xa7\xe3\x81\xaf\xe3\x83\xa6\xe3\x83\xbc\xe3" -"\x82\xb6\xe3\x81\xae\xe7\x9b\xae\xe5\x85\x88\xe3\x81\xae\xe5\x88" -"\xa9\xe4\xbe\xbf\xe6\x80\xa7\xe3\x82\x92\xe5\x84\xaa\xe5\x85\x88" -"\xe3\x81\x97\xe3\x81\xa6\xe8\x89\xb2\xe3\x80\x85\xe3\x81\xaa\xe6" -"\xa9\x9f\xe8\x83\xbd\xe3\x82\x92\xe8\xa8\x80\xe8\xaa\x9e\xe8\xa6" -"\x81\xe7\xb4\xa0\xe3\x81\xa8\xe3\x81\x97\xe3\x81\xa6\xe5\x8f\x96" -"\xe3\x82\x8a\xe5\x85\xa5\xe3\x82\x8c\xe3\x82\x8b\xe5\xa0\xb4\xe5" -"\x90\x88\xe3\x81\x8c\xe5\xa4\x9a\xe3\x81\x84\xe3\x81\xae\xe3\x81" -"\xa7\xe3\x81\x99\xe3\x81\x8c\xe3\x80\x81\x50\x79\x74\x68\x6f\x6e" 
-"\x20\xe3\x81\xa7\xe3\x81\xaf\xe3\x81\x9d\xe3\x81\x86\xe3\x81\x84" -"\xe3\x81\xa3\xe3\x81\x9f\xe5\xb0\x8f\xe7\xb4\xb0\xe5\xb7\xa5\xe3" -"\x81\x8c\xe8\xbf\xbd\xe5\x8a\xa0\xe3\x81\x95\xe3\x82\x8c\xe3\x82" -"\x8b\xe3\x81\x93\xe3\x81\xa8\xe3\x81\xaf\xe3\x81\x82\xe3\x81\xbe" -"\xe3\x82\x8a\xe3\x81\x82\xe3\x82\x8a\xe3\x81\xbe\xe3\x81\x9b\xe3" -"\x82\x93\xe3\x80\x82\x0a\xe8\xa8\x80\xe8\xaa\x9e\xe8\x87\xaa\xe4" -"\xbd\x93\xe3\x81\xae\xe6\xa9\x9f\xe8\x83\xbd\xe3\x81\xaf\xe6\x9c" -"\x80\xe5\xb0\x8f\xe9\x99\x90\xe3\x81\xab\xe6\x8a\xbc\xe3\x81\x95" -"\xe3\x81\x88\xe3\x80\x81\xe5\xbf\x85\xe8\xa6\x81\xe3\x81\xaa\xe6" -"\xa9\x9f\xe8\x83\xbd\xe3\x81\xaf\xe6\x8b\xa1\xe5\xbc\xb5\xe3\x83" -"\xa2\xe3\x82\xb8\xe3\x83\xa5\xe3\x83\xbc\xe3\x83\xab\xe3\x81\xa8" -"\xe3\x81\x97\xe3\x81\xa6\xe8\xbf\xbd\xe5\x8a\xa0\xe3\x81\x99\xe3" -"\x82\x8b\xe3\x80\x81\xe3\x81\xa8\xe3\x81\x84\xe3\x81\x86\xe3\x81" -"\xae\xe3\x81\x8c\x20\x50\x79\x74\x68\x6f\x6e\x20\xe3\x81\xae\xe3" -"\x83\x9d\xe3\x83\xaa\xe3\x82\xb7\xe3\x83\xbc\xe3\x81\xa7\xe3\x81" -"\x99\xe3\x80\x82\x0a\x0a\xe3\x83\x8e\xe3\x81\x8b\xe3\x82\x9a\x20" -"\xe3\x83\x88\xe3\x82\x9a\x20\xe3\x83\x88\xe3\x82\xad\xef\xa8\xb6" -"\xef\xa8\xb9\x20\xf0\xa1\x9a\xb4\xf0\xaa\x8e\x8c\x20\xe9\xba\x80" -"\xe9\xbd\x81\xf0\xa9\x9b\xb0\x0a"), -} diff --git a/lib-python/2.7/test/crashers/README b/lib-python/2.7/test/crashers/README --- a/lib-python/2.7/test/crashers/README +++ b/lib-python/2.7/test/crashers/README @@ -1,20 +1,16 @@ -This directory only contains tests for outstanding bugs that cause -the interpreter to segfault. Ideally this directory should always -be empty. Sometimes it may not be easy to fix the underlying cause. +This directory only contains tests for outstanding bugs that cause the +interpreter to segfault. Ideally this directory should always be empty, but +sometimes it may not be easy to fix the underlying cause and the bug is deemed +too obscure to invest the effort. Each test should fail when run from the command line: ./python Lib/test/crashers/weakref_in_del.py -Each test should have a link to the bug report: +Put as much info into a docstring or comments to help determine the cause of the +failure, as well as a bugs.python.org issue number if it exists. Particularly +note if the cause is system or environment dependent and what the variables are. - # http://python.org/sf/BUG# - -Put as much info into a docstring or comments to help determine -the cause of the failure. Particularly note if the cause is -system or environment dependent and what the variables are. - -Once the crash is fixed, the test case should be moved into an appropriate -test (even if it was originally from the test suite). This ensures the -regression doesn't happen again. And if it does, it should be easier -to track down. +Once the crash is fixed, the test case should be moved into an appropriate test +(even if it was originally from the test suite). This ensures the regression +doesn't happen again. And if it does, it should be easier to track down. diff --git a/lib-python/2.7/test/crashers/recursion_limit_too_high.py b/lib-python/2.7/test/crashers/recursion_limit_too_high.py --- a/lib-python/2.7/test/crashers/recursion_limit_too_high.py +++ b/lib-python/2.7/test/crashers/recursion_limit_too_high.py @@ -5,7 +5,7 @@ # file handles. # The point of this example is to show that sys.setrecursionlimit() is a -# hack, and not a robust solution. This example simply exercices a path +# hack, and not a robust solution. 
This example simply exercises a path # where it takes many C-level recursions, consuming a lot of stack # space, for each Python-level recursion. So 1000 times this amount of # stack space may be too much for standard platforms already. diff --git a/lib-python/2.7/test/decimaltestdata/and.decTest b/lib-python/2.7/test/decimaltestdata/and.decTest --- a/lib-python/2.7/test/decimaltestdata/and.decTest +++ b/lib-python/2.7/test/decimaltestdata/and.decTest @@ -1,338 +1,338 @@ ------------------------------------------------------------------------- --- and.decTest -- digitwise logical AND -- --- Copyright (c) IBM Corporation, 1981, 2008. All rights reserved. -- ------------------------------------------------------------------------- --- Please see the document "General Decimal Arithmetic Testcases" -- --- at http://www2.hursley.ibm.com/decimal for the description of -- --- these testcases. -- --- -- --- These testcases are experimental ('beta' versions), and they -- --- may contain errors. They are offered on an as-is basis. In -- --- particular, achieving the same results as the tests here is not -- --- a guarantee that an implementation complies with any Standard -- --- or specification. The tests are not exhaustive. -- --- -- --- Please send comments, suggestions, and corrections to the author: -- --- Mike Cowlishaw, IBM Fellow -- --- IBM UK, PO Box 31, Birmingham Road, Warwick CV34 5JL, UK -- --- mfc at uk.ibm.com -- ------------------------------------------------------------------------- -version: 2.59 - -extended: 1 -precision: 9 -rounding: half_up -maxExponent: 999 -minExponent: -999 - --- Sanity check (truth table) -andx001 and 0 0 -> 0 -andx002 and 0 1 -> 0 -andx003 and 1 0 -> 0 -andx004 and 1 1 -> 1 -andx005 and 1100 1010 -> 1000 -andx006 and 1111 10 -> 10 -andx007 and 1111 1010 -> 1010 - --- and at msd and msd-1 -andx010 and 000000000 000000000 -> 0 -andx011 and 000000000 100000000 -> 0 -andx012 and 100000000 000000000 -> 0 -andx013 and 100000000 100000000 -> 100000000 -andx014 and 000000000 000000000 -> 0 -andx015 and 000000000 010000000 -> 0 -andx016 and 010000000 000000000 -> 0 -andx017 and 010000000 010000000 -> 10000000 - --- Various lengths --- 123456789 123456789 123456789 -andx021 and 111111111 111111111 -> 111111111 -andx022 and 111111111111 111111111 -> 111111111 -andx023 and 111111111111 11111111 -> 11111111 -andx024 and 111111111 11111111 -> 11111111 -andx025 and 111111111 1111111 -> 1111111 -andx026 and 111111111111 111111 -> 111111 -andx027 and 111111111111 11111 -> 11111 -andx028 and 111111111111 1111 -> 1111 -andx029 and 111111111111 111 -> 111 -andx031 and 111111111111 11 -> 11 -andx032 and 111111111111 1 -> 1 -andx033 and 111111111111 1111111111 -> 111111111 -andx034 and 11111111111 11111111111 -> 111111111 -andx035 and 1111111111 111111111111 -> 111111111 -andx036 and 111111111 1111111111111 -> 111111111 - -andx040 and 111111111 111111111111 -> 111111111 -andx041 and 11111111 111111111111 -> 11111111 -andx042 and 11111111 111111111 -> 11111111 -andx043 and 1111111 111111111 -> 1111111 -andx044 and 111111 111111111 -> 111111 -andx045 and 11111 111111111 -> 11111 -andx046 and 1111 111111111 -> 1111 -andx047 and 111 111111111 -> 111 -andx048 and 11 111111111 -> 11 -andx049 and 1 111111111 -> 1 - -andx050 and 1111111111 1 -> 1 -andx051 and 111111111 1 -> 1 -andx052 and 11111111 1 -> 1 -andx053 and 1111111 1 -> 1 -andx054 and 111111 1 -> 1 -andx055 and 11111 1 -> 1 -andx056 and 1111 1 -> 1 -andx057 and 111 1 -> 1 -andx058 and 11 1 -> 1 -andx059 and 1 1 -> 1 - -andx060 
and 1111111111 0 -> 0 -andx061 and 111111111 0 -> 0 -andx062 and 11111111 0 -> 0 -andx063 and 1111111 0 -> 0 -andx064 and 111111 0 -> 0 -andx065 and 11111 0 -> 0 -andx066 and 1111 0 -> 0 -andx067 and 111 0 -> 0 -andx068 and 11 0 -> 0 -andx069 and 1 0 -> 0 - -andx070 and 1 1111111111 -> 1 -andx071 and 1 111111111 -> 1 -andx072 and 1 11111111 -> 1 -andx073 and 1 1111111 -> 1 -andx074 and 1 111111 -> 1 -andx075 and 1 11111 -> 1 -andx076 and 1 1111 -> 1 -andx077 and 1 111 -> 1 -andx078 and 1 11 -> 1 -andx079 and 1 1 -> 1 - -andx080 and 0 1111111111 -> 0 -andx081 and 0 111111111 -> 0 -andx082 and 0 11111111 -> 0 -andx083 and 0 1111111 -> 0 -andx084 and 0 111111 -> 0 -andx085 and 0 11111 -> 0 -andx086 and 0 1111 -> 0 -andx087 and 0 111 -> 0 -andx088 and 0 11 -> 0 -andx089 and 0 1 -> 0 - -andx090 and 011111111 111111111 -> 11111111 -andx091 and 101111111 111111111 -> 101111111 -andx092 and 110111111 111111111 -> 110111111 -andx093 and 111011111 111111111 -> 111011111 -andx094 and 111101111 111111111 -> 111101111 -andx095 and 111110111 111111111 -> 111110111 -andx096 and 111111011 111111111 -> 111111011 -andx097 and 111111101 111111111 -> 111111101 -andx098 and 111111110 111111111 -> 111111110 - -andx100 and 111111111 011111111 -> 11111111 -andx101 and 111111111 101111111 -> 101111111 -andx102 and 111111111 110111111 -> 110111111 -andx103 and 111111111 111011111 -> 111011111 -andx104 and 111111111 111101111 -> 111101111 -andx105 and 111111111 111110111 -> 111110111 -andx106 and 111111111 111111011 -> 111111011 -andx107 and 111111111 111111101 -> 111111101 -andx108 and 111111111 111111110 -> 111111110 - --- non-0/1 should not be accepted, nor should signs -andx220 and 111111112 111111111 -> NaN Invalid_operation -andx221 and 333333333 333333333 -> NaN Invalid_operation -andx222 and 555555555 555555555 -> NaN Invalid_operation -andx223 and 777777777 777777777 -> NaN Invalid_operation -andx224 and 999999999 999999999 -> NaN Invalid_operation -andx225 and 222222222 999999999 -> NaN Invalid_operation -andx226 and 444444444 999999999 -> NaN Invalid_operation -andx227 and 666666666 999999999 -> NaN Invalid_operation -andx228 and 888888888 999999999 -> NaN Invalid_operation -andx229 and 999999999 222222222 -> NaN Invalid_operation -andx230 and 999999999 444444444 -> NaN Invalid_operation -andx231 and 999999999 666666666 -> NaN Invalid_operation -andx232 and 999999999 888888888 -> NaN Invalid_operation --- a few randoms -andx240 and 567468689 -934981942 -> NaN Invalid_operation -andx241 and 567367689 934981942 -> NaN Invalid_operation -andx242 and -631917772 -706014634 -> NaN Invalid_operation -andx243 and -756253257 138579234 -> NaN Invalid_operation -andx244 and 835590149 567435400 -> NaN Invalid_operation --- test MSD -andx250 and 200000000 100000000 -> NaN Invalid_operation -andx251 and 700000000 100000000 -> NaN Invalid_operation -andx252 and 800000000 100000000 -> NaN Invalid_operation -andx253 and 900000000 100000000 -> NaN Invalid_operation -andx254 and 200000000 000000000 -> NaN Invalid_operation -andx255 and 700000000 000000000 -> NaN Invalid_operation -andx256 and 800000000 000000000 -> NaN Invalid_operation -andx257 and 900000000 000000000 -> NaN Invalid_operation -andx258 and 100000000 200000000 -> NaN Invalid_operation -andx259 and 100000000 700000000 -> NaN Invalid_operation -andx260 and 100000000 800000000 -> NaN Invalid_operation -andx261 and 100000000 900000000 -> NaN Invalid_operation -andx262 and 000000000 200000000 -> NaN Invalid_operation -andx263 and 000000000 700000000 -> NaN 
Invalid_operation -andx264 and 000000000 800000000 -> NaN Invalid_operation -andx265 and 000000000 900000000 -> NaN Invalid_operation --- test MSD-1 -andx270 and 020000000 100000000 -> NaN Invalid_operation -andx271 and 070100000 100000000 -> NaN Invalid_operation -andx272 and 080010000 100000001 -> NaN Invalid_operation -andx273 and 090001000 100000010 -> NaN Invalid_operation -andx274 and 100000100 020010100 -> NaN Invalid_operation -andx275 and 100000000 070001000 -> NaN Invalid_operation -andx276 and 100000010 080010100 -> NaN Invalid_operation -andx277 and 100000000 090000010 -> NaN Invalid_operation --- test LSD -andx280 and 001000002 100000000 -> NaN Invalid_operation -andx281 and 000000007 100000000 -> NaN Invalid_operation -andx282 and 000000008 100000000 -> NaN Invalid_operation -andx283 and 000000009 100000000 -> NaN Invalid_operation -andx284 and 100000000 000100002 -> NaN Invalid_operation -andx285 and 100100000 001000007 -> NaN Invalid_operation -andx286 and 100010000 010000008 -> NaN Invalid_operation -andx287 and 100001000 100000009 -> NaN Invalid_operation --- test Middie -andx288 and 001020000 100000000 -> NaN Invalid_operation -andx289 and 000070001 100000000 -> NaN Invalid_operation -andx290 and 000080000 100010000 -> NaN Invalid_operation -andx291 and 000090000 100001000 -> NaN Invalid_operation -andx292 and 100000010 000020100 -> NaN Invalid_operation -andx293 and 100100000 000070010 -> NaN Invalid_operation -andx294 and 100010100 000080001 -> NaN Invalid_operation -andx295 and 100001000 000090000 -> NaN Invalid_operation --- signs -andx296 and -100001000 -000000000 -> NaN Invalid_operation -andx297 and -100001000 000010000 -> NaN Invalid_operation -andx298 and 100001000 -000000000 -> NaN Invalid_operation -andx299 and 100001000 000011000 -> 1000 - --- Nmax, Nmin, Ntiny -andx331 and 2 9.99999999E+999 -> NaN Invalid_operation -andx332 and 3 1E-999 -> NaN Invalid_operation -andx333 and 4 1.00000000E-999 -> NaN Invalid_operation -andx334 and 5 1E-1007 -> NaN Invalid_operation -andx335 and 6 -1E-1007 -> NaN Invalid_operation -andx336 and 7 -1.00000000E-999 -> NaN Invalid_operation -andx337 and 8 -1E-999 -> NaN Invalid_operation -andx338 and 9 -9.99999999E+999 -> NaN Invalid_operation -andx341 and 9.99999999E+999 -18 -> NaN Invalid_operation -andx342 and 1E-999 01 -> NaN Invalid_operation -andx343 and 1.00000000E-999 -18 -> NaN Invalid_operation -andx344 and 1E-1007 18 -> NaN Invalid_operation -andx345 and -1E-1007 -10 -> NaN Invalid_operation -andx346 and -1.00000000E-999 18 -> NaN Invalid_operation -andx347 and -1E-999 10 -> NaN Invalid_operation -andx348 and -9.99999999E+999 -18 -> NaN Invalid_operation - --- A few other non-integers -andx361 and 1.0 1 -> NaN Invalid_operation -andx362 and 1E+1 1 -> NaN Invalid_operation -andx363 and 0.0 1 -> NaN Invalid_operation -andx364 and 0E+1 1 -> NaN Invalid_operation -andx365 and 9.9 1 -> NaN Invalid_operation -andx366 and 9E+1 1 -> NaN Invalid_operation -andx371 and 0 1.0 -> NaN Invalid_operation -andx372 and 0 1E+1 -> NaN Invalid_operation -andx373 and 0 0.0 -> NaN Invalid_operation -andx374 and 0 0E+1 -> NaN Invalid_operation -andx375 and 0 9.9 -> NaN Invalid_operation -andx376 and 0 9E+1 -> NaN Invalid_operation - --- All Specials are in error -andx780 and -Inf -Inf -> NaN Invalid_operation -andx781 and -Inf -1000 -> NaN Invalid_operation -andx782 and -Inf -1 -> NaN Invalid_operation -andx783 and -Inf -0 -> NaN Invalid_operation -andx784 and -Inf 0 -> NaN Invalid_operation -andx785 and -Inf 1 -> NaN Invalid_operation 
-andx786 and -Inf 1000 -> NaN Invalid_operation -andx787 and -1000 -Inf -> NaN Invalid_operation -andx788 and -Inf -Inf -> NaN Invalid_operation -andx789 and -1 -Inf -> NaN Invalid_operation -andx790 and -0 -Inf -> NaN Invalid_operation -andx791 and 0 -Inf -> NaN Invalid_operation -andx792 and 1 -Inf -> NaN Invalid_operation -andx793 and 1000 -Inf -> NaN Invalid_operation -andx794 and Inf -Inf -> NaN Invalid_operation - -andx800 and Inf -Inf -> NaN Invalid_operation -andx801 and Inf -1000 -> NaN Invalid_operation -andx802 and Inf -1 -> NaN Invalid_operation -andx803 and Inf -0 -> NaN Invalid_operation -andx804 and Inf 0 -> NaN Invalid_operation -andx805 and Inf 1 -> NaN Invalid_operation -andx806 and Inf 1000 -> NaN Invalid_operation -andx807 and Inf Inf -> NaN Invalid_operation -andx808 and -1000 Inf -> NaN Invalid_operation -andx809 and -Inf Inf -> NaN Invalid_operation -andx810 and -1 Inf -> NaN Invalid_operation -andx811 and -0 Inf -> NaN Invalid_operation -andx812 and 0 Inf -> NaN Invalid_operation -andx813 and 1 Inf -> NaN Invalid_operation -andx814 and 1000 Inf -> NaN Invalid_operation -andx815 and Inf Inf -> NaN Invalid_operation - -andx821 and NaN -Inf -> NaN Invalid_operation -andx822 and NaN -1000 -> NaN Invalid_operation -andx823 and NaN -1 -> NaN Invalid_operation -andx824 and NaN -0 -> NaN Invalid_operation -andx825 and NaN 0 -> NaN Invalid_operation -andx826 and NaN 1 -> NaN Invalid_operation -andx827 and NaN 1000 -> NaN Invalid_operation -andx828 and NaN Inf -> NaN Invalid_operation -andx829 and NaN NaN -> NaN Invalid_operation -andx830 and -Inf NaN -> NaN Invalid_operation -andx831 and -1000 NaN -> NaN Invalid_operation -andx832 and -1 NaN -> NaN Invalid_operation -andx833 and -0 NaN -> NaN Invalid_operation -andx834 and 0 NaN -> NaN Invalid_operation -andx835 and 1 NaN -> NaN Invalid_operation -andx836 and 1000 NaN -> NaN Invalid_operation -andx837 and Inf NaN -> NaN Invalid_operation - -andx841 and sNaN -Inf -> NaN Invalid_operation -andx842 and sNaN -1000 -> NaN Invalid_operation -andx843 and sNaN -1 -> NaN Invalid_operation -andx844 and sNaN -0 -> NaN Invalid_operation -andx845 and sNaN 0 -> NaN Invalid_operation -andx846 and sNaN 1 -> NaN Invalid_operation -andx847 and sNaN 1000 -> NaN Invalid_operation -andx848 and sNaN NaN -> NaN Invalid_operation -andx849 and sNaN sNaN -> NaN Invalid_operation -andx850 and NaN sNaN -> NaN Invalid_operation -andx851 and -Inf sNaN -> NaN Invalid_operation -andx852 and -1000 sNaN -> NaN Invalid_operation -andx853 and -1 sNaN -> NaN Invalid_operation -andx854 and -0 sNaN -> NaN Invalid_operation -andx855 and 0 sNaN -> NaN Invalid_operation -andx856 and 1 sNaN -> NaN Invalid_operation -andx857 and 1000 sNaN -> NaN Invalid_operation -andx858 and Inf sNaN -> NaN Invalid_operation -andx859 and NaN sNaN -> NaN Invalid_operation - --- propagating NaNs -andx861 and NaN1 -Inf -> NaN Invalid_operation -andx862 and +NaN2 -1000 -> NaN Invalid_operation -andx863 and NaN3 1000 -> NaN Invalid_operation -andx864 and NaN4 Inf -> NaN Invalid_operation -andx865 and NaN5 +NaN6 -> NaN Invalid_operation -andx866 and -Inf NaN7 -> NaN Invalid_operation -andx867 and -1000 NaN8 -> NaN Invalid_operation -andx868 and 1000 NaN9 -> NaN Invalid_operation -andx869 and Inf +NaN10 -> NaN Invalid_operation -andx871 and sNaN11 -Inf -> NaN Invalid_operation -andx872 and sNaN12 -1000 -> NaN Invalid_operation -andx873 and sNaN13 1000 -> NaN Invalid_operation -andx874 and sNaN14 NaN17 -> NaN Invalid_operation -andx875 and sNaN15 sNaN18 -> NaN Invalid_operation -andx876 and 
NaN16 sNaN19 -> NaN Invalid_operation -andx877 and -Inf +sNaN20 -> NaN Invalid_operation -andx878 and -1000 sNaN21 -> NaN Invalid_operation -andx879 and 1000 sNaN22 -> NaN Invalid_operation -andx880 and Inf sNaN23 -> NaN Invalid_operation -andx881 and +NaN25 +sNaN24 -> NaN Invalid_operation -andx882 and -NaN26 NaN28 -> NaN Invalid_operation -andx883 and -sNaN27 sNaN29 -> NaN Invalid_operation -andx884 and 1000 -NaN30 -> NaN Invalid_operation -andx885 and 1000 -sNaN31 -> NaN Invalid_operation +------------------------------------------------------------------------ +-- and.decTest -- digitwise logical AND -- +-- Copyright (c) IBM Corporation, 1981, 2008. All rights reserved. -- +------------------------------------------------------------------------ +-- Please see the document "General Decimal Arithmetic Testcases" -- +-- at http://www2.hursley.ibm.com/decimal for the description of -- +-- these testcases. -- +-- -- +-- These testcases are experimental ('beta' versions), and they -- +-- may contain errors. They are offered on an as-is basis. In -- +-- particular, achieving the same results as the tests here is not -- +-- a guarantee that an implementation complies with any Standard -- +-- or specification. The tests are not exhaustive. -- +-- -- +-- Please send comments, suggestions, and corrections to the author: -- +-- Mike Cowlishaw, IBM Fellow -- +-- IBM UK, PO Box 31, Birmingham Road, Warwick CV34 5JL, UK -- +-- mfc at uk.ibm.com -- +------------------------------------------------------------------------ +version: 2.59 + +extended: 1 +precision: 9 +rounding: half_up +maxExponent: 999 +minExponent: -999 + +-- Sanity check (truth table) +andx001 and 0 0 -> 0 +andx002 and 0 1 -> 0 +andx003 and 1 0 -> 0 +andx004 and 1 1 -> 1 +andx005 and 1100 1010 -> 1000 +andx006 and 1111 10 -> 10 +andx007 and 1111 1010 -> 1010 + +-- and at msd and msd-1 +andx010 and 000000000 000000000 -> 0 +andx011 and 000000000 100000000 -> 0 +andx012 and 100000000 000000000 -> 0 +andx013 and 100000000 100000000 -> 100000000 +andx014 and 000000000 000000000 -> 0 +andx015 and 000000000 010000000 -> 0 +andx016 and 010000000 000000000 -> 0 +andx017 and 010000000 010000000 -> 10000000 + +-- Various lengths +-- 123456789 123456789 123456789 +andx021 and 111111111 111111111 -> 111111111 +andx022 and 111111111111 111111111 -> 111111111 +andx023 and 111111111111 11111111 -> 11111111 +andx024 and 111111111 11111111 -> 11111111 +andx025 and 111111111 1111111 -> 1111111 +andx026 and 111111111111 111111 -> 111111 +andx027 and 111111111111 11111 -> 11111 +andx028 and 111111111111 1111 -> 1111 +andx029 and 111111111111 111 -> 111 +andx031 and 111111111111 11 -> 11 +andx032 and 111111111111 1 -> 1 +andx033 and 111111111111 1111111111 -> 111111111 +andx034 and 11111111111 11111111111 -> 111111111 +andx035 and 1111111111 111111111111 -> 111111111 +andx036 and 111111111 1111111111111 -> 111111111 + +andx040 and 111111111 111111111111 -> 111111111 +andx041 and 11111111 111111111111 -> 11111111 +andx042 and 11111111 111111111 -> 11111111 +andx043 and 1111111 111111111 -> 1111111 +andx044 and 111111 111111111 -> 111111 +andx045 and 11111 111111111 -> 11111 +andx046 and 1111 111111111 -> 1111 +andx047 and 111 111111111 -> 111 +andx048 and 11 111111111 -> 11 +andx049 and 1 111111111 -> 1 + +andx050 and 1111111111 1 -> 1 +andx051 and 111111111 1 -> 1 +andx052 and 11111111 1 -> 1 +andx053 and 1111111 1 -> 1 +andx054 and 111111 1 -> 1 +andx055 and 11111 1 -> 1 +andx056 and 1111 1 -> 1 +andx057 and 111 1 -> 1 +andx058 and 11 1 -> 1 +andx059 
and 1 1 -> 1 + +andx060 and 1111111111 0 -> 0 +andx061 and 111111111 0 -> 0 +andx062 and 11111111 0 -> 0 +andx063 and 1111111 0 -> 0 +andx064 and 111111 0 -> 0 +andx065 and 11111 0 -> 0 +andx066 and 1111 0 -> 0 +andx067 and 111 0 -> 0 +andx068 and 11 0 -> 0 +andx069 and 1 0 -> 0 + +andx070 and 1 1111111111 -> 1 +andx071 and 1 111111111 -> 1 +andx072 and 1 11111111 -> 1 +andx073 and 1 1111111 -> 1 +andx074 and 1 111111 -> 1 +andx075 and 1 11111 -> 1 +andx076 and 1 1111 -> 1 +andx077 and 1 111 -> 1 +andx078 and 1 11 -> 1 +andx079 and 1 1 -> 1 + +andx080 and 0 1111111111 -> 0 +andx081 and 0 111111111 -> 0 +andx082 and 0 11111111 -> 0 +andx083 and 0 1111111 -> 0 +andx084 and 0 111111 -> 0 +andx085 and 0 11111 -> 0 +andx086 and 0 1111 -> 0 +andx087 and 0 111 -> 0 +andx088 and 0 11 -> 0 +andx089 and 0 1 -> 0 + +andx090 and 011111111 111111111 -> 11111111 +andx091 and 101111111 111111111 -> 101111111 +andx092 and 110111111 111111111 -> 110111111 +andx093 and 111011111 111111111 -> 111011111 +andx094 and 111101111 111111111 -> 111101111 +andx095 and 111110111 111111111 -> 111110111 +andx096 and 111111011 111111111 -> 111111011 +andx097 and 111111101 111111111 -> 111111101 +andx098 and 111111110 111111111 -> 111111110 + +andx100 and 111111111 011111111 -> 11111111 +andx101 and 111111111 101111111 -> 101111111 +andx102 and 111111111 110111111 -> 110111111 +andx103 and 111111111 111011111 -> 111011111 +andx104 and 111111111 111101111 -> 111101111 +andx105 and 111111111 111110111 -> 111110111 +andx106 and 111111111 111111011 -> 111111011 +andx107 and 111111111 111111101 -> 111111101 +andx108 and 111111111 111111110 -> 111111110 + +-- non-0/1 should not be accepted, nor should signs +andx220 and 111111112 111111111 -> NaN Invalid_operation +andx221 and 333333333 333333333 -> NaN Invalid_operation +andx222 and 555555555 555555555 -> NaN Invalid_operation +andx223 and 777777777 777777777 -> NaN Invalid_operation +andx224 and 999999999 999999999 -> NaN Invalid_operation +andx225 and 222222222 999999999 -> NaN Invalid_operation +andx226 and 444444444 999999999 -> NaN Invalid_operation +andx227 and 666666666 999999999 -> NaN Invalid_operation +andx228 and 888888888 999999999 -> NaN Invalid_operation +andx229 and 999999999 222222222 -> NaN Invalid_operation +andx230 and 999999999 444444444 -> NaN Invalid_operation +andx231 and 999999999 666666666 -> NaN Invalid_operation +andx232 and 999999999 888888888 -> NaN Invalid_operation +-- a few randoms +andx240 and 567468689 -934981942 -> NaN Invalid_operation +andx241 and 567367689 934981942 -> NaN Invalid_operation +andx242 and -631917772 -706014634 -> NaN Invalid_operation +andx243 and -756253257 138579234 -> NaN Invalid_operation +andx244 and 835590149 567435400 -> NaN Invalid_operation +-- test MSD +andx250 and 200000000 100000000 -> NaN Invalid_operation +andx251 and 700000000 100000000 -> NaN Invalid_operation +andx252 and 800000000 100000000 -> NaN Invalid_operation +andx253 and 900000000 100000000 -> NaN Invalid_operation +andx254 and 200000000 000000000 -> NaN Invalid_operation +andx255 and 700000000 000000000 -> NaN Invalid_operation +andx256 and 800000000 000000000 -> NaN Invalid_operation +andx257 and 900000000 000000000 -> NaN Invalid_operation +andx258 and 100000000 200000000 -> NaN Invalid_operation +andx259 and 100000000 700000000 -> NaN Invalid_operation +andx260 and 100000000 800000000 -> NaN Invalid_operation +andx261 and 100000000 900000000 -> NaN Invalid_operation +andx262 and 000000000 200000000 -> NaN Invalid_operation +andx263 and 000000000 
700000000 -> NaN Invalid_operation +andx264 and 000000000 800000000 -> NaN Invalid_operation +andx265 and 000000000 900000000 -> NaN Invalid_operation +-- test MSD-1 +andx270 and 020000000 100000000 -> NaN Invalid_operation +andx271 and 070100000 100000000 -> NaN Invalid_operation +andx272 and 080010000 100000001 -> NaN Invalid_operation +andx273 and 090001000 100000010 -> NaN Invalid_operation +andx274 and 100000100 020010100 -> NaN Invalid_operation +andx275 and 100000000 070001000 -> NaN Invalid_operation +andx276 and 100000010 080010100 -> NaN Invalid_operation +andx277 and 100000000 090000010 -> NaN Invalid_operation +-- test LSD +andx280 and 001000002 100000000 -> NaN Invalid_operation +andx281 and 000000007 100000000 -> NaN Invalid_operation +andx282 and 000000008 100000000 -> NaN Invalid_operation +andx283 and 000000009 100000000 -> NaN Invalid_operation +andx284 and 100000000 000100002 -> NaN Invalid_operation +andx285 and 100100000 001000007 -> NaN Invalid_operation +andx286 and 100010000 010000008 -> NaN Invalid_operation +andx287 and 100001000 100000009 -> NaN Invalid_operation +-- test Middie +andx288 and 001020000 100000000 -> NaN Invalid_operation +andx289 and 000070001 100000000 -> NaN Invalid_operation +andx290 and 000080000 100010000 -> NaN Invalid_operation +andx291 and 000090000 100001000 -> NaN Invalid_operation +andx292 and 100000010 000020100 -> NaN Invalid_operation +andx293 and 100100000 000070010 -> NaN Invalid_operation +andx294 and 100010100 000080001 -> NaN Invalid_operation +andx295 and 100001000 000090000 -> NaN Invalid_operation +-- signs +andx296 and -100001000 -000000000 -> NaN Invalid_operation +andx297 and -100001000 000010000 -> NaN Invalid_operation +andx298 and 100001000 -000000000 -> NaN Invalid_operation +andx299 and 100001000 000011000 -> 1000 + +-- Nmax, Nmin, Ntiny +andx331 and 2 9.99999999E+999 -> NaN Invalid_operation +andx332 and 3 1E-999 -> NaN Invalid_operation +andx333 and 4 1.00000000E-999 -> NaN Invalid_operation +andx334 and 5 1E-1007 -> NaN Invalid_operation +andx335 and 6 -1E-1007 -> NaN Invalid_operation +andx336 and 7 -1.00000000E-999 -> NaN Invalid_operation +andx337 and 8 -1E-999 -> NaN Invalid_operation +andx338 and 9 -9.99999999E+999 -> NaN Invalid_operation +andx341 and 9.99999999E+999 -18 -> NaN Invalid_operation +andx342 and 1E-999 01 -> NaN Invalid_operation +andx343 and 1.00000000E-999 -18 -> NaN Invalid_operation +andx344 and 1E-1007 18 -> NaN Invalid_operation +andx345 and -1E-1007 -10 -> NaN Invalid_operation +andx346 and -1.00000000E-999 18 -> NaN Invalid_operation +andx347 and -1E-999 10 -> NaN Invalid_operation +andx348 and -9.99999999E+999 -18 -> NaN Invalid_operation + +-- A few other non-integers +andx361 and 1.0 1 -> NaN Invalid_operation +andx362 and 1E+1 1 -> NaN Invalid_operation +andx363 and 0.0 1 -> NaN Invalid_operation +andx364 and 0E+1 1 -> NaN Invalid_operation +andx365 and 9.9 1 -> NaN Invalid_operation +andx366 and 9E+1 1 -> NaN Invalid_operation +andx371 and 0 1.0 -> NaN Invalid_operation +andx372 and 0 1E+1 -> NaN Invalid_operation +andx373 and 0 0.0 -> NaN Invalid_operation +andx374 and 0 0E+1 -> NaN Invalid_operation +andx375 and 0 9.9 -> NaN Invalid_operation +andx376 and 0 9E+1 -> NaN Invalid_operation + +-- All Specials are in error +andx780 and -Inf -Inf -> NaN Invalid_operation +andx781 and -Inf -1000 -> NaN Invalid_operation +andx782 and -Inf -1 -> NaN Invalid_operation +andx783 and -Inf -0 -> NaN Invalid_operation +andx784 and -Inf 0 -> NaN Invalid_operation +andx785 and -Inf 1 -> NaN 
Invalid_operation +andx786 and -Inf 1000 -> NaN Invalid_operation +andx787 and -1000 -Inf -> NaN Invalid_operation +andx788 and -Inf -Inf -> NaN Invalid_operation +andx789 and -1 -Inf -> NaN Invalid_operation +andx790 and -0 -Inf -> NaN Invalid_operation +andx791 and 0 -Inf -> NaN Invalid_operation +andx792 and 1 -Inf -> NaN Invalid_operation +andx793 and 1000 -Inf -> NaN Invalid_operation +andx794 and Inf -Inf -> NaN Invalid_operation + +andx800 and Inf -Inf -> NaN Invalid_operation +andx801 and Inf -1000 -> NaN Invalid_operation +andx802 and Inf -1 -> NaN Invalid_operation +andx803 and Inf -0 -> NaN Invalid_operation +andx804 and Inf 0 -> NaN Invalid_operation +andx805 and Inf 1 -> NaN Invalid_operation +andx806 and Inf 1000 -> NaN Invalid_operation +andx807 and Inf Inf -> NaN Invalid_operation +andx808 and -1000 Inf -> NaN Invalid_operation +andx809 and -Inf Inf -> NaN Invalid_operation +andx810 and -1 Inf -> NaN Invalid_operation +andx811 and -0 Inf -> NaN Invalid_operation +andx812 and 0 Inf -> NaN Invalid_operation +andx813 and 1 Inf -> NaN Invalid_operation +andx814 and 1000 Inf -> NaN Invalid_operation +andx815 and Inf Inf -> NaN Invalid_operation + +andx821 and NaN -Inf -> NaN Invalid_operation +andx822 and NaN -1000 -> NaN Invalid_operation +andx823 and NaN -1 -> NaN Invalid_operation +andx824 and NaN -0 -> NaN Invalid_operation +andx825 and NaN 0 -> NaN Invalid_operation +andx826 and NaN 1 -> NaN Invalid_operation +andx827 and NaN 1000 -> NaN Invalid_operation +andx828 and NaN Inf -> NaN Invalid_operation +andx829 and NaN NaN -> NaN Invalid_operation +andx830 and -Inf NaN -> NaN Invalid_operation +andx831 and -1000 NaN -> NaN Invalid_operation +andx832 and -1 NaN -> NaN Invalid_operation +andx833 and -0 NaN -> NaN Invalid_operation +andx834 and 0 NaN -> NaN Invalid_operation +andx835 and 1 NaN -> NaN Invalid_operation +andx836 and 1000 NaN -> NaN Invalid_operation +andx837 and Inf NaN -> NaN Invalid_operation + +andx841 and sNaN -Inf -> NaN Invalid_operation +andx842 and sNaN -1000 -> NaN Invalid_operation +andx843 and sNaN -1 -> NaN Invalid_operation +andx844 and sNaN -0 -> NaN Invalid_operation +andx845 and sNaN 0 -> NaN Invalid_operation +andx846 and sNaN 1 -> NaN Invalid_operation +andx847 and sNaN 1000 -> NaN Invalid_operation +andx848 and sNaN NaN -> NaN Invalid_operation +andx849 and sNaN sNaN -> NaN Invalid_operation +andx850 and NaN sNaN -> NaN Invalid_operation +andx851 and -Inf sNaN -> NaN Invalid_operation +andx852 and -1000 sNaN -> NaN Invalid_operation +andx853 and -1 sNaN -> NaN Invalid_operation +andx854 and -0 sNaN -> NaN Invalid_operation +andx855 and 0 sNaN -> NaN Invalid_operation +andx856 and 1 sNaN -> NaN Invalid_operation +andx857 and 1000 sNaN -> NaN Invalid_operation +andx858 and Inf sNaN -> NaN Invalid_operation +andx859 and NaN sNaN -> NaN Invalid_operation + +-- propagating NaNs +andx861 and NaN1 -Inf -> NaN Invalid_operation +andx862 and +NaN2 -1000 -> NaN Invalid_operation +andx863 and NaN3 1000 -> NaN Invalid_operation +andx864 and NaN4 Inf -> NaN Invalid_operation +andx865 and NaN5 +NaN6 -> NaN Invalid_operation +andx866 and -Inf NaN7 -> NaN Invalid_operation +andx867 and -1000 NaN8 -> NaN Invalid_operation +andx868 and 1000 NaN9 -> NaN Invalid_operation +andx869 and Inf +NaN10 -> NaN Invalid_operation +andx871 and sNaN11 -Inf -> NaN Invalid_operation +andx872 and sNaN12 -1000 -> NaN Invalid_operation +andx873 and sNaN13 1000 -> NaN Invalid_operation +andx874 and sNaN14 NaN17 -> NaN Invalid_operation +andx875 and sNaN15 sNaN18 -> NaN 
Invalid_operation +andx876 and NaN16 sNaN19 -> NaN Invalid_operation +andx877 and -Inf +sNaN20 -> NaN Invalid_operation +andx878 and -1000 sNaN21 -> NaN Invalid_operation +andx879 and 1000 sNaN22 -> NaN Invalid_operation +andx880 and Inf sNaN23 -> NaN Invalid_operation +andx881 and +NaN25 +sNaN24 -> NaN Invalid_operation +andx882 and -NaN26 NaN28 -> NaN Invalid_operation +andx883 and -sNaN27 sNaN29 -> NaN Invalid_operation +andx884 and 1000 -NaN30 -> NaN Invalid_operation +andx885 and 1000 -sNaN31 -> NaN Invalid_operation diff --git a/lib-python/2.7/test/decimaltestdata/class.decTest b/lib-python/2.7/test/decimaltestdata/class.decTest --- a/lib-python/2.7/test/decimaltestdata/class.decTest +++ b/lib-python/2.7/test/decimaltestdata/class.decTest @@ -1,131 +1,131 @@ ------------------------------------------------------------------------- --- class.decTest -- Class operations -- --- Copyright (c) IBM Corporation, 1981, 2008. All rights reserved. -- ------------------------------------------------------------------------- --- Please see the document "General Decimal Arithmetic Testcases" -- --- at http://www2.hursley.ibm.com/decimal for the description of -- --- these testcases. -- --- -- --- These testcases are experimental ('beta' versions), and they -- --- may contain errors. They are offered on an as-is basis. In -- --- particular, achieving the same results as the tests here is not -- --- a guarantee that an implementation complies with any Standard -- --- or specification. The tests are not exhaustive. -- --- -- --- Please send comments, suggestions, and corrections to the author: -- --- Mike Cowlishaw, IBM Fellow -- --- IBM UK, PO Box 31, Birmingham Road, Warwick CV34 5JL, UK -- --- mfc at uk.ibm.com -- ------------------------------------------------------------------------- -version: 2.59 - --- [New 2006.11.27] - -precision: 9 -maxExponent: 999 -minExponent: -999 -extended: 1 -clamp: 1 -rounding: half_even - -clasx001 class 0 -> +Zero -clasx002 class 0.00 -> +Zero -clasx003 class 0E+5 -> +Zero -clasx004 class 1E-1007 -> +Subnormal -clasx005 class 0.1E-999 -> +Subnormal -clasx006 class 0.99999999E-999 -> +Subnormal -clasx007 class 1.00000000E-999 -> +Normal -clasx008 class 1E-999 -> +Normal -clasx009 class 1E-100 -> +Normal -clasx010 class 1E-10 -> +Normal -clasx012 class 1E-1 -> +Normal -clasx013 class 1 -> +Normal -clasx014 class 2.50 -> +Normal -clasx015 class 100.100 -> +Normal -clasx016 class 1E+30 -> +Normal -clasx017 class 1E+999 -> +Normal -clasx018 class 9.99999999E+999 -> +Normal -clasx019 class Inf -> +Infinity - -clasx021 class -0 -> -Zero -clasx022 class -0.00 -> -Zero -clasx023 class -0E+5 -> -Zero -clasx024 class -1E-1007 -> -Subnormal -clasx025 class -0.1E-999 -> -Subnormal -clasx026 class -0.99999999E-999 -> -Subnormal -clasx027 class -1.00000000E-999 -> -Normal -clasx028 class -1E-999 -> -Normal -clasx029 class -1E-100 -> -Normal -clasx030 class -1E-10 -> -Normal -clasx032 class -1E-1 -> -Normal -clasx033 class -1 -> -Normal -clasx034 class -2.50 -> -Normal -clasx035 class -100.100 -> -Normal -clasx036 class -1E+30 -> -Normal -clasx037 class -1E+999 -> -Normal -clasx038 class -9.99999999E+999 -> -Normal -clasx039 class -Inf -> -Infinity - -clasx041 class NaN -> NaN -clasx042 class -NaN -> NaN -clasx043 class +NaN12345 -> NaN -clasx044 class sNaN -> sNaN -clasx045 class -sNaN -> sNaN -clasx046 class +sNaN12345 -> sNaN - - --- decimal64 bounds - -precision: 16 -maxExponent: 384 -minExponent: -383 -clamp: 1 -rounding: half_even - -clasx201 class 0 -> +Zero -clasx202 
class 0.00 -> +Zero -clasx203 class 0E+5 -> +Zero -clasx204 class 1E-396 -> +Subnormal -clasx205 class 0.1E-383 -> +Subnormal -clasx206 class 0.999999999999999E-383 -> +Subnormal -clasx207 class 1.000000000000000E-383 -> +Normal -clasx208 class 1E-383 -> +Normal -clasx209 class 1E-100 -> +Normal -clasx210 class 1E-10 -> +Normal -clasx212 class 1E-1 -> +Normal -clasx213 class 1 -> +Normal -clasx214 class 2.50 -> +Normal -clasx215 class 100.100 -> +Normal -clasx216 class 1E+30 -> +Normal -clasx217 class 1E+384 -> +Normal -clasx218 class 9.999999999999999E+384 -> +Normal -clasx219 class Inf -> +Infinity - -clasx221 class -0 -> -Zero -clasx222 class -0.00 -> -Zero -clasx223 class -0E+5 -> -Zero -clasx224 class -1E-396 -> -Subnormal -clasx225 class -0.1E-383 -> -Subnormal -clasx226 class -0.999999999999999E-383 -> -Subnormal -clasx227 class -1.000000000000000E-383 -> -Normal -clasx228 class -1E-383 -> -Normal -clasx229 class -1E-100 -> -Normal -clasx230 class -1E-10 -> -Normal -clasx232 class -1E-1 -> -Normal -clasx233 class -1 -> -Normal -clasx234 class -2.50 -> -Normal -clasx235 class -100.100 -> -Normal -clasx236 class -1E+30 -> -Normal -clasx237 class -1E+384 -> -Normal -clasx238 class -9.999999999999999E+384 -> -Normal -clasx239 class -Inf -> -Infinity - -clasx241 class NaN -> NaN -clasx242 class -NaN -> NaN -clasx243 class +NaN12345 -> NaN -clasx244 class sNaN -> sNaN -clasx245 class -sNaN -> sNaN -clasx246 class +sNaN12345 -> sNaN - - - +------------------------------------------------------------------------ +-- class.decTest -- Class operations -- +-- Copyright (c) IBM Corporation, 1981, 2008. All rights reserved. -- +------------------------------------------------------------------------ +-- Please see the document "General Decimal Arithmetic Testcases" -- +-- at http://www2.hursley.ibm.com/decimal for the description of -- +-- these testcases. -- +-- -- +-- These testcases are experimental ('beta' versions), and they -- +-- may contain errors. They are offered on an as-is basis. In -- +-- particular, achieving the same results as the tests here is not -- +-- a guarantee that an implementation complies with any Standard -- +-- or specification. The tests are not exhaustive. 
-- +-- -- +-- Please send comments, suggestions, and corrections to the author: -- +-- Mike Cowlishaw, IBM Fellow -- +-- IBM UK, PO Box 31, Birmingham Road, Warwick CV34 5JL, UK -- +-- mfc at uk.ibm.com -- +------------------------------------------------------------------------ +version: 2.59 + +-- [New 2006.11.27] + +precision: 9 +maxExponent: 999 +minExponent: -999 +extended: 1 +clamp: 1 +rounding: half_even + +clasx001 class 0 -> +Zero +clasx002 class 0.00 -> +Zero +clasx003 class 0E+5 -> +Zero +clasx004 class 1E-1007 -> +Subnormal +clasx005 class 0.1E-999 -> +Subnormal +clasx006 class 0.99999999E-999 -> +Subnormal +clasx007 class 1.00000000E-999 -> +Normal +clasx008 class 1E-999 -> +Normal +clasx009 class 1E-100 -> +Normal +clasx010 class 1E-10 -> +Normal +clasx012 class 1E-1 -> +Normal +clasx013 class 1 -> +Normal +clasx014 class 2.50 -> +Normal +clasx015 class 100.100 -> +Normal +clasx016 class 1E+30 -> +Normal +clasx017 class 1E+999 -> +Normal +clasx018 class 9.99999999E+999 -> +Normal +clasx019 class Inf -> +Infinity + +clasx021 class -0 -> -Zero +clasx022 class -0.00 -> -Zero +clasx023 class -0E+5 -> -Zero +clasx024 class -1E-1007 -> -Subnormal +clasx025 class -0.1E-999 -> -Subnormal +clasx026 class -0.99999999E-999 -> -Subnormal +clasx027 class -1.00000000E-999 -> -Normal +clasx028 class -1E-999 -> -Normal +clasx029 class -1E-100 -> -Normal +clasx030 class -1E-10 -> -Normal +clasx032 class -1E-1 -> -Normal +clasx033 class -1 -> -Normal +clasx034 class -2.50 -> -Normal +clasx035 class -100.100 -> -Normal +clasx036 class -1E+30 -> -Normal +clasx037 class -1E+999 -> -Normal +clasx038 class -9.99999999E+999 -> -Normal +clasx039 class -Inf -> -Infinity + +clasx041 class NaN -> NaN +clasx042 class -NaN -> NaN +clasx043 class +NaN12345 -> NaN +clasx044 class sNaN -> sNaN +clasx045 class -sNaN -> sNaN +clasx046 class +sNaN12345 -> sNaN + + +-- decimal64 bounds + +precision: 16 +maxExponent: 384 +minExponent: -383 +clamp: 1 +rounding: half_even + +clasx201 class 0 -> +Zero +clasx202 class 0.00 -> +Zero +clasx203 class 0E+5 -> +Zero +clasx204 class 1E-396 -> +Subnormal +clasx205 class 0.1E-383 -> +Subnormal +clasx206 class 0.999999999999999E-383 -> +Subnormal +clasx207 class 1.000000000000000E-383 -> +Normal +clasx208 class 1E-383 -> +Normal +clasx209 class 1E-100 -> +Normal +clasx210 class 1E-10 -> +Normal +clasx212 class 1E-1 -> +Normal +clasx213 class 1 -> +Normal +clasx214 class 2.50 -> +Normal +clasx215 class 100.100 -> +Normal +clasx216 class 1E+30 -> +Normal +clasx217 class 1E+384 -> +Normal +clasx218 class 9.999999999999999E+384 -> +Normal +clasx219 class Inf -> +Infinity + +clasx221 class -0 -> -Zero +clasx222 class -0.00 -> -Zero +clasx223 class -0E+5 -> -Zero +clasx224 class -1E-396 -> -Subnormal +clasx225 class -0.1E-383 -> -Subnormal +clasx226 class -0.999999999999999E-383 -> -Subnormal +clasx227 class -1.000000000000000E-383 -> -Normal +clasx228 class -1E-383 -> -Normal +clasx229 class -1E-100 -> -Normal +clasx230 class -1E-10 -> -Normal +clasx232 class -1E-1 -> -Normal +clasx233 class -1 -> -Normal +clasx234 class -2.50 -> -Normal +clasx235 class -100.100 -> -Normal +clasx236 class -1E+30 -> -Normal +clasx237 class -1E+384 -> -Normal +clasx238 class -9.999999999999999E+384 -> -Normal +clasx239 class -Inf -> -Infinity + +clasx241 class NaN -> NaN +clasx242 class -NaN -> NaN +clasx243 class +NaN12345 -> NaN +clasx244 class sNaN -> sNaN +clasx245 class -sNaN -> sNaN +clasx246 class +sNaN12345 -> sNaN + + + diff --git a/lib-python/2.7/test/decimaltestdata/comparetotal.decTest 
b/lib-python/2.7/test/decimaltestdata/comparetotal.decTest --- a/lib-python/2.7/test/decimaltestdata/comparetotal.decTest +++ b/lib-python/2.7/test/decimaltestdata/comparetotal.decTest @@ -1,798 +1,798 @@ ------------------------------------------------------------------------- --- comparetotal.decTest -- decimal comparison using total ordering -- --- Copyright (c) IBM Corporation, 1981, 2008. All rights reserved. -- ------------------------------------------------------------------------- --- Please see the document "General Decimal Arithmetic Testcases" -- --- at http://www2.hursley.ibm.com/decimal for the description of -- --- these testcases. -- --- -- --- These testcases are experimental ('beta' versions), and they -- --- may contain errors. They are offered on an as-is basis. In -- --- particular, achieving the same results as the tests here is not -- --- a guarantee that an implementation complies with any Standard -- --- or specification. The tests are not exhaustive. -- --- -- --- Please send comments, suggestions, and corrections to the author: -- --- Mike Cowlishaw, IBM Fellow -- --- IBM UK, PO Box 31, Birmingham Road, Warwick CV34 5JL, UK -- --- mfc at uk.ibm.com -- ------------------------------------------------------------------------- -version: 2.59 - --- Note that we cannot assume add/subtract tests cover paths adequately, --- here, because the code might be quite different (comparison cannot --- overflow or underflow, so actual subtractions are not necessary). --- Similarly, comparetotal will have some radically different paths --- than compare. - -extended: 1 -precision: 16 -rounding: half_up -maxExponent: 384 -minExponent: -383 - --- sanity checks -cotx001 comparetotal -2 -2 -> 0 -cotx002 comparetotal -2 -1 -> -1 -cotx003 comparetotal -2 0 -> -1 -cotx004 comparetotal -2 1 -> -1 -cotx005 comparetotal -2 2 -> -1 -cotx006 comparetotal -1 -2 -> 1 -cotx007 comparetotal -1 -1 -> 0 -cotx008 comparetotal -1 0 -> -1 -cotx009 comparetotal -1 1 -> -1 -cotx010 comparetotal -1 2 -> -1 -cotx011 comparetotal 0 -2 -> 1 -cotx012 comparetotal 0 -1 -> 1 -cotx013 comparetotal 0 0 -> 0 -cotx014 comparetotal 0 1 -> -1 -cotx015 comparetotal 0 2 -> -1 -cotx016 comparetotal 1 -2 -> 1 -cotx017 comparetotal 1 -1 -> 1 -cotx018 comparetotal 1 0 -> 1 -cotx019 comparetotal 1 1 -> 0 -cotx020 comparetotal 1 2 -> -1 -cotx021 comparetotal 2 -2 -> 1 -cotx022 comparetotal 2 -1 -> 1 -cotx023 comparetotal 2 0 -> 1 -cotx025 comparetotal 2 1 -> 1 -cotx026 comparetotal 2 2 -> 0 - -cotx031 comparetotal -20 -20 -> 0 -cotx032 comparetotal -20 -10 -> -1 -cotx033 comparetotal -20 00 -> -1 -cotx034 comparetotal -20 10 -> -1 -cotx035 comparetotal -20 20 -> -1 -cotx036 comparetotal -10 -20 -> 1 -cotx037 comparetotal -10 -10 -> 0 -cotx038 comparetotal -10 00 -> -1 -cotx039 comparetotal -10 10 -> -1 -cotx040 comparetotal -10 20 -> -1 -cotx041 comparetotal 00 -20 -> 1 -cotx042 comparetotal 00 -10 -> 1 -cotx043 comparetotal 00 00 -> 0 -cotx044 comparetotal 00 10 -> -1 -cotx045 comparetotal 00 20 -> -1 -cotx046 comparetotal 10 -20 -> 1 -cotx047 comparetotal 10 -10 -> 1 -cotx048 comparetotal 10 00 -> 1 -cotx049 comparetotal 10 10 -> 0 -cotx050 comparetotal 10 20 -> -1 -cotx051 comparetotal 20 -20 -> 1 -cotx052 comparetotal 20 -10 -> 1 -cotx053 comparetotal 20 00 -> 1 -cotx055 comparetotal 20 10 -> 1 -cotx056 comparetotal 20 20 -> 0 - -cotx061 comparetotal -2.0 -2.0 -> 0 -cotx062 comparetotal -2.0 -1.0 -> -1 -cotx063 comparetotal -2.0 0.0 -> -1 -cotx064 comparetotal -2.0 1.0 -> -1 -cotx065 comparetotal -2.0 2.0 -> -1 -cotx066 
comparetotal -1.0 -2.0 -> 1 -cotx067 comparetotal -1.0 -1.0 -> 0 -cotx068 comparetotal -1.0 0.0 -> -1 -cotx069 comparetotal -1.0 1.0 -> -1 -cotx070 comparetotal -1.0 2.0 -> -1 From noreply at buildbot.pypy.org Tue Feb 14 14:45:37 2012 From: noreply at buildbot.pypy.org (fijal) Date: Tue, 14 Feb 2012 14:45:37 +0100 (CET) Subject: [pypy-commit] pypy numpy-record-dtypes: make record dtypes work Message-ID: <20120214134537.BAD708203C@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-record-dtypes Changeset: r52448:3c80e27e4f2e Date: 2012-02-14 15:45 +0200 http://bitbucket.org/pypy/pypy/changeset/3c80e27e4f2e/ Log: make record dtypes work diff --git a/pypy/module/micronumpy/interp_boxes.py b/pypy/module/micronumpy/interp_boxes.py --- a/pypy/module/micronumpy/interp_boxes.py +++ b/pypy/module/micronumpy/interp_boxes.py @@ -184,10 +184,8 @@ except KeyError: raise OperationError(space.w_IndexError, space.wrap("Field %s does not exist" % item)) - self.arr.dtype.itemtype.get_element_size() - return dtype.itemtype.read(self.arr, - dtype.itemtype.get_element_size(), self.i, - ofs) + width = self.arr.dtype.itemtype.get_element_size() + return dtype.itemtype.read(self.arr, width, self.i, ofs) @unwrap_spec(item=str) def descr_setitem(self, space, item, w_value): @@ -196,8 +194,8 @@ except KeyError: raise OperationError(space.w_IndexError, space.wrap("Field %s does not exist" % item)) - dtype.itemtype.store(self.arr, - dtype.itemtype.get_element_size(), 0, ofs, + width = self.arr.dtype.itemtype.get_element_size() + dtype.itemtype.store(self.arr, width, self.i, ofs, dtype.coerce(space, w_value)) class W_CharacterBox(W_FlexibleBox): diff --git a/pypy/module/micronumpy/interp_dtype.py b/pypy/module/micronumpy/interp_dtype.py --- a/pypy/module/micronumpy/interp_dtype.py +++ b/pypy/module/micronumpy/interp_dtype.py @@ -112,6 +112,11 @@ def is_bool_type(self): return self.kind == BOOLLTR + def __repr__(self): + if self.fields is not None: + return '' % self.fields + return '' % self.itemtype + def dtype_from_list(space, w_lst): lst_w = space.listview(w_lst) fields = {} diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -1806,3 +1806,7 @@ assert a[0]['x'] == 13 a[1] = (1, 2) assert a[1]['y'] == 2 + b = zeros(2, dtype=[('x', int), ('y', float)]) + b[1] = a[1] + assert a[1]['y'] == 2 + diff --git a/pypy/module/micronumpy/types.py b/pypy/module/micronumpy/types.py --- a/pypy/module/micronumpy/types.py +++ b/pypy/module/micronumpy/types.py @@ -76,6 +76,9 @@ zero=True, flavor="raw", track_allocation=False, add_memory_pressure=True) + def __repr__(self): + return self.__class__.__name__ + class Primitive(object): _mixin_ = True @@ -644,6 +647,8 @@ @jit.unroll_safe def coerce(self, space, dtype, w_item): + if isinstance(w_item, interp_boxes.W_VoidBox): + return w_item from pypy.module.micronumpy.interp_numarray import W_NDimArray # we treat every sequence as sequence, no special support # for arrays @@ -663,15 +668,13 @@ w_item = items_w[i] w_box = itemtype.coerce(space, subdtype, w_item) width = itemtype.get_element_size() - import pdb - pdb.set_trace() itemtype.store(arr, width, 0, ofs, w_box) return interp_boxes.W_VoidBox(arr, 0) @jit.unroll_safe def store(self, arr, width, i, ofs, box): for k in range(width): - arr[k + i] = box.arr.storage[k + box.i] + arr[k + i * width] = box.arr.storage[k + box.i * width] for tp in [Int32, Int64]: if 
tp.T == lltype.Signed: From noreply at buildbot.pypy.org Tue Feb 14 14:54:09 2012 From: noreply at buildbot.pypy.org (fijal) Date: Tue, 14 Feb 2012 14:54:09 +0100 (CET) Subject: [pypy-commit] pypy numpy-record-dtypes: fix tests Message-ID: <20120214135409.C53C98203C@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-record-dtypes Changeset: r52449:6ba10f180b4f Date: 2012-02-14 15:53 +0200 http://bitbucket.org/pypy/pypy/changeset/6ba10f180b4f/ Log: fix tests diff --git a/pypy/module/micronumpy/compile.py b/pypy/module/micronumpy/compile.py --- a/pypy/module/micronumpy/compile.py +++ b/pypy/module/micronumpy/compile.py @@ -51,6 +51,8 @@ w_long = "long" w_tuple = 'tuple' w_slice = "slice" + w_str = "str" + w_unicode = "unicode" def __init__(self): """NOT_RPYTHON""" @@ -91,8 +93,12 @@ return BoolObject(obj) elif isinstance(obj, int): return IntObject(obj) + elif isinstance(obj, long): + return LongObject(obj) elif isinstance(obj, W_Root): return obj + elif isinstance(obj, str): + return StringObject(obj) raise NotImplementedError def newlist(self, items): @@ -151,7 +157,13 @@ return instantiate(klass) def newtuple(self, list_w): - raise ValueError + return ListObject(list_w) + + def newdict(self): + return {} + + def setitem(self, dict, item, value): + dict[item] = value def len_w(self, w_obj): if isinstance(w_obj, ListObject): @@ -178,6 +190,11 @@ def __init__(self, intval): self.intval = intval +class LongObject(W_Root): + tp = FakeSpace.w_long + def __init__(self, intval): + self.intval = intval + class ListObject(W_Root): tp = FakeSpace.w_list def __init__(self, items): @@ -190,6 +207,11 @@ self.stop = stop self.step = step +class StringObject(W_Root): + tp = FakeSpace.w_str + def __init__(self, v): + self.v = v + class InterpreterState(object): def __init__(self, code): self.code = code diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -927,7 +927,7 @@ def setitem(self, item, value): self.invalidated() - self.dtype.setitem(self.storage, item, value) + self.dtype.setitem(self, item, value) def calc_strides(self, shape): strides = [] @@ -1072,7 +1072,7 @@ """ def setitem(self, item, value): self.invalidated() - self.dtype.setitem(self.storage, item, value) + self.dtype.setitem(self, item, value) def setshape(self, space, new_shape): self.shape = new_shape @@ -1153,7 +1153,7 @@ # XXX we might want to have a jitdriver here for i in range(len(elems_w)): w_elem = elems_w[i] - dtype.setitem(arr.storage, arr_iter.offset, + dtype.setitem(arr, arr_iter.offset, dtype.coerce(space, w_elem)) arr_iter = arr_iter.next(shapelen) return arr diff --git a/pypy/module/micronumpy/signature.py b/pypy/module/micronumpy/signature.py --- a/pypy/module/micronumpy/signature.py +++ b/pypy/module/micronumpy/signature.py @@ -142,11 +142,10 @@ from pypy.module.micronumpy.interp_numarray import ConcreteArray concr = arr.get_concrete() assert isinstance(concr, ConcreteArray) - storage = concr.storage if self.iter_no >= len(iterlist): iterlist.append(concr.create_iter(transforms)) if self.array_no >= len(arraylist): - arraylist.append(storage) + arraylist.append(concr) def eval(self, frame, arr): iter = frame.iterators[self.iter_no] From noreply at buildbot.pypy.org Tue Feb 14 15:02:07 2012 From: noreply at buildbot.pypy.org (fijal) Date: Tue, 14 Feb 2012 15:02:07 +0100 (CET) Subject: [pypy-commit] pypy numpy-record-dtypes: oops Message-ID: 
<20120214140207.862418203C@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-record-dtypes Changeset: r52450:92fcbec416cf Date: 2012-02-14 16:01 +0200 http://bitbucket.org/pypy/pypy/changeset/92fcbec416cf/ Log: oops diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -12,7 +12,7 @@ class MockDtype(object): class itemtype(object): - @classmethod + @staticmethod def malloc(size): return None From noreply at buildbot.pypy.org Tue Feb 14 15:05:24 2012 From: noreply at buildbot.pypy.org (fijal) Date: Tue, 14 Feb 2012 15:05:24 +0100 (CET) Subject: [pypy-commit] pypy numpy-record-dtypes: fix more tests Message-ID: <20120214140524.5900A8203C@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-record-dtypes Changeset: r52451:74906f74143e Date: 2012-02-14 16:05 +0200 http://bitbucket.org/pypy/pypy/changeset/74906f74143e/ Log: fix more tests diff --git a/pypy/module/micronumpy/interp_dtype.py b/pypy/module/micronumpy/interp_dtype.py --- a/pypy/module/micronumpy/interp_dtype.py +++ b/pypy/module/micronumpy/interp_dtype.py @@ -46,7 +46,7 @@ def getitem_bool(self, arr, i): isize = self.itemtype.get_element_size() - return self.itemtype.read_bool(arr.storage, isize, i, 0) + return self.itemtype.read_bool(arr, isize, i, 0) def setitem(self, arr, i, box): self.itemtype.store(arr, self.itemtype.get_element_size(), i, 0, box) diff --git a/pypy/module/micronumpy/interp_support.py b/pypy/module/micronumpy/interp_support.py --- a/pypy/module/micronumpy/interp_support.py +++ b/pypy/module/micronumpy/interp_support.py @@ -53,7 +53,7 @@ a = W_NDimArray(num_items, [num_items], dtype=dtype) for i, val in enumerate(items): - a.dtype.setitem(a.storage, i, val) + a.dtype.setitem(a, i, val) return space.wrap(a) From noreply at buildbot.pypy.org Tue Feb 14 15:27:56 2012 From: noreply at buildbot.pypy.org (fijal) Date: Tue, 14 Feb 2012 15:27:56 +0100 (CET) Subject: [pypy-commit] pypy numpy-record-dtypes: speedup tests a bti and fix one more Message-ID: <20120214142756.CC49E8203C@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-record-dtypes Changeset: r52452:575766b2e255 Date: 2012-02-14 16:27 +0200 http://bitbucket.org/pypy/pypy/changeset/575766b2e255/ Log: speedup tests a bti and fix one more diff --git a/pypy/module/micronumpy/interp_support.py b/pypy/module/micronumpy/interp_support.py --- a/pypy/module/micronumpy/interp_support.py +++ b/pypy/module/micronumpy/interp_support.py @@ -74,7 +74,7 @@ a = W_NDimArray(count, [count], dtype=dtype) for i in range(count): val = dtype.itemtype.runpack_str(s[i*itemsize:i*itemsize + itemsize]) - a.dtype.setitem(a.storage, i, val) + a.dtype.setitem(a, i, val) return space.wrap(a) diff --git a/pypy/module/micronumpy/types.py b/pypy/module/micronumpy/types.py --- a/pypy/module/micronumpy/types.py +++ b/pypy/module/micronumpy/types.py @@ -6,7 +6,7 @@ from pypy.module.micronumpy import interp_boxes from pypy.objspace.std.floatobject import float2string from pypy.rlib import rfloat, libffi, clibffi -from pypy.rlib.objectmodel import specialize +from pypy.rlib.objectmodel import specialize, we_are_translated from pypy.rlib.rarithmetic import widen, byteswap from pypy.rpython.lltypesystem import lltype, rffi from pypy.rlib.rstruct.runpack import runpack @@ -115,8 +115,11 @@ raise NotImplementedError def _read(self, storage, width, i, offset): - return 
libffi.array_getitem(clibffi.cast_type_to_ffitype(self.T), - width, storage, i, offset) + if we_are_translated(): + return libffi.array_getitem(clibffi.cast_type_to_ffitype(self.T), + width, storage, i, offset) + else: + return libffi.array_getitem_T(self.T, width, storage, i, offset) def read(self, arr, width, i, offset): return self.box(self._read(arr.storage, width, i, offset)) @@ -125,8 +128,11 @@ return bool(self.for_computation(self._read(arr.storage, width, i, offset))) def _write(self, storage, width, i, offset, value): - libffi.array_setitem(clibffi.cast_type_to_ffitype(self.T), - width, storage, i, offset, value) + if we_are_translated(): + libffi.array_setitem(clibffi.cast_type_to_ffitype(self.T), + width, storage, i, offset, value) + else: + libffi.array_setitem_T(self.T, width, storage, i, offset, value) def store(self, arr, width, i, offset, box): diff --git a/pypy/rlib/libffi.py b/pypy/rlib/libffi.py --- a/pypy/rlib/libffi.py +++ b/pypy/rlib/libffi.py @@ -424,6 +424,11 @@ return rffi.cast(rffi.CArrayPtr(TYPE), addr)[0] assert False +def array_getitem_T(TYPE, width, addr, index, offset): + addr = rffi.ptradd(addr, index * width) + addr = rffi.ptradd(addr, offset) + return rffi.cast(rffi.CArrayPtr(TYPE), addr)[0] + @specialize.call_location() @jit.oopspec("libffi_array_setitem(ffitype, width, addr, index, offset, value)") def array_setitem(ffitype, width, addr, index, offset, value): @@ -434,3 +439,8 @@ rffi.cast(rffi.CArrayPtr(TYPE), addr)[0] = value return assert False + +def array_setitem_T(TYPE, width, addr, index, offset, value): + addr = rffi.ptradd(addr, index * width) + addr = rffi.ptradd(addr, offset) + rffi.cast(rffi.CArrayPtr(TYPE), addr)[0] = value From noreply at buildbot.pypy.org Tue Feb 14 15:29:33 2012 From: noreply at buildbot.pypy.org (fijal) Date: Tue, 14 Feb 2012 15:29:33 +0100 (CET) Subject: [pypy-commit] pypy numpy-record-dtypes: fix one more test Message-ID: <20120214142933.4ED788203C@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-record-dtypes Changeset: r52453:a06d3e71c2b6 Date: 2012-02-14 16:29 +0200 http://bitbucket.org/pypy/pypy/changeset/a06d3e71c2b6/ Log: fix one more test diff --git a/pypy/module/micronumpy/types.py b/pypy/module/micronumpy/types.py --- a/pypy/module/micronumpy/types.py +++ b/pypy/module/micronumpy/types.py @@ -680,7 +680,7 @@ @jit.unroll_safe def store(self, arr, width, i, ofs, box): for k in range(width): - arr[k + i * width] = box.arr.storage[k + box.i * width] + arr.storage[k + i * width] = box.arr.storage[k + box.i * width] for tp in [Int32, Int64]: if tp.T == lltype.Signed: From noreply at buildbot.pypy.org Tue Feb 14 16:33:10 2012 From: noreply at buildbot.pypy.org (hager) Date: Tue, 14 Feb 2012 16:33:10 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: add class scratch_reg as help for management of the scratch registers Message-ID: <20120214153310.623B48203C@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r52454:cd1b5bef88c7 Date: 2012-02-14 16:30 +0100 http://bitbucket.org/pypy/pypy/changeset/cd1b5bef88c7/ Log: add class scratch_reg as help for management of the scratch registers diff --git a/pypy/jit/backend/ppc/codebuilder.py b/pypy/jit/backend/ppc/codebuilder.py --- a/pypy/jit/backend/ppc/codebuilder.py +++ b/pypy/jit/backend/ppc/codebuilder.py @@ -1174,6 +1174,16 @@ f.write(data[i]) f.close() +class scratch_reg(object): + def __init__(self, mc): + self.mc = mc + + def __enter__(self): + self.mc.alloc_scratch_reg() + + def __exit__(self): + 
self.mc.free_scratch_reg() + class BranchUpdater(PPCAssembler): def __init__(self): PPCAssembler.__init__(self) From noreply at buildbot.pypy.org Tue Feb 14 16:33:12 2012 From: noreply at buildbot.pypy.org (hager) Date: Tue, 14 Feb 2012 16:33:12 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: adjust malloc_cond to arm code Message-ID: <20120214153312.5D6148203C@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r52455:6ad1b03fd0c9 Date: 2012-02-14 16:32 +0100 http://bitbucket.org/pypy/pypy/changeset/6ad1b03fd0c9/ Log: adjust malloc_cond to arm code diff --git a/pypy/jit/backend/ppc/ppc_assembler.py b/pypy/jit/backend/ppc/ppc_assembler.py --- a/pypy/jit/backend/ppc/ppc_assembler.py +++ b/pypy/jit/backend/ppc/ppc_assembler.py @@ -7,7 +7,8 @@ from pypy.jit.backend.ppc.assembler import Assembler from pypy.jit.backend.ppc.opassembler import OpAssembler from pypy.jit.backend.ppc.symbol_lookup import lookup -from pypy.jit.backend.ppc.codebuilder import PPCBuilder, OverwritingBuilder +from pypy.jit.backend.ppc.codebuilder import (PPCBuilder, OverwritingBuilder, + scratch_reg) from pypy.jit.backend.ppc.jump import remap_frame_layout from pypy.jit.backend.ppc.arch import (IS_PPC_32, IS_PPC_64, WORD, NONVOLATILES, MAX_REG_PARAMS, @@ -917,38 +918,31 @@ def malloc_cond(self, nursery_free_adr, nursery_top_adr, size): assert size & (WORD-1) == 0 # must be correctly aligned - size = max(size, self.cpu.gc_ll_descr.minimal_size_in_nursery) - size = (size + WORD - 1) & ~(WORD - 1) # round up - self.mc.load_imm(r.r3, nursery_free_adr) - self.mc.load(r.r3.value, r.r3.value, 0) + self.mc.load_imm(r.RES.value, nursery_free_adr) + self.mc.load(r.RES.value, r.RES.value, 0) if _check_imm_arg(size): - self.mc.addi(r.r4.value, r.r3.value, size) + self.mc.addi(r.r4.value, r.RES.value, size) else: - self.mc.load_imm(r.r4, size) - self.mc.add(r.r4.value, r.r3.value, r.r4.value) + self.mc.load_imm(r.r4.value, size) + self.mc.add(r.r4.value, r.RES.value, r.r4.value) - # XXX maybe use an offset from the value nursery_free_addr - self.mc.load_imm(r.r3, nursery_top_adr) - self.mc.load(r.r3.value, r.r3.value, 0) + with scratch_reg(self.mc): + self.mc.gen_load_int(r.SCRATCH.value, nursery_top_adr) + self.mc.loadx(r.SCRATCH.value, 0, r.SCRATCH.value) - self.mc.cmp_op(0, r.r4.value, r.r3.value, signed=False) - + self.mc.cmp_op(0, r.r4.value, r.SCRATCH.value, signed=False) fast_jmp_pos = self.mc.currpos() self.mc.nop() - # XXX update - # See comments in _build_malloc_slowpath for the - # details of the two helper functions that we are calling below. - # First, we need to call two of them and not just one because we - # need to have a mark_gc_roots() in between. Then the calling - # convention of slowpath_addr{1,2} are tweaked a lot to allow - # the code here to be just two CALLs: slowpath_addr1 gets the - # size of the object to allocate from (EDX-EAX) and returns the - # result in EAX; self.malloc_slowpath additionally returns in EDX a - # copy of heap(nursery_free_adr), so that the final MOV below is - # a no-op. + # We load into r3 the address stored at nursery_free_adr. We calculate + # the new value for nursery_free_adr and store in r1 The we load the + # address stored in nursery_top_adr into IP If the value in r4 is + # (unsigned) bigger than the one in ip we conditionally call + # malloc_slowpath in case we called malloc_slowpath, which returns the + # new value of nursery_free_adr in r4 and the adr of the new object in + # r3. 
self.mark_gc_roots(self.write_new_force_index(), use_copy_area=True) self.mc.call(self.malloc_slowpath) @@ -956,9 +950,10 @@ offset = self.mc.currpos() - fast_jmp_pos pmc = OverwritingBuilder(self.mc, fast_jmp_pos, 1) pmc.bc(4, 1, offset) # jump if LE (not GT) - - self.mc.load_imm(r.r3, nursery_free_adr) - self.mc.store(r.r4.value, r.r3.value, 0) + + with scratch_reg(self.mc): + self.mc.load_imm(r.SCRATCH.value, nursery_free_adr) + self.mc.storex(r.r1.value, 0, r.SCRATCH.value) def mark_gc_roots(self, force_index, use_copy_area=False): if force_index < 0: From noreply at buildbot.pypy.org Tue Feb 14 16:33:14 2012 From: noreply at buildbot.pypy.org (hager) Date: Tue, 14 Feb 2012 16:33:14 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: adjustments to ARM code Message-ID: <20120214153314.B4E388203C@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r52456:2de2afaaf62f Date: 2012-02-14 16:32 +0100 http://bitbucket.org/pypy/pypy/changeset/2de2afaaf62f/ Log: adjustments to ARM code diff --git a/pypy/jit/backend/ppc/runner.py b/pypy/jit/backend/ppc/runner.py --- a/pypy/jit/backend/ppc/runner.py +++ b/pypy/jit/backend/ppc/runner.py @@ -30,13 +30,14 @@ gcdescr=None): if gcdescr is not None: gcdescr.force_index_ofs = FORCE_INDEX_OFS + # XXX for now the ppc backend does not support the gcremovetypeptr + # translation option + assert gcdescr.config.translation.gcremovetypeptr is False AbstractLLCPU.__init__(self, rtyper, stats, opts, translate_support_code, gcdescr) # floats are not supported yet self.supports_floats = False - self.total_compiled_loops = 0 - self.total_compiled_bridges = 0 def setup(self): self.asm = AssemblerPPC(self) @@ -44,20 +45,24 @@ def setup_once(self): self.asm.setup_once() + def finish_once(self): + self.asm.finish_once() + def compile_loop(self, inputargs, operations, looptoken, log=True, name=""): - self.asm.assemble_loop(inputargs, operations, looptoken, log) + return self.asm.assemble_loop(inputargs, operations, looptoken, log) def compile_bridge(self, faildescr, inputargs, operations, original_loop_token, log=False): clt = original_loop_token.compiled_loop_token clt.compiling_a_bridge() - self.asm.assemble_bridge(faildescr, inputargs, operations, + return self.asm.assemble_bridge(faildescr, inputargs, operations, original_loop_token, log=log) def clear_latest_values(self, count): + setitem = self.asm.fail_boxes_ptr.setitem null = lltype.nullptr(llmemory.GCREF.TO) for index in range(count): - self.asm.fail_boxes_ptr.setitem(index, null) + setitem(index, null) # executes the stored machine code in the token def make_execute_token(self, *ARGS): From noreply at buildbot.pypy.org Tue Feb 14 16:36:59 2012 From: noreply at buildbot.pypy.org (hager) Date: Tue, 14 Feb 2012 16:36:59 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: kill unused import Message-ID: <20120214153659.E33628203C@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r52457:fd6bb16bad74 Date: 2012-02-14 16:36 +0100 http://bitbucket.org/pypy/pypy/changeset/fd6bb16bad74/ Log: kill unused import diff --git a/pypy/jit/backend/ppc/ppc_assembler.py b/pypy/jit/backend/ppc/ppc_assembler.py --- a/pypy/jit/backend/ppc/ppc_assembler.py +++ b/pypy/jit/backend/ppc/ppc_assembler.py @@ -9,7 +9,6 @@ from pypy.jit.backend.ppc.symbol_lookup import lookup from pypy.jit.backend.ppc.codebuilder import (PPCBuilder, OverwritingBuilder, scratch_reg) -from pypy.jit.backend.ppc.jump import remap_frame_layout from pypy.jit.backend.ppc.arch import (IS_PPC_32, 
IS_PPC_64, WORD, NONVOLATILES, MAX_REG_PARAMS, GPR_SAVE_AREA, BACKCHAIN_SIZE, From noreply at buildbot.pypy.org Tue Feb 14 16:48:09 2012 From: noreply at buildbot.pypy.org (hager) Date: Tue, 14 Feb 2012 16:48:09 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: add emit_call_malloc_nursery Message-ID: <20120214154809.B661E8203C@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r52458:01a80f3d0122 Date: 2012-02-14 16:47 +0100 http://bitbucket.org/pypy/pypy/changeset/01a80f3d0122/ Log: add emit_call_malloc_nursery diff --git a/pypy/jit/backend/ppc/opassembler.py b/pypy/jit/backend/ppc/opassembler.py --- a/pypy/jit/backend/ppc/opassembler.py +++ b/pypy/jit/backend/ppc/opassembler.py @@ -859,6 +859,17 @@ self.emit_call(op, arglocs, regalloc) self.propagate_memoryerror_if_r3_is_null() + def emit_call_malloc_nursery(self, op, arglocs, regalloc): + # registers r3 and r4 are allocated for this call + assert len(arglocs) == 1 + size = arglocs[0].value + gc_ll_descr = self.cpu.gc_ll_descr + self.malloc_cond( + gc_ll_descr.get_nursery_free_addr(), + gc_ll_descr.get_nursery_top_addr(), + size + ) + def set_vtable(self, box, vtable): if self.cpu.vtable_offset is not None: adr = rffi.cast(lltype.Signed, vtable) From noreply at buildbot.pypy.org Tue Feb 14 16:55:49 2012 From: noreply at buildbot.pypy.org (hager) Date: Tue, 14 Feb 2012 16:55:49 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: adjust prepare_call_malloc_nursery Message-ID: <20120214155549.DCB138203C@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r52459:15d85f32a5db Date: 2012-02-14 16:55 +0100 http://bitbucket.org/pypy/pypy/changeset/15d85f32a5db/ Log: adjust prepare_call_malloc_nursery diff --git a/pypy/jit/backend/ppc/regalloc.py b/pypy/jit/backend/ppc/regalloc.py --- a/pypy/jit/backend/ppc/regalloc.py +++ b/pypy/jit/backend/ppc/regalloc.py @@ -785,16 +785,10 @@ self.rm.force_allocate_reg(op.result, selected_reg=r.r3) t = TempInt() - self.rm.force_allocate_reg(t, selected_reg=r.r4) + self.rm.force_allocate_reg(t, selected_reg=r.r1) self.possibly_free_var(op.result) self.possibly_free_var(t) - - gc_ll_descr = self.assembler.cpu.gc_ll_descr - self.assembler.malloc_cond( - gc_ll_descr.get_nursery_free_addr(), - gc_ll_descr.get_nursery_top_addr(), - size - ) + return [imm(size)] def get_mark_gc_roots(self, gcrootmap, use_copy_area=False): shape = gcrootmap.get_basic_shape(False) From noreply at buildbot.pypy.org Tue Feb 14 17:06:49 2012 From: noreply at buildbot.pypy.org (hager) Date: Tue, 14 Feb 2012 17:06:49 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: forgot to change register Message-ID: <20120214160649.F12318203C@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r52460:bf0389592aae Date: 2012-02-14 17:01 +0100 http://bitbucket.org/pypy/pypy/changeset/bf0389592aae/ Log: forgot to change register diff --git a/pypy/jit/backend/ppc/regalloc.py b/pypy/jit/backend/ppc/regalloc.py --- a/pypy/jit/backend/ppc/regalloc.py +++ b/pypy/jit/backend/ppc/regalloc.py @@ -785,7 +785,7 @@ self.rm.force_allocate_reg(op.result, selected_reg=r.r3) t = TempInt() - self.rm.force_allocate_reg(t, selected_reg=r.r1) + self.rm.force_allocate_reg(t, selected_reg=r.r4) self.possibly_free_var(op.result) self.possibly_free_var(t) return [imm(size)] From noreply at buildbot.pypy.org Tue Feb 14 17:28:12 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Tue, 14 Feb 2012 17:28:12 +0100 (CET) Subject: [pypy-commit] pypy py3k: add a more useful repr 
for OperationError, and add a warning when using pycode.dump() Message-ID: <20120214162812.1F4A18203C@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52461:986f8604e616 Date: 2012-02-14 12:13 +0100 http://bitbucket.org/pypy/pypy/changeset/986f8604e616/ Log: add a more useful repr for OperationError, and add a warning when using pycode.dump() diff --git a/pypy/interpreter/error.py b/pypy/interpreter/error.py --- a/pypy/interpreter/error.py +++ b/pypy/interpreter/error.py @@ -59,6 +59,10 @@ s = self._compute_value() return '[%s: %s]' % (self.w_type, s) + def __repr__(self): + "NOT_RPYTHON" + return 'OperationError(%s)' % (self.w_type) + def errorstr(self, space, use_repr=False): "The exception class and value, as a string." w_value = self.get_w_value(space) diff --git a/pypy/interpreter/pycode.py b/pypy/interpreter/pycode.py --- a/pypy/interpreter/pycode.py +++ b/pypy/interpreter/pycode.py @@ -287,6 +287,8 @@ def dump(self): """A dis.dis() dump of the code object.""" + print 'WARNING: dumping a py3k bytecode using python2 opmap, the result might be inaccurate or wrong' + print co = self._to_code() dis.dis(co) From noreply at buildbot.pypy.org Tue Feb 14 17:28:13 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Tue, 14 Feb 2012 17:28:13 +0100 (CET) Subject: [pypy-commit] pypy py3k: emit and implement POP_EXCEPT, which is needed for lexical exception handlers. This is equivalent to part of the patch at http://bugs.python.org/issue3021 Message-ID: <20120214162813.B1FB98203C@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52462:3648ec4ef989 Date: 2012-02-14 12:15 +0100 http://bitbucket.org/pypy/pypy/changeset/3648ec4ef989/ Log: emit and implement POP_EXCEPT, which is needed for lexical exception handlers. This is equivalent to part of the patch at http://bugs.python.org/issue3021 diff --git a/pypy/interpreter/astcompiler/assemble.py b/pypy/interpreter/astcompiler/assemble.py --- a/pypy/interpreter/astcompiler/assemble.py +++ b/pypy/interpreter/astcompiler/assemble.py @@ -550,7 +550,7 @@ ops.END_FINALLY : -3, ops.SETUP_WITH : 1, ops.SETUP_FINALLY : 0, - ops.SETUP_EXCEPT : 0, + ops.SETUP_EXCEPT : 4, ops.RETURN_VALUE : -1, ops.YIELD_VALUE : 0, diff --git a/pypy/interpreter/astcompiler/codegen.py b/pypy/interpreter/astcompiler/codegen.py --- a/pypy/interpreter/astcompiler/codegen.py +++ b/pypy/interpreter/astcompiler/codegen.py @@ -529,6 +529,7 @@ self.emit_op(ops.POP_TOP) self.emit_op(ops.POP_TOP) self.visit_sequence(handler.body) + self.emit_op(ops.POP_EXCEPT) self.emit_jump(ops.JUMP_FORWARD, end) self.use_next_block(next_except) self.emit_op(ops.END_FINALLY) diff --git a/pypy/interpreter/pyopcode.py b/pypy/interpreter/pyopcode.py --- a/pypy/interpreter/pyopcode.py +++ b/pypy/interpreter/pyopcode.py @@ -174,6 +174,7 @@ ec.bytecode_trace(self) next_instr = r_uint(self.last_instr) opcode = ord(co_code[next_instr]) + #print 'executing', self.last_instr, bytecode_spec.method_names[opcode] next_instr += 1 if space.config.objspace.logbytecodes: space.bytecodecounts[opcode] += 1 @@ -524,7 +525,9 @@ self.setdictscope(w_locals) def POP_EXCEPT(self, oparg, next_instr): - raise NotImplementedError + # on CPython, POP_EXCEPT also pops the block. 
Here, the block is + # automatically popped by unrollstack() + self.last_exception = self.popvalue() def POP_BLOCK(self, oparg, next_instr): block = self.pop_block() @@ -1268,6 +1271,7 @@ # the stack setup is slightly different than in CPython: # instead of the traceback, we store the unroller object, # wrapped. + frame.pushvalue(frame.last_exception) # this is popped by POP_EXCEPT frame.pushvalue(frame.space.wrap(unroller)) frame.pushvalue(operationerr.get_w_value(frame.space)) frame.pushvalue(operationerr.w_type) From noreply at buildbot.pypy.org Tue Feb 14 17:28:14 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Tue, 14 Feb 2012 17:28:14 +0100 (CET) Subject: [pypy-commit] pypy py3k: a passing test, from cpython's test suite Message-ID: <20120214162814.E11FD8203C@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52463:eabaea0df557 Date: 2012-02-14 14:07 +0100 http://bitbucket.org/pypy/pypy/changeset/eabaea0df557/ Log: a passing test, from cpython's test suite diff --git a/pypy/interpreter/test/test_raise.py b/pypy/interpreter/test/test_raise.py --- a/pypy/interpreter/test/test_raise.py +++ b/pypy/interpreter/test/test_raise.py @@ -72,7 +72,6 @@ assert sys.exc_info()[0] is ValueError assert sys.exc_info() == (None, None, None) - def test_raise_with___traceback__(self): import sys try: @@ -162,6 +161,17 @@ assert sys.exc_info()[2].tb_next is some_traceback """) + def test_nested_reraise(self): + raises(TypeError, """ + def nested_reraise(): + raise + try: + raise TypeError("foo") + except: + nested_reraise() + """) + + def test_userclass(self): # new-style classes can't be raised unless they inherit from # BaseException From noreply at buildbot.pypy.org Tue Feb 14 17:28:16 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Tue, 14 Feb 2012 17:28:16 +0100 (CET) Subject: [pypy-commit] pypy py3k: two more tests from cpython's test suite. The first passes, the second is failing Message-ID: <20120214162816.1AEF98203C@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52464:639f68fb00e9 Date: 2012-02-14 14:11 +0100 http://bitbucket.org/pypy/pypy/changeset/639f68fb00e9/ Log: two more tests from cpython's test suite. 
The first passes, the second is failing diff --git a/pypy/interpreter/test/test_raise.py b/pypy/interpreter/test/test_raise.py --- a/pypy/interpreter/test/test_raise.py +++ b/pypy/interpreter/test/test_raise.py @@ -171,6 +171,38 @@ nested_reraise() """) + def test_with_reraise_1(self): + class Context: + def __enter__(self): + return self + def __exit__(self, exc_type, exc_value, exc_tb): + return True + + def fn(): + try: + raise ValueError("foo") + except: + with Context(): + pass + raise + raises(ValueError, "fn()") + + + def test_with_reraise_2(self): + class Context: + def __enter__(self): + return self + def __exit__(self, exc_type, exc_value, exc_tb): + return True + + def fn(): + try: + raise ValueError("foo") + except: + with Context(): + raise KeyError("caught") + raise + raises(ValueError, "fn()") def test_userclass(self): # new-style classes can't be raised unless they inherit from From noreply at buildbot.pypy.org Tue Feb 14 17:28:17 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Tue, 14 Feb 2012 17:28:17 +0100 (CET) Subject: [pypy-commit] pypy py3k: fix test_with_reraise_2: when we pop a WithBlock, the exception must be considered already handled, and thus we don't want to restore it (which is different than FinallyBlock) Message-ID: <20120214162817.4B6968203C@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52465:84d8d3671171 Date: 2012-02-14 14:33 +0100 http://bitbucket.org/pypy/pypy/changeset/84d8d3671171/ Log: fix test_with_reraise_2: when we pop a WithBlock, the exception must be considered already handled, and thus we don't want to restore it (which is different than FinallyBlock) diff --git a/pypy/interpreter/pyopcode.py b/pypy/interpreter/pyopcode.py --- a/pypy/interpreter/pyopcode.py +++ b/pypy/interpreter/pyopcode.py @@ -1285,6 +1285,7 @@ _immutable_ = True _opname = 'SETUP_FINALLY' handling_mask = -1 # handles every kind of SuspendedUnroller + restore_last_exception = True # set to False by WithBlock def cleanup(self, frame): # upon normal entry into the finally: part, the standard Python @@ -1310,14 +1311,17 @@ frame.pushvalue(frame.space.wrap(unroller)) frame.pushvalue(frame.space.w_None) frame.pushvalue(frame.space.w_None) - if operationerr: + if operationerr and self.restore_last_exception: frame.last_exception = operationerr return self.handlerposition # jump to the handler + + class WithBlock(FinallyBlock): _immutable_ = True + restore_last_exception = False def really_handle(self, frame, unroller): if (frame.space.full_exceptions and From noreply at buildbot.pypy.org Tue Feb 14 17:49:19 2012 From: noreply at buildbot.pypy.org (fijal) Date: Tue, 14 Feb 2012 17:49:19 +0100 (CET) Subject: [pypy-commit] pypy raw-memory-pressure-nursery: Refactor a bit way we trck additional memory pressure - target is to support it also in the nursery Message-ID: <20120214164919.4E65B8203C@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: raw-memory-pressure-nursery Changeset: r52466:d712241e48ce Date: 2012-02-14 17:39 +0100 http://bitbucket.org/pypy/pypy/changeset/d712241e48ce/ Log: Refactor a bit way we trck additional memory pressure - target is to support it also in the nursery diff --git a/pypy/annotation/builtin.py b/pypy/annotation/builtin.py --- a/pypy/annotation/builtin.py +++ b/pypy/annotation/builtin.py @@ -411,8 +411,7 @@ from pypy.annotation.model import SomePtr from pypy.rpython.lltypesystem import lltype -def malloc(s_T, s_n=None, s_flavor=None, s_zero=None, s_track_allocation=None, - s_add_memory_pressure=None): 
+def malloc(s_T, s_n=None, s_flavor=None, s_zero=None, s_track_allocation=None): assert (s_n is None or s_n.knowntype == int or issubclass(s_n.knowntype, pypy.rlib.rarithmetic.base_int)) assert s_T.is_constant() @@ -428,8 +427,6 @@ else: assert s_flavor.is_constant() assert s_track_allocation is None or s_track_allocation.is_constant() - assert (s_add_memory_pressure is None or - s_add_memory_pressure.is_constant()) # not sure how to call malloc() for the example 'p' in the # presence of s_extraargs r = SomePtr(lltype.Ptr(s_T.const)) diff --git a/pypy/jit/codewriter/jtransform.py b/pypy/jit/codewriter/jtransform.py --- a/pypy/jit/codewriter/jtransform.py +++ b/pypy/jit/codewriter/jtransform.py @@ -502,7 +502,6 @@ def _rewrite_raw_malloc(self, op, name, args): d = op.args[1].value.copy() d.pop('flavor') - add_memory_pressure = d.pop('add_memory_pressure', False) zero = d.pop('zero', False) track_allocation = d.pop('track_allocation', True) if d: @@ -510,8 +509,6 @@ TYPE = op.args[0].value if zero: name += '_zero' - if add_memory_pressure: - name += '_add_memory_pressure' if not track_allocation: name += '_no_track_allocation' return self._do_builtin_call(op, name, args, diff --git a/pypy/jit/codewriter/support.py b/pypy/jit/codewriter/support.py --- a/pypy/jit/codewriter/support.py +++ b/pypy/jit/codewriter/support.py @@ -599,12 +599,10 @@ return _ll_0_alloc_with_del def build_raw_malloc_varsize_builder(zero=False, - add_memory_pressure=False, track_allocation=True): def build_ll_1_raw_malloc_varsize(ARRAY): def _ll_1_raw_malloc_varsize(n): return lltype.malloc(ARRAY, n, flavor='raw', zero=zero, - add_memory_pressure=add_memory_pressure, track_allocation=track_allocation) return _ll_1_raw_malloc_varsize return build_ll_1_raw_malloc_varsize @@ -613,26 +611,16 @@ build_raw_malloc_varsize_builder()) build_ll_1_raw_malloc_varsize_zero = ( build_raw_malloc_varsize_builder(zero=True)) - build_ll_1_raw_malloc_varsize_zero_add_memory_pressure = ( - build_raw_malloc_varsize_builder(zero=True, add_memory_pressure=True)) - build_ll_1_raw_malloc_varsize_add_memory_pressure = ( - build_raw_malloc_varsize_builder(add_memory_pressure=True)) build_ll_1_raw_malloc_varsize_no_track_allocation = ( build_raw_malloc_varsize_builder(track_allocation=False)) build_ll_1_raw_malloc_varsize_zero_no_track_allocation = ( build_raw_malloc_varsize_builder(zero=True, track_allocation=False)) - build_ll_1_raw_malloc_varsize_zero_add_memory_pressure_no_track_allocation = ( - build_raw_malloc_varsize_builder(zero=True, add_memory_pressure=True, track_allocation=False)) - build_ll_1_raw_malloc_varsize_add_memory_pressure_no_track_allocation = ( - build_raw_malloc_varsize_builder(add_memory_pressure=True, track_allocation=False)) def build_raw_malloc_fixedsize_builder(zero=False, - add_memory_pressure=False, track_allocation=True): def build_ll_0_raw_malloc_fixedsize(STRUCT): def _ll_0_raw_malloc_fixedsize(): return lltype.malloc(STRUCT, flavor='raw', zero=zero, - add_memory_pressure=add_memory_pressure, track_allocation=track_allocation) return _ll_0_raw_malloc_fixedsize return build_ll_0_raw_malloc_fixedsize @@ -641,18 +629,10 @@ build_raw_malloc_fixedsize_builder()) build_ll_0_raw_malloc_fixedsize_zero = ( build_raw_malloc_fixedsize_builder(zero=True)) - build_ll_0_raw_malloc_fixedsize_zero_add_memory_pressure = ( - build_raw_malloc_fixedsize_builder(zero=True, add_memory_pressure=True)) - build_ll_0_raw_malloc_fixedsize_add_memory_pressure = ( - build_raw_malloc_fixedsize_builder(add_memory_pressure=True)) 
build_ll_0_raw_malloc_fixedsize_no_track_allocation = ( build_raw_malloc_fixedsize_builder(track_allocation=False)) build_ll_0_raw_malloc_fixedsize_zero_no_track_allocation = ( build_raw_malloc_fixedsize_builder(zero=True, track_allocation=False)) - build_ll_0_raw_malloc_fixedsize_zero_add_memory_pressure_no_track_allocation = ( - build_raw_malloc_fixedsize_builder(zero=True, add_memory_pressure=True, track_allocation=False)) - build_ll_0_raw_malloc_fixedsize_add_memory_pressure_no_track_allocation = ( - build_raw_malloc_fixedsize_builder(add_memory_pressure=True, track_allocation=False)) def build_raw_free_builder(track_allocation=True): def build_ll_1_raw_free(ARRAY): diff --git a/pypy/rlib/rgc.py b/pypy/rlib/rgc.py --- a/pypy/rlib/rgc.py +++ b/pypy/rlib/rgc.py @@ -261,21 +261,24 @@ except Exception: return False # don't keep objects whose _freeze_() method explodes -def add_memory_pressure(estimate): - """Add memory pressure for OpaquePtrs.""" +def add_memory_pressure(owner, estimate): + """Add memory pressure for OpaquePtrs. Owner is either None or typically + the object which owns the reference (the one that would free it on __del__) + """ pass class AddMemoryPressureEntry(ExtRegistryEntry): _about_ = add_memory_pressure - def compute_result_annotation(self, s_nbytes): + def compute_result_annotation(self, s_owner, s_nbytes): from pypy.annotation import model as annmodel return annmodel.s_None def specialize_call(self, hop): - [v_size] = hop.inputargs(lltype.Signed) + [v_owner, v_size] = hop.inputargs(hop.args_r[0], lltype.Signed) hop.exception_cannot_occur() - return hop.genop('gc_add_memory_pressure', [v_size], + v_owner_addr = hop.genop('cast_ptr_to_adr', [v_owner], resulttype=llmemory.Address) + return hop.genop('gc_add_memory_pressure', [v_owner_addr, v_size], resulttype=lltype.Void) diff --git a/pypy/rpython/lltypesystem/lltype.py b/pypy/rpython/lltypesystem/lltype.py --- a/pypy/rpython/lltypesystem/lltype.py +++ b/pypy/rpython/lltypesystem/lltype.py @@ -1967,7 +1967,7 @@ def malloc(T, n=None, flavor='gc', immortal=False, zero=False, - track_allocation=True, add_memory_pressure=False): + track_allocation=True): assert flavor in ('gc', 'raw') if zero or immortal: initialization = 'example' diff --git a/pypy/rpython/memory/gc/minimark.py b/pypy/rpython/memory/gc/minimark.py --- a/pypy/rpython/memory/gc/minimark.py +++ b/pypy/rpython/memory/gc/minimark.py @@ -739,7 +739,7 @@ if self.max_heap_size < self.next_major_collection_threshold: self.next_major_collection_threshold = self.max_heap_size - def raw_malloc_memory_pressure(self, sizehint): + def raw_malloc_memory_pressure(self, obj, sizehint): self.next_major_collection_threshold -= sizehint if self.next_major_collection_threshold < 0: # cannot trigger a full collection now, but we can ensure diff --git a/pypy/rpython/memory/gc/test/test_direct.py b/pypy/rpython/memory/gc/test/test_direct.py --- a/pypy/rpython/memory/gc/test/test_direct.py +++ b/pypy/rpython/memory/gc/test/test_direct.py @@ -593,6 +593,10 @@ addr_byte = self.gc.get_card(addr_dst, 0) assert ord(addr_byte.char[0]) == 0x01 | 0x04 # bits 0 and 2 + def test_memory_pressure(self): + + + test_writebarrier_before_copy_preserving_cards.GC_PARAMS = { "card_page_indices": 4} diff --git a/pypy/rpython/memory/gctransform/framework.py b/pypy/rpython/memory/gctransform/framework.py --- a/pypy/rpython/memory/gctransform/framework.py +++ b/pypy/rpython/memory/gctransform/framework.py @@ -377,21 +377,12 @@ self.malloc_varsize_nonmovable_ptr = None if getattr(GCClass, 
'raw_malloc_memory_pressure', False): - def raw_malloc_memory_pressure_varsize(length, itemsize): - totalmem = length * itemsize - if totalmem > 0: - gcdata.gc.raw_malloc_memory_pressure(totalmem) - #else: probably an overflow -- the following rawmalloc - # will fail then - def raw_malloc_memory_pressure(sizehint): - gcdata.gc.raw_malloc_memory_pressure(sizehint) - self.raw_malloc_memory_pressure_varsize_ptr = getfn( - raw_malloc_memory_pressure_varsize, - [annmodel.SomeInteger(), annmodel.SomeInteger()], - annmodel.s_None, minimal_transform = False) + malloc_memory_pressure = func_with_new_name( + GCClass.raw_malloc_memory_pressure.im_func, + 'raw_malloc_memory_pressure') self.raw_malloc_memory_pressure_ptr = getfn( - raw_malloc_memory_pressure, - [annmodel.SomeInteger()], + malloc_memory_pressure, + [s_gc, annmodel.SomeAddress(), annmodel.SomeInteger(nonneg=True)], annmodel.s_None, minimal_transform = False) diff --git a/pypy/rpython/memory/gctransform/transform.py b/pypy/rpython/memory/gctransform/transform.py --- a/pypy/rpython/memory/gctransform/transform.py +++ b/pypy/rpython/memory/gctransform/transform.py @@ -560,10 +560,9 @@ def gct_gc_add_memory_pressure(self, hop): if hasattr(self, 'raw_malloc_memory_pressure_ptr'): op = hop.spaceop - size = op.args[0] return hop.genop("direct_call", [self.raw_malloc_memory_pressure_ptr, - size]) + self.c_const_gc, op.args[0], op.args[1]]) def varsize_malloc_helper(self, hop, flags, meth, extraargs): def intconst(c): return rmodel.inputconst(lltype.Signed, c) @@ -595,11 +594,6 @@ def gct_fv_raw_malloc_varsize(self, hop, flags, TYPE, v_length, c_const_size, c_item_size, c_offset_to_length): - if flags.get('add_memory_pressure', False): - if hasattr(self, 'raw_malloc_memory_pressure_varsize_ptr'): - hop.genop("direct_call", - [self.raw_malloc_memory_pressure_varsize_ptr, - v_length, c_item_size]) if c_offset_to_length is None: if flags.get('zero'): fnptr = self.raw_malloc_varsize_no_length_zero_ptr diff --git a/pypy/rpython/rbuiltin.py b/pypy/rpython/rbuiltin.py --- a/pypy/rpython/rbuiltin.py +++ b/pypy/rpython/rbuiltin.py @@ -341,17 +341,15 @@ BUILTIN_TYPER[object.__init__] = rtype_object__init__ # annotation of low-level types -def rtype_malloc(hop, i_flavor=None, i_zero=None, i_track_allocation=None, - i_add_memory_pressure=None): +def rtype_malloc(hop, i_flavor=None, i_zero=None, i_track_allocation=None): assert hop.args_s[0].is_constant() vlist = [hop.inputarg(lltype.Void, arg=0)] opname = 'malloc' - v_flavor, v_zero, v_track_allocation, v_add_memory_pressure = parse_kwds( + v_flavor, v_zero, v_track_allocation = parse_kwds( hop, (i_flavor, lltype.Void), (i_zero, None), - (i_track_allocation, None), - (i_add_memory_pressure, None)) + (i_track_allocation, None)) flags = {'flavor': 'gc'} if v_flavor is not None: @@ -360,8 +358,6 @@ flags['zero'] = v_zero.value if i_track_allocation is not None: flags['track_allocation'] = v_track_allocation.value - if i_add_memory_pressure is not None: - flags['add_memory_pressure'] = v_add_memory_pressure.value vlist.append(hop.inputconst(lltype.Void, flags)) assert 1 <= hop.nb_args <= 2 diff --git a/pypy/translator/c/test/test_newgc.py b/pypy/translator/c/test/test_newgc.py --- a/pypy/translator/c/test/test_newgc.py +++ b/pypy/translator/c/test/test_newgc.py @@ -1451,8 +1451,9 @@ ARRAY = rffi.CArray(rffi.INT) class A: def __init__(self, n): - self.buf = lltype.malloc(ARRAY, n, flavor='raw', - add_memory_pressure=True) + self.buf = lltype.malloc(ARRAY, n, flavor='raw') + rgc.add_memory_pressure(self, n * 
rffi.sizeof(ARRAY.OF)) + def __del__(self): lltype.free(self.buf, flavor='raw') A(6) @@ -1486,7 +1487,7 @@ flavor='raw') digest = ropenssl.EVP_get_digestbyname('sha1') ropenssl.EVP_DigestInit(self.ctx, digest) - rgc.add_memory_pressure(HASH_MALLOC_SIZE + 64) + rgc.add_memory_pressure(self, HASH_MALLOC_SIZE + 64) def __del__(self): ropenssl.EVP_MD_CTX_cleanup(self.ctx) From noreply at buildbot.pypy.org Tue Feb 14 17:49:54 2012 From: noreply at buildbot.pypy.org (fijal) Date: Tue, 14 Feb 2012 17:49:54 +0100 (CET) Subject: [pypy-commit] pypy numpy-record-dtypes: start supporting array creation not working so far Message-ID: <20120214164954.3BA2A8203C@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-record-dtypes Changeset: r52467:729aca0b90d6 Date: 2012-02-14 18:49 +0200 http://bitbucket.org/pypy/pypy/changeset/729aca0b90d6/ Log: start supporting array creation not working so far diff --git a/pypy/module/micronumpy/interp_dtype.py b/pypy/module/micronumpy/interp_dtype.py --- a/pypy/module/micronumpy/interp_dtype.py +++ b/pypy/module/micronumpy/interp_dtype.py @@ -112,6 +112,9 @@ def is_bool_type(self): return self.kind == BOOLLTR + def is_record_type(self): + return self.fields is not None + def __repr__(self): if self.fields is not None: return '' % self.fields diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -1132,21 +1132,20 @@ if copy: return w_item_or_iterable.copy(space) return w_item_or_iterable - shape, elems_w = find_shape_and_elems(space, w_item_or_iterable) + if w_dtype is None or space.is_w(w_dtype, space.w_None): + dtype = None + else: + dtype = space.interp_w(interp_dtype.W_Dtype, + space.call_function(space.gettypefor(interp_dtype.W_Dtype), w_dtype)) + shape, elems_w = find_shape_and_elems(space, w_item_or_iterable, dtype) # they come back in C order size = len(elems_w) - if w_dtype is None or space.is_w(w_dtype, space.w_None): - w_dtype = None + if dtype is None: for w_elem in elems_w: - w_dtype = interp_ufuncs.find_dtype_for_scalar(space, w_elem, + dtype = interp_ufuncs.find_dtype_for_scalar(space, w_elem, w_dtype) - if w_dtype is interp_dtype.get_dtype_cache(space).w_float64dtype: + if dtype is interp_dtype.get_dtype_cache(space).w_float64dtype: break - if w_dtype is None: - w_dtype = space.w_None - dtype = space.interp_w(interp_dtype.W_Dtype, - space.call_function(space.gettypefor(interp_dtype.W_Dtype), w_dtype) - ) arr = W_NDimArray(size, shape[:], dtype=dtype, order=order) shapelen = len(shape) arr_iter = ArrayIterator(arr.size) diff --git a/pypy/module/micronumpy/strides.py b/pypy/module/micronumpy/strides.py --- a/pypy/module/micronumpy/strides.py +++ b/pypy/module/micronumpy/strides.py @@ -38,22 +38,31 @@ rbackstrides = [0] * (len(res_shape) - len(orig_shape)) + rbackstrides return rstrides, rbackstrides -def find_shape_and_elems(space, w_iterable): +def is_single_elem(space, w_elem, is_rec_type): + if (is_rec_type and space.isinstance_w(w_elem, space.w_tuple)): + return True + if space.issequence_w(w_elem): + return False + return True + +def find_shape_and_elems(space, w_iterable, dtype): shape = [space.len_w(w_iterable)] batch = space.listview(w_iterable) + is_rec_type = dtype.is_record_type() while True: new_batch = [] if not batch: return shape, [] - if not space.issequence_w(batch[0]): - for elem in batch: - if space.issequence_w(elem): + if is_single_elem(space, batch[0], is_rec_type): + for 
w_elem in batch: + if is_single_elem(space, w_elem, is_rec_type): raise OperationError(space.w_ValueError, space.wrap( "setting an array element with a sequence")) return shape, batch size = space.len_w(batch[0]) for w_elem in batch: - if not space.issequence_w(w_elem) or space.len_w(w_elem) != size: + if (not is_single_elem(space, w_elem, is_rec_type) or + space.len_w(w_elem) != size): raise OperationError(space.w_ValueError, space.wrap( "setting an array element with a sequence")) new_batch += space.listview(w_elem) diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -1810,3 +1810,10 @@ b[1] = a[1] assert a[1]['y'] == 2 + def test_views(self): + skip("xx") + + def test_creation(self): + from _numpypy import array + a = array([(1, 2), (3, 4)], dtype=[('x', int), ('y', float)]) + assert repr(a[0]) == '(1, 2.0)' From notifications-noreply at bitbucket.org Tue Feb 14 19:14:10 2012 From: notifications-noreply at bitbucket.org (Bitbucket) Date: Tue, 14 Feb 2012 18:14:10 -0000 Subject: [pypy-commit] Notification: pypy Message-ID: <20120214181410.23302.29606@bitbucket01.managed.contegix.com> You have received a notification from Igor Shishkin. Hi, I forked pypy. My fork is at https://bitbucket.org/specialforest/pypy. -- Disable notifications at https://bitbucket.org/account/notifications/ From noreply at buildbot.pypy.org Tue Feb 14 19:42:00 2012 From: noreply at buildbot.pypy.org (fijal) Date: Tue, 14 Feb 2012 19:42:00 +0100 (CET) Subject: [pypy-commit] pypy raw-memory-pressure-nursery: (fijal, arigo) be a bit more precise about stored sizes Message-ID: <20120214184200.A14988203C@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: raw-memory-pressure-nursery Changeset: r52468:d78b4be75ced Date: 2012-02-14 20:41 +0200 http://bitbucket.org/pypy/pypy/changeset/d78b4be75ced/ Log: (fijal, arigo) be a bit more precise about stored sizes diff --git a/pypy/rpython/memory/gc/minimark.py b/pypy/rpython/memory/gc/minimark.py --- a/pypy/rpython/memory/gc/minimark.py +++ b/pypy/rpython/memory/gc/minimark.py @@ -52,8 +52,6 @@ from pypy.rlib.rarithmetic import ovfcheck, LONG_BIT, intmask, r_uint from pypy.rlib.rarithmetic import LONG_BIT_SHIFT from pypy.rlib.debug import ll_assert, debug_print, debug_start, debug_stop -from pypy.rlib.objectmodel import we_are_translated -from pypy.tool.sourcetools import func_with_new_name # # Handles the objects in 2 generations: @@ -113,8 +111,9 @@ # one bit per 'card_page_indices' indices. 
GCFLAG_HAS_CARDS = first_gcflag << 5 GCFLAG_CARDS_SET = first_gcflag << 6 # <- at least one card bit is set +GCFLAG_OWNS_RAW_MEMORY = first_gcflag << 7 -TID_MASK = (first_gcflag << 7) - 1 +TID_MASK = (first_gcflag << 8) - 1 FORWARDSTUB = lltype.GcStruct('forwarding_stub', @@ -740,6 +739,19 @@ self.next_major_collection_threshold = self.max_heap_size def raw_malloc_memory_pressure(self, obj, sizehint): + size = self.get_size(obj) + if obj + size == self.nursery_free: + sizehint = llarena.round_up_for_allocation(sizehint, WORD) + if (self.nursery_top - self.nursery_free) < sizehint: + self.nursery_free = self.nursery_top + else: + self.nursery_free.signed[0] = sizehint + self.header(obj).tid |= GCFLAG_OWNS_RAW_MEMORY + self.nursery_free += sizehint + return + self._raw_malloc_memory_pressure_major(sizehint) + + def _raw_malloc_memory_pressure_major(self, sizehint): self.next_major_collection_threshold -= sizehint if self.next_major_collection_threshold < 0: # cannot trigger a full collection now, but we can ensure @@ -1448,6 +1460,10 @@ # # Copy it. Note that references to other objects in the # nursery are kept unchanged in this step. + if self.header(obj).tid & GCFLAG_OWNS_RAW_MEMORY: + raw_memory_size = (obj + size).signed[0] + self.major_collection_threshold -= raw_memory_size + self.header(obj).tid &= ~GCFLAG_OWNS_RAW_MEMORY llmemory.raw_memcopy(obj - size_gc_header, newhdr, totalsize) # # Set the old object's tid to -42 (containing all flags) and diff --git a/pypy/rpython/memory/gc/test/test_direct.py b/pypy/rpython/memory/gc/test/test_direct.py --- a/pypy/rpython/memory/gc/test/test_direct.py +++ b/pypy/rpython/memory/gc/test/test_direct.py @@ -594,8 +594,15 @@ assert ord(addr_byte.char[0]) == 0x01 | 0x04 # bits 0 and 2 def test_memory_pressure(self): - - + obj = self.malloc(S) + nursery_size = self.gc.nursery_size + self.gc.raw_malloc_memory_pressure(llmemory.cast_ptr_to_adr(obj), + nursery_size / 2) + obj2 = self.malloc(S) + self.gc.raw_malloc_memory_pressure(llmemory.cast_ptr_to_adr(obj2), + nursery_size / 2) + # obj should be dead by now + assert self.gc.nursery_free == self.gc.nursery test_writebarrier_before_copy_preserving_cards.GC_PARAMS = { "card_page_indices": 4} From noreply at buildbot.pypy.org Tue Feb 14 19:51:04 2012 From: noreply at buildbot.pypy.org (arigo) Date: Tue, 14 Feb 2012 19:51:04 +0100 (CET) Subject: [pypy-commit] pypy stm-gc: Add an assert. Message-ID: <20120214185104.BDAAE8203C@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-gc Changeset: r52469:8d8764e61a5f Date: 2012-02-13 14:09 +0100 http://bitbucket.org/pypy/pypy/changeset/8d8764e61a5f/ Log: Add an assert. diff --git a/pypy/module/transaction/threadintf.py b/pypy/module/transaction/threadintf.py --- a/pypy/module/transaction/threadintf.py +++ b/pypy/module/transaction/threadintf.py @@ -24,6 +24,7 @@ lock.release() def start_new_thread(callback, args): + assert args == () if we_are_translated(): ll_thread.start_new_thread(callback, args) else: From noreply at buildbot.pypy.org Tue Feb 14 19:51:06 2012 From: noreply at buildbot.pypy.org (arigo) Date: Tue, 14 Feb 2012 19:51:06 +0100 (CET) Subject: [pypy-commit] pypy raw-memory-pressure-nursery: The proper voodoo dance to have it work untranslated. Message-ID: <20120214185106.C53F88203C@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: raw-memory-pressure-nursery Changeset: r52470:e4a336273b9f Date: 2012-02-14 19:50 +0100 http://bitbucket.org/pypy/pypy/changeset/e4a336273b9f/ Log: The proper voodoo dance to have it work untranslated. 
The test still fails but for a good reason now. diff --git a/pypy/rpython/memory/gc/minimark.py b/pypy/rpython/memory/gc/minimark.py --- a/pypy/rpython/memory/gc/minimark.py +++ b/pypy/rpython/memory/gc/minimark.py @@ -740,11 +740,16 @@ def raw_malloc_memory_pressure(self, obj, sizehint): size = self.get_size(obj) + obj = llarena.getfakearenaaddress(obj) if obj + size == self.nursery_free: - sizehint = llarena.round_up_for_allocation(sizehint, WORD) + sizehint = llarena.round_up_for_allocation( + sizehint, llmemory.sizeof(lltype.Signed)) + sizehint = llmemory.raw_malloc_usage(sizehint) if (self.nursery_top - self.nursery_free) < sizehint: self.nursery_free = self.nursery_top else: + llarena.arena_reserve(self.nursery_free, + llmemory.sizeof(lltype.Signed)) self.nursery_free.signed[0] = sizehint self.header(obj).tid |= GCFLAG_OWNS_RAW_MEMORY self.nursery_free += sizehint diff --git a/pypy/rpython/memory/gc/test/test_direct.py b/pypy/rpython/memory/gc/test/test_direct.py --- a/pypy/rpython/memory/gc/test/test_direct.py +++ b/pypy/rpython/memory/gc/test/test_direct.py @@ -596,11 +596,12 @@ def test_memory_pressure(self): obj = self.malloc(S) nursery_size = self.gc.nursery_size + BYTE = llmemory.sizeof(lltype.Char) self.gc.raw_malloc_memory_pressure(llmemory.cast_ptr_to_adr(obj), - nursery_size / 2) + BYTE * (nursery_size / 2)) obj2 = self.malloc(S) self.gc.raw_malloc_memory_pressure(llmemory.cast_ptr_to_adr(obj2), - nursery_size / 2) + BYTE * (nursery_size / 2)) # obj should be dead by now assert self.gc.nursery_free == self.gc.nursery From noreply at buildbot.pypy.org Tue Feb 14 20:27:26 2012 From: noreply at buildbot.pypy.org (edelsohn) Date: Tue, 14 Feb 2012 20:27:26 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: Add finish_once. Message-ID: <20120214192726.6FF1A8203C@wyvern.cs.uni-duesseldorf.de> Author: edelsohn Branch: ppc-jit-backend Changeset: r52471:e37b3183f5db Date: 2012-02-14 14:27 -0500 http://bitbucket.org/pypy/pypy/changeset/e37b3183f5db/ Log: Add finish_once. diff --git a/pypy/jit/backend/ppc/ppc_assembler.py b/pypy/jit/backend/ppc/ppc_assembler.py --- a/pypy/jit/backend/ppc/ppc_assembler.py +++ b/pypy/jit/backend/ppc/ppc_assembler.py @@ -415,6 +415,20 @@ self._leave_jitted_hook_save_exc = self._gen_leave_jitted_hook_code(True) self._leave_jitted_hook = self._gen_leave_jitted_hook_code(False) + def finish_once(self): + if self._debug: + debug_start('jit-backend-counts') + for i in range(len(self.loop_run_counters)): + struct = self.loop_run_counters[i] + if struct.type == 'l': + prefix = 'TargetToken(%d)' % struct.number + elif struct.type == 'b': + prefix = 'bridge ' + str(struct.number) + else: + prefix = 'entry ' + str(struct.number) + debug_print(prefix + ':' + str(struct.i)) + debug_stop('jit-backend-counts') + @staticmethod def _release_gil_shadowstack(): before = rffi.aroundstate.before From noreply at buildbot.pypy.org Tue Feb 14 20:59:59 2012 From: noreply at buildbot.pypy.org (edelsohn) Date: Tue, 14 Feb 2012 20:59:59 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: Import debug_prints. Add debug_{start, stop} to setup_once. Message-ID: <20120214195959.B61FB8203C@wyvern.cs.uni-duesseldorf.de> Author: edelsohn Branch: ppc-jit-backend Changeset: r52472:9f795730f233 Date: 2012-02-14 14:59 -0500 http://bitbucket.org/pypy/pypy/changeset/9f795730f233/ Log: Import debug_prints. Add debug_{start,stop} to setup_once. 
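A minimal sketch of how the pypy.rlib.debug helpers touched by this changeset and the previous one ("Add finish_once") are normally driven. The only names taken from the diffs are debug_start, debug_print, debug_stop, have_debug_prints and the 'jit-backend-counts' section label; the report_counters() helper and its counters argument are purely illustrative and not part of either changeset.

    from pypy.rlib.debug import debug_start, debug_print, debug_stop, have_debug_prints

    def report_counters(counters):
        # Bracket a named logging section; output is only produced when the
        # section is enabled at run time (e.g. through the PYPYLOG variable).
        debug_start('jit-backend-counts')
        if have_debug_prints():
            for i in range(len(counters)):
                # one line per counter, mirroring the prefix:value format
                # printed by finish_once() in the changeset above
                debug_print('entry ' + str(i) + ':' + str(counters[i]))
        debug_stop('jit-backend-counts')
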
diff --git a/pypy/jit/backend/ppc/ppc_assembler.py b/pypy/jit/backend/ppc/ppc_assembler.py --- a/pypy/jit/backend/ppc/ppc_assembler.py +++ b/pypy/jit/backend/ppc/ppc_assembler.py @@ -37,6 +37,8 @@ from pypy.jit.metainterp.history import (BoxInt, ConstInt, ConstPtr, ConstFloat, Box, INT, REF, FLOAT) from pypy.jit.backend.x86.support import values_array +from pypy.rlib.debug import (debug_print, debug_start, debug_stop, + have_debug_prints) from pypy.rlib import rgc from pypy.rpython.annlowlevel import llhelper from pypy.rlib.objectmodel import we_are_translated @@ -414,6 +416,9 @@ self.exit_code_adr = self._gen_exit_path() self._leave_jitted_hook_save_exc = self._gen_leave_jitted_hook_code(True) self._leave_jitted_hook = self._gen_leave_jitted_hook_code(False) + debug_start('jit-backend-counts') + self.set_debug(have_debug_prints()) + debug_stop('jit-backend-counts') def finish_once(self): if self._debug: From noreply at buildbot.pypy.org Tue Feb 14 21:08:09 2012 From: noreply at buildbot.pypy.org (edelsohn) Date: Tue, 14 Feb 2012 21:08:09 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: Add more debug infrastructure. Message-ID: <20120214200809.2BBAA8203C@wyvern.cs.uni-duesseldorf.de> Author: edelsohn Branch: ppc-jit-backend Changeset: r52473:2e331721bba6 Date: 2012-02-14 15:07 -0500 http://bitbucket.org/pypy/pypy/changeset/2e331721bba6/ Log: Add more debug infrastructure. diff --git a/pypy/jit/backend/ppc/ppc_assembler.py b/pypy/jit/backend/ppc/ppc_assembler.py --- a/pypy/jit/backend/ppc/ppc_assembler.py +++ b/pypy/jit/backend/ppc/ppc_assembler.py @@ -48,6 +48,12 @@ memcpy_fn = rffi.llexternal('memcpy', [llmemory.Address, llmemory.Address, rffi.SIZE_T], lltype.Void, sandboxsafe=True, _nowrapper=True) + +DEBUG_COUNTER = lltype.Struct('DEBUG_COUNTER', ('i', lltype.Signed), + ('type', lltype.Char), # 'b'ridge, 'l'abel or + # 'e'ntry point + ('number', lltype.Signed)) + def hi(w): return w >> 16 @@ -110,6 +116,12 @@ self.max_stack_params = 0 self.propagate_exception_path = 0 self.setup_failure_recovery() + self._debug = False + self.loop_run_counters = [] + self.debug_counter_descr = cpu.fielddescrof(DEBUG_COUNTER, 'i') + + def set_debug(self, v): + self._debug = v def _save_nonvolatiles(self): """ save nonvolatile GPRs in GPR SAVE AREA From noreply at buildbot.pypy.org Tue Feb 14 21:34:16 2012 From: noreply at buildbot.pypy.org (edelsohn) Date: Tue, 14 Feb 2012 21:34:16 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: Fix signature of __exit__ in scratch_reg. Message-ID: <20120214203416.E38F78203C@wyvern.cs.uni-duesseldorf.de> Author: edelsohn Branch: ppc-jit-backend Changeset: r52474:418c8c63f1ef Date: 2012-02-14 15:33 -0500 http://bitbucket.org/pypy/pypy/changeset/418c8c63f1ef/ Log: Fix signature of __exit__ in scratch_reg. diff --git a/pypy/jit/backend/ppc/codebuilder.py b/pypy/jit/backend/ppc/codebuilder.py --- a/pypy/jit/backend/ppc/codebuilder.py +++ b/pypy/jit/backend/ppc/codebuilder.py @@ -1181,7 +1181,7 @@ def __enter__(self): self.mc.alloc_scratch_reg() - def __exit__(self): + def __exit__(self, *args): self.mc.free_scratch_reg() class BranchUpdater(PPCAssembler): From noreply at buildbot.pypy.org Tue Feb 14 21:34:56 2012 From: noreply at buildbot.pypy.org (edelsohn) Date: Tue, 14 Feb 2012 21:34:56 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: Comment out gcremovetypeptr assert that now succeeds. 
Message-ID: <20120214203456.1AA8E8203C@wyvern.cs.uni-duesseldorf.de> Author: edelsohn Branch: ppc-jit-backend Changeset: r52475:8a1db6056b87 Date: 2012-02-14 15:34 -0500 http://bitbucket.org/pypy/pypy/changeset/8a1db6056b87/ Log: Comment out gcremovetypeptr assert that now succeeds. diff --git a/pypy/jit/backend/ppc/runner.py b/pypy/jit/backend/ppc/runner.py --- a/pypy/jit/backend/ppc/runner.py +++ b/pypy/jit/backend/ppc/runner.py @@ -32,7 +32,7 @@ gcdescr.force_index_ofs = FORCE_INDEX_OFS # XXX for now the ppc backend does not support the gcremovetypeptr # translation option - assert gcdescr.config.translation.gcremovetypeptr is False + # assert gcdescr.config.translation.gcremovetypeptr is False AbstractLLCPU.__init__(self, rtyper, stats, opts, translate_support_code, gcdescr) From noreply at buildbot.pypy.org Tue Feb 14 21:36:28 2012 From: noreply at buildbot.pypy.org (edelsohn) Date: Tue, 14 Feb 2012 21:36:28 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: Do not reference .value in load_imm calls. Message-ID: <20120214203628.531EC8203C@wyvern.cs.uni-duesseldorf.de> Author: edelsohn Branch: ppc-jit-backend Changeset: r52476:07e49c3b451d Date: 2012-02-14 15:36 -0500 http://bitbucket.org/pypy/pypy/changeset/07e49c3b451d/ Log: Do not reference .value in load_imm calls. Fix one gen_load_int -> load_imm. diff --git a/pypy/jit/backend/ppc/ppc_assembler.py b/pypy/jit/backend/ppc/ppc_assembler.py --- a/pypy/jit/backend/ppc/ppc_assembler.py +++ b/pypy/jit/backend/ppc/ppc_assembler.py @@ -949,17 +949,17 @@ def malloc_cond(self, nursery_free_adr, nursery_top_adr, size): assert size & (WORD-1) == 0 # must be correctly aligned - self.mc.load_imm(r.RES.value, nursery_free_adr) + self.mc.load_imm(r.RES, nursery_free_adr) self.mc.load(r.RES.value, r.RES.value, 0) if _check_imm_arg(size): self.mc.addi(r.r4.value, r.RES.value, size) else: - self.mc.load_imm(r.r4.value, size) + self.mc.load_imm(r.r4, size) self.mc.add(r.r4.value, r.RES.value, r.r4.value) with scratch_reg(self.mc): - self.mc.gen_load_int(r.SCRATCH.value, nursery_top_adr) + self.mc.load_imm(r.SCRATCH, nursery_top_adr) self.mc.loadx(r.SCRATCH.value, 0, r.SCRATCH.value) self.mc.cmp_op(0, r.r4.value, r.SCRATCH.value, signed=False) @@ -982,7 +982,7 @@ pmc.bc(4, 1, offset) # jump if LE (not GT) with scratch_reg(self.mc): - self.mc.load_imm(r.SCRATCH.value, nursery_free_adr) + self.mc.load_imm(r.SCRATCH, nursery_free_adr) self.mc.storex(r.r1.value, 0, r.SCRATCH.value) def mark_gc_roots(self, force_index, use_copy_area=False): From noreply at buildbot.pypy.org Tue Feb 14 22:13:33 2012 From: noreply at buildbot.pypy.org (mattip) Date: Tue, 14 Feb 2012 22:13:33 +0100 (CET) Subject: [pypy-commit] pypy numpypy-out: add more ufunc tests, try an implementation Message-ID: <20120214211333.ED73C8203C@wyvern.cs.uni-duesseldorf.de> Author: mattip Branch: numpypy-out Changeset: r52477:08b7ccd7985e Date: 2012-02-14 22:46 +0200 http://bitbucket.org/pypy/pypy/changeset/08b7ccd7985e/ Log: add more ufunc tests, try an implementation diff --git a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py --- a/pypy/module/micronumpy/interp_ufuncs.py +++ b/pypy/module/micronumpy/interp_ufuncs.py @@ -35,6 +35,7 @@ kwds_w.pop('casting', None) w_subok = kwds_w.pop('subok', None) w_out = kwds_w.pop('out', space.w_None) + # Setup a default value for out if space.is_w(w_out, space.w_None): out = None else: @@ -46,13 +47,17 @@ raise OperationError(space.w_ValueError, space.wrap("invalid number of arguments") ) - elif len(args_w) > 
self.argcount: + elif (len(args_w) > self.argcount and out is not None) or \ + (len(args_w) > self.argcount + 1): raise OperationError(space.w_TypeError, space.wrap("invalid number of arguments") ) - elif out is not None: + # Override the default out value, if it has been provided in w_wargs + if len(args_w) > self.argcount: + out = args_w[-1] + else: args_w = args_w[:] + [out] - if args_w[-1] and not isinstance(args_w[-1], BaseArray): + if out is not None and not isinstance(out, BaseArray): raise OperationError(space.w_TypeError, space.wrap( 'output must be an array')) return self.call(space, args_w) @@ -223,23 +228,34 @@ def call(self, space, args_w): from pypy.module.micronumpy.interp_numarray import (Call1, - convert_to_array, Scalar) + convert_to_array, Scalar, shape_agreement) - [w_obj, w_out] = args_w + if len(args_w)<2: + [w_obj] = args_w + out = None + else: + [w_obj, out] = args_w w_obj = convert_to_array(space, w_obj) calc_dtype = find_unaryop_result_dtype(space, w_obj.find_dtype(), promote_to_float=self.promote_to_float, promote_bools=self.promote_bools) - if self.bool_result: + if out: + ret_shape = shape_agreement(space, w_obj.shape, out.shape) + assert(ret_shape is not None) + res_dtype = out.find_dtype() + elif self.bool_result: res_dtype = interp_dtype.get_dtype_cache(space).w_booldtype else: res_dtype = calc_dtype if isinstance(w_obj, Scalar): - return space.wrap(self.func(calc_dtype, w_obj.value.convert_to(calc_dtype))) + arr = self.func(calc_dtype, w_obj.value.convert_to(calc_dtype)) + if isinstance(out,Scalar): + out.value=arr + return space.wrap(out) w_res = Call1(self.func, self.name, w_obj.shape, calc_dtype, res_dtype, - w_obj) + w_obj, out) w_obj.add_invalidates(w_res) return w_res @@ -259,11 +275,14 @@ def call(self, space, args_w): from pypy.module.micronumpy.interp_numarray import (Call2, convert_to_array, Scalar, shape_agreement, BaseArray) - - [w_lhs, w_rhs, w_out] = args_w + if len(args_w)>2: + [w_lhs, w_rhs, w_out] = args_w + else: + [w_lhs, w_rhs] = args_w + w_out = None w_lhs = convert_to_array(space, w_lhs) w_rhs = convert_to_array(space, w_rhs) - if space.is_w(w_out, space.w_None) or not w_out: + if space.is_w(w_out, space.w_None) or w_out is None: out = None calc_dtype = find_binop_result_dtype(space, w_lhs.find_dtype(), w_rhs.find_dtype(), diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -902,35 +902,6 @@ assert (array([[1,2],[3,4]]).prod(0) == [3, 8]).all() assert (array([[1,2],[3,4]]).prod(1) == [2, 12]).all() - def test_reduce_out(self): - from numpypy import arange, zeros, array - a = arange(15).reshape(5, 3) - b = arange(12).reshape(4,3) - c = a.sum(0, out=b[1]) - assert (c == [30, 35, 40]).all() - assert (c == b[1]).all() - raises(ValueError, 'a.prod(0, out=arange(10))') - a=arange(12).reshape(3,2,2) - raises(ValueError, 'a.sum(0, out=arange(12).reshape(3,2,2))') - raises(ValueError, 'a.sum(0, out=arange(3))') - c = array([-1, 0, 1]).sum(out=zeros([], dtype=bool)) - #You could argue that this should product False, but - # that would require an itermediate result. Cpython numpy - # gives True. 
- assert c == True - a = array([[-1, 0, 1], [1, 0, -1]]) - c = a.sum(0, out=zeros((3,), dtype=bool)) - assert (c == [True, False, True]).all() - c = a.sum(1, out=zeros((2,), dtype=bool)) - assert (c == [True, True]).all() - - def test_reduce_intermediary(self): - from numpypy import arange, array - a = arange(15).reshape(5, 3) - b = array(range(3), dtype=bool) - c = a.prod(0, out=b) - assert(b == [False, True, True]).all() - def test_identity(self): from _numpypy import identity, array from _numpypy import int32, float64, dtype diff --git a/pypy/module/micronumpy/test/test_outarg.py b/pypy/module/micronumpy/test/test_outarg.py new file mode 100644 --- /dev/null +++ b/pypy/module/micronumpy/test/test_outarg.py @@ -0,0 +1,48 @@ +import py +from pypy.module.micronumpy.test.test_base import BaseNumpyAppTest + +class AppTestOutArg(BaseNumpyAppTest): + def test_reduce_out(self): + from numpypy import arange, zeros, array + a = arange(15).reshape(5, 3) + b = arange(12).reshape(4,3) + c = a.sum(0, out=b[1]) + assert (c == [30, 35, 40]).all() + assert (c == b[1]).all() + raises(ValueError, 'a.prod(0, out=arange(10))') + a=arange(12).reshape(3,2,2) + raises(ValueError, 'a.sum(0, out=arange(12).reshape(3,2,2))') + raises(ValueError, 'a.sum(0, out=arange(3))') + c = array([-1, 0, 1]).sum(out=zeros([], dtype=bool)) + #You could argue that this should product False, but + # that would require an itermediate result. Cpython numpy + # gives True. + assert c == True + a = array([[-1, 0, 1], [1, 0, -1]]) + c = a.sum(0, out=zeros((3,), dtype=bool)) + assert (c == [True, False, True]).all() + c = a.sum(1, out=zeros((2,), dtype=bool)) + assert (c == [True, True]).all() + + def test_reduce_intermediary(self): + from numpypy import arange, array + a = arange(15).reshape(5, 3) + b = array(range(3), dtype=bool) + c = a.prod(0, out=b) + assert(b == [False, True, True]).all() + + def test_ufunc_out(self): + from _numpypy import array, negative, zeros + a = array([[1, 2], [3, 4]]) + c = zeros((2,2,2)) + b = negative(a + a, out=c[1]) + assert (b == [[-2, -4], [-6, -8]]).all() + assert (c[:, :, 1] == [[0, 0], [-4, -8]]).all() + + def test_ufunc_cast(self): + from _numpypy import array, negative + cast_error = raises(TypeError, negative, array(16,dtype=float), + out=array(0, dtype=int)) + assert str(cast_error.value) == \ + "Cannot cast ufunc negative output from dtype('float64') to dtype('int64') with casting rule 'same_kind'" + From noreply at buildbot.pypy.org Tue Feb 14 22:13:35 2012 From: noreply at buildbot.pypy.org (mattip) Date: Tue, 14 Feb 2012 22:13:35 +0100 (CET) Subject: [pypy-commit] pypy numpypy-out: now nothing works Message-ID: <20120214211335.86C2F8203C@wyvern.cs.uni-duesseldorf.de> Author: mattip Branch: numpypy-out Changeset: r52478:01d2a5613fb2 Date: 2012-02-14 23:11 +0200 http://bitbucket.org/pypy/pypy/changeset/01d2a5613fb2/ Log: now nothing works diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -773,6 +773,7 @@ class Call1(VirtualArray): def __init__(self, ufunc, name, shape, calc_dtype, res_dtype, values, out_arg=None): + xxx VirtualArray.__init__(self, name, shape, res_dtype, out_arg) self.values = values self.size = values.size diff --git a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py --- a/pypy/module/micronumpy/interp_ufuncs.py +++ b/pypy/module/micronumpy/interp_ufuncs.py @@ -235,6 +235,8 @@ out = None else: 
[w_obj, out] = args_w + if space.is_w(out, space.w_None): + out = None w_obj = convert_to_array(space, w_obj) calc_dtype = find_unaryop_result_dtype(space, w_obj.find_dtype(), diff --git a/pypy/module/micronumpy/test/test_outarg.py b/pypy/module/micronumpy/test/test_outarg.py --- a/pypy/module/micronumpy/test/test_outarg.py +++ b/pypy/module/micronumpy/test/test_outarg.py @@ -36,6 +36,7 @@ a = array([[1, 2], [3, 4]]) c = zeros((2,2,2)) b = negative(a + a, out=c[1]) + print c assert (b == [[-2, -4], [-6, -8]]).all() assert (c[:, :, 1] == [[0, 0], [-4, -8]]).all() From noreply at buildbot.pypy.org Tue Feb 14 22:32:35 2012 From: noreply at buildbot.pypy.org (fijal) Date: Tue, 14 Feb 2012 22:32:35 +0100 (CET) Subject: [pypy-commit] pypy raw-memory-pressure-nursery: fix the test and write another one. fix the typo Message-ID: <20120214213235.92F5E8203C@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: raw-memory-pressure-nursery Changeset: r52479:8f8652b2b601 Date: 2012-02-14 23:32 +0200 http://bitbucket.org/pypy/pypy/changeset/8f8652b2b601/ Log: fix the test and write another one. fix the typo diff --git a/pypy/rpython/memory/gc/minimark.py b/pypy/rpython/memory/gc/minimark.py --- a/pypy/rpython/memory/gc/minimark.py +++ b/pypy/rpython/memory/gc/minimark.py @@ -1466,8 +1466,8 @@ # Copy it. Note that references to other objects in the # nursery are kept unchanged in this step. if self.header(obj).tid & GCFLAG_OWNS_RAW_MEMORY: - raw_memory_size = (obj + size).signed[0] - self.major_collection_threshold -= raw_memory_size + raw_memory_size = (llarena.getfakearenaaddress(obj) + size).signed[0] + self.next_major_collection_threshold -= raw_memory_size self.header(obj).tid &= ~GCFLAG_OWNS_RAW_MEMORY llmemory.raw_memcopy(obj - size_gc_header, newhdr, totalsize) # diff --git a/pypy/rpython/memory/gc/test/test_direct.py b/pypy/rpython/memory/gc/test/test_direct.py --- a/pypy/rpython/memory/gc/test/test_direct.py +++ b/pypy/rpython/memory/gc/test/test_direct.py @@ -602,8 +602,25 @@ obj2 = self.malloc(S) self.gc.raw_malloc_memory_pressure(llmemory.cast_ptr_to_adr(obj2), BYTE * (nursery_size / 2)) - # obj should be dead by now - assert self.gc.nursery_free == self.gc.nursery + assert self.gc.nursery_free == self.gc.nursery_top + + def test_memory_pressure_surviving(self): + obj = self.malloc(S) + self.stackroots.append(obj) + nursery_size = self.gc.nursery_size + BYTE = llmemory.sizeof(lltype.Char) + self.gc.raw_malloc_memory_pressure(llmemory.cast_ptr_to_adr(obj), + BYTE * (nursery_size / 2)) + obj2 = self.malloc(S) + self.gc.raw_malloc_memory_pressure(llmemory.cast_ptr_to_adr(obj2), + BYTE * (nursery_size / 2)) + assert self.gc.nursery_free == self.gc.nursery_top + one = self.gc.next_major_collection_threshold + self.gc.minor_collection() + obj = self.stackroots[0] + assert not self.gc.is_in_nursery(llmemory.cast_ptr_to_adr(obj)) + two = self.gc.next_major_collection_threshold + assert one - two >= nursery_size / 2 test_writebarrier_before_copy_preserving_cards.GC_PARAMS = { "card_page_indices": 4} From noreply at buildbot.pypy.org Tue Feb 14 22:41:41 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Tue, 14 Feb 2012 22:41:41 +0100 (CET) Subject: [pypy-commit] pypy default: dlltool.CLibraryBuilder works again, it builds RPython shared libraries Message-ID: <20120214214141.D6D408203C@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: Changeset: r52480:c7198f2fca23 Date: 2012-02-14 12:10 +0100 http://bitbucket.org/pypy/pypy/changeset/c7198f2fca23/ Log: 
dlltool.CLibraryBuilder works again, it builds RPython shared libraries that are not CPython extension modules. diff --git a/pypy/translator/c/database.py b/pypy/translator/c/database.py --- a/pypy/translator/c/database.py +++ b/pypy/translator/c/database.py @@ -28,11 +28,13 @@ gctransformer = None def __init__(self, translator=None, standalone=False, + cpython_extension=False, gcpolicyclass=None, thread_enabled=False, sandbox=False): self.translator = translator self.standalone = standalone + self.cpython_extension = cpython_extension self.sandbox = sandbox if gcpolicyclass is None: gcpolicyclass = gc.RefcountingGcPolicy diff --git a/pypy/translator/c/dlltool.py b/pypy/translator/c/dlltool.py --- a/pypy/translator/c/dlltool.py +++ b/pypy/translator/c/dlltool.py @@ -14,11 +14,14 @@ CBuilder.__init__(self, *args, **kwds) def getentrypointptr(self): + entrypoints = [] bk = self.translator.annotator.bookkeeper - graphs = [bk.getdesc(f).cachedgraph(None) for f, _ in self.functions] - return [getfunctionptr(graph) for graph in graphs] + for f, _ in self.functions: + graph = bk.getdesc(f).getuniquegraph() + entrypoints.append(getfunctionptr(graph)) + return entrypoints - def gen_makefile(self, targetdir): + def gen_makefile(self, targetdir, exe_name=None): pass # XXX finish def compile(self): diff --git a/pypy/translator/c/extfunc.py b/pypy/translator/c/extfunc.py --- a/pypy/translator/c/extfunc.py +++ b/pypy/translator/c/extfunc.py @@ -106,7 +106,7 @@ yield ('RPYTHON_EXCEPTION_MATCH', exceptiondata.fn_exception_match) yield ('RPYTHON_TYPE_OF_EXC_INST', exceptiondata.fn_type_of_exc_inst) yield ('RPYTHON_RAISE_OSERROR', exceptiondata.fn_raise_OSError) - if not db.standalone: + if db.cpython_extension: yield ('RPYTHON_PYEXCCLASS2EXC', exceptiondata.fn_pyexcclass2exc) yield ('RPyExceptionOccurred1', exctransformer.rpyexc_occured_ptr.value) diff --git a/pypy/translator/c/genc.py b/pypy/translator/c/genc.py --- a/pypy/translator/c/genc.py +++ b/pypy/translator/c/genc.py @@ -111,6 +111,7 @@ _compiled = False modulename = None split = False + cpython_extension = False def __init__(self, translator, entrypoint, config, gcpolicy=None, secondary_entrypoints=()): @@ -138,6 +139,7 @@ raise NotImplementedError("--gcrootfinder=asmgcc requires standalone") db = LowLevelDatabase(translator, standalone=self.standalone, + cpython_extension=self.cpython_extension, gcpolicyclass=gcpolicyclass, thread_enabled=self.config.translation.thread, sandbox=self.config.translation.sandbox) @@ -236,6 +238,8 @@ CBuilder.have___thread = self.translator.platform.check___thread() if not self.standalone: assert not self.config.translation.instrument + if self.cpython_extension: + defines['PYPY_CPYTHON_EXTENSION'] = 1 else: defines['PYPY_STANDALONE'] = db.get(pf) if self.config.translation.instrument: @@ -307,13 +311,18 @@ class CExtModuleBuilder(CBuilder): standalone = False + cpython_extension = True _module = None _wrapper = None def get_eci(self): from distutils import sysconfig python_inc = sysconfig.get_python_inc() - eci = ExternalCompilationInfo(include_dirs=[python_inc]) + eci = ExternalCompilationInfo( + include_dirs=[python_inc], + includes=["Python.h", + ], + ) return eci.merge(CBuilder.get_eci(self)) def getentrypointptr(self): # xxx diff --git a/pypy/translator/c/src/exception.h b/pypy/translator/c/src/exception.h --- a/pypy/translator/c/src/exception.h +++ b/pypy/translator/c/src/exception.h @@ -2,7 +2,7 @@ /************************************************************/ /*** C header subsection: exceptions ***/ -#if 
!defined(PYPY_STANDALONE) && !defined(PYPY_NOT_MAIN_FILE) +#if defined(PYPY_CPYTHON_EXTENSION) && !defined(PYPY_NOT_MAIN_FILE) PyObject *RPythonError; #endif @@ -74,7 +74,7 @@ RPyRaiseException(RPYTHON_TYPE_OF_EXC_INST(rexc), rexc); } -#ifndef PYPY_STANDALONE +#ifdef PYPY_CPYTHON_EXTENSION void RPyConvertExceptionFromCPython(void) { /* convert the CPython exception to an RPython one */ diff --git a/pypy/translator/c/src/g_include.h b/pypy/translator/c/src/g_include.h --- a/pypy/translator/c/src/g_include.h +++ b/pypy/translator/c/src/g_include.h @@ -2,7 +2,7 @@ /************************************************************/ /*** C header file for code produced by genc.py ***/ -#ifndef PYPY_STANDALONE +#ifdef PYPY_CPYTHON_EXTENSION # include "Python.h" # include "compile.h" # include "frameobject.h" diff --git a/pypy/translator/c/src/g_prerequisite.h b/pypy/translator/c/src/g_prerequisite.h --- a/pypy/translator/c/src/g_prerequisite.h +++ b/pypy/translator/c/src/g_prerequisite.h @@ -5,8 +5,6 @@ #ifdef PYPY_STANDALONE # include "src/commondefs.h" -#else -# include "Python.h" #endif #ifdef _WIN32 diff --git a/pypy/translator/c/src/pyobj.h b/pypy/translator/c/src/pyobj.h --- a/pypy/translator/c/src/pyobj.h +++ b/pypy/translator/c/src/pyobj.h @@ -2,7 +2,7 @@ /************************************************************/ /*** C header subsection: untyped operations ***/ /*** as OP_XXX() macros calling the CPython API ***/ - +#ifdef PYPY_CPYTHON_EXTENSION #define op_bool(r,what) { \ int _retval = what; \ @@ -261,3 +261,5 @@ } #endif + +#endif /* PYPY_CPYTHON_EXTENSION */ diff --git a/pypy/translator/c/src/support.h b/pypy/translator/c/src/support.h --- a/pypy/translator/c/src/support.h +++ b/pypy/translator/c/src/support.h @@ -104,7 +104,7 @@ # define RPyBareItem(array, index) ((array)[index]) #endif -#ifndef PYPY_STANDALONE +#ifdef PYPY_CPYTHON_EXTENSION /* prototypes */ diff --git a/pypy/translator/c/test/test_dlltool.py b/pypy/translator/c/test/test_dlltool.py --- a/pypy/translator/c/test/test_dlltool.py +++ b/pypy/translator/c/test/test_dlltool.py @@ -2,7 +2,6 @@ from pypy.translator.c.dlltool import DLLDef from ctypes import CDLL import py -py.test.skip("fix this if needed") class TestDLLTool(object): def test_basic(self): @@ -16,8 +15,8 @@ d = DLLDef('lib', [(f, [int]), (b, [int])]) so = d.compile() dll = CDLL(str(so)) - assert dll.f(3) == 3 - assert dll.b(10) == 12 + assert dll.pypy_g_f(3) == 3 + assert dll.pypy_g_b(10) == 12 def test_split_criteria(self): def f(x): @@ -28,4 +27,5 @@ d = DLLDef('lib', [(f, [int]), (b, [int])]) so = d.compile() - assert py.path.local(so).dirpath().join('implement.c').check() + dirpath = py.path.local(so).dirpath() + assert dirpath.join('translator_c_test_test_dlltool.c').check() diff --git a/pypy/translator/driver.py b/pypy/translator/driver.py --- a/pypy/translator/driver.py +++ b/pypy/translator/driver.py @@ -331,6 +331,7 @@ raise Exception("stand-alone program entry point must return an " "int (and not, e.g., None or always raise an " "exception).") + annotator.complete() annotator.simplify() return s From noreply at buildbot.pypy.org Tue Feb 14 22:41:43 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Tue, 14 Feb 2012 22:41:43 +0100 (CET) Subject: [pypy-commit] pypy default: cpyext: implement Py_GetVersion() Message-ID: <20120214214143.6CBA38203C@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: Changeset: r52481:237af9251261 Date: 2012-02-14 21:31 +0100 http://bitbucket.org/pypy/pypy/changeset/237af9251261/ Log: cpyext: 
implement Py_GetVersion() diff --git a/pypy/module/cpyext/pythonrun.py b/pypy/module/cpyext/pythonrun.py --- a/pypy/module/cpyext/pythonrun.py +++ b/pypy/module/cpyext/pythonrun.py @@ -14,6 +14,20 @@ value.""" return space.fromcache(State).get_programname() + at cpython_api([], rffi.CCHARP) +def Py_GetVersion(space): + """Return the version of this Python interpreter. This is a + string that looks something like + + "1.5 (\#67, Dec 31 1997, 22:34:28) [GCC 2.7.2.2]" + + The first word (up to the first space character) is the current + Python version; the first three characters are the major and minor + version separated by a period. The returned string points into + static storage; the caller should not modify its value. The value + is available to Python code as sys.version.""" + return space.fromcache(State).get_version() + @cpython_api([lltype.Ptr(lltype.FuncType([], lltype.Void))], rffi.INT_real, error=-1) def Py_AtExit(space, func_ptr): """Register a cleanup function to be called by Py_Finalize(). The cleanup diff --git a/pypy/module/cpyext/state.py b/pypy/module/cpyext/state.py --- a/pypy/module/cpyext/state.py +++ b/pypy/module/cpyext/state.py @@ -10,6 +10,7 @@ self.space = space self.reset() self.programname = lltype.nullptr(rffi.CCHARP.TO) + self.version = lltype.nullptr(rffi.CCHARP.TO) def reset(self): from pypy.module.cpyext.modsupport import PyMethodDef @@ -102,6 +103,15 @@ lltype.render_immortal(self.programname) return self.programname + def get_version(self): + if not self.version: + space = self.space + w_version = space.sys.get('version') + version = space.str_w(w_version) + self.version = rffi.str2charp(version) + lltype.render_immortal(self.version) + return self.version + def find_extension(self, name, path): from pypy.module.cpyext.modsupport import PyImport_AddModule from pypy.interpreter.module import Module diff --git a/pypy/module/cpyext/test/test_cpyext.py b/pypy/module/cpyext/test/test_cpyext.py --- a/pypy/module/cpyext/test/test_cpyext.py +++ b/pypy/module/cpyext/test/test_cpyext.py @@ -744,6 +744,22 @@ print p assert 'py' in p + def test_get_version(self): + mod = self.import_extension('foo', [ + ('get_version', 'METH_NOARGS', + ''' + char* name1 = Py_GetVersion(); + char* name2 = Py_GetVersion(); + if (name1 != name2) + Py_RETURN_FALSE; + return PyString_FromString(name1); + ''' + ), + ]) + p = mod.get_version() + print p + assert 'PyPy' in p + def test_no_double_imports(self): import sys, os try: From noreply at buildbot.pypy.org Tue Feb 14 22:41:44 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Tue, 14 Feb 2012 22:41:44 +0100 (CET) Subject: [pypy-commit] pypy default: cpyext: Expose PyCFunctionObject::m_module Message-ID: <20120214214144.9705A8203C@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: Changeset: r52482:81a26dec2e1a Date: 2012-02-14 21:39 +0100 http://bitbucket.org/pypy/pypy/changeset/81a26dec2e1a/ Log: cpyext: Expose PyCFunctionObject::m_module diff --git a/pypy/module/cpyext/include/methodobject.h b/pypy/module/cpyext/include/methodobject.h --- a/pypy/module/cpyext/include/methodobject.h +++ b/pypy/module/cpyext/include/methodobject.h @@ -26,6 +26,7 @@ PyObject_HEAD PyMethodDef *m_ml; /* Description of the C function to call */ PyObject *m_self; /* Passed as 'self' arg to the C func, can be NULL */ + PyObject *m_module; /* The __module__ attribute, can be anything */ } PyCFunctionObject; /* Flag passed to newmethodobject */ diff --git a/pypy/module/cpyext/methodobject.py b/pypy/module/cpyext/methodobject.py --- 
a/pypy/module/cpyext/methodobject.py +++ b/pypy/module/cpyext/methodobject.py @@ -32,6 +32,7 @@ PyObjectFields + ( ('m_ml', lltype.Ptr(PyMethodDef)), ('m_self', PyObject), + ('m_module', PyObject), )) PyCFunctionObject = lltype.Ptr(PyCFunctionObjectStruct) @@ -47,11 +48,13 @@ assert isinstance(w_obj, W_PyCFunctionObject) py_func.c_m_ml = w_obj.ml py_func.c_m_self = make_ref(space, w_obj.w_self) + py_func.c_m_module = make_ref(space, w_obj.w_module) @cpython_api([PyObject], lltype.Void, external=False) def cfunction_dealloc(space, py_obj): py_func = rffi.cast(PyCFunctionObject, py_obj) Py_DecRef(space, py_func.c_m_self) + Py_DecRef(space, py_func.c_m_module) from pypy.module.cpyext.object import PyObject_dealloc PyObject_dealloc(space, py_obj) diff --git a/pypy/module/cpyext/test/test_methodobject.py b/pypy/module/cpyext/test/test_methodobject.py --- a/pypy/module/cpyext/test/test_methodobject.py +++ b/pypy/module/cpyext/test/test_methodobject.py @@ -9,7 +9,7 @@ class AppTestMethodObject(AppTestCpythonExtensionBase): def test_call_METH(self): - mod = self.import_extension('foo', [ + mod = self.import_extension('MyModule', [ ('getarg_O', 'METH_O', ''' Py_INCREF(args); @@ -51,11 +51,23 @@ } ''' ), + ('getModule', 'METH_O', + ''' + if(PyCFunction_Check(args)) { + PyCFunctionObject* func = (PyCFunctionObject*)args; + Py_INCREF(func->m_module); + return func->m_module; + } + else { + Py_RETURN_FALSE; + } + ''' + ), ('isSameFunction', 'METH_O', ''' PyCFunction ptr = PyCFunction_GetFunction(args); if (!ptr) return NULL; - if (ptr == foo_getarg_O) + if (ptr == MyModule_getarg_O) Py_RETURN_TRUE; else Py_RETURN_FALSE; @@ -76,6 +88,7 @@ assert mod.getarg_OLD(1, 2) == (1, 2) assert mod.isCFunction(mod.getarg_O) == "getarg_O" + assert mod.getModule(mod.getarg_O) == 'MyModule' assert mod.isSameFunction(mod.getarg_O) raises(TypeError, mod.isSameFunction, 1) From noreply at buildbot.pypy.org Tue Feb 14 22:41:45 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Tue, 14 Feb 2012 22:41:45 +0100 (CET) Subject: [pypy-commit] pypy default: cpyext: add PyThreadState_GET as an alias to PyThreadState_Get. Message-ID: <20120214214145.BFAFF8203C@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: Changeset: r52483:418883fe233e Date: 2012-02-14 21:40 +0100 http://bitbucket.org/pypy/pypy/changeset/418883fe233e/ Log: cpyext: add PyThreadState_GET as an alias to PyThreadState_Get. diff --git a/pypy/module/cpyext/include/pystate.h b/pypy/module/cpyext/include/pystate.h --- a/pypy/module/cpyext/include/pystate.h +++ b/pypy/module/cpyext/include/pystate.h @@ -24,4 +24,6 @@ enum {PyGILState_LOCKED, PyGILState_UNLOCKED} PyGILState_STATE; +#define PyThreadState_GET() PyThreadState_Get() + #endif /* !Py_PYSTATE_H */ From noreply at buildbot.pypy.org Tue Feb 14 22:41:46 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Tue, 14 Feb 2012 22:41:46 +0100 (CET) Subject: [pypy-commit] pypy default: cpyext: declare the RESTRICTED constants for struct members, Message-ID: <20120214214146.E94678203C@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: Changeset: r52484:8dd2cfbcc2f7 Date: 2012-02-14 21:48 +0100 http://bitbucket.org/pypy/pypy/changeset/8dd2cfbcc2f7/ Log: cpyext: declare the RESTRICTED constants for struct members, even if pypy does not implement this at all it seems. 
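For reference, a hypothetical C extension type would consume the flags declared in this changeset in its PyMemberDef table roughly as sketched below (illustration only, not taken from this changeset; the Foo type and its members are invented):

    /* sketch: assumes the standard CPython structmember API */
    #include <stddef.h>
    #include <Python.h>
    #include "structmember.h"

    typedef struct {
        PyObject_HEAD
        int value;      /* exposed as a plain read-only attribute */
        int internal;   /* exposed, but marked access-restricted */
    } FooObject;

    static PyMemberDef Foo_members[] = {
        {"value",    T_INT, offsetof(FooObject, value),    READONLY,   "read-only int"},
        {"internal", T_INT, offsetof(FooObject, internal), RESTRICTED, "restricted int"},
        {NULL}   /* sentinel */
    };
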
diff --git a/pypy/module/cpyext/include/structmember.h b/pypy/module/cpyext/include/structmember.h --- a/pypy/module/cpyext/include/structmember.h +++ b/pypy/module/cpyext/include/structmember.h @@ -20,7 +20,7 @@ } PyMemberDef; -/* Types */ +/* Types. These constants are also in structmemberdefs.py. */ #define T_SHORT 0 #define T_INT 1 #define T_LONG 2 @@ -42,9 +42,12 @@ #define T_LONGLONG 17 #define T_ULONGLONG 18 -/* Flags */ +/* Flags. These constants are also in structmemberdefs.py. */ #define READONLY 1 #define RO READONLY /* Shorthand */ +#define READ_RESTRICTED 2 +#define PY_WRITE_RESTRICTED 4 +#define RESTRICTED (READ_RESTRICTED | PY_WRITE_RESTRICTED) #ifdef __cplusplus diff --git a/pypy/module/cpyext/structmemberdefs.py b/pypy/module/cpyext/structmemberdefs.py --- a/pypy/module/cpyext/structmemberdefs.py +++ b/pypy/module/cpyext/structmemberdefs.py @@ -1,3 +1,5 @@ +# These constants are also in include/structmember.h + T_SHORT = 0 T_INT = 1 T_LONG = 2 @@ -18,3 +20,6 @@ T_ULONGLONG = 18 READONLY = RO = 1 +READ_RESTRICTED = 2 +WRITE_RESTRICTED = 4 +RESTRICTED = READ_RESTRICTED | WRITE_RESTRICTED From noreply at buildbot.pypy.org Tue Feb 14 22:41:48 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Tue, 14 Feb 2012 22:41:48 +0100 (CET) Subject: [pypy-commit] pypy default: cpyext: Implement PyFile_WriteObject() and PyFile_SoftSpace() Message-ID: <20120214214148.1F63A8203C@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: Changeset: r52485:f6349995a199 Date: 2012-02-14 22:06 +0100 http://bitbucket.org/pypy/pypy/changeset/f6349995a199/ Log: cpyext: Implement PyFile_WriteObject() and PyFile_SoftSpace() diff --git a/pypy/module/cpyext/pyfile.py b/pypy/module/cpyext/pyfile.py --- a/pypy/module/cpyext/pyfile.py +++ b/pypy/module/cpyext/pyfile.py @@ -1,7 +1,8 @@ from pypy.rpython.lltypesystem import rffi, lltype from pypy.module.cpyext.api import ( - cpython_api, CONST_STRING, FILEP, build_type_checkers) + cpython_api, CANNOT_FAIL, CONST_STRING, FILEP, build_type_checkers) from pypy.module.cpyext.pyobject import PyObject, borrow_from +from pypy.module.cpyext.object import Py_PRINT_RAW from pypy.interpreter.error import OperationError from pypy.module._file.interp_file import W_File @@ -61,11 +62,49 @@ def PyFile_WriteString(space, s, w_p): """Write string s to file object p. Return 0 on success or -1 on failure; the appropriate exception will be set.""" - w_s = space.wrap(rffi.charp2str(s)) - space.call_method(w_p, "write", w_s) + w_str = space.wrap(rffi.charp2str(s)) + space.call_method(w_p, "write", w_str) + return 0 + + at cpython_api([PyObject, PyObject, rffi.INT_real], rffi.INT_real, error=-1) +def PyFile_WriteObject(space, w_obj, w_p, flags): + """ + Write object obj to file object p. The only supported flag for flags is + Py_PRINT_RAW; if given, the str() of the object is written + instead of the repr(). 
Return 0 on success or -1 on failure; the + appropriate exception will be set.""" + if rffi.cast(lltype.Signed, flags) & Py_PRINT_RAW: + w_str = space.str(w_obj) + else: + w_str = space.repr(w_obj) + space.call_method(w_p, "write", w_str) return 0 @cpython_api([PyObject], PyObject) def PyFile_Name(space, w_p): """Return the name of the file specified by p as a string object.""" - return borrow_from(w_p, space.getattr(w_p, space.wrap("name"))) \ No newline at end of file + return borrow_from(w_p, space.getattr(w_p, space.wrap("name"))) + + at cpython_api([PyObject, rffi.INT_real], rffi.INT_real, error=CANNOT_FAIL) +def PyFile_SoftSpace(space, w_p, newflag): + """ + This function exists for internal use by the interpreter. Set the + softspace attribute of p to newflag and return the previous value. + p does not have to be a file object for this function to work + properly; any object is supported (thought its only interesting if + the softspace attribute can be set). This function clears any + errors, and will return 0 as the previous value if the attribute + either does not exist or if there were errors in retrieving it. + There is no way to detect errors from this function, but doing so + should not be needed.""" + try: + if newflag: + w_newflag = space.w_True + else: + w_newflag = space.w_False + oldflag = space.int_w(space.getattr(w_p, space.wrap("softspace"))) + space.setattr(w_p, space.wrap("softspace"), w_newflag) + return oldflag + except OperationError, e: + return 0 + diff --git a/pypy/module/cpyext/test/test_pyfile.py b/pypy/module/cpyext/test/test_pyfile.py --- a/pypy/module/cpyext/test/test_pyfile.py +++ b/pypy/module/cpyext/test/test_pyfile.py @@ -1,5 +1,6 @@ from pypy.module.cpyext.api import fopen, fclose, fwrite from pypy.module.cpyext.test.test_api import BaseApiTest +from pypy.module.cpyext.object import Py_PRINT_RAW from pypy.rpython.lltypesystem import rffi, lltype from pypy.tool.udir import udir import pytest @@ -77,3 +78,28 @@ out = out.replace('\r\n', '\n') assert out == "test\n" + def test_file_writeobject(self, space, api, capfd): + w_obj = space.wrap("test\n") + w_stdout = space.sys.get("stdout") + api.PyFile_WriteObject(w_obj, w_stdout, Py_PRINT_RAW) + api.PyFile_WriteObject(w_obj, w_stdout, 0) + space.call_method(w_stdout, "flush") + out, err = capfd.readouterr() + out = out.replace('\r\n', '\n') + assert out == "test\n'test\\n'" + + def test_file_softspace(self, space, api, capfd): + w_stdout = space.sys.get("stdout") + assert api.PyFile_SoftSpace(w_stdout, 1) == 0 + assert api.PyFile_SoftSpace(w_stdout, 0) == 1 + + api.PyFile_SoftSpace(w_stdout, 1) + w_ns = space.newdict() + space.exec_("print 1,", w_ns, w_ns) + space.exec_("print 2,", w_ns, w_ns) + api.PyFile_SoftSpace(w_stdout, 0) + space.exec_("print 3", w_ns, w_ns) + space.call_method(w_stdout, "flush") + out, err = capfd.readouterr() + out = out.replace('\r\n', '\n') + assert out == " 1 23\n" From noreply at buildbot.pypy.org Tue Feb 14 22:41:49 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Tue, 14 Feb 2012 22:41:49 +0100 (CET) Subject: [pypy-commit] pypy default: Remove newly implemented functions from stubs.py Message-ID: <20120214214149.4F6348203C@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: Changeset: r52486:2fd3c79938e4 Date: 2012-02-14 22:09 +0100 http://bitbucket.org/pypy/pypy/changeset/2fd3c79938e4/ Log: Remove newly implemented functions from stubs.py diff --git a/pypy/module/cpyext/stubs.py b/pypy/module/cpyext/stubs.py --- a/pypy/module/cpyext/stubs.py +++ 
b/pypy/module/cpyext/stubs.py @@ -1,5 +1,5 @@ from pypy.module.cpyext.api import ( - cpython_api, PyObject, PyObjectP, CANNOT_FAIL, Py_buffer + cpython_api, PyObject, PyObjectP, CANNOT_FAIL ) from pypy.module.cpyext.complexobject import Py_complex_ptr as Py_complex from pypy.rpython.lltypesystem import rffi, lltype @@ -10,6 +10,7 @@ PyMethodDef = rffi.VOIDP PyGetSetDef = rffi.VOIDP PyMemberDef = rffi.VOIDP +Py_buffer = rffi.VOIDP va_list = rffi.VOIDP PyDateTime_Date = rffi.VOIDP PyDateTime_DateTime = rffi.VOIDP @@ -32,10 +33,6 @@ def _PyObject_Del(space, op): raise NotImplementedError - at cpython_api([PyObject], rffi.INT_real, error=CANNOT_FAIL) -def PyObject_CheckBuffer(space, obj): - raise NotImplementedError - @cpython_api([rffi.CCHARP], Py_ssize_t, error=CANNOT_FAIL) def PyBuffer_SizeFromFormat(space, format): """Return the implied ~Py_buffer.itemsize from the struct-stype @@ -684,28 +681,6 @@ """ raise NotImplementedError - at cpython_api([PyObject, rffi.INT_real], rffi.INT_real, error=CANNOT_FAIL) -def PyFile_SoftSpace(space, p, newflag): - """ - This function exists for internal use by the interpreter. Set the - softspace attribute of p to newflag and return the previous value. - p does not have to be a file object for this function to work properly; any - object is supported (thought its only interesting if the softspace - attribute can be set). This function clears any errors, and will return 0 - as the previous value if the attribute either does not exist or if there were - errors in retrieving it. There is no way to detect errors from this function, - but doing so should not be needed.""" - raise NotImplementedError - - at cpython_api([PyObject, PyObject, rffi.INT_real], rffi.INT_real, error=-1) -def PyFile_WriteObject(space, obj, p, flags): - """ - Write object obj to file object p. The only supported flag for flags is - Py_PRINT_RAW; if given, the str() of the object is written - instead of the repr(). Return 0 on success or -1 on failure; the - appropriate exception will be set.""" - raise NotImplementedError - @cpython_api([], PyObject) def PyFloat_GetInfo(space): """Return a structseq instance which contains information about the @@ -1097,19 +1072,6 @@ raise NotImplementedError @cpython_api([], rffi.CCHARP) -def Py_GetVersion(space): - """Return the version of this Python interpreter. This is a string that looks - something like - - "1.5 (\#67, Dec 31 1997, 22:34:28) [GCC 2.7.2.2]" - - The first word (up to the first space character) is the current Python version; - the first three characters are the major and minor version separated by a - period. The returned string points into static storage; the caller should not - modify its value. The value is available to Python code as sys.version.""" - raise NotImplementedError - - at cpython_api([], rffi.CCHARP) def Py_GetPlatform(space): """Return the platform identifier for the current platform. 
On Unix, this is formed from the"official" name of the operating system, converted to lower From noreply at buildbot.pypy.org Tue Feb 14 22:41:50 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Tue, 14 Feb 2012 22:41:50 +0100 (CET) Subject: [pypy-commit] pypy default: cpyext: implement PyObject_Dir() Message-ID: <20120214214150.7E62E8203C@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: Changeset: r52487:d75aa4b92253 Date: 2012-02-14 22:22 +0100 http://bitbucket.org/pypy/pypy/changeset/d75aa4b92253/ Log: cpyext: implement PyObject_Dir() diff --git a/pypy/module/cpyext/object.py b/pypy/module/cpyext/object.py --- a/pypy/module/cpyext/object.py +++ b/pypy/module/cpyext/object.py @@ -381,6 +381,15 @@ This is the equivalent of the Python expression hash(o).""" return space.int_w(space.hash(w_obj)) + at cpython_api([PyObject], PyObject) +def PyObject_Dir(space, w_o): + """This is equivalent to the Python expression dir(o), returning a (possibly + empty) list of strings appropriate for the object argument, or NULL if there + was an error. If the argument is NULL, this is like the Python dir(), + returning the names of the current locals; in this case, if no execution frame + is active then NULL is returned but PyErr_Occurred() will return false.""" + return space.call_function(space.builtin.get('dir'), w_o) + @cpython_api([PyObject, rffi.CCHARPP, Py_ssize_tP], rffi.INT_real, error=-1) def PyObject_AsCharBuffer(space, obj, bufferp, sizep): """Returns a pointer to a read-only memory location usable as diff --git a/pypy/module/cpyext/stubs.py b/pypy/module/cpyext/stubs.py --- a/pypy/module/cpyext/stubs.py +++ b/pypy/module/cpyext/stubs.py @@ -1647,15 +1647,6 @@ """ raise NotImplementedError - at cpython_api([PyObject], PyObject) -def PyObject_Dir(space, o): - """This is equivalent to the Python expression dir(o), returning a (possibly - empty) list of strings appropriate for the object argument, or NULL if there - was an error. 
If the argument is NULL, this is like the Python dir(), - returning the names of the current locals; in this case, if no execution frame - is active then NULL is returned but PyErr_Occurred() will return false.""" - raise NotImplementedError - @cpython_api([], PyFrameObject) def PyEval_GetFrame(space): """Return the current thread state's frame, which is NULL if no frame is diff --git a/pypy/module/cpyext/test/test_object.py b/pypy/module/cpyext/test/test_object.py --- a/pypy/module/cpyext/test/test_object.py +++ b/pypy/module/cpyext/test/test_object.py @@ -191,6 +191,11 @@ assert api.PyObject_Unicode(space.wrap("\xe9")) is None api.PyErr_Clear() + def test_dir(self, space, api): + w_dir = api.PyObject_Dir(space.sys) + assert space.isinstance_w(w_dir, space.w_list) + assert space.is_true(space.contains(w_dir, space.wrap('modules'))) + class AppTestObject(AppTestCpythonExtensionBase): def setup_class(cls): AppTestCpythonExtensionBase.setup_class.im_func(cls) From noreply at buildbot.pypy.org Tue Feb 14 22:41:51 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Tue, 14 Feb 2012 22:41:51 +0100 (CET) Subject: [pypy-commit] pypy default: cpyext: implement PyString_InternInPlace() Message-ID: <20120214214151.AD1288203C@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: Changeset: r52488:a5bb40bcb46b Date: 2012-02-14 22:34 +0100 http://bitbucket.org/pypy/pypy/changeset/a5bb40bcb46b/ Log: cpyext: implement PyString_InternInPlace() diff --git a/pypy/module/cpyext/stringobject.py b/pypy/module/cpyext/stringobject.py --- a/pypy/module/cpyext/stringobject.py +++ b/pypy/module/cpyext/stringobject.py @@ -250,6 +250,25 @@ s = rffi.charp2str(string) return space.new_interned_str(s) + at cpython_api([PyObjectP], lltype.Void) +def PyString_InternInPlace(space, string): + """Intern the argument *string in place. The argument must be the + address of a pointer variable pointing to a Python string object. + If there is an existing interned string that is the same as + *string, it sets *string to it (decrementing the reference count + of the old string object and incrementing the reference count of + the interned string object), otherwise it leaves *string alone and + interns it (incrementing its reference count). (Clarification: + even though there is a lot of talk about reference counts, think + of this function as reference-count-neutral; you own the object + after the call if and only if you owned it before the call.) + + This function is not available in 3.x and does not have a PyBytes + alias.""" + w_str = from_ref(space, string[0]) + w_str = space.new_interned_w_str(w_str) + string[0] = make_ref(space, w_str) + @cpython_api([PyObject, rffi.CCHARP, rffi.CCHARP], PyObject) def PyString_AsEncodedObject(space, w_str, encoding, errors): """Encode a string object using the codec registered for encoding and return diff --git a/pypy/module/cpyext/stubs.py b/pypy/module/cpyext/stubs.py --- a/pypy/module/cpyext/stubs.py +++ b/pypy/module/cpyext/stubs.py @@ -1768,21 +1768,6 @@ """Empty an existing set of all elements.""" raise NotImplementedError - at cpython_api([PyObjectP], lltype.Void) -def PyString_InternInPlace(space, string): - """Intern the argument *string in place. The argument must be the address of a - pointer variable pointing to a Python string object. 
If there is an existing - interned string that is the same as *string, it sets *string to it - (decrementing the reference count of the old string object and incrementing the - reference count of the interned string object), otherwise it leaves *string - alone and interns it (incrementing its reference count). (Clarification: even - though there is a lot of talk about reference counts, think of this function as - reference-count-neutral; you own the object after the call if and only if you - owned it before the call.) - - This function is not available in 3.x and does not have a PyBytes alias.""" - raise NotImplementedError - @cpython_api([rffi.CCHARP, Py_ssize_t, rffi.CCHARP, rffi.CCHARP], PyObject) def PyString_Decode(space, s, size, encoding, errors): """Create an object by decoding size bytes of the encoded buffer s using the diff --git a/pypy/module/cpyext/test/test_stringobject.py b/pypy/module/cpyext/test/test_stringobject.py --- a/pypy/module/cpyext/test/test_stringobject.py +++ b/pypy/module/cpyext/test/test_stringobject.py @@ -166,6 +166,20 @@ res = module.test_string_format(1, "xyz") assert res == "bla 1 ble xyz\n" + def test_intern_inplace(self): + module = self.import_extension('foo', [ + ("test_intern_inplace", "METH_O", + ''' + PyObject *s = args; + Py_INCREF(s); + PyString_InternInPlace(&s); + return s; + ''' + ) + ]) + # This does not test much, but at least the refcounts are checked. + assert module.test_intern_inplace('s') == 's' + class TestString(BaseApiTest): def test_string_resize(self, space, api): py_str = new_empty_str(space, 10) From noreply at buildbot.pypy.org Tue Feb 14 22:49:38 2012 From: noreply at buildbot.pypy.org (fijal) Date: Tue, 14 Feb 2012 22:49:38 +0100 (CET) Subject: [pypy-commit] pypy raw-memory-pressure-nursery: I think most modules now use the correct interface Message-ID: <20120214214938.31CB68203C@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: raw-memory-pressure-nursery Changeset: r52489:0385f8a3e803 Date: 2012-02-14 23:49 +0200 http://bitbucket.org/pypy/pypy/changeset/0385f8a3e803/ Log: I think most modules now use the correct interface diff --git a/pypy/module/_hashlib/interp_hashlib.py b/pypy/module/_hashlib/interp_hashlib.py --- a/pypy/module/_hashlib/interp_hashlib.py +++ b/pypy/module/_hashlib/interp_hashlib.py @@ -23,6 +23,7 @@ ctx = lltype.nullptr(ropenssl.EVP_MD_CTX.TO) def __init__(self, space, name): + rgc.add_memory_pressure(self, HASH_MALLOC_SIZE + self.digest_size) self.name = name digest_type = self.digest_type_by_name(space) self.digest_size = rffi.getintfield(digest_type, 'c_md_size') @@ -33,7 +34,6 @@ self.lock = Lock(space) ctx = lltype.malloc(ropenssl.EVP_MD_CTX.TO, flavor='raw') - rgc.add_memory_pressure(HASH_MALLOC_SIZE + self.digest_size) try: ropenssl.EVP_DigestInit(ctx, digest_type) self.ctx = ctx diff --git a/pypy/module/_multiprocessing/interp_semaphore.py b/pypy/module/_multiprocessing/interp_semaphore.py --- a/pypy/module/_multiprocessing/interp_semaphore.py +++ b/pypy/module/_multiprocessing/interp_semaphore.py @@ -208,7 +208,7 @@ # don't forget to wrap them into OperationError if sys.platform == 'win32': - def create_semaphore(space, name, val, max): + def create_semaphore(space, self, name, val, max): rwin32.SetLastError(0) handle = _CreateSemaphore(rffi.NULL, val, max, rffi.NULL) # On Windows we should fail on ERROR_ALREADY_EXISTS @@ -302,14 +302,14 @@ return semlock_getvalue(self, space) == 0 else: - def create_semaphore(space, name, val, max): + def create_semaphore(space, w_semaphore, name, 
val, max): sem = sem_open(name, os.O_CREAT | os.O_EXCL, 0600, val) try: sem_unlink(name) except OSError: pass else: - rgc.add_memory_pressure(SEM_T_SIZE) + rgc.add_memory_pressure(w_semaphore, SEM_T_SIZE) return sem def delete_semaphore(handle): @@ -517,12 +517,12 @@ counter = space.fromcache(CounterState).getCount() name = "/mp%d-%d" % (os.getpid(), counter) + self = space.allocate_instance(W_SemLock, w_subtype) try: - handle = create_semaphore(space, name, value, maxvalue) + handle = create_semaphore(space, self, name, value, maxvalue) except OSError, e: raise wrap_oserror(space, e) - self = space.allocate_instance(W_SemLock, w_subtype) self.__init__(handle, kind, maxvalue) return space.wrap(self) diff --git a/pypy/module/_rawffi/interp_rawffi.py b/pypy/module/_rawffi/interp_rawffi.py --- a/pypy/module/_rawffi/interp_rawffi.py +++ b/pypy/module/_rawffi/interp_rawffi.py @@ -7,6 +7,7 @@ from pypy.rpython.lltypesystem import lltype, rffi from pypy.rlib.unroll import unrolling_iterable import pypy.rlib.rposix as rposix +from pypy.rlib import rgc _MS_WINDOWS = os.name == "nt" @@ -267,8 +268,9 @@ if address: self.ll_buffer = rffi.cast(rffi.VOIDP, address) else: + rgc.add_memory_pressure(self, size) self.ll_buffer = lltype.malloc(rffi.VOIDP.TO, size, flavor='raw', - zero=True, add_memory_pressure=True) + zero=True) if tracker.DO_TRACING: ll_buf = rffi.cast(lltype.Signed, self.ll_buffer) tracker.trace_allocation(ll_buf, self) diff --git a/pypy/module/array/interp_array.py b/pypy/module/array/interp_array.py --- a/pypy/module/array/interp_array.py +++ b/pypy/module/array/interp_array.py @@ -12,7 +12,7 @@ from pypy.rlib.rarithmetic import ovfcheck from pypy.rlib.unroll import unrolling_iterable from pypy.rpython.lltypesystem import lltype, rffi - +from pypy.rlib import rgc @unwrap_spec(typecode=str) def w_array(space, w_cls, typecode, __args__): @@ -226,8 +226,8 @@ some += size >> 3 self.allocated = size + some new_buffer = lltype.malloc(mytype.arraytype, - self.allocated, flavor='raw', - add_memory_pressure=True) + self.allocated, flavor='raw') + rgc.add_memory_pressure(self, self.allocated * rffi.sizeof(mytype.arraytype.OF)) for i in range(min(size, self.len)): new_buffer[i] = self.buffer[i] else: diff --git a/pypy/module/micronumpy/interp_dtype.py b/pypy/module/micronumpy/interp_dtype.py --- a/pypy/module/micronumpy/interp_dtype.py +++ b/pypy/module/micronumpy/interp_dtype.py @@ -7,7 +7,7 @@ from pypy.rlib.objectmodel import specialize from pypy.rlib.rarithmetic import LONG_BIT from pypy.rpython.lltypesystem import lltype, rffi - +from pypy.rlib import rgc UNSIGNEDLTR = "u" SIGNEDLTR = "i" @@ -30,12 +30,12 @@ self.alternate_constructors = alternate_constructors self.aliases = aliases - def malloc(self, length): + def malloc(self, array, length): # XXX find out why test_zjit explodes with tracking of allocations - return lltype.malloc(VOID_STORAGE, self.itemtype.get_element_size() * length, - zero=True, flavor="raw", - track_allocation=False, add_memory_pressure=True - ) + size = self.itemtype.get_element_size() * length + rgc.add_memory_pressure(array, size) + return lltype.malloc(VOID_STORAGE, size, zero=True, flavor="raw", + track_allocation=False) @specialize.argtype(1) def box(self, value): diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -903,7 +903,7 @@ if parent is not None: self.storage = parent.storage else: - self.storage = 
dtype.malloc(size) + self.storage = dtype.malloc(self, size) self.order = order self.dtype = dtype if self.strides is None: diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -11,7 +11,7 @@ class MockDtype(object): - def malloc(self, size): + def malloc(self, array, size): return None diff --git a/pypy/module/pyexpat/interp_pyexpat.py b/pypy/module/pyexpat/interp_pyexpat.py --- a/pypy/module/pyexpat/interp_pyexpat.py +++ b/pypy/module/pyexpat/interp_pyexpat.py @@ -407,6 +407,7 @@ class W_XMLParserType(Wrappable): def __init__(self, space, parser, w_intern): + rgc.add_memory_pressure(XML_Parser_SIZE + 300) self.itself = parser self.w_intern = w_intern @@ -824,7 +825,6 @@ # Currently this is just the size of the pointer and some estimated bytes. # The struct isn't actually defined in expat.h - it is in xmlparse.c # XXX: find a good estimate of the XML_ParserStruct - rgc.add_memory_pressure(XML_Parser_SIZE + 300) if not xmlparser: raise OperationError(space.w_RuntimeError, space.wrap('XML_ParserCreate failed')) diff --git a/pypy/module/thread/ll_thread.py b/pypy/module/thread/ll_thread.py --- a/pypy/module/thread/ll_thread.py +++ b/pypy/module/thread/ll_thread.py @@ -4,7 +4,7 @@ import py from pypy.rlib import jit, rgc from pypy.rlib.debug import ll_assert -from pypy.rlib.objectmodel import we_are_translated, specialize +from pypy.rlib.objectmodel import we_are_translated, specialize, instantiate from pypy.rpython.lltypesystem.lloperation import llop from pypy.rpython.tool import rffi_platform from pypy.tool import autopath @@ -83,7 +83,10 @@ _nowrapper=True) def allocate_lock(): - return Lock(allocate_ll_lock()) + lock = instantiate(Lock) + ll_lock = allocate_ll_lock(lock) + lock.__init__(lock, ll_lock) + return lock @specialize.arg(0) def ll_start_new_thread(func): From noreply at buildbot.pypy.org Tue Feb 14 23:02:44 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Tue, 14 Feb 2012 23:02:44 +0100 (CET) Subject: [pypy-commit] pypy default: Translation fix Message-ID: <20120214220244.417428203C@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: Changeset: r52490:30cb1ba90150 Date: 2012-02-14 23:02 +0100 http://bitbucket.org/pypy/pypy/changeset/30cb1ba90150/ Log: Translation fix diff --git a/pypy/module/cpyext/pyfile.py b/pypy/module/cpyext/pyfile.py --- a/pypy/module/cpyext/pyfile.py +++ b/pypy/module/cpyext/pyfile.py @@ -98,7 +98,7 @@ There is no way to detect errors from this function, but doing so should not be needed.""" try: - if newflag: + if rffi.cast(lltype.Signed, newflag): w_newflag = space.w_True else: w_newflag = space.w_False From noreply at buildbot.pypy.org Wed Feb 15 00:04:13 2012 From: noreply at buildbot.pypy.org (fijal) Date: Wed, 15 Feb 2012 00:04:13 +0100 (CET) Subject: [pypy-commit] pypy raw-memory-pressure-nursery: hopefully clean up and improve Message-ID: <20120214230413.554FA8203C@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: raw-memory-pressure-nursery Changeset: r52491:aad4ffbb7df8 Date: 2012-02-15 01:03 +0200 http://bitbucket.org/pypy/pypy/changeset/aad4ffbb7df8/ Log: hopefully clean up and improve diff --git a/pypy/module/thread/ll_thread.py b/pypy/module/thread/ll_thread.py --- a/pypy/module/thread/ll_thread.py +++ b/pypy/module/thread/ll_thread.py @@ -83,9 +83,10 @@ _nowrapper=True) def allocate_lock(): - lock = instantiate(Lock) - ll_lock = 
allocate_ll_lock(lock) - lock.__init__(lock, ll_lock) + lock = Lock(allocate_ll_lock()) + # Add some memory pressure for the size of the lock because it is an + # Opaque object + rgc.add_memory_pressure(lock, TLOCKP_SIZE) return lock @specialize.arg(0) @@ -170,9 +171,6 @@ if rffi.cast(lltype.Signed, res) <= 0: lltype.free(ll_lock, flavor='raw', track_allocation=False) raise error("out of resources") - # Add some memory pressure for the size of the lock because it is an - # Opaque object - rgc.add_memory_pressure(TLOCKP_SIZE) return ll_lock def free_ll_lock(ll_lock): From noreply at buildbot.pypy.org Wed Feb 15 00:45:26 2012 From: noreply at buildbot.pypy.org (wlav) Date: Wed, 15 Feb 2012 00:45:26 +0100 (CET) Subject: [pypy-commit] pypy reflex-support: use type rather than string as dummy return type Message-ID: <20120214234526.7AE008203C@wyvern.cs.uni-duesseldorf.de> Author: Wim Lavrijsen Branch: reflex-support Changeset: r52492:a7e3808bb6ac Date: 2012-02-13 13:04 -0800 http://bitbucket.org/pypy/pypy/changeset/a7e3808bb6ac/ Log: use type rather than string as dummy return type diff --git a/pypy/module/cppyy/test/test_zjit.py b/pypy/module/cppyy/test/test_zjit.py --- a/pypy/module/cppyy/test/test_zjit.py +++ b/pypy/module/cppyy/test/test_zjit.py @@ -44,6 +44,8 @@ def perform(self, executioncontext, frame): pass +class FakeReturnType(object): + pass class FakeSpace(object): fake = True @@ -123,7 +125,7 @@ return None def allocate_instance(self, cls, w_type): - assert w_type == "stuff" + assert isinstance(w_type, FakeReturnType) return instantiate(cls) def _freeze_(self): @@ -136,7 +138,7 @@ def f(): lib = interp_cppyy.load_dictionary(space, "./example01Dict.so") cls = interp_cppyy.type_byname(space, "example01") - inst = cls.get_overload("example01").call(None, "stuff", [FakeInt(0)]) + inst = cls.get_overload("example01").call(None, FakeReturnType(), [FakeInt(0)]) addDataToInt = cls.get_overload("addDataToInt") assert isinstance(inst, interp_cppyy.W_CPPInstance) i = 10 From noreply at buildbot.pypy.org Wed Feb 15 00:45:27 2012 From: noreply at buildbot.pypy.org (wlav) Date: Wed, 15 Feb 2012 00:45:27 +0100 (CET) Subject: [pypy-commit] pypy reflex-support: optimization and cleanup Message-ID: <20120214234527.AB7B98203C@wyvern.cs.uni-duesseldorf.de> Author: Wim Lavrijsen Branch: reflex-support Changeset: r52493:5edadec1b731 Date: 2012-02-13 13:04 -0800 http://bitbucket.org/pypy/pypy/changeset/5edadec1b731/ Log: optimization and cleanup diff --git a/pypy/module/cppyy/interp_cppyy.py b/pypy/module/cppyy/interp_cppyy.py --- a/pypy/module/cppyy/interp_cppyy.py +++ b/pypy/module/cppyy/interp_cppyy.py @@ -109,6 +109,8 @@ @jit.unroll_safe def call(self, cppthis, w_type, args_w): + jit.promote(self) + jit.promote(w_type) assert lltype.typeOf(cppthis) == capi.C_OBJECT args_expected = len(self.arg_defs) args_given = len(args_w) @@ -129,12 +131,9 @@ @jit.unroll_safe def do_fast_call(self, cppthis, w_type, args_w): - space = self.space - if self.arg_converters is None: - self._build_converters() jit.promote(self) funcptr = self.methgetter(rffi.cast(capi.C_OBJECT, cppthis)) - libffi_func = self._get_libffi_func(funcptr) + libffi_func = self._prepare_libffi_func(funcptr) if not libffi_func: raise FastCallNotPossible @@ -144,14 +143,16 @@ for i in range(len(args_w)): conv = self.arg_converters[i] w_arg = args_w[i] - conv.convert_argument_libffi(space, w_arg, argchain) + conv.convert_argument_libffi(self.space, w_arg, argchain) for j in range(i+1, len(self.arg_defs)): conv = self.arg_converters[j] - 
conv.default_argument_libffi(space, argchain) - return self.executor.execute_libffi(space, w_type, libffi_func, argchain) + conv.default_argument_libffi(self.space, argchain) + return self.executor.execute_libffi(self.space, w_type, libffi_func, argchain) @jit.elidable_promote() - def _get_libffi_func(self, funcptr): + def _prepare_libffi_func(self, funcptr): + if self.arg_converters is None: + self._build_converters() key = rffi.cast(rffi.LONG, funcptr) if key in self._libffifunc_cache: return self._libffifunc_cache[key] @@ -175,7 +176,6 @@ @jit.unroll_safe def prepare_arguments(self, args_w): jit.promote(self) - space = self.space if self.arg_converters is None: self._build_converters() args = capi.c_allocate_function_args(len(args_w)) @@ -185,7 +185,7 @@ w_arg = args_w[i] try: arg_i = lltype.direct_ptradd(rffi.cast(rffi.CCHARP, args), i*stride) - conv.convert_argument(space, w_arg, rffi.cast(capi.C_OBJECT, arg_i)) + conv.convert_argument(self.space, w_arg, rffi.cast(capi.C_OBJECT, arg_i)) except: # fun :-( for j in range(i): From noreply at buildbot.pypy.org Wed Feb 15 00:45:28 2012 From: noreply at buildbot.pypy.org (wlav) Date: Wed, 15 Feb 2012 00:45:28 +0100 (CET) Subject: [pypy-commit] pypy reflex-support: benchmark fix for 64b Message-ID: <20120214234528.D6D8D8203C@wyvern.cs.uni-duesseldorf.de> Author: Wim Lavrijsen Branch: reflex-support Changeset: r52494:04baca12b2b5 Date: 2012-02-13 13:04 -0800 http://bitbucket.org/pypy/pypy/changeset/04baca12b2b5/ Log: benchmark fix for 64b diff --git a/pypy/module/cppyy/bench/Makefile b/pypy/module/cppyy/bench/Makefile --- a/pypy/module/cppyy/bench/Makefile +++ b/pypy/module/cppyy/bench/Makefile @@ -17,10 +17,10 @@ ifeq ($(shell $(genreflex) --help | grep -- --with-methptrgetter),) genreflexflags= - cppflags2=-O3 + cppflags2=-O3 -fPIC else genreflexflags=--with-methptrgetter - cppflags2=-Wno-pmf-conversions -O3 + cppflags2=-Wno-pmf-conversions -O3 -fPIC endif From noreply at buildbot.pypy.org Wed Feb 15 00:45:30 2012 From: noreply at buildbot.pypy.org (wlav) Date: Wed, 15 Feb 2012 00:45:30 +0100 (CET) Subject: [pypy-commit] pypy reflex-support: remove raised OperationError that could be caught at the interp level (speeds up overloads) Message-ID: <20120214234530.15CD48203C@wyvern.cs.uni-duesseldorf.de> Author: Wim Lavrijsen Branch: reflex-support Changeset: r52495:070a3cb94e09 Date: 2012-02-14 15:45 -0800 http://bitbucket.org/pypy/pypy/changeset/070a3cb94e09/ Log: remove raised OperationError that could be caught at the interp level (speeds up overloads) diff --git a/pypy/module/cppyy/converter.py b/pypy/module/cppyy/converter.py --- a/pypy/module/cppyy/converter.py +++ b/pypy/module/cppyy/converter.py @@ -1,6 +1,5 @@ import sys -from pypy.interpreter.error import OperationError from pypy.rpython.lltypesystem import rffi, lltype from pypy.rlib.rarithmetic import r_singlefloat from pypy.rlib import jit, libffi, clibffi, rfloat @@ -39,8 +38,7 @@ return fieldptr def _is_abstract(self, space): - raise OperationError(space.w_NotImplementedError, - space.wrap("no converter available")) # more detailed part is not rpython: (actual: %s)" % type(self).__name__)) + raise TypeError("no converter available") def convert_argument(self, space, w_obj, address): self._is_abstract(space) @@ -129,8 +127,7 @@ try: byteptr[0] = buf.get_raw_address() except ValueError: - raise OperationError(space.w_TypeError, - space.wrap("raw buffer interface not supported")) + raise TypeError("raw buffer interface not supported") class NumericTypeConverterMixin(object): @@ 
-181,8 +178,7 @@ self.name = name def convert_argument(self, space, w_obj, address): - raise OperationError(space.w_NotImplementedError, - space.wrap('no converter available for type "%s"' % self.name)) + raise TypeError('no converter available for type "%s"' % self.name) class BoolConverter(TypeConverter): @@ -192,8 +188,7 @@ def _unwrap_object(self, space, w_obj): arg = space.c_int_w(w_obj) if arg != False and arg != True: - raise OperationError(space.w_TypeError, - space.wrap("boolean value should be bool, or integer 1 or 0")) + raise ValueError("boolean value should be bool, or integer 1 or 0") return arg def convert_argument(self, space, w_obj, address): @@ -226,16 +221,14 @@ if space.isinstance_w(w_value, space.w_int): ival = space.c_int_w(w_value) if ival < 0 or 256 <= ival: - raise OperationError(space.w_TypeError, - space.wrap("char arg not in range(256)")) + raise ValueError("char arg not in range(256)") value = rffi.cast(rffi.CHAR, space.c_int_w(w_value)) else: value = space.str_w(w_value) if len(value) != 1: - raise OperationError(space.w_TypeError, - space.wrap("char expected, got string of size %d" % len(value))) + raise ValueError("char expected, got string of size %d" % len(value)) return value[0] # turn it into a "char" to the annotator def convert_argument(self, space, w_obj, address): @@ -515,10 +508,8 @@ obj.cppclass.handle, self.cpptype.handle, obj.rawobject) obj_address = capi.direct_ptradd(obj.rawobject, offset) return rffi.cast(capi.C_OBJECT, obj_address) - raise OperationError(space.w_TypeError, - space.wrap("cannot pass %s as %s" % ( - space.type(w_obj).getname(space, "?"), - self.cpptype.name))) + raise TypeError("cannot pass %s as %s" % + (space.type(w_obj).getname(space, "?"), self.cpptype.name)) def convert_argument(self, space, w_obj, address): x = rffi.cast(rffi.VOIDPP, address) diff --git a/pypy/module/cppyy/executor.py b/pypy/module/cppyy/executor.py --- a/pypy/module/cppyy/executor.py +++ b/pypy/module/cppyy/executor.py @@ -1,6 +1,5 @@ import sys -from pypy.interpreter.error import OperationError from pypy.rpython.lltypesystem import rffi, lltype from pypy.rlib import libffi, clibffi @@ -21,8 +20,7 @@ def execute(self, space, w_returntype, func, cppthis, num_args, args): rtype = capi.charp2str_free(capi.c_method_result_type(func.cpptype.handle, func.method_index)) - raise OperationError(space.w_NotImplementedError, - space.wrap('return type not available or supported ("%s")' % rtype)) + raise TypeError('return type not available or supported ("%s")' % rtype) def execute_libffi(self, space, w_returntype, libffifunc, argchain): from pypy.module.cppyy.interp_cppyy import FastCallNotPossible diff --git a/pypy/module/cppyy/interp_cppyy.py b/pypy/module/cppyy/interp_cppyy.py --- a/pypy/module/cppyy/interp_cppyy.py +++ b/pypy/module/cppyy/interp_cppyy.py @@ -8,7 +8,7 @@ from pypy.rpython.lltypesystem import rffi, lltype from pypy.rlib import libffi, rdynload, rweakref -from pypy.rlib import jit, debug +from pypy.rlib import jit, debug, objectmodel from pypy.module.cppyy import converter, executor, helper @@ -115,7 +115,7 @@ args_expected = len(self.arg_defs) args_given = len(args_w) if args_expected < args_given or args_given < self.args_required: - raise OperationError(self.space.w_TypeError, self.space.wrap("wrong number of arguments")) + raise TypeError("wrong number of arguments") if self.methgetter and cppthis: # only for methods try: @@ -264,13 +264,8 @@ cppyyfunc = self.functions[i] try: return cppyyfunc.call(cppthis, w_type, args_w) - except 
OperationError, e: - if not (e.match(space, space.w_TypeError) or \ - e.match(space, space.w_NotImplementedError)): - raise + except Exception, e: errmsg += '\n\t'+str(e) - except KeyError: - pass raise OperationError(space.w_TypeError, space.wrap(errmsg)) @@ -314,13 +309,23 @@ def get(self, w_cppinstance, w_type): cppinstance = self.space.interp_w(W_CPPInstance, w_cppinstance, can_be_None=True) offset = self._get_offset(cppinstance) - return self.converter.from_memory(self.space, w_cppinstance, w_type, offset) + try: + return self.converter.from_memory(self.space, w_cppinstance, w_type, offset) + except Exception, e: + raise OperationError(self.space.w_TypeError, self.space.wrap(str(e))) + except ValueError, e: + raise OperationError(self.space.w_ValueError, self.space.wrap(str(e))) def set(self, w_cppinstance, w_value): cppinstance = self.space.interp_w(W_CPPInstance, w_cppinstance, can_be_None=True) offset = self._get_offset(cppinstance) - self.converter.to_memory(self.space, w_cppinstance, w_value, offset) - return self.space.w_None + try: + self.converter.to_memory(self.space, w_cppinstance, w_value, offset) + return self.space.w_None + except TypeError, e: + raise OperationError(self.space.w_TypeError, self.space.wrap(str(e))) + except ValueError, e: + raise OperationError(self.space.w_ValueError, self.space.wrap(str(e))) W_CPPDataMember.typedef = TypeDef( 'CPPDataMember', diff --git a/pypy/module/cppyy/test/bench1.py b/pypy/module/cppyy/test/bench1.py --- a/pypy/module/cppyy/test/bench1.py +++ b/pypy/module/cppyy/test/bench1.py @@ -64,6 +64,14 @@ addDataToInt.call(instance, None, i) return i +class CppyyInterpBench2(CppyyInterpBench1): + def __call__(self): + addDataToInt = self.cls.get_overload("overloadedAddDataToInt") + instance = self.inst + for i in range(NNN): + addDataToInt.call(instance, None, i) + return i + class CppyyPythonBench1(object): scale = 1 def __init__(self): @@ -112,14 +120,16 @@ # warm-up print "warming up ... " interp_bench1 = CppyyInterpBench1() + interp_bench2 = CppyyInterpBench2() python_bench1 = CppyyPythonBench1() - interp_bench1(); python_bench1() + interp_bench1(); interp_bench2(); python_bench1() # to allow some consistency checking print "C++ reference uses %.3fs" % t_cppref # test runs ... print_bench("cppyy interp", run_bench(interp_bench1)) + print_bench("... 
overload", run_bench(interp_bench2)) print_bench("cppyy python", run_bench(python_bench1)) stat, t_cintex = commands.getstatusoutput("python bench1.py --pycintex") print_bench("pycintex ", float(t_cintex)) diff --git a/pypy/module/cppyy/test/example01.cxx b/pypy/module/cppyy/test/example01.cxx --- a/pypy/module/cppyy/test/example01.cxx +++ b/pypy/module/cppyy/test/example01.cxx @@ -91,6 +91,18 @@ return m_somedata + a; } +int example01::overloadedAddDataToInt(int a, int b) { + return m_somedata + a + b; +} + +int example01::overloadedAddDataToInt(int a) { + return m_somedata + a; +} + +int example01::overloadedAddDataToInt(int a, int b, int c) { + return m_somedata + a + b + c; +} + double example01::addDataToDouble(double a) { return m_somedata + a; } diff --git a/pypy/module/cppyy/test/example01.h b/pypy/module/cppyy/test/example01.h --- a/pypy/module/cppyy/test/example01.h +++ b/pypy/module/cppyy/test/example01.h @@ -39,6 +39,9 @@ public: // instance methods int addDataToInt(int a); + int overloadedAddDataToInt(int a, int b); + int overloadedAddDataToInt(int a); + int overloadedAddDataToInt(int a, int b, int c); double addDataToDouble(double a); int addDataToAtoi(const char* str); char* addToStringValue(const char* str); diff --git a/pypy/module/cppyy/test/test_cppyy.py b/pypy/module/cppyy/test/test_cppyy.py --- a/pypy/module/cppyy/test/test_cppyy.py +++ b/pypy/module/cppyy/test/test_cppyy.py @@ -59,7 +59,7 @@ raises(TypeError, 't.get_overload("staticAddOneToInt").call(None, None, 1, [])') raises(TypeError, 't.get_overload("staticAddOneToInt").call(None, None, 1.)') - raises(OverflowError, 't.get_overload("staticAddOneToInt").call(None, None, maxint32+1)') + raises(TypeError, 't.get_overload("staticAddOneToInt").call(None, None, maxint32+1)') def test02_static_double(self): """Test passing of a double and returning of a double on a static function.""" diff --git a/pypy/module/cppyy/test/test_datatypes.py b/pypy/module/cppyy/test/test_datatypes.py --- a/pypy/module/cppyy/test/test_datatypes.py +++ b/pypy/module/cppyy/test/test_datatypes.py @@ -253,8 +253,8 @@ assert c.s_uchar == 'c' c.s_uchar = 'd' assert cppyy_test_data.s_uchar == 'd' - raises(TypeError, setattr, cppyy_test_data, 's_uchar', -1) - raises(TypeError, setattr, c, 's_uchar', -1) + raises(ValueError, setattr, cppyy_test_data, 's_uchar', -1) + raises(ValueError, setattr, c, 's_uchar', -1) # integer types c.s_short = -102 diff --git a/pypy/module/cppyy/test/test_fragile.py b/pypy/module/cppyy/test/test_fragile.py --- a/pypy/module/cppyy/test/test_fragile.py +++ b/pypy/module/cppyy/test/test_fragile.py @@ -83,7 +83,7 @@ e = fragile.E() raises(TypeError, e.overload, None) - raises(NotImplementedError, getattr, e, 'm_pp_no_such') + raises(TypeError, getattr, e, 'm_pp_no_such') def test05_wrong_arg_addressof(self): """Test addressof() error reporting""" diff --git a/pypy/module/cppyy/test/test_pythonify.py b/pypy/module/cppyy/test/test_pythonify.py --- a/pypy/module/cppyy/test/test_pythonify.py +++ b/pypy/module/cppyy/test/test_pythonify.py @@ -60,7 +60,7 @@ raises(TypeError, 'example01_class.staticAddOneToInt(1, [])') raises(TypeError, 'example01_class.staticAddOneToInt(1.)') - raises(OverflowError, 'example01_class.staticAddOneToInt(maxint32+1)') + raises(TypeError, 'example01_class.staticAddOneToInt(maxint32+1)') res = example01_class.staticAddToDouble(0.09) assert res == 0.09 + 0.01 diff --git a/pypy/module/cppyy/test/test_zjit.py b/pypy/module/cppyy/test/test_zjit.py --- a/pypy/module/cppyy/test/test_zjit.py +++ 
b/pypy/module/cppyy/test/test_zjit.py @@ -151,3 +151,23 @@ space = FakeSpace() result = self.meta_interp(f, [], listops=True, backendopt=True, listcomp=True) self.check_jitcell_token_count(1) + + def test_overload(self): + space = FakeSpace() + drv = jit.JitDriver(greens=[], reds=["i", "inst", "addDataToInt"]) + def f(): + lib = interp_cppyy.load_dictionary(space, "./example01Dict.so") + cls = interp_cppyy.type_byname(space, "example01") + inst = cls.get_overload("example01").call(None, FakeReturnType(), [FakeInt(0)]) + addDataToInt = cls.get_overload("overloadedAddDataToInt") + assert isinstance(inst, interp_cppyy.W_CPPInstance) + i = 10 + while i > 0: + drv.jit_merge_point(inst=inst, addDataToInt=addDataToInt, i=i) + addDataToInt.call(inst, None, [FakeInt(i)]) + i -= 1 + return 7 + f() + space = FakeSpace() + result = self.meta_interp(f, [], listops=True, backendopt=True, listcomp=True) + self.check_jitcell_token_count(1) From noreply at buildbot.pypy.org Wed Feb 15 08:31:47 2012 From: noreply at buildbot.pypy.org (fijal) Date: Wed, 15 Feb 2012 08:31:47 +0100 (CET) Subject: [pypy-commit] pypy raw-memory-pressure-nursery: fix hashlib module Message-ID: <20120215073147.DF6CD8204C@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: raw-memory-pressure-nursery Changeset: r52496:27bcc2c71f0c Date: 2012-02-15 09:31 +0200 http://bitbucket.org/pypy/pypy/changeset/27bcc2c71f0c/ Log: fix hashlib module diff --git a/pypy/module/_hashlib/interp_hashlib.py b/pypy/module/_hashlib/interp_hashlib.py --- a/pypy/module/_hashlib/interp_hashlib.py +++ b/pypy/module/_hashlib/interp_hashlib.py @@ -23,10 +23,10 @@ ctx = lltype.nullptr(ropenssl.EVP_MD_CTX.TO) def __init__(self, space, name): + digest_type = self.digest_type_by_name(space) + self.digest_size = rffi.getintfield(digest_type, 'c_md_size') rgc.add_memory_pressure(self, HASH_MALLOC_SIZE + self.digest_size) self.name = name - digest_type = self.digest_type_by_name(space) - self.digest_size = rffi.getintfield(digest_type, 'c_md_size') # Allocate a lock for each HASH object. 
# An optimization would be to not release the GIL on small requests, From noreply at buildbot.pypy.org Wed Feb 15 08:33:50 2012 From: noreply at buildbot.pypy.org (fijal) Date: Wed, 15 Feb 2012 08:33:50 +0100 (CET) Subject: [pypy-commit] pypy raw-memory-pressure-nursery: fix another place Message-ID: <20120215073350.C7F898204C@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: raw-memory-pressure-nursery Changeset: r52497:de30be579a52 Date: 2012-02-15 09:33 +0200 http://bitbucket.org/pypy/pypy/changeset/de30be579a52/ Log: fix another place diff --git a/pypy/module/pyexpat/interp_pyexpat.py b/pypy/module/pyexpat/interp_pyexpat.py --- a/pypy/module/pyexpat/interp_pyexpat.py +++ b/pypy/module/pyexpat/interp_pyexpat.py @@ -407,7 +407,7 @@ class W_XMLParserType(Wrappable): def __init__(self, space, parser, w_intern): - rgc.add_memory_pressure(XML_Parser_SIZE + 300) + rgc.add_memory_pressure(self, XML_Parser_SIZE + 300) self.itself = parser self.w_intern = w_intern From noreply at buildbot.pypy.org Wed Feb 15 09:05:38 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Wed, 15 Feb 2012 09:05:38 +0100 (CET) Subject: [pypy-commit] pypy default: Fix reference count in PyString_InternInPlace() Message-ID: <20120215080538.098208204C@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: Changeset: r52498:bb2eb64cc7e6 Date: 2012-02-15 09:04 +0100 http://bitbucket.org/pypy/pypy/changeset/bb2eb64cc7e6/ Log: Fix reference count in PyString_InternInPlace() diff --git a/pypy/module/cpyext/stringobject.py b/pypy/module/cpyext/stringobject.py --- a/pypy/module/cpyext/stringobject.py +++ b/pypy/module/cpyext/stringobject.py @@ -267,6 +267,7 @@ alias.""" w_str = from_ref(space, string[0]) w_str = space.new_interned_w_str(w_str) + Py_DecRef(space, string[0]) string[0] = make_ref(space, w_str) @cpython_api([PyObject, rffi.CCHARP, rffi.CCHARP], PyObject) From noreply at buildbot.pypy.org Wed Feb 15 09:17:02 2012 From: noreply at buildbot.pypy.org (fijal) Date: Wed, 15 Feb 2012 09:17:02 +0100 (CET) Subject: [pypy-commit] pypy.org extradoc: add a linux64 sandboxed binary Message-ID: <20120215081702.268568204C@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: extradoc Changeset: r332:4a832215a7b5 Date: 2012-02-15 10:16 +0200 http://bitbucket.org/pypy/pypy.org/changeset/4a832215a7b5/ Log: add a linux64 sandboxed binary diff --git a/download.html b/download.html --- a/download.html +++ b/download.html @@ -92,7 +92,11 @@ (It is also possible to translate a version that includes both sandboxing and the JIT compiler, although as the JIT is relatively complicated, this reduces a bit the level of confidence we can put in -the result.) +the result.) Note that the sandboxed binary needs a full pypy checkout +to work. Consult the sandbox docs for details

    +

    These versions are not officially part of the release 1.8, which focuses on the JIT. You can find prebuilt binaries for them on our @@ -196,10 +200,12 @@ c4a1d11e0283a390d9e9b801a4633b9f pypy-1.8-linux.tar.bz2 1c293253e8e4df411c3dd59dff82a663 pypy-1.8-osx64.tar.bz2 1af8ee722721e9f5fd06b61af530ecb3 pypy-1.8-win32.zip +2c9f0054f3b93a6473f10be35277825a pypy-1.8-sandbox-linux64.tar.bz2 a6bb7b277d5186385fd09b71ec4e35c9e93b380d pypy-1.8-linux64.tar.bz2 089f4269a6079da2eabdeabd614f668f56c4121a pypy-1.8-linux.tar.bz2 15b99f780b9714e3ebd82b2e41577afab232d148 pypy-1.8-osx64.tar.bz2 77a565b1cfa4874a0079c17edd1b458b20e67bfd pypy-1.8-win32.zip +895aaf7bba5787dd30adda5cc0e0e7fc297c0ca7 pypy-1.8-sandbox-linux64.tar.bz2 diff --git a/source/download.txt b/source/download.txt --- a/source/download.txt +++ b/source/download.txt @@ -73,7 +73,13 @@ (It is also possible to translate_ a version that includes both sandboxing and the JIT compiler, although as the JIT is relatively complicated, this reduces a bit the level of confidence we can put in - the result.) + the result.) **Note that the sandboxed binary needs a full pypy checkout + to work**. Consult the `sandbox docs`_ for details + + * `Linux binary (64bit)`__ + +.. __: https://bitbucket.org/pypy/pypy/downloads/pypy-1.8-sandbox-linux64.tar.bz2 +.. _`sandbox docs`: http://doc.pypy.org/en/latest/sandbox.html These versions are not officially part of the release 1.8, which focuses on the JIT. You can find prebuilt binaries for them on our @@ -201,8 +207,10 @@ c4a1d11e0283a390d9e9b801a4633b9f pypy-1.8-linux.tar.bz2 1c293253e8e4df411c3dd59dff82a663 pypy-1.8-osx64.tar.bz2 1af8ee722721e9f5fd06b61af530ecb3 pypy-1.8-win32.zip + 2c9f0054f3b93a6473f10be35277825a pypy-1.8-sandbox-linux64.tar.bz2 a6bb7b277d5186385fd09b71ec4e35c9e93b380d pypy-1.8-linux64.tar.bz2 089f4269a6079da2eabdeabd614f668f56c4121a pypy-1.8-linux.tar.bz2 15b99f780b9714e3ebd82b2e41577afab232d148 pypy-1.8-osx64.tar.bz2 77a565b1cfa4874a0079c17edd1b458b20e67bfd pypy-1.8-win32.zip + 895aaf7bba5787dd30adda5cc0e0e7fc297c0ca7 pypy-1.8-sandbox-linux64.tar.bz2 From noreply at buildbot.pypy.org Wed Feb 15 09:36:21 2012 From: noreply at buildbot.pypy.org (arigo) Date: Wed, 15 Feb 2012 09:36:21 +0100 (CET) Subject: [pypy-commit] pypy default: By default, --sandbox should not use asmgcc. Message-ID: <20120215083621.A09DF8204C@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r52499:60e366cf4f99 Date: 2012-02-15 09:36 +0100 http://bitbucket.org/pypy/pypy/changeset/60e366cf4f99/ Log: By default, --sandbox should not use asmgcc. 
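As a rough illustration of why the patch below adds a new 'suggests' entry instead of a 'requires' entry: this is a hypothetical helper sketching the intended behaviour, not the real pypy.config machinery.

    # Sketch only: 'requires' pins other options hard, while 'suggests' just
    # fills in a default that an explicit choice (for example asking for
    # asmgcc on the translate.py command line) should still override.
    def set_bool_option(config, user_set, requires=(), suggests=()):
        for opt, value in requires:
            config[opt] = value          # forced; a conflict would be an error
        for opt, value in suggests:
            if opt not in user_set:
                config[opt] = value      # only a default, still overridable

    config, user_set = {}, set()         # nothing chosen explicitly by the user
    set_bool_option(config, user_set,
                    requires=[("translation.thread", False)],
                    suggests=[("translation.gc", "generation"),
                              ("translation.gcrootfinder", "shadowstack")])
    assert config["translation.gcrootfinder"] == "shadowstack"

With the change below, a plain --sandbox translation should pick shadowstack for finding GC roots, while anyone who really wants asmgcc can still request it explicitly.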
diff --git a/pypy/config/translationoption.py b/pypy/config/translationoption.py --- a/pypy/config/translationoption.py +++ b/pypy/config/translationoption.py @@ -105,7 +105,8 @@ BoolOption("sandbox", "Produce a fully-sandboxed executable", default=False, cmdline="--sandbox", requires=[("translation.thread", False)], - suggests=[("translation.gc", "generation")]), + suggests=[("translation.gc", "generation"), + ("translation.gcrootfinder", "shadowstack")]), BoolOption("rweakref", "The backend supports RPython-level weakrefs", default=True), From noreply at buildbot.pypy.org Wed Feb 15 09:48:50 2012 From: noreply at buildbot.pypy.org (fijal) Date: Wed, 15 Feb 2012 09:48:50 +0100 (CET) Subject: [pypy-commit] pypy.org extradoc: more downloads Message-ID: <20120215084850.AE3378204C@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: extradoc Changeset: r333:af9e055a1473 Date: 2012-02-15 10:48 +0200 http://bitbucket.org/pypy/pypy.org/changeset/af9e055a1473/ Log: more downloads diff --git a/download.html b/download.html --- a/download.html +++ b/download.html @@ -95,6 +95,7 @@ the result.) Note that the sandboxed binary needs a full pypy checkout to work. Consult the sandbox docs for details

    @@ -201,11 +202,13 @@ 1c293253e8e4df411c3dd59dff82a663 pypy-1.8-osx64.tar.bz2 1af8ee722721e9f5fd06b61af530ecb3 pypy-1.8-win32.zip 2c9f0054f3b93a6473f10be35277825a pypy-1.8-sandbox-linux64.tar.bz2 +009c970b5fa75754ae4c32a5d108a8d4 pypy-1.8-sandbox-linux.tar.bz2 a6bb7b277d5186385fd09b71ec4e35c9e93b380d pypy-1.8-linux64.tar.bz2 089f4269a6079da2eabdeabd614f668f56c4121a pypy-1.8-linux.tar.bz2 15b99f780b9714e3ebd82b2e41577afab232d148 pypy-1.8-osx64.tar.bz2 77a565b1cfa4874a0079c17edd1b458b20e67bfd pypy-1.8-win32.zip 895aaf7bba5787dd30adda5cc0e0e7fc297c0ca7 pypy-1.8-sandbox-linux64.tar.bz2 +be94460bed8b2682880495435c309b6611ae2c31 pypy-1.8-sandbox-linux.tar.bz2 diff --git a/source/download.txt b/source/download.txt --- a/source/download.txt +++ b/source/download.txt @@ -78,7 +78,10 @@ * `Linux binary (64bit)`__ + * `Linux binary (32bit)`__ + .. __: https://bitbucket.org/pypy/pypy/downloads/pypy-1.8-sandbox-linux64.tar.bz2 +.. __: https://bitbucket.org/pypy/pypy/downloads/pypy-1.8-sandbox-linux.tar.bz2 .. _`sandbox docs`: http://doc.pypy.org/en/latest/sandbox.html These versions are not officially part of the release 1.8, which focuses @@ -208,9 +211,11 @@ 1c293253e8e4df411c3dd59dff82a663 pypy-1.8-osx64.tar.bz2 1af8ee722721e9f5fd06b61af530ecb3 pypy-1.8-win32.zip 2c9f0054f3b93a6473f10be35277825a pypy-1.8-sandbox-linux64.tar.bz2 + 009c970b5fa75754ae4c32a5d108a8d4 pypy-1.8-sandbox-linux.tar.bz2 a6bb7b277d5186385fd09b71ec4e35c9e93b380d pypy-1.8-linux64.tar.bz2 089f4269a6079da2eabdeabd614f668f56c4121a pypy-1.8-linux.tar.bz2 15b99f780b9714e3ebd82b2e41577afab232d148 pypy-1.8-osx64.tar.bz2 77a565b1cfa4874a0079c17edd1b458b20e67bfd pypy-1.8-win32.zip 895aaf7bba5787dd30adda5cc0e0e7fc297c0ca7 pypy-1.8-sandbox-linux64.tar.bz2 + be94460bed8b2682880495435c309b6611ae2c31 pypy-1.8-sandbox-linux.tar.bz2 From noreply at buildbot.pypy.org Wed Feb 15 09:49:16 2012 From: noreply at buildbot.pypy.org (arigo) Date: Wed, 15 Feb 2012 09:49:16 +0100 (CET) Subject: [pypy-commit] pypy default: Add some more sign-extending instructions. Two of them were Message-ID: <20120215084916.4B06F8204C@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r52500:02dc2f6160ee Date: 2012-02-15 09:48 +0100 http://bitbucket.org/pypy/pypy/changeset/02dc2f6160ee/ Log: Add some more sign-extending instructions. Two of them were already elsewhere in the list. 
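The new entries extend trackgcroot.py's table of instruction prefixes that are known never to leave a GC pointer in their result, so the root tracker can simply skip them. A simplified, self-contained sketch of what membership in that table means (this is not the real lookup code in trackgcroot.py):

    # Simplified sketch, not the real trackgcroot lookup logic.
    IGNORE_OPS_WITH_PREFIXES = dict.fromkeys([
        'movz',                                          # zero-extending moves
        'cbtw', 'cwtl', 'cwtd', 'cltd', 'cltq', 'cqto',  # sign-extending moves
    ])

    def never_produces_gc_pointer(insnname):
        # match any prefix of the instruction name against the table:
        # 'movzbl' matches 'movz', 'cltq' matches itself, 'movq' matches nothing
        return any(insnname[:i] in IGNORE_OPS_WITH_PREFIXES
                   for i in range(1, len(insnname) + 1))

    assert never_produces_gc_pointer('cltq')      # sign-extends %eax into %rax
    assert never_produces_gc_pointer('movzbl')    # zero-extends a byte
    assert not never_produces_gc_pointer('movq')  # a plain move may copy a GC pointer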
diff --git a/pypy/translator/c/gcc/trackgcroot.py b/pypy/translator/c/gcc/trackgcroot.py --- a/pypy/translator/c/gcc/trackgcroot.py +++ b/pypy/translator/c/gcc/trackgcroot.py @@ -471,8 +471,8 @@ return [] IGNORE_OPS_WITH_PREFIXES = dict.fromkeys([ - 'cmp', 'test', 'set', 'sahf', 'lahf', 'cltd', 'cld', 'std', - 'rep', 'movs', 'lods', 'stos', 'scas', 'cwtl', 'cwde', 'prefetch', + 'cmp', 'test', 'set', 'sahf', 'lahf', 'cld', 'std', + 'rep', 'movs', 'lods', 'stos', 'scas', 'cwde', 'prefetch', # floating-point operations cannot produce GC pointers 'f', 'cvt', 'ucomi', 'comi', 'subs', 'subp' , 'adds', 'addp', 'xorp', @@ -485,6 +485,8 @@ 'bswap', 'bt', 'rdtsc', 'punpck', 'pshufd', 'pcmp', 'pand', 'psllw', 'pslld', 'psllq', 'paddq', 'pinsr', + # sign-extending moves should not produce GC pointers + 'cbtw', 'cwtl', 'cwtd', 'cltd', 'cltq', 'cqto', # zero-extending moves should not produce GC pointers 'movz', # locked operations should not move GC pointers, at least so far From noreply at buildbot.pypy.org Wed Feb 15 11:14:49 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Wed, 15 Feb 2012 11:14:49 +0100 (CET) Subject: [pypy-commit] pypy py3k: implement support for __traceback__ when raising exceptions. test_raise fully passes now :-) Message-ID: <20120215101449.4628F8204C@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52501:145994f7f8f9 Date: 2012-02-15 11:14 +0100 http://bitbucket.org/pypy/pypy/changeset/145994f7f8f9/ Log: implement support for __traceback__ when raising exceptions. test_raise fully passes now :-) diff --git a/pypy/interpreter/pyopcode.py b/pypy/interpreter/pyopcode.py --- a/pypy/interpreter/pyopcode.py +++ b/pypy/interpreter/pyopcode.py @@ -495,8 +495,17 @@ w_type = space.type(w_value) operror = OperationError(w_type, w_value, w_cause=w_cause) operror.normalize_exception(space) + # XXX: we actually know that w_value is an instance of + # W_BaseException, so we could directly use w_value.w_traceback, + # however that class belongs to the std objspace, so for now we just + # go through all the getattr machinery (which is removed by the JIT + # anyway) + tb = space.getattr(w_value, space.wrap('__traceback__')) + if tb is not None: + operror.set_traceback(tb) raise operror + def LOAD_LOCALS(self, oparg, next_instr): self.pushvalue(self.w_locals) From noreply at buildbot.pypy.org Wed Feb 15 12:34:18 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Wed, 15 Feb 2012 12:34:18 +0100 (CET) Subject: [pypy-commit] pypy py3k: kill XXX after discussion with armin on IRC, we don't care :-) Message-ID: <20120215113418.8FA158204C@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52502:1d322e23e812 Date: 2012-02-15 11:18 +0100 http://bitbucket.org/pypy/pypy/changeset/1d322e23e812/ Log: kill XXX after discussion with armin on IRC, we don't care :-) diff --git a/pypy/interpreter/pyopcode.py b/pypy/interpreter/pyopcode.py --- a/pypy/interpreter/pyopcode.py +++ b/pypy/interpreter/pyopcode.py @@ -495,11 +495,6 @@ w_type = space.type(w_value) operror = OperationError(w_type, w_value, w_cause=w_cause) operror.normalize_exception(space) - # XXX: we actually know that w_value is an instance of - # W_BaseException, so we could directly use w_value.w_traceback, - # however that class belongs to the std objspace, so for now we just - # go through all the getattr machinery (which is removed by the JIT - # anyway) tb = space.getattr(w_value, space.wrap('__traceback__')) if tb is not None: operror.set_traceback(tb) From noreply at 
buildbot.pypy.org Wed Feb 15 12:34:19 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Wed, 15 Feb 2012 12:34:19 +0100 (CET) Subject: [pypy-commit] pypy py3k: use the official way to check whether an object is a valid traceback, and add a test Message-ID: <20120215113419.C68A28204C@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52503:5cee50641566 Date: 2012-02-15 11:24 +0100 http://bitbucket.org/pypy/pypy/changeset/5cee50641566/ Log: use the official way to check whether an object is a valid traceback, and add a test diff --git a/pypy/module/exceptions/interp_exceptions.py b/pypy/module/exceptions/interp_exceptions.py --- a/pypy/module/exceptions/interp_exceptions.py +++ b/pypy/module/exceptions/interp_exceptions.py @@ -78,7 +78,7 @@ descr_set_dict, descr_del_dict) from pypy.interpreter.gateway import interp2app from pypy.interpreter.error import OperationError -from pypy.interpreter.pytraceback import PyTraceback +from pypy.interpreter.pytraceback import check_traceback from pypy.rlib import rwin32 def readwrite_attrproperty_w(name, cls): @@ -175,9 +175,8 @@ return self.w_traceback def descr_settraceback(self, space, w_newtraceback): - # Check argument - space.interp_w(PyTraceback, w_newtraceback, can_be_None=True) - self.w_traceback = w_newtraceback + msg = '__traceback__ must be a traceback or None' + self.w_traceback = check_traceback(space, w_newtraceback, msg) def descr_getitem(self, space, w_index): return space.getitem(space.newtuple(self.args_w), w_index) diff --git a/pypy/module/exceptions/test/test_exc.py b/pypy/module/exceptions/test/test_exc.py --- a/pypy/module/exceptions/test/test_exc.py +++ b/pypy/module/exceptions/test/test_exc.py @@ -271,3 +271,7 @@ tb = sys.exc_info()[2] assert e.with_traceback(tb) is e assert e.__traceback__ is tb + + def test_set_traceback(self): + e = Exception() + raises(TypeError, "e.__traceback__ = 42") From noreply at buildbot.pypy.org Wed Feb 15 12:34:21 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Wed, 15 Feb 2012 12:34:21 +0100 (CET) Subject: [pypy-commit] pypy py3k: bah, we need to support both py3k and py2 bytecodes, for the flow objspace. Add a flag to the space, and check whether or not we need to support POP_EXCEPT Message-ID: <20120215113421.10A498204C@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52504:c0a619c7c1db Date: 2012-02-15 12:31 +0100 http://bitbucket.org/pypy/pypy/changeset/c0a619c7c1db/ Log: bah, we need to support both py3k and py2 bytecodes, for the flow objspace. Add a flag to the space, and check whether or not we need to support POP_EXCEPT diff --git a/pypy/interpreter/baseobjspace.py b/pypy/interpreter/baseobjspace.py --- a/pypy/interpreter/baseobjspace.py +++ b/pypy/interpreter/baseobjspace.py @@ -272,6 +272,7 @@ http://pypy.readthedocs.org/en/latest/objspace.html""" full_exceptions = True # full support for exceptions (normalization & more) + py3k = True # are we interpreting py3k bytecode? def __init__(self, config=None): "NOT_RPYTHON: Basic initialization of objects." diff --git a/pypy/interpreter/pyopcode.py b/pypy/interpreter/pyopcode.py --- a/pypy/interpreter/pyopcode.py +++ b/pypy/interpreter/pyopcode.py @@ -529,6 +529,7 @@ self.setdictscope(w_locals) def POP_EXCEPT(self, oparg, next_instr): + assert self.space.py3k # on CPython, POP_EXCEPT also pops the block. 
Here, the block is # automatically popped by unrollstack() self.last_exception = self.popvalue() @@ -1275,7 +1276,9 @@ # the stack setup is slightly different than in CPython: # instead of the traceback, we store the unroller object, # wrapped. - frame.pushvalue(frame.last_exception) # this is popped by POP_EXCEPT + if frame.space.py3k: + # this is popped by POP_EXCEPT, which is present only in py3k + frame.pushvalue(frame.last_exception) frame.pushvalue(frame.space.wrap(unroller)) frame.pushvalue(operationerr.get_w_value(frame.space)) frame.pushvalue(operationerr.w_type) diff --git a/pypy/objspace/flow/objspace.py b/pypy/objspace/flow/objspace.py --- a/pypy/objspace/flow/objspace.py +++ b/pypy/objspace/flow/objspace.py @@ -47,6 +47,7 @@ """ full_exceptions = False + py3k = False # the RPython bytecode is still python2 do_imports_immediately = True FrameClass = flowcontext.FlowSpaceFrame From noreply at buildbot.pypy.org Wed Feb 15 15:01:31 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Wed, 15 Feb 2012 15:01:31 +0100 (CET) Subject: [pypy-commit] pypy py3k: bump the pyc magic number; this should have been checked in with 3648ec4ef989 Message-ID: <20120215140131.654838204C@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52505:b9d0fc485961 Date: 2012-02-15 15:01 +0100 http://bitbucket.org/pypy/pypy/changeset/b9d0fc485961/ Log: bump the pyc magic number; this should have been checked in with 3648ec4ef989 diff --git a/pypy/interpreter/pycode.py b/pypy/interpreter/pycode.py --- a/pypy/interpreter/pycode.py +++ b/pypy/interpreter/pycode.py @@ -32,7 +32,7 @@ # different value for the highest 16 bits. Bump pypy_incremental_magic every # time you make pyc files incompatible -pypy_incremental_magic = 0 # bump it by 16 +pypy_incremental_magic = 16 # bump it by 16 assert pypy_incremental_magic % 16 == 0 assert pypy_incremental_magic < 3000 # the magic number of Python 3. 
There are # no known magic numbers below this value From noreply at buildbot.pypy.org Wed Feb 15 15:23:31 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Wed, 15 Feb 2012 15:23:31 +0100 (CET) Subject: [pypy-commit] pypy py3k: bah, confusion between applevel and interplevel None Message-ID: <20120215142331.5C4BF8204C@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52506:f3f032ce4cb2 Date: 2012-02-15 15:23 +0100 http://bitbucket.org/pypy/pypy/changeset/f3f032ce4cb2/ Log: bah, confusion between applevel and interplevel None diff --git a/pypy/interpreter/pyopcode.py b/pypy/interpreter/pyopcode.py --- a/pypy/interpreter/pyopcode.py +++ b/pypy/interpreter/pyopcode.py @@ -496,7 +496,7 @@ operror = OperationError(w_type, w_value, w_cause=w_cause) operror.normalize_exception(space) tb = space.getattr(w_value, space.wrap('__traceback__')) - if tb is not None: + if not space.is_w(tb, space.w_None): operror.set_traceback(tb) raise operror From noreply at buildbot.pypy.org Wed Feb 15 16:00:27 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Wed, 15 Feb 2012 16:00:27 +0100 (CET) Subject: [pypy-commit] pypy py3k: w_None is a valid value to assign to __traceback__ Message-ID: <20120215150027.A47C482B1F@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52508:c1756f5aa63e Date: 2012-02-15 16:00 +0100 http://bitbucket.org/pypy/pypy/changeset/c1756f5aa63e/ Log: w_None is a valid value to assign to __traceback__ diff --git a/pypy/module/exceptions/interp_exceptions.py b/pypy/module/exceptions/interp_exceptions.py --- a/pypy/module/exceptions/interp_exceptions.py +++ b/pypy/module/exceptions/interp_exceptions.py @@ -176,7 +176,9 @@ def descr_settraceback(self, space, w_newtraceback): msg = '__traceback__ must be a traceback or None' - self.w_traceback = check_traceback(space, w_newtraceback, msg) + if not space.is_w(w_newtraceback, space.w_None): + w_newtraceback = check_traceback(space, w_newtraceback, msg) + self.w_traceback = w_newtraceback def descr_getitem(self, space, w_index): return space.getitem(space.newtuple(self.args_w), w_index) From noreply at buildbot.pypy.org Wed Feb 15 16:00:26 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Wed, 15 Feb 2012 16:00:26 +0100 (CET) Subject: [pypy-commit] pypy py3k: kill sys.exc_clear(). Also kill OperationError.clear, which seems to be no longer used anywhere else now. I hope not to be wrong :-) Message-ID: <20120215150026.4A3FF8204C@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52507:35c013f9b1a5 Date: 2012-02-15 15:44 +0100 http://bitbucket.org/pypy/pypy/changeset/35c013f9b1a5/ Log: kill sys.exc_clear(). Also kill OperationError.clear, which seems to be no longer used anywhere else now. I hope not to be wrong :-) diff --git a/pypy/interpreter/error.py b/pypy/interpreter/error.py --- a/pypy/interpreter/error.py +++ b/pypy/interpreter/error.py @@ -35,14 +35,6 @@ if not we_are_translated(): self.debug_excs = [] - def clear(self, space): - # for sys.exc_clear() - self.w_type = space.w_None - self._w_value = space.w_None - self._application_traceback = None - if not we_are_translated(): - del self.debug_excs[:] - def match(self, space, w_check_class): "Check if this application-level exception matches 'w_check_class'." 
return space.exception_match(self.w_type, w_check_class) diff --git a/pypy/module/sys/__init__.py b/pypy/module/sys/__init__.py --- a/pypy/module/sys/__init__.py +++ b/pypy/module/sys/__init__.py @@ -47,7 +47,6 @@ 'setcheckinterval' : 'vm.setcheckinterval', 'getcheckinterval' : 'vm.getcheckinterval', 'exc_info' : 'vm.exc_info', - 'exc_clear' : 'vm.exc_clear', 'settrace' : 'vm.settrace', 'gettrace' : 'vm.gettrace', 'setprofile' : 'vm.setprofile', diff --git a/pypy/module/sys/test/test_sysmodule.py b/pypy/module/sys/test/test_sysmodule.py --- a/pypy/module/sys/test/test_sysmodule.py +++ b/pypy/module/sys/test/test_sysmodule.py @@ -223,51 +223,6 @@ # FIXME: testing the code for a lost or replaced excepthook in # Python/pythonrun.c::PyErr_PrintEx() is tricky. - def test_exc_clear(self): - import sys - raises(TypeError, sys.exc_clear, 42) - - # Verify that exc_info is present and matches exc, then clear it, and - # check that it worked. - def clear_check(exc): - typ, value, traceback = sys.exc_info() - assert typ is not None - assert value is exc - assert traceback is not None - - sys.exc_clear() - - typ, value, traceback = sys.exc_info() - assert typ is None - assert value is None - assert traceback is None - - def clear(): - try: - raise ValueError(42) - except ValueError as exc: - clear_check(exc) - - # Raise an exception and check that it can be cleared - clear() - - # Verify that a frame currently handling an exception is - # unaffected by calling exc_clear in a nested frame. - try: - raise ValueError(13) - except ValueError as exc: - typ1, value1, traceback1 = sys.exc_info() - clear() - typ2, value2, traceback2 = sys.exc_info() - - assert typ1 is typ2 - assert value1 is exc - assert value1 is value2 - assert traceback1 is traceback2 - - # Check that an exception can be cleared outside of an except block - clear_check(exc) - def test_exit(self): import sys raises(TypeError, sys.exit, 42, 42) diff --git a/pypy/module/sys/vm.py b/pypy/module/sys/vm.py --- a/pypy/module/sys/vm.py +++ b/pypy/module/sys/vm.py @@ -96,15 +96,6 @@ return space.newtuple([operror.w_type, operror.get_w_value(space), space.wrap(operror.get_traceback())]) -def exc_clear(space): - """Clear global information on the current exception. Subsequent calls -to exc_info() will return (None,None,None) until another exception is -raised and caught in the current thread or the execution stack returns to a -frame where another exception is being handled.""" - operror = space.getexecutioncontext().sys_exc_info() - if operror is not None: - operror.clear(space) - def settrace(space, w_func): """Set the global debug tracing function. It will be called on each function call. See the debugger chapter in the library manual.""" From noreply at buildbot.pypy.org Wed Feb 15 16:46:49 2012 From: noreply at buildbot.pypy.org (hager) Date: Wed, 15 Feb 2012 16:46:49 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: use shadowstack and comment out compiler flags Message-ID: <20120215154649.D4A5E8204C@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r52509:c5a74a6eb1ba Date: 2012-02-15 07:45 -0800 http://bitbucket.org/pypy/pypy/changeset/c5a74a6eb1ba/ Log: use shadowstack and comment out compiler flags diff --git a/pypy/jit/backend/ppc/test/test_ztranslation.py b/pypy/jit/backend/ppc/test/test_ztranslation.py --- a/pypy/jit/backend/ppc/test/test_ztranslation.py +++ b/pypy/jit/backend/ppc/test/test_ztranslation.py @@ -18,8 +18,9 @@ def _check_cbuilder(self, cbuilder): # We assume here that we have sse2. 
If not, the CPUClass # needs to be changed to CPU386_NO_SSE2, but well. - assert '-msse2' in cbuilder.eci.compile_extra - assert '-mfpmath=sse' in cbuilder.eci.compile_extra + #assert '-msse2' in cbuilder.eci.compile_extra + #assert '-mfpmath=sse' in cbuilder.eci.compile_extra + pass def test_stuff_translates(self): # this is a basic test that tries to hit a number of features and their @@ -176,7 +177,7 @@ def _get_TranslationContext(self): t = TranslationContext() t.config.translation.gc = DEFL_GC # 'hybrid' or 'minimark' - t.config.translation.gcrootfinder = 'asmgcc' + t.config.translation.gcrootfinder = 'shadowstack' t.config.translation.list_comprehension_operations = True t.config.translation.gcremovetypeptr = True return t From noreply at buildbot.pypy.org Wed Feb 15 16:47:44 2012 From: noreply at buildbot.pypy.org (hager) Date: Wed, 15 Feb 2012 16:47:44 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: add debug information Message-ID: <20120215154744.92A4A8204C@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r52510:a60ef6f200f6 Date: 2012-02-15 07:46 -0800 http://bitbucket.org/pypy/pypy/changeset/a60ef6f200f6/ Log: add debug information diff --git a/pypy/jit/backend/ppc/codebuilder.py b/pypy/jit/backend/ppc/codebuilder.py --- a/pypy/jit/backend/ppc/codebuilder.py +++ b/pypy/jit/backend/ppc/codebuilder.py @@ -962,6 +962,11 @@ PPCAssembler.__init__(self) self.init_block_builder() self.r0_in_use = r0_in_use + self.ops_offset = {} + + def mark_op(self, op): + pos = self.get_relative_pos() + self.ops_offset[op] = pos def check(self, desc, v, *args): desc.__get__(self)(*args) diff --git a/pypy/jit/backend/ppc/ppc_assembler.py b/pypy/jit/backend/ppc/ppc_assembler.py --- a/pypy/jit/backend/ppc/ppc_assembler.py +++ b/pypy/jit/backend/ppc/ppc_assembler.py @@ -44,6 +44,7 @@ from pypy.rlib.objectmodel import we_are_translated from pypy.rpython.lltypesystem.lloperation import llop from pypy.jit.backend.ppc.locations import StackLocation, get_spp_offset +from pypy.rlib.jit import AsmInfo memcpy_fn = rffi.llexternal('memcpy', [llmemory.Address, llmemory.Address, rffi.SIZE_T], lltype.Void, @@ -490,6 +491,7 @@ looptoken._ppc_loop_code = start_pos clt.frame_depth = clt.param_depth = -1 spilling_area, param_depth = self._assemble(operations, regalloc) + size_excluding_failure_stuff = self.mc.get_relative_pos() clt.frame_depth = spilling_area clt.param_depth = param_depth @@ -517,8 +519,12 @@ print 'Loop', inputargs, operations self.mc._dump_trace(loop_start, 'loop_%s.asm' % self.cpu.total_compiled_loops) print 'Done assembling loop with token %r' % looptoken + ops_offset = self.mc.ops_offset self._teardown() + # XXX 3rd arg may not be correct yet + return AsmInfo(ops_offset, real_start, size_excluding_failure_stuff) + def _assemble(self, operations, regalloc): regalloc.compute_hint_frame_locations(operations) self._walk_operations(operations, regalloc) @@ -547,7 +553,9 @@ sp_patch_location = self._prepare_sp_patch_position() + startpos = self.mc.get_relative_pos() spilling_area, param_depth = self._assemble(operations, regalloc) + codeendpos = self.mc.get_relative_pos() self.write_pending_failure_recoveries() @@ -569,8 +577,12 @@ print 'Loop', inputargs, operations self.mc._dump_trace(rawstart, 'bridge_%s.asm' % self.cpu.total_compiled_loops) print 'Done assembling bridge with token %r' % looptoken + + ops_offset = self.mc.ops_offset self._teardown() + return AsmInfo(ops_offset, startpos + rawstart, codeendpos - startpos) + def _patch_sp_offset(self, 
sp_patch_location, rawstart): mc = PPCBuilder() frame_depth = self.compute_frame_depth(self.current_clt.frame_depth, From noreply at buildbot.pypy.org Wed Feb 15 17:03:44 2012 From: noreply at buildbot.pypy.org (arigo) Date: Wed, 15 Feb 2012 17:03:44 +0100 (CET) Subject: [pypy-commit] pypy default: Add a failing test. Message-ID: <20120215160344.557FD8204C@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r52511:51c1f7757f0d Date: 2012-02-15 17:03 +0100 http://bitbucket.org/pypy/pypy/changeset/51c1f7757f0d/ Log: Add a failing test. diff --git a/pypy/jit/backend/test/runner_test.py b/pypy/jit/backend/test/runner_test.py --- a/pypy/jit/backend/test/runner_test.py +++ b/pypy/jit/backend/test/runner_test.py @@ -2221,6 +2221,36 @@ print 'step 4 ok' print '-'*79 + def test_guard_not_invalidated_and_label(self): + py.test.skip("fails on x86!") + # test that the guard_not_invalidated reserves enough room before + # the label. If it doesn't, then in this example after we invalidate + # the guard, jumping to the label will hit the invalidation code too + cpu = self.cpu + i0 = BoxInt() + faildescr = BasicFailDescr(1) + labeldescr = TargetToken() + ops = [ + ResOperation(rop.GUARD_NOT_INVALIDATED, [], None, descr=faildescr), + ResOperation(rop.LABEL, [i0], None, descr=labeldescr), + ResOperation(rop.FINISH, [i0], None, descr=BasicFailDescr(3)), + ] + ops[0].setfailargs([]) + looptoken = JitCellToken() + self.cpu.compile_loop([i0], ops, looptoken) + # mark as failing + self.cpu.invalidate_loop(looptoken) + # attach a bridge + i2 = BoxInt() + ops = [ + ResOperation(rop.JUMP, [ConstInt(333)], None, descr=labeldescr), + ] + self.cpu.compile_bridge(faildescr, [], ops, looptoken) + # run: must not be caught in an infinite loop + fail = self.cpu.execute_token(looptoken, 16) + assert fail.identifier == 3 + assert self.cpu.get_latest_value_int(0) == 333 + # pure do_ / descr features def test_do_operations(self): From noreply at buildbot.pypy.org Wed Feb 15 17:10:15 2012 From: noreply at buildbot.pypy.org (fijal) Date: Wed, 15 Feb 2012 17:10:15 +0100 (CET) Subject: [pypy-commit] pypy raw-memory-pressure-nursery: jit should not see add_memory_pressure Message-ID: <20120215161015.1339D8204C@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: raw-memory-pressure-nursery Changeset: r52512:3a2a46ebcc58 Date: 2012-02-15 18:09 +0200 http://bitbucket.org/pypy/pypy/changeset/3a2a46ebcc58/ Log: jit should not see add_memory_pressure diff --git a/pypy/rlib/rgc.py b/pypy/rlib/rgc.py --- a/pypy/rlib/rgc.py +++ b/pypy/rlib/rgc.py @@ -261,17 +261,23 @@ except Exception: return False # don't keep objects whose _freeze_() method explodes + at jit.dont_look_inside + at specialize.argtype(0) def add_memory_pressure(owner, estimate): """Add memory pressure for OpaquePtrs. 
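The test added below checks that the backend leaves enough room between a GUARD_NOT_INVALIDATED and a following LABEL: invalidating the guard is done by patching a short jump over the guard's position, and if the label starts inside the patched bytes, a bridge that later jumps to the label executes the patch instead of the loop body. A toy byte-level model of that hazard (invented for illustration, not backend code; the 5-byte figure matches the comment in the follow-up x86 fix, changeset 6529910d32cc, later in this archive):

    # Toy model only.  On x86 the invalidation patch is a 5-byte JMP rel32
    # written at the position recorded for the guard.
    code = bytearray(b'\x90' * 16)   # NOPs standing in for emitted machine code
    guard_pos = 0                    # guard_not_invalidated emits no code of its own
    label_pos = 0                    # without padding the label starts right there

    def invalidate(code, pos):
        code[pos:pos + 5] = b'\xe9' + b'XXXX'   # JMP rel32 to the recovery stub

    invalidate(code, guard_pos)
    # A jump to label_pos now lands inside the patched JMP rather than in the
    # loop body; the fix pads the code so that label_pos >= guard_pos + 5.
    assert label_pos < guard_pos + 5            # exactly the situation the test hits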
Owner is either None or typically the object which owns the reference (the one that would free it on __del__) """ + _add_memory_pressure(owner, estimate) + +def _add_memory_pressure(owner, estimate): pass class AddMemoryPressureEntry(ExtRegistryEntry): - _about_ = add_memory_pressure + _about_ = _add_memory_pressure def compute_result_annotation(self, s_owner, s_nbytes): from pypy.annotation import model as annmodel + assert isinstance(s_nbytes, annmodel.SomeInteger) return annmodel.s_None def specialize_call(self, hop): From noreply at buildbot.pypy.org Wed Feb 15 17:14:31 2012 From: noreply at buildbot.pypy.org (arigo) Date: Wed, 15 Feb 2012 17:14:31 +0100 (CET) Subject: [pypy-commit] pypy default: Fix test_guard_not_invalidated_and_label on x86. Message-ID: <20120215161431.205C58204C@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r52513:6529910d32cc Date: 2012-02-15 17:14 +0100 http://bitbucket.org/pypy/pypy/changeset/6529910d32cc/ Log: Fix test_guard_not_invalidated_and_label on x86. diff --git a/pypy/jit/backend/test/runner_test.py b/pypy/jit/backend/test/runner_test.py --- a/pypy/jit/backend/test/runner_test.py +++ b/pypy/jit/backend/test/runner_test.py @@ -2222,7 +2222,6 @@ print '-'*79 def test_guard_not_invalidated_and_label(self): - py.test.skip("fails on x86!") # test that the guard_not_invalidated reserves enough room before # the label. If it doesn't, then in this example after we invalidate # the guard, jumping to the label will hit the invalidation code too diff --git a/pypy/jit/backend/x86/regalloc.py b/pypy/jit/backend/x86/regalloc.py --- a/pypy/jit/backend/x86/regalloc.py +++ b/pypy/jit/backend/x86/regalloc.py @@ -165,7 +165,6 @@ self.jump_target_descr = None self.close_stack_struct = 0 self.final_jump_op = None - self.min_bytes_before_label = 0 def _prepare(self, inputargs, operations, allgcrefs): self.fm = X86FrameManager() @@ -199,8 +198,13 @@ operations = self._prepare(inputargs, operations, allgcrefs) self._update_bindings(arglocs, inputargs) self.param_depth = prev_depths[1] + self.min_bytes_before_label = 0 return operations + def ensure_next_label_is_at_least_at_position(self, at_least_position): + self.min_bytes_before_label = max(self.min_bytes_before_label, + at_least_position) + def reserve_param(self, n): self.param_depth = max(self.param_depth, n) @@ -468,7 +472,11 @@ self.assembler.mc.mark_op(None) # end of the loop def flush_loop(self): - # rare case: if the loop is too short, pad with NOPs + # rare case: if the loop is too short, or if we are just after + # a GUARD_NOT_INVALIDATED, pad with NOPs. Important! This must + # be called to ensure that there are enough bytes produced, + # because GUARD_NOT_INVALIDATED or redirect_call_assembler() + # will maybe overwrite them. mc = self.assembler.mc while mc.get_relative_pos() < self.min_bytes_before_label: mc.NOP() @@ -558,7 +566,15 @@ def consider_guard_no_exception(self, op): self.perform_guard(op, [], None) - consider_guard_not_invalidated = consider_guard_no_exception + def consider_guard_not_invalidated(self, op): + mc = self.assembler.mc + n = mc.get_relative_pos() + self.perform_guard(op, [], None) + assert n == mc.get_relative_pos() + # ensure that the next label is at least 5 bytes farther than + # the current position. Otherwise, when invalidating the guard, + # we would overwrite randomly the next label's position. 
+ self.ensure_next_label_is_at_least_at_position(n + 5) def consider_guard_exception(self, op): loc = self.rm.make_sure_var_in_reg(op.getarg(0)) From noreply at buildbot.pypy.org Wed Feb 15 17:16:29 2012 From: noreply at buildbot.pypy.org (fijal) Date: Wed, 15 Feb 2012 17:16:29 +0100 (CET) Subject: [pypy-commit] pypy raw-memory-pressure-nursery: pass another arg Message-ID: <20120215161629.6DEF38204C@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: raw-memory-pressure-nursery Changeset: r52514:2c93b78cdc3c Date: 2012-02-15 18:15 +0200 http://bitbucket.org/pypy/pypy/changeset/2c93b78cdc3c/ Log: pass another arg diff --git a/pypy/rpython/llinterp.py b/pypy/rpython/llinterp.py --- a/pypy/rpython/llinterp.py +++ b/pypy/rpython/llinterp.py @@ -719,8 +719,8 @@ track_allocation = flags.get('track_allocation', True) self.heap.free(obj, flavor='raw', track_allocation=track_allocation) - def op_gc_add_memory_pressure(self, size): - self.heap.add_memory_pressure(size) + def op_gc_add_memory_pressure(self, owner, size): + self.heap.add_memory_pressure(owner, size) def op_shrink_array(self, obj, smallersize): return self.heap.shrink_array(obj, smallersize) From noreply at buildbot.pypy.org Wed Feb 15 17:32:14 2012 From: noreply at buildbot.pypy.org (fijal) Date: Wed, 15 Feb 2012 17:32:14 +0100 (CET) Subject: [pypy-commit] pypy raw-memory-pressure-nursery: fix another obvious typo, why those tests run so long Message-ID: <20120215163214.CBC6F8204C@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: raw-memory-pressure-nursery Changeset: r52515:2f60c9f828db Date: 2012-02-15 18:31 +0200 http://bitbucket.org/pypy/pypy/changeset/2f60c9f828db/ Log: fix another obvious typo, why those tests run so long diff --git a/pypy/module/_hashlib/interp_hashlib.py b/pypy/module/_hashlib/interp_hashlib.py --- a/pypy/module/_hashlib/interp_hashlib.py +++ b/pypy/module/_hashlib/interp_hashlib.py @@ -23,10 +23,10 @@ ctx = lltype.nullptr(ropenssl.EVP_MD_CTX.TO) def __init__(self, space, name): + self.name = name digest_type = self.digest_type_by_name(space) self.digest_size = rffi.getintfield(digest_type, 'c_md_size') rgc.add_memory_pressure(self, HASH_MALLOC_SIZE + self.digest_size) - self.name = name # Allocate a lock for each HASH object. 
# An optimization would be to not release the GIL on small requests, From noreply at buildbot.pypy.org Wed Feb 15 18:35:28 2012 From: noreply at buildbot.pypy.org (bivab) Date: Wed, 15 Feb 2012 18:35:28 +0100 (CET) Subject: [pypy-commit] pypy arm-backend-2: remove gcremovetypeptr from test_zrpy_gc Message-ID: <20120215173528.38DB68204C@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: arm-backend-2 Changeset: r52516:081cae386918 Date: 2012-02-10 15:23 +0100 http://bitbucket.org/pypy/pypy/changeset/081cae386918/ Log: remove gcremovetypeptr from test_zrpy_gc diff --git a/pypy/jit/backend/arm/test/test_zrpy_gc.py b/pypy/jit/backend/arm/test/test_zrpy_gc.py --- a/pypy/jit/backend/arm/test/test_zrpy_gc.py +++ b/pypy/jit/backend/arm/test/test_zrpy_gc.py @@ -94,8 +94,9 @@ # t = TranslationContext() t.config.translation.gc = gc - if gc != 'boehm': - t.config.translation.gcremovetypeptr = True + # The ARM backend does not support this option + #if gc != 'boehm': + # t.config.translation.gcremovetypeptr = True for name, value in kwds.items(): setattr(t.config.translation, name, value) ann = t.buildannotator(policy=annpolicy.StrictAnnotatorPolicy()) From noreply at buildbot.pypy.org Wed Feb 15 18:35:30 2012 From: noreply at buildbot.pypy.org (bivab) Date: Wed, 15 Feb 2012 18:35:30 +0100 (CET) Subject: [pypy-commit] pypy arm-backend-2: merge default Message-ID: <20120215173530.D8B198204C@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: arm-backend-2 Changeset: r52517:72c916028806 Date: 2012-02-15 10:01 +0100 http://bitbucket.org/pypy/pypy/changeset/72c916028806/ Log: merge default diff --git a/ctypes_configure/cbuild.py b/ctypes_configure/cbuild.py --- a/ctypes_configure/cbuild.py +++ b/ctypes_configure/cbuild.py @@ -206,8 +206,9 @@ cfiles += eci.separate_module_files include_dirs = list(eci.include_dirs) library_dirs = list(eci.library_dirs) - if sys.platform == 'darwin': # support Fink & Darwinports - for s in ('/sw/', '/opt/local/'): + if (sys.platform == 'darwin' or # support Fink & Darwinports + sys.platform.startswith('freebsd')): + for s in ('/sw/', '/opt/local/', '/usr/local/'): if s + 'include' not in include_dirs and \ os.path.exists(s + 'include'): include_dirs.append(s + 'include') @@ -380,9 +381,9 @@ self.link_extra += ['-pthread'] if sys.platform == 'win32': self.link_extra += ['/DEBUG'] # generate .pdb file - if sys.platform == 'darwin': - # support Fink & Darwinports - for s in ('/sw/', '/opt/local/'): + if (sys.platform == 'darwin' or # support Fink & Darwinports + sys.platform.startswith('freebsd')): + for s in ('/sw/', '/opt/local/', '/usr/local/'): if s + 'include' not in self.include_dirs and \ os.path.exists(s + 'include'): self.include_dirs.append(s + 'include') @@ -395,7 +396,6 @@ self.outputfilename = py.path.local(cfilenames[0]).new(ext=ext) else: self.outputfilename = py.path.local(outputfilename) - self.eci = eci def build(self, noerr=False): basename = self.outputfilename.new(ext='') @@ -436,7 +436,7 @@ old = cfile.dirpath().chdir() try: res = compiler.compile([cfile.basename], - include_dirs=self.eci.include_dirs, + include_dirs=self.include_dirs, extra_preargs=self.compile_extra) assert len(res) == 1 cobjfile = py.path.local(res[0]) @@ -445,9 +445,9 @@ finally: old.chdir() compiler.link_executable(objects, str(self.outputfilename), - libraries=self.eci.libraries, + libraries=self.libraries, extra_preargs=self.link_extra, - library_dirs=self.eci.library_dirs) + library_dirs=self.library_dirs) def build_executable(*args, **kwds): noerr = 
kwds.pop('noerr', False) diff --git a/lib-python/modified-2.7/UserDict.py b/lib-python/modified-2.7/UserDict.py --- a/lib-python/modified-2.7/UserDict.py +++ b/lib-python/modified-2.7/UserDict.py @@ -85,8 +85,12 @@ def __iter__(self): return iter(self.data) -import _abcoll -_abcoll.MutableMapping.register(IterableUserDict) +try: + import _abcoll +except ImportError: + pass # e.g. no '_weakref' module on this pypy +else: + _abcoll.MutableMapping.register(IterableUserDict) class DictMixin: diff --git a/lib_pypy/_subprocess.py b/lib_pypy/_subprocess.py --- a/lib_pypy/_subprocess.py +++ b/lib_pypy/_subprocess.py @@ -87,7 +87,7 @@ # Now the _subprocess module implementation -from ctypes import c_int as _c_int, byref as _byref +from ctypes import c_int as _c_int, byref as _byref, WinError as _WinError class _handle: def __init__(self, handle): @@ -116,7 +116,7 @@ res = _CreatePipe(_byref(read), _byref(write), None, size) if not res: - raise WindowsError("Error") + raise _WinError() return _handle(read.value), _handle(write.value) @@ -132,7 +132,7 @@ access, inherit, options) if not res: - raise WindowsError("Error") + raise _WinError() return _handle(target.value) DUPLICATE_SAME_ACCESS = 2 @@ -165,7 +165,7 @@ start_dir, _byref(si), _byref(pi)) if not res: - raise WindowsError("Error") + raise _WinError() return _handle(pi.hProcess), _handle(pi.hThread), pi.dwProcessID, pi.dwThreadID STARTF_USESHOWWINDOW = 0x001 @@ -178,7 +178,7 @@ res = _WaitForSingleObject(int(handle), milliseconds) if res < 0: - raise WindowsError("Error") + raise _WinError() return res INFINITE = 0xffffffff @@ -190,7 +190,7 @@ res = _GetExitCodeProcess(int(handle), _byref(code)) if not res: - raise WindowsError("Error") + raise _WinError() return code.value @@ -198,7 +198,7 @@ res = _TerminateProcess(int(handle), exitcode) if not res: - raise WindowsError("Error") + raise _WinError() def GetStdHandle(stdhandle): res = _GetStdHandle(stdhandle) diff --git a/lib_pypy/ctypes_config_cache/pyexpat.ctc.py b/lib_pypy/ctypes_config_cache/pyexpat.ctc.py deleted file mode 100644 --- a/lib_pypy/ctypes_config_cache/pyexpat.ctc.py +++ /dev/null @@ -1,45 +0,0 @@ -""" -'ctypes_configure' source for pyexpat.py. -Run this to rebuild _pyexpat_cache.py. 
-""" - -import ctypes -from ctypes import c_char_p, c_int, c_void_p, c_char -from ctypes_configure import configure -import dumpcache - - -class CConfigure: - _compilation_info_ = configure.ExternalCompilationInfo( - includes = ['expat.h'], - libraries = ['expat'], - pre_include_lines = [ - '#define XML_COMBINED_VERSION (10000*XML_MAJOR_VERSION+100*XML_MINOR_VERSION+XML_MICRO_VERSION)'], - ) - - XML_Char = configure.SimpleType('XML_Char', c_char) - XML_COMBINED_VERSION = configure.ConstantInteger('XML_COMBINED_VERSION') - for name in ['XML_PARAM_ENTITY_PARSING_NEVER', - 'XML_PARAM_ENTITY_PARSING_UNLESS_STANDALONE', - 'XML_PARAM_ENTITY_PARSING_ALWAYS']: - locals()[name] = configure.ConstantInteger(name) - - XML_Encoding = configure.Struct('XML_Encoding',[ - ('data', c_void_p), - ('convert', c_void_p), - ('release', c_void_p), - ('map', c_int * 256)]) - XML_Content = configure.Struct('XML_Content',[ - ('numchildren', c_int), - ('children', c_void_p), - ('name', c_char_p), - ('type', c_int), - ('quant', c_int), - ]) - # this is insanely stupid - XML_FALSE = configure.ConstantInteger('XML_FALSE') - XML_TRUE = configure.ConstantInteger('XML_TRUE') - -config = configure.configure(CConfigure) - -dumpcache.dumpcache2('pyexpat', config) diff --git a/lib_pypy/ctypes_config_cache/test/test_cache.py b/lib_pypy/ctypes_config_cache/test/test_cache.py --- a/lib_pypy/ctypes_config_cache/test/test_cache.py +++ b/lib_pypy/ctypes_config_cache/test/test_cache.py @@ -39,10 +39,6 @@ d = run('resource.ctc.py', '_resource_cache.py') assert 'RLIM_NLIMITS' in d -def test_pyexpat(): - d = run('pyexpat.ctc.py', '_pyexpat_cache.py') - assert 'XML_COMBINED_VERSION' in d - def test_locale(): d = run('locale.ctc.py', '_locale_cache.py') assert 'LC_ALL' in d diff --git a/lib_pypy/datetime.py b/lib_pypy/datetime.py --- a/lib_pypy/datetime.py +++ b/lib_pypy/datetime.py @@ -271,8 +271,9 @@ raise ValueError("%s()=%d, must be in -1439..1439" % (name, offset)) def _check_date_fields(year, month, day): - if not isinstance(year, (int, long)): - raise TypeError('int expected') + for value in [year, day]: + if not isinstance(value, (int, long)): + raise TypeError('int expected') if not MINYEAR <= year <= MAXYEAR: raise ValueError('year must be in %d..%d' % (MINYEAR, MAXYEAR), year) if not 1 <= month <= 12: @@ -282,8 +283,9 @@ raise ValueError('day must be in 1..%d' % dim, day) def _check_time_fields(hour, minute, second, microsecond): - if not isinstance(hour, (int, long)): - raise TypeError('int expected') + for value in [hour, minute, second, microsecond]: + if not isinstance(value, (int, long)): + raise TypeError('int expected') if not 0 <= hour <= 23: raise ValueError('hour must be in 0..23', hour) if not 0 <= minute <= 59: @@ -1520,7 +1522,7 @@ def utcfromtimestamp(cls, t): "Construct a UTC datetime from a POSIX timestamp (like time.time())." t, frac = divmod(t, 1.0) - us = round(frac * 1e6) + us = int(round(frac * 1e6)) # If timestamp is less than one microsecond smaller than a # full second, us can be rounded up to 1000000. 
In this case, diff --git a/lib_pypy/pyexpat.py b/lib_pypy/pyexpat.py deleted file mode 100644 --- a/lib_pypy/pyexpat.py +++ /dev/null @@ -1,448 +0,0 @@ - -import ctypes -import ctypes.util -from ctypes import c_char_p, c_int, c_void_p, POINTER, c_char, c_wchar_p -import sys - -# load the platform-specific cache made by running pyexpat.ctc.py -from ctypes_config_cache._pyexpat_cache import * - -try: from __pypy__ import builtinify -except ImportError: builtinify = lambda f: f - - -lib = ctypes.CDLL(ctypes.util.find_library('expat')) - - -XML_Content.children = POINTER(XML_Content) -XML_Parser = ctypes.c_void_p # an opaque pointer -assert XML_Char is ctypes.c_char # this assumption is everywhere in -# cpython's expat, let's explode - -def declare_external(name, args, res): - func = getattr(lib, name) - func.args = args - func.restype = res - globals()[name] = func - -declare_external('XML_ParserCreate', [c_char_p], XML_Parser) -declare_external('XML_ParserCreateNS', [c_char_p, c_char], XML_Parser) -declare_external('XML_Parse', [XML_Parser, c_char_p, c_int, c_int], c_int) -currents = ['CurrentLineNumber', 'CurrentColumnNumber', - 'CurrentByteIndex'] -for name in currents: - func = getattr(lib, 'XML_Get' + name) - func.args = [XML_Parser] - func.restype = c_int - -declare_external('XML_SetReturnNSTriplet', [XML_Parser, c_int], None) -declare_external('XML_GetSpecifiedAttributeCount', [XML_Parser], c_int) -declare_external('XML_SetParamEntityParsing', [XML_Parser, c_int], None) -declare_external('XML_GetErrorCode', [XML_Parser], c_int) -declare_external('XML_StopParser', [XML_Parser, c_int], None) -declare_external('XML_ErrorString', [c_int], c_char_p) -declare_external('XML_SetBase', [XML_Parser, c_char_p], None) -if XML_COMBINED_VERSION >= 19505: - declare_external('XML_UseForeignDTD', [XML_Parser, c_int], None) - -declare_external('XML_SetUnknownEncodingHandler', [XML_Parser, c_void_p, - c_void_p], None) -declare_external('XML_FreeContentModel', [XML_Parser, POINTER(XML_Content)], - None) -declare_external('XML_ExternalEntityParserCreate', [XML_Parser,c_char_p, - c_char_p], - XML_Parser) - -handler_names = [ - 'StartElement', - 'EndElement', - 'ProcessingInstruction', - 'CharacterData', - 'UnparsedEntityDecl', - 'NotationDecl', - 'StartNamespaceDecl', - 'EndNamespaceDecl', - 'Comment', - 'StartCdataSection', - 'EndCdataSection', - 'Default', - 'DefaultHandlerExpand', - 'NotStandalone', - 'ExternalEntityRef', - 'StartDoctypeDecl', - 'EndDoctypeDecl', - 'EntityDecl', - 'XmlDecl', - 'ElementDecl', - 'AttlistDecl', - ] -if XML_COMBINED_VERSION >= 19504: - handler_names.append('SkippedEntity') -setters = {} - -for name in handler_names: - if name == 'DefaultHandlerExpand': - newname = 'XML_SetDefaultHandlerExpand' - else: - name += 'Handler' - newname = 'XML_Set' + name - cfunc = getattr(lib, newname) - cfunc.args = [XML_Parser, ctypes.c_void_p] - cfunc.result = ctypes.c_int - setters[name] = cfunc - -class ExpatError(Exception): - def __str__(self): - return self.s - -error = ExpatError - -class XMLParserType(object): - specified_attributes = 0 - ordered_attributes = 0 - returns_unicode = 1 - encoding = 'utf-8' - def __init__(self, encoding, namespace_separator, _hook_external_entity=False): - self.returns_unicode = 1 - if encoding: - self.encoding = encoding - if not _hook_external_entity: - if namespace_separator is None: - self.itself = XML_ParserCreate(encoding) - else: - self.itself = XML_ParserCreateNS(encoding, ord(namespace_separator)) - if not self.itself: - raise 
RuntimeError("Creating parser failed") - self._set_unknown_encoding_handler() - self.storage = {} - self.buffer = None - self.buffer_size = 8192 - self.character_data_handler = None - self.intern = {} - self.__exc_info = None - - def _flush_character_buffer(self): - if not self.buffer: - return - res = self._call_character_handler(''.join(self.buffer)) - self.buffer = [] - return res - - def _call_character_handler(self, buf): - if self.character_data_handler: - self.character_data_handler(buf) - - def _set_unknown_encoding_handler(self): - def UnknownEncoding(encodingData, name, info_p): - info = info_p.contents - s = ''.join([chr(i) for i in range(256)]) - u = s.decode(self.encoding, 'replace') - for i in range(len(u)): - if u[i] == u'\xfffd': - info.map[i] = -1 - else: - info.map[i] = ord(u[i]) - info.data = None - info.convert = None - info.release = None - return 1 - - CB = ctypes.CFUNCTYPE(c_int, c_void_p, c_char_p, POINTER(XML_Encoding)) - cb = CB(UnknownEncoding) - self._unknown_encoding_handler = (cb, UnknownEncoding) - XML_SetUnknownEncodingHandler(self.itself, cb, None) - - def _set_error(self, code): - e = ExpatError() - e.code = code - lineno = lib.XML_GetCurrentLineNumber(self.itself) - colno = lib.XML_GetCurrentColumnNumber(self.itself) - e.offset = colno - e.lineno = lineno - err = XML_ErrorString(code)[:200] - e.s = "%s: line: %d, column: %d" % (err, lineno, colno) - e.message = e.s - self._error = e - - def Parse(self, data, is_final=0): - res = XML_Parse(self.itself, data, len(data), is_final) - if res == 0: - self._set_error(XML_GetErrorCode(self.itself)) - if self.__exc_info: - exc_info = self.__exc_info - self.__exc_info = None - raise exc_info[0], exc_info[1], exc_info[2] - else: - raise self._error - self._flush_character_buffer() - return res - - def _sethandler(self, name, real_cb): - setter = setters[name] - try: - cb = self.storage[(name, real_cb)] - except KeyError: - cb = getattr(self, 'get_cb_for_%s' % name)(real_cb) - self.storage[(name, real_cb)] = cb - except TypeError: - # weellll... 
- cb = getattr(self, 'get_cb_for_%s' % name)(real_cb) - setter(self.itself, cb) - - def _wrap_cb(self, cb): - def f(*args): - try: - return cb(*args) - except: - self.__exc_info = sys.exc_info() - XML_StopParser(self.itself, XML_FALSE) - return f - - def get_cb_for_StartElementHandler(self, real_cb): - def StartElement(unused, name, attrs): - # unpack name and attrs - conv = self.conv - self._flush_character_buffer() - if self.specified_attributes: - max = XML_GetSpecifiedAttributeCount(self.itself) - else: - max = 0 - while attrs[max]: - max += 2 # copied - if self.ordered_attributes: - res = [attrs[i] for i in range(max)] - else: - res = {} - for i in range(0, max, 2): - res[conv(attrs[i])] = conv(attrs[i + 1]) - real_cb(conv(name), res) - StartElement = self._wrap_cb(StartElement) - CB = ctypes.CFUNCTYPE(None, c_void_p, c_char_p, POINTER(c_char_p)) - return CB(StartElement) - - def get_cb_for_ExternalEntityRefHandler(self, real_cb): - def ExternalEntity(unused, context, base, sysId, pubId): - self._flush_character_buffer() - conv = self.conv - res = real_cb(conv(context), conv(base), conv(sysId), - conv(pubId)) - if res is None: - return 0 - return res - ExternalEntity = self._wrap_cb(ExternalEntity) - CB = ctypes.CFUNCTYPE(c_int, c_void_p, *([c_char_p] * 4)) - return CB(ExternalEntity) - - def get_cb_for_CharacterDataHandler(self, real_cb): - def CharacterData(unused, s, lgt): - if self.buffer is None: - self._call_character_handler(self.conv(s[:lgt])) - else: - if len(self.buffer) + lgt > self.buffer_size: - self._flush_character_buffer() - if self.character_data_handler is None: - return - if lgt >= self.buffer_size: - self._call_character_handler(s[:lgt]) - self.buffer = [] - else: - self.buffer.append(s[:lgt]) - CharacterData = self._wrap_cb(CharacterData) - CB = ctypes.CFUNCTYPE(None, c_void_p, POINTER(c_char), c_int) - return CB(CharacterData) - - def get_cb_for_NotStandaloneHandler(self, real_cb): - def NotStandaloneHandler(unused): - return real_cb() - NotStandaloneHandler = self._wrap_cb(NotStandaloneHandler) - CB = ctypes.CFUNCTYPE(c_int, c_void_p) - return CB(NotStandaloneHandler) - - def get_cb_for_EntityDeclHandler(self, real_cb): - def EntityDecl(unused, ename, is_param, value, value_len, base, - system_id, pub_id, not_name): - self._flush_character_buffer() - if not value: - value = None - else: - value = value[:value_len] - args = [ename, is_param, value, base, system_id, - pub_id, not_name] - args = [self.conv(arg) for arg in args] - real_cb(*args) - EntityDecl = self._wrap_cb(EntityDecl) - CB = ctypes.CFUNCTYPE(None, c_void_p, c_char_p, c_int, c_char_p, - c_int, c_char_p, c_char_p, c_char_p, c_char_p) - return CB(EntityDecl) - - def _conv_content_model(self, model): - children = tuple([self._conv_content_model(model.children[i]) - for i in range(model.numchildren)]) - return (model.type, model.quant, self.conv(model.name), - children) - - def get_cb_for_ElementDeclHandler(self, real_cb): - def ElementDecl(unused, name, model): - self._flush_character_buffer() - modelobj = self._conv_content_model(model[0]) - real_cb(name, modelobj) - XML_FreeContentModel(self.itself, model) - - ElementDecl = self._wrap_cb(ElementDecl) - CB = ctypes.CFUNCTYPE(None, c_void_p, c_char_p, POINTER(XML_Content)) - return CB(ElementDecl) - - def _new_callback_for_string_len(name, sign): - def get_callback_for_(self, real_cb): - def func(unused, s, len): - self._flush_character_buffer() - arg = self.conv(s[:len]) - real_cb(arg) - func.func_name = name - func = self._wrap_cb(func) - CB = 
ctypes.CFUNCTYPE(*sign) - return CB(func) - get_callback_for_.func_name = 'get_cb_for_' + name - return get_callback_for_ - - for name in ['DefaultHandlerExpand', - 'DefaultHandler']: - sign = [None, c_void_p, POINTER(c_char), c_int] - name = 'get_cb_for_' + name - locals()[name] = _new_callback_for_string_len(name, sign) - - def _new_callback_for_starargs(name, sign): - def get_callback_for_(self, real_cb): - def func(unused, *args): - self._flush_character_buffer() - args = [self.conv(arg) for arg in args] - real_cb(*args) - func.func_name = name - func = self._wrap_cb(func) - CB = ctypes.CFUNCTYPE(*sign) - return CB(func) - get_callback_for_.func_name = 'get_cb_for_' + name - return get_callback_for_ - - for name, num_or_sign in [ - ('EndElementHandler', 1), - ('ProcessingInstructionHandler', 2), - ('UnparsedEntityDeclHandler', 5), - ('NotationDeclHandler', 4), - ('StartNamespaceDeclHandler', 2), - ('EndNamespaceDeclHandler', 1), - ('CommentHandler', 1), - ('StartCdataSectionHandler', 0), - ('EndCdataSectionHandler', 0), - ('StartDoctypeDeclHandler', [None, c_void_p] + [c_char_p] * 3 + [c_int]), - ('XmlDeclHandler', [None, c_void_p, c_char_p, c_char_p, c_int]), - ('AttlistDeclHandler', [None, c_void_p] + [c_char_p] * 4 + [c_int]), - ('EndDoctypeDeclHandler', 0), - ('SkippedEntityHandler', [None, c_void_p, c_char_p, c_int]), - ]: - if isinstance(num_or_sign, int): - sign = [None, c_void_p] + [c_char_p] * num_or_sign - else: - sign = num_or_sign - name = 'get_cb_for_' + name - locals()[name] = _new_callback_for_starargs(name, sign) - - def conv_unicode(self, s): - if s is None or isinstance(s, int): - return s - return s.decode(self.encoding, "strict") - - def __setattr__(self, name, value): - # forest of ifs... - if name in ['ordered_attributes', - 'returns_unicode', 'specified_attributes']: - if value: - if name == 'returns_unicode': - self.conv = self.conv_unicode - self.__dict__[name] = 1 - else: - if name == 'returns_unicode': - self.conv = lambda s: s - self.__dict__[name] = 0 - elif name == 'buffer_text': - if value: - self.buffer = [] - else: - self._flush_character_buffer() - self.buffer = None - elif name == 'buffer_size': - if not isinstance(value, int): - raise TypeError("Expected int") - if value <= 0: - raise ValueError("Expected positive int") - self.__dict__[name] = value - elif name == 'namespace_prefixes': - XML_SetReturnNSTriplet(self.itself, int(bool(value))) - elif name in setters: - if name == 'CharacterDataHandler': - # XXX we need to flush buffer here - self._flush_character_buffer() - self.character_data_handler = value - #print name - #print value - #print - self._sethandler(name, value) - else: - self.__dict__[name] = value - - def SetParamEntityParsing(self, arg): - XML_SetParamEntityParsing(self.itself, arg) - - if XML_COMBINED_VERSION >= 19505: - def UseForeignDTD(self, arg=True): - if arg: - flag = XML_TRUE - else: - flag = XML_FALSE - XML_UseForeignDTD(self.itself, flag) - - def __getattr__(self, name): - if name == 'buffer_text': - return self.buffer is not None - elif name in currents: - return getattr(lib, 'XML_Get' + name)(self.itself) - elif name == 'ErrorColumnNumber': - return lib.XML_GetCurrentColumnNumber(self.itself) - elif name == 'ErrorLineNumber': - return lib.XML_GetCurrentLineNumber(self.itself) - return self.__dict__[name] - - def ParseFile(self, file): - return self.Parse(file.read(), False) - - def SetBase(self, base): - XML_SetBase(self.itself, base) - - def ExternalEntityParserCreate(self, context, encoding=None): - 
"""ExternalEntityParserCreate(context[, encoding]) - Create a parser for parsing an external entity based on the - information passed to the ExternalEntityRefHandler.""" - new_parser = XMLParserType(encoding, None, True) - new_parser.itself = XML_ExternalEntityParserCreate(self.itself, - context, encoding) - new_parser._set_unknown_encoding_handler() - return new_parser - - at builtinify -def ErrorString(errno): - return XML_ErrorString(errno)[:200] - - at builtinify -def ParserCreate(encoding=None, namespace_separator=None, intern=None): - if (not isinstance(encoding, str) and - not encoding is None): - raise TypeError("ParserCreate() argument 1 must be string or None, not %s" % encoding.__class__.__name__) - if (not isinstance(namespace_separator, str) and - not namespace_separator is None): - raise TypeError("ParserCreate() argument 2 must be string or None, not %s" % namespace_separator.__class__.__name__) - if namespace_separator is not None: - if len(namespace_separator) > 1: - raise ValueError('namespace_separator must be at most one character, omitted, or None') - if len(namespace_separator) == 0: - namespace_separator = None - return XMLParserType(encoding, namespace_separator) diff --git a/lib_pypy/pypy_test/test_pyexpat.py b/lib_pypy/pypy_test/test_pyexpat.py deleted file mode 100644 --- a/lib_pypy/pypy_test/test_pyexpat.py +++ /dev/null @@ -1,665 +0,0 @@ -# XXX TypeErrors on calling handlers, or on bad return values from a -# handler, are obscure and unhelpful. - -from __future__ import absolute_import -import StringIO, sys -import unittest, py - -from lib_pypy.ctypes_config_cache import rebuild -rebuild.rebuild_one('pyexpat.ctc.py') - -from lib_pypy import pyexpat -#from xml.parsers import expat -expat = pyexpat - -from test.test_support import sortdict, run_unittest - - -class TestSetAttribute: - def setup_method(self, meth): - self.parser = expat.ParserCreate(namespace_separator='!') - self.set_get_pairs = [ - [0, 0], - [1, 1], - [2, 1], - [0, 0], - ] - - def test_returns_unicode(self): - for x, y in self.set_get_pairs: - self.parser.returns_unicode = x - assert self.parser.returns_unicode == y - - def test_ordered_attributes(self): - for x, y in self.set_get_pairs: - self.parser.ordered_attributes = x - assert self.parser.ordered_attributes == y - - def test_specified_attributes(self): - for x, y in self.set_get_pairs: - self.parser.specified_attributes = x - assert self.parser.specified_attributes == y - - -data = '''\ - - - - - - - - - -%unparsed_entity; -]> - - - - Contents of subelements - - -&external_entity; -&skipped_entity; - -''' - - -# Produce UTF-8 output -class TestParse: - class Outputter: - def __init__(self): - self.out = [] - - def StartElementHandler(self, name, attrs): - self.out.append('Start element: ' + repr(name) + ' ' + - sortdict(attrs)) - - def EndElementHandler(self, name): - self.out.append('End element: ' + repr(name)) - - def CharacterDataHandler(self, data): - data = data.strip() - if data: - self.out.append('Character data: ' + repr(data)) - - def ProcessingInstructionHandler(self, target, data): - self.out.append('PI: ' + repr(target) + ' ' + repr(data)) - - def StartNamespaceDeclHandler(self, prefix, uri): - self.out.append('NS decl: ' + repr(prefix) + ' ' + repr(uri)) - - def EndNamespaceDeclHandler(self, prefix): - self.out.append('End of NS decl: ' + repr(prefix)) - - def StartCdataSectionHandler(self): - self.out.append('Start of CDATA section') - - def EndCdataSectionHandler(self): - self.out.append('End of CDATA section') - - def 
CommentHandler(self, text): - self.out.append('Comment: ' + repr(text)) - - def NotationDeclHandler(self, *args): - name, base, sysid, pubid = args - self.out.append('Notation declared: %s' %(args,)) - - def UnparsedEntityDeclHandler(self, *args): - entityName, base, systemId, publicId, notationName = args - self.out.append('Unparsed entity decl: %s' %(args,)) - - def NotStandaloneHandler(self): - self.out.append('Not standalone') - return 1 - - def ExternalEntityRefHandler(self, *args): - context, base, sysId, pubId = args - self.out.append('External entity ref: %s' %(args[1:],)) - return 1 - - def StartDoctypeDeclHandler(self, *args): - self.out.append(('Start doctype', args)) - return 1 - - def EndDoctypeDeclHandler(self): - self.out.append("End doctype") - return 1 - - def EntityDeclHandler(self, *args): - self.out.append(('Entity declaration', args)) - return 1 - - def XmlDeclHandler(self, *args): - self.out.append(('XML declaration', args)) - return 1 - - def ElementDeclHandler(self, *args): - self.out.append(('Element declaration', args)) - return 1 - - def AttlistDeclHandler(self, *args): - self.out.append(('Attribute list declaration', args)) - return 1 - - def SkippedEntityHandler(self, *args): - self.out.append(("Skipped entity", args)) - return 1 - - def DefaultHandler(self, userData): - pass - - def DefaultHandlerExpand(self, userData): - pass - - handler_names = [ - 'StartElementHandler', 'EndElementHandler', 'CharacterDataHandler', - 'ProcessingInstructionHandler', 'UnparsedEntityDeclHandler', - 'NotationDeclHandler', 'StartNamespaceDeclHandler', - 'EndNamespaceDeclHandler', 'CommentHandler', - 'StartCdataSectionHandler', 'EndCdataSectionHandler', 'DefaultHandler', - 'DefaultHandlerExpand', 'NotStandaloneHandler', - 'ExternalEntityRefHandler', 'StartDoctypeDeclHandler', - 'EndDoctypeDeclHandler', 'EntityDeclHandler', 'XmlDeclHandler', - 'ElementDeclHandler', 'AttlistDeclHandler', 'SkippedEntityHandler', - ] - - def test_utf8(self): - - out = self.Outputter() - parser = expat.ParserCreate(namespace_separator='!') - for name in self.handler_names: - setattr(parser, name, getattr(out, name)) - parser.returns_unicode = 0 - parser.Parse(data, 1) - - # Verify output - operations = out.out - expected_operations = [ - ('XML declaration', (u'1.0', u'iso-8859-1', 0)), - 'PI: \'xml-stylesheet\' \'href="stylesheet.css"\'', - "Comment: ' comment data '", - "Not standalone", - ("Start doctype", ('quotations', 'quotations.dtd', None, 1)), - ('Element declaration', (u'root', (2, 0, None, ()))), - ('Attribute list declaration', ('root', 'attr1', 'CDATA', None, - 1)), - ('Attribute list declaration', ('root', 'attr2', 'CDATA', None, - 0)), - "Notation declared: ('notation', None, 'notation.jpeg', None)", - ('Entity declaration', ('acirc', 0, '\xc3\xa2', None, None, None, None)), - ('Entity declaration', ('external_entity', 0, None, None, - 'entity.file', None, None)), - "Unparsed entity decl: ('unparsed_entity', None, 'entity.file', None, 'notation')", - "Not standalone", - "End doctype", - "Start element: 'root' {'attr1': 'value1', 'attr2': 'value2\\xe1\\xbd\\x80'}", - "NS decl: 'myns' 'http://www.python.org/namespace'", - "Start element: 'http://www.python.org/namespace!subelement' {}", - "Character data: 'Contents of subelements'", - "End element: 'http://www.python.org/namespace!subelement'", - "End of NS decl: 'myns'", - "Start element: 'sub2' {}", - 'Start of CDATA section', - "Character data: 'contents of CDATA section'", - 'End of CDATA section', - "End element: 'sub2'", - "External 
entity ref: (None, 'entity.file', None)", - ('Skipped entity', ('skipped_entity', 0)), - "End element: 'root'", - ] - for operation, expected_operation in zip(operations, expected_operations): - assert operation == expected_operation - - def test_unicode(self): - # Try the parse again, this time producing Unicode output - out = self.Outputter() - parser = expat.ParserCreate(namespace_separator='!') - parser.returns_unicode = 1 - for name in self.handler_names: - setattr(parser, name, getattr(out, name)) - - parser.Parse(data, 1) - - operations = out.out - expected_operations = [ - ('XML declaration', (u'1.0', u'iso-8859-1', 0)), - 'PI: u\'xml-stylesheet\' u\'href="stylesheet.css"\'', - "Comment: u' comment data '", - "Not standalone", - ("Start doctype", ('quotations', 'quotations.dtd', None, 1)), - ('Element declaration', (u'root', (2, 0, None, ()))), - ('Attribute list declaration', ('root', 'attr1', 'CDATA', None, - 1)), - ('Attribute list declaration', ('root', 'attr2', 'CDATA', None, - 0)), - "Notation declared: (u'notation', None, u'notation.jpeg', None)", - ('Entity declaration', (u'acirc', 0, u'\xe2', None, None, None, - None)), - ('Entity declaration', (u'external_entity', 0, None, None, - u'entity.file', None, None)), - "Unparsed entity decl: (u'unparsed_entity', None, u'entity.file', None, u'notation')", - "Not standalone", - "End doctype", - "Start element: u'root' {u'attr1': u'value1', u'attr2': u'value2\\u1f40'}", - "NS decl: u'myns' u'http://www.python.org/namespace'", - "Start element: u'http://www.python.org/namespace!subelement' {}", - "Character data: u'Contents of subelements'", - "End element: u'http://www.python.org/namespace!subelement'", - "End of NS decl: u'myns'", - "Start element: u'sub2' {}", - 'Start of CDATA section', - "Character data: u'contents of CDATA section'", - 'End of CDATA section', - "End element: u'sub2'", - "External entity ref: (None, u'entity.file', None)", - ('Skipped entity', ('skipped_entity', 0)), - "End element: u'root'", - ] - for operation, expected_operation in zip(operations, expected_operations): - assert operation == expected_operation - - def test_parse_file(self): - # Try parsing a file - out = self.Outputter() - parser = expat.ParserCreate(namespace_separator='!') - parser.returns_unicode = 1 - for name in self.handler_names: - setattr(parser, name, getattr(out, name)) - file = StringIO.StringIO(data) - - parser.ParseFile(file) - - operations = out.out - expected_operations = [ - ('XML declaration', (u'1.0', u'iso-8859-1', 0)), - 'PI: u\'xml-stylesheet\' u\'href="stylesheet.css"\'', - "Comment: u' comment data '", - "Not standalone", - ("Start doctype", ('quotations', 'quotations.dtd', None, 1)), - ('Element declaration', (u'root', (2, 0, None, ()))), - ('Attribute list declaration', ('root', 'attr1', 'CDATA', None, - 1)), - ('Attribute list declaration', ('root', 'attr2', 'CDATA', None, - 0)), - "Notation declared: (u'notation', None, u'notation.jpeg', None)", - ('Entity declaration', ('acirc', 0, u'\xe2', None, None, None, None)), - ('Entity declaration', (u'external_entity', 0, None, None, u'entity.file', None, None)), - "Unparsed entity decl: (u'unparsed_entity', None, u'entity.file', None, u'notation')", - "Not standalone", - "End doctype", - "Start element: u'root' {u'attr1': u'value1', u'attr2': u'value2\\u1f40'}", - "NS decl: u'myns' u'http://www.python.org/namespace'", - "Start element: u'http://www.python.org/namespace!subelement' {}", - "Character data: u'Contents of subelements'", - "End element: 
u'http://www.python.org/namespace!subelement'", - "End of NS decl: u'myns'", - "Start element: u'sub2' {}", - 'Start of CDATA section', - "Character data: u'contents of CDATA section'", - 'End of CDATA section', - "End element: u'sub2'", - "External entity ref: (None, u'entity.file', None)", - ('Skipped entity', ('skipped_entity', 0)), - "End element: u'root'", - ] - for operation, expected_operation in zip(operations, expected_operations): - assert operation == expected_operation - - -class TestNamespaceSeparator: - def test_legal(self): - # Tests that make sure we get errors when the namespace_separator value - # is illegal, and that we don't for good values: - expat.ParserCreate() - expat.ParserCreate(namespace_separator=None) - expat.ParserCreate(namespace_separator=' ') - - def test_illegal(self): - try: - expat.ParserCreate(namespace_separator=42) - raise AssertionError - except TypeError, e: - assert str(e) == ( - 'ParserCreate() argument 2 must be string or None, not int') - - try: - expat.ParserCreate(namespace_separator='too long') - raise AssertionError - except ValueError, e: - assert str(e) == ( - 'namespace_separator must be at most one character, omitted, or None') - - def test_zero_length(self): - # ParserCreate() needs to accept a namespace_separator of zero length - # to satisfy the requirements of RDF applications that are required - # to simply glue together the namespace URI and the localname. Though - # considered a wart of the RDF specifications, it needs to be supported. - # - # See XML-SIG mailing list thread starting with - # http://mail.python.org/pipermail/xml-sig/2001-April/005202.html - # - expat.ParserCreate(namespace_separator='') # too short - - -class TestInterning: - def test(self): - py.test.skip("Not working") - # Test the interning machinery. - p = expat.ParserCreate() - L = [] - def collector(name, *args): - L.append(name) - p.StartElementHandler = collector - p.EndElementHandler = collector - p.Parse(" ", 1) - tag = L[0] - assert len(L) == 6 - for entry in L: - # L should have the same string repeated over and over. - assert tag is entry - - -class TestBufferText: - def setup_method(self, meth): - self.stuff = [] - self.parser = expat.ParserCreate() - self.parser.buffer_text = 1 - self.parser.CharacterDataHandler = self.CharacterDataHandler - - def check(self, expected, label): - assert self.stuff == expected, ( - "%s\nstuff = %r\nexpected = %r" - % (label, self.stuff, map(unicode, expected))) - - def CharacterDataHandler(self, text): - self.stuff.append(text) - - def StartElementHandler(self, name, attrs): - self.stuff.append("<%s>" % name) - bt = attrs.get("buffer-text") - if bt == "yes": - self.parser.buffer_text = 1 - elif bt == "no": - self.parser.buffer_text = 0 - - def EndElementHandler(self, name): - self.stuff.append("" % name) - - def CommentHandler(self, data): - self.stuff.append("" % data) - - def setHandlers(self, handlers=[]): - for name in handlers: - setattr(self.parser, name, getattr(self, name)) - - def test_default_to_disabled(self): - parser = expat.ParserCreate() - assert not parser.buffer_text - - def test_buffering_enabled(self): - # Make sure buffering is turned on - assert self.parser.buffer_text - self.parser.Parse("123", 1) - assert self.stuff == ['123'], ( - "buffered text not properly collapsed") - - def test1(self): - # XXX This test exposes more detail of Expat's text chunking than we - # XXX like, but it tests what we need to concisely. 
- self.setHandlers(["StartElementHandler"]) - self.parser.Parse("12\n34\n5", 1) - assert self.stuff == ( - ["", "1", "", "2", "\n", "3", "", "4\n5"]), ( - "buffering control not reacting as expected") - - def test2(self): - self.parser.Parse("1<2> \n 3", 1) - assert self.stuff == ["1<2> \n 3"], ( - "buffered text not properly collapsed") - - def test3(self): - self.setHandlers(["StartElementHandler"]) - self.parser.Parse("123", 1) - assert self.stuff == ["", "1", "", "2", "", "3"], ( - "buffered text not properly split") - - def test4(self): - self.setHandlers(["StartElementHandler", "EndElementHandler"]) - self.parser.CharacterDataHandler = None - self.parser.Parse("123", 1) - assert self.stuff == ( - ["", "", "", "", "", ""]) - - def test5(self): - self.setHandlers(["StartElementHandler", "EndElementHandler"]) - self.parser.Parse("123", 1) - assert self.stuff == ( - ["", "1", "", "", "2", "", "", "3", ""]) - - def test6(self): - self.setHandlers(["CommentHandler", "EndElementHandler", - "StartElementHandler"]) - self.parser.Parse("12345 ", 1) - assert self.stuff == ( - ["", "1", "", "", "2", "", "", "345", ""]), ( - "buffered text not properly split") - - def test7(self): - self.setHandlers(["CommentHandler", "EndElementHandler", - "StartElementHandler"]) - self.parser.Parse("12345 ", 1) - assert self.stuff == ( - ["", "1", "", "", "2", "", "", "3", - "", "4", "", "5", ""]), ( - "buffered text not properly split") - - -# Test handling of exception from callback: -class TestHandlerException: - def StartElementHandler(self, name, attrs): - raise RuntimeError(name) - - def test(self): - parser = expat.ParserCreate() - parser.StartElementHandler = self.StartElementHandler - try: - parser.Parse("", 1) - raise AssertionError - except RuntimeError, e: - assert e.args[0] == 'a', ( - "Expected RuntimeError for element 'a', but" + \ - " found %r" % e.args[0]) - - -# Test Current* members: -class TestPosition: - def StartElementHandler(self, name, attrs): - self.check_pos('s') - - def EndElementHandler(self, name): - self.check_pos('e') - - def check_pos(self, event): - pos = (event, - self.parser.CurrentByteIndex, - self.parser.CurrentLineNumber, - self.parser.CurrentColumnNumber) - assert self.upto < len(self.expected_list) - expected = self.expected_list[self.upto] - assert pos == expected, ( - 'Expected position %s, got position %s' %(pos, expected)) - self.upto += 1 - - def test(self): - self.parser = expat.ParserCreate() - self.parser.StartElementHandler = self.StartElementHandler - self.parser.EndElementHandler = self.EndElementHandler - self.upto = 0 - self.expected_list = [('s', 0, 1, 0), ('s', 5, 2, 1), ('s', 11, 3, 2), - ('e', 15, 3, 6), ('e', 17, 4, 1), ('e', 22, 5, 0)] - - xml = '\n \n \n \n' - self.parser.Parse(xml, 1) - - -class Testsf1296433: - def test_parse_only_xml_data(self): - # http://python.org/sf/1296433 - # - xml = "%s" % ('a' * 1025) - # this one doesn't crash - #xml = "%s" % ('a' * 10000) - - class SpecificException(Exception): - pass - - def handler(text): - raise SpecificException - - parser = expat.ParserCreate() - parser.CharacterDataHandler = handler - - py.test.raises(Exception, parser.Parse, xml) - -class TestChardataBuffer: - """ - test setting of chardata buffer size - """ - - def test_1025_bytes(self): - assert self.small_buffer_test(1025) == 2 - - def test_1000_bytes(self): - assert self.small_buffer_test(1000) == 1 - - def test_wrong_size(self): - parser = expat.ParserCreate() - parser.buffer_text = 1 - def f(size): - parser.buffer_size = size - - 
py.test.raises(TypeError, f, sys.maxint+1) - py.test.raises(ValueError, f, -1) - py.test.raises(ValueError, f, 0) - - def test_unchanged_size(self): - xml1 = ("%s" % ('a' * 512)) - xml2 = 'a'*512 + '' - parser = expat.ParserCreate() - parser.CharacterDataHandler = self.counting_handler - parser.buffer_size = 512 - parser.buffer_text = 1 - - # Feed 512 bytes of character data: the handler should be called - # once. - self.n = 0 - parser.Parse(xml1) - assert self.n == 1 - - # Reassign to buffer_size, but assign the same size. - parser.buffer_size = parser.buffer_size - assert self.n == 1 - - # Try parsing rest of the document - parser.Parse(xml2) - assert self.n == 2 - - - def test_disabling_buffer(self): - xml1 = "%s" % ('a' * 512) - xml2 = ('b' * 1024) - xml3 = "%s" % ('c' * 1024) - parser = expat.ParserCreate() - parser.CharacterDataHandler = self.counting_handler - parser.buffer_text = 1 - parser.buffer_size = 1024 - assert parser.buffer_size == 1024 - - # Parse one chunk of XML - self.n = 0 - parser.Parse(xml1, 0) - assert parser.buffer_size == 1024 - assert self.n == 1 - - # Turn off buffering and parse the next chunk. - parser.buffer_text = 0 - assert not parser.buffer_text - assert parser.buffer_size == 1024 - for i in range(10): - parser.Parse(xml2, 0) - assert self.n == 11 - - parser.buffer_text = 1 - assert parser.buffer_text - assert parser.buffer_size == 1024 - parser.Parse(xml3, 1) - assert self.n == 12 - - - - def make_document(self, bytes): - return ("" + bytes * 'a' + '') - - def counting_handler(self, text): - self.n += 1 - - def small_buffer_test(self, buffer_len): - xml = "%s" % ('a' * buffer_len) - parser = expat.ParserCreate() - parser.CharacterDataHandler = self.counting_handler - parser.buffer_size = 1024 - parser.buffer_text = 1 - - self.n = 0 - parser.Parse(xml) - return self.n - - def test_change_size_1(self): - xml1 = "%s" % ('a' * 1024) - xml2 = "aaa%s" % ('a' * 1025) - parser = expat.ParserCreate() - parser.CharacterDataHandler = self.counting_handler - parser.buffer_text = 1 - parser.buffer_size = 1024 - assert parser.buffer_size == 1024 - - self.n = 0 - parser.Parse(xml1, 0) - parser.buffer_size *= 2 - assert parser.buffer_size == 2048 - parser.Parse(xml2, 1) - assert self.n == 2 - - def test_change_size_2(self): - xml1 = "a%s" % ('a' * 1023) - xml2 = "aaa%s" % ('a' * 1025) - parser = expat.ParserCreate() - parser.CharacterDataHandler = self.counting_handler - parser.buffer_text = 1 - parser.buffer_size = 2048 - assert parser.buffer_size == 2048 - - self.n=0 - parser.Parse(xml1, 0) - parser.buffer_size /= 2 - assert parser.buffer_size == 1024 - parser.Parse(xml2, 1) - assert self.n == 4 - - def test_segfault(self): - py.test.raises(TypeError, expat.ParserCreate, 1234123123) - -def test_invalid_data(): - parser = expat.ParserCreate() - parser.Parse('invalid.xml', 0) - try: - parser.Parse("", 1) - except expat.ExpatError, e: - assert e.code == 2 # XXX is this reliable? 
- assert e.lineno == 1 - assert e.message.startswith('syntax error') - else: - py.test.fail("Did not raise") - diff --git a/pypy/config/translationoption.py b/pypy/config/translationoption.py --- a/pypy/config/translationoption.py +++ b/pypy/config/translationoption.py @@ -106,7 +106,8 @@ BoolOption("sandbox", "Produce a fully-sandboxed executable", default=False, cmdline="--sandbox", requires=[("translation.thread", False)], - suggests=[("translation.gc", "generation")]), + suggests=[("translation.gc", "generation"), + ("translation.gcrootfinder", "shadowstack")]), BoolOption("rweakref", "The backend supports RPython-level weakrefs", default=True), diff --git a/pypy/doc/coding-guide.rst b/pypy/doc/coding-guide.rst --- a/pypy/doc/coding-guide.rst +++ b/pypy/doc/coding-guide.rst @@ -388,7 +388,9 @@ In a few cases (e.g. hash table manipulation), we need machine-sized unsigned arithmetic. For these cases there is the r_uint class, which is a pure Python implementation of word-sized unsigned integers that silently wrap - around. The purpose of this class (as opposed to helper functions as above) + around. ("word-sized" and "machine-sized" are used equivalently and mean + the native size, which you get using "unsigned long" in C.) + The purpose of this class (as opposed to helper functions as above) is consistent typing: both Python and the annotator will propagate r_uint instances in the program and interpret all the operations between them as unsigned. Instances of r_uint are special-cased by the code generators to diff --git a/pypy/doc/config/objspace.usemodules.pyexpat.txt b/pypy/doc/config/objspace.usemodules.pyexpat.txt --- a/pypy/doc/config/objspace.usemodules.pyexpat.txt +++ b/pypy/doc/config/objspace.usemodules.pyexpat.txt @@ -1,2 +1,1 @@ -Use (experimental) pyexpat module written in RPython, instead of CTypes -version which is used by default. +Use the pyexpat module, written in RPython. diff --git a/pypy/doc/getting-started-python.rst b/pypy/doc/getting-started-python.rst --- a/pypy/doc/getting-started-python.rst +++ b/pypy/doc/getting-started-python.rst @@ -103,18 +103,22 @@ executable. The executable behaves mostly like a normal Python interpreter:: $ ./pypy-c - Python 2.7.0 (61ef2a11b56a, Mar 02 2011, 03:00:11) - [PyPy 1.6.0 with GCC 4.4.3] on linux2 + Python 2.7.2 (0e28b379d8b3, Feb 09 2012, 19:41:03) + [PyPy 1.8.0 with GCC 4.4.3] on linux2 Type "help", "copyright", "credits" or "license" for more information. And now for something completely different: ``this sentence is false'' >>>> 46 - 4 42 >>>> from test import pystone >>>> pystone.main() - Pystone(1.1) time for 50000 passes = 0.280017 - This machine benchmarks at 178561 pystones/second - >>>> + Pystone(1.1) time for 50000 passes = 0.220015 + This machine benchmarks at 227257 pystones/second + >>>> pystone.main() + Pystone(1.1) time for 50000 passes = 0.060004 + This machine benchmarks at 833278 pystones/second + >>>> +Note that pystone gets faster as the JIT kicks in. This executable can be moved around or copied on other machines; see Installation_ below. 
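The coding-guide.rst hunk above describes r_uint as a word-sized ("machine-sized")
unsigned integer that silently wraps around.  A minimal sketch of that behaviour,
assuming the pypy.rlib.rarithmetic import path and detecting the word size at
runtime -- an illustration only, not part of the patch::

    import sys
    from pypy.rlib.rarithmetic import r_uint   # import path is an assumption

    # Word-sized unsigned arithmetic wraps around silently instead of
    # overflowing or going negative.
    word_bits = 64 if sys.maxint > 2 ** 32 else 32
    x = r_uint(0) - r_uint(1)
    assert x == r_uint(2 ** word_bits - 1)     # largest unsigned value, not -1
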
diff --git a/pypy/doc/getting-started.rst b/pypy/doc/getting-started.rst --- a/pypy/doc/getting-started.rst +++ b/pypy/doc/getting-started.rst @@ -53,14 +53,15 @@ PyPy is ready to be executed as soon as you unpack the tarball or the zip file, with no need to install it in any specific location:: - $ tar xf pypy-1.7-linux.tar.bz2 - - $ ./pypy-1.7/bin/pypy - Python 2.7.1 (?, Apr 27 2011, 12:44:21) - [PyPy 1.7.0 with GCC 4.4.3] on linux2 + $ tar xf pypy-1.8-linux.tar.bz2 + $ ./pypy-1.8/bin/pypy + Python 2.7.2 (0e28b379d8b3, Feb 09 2012, 19:41:03) + [PyPy 1.8.0 with GCC 4.4.3] on linux2 Type "help", "copyright", "credits" or "license" for more information. - And now for something completely different: ``implementing LOGO in LOGO: - "turtles all the way down"'' + And now for something completely different: ``it seems to me that once you + settle on an execution / object model and / or bytecode format, you've already + decided what languages (where the 's' seems superfluous) support is going to be + first class for'' >>>> If you want to make PyPy available system-wide, you can put a symlink to the @@ -75,14 +76,14 @@ $ curl -O https://raw.github.com/pypa/pip/master/contrib/get-pip.py - $ ./pypy-1.7/bin/pypy distribute_setup.py + $ ./pypy-1.8/bin/pypy distribute_setup.py - $ ./pypy-1.7/bin/pypy get-pip.py + $ ./pypy-1.8/bin/pypy get-pip.py - $ ./pypy-1.7/bin/pip install pygments # for example + $ ./pypy-1.8/bin/pip install pygments # for example -3rd party libraries will be installed in ``pypy-1.7/site-packages``, and -the scripts in ``pypy-1.7/bin``. +3rd party libraries will be installed in ``pypy-1.8/site-packages``, and +the scripts in ``pypy-1.8/bin``. Installing using virtualenv --------------------------- diff --git a/pypy/doc/index.rst b/pypy/doc/index.rst --- a/pypy/doc/index.rst +++ b/pypy/doc/index.rst @@ -15,7 +15,7 @@ * `FAQ`_: some frequently asked questions. -* `Release 1.7`_: the latest official release +* `Release 1.8`_: the latest official release * `PyPy Blog`_: news and status info about PyPy @@ -75,7 +75,7 @@ .. _`Getting Started`: getting-started.html .. _`Papers`: extradoc.html .. _`Videos`: video-index.html -.. _`Release 1.7`: http://pypy.org/download.html +.. _`Release 1.8`: http://pypy.org/download.html .. _`speed.pypy.org`: http://speed.pypy.org .. _`RPython toolchain`: translation.html .. _`potential project ideas`: project-ideas.html @@ -120,9 +120,9 @@ Windows, on top of .NET, and on top of Java. To dig into PyPy it is recommended to try out the current Mercurial default branch, which is always working or mostly working, -instead of the latest release, which is `1.7`__. +instead of the latest release, which is `1.8`__. -.. __: release-1.7.0.html +.. __: release-1.8.0.html PyPy is mainly developed on Linux and Mac OS X. Windows is supported, but platform-specific bugs tend to take longer before we notice and fix diff --git a/pypy/doc/release-1.8.0.rst b/pypy/doc/release-1.8.0.rst --- a/pypy/doc/release-1.8.0.rst +++ b/pypy/doc/release-1.8.0.rst @@ -2,16 +2,21 @@ PyPy 1.8 - business as usual ============================ -We're pleased to announce the 1.8 release of PyPy. As has become a habit, this -release brings a lot of bugfixes, and performance and memory improvements over -the 1.7 release. The main highlight of the release is the introduction of -list strategies which makes homogenous lists more efficient both in terms -of performance and memory. This release also upgrades us from Python 2.7.1 compatibility to 2.7.2. 
Otherwise it's "business as usual" in the sense -that performance improved roughly 10% on average since the previous release. -You can download the PyPy 1.8 release here: +We're pleased to announce the 1.8 release of PyPy. As habitual this +release brings a lot of bugfixes, together with performance and memory +improvements over the 1.7 release. The main highlight of the release +is the introduction of `list strategies`_ which makes homogenous lists +more efficient both in terms of performance and memory. This release +also upgrades us from Python 2.7.1 compatibility to 2.7.2. Otherwise +it's "business as usual" in the sense that performance improved +roughly 10% on average since the previous release. + +you can download the PyPy 1.8 release here: http://pypy.org/download.html +.. _`list strategies`: http://morepypy.blogspot.com/2011/10/more-compact-lists-with-list-strategies.html + What is PyPy? ============= @@ -60,13 +65,6 @@ * New JIT hooks that allow you to hook into the JIT process from your python program. There is a `brief overview`_ of what they offer. -* Since the last release there was a significant breakthrough in PyPy's - fundraising. We now have enough funds to work on first stages of `numpypy`_ - and `py3k`_. We would like to thank again to everyone who donated. - - It's also probably worth noting, we're considering donations for the STM - project. - * Standard library upgrade from 2.7.1 to 2.7.2. Ongoing work @@ -82,7 +80,15 @@ * More numpy work -* Software Transactional Memory, you can read more about `our plans`_ +* Since the last release there was a significant breakthrough in PyPy's + fundraising. We now have enough funds to work on first stages of `numpypy`_ + and `py3k`_. We would like to thank again to everyone who donated. + +* It's also probably worth noting, we're considering donations for the + Software Transactional Memory project. You can read more about `our plans`_ + +Cheers, +The PyPy Team .. _`brief overview`: http://doc.pypy.org/en/latest/jit-hooks.html .. 
_`numpy status page`: http://buildbot.pypy.org/numpy-status/latest.html diff --git a/pypy/module/_io/test/test_fileio.py b/pypy/module/_io/test/test_fileio.py --- a/pypy/module/_io/test/test_fileio.py +++ b/pypy/module/_io/test/test_fileio.py @@ -134,7 +134,10 @@ assert a == 'a\nbxxxxxxx' def test_nonblocking_read(self): - import os, fcntl + try: + import os, fcntl + except ImportError: + skip("need fcntl to set nonblocking mode") r_fd, w_fd = os.pipe() # set nonblocking fcntl.fcntl(r_fd, fcntl.F_SETFL, os.O_NONBLOCK) diff --git a/pypy/module/cpyext/api.py b/pypy/module/cpyext/api.py --- a/pypy/module/cpyext/api.py +++ b/pypy/module/cpyext/api.py @@ -23,6 +23,7 @@ from pypy.interpreter.function import StaticMethod from pypy.objspace.std.sliceobject import W_SliceObject from pypy.module.__builtin__.descriptor import W_Property +from pypy.module.__builtin__.interp_classobj import W_ClassObject from pypy.module.__builtin__.interp_memoryview import W_MemoryView from pypy.rlib.entrypoint import entrypoint from pypy.rlib.unroll import unrolling_iterable @@ -397,6 +398,7 @@ 'Module': 'space.gettypeobject(Module.typedef)', 'Property': 'space.gettypeobject(W_Property.typedef)', 'Slice': 'space.gettypeobject(W_SliceObject.typedef)', + 'Class': 'space.gettypeobject(W_ClassObject.typedef)', 'StaticMethod': 'space.gettypeobject(StaticMethod.typedef)', 'CFunction': 'space.gettypeobject(cpyext.methodobject.W_PyCFunctionObject.typedef)', 'WrapperDescr': 'space.gettypeobject(cpyext.methodobject.W_PyCMethodObject.typedef)' diff --git a/pypy/module/cpyext/include/methodobject.h b/pypy/module/cpyext/include/methodobject.h --- a/pypy/module/cpyext/include/methodobject.h +++ b/pypy/module/cpyext/include/methodobject.h @@ -26,6 +26,7 @@ PyObject_HEAD PyMethodDef *m_ml; /* Description of the C function to call */ PyObject *m_self; /* Passed as 'self' arg to the C func, can be NULL */ + PyObject *m_module; /* The __module__ attribute, can be anything */ } PyCFunctionObject; /* Flag passed to newmethodobject */ diff --git a/pypy/module/cpyext/include/patchlevel.h b/pypy/module/cpyext/include/patchlevel.h --- a/pypy/module/cpyext/include/patchlevel.h +++ b/pypy/module/cpyext/include/patchlevel.h @@ -21,12 +21,12 @@ /* Version parsed out into numeric values */ #define PY_MAJOR_VERSION 2 #define PY_MINOR_VERSION 7 -#define PY_MICRO_VERSION 1 +#define PY_MICRO_VERSION 2 #define PY_RELEASE_LEVEL PY_RELEASE_LEVEL_FINAL #define PY_RELEASE_SERIAL 0 /* Version as a string */ -#define PY_VERSION "2.7.1" +#define PY_VERSION "2.7.2" /* PyPy version as a string */ #define PYPY_VERSION "1.8.1" diff --git a/pypy/module/cpyext/include/pystate.h b/pypy/module/cpyext/include/pystate.h --- a/pypy/module/cpyext/include/pystate.h +++ b/pypy/module/cpyext/include/pystate.h @@ -24,4 +24,6 @@ enum {PyGILState_LOCKED, PyGILState_UNLOCKED} PyGILState_STATE; +#define PyThreadState_GET() PyThreadState_Get() + #endif /* !Py_PYSTATE_H */ diff --git a/pypy/module/cpyext/include/structmember.h b/pypy/module/cpyext/include/structmember.h --- a/pypy/module/cpyext/include/structmember.h +++ b/pypy/module/cpyext/include/structmember.h @@ -20,7 +20,7 @@ } PyMemberDef; -/* Types */ +/* Types. These constants are also in structmemberdefs.py. */ #define T_SHORT 0 #define T_INT 1 #define T_LONG 2 @@ -42,9 +42,12 @@ #define T_LONGLONG 17 #define T_ULONGLONG 18 -/* Flags */ +/* Flags. These constants are also in structmemberdefs.py. 
*/ #define READONLY 1 #define RO READONLY /* Shorthand */ +#define READ_RESTRICTED 2 +#define PY_WRITE_RESTRICTED 4 +#define RESTRICTED (READ_RESTRICTED | PY_WRITE_RESTRICTED) #ifdef __cplusplus diff --git a/pypy/module/cpyext/methodobject.py b/pypy/module/cpyext/methodobject.py --- a/pypy/module/cpyext/methodobject.py +++ b/pypy/module/cpyext/methodobject.py @@ -32,6 +32,7 @@ PyObjectFields + ( ('m_ml', lltype.Ptr(PyMethodDef)), ('m_self', PyObject), + ('m_module', PyObject), )) PyCFunctionObject = lltype.Ptr(PyCFunctionObjectStruct) @@ -47,11 +48,13 @@ assert isinstance(w_obj, W_PyCFunctionObject) py_func.c_m_ml = w_obj.ml py_func.c_m_self = make_ref(space, w_obj.w_self) + py_func.c_m_module = make_ref(space, w_obj.w_module) @cpython_api([PyObject], lltype.Void, external=False) def cfunction_dealloc(space, py_obj): py_func = rffi.cast(PyCFunctionObject, py_obj) Py_DecRef(space, py_func.c_m_self) + Py_DecRef(space, py_func.c_m_module) from pypy.module.cpyext.object import PyObject_dealloc PyObject_dealloc(space, py_obj) diff --git a/pypy/module/cpyext/object.py b/pypy/module/cpyext/object.py --- a/pypy/module/cpyext/object.py +++ b/pypy/module/cpyext/object.py @@ -381,6 +381,15 @@ This is the equivalent of the Python expression hash(o).""" return space.int_w(space.hash(w_obj)) + at cpython_api([PyObject], PyObject) +def PyObject_Dir(space, w_o): + """This is equivalent to the Python expression dir(o), returning a (possibly + empty) list of strings appropriate for the object argument, or NULL if there + was an error. If the argument is NULL, this is like the Python dir(), + returning the names of the current locals; in this case, if no execution frame + is active then NULL is returned but PyErr_Occurred() will return false.""" + return space.call_function(space.builtin.get('dir'), w_o) + @cpython_api([PyObject, rffi.CCHARPP, Py_ssize_tP], rffi.INT_real, error=-1) def PyObject_AsCharBuffer(space, obj, bufferp, sizep): """Returns a pointer to a read-only memory location usable as diff --git a/pypy/module/cpyext/pyfile.py b/pypy/module/cpyext/pyfile.py --- a/pypy/module/cpyext/pyfile.py +++ b/pypy/module/cpyext/pyfile.py @@ -1,7 +1,8 @@ from pypy.rpython.lltypesystem import rffi, lltype from pypy.module.cpyext.api import ( - cpython_api, CONST_STRING, FILEP, build_type_checkers) + cpython_api, CANNOT_FAIL, CONST_STRING, FILEP, build_type_checkers) from pypy.module.cpyext.pyobject import PyObject, borrow_from +from pypy.module.cpyext.object import Py_PRINT_RAW from pypy.interpreter.error import OperationError from pypy.module._file.interp_file import W_File @@ -61,11 +62,49 @@ def PyFile_WriteString(space, s, w_p): """Write string s to file object p. Return 0 on success or -1 on failure; the appropriate exception will be set.""" - w_s = space.wrap(rffi.charp2str(s)) - space.call_method(w_p, "write", w_s) + w_str = space.wrap(rffi.charp2str(s)) + space.call_method(w_p, "write", w_str) + return 0 + + at cpython_api([PyObject, PyObject, rffi.INT_real], rffi.INT_real, error=-1) +def PyFile_WriteObject(space, w_obj, w_p, flags): + """ + Write object obj to file object p. The only supported flag for flags is + Py_PRINT_RAW; if given, the str() of the object is written + instead of the repr(). 
Return 0 on success or -1 on failure; the + appropriate exception will be set.""" + if rffi.cast(lltype.Signed, flags) & Py_PRINT_RAW: + w_str = space.str(w_obj) + else: + w_str = space.repr(w_obj) + space.call_method(w_p, "write", w_str) return 0 @cpython_api([PyObject], PyObject) def PyFile_Name(space, w_p): """Return the name of the file specified by p as a string object.""" - return borrow_from(w_p, space.getattr(w_p, space.wrap("name"))) \ No newline at end of file + return borrow_from(w_p, space.getattr(w_p, space.wrap("name"))) + + at cpython_api([PyObject, rffi.INT_real], rffi.INT_real, error=CANNOT_FAIL) +def PyFile_SoftSpace(space, w_p, newflag): + """ + This function exists for internal use by the interpreter. Set the + softspace attribute of p to newflag and return the previous value. + p does not have to be a file object for this function to work + properly; any object is supported (thought its only interesting if + the softspace attribute can be set). This function clears any + errors, and will return 0 as the previous value if the attribute + either does not exist or if there were errors in retrieving it. + There is no way to detect errors from this function, but doing so + should not be needed.""" + try: + if rffi.cast(lltype.Signed, newflag): + w_newflag = space.w_True + else: + w_newflag = space.w_False + oldflag = space.int_w(space.getattr(w_p, space.wrap("softspace"))) + space.setattr(w_p, space.wrap("softspace"), w_newflag) + return oldflag + except OperationError, e: + return 0 + diff --git a/pypy/module/cpyext/pythonrun.py b/pypy/module/cpyext/pythonrun.py --- a/pypy/module/cpyext/pythonrun.py +++ b/pypy/module/cpyext/pythonrun.py @@ -14,6 +14,20 @@ value.""" return space.fromcache(State).get_programname() + at cpython_api([], rffi.CCHARP) +def Py_GetVersion(space): + """Return the version of this Python interpreter. This is a + string that looks something like + + "1.5 (\#67, Dec 31 1997, 22:34:28) [GCC 2.7.2.2]" + + The first word (up to the first space character) is the current + Python version; the first three characters are the major and minor + version separated by a period. The returned string points into + static storage; the caller should not modify its value. The value + is available to Python code as sys.version.""" + return space.fromcache(State).get_version() + @cpython_api([lltype.Ptr(lltype.FuncType([], lltype.Void))], rffi.INT_real, error=-1) def Py_AtExit(space, func_ptr): """Register a cleanup function to be called by Py_Finalize(). 
The cleanup diff --git a/pypy/module/cpyext/state.py b/pypy/module/cpyext/state.py --- a/pypy/module/cpyext/state.py +++ b/pypy/module/cpyext/state.py @@ -10,6 +10,7 @@ self.space = space self.reset() self.programname = lltype.nullptr(rffi.CCHARP.TO) + self.version = lltype.nullptr(rffi.CCHARP.TO) def reset(self): from pypy.module.cpyext.modsupport import PyMethodDef @@ -102,6 +103,15 @@ lltype.render_immortal(self.programname) return self.programname + def get_version(self): + if not self.version: + space = self.space + w_version = space.sys.get('version') + version = space.str_w(w_version) + self.version = rffi.str2charp(version) + lltype.render_immortal(self.version) + return self.version + def find_extension(self, name, path): from pypy.module.cpyext.modsupport import PyImport_AddModule from pypy.interpreter.module import Module diff --git a/pypy/module/cpyext/stringobject.py b/pypy/module/cpyext/stringobject.py --- a/pypy/module/cpyext/stringobject.py +++ b/pypy/module/cpyext/stringobject.py @@ -250,6 +250,26 @@ s = rffi.charp2str(string) return space.new_interned_str(s) + at cpython_api([PyObjectP], lltype.Void) +def PyString_InternInPlace(space, string): + """Intern the argument *string in place. The argument must be the + address of a pointer variable pointing to a Python string object. + If there is an existing interned string that is the same as + *string, it sets *string to it (decrementing the reference count + of the old string object and incrementing the reference count of + the interned string object), otherwise it leaves *string alone and + interns it (incrementing its reference count). (Clarification: + even though there is a lot of talk about reference counts, think + of this function as reference-count-neutral; you own the object + after the call if and only if you owned it before the call.) 
+ + This function is not available in 3.x and does not have a PyBytes + alias.""" + w_str = from_ref(space, string[0]) + w_str = space.new_interned_w_str(w_str) + Py_DecRef(space, string[0]) + string[0] = make_ref(space, w_str) + @cpython_api([PyObject, rffi.CCHARP, rffi.CCHARP], PyObject) def PyString_AsEncodedObject(space, w_str, encoding, errors): """Encode a string object using the codec registered for encoding and return diff --git a/pypy/module/cpyext/structmemberdefs.py b/pypy/module/cpyext/structmemberdefs.py --- a/pypy/module/cpyext/structmemberdefs.py +++ b/pypy/module/cpyext/structmemberdefs.py @@ -1,3 +1,5 @@ +# These constants are also in include/structmember.h + T_SHORT = 0 T_INT = 1 T_LONG = 2 @@ -18,3 +20,6 @@ T_ULONGLONG = 18 READONLY = RO = 1 +READ_RESTRICTED = 2 +WRITE_RESTRICTED = 4 +RESTRICTED = READ_RESTRICTED | WRITE_RESTRICTED diff --git a/pypy/module/cpyext/stubs.py b/pypy/module/cpyext/stubs.py --- a/pypy/module/cpyext/stubs.py +++ b/pypy/module/cpyext/stubs.py @@ -1,5 +1,5 @@ from pypy.module.cpyext.api import ( - cpython_api, PyObject, PyObjectP, CANNOT_FAIL, Py_buffer + cpython_api, PyObject, PyObjectP, CANNOT_FAIL ) from pypy.module.cpyext.complexobject import Py_complex_ptr as Py_complex from pypy.rpython.lltypesystem import rffi, lltype @@ -10,6 +10,7 @@ PyMethodDef = rffi.VOIDP PyGetSetDef = rffi.VOIDP PyMemberDef = rffi.VOIDP +Py_buffer = rffi.VOIDP va_list = rffi.VOIDP PyDateTime_Date = rffi.VOIDP PyDateTime_DateTime = rffi.VOIDP @@ -32,10 +33,6 @@ def _PyObject_Del(space, op): raise NotImplementedError - at cpython_api([PyObject], rffi.INT_real, error=CANNOT_FAIL) -def PyObject_CheckBuffer(space, obj): - raise NotImplementedError - @cpython_api([rffi.CCHARP], Py_ssize_t, error=CANNOT_FAIL) def PyBuffer_SizeFromFormat(space, format): """Return the implied ~Py_buffer.itemsize from the struct-stype @@ -684,28 +681,6 @@ """ raise NotImplementedError - at cpython_api([PyObject, rffi.INT_real], rffi.INT_real, error=CANNOT_FAIL) -def PyFile_SoftSpace(space, p, newflag): - """ - This function exists for internal use by the interpreter. Set the - softspace attribute of p to newflag and return the previous value. - p does not have to be a file object for this function to work properly; any - object is supported (thought its only interesting if the softspace - attribute can be set). This function clears any errors, and will return 0 - as the previous value if the attribute either does not exist or if there were - errors in retrieving it. There is no way to detect errors from this function, - but doing so should not be needed.""" - raise NotImplementedError - - at cpython_api([PyObject, PyObject, rffi.INT_real], rffi.INT_real, error=-1) -def PyFile_WriteObject(space, obj, p, flags): - """ - Write object obj to file object p. The only supported flag for flags is - Py_PRINT_RAW; if given, the str() of the object is written - instead of the repr(). Return 0 on success or -1 on failure; the - appropriate exception will be set.""" - raise NotImplementedError - @cpython_api([], PyObject) def PyFloat_GetInfo(space): """Return a structseq instance which contains information about the @@ -1097,19 +1072,6 @@ raise NotImplementedError @cpython_api([], rffi.CCHARP) -def Py_GetVersion(space): - """Return the version of this Python interpreter. 
This is a string that looks - something like - - "1.5 (\#67, Dec 31 1997, 22:34:28) [GCC 2.7.2.2]" - - The first word (up to the first space character) is the current Python version; - the first three characters are the major and minor version separated by a - period. The returned string points into static storage; the caller should not - modify its value. The value is available to Python code as sys.version.""" - raise NotImplementedError - - at cpython_api([], rffi.CCHARP) def Py_GetPlatform(space): """Return the platform identifier for the current platform. On Unix, this is formed from the"official" name of the operating system, converted to lower @@ -1685,15 +1647,6 @@ """ raise NotImplementedError - at cpython_api([PyObject], PyObject) -def PyObject_Dir(space, o): - """This is equivalent to the Python expression dir(o), returning a (possibly - empty) list of strings appropriate for the object argument, or NULL if there - was an error. If the argument is NULL, this is like the Python dir(), - returning the names of the current locals; in this case, if no execution frame - is active then NULL is returned but PyErr_Occurred() will return false.""" - raise NotImplementedError - @cpython_api([], PyFrameObject) def PyEval_GetFrame(space): """Return the current thread state's frame, which is NULL if no frame is @@ -1815,21 +1768,6 @@ """Empty an existing set of all elements.""" raise NotImplementedError - at cpython_api([PyObjectP], lltype.Void) -def PyString_InternInPlace(space, string): - """Intern the argument *string in place. The argument must be the address of a - pointer variable pointing to a Python string object. If there is an existing - interned string that is the same as *string, it sets *string to it - (decrementing the reference count of the old string object and incrementing the - reference count of the interned string object), otherwise it leaves *string - alone and interns it (incrementing its reference count). (Clarification: even - though there is a lot of talk about reference counts, think of this function as - reference-count-neutral; you own the object after the call if and only if you - owned it before the call.) 
- - This function is not available in 3.x and does not have a PyBytes alias.""" - raise NotImplementedError - @cpython_api([rffi.CCHARP, Py_ssize_t, rffi.CCHARP, rffi.CCHARP], PyObject) def PyString_Decode(space, s, size, encoding, errors): """Create an object by decoding size bytes of the encoded buffer s using the diff --git a/pypy/module/cpyext/test/test_classobject.py b/pypy/module/cpyext/test/test_classobject.py --- a/pypy/module/cpyext/test/test_classobject.py +++ b/pypy/module/cpyext/test/test_classobject.py @@ -1,4 +1,5 @@ from pypy.module.cpyext.test.test_api import BaseApiTest +from pypy.module.cpyext.test.test_cpyext import AppTestCpythonExtensionBase from pypy.interpreter.function import Function, Method class TestClassObject(BaseApiTest): @@ -51,3 +52,14 @@ assert api.PyInstance_Check(w_instance) assert space.is_true(space.call_method(space.builtin, "isinstance", w_instance, w_class)) + +class AppTestStringObject(AppTestCpythonExtensionBase): + def test_class_type(self): + module = self.import_extension('foo', [ + ("get_classtype", "METH_NOARGS", + """ + Py_INCREF(&PyClass_Type); + return &PyClass_Type; + """)]) + class C: pass + assert module.get_classtype() is type(C) diff --git a/pypy/module/cpyext/test/test_cpyext.py b/pypy/module/cpyext/test/test_cpyext.py --- a/pypy/module/cpyext/test/test_cpyext.py +++ b/pypy/module/cpyext/test/test_cpyext.py @@ -744,6 +744,22 @@ print p assert 'py' in p + def test_get_version(self): + mod = self.import_extension('foo', [ + ('get_version', 'METH_NOARGS', + ''' + char* name1 = Py_GetVersion(); + char* name2 = Py_GetVersion(); + if (name1 != name2) + Py_RETURN_FALSE; + return PyString_FromString(name1); + ''' + ), + ]) + p = mod.get_version() + print p + assert 'PyPy' in p + def test_no_double_imports(self): import sys, os try: diff --git a/pypy/module/cpyext/test/test_methodobject.py b/pypy/module/cpyext/test/test_methodobject.py --- a/pypy/module/cpyext/test/test_methodobject.py +++ b/pypy/module/cpyext/test/test_methodobject.py @@ -9,7 +9,7 @@ class AppTestMethodObject(AppTestCpythonExtensionBase): def test_call_METH(self): - mod = self.import_extension('foo', [ + mod = self.import_extension('MyModule', [ ('getarg_O', 'METH_O', ''' Py_INCREF(args); @@ -51,11 +51,23 @@ } ''' ), + ('getModule', 'METH_O', + ''' + if(PyCFunction_Check(args)) { + PyCFunctionObject* func = (PyCFunctionObject*)args; + Py_INCREF(func->m_module); + return func->m_module; + } + else { + Py_RETURN_FALSE; + } + ''' + ), ('isSameFunction', 'METH_O', ''' PyCFunction ptr = PyCFunction_GetFunction(args); if (!ptr) return NULL; - if (ptr == foo_getarg_O) + if (ptr == MyModule_getarg_O) Py_RETURN_TRUE; else Py_RETURN_FALSE; @@ -76,6 +88,7 @@ assert mod.getarg_OLD(1, 2) == (1, 2) assert mod.isCFunction(mod.getarg_O) == "getarg_O" + assert mod.getModule(mod.getarg_O) == 'MyModule' assert mod.isSameFunction(mod.getarg_O) raises(TypeError, mod.isSameFunction, 1) diff --git a/pypy/module/cpyext/test/test_object.py b/pypy/module/cpyext/test/test_object.py --- a/pypy/module/cpyext/test/test_object.py +++ b/pypy/module/cpyext/test/test_object.py @@ -191,6 +191,11 @@ assert api.PyObject_Unicode(space.wrap("\xe9")) is None api.PyErr_Clear() + def test_dir(self, space, api): + w_dir = api.PyObject_Dir(space.sys) + assert space.isinstance_w(w_dir, space.w_list) + assert space.is_true(space.contains(w_dir, space.wrap('modules'))) + class AppTestObject(AppTestCpythonExtensionBase): def setup_class(cls): AppTestCpythonExtensionBase.setup_class.im_func(cls) diff --git 
a/pypy/module/cpyext/test/test_pyfile.py b/pypy/module/cpyext/test/test_pyfile.py --- a/pypy/module/cpyext/test/test_pyfile.py +++ b/pypy/module/cpyext/test/test_pyfile.py @@ -1,5 +1,6 @@ from pypy.module.cpyext.api import fopen, fclose, fwrite from pypy.module.cpyext.test.test_api import BaseApiTest +from pypy.module.cpyext.object import Py_PRINT_RAW from pypy.rpython.lltypesystem import rffi, lltype from pypy.tool.udir import udir import pytest @@ -77,3 +78,28 @@ out = out.replace('\r\n', '\n') assert out == "test\n" + def test_file_writeobject(self, space, api, capfd): + w_obj = space.wrap("test\n") + w_stdout = space.sys.get("stdout") + api.PyFile_WriteObject(w_obj, w_stdout, Py_PRINT_RAW) + api.PyFile_WriteObject(w_obj, w_stdout, 0) + space.call_method(w_stdout, "flush") + out, err = capfd.readouterr() + out = out.replace('\r\n', '\n') + assert out == "test\n'test\\n'" + + def test_file_softspace(self, space, api, capfd): + w_stdout = space.sys.get("stdout") + assert api.PyFile_SoftSpace(w_stdout, 1) == 0 + assert api.PyFile_SoftSpace(w_stdout, 0) == 1 + + api.PyFile_SoftSpace(w_stdout, 1) + w_ns = space.newdict() + space.exec_("print 1,", w_ns, w_ns) + space.exec_("print 2,", w_ns, w_ns) + api.PyFile_SoftSpace(w_stdout, 0) + space.exec_("print 3", w_ns, w_ns) + space.call_method(w_stdout, "flush") + out, err = capfd.readouterr() + out = out.replace('\r\n', '\n') + assert out == " 1 23\n" diff --git a/pypy/module/cpyext/test/test_stringobject.py b/pypy/module/cpyext/test/test_stringobject.py --- a/pypy/module/cpyext/test/test_stringobject.py +++ b/pypy/module/cpyext/test/test_stringobject.py @@ -166,6 +166,20 @@ res = module.test_string_format(1, "xyz") assert res == "bla 1 ble xyz\n" + def test_intern_inplace(self): + module = self.import_extension('foo', [ + ("test_intern_inplace", "METH_O", + ''' + PyObject *s = args; + Py_INCREF(s); + PyString_InternInPlace(&s); + return s; + ''' + ) + ]) + # This does not test much, but at least the refcounts are checked. 
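# (Illustrative note, not from the original changeset.)  PyString_InternInPlace
# is documented in the stringobject.py hunk above as reference-count-neutral:
# it may replace *string with an already-interned string of the same value.
# Typical C-side usage, quoted here for reference only:
#
#     PyObject *name = PyString_FromString("attr_name");
#     PyString_InternInPlace(&name);   /* name may now alias the interned copy */
#
# The assertion below only checks the round-tripped value, as the comment
# above notes.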
+ assert module.test_intern_inplace('s') == 's' + class TestString(BaseApiTest): def test_string_resize(self, space, api): py_str = new_empty_str(space, 10) diff --git a/pypy/module/micronumpy/__init__.py b/pypy/module/micronumpy/__init__.py --- a/pypy/module/micronumpy/__init__.py +++ b/pypy/module/micronumpy/__init__.py @@ -95,6 +95,7 @@ ("tan", "tan"), ('bitwise_and', 'bitwise_and'), ('bitwise_or', 'bitwise_or'), + ('bitwise_xor', 'bitwise_xor'), ('bitwise_not', 'invert'), ('isnan', 'isnan'), ('isinf', 'isinf'), diff --git a/pypy/module/micronumpy/interp_boxes.py b/pypy/module/micronumpy/interp_boxes.py --- a/pypy/module/micronumpy/interp_boxes.py +++ b/pypy/module/micronumpy/interp_boxes.py @@ -83,6 +83,8 @@ descr_truediv = _binop_impl("true_divide") descr_mod = _binop_impl("mod") descr_pow = _binop_impl("power") + descr_lshift = _binop_impl("left_shift") + descr_rshift = _binop_impl("right_shift") descr_and = _binop_impl("bitwise_and") descr_or = _binop_impl("bitwise_or") descr_xor = _binop_impl("bitwise_xor") @@ -97,13 +99,31 @@ descr_radd = _binop_right_impl("add") descr_rsub = _binop_right_impl("subtract") descr_rmul = _binop_right_impl("multiply") + descr_rdiv = _binop_right_impl("divide") + descr_rtruediv = _binop_right_impl("true_divide") + descr_rmod = _binop_right_impl("mod") descr_rpow = _binop_right_impl("power") + descr_rlshift = _binop_right_impl("left_shift") + descr_rrshift = _binop_right_impl("right_shift") + descr_rand = _binop_right_impl("bitwise_and") + descr_ror = _binop_right_impl("bitwise_or") + descr_rxor = _binop_right_impl("bitwise_xor") descr_pos = _unaryop_impl("positive") descr_neg = _unaryop_impl("negative") descr_abs = _unaryop_impl("absolute") descr_invert = _unaryop_impl("invert") + def descr_divmod(self, space, w_other): + w_quotient = self.descr_div(space, w_other) + w_remainder = self.descr_mod(space, w_other) + return space.newtuple([w_quotient, w_remainder]) + + def descr_rdivmod(self, space, w_other): + w_quotient = self.descr_rdiv(space, w_other) + w_remainder = self.descr_rmod(space, w_other) + return space.newtuple([w_quotient, w_remainder]) + def item(self, space): return self.get_dtype(space).itemtype.to_builtin_type(space, self) @@ -185,7 +205,10 @@ __div__ = interp2app(W_GenericBox.descr_div), __truediv__ = interp2app(W_GenericBox.descr_truediv), __mod__ = interp2app(W_GenericBox.descr_mod), + __divmod__ = interp2app(W_GenericBox.descr_divmod), __pow__ = interp2app(W_GenericBox.descr_pow), + __lshift__ = interp2app(W_GenericBox.descr_lshift), + __rshift__ = interp2app(W_GenericBox.descr_rshift), __and__ = interp2app(W_GenericBox.descr_and), __or__ = interp2app(W_GenericBox.descr_or), __xor__ = interp2app(W_GenericBox.descr_xor), @@ -193,7 +216,16 @@ __radd__ = interp2app(W_GenericBox.descr_radd), __rsub__ = interp2app(W_GenericBox.descr_rsub), __rmul__ = interp2app(W_GenericBox.descr_rmul), + __rdiv__ = interp2app(W_GenericBox.descr_rdiv), + __rtruediv__ = interp2app(W_GenericBox.descr_rtruediv), + __rmod__ = interp2app(W_GenericBox.descr_rmod), + __rdivmod__ = interp2app(W_GenericBox.descr_rdivmod), __rpow__ = interp2app(W_GenericBox.descr_rpow), + __rlshift__ = interp2app(W_GenericBox.descr_rlshift), + __rrshift__ = interp2app(W_GenericBox.descr_rrshift), + __rand__ = interp2app(W_GenericBox.descr_rand), + __ror__ = interp2app(W_GenericBox.descr_ror), + __rxor__ = interp2app(W_GenericBox.descr_rxor), __eq__ = interp2app(W_GenericBox.descr_eq), __ne__ = interp2app(W_GenericBox.descr_ne), diff --git 
a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -4,17 +4,17 @@ from pypy.interpreter.typedef import TypeDef, GetSetProperty from pypy.module.micronumpy import (interp_ufuncs, interp_dtype, interp_boxes, signature, support, loop) +from pypy.module.micronumpy.appbridge import get_appbridge_cache +from pypy.module.micronumpy.dot import multidim_dot, match_dot_shapes +from pypy.module.micronumpy.interp_iter import (ArrayIterator, + SkipLastAxisIterator, Chunk, ViewIterator) from pypy.module.micronumpy.strides import (calculate_slice_strides, shape_agreement, find_shape_and_elems, get_shape_from_iterable, calc_new_strides, to_coords) -from dot import multidim_dot, match_dot_shapes from pypy.rlib import jit +from pypy.rlib.rstring import StringBuilder from pypy.rpython.lltypesystem import lltype, rffi from pypy.tool.sourcetools import func_with_new_name -from pypy.rlib.rstring import StringBuilder -from pypy.module.micronumpy.interp_iter import (ArrayIterator, - SkipLastAxisIterator, Chunk, ViewIterator) -from pypy.module.micronumpy.appbridge import get_appbridge_cache count_driver = jit.JitDriver( @@ -101,8 +101,14 @@ descr_sub = _binop_impl("subtract") descr_mul = _binop_impl("multiply") descr_div = _binop_impl("divide") + descr_truediv = _binop_impl("true_divide") + descr_mod = _binop_impl("mod") descr_pow = _binop_impl("power") - descr_mod = _binop_impl("mod") + descr_lshift = _binop_impl("left_shift") + descr_rshift = _binop_impl("right_shift") + descr_and = _binop_impl("bitwise_and") + descr_or = _binop_impl("bitwise_or") + descr_xor = _binop_impl("bitwise_xor") descr_eq = _binop_impl("equal") descr_ne = _binop_impl("not_equal") @@ -111,8 +117,10 @@ descr_gt = _binop_impl("greater") descr_ge = _binop_impl("greater_equal") - descr_and = _binop_impl("bitwise_and") - descr_or = _binop_impl("bitwise_or") + def descr_divmod(self, space, w_other): + w_quotient = self.descr_div(space, w_other) + w_remainder = self.descr_mod(space, w_other) + return space.newtuple([w_quotient, w_remainder]) def _binop_right_impl(ufunc_name): def impl(self, space, w_other): @@ -127,8 +135,19 @@ descr_rsub = _binop_right_impl("subtract") descr_rmul = _binop_right_impl("multiply") descr_rdiv = _binop_right_impl("divide") + descr_rtruediv = _binop_right_impl("true_divide") + descr_rmod = _binop_right_impl("mod") descr_rpow = _binop_right_impl("power") - descr_rmod = _binop_right_impl("mod") + descr_rlshift = _binop_right_impl("left_shift") + descr_rrshift = _binop_right_impl("right_shift") + descr_rand = _binop_right_impl("bitwise_and") + descr_ror = _binop_right_impl("bitwise_or") + descr_rxor = _binop_right_impl("bitwise_xor") + + def descr_rdivmod(self, space, w_other): + w_quotient = self.descr_rdiv(space, w_other) + w_remainder = self.descr_rmod(space, w_other) + return space.newtuple([w_quotient, w_remainder]) def _reduce_ufunc_impl(ufunc_name, promote_to_largest=False): def impl(self, space, w_axis=None): @@ -1227,21 +1246,36 @@ __pos__ = interp2app(BaseArray.descr_pos), __neg__ = interp2app(BaseArray.descr_neg), __abs__ = interp2app(BaseArray.descr_abs), + __invert__ = interp2app(BaseArray.descr_invert), __nonzero__ = interp2app(BaseArray.descr_nonzero), __add__ = interp2app(BaseArray.descr_add), __sub__ = interp2app(BaseArray.descr_sub), __mul__ = interp2app(BaseArray.descr_mul), __div__ = interp2app(BaseArray.descr_div), + __truediv__ = 
interp2app(BaseArray.descr_truediv), + __mod__ = interp2app(BaseArray.descr_mod), + __divmod__ = interp2app(BaseArray.descr_divmod), __pow__ = interp2app(BaseArray.descr_pow), - __mod__ = interp2app(BaseArray.descr_mod), + __lshift__ = interp2app(BaseArray.descr_lshift), + __rshift__ = interp2app(BaseArray.descr_rshift), + __and__ = interp2app(BaseArray.descr_and), + __or__ = interp2app(BaseArray.descr_or), + __xor__ = interp2app(BaseArray.descr_xor), __radd__ = interp2app(BaseArray.descr_radd), __rsub__ = interp2app(BaseArray.descr_rsub), __rmul__ = interp2app(BaseArray.descr_rmul), __rdiv__ = interp2app(BaseArray.descr_rdiv), + __rtruediv__ = interp2app(BaseArray.descr_rtruediv), + __rmod__ = interp2app(BaseArray.descr_rmod), + __rdivmod__ = interp2app(BaseArray.descr_rdivmod), __rpow__ = interp2app(BaseArray.descr_rpow), - __rmod__ = interp2app(BaseArray.descr_rmod), + __rlshift__ = interp2app(BaseArray.descr_rlshift), + __rrshift__ = interp2app(BaseArray.descr_rrshift), + __rand__ = interp2app(BaseArray.descr_rand), + __ror__ = interp2app(BaseArray.descr_ror), + __rxor__ = interp2app(BaseArray.descr_rxor), __eq__ = interp2app(BaseArray.descr_eq), __ne__ = interp2app(BaseArray.descr_ne), @@ -1250,10 +1284,6 @@ __gt__ = interp2app(BaseArray.descr_gt), __ge__ = interp2app(BaseArray.descr_ge), - __and__ = interp2app(BaseArray.descr_and), - __or__ = interp2app(BaseArray.descr_or), - __invert__ = interp2app(BaseArray.descr_invert), - __repr__ = interp2app(BaseArray.descr_repr), __str__ = interp2app(BaseArray.descr_str), __array_interface__ = GetSetProperty(BaseArray.descr_array_iface), @@ -1267,6 +1297,7 @@ nbytes = GetSetProperty(BaseArray.descr_get_nbytes), T = GetSetProperty(BaseArray.descr_get_transpose), + transpose = interp2app(BaseArray.descr_get_transpose), flat = GetSetProperty(BaseArray.descr_get_flatiter), ravel = interp2app(BaseArray.descr_ravel), item = interp2app(BaseArray.descr_item), diff --git a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py --- a/pypy/module/micronumpy/interp_ufuncs.py +++ b/pypy/module/micronumpy/interp_ufuncs.py @@ -392,6 +392,8 @@ ("true_divide", "div", 2, {"promote_to_float": True}), ("mod", "mod", 2, {"promote_bools": True}), ("power", "pow", 2, {"promote_bools": True}), + ("left_shift", "lshift", 2, {"int_only": True}), + ("right_shift", "rshift", 2, {"int_only": True}), ("equal", "eq", 2, {"comparison_func": True}), ("not_equal", "ne", 2, {"comparison_func": True}), diff --git a/pypy/module/micronumpy/test/test_dtypes.py b/pypy/module/micronumpy/test/test_dtypes.py --- a/pypy/module/micronumpy/test/test_dtypes.py +++ b/pypy/module/micronumpy/test/test_dtypes.py @@ -406,15 +406,28 @@ from operator import truediv from _numpypy import float64, int_, True_, False_ + assert 5 / int_(2) == int_(2) assert truediv(int_(3), int_(2)) == float64(1.5) + assert truediv(3, int_(2)) == float64(1.5) + assert int_(8) % int_(3) == int_(2) + assert 8 % int_(3) == int_(2) + assert divmod(int_(8), int_(3)) == (int_(2), int_(2)) + assert divmod(8, int_(3)) == (int_(2), int_(2)) assert 2 ** int_(3) == int_(8) + assert int_(3) << int_(2) == int_(12) + assert 3 << int_(2) == int_(12) + assert int_(8) >> int_(2) == int_(2) + assert 8 >> int_(2) == int_(2) assert int_(3) & int_(1) == int_(1) - raises(TypeError, lambda: float64(3) & 1) - assert int_(8) % int_(3) == int_(2) + assert 2 & int_(3) == int_(2) assert int_(2) | int_(1) == int_(3) + assert 2 | int_(1) == int_(3) assert int_(3) ^ int_(5) == int_(6) assert True_ ^ False_ is True_ + 
assert 5 ^ int_(3) == int_(6) assert +int_(3) == int_(3) assert ~int_(3) == int_(-4) + raises(TypeError, lambda: float64(3) & 1) + diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -625,6 +625,59 @@ for i in range(5): assert b[i] == i / 5.0 + def test_truediv(self): + from operator import truediv + from _numpypy import arange + + assert (truediv(arange(5), 2) == [0., .5, 1., 1.5, 2.]).all() + assert (truediv(2, arange(3)) == [float("inf"), 2., 1.]).all() + + def test_divmod(self): + from _numpypy import arange + + a, b = divmod(arange(10), 3) + assert (a == [0, 0, 0, 1, 1, 1, 2, 2, 2, 3]).all() + assert (b == [0, 1, 2, 0, 1, 2, 0, 1, 2, 0]).all() + + def test_rdivmod(self): + from _numpypy import arange + + a, b = divmod(3, arange(1, 5)) + assert (a == [3, 1, 1, 0]).all() + assert (b == [0, 1, 0, 3]).all() + + def test_lshift(self): + from _numpypy import array + + a = array([0, 1, 2, 3]) + assert (a << 2 == [0, 4, 8, 12]).all() + a = array([True, False]) + assert (a << 2 == [4, 0]).all() + a = array([1.0]) + raises(TypeError, lambda: a << 2) + + def test_rlshift(self): + from _numpypy import arange + + a = arange(3) + assert (2 << a == [2, 4, 8]).all() + + def test_rshift(self): + from _numpypy import arange, array + + a = arange(10) + assert (a >> 2 == [0, 0, 0, 0, 1, 1, 1, 1, 2, 2]).all() + a = array([True, False]) + assert (a >> 1 == [0, 0]).all() + a = arange(3, dtype=float) + raises(TypeError, lambda: a >> 1) + + def test_rrshift(self): + from _numpypy import arange + + a = arange(5) + assert (2 >> a == [2, 1, 0, 0, 0]).all() + def test_pow(self): from _numpypy import array a = array(range(5), float) @@ -678,6 +731,30 @@ for i in range(5): assert b[i] == i % 2 + def test_rand(self): + from _numpypy import arange + + a = arange(5) + assert (3 & a == [0, 1, 2, 3, 0]).all() + + def test_ror(self): + from _numpypy import arange + + a = arange(5) + assert (3 | a == [3, 3, 3, 3, 7]).all() + + def test_xor(self): + from _numpypy import arange + + a = arange(5) + assert (a ^ 3 == [3, 2, 1, 0, 7]).all() + + def test_rxor(self): + from _numpypy import arange + + a = arange(5) + assert (3 ^ a == [3, 2, 1, 0, 7]).all() + def test_pos(self): from _numpypy import array a = array([1., -2., 3., -4., -5.]) @@ -1410,6 +1487,7 @@ a = array((range(10), range(20, 30))) b = a.T assert(b[:, 0] == a[0, :]).all() + assert (a.transpose() == b).all() def test_flatiter(self): from _numpypy import array, flatiter, arange diff --git a/pypy/module/micronumpy/test/test_ufuncs.py b/pypy/module/micronumpy/test/test_ufuncs.py --- a/pypy/module/micronumpy/test/test_ufuncs.py +++ b/pypy/module/micronumpy/test/test_ufuncs.py @@ -368,14 +368,14 @@ assert b.shape == (1, 4) assert (add.reduce(a, 0, keepdims=True) == [12, 15, 18, 21]).all() - def test_bitwise(self): - from _numpypy import bitwise_and, bitwise_or, arange, array + from _numpypy import bitwise_and, bitwise_or, bitwise_xor, arange, array a = arange(6).reshape(2, 3) assert (a & 1 == [[0, 1, 0], [1, 0, 1]]).all() assert (a & 1 == bitwise_and(a, 1)).all() assert (a | 1 == [[1, 1, 3], [3, 5, 5]]).all() assert (a | 1 == bitwise_or(a, 1)).all() + assert (a ^ 3 == bitwise_xor(a, 3)).all() raises(TypeError, 'array([1.0]) & 1') def test_unary_bitops(self): diff --git a/pypy/module/micronumpy/types.py b/pypy/module/micronumpy/types.py --- a/pypy/module/micronumpy/types.py +++ b/pypy/module/micronumpy/types.py @@ 
-295,6 +295,14 @@ v1 *= v1 return res + @simple_binary_op + def lshift(self, v1, v2): + return v1 << v2 + + @simple_binary_op + def rshift(self, v1, v2): + return v1 >> v2 + @simple_unary_op def sign(self, v): if v > 0: diff --git a/pypy/module/pypyjit/__init__.py b/pypy/module/pypyjit/__init__.py --- a/pypy/module/pypyjit/__init__.py +++ b/pypy/module/pypyjit/__init__.py @@ -13,6 +13,7 @@ 'ResOperation': 'interp_resop.WrappedOp', 'DebugMergePoint': 'interp_resop.DebugMergePoint', 'Box': 'interp_resop.WrappedBox', + 'PARAMETER_DOCS': 'space.wrap(pypy.rlib.jit.PARAMETER_DOCS)', } def setup_after_space_initialization(self): diff --git a/pypy/module/pypyjit/test/test_jit_setup.py b/pypy/module/pypyjit/test/test_jit_setup.py --- a/pypy/module/pypyjit/test/test_jit_setup.py +++ b/pypy/module/pypyjit/test/test_jit_setup.py @@ -45,6 +45,12 @@ pypyjit.set_compile_hook(None) pypyjit.set_param('default') + def test_doc(self): + import pypyjit + d = pypyjit.PARAMETER_DOCS + assert type(d) is dict + assert 'threshold' in d + def test_interface_residual_call(): space = gettestobjspace(usemodules=['pypyjit']) diff --git a/pypy/module/sys/version.py b/pypy/module/sys/version.py --- a/pypy/module/sys/version.py +++ b/pypy/module/sys/version.py @@ -7,7 +7,7 @@ from pypy.interpreter import gateway #XXX # the release serial 42 is not in range(16) -CPYTHON_VERSION = (2, 7, 1, "final", 42) #XXX # sync patchlevel.h +CPYTHON_VERSION = (2, 7, 2, "final", 42) #XXX # sync patchlevel.h CPYTHON_API_VERSION = 1013 #XXX # sync with include/modsupport.h PYPY_VERSION = (1, 8, 1, "dev", 0) #XXX # sync patchlevel.h diff --git a/pypy/module/test_lib_pypy/test_datetime.py b/pypy/module/test_lib_pypy/test_datetime.py --- a/pypy/module/test_lib_pypy/test_datetime.py +++ b/pypy/module/test_lib_pypy/test_datetime.py @@ -1,7 +1,10 @@ """Additional tests for datetime.""" +import py + import time import datetime +import copy import os def test_utcfromtimestamp(): @@ -22,3 +25,22 @@ del os.environ["TZ"] else: os.environ["TZ"] = prev_tz + +def test_utcfromtimestamp_microsecond(): + dt = datetime.datetime.utcfromtimestamp(0) + assert isinstance(dt.microsecond, int) + + +def test_integer_args(): + with py.test.raises(TypeError): + datetime.datetime(10, 10, 10.) + with py.test.raises(TypeError): + datetime.datetime(10, 10, 10, 10, 10.) + with py.test.raises(TypeError): + datetime.datetime(10, 10, 10, 10, 10, 10.) 
+ +def test_utcnow_microsecond(): + dt = datetime.datetime.utcnow() + assert type(dt.microsecond) is int + + copy.copy(dt) \ No newline at end of file diff --git a/pypy/rlib/libffi.py b/pypy/rlib/libffi.py --- a/pypy/rlib/libffi.py +++ b/pypy/rlib/libffi.py @@ -238,7 +238,7 @@ self = jit.promote(self) if argchain.numargs != len(self.argtypes): raise TypeError, 'Wrong number of arguments: %d expected, got %d' %\ - (argchain.numargs, len(self.argtypes)) + (len(self.argtypes), argchain.numargs) ll_args = self._prepare() i = 0 arg = argchain.first diff --git a/pypy/tool/release/package.py b/pypy/tool/release/package.py --- a/pypy/tool/release/package.py +++ b/pypy/tool/release/package.py @@ -60,7 +60,8 @@ if sys.platform == 'win32': # Can't rename a DLL: it is always called 'libpypy-c.dll' for extra in ['libpypy-c.dll', - 'libexpat.dll', 'sqlite3.dll', 'msvcr90.dll']: + 'libexpat.dll', 'sqlite3.dll', 'msvcr90.dll', + 'libeay32.dll', 'ssleay32.dll']: p = pypy_c.dirpath().join(extra) if not p.check(): p = py.path.local.sysfind(extra) @@ -125,7 +126,7 @@ zf.close() else: archive = str(builddir.join(name + '.tar.bz2')) - if sys.platform == 'darwin': + if sys.platform == 'darwin' or sys.platform.startswith('freebsd'): e = os.system('tar --numeric-owner -cvjf ' + archive + " " + name) else: e = os.system('tar --owner=root --group=root --numeric-owner -cvjf ' + archive + " " + name) diff --git a/pypy/translator/c/database.py b/pypy/translator/c/database.py --- a/pypy/translator/c/database.py +++ b/pypy/translator/c/database.py @@ -28,11 +28,13 @@ gctransformer = None def __init__(self, translator=None, standalone=False, + cpython_extension=False, gcpolicyclass=None, thread_enabled=False, sandbox=False): self.translator = translator self.standalone = standalone + self.cpython_extension = cpython_extension self.sandbox = sandbox if gcpolicyclass is None: gcpolicyclass = gc.RefcountingGcPolicy diff --git a/pypy/translator/c/dlltool.py b/pypy/translator/c/dlltool.py --- a/pypy/translator/c/dlltool.py +++ b/pypy/translator/c/dlltool.py @@ -14,11 +14,14 @@ CBuilder.__init__(self, *args, **kwds) def getentrypointptr(self): + entrypoints = [] bk = self.translator.annotator.bookkeeper - graphs = [bk.getdesc(f).cachedgraph(None) for f, _ in self.functions] - return [getfunctionptr(graph) for graph in graphs] + for f, _ in self.functions: + graph = bk.getdesc(f).getuniquegraph() + entrypoints.append(getfunctionptr(graph)) + return entrypoints - def gen_makefile(self, targetdir): + def gen_makefile(self, targetdir, exe_name=None): pass # XXX finish def compile(self): diff --git a/pypy/translator/c/extfunc.py b/pypy/translator/c/extfunc.py --- a/pypy/translator/c/extfunc.py +++ b/pypy/translator/c/extfunc.py @@ -106,7 +106,7 @@ yield ('RPYTHON_EXCEPTION_MATCH', exceptiondata.fn_exception_match) yield ('RPYTHON_TYPE_OF_EXC_INST', exceptiondata.fn_type_of_exc_inst) yield ('RPYTHON_RAISE_OSERROR', exceptiondata.fn_raise_OSError) - if not db.standalone: + if db.cpython_extension: yield ('RPYTHON_PYEXCCLASS2EXC', exceptiondata.fn_pyexcclass2exc) yield ('RPyExceptionOccurred1', exctransformer.rpyexc_occured_ptr.value) diff --git a/pypy/translator/c/gcc/trackgcroot.py b/pypy/translator/c/gcc/trackgcroot.py --- a/pypy/translator/c/gcc/trackgcroot.py +++ b/pypy/translator/c/gcc/trackgcroot.py @@ -471,19 +471,22 @@ return [] IGNORE_OPS_WITH_PREFIXES = dict.fromkeys([ - 'cmp', 'test', 'set', 'sahf', 'lahf', 'cltd', 'cld', 'std', - 'rep', 'movs', 'lods', 'stos', 'scas', 'cwtl', 'cwde', 'prefetch', + 'cmp', 'test', 'set', 
'sahf', 'lahf', 'cld', 'std', + 'rep', 'movs', 'lods', 'stos', 'scas', 'cwde', 'prefetch', # floating-point operations cannot produce GC pointers 'f', 'cvt', 'ucomi', 'comi', 'subs', 'subp' , 'adds', 'addp', 'xorp', 'movap', 'movd', 'movlp', 'sqrtsd', 'movhpd', 'mins', 'minp', 'maxs', 'maxp', 'unpck', 'pxor', 'por', # sse2 + 'shufps', 'shufpd', # arithmetic operations should not produce GC pointers 'inc', 'dec', 'not', 'neg', 'or', 'and', 'sbb', 'adc', 'shl', 'shr', 'sal', 'sar', 'rol', 'ror', 'mul', 'imul', 'div', 'idiv', 'bswap', 'bt', 'rdtsc', 'punpck', 'pshufd', 'pcmp', 'pand', 'psllw', 'pslld', 'psllq', 'paddq', 'pinsr', + # sign-extending moves should not produce GC pointers + 'cbtw', 'cwtl', 'cwtd', 'cltd', 'cltq', 'cqto', # zero-extending moves should not produce GC pointers 'movz', # locked operations should not move GC pointers, at least so far diff --git a/pypy/translator/c/genc.py b/pypy/translator/c/genc.py --- a/pypy/translator/c/genc.py +++ b/pypy/translator/c/genc.py @@ -111,6 +111,7 @@ _compiled = False modulename = None split = False + cpython_extension = False def __init__(self, translator, entrypoint, config, gcpolicy=None, secondary_entrypoints=()): @@ -138,6 +139,7 @@ raise NotImplementedError("--gcrootfinder=asmgcc requires standalone") db = LowLevelDatabase(translator, standalone=self.standalone, + cpython_extension=self.cpython_extension, gcpolicyclass=gcpolicyclass, thread_enabled=self.config.translation.thread, sandbox=self.config.translation.sandbox) @@ -236,6 +238,8 @@ CBuilder.have___thread = self.translator.platform.check___thread() if not self.standalone: assert not self.config.translation.instrument + if self.cpython_extension: + defines['PYPY_CPYTHON_EXTENSION'] = 1 else: defines['PYPY_STANDALONE'] = db.get(pf) if self.config.translation.instrument: @@ -307,13 +311,18 @@ class CExtModuleBuilder(CBuilder): standalone = False + cpython_extension = True _module = None _wrapper = None def get_eci(self): from distutils import sysconfig python_inc = sysconfig.get_python_inc() - eci = ExternalCompilationInfo(include_dirs=[python_inc]) + eci = ExternalCompilationInfo( + include_dirs=[python_inc], + includes=["Python.h", + ], + ) return eci.merge(CBuilder.get_eci(self)) def getentrypointptr(self): # xxx diff --git a/pypy/translator/c/src/exception.h b/pypy/translator/c/src/exception.h --- a/pypy/translator/c/src/exception.h +++ b/pypy/translator/c/src/exception.h @@ -2,7 +2,7 @@ /************************************************************/ /*** C header subsection: exceptions ***/ -#if !defined(PYPY_STANDALONE) && !defined(PYPY_NOT_MAIN_FILE) +#if defined(PYPY_CPYTHON_EXTENSION) && !defined(PYPY_NOT_MAIN_FILE) PyObject *RPythonError; #endif @@ -74,7 +74,7 @@ RPyRaiseException(RPYTHON_TYPE_OF_EXC_INST(rexc), rexc); } -#ifndef PYPY_STANDALONE +#ifdef PYPY_CPYTHON_EXTENSION void RPyConvertExceptionFromCPython(void) { /* convert the CPython exception to an RPython one */ diff --git a/pypy/translator/c/src/g_include.h b/pypy/translator/c/src/g_include.h --- a/pypy/translator/c/src/g_include.h +++ b/pypy/translator/c/src/g_include.h @@ -2,7 +2,7 @@ /************************************************************/ /*** C header file for code produced by genc.py ***/ -#ifndef PYPY_STANDALONE +#ifdef PYPY_CPYTHON_EXTENSION # include "Python.h" # include "compile.h" # include "frameobject.h" diff --git a/pypy/translator/c/src/g_prerequisite.h b/pypy/translator/c/src/g_prerequisite.h --- a/pypy/translator/c/src/g_prerequisite.h +++ b/pypy/translator/c/src/g_prerequisite.h @@ 
-5,8 +5,6 @@ #ifdef PYPY_STANDALONE # include "src/commondefs.h" -#else -# include "Python.h" #endif #ifdef _WIN32 diff --git a/pypy/translator/c/src/pyobj.h b/pypy/translator/c/src/pyobj.h --- a/pypy/translator/c/src/pyobj.h +++ b/pypy/translator/c/src/pyobj.h @@ -2,7 +2,7 @@ /************************************************************/ /*** C header subsection: untyped operations ***/ /*** as OP_XXX() macros calling the CPython API ***/ - +#ifdef PYPY_CPYTHON_EXTENSION #define op_bool(r,what) { \ int _retval = what; \ @@ -261,3 +261,5 @@ } #endif + +#endif /* PYPY_CPYTHON_EXTENSION */ diff --git a/pypy/translator/c/src/support.h b/pypy/translator/c/src/support.h --- a/pypy/translator/c/src/support.h +++ b/pypy/translator/c/src/support.h @@ -104,7 +104,7 @@ # define RPyBareItem(array, index) ((array)[index]) #endif -#ifndef PYPY_STANDALONE +#ifdef PYPY_CPYTHON_EXTENSION /* prototypes */ diff --git a/pypy/translator/c/test/test_dlltool.py b/pypy/translator/c/test/test_dlltool.py --- a/pypy/translator/c/test/test_dlltool.py +++ b/pypy/translator/c/test/test_dlltool.py @@ -2,7 +2,6 @@ from pypy.translator.c.dlltool import DLLDef from ctypes import CDLL import py -py.test.skip("fix this if needed") class TestDLLTool(object): def test_basic(self): @@ -16,8 +15,8 @@ d = DLLDef('lib', [(f, [int]), (b, [int])]) so = d.compile() dll = CDLL(str(so)) - assert dll.f(3) == 3 - assert dll.b(10) == 12 + assert dll.pypy_g_f(3) == 3 + assert dll.pypy_g_b(10) == 12 def test_split_criteria(self): def f(x): @@ -28,4 +27,5 @@ d = DLLDef('lib', [(f, [int]), (b, [int])]) so = d.compile() - assert py.path.local(so).dirpath().join('implement.c').check() + dirpath = py.path.local(so).dirpath() + assert dirpath.join('translator_c_test_test_dlltool.c').check() diff --git a/pypy/translator/driver.py b/pypy/translator/driver.py --- a/pypy/translator/driver.py +++ b/pypy/translator/driver.py @@ -331,6 +331,7 @@ raise Exception("stand-alone program entry point must return an " "int (and not, e.g., None or always raise an " "exception).") + annotator.complete() annotator.simplify() return s diff --git a/pypy/translator/goal/app_main.py b/pypy/translator/goal/app_main.py --- a/pypy/translator/goal/app_main.py +++ b/pypy/translator/goal/app_main.py @@ -139,8 +139,14 @@ items = pypyjit.defaults.items() items.sort() for key, value in items: - print ' --jit %s=N %s%s (default %s)' % ( - key, ' '*(18-len(key)), pypyjit.PARAMETER_DOCS[key], value) + prefix = ' --jit %s=N %s' % (key, ' '*(18-len(key))) + doc = '%s (default %s)' % (pypyjit.PARAMETER_DOCS[key], value) + while len(doc) > 51: + i = doc[:51].rfind(' ') + print prefix + doc[:i] + doc = doc[i+1:] + prefix = ' '*len(prefix) + print prefix + doc print ' --jit off turn off the JIT' def print_version(*args): From noreply at buildbot.pypy.org Wed Feb 15 18:35:32 2012 From: noreply at buildbot.pypy.org (bivab) Date: Wed, 15 Feb 2012 18:35:32 +0100 (CET) Subject: [pypy-commit] pypy arm-backend-2: enable tests that check and require gcremovetypeptr Message-ID: <20120215173532.0E0958204C@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: arm-backend-2 Changeset: r52518:70dd96853661 Date: 2012-02-15 11:22 +0100 http://bitbucket.org/pypy/pypy/changeset/70dd96853661/ Log: enable tests that check and require gcremovetypeptr diff --git a/pypy/jit/backend/arm/test/test_zrpy_gc.py b/pypy/jit/backend/arm/test/test_zrpy_gc.py --- a/pypy/jit/backend/arm/test/test_zrpy_gc.py +++ b/pypy/jit/backend/arm/test/test_zrpy_gc.py @@ -94,9 +94,8 @@ # t = TranslationContext() 
t.config.translation.gc = gc - # The ARM backend does not support this option - #if gc != 'boehm': - # t.config.translation.gcremovetypeptr = True + if gc != 'boehm': + t.config.translation.gcremovetypeptr = True for name, value in kwds.items(): setattr(t.config.translation, name, value) ann = t.buildannotator(policy=annpolicy.StrictAnnotatorPolicy()) diff --git a/pypy/jit/backend/arm/test/test_ztranslation.py b/pypy/jit/backend/arm/test/test_ztranslation.py --- a/pypy/jit/backend/arm/test/test_ztranslation.py +++ b/pypy/jit/backend/arm/test/test_ztranslation.py @@ -173,3 +173,87 @@ bound = res & ~255 assert 1024 <= bound <= 131072 assert bound & (bound-1) == 0 # a power of two + +class TestTranslationRemoveTypePtrARM(CCompiledMixin): + CPUClass = getcpuclass() + + def _get_TranslationContext(self): + t = TranslationContext() + t.config.translation.gc = DEFL_GC # 'hybrid' or 'minimark' + t.config.translation.gcrootfinder = 'shadowstack' + t.config.translation.list_comprehension_operations = True + t.config.translation.gcremovetypeptr = True + return t + + def test_external_exception_handling_translates(self): + jitdriver = JitDriver(greens = [], reds = ['n', 'total']) + + class ImDone(Exception): + def __init__(self, resvalue): + self.resvalue = resvalue + + @dont_look_inside + def f(x, total): + if x <= 30: + raise ImDone(total * 10) + if x > 200: + return 2 + raise ValueError + @dont_look_inside + def g(x): + if x > 150: + raise ValueError + return 2 + class Base: + def meth(self): + return 2 + class Sub(Base): + def meth(self): + return 1 + @dont_look_inside + def h(x): + if x < 20000: + return Sub() + else: + return Base() + def myportal(i): + set_param(jitdriver, "threshold", 3) + set_param(jitdriver, "trace_eagerness", 2) + total = 0 + n = i + while True: + jitdriver.can_enter_jit(n=n, total=total) + jitdriver.jit_merge_point(n=n, total=total) + try: + total += f(n, total) + except ValueError: + total += 1 + try: + total += g(n) + except ValueError: + total -= 1 + n -= h(n).meth() # this is to force a GUARD_CLASS + def main(i): + try: + myportal(i) + except ImDone, e: + return e.resvalue + + # XXX custom fishing, depends on the exact env var and format + logfile = udir.join('test_ztranslation.log') + os.environ['PYPYLOG'] = 'jit-log-opt:%s' % (logfile,) + try: + res = self.meta_interp(main, [400]) + assert res == main(400) + finally: + del os.environ['PYPYLOG'] + + guard_class = 0 + for line in open(str(logfile)): + if 'guard_class' in line: + guard_class += 1 + # if we get many more guard_classes, it means that we generate + # guards that always fail (the following assert's original purpose + # is to catch the following case: each GUARD_CLASS is misgenerated + # and always fails with "gcremovetypeptr") + assert 0 < guard_class < 10 From noreply at buildbot.pypy.org Wed Feb 15 18:35:33 2012 From: noreply at buildbot.pypy.org (bivab) Date: Wed, 15 Feb 2012 18:35:33 +0100 (CET) Subject: [pypy-commit] pypy arm-backend-2: implement and enable gcremovetypeptr support in the ARM backend Message-ID: <20120215173533.986F28204C@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: arm-backend-2 Changeset: r52519:b1614ce9d2cb Date: 2012-02-15 18:35 +0100 http://bitbucket.org/pypy/pypy/changeset/b1614ce9d2cb/ Log: implement and enable gcremovetypeptr support in the ARM backend diff --git a/pypy/jit/backend/arm/opassembler.py b/pypy/jit/backend/arm/opassembler.py --- a/pypy/jit/backend/arm/opassembler.py +++ b/pypy/jit/backend/arm/opassembler.py @@ -261,7 +261,6 @@ def 
emit_op_guard_overflow(self, op, arglocs, regalloc, fcond): return self._emit_guard(op, arglocs, c.VS, save_exc=False) - # from ../x86/assembler.py:1265 def emit_op_guard_class(self, op, arglocs, regalloc, fcond): self._cmp_guard_class(op, arglocs, regalloc, fcond) self._emit_guard(op, arglocs[3:], c.EQ, save_exc=False) @@ -279,8 +278,12 @@ self.mc.LDR_ri(r.ip.value, locs[0].value, offset.value, cond=fcond) self.mc.CMP_rr(r.ip.value, locs[1].value, cond=fcond) else: - raise NotImplementedError - # XXX port from x86 backend once gc support is in place + typeid = locs[1] + self.mc.LDRH_ri(r.ip.value, locs[0].value, cond=fcond) + if typeid.is_imm(): + self.mc.CMP_ri(r.ip.value, typeid.value, cond=fcond) + else: + self.mc.CMP_rr(r.ip.value, typeid.value, cond=fcond) def emit_op_guard_not_invalidated(self, op, locs, regalloc, fcond): return self._emit_guard(op, locs, fcond, save_exc=False, diff --git a/pypy/jit/backend/arm/regalloc.py b/pypy/jit/backend/arm/regalloc.py --- a/pypy/jit/backend/arm/regalloc.py +++ b/pypy/jit/backend/arm/regalloc.py @@ -22,7 +22,8 @@ from pypy.jit.metainterp.resoperation import rop from pypy.jit.backend.llsupport.descr import ArrayDescr from pypy.jit.backend.llsupport import symbolic -from pypy.rpython.lltypesystem import lltype, rffi, rstr +from pypy.rpython.lltypesystem import lltype, rffi, rstr, llmemory +from pypy.rpython.lltypesystem.lloperation import llop from pypy.jit.codewriter.effectinfo import EffectInfo from pypy.jit.backend.llsupport.descr import unpack_arraydescr from pypy.jit.backend.llsupport.descr import unpack_fielddescr @@ -653,15 +654,44 @@ boxes = op.getarglist() x = self._ensure_value_is_boxed(boxes[0], boxes) - y = self.get_scratch_reg(INT, forbidden_vars=boxes) y_val = rffi.cast(lltype.Signed, op.getarg(1).getint()) - self.assembler.load(y, imm(y_val)) + + arglocs = [x, None, None] offset = self.cpu.vtable_offset - assert offset is not None - assert check_imm_arg(offset) - offset_loc = imm(offset) - arglocs = self._prepare_guard(op, [x, y, offset_loc]) + if offset is not None: + y = self.get_scratch_reg(INT, forbidden_vars=boxes) + self.assembler.load(y, imm(y_val)) + + assert check_imm_arg(offset) + offset_loc = imm(offset) + + arglocs[1] = y + arglocs[2] = offset_loc + else: + # XXX hard-coded assumption: to go from an object to its class + # we use the following algorithm: + # - read the typeid from mem(locs[0]), i.e. 
at offset 0 + # - keep the lower 16 bits read there + # - multiply by 4 and use it as an offset in type_info_group + # - add 16 bytes, to go past the TYPE_INFO structure + classptr = y_val + # here, we have to go back from 'classptr' to the value expected + # from reading the 16 bits in the object header + from pypy.rpython.memory.gctypelayout import GCData + sizeof_ti = rffi.sizeof(GCData.TYPE_INFO) + type_info_group = llop.gc_get_type_info_group(llmemory.Address) + type_info_group = rffi.cast(lltype.Signed, type_info_group) + expected_typeid = classptr - sizeof_ti - type_info_group + expected_typeid >>= 2 + if check_imm_arg(expected_typeid): + arglocs[1] = imm(expected_typeid) + else: + y = self.get_scratch_reg(INT, forbidden_vars=boxes) + self.assembler.load(y, imm(expected_typeid)) + arglocs[1] = y + + return self._prepare_guard(op, arglocs) return arglocs diff --git a/pypy/jit/backend/arm/runner.py b/pypy/jit/backend/arm/runner.py --- a/pypy/jit/backend/arm/runner.py +++ b/pypy/jit/backend/arm/runner.py @@ -14,9 +14,6 @@ gcdescr=None): if gcdescr is not None: gcdescr.force_index_ofs = FORCE_INDEX_OFS - # XXX for now the arm backend does not support the gcremovetypeptr - # translation option - assert gcdescr.config.translation.gcremovetypeptr is False AbstractLLCPU.__init__(self, rtyper, stats, opts, translate_support_code, gcdescr) From noreply at buildbot.pypy.org Wed Feb 15 18:56:03 2012 From: noreply at buildbot.pypy.org (fijal) Date: Wed, 15 Feb 2012 18:56:03 +0100 (CET) Subject: [pypy-commit] pypy backend-vector-ops: a test and a fix Message-ID: <20120215175603.516F08204C@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: backend-vector-ops Changeset: r52520:d484c26c1eee Date: 2012-02-15 19:54 +0200 http://bitbucket.org/pypy/pypy/changeset/d484c26c1eee/ Log: a test and a fix diff --git a/pypy/jit/metainterp/optimizeopt/test/test_vectorize.py b/pypy/jit/metainterp/optimizeopt/test/test_vectorize.py --- a/pypy/jit/metainterp/optimizeopt/test/test_vectorize.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_vectorize.py @@ -271,5 +271,19 @@ ops = """ [p0, p1, p2, i0, i1, i2] call(0, p0, i0, descr=assert_aligned) + call(0, p1, i1, descr=assert_aligned) f0 = getarrayitem_raw(p0, i0, descr=arraydescr) - xxx + f1 = getarrayitem_raw(p1, i1, descr=arraydescr) + f2 = float_add(f0, f1) + i3 = cast_float_to_int(f2) + finish(p0, p1, p2, i0, i1, i3) + """ + expected = """ + [p0, p1, p2, i0, i1, i2] + f0 = getarrayitem_raw(p0, i0, descr=arraydescr) + f1 = getarrayitem_raw(p1, i1, descr=arraydescr) + f2 = float_add(f0, f1) + i3 = cast_float_to_int(f2) + finish(p0, p1, p2, i0, i1, i3) + """ + self.optimize_loop(ops, expected) diff --git a/pypy/jit/metainterp/optimizeopt/vectorize.py b/pypy/jit/metainterp/optimizeopt/vectorize.py --- a/pypy/jit/metainterp/optimizeopt/vectorize.py +++ b/pypy/jit/metainterp/optimizeopt/vectorize.py @@ -195,7 +195,11 @@ elif op.is_always_pure(): # in theory no side effect ops, but stuff like malloc # can go in the way - pass + # we also need to keep track of stuff that can go into those + for box in op.getarglist(): + if self.getvalue(box) in self.track: + self.reset() + break else: self.reset() self.emit_operation(op) From noreply at buildbot.pypy.org Wed Feb 15 18:56:04 2012 From: noreply at buildbot.pypy.org (fijal) Date: Wed, 15 Feb 2012 18:56:04 +0100 (CET) Subject: [pypy-commit] pypy backend-vector-ops: add a passing test Message-ID: <20120215175604.770718204C@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: backend-vector-ops 
Changeset: r52521:f534ba1deb98 Date: 2012-02-15 19:55 +0200 http://bitbucket.org/pypy/pypy/changeset/f534ba1deb98/ Log: add a passing test diff --git a/pypy/jit/metainterp/optimizeopt/test/test_vectorize.py b/pypy/jit/metainterp/optimizeopt/test/test_vectorize.py --- a/pypy/jit/metainterp/optimizeopt/test/test_vectorize.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_vectorize.py @@ -287,3 +287,27 @@ finish(p0, p1, p2, i0, i1, i3) """ self.optimize_loop(ops, expected) + + def test_force_by_box_usage_2(self): + ops = """ + [p0, p1, p2, i0, i1, i2] + call(0, p0, i0, descr=assert_aligned) + call(0, p1, i1, descr=assert_aligned) + call(0, p2, i2, descr=assert_aligned) + f0 = getarrayitem_raw(p0, i0, descr=arraydescr) + f1 = getarrayitem_raw(p1, i1, descr=arraydescr) + f2 = float_add(f0, f1) + setarrayitem_raw(p2, i2, f2) + i3 = cast_float_to_int(f2) + finish(p0, p1, p2, i0, i1, i3) + """ + expected = """ + [p0, p1, p2, i0, i1, i2] + f0 = getarrayitem_raw(p0, i0, descr=arraydescr) + f1 = getarrayitem_raw(p1, i1, descr=arraydescr) + f2 = float_add(f0, f1) + setarrayitem_raw(p2, i2, f2) + i3 = cast_float_to_int(f2) + finish(p0, p1, p2, i0, i1, i3) + """ + self.optimize_loop(ops, expected) From noreply at buildbot.pypy.org Wed Feb 15 22:35:46 2012 From: noreply at buildbot.pypy.org (fijal) Date: Wed, 15 Feb 2012 22:35:46 +0100 (CET) Subject: [pypy-commit] pypy backend-vector-ops: fix translation Message-ID: <20120215213546.E32538204C@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: backend-vector-ops Changeset: r52522:f83f761e3db5 Date: 2012-02-15 23:35 +0200 http://bitbucket.org/pypy/pypy/changeset/f83f761e3db5/ Log: fix translation diff --git a/pypy/jit/metainterp/optimizeopt/vectorize.py b/pypy/jit/metainterp/optimizeopt/vectorize.py --- a/pypy/jit/metainterp/optimizeopt/vectorize.py +++ b/pypy/jit/metainterp/optimizeopt/vectorize.py @@ -109,7 +109,7 @@ def optimize_GETARRAYITEM_RAW(self, op): arr = self.getvalue(op.getarg(0)) index = self.getvalue(op.getarg(1)) - track = self.tracked_indexes.get(index) + track = self.tracked_indexes.get(index, None) if track is None: self.emit_operation(op) else: From noreply at buildbot.pypy.org Wed Feb 15 22:44:23 2012 From: noreply at buildbot.pypy.org (mattip) Date: Wed, 15 Feb 2012 22:44:23 +0100 (CET) Subject: [pypy-commit] pypy numpypy-out: most tests pass, translation demands assert to be removed later Message-ID: <20120215214423.B0BF48204C@wyvern.cs.uni-duesseldorf.de> Author: mattip Branch: numpypy-out Changeset: r52523:f5c2c9a63515 Date: 2012-02-15 23:42 +0200 http://bitbucket.org/pypy/pypy/changeset/f5c2c9a63515/ Log: most tests pass, translation demands assert to be removed later diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -723,6 +723,7 @@ raise NotImplementedError def compute(self): + assert isinstance(self.res, BaseArray) ra = ResultArray(self, self.size, self.shape, self.res_dtype, self.res) loop.compute(ra) @@ -773,7 +774,6 @@ class Call1(VirtualArray): def __init__(self, ufunc, name, shape, calc_dtype, res_dtype, values, out_arg=None): - xxx VirtualArray.__init__(self, name, shape, res_dtype, out_arg) self.values = values self.size = values.size @@ -799,6 +799,8 @@ out_arg=None): VirtualArray.__init__(self, name, shape, res_dtype, out_arg) self.ufunc = ufunc + assert isinstance(left, BaseArray) + assert isinstance(right, BaseArray) self.left = left self.right = right 
self.calc_dtype = calc_dtype diff --git a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py --- a/pypy/module/micronumpy/interp_ufuncs.py +++ b/pypy/module/micronumpy/interp_ufuncs.py @@ -227,9 +227,8 @@ self.bool_result = bool_result def call(self, space, args_w): - from pypy.module.micronumpy.interp_numarray import (Call1, + from pypy.module.micronumpy.interp_numarray import (Call1, BaseArray, convert_to_array, Scalar, shape_agreement) - if len(args_w)<2: [w_obj] = args_w out = None @@ -243,8 +242,9 @@ promote_to_float=self.promote_to_float, promote_bools=self.promote_bools) if out: - ret_shape = shape_agreement(space, w_obj.shape, out.shape) - assert(ret_shape is not None) + if not isinstance(out, BaseArray): + raise OperationError(space.w_TypeError, space.wrap( + 'output must be an array')) res_dtype = out.find_dtype() elif self.bool_result: res_dtype = interp_dtype.get_dtype_cache(space).w_booldtype @@ -254,8 +254,11 @@ arr = self.func(calc_dtype, w_obj.value.convert_to(calc_dtype)) if isinstance(out,Scalar): out.value=arr + elif isinstance(out, BaseArray): + out.fill(space, arr) + else: + out = arr return space.wrap(out) - w_res = Call1(self.func, self.name, w_obj.shape, calc_dtype, res_dtype, w_obj, out) w_obj.add_invalidates(w_res) From noreply at buildbot.pypy.org Wed Feb 15 23:48:43 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Wed, 15 Feb 2012 23:48:43 +0100 (CET) Subject: [pypy-commit] pypy default: cpyext: implement PySet_Pop(), PySet_Clear() Message-ID: <20120215224843.C924A8204C@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: Changeset: r52524:176ba45b109b Date: 2012-02-15 18:50 +0100 http://bitbucket.org/pypy/pypy/changeset/176ba45b109b/ Log: cpyext: implement PySet_Pop(), PySet_Clear() diff --git a/pypy/module/cpyext/api.py b/pypy/module/cpyext/api.py --- a/pypy/module/cpyext/api.py +++ b/pypy/module/cpyext/api.py @@ -384,6 +384,7 @@ "Dict": "space.w_dict", "Tuple": "space.w_tuple", "List": "space.w_list", + "Set": "space.w_set", "Int": "space.w_int", "Bool": "space.w_bool", "Float": "space.w_float", diff --git a/pypy/module/cpyext/setobject.py b/pypy/module/cpyext/setobject.py --- a/pypy/module/cpyext/setobject.py +++ b/pypy/module/cpyext/setobject.py @@ -54,6 +54,20 @@ return 0 + at cpython_api([PyObject], PyObject) +def PySet_Pop(space, w_set): + """Return a new reference to an arbitrary object in the set, and removes the + object from the set. Return NULL on failure. Raise KeyError if the + set is empty. Raise a SystemError if set is an not an instance of + set or its subtype.""" + return space.call_method(w_set, "pop") + + at cpython_api([PyObject], rffi.INT_real, error=-1) +def PySet_Clear(space, w_set): + """Empty an existing set of all elements.""" + space.call_method(w_set, 'clear') + return 0 + @cpython_api([PyObject], Py_ssize_t, error=CANNOT_FAIL) def PySet_GET_SIZE(space, w_s): """Macro form of PySet_Size() without error checking.""" diff --git a/pypy/module/cpyext/stubs.py b/pypy/module/cpyext/stubs.py --- a/pypy/module/cpyext/stubs.py +++ b/pypy/module/cpyext/stubs.py @@ -1755,19 +1755,6 @@ building-up new frozensets with PySet_Add().""" raise NotImplementedError - at cpython_api([PyObject], PyObject) -def PySet_Pop(space, set): - """Return a new reference to an arbitrary object in the set, and removes the - object from the set. Return NULL on failure. Raise KeyError if the - set is empty. 
Raise a SystemError if set is an not an instance of - set or its subtype.""" - raise NotImplementedError - - at cpython_api([PyObject], rffi.INT_real, error=-1) -def PySet_Clear(space, set): - """Empty an existing set of all elements.""" - raise NotImplementedError - @cpython_api([rffi.CCHARP, Py_ssize_t, rffi.CCHARP, rffi.CCHARP], PyObject) def PyString_Decode(space, s, size, encoding, errors): """Create an object by decoding size bytes of the encoded buffer s using the diff --git a/pypy/module/cpyext/test/test_setobject.py b/pypy/module/cpyext/test/test_setobject.py --- a/pypy/module/cpyext/test/test_setobject.py +++ b/pypy/module/cpyext/test/test_setobject.py @@ -32,3 +32,13 @@ w_set = api.PySet_New(space.wrap([1,2,3,4])) assert api.PySet_Contains(w_set, space.wrap(1)) assert not api.PySet_Contains(w_set, space.wrap(0)) + + def test_set_pop_clear(self, space, api): + w_set = api.PySet_New(space.wrap([1,2,3,4])) + w_obj = api.PySet_Pop(w_set) + assert space.int_w(w_obj) in (1,2,3,4) + assert space.len_w(w_set) == 3 + api.PySet_Clear(w_set) + assert space.len_w(w_set) == 0 + + From noreply at buildbot.pypy.org Wed Feb 15 23:48:45 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Wed, 15 Feb 2012 23:48:45 +0100 (CET) Subject: [pypy-commit] pypy default: cpyext: Expose more fields of Py_buffer, if someone wants to use them... Message-ID: <20120215224845.047A98204C@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: Changeset: r52525:5dba8347e4d2 Date: 2012-02-15 19:04 +0100 http://bitbucket.org/pypy/pypy/changeset/5dba8347e4d2/ Log: cpyext: Expose more fields of Py_buffer, if someone wants to use them... diff --git a/pypy/module/cpyext/api.py b/pypy/module/cpyext/api.py --- a/pypy/module/cpyext/api.py +++ b/pypy/module/cpyext/api.py @@ -435,16 +435,16 @@ ('buf', rffi.VOIDP), ('obj', PyObject), ('len', Py_ssize_t), - # ('itemsize', Py_ssize_t), + ('itemsize', Py_ssize_t), - # ('readonly', lltype.Signed), - # ('ndim', lltype.Signed), - # ('format', rffi.CCHARP), - # ('shape', Py_ssize_tP), - # ('strides', Py_ssize_tP), - # ('suboffets', Py_ssize_tP), - # ('smalltable', rffi.CFixedArray(Py_ssize_t, 2)), - # ('internal', rffi.VOIDP) + ('readonly', lltype.Signed), + ('ndim', lltype.Signed), + ('format', rffi.CCHARP), + ('shape', Py_ssize_tP), + ('strides', Py_ssize_tP), + ('suboffsets', Py_ssize_tP), + #('smalltable', rffi.CFixedArray(Py_ssize_t, 2)), + ('internal', rffi.VOIDP) )) @specialize.memo() diff --git a/pypy/module/cpyext/include/object.h b/pypy/module/cpyext/include/object.h --- a/pypy/module/cpyext/include/object.h +++ b/pypy/module/cpyext/include/object.h @@ -131,18 +131,18 @@ /* This is Py_ssize_t so it can be pointed to by strides in simple case.*/ - /* Py_ssize_t itemsize; */ - /* int readonly; */ - /* int ndim; */ - /* char *format; */ - /* Py_ssize_t *shape; */ - /* Py_ssize_t *strides; */ - /* Py_ssize_t *suboffsets; */ + Py_ssize_t itemsize; + int readonly; + int ndim; + char *format; + Py_ssize_t *shape; + Py_ssize_t *strides; + Py_ssize_t *suboffsets; /* static store for shape and strides of mono-dimensional buffers. 
*/ /* Py_ssize_t smalltable[2]; */ - /* void *internal; */ + void *internal; } Py_buffer; diff --git a/pypy/module/cpyext/object.py b/pypy/module/cpyext/object.py --- a/pypy/module/cpyext/object.py +++ b/pypy/module/cpyext/object.py @@ -439,6 +439,8 @@ return 0 +PyBUF_WRITABLE = 0x0001 # Copied from object.h + @cpython_api([lltype.Ptr(Py_buffer), PyObject, rffi.VOIDP, Py_ssize_t, lltype.Signed, lltype.Signed], rffi.INT, error=CANNOT_FAIL) def PyBuffer_FillInfo(space, view, obj, buf, length, readonly, flags): @@ -454,6 +456,18 @@ view.c_len = length view.c_obj = obj Py_IncRef(space, obj) + view.c_itemsize = 1 + if flags & PyBUF_WRITABLE: + rffi.setintfield(view, 'c_readonly', 0) + else: + rffi.setintfield(view, 'c_readonly', 1) + rffi.setintfield(view, 'c_ndim', 0) + view.c_format = lltype.nullptr(rffi.CCHARP.TO) + view.c_shape = lltype.nullptr(Py_ssize_tP.TO) + view.c_strides = lltype.nullptr(Py_ssize_tP.TO) + view.c_suboffsets = lltype.nullptr(Py_ssize_tP.TO) + view.c_internal = lltype.nullptr(rffi.VOIDP.TO) + return 0 From noreply at buildbot.pypy.org Wed Feb 15 23:48:46 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Wed, 15 Feb 2012 23:48:46 +0100 (CET) Subject: [pypy-commit] pypy default: cpyext: add PyUnicode_FromOrdinal() Message-ID: <20120215224846.319B08204C@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: Changeset: r52526:8fa0802a2488 Date: 2012-02-15 19:08 +0100 http://bitbucket.org/pypy/pypy/changeset/8fa0802a2488/ Log: cpyext: add PyUnicode_FromOrdinal() diff --git a/pypy/module/cpyext/test/test_unicodeobject.py b/pypy/module/cpyext/test/test_unicodeobject.py --- a/pypy/module/cpyext/test/test_unicodeobject.py +++ b/pypy/module/cpyext/test/test_unicodeobject.py @@ -420,3 +420,12 @@ w_seq = space.wrap([u'a', u'b']) w_joined = api.PyUnicode_Join(w_sep, w_seq) assert space.unwrap(w_joined) == u'ab' + + def test_fromordinal(self, space, api): + w_char = api.PyUnicode_FromOrdinal(65) + assert space.unwrap(w_char) == u'A' + w_char = api.PyUnicode_FromOrdinal(0) + assert space.unwrap(w_char) == u'\0' + w_char = api.PyUnicode_FromOrdinal(0xFFFF) + assert space.unwrap(w_char) == u'\uFFFF' + diff --git a/pypy/module/cpyext/unicodeobject.py b/pypy/module/cpyext/unicodeobject.py --- a/pypy/module/cpyext/unicodeobject.py +++ b/pypy/module/cpyext/unicodeobject.py @@ -395,6 +395,16 @@ w_str = space.wrap(rffi.charpsize2str(s, size)) return space.call_method(w_str, 'decode', space.wrap("utf-8")) + at cpython_api([rffi.INT_real], PyObject) +def PyUnicode_FromOrdinal(space, ordinal): + """Create a Unicode Object from the given Unicode code point ordinal. + + The ordinal must be in range(0x10000) on narrow Python builds + (UCS2), and range(0x110000) on wide builds (UCS4). 
A ValueError is + raised in case it is not.""" + w_ordinal = space.wrap(rffi.cast(lltype.Signed, ordinal)) + return space.call_function(space.builtin.get('unichr'), w_ordinal) + @cpython_api([PyObjectP, Py_ssize_t], rffi.INT_real, error=-1) def PyUnicode_Resize(space, ref, newsize): # XXX always create a new string so far From noreply at buildbot.pypy.org Wed Feb 15 23:48:47 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Wed, 15 Feb 2012 23:48:47 +0100 (CET) Subject: [pypy-commit] pypy default: cpyext: implement PyThreadState_GetDict() Message-ID: <20120215224847.5B7928204C@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: Changeset: r52527:4a55cbff44eb Date: 2012-02-15 21:24 +0100 http://bitbucket.org/pypy/pypy/changeset/4a55cbff44eb/ Log: cpyext: implement PyThreadState_GetDict() diff --git a/pypy/module/cpyext/include/pystate.h b/pypy/module/cpyext/include/pystate.h --- a/pypy/module/cpyext/include/pystate.h +++ b/pypy/module/cpyext/include/pystate.h @@ -10,6 +10,7 @@ typedef struct _ts { PyInterpreterState *interp; + PyObject *dict; /* Stores per-thread state */ } PyThreadState; #define Py_BEGIN_ALLOW_THREADS { \ diff --git a/pypy/module/cpyext/include/pythread.h b/pypy/module/cpyext/include/pythread.h --- a/pypy/module/cpyext/include/pythread.h +++ b/pypy/module/cpyext/include/pythread.h @@ -1,6 +1,8 @@ #ifndef Py_PYTHREAD_H #define Py_PYTHREAD_H +#define WITH_THREAD + typedef void *PyThread_type_lock; #define WAIT_LOCK 1 #define NOWAIT_LOCK 0 diff --git a/pypy/module/cpyext/pystate.py b/pypy/module/cpyext/pystate.py --- a/pypy/module/cpyext/pystate.py +++ b/pypy/module/cpyext/pystate.py @@ -1,12 +1,19 @@ from pypy.module.cpyext.api import ( cpython_api, generic_cpy_call, CANNOT_FAIL, CConfig, cpython_struct) +from pypy.module.cpyext.pyobject import PyObject, Py_DecRef, make_ref from pypy.rpython.lltypesystem import rffi, lltype PyInterpreterStateStruct = lltype.ForwardReference() PyInterpreterState = lltype.Ptr(PyInterpreterStateStruct) cpython_struct( - "PyInterpreterState", [('next', PyInterpreterState)], PyInterpreterStateStruct) -PyThreadState = lltype.Ptr(cpython_struct("PyThreadState", [('interp', PyInterpreterState)])) + "PyInterpreterState", + [('next', PyInterpreterState)], + PyInterpreterStateStruct) +PyThreadState = lltype.Ptr(cpython_struct( + "PyThreadState", + [('interp', PyInterpreterState), + ('dict', PyObject), + ])) @cpython_api([], PyThreadState, error=CANNOT_FAIL) def PyEval_SaveThread(space): @@ -38,41 +45,48 @@ return 1 # XXX: might be generally useful -def encapsulator(T, flavor='raw'): +def encapsulator(T, flavor='raw', dealloc=None): class MemoryCapsule(object): - def __init__(self, alloc=True): - if alloc: + def __init__(self, space): + self.space = space + if space is not None: self.memory = lltype.malloc(T, flavor=flavor) else: self.memory = lltype.nullptr(T) def __del__(self): if self.memory: + if dealloc and self.space: + dealloc(self.memory, self.space) lltype.free(self.memory, flavor=flavor) return MemoryCapsule -ThreadStateCapsule = encapsulator(PyThreadState.TO) +def ThreadState_dealloc(ts, space): + Py_DecRef(space, ts.c_dict) +ThreadStateCapsule = encapsulator(PyThreadState.TO, + dealloc=ThreadState_dealloc) from pypy.interpreter.executioncontext import ExecutionContext -ExecutionContext.cpyext_threadstate = ThreadStateCapsule(alloc=False) +ExecutionContext.cpyext_threadstate = ThreadStateCapsule(None) class InterpreterState(object): def __init__(self, space): self.interpreter_state = lltype.malloc( PyInterpreterState.TO, 
flavor='raw', zero=True, immortal=True)
 
-    def new_thread_state(self):
-        capsule = ThreadStateCapsule()
+    def new_thread_state(self, space):
+        capsule = ThreadStateCapsule(space)
         ts = capsule.memory
         ts.c_interp = self.interpreter_state
+        ts.c_dict = make_ref(space, space.newdict())
         return capsule
 
     def get_thread_state(self, space):
         ec = space.getexecutioncontext()
-        return self._get_thread_state(ec).memory
+        return self._get_thread_state(space, ec).memory
 
-    def _get_thread_state(self, ec):
+    def _get_thread_state(self, space, ec):
         if ec.cpyext_threadstate.memory == lltype.nullptr(PyThreadState.TO):
-            ec.cpyext_threadstate = self.new_thread_state()
+            ec.cpyext_threadstate = self.new_thread_state(space)
         return ec.cpyext_threadstate
 
@@ -81,6 +95,11 @@
     state = space.fromcache(InterpreterState)
     return state.get_thread_state(space)
 
+@cpython_api([], PyObject, error=CANNOT_FAIL)
+def PyThreadState_GetDict(space):
+    state = space.fromcache(InterpreterState)
+    return state.get_thread_state(space).c_dict
+
 @cpython_api([PyThreadState], PyThreadState, error=CANNOT_FAIL)
 def PyThreadState_Swap(space, tstate):
     """Swap the current thread state with the thread state given by the argument
diff --git a/pypy/module/cpyext/test/test_pystate.py b/pypy/module/cpyext/test/test_pystate.py
--- a/pypy/module/cpyext/test/test_pystate.py
+++ b/pypy/module/cpyext/test/test_pystate.py
@@ -2,6 +2,7 @@
 from pypy.module.cpyext.test.test_api import BaseApiTest
 from pypy.rpython.lltypesystem.lltype import nullptr
 from pypy.module.cpyext.pystate import PyInterpreterState, PyThreadState
+from pypy.module.cpyext.pyobject import from_ref
 
 class AppTestThreads(AppTestCpythonExtensionBase):
     def test_allow_threads(self):
@@ -49,3 +50,10 @@
 
         api.PyEval_AcquireThread(tstate)
         api.PyEval_ReleaseThread(tstate)
+
+    def test_threadstate_dict(self, space, api):
+        ts = api.PyThreadState_Get()
+        ref = ts.c_dict
+        assert ref == api.PyThreadState_GetDict()
+        w_obj = from_ref(space, ref)
+        assert space.isinstance_w(w_obj, space.w_dict)

From noreply at buildbot.pypy.org  Wed Feb 15 23:48:48 2012
From: noreply at buildbot.pypy.org (amauryfa)
Date: Wed, 15 Feb 2012 23:48:48 +0100 (CET)
Subject: [pypy-commit] pypy default: cpyext: Implement PyDictProxy_New(),
	as a read-only dict.
Message-ID: <20120215224848.8C0448204C@wyvern.cs.uni-duesseldorf.de>

Author: Amaury Forgeot d'Arc
Branch: 
Changeset: r52528:94882d5e1677
Date: 2012-02-15 22:06 +0100
http://bitbucket.org/pypy/pypy/changeset/94882d5e1677/

Log:	cpyext: Implement PyDictProxy_New(), as a read-only dict.

	I don't know if there is a better way; PyPy does not have a separate
	type for int.__dict__.
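	For reference, the "read-only dict" that this changeset builds at app
	level through space.appexec() (see the diff just below) behaves, in
	plain Python 2.7 terms, roughly like the sketch that follows. The
	FrozenDict body mirrors the code in the diff; the demo lines at the
	end are illustrative only and are not part of the commit.

	    # Sketch only -- mirrors the appexec()'d helper in the diff below.
	    import collections

	    class FrozenDict(collections.Mapping):  # read-only mapping over a plain dict
	        def __init__(self, *args, **kwargs):
	            self._d = dict(*args, **kwargs)
	        def __iter__(self):
	            return iter(self._d)
	        def __len__(self):
	            return len(self._d)
	        def __getitem__(self, key):
	            return self._d[key]

	    # Illustrative use (not in the changeset): lookups work, mutation does not.
	    demo = FrozenDict({'sys': object()})
	    assert 'sys' in demo and len(demo) == 1
	    try:
	        demo['sys'] = None   # no __setitem__ -> TypeError
	    except TypeError:
	        pass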
diff --git a/pypy/module/cpyext/dictobject.py b/pypy/module/cpyext/dictobject.py --- a/pypy/module/cpyext/dictobject.py +++ b/pypy/module/cpyext/dictobject.py @@ -6,6 +6,7 @@ from pypy.module.cpyext.pyobject import RefcountState from pypy.module.cpyext.pyerrors import PyErr_BadInternalCall from pypy.interpreter.error import OperationError +from pypy.rlib.objectmodel import specialize @cpython_api([], PyObject) def PyDict_New(space): @@ -191,3 +192,24 @@ raise return 0 return 1 + + at specialize.memo() +def make_frozendict(space): + return space.appexec([], '''(): + import collections + class FrozenDict(collections.Mapping): + def __init__(self, *args, **kwargs): + self._d = dict(*args, **kwargs) + def __iter__(self): + return iter(self._d) + def __len__(self): + return len(self._d) + def __getitem__(self, key): + return self._d[key] + return FrozenDict''') + + at cpython_api([PyObject], PyObject) +def PyDictProxy_New(space, w_dict): + w_frozendict = make_frozendict(space) + return space.call_function(w_frozendict, w_dict) + diff --git a/pypy/module/cpyext/test/test_dictobject.py b/pypy/module/cpyext/test/test_dictobject.py --- a/pypy/module/cpyext/test/test_dictobject.py +++ b/pypy/module/cpyext/test/test_dictobject.py @@ -2,6 +2,7 @@ from pypy.module.cpyext.test.test_api import BaseApiTest from pypy.module.cpyext.api import Py_ssize_tP, PyObjectP from pypy.module.cpyext.pyobject import make_ref, from_ref +from pypy.interpreter.error import OperationError class TestDictObject(BaseApiTest): def test_dict(self, space, api): @@ -110,3 +111,13 @@ assert space.eq_w(space.len(w_copy), space.len(w_dict)) assert space.eq_w(w_copy, w_dict) + + def test_dictproxy(self, space, api): + w_dict = space.sys.get('modules') + w_proxy = api.PyDictProxy_New(w_dict) + assert space.is_true(space.contains(w_proxy, space.wrap('sys'))) + raises(OperationError, space.setitem, + w_proxy, space.wrap('sys'), space.w_None) + raises(OperationError, space.delitem, + w_proxy, space.wrap('sys')) + raises(OperationError, space.call_method, w_proxy, 'clear') From noreply at buildbot.pypy.org Wed Feb 15 23:48:49 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Wed, 15 Feb 2012 23:48:49 +0100 (CET) Subject: [pypy-commit] pypy default: Translation fixes in the Oracle module. Message-ID: <20120215224849.B6C9C8204C@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: Changeset: r52529:a2dfde172fca Date: 2012-02-15 23:47 +0100 http://bitbucket.org/pypy/pypy/changeset/a2dfde172fca/ Log: Translation fixes in the Oracle module. diff --git a/pypy/module/oracle/interp_error.py b/pypy/module/oracle/interp_error.py --- a/pypy/module/oracle/interp_error.py +++ b/pypy/module/oracle/interp_error.py @@ -72,7 +72,7 @@ get(space).w_InternalError, space.wrap("No Oracle error?")) - self.code = codeptr[0] + self.code = rffi.cast(lltype.Signed, codeptr[0]) self.w_message = config.w_string(space, textbuf) finally: lltype.free(codeptr, flavor='raw') diff --git a/pypy/module/oracle/interp_variable.py b/pypy/module/oracle/interp_variable.py --- a/pypy/module/oracle/interp_variable.py +++ b/pypy/module/oracle/interp_variable.py @@ -359,14 +359,14 @@ # Verifies that truncation or other problems did not take place on # retrieve. 
if self.isVariableLength: - if rffi.cast(lltype.Signed, self.returnCode[pos]) != 0: + error_code = rffi.cast(lltype.Signed, self.returnCode[pos]) + if error_code != 0: error = W_Error(space, self.environment, "Variable_VerifyFetch()", 0) - error.code = self.returnCode[pos] + error.code = error_code error.message = space.wrap( "column at array pos %d fetched with error: %d" % - (pos, - rffi.cast(lltype.Signed, self.returnCode[pos]))) + (pos, error_code)) w_error = get(space).w_DatabaseError raise OperationError(get(space).w_DatabaseError, From noreply at buildbot.pypy.org Wed Feb 15 23:50:19 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Wed, 15 Feb 2012 23:50:19 +0100 (CET) Subject: [pypy-commit] pypy default: make checkmodule.py pass with module/oracle. Message-ID: <20120215225019.EA6918204C@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: Changeset: r52530:24b5057ce497 Date: 2012-02-15 23:49 +0100 http://bitbucket.org/pypy/pypy/changeset/24b5057ce497/ Log: make checkmodule.py pass with module/oracle. diff --git a/pypy/objspace/fake/objspace.py b/pypy/objspace/fake/objspace.py --- a/pypy/objspace/fake/objspace.py +++ b/pypy/objspace/fake/objspace.py @@ -326,4 +326,5 @@ return w_some_obj() FakeObjSpace.sys = FakeModule() FakeObjSpace.sys.filesystemencoding = 'foobar' +FakeObjSpace.sys.defaultencoding = 'ascii' FakeObjSpace.builtin = FakeModule() From noreply at buildbot.pypy.org Wed Feb 15 23:56:53 2012 From: noreply at buildbot.pypy.org (mattip) Date: Wed, 15 Feb 2012 23:56:53 +0100 (CET) Subject: [pypy-commit] pypy numpypy-out: all pre-out_arg tests pass Message-ID: <20120215225653.7908C8204C@wyvern.cs.uni-duesseldorf.de> Author: mattip Branch: numpypy-out Changeset: r52531:e3b7e88090b8 Date: 2012-02-15 23:58 +0200 http://bitbucket.org/pypy/pypy/changeset/e3b7e88090b8/ Log: all pre-out_arg tests pass diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -723,7 +723,6 @@ raise NotImplementedError def compute(self): - assert isinstance(self.res, BaseArray) ra = ResultArray(self, self.size, self.shape, self.res_dtype, self.res) loop.compute(ra) @@ -799,8 +798,6 @@ out_arg=None): VirtualArray.__init__(self, name, shape, res_dtype, out_arg) self.ufunc = ufunc - assert isinstance(left, BaseArray) - assert isinstance(right, BaseArray) self.left = left self.right = right self.calc_dtype = calc_dtype @@ -813,6 +810,8 @@ def create_sig(self): if self.forced_result is not None: return self.forced_result.create_sig() + assert isinstance(self.left, BaseArray) + assert isinstance(self.right, BaseArray) if self.shape != self.left.shape and self.shape != self.right.shape: return signature.BroadcastBoth(self.ufunc, self.name, self.calc_dtype, @@ -835,6 +834,7 @@ def __init__(self, child, size, shape, dtype, res=None, order='C'): if res is None: res = W_NDimArray(size, shape, dtype, order) + assert isinstance(res, BaseArray) Call2.__init__(self, None, 'assign', shape, dtype, dtype, res, child) def create_sig(self): From noreply at buildbot.pypy.org Wed Feb 15 23:56:54 2012 From: noreply at buildbot.pypy.org (mattip) Date: Wed, 15 Feb 2012 23:56:54 +0100 (CET) Subject: [pypy-commit] pypy numpypy-out: force non-lazy behaviour for ufuncs Message-ID: <20120215225654.A7A688204C@wyvern.cs.uni-duesseldorf.de> Author: mattip Branch: numpypy-out Changeset: r52532:481d17327eeb Date: 2012-02-16 00:56 +0200 
http://bitbucket.org/pypy/pypy/changeset/481d17327eeb/ Log: force non-lazy behaviour for ufuncs diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -783,6 +783,7 @@ self.values = None def create_sig(self): + print 'Call1::create_sig' if self.forced_result is not None: return self.forced_result.create_sig() return signature.Call1(self.ufunc, self.name, self.calc_dtype, @@ -943,7 +944,7 @@ def setitem(self, item, value): self.invalidated() - self.dtype.setitem(self.storage, item, value) + self.dtype.setitem(self.storage, item, value.convert_to(self.dtype)) def calc_strides(self, shape): strides = [] diff --git a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py --- a/pypy/module/micronumpy/interp_ufuncs.py +++ b/pypy/module/micronumpy/interp_ufuncs.py @@ -262,6 +262,9 @@ w_res = Call1(self.func, self.name, w_obj.shape, calc_dtype, res_dtype, w_obj, out) w_obj.add_invalidates(w_res) + if out: + #Force it immediately + w_res.get_concrete() return w_res diff --git a/pypy/module/micronumpy/test/test_outarg.py b/pypy/module/micronumpy/test/test_outarg.py --- a/pypy/module/micronumpy/test/test_outarg.py +++ b/pypy/module/micronumpy/test/test_outarg.py @@ -36,9 +36,9 @@ a = array([[1, 2], [3, 4]]) c = zeros((2,2,2)) b = negative(a + a, out=c[1]) - print c + #test for view, and also test that forcing out also forces b + assert (c[:, :, 1] == [[0, 0], [-4, -8]]).all() assert (b == [[-2, -4], [-6, -8]]).all() - assert (c[:, :, 1] == [[0, 0], [-4, -8]]).all() def test_ufunc_cast(self): from _numpypy import array, negative From noreply at buildbot.pypy.org Thu Feb 16 00:18:35 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Thu, 16 Feb 2012 00:18:35 +0100 (CET) Subject: [pypy-commit] pypy default: Fix translation Message-ID: <20120215231835.75BC28204C@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: Changeset: r52533:f272bf10ef94 Date: 2012-02-16 00:17 +0100 http://bitbucket.org/pypy/pypy/changeset/f272bf10ef94/ Log: Fix translation diff --git a/pypy/module/cpyext/pystate.py b/pypy/module/cpyext/pystate.py --- a/pypy/module/cpyext/pystate.py +++ b/pypy/module/cpyext/pystate.py @@ -61,6 +61,7 @@ return MemoryCapsule def ThreadState_dealloc(ts, space): + assert space is not None Py_DecRef(space, ts.c_dict) ThreadStateCapsule = encapsulator(PyThreadState.TO, dealloc=ThreadState_dealloc) From notifications-noreply at bitbucket.org Thu Feb 16 03:15:00 2012 From: notifications-noreply at bitbucket.org (Bitbucket) Date: Thu, 16 Feb 2012 02:15:00 -0000 Subject: [pypy-commit] Notification: pypy Message-ID: <20120216021500.1184.82934@bitbucket01.managed.contegix.com> You have received a notification from p01197. Hi, I forked pypy. My fork is at https://bitbucket.org/p01197/pypy. 
-- Disable notifications at https://bitbucket.org/account/notifications/ From noreply at buildbot.pypy.org Thu Feb 16 09:19:24 2012 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 16 Feb 2012 09:19:24 +0100 (CET) Subject: [pypy-commit] pypy stm-gc: todo Message-ID: <20120216081924.86DDB8204C@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-gc Changeset: r52534:fc24912355a2 Date: 2012-02-15 20:27 +0100 http://bitbucket.org/pypy/pypy/changeset/fc24912355a2/ Log: todo diff --git a/pypy/doc/discussion/stm_todo.txt b/pypy/doc/discussion/stm_todo.txt --- a/pypy/doc/discussion/stm_todo.txt +++ b/pypy/doc/discussion/stm_todo.txt @@ -7,3 +7,5 @@ e23ab2c195c1 Added a number of "# XXX --- custom version for STM ---" 31f2ed861176 One more + +- track code like rdict.popitem() that mutate some global data From noreply at buildbot.pypy.org Thu Feb 16 09:19:26 2012 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 16 Feb 2012 09:19:26 +0100 (CET) Subject: [pypy-commit] pypy default: Minor change: get rid of 'frame.nlocals', a mostly useless attribute. Message-ID: <20120216081926.3D9438204C@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r52535:ad9b97606c50 Date: 2012-02-16 08:53 +0100 http://bitbucket.org/pypy/pypy/changeset/ad9b97606c50/ Log: Minor change: get rid of 'frame.nlocals', a mostly useless attribute. diff --git a/pypy/interpreter/pyframe.py b/pypy/interpreter/pyframe.py --- a/pypy/interpreter/pyframe.py +++ b/pypy/interpreter/pyframe.py @@ -60,11 +60,10 @@ self.pycode = code eval.Frame.__init__(self, space, w_globals) self.locals_stack_w = [None] * (code.co_nlocals + code.co_stacksize) - self.nlocals = code.co_nlocals self.valuestackdepth = code.co_nlocals self.lastblock = None make_sure_not_resized(self.locals_stack_w) - check_nonneg(self.nlocals) + check_nonneg(self.valuestackdepth) # if space.config.objspace.honor__builtins__: self.builtin = space.builtin.pick_builtin(w_globals) @@ -195,7 +194,7 @@ def popvalue(self): depth = self.valuestackdepth - 1 - assert depth >= self.nlocals, "pop from empty value stack" + assert depth >= self.pycode.co_nlocals, "pop from empty value stack" w_object = self.locals_stack_w[depth] self.locals_stack_w[depth] = None self.valuestackdepth = depth @@ -223,7 +222,7 @@ def peekvalues(self, n): values_w = [None] * n base = self.valuestackdepth - n - assert base >= self.nlocals + assert base >= self.pycode.co_nlocals while True: n -= 1 if n < 0: @@ -235,7 +234,8 @@ def dropvalues(self, n): n = hint(n, promote=True) finaldepth = self.valuestackdepth - n - assert finaldepth >= self.nlocals, "stack underflow in dropvalues()" + assert finaldepth >= self.pycode.co_nlocals, ( + "stack underflow in dropvalues()") while True: n -= 1 if n < 0: @@ -267,13 +267,15 @@ # Contrast this with CPython where it's PEEK(-1). 
index_from_top = hint(index_from_top, promote=True) index = self.valuestackdepth + ~index_from_top - assert index >= self.nlocals, "peek past the bottom of the stack" + assert index >= self.pycode.co_nlocals, ( + "peek past the bottom of the stack") return self.locals_stack_w[index] def settopvalue(self, w_object, index_from_top=0): index_from_top = hint(index_from_top, promote=True) index = self.valuestackdepth + ~index_from_top - assert index >= self.nlocals, "settop past the bottom of the stack" + assert index >= self.pycode.co_nlocals, ( + "settop past the bottom of the stack") self.locals_stack_w[index] = w_object @jit.unroll_safe @@ -320,12 +322,13 @@ else: f_lineno = self.f_lineno - values_w = self.locals_stack_w[self.nlocals:self.valuestackdepth] + nlocals = self.pycode.co_nlocals + values_w = self.locals_stack_w[nlocals:self.valuestackdepth] w_valuestack = maker.slp_into_tuple_with_nulls(space, values_w) w_blockstack = nt([block._get_state_(space) for block in self.get_blocklist()]) w_fastlocals = maker.slp_into_tuple_with_nulls( - space, self.locals_stack_w[:self.nlocals]) + space, self.locals_stack_w[:nlocals]) if self.last_exception is None: w_exc_value = space.w_None w_tb = space.w_None @@ -442,7 +445,7 @@ """Initialize the fast locals from a list of values, where the order is according to self.pycode.signature().""" scope_len = len(scope_w) - if scope_len > self.nlocals: + if scope_len > self.pycode.co_nlocals: raise ValueError, "new fastscope is longer than the allocated area" # don't assign directly to 'locals_stack_w[:scope_len]' to be # virtualizable-friendly @@ -456,7 +459,7 @@ pass def getfastscopelength(self): - return self.nlocals + return self.pycode.co_nlocals def getclosure(self): return None diff --git a/pypy/objspace/flow/flowcontext.py b/pypy/objspace/flow/flowcontext.py --- a/pypy/objspace/flow/flowcontext.py +++ b/pypy/objspace/flow/flowcontext.py @@ -410,7 +410,7 @@ w_new = Constant(newvalue) f = self.crnt_frame stack_items_w = f.locals_stack_w - for i in range(f.valuestackdepth-1, f.nlocals-1, -1): + for i in range(f.valuestackdepth-1, f.pycode.co_nlocals-1, -1): w_v = stack_items_w[i] if isinstance(w_v, Constant): if w_v.value is oldvalue: diff --git a/pypy/objspace/flow/test/test_framestate.py b/pypy/objspace/flow/test/test_framestate.py --- a/pypy/objspace/flow/test/test_framestate.py +++ b/pypy/objspace/flow/test/test_framestate.py @@ -25,7 +25,7 @@ dummy = Constant(None) #dummy.dummy = True arg_list = ([Variable() for i in range(formalargcount)] + - [dummy] * (frame.nlocals - formalargcount)) + [dummy] * (frame.pycode.co_nlocals - formalargcount)) frame.setfastscope(arg_list) return frame @@ -42,7 +42,7 @@ def test_neq_hacked_framestate(self): frame = self.getframe(self.func_simple) fs1 = FrameState(frame) - frame.locals_stack_w[frame.nlocals-1] = Variable() + frame.locals_stack_w[frame.pycode.co_nlocals-1] = Variable() fs2 = FrameState(frame) assert fs1 != fs2 @@ -55,7 +55,7 @@ def test_union_on_hacked_framestates(self): frame = self.getframe(self.func_simple) fs1 = FrameState(frame) - frame.locals_stack_w[frame.nlocals-1] = Variable() + frame.locals_stack_w[frame.pycode.co_nlocals-1] = Variable() fs2 = FrameState(frame) assert fs1.union(fs2) == fs2 # fs2 is more general assert fs2.union(fs1) == fs2 # fs2 is more general @@ -63,7 +63,7 @@ def test_restore_frame(self): frame = self.getframe(self.func_simple) fs1 = FrameState(frame) - frame.locals_stack_w[frame.nlocals-1] = Variable() + frame.locals_stack_w[frame.pycode.co_nlocals-1] = Variable() 
fs1.restoreframe(frame) assert fs1 == FrameState(frame) @@ -82,7 +82,7 @@ def test_getoutputargs(self): frame = self.getframe(self.func_simple) fs1 = FrameState(frame) - frame.locals_stack_w[frame.nlocals-1] = Variable() + frame.locals_stack_w[frame.pycode.co_nlocals-1] = Variable() fs2 = FrameState(frame) outputargs = fs1.getoutputargs(fs2) # 'x' -> 'x' is a Variable @@ -92,16 +92,16 @@ def test_union_different_constants(self): frame = self.getframe(self.func_simple) fs1 = FrameState(frame) - frame.locals_stack_w[frame.nlocals-1] = Constant(42) + frame.locals_stack_w[frame.pycode.co_nlocals-1] = Constant(42) fs2 = FrameState(frame) fs3 = fs1.union(fs2) fs3.restoreframe(frame) - assert isinstance(frame.locals_stack_w[frame.nlocals-1], Variable) - # ^^^ generalized + assert isinstance(frame.locals_stack_w[frame.pycode.co_nlocals-1], + Variable) # generalized def test_union_spectag(self): frame = self.getframe(self.func_simple) fs1 = FrameState(frame) - frame.locals_stack_w[frame.nlocals-1] = Constant(SpecTag()) + frame.locals_stack_w[frame.pycode.co_nlocals-1] = Constant(SpecTag()) fs2 = FrameState(frame) assert fs1.union(fs2) is None # UnionError From noreply at buildbot.pypy.org Thu Feb 16 09:25:47 2012 From: noreply at buildbot.pypy.org (bivab) Date: Thu, 16 Feb 2012 09:25:47 +0100 (CET) Subject: [pypy-commit] pypy arm-backend-2: merge default Message-ID: <20120216082547.98D3E8204C@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: arm-backend-2 Changeset: r52536:4a8332d1fbc5 Date: 2012-02-16 09:21 +0100 http://bitbucket.org/pypy/pypy/changeset/4a8332d1fbc5/ Log: merge default diff --git a/pypy/jit/backend/test/runner_test.py b/pypy/jit/backend/test/runner_test.py --- a/pypy/jit/backend/test/runner_test.py +++ b/pypy/jit/backend/test/runner_test.py @@ -2383,6 +2383,35 @@ print 'step 4 ok' print '-'*79 + def test_guard_not_invalidated_and_label(self): + # test that the guard_not_invalidated reserves enough room before + # the label. 
If it doesn't, then in this example after we invalidate + # the guard, jumping to the label will hit the invalidation code too + cpu = self.cpu + i0 = BoxInt() + faildescr = BasicFailDescr(1) + labeldescr = TargetToken() + ops = [ + ResOperation(rop.GUARD_NOT_INVALIDATED, [], None, descr=faildescr), + ResOperation(rop.LABEL, [i0], None, descr=labeldescr), + ResOperation(rop.FINISH, [i0], None, descr=BasicFailDescr(3)), + ] + ops[0].setfailargs([]) + looptoken = JitCellToken() + self.cpu.compile_loop([i0], ops, looptoken) + # mark as failing + self.cpu.invalidate_loop(looptoken) + # attach a bridge + i2 = BoxInt() + ops = [ + ResOperation(rop.JUMP, [ConstInt(333)], None, descr=labeldescr), + ] + self.cpu.compile_bridge(faildescr, [], ops, looptoken) + # run: must not be caught in an infinite loop + fail = self.cpu.execute_token(looptoken, 16) + assert fail.identifier == 3 + assert self.cpu.get_latest_value_int(0) == 333 + # pure do_ / descr features def test_do_operations(self): diff --git a/pypy/jit/backend/x86/regalloc.py b/pypy/jit/backend/x86/regalloc.py --- a/pypy/jit/backend/x86/regalloc.py +++ b/pypy/jit/backend/x86/regalloc.py @@ -168,7 +168,6 @@ self.jump_target_descr = None self.close_stack_struct = 0 self.final_jump_op = None - self.min_bytes_before_label = 0 def _prepare(self, inputargs, operations, allgcrefs): self.fm = X86FrameManager() @@ -205,8 +204,13 @@ operations = self._prepare(inputargs, operations, allgcrefs) self._update_bindings(arglocs, inputargs) self.param_depth = prev_depths[1] + self.min_bytes_before_label = 0 return operations + def ensure_next_label_is_at_least_at_position(self, at_least_position): + self.min_bytes_before_label = max(self.min_bytes_before_label, + at_least_position) + def reserve_param(self, n): self.param_depth = max(self.param_depth, n) @@ -464,7 +468,11 @@ self.assembler.mc.mark_op(None) # end of the loop def flush_loop(self): - # rare case: if the loop is too short, pad with NOPs + # rare case: if the loop is too short, or if we are just after + # a GUARD_NOT_INVALIDATED, pad with NOPs. Important! This must + # be called to ensure that there are enough bytes produced, + # because GUARD_NOT_INVALIDATED or redirect_call_assembler() + # will maybe overwrite them. mc = self.assembler.mc while mc.get_relative_pos() < self.min_bytes_before_label: mc.NOP() @@ -500,7 +508,15 @@ def consider_guard_no_exception(self, op): self.perform_guard(op, [], None) - consider_guard_not_invalidated = consider_guard_no_exception + def consider_guard_not_invalidated(self, op): + mc = self.assembler.mc + n = mc.get_relative_pos() + self.perform_guard(op, [], None) + assert n == mc.get_relative_pos() + # ensure that the next label is at least 5 bytes farther than + # the current position. Otherwise, when invalidating the guard, + # we would overwrite randomly the next label's position. 
+ self.ensure_next_label_is_at_least_at_position(n + 5) def consider_guard_exception(self, op): loc = self.rm.make_sure_var_in_reg(op.getarg(0)) diff --git a/pypy/module/cpyext/api.py b/pypy/module/cpyext/api.py --- a/pypy/module/cpyext/api.py +++ b/pypy/module/cpyext/api.py @@ -384,6 +384,7 @@ "Dict": "space.w_dict", "Tuple": "space.w_tuple", "List": "space.w_list", + "Set": "space.w_set", "Int": "space.w_int", "Bool": "space.w_bool", "Float": "space.w_float", @@ -434,16 +435,16 @@ ('buf', rffi.VOIDP), ('obj', PyObject), ('len', Py_ssize_t), - # ('itemsize', Py_ssize_t), + ('itemsize', Py_ssize_t), - # ('readonly', lltype.Signed), - # ('ndim', lltype.Signed), - # ('format', rffi.CCHARP), - # ('shape', Py_ssize_tP), - # ('strides', Py_ssize_tP), - # ('suboffets', Py_ssize_tP), - # ('smalltable', rffi.CFixedArray(Py_ssize_t, 2)), - # ('internal', rffi.VOIDP) + ('readonly', lltype.Signed), + ('ndim', lltype.Signed), + ('format', rffi.CCHARP), + ('shape', Py_ssize_tP), + ('strides', Py_ssize_tP), + ('suboffsets', Py_ssize_tP), + #('smalltable', rffi.CFixedArray(Py_ssize_t, 2)), + ('internal', rffi.VOIDP) )) @specialize.memo() diff --git a/pypy/module/cpyext/dictobject.py b/pypy/module/cpyext/dictobject.py --- a/pypy/module/cpyext/dictobject.py +++ b/pypy/module/cpyext/dictobject.py @@ -6,6 +6,7 @@ from pypy.module.cpyext.pyobject import RefcountState from pypy.module.cpyext.pyerrors import PyErr_BadInternalCall from pypy.interpreter.error import OperationError +from pypy.rlib.objectmodel import specialize @cpython_api([], PyObject) def PyDict_New(space): @@ -191,3 +192,24 @@ raise return 0 return 1 + + at specialize.memo() +def make_frozendict(space): + return space.appexec([], '''(): + import collections + class FrozenDict(collections.Mapping): + def __init__(self, *args, **kwargs): + self._d = dict(*args, **kwargs) + def __iter__(self): + return iter(self._d) + def __len__(self): + return len(self._d) + def __getitem__(self, key): + return self._d[key] + return FrozenDict''') + + at cpython_api([PyObject], PyObject) +def PyDictProxy_New(space, w_dict): + w_frozendict = make_frozendict(space) + return space.call_function(w_frozendict, w_dict) + diff --git a/pypy/module/cpyext/include/object.h b/pypy/module/cpyext/include/object.h --- a/pypy/module/cpyext/include/object.h +++ b/pypy/module/cpyext/include/object.h @@ -131,18 +131,18 @@ /* This is Py_ssize_t so it can be pointed to by strides in simple case.*/ - /* Py_ssize_t itemsize; */ - /* int readonly; */ - /* int ndim; */ - /* char *format; */ - /* Py_ssize_t *shape; */ - /* Py_ssize_t *strides; */ - /* Py_ssize_t *suboffsets; */ + Py_ssize_t itemsize; + int readonly; + int ndim; + char *format; + Py_ssize_t *shape; + Py_ssize_t *strides; + Py_ssize_t *suboffsets; /* static store for shape and strides of mono-dimensional buffers. 
*/ /* Py_ssize_t smalltable[2]; */ - /* void *internal; */ + void *internal; } Py_buffer; diff --git a/pypy/module/cpyext/include/pystate.h b/pypy/module/cpyext/include/pystate.h --- a/pypy/module/cpyext/include/pystate.h +++ b/pypy/module/cpyext/include/pystate.h @@ -10,6 +10,7 @@ typedef struct _ts { PyInterpreterState *interp; + PyObject *dict; /* Stores per-thread state */ } PyThreadState; #define Py_BEGIN_ALLOW_THREADS { \ diff --git a/pypy/module/cpyext/include/pythread.h b/pypy/module/cpyext/include/pythread.h --- a/pypy/module/cpyext/include/pythread.h +++ b/pypy/module/cpyext/include/pythread.h @@ -1,6 +1,8 @@ #ifndef Py_PYTHREAD_H #define Py_PYTHREAD_H +#define WITH_THREAD + typedef void *PyThread_type_lock; #define WAIT_LOCK 1 #define NOWAIT_LOCK 0 diff --git a/pypy/module/cpyext/object.py b/pypy/module/cpyext/object.py --- a/pypy/module/cpyext/object.py +++ b/pypy/module/cpyext/object.py @@ -439,6 +439,8 @@ return 0 +PyBUF_WRITABLE = 0x0001 # Copied from object.h + @cpython_api([lltype.Ptr(Py_buffer), PyObject, rffi.VOIDP, Py_ssize_t, lltype.Signed, lltype.Signed], rffi.INT, error=CANNOT_FAIL) def PyBuffer_FillInfo(space, view, obj, buf, length, readonly, flags): @@ -454,6 +456,18 @@ view.c_len = length view.c_obj = obj Py_IncRef(space, obj) + view.c_itemsize = 1 + if flags & PyBUF_WRITABLE: + rffi.setintfield(view, 'c_readonly', 0) + else: + rffi.setintfield(view, 'c_readonly', 1) + rffi.setintfield(view, 'c_ndim', 0) + view.c_format = lltype.nullptr(rffi.CCHARP.TO) + view.c_shape = lltype.nullptr(Py_ssize_tP.TO) + view.c_strides = lltype.nullptr(Py_ssize_tP.TO) + view.c_suboffsets = lltype.nullptr(Py_ssize_tP.TO) + view.c_internal = lltype.nullptr(rffi.VOIDP.TO) + return 0 diff --git a/pypy/module/cpyext/pystate.py b/pypy/module/cpyext/pystate.py --- a/pypy/module/cpyext/pystate.py +++ b/pypy/module/cpyext/pystate.py @@ -1,12 +1,19 @@ from pypy.module.cpyext.api import ( cpython_api, generic_cpy_call, CANNOT_FAIL, CConfig, cpython_struct) +from pypy.module.cpyext.pyobject import PyObject, Py_DecRef, make_ref from pypy.rpython.lltypesystem import rffi, lltype PyInterpreterStateStruct = lltype.ForwardReference() PyInterpreterState = lltype.Ptr(PyInterpreterStateStruct) cpython_struct( - "PyInterpreterState", [('next', PyInterpreterState)], PyInterpreterStateStruct) -PyThreadState = lltype.Ptr(cpython_struct("PyThreadState", [('interp', PyInterpreterState)])) + "PyInterpreterState", + [('next', PyInterpreterState)], + PyInterpreterStateStruct) +PyThreadState = lltype.Ptr(cpython_struct( + "PyThreadState", + [('interp', PyInterpreterState), + ('dict', PyObject), + ])) @cpython_api([], PyThreadState, error=CANNOT_FAIL) def PyEval_SaveThread(space): @@ -38,41 +45,49 @@ return 1 # XXX: might be generally useful -def encapsulator(T, flavor='raw'): +def encapsulator(T, flavor='raw', dealloc=None): class MemoryCapsule(object): - def __init__(self, alloc=True): - if alloc: + def __init__(self, space): + self.space = space + if space is not None: self.memory = lltype.malloc(T, flavor=flavor) else: self.memory = lltype.nullptr(T) def __del__(self): if self.memory: + if dealloc and self.space: + dealloc(self.memory, self.space) lltype.free(self.memory, flavor=flavor) return MemoryCapsule -ThreadStateCapsule = encapsulator(PyThreadState.TO) +def ThreadState_dealloc(ts, space): + assert space is not None + Py_DecRef(space, ts.c_dict) +ThreadStateCapsule = encapsulator(PyThreadState.TO, + dealloc=ThreadState_dealloc) from pypy.interpreter.executioncontext import ExecutionContext 
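The pystate.py hunk just above turns encapsulator into a small class factory: the generated capsule owns a raw PyThreadState structure, runs an optional dealloc callback (here dropping the reference to the per-thread dict) before freeing it, and a capsule built with space=None owns nothing at all. A rough plain-Python restatement of that ownership pattern, with made-up names (make_capsule_class, release, a dict standing in for the lltype structure) instead of the real lltype.malloc/free and __del__ machinery:

# Illustration only -- not the cpyext implementation.
def make_capsule_class(allocate, dealloc=None):
    class MemoryCapsule(object):
        def __init__(self, space):
            self.space = space
            # A "null" capsule (space is None) owns no memory at all.
            self.memory = allocate() if space is not None else None

        def release(self):  # stands in for __del__ + lltype.free
            if self.memory is not None:
                if dealloc is not None and self.space is not None:
                    dealloc(self.memory, self.space)
                self.memory = None
    return MemoryCapsule

def threadstate_dealloc(ts, space):
    # Corresponds to Py_DecRef(space, ts.c_dict): drop the reference the
    # thread state keeps on its per-thread dictionary before freeing it.
    ts['dict'] = None

ThreadStateCapsule = make_capsule_class(lambda: {'dict': {}},
                                        dealloc=threadstate_dealloc)

capsule = ThreadStateCapsule('some-space')
assert capsule.memory['dict'] == {}
capsule.release()
assert capsule.memory is None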
-ExecutionContext.cpyext_threadstate = ThreadStateCapsule(alloc=False) +ExecutionContext.cpyext_threadstate = ThreadStateCapsule(None) class InterpreterState(object): def __init__(self, space): self.interpreter_state = lltype.malloc( PyInterpreterState.TO, flavor='raw', zero=True, immortal=True) - def new_thread_state(self): - capsule = ThreadStateCapsule() + def new_thread_state(self, space): + capsule = ThreadStateCapsule(space) ts = capsule.memory ts.c_interp = self.interpreter_state + ts.c_dict = make_ref(space, space.newdict()) return capsule def get_thread_state(self, space): ec = space.getexecutioncontext() - return self._get_thread_state(ec).memory + return self._get_thread_state(space, ec).memory - def _get_thread_state(self, ec): + def _get_thread_state(self, space, ec): if ec.cpyext_threadstate.memory == lltype.nullptr(PyThreadState.TO): - ec.cpyext_threadstate = self.new_thread_state() + ec.cpyext_threadstate = self.new_thread_state(space) return ec.cpyext_threadstate @@ -81,6 +96,11 @@ state = space.fromcache(InterpreterState) return state.get_thread_state(space) + at cpython_api([], PyObject, error=CANNOT_FAIL) +def PyThreadState_GetDict(space): + state = space.fromcache(InterpreterState) + return state.get_thread_state(space).c_dict + @cpython_api([PyThreadState], PyThreadState, error=CANNOT_FAIL) def PyThreadState_Swap(space, tstate): """Swap the current thread state with the thread state given by the argument diff --git a/pypy/module/cpyext/setobject.py b/pypy/module/cpyext/setobject.py --- a/pypy/module/cpyext/setobject.py +++ b/pypy/module/cpyext/setobject.py @@ -54,6 +54,20 @@ return 0 + at cpython_api([PyObject], PyObject) +def PySet_Pop(space, w_set): + """Return a new reference to an arbitrary object in the set, and removes the + object from the set. Return NULL on failure. Raise KeyError if the + set is empty. Raise a SystemError if set is an not an instance of + set or its subtype.""" + return space.call_method(w_set, "pop") + + at cpython_api([PyObject], rffi.INT_real, error=-1) +def PySet_Clear(space, w_set): + """Empty an existing set of all elements.""" + space.call_method(w_set, 'clear') + return 0 + @cpython_api([PyObject], Py_ssize_t, error=CANNOT_FAIL) def PySet_GET_SIZE(space, w_s): """Macro form of PySet_Size() without error checking.""" diff --git a/pypy/module/cpyext/stubs.py b/pypy/module/cpyext/stubs.py --- a/pypy/module/cpyext/stubs.py +++ b/pypy/module/cpyext/stubs.py @@ -1755,19 +1755,6 @@ building-up new frozensets with PySet_Add().""" raise NotImplementedError - at cpython_api([PyObject], PyObject) -def PySet_Pop(space, set): - """Return a new reference to an arbitrary object in the set, and removes the - object from the set. Return NULL on failure. Raise KeyError if the - set is empty. 
Raise a SystemError if set is an not an instance of - set or its subtype.""" - raise NotImplementedError - - at cpython_api([PyObject], rffi.INT_real, error=-1) -def PySet_Clear(space, set): - """Empty an existing set of all elements.""" - raise NotImplementedError - @cpython_api([rffi.CCHARP, Py_ssize_t, rffi.CCHARP, rffi.CCHARP], PyObject) def PyString_Decode(space, s, size, encoding, errors): """Create an object by decoding size bytes of the encoded buffer s using the diff --git a/pypy/module/cpyext/test/test_dictobject.py b/pypy/module/cpyext/test/test_dictobject.py --- a/pypy/module/cpyext/test/test_dictobject.py +++ b/pypy/module/cpyext/test/test_dictobject.py @@ -2,6 +2,7 @@ from pypy.module.cpyext.test.test_api import BaseApiTest from pypy.module.cpyext.api import Py_ssize_tP, PyObjectP from pypy.module.cpyext.pyobject import make_ref, from_ref +from pypy.interpreter.error import OperationError class TestDictObject(BaseApiTest): def test_dict(self, space, api): @@ -110,3 +111,13 @@ assert space.eq_w(space.len(w_copy), space.len(w_dict)) assert space.eq_w(w_copy, w_dict) + + def test_dictproxy(self, space, api): + w_dict = space.sys.get('modules') + w_proxy = api.PyDictProxy_New(w_dict) + assert space.is_true(space.contains(w_proxy, space.wrap('sys'))) + raises(OperationError, space.setitem, + w_proxy, space.wrap('sys'), space.w_None) + raises(OperationError, space.delitem, + w_proxy, space.wrap('sys')) + raises(OperationError, space.call_method, w_proxy, 'clear') diff --git a/pypy/module/cpyext/test/test_pystate.py b/pypy/module/cpyext/test/test_pystate.py --- a/pypy/module/cpyext/test/test_pystate.py +++ b/pypy/module/cpyext/test/test_pystate.py @@ -2,6 +2,7 @@ from pypy.module.cpyext.test.test_api import BaseApiTest from pypy.rpython.lltypesystem.lltype import nullptr from pypy.module.cpyext.pystate import PyInterpreterState, PyThreadState +from pypy.module.cpyext.pyobject import from_ref class AppTestThreads(AppTestCpythonExtensionBase): def test_allow_threads(self): @@ -49,3 +50,10 @@ api.PyEval_AcquireThread(tstate) api.PyEval_ReleaseThread(tstate) + + def test_threadstate_dict(self, space, api): + ts = api.PyThreadState_Get() + ref = ts.c_dict + assert ref == api.PyThreadState_GetDict() + w_obj = from_ref(space, ref) + assert space.isinstance_w(w_obj, space.w_dict) diff --git a/pypy/module/cpyext/test/test_setobject.py b/pypy/module/cpyext/test/test_setobject.py --- a/pypy/module/cpyext/test/test_setobject.py +++ b/pypy/module/cpyext/test/test_setobject.py @@ -32,3 +32,13 @@ w_set = api.PySet_New(space.wrap([1,2,3,4])) assert api.PySet_Contains(w_set, space.wrap(1)) assert not api.PySet_Contains(w_set, space.wrap(0)) + + def test_set_pop_clear(self, space, api): + w_set = api.PySet_New(space.wrap([1,2,3,4])) + w_obj = api.PySet_Pop(w_set) + assert space.int_w(w_obj) in (1,2,3,4) + assert space.len_w(w_set) == 3 + api.PySet_Clear(w_set) + assert space.len_w(w_set) == 0 + + diff --git a/pypy/module/cpyext/test/test_unicodeobject.py b/pypy/module/cpyext/test/test_unicodeobject.py --- a/pypy/module/cpyext/test/test_unicodeobject.py +++ b/pypy/module/cpyext/test/test_unicodeobject.py @@ -420,3 +420,12 @@ w_seq = space.wrap([u'a', u'b']) w_joined = api.PyUnicode_Join(w_sep, w_seq) assert space.unwrap(w_joined) == u'ab' + + def test_fromordinal(self, space, api): + w_char = api.PyUnicode_FromOrdinal(65) + assert space.unwrap(w_char) == u'A' + w_char = api.PyUnicode_FromOrdinal(0) + assert space.unwrap(w_char) == u'\0' + w_char = api.PyUnicode_FromOrdinal(0xFFFF) + assert 
space.unwrap(w_char) == u'\uFFFF' + diff --git a/pypy/module/cpyext/unicodeobject.py b/pypy/module/cpyext/unicodeobject.py --- a/pypy/module/cpyext/unicodeobject.py +++ b/pypy/module/cpyext/unicodeobject.py @@ -395,6 +395,16 @@ w_str = space.wrap(rffi.charpsize2str(s, size)) return space.call_method(w_str, 'decode', space.wrap("utf-8")) + at cpython_api([rffi.INT_real], PyObject) +def PyUnicode_FromOrdinal(space, ordinal): + """Create a Unicode Object from the given Unicode code point ordinal. + + The ordinal must be in range(0x10000) on narrow Python builds + (UCS2), and range(0x110000) on wide builds (UCS4). A ValueError is + raised in case it is not.""" + w_ordinal = space.wrap(rffi.cast(lltype.Signed, ordinal)) + return space.call_function(space.builtin.get('unichr'), w_ordinal) + @cpython_api([PyObjectP, Py_ssize_t], rffi.INT_real, error=-1) def PyUnicode_Resize(space, ref, newsize): # XXX always create a new string so far diff --git a/pypy/module/oracle/interp_error.py b/pypy/module/oracle/interp_error.py --- a/pypy/module/oracle/interp_error.py +++ b/pypy/module/oracle/interp_error.py @@ -72,7 +72,7 @@ get(space).w_InternalError, space.wrap("No Oracle error?")) - self.code = codeptr[0] + self.code = rffi.cast(lltype.Signed, codeptr[0]) self.w_message = config.w_string(space, textbuf) finally: lltype.free(codeptr, flavor='raw') diff --git a/pypy/module/oracle/interp_variable.py b/pypy/module/oracle/interp_variable.py --- a/pypy/module/oracle/interp_variable.py +++ b/pypy/module/oracle/interp_variable.py @@ -359,14 +359,14 @@ # Verifies that truncation or other problems did not take place on # retrieve. if self.isVariableLength: - if rffi.cast(lltype.Signed, self.returnCode[pos]) != 0: + error_code = rffi.cast(lltype.Signed, self.returnCode[pos]) + if error_code != 0: error = W_Error(space, self.environment, "Variable_VerifyFetch()", 0) - error.code = self.returnCode[pos] + error.code = error_code error.message = space.wrap( "column at array pos %d fetched with error: %d" % - (pos, - rffi.cast(lltype.Signed, self.returnCode[pos]))) + (pos, error_code)) w_error = get(space).w_DatabaseError raise OperationError(get(space).w_DatabaseError, diff --git a/pypy/objspace/fake/objspace.py b/pypy/objspace/fake/objspace.py --- a/pypy/objspace/fake/objspace.py +++ b/pypy/objspace/fake/objspace.py @@ -326,4 +326,5 @@ return w_some_obj() FakeObjSpace.sys = FakeModule() FakeObjSpace.sys.filesystemencoding = 'foobar' +FakeObjSpace.sys.defaultencoding = 'ascii' FakeObjSpace.builtin = FakeModule() From noreply at buildbot.pypy.org Thu Feb 16 09:51:25 2012 From: noreply at buildbot.pypy.org (bivab) Date: Thu, 16 Feb 2012 09:51:25 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: Add the operation KEEPALIVE to the test for noops Message-ID: <20120216085125.D53958204C@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: ppc-jit-backend Changeset: r52537:7886a5225210 Date: 2012-02-16 00:43 -0800 http://bitbucket.org/pypy/pypy/changeset/7886a5225210/ Log: Add the operation KEEPALIVE to the test for noops diff --git a/pypy/jit/backend/test/runner_test.py b/pypy/jit/backend/test/runner_test.py --- a/pypy/jit/backend/test/runner_test.py +++ b/pypy/jit/backend/test/runner_test.py @@ -1677,6 +1677,7 @@ c_box = self.alloc_string("hi there").constbox() c_nest = ConstInt(0) self.execute_operation(rop.DEBUG_MERGE_POINT, [c_box, c_nest], 'void') + self.execute_operation(rop.KEEPALIVE, [c_box], 'void') self.execute_operation(rop.JIT_DEBUG, [c_box, c_nest, c_nest, c_nest, c_nest], 'void') From noreply 
at buildbot.pypy.org Thu Feb 16 09:51:27 2012 From: noreply at buildbot.pypy.org (bivab) Date: Thu, 16 Feb 2012 09:51:27 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: Implement KEEPALIVE in the ppc backend Message-ID: <20120216085127.20D5F82B1F@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: ppc-jit-backend Changeset: r52538:342799dfe20e Date: 2012-02-16 00:43 -0800 http://bitbucket.org/pypy/pypy/changeset/342799dfe20e/ Log: Implement KEEPALIVE in the ppc backend diff --git a/pypy/jit/backend/ppc/opassembler.py b/pypy/jit/backend/ppc/opassembler.py --- a/pypy/jit/backend/ppc/opassembler.py +++ b/pypy/jit/backend/ppc/opassembler.py @@ -882,6 +882,7 @@ pass emit_jit_debug = emit_debug_merge_point + emit_keepalive = emit_debug_merge_point def emit_cond_call_gc_wb(self, op, arglocs, regalloc): # Write code equivalent to write_barrier() in the GC: it checks diff --git a/pypy/jit/backend/ppc/regalloc.py b/pypy/jit/backend/ppc/regalloc.py --- a/pypy/jit/backend/ppc/regalloc.py +++ b/pypy/jit/backend/ppc/regalloc.py @@ -811,6 +811,7 @@ prepare_debug_merge_point = void prepare_jit_debug = void + prepare_keepalive = void def prepare_cond_call_gc_wb(self, op): assert op.result is None From noreply at buildbot.pypy.org Thu Feb 16 10:09:57 2012 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 16 Feb 2012 10:09:57 +0100 (CET) Subject: [pypy-commit] pypy stm-gc: Goal: have a whole-program tracker that can propagate these hints. Message-ID: <20120216090957.1BEB18204C@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-gc Changeset: r52539:275aa34cf209 Date: 2012-02-16 09:18 +0100 http://bitbucket.org/pypy/pypy/changeset/275aa34cf209/ Log: Goal: have a whole-program tracker that can propagate these hints. diff --git a/pypy/interpreter/pyframe.py b/pypy/interpreter/pyframe.py --- a/pypy/interpreter/pyframe.py +++ b/pypy/interpreter/pyframe.py @@ -39,6 +39,10 @@ __metaclass__ = extendabletype + _immutable_fields_ = ['pycode', 'locals_stack_w', 'cells'] + # note: 'locals_stack_w' is immutable because it contains always the + # same list, but what the list itself contains changes + frame_finished_execution = False last_instr = -1 last_exception = None @@ -147,6 +151,9 @@ w_inputvalue is for generator.send()) and operr is for generator.throw()). """ + self = hint(self, stm_write=True) + hint(self.locals_stack_w, stm_write=True) + hint(self.cells, stm_immutable=True) # the following 'assert' is an annotation hint: it hides from # the annotator all methods that are defined in PyFrame but # overridden in the {,Host}FrameClass subclasses of PyFrame. From noreply at buildbot.pypy.org Thu Feb 16 10:09:58 2012 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 16 Feb 2012 10:09:58 +0100 (CET) Subject: [pypy-commit] pypy default: typo Message-ID: <20120216090958.497098204C@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r52540:b0a6accd6d37 Date: 2012-02-16 09:21 +0100 http://bitbucket.org/pypy/pypy/changeset/b0a6accd6d37/ Log: typo diff --git a/pypy/interpreter/pyframe.py b/pypy/interpreter/pyframe.py --- a/pypy/interpreter/pyframe.py +++ b/pypy/interpreter/pyframe.py @@ -143,8 +143,8 @@ def execute_frame(self, w_inputvalue=None, operr=None): """Execute this frame. Main entry point to the interpreter. The optional arguments are there to handle a generator's frame: - w_inputvalue is for generator.send()) and operr is for - generator.throw()). + w_inputvalue is for generator.send() and operr is for + generator.throw(). 
""" # the following 'assert' is an annotation hint: it hides from # the annotator all methods that are defined in PyFrame but From noreply at buildbot.pypy.org Thu Feb 16 10:10:01 2012 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 16 Feb 2012 10:10:01 +0100 (CET) Subject: [pypy-commit] pypy stm-gc: hg merge default Message-ID: <20120216091001.7D04F8204C@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-gc Changeset: r52541:c98a02b2c23e Date: 2012-02-16 09:22 +0100 http://bitbucket.org/pypy/pypy/changeset/c98a02b2c23e/ Log: hg merge default diff --git a/lib_pypy/_subprocess.py b/lib_pypy/_subprocess.py --- a/lib_pypy/_subprocess.py +++ b/lib_pypy/_subprocess.py @@ -87,7 +87,7 @@ # Now the _subprocess module implementation -from ctypes import c_int as _c_int, byref as _byref +from ctypes import c_int as _c_int, byref as _byref, WinError as _WinError class _handle: def __init__(self, handle): @@ -116,7 +116,7 @@ res = _CreatePipe(_byref(read), _byref(write), None, size) if not res: - raise WindowsError("Error") + raise _WinError() return _handle(read.value), _handle(write.value) @@ -132,7 +132,7 @@ access, inherit, options) if not res: - raise WindowsError("Error") + raise _WinError() return _handle(target.value) DUPLICATE_SAME_ACCESS = 2 @@ -165,7 +165,7 @@ start_dir, _byref(si), _byref(pi)) if not res: - raise WindowsError("Error") + raise _WinError() return _handle(pi.hProcess), _handle(pi.hThread), pi.dwProcessID, pi.dwThreadID STARTF_USESHOWWINDOW = 0x001 @@ -178,7 +178,7 @@ res = _WaitForSingleObject(int(handle), milliseconds) if res < 0: - raise WindowsError("Error") + raise _WinError() return res INFINITE = 0xffffffff @@ -190,7 +190,7 @@ res = _GetExitCodeProcess(int(handle), _byref(code)) if not res: - raise WindowsError("Error") + raise _WinError() return code.value @@ -198,7 +198,7 @@ res = _TerminateProcess(int(handle), exitcode) if not res: - raise WindowsError("Error") + raise _WinError() def GetStdHandle(stdhandle): res = _GetStdHandle(stdhandle) diff --git a/lib_pypy/ctypes_config_cache/pyexpat.ctc.py b/lib_pypy/ctypes_config_cache/pyexpat.ctc.py deleted file mode 100644 --- a/lib_pypy/ctypes_config_cache/pyexpat.ctc.py +++ /dev/null @@ -1,45 +0,0 @@ -""" -'ctypes_configure' source for pyexpat.py. -Run this to rebuild _pyexpat_cache.py. 
-""" - -import ctypes -from ctypes import c_char_p, c_int, c_void_p, c_char -from ctypes_configure import configure -import dumpcache - - -class CConfigure: - _compilation_info_ = configure.ExternalCompilationInfo( - includes = ['expat.h'], - libraries = ['expat'], - pre_include_lines = [ - '#define XML_COMBINED_VERSION (10000*XML_MAJOR_VERSION+100*XML_MINOR_VERSION+XML_MICRO_VERSION)'], - ) - - XML_Char = configure.SimpleType('XML_Char', c_char) - XML_COMBINED_VERSION = configure.ConstantInteger('XML_COMBINED_VERSION') - for name in ['XML_PARAM_ENTITY_PARSING_NEVER', - 'XML_PARAM_ENTITY_PARSING_UNLESS_STANDALONE', - 'XML_PARAM_ENTITY_PARSING_ALWAYS']: - locals()[name] = configure.ConstantInteger(name) - - XML_Encoding = configure.Struct('XML_Encoding',[ - ('data', c_void_p), - ('convert', c_void_p), - ('release', c_void_p), - ('map', c_int * 256)]) - XML_Content = configure.Struct('XML_Content',[ - ('numchildren', c_int), - ('children', c_void_p), - ('name', c_char_p), - ('type', c_int), - ('quant', c_int), - ]) - # this is insanely stupid - XML_FALSE = configure.ConstantInteger('XML_FALSE') - XML_TRUE = configure.ConstantInteger('XML_TRUE') - -config = configure.configure(CConfigure) - -dumpcache.dumpcache2('pyexpat', config) diff --git a/lib_pypy/ctypes_config_cache/test/test_cache.py b/lib_pypy/ctypes_config_cache/test/test_cache.py --- a/lib_pypy/ctypes_config_cache/test/test_cache.py +++ b/lib_pypy/ctypes_config_cache/test/test_cache.py @@ -39,10 +39,6 @@ d = run('resource.ctc.py', '_resource_cache.py') assert 'RLIM_NLIMITS' in d -def test_pyexpat(): - d = run('pyexpat.ctc.py', '_pyexpat_cache.py') - assert 'XML_COMBINED_VERSION' in d - def test_locale(): d = run('locale.ctc.py', '_locale_cache.py') assert 'LC_ALL' in d diff --git a/lib_pypy/datetime.py b/lib_pypy/datetime.py --- a/lib_pypy/datetime.py +++ b/lib_pypy/datetime.py @@ -271,8 +271,9 @@ raise ValueError("%s()=%d, must be in -1439..1439" % (name, offset)) def _check_date_fields(year, month, day): - if not isinstance(year, (int, long)): - raise TypeError('int expected') + for value in [year, day]: + if not isinstance(value, (int, long)): + raise TypeError('int expected') if not MINYEAR <= year <= MAXYEAR: raise ValueError('year must be in %d..%d' % (MINYEAR, MAXYEAR), year) if not 1 <= month <= 12: @@ -282,8 +283,9 @@ raise ValueError('day must be in 1..%d' % dim, day) def _check_time_fields(hour, minute, second, microsecond): - if not isinstance(hour, (int, long)): - raise TypeError('int expected') + for value in [hour, minute, second, microsecond]: + if not isinstance(value, (int, long)): + raise TypeError('int expected') if not 0 <= hour <= 23: raise ValueError('hour must be in 0..23', hour) if not 0 <= minute <= 59: diff --git a/lib_pypy/pyexpat.py b/lib_pypy/pyexpat.py deleted file mode 100644 --- a/lib_pypy/pyexpat.py +++ /dev/null @@ -1,448 +0,0 @@ - -import ctypes -import ctypes.util -from ctypes import c_char_p, c_int, c_void_p, POINTER, c_char, c_wchar_p -import sys - -# load the platform-specific cache made by running pyexpat.ctc.py -from ctypes_config_cache._pyexpat_cache import * - -try: from __pypy__ import builtinify -except ImportError: builtinify = lambda f: f - - -lib = ctypes.CDLL(ctypes.util.find_library('expat')) - - -XML_Content.children = POINTER(XML_Content) -XML_Parser = ctypes.c_void_p # an opaque pointer -assert XML_Char is ctypes.c_char # this assumption is everywhere in -# cpython's expat, let's explode - -def declare_external(name, args, res): - func = getattr(lib, name) - func.args = args - 
func.restype = res - globals()[name] = func - -declare_external('XML_ParserCreate', [c_char_p], XML_Parser) -declare_external('XML_ParserCreateNS', [c_char_p, c_char], XML_Parser) -declare_external('XML_Parse', [XML_Parser, c_char_p, c_int, c_int], c_int) -currents = ['CurrentLineNumber', 'CurrentColumnNumber', - 'CurrentByteIndex'] -for name in currents: - func = getattr(lib, 'XML_Get' + name) - func.args = [XML_Parser] - func.restype = c_int - -declare_external('XML_SetReturnNSTriplet', [XML_Parser, c_int], None) -declare_external('XML_GetSpecifiedAttributeCount', [XML_Parser], c_int) -declare_external('XML_SetParamEntityParsing', [XML_Parser, c_int], None) -declare_external('XML_GetErrorCode', [XML_Parser], c_int) -declare_external('XML_StopParser', [XML_Parser, c_int], None) -declare_external('XML_ErrorString', [c_int], c_char_p) -declare_external('XML_SetBase', [XML_Parser, c_char_p], None) -if XML_COMBINED_VERSION >= 19505: - declare_external('XML_UseForeignDTD', [XML_Parser, c_int], None) - -declare_external('XML_SetUnknownEncodingHandler', [XML_Parser, c_void_p, - c_void_p], None) -declare_external('XML_FreeContentModel', [XML_Parser, POINTER(XML_Content)], - None) -declare_external('XML_ExternalEntityParserCreate', [XML_Parser,c_char_p, - c_char_p], - XML_Parser) - -handler_names = [ - 'StartElement', - 'EndElement', - 'ProcessingInstruction', - 'CharacterData', - 'UnparsedEntityDecl', - 'NotationDecl', - 'StartNamespaceDecl', - 'EndNamespaceDecl', - 'Comment', - 'StartCdataSection', - 'EndCdataSection', - 'Default', - 'DefaultHandlerExpand', - 'NotStandalone', - 'ExternalEntityRef', - 'StartDoctypeDecl', - 'EndDoctypeDecl', - 'EntityDecl', - 'XmlDecl', - 'ElementDecl', - 'AttlistDecl', - ] -if XML_COMBINED_VERSION >= 19504: - handler_names.append('SkippedEntity') -setters = {} - -for name in handler_names: - if name == 'DefaultHandlerExpand': - newname = 'XML_SetDefaultHandlerExpand' - else: - name += 'Handler' - newname = 'XML_Set' + name - cfunc = getattr(lib, newname) - cfunc.args = [XML_Parser, ctypes.c_void_p] - cfunc.result = ctypes.c_int - setters[name] = cfunc - -class ExpatError(Exception): - def __str__(self): - return self.s - -error = ExpatError - -class XMLParserType(object): - specified_attributes = 0 - ordered_attributes = 0 - returns_unicode = 1 - encoding = 'utf-8' - def __init__(self, encoding, namespace_separator, _hook_external_entity=False): - self.returns_unicode = 1 - if encoding: - self.encoding = encoding - if not _hook_external_entity: - if namespace_separator is None: - self.itself = XML_ParserCreate(encoding) - else: - self.itself = XML_ParserCreateNS(encoding, ord(namespace_separator)) - if not self.itself: - raise RuntimeError("Creating parser failed") - self._set_unknown_encoding_handler() - self.storage = {} - self.buffer = None - self.buffer_size = 8192 - self.character_data_handler = None - self.intern = {} - self.__exc_info = None - - def _flush_character_buffer(self): - if not self.buffer: - return - res = self._call_character_handler(''.join(self.buffer)) - self.buffer = [] - return res - - def _call_character_handler(self, buf): - if self.character_data_handler: - self.character_data_handler(buf) - - def _set_unknown_encoding_handler(self): - def UnknownEncoding(encodingData, name, info_p): - info = info_p.contents - s = ''.join([chr(i) for i in range(256)]) - u = s.decode(self.encoding, 'replace') - for i in range(len(u)): - if u[i] == u'\xfffd': - info.map[i] = -1 - else: - info.map[i] = ord(u[i]) - info.data = None - info.convert = None - 
info.release = None - return 1 - - CB = ctypes.CFUNCTYPE(c_int, c_void_p, c_char_p, POINTER(XML_Encoding)) - cb = CB(UnknownEncoding) - self._unknown_encoding_handler = (cb, UnknownEncoding) - XML_SetUnknownEncodingHandler(self.itself, cb, None) - - def _set_error(self, code): - e = ExpatError() - e.code = code - lineno = lib.XML_GetCurrentLineNumber(self.itself) - colno = lib.XML_GetCurrentColumnNumber(self.itself) - e.offset = colno - e.lineno = lineno - err = XML_ErrorString(code)[:200] - e.s = "%s: line: %d, column: %d" % (err, lineno, colno) - e.message = e.s - self._error = e - - def Parse(self, data, is_final=0): - res = XML_Parse(self.itself, data, len(data), is_final) - if res == 0: - self._set_error(XML_GetErrorCode(self.itself)) - if self.__exc_info: - exc_info = self.__exc_info - self.__exc_info = None - raise exc_info[0], exc_info[1], exc_info[2] - else: - raise self._error - self._flush_character_buffer() - return res - - def _sethandler(self, name, real_cb): - setter = setters[name] - try: - cb = self.storage[(name, real_cb)] - except KeyError: - cb = getattr(self, 'get_cb_for_%s' % name)(real_cb) - self.storage[(name, real_cb)] = cb - except TypeError: - # weellll... - cb = getattr(self, 'get_cb_for_%s' % name)(real_cb) - setter(self.itself, cb) - - def _wrap_cb(self, cb): - def f(*args): - try: - return cb(*args) - except: - self.__exc_info = sys.exc_info() - XML_StopParser(self.itself, XML_FALSE) - return f - - def get_cb_for_StartElementHandler(self, real_cb): - def StartElement(unused, name, attrs): - # unpack name and attrs - conv = self.conv - self._flush_character_buffer() - if self.specified_attributes: - max = XML_GetSpecifiedAttributeCount(self.itself) - else: - max = 0 - while attrs[max]: - max += 2 # copied - if self.ordered_attributes: - res = [attrs[i] for i in range(max)] - else: - res = {} - for i in range(0, max, 2): - res[conv(attrs[i])] = conv(attrs[i + 1]) - real_cb(conv(name), res) - StartElement = self._wrap_cb(StartElement) - CB = ctypes.CFUNCTYPE(None, c_void_p, c_char_p, POINTER(c_char_p)) - return CB(StartElement) - - def get_cb_for_ExternalEntityRefHandler(self, real_cb): - def ExternalEntity(unused, context, base, sysId, pubId): - self._flush_character_buffer() - conv = self.conv - res = real_cb(conv(context), conv(base), conv(sysId), - conv(pubId)) - if res is None: - return 0 - return res - ExternalEntity = self._wrap_cb(ExternalEntity) - CB = ctypes.CFUNCTYPE(c_int, c_void_p, *([c_char_p] * 4)) - return CB(ExternalEntity) - - def get_cb_for_CharacterDataHandler(self, real_cb): - def CharacterData(unused, s, lgt): - if self.buffer is None: - self._call_character_handler(self.conv(s[:lgt])) - else: - if len(self.buffer) + lgt > self.buffer_size: - self._flush_character_buffer() - if self.character_data_handler is None: - return - if lgt >= self.buffer_size: - self._call_character_handler(s[:lgt]) - self.buffer = [] - else: - self.buffer.append(s[:lgt]) - CharacterData = self._wrap_cb(CharacterData) - CB = ctypes.CFUNCTYPE(None, c_void_p, POINTER(c_char), c_int) - return CB(CharacterData) - - def get_cb_for_NotStandaloneHandler(self, real_cb): - def NotStandaloneHandler(unused): - return real_cb() - NotStandaloneHandler = self._wrap_cb(NotStandaloneHandler) - CB = ctypes.CFUNCTYPE(c_int, c_void_p) - return CB(NotStandaloneHandler) - - def get_cb_for_EntityDeclHandler(self, real_cb): - def EntityDecl(unused, ename, is_param, value, value_len, base, - system_id, pub_id, not_name): - self._flush_character_buffer() - if not value: - value = None - 
else: - value = value[:value_len] - args = [ename, is_param, value, base, system_id, - pub_id, not_name] - args = [self.conv(arg) for arg in args] - real_cb(*args) - EntityDecl = self._wrap_cb(EntityDecl) - CB = ctypes.CFUNCTYPE(None, c_void_p, c_char_p, c_int, c_char_p, - c_int, c_char_p, c_char_p, c_char_p, c_char_p) - return CB(EntityDecl) - - def _conv_content_model(self, model): - children = tuple([self._conv_content_model(model.children[i]) - for i in range(model.numchildren)]) - return (model.type, model.quant, self.conv(model.name), - children) - - def get_cb_for_ElementDeclHandler(self, real_cb): - def ElementDecl(unused, name, model): - self._flush_character_buffer() - modelobj = self._conv_content_model(model[0]) - real_cb(name, modelobj) - XML_FreeContentModel(self.itself, model) - - ElementDecl = self._wrap_cb(ElementDecl) - CB = ctypes.CFUNCTYPE(None, c_void_p, c_char_p, POINTER(XML_Content)) - return CB(ElementDecl) - - def _new_callback_for_string_len(name, sign): - def get_callback_for_(self, real_cb): - def func(unused, s, len): - self._flush_character_buffer() - arg = self.conv(s[:len]) - real_cb(arg) - func.func_name = name - func = self._wrap_cb(func) - CB = ctypes.CFUNCTYPE(*sign) - return CB(func) - get_callback_for_.func_name = 'get_cb_for_' + name - return get_callback_for_ - - for name in ['DefaultHandlerExpand', - 'DefaultHandler']: - sign = [None, c_void_p, POINTER(c_char), c_int] - name = 'get_cb_for_' + name - locals()[name] = _new_callback_for_string_len(name, sign) - - def _new_callback_for_starargs(name, sign): - def get_callback_for_(self, real_cb): - def func(unused, *args): - self._flush_character_buffer() - args = [self.conv(arg) for arg in args] - real_cb(*args) - func.func_name = name - func = self._wrap_cb(func) - CB = ctypes.CFUNCTYPE(*sign) - return CB(func) - get_callback_for_.func_name = 'get_cb_for_' + name - return get_callback_for_ - - for name, num_or_sign in [ - ('EndElementHandler', 1), - ('ProcessingInstructionHandler', 2), - ('UnparsedEntityDeclHandler', 5), - ('NotationDeclHandler', 4), - ('StartNamespaceDeclHandler', 2), - ('EndNamespaceDeclHandler', 1), - ('CommentHandler', 1), - ('StartCdataSectionHandler', 0), - ('EndCdataSectionHandler', 0), - ('StartDoctypeDeclHandler', [None, c_void_p] + [c_char_p] * 3 + [c_int]), - ('XmlDeclHandler', [None, c_void_p, c_char_p, c_char_p, c_int]), - ('AttlistDeclHandler', [None, c_void_p] + [c_char_p] * 4 + [c_int]), - ('EndDoctypeDeclHandler', 0), - ('SkippedEntityHandler', [None, c_void_p, c_char_p, c_int]), - ]: - if isinstance(num_or_sign, int): - sign = [None, c_void_p] + [c_char_p] * num_or_sign - else: - sign = num_or_sign - name = 'get_cb_for_' + name - locals()[name] = _new_callback_for_starargs(name, sign) - - def conv_unicode(self, s): - if s is None or isinstance(s, int): - return s - return s.decode(self.encoding, "strict") - - def __setattr__(self, name, value): - # forest of ifs... 
- if name in ['ordered_attributes', - 'returns_unicode', 'specified_attributes']: - if value: - if name == 'returns_unicode': - self.conv = self.conv_unicode - self.__dict__[name] = 1 - else: - if name == 'returns_unicode': - self.conv = lambda s: s - self.__dict__[name] = 0 - elif name == 'buffer_text': - if value: - self.buffer = [] - else: - self._flush_character_buffer() - self.buffer = None - elif name == 'buffer_size': - if not isinstance(value, int): - raise TypeError("Expected int") - if value <= 0: - raise ValueError("Expected positive int") - self.__dict__[name] = value - elif name == 'namespace_prefixes': - XML_SetReturnNSTriplet(self.itself, int(bool(value))) - elif name in setters: - if name == 'CharacterDataHandler': - # XXX we need to flush buffer here - self._flush_character_buffer() - self.character_data_handler = value - #print name - #print value - #print - self._sethandler(name, value) - else: - self.__dict__[name] = value - - def SetParamEntityParsing(self, arg): - XML_SetParamEntityParsing(self.itself, arg) - - if XML_COMBINED_VERSION >= 19505: - def UseForeignDTD(self, arg=True): - if arg: - flag = XML_TRUE - else: - flag = XML_FALSE - XML_UseForeignDTD(self.itself, flag) - - def __getattr__(self, name): - if name == 'buffer_text': - return self.buffer is not None - elif name in currents: - return getattr(lib, 'XML_Get' + name)(self.itself) - elif name == 'ErrorColumnNumber': - return lib.XML_GetCurrentColumnNumber(self.itself) - elif name == 'ErrorLineNumber': - return lib.XML_GetCurrentLineNumber(self.itself) - return self.__dict__[name] - - def ParseFile(self, file): - return self.Parse(file.read(), False) - - def SetBase(self, base): - XML_SetBase(self.itself, base) - - def ExternalEntityParserCreate(self, context, encoding=None): - """ExternalEntityParserCreate(context[, encoding]) - Create a parser for parsing an external entity based on the - information passed to the ExternalEntityRefHandler.""" - new_parser = XMLParserType(encoding, None, True) - new_parser.itself = XML_ExternalEntityParserCreate(self.itself, - context, encoding) - new_parser._set_unknown_encoding_handler() - return new_parser - - at builtinify -def ErrorString(errno): - return XML_ErrorString(errno)[:200] - - at builtinify -def ParserCreate(encoding=None, namespace_separator=None, intern=None): - if (not isinstance(encoding, str) and - not encoding is None): - raise TypeError("ParserCreate() argument 1 must be string or None, not %s" % encoding.__class__.__name__) - if (not isinstance(namespace_separator, str) and - not namespace_separator is None): - raise TypeError("ParserCreate() argument 2 must be string or None, not %s" % namespace_separator.__class__.__name__) - if namespace_separator is not None: - if len(namespace_separator) > 1: - raise ValueError('namespace_separator must be at most one character, omitted, or None') - if len(namespace_separator) == 0: - namespace_separator = None - return XMLParserType(encoding, namespace_separator) diff --git a/lib_pypy/pypy_test/test_pyexpat.py b/lib_pypy/pypy_test/test_pyexpat.py deleted file mode 100644 --- a/lib_pypy/pypy_test/test_pyexpat.py +++ /dev/null @@ -1,665 +0,0 @@ -# XXX TypeErrors on calling handlers, or on bad return values from a -# handler, are obscure and unhelpful. 
- -from __future__ import absolute_import -import StringIO, sys -import unittest, py - -from lib_pypy.ctypes_config_cache import rebuild -rebuild.rebuild_one('pyexpat.ctc.py') - -from lib_pypy import pyexpat -#from xml.parsers import expat -expat = pyexpat - -from test.test_support import sortdict, run_unittest - - -class TestSetAttribute: - def setup_method(self, meth): - self.parser = expat.ParserCreate(namespace_separator='!') - self.set_get_pairs = [ - [0, 0], - [1, 1], - [2, 1], - [0, 0], - ] - - def test_returns_unicode(self): - for x, y in self.set_get_pairs: - self.parser.returns_unicode = x - assert self.parser.returns_unicode == y - - def test_ordered_attributes(self): - for x, y in self.set_get_pairs: - self.parser.ordered_attributes = x - assert self.parser.ordered_attributes == y - - def test_specified_attributes(self): - for x, y in self.set_get_pairs: - self.parser.specified_attributes = x - assert self.parser.specified_attributes == y - - -data = '''\ - - - - - - - - - -%unparsed_entity; -]> - - - - Contents of subelements - - -&external_entity; -&skipped_entity; - -''' - - -# Produce UTF-8 output -class TestParse: - class Outputter: - def __init__(self): - self.out = [] - - def StartElementHandler(self, name, attrs): - self.out.append('Start element: ' + repr(name) + ' ' + - sortdict(attrs)) - - def EndElementHandler(self, name): - self.out.append('End element: ' + repr(name)) - - def CharacterDataHandler(self, data): - data = data.strip() - if data: - self.out.append('Character data: ' + repr(data)) - - def ProcessingInstructionHandler(self, target, data): - self.out.append('PI: ' + repr(target) + ' ' + repr(data)) - - def StartNamespaceDeclHandler(self, prefix, uri): - self.out.append('NS decl: ' + repr(prefix) + ' ' + repr(uri)) - - def EndNamespaceDeclHandler(self, prefix): - self.out.append('End of NS decl: ' + repr(prefix)) - - def StartCdataSectionHandler(self): - self.out.append('Start of CDATA section') - - def EndCdataSectionHandler(self): - self.out.append('End of CDATA section') - - def CommentHandler(self, text): - self.out.append('Comment: ' + repr(text)) - - def NotationDeclHandler(self, *args): - name, base, sysid, pubid = args - self.out.append('Notation declared: %s' %(args,)) - - def UnparsedEntityDeclHandler(self, *args): - entityName, base, systemId, publicId, notationName = args - self.out.append('Unparsed entity decl: %s' %(args,)) - - def NotStandaloneHandler(self): - self.out.append('Not standalone') - return 1 - - def ExternalEntityRefHandler(self, *args): - context, base, sysId, pubId = args - self.out.append('External entity ref: %s' %(args[1:],)) - return 1 - - def StartDoctypeDeclHandler(self, *args): - self.out.append(('Start doctype', args)) - return 1 - - def EndDoctypeDeclHandler(self): - self.out.append("End doctype") - return 1 - - def EntityDeclHandler(self, *args): - self.out.append(('Entity declaration', args)) - return 1 - - def XmlDeclHandler(self, *args): - self.out.append(('XML declaration', args)) - return 1 - - def ElementDeclHandler(self, *args): - self.out.append(('Element declaration', args)) - return 1 - - def AttlistDeclHandler(self, *args): - self.out.append(('Attribute list declaration', args)) - return 1 - - def SkippedEntityHandler(self, *args): - self.out.append(("Skipped entity", args)) - return 1 - - def DefaultHandler(self, userData): - pass - - def DefaultHandlerExpand(self, userData): - pass - - handler_names = [ - 'StartElementHandler', 'EndElementHandler', 'CharacterDataHandler', - 
'ProcessingInstructionHandler', 'UnparsedEntityDeclHandler', - 'NotationDeclHandler', 'StartNamespaceDeclHandler', - 'EndNamespaceDeclHandler', 'CommentHandler', - 'StartCdataSectionHandler', 'EndCdataSectionHandler', 'DefaultHandler', - 'DefaultHandlerExpand', 'NotStandaloneHandler', - 'ExternalEntityRefHandler', 'StartDoctypeDeclHandler', - 'EndDoctypeDeclHandler', 'EntityDeclHandler', 'XmlDeclHandler', - 'ElementDeclHandler', 'AttlistDeclHandler', 'SkippedEntityHandler', - ] - - def test_utf8(self): - - out = self.Outputter() - parser = expat.ParserCreate(namespace_separator='!') - for name in self.handler_names: - setattr(parser, name, getattr(out, name)) - parser.returns_unicode = 0 - parser.Parse(data, 1) - - # Verify output - operations = out.out - expected_operations = [ - ('XML declaration', (u'1.0', u'iso-8859-1', 0)), - 'PI: \'xml-stylesheet\' \'href="stylesheet.css"\'', - "Comment: ' comment data '", - "Not standalone", - ("Start doctype", ('quotations', 'quotations.dtd', None, 1)), - ('Element declaration', (u'root', (2, 0, None, ()))), - ('Attribute list declaration', ('root', 'attr1', 'CDATA', None, - 1)), - ('Attribute list declaration', ('root', 'attr2', 'CDATA', None, - 0)), - "Notation declared: ('notation', None, 'notation.jpeg', None)", - ('Entity declaration', ('acirc', 0, '\xc3\xa2', None, None, None, None)), - ('Entity declaration', ('external_entity', 0, None, None, - 'entity.file', None, None)), - "Unparsed entity decl: ('unparsed_entity', None, 'entity.file', None, 'notation')", - "Not standalone", - "End doctype", - "Start element: 'root' {'attr1': 'value1', 'attr2': 'value2\\xe1\\xbd\\x80'}", - "NS decl: 'myns' 'http://www.python.org/namespace'", - "Start element: 'http://www.python.org/namespace!subelement' {}", - "Character data: 'Contents of subelements'", - "End element: 'http://www.python.org/namespace!subelement'", - "End of NS decl: 'myns'", - "Start element: 'sub2' {}", - 'Start of CDATA section', - "Character data: 'contents of CDATA section'", - 'End of CDATA section', - "End element: 'sub2'", - "External entity ref: (None, 'entity.file', None)", - ('Skipped entity', ('skipped_entity', 0)), - "End element: 'root'", - ] - for operation, expected_operation in zip(operations, expected_operations): - assert operation == expected_operation - - def test_unicode(self): - # Try the parse again, this time producing Unicode output - out = self.Outputter() - parser = expat.ParserCreate(namespace_separator='!') - parser.returns_unicode = 1 - for name in self.handler_names: - setattr(parser, name, getattr(out, name)) - - parser.Parse(data, 1) - - operations = out.out - expected_operations = [ - ('XML declaration', (u'1.0', u'iso-8859-1', 0)), - 'PI: u\'xml-stylesheet\' u\'href="stylesheet.css"\'', - "Comment: u' comment data '", - "Not standalone", - ("Start doctype", ('quotations', 'quotations.dtd', None, 1)), - ('Element declaration', (u'root', (2, 0, None, ()))), - ('Attribute list declaration', ('root', 'attr1', 'CDATA', None, - 1)), - ('Attribute list declaration', ('root', 'attr2', 'CDATA', None, - 0)), - "Notation declared: (u'notation', None, u'notation.jpeg', None)", - ('Entity declaration', (u'acirc', 0, u'\xe2', None, None, None, - None)), - ('Entity declaration', (u'external_entity', 0, None, None, - u'entity.file', None, None)), - "Unparsed entity decl: (u'unparsed_entity', None, u'entity.file', None, u'notation')", - "Not standalone", - "End doctype", - "Start element: u'root' {u'attr1': u'value1', u'attr2': u'value2\\u1f40'}", - "NS decl: u'myns' 
u'http://www.python.org/namespace'", - "Start element: u'http://www.python.org/namespace!subelement' {}", - "Character data: u'Contents of subelements'", - "End element: u'http://www.python.org/namespace!subelement'", - "End of NS decl: u'myns'", - "Start element: u'sub2' {}", - 'Start of CDATA section', - "Character data: u'contents of CDATA section'", - 'End of CDATA section', - "End element: u'sub2'", - "External entity ref: (None, u'entity.file', None)", - ('Skipped entity', ('skipped_entity', 0)), - "End element: u'root'", - ] - for operation, expected_operation in zip(operations, expected_operations): - assert operation == expected_operation - - def test_parse_file(self): - # Try parsing a file - out = self.Outputter() - parser = expat.ParserCreate(namespace_separator='!') - parser.returns_unicode = 1 - for name in self.handler_names: - setattr(parser, name, getattr(out, name)) - file = StringIO.StringIO(data) - - parser.ParseFile(file) - - operations = out.out - expected_operations = [ - ('XML declaration', (u'1.0', u'iso-8859-1', 0)), - 'PI: u\'xml-stylesheet\' u\'href="stylesheet.css"\'', - "Comment: u' comment data '", - "Not standalone", - ("Start doctype", ('quotations', 'quotations.dtd', None, 1)), - ('Element declaration', (u'root', (2, 0, None, ()))), - ('Attribute list declaration', ('root', 'attr1', 'CDATA', None, - 1)), - ('Attribute list declaration', ('root', 'attr2', 'CDATA', None, - 0)), - "Notation declared: (u'notation', None, u'notation.jpeg', None)", - ('Entity declaration', ('acirc', 0, u'\xe2', None, None, None, None)), - ('Entity declaration', (u'external_entity', 0, None, None, u'entity.file', None, None)), - "Unparsed entity decl: (u'unparsed_entity', None, u'entity.file', None, u'notation')", - "Not standalone", - "End doctype", - "Start element: u'root' {u'attr1': u'value1', u'attr2': u'value2\\u1f40'}", - "NS decl: u'myns' u'http://www.python.org/namespace'", - "Start element: u'http://www.python.org/namespace!subelement' {}", - "Character data: u'Contents of subelements'", - "End element: u'http://www.python.org/namespace!subelement'", - "End of NS decl: u'myns'", - "Start element: u'sub2' {}", - 'Start of CDATA section', - "Character data: u'contents of CDATA section'", - 'End of CDATA section', - "End element: u'sub2'", - "External entity ref: (None, u'entity.file', None)", - ('Skipped entity', ('skipped_entity', 0)), - "End element: u'root'", - ] - for operation, expected_operation in zip(operations, expected_operations): - assert operation == expected_operation - - -class TestNamespaceSeparator: - def test_legal(self): - # Tests that make sure we get errors when the namespace_separator value - # is illegal, and that we don't for good values: - expat.ParserCreate() - expat.ParserCreate(namespace_separator=None) - expat.ParserCreate(namespace_separator=' ') - - def test_illegal(self): - try: - expat.ParserCreate(namespace_separator=42) - raise AssertionError - except TypeError, e: - assert str(e) == ( - 'ParserCreate() argument 2 must be string or None, not int') - - try: - expat.ParserCreate(namespace_separator='too long') - raise AssertionError - except ValueError, e: - assert str(e) == ( - 'namespace_separator must be at most one character, omitted, or None') - - def test_zero_length(self): - # ParserCreate() needs to accept a namespace_separator of zero length - # to satisfy the requirements of RDF applications that are required - # to simply glue together the namespace URI and the localname. 
Though - # considered a wart of the RDF specifications, it needs to be supported. - # - # See XML-SIG mailing list thread starting with - # http://mail.python.org/pipermail/xml-sig/2001-April/005202.html - # - expat.ParserCreate(namespace_separator='') # too short - - -class TestInterning: - def test(self): - py.test.skip("Not working") - # Test the interning machinery. - p = expat.ParserCreate() - L = [] - def collector(name, *args): - L.append(name) - p.StartElementHandler = collector - p.EndElementHandler = collector - p.Parse(" ", 1) - tag = L[0] - assert len(L) == 6 - for entry in L: - # L should have the same string repeated over and over. - assert tag is entry - - -class TestBufferText: - def setup_method(self, meth): - self.stuff = [] - self.parser = expat.ParserCreate() - self.parser.buffer_text = 1 - self.parser.CharacterDataHandler = self.CharacterDataHandler - - def check(self, expected, label): - assert self.stuff == expected, ( - "%s\nstuff = %r\nexpected = %r" - % (label, self.stuff, map(unicode, expected))) - - def CharacterDataHandler(self, text): - self.stuff.append(text) - - def StartElementHandler(self, name, attrs): - self.stuff.append("<%s>" % name) - bt = attrs.get("buffer-text") - if bt == "yes": - self.parser.buffer_text = 1 - elif bt == "no": - self.parser.buffer_text = 0 - - def EndElementHandler(self, name): - self.stuff.append("" % name) - - def CommentHandler(self, data): - self.stuff.append("" % data) - - def setHandlers(self, handlers=[]): - for name in handlers: - setattr(self.parser, name, getattr(self, name)) - - def test_default_to_disabled(self): - parser = expat.ParserCreate() - assert not parser.buffer_text - - def test_buffering_enabled(self): - # Make sure buffering is turned on - assert self.parser.buffer_text - self.parser.Parse("123", 1) - assert self.stuff == ['123'], ( - "buffered text not properly collapsed") - - def test1(self): - # XXX This test exposes more detail of Expat's text chunking than we - # XXX like, but it tests what we need to concisely. 
- self.setHandlers(["StartElementHandler"]) - self.parser.Parse("12\n34\n5", 1) - assert self.stuff == ( - ["", "1", "", "2", "\n", "3", "", "4\n5"]), ( - "buffering control not reacting as expected") - - def test2(self): - self.parser.Parse("1<2> \n 3", 1) - assert self.stuff == ["1<2> \n 3"], ( - "buffered text not properly collapsed") - - def test3(self): - self.setHandlers(["StartElementHandler"]) - self.parser.Parse("123", 1) - assert self.stuff == ["", "1", "", "2", "", "3"], ( - "buffered text not properly split") - - def test4(self): - self.setHandlers(["StartElementHandler", "EndElementHandler"]) - self.parser.CharacterDataHandler = None - self.parser.Parse("123", 1) - assert self.stuff == ( - ["", "", "", "", "", ""]) - - def test5(self): - self.setHandlers(["StartElementHandler", "EndElementHandler"]) - self.parser.Parse("123", 1) - assert self.stuff == ( - ["", "1", "", "", "2", "", "", "3", ""]) - - def test6(self): - self.setHandlers(["CommentHandler", "EndElementHandler", - "StartElementHandler"]) - self.parser.Parse("12345 ", 1) - assert self.stuff == ( - ["", "1", "", "", "2", "", "", "345", ""]), ( - "buffered text not properly split") - - def test7(self): - self.setHandlers(["CommentHandler", "EndElementHandler", - "StartElementHandler"]) - self.parser.Parse("12345 ", 1) - assert self.stuff == ( - ["", "1", "", "", "2", "", "", "3", - "", "4", "", "5", ""]), ( - "buffered text not properly split") - - -# Test handling of exception from callback: -class TestHandlerException: - def StartElementHandler(self, name, attrs): - raise RuntimeError(name) - - def test(self): - parser = expat.ParserCreate() - parser.StartElementHandler = self.StartElementHandler - try: - parser.Parse("", 1) - raise AssertionError - except RuntimeError, e: - assert e.args[0] == 'a', ( - "Expected RuntimeError for element 'a', but" + \ - " found %r" % e.args[0]) - - -# Test Current* members: -class TestPosition: - def StartElementHandler(self, name, attrs): - self.check_pos('s') - - def EndElementHandler(self, name): - self.check_pos('e') - - def check_pos(self, event): - pos = (event, - self.parser.CurrentByteIndex, - self.parser.CurrentLineNumber, - self.parser.CurrentColumnNumber) - assert self.upto < len(self.expected_list) - expected = self.expected_list[self.upto] - assert pos == expected, ( - 'Expected position %s, got position %s' %(pos, expected)) - self.upto += 1 - - def test(self): - self.parser = expat.ParserCreate() - self.parser.StartElementHandler = self.StartElementHandler - self.parser.EndElementHandler = self.EndElementHandler - self.upto = 0 - self.expected_list = [('s', 0, 1, 0), ('s', 5, 2, 1), ('s', 11, 3, 2), - ('e', 15, 3, 6), ('e', 17, 4, 1), ('e', 22, 5, 0)] - - xml = '\n \n \n \n' - self.parser.Parse(xml, 1) - - -class Testsf1296433: - def test_parse_only_xml_data(self): - # http://python.org/sf/1296433 - # - xml = "%s" % ('a' * 1025) - # this one doesn't crash - #xml = "%s" % ('a' * 10000) - - class SpecificException(Exception): - pass - - def handler(text): - raise SpecificException - - parser = expat.ParserCreate() - parser.CharacterDataHandler = handler - - py.test.raises(Exception, parser.Parse, xml) - -class TestChardataBuffer: - """ - test setting of chardata buffer size - """ - - def test_1025_bytes(self): - assert self.small_buffer_test(1025) == 2 - - def test_1000_bytes(self): - assert self.small_buffer_test(1000) == 1 - - def test_wrong_size(self): - parser = expat.ParserCreate() - parser.buffer_text = 1 - def f(size): - parser.buffer_size = size - - 
py.test.raises(TypeError, f, sys.maxint+1) - py.test.raises(ValueError, f, -1) - py.test.raises(ValueError, f, 0) - - def test_unchanged_size(self): - xml1 = ("%s" % ('a' * 512)) - xml2 = 'a'*512 + '' - parser = expat.ParserCreate() - parser.CharacterDataHandler = self.counting_handler - parser.buffer_size = 512 - parser.buffer_text = 1 - - # Feed 512 bytes of character data: the handler should be called - # once. - self.n = 0 - parser.Parse(xml1) - assert self.n == 1 - - # Reassign to buffer_size, but assign the same size. - parser.buffer_size = parser.buffer_size - assert self.n == 1 - - # Try parsing rest of the document - parser.Parse(xml2) - assert self.n == 2 - - - def test_disabling_buffer(self): - xml1 = "%s" % ('a' * 512) - xml2 = ('b' * 1024) - xml3 = "%s" % ('c' * 1024) - parser = expat.ParserCreate() - parser.CharacterDataHandler = self.counting_handler - parser.buffer_text = 1 - parser.buffer_size = 1024 - assert parser.buffer_size == 1024 - - # Parse one chunk of XML - self.n = 0 - parser.Parse(xml1, 0) - assert parser.buffer_size == 1024 - assert self.n == 1 - - # Turn off buffering and parse the next chunk. - parser.buffer_text = 0 - assert not parser.buffer_text - assert parser.buffer_size == 1024 - for i in range(10): - parser.Parse(xml2, 0) - assert self.n == 11 - - parser.buffer_text = 1 - assert parser.buffer_text - assert parser.buffer_size == 1024 - parser.Parse(xml3, 1) - assert self.n == 12 - - - - def make_document(self, bytes): - return ("" + bytes * 'a' + '') - - def counting_handler(self, text): - self.n += 1 - - def small_buffer_test(self, buffer_len): - xml = "%s" % ('a' * buffer_len) - parser = expat.ParserCreate() - parser.CharacterDataHandler = self.counting_handler - parser.buffer_size = 1024 - parser.buffer_text = 1 - - self.n = 0 - parser.Parse(xml) - return self.n - - def test_change_size_1(self): - xml1 = "%s" % ('a' * 1024) - xml2 = "aaa%s" % ('a' * 1025) - parser = expat.ParserCreate() - parser.CharacterDataHandler = self.counting_handler - parser.buffer_text = 1 - parser.buffer_size = 1024 - assert parser.buffer_size == 1024 - - self.n = 0 - parser.Parse(xml1, 0) - parser.buffer_size *= 2 - assert parser.buffer_size == 2048 - parser.Parse(xml2, 1) - assert self.n == 2 - - def test_change_size_2(self): - xml1 = "a%s" % ('a' * 1023) - xml2 = "aaa%s" % ('a' * 1025) - parser = expat.ParserCreate() - parser.CharacterDataHandler = self.counting_handler - parser.buffer_text = 1 - parser.buffer_size = 2048 - assert parser.buffer_size == 2048 - - self.n=0 - parser.Parse(xml1, 0) - parser.buffer_size /= 2 - assert parser.buffer_size == 1024 - parser.Parse(xml2, 1) - assert self.n == 4 - - def test_segfault(self): - py.test.raises(TypeError, expat.ParserCreate, 1234123123) - -def test_invalid_data(): - parser = expat.ParserCreate() - parser.Parse('invalid.xml', 0) - try: - parser.Parse("", 1) - except expat.ExpatError, e: - assert e.code == 2 # XXX is this reliable? 
- assert e.lineno == 1 - assert e.message.startswith('syntax error') - else: - py.test.fail("Did not raise") - diff --git a/pypy/config/translationoption.py b/pypy/config/translationoption.py --- a/pypy/config/translationoption.py +++ b/pypy/config/translationoption.py @@ -111,7 +111,8 @@ BoolOption("sandbox", "Produce a fully-sandboxed executable", default=False, cmdline="--sandbox", requires=[("translation.thread", False)], - suggests=[("translation.gc", "generation")]), + suggests=[("translation.gc", "generation"), + ("translation.gcrootfinder", "shadowstack")]), BoolOption("rweakref", "The backend supports RPython-level weakrefs", default=True), diff --git a/pypy/doc/coding-guide.rst b/pypy/doc/coding-guide.rst --- a/pypy/doc/coding-guide.rst +++ b/pypy/doc/coding-guide.rst @@ -388,7 +388,9 @@ In a few cases (e.g. hash table manipulation), we need machine-sized unsigned arithmetic. For these cases there is the r_uint class, which is a pure Python implementation of word-sized unsigned integers that silently wrap - around. The purpose of this class (as opposed to helper functions as above) + around. ("word-sized" and "machine-sized" are used equivalently and mean + the native size, which you get using "unsigned long" in C.) + The purpose of this class (as opposed to helper functions as above) is consistent typing: both Python and the annotator will propagate r_uint instances in the program and interpret all the operations between them as unsigned. Instances of r_uint are special-cased by the code generators to diff --git a/pypy/doc/config/objspace.usemodules.pyexpat.txt b/pypy/doc/config/objspace.usemodules.pyexpat.txt --- a/pypy/doc/config/objspace.usemodules.pyexpat.txt +++ b/pypy/doc/config/objspace.usemodules.pyexpat.txt @@ -1,2 +1,1 @@ -Use (experimental) pyexpat module written in RPython, instead of CTypes -version which is used by default. +Use the pyexpat module, written in RPython. diff --git a/pypy/interpreter/pyframe.py b/pypy/interpreter/pyframe.py --- a/pypy/interpreter/pyframe.py +++ b/pypy/interpreter/pyframe.py @@ -64,11 +64,10 @@ self.pycode = code eval.Frame.__init__(self, space, w_globals) self.locals_stack_w = [None] * (code.co_nlocals + code.co_stacksize) - self.nlocals = code.co_nlocals self.valuestackdepth = code.co_nlocals self.lastblock = None make_sure_not_resized(self.locals_stack_w) - check_nonneg(self.nlocals) + check_nonneg(self.valuestackdepth) # if space.config.objspace.honor__builtins__: self.builtin = space.builtin.pick_builtin(w_globals) @@ -148,8 +147,8 @@ def execute_frame(self, w_inputvalue=None, operr=None): """Execute this frame. Main entry point to the interpreter. The optional arguments are there to handle a generator's frame: - w_inputvalue is for generator.send()) and operr is for - generator.throw()). + w_inputvalue is for generator.send() and operr is for + generator.throw(). 
""" self = hint(self, stm_write=True) hint(self.locals_stack_w, stm_write=True) @@ -202,7 +201,7 @@ def popvalue(self): depth = self.valuestackdepth - 1 - assert depth >= self.nlocals, "pop from empty value stack" + assert depth >= self.pycode.co_nlocals, "pop from empty value stack" w_object = self.locals_stack_w[depth] self.locals_stack_w[depth] = None self.valuestackdepth = depth @@ -230,7 +229,7 @@ def peekvalues(self, n): values_w = [None] * n base = self.valuestackdepth - n - assert base >= self.nlocals + assert base >= self.pycode.co_nlocals while True: n -= 1 if n < 0: @@ -242,7 +241,8 @@ def dropvalues(self, n): n = hint(n, promote=True) finaldepth = self.valuestackdepth - n - assert finaldepth >= self.nlocals, "stack underflow in dropvalues()" + assert finaldepth >= self.pycode.co_nlocals, ( + "stack underflow in dropvalues()") while True: n -= 1 if n < 0: @@ -274,13 +274,15 @@ # Contrast this with CPython where it's PEEK(-1). index_from_top = hint(index_from_top, promote=True) index = self.valuestackdepth + ~index_from_top - assert index >= self.nlocals, "peek past the bottom of the stack" + assert index >= self.pycode.co_nlocals, ( + "peek past the bottom of the stack") return self.locals_stack_w[index] def settopvalue(self, w_object, index_from_top=0): index_from_top = hint(index_from_top, promote=True) index = self.valuestackdepth + ~index_from_top - assert index >= self.nlocals, "settop past the bottom of the stack" + assert index >= self.pycode.co_nlocals, ( + "settop past the bottom of the stack") self.locals_stack_w[index] = w_object @jit.unroll_safe @@ -327,12 +329,13 @@ else: f_lineno = self.f_lineno - values_w = self.locals_stack_w[self.nlocals:self.valuestackdepth] + nlocals = self.pycode.co_nlocals + values_w = self.locals_stack_w[nlocals:self.valuestackdepth] w_valuestack = maker.slp_into_tuple_with_nulls(space, values_w) w_blockstack = nt([block._get_state_(space) for block in self.get_blocklist()]) w_fastlocals = maker.slp_into_tuple_with_nulls( - space, self.locals_stack_w[:self.nlocals]) + space, self.locals_stack_w[:nlocals]) if self.last_exception is None: w_exc_value = space.w_None w_tb = space.w_None @@ -449,7 +452,7 @@ """Initialize the fast locals from a list of values, where the order is according to self.pycode.signature().""" scope_len = len(scope_w) - if scope_len > self.nlocals: + if scope_len > self.pycode.co_nlocals: raise ValueError, "new fastscope is longer than the allocated area" # don't assign directly to 'locals_stack_w[:scope_len]' to be # virtualizable-friendly @@ -463,7 +466,7 @@ pass def getfastscopelength(self): - return self.nlocals + return self.pycode.co_nlocals def getclosure(self): return None diff --git a/pypy/jit/backend/test/runner_test.py b/pypy/jit/backend/test/runner_test.py --- a/pypy/jit/backend/test/runner_test.py +++ b/pypy/jit/backend/test/runner_test.py @@ -2221,6 +2221,35 @@ print 'step 4 ok' print '-'*79 + def test_guard_not_invalidated_and_label(self): + # test that the guard_not_invalidated reserves enough room before + # the label. 
If it doesn't, then in this example after we invalidate + # the guard, jumping to the label will hit the invalidation code too + cpu = self.cpu + i0 = BoxInt() + faildescr = BasicFailDescr(1) + labeldescr = TargetToken() + ops = [ + ResOperation(rop.GUARD_NOT_INVALIDATED, [], None, descr=faildescr), + ResOperation(rop.LABEL, [i0], None, descr=labeldescr), + ResOperation(rop.FINISH, [i0], None, descr=BasicFailDescr(3)), + ] + ops[0].setfailargs([]) + looptoken = JitCellToken() + self.cpu.compile_loop([i0], ops, looptoken) + # mark as failing + self.cpu.invalidate_loop(looptoken) + # attach a bridge + i2 = BoxInt() + ops = [ + ResOperation(rop.JUMP, [ConstInt(333)], None, descr=labeldescr), + ] + self.cpu.compile_bridge(faildescr, [], ops, looptoken) + # run: must not be caught in an infinite loop + fail = self.cpu.execute_token(looptoken, 16) + assert fail.identifier == 3 + assert self.cpu.get_latest_value_int(0) == 333 + # pure do_ / descr features def test_do_operations(self): diff --git a/pypy/jit/backend/x86/regalloc.py b/pypy/jit/backend/x86/regalloc.py --- a/pypy/jit/backend/x86/regalloc.py +++ b/pypy/jit/backend/x86/regalloc.py @@ -165,7 +165,6 @@ self.jump_target_descr = None self.close_stack_struct = 0 self.final_jump_op = None - self.min_bytes_before_label = 0 def _prepare(self, inputargs, operations, allgcrefs): self.fm = X86FrameManager() @@ -199,8 +198,13 @@ operations = self._prepare(inputargs, operations, allgcrefs) self._update_bindings(arglocs, inputargs) self.param_depth = prev_depths[1] + self.min_bytes_before_label = 0 return operations + def ensure_next_label_is_at_least_at_position(self, at_least_position): + self.min_bytes_before_label = max(self.min_bytes_before_label, + at_least_position) + def reserve_param(self, n): self.param_depth = max(self.param_depth, n) @@ -468,7 +472,11 @@ self.assembler.mc.mark_op(None) # end of the loop def flush_loop(self): - # rare case: if the loop is too short, pad with NOPs + # rare case: if the loop is too short, or if we are just after + # a GUARD_NOT_INVALIDATED, pad with NOPs. Important! This must + # be called to ensure that there are enough bytes produced, + # because GUARD_NOT_INVALIDATED or redirect_call_assembler() + # will maybe overwrite them. mc = self.assembler.mc while mc.get_relative_pos() < self.min_bytes_before_label: mc.NOP() @@ -558,7 +566,15 @@ def consider_guard_no_exception(self, op): self.perform_guard(op, [], None) - consider_guard_not_invalidated = consider_guard_no_exception + def consider_guard_not_invalidated(self, op): + mc = self.assembler.mc + n = mc.get_relative_pos() + self.perform_guard(op, [], None) + assert n == mc.get_relative_pos() + # ensure that the next label is at least 5 bytes farther than + # the current position. Otherwise, when invalidating the guard, + # we would overwrite randomly the next label's position. 
+ self.ensure_next_label_is_at_least_at_position(n + 5) def consider_guard_exception(self, op): loc = self.rm.make_sure_var_in_reg(op.getarg(0)) diff --git a/pypy/module/cpyext/api.py b/pypy/module/cpyext/api.py --- a/pypy/module/cpyext/api.py +++ b/pypy/module/cpyext/api.py @@ -23,6 +23,7 @@ from pypy.interpreter.function import StaticMethod from pypy.objspace.std.sliceobject import W_SliceObject from pypy.module.__builtin__.descriptor import W_Property +from pypy.module.__builtin__.interp_classobj import W_ClassObject from pypy.module.__builtin__.interp_memoryview import W_MemoryView from pypy.rlib.entrypoint import entrypoint from pypy.rlib.unroll import unrolling_iterable @@ -383,6 +384,7 @@ "Dict": "space.w_dict", "Tuple": "space.w_tuple", "List": "space.w_list", + "Set": "space.w_set", "Int": "space.w_int", "Bool": "space.w_bool", "Float": "space.w_float", @@ -397,6 +399,7 @@ 'Module': 'space.gettypeobject(Module.typedef)', 'Property': 'space.gettypeobject(W_Property.typedef)', 'Slice': 'space.gettypeobject(W_SliceObject.typedef)', + 'Class': 'space.gettypeobject(W_ClassObject.typedef)', 'StaticMethod': 'space.gettypeobject(StaticMethod.typedef)', 'CFunction': 'space.gettypeobject(cpyext.methodobject.W_PyCFunctionObject.typedef)', 'WrapperDescr': 'space.gettypeobject(cpyext.methodobject.W_PyCMethodObject.typedef)' @@ -432,16 +435,16 @@ ('buf', rffi.VOIDP), ('obj', PyObject), ('len', Py_ssize_t), - # ('itemsize', Py_ssize_t), + ('itemsize', Py_ssize_t), - # ('readonly', lltype.Signed), - # ('ndim', lltype.Signed), - # ('format', rffi.CCHARP), - # ('shape', Py_ssize_tP), - # ('strides', Py_ssize_tP), - # ('suboffets', Py_ssize_tP), - # ('smalltable', rffi.CFixedArray(Py_ssize_t, 2)), - # ('internal', rffi.VOIDP) + ('readonly', lltype.Signed), + ('ndim', lltype.Signed), + ('format', rffi.CCHARP), + ('shape', Py_ssize_tP), + ('strides', Py_ssize_tP), + ('suboffsets', Py_ssize_tP), + #('smalltable', rffi.CFixedArray(Py_ssize_t, 2)), + ('internal', rffi.VOIDP) )) @specialize.memo() diff --git a/pypy/module/cpyext/dictobject.py b/pypy/module/cpyext/dictobject.py --- a/pypy/module/cpyext/dictobject.py +++ b/pypy/module/cpyext/dictobject.py @@ -6,6 +6,7 @@ from pypy.module.cpyext.pyobject import RefcountState from pypy.module.cpyext.pyerrors import PyErr_BadInternalCall from pypy.interpreter.error import OperationError +from pypy.rlib.objectmodel import specialize @cpython_api([], PyObject) def PyDict_New(space): @@ -191,3 +192,24 @@ raise return 0 return 1 + + at specialize.memo() +def make_frozendict(space): + return space.appexec([], '''(): + import collections + class FrozenDict(collections.Mapping): + def __init__(self, *args, **kwargs): + self._d = dict(*args, **kwargs) + def __iter__(self): + return iter(self._d) + def __len__(self): + return len(self._d) + def __getitem__(self, key): + return self._d[key] + return FrozenDict''') + + at cpython_api([PyObject], PyObject) +def PyDictProxy_New(space, w_dict): + w_frozendict = make_frozendict(space) + return space.call_function(w_frozendict, w_dict) + diff --git a/pypy/module/cpyext/include/methodobject.h b/pypy/module/cpyext/include/methodobject.h --- a/pypy/module/cpyext/include/methodobject.h +++ b/pypy/module/cpyext/include/methodobject.h @@ -26,6 +26,7 @@ PyObject_HEAD PyMethodDef *m_ml; /* Description of the C function to call */ PyObject *m_self; /* Passed as 'self' arg to the C func, can be NULL */ + PyObject *m_module; /* The __module__ attribute, can be anything */ } PyCFunctionObject; /* Flag passed to newmethodobject */ diff 
--git a/pypy/module/cpyext/include/object.h b/pypy/module/cpyext/include/object.h --- a/pypy/module/cpyext/include/object.h +++ b/pypy/module/cpyext/include/object.h @@ -131,18 +131,18 @@ /* This is Py_ssize_t so it can be pointed to by strides in simple case.*/ - /* Py_ssize_t itemsize; */ - /* int readonly; */ - /* int ndim; */ - /* char *format; */ - /* Py_ssize_t *shape; */ - /* Py_ssize_t *strides; */ - /* Py_ssize_t *suboffsets; */ + Py_ssize_t itemsize; + int readonly; + int ndim; + char *format; + Py_ssize_t *shape; + Py_ssize_t *strides; + Py_ssize_t *suboffsets; /* static store for shape and strides of mono-dimensional buffers. */ /* Py_ssize_t smalltable[2]; */ - /* void *internal; */ + void *internal; } Py_buffer; diff --git a/pypy/module/cpyext/include/pystate.h b/pypy/module/cpyext/include/pystate.h --- a/pypy/module/cpyext/include/pystate.h +++ b/pypy/module/cpyext/include/pystate.h @@ -10,6 +10,7 @@ typedef struct _ts { PyInterpreterState *interp; + PyObject *dict; /* Stores per-thread state */ } PyThreadState; #define Py_BEGIN_ALLOW_THREADS { \ @@ -24,4 +25,6 @@ enum {PyGILState_LOCKED, PyGILState_UNLOCKED} PyGILState_STATE; +#define PyThreadState_GET() PyThreadState_Get() + #endif /* !Py_PYSTATE_H */ diff --git a/pypy/module/cpyext/include/pythread.h b/pypy/module/cpyext/include/pythread.h --- a/pypy/module/cpyext/include/pythread.h +++ b/pypy/module/cpyext/include/pythread.h @@ -1,6 +1,8 @@ #ifndef Py_PYTHREAD_H #define Py_PYTHREAD_H +#define WITH_THREAD + typedef void *PyThread_type_lock; #define WAIT_LOCK 1 #define NOWAIT_LOCK 0 diff --git a/pypy/module/cpyext/include/structmember.h b/pypy/module/cpyext/include/structmember.h --- a/pypy/module/cpyext/include/structmember.h +++ b/pypy/module/cpyext/include/structmember.h @@ -20,7 +20,7 @@ } PyMemberDef; -/* Types */ +/* Types. These constants are also in structmemberdefs.py. */ #define T_SHORT 0 #define T_INT 1 #define T_LONG 2 @@ -42,9 +42,12 @@ #define T_LONGLONG 17 #define T_ULONGLONG 18 -/* Flags */ +/* Flags. These constants are also in structmemberdefs.py. 
*/ #define READONLY 1 #define RO READONLY /* Shorthand */ +#define READ_RESTRICTED 2 +#define PY_WRITE_RESTRICTED 4 +#define RESTRICTED (READ_RESTRICTED | PY_WRITE_RESTRICTED) #ifdef __cplusplus diff --git a/pypy/module/cpyext/methodobject.py b/pypy/module/cpyext/methodobject.py --- a/pypy/module/cpyext/methodobject.py +++ b/pypy/module/cpyext/methodobject.py @@ -32,6 +32,7 @@ PyObjectFields + ( ('m_ml', lltype.Ptr(PyMethodDef)), ('m_self', PyObject), + ('m_module', PyObject), )) PyCFunctionObject = lltype.Ptr(PyCFunctionObjectStruct) @@ -47,11 +48,13 @@ assert isinstance(w_obj, W_PyCFunctionObject) py_func.c_m_ml = w_obj.ml py_func.c_m_self = make_ref(space, w_obj.w_self) + py_func.c_m_module = make_ref(space, w_obj.w_module) @cpython_api([PyObject], lltype.Void, external=False) def cfunction_dealloc(space, py_obj): py_func = rffi.cast(PyCFunctionObject, py_obj) Py_DecRef(space, py_func.c_m_self) + Py_DecRef(space, py_func.c_m_module) from pypy.module.cpyext.object import PyObject_dealloc PyObject_dealloc(space, py_obj) diff --git a/pypy/module/cpyext/object.py b/pypy/module/cpyext/object.py --- a/pypy/module/cpyext/object.py +++ b/pypy/module/cpyext/object.py @@ -381,6 +381,15 @@ This is the equivalent of the Python expression hash(o).""" return space.int_w(space.hash(w_obj)) + at cpython_api([PyObject], PyObject) +def PyObject_Dir(space, w_o): + """This is equivalent to the Python expression dir(o), returning a (possibly + empty) list of strings appropriate for the object argument, or NULL if there + was an error. If the argument is NULL, this is like the Python dir(), + returning the names of the current locals; in this case, if no execution frame + is active then NULL is returned but PyErr_Occurred() will return false.""" + return space.call_function(space.builtin.get('dir'), w_o) + @cpython_api([PyObject, rffi.CCHARPP, Py_ssize_tP], rffi.INT_real, error=-1) def PyObject_AsCharBuffer(space, obj, bufferp, sizep): """Returns a pointer to a read-only memory location usable as @@ -430,6 +439,8 @@ return 0 +PyBUF_WRITABLE = 0x0001 # Copied from object.h + @cpython_api([lltype.Ptr(Py_buffer), PyObject, rffi.VOIDP, Py_ssize_t, lltype.Signed, lltype.Signed], rffi.INT, error=CANNOT_FAIL) def PyBuffer_FillInfo(space, view, obj, buf, length, readonly, flags): @@ -445,6 +456,18 @@ view.c_len = length view.c_obj = obj Py_IncRef(space, obj) + view.c_itemsize = 1 + if flags & PyBUF_WRITABLE: + rffi.setintfield(view, 'c_readonly', 0) + else: + rffi.setintfield(view, 'c_readonly', 1) + rffi.setintfield(view, 'c_ndim', 0) + view.c_format = lltype.nullptr(rffi.CCHARP.TO) + view.c_shape = lltype.nullptr(Py_ssize_tP.TO) + view.c_strides = lltype.nullptr(Py_ssize_tP.TO) + view.c_suboffsets = lltype.nullptr(Py_ssize_tP.TO) + view.c_internal = lltype.nullptr(rffi.VOIDP.TO) + return 0 diff --git a/pypy/module/cpyext/pyfile.py b/pypy/module/cpyext/pyfile.py --- a/pypy/module/cpyext/pyfile.py +++ b/pypy/module/cpyext/pyfile.py @@ -1,7 +1,8 @@ from pypy.rpython.lltypesystem import rffi, lltype from pypy.module.cpyext.api import ( - cpython_api, CONST_STRING, FILEP, build_type_checkers) + cpython_api, CANNOT_FAIL, CONST_STRING, FILEP, build_type_checkers) from pypy.module.cpyext.pyobject import PyObject, borrow_from +from pypy.module.cpyext.object import Py_PRINT_RAW from pypy.interpreter.error import OperationError from pypy.module._file.interp_file import W_File @@ -61,11 +62,49 @@ def PyFile_WriteString(space, s, w_p): """Write string s to file object p. 
Return 0 on success or -1 on failure; the appropriate exception will be set.""" - w_s = space.wrap(rffi.charp2str(s)) - space.call_method(w_p, "write", w_s) + w_str = space.wrap(rffi.charp2str(s)) + space.call_method(w_p, "write", w_str) + return 0 + + at cpython_api([PyObject, PyObject, rffi.INT_real], rffi.INT_real, error=-1) +def PyFile_WriteObject(space, w_obj, w_p, flags): + """ + Write object obj to file object p. The only supported flag for flags is + Py_PRINT_RAW; if given, the str() of the object is written + instead of the repr(). Return 0 on success or -1 on failure; the + appropriate exception will be set.""" + if rffi.cast(lltype.Signed, flags) & Py_PRINT_RAW: + w_str = space.str(w_obj) + else: + w_str = space.repr(w_obj) + space.call_method(w_p, "write", w_str) return 0 @cpython_api([PyObject], PyObject) def PyFile_Name(space, w_p): """Return the name of the file specified by p as a string object.""" - return borrow_from(w_p, space.getattr(w_p, space.wrap("name"))) \ No newline at end of file + return borrow_from(w_p, space.getattr(w_p, space.wrap("name"))) + + at cpython_api([PyObject, rffi.INT_real], rffi.INT_real, error=CANNOT_FAIL) +def PyFile_SoftSpace(space, w_p, newflag): + """ + This function exists for internal use by the interpreter. Set the + softspace attribute of p to newflag and return the previous value. + p does not have to be a file object for this function to work + properly; any object is supported (thought its only interesting if + the softspace attribute can be set). This function clears any + errors, and will return 0 as the previous value if the attribute + either does not exist or if there were errors in retrieving it. + There is no way to detect errors from this function, but doing so + should not be needed.""" + try: + if rffi.cast(lltype.Signed, newflag): + w_newflag = space.w_True + else: + w_newflag = space.w_False + oldflag = space.int_w(space.getattr(w_p, space.wrap("softspace"))) + space.setattr(w_p, space.wrap("softspace"), w_newflag) + return oldflag + except OperationError, e: + return 0 + diff --git a/pypy/module/cpyext/pystate.py b/pypy/module/cpyext/pystate.py --- a/pypy/module/cpyext/pystate.py +++ b/pypy/module/cpyext/pystate.py @@ -1,12 +1,19 @@ from pypy.module.cpyext.api import ( cpython_api, generic_cpy_call, CANNOT_FAIL, CConfig, cpython_struct) +from pypy.module.cpyext.pyobject import PyObject, Py_DecRef, make_ref from pypy.rpython.lltypesystem import rffi, lltype PyInterpreterStateStruct = lltype.ForwardReference() PyInterpreterState = lltype.Ptr(PyInterpreterStateStruct) cpython_struct( - "PyInterpreterState", [('next', PyInterpreterState)], PyInterpreterStateStruct) -PyThreadState = lltype.Ptr(cpython_struct("PyThreadState", [('interp', PyInterpreterState)])) + "PyInterpreterState", + [('next', PyInterpreterState)], + PyInterpreterStateStruct) +PyThreadState = lltype.Ptr(cpython_struct( + "PyThreadState", + [('interp', PyInterpreterState), + ('dict', PyObject), + ])) @cpython_api([], PyThreadState, error=CANNOT_FAIL) def PyEval_SaveThread(space): @@ -38,41 +45,49 @@ return 1 # XXX: might be generally useful -def encapsulator(T, flavor='raw'): +def encapsulator(T, flavor='raw', dealloc=None): class MemoryCapsule(object): - def __init__(self, alloc=True): - if alloc: + def __init__(self, space): + self.space = space + if space is not None: self.memory = lltype.malloc(T, flavor=flavor) else: self.memory = lltype.nullptr(T) def __del__(self): if self.memory: + if dealloc and self.space: + dealloc(self.memory, self.space) 
lltype.free(self.memory, flavor=flavor) return MemoryCapsule -ThreadStateCapsule = encapsulator(PyThreadState.TO) +def ThreadState_dealloc(ts, space): + assert space is not None + Py_DecRef(space, ts.c_dict) +ThreadStateCapsule = encapsulator(PyThreadState.TO, + dealloc=ThreadState_dealloc) from pypy.interpreter.executioncontext import ExecutionContext -ExecutionContext.cpyext_threadstate = ThreadStateCapsule(alloc=False) +ExecutionContext.cpyext_threadstate = ThreadStateCapsule(None) class InterpreterState(object): def __init__(self, space): self.interpreter_state = lltype.malloc( PyInterpreterState.TO, flavor='raw', zero=True, immortal=True) - def new_thread_state(self): - capsule = ThreadStateCapsule() + def new_thread_state(self, space): + capsule = ThreadStateCapsule(space) ts = capsule.memory ts.c_interp = self.interpreter_state + ts.c_dict = make_ref(space, space.newdict()) return capsule def get_thread_state(self, space): ec = space.getexecutioncontext() - return self._get_thread_state(ec).memory + return self._get_thread_state(space, ec).memory - def _get_thread_state(self, ec): + def _get_thread_state(self, space, ec): if ec.cpyext_threadstate.memory == lltype.nullptr(PyThreadState.TO): - ec.cpyext_threadstate = self.new_thread_state() + ec.cpyext_threadstate = self.new_thread_state(space) return ec.cpyext_threadstate @@ -81,6 +96,11 @@ state = space.fromcache(InterpreterState) return state.get_thread_state(space) + at cpython_api([], PyObject, error=CANNOT_FAIL) +def PyThreadState_GetDict(space): + state = space.fromcache(InterpreterState) + return state.get_thread_state(space).c_dict + @cpython_api([PyThreadState], PyThreadState, error=CANNOT_FAIL) def PyThreadState_Swap(space, tstate): """Swap the current thread state with the thread state given by the argument diff --git a/pypy/module/cpyext/pythonrun.py b/pypy/module/cpyext/pythonrun.py --- a/pypy/module/cpyext/pythonrun.py +++ b/pypy/module/cpyext/pythonrun.py @@ -14,6 +14,20 @@ value.""" return space.fromcache(State).get_programname() + at cpython_api([], rffi.CCHARP) +def Py_GetVersion(space): + """Return the version of this Python interpreter. This is a + string that looks something like + + "1.5 (\#67, Dec 31 1997, 22:34:28) [GCC 2.7.2.2]" + + The first word (up to the first space character) is the current + Python version; the first three characters are the major and minor + version separated by a period. The returned string points into + static storage; the caller should not modify its value. The value + is available to Python code as sys.version.""" + return space.fromcache(State).get_version() + @cpython_api([lltype.Ptr(lltype.FuncType([], lltype.Void))], rffi.INT_real, error=-1) def Py_AtExit(space, func_ptr): """Register a cleanup function to be called by Py_Finalize(). The cleanup diff --git a/pypy/module/cpyext/setobject.py b/pypy/module/cpyext/setobject.py --- a/pypy/module/cpyext/setobject.py +++ b/pypy/module/cpyext/setobject.py @@ -54,6 +54,20 @@ return 0 + at cpython_api([PyObject], PyObject) +def PySet_Pop(space, w_set): + """Return a new reference to an arbitrary object in the set, and removes the + object from the set. Return NULL on failure. Raise KeyError if the + set is empty. 
Raise a SystemError if set is an not an instance of + set or its subtype.""" + return space.call_method(w_set, "pop") + + at cpython_api([PyObject], rffi.INT_real, error=-1) +def PySet_Clear(space, w_set): + """Empty an existing set of all elements.""" + space.call_method(w_set, 'clear') + return 0 + @cpython_api([PyObject], Py_ssize_t, error=CANNOT_FAIL) def PySet_GET_SIZE(space, w_s): """Macro form of PySet_Size() without error checking.""" diff --git a/pypy/module/cpyext/state.py b/pypy/module/cpyext/state.py --- a/pypy/module/cpyext/state.py +++ b/pypy/module/cpyext/state.py @@ -10,6 +10,7 @@ self.space = space self.reset() self.programname = lltype.nullptr(rffi.CCHARP.TO) + self.version = lltype.nullptr(rffi.CCHARP.TO) def reset(self): from pypy.module.cpyext.modsupport import PyMethodDef @@ -102,6 +103,15 @@ lltype.render_immortal(self.programname) return self.programname + def get_version(self): + if not self.version: + space = self.space + w_version = space.sys.get('version') + version = space.str_w(w_version) + self.version = rffi.str2charp(version) + lltype.render_immortal(self.version) + return self.version + def find_extension(self, name, path): from pypy.module.cpyext.modsupport import PyImport_AddModule from pypy.interpreter.module import Module diff --git a/pypy/module/cpyext/stringobject.py b/pypy/module/cpyext/stringobject.py --- a/pypy/module/cpyext/stringobject.py +++ b/pypy/module/cpyext/stringobject.py @@ -250,6 +250,26 @@ s = rffi.charp2str(string) return space.new_interned_str(s) + at cpython_api([PyObjectP], lltype.Void) +def PyString_InternInPlace(space, string): + """Intern the argument *string in place. The argument must be the + address of a pointer variable pointing to a Python string object. + If there is an existing interned string that is the same as + *string, it sets *string to it (decrementing the reference count + of the old string object and incrementing the reference count of + the interned string object), otherwise it leaves *string alone and + interns it (incrementing its reference count). (Clarification: + even though there is a lot of talk about reference counts, think + of this function as reference-count-neutral; you own the object + after the call if and only if you owned it before the call.) 
+ + This function is not available in 3.x and does not have a PyBytes + alias.""" + w_str = from_ref(space, string[0]) + w_str = space.new_interned_w_str(w_str) + Py_DecRef(space, string[0]) + string[0] = make_ref(space, w_str) + @cpython_api([PyObject, rffi.CCHARP, rffi.CCHARP], PyObject) def PyString_AsEncodedObject(space, w_str, encoding, errors): """Encode a string object using the codec registered for encoding and return diff --git a/pypy/module/cpyext/structmemberdefs.py b/pypy/module/cpyext/structmemberdefs.py --- a/pypy/module/cpyext/structmemberdefs.py +++ b/pypy/module/cpyext/structmemberdefs.py @@ -1,3 +1,5 @@ +# These constants are also in include/structmember.h + T_SHORT = 0 T_INT = 1 T_LONG = 2 @@ -18,3 +20,6 @@ T_ULONGLONG = 18 READONLY = RO = 1 +READ_RESTRICTED = 2 +WRITE_RESTRICTED = 4 +RESTRICTED = READ_RESTRICTED | WRITE_RESTRICTED diff --git a/pypy/module/cpyext/stubs.py b/pypy/module/cpyext/stubs.py --- a/pypy/module/cpyext/stubs.py +++ b/pypy/module/cpyext/stubs.py @@ -1,5 +1,5 @@ from pypy.module.cpyext.api import ( - cpython_api, PyObject, PyObjectP, CANNOT_FAIL, Py_buffer + cpython_api, PyObject, PyObjectP, CANNOT_FAIL ) from pypy.module.cpyext.complexobject import Py_complex_ptr as Py_complex from pypy.rpython.lltypesystem import rffi, lltype @@ -10,6 +10,7 @@ PyMethodDef = rffi.VOIDP PyGetSetDef = rffi.VOIDP PyMemberDef = rffi.VOIDP +Py_buffer = rffi.VOIDP va_list = rffi.VOIDP PyDateTime_Date = rffi.VOIDP PyDateTime_DateTime = rffi.VOIDP @@ -32,10 +33,6 @@ def _PyObject_Del(space, op): raise NotImplementedError - at cpython_api([PyObject], rffi.INT_real, error=CANNOT_FAIL) -def PyObject_CheckBuffer(space, obj): - raise NotImplementedError - @cpython_api([rffi.CCHARP], Py_ssize_t, error=CANNOT_FAIL) def PyBuffer_SizeFromFormat(space, format): """Return the implied ~Py_buffer.itemsize from the struct-stype @@ -684,28 +681,6 @@ """ raise NotImplementedError - at cpython_api([PyObject, rffi.INT_real], rffi.INT_real, error=CANNOT_FAIL) -def PyFile_SoftSpace(space, p, newflag): - """ - This function exists for internal use by the interpreter. Set the - softspace attribute of p to newflag and return the previous value. - p does not have to be a file object for this function to work properly; any - object is supported (thought its only interesting if the softspace - attribute can be set). This function clears any errors, and will return 0 - as the previous value if the attribute either does not exist or if there were - errors in retrieving it. There is no way to detect errors from this function, - but doing so should not be needed.""" - raise NotImplementedError - - at cpython_api([PyObject, PyObject, rffi.INT_real], rffi.INT_real, error=-1) -def PyFile_WriteObject(space, obj, p, flags): - """ - Write object obj to file object p. The only supported flag for flags is - Py_PRINT_RAW; if given, the str() of the object is written - instead of the repr(). Return 0 on success or -1 on failure; the - appropriate exception will be set.""" - raise NotImplementedError - @cpython_api([], PyObject) def PyFloat_GetInfo(space): """Return a structseq instance which contains information about the @@ -1097,19 +1072,6 @@ raise NotImplementedError @cpython_api([], rffi.CCHARP) -def Py_GetVersion(space): - """Return the version of this Python interpreter. 
This is a string that looks - something like - - "1.5 (\#67, Dec 31 1997, 22:34:28) [GCC 2.7.2.2]" - - The first word (up to the first space character) is the current Python version; - the first three characters are the major and minor version separated by a - period. The returned string points into static storage; the caller should not - modify its value. The value is available to Python code as sys.version.""" - raise NotImplementedError - - at cpython_api([], rffi.CCHARP) def Py_GetPlatform(space): """Return the platform identifier for the current platform. On Unix, this is formed from the"official" name of the operating system, converted to lower @@ -1685,15 +1647,6 @@ """ raise NotImplementedError - at cpython_api([PyObject], PyObject) -def PyObject_Dir(space, o): - """This is equivalent to the Python expression dir(o), returning a (possibly - empty) list of strings appropriate for the object argument, or NULL if there - was an error. If the argument is NULL, this is like the Python dir(), - returning the names of the current locals; in this case, if no execution frame - is active then NULL is returned but PyErr_Occurred() will return false.""" - raise NotImplementedError - @cpython_api([], PyFrameObject) def PyEval_GetFrame(space): """Return the current thread state's frame, which is NULL if no frame is @@ -1802,34 +1755,6 @@ building-up new frozensets with PySet_Add().""" raise NotImplementedError - at cpython_api([PyObject], PyObject) -def PySet_Pop(space, set): - """Return a new reference to an arbitrary object in the set, and removes the - object from the set. Return NULL on failure. Raise KeyError if the - set is empty. Raise a SystemError if set is an not an instance of - set or its subtype.""" - raise NotImplementedError - - at cpython_api([PyObject], rffi.INT_real, error=-1) -def PySet_Clear(space, set): - """Empty an existing set of all elements.""" - raise NotImplementedError - - at cpython_api([PyObjectP], lltype.Void) -def PyString_InternInPlace(space, string): - """Intern the argument *string in place. The argument must be the address of a - pointer variable pointing to a Python string object. If there is an existing - interned string that is the same as *string, it sets *string to it - (decrementing the reference count of the old string object and incrementing the - reference count of the interned string object), otherwise it leaves *string - alone and interns it (incrementing its reference count). (Clarification: even - though there is a lot of talk about reference counts, think of this function as - reference-count-neutral; you own the object after the call if and only if you - owned it before the call.) 
- - This function is not available in 3.x and does not have a PyBytes alias.""" - raise NotImplementedError - @cpython_api([rffi.CCHARP, Py_ssize_t, rffi.CCHARP, rffi.CCHARP], PyObject) def PyString_Decode(space, s, size, encoding, errors): """Create an object by decoding size bytes of the encoded buffer s using the diff --git a/pypy/module/cpyext/test/test_classobject.py b/pypy/module/cpyext/test/test_classobject.py --- a/pypy/module/cpyext/test/test_classobject.py +++ b/pypy/module/cpyext/test/test_classobject.py @@ -1,4 +1,5 @@ from pypy.module.cpyext.test.test_api import BaseApiTest +from pypy.module.cpyext.test.test_cpyext import AppTestCpythonExtensionBase from pypy.interpreter.function import Function, Method class TestClassObject(BaseApiTest): @@ -51,3 +52,14 @@ assert api.PyInstance_Check(w_instance) assert space.is_true(space.call_method(space.builtin, "isinstance", w_instance, w_class)) + +class AppTestStringObject(AppTestCpythonExtensionBase): + def test_class_type(self): + module = self.import_extension('foo', [ + ("get_classtype", "METH_NOARGS", + """ + Py_INCREF(&PyClass_Type); + return &PyClass_Type; + """)]) + class C: pass + assert module.get_classtype() is type(C) diff --git a/pypy/module/cpyext/test/test_cpyext.py b/pypy/module/cpyext/test/test_cpyext.py --- a/pypy/module/cpyext/test/test_cpyext.py +++ b/pypy/module/cpyext/test/test_cpyext.py @@ -744,6 +744,22 @@ print p assert 'py' in p + def test_get_version(self): + mod = self.import_extension('foo', [ + ('get_version', 'METH_NOARGS', + ''' + char* name1 = Py_GetVersion(); + char* name2 = Py_GetVersion(); + if (name1 != name2) + Py_RETURN_FALSE; + return PyString_FromString(name1); + ''' + ), + ]) + p = mod.get_version() + print p + assert 'PyPy' in p + def test_no_double_imports(self): import sys, os try: diff --git a/pypy/module/cpyext/test/test_dictobject.py b/pypy/module/cpyext/test/test_dictobject.py --- a/pypy/module/cpyext/test/test_dictobject.py +++ b/pypy/module/cpyext/test/test_dictobject.py @@ -2,6 +2,7 @@ from pypy.module.cpyext.test.test_api import BaseApiTest from pypy.module.cpyext.api import Py_ssize_tP, PyObjectP from pypy.module.cpyext.pyobject import make_ref, from_ref +from pypy.interpreter.error import OperationError class TestDictObject(BaseApiTest): def test_dict(self, space, api): @@ -110,3 +111,13 @@ assert space.eq_w(space.len(w_copy), space.len(w_dict)) assert space.eq_w(w_copy, w_dict) + + def test_dictproxy(self, space, api): + w_dict = space.sys.get('modules') + w_proxy = api.PyDictProxy_New(w_dict) + assert space.is_true(space.contains(w_proxy, space.wrap('sys'))) + raises(OperationError, space.setitem, + w_proxy, space.wrap('sys'), space.w_None) + raises(OperationError, space.delitem, + w_proxy, space.wrap('sys')) + raises(OperationError, space.call_method, w_proxy, 'clear') diff --git a/pypy/module/cpyext/test/test_methodobject.py b/pypy/module/cpyext/test/test_methodobject.py --- a/pypy/module/cpyext/test/test_methodobject.py +++ b/pypy/module/cpyext/test/test_methodobject.py @@ -9,7 +9,7 @@ class AppTestMethodObject(AppTestCpythonExtensionBase): def test_call_METH(self): - mod = self.import_extension('foo', [ + mod = self.import_extension('MyModule', [ ('getarg_O', 'METH_O', ''' Py_INCREF(args); @@ -51,11 +51,23 @@ } ''' ), + ('getModule', 'METH_O', + ''' + if(PyCFunction_Check(args)) { + PyCFunctionObject* func = (PyCFunctionObject*)args; + Py_INCREF(func->m_module); + return func->m_module; + } + else { + Py_RETURN_FALSE; + } + ''' + ), ('isSameFunction', 'METH_O', ''' PyCFunction 
ptr = PyCFunction_GetFunction(args); if (!ptr) return NULL; - if (ptr == foo_getarg_O) + if (ptr == MyModule_getarg_O) Py_RETURN_TRUE; else Py_RETURN_FALSE; @@ -76,6 +88,7 @@ assert mod.getarg_OLD(1, 2) == (1, 2) assert mod.isCFunction(mod.getarg_O) == "getarg_O" + assert mod.getModule(mod.getarg_O) == 'MyModule' assert mod.isSameFunction(mod.getarg_O) raises(TypeError, mod.isSameFunction, 1) diff --git a/pypy/module/cpyext/test/test_object.py b/pypy/module/cpyext/test/test_object.py --- a/pypy/module/cpyext/test/test_object.py +++ b/pypy/module/cpyext/test/test_object.py @@ -191,6 +191,11 @@ assert api.PyObject_Unicode(space.wrap("\xe9")) is None api.PyErr_Clear() + def test_dir(self, space, api): + w_dir = api.PyObject_Dir(space.sys) + assert space.isinstance_w(w_dir, space.w_list) + assert space.is_true(space.contains(w_dir, space.wrap('modules'))) + class AppTestObject(AppTestCpythonExtensionBase): def setup_class(cls): AppTestCpythonExtensionBase.setup_class.im_func(cls) diff --git a/pypy/module/cpyext/test/test_pyfile.py b/pypy/module/cpyext/test/test_pyfile.py --- a/pypy/module/cpyext/test/test_pyfile.py +++ b/pypy/module/cpyext/test/test_pyfile.py @@ -1,5 +1,6 @@ from pypy.module.cpyext.api import fopen, fclose, fwrite from pypy.module.cpyext.test.test_api import BaseApiTest +from pypy.module.cpyext.object import Py_PRINT_RAW from pypy.rpython.lltypesystem import rffi, lltype from pypy.tool.udir import udir import pytest @@ -77,3 +78,28 @@ out = out.replace('\r\n', '\n') assert out == "test\n" + def test_file_writeobject(self, space, api, capfd): + w_obj = space.wrap("test\n") + w_stdout = space.sys.get("stdout") + api.PyFile_WriteObject(w_obj, w_stdout, Py_PRINT_RAW) + api.PyFile_WriteObject(w_obj, w_stdout, 0) + space.call_method(w_stdout, "flush") + out, err = capfd.readouterr() + out = out.replace('\r\n', '\n') + assert out == "test\n'test\\n'" + + def test_file_softspace(self, space, api, capfd): + w_stdout = space.sys.get("stdout") + assert api.PyFile_SoftSpace(w_stdout, 1) == 0 + assert api.PyFile_SoftSpace(w_stdout, 0) == 1 + + api.PyFile_SoftSpace(w_stdout, 1) + w_ns = space.newdict() + space.exec_("print 1,", w_ns, w_ns) + space.exec_("print 2,", w_ns, w_ns) + api.PyFile_SoftSpace(w_stdout, 0) + space.exec_("print 3", w_ns, w_ns) + space.call_method(w_stdout, "flush") + out, err = capfd.readouterr() + out = out.replace('\r\n', '\n') + assert out == " 1 23\n" diff --git a/pypy/module/cpyext/test/test_pystate.py b/pypy/module/cpyext/test/test_pystate.py --- a/pypy/module/cpyext/test/test_pystate.py +++ b/pypy/module/cpyext/test/test_pystate.py @@ -2,6 +2,7 @@ from pypy.module.cpyext.test.test_api import BaseApiTest from pypy.rpython.lltypesystem.lltype import nullptr from pypy.module.cpyext.pystate import PyInterpreterState, PyThreadState +from pypy.module.cpyext.pyobject import from_ref class AppTestThreads(AppTestCpythonExtensionBase): def test_allow_threads(self): @@ -49,3 +50,10 @@ api.PyEval_AcquireThread(tstate) api.PyEval_ReleaseThread(tstate) + + def test_threadstate_dict(self, space, api): + ts = api.PyThreadState_Get() + ref = ts.c_dict + assert ref == api.PyThreadState_GetDict() + w_obj = from_ref(space, ref) + assert space.isinstance_w(w_obj, space.w_dict) diff --git a/pypy/module/cpyext/test/test_setobject.py b/pypy/module/cpyext/test/test_setobject.py --- a/pypy/module/cpyext/test/test_setobject.py +++ b/pypy/module/cpyext/test/test_setobject.py @@ -32,3 +32,13 @@ w_set = api.PySet_New(space.wrap([1,2,3,4])) assert api.PySet_Contains(w_set, space.wrap(1)) 
assert not api.PySet_Contains(w_set, space.wrap(0)) + + def test_set_pop_clear(self, space, api): + w_set = api.PySet_New(space.wrap([1,2,3,4])) + w_obj = api.PySet_Pop(w_set) + assert space.int_w(w_obj) in (1,2,3,4) + assert space.len_w(w_set) == 3 + api.PySet_Clear(w_set) + assert space.len_w(w_set) == 0 + + diff --git a/pypy/module/cpyext/test/test_stringobject.py b/pypy/module/cpyext/test/test_stringobject.py --- a/pypy/module/cpyext/test/test_stringobject.py +++ b/pypy/module/cpyext/test/test_stringobject.py @@ -166,6 +166,20 @@ res = module.test_string_format(1, "xyz") assert res == "bla 1 ble xyz\n" + def test_intern_inplace(self): + module = self.import_extension('foo', [ + ("test_intern_inplace", "METH_O", + ''' + PyObject *s = args; + Py_INCREF(s); + PyString_InternInPlace(&s); + return s; + ''' + ) + ]) + # This does not test much, but at least the refcounts are checked. + assert module.test_intern_inplace('s') == 's' + class TestString(BaseApiTest): def test_string_resize(self, space, api): py_str = new_empty_str(space, 10) diff --git a/pypy/module/cpyext/test/test_unicodeobject.py b/pypy/module/cpyext/test/test_unicodeobject.py --- a/pypy/module/cpyext/test/test_unicodeobject.py +++ b/pypy/module/cpyext/test/test_unicodeobject.py @@ -420,3 +420,12 @@ w_seq = space.wrap([u'a', u'b']) w_joined = api.PyUnicode_Join(w_sep, w_seq) assert space.unwrap(w_joined) == u'ab' + + def test_fromordinal(self, space, api): + w_char = api.PyUnicode_FromOrdinal(65) + assert space.unwrap(w_char) == u'A' + w_char = api.PyUnicode_FromOrdinal(0) + assert space.unwrap(w_char) == u'\0' + w_char = api.PyUnicode_FromOrdinal(0xFFFF) + assert space.unwrap(w_char) == u'\uFFFF' + diff --git a/pypy/module/cpyext/unicodeobject.py b/pypy/module/cpyext/unicodeobject.py --- a/pypy/module/cpyext/unicodeobject.py +++ b/pypy/module/cpyext/unicodeobject.py @@ -395,6 +395,16 @@ w_str = space.wrap(rffi.charpsize2str(s, size)) return space.call_method(w_str, 'decode', space.wrap("utf-8")) + at cpython_api([rffi.INT_real], PyObject) +def PyUnicode_FromOrdinal(space, ordinal): + """Create a Unicode Object from the given Unicode code point ordinal. + + The ordinal must be in range(0x10000) on narrow Python builds + (UCS2), and range(0x110000) on wide builds (UCS4). 
A ValueError is + raised in case it is not.""" + w_ordinal = space.wrap(rffi.cast(lltype.Signed, ordinal)) + return space.call_function(space.builtin.get('unichr'), w_ordinal) + @cpython_api([PyObjectP, Py_ssize_t], rffi.INT_real, error=-1) def PyUnicode_Resize(space, ref, newsize): # XXX always create a new string so far diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -1297,6 +1297,7 @@ nbytes = GetSetProperty(BaseArray.descr_get_nbytes), T = GetSetProperty(BaseArray.descr_get_transpose), + transpose = interp2app(BaseArray.descr_get_transpose), flat = GetSetProperty(BaseArray.descr_get_flatiter), ravel = interp2app(BaseArray.descr_ravel), item = interp2app(BaseArray.descr_item), diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -1487,6 +1487,7 @@ a = array((range(10), range(20, 30))) b = a.T assert(b[:, 0] == a[0, :]).all() + assert (a.transpose() == b).all() def test_flatiter(self): from _numpypy import array, flatiter, arange diff --git a/pypy/module/oracle/interp_error.py b/pypy/module/oracle/interp_error.py --- a/pypy/module/oracle/interp_error.py +++ b/pypy/module/oracle/interp_error.py @@ -72,7 +72,7 @@ get(space).w_InternalError, space.wrap("No Oracle error?")) - self.code = codeptr[0] + self.code = rffi.cast(lltype.Signed, codeptr[0]) self.w_message = config.w_string(space, textbuf) finally: lltype.free(codeptr, flavor='raw') diff --git a/pypy/module/oracle/interp_variable.py b/pypy/module/oracle/interp_variable.py --- a/pypy/module/oracle/interp_variable.py +++ b/pypy/module/oracle/interp_variable.py @@ -359,14 +359,14 @@ # Verifies that truncation or other problems did not take place on # retrieve. if self.isVariableLength: - if rffi.cast(lltype.Signed, self.returnCode[pos]) != 0: + error_code = rffi.cast(lltype.Signed, self.returnCode[pos]) + if error_code != 0: error = W_Error(space, self.environment, "Variable_VerifyFetch()", 0) - error.code = self.returnCode[pos] + error.code = error_code error.message = space.wrap( "column at array pos %d fetched with error: %d" % - (pos, - rffi.cast(lltype.Signed, self.returnCode[pos]))) + (pos, error_code)) w_error = get(space).w_DatabaseError raise OperationError(get(space).w_DatabaseError, diff --git a/pypy/module/test_lib_pypy/test_datetime.py b/pypy/module/test_lib_pypy/test_datetime.py --- a/pypy/module/test_lib_pypy/test_datetime.py +++ b/pypy/module/test_lib_pypy/test_datetime.py @@ -1,7 +1,10 @@ """Additional tests for datetime.""" +import py + import time import datetime +import copy import os def test_utcfromtimestamp(): @@ -26,3 +29,18 @@ def test_utcfromtimestamp_microsecond(): dt = datetime.datetime.utcfromtimestamp(0) assert isinstance(dt.microsecond, int) + + +def test_integer_args(): + with py.test.raises(TypeError): + datetime.datetime(10, 10, 10.) + with py.test.raises(TypeError): + datetime.datetime(10, 10, 10, 10, 10.) + with py.test.raises(TypeError): + datetime.datetime(10, 10, 10, 10, 10, 10.) 
+ +def test_utcnow_microsecond(): + dt = datetime.datetime.utcnow() + assert type(dt.microsecond) is int + + copy.copy(dt) \ No newline at end of file diff --git a/pypy/objspace/fake/objspace.py b/pypy/objspace/fake/objspace.py --- a/pypy/objspace/fake/objspace.py +++ b/pypy/objspace/fake/objspace.py @@ -326,4 +326,5 @@ return w_some_obj() FakeObjSpace.sys = FakeModule() FakeObjSpace.sys.filesystemencoding = 'foobar' +FakeObjSpace.sys.defaultencoding = 'ascii' FakeObjSpace.builtin = FakeModule() diff --git a/pypy/objspace/flow/flowcontext.py b/pypy/objspace/flow/flowcontext.py --- a/pypy/objspace/flow/flowcontext.py +++ b/pypy/objspace/flow/flowcontext.py @@ -410,7 +410,7 @@ w_new = Constant(newvalue) f = self.crnt_frame stack_items_w = f.locals_stack_w - for i in range(f.valuestackdepth-1, f.nlocals-1, -1): + for i in range(f.valuestackdepth-1, f.pycode.co_nlocals-1, -1): w_v = stack_items_w[i] if isinstance(w_v, Constant): if w_v.value is oldvalue: diff --git a/pypy/objspace/flow/test/test_framestate.py b/pypy/objspace/flow/test/test_framestate.py --- a/pypy/objspace/flow/test/test_framestate.py +++ b/pypy/objspace/flow/test/test_framestate.py @@ -25,7 +25,7 @@ dummy = Constant(None) #dummy.dummy = True arg_list = ([Variable() for i in range(formalargcount)] + - [dummy] * (frame.nlocals - formalargcount)) + [dummy] * (frame.pycode.co_nlocals - formalargcount)) frame.setfastscope(arg_list) return frame @@ -42,7 +42,7 @@ def test_neq_hacked_framestate(self): frame = self.getframe(self.func_simple) fs1 = FrameState(frame) - frame.locals_stack_w[frame.nlocals-1] = Variable() + frame.locals_stack_w[frame.pycode.co_nlocals-1] = Variable() fs2 = FrameState(frame) assert fs1 != fs2 @@ -55,7 +55,7 @@ def test_union_on_hacked_framestates(self): frame = self.getframe(self.func_simple) fs1 = FrameState(frame) - frame.locals_stack_w[frame.nlocals-1] = Variable() + frame.locals_stack_w[frame.pycode.co_nlocals-1] = Variable() fs2 = FrameState(frame) assert fs1.union(fs2) == fs2 # fs2 is more general assert fs2.union(fs1) == fs2 # fs2 is more general @@ -63,7 +63,7 @@ def test_restore_frame(self): frame = self.getframe(self.func_simple) fs1 = FrameState(frame) - frame.locals_stack_w[frame.nlocals-1] = Variable() + frame.locals_stack_w[frame.pycode.co_nlocals-1] = Variable() fs1.restoreframe(frame) assert fs1 == FrameState(frame) @@ -82,7 +82,7 @@ def test_getoutputargs(self): frame = self.getframe(self.func_simple) fs1 = FrameState(frame) - frame.locals_stack_w[frame.nlocals-1] = Variable() + frame.locals_stack_w[frame.pycode.co_nlocals-1] = Variable() fs2 = FrameState(frame) outputargs = fs1.getoutputargs(fs2) # 'x' -> 'x' is a Variable @@ -92,16 +92,16 @@ def test_union_different_constants(self): frame = self.getframe(self.func_simple) fs1 = FrameState(frame) - frame.locals_stack_w[frame.nlocals-1] = Constant(42) + frame.locals_stack_w[frame.pycode.co_nlocals-1] = Constant(42) fs2 = FrameState(frame) fs3 = fs1.union(fs2) fs3.restoreframe(frame) - assert isinstance(frame.locals_stack_w[frame.nlocals-1], Variable) - # ^^^ generalized + assert isinstance(frame.locals_stack_w[frame.pycode.co_nlocals-1], + Variable) # generalized def test_union_spectag(self): frame = self.getframe(self.func_simple) fs1 = FrameState(frame) - frame.locals_stack_w[frame.nlocals-1] = Constant(SpecTag()) + frame.locals_stack_w[frame.pycode.co_nlocals-1] = Constant(SpecTag()) fs2 = FrameState(frame) assert fs1.union(fs2) is None # UnionError diff --git a/pypy/rlib/libffi.py b/pypy/rlib/libffi.py --- a/pypy/rlib/libffi.py +++ 
b/pypy/rlib/libffi.py @@ -238,7 +238,7 @@ self = jit.promote(self) if argchain.numargs != len(self.argtypes): raise TypeError, 'Wrong number of arguments: %d expected, got %d' %\ - (argchain.numargs, len(self.argtypes)) + (len(self.argtypes), argchain.numargs) ll_args = self._prepare() i = 0 arg = argchain.first diff --git a/pypy/translator/c/database.py b/pypy/translator/c/database.py --- a/pypy/translator/c/database.py +++ b/pypy/translator/c/database.py @@ -28,11 +28,13 @@ gctransformer = None def __init__(self, translator=None, standalone=False, + cpython_extension=False, gcpolicyclass=None, thread_enabled=False, sandbox=False): self.translator = translator self.standalone = standalone + self.cpython_extension = cpython_extension self.sandbox = sandbox if gcpolicyclass is None: gcpolicyclass = gc.RefcountingGcPolicy diff --git a/pypy/translator/c/dlltool.py b/pypy/translator/c/dlltool.py --- a/pypy/translator/c/dlltool.py +++ b/pypy/translator/c/dlltool.py @@ -14,11 +14,14 @@ CBuilder.__init__(self, *args, **kwds) def getentrypointptr(self): + entrypoints = [] bk = self.translator.annotator.bookkeeper - graphs = [bk.getdesc(f).cachedgraph(None) for f, _ in self.functions] - return [getfunctionptr(graph) for graph in graphs] + for f, _ in self.functions: + graph = bk.getdesc(f).getuniquegraph() + entrypoints.append(getfunctionptr(graph)) + return entrypoints - def gen_makefile(self, targetdir): + def gen_makefile(self, targetdir, exe_name=None): pass # XXX finish def compile(self): diff --git a/pypy/translator/c/extfunc.py b/pypy/translator/c/extfunc.py --- a/pypy/translator/c/extfunc.py +++ b/pypy/translator/c/extfunc.py @@ -78,7 +78,7 @@ yield ('RPYTHON_EXCEPTION_MATCH', exceptiondata.fn_exception_match) yield ('RPYTHON_TYPE_OF_EXC_INST', exceptiondata.fn_type_of_exc_inst) yield ('RPYTHON_RAISE_OSERROR', exceptiondata.fn_raise_OSError) - if not db.standalone: + if db.cpython_extension: yield ('RPYTHON_PYEXCCLASS2EXC', exceptiondata.fn_pyexcclass2exc) yield ('RPyExceptionOccurred1', exctransformer.rpyexc_occured_ptr.value) diff --git a/pypy/translator/c/gcc/trackgcroot.py b/pypy/translator/c/gcc/trackgcroot.py --- a/pypy/translator/c/gcc/trackgcroot.py +++ b/pypy/translator/c/gcc/trackgcroot.py @@ -471,8 +471,8 @@ return [] IGNORE_OPS_WITH_PREFIXES = dict.fromkeys([ - 'cmp', 'test', 'set', 'sahf', 'lahf', 'cltd', 'cld', 'std', - 'rep', 'movs', 'lods', 'stos', 'scas', 'cwtl', 'cwde', 'prefetch', + 'cmp', 'test', 'set', 'sahf', 'lahf', 'cld', 'std', + 'rep', 'movs', 'lods', 'stos', 'scas', 'cwde', 'prefetch', # floating-point operations cannot produce GC pointers 'f', 'cvt', 'ucomi', 'comi', 'subs', 'subp' , 'adds', 'addp', 'xorp', @@ -485,6 +485,8 @@ 'bswap', 'bt', 'rdtsc', 'punpck', 'pshufd', 'pcmp', 'pand', 'psllw', 'pslld', 'psllq', 'paddq', 'pinsr', + # sign-extending moves should not produce GC pointers + 'cbtw', 'cwtl', 'cwtd', 'cltd', 'cltq', 'cqto', # zero-extending moves should not produce GC pointers 'movz', # locked operations should not move GC pointers, at least so far diff --git a/pypy/translator/c/genc.py b/pypy/translator/c/genc.py --- a/pypy/translator/c/genc.py +++ b/pypy/translator/c/genc.py @@ -111,6 +111,7 @@ _compiled = False modulename = None split = False + cpython_extension = False def __init__(self, translator, entrypoint, config, gcpolicy=None, secondary_entrypoints=()): @@ -144,6 +145,7 @@ raise NotImplementedError("--gcrootfinder=asmgcc requires standalone") db = LowLevelDatabase(translator, standalone=self.standalone, + 
cpython_extension=self.cpython_extension, gcpolicyclass=gcpolicyclass, thread_enabled=self.config.translation.thread, sandbox=self.config.translation.sandbox) @@ -249,6 +251,8 @@ CBuilder.have___thread = self.translator.platform.check___thread() if not self.standalone: assert not self.config.translation.instrument + if self.cpython_extension: + defines['PYPY_CPYTHON_EXTENSION'] = 1 else: defines['PYPY_STANDALONE'] = db.get(pf) if self.config.translation.instrument: @@ -320,13 +324,18 @@ class CExtModuleBuilder(CBuilder): standalone = False + cpython_extension = True _module = None _wrapper = None def get_eci(self): from distutils import sysconfig python_inc = sysconfig.get_python_inc() - eci = ExternalCompilationInfo(include_dirs=[python_inc]) + eci = ExternalCompilationInfo( + include_dirs=[python_inc], + includes=["Python.h", + ], + ) return eci.merge(CBuilder.get_eci(self)) def getentrypointptr(self): # xxx diff --git a/pypy/translator/c/src/exception.h b/pypy/translator/c/src/exception.h --- a/pypy/translator/c/src/exception.h +++ b/pypy/translator/c/src/exception.h @@ -2,7 +2,7 @@ /************************************************************/ /*** C header subsection: exceptions ***/ -#if !defined(PYPY_STANDALONE) && !defined(PYPY_NOT_MAIN_FILE) +#if defined(PYPY_CPYTHON_EXTENSION) && !defined(PYPY_NOT_MAIN_FILE) PyObject *RPythonError; #endif @@ -74,7 +74,7 @@ RPyRaiseException(RPYTHON_TYPE_OF_EXC_INST(rexc), rexc); } -#ifndef PYPY_STANDALONE +#ifdef PYPY_CPYTHON_EXTENSION void RPyConvertExceptionFromCPython(void) { /* convert the CPython exception to an RPython one */ diff --git a/pypy/translator/c/src/g_include.h b/pypy/translator/c/src/g_include.h --- a/pypy/translator/c/src/g_include.h +++ b/pypy/translator/c/src/g_include.h @@ -2,7 +2,7 @@ /************************************************************/ /*** C header file for code produced by genc.py ***/ -#ifndef PYPY_STANDALONE +#ifdef PYPY_CPYTHON_EXTENSION # include "Python.h" # include "compile.h" # include "frameobject.h" diff --git a/pypy/translator/c/src/g_prerequisite.h b/pypy/translator/c/src/g_prerequisite.h --- a/pypy/translator/c/src/g_prerequisite.h +++ b/pypy/translator/c/src/g_prerequisite.h @@ -5,8 +5,6 @@ #ifdef PYPY_STANDALONE # include "src/commondefs.h" -#else -# include "Python.h" #endif #ifdef _WIN32 diff --git a/pypy/translator/c/src/pyobj.h b/pypy/translator/c/src/pyobj.h --- a/pypy/translator/c/src/pyobj.h +++ b/pypy/translator/c/src/pyobj.h @@ -2,7 +2,7 @@ /************************************************************/ /*** C header subsection: untyped operations ***/ /*** as OP_XXX() macros calling the CPython API ***/ - +#ifdef PYPY_CPYTHON_EXTENSION #define op_bool(r,what) { \ int _retval = what; \ @@ -261,3 +261,5 @@ } #endif + +#endif /* PYPY_CPYTHON_EXTENSION */ diff --git a/pypy/translator/c/src/support.h b/pypy/translator/c/src/support.h --- a/pypy/translator/c/src/support.h +++ b/pypy/translator/c/src/support.h @@ -104,7 +104,7 @@ # define RPyBareItem(array, index) ((array)[index]) #endif -#ifndef PYPY_STANDALONE +#ifdef PYPY_CPYTHON_EXTENSION /* prototypes */ diff --git a/pypy/translator/c/test/test_dlltool.py b/pypy/translator/c/test/test_dlltool.py --- a/pypy/translator/c/test/test_dlltool.py +++ b/pypy/translator/c/test/test_dlltool.py @@ -2,7 +2,6 @@ from pypy.translator.c.dlltool import DLLDef from ctypes import CDLL import py -py.test.skip("fix this if needed") class TestDLLTool(object): def test_basic(self): @@ -16,8 +15,8 @@ d = DLLDef('lib', [(f, [int]), (b, [int])]) so = d.compile() 
dll = CDLL(str(so)) - assert dll.f(3) == 3 - assert dll.b(10) == 12 + assert dll.pypy_g_f(3) == 3 + assert dll.pypy_g_b(10) == 12 def test_split_criteria(self): def f(x): @@ -28,4 +27,5 @@ d = DLLDef('lib', [(f, [int]), (b, [int])]) so = d.compile() - assert py.path.local(so).dirpath().join('implement.c').check() + dirpath = py.path.local(so).dirpath() + assert dirpath.join('translator_c_test_test_dlltool.c').check() diff --git a/pypy/translator/driver.py b/pypy/translator/driver.py --- a/pypy/translator/driver.py +++ b/pypy/translator/driver.py @@ -331,6 +331,7 @@ raise Exception("stand-alone program entry point must return an " "int (and not, e.g., None or always raise an " "exception).") + annotator.complete() annotator.simplify() return s From noreply at buildbot.pypy.org Thu Feb 16 10:10:03 2012 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 16 Feb 2012 10:10:03 +0100 (CET) Subject: [pypy-commit] pypy stm-gc: A bunch of tests. No code so far :-) Message-ID: <20120216091003.E77DB8204C@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-gc Changeset: r52542:73b92e6987b6 Date: 2012-02-16 10:09 +0100 http://bitbucket.org/pypy/pypy/changeset/73b92e6987b6/ Log: A bunch of tests. No code so far :-) diff --git a/pypy/translator/stm/test/test_localtracker.py b/pypy/translator/stm/test/test_localtracker.py new file mode 100644 --- /dev/null +++ b/pypy/translator/stm/test/test_localtracker.py @@ -0,0 +1,133 @@ +from pypy.translator.stm.localtracker import StmLocalTracker +from pypy.translator.translator import TranslationContext, graphof +from pypy.conftest import option +from pypy.rlib.jit import hint + + +class TestStmLocalTracker(object): + + def translate(self, func, sig): + t = TranslationContext() + t.buildannotator().build_types(func, sig) + t.buildrtyper().specialize() + if option.view: + t.view() + localtracker = StmLocalTracker(t) + self.localtracker = localtracker + localtracker.track_and_propagate_locals() + return localtracker + + + def test_no_local(self): + x = X(42) + def g(x): + return x.n + def f(n): + return g(x) + # + localtracker = self.translate(f, [int]) + assert not localtracker.locals + + def test_freshly_allocated(self): + z = [42] + def f(n): + x = [n] + y = [n+1] + _see(x, 'x') + _see(y, 'y') + _see(z, 'z') + return x[0], y[0] + # + self.translate(f, [int]) + self.check(['x', 'y']) # x and y are locals; z is prebuilt + + def test_freshly_allocated_to_g(self): + def g(x): + _see(x, 'x') + return x[0] + def f(n): + g([n]) + g([n+1]) + g([n+2]) + # + self.translate(f, [int]) + self.check(['x']) # x is a local in all possible calls to g() + + def test_not_always_freshly_allocated_to_g(self): + z = [42] + def g(x): + _see(x, 'x') + return x[0] + def f(n): + y = [n] + g(y) + g(z) + _see(y, 'y') + # + self.translate(f, [int]) + self.check(['y']) # x is not a local in one possible call to g() + # but y is still a local + + def test_constructor_allocates_freshly(self): + def f(n): + x = X(n) + _see(x, 'x') + # + self.translate(f, [int]) + self.check(['x']) + + def test_fresh_in_init(self): + class Foo: + def __init__(self, n): + self.n = n + _see(self, 'foo') + def f(n): + return Foo(n) + # + self.translate(f, [int]) + self.check(['foo']) + + def test_returns_fresh_object(self): + def g(n): + return X(n) + def f(n): + x = g(n) + _see(x, 'x') + # + self.translate(f, [int]) + self.check(['x']) + + def test_indirect_call_returns_fresh_object(self): + def g(n): + return X(n) + def h(n): + return Y(n) + lst = [g, h] + def f(n): + x = lst[n % 2](n) + _see(x, 'x') + # 
+ self.translate(f, [int]) + self.check(['x']) + + def test_indirect_call_may_return_nonfresh_object(self): + z = X(42) + def g(n): + return X(n) + def h(n): + return z + lst = [g, h] + def f(n): + x = lst[n % 2](n) + _see(x, 'x') + # + self.translate(f, [int]) + self.check([]) + + +class X: + def __init__(self, n): + self.n = n + +class Y(X): + pass From noreply at buildbot.pypy.org Thu Feb 16 10:14:33 2012 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 16 Feb 2012 10:14:33 +0100 (CET) Subject: [pypy-commit] pypy default: Patch by djc from Gentoo. Message-ID: <20120216091433.C2B4E8204C@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r52543:26a8d3fc57a7 Date: 2012-02-16 10:14 +0100 http://bitbucket.org/pypy/pypy/changeset/26a8d3fc57a7/ Log: Patch by djc from Gentoo. diff --git a/pypy/translator/c/gcc/trackgcroot.py b/pypy/translator/c/gcc/trackgcroot.py --- a/pypy/translator/c/gcc/trackgcroot.py +++ b/pypy/translator/c/gcc/trackgcroot.py @@ -1697,6 +1697,8 @@ } """ elif self.format in ('elf64', 'darwin64'): + if self.format == 'elf64': # gentoo patch: hardened systems + print >> output, "\t.section .note.GNU-stack,\"\",%progbits" print >> output, "\t.text" print >> output, "\t.globl %s" % _globalname('pypy_asm_stackwalk') _variant(elf64='.type pypy_asm_stackwalk, @function', From noreply at buildbot.pypy.org Thu Feb 16 10:48:51 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Thu, 16 Feb 2012 10:48:51 +0100 (CET) Subject: [pypy-commit] extradoc extradoc: status update about py3k Message-ID: <20120216094851.E83D38204C@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: extradoc Changeset: r4087:a9add862b58c Date: 2012-02-16 10:48 +0100 http://bitbucket.org/pypy/extradoc/changeset/a9add862b58c/ Log: status update about py3k diff --git a/blog/draft/py3k-status-update-1.rst b/blog/draft/py3k-status-update-1.rst new file mode 100644 --- /dev/null +++ b/blog/draft/py3k-status-update-1.rst @@ -0,0 +1,53 @@ +Hello, + +thank to all the people who donated_ to the `py3k proposal`_, we managed to +collect enough money to start to work on the first step. This is a quick +summary of what I did since I began working on this. + +First of all, many thanks to Amaury Forgeot d'Arc, who started the `py3k +branch`_ months ago, and already implemented lots of features including +e.g. switching to "unicode everywhere" and the int/long unification, making my +job considerably easier :-), + +I started to work on the branch at the last `Leysin sprint`_ toghether with +Romain Guillebert, where we worked on various syntactical changes such as +extended tuple unpacking and keyword-only arguments. Working on such features +is a good way to learn about a lot of the layers which the PyPy Python +interpreter is composed of, because often you have to touch the tokenizer, the +parser, the ast builder, the compiler and finally the interpreter. + +Then I worked on improving our test machinery in various way, e.g. by +optimizing the initialization phase of the object space created by tests, +which considerably speeds up small test runs, and adding the possibility to +automatically run our tests against CPython 3, to ensure that what we are not +trying to fix a test which is meant to fail :-). I also setup our buildbot to +run the `py3k tests nightly`_, so that we can have an up to date overview of +what is left to do. + +Finally I started to look at all the tests in the interpreter/ directory, +trying to unmangle the mess of failing tests. 
Lots of tests were failing +because of simple syntax errors (e.g., by using the no longer valid ``except +Exception, e`` syntax or the old ``print`` statement), others for slightly +more complex reasons like ``unicode`` vs ``bytes`` or the now gone int/long +distinction. Others were failing simply because they relied on new features, +such as the new `lexical exception handlers`_. + +To give some numbers, at some point in january we had 1621 failing tests in +the branch, while today we are `under 1000`_ (to be exact: 999, and this is why +I've waited until today to post the status update :-)). + +Before ending this blog post, I would like to thank once again all the people +who donated to PyPy, who let me to do this wonderful job. That's all for now, +I'll post more updates soon. + +cheers, +Antonio + +.. _donated: http://morepypy.blogspot.com/2012/01/py3k-and-numpy-first-stage-thanks-to.html +.. _`py3k proposal`: http://pypy.org/py3donate.html +.. _`Leysin sprint`: http://morepypy.blogspot.com/2011/12/leysin-winter-sprint.html +.. _`py3k tests nightly`: http://buildbot.pypy.org/summary?branch=py3k +.. _`lexical exception handlers`: http://bugs.python.org/issue3021 +.. _`under 1000`: http://buildbot.pypy.org/summary?category=linux32&branch=py3k&recentrev=52508:c1756f5aa63e + + From noreply at buildbot.pypy.org Thu Feb 16 10:53:23 2012 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 16 Feb 2012 10:53:23 +0100 (CET) Subject: [pypy-commit] extradoc extradoc: Typos Message-ID: <20120216095323.47CFE8204C@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: extradoc Changeset: r4088:59228d3cb390 Date: 2012-02-16 10:53 +0100 http://bitbucket.org/pypy/extradoc/changeset/59228d3cb390/ Log: Typos diff --git a/blog/draft/py3k-status-update-1.rst b/blog/draft/py3k-status-update-1.rst --- a/blog/draft/py3k-status-update-1.rst +++ b/blog/draft/py3k-status-update-1.rst @@ -1,15 +1,15 @@ Hello, -thank to all the people who donated_ to the `py3k proposal`_, we managed to +Thank to all the people who donated_ to the `py3k proposal`_, we managed to collect enough money to start to work on the first step. This is a quick summary of what I did since I began working on this. First of all, many thanks to Amaury Forgeot d'Arc, who started the `py3k branch`_ months ago, and already implemented lots of features including e.g. switching to "unicode everywhere" and the int/long unification, making my -job considerably easier :-), +job considerably easier :-) -I started to work on the branch at the last `Leysin sprint`_ toghether with +I started to work on the branch at the last `Leysin sprint`_ together with Romain Guillebert, where we worked on various syntactical changes such as extended tuple unpacking and keyword-only arguments. Working on such features is a good way to learn about a lot of the layers which the PyPy Python From noreply at buildbot.pypy.org Thu Feb 16 12:24:37 2012 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 16 Feb 2012 12:24:37 +0100 (CET) Subject: [pypy-commit] pypy stm-gc: Comments and start. Message-ID: <20120216112437.985148204C@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-gc Changeset: r52544:e4072231e489 Date: 2012-02-16 11:03 +0100 http://bitbucket.org/pypy/pypy/changeset/e4072231e489/ Log: Comments and start. 
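The changeset below adds a first skeleton of StmLocalTracker next to the tests introduced in r52542: the tests build a fully annotated and rtyped TranslationContext, mark individual variables with the _see() helper (a no-op at run time, registered through an ExtRegistryEntry so that the rtyper records the flow-graph variable under a name), and check() then compares which of those variables the tracker claims are local. As a rough sketch of how the tests drive it (build_tracker is a hypothetical helper name; the individual calls are the ones used in test_localtracker.py):

    from pypy.translator.translator import TranslationContext
    from pypy.translator.stm.localtracker import StmLocalTracker

    def build_tracker(func, argtypes):
        # translate 'func' the same way TestStmLocalTracker.translate() does:
        # annotate, rtype, then hand the translator to the tracker
        t = TranslationContext()
        t.buildannotator().build_types(func, argtypes)
        t.buildrtyper().specialize()
        return StmLocalTracker(t)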
diff --git a/pypy/translator/stm/localtracker.py b/pypy/translator/stm/localtracker.py new file mode 100644 --- /dev/null +++ b/pypy/translator/stm/localtracker.py @@ -0,0 +1,29 @@ + + +RETURNS_LOCAL_POINTER = set([ + 'malloc', 'malloc_varsize', 'malloc_nonmovable', + 'malloc_nonmovable_varsize', + ]) + + +class StmLocalTracker(object): + """Tracker to determine which pointers are statically known to point + to local objects. Here, 'local' versus 'global' is meant in the sense + of the stmgc: a pointer is 'local' if it goes to the thread-local memory, + and 'global' if it points to the shared read-only memory area.""" + + def __init__(self, translator): + self.translator = translator + # a set of variables in the graphs that contain a known-to-be-local + # pointer. + self.locals = set() + + def track_and_propagate_locals(self): + for graph in self.translator.graphs: + self.propagate_from_graph(graph) + + def propagate_from_graph(self, graph): + for block in graph.iterblocks(): + for op in block.operations: + if op.opname in RETURNS_LOCAL_POINTER: + self.locals.add(op.result) diff --git a/pypy/translator/stm/test/test_localtracker.py b/pypy/translator/stm/test/test_localtracker.py --- a/pypy/translator/stm/test/test_localtracker.py +++ b/pypy/translator/stm/test/test_localtracker.py @@ -2,12 +2,17 @@ from pypy.translator.translator import TranslationContext, graphof from pypy.conftest import option from pypy.rlib.jit import hint +from pypy.rpython.lltypesystem import lltype +from pypy.rpython.extregistry import ExtRegistryEntry +from pypy.annotation import model as annmodel class TestStmLocalTracker(object): def translate(self, func, sig): t = TranslationContext() + self.translator = t + t._seen_locals = {} t.buildannotator().build_types(func, sig) t.buildrtyper().specialize() if option.view: @@ -17,6 +22,13 @@ localtracker.track_and_propagate_locals() return localtracker + def check(self, expected_names): + got_local_names = set() + for name, v in self.translator._seen_locals.items(): + if v in self.localtracker.locals: + got_local_names.add(name) + assert got_local_names == set(expected_names) + def test_no_local(self): x = X(42) @@ -26,21 +38,72 @@ return g(x) # localtracker = self.translate(f, [int]) - assert not localtracker.locals + self.check([]) def test_freshly_allocated(self): - z = [42] + z = lltype.malloc(S) def f(n): - x = [n] - y = [n+1] + x = lltype.malloc(S) + x.n = n + y = lltype.malloc(S) + y.n = n+1 _see(x, 'x') _see(y, 'y') _see(z, 'z') - return x[0], y[0] + return x.n, y.n, z.n # self.translate(f, [int]) self.check(['x', 'y']) # x and y are locals; z is prebuilt + def test_freshly_allocated_in_one_path(self): + z = lltype.malloc(S) + def f(n): + x = lltype.malloc(S) + x.n = n + if n > 5: + y = lltype.malloc(S) + y.n = n+1 + else: + y = z + _see(x, 'x') + _see(y, 'y') + return x.n + y.n + # + self.translate(f, [int]) + self.check(['x']) # x is local; y not, as it can be equal to z + + def test_freshly_allocated_in_the_other_path(self): + z = lltype.malloc(S) + def f(n): + x = lltype.malloc(S) + x.n = n + if n > 5: + y = z + else: + y = lltype.malloc(S) + y.n = n+1 + _see(x, 'x') + _see(y, 'y') + return x.n + y.n + # + self.translate(f, [int]) + self.check(['x']) # x is local; y not, as it can be equal to z + + def test_freshly_allocated_in_loop(self): + z = lltype.malloc(S) + def f(n): + while True: + x = lltype.malloc(S) + x.n = n + n -= 1 + if n < 0: + break + _see(x, 'x') + return x.n + # + self.translate(f, [int]) + self.check(['x']) # x is local + def 
test_freshly_allocated_to_g(self): def g(x): _see(x, 'x') @@ -125,9 +188,29 @@ self.check([]) +S = lltype.GcStruct('S', ('n', lltype.Signed)) + class X: def __init__(self, n): self.n = n class Y(X): pass + + +def _see(var, name): + pass + +class Entry(ExtRegistryEntry): + _about_ = _see + + def compute_result_annotation(self, s_var, s_name): + return annmodel.s_None + + def specialize_call(self, hop): + v = hop.inputarg(hop.args_r[0], arg=0) + name = hop.args_s[1].const + assert name not in hop.rtyper.annotator.translator._seen_locals, ( + "duplicate name %r" % (name,)) + hop.rtyper.annotator.translator._seen_locals[name] = v + return hop.inputconst(lltype.Void, None) From noreply at buildbot.pypy.org Thu Feb 16 12:24:38 2012 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 16 Feb 2012 12:24:38 +0100 (CET) Subject: [pypy-commit] pypy stm-gc: A tracker that attempts to follow globally where GC pointers go. Message-ID: <20120216112438.C52D982B1F@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-gc Changeset: r52545:ac10a97ed5c2 Date: 2012-02-16 12:14 +0100 http://bitbucket.org/pypy/pypy/changeset/ac10a97ed5c2/ Log: A tracker that attempts to follow globally where GC pointers go. diff --git a/pypy/translator/stm/gcsource.py b/pypy/translator/stm/gcsource.py new file mode 100644 --- /dev/null +++ b/pypy/translator/stm/gcsource.py @@ -0,0 +1,106 @@ +from pypy.objspace.flow.model import Variable +from pypy.rpython.lltypesystem import lltype +from pypy.translator.simplify import get_graph + + +COPIES_POINTER = set([ + 'force_cast', 'cast_pointer', 'same_as', 'cast_opaque_ptr', + ]) + + +def _is_gc(var_or_const): + TYPE = var_or_const.concretetype + return isinstance(TYPE, lltype.Ptr) and TYPE.TO._gckind == 'gc' + +def enum_gc_dependencies(translator): + """Enumerate pairs (var-or-const-or-op, var) that together describe + the whole control flow of GC pointers in the program. If the source + is a SpaceOperation, it means 'produced by this operation but we can't + follow what this operation does'. If the source is None, it means + 'coming from somewhere, unsure where'. + """ + # Tracking dependencies of only GC pointers simplifies the logic here. + # We don't have to worry about external calls and callbacks. + # This works by assuming that each graph's calls are fully tracked + # by the last argument to 'indirect_call'. Graphs for which we don't + # find any call like this are assumed to be called 'from the outside' + # passing any random arguments to it. 
+ resultlist = [] + was_a_callee = set() + # + def call(graph, args, result): + inputargs = graph.getargs() + assert len(args) == len(inputargs) + for v1, v2 in zip(args, inputargs): + if _is_gc(v2): + assert _is_gc(v1) + resultlist.append((v1, v2)) + if _is_gc(result): + v = graph.getreturnvar() + assert _is_gc(v) + resultlist.append((v, result)) + was_a_callee.add(graph) + # + for graph in translator.graphs: + for block in graph.iterblocks(): + for op in block.operations: + # + if op.opname in COPIES_POINTER: + if _is_gc(op.result) and _is_gc(op.args[0]): + resultlist.append((op.args[0], op.result)) + continue + # + if op.opname == 'direct_call': + tograph = get_graph(op.args[0], translator) + if tograph is not None: + call(tograph, op.args[1:], op.result) + continue + # + if op.opname == 'indirect_call': + tographs = op.args[-1].value + if tographs is not None: + for tograph in tographs: + call(tograph, op.args[1:-1], op.result) + continue + # + if _is_gc(op.result): + resultlist.append((op, op.result)) + # + for link in block.exits: + for v1, v2 in zip(link.args, link.target.inputargs): + if _is_gc(v2): + assert _is_gc(v1) + resultlist.append((v1, v2)) + # + for graph in translator.graphs: + if graph not in was_a_callee: + for v in graph.getargs(): + if _is_gc(v): + resultlist.append((None, v)) + return resultlist + + +class GcSource(object): + """Works like a dict {gcptr-var: set-of-sources}. A source is a + Constant, or a SpaceOperation that creates the value, or None which + means 'no clue'.""" + + def __init__(self, translator): + self.translator = translator + self._backmapping = {} + for v1, v2 in enum_gc_dependencies(translator): + self._backmapping.setdefault(v2, []).append(v1) + + def __getitem__(self, variable): + result = set() + pending = [variable] + seen = set(pending) + for v2 in pending: + for v1 in self._backmapping.get(v2, ()): + if isinstance(v1, Variable): + if v1 not in seen: + seen.add(v1) + pending.append(v1) + else: + result.add(v1) + return result diff --git a/pypy/translator/stm/test/test_gcsource.py b/pypy/translator/stm/test/test_gcsource.py new file mode 100644 --- /dev/null +++ b/pypy/translator/stm/test/test_gcsource.py @@ -0,0 +1,117 @@ +from pypy.translator.translator import TranslationContext +from pypy.translator.stm.gcsource import GcSource +from pypy.objspace.flow.model import SpaceOperation, Constant +from pypy.rpython.lltypesystem import lltype + + +class X: + def __init__(self, n): + self.n = n + + +def gcsource(func, sig): + t = TranslationContext() + t.buildannotator().build_types(func, sig) + t.buildrtyper().specialize() + gsrc = GcSource(t) + return gsrc + +def test_simple(): + def main(n): + return X(n) + gsrc = gcsource(main, [int]) + v_result = gsrc.translator.graphs[0].getreturnvar() + s = gsrc[v_result] + assert len(s) == 1 + [op] = list(s) + assert isinstance(op, SpaceOperation) + assert op.opname == 'malloc' + +def test_two_sources(): + foo = X(42) + def main(n): + if n > 5: + return X(n) + else: + return foo + gsrc = gcsource(main, [int]) + v_result = gsrc.translator.graphs[0].getreturnvar() + s = gsrc[v_result] + assert len(s) == 2 + [s1, s2] = list(s) + if isinstance(s1, SpaceOperation): + s1, s2 = s2, s1 + assert isinstance(s1, Constant) + assert s1.value.inst_n == 42 + assert isinstance(s2, SpaceOperation) + assert s2.opname == 'malloc' + +def test_call(): + def f1(n): + return X(n) + def main(n): + return f1(n) + gsrc = gcsource(main, [int]) + v_result = gsrc.translator.graphs[0].getreturnvar() + s = gsrc[v_result] + assert 
len(s) == 1 + assert list(s)[0].opname == 'malloc' + +def test_indirect_call(): + foo = X(42) + def f1(n): + return X(n) + def f2(n): + return foo + lst = [f1, f2] + def main(n): + return lst[n % 2](n) + gsrc = gcsource(main, [int]) + v_result = gsrc.translator.graphs[0].getreturnvar() + s = gsrc[v_result] + assert len(s) == 2 + [s1, s2] = list(s) + if isinstance(s1, SpaceOperation): + s1, s2 = s2, s1 + assert isinstance(s1, Constant) + assert s1.value.inst_n == 42 + assert isinstance(s2, SpaceOperation) + assert s2.opname == 'malloc' + +def test_argument(): + def f1(x): + return x + def main(n): + return f1(X(5)) + gsrc = gcsource(main, [int]) + v_result = gsrc.translator.graphs[0].getreturnvar() + s = gsrc[v_result] + assert len(s) == 1 + assert list(s)[0].opname == 'malloc' + +def test_argument_twice(): + foo = X(42) + def f1(x): + return x + def main(n): + f1(foo) + return f1(X(5)) + gsrc = gcsource(main, [int]) + v_result = gsrc.translator.graphs[0].getreturnvar() + s = gsrc[v_result] + assert len(s) == 2 + [s1, s2] = list(s) + if isinstance(s1, SpaceOperation): + s1, s2 = s2, s1 + assert isinstance(s1, Constant) + assert s1.value.inst_n == 42 + assert isinstance(s2, SpaceOperation) + assert s2.opname == 'malloc' + +def test_unknown_source(): + def main(x): + return x + gsrc = gcsource(main, [lltype.Ptr(lltype.GcStruct('S'))]) + v_result = gsrc.translator.graphs[0].getreturnvar() + s = gsrc[v_result] + assert list(s) == [None] From noreply at buildbot.pypy.org Thu Feb 16 12:24:40 2012 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 16 Feb 2012 12:24:40 +0100 (CET) Subject: [pypy-commit] pypy stm-gc: Use GcSource to implement the StmLocalTracker. Message-ID: <20120216112440.0AF028204C@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-gc Changeset: r52546:5dcd567383ed Date: 2012-02-16 12:23 +0100 http://bitbucket.org/pypy/pypy/changeset/5dcd567383ed/ Log: Use GcSource to implement the StmLocalTracker. diff --git a/pypy/translator/stm/localtracker.py b/pypy/translator/stm/localtracker.py --- a/pypy/translator/stm/localtracker.py +++ b/pypy/translator/stm/localtracker.py @@ -1,3 +1,5 @@ +from pypy.translator.stm.gcsource import GcSource +from pypy.objspace.flow.model import Variable, Constant, SpaceOperation RETURNS_LOCAL_POINTER = set([ @@ -14,16 +16,19 @@ def __init__(self, translator): self.translator = translator - # a set of variables in the graphs that contain a known-to-be-local - # pointer. 
- self.locals = set() + self.gsrc = GcSource(translator) - def track_and_propagate_locals(self): - for graph in self.translator.graphs: - self.propagate_from_graph(graph) - - def propagate_from_graph(self, graph): - for block in graph.iterblocks(): - for op in block.operations: - if op.opname in RETURNS_LOCAL_POINTER: - self.locals.add(op.result) + def is_local(self, variable): + assert isinstance(variable, Variable) + for src in self.gsrc[variable]: + if isinstance(src, SpaceOperation): + if src.opname not in RETURNS_LOCAL_POINTER: + return False + elif isinstance(src, Constant): + if src.value: # a NULL pointer is still valid as local + return False + elif src is None: + return False + else: + raise AssertionError(src) + return True diff --git a/pypy/translator/stm/test/test_localtracker.py b/pypy/translator/stm/test/test_localtracker.py --- a/pypy/translator/stm/test/test_localtracker.py +++ b/pypy/translator/stm/test/test_localtracker.py @@ -2,6 +2,7 @@ from pypy.translator.translator import TranslationContext, graphof from pypy.conftest import option from pypy.rlib.jit import hint +from pypy.rlib.nonconst import NonConstant from pypy.rpython.lltypesystem import lltype from pypy.rpython.extregistry import ExtRegistryEntry from pypy.annotation import model as annmodel @@ -19,13 +20,12 @@ t.view() localtracker = StmLocalTracker(t) self.localtracker = localtracker - localtracker.track_and_propagate_locals() return localtracker def check(self, expected_names): got_local_names = set() for name, v in self.translator._seen_locals.items(): - if v in self.localtracker.locals: + if self.localtracker.is_local(v): got_local_names.add(name) assert got_local_names == set(expected_names) @@ -41,7 +41,7 @@ self.check([]) def test_freshly_allocated(self): - z = lltype.malloc(S) + z = [lltype.malloc(S), lltype.malloc(S)] def f(n): x = lltype.malloc(S) x.n = n @@ -49,8 +49,8 @@ y.n = n+1 _see(x, 'x') _see(y, 'y') - _see(z, 'z') - return x.n, y.n, z.n + _see(z[n % 2], 'z') + return x.n, y.n # self.translate(f, [int]) self.check(['x', 'y']) # x and y are locals; z is prebuilt @@ -104,6 +104,18 @@ self.translate(f, [int]) self.check(['x']) # x is local + def test_none_variable_is_local(self): + def f(n): + if n > 5: + x = lltype.nullptr(S) + else: + x = lltype.malloc(S) + x.n = n + _see(x, 'x') + # + localtracker = self.translate(f, [int]) + self.check(['x']) + def test_freshly_allocated_to_g(self): def g(x): _see(x, 'x') From noreply at buildbot.pypy.org Thu Feb 16 12:28:38 2012 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 16 Feb 2012 12:28:38 +0100 (CET) Subject: [pypy-commit] pypy stm-gc: Better to get an explicit KeyError Message-ID: <20120216112838.D72F08204C@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-gc Changeset: r52547:3f3e5fbbe0a8 Date: 2012-02-16 12:28 +0100 http://bitbucket.org/pypy/pypy/changeset/3f3e5fbbe0a8/ Log: Better to get an explicit KeyError diff --git a/pypy/translator/stm/gcsource.py b/pypy/translator/stm/gcsource.py --- a/pypy/translator/stm/gcsource.py +++ b/pypy/translator/stm/gcsource.py @@ -96,7 +96,9 @@ pending = [variable] seen = set(pending) for v2 in pending: - for v1 in self._backmapping.get(v2, ()): + # we get a KeyError here if 'variable' is not found, + # or if one of the preceeding variables is not found + for v1 in self._backmapping[v2]: if isinstance(v1, Variable): if v1 not in seen: seen.add(v1) From noreply at buildbot.pypy.org Thu Feb 16 13:05:25 2012 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 16 Feb 2012 13:05:25 +0100 (CET) 
Subject: [pypy-commit] pypy stm-gc: Special-case 'instantiate'. Message-ID: <20120216120525.050D38204C@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-gc Changeset: r52548:1add335091c6 Date: 2012-02-16 13:05 +0100 http://bitbucket.org/pypy/pypy/changeset/1add335091c6/ Log: Special-case 'instantiate'. diff --git a/pypy/translator/stm/gcsource.py b/pypy/translator/stm/gcsource.py --- a/pypy/translator/stm/gcsource.py +++ b/pypy/translator/stm/gcsource.py @@ -1,5 +1,5 @@ from pypy.objspace.flow.model import Variable -from pypy.rpython.lltypesystem import lltype +from pypy.rpython.lltypesystem import lltype, rclass from pypy.translator.simplify import get_graph @@ -62,6 +62,19 @@ for tograph in tographs: call(tograph, op.args[1:-1], op.result) continue + # special-case to detect 'instantiate' + is_instantiate = False + v_func = op.args[0] + for op1 in block.operations: + if (v_func is op1.result and + op1.opname == 'getfield' and + op1.args[0].concretetype == rclass.CLASSTYPE and + op1.args[1].value == 'instantiate'): + is_instantiate = True + break + if is_instantiate: + resultlist.append(('instantiate', op.result)) + continue # if _is_gc(op.result): resultlist.append((op, op.result)) diff --git a/pypy/translator/stm/localtracker.py b/pypy/translator/stm/localtracker.py --- a/pypy/translator/stm/localtracker.py +++ b/pypy/translator/stm/localtracker.py @@ -29,6 +29,8 @@ return False elif src is None: return False + elif src == 'instantiate': + pass else: - raise AssertionError(src) + raise AssertionError(repr(src)) return True diff --git a/pypy/translator/stm/test/test_localtracker.py b/pypy/translator/stm/test/test_localtracker.py --- a/pypy/translator/stm/test/test_localtracker.py +++ b/pypy/translator/stm/test/test_localtracker.py @@ -199,6 +199,17 @@ self.translate(f, [int]) self.check([]) + def test_instantiate_returns_fresh_object(self): + def f(n): + if n > 5: + cls = X + else: + cls = Y + _see(cls(n), 'x') + # + self.translate(f, [int]) + self.check(['x']) + S = lltype.GcStruct('S', ('n', lltype.Signed)) From noreply at buildbot.pypy.org Thu Feb 16 13:09:33 2012 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 16 Feb 2012 13:09:33 +0100 (CET) Subject: [pypy-commit] pypy stm-gc: Fix (thanks weirdo). Message-ID: <20120216120933.69E618204C@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-gc Changeset: r52549:f2e6cf67d37a Date: 2012-02-16 13:09 +0100 http://bitbucket.org/pypy/pypy/changeset/f2e6cf67d37a/ Log: Fix (thanks weirdo). diff --git a/pypy/jit/metainterp/virtualizable.py b/pypy/jit/metainterp/virtualizable.py --- a/pypy/jit/metainterp/virtualizable.py +++ b/pypy/jit/metainterp/virtualizable.py @@ -31,7 +31,7 @@ self.vable_token_descr = cpu.fielddescrof(VTYPE, 'vable_token') # accessor = VTYPE._hints['virtualizable2_accessor'] - all_fields = accessor.fields + all_fields = accessor._fields static_fields = [] array_fields = [] for name, tp in all_fields.iteritems(): From noreply at buildbot.pypy.org Thu Feb 16 16:48:38 2012 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 16 Feb 2012 16:48:38 +0100 (CET) Subject: [pypy-commit] pypy stm-gc: Forgot to put this here. Message-ID: <20120216154838.7403F8204C@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-gc Changeset: r52550:d3e6a5adcace Date: 2012-02-16 16:48 +0100 http://bitbucket.org/pypy/pypy/changeset/d3e6a5adcace/ Log: Forgot to put this here. 
diff --git a/pypy/translator/stm/src_stm/et.h b/pypy/translator/stm/src_stm/et.h --- a/pypy/translator/stm/src_stm/et.h +++ b/pypy/translator/stm/src_stm/et.h @@ -49,6 +49,8 @@ long stm_in_transaction(void); void _stm_activate_transaction(long); +void stm_copy_transactional_to_raw(void *src, void *dst, long size); + /************************************************************/ From noreply at buildbot.pypy.org Thu Feb 16 16:48:39 2012 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 16 Feb 2012 16:48:39 +0100 (CET) Subject: [pypy-commit] pypy stm-gc: Start to refactor transform.py. Message-ID: <20120216154839.A896F82B1F@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-gc Changeset: r52551:276616f00df5 Date: 2012-02-16 16:48 +0100 http://bitbucket.org/pypy/pypy/changeset/276616f00df5/ Log: Start to refactor transform.py. diff --git a/pypy/translator/c/genc.py b/pypy/translator/c/genc.py --- a/pypy/translator/c/genc.py +++ b/pypy/translator/c/genc.py @@ -136,7 +136,6 @@ from pypy.translator.stm import transform transformer = transform.STMTransformer(self.translator) transformer.transform() - log.info("Software Transactional Memory transformation applied") gcpolicyclass = self.get_gcpolicyclass() diff --git a/pypy/translator/stm/transform.py b/pypy/translator/stm/transform.py --- a/pypy/translator/stm/transform.py +++ b/pypy/translator/stm/transform.py @@ -2,6 +2,7 @@ from pypy.objspace.flow.model import Block, Link, checkgraph from pypy.annotation import model as annmodel from pypy.translator.unsimplify import varoftype, copyvar +from pypy.translator.stm.localtracker import StmLocalTracker from pypy.rpython.lltypesystem import lltype, lloperation from pypy.rpython import rclass @@ -9,7 +10,7 @@ ALWAYS_ALLOW_OPERATIONS = set([ 'direct_call', 'force_cast', 'keepalive', 'cast_ptr_to_adr', 'debug_print', 'debug_assert', 'cast_opaque_ptr', 'hint', - 'indirect_call', 'stack_current', + 'indirect_call', 'stack_current', 'gc_stack_bottom', ]) ALWAYS_ALLOW_OPERATIONS |= set(lloperation.enum_tryfold_ops()) @@ -23,27 +24,42 @@ def __init__(self, translator=None): self.translator = translator + self.count_get_local = 0 + self.count_get_nonlocal = 0 + self.count_get_immutable = 0 + self.count_set_local = 0 + self.count_set_nonlocal = 0 + self.count_set_immutable = 0 - def transform(self): ##, entrypointptr): + def transform(self): assert not hasattr(self.translator, 'stm_transformation_applied') -## entrypointgraph = entrypointptr._obj.graph + self.start_log() + self.localtracker = StmLocalTracker(self.translator) for graph in self.translator.graphs: -## self.seen_transaction_boundary = False -## self.seen_gc_stack_bottom = False self.transform_graph(graph) -## if self.seen_transaction_boundary: -## self.add_stm_declare_variable(graph) -## if self.seen_gc_stack_bottom: -## self.add_descriptor_init_stuff(graph) -## self.add_descriptor_init_stuff(entrypointgraph, main=True) + self.localtracker = None self.translator.stm_transformation_applied = True + self.print_logs() + + def start_log(self): + from pypy.translator.c.support import log + log.info("Software Transactional Memory transformation") + + def print_logs(self): + from pypy.translator.c.support import log + log('get*: proven local: %d' % self.count_get_local) + log(' not proven local: %d' % self.count_get_nonlocal) + log(' immutable: %d' % self.count_get_immutable) + log('set*: proven local: %d' % self.count_set_local) + log(' not proven local: %d' % self.count_set_nonlocal) + log(' immutable: %d' % self.count_set_immutable) + 
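The refactoring below funnels the getfield/setfield, getarrayitem/setarrayitem and getinteriorfield/setinteriorfield cases through two helpers, transform_get() and transform_set(), which ask the new StmLocalTracker whether the object being accessed is proven local and keep counters of each outcome. Ignoring the Void and 'raw' cases (which the real helpers handle separately, the latter by turning the transaction inevitable), the decision boils down to something like this standalone sketch (needs_stm_barrier is a hypothetical name, not part of the diff):

    from pypy.objspace.flow.model import Variable

    def needs_stm_barrier(localtracker, op, immutable):
        # immutable fields and arrays are safe to access directly
        if immutable:
            return False
        # objects proven local by the tracker can be read and written in place
        if isinstance(op.args[0], Variable) and localtracker.is_local(op.args[0]):
            return False
        # anything else: reads become stm_get* operations, writes go through
        # stm_writebarrier followed by a bare_* store
        return True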
log.info("Software Transactional Memory transformation applied") def transform_block(self, block): if block.operations == (): return newoperations = [] self.current_block = block - #self.access_directly = set() for i, op in enumerate(block.operations): self.current_op_index = i try: @@ -56,199 +72,105 @@ meth = turn_inevitable_and_proceed setattr(self.__class__, 'stt_' + op.opname, staticmethod(meth)) - res = meth(newoperations, op) - if res is True: - newoperations.append(op) - elif res is False: - turn_inevitable_and_proceed(newoperations, op) - else: - assert res is None + meth(newoperations, op) block.operations = newoperations self.current_block = None - #self.access_directly = None def transform_graph(self, graph): for block in graph.iterblocks(): self.transform_block(block) -## def add_descriptor_init_stuff(self, graph, main=False): -## if main: -## self._add_calls_around(graph, -## _rffi_stm.begin_inevitable_transaction, -## _rffi_stm.commit_transaction) -## self._add_calls_around(graph, -## _rffi_stm.descriptor_init, -## _rffi_stm.descriptor_done) - -## def _add_calls_around(self, graph, f_init, f_done): -## c_init = Constant(f_init, lltype.typeOf(f_init)) -## c_done = Constant(f_done, lltype.typeOf(f_done)) -## # -## block = graph.startblock -## v = varoftype(lltype.Void) -## op = SpaceOperation('direct_call', [c_init], v) -## block.operations.insert(0, op) -## # -## v = copyvar(self.translator.annotator, graph.getreturnvar()) -## extrablock = Block([v]) -## v_none = varoftype(lltype.Void) -## newop = SpaceOperation('direct_call', [c_done], v_none) -## extrablock.operations = [newop] -## extrablock.closeblock(Link([v], graph.returnblock)) -## for block in graph.iterblocks(): -## if block is not extrablock: -## for link in block.exits: -## if link.target is graph.returnblock: -## link.target = extrablock -## checkgraph(graph) - -## def add_stm_declare_variable(self, graph): -## block = graph.startblock -## v = varoftype(lltype.Void) -## op = SpaceOperation('stm_declare_variable', [], v) -## block.operations.insert(0, op) - # ---------- - def stt_getfield(self, newoperations, op): - STRUCT = op.args[0].concretetype.TO + def transform_get(self, newoperations, op, stmopname, immutable=False): if op.result.concretetype is lltype.Void: - op1 = op - elif STRUCT._immutable_field(op.args[1].value): - op1 = op - elif 'stm_access_directly' in STRUCT._hints: - #try: - # immfld = STRUCT._hints['immutable_fields'] - #except KeyError: - # pass - #else: - # rank = immfld._fields.get(op.args[1].value, None) - # if rank is rclass.IR_MUTABLE_OWNED: - # self.access_directly.add(op.result) - op1 = op - elif STRUCT._gckind == 'raw': - turn_inevitable(newoperations, "getfield-raw") - op1 = op - else: - op1 = SpaceOperation('stm_getfield', op.args, op.result) + newoperations.append(op) + return + if op.args[0].concretetype.TO._gckind == 'raw': + turn_inevitable(newoperations, op.opname + '-raw') + newoperations.append(op) + return + if immutable: + self.count_get_immutable += 1 + newoperations.append(op) + return + if isinstance(op.args[0], Variable): + if self.localtracker.is_local(op.args[0]): + self.count_get_local += 1 + newoperations.append(op) + return + self.count_get_nonlocal += 1 + op1 = SpaceOperation(stmopname, op.args, op.result) newoperations.append(op1) - def with_writebarrier(self, newoperations, op): + def transform_set(self, newoperations, op, immutable=False): + if op.args[-1].concretetype is lltype.Void: + newoperations.append(op) + return + if op.args[0].concretetype.TO._gckind == 
'raw': + turn_inevitable(newoperations, op.opname + '-raw') + newoperations.append(op) + return + if immutable: + self.count_set_immutable += 1 + newoperations.append(op) + return + if isinstance(op.args[0], Variable): + if self.localtracker.is_local(op.args[0]): + self.count_set_local += 1 + newoperations.append(op) + return + self.count_set_nonlocal += 1 v_arg = op.args[0] v_local = varoftype(v_arg.concretetype) op0 = SpaceOperation('stm_writebarrier', [v_arg], v_local) newoperations.append(op0) op1 = SpaceOperation('bare_' + op.opname, [v_local] + op.args[1:], op.result) - return op1 + newoperations.append(op1) + + + def stt_getfield(self, newoperations, op): + STRUCT = op.args[0].concretetype.TO + immutable = STRUCT._immutable_field(op.args[1].value) + self.transform_get(newoperations, op, 'stm_getfield', immutable) def stt_setfield(self, newoperations, op): STRUCT = op.args[0].concretetype.TO - if op.args[2].concretetype is lltype.Void: - op1 = op - elif (STRUCT._immutable_field(op.args[1].value) or - 'stm_access_directly' in STRUCT._hints): - op1 = op - elif STRUCT._gckind == 'raw': - turn_inevitable(newoperations, "setfield-raw") - op1 = op - else: - op1 = self.with_writebarrier(newoperations, op) - newoperations.append(op1) + immutable = STRUCT._immutable_field(op.args[1].value) + self.transform_set(newoperations, op, immutable) def stt_getarrayitem(self, newoperations, op): ARRAY = op.args[0].concretetype.TO - if op.result.concretetype is lltype.Void: - op1 = op - elif ARRAY._immutable_field(): - op1 = op - #elif op.args[0] in self.access_directly: - # op1 = op - elif ARRAY._gckind == 'raw': - turn_inevitable(newoperations, "getarrayitem-raw") - op1 = op - else: - op1 = SpaceOperation('stm_getarrayitem', op.args, op.result) - newoperations.append(op1) + immutable = ARRAY._immutable_field() + self.transform_get(newoperations, op, 'stm_getarrayitem', immutable) def stt_setarrayitem(self, newoperations, op): ARRAY = op.args[0].concretetype.TO - if op.args[2].concretetype is lltype.Void: - op1 = op - elif ARRAY._immutable_field(): - op1 = op - #elif op.args[0] in self.access_directly: - # op1 = op - elif ARRAY._gckind == 'raw': - turn_inevitable(newoperations, "setarrayitem-raw") - op1 = op - else: - op1 = self.with_writebarrier(newoperations, op) - newoperations.append(op1) + immutable = ARRAY._immutable_field() + self.transform_set(newoperations, op, immutable) def stt_getinteriorfield(self, newoperations, op): OUTER = op.args[0].concretetype.TO - if op.result.concretetype is lltype.Void: - op1 = op - elif OUTER._immutable_interiorfield(unwraplist(op.args[1:])): - op1 = op - elif OUTER._gckind == 'raw': - turn_inevitable(newoperations, "getinteriorfield-raw") - op1 = op - else: - op1 = SpaceOperation('stm_getinteriorfield', op.args, op.result) - newoperations.append(op1) + immutable = OUTER._immutable_interiorfield(unwraplist(op.args[1:])) + self.transform_get(newoperations, op, 'stm_getinteriorfield',immutable) def stt_setinteriorfield(self, newoperations, op): OUTER = op.args[0].concretetype.TO - if op.args[-1].concretetype is lltype.Void: - op1 = op - elif OUTER._immutable_interiorfield(unwraplist(op.args[1:-1])): - op1 = op - elif OUTER._gckind == 'raw': - turn_inevitable(newoperations, "setinteriorfield-raw") - op1 = op - else: - op1 = self.with_writebarrier(newoperations, op) - newoperations.append(op1) - -## def stt_stm_transaction_boundary(self, newoperations, op): -## self.seen_transaction_boundary = True -## v_result = op.result -## # record in op.args the list of 
variables that are alive across -## # this call -## block = self.current_block -## vars = set() -## for op in block.operations[:self.current_op_index:-1]: -## vars.discard(op.result) -## vars.update(op.args) -## for link in block.exits: -## vars.update(link.args) -## vars.update(link.getextravars()) -## livevars = [v for v in vars if isinstance(v, Variable)] -## newop = SpaceOperation('stm_transaction_boundary', livevars, v_result) -## newoperations.append(newop) + immutable = OUTER._immutable_interiorfield(unwraplist(op.args[1:-1])) + self.transform_set(newoperations, op, immutable) def stt_malloc(self, newoperations, op): flags = op.args[1].value - return flags['flavor'] == 'gc' - - def stt_malloc_varsize(self, newoperations, op): - flags = op.args[1].value - return flags['flavor'] == 'gc' - - stt_malloc_nonmovable = stt_malloc - - def stt_gc_stack_bottom(self, newoperations, op): -## self.seen_gc_stack_bottom = True + if flags['flavor'] == 'gc': + assert self.localtracker.is_local(op.result) + else: + turn_inevitable(newoperations, 'malloc-raw') newoperations.append(op) - #def stt_same_as(self, newoperations, op): - # if op.args[0] in self.access_directly: - # self.access_directly.add(op.result) - # newoperations.append(op) - # - #stt_cast_pointer = stt_same_as + stt_malloc_varsize = stt_malloc + stt_malloc_nonmovable = stt_malloc + stt_malloc_nonmovable_varsize = stt_malloc def transform_graph(graph): From noreply at buildbot.pypy.org Thu Feb 16 16:56:39 2012 From: noreply at buildbot.pypy.org (bivab) Date: Thu, 16 Feb 2012 16:56:39 +0100 (CET) Subject: [pypy-commit] pypy arm-backend-2: (arigo, bivab) remove unused imports Message-ID: <20120216155639.0685B8204C@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: arm-backend-2 Changeset: r52552:0f7eb7da93a4 Date: 2012-02-16 16:51 +0100 http://bitbucket.org/pypy/pypy/changeset/0f7eb7da93a4/ Log: (arigo, bivab) remove unused imports diff --git a/pypy/translator/c/extfunc.py b/pypy/translator/c/extfunc.py --- a/pypy/translator/c/extfunc.py +++ b/pypy/translator/c/extfunc.py @@ -5,7 +5,6 @@ from pypy.rpython.lltypesystem.rstr import STR, mallocstr from pypy.rpython.lltypesystem import rstr from pypy.rpython.lltypesystem import rlist -from pypy.rpython.module import ll_time, ll_os # table of functions hand-written in src/ll_*.h # Note about *.im_func: The annotator and the rtyper expect direct From noreply at buildbot.pypy.org Thu Feb 16 16:56:40 2012 From: noreply at buildbot.pypy.org (bivab) Date: Thu, 16 Feb 2012 16:56:40 +0100 (CET) Subject: [pypy-commit] pypy arm-backend-2: (arigo, bivab) mark lseek and ftruncate external definitions as macros Message-ID: <20120216155640.3573B8204C@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: arm-backend-2 Changeset: r52553:0fa0b47058b2 Date: 2012-02-16 16:52 +0100 http://bitbucket.org/pypy/pypy/changeset/0fa0b47058b2/ Log: (arigo, bivab) mark lseek and ftruncate external definitions as macros diff --git a/pypy/rpython/module/ll_os.py b/pypy/rpython/module/ll_os.py --- a/pypy/rpython/module/ll_os.py +++ b/pypy/rpython/module/ll_os.py @@ -963,7 +963,7 @@ os_lseek = self.llexternal(funcname, [rffi.INT, rffi.LONGLONG, rffi.INT], - rffi.LONGLONG) + rffi.LONGLONG, macro=True) def lseek_llimpl(fd, pos, how): how = fix_seek_arg(how) @@ -988,7 +988,7 @@ @registering_if(os, 'ftruncate') def register_os_ftruncate(self): os_ftruncate = self.llexternal('ftruncate', - [rffi.INT, rffi.LONGLONG], rffi.INT) + [rffi.INT, rffi.LONGLONG], rffi.INT, macro=True) def ftruncate_llimpl(fd, length): 
res = rffi.cast(rffi.LONG, From noreply at buildbot.pypy.org Thu Feb 16 16:56:41 2012 From: noreply at buildbot.pypy.org (bivab) Date: Thu, 16 Feb 2012 16:56:41 +0100 (CET) Subject: [pypy-commit] pypy arm-backend-2: (arigo, bivab) Add a comment and assert that sizeof(off_t) is long long Message-ID: <20120216155641.660A88204C@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: arm-backend-2 Changeset: r52554:20a1b70bd4ff Date: 2012-02-16 16:54 +0100 http://bitbucket.org/pypy/pypy/changeset/20a1b70bd4ff/ Log: (arigo, bivab) Add a comment and assert that sizeof(off_t) is long long diff --git a/pypy/rpython/module/ll_os.py b/pypy/rpython/module/ll_os.py --- a/pypy/rpython/module/ll_os.py +++ b/pypy/rpython/module/ll_os.py @@ -180,10 +180,15 @@ ('tms_cutime', rffi.INT), ('tms_cstime', rffi.INT)]) - GID_T = platform.SimpleType('gid_t',rffi.INT) + GID_T = platform.SimpleType('gid_t', rffi.INT) #TODO right now is used only in getgroups, may need to update other #functions like setgid + # For now we require off_t to be the same size as LONGLONG, which is the + # interface required by callers of functions that thake an argument of type + # off_t + OFF_T_SIZE = platform.SizeOf('off_t') + SEEK_SET = platform.DefinedConstantInteger('SEEK_SET') SEEK_CUR = platform.DefinedConstantInteger('SEEK_CUR') SEEK_END = platform.DefinedConstantInteger('SEEK_END') @@ -197,6 +202,7 @@ def __init__(self): self.configure(CConfig) + assert self.OFF_T_SIZE == rffi.sizeof(rffi.LONGLONG) if hasattr(os, 'getpgrp'): self.GETPGRP_HAVE_ARG = platform.checkcompiles( From noreply at buildbot.pypy.org Thu Feb 16 17:48:19 2012 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 16 Feb 2012 17:48:19 +0100 (CET) Subject: [pypy-commit] pypy stm-gc: Add a separate phase pre-inserting 'stm_writebarrier'. This phase Message-ID: <20120216164819.87B038204C@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-gc Changeset: r52555:becc7ce5e90c Date: 2012-02-16 17:48 +0100 http://bitbucket.org/pypy/pypy/changeset/becc7ce5e90c/ Log: Add a separate phase pre-inserting 'stm_writebarrier'. This phase should insert enough write barriers for all cases. The idea is to to localtracking on this, and then kill the write barriers that are found to apply on already-local objects. 
diff --git a/pypy/translator/stm/test/test_transform.py b/pypy/translator/stm/test/test_transform.py --- a/pypy/translator/stm/test/test_transform.py +++ b/pypy/translator/stm/test/test_transform.py @@ -1,268 +1,41 @@ -import py -py.test.skip("rewrite or kill") -from pypy.rpython.lltypesystem import lltype, llmemory, rstr -from pypy.rpython.test.test_llinterp import get_interpreter +from pypy.translator.stm.transform import STMTransformer +from pypy.translator.stm.transform import pre_insert_stm_writebarrier +from pypy.translator.translator import TranslationContext, graphof +from pypy.conftest import option from pypy.objspace.flow.model import summary -from pypy.translator.stm.llstminterp import eval_stm_graph -from pypy.translator.stm.transform import transform_graph -from pypy.conftest import option -def eval_stm_func(func, arguments, stm_mode="regular_transaction", - final_stm_mode="regular_transaction"): - interp, graph = get_interpreter(func, arguments) - transform_graph(graph) - #if option.view: - # graph.show() - return eval_stm_graph(interp, graph, arguments, stm_mode=stm_mode, - final_stm_mode=final_stm_mode, - automatic_promotion=True) +def get_graph(func, sig): + t = TranslationContext() + t.buildannotator().build_types(func, sig) + t.buildrtyper().specialize() + if option.view: + t.view() + return graphof(t, func) -# ____________________________________________________________ -def test_simple(): - S = lltype.GcStruct('S', ('x', lltype.Signed)) - p = lltype.malloc(S, immortal=True) - p.x = 42 - def func(p): - return p.x - interp, graph = get_interpreter(func, [p]) - transform_graph(graph) - assert summary(graph) == {'stm_getfield': 1} - res = eval_stm_graph(interp, graph, [p], stm_mode="regular_transaction") - assert res == 42 - -def test_setfield(): - S = lltype.GcStruct('S', ('x', lltype.Signed)) - p = lltype.malloc(S, immortal=True) - p.x = 42 - def func(p): - p.x = 43 - interp, graph = get_interpreter(func, [p]) - transform_graph(graph) - assert summary(graph) == {'stm_setfield': 1} - eval_stm_graph(interp, graph, [p], stm_mode="regular_transaction") - -def test_immutable_field(): - S = lltype.GcStruct('S', ('x', lltype.Signed), hints = {'immutable': True}) - p = lltype.malloc(S, immortal=True) - p.x = 42 - def func(p): - return p.x - interp, graph = get_interpreter(func, [p]) - transform_graph(graph) - assert summary(graph) == {'getfield': 1} - res = eval_stm_graph(interp, graph, [p], stm_mode="regular_transaction") - assert res == 42 - -def test_void_field(): - S = lltype.GcStruct('S', ('v', lltype.Void)) - p = lltype.malloc(S, immortal=True) - def func(p): - p.v = None - return p.v - interp, graph = get_interpreter(func, [p]) - transform_graph(graph) - assert summary(graph) == {'getfield': 1, 'setfield': 1} - -def test_getarraysize(): - A = lltype.GcArray(lltype.Signed) - p = lltype.malloc(A, 100, immortal=True) - p[42] = 666 - def func(p): - return len(p) - interp, graph = get_interpreter(func, [p]) - transform_graph(graph) - assert summary(graph) == {'getarraysize': 1} - res = eval_stm_graph(interp, graph, [p], stm_mode="regular_transaction") - assert res == 100 - -def test_getarrayitem(): - A = lltype.GcArray(lltype.Signed) - p = lltype.malloc(A, 100, immortal=True) - p[42] = 666 - def func(p): - return p[42] - interp, graph = get_interpreter(func, [p]) - transform_graph(graph) - assert summary(graph) == {'stm_getarrayitem': 1} - res = eval_stm_graph(interp, graph, [p], stm_mode="regular_transaction") - assert res == 666 - -def test_setarrayitem(): - A = 
lltype.GcArray(lltype.Signed) - p = lltype.malloc(A, 100, immortal=True) - p[42] = 666 - def func(p): - p[42] = 676 - interp, graph = get_interpreter(func, [p]) - transform_graph(graph) - assert summary(graph) == {'stm_setarrayitem': 1} - eval_stm_graph(interp, graph, [p], stm_mode="regular_transaction") - -STRLIKE = lltype.GcStruct('STRLIKE', # no 'immutable' in this version - ('chars', lltype.Array(lltype.Char))) - -def test_getinteriorfield(): - p = lltype.malloc(STRLIKE, 100, immortal=True) - p.chars[42] = 'X' - def func(p): - return p.chars[42] - interp, graph = get_interpreter(func, [p]) - transform_graph(graph) - assert summary(graph) == {'stm_getinteriorfield': 1} - res = eval_stm_graph(interp, graph, [p], stm_mode="regular_transaction") - assert res == 'X' - -def test_setinteriorfield(): - p = lltype.malloc(STRLIKE, 100, immortal=True) - def func(p): - p.chars[42] = 'Y' - interp, graph = get_interpreter(func, [p]) - transform_graph(graph) - assert summary(graph) == {'stm_setinteriorfield': 1} - res = eval_stm_graph(interp, graph, [p], stm_mode="regular_transaction") - -def test_getstrchar(): - p = lltype.malloc(rstr.STR, 100, immortal=True) - p.chars[42] = 'X' - def func(p): - return p.chars[42] - interp, graph = get_interpreter(func, [p]) - transform_graph(graph) - assert summary(graph) == {'getinteriorfield': 1} - res = eval_stm_graph(interp, graph, [p], stm_mode="regular_transaction") - assert res == 'X' - -def test_setstrchar(): - p = lltype.malloc(rstr.STR, 100, immortal=True) - def func(p): - p.chars[42] = 'Y' - interp, graph = get_interpreter(func, [p]) - transform_graph(graph) - assert summary(graph) == {'setinteriorfield': 1} - res = eval_stm_graph(interp, graph, [p], stm_mode="regular_transaction") - -def test_getfield_setfield_access_directly(): - class P: - x = 42 - _stm_access_directly_ = True - def func(): - p = P() - p.x += 1 - interp, graph = get_interpreter(func, []) - transform_graph(graph) - assert summary(graph) == {'malloc': 1, 'cast_pointer': 1, - 'getfield': 1, 'setfield': 3, 'int_add': 1} - res = eval_stm_graph(interp, graph, [], stm_mode="regular_transaction") - -def test_arrayitem_access_directly(): - class P1: +def test_pre_insert_stm_writebarrier(): + class X: pass - class P2: - _stm_access_directly_ = True - class P3: - _stm_access_directly_ = True - _immutable_fields_ = ['lst->...'] - for P in [P1, P2, P2]: - def func(n): - p = P() - p.lst = [0] - p.lst[0] = n - return p.lst[0] - interp, graph = get_interpreter(func, [42]) - # - from pypy.translator.backendopt.inline import auto_inline_graphs - translator = interp.typer.annotator.translator - auto_inline_graphs(translator, translator.graphs, 16.0) - if option.view: - graph.show() - # - transform_graph(graph) - assert ('stm_getfield' in summary(graph)) == (P is P1) - assert ('stm_setfield' in summary(graph)) == (P is P1) - assert ('stm_getarrayitem' in summary(graph)) == (P is not P3) - #assert ('stm_setarrayitem' in summary(graph)) == (P is not P3) -- xxx - res = eval_stm_graph(interp, graph, [42], - stm_mode="regular_transaction") - assert res == 42 - -def test_setfield_freshly_allocated(): - py.test.skip("XXX not implemented") - S = lltype.GcStruct('S', ('x', lltype.Signed)) - def func(n): - p = lltype.malloc(S) - p.x = n - interp, graph = get_interpreter(func, [42]) - transform_graph(graph) - assert summary(graph) == {'malloc': 1, 'setfield': 1} - res = eval_stm_graph(interp, graph, [42], stm_mode="regular_transaction") - -def test_unsupported_operation(): - def func(n): - n += 1 + class Y(X): + 
pass + class Z(X): + pass + def f1(n): if n > 5: - p = llmemory.raw_malloc(llmemory.sizeof(lltype.Signed)) - llmemory.raw_free(p) - return n - res = eval_stm_func(func, [3], final_stm_mode="regular_transaction") - assert res == 4 - res = eval_stm_func(func, [13], final_stm_mode="inevitable_transaction") - assert res == 14 - -def test_supported_malloc(): - S = lltype.GcStruct('S', ('x', lltype.Signed)) # GC structure - def func(): - lltype.malloc(S) - eval_stm_func(func, [], final_stm_mode="regular_transaction") - -def test_supported_malloc_varsize(): - A = lltype.GcArray(lltype.Signed) - def func(): - lltype.malloc(A, 5) - eval_stm_func(func, [], final_stm_mode="regular_transaction") - -def test_unsupported_malloc(): - S = lltype.Struct('S', ('x', lltype.Signed)) # non-GC structure - def func(): - lltype.malloc(S, flavor='raw') - eval_stm_func(func, [], final_stm_mode="inevitable_transaction") -test_unsupported_malloc.dont_track_allocations = True - -def test_unsupported_getfield_raw(): - S = lltype.Struct('S', ('x', lltype.Signed)) - p = lltype.malloc(S, immortal=True) - p.x = 42 - def func(p): - return p.x - interp, graph = get_interpreter(func, [p]) - transform_graph(graph) - assert summary(graph) == {'stm_become_inevitable': 1, 'getfield': 1} - res = eval_stm_graph(interp, graph, [p], stm_mode="regular_transaction", - final_stm_mode="inevitable_transaction") - assert res == 42 - -def test_unsupported_setfield_raw(): - S = lltype.Struct('S', ('x', lltype.Signed)) - p = lltype.malloc(S, immortal=True) - p.x = 42 - def func(p): - p.x = 43 - interp, graph = get_interpreter(func, [p]) - transform_graph(graph) - assert summary(graph) == {'stm_become_inevitable': 1, 'setfield': 1} - eval_stm_graph(interp, graph, [p], stm_mode="regular_transaction", - final_stm_mode="inevitable_transaction") - -def test_unsupported_getarrayitem_raw(): - A = lltype.Array(lltype.Signed) - p = lltype.malloc(A, 5, immortal=True) - p[3] = 42 - def func(p): - return p[3] - interp, graph = get_interpreter(func, [p]) - transform_graph(graph) - assert summary(graph) == {'stm_become_inevitable': 1, 'getarrayitem': 1} - res = eval_stm_graph(interp, graph, [p], stm_mode="regular_transaction", - final_stm_mode="inevitable_transaction") - assert res == 42 + x = Z() + else: + x = Y() + x.n = n + if n > 5: + assert isinstance(x, Z) + x.n = n + 2 + x.sub = n + 1 + # + graph = get_graph(f1, [int]) + pre_insert_stm_writebarrier(graph) + # weak test: check that there are exactly two stm_writebarrier inserted. + # one should be for 'x.n = n', and one should cover both field assignments + # to the Z instance. 
+ sum = summary(graph) + assert sum['stm_writebarrier'] == 2 diff --git a/pypy/translator/stm/transform.py b/pypy/translator/stm/transform.py --- a/pypy/translator/stm/transform.py +++ b/pypy/translator/stm/transform.py @@ -34,6 +34,8 @@ def transform(self): assert not hasattr(self.translator, 'stm_transformation_applied') self.start_log() + for graph in self.translator.graphs: + pre_insert_stm_writebarrier(graph) self.localtracker = StmLocalTracker(self.translator) for graph in self.translator.graphs: self.transform_graph(graph) @@ -82,7 +84,9 @@ # ---------- - def transform_get(self, newoperations, op, stmopname, immutable=False): + # ---------- + + def transform_get(self, newoperations, op, stmopname): if op.result.concretetype is lltype.Void: newoperations.append(op) return @@ -90,7 +94,7 @@ turn_inevitable(newoperations, op.opname + '-raw') newoperations.append(op) return - if immutable: + if is_immutable(op): self.count_get_immutable += 1 newoperations.append(op) return @@ -103,7 +107,7 @@ op1 = SpaceOperation(stmopname, op.args, op.result) newoperations.append(op1) - def transform_set(self, newoperations, op, immutable=False): + def transform_set(self, newoperations, op): if op.args[-1].concretetype is lltype.Void: newoperations.append(op) return @@ -111,7 +115,7 @@ turn_inevitable(newoperations, op.opname + '-raw') newoperations.append(op) return - if immutable: + if is_immutable(op): self.count_set_immutable += 1 newoperations.append(op) return @@ -128,37 +132,26 @@ op1 = SpaceOperation('bare_' + op.opname, [v_local] + op.args[1:], op.result) newoperations.append(op1) + import pdb; pdb.set_trace() def stt_getfield(self, newoperations, op): - STRUCT = op.args[0].concretetype.TO - immutable = STRUCT._immutable_field(op.args[1].value) - self.transform_get(newoperations, op, 'stm_getfield', immutable) + self.transform_get(newoperations, op, 'stm_getfield') def stt_setfield(self, newoperations, op): - STRUCT = op.args[0].concretetype.TO - immutable = STRUCT._immutable_field(op.args[1].value) - self.transform_set(newoperations, op, immutable) + self.transform_set(newoperations, op) def stt_getarrayitem(self, newoperations, op): - ARRAY = op.args[0].concretetype.TO - immutable = ARRAY._immutable_field() - self.transform_get(newoperations, op, 'stm_getarrayitem', immutable) + self.transform_get(newoperations, op, 'stm_getarrayitem') def stt_setarrayitem(self, newoperations, op): - ARRAY = op.args[0].concretetype.TO - immutable = ARRAY._immutable_field() - self.transform_set(newoperations, op, immutable) + self.transform_set(newoperations, op) def stt_getinteriorfield(self, newoperations, op): - OUTER = op.args[0].concretetype.TO - immutable = OUTER._immutable_interiorfield(unwraplist(op.args[1:])) - self.transform_get(newoperations, op, 'stm_getinteriorfield',immutable) + self.transform_get(newoperations, op, 'stm_getinteriorfield') def stt_setinteriorfield(self, newoperations, op): - OUTER = op.args[0].concretetype.TO - immutable = OUTER._immutable_interiorfield(unwraplist(op.args[1:-1])) - self.transform_set(newoperations, op, immutable) + self.transform_set(newoperations, op) def stt_malloc(self, newoperations, op): flags = op.args[1].value @@ -196,3 +189,84 @@ yield None # unknown else: raise AssertionError(v) + +def is_immutable(op): + if op.opname in ('getfield', 'setfield'): + STRUCT = op.args[0].concretetype.TO + return STRUCT._immutable_field(op.args[1].value) + if op.opname in ('getarrayitem', 'setarrayitem'): + ARRAY = op.args[0].concretetype.TO + return 
ARRAY._immutable_field() + if op.opname == 'getinteriorfield': + OUTER = op.args[0].concretetype.TO + return OUTER._immutable_interiorfield(unwraplist(op.args[1:])) + if op.opname == 'setinteriorfield': + OUTER = op.args[0].concretetype.TO + return OUTER._immutable_interiorfield(unwraplist(op.args[1:-1])) + raise AssertionError(op) + +def pre_insert_stm_writebarrier(graph): + # put a number of 'stm_writebarrier' operations, one before each + # relevant 'set*'. Then try to avoid the situation where we have + # one variable on which we do 'stm_writebarrier', but there are + # also other variables that contain the same pointer, e.g. casted + # to a different precise type. + from pypy.translator.stm.gcsource import COPIES_POINTER + # + def emit(op): + for v1 in op.args: + if v1 in renames: + # one argument at least is in 'renames', so we need + # to make a new SpaceOperation + args1 = [renames.get(v, v) for v in op.args] + op1 = SpaceOperation(op.opname, args1, op.result) + newoperations.append(op1) + return + # no argument is in 'renames', so we can just emit the op + newoperations.append(op) + # + for block in graph.iterblocks(): + if block.operations == (): + continue + # + # figure out the variables on which we want an stm_writebarrier + copies = {} + wants_a_writebarrier = {} + for op in block.operations: + if op.opname in COPIES_POINTER: + assert len(op.args) == 1 + copies[op.result] = op + elif (op.opname in ('setfield', 'setarrayitem', + 'setinteriorfield') and + op.args[-1].concretetype is not lltype.Void and + op.args[0].concretetype.TO._gckind == 'gc' and + not is_immutable(op)): + wants_a_writebarrier.setdefault(op.args[0], op) + # + # back-propagate the write barrier locations through the cast_pointers + writebarrier_locations = {} + for v, op in wants_a_writebarrier.items(): + while v in copies: + op = copies[v] + v = op.args[0] + protect = writebarrier_locations.setdefault(op, set()) + protect.add(v) + # + # now insert the 'stm_writebarrier's + renames = {} # {original-var: renamed-var} + newoperations = [] + for op in block.operations: + locs = writebarrier_locations.get(op, None) + if locs: + for v1 in locs: + if v1 not in renames: + v2 = varoftype(v1.concretetype) + op1 = SpaceOperation('stm_writebarrier', [v1], v2) + emit(op1) + renames[v1] = v2 + emit(op) + # + if renames: + for link in block.exits: + link.args = [renames.get(v, v) for v in link.args] + block.operations = newoperations From noreply at buildbot.pypy.org Thu Feb 16 17:55:19 2012 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 16 Feb 2012 17:55:19 +0100 (CET) Subject: [pypy-commit] pypy stm-gc: Finish it. test_ztranslated passes. Message-ID: <20120216165519.05F678204C@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-gc Changeset: r52556:1aa220474865 Date: 2012-02-16 17:55 +0100 http://bitbucket.org/pypy/pypy/changeset/1aa220474865/ Log: Finish it. test_ztranslated passes. 
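The hunks below teach the local tracker that the result of 'stm_writebarrier' is itself local, and let the transformer drop barriers whose argument is already proven local. A toy stand-in for that last rewrite (assumed names and tuple-based ops as in the sketch above, not the real StmLocalTracker interface):

    def kill_local_barriers(block_ops, is_local):
        # 'is_local' stands in for StmLocalTracker.is_local(); ops are
        # (opname, args, result) tuples.
        newops = []
        for op in block_ops:
            opname, args, result = op
            if opname == 'stm_writebarrier' and is_local(args[0]):
                # writing to a freshly allocated (local) object needs no
                # barrier: keep only the variable renaming
                newops.append(('same_as', args, result))
            else:
                newops.append(op)
        return newops

    block = [('malloc', ['S'], 'p'),
             ('stm_writebarrier', ['p'], 'p_wb'),
             ('setfield', ['p_wb', 'x', 'v0'], None)]
    print kill_local_barriers(block, lambda v: v == 'p')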
diff --git a/pypy/translator/stm/localtracker.py b/pypy/translator/stm/localtracker.py --- a/pypy/translator/stm/localtracker.py +++ b/pypy/translator/stm/localtracker.py @@ -5,6 +5,7 @@ RETURNS_LOCAL_POINTER = set([ 'malloc', 'malloc_varsize', 'malloc_nonmovable', 'malloc_nonmovable_varsize', + 'stm_writebarrier', ]) diff --git a/pypy/translator/stm/transform.py b/pypy/translator/stm/transform.py --- a/pypy/translator/stm/transform.py +++ b/pypy/translator/stm/transform.py @@ -28,8 +28,8 @@ self.count_get_nonlocal = 0 self.count_get_immutable = 0 self.count_set_local = 0 - self.count_set_nonlocal = 0 self.count_set_immutable = 0 + self.count_write_barrier = 0 def transform(self): assert not hasattr(self.translator, 'stm_transformation_applied') @@ -53,8 +53,8 @@ log(' not proven local: %d' % self.count_get_nonlocal) log(' immutable: %d' % self.count_get_immutable) log('set*: proven local: %d' % self.count_set_local) - log(' not proven local: %d' % self.count_set_nonlocal) log(' immutable: %d' % self.count_set_immutable) + log(' write barriers: %d' % self.count_write_barrier) log.info("Software Transactional Memory transformation applied") def transform_block(self, block): @@ -84,8 +84,6 @@ # ---------- - # ---------- - def transform_get(self, newoperations, op, stmopname): if op.result.concretetype is lltype.Void: newoperations.append(op) @@ -119,20 +117,12 @@ self.count_set_immutable += 1 newoperations.append(op) return - if isinstance(op.args[0], Variable): - if self.localtracker.is_local(op.args[0]): - self.count_set_local += 1 - newoperations.append(op) - return - self.count_set_nonlocal += 1 - v_arg = op.args[0] - v_local = varoftype(v_arg.concretetype) - op0 = SpaceOperation('stm_writebarrier', [v_arg], v_local) - newoperations.append(op0) - op1 = SpaceOperation('bare_' + op.opname, [v_local] + op.args[1:], - op.result) - newoperations.append(op1) - import pdb; pdb.set_trace() + # this is not really a transformation, but just an assertion that + # it work on local objects. This should be ensured by + # pre_insert_stm_writebarrier(). + assert self.localtracker.is_local(op.args[0]) + self.count_set_local += 1 + newoperations.append(op) def stt_getfield(self, newoperations, op): @@ -153,6 +143,14 @@ def stt_setinteriorfield(self, newoperations, op): self.transform_set(newoperations, op) + def stt_stm_writebarrier(self, newoperations, op): + if (isinstance(op.args[0], Variable) and + self.localtracker.is_local(op.args[0])): + op = SpaceOperation('same_as', op.args, op.result) + else: + self.count_write_barrier += 1 + newoperations.append(op) + def stt_malloc(self, newoperations, op): flags = op.args[1].value if flags['flavor'] == 'gc': From noreply at buildbot.pypy.org Thu Feb 16 18:17:46 2012 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 16 Feb 2012 18:17:46 +0100 (CET) Subject: [pypy-commit] pypy stm-gc: Improve the logic. Message-ID: <20120216171746.676C58204C@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-gc Changeset: r52557:c949766e518c Date: 2012-02-16 18:17 +0100 http://bitbucket.org/pypy/pypy/changeset/c949766e518c/ Log: Improve the logic. 
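The improvement below back-propagates the barrier locations through pointer-copying operations, so that a write done through a cast_pointer still places the single barrier on the original variable, and it also lets that barrier cover the getfields done on the same variable. The back-propagation step in toy form, with an assumed 'copies' mapping in place of the real COPIES_POINTER analysis:

    def barrier_location(var, copies):
        # 'copies' maps the result of a pointer-copying operation
        # (e.g. cast_pointer) back to its argument
        while var in copies:
            var = copies[var]
        return var

    copies = {'y': 'x', 'z': 'y'}    # y = cast_pointer(x); z = cast_pointer(y)
    print barrier_location('z', copies)    # the barrier goes on x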
diff --git a/pypy/translator/stm/test/test_transform.py b/pypy/translator/stm/test/test_transform.py --- a/pypy/translator/stm/test/test_transform.py +++ b/pypy/translator/stm/test/test_transform.py @@ -31,11 +31,14 @@ assert isinstance(x, Z) x.n = n + 2 x.sub = n + 1 + x.n *= 2 # graph = get_graph(f1, [int]) pre_insert_stm_writebarrier(graph) + if option.view: + graph.show() # weak test: check that there are exactly two stm_writebarrier inserted. # one should be for 'x.n = n', and one should cover both field assignments # to the Z instance. sum = summary(graph) - assert sum['stm_writebarrier'] == 2 + assert sum['stm_writebarrier'] == 3 diff --git a/pypy/translator/stm/transform.py b/pypy/translator/stm/transform.py --- a/pypy/translator/stm/transform.py +++ b/pypy/translator/stm/transform.py @@ -227,36 +227,53 @@ if block.operations == (): continue # - # figure out the variables on which we want an stm_writebarrier + # figure out the variables on which we want an stm_writebarrier; + # also track the getfields, on which we don't want a write barrier + # but which are still recorded in the dict. copies = {} wants_a_writebarrier = {} for op in block.operations: if op.opname in COPIES_POINTER: assert len(op.args) == 1 copies[op.result] = op + elif (op.opname in ('getfield', 'getarrayitem', + 'getinteriorfield') and + op.result.concretetype is not lltype.Void and + op.args[0].concretetype.TO._gckind == 'gc' and + not is_immutable(op)): + wants_a_writebarrier.setdefault(op, False) elif (op.opname in ('setfield', 'setarrayitem', 'setinteriorfield') and op.args[-1].concretetype is not lltype.Void and op.args[0].concretetype.TO._gckind == 'gc' and not is_immutable(op)): - wants_a_writebarrier.setdefault(op.args[0], op) + wants_a_writebarrier[op] = True # - # back-propagate the write barrier locations through the cast_pointers + # back-propagate the write barrier's True/False locations through + # the cast_pointers writebarrier_locations = {} - for v, op in wants_a_writebarrier.items(): - while v in copies: - op = copies[v] - v = op.args[0] - protect = writebarrier_locations.setdefault(op, set()) - protect.add(v) + for op, wants in wants_a_writebarrier.items(): + while op.args[0] in copies: + op = copies[op.args[0]] + if op in writebarrier_locations: + wants |= writebarrier_locations[op] + writebarrier_locations[op] = wants + # + # to back-propagate the locations even more, if it comes before a + # getfield(), we need the following set + writebarrier_vars = set() + for op, wants in writebarrier_locations.items(): + if wants: + writebarrier_vars.add(op.args[0]) # # now insert the 'stm_writebarrier's renames = {} # {original-var: renamed-var} newoperations = [] for op in block.operations: - locs = writebarrier_locations.get(op, None) - if locs: - for v1 in locs: + if op in writebarrier_locations: + wants = writebarrier_locations[op] + if wants or op.args[0] in writebarrier_vars: + v1 = op.args[0] if v1 not in renames: v2 = varoftype(v1.concretetype) op1 = SpaceOperation('stm_writebarrier', [v1], v2) From noreply at buildbot.pypy.org Thu Feb 16 19:42:20 2012 From: noreply at buildbot.pypy.org (wlav) Date: Thu, 16 Feb 2012 19:42:20 +0100 (CET) Subject: [pypy-commit] pypy reflex-support: merge default into branch Message-ID: <20120216184220.65DA88204C@wyvern.cs.uni-duesseldorf.de> Author: Wim Lavrijsen Branch: reflex-support Changeset: r52558:37ca28211ffc Date: 2012-02-14 15:45 -0800 http://bitbucket.org/pypy/pypy/changeset/37ca28211ffc/ Log: merge default into branch diff --git 
a/ctypes_configure/cbuild.py b/ctypes_configure/cbuild.py --- a/ctypes_configure/cbuild.py +++ b/ctypes_configure/cbuild.py @@ -206,8 +206,9 @@ cfiles += eci.separate_module_files include_dirs = list(eci.include_dirs) library_dirs = list(eci.library_dirs) - if sys.platform == 'darwin': # support Fink & Darwinports - for s in ('/sw/', '/opt/local/'): + if (sys.platform == 'darwin' or # support Fink & Darwinports + sys.platform.startswith('freebsd')): + for s in ('/sw/', '/opt/local/', '/usr/local/'): if s + 'include' not in include_dirs and \ os.path.exists(s + 'include'): include_dirs.append(s + 'include') @@ -380,9 +381,9 @@ self.link_extra += ['-pthread'] if sys.platform == 'win32': self.link_extra += ['/DEBUG'] # generate .pdb file - if sys.platform == 'darwin': - # support Fink & Darwinports - for s in ('/sw/', '/opt/local/'): + if (sys.platform == 'darwin' or # support Fink & Darwinports + sys.platform.startswith('freebsd')): + for s in ('/sw/', '/opt/local/', '/usr/local/'): if s + 'include' not in self.include_dirs and \ os.path.exists(s + 'include'): self.include_dirs.append(s + 'include') @@ -395,7 +396,6 @@ self.outputfilename = py.path.local(cfilenames[0]).new(ext=ext) else: self.outputfilename = py.path.local(outputfilename) - self.eci = eci def build(self, noerr=False): basename = self.outputfilename.new(ext='') @@ -436,7 +436,7 @@ old = cfile.dirpath().chdir() try: res = compiler.compile([cfile.basename], - include_dirs=self.eci.include_dirs, + include_dirs=self.include_dirs, extra_preargs=self.compile_extra) assert len(res) == 1 cobjfile = py.path.local(res[0]) @@ -445,9 +445,9 @@ finally: old.chdir() compiler.link_executable(objects, str(self.outputfilename), - libraries=self.eci.libraries, + libraries=self.libraries, extra_preargs=self.link_extra, - library_dirs=self.eci.library_dirs) + library_dirs=self.library_dirs) def build_executable(*args, **kwds): noerr = kwds.pop('noerr', False) diff --git a/lib-python/modified-2.7/UserDict.py b/lib-python/modified-2.7/UserDict.py --- a/lib-python/modified-2.7/UserDict.py +++ b/lib-python/modified-2.7/UserDict.py @@ -85,8 +85,12 @@ def __iter__(self): return iter(self.data) -import _abcoll -_abcoll.MutableMapping.register(IterableUserDict) +try: + import _abcoll +except ImportError: + pass # e.g. 
no '_weakref' module on this pypy +else: + _abcoll.MutableMapping.register(IterableUserDict) class DictMixin: diff --git a/lib_pypy/_subprocess.py b/lib_pypy/_subprocess.py --- a/lib_pypy/_subprocess.py +++ b/lib_pypy/_subprocess.py @@ -87,7 +87,7 @@ # Now the _subprocess module implementation -from ctypes import c_int as _c_int, byref as _byref +from ctypes import c_int as _c_int, byref as _byref, WinError as _WinError class _handle: def __init__(self, handle): @@ -116,7 +116,7 @@ res = _CreatePipe(_byref(read), _byref(write), None, size) if not res: - raise WindowsError("Error") + raise _WinError() return _handle(read.value), _handle(write.value) @@ -132,7 +132,7 @@ access, inherit, options) if not res: - raise WindowsError("Error") + raise _WinError() return _handle(target.value) DUPLICATE_SAME_ACCESS = 2 @@ -165,7 +165,7 @@ start_dir, _byref(si), _byref(pi)) if not res: - raise WindowsError("Error") + raise _WinError() return _handle(pi.hProcess), _handle(pi.hThread), pi.dwProcessID, pi.dwThreadID STARTF_USESHOWWINDOW = 0x001 @@ -178,7 +178,7 @@ res = _WaitForSingleObject(int(handle), milliseconds) if res < 0: - raise WindowsError("Error") + raise _WinError() return res INFINITE = 0xffffffff @@ -190,7 +190,7 @@ res = _GetExitCodeProcess(int(handle), _byref(code)) if not res: - raise WindowsError("Error") + raise _WinError() return code.value @@ -198,7 +198,7 @@ res = _TerminateProcess(int(handle), exitcode) if not res: - raise WindowsError("Error") + raise _WinError() def GetStdHandle(stdhandle): res = _GetStdHandle(stdhandle) diff --git a/lib_pypy/ctypes_config_cache/pyexpat.ctc.py b/lib_pypy/ctypes_config_cache/pyexpat.ctc.py deleted file mode 100644 --- a/lib_pypy/ctypes_config_cache/pyexpat.ctc.py +++ /dev/null @@ -1,45 +0,0 @@ -""" -'ctypes_configure' source for pyexpat.py. -Run this to rebuild _pyexpat_cache.py. 
-""" - -import ctypes -from ctypes import c_char_p, c_int, c_void_p, c_char -from ctypes_configure import configure -import dumpcache - - -class CConfigure: - _compilation_info_ = configure.ExternalCompilationInfo( - includes = ['expat.h'], - libraries = ['expat'], - pre_include_lines = [ - '#define XML_COMBINED_VERSION (10000*XML_MAJOR_VERSION+100*XML_MINOR_VERSION+XML_MICRO_VERSION)'], - ) - - XML_Char = configure.SimpleType('XML_Char', c_char) - XML_COMBINED_VERSION = configure.ConstantInteger('XML_COMBINED_VERSION') - for name in ['XML_PARAM_ENTITY_PARSING_NEVER', - 'XML_PARAM_ENTITY_PARSING_UNLESS_STANDALONE', - 'XML_PARAM_ENTITY_PARSING_ALWAYS']: - locals()[name] = configure.ConstantInteger(name) - - XML_Encoding = configure.Struct('XML_Encoding',[ - ('data', c_void_p), - ('convert', c_void_p), - ('release', c_void_p), - ('map', c_int * 256)]) - XML_Content = configure.Struct('XML_Content',[ - ('numchildren', c_int), - ('children', c_void_p), - ('name', c_char_p), - ('type', c_int), - ('quant', c_int), - ]) - # this is insanely stupid - XML_FALSE = configure.ConstantInteger('XML_FALSE') - XML_TRUE = configure.ConstantInteger('XML_TRUE') - -config = configure.configure(CConfigure) - -dumpcache.dumpcache2('pyexpat', config) diff --git a/lib_pypy/ctypes_config_cache/test/test_cache.py b/lib_pypy/ctypes_config_cache/test/test_cache.py --- a/lib_pypy/ctypes_config_cache/test/test_cache.py +++ b/lib_pypy/ctypes_config_cache/test/test_cache.py @@ -39,10 +39,6 @@ d = run('resource.ctc.py', '_resource_cache.py') assert 'RLIM_NLIMITS' in d -def test_pyexpat(): - d = run('pyexpat.ctc.py', '_pyexpat_cache.py') - assert 'XML_COMBINED_VERSION' in d - def test_locale(): d = run('locale.ctc.py', '_locale_cache.py') assert 'LC_ALL' in d diff --git a/lib_pypy/datetime.py b/lib_pypy/datetime.py --- a/lib_pypy/datetime.py +++ b/lib_pypy/datetime.py @@ -271,8 +271,9 @@ raise ValueError("%s()=%d, must be in -1439..1439" % (name, offset)) def _check_date_fields(year, month, day): - if not isinstance(year, (int, long)): - raise TypeError('int expected') + for value in [year, day]: + if not isinstance(value, (int, long)): + raise TypeError('int expected') if not MINYEAR <= year <= MAXYEAR: raise ValueError('year must be in %d..%d' % (MINYEAR, MAXYEAR), year) if not 1 <= month <= 12: @@ -282,8 +283,9 @@ raise ValueError('day must be in 1..%d' % dim, day) def _check_time_fields(hour, minute, second, microsecond): - if not isinstance(hour, (int, long)): - raise TypeError('int expected') + for value in [hour, minute, second, microsecond]: + if not isinstance(value, (int, long)): + raise TypeError('int expected') if not 0 <= hour <= 23: raise ValueError('hour must be in 0..23', hour) if not 0 <= minute <= 59: @@ -1520,7 +1522,7 @@ def utcfromtimestamp(cls, t): "Construct a UTC datetime from a POSIX timestamp (like time.time())." t, frac = divmod(t, 1.0) - us = round(frac * 1e6) + us = int(round(frac * 1e6)) # If timestamp is less than one microsecond smaller than a # full second, us can be rounded up to 1000000. 
In this case, diff --git a/lib_pypy/numpypy/core/numeric.py b/lib_pypy/numpypy/core/numeric.py --- a/lib_pypy/numpypy/core/numeric.py +++ b/lib_pypy/numpypy/core/numeric.py @@ -1,6 +1,7 @@ from _numpypy import array, ndarray, int_, float_, bool_ #, complex_# , longlong from _numpypy import concatenate +import math import sys import _numpypy as multiarray # ARGH from numpypy.core.arrayprint import array2string @@ -311,6 +312,11 @@ little_endian = (sys.byteorder == 'little') Inf = inf = infty = Infinity = PINF = float('inf') +NINF = float('-inf') +PZERO = 0.0 +NZERO = -0.0 nan = NaN = NAN = float('nan') False_ = bool_(False) True_ = bool_(True) +e = math.e +pi = math.pi \ No newline at end of file diff --git a/lib_pypy/pyexpat.py b/lib_pypy/pyexpat.py deleted file mode 100644 --- a/lib_pypy/pyexpat.py +++ /dev/null @@ -1,448 +0,0 @@ - -import ctypes -import ctypes.util -from ctypes import c_char_p, c_int, c_void_p, POINTER, c_char, c_wchar_p -import sys - -# load the platform-specific cache made by running pyexpat.ctc.py -from ctypes_config_cache._pyexpat_cache import * - -try: from __pypy__ import builtinify -except ImportError: builtinify = lambda f: f - - -lib = ctypes.CDLL(ctypes.util.find_library('expat')) - - -XML_Content.children = POINTER(XML_Content) -XML_Parser = ctypes.c_void_p # an opaque pointer -assert XML_Char is ctypes.c_char # this assumption is everywhere in -# cpython's expat, let's explode - -def declare_external(name, args, res): - func = getattr(lib, name) - func.args = args - func.restype = res - globals()[name] = func - -declare_external('XML_ParserCreate', [c_char_p], XML_Parser) -declare_external('XML_ParserCreateNS', [c_char_p, c_char], XML_Parser) -declare_external('XML_Parse', [XML_Parser, c_char_p, c_int, c_int], c_int) -currents = ['CurrentLineNumber', 'CurrentColumnNumber', - 'CurrentByteIndex'] -for name in currents: - func = getattr(lib, 'XML_Get' + name) - func.args = [XML_Parser] - func.restype = c_int - -declare_external('XML_SetReturnNSTriplet', [XML_Parser, c_int], None) -declare_external('XML_GetSpecifiedAttributeCount', [XML_Parser], c_int) -declare_external('XML_SetParamEntityParsing', [XML_Parser, c_int], None) -declare_external('XML_GetErrorCode', [XML_Parser], c_int) -declare_external('XML_StopParser', [XML_Parser, c_int], None) -declare_external('XML_ErrorString', [c_int], c_char_p) -declare_external('XML_SetBase', [XML_Parser, c_char_p], None) -if XML_COMBINED_VERSION >= 19505: - declare_external('XML_UseForeignDTD', [XML_Parser, c_int], None) - -declare_external('XML_SetUnknownEncodingHandler', [XML_Parser, c_void_p, - c_void_p], None) -declare_external('XML_FreeContentModel', [XML_Parser, POINTER(XML_Content)], - None) -declare_external('XML_ExternalEntityParserCreate', [XML_Parser,c_char_p, - c_char_p], - XML_Parser) - -handler_names = [ - 'StartElement', - 'EndElement', - 'ProcessingInstruction', - 'CharacterData', - 'UnparsedEntityDecl', - 'NotationDecl', - 'StartNamespaceDecl', - 'EndNamespaceDecl', - 'Comment', - 'StartCdataSection', - 'EndCdataSection', - 'Default', - 'DefaultHandlerExpand', - 'NotStandalone', - 'ExternalEntityRef', - 'StartDoctypeDecl', - 'EndDoctypeDecl', - 'EntityDecl', - 'XmlDecl', - 'ElementDecl', - 'AttlistDecl', - ] -if XML_COMBINED_VERSION >= 19504: - handler_names.append('SkippedEntity') -setters = {} - -for name in handler_names: - if name == 'DefaultHandlerExpand': - newname = 'XML_SetDefaultHandlerExpand' - else: - name += 'Handler' - newname = 'XML_Set' + name - cfunc = getattr(lib, newname) - cfunc.args = 
[XML_Parser, ctypes.c_void_p] - cfunc.result = ctypes.c_int - setters[name] = cfunc - -class ExpatError(Exception): - def __str__(self): - return self.s - -error = ExpatError - -class XMLParserType(object): - specified_attributes = 0 - ordered_attributes = 0 - returns_unicode = 1 - encoding = 'utf-8' - def __init__(self, encoding, namespace_separator, _hook_external_entity=False): - self.returns_unicode = 1 - if encoding: - self.encoding = encoding - if not _hook_external_entity: - if namespace_separator is None: - self.itself = XML_ParserCreate(encoding) - else: - self.itself = XML_ParserCreateNS(encoding, ord(namespace_separator)) - if not self.itself: - raise RuntimeError("Creating parser failed") - self._set_unknown_encoding_handler() - self.storage = {} - self.buffer = None - self.buffer_size = 8192 - self.character_data_handler = None - self.intern = {} - self.__exc_info = None - - def _flush_character_buffer(self): - if not self.buffer: - return - res = self._call_character_handler(''.join(self.buffer)) - self.buffer = [] - return res - - def _call_character_handler(self, buf): - if self.character_data_handler: - self.character_data_handler(buf) - - def _set_unknown_encoding_handler(self): - def UnknownEncoding(encodingData, name, info_p): - info = info_p.contents - s = ''.join([chr(i) for i in range(256)]) - u = s.decode(self.encoding, 'replace') - for i in range(len(u)): - if u[i] == u'\xfffd': - info.map[i] = -1 - else: - info.map[i] = ord(u[i]) - info.data = None - info.convert = None - info.release = None - return 1 - - CB = ctypes.CFUNCTYPE(c_int, c_void_p, c_char_p, POINTER(XML_Encoding)) - cb = CB(UnknownEncoding) - self._unknown_encoding_handler = (cb, UnknownEncoding) - XML_SetUnknownEncodingHandler(self.itself, cb, None) - - def _set_error(self, code): - e = ExpatError() - e.code = code - lineno = lib.XML_GetCurrentLineNumber(self.itself) - colno = lib.XML_GetCurrentColumnNumber(self.itself) - e.offset = colno - e.lineno = lineno - err = XML_ErrorString(code)[:200] - e.s = "%s: line: %d, column: %d" % (err, lineno, colno) - e.message = e.s - self._error = e - - def Parse(self, data, is_final=0): - res = XML_Parse(self.itself, data, len(data), is_final) - if res == 0: - self._set_error(XML_GetErrorCode(self.itself)) - if self.__exc_info: - exc_info = self.__exc_info - self.__exc_info = None - raise exc_info[0], exc_info[1], exc_info[2] - else: - raise self._error - self._flush_character_buffer() - return res - - def _sethandler(self, name, real_cb): - setter = setters[name] - try: - cb = self.storage[(name, real_cb)] - except KeyError: - cb = getattr(self, 'get_cb_for_%s' % name)(real_cb) - self.storage[(name, real_cb)] = cb - except TypeError: - # weellll... 
- cb = getattr(self, 'get_cb_for_%s' % name)(real_cb) - setter(self.itself, cb) - - def _wrap_cb(self, cb): - def f(*args): - try: - return cb(*args) - except: - self.__exc_info = sys.exc_info() - XML_StopParser(self.itself, XML_FALSE) - return f - - def get_cb_for_StartElementHandler(self, real_cb): - def StartElement(unused, name, attrs): - # unpack name and attrs - conv = self.conv - self._flush_character_buffer() - if self.specified_attributes: - max = XML_GetSpecifiedAttributeCount(self.itself) - else: - max = 0 - while attrs[max]: - max += 2 # copied - if self.ordered_attributes: - res = [attrs[i] for i in range(max)] - else: - res = {} - for i in range(0, max, 2): - res[conv(attrs[i])] = conv(attrs[i + 1]) - real_cb(conv(name), res) - StartElement = self._wrap_cb(StartElement) - CB = ctypes.CFUNCTYPE(None, c_void_p, c_char_p, POINTER(c_char_p)) - return CB(StartElement) - - def get_cb_for_ExternalEntityRefHandler(self, real_cb): - def ExternalEntity(unused, context, base, sysId, pubId): - self._flush_character_buffer() - conv = self.conv - res = real_cb(conv(context), conv(base), conv(sysId), - conv(pubId)) - if res is None: - return 0 - return res - ExternalEntity = self._wrap_cb(ExternalEntity) - CB = ctypes.CFUNCTYPE(c_int, c_void_p, *([c_char_p] * 4)) - return CB(ExternalEntity) - - def get_cb_for_CharacterDataHandler(self, real_cb): - def CharacterData(unused, s, lgt): - if self.buffer is None: - self._call_character_handler(self.conv(s[:lgt])) - else: - if len(self.buffer) + lgt > self.buffer_size: - self._flush_character_buffer() - if self.character_data_handler is None: - return - if lgt >= self.buffer_size: - self._call_character_handler(s[:lgt]) - self.buffer = [] - else: - self.buffer.append(s[:lgt]) - CharacterData = self._wrap_cb(CharacterData) - CB = ctypes.CFUNCTYPE(None, c_void_p, POINTER(c_char), c_int) - return CB(CharacterData) - - def get_cb_for_NotStandaloneHandler(self, real_cb): - def NotStandaloneHandler(unused): - return real_cb() - NotStandaloneHandler = self._wrap_cb(NotStandaloneHandler) - CB = ctypes.CFUNCTYPE(c_int, c_void_p) - return CB(NotStandaloneHandler) - - def get_cb_for_EntityDeclHandler(self, real_cb): - def EntityDecl(unused, ename, is_param, value, value_len, base, - system_id, pub_id, not_name): - self._flush_character_buffer() - if not value: - value = None - else: - value = value[:value_len] - args = [ename, is_param, value, base, system_id, - pub_id, not_name] - args = [self.conv(arg) for arg in args] - real_cb(*args) - EntityDecl = self._wrap_cb(EntityDecl) - CB = ctypes.CFUNCTYPE(None, c_void_p, c_char_p, c_int, c_char_p, - c_int, c_char_p, c_char_p, c_char_p, c_char_p) - return CB(EntityDecl) - - def _conv_content_model(self, model): - children = tuple([self._conv_content_model(model.children[i]) - for i in range(model.numchildren)]) - return (model.type, model.quant, self.conv(model.name), - children) - - def get_cb_for_ElementDeclHandler(self, real_cb): - def ElementDecl(unused, name, model): - self._flush_character_buffer() - modelobj = self._conv_content_model(model[0]) - real_cb(name, modelobj) - XML_FreeContentModel(self.itself, model) - - ElementDecl = self._wrap_cb(ElementDecl) - CB = ctypes.CFUNCTYPE(None, c_void_p, c_char_p, POINTER(XML_Content)) - return CB(ElementDecl) - - def _new_callback_for_string_len(name, sign): - def get_callback_for_(self, real_cb): - def func(unused, s, len): - self._flush_character_buffer() - arg = self.conv(s[:len]) - real_cb(arg) - func.func_name = name - func = self._wrap_cb(func) - CB = 
ctypes.CFUNCTYPE(*sign) - return CB(func) - get_callback_for_.func_name = 'get_cb_for_' + name - return get_callback_for_ - - for name in ['DefaultHandlerExpand', - 'DefaultHandler']: - sign = [None, c_void_p, POINTER(c_char), c_int] - name = 'get_cb_for_' + name - locals()[name] = _new_callback_for_string_len(name, sign) - - def _new_callback_for_starargs(name, sign): - def get_callback_for_(self, real_cb): - def func(unused, *args): - self._flush_character_buffer() - args = [self.conv(arg) for arg in args] - real_cb(*args) - func.func_name = name - func = self._wrap_cb(func) - CB = ctypes.CFUNCTYPE(*sign) - return CB(func) - get_callback_for_.func_name = 'get_cb_for_' + name - return get_callback_for_ - - for name, num_or_sign in [ - ('EndElementHandler', 1), - ('ProcessingInstructionHandler', 2), - ('UnparsedEntityDeclHandler', 5), - ('NotationDeclHandler', 4), - ('StartNamespaceDeclHandler', 2), - ('EndNamespaceDeclHandler', 1), - ('CommentHandler', 1), - ('StartCdataSectionHandler', 0), - ('EndCdataSectionHandler', 0), - ('StartDoctypeDeclHandler', [None, c_void_p] + [c_char_p] * 3 + [c_int]), - ('XmlDeclHandler', [None, c_void_p, c_char_p, c_char_p, c_int]), - ('AttlistDeclHandler', [None, c_void_p] + [c_char_p] * 4 + [c_int]), - ('EndDoctypeDeclHandler', 0), - ('SkippedEntityHandler', [None, c_void_p, c_char_p, c_int]), - ]: - if isinstance(num_or_sign, int): - sign = [None, c_void_p] + [c_char_p] * num_or_sign - else: - sign = num_or_sign - name = 'get_cb_for_' + name - locals()[name] = _new_callback_for_starargs(name, sign) - - def conv_unicode(self, s): - if s is None or isinstance(s, int): - return s - return s.decode(self.encoding, "strict") - - def __setattr__(self, name, value): - # forest of ifs... - if name in ['ordered_attributes', - 'returns_unicode', 'specified_attributes']: - if value: - if name == 'returns_unicode': - self.conv = self.conv_unicode - self.__dict__[name] = 1 - else: - if name == 'returns_unicode': - self.conv = lambda s: s - self.__dict__[name] = 0 - elif name == 'buffer_text': - if value: - self.buffer = [] - else: - self._flush_character_buffer() - self.buffer = None - elif name == 'buffer_size': - if not isinstance(value, int): - raise TypeError("Expected int") - if value <= 0: - raise ValueError("Expected positive int") - self.__dict__[name] = value - elif name == 'namespace_prefixes': - XML_SetReturnNSTriplet(self.itself, int(bool(value))) - elif name in setters: - if name == 'CharacterDataHandler': - # XXX we need to flush buffer here - self._flush_character_buffer() - self.character_data_handler = value - #print name - #print value - #print - self._sethandler(name, value) - else: - self.__dict__[name] = value - - def SetParamEntityParsing(self, arg): - XML_SetParamEntityParsing(self.itself, arg) - - if XML_COMBINED_VERSION >= 19505: - def UseForeignDTD(self, arg=True): - if arg: - flag = XML_TRUE - else: - flag = XML_FALSE - XML_UseForeignDTD(self.itself, flag) - - def __getattr__(self, name): - if name == 'buffer_text': - return self.buffer is not None - elif name in currents: - return getattr(lib, 'XML_Get' + name)(self.itself) - elif name == 'ErrorColumnNumber': - return lib.XML_GetCurrentColumnNumber(self.itself) - elif name == 'ErrorLineNumber': - return lib.XML_GetCurrentLineNumber(self.itself) - return self.__dict__[name] - - def ParseFile(self, file): - return self.Parse(file.read(), False) - - def SetBase(self, base): - XML_SetBase(self.itself, base) - - def ExternalEntityParserCreate(self, context, encoding=None): - 
"""ExternalEntityParserCreate(context[, encoding]) - Create a parser for parsing an external entity based on the - information passed to the ExternalEntityRefHandler.""" - new_parser = XMLParserType(encoding, None, True) - new_parser.itself = XML_ExternalEntityParserCreate(self.itself, - context, encoding) - new_parser._set_unknown_encoding_handler() - return new_parser - - at builtinify -def ErrorString(errno): - return XML_ErrorString(errno)[:200] - - at builtinify -def ParserCreate(encoding=None, namespace_separator=None, intern=None): - if (not isinstance(encoding, str) and - not encoding is None): - raise TypeError("ParserCreate() argument 1 must be string or None, not %s" % encoding.__class__.__name__) - if (not isinstance(namespace_separator, str) and - not namespace_separator is None): - raise TypeError("ParserCreate() argument 2 must be string or None, not %s" % namespace_separator.__class__.__name__) - if namespace_separator is not None: - if len(namespace_separator) > 1: - raise ValueError('namespace_separator must be at most one character, omitted, or None') - if len(namespace_separator) == 0: - namespace_separator = None - return XMLParserType(encoding, namespace_separator) diff --git a/lib_pypy/pypy_test/test_pyexpat.py b/lib_pypy/pypy_test/test_pyexpat.py deleted file mode 100644 --- a/lib_pypy/pypy_test/test_pyexpat.py +++ /dev/null @@ -1,665 +0,0 @@ -# XXX TypeErrors on calling handlers, or on bad return values from a -# handler, are obscure and unhelpful. - -from __future__ import absolute_import -import StringIO, sys -import unittest, py - -from lib_pypy.ctypes_config_cache import rebuild -rebuild.rebuild_one('pyexpat.ctc.py') - -from lib_pypy import pyexpat -#from xml.parsers import expat -expat = pyexpat - -from test.test_support import sortdict, run_unittest - - -class TestSetAttribute: - def setup_method(self, meth): - self.parser = expat.ParserCreate(namespace_separator='!') - self.set_get_pairs = [ - [0, 0], - [1, 1], - [2, 1], - [0, 0], - ] - - def test_returns_unicode(self): - for x, y in self.set_get_pairs: - self.parser.returns_unicode = x - assert self.parser.returns_unicode == y - - def test_ordered_attributes(self): - for x, y in self.set_get_pairs: - self.parser.ordered_attributes = x - assert self.parser.ordered_attributes == y - - def test_specified_attributes(self): - for x, y in self.set_get_pairs: - self.parser.specified_attributes = x - assert self.parser.specified_attributes == y - - -data = '''\ - - - - - - - - - -%unparsed_entity; -]> - - - - Contents of subelements - - -&external_entity; -&skipped_entity; - -''' - - -# Produce UTF-8 output -class TestParse: - class Outputter: - def __init__(self): - self.out = [] - - def StartElementHandler(self, name, attrs): - self.out.append('Start element: ' + repr(name) + ' ' + - sortdict(attrs)) - - def EndElementHandler(self, name): - self.out.append('End element: ' + repr(name)) - - def CharacterDataHandler(self, data): - data = data.strip() - if data: - self.out.append('Character data: ' + repr(data)) - - def ProcessingInstructionHandler(self, target, data): - self.out.append('PI: ' + repr(target) + ' ' + repr(data)) - - def StartNamespaceDeclHandler(self, prefix, uri): - self.out.append('NS decl: ' + repr(prefix) + ' ' + repr(uri)) - - def EndNamespaceDeclHandler(self, prefix): - self.out.append('End of NS decl: ' + repr(prefix)) - - def StartCdataSectionHandler(self): - self.out.append('Start of CDATA section') - - def EndCdataSectionHandler(self): - self.out.append('End of CDATA section') - - def 
CommentHandler(self, text): - self.out.append('Comment: ' + repr(text)) - - def NotationDeclHandler(self, *args): - name, base, sysid, pubid = args - self.out.append('Notation declared: %s' %(args,)) - - def UnparsedEntityDeclHandler(self, *args): - entityName, base, systemId, publicId, notationName = args - self.out.append('Unparsed entity decl: %s' %(args,)) - - def NotStandaloneHandler(self): - self.out.append('Not standalone') - return 1 - - def ExternalEntityRefHandler(self, *args): - context, base, sysId, pubId = args - self.out.append('External entity ref: %s' %(args[1:],)) - return 1 - - def StartDoctypeDeclHandler(self, *args): - self.out.append(('Start doctype', args)) - return 1 - - def EndDoctypeDeclHandler(self): - self.out.append("End doctype") - return 1 - - def EntityDeclHandler(self, *args): - self.out.append(('Entity declaration', args)) - return 1 - - def XmlDeclHandler(self, *args): - self.out.append(('XML declaration', args)) - return 1 - - def ElementDeclHandler(self, *args): - self.out.append(('Element declaration', args)) - return 1 - - def AttlistDeclHandler(self, *args): - self.out.append(('Attribute list declaration', args)) - return 1 - - def SkippedEntityHandler(self, *args): - self.out.append(("Skipped entity", args)) - return 1 - - def DefaultHandler(self, userData): - pass - - def DefaultHandlerExpand(self, userData): - pass - - handler_names = [ - 'StartElementHandler', 'EndElementHandler', 'CharacterDataHandler', - 'ProcessingInstructionHandler', 'UnparsedEntityDeclHandler', - 'NotationDeclHandler', 'StartNamespaceDeclHandler', - 'EndNamespaceDeclHandler', 'CommentHandler', - 'StartCdataSectionHandler', 'EndCdataSectionHandler', 'DefaultHandler', - 'DefaultHandlerExpand', 'NotStandaloneHandler', - 'ExternalEntityRefHandler', 'StartDoctypeDeclHandler', - 'EndDoctypeDeclHandler', 'EntityDeclHandler', 'XmlDeclHandler', - 'ElementDeclHandler', 'AttlistDeclHandler', 'SkippedEntityHandler', - ] - - def test_utf8(self): - - out = self.Outputter() - parser = expat.ParserCreate(namespace_separator='!') - for name in self.handler_names: - setattr(parser, name, getattr(out, name)) - parser.returns_unicode = 0 - parser.Parse(data, 1) - - # Verify output - operations = out.out - expected_operations = [ - ('XML declaration', (u'1.0', u'iso-8859-1', 0)), - 'PI: \'xml-stylesheet\' \'href="stylesheet.css"\'', - "Comment: ' comment data '", - "Not standalone", - ("Start doctype", ('quotations', 'quotations.dtd', None, 1)), - ('Element declaration', (u'root', (2, 0, None, ()))), - ('Attribute list declaration', ('root', 'attr1', 'CDATA', None, - 1)), - ('Attribute list declaration', ('root', 'attr2', 'CDATA', None, - 0)), - "Notation declared: ('notation', None, 'notation.jpeg', None)", - ('Entity declaration', ('acirc', 0, '\xc3\xa2', None, None, None, None)), - ('Entity declaration', ('external_entity', 0, None, None, - 'entity.file', None, None)), - "Unparsed entity decl: ('unparsed_entity', None, 'entity.file', None, 'notation')", - "Not standalone", - "End doctype", - "Start element: 'root' {'attr1': 'value1', 'attr2': 'value2\\xe1\\xbd\\x80'}", - "NS decl: 'myns' 'http://www.python.org/namespace'", - "Start element: 'http://www.python.org/namespace!subelement' {}", - "Character data: 'Contents of subelements'", - "End element: 'http://www.python.org/namespace!subelement'", - "End of NS decl: 'myns'", - "Start element: 'sub2' {}", - 'Start of CDATA section', - "Character data: 'contents of CDATA section'", - 'End of CDATA section', - "End element: 'sub2'", - "External 
entity ref: (None, 'entity.file', None)", - ('Skipped entity', ('skipped_entity', 0)), - "End element: 'root'", - ] - for operation, expected_operation in zip(operations, expected_operations): - assert operation == expected_operation - - def test_unicode(self): - # Try the parse again, this time producing Unicode output - out = self.Outputter() - parser = expat.ParserCreate(namespace_separator='!') - parser.returns_unicode = 1 - for name in self.handler_names: - setattr(parser, name, getattr(out, name)) - - parser.Parse(data, 1) - - operations = out.out - expected_operations = [ - ('XML declaration', (u'1.0', u'iso-8859-1', 0)), - 'PI: u\'xml-stylesheet\' u\'href="stylesheet.css"\'', - "Comment: u' comment data '", - "Not standalone", - ("Start doctype", ('quotations', 'quotations.dtd', None, 1)), - ('Element declaration', (u'root', (2, 0, None, ()))), - ('Attribute list declaration', ('root', 'attr1', 'CDATA', None, - 1)), - ('Attribute list declaration', ('root', 'attr2', 'CDATA', None, - 0)), - "Notation declared: (u'notation', None, u'notation.jpeg', None)", - ('Entity declaration', (u'acirc', 0, u'\xe2', None, None, None, - None)), - ('Entity declaration', (u'external_entity', 0, None, None, - u'entity.file', None, None)), - "Unparsed entity decl: (u'unparsed_entity', None, u'entity.file', None, u'notation')", - "Not standalone", - "End doctype", - "Start element: u'root' {u'attr1': u'value1', u'attr2': u'value2\\u1f40'}", - "NS decl: u'myns' u'http://www.python.org/namespace'", - "Start element: u'http://www.python.org/namespace!subelement' {}", - "Character data: u'Contents of subelements'", - "End element: u'http://www.python.org/namespace!subelement'", - "End of NS decl: u'myns'", - "Start element: u'sub2' {}", - 'Start of CDATA section', - "Character data: u'contents of CDATA section'", - 'End of CDATA section', - "End element: u'sub2'", - "External entity ref: (None, u'entity.file', None)", - ('Skipped entity', ('skipped_entity', 0)), - "End element: u'root'", - ] - for operation, expected_operation in zip(operations, expected_operations): - assert operation == expected_operation - - def test_parse_file(self): - # Try parsing a file - out = self.Outputter() - parser = expat.ParserCreate(namespace_separator='!') - parser.returns_unicode = 1 - for name in self.handler_names: - setattr(parser, name, getattr(out, name)) - file = StringIO.StringIO(data) - - parser.ParseFile(file) - - operations = out.out - expected_operations = [ - ('XML declaration', (u'1.0', u'iso-8859-1', 0)), - 'PI: u\'xml-stylesheet\' u\'href="stylesheet.css"\'', - "Comment: u' comment data '", - "Not standalone", - ("Start doctype", ('quotations', 'quotations.dtd', None, 1)), - ('Element declaration', (u'root', (2, 0, None, ()))), - ('Attribute list declaration', ('root', 'attr1', 'CDATA', None, - 1)), - ('Attribute list declaration', ('root', 'attr2', 'CDATA', None, - 0)), - "Notation declared: (u'notation', None, u'notation.jpeg', None)", - ('Entity declaration', ('acirc', 0, u'\xe2', None, None, None, None)), - ('Entity declaration', (u'external_entity', 0, None, None, u'entity.file', None, None)), - "Unparsed entity decl: (u'unparsed_entity', None, u'entity.file', None, u'notation')", - "Not standalone", - "End doctype", - "Start element: u'root' {u'attr1': u'value1', u'attr2': u'value2\\u1f40'}", - "NS decl: u'myns' u'http://www.python.org/namespace'", - "Start element: u'http://www.python.org/namespace!subelement' {}", - "Character data: u'Contents of subelements'", - "End element: 
u'http://www.python.org/namespace!subelement'", - "End of NS decl: u'myns'", - "Start element: u'sub2' {}", - 'Start of CDATA section', - "Character data: u'contents of CDATA section'", - 'End of CDATA section', - "End element: u'sub2'", - "External entity ref: (None, u'entity.file', None)", - ('Skipped entity', ('skipped_entity', 0)), - "End element: u'root'", - ] - for operation, expected_operation in zip(operations, expected_operations): - assert operation == expected_operation - - -class TestNamespaceSeparator: - def test_legal(self): - # Tests that make sure we get errors when the namespace_separator value - # is illegal, and that we don't for good values: - expat.ParserCreate() - expat.ParserCreate(namespace_separator=None) - expat.ParserCreate(namespace_separator=' ') - - def test_illegal(self): - try: - expat.ParserCreate(namespace_separator=42) - raise AssertionError - except TypeError, e: - assert str(e) == ( - 'ParserCreate() argument 2 must be string or None, not int') - - try: - expat.ParserCreate(namespace_separator='too long') - raise AssertionError - except ValueError, e: - assert str(e) == ( - 'namespace_separator must be at most one character, omitted, or None') - - def test_zero_length(self): - # ParserCreate() needs to accept a namespace_separator of zero length - # to satisfy the requirements of RDF applications that are required - # to simply glue together the namespace URI and the localname. Though - # considered a wart of the RDF specifications, it needs to be supported. - # - # See XML-SIG mailing list thread starting with - # http://mail.python.org/pipermail/xml-sig/2001-April/005202.html - # - expat.ParserCreate(namespace_separator='') # too short - - -class TestInterning: - def test(self): - py.test.skip("Not working") - # Test the interning machinery. - p = expat.ParserCreate() - L = [] - def collector(name, *args): - L.append(name) - p.StartElementHandler = collector - p.EndElementHandler = collector - p.Parse(" ", 1) - tag = L[0] - assert len(L) == 6 - for entry in L: - # L should have the same string repeated over and over. - assert tag is entry - - -class TestBufferText: - def setup_method(self, meth): - self.stuff = [] - self.parser = expat.ParserCreate() - self.parser.buffer_text = 1 - self.parser.CharacterDataHandler = self.CharacterDataHandler - - def check(self, expected, label): - assert self.stuff == expected, ( - "%s\nstuff = %r\nexpected = %r" - % (label, self.stuff, map(unicode, expected))) - - def CharacterDataHandler(self, text): - self.stuff.append(text) - - def StartElementHandler(self, name, attrs): - self.stuff.append("<%s>" % name) - bt = attrs.get("buffer-text") - if bt == "yes": - self.parser.buffer_text = 1 - elif bt == "no": - self.parser.buffer_text = 0 - - def EndElementHandler(self, name): - self.stuff.append("" % name) - - def CommentHandler(self, data): - self.stuff.append("" % data) - - def setHandlers(self, handlers=[]): - for name in handlers: - setattr(self.parser, name, getattr(self, name)) - - def test_default_to_disabled(self): - parser = expat.ParserCreate() - assert not parser.buffer_text - - def test_buffering_enabled(self): - # Make sure buffering is turned on - assert self.parser.buffer_text - self.parser.Parse("123", 1) - assert self.stuff == ['123'], ( - "buffered text not properly collapsed") - - def test1(self): - # XXX This test exposes more detail of Expat's text chunking than we - # XXX like, but it tests what we need to concisely. 
- self.setHandlers(["StartElementHandler"]) - self.parser.Parse("12\n34\n5", 1) - assert self.stuff == ( - ["", "1", "", "2", "\n", "3", "", "4\n5"]), ( - "buffering control not reacting as expected") - - def test2(self): - self.parser.Parse("1<2> \n 3", 1) - assert self.stuff == ["1<2> \n 3"], ( - "buffered text not properly collapsed") - - def test3(self): - self.setHandlers(["StartElementHandler"]) - self.parser.Parse("123", 1) - assert self.stuff == ["", "1", "", "2", "", "3"], ( - "buffered text not properly split") - - def test4(self): - self.setHandlers(["StartElementHandler", "EndElementHandler"]) - self.parser.CharacterDataHandler = None - self.parser.Parse("123", 1) - assert self.stuff == ( - ["", "", "", "", "", ""]) - - def test5(self): - self.setHandlers(["StartElementHandler", "EndElementHandler"]) - self.parser.Parse("123", 1) - assert self.stuff == ( - ["", "1", "", "", "2", "", "", "3", ""]) - - def test6(self): - self.setHandlers(["CommentHandler", "EndElementHandler", - "StartElementHandler"]) - self.parser.Parse("12345 ", 1) - assert self.stuff == ( - ["", "1", "", "", "2", "", "", "345", ""]), ( - "buffered text not properly split") - - def test7(self): - self.setHandlers(["CommentHandler", "EndElementHandler", - "StartElementHandler"]) - self.parser.Parse("12345 ", 1) - assert self.stuff == ( - ["", "1", "", "", "2", "", "", "3", - "", "4", "", "5", ""]), ( - "buffered text not properly split") - - -# Test handling of exception from callback: -class TestHandlerException: - def StartElementHandler(self, name, attrs): - raise RuntimeError(name) - - def test(self): - parser = expat.ParserCreate() - parser.StartElementHandler = self.StartElementHandler - try: - parser.Parse("", 1) - raise AssertionError - except RuntimeError, e: - assert e.args[0] == 'a', ( - "Expected RuntimeError for element 'a', but" + \ - " found %r" % e.args[0]) - - -# Test Current* members: -class TestPosition: - def StartElementHandler(self, name, attrs): - self.check_pos('s') - - def EndElementHandler(self, name): - self.check_pos('e') - - def check_pos(self, event): - pos = (event, - self.parser.CurrentByteIndex, - self.parser.CurrentLineNumber, - self.parser.CurrentColumnNumber) - assert self.upto < len(self.expected_list) - expected = self.expected_list[self.upto] - assert pos == expected, ( - 'Expected position %s, got position %s' %(pos, expected)) - self.upto += 1 - - def test(self): - self.parser = expat.ParserCreate() - self.parser.StartElementHandler = self.StartElementHandler - self.parser.EndElementHandler = self.EndElementHandler - self.upto = 0 - self.expected_list = [('s', 0, 1, 0), ('s', 5, 2, 1), ('s', 11, 3, 2), - ('e', 15, 3, 6), ('e', 17, 4, 1), ('e', 22, 5, 0)] - - xml = '\n \n \n \n' - self.parser.Parse(xml, 1) - - -class Testsf1296433: - def test_parse_only_xml_data(self): - # http://python.org/sf/1296433 - # - xml = "%s" % ('a' * 1025) - # this one doesn't crash - #xml = "%s" % ('a' * 10000) - - class SpecificException(Exception): - pass - - def handler(text): - raise SpecificException - - parser = expat.ParserCreate() - parser.CharacterDataHandler = handler - - py.test.raises(Exception, parser.Parse, xml) - -class TestChardataBuffer: - """ - test setting of chardata buffer size - """ - - def test_1025_bytes(self): - assert self.small_buffer_test(1025) == 2 - - def test_1000_bytes(self): - assert self.small_buffer_test(1000) == 1 - - def test_wrong_size(self): - parser = expat.ParserCreate() - parser.buffer_text = 1 - def f(size): - parser.buffer_size = size - - 
py.test.raises(TypeError, f, sys.maxint+1) - py.test.raises(ValueError, f, -1) - py.test.raises(ValueError, f, 0) - - def test_unchanged_size(self): - xml1 = ("%s" % ('a' * 512)) - xml2 = 'a'*512 + '' - parser = expat.ParserCreate() - parser.CharacterDataHandler = self.counting_handler - parser.buffer_size = 512 - parser.buffer_text = 1 - - # Feed 512 bytes of character data: the handler should be called - # once. - self.n = 0 - parser.Parse(xml1) - assert self.n == 1 - - # Reassign to buffer_size, but assign the same size. - parser.buffer_size = parser.buffer_size - assert self.n == 1 - - # Try parsing rest of the document - parser.Parse(xml2) - assert self.n == 2 - - - def test_disabling_buffer(self): - xml1 = "%s" % ('a' * 512) - xml2 = ('b' * 1024) - xml3 = "%s" % ('c' * 1024) - parser = expat.ParserCreate() - parser.CharacterDataHandler = self.counting_handler - parser.buffer_text = 1 - parser.buffer_size = 1024 - assert parser.buffer_size == 1024 - - # Parse one chunk of XML - self.n = 0 - parser.Parse(xml1, 0) - assert parser.buffer_size == 1024 - assert self.n == 1 - - # Turn off buffering and parse the next chunk. - parser.buffer_text = 0 - assert not parser.buffer_text - assert parser.buffer_size == 1024 - for i in range(10): - parser.Parse(xml2, 0) - assert self.n == 11 - - parser.buffer_text = 1 - assert parser.buffer_text - assert parser.buffer_size == 1024 - parser.Parse(xml3, 1) - assert self.n == 12 - - - - def make_document(self, bytes): - return ("" + bytes * 'a' + '') - - def counting_handler(self, text): - self.n += 1 - - def small_buffer_test(self, buffer_len): - xml = "%s" % ('a' * buffer_len) - parser = expat.ParserCreate() - parser.CharacterDataHandler = self.counting_handler - parser.buffer_size = 1024 - parser.buffer_text = 1 - - self.n = 0 - parser.Parse(xml) - return self.n - - def test_change_size_1(self): - xml1 = "%s" % ('a' * 1024) - xml2 = "aaa%s" % ('a' * 1025) - parser = expat.ParserCreate() - parser.CharacterDataHandler = self.counting_handler - parser.buffer_text = 1 - parser.buffer_size = 1024 - assert parser.buffer_size == 1024 - - self.n = 0 - parser.Parse(xml1, 0) - parser.buffer_size *= 2 - assert parser.buffer_size == 2048 - parser.Parse(xml2, 1) - assert self.n == 2 - - def test_change_size_2(self): - xml1 = "a%s" % ('a' * 1023) - xml2 = "aaa%s" % ('a' * 1025) - parser = expat.ParserCreate() - parser.CharacterDataHandler = self.counting_handler - parser.buffer_text = 1 - parser.buffer_size = 2048 - assert parser.buffer_size == 2048 - - self.n=0 - parser.Parse(xml1, 0) - parser.buffer_size /= 2 - assert parser.buffer_size == 1024 - parser.Parse(xml2, 1) - assert self.n == 4 - - def test_segfault(self): - py.test.raises(TypeError, expat.ParserCreate, 1234123123) - -def test_invalid_data(): - parser = expat.ParserCreate() - parser.Parse('invalid.xml', 0) - try: - parser.Parse("", 1) - except expat.ExpatError, e: - assert e.code == 2 # XXX is this reliable? - assert e.lineno == 1 - assert e.message.startswith('syntax error') - else: - py.test.fail("Did not raise") - diff --git a/pypy/doc/coding-guide.rst b/pypy/doc/coding-guide.rst --- a/pypy/doc/coding-guide.rst +++ b/pypy/doc/coding-guide.rst @@ -388,7 +388,9 @@ In a few cases (e.g. hash table manipulation), we need machine-sized unsigned arithmetic. For these cases there is the r_uint class, which is a pure Python implementation of word-sized unsigned integers that silently wrap - around. The purpose of this class (as opposed to helper functions as above) + around. 
("word-sized" and "machine-sized" are used equivalently and mean + the native size, which you get using "unsigned long" in C.) + The purpose of this class (as opposed to helper functions as above) is consistent typing: both Python and the annotator will propagate r_uint instances in the program and interpret all the operations between them as unsigned. Instances of r_uint are special-cased by the code generators to diff --git a/pypy/doc/config/objspace.usemodules.pyexpat.txt b/pypy/doc/config/objspace.usemodules.pyexpat.txt --- a/pypy/doc/config/objspace.usemodules.pyexpat.txt +++ b/pypy/doc/config/objspace.usemodules.pyexpat.txt @@ -1,2 +1,1 @@ -Use (experimental) pyexpat module written in RPython, instead of CTypes -version which is used by default. +Use the pyexpat module, written in RPython. diff --git a/pypy/doc/getting-started-python.rst b/pypy/doc/getting-started-python.rst --- a/pypy/doc/getting-started-python.rst +++ b/pypy/doc/getting-started-python.rst @@ -103,18 +103,22 @@ executable. The executable behaves mostly like a normal Python interpreter:: $ ./pypy-c - Python 2.7.0 (61ef2a11b56a, Mar 02 2011, 03:00:11) - [PyPy 1.6.0 with GCC 4.4.3] on linux2 + Python 2.7.2 (0e28b379d8b3, Feb 09 2012, 19:41:03) + [PyPy 1.8.0 with GCC 4.4.3] on linux2 Type "help", "copyright", "credits" or "license" for more information. And now for something completely different: ``this sentence is false'' >>>> 46 - 4 42 >>>> from test import pystone >>>> pystone.main() - Pystone(1.1) time for 50000 passes = 0.280017 - This machine benchmarks at 178561 pystones/second - >>>> + Pystone(1.1) time for 50000 passes = 0.220015 + This machine benchmarks at 227257 pystones/second + >>>> pystone.main() + Pystone(1.1) time for 50000 passes = 0.060004 + This machine benchmarks at 833278 pystones/second + >>>> +Note that pystone gets faster as the JIT kicks in. This executable can be moved around or copied on other machines; see Installation_ below. diff --git a/pypy/doc/getting-started.rst b/pypy/doc/getting-started.rst --- a/pypy/doc/getting-started.rst +++ b/pypy/doc/getting-started.rst @@ -53,14 +53,15 @@ PyPy is ready to be executed as soon as you unpack the tarball or the zip file, with no need to install it in any specific location:: - $ tar xf pypy-1.7-linux.tar.bz2 - - $ ./pypy-1.7/bin/pypy - Python 2.7.1 (?, Apr 27 2011, 12:44:21) - [PyPy 1.7.0 with GCC 4.4.3] on linux2 + $ tar xf pypy-1.8-linux.tar.bz2 + $ ./pypy-1.8/bin/pypy + Python 2.7.2 (0e28b379d8b3, Feb 09 2012, 19:41:03) + [PyPy 1.8.0 with GCC 4.4.3] on linux2 Type "help", "copyright", "credits" or "license" for more information. - And now for something completely different: ``implementing LOGO in LOGO: - "turtles all the way down"'' + And now for something completely different: ``it seems to me that once you + settle on an execution / object model and / or bytecode format, you've already + decided what languages (where the 's' seems superfluous) support is going to be + first class for'' >>>> If you want to make PyPy available system-wide, you can put a symlink to the @@ -75,14 +76,14 @@ $ curl -O https://raw.github.com/pypa/pip/master/contrib/get-pip.py - $ ./pypy-1.7/bin/pypy distribute_setup.py + $ ./pypy-1.8/bin/pypy distribute_setup.py - $ ./pypy-1.7/bin/pypy get-pip.py + $ ./pypy-1.8/bin/pypy get-pip.py - $ ./pypy-1.7/bin/pip install pygments # for example + $ ./pypy-1.8/bin/pip install pygments # for example -3rd party libraries will be installed in ``pypy-1.7/site-packages``, and -the scripts in ``pypy-1.7/bin``. 
+3rd party libraries will be installed in ``pypy-1.8/site-packages``, and +the scripts in ``pypy-1.8/bin``. Installing using virtualenv --------------------------- diff --git a/pypy/doc/index.rst b/pypy/doc/index.rst --- a/pypy/doc/index.rst +++ b/pypy/doc/index.rst @@ -15,7 +15,7 @@ * `FAQ`_: some frequently asked questions. -* `Release 1.7`_: the latest official release +* `Release 1.8`_: the latest official release * `PyPy Blog`_: news and status info about PyPy @@ -75,7 +75,7 @@ .. _`Getting Started`: getting-started.html .. _`Papers`: extradoc.html .. _`Videos`: video-index.html -.. _`Release 1.7`: http://pypy.org/download.html +.. _`Release 1.8`: http://pypy.org/download.html .. _`speed.pypy.org`: http://speed.pypy.org .. _`RPython toolchain`: translation.html .. _`potential project ideas`: project-ideas.html @@ -120,9 +120,9 @@ Windows, on top of .NET, and on top of Java. To dig into PyPy it is recommended to try out the current Mercurial default branch, which is always working or mostly working, -instead of the latest release, which is `1.7`__. +instead of the latest release, which is `1.8`__. -.. __: release-1.7.0.html +.. __: release-1.8.0.html PyPy is mainly developed on Linux and Mac OS X. Windows is supported, but platform-specific bugs tend to take longer before we notice and fix diff --git a/pypy/doc/release-1.8.0.rst b/pypy/doc/release-1.8.0.rst --- a/pypy/doc/release-1.8.0.rst +++ b/pypy/doc/release-1.8.0.rst @@ -2,16 +2,21 @@ PyPy 1.8 - business as usual ============================ -We're pleased to announce the 1.8 release of PyPy. As has become a habit, this -release brings a lot of bugfixes, and performance and memory improvements over -the 1.7 release. The main highlight of the release is the introduction of -list strategies which makes homogenous lists more efficient both in terms -of performance and memory. This release also upgrades us from Python 2.7.1 compatibility to 2.7.2. Otherwise it's "business as usual" in the sense -that performance improved roughly 10% on average since the previous release. -You can download the PyPy 1.8 release here: +We're pleased to announce the 1.8 release of PyPy. As habitual this +release brings a lot of bugfixes, together with performance and memory +improvements over the 1.7 release. The main highlight of the release +is the introduction of `list strategies`_ which makes homogenous lists +more efficient both in terms of performance and memory. This release +also upgrades us from Python 2.7.1 compatibility to 2.7.2. Otherwise +it's "business as usual" in the sense that performance improved +roughly 10% on average since the previous release. + +you can download the PyPy 1.8 release here: http://pypy.org/download.html +.. _`list strategies`: http://morepypy.blogspot.com/2011/10/more-compact-lists-with-list-strategies.html + What is PyPy? ============= @@ -60,13 +65,6 @@ * New JIT hooks that allow you to hook into the JIT process from your python program. There is a `brief overview`_ of what they offer. -* Since the last release there was a significant breakthrough in PyPy's - fundraising. We now have enough funds to work on first stages of `numpypy`_ - and `py3k`_. We would like to thank again to everyone who donated. - - It's also probably worth noting, we're considering donations for the STM - project. - * Standard library upgrade from 2.7.1 to 2.7.2. 
Ongoing work @@ -82,7 +80,15 @@ * More numpy work -* Software Transactional Memory, you can read more about `our plans`_ +* Since the last release there was a significant breakthrough in PyPy's + fundraising. We now have enough funds to work on first stages of `numpypy`_ + and `py3k`_. We would like to thank again to everyone who donated. + +* It's also probably worth noting, we're considering donations for the + Software Transactional Memory project. You can read more about `our plans`_ + +Cheers, +The PyPy Team .. _`brief overview`: http://doc.pypy.org/en/latest/jit-hooks.html .. _`numpy status page`: http://buildbot.pypy.org/numpy-status/latest.html diff --git a/pypy/module/_io/test/test_fileio.py b/pypy/module/_io/test/test_fileio.py --- a/pypy/module/_io/test/test_fileio.py +++ b/pypy/module/_io/test/test_fileio.py @@ -134,7 +134,10 @@ assert a == 'a\nbxxxxxxx' def test_nonblocking_read(self): - import os, fcntl + try: + import os, fcntl + except ImportError: + skip("need fcntl to set nonblocking mode") r_fd, w_fd = os.pipe() # set nonblocking fcntl.fcntl(r_fd, fcntl.F_SETFL, os.O_NONBLOCK) diff --git a/pypy/module/cpyext/api.py b/pypy/module/cpyext/api.py --- a/pypy/module/cpyext/api.py +++ b/pypy/module/cpyext/api.py @@ -23,6 +23,7 @@ from pypy.interpreter.function import StaticMethod from pypy.objspace.std.sliceobject import W_SliceObject from pypy.module.__builtin__.descriptor import W_Property +from pypy.module.__builtin__.interp_classobj import W_ClassObject from pypy.module.__builtin__.interp_memoryview import W_MemoryView from pypy.rlib.entrypoint import entrypoint from pypy.rlib.unroll import unrolling_iterable @@ -397,6 +398,7 @@ 'Module': 'space.gettypeobject(Module.typedef)', 'Property': 'space.gettypeobject(W_Property.typedef)', 'Slice': 'space.gettypeobject(W_SliceObject.typedef)', + 'Class': 'space.gettypeobject(W_ClassObject.typedef)', 'StaticMethod': 'space.gettypeobject(StaticMethod.typedef)', 'CFunction': 'space.gettypeobject(cpyext.methodobject.W_PyCFunctionObject.typedef)', 'WrapperDescr': 'space.gettypeobject(cpyext.methodobject.W_PyCMethodObject.typedef)' diff --git a/pypy/module/cpyext/include/methodobject.h b/pypy/module/cpyext/include/methodobject.h --- a/pypy/module/cpyext/include/methodobject.h +++ b/pypy/module/cpyext/include/methodobject.h @@ -26,6 +26,7 @@ PyObject_HEAD PyMethodDef *m_ml; /* Description of the C function to call */ PyObject *m_self; /* Passed as 'self' arg to the C func, can be NULL */ + PyObject *m_module; /* The __module__ attribute, can be anything */ } PyCFunctionObject; /* Flag passed to newmethodobject */ diff --git a/pypy/module/cpyext/include/patchlevel.h b/pypy/module/cpyext/include/patchlevel.h --- a/pypy/module/cpyext/include/patchlevel.h +++ b/pypy/module/cpyext/include/patchlevel.h @@ -21,12 +21,12 @@ /* Version parsed out into numeric values */ #define PY_MAJOR_VERSION 2 #define PY_MINOR_VERSION 7 -#define PY_MICRO_VERSION 1 +#define PY_MICRO_VERSION 2 #define PY_RELEASE_LEVEL PY_RELEASE_LEVEL_FINAL #define PY_RELEASE_SERIAL 0 /* Version as a string */ -#define PY_VERSION "2.7.1" +#define PY_VERSION "2.7.2" /* PyPy version as a string */ #define PYPY_VERSION "1.8.1" diff --git a/pypy/module/cpyext/include/pystate.h b/pypy/module/cpyext/include/pystate.h --- a/pypy/module/cpyext/include/pystate.h +++ b/pypy/module/cpyext/include/pystate.h @@ -24,4 +24,6 @@ enum {PyGILState_LOCKED, PyGILState_UNLOCKED} PyGILState_STATE; +#define PyThreadState_GET() PyThreadState_Get() + #endif /* !Py_PYSTATE_H */ diff --git 
a/pypy/module/cpyext/include/structmember.h b/pypy/module/cpyext/include/structmember.h --- a/pypy/module/cpyext/include/structmember.h +++ b/pypy/module/cpyext/include/structmember.h @@ -20,7 +20,7 @@ } PyMemberDef; -/* Types */ +/* Types. These constants are also in structmemberdefs.py. */ #define T_SHORT 0 #define T_INT 1 #define T_LONG 2 @@ -42,9 +42,12 @@ #define T_LONGLONG 17 #define T_ULONGLONG 18 -/* Flags */ +/* Flags. These constants are also in structmemberdefs.py. */ #define READONLY 1 #define RO READONLY /* Shorthand */ +#define READ_RESTRICTED 2 +#define PY_WRITE_RESTRICTED 4 +#define RESTRICTED (READ_RESTRICTED | PY_WRITE_RESTRICTED) #ifdef __cplusplus diff --git a/pypy/module/cpyext/methodobject.py b/pypy/module/cpyext/methodobject.py --- a/pypy/module/cpyext/methodobject.py +++ b/pypy/module/cpyext/methodobject.py @@ -32,6 +32,7 @@ PyObjectFields + ( ('m_ml', lltype.Ptr(PyMethodDef)), ('m_self', PyObject), + ('m_module', PyObject), )) PyCFunctionObject = lltype.Ptr(PyCFunctionObjectStruct) @@ -47,11 +48,13 @@ assert isinstance(w_obj, W_PyCFunctionObject) py_func.c_m_ml = w_obj.ml py_func.c_m_self = make_ref(space, w_obj.w_self) + py_func.c_m_module = make_ref(space, w_obj.w_module) @cpython_api([PyObject], lltype.Void, external=False) def cfunction_dealloc(space, py_obj): py_func = rffi.cast(PyCFunctionObject, py_obj) Py_DecRef(space, py_func.c_m_self) + Py_DecRef(space, py_func.c_m_module) from pypy.module.cpyext.object import PyObject_dealloc PyObject_dealloc(space, py_obj) diff --git a/pypy/module/cpyext/object.py b/pypy/module/cpyext/object.py --- a/pypy/module/cpyext/object.py +++ b/pypy/module/cpyext/object.py @@ -381,6 +381,15 @@ This is the equivalent of the Python expression hash(o).""" return space.int_w(space.hash(w_obj)) + at cpython_api([PyObject], PyObject) +def PyObject_Dir(space, w_o): + """This is equivalent to the Python expression dir(o), returning a (possibly + empty) list of strings appropriate for the object argument, or NULL if there + was an error. If the argument is NULL, this is like the Python dir(), + returning the names of the current locals; in this case, if no execution frame + is active then NULL is returned but PyErr_Occurred() will return false.""" + return space.call_function(space.builtin.get('dir'), w_o) + @cpython_api([PyObject, rffi.CCHARPP, Py_ssize_tP], rffi.INT_real, error=-1) def PyObject_AsCharBuffer(space, obj, bufferp, sizep): """Returns a pointer to a read-only memory location usable as diff --git a/pypy/module/cpyext/pyfile.py b/pypy/module/cpyext/pyfile.py --- a/pypy/module/cpyext/pyfile.py +++ b/pypy/module/cpyext/pyfile.py @@ -1,7 +1,8 @@ from pypy.rpython.lltypesystem import rffi, lltype from pypy.module.cpyext.api import ( - cpython_api, CONST_STRING, FILEP, build_type_checkers) + cpython_api, CANNOT_FAIL, CONST_STRING, FILEP, build_type_checkers) from pypy.module.cpyext.pyobject import PyObject, borrow_from +from pypy.module.cpyext.object import Py_PRINT_RAW from pypy.interpreter.error import OperationError from pypy.module._file.interp_file import W_File @@ -61,11 +62,49 @@ def PyFile_WriteString(space, s, w_p): """Write string s to file object p. 
Return 0 on success or -1 on failure; the appropriate exception will be set.""" - w_s = space.wrap(rffi.charp2str(s)) - space.call_method(w_p, "write", w_s) + w_str = space.wrap(rffi.charp2str(s)) + space.call_method(w_p, "write", w_str) + return 0 + + at cpython_api([PyObject, PyObject, rffi.INT_real], rffi.INT_real, error=-1) +def PyFile_WriteObject(space, w_obj, w_p, flags): + """ + Write object obj to file object p. The only supported flag for flags is + Py_PRINT_RAW; if given, the str() of the object is written + instead of the repr(). Return 0 on success or -1 on failure; the + appropriate exception will be set.""" + if rffi.cast(lltype.Signed, flags) & Py_PRINT_RAW: + w_str = space.str(w_obj) + else: + w_str = space.repr(w_obj) + space.call_method(w_p, "write", w_str) return 0 @cpython_api([PyObject], PyObject) def PyFile_Name(space, w_p): """Return the name of the file specified by p as a string object.""" - return borrow_from(w_p, space.getattr(w_p, space.wrap("name"))) \ No newline at end of file + return borrow_from(w_p, space.getattr(w_p, space.wrap("name"))) + + at cpython_api([PyObject, rffi.INT_real], rffi.INT_real, error=CANNOT_FAIL) +def PyFile_SoftSpace(space, w_p, newflag): + """ + This function exists for internal use by the interpreter. Set the + softspace attribute of p to newflag and return the previous value. + p does not have to be a file object for this function to work + properly; any object is supported (thought its only interesting if + the softspace attribute can be set). This function clears any + errors, and will return 0 as the previous value if the attribute + either does not exist or if there were errors in retrieving it. + There is no way to detect errors from this function, but doing so + should not be needed.""" + try: + if rffi.cast(lltype.Signed, newflag): + w_newflag = space.w_True + else: + w_newflag = space.w_False + oldflag = space.int_w(space.getattr(w_p, space.wrap("softspace"))) + space.setattr(w_p, space.wrap("softspace"), w_newflag) + return oldflag + except OperationError, e: + return 0 + diff --git a/pypy/module/cpyext/pythonrun.py b/pypy/module/cpyext/pythonrun.py --- a/pypy/module/cpyext/pythonrun.py +++ b/pypy/module/cpyext/pythonrun.py @@ -14,6 +14,20 @@ value.""" return space.fromcache(State).get_programname() + at cpython_api([], rffi.CCHARP) +def Py_GetVersion(space): + """Return the version of this Python interpreter. This is a + string that looks something like + + "1.5 (\#67, Dec 31 1997, 22:34:28) [GCC 2.7.2.2]" + + The first word (up to the first space character) is the current + Python version; the first three characters are the major and minor + version separated by a period. The returned string points into + static storage; the caller should not modify its value. The value + is available to Python code as sys.version.""" + return space.fromcache(State).get_version() + @cpython_api([lltype.Ptr(lltype.FuncType([], lltype.Void))], rffi.INT_real, error=-1) def Py_AtExit(space, func_ptr): """Register a cleanup function to be called by Py_Finalize(). 
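The PyFile_SoftSpace and PyFile_WriteObject implementations added to pyfile.py above manipulate the same softspace protocol that the print statement uses at app level. A plain Python 2 sketch of that protocol (nothing PyPy-specific assumed); the output, " 1 23" plus a newline, is the string the new test_file_softspace test further down checks for:

    import sys

    sys.stdout.softspace = 1   # the app-level effect of PyFile_SoftSpace(w_stdout, 1)
    print 1,                   # softspace was set, so a separating space is written first
    print 2,                   # the trailing comma keeps softspace set between prints
    sys.stdout.softspace = 0   # cleared again, as PyFile_SoftSpace(w_stdout, 0) would do
    print 3                    # no leading space this time
    # writes " 1 23" followed by a newline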
The cleanup diff --git a/pypy/module/cpyext/state.py b/pypy/module/cpyext/state.py --- a/pypy/module/cpyext/state.py +++ b/pypy/module/cpyext/state.py @@ -10,6 +10,7 @@ self.space = space self.reset() self.programname = lltype.nullptr(rffi.CCHARP.TO) + self.version = lltype.nullptr(rffi.CCHARP.TO) def reset(self): from pypy.module.cpyext.modsupport import PyMethodDef @@ -102,6 +103,15 @@ lltype.render_immortal(self.programname) return self.programname + def get_version(self): + if not self.version: + space = self.space + w_version = space.sys.get('version') + version = space.str_w(w_version) + self.version = rffi.str2charp(version) + lltype.render_immortal(self.version) + return self.version + def find_extension(self, name, path): from pypy.module.cpyext.modsupport import PyImport_AddModule from pypy.interpreter.module import Module diff --git a/pypy/module/cpyext/stringobject.py b/pypy/module/cpyext/stringobject.py --- a/pypy/module/cpyext/stringobject.py +++ b/pypy/module/cpyext/stringobject.py @@ -250,6 +250,25 @@ s = rffi.charp2str(string) return space.new_interned_str(s) + at cpython_api([PyObjectP], lltype.Void) +def PyString_InternInPlace(space, string): + """Intern the argument *string in place. The argument must be the + address of a pointer variable pointing to a Python string object. + If there is an existing interned string that is the same as + *string, it sets *string to it (decrementing the reference count + of the old string object and incrementing the reference count of + the interned string object), otherwise it leaves *string alone and + interns it (incrementing its reference count). (Clarification: + even though there is a lot of talk about reference counts, think + of this function as reference-count-neutral; you own the object + after the call if and only if you owned it before the call.) 
+ + This function is not available in 3.x and does not have a PyBytes + alias.""" + w_str = from_ref(space, string[0]) + w_str = space.new_interned_w_str(w_str) + string[0] = make_ref(space, w_str) + @cpython_api([PyObject, rffi.CCHARP, rffi.CCHARP], PyObject) def PyString_AsEncodedObject(space, w_str, encoding, errors): """Encode a string object using the codec registered for encoding and return diff --git a/pypy/module/cpyext/structmemberdefs.py b/pypy/module/cpyext/structmemberdefs.py --- a/pypy/module/cpyext/structmemberdefs.py +++ b/pypy/module/cpyext/structmemberdefs.py @@ -1,3 +1,5 @@ +# These constants are also in include/structmember.h + T_SHORT = 0 T_INT = 1 T_LONG = 2 @@ -18,3 +20,6 @@ T_ULONGLONG = 18 READONLY = RO = 1 +READ_RESTRICTED = 2 +WRITE_RESTRICTED = 4 +RESTRICTED = READ_RESTRICTED | WRITE_RESTRICTED diff --git a/pypy/module/cpyext/stubs.py b/pypy/module/cpyext/stubs.py --- a/pypy/module/cpyext/stubs.py +++ b/pypy/module/cpyext/stubs.py @@ -1,5 +1,5 @@ from pypy.module.cpyext.api import ( - cpython_api, PyObject, PyObjectP, CANNOT_FAIL, Py_buffer + cpython_api, PyObject, PyObjectP, CANNOT_FAIL ) from pypy.module.cpyext.complexobject import Py_complex_ptr as Py_complex from pypy.rpython.lltypesystem import rffi, lltype @@ -10,6 +10,7 @@ PyMethodDef = rffi.VOIDP PyGetSetDef = rffi.VOIDP PyMemberDef = rffi.VOIDP +Py_buffer = rffi.VOIDP va_list = rffi.VOIDP PyDateTime_Date = rffi.VOIDP PyDateTime_DateTime = rffi.VOIDP @@ -32,10 +33,6 @@ def _PyObject_Del(space, op): raise NotImplementedError - at cpython_api([PyObject], rffi.INT_real, error=CANNOT_FAIL) -def PyObject_CheckBuffer(space, obj): - raise NotImplementedError - @cpython_api([rffi.CCHARP], Py_ssize_t, error=CANNOT_FAIL) def PyBuffer_SizeFromFormat(space, format): """Return the implied ~Py_buffer.itemsize from the struct-stype @@ -684,28 +681,6 @@ """ raise NotImplementedError - at cpython_api([PyObject, rffi.INT_real], rffi.INT_real, error=CANNOT_FAIL) -def PyFile_SoftSpace(space, p, newflag): - """ - This function exists for internal use by the interpreter. Set the - softspace attribute of p to newflag and return the previous value. - p does not have to be a file object for this function to work properly; any - object is supported (thought its only interesting if the softspace - attribute can be set). This function clears any errors, and will return 0 - as the previous value if the attribute either does not exist or if there were - errors in retrieving it. There is no way to detect errors from this function, - but doing so should not be needed.""" - raise NotImplementedError - - at cpython_api([PyObject, PyObject, rffi.INT_real], rffi.INT_real, error=-1) -def PyFile_WriteObject(space, obj, p, flags): - """ - Write object obj to file object p. The only supported flag for flags is - Py_PRINT_RAW; if given, the str() of the object is written - instead of the repr(). Return 0 on success or -1 on failure; the - appropriate exception will be set.""" - raise NotImplementedError - @cpython_api([], PyObject) def PyFloat_GetInfo(space): """Return a structseq instance which contains information about the @@ -1097,19 +1072,6 @@ raise NotImplementedError @cpython_api([], rffi.CCHARP) -def Py_GetVersion(space): - """Return the version of this Python interpreter. This is a string that looks - something like - - "1.5 (\#67, Dec 31 1997, 22:34:28) [GCC 2.7.2.2]" - - The first word (up to the first space character) is the current Python version; - the first three characters are the major and minor version separated by a - period. 
The returned string points into static storage; the caller should not - modify its value. The value is available to Python code as sys.version.""" - raise NotImplementedError - - at cpython_api([], rffi.CCHARP) def Py_GetPlatform(space): """Return the platform identifier for the current platform. On Unix, this is formed from the"official" name of the operating system, converted to lower @@ -1685,15 +1647,6 @@ """ raise NotImplementedError - at cpython_api([PyObject], PyObject) -def PyObject_Dir(space, o): - """This is equivalent to the Python expression dir(o), returning a (possibly - empty) list of strings appropriate for the object argument, or NULL if there - was an error. If the argument is NULL, this is like the Python dir(), - returning the names of the current locals; in this case, if no execution frame - is active then NULL is returned but PyErr_Occurred() will return false.""" - raise NotImplementedError - @cpython_api([], PyFrameObject) def PyEval_GetFrame(space): """Return the current thread state's frame, which is NULL if no frame is @@ -1815,21 +1768,6 @@ """Empty an existing set of all elements.""" raise NotImplementedError - at cpython_api([PyObjectP], lltype.Void) -def PyString_InternInPlace(space, string): - """Intern the argument *string in place. The argument must be the address of a - pointer variable pointing to a Python string object. If there is an existing - interned string that is the same as *string, it sets *string to it - (decrementing the reference count of the old string object and incrementing the - reference count of the interned string object), otherwise it leaves *string - alone and interns it (incrementing its reference count). (Clarification: even - though there is a lot of talk about reference counts, think of this function as - reference-count-neutral; you own the object after the call if and only if you - owned it before the call.) 
- - This function is not available in 3.x and does not have a PyBytes alias.""" - raise NotImplementedError - @cpython_api([rffi.CCHARP, Py_ssize_t, rffi.CCHARP, rffi.CCHARP], PyObject) def PyString_Decode(space, s, size, encoding, errors): """Create an object by decoding size bytes of the encoded buffer s using the diff --git a/pypy/module/cpyext/test/test_classobject.py b/pypy/module/cpyext/test/test_classobject.py --- a/pypy/module/cpyext/test/test_classobject.py +++ b/pypy/module/cpyext/test/test_classobject.py @@ -1,4 +1,5 @@ from pypy.module.cpyext.test.test_api import BaseApiTest +from pypy.module.cpyext.test.test_cpyext import AppTestCpythonExtensionBase from pypy.interpreter.function import Function, Method class TestClassObject(BaseApiTest): @@ -51,3 +52,14 @@ assert api.PyInstance_Check(w_instance) assert space.is_true(space.call_method(space.builtin, "isinstance", w_instance, w_class)) + +class AppTestStringObject(AppTestCpythonExtensionBase): + def test_class_type(self): + module = self.import_extension('foo', [ + ("get_classtype", "METH_NOARGS", + """ + Py_INCREF(&PyClass_Type); + return &PyClass_Type; + """)]) + class C: pass + assert module.get_classtype() is type(C) diff --git a/pypy/module/cpyext/test/test_cpyext.py b/pypy/module/cpyext/test/test_cpyext.py --- a/pypy/module/cpyext/test/test_cpyext.py +++ b/pypy/module/cpyext/test/test_cpyext.py @@ -744,6 +744,22 @@ print p assert 'py' in p + def test_get_version(self): + mod = self.import_extension('foo', [ + ('get_version', 'METH_NOARGS', + ''' + char* name1 = Py_GetVersion(); + char* name2 = Py_GetVersion(); + if (name1 != name2) + Py_RETURN_FALSE; + return PyString_FromString(name1); + ''' + ), + ]) + p = mod.get_version() + print p + assert 'PyPy' in p + def test_no_double_imports(self): import sys, os try: diff --git a/pypy/module/cpyext/test/test_methodobject.py b/pypy/module/cpyext/test/test_methodobject.py --- a/pypy/module/cpyext/test/test_methodobject.py +++ b/pypy/module/cpyext/test/test_methodobject.py @@ -9,7 +9,7 @@ class AppTestMethodObject(AppTestCpythonExtensionBase): def test_call_METH(self): - mod = self.import_extension('foo', [ + mod = self.import_extension('MyModule', [ ('getarg_O', 'METH_O', ''' Py_INCREF(args); @@ -51,11 +51,23 @@ } ''' ), + ('getModule', 'METH_O', + ''' + if(PyCFunction_Check(args)) { + PyCFunctionObject* func = (PyCFunctionObject*)args; + Py_INCREF(func->m_module); + return func->m_module; + } + else { + Py_RETURN_FALSE; + } + ''' + ), ('isSameFunction', 'METH_O', ''' PyCFunction ptr = PyCFunction_GetFunction(args); if (!ptr) return NULL; - if (ptr == foo_getarg_O) + if (ptr == MyModule_getarg_O) Py_RETURN_TRUE; else Py_RETURN_FALSE; @@ -76,6 +88,7 @@ assert mod.getarg_OLD(1, 2) == (1, 2) assert mod.isCFunction(mod.getarg_O) == "getarg_O" + assert mod.getModule(mod.getarg_O) == 'MyModule' assert mod.isSameFunction(mod.getarg_O) raises(TypeError, mod.isSameFunction, 1) diff --git a/pypy/module/cpyext/test/test_object.py b/pypy/module/cpyext/test/test_object.py --- a/pypy/module/cpyext/test/test_object.py +++ b/pypy/module/cpyext/test/test_object.py @@ -191,6 +191,11 @@ assert api.PyObject_Unicode(space.wrap("\xe9")) is None api.PyErr_Clear() + def test_dir(self, space, api): + w_dir = api.PyObject_Dir(space.sys) + assert space.isinstance_w(w_dir, space.w_list) + assert space.is_true(space.contains(w_dir, space.wrap('modules'))) + class AppTestObject(AppTestCpythonExtensionBase): def setup_class(cls): AppTestCpythonExtensionBase.setup_class.im_func(cls) diff --git 
a/pypy/module/cpyext/test/test_pyfile.py b/pypy/module/cpyext/test/test_pyfile.py --- a/pypy/module/cpyext/test/test_pyfile.py +++ b/pypy/module/cpyext/test/test_pyfile.py @@ -1,5 +1,6 @@ from pypy.module.cpyext.api import fopen, fclose, fwrite from pypy.module.cpyext.test.test_api import BaseApiTest +from pypy.module.cpyext.object import Py_PRINT_RAW from pypy.rpython.lltypesystem import rffi, lltype from pypy.tool.udir import udir import pytest @@ -77,3 +78,28 @@ out = out.replace('\r\n', '\n') assert out == "test\n" + def test_file_writeobject(self, space, api, capfd): + w_obj = space.wrap("test\n") + w_stdout = space.sys.get("stdout") + api.PyFile_WriteObject(w_obj, w_stdout, Py_PRINT_RAW) + api.PyFile_WriteObject(w_obj, w_stdout, 0) + space.call_method(w_stdout, "flush") + out, err = capfd.readouterr() + out = out.replace('\r\n', '\n') + assert out == "test\n'test\\n'" + + def test_file_softspace(self, space, api, capfd): + w_stdout = space.sys.get("stdout") + assert api.PyFile_SoftSpace(w_stdout, 1) == 0 + assert api.PyFile_SoftSpace(w_stdout, 0) == 1 + + api.PyFile_SoftSpace(w_stdout, 1) + w_ns = space.newdict() + space.exec_("print 1,", w_ns, w_ns) + space.exec_("print 2,", w_ns, w_ns) + api.PyFile_SoftSpace(w_stdout, 0) + space.exec_("print 3", w_ns, w_ns) + space.call_method(w_stdout, "flush") + out, err = capfd.readouterr() + out = out.replace('\r\n', '\n') + assert out == " 1 23\n" diff --git a/pypy/module/cpyext/test/test_stringobject.py b/pypy/module/cpyext/test/test_stringobject.py --- a/pypy/module/cpyext/test/test_stringobject.py +++ b/pypy/module/cpyext/test/test_stringobject.py @@ -166,6 +166,20 @@ res = module.test_string_format(1, "xyz") assert res == "bla 1 ble xyz\n" + def test_intern_inplace(self): + module = self.import_extension('foo', [ + ("test_intern_inplace", "METH_O", + ''' + PyObject *s = args; + Py_INCREF(s); + PyString_InternInPlace(&s); + return s; + ''' + ) + ]) + # This does not test much, but at least the refcounts are checked. + assert module.test_intern_inplace('s') == 's' + class TestString(BaseApiTest): def test_string_resize(self, space, api): py_str = new_empty_str(space, 10) diff --git a/pypy/module/micronumpy/__init__.py b/pypy/module/micronumpy/__init__.py --- a/pypy/module/micronumpy/__init__.py +++ b/pypy/module/micronumpy/__init__.py @@ -31,7 +31,7 @@ 'concatenate': 'interp_numarray.concatenate', 'set_string_function': 'appbridge.set_string_function', - + 'count_reduce_items': 'interp_numarray.count_reduce_items', 'True_': 'types.Bool.True', @@ -95,6 +95,7 @@ ("tan", "tan"), ('bitwise_and', 'bitwise_and'), ('bitwise_or', 'bitwise_or'), + ('bitwise_xor', 'bitwise_xor'), ('bitwise_not', 'invert'), ('isnan', 'isnan'), ('isinf', 'isinf'), @@ -111,8 +112,5 @@ 'min': 'app_numpy.min', 'identity': 'app_numpy.identity', 'max': 'app_numpy.max', - 'inf': 'app_numpy.inf', - 'e': 'app_numpy.e', - 'pi': 'app_numpy.pi', 'arange': 'app_numpy.arange', } diff --git a/pypy/module/micronumpy/app_numpy.py b/pypy/module/micronumpy/app_numpy.py --- a/pypy/module/micronumpy/app_numpy.py +++ b/pypy/module/micronumpy/app_numpy.py @@ -3,11 +3,6 @@ import _numpypy -inf = float("inf") -e = math.e -pi = math.pi - - def average(a): # This implements a weighted average, for now we don't implement the # weighting, just the average part! @@ -59,7 +54,7 @@ if not hasattr(a, "max"): a = _numpypy.array(a) return a.max(axis) - + def arange(start, stop=None, step=1, dtype=None): '''arange([start], stop[, step], dtype=None) Generate values in the half-interval [start, stop). 
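PyString_InternInPlace, added in the stringobject.py hunk above and exercised by the new test_intern_inplace test, is the C-level counterpart of the app-level intern() builtin. A small illustrative sketch of the semantics it provides (plain Python 2, not taken from the changeset):

    a = ''.join(['foo', 'bar'])
    b = ''.join(['foo', 'bar'])
    a = intern(a)              # roughly what PyString_InternInPlace(&a) does in C
    b = intern(b)
    print a is b               # True: both names now refer to the single interned copy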
diff --git a/pypy/module/micronumpy/interp_boxes.py b/pypy/module/micronumpy/interp_boxes.py --- a/pypy/module/micronumpy/interp_boxes.py +++ b/pypy/module/micronumpy/interp_boxes.py @@ -83,6 +83,8 @@ descr_truediv = _binop_impl("true_divide") descr_mod = _binop_impl("mod") descr_pow = _binop_impl("power") + descr_lshift = _binop_impl("left_shift") + descr_rshift = _binop_impl("right_shift") descr_and = _binop_impl("bitwise_and") descr_or = _binop_impl("bitwise_or") descr_xor = _binop_impl("bitwise_xor") @@ -97,13 +99,31 @@ descr_radd = _binop_right_impl("add") descr_rsub = _binop_right_impl("subtract") descr_rmul = _binop_right_impl("multiply") + descr_rdiv = _binop_right_impl("divide") + descr_rtruediv = _binop_right_impl("true_divide") + descr_rmod = _binop_right_impl("mod") descr_rpow = _binop_right_impl("power") + descr_rlshift = _binop_right_impl("left_shift") + descr_rrshift = _binop_right_impl("right_shift") + descr_rand = _binop_right_impl("bitwise_and") + descr_ror = _binop_right_impl("bitwise_or") + descr_rxor = _binop_right_impl("bitwise_xor") descr_pos = _unaryop_impl("positive") descr_neg = _unaryop_impl("negative") descr_abs = _unaryop_impl("absolute") descr_invert = _unaryop_impl("invert") + def descr_divmod(self, space, w_other): + w_quotient = self.descr_div(space, w_other) + w_remainder = self.descr_mod(space, w_other) + return space.newtuple([w_quotient, w_remainder]) + + def descr_rdivmod(self, space, w_other): + w_quotient = self.descr_rdiv(space, w_other) + w_remainder = self.descr_rmod(space, w_other) + return space.newtuple([w_quotient, w_remainder]) + def item(self, space): return self.get_dtype(space).itemtype.to_builtin_type(space, self) @@ -185,7 +205,10 @@ __div__ = interp2app(W_GenericBox.descr_div), __truediv__ = interp2app(W_GenericBox.descr_truediv), __mod__ = interp2app(W_GenericBox.descr_mod), + __divmod__ = interp2app(W_GenericBox.descr_divmod), __pow__ = interp2app(W_GenericBox.descr_pow), + __lshift__ = interp2app(W_GenericBox.descr_lshift), + __rshift__ = interp2app(W_GenericBox.descr_rshift), __and__ = interp2app(W_GenericBox.descr_and), __or__ = interp2app(W_GenericBox.descr_or), __xor__ = interp2app(W_GenericBox.descr_xor), @@ -193,7 +216,16 @@ __radd__ = interp2app(W_GenericBox.descr_radd), __rsub__ = interp2app(W_GenericBox.descr_rsub), __rmul__ = interp2app(W_GenericBox.descr_rmul), + __rdiv__ = interp2app(W_GenericBox.descr_rdiv), + __rtruediv__ = interp2app(W_GenericBox.descr_rtruediv), + __rmod__ = interp2app(W_GenericBox.descr_rmod), + __rdivmod__ = interp2app(W_GenericBox.descr_rdivmod), __rpow__ = interp2app(W_GenericBox.descr_rpow), + __rlshift__ = interp2app(W_GenericBox.descr_rlshift), + __rrshift__ = interp2app(W_GenericBox.descr_rrshift), + __rand__ = interp2app(W_GenericBox.descr_rand), + __ror__ = interp2app(W_GenericBox.descr_ror), + __rxor__ = interp2app(W_GenericBox.descr_rxor), __eq__ = interp2app(W_GenericBox.descr_eq), __ne__ = interp2app(W_GenericBox.descr_ne), diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -4,17 +4,17 @@ from pypy.interpreter.typedef import TypeDef, GetSetProperty from pypy.module.micronumpy import (interp_ufuncs, interp_dtype, interp_boxes, signature, support, loop) +from pypy.module.micronumpy.appbridge import get_appbridge_cache +from pypy.module.micronumpy.dot import multidim_dot, match_dot_shapes +from pypy.module.micronumpy.interp_iter import 
(ArrayIterator, + SkipLastAxisIterator, Chunk, ViewIterator) from pypy.module.micronumpy.strides import (calculate_slice_strides, shape_agreement, find_shape_and_elems, get_shape_from_iterable, calc_new_strides, to_coords) -from dot import multidim_dot, match_dot_shapes from pypy.rlib import jit +from pypy.rlib.rstring import StringBuilder from pypy.rpython.lltypesystem import lltype, rffi from pypy.tool.sourcetools import func_with_new_name -from pypy.rlib.rstring import StringBuilder -from pypy.module.micronumpy.interp_iter import (ArrayIterator, - SkipLastAxisIterator, Chunk, ViewIterator) -from pypy.module.micronumpy.appbridge import get_appbridge_cache count_driver = jit.JitDriver( @@ -101,8 +101,14 @@ descr_sub = _binop_impl("subtract") descr_mul = _binop_impl("multiply") descr_div = _binop_impl("divide") + descr_truediv = _binop_impl("true_divide") + descr_mod = _binop_impl("mod") descr_pow = _binop_impl("power") - descr_mod = _binop_impl("mod") + descr_lshift = _binop_impl("left_shift") + descr_rshift = _binop_impl("right_shift") + descr_and = _binop_impl("bitwise_and") + descr_or = _binop_impl("bitwise_or") + descr_xor = _binop_impl("bitwise_xor") descr_eq = _binop_impl("equal") descr_ne = _binop_impl("not_equal") @@ -111,8 +117,10 @@ descr_gt = _binop_impl("greater") descr_ge = _binop_impl("greater_equal") - descr_and = _binop_impl("bitwise_and") - descr_or = _binop_impl("bitwise_or") + def descr_divmod(self, space, w_other): + w_quotient = self.descr_div(space, w_other) + w_remainder = self.descr_mod(space, w_other) + return space.newtuple([w_quotient, w_remainder]) def _binop_right_impl(ufunc_name): def impl(self, space, w_other): @@ -127,8 +135,19 @@ descr_rsub = _binop_right_impl("subtract") descr_rmul = _binop_right_impl("multiply") descr_rdiv = _binop_right_impl("divide") + descr_rtruediv = _binop_right_impl("true_divide") + descr_rmod = _binop_right_impl("mod") descr_rpow = _binop_right_impl("power") - descr_rmod = _binop_right_impl("mod") + descr_rlshift = _binop_right_impl("left_shift") + descr_rrshift = _binop_right_impl("right_shift") + descr_rand = _binop_right_impl("bitwise_and") + descr_ror = _binop_right_impl("bitwise_or") + descr_rxor = _binop_right_impl("bitwise_xor") + + def descr_rdivmod(self, space, w_other): + w_quotient = self.descr_rdiv(space, w_other) + w_remainder = self.descr_rmod(space, w_other) + return space.newtuple([w_quotient, w_remainder]) def _reduce_ufunc_impl(ufunc_name, promote_to_largest=False): def impl(self, space, w_axis=None): @@ -1227,21 +1246,36 @@ __pos__ = interp2app(BaseArray.descr_pos), __neg__ = interp2app(BaseArray.descr_neg), __abs__ = interp2app(BaseArray.descr_abs), + __invert__ = interp2app(BaseArray.descr_invert), __nonzero__ = interp2app(BaseArray.descr_nonzero), __add__ = interp2app(BaseArray.descr_add), __sub__ = interp2app(BaseArray.descr_sub), __mul__ = interp2app(BaseArray.descr_mul), __div__ = interp2app(BaseArray.descr_div), + __truediv__ = interp2app(BaseArray.descr_truediv), + __mod__ = interp2app(BaseArray.descr_mod), + __divmod__ = interp2app(BaseArray.descr_divmod), __pow__ = interp2app(BaseArray.descr_pow), - __mod__ = interp2app(BaseArray.descr_mod), + __lshift__ = interp2app(BaseArray.descr_lshift), + __rshift__ = interp2app(BaseArray.descr_rshift), + __and__ = interp2app(BaseArray.descr_and), + __or__ = interp2app(BaseArray.descr_or), + __xor__ = interp2app(BaseArray.descr_xor), __radd__ = interp2app(BaseArray.descr_radd), __rsub__ = interp2app(BaseArray.descr_rsub), __rmul__ = 
interp2app(BaseArray.descr_rmul), __rdiv__ = interp2app(BaseArray.descr_rdiv), + __rtruediv__ = interp2app(BaseArray.descr_rtruediv), + __rmod__ = interp2app(BaseArray.descr_rmod), + __rdivmod__ = interp2app(BaseArray.descr_rdivmod), __rpow__ = interp2app(BaseArray.descr_rpow), - __rmod__ = interp2app(BaseArray.descr_rmod), + __rlshift__ = interp2app(BaseArray.descr_rlshift), + __rrshift__ = interp2app(BaseArray.descr_rrshift), + __rand__ = interp2app(BaseArray.descr_rand), + __ror__ = interp2app(BaseArray.descr_ror), + __rxor__ = interp2app(BaseArray.descr_rxor), __eq__ = interp2app(BaseArray.descr_eq), __ne__ = interp2app(BaseArray.descr_ne), @@ -1250,10 +1284,6 @@ __gt__ = interp2app(BaseArray.descr_gt), __ge__ = interp2app(BaseArray.descr_ge), - __and__ = interp2app(BaseArray.descr_and), - __or__ = interp2app(BaseArray.descr_or), - __invert__ = interp2app(BaseArray.descr_invert), - __repr__ = interp2app(BaseArray.descr_repr), __str__ = interp2app(BaseArray.descr_str), __array_interface__ = GetSetProperty(BaseArray.descr_array_iface), @@ -1267,6 +1297,7 @@ nbytes = GetSetProperty(BaseArray.descr_get_nbytes), T = GetSetProperty(BaseArray.descr_get_transpose), + transpose = interp2app(BaseArray.descr_get_transpose), flat = GetSetProperty(BaseArray.descr_get_flatiter), ravel = interp2app(BaseArray.descr_ravel), item = interp2app(BaseArray.descr_item), diff --git a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py --- a/pypy/module/micronumpy/interp_ufuncs.py +++ b/pypy/module/micronumpy/interp_ufuncs.py @@ -392,6 +392,8 @@ ("true_divide", "div", 2, {"promote_to_float": True}), ("mod", "mod", 2, {"promote_bools": True}), ("power", "pow", 2, {"promote_bools": True}), + ("left_shift", "lshift", 2, {"int_only": True}), + ("right_shift", "rshift", 2, {"int_only": True}), ("equal", "eq", 2, {"comparison_func": True}), ("not_equal", "ne", 2, {"comparison_func": True}), diff --git a/pypy/module/micronumpy/test/test_dtypes.py b/pypy/module/micronumpy/test/test_dtypes.py --- a/pypy/module/micronumpy/test/test_dtypes.py +++ b/pypy/module/micronumpy/test/test_dtypes.py @@ -406,15 +406,28 @@ from operator import truediv from _numpypy import float64, int_, True_, False_ + assert 5 / int_(2) == int_(2) assert truediv(int_(3), int_(2)) == float64(1.5) + assert truediv(3, int_(2)) == float64(1.5) + assert int_(8) % int_(3) == int_(2) + assert 8 % int_(3) == int_(2) + assert divmod(int_(8), int_(3)) == (int_(2), int_(2)) + assert divmod(8, int_(3)) == (int_(2), int_(2)) assert 2 ** int_(3) == int_(8) + assert int_(3) << int_(2) == int_(12) + assert 3 << int_(2) == int_(12) + assert int_(8) >> int_(2) == int_(2) + assert 8 >> int_(2) == int_(2) assert int_(3) & int_(1) == int_(1) - raises(TypeError, lambda: float64(3) & 1) - assert int_(8) % int_(3) == int_(2) + assert 2 & int_(3) == int_(2) assert int_(2) | int_(1) == int_(3) + assert 2 | int_(1) == int_(3) assert int_(3) ^ int_(5) == int_(6) assert True_ ^ False_ is True_ + assert 5 ^ int_(3) == int_(6) assert +int_(3) == int_(3) assert ~int_(3) == int_(-4) + raises(TypeError, lambda: float64(3) & 1) + diff --git a/pypy/module/micronumpy/test/test_module.py b/pypy/module/micronumpy/test/test_module.py --- a/pypy/module/micronumpy/test/test_module.py +++ b/pypy/module/micronumpy/test/test_module.py @@ -21,13 +21,3 @@ from _numpypy import array, max assert max(range(10)) == 9 assert max(array(range(10))) == 9 - - def test_constants(self): - import math - from _numpypy import inf, e, pi - assert type(inf) is float - assert inf 
== float("inf") - assert e == math.e - assert type(e) is float - assert pi == math.pi - assert type(pi) is float diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -579,7 +579,7 @@ def test_div(self): from math import isnan - from _numpypy import array, dtype, inf + from _numpypy import array, dtype a = array(range(1, 6)) b = a / a @@ -600,15 +600,15 @@ a = array([-1.0, 0.0, 1.0]) b = array([0.0, 0.0, 0.0]) c = a / b - assert c[0] == -inf + assert c[0] == float('-inf') assert isnan(c[1]) - assert c[2] == inf + assert c[2] == float('inf') b = array([-0.0, -0.0, -0.0]) c = a / b - assert c[0] == inf + assert c[0] == float('inf') assert isnan(c[1]) - assert c[2] == -inf + assert c[2] == float('-inf') def test_div_other(self): from _numpypy import array @@ -625,6 +625,59 @@ for i in range(5): assert b[i] == i / 5.0 + def test_truediv(self): + from operator import truediv + from _numpypy import arange + + assert (truediv(arange(5), 2) == [0., .5, 1., 1.5, 2.]).all() + assert (truediv(2, arange(3)) == [float("inf"), 2., 1.]).all() + + def test_divmod(self): + from _numpypy import arange + + a, b = divmod(arange(10), 3) + assert (a == [0, 0, 0, 1, 1, 1, 2, 2, 2, 3]).all() + assert (b == [0, 1, 2, 0, 1, 2, 0, 1, 2, 0]).all() + + def test_rdivmod(self): + from _numpypy import arange + + a, b = divmod(3, arange(1, 5)) + assert (a == [3, 1, 1, 0]).all() + assert (b == [0, 1, 0, 3]).all() + + def test_lshift(self): + from _numpypy import array + + a = array([0, 1, 2, 3]) + assert (a << 2 == [0, 4, 8, 12]).all() + a = array([True, False]) + assert (a << 2 == [4, 0]).all() + a = array([1.0]) + raises(TypeError, lambda: a << 2) + + def test_rlshift(self): + from _numpypy import arange + + a = arange(3) + assert (2 << a == [2, 4, 8]).all() + + def test_rshift(self): + from _numpypy import arange, array + + a = arange(10) + assert (a >> 2 == [0, 0, 0, 0, 1, 1, 1, 1, 2, 2]).all() + a = array([True, False]) + assert (a >> 1 == [0, 0]).all() + a = arange(3, dtype=float) + raises(TypeError, lambda: a >> 1) + + def test_rrshift(self): + from _numpypy import arange + + a = arange(5) + assert (2 >> a == [2, 1, 0, 0, 0]).all() + def test_pow(self): from _numpypy import array a = array(range(5), float) @@ -678,6 +731,30 @@ for i in range(5): assert b[i] == i % 2 + def test_rand(self): + from _numpypy import arange + + a = arange(5) + assert (3 & a == [0, 1, 2, 3, 0]).all() + + def test_ror(self): + from _numpypy import arange + + a = arange(5) + assert (3 | a == [3, 3, 3, 3, 7]).all() + + def test_xor(self): + from _numpypy import arange + + a = arange(5) + assert (a ^ 3 == [3, 2, 1, 0, 7]).all() + + def test_rxor(self): + from _numpypy import arange + + a = arange(5) + assert (3 ^ a == [3, 2, 1, 0, 7]).all() + def test_pos(self): from _numpypy import array a = array([1., -2., 3., -4., -5.]) @@ -1410,6 +1487,7 @@ a = array((range(10), range(20, 30))) b = a.T assert(b[:, 0] == a[0, :]).all() + assert (a.transpose() == b).all() def test_flatiter(self): from _numpypy import array, flatiter, arange diff --git a/pypy/module/micronumpy/test/test_ufuncs.py b/pypy/module/micronumpy/test/test_ufuncs.py --- a/pypy/module/micronumpy/test/test_ufuncs.py +++ b/pypy/module/micronumpy/test/test_ufuncs.py @@ -312,9 +312,9 @@ def test_arcsinh(self): import math - from _numpypy import arcsinh, inf + from _numpypy import arcsinh - for v in [inf, -inf, 1.0, math.e]: + for v in 
[float('inf'), float('-inf'), 1.0, math.e]: assert math.asinh(v) == arcsinh(v) assert math.isnan(arcsinh(float("nan"))) @@ -367,15 +367,15 @@ b = add.reduce(a, 0, keepdims=True) assert b.shape == (1, 4) assert (add.reduce(a, 0, keepdims=True) == [12, 15, 18, 21]).all() - def test_bitwise(self): - from _numpypy import bitwise_and, bitwise_or, arange, array + from _numpypy import bitwise_and, bitwise_or, bitwise_xor, arange, array a = arange(6).reshape(2, 3) assert (a & 1 == [[0, 1, 0], [1, 0, 1]]).all() assert (a & 1 == bitwise_and(a, 1)).all() assert (a | 1 == [[1, 1, 3], [3, 5, 5]]).all() assert (a | 1 == bitwise_or(a, 1)).all() + assert (a ^ 3 == bitwise_xor(a, 3)).all() raises(TypeError, 'array([1.0]) & 1') def test_unary_bitops(self): @@ -416,7 +416,7 @@ assert count_reduce_items(a) == 24 assert count_reduce_items(a, 1) == 3 assert count_reduce_items(a, (1, 2)) == 3 * 4 - + def test_true_divide(self): from _numpypy import arange, array, true_divide assert (true_divide(arange(3), array([2, 2, 2])) == array([0, 0.5, 1])).all() diff --git a/pypy/module/micronumpy/types.py b/pypy/module/micronumpy/types.py --- a/pypy/module/micronumpy/types.py +++ b/pypy/module/micronumpy/types.py @@ -295,6 +295,14 @@ v1 *= v1 return res + @simple_binary_op + def lshift(self, v1, v2): + return v1 << v2 + + @simple_binary_op + def rshift(self, v1, v2): + return v1 >> v2 + @simple_unary_op def sign(self, v): if v > 0: diff --git a/pypy/module/pypyjit/__init__.py b/pypy/module/pypyjit/__init__.py --- a/pypy/module/pypyjit/__init__.py +++ b/pypy/module/pypyjit/__init__.py @@ -13,6 +13,7 @@ 'ResOperation': 'interp_resop.WrappedOp', 'DebugMergePoint': 'interp_resop.DebugMergePoint', 'Box': 'interp_resop.WrappedBox', + 'PARAMETER_DOCS': 'space.wrap(pypy.rlib.jit.PARAMETER_DOCS)', } def setup_after_space_initialization(self): diff --git a/pypy/module/pypyjit/test/test_jit_setup.py b/pypy/module/pypyjit/test/test_jit_setup.py --- a/pypy/module/pypyjit/test/test_jit_setup.py +++ b/pypy/module/pypyjit/test/test_jit_setup.py @@ -45,6 +45,12 @@ pypyjit.set_compile_hook(None) pypyjit.set_param('default') + def test_doc(self): + import pypyjit + d = pypyjit.PARAMETER_DOCS + assert type(d) is dict + assert 'threshold' in d + def test_interface_residual_call(): space = gettestobjspace(usemodules=['pypyjit']) diff --git a/pypy/module/sys/version.py b/pypy/module/sys/version.py --- a/pypy/module/sys/version.py +++ b/pypy/module/sys/version.py @@ -7,7 +7,7 @@ from pypy.interpreter import gateway #XXX # the release serial 42 is not in range(16) -CPYTHON_VERSION = (2, 7, 1, "final", 42) #XXX # sync patchlevel.h +CPYTHON_VERSION = (2, 7, 2, "final", 42) #XXX # sync patchlevel.h CPYTHON_API_VERSION = 1013 #XXX # sync with include/modsupport.h PYPY_VERSION = (1, 8, 1, "dev", 0) #XXX # sync patchlevel.h diff --git a/pypy/module/test_lib_pypy/test_datetime.py b/pypy/module/test_lib_pypy/test_datetime.py --- a/pypy/module/test_lib_pypy/test_datetime.py +++ b/pypy/module/test_lib_pypy/test_datetime.py @@ -1,7 +1,10 @@ """Additional tests for datetime.""" +import py + import time import datetime +import copy import os def test_utcfromtimestamp(): @@ -22,3 +25,22 @@ del os.environ["TZ"] else: os.environ["TZ"] = prev_tz + +def test_utcfromtimestamp_microsecond(): + dt = datetime.datetime.utcfromtimestamp(0) + assert isinstance(dt.microsecond, int) + + +def test_integer_args(): + with py.test.raises(TypeError): + datetime.datetime(10, 10, 10.) + with py.test.raises(TypeError): + datetime.datetime(10, 10, 10, 10, 10.) 
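The pypyjit hunk above exposes PARAMETER_DOCS to app level as a plain dict mapping parameter names to their help strings (the new test only guarantees that it is a dict containing a 'threshold' key). A sketch of how it can be inspected, meaningful only on a pypy binary built with the pypyjit module:

    import pypyjit

    print type(pypyjit.PARAMETER_DOCS)           # <type 'dict'>
    for name, doc in sorted(pypyjit.PARAMETER_DOCS.items()):
        print '%-15s %s' % (name, doc)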
+ with py.test.raises(TypeError): + datetime.datetime(10, 10, 10, 10, 10, 10.) + +def test_utcnow_microsecond(): + dt = datetime.datetime.utcnow() + assert type(dt.microsecond) is int + + copy.copy(dt) \ No newline at end of file diff --git a/pypy/tool/release/package.py b/pypy/tool/release/package.py --- a/pypy/tool/release/package.py +++ b/pypy/tool/release/package.py @@ -60,7 +60,8 @@ if sys.platform == 'win32': # Can't rename a DLL: it is always called 'libpypy-c.dll' for extra in ['libpypy-c.dll', - 'libexpat.dll', 'sqlite3.dll', 'msvcr90.dll']: + 'libexpat.dll', 'sqlite3.dll', 'msvcr90.dll', + 'libeay32.dll', 'ssleay32.dll']: p = pypy_c.dirpath().join(extra) if not p.check(): p = py.path.local.sysfind(extra) @@ -125,7 +126,7 @@ zf.close() else: archive = str(builddir.join(name + '.tar.bz2')) - if sys.platform == 'darwin': + if sys.platform == 'darwin' or sys.platform.startswith('freebsd'): e = os.system('tar --numeric-owner -cvjf ' + archive + " " + name) else: e = os.system('tar --owner=root --group=root --numeric-owner -cvjf ' + archive + " " + name) diff --git a/pypy/translator/c/database.py b/pypy/translator/c/database.py --- a/pypy/translator/c/database.py +++ b/pypy/translator/c/database.py @@ -28,11 +28,13 @@ gctransformer = None def __init__(self, translator=None, standalone=False, + cpython_extension=False, gcpolicyclass=None, thread_enabled=False, sandbox=False): self.translator = translator self.standalone = standalone + self.cpython_extension = cpython_extension self.sandbox = sandbox if gcpolicyclass is None: gcpolicyclass = gc.RefcountingGcPolicy diff --git a/pypy/translator/c/dlltool.py b/pypy/translator/c/dlltool.py --- a/pypy/translator/c/dlltool.py +++ b/pypy/translator/c/dlltool.py @@ -14,11 +14,14 @@ CBuilder.__init__(self, *args, **kwds) def getentrypointptr(self): + entrypoints = [] bk = self.translator.annotator.bookkeeper - graphs = [bk.getdesc(f).cachedgraph(None) for f, _ in self.functions] - return [getfunctionptr(graph) for graph in graphs] + for f, _ in self.functions: + graph = bk.getdesc(f).getuniquegraph() + entrypoints.append(getfunctionptr(graph)) + return entrypoints - def gen_makefile(self, targetdir): + def gen_makefile(self, targetdir, exe_name=None): pass # XXX finish def compile(self): diff --git a/pypy/translator/c/extfunc.py b/pypy/translator/c/extfunc.py --- a/pypy/translator/c/extfunc.py +++ b/pypy/translator/c/extfunc.py @@ -106,7 +106,7 @@ yield ('RPYTHON_EXCEPTION_MATCH', exceptiondata.fn_exception_match) yield ('RPYTHON_TYPE_OF_EXC_INST', exceptiondata.fn_type_of_exc_inst) yield ('RPYTHON_RAISE_OSERROR', exceptiondata.fn_raise_OSError) - if not db.standalone: + if db.cpython_extension: yield ('RPYTHON_PYEXCCLASS2EXC', exceptiondata.fn_pyexcclass2exc) yield ('RPyExceptionOccurred1', exctransformer.rpyexc_occured_ptr.value) diff --git a/pypy/translator/c/gcc/trackgcroot.py b/pypy/translator/c/gcc/trackgcroot.py --- a/pypy/translator/c/gcc/trackgcroot.py +++ b/pypy/translator/c/gcc/trackgcroot.py @@ -478,6 +478,7 @@ 'cvt', 'ucomi', 'comi', 'subs', 'subp' , 'adds', 'addp', 'xorp', 'movap', 'movd', 'movlp', 'sqrtsd', 'movhpd', 'mins', 'minp', 'maxs', 'maxp', 'unpck', 'pxor', 'por', # sse2 + 'shufps', 'shufpd', # arithmetic operations should not produce GC pointers 'inc', 'dec', 'not', 'neg', 'or', 'and', 'sbb', 'adc', 'shl', 'shr', 'sal', 'sar', 'rol', 'ror', 'mul', 'imul', 'div', 'idiv', diff --git a/pypy/translator/c/genc.py b/pypy/translator/c/genc.py --- a/pypy/translator/c/genc.py +++ b/pypy/translator/c/genc.py @@ -111,6 +111,7 @@ 
_compiled = False modulename = None split = False + cpython_extension = False def __init__(self, translator, entrypoint, config, gcpolicy=None, secondary_entrypoints=()): @@ -138,6 +139,7 @@ raise NotImplementedError("--gcrootfinder=asmgcc requires standalone") db = LowLevelDatabase(translator, standalone=self.standalone, + cpython_extension=self.cpython_extension, gcpolicyclass=gcpolicyclass, thread_enabled=self.config.translation.thread, sandbox=self.config.translation.sandbox) @@ -236,6 +238,8 @@ CBuilder.have___thread = self.translator.platform.check___thread() if not self.standalone: assert not self.config.translation.instrument + if self.cpython_extension: + defines['PYPY_CPYTHON_EXTENSION'] = 1 else: defines['PYPY_STANDALONE'] = db.get(pf) if self.config.translation.instrument: @@ -307,13 +311,18 @@ class CExtModuleBuilder(CBuilder): standalone = False + cpython_extension = True _module = None _wrapper = None def get_eci(self): from distutils import sysconfig python_inc = sysconfig.get_python_inc() - eci = ExternalCompilationInfo(include_dirs=[python_inc]) + eci = ExternalCompilationInfo( + include_dirs=[python_inc], + includes=["Python.h", + ], + ) return eci.merge(CBuilder.get_eci(self)) def getentrypointptr(self): # xxx diff --git a/pypy/translator/c/src/exception.h b/pypy/translator/c/src/exception.h --- a/pypy/translator/c/src/exception.h +++ b/pypy/translator/c/src/exception.h @@ -2,7 +2,7 @@ /************************************************************/ /*** C header subsection: exceptions ***/ -#if !defined(PYPY_STANDALONE) && !defined(PYPY_NOT_MAIN_FILE) +#if defined(PYPY_CPYTHON_EXTENSION) && !defined(PYPY_NOT_MAIN_FILE) PyObject *RPythonError; #endif @@ -74,7 +74,7 @@ RPyRaiseException(RPYTHON_TYPE_OF_EXC_INST(rexc), rexc); } -#ifndef PYPY_STANDALONE +#ifdef PYPY_CPYTHON_EXTENSION void RPyConvertExceptionFromCPython(void) { /* convert the CPython exception to an RPython one */ diff --git a/pypy/translator/c/src/g_include.h b/pypy/translator/c/src/g_include.h --- a/pypy/translator/c/src/g_include.h +++ b/pypy/translator/c/src/g_include.h @@ -2,7 +2,7 @@ /************************************************************/ /*** C header file for code produced by genc.py ***/ -#ifndef PYPY_STANDALONE +#ifdef PYPY_CPYTHON_EXTENSION # include "Python.h" # include "compile.h" # include "frameobject.h" diff --git a/pypy/translator/c/src/g_prerequisite.h b/pypy/translator/c/src/g_prerequisite.h --- a/pypy/translator/c/src/g_prerequisite.h +++ b/pypy/translator/c/src/g_prerequisite.h @@ -5,8 +5,6 @@ #ifdef PYPY_STANDALONE # include "src/commondefs.h" -#else -# include "Python.h" #endif #ifdef _WIN32 diff --git a/pypy/translator/c/src/pyobj.h b/pypy/translator/c/src/pyobj.h --- a/pypy/translator/c/src/pyobj.h +++ b/pypy/translator/c/src/pyobj.h @@ -2,7 +2,7 @@ /************************************************************/ /*** C header subsection: untyped operations ***/ /*** as OP_XXX() macros calling the CPython API ***/ - +#ifdef PYPY_CPYTHON_EXTENSION #define op_bool(r,what) { \ int _retval = what; \ @@ -261,3 +261,5 @@ } #endif + +#endif /* PYPY_CPYTHON_EXTENSION */ diff --git a/pypy/translator/c/src/support.h b/pypy/translator/c/src/support.h --- a/pypy/translator/c/src/support.h +++ b/pypy/translator/c/src/support.h @@ -104,7 +104,7 @@ # define RPyBareItem(array, index) ((array)[index]) #endif -#ifndef PYPY_STANDALONE +#ifdef PYPY_CPYTHON_EXTENSION /* prototypes */ diff --git a/pypy/translator/c/test/test_dlltool.py b/pypy/translator/c/test/test_dlltool.py --- 
a/pypy/translator/c/test/test_dlltool.py +++ b/pypy/translator/c/test/test_dlltool.py @@ -2,7 +2,6 @@ from pypy.translator.c.dlltool import DLLDef from ctypes import CDLL import py -py.test.skip("fix this if needed") class TestDLLTool(object): def test_basic(self): @@ -16,8 +15,8 @@ d = DLLDef('lib', [(f, [int]), (b, [int])]) so = d.compile() dll = CDLL(str(so)) - assert dll.f(3) == 3 - assert dll.b(10) == 12 + assert dll.pypy_g_f(3) == 3 + assert dll.pypy_g_b(10) == 12 def test_split_criteria(self): def f(x): @@ -28,4 +27,5 @@ d = DLLDef('lib', [(f, [int]), (b, [int])]) so = d.compile() - assert py.path.local(so).dirpath().join('implement.c').check() + dirpath = py.path.local(so).dirpath() + assert dirpath.join('translator_c_test_test_dlltool.c').check() diff --git a/pypy/translator/driver.py b/pypy/translator/driver.py --- a/pypy/translator/driver.py +++ b/pypy/translator/driver.py @@ -331,6 +331,7 @@ raise Exception("stand-alone program entry point must return an " "int (and not, e.g., None or always raise an " "exception).") + annotator.complete() annotator.simplify() return s diff --git a/pypy/translator/goal/app_main.py b/pypy/translator/goal/app_main.py --- a/pypy/translator/goal/app_main.py +++ b/pypy/translator/goal/app_main.py @@ -139,8 +139,14 @@ items = pypyjit.defaults.items() items.sort() for key, value in items: - print ' --jit %s=N %s%s (default %s)' % ( - key, ' '*(18-len(key)), pypyjit.PARAMETER_DOCS[key], value) + prefix = ' --jit %s=N %s' % (key, ' '*(18-len(key))) + doc = '%s (default %s)' % (pypyjit.PARAMETER_DOCS[key], value) + while len(doc) > 51: + i = doc[:51].rfind(' ') + print prefix + doc[:i] + doc = doc[i+1:] + prefix = ' '*len(prefix) + print prefix + doc print ' --jit off turn off the JIT' def print_version(*args): From noreply at buildbot.pypy.org Thu Feb 16 19:42:21 2012 From: noreply at buildbot.pypy.org (wlav) Date: Thu, 16 Feb 2012 19:42:21 +0100 (CET) Subject: [pypy-commit] pypy reflex-support: minor fixes Message-ID: <20120216184221.90A868204C@wyvern.cs.uni-duesseldorf.de> Author: Wim Lavrijsen Branch: reflex-support Changeset: r52559:ea2e484181c9 Date: 2012-02-15 13:26 -0800 http://bitbucket.org/pypy/pypy/changeset/ea2e484181c9/ Log: minor fixes diff --git a/pypy/module/cppyy/interp_cppyy.py b/pypy/module/cppyy/interp_cppyy.py --- a/pypy/module/cppyy/interp_cppyy.py +++ b/pypy/module/cppyy/interp_cppyy.py @@ -92,7 +92,7 @@ class CPPMethod(object): """ A concrete function after overloading has been resolved """ _immutable_ = True - _immutable_fields_ = ["arg_defs[*]", "arg_converters[*]"] + _immutable_fields_ = ["_libffifunc_cache[*]", "arg_converters[*]"] def __init__(self, cpptype, method_index, result_type, arg_defs, args_required): self.cpptype = cpptype @@ -151,11 +151,11 @@ @jit.elidable_promote() def _prepare_libffi_func(self, funcptr): - if self.arg_converters is None: - self._build_converters() key = rffi.cast(rffi.LONG, funcptr) if key in self._libffifunc_cache: return self._libffifunc_cache[key] + if self.arg_converters is None: + self._build_converters() argtypes_libffi = [conv.libffitype for conv in self.arg_converters if conv.libffitype] if (len(argtypes_libffi) == len(self.arg_converters) and From noreply at buildbot.pypy.org Thu Feb 16 19:42:22 2012 From: noreply at buildbot.pypy.org (wlav) Date: Thu, 16 Feb 2012 19:42:22 +0100 (CET) Subject: [pypy-commit] pypy reflex-support: optimize construction of libffi function for methods Message-ID: <20120216184222.BAF2B8204C@wyvern.cs.uni-duesseldorf.de> Author: Wim Lavrijsen Branch: 
reflex-support Changeset: r52560:769f8f7bb605 Date: 2012-02-16 10:40 -0800 http://bitbucket.org/pypy/pypy/changeset/769f8f7bb605/ Log: optimize construction of libffi function for methods diff --git a/pypy/module/cppyy/interp_cppyy.py b/pypy/module/cppyy/interp_cppyy.py --- a/pypy/module/cppyy/interp_cppyy.py +++ b/pypy/module/cppyy/interp_cppyy.py @@ -84,15 +84,10 @@ ) W_CPPLibrary.typedef.acceptable_as_base_class = True - at jit.elidable_promote() -def get_methptr_getter(handle, method_index): - return capi.c_get_methptr_getter(handle, method_index) - class CPPMethod(object): """ A concrete function after overloading has been resolved """ _immutable_ = True - _immutable_fields_ = ["_libffifunc_cache[*]", "arg_converters[*]"] def __init__(self, cpptype, method_index, result_type, arg_defs, args_required): self.cpptype = cpptype @@ -101,11 +96,11 @@ self.arg_defs = arg_defs self.args_required = args_required self.executor = executor.get_executor(self.space, result_type) + + # Setup of the method dispatch's innards is done lazily, i.e. only when + # the method is actually used. TODO: executor should be lazy as well. self.arg_converters = None - methgetter = get_methptr_getter(self.cpptype.handle, - self.method_index) - self.methgetter = methgetter - self._libffifunc_cache = {} + self._libffifunc = None @jit.unroll_safe def call(self, cppthis, w_type, args_w): @@ -117,11 +112,14 @@ if args_expected < args_given or args_given < self.args_required: raise TypeError("wrong number of arguments") - if self.methgetter and cppthis: # only for methods + if self.arg_converters is None: + self._setup(cppthis) + + if self._libffifunc: try: return self.do_fast_call(cppthis, w_type, args_w) except FastCallNotPossible: - pass + pass # can happen if converters or executor does not implement ffi args = self.prepare_arguments(args_w) try: @@ -132,11 +130,6 @@ @jit.unroll_safe def do_fast_call(self, cppthis, w_type, args_w): jit.promote(self) - funcptr = self.methgetter(rffi.cast(capi.C_OBJECT, cppthis)) - libffi_func = self._prepare_libffi_func(funcptr) - if not libffi_func: - raise FastCallNotPossible - argchain = libffi.ArgChain() argchain.arg(cppthis) i = len(self.arg_defs) @@ -147,37 +140,31 @@ for j in range(i+1, len(self.arg_defs)): conv = self.arg_converters[j] conv.default_argument_libffi(self.space, argchain) - return self.executor.execute_libffi(self.space, w_type, libffi_func, argchain) + return self.executor.execute_libffi(self.space, w_type, self._libffifunc, argchain) - @jit.elidable_promote() - def _prepare_libffi_func(self, funcptr): - key = rffi.cast(rffi.LONG, funcptr) - if key in self._libffifunc_cache: - return self._libffifunc_cache[key] - if self.arg_converters is None: - self._build_converters() - argtypes_libffi = [conv.libffitype for conv in self.arg_converters - if conv.libffitype] - if (len(argtypes_libffi) == len(self.arg_converters) and - self.executor.libffitype): - # add c++ this to the arguments - libffifunc = libffi.Func("XXX", - [libffi.types.pointer] + argtypes_libffi, - self.executor.libffitype, funcptr) - else: - libffifunc = None - self._libffifunc_cache[key] = libffifunc - return libffifunc - - def _build_converters(self): + def _setup(self, cppthis): self.arg_converters = [converter.get_converter(self.space, arg_type, arg_dflt) for arg_type, arg_dflt in self.arg_defs] + # Each CPPMethod corresponds one-to-one to a C++ equivalent and cppthis + # has been offset to the matching class. Hence, the libffi pointer is + # uniquely defined and needs to be setup only once. 
+ methgetter = capi.c_get_methptr_getter(self.cpptype.handle, self.method_index) + if methgetter and cppthis: # methods only for now + funcptr = methgetter(rffi.cast(capi.C_OBJECT, cppthis)) + argtypes_libffi = [conv.libffitype for conv in self.arg_converters + if conv.libffitype] + if (len(argtypes_libffi) == len(self.arg_converters) and + self.executor.libffitype): + # add c++ this to the arguments + libffifunc = libffi.Func("XXX", + [libffi.types.pointer] + argtypes_libffi, + self.executor.libffitype, funcptr) + self._libffifunc = libffifunc + @jit.unroll_safe def prepare_arguments(self, args_w): jit.promote(self) - if self.arg_converters is None: - self._build_converters() args = capi.c_allocate_function_args(len(args_w)) stride = capi.c_function_arg_sizeof() for i in range(len(args_w)): diff --git a/pypy/module/cppyy/test/test_pythonify.py b/pypy/module/cppyy/test/test_pythonify.py --- a/pypy/module/cppyy/test/test_pythonify.py +++ b/pypy/module/cppyy/test/test_pythonify.py @@ -279,7 +279,18 @@ assert g(11., 2) == 2. assert g(11.) == 11. - def test11_underscore_in_class_name(self): + def test11_overload_on_arguments(self): + """Test functions overloaded on arguments""" + + import cppyy + e = cppyy.gbl.example01(1) + + assert e.addDataToInt(2) == 3 + assert e.overloadedAddDataToInt(3) == 4 + assert e.overloadedAddDataToInt(4, 5) == 10 + assert e.overloadedAddDataToInt(6, 7, 8) == 22 + + def test12_underscore_in_class_name(self): """Test recognition of '_' as part of a valid class name""" import cppyy From noreply at buildbot.pypy.org Thu Feb 16 19:44:27 2012 From: noreply at buildbot.pypy.org (fijal) Date: Thu, 16 Feb 2012 19:44:27 +0100 (CET) Subject: [pypy-commit] pypy.org extradoc: update bars Message-ID: <20120216184427.60FB98204C@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: extradoc Changeset: r334:474c2aefdc41 Date: 2012-02-16 20:44 +0200 http://bitbucket.org/pypy/pypy.org/changeset/474c2aefdc41/ Log: update bars diff --git a/don1.html b/don1.html --- a/don1.html +++ b/don1.html @@ -13,7 +13,7 @@ }); - $39882 of $105000 (38.0%) + $40035 of $105000 (38.0%)
    diff --git a/don3.html b/don3.html --- a/don3.html +++ b/don3.html @@ -8,12 +8,12 @@ - $41480 of $60000 (69.1%) + $43280 of $60000 (72.1%)
    From noreply at buildbot.pypy.org Thu Feb 16 19:58:56 2012 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 16 Feb 2012 19:58:56 +0100 (CET) Subject: [pypy-commit] buildbot default: Add aurora, for now in first position Message-ID: <20120216185856.DF8228204C@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r637:28be4bf494f4 Date: 2012-02-16 19:58 +0100 http://bitbucket.org/pypy/buildbot/changeset/28be4bf494f4/ Log: Add aurora, for now in first position diff --git a/bot2/pypybuildbot/master.py b/bot2/pypybuildbot/master.py --- a/bot2/pypybuildbot/master.py +++ b/bot2/pypybuildbot/master.py @@ -348,7 +348,7 @@ 'category' : 'mac64', }, {"name": WIN32, - "slavenames": ["SalsaSalsa", "snakepit32", "bigboard"], + "slavenames": ["aurora", "SalsaSalsa", "snakepit32", "bigboard"], "builddir": WIN32, "factory": pypyOwnTestFactoryWin, "category": 'win32' @@ -360,13 +360,13 @@ "category": 'win32' }, {"name": APPLVLWIN32, - "slavenames": ["SalsaSalsa", "snakepit32", "bigboard"], + "slavenames": ["aurora", "SalsaSalsa", "snakepit32", "bigboard"], "builddir": APPLVLWIN32, "factory": pypyTranslatedAppLevelTestFactoryWin, "category": "win32" }, {"name" : JITWIN32, - "slavenames": ["SalsaSalsa", "snakepit32", "bigboard"], + "slavenames": ["aurora", "SalsaSalsa", "snakepit32", "bigboard"], 'builddir' : JITWIN32, 'factory' : pypyJITTranslatedTestFactoryWin, 'category' : 'win32', From noreply at buildbot.pypy.org Thu Feb 16 20:18:23 2012 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 16 Feb 2012 20:18:23 +0100 (CET) Subject: [pypy-commit] pypy stm-gc: handle hint(stm_write) like stm_writebarrier. Message-ID: <20120216191823.D23BF8204C@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-gc Changeset: r52561:8d501fff9677 Date: 2012-02-16 18:24 +0100 http://bitbucket.org/pypy/pypy/changeset/8d501fff9677/ Log: handle hint(stm_write) like stm_writebarrier. diff --git a/pypy/interpreter/pyframe.py b/pypy/interpreter/pyframe.py --- a/pypy/interpreter/pyframe.py +++ b/pypy/interpreter/pyframe.py @@ -151,8 +151,8 @@ generator.throw(). """ self = hint(self, stm_write=True) - hint(self.locals_stack_w, stm_write=True) - hint(self.cells, stm_immutable=True) + #hint(self.locals_stack_w, stm_write=True) -- later + #hint(self.cells, stm_immutable=True) -- later # the following 'assert' is an annotation hint: it hides from # the annotator all methods that are defined in PyFrame but # overridden in the {,Host}FrameClass subclasses of PyFrame. 
diff --git a/pypy/translator/stm/localtracker.py b/pypy/translator/stm/localtracker.py --- a/pypy/translator/stm/localtracker.py +++ b/pypy/translator/stm/localtracker.py @@ -23,8 +23,11 @@ assert isinstance(variable, Variable) for src in self.gsrc[variable]: if isinstance(src, SpaceOperation): - if src.opname not in RETURNS_LOCAL_POINTER: - return False + if src.opname in RETURNS_LOCAL_POINTER: + continue + if src.opname == 'hint' and 'stm_write' in src.args[1].value: + continue + return False elif isinstance(src, Constant): if src.value: # a NULL pointer is still valid as local return False diff --git a/pypy/translator/stm/test/test_localtracker.py b/pypy/translator/stm/test/test_localtracker.py --- a/pypy/translator/stm/test/test_localtracker.py +++ b/pypy/translator/stm/test/test_localtracker.py @@ -210,6 +210,15 @@ self.translate(f, [int]) self.check(['x']) + def test_hint_stm_write(self): + z = X(42) + def f(n): + x = hint(z, stm_write=True) + _see(x, 'x') + # + self.translate(f, [int]) + self.check(['x']) + S = lltype.GcStruct('S', ('n', lltype.Signed)) diff --git a/pypy/translator/stm/test/test_transform.py b/pypy/translator/stm/test/test_transform.py --- a/pypy/translator/stm/test/test_transform.py +++ b/pypy/translator/stm/test/test_transform.py @@ -32,13 +32,17 @@ x.n = n + 2 x.sub = n + 1 x.n *= 2 + if n < 10: + return x.n + else: + return 0 # graph = get_graph(f1, [int]) pre_insert_stm_writebarrier(graph) if option.view: graph.show() # weak test: check that there are exactly two stm_writebarrier inserted. - # one should be for 'x.n = n', and one should cover both field assignments - # to the Z instance. + # one should be for 'x.n = n', one should cover both field assignments + # to the Z instance, and the 3rd one is in the block 'x.n *= 2'. sum = summary(graph) assert sum['stm_writebarrier'] == 3 diff --git a/pypy/translator/stm/transform.py b/pypy/translator/stm/transform.py --- a/pypy/translator/stm/transform.py +++ b/pypy/translator/stm/transform.py @@ -163,6 +163,13 @@ stt_malloc_nonmovable = stt_malloc stt_malloc_nonmovable_varsize = stt_malloc + def stt_hint(self, newoperations, op): + if 'stm_write' in op.args[1].value: + op = SpaceOperation('stm_writebarrier', [op.args[0]], op.result) + self.stt_stm_writebarrier(newoperations, op) + return + newoperations.append(op) + def transform_graph(graph): # for tests: only transforms one graph From noreply at buildbot.pypy.org Thu Feb 16 20:18:25 2012 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 16 Feb 2012 20:18:25 +0100 (CET) Subject: [pypy-commit] pypy stm-gc: Test and fix. Message-ID: <20120216191825.654038204C@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-gc Changeset: r52562:b44e53969410 Date: 2012-02-16 18:47 +0100 http://bitbucket.org/pypy/pypy/changeset/b44e53969410/ Log: Test and fix. 
diff --git a/pypy/translator/stm/gcsource.py b/pypy/translator/stm/gcsource.py --- a/pypy/translator/stm/gcsource.py +++ b/pypy/translator/stm/gcsource.py @@ -83,6 +83,8 @@ for v1, v2 in zip(link.args, link.target.inputargs): if _is_gc(v2): assert _is_gc(v1) + if v1 is link.last_exc_value: + v1 = None resultlist.append((v1, v2)) # for graph in translator.graphs: diff --git a/pypy/translator/stm/test/test_gcsource.py b/pypy/translator/stm/test/test_gcsource.py --- a/pypy/translator/stm/test/test_gcsource.py +++ b/pypy/translator/stm/test/test_gcsource.py @@ -115,3 +115,18 @@ v_result = gsrc.translator.graphs[0].getreturnvar() s = gsrc[v_result] assert list(s) == [None] + +def test_exception(): + class FooError(Exception): + pass + def f(n): + raise FooError + def main(n): + try: + f(n) + except FooError, e: + return e + gsrc = gcsource(main, [int]) + v_result = gsrc.translator.graphs[0].getreturnvar() + s = gsrc[v_result] + assert list(s) == [None] From noreply at buildbot.pypy.org Thu Feb 16 20:18:26 2012 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 16 Feb 2012 20:18:26 +0100 (CET) Subject: [pypy-commit] pypy stm-gc: Baah. Message-ID: <20120216191826.8C5488204C@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-gc Changeset: r52563:c2e9ffcdfab2 Date: 2012-02-16 19:39 +0100 http://bitbucket.org/pypy/pypy/changeset/c2e9ffcdfab2/ Log: Baah. diff --git a/pypy/config/translationoption.py b/pypy/config/translationoption.py --- a/pypy/config/translationoption.py +++ b/pypy/config/translationoption.py @@ -75,7 +75,8 @@ "markcompact": [("translation.gctransformer", "framework")], "minimark": [("translation.gctransformer", "framework")], "stmgc": [("translation.gctransformer", "framework"), - ("translation.gcrootfinder", "stm")], + ("translation.gcrootfinder", "stm"), + ("translation.rweakref", False)], # XXX temp }, cmdline="--gc"), ChoiceOption("gctransformer", "GC transformer that is used - internal", From noreply at buildbot.pypy.org Thu Feb 16 20:18:27 2012 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 16 Feb 2012 20:18:27 +0100 (CET) Subject: [pypy-commit] pypy stm-gc: 'access_directly' seems necessary to avoid merges in the call Message-ID: <20120216191827.B4D1D8204C@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-gc Changeset: r52564:f5f411743418 Date: 2012-02-16 19:48 +0100 http://bitbucket.org/pypy/pypy/changeset/f5f411743418/ Log: 'access_directly' seems necessary to avoid merges in the call graph. diff --git a/pypy/interpreter/pyframe.py b/pypy/interpreter/pyframe.py --- a/pypy/interpreter/pyframe.py +++ b/pypy/interpreter/pyframe.py @@ -150,9 +150,11 @@ w_inputvalue is for generator.send() and operr is for generator.throw(). """ - self = hint(self, stm_write=True) - #hint(self.locals_stack_w, stm_write=True) -- later - #hint(self.cells, stm_immutable=True) -- later + if self.space.config.translation.stm: + self = hint(self, stm_write=True) + #hint(self.locals_stack_w, stm_write=True) -- later + #hint(self.cells, stm_immutable=True) -- later + self = hint(self, access_directly=True) # the following 'assert' is an annotation hint: it hides from # the annotator all methods that are defined in PyFrame but # overridden in the {,Host}FrameClass subclasses of PyFrame. 
From noreply at buildbot.pypy.org Thu Feb 16 20:27:59 2012 From: noreply at buildbot.pypy.org (bivab) Date: Thu, 16 Feb 2012 20:27:59 +0100 (CET) Subject: [pypy-commit] pypy arm-backend-2: remove definition of _all_size_descrs_with_vtable in model.py and revert changes to heaptracker.py Message-ID: <20120216192759.037C08204C@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: arm-backend-2 Changeset: r52565:f6dc5b3cedd2 Date: 2012-02-16 19:56 +0100 http://bitbucket.org/pypy/pypy/changeset/f6dc5b3cedd2/ Log: remove definition of _all_size_descrs_with_vtable in model.py and revert changes to heaptracker.py diff --git a/pypy/jit/backend/model.py b/pypy/jit/backend/model.py --- a/pypy/jit/backend/model.py +++ b/pypy/jit/backend/model.py @@ -22,7 +22,7 @@ total_freed_bridges = 0 # for heaptracker - _all_size_descrs_with_vtable = None + # _all_size_descrs_with_vtable = None _vtable_to_descr_dict = None diff --git a/pypy/jit/codewriter/heaptracker.py b/pypy/jit/codewriter/heaptracker.py --- a/pypy/jit/codewriter/heaptracker.py +++ b/pypy/jit/codewriter/heaptracker.py @@ -89,7 +89,7 @@ except AttributeError: pass assert lltype.typeOf(vtable) == VTABLETYPE - if cpu._all_size_descrs_with_vtable is None: + if not hasattr(cpu, '_all_size_descrs_with_vtable'): cpu._all_size_descrs_with_vtable = [] cpu._vtable_to_descr_dict = None cpu._all_size_descrs_with_vtable.append(sizedescr) @@ -97,7 +97,7 @@ def finish_registering(cpu): # annotation hack for small examples which have no vtable at all - if cpu._all_size_descrs_with_vtable is None: + if not hasattr(cpu, '_all_size_descrs_with_vtable'): vtable = lltype.malloc(rclass.OBJECT_VTABLE, immortal=True) register_known_gctype(cpu, vtable, rclass.OBJECT) @@ -108,7 +108,6 @@ # Build the dict {vtable: sizedescr} at runtime. # This is necessary because the 'vtables' are just pointers to # static data, so they can't be used as keys in prebuilt dicts. - assert cpu._all_size_descrs_with_vtable is not None d = cpu._vtable_to_descr_dict if d is None: d = cpu._vtable_to_descr_dict = {} @@ -130,4 +129,3 @@ vtable = descr.as_vtable_size_descr()._corresponding_vtable vtable = llmemory.cast_ptr_to_adr(vtable) return adr2int(vtable) - From noreply at buildbot.pypy.org Thu Feb 16 20:28:00 2012 From: noreply at buildbot.pypy.org (bivab) Date: Thu, 16 Feb 2012 20:28:00 +0100 (CET) Subject: [pypy-commit] pypy arm-backend-2: skip test_random_effects_on_stacklet_switch if platform is not supported Message-ID: <20120216192800.860718204C@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: arm-backend-2 Changeset: r52566:cf684bd8d082 Date: 2012-02-16 20:20 +0100 http://bitbucket.org/pypy/pypy/changeset/cf684bd8d082/ Log: skip test_random_effects_on_stacklet_switch if platform is not supported diff --git a/pypy/jit/codewriter/test/test_call.py b/pypy/jit/codewriter/test/test_call.py --- a/pypy/jit/codewriter/test/test_call.py +++ b/pypy/jit/codewriter/test/test_call.py @@ -195,7 +195,14 @@ def test_random_effects_on_stacklet_switch(): from pypy.jit.backend.llgraph.runner import LLtypeCPU - from pypy.rlib._rffi_stacklet import switch, thread_handle, handle + from pypy.translator.platform import CompilationError + try: + from pypy.rlib._rffi_stacklet import switch, thread_handle, handle + except CompilationError as e: + if "Unsupported platform!" 
in e.out: + py.test.skip("Unsupported platform!") + else: + raise e @jit.dont_look_inside def f(): switch(rffi.cast(thread_handle, 0), rffi.cast(handle, 0)) From noreply at buildbot.pypy.org Thu Feb 16 20:38:55 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Thu, 16 Feb 2012 20:38:55 +0100 (CET) Subject: [pypy-commit] pypy default: cpyext: fill type slots tp_iter and tp_iternext Message-ID: <20120216193855.688718204C@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: Changeset: r52567:873f837ca9f4 Date: 2012-02-16 20:37 +0100 http://bitbucket.org/pypy/pypy/changeset/873f837ca9f4/ Log: cpyext: fill type slots tp_iter and tp_iternext diff --git a/pypy/module/cpyext/slotdefs.py b/pypy/module/cpyext/slotdefs.py --- a/pypy/module/cpyext/slotdefs.py +++ b/pypy/module/cpyext/slotdefs.py @@ -291,6 +291,14 @@ def slot_nb_int(space, w_self): return space.int(w_self) + at cpython_api([PyObject], PyObject, external=False) +def slot_tp_iter(space, w_self): + return space.iter(w_self) + + at cpython_api([PyObject], PyObject, external=False) +def slot_tp_iternext(space, w_self): + return space.next(w_self) + from pypy.rlib.nonconst import NonConstant SLOTS = {} diff --git a/pypy/module/cpyext/test/test_typeobject.py b/pypy/module/cpyext/test/test_typeobject.py --- a/pypy/module/cpyext/test/test_typeobject.py +++ b/pypy/module/cpyext/test/test_typeobject.py @@ -425,3 +425,32 @@ ''') obj = module.new_obj() raises(ZeroDivisionError, obj.__setitem__, 5, None) + + def test_tp_iter(self): + module = self.import_extension('foo', [ + ("tp_iter", "METH_O", + ''' + if (!args->ob_type->tp_iter) + { + PyErr_SetNone(PyExc_ValueError); + return NULL; + } + return args->ob_type->tp_iter(args); + ''' + ), + ("tp_iternext", "METH_O", + ''' + if (!args->ob_type->tp_iternext) + { + PyErr_SetNone(PyExc_ValueError); + return NULL; + } + return args->ob_type->tp_iternext(args); + ''' + ) + ]) + l = [1] + it = module.tp_iter(l) + assert type(it) is type(iter([])) + assert module.tp_iternext(it) == 1 + raises(StopIteration, module.tp_iternext, it) From noreply at buildbot.pypy.org Thu Feb 16 20:46:34 2012 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 16 Feb 2012 20:46:34 +0100 (CET) Subject: [pypy-commit] pypy stm-gc: Fix Message-ID: <20120216194634.428B38204C@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-gc Changeset: r52568:2ea8061c4a47 Date: 2012-02-16 20:45 +0100 http://bitbucket.org/pypy/pypy/changeset/2ea8061c4a47/ Log: Fix diff --git a/pypy/translator/stm/transform.py b/pypy/translator/stm/transform.py --- a/pypy/translator/stm/transform.py +++ b/pypy/translator/stm/transform.py @@ -89,7 +89,8 @@ newoperations.append(op) return if op.args[0].concretetype.TO._gckind == 'raw': - turn_inevitable(newoperations, op.opname + '-raw') + if not is_immutable(op): + turn_inevitable(newoperations, op.opname + '-raw') newoperations.append(op) return if is_immutable(op): @@ -110,7 +111,8 @@ newoperations.append(op) return if op.args[0].concretetype.TO._gckind == 'raw': - turn_inevitable(newoperations, op.opname + '-raw') + if not is_immutable(op): + turn_inevitable(newoperations, op.opname + '-raw') newoperations.append(op) return if is_immutable(op): From noreply at buildbot.pypy.org Thu Feb 16 20:51:09 2012 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 16 Feb 2012 20:51:09 +0100 (CET) Subject: [pypy-commit] pypy stm-gc: Add at least this placeholder test :-/ Hopefully, will be fixed soon. 
Message-ID: <20120216195109.91D038204C@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-gc Changeset: r52569:d37f9bdfdea7 Date: 2012-02-16 20:50 +0100 http://bitbucket.org/pypy/pypy/changeset/d37f9bdfdea7/ Log: Add at least this placeholder test :-/ Hopefully, will be fixed soon. diff --git a/pypy/translator/stm/test/test_transform.py b/pypy/translator/stm/test/test_transform.py --- a/pypy/translator/stm/test/test_transform.py +++ b/pypy/translator/stm/test/test_transform.py @@ -46,3 +46,8 @@ # to the Z instance, and the 3rd one is in the block 'x.n *= 2'. sum = summary(graph) assert sum['stm_writebarrier'] == 3 + + +def test_all_the_rest_in_transform(): + import py + py.test.skip("XXX! tests missing!") From noreply at buildbot.pypy.org Thu Feb 16 21:15:21 2012 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 16 Feb 2012 21:15:21 +0100 (CET) Subject: [pypy-commit] pypy stm-gc: Can't call del_thread here. Multiple accesses to the dictionary Message-ID: <20120216201521.26FF38204C@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-gc Changeset: r52570:7333da24124f Date: 2012-02-16 21:15 +0100 http://bitbucket.org/pypy/pypy/changeset/7333da24124f/ Log: Can't call del_thread here. Multiple accesses to the dictionary are not protected by a lock. Instead, leave the stuff behind and clear it from the main thread. diff --git a/pypy/module/transaction/interp_transaction.py b/pypy/module/transaction/interp_transaction.py --- a/pypy/module/transaction/interp_transaction.py +++ b/pypy/module/transaction/interp_transaction.py @@ -52,10 +52,6 @@ ec._transaction_pending = Fifo() self.threadobjs[id] = ec - def del_thread(self, id): - # un-register a transaction thread - del self.threadobjs[id] - # ---------- interface for ThreadLocals ---------- # This works really like a thread-local, which may have slightly # strange consequences in multiple transactions, because you don't @@ -79,6 +75,11 @@ def getallvalues(self): return self.threadobjs + def clear_all_values_apart_from_main(self): + for id in self.threadobjs.keys(): + if id != MAIN_THREAD_ID: + del self.threadobjs[id] + # ---------- def set_num_threads(self, num): @@ -251,7 +252,6 @@ state.lock() _add_list(my_transactions_pending) # - state.del_thread(rstm.thread_id()) rstm.descriptor_done() if state.num_waiting_threads == 0: # only the last thread to leave state.unlock_unfinished() @@ -289,8 +289,8 @@ # assert state.num_waiting_threads == 0 assert state.pending.is_empty() - assert state.threadobjs.keys() == [MAIN_THREAD_ID] assert not state.is_locked_no_tasks_pending() + state.clear_all_values_apart_from_main() state.running = False # # now re-raise the exception that we got in a transaction From noreply at buildbot.pypy.org Thu Feb 16 21:52:59 2012 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 16 Feb 2012 21:52:59 +0100 (CET) Subject: [pypy-commit] pypy stm-gc: Fix. Message-ID: <20120216205259.92DE28204C@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-gc Changeset: r52571:d485695d0d54 Date: 2012-02-16 21:52 +0100 http://bitbucket.org/pypy/pypy/changeset/d485695d0d54/ Log: Fix. diff --git a/pypy/translator/stm/localtracker.py b/pypy/translator/stm/localtracker.py --- a/pypy/translator/stm/localtracker.py +++ b/pypy/translator/stm/localtracker.py @@ -21,7 +21,14 @@ def is_local(self, variable): assert isinstance(variable, Variable) - for src in self.gsrc[variable]: + try: + srcs = self.gsrc[variable] + except KeyError: + # XXX we shouldn't get here, but we do translating the whole + # pypy. 
We should investigate at some point. In the meantime + # returning False is always safe. + return False + for src in srcs: if isinstance(src, SpaceOperation): if src.opname in RETURNS_LOCAL_POINTER: continue From noreply at buildbot.pypy.org Thu Feb 16 21:57:01 2012 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 16 Feb 2012 21:57:01 +0100 (CET) Subject: [pypy-commit] pypy stm-gc: Add the modified version of richards I use. Message-ID: <20120216205701.D54B08204C@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-gc Changeset: r52572:bba9b03f5e70 Date: 2012-02-16 21:56 +0100 http://bitbucket.org/pypy/pypy/changeset/bba9b03f5e70/ Log: Add the modified version of richards I use. diff --git a/pypy/translator/goal/richards.py b/pypy/translator/stm/test/richards.py copy from pypy/translator/goal/richards.py copy to pypy/translator/stm/test/richards.py --- a/pypy/translator/goal/richards.py +++ b/pypy/translator/stm/test/richards.py @@ -7,6 +7,9 @@ # Translation from C++, Mario Wolczko # Outer loop added by Alex Jacoby +import transaction + + # Task IDs I_IDLE = 1 I_WORK = 2 @@ -151,12 +154,11 @@ self.holdCount = 0 self.qpktCount = 0 -taskWorkArea = TaskWorkArea() - class Task(TaskState): - def __init__(self,i,p,w,initialState,r): + def __init__(self,i,p,w,initialState,r, taskWorkArea): + self.taskWorkArea = taskWorkArea self.link = taskWorkArea.taskList self.ident = i self.priority = p @@ -206,7 +208,7 @@ def hold(self): - taskWorkArea.holdCount += 1 + self.taskWorkArea.holdCount += 1 self.task_holding = True return self.link @@ -222,14 +224,14 @@ def qpkt(self,pkt): t = self.findtcb(pkt.ident) - taskWorkArea.qpktCount += 1 + self.taskWorkArea.qpktCount += 1 pkt.link = None pkt.ident = self.ident return t.addPacket(pkt,self) def findtcb(self,id): - t = taskWorkArea.taskTab[id] + t = self.taskWorkArea.taskTab[id] if t is None: raise Exception("Bad task id %d" % id) return t @@ -239,8 +241,8 @@ class DeviceTask(Task): - def __init__(self,i,p,w,s,r): - Task.__init__(self,i,p,w,s,r) + def __init__(self,i,p,w,s,r, taskWorkArea): + Task.__init__(self,i,p,w,s,r, taskWorkArea) def fn(self,pkt,r): d = r @@ -260,8 +262,8 @@ class HandlerTask(Task): - def __init__(self,i,p,w,s,r): - Task.__init__(self,i,p,w,s,r) + def __init__(self,i,p,w,s,r, taskWorkArea): + Task.__init__(self,i,p,w,s,r, taskWorkArea) def fn(self,pkt,r): h = r @@ -292,8 +294,8 @@ class IdleTask(Task): - def __init__(self,i,p,w,s,r): - Task.__init__(self,i,0,None,s,r) + def __init__(self,i,p,w,s,r, taskWorkArea): + Task.__init__(self,i,0,None,s,r, taskWorkArea) def fn(self,pkt,r): i = r @@ -315,8 +317,8 @@ A = ord('A') class WorkTask(Task): - def __init__(self,i,p,w,s,r): - Task.__init__(self,i,p,w,s,r) + def __init__(self,i,p,w,s,r, taskWorkArea): + Task.__init__(self,i,p,w,s,r, taskWorkArea) def fn(self,pkt,r): w = r @@ -345,9 +347,12 @@ -def schedule(): +def prepare_schedule(taskWorkArea): t = taskWorkArea.taskList - while t is not None: + transaction.add(schedule_one, taskWorkArea, t) + +def schedule_one(taskWorkArea, t): + if t is not None: pkt = None if tracing: @@ -359,41 +364,53 @@ if tracing: trace(chr(ord("0")+t.ident)) t = t.runTask() + transaction.add(schedule_one, taskWorkArea, t) + + else: + if taskWorkArea.holdCount == 9297 and taskWorkArea.qpktCount == 23246: + pass + else: + raise Exception, "Incorrect results!" 
+ + class Richards(object): def run(self, iterations): for i in xrange(iterations): + self.prepare_once() + transaction.run() + return True + + def prepare_once(self): + taskWorkArea = TaskWorkArea() taskWorkArea.holdCount = 0 taskWorkArea.qpktCount = 0 - IdleTask(I_IDLE, 1, 10000, TaskState().running(), IdleTaskRec()) + IdleTask(I_IDLE, 1, 10000, TaskState().running(), IdleTaskRec(), taskWorkArea) wkq = Packet(None, 0, K_WORK) wkq = Packet(wkq , 0, K_WORK) - WorkTask(I_WORK, 1000, wkq, TaskState().waitingWithPacket(), WorkerTaskRec()) + WorkTask(I_WORK, 1000, wkq, TaskState().waitingWithPacket(), WorkerTaskRec(), + taskWorkArea) wkq = Packet(None, I_DEVA, K_DEV) wkq = Packet(wkq , I_DEVA, K_DEV) wkq = Packet(wkq , I_DEVA, K_DEV) - HandlerTask(I_HANDLERA, 2000, wkq, TaskState().waitingWithPacket(), HandlerTaskRec()) + HandlerTask(I_HANDLERA, 2000, wkq, TaskState().waitingWithPacket(), HandlerTaskRec(), + taskWorkArea) wkq = Packet(None, I_DEVB, K_DEV) wkq = Packet(wkq , I_DEVB, K_DEV) wkq = Packet(wkq , I_DEVB, K_DEV) - HandlerTask(I_HANDLERB, 3000, wkq, TaskState().waitingWithPacket(), HandlerTaskRec()) + HandlerTask(I_HANDLERB, 3000, wkq, TaskState().waitingWithPacket(), HandlerTaskRec(), + taskWorkArea) wkq = None; - DeviceTask(I_DEVA, 4000, wkq, TaskState().waiting(), DeviceTaskRec()); - DeviceTask(I_DEVB, 5000, wkq, TaskState().waiting(), DeviceTaskRec()); + DeviceTask(I_DEVA, 4000, wkq, TaskState().waiting(), DeviceTaskRec(), taskWorkArea); + DeviceTask(I_DEVB, 5000, wkq, TaskState().waiting(), DeviceTaskRec(), taskWorkArea); - schedule() + prepare_schedule(taskWorkArea) - if taskWorkArea.holdCount == 9297 and taskWorkArea.qpktCount == 23246: - pass - else: - return False - - return True def entry_point(iterations): r = Richards() @@ -405,9 +422,6 @@ def main(entry_point = entry_point, iterations = 10): print "Richards benchmark (Python) starting... [%r]" % entry_point result, startTime, endTime = entry_point(iterations) - if not result: - print "Incorrect results!" - return -1 print "finished." total_s = endTime - startTime print "Total time for %d iterations: %.2f secs" %(iterations,total_s) @@ -416,6 +430,7 @@ if __name__ == '__main__': import sys + transaction.set_num_threads(4) if len(sys.argv) >= 2: main(iterations = int(sys.argv[1])) else: From noreply at buildbot.pypy.org Thu Feb 16 22:32:23 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Thu, 16 Feb 2012 22:32:23 +0100 (CET) Subject: [pypy-commit] pypy default: cpyext: Fix order of slots which lead to the same __method__ name. Message-ID: <20120216213223.9F05C8204C@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: Changeset: r52573:59a514e97b66 Date: 2012-02-16 22:24 +0100 http://bitbucket.org/pypy/pypy/changeset/59a514e97b66/ Log: cpyext: Fix order of slots which lead to the same __method__ name. First Number slots, then Mapping and Sequence. 
diff --git a/pypy/module/cpyext/slotdefs.py b/pypy/module/cpyext/slotdefs.py --- a/pypy/module/cpyext/slotdefs.py +++ b/pypy/module/cpyext/slotdefs.py @@ -185,6 +185,15 @@ space.fromcache(State).check_and_raise_exception(always=True) return space.wrap(res) +def wrap_delitem(space, w_self, w_args, func): + func_target = rffi.cast(objobjargproc, func) + check_num_args(space, w_args, 1) + w_key, = space.fixedview(w_args) + res = generic_cpy_call(space, func_target, w_self, w_key, None) + if rffi.cast(lltype.Signed, res) == -1: + space.fromcache(State).check_and_raise_exception(always=True) + return space.w_None + def wrap_ssizessizeargfunc(space, w_self, w_args, func): func_target = rffi.cast(ssizessizeargfunc, func) check_num_args(space, w_args, 2) @@ -640,6 +649,19 @@ TPSLOT("__buffer__", "tp_as_buffer.c_bf_getreadbuffer", None, "wrap_getreadbuffer", ""), ) +# partial sort to solve some slot conflicts: +# Number slots before Mapping slots before Sequence slots. +# These are the only conflicts between __name__ methods +def slotdef_sort_key(slotdef): + if slotdef.slot_name.startswith('tp_as_number'): + return 1 + if slotdef.slot_name.startswith('tp_as_mapping'): + return 2 + if slotdef.slot_name.startswith('tp_as_sequence'): + return 3 + return 0 +slotdefs = sorted(slotdefs, key=slotdef_sort_key) + slotdefs_for_tp_slots = unrolling_iterable( [(x.method_name, x.slot_name, x.slot_names, x.slot_func) for x in slotdefs]) diff --git a/pypy/module/cpyext/test/test_arraymodule.py b/pypy/module/cpyext/test/test_arraymodule.py --- a/pypy/module/cpyext/test/test_arraymodule.py +++ b/pypy/module/cpyext/test/test_arraymodule.py @@ -43,6 +43,15 @@ assert arr[:2].tolist() == [1,2] assert arr[1:3].tolist() == [2,3] + def test_slice_object(self): + module = self.import_module(name='array') + arr = module.array('i', [1,2,3,4]) + assert arr[slice(1,3)].tolist() == [2,3] + arr[slice(1,3)] = module.array('i', [21, 22, 23]) + assert arr.tolist() == [1, 21, 22, 23, 4] + del arr[slice(1, 3)] + assert arr.tolist() == [1, 23, 4] + def test_buffer(self): module = self.import_module(name='array') arr = module.array('i', [1,2,3,4]) From noreply at buildbot.pypy.org Fri Feb 17 00:25:19 2012 From: noreply at buildbot.pypy.org (mattip) Date: Fri, 17 Feb 2012 00:25:19 +0100 (CET) Subject: [pypy-commit] pypy numpypy-out: add a failing test Message-ID: <20120216232519.DC2478204C@wyvern.cs.uni-duesseldorf.de> Author: mattip Branch: numpypy-out Changeset: r52574:4eef3287c40b Date: 2012-02-17 01:15 +0200 http://bitbucket.org/pypy/pypy/changeset/4eef3287c40b/ Log: add a failing test diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -783,7 +783,6 @@ self.values = None def create_sig(self): - print 'Call1::create_sig' if self.forced_result is not None: return self.forced_result.create_sig() return signature.Call1(self.ufunc, self.name, self.calc_dtype, diff --git a/pypy/module/micronumpy/test/test_outarg.py b/pypy/module/micronumpy/test/test_outarg.py --- a/pypy/module/micronumpy/test/test_outarg.py +++ b/pypy/module/micronumpy/test/test_outarg.py @@ -39,6 +39,21 @@ #test for view, and also test that forcing out also forces b assert (c[:, :, 1] == [[0, 0], [-4, -8]]).all() assert (b == [[-2, -4], [-6, -8]]).all() + #Test broadcast, type promotion + b = negative(3, out=a) + assert (a == -3).all() + c = zeros((2, 2), dtype=float) + b = negative(3, out=c) + assert b.dtype.kind == 
c.dtype.kind + assert b.shape == c.shape + + #Test shape agreement + a=zeros((3,4)) + b=zeros((3,5)) + raises(ValueError, 'negative(a, out=b)') + raises(ValueError, 'negative(a, out=b)') + + def test_ufunc_cast(self): from _numpypy import array, negative From noreply at buildbot.pypy.org Fri Feb 17 00:25:21 2012 From: noreply at buildbot.pypy.org (mattip) Date: Fri, 17 Feb 2012 00:25:21 +0100 (CET) Subject: [pypy-commit] pypy win32-cleanup_2: (sthalik) add python27.lib to windows build package Message-ID: <20120216232521.5FB0682B1F@wyvern.cs.uni-duesseldorf.de> Author: mattip Branch: win32-cleanup_2 Changeset: r52575:d6ccb305f556 Date: 2012-02-17 01:24 +0200 http://bitbucket.org/pypy/pypy/changeset/d6ccb305f556/ Log: (sthalik) add python27.lib to windows build package diff --git a/pypy/tool/release/package.py b/pypy/tool/release/package.py --- a/pypy/tool/release/package.py +++ b/pypy/tool/release/package.py @@ -82,6 +82,9 @@ for file in ['LICENSE', 'README']: shutil.copy(str(basedir.join(file)), str(pypydir)) pypydir.ensure('include', dir=True) + if sys.platform == 'win32': + shutil.copyfile(str(pypy_c.dirpath().join("python27.lib")), + str(pypydir.join('include/python27.lib'))) # we want to put there all *.h and *.inl from trunk/include # and from pypy/_interfaces includedir = basedir.join('include') diff --git a/pypy/translator/driver.py b/pypy/translator/driver.py --- a/pypy/translator/driver.py +++ b/pypy/translator/driver.py @@ -559,6 +559,9 @@ newsoname = newexename.new(basename=soname.basename) shutil.copy(str(soname), str(newsoname)) self.log.info("copied: %s" % (newsoname,)) + if sys.platform == 'win32': + shutil.copyfile(os.path.join(os.path.dirname(str(exename)), 'libpypy-c.lib'), + os.path.join(os.path.dirname(str(newexename)), 'python27.lib')) self.c_entryp = newexename self.log.info('usession directory: %s' % (udir,)) self.log.info("created: %s" % (self.c_entryp,)) From noreply at buildbot.pypy.org Fri Feb 17 08:00:16 2012 From: noreply at buildbot.pypy.org (hakanardo) Date: Fri, 17 Feb 2012 08:00:16 +0100 (CET) Subject: [pypy-commit] pypy default: Failing test for issue 1048 and a passing test of a similar situation not involving labels Message-ID: <20120217070016.802C18204C@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: Changeset: r52576:80d15a9a3932 Date: 2012-02-17 07:59 +0100 http://bitbucket.org/pypy/pypy/changeset/80d15a9a3932/ Log: Failing test for issue 1048 and a passing test of a similar situation not involving labels diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py @@ -7760,6 +7760,42 @@ """ self.optimize_loop(ops, expected) + def test_constant_failargs(self): + ops = """ + [p1, i2, i3] + setfield_gc(p1, ConstPtr(myptr), descr=nextdescr) + p16 = getfield_gc(p1, descr=nextdescr) + guard_true(i2) [p16, i3] + jump(p1, i3, i2) + """ + preamble = """ + [p1, i2, i3] + setfield_gc(p1, ConstPtr(myptr), descr=nextdescr) + guard_true(i2) [i3] + jump(p1, i3) + """ + expected = """ + [p1, i3] + guard_true(i3) [] + jump(p1, 1) + """ + self.optimize_loop(ops, expected, preamble) + + def test_issue1048(self): + ops = """ + [p1, i2, i3] + p16 = getfield_gc(p1, descr=nextdescr) + guard_true(i2) [p16] + setfield_gc(p1, ConstPtr(myptr), descr=nextdescr) + jump(p1, i3, i2) + """ + expected = """ + [p1, i3] + guard_true(i3) [] + jump(p1, 1) + """ + self.optimize_loop(ops, 
expected) + class TestLLtype(OptimizeOptTest, LLtypeMixin): pass From noreply at buildbot.pypy.org Fri Feb 17 12:09:59 2012 From: noreply at buildbot.pypy.org (RonnyPfannschmidt) Date: Fri, 17 Feb 2012 12:09:59 +0100 (CET) Subject: [pypy-commit] pypy pytest: sync pylib with 1.7.4 + mattip's patch Message-ID: <20120217110959.6352A8204C@wyvern.cs.uni-duesseldorf.de> Author: Ronny Pfannschmidt Branch: pytest Changeset: r52577:97e8ddb2e756 Date: 2012-02-17 12:07 +0100 http://bitbucket.org/pypy/pypy/changeset/97e8ddb2e756/ Log: sync pylib with 1.7.4 + mattip's patch diff --git a/py/__init__.py b/py/__init__.py --- a/py/__init__.py +++ b/py/__init__.py @@ -8,7 +8,7 @@ (c) Holger Krekel and others, 2004-2010 """ -__version__ = '1.4.7.dev3' +__version__ = '1.4.7' from py import _apipkg diff --git a/py/_io/terminalwriter.py b/py/_io/terminalwriter.py --- a/py/_io/terminalwriter.py +++ b/py/_io/terminalwriter.py @@ -194,10 +194,17 @@ if not self._newline: self.write("\r") self.write(line, **opts) - lastlen = getattr(self, '_lastlinelen', None) - self._lastlinelen = lenlastline = len(line) - if lenlastline < lastlen: - self.write(" " * (lastlen - lenlastline + 1)) + # see if we need to fill up some spaces at the end + # xxx have a more exact lastlinelen working from self.write? + lenline = len(line) + try: + lastlen = self._lastlinelen + except AttributeError: + pass + else: + if lenline < lastlen: + self.write(" " * (lastlen - lenline + 1)) + self._lastlinelen = lenline self._newline = False @@ -287,16 +294,24 @@ ('srWindow', SMALL_RECT), ('dwMaximumWindowSize', COORD)] + _GetStdHandle = ctypes.windll.kernel32.GetStdHandle + _GetStdHandle.argtypes = [wintypes.DWORD] + _GetStdHandle.restype = wintypes.HANDLE def GetStdHandle(kind): - return ctypes.windll.kernel32.GetStdHandle(kind) + return _GetStdHandle(kind) - SetConsoleTextAttribute = \ - ctypes.windll.kernel32.SetConsoleTextAttribute + SetConsoleTextAttribute = ctypes.windll.kernel32.SetConsoleTextAttribute + SetConsoleTextAttribute.argtypes = [wintypes.HANDLE, wintypes.WORD] + SetConsoleTextAttribute.restype = wintypes.BOOL + _GetConsoleScreenBufferInfo = \ + ctypes.windll.kernel32.GetConsoleScreenBufferInfo + _GetConsoleScreenBufferInfo.argtypes = [wintypes.HANDLE, + ctypes.POINTER(CONSOLE_SCREEN_BUFFER_INFO)] + _GetConsoleScreenBufferInfo.restype = wintypes.BOOL def GetConsoleInfo(handle): info = CONSOLE_SCREEN_BUFFER_INFO() - ctypes.windll.kernel32.GetConsoleScreenBufferInfo(\ - handle, ctypes.byref(info)) + _GetConsoleScreenBufferInfo(handle, ctypes.byref(info)) return info def _getdimensions(): From noreply at buildbot.pypy.org Fri Feb 17 12:10:00 2012 From: noreply at buildbot.pypy.org (RonnyPfannschmidt) Date: Fri, 17 Feb 2012 12:10:00 +0100 (CET) Subject: [pypy-commit] pypy pytest: sync pytest with the 2.2.3 release Message-ID: <20120217111000.ED8138204C@wyvern.cs.uni-duesseldorf.de> Author: Ronny Pfannschmidt Branch: pytest Changeset: r52578:d921bcdf3e52 Date: 2012-02-17 12:09 +0100 http://bitbucket.org/pypy/pypy/changeset/d921bcdf3e52/ Log: sync pytest with the 2.2.3 release diff --git a/_pytest/__init__.py b/_pytest/__init__.py --- a/_pytest/__init__.py +++ b/_pytest/__init__.py @@ -1,2 +1,2 @@ # -__version__ = '2.2.2.dev6' +__version__ = '2.2.3' diff --git a/_pytest/main.py b/_pytest/main.py --- a/_pytest/main.py +++ b/_pytest/main.py @@ -410,6 +410,7 @@ self._notfound = [] self._initialpaths = set() self._initialparts = [] + self.items = items = [] for arg in args: parts = self._parsearg(arg) self._initialparts.append(parts) @@ 
-425,7 +426,6 @@ if not genitems: return rep.result else: - self.items = items = [] if rep.passed: for node in rep.result: self.items.extend(self.genitems(node)) diff --git a/_pytest/python.py b/_pytest/python.py --- a/_pytest/python.py +++ b/_pytest/python.py @@ -629,9 +629,11 @@ if not isinstance(argnames, (tuple, list)): argnames = (argnames,) argvalues = [(val,) for val in argvalues] - for arg in argnames: - if arg not in self.funcargnames: - raise ValueError("%r has no argument %r" %(self.function, arg)) + if not indirect: + #XXX should we also check for the opposite case? + for arg in argnames: + if arg not in self.funcargnames: + raise ValueError("%r has no argument %r" %(self.function, arg)) valtype = indirect and "params" or "funcargs" if not ids: idmaker = IDMaker() diff --git a/_pytest/runner.py b/_pytest/runner.py --- a/_pytest/runner.py +++ b/_pytest/runner.py @@ -47,6 +47,8 @@ def pytest_sessionstart(session): session._setupstate = SetupState() +def pytest_sessionfinish(session): + session._setupstate.teardown_all() class NodeInfo: def __init__(self, location): diff --git a/_pytest/terminal.py b/_pytest/terminal.py --- a/_pytest/terminal.py +++ b/_pytest/terminal.py @@ -282,10 +282,18 @@ # we take care to leave out Instances aka () # because later versions are going to get rid of them anyway if self.config.option.verbose < 0: - for item in items: - nodeid = item.nodeid - nodeid = nodeid.replace("::()::", "::") - self._tw.line(nodeid) + if self.config.option.verbose < -1: + counts = {} + for item in items: + name = item.nodeid.split('::', 1)[0] + counts[name] = counts.get(name, 0) + 1 + for name, count in sorted(counts.items()): + self._tw.line("%s: %d" % (name, count)) + else: + for item in items: + nodeid = item.nodeid + nodeid = nodeid.replace("::()::", "::") + self._tw.line(nodeid) return stack = [] indent = "" From noreply at buildbot.pypy.org Fri Feb 17 12:13:50 2012 From: noreply at buildbot.pypy.org (mattip) Date: Fri, 17 Feb 2012 12:13:50 +0100 (CET) Subject: [pypy-commit] pypy win32-cleanup_2: (sthalik) do not hardcode lib name Message-ID: <20120217111350.DA6248204C@wyvern.cs.uni-duesseldorf.de> Author: mattip Branch: win32-cleanup_2 Changeset: r52579:18656f8fe732 Date: 2012-02-17 13:13 +0200 http://bitbucket.org/pypy/pypy/changeset/18656f8fe732/ Log: (sthalik) do not hardcode lib name diff --git a/pypy/tool/release/package.py b/pypy/tool/release/package.py --- a/pypy/tool/release/package.py +++ b/pypy/tool/release/package.py @@ -83,7 +83,7 @@ shutil.copy(str(basedir.join(file)), str(pypydir)) pypydir.ensure('include', dir=True) if sys.platform == 'win32': - shutil.copyfile(str(pypy_c.dirpath().join("python27.lib")), + shutil.copyfile(str(pypy_c.dirpath().join("libpypy-c.lib"))), str(pypydir.join('include/python27.lib'))) # we want to put there all *.h and *.inl from trunk/include # and from pypy/_interfaces diff --git a/pypy/translator/driver.py b/pypy/translator/driver.py --- a/pypy/translator/driver.py +++ b/pypy/translator/driver.py @@ -560,8 +560,8 @@ shutil.copy(str(soname), str(newsoname)) self.log.info("copied: %s" % (newsoname,)) if sys.platform == 'win32': - shutil.copyfile(os.path.join(os.path.dirname(str(exename)), 'libpypy-c.lib'), - os.path.join(os.path.dirname(str(newexename)), 'python27.lib')) + shutil.copyfile(soname.new(ext='lib'), + newsoname.new(ext='lib')) self.c_entryp = newexename self.log.info('usession directory: %s' % (udir,)) self.log.info("created: %s" % (self.c_entryp,)) From noreply at buildbot.pypy.org Fri Feb 17 12:15:00 2012 From: 
noreply at buildbot.pypy.org (RonnyPfannschmidt) Date: Fri, 17 Feb 2012 12:15:00 +0100 (CET) Subject: [pypy-commit] pypy pytest: merge default into pytest Message-ID: <20120217111500.0D54D8204C@wyvern.cs.uni-duesseldorf.de> Author: Ronny Pfannschmidt Branch: pytest Changeset: r52580:e49cc0372f83 Date: 2012-02-17 12:14 +0100 http://bitbucket.org/pypy/pypy/changeset/e49cc0372f83/ Log: merge default into pytest diff too long, truncating to 10000 out of 159486 lines diff --git a/ctypes_configure/cbuild.py b/ctypes_configure/cbuild.py --- a/ctypes_configure/cbuild.py +++ b/ctypes_configure/cbuild.py @@ -206,8 +206,9 @@ cfiles += eci.separate_module_files include_dirs = list(eci.include_dirs) library_dirs = list(eci.library_dirs) - if sys.platform == 'darwin': # support Fink & Darwinports - for s in ('/sw/', '/opt/local/'): + if (sys.platform == 'darwin' or # support Fink & Darwinports + sys.platform.startswith('freebsd')): + for s in ('/sw/', '/opt/local/', '/usr/local/'): if s + 'include' not in include_dirs and \ os.path.exists(s + 'include'): include_dirs.append(s + 'include') @@ -380,9 +381,9 @@ self.link_extra += ['-pthread'] if sys.platform == 'win32': self.link_extra += ['/DEBUG'] # generate .pdb file - if sys.platform == 'darwin': - # support Fink & Darwinports - for s in ('/sw/', '/opt/local/'): + if (sys.platform == 'darwin' or # support Fink & Darwinports + sys.platform.startswith('freebsd')): + for s in ('/sw/', '/opt/local/', '/usr/local/'): if s + 'include' not in self.include_dirs and \ os.path.exists(s + 'include'): self.include_dirs.append(s + 'include') @@ -395,7 +396,6 @@ self.outputfilename = py.path.local(cfilenames[0]).new(ext=ext) else: self.outputfilename = py.path.local(outputfilename) - self.eci = eci def build(self, noerr=False): basename = self.outputfilename.new(ext='') @@ -436,7 +436,7 @@ old = cfile.dirpath().chdir() try: res = compiler.compile([cfile.basename], - include_dirs=self.eci.include_dirs, + include_dirs=self.include_dirs, extra_preargs=self.compile_extra) assert len(res) == 1 cobjfile = py.path.local(res[0]) @@ -445,9 +445,9 @@ finally: old.chdir() compiler.link_executable(objects, str(self.outputfilename), - libraries=self.eci.libraries, + libraries=self.libraries, extra_preargs=self.link_extra, - library_dirs=self.eci.library_dirs) + library_dirs=self.library_dirs) def build_executable(*args, **kwds): noerr = kwds.pop('noerr', False) diff --git a/lib-python/2.7/BaseHTTPServer.py b/lib-python/2.7/BaseHTTPServer.py --- a/lib-python/2.7/BaseHTTPServer.py +++ b/lib-python/2.7/BaseHTTPServer.py @@ -310,7 +310,13 @@ """ try: - self.raw_requestline = self.rfile.readline() + self.raw_requestline = self.rfile.readline(65537) + if len(self.raw_requestline) > 65536: + self.requestline = '' + self.request_version = '' + self.command = '' + self.send_error(414) + return if not self.raw_requestline: self.close_connection = 1 return diff --git a/lib-python/2.7/ConfigParser.py b/lib-python/2.7/ConfigParser.py --- a/lib-python/2.7/ConfigParser.py +++ b/lib-python/2.7/ConfigParser.py @@ -545,6 +545,38 @@ if isinstance(val, list): options[name] = '\n'.join(val) +import UserDict as _UserDict + +class _Chainmap(_UserDict.DictMixin): + """Combine multiple mappings for successive lookups. 
+ + For example, to emulate Python's normal lookup sequence: + + import __builtin__ + pylookup = _Chainmap(locals(), globals(), vars(__builtin__)) + """ + + def __init__(self, *maps): + self._maps = maps + + def __getitem__(self, key): + for mapping in self._maps: + try: + return mapping[key] + except KeyError: + pass + raise KeyError(key) + + def keys(self): + result = [] + seen = set() + for mapping in self_maps: + for key in mapping: + if key not in seen: + result.append(key) + seen.add(key) + return result + class ConfigParser(RawConfigParser): def get(self, section, option, raw=False, vars=None): @@ -559,16 +591,18 @@ The section DEFAULT is special. """ - d = self._defaults.copy() + sectiondict = {} try: - d.update(self._sections[section]) + sectiondict = self._sections[section] except KeyError: if section != DEFAULTSECT: raise NoSectionError(section) # Update with the entry specific variables + vardict = {} if vars: for key, value in vars.items(): - d[self.optionxform(key)] = value + vardict[self.optionxform(key)] = value + d = _Chainmap(vardict, sectiondict, self._defaults) option = self.optionxform(option) try: value = d[option] diff --git a/lib-python/2.7/Cookie.py b/lib-python/2.7/Cookie.py --- a/lib-python/2.7/Cookie.py +++ b/lib-python/2.7/Cookie.py @@ -258,6 +258,11 @@ '\033' : '\\033', '\034' : '\\034', '\035' : '\\035', '\036' : '\\036', '\037' : '\\037', + # Because of the way browsers really handle cookies (as opposed + # to what the RFC says) we also encode , and ; + + ',' : '\\054', ';' : '\\073', + '"' : '\\"', '\\' : '\\\\', '\177' : '\\177', '\200' : '\\200', '\201' : '\\201', diff --git a/lib-python/2.7/HTMLParser.py b/lib-python/2.7/HTMLParser.py --- a/lib-python/2.7/HTMLParser.py +++ b/lib-python/2.7/HTMLParser.py @@ -26,7 +26,7 @@ tagfind = re.compile('[a-zA-Z][-.a-zA-Z0-9:_]*') attrfind = re.compile( r'\s*([a-zA-Z_][-.:a-zA-Z_0-9]*)(\s*=\s*' - r'(\'[^\']*\'|"[^"]*"|[-a-zA-Z0-9./,:;+*%?!&$\(\)_#=~@]*))?') + r'(\'[^\']*\'|"[^"]*"|[^\s"\'=<>`]*))?') locatestarttagend = re.compile(r""" <[a-zA-Z][-.a-zA-Z0-9:_]* # tag name @@ -99,7 +99,7 @@ markupbase.ParserBase.reset(self) def feed(self, data): - """Feed data to the parser. + r"""Feed data to the parser. Call this as often as you want, with as little or as much text as you want (may include '\n'). 
@@ -367,13 +367,16 @@ return s def replaceEntities(s): s = s.groups()[0] - if s[0] == "#": - s = s[1:] - if s[0] in ['x','X']: - c = int(s[1:], 16) - else: - c = int(s) - return unichr(c) + try: + if s[0] == "#": + s = s[1:] + if s[0] in ['x','X']: + c = int(s[1:], 16) + else: + c = int(s) + return unichr(c) + except ValueError: + return '&#'+s+';' else: # Cannot use name2codepoint directly, because HTMLParser supports apos, # which is not part of HTML 4 diff --git a/lib-python/2.7/SimpleHTTPServer.py b/lib-python/2.7/SimpleHTTPServer.py --- a/lib-python/2.7/SimpleHTTPServer.py +++ b/lib-python/2.7/SimpleHTTPServer.py @@ -15,6 +15,7 @@ import BaseHTTPServer import urllib import cgi +import sys import shutil import mimetypes try: @@ -131,7 +132,8 @@ length = f.tell() f.seek(0) self.send_response(200) - self.send_header("Content-type", "text/html") + encoding = sys.getfilesystemencoding() + self.send_header("Content-type", "text/html; charset=%s" % encoding) self.send_header("Content-Length", str(length)) self.end_headers() return f diff --git a/lib-python/2.7/SimpleXMLRPCServer.py b/lib-python/2.7/SimpleXMLRPCServer.py --- a/lib-python/2.7/SimpleXMLRPCServer.py +++ b/lib-python/2.7/SimpleXMLRPCServer.py @@ -246,7 +246,7 @@ marshalled data. For backwards compatibility, a dispatch function can be provided as an argument (see comment in SimpleXMLRPCRequestHandler.do_POST) but overriding the - existing method through subclassing is the prefered means + existing method through subclassing is the preferred means of changing method dispatch behavior. """ diff --git a/lib-python/2.7/SocketServer.py b/lib-python/2.7/SocketServer.py --- a/lib-python/2.7/SocketServer.py +++ b/lib-python/2.7/SocketServer.py @@ -675,7 +675,7 @@ # A timeout to apply to the request socket, if not None. timeout = None - # Disable nagle algoritm for this socket, if True. + # Disable nagle algorithm for this socket, if True. # Use only when wbufsize != 0, to avoid small packets. disable_nagle_algorithm = False diff --git a/lib-python/2.7/StringIO.py b/lib-python/2.7/StringIO.py --- a/lib-python/2.7/StringIO.py +++ b/lib-python/2.7/StringIO.py @@ -266,6 +266,7 @@ 8th bit) will cause a UnicodeError to be raised when getvalue() is called. """ + _complain_ifclosed(self.closed) if self.buflist: self.buf += ''.join(self.buflist) self.buflist = [] diff --git a/lib-python/2.7/_abcoll.py b/lib-python/2.7/_abcoll.py --- a/lib-python/2.7/_abcoll.py +++ b/lib-python/2.7/_abcoll.py @@ -82,7 +82,7 @@ @classmethod def __subclasshook__(cls, C): if cls is Iterator: - if _hasattr(C, "next"): + if _hasattr(C, "next") and _hasattr(C, "__iter__"): return True return NotImplemented diff --git a/lib-python/2.7/_pyio.py b/lib-python/2.7/_pyio.py --- a/lib-python/2.7/_pyio.py +++ b/lib-python/2.7/_pyio.py @@ -16,6 +16,7 @@ import io from io import (__all__, SEEK_SET, SEEK_CUR, SEEK_END) +from errno import EINTR __metaclass__ = type @@ -559,7 +560,11 @@ if not data: break res += data - return bytes(res) + if res: + return bytes(res) + else: + # b'' or None + return data def readinto(self, b): """Read up to len(b) bytes into b. 
@@ -678,7 +683,7 @@ """ def __init__(self, raw): - self.raw = raw + self._raw = raw ### Positioning ### @@ -722,8 +727,8 @@ if self.raw is None: raise ValueError("raw stream already detached") self.flush() - raw = self.raw - self.raw = None + raw = self._raw + self._raw = None return raw ### Inquiries ### @@ -738,6 +743,10 @@ return self.raw.writable() @property + def raw(self): + return self._raw + + @property def closed(self): return self.raw.closed @@ -933,7 +942,12 @@ current_size = 0 while True: # Read until EOF or until read() would block. - chunk = self.raw.read() + try: + chunk = self.raw.read() + except IOError as e: + if e.errno != EINTR: + raise + continue if chunk in empty_values: nodata_val = chunk break @@ -952,7 +966,12 @@ chunks = [buf[pos:]] wanted = max(self.buffer_size, n) while avail < n: - chunk = self.raw.read(wanted) + try: + chunk = self.raw.read(wanted) + except IOError as e: + if e.errno != EINTR: + raise + continue if chunk in empty_values: nodata_val = chunk break @@ -981,7 +1000,14 @@ have = len(self._read_buf) - self._read_pos if have < want or have <= 0: to_read = self.buffer_size - have - current = self.raw.read(to_read) + while True: + try: + current = self.raw.read(to_read) + except IOError as e: + if e.errno != EINTR: + raise + continue + break if current: self._read_buf = self._read_buf[self._read_pos:] + current self._read_pos = 0 @@ -1088,7 +1114,12 @@ written = 0 try: while self._write_buf: - n = self.raw.write(self._write_buf) + try: + n = self.raw.write(self._write_buf) + except IOError as e: + if e.errno != EINTR: + raise + continue if n > len(self._write_buf) or n < 0: raise IOError("write() returned incorrect number of bytes") del self._write_buf[:n] @@ -1456,7 +1487,7 @@ if not isinstance(errors, basestring): raise ValueError("invalid errors: %r" % errors) - self.buffer = buffer + self._buffer = buffer self._line_buffering = line_buffering self._encoding = encoding self._errors = errors @@ -1511,6 +1542,10 @@ def line_buffering(self): return self._line_buffering + @property + def buffer(self): + return self._buffer + def seekable(self): return self._seekable @@ -1724,8 +1759,8 @@ if self.buffer is None: raise ValueError("buffer is already detached") self.flush() - buffer = self.buffer - self.buffer = None + buffer = self._buffer + self._buffer = None return buffer def seek(self, cookie, whence=0): diff --git a/lib-python/2.7/_weakrefset.py b/lib-python/2.7/_weakrefset.py --- a/lib-python/2.7/_weakrefset.py +++ b/lib-python/2.7/_weakrefset.py @@ -66,7 +66,11 @@ return sum(x() is not None for x in self.data) def __contains__(self, item): - return ref(item) in self.data + try: + wr = ref(item) + except TypeError: + return False + return wr in self.data def __reduce__(self): return (self.__class__, (list(self),), diff --git a/lib-python/2.7/anydbm.py b/lib-python/2.7/anydbm.py --- a/lib-python/2.7/anydbm.py +++ b/lib-python/2.7/anydbm.py @@ -29,17 +29,8 @@ list = d.keys() # return a list of all existing keys (slow!) Future versions may change the order in which implementations are -tested for existence, add interfaces to other dbm-like +tested for existence, and add interfaces to other dbm-like implementations. - -The open function has an optional second argument. This can be 'r', -for read-only access, 'w', for read-write access of an existing -database, 'c' for read-write access to a new or existing database, and -'n' for read-write access to a new database. The default is 'r'. 
- -Note: 'r' and 'w' fail if the database doesn't exist; 'c' creates it -only if it doesn't exist; and 'n' always creates a new database. - """ class error(Exception): @@ -63,7 +54,18 @@ error = tuple(_errors) -def open(file, flag = 'r', mode = 0666): +def open(file, flag='r', mode=0666): + """Open or create database at path given by *file*. + + Optional argument *flag* can be 'r' (default) for read-only access, 'w' + for read-write access of an existing database, 'c' for read-write access + to a new or existing database, and 'n' for read-write access to a new + database. + + Note: 'r' and 'w' fail if the database doesn't exist; 'c' creates it + only if it doesn't exist; and 'n' always creates a new database. + """ + # guess the type of an existing database from whichdb import whichdb result=whichdb(file) diff --git a/lib-python/2.7/argparse.py b/lib-python/2.7/argparse.py --- a/lib-python/2.7/argparse.py +++ b/lib-python/2.7/argparse.py @@ -82,6 +82,7 @@ ] +import collections as _collections import copy as _copy import os as _os import re as _re @@ -1037,7 +1038,7 @@ self._prog_prefix = prog self._parser_class = parser_class - self._name_parser_map = {} + self._name_parser_map = _collections.OrderedDict() self._choices_actions = [] super(_SubParsersAction, self).__init__( @@ -1080,7 +1081,7 @@ parser = self._name_parser_map[parser_name] except KeyError: tup = parser_name, ', '.join(self._name_parser_map) - msg = _('unknown parser %r (choices: %s)' % tup) + msg = _('unknown parser %r (choices: %s)') % tup raise ArgumentError(self, msg) # parse all the remaining options into the namespace @@ -1109,7 +1110,7 @@ the builtin open() function. """ - def __init__(self, mode='r', bufsize=None): + def __init__(self, mode='r', bufsize=-1): self._mode = mode self._bufsize = bufsize @@ -1121,18 +1122,19 @@ elif 'w' in self._mode: return _sys.stdout else: - msg = _('argument "-" with mode %r' % self._mode) + msg = _('argument "-" with mode %r') % self._mode raise ValueError(msg) # all other arguments are used as file names - if self._bufsize: + try: return open(string, self._mode, self._bufsize) - else: - return open(string, self._mode) + except IOError as e: + message = _("can't open '%s': %s") + raise ArgumentTypeError(message % (string, e)) def __repr__(self): - args = [self._mode, self._bufsize] - args_str = ', '.join([repr(arg) for arg in args if arg is not None]) + args = self._mode, self._bufsize + args_str = ', '.join(repr(arg) for arg in args if arg != -1) return '%s(%s)' % (type(self).__name__, args_str) # =========================== @@ -1275,13 +1277,20 @@ # create the action object, and add it to the parser action_class = self._pop_action_class(kwargs) if not _callable(action_class): - raise ValueError('unknown action "%s"' % action_class) + raise ValueError('unknown action "%s"' % (action_class,)) action = action_class(**kwargs) # raise an error if the action type is not callable type_func = self._registry_get('type', action.type, action.type) if not _callable(type_func): - raise ValueError('%r is not callable' % type_func) + raise ValueError('%r is not callable' % (type_func,)) + + # raise an error if the metavar does not match the type + if hasattr(self, "_get_formatter"): + try: + self._get_formatter()._format_args(action, None) + except TypeError: + raise ValueError("length of metavar tuple does not match nargs") return self._add_action(action) @@ -1481,6 +1490,7 @@ self._defaults = container._defaults self._has_negative_number_optionals = \ container._has_negative_number_optionals + 
self._mutually_exclusive_groups = container._mutually_exclusive_groups def _add_action(self, action): action = super(_ArgumentGroup, self)._add_action(action) diff --git a/lib-python/2.7/ast.py b/lib-python/2.7/ast.py --- a/lib-python/2.7/ast.py +++ b/lib-python/2.7/ast.py @@ -29,12 +29,12 @@ from _ast import __version__ -def parse(expr, filename='', mode='exec'): +def parse(source, filename='', mode='exec'): """ - Parse an expression into an AST node. - Equivalent to compile(expr, filename, mode, PyCF_ONLY_AST). + Parse the source into an AST node. + Equivalent to compile(source, filename, mode, PyCF_ONLY_AST). """ - return compile(expr, filename, mode, PyCF_ONLY_AST) + return compile(source, filename, mode, PyCF_ONLY_AST) def literal_eval(node_or_string): @@ -152,8 +152,6 @@ Increment the line number of each node in the tree starting at *node* by *n*. This is useful to "move code" to a different location in a file. """ - if 'lineno' in node._attributes: - node.lineno = getattr(node, 'lineno', 0) + n for child in walk(node): if 'lineno' in child._attributes: child.lineno = getattr(child, 'lineno', 0) + n @@ -204,9 +202,9 @@ def walk(node): """ - Recursively yield all child nodes of *node*, in no specified order. This is - useful if you only want to modify nodes in place and don't care about the - context. + Recursively yield all descendant nodes in the tree starting at *node* + (including *node* itself), in no specified order. This is useful if you + only want to modify nodes in place and don't care about the context. """ from collections import deque todo = deque([node]) diff --git a/lib-python/2.7/asyncore.py b/lib-python/2.7/asyncore.py --- a/lib-python/2.7/asyncore.py +++ b/lib-python/2.7/asyncore.py @@ -54,7 +54,11 @@ import os from errno import EALREADY, EINPROGRESS, EWOULDBLOCK, ECONNRESET, EINVAL, \ - ENOTCONN, ESHUTDOWN, EINTR, EISCONN, EBADF, ECONNABORTED, errorcode + ENOTCONN, ESHUTDOWN, EINTR, EISCONN, EBADF, ECONNABORTED, EPIPE, EAGAIN, \ + errorcode + +_DISCONNECTED = frozenset((ECONNRESET, ENOTCONN, ESHUTDOWN, ECONNABORTED, EPIPE, + EBADF)) try: socket_map @@ -109,7 +113,7 @@ if flags & (select.POLLHUP | select.POLLERR | select.POLLNVAL): obj.handle_close() except socket.error, e: - if e.args[0] not in (EBADF, ECONNRESET, ENOTCONN, ESHUTDOWN, ECONNABORTED): + if e.args[0] not in _DISCONNECTED: obj.handle_error() else: obj.handle_close() @@ -353,7 +357,7 @@ except TypeError: return None except socket.error as why: - if why.args[0] in (EWOULDBLOCK, ECONNABORTED): + if why.args[0] in (EWOULDBLOCK, ECONNABORTED, EAGAIN): return None else: raise @@ -367,7 +371,7 @@ except socket.error, why: if why.args[0] == EWOULDBLOCK: return 0 - elif why.args[0] in (ECONNRESET, ENOTCONN, ESHUTDOWN, ECONNABORTED): + elif why.args[0] in _DISCONNECTED: self.handle_close() return 0 else: @@ -385,7 +389,7 @@ return data except socket.error, why: # winsock sometimes throws ENOTCONN - if why.args[0] in [ECONNRESET, ENOTCONN, ESHUTDOWN, ECONNABORTED]: + if why.args[0] in _DISCONNECTED: self.handle_close() return '' else: diff --git a/lib-python/2.7/bdb.py b/lib-python/2.7/bdb.py --- a/lib-python/2.7/bdb.py +++ b/lib-python/2.7/bdb.py @@ -250,6 +250,12 @@ list.append(lineno) bp = Breakpoint(filename, lineno, temporary, cond, funcname) + def _prune_breaks(self, filename, lineno): + if (filename, lineno) not in Breakpoint.bplist: + self.breaks[filename].remove(lineno) + if not self.breaks[filename]: + del self.breaks[filename] + def clear_break(self, filename, lineno): filename = self.canonic(filename) 
if not filename in self.breaks: @@ -261,10 +267,7 @@ # pair, then remove the breaks entry for bp in Breakpoint.bplist[filename, lineno][:]: bp.deleteMe() - if (filename, lineno) not in Breakpoint.bplist: - self.breaks[filename].remove(lineno) - if not self.breaks[filename]: - del self.breaks[filename] + self._prune_breaks(filename, lineno) def clear_bpbynumber(self, arg): try: @@ -277,7 +280,8 @@ return 'Breakpoint number (%d) out of range' % number if not bp: return 'Breakpoint (%d) already deleted' % number - self.clear_break(bp.file, bp.line) + bp.deleteMe() + self._prune_breaks(bp.file, bp.line) def clear_all_file_breaks(self, filename): filename = self.canonic(filename) diff --git a/lib-python/2.7/collections.py b/lib-python/2.7/collections.py --- a/lib-python/2.7/collections.py +++ b/lib-python/2.7/collections.py @@ -6,59 +6,38 @@ __all__ += _abcoll.__all__ from _collections import deque, defaultdict -from operator import itemgetter as _itemgetter, eq as _eq +from operator import itemgetter as _itemgetter from keyword import iskeyword as _iskeyword import sys as _sys import heapq as _heapq -from itertools import repeat as _repeat, chain as _chain, starmap as _starmap, \ - ifilter as _ifilter, imap as _imap +from itertools import repeat as _repeat, chain as _chain, starmap as _starmap + try: - from thread import get_ident + from thread import get_ident as _get_ident except ImportError: - from dummy_thread import get_ident - -def _recursive_repr(user_function): - 'Decorator to make a repr function return "..." for a recursive call' - repr_running = set() - - def wrapper(self): - key = id(self), get_ident() - if key in repr_running: - return '...' - repr_running.add(key) - try: - result = user_function(self) - finally: - repr_running.discard(key) - return result - - # Can't use functools.wraps() here because of bootstrap issues - wrapper.__module__ = getattr(user_function, '__module__') - wrapper.__doc__ = getattr(user_function, '__doc__') - wrapper.__name__ = getattr(user_function, '__name__') - return wrapper + from dummy_thread import get_ident as _get_ident ################################################################################ ### OrderedDict ################################################################################ -class OrderedDict(dict, MutableMapping): +class OrderedDict(dict): 'Dictionary that remembers insertion order' # An inherited dict maps keys to values. # The inherited dict provides __getitem__, __len__, __contains__, and get. # The remaining methods are order-aware. - # Big-O running times for all methods are the same as for regular dictionaries. + # Big-O running times for all methods are the same as regular dictionaries. - # The internal self.__map dictionary maps keys to links in a doubly linked list. + # The internal self.__map dict maps keys to links in a doubly linked list. # The circular doubly linked list starts and ends with a sentinel element. # The sentinel element never gets deleted (this simplifies the algorithm). # Each link is stored as a list of length three: [PREV, NEXT, KEY]. def __init__(self, *args, **kwds): - '''Initialize an ordered dictionary. Signature is the same as for - regular dictionaries, but keyword arguments are not recommended - because their insertion order is arbitrary. + '''Initialize an ordered dictionary. The signature is the same as + regular dictionaries, but keyword arguments are not recommended because + their insertion order is arbitrary. 
''' if len(args) > 1: @@ -66,17 +45,15 @@ try: self.__root except AttributeError: - self.__root = root = [None, None, None] # sentinel node - PREV = 0 - NEXT = 1 - root[PREV] = root[NEXT] = root + self.__root = root = [] # sentinel node + root[:] = [root, root, None] self.__map = {} - self.update(*args, **kwds) + self.__update(*args, **kwds) def __setitem__(self, key, value, PREV=0, NEXT=1, dict_setitem=dict.__setitem__): 'od.__setitem__(i, y) <==> od[i]=y' - # Setting a new item creates a new link which goes at the end of the linked - # list, and the inherited dictionary is updated with the new key/value pair. + # Setting a new item creates a new link at the end of the linked list, + # and the inherited dictionary is updated with the new key/value pair. if key not in self: root = self.__root last = root[PREV] @@ -85,65 +62,160 @@ def __delitem__(self, key, PREV=0, NEXT=1, dict_delitem=dict.__delitem__): 'od.__delitem__(y) <==> del od[y]' - # Deleting an existing item uses self.__map to find the link which is - # then removed by updating the links in the predecessor and successor nodes. + # Deleting an existing item uses self.__map to find the link which gets + # removed by updating the links in the predecessor and successor nodes. dict_delitem(self, key) - link = self.__map.pop(key) - link_prev = link[PREV] - link_next = link[NEXT] + link_prev, link_next, key = self.__map.pop(key) link_prev[NEXT] = link_next link_next[PREV] = link_prev - def __iter__(self, NEXT=1, KEY=2): + def __iter__(self): 'od.__iter__() <==> iter(od)' # Traverse the linked list in order. + NEXT, KEY = 1, 2 root = self.__root curr = root[NEXT] while curr is not root: yield curr[KEY] curr = curr[NEXT] - def __reversed__(self, PREV=0, KEY=2): + def __reversed__(self): 'od.__reversed__() <==> reversed(od)' # Traverse the linked list in reverse order. + PREV, KEY = 0, 2 root = self.__root curr = root[PREV] while curr is not root: yield curr[KEY] curr = curr[PREV] + def clear(self): + 'od.clear() -> None. Remove all items from od.' + for node in self.__map.itervalues(): + del node[:] + root = self.__root + root[:] = [root, root, None] + self.__map.clear() + dict.clear(self) + + # -- the following methods do not depend on the internal structure -- + + def keys(self): + 'od.keys() -> list of keys in od' + return list(self) + + def values(self): + 'od.values() -> list of values in od' + return [self[key] for key in self] + + def items(self): + 'od.items() -> list of (key, value) pairs in od' + return [(key, self[key]) for key in self] + + def iterkeys(self): + 'od.iterkeys() -> an iterator over the keys in od' + return iter(self) + + def itervalues(self): + 'od.itervalues -> an iterator over the values in od' + for k in self: + yield self[k] + + def iteritems(self): + 'od.iteritems -> an iterator over the (key, value) pairs in od' + for k in self: + yield (k, self[k]) + + update = MutableMapping.update + + __update = update # let subclasses override update without breaking __init__ + + __marker = object() + + def pop(self, key, default=__marker): + '''od.pop(k[,d]) -> v, remove specified key and return the corresponding + value. If key is not found, d is returned if given, otherwise KeyError + is raised. 
+ + ''' + if key in self: + result = self[key] + del self[key] + return result + if default is self.__marker: + raise KeyError(key) + return default + + def setdefault(self, key, default=None): + 'od.setdefault(k[,d]) -> od.get(k,d), also set od[k]=d if k not in od' + if key in self: + return self[key] + self[key] = default + return default + + def popitem(self, last=True): + '''od.popitem() -> (k, v), return and remove a (key, value) pair. + Pairs are returned in LIFO order if last is true or FIFO order if false. + + ''' + if not self: + raise KeyError('dictionary is empty') + key = next(reversed(self) if last else iter(self)) + value = self.pop(key) + return key, value + + def __repr__(self, _repr_running={}): + 'od.__repr__() <==> repr(od)' + call_key = id(self), _get_ident() + if call_key in _repr_running: + return '...' + _repr_running[call_key] = 1 + try: + if not self: + return '%s()' % (self.__class__.__name__,) + return '%s(%r)' % (self.__class__.__name__, self.items()) + finally: + del _repr_running[call_key] + def __reduce__(self): 'Return state information for pickling' items = [[k, self[k]] for k in self] - tmp = self.__map, self.__root - del self.__map, self.__root inst_dict = vars(self).copy() - self.__map, self.__root = tmp + for k in vars(OrderedDict()): + inst_dict.pop(k, None) if inst_dict: return (self.__class__, (items,), inst_dict) return self.__class__, (items,) - def clear(self): - 'od.clear() -> None. Remove all items from od.' - try: - for node in self.__map.itervalues(): - del node[:] - self.__root[:] = [self.__root, self.__root, None] - self.__map.clear() - except AttributeError: - pass - dict.clear(self) + def copy(self): + 'od.copy() -> a shallow copy of od' + return self.__class__(self) - setdefault = MutableMapping.setdefault - update = MutableMapping.update - pop = MutableMapping.pop - keys = MutableMapping.keys - values = MutableMapping.values - items = MutableMapping.items - iterkeys = MutableMapping.iterkeys - itervalues = MutableMapping.itervalues - iteritems = MutableMapping.iteritems - __ne__ = MutableMapping.__ne__ + @classmethod + def fromkeys(cls, iterable, value=None): + '''OD.fromkeys(S[, v]) -> New ordered dictionary with keys from S. + If not specified, the value defaults to None. + + ''' + self = cls() + for key in iterable: + self[key] = value + return self + + def __eq__(self, other): + '''od.__eq__(y) <==> od==y. Comparison to another OD is order-sensitive + while comparison to a regular mapping is order-insensitive. + + ''' + if isinstance(other, OrderedDict): + return len(self)==len(other) and self.items() == other.items() + return dict.__eq__(self, other) + + def __ne__(self, other): + 'od.__ne__(y) <==> od!=y' + return not self == other + + # -- the following methods support python 3.x style dictionary views -- def viewkeys(self): "od.viewkeys() -> a set-like object providing a view on od's keys" @@ -157,49 +229,6 @@ "od.viewitems() -> a set-like object providing a view on od's items" return ItemsView(self) - def popitem(self, last=True): - '''od.popitem() -> (k, v), return and remove a (key, value) pair. - Pairs are returned in LIFO order if last is true or FIFO order if false. 
- - ''' - if not self: - raise KeyError('dictionary is empty') - key = next(reversed(self) if last else iter(self)) - value = self.pop(key) - return key, value - - @_recursive_repr - def __repr__(self): - 'od.__repr__() <==> repr(od)' - if not self: - return '%s()' % (self.__class__.__name__,) - return '%s(%r)' % (self.__class__.__name__, self.items()) - - def copy(self): - 'od.copy() -> a shallow copy of od' - return self.__class__(self) - - @classmethod - def fromkeys(cls, iterable, value=None): - '''OD.fromkeys(S[, v]) -> New ordered dictionary with keys from S - and values equal to v (which defaults to None). - - ''' - d = cls() - for key in iterable: - d[key] = value - return d - - def __eq__(self, other): - '''od.__eq__(y) <==> od==y. Comparison to another OD is order-sensitive - while comparison to a regular mapping is order-insensitive. - - ''' - if isinstance(other, OrderedDict): - return len(self)==len(other) and \ - all(_imap(_eq, self.iteritems(), other.iteritems())) - return dict.__eq__(self, other) - ################################################################################ ### namedtuple @@ -328,16 +357,16 @@ or multiset. Elements are stored as dictionary keys and their counts are stored as dictionary values. - >>> c = Counter('abracadabra') # count elements from a string + >>> c = Counter('abcdeabcdabcaba') # count elements from a string >>> c.most_common(3) # three most common elements - [('a', 5), ('r', 2), ('b', 2)] + [('a', 5), ('b', 4), ('c', 3)] >>> sorted(c) # list all unique elements - ['a', 'b', 'c', 'd', 'r'] + ['a', 'b', 'c', 'd', 'e'] >>> ''.join(sorted(c.elements())) # list elements with repetitions - 'aaaaabbcdrr' + 'aaaaabbbbcccdde' >>> sum(c.values()) # total of all counts - 11 + 15 >>> c['a'] # count of letter 'a' 5 @@ -345,8 +374,8 @@ ... c[elem] += 1 # by adding 1 to each element's count >>> c['a'] # now there are seven 'a' 7 - >>> del c['r'] # remove all 'r' - >>> c['r'] # now there are zero 'r' + >>> del c['b'] # remove all 'b' + >>> c['b'] # now there are zero 'b' 0 >>> d = Counter('simsalabim') # make another counter @@ -385,6 +414,7 @@ >>> c = Counter(a=4, b=2) # a new counter from keyword args ''' + super(Counter, self).__init__() self.update(iterable, **kwds) def __missing__(self, key): @@ -396,8 +426,8 @@ '''List the n most common elements and their counts from the most common to the least. If n is None, then list all element counts. - >>> Counter('abracadabra').most_common(3) - [('a', 5), ('r', 2), ('b', 2)] + >>> Counter('abcdeabcdabcaba').most_common(3) + [('a', 5), ('b', 4), ('c', 3)] ''' # Emulate Bag.sortedByCount from Smalltalk @@ -463,7 +493,7 @@ for elem, count in iterable.iteritems(): self[elem] = self_get(elem, 0) + count else: - dict.update(self, iterable) # fast path when counter is empty + super(Counter, self).update(iterable) # fast path when counter is empty else: self_get = self.get for elem in iterable: @@ -499,13 +529,16 @@ self.subtract(kwds) def copy(self): - 'Like dict.copy() but returns a Counter instance instead of a dict.' - return Counter(self) + 'Return a shallow copy.' + return self.__class__(self) + + def __reduce__(self): + return self.__class__, (dict(self),) def __delitem__(self, elem): 'Like dict.__delitem__() but does not raise KeyError for missing values.' 
if elem in self: - dict.__delitem__(self, elem) + super(Counter, self).__delitem__(elem) def __repr__(self): if not self: @@ -532,10 +565,13 @@ if not isinstance(other, Counter): return NotImplemented result = Counter() - for elem in set(self) | set(other): - newcount = self[elem] + other[elem] + for elem, count in self.items(): + newcount = count + other[elem] if newcount > 0: result[elem] = newcount + for elem, count in other.items(): + if elem not in self and count > 0: + result[elem] = count return result def __sub__(self, other): @@ -548,10 +584,13 @@ if not isinstance(other, Counter): return NotImplemented result = Counter() - for elem in set(self) | set(other): - newcount = self[elem] - other[elem] + for elem, count in self.items(): + newcount = count - other[elem] if newcount > 0: result[elem] = newcount + for elem, count in other.items(): + if elem not in self and count < 0: + result[elem] = 0 - count return result def __or__(self, other): @@ -564,11 +603,14 @@ if not isinstance(other, Counter): return NotImplemented result = Counter() - for elem in set(self) | set(other): - p, q = self[elem], other[elem] - newcount = q if p < q else p + for elem, count in self.items(): + other_count = other[elem] + newcount = other_count if count < other_count else count if newcount > 0: result[elem] = newcount + for elem, count in other.items(): + if elem not in self and count > 0: + result[elem] = count return result def __and__(self, other): @@ -581,11 +623,9 @@ if not isinstance(other, Counter): return NotImplemented result = Counter() - if len(self) < len(other): - self, other = other, self - for elem in _ifilter(self.__contains__, other): - p, q = self[elem], other[elem] - newcount = p if p < q else q + for elem, count in self.items(): + other_count = other[elem] + newcount = count if count < other_count else other_count if newcount > 0: result[elem] = newcount return result diff --git a/lib-python/2.7/compileall.py b/lib-python/2.7/compileall.py --- a/lib-python/2.7/compileall.py +++ b/lib-python/2.7/compileall.py @@ -9,7 +9,6 @@ packages -- for now, you'll have to deal with packages separately.) See module py_compile for details of the actual byte-compilation. - """ import os import sys @@ -31,7 +30,6 @@ directory name that will show up in error messages) force: if 1, force compilation, even if timestamps are up-to-date quiet: if 1, be quiet during compilation - """ if not quiet: print 'Listing', dir, '...' @@ -61,15 +59,16 @@ return success def compile_file(fullname, ddir=None, force=0, rx=None, quiet=0): - """Byte-compile file. - file: the file to byte-compile + """Byte-compile one file. + + Arguments (only fullname is required): + + fullname: the file to byte-compile ddir: if given, purported directory name (this is the directory name that will show up in error messages) force: if 1, force compilation, even if timestamps are up-to-date quiet: if 1, be quiet during compilation - """ - success = 1 name = os.path.basename(fullname) if ddir is not None: @@ -120,7 +119,6 @@ maxlevels: max recursion level (default 0) force: as for compile_dir() (default 0) quiet: as for compile_dir() (default 0) - """ success = 1 for dir in sys.path: diff --git a/lib-python/2.7/csv.py b/lib-python/2.7/csv.py --- a/lib-python/2.7/csv.py +++ b/lib-python/2.7/csv.py @@ -281,7 +281,7 @@ an all or nothing approach, so we allow for small variations in this number. 1) build a table of the frequency of each character on every line. 
- 2) build a table of freqencies of this frequency (meta-frequency?), + 2) build a table of frequencies of this frequency (meta-frequency?), e.g. 'x occurred 5 times in 10 rows, 6 times in 1000 rows, 7 times in 2 rows' 3) use the mode of the meta-frequency to determine the /expected/ diff --git a/lib-python/2.7/ctypes/test/test_arrays.py b/lib-python/2.7/ctypes/test/test_arrays.py --- a/lib-python/2.7/ctypes/test/test_arrays.py +++ b/lib-python/2.7/ctypes/test/test_arrays.py @@ -37,7 +37,7 @@ values = [ia[i] for i in range(len(init))] self.assertEqual(values, [0] * len(init)) - # Too many in itializers should be caught + # Too many initializers should be caught self.assertRaises(IndexError, int_array, *range(alen*2)) CharArray = ARRAY(c_char, 3) diff --git a/lib-python/2.7/ctypes/test/test_as_parameter.py b/lib-python/2.7/ctypes/test/test_as_parameter.py --- a/lib-python/2.7/ctypes/test/test_as_parameter.py +++ b/lib-python/2.7/ctypes/test/test_as_parameter.py @@ -187,6 +187,18 @@ self.assertEqual((s8i.a, s8i.b, s8i.c, s8i.d, s8i.e, s8i.f, s8i.g, s8i.h), (9*2, 8*3, 7*4, 6*5, 5*6, 4*7, 3*8, 2*9)) + def test_recursive_as_param(self): + from ctypes import c_int + + class A(object): + pass + + a = A() + a._as_parameter_ = a + with self.assertRaises(RuntimeError): + c_int.from_param(a) + + #~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ class AsParamWrapper(object): diff --git a/lib-python/2.7/ctypes/test/test_callbacks.py b/lib-python/2.7/ctypes/test/test_callbacks.py --- a/lib-python/2.7/ctypes/test/test_callbacks.py +++ b/lib-python/2.7/ctypes/test/test_callbacks.py @@ -206,6 +206,42 @@ windll.user32.EnumWindows(EnumWindowsCallbackFunc, 0) + def test_callback_register_int(self): + # Issue #8275: buggy handling of callback args under Win64 + # NOTE: should be run on release builds as well + dll = CDLL(_ctypes_test.__file__) + CALLBACK = CFUNCTYPE(c_int, c_int, c_int, c_int, c_int, c_int) + # All this function does is call the callback with its args squared + func = dll._testfunc_cbk_reg_int + func.argtypes = (c_int, c_int, c_int, c_int, c_int, CALLBACK) + func.restype = c_int + + def callback(a, b, c, d, e): + return a + b + c + d + e + + result = func(2, 3, 4, 5, 6, CALLBACK(callback)) + self.assertEqual(result, callback(2*2, 3*3, 4*4, 5*5, 6*6)) + + def test_callback_register_double(self): + # Issue #8275: buggy handling of callback args under Win64 + # NOTE: should be run on release builds as well + dll = CDLL(_ctypes_test.__file__) + CALLBACK = CFUNCTYPE(c_double, c_double, c_double, c_double, + c_double, c_double) + # All this function does is call the callback with its args squared + func = dll._testfunc_cbk_reg_double + func.argtypes = (c_double, c_double, c_double, + c_double, c_double, CALLBACK) + func.restype = c_double + + def callback(a, b, c, d, e): + return a + b + c + d + e + + result = func(1.1, 2.2, 3.3, 4.4, 5.5, CALLBACK(callback)) + self.assertEqual(result, + callback(1.1*1.1, 2.2*2.2, 3.3*3.3, 4.4*4.4, 5.5*5.5)) + + ################################################################ if __name__ == '__main__': diff --git a/lib-python/2.7/ctypes/test/test_functions.py b/lib-python/2.7/ctypes/test/test_functions.py --- a/lib-python/2.7/ctypes/test/test_functions.py +++ b/lib-python/2.7/ctypes/test/test_functions.py @@ -116,7 +116,7 @@ self.assertEqual(result, 21) self.assertEqual(type(result), int) - # You cannot assing character format codes as restype any longer + # You cannot assign character format codes as restype any longer self.assertRaises(TypeError, setattr, f, 
"restype", "i") def test_floatresult(self): diff --git a/lib-python/2.7/ctypes/test/test_init.py b/lib-python/2.7/ctypes/test/test_init.py --- a/lib-python/2.7/ctypes/test/test_init.py +++ b/lib-python/2.7/ctypes/test/test_init.py @@ -27,7 +27,7 @@ self.assertEqual((y.x.a, y.x.b), (0, 0)) self.assertEqual(y.x.new_was_called, False) - # But explicitely creating an X structure calls __new__ and __init__, of course. + # But explicitly creating an X structure calls __new__ and __init__, of course. x = X() self.assertEqual((x.a, x.b), (9, 12)) self.assertEqual(x.new_was_called, True) diff --git a/lib-python/2.7/ctypes/test/test_numbers.py b/lib-python/2.7/ctypes/test/test_numbers.py --- a/lib-python/2.7/ctypes/test/test_numbers.py +++ b/lib-python/2.7/ctypes/test/test_numbers.py @@ -157,7 +157,7 @@ def test_int_from_address(self): from array import array for t in signed_types + unsigned_types: - # the array module doesn't suppport all format codes + # the array module doesn't support all format codes # (no 'q' or 'Q') try: array(t._type_) diff --git a/lib-python/2.7/ctypes/test/test_win32.py b/lib-python/2.7/ctypes/test/test_win32.py --- a/lib-python/2.7/ctypes/test/test_win32.py +++ b/lib-python/2.7/ctypes/test/test_win32.py @@ -17,7 +17,7 @@ # ValueError: Procedure probably called with not enough arguments (4 bytes missing) self.assertRaises(ValueError, IsWindow) - # This one should succeeed... + # This one should succeed... self.assertEqual(0, IsWindow(0)) # ValueError: Procedure probably called with too many arguments (8 bytes in excess) diff --git a/lib-python/2.7/curses/wrapper.py b/lib-python/2.7/curses/wrapper.py --- a/lib-python/2.7/curses/wrapper.py +++ b/lib-python/2.7/curses/wrapper.py @@ -43,7 +43,8 @@ return func(stdscr, *args, **kwds) finally: # Set everything back to normal - stdscr.keypad(0) - curses.echo() - curses.nocbreak() - curses.endwin() + if 'stdscr' in locals(): + stdscr.keypad(0) + curses.echo() + curses.nocbreak() + curses.endwin() diff --git a/lib-python/2.7/decimal.py b/lib-python/2.7/decimal.py --- a/lib-python/2.7/decimal.py +++ b/lib-python/2.7/decimal.py @@ -1068,14 +1068,16 @@ if ans: return ans - if not self: - # -Decimal('0') is Decimal('0'), not Decimal('-0') + if context is None: + context = getcontext() + + if not self and context.rounding != ROUND_FLOOR: + # -Decimal('0') is Decimal('0'), not Decimal('-0'), except + # in ROUND_FLOOR rounding mode. ans = self.copy_abs() else: ans = self.copy_negate() - if context is None: - context = getcontext() return ans._fix(context) def __pos__(self, context=None): @@ -1088,14 +1090,15 @@ if ans: return ans - if not self: - # + (-0) = 0 + if context is None: + context = getcontext() + + if not self and context.rounding != ROUND_FLOOR: + # + (-0) = 0, except in ROUND_FLOOR rounding mode. 
ans = self.copy_abs() else: ans = Decimal(self) - if context is None: - context = getcontext() return ans._fix(context) def __abs__(self, round=True, context=None): @@ -1680,7 +1683,7 @@ self = _dec_from_triple(self._sign, '1', exp_min-1) digits = 0 rounding_method = self._pick_rounding_function[context.rounding] - changed = getattr(self, rounding_method)(digits) + changed = rounding_method(self, digits) coeff = self._int[:digits] or '0' if changed > 0: coeff = str(int(coeff)+1) @@ -1720,8 +1723,6 @@ # here self was representable to begin with; return unchanged return Decimal(self) - _pick_rounding_function = {} - # for each of the rounding functions below: # self is a finite, nonzero Decimal # prec is an integer satisfying 0 <= prec < len(self._int) @@ -1788,6 +1789,17 @@ else: return -self._round_down(prec) + _pick_rounding_function = dict( + ROUND_DOWN = _round_down, + ROUND_UP = _round_up, + ROUND_HALF_UP = _round_half_up, + ROUND_HALF_DOWN = _round_half_down, + ROUND_HALF_EVEN = _round_half_even, + ROUND_CEILING = _round_ceiling, + ROUND_FLOOR = _round_floor, + ROUND_05UP = _round_05up, + ) + def fma(self, other, third, context=None): """Fused multiply-add. @@ -2492,8 +2504,8 @@ if digits < 0: self = _dec_from_triple(self._sign, '1', exp-1) digits = 0 - this_function = getattr(self, self._pick_rounding_function[rounding]) - changed = this_function(digits) + this_function = self._pick_rounding_function[rounding] + changed = this_function(self, digits) coeff = self._int[:digits] or '0' if changed == 1: coeff = str(int(coeff)+1) @@ -3705,18 +3717,6 @@ ##### Context class ####################################################### - -# get rounding method function: -rounding_functions = [name for name in Decimal.__dict__.keys() - if name.startswith('_round_')] -for name in rounding_functions: - # name is like _round_half_even, goes to the global ROUND_HALF_EVEN value. - globalname = name[1:].upper() - val = globals()[globalname] - Decimal._pick_rounding_function[val] = name - -del name, val, globalname, rounding_functions - class _ContextManager(object): """Context manager class to support localcontext(). @@ -5990,7 +5990,7 @@ def _format_align(sign, body, spec): """Given an unpadded, non-aligned numeric string 'body' and sign - string 'sign', add padding and aligment conforming to the given + string 'sign', add padding and alignment conforming to the given format specifier dictionary 'spec' (as produced by parse_format_specifier). 
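A minimal sketch, not taken from the patch, of the dispatch pattern the decimal.py hunks above switch to: the rounding-mode constants map directly to the rounding functions, and callers pass the instance explicitly instead of looking the method up by name with getattr(). The class, digits and simplified rounding rules below are invented for illustration:

    ROUND_DOWN, ROUND_UP = 'ROUND_DOWN', 'ROUND_UP'

    class Number(object):
        def __init__(self, digits):
            self.digits = digits

        def _round_down(self, prec):
            # keep the first `prec` digits, drop the rest
            return self.digits[:prec]

        def _round_up(self, prec):
            # keep the first `prec` digits and bump the last kept digit
            head = self.digits[:prec]
            return head[:-1] + str(int(head[-1]) + 1)

        # Built at class-definition time, so the values are plain functions
        # rather than bound methods; callers supply the instance themselves.
        _pick_rounding_function = dict(
            ROUND_DOWN=_round_down,
            ROUND_UP=_round_up,
        )

    n = Number('2718')
    rounding_method = Number._pick_rounding_function[ROUND_UP]
    print rounding_method(n, 2)   # prints '28'

Defining the table inside the class removes the old module-level loop (deleted in the hunk above) that patched Decimal._pick_rounding_function after the fact, and avoids a getattr() lookup on every rounding operation.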
diff --git a/lib-python/2.7/difflib.py b/lib-python/2.7/difflib.py --- a/lib-python/2.7/difflib.py +++ b/lib-python/2.7/difflib.py @@ -1140,6 +1140,21 @@ return ch in ws +######################################################################## +### Unified Diff +######################################################################## + +def _format_range_unified(start, stop): + 'Convert range to the "ed" format' + # Per the diff spec at http://www.unix.org/single_unix_specification/ + beginning = start + 1 # lines start numbering with one + length = stop - start + if length == 1: + return '{}'.format(beginning) + if not length: + beginning -= 1 # empty ranges begin at line just before the range + return '{},{}'.format(beginning, length) + def unified_diff(a, b, fromfile='', tofile='', fromfiledate='', tofiledate='', n=3, lineterm='\n'): r""" @@ -1184,25 +1199,45 @@ started = False for group in SequenceMatcher(None,a,b).get_grouped_opcodes(n): if not started: - fromdate = '\t%s' % fromfiledate if fromfiledate else '' - todate = '\t%s' % tofiledate if tofiledate else '' - yield '--- %s%s%s' % (fromfile, fromdate, lineterm) - yield '+++ %s%s%s' % (tofile, todate, lineterm) started = True - i1, i2, j1, j2 = group[0][1], group[-1][2], group[0][3], group[-1][4] - yield "@@ -%d,%d +%d,%d @@%s" % (i1+1, i2-i1, j1+1, j2-j1, lineterm) + fromdate = '\t{}'.format(fromfiledate) if fromfiledate else '' + todate = '\t{}'.format(tofiledate) if tofiledate else '' + yield '--- {}{}{}'.format(fromfile, fromdate, lineterm) + yield '+++ {}{}{}'.format(tofile, todate, lineterm) + + first, last = group[0], group[-1] + file1_range = _format_range_unified(first[1], last[2]) + file2_range = _format_range_unified(first[3], last[4]) + yield '@@ -{} +{} @@{}'.format(file1_range, file2_range, lineterm) + for tag, i1, i2, j1, j2 in group: if tag == 'equal': for line in a[i1:i2]: yield ' ' + line continue - if tag == 'replace' or tag == 'delete': + if tag in ('replace', 'delete'): for line in a[i1:i2]: yield '-' + line - if tag == 'replace' or tag == 'insert': + if tag in ('replace', 'insert'): for line in b[j1:j2]: yield '+' + line + +######################################################################## +### Context Diff +######################################################################## + +def _format_range_context(start, stop): + 'Convert range to the "ed" format' + # Per the diff spec at http://www.unix.org/single_unix_specification/ + beginning = start + 1 # lines start numbering with one + length = stop - start + if not length: + beginning -= 1 # empty ranges begin at line just before the range + if length <= 1: + return '{}'.format(beginning) + return '{},{}'.format(beginning, beginning + length - 1) + # See http://www.unix.org/single_unix_specification/ def context_diff(a, b, fromfile='', tofile='', fromfiledate='', tofiledate='', n=3, lineterm='\n'): @@ -1247,38 +1282,36 @@ four """ + prefix = dict(insert='+ ', delete='- ', replace='! ', equal=' ') started = False - prefixmap = {'insert':'+ ', 'delete':'- ', 'replace':'! 
', 'equal':' '} for group in SequenceMatcher(None,a,b).get_grouped_opcodes(n): if not started: - fromdate = '\t%s' % fromfiledate if fromfiledate else '' - todate = '\t%s' % tofiledate if tofiledate else '' - yield '*** %s%s%s' % (fromfile, fromdate, lineterm) - yield '--- %s%s%s' % (tofile, todate, lineterm) started = True + fromdate = '\t{}'.format(fromfiledate) if fromfiledate else '' + todate = '\t{}'.format(tofiledate) if tofiledate else '' + yield '*** {}{}{}'.format(fromfile, fromdate, lineterm) + yield '--- {}{}{}'.format(tofile, todate, lineterm) - yield '***************%s' % (lineterm,) - if group[-1][2] - group[0][1] >= 2: - yield '*** %d,%d ****%s' % (group[0][1]+1, group[-1][2], lineterm) - else: - yield '*** %d ****%s' % (group[-1][2], lineterm) - visiblechanges = [e for e in group if e[0] in ('replace', 'delete')] - if visiblechanges: + first, last = group[0], group[-1] + yield '***************' + lineterm + + file1_range = _format_range_context(first[1], last[2]) + yield '*** {} ****{}'.format(file1_range, lineterm) + + if any(tag in ('replace', 'delete') for tag, _, _, _, _ in group): for tag, i1, i2, _, _ in group: if tag != 'insert': for line in a[i1:i2]: - yield prefixmap[tag] + line + yield prefix[tag] + line - if group[-1][4] - group[0][3] >= 2: - yield '--- %d,%d ----%s' % (group[0][3]+1, group[-1][4], lineterm) - else: - yield '--- %d ----%s' % (group[-1][4], lineterm) - visiblechanges = [e for e in group if e[0] in ('replace', 'insert')] - if visiblechanges: + file2_range = _format_range_context(first[3], last[4]) + yield '--- {} ----{}'.format(file2_range, lineterm) + + if any(tag in ('replace', 'insert') for tag, _, _, _, _ in group): for tag, _, _, j1, j2 in group: if tag != 'delete': for line in b[j1:j2]: - yield prefixmap[tag] + line + yield prefix[tag] + line def ndiff(a, b, linejunk=None, charjunk=IS_CHARACTER_JUNK): r""" @@ -1714,7 +1747,7 @@ line = line.replace(' ','\0') # expand tabs into spaces line = line.expandtabs(self._tabsize) - # relace spaces from expanded tabs back into tab characters + # replace spaces from expanded tabs back into tab characters # (we'll replace them with markup after we do differencing) line = line.replace(' ','\t') return line.replace('\0',' ').rstrip('\n') diff --git a/lib-python/2.7/distutils/__init__.py b/lib-python/2.7/distutils/__init__.py --- a/lib-python/2.7/distutils/__init__.py +++ b/lib-python/2.7/distutils/__init__.py @@ -15,5 +15,5 @@ # Updated automatically by the Python release process. # #--start constants-- -__version__ = "2.7.1" +__version__ = "2.7.2" #--end constants-- diff --git a/lib-python/2.7/distutils/archive_util.py b/lib-python/2.7/distutils/archive_util.py --- a/lib-python/2.7/distutils/archive_util.py +++ b/lib-python/2.7/distutils/archive_util.py @@ -121,7 +121,7 @@ def make_zipfile(base_name, base_dir, verbose=0, dry_run=0): """Create a zip file from all the files under 'base_dir'. - The output zip file will be named 'base_dir' + ".zip". Uses either the + The output zip file will be named 'base_name' + ".zip". Uses either the "zipfile" Python module (if available) or the InfoZIP "zip" utility (if installed and found on the default search path). If neither tool is available, raises DistutilsExecError. 
Returns the name of the output zip diff --git a/lib-python/2.7/distutils/cmd.py b/lib-python/2.7/distutils/cmd.py --- a/lib-python/2.7/distutils/cmd.py +++ b/lib-python/2.7/distutils/cmd.py @@ -377,7 +377,7 @@ dry_run=self.dry_run) def move_file (self, src, dst, level=1): - """Move a file respectin dry-run flag.""" + """Move a file respecting dry-run flag.""" return file_util.move_file(src, dst, dry_run = self.dry_run) def spawn (self, cmd, search_path=1, level=1): diff --git a/lib-python/2.7/distutils/command/build_ext.py b/lib-python/2.7/distutils/command/build_ext.py --- a/lib-python/2.7/distutils/command/build_ext.py +++ b/lib-python/2.7/distutils/command/build_ext.py @@ -207,7 +207,7 @@ elif MSVC_VERSION == 8: self.library_dirs.append(os.path.join(sys.exec_prefix, - 'PC', 'VS8.0', 'win32release')) + 'PC', 'VS8.0')) elif MSVC_VERSION == 7: self.library_dirs.append(os.path.join(sys.exec_prefix, 'PC', 'VS7.1')) diff --git a/lib-python/2.7/distutils/command/sdist.py b/lib-python/2.7/distutils/command/sdist.py --- a/lib-python/2.7/distutils/command/sdist.py +++ b/lib-python/2.7/distutils/command/sdist.py @@ -306,17 +306,20 @@ rstrip_ws=1, collapse_join=1) - while 1: - line = template.readline() - if line is None: # end of file - break + try: + while 1: + line = template.readline() + if line is None: # end of file + break - try: - self.filelist.process_template_line(line) - except DistutilsTemplateError, msg: - self.warn("%s, line %d: %s" % (template.filename, - template.current_line, - msg)) + try: + self.filelist.process_template_line(line) + except DistutilsTemplateError, msg: + self.warn("%s, line %d: %s" % (template.filename, + template.current_line, + msg)) + finally: + template.close() def prune_file_list(self): """Prune off branches that might slip into the file list as created diff --git a/lib-python/2.7/distutils/command/upload.py b/lib-python/2.7/distutils/command/upload.py --- a/lib-python/2.7/distutils/command/upload.py +++ b/lib-python/2.7/distutils/command/upload.py @@ -176,6 +176,9 @@ result = urlopen(request) status = result.getcode() reason = result.msg + if self.show_response: + msg = '\n'.join(('-' * 75, r.read(), '-' * 75)) + self.announce(msg, log.INFO) except socket.error, e: self.announce(str(e), log.ERROR) return @@ -189,6 +192,3 @@ else: self.announce('Upload failed (%s): %s' % (status, reason), log.ERROR) - if self.show_response: - msg = '\n'.join(('-' * 75, r.read(), '-' * 75)) - self.announce(msg, log.INFO) diff --git a/lib-python/2.7/distutils/sysconfig.py b/lib-python/2.7/distutils/sysconfig.py --- a/lib-python/2.7/distutils/sysconfig.py +++ b/lib-python/2.7/distutils/sysconfig.py @@ -389,7 +389,7 @@ cur_target = os.getenv('MACOSX_DEPLOYMENT_TARGET', '') if cur_target == '': cur_target = cfg_target - os.putenv('MACOSX_DEPLOYMENT_TARGET', cfg_target) + os.environ['MACOSX_DEPLOYMENT_TARGET'] = cfg_target elif map(int, cfg_target.split('.')) > map(int, cur_target.split('.')): my_msg = ('$MACOSX_DEPLOYMENT_TARGET mismatch: now "%s" but "%s" during configure' % (cur_target, cfg_target)) diff --git a/lib-python/2.7/distutils/tests/__init__.py b/lib-python/2.7/distutils/tests/__init__.py --- a/lib-python/2.7/distutils/tests/__init__.py +++ b/lib-python/2.7/distutils/tests/__init__.py @@ -15,9 +15,10 @@ import os import sys import unittest +from test.test_support import run_unittest -here = os.path.dirname(__file__) +here = os.path.dirname(__file__) or os.curdir def test_suite(): @@ -32,4 +33,4 @@ if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + 
run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_archive_util.py b/lib-python/2.7/distutils/tests/test_archive_util.py --- a/lib-python/2.7/distutils/tests/test_archive_util.py +++ b/lib-python/2.7/distutils/tests/test_archive_util.py @@ -12,7 +12,7 @@ ARCHIVE_FORMATS) from distutils.spawn import find_executable, spawn from distutils.tests import support -from test.test_support import check_warnings +from test.test_support import check_warnings, run_unittest try: import grp @@ -281,4 +281,4 @@ return unittest.makeSuite(ArchiveUtilTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_bdist_msi.py b/lib-python/2.7/distutils/tests/test_bdist_msi.py --- a/lib-python/2.7/distutils/tests/test_bdist_msi.py +++ b/lib-python/2.7/distutils/tests/test_bdist_msi.py @@ -11,7 +11,7 @@ support.LoggingSilencer, unittest.TestCase): - def test_minial(self): + def test_minimal(self): # minimal test XXX need more tests from distutils.command.bdist_msi import bdist_msi pkg_pth, dist = self.create_dist() diff --git a/lib-python/2.7/distutils/tests/test_build.py b/lib-python/2.7/distutils/tests/test_build.py --- a/lib-python/2.7/distutils/tests/test_build.py +++ b/lib-python/2.7/distutils/tests/test_build.py @@ -2,6 +2,7 @@ import unittest import os import sys +from test.test_support import run_unittest from distutils.command.build import build from distutils.tests import support @@ -51,4 +52,4 @@ return unittest.makeSuite(BuildTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_build_clib.py b/lib-python/2.7/distutils/tests/test_build_clib.py --- a/lib-python/2.7/distutils/tests/test_build_clib.py +++ b/lib-python/2.7/distutils/tests/test_build_clib.py @@ -3,6 +3,8 @@ import os import sys +from test.test_support import run_unittest + from distutils.command.build_clib import build_clib from distutils.errors import DistutilsSetupError from distutils.tests import support @@ -140,4 +142,4 @@ return unittest.makeSuite(BuildCLibTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_build_ext.py b/lib-python/2.7/distutils/tests/test_build_ext.py --- a/lib-python/2.7/distutils/tests/test_build_ext.py +++ b/lib-python/2.7/distutils/tests/test_build_ext.py @@ -3,12 +3,13 @@ import tempfile import shutil from StringIO import StringIO +import textwrap from distutils.core import Extension, Distribution from distutils.command.build_ext import build_ext from distutils import sysconfig from distutils.tests import support -from distutils.errors import DistutilsSetupError +from distutils.errors import DistutilsSetupError, CompileError import unittest from test import test_support @@ -430,6 +431,59 @@ wanted = os.path.join(cmd.build_lib, 'UpdateManager', 'fdsend' + ext) self.assertEqual(ext_path, wanted) + @unittest.skipUnless(sys.platform == 'darwin', 'test only relevant for MacOSX') + def test_deployment_target(self): + self._try_compile_deployment_target() + + orig_environ = os.environ + os.environ = orig_environ.copy() + self.addCleanup(setattr, os, 'environ', orig_environ) + + os.environ['MACOSX_DEPLOYMENT_TARGET']='10.1' + self._try_compile_deployment_target() + + + def _try_compile_deployment_target(self): + deptarget_c = os.path.join(self.tmp_dir, 'deptargetmodule.c') + + with 
open(deptarget_c, 'w') as fp: + fp.write(textwrap.dedent('''\ + #include + + int dummy; + + #if TARGET != MAC_OS_X_VERSION_MIN_REQUIRED + #error "Unexpected target" + #endif + + ''')) + + target = sysconfig.get_config_var('MACOSX_DEPLOYMENT_TARGET') + target = tuple(map(int, target.split('.'))) + target = '%02d%01d0' % target + + deptarget_ext = Extension( + 'deptarget', + [deptarget_c], + extra_compile_args=['-DTARGET=%s'%(target,)], + ) + dist = Distribution({ + 'name': 'deptarget', + 'ext_modules': [deptarget_ext] + }) + dist.package_dir = self.tmp_dir + cmd = build_ext(dist) + cmd.build_lib = self.tmp_dir + cmd.build_temp = self.tmp_dir + + try: + old_stdout = sys.stdout + cmd.ensure_finalized() + cmd.run() + + except CompileError: + self.fail("Wrong deployment target during compilation") + def test_suite(): return unittest.makeSuite(BuildExtTestCase) diff --git a/lib-python/2.7/distutils/tests/test_build_py.py b/lib-python/2.7/distutils/tests/test_build_py.py --- a/lib-python/2.7/distutils/tests/test_build_py.py +++ b/lib-python/2.7/distutils/tests/test_build_py.py @@ -10,13 +10,14 @@ from distutils.errors import DistutilsFileError from distutils.tests import support +from test.test_support import run_unittest class BuildPyTestCase(support.TempdirManager, support.LoggingSilencer, unittest.TestCase): - def _setup_package_data(self): + def test_package_data(self): sources = self.mkdtemp() f = open(os.path.join(sources, "__init__.py"), "w") try: @@ -56,20 +57,15 @@ self.assertEqual(len(cmd.get_outputs()), 3) pkgdest = os.path.join(destination, "pkg") files = os.listdir(pkgdest) - return files + self.assertIn("__init__.py", files) + self.assertIn("README.txt", files) + # XXX even with -O, distutils writes pyc, not pyo; bug? + if sys.dont_write_bytecode: + self.assertNotIn("__init__.pyc", files) + else: + self.assertIn("__init__.pyc", files) - def test_package_data(self): - files = self._setup_package_data() - self.assertTrue("__init__.py" in files) - self.assertTrue("README.txt" in files) - - @unittest.skipIf(sys.flags.optimize >= 2, - "pyc files are not written with -O2 and above") - def test_package_data_pyc(self): - files = self._setup_package_data() - self.assertTrue("__init__.pyc" in files) - - def test_empty_package_dir (self): + def test_empty_package_dir(self): # See SF 1668596/1720897. 
cwd = os.getcwd() @@ -117,10 +113,10 @@ finally: sys.dont_write_bytecode = old_dont_write_bytecode - self.assertTrue('byte-compiling is disabled' in self.logs[0][1]) + self.assertIn('byte-compiling is disabled', self.logs[0][1]) def test_suite(): return unittest.makeSuite(BuildPyTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_build_scripts.py b/lib-python/2.7/distutils/tests/test_build_scripts.py --- a/lib-python/2.7/distutils/tests/test_build_scripts.py +++ b/lib-python/2.7/distutils/tests/test_build_scripts.py @@ -8,6 +8,7 @@ import sysconfig from distutils.tests import support +from test.test_support import run_unittest class BuildScriptsTestCase(support.TempdirManager, @@ -108,4 +109,4 @@ return unittest.makeSuite(BuildScriptsTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_check.py b/lib-python/2.7/distutils/tests/test_check.py --- a/lib-python/2.7/distutils/tests/test_check.py +++ b/lib-python/2.7/distutils/tests/test_check.py @@ -1,5 +1,6 @@ """Tests for distutils.command.check.""" import unittest +from test.test_support import run_unittest from distutils.command.check import check, HAS_DOCUTILS from distutils.tests import support @@ -95,4 +96,4 @@ return unittest.makeSuite(CheckTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_clean.py b/lib-python/2.7/distutils/tests/test_clean.py --- a/lib-python/2.7/distutils/tests/test_clean.py +++ b/lib-python/2.7/distutils/tests/test_clean.py @@ -6,6 +6,7 @@ from distutils.command.clean import clean from distutils.tests import support +from test.test_support import run_unittest class cleanTestCase(support.TempdirManager, support.LoggingSilencer, @@ -38,7 +39,7 @@ self.assertTrue(not os.path.exists(path), '%s was not removed' % path) - # let's run the command again (should spit warnings but suceed) + # let's run the command again (should spit warnings but succeed) cmd.all = 1 cmd.ensure_finalized() cmd.run() @@ -47,4 +48,4 @@ return unittest.makeSuite(cleanTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_cmd.py b/lib-python/2.7/distutils/tests/test_cmd.py --- a/lib-python/2.7/distutils/tests/test_cmd.py +++ b/lib-python/2.7/distutils/tests/test_cmd.py @@ -99,7 +99,7 @@ def test_ensure_dirname(self): cmd = self.cmd - cmd.option1 = os.path.dirname(__file__) + cmd.option1 = os.path.dirname(__file__) or os.curdir cmd.ensure_dirname('option1') cmd.option2 = 'xxx' self.assertRaises(DistutilsOptionError, cmd.ensure_dirname, 'option2') diff --git a/lib-python/2.7/distutils/tests/test_config.py b/lib-python/2.7/distutils/tests/test_config.py --- a/lib-python/2.7/distutils/tests/test_config.py +++ b/lib-python/2.7/distutils/tests/test_config.py @@ -11,6 +11,7 @@ from distutils.log import WARN from distutils.tests import support +from test.test_support import run_unittest PYPIRC = """\ [distutils] @@ -119,4 +120,4 @@ return unittest.makeSuite(PyPIRCCommandTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_config_cmd.py b/lib-python/2.7/distutils/tests/test_config_cmd.py --- a/lib-python/2.7/distutils/tests/test_config_cmd.py 
+++ b/lib-python/2.7/distutils/tests/test_config_cmd.py @@ -2,6 +2,7 @@ import unittest import os import sys +from test.test_support import run_unittest from distutils.command.config import dump_file, config from distutils.tests import support @@ -86,4 +87,4 @@ return unittest.makeSuite(ConfigTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_core.py b/lib-python/2.7/distutils/tests/test_core.py --- a/lib-python/2.7/distutils/tests/test_core.py +++ b/lib-python/2.7/distutils/tests/test_core.py @@ -6,7 +6,7 @@ import shutil import sys import test.test_support -from test.test_support import captured_stdout +from test.test_support import captured_stdout, run_unittest import unittest from distutils.tests import support @@ -105,4 +105,4 @@ return unittest.makeSuite(CoreTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_dep_util.py b/lib-python/2.7/distutils/tests/test_dep_util.py --- a/lib-python/2.7/distutils/tests/test_dep_util.py +++ b/lib-python/2.7/distutils/tests/test_dep_util.py @@ -6,6 +6,7 @@ from distutils.dep_util import newer, newer_pairwise, newer_group from distutils.errors import DistutilsFileError from distutils.tests import support +from test.test_support import run_unittest class DepUtilTestCase(support.TempdirManager, unittest.TestCase): @@ -77,4 +78,4 @@ return unittest.makeSuite(DepUtilTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_dir_util.py b/lib-python/2.7/distutils/tests/test_dir_util.py --- a/lib-python/2.7/distutils/tests/test_dir_util.py +++ b/lib-python/2.7/distutils/tests/test_dir_util.py @@ -10,6 +10,7 @@ from distutils import log from distutils.tests import support +from test.test_support import run_unittest class DirUtilTestCase(support.TempdirManager, unittest.TestCase): @@ -112,4 +113,4 @@ return unittest.makeSuite(DirUtilTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_dist.py b/lib-python/2.7/distutils/tests/test_dist.py --- a/lib-python/2.7/distutils/tests/test_dist.py +++ b/lib-python/2.7/distutils/tests/test_dist.py @@ -11,7 +11,7 @@ from distutils.dist import Distribution, fix_help_options, DistributionMetadata from distutils.cmd import Command import distutils.dist -from test.test_support import TESTFN, captured_stdout +from test.test_support import TESTFN, captured_stdout, run_unittest from distutils.tests import support class test_dist(Command): @@ -433,4 +433,4 @@ return suite if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_file_util.py b/lib-python/2.7/distutils/tests/test_file_util.py --- a/lib-python/2.7/distutils/tests/test_file_util.py +++ b/lib-python/2.7/distutils/tests/test_file_util.py @@ -6,6 +6,7 @@ from distutils.file_util import move_file, write_file, copy_file from distutils import log from distutils.tests import support +from test.test_support import run_unittest class FileUtilTestCase(support.TempdirManager, unittest.TestCase): @@ -77,4 +78,4 @@ return unittest.makeSuite(FileUtilTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git 
a/lib-python/2.7/distutils/tests/test_filelist.py b/lib-python/2.7/distutils/tests/test_filelist.py --- a/lib-python/2.7/distutils/tests/test_filelist.py +++ b/lib-python/2.7/distutils/tests/test_filelist.py @@ -1,7 +1,7 @@ """Tests for distutils.filelist.""" from os.path import join import unittest -from test.test_support import captured_stdout +from test.test_support import captured_stdout, run_unittest from distutils.filelist import glob_to_re, FileList from distutils import debug @@ -82,4 +82,4 @@ return unittest.makeSuite(FileListTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_install.py b/lib-python/2.7/distutils/tests/test_install.py --- a/lib-python/2.7/distutils/tests/test_install.py +++ b/lib-python/2.7/distutils/tests/test_install.py @@ -3,6 +3,8 @@ import os import unittest +from test.test_support import run_unittest + from distutils.command.install import install from distutils.core import Distribution @@ -52,4 +54,4 @@ return unittest.makeSuite(InstallTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_install_data.py b/lib-python/2.7/distutils/tests/test_install_data.py --- a/lib-python/2.7/distutils/tests/test_install_data.py +++ b/lib-python/2.7/distutils/tests/test_install_data.py @@ -6,6 +6,7 @@ from distutils.command.install_data import install_data from distutils.tests import support +from test.test_support import run_unittest class InstallDataTestCase(support.TempdirManager, support.LoggingSilencer, @@ -73,4 +74,4 @@ return unittest.makeSuite(InstallDataTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_install_headers.py b/lib-python/2.7/distutils/tests/test_install_headers.py --- a/lib-python/2.7/distutils/tests/test_install_headers.py +++ b/lib-python/2.7/distutils/tests/test_install_headers.py @@ -6,6 +6,7 @@ from distutils.command.install_headers import install_headers from distutils.tests import support +from test.test_support import run_unittest class InstallHeadersTestCase(support.TempdirManager, support.LoggingSilencer, @@ -37,4 +38,4 @@ return unittest.makeSuite(InstallHeadersTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_install_lib.py b/lib-python/2.7/distutils/tests/test_install_lib.py --- a/lib-python/2.7/distutils/tests/test_install_lib.py +++ b/lib-python/2.7/distutils/tests/test_install_lib.py @@ -7,6 +7,7 @@ from distutils.extension import Extension from distutils.tests import support from distutils.errors import DistutilsOptionError +from test.test_support import run_unittest class InstallLibTestCase(support.TempdirManager, support.LoggingSilencer, @@ -103,4 +104,4 @@ return unittest.makeSuite(InstallLibTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_install_scripts.py b/lib-python/2.7/distutils/tests/test_install_scripts.py --- a/lib-python/2.7/distutils/tests/test_install_scripts.py +++ b/lib-python/2.7/distutils/tests/test_install_scripts.py @@ -7,6 +7,7 @@ from distutils.core import Distribution from distutils.tests import support +from test.test_support import run_unittest class 
InstallScriptsTestCase(support.TempdirManager, @@ -78,4 +79,4 @@ return unittest.makeSuite(InstallScriptsTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_msvc9compiler.py b/lib-python/2.7/distutils/tests/test_msvc9compiler.py --- a/lib-python/2.7/distutils/tests/test_msvc9compiler.py +++ b/lib-python/2.7/distutils/tests/test_msvc9compiler.py @@ -5,6 +5,7 @@ from distutils.errors import DistutilsPlatformError from distutils.tests import support +from test.test_support import run_unittest _MANIFEST = """\ @@ -137,4 +138,4 @@ return unittest.makeSuite(msvc9compilerTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_register.py b/lib-python/2.7/distutils/tests/test_register.py --- a/lib-python/2.7/distutils/tests/test_register.py +++ b/lib-python/2.7/distutils/tests/test_register.py @@ -7,7 +7,7 @@ import urllib2 import warnings -from test.test_support import check_warnings +from test.test_support import check_warnings, run_unittest from distutils.command import register as register_module from distutils.command.register import register @@ -138,7 +138,7 @@ # let's see what the server received : we should # have 2 similar requests - self.assertTrue(self.conn.reqs, 2) + self.assertEqual(len(self.conn.reqs), 2) req1 = dict(self.conn.reqs[0].headers) req2 = dict(self.conn.reqs[1].headers) self.assertEqual(req2['Content-length'], req1['Content-length']) @@ -168,7 +168,7 @@ del register_module.raw_input # we should have send a request - self.assertTrue(self.conn.reqs, 1) + self.assertEqual(len(self.conn.reqs), 1) req = self.conn.reqs[0] headers = dict(req.headers) self.assertEqual(headers['Content-length'], '608') @@ -186,7 +186,7 @@ del register_module.raw_input # we should have send a request - self.assertTrue(self.conn.reqs, 1) + self.assertEqual(len(self.conn.reqs), 1) req = self.conn.reqs[0] headers = dict(req.headers) self.assertEqual(headers['Content-length'], '290') @@ -258,4 +258,4 @@ return unittest.makeSuite(RegisterTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_sdist.py b/lib-python/2.7/distutils/tests/test_sdist.py --- a/lib-python/2.7/distutils/tests/test_sdist.py +++ b/lib-python/2.7/distutils/tests/test_sdist.py @@ -24,11 +24,9 @@ import tempfile import warnings -from test.test_support import check_warnings -from test.test_support import captured_stdout +from test.test_support import captured_stdout, check_warnings, run_unittest -from distutils.command.sdist import sdist -from distutils.command.sdist import show_formats +from distutils.command.sdist import sdist, show_formats from distutils.core import Distribution from distutils.tests.test_config import PyPIRCCommandTestCase from distutils.errors import DistutilsExecError, DistutilsOptionError @@ -372,7 +370,7 @@ # adding a file self.write_file((self.tmp_dir, 'somecode', 'doc2.txt'), '#') - # make sure build_py is reinitinialized, like a fresh run + # make sure build_py is reinitialized, like a fresh run build_py = dist.get_command_obj('build_py') build_py.finalized = False build_py.ensure_finalized() @@ -390,6 +388,7 @@ self.assertEqual(len(manifest2), 6) self.assertIn('doc2.txt', manifest2[-1]) + @unittest.skipUnless(zlib, "requires zlib") def test_manifest_marker(self): # check that autogenerated MANIFESTs have a 
marker dist, cmd = self.get_cmd() @@ -406,6 +405,7 @@ self.assertEqual(manifest[0], '# file GENERATED by distutils, do NOT edit') + @unittest.skipUnless(zlib, "requires zlib") def test_manual_manifest(self): # check that a MANIFEST without a marker is left alone dist, cmd = self.get_cmd() @@ -426,4 +426,4 @@ return unittest.makeSuite(SDistTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_spawn.py b/lib-python/2.7/distutils/tests/test_spawn.py --- a/lib-python/2.7/distutils/tests/test_spawn.py +++ b/lib-python/2.7/distutils/tests/test_spawn.py @@ -2,7 +2,7 @@ import unittest import os import time -from test.test_support import captured_stdout +from test.test_support import captured_stdout, run_unittest from distutils.spawn import _nt_quote_args from distutils.spawn import spawn, find_executable @@ -57,4 +57,4 @@ return unittest.makeSuite(SpawnTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_text_file.py b/lib-python/2.7/distutils/tests/test_text_file.py --- a/lib-python/2.7/distutils/tests/test_text_file.py +++ b/lib-python/2.7/distutils/tests/test_text_file.py @@ -3,6 +3,7 @@ import unittest from distutils.text_file import TextFile from distutils.tests import support +from test.test_support import run_unittest TEST_DATA = """# test file @@ -103,4 +104,4 @@ return unittest.makeSuite(TextFileTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_unixccompiler.py b/lib-python/2.7/distutils/tests/test_unixccompiler.py --- a/lib-python/2.7/distutils/tests/test_unixccompiler.py +++ b/lib-python/2.7/distutils/tests/test_unixccompiler.py @@ -1,6 +1,7 @@ """Tests for distutils.unixccompiler.""" import sys import unittest +from test.test_support import run_unittest from distutils import sysconfig from distutils.unixccompiler import UnixCCompiler @@ -126,4 +127,4 @@ return unittest.makeSuite(UnixCCompilerTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_upload.py b/lib-python/2.7/distutils/tests/test_upload.py --- a/lib-python/2.7/distutils/tests/test_upload.py +++ b/lib-python/2.7/distutils/tests/test_upload.py @@ -1,14 +1,13 @@ +# -*- encoding: utf8 -*- """Tests for distutils.command.upload.""" -# -*- encoding: utf8 -*- -import sys import os import unittest +from test.test_support import run_unittest from distutils.command import upload as upload_mod from distutils.command.upload import upload from distutils.core import Distribution -from distutils.tests import support from distutils.tests.test_config import PYPIRC, PyPIRCCommandTestCase PYPIRC_LONG_PASSWORD = """\ @@ -129,4 +128,4 @@ return unittest.makeSuite(uploadTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_util.py b/lib-python/2.7/distutils/tests/test_util.py --- a/lib-python/2.7/distutils/tests/test_util.py +++ b/lib-python/2.7/distutils/tests/test_util.py @@ -1,6 +1,7 @@ """Tests for distutils.util.""" import sys import unittest +from test.test_support import run_unittest from distutils.errors import DistutilsPlatformError, DistutilsByteCompileError from distutils.util import byte_compile @@ -21,4 +22,4 @@ return 
unittest.makeSuite(UtilTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_version.py b/lib-python/2.7/distutils/tests/test_version.py --- a/lib-python/2.7/distutils/tests/test_version.py +++ b/lib-python/2.7/distutils/tests/test_version.py @@ -2,6 +2,7 @@ import unittest from distutils.version import LooseVersion from distutils.version import StrictVersion +from test.test_support import run_unittest class VersionTestCase(unittest.TestCase): @@ -67,4 +68,4 @@ return unittest.makeSuite(VersionTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_versionpredicate.py b/lib-python/2.7/distutils/tests/test_versionpredicate.py --- a/lib-python/2.7/distutils/tests/test_versionpredicate.py +++ b/lib-python/2.7/distutils/tests/test_versionpredicate.py @@ -4,6 +4,10 @@ import distutils.versionpredicate import doctest +from test.test_support import run_unittest def test_suite(): return doctest.DocTestSuite(distutils.versionpredicate) + +if __name__ == '__main__': + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/util.py b/lib-python/2.7/distutils/util.py --- a/lib-python/2.7/distutils/util.py +++ b/lib-python/2.7/distutils/util.py @@ -97,9 +97,7 @@ from distutils.sysconfig import get_config_vars cfgvars = get_config_vars() - macver = os.environ.get('MACOSX_DEPLOYMENT_TARGET') - if not macver: - macver = cfgvars.get('MACOSX_DEPLOYMENT_TARGET') + macver = cfgvars.get('MACOSX_DEPLOYMENT_TARGET') if 1: # Always calculate the release of the running machine, diff --git a/lib-python/2.7/doctest.py b/lib-python/2.7/doctest.py --- a/lib-python/2.7/doctest.py +++ b/lib-python/2.7/doctest.py @@ -1217,7 +1217,7 @@ # Process each example. for examplenum, example in enumerate(test.examples): - # If REPORT_ONLY_FIRST_FAILURE is set, then supress + # If REPORT_ONLY_FIRST_FAILURE is set, then suppress # reporting after the first failure. quiet = (self.optionflags & REPORT_ONLY_FIRST_FAILURE and failures > 0) @@ -2186,7 +2186,7 @@ caller can catch the errors and initiate post-mortem debugging. The DocTestCase provides a debug method that raises - UnexpectedException errors if there is an unexepcted + UnexpectedException errors if there is an unexpected exception: >>> test = DocTestParser().get_doctest('>>> raise KeyError\n42', diff --git a/lib-python/2.7/email/charset.py b/lib-python/2.7/email/charset.py --- a/lib-python/2.7/email/charset.py +++ b/lib-python/2.7/email/charset.py @@ -209,7 +209,7 @@ input_charset = unicode(input_charset, 'ascii') except UnicodeError: raise errors.CharsetError(input_charset) - input_charset = input_charset.lower() + input_charset = input_charset.lower().encode('ascii') # Set the input charset after filtering through the aliases and/or codecs if not (input_charset in ALIASES or input_charset in CHARSETS): try: diff --git a/lib-python/2.7/email/generator.py b/lib-python/2.7/email/generator.py --- a/lib-python/2.7/email/generator.py +++ b/lib-python/2.7/email/generator.py @@ -202,18 +202,13 @@ g = self.clone(s) g.flatten(part, unixfrom=False) msgtexts.append(s.getvalue()) - # Now make sure the boundary we've selected doesn't appear in any of - # the message texts. - alltext = NL.join(msgtexts) # BAW: What about boundaries that are wrapped in double-quotes? 
- boundary = msg.get_boundary(failobj=_make_boundary(alltext)) - # If we had to calculate a new boundary because the body text - # contained that string, set the new boundary. We don't do it - # unconditionally because, while set_boundary() preserves order, it - # doesn't preserve newlines/continuations in headers. This is no big - # deal in practice, but turns out to be inconvenient for the unittest - # suite. - if msg.get_boundary() != boundary: + boundary = msg.get_boundary() + if not boundary: + # Create a boundary that doesn't appear in any of the + # message texts. + alltext = NL.join(msgtexts) + boundary = _make_boundary(alltext) msg.set_boundary(boundary) # If there's a preamble, write it out, with a trailing CRLF if msg.preamble is not None: @@ -292,7 +287,7 @@ _FMT = '[Non-text (%(type)s) part of message omitted, filename %(filename)s]' class DecodedGenerator(Generator): - """Generator a text representation of a message. + """Generates a text representation of a message. Like the Generator base class, except that non-text parts are substituted with a format string representing the part. diff --git a/lib-python/2.7/email/header.py b/lib-python/2.7/email/header.py --- a/lib-python/2.7/email/header.py +++ b/lib-python/2.7/email/header.py @@ -47,6 +47,10 @@ # For use with .match() fcre = re.compile(r'[\041-\176]+:$') +# Find a header embedded in a putative header value. Used to check for +# header injection attack. +_embeded_header = re.compile(r'\n[^ \t]+:') + # Helpers @@ -403,7 +407,11 @@ newchunks += self._split(s, charset, targetlen, splitchars) lastchunk, lastcharset = newchunks[-1] lastlen = lastcharset.encoded_header_len(lastchunk) - return self._encode_chunks(newchunks, maxlinelen) + value = self._encode_chunks(newchunks, maxlinelen) + if _embeded_header.search(value): + raise HeaderParseError("header value appears to contain " + "an embedded header: {!r}".format(value)) + return value diff --git a/lib-python/2.7/email/message.py b/lib-python/2.7/email/message.py --- a/lib-python/2.7/email/message.py +++ b/lib-python/2.7/email/message.py @@ -38,7 +38,9 @@ def _formatparam(param, value=None, quote=True): """Convenience function to format and return a key=value pair. - This will quote the value if needed or if quote is true. + This will quote the value if needed or if quote is true. If value is a + three tuple (charset, language, value), it will be encoded according + to RFC2231 rules. """ if value is not None and len(value) > 0: # A tuple is used for RFC 2231 encoded parameter values where items @@ -97,7 +99,7 @@ objects, otherwise it is a string. Message objects implement part of the `mapping' interface, which assumes - there is exactly one occurrance of the header per message. Some headers + there is exactly one occurrence of the header per message. Some headers do in fact appear multiple times (e.g. Received) and for those headers, you must use the explicit API to set or get all the headers. Not all of the mapping methods are implemented. @@ -286,7 +288,7 @@ Return None if the header is missing instead of raising an exception. Note that if the header appeared multiple times, exactly which - occurrance gets returned is undefined. Use get_all() to get all + occurrence gets returned is undefined. Use get_all() to get all the values matching a header field name. """ return self.get(name) @@ -389,7 +391,10 @@ name is the header field to add. keyword arguments can be used to set additional parameters for the header field, with underscores converted to dashes. 
Normally the parameter will be added as key="value" unless - value is None, in which case only the key will be added. + value is None, in which case only the key will be added. If a + parameter value contains non-ASCII characters it must be specified as a + three-tuple of (charset, language, value), in which case it will be + encoded according to RFC2231 rules. Example: diff --git a/lib-python/2.7/email/mime/application.py b/lib-python/2.7/email/mime/application.py --- a/lib-python/2.7/email/mime/application.py +++ b/lib-python/2.7/email/mime/application.py @@ -17,7 +17,7 @@ _encoder=encoders.encode_base64, **_params): """Create an application/* type MIME document. - _data is a string containing the raw applicatoin data. + _data is a string containing the raw application data. _subtype is the MIME content type subtype, defaulting to 'octet-stream'. diff --git a/lib-python/2.7/email/test/data/msg_26.txt b/lib-python/2.7/email/test/data/msg_26.txt --- a/lib-python/2.7/email/test/data/msg_26.txt +++ b/lib-python/2.7/email/test/data/msg_26.txt @@ -42,4 +42,4 @@ MzMAAAAACH97tzAAAAALu3c3gAAAAAAL+7tzDABAu7f7cAAAAAAACA+3MA7EQAv/sIAA AAAAAAAIAAAAAAAAAIAAAAAA ---1618492860--2051301190--113853680-- +--1618492860--2051301190--113853680-- \ No newline at end of file diff --git a/lib-python/2.7/email/test/test_email.py b/lib-python/2.7/email/test/test_email.py --- a/lib-python/2.7/email/test/test_email.py +++ b/lib-python/2.7/email/test/test_email.py @@ -179,6 +179,17 @@ self.assertRaises(Errors.HeaderParseError, msg.set_boundary, 'BOUNDARY') + def test_make_boundary(self): + msg = MIMEMultipart('form-data') + # Note that when the boundary gets created is an implementation + # detail and might change. + self.assertEqual(msg.items()[0][1], 'multipart/form-data') + # Trigger creation of boundary + msg.as_string() + self.assertEqual(msg.items()[0][1][:33], + 'multipart/form-data; boundary="==') + # XXX: there ought to be tests of the uniqueness of the boundary, too. + def test_message_rfc822_only(self): # Issue 7970: message/rfc822 not in multipart parsed by # HeaderParser caused an exception when flattened. @@ -542,6 +553,17 @@ msg.set_charset(u'us-ascii') self.assertEqual('us-ascii', msg.get_content_charset()) + # Issue 5871: reject an attempt to embed a header inside a header value + # (header injection attack). 
+ def test_embeded_header_via_Header_rejected(self): + msg = Message() + msg['Dummy'] = Header('dummy\nX-Injected-Header: test') + self.assertRaises(Errors.HeaderParseError, msg.as_string) + + def test_embeded_header_via_string_rejected(self): + msg = Message() + msg['Dummy'] = 'dummy\nX-Injected-Header: test' + self.assertRaises(Errors.HeaderParseError, msg.as_string) # Test the email.Encoders module @@ -3113,6 +3135,28 @@ s = 'Subject: =?EUC-KR?B?CSixpLDtKSC/7Liuvsax4iC6uLmwMcijIKHaILzSwd/H0SC8+LCjwLsgv7W/+Mj3I ?=' raises(Errors.HeaderParseError, decode_header, s) + # Issue 1078919 + def test_ascii_add_header(self): + msg = Message() + msg.add_header('Content-Disposition', 'attachment', + filename='bud.gif') + self.assertEqual('attachment; filename="bud.gif"', + msg['Content-Disposition']) + + def test_nonascii_add_header_via_triple(self): + msg = Message() + msg.add_header('Content-Disposition', 'attachment', + filename=('iso-8859-1', '', 'Fu\xdfballer.ppt')) + self.assertEqual( + 'attachment; filename*="iso-8859-1\'\'Fu%DFballer.ppt"', + msg['Content-Disposition']) + + def test_encode_unaliased_charset(self): + # Issue 1379416: when the charset has no output conversion, + # output was accidentally getting coerced to unicode. + res = Header('abc','iso-8859-2').encode() + self.assertEqual(res, '=?iso-8859-2?q?abc?=') + self.assertIsInstance(res, str) # Test RFC 2231 header parameters (en/de)coding diff --git a/lib-python/2.7/ftplib.py b/lib-python/2.7/ftplib.py --- a/lib-python/2.7/ftplib.py +++ b/lib-python/2.7/ftplib.py @@ -599,7 +599,7 @@ Usage example: >>> from ftplib import FTP_TLS >>> ftps = FTP_TLS('ftp.python.org') - >>> ftps.login() # login anonimously previously securing control channel + >>> ftps.login() # login anonymously previously securing control channel '230 Guest login ok, access restrictions apply.' 
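(Editorial note: the email.header hunk above adds the _embeded_header check, and the two tests just shown exercise it. The practical effect is that a header value containing a newline followed by another header now fails when the message is flattened instead of silently injecting an extra header; a small check mirroring those tests:

    from email.message import Message
    from email.errors import HeaderParseError

    msg = Message()
    msg['Dummy'] = 'dummy\nX-Injected-Header: test'
    try:
        msg.as_string()
    except HeaderParseError:
        pass  # injection attempt rejected at flatten time
)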
>>> ftps.prot_p() # switch to secure data connection '200 Protection level set to P' diff --git a/lib-python/2.7/functools.py b/lib-python/2.7/functools.py --- a/lib-python/2.7/functools.py +++ b/lib-python/2.7/functools.py @@ -53,17 +53,17 @@ def total_ordering(cls): """Class decorator that fills in missing ordering methods""" convert = { - '__lt__': [('__gt__', lambda self, other: other < self), - ('__le__', lambda self, other: not other < self), + '__lt__': [('__gt__', lambda self, other: not (self < other or self == other)), + ('__le__', lambda self, other: self < other or self == other), ('__ge__', lambda self, other: not self < other)], - '__le__': [('__ge__', lambda self, other: other <= self), - ('__lt__', lambda self, other: not other <= self), + '__le__': [('__ge__', lambda self, other: not self <= other or self == other), + ('__lt__', lambda self, other: self <= other and not self == other), ('__gt__', lambda self, other: not self <= other)], - '__gt__': [('__lt__', lambda self, other: other > self), - ('__ge__', lambda self, other: not other > self), + '__gt__': [('__lt__', lambda self, other: not (self > other or self == other)), + ('__ge__', lambda self, other: self > other or self == other), ('__le__', lambda self, other: not self > other)], - '__ge__': [('__le__', lambda self, other: other >= self), - ('__gt__', lambda self, other: not other >= self), + '__ge__': [('__le__', lambda self, other: (not self >= other) or self == other), + ('__gt__', lambda self, other: self >= other and not self == other), ('__lt__', lambda self, other: not self >= other)] } roots = set(dir(cls)) & set(convert) @@ -80,6 +80,7 @@ def cmp_to_key(mycmp): """Convert a cmp= function into a key= function""" class K(object): + __slots__ = ['obj'] def __init__(self, obj, *args): self.obj = obj def __lt__(self, other): diff --git a/lib-python/2.7/getpass.py b/lib-python/2.7/getpass.py --- a/lib-python/2.7/getpass.py +++ b/lib-python/2.7/getpass.py @@ -62,7 +62,7 @@ try: old = termios.tcgetattr(fd) # a copy to save new = old[:] - new[3] &= ~(termios.ECHO|termios.ISIG) # 3 == 'lflags' + new[3] &= ~termios.ECHO # 3 == 'lflags' tcsetattr_flags = termios.TCSAFLUSH if hasattr(termios, 'TCSASOFT'): tcsetattr_flags |= termios.TCSASOFT diff --git a/lib-python/2.7/gettext.py b/lib-python/2.7/gettext.py --- a/lib-python/2.7/gettext.py +++ b/lib-python/2.7/gettext.py @@ -316,7 +316,7 @@ # Note: we unconditionally convert both msgids and msgstrs to # Unicode using the character encoding specified in the charset # parameter of the Content-Type header. The gettext documentation - # strongly encourages msgids to be us-ascii, but some appliations + # strongly encourages msgids to be us-ascii, but some applications # require alternative encodings (e.g. Zope's ZCML and ZPT). 
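(Editorial note: the functools hunks above rewrite total_ordering so the generated comparison methods are derived only from the class's own ordering operator plus __eq__, never from the reflected operator on the other object, and give cmp_to_key's wrapper class __slots__. Usage is unchanged; a quick illustration:

    from functools import cmp_to_key, total_ordering

    @total_ordering
    class Version(object):                        # illustrative class, not from the patch
        def __init__(self, n): self.n = n
        def __eq__(self, other): return self.n == other.n
        def __lt__(self, other): return self.n < other.n

    Version(1) >= Version(1)                      # True, __ge__ filled in by total_ordering
    sorted([3, 1, 2], key=cmp_to_key(lambda a, b: b - a))   # [3, 2, 1]
)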
For # traditional gettext applications, the msgid conversion will # cause no problems since us-ascii should always be a subset of diff --git a/lib-python/2.7/hashlib.py b/lib-python/2.7/hashlib.py --- a/lib-python/2.7/hashlib.py +++ b/lib-python/2.7/hashlib.py @@ -64,26 +64,29 @@ def __get_builtin_constructor(name): - if name in ('SHA1', 'sha1'): - import _sha - return _sha.new - elif name in ('MD5', 'md5'): - import _md5 - return _md5.new - elif name in ('SHA256', 'sha256', 'SHA224', 'sha224'): - import _sha256 - bs = name[3:] - if bs == '256': - return _sha256.sha256 - elif bs == '224': - return _sha256.sha224 - elif name in ('SHA512', 'sha512', 'SHA384', 'sha384'): - import _sha512 - bs = name[3:] - if bs == '512': - return _sha512.sha512 - elif bs == '384': - return _sha512.sha384 + try: + if name in ('SHA1', 'sha1'): + import _sha + return _sha.new + elif name in ('MD5', 'md5'): + import _md5 + return _md5.new + elif name in ('SHA256', 'sha256', 'SHA224', 'sha224'): + import _sha256 + bs = name[3:] + if bs == '256': + return _sha256.sha256 + elif bs == '224': + return _sha256.sha224 + elif name in ('SHA512', 'sha512', 'SHA384', 'sha384'): + import _sha512 + bs = name[3:] + if bs == '512': + return _sha512.sha512 + elif bs == '384': + return _sha512.sha384 + except ImportError: + pass # no extension module, this hash is unsupported. raise ValueError('unsupported hash type %s' % name) diff --git a/lib-python/2.7/heapq.py b/lib-python/2.7/heapq.py --- a/lib-python/2.7/heapq.py +++ b/lib-python/2.7/heapq.py @@ -133,6 +133,11 @@ from operator import itemgetter import bisect +def cmp_lt(x, y): + # Use __lt__ if available; otherwise, try __le__. + # In Py3.x, only __lt__ will be called. + return (x < y) if hasattr(x, '__lt__') else (not y <= x) + def heappush(heap, item): """Push item onto heap, maintaining the heap invariant.""" heap.append(item) @@ -167,13 +172,13 @@ def heappushpop(heap, item): """Fast version of a heappush followed by a heappop.""" - if heap and heap[0] < item: + if heap and cmp_lt(heap[0], item): item, heap[0] = heap[0], item _siftup(heap, 0) return item def heapify(x): - """Transform list into a heap, in-place, in O(len(heap)) time.""" + """Transform list into a heap, in-place, in O(len(x)) time.""" n = len(x) # Transform bottom-up. The largest index there's any point to looking at # is the largest with a child index in-range, so must have 2*i + 1 < n, @@ -215,11 +220,10 @@ pop = result.pop los = result[-1] # los --> Largest of the nsmallest for elem in it: - if los <= elem: - continue - insort(result, elem) - pop() - los = result[-1] + if cmp_lt(elem, los): + insort(result, elem) + pop() + los = result[-1] return result # An alternative approach manifests the whole iterable in memory but # saves comparisons by heapifying all at once. Also, saves time @@ -240,7 +244,7 @@ while pos > startpos: parentpos = (pos - 1) >> 1 parent = heap[parentpos] - if newitem < parent: + if cmp_lt(newitem, parent): heap[pos] = parent pos = parentpos continue @@ -295,7 +299,7 @@ while childpos < endpos: # Set childpos to index of smaller child. rightpos = childpos + 1 - if rightpos < endpos and not heap[childpos] < heap[rightpos]: + if rightpos < endpos and not cmp_lt(heap[childpos], heap[rightpos]): childpos = rightpos # Move the smaller child up. 
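(Editorial note: with the hashlib hunk above, a missing builtin extension module no longer leaks an ImportError out of __get_builtin_constructor; the algorithm is simply reported as unsupported. Roughly, on a build lacking both OpenSSL support and the fallback module:

    import hashlib

    try:
        hashlib.new('sha512')
    except ValueError:
        # raised when neither OpenSSL nor the _sha512 fallback module
        # provides the algorithm; previously this could surface as ImportError
        pass
)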
heap[pos] = heap[childpos] @@ -364,7 +368,7 @@ return [min(chain(head, it))] return [min(chain(head, it), key=key)] - # When n>=size, it's faster to use sort() + # When n>=size, it's faster to use sorted() try: size = len(iterable) except (TypeError, AttributeError): @@ -402,7 +406,7 @@ return [max(chain(head, it))] return [max(chain(head, it), key=key)] - # When n>=size, it's faster to use sort() + # When n>=size, it's faster to use sorted() try: size = len(iterable) except (TypeError, AttributeError): diff --git a/lib-python/2.7/httplib.py b/lib-python/2.7/httplib.py --- a/lib-python/2.7/httplib.py +++ b/lib-python/2.7/httplib.py @@ -212,6 +212,9 @@ # maximal amount of data to read at one time in _safe_read MAXAMOUNT = 1048576 +# maximal line length when calling readline(). +_MAXLINE = 65536 + class HTTPMessage(mimetools.Message): def addheader(self, key, value): @@ -274,7 +277,9 @@ except IOError: startofline = tell = None self.seekable = 0 - line = self.fp.readline() + line = self.fp.readline(_MAXLINE + 1) + if len(line) > _MAXLINE: + raise LineTooLong("header line") if not line: self.status = 'EOF in headers' break @@ -404,7 +409,10 @@ break # skip the header from the 100 response while True: - skip = self.fp.readline().strip() + skip = self.fp.readline(_MAXLINE + 1) + if len(skip) > _MAXLINE: + raise LineTooLong("header line") + skip = skip.strip() if not skip: break if self.debuglevel > 0: @@ -563,7 +571,9 @@ value = [] while True: if chunk_left is None: - line = self.fp.readline() + line = self.fp.readline(_MAXLINE + 1) + if len(line) > _MAXLINE: + raise LineTooLong("chunk size") i = line.find(';') if i >= 0: line = line[:i] # strip chunk-extensions @@ -598,7 +608,9 @@ # read and discard trailer up to the CRLF terminator ### note: we shouldn't have any trailers! while True: - line = self.fp.readline() + line = self.fp.readline(_MAXLINE + 1) + if len(line) > _MAXLINE: + raise LineTooLong("trailer line") if not line: # a vanishingly small number of sites EOF without # sending the trailer @@ -730,7 +742,9 @@ raise socket.error("Tunnel connection failed: %d %s" % (code, message.strip())) while True: - line = response.fp.readline() + line = response.fp.readline(_MAXLINE + 1) + if len(line) > _MAXLINE: + raise LineTooLong("header line") if line == '\r\n': break @@ -790,7 +804,7 @@ del self._buffer[:] # If msg and message_body are sent in a single send() call, # it will avoid performance problems caused by the interaction - # between delayed ack and the Nagle algorithim. + # between delayed ack and the Nagle algorithm. 
if isinstance(message_body, str): msg += message_body message_body = None @@ -1233,6 +1247,11 @@ self.args = line, self.line = line +class LineTooLong(HTTPException): + def __init__(self, line_type): + HTTPException.__init__(self, "got more than %d bytes when reading %s" + % (_MAXLINE, line_type)) + # for backwards compatibility error = HTTPException diff --git a/lib-python/2.7/idlelib/Bindings.py b/lib-python/2.7/idlelib/Bindings.py --- a/lib-python/2.7/idlelib/Bindings.py +++ b/lib-python/2.7/idlelib/Bindings.py @@ -98,14 +98,6 @@ # menu del menudefs[-1][1][0:2] - menudefs.insert(0, - ('application', [ - ('About IDLE', '<>'), - None, - ('_Preferences....', '<>'), - ])) - - default_keydefs = idleConf.GetCurrentKeySet() del sys diff --git a/lib-python/2.7/idlelib/EditorWindow.py b/lib-python/2.7/idlelib/EditorWindow.py --- a/lib-python/2.7/idlelib/EditorWindow.py +++ b/lib-python/2.7/idlelib/EditorWindow.py @@ -48,6 +48,21 @@ path = module.__path__ except AttributeError: raise ImportError, 'No source for module ' + module.__name__ + if descr[2] != imp.PY_SOURCE: + # If all of the above fails and didn't raise an exception,fallback + # to a straight import which can find __init__.py in a package. + m = __import__(fullname) + try: + filename = m.__file__ + except AttributeError: + pass + else: + file = None + base, ext = os.path.splitext(filename) + if ext == '.pyc': + ext = '.py' + filename = base + ext + descr = filename, None, imp.PY_SOURCE return file, filename, descr class EditorWindow(object): @@ -102,8 +117,8 @@ self.top = top = WindowList.ListedToplevel(root, menu=self.menubar) if flist: self.tkinter_vars = flist.vars - #self.top.instance_dict makes flist.inversedict avalable to - #configDialog.py so it can access all EditorWindow instaces + #self.top.instance_dict makes flist.inversedict available to + #configDialog.py so it can access all EditorWindow instances self.top.instance_dict = flist.inversedict else: self.tkinter_vars = {} # keys: Tkinter event names @@ -136,6 +151,14 @@ if macosxSupport.runningAsOSXApp(): # Command-W on editorwindows doesn't work without this. text.bind('<>', self.close_event) + # Some OS X systems have only one mouse button, + # so use control-click for pulldown menus there. + # (Note, AquaTk defines <2> as the right button if + # present and the Tk Text widget already binds <2>.) + text.bind("",self.right_menu_event) + else: + # Elsewhere, use right-click for pulldown menus. + text.bind("<3>",self.right_menu_event) text.bind("<>", self.cut) text.bind("<>", self.copy) text.bind("<>", self.paste) @@ -154,7 +177,6 @@ text.bind("<>", self.find_selection_event) text.bind("<>", self.replace_event) text.bind("<>", self.goto_line_event) - text.bind("<3>", self.right_menu_event) text.bind("<>",self.smart_backspace_event) text.bind("<>",self.newline_and_indent_event) text.bind("<>",self.smart_indent_event) @@ -300,13 +322,13 @@ return "break" def home_callback(self, event): - if (event.state & 12) != 0 and event.keysym == "Home": - # state&1==shift, state&4==control, state&8==alt - return # ; fall back to class binding - + if (event.state & 4) != 0 and event.keysym == "Home": + # state&4==Control. If , use the Tk binding. 
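(Editorial note: the httplib hunks above cap readline() calls at _MAXLINE bytes and raise the new LineTooLong exception when a header, chunk-size or trailer line exceeds it. Since LineTooLong subclasses HTTPException, existing error handling keeps working; a sketch with an illustrative host name:

    import httplib

    try:
        conn = httplib.HTTPConnection('server.example')   # placeholder host
        conn.request('GET', '/')
        conn.getresponse()
    except httplib.HTTPException:
        # LineTooLong surfaces here like the other protocol errors
        pass
    except IOError:
        pass   # network/DNS failure for the placeholder host
)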
+ return if self.text.index("iomark") and \ self.text.compare("iomark", "<=", "insert lineend") and \ self.text.compare("insert linestart", "<=", "iomark"): + # In Shell on input line, go to just after prompt insertpt = int(self.text.index("iomark").split(".")[1]) else: line = self.text.get("insert linestart", "insert lineend") @@ -315,30 +337,27 @@ break else: insertpt=len(line) - lineat = int(self.text.index("insert").split('.')[1]) - if insertpt == lineat: insertpt = 0 - dest = "insert linestart+"+str(insertpt)+"c" - if (event.state&1) == 0: - # shift not pressed + # shift was not pressed self.text.tag_remove("sel", "1.0", "end") else: if not self.text.index("sel.first"): - self.text.mark_set("anchor","insert") - + self.text.mark_set("my_anchor", "insert") # there was no previous selection + else: + if self.text.compare(self.text.index("sel.first"), "<", self.text.index("insert")): + self.text.mark_set("my_anchor", "sel.first") # extend back + else: + self.text.mark_set("my_anchor", "sel.last") # extend forward first = self.text.index(dest) - last = self.text.index("anchor") - + last = self.text.index("my_anchor") if self.text.compare(first,">",last): first,last = last,first - self.text.tag_remove("sel", "1.0", "end") self.text.tag_add("sel", first, last) - self.text.mark_set("insert", dest) self.text.see("insert") return "break" @@ -385,7 +404,7 @@ menudict[name] = menu = Menu(mbar, name=name) mbar.add_cascade(label=label, menu=menu, underline=underline) - if macosxSupport.runningAsOSXApp(): + if macosxSupport.isCarbonAquaTk(self.root): # Insert the application menu menudict['application'] = menu = Menu(mbar, name='apple') mbar.add_cascade(label='IDLE', menu=menu) @@ -445,7 +464,11 @@ def python_docs(self, event=None): if sys.platform[:3] == 'win': - os.startfile(self.help_url) + try: + os.startfile(self.help_url) + except WindowsError as why: + tkMessageBox.showerror(title='Document Start Failure', + message=str(why), parent=self.text) else: webbrowser.open(self.help_url) return "break" @@ -740,9 +763,13 @@ "Create a callback with the helpfile value frozen at definition time" def display_extra_help(helpfile=helpfile): if not helpfile.startswith(('www', 'http')): - url = os.path.normpath(helpfile) + helpfile = os.path.normpath(helpfile) if sys.platform[:3] == 'win': - os.startfile(helpfile) + try: + os.startfile(helpfile) + except WindowsError as why: + tkMessageBox.showerror(title='Document Start Failure', + message=str(why), parent=self.text) else: webbrowser.open(helpfile) return display_extra_help @@ -1526,7 +1553,12 @@ def get_accelerator(keydefs, eventname): keylist = keydefs.get(eventname) - if not keylist: + # issue10940: temporary workaround to prevent hang with OS X Cocoa Tk 8.5 + # if not keylist: + if (not keylist) or (macosxSupport.runningAsOSXApp() and eventname in { + "<>", + "<>", + "<>"}): return "" s = keylist[0] s = re.sub(r"-[a-z]\b", lambda m: m.group().upper(), s) diff --git a/lib-python/2.7/idlelib/FileList.py b/lib-python/2.7/idlelib/FileList.py --- a/lib-python/2.7/idlelib/FileList.py +++ b/lib-python/2.7/idlelib/FileList.py @@ -43,7 +43,7 @@ def new(self, filename=None): return self.EditorWindow(self, filename) - def close_all_callback(self, event): + def close_all_callback(self, *args, **kwds): for edit in self.inversedict.keys(): reply = edit.close() if reply == "cancel": diff --git a/lib-python/2.7/idlelib/FormatParagraph.py b/lib-python/2.7/idlelib/FormatParagraph.py --- a/lib-python/2.7/idlelib/FormatParagraph.py +++ 
b/lib-python/2.7/idlelib/FormatParagraph.py @@ -54,7 +54,7 @@ # If the block ends in a \n, we dont want the comment # prefix inserted after it. (Im not sure it makes sense to # reformat a comment block that isnt made of complete - # lines, but whatever!) Can't think of a clean soltution, + # lines, but whatever!) Can't think of a clean solution, # so we hack away block_suffix = "" if not newdata[-1]: diff --git a/lib-python/2.7/idlelib/HISTORY.txt b/lib-python/2.7/idlelib/HISTORY.txt --- a/lib-python/2.7/idlelib/HISTORY.txt +++ b/lib-python/2.7/idlelib/HISTORY.txt @@ -13,7 +13,7 @@ - New tarball released as a result of the 'revitalisation' of the IDLEfork project. -- This release requires python 2.1 or better. Compatability with earlier +- This release requires python 2.1 or better. Compatibility with earlier versions of python (especially ancient ones like 1.5x) is no longer a priority in IDLEfork development. diff --git a/lib-python/2.7/idlelib/IOBinding.py b/lib-python/2.7/idlelib/IOBinding.py --- a/lib-python/2.7/idlelib/IOBinding.py +++ b/lib-python/2.7/idlelib/IOBinding.py @@ -320,17 +320,20 @@ return "yes" message = "Do you want to save %s before closing?" % ( self.filename or "this untitled document") - m = tkMessageBox.Message( - title="Save On Close", - message=message, - icon=tkMessageBox.QUESTION, - type=tkMessageBox.YESNOCANCEL, - master=self.text) - reply = m.show() - if reply == "yes": + confirm = tkMessageBox.askyesnocancel( + title="Save On Close", + message=message, + default=tkMessageBox.YES, + master=self.text) + if confirm: + reply = "yes" self.save(None) if not self.get_saved(): reply = "cancel" + elif confirm is None: + reply = "cancel" + else: + reply = "no" self.text.focus_set() return reply @@ -339,7 +342,7 @@ self.save_as(event) else: if self.writefile(self.filename): - self.set_saved(1) + self.set_saved(True) try: self.editwin.store_file_breaks() except AttributeError: # may be a PyShell @@ -465,15 +468,12 @@ self.text.insert("end-1c", "\n") def print_window(self, event): - m = tkMessageBox.Message( - title="Print", - message="Print to Default Printer", - icon=tkMessageBox.QUESTION, - type=tkMessageBox.OKCANCEL, - default=tkMessageBox.OK, - master=self.text) - reply = m.show() - if reply != tkMessageBox.OK: + confirm = tkMessageBox.askokcancel( + title="Print", + message="Print to Default Printer", + default=tkMessageBox.OK, + master=self.text) + if not confirm: self.text.focus_set() return "break" tempfilename = None @@ -488,8 +488,8 @@ if not self.writefile(tempfilename): os.unlink(tempfilename) return "break" - platform=os.name - printPlatform=1 + platform = os.name + printPlatform = True if platform == 'posix': #posix platform command = idleConf.GetOption('main','General', 'print-command-posix') @@ -497,7 +497,7 @@ elif platform == 'nt': #win32 platform command = idleConf.GetOption('main','General','print-command-win') else: #no printing for this platform - printPlatform=0 + printPlatform = False if printPlatform: #we can try to print for this platform command = command % filename pipe = os.popen(command, "r") @@ -511,7 +511,7 @@ output = "Printing command: %s\n" % repr(command) + output tkMessageBox.showerror("Print status", output, master=self.text) else: #no printing for this platform - message="Printing is not enabled for this platform: %s" % platform + message = "Printing is not enabled for this platform: %s" % platform tkMessageBox.showinfo("Print status", message, master=self.text) if tempfilename: os.unlink(tempfilename) diff --git 
a/lib-python/2.7/idlelib/NEWS.txt b/lib-python/2.7/idlelib/NEWS.txt --- a/lib-python/2.7/idlelib/NEWS.txt +++ b/lib-python/2.7/idlelib/NEWS.txt @@ -1,3 +1,18 @@ +What's New in IDLE 2.7.2? +======================= + +*Release date: 29-May-2011* + +- Issue #6378: Further adjust idle.bat to start associated Python + +- Issue #11896: Save on Close failed despite selecting "Yes" in dialog. + +- toggle failing on Tk 8.5, causing IDLE exits and strange selection + behavior. Issue 4676. Improve selection extension behaviour. + +- toggle non-functional when NumLock set on Windows. Issue 3851. + + What's New in IDLE 2.7? ======================= @@ -21,7 +36,7 @@ - Tk 8.5 Text widget requires 'wordprocessor' tabstyle attr to handle mixed space/tab properly. Issue 5129, patch by Guilherme Polo. - + - Issue #3549: On MacOS the preferences menu was not present diff --git a/lib-python/2.7/idlelib/PyShell.py b/lib-python/2.7/idlelib/PyShell.py --- a/lib-python/2.7/idlelib/PyShell.py +++ b/lib-python/2.7/idlelib/PyShell.py @@ -1432,6 +1432,13 @@ shell.interp.prepend_syspath(script) shell.interp.execfile(script) + # Check for problematic OS X Tk versions and print a warning message + # in the IDLE shell window; this is less intrusive than always opening + # a separate window. + tkversionwarning = macosxSupport.tkVersionWarning(root) + if tkversionwarning: + shell.interp.runcommand(''.join(("print('", tkversionwarning, "')"))) + root.mainloop() root.destroy() diff --git a/lib-python/2.7/idlelib/ScriptBinding.py b/lib-python/2.7/idlelib/ScriptBinding.py --- a/lib-python/2.7/idlelib/ScriptBinding.py +++ b/lib-python/2.7/idlelib/ScriptBinding.py @@ -26,6 +26,7 @@ from idlelib import PyShell from idlelib.configHandler import idleConf +from idlelib import macosxSupport IDENTCHARS = string.ascii_letters + string.digits + "_" @@ -53,6 +54,9 @@ self.flist = self.editwin.flist self.root = self.editwin.root + if macosxSupport.runningAsOSXApp(): + self.editwin.text_frame.bind('<>', self._run_module_event) + def check_module_event(self, event): filename = self.getfilename() if not filename: @@ -166,6 +170,19 @@ interp.runcode(code) return 'break' + if macosxSupport.runningAsOSXApp(): + # Tk-Cocoa in MacOSX is broken until at least + # Tk 8.5.9, and without this rather + # crude workaround IDLE would hang when a user + # tries to run a module using the keyboard shortcut + # (the menu item works fine). + _run_module_event = run_module_event + + def run_module_event(self, event): + self.editwin.text_frame.after(200, + lambda: self.editwin.text_frame.event_generate('<>')) + return 'break' + def getfilename(self): """Get source filename. If not saved, offer to save (or create) file @@ -184,9 +201,9 @@ if autosave and filename: self.editwin.io.save(None) else: - reply = self.ask_save_dialog() + confirm = self.ask_save_dialog() self.editwin.text.focus_set() - if reply == "ok": + if confirm: self.editwin.io.save(None) filename = self.editwin.io.filename else: @@ -195,13 +212,11 @@ def ask_save_dialog(self): msg = "Source Must Be Saved\n" + 5*' ' + "OK to Save?" - mb = tkMessageBox.Message(title="Save Before Run or Check", - message=msg, - icon=tkMessageBox.QUESTION, - type=tkMessageBox.OKCANCEL, - default=tkMessageBox.OK, - master=self.editwin.text) - return mb.show() + confirm = tkMessageBox.askokcancel(title="Save Before Run or Check", + message=msg, + default=tkMessageBox.OK, + master=self.editwin.text) + return confirm def errorbox(self, title, message): # XXX This should really be a function of EditorWindow... 
diff --git a/lib-python/2.7/idlelib/config-keys.def b/lib-python/2.7/idlelib/config-keys.def --- a/lib-python/2.7/idlelib/config-keys.def +++ b/lib-python/2.7/idlelib/config-keys.def @@ -176,7 +176,7 @@ redo = close-window = restart-shell = -save-window-as-file = +save-window-as-file = close-all-windows = view-restart = tabify-region = @@ -208,7 +208,7 @@ open-module = find-selection = python-context-help = -save-copy-of-window-as-file = +save-copy-of-window-as-file = open-window-from-file = python-docs = diff --git a/lib-python/2.7/idlelib/extend.txt b/lib-python/2.7/idlelib/extend.txt --- a/lib-python/2.7/idlelib/extend.txt +++ b/lib-python/2.7/idlelib/extend.txt @@ -18,7 +18,7 @@ An IDLE extension class is instantiated with a single argument, `editwin', an EditorWindow instance. The extension cannot assume much -about this argument, but it is guarateed to have the following instance +about this argument, but it is guaranteed to have the following instance variables: text a Text instance (a widget) diff --git a/lib-python/2.7/idlelib/idle.bat b/lib-python/2.7/idlelib/idle.bat --- a/lib-python/2.7/idlelib/idle.bat +++ b/lib-python/2.7/idlelib/idle.bat @@ -1,4 +1,4 @@ @echo off rem Start IDLE using the appropriate Python interpreter set CURRDIR=%~dp0 -start "%CURRDIR%..\..\pythonw.exe" "%CURRDIR%idle.pyw" %1 %2 %3 %4 %5 %6 %7 %8 %9 +start "IDLE" "%CURRDIR%..\..\pythonw.exe" "%CURRDIR%idle.pyw" %1 %2 %3 %4 %5 %6 %7 %8 %9 diff --git a/lib-python/2.7/idlelib/idlever.py b/lib-python/2.7/idlelib/idlever.py --- a/lib-python/2.7/idlelib/idlever.py +++ b/lib-python/2.7/idlelib/idlever.py @@ -1,1 +1,1 @@ -IDLE_VERSION = "2.7.1" +IDLE_VERSION = "2.7.2" diff --git a/lib-python/2.7/idlelib/macosxSupport.py b/lib-python/2.7/idlelib/macosxSupport.py --- a/lib-python/2.7/idlelib/macosxSupport.py +++ b/lib-python/2.7/idlelib/macosxSupport.py @@ -4,6 +4,7 @@ """ import sys import Tkinter +from os import path _appbundle = None @@ -19,10 +20,41 @@ _appbundle = (sys.platform == 'darwin' and '.app' in sys.executable) return _appbundle +_carbonaquatk = None + +def isCarbonAquaTk(root): + """ + Returns True if IDLE is using a Carbon Aqua Tk (instead of the + newer Cocoa Aqua Tk). + """ + global _carbonaquatk + if _carbonaquatk is None: + _carbonaquatk = (runningAsOSXApp() and + 'aqua' in root.tk.call('tk', 'windowingsystem') and + 'AppKit' not in root.tk.call('winfo', 'server', '.')) + return _carbonaquatk + +def tkVersionWarning(root): + """ + Returns a string warning message if the Tk version in use appears to + be one known to cause problems with IDLE. The Apple Cocoa-based Tk 8.5 + that was shipped with Mac OS X 10.6. + """ + + if (runningAsOSXApp() and + ('AppKit' in root.tk.call('winfo', 'server', '.')) and + (root.tk.call('info', 'patchlevel') == '8.5.7') ): + return (r"WARNING: The version of Tcl/Tk (8.5.7) in use may" + r" be unstable.\n" + r"Visit http://www.python.org/download/mac/tcltk/" + r" for current information.") + else: + return False + def addOpenEventSupport(root, flist): """ - This ensures that the application will respont to open AppleEvents, which - makes is feaseable to use IDLE as the default application for python files. + This ensures that the application will respond to open AppleEvents, which + makes is feasible to use IDLE as the default application for python files. 
""" def doOpenFile(*args): for fn in args: @@ -79,9 +111,6 @@ WindowList.add_windows_to_menu(menu) WindowList.register_callback(postwindowsmenu) - menudict['application'] = menu = Menu(menubar, name='apple') - menubar.add_cascade(label='IDLE', menu=menu) - def about_dialog(event=None): from idlelib import aboutDialog aboutDialog.AboutDialog(root, 'About IDLE') @@ -91,41 +120,45 @@ root.instance_dict = flist.inversedict configDialog.ConfigDialog(root, 'Settings') + def help_dialog(event=None): + from idlelib import textView + fn = path.join(path.abspath(path.dirname(__file__)), 'help.txt') + textView.view_file(root, 'Help', fn) root.bind('<>', about_dialog) root.bind('<>', config_dialog) + root.createcommand('::tk::mac::ShowPreferences', config_dialog) if flist: root.bind('<>', flist.close_all_callback) + # The binding above doesn't reliably work on all versions of Tk + # on MacOSX. Adding command definition below does seem to do the + # right thing for now. + root.createcommand('exit', flist.close_all_callback) - ###check if Tk version >= 8.4.14; if so, use hard-coded showprefs binding - tkversion = root.tk.eval('info patchlevel') - # Note: we cannot check if the string tkversion >= '8.4.14', because - # the string '8.4.7' is greater than the string '8.4.14'. - if tuple(map(int, tkversion.split('.'))) >= (8, 4, 14): - Bindings.menudefs[0] = ('application', [ + if isCarbonAquaTk(root): + # for Carbon AquaTk, replace the default Tk apple menu + menudict['application'] = menu = Menu(menubar, name='apple') + menubar.add_cascade(label='IDLE', menu=menu) + Bindings.menudefs.insert(0, + ('application', [ ('About IDLE', '<>'), - None, - ]) - root.createcommand('::tk::mac::ShowPreferences', config_dialog) + None, + ])) + tkversion = root.tk.eval('info patchlevel') + if tuple(map(int, tkversion.split('.'))) < (8, 4, 14): + # for earlier AquaTk versions, supply a Preferences menu item + Bindings.menudefs[0][1].append( + ('_Preferences....', '<>'), + ) else: - for mname, entrylist in Bindings.menudefs: - menu = menudict.get(mname) - if not menu: - continue - else: - for entry in entrylist: - if not entry: - menu.add_separator() - else: - label, eventname = entry - underline, label = prepstr(label) - accelerator = get_accelerator(Bindings.default_keydefs, - eventname) - def command(text=root, eventname=eventname): - text.event_generate(eventname) - menu.add_command(label=label, underline=underline, - command=command, accelerator=accelerator) + # assume Cocoa AquaTk + # replace default About dialog with About IDLE one + root.createcommand('tkAboutDialog', about_dialog) + # replace default "Help" item in Help menu + root.createcommand('::tk::mac::ShowHelp', help_dialog) + # remove redundant "IDLE Help" from menu + del Bindings.menudefs[-1][1][0] def setupApp(root, flist): """ diff --git a/lib-python/2.7/imaplib.py b/lib-python/2.7/imaplib.py --- a/lib-python/2.7/imaplib.py +++ b/lib-python/2.7/imaplib.py @@ -1158,28 +1158,17 @@ self.port = port self.sock = socket.create_connection((host, port)) self.sslobj = ssl.wrap_socket(self.sock, self.keyfile, self.certfile) + self.file = self.sslobj.makefile('rb') def read(self, size): """Read 'size' bytes from remote.""" - # sslobj.read() sometimes returns < size bytes - chunks = [] - read = 0 - while read < size: - data = self.sslobj.read(min(size-read, 16384)) - read += len(data) - chunks.append(data) - - return ''.join(chunks) + return self.file.read(size) def readline(self): """Read line from remote.""" - line = [] - while 1: - char = self.sslobj.read(1) - 
line.append(char) - if char in ("\n", ""): return ''.join(line) + return self.file.readline() def send(self, data): @@ -1195,6 +1184,7 @@ def shutdown(self): """Close I/O established in "open".""" + self.file.close() self.sock.close() @@ -1321,9 +1311,10 @@ 'Jul': 7, 'Aug': 8, 'Sep': 9, 'Oct': 10, 'Nov': 11, 'Dec': 12} def Internaldate2tuple(resp): - """Convert IMAP4 INTERNALDATE to UT. + """Parse an IMAP4 INTERNALDATE string. - Returns Python time module tuple. + Return corresponding local time. The return value is a + time.struct_time instance or None if the string has wrong format. """ mo = InternalDate.match(resp) @@ -1390,9 +1381,14 @@ def Time2Internaldate(date_time): - """Convert 'date_time' to IMAP4 INTERNALDATE representation. + """Convert date_time to IMAP4 INTERNALDATE representation. - Return string in form: '"DD-Mmm-YYYY HH:MM:SS +HHMM"' + Return string in form: '"DD-Mmm-YYYY HH:MM:SS +HHMM"'. The + date_time argument can be a number (int or float) representing + seconds since epoch (as returned by time.time()), a 9-tuple + representing local time (as returned by time.localtime()), or a + double-quoted string. In the last case, it is assumed to already + be in the correct format. """ if isinstance(date_time, (int, float)): diff --git a/lib-python/2.7/inspect.py b/lib-python/2.7/inspect.py --- a/lib-python/2.7/inspect.py +++ b/lib-python/2.7/inspect.py @@ -943,8 +943,14 @@ f_name, 'at most' if defaults else 'exactly', num_args, 'arguments' if num_args > 1 else 'argument', num_total)) elif num_args == 0 and num_total: - raise TypeError('%s() takes no arguments (%d given)' % - (f_name, num_total)) + if varkw: + if num_pos: + # XXX: We should use num_pos, but Python also uses num_total: + raise TypeError('%s() takes exactly 0 arguments ' + '(%d given)' % (f_name, num_total)) + else: + raise TypeError('%s() takes no arguments (%d given)' % + (f_name, num_total)) for arg in args: if isinstance(arg, str) and arg in named: if is_assigned(arg): diff --git a/lib-python/2.7/json/decoder.py b/lib-python/2.7/json/decoder.py --- a/lib-python/2.7/json/decoder.py +++ b/lib-python/2.7/json/decoder.py @@ -4,7 +4,7 @@ import sys import struct -from json.scanner import make_scanner +from json import scanner try: from _json import scanstring as c_scanstring except ImportError: @@ -161,6 +161,12 @@ nextchar = s[end:end + 1] # Trivial empty object if nextchar == '}': + if object_pairs_hook is not None: + result = object_pairs_hook(pairs) + return result, end + pairs = {} + if object_hook is not None: + pairs = object_hook(pairs) return pairs, end + 1 elif nextchar != '"': raise ValueError(errmsg("Expecting property name", s, end)) @@ -350,7 +356,7 @@ self.parse_object = JSONObject self.parse_array = JSONArray self.parse_string = scanstring - self.scan_once = make_scanner(self) + self.scan_once = scanner.make_scanner(self) def decode(self, s, _w=WHITESPACE.match): """Return the Python representation of ``s`` (a ``str`` or ``unicode`` diff --git a/lib-python/2.7/json/encoder.py b/lib-python/2.7/json/encoder.py --- a/lib-python/2.7/json/encoder.py +++ b/lib-python/2.7/json/encoder.py @@ -251,7 +251,7 @@ if (_one_shot and c_make_encoder is not None - and not self.indent and not self.sort_keys): + and self.indent is None and not self.sort_keys): _iterencode = c_make_encoder( markers, self.default, _encoder, self.indent, self.key_separator, self.item_separator, self.sort_keys, diff --git a/lib-python/2.7/json/tests/__init__.py b/lib-python/2.7/json/tests/__init__.py --- 
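(Editorial note: besides buffering IMAP4_SSL reads through a makefile('rb') object, the imaplib hunks above clarify what Time2Internaldate accepts: a number of seconds since the epoch, a 9-tuple of local time, or an already-quoted string. For instance:

    import imaplib, time

    imaplib.Time2Internaldate(time.time())        # e.g. '"07-May-2012 10:23:45 +0200"'
    imaplib.Time2Internaldate(time.localtime())   # same result from a 9-tuple
)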
a/lib-python/2.7/json/tests/__init__.py +++ b/lib-python/2.7/json/tests/__init__.py @@ -1,7 +1,46 @@ import os import sys +import json +import doctest import unittest -import doctest + +from test import test_support + +# import json with and without accelerations +cjson = test_support.import_fresh_module('json', fresh=['_json']) +pyjson = test_support.import_fresh_module('json', blocked=['_json']) + +# create two base classes that will be used by the other tests +class PyTest(unittest.TestCase): + json = pyjson + loads = staticmethod(pyjson.loads) + dumps = staticmethod(pyjson.dumps) + + at unittest.skipUnless(cjson, 'requires _json') +class CTest(unittest.TestCase): + if cjson is not None: + json = cjson + loads = staticmethod(cjson.loads) + dumps = staticmethod(cjson.dumps) + +# test PyTest and CTest checking if the functions come from the right module +class TestPyTest(PyTest): + def test_pyjson(self): + self.assertEqual(self.json.scanner.make_scanner.__module__, + 'json.scanner') + self.assertEqual(self.json.decoder.scanstring.__module__, + 'json.decoder') + self.assertEqual(self.json.encoder.encode_basestring_ascii.__module__, + 'json.encoder') + +class TestCTest(CTest): + def test_cjson(self): + self.assertEqual(self.json.scanner.make_scanner.__module__, '_json') + self.assertEqual(self.json.decoder.scanstring.__module__, '_json') + self.assertEqual(self.json.encoder.c_make_encoder.__module__, '_json') + self.assertEqual(self.json.encoder.encode_basestring_ascii.__module__, + '_json') + here = os.path.dirname(__file__) @@ -17,12 +56,11 @@ return suite def additional_tests(): - import json - import json.encoder - import json.decoder suite = unittest.TestSuite() for mod in (json, json.encoder, json.decoder): suite.addTest(doctest.DocTestSuite(mod)) + suite.addTest(TestPyTest('test_pyjson')) + suite.addTest(TestCTest('test_cjson')) return suite def main(): diff --git a/lib-python/2.7/json/tests/test_check_circular.py b/lib-python/2.7/json/tests/test_check_circular.py --- a/lib-python/2.7/json/tests/test_check_circular.py +++ b/lib-python/2.7/json/tests/test_check_circular.py @@ -1,30 +1,34 @@ -from unittest import TestCase -import json +from json.tests import PyTest, CTest + def default_iterable(obj): return list(obj) -class TestCheckCircular(TestCase): +class TestCheckCircular(object): def test_circular_dict(self): dct = {} dct['a'] = dct - self.assertRaises(ValueError, json.dumps, dct) + self.assertRaises(ValueError, self.dumps, dct) def test_circular_list(self): lst = [] lst.append(lst) - self.assertRaises(ValueError, json.dumps, lst) + self.assertRaises(ValueError, self.dumps, lst) def test_circular_composite(self): dct2 = {} dct2['a'] = [] dct2['a'].append(dct2) - self.assertRaises(ValueError, json.dumps, dct2) + self.assertRaises(ValueError, self.dumps, dct2) def test_circular_default(self): - json.dumps([set()], default=default_iterable) - self.assertRaises(TypeError, json.dumps, [set()]) + self.dumps([set()], default=default_iterable) + self.assertRaises(TypeError, self.dumps, [set()]) def test_circular_off_default(self): - json.dumps([set()], default=default_iterable, check_circular=False) - self.assertRaises(TypeError, json.dumps, [set()], check_circular=False) + self.dumps([set()], default=default_iterable, check_circular=False) + self.assertRaises(TypeError, self.dumps, [set()], check_circular=False) + + +class TestPyCheckCircular(TestCheckCircular, PyTest): pass +class TestCCheckCircular(TestCheckCircular, CTest): pass diff --git a/lib-python/2.7/json/tests/test_decode.py 
b/lib-python/2.7/json/tests/test_decode.py --- a/lib-python/2.7/json/tests/test_decode.py +++ b/lib-python/2.7/json/tests/test_decode.py @@ -1,18 +1,17 @@ import decimal -from unittest import TestCase from StringIO import StringIO +from collections import OrderedDict +from json.tests import PyTest, CTest -import json -from collections import OrderedDict -class TestDecode(TestCase): +class TestDecode(object): def test_decimal(self): - rval = json.loads('1.1', parse_float=decimal.Decimal) + rval = self.loads('1.1', parse_float=decimal.Decimal) self.assertTrue(isinstance(rval, decimal.Decimal)) self.assertEqual(rval, decimal.Decimal('1.1')) def test_float(self): - rval = json.loads('1', parse_int=float) + rval = self.loads('1', parse_int=float) self.assertTrue(isinstance(rval, float)) self.assertEqual(rval, 1.0) @@ -20,22 +19,32 @@ # Several optimizations were made that skip over calls to # the whitespace regex, so this test is designed to try and # exercise the uncommon cases. The array cases are already covered. - rval = json.loads('{ "key" : "value" , "k":"v" }') + rval = self.loads('{ "key" : "value" , "k":"v" }') self.assertEqual(rval, {"key":"value", "k":"v"}) + def test_empty_objects(self): + self.assertEqual(self.loads('{}'), {}) + self.assertEqual(self.loads('[]'), []) + self.assertEqual(self.loads('""'), u"") + self.assertIsInstance(self.loads('""'), unicode) + def test_object_pairs_hook(self): s = '{"xkd":1, "kcw":2, "art":3, "hxm":4, "qrt":5, "pad":6, "hoy":7}' p = [("xkd", 1), ("kcw", 2), ("art", 3), ("hxm", 4), ("qrt", 5), ("pad", 6), ("hoy", 7)] - self.assertEqual(json.loads(s), eval(s)) - self.assertEqual(json.loads(s, object_pairs_hook=lambda x: x), p) - self.assertEqual(json.load(StringIO(s), - object_pairs_hook=lambda x: x), p) - od = json.loads(s, object_pairs_hook=OrderedDict) + self.assertEqual(self.loads(s), eval(s)) + self.assertEqual(self.loads(s, object_pairs_hook=lambda x: x), p) + self.assertEqual(self.json.load(StringIO(s), + object_pairs_hook=lambda x: x), p) + od = self.loads(s, object_pairs_hook=OrderedDict) self.assertEqual(od, OrderedDict(p)) self.assertEqual(type(od), OrderedDict) # the object_pairs_hook takes priority over the object_hook - self.assertEqual(json.loads(s, + self.assertEqual(self.loads(s, object_pairs_hook=OrderedDict, object_hook=lambda x: None), OrderedDict(p)) + + +class TestPyDecode(TestDecode, PyTest): pass +class TestCDecode(TestDecode, CTest): pass diff --git a/lib-python/2.7/json/tests/test_default.py b/lib-python/2.7/json/tests/test_default.py --- a/lib-python/2.7/json/tests/test_default.py +++ b/lib-python/2.7/json/tests/test_default.py @@ -1,9 +1,12 @@ -from unittest import TestCase +from json.tests import PyTest, CTest -import json -class TestDefault(TestCase): +class TestDefault(object): def test_default(self): self.assertEqual( - json.dumps(type, default=repr), - json.dumps(repr(type))) + self.dumps(type, default=repr), + self.dumps(repr(type))) + + +class TestPyDefault(TestDefault, PyTest): pass +class TestCDefault(TestDefault, CTest): pass diff --git a/lib-python/2.7/json/tests/test_dump.py b/lib-python/2.7/json/tests/test_dump.py --- a/lib-python/2.7/json/tests/test_dump.py +++ b/lib-python/2.7/json/tests/test_dump.py @@ -1,21 +1,23 @@ -from unittest import TestCase from cStringIO import StringIO +from json.tests import PyTest, CTest -import json -class TestDump(TestCase): +class TestDump(object): def test_dump(self): sio = StringIO() - json.dump({}, sio) + self.json.dump({}, sio) self.assertEqual(sio.getvalue(), '{}') def 
test_dumps(self): - self.assertEqual(json.dumps({}), '{}') + self.assertEqual(self.dumps({}), '{}') def test_encode_truefalse(self): - self.assertEqual(json.dumps( + self.assertEqual(self.dumps( {True: False, False: True}, sort_keys=True), '{"false": true, "true": false}') - self.assertEqual(json.dumps( + self.assertEqual(self.dumps( {2: 3.0, 4.0: 5L, False: 1, 6L: True}, sort_keys=True), '{"false": 1, "2": 3.0, "4.0": 5, "6": true}') + +class TestPyDump(TestDump, PyTest): pass +class TestCDump(TestDump, CTest): pass diff --git a/lib-python/2.7/json/tests/test_encode_basestring_ascii.py b/lib-python/2.7/json/tests/test_encode_basestring_ascii.py --- a/lib-python/2.7/json/tests/test_encode_basestring_ascii.py +++ b/lib-python/2.7/json/tests/test_encode_basestring_ascii.py @@ -1,8 +1,6 @@ -from unittest import TestCase +from collections import OrderedDict +from json.tests import PyTest, CTest -import json.encoder -from json import dumps -from collections import OrderedDict CASES = [ (u'/\\"\ucafe\ubabe\uab98\ufcde\ubcda\uef4a\x08\x0c\n\r\t`1~!@#$%^&*()_+-=[]{}|;:\',./<>?', '"/\\\\\\"\\ucafe\\ubabe\\uab98\\ufcde\\ubcda\\uef4a\\b\\f\\n\\r\\t`1~!@#$%^&*()_+-=[]{}|;:\',./<>?"'), @@ -23,19 +21,11 @@ (u'\u0123\u4567\u89ab\ucdef\uabcd\uef4a', '"\\u0123\\u4567\\u89ab\\ucdef\\uabcd\\uef4a"'), ] -class TestEncodeBaseStringAscii(TestCase): - def test_py_encode_basestring_ascii(self): - self._test_encode_basestring_ascii(json.encoder.py_encode_basestring_ascii) - - def test_c_encode_basestring_ascii(self): - if not json.encoder.c_encode_basestring_ascii: - return - self._test_encode_basestring_ascii(json.encoder.c_encode_basestring_ascii) - - def _test_encode_basestring_ascii(self, encode_basestring_ascii): - fname = encode_basestring_ascii.__name__ +class TestEncodeBasestringAscii(object): + def test_encode_basestring_ascii(self): + fname = self.json.encoder.encode_basestring_ascii.__name__ for input_string, expect in CASES: - result = encode_basestring_ascii(input_string) + result = self.json.encoder.encode_basestring_ascii(input_string) self.assertEqual(result, expect, '{0!r} != {1!r} for {2}({3!r})'.format( result, expect, fname, input_string)) @@ -43,5 +33,9 @@ def test_ordered_dict(self): # See issue 6105 items = [('one', 1), ('two', 2), ('three', 3), ('four', 4), ('five', 5)] - s = json.dumps(OrderedDict(items)) + s = self.dumps(OrderedDict(items)) self.assertEqual(s, '{"one": 1, "two": 2, "three": 3, "four": 4, "five": 5}') + + +class TestPyEncodeBasestringAscii(TestEncodeBasestringAscii, PyTest): pass +class TestCEncodeBasestringAscii(TestEncodeBasestringAscii, CTest): pass diff --git a/lib-python/2.7/json/tests/test_fail.py b/lib-python/2.7/json/tests/test_fail.py --- a/lib-python/2.7/json/tests/test_fail.py +++ b/lib-python/2.7/json/tests/test_fail.py @@ -1,6 +1,4 @@ -from unittest import TestCase - -import json +from json.tests import PyTest, CTest # Fri Dec 30 18:57:26 2005 JSONDOCS = [ @@ -61,15 +59,15 @@ 18: "spec doesn't specify any nesting limitations", } -class TestFail(TestCase): +class TestFail(object): def test_failures(self): for idx, doc in enumerate(JSONDOCS): idx = idx + 1 if idx in SKIPS: - json.loads(doc) + self.loads(doc) continue try: - json.loads(doc) + self.loads(doc) except ValueError: pass else: @@ -79,7 +77,11 @@ data = {'a' : 1, (1, 2) : 2} #This is for c encoder - self.assertRaises(TypeError, json.dumps, data) + self.assertRaises(TypeError, self.dumps, data) #This is for python encoder - self.assertRaises(TypeError, json.dumps, data, indent=True) + 
self.assertRaises(TypeError, self.dumps, data, indent=True) + + +class TestPyFail(TestFail, PyTest): pass +class TestCFail(TestFail, CTest): pass diff --git a/lib-python/2.7/json/tests/test_float.py b/lib-python/2.7/json/tests/test_float.py --- a/lib-python/2.7/json/tests/test_float.py +++ b/lib-python/2.7/json/tests/test_float.py @@ -1,19 +1,22 @@ import math -from unittest import TestCase +from json.tests import PyTest, CTest -import json -class TestFloat(TestCase): +class TestFloat(object): def test_floats(self): for num in [1617161771.7650001, math.pi, math.pi**100, math.pi**-100, 3.1]: - self.assertEqual(float(json.dumps(num)), num) - self.assertEqual(json.loads(json.dumps(num)), num) - self.assertEqual(json.loads(unicode(json.dumps(num))), num) + self.assertEqual(float(self.dumps(num)), num) + self.assertEqual(self.loads(self.dumps(num)), num) + self.assertEqual(self.loads(unicode(self.dumps(num))), num) def test_ints(self): for num in [1, 1L, 1<<32, 1<<64]: - self.assertEqual(json.dumps(num), str(num)) - self.assertEqual(int(json.dumps(num)), num) - self.assertEqual(json.loads(json.dumps(num)), num) - self.assertEqual(json.loads(unicode(json.dumps(num))), num) + self.assertEqual(self.dumps(num), str(num)) + self.assertEqual(int(self.dumps(num)), num) + self.assertEqual(self.loads(self.dumps(num)), num) + self.assertEqual(self.loads(unicode(self.dumps(num))), num) + + +class TestPyFloat(TestFloat, PyTest): pass +class TestCFloat(TestFloat, CTest): pass diff --git a/lib-python/2.7/json/tests/test_indent.py b/lib-python/2.7/json/tests/test_indent.py --- a/lib-python/2.7/json/tests/test_indent.py +++ b/lib-python/2.7/json/tests/test_indent.py @@ -1,9 +1,9 @@ -from unittest import TestCase +import textwrap +from StringIO import StringIO +from json.tests import PyTest, CTest -import json -import textwrap -class TestIndent(TestCase): +class TestIndent(object): def test_indent(self): h = [['blorpie'], ['whoops'], [], 'd-shtaeou', 'd-nthiouh', 'i-vhbjkhnth', {'nifty': 87}, {'field': 'yes', 'morefield': False} ] @@ -30,12 +30,31 @@ ]""") - d1 = json.dumps(h) - d2 = json.dumps(h, indent=2, sort_keys=True, separators=(',', ': ')) + d1 = self.dumps(h) + d2 = self.dumps(h, indent=2, sort_keys=True, separators=(',', ': ')) - h1 = json.loads(d1) - h2 = json.loads(d2) + h1 = self.loads(d1) + h2 = self.loads(d2) self.assertEqual(h1, h) self.assertEqual(h2, h) self.assertEqual(d2, expect) + + def test_indent0(self): + h = {3: 1} + def check(indent, expected): + d1 = self.dumps(h, indent=indent) + self.assertEqual(d1, expected) + + sio = StringIO() + self.json.dump(h, sio, indent=indent) + self.assertEqual(sio.getvalue(), expected) + + # indent=0 should emit newlines + check(0, '{\n"3": 1\n}') + # indent=None is more compact + check(None, '{"3": 1}') + + +class TestPyIndent(TestIndent, PyTest): pass +class TestCIndent(TestIndent, CTest): pass diff --git a/lib-python/2.7/json/tests/test_pass1.py b/lib-python/2.7/json/tests/test_pass1.py --- a/lib-python/2.7/json/tests/test_pass1.py +++ b/lib-python/2.7/json/tests/test_pass1.py @@ -1,6 +1,5 @@ -from unittest import TestCase +from json.tests import PyTest, CTest -import json # from http://json.org/JSON_checker/test/pass1.json JSON = r''' @@ -62,15 +61,19 @@ ,"rosebud"] ''' -class TestPass1(TestCase): +class TestPass1(object): def test_parse(self): # test in/out equivalence and parsing - res = json.loads(JSON) - out = json.dumps(res) - self.assertEqual(res, json.loads(out)) + res = self.loads(JSON) + out = self.dumps(res) + self.assertEqual(res, 
self.loads(out)) try: - json.dumps(res, allow_nan=False) + self.dumps(res, allow_nan=False) except ValueError: pass else: self.fail("23456789012E666 should be out of range") + + +class TestPyPass1(TestPass1, PyTest): pass +class TestCPass1(TestPass1, CTest): pass diff --git a/lib-python/2.7/json/tests/test_pass2.py b/lib-python/2.7/json/tests/test_pass2.py --- a/lib-python/2.7/json/tests/test_pass2.py +++ b/lib-python/2.7/json/tests/test_pass2.py @@ -1,14 +1,18 @@ -from unittest import TestCase -import json +from json.tests import PyTest, CTest + # from http://json.org/JSON_checker/test/pass2.json JSON = r''' [[[[[[[[[[[[[[[[[[["Not too deep"]]]]]]]]]]]]]]]]]]] ''' -class TestPass2(TestCase): +class TestPass2(object): def test_parse(self): # test in/out equivalence and parsing - res = json.loads(JSON) - out = json.dumps(res) - self.assertEqual(res, json.loads(out)) + res = self.loads(JSON) + out = self.dumps(res) + self.assertEqual(res, self.loads(out)) + + +class TestPyPass2(TestPass2, PyTest): pass +class TestCPass2(TestPass2, CTest): pass diff --git a/lib-python/2.7/json/tests/test_pass3.py b/lib-python/2.7/json/tests/test_pass3.py --- a/lib-python/2.7/json/tests/test_pass3.py +++ b/lib-python/2.7/json/tests/test_pass3.py @@ -1,6 +1,5 @@ -from unittest import TestCase +from json.tests import PyTest, CTest -import json # from http://json.org/JSON_checker/test/pass3.json JSON = r''' @@ -12,9 +11,14 @@ } ''' -class TestPass3(TestCase): + +class TestPass3(object): def test_parse(self): # test in/out equivalence and parsing - res = json.loads(JSON) - out = json.dumps(res) - self.assertEqual(res, json.loads(out)) + res = self.loads(JSON) + out = self.dumps(res) + self.assertEqual(res, self.loads(out)) + + +class TestPyPass3(TestPass3, PyTest): pass +class TestCPass3(TestPass3, CTest): pass diff --git a/lib-python/2.7/json/tests/test_recursion.py b/lib-python/2.7/json/tests/test_recursion.py --- a/lib-python/2.7/json/tests/test_recursion.py +++ b/lib-python/2.7/json/tests/test_recursion.py @@ -1,28 +1,16 @@ -from unittest import TestCase +from json.tests import PyTest, CTest -import json class JSONTestObject: pass -class RecursiveJSONEncoder(json.JSONEncoder): - recurse = False - def default(self, o): - if o is JSONTestObject: - if self.recurse: - return [JSONTestObject] - else: - return 'JSONTestObject' - return json.JSONEncoder.default(o) - - -class TestRecursion(TestCase): +class TestRecursion(object): def test_listrecursion(self): x = [] x.append(x) try: - json.dumps(x) + self.dumps(x) except ValueError: pass else: @@ -31,7 +19,7 @@ y = [x] x.append(y) try: - json.dumps(x) + self.dumps(x) except ValueError: pass else: @@ -39,13 +27,13 @@ y = [] x = [y, y] # ensure that the marker is cleared - json.dumps(x) + self.dumps(x) def test_dictrecursion(self): x = {} x["test"] = x try: - json.dumps(x) + self.dumps(x) except ValueError: pass else: @@ -53,9 +41,19 @@ x = {} y = {"a": x, "b": x} # ensure that the marker is cleared - json.dumps(x) + self.dumps(x) def test_defaultrecursion(self): + class RecursiveJSONEncoder(self.json.JSONEncoder): + recurse = False + def default(self, o): + if o is JSONTestObject: + if self.recurse: + return [JSONTestObject] + else: + return 'JSONTestObject' + return pyjson.JSONEncoder.default(o) + enc = RecursiveJSONEncoder() self.assertEqual(enc.encode(JSONTestObject), '"JSONTestObject"') enc.recurse = True @@ -65,3 +63,46 @@ pass else: self.fail("didn't raise ValueError on default recursion") + + + def test_highly_nested_objects_decoding(self): + # test that loading 
highly-nested objects doesn't segfault when C + # accelerations are used. See #12017 + # str + with self.assertRaises(RuntimeError): + self.loads('{"a":' * 100000 + '1' + '}' * 100000) + with self.assertRaises(RuntimeError): + self.loads('{"a":' * 100000 + '[1]' + '}' * 100000) + with self.assertRaises(RuntimeError): + self.loads('[' * 100000 + '1' + ']' * 100000) + # unicode + with self.assertRaises(RuntimeError): + self.loads(u'{"a":' * 100000 + u'1' + u'}' * 100000) + with self.assertRaises(RuntimeError): + self.loads(u'{"a":' * 100000 + u'[1]' + u'}' * 100000) + with self.assertRaises(RuntimeError): + self.loads(u'[' * 100000 + u'1' + u']' * 100000) + + def test_highly_nested_objects_encoding(self): + # See #12051 + l, d = [], {} + for x in xrange(100000): + l, d = [l], {'k':d} + with self.assertRaises(RuntimeError): + self.dumps(l) + with self.assertRaises(RuntimeError): + self.dumps(d) + + def test_endless_recursion(self): + # See #12051 + class EndlessJSONEncoder(self.json.JSONEncoder): + def default(self, o): + """If check_circular is False, this will keep adding another list.""" + return [o] + + with self.assertRaises(RuntimeError): + EndlessJSONEncoder(check_circular=False).encode(5j) + + +class TestPyRecursion(TestRecursion, PyTest): pass +class TestCRecursion(TestRecursion, CTest): pass diff --git a/lib-python/2.7/json/tests/test_scanstring.py b/lib-python/2.7/json/tests/test_scanstring.py --- a/lib-python/2.7/json/tests/test_scanstring.py +++ b/lib-python/2.7/json/tests/test_scanstring.py @@ -1,18 +1,10 @@ import sys -import decimal -from unittest import TestCase +from json.tests import PyTest, CTest -import json -import json.decoder -class TestScanString(TestCase): - def test_py_scanstring(self): - self._test_scanstring(json.decoder.py_scanstring) - - def test_c_scanstring(self): - self._test_scanstring(json.decoder.c_scanstring) - - def _test_scanstring(self, scanstring): +class TestScanstring(object): + def test_scanstring(self): + scanstring = self.json.decoder.scanstring self.assertEqual( scanstring('"z\\ud834\\udd20x"', 1, None, True), (u'z\U0001d120x', 16)) @@ -103,10 +95,15 @@ (u'Bad value', 12)) def test_issue3623(self): - self.assertRaises(ValueError, json.decoder.scanstring, b"xxx", 1, + self.assertRaises(ValueError, self.json.decoder.scanstring, b"xxx", 1, "xxx") self.assertRaises(UnicodeDecodeError, - json.encoder.encode_basestring_ascii, b"xx\xff") + self.json.encoder.encode_basestring_ascii, b"xx\xff") def test_overflow(self): - self.assertRaises(OverflowError, json.decoder.scanstring, b"xxx", sys.maxsize+1) + with self.assertRaises(OverflowError): + self.json.decoder.scanstring(b"xxx", sys.maxsize+1) + + +class TestPyScanstring(TestScanstring, PyTest): pass +class TestCScanstring(TestScanstring, CTest): pass diff --git a/lib-python/2.7/json/tests/test_separators.py b/lib-python/2.7/json/tests/test_separators.py --- a/lib-python/2.7/json/tests/test_separators.py +++ b/lib-python/2.7/json/tests/test_separators.py @@ -1,10 +1,8 @@ import textwrap -from unittest import TestCase +from json.tests import PyTest, CTest -import json - -class TestSeparators(TestCase): +class TestSeparators(object): def test_separators(self): h = [['blorpie'], ['whoops'], [], 'd-shtaeou', 'd-nthiouh', 'i-vhbjkhnth', {'nifty': 87}, {'field': 'yes', 'morefield': False} ] @@ -31,12 +29,16 @@ ]""") - d1 = json.dumps(h) - d2 = json.dumps(h, indent=2, sort_keys=True, separators=(' ,', ' : ')) + d1 = self.dumps(h) + d2 = self.dumps(h, indent=2, sort_keys=True, separators=(' ,', ' : ')) - h1 = 
json.loads(d1) - h2 = json.loads(d2) + h1 = self.loads(d1) + h2 = self.loads(d2) self.assertEqual(h1, h) self.assertEqual(h2, h) self.assertEqual(d2, expect) + + +class TestPySeparators(TestSeparators, PyTest): pass +class TestCSeparators(TestSeparators, CTest): pass diff --git a/lib-python/2.7/json/tests/test_speedups.py b/lib-python/2.7/json/tests/test_speedups.py --- a/lib-python/2.7/json/tests/test_speedups.py +++ b/lib-python/2.7/json/tests/test_speedups.py @@ -1,24 +1,23 @@ -import decimal -from unittest import TestCase +from json.tests import CTest -from json import decoder, encoder, scanner -class TestSpeedups(TestCase): +class TestSpeedups(CTest): def test_scanstring(self): - self.assertEqual(decoder.scanstring.__module__, "_json") - self.assertTrue(decoder.scanstring is decoder.c_scanstring) + self.assertEqual(self.json.decoder.scanstring.__module__, "_json") + self.assertIs(self.json.decoder.scanstring, self.json.decoder.c_scanstring) def test_encode_basestring_ascii(self): - self.assertEqual(encoder.encode_basestring_ascii.__module__, "_json") - self.assertTrue(encoder.encode_basestring_ascii is - encoder.c_encode_basestring_ascii) + self.assertEqual(self.json.encoder.encode_basestring_ascii.__module__, + "_json") + self.assertIs(self.json.encoder.encode_basestring_ascii, + self.json.encoder.c_encode_basestring_ascii) -class TestDecode(TestCase): +class TestDecode(CTest): def test_make_scanner(self): - self.assertRaises(AttributeError, scanner.c_make_scanner, 1) + self.assertRaises(AttributeError, self.json.scanner.c_make_scanner, 1) def test_make_encoder(self): - self.assertRaises(TypeError, encoder.c_make_encoder, + self.assertRaises(TypeError, self.json.encoder.c_make_encoder, None, "\xCD\x7D\x3D\x4E\x12\x4C\xF9\x79\xD7\x52\xBA\x82\xF2\x27\x4A\x7D\xA0\xCA\x75", None) diff --git a/lib-python/2.7/json/tests/test_unicode.py b/lib-python/2.7/json/tests/test_unicode.py --- a/lib-python/2.7/json/tests/test_unicode.py +++ b/lib-python/2.7/json/tests/test_unicode.py @@ -1,11 +1,10 @@ -from unittest import TestCase +from collections import OrderedDict +from json.tests import PyTest, CTest -import json -from collections import OrderedDict -class TestUnicode(TestCase): +class TestUnicode(object): def test_encoding1(self): - encoder = json.JSONEncoder(encoding='utf-8') + encoder = self.json.JSONEncoder(encoding='utf-8') u = u'\N{GREEK SMALL LETTER ALPHA}\N{GREEK CAPITAL LETTER OMEGA}' s = u.encode('utf-8') ju = encoder.encode(u) @@ -15,68 +14,72 @@ def test_encoding2(self): u = u'\N{GREEK SMALL LETTER ALPHA}\N{GREEK CAPITAL LETTER OMEGA}' s = u.encode('utf-8') - ju = json.dumps(u, encoding='utf-8') - js = json.dumps(s, encoding='utf-8') + ju = self.dumps(u, encoding='utf-8') + js = self.dumps(s, encoding='utf-8') self.assertEqual(ju, js) def test_encoding3(self): u = u'\N{GREEK SMALL LETTER ALPHA}\N{GREEK CAPITAL LETTER OMEGA}' - j = json.dumps(u) + j = self.dumps(u) self.assertEqual(j, '"\\u03b1\\u03a9"') def test_encoding4(self): u = u'\N{GREEK SMALL LETTER ALPHA}\N{GREEK CAPITAL LETTER OMEGA}' - j = json.dumps([u]) + j = self.dumps([u]) self.assertEqual(j, '["\\u03b1\\u03a9"]') def test_encoding5(self): u = u'\N{GREEK SMALL LETTER ALPHA}\N{GREEK CAPITAL LETTER OMEGA}' - j = json.dumps(u, ensure_ascii=False) + j = self.dumps(u, ensure_ascii=False) self.assertEqual(j, u'"{0}"'.format(u)) def test_encoding6(self): u = u'\N{GREEK SMALL LETTER ALPHA}\N{GREEK CAPITAL LETTER OMEGA}' - j = json.dumps([u], ensure_ascii=False) + j = self.dumps([u], ensure_ascii=False) self.assertEqual(j, 
u'["{0}"]'.format(u)) def test_big_unicode_encode(self): u = u'\U0001d120' - self.assertEqual(json.dumps(u), '"\\ud834\\udd20"') - self.assertEqual(json.dumps(u, ensure_ascii=False), u'"\U0001d120"') + self.assertEqual(self.dumps(u), '"\\ud834\\udd20"') + self.assertEqual(self.dumps(u, ensure_ascii=False), u'"\U0001d120"') def test_big_unicode_decode(self): u = u'z\U0001d120x' - self.assertEqual(json.loads('"' + u + '"'), u) - self.assertEqual(json.loads('"z\\ud834\\udd20x"'), u) + self.assertEqual(self.loads('"' + u + '"'), u) + self.assertEqual(self.loads('"z\\ud834\\udd20x"'), u) def test_unicode_decode(self): for i in range(0, 0xd7ff): u = unichr(i) s = '"\\u{0:04x}"'.format(i) - self.assertEqual(json.loads(s), u) + self.assertEqual(self.loads(s), u) def test_object_pairs_hook_with_unicode(self): s = u'{"xkd":1, "kcw":2, "art":3, "hxm":4, "qrt":5, "pad":6, "hoy":7}' p = [(u"xkd", 1), (u"kcw", 2), (u"art", 3), (u"hxm", 4), (u"qrt", 5), (u"pad", 6), (u"hoy", 7)] - self.assertEqual(json.loads(s), eval(s)) - self.assertEqual(json.loads(s, object_pairs_hook = lambda x: x), p) - od = json.loads(s, object_pairs_hook = OrderedDict) + self.assertEqual(self.loads(s), eval(s)) + self.assertEqual(self.loads(s, object_pairs_hook = lambda x: x), p) + od = self.loads(s, object_pairs_hook = OrderedDict) self.assertEqual(od, OrderedDict(p)) self.assertEqual(type(od), OrderedDict) # the object_pairs_hook takes priority over the object_hook - self.assertEqual(json.loads(s, + self.assertEqual(self.loads(s, object_pairs_hook = OrderedDict, object_hook = lambda x: None), OrderedDict(p)) def test_default_encoding(self): - self.assertEqual(json.loads(u'{"a": "\xe9"}'.encode('utf-8')), + self.assertEqual(self.loads(u'{"a": "\xe9"}'.encode('utf-8')), {'a': u'\xe9'}) def test_unicode_preservation(self): - self.assertEqual(type(json.loads(u'""')), unicode) - self.assertEqual(type(json.loads(u'"a"')), unicode) - self.assertEqual(type(json.loads(u'["a"]')[0]), unicode) + self.assertEqual(type(self.loads(u'""')), unicode) + self.assertEqual(type(self.loads(u'"a"')), unicode) + self.assertEqual(type(self.loads(u'["a"]')[0]), unicode) # Issue 10038. - self.assertEqual(type(json.loads('"foo"')), unicode) + self.assertEqual(type(self.loads('"foo"')), unicode) + + +class TestPyUnicode(TestUnicode, PyTest): pass +class TestCUnicode(TestUnicode, CTest): pass diff --git a/lib-python/2.7/lib-tk/Tix.py b/lib-python/2.7/lib-tk/Tix.py --- a/lib-python/2.7/lib-tk/Tix.py +++ b/lib-python/2.7/lib-tk/Tix.py @@ -163,7 +163,7 @@ extensions) exist, then the image type is chosen according to the depth of the X display: xbm images are chosen on monochrome displays and color images are chosen on color displays. By using - tix_ getimage, you can advoid hard coding the pathnames of the + tix_ getimage, you can avoid hard coding the pathnames of the image files in your application. When successful, this command returns the name of the newly created image, which can be used to configure the -image option of the Tk and Tix widgets. @@ -171,7 +171,7 @@ return self.tk.call('tix', 'getimage', name) def tix_option_get(self, name): - """Gets the options manitained by the Tix + """Gets the options maintained by the Tix scheme mechanism. Available options include: active_bg active_fg bg @@ -576,7 +576,7 @@ class ComboBox(TixWidget): """ComboBox - an Entry field with a dropdown menu. 
The user can select a - choice by either typing in the entry subwdget or selecting from the + choice by either typing in the entry subwidget or selecting from the listbox subwidget. Subwidget Class @@ -869,7 +869,7 @@ """HList - Hierarchy display widget can be used to display any data that have a hierarchical structure, for example, file system directory trees. The list entries are indented and connected by branch lines - according to their places in the hierachy. + according to their places in the hierarchy. Subwidgets - None""" @@ -1520,7 +1520,7 @@ self.tk.call(self._w, 'selection', 'set', first, last) class Tree(TixWidget): - """Tree - The tixTree widget can be used to display hierachical + """Tree - The tixTree widget can be used to display hierarchical data in a tree form. The user can adjust the view of the tree by opening or closing parts of the tree.""" diff --git a/lib-python/2.7/lib-tk/Tkinter.py b/lib-python/2.7/lib-tk/Tkinter.py --- a/lib-python/2.7/lib-tk/Tkinter.py +++ b/lib-python/2.7/lib-tk/Tkinter.py @@ -1660,7 +1660,7 @@ class Tk(Misc, Wm): """Toplevel widget of Tk which represents mostly the main window - of an appliation. It has an associated Tcl interpreter.""" + of an application. It has an associated Tcl interpreter.""" _w = '.' def __init__(self, screenName=None, baseName=None, className='Tk', useTk=1, sync=0, use=None): diff --git a/lib-python/2.7/lib-tk/test/test_ttk/test_functions.py b/lib-python/2.7/lib-tk/test/test_ttk/test_functions.py --- a/lib-python/2.7/lib-tk/test/test_ttk/test_functions.py +++ b/lib-python/2.7/lib-tk/test/test_ttk/test_functions.py @@ -136,7 +136,7 @@ # minimum acceptable for image type self.assertEqual(ttk._format_elemcreate('image', False, 'test'), ("test ", ())) - # specifiyng a state spec + # specifying a state spec self.assertEqual(ttk._format_elemcreate('image', False, 'test', ('', 'a')), ("test {} a", ())) # state spec with multiple states diff --git a/lib-python/2.7/lib-tk/ttk.py b/lib-python/2.7/lib-tk/ttk.py --- a/lib-python/2.7/lib-tk/ttk.py +++ b/lib-python/2.7/lib-tk/ttk.py @@ -707,7 +707,7 @@ textvariable, values, width """ # The "values" option may need special formatting, so leave to - # _format_optdict the responsability to format it + # _format_optdict the responsibility to format it if "values" in kw: kw["values"] = _format_optdict({'v': kw["values"]})[1] @@ -993,7 +993,7 @@ pane is either an integer index or the name of a managed subwindow. If kw is not given, returns a dict of the pane option values. If option is specified then the value for that option is returned. - Otherwise, sets the options to the correspoding values.""" + Otherwise, sets the options to the corresponding values.""" if option is not None: kw[option] = None return _val_or_dict(kw, self.tk.call, self._w, "pane", pane) diff --git a/lib-python/2.7/lib-tk/turtle.py b/lib-python/2.7/lib-tk/turtle.py --- a/lib-python/2.7/lib-tk/turtle.py +++ b/lib-python/2.7/lib-tk/turtle.py @@ -1385,7 +1385,7 @@ Optional argument: picname -- a string, name of a gif-file or "nopic". - If picname is a filename, set the corresponing image as background. + If picname is a filename, set the corresponding image as background. If picname is "nopic", delete backgroundimage, if present. If picname is None, return the filename of the current backgroundimage. 
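(A minimal usage sketch of the bgpic behaviour documented above; "landscape.gif" is only an illustrative filename.)

import turtle

screen = turtle.Screen()
screen.bgpic("landscape.gif")   # use the gif as background image
screen.bgpic()                  # -> "landscape.gif"
screen.bgpic("nopic")           # delete the background image again
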
@@ -1409,7 +1409,7 @@ Optional arguments: canvwidth -- positive integer, new width of canvas in pixels canvheight -- positive integer, new height of canvas in pixels - bg -- colorstring or color-tupel, new backgroundcolor + bg -- colorstring or color-tuple, new backgroundcolor If no arguments are given, return current (canvaswidth, canvasheight) Do not alter the drawing window. To observe hidden parts of @@ -3079,9 +3079,9 @@ fill="", width=ps) # Turtle now at position old, self._position = old - ## if undo is done during crating a polygon, the last vertex - ## will be deleted. if the polygon is entirel deleted, - ## creatigPoly will be set to False. + ## if undo is done during creating a polygon, the last vertex + ## will be deleted. if the polygon is entirely deleted, + ## creatingPoly will be set to False. ## Polygons created before the last one will not be affected by undo() if self._creatingPoly: if len(self._poly) > 0: @@ -3221,7 +3221,7 @@ def dot(self, size=None, *color): """Draw a dot with diameter size, using color. - Optional argumentS: + Optional arguments: size -- an integer >= 1 (if given) color -- a colorstring or a numeric color tuple @@ -3691,7 +3691,7 @@ class Turtle(RawTurtle): - """RawTurtle auto-crating (scrolled) canvas. + """RawTurtle auto-creating (scrolled) canvas. When a Turtle object is created or a function derived from some Turtle method is called a TurtleScreen object is automatically created. @@ -3731,7 +3731,7 @@ filename -- a string, used as filename default value is turtle_docstringdict - Has to be called explicitely, (not used by the turtle-graphics classes) + Has to be called explicitly, (not used by the turtle-graphics classes) The docstring dictionary will be written to the Python script .py It is intended to serve as a template for translation of the docstrings into different languages. 
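(A minimal sketch of the write_docstringdict helper whose docstring is corrected above, assuming the patched 2.7 turtle module; the second filename is only an example.)

import turtle

# Writes turtle_docstringdict.py in the current directory: a dict mapping
# method names to their docstrings, intended as a template for translations.
turtle.write_docstringdict()
turtle.write_docstringdict("turtle_docstringdict_de")
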
diff --git a/lib-python/2.7/lib2to3/__main__.py b/lib-python/2.7/lib2to3/__main__.py new file mode 100644 --- /dev/null +++ b/lib-python/2.7/lib2to3/__main__.py @@ -0,0 +1,4 @@ +import sys +from .main import main + +sys.exit(main("lib2to3.fixes")) diff --git a/lib-python/2.7/lib2to3/fixes/fix_itertools.py b/lib-python/2.7/lib2to3/fixes/fix_itertools.py --- a/lib-python/2.7/lib2to3/fixes/fix_itertools.py +++ b/lib-python/2.7/lib2to3/fixes/fix_itertools.py @@ -13,7 +13,7 @@ class FixItertools(fixer_base.BaseFix): BM_compatible = True - it_funcs = "('imap'|'ifilter'|'izip'|'ifilterfalse')" + it_funcs = "('imap'|'ifilter'|'izip'|'izip_longest'|'ifilterfalse')" PATTERN = """ power< it='itertools' trailer< @@ -28,7 +28,8 @@ def transform(self, node, results): prefix = None func = results['func'][0] - if 'it' in results and func.value != u'ifilterfalse': + if ('it' in results and + func.value not in (u'ifilterfalse', u'izip_longest')): dot, it = (results['dot'], results['it']) # Remove the 'itertools' prefix = it.prefix diff --git a/lib-python/2.7/lib2to3/fixes/fix_itertools_imports.py b/lib-python/2.7/lib2to3/fixes/fix_itertools_imports.py --- a/lib-python/2.7/lib2to3/fixes/fix_itertools_imports.py +++ b/lib-python/2.7/lib2to3/fixes/fix_itertools_imports.py @@ -31,9 +31,10 @@ if member_name in (u'imap', u'izip', u'ifilter'): child.value = None child.remove() - elif member_name == u'ifilterfalse': + elif member_name in (u'ifilterfalse', u'izip_longest'): node.changed() - name_node.value = u'filterfalse' + name_node.value = (u'filterfalse' if member_name[1] == u'f' + else u'zip_longest') # Make sure the import statement is still sane children = imports.children[:] or [imports] diff --git a/lib-python/2.7/lib2to3/fixes/fix_metaclass.py b/lib-python/2.7/lib2to3/fixes/fix_metaclass.py --- a/lib-python/2.7/lib2to3/fixes/fix_metaclass.py +++ b/lib-python/2.7/lib2to3/fixes/fix_metaclass.py @@ -48,7 +48,7 @@ """ for node in cls_node.children: if node.type == syms.suite: - # already in the prefered format, do nothing + # already in the preferred format, do nothing return # !%@#! 
oneliners have no suite node, we have to fake one up diff --git a/lib-python/2.7/lib2to3/fixes/fix_urllib.py b/lib-python/2.7/lib2to3/fixes/fix_urllib.py --- a/lib-python/2.7/lib2to3/fixes/fix_urllib.py +++ b/lib-python/2.7/lib2to3/fixes/fix_urllib.py @@ -12,7 +12,7 @@ MAPPING = {"urllib": [ ("urllib.request", - ["URLOpener", "FancyURLOpener", "urlretrieve", + ["URLopener", "FancyURLopener", "urlretrieve", "_urlopener", "urlopen", "urlcleanup", "pathname2url", "url2pathname"]), ("urllib.parse", diff --git a/lib-python/2.7/lib2to3/main.py b/lib-python/2.7/lib2to3/main.py --- a/lib-python/2.7/lib2to3/main.py +++ b/lib-python/2.7/lib2to3/main.py @@ -101,7 +101,7 @@ parser.add_option("-j", "--processes", action="store", default=1, type="int", help="Run 2to3 concurrently") parser.add_option("-x", "--nofix", action="append", default=[], - help="Prevent a fixer from being run.") + help="Prevent a transformation from being run") parser.add_option("-l", "--list-fixes", action="store_true", help="List available transformations") parser.add_option("-p", "--print-function", action="store_true", @@ -113,7 +113,7 @@ parser.add_option("-w", "--write", action="store_true", help="Write back modified files") parser.add_option("-n", "--nobackups", action="store_true", default=False, - help="Don't write backups for modified files.") + help="Don't write backups for modified files") # Parse command line arguments refactor_stdin = False diff --git a/lib-python/2.7/lib2to3/patcomp.py b/lib-python/2.7/lib2to3/patcomp.py --- a/lib-python/2.7/lib2to3/patcomp.py +++ b/lib-python/2.7/lib2to3/patcomp.py @@ -12,6 +12,7 @@ # Python imports import os +import StringIO # Fairly local imports from .pgen2 import driver, literals, token, tokenize, parse, grammar @@ -32,7 +33,7 @@ def tokenize_wrapper(input): """Tokenizes a string suppressing significant whitespace.""" skip = set((token.NEWLINE, token.INDENT, token.DEDENT)) - tokens = tokenize.generate_tokens(driver.generate_lines(input).next) + tokens = tokenize.generate_tokens(StringIO.StringIO(input).readline) for quintuple in tokens: type, value, start, end, line_text = quintuple if type not in skip: diff --git a/lib-python/2.7/lib2to3/pgen2/conv.py b/lib-python/2.7/lib2to3/pgen2/conv.py --- a/lib-python/2.7/lib2to3/pgen2/conv.py +++ b/lib-python/2.7/lib2to3/pgen2/conv.py @@ -51,7 +51,7 @@ self.finish_off() def parse_graminit_h(self, filename): - """Parse the .h file writen by pgen. (Internal) + """Parse the .h file written by pgen. (Internal) This file is a sequence of #define statements defining the nonterminals of the grammar as numbers. We build two tables @@ -82,7 +82,7 @@ return True def parse_graminit_c(self, filename): - """Parse the .c file writen by pgen. (Internal) + """Parse the .c file written by pgen. (Internal) The file looks as follows. 
The first two lines are always this: diff --git a/lib-python/2.7/lib2to3/pgen2/driver.py b/lib-python/2.7/lib2to3/pgen2/driver.py --- a/lib-python/2.7/lib2to3/pgen2/driver.py +++ b/lib-python/2.7/lib2to3/pgen2/driver.py @@ -19,6 +19,7 @@ import codecs import os import logging +import StringIO import sys # Pgen imports @@ -101,18 +102,10 @@ def parse_string(self, text, debug=False): """Parse a string and return the syntax tree.""" - tokens = tokenize.generate_tokens(generate_lines(text).next) + tokens = tokenize.generate_tokens(StringIO.StringIO(text).readline) return self.parse_tokens(tokens, debug) -def generate_lines(text): - """Generator that behaves like readline without using StringIO.""" - for line in text.splitlines(True): - yield line - while True: - yield "" - - def load_grammar(gt="Grammar.txt", gp=None, save=True, force=False, logger=None): """Load the grammar (maybe from a pickle).""" diff --git a/lib-python/2.7/lib2to3/pytree.py b/lib-python/2.7/lib2to3/pytree.py --- a/lib-python/2.7/lib2to3/pytree.py +++ b/lib-python/2.7/lib2to3/pytree.py @@ -658,8 +658,8 @@ content: optional sequence of subsequences of patterns; if absent, matches one node; if present, each subsequence is an alternative [*] - min: optinal minumum number of times to match, default 0 - max: optional maximum number of times tro match, default HUGE + min: optional minimum number of times to match, default 0 + max: optional maximum number of times to match, default HUGE name: optional name assigned to this match [*] Thus, if content is [[a, b, c], [d, e], [f, g, h]] this is @@ -743,9 +743,11 @@ else: # The reason for this is that hitting the recursion limit usually # results in some ugly messages about how RuntimeErrors are being - # ignored. - save_stderr = sys.stderr - sys.stderr = StringIO() + # ignored. We don't do this on non-CPython implementation because + # they don't have this problem. + if hasattr(sys, "getrefcount"): + save_stderr = sys.stderr + sys.stderr = StringIO() try: for count, r in self._recursive_matches(nodes, 0): if self.name: @@ -759,7 +761,8 @@ r[self.name] = nodes[:count] yield count, r finally: - sys.stderr = save_stderr + if hasattr(sys, "getrefcount"): + sys.stderr = save_stderr def _iterative_matches(self, nodes): """Helper to iteratively yield the matches.""" diff --git a/lib-python/2.7/lib2to3/refactor.py b/lib-python/2.7/lib2to3/refactor.py --- a/lib-python/2.7/lib2to3/refactor.py +++ b/lib-python/2.7/lib2to3/refactor.py @@ -302,13 +302,14 @@ Files and subdirectories starting with '.' are skipped. 
""" + py_ext = os.extsep + "py" for dirpath, dirnames, filenames in os.walk(dir_name): self.log_debug("Descending into %s", dirpath) dirnames.sort() filenames.sort() for name in filenames: - if not name.startswith(".") and \ - os.path.splitext(name)[1].endswith("py"): + if (not name.startswith(".") and + os.path.splitext(name)[1] == py_ext): fullname = os.path.join(dirpath, name) self.refactor_file(fullname, write, doctests_only) # Modify dirnames in-place to remove subdirs with leading dots diff --git a/lib-python/2.7/lib2to3/tests/data/py2_test_grammar.py b/lib-python/2.7/lib2to3/tests/data/py2_test_grammar.py --- a/lib-python/2.7/lib2to3/tests/data/py2_test_grammar.py +++ b/lib-python/2.7/lib2to3/tests/data/py2_test_grammar.py @@ -316,7 +316,7 @@ ### simple_stmt: small_stmt (';' small_stmt)* [';'] x = 1; pass; del x def foo(): - # verify statments that end with semi-colons + # verify statements that end with semi-colons x = 1; pass; del x; foo() diff --git a/lib-python/2.7/lib2to3/tests/data/py3_test_grammar.py b/lib-python/2.7/lib2to3/tests/data/py3_test_grammar.py --- a/lib-python/2.7/lib2to3/tests/data/py3_test_grammar.py +++ b/lib-python/2.7/lib2to3/tests/data/py3_test_grammar.py @@ -356,7 +356,7 @@ ### simple_stmt: small_stmt (';' small_stmt)* [';'] x = 1; pass; del x def foo(): - # verify statments that end with semi-colons + # verify statements that end with semi-colons x = 1; pass; del x; foo() diff --git a/lib-python/2.7/lib2to3/tests/test_fixers.py b/lib-python/2.7/lib2to3/tests/test_fixers.py --- a/lib-python/2.7/lib2to3/tests/test_fixers.py +++ b/lib-python/2.7/lib2to3/tests/test_fixers.py @@ -3623,16 +3623,24 @@ a = """%s(f, a)""" self.checkall(b, a) - def test_2(self): + def test_qualified(self): b = """itertools.ifilterfalse(a, b)""" a = """itertools.filterfalse(a, b)""" self.check(b, a) - def test_4(self): + b = """itertools.izip_longest(a, b)""" + a = """itertools.zip_longest(a, b)""" + self.check(b, a) + + def test_2(self): b = """ifilterfalse(a, b)""" a = """filterfalse(a, b)""" self.check(b, a) + b = """izip_longest(a, b)""" + a = """zip_longest(a, b)""" + self.check(b, a) + def test_space_1(self): b = """ %s(f, a)""" a = """ %s(f, a)""" @@ -3643,9 +3651,14 @@ a = """ itertools.filterfalse(a, b)""" self.check(b, a) + b = """ itertools.izip_longest(a, b)""" + a = """ itertools.zip_longest(a, b)""" + self.check(b, a) + def test_run_order(self): self.assert_runs_after('map', 'zip', 'filter') + class Test_itertools_imports(FixerTestCase): fixer = 'itertools_imports' @@ -3696,18 +3709,19 @@ s = "from itertools import bar as bang" self.unchanged(s) - def test_ifilter(self): - b = "from itertools import ifilterfalse" - a = "from itertools import filterfalse" - self.check(b, a) - - b = "from itertools import imap, ifilterfalse, foo" - a = "from itertools import filterfalse, foo" - self.check(b, a) - - b = "from itertools import bar, ifilterfalse, foo" - a = "from itertools import bar, filterfalse, foo" - self.check(b, a) + def test_ifilter_and_zip_longest(self): + for name in "filterfalse", "zip_longest": + b = "from itertools import i%s" % (name,) + a = "from itertools import %s" % (name,) + self.check(b, a) + + b = "from itertools import imap, i%s, foo" % (name,) + a = "from itertools import %s, foo" % (name,) + self.check(b, a) + + b = "from itertools import bar, i%s, foo" % (name,) + a = "from itertools import bar, %s, foo" % (name,) + self.check(b, a) def test_import_star(self): s = "from itertools import *" diff --git a/lib-python/2.7/lib2to3/tests/test_parser.py 
b/lib-python/2.7/lib2to3/tests/test_parser.py --- a/lib-python/2.7/lib2to3/tests/test_parser.py +++ b/lib-python/2.7/lib2to3/tests/test_parser.py @@ -19,6 +19,16 @@ # Local imports from lib2to3.pgen2 import tokenize from ..pgen2.parse import ParseError +from lib2to3.pygram import python_symbols as syms + + +class TestDriver(support.TestCase): + + def test_formfeed(self): + s = """print 1\n\x0Cprint 2\n""" + t = driver.parse_string(s) + self.assertEqual(t.children[0].children[0].type, syms.print_stmt) + self.assertEqual(t.children[1].children[0].type, syms.print_stmt) class GrammarTest(support.TestCase): diff --git a/lib-python/2.7/lib2to3/tests/test_refactor.py b/lib-python/2.7/lib2to3/tests/test_refactor.py --- a/lib-python/2.7/lib2to3/tests/test_refactor.py +++ b/lib-python/2.7/lib2to3/tests/test_refactor.py @@ -223,6 +223,7 @@ "hi.py", ".dumb", ".after.py", + "notpy.npy", "sappy"] expected = ["hi.py"] check(tree, expected) diff --git a/lib-python/2.7/lib2to3/tests/test_util.py b/lib-python/2.7/lib2to3/tests/test_util.py --- a/lib-python/2.7/lib2to3/tests/test_util.py +++ b/lib-python/2.7/lib2to3/tests/test_util.py @@ -568,8 +568,8 @@ def test_from_import(self): node = parse('bar()') - fixer_util.touch_import("cgi", "escape", node) - self.assertEqual(str(node), 'from cgi import escape\nbar()\n\n') + fixer_util.touch_import("html", "escape", node) + self.assertEqual(str(node), 'from html import escape\nbar()\n\n') def test_name_import(self): node = parse('bar()') diff --git a/lib-python/2.7/locale.py b/lib-python/2.7/locale.py --- a/lib-python/2.7/locale.py +++ b/lib-python/2.7/locale.py @@ -621,7 +621,7 @@ 'tactis': 'TACTIS', 'euc_jp': 'eucJP', 'euc_kr': 'eucKR', - 'utf_8': 'UTF8', + 'utf_8': 'UTF-8', 'koi8_r': 'KOI8-R', 'koi8_u': 'KOI8-U', # XXX This list is still incomplete. If you know more diff --git a/lib-python/2.7/logging/__init__.py b/lib-python/2.7/logging/__init__.py --- a/lib-python/2.7/logging/__init__.py +++ b/lib-python/2.7/logging/__init__.py @@ -1627,6 +1627,7 @@ h = wr() if h: try: + h.acquire() h.flush() h.close() except (IOError, ValueError): @@ -1635,6 +1636,8 @@ # references to them are still around at # application exit. pass + finally: + h.release() except: if raiseExceptions: raise diff --git a/lib-python/2.7/logging/config.py b/lib-python/2.7/logging/config.py --- a/lib-python/2.7/logging/config.py +++ b/lib-python/2.7/logging/config.py @@ -226,14 +226,14 @@ propagate = 1 logger = logging.getLogger(qn) if qn in existing: - i = existing.index(qn) + i = existing.index(qn) + 1 # start with the entry after qn prefixed = qn + "." 
pflen = len(prefixed) num_existing = len(existing) - i = i + 1 # look at the entry after qn - while (i < num_existing) and (existing[i][:pflen] == prefixed): - child_loggers.append(existing[i]) - i = i + 1 + while i < num_existing: + if existing[i][:pflen] == prefixed: + child_loggers.append(existing[i]) + i += 1 existing.remove(qn) if "level" in opts: level = cp.get(sectname, "level") diff --git a/lib-python/2.7/logging/handlers.py b/lib-python/2.7/logging/handlers.py --- a/lib-python/2.7/logging/handlers.py +++ b/lib-python/2.7/logging/handlers.py @@ -125,6 +125,7 @@ """ if self.stream: self.stream.close() + self.stream = None if self.backupCount > 0: for i in range(self.backupCount - 1, 0, -1): sfn = "%s.%d" % (self.baseFilename, i) @@ -324,6 +325,7 @@ """ if self.stream: self.stream.close() + self.stream = None # get the time that this sequence started at and make it a TimeTuple t = self.rolloverAt - self.interval if self.utc: diff --git a/lib-python/2.7/mailbox.py b/lib-python/2.7/mailbox.py --- a/lib-python/2.7/mailbox.py +++ b/lib-python/2.7/mailbox.py @@ -234,27 +234,35 @@ def __init__(self, dirname, factory=rfc822.Message, create=True): """Initialize a Maildir instance.""" Mailbox.__init__(self, dirname, factory, create) + self._paths = { + 'tmp': os.path.join(self._path, 'tmp'), + 'new': os.path.join(self._path, 'new'), + 'cur': os.path.join(self._path, 'cur'), + } if not os.path.exists(self._path): if create: os.mkdir(self._path, 0700) - os.mkdir(os.path.join(self._path, 'tmp'), 0700) - os.mkdir(os.path.join(self._path, 'new'), 0700) - os.mkdir(os.path.join(self._path, 'cur'), 0700) + for path in self._paths.values(): + os.mkdir(path, 0o700) else: raise NoSuchMailboxError(self._path) self._toc = {} - self._last_read = None # Records last time we read cur/new - # NOTE: we manually invalidate _last_read each time we do any - # modifications ourselves, otherwise we might get tripped up by - # bogus mtime behaviour on some systems (see issue #6896). 
+ self._toc_mtimes = {} + for subdir in ('cur', 'new'): + self._toc_mtimes[subdir] = os.path.getmtime(self._paths[subdir]) + self._last_read = time.time() # Records last time we read cur/new + self._skewfactor = 0.1 # Adjust if os/fs clocks are skewing def add(self, message): """Add message and return assigned key.""" tmp_file = self._create_tmp() try: self._dump_message(message, tmp_file) - finally: - _sync_close(tmp_file) + except BaseException: + tmp_file.close() + os.remove(tmp_file.name) + raise + _sync_close(tmp_file) if isinstance(message, MaildirMessage): subdir = message.get_subdir() suffix = self.colon + message.get_info() @@ -280,15 +288,11 @@ raise if isinstance(message, MaildirMessage): os.utime(dest, (os.path.getatime(dest), message.get_date())) - # Invalidate cached toc - self._last_read = None return uniq def remove(self, key): """Remove the keyed message; raise KeyError if it doesn't exist.""" os.remove(os.path.join(self._path, self._lookup(key))) - # Invalidate cached toc (only on success) - self._last_read = None def discard(self, key): """If the keyed message exists, remove it.""" @@ -323,8 +327,6 @@ if isinstance(message, MaildirMessage): os.utime(new_path, (os.path.getatime(new_path), message.get_date())) - # Invalidate cached toc - self._last_read = None def get_message(self, key): """Return a Message representation or raise a KeyError.""" @@ -380,8 +382,8 @@ def flush(self): """Write any pending changes to disk.""" # Maildir changes are always written immediately, so there's nothing - # to do except invalidate our cached toc. - self._last_read = None + # to do. + pass def lock(self): """Lock the mailbox.""" @@ -479,36 +481,39 @@ def _refresh(self): """Update table of contents mapping.""" - if self._last_read is not None: - for subdir in ('new', 'cur'): - mtime = os.path.getmtime(os.path.join(self._path, subdir)) - if mtime > self._last_read: - break - else: + # If it has been less than two seconds since the last _refresh() call, + # we have to unconditionally re-read the mailbox just in case it has + # been modified, because os.path.mtime() has a 2 sec resolution in the + # most common worst case (FAT) and a 1 sec resolution typically. This + # results in a few unnecessary re-reads when _refresh() is called + # multiple times in that interval, but once the clock ticks over, we + # will only re-read as needed. Because the filesystem might be being + # served by an independent system with its own clock, we record and + # compare with the mtimes from the filesystem. Because the other + # system's clock might be skewing relative to our clock, we add an + # extra delta to our wait. The default is one tenth second, but is an + # instance variable and so can be adjusted if dealing with a + # particularly skewed or irregular system. + if time.time() - self._last_read > 2 + self._skewfactor: + refresh = False + for subdir in self._toc_mtimes: + mtime = os.path.getmtime(self._paths[subdir]) + if mtime > self._toc_mtimes[subdir]: + refresh = True + self._toc_mtimes[subdir] = mtime + if not refresh: return - - # We record the current time - 1sec so that, if _refresh() is called - # again in the same second, we will always re-read the mailbox - # just in case it's been modified. (os.path.mtime() only has - # 1sec resolution.) This results in a few unnecessary re-reads - # when _refresh() is called multiple times in the same second, - # but once the clock ticks over, we will only re-read as needed. 
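(A minimal sketch of the clock-skew knob introduced by the Maildir hunk above, assuming the patched 2.7 mailbox module; the path is only an example.)

import mailbox

mb = mailbox.Maildir("/tmp/example_maildir")   # created on demand
# _skewfactor is the extra wait added by the hunk above; widen it when the
# filesystem is served by a machine whose clock skews relative to ours.
mb._skewfactor = 0.5
mb.add(mailbox.MaildirMessage("Subject: hello\n\nbody\n"))
print(mb.keys())
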
- now = time.time() - 1 - + # Refresh toc self._toc = {} - def update_dir (subdir): - path = os.path.join(self._path, subdir) + for subdir in self._toc_mtimes: + path = self._paths[subdir] for entry in os.listdir(path): p = os.path.join(path, entry) if os.path.isdir(p): continue uniq = entry.split(self.colon)[0] self._toc[uniq] = os.path.join(subdir, entry) - - update_dir('new') - update_dir('cur') - - self._last_read = now + self._last_read = time.time() def _lookup(self, key): """Use TOC to return subpath for given key, or raise a KeyError.""" @@ -551,7 +556,7 @@ f = open(self._path, 'wb+') else: raise NoSuchMailboxError(self._path) - elif e.errno == errno.EACCES: + elif e.errno in (errno.EACCES, errno.EROFS): f = open(self._path, 'rb') else: raise @@ -700,9 +705,14 @@ def _append_message(self, message): """Append message to mailbox and return (start, stop) offsets.""" self._file.seek(0, 2) - self._pre_message_hook(self._file) - offsets = self._install_message(message) - self._post_message_hook(self._file) + before = self._file.tell() + try: + self._pre_message_hook(self._file) + offsets = self._install_message(message) + self._post_message_hook(self._file) + except BaseException: + self._file.truncate(before) + raise self._file.flush() self._file_length = self._file.tell() # Record current length of mailbox return offsets @@ -868,18 +878,29 @@ new_key = max(keys) + 1 new_path = os.path.join(self._path, str(new_key)) f = _create_carefully(new_path) + closed = False try: if self._locked: _lock_file(f) try: - self._dump_message(message, f) + try: + self._dump_message(message, f) + except BaseException: + # Unlock and close so it can be deleted on Windows + if self._locked: + _unlock_file(f) + _sync_close(f) + closed = True + os.remove(new_path) + raise if isinstance(message, MHMessage): self._dump_sequences(message, new_key) finally: if self._locked: _unlock_file(f) finally: - _sync_close(f) + if not closed: + _sync_close(f) return new_key def remove(self, key): @@ -1886,7 +1907,7 @@ try: fcntl.lockf(f, fcntl.LOCK_EX | fcntl.LOCK_NB) except IOError, e: - if e.errno in (errno.EAGAIN, errno.EACCES): + if e.errno in (errno.EAGAIN, errno.EACCES, errno.EROFS): raise ExternalClashError('lockf: lock unavailable: %s' % f.name) else: @@ -1896,7 +1917,7 @@ pre_lock = _create_temporary(f.name + '.lock') pre_lock.close() except IOError, e: - if e.errno == errno.EACCES: + if e.errno in (errno.EACCES, errno.EROFS): return # Without write access, just skip dotlocking. 
else: raise diff --git a/lib-python/2.7/msilib/__init__.py b/lib-python/2.7/msilib/__init__.py --- a/lib-python/2.7/msilib/__init__.py +++ b/lib-python/2.7/msilib/__init__.py @@ -173,11 +173,10 @@ add_data(db, table, getattr(module, table)) def make_id(str): - #str = str.replace(".", "_") # colons are allowed - str = str.replace(" ", "_") - str = str.replace("-", "_") - if str[0] in string.digits: - str = "_"+str + identifier_chars = string.ascii_letters + string.digits + "._" + str = "".join([c if c in identifier_chars else "_" for c in str]) + if str[0] in (string.digits + "."): + str = "_" + str assert re.match("^[A-Za-z_][A-Za-z0-9_.]*$", str), "FILE"+str return str @@ -285,19 +284,28 @@ [(feature.id, component)]) def make_short(self, file): + oldfile = file + file = file.replace('+', '_') + file = ''.join(c for c in file if not c in ' "/\[]:;=,') parts = file.split(".") - if len(parts)>1: + if len(parts) > 1: + prefix = "".join(parts[:-1]).upper() suffix = parts[-1].upper() + if not prefix: + prefix = suffix + suffix = None else: + prefix = file.upper() suffix = None - prefix = parts[0].upper() - if len(prefix) <= 8 and (not suffix or len(suffix)<=3): + if len(parts) < 3 and len(prefix) <= 8 and file == oldfile and ( + not suffix or len(suffix) <= 3): if suffix: file = prefix+"."+suffix else: file = prefix - assert file not in self.short_names else: + file = None + if file is None or file in self.short_names: prefix = prefix[:6] if suffix: suffix = suffix[:3] diff --git a/lib-python/2.7/multiprocessing/__init__.py b/lib-python/2.7/multiprocessing/__init__.py --- a/lib-python/2.7/multiprocessing/__init__.py +++ b/lib-python/2.7/multiprocessing/__init__.py @@ -38,6 +38,7 @@ # HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT # LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY # OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __version__ = '0.70a1' @@ -115,8 +116,11 @@ except (ValueError, KeyError): num = 0 elif 'bsd' in sys.platform or sys.platform == 'darwin': + comm = '/sbin/sysctl -n hw.ncpu' + if sys.platform == 'darwin': + comm = '/usr' + comm try: - with os.popen('sysctl -n hw.ncpu') as p: + with os.popen(comm) as p: num = int(p.read()) except ValueError: num = 0 diff --git a/lib-python/2.7/multiprocessing/connection.py b/lib-python/2.7/multiprocessing/connection.py --- a/lib-python/2.7/multiprocessing/connection.py +++ b/lib-python/2.7/multiprocessing/connection.py @@ -3,7 +3,33 @@ # # multiprocessing/connection.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. 
+# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __all__ = [ 'Client', 'Listener', 'Pipe' ] diff --git a/lib-python/2.7/multiprocessing/dummy/__init__.py b/lib-python/2.7/multiprocessing/dummy/__init__.py --- a/lib-python/2.7/multiprocessing/dummy/__init__.py +++ b/lib-python/2.7/multiprocessing/dummy/__init__.py @@ -3,7 +3,33 @@ # # multiprocessing/dummy/__init__.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __all__ = [ diff --git a/lib-python/2.7/multiprocessing/dummy/connection.py b/lib-python/2.7/multiprocessing/dummy/connection.py --- a/lib-python/2.7/multiprocessing/dummy/connection.py +++ b/lib-python/2.7/multiprocessing/dummy/connection.py @@ -3,7 +3,33 @@ # # multiprocessing/dummy/connection.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. 
Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __all__ = [ 'Client', 'Listener', 'Pipe' ] diff --git a/lib-python/2.7/multiprocessing/forking.py b/lib-python/2.7/multiprocessing/forking.py --- a/lib-python/2.7/multiprocessing/forking.py +++ b/lib-python/2.7/multiprocessing/forking.py @@ -3,7 +3,33 @@ # # multiprocessing/forking.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # import os @@ -172,6 +198,7 @@ TERMINATE = 0x10000 WINEXE = (sys.platform == 'win32' and getattr(sys, 'frozen', False)) + WINSERVICE = sys.executable.lower().endswith("pythonservice.exe") exit = win32.ExitProcess close = win32.CloseHandle @@ -181,7 +208,7 @@ # People embedding Python want to modify it. 
# - if sys.executable.lower().endswith('pythonservice.exe'): + if WINSERVICE: _python_exe = os.path.join(sys.exec_prefix, 'python.exe') else: _python_exe = sys.executable @@ -371,7 +398,7 @@ if _logger is not None: d['log_level'] = _logger.getEffectiveLevel() - if not WINEXE: + if not WINEXE and not WINSERVICE: main_path = getattr(sys.modules['__main__'], '__file__', None) if not main_path and sys.argv[0] not in ('', '-c'): main_path = sys.argv[0] diff --git a/lib-python/2.7/multiprocessing/heap.py b/lib-python/2.7/multiprocessing/heap.py --- a/lib-python/2.7/multiprocessing/heap.py +++ b/lib-python/2.7/multiprocessing/heap.py @@ -3,7 +3,33 @@ # # multiprocessing/heap.py # -# Copyright (c) 2007-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # import bisect diff --git a/lib-python/2.7/multiprocessing/managers.py b/lib-python/2.7/multiprocessing/managers.py --- a/lib-python/2.7/multiprocessing/managers.py +++ b/lib-python/2.7/multiprocessing/managers.py @@ -4,7 +4,33 @@ # # multiprocessing/managers.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. 
+# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __all__ = [ 'BaseManager', 'SyncManager', 'BaseProxy', 'Token' ] diff --git a/lib-python/2.7/multiprocessing/pool.py b/lib-python/2.7/multiprocessing/pool.py --- a/lib-python/2.7/multiprocessing/pool.py +++ b/lib-python/2.7/multiprocessing/pool.py @@ -3,7 +3,33 @@ # # multiprocessing/pool.py # -# Copyright (c) 2007-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __all__ = ['Pool'] @@ -269,6 +295,8 @@ while pool._worker_handler._state == RUN and pool._state == RUN: pool._maintain_pool() time.sleep(0.1) + # send sentinel to stop workers + pool._taskqueue.put(None) debug('worker handler exiting') @staticmethod @@ -387,7 +415,6 @@ if self._state == RUN: self._state = CLOSE self._worker_handler._state = CLOSE - self._taskqueue.put(None) def terminate(self): debug('terminating pool') @@ -421,7 +448,6 @@ worker_handler._state = TERMINATE task_handler._state = TERMINATE - taskqueue.put(None) # sentinel debug('helping task handler/workers to finish') cls._help_stuff_finish(inqueue, task_handler, len(pool)) @@ -431,6 +457,11 @@ result_handler._state = TERMINATE outqueue.put(None) # sentinel + # We must wait for the worker handler to exit before terminating + # workers because we don't want workers to be restarted behind our back. 
+ debug('joining worker handler') + worker_handler.join() + # Terminate workers which haven't already finished. if pool and hasattr(pool[0], 'terminate'): debug('terminating workers') diff --git a/lib-python/2.7/multiprocessing/process.py b/lib-python/2.7/multiprocessing/process.py --- a/lib-python/2.7/multiprocessing/process.py +++ b/lib-python/2.7/multiprocessing/process.py @@ -3,7 +3,33 @@ # # multiprocessing/process.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __all__ = ['Process', 'current_process', 'active_children'] diff --git a/lib-python/2.7/multiprocessing/queues.py b/lib-python/2.7/multiprocessing/queues.py --- a/lib-python/2.7/multiprocessing/queues.py +++ b/lib-python/2.7/multiprocessing/queues.py @@ -3,7 +3,33 @@ # # multiprocessing/queues.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. 
IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __all__ = ['Queue', 'SimpleQueue', 'JoinableQueue'] diff --git a/lib-python/2.7/multiprocessing/reduction.py b/lib-python/2.7/multiprocessing/reduction.py --- a/lib-python/2.7/multiprocessing/reduction.py +++ b/lib-python/2.7/multiprocessing/reduction.py @@ -4,7 +4,33 @@ # # multiprocessing/reduction.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __all__ = [] diff --git a/lib-python/2.7/multiprocessing/sharedctypes.py b/lib-python/2.7/multiprocessing/sharedctypes.py --- a/lib-python/2.7/multiprocessing/sharedctypes.py +++ b/lib-python/2.7/multiprocessing/sharedctypes.py @@ -3,7 +3,33 @@ # # multiprocessing/sharedctypes.py # -# Copyright (c) 2007-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. 
+# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # import sys @@ -52,9 +78,11 @@ Returns a ctypes array allocated from shared memory ''' type_ = typecode_to_type.get(typecode_or_type, typecode_or_type) - if isinstance(size_or_initializer, int): + if isinstance(size_or_initializer, (int, long)): type_ = type_ * size_or_initializer - return _new_value(type_) + obj = _new_value(type_) + ctypes.memset(ctypes.addressof(obj), 0, ctypes.sizeof(obj)) + return obj else: type_ = type_ * len(size_or_initializer) result = _new_value(type_) diff --git a/lib-python/2.7/multiprocessing/synchronize.py b/lib-python/2.7/multiprocessing/synchronize.py --- a/lib-python/2.7/multiprocessing/synchronize.py +++ b/lib-python/2.7/multiprocessing/synchronize.py @@ -3,7 +3,33 @@ # # multiprocessing/synchronize.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __all__ = [ diff --git a/lib-python/2.7/multiprocessing/util.py b/lib-python/2.7/multiprocessing/util.py --- a/lib-python/2.7/multiprocessing/util.py +++ b/lib-python/2.7/multiprocessing/util.py @@ -3,7 +3,33 @@ # # multiprocessing/util.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. 
+# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # import itertools diff --git a/lib-python/2.7/netrc.py b/lib-python/2.7/netrc.py --- a/lib-python/2.7/netrc.py +++ b/lib-python/2.7/netrc.py @@ -34,11 +34,19 @@ def _parse(self, file, fp): lexer = shlex.shlex(fp) lexer.wordchars += r"""!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~""" + lexer.commenters = lexer.commenters.replace('#', '') while 1: # Look for a machine, default, or macdef top-level keyword toplevel = tt = lexer.get_token() if not tt: break + elif tt[0] == '#': + # seek to beginning of comment, in case reading the token put + # us on a new line, and then skip the rest of the line. + pos = len(tt) + 1 + lexer.instream.seek(-pos, 1) + lexer.instream.readline() + continue elif tt == 'machine': entryname = lexer.get_token() elif tt == 'default': @@ -64,8 +72,8 @@ self.hosts[entryname] = {} while 1: tt = lexer.get_token() - if (tt=='' or tt == 'machine' or - tt == 'default' or tt =='macdef'): + if (tt.startswith('#') or + tt in {'', 'machine', 'default', 'macdef'}): if password: self.hosts[entryname] = (login, account, password) lexer.push_token(tt) diff --git a/lib-python/2.7/nntplib.py b/lib-python/2.7/nntplib.py --- a/lib-python/2.7/nntplib.py +++ b/lib-python/2.7/nntplib.py @@ -103,7 +103,7 @@ readermode is sometimes necessary if you are connecting to an NNTP server on the local machine and intend to call - reader-specific comamnds, such as `group'. If you get + reader-specific commands, such as `group'. If you get unexpected NNTPPermanentErrors, you might need to set readermode. """ diff --git a/lib-python/2.7/ntpath.py b/lib-python/2.7/ntpath.py --- a/lib-python/2.7/ntpath.py +++ b/lib-python/2.7/ntpath.py @@ -310,7 +310,7 @@ # - $varname is accepted. # - %varname% is accepted. # - varnames can be made out of letters, digits and the characters '_-' -# (though is not verifed in the ${varname} and %varname% cases) +# (though is not verified in the ${varname} and %varname% cases) # XXX With COMMAND.COM you can use any characters in a variable name, # XXX except '^|<>='. 
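(Illustrative sketch, not part of the changeset: the ntpath comment block just above lists the variable forms that expandvars() accepts. A minimal demonstration, assuming a made-up environment variable SPAM with the value eggs:

    import os
    import ntpath

    os.environ['SPAM'] = 'eggs'            # placeholder name/value for the example
    print ntpath.expandvars('$SPAM/x')     # '$name' form   -> eggs/x
    print ntpath.expandvars('${SPAM}/x')   # '${name}' form -> eggs/x
    print ntpath.expandvars('%SPAM%/x')    # '%name%' form  -> eggs/x
    print ntpath.expandvars('%UNSET%/x')   # unknown names are left untouched

For the bare $name form the variable name is limited to letters, digits, '_' and '-'; as the comment notes, the ${name} and %name% forms are not checked.)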
diff --git a/lib-python/2.7/nturl2path.py b/lib-python/2.7/nturl2path.py --- a/lib-python/2.7/nturl2path.py +++ b/lib-python/2.7/nturl2path.py @@ -25,11 +25,14 @@ error = 'Bad URL: ' + url raise IOError, error drive = comp[0][-1].upper() + path = drive + ':' components = comp[1].split('/') - path = drive + ':' - for comp in components: + for comp in components: if comp: path = path + '\\' + urllib.unquote(comp) + # Issue #11474: url like '/C|/' should convert into 'C:\\' + if path.endswith(':') and url.endswith('/'): + path += '\\' return path def pathname2url(p): diff --git a/lib-python/2.7/numbers.py b/lib-python/2.7/numbers.py --- a/lib-python/2.7/numbers.py +++ b/lib-python/2.7/numbers.py @@ -63,7 +63,7 @@ @abstractproperty def imag(self): - """Retrieve the real component of this number. + """Retrieve the imaginary component of this number. This should subclass Real. """ diff --git a/lib-python/2.7/optparse.py b/lib-python/2.7/optparse.py --- a/lib-python/2.7/optparse.py +++ b/lib-python/2.7/optparse.py @@ -1131,6 +1131,11 @@ prog : string the name of the current program (to override os.path.basename(sys.argv[0])). + description : string + A paragraph of text giving a brief overview of your program. + optparse reformats this paragraph to fit the current terminal + width and prints it when the user requests help (after usage, + but before the list of options). epilog : string paragraph of help text to print after option help diff --git a/lib-python/2.7/pickletools.py b/lib-python/2.7/pickletools.py --- a/lib-python/2.7/pickletools.py +++ b/lib-python/2.7/pickletools.py @@ -1370,7 +1370,7 @@ proto=0, doc="""Read an object from the memo and push it on the stack. - The index of the memo object to push is given by the newline-teriminated + The index of the memo object to push is given by the newline-terminated decimal string following. BINGET and LONG_BINGET are space-optimized versions. """), diff --git a/lib-python/2.7/pkgutil.py b/lib-python/2.7/pkgutil.py --- a/lib-python/2.7/pkgutil.py +++ b/lib-python/2.7/pkgutil.py @@ -11,7 +11,7 @@ __all__ = [ 'get_importer', 'iter_importers', 'get_loader', 'find_loader', - 'walk_packages', 'iter_modules', + 'walk_packages', 'iter_modules', 'get_data', 'ImpImporter', 'ImpLoader', 'read_code', 'extend_path', ] diff --git a/lib-python/2.7/platform.py b/lib-python/2.7/platform.py --- a/lib-python/2.7/platform.py +++ b/lib-python/2.7/platform.py @@ -503,7 +503,7 @@ info = pipe.read() if pipe.close(): raise os.error,'command failed' - # XXX How can I supress shell errors from being written + # XXX How can I suppress shell errors from being written # to stderr ? except os.error,why: #print 'Command %s failed: %s' % (cmd,why) @@ -1448,9 +1448,10 @@ """ Returns a string identifying the Python implementation. Currently, the following implementations are identified: - 'CPython' (C implementation of Python), - 'IronPython' (.NET implementation of Python), - 'Jython' (Java implementation of Python). + 'CPython' (C implementation of Python), + 'IronPython' (.NET implementation of Python), + 'Jython' (Java implementation of Python), + 'PyPy' (Python implementation of Python). """ return _sys_version()[0] diff --git a/lib-python/2.7/pydoc.py b/lib-python/2.7/pydoc.py --- a/lib-python/2.7/pydoc.py +++ b/lib-python/2.7/pydoc.py @@ -156,7 +156,7 @@ no.append(x) return yes, no -def visiblename(name, all=None): +def visiblename(name, all=None, obj=None): """Decide whether to show documentation on a variable.""" # Certain special names are redundant. 
_hidden_names = ('__builtins__', '__doc__', '__file__', '__path__', @@ -164,6 +164,9 @@ if name in _hidden_names: return 0 # Private names are hidden, but special names are displayed. if name.startswith('__') and name.endswith('__'): return 1 + # Namedtuples have public fields and methods with a single leading underscore + if name.startswith('_') and hasattr(obj, '_fields'): + return 1 if all is not None: # only document that which the programmer exported in __all__ return name in all @@ -475,9 +478,9 @@ def multicolumn(self, list, format, cols=4): """Format a list of items into a multi-column list.""" result = '' - rows = (len(list)+cols-1)/cols + rows = (len(list)+cols-1)//cols for col in range(cols): - result = result + '<td width="%d%%" valign=top>' % (100/cols) + result = result + '<td width="%d%%" valign=top>' % (100//cols) for i in range(rows*col, rows*col+rows): if i < len(list): result = result + format(list[i]) + '<br>
    \n' @@ -627,7 +630,7 @@ # if __all__ exists, believe it. Otherwise use old heuristic. if (all is not None or (inspect.getmodule(value) or object) is object): - if visiblename(key, all): + if visiblename(key, all, object): classes.append((key, value)) cdict[key] = cdict[value] = '#' + key for key, value in classes: @@ -643,13 +646,13 @@ # if __all__ exists, believe it. Otherwise use old heuristic. if (all is not None or inspect.isbuiltin(value) or inspect.getmodule(value) is object): - if visiblename(key, all): + if visiblename(key, all, object): funcs.append((key, value)) fdict[key] = '#-' + key if inspect.isfunction(value): fdict[value] = fdict[key] data = [] for key, value in inspect.getmembers(object, isdata): - if visiblename(key, all): + if visiblename(key, all, object): data.append((key, value)) doc = self.markup(getdoc(object), self.preformat, fdict, cdict) @@ -773,7 +776,7 @@ push('\n') return attrs - attrs = filter(lambda data: visiblename(data[0]), + attrs = filter(lambda data: visiblename(data[0], obj=object), classify_class_attrs(object)) mdict = {} for key, kind, homecls, value in attrs: @@ -1042,18 +1045,18 @@ # if __all__ exists, believe it. Otherwise use old heuristic. if (all is not None or (inspect.getmodule(value) or object) is object): - if visiblename(key, all): + if visiblename(key, all, object): classes.append((key, value)) funcs = [] for key, value in inspect.getmembers(object, inspect.isroutine): # if __all__ exists, believe it. Otherwise use old heuristic. if (all is not None or inspect.isbuiltin(value) or inspect.getmodule(value) is object): - if visiblename(key, all): + if visiblename(key, all, object): funcs.append((key, value)) data = [] for key, value in inspect.getmembers(object, isdata): - if visiblename(key, all): + if visiblename(key, all, object): data.append((key, value)) modpkgs = [] @@ -1113,7 +1116,7 @@ result = result + self.section('CREDITS', str(object.__credits__)) return result - def docclass(self, object, name=None, mod=None): + def docclass(self, object, name=None, mod=None, *ignored): """Produce text documentation for a given class object.""" realname = object.__name__ name = name or realname @@ -1186,7 +1189,7 @@ name, mod, maxlen=70, doc=doc) + '\n') return attrs - attrs = filter(lambda data: visiblename(data[0]), + attrs = filter(lambda data: visiblename(data[0], obj=object), classify_class_attrs(object)) while attrs: if mro: @@ -1718,8 +1721,9 @@ return '' return '' - def __call__(self, request=None): - if request is not None: + _GoInteractive = object() + def __call__(self, request=_GoInteractive): + if request is not self._GoInteractive: self.help(request) else: self.intro() diff --git a/lib-python/2.7/pydoc_data/topics.py b/lib-python/2.7/pydoc_data/topics.py --- a/lib-python/2.7/pydoc_data/topics.py +++ b/lib-python/2.7/pydoc_data/topics.py @@ -1,16 +1,16 @@ -# Autogenerated by Sphinx on Sat Jul 3 08:52:04 2010 +# Autogenerated by Sphinx on Sat Jun 11 09:49:30 2011 topics = {'assert': u'\nThe ``assert`` statement\n************************\n\nAssert statements are a convenient way to insert debugging assertions\ninto a program:\n\n assert_stmt ::= "assert" expression ["," expression]\n\nThe simple form, ``assert expression``, is equivalent to\n\n if __debug__:\n if not expression: raise AssertionError\n\nThe extended form, ``assert expression1, expression2``, is equivalent\nto\n\n if __debug__:\n if not expression1: raise AssertionError(expression2)\n\nThese equivalences assume that ``__debug__`` and ``AssertionError``\nrefer to 
the built-in variables with those names. In the current\nimplementation, the built-in variable ``__debug__`` is ``True`` under\nnormal circumstances, ``False`` when optimization is requested\n(command line option -O). The current code generator emits no code\nfor an assert statement when optimization is requested at compile\ntime. Note that it is unnecessary to include the source code for the\nexpression that failed in the error message; it will be displayed as\npart of the stack trace.\n\nAssignments to ``__debug__`` are illegal. The value for the built-in\nvariable is determined when the interpreter starts.\n', - 'assignment': u'\nAssignment statements\n*********************\n\nAssignment statements are used to (re)bind names to values and to\nmodify attributes or items of mutable objects:\n\n assignment_stmt ::= (target_list "=")+ (expression_list | yield_expression)\n target_list ::= target ("," target)* [","]\n target ::= identifier\n | "(" target_list ")"\n | "[" target_list "]"\n | attributeref\n | subscription\n | slicing\n\n(See section *Primaries* for the syntax definitions for the last three\nsymbols.)\n\nAn assignment statement evaluates the expression list (remember that\nthis can be a single expression or a comma-separated list, the latter\nyielding a tuple) and assigns the single resulting object to each of\nthe target lists, from left to right.\n\nAssignment is defined recursively depending on the form of the target\n(list). When a target is part of a mutable object (an attribute\nreference, subscription or slicing), the mutable object must\nultimately perform the assignment and decide about its validity, and\nmay raise an exception if the assignment is unacceptable. The rules\nobserved by various types and the exceptions raised are given with the\ndefinition of the object types (see section *The standard type\nhierarchy*).\n\nAssignment of an object to a target list is recursively defined as\nfollows.\n\n* If the target list is a single target: The object is assigned to\n that target.\n\n* If the target list is a comma-separated list of targets: The object\n must be an iterable with the same number of items as there are\n targets in the target list, and the items are assigned, from left to\n right, to the corresponding targets. (This rule is relaxed as of\n Python 1.5; in earlier versions, the object had to be a tuple.\n Since strings are sequences, an assignment like ``a, b = "xy"`` is\n now legal as long as the string has the right length.)\n\nAssignment of an object to a single target is recursively defined as\nfollows.\n\n* If the target is an identifier (name):\n\n * If the name does not occur in a ``global`` statement in the\n current code block: the name is bound to the object in the current\n local namespace.\n\n * Otherwise: the name is bound to the object in the current global\n namespace.\n\n The name is rebound if it was already bound. This may cause the\n reference count for the object previously bound to the name to reach\n zero, causing the object to be deallocated and its destructor (if it\n has one) to be called.\n\n* If the target is a target list enclosed in parentheses or in square\n brackets: The object must be an iterable with the same number of\n items as there are targets in the target list, and its items are\n assigned, from left to right, to the corresponding targets.\n\n* If the target is an attribute reference: The primary expression in\n the reference is evaluated. 
It should yield an object with\n assignable attributes; if this is not the case, ``TypeError`` is\n raised. That object is then asked to assign the assigned object to\n the given attribute; if it cannot perform the assignment, it raises\n an exception (usually but not necessarily ``AttributeError``).\n\n Note: If the object is a class instance and the attribute reference\n occurs on both sides of the assignment operator, the RHS expression,\n ``a.x`` can access either an instance attribute or (if no instance\n attribute exists) a class attribute. The LHS target ``a.x`` is\n always set as an instance attribute, creating it if necessary.\n Thus, the two occurrences of ``a.x`` do not necessarily refer to the\n same attribute: if the RHS expression refers to a class attribute,\n the LHS creates a new instance attribute as the target of the\n assignment:\n\n class Cls:\n x = 3 # class variable\n inst = Cls()\n inst.x = inst.x + 1 # writes inst.x as 4 leaving Cls.x as 3\n\n This description does not necessarily apply to descriptor\n attributes, such as properties created with ``property()``.\n\n* If the target is a subscription: The primary expression in the\n reference is evaluated. It should yield either a mutable sequence\n object (such as a list) or a mapping object (such as a dictionary).\n Next, the subscript expression is evaluated.\n\n If the primary is a mutable sequence object (such as a list), the\n subscript must yield a plain integer. If it is negative, the\n sequence\'s length is added to it. The resulting value must be a\n nonnegative integer less than the sequence\'s length, and the\n sequence is asked to assign the assigned object to its item with\n that index. If the index is out of range, ``IndexError`` is raised\n (assignment to a subscripted sequence cannot add new items to a\n list).\n\n If the primary is a mapping object (such as a dictionary), the\n subscript must have a type compatible with the mapping\'s key type,\n and the mapping is then asked to create a key/datum pair which maps\n the subscript to the assigned object. This can either replace an\n existing key/value pair with the same key value, or insert a new\n key/value pair (if no key with the same value existed).\n\n* If the target is a slicing: The primary expression in the reference\n is evaluated. It should yield a mutable sequence object (such as a\n list). The assigned object should be a sequence object of the same\n type. Next, the lower and upper bound expressions are evaluated,\n insofar they are present; defaults are zero and the sequence\'s\n length. The bounds should evaluate to (small) integers. If either\n bound is negative, the sequence\'s length is added to it. The\n resulting bounds are clipped to lie between zero and the sequence\'s\n length, inclusive. Finally, the sequence object is asked to replace\n the slice with the items of the assigned sequence. 
The length of\n the slice may be different from the length of the assigned sequence,\n thus changing the length of the target sequence, if the object\n allows it.\n\n**CPython implementation detail:** In the current implementation, the\nsyntax for targets is taken to be the same as for expressions, and\ninvalid syntax is rejected during the code generation phase, causing\nless detailed error messages.\n\nWARNING: Although the definition of assignment implies that overlaps\nbetween the left-hand side and the right-hand side are \'safe\' (for\nexample ``a, b = b, a`` swaps two variables), overlaps *within* the\ncollection of assigned-to variables are not safe! For instance, the\nfollowing program prints ``[0, 2]``:\n\n x = [0, 1]\n i = 0\n i, x[i] = 1, 2\n print x\n\n\nAugmented assignment statements\n===============================\n\nAugmented assignment is the combination, in a single statement, of a\nbinary operation and an assignment statement:\n\n augmented_assignment_stmt ::= augtarget augop (expression_list | yield_expression)\n augtarget ::= identifier | attributeref | subscription | slicing\n augop ::= "+=" | "-=" | "*=" | "/=" | "//=" | "%=" | "**="\n | ">>=" | "<<=" | "&=" | "^=" | "|="\n\n(See section *Primaries* for the syntax definitions for the last three\nsymbols.)\n\nAn augmented assignment evaluates the target (which, unlike normal\nassignment statements, cannot be an unpacking) and the expression\nlist, performs the binary operation specific to the type of assignment\non the two operands, and assigns the result to the original target.\nThe target is only evaluated once.\n\nAn augmented assignment expression like ``x += 1`` can be rewritten as\n``x = x + 1`` to achieve a similar, but not exactly equal effect. In\nthe augmented version, ``x`` is only evaluated once. Also, when\npossible, the actual operation is performed *in-place*, meaning that\nrather than creating a new object and assigning that to the target,\nthe old object is modified instead.\n\nWith the exception of assigning to tuples and multiple targets in a\nsingle statement, the assignment done by augmented assignment\nstatements is handled the same way as normal assignments. Similarly,\nwith the exception of the possible *in-place* behavior, the binary\noperation performed by augmented assignment is the same as the normal\nbinary operations.\n\nFor targets which are attribute references, the same *caveat about\nclass and instance attributes* applies as for regular assignments.\n', + 'assignment': u'\nAssignment statements\n*********************\n\nAssignment statements are used to (re)bind names to values and to\nmodify attributes or items of mutable objects:\n\n assignment_stmt ::= (target_list "=")+ (expression_list | yield_expression)\n target_list ::= target ("," target)* [","]\n target ::= identifier\n | "(" target_list ")"\n | "[" target_list "]"\n | attributeref\n | subscription\n | slicing\n\n(See section *Primaries* for the syntax definitions for the last three\nsymbols.)\n\nAn assignment statement evaluates the expression list (remember that\nthis can be a single expression or a comma-separated list, the latter\nyielding a tuple) and assigns the single resulting object to each of\nthe target lists, from left to right.\n\nAssignment is defined recursively depending on the form of the target\n(list). 
When a target is part of a mutable object (an attribute\nreference, subscription or slicing), the mutable object must\nultimately perform the assignment and decide about its validity, and\nmay raise an exception if the assignment is unacceptable. The rules\nobserved by various types and the exceptions raised are given with the\ndefinition of the object types (see section *The standard type\nhierarchy*).\n\nAssignment of an object to a target list is recursively defined as\nfollows.\n\n* If the target list is a single target: The object is assigned to\n that target.\n\n* If the target list is a comma-separated list of targets: The object\n must be an iterable with the same number of items as there are\n targets in the target list, and the items are assigned, from left to\n right, to the corresponding targets.\n\nAssignment of an object to a single target is recursively defined as\nfollows.\n\n* If the target is an identifier (name):\n\n * If the name does not occur in a ``global`` statement in the\n current code block: the name is bound to the object in the current\n local namespace.\n\n * Otherwise: the name is bound to the object in the current global\n namespace.\n\n The name is rebound if it was already bound. This may cause the\n reference count for the object previously bound to the name to reach\n zero, causing the object to be deallocated and its destructor (if it\n has one) to be called.\n\n* If the target is a target list enclosed in parentheses or in square\n brackets: The object must be an iterable with the same number of\n items as there are targets in the target list, and its items are\n assigned, from left to right, to the corresponding targets.\n\n* If the target is an attribute reference: The primary expression in\n the reference is evaluated. It should yield an object with\n assignable attributes; if this is not the case, ``TypeError`` is\n raised. That object is then asked to assign the assigned object to\n the given attribute; if it cannot perform the assignment, it raises\n an exception (usually but not necessarily ``AttributeError``).\n\n Note: If the object is a class instance and the attribute reference\n occurs on both sides of the assignment operator, the RHS expression,\n ``a.x`` can access either an instance attribute or (if no instance\n attribute exists) a class attribute. The LHS target ``a.x`` is\n always set as an instance attribute, creating it if necessary.\n Thus, the two occurrences of ``a.x`` do not necessarily refer to the\n same attribute: if the RHS expression refers to a class attribute,\n the LHS creates a new instance attribute as the target of the\n assignment:\n\n class Cls:\n x = 3 # class variable\n inst = Cls()\n inst.x = inst.x + 1 # writes inst.x as 4 leaving Cls.x as 3\n\n This description does not necessarily apply to descriptor\n attributes, such as properties created with ``property()``.\n\n* If the target is a subscription: The primary expression in the\n reference is evaluated. It should yield either a mutable sequence\n object (such as a list) or a mapping object (such as a dictionary).\n Next, the subscript expression is evaluated.\n\n If the primary is a mutable sequence object (such as a list), the\n subscript must yield a plain integer. If it is negative, the\n sequence\'s length is added to it. The resulting value must be a\n nonnegative integer less than the sequence\'s length, and the\n sequence is asked to assign the assigned object to its item with\n that index. 
If the index is out of range, ``IndexError`` is raised\n (assignment to a subscripted sequence cannot add new items to a\n list).\n\n If the primary is a mapping object (such as a dictionary), the\n subscript must have a type compatible with the mapping\'s key type,\n and the mapping is then asked to create a key/datum pair which maps\n the subscript to the assigned object. This can either replace an\n existing key/value pair with the same key value, or insert a new\n key/value pair (if no key with the same value existed).\n\n* If the target is a slicing: The primary expression in the reference\n is evaluated. It should yield a mutable sequence object (such as a\n list). The assigned object should be a sequence object of the same\n type. Next, the lower and upper bound expressions are evaluated,\n insofar they are present; defaults are zero and the sequence\'s\n length. The bounds should evaluate to (small) integers. If either\n bound is negative, the sequence\'s length is added to it. The\n resulting bounds are clipped to lie between zero and the sequence\'s\n length, inclusive. Finally, the sequence object is asked to replace\n the slice with the items of the assigned sequence. The length of\n the slice may be different from the length of the assigned sequence,\n thus changing the length of the target sequence, if the object\n allows it.\n\n**CPython implementation detail:** In the current implementation, the\nsyntax for targets is taken to be the same as for expressions, and\ninvalid syntax is rejected during the code generation phase, causing\nless detailed error messages.\n\nWARNING: Although the definition of assignment implies that overlaps\nbetween the left-hand side and the right-hand side are \'safe\' (for\nexample ``a, b = b, a`` swaps two variables), overlaps *within* the\ncollection of assigned-to variables are not safe! For instance, the\nfollowing program prints ``[0, 2]``:\n\n x = [0, 1]\n i = 0\n i, x[i] = 1, 2\n print x\n\n\nAugmented assignment statements\n===============================\n\nAugmented assignment is the combination, in a single statement, of a\nbinary operation and an assignment statement:\n\n augmented_assignment_stmt ::= augtarget augop (expression_list | yield_expression)\n augtarget ::= identifier | attributeref | subscription | slicing\n augop ::= "+=" | "-=" | "*=" | "/=" | "//=" | "%=" | "**="\n | ">>=" | "<<=" | "&=" | "^=" | "|="\n\n(See section *Primaries* for the syntax definitions for the last three\nsymbols.)\n\nAn augmented assignment evaluates the target (which, unlike normal\nassignment statements, cannot be an unpacking) and the expression\nlist, performs the binary operation specific to the type of assignment\non the two operands, and assigns the result to the original target.\nThe target is only evaluated once.\n\nAn augmented assignment expression like ``x += 1`` can be rewritten as\n``x = x + 1`` to achieve a similar, but not exactly equal effect. In\nthe augmented version, ``x`` is only evaluated once. Also, when\npossible, the actual operation is performed *in-place*, meaning that\nrather than creating a new object and assigning that to the target,\nthe old object is modified instead.\n\nWith the exception of assigning to tuples and multiple targets in a\nsingle statement, the assignment done by augmented assignment\nstatements is handled the same way as normal assignments. 
Similarly,\nwith the exception of the possible *in-place* behavior, the binary\noperation performed by augmented assignment is the same as the normal\nbinary operations.\n\nFor targets which are attribute references, the same *caveat about\nclass and instance attributes* applies as for regular assignments.\n', 'atom-identifiers': u'\nIdentifiers (Names)\n*******************\n\nAn identifier occurring as an atom is a name. See section\n*Identifiers and keywords* for lexical definition and section *Naming\nand binding* for documentation of naming and binding.\n\nWhen the name is bound to an object, evaluation of the atom yields\nthat object. When a name is not bound, an attempt to evaluate it\nraises a ``NameError`` exception.\n\n**Private name mangling:** When an identifier that textually occurs in\na class definition begins with two or more underscore characters and\ndoes not end in two or more underscores, it is considered a *private\nname* of that class. Private names are transformed to a longer form\nbefore code is generated for them. The transformation inserts the\nclass name in front of the name, with leading underscores removed, and\na single underscore inserted in front of the class name. For example,\nthe identifier ``__spam`` occurring in a class named ``Ham`` will be\ntransformed to ``_Ham__spam``. This transformation is independent of\nthe syntactical context in which the identifier is used. If the\ntransformed name is extremely long (longer than 255 characters),\nimplementation defined truncation may happen. If the class name\nconsists only of underscores, no transformation is done.\n', 'atom-literals': u"\nLiterals\n********\n\nPython supports string literals and various numeric literals:\n\n literal ::= stringliteral | integer | longinteger\n | floatnumber | imagnumber\n\nEvaluation of a literal yields an object of the given type (string,\ninteger, long integer, floating point number, complex number) with the\ngiven value. The value may be approximated in the case of floating\npoint and imaginary (complex) literals. See section *Literals* for\ndetails.\n\nAll literals correspond to immutable data types, and hence the\nobject's identity is less important than its value. Multiple\nevaluations of literals with the same value (either the same\noccurrence in the program text or a different occurrence) may obtain\nthe same object or a different object with the same value.\n", - 'attribute-access': u'\nCustomizing attribute access\n****************************\n\nThe following methods can be defined to customize the meaning of\nattribute access (use of, assignment to, or deletion of ``x.name``)\nfor class instances.\n\nobject.__getattr__(self, name)\n\n Called when an attribute lookup has not found the attribute in the\n usual places (i.e. it is not an instance attribute nor is it found\n in the class tree for ``self``). ``name`` is the attribute name.\n This method should return the (computed) attribute value or raise\n an ``AttributeError`` exception.\n\n Note that if the attribute is found through the normal mechanism,\n ``__getattr__()`` is not called. (This is an intentional asymmetry\n between ``__getattr__()`` and ``__setattr__()``.) This is done both\n for efficiency reasons and because otherwise ``__getattr__()``\n would have no way to access other attributes of the instance. Note\n that at least for instance variables, you can fake total control by\n not inserting any values in the instance attribute dictionary (but\n instead inserting them in another object). 
See the\n ``__getattribute__()`` method below for a way to actually get total\n control in new-style classes.\n\nobject.__setattr__(self, name, value)\n\n Called when an attribute assignment is attempted. This is called\n instead of the normal mechanism (i.e. store the value in the\n instance dictionary). *name* is the attribute name, *value* is the\n value to be assigned to it.\n\n If ``__setattr__()`` wants to assign to an instance attribute, it\n should not simply execute ``self.name = value`` --- this would\n cause a recursive call to itself. Instead, it should insert the\n value in the dictionary of instance attributes, e.g.,\n ``self.__dict__[name] = value``. For new-style classes, rather\n than accessing the instance dictionary, it should call the base\n class method with the same name, for example,\n ``object.__setattr__(self, name, value)``.\n\nobject.__delattr__(self, name)\n\n Like ``__setattr__()`` but for attribute deletion instead of\n assignment. This should only be implemented if ``del obj.name`` is\n meaningful for the object.\n\n\nMore attribute access for new-style classes\n===========================================\n\nThe following methods only apply to new-style classes.\n\nobject.__getattribute__(self, name)\n\n Called unconditionally to implement attribute accesses for\n instances of the class. If the class also defines\n ``__getattr__()``, the latter will not be called unless\n ``__getattribute__()`` either calls it explicitly or raises an\n ``AttributeError``. This method should return the (computed)\n attribute value or raise an ``AttributeError`` exception. In order\n to avoid infinite recursion in this method, its implementation\n should always call the base class method with the same name to\n access any attributes it needs, for example,\n ``object.__getattribute__(self, name)``.\n\n Note: This method may still be bypassed when looking up special methods\n as the result of implicit invocation via language syntax or\n built-in functions. See *Special method lookup for new-style\n classes*.\n\n\nImplementing Descriptors\n========================\n\nThe following methods only apply when an instance of the class\ncontaining the method (a so-called *descriptor* class) appears in the\nclass dictionary of another new-style class, known as the *owner*\nclass. In the examples below, "the attribute" refers to the attribute\nwhose name is the key of the property in the owner class\'\n``__dict__``. Descriptors can only be implemented as new-style\nclasses themselves.\n\nobject.__get__(self, instance, owner)\n\n Called to get the attribute of the owner class (class attribute\n access) or of an instance of that class (instance attribute\n access). *owner* is always the owner class, while *instance* is the\n instance that the attribute was accessed through, or ``None`` when\n the attribute is accessed through the *owner*. This method should\n return the (computed) attribute value or raise an\n ``AttributeError`` exception.\n\nobject.__set__(self, instance, value)\n\n Called to set the attribute on an instance *instance* of the owner\n class to a new value, *value*.\n\nobject.__delete__(self, instance)\n\n Called to delete the attribute on an instance *instance* of the\n owner class.\n\n\nInvoking Descriptors\n====================\n\nIn general, a descriptor is an object attribute with "binding\nbehavior", one whose attribute access has been overridden by methods\nin the descriptor protocol: ``__get__()``, ``__set__()``, and\n``__delete__()``. 
If any of those methods are defined for an object,\nit is said to be a descriptor.\n\nThe default behavior for attribute access is to get, set, or delete\nthe attribute from an object\'s dictionary. For instance, ``a.x`` has a\nlookup chain starting with ``a.__dict__[\'x\']``, then\n``type(a).__dict__[\'x\']``, and continuing through the base classes of\n``type(a)`` excluding metaclasses.\n\nHowever, if the looked-up value is an object defining one of the\ndescriptor methods, then Python may override the default behavior and\ninvoke the descriptor method instead. Where this occurs in the\nprecedence chain depends on which descriptor methods were defined and\nhow they were called. Note that descriptors are only invoked for new\nstyle objects or classes (ones that subclass ``object()`` or\n``type()``).\n\nThe starting point for descriptor invocation is a binding, ``a.x``.\nHow the arguments are assembled depends on ``a``:\n\nDirect Call\n The simplest and least common call is when user code directly\n invokes a descriptor method: ``x.__get__(a)``.\n\nInstance Binding\n If binding to a new-style object instance, ``a.x`` is transformed\n into the call: ``type(a).__dict__[\'x\'].__get__(a, type(a))``.\n\nClass Binding\n If binding to a new-style class, ``A.x`` is transformed into the\n call: ``A.__dict__[\'x\'].__get__(None, A)``.\n\nSuper Binding\n If ``a`` is an instance of ``super``, then the binding ``super(B,\n obj).m()`` searches ``obj.__class__.__mro__`` for the base class\n ``A`` immediately preceding ``B`` and then invokes the descriptor\n with the call: ``A.__dict__[\'m\'].__get__(obj, A)``.\n\nFor instance bindings, the precedence of descriptor invocation depends\non the which descriptor methods are defined. A descriptor can define\nany combination of ``__get__()``, ``__set__()`` and ``__delete__()``.\nIf it does not define ``__get__()``, then accessing the attribute will\nreturn the descriptor object itself unless there is a value in the\nobject\'s instance dictionary. If the descriptor defines ``__set__()``\nand/or ``__delete__()``, it is a data descriptor; if it defines\nneither, it is a non-data descriptor. Normally, data descriptors\ndefine both ``__get__()`` and ``__set__()``, while non-data\ndescriptors have just the ``__get__()`` method. Data descriptors with\n``__set__()`` and ``__get__()`` defined always override a redefinition\nin an instance dictionary. In contrast, non-data descriptors can be\noverridden by instances.\n\nPython methods (including ``staticmethod()`` and ``classmethod()``)\nare implemented as non-data descriptors. Accordingly, instances can\nredefine and override methods. This allows individual instances to\nacquire behaviors that differ from other instances of the same class.\n\nThe ``property()`` function is implemented as a data descriptor.\nAccordingly, instances cannot override the behavior of a property.\n\n\n__slots__\n=========\n\nBy default, instances of both old and new-style classes have a\ndictionary for attribute storage. This wastes space for objects\nhaving very few instance variables. The space consumption can become\nacute when creating large numbers of instances.\n\nThe default can be overridden by defining *__slots__* in a new-style\nclass definition. The *__slots__* declaration takes a sequence of\ninstance variables and reserves just enough space in each instance to\nhold a value for each variable. 
Space is saved because *__dict__* is\nnot created for each instance.\n\n__slots__\n\n This class variable can be assigned a string, iterable, or sequence\n of strings with variable names used by instances. If defined in a\n new-style class, *__slots__* reserves space for the declared\n variables and prevents the automatic creation of *__dict__* and\n *__weakref__* for each instance.\n\n New in version 2.2.\n\nNotes on using *__slots__*\n\n* When inheriting from a class without *__slots__*, the *__dict__*\n attribute of that class will always be accessible, so a *__slots__*\n definition in the subclass is meaningless.\n\n* Without a *__dict__* variable, instances cannot be assigned new\n variables not listed in the *__slots__* definition. Attempts to\n assign to an unlisted variable name raises ``AttributeError``. If\n dynamic assignment of new variables is desired, then add\n ``\'__dict__\'`` to the sequence of strings in the *__slots__*\n declaration.\n\n Changed in version 2.3: Previously, adding ``\'__dict__\'`` to the\n *__slots__* declaration would not enable the assignment of new\n attributes not specifically listed in the sequence of instance\n variable names.\n\n* Without a *__weakref__* variable for each instance, classes defining\n *__slots__* do not support weak references to its instances. If weak\n reference support is needed, then add ``\'__weakref__\'`` to the\n sequence of strings in the *__slots__* declaration.\n\n Changed in version 2.3: Previously, adding ``\'__weakref__\'`` to the\n *__slots__* declaration would not enable support for weak\n references.\n\n* *__slots__* are implemented at the class level by creating\n descriptors (*Implementing Descriptors*) for each variable name. As\n a result, class attributes cannot be used to set default values for\n instance variables defined by *__slots__*; otherwise, the class\n attribute would overwrite the descriptor assignment.\n\n* The action of a *__slots__* declaration is limited to the class\n where it is defined. As a result, subclasses will have a *__dict__*\n unless they also define *__slots__* (which must only contain names\n of any *additional* slots).\n\n* If a class defines a slot also defined in a base class, the instance\n variable defined by the base class slot is inaccessible (except by\n retrieving its descriptor directly from the base class). This\n renders the meaning of the program undefined. In the future, a\n check may be added to prevent this.\n\n* Nonempty *__slots__* does not work for classes derived from\n "variable-length" built-in types such as ``long``, ``str`` and\n ``tuple``.\n\n* Any non-string iterable may be assigned to *__slots__*. Mappings may\n also be used; however, in the future, special meaning may be\n assigned to the values corresponding to each key.\n\n* *__class__* assignment works only if both classes have the same\n *__slots__*.\n\n Changed in version 2.6: Previously, *__class__* assignment raised an\n error if either new or old class had *__slots__*.\n', + 'attribute-access': u'\nCustomizing attribute access\n****************************\n\nThe following methods can be defined to customize the meaning of\nattribute access (use of, assignment to, or deletion of ``x.name``)\nfor class instances.\n\nobject.__getattr__(self, name)\n\n Called when an attribute lookup has not found the attribute in the\n usual places (i.e. it is not an instance attribute nor is it found\n in the class tree for ``self``). 
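As a brief sketch of that fallback (the ``opt_`` prefix and default value are assumptions of this example, not a Python convention):

   class Defaulting(object):
       def __getattr__(self, name):
           # Reached only after normal attribute lookup has failed.
           if name.startswith('opt_'):
               return None          # treat unset options as None
           raise AttributeError(name)
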
``name`` is the attribute name.\n This method should return the (computed) attribute value or raise\n an ``AttributeError`` exception.\n\n Note that if the attribute is found through the normal mechanism,\n ``__getattr__()`` is not called. (This is an intentional asymmetry\n between ``__getattr__()`` and ``__setattr__()``.) This is done both\n for efficiency reasons and because otherwise ``__getattr__()``\n would have no way to access other attributes of the instance. Note\n that at least for instance variables, you can fake total control by\n not inserting any values in the instance attribute dictionary (but\n instead inserting them in another object). See the\n ``__getattribute__()`` method below for a way to actually get total\n control in new-style classes.\n\nobject.__setattr__(self, name, value)\n\n Called when an attribute assignment is attempted. This is called\n instead of the normal mechanism (i.e. store the value in the\n instance dictionary). *name* is the attribute name, *value* is the\n value to be assigned to it.\n\n If ``__setattr__()`` wants to assign to an instance attribute, it\n should not simply execute ``self.name = value`` --- this would\n cause a recursive call to itself. Instead, it should insert the\n value in the dictionary of instance attributes, e.g.,\n ``self.__dict__[name] = value``. For new-style classes, rather\n than accessing the instance dictionary, it should call the base\n class method with the same name, for example,\n ``object.__setattr__(self, name, value)``.\n\nobject.__delattr__(self, name)\n\n Like ``__setattr__()`` but for attribute deletion instead of\n assignment. This should only be implemented if ``del obj.name`` is\n meaningful for the object.\n\n\nMore attribute access for new-style classes\n===========================================\n\nThe following methods only apply to new-style classes.\n\nobject.__getattribute__(self, name)\n\n Called unconditionally to implement attribute accesses for\n instances of the class. If the class also defines\n ``__getattr__()``, the latter will not be called unless\n ``__getattribute__()`` either calls it explicitly or raises an\n ``AttributeError``. This method should return the (computed)\n attribute value or raise an ``AttributeError`` exception. In order\n to avoid infinite recursion in this method, its implementation\n should always call the base class method with the same name to\n access any attributes it needs, for example,\n ``object.__getattribute__(self, name)``.\n\n Note: This method may still be bypassed when looking up special methods\n as the result of implicit invocation via language syntax or\n built-in functions. See *Special method lookup for new-style\n classes*.\n\n\nImplementing Descriptors\n========================\n\nThe following methods only apply when an instance of the class\ncontaining the method (a so-called *descriptor* class) appears in an\n*owner* class (the descriptor must be in either the owner\'s class\ndictionary or in the class dictionary for one of its parents). In the\nexamples below, "the attribute" refers to the attribute whose name is\nthe key of the property in the owner class\' ``__dict__``.\n\nobject.__get__(self, instance, owner)\n\n Called to get the attribute of the owner class (class attribute\n access) or of an instance of that class (instance attribute\n access). *owner* is always the owner class, while *instance* is the\n instance that the attribute was accessed through, or ``None`` when\n the attribute is accessed through the *owner*. 
This method should\n return the (computed) attribute value or raise an\n ``AttributeError`` exception.\n\nobject.__set__(self, instance, value)\n\n Called to set the attribute on an instance *instance* of the owner\n class to a new value, *value*.\n\nobject.__delete__(self, instance)\n\n Called to delete the attribute on an instance *instance* of the\n owner class.\n\n\nInvoking Descriptors\n====================\n\nIn general, a descriptor is an object attribute with "binding\nbehavior", one whose attribute access has been overridden by methods\nin the descriptor protocol: ``__get__()``, ``__set__()``, and\n``__delete__()``. If any of those methods are defined for an object,\nit is said to be a descriptor.\n\nThe default behavior for attribute access is to get, set, or delete\nthe attribute from an object\'s dictionary. For instance, ``a.x`` has a\nlookup chain starting with ``a.__dict__[\'x\']``, then\n``type(a).__dict__[\'x\']``, and continuing through the base classes of\n``type(a)`` excluding metaclasses.\n\nHowever, if the looked-up value is an object defining one of the\ndescriptor methods, then Python may override the default behavior and\ninvoke the descriptor method instead. Where this occurs in the\nprecedence chain depends on which descriptor methods were defined and\nhow they were called. Note that descriptors are only invoked for new\nstyle objects or classes (ones that subclass ``object()`` or\n``type()``).\n\nThe starting point for descriptor invocation is a binding, ``a.x``.\nHow the arguments are assembled depends on ``a``:\n\nDirect Call\n The simplest and least common call is when user code directly\n invokes a descriptor method: ``x.__get__(a)``.\n\nInstance Binding\n If binding to a new-style object instance, ``a.x`` is transformed\n into the call: ``type(a).__dict__[\'x\'].__get__(a, type(a))``.\n\nClass Binding\n If binding to a new-style class, ``A.x`` is transformed into the\n call: ``A.__dict__[\'x\'].__get__(None, A)``.\n\nSuper Binding\n If ``a`` is an instance of ``super``, then the binding ``super(B,\n obj).m()`` searches ``obj.__class__.__mro__`` for the base class\n ``A`` immediately preceding ``B`` and then invokes the descriptor\n with the call: ``A.__dict__[\'m\'].__get__(obj, obj.__class__)``.\n\nFor instance bindings, the precedence of descriptor invocation depends\non the which descriptor methods are defined. A descriptor can define\nany combination of ``__get__()``, ``__set__()`` and ``__delete__()``.\nIf it does not define ``__get__()``, then accessing the attribute will\nreturn the descriptor object itself unless there is a value in the\nobject\'s instance dictionary. If the descriptor defines ``__set__()``\nand/or ``__delete__()``, it is a data descriptor; if it defines\nneither, it is a non-data descriptor. Normally, data descriptors\ndefine both ``__get__()`` and ``__set__()``, while non-data\ndescriptors have just the ``__get__()`` method. Data descriptors with\n``__set__()`` and ``__get__()`` defined always override a redefinition\nin an instance dictionary. In contrast, non-data descriptors can be\noverridden by instances.\n\nPython methods (including ``staticmethod()`` and ``classmethod()``)\nare implemented as non-data descriptors. Accordingly, instances can\nredefine and override methods. 
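For example, in this sketch (class and method names are illustrative) an entry in the instance dictionary shadows the method found on the class:

   class Greeter(object):
       def greet(self):
           return "hello"

   g = Greeter()
   g.greet = lambda: "hi there"   # instance attribute shadows the method
   print g.greet()                # prints "hi there"
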
This allows individual instances to\nacquire behaviors that differ from other instances of the same class.\n\nThe ``property()`` function is implemented as a data descriptor.\nAccordingly, instances cannot override the behavior of a property.\n\n\n__slots__\n=========\n\nBy default, instances of both old and new-style classes have a\ndictionary for attribute storage. This wastes space for objects\nhaving very few instance variables. The space consumption can become\nacute when creating large numbers of instances.\n\nThe default can be overridden by defining *__slots__* in a new-style\nclass definition. The *__slots__* declaration takes a sequence of\ninstance variables and reserves just enough space in each instance to\nhold a value for each variable. Space is saved because *__dict__* is\nnot created for each instance.\n\n__slots__\n\n This class variable can be assigned a string, iterable, or sequence\n of strings with variable names used by instances. If defined in a\n new-style class, *__slots__* reserves space for the declared\n variables and prevents the automatic creation of *__dict__* and\n *__weakref__* for each instance.\n\n New in version 2.2.\n\nNotes on using *__slots__*\n\n* When inheriting from a class without *__slots__*, the *__dict__*\n attribute of that class will always be accessible, so a *__slots__*\n definition in the subclass is meaningless.\n\n* Without a *__dict__* variable, instances cannot be assigned new\n variables not listed in the *__slots__* definition. Attempts to\n assign to an unlisted variable name raises ``AttributeError``. If\n dynamic assignment of new variables is desired, then add\n ``\'__dict__\'`` to the sequence of strings in the *__slots__*\n declaration.\n\n Changed in version 2.3: Previously, adding ``\'__dict__\'`` to the\n *__slots__* declaration would not enable the assignment of new\n attributes not specifically listed in the sequence of instance\n variable names.\n\n* Without a *__weakref__* variable for each instance, classes defining\n *__slots__* do not support weak references to its instances. If weak\n reference support is needed, then add ``\'__weakref__\'`` to the\n sequence of strings in the *__slots__* declaration.\n\n Changed in version 2.3: Previously, adding ``\'__weakref__\'`` to the\n *__slots__* declaration would not enable support for weak\n references.\n\n* *__slots__* are implemented at the class level by creating\n descriptors (*Implementing Descriptors*) for each variable name. As\n a result, class attributes cannot be used to set default values for\n instance variables defined by *__slots__*; otherwise, the class\n attribute would overwrite the descriptor assignment.\n\n* The action of a *__slots__* declaration is limited to the class\n where it is defined. As a result, subclasses will have a *__dict__*\n unless they also define *__slots__* (which must only contain names\n of any *additional* slots).\n\n* If a class defines a slot also defined in a base class, the instance\n variable defined by the base class slot is inaccessible (except by\n retrieving its descriptor directly from the base class). This\n renders the meaning of the program undefined. In the future, a\n check may be added to prevent this.\n\n* Nonempty *__slots__* does not work for classes derived from\n "variable-length" built-in types such as ``long``, ``str`` and\n ``tuple``.\n\n* Any non-string iterable may be assigned to *__slots__*. 
Mappings may\n also be used; however, in the future, special meaning may be\n assigned to the values corresponding to each key.\n\n* *__class__* assignment works only if both classes have the same\n *__slots__*.\n\n Changed in version 2.6: Previously, *__class__* assignment raised an\n error if either new or old class had *__slots__*.\n', 'attribute-references': u'\nAttribute references\n********************\n\nAn attribute reference is a primary followed by a period and a name:\n\n attributeref ::= primary "." identifier\n\nThe primary must evaluate to an object of a type that supports\nattribute references, e.g., a module, list, or an instance. This\nobject is then asked to produce the attribute whose name is the\nidentifier. If this attribute is not available, the exception\n``AttributeError`` is raised. Otherwise, the type and value of the\nobject produced is determined by the object. Multiple evaluations of\nthe same attribute reference may yield different objects.\n', 'augassign': u'\nAugmented assignment statements\n*******************************\n\nAugmented assignment is the combination, in a single statement, of a\nbinary operation and an assignment statement:\n\n augmented_assignment_stmt ::= augtarget augop (expression_list | yield_expression)\n augtarget ::= identifier | attributeref | subscription | slicing\n augop ::= "+=" | "-=" | "*=" | "/=" | "//=" | "%=" | "**="\n | ">>=" | "<<=" | "&=" | "^=" | "|="\n\n(See section *Primaries* for the syntax definitions for the last three\nsymbols.)\n\nAn augmented assignment evaluates the target (which, unlike normal\nassignment statements, cannot be an unpacking) and the expression\nlist, performs the binary operation specific to the type of assignment\non the two operands, and assigns the result to the original target.\nThe target is only evaluated once.\n\nAn augmented assignment expression like ``x += 1`` can be rewritten as\n``x = x + 1`` to achieve a similar, but not exactly equal effect. In\nthe augmented version, ``x`` is only evaluated once. Also, when\npossible, the actual operation is performed *in-place*, meaning that\nrather than creating a new object and assigning that to the target,\nthe old object is modified instead.\n\nWith the exception of assigning to tuples and multiple targets in a\nsingle statement, the assignment done by augmented assignment\nstatements is handled the same way as normal assignments. Similarly,\nwith the exception of the possible *in-place* behavior, the binary\noperation performed by augmented assignment is the same as the normal\nbinary operations.\n\nFor targets which are attribute references, the same *caveat about\nclass and instance attributes* applies as for regular assignments.\n', 'binary': u'\nBinary arithmetic operations\n****************************\n\nThe binary arithmetic operations have the conventional priority\nlevels. Note that some of these operations also apply to certain non-\nnumeric types. Apart from the power operator, there are only two\nlevels, one for multiplicative operators and one for additive\noperators:\n\n m_expr ::= u_expr | m_expr "*" u_expr | m_expr "//" u_expr | m_expr "/" u_expr\n | m_expr "%" u_expr\n a_expr ::= m_expr | a_expr "+" m_expr | a_expr "-" m_expr\n\nThe ``*`` (multiplication) operator yields the product of its\narguments. The arguments must either both be numbers, or one argument\nmust be an integer (plain or long) and the other must be a sequence.\nIn the former case, the numbers are converted to a common type and\nthen multiplied together. 
In the latter case, sequence repetition is\nperformed; a negative repetition factor yields an empty sequence.\n\nThe ``/`` (division) and ``//`` (floor division) operators yield the\nquotient of their arguments. The numeric arguments are first\nconverted to a common type. Plain or long integer division yields an\ninteger of the same type; the result is that of mathematical division\nwith the \'floor\' function applied to the result. Division by zero\nraises the ``ZeroDivisionError`` exception.\n\nThe ``%`` (modulo) operator yields the remainder from the division of\nthe first argument by the second. The numeric arguments are first\nconverted to a common type. A zero right argument raises the\n``ZeroDivisionError`` exception. The arguments may be floating point\nnumbers, e.g., ``3.14%0.7`` equals ``0.34`` (since ``3.14`` equals\n``4*0.7 + 0.34``.) The modulo operator always yields a result with\nthe same sign as its second operand (or zero); the absolute value of\nthe result is strictly smaller than the absolute value of the second\noperand [2].\n\nThe integer division and modulo operators are connected by the\nfollowing identity: ``x == (x/y)*y + (x%y)``. Integer division and\nmodulo are also connected with the built-in function ``divmod()``:\n``divmod(x, y) == (x/y, x%y)``. These identities don\'t hold for\nfloating point numbers; there similar identities hold approximately\nwhere ``x/y`` is replaced by ``floor(x/y)`` or ``floor(x/y) - 1`` [3].\n\nIn addition to performing the modulo operation on numbers, the ``%``\noperator is also overloaded by string and unicode objects to perform\nstring formatting (also known as interpolation). The syntax for string\nformatting is described in the Python Library Reference, section\n*String Formatting Operations*.\n\nDeprecated since version 2.3: The floor division operator, the modulo\noperator, and the ``divmod()`` function are no longer defined for\ncomplex numbers. Instead, convert to a floating point number using\nthe ``abs()`` function if appropriate.\n\nThe ``+`` (addition) operator yields the sum of its arguments. The\narguments must either both be numbers or both sequences of the same\ntype. In the former case, the numbers are converted to a common type\nand then added together. In the latter case, the sequences are\nconcatenated.\n\nThe ``-`` (subtraction) operator yields the difference of its\narguments. The numeric arguments are first converted to a common\ntype.\n', 'bitwise': u'\nBinary bitwise operations\n*************************\n\nEach of the three bitwise operations has a different priority level:\n\n and_expr ::= shift_expr | and_expr "&" shift_expr\n xor_expr ::= and_expr | xor_expr "^" and_expr\n or_expr ::= xor_expr | or_expr "|" xor_expr\n\nThe ``&`` operator yields the bitwise AND of its arguments, which must\nbe plain or long integers. The arguments are converted to a common\ntype.\n\nThe ``^`` operator yields the bitwise XOR (exclusive OR) of its\narguments, which must be plain or long integers. The arguments are\nconverted to a common type.\n\nThe ``|`` operator yields the bitwise (inclusive) OR of its arguments,\nwhich must be plain or long integers. The arguments are converted to\na common type.\n', 'bltin-code-objects': u'\nCode Objects\n************\n\nCode objects are used by the implementation to represent "pseudo-\ncompiled" executable Python code such as a function body. They differ\nfrom function objects because they don\'t contain a reference to their\nglobal execution environment. 
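A small sketch of creating and running one directly (the expression, file name placeholder and helper function are illustrative):

   code = compile("x * 2", "<string>", "eval")
   print eval(code, {"x": 21})          # prints 42

   def double(x):
       return x * 2
   print double.func_code.co_varnames   # prints ('x',)
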
Code objects are returned by the built-\nin ``compile()`` function and can be extracted from function objects\nthrough their ``func_code`` attribute. See also the ``code`` module.\n\nA code object can be executed or evaluated by passing it (instead of a\nsource string) to the ``exec`` statement or the built-in ``eval()``\nfunction.\n\nSee *The standard type hierarchy* for more information.\n', 'bltin-ellipsis-object': u'\nThe Ellipsis Object\n*******************\n\nThis object is used by extended slice notation (see *Slicings*). It\nsupports no special operations. There is exactly one ellipsis object,\nnamed ``Ellipsis`` (a built-in name).\n\nIt is written as ``Ellipsis``.\n', - 'bltin-file-objects': u'\nFile Objects\n************\n\nFile objects are implemented using C\'s ``stdio`` package and can be\ncreated with the built-in ``open()`` function. File objects are also\nreturned by some other built-in functions and methods, such as\n``os.popen()`` and ``os.fdopen()`` and the ``makefile()`` method of\nsocket objects. Temporary files can be created using the ``tempfile``\nmodule, and high-level file operations such as copying, moving, and\ndeleting files and directories can be achieved with the ``shutil``\nmodule.\n\nWhen a file operation fails for an I/O-related reason, the exception\n``IOError`` is raised. This includes situations where the operation\nis not defined for some reason, like ``seek()`` on a tty device or\nwriting a file opened for reading.\n\nFiles have the following methods:\n\nfile.close()\n\n Close the file. A closed file cannot be read or written any more.\n Any operation which requires that the file be open will raise a\n ``ValueError`` after the file has been closed. Calling ``close()``\n more than once is allowed.\n\n As of Python 2.5, you can avoid having to call this method\n explicitly if you use the ``with`` statement. For example, the\n following code will automatically close *f* when the ``with`` block\n is exited:\n\n from __future__ import with_statement # This isn\'t required in Python 2.6\n\n with open("hello.txt") as f:\n for line in f:\n print line\n\n In older versions of Python, you would have needed to do this to\n get the same effect:\n\n f = open("hello.txt")\n try:\n for line in f:\n print line\n finally:\n f.close()\n\n Note: Not all "file-like" types in Python support use as a context\n manager for the ``with`` statement. If your code is intended to\n work with any file-like object, you can use the function\n ``contextlib.closing()`` instead of using the object directly.\n\nfile.flush()\n\n Flush the internal buffer, like ``stdio``\'s ``fflush()``. 
This may\n be a no-op on some file-like objects.\n\n Note: ``flush()`` does not necessarily write the file\'s data to disk.\n Use ``flush()`` followed by ``os.fsync()`` to ensure this\n behavior.\n\nfile.fileno()\n\n Return the integer "file descriptor" that is used by the underlying\n implementation to request I/O operations from the operating system.\n This can be useful for other, lower level interfaces that use file\n descriptors, such as the ``fcntl`` module or ``os.read()`` and\n friends.\n\n Note: File-like objects which do not have a real file descriptor should\n *not* provide this method!\n\nfile.isatty()\n\n Return ``True`` if the file is connected to a tty(-like) device,\n else ``False``.\n\n Note: If a file-like object is not associated with a real file, this\n method should *not* be implemented.\n\nfile.next()\n\n A file object is its own iterator, for example ``iter(f)`` returns\n *f* (unless *f* is closed). When a file is used as an iterator,\n typically in a ``for`` loop (for example, ``for line in f: print\n line``), the ``next()`` method is called repeatedly. This method\n returns the next input line, or raises ``StopIteration`` when EOF\n is hit when the file is open for reading (behavior is undefined\n when the file is open for writing). In order to make a ``for``\n loop the most efficient way of looping over the lines of a file (a\n very common operation), the ``next()`` method uses a hidden read-\n ahead buffer. As a consequence of using a read-ahead buffer,\n combining ``next()`` with other file methods (like ``readline()``)\n does not work right. However, using ``seek()`` to reposition the\n file to an absolute position will flush the read-ahead buffer.\n\n New in version 2.3.\n\nfile.read([size])\n\n Read at most *size* bytes from the file (less if the read hits EOF\n before obtaining *size* bytes). If the *size* argument is negative\n or omitted, read all data until EOF is reached. The bytes are\n returned as a string object. An empty string is returned when EOF\n is encountered immediately. (For certain files, like ttys, it\n makes sense to continue reading after an EOF is hit.) Note that\n this method may call the underlying C function ``fread()`` more\n than once in an effort to acquire as close to *size* bytes as\n possible. Also note that when in non-blocking mode, less data than\n was requested may be returned, even if no *size* parameter was\n given.\n\n Note: This function is simply a wrapper for the underlying ``fread()``\n C function, and will behave the same in corner cases, such as\n whether the EOF value is cached.\n\nfile.readline([size])\n\n Read one entire line from the file. A trailing newline character\n is kept in the string (but may be absent when a file ends with an\n incomplete line). [5] If the *size* argument is present and non-\n negative, it is a maximum byte count (including the trailing\n newline) and an incomplete line may be returned. An empty string is\n returned *only* when EOF is encountered immediately.\n\n Note: Unlike ``stdio``\'s ``fgets()``, the returned string contains null\n characters (``\'\\0\'``) if they occurred in the input.\n\nfile.readlines([sizehint])\n\n Read until EOF using ``readline()`` and return a list containing\n the lines thus read. If the optional *sizehint* argument is\n present, instead of reading up to EOF, whole lines totalling\n approximately *sizehint* bytes (possibly after rounding up to an\n internal buffer size) are read. 
Objects implementing a file-like\n interface may choose to ignore *sizehint* if it cannot be\n implemented, or cannot be implemented efficiently.\n\nfile.xreadlines()\n\n This method returns the same thing as ``iter(f)``.\n\n New in version 2.1.\n\n Deprecated since version 2.3: Use ``for line in file`` instead.\n\nfile.seek(offset[, whence])\n\n Set the file\'s current position, like ``stdio``\'s ``fseek()``. The\n *whence* argument is optional and defaults to ``os.SEEK_SET`` or\n ``0`` (absolute file positioning); other values are ``os.SEEK_CUR``\n or ``1`` (seek relative to the current position) and\n ``os.SEEK_END`` or ``2`` (seek relative to the file\'s end). There\n is no return value.\n\n For example, ``f.seek(2, os.SEEK_CUR)`` advances the position by\n two and ``f.seek(-3, os.SEEK_END)`` sets the position to the third\n to last.\n\n Note that if the file is opened for appending (mode ``\'a\'`` or\n ``\'a+\'``), any ``seek()`` operations will be undone at the next\n write. If the file is only opened for writing in append mode (mode\n ``\'a\'``), this method is essentially a no-op, but it remains useful\n for files opened in append mode with reading enabled (mode\n ``\'a+\'``). If the file is opened in text mode (without ``\'b\'``),\n only offsets returned by ``tell()`` are legal. Use of other\n offsets causes undefined behavior.\n\n Note that not all file objects are seekable.\n\n Changed in version 2.6: Passing float values as offset has been\n deprecated.\n\nfile.tell()\n\n Return the file\'s current position, like ``stdio``\'s ``ftell()``.\n\n Note: On Windows, ``tell()`` can return illegal values (after an\n ``fgets()``) when reading files with Unix-style line-endings. Use\n binary mode (``\'rb\'``) to circumvent this problem.\n\nfile.truncate([size])\n\n Truncate the file\'s size. If the optional *size* argument is\n present, the file is truncated to (at most) that size. The size\n defaults to the current position. The current file position is not\n changed. Note that if a specified size exceeds the file\'s current\n size, the result is platform-dependent: possibilities include that\n the file may remain unchanged, increase to the specified size as if\n zero-filled, or increase to the specified size with undefined new\n content. Availability: Windows, many Unix variants.\n\nfile.write(str)\n\n Write a string to the file. There is no return value. Due to\n buffering, the string may not actually show up in the file until\n the ``flush()`` or ``close()`` method is called.\n\nfile.writelines(sequence)\n\n Write a sequence of strings to the file. The sequence can be any\n iterable object producing strings, typically a list of strings.\n There is no return value. (The name is intended to match\n ``readlines()``; ``writelines()`` does not add line separators.)\n\nFiles support the iterator protocol. Each iteration returns the same\nresult as ``file.readline()``, and iteration ends when the\n``readline()`` method returns an empty string.\n\nFile objects also offer a number of other interesting attributes.\nThese are not required for file-like objects, but should be\nimplemented if they make sense for the particular object.\n\nfile.closed\n\n bool indicating the current state of the file object. This is a\n read-only attribute; the ``close()`` method changes the value. It\n may not be available on all file-like objects.\n\nfile.encoding\n\n The encoding that this file uses. When Unicode strings are written\n to a file, they will be converted to byte strings using this\n encoding. 
In addition, when the file is connected to a terminal,\n the attribute gives the encoding that the terminal is likely to use\n (that information might be incorrect if the user has misconfigured\n the terminal). The attribute is read-only and may not be present\n on all file-like objects. It may also be ``None``, in which case\n the file uses the system default encoding for converting Unicode\n strings.\n\n New in version 2.3.\n\nfile.errors\n\n The Unicode error handler used along with the encoding.\n\n New in version 2.6.\n\nfile.mode\n\n The I/O mode for the file. If the file was created using the\n ``open()`` built-in function, this will be the value of the *mode*\n parameter. This is a read-only attribute and may not be present on\n all file-like objects.\n\nfile.name\n\n If the file object was created using ``open()``, the name of the\n file. Otherwise, some string that indicates the source of the file\n object, of the form ``<...>``. This is a read-only attribute and\n may not be present on all file-like objects.\n\nfile.newlines\n\n If Python was built with the *--with-universal-newlines* option to\n **configure** (the default) this read-only attribute exists, and\n for files opened in universal newline read mode it keeps track of\n the types of newlines encountered while reading the file. The\n values it can take are ``\'\\r\'``, ``\'\\n\'``, ``\'\\r\\n\'``, ``None``\n (unknown, no newlines read yet) or a tuple containing all the\n newline types seen, to indicate that multiple newline conventions\n were encountered. For files not opened in universal newline read\n mode the value of this attribute will be ``None``.\n\nfile.softspace\n\n Boolean that indicates whether a space character needs to be\n printed before another value when using the ``print`` statement.\n Classes that are trying to simulate a file object should also have\n a writable ``softspace`` attribute, which should be initialized to\n zero. This will be automatic for most classes implemented in\n Python (care may be needed for objects that override attribute\n access); types implemented in C will have to provide a writable\n ``softspace`` attribute.\n\n Note: This attribute is not used to control the ``print`` statement,\n but to allow the implementation of ``print`` to keep track of its\n internal state.\n', + 'bltin-file-objects': u'\nFile Objects\n************\n\nFile objects are implemented using C\'s ``stdio`` package and can be\ncreated with the built-in ``open()`` function. File objects are also\nreturned by some other built-in functions and methods, such as\n``os.popen()`` and ``os.fdopen()`` and the ``makefile()`` method of\nsocket objects. Temporary files can be created using the ``tempfile``\nmodule, and high-level file operations such as copying, moving, and\ndeleting files and directories can be achieved with the ``shutil``\nmodule.\n\nWhen a file operation fails for an I/O-related reason, the exception\n``IOError`` is raised. This includes situations where the operation\nis not defined for some reason, like ``seek()`` on a tty device or\nwriting a file opened for reading.\n\nFiles have the following methods:\n\nfile.close()\n\n Close the file. A closed file cannot be read or written any more.\n Any operation which requires that the file be open will raise a\n ``ValueError`` after the file has been closed. Calling ``close()``\n more than once is allowed.\n\n As of Python 2.5, you can avoid having to call this method\n explicitly if you use the ``with`` statement. 
For example, the\n following code will automatically close *f* when the ``with`` block\n is exited:\n\n from __future__ import with_statement # This isn\'t required in Python 2.6\n\n with open("hello.txt") as f:\n for line in f:\n print line\n\n In older versions of Python, you would have needed to do this to\n get the same effect:\n\n f = open("hello.txt")\n try:\n for line in f:\n print line\n finally:\n f.close()\n\n Note: Not all "file-like" types in Python support use as a context\n manager for the ``with`` statement. If your code is intended to\n work with any file-like object, you can use the function\n ``contextlib.closing()`` instead of using the object directly.\n\nfile.flush()\n\n Flush the internal buffer, like ``stdio``\'s ``fflush()``. This may\n be a no-op on some file-like objects.\n\n Note: ``flush()`` does not necessarily write the file\'s data to disk.\n Use ``flush()`` followed by ``os.fsync()`` to ensure this\n behavior.\n\nfile.fileno()\n\n Return the integer "file descriptor" that is used by the underlying\n implementation to request I/O operations from the operating system.\n This can be useful for other, lower level interfaces that use file\n descriptors, such as the ``fcntl`` module or ``os.read()`` and\n friends.\n\n Note: File-like objects which do not have a real file descriptor should\n *not* provide this method!\n\nfile.isatty()\n\n Return ``True`` if the file is connected to a tty(-like) device,\n else ``False``.\n\n Note: If a file-like object is not associated with a real file, this\n method should *not* be implemented.\n\nfile.next()\n\n A file object is its own iterator, for example ``iter(f)`` returns\n *f* (unless *f* is closed). When a file is used as an iterator,\n typically in a ``for`` loop (for example, ``for line in f: print\n line``), the ``next()`` method is called repeatedly. This method\n returns the next input line, or raises ``StopIteration`` when EOF\n is hit when the file is open for reading (behavior is undefined\n when the file is open for writing). In order to make a ``for``\n loop the most efficient way of looping over the lines of a file (a\n very common operation), the ``next()`` method uses a hidden read-\n ahead buffer. As a consequence of using a read-ahead buffer,\n combining ``next()`` with other file methods (like ``readline()``)\n does not work right. However, using ``seek()`` to reposition the\n file to an absolute position will flush the read-ahead buffer.\n\n New in version 2.3.\n\nfile.read([size])\n\n Read at most *size* bytes from the file (less if the read hits EOF\n before obtaining *size* bytes). If the *size* argument is negative\n or omitted, read all data until EOF is reached. The bytes are\n returned as a string object. An empty string is returned when EOF\n is encountered immediately. (For certain files, like ttys, it\n makes sense to continue reading after an EOF is hit.) Note that\n this method may call the underlying C function ``fread()`` more\n than once in an effort to acquire as close to *size* bytes as\n possible. Also note that when in non-blocking mode, less data than\n was requested may be returned, even if no *size* parameter was\n given.\n\n Note: This function is simply a wrapper for the underlying ``fread()``\n C function, and will behave the same in corner cases, such as\n whether the EOF value is cached.\n\nfile.readline([size])\n\n Read one entire line from the file. A trailing newline character\n is kept in the string (but may be absent when a file ends with an\n incomplete line). 
[5] If the *size* argument is present and non-\n negative, it is a maximum byte count (including the trailing\n newline) and an incomplete line may be returned. When *size* is not\n 0, an empty string is returned *only* when EOF is encountered\n immediately.\n\n Note: Unlike ``stdio``\'s ``fgets()``, the returned string contains null\n characters (``\'\\0\'``) if they occurred in the input.\n\nfile.readlines([sizehint])\n\n Read until EOF using ``readline()`` and return a list containing\n the lines thus read. If the optional *sizehint* argument is\n present, instead of reading up to EOF, whole lines totalling\n approximately *sizehint* bytes (possibly after rounding up to an\n internal buffer size) are read. Objects implementing a file-like\n interface may choose to ignore *sizehint* if it cannot be\n implemented, or cannot be implemented efficiently.\n\nfile.xreadlines()\n\n This method returns the same thing as ``iter(f)``.\n\n New in version 2.1.\n\n Deprecated since version 2.3: Use ``for line in file`` instead.\n\nfile.seek(offset[, whence])\n\n Set the file\'s current position, like ``stdio``\'s ``fseek()``. The\n *whence* argument is optional and defaults to ``os.SEEK_SET`` or\n ``0`` (absolute file positioning); other values are ``os.SEEK_CUR``\n or ``1`` (seek relative to the current position) and\n ``os.SEEK_END`` or ``2`` (seek relative to the file\'s end). There\n is no return value.\n\n For example, ``f.seek(2, os.SEEK_CUR)`` advances the position by\n two and ``f.seek(-3, os.SEEK_END)`` sets the position to the third\n to last.\n\n Note that if the file is opened for appending (mode ``\'a\'`` or\n ``\'a+\'``), any ``seek()`` operations will be undone at the next\n write. If the file is only opened for writing in append mode (mode\n ``\'a\'``), this method is essentially a no-op, but it remains useful\n for files opened in append mode with reading enabled (mode\n ``\'a+\'``). If the file is opened in text mode (without ``\'b\'``),\n only offsets returned by ``tell()`` are legal. Use of other\n offsets causes undefined behavior.\n\n Note that not all file objects are seekable.\n\n Changed in version 2.6: Passing float values as offset has been\n deprecated.\n\nfile.tell()\n\n Return the file\'s current position, like ``stdio``\'s ``ftell()``.\n\n Note: On Windows, ``tell()`` can return illegal values (after an\n ``fgets()``) when reading files with Unix-style line-endings. Use\n binary mode (``\'rb\'``) to circumvent this problem.\n\nfile.truncate([size])\n\n Truncate the file\'s size. If the optional *size* argument is\n present, the file is truncated to (at most) that size. The size\n defaults to the current position. The current file position is not\n changed. Note that if a specified size exceeds the file\'s current\n size, the result is platform-dependent: possibilities include that\n the file may remain unchanged, increase to the specified size as if\n zero-filled, or increase to the specified size with undefined new\n content. Availability: Windows, many Unix variants.\n\nfile.write(str)\n\n Write a string to the file. There is no return value. Due to\n buffering, the string may not actually show up in the file until\n the ``flush()`` or ``close()`` method is called.\n\nfile.writelines(sequence)\n\n Write a sequence of strings to the file. The sequence can be any\n iterable object producing strings, typically a list of strings.\n There is no return value. 
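A brief sketch (the file name and the lines written are illustrative):

   lines = ["first", "second", "third"]
   f = open("out.txt", "w")
   try:
       f.writelines(line + "\n" for line in lines)   # separators added by hand
   finally:
       f.close()
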
(The name is intended to match\n ``readlines()``; ``writelines()`` does not add line separators.)\n\nFiles support the iterator protocol. Each iteration returns the same\nresult as ``file.readline()``, and iteration ends when the\n``readline()`` method returns an empty string.\n\nFile objects also offer a number of other interesting attributes.\nThese are not required for file-like objects, but should be\nimplemented if they make sense for the particular object.\n\nfile.closed\n\n bool indicating the current state of the file object. This is a\n read-only attribute; the ``close()`` method changes the value. It\n may not be available on all file-like objects.\n\nfile.encoding\n\n The encoding that this file uses. When Unicode strings are written\n to a file, they will be converted to byte strings using this\n encoding. In addition, when the file is connected to a terminal,\n the attribute gives the encoding that the terminal is likely to use\n (that information might be incorrect if the user has misconfigured\n the terminal). The attribute is read-only and may not be present\n on all file-like objects. It may also be ``None``, in which case\n the file uses the system default encoding for converting Unicode\n strings.\n\n New in version 2.3.\n\nfile.errors\n\n The Unicode error handler used along with the encoding.\n\n New in version 2.6.\n\nfile.mode\n\n The I/O mode for the file. If the file was created using the\n ``open()`` built-in function, this will be the value of the *mode*\n parameter. This is a read-only attribute and may not be present on\n all file-like objects.\n\nfile.name\n\n If the file object was created using ``open()``, the name of the\n file. Otherwise, some string that indicates the source of the file\n object, of the form ``<...>``. This is a read-only attribute and\n may not be present on all file-like objects.\n\nfile.newlines\n\n If Python was built with universal newlines enabled (the default)\n this read-only attribute exists, and for files opened in universal\n newline read mode it keeps track of the types of newlines\n encountered while reading the file. The values it can take are\n ``\'\\r\'``, ``\'\\n\'``, ``\'\\r\\n\'``, ``None`` (unknown, no newlines read\n yet) or a tuple containing all the newline types seen, to indicate\n that multiple newline conventions were encountered. For files not\n opened in universal newline read mode the value of this attribute\n will be ``None``.\n\nfile.softspace\n\n Boolean that indicates whether a space character needs to be\n printed before another value when using the ``print`` statement.\n Classes that are trying to simulate a file object should also have\n a writable ``softspace`` attribute, which should be initialized to\n zero. This will be automatic for most classes implemented in\n Python (care may be needed for objects that override attribute\n access); types implemented in C will have to provide a writable\n ``softspace`` attribute.\n\n Note: This attribute is not used to control the ``print`` statement,\n but to allow the implementation of ``print`` to keep track of its\n internal state.\n', 'bltin-null-object': u"\nThe Null Object\n***************\n\nThis object is returned by functions that don't explicitly return a\nvalue. It supports no special operations. There is exactly one null\nobject, named ``None`` (a built-in name).\n\nIt is written as ``None``.\n", 'bltin-type-objects': u"\nType Objects\n************\n\nType objects represent the various object types. 
An object's type is\naccessed by the built-in function ``type()``. There are no special\noperations on types. The standard module ``types`` defines names for\nall standard built-in types.\n\nTypes are written like this: ````.\n", 'booleans': u'\nBoolean operations\n******************\n\n or_test ::= and_test | or_test "or" and_test\n and_test ::= not_test | and_test "and" not_test\n not_test ::= comparison | "not" not_test\n\nIn the context of Boolean operations, and also when expressions are\nused by control flow statements, the following values are interpreted\nas false: ``False``, ``None``, numeric zero of all types, and empty\nstrings and containers (including strings, tuples, lists,\ndictionaries, sets and frozensets). All other values are interpreted\nas true. (See the ``__nonzero__()`` special method for a way to\nchange this.)\n\nThe operator ``not`` yields ``True`` if its argument is false,\n``False`` otherwise.\n\nThe expression ``x and y`` first evaluates *x*; if *x* is false, its\nvalue is returned; otherwise, *y* is evaluated and the resulting value\nis returned.\n\nThe expression ``x or y`` first evaluates *x*; if *x* is true, its\nvalue is returned; otherwise, *y* is evaluated and the resulting value\nis returned.\n\n(Note that neither ``and`` nor ``or`` restrict the value and type they\nreturn to ``False`` and ``True``, but rather return the last evaluated\nargument. This is sometimes useful, e.g., if ``s`` is a string that\nshould be replaced by a default value if it is empty, the expression\n``s or \'foo\'`` yields the desired value. Because ``not`` has to\ninvent a value anyway, it does not bother to return a value of the\nsame type as its argument, so e.g., ``not \'foo\'`` yields ``False``,\nnot ``\'\'``.)\n', @@ -20,39 +20,39 @@ 'class': u'\nClass definitions\n*****************\n\nA class definition defines a class object (see section *The standard\ntype hierarchy*):\n\n classdef ::= "class" classname [inheritance] ":" suite\n inheritance ::= "(" [expression_list] ")"\n classname ::= identifier\n\nA class definition is an executable statement. It first evaluates the\ninheritance list, if present. Each item in the inheritance list\nshould evaluate to a class object or class type which allows\nsubclassing. The class\'s suite is then executed in a new execution\nframe (see section *Naming and binding*), using a newly created local\nnamespace and the original global namespace. (Usually, the suite\ncontains only function definitions.) When the class\'s suite finishes\nexecution, its execution frame is discarded but its local namespace is\nsaved. [4] A class object is then created using the inheritance list\nfor the base classes and the saved local namespace for the attribute\ndictionary. The class name is bound to this class object in the\noriginal local namespace.\n\n**Programmer\'s note:** Variables defined in the class definition are\nclass variables; they are shared by all instances. To create instance\nvariables, they can be set in a method with ``self.name = value``.\nBoth class and instance variables are accessible through the notation\n"``self.name``", and an instance variable hides a class variable with\nthe same name when accessed in this way. Class variables can be used\nas defaults for instance variables, but using mutable values there can\nlead to unexpected results. 
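A short sketch of that pitfall (the ``Bag`` classes and attribute names are illustrative):

   class Bag(object):
       items = []                   # one list shared by *all* instances

   a, b = Bag(), Bag()
   a.items.append(1)
   print b.items                    # prints [1], usually not what was meant

   class SafeBag(object):
       def __init__(self):
           self.items = []          # a fresh list per instance
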
For *new-style class*es, descriptors can\nbe used to create instance variables with different implementation\ndetails.\n\nClass definitions, like function definitions, may be wrapped by one or\nmore *decorator* expressions. The evaluation rules for the decorator\nexpressions are the same as for functions. The result must be a class\nobject, which is then bound to the class name.\n\n-[ Footnotes ]-\n\n[1] The exception is propagated to the invocation stack only if there\n is no ``finally`` clause that negates the exception.\n\n[2] Currently, control "flows off the end" except in the case of an\n exception or the execution of a ``return``, ``continue``, or\n ``break`` statement.\n\n[3] A string literal appearing as the first statement in the function\n body is transformed into the function\'s ``__doc__`` attribute and\n therefore the function\'s *docstring*.\n\n[4] A string literal appearing as the first statement in the class\n body is transformed into the namespace\'s ``__doc__`` item and\n therefore the class\'s *docstring*.\n', 'coercion-rules': u"\nCoercion rules\n**************\n\nThis section used to document the rules for coercion. As the language\nhas evolved, the coercion rules have become hard to document\nprecisely; documenting what one version of one particular\nimplementation does is undesirable. Instead, here are some informal\nguidelines regarding coercion. In Python 3.0, coercion will not be\nsupported.\n\n* If the left operand of a % operator is a string or Unicode object,\n no coercion takes place and the string formatting operation is\n invoked instead.\n\n* It is no longer recommended to define a coercion operation. Mixed-\n mode operations on types that don't define coercion pass the\n original arguments to the operation.\n\n* New-style classes (those derived from ``object``) never invoke the\n ``__coerce__()`` method in response to a binary operator; the only\n time ``__coerce__()`` is invoked is when the built-in function\n ``coerce()`` is called.\n\n* For most intents and purposes, an operator that returns\n ``NotImplemented`` is treated the same as one that is not\n implemented at all.\n\n* Below, ``__op__()`` and ``__rop__()`` are used to signify the\n generic method names corresponding to an operator; ``__iop__()`` is\n used for the corresponding in-place operator. For example, for the\n operator '``+``', ``__add__()`` and ``__radd__()`` are used for the\n left and right variant of the binary operator, and ``__iadd__()``\n for the in-place variant.\n\n* For objects *x* and *y*, first ``x.__op__(y)`` is tried. If this is\n not implemented or returns ``NotImplemented``, ``y.__rop__(x)`` is\n tried. If this is also not implemented or returns\n ``NotImplemented``, a ``TypeError`` exception is raised. But see\n the following exception:\n\n* Exception to the previous item: if the left operand is an instance\n of a built-in type or a new-style class, and the right operand is an\n instance of a proper subclass of that type or class and overrides\n the base's ``__rop__()`` method, the right operand's ``__rop__()``\n method is tried *before* the left operand's ``__op__()`` method.\n\n This is done so that a subclass can completely override binary\n operators. 
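A minimal sketch of that exception to the rule (the class names and return strings are illustrative):

   class Base(object):
       def __add__(self, other):
           return "Base.__add__"

   class Derived(Base):
       def __radd__(self, other):
           return "Derived.__radd__"

   # The right operand is a proper subclass that supplies __radd__,
   # so its reflected method is tried before Base.__add__.
   print Base() + Derived()         # prints "Derived.__radd__"
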
Otherwise, the left operand's ``__op__()`` method would\n always accept the right operand: when an instance of a given class\n is expected, an instance of a subclass of that class is always\n acceptable.\n\n* When either operand type defines a coercion, this coercion is called\n before that type's ``__op__()`` or ``__rop__()`` method is called,\n but no sooner. If the coercion returns an object of a different\n type for the operand whose coercion is invoked, part of the process\n is redone using the new object.\n\n* When an in-place operator (like '``+=``') is used, if the left\n operand implements ``__iop__()``, it is invoked without any\n coercion. When the operation falls back to ``__op__()`` and/or\n ``__rop__()``, the normal coercion rules apply.\n\n* In ``x + y``, if *x* is a sequence that implements sequence\n concatenation, sequence concatenation is invoked.\n\n* In ``x * y``, if one operator is a sequence that implements sequence\n repetition, and the other is an integer (``int`` or ``long``),\n sequence repetition is invoked.\n\n* Rich comparisons (implemented by methods ``__eq__()`` and so on)\n never use coercion. Three-way comparison (implemented by\n ``__cmp__()``) does use coercion under the same conditions as other\n binary operations use it.\n\n* In the current implementation, the built-in numeric types ``int``,\n ``long``, ``float``, and ``complex`` do not use coercion. All these\n types implement a ``__coerce__()`` method, for use by the built-in\n ``coerce()`` function.\n\n Changed in version 2.7.\n", 'comparisons': u'\nComparisons\n***********\n\nUnlike C, all comparison operations in Python have the same priority,\nwhich is lower than that of any arithmetic, shifting or bitwise\noperation. Also unlike C, expressions like ``a < b < c`` have the\ninterpretation that is conventional in mathematics:\n\n comparison ::= or_expr ( comp_operator or_expr )*\n comp_operator ::= "<" | ">" | "==" | ">=" | "<=" | "<>" | "!="\n | "is" ["not"] | ["not"] "in"\n\nComparisons yield boolean values: ``True`` or ``False``.\n\nComparisons can be chained arbitrarily, e.g., ``x < y <= z`` is\nequivalent to ``x < y and y <= z``, except that ``y`` is evaluated\nonly once (but in both cases ``z`` is not evaluated at all when ``x <\ny`` is found to be false).\n\nFormally, if *a*, *b*, *c*, ..., *y*, *z* are expressions and *op1*,\n*op2*, ..., *opN* are comparison operators, then ``a op1 b op2 c ... y\nopN z`` is equivalent to ``a op1 b and b op2 c and ... y opN z``,\nexcept that each expression is evaluated at most once.\n\nNote that ``a op1 b op2 c`` doesn\'t imply any kind of comparison\nbetween *a* and *c*, so that, e.g., ``x < y > z`` is perfectly legal\n(though perhaps not pretty).\n\nThe forms ``<>`` and ``!=`` are equivalent; for consistency with C,\n``!=`` is preferred; where ``!=`` is mentioned below ``<>`` is also\naccepted. The ``<>`` spelling is considered obsolescent.\n\nThe operators ``<``, ``>``, ``==``, ``>=``, ``<=``, and ``!=`` compare\nthe values of two objects. The objects need not have the same type.\nIf both are numbers, they are converted to a common type. Otherwise,\nobjects of different types *always* compare unequal, and are ordered\nconsistently but arbitrarily. 
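For user-defined types, a brief sketch of supplying a meaningful ordering instead (the ``Version`` class and its fields are illustrative):

   class Version(object):
       def __init__(self, major, minor):
           self.major, self.minor = major, minor
       def __eq__(self, other):
           return (self.major, self.minor) == (other.major, other.minor)
       def __lt__(self, other):
           return (self.major, self.minor) < (other.major, other.minor)

   print Version(1, 2) < Version(1, 10)   # prints True
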
You can control comparison behavior of\nobjects of non-built-in types by defining a ``__cmp__`` method or rich\ncomparison methods like ``__gt__``, described in section *Special\nmethod names*.\n\n(This unusual definition of comparison was used to simplify the\ndefinition of operations like sorting and the ``in`` and ``not in``\noperators. In the future, the comparison rules for objects of\ndifferent types are likely to change.)\n\nComparison of objects of the same type depends on the type:\n\n* Numbers are compared arithmetically.\n\n* Strings are compared lexicographically using the numeric equivalents\n (the result of the built-in function ``ord()``) of their characters.\n Unicode and 8-bit strings are fully interoperable in this behavior.\n [4]\n\n* Tuples and lists are compared lexicographically using comparison of\n corresponding elements. This means that to compare equal, each\n element must compare equal and the two sequences must be of the same\n type and have the same length.\n\n If not equal, the sequences are ordered the same as their first\n differing elements. For example, ``cmp([1,2,x], [1,2,y])`` returns\n the same as ``cmp(x,y)``. If the corresponding element does not\n exist, the shorter sequence is ordered first (for example, ``[1,2] <\n [1,2,3]``).\n\n* Mappings (dictionaries) compare equal if and only if their sorted\n (key, value) lists compare equal. [5] Outcomes other than equality\n are resolved consistently, but are not otherwise defined. [6]\n\n* Most other objects of built-in types compare unequal unless they are\n the same object; the choice whether one object is considered smaller\n or larger than another one is made arbitrarily but consistently\n within one execution of a program.\n\nThe operators ``in`` and ``not in`` test for collection membership.\n``x in s`` evaluates to true if *x* is a member of the collection *s*,\nand false otherwise. ``x not in s`` returns the negation of ``x in\ns``. The collection membership test has traditionally been bound to\nsequences; an object is a member of a collection if the collection is\na sequence and contains an element equal to that object. However, it\nmake sense for many other object types to support membership tests\nwithout being a sequence. In particular, dictionaries (for keys) and\nsets support membership testing.\n\nFor the list and tuple types, ``x in y`` is true if and only if there\nexists an index *i* such that ``x == y[i]`` is true.\n\nFor the Unicode and string types, ``x in y`` is true if and only if\n*x* is a substring of *y*. An equivalent test is ``y.find(x) != -1``.\nNote, *x* and *y* need not be the same type; consequently, ``u\'ab\' in\n\'abc\'`` will return ``True``. Empty strings are always considered to\nbe a substring of any other string, so ``"" in "abc"`` will return\n``True``.\n\nChanged in version 2.3: Previously, *x* was required to be a string of\nlength ``1``.\n\nFor user-defined classes which define the ``__contains__()`` method,\n``x in y`` is true if and only if ``y.__contains__(x)`` is true.\n\nFor user-defined classes which do not define ``__contains__()`` but do\ndefine ``__iter__()``, ``x in y`` is true if some value ``z`` with ``x\n== z`` is produced while iterating over ``y``. 
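A compact sketch of both hooks (the classes are illustrative):

   class Evens(object):
       def __contains__(self, x):
           return x % 2 == 0        # membership decided directly

   class Countdown(object):
       def __iter__(self):
           return iter([3, 2, 1])   # membership decided by iterating

   print 4 in Evens()               # prints True, via __contains__
   print 2 in Countdown()           # prints True, via __iter__
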
If an exception is\nraised during the iteration, it is as if ``in`` raised that exception.\n\nLastly, the old-style iteration protocol is tried: if a class defines\n``__getitem__()``, ``x in y`` is true if and only if there is a non-\nnegative integer index *i* such that ``x == y[i]``, and all lower\ninteger indices do not raise ``IndexError`` exception. (If any other\nexception is raised, it is as if ``in`` raised that exception).\n\nThe operator ``not in`` is defined to have the inverse true value of\n``in``.\n\nThe operators ``is`` and ``is not`` test for object identity: ``x is\ny`` is true if and only if *x* and *y* are the same object. ``x is\nnot y`` yields the inverse truth value. [7]\n', - 'compound': u'\nCompound statements\n*******************\n\nCompound statements contain (groups of) other statements; they affect\nor control the execution of those other statements in some way. In\ngeneral, compound statements span multiple lines, although in simple\nincarnations a whole compound statement may be contained in one line.\n\nThe ``if``, ``while`` and ``for`` statements implement traditional\ncontrol flow constructs. ``try`` specifies exception handlers and/or\ncleanup code for a group of statements. Function and class\ndefinitions are also syntactically compound statements.\n\nCompound statements consist of one or more \'clauses.\' A clause\nconsists of a header and a \'suite.\' The clause headers of a\nparticular compound statement are all at the same indentation level.\nEach clause header begins with a uniquely identifying keyword and ends\nwith a colon. A suite is a group of statements controlled by a\nclause. A suite can be one or more semicolon-separated simple\nstatements on the same line as the header, following the header\'s\ncolon, or it can be one or more indented statements on subsequent\nlines. Only the latter form of suite can contain nested compound\nstatements; the following is illegal, mostly because it wouldn\'t be\nclear to which ``if`` clause a following ``else`` clause would belong:\n\n if test1: if test2: print x\n\nAlso note that the semicolon binds tighter than the colon in this\ncontext, so that in the following example, either all or none of the\n``print`` statements are executed:\n\n if x < y < z: print x; print y; print z\n\nSummarizing:\n\n compound_stmt ::= if_stmt\n | while_stmt\n | for_stmt\n | try_stmt\n | with_stmt\n | funcdef\n | classdef\n | decorated\n suite ::= stmt_list NEWLINE | NEWLINE INDENT statement+ DEDENT\n statement ::= stmt_list NEWLINE | compound_stmt\n stmt_list ::= simple_stmt (";" simple_stmt)* [";"]\n\nNote that statements always end in a ``NEWLINE`` possibly followed by\na ``DEDENT``. 
Also note that optional continuation clauses always\nbegin with a keyword that cannot start a statement, thus there are no\nambiguities (the \'dangling ``else``\' problem is solved in Python by\nrequiring nested ``if`` statements to be indented).\n\nThe formatting of the grammar rules in the following sections places\neach clause on a separate line for clarity.\n\n\nThe ``if`` statement\n====================\n\nThe ``if`` statement is used for conditional execution:\n\n if_stmt ::= "if" expression ":" suite\n ( "elif" expression ":" suite )*\n ["else" ":" suite]\n\nIt selects exactly one of the suites by evaluating the expressions one\nby one until one is found to be true (see section *Boolean operations*\nfor the definition of true and false); then that suite is executed\n(and no other part of the ``if`` statement is executed or evaluated).\nIf all expressions are false, the suite of the ``else`` clause, if\npresent, is executed.\n\n\nThe ``while`` statement\n=======================\n\nThe ``while`` statement is used for repeated execution as long as an\nexpression is true:\n\n while_stmt ::= "while" expression ":" suite\n ["else" ":" suite]\n\nThis repeatedly tests the expression and, if it is true, executes the\nfirst suite; if the expression is false (which may be the first time\nit is tested) the suite of the ``else`` clause, if present, is\nexecuted and the loop terminates.\n\nA ``break`` statement executed in the first suite terminates the loop\nwithout executing the ``else`` clause\'s suite. A ``continue``\nstatement executed in the first suite skips the rest of the suite and\ngoes back to testing the expression.\n\n\nThe ``for`` statement\n=====================\n\nThe ``for`` statement is used to iterate over the elements of a\nsequence (such as a string, tuple or list) or other iterable object:\n\n for_stmt ::= "for" target_list "in" expression_list ":" suite\n ["else" ":" suite]\n\nThe expression list is evaluated once; it should yield an iterable\nobject. An iterator is created for the result of the\n``expression_list``. The suite is then executed once for each item\nprovided by the iterator, in the order of ascending indices. Each\nitem in turn is assigned to the target list using the standard rules\nfor assignments, and then the suite is executed. When the items are\nexhausted (which is immediately when the sequence is empty), the suite\nin the ``else`` clause, if present, is executed, and the loop\nterminates.\n\nA ``break`` statement executed in the first suite terminates the loop\nwithout executing the ``else`` clause\'s suite. A ``continue``\nstatement executed in the first suite skips the rest of the suite and\ncontinues with the next item, or with the ``else`` clause if there was\nno next item.\n\nThe suite may assign to the variable(s) in the target list; this does\nnot affect the next item assigned to it.\n\nThe target list is not deleted when the loop is finished, but if the\nsequence is empty, it will not have been assigned to at all by the\nloop. Hint: the built-in function ``range()`` returns a sequence of\nintegers suitable to emulate the effect of Pascal\'s ``for i := a to b\ndo``; e.g., ``range(3)`` returns the list ``[0, 1, 2]``.\n\nNote: There is a subtlety when the sequence is being modified by the loop\n (this can only occur for mutable sequences, i.e. lists). An internal\n counter is used to keep track of which item is used next, and this\n is incremented on each iteration. When this counter has reached the\n length of the sequence the loop terminates. 
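A small sketch of the ``for``...``else`` and ``break`` interaction described above (``smallest_factor()`` is a made-up example, not part of the reference text):

    def smallest_factor(n):
        # The ``else`` suite runs only if the loop ends without ``break``.
        for candidate in range(2, n):
            if n % candidate == 0:
                print "%d has factor %d" % (n, candidate)
                break
        else:
            print "%d is prime" % n

    smallest_factor(9)      # 9 has factor 3
    smallest_factor(11)     # 11 is prime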
This means that if the\n suite deletes the current (or a previous) item from the sequence,\n the next item will be skipped (since it gets the index of the\n current item which has already been treated). Likewise, if the\n suite inserts an item in the sequence before the current item, the\n current item will be treated again the next time through the loop.\n This can lead to nasty bugs that can be avoided by making a\n temporary copy using a slice of the whole sequence, e.g.,\n\n for x in a[:]:\n if x < 0: a.remove(x)\n\n\nThe ``try`` statement\n=====================\n\nThe ``try`` statement specifies exception handlers and/or cleanup code\nfor a group of statements:\n\n try_stmt ::= try1_stmt | try2_stmt\n try1_stmt ::= "try" ":" suite\n ("except" [expression [("as" | ",") target]] ":" suite)+\n ["else" ":" suite]\n ["finally" ":" suite]\n try2_stmt ::= "try" ":" suite\n "finally" ":" suite\n\nChanged in version 2.5: In previous versions of Python,\n``try``...``except``...``finally`` did not work. ``try``...``except``\nhad to be nested in ``try``...``finally``.\n\nThe ``except`` clause(s) specify one or more exception handlers. When\nno exception occurs in the ``try`` clause, no exception handler is\nexecuted. When an exception occurs in the ``try`` suite, a search for\nan exception handler is started. This search inspects the except\nclauses in turn until one is found that matches the exception. An\nexpression-less except clause, if present, must be last; it matches\nany exception. For an except clause with an expression, that\nexpression is evaluated, and the clause matches the exception if the\nresulting object is "compatible" with the exception. An object is\ncompatible with an exception if it is the class or a base class of the\nexception object, a tuple containing an item compatible with the\nexception, or, in the (deprecated) case of string exceptions, is the\nraised string itself (note that the object identities must match, i.e.\nit must be the same string object, not just a string with the same\nvalue).\n\nIf no except clause matches the exception, the search for an exception\nhandler continues in the surrounding code and on the invocation stack.\n[1]\n\nIf the evaluation of an expression in the header of an except clause\nraises an exception, the original search for a handler is canceled and\na search starts for the new exception in the surrounding code and on\nthe call stack (it is treated as if the entire ``try`` statement\nraised the exception).\n\nWhen a matching except clause is found, the exception is assigned to\nthe target specified in that except clause, if present, and the except\nclause\'s suite is executed. All except clauses must have an\nexecutable block. When the end of this block is reached, execution\ncontinues normally after the entire try statement. (This means that\nif two nested handlers exist for the same exception, and the exception\noccurs in the try clause of the inner handler, the outer handler will\nnot handle the exception.)\n\nBefore an except clause\'s suite is executed, details about the\nexception are assigned to three variables in the ``sys`` module:\n``sys.exc_type`` receives the object identifying the exception;\n``sys.exc_value`` receives the exception\'s parameter;\n``sys.exc_traceback`` receives a traceback object (see section *The\nstandard type hierarchy*) identifying the point in the program where\nthe exception occurred. 
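As an illustrative sketch of how except clauses are tried in order until one matches (``handle()`` is a made-up helper; Python 2 syntax):

    def handle(exc_class):
        try:
            raise exc_class("boom")
        except LookupError, err:   # also matches subclasses such as IndexError
            print "lookup failed:", err
        except StandardError:
            print "some other standard error"
        except:                    # an expression-less clause must come last
            print "not a StandardError at all"

    handle(IndexError)      # lookup failed: boom
    handle(ValueError)      # some other standard error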
These details are also available through the\n``sys.exc_info()`` function, which returns a tuple ``(exc_type,\nexc_value, exc_traceback)``. Use of the corresponding variables is\ndeprecated in favor of this function, since their use is unsafe in a\nthreaded program. As of Python 1.5, the variables are restored to\ntheir previous values (before the call) when returning from a function\nthat handled an exception.\n\nThe optional ``else`` clause is executed if and when control flows off\nthe end of the ``try`` clause. [2] Exceptions in the ``else`` clause\nare not handled by the preceding ``except`` clauses.\n\nIf ``finally`` is present, it specifies a \'cleanup\' handler. The\n``try`` clause is executed, including any ``except`` and ``else``\nclauses. If an exception occurs in any of the clauses and is not\nhandled, the exception is temporarily saved. The ``finally`` clause is\nexecuted. If there is a saved exception, it is re-raised at the end\nof the ``finally`` clause. If the ``finally`` clause raises another\nexception or executes a ``return`` or ``break`` statement, the saved\nexception is lost. The exception information is not available to the\nprogram during execution of the ``finally`` clause.\n\nWhen a ``return``, ``break`` or ``continue`` statement is executed in\nthe ``try`` suite of a ``try``...``finally`` statement, the\n``finally`` clause is also executed \'on the way out.\' A ``continue``\nstatement is illegal in the ``finally`` clause. (The reason is a\nproblem with the current implementation --- this restriction may be\nlifted in the future).\n\nAdditional information on exceptions can be found in section\n*Exceptions*, and information on using the ``raise`` statement to\ngenerate exceptions may be found in section *The raise statement*.\n\n\nThe ``with`` statement\n======================\n\nNew in version 2.5.\n\nThe ``with`` statement is used to wrap the execution of a block with\nmethods defined by a context manager (see section *With Statement\nContext Managers*). This allows common\n``try``...``except``...``finally`` usage patterns to be encapsulated\nfor convenient reuse.\n\n with_stmt ::= "with" with_item ("," with_item)* ":" suite\n with_item ::= expression ["as" target]\n\nThe execution of the ``with`` statement with one "item" proceeds as\nfollows:\n\n1. The context expression is evaluated to obtain a context manager.\n\n2. The context manager\'s ``__exit__()`` is loaded for later use.\n\n3. The context manager\'s ``__enter__()`` method is invoked.\n\n4. If a target was included in the ``with`` statement, the return\n value from ``__enter__()`` is assigned to it.\n\n Note: The ``with`` statement guarantees that if the ``__enter__()``\n method returns without an error, then ``__exit__()`` will always\n be called. Thus, if an error occurs during the assignment to the\n target list, it will be treated the same as an error occurring\n within the suite would be. See step 6 below.\n\n5. The suite is executed.\n\n6. The context manager\'s ``__exit__()`` method is invoked. If an\n exception caused the suite to be exited, its type, value, and\n traceback are passed as arguments to ``__exit__()``. Otherwise,\n three ``None`` arguments are supplied.\n\n If the suite was exited due to an exception, and the return value\n from the ``__exit__()`` method was false, the exception is\n reraised. 
If the return value was true, the exception is\n suppressed, and execution continues with the statement following\n the ``with`` statement.\n\n If the suite was exited for any reason other than an exception, the\n return value from ``__exit__()`` is ignored, and execution proceeds\n at the normal location for the kind of exit that was taken.\n\nWith more than one item, the context managers are processed as if\nmultiple ``with`` statements were nested:\n\n with A() as a, B() as b:\n suite\n\nis equivalent to\n\n with A() as a:\n with B() as b:\n suite\n\nNote: In Python 2.5, the ``with`` statement is only allowed when the\n ``with_statement`` feature has been enabled. It is always enabled\n in Python 2.6.\n\nChanged in version 2.7: Support for multiple context expressions.\n\nSee also:\n\n **PEP 0343** - The "with" statement\n The specification, background, and examples for the Python\n ``with`` statement.\n\n\nFunction definitions\n====================\n\nA function definition defines a user-defined function object (see\nsection *The standard type hierarchy*):\n\n decorated ::= decorators (classdef | funcdef)\n decorators ::= decorator+\n decorator ::= "@" dotted_name ["(" [argument_list [","]] ")"] NEWLINE\n funcdef ::= "def" funcname "(" [parameter_list] ")" ":" suite\n dotted_name ::= identifier ("." identifier)*\n parameter_list ::= (defparameter ",")*\n ( "*" identifier [, "**" identifier]\n | "**" identifier\n | defparameter [","] )\n defparameter ::= parameter ["=" expression]\n sublist ::= parameter ("," parameter)* [","]\n parameter ::= identifier | "(" sublist ")"\n funcname ::= identifier\n\nA function definition is an executable statement. Its execution binds\nthe function name in the current local namespace to a function object\n(a wrapper around the executable code for the function). This\nfunction object contains a reference to the current global namespace\nas the global namespace to be used when the function is called.\n\nThe function definition does not execute the function body; this gets\nexecuted only when the function is called. [3]\n\nA function definition may be wrapped by one or more *decorator*\nexpressions. Decorator expressions are evaluated when the function is\ndefined, in the scope that contains the function definition. The\nresult must be a callable, which is invoked with the function object\nas the only argument. The returned value is bound to the function name\ninstead of the function object. Multiple decorators are applied in\nnested fashion. For example, the following code:\n\n @f1(arg)\n @f2\n def func(): pass\n\nis equivalent to:\n\n def func(): pass\n func = f1(arg)(f2(func))\n\nWhen one or more top-level parameters have the form *parameter* ``=``\n*expression*, the function is said to have "default parameter values."\nFor a parameter with a default value, the corresponding argument may\nbe omitted from a call, in which case the parameter\'s default value is\nsubstituted. If a parameter has a default value, all following\nparameters must also have a default value --- this is a syntactic\nrestriction that is not expressed by the grammar.\n\n**Default parameter values are evaluated when the function definition\nis executed.** This means that the expression is evaluated once, when\nthe function is defined, and that that same "pre-computed" value is\nused for each call. This is especially important to understand when a\ndefault parameter is a mutable object, such as a list or a dictionary:\nif the function modifies the object (e.g. 
by appending an item to a\nlist), the default value is in effect modified. This is generally not\nwhat was intended. A way around this is to use ``None`` as the\ndefault, and explicitly test for it in the body of the function, e.g.:\n\n def whats_on_the_telly(penguin=None):\n if penguin is None:\n penguin = []\n penguin.append("property of the zoo")\n return penguin\n\nFunction call semantics are described in more detail in section\n*Calls*. A function call always assigns values to all parameters\nmentioned in the parameter list, either from position arguments, from\nkeyword arguments, or from default values. If the form\n"``*identifier``" is present, it is initialized to a tuple receiving\nany excess positional parameters, defaulting to the empty tuple. If\nthe form "``**identifier``" is present, it is initialized to a new\ndictionary receiving any excess keyword arguments, defaulting to a new\nempty dictionary.\n\nIt is also possible to create anonymous functions (functions not bound\nto a name), for immediate use in expressions. This uses lambda forms,\ndescribed in section *Lambdas*. Note that the lambda form is merely a\nshorthand for a simplified function definition; a function defined in\na "``def``" statement can be passed around or assigned to another name\njust like a function defined by a lambda form. The "``def``" form is\nactually more powerful since it allows the execution of multiple\nstatements.\n\n**Programmer\'s note:** Functions are first-class objects. A "``def``"\nform executed inside a function definition defines a local function\nthat can be returned or passed around. Free variables used in the\nnested function can access the local variables of the function\ncontaining the def. See section *Naming and binding* for details.\n\n\nClass definitions\n=================\n\nA class definition defines a class object (see section *The standard\ntype hierarchy*):\n\n classdef ::= "class" classname [inheritance] ":" suite\n inheritance ::= "(" [expression_list] ")"\n classname ::= identifier\n\nA class definition is an executable statement. It first evaluates the\ninheritance list, if present. Each item in the inheritance list\nshould evaluate to a class object or class type which allows\nsubclassing. The class\'s suite is then executed in a new execution\nframe (see section *Naming and binding*), using a newly created local\nnamespace and the original global namespace. (Usually, the suite\ncontains only function definitions.) When the class\'s suite finishes\nexecution, its execution frame is discarded but its local namespace is\nsaved. [4] A class object is then created using the inheritance list\nfor the base classes and the saved local namespace for the attribute\ndictionary. The class name is bound to this class object in the\noriginal local namespace.\n\n**Programmer\'s note:** Variables defined in the class definition are\nclass variables; they are shared by all instances. To create instance\nvariables, they can be set in a method with ``self.name = value``.\nBoth class and instance variables are accessible through the notation\n"``self.name``", and an instance variable hides a class variable with\nthe same name when accessed in this way. Class variables can be used\nas defaults for instance variables, but using mutable values there can\nlead to unexpected results. 
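A short sketch of the class-variable pitfall mentioned above (``Dog`` is a made-up class, not part of the reference text):

    class Dog(object):
        tricks = []                      # class variable, shared by all instances

        def __init__(self, name):
            self.name = name             # instance variable, one per instance

        def add_trick(self, trick):
            self.tricks.append(trick)    # mutates the shared class-level list

    fido, rex = Dog("Fido"), Dog("Rex")
    fido.add_trick("roll over")
    print rex.tricks                     # ['roll over'] -- usually not intended

    class BetterDog(object):
        def __init__(self, name):
            self.name = name
            self.tricks = []             # a fresh list per instance avoids sharing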
For *new-style class*es, descriptors can\nbe used to create instance variables with different implementation\ndetails.\n\nClass definitions, like function definitions, may be wrapped by one or\nmore *decorator* expressions. The evaluation rules for the decorator\nexpressions are the same as for functions. The result must be a class\nobject, which is then bound to the class name.\n\n-[ Footnotes ]-\n\n[1] The exception is propagated to the invocation stack only if there\n is no ``finally`` clause that negates the exception.\n\n[2] Currently, control "flows off the end" except in the case of an\n exception or the execution of a ``return``, ``continue``, or\n ``break`` statement.\n\n[3] A string literal appearing as the first statement in the function\n body is transformed into the function\'s ``__doc__`` attribute and\n therefore the function\'s *docstring*.\n\n[4] A string literal appearing as the first statement in the class\n body is transformed into the namespace\'s ``__doc__`` item and\n therefore the class\'s *docstring*.\n', + 'compound': u'\nCompound statements\n*******************\n\nCompound statements contain (groups of) other statements; they affect\nor control the execution of those other statements in some way. In\ngeneral, compound statements span multiple lines, although in simple\nincarnations a whole compound statement may be contained in one line.\n\nThe ``if``, ``while`` and ``for`` statements implement traditional\ncontrol flow constructs. ``try`` specifies exception handlers and/or\ncleanup code for a group of statements. Function and class\ndefinitions are also syntactically compound statements.\n\nCompound statements consist of one or more \'clauses.\' A clause\nconsists of a header and a \'suite.\' The clause headers of a\nparticular compound statement are all at the same indentation level.\nEach clause header begins with a uniquely identifying keyword and ends\nwith a colon. A suite is a group of statements controlled by a\nclause. A suite can be one or more semicolon-separated simple\nstatements on the same line as the header, following the header\'s\ncolon, or it can be one or more indented statements on subsequent\nlines. Only the latter form of suite can contain nested compound\nstatements; the following is illegal, mostly because it wouldn\'t be\nclear to which ``if`` clause a following ``else`` clause would belong:\n\n if test1: if test2: print x\n\nAlso note that the semicolon binds tighter than the colon in this\ncontext, so that in the following example, either all or none of the\n``print`` statements are executed:\n\n if x < y < z: print x; print y; print z\n\nSummarizing:\n\n compound_stmt ::= if_stmt\n | while_stmt\n | for_stmt\n | try_stmt\n | with_stmt\n | funcdef\n | classdef\n | decorated\n suite ::= stmt_list NEWLINE | NEWLINE INDENT statement+ DEDENT\n statement ::= stmt_list NEWLINE | compound_stmt\n stmt_list ::= simple_stmt (";" simple_stmt)* [";"]\n\nNote that statements always end in a ``NEWLINE`` possibly followed by\na ``DEDENT``. 
Also note that optional continuation clauses always\nbegin with a keyword that cannot start a statement, thus there are no\nambiguities (the \'dangling ``else``\' problem is solved in Python by\nrequiring nested ``if`` statements to be indented).\n\nThe formatting of the grammar rules in the following sections places\neach clause on a separate line for clarity.\n\n\nThe ``if`` statement\n====================\n\nThe ``if`` statement is used for conditional execution:\n\n if_stmt ::= "if" expression ":" suite\n ( "elif" expression ":" suite )*\n ["else" ":" suite]\n\nIt selects exactly one of the suites by evaluating the expressions one\nby one until one is found to be true (see section *Boolean operations*\nfor the definition of true and false); then that suite is executed\n(and no other part of the ``if`` statement is executed or evaluated).\nIf all expressions are false, the suite of the ``else`` clause, if\npresent, is executed.\n\n\nThe ``while`` statement\n=======================\n\nThe ``while`` statement is used for repeated execution as long as an\nexpression is true:\n\n while_stmt ::= "while" expression ":" suite\n ["else" ":" suite]\n\nThis repeatedly tests the expression and, if it is true, executes the\nfirst suite; if the expression is false (which may be the first time\nit is tested) the suite of the ``else`` clause, if present, is\nexecuted and the loop terminates.\n\nA ``break`` statement executed in the first suite terminates the loop\nwithout executing the ``else`` clause\'s suite. A ``continue``\nstatement executed in the first suite skips the rest of the suite and\ngoes back to testing the expression.\n\n\nThe ``for`` statement\n=====================\n\nThe ``for`` statement is used to iterate over the elements of a\nsequence (such as a string, tuple or list) or other iterable object:\n\n for_stmt ::= "for" target_list "in" expression_list ":" suite\n ["else" ":" suite]\n\nThe expression list is evaluated once; it should yield an iterable\nobject. An iterator is created for the result of the\n``expression_list``. The suite is then executed once for each item\nprovided by the iterator, in the order of ascending indices. Each\nitem in turn is assigned to the target list using the standard rules\nfor assignments, and then the suite is executed. When the items are\nexhausted (which is immediately when the sequence is empty), the suite\nin the ``else`` clause, if present, is executed, and the loop\nterminates.\n\nA ``break`` statement executed in the first suite terminates the loop\nwithout executing the ``else`` clause\'s suite. A ``continue``\nstatement executed in the first suite skips the rest of the suite and\ncontinues with the next item, or with the ``else`` clause if there was\nno next item.\n\nThe suite may assign to the variable(s) in the target list; this does\nnot affect the next item assigned to it.\n\nThe target list is not deleted when the loop is finished, but if the\nsequence is empty, it will not have been assigned to at all by the\nloop. Hint: the built-in function ``range()`` returns a sequence of\nintegers suitable to emulate the effect of Pascal\'s ``for i := a to b\ndo``; e.g., ``range(3)`` returns the list ``[0, 1, 2]``.\n\nNote: There is a subtlety when the sequence is being modified by the loop\n (this can only occur for mutable sequences, i.e. lists). An internal\n counter is used to keep track of which item is used next, and this\n is incremented on each iteration. When this counter has reached the\n length of the sequence the loop terminates. 
This means that if the\n suite deletes the current (or a previous) item from the sequence,\n the next item will be skipped (since it gets the index of the\n current item which has already been treated). Likewise, if the\n suite inserts an item in the sequence before the current item, the\n current item will be treated again the next time through the loop.\n This can lead to nasty bugs that can be avoided by making a\n temporary copy using a slice of the whole sequence, e.g.,\n\n for x in a[:]:\n if x < 0: a.remove(x)\n\n\nThe ``try`` statement\n=====================\n\nThe ``try`` statement specifies exception handlers and/or cleanup code\nfor a group of statements:\n\n try_stmt ::= try1_stmt | try2_stmt\n try1_stmt ::= "try" ":" suite\n ("except" [expression [("as" | ",") target]] ":" suite)+\n ["else" ":" suite]\n ["finally" ":" suite]\n try2_stmt ::= "try" ":" suite\n "finally" ":" suite\n\nChanged in version 2.5: In previous versions of Python,\n``try``...``except``...``finally`` did not work. ``try``...``except``\nhad to be nested in ``try``...``finally``.\n\nThe ``except`` clause(s) specify one or more exception handlers. When\nno exception occurs in the ``try`` clause, no exception handler is\nexecuted. When an exception occurs in the ``try`` suite, a search for\nan exception handler is started. This search inspects the except\nclauses in turn until one is found that matches the exception. An\nexpression-less except clause, if present, must be last; it matches\nany exception. For an except clause with an expression, that\nexpression is evaluated, and the clause matches the exception if the\nresulting object is "compatible" with the exception. An object is\ncompatible with an exception if it is the class or a base class of the\nexception object, a tuple containing an item compatible with the\nexception, or, in the (deprecated) case of string exceptions, is the\nraised string itself (note that the object identities must match, i.e.\nit must be the same string object, not just a string with the same\nvalue).\n\nIf no except clause matches the exception, the search for an exception\nhandler continues in the surrounding code and on the invocation stack.\n[1]\n\nIf the evaluation of an expression in the header of an except clause\nraises an exception, the original search for a handler is canceled and\na search starts for the new exception in the surrounding code and on\nthe call stack (it is treated as if the entire ``try`` statement\nraised the exception).\n\nWhen a matching except clause is found, the exception is assigned to\nthe target specified in that except clause, if present, and the except\nclause\'s suite is executed. All except clauses must have an\nexecutable block. When the end of this block is reached, execution\ncontinues normally after the entire try statement. (This means that\nif two nested handlers exist for the same exception, and the exception\noccurs in the try clause of the inner handler, the outer handler will\nnot handle the exception.)\n\nBefore an except clause\'s suite is executed, details about the\nexception are assigned to three variables in the ``sys`` module:\n``sys.exc_type`` receives the object identifying the exception;\n``sys.exc_value`` receives the exception\'s parameter;\n``sys.exc_traceback`` receives a traceback object (see section *The\nstandard type hierarchy*) identifying the point in the program where\nthe exception occurred. 
These details are also available through the\n``sys.exc_info()`` function, which returns a tuple ``(exc_type,\nexc_value, exc_traceback)``. Use of the corresponding variables is\ndeprecated in favor of this function, since their use is unsafe in a\nthreaded program. As of Python 1.5, the variables are restored to\ntheir previous values (before the call) when returning from a function\nthat handled an exception.\n\nThe optional ``else`` clause is executed if and when control flows off\nthe end of the ``try`` clause. [2] Exceptions in the ``else`` clause\nare not handled by the preceding ``except`` clauses.\n\nIf ``finally`` is present, it specifies a \'cleanup\' handler. The\n``try`` clause is executed, including any ``except`` and ``else``\nclauses. If an exception occurs in any of the clauses and is not\nhandled, the exception is temporarily saved. The ``finally`` clause is\nexecuted. If there is a saved exception, it is re-raised at the end\nof the ``finally`` clause. If the ``finally`` clause raises another\nexception or executes a ``return`` or ``break`` statement, the saved\nexception is lost. The exception information is not available to the\nprogram during execution of the ``finally`` clause.\n\nWhen a ``return``, ``break`` or ``continue`` statement is executed in\nthe ``try`` suite of a ``try``...``finally`` statement, the\n``finally`` clause is also executed \'on the way out.\' A ``continue``\nstatement is illegal in the ``finally`` clause. (The reason is a\nproblem with the current implementation --- this restriction may be\nlifted in the future).\n\nAdditional information on exceptions can be found in section\n*Exceptions*, and information on using the ``raise`` statement to\ngenerate exceptions may be found in section *The raise statement*.\n\n\nThe ``with`` statement\n======================\n\nNew in version 2.5.\n\nThe ``with`` statement is used to wrap the execution of a block with\nmethods defined by a context manager (see section *With Statement\nContext Managers*). This allows common\n``try``...``except``...``finally`` usage patterns to be encapsulated\nfor convenient reuse.\n\n with_stmt ::= "with" with_item ("," with_item)* ":" suite\n with_item ::= expression ["as" target]\n\nThe execution of the ``with`` statement with one "item" proceeds as\nfollows:\n\n1. The context expression (the expression given in the **with_item**)\n is evaluated to obtain a context manager.\n\n2. The context manager\'s ``__exit__()`` is loaded for later use.\n\n3. The context manager\'s ``__enter__()`` method is invoked.\n\n4. If a target was included in the ``with`` statement, the return\n value from ``__enter__()`` is assigned to it.\n\n Note: The ``with`` statement guarantees that if the ``__enter__()``\n method returns without an error, then ``__exit__()`` will always\n be called. Thus, if an error occurs during the assignment to the\n target list, it will be treated the same as an error occurring\n within the suite would be. See step 6 below.\n\n5. The suite is executed.\n\n6. The context manager\'s ``__exit__()`` method is invoked. If an\n exception caused the suite to be exited, its type, value, and\n traceback are passed as arguments to ``__exit__()``. Otherwise,\n three ``None`` arguments are supplied.\n\n If the suite was exited due to an exception, and the return value\n from the ``__exit__()`` method was false, the exception is\n reraised. 
If the return value was true, the exception is\n suppressed, and execution continues with the statement following\n the ``with`` statement.\n\n If the suite was exited for any reason other than an exception, the\n return value from ``__exit__()`` is ignored, and execution proceeds\n at the normal location for the kind of exit that was taken.\n\nWith more than one item, the context managers are processed as if\nmultiple ``with`` statements were nested:\n\n with A() as a, B() as b:\n suite\n\nis equivalent to\n\n with A() as a:\n with B() as b:\n suite\n\nNote: In Python 2.5, the ``with`` statement is only allowed when the\n ``with_statement`` feature has been enabled. It is always enabled\n in Python 2.6.\n\nChanged in version 2.7: Support for multiple context expressions.\n\nSee also:\n\n **PEP 0343** - The "with" statement\n The specification, background, and examples for the Python\n ``with`` statement.\n\n\nFunction definitions\n====================\n\nA function definition defines a user-defined function object (see\nsection *The standard type hierarchy*):\n\n decorated ::= decorators (classdef | funcdef)\n decorators ::= decorator+\n decorator ::= "@" dotted_name ["(" [argument_list [","]] ")"] NEWLINE\n funcdef ::= "def" funcname "(" [parameter_list] ")" ":" suite\n dotted_name ::= identifier ("." identifier)*\n parameter_list ::= (defparameter ",")*\n ( "*" identifier [, "**" identifier]\n | "**" identifier\n | defparameter [","] )\n defparameter ::= parameter ["=" expression]\n sublist ::= parameter ("," parameter)* [","]\n parameter ::= identifier | "(" sublist ")"\n funcname ::= identifier\n\nA function definition is an executable statement. Its execution binds\nthe function name in the current local namespace to a function object\n(a wrapper around the executable code for the function). This\nfunction object contains a reference to the current global namespace\nas the global namespace to be used when the function is called.\n\nThe function definition does not execute the function body; this gets\nexecuted only when the function is called. [3]\n\nA function definition may be wrapped by one or more *decorator*\nexpressions. Decorator expressions are evaluated when the function is\ndefined, in the scope that contains the function definition. The\nresult must be a callable, which is invoked with the function object\nas the only argument. The returned value is bound to the function name\ninstead of the function object. Multiple decorators are applied in\nnested fashion. For example, the following code:\n\n @f1(arg)\n @f2\n def func(): pass\n\nis equivalent to:\n\n def func(): pass\n func = f1(arg)(f2(func))\n\nWhen one or more top-level parameters have the form *parameter* ``=``\n*expression*, the function is said to have "default parameter values."\nFor a parameter with a default value, the corresponding argument may\nbe omitted from a call, in which case the parameter\'s default value is\nsubstituted. If a parameter has a default value, all following\nparameters must also have a default value --- this is a syntactic\nrestriction that is not expressed by the grammar.\n\n**Default parameter values are evaluated when the function definition\nis executed.** This means that the expression is evaluated once, when\nthe function is defined, and that that same "pre-computed" value is\nused for each call. This is especially important to understand when a\ndefault parameter is a mutable object, such as a list or a dictionary:\nif the function modifies the object (e.g. 
by appending an item to a\nlist), the default value is in effect modified. This is generally not\nwhat was intended. A way around this is to use ``None`` as the\ndefault, and explicitly test for it in the body of the function, e.g.:\n\n def whats_on_the_telly(penguin=None):\n if penguin is None:\n penguin = []\n penguin.append("property of the zoo")\n return penguin\n\nFunction call semantics are described in more detail in section\n*Calls*. A function call always assigns values to all parameters\nmentioned in the parameter list, either from position arguments, from\nkeyword arguments, or from default values. If the form\n"``*identifier``" is present, it is initialized to a tuple receiving\nany excess positional parameters, defaulting to the empty tuple. If\nthe form "``**identifier``" is present, it is initialized to a new\ndictionary receiving any excess keyword arguments, defaulting to a new\nempty dictionary.\n\nIt is also possible to create anonymous functions (functions not bound\nto a name), for immediate use in expressions. This uses lambda forms,\ndescribed in section *Lambdas*. Note that the lambda form is merely a\nshorthand for a simplified function definition; a function defined in\na "``def``" statement can be passed around or assigned to another name\njust like a function defined by a lambda form. The "``def``" form is\nactually more powerful since it allows the execution of multiple\nstatements.\n\n**Programmer\'s note:** Functions are first-class objects. A "``def``"\nform executed inside a function definition defines a local function\nthat can be returned or passed around. Free variables used in the\nnested function can access the local variables of the function\ncontaining the def. See section *Naming and binding* for details.\n\n\nClass definitions\n=================\n\nA class definition defines a class object (see section *The standard\ntype hierarchy*):\n\n classdef ::= "class" classname [inheritance] ":" suite\n inheritance ::= "(" [expression_list] ")"\n classname ::= identifier\n\nA class definition is an executable statement. It first evaluates the\ninheritance list, if present. Each item in the inheritance list\nshould evaluate to a class object or class type which allows\nsubclassing. The class\'s suite is then executed in a new execution\nframe (see section *Naming and binding*), using a newly created local\nnamespace and the original global namespace. (Usually, the suite\ncontains only function definitions.) When the class\'s suite finishes\nexecution, its execution frame is discarded but its local namespace is\nsaved. [4] A class object is then created using the inheritance list\nfor the base classes and the saved local namespace for the attribute\ndictionary. The class name is bound to this class object in the\noriginal local namespace.\n\n**Programmer\'s note:** Variables defined in the class definition are\nclass variables; they are shared by all instances. To create instance\nvariables, they can be set in a method with ``self.name = value``.\nBoth class and instance variables are accessible through the notation\n"``self.name``", and an instance variable hides a class variable with\nthe same name when accessed in this way. Class variables can be used\nas defaults for instance variables, but using mutable values there can\nlead to unexpected results. 
For *new-style class*es, descriptors can\nbe used to create instance variables with different implementation\ndetails.\n\nClass definitions, like function definitions, may be wrapped by one or\nmore *decorator* expressions. The evaluation rules for the decorator\nexpressions are the same as for functions. The result must be a class\nobject, which is then bound to the class name.\n\n-[ Footnotes ]-\n\n[1] The exception is propagated to the invocation stack only if there\n is no ``finally`` clause that negates the exception.\n\n[2] Currently, control "flows off the end" except in the case of an\n exception or the execution of a ``return``, ``continue``, or\n ``break`` statement.\n\n[3] A string literal appearing as the first statement in the function\n body is transformed into the function\'s ``__doc__`` attribute and\n therefore the function\'s *docstring*.\n\n[4] A string literal appearing as the first statement in the class\n body is transformed into the namespace\'s ``__doc__`` item and\n therefore the class\'s *docstring*.\n', 'context-managers': u'\nWith Statement Context Managers\n*******************************\n\nNew in version 2.5.\n\nA *context manager* is an object that defines the runtime context to\nbe established when executing a ``with`` statement. The context\nmanager handles the entry into, and the exit from, the desired runtime\ncontext for the execution of the block of code. Context managers are\nnormally invoked using the ``with`` statement (described in section\n*The with statement*), but can also be used by directly invoking their\nmethods.\n\nTypical uses of context managers include saving and restoring various\nkinds of global state, locking and unlocking resources, closing opened\nfiles, etc.\n\nFor more information on context managers, see *Context Manager Types*.\n\nobject.__enter__(self)\n\n Enter the runtime context related to this object. The ``with``\n statement will bind this method\'s return value to the target(s)\n specified in the ``as`` clause of the statement, if any.\n\nobject.__exit__(self, exc_type, exc_value, traceback)\n\n Exit the runtime context related to this object. The parameters\n describe the exception that caused the context to be exited. If the\n context was exited without an exception, all three arguments will\n be ``None``.\n\n If an exception is supplied, and the method wishes to suppress the\n exception (i.e., prevent it from being propagated), it should\n return a true value. Otherwise, the exception will be processed\n normally upon exit from this method.\n\n Note that ``__exit__()`` methods should not reraise the passed-in\n exception; this is the caller\'s responsibility.\n\nSee also:\n\n **PEP 0343** - The "with" statement\n The specification, background, and examples for the Python\n ``with`` statement.\n', 'continue': u'\nThe ``continue`` statement\n**************************\n\n continue_stmt ::= "continue"\n\n``continue`` may only occur syntactically nested in a ``for`` or\n``while`` loop, but not nested in a function or class definition or\n``finally`` clause within that loop. 
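Returning briefly to the ``__enter__()``/``__exit__()`` protocol described in the context manager section above, a minimal sketch (``Timer`` is a made-up class, not part of the reference text):

    import time

    class Timer(object):
        """Times the body of a ``with`` block."""
        def __enter__(self):
            self.start = time.time()
            return self                  # bound to the ``as`` target, if any

        def __exit__(self, exc_type, exc_value, traceback):
            self.elapsed = time.time() - self.start
            return False                 # do not suppress exceptions

    with Timer() as t:
        sum(range(100000))
    print "took %.6f seconds" % t.elapsed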
It continues with the next cycle\nof the nearest enclosing loop.\n\nWhen ``continue`` passes control out of a ``try`` statement with a\n``finally`` clause, that ``finally`` clause is executed before really\nstarting the next loop cycle.\n', 'conversions': u'\nArithmetic conversions\n**********************\n\nWhen a description of an arithmetic operator below uses the phrase\n"the numeric arguments are converted to a common type," the arguments\nare coerced using the coercion rules listed at *Coercion rules*. If\nboth arguments are standard numeric types, the following coercions are\napplied:\n\n* If either argument is a complex number, the other is converted to\n complex;\n\n* otherwise, if either argument is a floating point number, the other\n is converted to floating point;\n\n* otherwise, if either argument is a long integer, the other is\n converted to long integer;\n\n* otherwise, both must be plain integers and no conversion is\n necessary.\n\nSome additional rules apply for certain operators (e.g., a string left\nargument to the \'%\' operator). Extensions can define their own\ncoercions.\n', 'customization': u'\nBasic customization\n*******************\n\nobject.__new__(cls[, ...])\n\n Called to create a new instance of class *cls*. ``__new__()`` is a\n static method (special-cased so you need not declare it as such)\n that takes the class of which an instance was requested as its\n first argument. The remaining arguments are those passed to the\n object constructor expression (the call to the class). The return\n value of ``__new__()`` should be the new object instance (usually\n an instance of *cls*).\n\n Typical implementations create a new instance of the class by\n invoking the superclass\'s ``__new__()`` method using\n ``super(currentclass, cls).__new__(cls[, ...])`` with appropriate\n arguments and then modifying the newly-created instance as\n necessary before returning it.\n\n If ``__new__()`` returns an instance of *cls*, then the new\n instance\'s ``__init__()`` method will be invoked like\n ``__init__(self[, ...])``, where *self* is the new instance and the\n remaining arguments are the same as were passed to ``__new__()``.\n\n If ``__new__()`` does not return an instance of *cls*, then the new\n instance\'s ``__init__()`` method will not be invoked.\n\n ``__new__()`` is intended mainly to allow subclasses of immutable\n types (like int, str, or tuple) to customize instance creation. It\n is also commonly overridden in custom metaclasses in order to\n customize class creation.\n\nobject.__init__(self[, ...])\n\n Called when the instance is created. The arguments are those\n passed to the class constructor expression. If a base class has an\n ``__init__()`` method, the derived class\'s ``__init__()`` method,\n if any, must explicitly call it to ensure proper initialization of\n the base class part of the instance; for example:\n ``BaseClass.__init__(self, [args...])``. As a special constraint\n on constructors, no value may be returned; doing so will cause a\n ``TypeError`` to be raised at runtime.\n\nobject.__del__(self)\n\n Called when the instance is about to be destroyed. This is also\n called a destructor. If a base class has a ``__del__()`` method,\n the derived class\'s ``__del__()`` method, if any, must explicitly\n call it to ensure proper deletion of the base class part of the\n instance. Note that it is possible (though not recommended!) for\n the ``__del__()`` method to postpone destruction of the instance by\n creating a new reference to it. 
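As a sketch of the ``__new__()``/``__init__()`` split described above for subclasses of immutable types (``Inches`` is a made-up class, not part of the reference text):

    class Inches(float):
        """Immutable base type: the value must be fixed in __new__()."""
        def __new__(cls, value):
            return super(Inches, cls).__new__(cls, value)

        def __init__(self, value):
            # Runs afterwards because __new__() returned an Inches instance.
            self.unit = "in"

    x = Inches(2.5)
    print x + 1.0, x.unit        # 3.5 in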
It may then be called at a later\n time when this new reference is deleted. It is not guaranteed that\n ``__del__()`` methods are called for objects that still exist when\n the interpreter exits.\n\n Note: ``del x`` doesn\'t directly call ``x.__del__()`` --- the former\n decrements the reference count for ``x`` by one, and the latter\n is only called when ``x``\'s reference count reaches zero. Some\n common situations that may prevent the reference count of an\n object from going to zero include: circular references between\n objects (e.g., a doubly-linked list or a tree data structure with\n parent and child pointers); a reference to the object on the\n stack frame of a function that caught an exception (the traceback\n stored in ``sys.exc_traceback`` keeps the stack frame alive); or\n a reference to the object on the stack frame that raised an\n unhandled exception in interactive mode (the traceback stored in\n ``sys.last_traceback`` keeps the stack frame alive). The first\n situation can only be remedied by explicitly breaking the cycles;\n the latter two situations can be resolved by storing ``None`` in\n ``sys.exc_traceback`` or ``sys.last_traceback``. Circular\n references which are garbage are detected when the option cycle\n detector is enabled (it\'s on by default), but can only be cleaned\n up if there are no Python-level ``__del__()`` methods involved.\n Refer to the documentation for the ``gc`` module for more\n information about how ``__del__()`` methods are handled by the\n cycle detector, particularly the description of the ``garbage``\n value.\n\n Warning: Due to the precarious circumstances under which ``__del__()``\n methods are invoked, exceptions that occur during their execution\n are ignored, and a warning is printed to ``sys.stderr`` instead.\n Also, when ``__del__()`` is invoked in response to a module being\n deleted (e.g., when execution of the program is done), other\n globals referenced by the ``__del__()`` method may already have\n been deleted or in the process of being torn down (e.g. the\n import machinery shutting down). For this reason, ``__del__()``\n methods should do the absolute minimum needed to maintain\n external invariants. Starting with version 1.5, Python\n guarantees that globals whose name begins with a single\n underscore are deleted from their module before other globals are\n deleted; if no other references to such globals exist, this may\n help in assuring that imported modules are still available at the\n time when the ``__del__()`` method is called.\n\nobject.__repr__(self)\n\n Called by the ``repr()`` built-in function and by string\n conversions (reverse quotes) to compute the "official" string\n representation of an object. If at all possible, this should look\n like a valid Python expression that could be used to recreate an\n object with the same value (given an appropriate environment). If\n this is not possible, a string of the form ``<...some useful\n description...>`` should be returned. The return value must be a\n string object. If a class defines ``__repr__()`` but not\n ``__str__()``, then ``__repr__()`` is also used when an "informal"\n string representation of instances of that class is required.\n\n This is typically used for debugging, so it is important that the\n representation is information-rich and unambiguous.\n\nobject.__str__(self)\n\n Called by the ``str()`` built-in function and by the ``print``\n statement to compute the "informal" string representation of an\n object. 
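A brief sketch of the ``__repr__()``/``__str__()`` distinction described above (``Point`` is a made-up class, not part of the reference text):

    class Point(object):
        def __init__(self, x, y):
            self.x, self.y = x, y

        def __repr__(self):
            # Aims to look like an expression that recreates the object.
            return "Point(%r, %r)" % (self.x, self.y)

        def __str__(self):
            # More readable form used by str() and the print statement.
            return "(%s, %s)" % (self.x, self.y)

    p = Point(1, 2)
    print repr(p)     # Point(1, 2)
    print p           # (1, 2)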
This differs from ``__repr__()`` in that it does not have\n to be a valid Python expression: a more convenient or concise\n representation may be used instead. The return value must be a\n string object.\n\nobject.__lt__(self, other)\nobject.__le__(self, other)\nobject.__eq__(self, other)\nobject.__ne__(self, other)\nobject.__gt__(self, other)\nobject.__ge__(self, other)\n\n New in version 2.1.\n\n These are the so-called "rich comparison" methods, and are called\n for comparison operators in preference to ``__cmp__()`` below. The\n correspondence between operator symbols and method names is as\n follows: ``xy`` call ``x.__ne__(y)``, ``x>y`` calls ``x.__gt__(y)``, and\n ``x>=y`` calls ``x.__ge__(y)``.\n\n A rich comparison method may return the singleton\n ``NotImplemented`` if it does not implement the operation for a\n given pair of arguments. By convention, ``False`` and ``True`` are\n returned for a successful comparison. However, these methods can\n return any value, so if the comparison operator is used in a\n Boolean context (e.g., in the condition of an ``if`` statement),\n Python will call ``bool()`` on the value to determine if the result\n is true or false.\n\n There are no implied relationships among the comparison operators.\n The truth of ``x==y`` does not imply that ``x!=y`` is false.\n Accordingly, when defining ``__eq__()``, one should also define\n ``__ne__()`` so that the operators will behave as expected. See\n the paragraph on ``__hash__()`` for some important notes on\n creating *hashable* objects which support custom comparison\n operations and are usable as dictionary keys.\n\n There are no swapped-argument versions of these methods (to be used\n when the left argument does not support the operation but the right\n argument does); rather, ``__lt__()`` and ``__gt__()`` are each\n other\'s reflection, ``__le__()`` and ``__ge__()`` are each other\'s\n reflection, and ``__eq__()`` and ``__ne__()`` are their own\n reflection.\n\n Arguments to rich comparison methods are never coerced.\n\n To automatically generate ordering operations from a single root\n operation, see ``functools.total_ordering()``.\n\nobject.__cmp__(self, other)\n\n Called by comparison operations if rich comparison (see above) is\n not defined. Should return a negative integer if ``self < other``,\n zero if ``self == other``, a positive integer if ``self > other``.\n If no ``__cmp__()``, ``__eq__()`` or ``__ne__()`` operation is\n defined, class instances are compared by object identity\n ("address"). See also the description of ``__hash__()`` for some\n important notes on creating *hashable* objects which support custom\n comparison operations and are usable as dictionary keys. (Note: the\n restriction that exceptions are not propagated by ``__cmp__()`` has\n been removed since Python 1.5.)\n\nobject.__rcmp__(self, other)\n\n Changed in version 2.1: No longer supported.\n\nobject.__hash__(self)\n\n Called by built-in function ``hash()`` and for operations on\n members of hashed collections including ``set``, ``frozenset``, and\n ``dict``. ``__hash__()`` should return an integer. The only\n required property is that objects which compare equal have the same\n hash value; it is advised to somehow mix together (e.g. 
using\n exclusive or) the hash values for the components of the object that\n also play a part in comparison of objects.\n\n If a class does not define a ``__cmp__()`` or ``__eq__()`` method\n it should not define a ``__hash__()`` operation either; if it\n defines ``__cmp__()`` or ``__eq__()`` but not ``__hash__()``, its\n instances will not be usable in hashed collections. If a class\n defines mutable objects and implements a ``__cmp__()`` or\n ``__eq__()`` method, it should not implement ``__hash__()``, since\n hashable collection implementations require that a object\'s hash\n value is immutable (if the object\'s hash value changes, it will be\n in the wrong hash bucket).\n\n User-defined classes have ``__cmp__()`` and ``__hash__()`` methods\n by default; with them, all objects compare unequal (except with\n themselves) and ``x.__hash__()`` returns ``id(x)``.\n\n Classes which inherit a ``__hash__()`` method from a parent class\n but change the meaning of ``__cmp__()`` or ``__eq__()`` such that\n the hash value returned is no longer appropriate (e.g. by switching\n to a value-based concept of equality instead of the default\n identity based equality) can explicitly flag themselves as being\n unhashable by setting ``__hash__ = None`` in the class definition.\n Doing so means that not only will instances of the class raise an\n appropriate ``TypeError`` when a program attempts to retrieve their\n hash value, but they will also be correctly identified as\n unhashable when checking ``isinstance(obj, collections.Hashable)``\n (unlike classes which define their own ``__hash__()`` to explicitly\n raise ``TypeError``).\n\n Changed in version 2.5: ``__hash__()`` may now also return a long\n integer object; the 32-bit integer is then derived from the hash of\n that object.\n\n Changed in version 2.6: ``__hash__`` may now be set to ``None`` to\n explicitly flag instances of a class as unhashable.\n\nobject.__nonzero__(self)\n\n Called to implement truth value testing and the built-in operation\n ``bool()``; should return ``False`` or ``True``, or their integer\n equivalents ``0`` or ``1``. When this method is not defined,\n ``__len__()`` is called, if it is defined, and the object is\n considered true if its result is nonzero. If a class defines\n neither ``__len__()`` nor ``__nonzero__()``, all its instances are\n considered true.\n\nobject.__unicode__(self)\n\n Called to implement ``unicode()`` built-in; should return a Unicode\n object. When this method is not defined, string conversion is\n attempted, and the result of string conversion is converted to\n Unicode using the system default encoding.\n', - 'debugger': u'\n``pdb`` --- The Python Debugger\n*******************************\n\nThe module ``pdb`` defines an interactive source code debugger for\nPython programs. It supports setting (conditional) breakpoints and\nsingle stepping at the source line level, inspection of stack frames,\nsource code listing, and evaluation of arbitrary Python code in the\ncontext of any stack frame. It also supports post-mortem debugging\nand can be called under program control.\n\nThe debugger is extensible --- it is actually defined as the class\n``Pdb``. This is currently undocumented but easily understood by\nreading the source. The extension interface uses the modules ``bdb``\nand ``cmd``.\n\nThe debugger\'s prompt is ``(Pdb)``. 
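Before the ``pdb`` walk-through continues, a brief illustrative sketch of the ``__eq__()``/``__ne__()``/``__hash__()`` guidance from the customization section above (``Account`` is a made-up class, not part of the reference text):

    class Account(object):
        """Value-based equality with a consistent hash."""
        def __init__(self, number):
            self.number = number

        def __eq__(self, other):
            if not isinstance(other, Account):
                return NotImplemented
            return self.number == other.number

        def __ne__(self, other):
            result = self.__eq__(other)
            if result is NotImplemented:
                return result
            return not result

        def __hash__(self):
            # Equal objects must hash equal, so hash only what __eq__() uses.
            return hash(self.number)

    a, b = Account(42), Account(42)
    print a == b                 # True
    print len(set([a, b]))       # 1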
Typical usage to run a program\nunder control of the debugger is:\n\n >>> import pdb\n >>> import mymodule\n >>> pdb.run(\'mymodule.test()\')\n > (0)?()\n (Pdb) continue\n > (1)?()\n (Pdb) continue\n NameError: \'spam\'\n > (1)?()\n (Pdb)\n\n``pdb.py`` can also be invoked as a script to debug other scripts.\nFor example:\n\n python -m pdb myscript.py\n\nWhen invoked as a script, pdb will automatically enter post-mortem\ndebugging if the program being debugged exits abnormally. After post-\nmortem debugging (or after normal exit of the program), pdb will\nrestart the program. Automatic restarting preserves pdb\'s state (such\nas breakpoints) and in most cases is more useful than quitting the\ndebugger upon program\'s exit.\n\nNew in version 2.4: Restarting post-mortem behavior added.\n\nThe typical usage to break into the debugger from a running program is\nto insert\n\n import pdb; pdb.set_trace()\n\nat the location you want to break into the debugger. You can then\nstep through the code following this statement, and continue running\nwithout the debugger using the ``c`` command.\n\nThe typical usage to inspect a crashed program is:\n\n >>> import pdb\n >>> import mymodule\n >>> mymodule.test()\n Traceback (most recent call last):\n File "", line 1, in ?\n File "./mymodule.py", line 4, in test\n test2()\n File "./mymodule.py", line 3, in test2\n print spam\n NameError: spam\n >>> pdb.pm()\n > ./mymodule.py(3)test2()\n -> print spam\n (Pdb)\n\nThe module defines the following functions; each enters the debugger\nin a slightly different way:\n\npdb.run(statement[, globals[, locals]])\n\n Execute the *statement* (given as a string) under debugger control.\n The debugger prompt appears before any code is executed; you can\n set breakpoints and type ``continue``, or you can step through the\n statement using ``step`` or ``next`` (all these commands are\n explained below). The optional *globals* and *locals* arguments\n specify the environment in which the code is executed; by default\n the dictionary of the module ``__main__`` is used. (See the\n explanation of the ``exec`` statement or the ``eval()`` built-in\n function.)\n\npdb.runeval(expression[, globals[, locals]])\n\n Evaluate the *expression* (given as a string) under debugger\n control. When ``runeval()`` returns, it returns the value of the\n expression. Otherwise this function is similar to ``run()``.\n\npdb.runcall(function[, argument, ...])\n\n Call the *function* (a function or method object, not a string)\n with the given arguments. When ``runcall()`` returns, it returns\n whatever the function call returned. The debugger prompt appears\n as soon as the function is entered.\n\npdb.set_trace()\n\n Enter the debugger at the calling stack frame. This is useful to\n hard-code a breakpoint at a given point in a program, even if the\n code is not otherwise being debugged (e.g. when an assertion\n fails).\n\npdb.post_mortem([traceback])\n\n Enter post-mortem debugging of the given *traceback* object. If no\n *traceback* is given, it uses the one of the exception that is\n currently being handled (an exception must be being handled if the\n default is to be used).\n\npdb.pm()\n\n Enter post-mortem debugging of the traceback found in\n ``sys.last_traceback``.\n\nThe ``run_*`` functions and ``set_trace()`` are aliases for\ninstantiating the ``Pdb`` class and calling the method of the same\nname. 
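For instance, ``runcall()`` can be used to stop at the first line of a
function (a sketch reusing the hypothetical ``mymodule`` from the examples
above):

   import pdb
   import mymodule

   # the debugger prompt appears as soon as mymodule.test() is entered
   pdb.runcall(mymodule.test)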
If you want to access further features, you have to do this\nyourself:\n\nclass class pdb.Pdb(completekey=\'tab\', stdin=None, stdout=None, skip=None)\n\n ``Pdb`` is the debugger class.\n\n The *completekey*, *stdin* and *stdout* arguments are passed to the\n underlying ``cmd.Cmd`` class; see the description there.\n\n The *skip* argument, if given, must be an iterable of glob-style\n module name patterns. The debugger will not step into frames that\n originate in a module that matches one of these patterns. [1]\n\n Example call to enable tracing with *skip*:\n\n import pdb; pdb.Pdb(skip=[\'django.*\']).set_trace()\n\n New in version 2.7: The *skip* argument.\n\n run(statement[, globals[, locals]])\n runeval(expression[, globals[, locals]])\n runcall(function[, argument, ...])\n set_trace()\n\n See the documentation for the functions explained above.\n', + 'debugger': u'\n``pdb`` --- The Python Debugger\n*******************************\n\nThe module ``pdb`` defines an interactive source code debugger for\nPython programs. It supports setting (conditional) breakpoints and\nsingle stepping at the source line level, inspection of stack frames,\nsource code listing, and evaluation of arbitrary Python code in the\ncontext of any stack frame. It also supports post-mortem debugging\nand can be called under program control.\n\nThe debugger is extensible --- it is actually defined as the class\n``Pdb``. This is currently undocumented but easily understood by\nreading the source. The extension interface uses the modules ``bdb``\nand ``cmd``.\n\nThe debugger\'s prompt is ``(Pdb)``. Typical usage to run a program\nunder control of the debugger is:\n\n >>> import pdb\n >>> import mymodule\n >>> pdb.run(\'mymodule.test()\')\n > (0)?()\n (Pdb) continue\n > (1)?()\n (Pdb) continue\n NameError: \'spam\'\n > (1)?()\n (Pdb)\n\n``pdb.py`` can also be invoked as a script to debug other scripts.\nFor example:\n\n python -m pdb myscript.py\n\nWhen invoked as a script, pdb will automatically enter post-mortem\ndebugging if the program being debugged exits abnormally. After post-\nmortem debugging (or after normal exit of the program), pdb will\nrestart the program. Automatic restarting preserves pdb\'s state (such\nas breakpoints) and in most cases is more useful than quitting the\ndebugger upon program\'s exit.\n\nNew in version 2.4: Restarting post-mortem behavior added.\n\nThe typical usage to break into the debugger from a running program is\nto insert\n\n import pdb; pdb.set_trace()\n\nat the location you want to break into the debugger. You can then\nstep through the code following this statement, and continue running\nwithout the debugger using the ``c`` command.\n\nThe typical usage to inspect a crashed program is:\n\n >>> import pdb\n >>> import mymodule\n >>> mymodule.test()\n Traceback (most recent call last):\n File "", line 1, in ?\n File "./mymodule.py", line 4, in test\n test2()\n File "./mymodule.py", line 3, in test2\n print spam\n NameError: spam\n >>> pdb.pm()\n > ./mymodule.py(3)test2()\n -> print spam\n (Pdb)\n\nThe module defines the following functions; each enters the debugger\nin a slightly different way:\n\npdb.run(statement[, globals[, locals]])\n\n Execute the *statement* (given as a string) under debugger control.\n The debugger prompt appears before any code is executed; you can\n set breakpoints and type ``continue``, or you can step through the\n statement using ``step`` or ``next`` (all these commands are\n explained below). 
The optional *globals* and *locals* arguments\n specify the environment in which the code is executed; by default\n the dictionary of the module ``__main__`` is used. (See the\n explanation of the ``exec`` statement or the ``eval()`` built-in\n function.)\n\npdb.runeval(expression[, globals[, locals]])\n\n Evaluate the *expression* (given as a string) under debugger\n control. When ``runeval()`` returns, it returns the value of the\n expression. Otherwise this function is similar to ``run()``.\n\npdb.runcall(function[, argument, ...])\n\n Call the *function* (a function or method object, not a string)\n with the given arguments. When ``runcall()`` returns, it returns\n whatever the function call returned. The debugger prompt appears\n as soon as the function is entered.\n\npdb.set_trace()\n\n Enter the debugger at the calling stack frame. This is useful to\n hard-code a breakpoint at a given point in a program, even if the\n code is not otherwise being debugged (e.g. when an assertion\n fails).\n\npdb.post_mortem([traceback])\n\n Enter post-mortem debugging of the given *traceback* object. If no\n *traceback* is given, it uses the one of the exception that is\n currently being handled (an exception must be being handled if the\n default is to be used).\n\npdb.pm()\n\n Enter post-mortem debugging of the traceback found in\n ``sys.last_traceback``.\n\nThe ``run*`` functions and ``set_trace()`` are aliases for\ninstantiating the ``Pdb`` class and calling the method of the same\nname. If you want to access further features, you have to do this\nyourself:\n\nclass class pdb.Pdb(completekey=\'tab\', stdin=None, stdout=None, skip=None)\n\n ``Pdb`` is the debugger class.\n\n The *completekey*, *stdin* and *stdout* arguments are passed to the\n underlying ``cmd.Cmd`` class; see the description there.\n\n The *skip* argument, if given, must be an iterable of glob-style\n module name patterns. The debugger will not step into frames that\n originate in a module that matches one of these patterns. [1]\n\n Example call to enable tracing with *skip*:\n\n import pdb; pdb.Pdb(skip=[\'django.*\']).set_trace()\n\n New in version 2.7: The *skip* argument.\n\n run(statement[, globals[, locals]])\n runeval(expression[, globals[, locals]])\n runcall(function[, argument, ...])\n set_trace()\n\n See the documentation for the functions explained above.\n', 'del': u'\nThe ``del`` statement\n*********************\n\n del_stmt ::= "del" target_list\n\nDeletion is recursively defined very similar to the way assignment is\ndefined. Rather that spelling it out in full details, here are some\nhints.\n\nDeletion of a target list recursively deletes each target, from left\nto right.\n\nDeletion of a name removes the binding of that name from the local or\nglobal namespace, depending on whether the name occurs in a ``global``\nstatement in the same code block. 
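A few concrete deletions (a sketch; the names are arbitrary):

   x = [1, 2, 3, 4]
   del x[1:3]       # deletion of a slicing; x is now [1, 4]
   d = {'spam': 1}
   del d['spam']    # deletion of a subscription is passed to the mapping
   del x            # removes the binding of the name 'x'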
If the name is unbound, a\n``NameError`` exception will be raised.\n\nIt is illegal to delete a name from the local namespace if it occurs\nas a free variable in a nested block.\n\nDeletion of attribute references, subscriptions and slicings is passed\nto the primary object involved; deletion of a slicing is in general\nequivalent to assignment of an empty slice of the right type (but even\nthis is determined by the sliced object).\n', 'dict': u'\nDictionary displays\n*******************\n\nA dictionary display is a possibly empty series of key/datum pairs\nenclosed in curly braces:\n\n dict_display ::= "{" [key_datum_list | dict_comprehension] "}"\n key_datum_list ::= key_datum ("," key_datum)* [","]\n key_datum ::= expression ":" expression\n dict_comprehension ::= expression ":" expression comp_for\n\nA dictionary display yields a new dictionary object.\n\nIf a comma-separated sequence of key/datum pairs is given, they are\nevaluated from left to right to define the entries of the dictionary:\neach key object is used as a key into the dictionary to store the\ncorresponding datum. This means that you can specify the same key\nmultiple times in the key/datum list, and the final dictionary\'s value\nfor that key will be the last one given.\n\nA dict comprehension, in contrast to list and set comprehensions,\nneeds two expressions separated with a colon followed by the usual\n"for" and "if" clauses. When the comprehension is run, the resulting\nkey and value elements are inserted in the new dictionary in the order\nthey are produced.\n\nRestrictions on the types of the key values are listed earlier in\nsection *The standard type hierarchy*. (To summarize, the key type\nshould be *hashable*, which excludes all mutable objects.) Clashes\nbetween duplicate keys are not detected; the last datum (textually\nrightmost in the display) stored for a given key value prevails.\n', 'dynamic-features': u'\nInteraction with dynamic features\n*********************************\n\nThere are several cases where Python statements are illegal when used\nin conjunction with nested scopes that contain free variables.\n\nIf a variable is referenced in an enclosing scope, it is illegal to\ndelete the name. An error will be reported at compile time.\n\nIf the wild card form of import --- ``import *`` --- is used in a\nfunction and the function contains or is a nested block with free\nvariables, the compiler will raise a ``SyntaxError``.\n\nIf ``exec`` is used in a function and the function contains or is a\nnested block with free variables, the compiler will raise a\n``SyntaxError`` unless the exec explicitly specifies the local\nnamespace for the ``exec``. (In other words, ``exec obj`` would be\nillegal, but ``exec obj in ns`` would be legal.)\n\nThe ``eval()``, ``execfile()``, and ``input()`` functions and the\n``exec`` statement do not have access to the full environment for\nresolving names. Names may be resolved in the local and global\nnamespaces of the caller. Free variables are not resolved in the\nnearest enclosing namespace, but in the global namespace. 
[1] The\n``exec`` statement and the ``eval()`` and ``execfile()`` functions\nhave optional arguments to override the global and local namespace.\nIf only one namespace is specified, it is used for both.\n', 'else': u'\nThe ``if`` statement\n********************\n\nThe ``if`` statement is used for conditional execution:\n\n if_stmt ::= "if" expression ":" suite\n ( "elif" expression ":" suite )*\n ["else" ":" suite]\n\nIt selects exactly one of the suites by evaluating the expressions one\nby one until one is found to be true (see section *Boolean operations*\nfor the definition of true and false); then that suite is executed\n(and no other part of the ``if`` statement is executed or evaluated).\nIf all expressions are false, the suite of the ``else`` clause, if\npresent, is executed.\n', 'exceptions': u'\nExceptions\n**********\n\nExceptions are a means of breaking out of the normal flow of control\nof a code block in order to handle errors or other exceptional\nconditions. An exception is *raised* at the point where the error is\ndetected; it may be *handled* by the surrounding code block or by any\ncode block that directly or indirectly invoked the code block where\nthe error occurred.\n\nThe Python interpreter raises an exception when it detects a run-time\nerror (such as division by zero). A Python program can also\nexplicitly raise an exception with the ``raise`` statement. Exception\nhandlers are specified with the ``try`` ... ``except`` statement. The\n``finally`` clause of such a statement can be used to specify cleanup\ncode which does not handle the exception, but is executed whether an\nexception occurred or not in the preceding code.\n\nPython uses the "termination" model of error handling: an exception\nhandler can find out what happened and continue execution at an outer\nlevel, but it cannot repair the cause of the error and retry the\nfailing operation (except by re-entering the offending piece of code\nfrom the top).\n\nWhen an exception is not handled at all, the interpreter terminates\nexecution of the program, or returns to its interactive main loop. In\neither case, it prints a stack backtrace, except when the exception is\n``SystemExit``.\n\nExceptions are identified by class instances. The ``except`` clause\nis selected depending on the class of the instance: it must reference\nthe class of the instance or a base class thereof. The instance can\nbe received by the handler and can carry additional information about\nthe exceptional condition.\n\nExceptions can also be identified by strings, in which case the\n``except`` clause is selected by object identity. An arbitrary value\ncan be raised along with the identifying string which can be passed to\nthe handler.\n\nNote: Messages to exceptions are not part of the Python API. Their\n contents may change from one version of Python to the next without\n warning and should not be relied on by code which will run under\n multiple versions of the interpreter.\n\nSee also the description of the ``try`` statement in section *The try\nstatement* and ``raise`` statement in section *The raise statement*.\n\n-[ Footnotes ]-\n\n[1] This limitation occurs because the code that is executed by these\n operations is not available at the time the module is compiled.\n', 'exec': u'\nThe ``exec`` statement\n**********************\n\n exec_stmt ::= "exec" or_expr ["in" expression ["," expression]]\n\nThis statement supports dynamic execution of Python code. 
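For example, executing a string in a caller-supplied namespace (a sketch;
the optional ``in`` clause is described below):

   >>> ns = {}
   >>> exec "x = 6 * 7" in ns
   >>> ns['x']
   42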
The first\nexpression should evaluate to either a string, an open file object, or\na code object. If it is a string, the string is parsed as a suite of\nPython statements which is then executed (unless a syntax error\noccurs). [1] If it is an open file, the file is parsed until EOF and\nexecuted. If it is a code object, it is simply executed. In all\ncases, the code that\'s executed is expected to be valid as file input\n(see section *File input*). Be aware that the ``return`` and\n``yield`` statements may not be used outside of function definitions\neven within the context of code passed to the ``exec`` statement.\n\nIn all cases, if the optional parts are omitted, the code is executed\nin the current scope. If only the first expression after ``in`` is\nspecified, it should be a dictionary, which will be used for both the\nglobal and the local variables. If two expressions are given, they\nare used for the global and local variables, respectively. If\nprovided, *locals* can be any mapping object.\n\nChanged in version 2.4: Formerly, *locals* was required to be a\ndictionary.\n\nAs a side effect, an implementation may insert additional keys into\nthe dictionaries given besides those corresponding to variable names\nset by the executed code. For example, the current implementation may\nadd a reference to the dictionary of the built-in module\n``__builtin__`` under the key ``__builtins__`` (!).\n\n**Programmer\'s hints:** dynamic evaluation of expressions is supported\nby the built-in function ``eval()``. The built-in functions\n``globals()`` and ``locals()`` return the current global and local\ndictionary, respectively, which may be useful to pass around for use\nby ``exec``.\n\n-[ Footnotes ]-\n\n[1] Note that the parser only accepts the Unix-style end of line\n convention. If you are reading the code from a file, make sure to\n use universal newline mode to convert Windows or Mac-style\n newlines.\n', - 'execmodel': u'\nExecution model\n***************\n\n\nNaming and binding\n==================\n\n*Names* refer to objects. Names are introduced by name binding\noperations. Each occurrence of a name in the program text refers to\nthe *binding* of that name established in the innermost function block\ncontaining the use.\n\nA *block* is a piece of Python program text that is executed as a\nunit. The following are blocks: a module, a function body, and a class\ndefinition. Each command typed interactively is a block. A script\nfile (a file given as standard input to the interpreter or specified\non the interpreter command line the first argument) is a code block.\nA script command (a command specified on the interpreter command line\nwith the \'**-c**\' option) is a code block. The file read by the\nbuilt-in function ``execfile()`` is a code block. The string argument\npassed to the built-in function ``eval()`` and to the ``exec``\nstatement is a code block. The expression read and evaluated by the\nbuilt-in function ``input()`` is a code block.\n\nA code block is executed in an *execution frame*. A frame contains\nsome administrative information (used for debugging) and determines\nwhere and how execution continues after the code block\'s execution has\ncompleted.\n\nA *scope* defines the visibility of a name within a block. If a local\nvariable is defined in a block, its scope includes that block. If the\ndefinition occurs in a function block, the scope extends to any blocks\ncontained within the defining one, unless a contained block introduces\na different binding for the name. 
The scope of names defined in a\nclass block is limited to the class block; it does not extend to the\ncode blocks of methods -- this includes generator expressions since\nthey are implemented using a function scope. This means that the\nfollowing will fail:\n\n class A:\n a = 42\n b = list(a + i for i in range(10))\n\nWhen a name is used in a code block, it is resolved using the nearest\nenclosing scope. The set of all such scopes visible to a code block\nis called the block\'s *environment*.\n\nIf a name is bound in a block, it is a local variable of that block.\nIf a name is bound at the module level, it is a global variable. (The\nvariables of the module code block are local and global.) If a\nvariable is used in a code block but not defined there, it is a *free\nvariable*.\n\nWhen a name is not found at all, a ``NameError`` exception is raised.\nIf the name refers to a local variable that has not been bound, a\n``UnboundLocalError`` exception is raised. ``UnboundLocalError`` is a\nsubclass of ``NameError``.\n\nThe following constructs bind names: formal parameters to functions,\n``import`` statements, class and function definitions (these bind the\nclass or function name in the defining block), and targets that are\nidentifiers if occurring in an assignment, ``for`` loop header, in the\nsecond position of an ``except`` clause header or after ``as`` in a\n``with`` statement. The ``import`` statement of the form ``from ...\nimport *`` binds all names defined in the imported module, except\nthose beginning with an underscore. This form may only be used at the\nmodule level.\n\nA target occurring in a ``del`` statement is also considered bound for\nthis purpose (though the actual semantics are to unbind the name). It\nis illegal to unbind a name that is referenced by an enclosing scope;\nthe compiler will report a ``SyntaxError``.\n\nEach assignment or import statement occurs within a block defined by a\nclass or function definition or at the module level (the top-level\ncode block).\n\nIf a name binding operation occurs anywhere within a code block, all\nuses of the name within the block are treated as references to the\ncurrent block. This can lead to errors when a name is used within a\nblock before it is bound. This rule is subtle. Python lacks\ndeclarations and allows name binding operations to occur anywhere\nwithin a code block. The local variables of a code block can be\ndetermined by scanning the entire text of the block for name binding\noperations.\n\nIf the global statement occurs within a block, all uses of the name\nspecified in the statement refer to the binding of that name in the\ntop-level namespace. Names are resolved in the top-level namespace by\nsearching the global namespace, i.e. the namespace of the module\ncontaining the code block, and the builtins namespace, the namespace\nof the module ``__builtin__``. The global namespace is searched\nfirst. If the name is not found there, the builtins namespace is\nsearched. The global statement must precede all uses of the name.\n\nThe builtins namespace associated with the execution of a code block\nis actually found by looking up the name ``__builtins__`` in its\nglobal namespace; this should be a dictionary or a module (in the\nlatter case the module\'s dictionary is used). By default, when in the\n``__main__`` module, ``__builtins__`` is the built-in module\n``__builtin__`` (note: no \'s\'); when in any other module,\n``__builtins__`` is an alias for the dictionary of the ``__builtin__``\nmodule itself. 
``__builtins__`` can be set to a user-created\ndictionary to create a weak form of restricted execution.\n\n**CPython implementation detail:** Users should not touch\n``__builtins__``; it is strictly an implementation detail. Users\nwanting to override values in the builtins namespace should ``import``\nthe ``__builtin__`` (no \'s\') module and modify its attributes\nappropriately.\n\nThe namespace for a module is automatically created the first time a\nmodule is imported. The main module for a script is always called\n``__main__``.\n\nThe global statement has the same scope as a name binding operation in\nthe same block. If the nearest enclosing scope for a free variable\ncontains a global statement, the free variable is treated as a global.\n\nA class definition is an executable statement that may use and define\nnames. These references follow the normal rules for name resolution.\nThe namespace of the class definition becomes the attribute dictionary\nof the class. Names defined at the class scope are not visible in\nmethods.\n\n\nInteraction with dynamic features\n---------------------------------\n\nThere are several cases where Python statements are illegal when used\nin conjunction with nested scopes that contain free variables.\n\nIf a variable is referenced in an enclosing scope, it is illegal to\ndelete the name. An error will be reported at compile time.\n\nIf the wild card form of import --- ``import *`` --- is used in a\nfunction and the function contains or is a nested block with free\nvariables, the compiler will raise a ``SyntaxError``.\n\nIf ``exec`` is used in a function and the function contains or is a\nnested block with free variables, the compiler will raise a\n``SyntaxError`` unless the exec explicitly specifies the local\nnamespace for the ``exec``. (In other words, ``exec obj`` would be\nillegal, but ``exec obj in ns`` would be legal.)\n\nThe ``eval()``, ``execfile()``, and ``input()`` functions and the\n``exec`` statement do not have access to the full environment for\nresolving names. Names may be resolved in the local and global\nnamespaces of the caller. Free variables are not resolved in the\nnearest enclosing namespace, but in the global namespace. [1] The\n``exec`` statement and the ``eval()`` and ``execfile()`` functions\nhave optional arguments to override the global and local namespace.\nIf only one namespace is specified, it is used for both.\n\n\nExceptions\n==========\n\nExceptions are a means of breaking out of the normal flow of control\nof a code block in order to handle errors or other exceptional\nconditions. An exception is *raised* at the point where the error is\ndetected; it may be *handled* by the surrounding code block or by any\ncode block that directly or indirectly invoked the code block where\nthe error occurred.\n\nThe Python interpreter raises an exception when it detects a run-time\nerror (such as division by zero). A Python program can also\nexplicitly raise an exception with the ``raise`` statement. Exception\nhandlers are specified with the ``try`` ... ``except`` statement. 
The\n``finally`` clause of such a statement can be used to specify cleanup\ncode which does not handle the exception, but is executed whether an\nexception occurred or not in the preceding code.\n\nPython uses the "termination" model of error handling: an exception\nhandler can find out what happened and continue execution at an outer\nlevel, but it cannot repair the cause of the error and retry the\nfailing operation (except by re-entering the offending piece of code\nfrom the top).\n\nWhen an exception is not handled at all, the interpreter terminates\nexecution of the program, or returns to its interactive main loop. In\neither case, it prints a stack backtrace, except when the exception is\n``SystemExit``.\n\nExceptions are identified by class instances. The ``except`` clause\nis selected depending on the class of the instance: it must reference\nthe class of the instance or a base class thereof. The instance can\nbe received by the handler and can carry additional information about\nthe exceptional condition.\n\nExceptions can also be identified by strings, in which case the\n``except`` clause is selected by object identity. An arbitrary value\ncan be raised along with the identifying string which can be passed to\nthe handler.\n\nNote: Messages to exceptions are not part of the Python API. Their\n contents may change from one version of Python to the next without\n warning and should not be relied on by code which will run under\n multiple versions of the interpreter.\n\nSee also the description of the ``try`` statement in section *The try\nstatement* and ``raise`` statement in section *The raise statement*.\n\n-[ Footnotes ]-\n\n[1] This limitation occurs because the code that is executed by these\n operations is not available at the time the module is compiled.\n', + 'execmodel': u'\nExecution model\n***************\n\n\nNaming and binding\n==================\n\n*Names* refer to objects. Names are introduced by name binding\noperations. Each occurrence of a name in the program text refers to\nthe *binding* of that name established in the innermost function block\ncontaining the use.\n\nA *block* is a piece of Python program text that is executed as a\nunit. The following are blocks: a module, a function body, and a class\ndefinition. Each command typed interactively is a block. A script\nfile (a file given as standard input to the interpreter or specified\non the interpreter command line the first argument) is a code block.\nA script command (a command specified on the interpreter command line\nwith the \'**-c**\' option) is a code block. The file read by the\nbuilt-in function ``execfile()`` is a code block. The string argument\npassed to the built-in function ``eval()`` and to the ``exec``\nstatement is a code block. The expression read and evaluated by the\nbuilt-in function ``input()`` is a code block.\n\nA code block is executed in an *execution frame*. A frame contains\nsome administrative information (used for debugging) and determines\nwhere and how execution continues after the code block\'s execution has\ncompleted.\n\nA *scope* defines the visibility of a name within a block. If a local\nvariable is defined in a block, its scope includes that block. If the\ndefinition occurs in a function block, the scope extends to any blocks\ncontained within the defining one, unless a contained block introduces\na different binding for the name. 
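A minimal sketch of that rule, with a free variable resolved in the scope of
the enclosing function block (the names are arbitrary):

   def outer():
       x = 'spam'
       def inner():
           # 'x' is a free variable of inner(); it is found in the
           # scope of the enclosing function block
           return x
       return inner()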
The scope of names defined in a\nclass block is limited to the class block; it does not extend to the\ncode blocks of methods -- this includes generator expressions since\nthey are implemented using a function scope. This means that the\nfollowing will fail:\n\n class A:\n a = 42\n b = list(a + i for i in range(10))\n\nWhen a name is used in a code block, it is resolved using the nearest\nenclosing scope. The set of all such scopes visible to a code block\nis called the block\'s *environment*.\n\nIf a name is bound in a block, it is a local variable of that block.\nIf a name is bound at the module level, it is a global variable. (The\nvariables of the module code block are local and global.) If a\nvariable is used in a code block but not defined there, it is a *free\nvariable*.\n\nWhen a name is not found at all, a ``NameError`` exception is raised.\nIf the name refers to a local variable that has not been bound, a\n``UnboundLocalError`` exception is raised. ``UnboundLocalError`` is a\nsubclass of ``NameError``.\n\nThe following constructs bind names: formal parameters to functions,\n``import`` statements, class and function definitions (these bind the\nclass or function name in the defining block), and targets that are\nidentifiers if occurring in an assignment, ``for`` loop header, in the\nsecond position of an ``except`` clause header or after ``as`` in a\n``with`` statement. The ``import`` statement of the form ``from ...\nimport *`` binds all names defined in the imported module, except\nthose beginning with an underscore. This form may only be used at the\nmodule level.\n\nA target occurring in a ``del`` statement is also considered bound for\nthis purpose (though the actual semantics are to unbind the name). It\nis illegal to unbind a name that is referenced by an enclosing scope;\nthe compiler will report a ``SyntaxError``.\n\nEach assignment or import statement occurs within a block defined by a\nclass or function definition or at the module level (the top-level\ncode block).\n\nIf a name binding operation occurs anywhere within a code block, all\nuses of the name within the block are treated as references to the\ncurrent block. This can lead to errors when a name is used within a\nblock before it is bound. This rule is subtle. Python lacks\ndeclarations and allows name binding operations to occur anywhere\nwithin a code block. The local variables of a code block can be\ndetermined by scanning the entire text of the block for name binding\noperations.\n\nIf the global statement occurs within a block, all uses of the name\nspecified in the statement refer to the binding of that name in the\ntop-level namespace. Names are resolved in the top-level namespace by\nsearching the global namespace, i.e. the namespace of the module\ncontaining the code block, and the builtins namespace, the namespace\nof the module ``__builtin__``. The global namespace is searched\nfirst. If the name is not found there, the builtins namespace is\nsearched. The global statement must precede all uses of the name.\n\nThe builtins namespace associated with the execution of a code block\nis actually found by looking up the name ``__builtins__`` in its\nglobal namespace; this should be a dictionary or a module (in the\nlatter case the module\'s dictionary is used). By default, when in the\n``__main__`` module, ``__builtins__`` is the built-in module\n``__builtin__`` (note: no \'s\'); when in any other module,\n``__builtins__`` is an alias for the dictionary of the ``__builtin__``\nmodule itself. 
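As a small illustration of the ``global`` statement described above (a
sketch; the names are arbitrary):

   counter = 0

   def bump():
       global counter            # refers to the module-level binding
       counter = counter + 1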
``__builtins__`` can be set to a user-created\ndictionary to create a weak form of restricted execution.\n\n**CPython implementation detail:** Users should not touch\n``__builtins__``; it is strictly an implementation detail. Users\nwanting to override values in the builtins namespace should ``import``\nthe ``__builtin__`` (no \'s\') module and modify its attributes\nappropriately.\n\nThe namespace for a module is automatically created the first time a\nmodule is imported. The main module for a script is always called\n``__main__``.\n\nThe ``global`` statement has the same scope as a name binding\noperation in the same block. If the nearest enclosing scope for a\nfree variable contains a global statement, the free variable is\ntreated as a global.\n\nA class definition is an executable statement that may use and define\nnames. These references follow the normal rules for name resolution.\nThe namespace of the class definition becomes the attribute dictionary\nof the class. Names defined at the class scope are not visible in\nmethods.\n\n\nInteraction with dynamic features\n---------------------------------\n\nThere are several cases where Python statements are illegal when used\nin conjunction with nested scopes that contain free variables.\n\nIf a variable is referenced in an enclosing scope, it is illegal to\ndelete the name. An error will be reported at compile time.\n\nIf the wild card form of import --- ``import *`` --- is used in a\nfunction and the function contains or is a nested block with free\nvariables, the compiler will raise a ``SyntaxError``.\n\nIf ``exec`` is used in a function and the function contains or is a\nnested block with free variables, the compiler will raise a\n``SyntaxError`` unless the exec explicitly specifies the local\nnamespace for the ``exec``. (In other words, ``exec obj`` would be\nillegal, but ``exec obj in ns`` would be legal.)\n\nThe ``eval()``, ``execfile()``, and ``input()`` functions and the\n``exec`` statement do not have access to the full environment for\nresolving names. Names may be resolved in the local and global\nnamespaces of the caller. Free variables are not resolved in the\nnearest enclosing namespace, but in the global namespace. [1] The\n``exec`` statement and the ``eval()`` and ``execfile()`` functions\nhave optional arguments to override the global and local namespace.\nIf only one namespace is specified, it is used for both.\n\n\nExceptions\n==========\n\nExceptions are a means of breaking out of the normal flow of control\nof a code block in order to handle errors or other exceptional\nconditions. An exception is *raised* at the point where the error is\ndetected; it may be *handled* by the surrounding code block or by any\ncode block that directly or indirectly invoked the code block where\nthe error occurred.\n\nThe Python interpreter raises an exception when it detects a run-time\nerror (such as division by zero). A Python program can also\nexplicitly raise an exception with the ``raise`` statement. Exception\nhandlers are specified with the ``try`` ... ``except`` statement. 
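For example (a sketch; the ``finally`` clause used here is described next):

   try:
       value = int('not a number')
   except ValueError as exc:   # the handler is selected by the exception's class
       value = 0
   finally:
       print 'done'            # runs whether or not an exception occurred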
The\n``finally`` clause of such a statement can be used to specify cleanup\ncode which does not handle the exception, but is executed whether an\nexception occurred or not in the preceding code.\n\nPython uses the "termination" model of error handling: an exception\nhandler can find out what happened and continue execution at an outer\nlevel, but it cannot repair the cause of the error and retry the\nfailing operation (except by re-entering the offending piece of code\nfrom the top).\n\nWhen an exception is not handled at all, the interpreter terminates\nexecution of the program, or returns to its interactive main loop. In\neither case, it prints a stack backtrace, except when the exception is\n``SystemExit``.\n\nExceptions are identified by class instances. The ``except`` clause\nis selected depending on the class of the instance: it must reference\nthe class of the instance or a base class thereof. The instance can\nbe received by the handler and can carry additional information about\nthe exceptional condition.\n\nExceptions can also be identified by strings, in which case the\n``except`` clause is selected by object identity. An arbitrary value\ncan be raised along with the identifying string which can be passed to\nthe handler.\n\nNote: Messages to exceptions are not part of the Python API. Their\n contents may change from one version of Python to the next without\n warning and should not be relied on by code which will run under\n multiple versions of the interpreter.\n\nSee also the description of the ``try`` statement in section *The try\nstatement* and ``raise`` statement in section *The raise statement*.\n\n-[ Footnotes ]-\n\n[1] This limitation occurs because the code that is executed by these\n operations is not available at the time the module is compiled.\n', 'exprlists': u'\nExpression lists\n****************\n\n expression_list ::= expression ( "," expression )* [","]\n\nAn expression list containing at least one comma yields a tuple. The\nlength of the tuple is the number of expressions in the list. The\nexpressions are evaluated from left to right.\n\nThe trailing comma is required only to create a single tuple (a.k.a. a\n*singleton*); it is optional in all other cases. A single expression\nwithout a trailing comma doesn\'t create a tuple, but rather yields the\nvalue of that expression. (To create an empty tuple, use an empty pair\nof parentheses: ``()``.)\n', 'floating': u'\nFloating point literals\n***********************\n\nFloating point literals are described by the following lexical\ndefinitions:\n\n floatnumber ::= pointfloat | exponentfloat\n pointfloat ::= [intpart] fraction | intpart "."\n exponentfloat ::= (intpart | pointfloat) exponent\n intpart ::= digit+\n fraction ::= "." digit+\n exponent ::= ("e" | "E") ["+" | "-"] digit+\n\nNote that the integer and exponent parts of floating point numbers can\nlook like octal integers, but are interpreted using radix 10. For\nexample, ``077e010`` is legal, and denotes the same number as\n``77e10``. The allowed range of floating point literals is\nimplementation-dependent. Some examples of floating point literals:\n\n 3.14 10. 
.001 1e100 3.14e-10 0e0\n\nNote that numeric literals do not include a sign; a phrase like ``-1``\nis actually an expression composed of the unary operator ``-`` and the\nliteral ``1``.\n', 'for': u'\nThe ``for`` statement\n*********************\n\nThe ``for`` statement is used to iterate over the elements of a\nsequence (such as a string, tuple or list) or other iterable object:\n\n for_stmt ::= "for" target_list "in" expression_list ":" suite\n ["else" ":" suite]\n\nThe expression list is evaluated once; it should yield an iterable\nobject. An iterator is created for the result of the\n``expression_list``. The suite is then executed once for each item\nprovided by the iterator, in the order of ascending indices. Each\nitem in turn is assigned to the target list using the standard rules\nfor assignments, and then the suite is executed. When the items are\nexhausted (which is immediately when the sequence is empty), the suite\nin the ``else`` clause, if present, is executed, and the loop\nterminates.\n\nA ``break`` statement executed in the first suite terminates the loop\nwithout executing the ``else`` clause\'s suite. A ``continue``\nstatement executed in the first suite skips the rest of the suite and\ncontinues with the next item, or with the ``else`` clause if there was\nno next item.\n\nThe suite may assign to the variable(s) in the target list; this does\nnot affect the next item assigned to it.\n\nThe target list is not deleted when the loop is finished, but if the\nsequence is empty, it will not have been assigned to at all by the\nloop. Hint: the built-in function ``range()`` returns a sequence of\nintegers suitable to emulate the effect of Pascal\'s ``for i := a to b\ndo``; e.g., ``range(3)`` returns the list ``[0, 1, 2]``.\n\nNote: There is a subtlety when the sequence is being modified by the loop\n (this can only occur for mutable sequences, i.e. lists). An internal\n counter is used to keep track of which item is used next, and this\n is incremented on each iteration. When this counter has reached the\n length of the sequence the loop terminates. This means that if the\n suite deletes the current (or a previous) item from the sequence,\n the next item will be skipped (since it gets the index of the\n current item which has already been treated). Likewise, if the\n suite inserts an item in the sequence before the current item, the\n current item will be treated again the next time through the loop.\n This can lead to nasty bugs that can be avoided by making a\n temporary copy using a slice of the whole sequence, e.g.,\n\n for x in a[:]:\n if x < 0: a.remove(x)\n', - 'formatstrings': u'\nFormat String Syntax\n********************\n\nThe ``str.format()`` method and the ``Formatter`` class share the same\nsyntax for format strings (although in the case of ``Formatter``,\nsubclasses can define their own format string syntax).\n\nFormat strings contain "replacement fields" surrounded by curly braces\n``{}``. Anything that is not contained in braces is considered literal\ntext, which is copied unchanged to the output. If you need to include\na brace character in the literal text, it can be escaped by doubling:\n``{{`` and ``}}``.\n\nThe grammar for a replacement field is as follows:\n\n replacement_field ::= "{" [field_name] ["!" conversion] [":" format_spec] "}"\n field_name ::= arg_name ("." 
attribute_name | "[" element_index "]")*\n arg_name ::= [identifier | integer]\n attribute_name ::= identifier\n element_index ::= integer | index_string\n index_string ::= +\n conversion ::= "r" | "s"\n format_spec ::= \n\nIn less formal terms, the replacement field can start with a\n*field_name* that specifies the object whose value is to be formatted\nand inserted into the output instead of the replacement field. The\n*field_name* is optionally followed by a *conversion* field, which is\npreceded by an exclamation point ``\'!\'``, and a *format_spec*, which\nis preceded by a colon ``\':\'``. These specify a non-default format\nfor the replacement value.\n\nSee also the *Format Specification Mini-Language* section.\n\nThe *field_name* itself begins with an *arg_name* that is either\neither a number or a keyword. If it\'s a number, it refers to a\npositional argument, and if it\'s a keyword, it refers to a named\nkeyword argument. If the numerical arg_names in a format string are\n0, 1, 2, ... in sequence, they can all be omitted (not just some) and\nthe numbers 0, 1, 2, ... will be automatically inserted in that order.\nThe *arg_name* can be followed by any number of index or attribute\nexpressions. An expression of the form ``\'.name\'`` selects the named\nattribute using ``getattr()``, while an expression of the form\n``\'[index]\'`` does an index lookup using ``__getitem__()``.\n\nChanged in version 2.7: The positional argument specifiers can be\nomitted, so ``\'{} {}\'`` is equivalent to ``\'{0} {1}\'``.\n\nSome simple format string examples:\n\n "First, thou shalt count to {0}" # References first positional argument\n "Bring me a {}" # Implicitly references the first positional argument\n "From {} to {}" # Same as "From {0} to {1}"\n "My quest is {name}" # References keyword argument \'name\'\n "Weight in tons {0.weight}" # \'weight\' attribute of first positional arg\n "Units destroyed: {players[0]}" # First element of keyword argument \'players\'.\n\nThe *conversion* field causes a type coercion before formatting.\nNormally, the job of formatting a value is done by the\n``__format__()`` method of the value itself. However, in some cases\nit is desirable to force a type to be formatted as a string,\noverriding its own definition of formatting. By converting the value\nto a string before calling ``__format__()``, the normal formatting\nlogic is bypassed.\n\nTwo conversion flags are currently supported: ``\'!s\'`` which calls\n``str()`` on the value, and ``\'!r\'`` which calls ``repr()``.\n\nSome examples:\n\n "Harold\'s a clever {0!s}" # Calls str() on the argument first\n "Bring out the holy {name!r}" # Calls repr() on the argument first\n\nThe *format_spec* field contains a specification of how the value\nshould be presented, including such details as field width, alignment,\npadding, decimal precision and so on. Each value type can define its\nown "formatting mini-language" or interpretation of the *format_spec*.\n\nMost built-in types support a common formatting mini-language, which\nis described in the next section.\n\nA *format_spec* field can also include nested replacement fields\nwithin it. These nested replacement fields can contain only a field\nname; conversion flags and format specifications are not allowed. The\nreplacement fields within the format_spec are substituted before the\n*format_spec* string is interpreted. 
This allows the formatting of a\nvalue to be dynamically specified.\n\nSee the *Format examples* section for some examples.\n\n\nFormat Specification Mini-Language\n==================================\n\n"Format specifications" are used within replacement fields contained\nwithin a format string to define how individual values are presented\n(see *Format String Syntax*). They can also be passed directly to the\nbuilt-in ``format()`` function. Each formattable type may define how\nthe format specification is to be interpreted.\n\nMost built-in types implement the following options for format\nspecifications, although some of the formatting options are only\nsupported by the numeric types.\n\nA general convention is that an empty format string (``""``) produces\nthe same result as if you had called ``str()`` on the value. A non-\nempty format string typically modifies the result.\n\nThe general form of a *standard format specifier* is:\n\n format_spec ::= [[fill]align][sign][#][0][width][,][.precision][type]\n fill ::= \n align ::= "<" | ">" | "=" | "^"\n sign ::= "+" | "-" | " "\n width ::= integer\n precision ::= integer\n type ::= "b" | "c" | "d" | "e" | "E" | "f" | "F" | "g" | "G" | "n" | "o" | "s" | "x" | "X" | "%"\n\nThe *fill* character can be any character other than \'}\' (which\nsignifies the end of the field). The presence of a fill character is\nsignaled by the *next* character, which must be one of the alignment\noptions. If the second character of *format_spec* is not a valid\nalignment option, then it is assumed that both the fill character and\nthe alignment option are absent.\n\nThe meaning of the various alignment options is as follows:\n\n +-----------+------------------------------------------------------------+\n | Option | Meaning |\n +===========+============================================================+\n | ``\'<\'`` | Forces the field to be left-aligned within the available |\n | | space (this is the default). |\n +-----------+------------------------------------------------------------+\n | ``\'>\'`` | Forces the field to be right-aligned within the available |\n | | space. |\n +-----------+------------------------------------------------------------+\n | ``\'=\'`` | Forces the padding to be placed after the sign (if any) |\n | | but before the digits. This is used for printing fields |\n | | in the form \'+000000120\'. This alignment option is only |\n | | valid for numeric types. |\n +-----------+------------------------------------------------------------+\n | ``\'^\'`` | Forces the field to be centered within the available |\n | | space. |\n +-----------+------------------------------------------------------------+\n\nNote that unless a minimum field width is defined, the field width\nwill always be the same size as the data to fill it, so that the\nalignment option has no meaning in this case.\n\nThe *sign* option is only valid for number types, and can be one of\nthe following:\n\n +-----------+------------------------------------------------------------+\n | Option | Meaning |\n +===========+============================================================+\n | ``\'+\'`` | indicates that a sign should be used for both positive as |\n | | well as negative numbers. |\n +-----------+------------------------------------------------------------+\n | ``\'-\'`` | indicates that a sign should be used only for negative |\n | | numbers (this is the default behavior). 
|\n +-----------+------------------------------------------------------------+\n | space | indicates that a leading space should be used on positive |\n | | numbers, and a minus sign on negative numbers. |\n +-----------+------------------------------------------------------------+\n\nThe ``\'#\'`` option is only valid for integers, and only for binary,\noctal, or hexadecimal output. If present, it specifies that the\noutput will be prefixed by ``\'0b\'``, ``\'0o\'``, or ``\'0x\'``,\nrespectively.\n\nThe ``\',\'`` option signals the use of a comma for a thousands\nseparator. For a locale aware separator, use the ``\'n\'`` integer\npresentation type instead.\n\nChanged in version 2.7: Added the ``\',\'`` option (see also **PEP\n378**).\n\n*width* is a decimal integer defining the minimum field width. If not\nspecified, then the field width will be determined by the content.\n\nIf the *width* field is preceded by a zero (``\'0\'``) character, this\nenables zero-padding. This is equivalent to an *alignment* type of\n``\'=\'`` and a *fill* character of ``\'0\'``.\n\nThe *precision* is a decimal number indicating how many digits should\nbe displayed after the decimal point for a floating point value\nformatted with ``\'f\'`` and ``\'F\'``, or before and after the decimal\npoint for a floating point value formatted with ``\'g\'`` or ``\'G\'``.\nFor non-number types the field indicates the maximum field size - in\nother words, how many characters will be used from the field content.\nThe *precision* is not allowed for integer values.\n\nFinally, the *type* determines how the data should be presented.\n\nThe available string presentation types are:\n\n +-----------+------------------------------------------------------------+\n | Type | Meaning |\n +===========+============================================================+\n | ``\'s\'`` | String format. This is the default type for strings and |\n | | may be omitted. |\n +-----------+------------------------------------------------------------+\n | None | The same as ``\'s\'``. |\n +-----------+------------------------------------------------------------+\n\nThe available integer presentation types are:\n\n +-----------+------------------------------------------------------------+\n | Type | Meaning |\n +===========+============================================================+\n | ``\'b\'`` | Binary format. Outputs the number in base 2. |\n +-----------+------------------------------------------------------------+\n | ``\'c\'`` | Character. Converts the integer to the corresponding |\n | | unicode character before printing. |\n +-----------+------------------------------------------------------------+\n | ``\'d\'`` | Decimal Integer. Outputs the number in base 10. |\n +-----------+------------------------------------------------------------+\n | ``\'o\'`` | Octal format. Outputs the number in base 8. |\n +-----------+------------------------------------------------------------+\n | ``\'x\'`` | Hex format. Outputs the number in base 16, using lower- |\n | | case letters for the digits above 9. |\n +-----------+------------------------------------------------------------+\n | ``\'X\'`` | Hex format. Outputs the number in base 16, using upper- |\n | | case letters for the digits above 9. |\n +-----------+------------------------------------------------------------+\n | ``\'n\'`` | Number. This is the same as ``\'d\'``, except that it uses |\n | | the current locale setting to insert the appropriate |\n | | number separator characters. 
|\n +-----------+------------------------------------------------------------+\n | None | The same as ``\'d\'``. |\n +-----------+------------------------------------------------------------+\n\nIn addition to the above presentation types, integers can be formatted\nwith the floating point presentation types listed below (except\n``\'n\'`` and None). When doing so, ``float()`` is used to convert the\ninteger to a floating point number before formatting.\n\nThe available presentation types for floating point and decimal values\nare:\n\n +-----------+------------------------------------------------------------+\n | Type | Meaning |\n +===========+============================================================+\n | ``\'e\'`` | Exponent notation. Prints the number in scientific |\n | | notation using the letter \'e\' to indicate the exponent. |\n +-----------+------------------------------------------------------------+\n | ``\'E\'`` | Exponent notation. Same as ``\'e\'`` except it uses an upper |\n | | case \'E\' as the separator character. |\n +-----------+------------------------------------------------------------+\n | ``\'f\'`` | Fixed point. Displays the number as a fixed-point number. |\n +-----------+------------------------------------------------------------+\n | ``\'F\'`` | Fixed point. Same as ``\'f\'``. |\n +-----------+------------------------------------------------------------+\n | ``\'g\'`` | General format. For a given precision ``p >= 1``, this |\n | | rounds the number to ``p`` significant digits and then |\n | | formats the result in either fixed-point format or in |\n | | scientific notation, depending on its magnitude. The |\n | | precise rules are as follows: suppose that the result |\n | | formatted with presentation type ``\'e\'`` and precision |\n | | ``p-1`` would have exponent ``exp``. Then if ``-4 <= exp |\n | | < p``, the number is formatted with presentation type |\n | | ``\'f\'`` and precision ``p-1-exp``. Otherwise, the number |\n | | is formatted with presentation type ``\'e\'`` and precision |\n | | ``p-1``. In both cases insignificant trailing zeros are |\n | | removed from the significand, and the decimal point is |\n | | also removed if there are no remaining digits following |\n | | it. Postive and negative infinity, positive and negative |\n | | zero, and nans, are formatted as ``inf``, ``-inf``, ``0``, |\n | | ``-0`` and ``nan`` respectively, regardless of the |\n | | precision. A precision of ``0`` is treated as equivalent |\n | | to a precision of ``1``. |\n +-----------+------------------------------------------------------------+\n | ``\'G\'`` | General format. Same as ``\'g\'`` except switches to ``\'E\'`` |\n | | if the number gets too large. The representations of |\n | | infinity and NaN are uppercased, too. |\n +-----------+------------------------------------------------------------+\n | ``\'n\'`` | Number. This is the same as ``\'g\'``, except that it uses |\n | | the current locale setting to insert the appropriate |\n | | number separator characters. |\n +-----------+------------------------------------------------------------+\n | ``\'%\'`` | Percentage. Multiplies the number by 100 and displays in |\n | | fixed (``\'f\'``) format, followed by a percent sign. |\n +-----------+------------------------------------------------------------+\n | None | The same as ``\'g\'``. 
|\n +-----------+------------------------------------------------------------+\n\n\nFormat examples\n===============\n\nThis section contains examples of the new format syntax and comparison\nwith the old ``%``-formatting.\n\nIn most of the cases the syntax is similar to the old\n``%``-formatting, with the addition of the ``{}`` and with ``:`` used\ninstead of ``%``. For example, ``\'%03.2f\'`` can be translated to\n``\'{:03.2f}\'``.\n\nThe new format syntax also supports new and different options, shown\nin the follow examples.\n\nAccessing arguments by position:\n\n >>> \'{0}, {1}, {2}\'.format(\'a\', \'b\', \'c\')\n \'a, b, c\'\n >>> \'{}, {}, {}\'.format(\'a\', \'b\', \'c\') # 2.7+ only\n \'a, b, c\'\n >>> \'{2}, {1}, {0}\'.format(\'a\', \'b\', \'c\')\n \'c, b, a\'\n >>> \'{2}, {1}, {0}\'.format(*\'abc\') # unpacking argument sequence\n \'c, b, a\'\n >>> \'{0}{1}{0}\'.format(\'abra\', \'cad\') # arguments\' indices can be repeated\n \'abracadabra\'\n\nAccessing arguments by name:\n\n >>> \'Coordinates: {latitude}, {longitude}\'.format(latitude=\'37.24N\', longitude=\'-115.81W\')\n \'Coordinates: 37.24N, -115.81W\'\n >>> coord = {\'latitude\': \'37.24N\', \'longitude\': \'-115.81W\'}\n >>> \'Coordinates: {latitude}, {longitude}\'.format(**coord)\n \'Coordinates: 37.24N, -115.81W\'\n\nAccessing arguments\' attributes:\n\n >>> c = 3-5j\n >>> (\'The complex number {0} is formed from the real part {0.real} \'\n ... \'and the imaginary part {0.imag}.\').format(c)\n \'The complex number (3-5j) is formed from the real part 3.0 and the imaginary part -5.0.\'\n >>> class Point(object):\n ... def __init__(self, x, y):\n ... self.x, self.y = x, y\n ... def __str__(self):\n ... return \'Point({self.x}, {self.y})\'.format(self=self)\n ...\n >>> str(Point(4, 2))\n \'Point(4, 2)\'\n\nAccessing arguments\' items:\n\n >>> coord = (3, 5)\n >>> \'X: {0[0]}; Y: {0[1]}\'.format(coord)\n \'X: 3; Y: 5\'\n\nReplacing ``%s`` and ``%r``:\n\n >>> "repr() shows quotes: {!r}; str() doesn\'t: {!s}".format(\'test1\', \'test2\')\n "repr() shows quotes: \'test1\'; str() doesn\'t: test2"\n\nAligning the text and specifying a width:\n\n >>> \'{:<30}\'.format(\'left aligned\')\n \'left aligned \'\n >>> \'{:>30}\'.format(\'right aligned\')\n \' right aligned\'\n >>> \'{:^30}\'.format(\'centered\')\n \' centered \'\n >>> \'{:*^30}\'.format(\'centered\') # use \'*\' as a fill char\n \'***********centered***********\'\n\nReplacing ``%+f``, ``%-f``, and ``% f`` and specifying a sign:\n\n >>> \'{:+f}; {:+f}\'.format(3.14, -3.14) # show it always\n \'+3.140000; -3.140000\'\n >>> \'{: f}; {: f}\'.format(3.14, -3.14) # show a space for positive numbers\n \' 3.140000; -3.140000\'\n >>> \'{:-f}; {:-f}\'.format(3.14, -3.14) # show only the minus -- same as \'{:f}; {:f}\'\n \'3.140000; -3.140000\'\n\nReplacing ``%x`` and ``%o`` and converting the value to different\nbases:\n\n >>> # format also supports binary numbers\n >>> "int: {0:d}; hex: {0:x}; oct: {0:o}; bin: {0:b}".format(42)\n \'int: 42; hex: 2a; oct: 52; bin: 101010\'\n >>> # with 0x, 0o, or 0b as prefix:\n >>> "int: {0:d}; hex: {0:#x}; oct: {0:#o}; bin: {0:#b}".format(42)\n \'int: 42; hex: 0x2a; oct: 0o52; bin: 0b101010\'\n\nUsing the comma as a thousands separator:\n\n >>> \'{:,}\'.format(1234567890)\n \'1,234,567,890\'\n\nExpressing a percentage:\n\n >>> points = 19.5\n >>> total = 22\n >>> \'Correct answers: {:.2%}.\'.format(points/total)\n \'Correct answers: 88.64%\'\n\nUsing type-specific formatting:\n\n >>> import datetime\n >>> d = datetime.datetime(2010, 7, 4, 12, 
15, 58)\n >>> \'{:%Y-%m-%d %H:%M:%S}\'.format(d)\n \'2010-07-04 12:15:58\'\n\nNesting arguments and more complex examples:\n\n >>> for align, text in zip(\'<^>\', [\'left\', \'center\', \'right\']):\n ... \'{0:{align}{fill}16}\'.format(text, fill=align, align=align)\n ...\n \'left<<<<<<<<<<<<\'\n \'^^^^^center^^^^^\'\n \'>>>>>>>>>>>right\'\n >>>\n >>> octets = [192, 168, 0, 1]\n >>> \'{:02X}{:02X}{:02X}{:02X}\'.format(*octets)\n \'C0A80001\'\n >>> int(_, 16)\n 3232235521\n >>>\n >>> width = 5\n >>> for num in range(5,12):\n ... for base in \'dXob\':\n ... print \'{0:{width}{base}}\'.format(num, base=base, width=width),\n ... print\n ...\n 5 5 5 101\n 6 6 6 110\n 7 7 7 111\n 8 8 10 1000\n 9 9 11 1001\n 10 A 12 1010\n 11 B 13 1011\n', + 'formatstrings': u'\nFormat String Syntax\n********************\n\nThe ``str.format()`` method and the ``Formatter`` class share the same\nsyntax for format strings (although in the case of ``Formatter``,\nsubclasses can define their own format string syntax).\n\nFormat strings contain "replacement fields" surrounded by curly braces\n``{}``. Anything that is not contained in braces is considered literal\ntext, which is copied unchanged to the output. If you need to include\na brace character in the literal text, it can be escaped by doubling:\n``{{`` and ``}}``.\n\nThe grammar for a replacement field is as follows:\n\n replacement_field ::= "{" [field_name] ["!" conversion] [":" format_spec] "}"\n field_name ::= arg_name ("." attribute_name | "[" element_index "]")*\n arg_name ::= [identifier | integer]\n attribute_name ::= identifier\n element_index ::= integer | index_string\n index_string ::= +\n conversion ::= "r" | "s"\n format_spec ::= \n\nIn less formal terms, the replacement field can start with a\n*field_name* that specifies the object whose value is to be formatted\nand inserted into the output instead of the replacement field. The\n*field_name* is optionally followed by a *conversion* field, which is\npreceded by an exclamation point ``\'!\'``, and a *format_spec*, which\nis preceded by a colon ``\':\'``. These specify a non-default format\nfor the replacement value.\n\nSee also the *Format Specification Mini-Language* section.\n\nThe *field_name* itself begins with an *arg_name* that is either\neither a number or a keyword. If it\'s a number, it refers to a\npositional argument, and if it\'s a keyword, it refers to a named\nkeyword argument. If the numerical arg_names in a format string are\n0, 1, 2, ... in sequence, they can all be omitted (not just some) and\nthe numbers 0, 1, 2, ... will be automatically inserted in that order.\nThe *arg_name* can be followed by any number of index or attribute\nexpressions. 
An expression of the form ``\'.name\'`` selects the named\nattribute using ``getattr()``, while an expression of the form\n``\'[index]\'`` does an index lookup using ``__getitem__()``.\n\nChanged in version 2.7: The positional argument specifiers can be\nomitted, so ``\'{} {}\'`` is equivalent to ``\'{0} {1}\'``.\n\nSome simple format string examples:\n\n "First, thou shalt count to {0}" # References first positional argument\n "Bring me a {}" # Implicitly references the first positional argument\n "From {} to {}" # Same as "From {0} to {1}"\n "My quest is {name}" # References keyword argument \'name\'\n "Weight in tons {0.weight}" # \'weight\' attribute of first positional arg\n "Units destroyed: {players[0]}" # First element of keyword argument \'players\'.\n\nThe *conversion* field causes a type coercion before formatting.\nNormally, the job of formatting a value is done by the\n``__format__()`` method of the value itself. However, in some cases\nit is desirable to force a type to be formatted as a string,\noverriding its own definition of formatting. By converting the value\nto a string before calling ``__format__()``, the normal formatting\nlogic is bypassed.\n\nTwo conversion flags are currently supported: ``\'!s\'`` which calls\n``str()`` on the value, and ``\'!r\'`` which calls ``repr()``.\n\nSome examples:\n\n "Harold\'s a clever {0!s}" # Calls str() on the argument first\n "Bring out the holy {name!r}" # Calls repr() on the argument first\n\nThe *format_spec* field contains a specification of how the value\nshould be presented, including such details as field width, alignment,\npadding, decimal precision and so on. Each value type can define its\nown "formatting mini-language" or interpretation of the *format_spec*.\n\nMost built-in types support a common formatting mini-language, which\nis described in the next section.\n\nA *format_spec* field can also include nested replacement fields\nwithin it. These nested replacement fields can contain only a field\nname; conversion flags and format specifications are not allowed. The\nreplacement fields within the format_spec are substituted before the\n*format_spec* string is interpreted. This allows the formatting of a\nvalue to be dynamically specified.\n\nSee the *Format examples* section for some examples.\n\n\nFormat Specification Mini-Language\n==================================\n\n"Format specifications" are used within replacement fields contained\nwithin a format string to define how individual values are presented\n(see *Format String Syntax*). They can also be passed directly to the\nbuilt-in ``format()`` function. Each formattable type may define how\nthe format specification is to be interpreted.\n\nMost built-in types implement the following options for format\nspecifications, although some of the formatting options are only\nsupported by the numeric types.\n\nA general convention is that an empty format string (``""``) produces\nthe same result as if you had called ``str()`` on the value. A non-\nempty format string typically modifies the result.\n\nThe general form of a *standard format specifier* is:\n\n format_spec ::= [[fill]align][sign][#][0][width][,][.precision][type]\n fill ::= \n align ::= "<" | ">" | "=" | "^"\n sign ::= "+" | "-" | " "\n width ::= integer\n precision ::= integer\n type ::= "b" | "c" | "d" | "e" | "E" | "f" | "F" | "g" | "G" | "n" | "o" | "s" | "x" | "X" | "%"\n\nThe *fill* character can be any character other than \'{\' or \'}\'. 
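For instance, putting the pieces of the specifier grammar above together (a quick illustrative sketch; the particular values and fill characters are arbitrary, not taken from the reference text):

   >>> format('pypy', '*^10')                # fill '*', align '^', width 10
   '***pypy***'
   >>> format(-3, '0=6')                     # fill '0' placed after the sign
   '-00003'
   >>> '{:*>+15,.2f}'.format(1234.5678)      # fill, align, sign, width, ',', precision, type
   '******+1,234.57'
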
The\npresence of a fill character is signaled by the character following\nit, which must be one of the alignment options. If the second\ncharacter of *format_spec* is not a valid alignment option, then it is\nassumed that both the fill character and the alignment option are\nabsent.\n\nThe meaning of the various alignment options is as follows:\n\n +-----------+------------------------------------------------------------+\n | Option | Meaning |\n +===========+============================================================+\n | ``\'<\'`` | Forces the field to be left-aligned within the available |\n | | space (this is the default for most objects). |\n +-----------+------------------------------------------------------------+\n | ``\'>\'`` | Forces the field to be right-aligned within the available |\n | | space (this is the default for numbers). |\n +-----------+------------------------------------------------------------+\n | ``\'=\'`` | Forces the padding to be placed after the sign (if any) |\n | | but before the digits. This is used for printing fields |\n | | in the form \'+000000120\'. This alignment option is only |\n | | valid for numeric types. |\n +-----------+------------------------------------------------------------+\n | ``\'^\'`` | Forces the field to be centered within the available |\n | | space. |\n +-----------+------------------------------------------------------------+\n\nNote that unless a minimum field width is defined, the field width\nwill always be the same size as the data to fill it, so that the\nalignment option has no meaning in this case.\n\nThe *sign* option is only valid for number types, and can be one of\nthe following:\n\n +-----------+------------------------------------------------------------+\n | Option | Meaning |\n +===========+============================================================+\n | ``\'+\'`` | indicates that a sign should be used for both positive as |\n | | well as negative numbers. |\n +-----------+------------------------------------------------------------+\n | ``\'-\'`` | indicates that a sign should be used only for negative |\n | | numbers (this is the default behavior). |\n +-----------+------------------------------------------------------------+\n | space | indicates that a leading space should be used on positive |\n | | numbers, and a minus sign on negative numbers. |\n +-----------+------------------------------------------------------------+\n\nThe ``\'#\'`` option is only valid for integers, and only for binary,\noctal, or hexadecimal output. If present, it specifies that the\noutput will be prefixed by ``\'0b\'``, ``\'0o\'``, or ``\'0x\'``,\nrespectively.\n\nThe ``\',\'`` option signals the use of a comma for a thousands\nseparator. For a locale aware separator, use the ``\'n\'`` integer\npresentation type instead.\n\nChanged in version 2.7: Added the ``\',\'`` option (see also **PEP\n378**).\n\n*width* is a decimal integer defining the minimum field width. If not\nspecified, then the field width will be determined by the content.\n\nIf the *width* field is preceded by a zero (``\'0\'``) character, this\nenables zero-padding. 
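For example, a minimal sketch of zero-padding (arbitrary values):

   >>> format(42, '08d')
   '00000042'
   >>> '{:08.3f}'.format(-3.14159)       # the zero fill goes after the sign
   '-003.142'
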
This is equivalent to an *alignment* type of\n``\'=\'`` and a *fill* character of ``\'0\'``.\n\nThe *precision* is a decimal number indicating how many digits should\nbe displayed after the decimal point for a floating point value\nformatted with ``\'f\'`` and ``\'F\'``, or before and after the decimal\npoint for a floating point value formatted with ``\'g\'`` or ``\'G\'``.\nFor non-number types the field indicates the maximum field size - in\nother words, how many characters will be used from the field content.\nThe *precision* is not allowed for integer values.\n\nFinally, the *type* determines how the data should be presented.\n\nThe available string presentation types are:\n\n +-----------+------------------------------------------------------------+\n | Type | Meaning |\n +===========+============================================================+\n | ``\'s\'`` | String format. This is the default type for strings and |\n | | may be omitted. |\n +-----------+------------------------------------------------------------+\n | None | The same as ``\'s\'``. |\n +-----------+------------------------------------------------------------+\n\nThe available integer presentation types are:\n\n +-----------+------------------------------------------------------------+\n | Type | Meaning |\n +===========+============================================================+\n | ``\'b\'`` | Binary format. Outputs the number in base 2. |\n +-----------+------------------------------------------------------------+\n | ``\'c\'`` | Character. Converts the integer to the corresponding |\n | | unicode character before printing. |\n +-----------+------------------------------------------------------------+\n | ``\'d\'`` | Decimal Integer. Outputs the number in base 10. |\n +-----------+------------------------------------------------------------+\n | ``\'o\'`` | Octal format. Outputs the number in base 8. |\n +-----------+------------------------------------------------------------+\n | ``\'x\'`` | Hex format. Outputs the number in base 16, using lower- |\n | | case letters for the digits above 9. |\n +-----------+------------------------------------------------------------+\n | ``\'X\'`` | Hex format. Outputs the number in base 16, using upper- |\n | | case letters for the digits above 9. |\n +-----------+------------------------------------------------------------+\n | ``\'n\'`` | Number. This is the same as ``\'d\'``, except that it uses |\n | | the current locale setting to insert the appropriate |\n | | number separator characters. |\n +-----------+------------------------------------------------------------+\n | None | The same as ``\'d\'``. |\n +-----------+------------------------------------------------------------+\n\nIn addition to the above presentation types, integers can be formatted\nwith the floating point presentation types listed below (except\n``\'n\'`` and None). When doing so, ``float()`` is used to convert the\ninteger to a floating point number before formatting.\n\nThe available presentation types for floating point and decimal values\nare:\n\n +-----------+------------------------------------------------------------+\n | Type | Meaning |\n +===========+============================================================+\n | ``\'e\'`` | Exponent notation. Prints the number in scientific |\n | | notation using the letter \'e\' to indicate the exponent. |\n +-----------+------------------------------------------------------------+\n | ``\'E\'`` | Exponent notation. 
Same as ``\'e\'`` except it uses an upper |\n | | case \'E\' as the separator character. |\n +-----------+------------------------------------------------------------+\n | ``\'f\'`` | Fixed point. Displays the number as a fixed-point number. |\n +-----------+------------------------------------------------------------+\n | ``\'F\'`` | Fixed point. Same as ``\'f\'``. |\n +-----------+------------------------------------------------------------+\n | ``\'g\'`` | General format. For a given precision ``p >= 1``, this |\n | | rounds the number to ``p`` significant digits and then |\n | | formats the result in either fixed-point format or in |\n | | scientific notation, depending on its magnitude. The |\n | | precise rules are as follows: suppose that the result |\n | | formatted with presentation type ``\'e\'`` and precision |\n | | ``p-1`` would have exponent ``exp``. Then if ``-4 <= exp |\n | | < p``, the number is formatted with presentation type |\n | | ``\'f\'`` and precision ``p-1-exp``. Otherwise, the number |\n | | is formatted with presentation type ``\'e\'`` and precision |\n | | ``p-1``. In both cases insignificant trailing zeros are |\n | | removed from the significand, and the decimal point is |\n | | also removed if there are no remaining digits following |\n | | it. Positive and negative infinity, positive and negative |\n | | zero, and nans, are formatted as ``inf``, ``-inf``, ``0``, |\n | | ``-0`` and ``nan`` respectively, regardless of the |\n | | precision. A precision of ``0`` is treated as equivalent |\n | | to a precision of ``1``. |\n +-----------+------------------------------------------------------------+\n | ``\'G\'`` | General format. Same as ``\'g\'`` except switches to ``\'E\'`` |\n | | if the number gets too large. The representations of |\n | | infinity and NaN are uppercased, too. |\n +-----------+------------------------------------------------------------+\n | ``\'n\'`` | Number. This is the same as ``\'g\'``, except that it uses |\n | | the current locale setting to insert the appropriate |\n | | number separator characters. |\n +-----------+------------------------------------------------------------+\n | ``\'%\'`` | Percentage. Multiplies the number by 100 and displays in |\n | | fixed (``\'f\'``) format, followed by a percent sign. |\n +-----------+------------------------------------------------------------+\n | None | The same as ``\'g\'``. |\n +-----------+------------------------------------------------------------+\n\n\nFormat examples\n===============\n\nThis section contains examples of the new format syntax and comparison\nwith the old ``%``-formatting.\n\nIn most of the cases the syntax is similar to the old\n``%``-formatting, with the addition of the ``{}`` and with ``:`` used\ninstead of ``%``. 
For example, ``\'%03.2f\'`` can be translated to\n``\'{:03.2f}\'``.\n\nThe new format syntax also supports new and different options, shown\nin the follow examples.\n\nAccessing arguments by position:\n\n >>> \'{0}, {1}, {2}\'.format(\'a\', \'b\', \'c\')\n \'a, b, c\'\n >>> \'{}, {}, {}\'.format(\'a\', \'b\', \'c\') # 2.7+ only\n \'a, b, c\'\n >>> \'{2}, {1}, {0}\'.format(\'a\', \'b\', \'c\')\n \'c, b, a\'\n >>> \'{2}, {1}, {0}\'.format(*\'abc\') # unpacking argument sequence\n \'c, b, a\'\n >>> \'{0}{1}{0}\'.format(\'abra\', \'cad\') # arguments\' indices can be repeated\n \'abracadabra\'\n\nAccessing arguments by name:\n\n >>> \'Coordinates: {latitude}, {longitude}\'.format(latitude=\'37.24N\', longitude=\'-115.81W\')\n \'Coordinates: 37.24N, -115.81W\'\n >>> coord = {\'latitude\': \'37.24N\', \'longitude\': \'-115.81W\'}\n >>> \'Coordinates: {latitude}, {longitude}\'.format(**coord)\n \'Coordinates: 37.24N, -115.81W\'\n\nAccessing arguments\' attributes:\n\n >>> c = 3-5j\n >>> (\'The complex number {0} is formed from the real part {0.real} \'\n ... \'and the imaginary part {0.imag}.\').format(c)\n \'The complex number (3-5j) is formed from the real part 3.0 and the imaginary part -5.0.\'\n >>> class Point(object):\n ... def __init__(self, x, y):\n ... self.x, self.y = x, y\n ... def __str__(self):\n ... return \'Point({self.x}, {self.y})\'.format(self=self)\n ...\n >>> str(Point(4, 2))\n \'Point(4, 2)\'\n\nAccessing arguments\' items:\n\n >>> coord = (3, 5)\n >>> \'X: {0[0]}; Y: {0[1]}\'.format(coord)\n \'X: 3; Y: 5\'\n\nReplacing ``%s`` and ``%r``:\n\n >>> "repr() shows quotes: {!r}; str() doesn\'t: {!s}".format(\'test1\', \'test2\')\n "repr() shows quotes: \'test1\'; str() doesn\'t: test2"\n\nAligning the text and specifying a width:\n\n >>> \'{:<30}\'.format(\'left aligned\')\n \'left aligned \'\n >>> \'{:>30}\'.format(\'right aligned\')\n \' right aligned\'\n >>> \'{:^30}\'.format(\'centered\')\n \' centered \'\n >>> \'{:*^30}\'.format(\'centered\') # use \'*\' as a fill char\n \'***********centered***********\'\n\nReplacing ``%+f``, ``%-f``, and ``% f`` and specifying a sign:\n\n >>> \'{:+f}; {:+f}\'.format(3.14, -3.14) # show it always\n \'+3.140000; -3.140000\'\n >>> \'{: f}; {: f}\'.format(3.14, -3.14) # show a space for positive numbers\n \' 3.140000; -3.140000\'\n >>> \'{:-f}; {:-f}\'.format(3.14, -3.14) # show only the minus -- same as \'{:f}; {:f}\'\n \'3.140000; -3.140000\'\n\nReplacing ``%x`` and ``%o`` and converting the value to different\nbases:\n\n >>> # format also supports binary numbers\n >>> "int: {0:d}; hex: {0:x}; oct: {0:o}; bin: {0:b}".format(42)\n \'int: 42; hex: 2a; oct: 52; bin: 101010\'\n >>> # with 0x, 0o, or 0b as prefix:\n >>> "int: {0:d}; hex: {0:#x}; oct: {0:#o}; bin: {0:#b}".format(42)\n \'int: 42; hex: 0x2a; oct: 0o52; bin: 0b101010\'\n\nUsing the comma as a thousands separator:\n\n >>> \'{:,}\'.format(1234567890)\n \'1,234,567,890\'\n\nExpressing a percentage:\n\n >>> points = 19.5\n >>> total = 22\n >>> \'Correct answers: {:.2%}.\'.format(points/total)\n \'Correct answers: 88.64%\'\n\nUsing type-specific formatting:\n\n >>> import datetime\n >>> d = datetime.datetime(2010, 7, 4, 12, 15, 58)\n >>> \'{:%Y-%m-%d %H:%M:%S}\'.format(d)\n \'2010-07-04 12:15:58\'\n\nNesting arguments and more complex examples:\n\n >>> for align, text in zip(\'<^>\', [\'left\', \'center\', \'right\']):\n ... 
\'{0:{fill}{align}16}\'.format(text, fill=align, align=align)\n ...\n \'left<<<<<<<<<<<<\'\n \'^^^^^center^^^^^\'\n \'>>>>>>>>>>>right\'\n >>>\n >>> octets = [192, 168, 0, 1]\n >>> \'{:02X}{:02X}{:02X}{:02X}\'.format(*octets)\n \'C0A80001\'\n >>> int(_, 16)\n 3232235521\n >>>\n >>> width = 5\n >>> for num in range(5,12):\n ... for base in \'dXob\':\n ... print \'{0:{width}{base}}\'.format(num, base=base, width=width),\n ... print\n ...\n 5 5 5 101\n 6 6 6 110\n 7 7 7 111\n 8 8 10 1000\n 9 9 11 1001\n 10 A 12 1010\n 11 B 13 1011\n', 'function': u'\nFunction definitions\n********************\n\nA function definition defines a user-defined function object (see\nsection *The standard type hierarchy*):\n\n decorated ::= decorators (classdef | funcdef)\n decorators ::= decorator+\n decorator ::= "@" dotted_name ["(" [argument_list [","]] ")"] NEWLINE\n funcdef ::= "def" funcname "(" [parameter_list] ")" ":" suite\n dotted_name ::= identifier ("." identifier)*\n parameter_list ::= (defparameter ",")*\n ( "*" identifier [, "**" identifier]\n | "**" identifier\n | defparameter [","] )\n defparameter ::= parameter ["=" expression]\n sublist ::= parameter ("," parameter)* [","]\n parameter ::= identifier | "(" sublist ")"\n funcname ::= identifier\n\nA function definition is an executable statement. Its execution binds\nthe function name in the current local namespace to a function object\n(a wrapper around the executable code for the function). This\nfunction object contains a reference to the current global namespace\nas the global namespace to be used when the function is called.\n\nThe function definition does not execute the function body; this gets\nexecuted only when the function is called. [3]\n\nA function definition may be wrapped by one or more *decorator*\nexpressions. Decorator expressions are evaluated when the function is\ndefined, in the scope that contains the function definition. The\nresult must be a callable, which is invoked with the function object\nas the only argument. The returned value is bound to the function name\ninstead of the function object. Multiple decorators are applied in\nnested fashion. For example, the following code:\n\n @f1(arg)\n @f2\n def func(): pass\n\nis equivalent to:\n\n def func(): pass\n func = f1(arg)(f2(func))\n\nWhen one or more top-level parameters have the form *parameter* ``=``\n*expression*, the function is said to have "default parameter values."\nFor a parameter with a default value, the corresponding argument may\nbe omitted from a call, in which case the parameter\'s default value is\nsubstituted. If a parameter has a default value, all following\nparameters must also have a default value --- this is a syntactic\nrestriction that is not expressed by the grammar.\n\n**Default parameter values are evaluated when the function definition\nis executed.** This means that the expression is evaluated once, when\nthe function is defined, and that that same "pre-computed" value is\nused for each call. This is especially important to understand when a\ndefault parameter is a mutable object, such as a list or a dictionary:\nif the function modifies the object (e.g. by appending an item to a\nlist), the default value is in effect modified. This is generally not\nwhat was intended. 
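A small, hypothetical sketch of that pitfall (the function and argument names below are invented for illustration):

   >>> def append_to(item, target=[]):       # the list is created once, when 'def' executes
   ...     target.append(item)
   ...     return target
   ...
   >>> append_to(1)
   [1]
   >>> append_to(2)                          # the same list object is reused
   [1, 2]
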
A way around this is to use ``None`` as the\ndefault, and explicitly test for it in the body of the function, e.g.:\n\n def whats_on_the_telly(penguin=None):\n if penguin is None:\n penguin = []\n penguin.append("property of the zoo")\n return penguin\n\nFunction call semantics are described in more detail in section\n*Calls*. A function call always assigns values to all parameters\nmentioned in the parameter list, either from position arguments, from\nkeyword arguments, or from default values. If the form\n"``*identifier``" is present, it is initialized to a tuple receiving\nany excess positional parameters, defaulting to the empty tuple. If\nthe form "``**identifier``" is present, it is initialized to a new\ndictionary receiving any excess keyword arguments, defaulting to a new\nempty dictionary.\n\nIt is also possible to create anonymous functions (functions not bound\nto a name), for immediate use in expressions. This uses lambda forms,\ndescribed in section *Lambdas*. Note that the lambda form is merely a\nshorthand for a simplified function definition; a function defined in\na "``def``" statement can be passed around or assigned to another name\njust like a function defined by a lambda form. The "``def``" form is\nactually more powerful since it allows the execution of multiple\nstatements.\n\n**Programmer\'s note:** Functions are first-class objects. A "``def``"\nform executed inside a function definition defines a local function\nthat can be returned or passed around. Free variables used in the\nnested function can access the local variables of the function\ncontaining the def. See section *Naming and binding* for details.\n', 'global': u'\nThe ``global`` statement\n************************\n\n global_stmt ::= "global" identifier ("," identifier)*\n\nThe ``global`` statement is a declaration which holds for the entire\ncurrent code block. It means that the listed identifiers are to be\ninterpreted as globals. It would be impossible to assign to a global\nvariable without ``global``, although free variables may refer to\nglobals without being declared global.\n\nNames listed in a ``global`` statement must not be used in the same\ncode block textually preceding that ``global`` statement.\n\nNames listed in a ``global`` statement must not be defined as formal\nparameters or in a ``for`` loop control target, ``class`` definition,\nfunction definition, or ``import`` statement.\n\n**CPython implementation detail:** The current implementation does not\nenforce the latter two restrictions, but programs should not abuse\nthis freedom, as future implementations may enforce them or silently\nchange the meaning of the program.\n\n**Programmer\'s note:** the ``global`` is a directive to the parser.\nIt applies only to code parsed at the same time as the ``global``\nstatement. In particular, a ``global`` statement contained in an\n``exec`` statement does not affect the code block *containing* the\n``exec`` statement, and code contained in an ``exec`` statement is\nunaffected by ``global`` statements in the code containing the\n``exec`` statement. The same applies to the ``eval()``,\n``execfile()`` and ``compile()`` functions.\n', - 'id-classes': u'\nReserved classes of identifiers\n*******************************\n\nCertain classes of identifiers (besides keywords) have special\nmeanings. These classes are identified by the patterns of leading and\ntrailing underscore characters:\n\n``_*``\n Not imported by ``from module import *``. 
The special identifier\n ``_`` is used in the interactive interpreter to store the result of\n the last evaluation; it is stored in the ``__builtin__`` module.\n When not in interactive mode, ``_`` has no special meaning and is\n not defined. See section *The import statement*.\n\n Note: The name ``_`` is often used in conjunction with\n internationalization; refer to the documentation for the\n ``gettext`` module for more information on this convention.\n\n``__*__``\n System-defined names. These names are defined by the interpreter\n and its implementation (including the standard library);\n applications should not expect to define additional names using\n this convention. The set of names of this class defined by Python\n may be extended in future versions. See section *Special method\n names*.\n\n``__*``\n Class-private names. Names in this category, when used within the\n context of a class definition, are re-written to use a mangled form\n to help avoid name clashes between "private" attributes of base and\n derived classes. See section *Identifiers (Names)*.\n', - 'identifiers': u'\nIdentifiers and keywords\n************************\n\nIdentifiers (also referred to as *names*) are described by the\nfollowing lexical definitions:\n\n identifier ::= (letter|"_") (letter | digit | "_")*\n letter ::= lowercase | uppercase\n lowercase ::= "a"..."z"\n uppercase ::= "A"..."Z"\n digit ::= "0"..."9"\n\nIdentifiers are unlimited in length. Case is significant.\n\n\nKeywords\n========\n\nThe following identifiers are used as reserved words, or *keywords* of\nthe language, and cannot be used as ordinary identifiers. They must\nbe spelled exactly as written here:\n\n and del from not while\n as elif global or with\n assert else if pass yield\n break except import print\n class exec in raise\n continue finally is return\n def for lambda try\n\nChanged in version 2.4: ``None`` became a constant and is now\nrecognized by the compiler as a name for the built-in object ``None``.\nAlthough it is not a keyword, you cannot assign a different object to\nit.\n\nChanged in version 2.5: Both ``as`` and ``with`` are only recognized\nwhen the ``with_statement`` future feature has been enabled. It will\nalways be enabled in Python 2.6. See section *The with statement* for\ndetails. Note that using ``as`` and ``with`` as identifiers will\nalways issue a warning, even when the ``with_statement`` future\ndirective is not in effect.\n\n\nReserved classes of identifiers\n===============================\n\nCertain classes of identifiers (besides keywords) have special\nmeanings. These classes are identified by the patterns of leading and\ntrailing underscore characters:\n\n``_*``\n Not imported by ``from module import *``. The special identifier\n ``_`` is used in the interactive interpreter to store the result of\n the last evaluation; it is stored in the ``__builtin__`` module.\n When not in interactive mode, ``_`` has no special meaning and is\n not defined. See section *The import statement*.\n\n Note: The name ``_`` is often used in conjunction with\n internationalization; refer to the documentation for the\n ``gettext`` module for more information on this convention.\n\n``__*__``\n System-defined names. These names are defined by the interpreter\n and its implementation (including the standard library);\n applications should not expect to define additional names using\n this convention. The set of names of this class defined by Python\n may be extended in future versions. 
See section *Special method\n names*.\n\n``__*``\n Class-private names. Names in this category, when used within the\n context of a class definition, are re-written to use a mangled form\n to help avoid name clashes between "private" attributes of base and\n derived classes. See section *Identifiers (Names)*.\n', + 'id-classes': u'\nReserved classes of identifiers\n*******************************\n\nCertain classes of identifiers (besides keywords) have special\nmeanings. These classes are identified by the patterns of leading and\ntrailing underscore characters:\n\n``_*``\n Not imported by ``from module import *``. The special identifier\n ``_`` is used in the interactive interpreter to store the result of\n the last evaluation; it is stored in the ``__builtin__`` module.\n When not in interactive mode, ``_`` has no special meaning and is\n not defined. See section *The import statement*.\n\n Note: The name ``_`` is often used in conjunction with\n internationalization; refer to the documentation for the\n ``gettext`` module for more information on this convention.\n\n``__*__``\n System-defined names. These names are defined by the interpreter\n and its implementation (including the standard library). Current\n system names are discussed in the *Special method names* section\n and elsewhere. More will likely be defined in future versions of\n Python. *Any* use of ``__*__`` names, in any context, that does\n not follow explicitly documented use, is subject to breakage\n without warning.\n\n``__*``\n Class-private names. Names in this category, when used within the\n context of a class definition, are re-written to use a mangled form\n to help avoid name clashes between "private" attributes of base and\n derived classes. See section *Identifiers (Names)*.\n', + 'identifiers': u'\nIdentifiers and keywords\n************************\n\nIdentifiers (also referred to as *names*) are described by the\nfollowing lexical definitions:\n\n identifier ::= (letter|"_") (letter | digit | "_")*\n letter ::= lowercase | uppercase\n lowercase ::= "a"..."z"\n uppercase ::= "A"..."Z"\n digit ::= "0"..."9"\n\nIdentifiers are unlimited in length. Case is significant.\n\n\nKeywords\n========\n\nThe following identifiers are used as reserved words, or *keywords* of\nthe language, and cannot be used as ordinary identifiers. They must\nbe spelled exactly as written here:\n\n and del from not while\n as elif global or with\n assert else if pass yield\n break except import print\n class exec in raise\n continue finally is return\n def for lambda try\n\nChanged in version 2.4: ``None`` became a constant and is now\nrecognized by the compiler as a name for the built-in object ``None``.\nAlthough it is not a keyword, you cannot assign a different object to\nit.\n\nChanged in version 2.5: Both ``as`` and ``with`` are only recognized\nwhen the ``with_statement`` future feature has been enabled. It will\nalways be enabled in Python 2.6. See section *The with statement* for\ndetails. Note that using ``as`` and ``with`` as identifiers will\nalways issue a warning, even when the ``with_statement`` future\ndirective is not in effect.\n\n\nReserved classes of identifiers\n===============================\n\nCertain classes of identifiers (besides keywords) have special\nmeanings. These classes are identified by the patterns of leading and\ntrailing underscore characters:\n\n``_*``\n Not imported by ``from module import *``. 
The special identifier\n ``_`` is used in the interactive interpreter to store the result of\n the last evaluation; it is stored in the ``__builtin__`` module.\n When not in interactive mode, ``_`` has no special meaning and is\n not defined. See section *The import statement*.\n\n Note: The name ``_`` is often used in conjunction with\n internationalization; refer to the documentation for the\n ``gettext`` module for more information on this convention.\n\n``__*__``\n System-defined names. These names are defined by the interpreter\n and its implementation (including the standard library). Current\n system names are discussed in the *Special method names* section\n and elsewhere. More will likely be defined in future versions of\n Python. *Any* use of ``__*__`` names, in any context, that does\n not follow explicitly documented use, is subject to breakage\n without warning.\n\n``__*``\n Class-private names. Names in this category, when used within the\n context of a class definition, are re-written to use a mangled form\n to help avoid name clashes between "private" attributes of base and\n derived classes. See section *Identifiers (Names)*.\n', 'if': u'\nThe ``if`` statement\n********************\n\nThe ``if`` statement is used for conditional execution:\n\n if_stmt ::= "if" expression ":" suite\n ( "elif" expression ":" suite )*\n ["else" ":" suite]\n\nIt selects exactly one of the suites by evaluating the expressions one\nby one until one is found to be true (see section *Boolean operations*\nfor the definition of true and false); then that suite is executed\n(and no other part of the ``if`` statement is executed or evaluated).\nIf all expressions are false, the suite of the ``else`` clause, if\npresent, is executed.\n', 'imaginary': u'\nImaginary literals\n******************\n\nImaginary literals are described by the following lexical definitions:\n\n imagnumber ::= (floatnumber | intpart) ("j" | "J")\n\nAn imaginary literal yields a complex number with a real part of 0.0.\nComplex numbers are represented as a pair of floating point numbers\nand have the same restrictions on their range. To create a complex\nnumber with a nonzero real part, add a floating point number to it,\ne.g., ``(3+4j)``. Some examples of imaginary literals:\n\n 3.14j 10.j 10j .001j 1e100j 3.14e-10j\n', - 'import': u'\nThe ``import`` statement\n************************\n\n import_stmt ::= "import" module ["as" name] ( "," module ["as" name] )*\n | "from" relative_module "import" identifier ["as" name]\n ( "," identifier ["as" name] )*\n | "from" relative_module "import" "(" identifier ["as" name]\n ( "," identifier ["as" name] )* [","] ")"\n | "from" module "import" "*"\n module ::= (identifier ".")* identifier\n relative_module ::= "."* module | "."+\n name ::= identifier\n\nImport statements are executed in two steps: (1) find a module, and\ninitialize it if necessary; (2) define a name or names in the local\nnamespace (of the scope where the ``import`` statement occurs). The\nstatement comes in two forms differing on whether it uses the ``from``\nkeyword. The first form (without ``from``) repeats these steps for\neach identifier in the list. The form with ``from`` performs step (1)\nonce, and then performs step (2) repeatedly.\n\nTo understand how step (1) occurs, one must first understand how\nPython handles hierarchical naming of modules. To help organize\nmodules and provide a hierarchy in naming, Python has a concept of\npackages. 
A package can contain other packages and modules while\nmodules cannot contain other modules or packages. From a file system\nperspective, packages are directories and modules are files. The\noriginal specification for packages is still available to read,\nalthough minor details have changed since the writing of that\ndocument.\n\nOnce the name of the module is known (unless otherwise specified, the\nterm "module" will refer to both packages and modules), searching for\nthe module or package can begin. The first place checked is\n``sys.modules``, the cache of all modules that have been imported\npreviously. If the module is found there then it is used in step (2)\nof import.\n\nIf the module is not found in the cache, then ``sys.meta_path`` is\nsearched (the specification for ``sys.meta_path`` can be found in\n**PEP 302**). The object is a list of *finder* objects which are\nqueried in order as to whether they know how to load the module by\ncalling their ``find_module()`` method with the name of the module. If\nthe module happens to be contained within a package (as denoted by the\nexistence of a dot in the name), then a second argument to\n``find_module()`` is given as the value of the ``__path__`` attribute\nfrom the parent package (everything up to the last dot in the name of\nthe module being imported). If a finder can find the module it returns\na *loader* (discussed later) or returns ``None``.\n\nIf none of the finders on ``sys.meta_path`` are able to find the\nmodule then some implicitly defined finders are queried.\nImplementations of Python vary in what implicit meta path finders are\ndefined. The one they all do define, though, is one that handles\n``sys.path_hooks``, ``sys.path_importer_cache``, and ``sys.path``.\n\nThe implicit finder searches for the requested module in the "paths"\nspecified in one of two places ("paths" do not have to be file system\npaths). If the module being imported is supposed to be contained\nwithin a package then the second argument passed to ``find_module()``,\n``__path__`` on the parent package, is used as the source of paths. If\nthe module is not contained in a package then ``sys.path`` is used as\nthe source of paths.\n\nOnce the source of paths is chosen it is iterated over to find a\nfinder that can handle that path. The dict at\n``sys.path_importer_cache`` caches finders for paths and is checked\nfor a finder. If the path does not have a finder cached then\n``sys.path_hooks`` is searched by calling each object in the list with\na single argument of the path, returning a finder or raises\n``ImportError``. If a finder is returned then it is cached in\n``sys.path_importer_cache`` and then used for that path entry. If no\nfinder can be found but the path exists then a value of ``None`` is\nstored in ``sys.path_importer_cache`` to signify that an implicit,\nfile-based finder that handles modules stored as individual files\nshould be used for that path. If the path does not exist then a finder\nwhich always returns ``None`` is placed in the cache for the path.\n\nIf no finder can find the module then ``ImportError`` is raised.\nOtherwise some finder returned a loader whose ``load_module()`` method\nis called with the name of the module to load (see **PEP 302** for the\noriginal definition of loaders). A loader has several responsibilities\nto perform on a module it loads. 
First, if the module already exists\nin ``sys.modules`` (a possibility if the loader is called outside of\nthe import machinery) then it is to use that module for initialization\nand not a new module. But if the module does not exist in\n``sys.modules`` then it is to be added to that dict before\ninitialization begins. If an error occurs during loading of the module\nand it was added to ``sys.modules`` it is to be removed from the dict.\nIf an error occurs but the module was already in ``sys.modules`` it is\nleft in the dict.\n\nThe loader must set several attributes on the module. ``__name__`` is\nto be set to the name of the module. ``__file__`` is to be the "path"\nto the file unless the module is built-in (and thus listed in\n``sys.builtin_module_names``) in which case the attribute is not set.\nIf what is being imported is a package then ``__path__`` is to be set\nto a list of paths to be searched when looking for modules and\npackages contained within the package being imported. ``__package__``\nis optional but should be set to the name of package that contains the\nmodule or package (the empty string is used for module not contained\nin a package). ``__loader__`` is also optional but should be set to\nthe loader object that is loading the module.\n\nIf an error occurs during loading then the loader raises\n``ImportError`` if some other exception is not already being\npropagated. Otherwise the loader returns the module that was loaded\nand initialized.\n\nWhen step (1) finishes without raising an exception, step (2) can\nbegin.\n\nThe first form of ``import`` statement binds the module name in the\nlocal namespace to the module object, and then goes on to import the\nnext identifier, if any. If the module name is followed by ``as``,\nthe name following ``as`` is used as the local name for the module.\n\nThe ``from`` form does not bind the module name: it goes through the\nlist of identifiers, looks each one of them up in the module found in\nstep (1), and binds the name in the local namespace to the object thus\nfound. As with the first form of ``import``, an alternate local name\ncan be supplied by specifying "``as`` localname". If a name is not\nfound, ``ImportError`` is raised. If the list of identifiers is\nreplaced by a star (``\'*\'``), all public names defined in the module\nare bound in the local namespace of the ``import`` statement..\n\nThe *public names* defined by a module are determined by checking the\nmodule\'s namespace for a variable named ``__all__``; if defined, it\nmust be a sequence of strings which are names defined or imported by\nthat module. The names given in ``__all__`` are all considered public\nand are required to exist. If ``__all__`` is not defined, the set of\npublic names includes all names found in the module\'s namespace which\ndo not begin with an underscore character (``\'_\'``). ``__all__``\nshould contain the entire public API. It is intended to avoid\naccidentally exporting items that are not part of the API (such as\nlibrary modules which were imported and used within the module).\n\nThe ``from`` form with ``*`` may only occur in a module scope. If the\nwild card form of import --- ``import *`` --- is used in a function\nand the function contains or is a nested block with free variables,\nthe compiler will raise a ``SyntaxError``.\n\nWhen specifying what module to import you do not have to specify the\nabsolute name of the module. 
When a module or package is contained\nwithin another package it is possible to make a relative import within\nthe same top package without having to mention the package name. By\nusing leading dots in the specified module or package after ``from``\nyou can specify how high to traverse up the current package hierarchy\nwithout specifying exact names. One leading dot means the current\npackage where the module making the import exists. Two dots means up\none package level. Three dots is up two levels, etc. So if you execute\n``from . import mod`` from a module in the ``pkg`` package then you\nwill end up importing ``pkg.mod``. If you execute ``from ..subpkg2\nimprt mod`` from within ``pkg.subpkg1`` you will import\n``pkg.subpkg2.mod``. The specification for relative imports is\ncontained within **PEP 328**.\n\n``importlib.import_module()`` is provided to support applications that\ndetermine which modules need to be loaded dynamically.\n\n\nFuture statements\n=================\n\nA *future statement* is a directive to the compiler that a particular\nmodule should be compiled using syntax or semantics that will be\navailable in a specified future release of Python. The future\nstatement is intended to ease migration to future versions of Python\nthat introduce incompatible changes to the language. It allows use of\nthe new features on a per-module basis before the release in which the\nfeature becomes standard.\n\n future_statement ::= "from" "__future__" "import" feature ["as" name]\n ("," feature ["as" name])*\n | "from" "__future__" "import" "(" feature ["as" name]\n ("," feature ["as" name])* [","] ")"\n feature ::= identifier\n name ::= identifier\n\nA future statement must appear near the top of the module. The only\nlines that can appear before a future statement are:\n\n* the module docstring (if any),\n\n* comments,\n\n* blank lines, and\n\n* other future statements.\n\nThe features recognized by Python 2.6 are ``unicode_literals``,\n``print_function``, ``absolute_import``, ``division``, ``generators``,\n``nested_scopes`` and ``with_statement``. ``generators``,\n``with_statement``, ``nested_scopes`` are redundant in Python version\n2.6 and above because they are always enabled.\n\nA future statement is recognized and treated specially at compile\ntime: Changes to the semantics of core constructs are often\nimplemented by generating different code. It may even be the case\nthat a new feature introduces new incompatible syntax (such as a new\nreserved word), in which case the compiler may need to parse the\nmodule differently. 
Such decisions cannot be pushed off until\nruntime.\n\nFor any given release, the compiler knows which feature names have\nbeen defined, and raises a compile-time error if a future statement\ncontains a feature not known to it.\n\nThe direct runtime semantics are the same as for any import statement:\nthere is a standard module ``__future__``, described later, and it\nwill be imported in the usual way at the time the future statement is\nexecuted.\n\nThe interesting runtime semantics depend on the specific feature\nenabled by the future statement.\n\nNote that there is nothing special about the statement:\n\n import __future__ [as name]\n\nThat is not a future statement; it\'s an ordinary import statement with\nno special semantics or syntax restrictions.\n\nCode compiled by an ``exec`` statement or calls to the built-in\nfunctions ``compile()`` and ``execfile()`` that occur in a module\n``M`` containing a future statement will, by default, use the new\nsyntax or semantics associated with the future statement. This can,\nstarting with Python 2.2 be controlled by optional arguments to\n``compile()`` --- see the documentation of that function for details.\n\nA future statement typed at an interactive interpreter prompt will\ntake effect for the rest of the interpreter session. If an\ninterpreter is started with the *-i* option, is passed a script name\nto execute, and the script includes a future statement, it will be in\neffect in the interactive session started after the script is\nexecuted.\n\nSee also:\n\n **PEP 236** - Back to the __future__\n The original proposal for the __future__ mechanism.\n', + 'import': u'\nThe ``import`` statement\n************************\n\n import_stmt ::= "import" module ["as" name] ( "," module ["as" name] )*\n | "from" relative_module "import" identifier ["as" name]\n ( "," identifier ["as" name] )*\n | "from" relative_module "import" "(" identifier ["as" name]\n ( "," identifier ["as" name] )* [","] ")"\n | "from" module "import" "*"\n module ::= (identifier ".")* identifier\n relative_module ::= "."* module | "."+\n name ::= identifier\n\nImport statements are executed in two steps: (1) find a module, and\ninitialize it if necessary; (2) define a name or names in the local\nnamespace (of the scope where the ``import`` statement occurs). The\nstatement comes in two forms differing on whether it uses the ``from``\nkeyword. The first form (without ``from``) repeats these steps for\neach identifier in the list. The form with ``from`` performs step (1)\nonce, and then performs step (2) repeatedly.\n\nTo understand how step (1) occurs, one must first understand how\nPython handles hierarchical naming of modules. To help organize\nmodules and provide a hierarchy in naming, Python has a concept of\npackages. A package can contain other packages and modules while\nmodules cannot contain other modules or packages. From a file system\nperspective, packages are directories and modules are files. The\noriginal specification for packages is still available to read,\nalthough minor details have changed since the writing of that\ndocument.\n\nOnce the name of the module is known (unless otherwise specified, the\nterm "module" will refer to both packages and modules), searching for\nthe module or package can begin. The first place checked is\n``sys.modules``, the cache of all modules that have been imported\npreviously. 
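As a rough illustration of that cache (an interactive session; 'os' is merely an example module):

   >>> import os, sys
   >>> 'os' in sys.modules
   True
   >>> sys.modules['os'] is os        # a second 'import os' would simply reuse this entry
   True
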
If the module is found there then it is used in step (2)\nof import.\n\nIf the module is not found in the cache, then ``sys.meta_path`` is\nsearched (the specification for ``sys.meta_path`` can be found in\n**PEP 302**). The object is a list of *finder* objects which are\nqueried in order as to whether they know how to load the module by\ncalling their ``find_module()`` method with the name of the module. If\nthe module happens to be contained within a package (as denoted by the\nexistence of a dot in the name), then a second argument to\n``find_module()`` is given as the value of the ``__path__`` attribute\nfrom the parent package (everything up to the last dot in the name of\nthe module being imported). If a finder can find the module it returns\na *loader* (discussed later) or returns ``None``.\n\nIf none of the finders on ``sys.meta_path`` are able to find the\nmodule then some implicitly defined finders are queried.\nImplementations of Python vary in what implicit meta path finders are\ndefined. The one they all do define, though, is one that handles\n``sys.path_hooks``, ``sys.path_importer_cache``, and ``sys.path``.\n\nThe implicit finder searches for the requested module in the "paths"\nspecified in one of two places ("paths" do not have to be file system\npaths). If the module being imported is supposed to be contained\nwithin a package then the second argument passed to ``find_module()``,\n``__path__`` on the parent package, is used as the source of paths. If\nthe module is not contained in a package then ``sys.path`` is used as\nthe source of paths.\n\nOnce the source of paths is chosen it is iterated over to find a\nfinder that can handle that path. The dict at\n``sys.path_importer_cache`` caches finders for paths and is checked\nfor a finder. If the path does not have a finder cached then\n``sys.path_hooks`` is searched by calling each object in the list with\na single argument of the path, returning a finder or raises\n``ImportError``. If a finder is returned then it is cached in\n``sys.path_importer_cache`` and then used for that path entry. If no\nfinder can be found but the path exists then a value of ``None`` is\nstored in ``sys.path_importer_cache`` to signify that an implicit,\nfile-based finder that handles modules stored as individual files\nshould be used for that path. If the path does not exist then a finder\nwhich always returns ``None`` is placed in the cache for the path.\n\nIf no finder can find the module then ``ImportError`` is raised.\nOtherwise some finder returned a loader whose ``load_module()`` method\nis called with the name of the module to load (see **PEP 302** for the\noriginal definition of loaders). A loader has several responsibilities\nto perform on a module it loads. First, if the module already exists\nin ``sys.modules`` (a possibility if the loader is called outside of\nthe import machinery) then it is to use that module for initialization\nand not a new module. But if the module does not exist in\n``sys.modules`` then it is to be added to that dict before\ninitialization begins. If an error occurs during loading of the module\nand it was added to ``sys.modules`` it is to be removed from the dict.\nIf an error occurs but the module was already in ``sys.modules`` it is\nleft in the dict.\n\nThe loader must set several attributes on the module. ``__name__`` is\nto be set to the name of the module. 
``__file__`` is to be the "path"\nto the file unless the module is built-in (and thus listed in\n``sys.builtin_module_names``) in which case the attribute is not set.\nIf what is being imported is a package then ``__path__`` is to be set\nto a list of paths to be searched when looking for modules and\npackages contained within the package being imported. ``__package__``\nis optional but should be set to the name of package that contains the\nmodule or package (the empty string is used for module not contained\nin a package). ``__loader__`` is also optional but should be set to\nthe loader object that is loading the module.\n\nIf an error occurs during loading then the loader raises\n``ImportError`` if some other exception is not already being\npropagated. Otherwise the loader returns the module that was loaded\nand initialized.\n\nWhen step (1) finishes without raising an exception, step (2) can\nbegin.\n\nThe first form of ``import`` statement binds the module name in the\nlocal namespace to the module object, and then goes on to import the\nnext identifier, if any. If the module name is followed by ``as``,\nthe name following ``as`` is used as the local name for the module.\n\nThe ``from`` form does not bind the module name: it goes through the\nlist of identifiers, looks each one of them up in the module found in\nstep (1), and binds the name in the local namespace to the object thus\nfound. As with the first form of ``import``, an alternate local name\ncan be supplied by specifying "``as`` localname". If a name is not\nfound, ``ImportError`` is raised. If the list of identifiers is\nreplaced by a star (``\'*\'``), all public names defined in the module\nare bound in the local namespace of the ``import`` statement..\n\nThe *public names* defined by a module are determined by checking the\nmodule\'s namespace for a variable named ``__all__``; if defined, it\nmust be a sequence of strings which are names defined or imported by\nthat module. The names given in ``__all__`` are all considered public\nand are required to exist. If ``__all__`` is not defined, the set of\npublic names includes all names found in the module\'s namespace which\ndo not begin with an underscore character (``\'_\'``). ``__all__``\nshould contain the entire public API. It is intended to avoid\naccidentally exporting items that are not part of the API (such as\nlibrary modules which were imported and used within the module).\n\nThe ``from`` form with ``*`` may only occur in a module scope. If the\nwild card form of import --- ``import *`` --- is used in a function\nand the function contains or is a nested block with free variables,\nthe compiler will raise a ``SyntaxError``.\n\nWhen specifying what module to import you do not have to specify the\nabsolute name of the module. When a module or package is contained\nwithin another package it is possible to make a relative import within\nthe same top package without having to mention the package name. By\nusing leading dots in the specified module or package after ``from``\nyou can specify how high to traverse up the current package hierarchy\nwithout specifying exact names. One leading dot means the current\npackage where the module making the import exists. Two dots means up\none package level. Three dots is up two levels, etc. So if you execute\n``from . import mod`` from a module in the ``pkg`` package then you\nwill end up importing ``pkg.mod``. If you execute ``from ..subpkg2\nimport mod`` from within ``pkg.subpkg1`` you will import\n``pkg.subpkg2.mod``. 
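To make the dotted forms above concrete, here is a hypothetical package layout (only ``pkg``, ``subpkg1``, ``subpkg2`` and ``mod`` come from the text; ``moduleX.py`` and the ``other_mod`` alias are invented):

   pkg/
       __init__.py
       mod.py
       subpkg1/
           __init__.py
           moduleX.py
       subpkg2/
           __init__.py
           mod.py

   # inside pkg/subpkg1/moduleX.py:
   from .. import mod                        # imports pkg.mod
   from ..subpkg2 import mod as other_mod    # imports pkg.subpkg2.mod
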
The specification for relative imports is\ncontained within **PEP 328**.\n\n``importlib.import_module()`` is provided to support applications that\ndetermine which modules need to be loaded dynamically.\n\n\nFuture statements\n=================\n\nA *future statement* is a directive to the compiler that a particular\nmodule should be compiled using syntax or semantics that will be\navailable in a specified future release of Python. The future\nstatement is intended to ease migration to future versions of Python\nthat introduce incompatible changes to the language. It allows use of\nthe new features on a per-module basis before the release in which the\nfeature becomes standard.\n\n future_statement ::= "from" "__future__" "import" feature ["as" name]\n ("," feature ["as" name])*\n | "from" "__future__" "import" "(" feature ["as" name]\n ("," feature ["as" name])* [","] ")"\n feature ::= identifier\n name ::= identifier\n\nA future statement must appear near the top of the module. The only\nlines that can appear before a future statement are:\n\n* the module docstring (if any),\n\n* comments,\n\n* blank lines, and\n\n* other future statements.\n\nThe features recognized by Python 2.6 are ``unicode_literals``,\n``print_function``, ``absolute_import``, ``division``, ``generators``,\n``nested_scopes`` and ``with_statement``. ``generators``,\n``with_statement``, ``nested_scopes`` are redundant in Python version\n2.6 and above because they are always enabled.\n\nA future statement is recognized and treated specially at compile\ntime: Changes to the semantics of core constructs are often\nimplemented by generating different code. It may even be the case\nthat a new feature introduces new incompatible syntax (such as a new\nreserved word), in which case the compiler may need to parse the\nmodule differently. Such decisions cannot be pushed off until\nruntime.\n\nFor any given release, the compiler knows which feature names have\nbeen defined, and raises a compile-time error if a future statement\ncontains a feature not known to it.\n\nThe direct runtime semantics are the same as for any import statement:\nthere is a standard module ``__future__``, described later, and it\nwill be imported in the usual way at the time the future statement is\nexecuted.\n\nThe interesting runtime semantics depend on the specific feature\nenabled by the future statement.\n\nNote that there is nothing special about the statement:\n\n import __future__ [as name]\n\nThat is not a future statement; it\'s an ordinary import statement with\nno special semantics or syntax restrictions.\n\nCode compiled by an ``exec`` statement or calls to the built-in\nfunctions ``compile()`` and ``execfile()`` that occur in a module\n``M`` containing a future statement will, by default, use the new\nsyntax or semantics associated with the future statement. This can,\nstarting with Python 2.2 be controlled by optional arguments to\n``compile()`` --- see the documentation of that function for details.\n\nA future statement typed at an interactive interpreter prompt will\ntake effect for the rest of the interpreter session. 
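For instance, a brief interactive sketch (``division`` is just one of the features listed above):

   >>> 1 / 2
   0
   >>> from __future__ import division
   >>> 1 / 2        # the new semantics remain active for the rest of the session
   0.5
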
If an\ninterpreter is started with the *-i* option, is passed a script name\nto execute, and the script includes a future statement, it will be in\neffect in the interactive session started after the script is\nexecuted.\n\nSee also:\n\n **PEP 236** - Back to the __future__\n The original proposal for the __future__ mechanism.\n', 'in': u'\nComparisons\n***********\n\nUnlike C, all comparison operations in Python have the same priority,\nwhich is lower than that of any arithmetic, shifting or bitwise\noperation. Also unlike C, expressions like ``a < b < c`` have the\ninterpretation that is conventional in mathematics:\n\n comparison ::= or_expr ( comp_operator or_expr )*\n comp_operator ::= "<" | ">" | "==" | ">=" | "<=" | "<>" | "!="\n | "is" ["not"] | ["not"] "in"\n\nComparisons yield boolean values: ``True`` or ``False``.\n\nComparisons can be chained arbitrarily, e.g., ``x < y <= z`` is\nequivalent to ``x < y and y <= z``, except that ``y`` is evaluated\nonly once (but in both cases ``z`` is not evaluated at all when ``x <\ny`` is found to be false).\n\nFormally, if *a*, *b*, *c*, ..., *y*, *z* are expressions and *op1*,\n*op2*, ..., *opN* are comparison operators, then ``a op1 b op2 c ... y\nopN z`` is equivalent to ``a op1 b and b op2 c and ... y opN z``,\nexcept that each expression is evaluated at most once.\n\nNote that ``a op1 b op2 c`` doesn\'t imply any kind of comparison\nbetween *a* and *c*, so that, e.g., ``x < y > z`` is perfectly legal\n(though perhaps not pretty).\n\nThe forms ``<>`` and ``!=`` are equivalent; for consistency with C,\n``!=`` is preferred; where ``!=`` is mentioned below ``<>`` is also\naccepted. The ``<>`` spelling is considered obsolescent.\n\nThe operators ``<``, ``>``, ``==``, ``>=``, ``<=``, and ``!=`` compare\nthe values of two objects. The objects need not have the same type.\nIf both are numbers, they are converted to a common type. Otherwise,\nobjects of different types *always* compare unequal, and are ordered\nconsistently but arbitrarily. You can control comparison behavior of\nobjects of non-built-in types by defining a ``__cmp__`` method or rich\ncomparison methods like ``__gt__``, described in section *Special\nmethod names*.\n\n(This unusual definition of comparison was used to simplify the\ndefinition of operations like sorting and the ``in`` and ``not in``\noperators. In the future, the comparison rules for objects of\ndifferent types are likely to change.)\n\nComparison of objects of the same type depends on the type:\n\n* Numbers are compared arithmetically.\n\n* Strings are compared lexicographically using the numeric equivalents\n (the result of the built-in function ``ord()``) of their characters.\n Unicode and 8-bit strings are fully interoperable in this behavior.\n [4]\n\n* Tuples and lists are compared lexicographically using comparison of\n corresponding elements. This means that to compare equal, each\n element must compare equal and the two sequences must be of the same\n type and have the same length.\n\n If not equal, the sequences are ordered the same as their first\n differing elements. For example, ``cmp([1,2,x], [1,2,y])`` returns\n the same as ``cmp(x,y)``. If the corresponding element does not\n exist, the shorter sequence is ordered first (for example, ``[1,2] <\n [1,2,3]``).\n\n* Mappings (dictionaries) compare equal if and only if their sorted\n (key, value) lists compare equal. [5] Outcomes other than equality\n are resolved consistently, but are not otherwise defined. 
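A short sketch of the chaining rule above, showing that the shared operand is evaluated only once and that the right-hand operand is skipped once the outcome is known (the helper function is invented for the demonstration):

    def middle():
        print "middle() evaluated"
        return 5

    # Equivalent to: tmp = middle(); 1 < tmp and tmp <= 10
    print 1 < middle() <= 10     # middle() evaluated, then True
    print 10 < middle() <= 20    # middle() evaluated once; the result is
                                 # False, so "<= 20" is never tested
    print 1 < 3 > 2              # True: legal, but 1 and 2 are never compared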
[6]\n\n* Most other objects of built-in types compare unequal unless they are\n the same object; the choice whether one object is considered smaller\n or larger than another one is made arbitrarily but consistently\n within one execution of a program.\n\nThe operators ``in`` and ``not in`` test for collection membership.\n``x in s`` evaluates to true if *x* is a member of the collection *s*,\nand false otherwise. ``x not in s`` returns the negation of ``x in\ns``. The collection membership test has traditionally been bound to\nsequences; an object is a member of a collection if the collection is\na sequence and contains an element equal to that object. However, it\nmake sense for many other object types to support membership tests\nwithout being a sequence. In particular, dictionaries (for keys) and\nsets support membership testing.\n\nFor the list and tuple types, ``x in y`` is true if and only if there\nexists an index *i* such that ``x == y[i]`` is true.\n\nFor the Unicode and string types, ``x in y`` is true if and only if\n*x* is a substring of *y*. An equivalent test is ``y.find(x) != -1``.\nNote, *x* and *y* need not be the same type; consequently, ``u\'ab\' in\n\'abc\'`` will return ``True``. Empty strings are always considered to\nbe a substring of any other string, so ``"" in "abc"`` will return\n``True``.\n\nChanged in version 2.3: Previously, *x* was required to be a string of\nlength ``1``.\n\nFor user-defined classes which define the ``__contains__()`` method,\n``x in y`` is true if and only if ``y.__contains__(x)`` is true.\n\nFor user-defined classes which do not define ``__contains__()`` but do\ndefine ``__iter__()``, ``x in y`` is true if some value ``z`` with ``x\n== z`` is produced while iterating over ``y``. If an exception is\nraised during the iteration, it is as if ``in`` raised that exception.\n\nLastly, the old-style iteration protocol is tried: if a class defines\n``__getitem__()``, ``x in y`` is true if and only if there is a non-\nnegative integer index *i* such that ``x == y[i]``, and all lower\ninteger indices do not raise ``IndexError`` exception. (If any other\nexception is raised, it is as if ``in`` raised that exception).\n\nThe operator ``not in`` is defined to have the inverse true value of\n``in``.\n\nThe operators ``is`` and ``is not`` test for object identity: ``x is\ny`` is true if and only if *x* and *y* are the same object. ``x is\nnot y`` yields the inverse truth value. [7]\n', 'integers': u'\nInteger and long integer literals\n*********************************\n\nInteger and long integer literals are described by the following\nlexical definitions:\n\n longinteger ::= integer ("l" | "L")\n integer ::= decimalinteger | octinteger | hexinteger | bininteger\n decimalinteger ::= nonzerodigit digit* | "0"\n octinteger ::= "0" ("o" | "O") octdigit+ | "0" octdigit+\n hexinteger ::= "0" ("x" | "X") hexdigit+\n bininteger ::= "0" ("b" | "B") bindigit+\n nonzerodigit ::= "1"..."9"\n octdigit ::= "0"..."7"\n bindigit ::= "0" | "1"\n hexdigit ::= digit | "a"..."f" | "A"..."F"\n\nAlthough both lower case ``\'l\'`` and upper case ``\'L\'`` are allowed as\nsuffix for long integers, it is strongly recommended to always use\n``\'L\'``, since the letter ``\'l\'`` looks too much like the digit\n``\'1\'``.\n\nPlain integer literals that are above the largest representable plain\ninteger (e.g., 2147483647 when using 32-bit arithmetic) are accepted\nas if they were long integers instead. 
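A minimal user-defined example of the membership protocol described above; the class is invented purely for illustration:

    class EvenNumbers(object):
        # "x in EvenNumbers()" consults __contains__ first when it exists.
        def __contains__(self, x):
            return isinstance(x, (int, long)) and x % 2 == 0

    evens = EvenNumbers()
    print 4 in evens           # True
    print 7 in evens           # False
    print 7 not in evens       # True: "not in" is always the inverse

    # For strings, "in" is a substring test (since Python 2.3):
    print u'ab' in 'abc'       # True
    print "" in "abc"          # True: the empty string is a substring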
[1] There is no limit for long\ninteger literals apart from what can be stored in available memory.\n\nSome examples of plain integer literals (first row) and long integer\nliterals (second and third rows):\n\n 7 2147483647 0177\n 3L 79228162514264337593543950336L 0377L 0x100000000L\n 79228162514264337593543950336 0xdeadbeef\n', 'lambda': u'\nLambdas\n*******\n\n lambda_form ::= "lambda" [parameter_list]: expression\n old_lambda_form ::= "lambda" [parameter_list]: old_expression\n\nLambda forms (lambda expressions) have the same syntactic position as\nexpressions. They are a shorthand to create anonymous functions; the\nexpression ``lambda arguments: expression`` yields a function object.\nThe unnamed object behaves like a function object defined with\n\n def name(arguments):\n return expression\n\nSee section *Function definitions* for the syntax of parameter lists.\nNote that functions created with lambda forms cannot contain\nstatements.\n', 'lists': u'\nList displays\n*************\n\nA list display is a possibly empty series of expressions enclosed in\nsquare brackets:\n\n list_display ::= "[" [expression_list | list_comprehension] "]"\n list_comprehension ::= expression list_for\n list_for ::= "for" target_list "in" old_expression_list [list_iter]\n old_expression_list ::= old_expression [("," old_expression)+ [","]]\n old_expression ::= or_test | old_lambda_form\n list_iter ::= list_for | list_if\n list_if ::= "if" old_expression [list_iter]\n\nA list display yields a new list object. Its contents are specified\nby providing either a list of expressions or a list comprehension.\nWhen a comma-separated list of expressions is supplied, its elements\nare evaluated from left to right and placed into the list object in\nthat order. When a list comprehension is supplied, it consists of a\nsingle expression followed by at least one ``for`` clause and zero or\nmore ``for`` or ``if`` clauses. In this case, the elements of the new\nlist are those that would be produced by considering each of the\n``for`` or ``if`` clauses a block, nesting from left to right, and\nevaluating the expression to produce a list element each time the\ninnermost block is reached [1].\n', - 'naming': u"\nNaming and binding\n******************\n\n*Names* refer to objects. Names are introduced by name binding\noperations. Each occurrence of a name in the program text refers to\nthe *binding* of that name established in the innermost function block\ncontaining the use.\n\nA *block* is a piece of Python program text that is executed as a\nunit. The following are blocks: a module, a function body, and a class\ndefinition. Each command typed interactively is a block. A script\nfile (a file given as standard input to the interpreter or specified\non the interpreter command line the first argument) is a code block.\nA script command (a command specified on the interpreter command line\nwith the '**-c**' option) is a code block. The file read by the\nbuilt-in function ``execfile()`` is a code block. The string argument\npassed to the built-in function ``eval()`` and to the ``exec``\nstatement is a code block. The expression read and evaluated by the\nbuilt-in function ``input()`` is a code block.\n\nA code block is executed in an *execution frame*. A frame contains\nsome administrative information (used for debugging) and determines\nwhere and how execution continues after the code block's execution has\ncompleted.\n\nA *scope* defines the visibility of a name within a block. 
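The lambda and list-display forms above, shown next to their longhand equivalents (all names are arbitrary):

    add = lambda x, y: x + y       # an anonymous function object...

    def add_longhand(x, y):        # ...behaving like this def
        return x + y

    print add(2, 3) == add_longhand(2, 3)       # True

    # A list comprehension nests its for/if clauses from left to right:
    pairs = [(x, y) for x in range(3) for y in range(3) if x != y]

    pairs_longhand = []            # the equivalent nested block form
    for x in range(3):
        for y in range(3):
            if x != y:
                pairs_longhand.append((x, y))

    print pairs == pairs_longhand               # True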
If a local\nvariable is defined in a block, its scope includes that block. If the\ndefinition occurs in a function block, the scope extends to any blocks\ncontained within the defining one, unless a contained block introduces\na different binding for the name. The scope of names defined in a\nclass block is limited to the class block; it does not extend to the\ncode blocks of methods -- this includes generator expressions since\nthey are implemented using a function scope. This means that the\nfollowing will fail:\n\n class A:\n a = 42\n b = list(a + i for i in range(10))\n\nWhen a name is used in a code block, it is resolved using the nearest\nenclosing scope. The set of all such scopes visible to a code block\nis called the block's *environment*.\n\nIf a name is bound in a block, it is a local variable of that block.\nIf a name is bound at the module level, it is a global variable. (The\nvariables of the module code block are local and global.) If a\nvariable is used in a code block but not defined there, it is a *free\nvariable*.\n\nWhen a name is not found at all, a ``NameError`` exception is raised.\nIf the name refers to a local variable that has not been bound, a\n``UnboundLocalError`` exception is raised. ``UnboundLocalError`` is a\nsubclass of ``NameError``.\n\nThe following constructs bind names: formal parameters to functions,\n``import`` statements, class and function definitions (these bind the\nclass or function name in the defining block), and targets that are\nidentifiers if occurring in an assignment, ``for`` loop header, in the\nsecond position of an ``except`` clause header or after ``as`` in a\n``with`` statement. The ``import`` statement of the form ``from ...\nimport *`` binds all names defined in the imported module, except\nthose beginning with an underscore. This form may only be used at the\nmodule level.\n\nA target occurring in a ``del`` statement is also considered bound for\nthis purpose (though the actual semantics are to unbind the name). It\nis illegal to unbind a name that is referenced by an enclosing scope;\nthe compiler will report a ``SyntaxError``.\n\nEach assignment or import statement occurs within a block defined by a\nclass or function definition or at the module level (the top-level\ncode block).\n\nIf a name binding operation occurs anywhere within a code block, all\nuses of the name within the block are treated as references to the\ncurrent block. This can lead to errors when a name is used within a\nblock before it is bound. This rule is subtle. Python lacks\ndeclarations and allows name binding operations to occur anywhere\nwithin a code block. The local variables of a code block can be\ndetermined by scanning the entire text of the block for name binding\noperations.\n\nIf the global statement occurs within a block, all uses of the name\nspecified in the statement refer to the binding of that name in the\ntop-level namespace. Names are resolved in the top-level namespace by\nsearching the global namespace, i.e. the namespace of the module\ncontaining the code block, and the builtins namespace, the namespace\nof the module ``__builtin__``. The global namespace is searched\nfirst. If the name is not found there, the builtins namespace is\nsearched. 
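A runnable sketch of the binding rules above: ``global`` makes an assignment target refer to the module-level binding, and a free name falls back from the global namespace to the builtins namespace (names below are arbitrary):

    counter = 0

    def bump():
        global counter      # without this, "counter += 1" would make the
        counter += 1        # name local and raise UnboundLocalError

    bump()
    bump()
    print counter           # 2

    def length(seq):
        # "len" is not defined in this module, so it is found in the
        # builtins namespace (the __builtin__ module in Python 2).
        return len(seq)

    print length("abc")     # 3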
The global statement must precede all uses of the name.\n\nThe builtins namespace associated with the execution of a code block\nis actually found by looking up the name ``__builtins__`` in its\nglobal namespace; this should be a dictionary or a module (in the\nlatter case the module's dictionary is used). By default, when in the\n``__main__`` module, ``__builtins__`` is the built-in module\n``__builtin__`` (note: no 's'); when in any other module,\n``__builtins__`` is an alias for the dictionary of the ``__builtin__``\nmodule itself. ``__builtins__`` can be set to a user-created\ndictionary to create a weak form of restricted execution.\n\n**CPython implementation detail:** Users should not touch\n``__builtins__``; it is strictly an implementation detail. Users\nwanting to override values in the builtins namespace should ``import``\nthe ``__builtin__`` (no 's') module and modify its attributes\nappropriately.\n\nThe namespace for a module is automatically created the first time a\nmodule is imported. The main module for a script is always called\n``__main__``.\n\nThe global statement has the same scope as a name binding operation in\nthe same block. If the nearest enclosing scope for a free variable\ncontains a global statement, the free variable is treated as a global.\n\nA class definition is an executable statement that may use and define\nnames. These references follow the normal rules for name resolution.\nThe namespace of the class definition becomes the attribute dictionary\nof the class. Names defined at the class scope are not visible in\nmethods.\n\n\nInteraction with dynamic features\n=================================\n\nThere are several cases where Python statements are illegal when used\nin conjunction with nested scopes that contain free variables.\n\nIf a variable is referenced in an enclosing scope, it is illegal to\ndelete the name. An error will be reported at compile time.\n\nIf the wild card form of import --- ``import *`` --- is used in a\nfunction and the function contains or is a nested block with free\nvariables, the compiler will raise a ``SyntaxError``.\n\nIf ``exec`` is used in a function and the function contains or is a\nnested block with free variables, the compiler will raise a\n``SyntaxError`` unless the exec explicitly specifies the local\nnamespace for the ``exec``. (In other words, ``exec obj`` would be\nillegal, but ``exec obj in ns`` would be legal.)\n\nThe ``eval()``, ``execfile()``, and ``input()`` functions and the\n``exec`` statement do not have access to the full environment for\nresolving names. Names may be resolved in the local and global\nnamespaces of the caller. Free variables are not resolved in the\nnearest enclosing namespace, but in the global namespace. [1] The\n``exec`` statement and the ``eval()`` and ``execfile()`` functions\nhave optional arguments to override the global and local namespace.\nIf only one namespace is specified, it is used for both.\n", + 'naming': u"\nNaming and binding\n******************\n\n*Names* refer to objects. Names are introduced by name binding\noperations. Each occurrence of a name in the program text refers to\nthe *binding* of that name established in the innermost function block\ncontaining the use.\n\nA *block* is a piece of Python program text that is executed as a\nunit. The following are blocks: a module, a function body, and a class\ndefinition. Each command typed interactively is a block. 
A script\nfile (a file given as standard input to the interpreter or specified\non the interpreter command line the first argument) is a code block.\nA script command (a command specified on the interpreter command line\nwith the '**-c**' option) is a code block. The file read by the\nbuilt-in function ``execfile()`` is a code block. The string argument\npassed to the built-in function ``eval()`` and to the ``exec``\nstatement is a code block. The expression read and evaluated by the\nbuilt-in function ``input()`` is a code block.\n\nA code block is executed in an *execution frame*. A frame contains\nsome administrative information (used for debugging) and determines\nwhere and how execution continues after the code block's execution has\ncompleted.\n\nA *scope* defines the visibility of a name within a block. If a local\nvariable is defined in a block, its scope includes that block. If the\ndefinition occurs in a function block, the scope extends to any blocks\ncontained within the defining one, unless a contained block introduces\na different binding for the name. The scope of names defined in a\nclass block is limited to the class block; it does not extend to the\ncode blocks of methods -- this includes generator expressions since\nthey are implemented using a function scope. This means that the\nfollowing will fail:\n\n class A:\n a = 42\n b = list(a + i for i in range(10))\n\nWhen a name is used in a code block, it is resolved using the nearest\nenclosing scope. The set of all such scopes visible to a code block\nis called the block's *environment*.\n\nIf a name is bound in a block, it is a local variable of that block.\nIf a name is bound at the module level, it is a global variable. (The\nvariables of the module code block are local and global.) If a\nvariable is used in a code block but not defined there, it is a *free\nvariable*.\n\nWhen a name is not found at all, a ``NameError`` exception is raised.\nIf the name refers to a local variable that has not been bound, a\n``UnboundLocalError`` exception is raised. ``UnboundLocalError`` is a\nsubclass of ``NameError``.\n\nThe following constructs bind names: formal parameters to functions,\n``import`` statements, class and function definitions (these bind the\nclass or function name in the defining block), and targets that are\nidentifiers if occurring in an assignment, ``for`` loop header, in the\nsecond position of an ``except`` clause header or after ``as`` in a\n``with`` statement. The ``import`` statement of the form ``from ...\nimport *`` binds all names defined in the imported module, except\nthose beginning with an underscore. This form may only be used at the\nmodule level.\n\nA target occurring in a ``del`` statement is also considered bound for\nthis purpose (though the actual semantics are to unbind the name). It\nis illegal to unbind a name that is referenced by an enclosing scope;\nthe compiler will report a ``SyntaxError``.\n\nEach assignment or import statement occurs within a block defined by a\nclass or function definition or at the module level (the top-level\ncode block).\n\nIf a name binding operation occurs anywhere within a code block, all\nuses of the name within the block are treated as references to the\ncurrent block. This can lead to errors when a name is used within a\nblock before it is bound. This rule is subtle. Python lacks\ndeclarations and allows name binding operations to occur anywhere\nwithin a code block. 
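The "used before it is bound" pitfall and the class-scope restriction described above, as a small runnable sketch:

    x = 10

    def broken():
        # x is assigned somewhere in this block, so x is local to the whole
        # block; the read below never sees the module-level x.
        print x             # raises UnboundLocalError
        x = 20

    try:
        broken()
    except UnboundLocalError as e:
        print "caught:", e

    # Class-scope names are not visible inside generator expressions:
    try:
        class A:
            a = 42
            b = list(a + i for i in range(10))
    except NameError as e:
        print "caught:", e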
The local variables of a code block can be\ndetermined by scanning the entire text of the block for name binding\noperations.\n\nIf the global statement occurs within a block, all uses of the name\nspecified in the statement refer to the binding of that name in the\ntop-level namespace. Names are resolved in the top-level namespace by\nsearching the global namespace, i.e. the namespace of the module\ncontaining the code block, and the builtins namespace, the namespace\nof the module ``__builtin__``. The global namespace is searched\nfirst. If the name is not found there, the builtins namespace is\nsearched. The global statement must precede all uses of the name.\n\nThe builtins namespace associated with the execution of a code block\nis actually found by looking up the name ``__builtins__`` in its\nglobal namespace; this should be a dictionary or a module (in the\nlatter case the module's dictionary is used). By default, when in the\n``__main__`` module, ``__builtins__`` is the built-in module\n``__builtin__`` (note: no 's'); when in any other module,\n``__builtins__`` is an alias for the dictionary of the ``__builtin__``\nmodule itself. ``__builtins__`` can be set to a user-created\ndictionary to create a weak form of restricted execution.\n\n**CPython implementation detail:** Users should not touch\n``__builtins__``; it is strictly an implementation detail. Users\nwanting to override values in the builtins namespace should ``import``\nthe ``__builtin__`` (no 's') module and modify its attributes\nappropriately.\n\nThe namespace for a module is automatically created the first time a\nmodule is imported. The main module for a script is always called\n``__main__``.\n\nThe ``global`` statement has the same scope as a name binding\noperation in the same block. If the nearest enclosing scope for a\nfree variable contains a global statement, the free variable is\ntreated as a global.\n\nA class definition is an executable statement that may use and define\nnames. These references follow the normal rules for name resolution.\nThe namespace of the class definition becomes the attribute dictionary\nof the class. Names defined at the class scope are not visible in\nmethods.\n\n\nInteraction with dynamic features\n=================================\n\nThere are several cases where Python statements are illegal when used\nin conjunction with nested scopes that contain free variables.\n\nIf a variable is referenced in an enclosing scope, it is illegal to\ndelete the name. An error will be reported at compile time.\n\nIf the wild card form of import --- ``import *`` --- is used in a\nfunction and the function contains or is a nested block with free\nvariables, the compiler will raise a ``SyntaxError``.\n\nIf ``exec`` is used in a function and the function contains or is a\nnested block with free variables, the compiler will raise a\n``SyntaxError`` unless the exec explicitly specifies the local\nnamespace for the ``exec``. (In other words, ``exec obj`` would be\nillegal, but ``exec obj in ns`` would be legal.)\n\nThe ``eval()``, ``execfile()``, and ``input()`` functions and the\n``exec`` statement do not have access to the full environment for\nresolving names. Names may be resolved in the local and global\nnamespaces of the caller. Free variables are not resolved in the\nnearest enclosing namespace, but in the global namespace. 
[1] The\n``exec`` statement and the ``eval()`` and ``execfile()`` functions\nhave optional arguments to override the global and local namespace.\nIf only one namespace is specified, it is used for both.\n", 'numbers': u"\nNumeric literals\n****************\n\nThere are four types of numeric literals: plain integers, long\nintegers, floating point numbers, and imaginary numbers. There are no\ncomplex literals (complex numbers can be formed by adding a real\nnumber and an imaginary number).\n\nNote that numeric literals do not include a sign; a phrase like ``-1``\nis actually an expression composed of the unary operator '``-``' and\nthe literal ``1``.\n", 'numeric-types': u'\nEmulating numeric types\n***********************\n\nThe following methods can be defined to emulate numeric objects.\nMethods corresponding to operations that are not supported by the\nparticular kind of number implemented (e.g., bitwise operations for\nnon-integral numbers) should be left undefined.\n\nobject.__add__(self, other)\nobject.__sub__(self, other)\nobject.__mul__(self, other)\nobject.__floordiv__(self, other)\nobject.__mod__(self, other)\nobject.__divmod__(self, other)\nobject.__pow__(self, other[, modulo])\nobject.__lshift__(self, other)\nobject.__rshift__(self, other)\nobject.__and__(self, other)\nobject.__xor__(self, other)\nobject.__or__(self, other)\n\n These methods are called to implement the binary arithmetic\n operations (``+``, ``-``, ``*``, ``//``, ``%``, ``divmod()``,\n ``pow()``, ``**``, ``<<``, ``>>``, ``&``, ``^``, ``|``). For\n instance, to evaluate the expression ``x + y``, where *x* is an\n instance of a class that has an ``__add__()`` method,\n ``x.__add__(y)`` is called. The ``__divmod__()`` method should be\n the equivalent to using ``__floordiv__()`` and ``__mod__()``; it\n should not be related to ``__truediv__()`` (described below). Note\n that ``__pow__()`` should be defined to accept an optional third\n argument if the ternary version of the built-in ``pow()`` function\n is to be supported.\n\n If one of those methods does not support the operation with the\n supplied arguments, it should return ``NotImplemented``.\n\nobject.__div__(self, other)\nobject.__truediv__(self, other)\n\n The division operator (``/``) is implemented by these methods. The\n ``__truediv__()`` method is used when ``__future__.division`` is in\n effect, otherwise ``__div__()`` is used. If only one of these two\n methods is defined, the object will not support division in the\n alternate context; ``TypeError`` will be raised instead.\n\nobject.__radd__(self, other)\nobject.__rsub__(self, other)\nobject.__rmul__(self, other)\nobject.__rdiv__(self, other)\nobject.__rtruediv__(self, other)\nobject.__rfloordiv__(self, other)\nobject.__rmod__(self, other)\nobject.__rdivmod__(self, other)\nobject.__rpow__(self, other)\nobject.__rlshift__(self, other)\nobject.__rrshift__(self, other)\nobject.__rand__(self, other)\nobject.__rxor__(self, other)\nobject.__ror__(self, other)\n\n These methods are called to implement the binary arithmetic\n operations (``+``, ``-``, ``*``, ``/``, ``%``, ``divmod()``,\n ``pow()``, ``**``, ``<<``, ``>>``, ``&``, ``^``, ``|``) with\n reflected (swapped) operands. These functions are only called if\n the left operand does not support the corresponding operation and\n the operands are of different types. 
[2] For instance, to evaluate\n the expression ``x - y``, where *y* is an instance of a class that\n has an ``__rsub__()`` method, ``y.__rsub__(x)`` is called if\n ``x.__sub__(y)`` returns *NotImplemented*.\n\n Note that ternary ``pow()`` will not try calling ``__rpow__()``\n (the coercion rules would become too complicated).\n\n Note: If the right operand\'s type is a subclass of the left operand\'s\n type and that subclass provides the reflected method for the\n operation, this method will be called before the left operand\'s\n non-reflected method. This behavior allows subclasses to\n override their ancestors\' operations.\n\nobject.__iadd__(self, other)\nobject.__isub__(self, other)\nobject.__imul__(self, other)\nobject.__idiv__(self, other)\nobject.__itruediv__(self, other)\nobject.__ifloordiv__(self, other)\nobject.__imod__(self, other)\nobject.__ipow__(self, other[, modulo])\nobject.__ilshift__(self, other)\nobject.__irshift__(self, other)\nobject.__iand__(self, other)\nobject.__ixor__(self, other)\nobject.__ior__(self, other)\n\n These methods are called to implement the augmented arithmetic\n assignments (``+=``, ``-=``, ``*=``, ``/=``, ``//=``, ``%=``,\n ``**=``, ``<<=``, ``>>=``, ``&=``, ``^=``, ``|=``). These methods\n should attempt to do the operation in-place (modifying *self*) and\n return the result (which could be, but does not have to be,\n *self*). If a specific method is not defined, the augmented\n assignment falls back to the normal methods. For instance, to\n execute the statement ``x += y``, where *x* is an instance of a\n class that has an ``__iadd__()`` method, ``x.__iadd__(y)`` is\n called. If *x* is an instance of a class that does not define a\n ``__iadd__()`` method, ``x.__add__(y)`` and ``y.__radd__(x)`` are\n considered, as with the evaluation of ``x + y``.\n\nobject.__neg__(self)\nobject.__pos__(self)\nobject.__abs__(self)\nobject.__invert__(self)\n\n Called to implement the unary arithmetic operations (``-``, ``+``,\n ``abs()`` and ``~``).\n\nobject.__complex__(self)\nobject.__int__(self)\nobject.__long__(self)\nobject.__float__(self)\n\n Called to implement the built-in functions ``complex()``,\n ``int()``, ``long()``, and ``float()``. Should return a value of\n the appropriate type.\n\nobject.__oct__(self)\nobject.__hex__(self)\n\n Called to implement the built-in functions ``oct()`` and ``hex()``.\n Should return a string value.\n\nobject.__index__(self)\n\n Called to implement ``operator.index()``. Also called whenever\n Python needs an integer object (such as in slicing). Must return\n an integer (int or long).\n\n New in version 2.5.\n\nobject.__coerce__(self, other)\n\n Called to implement "mixed-mode" numeric arithmetic. Should either\n return a 2-tuple containing *self* and *other* converted to a\n common numeric type, or ``None`` if conversion is impossible. When\n the common type would be the type of ``other``, it is sufficient to\n return ``None``, since the interpreter will also ask the other\n object to attempt a coercion (but sometimes, if the implementation\n of the other type cannot be changed, it is useful to do the\n conversion to the other type here). A return value of\n ``NotImplemented`` is equivalent to returning ``None``.\n', - 'objects': u'\nObjects, values and types\n*************************\n\n*Objects* are Python\'s abstraction for data. All data in a Python\nprogram is represented by objects or by relations between objects. 
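To make the binary/reflected/in-place trio above concrete, here is a sketch of an invented ``Money`` class that supports only addition and returns ``NotImplemented`` for everything else:

    class Money(object):
        def __init__(self, cents):
            self.cents = cents

        def __add__(self, other):
            if isinstance(other, Money):
                return Money(self.cents + other.cents)
            if isinstance(other, (int, long)):
                return Money(self.cents + other)
            return NotImplemented          # let the other operand try

        # Reflected form: used for "50 + Money(100)" once int.__add__
        # has returned NotImplemented.
        __radd__ = __add__

        def __iadd__(self, other):         # "m += ..." modifies in place
            self.cents = (self + other).cents
            return self

        def __repr__(self):
            return 'Money(%d)' % self.cents

    print Money(100) + 50      # Money(150)
    print 50 + Money(100)      # Money(150), via __radd__
    m = Money(1)
    m += Money(2)
    print m                    # Money(3)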
(In\na sense, and in conformance to Von Neumann\'s model of a "stored\nprogram computer," code is also represented by objects.)\n\nEvery object has an identity, a type and a value. An object\'s\n*identity* never changes once it has been created; you may think of it\nas the object\'s address in memory. The \'``is``\' operator compares the\nidentity of two objects; the ``id()`` function returns an integer\nrepresenting its identity (currently implemented as its address). An\nobject\'s *type* is also unchangeable. [1] An object\'s type determines\nthe operations that the object supports (e.g., "does it have a\nlength?") and also defines the possible values for objects of that\ntype. The ``type()`` function returns an object\'s type (which is an\nobject itself). The *value* of some objects can change. Objects\nwhose value can change are said to be *mutable*; objects whose value\nis unchangeable once they are created are called *immutable*. (The\nvalue of an immutable container object that contains a reference to a\nmutable object can change when the latter\'s value is changed; however\nthe container is still considered immutable, because the collection of\nobjects it contains cannot be changed. So, immutability is not\nstrictly the same as having an unchangeable value, it is more subtle.)\nAn object\'s mutability is determined by its type; for instance,\nnumbers, strings and tuples are immutable, while dictionaries and\nlists are mutable.\n\nObjects are never explicitly destroyed; however, when they become\nunreachable they may be garbage-collected. An implementation is\nallowed to postpone garbage collection or omit it altogether --- it is\na matter of implementation quality how garbage collection is\nimplemented, as long as no objects are collected that are still\nreachable.\n\n**CPython implementation detail:** CPython currently uses a reference-\ncounting scheme with (optional) delayed detection of cyclically linked\ngarbage, which collects most objects as soon as they become\nunreachable, but is not guaranteed to collect garbage containing\ncircular references. See the documentation of the ``gc`` module for\ninformation on controlling the collection of cyclic garbage. Other\nimplementations act differently and CPython may change.\n\nNote that the use of the implementation\'s tracing or debugging\nfacilities may keep objects alive that would normally be collectable.\nAlso note that catching an exception with a \'``try``...``except``\'\nstatement may keep objects alive.\n\nSome objects contain references to "external" resources such as open\nfiles or windows. It is understood that these resources are freed\nwhen the object is garbage-collected, but since garbage collection is\nnot guaranteed to happen, such objects also provide an explicit way to\nrelease the external resource, usually a ``close()`` method. Programs\nare strongly recommended to explicitly close such objects. The\n\'``try``...``finally``\' statement provides a convenient way to do\nthis.\n\nSome objects contain references to other objects; these are called\n*containers*. Examples of containers are tuples, lists and\ndictionaries. The references are part of a container\'s value. In\nmost cases, when we talk about the value of a container, we imply the\nvalues, not the identities of the contained objects; however, when we\ntalk about the mutability of a container, only the identities of the\nimmediately contained objects are implied. 
So, if an immutable\ncontainer (like a tuple) contains a reference to a mutable object, its\nvalue changes if that mutable object is changed.\n\nTypes affect almost all aspects of object behavior. Even the\nimportance of object identity is affected in some sense: for immutable\ntypes, operations that compute new values may actually return a\nreference to any existing object with the same type and value, while\nfor mutable objects this is not allowed. E.g., after ``a = 1; b =\n1``, ``a`` and ``b`` may or may not refer to the same object with the\nvalue one, depending on the implementation, but after ``c = []; d =\n[]``, ``c`` and ``d`` are guaranteed to refer to two different,\nunique, newly created empty lists. (Note that ``c = d = []`` assigns\nthe same object to both ``c`` and ``d``.)\n', - 'operator-summary': u'\nSummary\n*******\n\nThe following table summarizes the operator precedences in Python,\nfrom lowest precedence (least binding) to highest precedence (most\nbinding). Operators in the same box have the same precedence. Unless\nthe syntax is explicitly given, operators are binary. Operators in\nthe same box group left to right (except for comparisons, including\ntests, which all have the same precedence and chain from left to right\n--- see section *Comparisons* --- and exponentiation, which groups\nfrom right to left).\n\n+-------------------------------------------------+---------------------------------------+\n| Operator | Description |\n+=================================================+=======================================+\n| ``lambda`` | Lambda expression |\n+-------------------------------------------------+---------------------------------------+\n| ``if`` -- ``else`` | Conditional expression |\n+-------------------------------------------------+---------------------------------------+\n| ``or`` | Boolean OR |\n+-------------------------------------------------+---------------------------------------+\n| ``and`` | Boolean AND |\n+-------------------------------------------------+---------------------------------------+\n| ``not`` *x* | Boolean NOT |\n+-------------------------------------------------+---------------------------------------+\n| ``in``, ``not`` ``in``, ``is``, ``is not``, | Comparisons, including membership |\n| ``<``, ``<=``, ``>``, ``>=``, ``<>``, ``!=``, | tests and identity tests, |\n| ``==`` | |\n+-------------------------------------------------+---------------------------------------+\n| ``|`` | Bitwise OR |\n+-------------------------------------------------+---------------------------------------+\n| ``^`` | Bitwise XOR |\n+-------------------------------------------------+---------------------------------------+\n| ``&`` | Bitwise AND |\n+-------------------------------------------------+---------------------------------------+\n| ``<<``, ``>>`` | Shifts |\n+-------------------------------------------------+---------------------------------------+\n| ``+``, ``-`` | Addition and subtraction |\n+-------------------------------------------------+---------------------------------------+\n| ``*``, ``/``, ``//``, ``%`` | Multiplication, division, remainder |\n+-------------------------------------------------+---------------------------------------+\n| ``+x``, ``-x``, ``~x`` | Positive, negative, bitwise NOT |\n+-------------------------------------------------+---------------------------------------+\n| ``**`` | Exponentiation [8] |\n+-------------------------------------------------+---------------------------------------+\n| ``x[index]``, 
``x[index:index]``, | Subscription, slicing, call, |\n| ``x(arguments...)``, ``x.attribute`` | attribute reference |\n+-------------------------------------------------+---------------------------------------+\n| ``(expressions...)``, ``[expressions...]``, | Binding or tuple display, list |\n| ``{key:datum...}``, ```expressions...``` | display, dictionary display, string |\n| | conversion |\n+-------------------------------------------------+---------------------------------------+\n\n-[ Footnotes ]-\n\n[1] In Python 2.3 and later releases, a list comprehension "leaks" the\n control variables of each ``for`` it contains into the containing\n scope. However, this behavior is deprecated, and relying on it\n will not work in Python 3.0\n\n[2] While ``abs(x%y) < abs(y)`` is true mathematically, for floats it\n may not be true numerically due to roundoff. For example, and\n assuming a platform on which a Python float is an IEEE 754 double-\n precision number, in order that ``-1e-100 % 1e100`` have the same\n sign as ``1e100``, the computed result is ``-1e-100 + 1e100``,\n which is numerically exactly equal to ``1e100``. Function\n ``fmod()`` in the ``math`` module returns a result whose sign\n matches the sign of the first argument instead, and so returns\n ``-1e-100`` in this case. Which approach is more appropriate\n depends on the application.\n\n[3] If x is very close to an exact integer multiple of y, it\'s\n possible for ``floor(x/y)`` to be one larger than ``(x-x%y)/y``\n due to rounding. In such cases, Python returns the latter result,\n in order to preserve that ``divmod(x,y)[0] * y + x % y`` be very\n close to ``x``.\n\n[4] While comparisons between unicode strings make sense at the byte\n level, they may be counter-intuitive to users. For example, the\n strings ``u"\\u00C7"`` and ``u"\\u0043\\u0327"`` compare differently,\n even though they both represent the same unicode character (LATIN\n CAPITAL LETTER C WITH CEDILLA). To compare strings in a human\n recognizable way, compare using ``unicodedata.normalize()``.\n\n[5] The implementation computes this efficiently, without constructing\n lists or sorting.\n\n[6] Earlier versions of Python used lexicographic comparison of the\n sorted (key, value) lists, but this was very expensive for the\n common case of comparing for equality. An even earlier version of\n Python compared dictionaries by identity only, but this caused\n surprises because people expected to be able to test a dictionary\n for emptiness by comparing it to ``{}``.\n\n[7] Due to automatic garbage-collection, free lists, and the dynamic\n nature of descriptors, you may notice seemingly unusual behaviour\n in certain uses of the ``is`` operator, like those involving\n comparisons between instance methods, or constants. Check their\n documentation for more info.\n\n[8] The power operator ``**`` binds less tightly than an arithmetic or\n bitwise unary operator on its right, that is, ``2**-1`` is\n ``0.5``.\n', + 'objects': u'\nObjects, values and types\n*************************\n\n*Objects* are Python\'s abstraction for data. All data in a Python\nprogram is represented by objects or by relations between objects. (In\na sense, and in conformance to Von Neumann\'s model of a "stored\nprogram computer," code is also represented by objects.)\n\nEvery object has an identity, a type and a value. An object\'s\n*identity* never changes once it has been created; you may think of it\nas the object\'s address in memory. 
The \'``is``\' operator compares the\nidentity of two objects; the ``id()`` function returns an integer\nrepresenting its identity (currently implemented as its address). An\nobject\'s *type* is also unchangeable. [1] An object\'s type determines\nthe operations that the object supports (e.g., "does it have a\nlength?") and also defines the possible values for objects of that\ntype. The ``type()`` function returns an object\'s type (which is an\nobject itself). The *value* of some objects can change. Objects\nwhose value can change are said to be *mutable*; objects whose value\nis unchangeable once they are created are called *immutable*. (The\nvalue of an immutable container object that contains a reference to a\nmutable object can change when the latter\'s value is changed; however\nthe container is still considered immutable, because the collection of\nobjects it contains cannot be changed. So, immutability is not\nstrictly the same as having an unchangeable value, it is more subtle.)\nAn object\'s mutability is determined by its type; for instance,\nnumbers, strings and tuples are immutable, while dictionaries and\nlists are mutable.\n\nObjects are never explicitly destroyed; however, when they become\nunreachable they may be garbage-collected. An implementation is\nallowed to postpone garbage collection or omit it altogether --- it is\na matter of implementation quality how garbage collection is\nimplemented, as long as no objects are collected that are still\nreachable.\n\n**CPython implementation detail:** CPython currently uses a reference-\ncounting scheme with (optional) delayed detection of cyclically linked\ngarbage, which collects most objects as soon as they become\nunreachable, but is not guaranteed to collect garbage containing\ncircular references. See the documentation of the ``gc`` module for\ninformation on controlling the collection of cyclic garbage. Other\nimplementations act differently and CPython may change. Do not depend\non immediate finalization of objects when they become unreachable (ex:\nalways close files).\n\nNote that the use of the implementation\'s tracing or debugging\nfacilities may keep objects alive that would normally be collectable.\nAlso note that catching an exception with a \'``try``...``except``\'\nstatement may keep objects alive.\n\nSome objects contain references to "external" resources such as open\nfiles or windows. It is understood that these resources are freed\nwhen the object is garbage-collected, but since garbage collection is\nnot guaranteed to happen, such objects also provide an explicit way to\nrelease the external resource, usually a ``close()`` method. Programs\nare strongly recommended to explicitly close such objects. The\n\'``try``...``finally``\' statement provides a convenient way to do\nthis.\n\nSome objects contain references to other objects; these are called\n*containers*. Examples of containers are tuples, lists and\ndictionaries. The references are part of a container\'s value. In\nmost cases, when we talk about the value of a container, we imply the\nvalues, not the identities of the contained objects; however, when we\ntalk about the mutability of a container, only the identities of the\nimmediately contained objects are implied. So, if an immutable\ncontainer (like a tuple) contains a reference to a mutable object, its\nvalue changes if that mutable object is changed.\n\nTypes affect almost all aspects of object behavior. 
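The container point above in runnable form, together with the identity caveat that follows (the small-integer behaviour is implementation-dependent):

    inner = []
    t = (1, inner)          # t itself is immutable...
    inner.append(2)
    print t                 # (1, [2]) -- ...but its value reflects the list

    a = 1; b = 1
    print a is b            # may be True: CPython caches small ints
    c = []; d = []
    print c is d            # always False: two distinct new lists
    c = d = []
    print c is d            # True: one list bound to two names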
Even the\nimportance of object identity is affected in some sense: for immutable\ntypes, operations that compute new values may actually return a\nreference to any existing object with the same type and value, while\nfor mutable objects this is not allowed. E.g., after ``a = 1; b =\n1``, ``a`` and ``b`` may or may not refer to the same object with the\nvalue one, depending on the implementation, but after ``c = []; d =\n[]``, ``c`` and ``d`` are guaranteed to refer to two different,\nunique, newly created empty lists. (Note that ``c = d = []`` assigns\nthe same object to both ``c`` and ``d``.)\n', + 'operator-summary': u'\nSummary\n*******\n\nThe following table summarizes the operator precedences in Python,\nfrom lowest precedence (least binding) to highest precedence (most\nbinding). Operators in the same box have the same precedence. Unless\nthe syntax is explicitly given, operators are binary. Operators in\nthe same box group left to right (except for comparisons, including\ntests, which all have the same precedence and chain from left to right\n--- see section *Comparisons* --- and exponentiation, which groups\nfrom right to left).\n\n+-------------------------------------------------+---------------------------------------+\n| Operator | Description |\n+=================================================+=======================================+\n| ``lambda`` | Lambda expression |\n+-------------------------------------------------+---------------------------------------+\n| ``if`` -- ``else`` | Conditional expression |\n+-------------------------------------------------+---------------------------------------+\n| ``or`` | Boolean OR |\n+-------------------------------------------------+---------------------------------------+\n| ``and`` | Boolean AND |\n+-------------------------------------------------+---------------------------------------+\n| ``not`` *x* | Boolean NOT |\n+-------------------------------------------------+---------------------------------------+\n| ``in``, ``not`` ``in``, ``is``, ``is not``, | Comparisons, including membership |\n| ``<``, ``<=``, ``>``, ``>=``, ``<>``, ``!=``, | tests and identity tests, |\n| ``==`` | |\n+-------------------------------------------------+---------------------------------------+\n| ``|`` | Bitwise OR |\n+-------------------------------------------------+---------------------------------------+\n| ``^`` | Bitwise XOR |\n+-------------------------------------------------+---------------------------------------+\n| ``&`` | Bitwise AND |\n+-------------------------------------------------+---------------------------------------+\n| ``<<``, ``>>`` | Shifts |\n+-------------------------------------------------+---------------------------------------+\n| ``+``, ``-`` | Addition and subtraction |\n+-------------------------------------------------+---------------------------------------+\n| ``*``, ``/``, ``//``, ``%`` | Multiplication, division, remainder |\n| | [8] |\n+-------------------------------------------------+---------------------------------------+\n| ``+x``, ``-x``, ``~x`` | Positive, negative, bitwise NOT |\n+-------------------------------------------------+---------------------------------------+\n| ``**`` | Exponentiation [9] |\n+-------------------------------------------------+---------------------------------------+\n| ``x[index]``, ``x[index:index]``, | Subscription, slicing, call, |\n| ``x(arguments...)``, ``x.attribute`` | attribute reference 
|\n+-------------------------------------------------+---------------------------------------+\n| ``(expressions...)``, ``[expressions...]``, | Binding or tuple display, list |\n| ``{key:datum...}``, ```expressions...``` | display, dictionary display, string |\n| | conversion |\n+-------------------------------------------------+---------------------------------------+\n\n-[ Footnotes ]-\n\n[1] In Python 2.3 and later releases, a list comprehension "leaks" the\n control variables of each ``for`` it contains into the containing\n scope. However, this behavior is deprecated, and relying on it\n will not work in Python 3.0\n\n[2] While ``abs(x%y) < abs(y)`` is true mathematically, for floats it\n may not be true numerically due to roundoff. For example, and\n assuming a platform on which a Python float is an IEEE 754 double-\n precision number, in order that ``-1e-100 % 1e100`` have the same\n sign as ``1e100``, the computed result is ``-1e-100 + 1e100``,\n which is numerically exactly equal to ``1e100``. The function\n ``math.fmod()`` returns a result whose sign matches the sign of\n the first argument instead, and so returns ``-1e-100`` in this\n case. Which approach is more appropriate depends on the\n application.\n\n[3] If x is very close to an exact integer multiple of y, it\'s\n possible for ``floor(x/y)`` to be one larger than ``(x-x%y)/y``\n due to rounding. In such cases, Python returns the latter result,\n in order to preserve that ``divmod(x,y)[0] * y + x % y`` be very\n close to ``x``.\n\n[4] While comparisons between unicode strings make sense at the byte\n level, they may be counter-intuitive to users. For example, the\n strings ``u"\\u00C7"`` and ``u"\\u0043\\u0327"`` compare differently,\n even though they both represent the same unicode character (LATIN\n CAPITAL LETTER C WITH CEDILLA). To compare strings in a human\n recognizable way, compare using ``unicodedata.normalize()``.\n\n[5] The implementation computes this efficiently, without constructing\n lists or sorting.\n\n[6] Earlier versions of Python used lexicographic comparison of the\n sorted (key, value) lists, but this was very expensive for the\n common case of comparing for equality. An even earlier version of\n Python compared dictionaries by identity only, but this caused\n surprises because people expected to be able to test a dictionary\n for emptiness by comparing it to ``{}``.\n\n[7] Due to automatic garbage-collection, free lists, and the dynamic\n nature of descriptors, you may notice seemingly unusual behaviour\n in certain uses of the ``is`` operator, like those involving\n comparisons between instance methods, or constants. Check their\n documentation for more info.\n\n[8] The ``%`` operator is also used for string formatting; the same\n precedence applies.\n\n[9] The power operator ``**`` binds less tightly than an arithmetic or\n bitwise unary operator on its right, that is, ``2**-1`` is\n ``0.5``.\n', 'pass': u'\nThe ``pass`` statement\n**********************\n\n pass_stmt ::= "pass"\n\n``pass`` is a null operation --- when it is executed, nothing happens.\nIt is useful as a placeholder when a statement is required\nsyntactically, but no code needs to be executed, for example:\n\n def f(arg): pass # a function that does nothing (yet)\n\n class C: pass # a class with no methods (yet)\n', 'power': u'\nThe power operator\n******************\n\nThe power operator binds more tightly than unary operators on its\nleft; it binds less tightly than unary operators on its right. 
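A few expressions exercising the precedence notes above (string ``%`` formatting shares the precedence of the arithmetic ``%``, and ``**`` binds less tightly than a unary operator on its right):

    print 2 ** -1            # 0.5: the -1 is applied before **
    print -1 ** 2            # -1: parsed as -(1 ** 2)
    print (-1) ** 2          # 1

    # String formatting groups like the arithmetic %, left to right with *:
    print "x=%d " % 5 * 2    # "x=5 x=5 " -- format first, then repeat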
The\nsyntax is:\n\n power ::= primary ["**" u_expr]\n\nThus, in an unparenthesized sequence of power and unary operators, the\noperators are evaluated from right to left (this does not constrain\nthe evaluation order for the operands): ``-1**2`` results in ``-1``.\n\nThe power operator has the same semantics as the built-in ``pow()``\nfunction, when called with two arguments: it yields its left argument\nraised to the power of its right argument. The numeric arguments are\nfirst converted to a common type. The result type is that of the\narguments after coercion.\n\nWith mixed operand types, the coercion rules for binary arithmetic\noperators apply. For int and long int operands, the result has the\nsame type as the operands (after coercion) unless the second argument\nis negative; in that case, all arguments are converted to float and a\nfloat result is delivered. For example, ``10**2`` returns ``100``, but\n``10**-2`` returns ``0.01``. (This last feature was added in Python\n2.2. In Python 2.1 and before, if both arguments were of integer types\nand the second argument was negative, an exception was raised).\n\nRaising ``0.0`` to a negative power results in a\n``ZeroDivisionError``. Raising a negative number to a fractional power\nresults in a ``ValueError``.\n', 'print': u'\nThe ``print`` statement\n***********************\n\n print_stmt ::= "print" ([expression ("," expression)* [","]]\n | ">>" expression [("," expression)+ [","]])\n\n``print`` evaluates each expression in turn and writes the resulting\nobject to standard output (see below). If an object is not a string,\nit is first converted to a string using the rules for string\nconversions. The (resulting or original) string is then written. A\nspace is written before each object is (converted and) written, unless\nthe output system believes it is positioned at the beginning of a\nline. This is the case (1) when no characters have yet been written\nto standard output, (2) when the last character written to standard\noutput is a whitespace character except ``\' \'``, or (3) when the last\nwrite operation on standard output was not a ``print`` statement. (In\nsome cases it may be functional to write an empty string to standard\noutput for this reason.)\n\nNote: Objects which act like file objects but which are not the built-in\n file objects often do not properly emulate this aspect of the file\n object\'s behavior, so it is best not to rely on this.\n\nA ``\'\\n\'`` character is written at the end, unless the ``print``\nstatement ends with a comma. This is the only action if the statement\ncontains just the keyword ``print``.\n\nStandard output is defined as the file object named ``stdout`` in the\nbuilt-in module ``sys``. If no such object exists, or if it does not\nhave a ``write()`` method, a ``RuntimeError`` exception is raised.\n\n``print`` also has an extended form, defined by the second portion of\nthe syntax described above. This form is sometimes referred to as\n"``print`` chevron." In this form, the first expression after the\n``>>`` must evaluate to a "file-like" object, specifically an object\nthat has a ``write()`` method as described above. With this extended\nform, the subsequent expressions are printed to this file object. 
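The chevron form just described, sketched with ``sys.stderr`` and an in-memory file-like object:

    import sys
    from StringIO import StringIO

    print >> sys.stderr, "a warning"       # written to standard error

    buf = StringIO()                       # anything with a write() method
    print >> buf, "hello,", "world"
    print repr(buf.getvalue())             # 'hello, world\n'

    print >> None, "back to sys.stdout"    # None selects sys.stdout again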
If\nthe first expression evaluates to ``None``, then ``sys.stdout`` is\nused as the file for output.\n', @@ -63,21 +63,21 @@ 'shifting': u'\nShifting operations\n*******************\n\nThe shifting operations have lower priority than the arithmetic\noperations:\n\n shift_expr ::= a_expr | shift_expr ( "<<" | ">>" ) a_expr\n\nThese operators accept plain or long integers as arguments. The\narguments are converted to a common type. They shift the first\nargument to the left or right by the number of bits given by the\nsecond argument.\n\nA right shift by *n* bits is defined as division by ``pow(2, n)``. A\nleft shift by *n* bits is defined as multiplication with ``pow(2,\nn)``. Negative shift counts raise a ``ValueError`` exception.\n\nNote: In the current implementation, the right-hand operand is required to\n be at most ``sys.maxsize``. If the right-hand operand is larger\n than ``sys.maxsize`` an ``OverflowError`` exception is raised.\n', 'slicings': u'\nSlicings\n********\n\nA slicing selects a range of items in a sequence object (e.g., a\nstring, tuple or list). Slicings may be used as expressions or as\ntargets in assignment or ``del`` statements. The syntax for a\nslicing:\n\n slicing ::= simple_slicing | extended_slicing\n simple_slicing ::= primary "[" short_slice "]"\n extended_slicing ::= primary "[" slice_list "]"\n slice_list ::= slice_item ("," slice_item)* [","]\n slice_item ::= expression | proper_slice | ellipsis\n proper_slice ::= short_slice | long_slice\n short_slice ::= [lower_bound] ":" [upper_bound]\n long_slice ::= short_slice ":" [stride]\n lower_bound ::= expression\n upper_bound ::= expression\n stride ::= expression\n ellipsis ::= "..."\n\nThere is ambiguity in the formal syntax here: anything that looks like\nan expression list also looks like a slice list, so any subscription\ncan be interpreted as a slicing. Rather than further complicating the\nsyntax, this is disambiguated by defining that in this case the\ninterpretation as a subscription takes priority over the\ninterpretation as a slicing (this is the case if the slice list\ncontains no proper slice nor ellipses). Similarly, when the slice\nlist has exactly one short slice and no trailing comma, the\ninterpretation as a simple slicing takes priority over that as an\nextended slicing.\n\nThe semantics for a simple slicing are as follows. The primary must\nevaluate to a sequence object. The lower and upper bound expressions,\nif present, must evaluate to plain integers; defaults are zero and the\n``sys.maxint``, respectively. If either bound is negative, the\nsequence\'s length is added to it. The slicing now selects all items\nwith index *k* such that ``i <= k < j`` where *i* and *j* are the\nspecified lower and upper bounds. This may be an empty sequence. It\nis not an error if *i* or *j* lie outside the range of valid indexes\n(such items don\'t exist so they aren\'t selected).\n\nThe semantics for an extended slicing are as follows. The primary\nmust evaluate to a mapping object, and it is indexed with a key that\nis constructed from the slice list, as follows. If the slice list\ncontains at least one comma, the key is a tuple containing the\nconversion of the slice items; otherwise, the conversion of the lone\nslice item is the key. The conversion of a slice item that is an\nexpression is that expression. The conversion of an ellipsis slice\nitem is the built-in ``Ellipsis`` object. 
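A sketch of how the slice forms above arrive at ``__getitem__``; ``Probe`` is an invented class that simply returns the key it was given:

    class Probe(object):
        def __getitem__(self, key):
            return key

    p = Probe()
    print p[1:10:2]          # slice(1, 10, 2)
    print p[1:2, ..., 3]     # (slice(1, 2, None), Ellipsis, 3)

    # Simple slicing of a real sequence: negative bounds have the length
    # added, and out-of-range bounds are not an error.
    s = "abcdef"
    print s[1:4]             # 'bcd'
    print s[-4:-1]           # 'cde'
    print s[2:100]           # 'cdef'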
The conversion of a proper\nslice is a slice object (see section *The standard type hierarchy*)\nwhose ``start``, ``stop`` and ``step`` attributes are the values of\nthe expressions given as lower bound, upper bound and stride,\nrespectively, substituting ``None`` for missing expressions.\n', 'specialattrs': u"\nSpecial Attributes\n******************\n\nThe implementation adds a few special read-only attributes to several\nobject types, where they are relevant. Some of these are not reported\nby the ``dir()`` built-in function.\n\nobject.__dict__\n\n A dictionary or other mapping object used to store an object's\n (writable) attributes.\n\nobject.__methods__\n\n Deprecated since version 2.2: Use the built-in function ``dir()``\n to get a list of an object's attributes. This attribute is no\n longer available.\n\nobject.__members__\n\n Deprecated since version 2.2: Use the built-in function ``dir()``\n to get a list of an object's attributes. This attribute is no\n longer available.\n\ninstance.__class__\n\n The class to which a class instance belongs.\n\nclass.__bases__\n\n The tuple of base classes of a class object.\n\nclass.__name__\n\n The name of the class or type.\n\nThe following attributes are only supported by *new-style class*es.\n\nclass.__mro__\n\n This attribute is a tuple of classes that are considered when\n looking for base classes during method resolution.\n\nclass.mro()\n\n This method can be overridden by a metaclass to customize the\n method resolution order for its instances. It is called at class\n instantiation, and its result is stored in ``__mro__``.\n\nclass.__subclasses__()\n\n Each new-style class keeps a list of weak references to its\n immediate subclasses. This method returns a list of all those\n references still alive. Example:\n\n >>> int.__subclasses__()\n []\n\n-[ Footnotes ]-\n\n[1] Additional information on these special methods may be found in\n the Python Reference Manual (*Basic customization*).\n\n[2] As a consequence, the list ``[1, 2]`` is considered equal to\n ``[1.0, 2.0]``, and similarly for tuples.\n\n[3] They must have since the parser can't tell the type of the\n operands.\n\n[4] To format only a tuple you should therefore provide a singleton\n tuple whose only element is the tuple to be formatted.\n\n[5] The advantage of leaving the newline on is that returning an empty\n string is then an unambiguous EOF indication. It is also possible\n (in cases where it might matter, for example, if you want to make\n an exact copy of a file while scanning its lines) to tell whether\n the last line of a file ended in a newline or not (yes this\n happens!).\n", - 'specialnames': u'\nSpecial method names\n********************\n\nA class can implement certain operations that are invoked by special\nsyntax (such as arithmetic operations or subscripting and slicing) by\ndefining methods with special names. This is Python\'s approach to\n*operator overloading*, allowing classes to define their own behavior\nwith respect to language operators. For instance, if a class defines\na method named ``__getitem__()``, and ``x`` is an instance of this\nclass, then ``x[i]`` is roughly equivalent to ``x.__getitem__(i)`` for\nold-style classes and ``type(x).__getitem__(x, i)`` for new-style\nclasses. 
Except where mentioned, attempts to execute an operation\nraise an exception when no appropriate method is defined (typically\n``AttributeError`` or ``TypeError``).\n\nWhen implementing a class that emulates any built-in type, it is\nimportant that the emulation only be implemented to the degree that it\nmakes sense for the object being modelled. For example, some\nsequences may work well with retrieval of individual elements, but\nextracting a slice may not make sense. (One example of this is the\n``NodeList`` interface in the W3C\'s Document Object Model.)\n\n\nBasic customization\n===================\n\nobject.__new__(cls[, ...])\n\n Called to create a new instance of class *cls*. ``__new__()`` is a\n static method (special-cased so you need not declare it as such)\n that takes the class of which an instance was requested as its\n first argument. The remaining arguments are those passed to the\n object constructor expression (the call to the class). The return\n value of ``__new__()`` should be the new object instance (usually\n an instance of *cls*).\n\n Typical implementations create a new instance of the class by\n invoking the superclass\'s ``__new__()`` method using\n ``super(currentclass, cls).__new__(cls[, ...])`` with appropriate\n arguments and then modifying the newly-created instance as\n necessary before returning it.\n\n If ``__new__()`` returns an instance of *cls*, then the new\n instance\'s ``__init__()`` method will be invoked like\n ``__init__(self[, ...])``, where *self* is the new instance and the\n remaining arguments are the same as were passed to ``__new__()``.\n\n If ``__new__()`` does not return an instance of *cls*, then the new\n instance\'s ``__init__()`` method will not be invoked.\n\n ``__new__()`` is intended mainly to allow subclasses of immutable\n types (like int, str, or tuple) to customize instance creation. It\n is also commonly overridden in custom metaclasses in order to\n customize class creation.\n\nobject.__init__(self[, ...])\n\n Called when the instance is created. The arguments are those\n passed to the class constructor expression. If a base class has an\n ``__init__()`` method, the derived class\'s ``__init__()`` method,\n if any, must explicitly call it to ensure proper initialization of\n the base class part of the instance; for example:\n ``BaseClass.__init__(self, [args...])``. As a special constraint\n on constructors, no value may be returned; doing so will cause a\n ``TypeError`` to be raised at runtime.\n\nobject.__del__(self)\n\n Called when the instance is about to be destroyed. This is also\n called a destructor. If a base class has a ``__del__()`` method,\n the derived class\'s ``__del__()`` method, if any, must explicitly\n call it to ensure proper deletion of the base class part of the\n instance. Note that it is possible (though not recommended!) for\n the ``__del__()`` method to postpone destruction of the instance by\n creating a new reference to it. It may then be called at a later\n time when this new reference is deleted. It is not guaranteed that\n ``__del__()`` methods are called for objects that still exist when\n the interpreter exits.\n\n Note: ``del x`` doesn\'t directly call ``x.__del__()`` --- the former\n decrements the reference count for ``x`` by one, and the latter\n is only called when ``x``\'s reference count reaches zero. 
Some\n common situations that may prevent the reference count of an\n object from going to zero include: circular references between\n objects (e.g., a doubly-linked list or a tree data structure with\n parent and child pointers); a reference to the object on the\n stack frame of a function that caught an exception (the traceback\n stored in ``sys.exc_traceback`` keeps the stack frame alive); or\n a reference to the object on the stack frame that raised an\n unhandled exception in interactive mode (the traceback stored in\n ``sys.last_traceback`` keeps the stack frame alive). The first\n situation can only be remedied by explicitly breaking the cycles;\n the latter two situations can be resolved by storing ``None`` in\n ``sys.exc_traceback`` or ``sys.last_traceback``. Circular\n references which are garbage are detected when the option cycle\n detector is enabled (it\'s on by default), but can only be cleaned\n up if there are no Python-level ``__del__()`` methods involved.\n Refer to the documentation for the ``gc`` module for more\n information about how ``__del__()`` methods are handled by the\n cycle detector, particularly the description of the ``garbage``\n value.\n\n Warning: Due to the precarious circumstances under which ``__del__()``\n methods are invoked, exceptions that occur during their execution\n are ignored, and a warning is printed to ``sys.stderr`` instead.\n Also, when ``__del__()`` is invoked in response to a module being\n deleted (e.g., when execution of the program is done), other\n globals referenced by the ``__del__()`` method may already have\n been deleted or in the process of being torn down (e.g. the\n import machinery shutting down). For this reason, ``__del__()``\n methods should do the absolute minimum needed to maintain\n external invariants. Starting with version 1.5, Python\n guarantees that globals whose name begins with a single\n underscore are deleted from their module before other globals are\n deleted; if no other references to such globals exist, this may\n help in assuring that imported modules are still available at the\n time when the ``__del__()`` method is called.\n\nobject.__repr__(self)\n\n Called by the ``repr()`` built-in function and by string\n conversions (reverse quotes) to compute the "official" string\n representation of an object. If at all possible, this should look\n like a valid Python expression that could be used to recreate an\n object with the same value (given an appropriate environment). If\n this is not possible, a string of the form ``<...some useful\n description...>`` should be returned. The return value must be a\n string object. If a class defines ``__repr__()`` but not\n ``__str__()``, then ``__repr__()`` is also used when an "informal"\n string representation of instances of that class is required.\n\n This is typically used for debugging, so it is important that the\n representation is information-rich and unambiguous.\n\nobject.__str__(self)\n\n Called by the ``str()`` built-in function and by the ``print``\n statement to compute the "informal" string representation of an\n object. This differs from ``__repr__()`` in that it does not have\n to be a valid Python expression: a more convenient or concise\n representation may be used instead. 
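As a small illustration of the ``__repr__()``/``__str__()`` split described above (the ``Celsius`` class is made up for the example):

    class Celsius(object):
        # Made-up value type: __repr__ aims to be unambiguous, __str__ readable.
        def __init__(self, degrees):
            self.degrees = degrees
        def __repr__(self):
            return "Celsius(%r)" % self.degrees
        def __str__(self):
            return "%g deg C" % self.degrees

    t = Celsius(21.5)
    assert repr(t) == "Celsius(21.5)"
    assert str(t) == "21.5 deg C"
    # Had __str__ been left out, str(t) would fall back to __repr__().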
The return value must be a\n string object.\n\nobject.__lt__(self, other)\nobject.__le__(self, other)\nobject.__eq__(self, other)\nobject.__ne__(self, other)\nobject.__gt__(self, other)\nobject.__ge__(self, other)\n\n New in version 2.1.\n\n These are the so-called "rich comparison" methods, and are called\n for comparison operators in preference to ``__cmp__()`` below. The\n correspondence between operator symbols and method names is as\n follows: ``xy`` call ``x.__ne__(y)``, ``x>y`` calls ``x.__gt__(y)``, and\n ``x>=y`` calls ``x.__ge__(y)``.\n\n A rich comparison method may return the singleton\n ``NotImplemented`` if it does not implement the operation for a\n given pair of arguments. By convention, ``False`` and ``True`` are\n returned for a successful comparison. However, these methods can\n return any value, so if the comparison operator is used in a\n Boolean context (e.g., in the condition of an ``if`` statement),\n Python will call ``bool()`` on the value to determine if the result\n is true or false.\n\n There are no implied relationships among the comparison operators.\n The truth of ``x==y`` does not imply that ``x!=y`` is false.\n Accordingly, when defining ``__eq__()``, one should also define\n ``__ne__()`` so that the operators will behave as expected. See\n the paragraph on ``__hash__()`` for some important notes on\n creating *hashable* objects which support custom comparison\n operations and are usable as dictionary keys.\n\n There are no swapped-argument versions of these methods (to be used\n when the left argument does not support the operation but the right\n argument does); rather, ``__lt__()`` and ``__gt__()`` are each\n other\'s reflection, ``__le__()`` and ``__ge__()`` are each other\'s\n reflection, and ``__eq__()`` and ``__ne__()`` are their own\n reflection.\n\n Arguments to rich comparison methods are never coerced.\n\n To automatically generate ordering operations from a single root\n operation, see ``functools.total_ordering()``.\n\nobject.__cmp__(self, other)\n\n Called by comparison operations if rich comparison (see above) is\n not defined. Should return a negative integer if ``self < other``,\n zero if ``self == other``, a positive integer if ``self > other``.\n If no ``__cmp__()``, ``__eq__()`` or ``__ne__()`` operation is\n defined, class instances are compared by object identity\n ("address"). See also the description of ``__hash__()`` for some\n important notes on creating *hashable* objects which support custom\n comparison operations and are usable as dictionary keys. (Note: the\n restriction that exceptions are not propagated by ``__cmp__()`` has\n been removed since Python 1.5.)\n\nobject.__rcmp__(self, other)\n\n Changed in version 2.1: No longer supported.\n\nobject.__hash__(self)\n\n Called by built-in function ``hash()`` and for operations on\n members of hashed collections including ``set``, ``frozenset``, and\n ``dict``. ``__hash__()`` should return an integer. The only\n required property is that objects which compare equal have the same\n hash value; it is advised to somehow mix together (e.g. using\n exclusive or) the hash values for the components of the object that\n also play a part in comparison of objects.\n\n If a class does not define a ``__cmp__()`` or ``__eq__()`` method\n it should not define a ``__hash__()`` operation either; if it\n defines ``__cmp__()`` or ``__eq__()`` but not ``__hash__()``, its\n instances will not be usable in hashed collections. 
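A short sketch of defining ``__eq__()``, ``__ne__()`` and a consistent ``__hash__()`` together, so that instances stay usable as dictionary keys (``Account`` is a made-up example class):

    class Account(object):
        # Made-up value type: equal objects must hash alike.
        def __init__(self, number):
            self.number = number
        def __eq__(self, other):
            if not isinstance(other, Account):
                return NotImplemented
            return self.number == other.number
        def __ne__(self, other):
            result = self.__eq__(other)
            return result if result is NotImplemented else not result
        def __hash__(self):
            return hash(self.number)

    balances = {Account(42): 100}
    assert balances[Account(42)] == 100   # lookup works because equal objects hash alike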
If a class\n defines mutable objects and implements a ``__cmp__()`` or\n ``__eq__()`` method, it should not implement ``__hash__()``, since\n hashable collection implementations require that a object\'s hash\n value is immutable (if the object\'s hash value changes, it will be\n in the wrong hash bucket).\n\n User-defined classes have ``__cmp__()`` and ``__hash__()`` methods\n by default; with them, all objects compare unequal (except with\n themselves) and ``x.__hash__()`` returns ``id(x)``.\n\n Classes which inherit a ``__hash__()`` method from a parent class\n but change the meaning of ``__cmp__()`` or ``__eq__()`` such that\n the hash value returned is no longer appropriate (e.g. by switching\n to a value-based concept of equality instead of the default\n identity based equality) can explicitly flag themselves as being\n unhashable by setting ``__hash__ = None`` in the class definition.\n Doing so means that not only will instances of the class raise an\n appropriate ``TypeError`` when a program attempts to retrieve their\n hash value, but they will also be correctly identified as\n unhashable when checking ``isinstance(obj, collections.Hashable)``\n (unlike classes which define their own ``__hash__()`` to explicitly\n raise ``TypeError``).\n\n Changed in version 2.5: ``__hash__()`` may now also return a long\n integer object; the 32-bit integer is then derived from the hash of\n that object.\n\n Changed in version 2.6: ``__hash__`` may now be set to ``None`` to\n explicitly flag instances of a class as unhashable.\n\nobject.__nonzero__(self)\n\n Called to implement truth value testing and the built-in operation\n ``bool()``; should return ``False`` or ``True``, or their integer\n equivalents ``0`` or ``1``. When this method is not defined,\n ``__len__()`` is called, if it is defined, and the object is\n considered true if its result is nonzero. If a class defines\n neither ``__len__()`` nor ``__nonzero__()``, all its instances are\n considered true.\n\nobject.__unicode__(self)\n\n Called to implement ``unicode()`` built-in; should return a Unicode\n object. When this method is not defined, string conversion is\n attempted, and the result of string conversion is converted to\n Unicode using the system default encoding.\n\n\nCustomizing attribute access\n============================\n\nThe following methods can be defined to customize the meaning of\nattribute access (use of, assignment to, or deletion of ``x.name``)\nfor class instances.\n\nobject.__getattr__(self, name)\n\n Called when an attribute lookup has not found the attribute in the\n usual places (i.e. it is not an instance attribute nor is it found\n in the class tree for ``self``). ``name`` is the attribute name.\n This method should return the (computed) attribute value or raise\n an ``AttributeError`` exception.\n\n Note that if the attribute is found through the normal mechanism,\n ``__getattr__()`` is not called. (This is an intentional asymmetry\n between ``__getattr__()`` and ``__setattr__()``.) This is done both\n for efficiency reasons and because otherwise ``__getattr__()``\n would have no way to access other attributes of the instance. Note\n that at least for instance variables, you can fake total control by\n not inserting any values in the instance attribute dictionary (but\n instead inserting them in another object). 
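A minimal sketch of ``__getattr__()`` as a fallback for failed lookups (``CaseInsensitive`` is a made-up example; note that it raises ``AttributeError`` for genuinely missing names):

    class CaseInsensitive(object):
        # Made-up example: __getattr__ only runs when normal lookup fails.
        def __init__(self, **kwargs):
            self.__dict__.update(kwargs)
        def __getattr__(self, name):
            try:
                return self.__dict__[name.lower()]
            except KeyError:
                raise AttributeError(name)

    obj = CaseInsensitive(colour="red")
    assert obj.colour == "red"    # found normally, __getattr__ is not called
    assert obj.COLOUR == "red"    # normal lookup fails, __getattr__ computes it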
See the\n ``__getattribute__()`` method below for a way to actually get total\n control in new-style classes.\n\nobject.__setattr__(self, name, value)\n\n Called when an attribute assignment is attempted. This is called\n instead of the normal mechanism (i.e. store the value in the\n instance dictionary). *name* is the attribute name, *value* is the\n value to be assigned to it.\n\n If ``__setattr__()`` wants to assign to an instance attribute, it\n should not simply execute ``self.name = value`` --- this would\n cause a recursive call to itself. Instead, it should insert the\n value in the dictionary of instance attributes, e.g.,\n ``self.__dict__[name] = value``. For new-style classes, rather\n than accessing the instance dictionary, it should call the base\n class method with the same name, for example,\n ``object.__setattr__(self, name, value)``.\n\nobject.__delattr__(self, name)\n\n Like ``__setattr__()`` but for attribute deletion instead of\n assignment. This should only be implemented if ``del obj.name`` is\n meaningful for the object.\n\n\nMore attribute access for new-style classes\n-------------------------------------------\n\nThe following methods only apply to new-style classes.\n\nobject.__getattribute__(self, name)\n\n Called unconditionally to implement attribute accesses for\n instances of the class. If the class also defines\n ``__getattr__()``, the latter will not be called unless\n ``__getattribute__()`` either calls it explicitly or raises an\n ``AttributeError``. This method should return the (computed)\n attribute value or raise an ``AttributeError`` exception. In order\n to avoid infinite recursion in this method, its implementation\n should always call the base class method with the same name to\n access any attributes it needs, for example,\n ``object.__getattribute__(self, name)``.\n\n Note: This method may still be bypassed when looking up special methods\n as the result of implicit invocation via language syntax or\n built-in functions. See *Special method lookup for new-style\n classes*.\n\n\nImplementing Descriptors\n------------------------\n\nThe following methods only apply when an instance of the class\ncontaining the method (a so-called *descriptor* class) appears in the\nclass dictionary of another new-style class, known as the *owner*\nclass. In the examples below, "the attribute" refers to the attribute\nwhose name is the key of the property in the owner class\'\n``__dict__``. Descriptors can only be implemented as new-style\nclasses themselves.\n\nobject.__get__(self, instance, owner)\n\n Called to get the attribute of the owner class (class attribute\n access) or of an instance of that class (instance attribute\n access). *owner* is always the owner class, while *instance* is the\n instance that the attribute was accessed through, or ``None`` when\n the attribute is accessed through the *owner*. This method should\n return the (computed) attribute value or raise an\n ``AttributeError`` exception.\n\nobject.__set__(self, instance, value)\n\n Called to set the attribute on an instance *instance* of the owner\n class to a new value, *value*.\n\nobject.__delete__(self, instance)\n\n Called to delete the attribute on an instance *instance* of the\n owner class.\n\n\nInvoking Descriptors\n--------------------\n\nIn general, a descriptor is an object attribute with "binding\nbehavior", one whose attribute access has been overridden by methods\nin the descriptor protocol: ``__get__()``, ``__set__()``, and\n``__delete__()``. 
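A compact sketch of a data descriptor (it defines both ``__get__()`` and ``__set__()``); ``Positive`` and ``Order`` are made-up names:

    class Positive(object):
        # Made-up data descriptor that validates assignments.
        def __init__(self, name):
            self.name = name                      # key used in the instance __dict__
        def __get__(self, instance, owner):
            if instance is None:
                return self                       # accessed on the owner class itself
            return instance.__dict__[self.name]
        def __set__(self, instance, value):
            if value <= 0:
                raise ValueError("%s must be positive" % self.name)
            instance.__dict__[self.name] = value

    class Order(object):
        quantity = Positive("quantity")           # descriptor lives in the owner class

    o = Order()
    o.quantity = 3
    assert o.quantity == 3
    # o.quantity = 0 would raise ValueError via Positive.__set__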
If any of those methods are defined for an object,\nit is said to be a descriptor.\n\nThe default behavior for attribute access is to get, set, or delete\nthe attribute from an object\'s dictionary. For instance, ``a.x`` has a\nlookup chain starting with ``a.__dict__[\'x\']``, then\n``type(a).__dict__[\'x\']``, and continuing through the base classes of\n``type(a)`` excluding metaclasses.\n\nHowever, if the looked-up value is an object defining one of the\ndescriptor methods, then Python may override the default behavior and\ninvoke the descriptor method instead. Where this occurs in the\nprecedence chain depends on which descriptor methods were defined and\nhow they were called. Note that descriptors are only invoked for new\nstyle objects or classes (ones that subclass ``object()`` or\n``type()``).\n\nThe starting point for descriptor invocation is a binding, ``a.x``.\nHow the arguments are assembled depends on ``a``:\n\nDirect Call\n The simplest and least common call is when user code directly\n invokes a descriptor method: ``x.__get__(a)``.\n\nInstance Binding\n If binding to a new-style object instance, ``a.x`` is transformed\n into the call: ``type(a).__dict__[\'x\'].__get__(a, type(a))``.\n\nClass Binding\n If binding to a new-style class, ``A.x`` is transformed into the\n call: ``A.__dict__[\'x\'].__get__(None, A)``.\n\nSuper Binding\n If ``a`` is an instance of ``super``, then the binding ``super(B,\n obj).m()`` searches ``obj.__class__.__mro__`` for the base class\n ``A`` immediately preceding ``B`` and then invokes the descriptor\n with the call: ``A.__dict__[\'m\'].__get__(obj, A)``.\n\nFor instance bindings, the precedence of descriptor invocation depends\non the which descriptor methods are defined. A descriptor can define\nany combination of ``__get__()``, ``__set__()`` and ``__delete__()``.\nIf it does not define ``__get__()``, then accessing the attribute will\nreturn the descriptor object itself unless there is a value in the\nobject\'s instance dictionary. If the descriptor defines ``__set__()``\nand/or ``__delete__()``, it is a data descriptor; if it defines\nneither, it is a non-data descriptor. Normally, data descriptors\ndefine both ``__get__()`` and ``__set__()``, while non-data\ndescriptors have just the ``__get__()`` method. Data descriptors with\n``__set__()`` and ``__get__()`` defined always override a redefinition\nin an instance dictionary. In contrast, non-data descriptors can be\noverridden by instances.\n\nPython methods (including ``staticmethod()`` and ``classmethod()``)\nare implemented as non-data descriptors. Accordingly, instances can\nredefine and override methods. This allows individual instances to\nacquire behaviors that differ from other instances of the same class.\n\nThe ``property()`` function is implemented as a data descriptor.\nAccordingly, instances cannot override the behavior of a property.\n\n\n__slots__\n---------\n\nBy default, instances of both old and new-style classes have a\ndictionary for attribute storage. This wastes space for objects\nhaving very few instance variables. The space consumption can become\nacute when creating large numbers of instances.\n\nThe default can be overridden by defining *__slots__* in a new-style\nclass definition. The *__slots__* declaration takes a sequence of\ninstance variables and reserves just enough space in each instance to\nhold a value for each variable. 
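The effect of *__slots__* can be sketched in a few lines (``Point`` is a made-up class; only the two declared names are assignable and no per-instance ``__dict__`` is created):

    class Point(object):
        __slots__ = ("x", "y")
        def __init__(self, x, y):
            self.x = x
            self.y = y

    p = Point(1, 2)
    try:
        p.z = 3                    # not listed in __slots__
    except AttributeError:
        pass
    assert not hasattr(p, "__dict__")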
Space is saved because *__dict__* is\nnot created for each instance.\n\n__slots__\n\n This class variable can be assigned a string, iterable, or sequence\n of strings with variable names used by instances. If defined in a\n new-style class, *__slots__* reserves space for the declared\n variables and prevents the automatic creation of *__dict__* and\n *__weakref__* for each instance.\n\n New in version 2.2.\n\nNotes on using *__slots__*\n\n* When inheriting from a class without *__slots__*, the *__dict__*\n attribute of that class will always be accessible, so a *__slots__*\n definition in the subclass is meaningless.\n\n* Without a *__dict__* variable, instances cannot be assigned new\n variables not listed in the *__slots__* definition. Attempts to\n assign to an unlisted variable name raises ``AttributeError``. If\n dynamic assignment of new variables is desired, then add\n ``\'__dict__\'`` to the sequence of strings in the *__slots__*\n declaration.\n\n Changed in version 2.3: Previously, adding ``\'__dict__\'`` to the\n *__slots__* declaration would not enable the assignment of new\n attributes not specifically listed in the sequence of instance\n variable names.\n\n* Without a *__weakref__* variable for each instance, classes defining\n *__slots__* do not support weak references to its instances. If weak\n reference support is needed, then add ``\'__weakref__\'`` to the\n sequence of strings in the *__slots__* declaration.\n\n Changed in version 2.3: Previously, adding ``\'__weakref__\'`` to the\n *__slots__* declaration would not enable support for weak\n references.\n\n* *__slots__* are implemented at the class level by creating\n descriptors (*Implementing Descriptors*) for each variable name. As\n a result, class attributes cannot be used to set default values for\n instance variables defined by *__slots__*; otherwise, the class\n attribute would overwrite the descriptor assignment.\n\n* The action of a *__slots__* declaration is limited to the class\n where it is defined. As a result, subclasses will have a *__dict__*\n unless they also define *__slots__* (which must only contain names\n of any *additional* slots).\n\n* If a class defines a slot also defined in a base class, the instance\n variable defined by the base class slot is inaccessible (except by\n retrieving its descriptor directly from the base class). This\n renders the meaning of the program undefined. In the future, a\n check may be added to prevent this.\n\n* Nonempty *__slots__* does not work for classes derived from\n "variable-length" built-in types such as ``long``, ``str`` and\n ``tuple``.\n\n* Any non-string iterable may be assigned to *__slots__*. Mappings may\n also be used; however, in the future, special meaning may be\n assigned to the values corresponding to each key.\n\n* *__class__* assignment works only if both classes have the same\n *__slots__*.\n\n Changed in version 2.6: Previously, *__class__* assignment raised an\n error if either new or old class had *__slots__*.\n\n\nCustomizing class creation\n==========================\n\nBy default, new-style classes are constructed using ``type()``. A\nclass definition is read into a separate namespace and the value of\nclass name is bound to the result of ``type(name, bases, dict)``.\n\nWhen the class definition is read, if *__metaclass__* is defined then\nthe callable assigned to it will be called instead of ``type()``. 
This\nallows classes or functions to be written which monitor or alter the\nclass creation process:\n\n* Modifying the class dictionary prior to the class being created.\n\n* Returning an instance of another class -- essentially performing the\n role of a factory function.\n\nThese steps will have to be performed in the metaclass\'s ``__new__()``\nmethod -- ``type.__new__()`` can then be called from this method to\ncreate a class with different properties. This example adds a new\nelement to the class dictionary before creating the class:\n\n class metacls(type):\n def __new__(mcs, name, bases, dict):\n dict[\'foo\'] = \'metacls was here\'\n return type.__new__(mcs, name, bases, dict)\n\nYou can of course also override other class methods (or add new\nmethods); for example defining a custom ``__call__()`` method in the\nmetaclass allows custom behavior when the class is called, e.g. not\nalways creating a new instance.\n\n__metaclass__\n\n This variable can be any callable accepting arguments for ``name``,\n ``bases``, and ``dict``. Upon class creation, the callable is used\n instead of the built-in ``type()``.\n\n New in version 2.2.\n\nThe appropriate metaclass is determined by the following precedence\nrules:\n\n* If ``dict[\'__metaclass__\']`` exists, it is used.\n\n* Otherwise, if there is at least one base class, its metaclass is\n used (this looks for a *__class__* attribute first and if not found,\n uses its type).\n\n* Otherwise, if a global variable named __metaclass__ exists, it is\n used.\n\n* Otherwise, the old-style, classic metaclass (types.ClassType) is\n used.\n\nThe potential uses for metaclasses are boundless. Some ideas that have\nbeen explored including logging, interface checking, automatic\ndelegation, automatic property creation, proxies, frameworks, and\nautomatic resource locking/synchronization.\n\n\nCustomizing instance and subclass checks\n========================================\n\nNew in version 2.6.\n\nThe following methods are used to override the default behavior of the\n``isinstance()`` and ``issubclass()`` built-in functions.\n\nIn particular, the metaclass ``abc.ABCMeta`` implements these methods\nin order to allow the addition of Abstract Base Classes (ABCs) as\n"virtual base classes" to any class or type (including built-in\ntypes), including other ABCs.\n\nclass.__instancecheck__(self, instance)\n\n Return true if *instance* should be considered a (direct or\n indirect) instance of *class*. If defined, called to implement\n ``isinstance(instance, class)``.\n\nclass.__subclasscheck__(self, subclass)\n\n Return true if *subclass* should be considered a (direct or\n indirect) subclass of *class*. If defined, called to implement\n ``issubclass(subclass, class)``.\n\nNote that these methods are looked up on the type (metaclass) of a\nclass. 
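A small sketch of ``__instancecheck__()`` on a metaclass, assuming Python 2.6 or later (``DuckMeta``, ``Duck`` and ``Mallard`` are made-up names; ``__metaclass__`` is the Python 2 spelling):

    class DuckMeta(type):
        # Made-up metaclass: isinstance() succeeds for anything with a quack() method.
        def __instancecheck__(cls, instance):
            return hasattr(instance, "quack")

    class Duck(object):
        __metaclass__ = DuckMeta

    class Mallard(object):
        def quack(self):
            return "quack"

    assert isinstance(Mallard(), Duck)    # routed through DuckMeta.__instancecheck__
    assert not isinstance(42, Duck)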
They cannot be defined as class methods in the actual class.\nThis is consistent with the lookup of special methods that are called\non instances, only in this case the instance is itself a class.\n\nSee also:\n\n **PEP 3119** - Introducing Abstract Base Classes\n Includes the specification for customizing ``isinstance()`` and\n ``issubclass()`` behavior through ``__instancecheck__()`` and\n ``__subclasscheck__()``, with motivation for this functionality\n in the context of adding Abstract Base Classes (see the ``abc``\n module) to the language.\n\n\nEmulating callable objects\n==========================\n\nobject.__call__(self[, args...])\n\n Called when the instance is "called" as a function; if this method\n is defined, ``x(arg1, arg2, ...)`` is a shorthand for\n ``x.__call__(arg1, arg2, ...)``.\n\n\nEmulating container types\n=========================\n\nThe following methods can be defined to implement container objects.\nContainers usually are sequences (such as lists or tuples) or mappings\n(like dictionaries), but can represent other containers as well. The\nfirst set of methods is used either to emulate a sequence or to\nemulate a mapping; the difference is that for a sequence, the\nallowable keys should be the integers *k* for which ``0 <= k < N``\nwhere *N* is the length of the sequence, or slice objects, which\ndefine a range of items. (For backwards compatibility, the method\n``__getslice__()`` (see below) can also be defined to handle simple,\nbut not extended slices.) It is also recommended that mappings provide\nthe methods ``keys()``, ``values()``, ``items()``, ``has_key()``,\n``get()``, ``clear()``, ``setdefault()``, ``iterkeys()``,\n``itervalues()``, ``iteritems()``, ``pop()``, ``popitem()``,\n``copy()``, and ``update()`` behaving similar to those for Python\'s\nstandard dictionary objects. The ``UserDict`` module provides a\n``DictMixin`` class to help create those methods from a base set of\n``__getitem__()``, ``__setitem__()``, ``__delitem__()``, and\n``keys()``. Mutable sequences should provide methods ``append()``,\n``count()``, ``index()``, ``extend()``, ``insert()``, ``pop()``,\n``remove()``, ``reverse()`` and ``sort()``, like Python standard list\nobjects. Finally, sequence types should implement addition (meaning\nconcatenation) and multiplication (meaning repetition) by defining the\nmethods ``__add__()``, ``__radd__()``, ``__iadd__()``, ``__mul__()``,\n``__rmul__()`` and ``__imul__()`` described below; they should not\ndefine ``__coerce__()`` or other numerical operators. It is\nrecommended that both mappings and sequences implement the\n``__contains__()`` method to allow efficient use of the ``in``\noperator; for mappings, ``in`` should be equivalent of ``has_key()``;\nfor sequences, it should search through the values. It is further\nrecommended that both mappings and sequences implement the\n``__iter__()`` method to allow efficient iteration through the\ncontainer; for mappings, ``__iter__()`` should be the same as\n``iterkeys()``; for sequences, it should iterate through the values.\n\nobject.__len__(self)\n\n Called to implement the built-in function ``len()``. Should return\n the length of the object, an integer ``>=`` 0. Also, an object\n that doesn\'t define a ``__nonzero__()`` method and whose\n ``__len__()`` method returns zero is considered to be false in a\n Boolean context.\n\nobject.__getitem__(self, key)\n\n Called to implement evaluation of ``self[key]``. 
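The entry on emulating callable objects above boils down to a few lines (``Tally`` is a made-up class):

    class Tally(object):
        # Made-up callable object: an instance can be used like a function.
        def __init__(self):
            self.count = 0
        def __call__(self, amount=1):
            self.count += amount
            return self.count

    tally = Tally()
    assert tally() == 1          # equivalent to tally.__call__()
    assert tally(5) == 6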
For sequence\n types, the accepted keys should be integers and slice objects.\n Note that the special interpretation of negative indexes (if the\n class wishes to emulate a sequence type) is up to the\n ``__getitem__()`` method. If *key* is of an inappropriate type,\n ``TypeError`` may be raised; if of a value outside the set of\n indexes for the sequence (after any special interpretation of\n negative values), ``IndexError`` should be raised. For mapping\n types, if *key* is missing (not in the container), ``KeyError``\n should be raised.\n\n Note: ``for`` loops expect that an ``IndexError`` will be raised for\n illegal indexes to allow proper detection of the end of the\n sequence.\n\nobject.__setitem__(self, key, value)\n\n Called to implement assignment to ``self[key]``. Same note as for\n ``__getitem__()``. This should only be implemented for mappings if\n the objects support changes to the values for keys, or if new keys\n can be added, or for sequences if elements can be replaced. The\n same exceptions should be raised for improper *key* values as for\n the ``__getitem__()`` method.\n\nobject.__delitem__(self, key)\n\n Called to implement deletion of ``self[key]``. Same note as for\n ``__getitem__()``. This should only be implemented for mappings if\n the objects support removal of keys, or for sequences if elements\n can be removed from the sequence. The same exceptions should be\n raised for improper *key* values as for the ``__getitem__()``\n method.\n\nobject.__iter__(self)\n\n This method is called when an iterator is required for a container.\n This method should return a new iterator object that can iterate\n over all the objects in the container. For mappings, it should\n iterate over the keys of the container, and should also be made\n available as the method ``iterkeys()``.\n\n Iterator objects also need to implement this method; they are\n required to return themselves. For more information on iterator\n objects, see *Iterator Types*.\n\nobject.__reversed__(self)\n\n Called (if present) by the ``reversed()`` built-in to implement\n reverse iteration. It should return a new iterator object that\n iterates over all the objects in the container in reverse order.\n\n If the ``__reversed__()`` method is not provided, the\n ``reversed()`` built-in will fall back to using the sequence\n protocol (``__len__()`` and ``__getitem__()``). Objects that\n support the sequence protocol should only provide\n ``__reversed__()`` if they can provide an implementation that is\n more efficient than the one provided by ``reversed()``.\n\n New in version 2.6.\n\nThe membership test operators (``in`` and ``not in``) are normally\nimplemented as an iteration through a sequence. However, container\nobjects can supply the following special method with a more efficient\nimplementation, which also does not require the object be a sequence.\n\nobject.__contains__(self, item)\n\n Called to implement membership test operators. Should return true\n if *item* is in *self*, false otherwise. 
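A minimal mapping-style container, sketching the methods just described (``LowerDict`` is a made-up example that normalises keys to lower case):

    class LowerDict(object):
        # Made-up minimal mapping built on a plain dict.
        def __init__(self):
            self._data = {}
        def __setitem__(self, key, value):
            self._data[key.lower()] = value
        def __getitem__(self, key):
            try:
                return self._data[key.lower()]
            except KeyError:
                raise KeyError(key)           # missing keys raise KeyError, as described above
        def __delitem__(self, key):
            del self._data[key.lower()]
        def __iter__(self):
            return iter(self._data)           # iterate over keys, like a real mapping
        def __len__(self):
            return len(self._data)
        def __contains__(self, key):
            return key.lower() in self._data

    d = LowerDict()
    d["Spam"] = 1
    assert d["SPAM"] == 1 and "spam" in d and len(d) == 1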
For mapping objects, this\n should consider the keys of the mapping rather than the values or\n the key-item pairs.\n\n For objects that don\'t define ``__contains__()``, the membership\n test first tries iteration via ``__iter__()``, then the old\n sequence iteration protocol via ``__getitem__()``, see *this\n section in the language reference*.\n\n\nAdditional methods for emulation of sequence types\n==================================================\n\nThe following optional methods can be defined to further emulate\nsequence objects. Immutable sequences methods should at most only\ndefine ``__getslice__()``; mutable sequences might define all three\nmethods.\n\nobject.__getslice__(self, i, j)\n\n Deprecated since version 2.0: Support slice objects as parameters\n to the ``__getitem__()`` method. (However, built-in types in\n CPython currently still implement ``__getslice__()``. Therefore,\n you have to override it in derived classes when implementing\n slicing.)\n\n Called to implement evaluation of ``self[i:j]``. The returned\n object should be of the same type as *self*. Note that missing *i*\n or *j* in the slice expression are replaced by zero or\n ``sys.maxint``, respectively. If negative indexes are used in the\n slice, the length of the sequence is added to that index. If the\n instance does not implement the ``__len__()`` method, an\n ``AttributeError`` is raised. No guarantee is made that indexes\n adjusted this way are not still negative. Indexes which are\n greater than the length of the sequence are not modified. If no\n ``__getslice__()`` is found, a slice object is created instead, and\n passed to ``__getitem__()`` instead.\n\nobject.__setslice__(self, i, j, sequence)\n\n Called to implement assignment to ``self[i:j]``. Same notes for *i*\n and *j* as for ``__getslice__()``.\n\n This method is deprecated. If no ``__setslice__()`` is found, or\n for extended slicing of the form ``self[i:j:k]``, a slice object is\n created, and passed to ``__setitem__()``, instead of\n ``__setslice__()`` being called.\n\nobject.__delslice__(self, i, j)\n\n Called to implement deletion of ``self[i:j]``. Same notes for *i*\n and *j* as for ``__getslice__()``. This method is deprecated. If no\n ``__delslice__()`` is found, or for extended slicing of the form\n ``self[i:j:k]``, a slice object is created, and passed to\n ``__delitem__()``, instead of ``__delslice__()`` being called.\n\nNotice that these methods are only invoked when a single slice with a\nsingle colon is used, and the slice method is available. 
For slice\noperations involving extended slice notation, or in absence of the\nslice methods, ``__getitem__()``, ``__setitem__()`` or\n``__delitem__()`` is called with a slice object as argument.\n\nThe following example demonstrate how to make your program or module\ncompatible with earlier versions of Python (assuming that methods\n``__getitem__()``, ``__setitem__()`` and ``__delitem__()`` support\nslice objects as arguments):\n\n class MyClass:\n ...\n def __getitem__(self, index):\n ...\n def __setitem__(self, index, value):\n ...\n def __delitem__(self, index):\n ...\n\n if sys.version_info < (2, 0):\n # They won\'t be defined if version is at least 2.0 final\n\n def __getslice__(self, i, j):\n return self[max(0, i):max(0, j):]\n def __setslice__(self, i, j, seq):\n self[max(0, i):max(0, j):] = seq\n def __delslice__(self, i, j):\n del self[max(0, i):max(0, j):]\n ...\n\nNote the calls to ``max()``; these are necessary because of the\nhandling of negative indices before the ``__*slice__()`` methods are\ncalled. When negative indexes are used, the ``__*item__()`` methods\nreceive them as provided, but the ``__*slice__()`` methods get a\n"cooked" form of the index values. For each negative index value, the\nlength of the sequence is added to the index before calling the method\n(which may still result in a negative index); this is the customary\nhandling of negative indexes by the built-in sequence types, and the\n``__*item__()`` methods are expected to do this as well. However,\nsince they should already be doing that, negative indexes cannot be\npassed in; they must be constrained to the bounds of the sequence\nbefore being passed to the ``__*item__()`` methods. Calling ``max(0,\ni)`` conveniently returns the proper value.\n\n\nEmulating numeric types\n=======================\n\nThe following methods can be defined to emulate numeric objects.\nMethods corresponding to operations that are not supported by the\nparticular kind of number implemented (e.g., bitwise operations for\nnon-integral numbers) should be left undefined.\n\nobject.__add__(self, other)\nobject.__sub__(self, other)\nobject.__mul__(self, other)\nobject.__floordiv__(self, other)\nobject.__mod__(self, other)\nobject.__divmod__(self, other)\nobject.__pow__(self, other[, modulo])\nobject.__lshift__(self, other)\nobject.__rshift__(self, other)\nobject.__and__(self, other)\nobject.__xor__(self, other)\nobject.__or__(self, other)\n\n These methods are called to implement the binary arithmetic\n operations (``+``, ``-``, ``*``, ``//``, ``%``, ``divmod()``,\n ``pow()``, ``**``, ``<<``, ``>>``, ``&``, ``^``, ``|``). For\n instance, to evaluate the expression ``x + y``, where *x* is an\n instance of a class that has an ``__add__()`` method,\n ``x.__add__(y)`` is called. The ``__divmod__()`` method should be\n the equivalent to using ``__floordiv__()`` and ``__mod__()``; it\n should not be related to ``__truediv__()`` (described below). Note\n that ``__pow__()`` should be defined to accept an optional third\n argument if the ternary version of the built-in ``pow()`` function\n is to be supported.\n\n If one of those methods does not support the operation with the\n supplied arguments, it should return ``NotImplemented``.\n\nobject.__div__(self, other)\nobject.__truediv__(self, other)\n\n The division operator (``/``) is implemented by these methods. The\n ``__truediv__()`` method is used when ``__future__.division`` is in\n effect, otherwise ``__div__()`` is used. 
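A short sketch of a binary arithmetic method that returns ``NotImplemented`` for unsupported operand types (``Vector`` is a made-up class):

    class Vector(object):
        # Made-up 2-D vector.
        def __init__(self, x, y):
            self.x, self.y = x, y
        def __add__(self, other):
            if not isinstance(other, Vector):
                return NotImplemented         # let the other operand, or TypeError, take over
            return Vector(self.x + other.x, self.y + other.y)

    v = Vector(1, 2) + Vector(3, 4)
    assert (v.x, v.y) == (4, 6)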
If only one of these two\n methods is defined, the object will not support division in the\n alternate context; ``TypeError`` will be raised instead.\n\nobject.__radd__(self, other)\nobject.__rsub__(self, other)\nobject.__rmul__(self, other)\nobject.__rdiv__(self, other)\nobject.__rtruediv__(self, other)\nobject.__rfloordiv__(self, other)\nobject.__rmod__(self, other)\nobject.__rdivmod__(self, other)\nobject.__rpow__(self, other)\nobject.__rlshift__(self, other)\nobject.__rrshift__(self, other)\nobject.__rand__(self, other)\nobject.__rxor__(self, other)\nobject.__ror__(self, other)\n\n These methods are called to implement the binary arithmetic\n operations (``+``, ``-``, ``*``, ``/``, ``%``, ``divmod()``,\n ``pow()``, ``**``, ``<<``, ``>>``, ``&``, ``^``, ``|``) with\n reflected (swapped) operands. These functions are only called if\n the left operand does not support the corresponding operation and\n the operands are of different types. [2] For instance, to evaluate\n the expression ``x - y``, where *y* is an instance of a class that\n has an ``__rsub__()`` method, ``y.__rsub__(x)`` is called if\n ``x.__sub__(y)`` returns *NotImplemented*.\n\n Note that ternary ``pow()`` will not try calling ``__rpow__()``\n (the coercion rules would become too complicated).\n\n Note: If the right operand\'s type is a subclass of the left operand\'s\n type and that subclass provides the reflected method for the\n operation, this method will be called before the left operand\'s\n non-reflected method. This behavior allows subclasses to\n override their ancestors\' operations.\n\nobject.__iadd__(self, other)\nobject.__isub__(self, other)\nobject.__imul__(self, other)\nobject.__idiv__(self, other)\nobject.__itruediv__(self, other)\nobject.__ifloordiv__(self, other)\nobject.__imod__(self, other)\nobject.__ipow__(self, other[, modulo])\nobject.__ilshift__(self, other)\nobject.__irshift__(self, other)\nobject.__iand__(self, other)\nobject.__ixor__(self, other)\nobject.__ior__(self, other)\n\n These methods are called to implement the augmented arithmetic\n assignments (``+=``, ``-=``, ``*=``, ``/=``, ``//=``, ``%=``,\n ``**=``, ``<<=``, ``>>=``, ``&=``, ``^=``, ``|=``). These methods\n should attempt to do the operation in-place (modifying *self*) and\n return the result (which could be, but does not have to be,\n *self*). If a specific method is not defined, the augmented\n assignment falls back to the normal methods. For instance, to\n execute the statement ``x += y``, where *x* is an instance of a\n class that has an ``__iadd__()`` method, ``x.__iadd__(y)`` is\n called. If *x* is an instance of a class that does not define a\n ``__iadd__()`` method, ``x.__add__(y)`` and ``y.__radd__(x)`` are\n considered, as with the evaluation of ``x + y``.\n\nobject.__neg__(self)\nobject.__pos__(self)\nobject.__abs__(self)\nobject.__invert__(self)\n\n Called to implement the unary arithmetic operations (``-``, ``+``,\n ``abs()`` and ``~``).\n\nobject.__complex__(self)\nobject.__int__(self)\nobject.__long__(self)\nobject.__float__(self)\n\n Called to implement the built-in functions ``complex()``,\n ``int()``, ``long()``, and ``float()``. Should return a value of\n the appropriate type.\n\nobject.__oct__(self)\nobject.__hex__(self)\n\n Called to implement the built-in functions ``oct()`` and ``hex()``.\n Should return a string value.\n\nobject.__index__(self)\n\n Called to implement ``operator.index()``. Also called whenever\n Python needs an integer object (such as in slicing). 
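Reflected and in-place variants can be sketched together (``Metres`` is a made-up quantity type):

    class Metres(object):
        # Made-up quantity with a reflected and an in-place method.
        def __init__(self, value):
            self.value = value
        def __add__(self, other):
            return Metres(self.value + float(other))
        __radd__ = __add__                    # 3 + Metres(2) falls back to __radd__
        def __iadd__(self, other):
            self.value += float(other)        # in place: modify self and return it
            return self

    m = Metres(2)
    assert (3 + m).value == 5.0               # int.__add__ gives NotImplemented, so __radd__ runs
    m += 4
    assert m.value == 6.0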
Must return\n an integer (int or long).\n\n New in version 2.5.\n\nobject.__coerce__(self, other)\n\n Called to implement "mixed-mode" numeric arithmetic. Should either\n return a 2-tuple containing *self* and *other* converted to a\n common numeric type, or ``None`` if conversion is impossible. When\n the common type would be the type of ``other``, it is sufficient to\n return ``None``, since the interpreter will also ask the other\n object to attempt a coercion (but sometimes, if the implementation\n of the other type cannot be changed, it is useful to do the\n conversion to the other type here). A return value of\n ``NotImplemented`` is equivalent to returning ``None``.\n\n\nCoercion rules\n==============\n\nThis section used to document the rules for coercion. As the language\nhas evolved, the coercion rules have become hard to document\nprecisely; documenting what one version of one particular\nimplementation does is undesirable. Instead, here are some informal\nguidelines regarding coercion. In Python 3.0, coercion will not be\nsupported.\n\n* If the left operand of a % operator is a string or Unicode object,\n no coercion takes place and the string formatting operation is\n invoked instead.\n\n* It is no longer recommended to define a coercion operation. Mixed-\n mode operations on types that don\'t define coercion pass the\n original arguments to the operation.\n\n* New-style classes (those derived from ``object``) never invoke the\n ``__coerce__()`` method in response to a binary operator; the only\n time ``__coerce__()`` is invoked is when the built-in function\n ``coerce()`` is called.\n\n* For most intents and purposes, an operator that returns\n ``NotImplemented`` is treated the same as one that is not\n implemented at all.\n\n* Below, ``__op__()`` and ``__rop__()`` are used to signify the\n generic method names corresponding to an operator; ``__iop__()`` is\n used for the corresponding in-place operator. For example, for the\n operator \'``+``\', ``__add__()`` and ``__radd__()`` are used for the\n left and right variant of the binary operator, and ``__iadd__()``\n for the in-place variant.\n\n* For objects *x* and *y*, first ``x.__op__(y)`` is tried. If this is\n not implemented or returns ``NotImplemented``, ``y.__rop__(x)`` is\n tried. If this is also not implemented or returns\n ``NotImplemented``, a ``TypeError`` exception is raised. But see\n the following exception:\n\n* Exception to the previous item: if the left operand is an instance\n of a built-in type or a new-style class, and the right operand is an\n instance of a proper subclass of that type or class and overrides\n the base\'s ``__rop__()`` method, the right operand\'s ``__rop__()``\n method is tried *before* the left operand\'s ``__op__()`` method.\n\n This is done so that a subclass can completely override binary\n operators. Otherwise, the left operand\'s ``__op__()`` method would\n always accept the right operand: when an instance of a given class\n is expected, an instance of a subclass of that class is always\n acceptable.\n\n* When either operand type defines a coercion, this coercion is called\n before that type\'s ``__op__()`` or ``__rop__()`` method is called,\n but no sooner. If the coercion returns an object of a different\n type for the operand whose coercion is invoked, part of the process\n is redone using the new object.\n\n* When an in-place operator (like \'``+=``\') is used, if the left\n operand implements ``__iop__()``, it is invoked without any\n coercion. 
When the operation falls back to ``__op__()`` and/or\n ``__rop__()``, the normal coercion rules apply.\n\n* In ``x + y``, if *x* is a sequence that implements sequence\n concatenation, sequence concatenation is invoked.\n\n* In ``x * y``, if one operator is a sequence that implements sequence\n repetition, and the other is an integer (``int`` or ``long``),\n sequence repetition is invoked.\n\n* Rich comparisons (implemented by methods ``__eq__()`` and so on)\n never use coercion. Three-way comparison (implemented by\n ``__cmp__()``) does use coercion under the same conditions as other\n binary operations use it.\n\n* In the current implementation, the built-in numeric types ``int``,\n ``long``, ``float``, and ``complex`` do not use coercion. All these\n types implement a ``__coerce__()`` method, for use by the built-in\n ``coerce()`` function.\n\n Changed in version 2.7.\n\n\nWith Statement Context Managers\n===============================\n\nNew in version 2.5.\n\nA *context manager* is an object that defines the runtime context to\nbe established when executing a ``with`` statement. The context\nmanager handles the entry into, and the exit from, the desired runtime\ncontext for the execution of the block of code. Context managers are\nnormally invoked using the ``with`` statement (described in section\n*The with statement*), but can also be used by directly invoking their\nmethods.\n\nTypical uses of context managers include saving and restoring various\nkinds of global state, locking and unlocking resources, closing opened\nfiles, etc.\n\nFor more information on context managers, see *Context Manager Types*.\n\nobject.__enter__(self)\n\n Enter the runtime context related to this object. The ``with``\n statement will bind this method\'s return value to the target(s)\n specified in the ``as`` clause of the statement, if any.\n\nobject.__exit__(self, exc_type, exc_value, traceback)\n\n Exit the runtime context related to this object. The parameters\n describe the exception that caused the context to be exited. If the\n context was exited without an exception, all three arguments will\n be ``None``.\n\n If an exception is supplied, and the method wishes to suppress the\n exception (i.e., prevent it from being propagated), it should\n return a true value. Otherwise, the exception will be processed\n normally upon exit from this method.\n\n Note that ``__exit__()`` methods should not reraise the passed-in\n exception; this is the caller\'s responsibility.\n\nSee also:\n\n **PEP 0343** - The "with" statement\n The specification, background, and examples for the Python\n ``with`` statement.\n\n\nSpecial method lookup for old-style classes\n===========================================\n\nFor old-style classes, special methods are always looked up in exactly\nthe same way as any other method or attribute. This is the case\nregardless of whether the method is being looked up explicitly as in\n``x.__getitem__(i)`` or implicitly as in ``x[i]``.\n\nThis behaviour means that special methods may exhibit different\nbehaviour for different instances of a single old-style class if the\nappropriate special attributes are set differently:\n\n >>> class C:\n ... 
pass\n ...\n >>> c1 = C()\n >>> c2 = C()\n >>> c1.__len__ = lambda: 5\n >>> c2.__len__ = lambda: 9\n >>> len(c1)\n 5\n >>> len(c2)\n 9\n\n\nSpecial method lookup for new-style classes\n===========================================\n\nFor new-style classes, implicit invocations of special methods are\nonly guaranteed to work correctly if defined on an object\'s type, not\nin the object\'s instance dictionary. That behaviour is the reason why\nthe following code raises an exception (unlike the equivalent example\nwith old-style classes):\n\n >>> class C(object):\n ... pass\n ...\n >>> c = C()\n >>> c.__len__ = lambda: 5\n >>> len(c)\n Traceback (most recent call last):\n File "", line 1, in \n TypeError: object of type \'C\' has no len()\n\nThe rationale behind this behaviour lies with a number of special\nmethods such as ``__hash__()`` and ``__repr__()`` that are implemented\nby all objects, including type objects. If the implicit lookup of\nthese methods used the conventional lookup process, they would fail\nwhen invoked on the type object itself:\n\n >>> 1 .__hash__() == hash(1)\n True\n >>> int.__hash__() == hash(int)\n Traceback (most recent call last):\n File "", line 1, in \n TypeError: descriptor \'__hash__\' of \'int\' object needs an argument\n\nIncorrectly attempting to invoke an unbound method of a class in this\nway is sometimes referred to as \'metaclass confusion\', and is avoided\nby bypassing the instance when looking up special methods:\n\n >>> type(1).__hash__(1) == hash(1)\n True\n >>> type(int).__hash__(int) == hash(int)\n True\n\nIn addition to bypassing any instance attributes in the interest of\ncorrectness, implicit special method lookup generally also bypasses\nthe ``__getattribute__()`` method even of the object\'s metaclass:\n\n >>> class Meta(type):\n ... def __getattribute__(*args):\n ... print "Metaclass getattribute invoked"\n ... return type.__getattribute__(*args)\n ...\n >>> class C(object):\n ... __metaclass__ = Meta\n ... def __len__(self):\n ... return 10\n ... def __getattribute__(*args):\n ... print "Class getattribute invoked"\n ... return object.__getattribute__(*args)\n ...\n >>> c = C()\n >>> c.__len__() # Explicit lookup via instance\n Class getattribute invoked\n 10\n >>> type(c).__len__(c) # Explicit lookup via type\n Metaclass getattribute invoked\n 10\n >>> len(c) # Implicit lookup\n 10\n\nBypassing the ``__getattribute__()`` machinery in this fashion\nprovides significant scope for speed optimisations within the\ninterpreter, at the cost of some flexibility in the handling of\nspecial methods (the special method *must* be set on the class object\nitself in order to be consistently invoked by the interpreter).\n\n-[ Footnotes ]-\n\n[1] It *is* possible in some cases to change an object\'s type, under\n certain controlled conditions. It generally isn\'t a good idea\n though, since it can lead to some very strange behaviour if it is\n handled incorrectly.\n\n[2] For operands of the same type, it is assumed that if the non-\n reflected method (such as ``__add__()``) fails the operation is\n not supported, which is why the reflected method is not called.\n', + 'specialnames': u'\nSpecial method names\n********************\n\nA class can implement certain operations that are invoked by special\nsyntax (such as arithmetic operations or subscripting and slicing) by\ndefining methods with special names. This is Python\'s approach to\n*operator overloading*, allowing classes to define their own behavior\nwith respect to language operators. 
For instance, if a class defines\na method named ``__getitem__()``, and ``x`` is an instance of this\nclass, then ``x[i]`` is roughly equivalent to ``x.__getitem__(i)`` for\nold-style classes and ``type(x).__getitem__(x, i)`` for new-style\nclasses. Except where mentioned, attempts to execute an operation\nraise an exception when no appropriate method is defined (typically\n``AttributeError`` or ``TypeError``).\n\nWhen implementing a class that emulates any built-in type, it is\nimportant that the emulation only be implemented to the degree that it\nmakes sense for the object being modelled. For example, some\nsequences may work well with retrieval of individual elements, but\nextracting a slice may not make sense. (One example of this is the\n``NodeList`` interface in the W3C\'s Document Object Model.)\n\n\nBasic customization\n===================\n\nobject.__new__(cls[, ...])\n\n Called to create a new instance of class *cls*. ``__new__()`` is a\n static method (special-cased so you need not declare it as such)\n that takes the class of which an instance was requested as its\n first argument. The remaining arguments are those passed to the\n object constructor expression (the call to the class). The return\n value of ``__new__()`` should be the new object instance (usually\n an instance of *cls*).\n\n Typical implementations create a new instance of the class by\n invoking the superclass\'s ``__new__()`` method using\n ``super(currentclass, cls).__new__(cls[, ...])`` with appropriate\n arguments and then modifying the newly-created instance as\n necessary before returning it.\n\n If ``__new__()`` returns an instance of *cls*, then the new\n instance\'s ``__init__()`` method will be invoked like\n ``__init__(self[, ...])``, where *self* is the new instance and the\n remaining arguments are the same as were passed to ``__new__()``.\n\n If ``__new__()`` does not return an instance of *cls*, then the new\n instance\'s ``__init__()`` method will not be invoked.\n\n ``__new__()`` is intended mainly to allow subclasses of immutable\n types (like int, str, or tuple) to customize instance creation. It\n is also commonly overridden in custom metaclasses in order to\n customize class creation.\n\nobject.__init__(self[, ...])\n\n Called when the instance is created. The arguments are those\n passed to the class constructor expression. If a base class has an\n ``__init__()`` method, the derived class\'s ``__init__()`` method,\n if any, must explicitly call it to ensure proper initialization of\n the base class part of the instance; for example:\n ``BaseClass.__init__(self, [args...])``. As a special constraint\n on constructors, no value may be returned; doing so will cause a\n ``TypeError`` to be raised at runtime.\n\nobject.__del__(self)\n\n Called when the instance is about to be destroyed. This is also\n called a destructor. If a base class has a ``__del__()`` method,\n the derived class\'s ``__del__()`` method, if any, must explicitly\n call it to ensure proper deletion of the base class part of the\n instance. Note that it is possible (though not recommended!) for\n the ``__del__()`` method to postpone destruction of the instance by\n creating a new reference to it. It may then be called at a later\n time when this new reference is deleted. 
It is not guaranteed that\n ``__del__()`` methods are called for objects that still exist when\n the interpreter exits.\n\n Note: ``del x`` doesn\'t directly call ``x.__del__()`` --- the former\n decrements the reference count for ``x`` by one, and the latter\n is only called when ``x``\'s reference count reaches zero. Some\n common situations that may prevent the reference count of an\n object from going to zero include: circular references between\n objects (e.g., a doubly-linked list or a tree data structure with\n parent and child pointers); a reference to the object on the\n stack frame of a function that caught an exception (the traceback\n stored in ``sys.exc_traceback`` keeps the stack frame alive); or\n a reference to the object on the stack frame that raised an\n unhandled exception in interactive mode (the traceback stored in\n ``sys.last_traceback`` keeps the stack frame alive). The first\n situation can only be remedied by explicitly breaking the cycles;\n the latter two situations can be resolved by storing ``None`` in\n ``sys.exc_traceback`` or ``sys.last_traceback``. Circular\n references which are garbage are detected when the option cycle\n detector is enabled (it\'s on by default), but can only be cleaned\n up if there are no Python-level ``__del__()`` methods involved.\n Refer to the documentation for the ``gc`` module for more\n information about how ``__del__()`` methods are handled by the\n cycle detector, particularly the description of the ``garbage``\n value.\n\n Warning: Due to the precarious circumstances under which ``__del__()``\n methods are invoked, exceptions that occur during their execution\n are ignored, and a warning is printed to ``sys.stderr`` instead.\n Also, when ``__del__()`` is invoked in response to a module being\n deleted (e.g., when execution of the program is done), other\n globals referenced by the ``__del__()`` method may already have\n been deleted or in the process of being torn down (e.g. the\n import machinery shutting down). For this reason, ``__del__()``\n methods should do the absolute minimum needed to maintain\n external invariants. Starting with version 1.5, Python\n guarantees that globals whose name begins with a single\n underscore are deleted from their module before other globals are\n deleted; if no other references to such globals exist, this may\n help in assuring that imported modules are still available at the\n time when the ``__del__()`` method is called.\n\nobject.__repr__(self)\n\n Called by the ``repr()`` built-in function and by string\n conversions (reverse quotes) to compute the "official" string\n representation of an object. If at all possible, this should look\n like a valid Python expression that could be used to recreate an\n object with the same value (given an appropriate environment). If\n this is not possible, a string of the form ``<...some useful\n description...>`` should be returned. The return value must be a\n string object. If a class defines ``__repr__()`` but not\n ``__str__()``, then ``__repr__()`` is also used when an "informal"\n string representation of instances of that class is required.\n\n This is typically used for debugging, so it is important that the\n representation is information-rich and unambiguous.\n\nobject.__str__(self)\n\n Called by the ``str()`` built-in function and by the ``print``\n statement to compute the "informal" string representation of an\n object. 
This differs from ``__repr__()`` in that it does not have\n to be a valid Python expression: a more convenient or concise\n representation may be used instead. The return value must be a\n string object.\n\nobject.__lt__(self, other)\nobject.__le__(self, other)\nobject.__eq__(self, other)\nobject.__ne__(self, other)\nobject.__gt__(self, other)\nobject.__ge__(self, other)\n\n New in version 2.1.\n\n These are the so-called "rich comparison" methods, and are called\n for comparison operators in preference to ``__cmp__()`` below. The\n correspondence between operator symbols and method names is as\n follows: ``xy`` call ``x.__ne__(y)``, ``x>y`` calls ``x.__gt__(y)``, and\n ``x>=y`` calls ``x.__ge__(y)``.\n\n A rich comparison method may return the singleton\n ``NotImplemented`` if it does not implement the operation for a\n given pair of arguments. By convention, ``False`` and ``True`` are\n returned for a successful comparison. However, these methods can\n return any value, so if the comparison operator is used in a\n Boolean context (e.g., in the condition of an ``if`` statement),\n Python will call ``bool()`` on the value to determine if the result\n is true or false.\n\n There are no implied relationships among the comparison operators.\n The truth of ``x==y`` does not imply that ``x!=y`` is false.\n Accordingly, when defining ``__eq__()``, one should also define\n ``__ne__()`` so that the operators will behave as expected. See\n the paragraph on ``__hash__()`` for some important notes on\n creating *hashable* objects which support custom comparison\n operations and are usable as dictionary keys.\n\n There are no swapped-argument versions of these methods (to be used\n when the left argument does not support the operation but the right\n argument does); rather, ``__lt__()`` and ``__gt__()`` are each\n other\'s reflection, ``__le__()`` and ``__ge__()`` are each other\'s\n reflection, and ``__eq__()`` and ``__ne__()`` are their own\n reflection.\n\n Arguments to rich comparison methods are never coerced.\n\n To automatically generate ordering operations from a single root\n operation, see ``functools.total_ordering()``.\n\nobject.__cmp__(self, other)\n\n Called by comparison operations if rich comparison (see above) is\n not defined. Should return a negative integer if ``self < other``,\n zero if ``self == other``, a positive integer if ``self > other``.\n If no ``__cmp__()``, ``__eq__()`` or ``__ne__()`` operation is\n defined, class instances are compared by object identity\n ("address"). See also the description of ``__hash__()`` for some\n important notes on creating *hashable* objects which support custom\n comparison operations and are usable as dictionary keys. (Note: the\n restriction that exceptions are not propagated by ``__cmp__()`` has\n been removed since Python 1.5.)\n\nobject.__rcmp__(self, other)\n\n Changed in version 2.1: No longer supported.\n\nobject.__hash__(self)\n\n Called by built-in function ``hash()`` and for operations on\n members of hashed collections including ``set``, ``frozenset``, and\n ``dict``. ``__hash__()`` should return an integer. The only\n required property is that objects which compare equal have the same\n hash value; it is advised to somehow mix together (e.g. 
using\n exclusive or) the hash values for the components of the object that\n also play a part in comparison of objects.\n\n If a class does not define a ``__cmp__()`` or ``__eq__()`` method\n it should not define a ``__hash__()`` operation either; if it\n defines ``__cmp__()`` or ``__eq__()`` but not ``__hash__()``, its\n instances will not be usable in hashed collections. If a class\n defines mutable objects and implements a ``__cmp__()`` or\n ``__eq__()`` method, it should not implement ``__hash__()``, since\n hashable collection implementations require that a object\'s hash\n value is immutable (if the object\'s hash value changes, it will be\n in the wrong hash bucket).\n\n User-defined classes have ``__cmp__()`` and ``__hash__()`` methods\n by default; with them, all objects compare unequal (except with\n themselves) and ``x.__hash__()`` returns ``id(x)``.\n\n Classes which inherit a ``__hash__()`` method from a parent class\n but change the meaning of ``__cmp__()`` or ``__eq__()`` such that\n the hash value returned is no longer appropriate (e.g. by switching\n to a value-based concept of equality instead of the default\n identity based equality) can explicitly flag themselves as being\n unhashable by setting ``__hash__ = None`` in the class definition.\n Doing so means that not only will instances of the class raise an\n appropriate ``TypeError`` when a program attempts to retrieve their\n hash value, but they will also be correctly identified as\n unhashable when checking ``isinstance(obj, collections.Hashable)``\n (unlike classes which define their own ``__hash__()`` to explicitly\n raise ``TypeError``).\n\n Changed in version 2.5: ``__hash__()`` may now also return a long\n integer object; the 32-bit integer is then derived from the hash of\n that object.\n\n Changed in version 2.6: ``__hash__`` may now be set to ``None`` to\n explicitly flag instances of a class as unhashable.\n\nobject.__nonzero__(self)\n\n Called to implement truth value testing and the built-in operation\n ``bool()``; should return ``False`` or ``True``, or their integer\n equivalents ``0`` or ``1``. When this method is not defined,\n ``__len__()`` is called, if it is defined, and the object is\n considered true if its result is nonzero. If a class defines\n neither ``__len__()`` nor ``__nonzero__()``, all its instances are\n considered true.\n\nobject.__unicode__(self)\n\n Called to implement ``unicode()`` built-in; should return a Unicode\n object. When this method is not defined, string conversion is\n attempted, and the result of string conversion is converted to\n Unicode using the system default encoding.\n\n\nCustomizing attribute access\n============================\n\nThe following methods can be defined to customize the meaning of\nattribute access (use of, assignment to, or deletion of ``x.name``)\nfor class instances.\n\nobject.__getattr__(self, name)\n\n Called when an attribute lookup has not found the attribute in the\n usual places (i.e. it is not an instance attribute nor is it found\n in the class tree for ``self``). ``name`` is the attribute name.\n This method should return the (computed) attribute value or raise\n an ``AttributeError`` exception.\n\n Note that if the attribute is found through the normal mechanism,\n ``__getattr__()`` is not called. (This is an intentional asymmetry\n between ``__getattr__()`` and ``__setattr__()``.) This is done both\n for efficiency reasons and because otherwise ``__getattr__()``\n would have no way to access other attributes of the instance. 
Note\n that at least for instance variables, you can fake total control by\n not inserting any values in the instance attribute dictionary (but\n instead inserting them in another object). See the\n ``__getattribute__()`` method below for a way to actually get total\n control in new-style classes.\n\nobject.__setattr__(self, name, value)\n\n Called when an attribute assignment is attempted. This is called\n instead of the normal mechanism (i.e. store the value in the\n instance dictionary). *name* is the attribute name, *value* is the\n value to be assigned to it.\n\n If ``__setattr__()`` wants to assign to an instance attribute, it\n should not simply execute ``self.name = value`` --- this would\n cause a recursive call to itself. Instead, it should insert the\n value in the dictionary of instance attributes, e.g.,\n ``self.__dict__[name] = value``. For new-style classes, rather\n than accessing the instance dictionary, it should call the base\n class method with the same name, for example,\n ``object.__setattr__(self, name, value)``.\n\nobject.__delattr__(self, name)\n\n Like ``__setattr__()`` but for attribute deletion instead of\n assignment. This should only be implemented if ``del obj.name`` is\n meaningful for the object.\n\n\nMore attribute access for new-style classes\n-------------------------------------------\n\nThe following methods only apply to new-style classes.\n\nobject.__getattribute__(self, name)\n\n Called unconditionally to implement attribute accesses for\n instances of the class. If the class also defines\n ``__getattr__()``, the latter will not be called unless\n ``__getattribute__()`` either calls it explicitly or raises an\n ``AttributeError``. This method should return the (computed)\n attribute value or raise an ``AttributeError`` exception. In order\n to avoid infinite recursion in this method, its implementation\n should always call the base class method with the same name to\n access any attributes it needs, for example,\n ``object.__getattribute__(self, name)``.\n\n Note: This method may still be bypassed when looking up special methods\n as the result of implicit invocation via language syntax or\n built-in functions. See *Special method lookup for new-style\n classes*.\n\n\nImplementing Descriptors\n------------------------\n\nThe following methods only apply when an instance of the class\ncontaining the method (a so-called *descriptor* class) appears in an\n*owner* class (the descriptor must be in either the owner\'s class\ndictionary or in the class dictionary for one of its parents). In the\nexamples below, "the attribute" refers to the attribute whose name is\nthe key of the property in the owner class\' ``__dict__``.\n\nobject.__get__(self, instance, owner)\n\n Called to get the attribute of the owner class (class attribute\n access) or of an instance of that class (instance attribute\n access). *owner* is always the owner class, while *instance* is the\n instance that the attribute was accessed through, or ``None`` when\n the attribute is accessed through the *owner*. 
This method should\n return the (computed) attribute value or raise an\n ``AttributeError`` exception.\n\nobject.__set__(self, instance, value)\n\n Called to set the attribute on an instance *instance* of the owner\n class to a new value, *value*.\n\nobject.__delete__(self, instance)\n\n Called to delete the attribute on an instance *instance* of the\n owner class.\n\n\nInvoking Descriptors\n--------------------\n\nIn general, a descriptor is an object attribute with "binding\nbehavior", one whose attribute access has been overridden by methods\nin the descriptor protocol: ``__get__()``, ``__set__()``, and\n``__delete__()``. If any of those methods are defined for an object,\nit is said to be a descriptor.\n\nThe default behavior for attribute access is to get, set, or delete\nthe attribute from an object\'s dictionary. For instance, ``a.x`` has a\nlookup chain starting with ``a.__dict__[\'x\']``, then\n``type(a).__dict__[\'x\']``, and continuing through the base classes of\n``type(a)`` excluding metaclasses.\n\nHowever, if the looked-up value is an object defining one of the\ndescriptor methods, then Python may override the default behavior and\ninvoke the descriptor method instead. Where this occurs in the\nprecedence chain depends on which descriptor methods were defined and\nhow they were called. Note that descriptors are only invoked for new\nstyle objects or classes (ones that subclass ``object()`` or\n``type()``).\n\nThe starting point for descriptor invocation is a binding, ``a.x``.\nHow the arguments are assembled depends on ``a``:\n\nDirect Call\n The simplest and least common call is when user code directly\n invokes a descriptor method: ``x.__get__(a)``.\n\nInstance Binding\n If binding to a new-style object instance, ``a.x`` is transformed\n into the call: ``type(a).__dict__[\'x\'].__get__(a, type(a))``.\n\nClass Binding\n If binding to a new-style class, ``A.x`` is transformed into the\n call: ``A.__dict__[\'x\'].__get__(None, A)``.\n\nSuper Binding\n If ``a`` is an instance of ``super``, then the binding ``super(B,\n obj).m()`` searches ``obj.__class__.__mro__`` for the base class\n ``A`` immediately preceding ``B`` and then invokes the descriptor\n with the call: ``A.__dict__[\'m\'].__get__(obj, obj.__class__)``.\n\nFor instance bindings, the precedence of descriptor invocation depends\non the which descriptor methods are defined. A descriptor can define\nany combination of ``__get__()``, ``__set__()`` and ``__delete__()``.\nIf it does not define ``__get__()``, then accessing the attribute will\nreturn the descriptor object itself unless there is a value in the\nobject\'s instance dictionary. If the descriptor defines ``__set__()``\nand/or ``__delete__()``, it is a data descriptor; if it defines\nneither, it is a non-data descriptor. Normally, data descriptors\ndefine both ``__get__()`` and ``__set__()``, while non-data\ndescriptors have just the ``__get__()`` method. Data descriptors with\n``__set__()`` and ``__get__()`` defined always override a redefinition\nin an instance dictionary. In contrast, non-data descriptors can be\noverridden by instances.\n\nPython methods (including ``staticmethod()`` and ``classmethod()``)\nare implemented as non-data descriptors. Accordingly, instances can\nredefine and override methods. 
This allows individual instances to\nacquire behaviors that differ from other instances of the same class.\n\nThe ``property()`` function is implemented as a data descriptor.\nAccordingly, instances cannot override the behavior of a property.\n\n\n__slots__\n---------\n\nBy default, instances of both old and new-style classes have a\ndictionary for attribute storage. This wastes space for objects\nhaving very few instance variables. The space consumption can become\nacute when creating large numbers of instances.\n\nThe default can be overridden by defining *__slots__* in a new-style\nclass definition. The *__slots__* declaration takes a sequence of\ninstance variables and reserves just enough space in each instance to\nhold a value for each variable. Space is saved because *__dict__* is\nnot created for each instance.\n\n__slots__\n\n This class variable can be assigned a string, iterable, or sequence\n of strings with variable names used by instances. If defined in a\n new-style class, *__slots__* reserves space for the declared\n variables and prevents the automatic creation of *__dict__* and\n *__weakref__* for each instance.\n\n New in version 2.2.\n\nNotes on using *__slots__*\n\n* When inheriting from a class without *__slots__*, the *__dict__*\n attribute of that class will always be accessible, so a *__slots__*\n definition in the subclass is meaningless.\n\n* Without a *__dict__* variable, instances cannot be assigned new\n variables not listed in the *__slots__* definition. Attempts to\n assign to an unlisted variable name raises ``AttributeError``. If\n dynamic assignment of new variables is desired, then add\n ``\'__dict__\'`` to the sequence of strings in the *__slots__*\n declaration.\n\n Changed in version 2.3: Previously, adding ``\'__dict__\'`` to the\n *__slots__* declaration would not enable the assignment of new\n attributes not specifically listed in the sequence of instance\n variable names.\n\n* Without a *__weakref__* variable for each instance, classes defining\n *__slots__* do not support weak references to its instances. If weak\n reference support is needed, then add ``\'__weakref__\'`` to the\n sequence of strings in the *__slots__* declaration.\n\n Changed in version 2.3: Previously, adding ``\'__weakref__\'`` to the\n *__slots__* declaration would not enable support for weak\n references.\n\n* *__slots__* are implemented at the class level by creating\n descriptors (*Implementing Descriptors*) for each variable name. As\n a result, class attributes cannot be used to set default values for\n instance variables defined by *__slots__*; otherwise, the class\n attribute would overwrite the descriptor assignment.\n\n* The action of a *__slots__* declaration is limited to the class\n where it is defined. As a result, subclasses will have a *__dict__*\n unless they also define *__slots__* (which must only contain names\n of any *additional* slots).\n\n* If a class defines a slot also defined in a base class, the instance\n variable defined by the base class slot is inaccessible (except by\n retrieving its descriptor directly from the base class). This\n renders the meaning of the program undefined. In the future, a\n check may be added to prevent this.\n\n* Nonempty *__slots__* does not work for classes derived from\n "variable-length" built-in types such as ``long``, ``str`` and\n ``tuple``.\n\n* Any non-string iterable may be assigned to *__slots__*. 
Mappings may\n also be used; however, in the future, special meaning may be\n assigned to the values corresponding to each key.\n\n* *__class__* assignment works only if both classes have the same\n *__slots__*.\n\n Changed in version 2.6: Previously, *__class__* assignment raised an\n error if either new or old class had *__slots__*.\n\n\nCustomizing class creation\n==========================\n\nBy default, new-style classes are constructed using ``type()``. A\nclass definition is read into a separate namespace and the value of\nclass name is bound to the result of ``type(name, bases, dict)``.\n\nWhen the class definition is read, if *__metaclass__* is defined then\nthe callable assigned to it will be called instead of ``type()``. This\nallows classes or functions to be written which monitor or alter the\nclass creation process:\n\n* Modifying the class dictionary prior to the class being created.\n\n* Returning an instance of another class -- essentially performing the\n role of a factory function.\n\nThese steps will have to be performed in the metaclass\'s ``__new__()``\nmethod -- ``type.__new__()`` can then be called from this method to\ncreate a class with different properties. This example adds a new\nelement to the class dictionary before creating the class:\n\n class metacls(type):\n def __new__(mcs, name, bases, dict):\n dict[\'foo\'] = \'metacls was here\'\n return type.__new__(mcs, name, bases, dict)\n\nYou can of course also override other class methods (or add new\nmethods); for example defining a custom ``__call__()`` method in the\nmetaclass allows custom behavior when the class is called, e.g. not\nalways creating a new instance.\n\n__metaclass__\n\n This variable can be any callable accepting arguments for ``name``,\n ``bases``, and ``dict``. Upon class creation, the callable is used\n instead of the built-in ``type()``.\n\n New in version 2.2.\n\nThe appropriate metaclass is determined by the following precedence\nrules:\n\n* If ``dict[\'__metaclass__\']`` exists, it is used.\n\n* Otherwise, if there is at least one base class, its metaclass is\n used (this looks for a *__class__* attribute first and if not found,\n uses its type).\n\n* Otherwise, if a global variable named __metaclass__ exists, it is\n used.\n\n* Otherwise, the old-style, classic metaclass (types.ClassType) is\n used.\n\nThe potential uses for metaclasses are boundless. Some ideas that have\nbeen explored including logging, interface checking, automatic\ndelegation, automatic property creation, proxies, frameworks, and\nautomatic resource locking/synchronization.\n\n\nCustomizing instance and subclass checks\n========================================\n\nNew in version 2.6.\n\nThe following methods are used to override the default behavior of the\n``isinstance()`` and ``issubclass()`` built-in functions.\n\nIn particular, the metaclass ``abc.ABCMeta`` implements these methods\nin order to allow the addition of Abstract Base Classes (ABCs) as\n"virtual base classes" to any class or type (including built-in\ntypes), including other ABCs.\n\nclass.__instancecheck__(self, instance)\n\n Return true if *instance* should be considered a (direct or\n indirect) instance of *class*. If defined, called to implement\n ``isinstance(instance, class)``.\n\nclass.__subclasscheck__(self, subclass)\n\n Return true if *subclass* should be considered a (direct or\n indirect) subclass of *class*. 
If defined, called to implement\n ``issubclass(subclass, class)``.\n\nNote that these methods are looked up on the type (metaclass) of a\nclass. They cannot be defined as class methods in the actual class.\nThis is consistent with the lookup of special methods that are called\non instances, only in this case the instance is itself a class.\n\nSee also:\n\n **PEP 3119** - Introducing Abstract Base Classes\n Includes the specification for customizing ``isinstance()`` and\n ``issubclass()`` behavior through ``__instancecheck__()`` and\n ``__subclasscheck__()``, with motivation for this functionality\n in the context of adding Abstract Base Classes (see the ``abc``\n module) to the language.\n\n\nEmulating callable objects\n==========================\n\nobject.__call__(self[, args...])\n\n Called when the instance is "called" as a function; if this method\n is defined, ``x(arg1, arg2, ...)`` is a shorthand for\n ``x.__call__(arg1, arg2, ...)``.\n\n\nEmulating container types\n=========================\n\nThe following methods can be defined to implement container objects.\nContainers usually are sequences (such as lists or tuples) or mappings\n(like dictionaries), but can represent other containers as well. The\nfirst set of methods is used either to emulate a sequence or to\nemulate a mapping; the difference is that for a sequence, the\nallowable keys should be the integers *k* for which ``0 <= k < N``\nwhere *N* is the length of the sequence, or slice objects, which\ndefine a range of items. (For backwards compatibility, the method\n``__getslice__()`` (see below) can also be defined to handle simple,\nbut not extended slices.) It is also recommended that mappings provide\nthe methods ``keys()``, ``values()``, ``items()``, ``has_key()``,\n``get()``, ``clear()``, ``setdefault()``, ``iterkeys()``,\n``itervalues()``, ``iteritems()``, ``pop()``, ``popitem()``,\n``copy()``, and ``update()`` behaving similar to those for Python\'s\nstandard dictionary objects. The ``UserDict`` module provides a\n``DictMixin`` class to help create those methods from a base set of\n``__getitem__()``, ``__setitem__()``, ``__delitem__()``, and\n``keys()``. Mutable sequences should provide methods ``append()``,\n``count()``, ``index()``, ``extend()``, ``insert()``, ``pop()``,\n``remove()``, ``reverse()`` and ``sort()``, like Python standard list\nobjects. Finally, sequence types should implement addition (meaning\nconcatenation) and multiplication (meaning repetition) by defining the\nmethods ``__add__()``, ``__radd__()``, ``__iadd__()``, ``__mul__()``,\n``__rmul__()`` and ``__imul__()`` described below; they should not\ndefine ``__coerce__()`` or other numerical operators. It is\nrecommended that both mappings and sequences implement the\n``__contains__()`` method to allow efficient use of the ``in``\noperator; for mappings, ``in`` should be equivalent of ``has_key()``;\nfor sequences, it should search through the values. It is further\nrecommended that both mappings and sequences implement the\n``__iter__()`` method to allow efficient iteration through the\ncontainer; for mappings, ``__iter__()`` should be the same as\n``iterkeys()``; for sequences, it should iterate through the values.\n\nobject.__len__(self)\n\n Called to implement the built-in function ``len()``. Should return\n the length of the object, an integer ``>=`` 0. 
Also, an object\n that doesn\'t define a ``__nonzero__()`` method and whose\n ``__len__()`` method returns zero is considered to be false in a\n Boolean context.\n\nobject.__getitem__(self, key)\n\n Called to implement evaluation of ``self[key]``. For sequence\n types, the accepted keys should be integers and slice objects.\n Note that the special interpretation of negative indexes (if the\n class wishes to emulate a sequence type) is up to the\n ``__getitem__()`` method. If *key* is of an inappropriate type,\n ``TypeError`` may be raised; if of a value outside the set of\n indexes for the sequence (after any special interpretation of\n negative values), ``IndexError`` should be raised. For mapping\n types, if *key* is missing (not in the container), ``KeyError``\n should be raised.\n\n Note: ``for`` loops expect that an ``IndexError`` will be raised for\n illegal indexes to allow proper detection of the end of the\n sequence.\n\nobject.__setitem__(self, key, value)\n\n Called to implement assignment to ``self[key]``. Same note as for\n ``__getitem__()``. This should only be implemented for mappings if\n the objects support changes to the values for keys, or if new keys\n can be added, or for sequences if elements can be replaced. The\n same exceptions should be raised for improper *key* values as for\n the ``__getitem__()`` method.\n\nobject.__delitem__(self, key)\n\n Called to implement deletion of ``self[key]``. Same note as for\n ``__getitem__()``. This should only be implemented for mappings if\n the objects support removal of keys, or for sequences if elements\n can be removed from the sequence. The same exceptions should be\n raised for improper *key* values as for the ``__getitem__()``\n method.\n\nobject.__iter__(self)\n\n This method is called when an iterator is required for a container.\n This method should return a new iterator object that can iterate\n over all the objects in the container. For mappings, it should\n iterate over the keys of the container, and should also be made\n available as the method ``iterkeys()``.\n\n Iterator objects also need to implement this method; they are\n required to return themselves. For more information on iterator\n objects, see *Iterator Types*.\n\nobject.__reversed__(self)\n\n Called (if present) by the ``reversed()`` built-in to implement\n reverse iteration. It should return a new iterator object that\n iterates over all the objects in the container in reverse order.\n\n If the ``__reversed__()`` method is not provided, the\n ``reversed()`` built-in will fall back to using the sequence\n protocol (``__len__()`` and ``__getitem__()``). Objects that\n support the sequence protocol should only provide\n ``__reversed__()`` if they can provide an implementation that is\n more efficient than the one provided by ``reversed()``.\n\n New in version 2.6.\n\nThe membership test operators (``in`` and ``not in``) are normally\nimplemented as an iteration through a sequence. However, container\nobjects can supply the following special method with a more efficient\nimplementation, which also does not require the object be a sequence.\n\nobject.__contains__(self, item)\n\n Called to implement membership test operators. Should return true\n if *item* is in *self*, false otherwise. 
For mapping objects, this\n should consider the keys of the mapping rather than the values or\n the key-item pairs.\n\n For objects that don\'t define ``__contains__()``, the membership\n test first tries iteration via ``__iter__()``, then the old\n sequence iteration protocol via ``__getitem__()``, see *this\n section in the language reference*.\n\n\nAdditional methods for emulation of sequence types\n==================================================\n\nThe following optional methods can be defined to further emulate\nsequence objects. Immutable sequences methods should at most only\ndefine ``__getslice__()``; mutable sequences might define all three\nmethods.\n\nobject.__getslice__(self, i, j)\n\n Deprecated since version 2.0: Support slice objects as parameters\n to the ``__getitem__()`` method. (However, built-in types in\n CPython currently still implement ``__getslice__()``. Therefore,\n you have to override it in derived classes when implementing\n slicing.)\n\n Called to implement evaluation of ``self[i:j]``. The returned\n object should be of the same type as *self*. Note that missing *i*\n or *j* in the slice expression are replaced by zero or\n ``sys.maxint``, respectively. If negative indexes are used in the\n slice, the length of the sequence is added to that index. If the\n instance does not implement the ``__len__()`` method, an\n ``AttributeError`` is raised. No guarantee is made that indexes\n adjusted this way are not still negative. Indexes which are\n greater than the length of the sequence are not modified. If no\n ``__getslice__()`` is found, a slice object is created instead, and\n passed to ``__getitem__()`` instead.\n\nobject.__setslice__(self, i, j, sequence)\n\n Called to implement assignment to ``self[i:j]``. Same notes for *i*\n and *j* as for ``__getslice__()``.\n\n This method is deprecated. If no ``__setslice__()`` is found, or\n for extended slicing of the form ``self[i:j:k]``, a slice object is\n created, and passed to ``__setitem__()``, instead of\n ``__setslice__()`` being called.\n\nobject.__delslice__(self, i, j)\n\n Called to implement deletion of ``self[i:j]``. Same notes for *i*\n and *j* as for ``__getslice__()``. This method is deprecated. If no\n ``__delslice__()`` is found, or for extended slicing of the form\n ``self[i:j:k]``, a slice object is created, and passed to\n ``__delitem__()``, instead of ``__delslice__()`` being called.\n\nNotice that these methods are only invoked when a single slice with a\nsingle colon is used, and the slice method is available. 
For slice\noperations involving extended slice notation, or in absence of the\nslice methods, ``__getitem__()``, ``__setitem__()`` or\n``__delitem__()`` is called with a slice object as argument.\n\nThe following example demonstrate how to make your program or module\ncompatible with earlier versions of Python (assuming that methods\n``__getitem__()``, ``__setitem__()`` and ``__delitem__()`` support\nslice objects as arguments):\n\n class MyClass:\n ...\n def __getitem__(self, index):\n ...\n def __setitem__(self, index, value):\n ...\n def __delitem__(self, index):\n ...\n\n if sys.version_info < (2, 0):\n # They won\'t be defined if version is at least 2.0 final\n\n def __getslice__(self, i, j):\n return self[max(0, i):max(0, j):]\n def __setslice__(self, i, j, seq):\n self[max(0, i):max(0, j):] = seq\n def __delslice__(self, i, j):\n del self[max(0, i):max(0, j):]\n ...\n\nNote the calls to ``max()``; these are necessary because of the\nhandling of negative indices before the ``__*slice__()`` methods are\ncalled. When negative indexes are used, the ``__*item__()`` methods\nreceive them as provided, but the ``__*slice__()`` methods get a\n"cooked" form of the index values. For each negative index value, the\nlength of the sequence is added to the index before calling the method\n(which may still result in a negative index); this is the customary\nhandling of negative indexes by the built-in sequence types, and the\n``__*item__()`` methods are expected to do this as well. However,\nsince they should already be doing that, negative indexes cannot be\npassed in; they must be constrained to the bounds of the sequence\nbefore being passed to the ``__*item__()`` methods. Calling ``max(0,\ni)`` conveniently returns the proper value.\n\n\nEmulating numeric types\n=======================\n\nThe following methods can be defined to emulate numeric objects.\nMethods corresponding to operations that are not supported by the\nparticular kind of number implemented (e.g., bitwise operations for\nnon-integral numbers) should be left undefined.\n\nobject.__add__(self, other)\nobject.__sub__(self, other)\nobject.__mul__(self, other)\nobject.__floordiv__(self, other)\nobject.__mod__(self, other)\nobject.__divmod__(self, other)\nobject.__pow__(self, other[, modulo])\nobject.__lshift__(self, other)\nobject.__rshift__(self, other)\nobject.__and__(self, other)\nobject.__xor__(self, other)\nobject.__or__(self, other)\n\n These methods are called to implement the binary arithmetic\n operations (``+``, ``-``, ``*``, ``//``, ``%``, ``divmod()``,\n ``pow()``, ``**``, ``<<``, ``>>``, ``&``, ``^``, ``|``). For\n instance, to evaluate the expression ``x + y``, where *x* is an\n instance of a class that has an ``__add__()`` method,\n ``x.__add__(y)`` is called. The ``__divmod__()`` method should be\n the equivalent to using ``__floordiv__()`` and ``__mod__()``; it\n should not be related to ``__truediv__()`` (described below). Note\n that ``__pow__()`` should be defined to accept an optional third\n argument if the ternary version of the built-in ``pow()`` function\n is to be supported.\n\n If one of those methods does not support the operation with the\n supplied arguments, it should return ``NotImplemented``.\n\nobject.__div__(self, other)\nobject.__truediv__(self, other)\n\n The division operator (``/``) is implemented by these methods. The\n ``__truediv__()`` method is used when ``__future__.division`` is in\n effect, otherwise ``__div__()`` is used. 
If only one of these two\n methods is defined, the object will not support division in the\n alternate context; ``TypeError`` will be raised instead.\n\nobject.__radd__(self, other)\nobject.__rsub__(self, other)\nobject.__rmul__(self, other)\nobject.__rdiv__(self, other)\nobject.__rtruediv__(self, other)\nobject.__rfloordiv__(self, other)\nobject.__rmod__(self, other)\nobject.__rdivmod__(self, other)\nobject.__rpow__(self, other)\nobject.__rlshift__(self, other)\nobject.__rrshift__(self, other)\nobject.__rand__(self, other)\nobject.__rxor__(self, other)\nobject.__ror__(self, other)\n\n These methods are called to implement the binary arithmetic\n operations (``+``, ``-``, ``*``, ``/``, ``%``, ``divmod()``,\n ``pow()``, ``**``, ``<<``, ``>>``, ``&``, ``^``, ``|``) with\n reflected (swapped) operands. These functions are only called if\n the left operand does not support the corresponding operation and\n the operands are of different types. [2] For instance, to evaluate\n the expression ``x - y``, where *y* is an instance of a class that\n has an ``__rsub__()`` method, ``y.__rsub__(x)`` is called if\n ``x.__sub__(y)`` returns *NotImplemented*.\n\n Note that ternary ``pow()`` will not try calling ``__rpow__()``\n (the coercion rules would become too complicated).\n\n Note: If the right operand\'s type is a subclass of the left operand\'s\n type and that subclass provides the reflected method for the\n operation, this method will be called before the left operand\'s\n non-reflected method. This behavior allows subclasses to\n override their ancestors\' operations.\n\nobject.__iadd__(self, other)\nobject.__isub__(self, other)\nobject.__imul__(self, other)\nobject.__idiv__(self, other)\nobject.__itruediv__(self, other)\nobject.__ifloordiv__(self, other)\nobject.__imod__(self, other)\nobject.__ipow__(self, other[, modulo])\nobject.__ilshift__(self, other)\nobject.__irshift__(self, other)\nobject.__iand__(self, other)\nobject.__ixor__(self, other)\nobject.__ior__(self, other)\n\n These methods are called to implement the augmented arithmetic\n assignments (``+=``, ``-=``, ``*=``, ``/=``, ``//=``, ``%=``,\n ``**=``, ``<<=``, ``>>=``, ``&=``, ``^=``, ``|=``). These methods\n should attempt to do the operation in-place (modifying *self*) and\n return the result (which could be, but does not have to be,\n *self*). If a specific method is not defined, the augmented\n assignment falls back to the normal methods. For instance, to\n execute the statement ``x += y``, where *x* is an instance of a\n class that has an ``__iadd__()`` method, ``x.__iadd__(y)`` is\n called. If *x* is an instance of a class that does not define a\n ``__iadd__()`` method, ``x.__add__(y)`` and ``y.__radd__(x)`` are\n considered, as with the evaluation of ``x + y``.\n\nobject.__neg__(self)\nobject.__pos__(self)\nobject.__abs__(self)\nobject.__invert__(self)\n\n Called to implement the unary arithmetic operations (``-``, ``+``,\n ``abs()`` and ``~``).\n\nobject.__complex__(self)\nobject.__int__(self)\nobject.__long__(self)\nobject.__float__(self)\n\n Called to implement the built-in functions ``complex()``,\n ``int()``, ``long()``, and ``float()``. Should return a value of\n the appropriate type.\n\nobject.__oct__(self)\nobject.__hex__(self)\n\n Called to implement the built-in functions ``oct()`` and ``hex()``.\n Should return a string value.\n\nobject.__index__(self)\n\n Called to implement ``operator.index()``. Also called whenever\n Python needs an integer object (such as in slicing). 
Must return\n an integer (int or long).\n\n New in version 2.5.\n\nobject.__coerce__(self, other)\n\n Called to implement "mixed-mode" numeric arithmetic. Should either\n return a 2-tuple containing *self* and *other* converted to a\n common numeric type, or ``None`` if conversion is impossible. When\n the common type would be the type of ``other``, it is sufficient to\n return ``None``, since the interpreter will also ask the other\n object to attempt a coercion (but sometimes, if the implementation\n of the other type cannot be changed, it is useful to do the\n conversion to the other type here). A return value of\n ``NotImplemented`` is equivalent to returning ``None``.\n\n\nCoercion rules\n==============\n\nThis section used to document the rules for coercion. As the language\nhas evolved, the coercion rules have become hard to document\nprecisely; documenting what one version of one particular\nimplementation does is undesirable. Instead, here are some informal\nguidelines regarding coercion. In Python 3.0, coercion will not be\nsupported.\n\n* If the left operand of a % operator is a string or Unicode object,\n no coercion takes place and the string formatting operation is\n invoked instead.\n\n* It is no longer recommended to define a coercion operation. Mixed-\n mode operations on types that don\'t define coercion pass the\n original arguments to the operation.\n\n* New-style classes (those derived from ``object``) never invoke the\n ``__coerce__()`` method in response to a binary operator; the only\n time ``__coerce__()`` is invoked is when the built-in function\n ``coerce()`` is called.\n\n* For most intents and purposes, an operator that returns\n ``NotImplemented`` is treated the same as one that is not\n implemented at all.\n\n* Below, ``__op__()`` and ``__rop__()`` are used to signify the\n generic method names corresponding to an operator; ``__iop__()`` is\n used for the corresponding in-place operator. For example, for the\n operator \'``+``\', ``__add__()`` and ``__radd__()`` are used for the\n left and right variant of the binary operator, and ``__iadd__()``\n for the in-place variant.\n\n* For objects *x* and *y*, first ``x.__op__(y)`` is tried. If this is\n not implemented or returns ``NotImplemented``, ``y.__rop__(x)`` is\n tried. If this is also not implemented or returns\n ``NotImplemented``, a ``TypeError`` exception is raised. But see\n the following exception:\n\n* Exception to the previous item: if the left operand is an instance\n of a built-in type or a new-style class, and the right operand is an\n instance of a proper subclass of that type or class and overrides\n the base\'s ``__rop__()`` method, the right operand\'s ``__rop__()``\n method is tried *before* the left operand\'s ``__op__()`` method.\n\n This is done so that a subclass can completely override binary\n operators. Otherwise, the left operand\'s ``__op__()`` method would\n always accept the right operand: when an instance of a given class\n is expected, an instance of a subclass of that class is always\n acceptable.\n\n* When either operand type defines a coercion, this coercion is called\n before that type\'s ``__op__()`` or ``__rop__()`` method is called,\n but no sooner. If the coercion returns an object of a different\n type for the operand whose coercion is invoked, part of the process\n is redone using the new object.\n\n* When an in-place operator (like \'``+=``\') is used, if the left\n operand implements ``__iop__()``, it is invoked without any\n coercion. 
When the operation falls back to ``__op__()`` and/or\n ``__rop__()``, the normal coercion rules apply.\n\n* In ``x + y``, if *x* is a sequence that implements sequence\n concatenation, sequence concatenation is invoked.\n\n* In ``x * y``, if one operator is a sequence that implements sequence\n repetition, and the other is an integer (``int`` or ``long``),\n sequence repetition is invoked.\n\n* Rich comparisons (implemented by methods ``__eq__()`` and so on)\n never use coercion. Three-way comparison (implemented by\n ``__cmp__()``) does use coercion under the same conditions as other\n binary operations use it.\n\n* In the current implementation, the built-in numeric types ``int``,\n ``long``, ``float``, and ``complex`` do not use coercion. All these\n types implement a ``__coerce__()`` method, for use by the built-in\n ``coerce()`` function.\n\n Changed in version 2.7.\n\n\nWith Statement Context Managers\n===============================\n\nNew in version 2.5.\n\nA *context manager* is an object that defines the runtime context to\nbe established when executing a ``with`` statement. The context\nmanager handles the entry into, and the exit from, the desired runtime\ncontext for the execution of the block of code. Context managers are\nnormally invoked using the ``with`` statement (described in section\n*The with statement*), but can also be used by directly invoking their\nmethods.\n\nTypical uses of context managers include saving and restoring various\nkinds of global state, locking and unlocking resources, closing opened\nfiles, etc.\n\nFor more information on context managers, see *Context Manager Types*.\n\nobject.__enter__(self)\n\n Enter the runtime context related to this object. The ``with``\n statement will bind this method\'s return value to the target(s)\n specified in the ``as`` clause of the statement, if any.\n\nobject.__exit__(self, exc_type, exc_value, traceback)\n\n Exit the runtime context related to this object. The parameters\n describe the exception that caused the context to be exited. If the\n context was exited without an exception, all three arguments will\n be ``None``.\n\n If an exception is supplied, and the method wishes to suppress the\n exception (i.e., prevent it from being propagated), it should\n return a true value. Otherwise, the exception will be processed\n normally upon exit from this method.\n\n Note that ``__exit__()`` methods should not reraise the passed-in\n exception; this is the caller\'s responsibility.\n\nSee also:\n\n **PEP 0343** - The "with" statement\n The specification, background, and examples for the Python\n ``with`` statement.\n\n\nSpecial method lookup for old-style classes\n===========================================\n\nFor old-style classes, special methods are always looked up in exactly\nthe same way as any other method or attribute. This is the case\nregardless of whether the method is being looked up explicitly as in\n``x.__getitem__(i)`` or implicitly as in ``x[i]``.\n\nThis behaviour means that special methods may exhibit different\nbehaviour for different instances of a single old-style class if the\nappropriate special attributes are set differently:\n\n >>> class C:\n ... 
pass\n ...\n >>> c1 = C()\n >>> c2 = C()\n >>> c1.__len__ = lambda: 5\n >>> c2.__len__ = lambda: 9\n >>> len(c1)\n 5\n >>> len(c2)\n 9\n\n\nSpecial method lookup for new-style classes\n===========================================\n\nFor new-style classes, implicit invocations of special methods are\nonly guaranteed to work correctly if defined on an object\'s type, not\nin the object\'s instance dictionary. That behaviour is the reason why\nthe following code raises an exception (unlike the equivalent example\nwith old-style classes):\n\n >>> class C(object):\n ... pass\n ...\n >>> c = C()\n >>> c.__len__ = lambda: 5\n >>> len(c)\n Traceback (most recent call last):\n File "", line 1, in \n TypeError: object of type \'C\' has no len()\n\nThe rationale behind this behaviour lies with a number of special\nmethods such as ``__hash__()`` and ``__repr__()`` that are implemented\nby all objects, including type objects. If the implicit lookup of\nthese methods used the conventional lookup process, they would fail\nwhen invoked on the type object itself:\n\n >>> 1 .__hash__() == hash(1)\n True\n >>> int.__hash__() == hash(int)\n Traceback (most recent call last):\n File "", line 1, in \n TypeError: descriptor \'__hash__\' of \'int\' object needs an argument\n\nIncorrectly attempting to invoke an unbound method of a class in this\nway is sometimes referred to as \'metaclass confusion\', and is avoided\nby bypassing the instance when looking up special methods:\n\n >>> type(1).__hash__(1) == hash(1)\n True\n >>> type(int).__hash__(int) == hash(int)\n True\n\nIn addition to bypassing any instance attributes in the interest of\ncorrectness, implicit special method lookup generally also bypasses\nthe ``__getattribute__()`` method even of the object\'s metaclass:\n\n >>> class Meta(type):\n ... def __getattribute__(*args):\n ... print "Metaclass getattribute invoked"\n ... return type.__getattribute__(*args)\n ...\n >>> class C(object):\n ... __metaclass__ = Meta\n ... def __len__(self):\n ... return 10\n ... def __getattribute__(*args):\n ... print "Class getattribute invoked"\n ... return object.__getattribute__(*args)\n ...\n >>> c = C()\n >>> c.__len__() # Explicit lookup via instance\n Class getattribute invoked\n 10\n >>> type(c).__len__(c) # Explicit lookup via type\n Metaclass getattribute invoked\n 10\n >>> len(c) # Implicit lookup\n 10\n\nBypassing the ``__getattribute__()`` machinery in this fashion\nprovides significant scope for speed optimisations within the\ninterpreter, at the cost of some flexibility in the handling of\nspecial methods (the special method *must* be set on the class object\nitself in order to be consistently invoked by the interpreter).\n\n-[ Footnotes ]-\n\n[1] It *is* possible in some cases to change an object\'s type, under\n certain controlled conditions. 
It generally isn\'t a good idea\n though, since it can lead to some very strange behaviour if it is\n handled incorrectly.\n\n[2] For operands of the same type, it is assumed that if the non-\n reflected method (such as ``__add__()``) fails the operation is\n not supported, which is why the reflected method is not called.\n', 'string-conversions': u'\nString conversions\n******************\n\nA string conversion is an expression list enclosed in reverse (a.k.a.\nbackward) quotes:\n\n string_conversion ::= "\'" expression_list "\'"\n\nA string conversion evaluates the contained expression list and\nconverts the resulting object into a string according to rules\nspecific to its type.\n\nIf the object is a string, a number, ``None``, or a tuple, list or\ndictionary containing only objects whose type is one of these, the\nresulting string is a valid Python expression which can be passed to\nthe built-in function ``eval()`` to yield an expression with the same\nvalue (or an approximation, if floating point numbers are involved).\n\n(In particular, converting a string adds quotes around it and converts\n"funny" characters to escape sequences that are safe to print.)\n\nRecursive objects (for example, lists or dictionaries that contain a\nreference to themselves, directly or indirectly) use ``...`` to\nindicate a recursive reference, and the result cannot be passed to\n``eval()`` to get an equal value (``SyntaxError`` will be raised\ninstead).\n\nThe built-in function ``repr()`` performs exactly the same conversion\nin its argument as enclosing it in parentheses and reverse quotes\ndoes. The built-in function ``str()`` performs a similar but more\nuser-friendly conversion.\n', - 'string-methods': u'\nString Methods\n**************\n\nBelow are listed the string methods which both 8-bit strings and\nUnicode objects support.\n\nIn addition, Python\'s strings support the sequence type methods\ndescribed in the *Sequence Types --- str, unicode, list, tuple,\nbuffer, xrange* section. To output formatted strings use template\nstrings or the ``%`` operator described in the *String Formatting\nOperations* section. Also, see the ``re`` module for string functions\nbased on regular expressions.\n\nstr.capitalize()\n\n Return a copy of the string with only its first character\n capitalized.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.center(width[, fillchar])\n\n Return centered in a string of length *width*. Padding is done\n using the specified *fillchar* (default is a space).\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.count(sub[, start[, end]])\n\n Return the number of non-overlapping occurrences of substring *sub*\n in the range [*start*, *end*]. Optional arguments *start* and\n *end* are interpreted as in slice notation.\n\nstr.decode([encoding[, errors]])\n\n Decodes the string using the codec registered for *encoding*.\n *encoding* defaults to the default string encoding. *errors* may\n be given to set a different error handling scheme. The default is\n ``\'strict\'``, meaning that encoding errors raise ``UnicodeError``.\n Other possible values are ``\'ignore\'``, ``\'replace\'`` and any other\n name registered via ``codecs.register_error()``, see section *Codec\n Base Classes*.\n\n New in version 2.2.\n\n Changed in version 2.3: Support for other error handling schemes\n added.\n\n Changed in version 2.7: Support for keyword arguments added.\n\nstr.encode([encoding[, errors]])\n\n Return an encoded version of the string. 
Default encoding is the\n current default string encoding. *errors* may be given to set a\n different error handling scheme. The default for *errors* is\n ``\'strict\'``, meaning that encoding errors raise a\n ``UnicodeError``. Other possible values are ``\'ignore\'``,\n ``\'replace\'``, ``\'xmlcharrefreplace\'``, ``\'backslashreplace\'`` and\n any other name registered via ``codecs.register_error()``, see\n section *Codec Base Classes*. For a list of possible encodings, see\n section *Standard Encodings*.\n\n New in version 2.0.\n\n Changed in version 2.3: Support for ``\'xmlcharrefreplace\'`` and\n ``\'backslashreplace\'`` and other error handling schemes added.\n\n Changed in version 2.7: Support for keyword arguments added.\n\nstr.endswith(suffix[, start[, end]])\n\n Return ``True`` if the string ends with the specified *suffix*,\n otherwise return ``False``. *suffix* can also be a tuple of\n suffixes to look for. With optional *start*, test beginning at\n that position. With optional *end*, stop comparing at that\n position.\n\n Changed in version 2.5: Accept tuples as *suffix*.\n\nstr.expandtabs([tabsize])\n\n Return a copy of the string where all tab characters are replaced\n by one or more spaces, depending on the current column and the\n given tab size. The column number is reset to zero after each\n newline occurring in the string. If *tabsize* is not given, a tab\n size of ``8`` characters is assumed. This doesn\'t understand other\n non-printing characters or escape sequences.\n\nstr.find(sub[, start[, end]])\n\n Return the lowest index in the string where substring *sub* is\n found, such that *sub* is contained in the slice ``s[start:end]``.\n Optional arguments *start* and *end* are interpreted as in slice\n notation. Return ``-1`` if *sub* is not found.\n\nstr.format(*args, **kwargs)\n\n Perform a string formatting operation. The string on which this\n method is called can contain literal text or replacement fields\n delimited by braces ``{}``. Each replacement field contains either\n the numeric index of a positional argument, or the name of a\n keyword argument. 
Returns a copy of the string where each\n replacement field is replaced with the string value of the\n corresponding argument.\n\n >>> "The sum of 1 + 2 is {0}".format(1+2)\n \'The sum of 1 + 2 is 3\'\n\n See *Format String Syntax* for a description of the various\n formatting options that can be specified in format strings.\n\n This method of string formatting is the new standard in Python 3.0,\n and should be preferred to the ``%`` formatting described in\n *String Formatting Operations* in new code.\n\n New in version 2.6.\n\nstr.index(sub[, start[, end]])\n\n Like ``find()``, but raise ``ValueError`` when the substring is not\n found.\n\nstr.isalnum()\n\n Return true if all characters in the string are alphanumeric and\n there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isalpha()\n\n Return true if all characters in the string are alphabetic and\n there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isdigit()\n\n Return true if all characters in the string are digits and there is\n at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.islower()\n\n Return true if all cased characters in the string are lowercase and\n there is at least one cased character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isspace()\n\n Return true if there are only whitespace characters in the string\n and there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.istitle()\n\n Return true if the string is a titlecased string and there is at\n least one character, for example uppercase characters may only\n follow uncased characters and lowercase characters only cased ones.\n Return false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isupper()\n\n Return true if all cased characters in the string are uppercase and\n there is at least one cased character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.join(iterable)\n\n Return a string which is the concatenation of the strings in the\n *iterable* *iterable*. The separator between elements is the\n string providing this method.\n\nstr.ljust(width[, fillchar])\n\n Return the string left justified in a string of length *width*.\n Padding is done using the specified *fillchar* (default is a\n space). The original string is returned if *width* is less than\n ``len(s)``.\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.lower()\n\n Return a copy of the string converted to lowercase.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.lstrip([chars])\n\n Return a copy of the string with leading characters removed. The\n *chars* argument is a string specifying the set of characters to be\n removed. If omitted or ``None``, the *chars* argument defaults to\n removing whitespace. The *chars* argument is not a prefix; rather,\n all combinations of its values are stripped:\n\n >>> \' spacious \'.lstrip()\n \'spacious \'\n >>> \'www.example.com\'.lstrip(\'cmowz.\')\n \'example.com\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.partition(sep)\n\n Split the string at the first occurrence of *sep*, and return a\n 3-tuple containing the part before the separator, the separator\n itself, and the part after the separator. 
If the separator is not\n found, return a 3-tuple containing the string itself, followed by\n two empty strings.\n\n New in version 2.5.\n\nstr.replace(old, new[, count])\n\n Return a copy of the string with all occurrences of substring *old*\n replaced by *new*. If the optional argument *count* is given, only\n the first *count* occurrences are replaced.\n\nstr.rfind(sub[, start[, end]])\n\n Return the highest index in the string where substring *sub* is\n found, such that *sub* is contained within ``s[start:end]``.\n Optional arguments *start* and *end* are interpreted as in slice\n notation. Return ``-1`` on failure.\n\nstr.rindex(sub[, start[, end]])\n\n Like ``rfind()`` but raises ``ValueError`` when the substring *sub*\n is not found.\n\nstr.rjust(width[, fillchar])\n\n Return the string right justified in a string of length *width*.\n Padding is done using the specified *fillchar* (default is a\n space). The original string is returned if *width* is less than\n ``len(s)``.\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.rpartition(sep)\n\n Split the string at the last occurrence of *sep*, and return a\n 3-tuple containing the part before the separator, the separator\n itself, and the part after the separator. If the separator is not\n found, return a 3-tuple containing two empty strings, followed by\n the string itself.\n\n New in version 2.5.\n\nstr.rsplit([sep[, maxsplit]])\n\n Return a list of the words in the string, using *sep* as the\n delimiter string. If *maxsplit* is given, at most *maxsplit* splits\n are done, the *rightmost* ones. If *sep* is not specified or\n ``None``, any whitespace string is a separator. Except for\n splitting from the right, ``rsplit()`` behaves like ``split()``\n which is described in detail below.\n\n New in version 2.4.\n\nstr.rstrip([chars])\n\n Return a copy of the string with trailing characters removed. The\n *chars* argument is a string specifying the set of characters to be\n removed. If omitted or ``None``, the *chars* argument defaults to\n removing whitespace. The *chars* argument is not a suffix; rather,\n all combinations of its values are stripped:\n\n >>> \' spacious \'.rstrip()\n \' spacious\'\n >>> \'mississippi\'.rstrip(\'ipz\')\n \'mississ\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.split([sep[, maxsplit]])\n\n Return a list of the words in the string, using *sep* as the\n delimiter string. If *maxsplit* is given, at most *maxsplit*\n splits are done (thus, the list will have at most ``maxsplit+1``\n elements). If *maxsplit* is not specified, then there is no limit\n on the number of splits (all possible splits are made).\n\n If *sep* is given, consecutive delimiters are not grouped together\n and are deemed to delimit empty strings (for example,\n ``\'1,,2\'.split(\',\')`` returns ``[\'1\', \'\', \'2\']``). The *sep*\n argument may consist of multiple characters (for example,\n ``\'1<>2<>3\'.split(\'<>\')`` returns ``[\'1\', \'2\', \'3\']``). Splitting\n an empty string with a specified separator returns ``[\'\']``.\n\n If *sep* is not specified or is ``None``, a different splitting\n algorithm is applied: runs of consecutive whitespace are regarded\n as a single separator, and the result will contain no empty strings\n at the start or end if the string has leading or trailing\n whitespace. 
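(Illustrative addition, not part of the quoted documentation: a short session showing ``partition()`` and ``rpartition()`` as described above; the sample strings are arbitrary.)

   >>> 'key=value=extra'.partition('=')
   ('key', '=', 'value=extra')
   >>> 'key=value=extra'.rpartition('=')
   ('key=value', '=', 'extra')
   >>> 'no separator here'.partition('=')
   ('no separator here', '', '')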
Consequently, splitting an empty string or a string\n consisting of just whitespace with a ``None`` separator returns\n ``[]``.\n\n For example, ``\' 1 2 3 \'.split()`` returns ``[\'1\', \'2\', \'3\']``,\n and ``\' 1 2 3 \'.split(None, 1)`` returns ``[\'1\', \'2 3 \']``.\n\nstr.splitlines([keepends])\n\n Return a list of the lines in the string, breaking at line\n boundaries. Line breaks are not included in the resulting list\n unless *keepends* is given and true.\n\nstr.startswith(prefix[, start[, end]])\n\n Return ``True`` if string starts with the *prefix*, otherwise\n return ``False``. *prefix* can also be a tuple of prefixes to look\n for. With optional *start*, test string beginning at that\n position. With optional *end*, stop comparing string at that\n position.\n\n Changed in version 2.5: Accept tuples as *prefix*.\n\nstr.strip([chars])\n\n Return a copy of the string with the leading and trailing\n characters removed. The *chars* argument is a string specifying the\n set of characters to be removed. If omitted or ``None``, the\n *chars* argument defaults to removing whitespace. The *chars*\n argument is not a prefix or suffix; rather, all combinations of its\n values are stripped:\n\n >>> \' spacious \'.strip()\n \'spacious\'\n >>> \'www.example.com\'.strip(\'cmowz.\')\n \'example\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.swapcase()\n\n Return a copy of the string with uppercase characters converted to\n lowercase and vice versa.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.title()\n\n Return a titlecased version of the string where words start with an\n uppercase character and the remaining characters are lowercase.\n\n The algorithm uses a simple language-independent definition of a\n word as groups of consecutive letters. The definition works in\n many contexts but it means that apostrophes in contractions and\n possessives form word boundaries, which may not be the desired\n result:\n\n >>> "they\'re bill\'s friends from the UK".title()\n "They\'Re Bill\'S Friends From The Uk"\n\n A workaround for apostrophes can be constructed using regular\n expressions:\n\n >>> import re\n >>> def titlecase(s):\n return re.sub(r"[A-Za-z]+(\'[A-Za-z]+)?",\n lambda mo: mo.group(0)[0].upper() +\n mo.group(0)[1:].lower(),\n s)\n\n >>> titlecase("they\'re bill\'s friends.")\n "They\'re Bill\'s Friends."\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.translate(table[, deletechars])\n\n Return a copy of the string where all characters occurring in the\n optional argument *deletechars* are removed, and the remaining\n characters have been mapped through the given translation table,\n which must be a string of length 256.\n\n You can use the ``maketrans()`` helper function in the ``string``\n module to create a translation table. For string objects, set the\n *table* argument to ``None`` for translations that only delete\n characters:\n\n >>> \'read this short text\'.translate(None, \'aeiou\')\n \'rd ths shrt txt\'\n\n New in version 2.6: Support for a ``None`` *table* argument.\n\n For Unicode objects, the ``translate()`` method does not accept the\n optional *deletechars* argument. Instead, it returns a copy of the\n *s* where all characters have been mapped through the given\n translation table which must be a mapping of Unicode ordinals to\n Unicode ordinals, Unicode strings or ``None``. Unmapped characters\n are left untouched. 
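(Illustrative addition: ``translate()`` with a table built by ``string.maketrans()``, complementing the delete-only example quoted above; the mapping chosen here is arbitrary.)

   >>> import string
   >>> table = string.maketrans('abc', 'xyz')
   >>> 'aabbcc'.translate(table)
   'xxyyzz'
   >>> 'aabbcc'.translate(table, 'b')   # delete 'b', then map through the table
   'xxzz'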
Characters mapped to ``None`` are deleted.\n Note, a more flexible approach is to create a custom character\n mapping codec using the ``codecs`` module (see ``encodings.cp1251``\n for an example).\n\nstr.upper()\n\n Return a copy of the string converted to uppercase.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.zfill(width)\n\n Return the numeric string left filled with zeros in a string of\n length *width*. A sign prefix is handled correctly. The original\n string is returned if *width* is less than ``len(s)``.\n\n New in version 2.2.2.\n\nThe following methods are present only on unicode objects:\n\nunicode.isnumeric()\n\n Return ``True`` if there are only numeric characters in S,\n ``False`` otherwise. Numeric characters include digit characters,\n and all characters that have the Unicode numeric value property,\n e.g. U+2155, VULGAR FRACTION ONE FIFTH.\n\nunicode.isdecimal()\n\n Return ``True`` if there are only decimal characters in S,\n ``False`` otherwise. Decimal characters include digit characters,\n and all characters that that can be used to form decimal-radix\n numbers, e.g. U+0660, ARABIC-INDIC DIGIT ZERO.\n', - 'strings': u'\nString literals\n***************\n\nString literals are described by the following lexical definitions:\n\n stringliteral ::= [stringprefix](shortstring | longstring)\n stringprefix ::= "r" | "u" | "ur" | "R" | "U" | "UR" | "Ur" | "uR"\n shortstring ::= "\'" shortstringitem* "\'" | \'"\' shortstringitem* \'"\'\n longstring ::= "\'\'\'" longstringitem* "\'\'\'"\n | \'"""\' longstringitem* \'"""\'\n shortstringitem ::= shortstringchar | escapeseq\n longstringitem ::= longstringchar | escapeseq\n shortstringchar ::= \n longstringchar ::= \n escapeseq ::= "\\" \n\nOne syntactic restriction not indicated by these productions is that\nwhitespace is not allowed between the **stringprefix** and the rest of\nthe string literal. The source character set is defined by the\nencoding declaration; it is ASCII if no encoding declaration is given\nin the source file; see section *Encoding declarations*.\n\nIn plain English: String literals can be enclosed in matching single\nquotes (``\'``) or double quotes (``"``). They can also be enclosed in\nmatching groups of three single or double quotes (these are generally\nreferred to as *triple-quoted strings*). The backslash (``\\``)\ncharacter is used to escape characters that otherwise have a special\nmeaning, such as newline, backslash itself, or the quote character.\nString literals may optionally be prefixed with a letter ``\'r\'`` or\n``\'R\'``; such strings are called *raw strings* and use different rules\nfor interpreting backslash escape sequences. A prefix of ``\'u\'`` or\n``\'U\'`` makes the string a Unicode string. Unicode strings use the\nUnicode character set as defined by the Unicode Consortium and ISO\n10646. Some additional escape sequences, described below, are\navailable in Unicode strings. The two prefix characters may be\ncombined; in this case, ``\'u\'`` must appear before ``\'r\'``.\n\nIn triple-quoted strings, unescaped newlines and quotes are allowed\n(and are retained), except that three unescaped quotes in a row\nterminate the string. (A "quote" is the character used to open the\nstring, i.e. either ``\'`` or ``"``.)\n\nUnless an ``\'r\'`` or ``\'R\'`` prefix is present, escape sequences in\nstrings are interpreted according to rules similar to those used by\nStandard C. 
The recognized escape sequences are:\n\n+-------------------+-----------------------------------+---------+\n| Escape Sequence | Meaning | Notes |\n+===================+===================================+=========+\n| ``\\newline`` | Ignored | |\n+-------------------+-----------------------------------+---------+\n| ``\\\\`` | Backslash (``\\``) | |\n+-------------------+-----------------------------------+---------+\n| ``\\\'`` | Single quote (``\'``) | |\n+-------------------+-----------------------------------+---------+\n| ``\\"`` | Double quote (``"``) | |\n+-------------------+-----------------------------------+---------+\n| ``\\a`` | ASCII Bell (BEL) | |\n+-------------------+-----------------------------------+---------+\n| ``\\b`` | ASCII Backspace (BS) | |\n+-------------------+-----------------------------------+---------+\n| ``\\f`` | ASCII Formfeed (FF) | |\n+-------------------+-----------------------------------+---------+\n| ``\\n`` | ASCII Linefeed (LF) | |\n+-------------------+-----------------------------------+---------+\n| ``\\N{name}`` | Character named *name* in the | |\n| | Unicode database (Unicode only) | |\n+-------------------+-----------------------------------+---------+\n| ``\\r`` | ASCII Carriage Return (CR) | |\n+-------------------+-----------------------------------+---------+\n| ``\\t`` | ASCII Horizontal Tab (TAB) | |\n+-------------------+-----------------------------------+---------+\n| ``\\uxxxx`` | Character with 16-bit hex value | (1) |\n| | *xxxx* (Unicode only) | |\n+-------------------+-----------------------------------+---------+\n| ``\\Uxxxxxxxx`` | Character with 32-bit hex value | (2) |\n| | *xxxxxxxx* (Unicode only) | |\n+-------------------+-----------------------------------+---------+\n| ``\\v`` | ASCII Vertical Tab (VT) | |\n+-------------------+-----------------------------------+---------+\n| ``\\ooo`` | Character with octal value *ooo* | (3,5) |\n+-------------------+-----------------------------------+---------+\n| ``\\xhh`` | Character with hex value *hh* | (4,5) |\n+-------------------+-----------------------------------+---------+\n\nNotes:\n\n1. Individual code units which form parts of a surrogate pair can be\n encoded using this escape sequence.\n\n2. Any Unicode character can be encoded this way, but characters\n outside the Basic Multilingual Plane (BMP) will be encoded using a\n surrogate pair if Python is compiled to use 16-bit code units (the\n default). Individual code units which form parts of a surrogate\n pair can be encoded using this escape sequence.\n\n3. As in Standard C, up to three octal digits are accepted.\n\n4. Unlike in Standard C, exactly two hex digits are required.\n\n5. In a string literal, hexadecimal and octal escapes denote the byte\n with the given value; it is not necessary that the byte encodes a\n character in the source character set. In a Unicode literal, these\n escapes denote a Unicode character with the given value.\n\nUnlike Standard C, all unrecognized escape sequences are left in the\nstring unchanged, i.e., *the backslash is left in the string*. (This\nbehavior is useful when debugging: if an escape sequence is mistyped,\nthe resulting output is more easily recognized as broken.) 
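(Illustrative addition: a brief session demonstrating the escape-sequence rules tabulated above, including an unrecognized escape being left unchanged.)

   >>> len('\n'), len('\\n'), len(r'\n')
   (1, 2, 2)
   >>> '\x41\101'          # hex and octal escapes, both denote 'A'
   'AA'
   >>> '\q'                # unrecognized escape: the backslash stays in the string
   '\\q'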
It is also\nimportant to note that the escape sequences marked as "(Unicode only)"\nin the table above fall into the category of unrecognized escapes for\nnon-Unicode string literals.\n\nWhen an ``\'r\'`` or ``\'R\'`` prefix is present, a character following a\nbackslash is included in the string without change, and *all\nbackslashes are left in the string*. For example, the string literal\n``r"\\n"`` consists of two characters: a backslash and a lowercase\n``\'n\'``. String quotes can be escaped with a backslash, but the\nbackslash remains in the string; for example, ``r"\\""`` is a valid\nstring literal consisting of two characters: a backslash and a double\nquote; ``r"\\"`` is not a valid string literal (even a raw string\ncannot end in an odd number of backslashes). Specifically, *a raw\nstring cannot end in a single backslash* (since the backslash would\nescape the following quote character). Note also that a single\nbackslash followed by a newline is interpreted as those two characters\nas part of the string, *not* as a line continuation.\n\nWhen an ``\'r\'`` or ``\'R\'`` prefix is used in conjunction with a\n``\'u\'`` or ``\'U\'`` prefix, then the ``\\uXXXX`` and ``\\UXXXXXXXX``\nescape sequences are processed while *all other backslashes are left\nin the string*. For example, the string literal ``ur"\\u0062\\n"``\nconsists of three Unicode characters: \'LATIN SMALL LETTER B\', \'REVERSE\nSOLIDUS\', and \'LATIN SMALL LETTER N\'. Backslashes can be escaped with\na preceding backslash; however, both remain in the string. As a\nresult, ``\\uXXXX`` escape sequences are only recognized when there are\nan odd number of backslashes.\n', + 'string-methods': u'\nString Methods\n**************\n\nBelow are listed the string methods which both 8-bit strings and\nUnicode objects support. Some of them are also available on\n``bytearray`` objects.\n\nIn addition, Python\'s strings support the sequence type methods\ndescribed in the *Sequence Types --- str, unicode, list, tuple,\nbytearray, buffer, xrange* section. To output formatted strings use\ntemplate strings or the ``%`` operator described in the *String\nFormatting Operations* section. Also, see the ``re`` module for string\nfunctions based on regular expressions.\n\nstr.capitalize()\n\n Return a copy of the string with its first character capitalized\n and the rest lowercased.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.center(width[, fillchar])\n\n Return centered in a string of length *width*. Padding is done\n using the specified *fillchar* (default is a space).\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.count(sub[, start[, end]])\n\n Return the number of non-overlapping occurrences of substring *sub*\n in the range [*start*, *end*]. Optional arguments *start* and\n *end* are interpreted as in slice notation.\n\nstr.decode([encoding[, errors]])\n\n Decodes the string using the codec registered for *encoding*.\n *encoding* defaults to the default string encoding. *errors* may\n be given to set a different error handling scheme. 
The default is\n ``\'strict\'``, meaning that encoding errors raise ``UnicodeError``.\n Other possible values are ``\'ignore\'``, ``\'replace\'`` and any other\n name registered via ``codecs.register_error()``, see section *Codec\n Base Classes*.\n\n New in version 2.2.\n\n Changed in version 2.3: Support for other error handling schemes\n added.\n\n Changed in version 2.7: Support for keyword arguments added.\n\nstr.encode([encoding[, errors]])\n\n Return an encoded version of the string. Default encoding is the\n current default string encoding. *errors* may be given to set a\n different error handling scheme. The default for *errors* is\n ``\'strict\'``, meaning that encoding errors raise a\n ``UnicodeError``. Other possible values are ``\'ignore\'``,\n ``\'replace\'``, ``\'xmlcharrefreplace\'``, ``\'backslashreplace\'`` and\n any other name registered via ``codecs.register_error()``, see\n section *Codec Base Classes*. For a list of possible encodings, see\n section *Standard Encodings*.\n\n New in version 2.0.\n\n Changed in version 2.3: Support for ``\'xmlcharrefreplace\'`` and\n ``\'backslashreplace\'`` and other error handling schemes added.\n\n Changed in version 2.7: Support for keyword arguments added.\n\nstr.endswith(suffix[, start[, end]])\n\n Return ``True`` if the string ends with the specified *suffix*,\n otherwise return ``False``. *suffix* can also be a tuple of\n suffixes to look for. With optional *start*, test beginning at\n that position. With optional *end*, stop comparing at that\n position.\n\n Changed in version 2.5: Accept tuples as *suffix*.\n\nstr.expandtabs([tabsize])\n\n Return a copy of the string where all tab characters are replaced\n by one or more spaces, depending on the current column and the\n given tab size. The column number is reset to zero after each\n newline occurring in the string. If *tabsize* is not given, a tab\n size of ``8`` characters is assumed. This doesn\'t understand other\n non-printing characters or escape sequences.\n\nstr.find(sub[, start[, end]])\n\n Return the lowest index in the string where substring *sub* is\n found, such that *sub* is contained in the slice ``s[start:end]``.\n Optional arguments *start* and *end* are interpreted as in slice\n notation. Return ``-1`` if *sub* is not found.\n\n Note: The ``find()`` method should be used only if you need to know the\n position of *sub*. To check if *sub* is a substring or not, use\n the ``in`` operator:\n\n >>> \'Py\' in \'Python\'\n True\n\nstr.format(*args, **kwargs)\n\n Perform a string formatting operation. The string on which this\n method is called can contain literal text or replacement fields\n delimited by braces ``{}``. Each replacement field contains either\n the numeric index of a positional argument, or the name of a\n keyword argument. 
Returns a copy of the string where each\n replacement field is replaced with the string value of the\n corresponding argument.\n\n >>> "The sum of 1 + 2 is {0}".format(1+2)\n \'The sum of 1 + 2 is 3\'\n\n See *Format String Syntax* for a description of the various\n formatting options that can be specified in format strings.\n\n This method of string formatting is the new standard in Python 3.0,\n and should be preferred to the ``%`` formatting described in\n *String Formatting Operations* in new code.\n\n New in version 2.6.\n\nstr.index(sub[, start[, end]])\n\n Like ``find()``, but raise ``ValueError`` when the substring is not\n found.\n\nstr.isalnum()\n\n Return true if all characters in the string are alphanumeric and\n there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isalpha()\n\n Return true if all characters in the string are alphabetic and\n there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isdigit()\n\n Return true if all characters in the string are digits and there is\n at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.islower()\n\n Return true if all cased characters in the string are lowercase and\n there is at least one cased character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isspace()\n\n Return true if there are only whitespace characters in the string\n and there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.istitle()\n\n Return true if the string is a titlecased string and there is at\n least one character, for example uppercase characters may only\n follow uncased characters and lowercase characters only cased ones.\n Return false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isupper()\n\n Return true if all cased characters in the string are uppercase and\n there is at least one cased character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.join(iterable)\n\n Return a string which is the concatenation of the strings in the\n *iterable* *iterable*. The separator between elements is the\n string providing this method.\n\nstr.ljust(width[, fillchar])\n\n Return the string left justified in a string of length *width*.\n Padding is done using the specified *fillchar* (default is a\n space). The original string is returned if *width* is less than\n ``len(s)``.\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.lower()\n\n Return a copy of the string converted to lowercase.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.lstrip([chars])\n\n Return a copy of the string with leading characters removed. The\n *chars* argument is a string specifying the set of characters to be\n removed. If omitted or ``None``, the *chars* argument defaults to\n removing whitespace. The *chars* argument is not a prefix; rather,\n all combinations of its values are stripped:\n\n >>> \' spacious \'.lstrip()\n \'spacious \'\n >>> \'www.example.com\'.lstrip(\'cmowz.\')\n \'example.com\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.partition(sep)\n\n Split the string at the first occurrence of *sep*, and return a\n 3-tuple containing the part before the separator, the separator\n itself, and the part after the separator. 
If the separator is not\n found, return a 3-tuple containing the string itself, followed by\n two empty strings.\n\n New in version 2.5.\n\nstr.replace(old, new[, count])\n\n Return a copy of the string with all occurrences of substring *old*\n replaced by *new*. If the optional argument *count* is given, only\n the first *count* occurrences are replaced.\n\nstr.rfind(sub[, start[, end]])\n\n Return the highest index in the string where substring *sub* is\n found, such that *sub* is contained within ``s[start:end]``.\n Optional arguments *start* and *end* are interpreted as in slice\n notation. Return ``-1`` on failure.\n\nstr.rindex(sub[, start[, end]])\n\n Like ``rfind()`` but raises ``ValueError`` when the substring *sub*\n is not found.\n\nstr.rjust(width[, fillchar])\n\n Return the string right justified in a string of length *width*.\n Padding is done using the specified *fillchar* (default is a\n space). The original string is returned if *width* is less than\n ``len(s)``.\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.rpartition(sep)\n\n Split the string at the last occurrence of *sep*, and return a\n 3-tuple containing the part before the separator, the separator\n itself, and the part after the separator. If the separator is not\n found, return a 3-tuple containing two empty strings, followed by\n the string itself.\n\n New in version 2.5.\n\nstr.rsplit([sep[, maxsplit]])\n\n Return a list of the words in the string, using *sep* as the\n delimiter string. If *maxsplit* is given, at most *maxsplit* splits\n are done, the *rightmost* ones. If *sep* is not specified or\n ``None``, any whitespace string is a separator. Except for\n splitting from the right, ``rsplit()`` behaves like ``split()``\n which is described in detail below.\n\n New in version 2.4.\n\nstr.rstrip([chars])\n\n Return a copy of the string with trailing characters removed. The\n *chars* argument is a string specifying the set of characters to be\n removed. If omitted or ``None``, the *chars* argument defaults to\n removing whitespace. The *chars* argument is not a suffix; rather,\n all combinations of its values are stripped:\n\n >>> \' spacious \'.rstrip()\n \' spacious\'\n >>> \'mississippi\'.rstrip(\'ipz\')\n \'mississ\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.split([sep[, maxsplit]])\n\n Return a list of the words in the string, using *sep* as the\n delimiter string. If *maxsplit* is given, at most *maxsplit*\n splits are done (thus, the list will have at most ``maxsplit+1``\n elements). If *maxsplit* is not specified, then there is no limit\n on the number of splits (all possible splits are made).\n\n If *sep* is given, consecutive delimiters are not grouped together\n and are deemed to delimit empty strings (for example,\n ``\'1,,2\'.split(\',\')`` returns ``[\'1\', \'\', \'2\']``). The *sep*\n argument may consist of multiple characters (for example,\n ``\'1<>2<>3\'.split(\'<>\')`` returns ``[\'1\', \'2\', \'3\']``). Splitting\n an empty string with a specified separator returns ``[\'\']``.\n\n If *sep* is not specified or is ``None``, a different splitting\n algorithm is applied: runs of consecutive whitespace are regarded\n as a single separator, and the result will contain no empty strings\n at the start or end if the string has leading or trailing\n whitespace. 
Consequently, splitting an empty string or a string\n consisting of just whitespace with a ``None`` separator returns\n ``[]``.\n\n For example, ``\' 1 2 3 \'.split()`` returns ``[\'1\', \'2\', \'3\']``,\n and ``\' 1 2 3 \'.split(None, 1)`` returns ``[\'1\', \'2 3 \']``.\n\nstr.splitlines([keepends])\n\n Return a list of the lines in the string, breaking at line\n boundaries. Line breaks are not included in the resulting list\n unless *keepends* is given and true.\n\nstr.startswith(prefix[, start[, end]])\n\n Return ``True`` if string starts with the *prefix*, otherwise\n return ``False``. *prefix* can also be a tuple of prefixes to look\n for. With optional *start*, test string beginning at that\n position. With optional *end*, stop comparing string at that\n position.\n\n Changed in version 2.5: Accept tuples as *prefix*.\n\nstr.strip([chars])\n\n Return a copy of the string with the leading and trailing\n characters removed. The *chars* argument is a string specifying the\n set of characters to be removed. If omitted or ``None``, the\n *chars* argument defaults to removing whitespace. The *chars*\n argument is not a prefix or suffix; rather, all combinations of its\n values are stripped:\n\n >>> \' spacious \'.strip()\n \'spacious\'\n >>> \'www.example.com\'.strip(\'cmowz.\')\n \'example\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.swapcase()\n\n Return a copy of the string with uppercase characters converted to\n lowercase and vice versa.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.title()\n\n Return a titlecased version of the string where words start with an\n uppercase character and the remaining characters are lowercase.\n\n The algorithm uses a simple language-independent definition of a\n word as groups of consecutive letters. The definition works in\n many contexts but it means that apostrophes in contractions and\n possessives form word boundaries, which may not be the desired\n result:\n\n >>> "they\'re bill\'s friends from the UK".title()\n "They\'Re Bill\'S Friends From The Uk"\n\n A workaround for apostrophes can be constructed using regular\n expressions:\n\n >>> import re\n >>> def titlecase(s):\n return re.sub(r"[A-Za-z]+(\'[A-Za-z]+)?",\n lambda mo: mo.group(0)[0].upper() +\n mo.group(0)[1:].lower(),\n s)\n\n >>> titlecase("they\'re bill\'s friends.")\n "They\'re Bill\'s Friends."\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.translate(table[, deletechars])\n\n Return a copy of the string where all characters occurring in the\n optional argument *deletechars* are removed, and the remaining\n characters have been mapped through the given translation table,\n which must be a string of length 256.\n\n You can use the ``maketrans()`` helper function in the ``string``\n module to create a translation table. For string objects, set the\n *table* argument to ``None`` for translations that only delete\n characters:\n\n >>> \'read this short text\'.translate(None, \'aeiou\')\n \'rd ths shrt txt\'\n\n New in version 2.6: Support for a ``None`` *table* argument.\n\n For Unicode objects, the ``translate()`` method does not accept the\n optional *deletechars* argument. Instead, it returns a copy of the\n *s* where all characters have been mapped through the given\n translation table which must be a mapping of Unicode ordinals to\n Unicode ordinals, Unicode strings or ``None``. Unmapped characters\n are left untouched. 
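(Illustrative addition: ``splitlines()`` and ``swapcase()``, which are described above without examples; the sample text is arbitrary.)

   >>> 'one\ntwo\r\nthree'.splitlines()
   ['one', 'two', 'three']
   >>> 'one\ntwo\r\nthree'.splitlines(True)
   ['one\n', 'two\r\n', 'three']
   >>> 'Hello World'.swapcase()
   'hELLO wORLD'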
Characters mapped to ``None`` are deleted.\n Note, a more flexible approach is to create a custom character\n mapping codec using the ``codecs`` module (see ``encodings.cp1251``\n for an example).\n\nstr.upper()\n\n Return a copy of the string converted to uppercase.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.zfill(width)\n\n Return the numeric string left filled with zeros in a string of\n length *width*. A sign prefix is handled correctly. The original\n string is returned if *width* is less than ``len(s)``.\n\n New in version 2.2.2.\n\nThe following methods are present only on unicode objects:\n\nunicode.isnumeric()\n\n Return ``True`` if there are only numeric characters in S,\n ``False`` otherwise. Numeric characters include digit characters,\n and all characters that have the Unicode numeric value property,\n e.g. U+2155, VULGAR FRACTION ONE FIFTH.\n\nunicode.isdecimal()\n\n Return ``True`` if there are only decimal characters in S,\n ``False`` otherwise. Decimal characters include digit characters,\n and all characters that that can be used to form decimal-radix\n numbers, e.g. U+0660, ARABIC-INDIC DIGIT ZERO.\n', + 'strings': u'\nString literals\n***************\n\nString literals are described by the following lexical definitions:\n\n stringliteral ::= [stringprefix](shortstring | longstring)\n stringprefix ::= "r" | "u" | "ur" | "R" | "U" | "UR" | "Ur" | "uR"\n | "b" | "B" | "br" | "Br" | "bR" | "BR"\n shortstring ::= "\'" shortstringitem* "\'" | \'"\' shortstringitem* \'"\'\n longstring ::= "\'\'\'" longstringitem* "\'\'\'"\n | \'"""\' longstringitem* \'"""\'\n shortstringitem ::= shortstringchar | escapeseq\n longstringitem ::= longstringchar | escapeseq\n shortstringchar ::= \n longstringchar ::= \n escapeseq ::= "\\" \n\nOne syntactic restriction not indicated by these productions is that\nwhitespace is not allowed between the **stringprefix** and the rest of\nthe string literal. The source character set is defined by the\nencoding declaration; it is ASCII if no encoding declaration is given\nin the source file; see section *Encoding declarations*.\n\nIn plain English: String literals can be enclosed in matching single\nquotes (``\'``) or double quotes (``"``). They can also be enclosed in\nmatching groups of three single or double quotes (these are generally\nreferred to as *triple-quoted strings*). The backslash (``\\``)\ncharacter is used to escape characters that otherwise have a special\nmeaning, such as newline, backslash itself, or the quote character.\nString literals may optionally be prefixed with a letter ``\'r\'`` or\n``\'R\'``; such strings are called *raw strings* and use different rules\nfor interpreting backslash escape sequences. A prefix of ``\'u\'`` or\n``\'U\'`` makes the string a Unicode string. Unicode strings use the\nUnicode character set as defined by the Unicode Consortium and ISO\n10646. Some additional escape sequences, described below, are\navailable in Unicode strings. A prefix of ``\'b\'`` or ``\'B\'`` is\nignored in Python 2; it indicates that the literal should become a\nbytes literal in Python 3 (e.g. when code is automatically converted\nwith 2to3). A ``\'u\'`` or ``\'b\'`` prefix may be followed by an ``\'r\'``\nprefix.\n\nIn triple-quoted strings, unescaped newlines and quotes are allowed\n(and are retained), except that three unescaped quotes in a row\nterminate the string. (A "quote" is the character used to open the\nstring, i.e. 
either ``\'`` or ``"``.)\n\nUnless an ``\'r\'`` or ``\'R\'`` prefix is present, escape sequences in\nstrings are interpreted according to rules similar to those used by\nStandard C. The recognized escape sequences are:\n\n+-------------------+-----------------------------------+---------+\n| Escape Sequence | Meaning | Notes |\n+===================+===================================+=========+\n| ``\\newline`` | Ignored | |\n+-------------------+-----------------------------------+---------+\n| ``\\\\`` | Backslash (``\\``) | |\n+-------------------+-----------------------------------+---------+\n| ``\\\'`` | Single quote (``\'``) | |\n+-------------------+-----------------------------------+---------+\n| ``\\"`` | Double quote (``"``) | |\n+-------------------+-----------------------------------+---------+\n| ``\\a`` | ASCII Bell (BEL) | |\n+-------------------+-----------------------------------+---------+\n| ``\\b`` | ASCII Backspace (BS) | |\n+-------------------+-----------------------------------+---------+\n| ``\\f`` | ASCII Formfeed (FF) | |\n+-------------------+-----------------------------------+---------+\n| ``\\n`` | ASCII Linefeed (LF) | |\n+-------------------+-----------------------------------+---------+\n| ``\\N{name}`` | Character named *name* in the | |\n| | Unicode database (Unicode only) | |\n+-------------------+-----------------------------------+---------+\n| ``\\r`` | ASCII Carriage Return (CR) | |\n+-------------------+-----------------------------------+---------+\n| ``\\t`` | ASCII Horizontal Tab (TAB) | |\n+-------------------+-----------------------------------+---------+\n| ``\\uxxxx`` | Character with 16-bit hex value | (1) |\n| | *xxxx* (Unicode only) | |\n+-------------------+-----------------------------------+---------+\n| ``\\Uxxxxxxxx`` | Character with 32-bit hex value | (2) |\n| | *xxxxxxxx* (Unicode only) | |\n+-------------------+-----------------------------------+---------+\n| ``\\v`` | ASCII Vertical Tab (VT) | |\n+-------------------+-----------------------------------+---------+\n| ``\\ooo`` | Character with octal value *ooo* | (3,5) |\n+-------------------+-----------------------------------+---------+\n| ``\\xhh`` | Character with hex value *hh* | (4,5) |\n+-------------------+-----------------------------------+---------+\n\nNotes:\n\n1. Individual code units which form parts of a surrogate pair can be\n encoded using this escape sequence.\n\n2. Any Unicode character can be encoded this way, but characters\n outside the Basic Multilingual Plane (BMP) will be encoded using a\n surrogate pair if Python is compiled to use 16-bit code units (the\n default). Individual code units which form parts of a surrogate\n pair can be encoded using this escape sequence.\n\n3. As in Standard C, up to three octal digits are accepted.\n\n4. Unlike in Standard C, exactly two hex digits are required.\n\n5. In a string literal, hexadecimal and octal escapes denote the byte\n with the given value; it is not necessary that the byte encodes a\n character in the source character set. In a Unicode literal, these\n escapes denote a Unicode character with the given value.\n\nUnlike Standard C, all unrecognized escape sequences are left in the\nstring unchanged, i.e., *the backslash is left in the string*. (This\nbehavior is useful when debugging: if an escape sequence is mistyped,\nthe resulting output is more easily recognized as broken.) 
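(Illustrative addition: the string-literal prefixes discussed above, under Python 2 semantics; the ``b`` prefix is accepted for forward compatibility but has no effect here.)

   >>> type(u'abc'), type(b'abc')
   (<type 'unicode'>, <type 'str'>)
   >>> ur'\u0062\n'        # \uXXXX is processed even in a ur'' literal
   u'b\\n'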
It is also\nimportant to note that the escape sequences marked as "(Unicode only)"\nin the table above fall into the category of unrecognized escapes for\nnon-Unicode string literals.\n\nWhen an ``\'r\'`` or ``\'R\'`` prefix is present, a character following a\nbackslash is included in the string without change, and *all\nbackslashes are left in the string*. For example, the string literal\n``r"\\n"`` consists of two characters: a backslash and a lowercase\n``\'n\'``. String quotes can be escaped with a backslash, but the\nbackslash remains in the string; for example, ``r"\\""`` is a valid\nstring literal consisting of two characters: a backslash and a double\nquote; ``r"\\"`` is not a valid string literal (even a raw string\ncannot end in an odd number of backslashes). Specifically, *a raw\nstring cannot end in a single backslash* (since the backslash would\nescape the following quote character). Note also that a single\nbackslash followed by a newline is interpreted as those two characters\nas part of the string, *not* as a line continuation.\n\nWhen an ``\'r\'`` or ``\'R\'`` prefix is used in conjunction with a\n``\'u\'`` or ``\'U\'`` prefix, then the ``\\uXXXX`` and ``\\UXXXXXXXX``\nescape sequences are processed while *all other backslashes are left\nin the string*. For example, the string literal ``ur"\\u0062\\n"``\nconsists of three Unicode characters: \'LATIN SMALL LETTER B\', \'REVERSE\nSOLIDUS\', and \'LATIN SMALL LETTER N\'. Backslashes can be escaped with\na preceding backslash; however, both remain in the string. As a\nresult, ``\\uXXXX`` escape sequences are only recognized when there are\nan odd number of backslashes.\n', 'subscriptions': u'\nSubscriptions\n*************\n\nA subscription selects an item of a sequence (string, tuple or list)\nor mapping (dictionary) object:\n\n subscription ::= primary "[" expression_list "]"\n\nThe primary must evaluate to an object of a sequence or mapping type.\n\nIf the primary is a mapping, the expression list must evaluate to an\nobject whose value is one of the keys of the mapping, and the\nsubscription selects the value in the mapping that corresponds to that\nkey. (The expression list is a tuple except if it has exactly one\nitem.)\n\nIf the primary is a sequence, the expression (list) must evaluate to a\nplain integer. If this value is negative, the length of the sequence\nis added to it (so that, e.g., ``x[-1]`` selects the last item of\n``x``.) The resulting value must be a nonnegative integer less than\nthe number of items in the sequence, and the subscription selects the\nitem whose index is that value (counting from zero).\n\nA string\'s items are characters. A character is not a separate data\ntype but a string of exactly one character.\n', 'truth': u"\nTruth Value Testing\n*******************\n\nAny object can be tested for truth value, for use in an ``if`` or\n``while`` condition or as operand of the Boolean operations below. The\nfollowing values are considered false:\n\n* ``None``\n\n* ``False``\n\n* zero of any numeric type, for example, ``0``, ``0L``, ``0.0``,\n ``0j``.\n\n* any empty sequence, for example, ``''``, ``()``, ``[]``.\n\n* any empty mapping, for example, ``{}``.\n\n* instances of user-defined classes, if the class defines a\n ``__nonzero__()`` or ``__len__()`` method, when that method returns\n the integer zero or ``bool`` value ``False``. 
[1]\n\nAll other values are considered true --- so objects of many types are\nalways true.\n\nOperations and built-in functions that have a Boolean result always\nreturn ``0`` or ``False`` for false and ``1`` or ``True`` for true,\nunless otherwise stated. (Important exception: the Boolean operations\n``or`` and ``and`` always return one of their operands.)\n", 'try': u'\nThe ``try`` statement\n*********************\n\nThe ``try`` statement specifies exception handlers and/or cleanup code\nfor a group of statements:\n\n try_stmt ::= try1_stmt | try2_stmt\n try1_stmt ::= "try" ":" suite\n ("except" [expression [("as" | ",") target]] ":" suite)+\n ["else" ":" suite]\n ["finally" ":" suite]\n try2_stmt ::= "try" ":" suite\n "finally" ":" suite\n\nChanged in version 2.5: In previous versions of Python,\n``try``...``except``...``finally`` did not work. ``try``...``except``\nhad to be nested in ``try``...``finally``.\n\nThe ``except`` clause(s) specify one or more exception handlers. When\nno exception occurs in the ``try`` clause, no exception handler is\nexecuted. When an exception occurs in the ``try`` suite, a search for\nan exception handler is started. This search inspects the except\nclauses in turn until one is found that matches the exception. An\nexpression-less except clause, if present, must be last; it matches\nany exception. For an except clause with an expression, that\nexpression is evaluated, and the clause matches the exception if the\nresulting object is "compatible" with the exception. An object is\ncompatible with an exception if it is the class or a base class of the\nexception object, a tuple containing an item compatible with the\nexception, or, in the (deprecated) case of string exceptions, is the\nraised string itself (note that the object identities must match, i.e.\nit must be the same string object, not just a string with the same\nvalue).\n\nIf no except clause matches the exception, the search for an exception\nhandler continues in the surrounding code and on the invocation stack.\n[1]\n\nIf the evaluation of an expression in the header of an except clause\nraises an exception, the original search for a handler is canceled and\na search starts for the new exception in the surrounding code and on\nthe call stack (it is treated as if the entire ``try`` statement\nraised the exception).\n\nWhen a matching except clause is found, the exception is assigned to\nthe target specified in that except clause, if present, and the except\nclause\'s suite is executed. All except clauses must have an\nexecutable block. When the end of this block is reached, execution\ncontinues normally after the entire try statement. (This means that\nif two nested handlers exist for the same exception, and the exception\noccurs in the try clause of the inner handler, the outer handler will\nnot handle the exception.)\n\nBefore an except clause\'s suite is executed, details about the\nexception are assigned to three variables in the ``sys`` module:\n``sys.exc_type`` receives the object identifying the exception;\n``sys.exc_value`` receives the exception\'s parameter;\n``sys.exc_traceback`` receives a traceback object (see section *The\nstandard type hierarchy*) identifying the point in the program where\nthe exception occurred. These details are also available through the\n``sys.exc_info()`` function, which returns a tuple ``(exc_type,\nexc_value, exc_traceback)``. Use of the corresponding variables is\ndeprecated in favor of this function, since their use is unsafe in a\nthreaded program. 
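(Illustrative addition: a minimal sketch of the handler search, ``sys.exc_info()``, and the ``else``/``finally`` clauses described above.)

   import sys

   try:
       1 / 0
   except (TypeError, ValueError):
       pass                             # not compatible with ZeroDivisionError
   except ZeroDivisionError:
       exc_type, exc_value, exc_tb = sys.exc_info()
       assert exc_type is ZeroDivisionError
   else:
       print 'not reached: an exception occurred'
   finally:
       print 'the finally clause always runs'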
As of Python 1.5, the variables are restored to\ntheir previous values (before the call) when returning from a function\nthat handled an exception.\n\nThe optional ``else`` clause is executed if and when control flows off\nthe end of the ``try`` clause. [2] Exceptions in the ``else`` clause\nare not handled by the preceding ``except`` clauses.\n\nIf ``finally`` is present, it specifies a \'cleanup\' handler. The\n``try`` clause is executed, including any ``except`` and ``else``\nclauses. If an exception occurs in any of the clauses and is not\nhandled, the exception is temporarily saved. The ``finally`` clause is\nexecuted. If there is a saved exception, it is re-raised at the end\nof the ``finally`` clause. If the ``finally`` clause raises another\nexception or executes a ``return`` or ``break`` statement, the saved\nexception is lost. The exception information is not available to the\nprogram during execution of the ``finally`` clause.\n\nWhen a ``return``, ``break`` or ``continue`` statement is executed in\nthe ``try`` suite of a ``try``...``finally`` statement, the\n``finally`` clause is also executed \'on the way out.\' A ``continue``\nstatement is illegal in the ``finally`` clause. (The reason is a\nproblem with the current implementation --- this restriction may be\nlifted in the future).\n\nAdditional information on exceptions can be found in section\n*Exceptions*, and information on using the ``raise`` statement to\ngenerate exceptions may be found in section *The raise statement*.\n', - 'types': u'\nThe standard type hierarchy\n***************************\n\nBelow is a list of the types that are built into Python. Extension\nmodules (written in C, Java, or other languages, depending on the\nimplementation) can define additional types. Future versions of\nPython may add types to the type hierarchy (e.g., rational numbers,\nefficiently stored arrays of integers, etc.).\n\nSome of the type descriptions below contain a paragraph listing\n\'special attributes.\' These are attributes that provide access to the\nimplementation and are not intended for general use. Their definition\nmay change in the future.\n\nNone\n This type has a single value. There is a single object with this\n value. This object is accessed through the built-in name ``None``.\n It is used to signify the absence of a value in many situations,\n e.g., it is returned from functions that don\'t explicitly return\n anything. Its truth value is false.\n\nNotImplemented\n This type has a single value. There is a single object with this\n value. This object is accessed through the built-in name\n ``NotImplemented``. Numeric methods and rich comparison methods may\n return this value if they do not implement the operation for the\n operands provided. (The interpreter will then try the reflected\n operation, or some other fallback, depending on the operator.) Its\n truth value is true.\n\nEllipsis\n This type has a single value. There is a single object with this\n value. This object is accessed through the built-in name\n ``Ellipsis``. It is used to indicate the presence of the ``...``\n syntax in a slice. Its truth value is true.\n\n``numbers.Number``\n These are created by numeric literals and returned as results by\n arithmetic operators and arithmetic built-in functions. 
Numeric\n objects are immutable; once created their value never changes.\n Python numbers are of course strongly related to mathematical\n numbers, but subject to the limitations of numerical representation\n in computers.\n\n Python distinguishes between integers, floating point numbers, and\n complex numbers:\n\n ``numbers.Integral``\n These represent elements from the mathematical set of integers\n (positive and negative).\n\n There are three types of integers:\n\n Plain integers\n These represent numbers in the range -2147483648 through\n 2147483647. (The range may be larger on machines with a\n larger natural word size, but not smaller.) When the result\n of an operation would fall outside this range, the result is\n normally returned as a long integer (in some cases, the\n exception ``OverflowError`` is raised instead). For the\n purpose of shift and mask operations, integers are assumed to\n have a binary, 2\'s complement notation using 32 or more bits,\n and hiding no bits from the user (i.e., all 4294967296\n different bit patterns correspond to different values).\n\n Long integers\n These represent numbers in an unlimited range, subject to\n available (virtual) memory only. For the purpose of shift\n and mask operations, a binary representation is assumed, and\n negative numbers are represented in a variant of 2\'s\n complement which gives the illusion of an infinite string of\n sign bits extending to the left.\n\n Booleans\n These represent the truth values False and True. The two\n objects representing the values False and True are the only\n Boolean objects. The Boolean type is a subtype of plain\n integers, and Boolean values behave like the values 0 and 1,\n respectively, in almost all contexts, the exception being\n that when converted to a string, the strings ``"False"`` or\n ``"True"`` are returned, respectively.\n\n The rules for integer representation are intended to give the\n most meaningful interpretation of shift and mask operations\n involving negative integers and the least surprises when\n switching between the plain and long integer domains. Any\n operation, if it yields a result in the plain integer domain,\n will yield the same result in the long integer domain or when\n using mixed operands. The switch between domains is transparent\n to the programmer.\n\n ``numbers.Real`` (``float``)\n These represent machine-level double precision floating point\n numbers. You are at the mercy of the underlying machine\n architecture (and C or Java implementation) for the accepted\n range and handling of overflow. Python does not support single-\n precision floating point numbers; the savings in processor and\n memory usage that are usually the reason for using these is\n dwarfed by the overhead of using objects in Python, so there is\n no reason to complicate the language with two kinds of floating\n point numbers.\n\n ``numbers.Complex``\n These represent complex numbers as a pair of machine-level\n double precision floating point numbers. The same caveats apply\n as for floating point numbers. The real and imaginary parts of a\n complex number ``z`` can be retrieved through the read-only\n attributes ``z.real`` and ``z.imag``.\n\nSequences\n These represent finite ordered sets indexed by non-negative\n numbers. The built-in function ``len()`` returns the number of\n items of a sequence. When the length of a sequence is *n*, the\n index set contains the numbers 0, 1, ..., *n*-1. 
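(Illustrative addition: the plain/long integer switch and the Boolean subtype described above; the exact boundary depends on the machine's word size, hence ``sys.maxint`` is used.)

   >>> import sys
   >>> type(sys.maxint), type(sys.maxint + 1)
   (<type 'int'>, <type 'long'>)
   >>> isinstance(True, int), True + True
   (True, 2)
   >>> str(True)
   'True'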
Item *i* of\n sequence *a* is selected by ``a[i]``.\n\n Sequences also support slicing: ``a[i:j]`` selects all items with\n index *k* such that *i* ``<=`` *k* ``<`` *j*. When used as an\n expression, a slice is a sequence of the same type. This implies\n that the index set is renumbered so that it starts at 0.\n\n Some sequences also support "extended slicing" with a third "step"\n parameter: ``a[i:j:k]`` selects all items of *a* with index *x*\n where ``x = i + n*k``, *n* ``>=`` ``0`` and *i* ``<=`` *x* ``<``\n *j*.\n\n Sequences are distinguished according to their mutability:\n\n Immutable sequences\n An object of an immutable sequence type cannot change once it is\n created. (If the object contains references to other objects,\n these other objects may be mutable and may be changed; however,\n the collection of objects directly referenced by an immutable\n object cannot change.)\n\n The following types are immutable sequences:\n\n Strings\n The items of a string are characters. There is no separate\n character type; a character is represented by a string of one\n item. Characters represent (at least) 8-bit bytes. The\n built-in functions ``chr()`` and ``ord()`` convert between\n characters and nonnegative integers representing the byte\n values. Bytes with the values 0-127 usually represent the\n corresponding ASCII values, but the interpretation of values\n is up to the program. The string data type is also used to\n represent arrays of bytes, e.g., to hold data read from a\n file.\n\n (On systems whose native character set is not ASCII, strings\n may use EBCDIC in their internal representation, provided the\n functions ``chr()`` and ``ord()`` implement a mapping between\n ASCII and EBCDIC, and string comparison preserves the ASCII\n order. Or perhaps someone can propose a better rule?)\n\n Unicode\n The items of a Unicode object are Unicode code units. A\n Unicode code unit is represented by a Unicode object of one\n item and can hold either a 16-bit or 32-bit value\n representing a Unicode ordinal (the maximum value for the\n ordinal is given in ``sys.maxunicode``, and depends on how\n Python is configured at compile time). Surrogate pairs may\n be present in the Unicode object, and will be reported as two\n separate items. The built-in functions ``unichr()`` and\n ``ord()`` convert between code units and nonnegative integers\n representing the Unicode ordinals as defined in the Unicode\n Standard 3.0. Conversion from and to other encodings are\n possible through the Unicode method ``encode()`` and the\n built-in function ``unicode()``.\n\n Tuples\n The items of a tuple are arbitrary Python objects. Tuples of\n two or more items are formed by comma-separated lists of\n expressions. A tuple of one item (a \'singleton\') can be\n formed by affixing a comma to an expression (an expression by\n itself does not create a tuple, since parentheses must be\n usable for grouping of expressions). An empty tuple can be\n formed by an empty pair of parentheses.\n\n Mutable sequences\n Mutable sequences can be changed after they are created. The\n subscription and slicing notations can be used as the target of\n assignment and ``del`` (delete) statements.\n\n There are currently two intrinsic mutable sequence types:\n\n Lists\n The items of a list are arbitrary Python objects. Lists are\n formed by placing a comma-separated list of expressions in\n square brackets. 
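(Illustrative addition: plain and extended slicing, and the one-item tuple, as described above.)

   >>> a = [0, 1, 2, 3, 4, 5]
   >>> a[1:4]              # items with index k such that 1 <= k < 4
   [1, 2, 3]
   >>> a[::2]              # extended slicing with step 2
   [0, 2, 4]
   >>> (1,), ()            # singleton tuple and empty tuple
   ((1,), ())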
(Note that there are no special cases needed\n to form lists of length 0 or 1.)\n\n Byte Arrays\n A bytearray object is a mutable array. They are created by\n the built-in ``bytearray()`` constructor. Aside from being\n mutable (and hence unhashable), byte arrays otherwise provide\n the same interface and functionality as immutable bytes\n objects.\n\n The extension module ``array`` provides an additional example of\n a mutable sequence type.\n\nSet types\n These represent unordered, finite sets of unique, immutable\n objects. As such, they cannot be indexed by any subscript. However,\n they can be iterated over, and the built-in function ``len()``\n returns the number of items in a set. Common uses for sets are fast\n membership testing, removing duplicates from a sequence, and\n computing mathematical operations such as intersection, union,\n difference, and symmetric difference.\n\n For set elements, the same immutability rules apply as for\n dictionary keys. Note that numeric types obey the normal rules for\n numeric comparison: if two numbers compare equal (e.g., ``1`` and\n ``1.0``), only one of them can be contained in a set.\n\n There are currently two intrinsic set types:\n\n Sets\n These represent a mutable set. They are created by the built-in\n ``set()`` constructor and can be modified afterwards by several\n methods, such as ``add()``.\n\n Frozen sets\n These represent an immutable set. They are created by the\n built-in ``frozenset()`` constructor. As a frozenset is\n immutable and *hashable*, it can be used again as an element of\n another set, or as a dictionary key.\n\nMappings\n These represent finite sets of objects indexed by arbitrary index\n sets. The subscript notation ``a[k]`` selects the item indexed by\n ``k`` from the mapping ``a``; this can be used in expressions and\n as the target of assignments or ``del`` statements. The built-in\n function ``len()`` returns the number of items in a mapping.\n\n There is currently a single intrinsic mapping type:\n\n Dictionaries\n These represent finite sets of objects indexed by nearly\n arbitrary values. The only types of values not acceptable as\n keys are values containing lists or dictionaries or other\n mutable types that are compared by value rather than by object\n identity, the reason being that the efficient implementation of\n dictionaries requires a key\'s hash value to remain constant.\n Numeric types used for keys obey the normal rules for numeric\n comparison: if two numbers compare equal (e.g., ``1`` and\n ``1.0``) then they can be used interchangeably to index the same\n dictionary entry.\n\n Dictionaries are mutable; they can be created by the ``{...}``\n notation (see section *Dictionary displays*).\n\n The extension modules ``dbm``, ``gdbm``, and ``bsddb`` provide\n additional examples of mapping types.\n\nCallable types\n These are the types to which the function call operation (see\n section *Calls*) can be applied:\n\n User-defined functions\n A user-defined function object is created by a function\n definition (see section *Function definitions*). 
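(Illustrative addition: the set, frozenset and dictionary behaviour described above, including numeric keys that compare equal.)

   >>> s = set([1, 2, 2, 3])
   >>> s.add(4)
   >>> sorted(s)
   [1, 2, 3, 4]
   >>> 1.0 in s            # 1 == 1.0, so only one of them can be in the set
   True
   >>> {frozenset([1, 2]): 'ok'}[frozenset([2, 1])]
   'ok'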
It should be\n called with an argument list containing the same number of items\n as the function\'s formal parameter list.\n\n Special attributes:\n\n +-------------------------+---------------------------------+-------------+\n | Attribute | Meaning | |\n +=========================+=================================+=============+\n | ``func_doc`` | The function\'s documentation | Writable |\n | | string, or ``None`` if | |\n | | unavailable | |\n +-------------------------+---------------------------------+-------------+\n | ``__doc__`` | Another way of spelling | Writable |\n | | ``func_doc`` | |\n +-------------------------+---------------------------------+-------------+\n | ``func_name`` | The function\'s name | Writable |\n +-------------------------+---------------------------------+-------------+\n | ``__name__`` | Another way of spelling | Writable |\n | | ``func_name`` | |\n +-------------------------+---------------------------------+-------------+\n | ``__module__`` | The name of the module the | Writable |\n | | function was defined in, or | |\n | | ``None`` if unavailable. | |\n +-------------------------+---------------------------------+-------------+\n | ``func_defaults`` | A tuple containing default | Writable |\n | | argument values for those | |\n | | arguments that have defaults, | |\n | | or ``None`` if no arguments | |\n | | have a default value | |\n +-------------------------+---------------------------------+-------------+\n | ``func_code`` | The code object representing | Writable |\n | | the compiled function body. | |\n +-------------------------+---------------------------------+-------------+\n | ``func_globals`` | A reference to the dictionary | Read-only |\n | | that holds the function\'s | |\n | | global variables --- the global | |\n | | namespace of the module in | |\n | | which the function was defined. | |\n +-------------------------+---------------------------------+-------------+\n | ``func_dict`` | The namespace supporting | Writable |\n | | arbitrary function attributes. | |\n +-------------------------+---------------------------------+-------------+\n | ``func_closure`` | ``None`` or a tuple of cells | Read-only |\n | | that contain bindings for the | |\n | | function\'s free variables. | |\n +-------------------------+---------------------------------+-------------+\n\n Most of the attributes labelled "Writable" check the type of the\n assigned value.\n\n Changed in version 2.4: ``func_name`` is now writable.\n\n Function objects also support getting and setting arbitrary\n attributes, which can be used, for example, to attach metadata\n to functions. Regular attribute dot-notation is used to get and\n set such attributes. *Note that the current implementation only\n supports function attributes on user-defined functions. 
Function\n attributes on built-in functions may be supported in the\n future.*\n\n Additional information about a function\'s definition can be\n retrieved from its code object; see the description of internal\n types below.\n\n User-defined methods\n A user-defined method object combines a class, a class instance\n (or ``None``) and any callable object (normally a user-defined\n function).\n\n Special read-only attributes: ``im_self`` is the class instance\n object, ``im_func`` is the function object; ``im_class`` is the\n class of ``im_self`` for bound methods or the class that asked\n for the method for unbound methods; ``__doc__`` is the method\'s\n documentation (same as ``im_func.__doc__``); ``__name__`` is the\n method name (same as ``im_func.__name__``); ``__module__`` is\n the name of the module the method was defined in, or ``None`` if\n unavailable.\n\n Changed in version 2.2: ``im_self`` used to refer to the class\n that defined the method.\n\n Changed in version 2.6: For 3.0 forward-compatibility,\n ``im_func`` is also available as ``__func__``, and ``im_self``\n as ``__self__``.\n\n Methods also support accessing (but not setting) the arbitrary\n function attributes on the underlying function object.\n\n User-defined method objects may be created when getting an\n attribute of a class (perhaps via an instance of that class), if\n that attribute is a user-defined function object, an unbound\n user-defined method object, or a class method object. When the\n attribute is a user-defined method object, a new method object\n is only created if the class from which it is being retrieved is\n the same as, or a derived class of, the class stored in the\n original method object; otherwise, the original method object is\n used as it is.\n\n When a user-defined method object is created by retrieving a\n user-defined function object from a class, its ``im_self``\n attribute is ``None`` and the method object is said to be\n unbound. When one is created by retrieving a user-defined\n function object from a class via one of its instances, its\n ``im_self`` attribute is the instance, and the method object is\n said to be bound. In either case, the new method\'s ``im_class``\n attribute is the class from which the retrieval takes place, and\n its ``im_func`` attribute is the original function object.\n\n When a user-defined method object is created by retrieving\n another method object from a class or instance, the behaviour is\n the same as for a function object, except that the ``im_func``\n attribute of the new instance is not the original method object\n but its ``im_func`` attribute.\n\n When a user-defined method object is created by retrieving a\n class method object from a class or instance, its ``im_self``\n attribute is the class itself (the same as the ``im_class``\n attribute), and its ``im_func`` attribute is the function object\n underlying the class method.\n\n When an unbound user-defined method object is called, the\n underlying function (``im_func``) is called, with the\n restriction that the first argument must be an instance of the\n proper class (``im_class``) or of a derived class thereof.\n\n When a bound user-defined method object is called, the\n underlying function (``im_func``) is called, inserting the class\n instance (``im_self``) in front of the argument list. 
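A minimal sketch of the binding behaviour just described, assuming Python 2 (the class ``C`` and method ``f()`` are invented for the example and anticipate the prose illustration that follows)::

    class C(object):
        def f(self, arg):
            return (self, arg)

    x = C()
    assert x.f.im_self is x                  # bound method: im_self is the instance
    assert C.f.im_self is None               # unbound method: im_self is None
    assert C.f.im_class is C
    assert x.f.im_func is C.__dict__['f']    # im_func is the plain function object
    assert x.f(1) == C.f(x, 1)               # the instance is inserted up front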
For\n instance, when ``C`` is a class which contains a definition for\n a function ``f()``, and ``x`` is an instance of ``C``, calling\n ``x.f(1)`` is equivalent to calling ``C.f(x, 1)``.\n\n When a user-defined method object is derived from a class method\n object, the "class instance" stored in ``im_self`` will actually\n be the class itself, so that calling either ``x.f(1)`` or\n ``C.f(1)`` is equivalent to calling ``f(C,1)`` where ``f`` is\n the underlying function.\n\n Note that the transformation from function object to (unbound or\n bound) method object happens each time the attribute is\n retrieved from the class or instance. In some cases, a fruitful\n optimization is to assign the attribute to a local variable and\n call that local variable. Also notice that this transformation\n only happens for user-defined functions; other callable objects\n (and all non-callable objects) are retrieved without\n transformation. It is also important to note that user-defined\n functions which are attributes of a class instance are not\n converted to bound methods; this *only* happens when the\n function is an attribute of the class.\n\n Generator functions\n A function or method which uses the ``yield`` statement (see\n section *The yield statement*) is called a *generator function*.\n Such a function, when called, always returns an iterator object\n which can be used to execute the body of the function: calling\n the iterator\'s ``next()`` method will cause the function to\n execute until it provides a value using the ``yield`` statement.\n When the function executes a ``return`` statement or falls off\n the end, a ``StopIteration`` exception is raised and the\n iterator will have reached the end of the set of values to be\n returned.\n\n Built-in functions\n A built-in function object is a wrapper around a C function.\n Examples of built-in functions are ``len()`` and ``math.sin()``\n (``math`` is a standard built-in module). The number and type of\n the arguments are determined by the C function. Special read-\n only attributes: ``__doc__`` is the function\'s documentation\n string, or ``None`` if unavailable; ``__name__`` is the\n function\'s name; ``__self__`` is set to ``None`` (but see the\n next item); ``__module__`` is the name of the module the\n function was defined in or ``None`` if unavailable.\n\n Built-in methods\n This is really a different disguise of a built-in function, this\n time containing an object passed to the C function as an\n implicit extra argument. An example of a built-in method is\n ``alist.append()``, assuming *alist* is a list object. In this\n case, the special read-only attribute ``__self__`` is set to the\n object denoted by *list*.\n\n Class Types\n Class types, or "new-style classes," are callable. These\n objects normally act as factories for new instances of\n themselves, but variations are possible for class types that\n override ``__new__()``. The arguments of the call are passed to\n ``__new__()`` and, in the typical case, to ``__init__()`` to\n initialize the new instance.\n\n Classic Classes\n Class objects are described below. When a class object is\n called, a new class instance (also described below) is created\n and returned. This implies a call to the class\'s ``__init__()``\n method if it has one. Any arguments are passed on to the\n ``__init__()`` method. If there is no ``__init__()`` method,\n the class must be called without arguments.\n\n Class instances\n Class instances are described below. 
Class instances are\n callable only when the class has a ``__call__()`` method;\n ``x(arguments)`` is a shorthand for ``x.__call__(arguments)``.\n\nModules\n Modules are imported by the ``import`` statement (see section *The\n import statement*). A module object has a namespace implemented by\n a dictionary object (this is the dictionary referenced by the\n func_globals attribute of functions defined in the module).\n Attribute references are translated to lookups in this dictionary,\n e.g., ``m.x`` is equivalent to ``m.__dict__["x"]``. A module object\n does not contain the code object used to initialize the module\n (since it isn\'t needed once the initialization is done).\n\n Attribute assignment updates the module\'s namespace dictionary,\n e.g., ``m.x = 1`` is equivalent to ``m.__dict__["x"] = 1``.\n\n Special read-only attribute: ``__dict__`` is the module\'s namespace\n as a dictionary object.\n\n Predefined (writable) attributes: ``__name__`` is the module\'s\n name; ``__doc__`` is the module\'s documentation string, or ``None``\n if unavailable; ``__file__`` is the pathname of the file from which\n the module was loaded, if it was loaded from a file. The\n ``__file__`` attribute is not present for C modules that are\n statically linked into the interpreter; for extension modules\n loaded dynamically from a shared library, it is the pathname of the\n shared library file.\n\nClasses\n Both class types (new-style classes) and class objects (old-\n style/classic classes) are typically created by class definitions\n (see section *Class definitions*). A class has a namespace\n implemented by a dictionary object. Class attribute references are\n translated to lookups in this dictionary, e.g., ``C.x`` is\n translated to ``C.__dict__["x"]`` (although for new-style classes\n in particular there are a number of hooks which allow for other\n means of locating attributes). When the attribute name is not found\n there, the attribute search continues in the base classes. For\n old-style classes, the search is depth-first, left-to-right in the\n order of occurrence in the base class list. New-style classes use\n the more complex C3 method resolution order which behaves correctly\n even in the presence of \'diamond\' inheritance structures where\n there are multiple inheritance paths leading back to a common\n ancestor. Additional details on the C3 MRO used by new-style\n classes can be found in the documentation accompanying the 2.3\n release at http://www.python.org/download/releases/2.3/mro/.\n\n When a class attribute reference (for class ``C``, say) would yield\n a user-defined function object or an unbound user-defined method\n object whose associated class is either ``C`` or one of its base\n classes, it is transformed into an unbound user-defined method\n object whose ``im_class`` attribute is ``C``. When it would yield a\n class method object, it is transformed into a bound user-defined\n method object whose ``im_class`` and ``im_self`` attributes are\n both ``C``. 
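The difference between the depth-first search of old-style classes and the C3 order of new-style classes, described above, shows up in a diamond hierarchy; a small sketch assuming Python 2 (all class names are invented)::

    class A:          x = 'A'                # old-style diamond
    class B(A):       pass
    class C(A):       x = 'C'
    class D(B, C):    pass

    class NA(object): x = 'A'                # new-style diamond
    class NB(NA):     pass
    class NC(NA):     x = 'C'
    class ND(NB, NC): pass

    assert D.x == 'A'                        # depth-first reaches A via B before C
    assert ND.x == 'C'                       # C3 puts NC ahead of NA
    assert ND.__mro__ == (ND, NB, NC, NA, object)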
When it would yield a static method object, it is\n transformed into the object wrapped by the static method object.\n See section *Implementing Descriptors* for another way in which\n attributes retrieved from a class may differ from those actually\n contained in its ``__dict__`` (note that only new-style classes\n support descriptors).\n\n Class attribute assignments update the class\'s dictionary, never\n the dictionary of a base class.\n\n A class object can be called (see above) to yield a class instance\n (see below).\n\n Special attributes: ``__name__`` is the class name; ``__module__``\n is the module name in which the class was defined; ``__dict__`` is\n the dictionary containing the class\'s namespace; ``__bases__`` is a\n tuple (possibly empty or a singleton) containing the base classes,\n in the order of their occurrence in the base class list;\n ``__doc__`` is the class\'s documentation string, or None if\n undefined.\n\nClass instances\n A class instance is created by calling a class object (see above).\n A class instance has a namespace implemented as a dictionary which\n is the first place in which attribute references are searched.\n When an attribute is not found there, and the instance\'s class has\n an attribute by that name, the search continues with the class\n attributes. If a class attribute is found that is a user-defined\n function object or an unbound user-defined method object whose\n associated class is the class (call it ``C``) of the instance for\n which the attribute reference was initiated or one of its bases, it\n is transformed into a bound user-defined method object whose\n ``im_class`` attribute is ``C`` and whose ``im_self`` attribute is\n the instance. Static method and class method objects are also\n transformed, as if they had been retrieved from class ``C``; see\n above under "Classes". See section *Implementing Descriptors* for\n another way in which attributes of a class retrieved via its\n instances may differ from the objects actually stored in the\n class\'s ``__dict__``. If no class attribute is found, and the\n object\'s class has a ``__getattr__()`` method, that is called to\n satisfy the lookup.\n\n Attribute assignments and deletions update the instance\'s\n dictionary, never a class\'s dictionary. If the class has a\n ``__setattr__()`` or ``__delattr__()`` method, this is called\n instead of updating the instance dictionary directly.\n\n Class instances can pretend to be numbers, sequences, or mappings\n if they have methods with certain special names. See section\n *Special method names*.\n\n Special attributes: ``__dict__`` is the attribute dictionary;\n ``__class__`` is the instance\'s class.\n\nFiles\n A file object represents an open file. File objects are created by\n the ``open()`` built-in function, and also by ``os.popen()``,\n ``os.fdopen()``, and the ``makefile()`` method of socket objects\n (and perhaps by other functions or methods provided by extension\n modules). The objects ``sys.stdin``, ``sys.stdout`` and\n ``sys.stderr`` are initialized to file objects corresponding to the\n interpreter\'s standard input, output and error streams. See *File\n Objects* for complete documentation of file objects.\n\nInternal types\n A few types used internally by the interpreter are exposed to the\n user. Their definitions may change with future versions of the\n interpreter, but they are mentioned here for completeness.\n\n Code objects\n Code objects represent *byte-compiled* executable Python code,\n or *bytecode*. 
The difference between a code object and a\n function object is that the function object contains an explicit\n reference to the function\'s globals (the module in which it was\n defined), while a code object contains no context; also the\n default argument values are stored in the function object, not\n in the code object (because they represent values calculated at\n run-time). Unlike function objects, code objects are immutable\n and contain no references (directly or indirectly) to mutable\n objects.\n\n Special read-only attributes: ``co_name`` gives the function\n name; ``co_argcount`` is the number of positional arguments\n (including arguments with default values); ``co_nlocals`` is the\n number of local variables used by the function (including\n arguments); ``co_varnames`` is a tuple containing the names of\n the local variables (starting with the argument names);\n ``co_cellvars`` is a tuple containing the names of local\n variables that are referenced by nested functions;\n ``co_freevars`` is a tuple containing the names of free\n variables; ``co_code`` is a string representing the sequence of\n bytecode instructions; ``co_consts`` is a tuple containing the\n literals used by the bytecode; ``co_names`` is a tuple\n containing the names used by the bytecode; ``co_filename`` is\n the filename from which the code was compiled;\n ``co_firstlineno`` is the first line number of the function;\n ``co_lnotab`` is a string encoding the mapping from bytecode\n offsets to line numbers (for details see the source code of the\n interpreter); ``co_stacksize`` is the required stack size\n (including local variables); ``co_flags`` is an integer encoding\n a number of flags for the interpreter.\n\n The following flag bits are defined for ``co_flags``: bit\n ``0x04`` is set if the function uses the ``*arguments`` syntax\n to accept an arbitrary number of positional arguments; bit\n ``0x08`` is set if the function uses the ``**keywords`` syntax\n to accept arbitrary keyword arguments; bit ``0x20`` is set if\n the function is a generator.\n\n Future feature declarations (``from __future__ import\n division``) also use bits in ``co_flags`` to indicate whether a\n code object was compiled with a particular feature enabled: bit\n ``0x2000`` is set if the function was compiled with future\n division enabled; bits ``0x10`` and ``0x1000`` were used in\n earlier versions of Python.\n\n Other bits in ``co_flags`` are reserved for internal use.\n\n If a code object represents a function, the first item in\n ``co_consts`` is the documentation string of the function, or\n ``None`` if undefined.\n\n Frame objects\n Frame objects represent execution frames. 
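Several of the code-object attributes and ``co_flags`` bits listed above are easy to inspect; a minimal sketch assuming Python 2 (the function ``f()`` is invented)::

    def f(a, b=1, *args, **kwargs):
        c = a + b
        return c

    code = f.func_code                       # the compiled body of f
    assert code.co_name == 'f'
    assert code.co_argcount == 2             # positional parameters, defaults included
    assert code.co_varnames[:2] == ('a', 'b')
    assert code.co_flags & 0x04              # *arguments syntax used
    assert code.co_flags & 0x08              # **keywords syntax used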
They may occur in\n traceback objects (see below).\n\n Special read-only attributes: ``f_back`` is to the previous\n stack frame (towards the caller), or ``None`` if this is the\n bottom stack frame; ``f_code`` is the code object being executed\n in this frame; ``f_locals`` is the dictionary used to look up\n local variables; ``f_globals`` is used for global variables;\n ``f_builtins`` is used for built-in (intrinsic) names;\n ``f_restricted`` is a flag indicating whether the function is\n executing in restricted execution mode; ``f_lasti`` gives the\n precise instruction (this is an index into the bytecode string\n of the code object).\n\n Special writable attributes: ``f_trace``, if not ``None``, is a\n function called at the start of each source code line (this is\n used by the debugger); ``f_exc_type``, ``f_exc_value``,\n ``f_exc_traceback`` represent the last exception raised in the\n parent frame provided another exception was ever raised in the\n current frame (in all other cases they are None); ``f_lineno``\n is the current line number of the frame --- writing to this from\n within a trace function jumps to the given line (only for the\n bottom-most frame). A debugger can implement a Jump command\n (aka Set Next Statement) by writing to f_lineno.\n\n Traceback objects\n Traceback objects represent a stack trace of an exception. A\n traceback object is created when an exception occurs. When the\n search for an exception handler unwinds the execution stack, at\n each unwound level a traceback object is inserted in front of\n the current traceback. When an exception handler is entered,\n the stack trace is made available to the program. (See section\n *The try statement*.) It is accessible as ``sys.exc_traceback``,\n and also as the third item of the tuple returned by\n ``sys.exc_info()``. The latter is the preferred interface,\n since it works correctly when the program is using multiple\n threads. When the program contains no suitable handler, the\n stack trace is written (nicely formatted) to the standard error\n stream; if the interpreter is interactive, it is also made\n available to the user as ``sys.last_traceback``.\n\n Special read-only attributes: ``tb_next`` is the next level in\n the stack trace (towards the frame where the exception\n occurred), or ``None`` if there is no next level; ``tb_frame``\n points to the execution frame of the current level;\n ``tb_lineno`` gives the line number where the exception\n occurred; ``tb_lasti`` indicates the precise instruction. The\n line number and last instruction in the traceback may differ\n from the line number of its frame object if the exception\n occurred in a ``try`` statement with no matching except clause\n or with a finally clause.\n\n Slice objects\n Slice objects are used to represent slices when *extended slice\n syntax* is used. This is a slice using two colons, or multiple\n slices or ellipses separated by commas, e.g., ``a[i:j:step]``,\n ``a[i:j, k:l]``, or ``a[..., i:j]``. They are also created by\n the built-in ``slice()`` function.\n\n Special read-only attributes: ``start`` is the lower bound;\n ``stop`` is the upper bound; ``step`` is the step value; each is\n ``None`` if omitted. These attributes can have any type.\n\n Slice objects support one method:\n\n slice.indices(self, length)\n\n This method takes a single integer argument *length* and\n computes information about the extended slice that the slice\n object would describe if applied to a sequence of *length*\n items. 
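A small sketch of ``indices()``, whose return value is spelled out just below (assuming Python 2; the slice chosen is arbitrary)::

    s = slice(None, None, -2)
    assert (s.start, s.stop, s.step) == (None, None, -2)
    assert s.indices(5) == (4, -1, -2)       # concrete start, stop and step for length 5
    assert range(5)[s] == [4, 2, 0]          # the same selection applied to a list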
It returns a tuple of three integers; respectively\n these are the *start* and *stop* indices and the *step* or\n stride length of the slice. Missing or out-of-bounds indices\n are handled in a manner consistent with regular slices.\n\n New in version 2.3.\n\n Static method objects\n Static method objects provide a way of defeating the\n transformation of function objects to method objects described\n above. A static method object is a wrapper around any other\n object, usually a user-defined method object. When a static\n method object is retrieved from a class or a class instance, the\n object actually returned is the wrapped object, which is not\n subject to any further transformation. Static method objects are\n not themselves callable, although the objects they wrap usually\n are. Static method objects are created by the built-in\n ``staticmethod()`` constructor.\n\n Class method objects\n A class method object, like a static method object, is a wrapper\n around another object that alters the way in which that object\n is retrieved from classes and class instances. The behaviour of\n class method objects upon such retrieval is described above,\n under "User-defined methods". Class method objects are created\n by the built-in ``classmethod()`` constructor.\n', + 'types': u'\nThe standard type hierarchy\n***************************\n\nBelow is a list of the types that are built into Python. Extension\nmodules (written in C, Java, or other languages, depending on the\nimplementation) can define additional types. Future versions of\nPython may add types to the type hierarchy (e.g., rational numbers,\nefficiently stored arrays of integers, etc.).\n\nSome of the type descriptions below contain a paragraph listing\n\'special attributes.\' These are attributes that provide access to the\nimplementation and are not intended for general use. Their definition\nmay change in the future.\n\nNone\n This type has a single value. There is a single object with this\n value. This object is accessed through the built-in name ``None``.\n It is used to signify the absence of a value in many situations,\n e.g., it is returned from functions that don\'t explicitly return\n anything. Its truth value is false.\n\nNotImplemented\n This type has a single value. There is a single object with this\n value. This object is accessed through the built-in name\n ``NotImplemented``. Numeric methods and rich comparison methods may\n return this value if they do not implement the operation for the\n operands provided. (The interpreter will then try the reflected\n operation, or some other fallback, depending on the operator.) Its\n truth value is true.\n\nEllipsis\n This type has a single value. There is a single object with this\n value. This object is accessed through the built-in name\n ``Ellipsis``. It is used to indicate the presence of the ``...``\n syntax in a slice. Its truth value is true.\n\n``numbers.Number``\n These are created by numeric literals and returned as results by\n arithmetic operators and arithmetic built-in functions. 
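The ``NotImplemented`` convention described above is what drives the fallback to reflected operations; a rough sketch assuming Python 2 (the class ``Num`` is invented)::

    class Num(object):
        def __init__(self, value):
            self.value = value
        def __add__(self, other):
            if isinstance(other, Num):
                return Num(self.value + other.value)
            if isinstance(other, (int, long, float)):
                return Num(self.value + other)
            return NotImplemented            # let Python try the other operand
        __radd__ = __add__

    assert (1 + Num(2)).value == 3           # int.__add__ declines, Num.__radd__ runs
    try:
        Num(1) + 'x'                         # both sides decline
    except TypeError:
        pass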
Numeric\n objects are immutable; once created their value never changes.\n Python numbers are of course strongly related to mathematical\n numbers, but subject to the limitations of numerical representation\n in computers.\n\n Python distinguishes between integers, floating point numbers, and\n complex numbers:\n\n ``numbers.Integral``\n These represent elements from the mathematical set of integers\n (positive and negative).\n\n There are three types of integers:\n\n Plain integers\n These represent numbers in the range -2147483648 through\n 2147483647. (The range may be larger on machines with a\n larger natural word size, but not smaller.) When the result\n of an operation would fall outside this range, the result is\n normally returned as a long integer (in some cases, the\n exception ``OverflowError`` is raised instead). For the\n purpose of shift and mask operations, integers are assumed to\n have a binary, 2\'s complement notation using 32 or more bits,\n and hiding no bits from the user (i.e., all 4294967296\n different bit patterns correspond to different values).\n\n Long integers\n These represent numbers in an unlimited range, subject to\n available (virtual) memory only. For the purpose of shift\n and mask operations, a binary representation is assumed, and\n negative numbers are represented in a variant of 2\'s\n complement which gives the illusion of an infinite string of\n sign bits extending to the left.\n\n Booleans\n These represent the truth values False and True. The two\n objects representing the values False and True are the only\n Boolean objects. The Boolean type is a subtype of plain\n integers, and Boolean values behave like the values 0 and 1,\n respectively, in almost all contexts, the exception being\n that when converted to a string, the strings ``"False"`` or\n ``"True"`` are returned, respectively.\n\n The rules for integer representation are intended to give the\n most meaningful interpretation of shift and mask operations\n involving negative integers and the least surprises when\n switching between the plain and long integer domains. Any\n operation, if it yields a result in the plain integer domain,\n will yield the same result in the long integer domain or when\n using mixed operands. The switch between domains is transparent\n to the programmer.\n\n ``numbers.Real`` (``float``)\n These represent machine-level double precision floating point\n numbers. You are at the mercy of the underlying machine\n architecture (and C or Java implementation) for the accepted\n range and handling of overflow. Python does not support single-\n precision floating point numbers; the savings in processor and\n memory usage that are usually the reason for using these is\n dwarfed by the overhead of using objects in Python, so there is\n no reason to complicate the language with two kinds of floating\n point numbers.\n\n ``numbers.Complex``\n These represent complex numbers as a pair of machine-level\n double precision floating point numbers. The same caveats apply\n as for floating point numbers. The real and imaginary parts of a\n complex number ``z`` can be retrieved through the read-only\n attributes ``z.real`` and ``z.imag``.\n\nSequences\n These represent finite ordered sets indexed by non-negative\n numbers. The built-in function ``len()`` returns the number of\n items of a sequence. When the length of a sequence is *n*, the\n index set contains the numbers 0, 1, ..., *n*-1. 
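A few of the numeric behaviours described above, checked directly (a minimal sketch assuming a Python 2 interpreter)::

    import sys
    big = sys.maxint + 1
    assert isinstance(big, long)             # overflow is promoted to a long integer
    assert big == long(sys.maxint) + 1       # the two domains mix transparently
    assert True + True == 2                  # bools behave as the integers 1 and 0
    z = 3 + 4j
    assert (z.real, z.imag) == (3.0, 4.0)    # read-only parts of a complex number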
Item *i* of\n sequence *a* is selected by ``a[i]``.\n\n Sequences also support slicing: ``a[i:j]`` selects all items with\n index *k* such that *i* ``<=`` *k* ``<`` *j*. When used as an\n expression, a slice is a sequence of the same type. This implies\n that the index set is renumbered so that it starts at 0.\n\n Some sequences also support "extended slicing" with a third "step"\n parameter: ``a[i:j:k]`` selects all items of *a* with index *x*\n where ``x = i + n*k``, *n* ``>=`` ``0`` and *i* ``<=`` *x* ``<``\n *j*.\n\n Sequences are distinguished according to their mutability:\n\n Immutable sequences\n An object of an immutable sequence type cannot change once it is\n created. (If the object contains references to other objects,\n these other objects may be mutable and may be changed; however,\n the collection of objects directly referenced by an immutable\n object cannot change.)\n\n The following types are immutable sequences:\n\n Strings\n The items of a string are characters. There is no separate\n character type; a character is represented by a string of one\n item. Characters represent (at least) 8-bit bytes. The\n built-in functions ``chr()`` and ``ord()`` convert between\n characters and nonnegative integers representing the byte\n values. Bytes with the values 0-127 usually represent the\n corresponding ASCII values, but the interpretation of values\n is up to the program. The string data type is also used to\n represent arrays of bytes, e.g., to hold data read from a\n file.\n\n (On systems whose native character set is not ASCII, strings\n may use EBCDIC in their internal representation, provided the\n functions ``chr()`` and ``ord()`` implement a mapping between\n ASCII and EBCDIC, and string comparison preserves the ASCII\n order. Or perhaps someone can propose a better rule?)\n\n Unicode\n The items of a Unicode object are Unicode code units. A\n Unicode code unit is represented by a Unicode object of one\n item and can hold either a 16-bit or 32-bit value\n representing a Unicode ordinal (the maximum value for the\n ordinal is given in ``sys.maxunicode``, and depends on how\n Python is configured at compile time). Surrogate pairs may\n be present in the Unicode object, and will be reported as two\n separate items. The built-in functions ``unichr()`` and\n ``ord()`` convert between code units and nonnegative integers\n representing the Unicode ordinals as defined in the Unicode\n Standard 3.0. Conversion from and to other encodings are\n possible through the Unicode method ``encode()`` and the\n built-in function ``unicode()``.\n\n Tuples\n The items of a tuple are arbitrary Python objects. Tuples of\n two or more items are formed by comma-separated lists of\n expressions. A tuple of one item (a \'singleton\') can be\n formed by affixing a comma to an expression (an expression by\n itself does not create a tuple, since parentheses must be\n usable for grouping of expressions). An empty tuple can be\n formed by an empty pair of parentheses.\n\n Mutable sequences\n Mutable sequences can be changed after they are created. The\n subscription and slicing notations can be used as the target of\n assignment and ``del`` (delete) statements.\n\n There are currently two intrinsic mutable sequence types:\n\n Lists\n The items of a list are arbitrary Python objects. Lists are\n formed by placing a comma-separated list of expressions in\n square brackets. 
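The slicing and tuple rules just described, in a minimal sketch (assuming Python 2; the data is arbitrary)::

    a = ['p', 'y', 'p', 'y']
    assert a[1:3] == ['y', 'p']              # items with index 1 <= k < 3
    assert a[::2] == ['p', 'p']              # extended slicing with a step of 2

    singleton = 'x',                         # the comma makes the tuple, not the parentheses
    assert isinstance(singleton, tuple) and len(singleton) == 1
    assert () == tuple() and len(()) == 0    # empty pair of parentheses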
(Note that there are no special cases needed\n to form lists of length 0 or 1.)\n\n Byte Arrays\n A bytearray object is a mutable array. They are created by\n the built-in ``bytearray()`` constructor. Aside from being\n mutable (and hence unhashable), byte arrays otherwise provide\n the same interface and functionality as immutable bytes\n objects.\n\n The extension module ``array`` provides an additional example of\n a mutable sequence type.\n\nSet types\n These represent unordered, finite sets of unique, immutable\n objects. As such, they cannot be indexed by any subscript. However,\n they can be iterated over, and the built-in function ``len()``\n returns the number of items in a set. Common uses for sets are fast\n membership testing, removing duplicates from a sequence, and\n computing mathematical operations such as intersection, union,\n difference, and symmetric difference.\n\n For set elements, the same immutability rules apply as for\n dictionary keys. Note that numeric types obey the normal rules for\n numeric comparison: if two numbers compare equal (e.g., ``1`` and\n ``1.0``), only one of them can be contained in a set.\n\n There are currently two intrinsic set types:\n\n Sets\n These represent a mutable set. They are created by the built-in\n ``set()`` constructor and can be modified afterwards by several\n methods, such as ``add()``.\n\n Frozen sets\n These represent an immutable set. They are created by the\n built-in ``frozenset()`` constructor. As a frozenset is\n immutable and *hashable*, it can be used again as an element of\n another set, or as a dictionary key.\n\nMappings\n These represent finite sets of objects indexed by arbitrary index\n sets. The subscript notation ``a[k]`` selects the item indexed by\n ``k`` from the mapping ``a``; this can be used in expressions and\n as the target of assignments or ``del`` statements. The built-in\n function ``len()`` returns the number of items in a mapping.\n\n There is currently a single intrinsic mapping type:\n\n Dictionaries\n These represent finite sets of objects indexed by nearly\n arbitrary values. The only types of values not acceptable as\n keys are values containing lists or dictionaries or other\n mutable types that are compared by value rather than by object\n identity, the reason being that the efficient implementation of\n dictionaries requires a key\'s hash value to remain constant.\n Numeric types used for keys obey the normal rules for numeric\n comparison: if two numbers compare equal (e.g., ``1`` and\n ``1.0``) then they can be used interchangeably to index the same\n dictionary entry.\n\n Dictionaries are mutable; they can be created by the ``{...}``\n notation (see section *Dictionary displays*).\n\n The extension modules ``dbm``, ``gdbm``, and ``bsddb`` provide\n additional examples of mapping types.\n\nCallable types\n These are the types to which the function call operation (see\n section *Calls*) can be applied:\n\n User-defined functions\n A user-defined function object is created by a function\n definition (see section *Function definitions*). 
It should be\n called with an argument list containing the same number of items\n as the function\'s formal parameter list.\n\n Special attributes:\n\n +-------------------------+---------------------------------+-------------+\n | Attribute | Meaning | |\n +=========================+=================================+=============+\n | ``func_doc`` | The function\'s documentation | Writable |\n | | string, or ``None`` if | |\n | | unavailable | |\n +-------------------------+---------------------------------+-------------+\n | ``__doc__`` | Another way of spelling | Writable |\n | | ``func_doc`` | |\n +-------------------------+---------------------------------+-------------+\n | ``func_name`` | The function\'s name | Writable |\n +-------------------------+---------------------------------+-------------+\n | ``__name__`` | Another way of spelling | Writable |\n | | ``func_name`` | |\n +-------------------------+---------------------------------+-------------+\n | ``__module__`` | The name of the module the | Writable |\n | | function was defined in, or | |\n | | ``None`` if unavailable. | |\n +-------------------------+---------------------------------+-------------+\n | ``func_defaults`` | A tuple containing default | Writable |\n | | argument values for those | |\n | | arguments that have defaults, | |\n | | or ``None`` if no arguments | |\n | | have a default value | |\n +-------------------------+---------------------------------+-------------+\n | ``func_code`` | The code object representing | Writable |\n | | the compiled function body. | |\n +-------------------------+---------------------------------+-------------+\n | ``func_globals`` | A reference to the dictionary | Read-only |\n | | that holds the function\'s | |\n | | global variables --- the global | |\n | | namespace of the module in | |\n | | which the function was defined. | |\n +-------------------------+---------------------------------+-------------+\n | ``func_dict`` | The namespace supporting | Writable |\n | | arbitrary function attributes. | |\n +-------------------------+---------------------------------+-------------+\n | ``func_closure`` | ``None`` or a tuple of cells | Read-only |\n | | that contain bindings for the | |\n | | function\'s free variables. | |\n +-------------------------+---------------------------------+-------------+\n\n Most of the attributes labelled "Writable" check the type of the\n assigned value.\n\n Changed in version 2.4: ``func_name`` is now writable.\n\n Function objects also support getting and setting arbitrary\n attributes, which can be used, for example, to attach metadata\n to functions. Regular attribute dot-notation is used to get and\n set such attributes. *Note that the current implementation only\n supports function attributes on user-defined functions. 
Function\n attributes on built-in functions may be supported in the\n future.*\n\n Additional information about a function\'s definition can be\n retrieved from its code object; see the description of internal\n types below.\n\n User-defined methods\n A user-defined method object combines a class, a class instance\n (or ``None``) and any callable object (normally a user-defined\n function).\n\n Special read-only attributes: ``im_self`` is the class instance\n object, ``im_func`` is the function object; ``im_class`` is the\n class of ``im_self`` for bound methods or the class that asked\n for the method for unbound methods; ``__doc__`` is the method\'s\n documentation (same as ``im_func.__doc__``); ``__name__`` is the\n method name (same as ``im_func.__name__``); ``__module__`` is\n the name of the module the method was defined in, or ``None`` if\n unavailable.\n\n Changed in version 2.2: ``im_self`` used to refer to the class\n that defined the method.\n\n Changed in version 2.6: For 3.0 forward-compatibility,\n ``im_func`` is also available as ``__func__``, and ``im_self``\n as ``__self__``.\n\n Methods also support accessing (but not setting) the arbitrary\n function attributes on the underlying function object.\n\n User-defined method objects may be created when getting an\n attribute of a class (perhaps via an instance of that class), if\n that attribute is a user-defined function object, an unbound\n user-defined method object, or a class method object. When the\n attribute is a user-defined method object, a new method object\n is only created if the class from which it is being retrieved is\n the same as, or a derived class of, the class stored in the\n original method object; otherwise, the original method object is\n used as it is.\n\n When a user-defined method object is created by retrieving a\n user-defined function object from a class, its ``im_self``\n attribute is ``None`` and the method object is said to be\n unbound. When one is created by retrieving a user-defined\n function object from a class via one of its instances, its\n ``im_self`` attribute is the instance, and the method object is\n said to be bound. In either case, the new method\'s ``im_class``\n attribute is the class from which the retrieval takes place, and\n its ``im_func`` attribute is the original function object.\n\n When a user-defined method object is created by retrieving\n another method object from a class or instance, the behaviour is\n the same as for a function object, except that the ``im_func``\n attribute of the new instance is not the original method object\n but its ``im_func`` attribute.\n\n When a user-defined method object is created by retrieving a\n class method object from a class or instance, its ``im_self``\n attribute is the class itself (the same as the ``im_class``\n attribute), and its ``im_func`` attribute is the function object\n underlying the class method.\n\n When an unbound user-defined method object is called, the\n underlying function (``im_func``) is called, with the\n restriction that the first argument must be an instance of the\n proper class (``im_class``) or of a derived class thereof.\n\n When a bound user-defined method object is called, the\n underlying function (``im_func``) is called, inserting the class\n instance (``im_self``) in front of the argument list. 
For\n instance, when ``C`` is a class which contains a definition for\n a function ``f()``, and ``x`` is an instance of ``C``, calling\n ``x.f(1)`` is equivalent to calling ``C.f(x, 1)``.\n\n When a user-defined method object is derived from a class method\n object, the "class instance" stored in ``im_self`` will actually\n be the class itself, so that calling either ``x.f(1)`` or\n ``C.f(1)`` is equivalent to calling ``f(C,1)`` where ``f`` is\n the underlying function.\n\n Note that the transformation from function object to (unbound or\n bound) method object happens each time the attribute is\n retrieved from the class or instance. In some cases, a fruitful\n optimization is to assign the attribute to a local variable and\n call that local variable. Also notice that this transformation\n only happens for user-defined functions; other callable objects\n (and all non-callable objects) are retrieved without\n transformation. It is also important to note that user-defined\n functions which are attributes of a class instance are not\n converted to bound methods; this *only* happens when the\n function is an attribute of the class.\n\n Generator functions\n A function or method which uses the ``yield`` statement (see\n section *The yield statement*) is called a *generator function*.\n Such a function, when called, always returns an iterator object\n which can be used to execute the body of the function: calling\n the iterator\'s ``next()`` method will cause the function to\n execute until it provides a value using the ``yield`` statement.\n When the function executes a ``return`` statement or falls off\n the end, a ``StopIteration`` exception is raised and the\n iterator will have reached the end of the set of values to be\n returned.\n\n Built-in functions\n A built-in function object is a wrapper around a C function.\n Examples of built-in functions are ``len()`` and ``math.sin()``\n (``math`` is a standard built-in module). The number and type of\n the arguments are determined by the C function. Special read-\n only attributes: ``__doc__`` is the function\'s documentation\n string, or ``None`` if unavailable; ``__name__`` is the\n function\'s name; ``__self__`` is set to ``None`` (but see the\n next item); ``__module__`` is the name of the module the\n function was defined in or ``None`` if unavailable.\n\n Built-in methods\n This is really a different disguise of a built-in function, this\n time containing an object passed to the C function as an\n implicit extra argument. An example of a built-in method is\n ``alist.append()``, assuming *alist* is a list object. In this\n case, the special read-only attribute ``__self__`` is set to the\n object denoted by *alist*.\n\n Class Types\n Class types, or "new-style classes," are callable. These\n objects normally act as factories for new instances of\n themselves, but variations are possible for class types that\n override ``__new__()``. The arguments of the call are passed to\n ``__new__()`` and, in the typical case, to ``__init__()`` to\n initialize the new instance.\n\n Classic Classes\n Class objects are described below. When a class object is\n called, a new class instance (also described below) is created\n and returned. This implies a call to the class\'s ``__init__()``\n method if it has one. Any arguments are passed on to the\n ``__init__()`` method. If there is no ``__init__()`` method,\n the class must be called without arguments.\n\n Class instances\n Class instances are described below. 
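A minimal sketch of the generator behaviour described earlier in this entry (assuming Python 2; ``countdown()`` is invented)::

    def countdown(n):
        while n:
            yield n                          # each next() call resumes here
            n -= 1

    it = countdown(2)
    assert it.next() == 2
    assert it.next() == 1
    try:
        it.next()                            # the body falls off the end
    except StopIteration:
        pass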
Class instances are\n callable only when the class has a ``__call__()`` method;\n ``x(arguments)`` is a shorthand for ``x.__call__(arguments)``.\n\nModules\n Modules are imported by the ``import`` statement (see section *The\n import statement*). A module object has a namespace implemented by\n a dictionary object (this is the dictionary referenced by the\n func_globals attribute of functions defined in the module).\n Attribute references are translated to lookups in this dictionary,\n e.g., ``m.x`` is equivalent to ``m.__dict__["x"]``. A module object\n does not contain the code object used to initialize the module\n (since it isn\'t needed once the initialization is done).\n\n Attribute assignment updates the module\'s namespace dictionary,\n e.g., ``m.x = 1`` is equivalent to ``m.__dict__["x"] = 1``.\n\n Special read-only attribute: ``__dict__`` is the module\'s namespace\n as a dictionary object.\n\n **CPython implementation detail:** Because of the way CPython\n clears module dictionaries, the module dictionary will be cleared\n when the module falls out of scope even if the dictionary still has\n live references. To avoid this, copy the dictionary or keep the\n module around while using its dictionary directly.\n\n Predefined (writable) attributes: ``__name__`` is the module\'s\n name; ``__doc__`` is the module\'s documentation string, or ``None``\n if unavailable; ``__file__`` is the pathname of the file from which\n the module was loaded, if it was loaded from a file. The\n ``__file__`` attribute is not present for C modules that are\n statically linked into the interpreter; for extension modules\n loaded dynamically from a shared library, it is the pathname of the\n shared library file.\n\nClasses\n Both class types (new-style classes) and class objects (old-\n style/classic classes) are typically created by class definitions\n (see section *Class definitions*). A class has a namespace\n implemented by a dictionary object. Class attribute references are\n translated to lookups in this dictionary, e.g., ``C.x`` is\n translated to ``C.__dict__["x"]`` (although for new-style classes\n in particular there are a number of hooks which allow for other\n means of locating attributes). When the attribute name is not found\n there, the attribute search continues in the base classes. For\n old-style classes, the search is depth-first, left-to-right in the\n order of occurrence in the base class list. New-style classes use\n the more complex C3 method resolution order which behaves correctly\n even in the presence of \'diamond\' inheritance structures where\n there are multiple inheritance paths leading back to a common\n ancestor. Additional details on the C3 MRO used by new-style\n classes can be found in the documentation accompanying the 2.3\n release at http://www.python.org/download/releases/2.3/mro/.\n\n When a class attribute reference (for class ``C``, say) would yield\n a user-defined function object or an unbound user-defined method\n object whose associated class is either ``C`` or one of its base\n classes, it is transformed into an unbound user-defined method\n object whose ``im_class`` attribute is ``C``. When it would yield a\n class method object, it is transformed into a bound user-defined\n method object whose ``im_class`` and ``im_self`` attributes are\n both ``C``. 
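The module-namespace equivalences described above can be checked on any imported module; a small sketch assuming Python 2 (the added attribute name is arbitrary)::

    import os
    assert os.getcwd is os.__dict__['getcwd']    # m.x is a lookup in m.__dict__
    assert os.__name__ == 'os'
    os.example_flag = True                       # attribute assignment updates the dict
    assert os.__dict__['example_flag'] is True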
When it would yield a static method object, it is\n transformed into the object wrapped by the static method object.\n See section *Implementing Descriptors* for another way in which\n attributes retrieved from a class may differ from those actually\n contained in its ``__dict__`` (note that only new-style classes\n support descriptors).\n\n Class attribute assignments update the class\'s dictionary, never\n the dictionary of a base class.\n\n A class object can be called (see above) to yield a class instance\n (see below).\n\n Special attributes: ``__name__`` is the class name; ``__module__``\n is the module name in which the class was defined; ``__dict__`` is\n the dictionary containing the class\'s namespace; ``__bases__`` is a\n tuple (possibly empty or a singleton) containing the base classes,\n in the order of their occurrence in the base class list;\n ``__doc__`` is the class\'s documentation string, or None if\n undefined.\n\nClass instances\n A class instance is created by calling a class object (see above).\n A class instance has a namespace implemented as a dictionary which\n is the first place in which attribute references are searched.\n When an attribute is not found there, and the instance\'s class has\n an attribute by that name, the search continues with the class\n attributes. If a class attribute is found that is a user-defined\n function object or an unbound user-defined method object whose\n associated class is the class (call it ``C``) of the instance for\n which the attribute reference was initiated or one of its bases, it\n is transformed into a bound user-defined method object whose\n ``im_class`` attribute is ``C`` and whose ``im_self`` attribute is\n the instance. Static method and class method objects are also\n transformed, as if they had been retrieved from class ``C``; see\n above under "Classes". See section *Implementing Descriptors* for\n another way in which attributes of a class retrieved via its\n instances may differ from the objects actually stored in the\n class\'s ``__dict__``. If no class attribute is found, and the\n object\'s class has a ``__getattr__()`` method, that is called to\n satisfy the lookup.\n\n Attribute assignments and deletions update the instance\'s\n dictionary, never a class\'s dictionary. If the class has a\n ``__setattr__()`` or ``__delattr__()`` method, this is called\n instead of updating the instance dictionary directly.\n\n Class instances can pretend to be numbers, sequences, or mappings\n if they have methods with certain special names. See section\n *Special method names*.\n\n Special attributes: ``__dict__`` is the attribute dictionary;\n ``__class__`` is the instance\'s class.\n\nFiles\n A file object represents an open file. File objects are created by\n the ``open()`` built-in function, and also by ``os.popen()``,\n ``os.fdopen()``, and the ``makefile()`` method of socket objects\n (and perhaps by other functions or methods provided by extension\n modules). The objects ``sys.stdin``, ``sys.stdout`` and\n ``sys.stderr`` are initialized to file objects corresponding to the\n interpreter\'s standard input, output and error streams. See *File\n Objects* for complete documentation of file objects.\n\nInternal types\n A few types used internally by the interpreter are exposed to the\n user. Their definitions may change with future versions of the\n interpreter, but they are mentioned here for completeness.\n\n Code objects\n Code objects represent *byte-compiled* executable Python code,\n or *bytecode*. 
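A minimal sketch of the instance attribute search described above, including the ``__getattr__()`` fallback (assuming Python 2; the class is invented)::

    class Widget(object):
        colour = 'blue'                          # class attribute
        def __getattr__(self, name):             # called only when normal lookup fails
            return 'missing:' + name

    w = Widget()
    w.size = 3                                   # goes into the instance dictionary
    assert w.__dict__ == {'size': 3}
    assert w.colour == 'blue'                    # found on the class, __getattr__ unused
    assert w.anything == 'missing:anything'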
The difference between a code object and a\n function object is that the function object contains an explicit\n reference to the function\'s globals (the module in which it was\n defined), while a code object contains no context; also the\n default argument values are stored in the function object, not\n in the code object (because they represent values calculated at\n run-time). Unlike function objects, code objects are immutable\n and contain no references (directly or indirectly) to mutable\n objects.\n\n Special read-only attributes: ``co_name`` gives the function\n name; ``co_argcount`` is the number of positional arguments\n (including arguments with default values); ``co_nlocals`` is the\n number of local variables used by the function (including\n arguments); ``co_varnames`` is a tuple containing the names of\n the local variables (starting with the argument names);\n ``co_cellvars`` is a tuple containing the names of local\n variables that are referenced by nested functions;\n ``co_freevars`` is a tuple containing the names of free\n variables; ``co_code`` is a string representing the sequence of\n bytecode instructions; ``co_consts`` is a tuple containing the\n literals used by the bytecode; ``co_names`` is a tuple\n containing the names used by the bytecode; ``co_filename`` is\n the filename from which the code was compiled;\n ``co_firstlineno`` is the first line number of the function;\n ``co_lnotab`` is a string encoding the mapping from bytecode\n offsets to line numbers (for details see the source code of the\n interpreter); ``co_stacksize`` is the required stack size\n (including local variables); ``co_flags`` is an integer encoding\n a number of flags for the interpreter.\n\n The following flag bits are defined for ``co_flags``: bit\n ``0x04`` is set if the function uses the ``*arguments`` syntax\n to accept an arbitrary number of positional arguments; bit\n ``0x08`` is set if the function uses the ``**keywords`` syntax\n to accept arbitrary keyword arguments; bit ``0x20`` is set if\n the function is a generator.\n\n Future feature declarations (``from __future__ import\n division``) also use bits in ``co_flags`` to indicate whether a\n code object was compiled with a particular feature enabled: bit\n ``0x2000`` is set if the function was compiled with future\n division enabled; bits ``0x10`` and ``0x1000`` were used in\n earlier versions of Python.\n\n Other bits in ``co_flags`` are reserved for internal use.\n\n If a code object represents a function, the first item in\n ``co_consts`` is the documentation string of the function, or\n ``None`` if undefined.\n\n Frame objects\n Frame objects represent execution frames. 
They may occur in\n traceback objects (see below).\n\n Special read-only attributes: ``f_back`` is to the previous\n stack frame (towards the caller), or ``None`` if this is the\n bottom stack frame; ``f_code`` is the code object being executed\n in this frame; ``f_locals`` is the dictionary used to look up\n local variables; ``f_globals`` is used for global variables;\n ``f_builtins`` is used for built-in (intrinsic) names;\n ``f_restricted`` is a flag indicating whether the function is\n executing in restricted execution mode; ``f_lasti`` gives the\n precise instruction (this is an index into the bytecode string\n of the code object).\n\n Special writable attributes: ``f_trace``, if not ``None``, is a\n function called at the start of each source code line (this is\n used by the debugger); ``f_exc_type``, ``f_exc_value``,\n ``f_exc_traceback`` represent the last exception raised in the\n parent frame provided another exception was ever raised in the\n current frame (in all other cases they are None); ``f_lineno``\n is the current line number of the frame --- writing to this from\n within a trace function jumps to the given line (only for the\n bottom-most frame). A debugger can implement a Jump command\n (aka Set Next Statement) by writing to f_lineno.\n\n Traceback objects\n Traceback objects represent a stack trace of an exception. A\n traceback object is created when an exception occurs. When the\n search for an exception handler unwinds the execution stack, at\n each unwound level a traceback object is inserted in front of\n the current traceback. When an exception handler is entered,\n the stack trace is made available to the program. (See section\n *The try statement*.) It is accessible as ``sys.exc_traceback``,\n and also as the third item of the tuple returned by\n ``sys.exc_info()``. The latter is the preferred interface,\n since it works correctly when the program is using multiple\n threads. When the program contains no suitable handler, the\n stack trace is written (nicely formatted) to the standard error\n stream; if the interpreter is interactive, it is also made\n available to the user as ``sys.last_traceback``.\n\n Special read-only attributes: ``tb_next`` is the next level in\n the stack trace (towards the frame where the exception\n occurred), or ``None`` if there is no next level; ``tb_frame``\n points to the execution frame of the current level;\n ``tb_lineno`` gives the line number where the exception\n occurred; ``tb_lasti`` indicates the precise instruction. The\n line number and last instruction in the traceback may differ\n from the line number of its frame object if the exception\n occurred in a ``try`` statement with no matching except clause\n or with a finally clause.\n\n Slice objects\n Slice objects are used to represent slices when *extended slice\n syntax* is used. This is a slice using two colons, or multiple\n slices or ellipses separated by commas, e.g., ``a[i:j:step]``,\n ``a[i:j, k:l]``, or ``a[..., i:j]``. They are also created by\n the built-in ``slice()`` function.\n\n Special read-only attributes: ``start`` is the lower bound;\n ``stop`` is the upper bound; ``step`` is the step value; each is\n ``None`` if omitted. These attributes can have any type.\n\n Slice objects support one method:\n\n slice.indices(self, length)\n\n This method takes a single integer argument *length* and\n computes information about the extended slice that the slice\n object would describe if applied to a sequence of *length*\n items. 
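The frame attributes described above can be inspected with ``sys._getframe()``, which is a CPython/PyPy detail rather than part of the text above; a rough sketch assuming Python 2 (the function names are invented)::

    import sys

    def inner():
        frame = sys._getframe()                  # the frame executing inner()
        assert frame.f_code.co_name == 'inner'
        assert frame.f_back.f_code.co_name == 'outer'
        assert 'frame' in frame.f_locals
        return frame.f_lineno

    def outer():
        return inner()

    outer()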
It returns a tuple of three integers; respectively\n these are the *start* and *stop* indices and the *step* or\n stride length of the slice. Missing or out-of-bounds indices\n are handled in a manner consistent with regular slices.\n\n New in version 2.3.\n\n Static method objects\n Static method objects provide a way of defeating the\n transformation of function objects to method objects described\n above. A static method object is a wrapper around any other\n object, usually a user-defined method object. When a static\n method object is retrieved from a class or a class instance, the\n object actually returned is the wrapped object, which is not\n subject to any further transformation. Static method objects are\n not themselves callable, although the objects they wrap usually\n are. Static method objects are created by the built-in\n ``staticmethod()`` constructor.\n\n Class method objects\n A class method object, like a static method object, is a wrapper\n around another object that alters the way in which that object\n is retrieved from classes and class instances. The behaviour of\n class method objects upon such retrieval is described above,\n under "User-defined methods". Class method objects are created\n by the built-in ``classmethod()`` constructor.\n', 'typesfunctions': u'\nFunctions\n*********\n\nFunction objects are created by function definitions. The only\noperation on a function object is to call it: ``func(argument-list)``.\n\nThere are really two flavors of function objects: built-in functions\nand user-defined functions. Both support the same operation (to call\nthe function), but the implementation is different, hence the\ndifferent object types.\n\nSee *Function definitions* for more information.\n', - 'typesmapping': u'\nMapping Types --- ``dict``\n**************************\n\nA *mapping* object maps *hashable* values to arbitrary objects.\nMappings are mutable objects. There is currently only one standard\nmapping type, the *dictionary*. (For other containers see the built\nin ``list``, ``set``, and ``tuple`` classes, and the ``collections``\nmodule.)\n\nA dictionary\'s keys are *almost* arbitrary values. Values that are\nnot *hashable*, that is, values containing lists, dictionaries or\nother mutable types (that are compared by value rather than by object\nidentity) may not be used as keys. Numeric types used for keys obey\nthe normal rules for numeric comparison: if two numbers compare equal\n(such as ``1`` and ``1.0``) then they can be used interchangeably to\nindex the same dictionary entry. (Note however, that since computers\nstore floating-point numbers as approximations it is usually unwise to\nuse them as dictionary keys.)\n\nDictionaries can be created by placing a comma-separated list of\n``key: value`` pairs within braces, for example: ``{\'jack\': 4098,\n\'sjoerd\': 4127}`` or ``{4098: \'jack\', 4127: \'sjoerd\'}``, or by the\n``dict`` constructor.\n\nclass class dict([arg])\n\n Return a new dictionary initialized from an optional positional\n argument or from a set of keyword arguments. If no arguments are\n given, return a new empty dictionary. If the positional argument\n *arg* is a mapping object, return a dictionary mapping the same\n keys to the same values as does the mapping object. Otherwise the\n positional argument must be a sequence, a container that supports\n iteration, or an iterator object. The elements of the argument\n must each also be of one of those kinds, and each must in turn\n contain exactly two objects. 
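The hashability requirement for keys described above can be seen directly; a minimal sketch assuming Python 2 (the data is arbitrary)::

    d = {}
    try:
        d[['not', 'hashable']] = 1               # lists cannot be dictionary keys
    except TypeError:
        pass
    d[('a', 'tuple')] = 1                        # a tuple of hashable items is fine
    assert d == {('a', 'tuple'): 1}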
The first is used as a key in the new\n dictionary, and the second as the key\'s value. If a given key is\n seen more than once, the last value associated with it is retained\n in the new dictionary.\n\n If keyword arguments are given, the keywords themselves with their\n associated values are added as items to the dictionary. If a key is\n specified both in the positional argument and as a keyword\n argument, the value associated with the keyword is retained in the\n dictionary. For example, these all return a dictionary equal to\n ``{"one": 2, "two": 3}``:\n\n * ``dict(one=2, two=3)``\n\n * ``dict({\'one\': 2, \'two\': 3})``\n\n * ``dict(zip((\'one\', \'two\'), (2, 3)))``\n\n * ``dict([[\'two\', 3], [\'one\', 2]])``\n\n The first example only works for keys that are valid Python\n identifiers; the others work with any valid keys.\n\n New in version 2.2.\n\n Changed in version 2.3: Support for building a dictionary from\n keyword arguments added.\n\n These are the operations that dictionaries support (and therefore,\n custom mapping types should support too):\n\n len(d)\n\n Return the number of items in the dictionary *d*.\n\n d[key]\n\n Return the item of *d* with key *key*. Raises a ``KeyError`` if\n *key* is not in the map.\n\n New in version 2.5: If a subclass of dict defines a method\n ``__missing__()``, if the key *key* is not present, the\n ``d[key]`` operation calls that method with the key *key* as\n argument. The ``d[key]`` operation then returns or raises\n whatever is returned or raised by the ``__missing__(key)`` call\n if the key is not present. No other operations or methods invoke\n ``__missing__()``. If ``__missing__()`` is not defined,\n ``KeyError`` is raised. ``__missing__()`` must be a method; it\n cannot be an instance variable. For an example, see\n ``collections.defaultdict``.\n\n d[key] = value\n\n Set ``d[key]`` to *value*.\n\n del d[key]\n\n Remove ``d[key]`` from *d*. Raises a ``KeyError`` if *key* is\n not in the map.\n\n key in d\n\n Return ``True`` if *d* has a key *key*, else ``False``.\n\n New in version 2.2.\n\n key not in d\n\n Equivalent to ``not key in d``.\n\n New in version 2.2.\n\n iter(d)\n\n Return an iterator over the keys of the dictionary. This is a\n shortcut for ``iterkeys()``.\n\n clear()\n\n Remove all items from the dictionary.\n\n copy()\n\n Return a shallow copy of the dictionary.\n\n fromkeys(seq[, value])\n\n Create a new dictionary with keys from *seq* and values set to\n *value*.\n\n ``fromkeys()`` is a class method that returns a new dictionary.\n *value* defaults to ``None``.\n\n New in version 2.3.\n\n get(key[, default])\n\n Return the value for *key* if *key* is in the dictionary, else\n *default*. If *default* is not given, it defaults to ``None``,\n so that this method never raises a ``KeyError``.\n\n has_key(key)\n\n Test for the presence of *key* in the dictionary. ``has_key()``\n is deprecated in favor of ``key in d``.\n\n items()\n\n Return a copy of the dictionary\'s list of ``(key, value)``\n pairs.\n\n **CPython implementation detail:** Keys and values are listed in\n an arbitrary order which is non-random, varies across Python\n implementations, and depends on the dictionary\'s history of\n insertions and deletions.\n\n If ``items()``, ``keys()``, ``values()``, ``iteritems()``,\n ``iterkeys()``, and ``itervalues()`` are called with no\n intervening modifications to the dictionary, the lists will\n directly correspond. 
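The ``__missing__()`` hook is clearest with a tiny ``dict`` subclass. A hedged sketch; ``ZeroDict`` is just an illustrative name, not a standard class:

    class ZeroDict(dict):
        # called by d[key] only when the key is absent; other methods ignore it
        def __missing__(self, key):
            return 0

    d = ZeroDict()
    assert d['eggs'] == 0            # falls back to __missing__ ...
    assert 'eggs' not in d           # ... without inserting the key
    d['spam'] += 1                   # the read uses __missing__, then 1 is stored
    assert d == {'spam': 1}
    assert d.get('ham', 'x') == 'x'  # get() never invokes __missing__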
This allows the creation of ``(value,\n key)`` pairs using ``zip()``: ``pairs = zip(d.values(),\n d.keys())``. The same relationship holds for the ``iterkeys()``\n and ``itervalues()`` methods: ``pairs = zip(d.itervalues(),\n d.iterkeys())`` provides the same value for ``pairs``. Another\n way to create the same list is ``pairs = [(v, k) for (k, v) in\n d.iteritems()]``.\n\n iteritems()\n\n Return an iterator over the dictionary\'s ``(key, value)`` pairs.\n See the note for ``dict.items()``.\n\n Using ``iteritems()`` while adding or deleting entries in the\n dictionary may raise a ``RuntimeError`` or fail to iterate over\n all entries.\n\n New in version 2.2.\n\n iterkeys()\n\n Return an iterator over the dictionary\'s keys. See the note for\n ``dict.items()``.\n\n Using ``iterkeys()`` while adding or deleting entries in the\n dictionary may raise a ``RuntimeError`` or fail to iterate over\n all entries.\n\n New in version 2.2.\n\n itervalues()\n\n Return an iterator over the dictionary\'s values. See the note\n for ``dict.items()``.\n\n Using ``itervalues()`` while adding or deleting entries in the\n dictionary may raise a ``RuntimeError`` or fail to iterate over\n all entries.\n\n New in version 2.2.\n\n keys()\n\n Return a copy of the dictionary\'s list of keys. See the note\n for ``dict.items()``.\n\n pop(key[, default])\n\n If *key* is in the dictionary, remove it and return its value,\n else return *default*. If *default* is not given and *key* is\n not in the dictionary, a ``KeyError`` is raised.\n\n New in version 2.3.\n\n popitem()\n\n Remove and return an arbitrary ``(key, value)`` pair from the\n dictionary.\n\n ``popitem()`` is useful to destructively iterate over a\n dictionary, as often used in set algorithms. If the dictionary\n is empty, calling ``popitem()`` raises a ``KeyError``.\n\n setdefault(key[, default])\n\n If *key* is in the dictionary, return its value. If not, insert\n *key* with a value of *default* and return *default*. *default*\n defaults to ``None``.\n\n update([other])\n\n Update the dictionary with the key/value pairs from *other*,\n overwriting existing keys. Return ``None``.\n\n ``update()`` accepts either another dictionary object or an\n iterable of key/value pairs (as a tuple or other iterable of\n length two). If keyword arguments are specified, the dictionary\n is then updated with those key/value pairs: ``d.update(red=1,\n blue=2)``.\n\n Changed in version 2.4: Allowed the argument to be an iterable\n of key/value pairs and allowed keyword arguments.\n\n values()\n\n Return a copy of the dictionary\'s list of values. See the note\n for ``dict.items()``.\n\n viewitems()\n\n Return a new view of the dictionary\'s items (``(key, value)``\n pairs). See below for documentation of view objects.\n\n New in version 2.7.\n\n viewkeys()\n\n Return a new view of the dictionary\'s keys. See below for\n documentation of view objects.\n\n New in version 2.7.\n\n viewvalues()\n\n Return a new view of the dictionary\'s values. See below for\n documentation of view objects.\n\n New in version 2.7.\n\n\nDictionary view objects\n=======================\n\nThe objects returned by ``dict.viewkeys()``, ``dict.viewvalues()`` and\n``dict.viewitems()`` are *view objects*. 
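A few self-checking examples of ``setdefault()``, ``pop()`` and ``update()`` as described above; the data is throwaway:

    d = {'one': 1}
    assert d.setdefault('one', 99) == 1      # existing key: value kept
    assert d.setdefault('two', 2) == 2       # missing key: default inserted
    assert d == {'one': 1, 'two': 2}

    assert d.pop('one') == 1                 # remove and return the value
    assert d.pop('one', 'gone') == 'gone'    # a default avoids the KeyError
    d.update([('three', 3)], four=4)         # iterable of pairs plus keywords
    assert d == {'two': 2, 'three': 3, 'four': 4}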
They provide a dynamic view\non the dictionary\'s entries, which means that when the dictionary\nchanges, the view reflects these changes.\n\nDictionary views can be iterated over to yield their respective data,\nand support membership tests:\n\nlen(dictview)\n\n Return the number of entries in the dictionary.\n\niter(dictview)\n\n Return an iterator over the keys, values or items (represented as\n tuples of ``(key, value)``) in the dictionary.\n\n Keys and values are iterated over in an arbitrary order which is\n non-random, varies across Python implementations, and depends on\n the dictionary\'s history of insertions and deletions. If keys,\n values and items views are iterated over with no intervening\n modifications to the dictionary, the order of items will directly\n correspond. This allows the creation of ``(value, key)`` pairs\n using ``zip()``: ``pairs = zip(d.values(), d.keys())``. Another\n way to create the same list is ``pairs = [(v, k) for (k, v) in\n d.items()]``.\n\n Iterating views while adding or deleting entries in the dictionary\n may raise a ``RuntimeError`` or fail to iterate over all entries.\n\nx in dictview\n\n Return ``True`` if *x* is in the underlying dictionary\'s keys,\n values or items (in the latter case, *x* should be a ``(key,\n value)`` tuple).\n\nKeys views are set-like since their entries are unique and hashable.\nIf all values are hashable, so that (key, value) pairs are unique and\nhashable, then the items view is also set-like. (Values views are not\ntreated as set-like since the entries are generally not unique.) Then\nthese set operations are available ("other" refers either to another\nview or a set):\n\ndictview & other\n\n Return the intersection of the dictview and the other object as a\n new set.\n\ndictview | other\n\n Return the union of the dictview and the other object as a new set.\n\ndictview - other\n\n Return the difference between the dictview and the other object\n (all elements in *dictview* that aren\'t in *other*) as a new set.\n\ndictview ^ other\n\n Return the symmetric difference (all elements either in *dictview*\n or *other*, but not in both) of the dictview and the other object\n as a new set.\n\nAn example of dictionary view usage:\n\n >>> dishes = {\'eggs\': 2, \'sausage\': 1, \'bacon\': 1, \'spam\': 500}\n >>> keys = dishes.viewkeys()\n >>> values = dishes.viewvalues()\n\n >>> # iteration\n >>> n = 0\n >>> for val in values:\n ... n += val\n >>> print(n)\n 504\n\n >>> # keys and values are iterated over in the same order\n >>> list(keys)\n [\'eggs\', \'bacon\', \'sausage\', \'spam\']\n >>> list(values)\n [2, 1, 1, 500]\n\n >>> # view objects are dynamic and reflect dict changes\n >>> del dishes[\'eggs\']\n >>> del dishes[\'sausage\']\n >>> list(keys)\n [\'spam\', \'bacon\']\n\n >>> # set operations\n >>> keys & {\'eggs\', \'bacon\', \'salad\'}\n {\'bacon\'}\n', + 'typesmapping': u'\nMapping Types --- ``dict``\n**************************\n\nA *mapping* object maps *hashable* values to arbitrary objects.\nMappings are mutable objects. There is currently only one standard\nmapping type, the *dictionary*. (For other containers see the built\nin ``list``, ``set``, and ``tuple`` classes, and the ``collections``\nmodule.)\n\nA dictionary\'s keys are *almost* arbitrary values. Values that are\nnot *hashable*, that is, values containing lists, dictionaries or\nother mutable types (that are compared by value rather than by object\nidentity) may not be used as keys. 
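The key rules above (keys must be hashable, numeric keys compare by value) can be checked directly with arbitrary data:

    d = {1: 'spam'}
    d[1.0] = 'eggs'                 # 1 == 1.0, so this rebinds the same entry
    assert d == {1: 'eggs'} and len(d) == 1

    try:
        d[['a', 'list']] = 'nope'   # lists are not hashable and cannot be keys
    except TypeError:
        pass
    else:
        raise AssertionError('unhashable key was accepted')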
Numeric types used for keys obey\nthe normal rules for numeric comparison: if two numbers compare equal\n(such as ``1`` and ``1.0``) then they can be used interchangeably to\nindex the same dictionary entry. (Note however, that since computers\nstore floating-point numbers as approximations it is usually unwise to\nuse them as dictionary keys.)\n\nDictionaries can be created by placing a comma-separated list of\n``key: value`` pairs within braces, for example: ``{\'jack\': 4098,\n\'sjoerd\': 4127}`` or ``{4098: \'jack\', 4127: \'sjoerd\'}``, or by the\n``dict`` constructor.\n\nclass class dict([arg])\n\n Return a new dictionary initialized from an optional positional\n argument or from a set of keyword arguments. If no arguments are\n given, return a new empty dictionary. If the positional argument\n *arg* is a mapping object, return a dictionary mapping the same\n keys to the same values as does the mapping object. Otherwise the\n positional argument must be a sequence, a container that supports\n iteration, or an iterator object. The elements of the argument\n must each also be of one of those kinds, and each must in turn\n contain exactly two objects. The first is used as a key in the new\n dictionary, and the second as the key\'s value. If a given key is\n seen more than once, the last value associated with it is retained\n in the new dictionary.\n\n If keyword arguments are given, the keywords themselves with their\n associated values are added as items to the dictionary. If a key is\n specified both in the positional argument and as a keyword\n argument, the value associated with the keyword is retained in the\n dictionary. For example, these all return a dictionary equal to\n ``{"one": 1, "two": 2}``:\n\n * ``dict(one=1, two=2)``\n\n * ``dict({\'one\': 1, \'two\': 2})``\n\n * ``dict(zip((\'one\', \'two\'), (1, 2)))``\n\n * ``dict([[\'two\', 2], [\'one\', 1]])``\n\n The first example only works for keys that are valid Python\n identifiers; the others work with any valid keys.\n\n New in version 2.2.\n\n Changed in version 2.3: Support for building a dictionary from\n keyword arguments added.\n\n These are the operations that dictionaries support (and therefore,\n custom mapping types should support too):\n\n len(d)\n\n Return the number of items in the dictionary *d*.\n\n d[key]\n\n Return the item of *d* with key *key*. Raises a ``KeyError`` if\n *key* is not in the map.\n\n New in version 2.5: If a subclass of dict defines a method\n ``__missing__()``, if the key *key* is not present, the\n ``d[key]`` operation calls that method with the key *key* as\n argument. The ``d[key]`` operation then returns or raises\n whatever is returned or raised by the ``__missing__(key)`` call\n if the key is not present. No other operations or methods invoke\n ``__missing__()``. If ``__missing__()`` is not defined,\n ``KeyError`` is raised. ``__missing__()`` must be a method; it\n cannot be an instance variable. For an example, see\n ``collections.defaultdict``.\n\n d[key] = value\n\n Set ``d[key]`` to *value*.\n\n del d[key]\n\n Remove ``d[key]`` from *d*. Raises a ``KeyError`` if *key* is\n not in the map.\n\n key in d\n\n Return ``True`` if *d* has a key *key*, else ``False``.\n\n New in version 2.2.\n\n key not in d\n\n Equivalent to ``not key in d``.\n\n New in version 2.2.\n\n iter(d)\n\n Return an iterator over the keys of the dictionary. 
This is a\n shortcut for ``iterkeys()``.\n\n clear()\n\n Remove all items from the dictionary.\n\n copy()\n\n Return a shallow copy of the dictionary.\n\n fromkeys(seq[, value])\n\n Create a new dictionary with keys from *seq* and values set to\n *value*.\n\n ``fromkeys()`` is a class method that returns a new dictionary.\n *value* defaults to ``None``.\n\n New in version 2.3.\n\n get(key[, default])\n\n Return the value for *key* if *key* is in the dictionary, else\n *default*. If *default* is not given, it defaults to ``None``,\n so that this method never raises a ``KeyError``.\n\n has_key(key)\n\n Test for the presence of *key* in the dictionary. ``has_key()``\n is deprecated in favor of ``key in d``.\n\n items()\n\n Return a copy of the dictionary\'s list of ``(key, value)``\n pairs.\n\n **CPython implementation detail:** Keys and values are listed in\n an arbitrary order which is non-random, varies across Python\n implementations, and depends on the dictionary\'s history of\n insertions and deletions.\n\n If ``items()``, ``keys()``, ``values()``, ``iteritems()``,\n ``iterkeys()``, and ``itervalues()`` are called with no\n intervening modifications to the dictionary, the lists will\n directly correspond. This allows the creation of ``(value,\n key)`` pairs using ``zip()``: ``pairs = zip(d.values(),\n d.keys())``. The same relationship holds for the ``iterkeys()``\n and ``itervalues()`` methods: ``pairs = zip(d.itervalues(),\n d.iterkeys())`` provides the same value for ``pairs``. Another\n way to create the same list is ``pairs = [(v, k) for (k, v) in\n d.iteritems()]``.\n\n iteritems()\n\n Return an iterator over the dictionary\'s ``(key, value)`` pairs.\n See the note for ``dict.items()``.\n\n Using ``iteritems()`` while adding or deleting entries in the\n dictionary may raise a ``RuntimeError`` or fail to iterate over\n all entries.\n\n New in version 2.2.\n\n iterkeys()\n\n Return an iterator over the dictionary\'s keys. See the note for\n ``dict.items()``.\n\n Using ``iterkeys()`` while adding or deleting entries in the\n dictionary may raise a ``RuntimeError`` or fail to iterate over\n all entries.\n\n New in version 2.2.\n\n itervalues()\n\n Return an iterator over the dictionary\'s values. See the note\n for ``dict.items()``.\n\n Using ``itervalues()`` while adding or deleting entries in the\n dictionary may raise a ``RuntimeError`` or fail to iterate over\n all entries.\n\n New in version 2.2.\n\n keys()\n\n Return a copy of the dictionary\'s list of keys. See the note\n for ``dict.items()``.\n\n pop(key[, default])\n\n If *key* is in the dictionary, remove it and return its value,\n else return *default*. If *default* is not given and *key* is\n not in the dictionary, a ``KeyError`` is raised.\n\n New in version 2.3.\n\n popitem()\n\n Remove and return an arbitrary ``(key, value)`` pair from the\n dictionary.\n\n ``popitem()`` is useful to destructively iterate over a\n dictionary, as often used in set algorithms. If the dictionary\n is empty, calling ``popitem()`` raises a ``KeyError``.\n\n setdefault(key[, default])\n\n If *key* is in the dictionary, return its value. If not, insert\n *key* with a value of *default* and return *default*. *default*\n defaults to ``None``.\n\n update([other])\n\n Update the dictionary with the key/value pairs from *other*,\n overwriting existing keys. Return ``None``.\n\n ``update()`` accepts either another dictionary object or an\n iterable of key/value pairs (as tuples or other iterables of\n length two). 
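To round out the listing, a short hedged sketch of ``fromkeys()``, ``get()`` and ``popitem()`` with throwaway data:

    d = dict.fromkeys(['a', 'b', 'c'], 0)  # every key maps to the same value object
    assert d == {'a': 0, 'b': 0, 'c': 0}
    assert d.get('a') == 0
    assert d.get('z') is None              # missing key: the default default is None
    assert d.get('z', -1) == -1            # ...or an explicit fallback

    k, v = d.popitem()                     # removes and returns some arbitrary pair
    assert k in ('a', 'b', 'c') and v == 0 and len(d) == 2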
If keyword arguments are specified, the dictionary\n is then updated with those key/value pairs: ``d.update(red=1,\n blue=2)``.\n\n Changed in version 2.4: Allowed the argument to be an iterable\n of key/value pairs and allowed keyword arguments.\n\n values()\n\n Return a copy of the dictionary\'s list of values. See the note\n for ``dict.items()``.\n\n viewitems()\n\n Return a new view of the dictionary\'s items (``(key, value)``\n pairs). See below for documentation of view objects.\n\n New in version 2.7.\n\n viewkeys()\n\n Return a new view of the dictionary\'s keys. See below for\n documentation of view objects.\n\n New in version 2.7.\n\n viewvalues()\n\n Return a new view of the dictionary\'s values. See below for\n documentation of view objects.\n\n New in version 2.7.\n\n\nDictionary view objects\n=======================\n\nThe objects returned by ``dict.viewkeys()``, ``dict.viewvalues()`` and\n``dict.viewitems()`` are *view objects*. They provide a dynamic view\non the dictionary\'s entries, which means that when the dictionary\nchanges, the view reflects these changes.\n\nDictionary views can be iterated over to yield their respective data,\nand support membership tests:\n\nlen(dictview)\n\n Return the number of entries in the dictionary.\n\niter(dictview)\n\n Return an iterator over the keys, values or items (represented as\n tuples of ``(key, value)``) in the dictionary.\n\n Keys and values are iterated over in an arbitrary order which is\n non-random, varies across Python implementations, and depends on\n the dictionary\'s history of insertions and deletions. If keys,\n values and items views are iterated over with no intervening\n modifications to the dictionary, the order of items will directly\n correspond. This allows the creation of ``(value, key)`` pairs\n using ``zip()``: ``pairs = zip(d.values(), d.keys())``. Another\n way to create the same list is ``pairs = [(v, k) for (k, v) in\n d.items()]``.\n\n Iterating views while adding or deleting entries in the dictionary\n may raise a ``RuntimeError`` or fail to iterate over all entries.\n\nx in dictview\n\n Return ``True`` if *x* is in the underlying dictionary\'s keys,\n values or items (in the latter case, *x* should be a ``(key,\n value)`` tuple).\n\nKeys views are set-like since their entries are unique and hashable.\nIf all values are hashable, so that (key, value) pairs are unique and\nhashable, then the items view is also set-like. (Values views are not\ntreated as set-like since the entries are generally not unique.) Then\nthese set operations are available ("other" refers either to another\nview or a set):\n\ndictview & other\n\n Return the intersection of the dictview and the other object as a\n new set.\n\ndictview | other\n\n Return the union of the dictview and the other object as a new set.\n\ndictview - other\n\n Return the difference between the dictview and the other object\n (all elements in *dictview* that aren\'t in *other*) as a new set.\n\ndictview ^ other\n\n Return the symmetric difference (all elements either in *dictview*\n or *other*, but not in both) of the dictview and the other object\n as a new set.\n\nAn example of dictionary view usage:\n\n >>> dishes = {\'eggs\': 2, \'sausage\': 1, \'bacon\': 1, \'spam\': 500}\n >>> keys = dishes.viewkeys()\n >>> values = dishes.viewvalues()\n\n >>> # iteration\n >>> n = 0\n >>> for val in values:\n ... 
n += val\n >>> print(n)\n 504\n\n >>> # keys and values are iterated over in the same order\n >>> list(keys)\n [\'eggs\', \'bacon\', \'sausage\', \'spam\']\n >>> list(values)\n [2, 1, 1, 500]\n\n >>> # view objects are dynamic and reflect dict changes\n >>> del dishes[\'eggs\']\n >>> del dishes[\'sausage\']\n >>> list(keys)\n [\'spam\', \'bacon\']\n\n >>> # set operations\n >>> keys & {\'eggs\', \'bacon\', \'salad\'}\n {\'bacon\'}\n', 'typesmethods': u"\nMethods\n*******\n\nMethods are functions that are called using the attribute notation.\nThere are two flavors: built-in methods (such as ``append()`` on\nlists) and class instance methods. Built-in methods are described\nwith the types that support them.\n\nThe implementation adds two special read-only attributes to class\ninstance methods: ``m.im_self`` is the object on which the method\noperates, and ``m.im_func`` is the function implementing the method.\nCalling ``m(arg-1, arg-2, ..., arg-n)`` is completely equivalent to\ncalling ``m.im_func(m.im_self, arg-1, arg-2, ..., arg-n)``.\n\nClass instance methods are either *bound* or *unbound*, referring to\nwhether the method was accessed through an instance or a class,\nrespectively. When a method is unbound, its ``im_self`` attribute\nwill be ``None`` and if called, an explicit ``self`` object must be\npassed as the first argument. In this case, ``self`` must be an\ninstance of the unbound method's class (or a subclass of that class),\notherwise a ``TypeError`` is raised.\n\nLike function objects, methods objects support getting arbitrary\nattributes. However, since method attributes are actually stored on\nthe underlying function object (``meth.im_func``), setting method\nattributes on either bound or unbound methods is disallowed.\nAttempting to set a method attribute results in a ``TypeError`` being\nraised. In order to set a method attribute, you need to explicitly\nset it on the underlying function object:\n\n class C:\n def method(self):\n pass\n\n c = C()\n c.method.im_func.whoami = 'my name is c'\n\nSee *The standard type hierarchy* for more information.\n", 'typesmodules': u"\nModules\n*******\n\nThe only special operation on a module is attribute access:\n``m.name``, where *m* is a module and *name* accesses a name defined\nin *m*'s symbol table. Module attributes can be assigned to. (Note\nthat the ``import`` statement is not, strictly speaking, an operation\non a module object; ``import foo`` does not require a module object\nnamed *foo* to exist, rather it requires an (external) *definition*\nfor a module named *foo* somewhere.)\n\nA special member of every module is ``__dict__``. This is the\ndictionary containing the module's symbol table. Modifying this\ndictionary will actually change the module's symbol table, but direct\nassignment to the ``__dict__`` attribute is not possible (you can\nwrite ``m.__dict__['a'] = 1``, which defines ``m.a`` to be ``1``, but\nyou can't write ``m.__dict__ = {}``). Modifying ``__dict__`` directly\nis not recommended.\n\nModules built into the interpreter are written like this: ````. 
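A minimal sketch of module ``__dict__`` access as described above, using a synthetic module built with ``types.ModuleType`` so that nothing real is imported; the module name is arbitrary:

    import types

    m = types.ModuleType('example')   # synthetic module object
    m.__dict__['answer'] = 42         # writing the symbol table defines m.answer
    assert m.answer == 42
    m.answer = 43                     # plain attribute assignment also works
    assert m.__dict__['answer'] == 43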
If loaded from a file, they are written as\n````.\n", - 'typesseq': u'\nSequence Types --- ``str``, ``unicode``, ``list``, ``tuple``, ``buffer``, ``xrange``\n************************************************************************************\n\nThere are six sequence types: strings, Unicode strings, lists, tuples,\nbuffers, and xrange objects.\n\nFor other containers see the built in ``dict`` and ``set`` classes,\nand the ``collections`` module.\n\nString literals are written in single or double quotes: ``\'xyzzy\'``,\n``"frobozz"``. See *String literals* for more about string literals.\nUnicode strings are much like strings, but are specified in the syntax\nusing a preceding ``\'u\'`` character: ``u\'abc\'``, ``u"def"``. In\naddition to the functionality described here, there are also string-\nspecific methods described in the *String Methods* section. Lists are\nconstructed with square brackets, separating items with commas: ``[a,\nb, c]``. Tuples are constructed by the comma operator (not within\nsquare brackets), with or without enclosing parentheses, but an empty\ntuple must have the enclosing parentheses, such as ``a, b, c`` or\n``()``. A single item tuple must have a trailing comma, such as\n``(d,)``.\n\nBuffer objects are not directly supported by Python syntax, but can be\ncreated by calling the built-in function ``buffer()``. They don\'t\nsupport concatenation or repetition.\n\nObjects of type xrange are similar to buffers in that there is no\nspecific syntax to create them, but they are created using the\n``xrange()`` function. They don\'t support slicing, concatenation or\nrepetition, and using ``in``, ``not in``, ``min()`` or ``max()`` on\nthem is inefficient.\n\nMost sequence types support the following operations. The ``in`` and\n``not in`` operations have the same priorities as the comparison\noperations. The ``+`` and ``*`` operations have the same priority as\nthe corresponding numeric operations. [3] Additional methods are\nprovided for *Mutable Sequence Types*.\n\nThis table lists the sequence operations sorted in ascending priority\n(operations in the same box have the same priority). 
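Two quick illustrations of the constructors mentioned above, assuming plain Python 2 built-ins (note the tuple commas and the ``xrange`` type):

    t = 'a', 'b', 'c'            # the commas build the tuple, not the parentheses
    single = ('d',)              # a one-item tuple needs the trailing comma
    assert isinstance(t, tuple) and len(single) == 1

    r = xrange(5)                # constant memory, whatever the range size
    assert len(r) == 5 and r[2] == 2
    assert list(r) == [0, 1, 2, 3, 4]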
In the table,\n*s* and *t* are sequences of the same type; *n*, *i* and *j* are\nintegers:\n\n+--------------------+----------------------------------+------------+\n| Operation | Result | Notes |\n+====================+==================================+============+\n| ``x in s`` | ``True`` if an item of *s* is | (1) |\n| | equal to *x*, else ``False`` | |\n+--------------------+----------------------------------+------------+\n| ``x not in s`` | ``False`` if an item of *s* is | (1) |\n| | equal to *x*, else ``True`` | |\n+--------------------+----------------------------------+------------+\n| ``s + t`` | the concatenation of *s* and *t* | (6) |\n+--------------------+----------------------------------+------------+\n| ``s * n, n * s`` | *n* shallow copies of *s* | (2) |\n| | concatenated | |\n+--------------------+----------------------------------+------------+\n| ``s[i]`` | *i*\'th item of *s*, origin 0 | (3) |\n+--------------------+----------------------------------+------------+\n| ``s[i:j]`` | slice of *s* from *i* to *j* | (3)(4) |\n+--------------------+----------------------------------+------------+\n| ``s[i:j:k]`` | slice of *s* from *i* to *j* | (3)(5) |\n| | with step *k* | |\n+--------------------+----------------------------------+------------+\n| ``len(s)`` | length of *s* | |\n+--------------------+----------------------------------+------------+\n| ``min(s)`` | smallest item of *s* | |\n+--------------------+----------------------------------+------------+\n| ``max(s)`` | largest item of *s* | |\n+--------------------+----------------------------------+------------+\n\nSequence types also support comparisons. In particular, tuples and\nlists are compared lexicographically by comparing corresponding\nelements. This means that to compare equal, every element must compare\nequal and the two sequences must be of the same type and have the same\nlength. (For full details see *Comparisons* in the language\nreference.)\n\nNotes:\n\n1. When *s* is a string or Unicode string object the ``in`` and ``not\n in`` operations act like a substring test. In Python versions\n before 2.3, *x* had to be a string of length 1. In Python 2.3 and\n beyond, *x* may be a string of any length.\n\n2. Values of *n* less than ``0`` are treated as ``0`` (which yields an\n empty sequence of the same type as *s*). Note also that the copies\n are shallow; nested structures are not copied. This often haunts\n new Python programmers; consider:\n\n >>> lists = [[]] * 3\n >>> lists\n [[], [], []]\n >>> lists[0].append(3)\n >>> lists\n [[3], [3], [3]]\n\n What has happened is that ``[[]]`` is a one-element list containing\n an empty list, so all three elements of ``[[]] * 3`` are (pointers\n to) this single empty list. Modifying any of the elements of\n ``lists`` modifies this single list. You can create a list of\n different lists this way:\n\n >>> lists = [[] for i in range(3)]\n >>> lists[0].append(3)\n >>> lists[1].append(5)\n >>> lists[2].append(7)\n >>> lists\n [[3], [5], [7]]\n\n3. If *i* or *j* is negative, the index is relative to the end of the\n string: ``len(s) + i`` or ``len(s) + j`` is substituted. But note\n that ``-0`` is still ``0``.\n\n4. The slice of *s* from *i* to *j* is defined as the sequence of\n items with index *k* such that ``i <= k < j``. If *i* or *j* is\n greater than ``len(s)``, use ``len(s)``. If *i* is omitted or\n ``None``, use ``0``. If *j* is omitted or ``None``, use\n ``len(s)``. If *i* is greater than or equal to *j*, the slice is\n empty.\n\n5. 
The slice of *s* from *i* to *j* with step *k* is defined as the\n sequence of items with index ``x = i + n*k`` such that ``0 <= n <\n (j-i)/k``. In other words, the indices are ``i``, ``i+k``,\n ``i+2*k``, ``i+3*k`` and so on, stopping when *j* is reached (but\n never including *j*). If *i* or *j* is greater than ``len(s)``,\n use ``len(s)``. If *i* or *j* are omitted or ``None``, they become\n "end" values (which end depends on the sign of *k*). Note, *k*\n cannot be zero. If *k* is ``None``, it is treated like ``1``.\n\n6. **CPython implementation detail:** If *s* and *t* are both strings,\n some Python implementations such as CPython can usually perform an\n in-place optimization for assignments of the form ``s = s + t`` or\n ``s += t``. When applicable, this optimization makes quadratic\n run-time much less likely. This optimization is both version and\n implementation dependent. For performance sensitive code, it is\n preferable to use the ``str.join()`` method which assures\n consistent linear concatenation performance across versions and\n implementations.\n\n Changed in version 2.4: Formerly, string concatenation never\n occurred in-place.\n\n\nString Methods\n==============\n\nBelow are listed the string methods which both 8-bit strings and\nUnicode objects support.\n\nIn addition, Python\'s strings support the sequence type methods\ndescribed in the *Sequence Types --- str, unicode, list, tuple,\nbuffer, xrange* section. To output formatted strings use template\nstrings or the ``%`` operator described in the *String Formatting\nOperations* section. Also, see the ``re`` module for string functions\nbased on regular expressions.\n\nstr.capitalize()\n\n Return a copy of the string with only its first character\n capitalized.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.center(width[, fillchar])\n\n Return centered in a string of length *width*. Padding is done\n using the specified *fillchar* (default is a space).\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.count(sub[, start[, end]])\n\n Return the number of non-overlapping occurrences of substring *sub*\n in the range [*start*, *end*]. Optional arguments *start* and\n *end* are interpreted as in slice notation.\n\nstr.decode([encoding[, errors]])\n\n Decodes the string using the codec registered for *encoding*.\n *encoding* defaults to the default string encoding. *errors* may\n be given to set a different error handling scheme. The default is\n ``\'strict\'``, meaning that encoding errors raise ``UnicodeError``.\n Other possible values are ``\'ignore\'``, ``\'replace\'`` and any other\n name registered via ``codecs.register_error()``, see section *Codec\n Base Classes*.\n\n New in version 2.2.\n\n Changed in version 2.3: Support for other error handling schemes\n added.\n\n Changed in version 2.7: Support for keyword arguments added.\n\nstr.encode([encoding[, errors]])\n\n Return an encoded version of the string. Default encoding is the\n current default string encoding. *errors* may be given to set a\n different error handling scheme. The default for *errors* is\n ``\'strict\'``, meaning that encoding errors raise a\n ``UnicodeError``. Other possible values are ``\'ignore\'``,\n ``\'replace\'``, ``\'xmlcharrefreplace\'``, ``\'backslashreplace\'`` and\n any other name registered via ``codecs.register_error()``, see\n section *Codec Base Classes*. 
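The indexing and slicing rules in notes 3-5 above translate directly into code; a small self-checking sketch on an arbitrary string:

    s = 'abcdefgh'
    assert s[-3:] == 'fgh'         # negative indices count from the end
    assert s[2:100] == 'cdefgh'    # out-of-range bounds are clipped to len(s)
    assert s[5:2] == ''            # i >= j gives an empty slice
    assert s[::2] == 'aceg'        # extended slice: every second item
    assert s[1:7:3] == 'be'        # indices 1 and 4, stopping before 7
    assert s[::-1] == 'hgfedcba'   # a negative step walks backwards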
For a list of possible encodings, see\n section *Standard Encodings*.\n\n New in version 2.0.\n\n Changed in version 2.3: Support for ``\'xmlcharrefreplace\'`` and\n ``\'backslashreplace\'`` and other error handling schemes added.\n\n Changed in version 2.7: Support for keyword arguments added.\n\nstr.endswith(suffix[, start[, end]])\n\n Return ``True`` if the string ends with the specified *suffix*,\n otherwise return ``False``. *suffix* can also be a tuple of\n suffixes to look for. With optional *start*, test beginning at\n that position. With optional *end*, stop comparing at that\n position.\n\n Changed in version 2.5: Accept tuples as *suffix*.\n\nstr.expandtabs([tabsize])\n\n Return a copy of the string where all tab characters are replaced\n by one or more spaces, depending on the current column and the\n given tab size. The column number is reset to zero after each\n newline occurring in the string. If *tabsize* is not given, a tab\n size of ``8`` characters is assumed. This doesn\'t understand other\n non-printing characters or escape sequences.\n\nstr.find(sub[, start[, end]])\n\n Return the lowest index in the string where substring *sub* is\n found, such that *sub* is contained in the slice ``s[start:end]``.\n Optional arguments *start* and *end* are interpreted as in slice\n notation. Return ``-1`` if *sub* is not found.\n\nstr.format(*args, **kwargs)\n\n Perform a string formatting operation. The string on which this\n method is called can contain literal text or replacement fields\n delimited by braces ``{}``. Each replacement field contains either\n the numeric index of a positional argument, or the name of a\n keyword argument. Returns a copy of the string where each\n replacement field is replaced with the string value of the\n corresponding argument.\n\n >>> "The sum of 1 + 2 is {0}".format(1+2)\n \'The sum of 1 + 2 is 3\'\n\n See *Format String Syntax* for a description of the various\n formatting options that can be specified in format strings.\n\n This method of string formatting is the new standard in Python 3.0,\n and should be preferred to the ``%`` formatting described in\n *String Formatting Operations* in new code.\n\n New in version 2.6.\n\nstr.index(sub[, start[, end]])\n\n Like ``find()``, but raise ``ValueError`` when the substring is not\n found.\n\nstr.isalnum()\n\n Return true if all characters in the string are alphanumeric and\n there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isalpha()\n\n Return true if all characters in the string are alphabetic and\n there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isdigit()\n\n Return true if all characters in the string are digits and there is\n at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.islower()\n\n Return true if all cased characters in the string are lowercase and\n there is at least one cased character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isspace()\n\n Return true if there are only whitespace characters in the string\n and there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.istitle()\n\n Return true if the string is a titlecased string and there is at\n least one character, for example uppercase characters may only\n follow uncased characters and lowercase characters only cased ones.\n Return false otherwise.\n\n For 
8-bit strings, this method is locale-dependent.\n\nstr.isupper()\n\n Return true if all cased characters in the string are uppercase and\n there is at least one cased character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.join(iterable)\n\n Return a string which is the concatenation of the strings in the\n *iterable* *iterable*. The separator between elements is the\n string providing this method.\n\nstr.ljust(width[, fillchar])\n\n Return the string left justified in a string of length *width*.\n Padding is done using the specified *fillchar* (default is a\n space). The original string is returned if *width* is less than\n ``len(s)``.\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.lower()\n\n Return a copy of the string converted to lowercase.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.lstrip([chars])\n\n Return a copy of the string with leading characters removed. The\n *chars* argument is a string specifying the set of characters to be\n removed. If omitted or ``None``, the *chars* argument defaults to\n removing whitespace. The *chars* argument is not a prefix; rather,\n all combinations of its values are stripped:\n\n >>> \' spacious \'.lstrip()\n \'spacious \'\n >>> \'www.example.com\'.lstrip(\'cmowz.\')\n \'example.com\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.partition(sep)\n\n Split the string at the first occurrence of *sep*, and return a\n 3-tuple containing the part before the separator, the separator\n itself, and the part after the separator. If the separator is not\n found, return a 3-tuple containing the string itself, followed by\n two empty strings.\n\n New in version 2.5.\n\nstr.replace(old, new[, count])\n\n Return a copy of the string with all occurrences of substring *old*\n replaced by *new*. If the optional argument *count* is given, only\n the first *count* occurrences are replaced.\n\nstr.rfind(sub[, start[, end]])\n\n Return the highest index in the string where substring *sub* is\n found, such that *sub* is contained within ``s[start:end]``.\n Optional arguments *start* and *end* are interpreted as in slice\n notation. Return ``-1`` on failure.\n\nstr.rindex(sub[, start[, end]])\n\n Like ``rfind()`` but raises ``ValueError`` when the substring *sub*\n is not found.\n\nstr.rjust(width[, fillchar])\n\n Return the string right justified in a string of length *width*.\n Padding is done using the specified *fillchar* (default is a\n space). The original string is returned if *width* is less than\n ``len(s)``.\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.rpartition(sep)\n\n Split the string at the last occurrence of *sep*, and return a\n 3-tuple containing the part before the separator, the separator\n itself, and the part after the separator. If the separator is not\n found, return a 3-tuple containing two empty strings, followed by\n the string itself.\n\n New in version 2.5.\n\nstr.rsplit([sep[, maxsplit]])\n\n Return a list of the words in the string, using *sep* as the\n delimiter string. If *maxsplit* is given, at most *maxsplit* splits\n are done, the *rightmost* ones. If *sep* is not specified or\n ``None``, any whitespace string is a separator. Except for\n splitting from the right, ``rsplit()`` behaves like ``split()``\n which is described in detail below.\n\n New in version 2.4.\n\nstr.rstrip([chars])\n\n Return a copy of the string with trailing characters removed. 
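A handful of self-checking examples for the search, formatting and separator-based methods listed above; all data is throwaway:

    assert 'spam and eggs'.find('and') == 5
    assert 'spam'.find('x') == -1            # find() signals failure with -1
    try:
        'spam'.index('x')                    # index() raises instead
    except ValueError:
        pass

    assert '{0}, {1}, {0}'.format('a', 'b') == 'a, b, a'
    assert 'key=value=more'.partition('=') == ('key', '=', 'value=more')
    assert 'key=value=more'.rpartition('=') == ('key=value', '=', 'more')
    assert '-'.join(['a', 'b', 'c']) == 'a-b-c'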
The\n *chars* argument is a string specifying the set of characters to be\n removed. If omitted or ``None``, the *chars* argument defaults to\n removing whitespace. The *chars* argument is not a suffix; rather,\n all combinations of its values are stripped:\n\n >>> \' spacious \'.rstrip()\n \' spacious\'\n >>> \'mississippi\'.rstrip(\'ipz\')\n \'mississ\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.split([sep[, maxsplit]])\n\n Return a list of the words in the string, using *sep* as the\n delimiter string. If *maxsplit* is given, at most *maxsplit*\n splits are done (thus, the list will have at most ``maxsplit+1``\n elements). If *maxsplit* is not specified, then there is no limit\n on the number of splits (all possible splits are made).\n\n If *sep* is given, consecutive delimiters are not grouped together\n and are deemed to delimit empty strings (for example,\n ``\'1,,2\'.split(\',\')`` returns ``[\'1\', \'\', \'2\']``). The *sep*\n argument may consist of multiple characters (for example,\n ``\'1<>2<>3\'.split(\'<>\')`` returns ``[\'1\', \'2\', \'3\']``). Splitting\n an empty string with a specified separator returns ``[\'\']``.\n\n If *sep* is not specified or is ``None``, a different splitting\n algorithm is applied: runs of consecutive whitespace are regarded\n as a single separator, and the result will contain no empty strings\n at the start or end if the string has leading or trailing\n whitespace. Consequently, splitting an empty string or a string\n consisting of just whitespace with a ``None`` separator returns\n ``[]``.\n\n For example, ``\' 1 2 3 \'.split()`` returns ``[\'1\', \'2\', \'3\']``,\n and ``\' 1 2 3 \'.split(None, 1)`` returns ``[\'1\', \'2 3 \']``.\n\nstr.splitlines([keepends])\n\n Return a list of the lines in the string, breaking at line\n boundaries. Line breaks are not included in the resulting list\n unless *keepends* is given and true.\n\nstr.startswith(prefix[, start[, end]])\n\n Return ``True`` if string starts with the *prefix*, otherwise\n return ``False``. *prefix* can also be a tuple of prefixes to look\n for. With optional *start*, test string beginning at that\n position. With optional *end*, stop comparing string at that\n position.\n\n Changed in version 2.5: Accept tuples as *prefix*.\n\nstr.strip([chars])\n\n Return a copy of the string with the leading and trailing\n characters removed. The *chars* argument is a string specifying the\n set of characters to be removed. If omitted or ``None``, the\n *chars* argument defaults to removing whitespace. The *chars*\n argument is not a prefix or suffix; rather, all combinations of its\n values are stripped:\n\n >>> \' spacious \'.strip()\n \'spacious\'\n >>> \'www.example.com\'.strip(\'cmowz.\')\n \'example\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.swapcase()\n\n Return a copy of the string with uppercase characters converted to\n lowercase and vice versa.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.title()\n\n Return a titlecased version of the string where words start with an\n uppercase character and the remaining characters are lowercase.\n\n The algorithm uses a simple language-independent definition of a\n word as groups of consecutive letters. 
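The splitting rules above are easy to misremember; a short sketch of the cases just described:

    assert '1,,2'.split(',') == ['1', '', '2']        # explicit sep keeps empties
    assert '1<>2<>3'.split('<>') == ['1', '2', '3']   # multi-character separator
    assert ' 1  2   3 '.split() == ['1', '2', '3']    # a None sep collapses runs
    assert '   '.split() == []                        # whitespace-only gives []
    assert ' 1 2 3 '.split(None, 1) == ['1', '2 3 ']  # maxsplit leaves the rest
    assert 'a\nb\r\nc'.splitlines() == ['a', 'b', 'c']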
The definition works in\n many contexts but it means that apostrophes in contractions and\n possessives form word boundaries, which may not be the desired\n result:\n\n >>> "they\'re bill\'s friends from the UK".title()\n "They\'Re Bill\'S Friends From The Uk"\n\n A workaround for apostrophes can be constructed using regular\n expressions:\n\n >>> import re\n >>> def titlecase(s):\n return re.sub(r"[A-Za-z]+(\'[A-Za-z]+)?",\n lambda mo: mo.group(0)[0].upper() +\n mo.group(0)[1:].lower(),\n s)\n\n >>> titlecase("they\'re bill\'s friends.")\n "They\'re Bill\'s Friends."\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.translate(table[, deletechars])\n\n Return a copy of the string where all characters occurring in the\n optional argument *deletechars* are removed, and the remaining\n characters have been mapped through the given translation table,\n which must be a string of length 256.\n\n You can use the ``maketrans()`` helper function in the ``string``\n module to create a translation table. For string objects, set the\n *table* argument to ``None`` for translations that only delete\n characters:\n\n >>> \'read this short text\'.translate(None, \'aeiou\')\n \'rd ths shrt txt\'\n\n New in version 2.6: Support for a ``None`` *table* argument.\n\n For Unicode objects, the ``translate()`` method does not accept the\n optional *deletechars* argument. Instead, it returns a copy of the\n *s* where all characters have been mapped through the given\n translation table which must be a mapping of Unicode ordinals to\n Unicode ordinals, Unicode strings or ``None``. Unmapped characters\n are left untouched. Characters mapped to ``None`` are deleted.\n Note, a more flexible approach is to create a custom character\n mapping codec using the ``codecs`` module (see ``encodings.cp1251``\n for an example).\n\nstr.upper()\n\n Return a copy of the string converted to uppercase.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.zfill(width)\n\n Return the numeric string left filled with zeros in a string of\n length *width*. A sign prefix is handled correctly. The original\n string is returned if *width* is less than ``len(s)``.\n\n New in version 2.2.2.\n\nThe following methods are present only on unicode objects:\n\nunicode.isnumeric()\n\n Return ``True`` if there are only numeric characters in S,\n ``False`` otherwise. Numeric characters include digit characters,\n and all characters that have the Unicode numeric value property,\n e.g. U+2155, VULGAR FRACTION ONE FIFTH.\n\nunicode.isdecimal()\n\n Return ``True`` if there are only decimal characters in S,\n ``False`` otherwise. Decimal characters include digit characters,\n and all characters that that can be used to form decimal-radix\n numbers, e.g. U+0660, ARABIC-INDIC DIGIT ZERO.\n\n\nString Formatting Operations\n============================\n\nString and Unicode objects have one unique built-in operation: the\n``%`` operator (modulo). This is also known as the string\n*formatting* or *interpolation* operator. Given ``format % values``\n(where *format* is a string or Unicode object), ``%`` conversion\nspecifications in *format* are replaced with zero or more elements of\n*values*. The effect is similar to the using ``sprintf()`` in the C\nlanguage. If *format* is a Unicode object, or if any of the objects\nbeing converted using the ``%s`` conversion are Unicode objects, the\nresult will also be a Unicode object.\n\nIf *format* requires a single argument, *values* may be a single non-\ntuple object. 
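A couple of hedged checks of ``zfill()`` from the list above and of the basic ``%`` behaviour just described (Python 2, where a unicode format yields a unicode result):

    assert '-3'.zfill(5) == '-0003'                  # the sign is handled before padding
    assert 'pi is %s' % 3.14 == 'pi is 3.14'         # a single non-tuple value
    assert '%s has %d items' % ('spam', 2) == 'spam has 2 items'
    assert isinstance(u'%s' % 'abc', unicode)        # unicode format -> unicode result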
[4] Otherwise, *values* must be a tuple with exactly\nthe number of items specified by the format string, or a single\nmapping object (for example, a dictionary).\n\nA conversion specifier contains two or more characters and has the\nfollowing components, which must occur in this order:\n\n1. The ``\'%\'`` character, which marks the start of the specifier.\n\n2. Mapping key (optional), consisting of a parenthesised sequence of\n characters (for example, ``(somename)``).\n\n3. Conversion flags (optional), which affect the result of some\n conversion types.\n\n4. Minimum field width (optional). If specified as an ``\'*\'``\n (asterisk), the actual width is read from the next element of the\n tuple in *values*, and the object to convert comes after the\n minimum field width and optional precision.\n\n5. Precision (optional), given as a ``\'.\'`` (dot) followed by the\n precision. If specified as ``\'*\'`` (an asterisk), the actual width\n is read from the next element of the tuple in *values*, and the\n value to convert comes after the precision.\n\n6. Length modifier (optional).\n\n7. Conversion type.\n\nWhen the right argument is a dictionary (or other mapping type), then\nthe formats in the string *must* include a parenthesised mapping key\ninto that dictionary inserted immediately after the ``\'%\'`` character.\nThe mapping key selects the value to be formatted from the mapping.\nFor example:\n\n>>> print \'%(language)s has %(#)03d quote types.\' % \\\n... {\'language\': "Python", "#": 2}\nPython has 002 quote types.\n\nIn this case no ``*`` specifiers may occur in a format (since they\nrequire a sequential parameter list).\n\nThe conversion flag characters are:\n\n+-----------+-----------------------------------------------------------------------+\n| Flag | Meaning |\n+===========+=======================================================================+\n| ``\'#\'`` | The value conversion will use the "alternate form" (where defined |\n| | below). |\n+-----------+-----------------------------------------------------------------------+\n| ``\'0\'`` | The conversion will be zero padded for numeric values. |\n+-----------+-----------------------------------------------------------------------+\n| ``\'-\'`` | The converted value is left adjusted (overrides the ``\'0\'`` |\n| | conversion if both are given). |\n+-----------+-----------------------------------------------------------------------+\n| ``\' \'`` | (a space) A blank should be left before a positive number (or empty |\n| | string) produced by a signed conversion. |\n+-----------+-----------------------------------------------------------------------+\n| ``\'+\'`` | A sign character (``\'+\'`` or ``\'-\'``) will precede the conversion |\n| | (overrides a "space" flag). |\n+-----------+-----------------------------------------------------------------------+\n\nA length modifier (``h``, ``l``, or ``L``) may be present, but is\nignored as it is not necessary for Python -- so e.g. ``%ld`` is\nidentical to ``%d``.\n\nThe conversion types are:\n\n+--------------+-------------------------------------------------------+---------+\n| Conversion | Meaning | Notes |\n+==============+=======================================================+=========+\n| ``\'d\'`` | Signed integer decimal. | |\n+--------------+-------------------------------------------------------+---------+\n| ``\'i\'`` | Signed integer decimal. | |\n+--------------+-------------------------------------------------------+---------+\n| ``\'o\'`` | Signed octal value. 
| (1) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'u\'`` | Obsolete type -- it is identical to ``\'d\'``. | (7) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'x\'`` | Signed hexadecimal (lowercase). | (2) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'X\'`` | Signed hexadecimal (uppercase). | (2) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'e\'`` | Floating point exponential format (lowercase). | (3) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'E\'`` | Floating point exponential format (uppercase). | (3) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'f\'`` | Floating point decimal format. | (3) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'F\'`` | Floating point decimal format. | (3) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'g\'`` | Floating point format. Uses lowercase exponential | (4) |\n| | format if exponent is less than -4 or not less than | |\n| | precision, decimal format otherwise. | |\n+--------------+-------------------------------------------------------+---------+\n| ``\'G\'`` | Floating point format. Uses uppercase exponential | (4) |\n| | format if exponent is less than -4 or not less than | |\n| | precision, decimal format otherwise. | |\n+--------------+-------------------------------------------------------+---------+\n| ``\'c\'`` | Single character (accepts integer or single character | |\n| | string). | |\n+--------------+-------------------------------------------------------+---------+\n| ``\'r\'`` | String (converts any Python object using ``repr()``). | (5) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'s\'`` | String (converts any Python object using ``str()``). | (6) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'%\'`` | No argument is converted, results in a ``\'%\'`` | |\n| | character in the result. | |\n+--------------+-------------------------------------------------------+---------+\n\nNotes:\n\n1. The alternate form causes a leading zero (``\'0\'``) to be inserted\n between left-hand padding and the formatting of the number if the\n leading character of the result is not already a zero.\n\n2. The alternate form causes a leading ``\'0x\'`` or ``\'0X\'`` (depending\n on whether the ``\'x\'`` or ``\'X\'`` format was used) to be inserted\n between left-hand padding and the formatting of the number if the\n leading character of the result is not already a zero.\n\n3. The alternate form causes the result to always contain a decimal\n point, even if no digits follow it.\n\n The precision determines the number of digits after the decimal\n point and defaults to 6.\n\n4. The alternate form causes the result to always contain a decimal\n point, and trailing zeroes are not removed as they would otherwise\n be.\n\n The precision determines the number of significant digits before\n and after the decimal point and defaults to 6.\n\n5. The ``%r`` conversion was added in Python 2.0.\n\n The precision determines the maximal number of characters used.\n\n6. 
If the object or format provided is a ``unicode`` string, the\n resulting string will also be ``unicode``.\n\n The precision determines the maximal number of characters used.\n\n7. See **PEP 237**.\n\nSince Python strings have an explicit length, ``%s`` conversions do\nnot assume that ``\'\\0\'`` is the end of the string.\n\nChanged in version 2.7: ``%f`` conversions for numbers whose absolute\nvalue is over 1e50 are no longer replaced by ``%g`` conversions.\n\nAdditional string operations are defined in standard modules\n``string`` and ``re``.\n\n\nXRange Type\n===========\n\nThe ``xrange`` type is an immutable sequence which is commonly used\nfor looping. The advantage of the ``xrange`` type is that an\n``xrange`` object will always take the same amount of memory, no\nmatter the size of the range it represents. There are no consistent\nperformance advantages.\n\nXRange objects have very little behavior: they only support indexing,\niteration, and the ``len()`` function.\n\n\nMutable Sequence Types\n======================\n\nList objects support additional operations that allow in-place\nmodification of the object. Other mutable sequence types (when added\nto the language) should also support these operations. Strings and\ntuples are immutable sequence types: such objects cannot be modified\nonce created. The following operations are defined on mutable sequence\ntypes (where *x* is an arbitrary object):\n\n+--------------------------------+----------------------------------+-----------------------+\n| Operation | Result | Notes |\n+================================+==================================+=======================+\n| ``s[i] = x`` | item *i* of *s* is replaced by | |\n| | *x* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s[i:j] = t`` | slice of *s* from *i* to *j* is | |\n| | replaced by the contents of the | |\n| | iterable *t* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``del s[i:j]`` | same as ``s[i:j] = []`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s[i:j:k] = t`` | the elements of ``s[i:j:k]`` are | (1) |\n| | replaced by those of *t* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``del s[i:j:k]`` | removes the elements of | |\n| | ``s[i:j:k]`` from the list | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.append(x)`` | same as ``s[len(s):len(s)] = | (2) |\n| | [x]`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.extend(x)`` | same as ``s[len(s):len(s)] = x`` | (3) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.count(x)`` | return number of *i*\'s for which | |\n| | ``s[i] == x`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.index(x[, i[, j]])`` | return smallest *k* such that | (4) |\n| | ``s[k] == x`` and ``i <= k < j`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.insert(i, x)`` | same as ``s[i:i] = [x]`` | (5) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.pop([i])`` | same as ``x = s[i]; del s[i]; | (6) |\n| | return x`` | 
|\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.remove(x)`` | same as ``del s[s.index(x)]`` | (4) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.reverse()`` | reverses the items of *s* in | (7) |\n| | place | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.sort([cmp[, key[, | sort the items of *s* in place | (7)(8)(9)(10) |\n| reverse]]])`` | | |\n+--------------------------------+----------------------------------+-----------------------+\n\nNotes:\n\n1. *t* must have the same length as the slice it is replacing.\n\n2. The C implementation of Python has historically accepted multiple\n parameters and implicitly joined them into a tuple; this no longer\n works in Python 2.0. Use of this misfeature has been deprecated\n since Python 1.4.\n\n3. *x* can be any iterable object.\n\n4. Raises ``ValueError`` when *x* is not found in *s*. When a negative\n index is passed as the second or third parameter to the ``index()``\n method, the list length is added, as for slice indices. If it is\n still negative, it is truncated to zero, as for slice indices.\n\n Changed in version 2.3: Previously, ``index()`` didn\'t have\n arguments for specifying start and stop positions.\n\n5. When a negative index is passed as the first parameter to the\n ``insert()`` method, the list length is added, as for slice\n indices. If it is still negative, it is truncated to zero, as for\n slice indices.\n\n Changed in version 2.3: Previously, all negative indices were\n truncated to zero.\n\n6. The ``pop()`` method is only supported by the list and array types.\n The optional argument *i* defaults to ``-1``, so that by default\n the last item is removed and returned.\n\n7. The ``sort()`` and ``reverse()`` methods modify the list in place\n for economy of space when sorting or reversing a large list. To\n remind you that they operate by side effect, they don\'t return the\n sorted or reversed list.\n\n8. The ``sort()`` method takes optional arguments for controlling the\n comparisons.\n\n *cmp* specifies a custom comparison function of two arguments (list\n items) which should return a negative, zero or positive number\n depending on whether the first argument is considered smaller than,\n equal to, or larger than the second argument: ``cmp=lambda x,y:\n cmp(x.lower(), y.lower())``. The default value is ``None``.\n\n *key* specifies a function of one argument that is used to extract\n a comparison key from each list element: ``key=str.lower``. The\n default value is ``None``.\n\n *reverse* is a boolean value. If set to ``True``, then the list\n elements are sorted as if each comparison were reversed.\n\n In general, the *key* and *reverse* conversion processes are much\n faster than specifying an equivalent *cmp* function. This is\n because *cmp* is called multiple times for each list element while\n *key* and *reverse* touch each element only once. Use\n ``functools.cmp_to_key()`` to convert an old-style *cmp* function\n to a *key* function.\n\n Changed in version 2.3: Support for ``None`` as an equivalent to\n omitting *cmp* was added.\n\n Changed in version 2.4: Support for *key* and *reverse* was added.\n\n9. Starting with Python 2.3, the ``sort()`` method is guaranteed to be\n stable. 
A sort is stable if it guarantees not to change the\n relative order of elements that compare equal --- this is helpful\n for sorting in multiple passes (for example, sort by department,\n then by salary grade).\n\n10. **CPython implementation detail:** While a list is being sorted,\n the effect of attempting to mutate, or even inspect, the list is\n undefined. The C implementation of Python 2.3 and newer makes the\n list appear empty for the duration, and raises ``ValueError`` if\n it can detect that the list has been mutated during a sort.\n', - 'typesseq-mutable': u"\nMutable Sequence Types\n**********************\n\nList objects support additional operations that allow in-place\nmodification of the object. Other mutable sequence types (when added\nto the language) should also support these operations. Strings and\ntuples are immutable sequence types: such objects cannot be modified\nonce created. The following operations are defined on mutable sequence\ntypes (where *x* is an arbitrary object):\n\n+--------------------------------+----------------------------------+-----------------------+\n| Operation | Result | Notes |\n+================================+==================================+=======================+\n| ``s[i] = x`` | item *i* of *s* is replaced by | |\n| | *x* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s[i:j] = t`` | slice of *s* from *i* to *j* is | |\n| | replaced by the contents of the | |\n| | iterable *t* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``del s[i:j]`` | same as ``s[i:j] = []`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s[i:j:k] = t`` | the elements of ``s[i:j:k]`` are | (1) |\n| | replaced by those of *t* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``del s[i:j:k]`` | removes the elements of | |\n| | ``s[i:j:k]`` from the list | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.append(x)`` | same as ``s[len(s):len(s)] = | (2) |\n| | [x]`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.extend(x)`` | same as ``s[len(s):len(s)] = x`` | (3) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.count(x)`` | return number of *i*'s for which | |\n| | ``s[i] == x`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.index(x[, i[, j]])`` | return smallest *k* such that | (4) |\n| | ``s[k] == x`` and ``i <= k < j`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.insert(i, x)`` | same as ``s[i:i] = [x]`` | (5) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.pop([i])`` | same as ``x = s[i]; del s[i]; | (6) |\n| | return x`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.remove(x)`` | same as ``del s[s.index(x)]`` | (4) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.reverse()`` | reverses the items of *s* in | (7) |\n| | place | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.sort([cmp[, key[, | sort the items of *s* in place | (7)(8)(9)(10) 
|\n| reverse]]])`` | | |\n+--------------------------------+----------------------------------+-----------------------+\n\nNotes:\n\n1. *t* must have the same length as the slice it is replacing.\n\n2. The C implementation of Python has historically accepted multiple\n parameters and implicitly joined them into a tuple; this no longer\n works in Python 2.0. Use of this misfeature has been deprecated\n since Python 1.4.\n\n3. *x* can be any iterable object.\n\n4. Raises ``ValueError`` when *x* is not found in *s*. When a negative\n index is passed as the second or third parameter to the ``index()``\n method, the list length is added, as for slice indices. If it is\n still negative, it is truncated to zero, as for slice indices.\n\n Changed in version 2.3: Previously, ``index()`` didn't have\n arguments for specifying start and stop positions.\n\n5. When a negative index is passed as the first parameter to the\n ``insert()`` method, the list length is added, as for slice\n indices. If it is still negative, it is truncated to zero, as for\n slice indices.\n\n Changed in version 2.3: Previously, all negative indices were\n truncated to zero.\n\n6. The ``pop()`` method is only supported by the list and array types.\n The optional argument *i* defaults to ``-1``, so that by default\n the last item is removed and returned.\n\n7. The ``sort()`` and ``reverse()`` methods modify the list in place\n for economy of space when sorting or reversing a large list. To\n remind you that they operate by side effect, they don't return the\n sorted or reversed list.\n\n8. The ``sort()`` method takes optional arguments for controlling the\n comparisons.\n\n *cmp* specifies a custom comparison function of two arguments (list\n items) which should return a negative, zero or positive number\n depending on whether the first argument is considered smaller than,\n equal to, or larger than the second argument: ``cmp=lambda x,y:\n cmp(x.lower(), y.lower())``. The default value is ``None``.\n\n *key* specifies a function of one argument that is used to extract\n a comparison key from each list element: ``key=str.lower``. The\n default value is ``None``.\n\n *reverse* is a boolean value. If set to ``True``, then the list\n elements are sorted as if each comparison were reversed.\n\n In general, the *key* and *reverse* conversion processes are much\n faster than specifying an equivalent *cmp* function. This is\n because *cmp* is called multiple times for each list element while\n *key* and *reverse* touch each element only once. Use\n ``functools.cmp_to_key()`` to convert an old-style *cmp* function\n to a *key* function.\n\n Changed in version 2.3: Support for ``None`` as an equivalent to\n omitting *cmp* was added.\n\n Changed in version 2.4: Support for *key* and *reverse* was added.\n\n9. Starting with Python 2.3, the ``sort()`` method is guaranteed to be\n stable. A sort is stable if it guarantees not to change the\n relative order of elements that compare equal --- this is helpful\n for sorting in multiple passes (for example, sort by department,\n then by salary grade).\n\n10. **CPython implementation detail:** While a list is being sorted,\n the effect of attempting to mutate, or even inspect, the list is\n undefined. 
The C implementation of Python 2.3 and newer makes the\n list appear empty for the duration, and raises ``ValueError`` if\n it can detect that the list has been mutated during a sort.\n", + 'typesseq': u'\nSequence Types --- ``str``, ``unicode``, ``list``, ``tuple``, ``bytearray``, ``buffer``, ``xrange``\n***************************************************************************************************\n\nThere are seven sequence types: strings, Unicode strings, lists,\ntuples, bytearrays, buffers, and xrange objects.\n\nFor other containers see the built in ``dict`` and ``set`` classes,\nand the ``collections`` module.\n\nString literals are written in single or double quotes: ``\'xyzzy\'``,\n``"frobozz"``. See *String literals* for more about string literals.\nUnicode strings are much like strings, but are specified in the syntax\nusing a preceding ``\'u\'`` character: ``u\'abc\'``, ``u"def"``. In\naddition to the functionality described here, there are also string-\nspecific methods described in the *String Methods* section. Lists are\nconstructed with square brackets, separating items with commas: ``[a,\nb, c]``. Tuples are constructed by the comma operator (not within\nsquare brackets), with or without enclosing parentheses, but an empty\ntuple must have the enclosing parentheses, such as ``a, b, c`` or\n``()``. A single item tuple must have a trailing comma, such as\n``(d,)``.\n\nBytearray objects are created with the built-in function\n``bytearray()``.\n\nBuffer objects are not directly supported by Python syntax, but can be\ncreated by calling the built-in function ``buffer()``. They don\'t\nsupport concatenation or repetition.\n\nObjects of type xrange are similar to buffers in that there is no\nspecific syntax to create them, but they are created using the\n``xrange()`` function. They don\'t support slicing, concatenation or\nrepetition, and using ``in``, ``not in``, ``min()`` or ``max()`` on\nthem is inefficient.\n\nMost sequence types support the following operations. The ``in`` and\n``not in`` operations have the same priorities as the comparison\noperations. The ``+`` and ``*`` operations have the same priority as\nthe corresponding numeric operations. [3] Additional methods are\nprovided for *Mutable Sequence Types*.\n\nThis table lists the sequence operations sorted in ascending priority\n(operations in the same box have the same priority). 
In the table,\n*s* and *t* are sequences of the same type; *n*, *i* and *j* are\nintegers:\n\n+--------------------+----------------------------------+------------+\n| Operation | Result | Notes |\n+====================+==================================+============+\n| ``x in s`` | ``True`` if an item of *s* is | (1) |\n| | equal to *x*, else ``False`` | |\n+--------------------+----------------------------------+------------+\n| ``x not in s`` | ``False`` if an item of *s* is | (1) |\n| | equal to *x*, else ``True`` | |\n+--------------------+----------------------------------+------------+\n| ``s + t`` | the concatenation of *s* and *t* | (6) |\n+--------------------+----------------------------------+------------+\n| ``s * n, n * s`` | *n* shallow copies of *s* | (2) |\n| | concatenated | |\n+--------------------+----------------------------------+------------+\n| ``s[i]`` | *i*\'th item of *s*, origin 0 | (3) |\n+--------------------+----------------------------------+------------+\n| ``s[i:j]`` | slice of *s* from *i* to *j* | (3)(4) |\n+--------------------+----------------------------------+------------+\n| ``s[i:j:k]`` | slice of *s* from *i* to *j* | (3)(5) |\n| | with step *k* | |\n+--------------------+----------------------------------+------------+\n| ``len(s)`` | length of *s* | |\n+--------------------+----------------------------------+------------+\n| ``min(s)`` | smallest item of *s* | |\n+--------------------+----------------------------------+------------+\n| ``max(s)`` | largest item of *s* | |\n+--------------------+----------------------------------+------------+\n| ``s.index(i)`` | index of the first occurence of | |\n| | *i* in *s* | |\n+--------------------+----------------------------------+------------+\n| ``s.count(i)`` | total number of occurences of | |\n| | *i* in *s* | |\n+--------------------+----------------------------------+------------+\n\nSequence types also support comparisons. In particular, tuples and\nlists are compared lexicographically by comparing corresponding\nelements. This means that to compare equal, every element must compare\nequal and the two sequences must be of the same type and have the same\nlength. (For full details see *Comparisons* in the language\nreference.)\n\nNotes:\n\n1. When *s* is a string or Unicode string object the ``in`` and ``not\n in`` operations act like a substring test. In Python versions\n before 2.3, *x* had to be a string of length 1. In Python 2.3 and\n beyond, *x* may be a string of any length.\n\n2. Values of *n* less than ``0`` are treated as ``0`` (which yields an\n empty sequence of the same type as *s*). Note also that the copies\n are shallow; nested structures are not copied. This often haunts\n new Python programmers; consider:\n\n >>> lists = [[]] * 3\n >>> lists\n [[], [], []]\n >>> lists[0].append(3)\n >>> lists\n [[3], [3], [3]]\n\n What has happened is that ``[[]]`` is a one-element list containing\n an empty list, so all three elements of ``[[]] * 3`` are (pointers\n to) this single empty list. Modifying any of the elements of\n ``lists`` modifies this single list. You can create a list of\n different lists this way:\n\n >>> lists = [[] for i in range(3)]\n >>> lists[0].append(3)\n >>> lists[1].append(5)\n >>> lists[2].append(7)\n >>> lists\n [[3], [5], [7]]\n\n3. If *i* or *j* is negative, the index is relative to the end of the\n string: ``len(s) + i`` or ``len(s) + j`` is substituted. But note\n that ``-0`` is still ``0``.\n\n4. 
The slice of *s* from *i* to *j* is defined as the sequence of\n items with index *k* such that ``i <= k < j``. If *i* or *j* is\n greater than ``len(s)``, use ``len(s)``. If *i* is omitted or\n ``None``, use ``0``. If *j* is omitted or ``None``, use\n ``len(s)``. If *i* is greater than or equal to *j*, the slice is\n empty.\n\n5. The slice of *s* from *i* to *j* with step *k* is defined as the\n sequence of items with index ``x = i + n*k`` such that ``0 <= n <\n (j-i)/k``. In other words, the indices are ``i``, ``i+k``,\n ``i+2*k``, ``i+3*k`` and so on, stopping when *j* is reached (but\n never including *j*). If *i* or *j* is greater than ``len(s)``,\n use ``len(s)``. If *i* or *j* are omitted or ``None``, they become\n "end" values (which end depends on the sign of *k*). Note, *k*\n cannot be zero. If *k* is ``None``, it is treated like ``1``.\n\n6. **CPython implementation detail:** If *s* and *t* are both strings,\n some Python implementations such as CPython can usually perform an\n in-place optimization for assignments of the form ``s = s + t`` or\n ``s += t``. When applicable, this optimization makes quadratic\n run-time much less likely. This optimization is both version and\n implementation dependent. For performance sensitive code, it is\n preferable to use the ``str.join()`` method which assures\n consistent linear concatenation performance across versions and\n implementations.\n\n Changed in version 2.4: Formerly, string concatenation never\n occurred in-place.\n\n\nString Methods\n==============\n\nBelow are listed the string methods which both 8-bit strings and\nUnicode objects support. Some of them are also available on\n``bytearray`` objects.\n\nIn addition, Python\'s strings support the sequence type methods\ndescribed in the *Sequence Types --- str, unicode, list, tuple,\nbytearray, buffer, xrange* section. To output formatted strings use\ntemplate strings or the ``%`` operator described in the *String\nFormatting Operations* section. Also, see the ``re`` module for string\nfunctions based on regular expressions.\n\nstr.capitalize()\n\n Return a copy of the string with its first character capitalized\n and the rest lowercased.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.center(width[, fillchar])\n\n Return centered in a string of length *width*. Padding is done\n using the specified *fillchar* (default is a space).\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.count(sub[, start[, end]])\n\n Return the number of non-overlapping occurrences of substring *sub*\n in the range [*start*, *end*]. Optional arguments *start* and\n *end* are interpreted as in slice notation.\n\nstr.decode([encoding[, errors]])\n\n Decodes the string using the codec registered for *encoding*.\n *encoding* defaults to the default string encoding. *errors* may\n be given to set a different error handling scheme. The default is\n ``\'strict\'``, meaning that encoding errors raise ``UnicodeError``.\n Other possible values are ``\'ignore\'``, ``\'replace\'`` and any other\n name registered via ``codecs.register_error()``, see section *Codec\n Base Classes*.\n\n New in version 2.2.\n\n Changed in version 2.3: Support for other error handling schemes\n added.\n\n Changed in version 2.7: Support for keyword arguments added.\n\nstr.encode([encoding[, errors]])\n\n Return an encoded version of the string. Default encoding is the\n current default string encoding. *errors* may be given to set a\n different error handling scheme. 
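A round-trip sketch of the ``decode()``/``encode()`` pair documented here, assuming the CPython 2.7 defaults and an arbitrary UTF-8 sample string:

    >>> 'caf\xc3\xa9'.decode('utf-8')
    u'caf\xe9'
    >>> u'caf\xe9'.encode('utf-8')
    'caf\xc3\xa9'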
The default for *errors* is\n ``\'strict\'``, meaning that encoding errors raise a\n ``UnicodeError``. Other possible values are ``\'ignore\'``,\n ``\'replace\'``, ``\'xmlcharrefreplace\'``, ``\'backslashreplace\'`` and\n any other name registered via ``codecs.register_error()``, see\n section *Codec Base Classes*. For a list of possible encodings, see\n section *Standard Encodings*.\n\n New in version 2.0.\n\n Changed in version 2.3: Support for ``\'xmlcharrefreplace\'`` and\n ``\'backslashreplace\'`` and other error handling schemes added.\n\n Changed in version 2.7: Support for keyword arguments added.\n\nstr.endswith(suffix[, start[, end]])\n\n Return ``True`` if the string ends with the specified *suffix*,\n otherwise return ``False``. *suffix* can also be a tuple of\n suffixes to look for. With optional *start*, test beginning at\n that position. With optional *end*, stop comparing at that\n position.\n\n Changed in version 2.5: Accept tuples as *suffix*.\n\nstr.expandtabs([tabsize])\n\n Return a copy of the string where all tab characters are replaced\n by one or more spaces, depending on the current column and the\n given tab size. The column number is reset to zero after each\n newline occurring in the string. If *tabsize* is not given, a tab\n size of ``8`` characters is assumed. This doesn\'t understand other\n non-printing characters or escape sequences.\n\nstr.find(sub[, start[, end]])\n\n Return the lowest index in the string where substring *sub* is\n found, such that *sub* is contained in the slice ``s[start:end]``.\n Optional arguments *start* and *end* are interpreted as in slice\n notation. Return ``-1`` if *sub* is not found.\n\n Note: The ``find()`` method should be used only if you need to know the\n position of *sub*. To check if *sub* is a substring or not, use\n the ``in`` operator:\n\n >>> \'Py\' in \'Python\'\n True\n\nstr.format(*args, **kwargs)\n\n Perform a string formatting operation. The string on which this\n method is called can contain literal text or replacement fields\n delimited by braces ``{}``. Each replacement field contains either\n the numeric index of a positional argument, or the name of a\n keyword argument. 
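As a small sketch of the replacement-field syntax just described, mixing a positional index with a keyword name (the sample values are arbitrary):

    >>> "{0} has {n} quote types.".format("Python", n=2)
    'Python has 2 quote types.'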
Returns a copy of the string where each\n replacement field is replaced with the string value of the\n corresponding argument.\n\n >>> "The sum of 1 + 2 is {0}".format(1+2)\n \'The sum of 1 + 2 is 3\'\n\n See *Format String Syntax* for a description of the various\n formatting options that can be specified in format strings.\n\n This method of string formatting is the new standard in Python 3.0,\n and should be preferred to the ``%`` formatting described in\n *String Formatting Operations* in new code.\n\n New in version 2.6.\n\nstr.index(sub[, start[, end]])\n\n Like ``find()``, but raise ``ValueError`` when the substring is not\n found.\n\nstr.isalnum()\n\n Return true if all characters in the string are alphanumeric and\n there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isalpha()\n\n Return true if all characters in the string are alphabetic and\n there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isdigit()\n\n Return true if all characters in the string are digits and there is\n at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.islower()\n\n Return true if all cased characters in the string are lowercase and\n there is at least one cased character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isspace()\n\n Return true if there are only whitespace characters in the string\n and there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.istitle()\n\n Return true if the string is a titlecased string and there is at\n least one character, for example uppercase characters may only\n follow uncased characters and lowercase characters only cased ones.\n Return false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isupper()\n\n Return true if all cased characters in the string are uppercase and\n there is at least one cased character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.join(iterable)\n\n Return a string which is the concatenation of the strings in the\n *iterable* *iterable*. The separator between elements is the\n string providing this method.\n\nstr.ljust(width[, fillchar])\n\n Return the string left justified in a string of length *width*.\n Padding is done using the specified *fillchar* (default is a\n space). The original string is returned if *width* is less than\n ``len(s)``.\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.lower()\n\n Return a copy of the string converted to lowercase.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.lstrip([chars])\n\n Return a copy of the string with leading characters removed. The\n *chars* argument is a string specifying the set of characters to be\n removed. If omitted or ``None``, the *chars* argument defaults to\n removing whitespace. The *chars* argument is not a prefix; rather,\n all combinations of its values are stripped:\n\n >>> \' spacious \'.lstrip()\n \'spacious \'\n >>> \'www.example.com\'.lstrip(\'cmowz.\')\n \'example.com\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.partition(sep)\n\n Split the string at the first occurrence of *sep*, and return a\n 3-tuple containing the part before the separator, the separator\n itself, and the part after the separator. 
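For example, the 3-tuple returned by ``partition()`` for an arbitrary sample string would be expected to look like this:

    >>> 'user=guido'.partition('=')
    ('user', '=', 'guido')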
If the separator is not\n found, return a 3-tuple containing the string itself, followed by\n two empty strings.\n\n New in version 2.5.\n\nstr.replace(old, new[, count])\n\n Return a copy of the string with all occurrences of substring *old*\n replaced by *new*. If the optional argument *count* is given, only\n the first *count* occurrences are replaced.\n\nstr.rfind(sub[, start[, end]])\n\n Return the highest index in the string where substring *sub* is\n found, such that *sub* is contained within ``s[start:end]``.\n Optional arguments *start* and *end* are interpreted as in slice\n notation. Return ``-1`` on failure.\n\nstr.rindex(sub[, start[, end]])\n\n Like ``rfind()`` but raises ``ValueError`` when the substring *sub*\n is not found.\n\nstr.rjust(width[, fillchar])\n\n Return the string right justified in a string of length *width*.\n Padding is done using the specified *fillchar* (default is a\n space). The original string is returned if *width* is less than\n ``len(s)``.\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.rpartition(sep)\n\n Split the string at the last occurrence of *sep*, and return a\n 3-tuple containing the part before the separator, the separator\n itself, and the part after the separator. If the separator is not\n found, return a 3-tuple containing two empty strings, followed by\n the string itself.\n\n New in version 2.5.\n\nstr.rsplit([sep[, maxsplit]])\n\n Return a list of the words in the string, using *sep* as the\n delimiter string. If *maxsplit* is given, at most *maxsplit* splits\n are done, the *rightmost* ones. If *sep* is not specified or\n ``None``, any whitespace string is a separator. Except for\n splitting from the right, ``rsplit()`` behaves like ``split()``\n which is described in detail below.\n\n New in version 2.4.\n\nstr.rstrip([chars])\n\n Return a copy of the string with trailing characters removed. The\n *chars* argument is a string specifying the set of characters to be\n removed. If omitted or ``None``, the *chars* argument defaults to\n removing whitespace. The *chars* argument is not a suffix; rather,\n all combinations of its values are stripped:\n\n >>> \' spacious \'.rstrip()\n \' spacious\'\n >>> \'mississippi\'.rstrip(\'ipz\')\n \'mississ\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.split([sep[, maxsplit]])\n\n Return a list of the words in the string, using *sep* as the\n delimiter string. If *maxsplit* is given, at most *maxsplit*\n splits are done (thus, the list will have at most ``maxsplit+1``\n elements). If *maxsplit* is not specified, then there is no limit\n on the number of splits (all possible splits are made).\n\n If *sep* is given, consecutive delimiters are not grouped together\n and are deemed to delimit empty strings (for example,\n ``\'1,,2\'.split(\',\')`` returns ``[\'1\', \'\', \'2\']``). The *sep*\n argument may consist of multiple characters (for example,\n ``\'1<>2<>3\'.split(\'<>\')`` returns ``[\'1\', \'2\', \'3\']``). Splitting\n an empty string with a specified separator returns ``[\'\']``.\n\n If *sep* is not specified or is ``None``, a different splitting\n algorithm is applied: runs of consecutive whitespace are regarded\n as a single separator, and the result will contain no empty strings\n at the start or end if the string has leading or trailing\n whitespace. 
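A short sketch contrasting ``split()`` and ``rsplit()`` with a *maxsplit* of 1, on an arbitrary sample string:

    >>> 'a.b.c'.split('.', 1)
    ['a', 'b.c']
    >>> 'a.b.c'.rsplit('.', 1)
    ['a.b', 'c']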
Consequently, splitting an empty string or a string\n consisting of just whitespace with a ``None`` separator returns\n ``[]``.\n\n For example, ``\' 1 2 3 \'.split()`` returns ``[\'1\', \'2\', \'3\']``,\n and ``\' 1 2 3 \'.split(None, 1)`` returns ``[\'1\', \'2 3 \']``.\n\nstr.splitlines([keepends])\n\n Return a list of the lines in the string, breaking at line\n boundaries. Line breaks are not included in the resulting list\n unless *keepends* is given and true.\n\nstr.startswith(prefix[, start[, end]])\n\n Return ``True`` if string starts with the *prefix*, otherwise\n return ``False``. *prefix* can also be a tuple of prefixes to look\n for. With optional *start*, test string beginning at that\n position. With optional *end*, stop comparing string at that\n position.\n\n Changed in version 2.5: Accept tuples as *prefix*.\n\nstr.strip([chars])\n\n Return a copy of the string with the leading and trailing\n characters removed. The *chars* argument is a string specifying the\n set of characters to be removed. If omitted or ``None``, the\n *chars* argument defaults to removing whitespace. The *chars*\n argument is not a prefix or suffix; rather, all combinations of its\n values are stripped:\n\n >>> \' spacious \'.strip()\n \'spacious\'\n >>> \'www.example.com\'.strip(\'cmowz.\')\n \'example\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.swapcase()\n\n Return a copy of the string with uppercase characters converted to\n lowercase and vice versa.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.title()\n\n Return a titlecased version of the string where words start with an\n uppercase character and the remaining characters are lowercase.\n\n The algorithm uses a simple language-independent definition of a\n word as groups of consecutive letters. The definition works in\n many contexts but it means that apostrophes in contractions and\n possessives form word boundaries, which may not be the desired\n result:\n\n >>> "they\'re bill\'s friends from the UK".title()\n "They\'Re Bill\'S Friends From The Uk"\n\n A workaround for apostrophes can be constructed using regular\n expressions:\n\n >>> import re\n >>> def titlecase(s):\n return re.sub(r"[A-Za-z]+(\'[A-Za-z]+)?",\n lambda mo: mo.group(0)[0].upper() +\n mo.group(0)[1:].lower(),\n s)\n\n >>> titlecase("they\'re bill\'s friends.")\n "They\'re Bill\'s Friends."\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.translate(table[, deletechars])\n\n Return a copy of the string where all characters occurring in the\n optional argument *deletechars* are removed, and the remaining\n characters have been mapped through the given translation table,\n which must be a string of length 256.\n\n You can use the ``maketrans()`` helper function in the ``string``\n module to create a translation table. For string objects, set the\n *table* argument to ``None`` for translations that only delete\n characters:\n\n >>> \'read this short text\'.translate(None, \'aeiou\')\n \'rd ths shrt txt\'\n\n New in version 2.6: Support for a ``None`` *table* argument.\n\n For Unicode objects, the ``translate()`` method does not accept the\n optional *deletechars* argument. Instead, it returns a copy of the\n *s* where all characters have been mapped through the given\n translation table which must be a mapping of Unicode ordinals to\n Unicode ordinals, Unicode strings or ``None``. Unmapped characters\n are left untouched. 
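A minimal sketch of the Unicode form of ``translate()`` described here, with an arbitrary one-entry mapping of ordinals (unmapped characters pass through unchanged):

    >>> u'abc'.translate({ord(u'a'): u'A'})
    u'Abc'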
Characters mapped to ``None`` are deleted.\n Note, a more flexible approach is to create a custom character\n mapping codec using the ``codecs`` module (see ``encodings.cp1251``\n for an example).\n\nstr.upper()\n\n Return a copy of the string converted to uppercase.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.zfill(width)\n\n Return the numeric string left filled with zeros in a string of\n length *width*. A sign prefix is handled correctly. The original\n string is returned if *width* is less than ``len(s)``.\n\n New in version 2.2.2.\n\nThe following methods are present only on unicode objects:\n\nunicode.isnumeric()\n\n Return ``True`` if there are only numeric characters in S,\n ``False`` otherwise. Numeric characters include digit characters,\n and all characters that have the Unicode numeric value property,\n e.g. U+2155, VULGAR FRACTION ONE FIFTH.\n\nunicode.isdecimal()\n\n Return ``True`` if there are only decimal characters in S,\n ``False`` otherwise. Decimal characters include digit characters,\n and all characters that that can be used to form decimal-radix\n numbers, e.g. U+0660, ARABIC-INDIC DIGIT ZERO.\n\n\nString Formatting Operations\n============================\n\nString and Unicode objects have one unique built-in operation: the\n``%`` operator (modulo). This is also known as the string\n*formatting* or *interpolation* operator. Given ``format % values``\n(where *format* is a string or Unicode object), ``%`` conversion\nspecifications in *format* are replaced with zero or more elements of\n*values*. The effect is similar to the using ``sprintf()`` in the C\nlanguage. If *format* is a Unicode object, or if any of the objects\nbeing converted using the ``%s`` conversion are Unicode objects, the\nresult will also be a Unicode object.\n\nIf *format* requires a single argument, *values* may be a single non-\ntuple object. [4] Otherwise, *values* must be a tuple with exactly\nthe number of items specified by the format string, or a single\nmapping object (for example, a dictionary).\n\nA conversion specifier contains two or more characters and has the\nfollowing components, which must occur in this order:\n\n1. The ``\'%\'`` character, which marks the start of the specifier.\n\n2. Mapping key (optional), consisting of a parenthesised sequence of\n characters (for example, ``(somename)``).\n\n3. Conversion flags (optional), which affect the result of some\n conversion types.\n\n4. Minimum field width (optional). If specified as an ``\'*\'``\n (asterisk), the actual width is read from the next element of the\n tuple in *values*, and the object to convert comes after the\n minimum field width and optional precision.\n\n5. Precision (optional), given as a ``\'.\'`` (dot) followed by the\n precision. If specified as ``\'*\'`` (an asterisk), the actual width\n is read from the next element of the tuple in *values*, and the\n value to convert comes after the precision.\n\n6. Length modifier (optional).\n\n7. Conversion type.\n\nWhen the right argument is a dictionary (or other mapping type), then\nthe formats in the string *must* include a parenthesised mapping key\ninto that dictionary inserted immediately after the ``\'%\'`` character.\nThe mapping key selects the value to be formatted from the mapping.\nFor example:\n\n>>> print \'%(language)s has %(number)03d quote types.\' % \\\n... 
{"language": "Python", "number": 2}\nPython has 002 quote types.\n\nIn this case no ``*`` specifiers may occur in a format (since they\nrequire a sequential parameter list).\n\nThe conversion flag characters are:\n\n+-----------+-----------------------------------------------------------------------+\n| Flag | Meaning |\n+===========+=======================================================================+\n| ``\'#\'`` | The value conversion will use the "alternate form" (where defined |\n| | below). |\n+-----------+-----------------------------------------------------------------------+\n| ``\'0\'`` | The conversion will be zero padded for numeric values. |\n+-----------+-----------------------------------------------------------------------+\n| ``\'-\'`` | The converted value is left adjusted (overrides the ``\'0\'`` |\n| | conversion if both are given). |\n+-----------+-----------------------------------------------------------------------+\n| ``\' \'`` | (a space) A blank should be left before a positive number (or empty |\n| | string) produced by a signed conversion. |\n+-----------+-----------------------------------------------------------------------+\n| ``\'+\'`` | A sign character (``\'+\'`` or ``\'-\'``) will precede the conversion |\n| | (overrides a "space" flag). |\n+-----------+-----------------------------------------------------------------------+\n\nA length modifier (``h``, ``l``, or ``L``) may be present, but is\nignored as it is not necessary for Python -- so e.g. ``%ld`` is\nidentical to ``%d``.\n\nThe conversion types are:\n\n+--------------+-------------------------------------------------------+---------+\n| Conversion | Meaning | Notes |\n+==============+=======================================================+=========+\n| ``\'d\'`` | Signed integer decimal. | |\n+--------------+-------------------------------------------------------+---------+\n| ``\'i\'`` | Signed integer decimal. | |\n+--------------+-------------------------------------------------------+---------+\n| ``\'o\'`` | Signed octal value. | (1) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'u\'`` | Obsolete type -- it is identical to ``\'d\'``. | (7) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'x\'`` | Signed hexadecimal (lowercase). | (2) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'X\'`` | Signed hexadecimal (uppercase). | (2) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'e\'`` | Floating point exponential format (lowercase). | (3) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'E\'`` | Floating point exponential format (uppercase). | (3) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'f\'`` | Floating point decimal format. | (3) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'F\'`` | Floating point decimal format. | (3) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'g\'`` | Floating point format. Uses lowercase exponential | (4) |\n| | format if exponent is less than -4 or not less than | |\n| | precision, decimal format otherwise. | |\n+--------------+-------------------------------------------------------+---------+\n| ``\'G\'`` | Floating point format. 
Uses uppercase exponential | (4) |\n| | format if exponent is less than -4 or not less than | |\n| | precision, decimal format otherwise. | |\n+--------------+-------------------------------------------------------+---------+\n| ``\'c\'`` | Single character (accepts integer or single character | |\n| | string). | |\n+--------------+-------------------------------------------------------+---------+\n| ``\'r\'`` | String (converts any Python object using ``repr()``). | (5) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'s\'`` | String (converts any Python object using ``str()``). | (6) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'%\'`` | No argument is converted, results in a ``\'%\'`` | |\n| | character in the result. | |\n+--------------+-------------------------------------------------------+---------+\n\nNotes:\n\n1. The alternate form causes a leading zero (``\'0\'``) to be inserted\n between left-hand padding and the formatting of the number if the\n leading character of the result is not already a zero.\n\n2. The alternate form causes a leading ``\'0x\'`` or ``\'0X\'`` (depending\n on whether the ``\'x\'`` or ``\'X\'`` format was used) to be inserted\n between left-hand padding and the formatting of the number if the\n leading character of the result is not already a zero.\n\n3. The alternate form causes the result to always contain a decimal\n point, even if no digits follow it.\n\n The precision determines the number of digits after the decimal\n point and defaults to 6.\n\n4. The alternate form causes the result to always contain a decimal\n point, and trailing zeroes are not removed as they would otherwise\n be.\n\n The precision determines the number of significant digits before\n and after the decimal point and defaults to 6.\n\n5. The ``%r`` conversion was added in Python 2.0.\n\n The precision determines the maximal number of characters used.\n\n6. If the object or format provided is a ``unicode`` string, the\n resulting string will also be ``unicode``.\n\n The precision determines the maximal number of characters used.\n\n7. See **PEP 237**.\n\nSince Python strings have an explicit length, ``%s`` conversions do\nnot assume that ``\'\\0\'`` is the end of the string.\n\nChanged in version 2.7: ``%f`` conversions for numbers whose absolute\nvalue is over 1e50 are no longer replaced by ``%g`` conversions.\n\nAdditional string operations are defined in standard modules\n``string`` and ``re``.\n\n\nXRange Type\n===========\n\nThe ``xrange`` type is an immutable sequence which is commonly used\nfor looping. The advantage of the ``xrange`` type is that an\n``xrange`` object will always take the same amount of memory, no\nmatter the size of the range it represents. There are no consistent\nperformance advantages.\n\nXRange objects have very little behavior: they only support indexing,\niteration, and the ``len()`` function.\n\n\nMutable Sequence Types\n======================\n\nList and ``bytearray`` objects support additional operations that\nallow in-place modification of the object. Other mutable sequence\ntypes (when added to the language) should also support these\noperations. Strings and tuples are immutable sequence types: such\nobjects cannot be modified once created. 
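A quick sketch of that distinction; the exact ``TypeError`` message shown is what CPython 2.7 is expected to print:

    >>> items = [1, 2, 3]
    >>> items[0] = 99
    >>> items
    [99, 2, 3]
    >>> (1, 2, 3)[0] = 99
    Traceback (most recent call last):
      ...
    TypeError: 'tuple' object does not support item assignment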
The following operations are\ndefined on mutable sequence types (where *x* is an arbitrary object):\n\n+--------------------------------+----------------------------------+-----------------------+\n| Operation | Result | Notes |\n+================================+==================================+=======================+\n| ``s[i] = x`` | item *i* of *s* is replaced by | |\n| | *x* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s[i:j] = t`` | slice of *s* from *i* to *j* is | |\n| | replaced by the contents of the | |\n| | iterable *t* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``del s[i:j]`` | same as ``s[i:j] = []`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s[i:j:k] = t`` | the elements of ``s[i:j:k]`` are | (1) |\n| | replaced by those of *t* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``del s[i:j:k]`` | removes the elements of | |\n| | ``s[i:j:k]`` from the list | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.append(x)`` | same as ``s[len(s):len(s)] = | (2) |\n| | [x]`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.extend(x)`` | same as ``s[len(s):len(s)] = x`` | (3) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.count(x)`` | return number of *i*\'s for which | |\n| | ``s[i] == x`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.index(x[, i[, j]])`` | return smallest *k* such that | (4) |\n| | ``s[k] == x`` and ``i <= k < j`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.insert(i, x)`` | same as ``s[i:i] = [x]`` | (5) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.pop([i])`` | same as ``x = s[i]; del s[i]; | (6) |\n| | return x`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.remove(x)`` | same as ``del s[s.index(x)]`` | (4) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.reverse()`` | reverses the items of *s* in | (7) |\n| | place | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.sort([cmp[, key[, | sort the items of *s* in place | (7)(8)(9)(10) |\n| reverse]]])`` | | |\n+--------------------------------+----------------------------------+-----------------------+\n\nNotes:\n\n1. *t* must have the same length as the slice it is replacing.\n\n2. The C implementation of Python has historically accepted multiple\n parameters and implicitly joined them into a tuple; this no longer\n works in Python 2.0. Use of this misfeature has been deprecated\n since Python 1.4.\n\n3. *x* can be any iterable object.\n\n4. Raises ``ValueError`` when *x* is not found in *s*. When a negative\n index is passed as the second or third parameter to the ``index()``\n method, the list length is added, as for slice indices. If it is\n still negative, it is truncated to zero, as for slice indices.\n\n Changed in version 2.3: Previously, ``index()`` didn\'t have\n arguments for specifying start and stop positions.\n\n5. 
When a negative index is passed as the first parameter to the\n ``insert()`` method, the list length is added, as for slice\n indices. If it is still negative, it is truncated to zero, as for\n slice indices.\n\n Changed in version 2.3: Previously, all negative indices were\n truncated to zero.\n\n6. The ``pop()`` method is only supported by the list and array types.\n The optional argument *i* defaults to ``-1``, so that by default\n the last item is removed and returned.\n\n7. The ``sort()`` and ``reverse()`` methods modify the list in place\n for economy of space when sorting or reversing a large list. To\n remind you that they operate by side effect, they don\'t return the\n sorted or reversed list.\n\n8. The ``sort()`` method takes optional arguments for controlling the\n comparisons.\n\n *cmp* specifies a custom comparison function of two arguments (list\n items) which should return a negative, zero or positive number\n depending on whether the first argument is considered smaller than,\n equal to, or larger than the second argument: ``cmp=lambda x,y:\n cmp(x.lower(), y.lower())``. The default value is ``None``.\n\n *key* specifies a function of one argument that is used to extract\n a comparison key from each list element: ``key=str.lower``. The\n default value is ``None``.\n\n *reverse* is a boolean value. If set to ``True``, then the list\n elements are sorted as if each comparison were reversed.\n\n In general, the *key* and *reverse* conversion processes are much\n faster than specifying an equivalent *cmp* function. This is\n because *cmp* is called multiple times for each list element while\n *key* and *reverse* touch each element only once. Use\n ``functools.cmp_to_key()`` to convert an old-style *cmp* function\n to a *key* function.\n\n Changed in version 2.3: Support for ``None`` as an equivalent to\n omitting *cmp* was added.\n\n Changed in version 2.4: Support for *key* and *reverse* was added.\n\n9. Starting with Python 2.3, the ``sort()`` method is guaranteed to be\n stable. A sort is stable if it guarantees not to change the\n relative order of elements that compare equal --- this is helpful\n for sorting in multiple passes (for example, sort by department,\n then by salary grade).\n\n10. **CPython implementation detail:** While a list is being sorted,\n the effect of attempting to mutate, or even inspect, the list is\n undefined. The C implementation of Python 2.3 and newer makes the\n list appear empty for the duration, and raises ``ValueError`` if\n it can detect that the list has been mutated during a sort.\n', + 'typesseq-mutable': u"\nMutable Sequence Types\n**********************\n\nList and ``bytearray`` objects support additional operations that\nallow in-place modification of the object. Other mutable sequence\ntypes (when added to the language) should also support these\noperations. Strings and tuples are immutable sequence types: such\nobjects cannot be modified once created. 
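To illustrate the *key* and *reverse* sort arguments covered in the notes above, a minimal sketch with an arbitrary list (``sort()`` mutates the list in place and returns ``None``):

    >>> words = ['banana', 'Apple', 'cherry']
    >>> words.sort(key=str.lower)
    >>> words
    ['Apple', 'banana', 'cherry']
    >>> words.sort(reverse=True)
    >>> words
    ['cherry', 'banana', 'Apple']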
The following operations are\ndefined on mutable sequence types (where *x* is an arbitrary object):\n\n+--------------------------------+----------------------------------+-----------------------+\n| Operation | Result | Notes |\n+================================+==================================+=======================+\n| ``s[i] = x`` | item *i* of *s* is replaced by | |\n| | *x* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s[i:j] = t`` | slice of *s* from *i* to *j* is | |\n| | replaced by the contents of the | |\n| | iterable *t* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``del s[i:j]`` | same as ``s[i:j] = []`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s[i:j:k] = t`` | the elements of ``s[i:j:k]`` are | (1) |\n| | replaced by those of *t* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``del s[i:j:k]`` | removes the elements of | |\n| | ``s[i:j:k]`` from the list | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.append(x)`` | same as ``s[len(s):len(s)] = | (2) |\n| | [x]`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.extend(x)`` | same as ``s[len(s):len(s)] = x`` | (3) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.count(x)`` | return number of *i*'s for which | |\n| | ``s[i] == x`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.index(x[, i[, j]])`` | return smallest *k* such that | (4) |\n| | ``s[k] == x`` and ``i <= k < j`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.insert(i, x)`` | same as ``s[i:i] = [x]`` | (5) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.pop([i])`` | same as ``x = s[i]; del s[i]; | (6) |\n| | return x`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.remove(x)`` | same as ``del s[s.index(x)]`` | (4) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.reverse()`` | reverses the items of *s* in | (7) |\n| | place | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.sort([cmp[, key[, | sort the items of *s* in place | (7)(8)(9)(10) |\n| reverse]]])`` | | |\n+--------------------------------+----------------------------------+-----------------------+\n\nNotes:\n\n1. *t* must have the same length as the slice it is replacing.\n\n2. The C implementation of Python has historically accepted multiple\n parameters and implicitly joined them into a tuple; this no longer\n works in Python 2.0. Use of this misfeature has been deprecated\n since Python 1.4.\n\n3. *x* can be any iterable object.\n\n4. Raises ``ValueError`` when *x* is not found in *s*. When a negative\n index is passed as the second or third parameter to the ``index()``\n method, the list length is added, as for slice indices. If it is\n still negative, it is truncated to zero, as for slice indices.\n\n Changed in version 2.3: Previously, ``index()`` didn't have\n arguments for specifying start and stop positions.\n\n5. 
When a negative index is passed as the first parameter to the\n ``insert()`` method, the list length is added, as for slice\n indices. If it is still negative, it is truncated to zero, as for\n slice indices.\n\n Changed in version 2.3: Previously, all negative indices were\n truncated to zero.\n\n6. The ``pop()`` method is only supported by the list and array types.\n The optional argument *i* defaults to ``-1``, so that by default\n the last item is removed and returned.\n\n7. The ``sort()`` and ``reverse()`` methods modify the list in place\n for economy of space when sorting or reversing a large list. To\n remind you that they operate by side effect, they don't return the\n sorted or reversed list.\n\n8. The ``sort()`` method takes optional arguments for controlling the\n comparisons.\n\n *cmp* specifies a custom comparison function of two arguments (list\n items) which should return a negative, zero or positive number\n depending on whether the first argument is considered smaller than,\n equal to, or larger than the second argument: ``cmp=lambda x,y:\n cmp(x.lower(), y.lower())``. The default value is ``None``.\n\n *key* specifies a function of one argument that is used to extract\n a comparison key from each list element: ``key=str.lower``. The\n default value is ``None``.\n\n *reverse* is a boolean value. If set to ``True``, then the list\n elements are sorted as if each comparison were reversed.\n\n In general, the *key* and *reverse* conversion processes are much\n faster than specifying an equivalent *cmp* function. This is\n because *cmp* is called multiple times for each list element while\n *key* and *reverse* touch each element only once. Use\n ``functools.cmp_to_key()`` to convert an old-style *cmp* function\n to a *key* function.\n\n Changed in version 2.3: Support for ``None`` as an equivalent to\n omitting *cmp* was added.\n\n Changed in version 2.4: Support for *key* and *reverse* was added.\n\n9. Starting with Python 2.3, the ``sort()`` method is guaranteed to be\n stable. A sort is stable if it guarantees not to change the\n relative order of elements that compare equal --- this is helpful\n for sorting in multiple passes (for example, sort by department,\n then by salary grade).\n\n10. **CPython implementation detail:** While a list is being sorted,\n the effect of attempting to mutate, or even inspect, the list is\n undefined. The C implementation of Python 2.3 and newer makes the\n list appear empty for the duration, and raises ``ValueError`` if\n it can detect that the list has been mutated during a sort.\n", 'unary': u'\nUnary arithmetic and bitwise operations\n***************************************\n\nAll unary arithmetic and bitwise operations have the same priority:\n\n u_expr ::= power | "-" u_expr | "+" u_expr | "~" u_expr\n\nThe unary ``-`` (minus) operator yields the negation of its numeric\nargument.\n\nThe unary ``+`` (plus) operator yields its numeric argument unchanged.\n\nThe unary ``~`` (invert) operator yields the bitwise inversion of its\nplain or long integer argument. The bitwise inversion of ``x`` is\ndefined as ``-(x+1)``. 
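For example, following the ``-(x+1)`` definition given above:

    >>> ~5
    -6
    >>> ~-1
    0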
It only applies to integral numbers.\n\nIn all three cases, if the argument does not have the proper type, a\n``TypeError`` exception is raised.\n', 'while': u'\nThe ``while`` statement\n***********************\n\nThe ``while`` statement is used for repeated execution as long as an\nexpression is true:\n\n while_stmt ::= "while" expression ":" suite\n ["else" ":" suite]\n\nThis repeatedly tests the expression and, if it is true, executes the\nfirst suite; if the expression is false (which may be the first time\nit is tested) the suite of the ``else`` clause, if present, is\nexecuted and the loop terminates.\n\nA ``break`` statement executed in the first suite terminates the loop\nwithout executing the ``else`` clause\'s suite. A ``continue``\nstatement executed in the first suite skips the rest of the suite and\ngoes back to testing the expression.\n', - 'with': u'\nThe ``with`` statement\n**********************\n\nNew in version 2.5.\n\nThe ``with`` statement is used to wrap the execution of a block with\nmethods defined by a context manager (see section *With Statement\nContext Managers*). This allows common\n``try``...``except``...``finally`` usage patterns to be encapsulated\nfor convenient reuse.\n\n with_stmt ::= "with" with_item ("," with_item)* ":" suite\n with_item ::= expression ["as" target]\n\nThe execution of the ``with`` statement with one "item" proceeds as\nfollows:\n\n1. The context expression is evaluated to obtain a context manager.\n\n2. The context manager\'s ``__exit__()`` is loaded for later use.\n\n3. The context manager\'s ``__enter__()`` method is invoked.\n\n4. If a target was included in the ``with`` statement, the return\n value from ``__enter__()`` is assigned to it.\n\n Note: The ``with`` statement guarantees that if the ``__enter__()``\n method returns without an error, then ``__exit__()`` will always\n be called. Thus, if an error occurs during the assignment to the\n target list, it will be treated the same as an error occurring\n within the suite would be. See step 6 below.\n\n5. The suite is executed.\n\n6. The context manager\'s ``__exit__()`` method is invoked. If an\n exception caused the suite to be exited, its type, value, and\n traceback are passed as arguments to ``__exit__()``. Otherwise,\n three ``None`` arguments are supplied.\n\n If the suite was exited due to an exception, and the return value\n from the ``__exit__()`` method was false, the exception is\n reraised. If the return value was true, the exception is\n suppressed, and execution continues with the statement following\n the ``with`` statement.\n\n If the suite was exited for any reason other than an exception, the\n return value from ``__exit__()`` is ignored, and execution proceeds\n at the normal location for the kind of exit that was taken.\n\nWith more than one item, the context managers are processed as if\nmultiple ``with`` statements were nested:\n\n with A() as a, B() as b:\n suite\n\nis equivalent to\n\n with A() as a:\n with B() as b:\n suite\n\nNote: In Python 2.5, the ``with`` statement is only allowed when the\n ``with_statement`` feature has been enabled. 
It is always enabled\n in Python 2.6.\n\nChanged in version 2.7: Support for multiple context expressions.\n\nSee also:\n\n **PEP 0343** - The "with" statement\n The specification, background, and examples for the Python\n ``with`` statement.\n', + 'with': u'\nThe ``with`` statement\n**********************\n\nNew in version 2.5.\n\nThe ``with`` statement is used to wrap the execution of a block with\nmethods defined by a context manager (see section *With Statement\nContext Managers*). This allows common\n``try``...``except``...``finally`` usage patterns to be encapsulated\nfor convenient reuse.\n\n with_stmt ::= "with" with_item ("," with_item)* ":" suite\n with_item ::= expression ["as" target]\n\nThe execution of the ``with`` statement with one "item" proceeds as\nfollows:\n\n1. The context expression (the expression given in the **with_item**)\n is evaluated to obtain a context manager.\n\n2. The context manager\'s ``__exit__()`` is loaded for later use.\n\n3. The context manager\'s ``__enter__()`` method is invoked.\n\n4. If a target was included in the ``with`` statement, the return\n value from ``__enter__()`` is assigned to it.\n\n Note: The ``with`` statement guarantees that if the ``__enter__()``\n method returns without an error, then ``__exit__()`` will always\n be called. Thus, if an error occurs during the assignment to the\n target list, it will be treated the same as an error occurring\n within the suite would be. See step 6 below.\n\n5. The suite is executed.\n\n6. The context manager\'s ``__exit__()`` method is invoked. If an\n exception caused the suite to be exited, its type, value, and\n traceback are passed as arguments to ``__exit__()``. Otherwise,\n three ``None`` arguments are supplied.\n\n If the suite was exited due to an exception, and the return value\n from the ``__exit__()`` method was false, the exception is\n reraised. If the return value was true, the exception is\n suppressed, and execution continues with the statement following\n the ``with`` statement.\n\n If the suite was exited for any reason other than an exception, the\n return value from ``__exit__()`` is ignored, and execution proceeds\n at the normal location for the kind of exit that was taken.\n\nWith more than one item, the context managers are processed as if\nmultiple ``with`` statements were nested:\n\n with A() as a, B() as b:\n suite\n\nis equivalent to\n\n with A() as a:\n with B() as b:\n suite\n\nNote: In Python 2.5, the ``with`` statement is only allowed when the\n ``with_statement`` feature has been enabled. It is always enabled\n in Python 2.6.\n\nChanged in version 2.7: Support for multiple context expressions.\n\nSee also:\n\n **PEP 0343** - The "with" statement\n The specification, background, and examples for the Python\n ``with`` statement.\n', 'yield': u'\nThe ``yield`` statement\n***********************\n\n yield_stmt ::= yield_expression\n\nThe ``yield`` statement is only used when defining a generator\nfunction, and is only used in the body of the generator function.\nUsing a ``yield`` statement in a function definition is sufficient to\ncause that definition to create a generator function instead of a\nnormal function.\n\nWhen a generator function is called, it returns an iterator known as a\ngenerator iterator, or more commonly, a generator. 
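An editor's sketch, not part of the patch, of the with-statement protocol described above; Managed is a made-up class showing the order in which __enter__() and __exit__() are called.

    class Managed(object):
        def __enter__(self):
            print 'enter'
            return 42              # bound to the 'as' target, if present
        def __exit__(self, exc_type, exc_value, tb):
            print 'exit', exc_type
            return False           # a false return value re-raises exceptions

    with Managed() as value:
        print 'body', value
    # Output: enter / body 42 / exit None
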
The body of the\ngenerator function is executed by calling the generator\'s ``next()``\nmethod repeatedly until it raises an exception.\n\nWhen a ``yield`` statement is executed, the state of the generator is\nfrozen and the value of **expression_list** is returned to\n``next()``\'s caller. By "frozen" we mean that all local state is\nretained, including the current bindings of local variables, the\ninstruction pointer, and the internal evaluation stack: enough\ninformation is saved so that the next time ``next()`` is invoked, the\nfunction can proceed exactly as if the ``yield`` statement were just\nanother external call.\n\nAs of Python version 2.5, the ``yield`` statement is now allowed in\nthe ``try`` clause of a ``try`` ... ``finally`` construct. If the\ngenerator is not resumed before it is finalized (by reaching a zero\nreference count or by being garbage collected), the generator-\niterator\'s ``close()`` method will be called, allowing any pending\n``finally`` clauses to execute.\n\nNote: In Python 2.2, the ``yield`` statement was only allowed when the\n ``generators`` feature has been enabled. This ``__future__`` import\n statement was used to enable the feature:\n\n from __future__ import generators\n\nSee also:\n\n **PEP 0255** - Simple Generators\n The proposal for adding generators and the ``yield`` statement\n to Python.\n\n **PEP 0342** - Coroutines via Enhanced Generators\n The proposal that, among other generator enhancements, proposed\n allowing ``yield`` to appear inside a ``try`` ... ``finally``\n block.\n'} diff --git a/lib-python/2.7/random.py b/lib-python/2.7/random.py --- a/lib-python/2.7/random.py +++ b/lib-python/2.7/random.py @@ -317,7 +317,7 @@ n = len(population) if not 0 <= k <= n: - raise ValueError, "sample larger than population" + raise ValueError("sample larger than population") random = self.random _int = int result = [None] * k @@ -490,6 +490,12 @@ Conditions on the parameters are alpha > 0 and beta > 0. + The probability distribution function is: + + x ** (alpha - 1) * math.exp(-x / beta) + pdf(x) = -------------------------------------- + math.gamma(alpha) * beta ** alpha + """ # alpha > 0, beta > 0, mean is alpha*beta, variance is alpha*beta**2 @@ -592,7 +598,7 @@ ## -------------------- beta -------------------- ## See -## http://sourceforge.net/bugs/?func=detailbug&bug_id=130030&group_id=5470 +## http://mail.python.org/pipermail/python-bugs-list/2001-January/003752.html ## for Ivan Frohne's insightful analysis of why the original implementation: ## ## def betavariate(self, alpha, beta): diff --git a/lib-python/2.7/re.py b/lib-python/2.7/re.py --- a/lib-python/2.7/re.py +++ b/lib-python/2.7/re.py @@ -207,8 +207,7 @@ "Escape all non-alphanumeric characters in pattern." s = list(pattern) alphanum = _alphanum - for i in range(len(pattern)): - c = pattern[i] + for i, c in enumerate(pattern): if c not in alphanum: if c == "\000": s[i] = "\\000" diff --git a/lib-python/2.7/shutil.py b/lib-python/2.7/shutil.py --- a/lib-python/2.7/shutil.py +++ b/lib-python/2.7/shutil.py @@ -277,6 +277,12 @@ """ real_dst = dst if os.path.isdir(dst): + if _samefile(src, dst): + # We might be on a case insensitive filesystem, + # perform the rename anyway. + os.rename(src, dst) + return + real_dst = os.path.join(dst, _basename(src)) if os.path.exists(real_dst): raise Error, "Destination path '%s' already exists" % real_dst @@ -336,7 +342,7 @@ archive that is being built. If not provided, the current owner and group will be used. 
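An editor's sketch, not part of the patch, of the generator semantics documented above: each yield freezes the local state until next() is called again, and close() triggers any pending finally clause.

    def countdown(n):
        try:
            while n > 0:
                yield n            # local state is retained between calls
                n -= 1
        finally:
            print 'cleaned up'

    gen = countdown(3)
    print gen.next()               # 3
    print gen.next()               # 2
    gen.close()                    # runs the finally clause: 'cleaned up'
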
- The output tar file will be named 'base_dir' + ".tar", possibly plus + The output tar file will be named 'base_name' + ".tar", possibly plus the appropriate compression extension (".gz", or ".bz2"). Returns the output filename. @@ -406,7 +412,7 @@ def _make_zipfile(base_name, base_dir, verbose=0, dry_run=0, logger=None): """Create a zip file from all the files under 'base_dir'. - The output zip file will be named 'base_dir' + ".zip". Uses either the + The output zip file will be named 'base_name' + ".zip". Uses either the "zipfile" Python module (if available) or the InfoZIP "zip" utility (if installed and found on the default search path). If neither tool is available, raises ExecError. Returns the name of the output zip diff --git a/lib-python/2.7/site.py b/lib-python/2.7/site.py --- a/lib-python/2.7/site.py +++ b/lib-python/2.7/site.py @@ -61,6 +61,7 @@ import sys import os import __builtin__ +import traceback # Prefixes for site-packages; add additional prefixes like /usr/local here PREFIXES = [sys.prefix, sys.exec_prefix] @@ -155,17 +156,26 @@ except IOError: return with f: - for line in f: + for n, line in enumerate(f): if line.startswith("#"): continue - if line.startswith(("import ", "import\t")): - exec line - continue - line = line.rstrip() - dir, dircase = makepath(sitedir, line) - if not dircase in known_paths and os.path.exists(dir): - sys.path.append(dir) - known_paths.add(dircase) + try: + if line.startswith(("import ", "import\t")): + exec line + continue + line = line.rstrip() + dir, dircase = makepath(sitedir, line) + if not dircase in known_paths and os.path.exists(dir): + sys.path.append(dir) + known_paths.add(dircase) + except Exception as err: + print >>sys.stderr, "Error processing line {:d} of {}:\n".format( + n+1, fullname) + for record in traceback.format_exception(*sys.exc_info()): + for line in record.splitlines(): + print >>sys.stderr, ' '+line + print >>sys.stderr, "\nRemainder of file ignored" + break if reset: known_paths = None return known_paths diff --git a/lib-python/2.7/smtplib.py b/lib-python/2.7/smtplib.py --- a/lib-python/2.7/smtplib.py +++ b/lib-python/2.7/smtplib.py @@ -49,17 +49,18 @@ from email.base64mime import encode as encode_base64 from sys import stderr -__all__ = ["SMTPException","SMTPServerDisconnected","SMTPResponseException", - "SMTPSenderRefused","SMTPRecipientsRefused","SMTPDataError", - "SMTPConnectError","SMTPHeloError","SMTPAuthenticationError", - "quoteaddr","quotedata","SMTP"] +__all__ = ["SMTPException", "SMTPServerDisconnected", "SMTPResponseException", + "SMTPSenderRefused", "SMTPRecipientsRefused", "SMTPDataError", + "SMTPConnectError", "SMTPHeloError", "SMTPAuthenticationError", + "quoteaddr", "quotedata", "SMTP"] SMTP_PORT = 25 SMTP_SSL_PORT = 465 -CRLF="\r\n" +CRLF = "\r\n" OLDSTYLE_AUTH = re.compile(r"auth=(.*)", re.I) + # Exception classes used by this module. class SMTPException(Exception): """Base class for all exceptions raised by this module.""" @@ -109,7 +110,7 @@ def __init__(self, recipients): self.recipients = recipients - self.args = ( recipients,) + self.args = (recipients,) class SMTPDataError(SMTPResponseException): @@ -128,6 +129,7 @@ combination provided. """ + def quoteaddr(addr): """Quote a subset of the email addresses defined by RFC 821. @@ -138,7 +140,7 @@ m = email.utils.parseaddr(addr)[1] except AttributeError: pass - if m == (None, None): # Indicates parse failure or AttributeError + if m == (None, None): # Indicates parse failure or AttributeError # something weird here.. 
punt -ddm return "<%s>" % addr elif m is None: @@ -175,7 +177,8 @@ chr = None while chr != "\n": chr = self.sslobj.read(1) - if not chr: break + if not chr: + break str += chr return str @@ -219,6 +222,7 @@ ehlo_msg = "ehlo" ehlo_resp = None does_esmtp = 0 + default_port = SMTP_PORT def __init__(self, host='', port=0, local_hostname=None, timeout=socket._GLOBAL_DEFAULT_TIMEOUT): @@ -234,7 +238,6 @@ """ self.timeout = timeout self.esmtp_features = {} - self.default_port = SMTP_PORT if host: (code, msg) = self.connect(host, port) if code != 220: @@ -269,10 +272,11 @@ def _get_socket(self, port, host, timeout): # This makes it simpler for SMTP_SSL to use the SMTP connect code # and just alter the socket connection bit. - if self.debuglevel > 0: print>>stderr, 'connect:', (host, port) + if self.debuglevel > 0: + print>>stderr, 'connect:', (host, port) return socket.create_connection((port, host), timeout) - def connect(self, host='localhost', port = 0): + def connect(self, host='localhost', port=0): """Connect to a host on a given port. If the hostname ends with a colon (`:') followed by a number, and @@ -286,20 +290,25 @@ if not port and (host.find(':') == host.rfind(':')): i = host.rfind(':') if i >= 0: - host, port = host[:i], host[i+1:] - try: port = int(port) + host, port = host[:i], host[i + 1:] + try: + port = int(port) except ValueError: raise socket.error, "nonnumeric port" - if not port: port = self.default_port - if self.debuglevel > 0: print>>stderr, 'connect:', (host, port) + if not port: + port = self.default_port + if self.debuglevel > 0: + print>>stderr, 'connect:', (host, port) self.sock = self._get_socket(host, port, self.timeout) (code, msg) = self.getreply() - if self.debuglevel > 0: print>>stderr, "connect:", msg + if self.debuglevel > 0: + print>>stderr, "connect:", msg return (code, msg) def send(self, str): """Send `str' to the server.""" - if self.debuglevel > 0: print>>stderr, 'send:', repr(str) + if self.debuglevel > 0: + print>>stderr, 'send:', repr(str) if hasattr(self, 'sock') and self.sock: try: self.sock.sendall(str) @@ -330,7 +339,7 @@ Raises SMTPServerDisconnected if end-of-file is reached. """ - resp=[] + resp = [] if self.file is None: self.file = self.sock.makefile('rb') while 1: @@ -341,9 +350,10 @@ if line == '': self.close() raise SMTPServerDisconnected("Connection unexpectedly closed") - if self.debuglevel > 0: print>>stderr, 'reply:', repr(line) + if self.debuglevel > 0: + print>>stderr, 'reply:', repr(line) resp.append(line[4:].strip()) - code=line[:3] + code = line[:3] # Check that the error code is syntactically correct. # Don't attempt to read a continuation line if it is broken. try: @@ -352,17 +362,17 @@ errcode = -1 break # Check if multiline response. - if line[3:4]!="-": + if line[3:4] != "-": break errmsg = "\n".join(resp) if self.debuglevel > 0: - print>>stderr, 'reply: retcode (%s); Msg: %s' % (errcode,errmsg) + print>>stderr, 'reply: retcode (%s); Msg: %s' % (errcode, errmsg) return errcode, errmsg def docmd(self, cmd, args=""): """Send a command, and return its response code.""" - self.putcmd(cmd,args) + self.putcmd(cmd, args) return self.getreply() # std smtp commands @@ -372,9 +382,9 @@ host. """ self.putcmd("helo", name or self.local_hostname) - (code,msg)=self.getreply() - self.helo_resp=msg - return (code,msg) + (code, msg) = self.getreply() + self.helo_resp = msg + return (code, msg) def ehlo(self, name=''): """ SMTP 'ehlo' command. 
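An editor's usage sketch, not part of the patch: connect() parses a trailing ':port' off the host string exactly as its docstring above describes, and debugging output is written to stderr. 'mail.example.com' is a placeholder; a reachable SMTP server is assumed.

    import smtplib

    server = smtplib.SMTP('mail.example.com:2525')   # port parsed from the string
    server.set_debuglevel(1)                         # trace the dialogue on stderr
    server.helo()
    server.quit()
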
@@ -383,19 +393,19 @@ """ self.esmtp_features = {} self.putcmd(self.ehlo_msg, name or self.local_hostname) - (code,msg)=self.getreply() + (code, msg) = self.getreply() # According to RFC1869 some (badly written) # MTA's will disconnect on an ehlo. Toss an exception if # that happens -ddm if code == -1 and len(msg) == 0: self.close() raise SMTPServerDisconnected("Server not connected") - self.ehlo_resp=msg + self.ehlo_resp = msg if code != 250: - return (code,msg) - self.does_esmtp=1 + return (code, msg) + self.does_esmtp = 1 #parse the ehlo response -ddm - resp=self.ehlo_resp.split('\n') + resp = self.ehlo_resp.split('\n') del resp[0] for each in resp: # To be able to communicate with as many SMTP servers as possible, @@ -415,16 +425,16 @@ # It's actually stricter, in that only spaces are allowed between # parameters, but were not going to check for that here. Note # that the space isn't present if there are no parameters. - m=re.match(r'(?P[A-Za-z0-9][A-Za-z0-9\-]*) ?',each) + m = re.match(r'(?P[A-Za-z0-9][A-Za-z0-9\-]*) ?', each) if m: - feature=m.group("feature").lower() - params=m.string[m.end("feature"):].strip() + feature = m.group("feature").lower() + params = m.string[m.end("feature"):].strip() if feature == "auth": self.esmtp_features[feature] = self.esmtp_features.get(feature, "") \ + " " + params else: - self.esmtp_features[feature]=params - return (code,msg) + self.esmtp_features[feature] = params + return (code, msg) def has_extn(self, opt): """Does the server support a given SMTP service extension?""" @@ -444,23 +454,23 @@ """SMTP 'noop' command -- doesn't do anything :>""" return self.docmd("noop") - def mail(self,sender,options=[]): + def mail(self, sender, options=[]): """SMTP 'mail' command -- begins mail xfer session.""" optionlist = '' if options and self.does_esmtp: optionlist = ' ' + ' '.join(options) - self.putcmd("mail", "FROM:%s%s" % (quoteaddr(sender) ,optionlist)) + self.putcmd("mail", "FROM:%s%s" % (quoteaddr(sender), optionlist)) return self.getreply() - def rcpt(self,recip,options=[]): + def rcpt(self, recip, options=[]): """SMTP 'rcpt' command -- indicates 1 recipient for this mail.""" optionlist = '' if options and self.does_esmtp: optionlist = ' ' + ' '.join(options) - self.putcmd("rcpt","TO:%s%s" % (quoteaddr(recip),optionlist)) + self.putcmd("rcpt", "TO:%s%s" % (quoteaddr(recip), optionlist)) return self.getreply() - def data(self,msg): + def data(self, msg): """SMTP 'DATA' command -- sends message data to server. Automatically quotes lines beginning with a period per rfc821. @@ -469,26 +479,28 @@ response code received when the all data is sent. """ self.putcmd("data") - (code,repl)=self.getreply() - if self.debuglevel >0 : print>>stderr, "data:", (code,repl) + (code, repl) = self.getreply() + if self.debuglevel > 0: + print>>stderr, "data:", (code, repl) if code != 354: - raise SMTPDataError(code,repl) + raise SMTPDataError(code, repl) else: q = quotedata(msg) if q[-2:] != CRLF: q = q + CRLF q = q + "." + CRLF self.send(q) - (code,msg)=self.getreply() - if self.debuglevel >0 : print>>stderr, "data:", (code,msg) - return (code,msg) + (code, msg) = self.getreply() + if self.debuglevel > 0: + print>>stderr, "data:", (code, msg) + return (code, msg) def verify(self, address): """SMTP 'verify' command -- checks for address validity.""" self.putcmd("vrfy", quoteaddr(address)) return self.getreply() # a.k.a. 
- vrfy=verify + vrfy = verify def expn(self, address): """SMTP 'expn' command -- expands a mailing list.""" @@ -592,7 +604,7 @@ raise SMTPAuthenticationError(code, resp) return (code, resp) - def starttls(self, keyfile = None, certfile = None): + def starttls(self, keyfile=None, certfile=None): """Puts the connection to the SMTP server into TLS mode. If there has been no previous EHLO or HELO command this session, this @@ -695,22 +707,22 @@ for option in mail_options: esmtp_opts.append(option) - (code,resp) = self.mail(from_addr, esmtp_opts) + (code, resp) = self.mail(from_addr, esmtp_opts) if code != 250: self.rset() raise SMTPSenderRefused(code, resp, from_addr) - senderrs={} + senderrs = {} if isinstance(to_addrs, basestring): to_addrs = [to_addrs] for each in to_addrs: - (code,resp)=self.rcpt(each, rcpt_options) + (code, resp) = self.rcpt(each, rcpt_options) if (code != 250) and (code != 251): - senderrs[each]=(code,resp) - if len(senderrs)==len(to_addrs): + senderrs[each] = (code, resp) + if len(senderrs) == len(to_addrs): # the server refused all our recipients self.rset() raise SMTPRecipientsRefused(senderrs) - (code,resp) = self.data(msg) + (code, resp) = self.data(msg) if code != 250: self.rset() raise SMTPDataError(code, resp) @@ -744,16 +756,19 @@ are also optional - they can contain a PEM formatted private key and certificate chain file for the SSL connection. """ + + default_port = SMTP_SSL_PORT + def __init__(self, host='', port=0, local_hostname=None, keyfile=None, certfile=None, timeout=socket._GLOBAL_DEFAULT_TIMEOUT): self.keyfile = keyfile self.certfile = certfile SMTP.__init__(self, host, port, local_hostname, timeout) - self.default_port = SMTP_SSL_PORT def _get_socket(self, host, port, timeout): - if self.debuglevel > 0: print>>stderr, 'connect:', (host, port) + if self.debuglevel > 0: + print>>stderr, 'connect:', (host, port) new_socket = socket.create_connection((host, port), timeout) new_socket = ssl.wrap_socket(new_socket, self.keyfile, self.certfile) self.file = SSLFakeFile(new_socket) @@ -781,11 +796,11 @@ ehlo_msg = "lhlo" - def __init__(self, host = '', port = LMTP_PORT, local_hostname = None): + def __init__(self, host='', port=LMTP_PORT, local_hostname=None): """Initialize a new instance.""" SMTP.__init__(self, host, port, local_hostname) - def connect(self, host = 'localhost', port = 0): + def connect(self, host='localhost', port=0): """Connect to the LMTP daemon, on either a Unix or a TCP socket.""" if host[0] != '/': return SMTP.connect(self, host, port) @@ -795,13 +810,15 @@ self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) self.sock.connect(host) except socket.error, msg: - if self.debuglevel > 0: print>>stderr, 'connect fail:', host + if self.debuglevel > 0: + print>>stderr, 'connect fail:', host if self.sock: self.sock.close() self.sock = None raise socket.error, msg (code, msg) = self.getreply() - if self.debuglevel > 0: print>>stderr, "connect:", msg + if self.debuglevel > 0: + print>>stderr, "connect:", msg return (code, msg) @@ -815,7 +832,7 @@ return sys.stdin.readline().strip() fromaddr = prompt("From") - toaddrs = prompt("To").split(',') + toaddrs = prompt("To").split(',') print "Enter message, end with ^D:" msg = '' while 1: diff --git a/lib-python/2.7/ssl.py b/lib-python/2.7/ssl.py --- a/lib-python/2.7/ssl.py +++ b/lib-python/2.7/ssl.py @@ -121,9 +121,11 @@ if e.errno != errno.ENOTCONN: raise # no, no connection yet + self._connected = False self._sslobj = None else: # yes, create the SSL object + self._connected = True 
self._sslobj = _ssl.sslwrap(self._sock, server_side, keyfile, certfile, cert_reqs, ssl_version, ca_certs, @@ -293,21 +295,36 @@ self._sslobj.do_handshake() - def connect(self, addr): - - """Connects to remote ADDR, and then wraps the connection in - an SSL channel.""" - + def _real_connect(self, addr, return_errno): # Here we assume that the socket is client-side, and not # connected at the time of the call. We connect it, then wrap it. - if self._sslobj: + if self._connected: raise ValueError("attempt to connect already-connected SSLSocket!") - socket.connect(self, addr) self._sslobj = _ssl.sslwrap(self._sock, False, self.keyfile, self.certfile, self.cert_reqs, self.ssl_version, self.ca_certs, self.ciphers) - if self.do_handshake_on_connect: - self.do_handshake() + try: + socket.connect(self, addr) + if self.do_handshake_on_connect: + self.do_handshake() + except socket_error as e: + if return_errno: + return e.errno + else: + self._sslobj = None + raise e + self._connected = True + return 0 + + def connect(self, addr): + """Connects to remote ADDR, and then wraps the connection in + an SSL channel.""" + self._real_connect(addr, False) + + def connect_ex(self, addr): + """Connects to remote ADDR, and then wraps the connection in + an SSL channel.""" + return self._real_connect(addr, True) def accept(self): diff --git a/lib-python/2.7/subprocess.py b/lib-python/2.7/subprocess.py --- a/lib-python/2.7/subprocess.py +++ b/lib-python/2.7/subprocess.py @@ -396,6 +396,7 @@ import traceback import gc import signal +import errno # Exception classes used by this module. class CalledProcessError(Exception): @@ -427,7 +428,6 @@ else: import select _has_poll = hasattr(select, 'poll') - import errno import fcntl import pickle @@ -441,8 +441,15 @@ "check_output", "CalledProcessError"] if mswindows: - from _subprocess import CREATE_NEW_CONSOLE, CREATE_NEW_PROCESS_GROUP - __all__.extend(["CREATE_NEW_CONSOLE", "CREATE_NEW_PROCESS_GROUP"]) + from _subprocess import (CREATE_NEW_CONSOLE, CREATE_NEW_PROCESS_GROUP, + STD_INPUT_HANDLE, STD_OUTPUT_HANDLE, + STD_ERROR_HANDLE, SW_HIDE, + STARTF_USESTDHANDLES, STARTF_USESHOWWINDOW) + + __all__.extend(["CREATE_NEW_CONSOLE", "CREATE_NEW_PROCESS_GROUP", + "STD_INPUT_HANDLE", "STD_OUTPUT_HANDLE", + "STD_ERROR_HANDLE", "SW_HIDE", + "STARTF_USESTDHANDLES", "STARTF_USESHOWWINDOW"]) try: MAXFD = os.sysconf("SC_OPEN_MAX") except: @@ -726,7 +733,11 @@ stderr = None if self.stdin: if input: - self.stdin.write(input) + try: + self.stdin.write(input) + except IOError as e: + if e.errno != errno.EPIPE and e.errno != errno.EINVAL: + raise self.stdin.close() elif self.stdout: stdout = self.stdout.read() @@ -883,7 +894,7 @@ except pywintypes.error, e: # Translate pywintypes.error to WindowsError, which is # a subclass of OSError. FIXME: We should really - # translate errno using _sys_errlist (or simliar), but + # translate errno using _sys_errlist (or similar), but # how can this be done from Python? 
raise WindowsError(*e.args) finally: @@ -956,7 +967,11 @@ if self.stdin: if input is not None: - self.stdin.write(input) + try: + self.stdin.write(input) + except IOError as e: + if e.errno != errno.EPIPE: + raise self.stdin.close() if self.stdout: @@ -1051,14 +1066,17 @@ errread, errwrite) - def _set_cloexec_flag(self, fd): + def _set_cloexec_flag(self, fd, cloexec=True): try: cloexec_flag = fcntl.FD_CLOEXEC except AttributeError: cloexec_flag = 1 old = fcntl.fcntl(fd, fcntl.F_GETFD) - fcntl.fcntl(fd, fcntl.F_SETFD, old | cloexec_flag) + if cloexec: + fcntl.fcntl(fd, fcntl.F_SETFD, old | cloexec_flag) + else: + fcntl.fcntl(fd, fcntl.F_SETFD, old & ~cloexec_flag) def _close_fds(self, but): @@ -1128,21 +1146,25 @@ os.close(errpipe_read) # Dup fds for child - if p2cread is not None: - os.dup2(p2cread, 0) - if c2pwrite is not None: - os.dup2(c2pwrite, 1) - if errwrite is not None: - os.dup2(errwrite, 2) + def _dup2(a, b): + # dup2() removes the CLOEXEC flag but + # we must do it ourselves if dup2() + # would be a no-op (issue #10806). + if a == b: + self._set_cloexec_flag(a, False) + elif a is not None: + os.dup2(a, b) + _dup2(p2cread, 0) + _dup2(c2pwrite, 1) + _dup2(errwrite, 2) - # Close pipe fds. Make sure we don't close the same - # fd more than once, or standard fds. - if p2cread is not None and p2cread not in (0,): - os.close(p2cread) - if c2pwrite is not None and c2pwrite not in (p2cread, 1): - os.close(c2pwrite) - if errwrite is not None and errwrite not in (p2cread, c2pwrite, 2): - os.close(errwrite) + # Close pipe fds. Make sure we don't close the + # same fd more than once, or standard fds. + closed = { None } + for fd in [p2cread, c2pwrite, errwrite]: + if fd not in closed and fd > 2: + os.close(fd) + closed.add(fd) # Close all other fds, if asked for if close_fds: @@ -1194,7 +1216,11 @@ os.close(errpipe_read) if data != "": - _eintr_retry_call(os.waitpid, self.pid, 0) + try: + _eintr_retry_call(os.waitpid, self.pid, 0) + except OSError as e: + if e.errno != errno.ECHILD: + raise child_exception = pickle.loads(data) for fd in (p2cwrite, c2pread, errread): if fd is not None: @@ -1240,7 +1266,15 @@ """Wait for child process to terminate. Returns returncode attribute.""" if self.returncode is None: - pid, sts = _eintr_retry_call(os.waitpid, self.pid, 0) + try: + pid, sts = _eintr_retry_call(os.waitpid, self.pid, 0) + except OSError as e: + if e.errno != errno.ECHILD: + raise + # This happens if SIGCLD is set to be ignored or waiting + # for child processes has otherwise been disabled for our + # process. This child is dead, we can't get the status. 
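An editor's sketch, not part of the patch, of the failure mode the EPIPE handling above tolerates: a child that exits before reading its stdin. The /bin/true path assumes a POSIX system.

    import subprocess

    p = subprocess.Popen(['/bin/true'], stdin=subprocess.PIPE,
                         stdout=subprocess.PIPE)
    # The child never reads; the broken-pipe error on write is now swallowed
    # and communicate() still returns normally.
    out, err = p.communicate('x' * 1000000)
    print p.returncode                                # 0
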
+ sts = 0 self._handle_exitstatus(sts) return self.returncode @@ -1317,9 +1351,16 @@ for fd, mode in ready: if mode & select.POLLOUT: chunk = input[input_offset : input_offset + _PIPE_BUF] - input_offset += os.write(fd, chunk) - if input_offset >= len(input): - close_unregister_and_remove(fd) + try: + input_offset += os.write(fd, chunk) + except OSError as e: + if e.errno == errno.EPIPE: + close_unregister_and_remove(fd) + else: + raise + else: + if input_offset >= len(input): + close_unregister_and_remove(fd) elif mode & select_POLLIN_POLLPRI: data = os.read(fd, 4096) if not data: @@ -1358,11 +1399,19 @@ if self.stdin in wlist: chunk = input[input_offset : input_offset + _PIPE_BUF] - bytes_written = os.write(self.stdin.fileno(), chunk) - input_offset += bytes_written - if input_offset >= len(input): - self.stdin.close() - write_set.remove(self.stdin) + try: + bytes_written = os.write(self.stdin.fileno(), chunk) + except OSError as e: + if e.errno == errno.EPIPE: + self.stdin.close() + write_set.remove(self.stdin) + else: + raise + else: + input_offset += bytes_written + if input_offset >= len(input): + self.stdin.close() + write_set.remove(self.stdin) if self.stdout in rlist: data = os.read(self.stdout.fileno(), 1024) diff --git a/lib-python/2.7/symbol.py b/lib-python/2.7/symbol.py --- a/lib-python/2.7/symbol.py +++ b/lib-python/2.7/symbol.py @@ -82,20 +82,19 @@ sliceop = 325 exprlist = 326 testlist = 327 -dictmaker = 328 -dictorsetmaker = 329 -classdef = 330 -arglist = 331 -argument = 332 -list_iter = 333 -list_for = 334 -list_if = 335 -comp_iter = 336 -comp_for = 337 -comp_if = 338 -testlist1 = 339 -encoding_decl = 340 -yield_expr = 341 +dictorsetmaker = 328 +classdef = 329 +arglist = 330 +argument = 331 +list_iter = 332 +list_for = 333 +list_if = 334 +comp_iter = 335 +comp_for = 336 +comp_if = 337 +testlist1 = 338 +encoding_decl = 339 +yield_expr = 340 #--end constants-- sym_name = {} diff --git a/lib-python/2.7/sysconfig.py b/lib-python/2.7/sysconfig.py --- a/lib-python/2.7/sysconfig.py +++ b/lib-python/2.7/sysconfig.py @@ -271,7 +271,7 @@ def _get_makefile_filename(): if _PYTHON_BUILD: return os.path.join(_PROJECT_BASE, "Makefile") - return os.path.join(get_path('stdlib'), "config", "Makefile") + return os.path.join(get_path('platstdlib'), "config", "Makefile") def _init_posix(vars): @@ -297,21 +297,6 @@ msg = msg + " (%s)" % e.strerror raise IOError(msg) - # On MacOSX we need to check the setting of the environment variable - # MACOSX_DEPLOYMENT_TARGET: configure bases some choices on it so - # it needs to be compatible. - # If it isn't set we set it to the configure-time value - if sys.platform == 'darwin' and 'MACOSX_DEPLOYMENT_TARGET' in vars: - cfg_target = vars['MACOSX_DEPLOYMENT_TARGET'] - cur_target = os.getenv('MACOSX_DEPLOYMENT_TARGET', '') - if cur_target == '': - cur_target = cfg_target - os.putenv('MACOSX_DEPLOYMENT_TARGET', cfg_target) - elif map(int, cfg_target.split('.')) > map(int, cur_target.split('.')): - msg = ('$MACOSX_DEPLOYMENT_TARGET mismatch: now "%s" but "%s" ' - 'during configure' % (cur_target, cfg_target)) - raise IOError(msg) - # On AIX, there are wrong paths to the linker scripts in the Makefile # -- these paths are relative to the Python source, but when installed # the scripts are in another directory. @@ -616,9 +601,7 @@ # machine is going to compile and link as if it were # MACOSX_DEPLOYMENT_TARGET. 
cfgvars = get_config_vars() - macver = os.environ.get('MACOSX_DEPLOYMENT_TARGET') - if not macver: - macver = cfgvars.get('MACOSX_DEPLOYMENT_TARGET') + macver = cfgvars.get('MACOSX_DEPLOYMENT_TARGET') if 1: # Always calculate the release of the running machine, @@ -639,7 +622,6 @@ m = re.search( r'ProductUserVisibleVersion\s*' + r'(.*?)', f.read()) - f.close() if m is not None: macrelease = '.'.join(m.group(1).split('.')[:2]) # else: fall back to the default behaviour diff --git a/lib-python/2.7/tarfile.py b/lib-python/2.7/tarfile.py --- a/lib-python/2.7/tarfile.py +++ b/lib-python/2.7/tarfile.py @@ -2239,10 +2239,14 @@ if hasattr(os, "symlink") and hasattr(os, "link"): # For systems that support symbolic and hard links. if tarinfo.issym(): + if os.path.lexists(targetpath): + os.unlink(targetpath) os.symlink(tarinfo.linkname, targetpath) else: # See extract(). if os.path.exists(tarinfo._link_target): + if os.path.lexists(targetpath): + os.unlink(targetpath) os.link(tarinfo._link_target, targetpath) else: self._extract_member(self._find_link_target(tarinfo), targetpath) diff --git a/lib-python/2.7/telnetlib.py b/lib-python/2.7/telnetlib.py --- a/lib-python/2.7/telnetlib.py +++ b/lib-python/2.7/telnetlib.py @@ -236,7 +236,7 @@ """ if self.debuglevel > 0: - print 'Telnet(%s,%d):' % (self.host, self.port), + print 'Telnet(%s,%s):' % (self.host, self.port), if args: print msg % args else: diff --git a/lib-python/2.7/test/cjkencodings/big5-utf8.txt b/lib-python/2.7/test/cjkencodings/big5-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/big5-utf8.txt @@ -0,0 +1,9 @@ +如何在 Python 中使用既有的 C library? + 在資訊科技快速發展的今天, 開發及測試軟體的速度是不容忽視的 +課題. 為加快開發及測試的速度, 我們便常希望能利用一些已開發好的 +library, 並有一個 fast prototyping 的 programming language 可 +供使用. 目前有許許多多的 library 是以 C 寫成, 而 Python 是一個 +fast prototyping 的 programming language. 故我們希望能將既有的 +C library 拿到 Python 的環境中測試及整合. 其中最主要也是我們所 +要討論的問題就是: + diff --git a/lib-python/2.7/test/cjkencodings/big5.txt b/lib-python/2.7/test/cjkencodings/big5.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/big5.txt @@ -0,0 +1,9 @@ +�p��b Python ���ϥάJ���� C library? +�@�b��T��ާֳt�o�i������, �}�o�δ��ճn�骺�t�׬O���e������ +���D. ���[�ֶ}�o�δ��ժ��t��, �ڭ̫K�`�Ʊ��Q�Τ@�Ǥw�}�o�n�� +library, �æ��@�� fast prototyping �� programming language �i +�Ѩϥ�. �ثe���\�\�h�h�� library �O�H C �g��, �� Python �O�@�� +fast prototyping �� programming language. �G�ڭ̧Ʊ��N�J���� +C library ���� Python �����Ҥ����դξ�X. �䤤�̥D�n�]�O�ڭ̩� +�n�Q�ת����D�N�O: + diff --git a/lib-python/2.7/test/cjkencodings/big5hkscs-utf8.txt b/lib-python/2.7/test/cjkencodings/big5hkscs-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/big5hkscs-utf8.txt @@ -0,0 +1,2 @@ +𠄌Ě鵮罓洆 +ÊÊ̄ê êê̄ diff --git a/lib-python/2.7/test/cjkencodings/big5hkscs.txt b/lib-python/2.7/test/cjkencodings/big5hkscs.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/big5hkscs.txt @@ -0,0 +1,2 @@ +�E�\�s�ڍ� +�f�b�� ���� diff --git a/lib-python/2.7/test/cjkencodings/cp949-utf8.txt b/lib-python/2.7/test/cjkencodings/cp949-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/cp949-utf8.txt @@ -0,0 +1,9 @@ +똠방각하 펲시콜라 + +㉯㉯납!! 因九月패믤릔궈 ⓡⓖ훀¿¿¿ 긍뒙 ⓔ뎨 ㉯. . +亞영ⓔ능횹 . . . . 서울뤄 뎐학乙 家훀 ! ! !ㅠ.ㅠ +흐흐흐 ㄱㄱㄱ☆ㅠ_ㅠ 어릨 탸콰긐 뎌응 칑九들乙 ㉯드긐 +설릌 家훀 . . . . 굴애쉌 ⓔ궈 ⓡ릘㉱긐 因仁川女中까즼 +와쒀훀 ! ! 亞영ⓔ 家능궈 ☆上관 없능궈능 亞능뒈훀 글애듴 +ⓡ려듀九 싀풔숴훀 어릨 因仁川女中싁⑨들앜!! 
㉯㉯납♡ ⌒⌒* + diff --git a/lib-python/2.7/test/cjkencodings/cp949.txt b/lib-python/2.7/test/cjkencodings/cp949.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/cp949.txt @@ -0,0 +1,9 @@ +�c�氢�� �����ݶ� + +������!! �������В�p�� �ި��R������ ���� �ѵ� ��. . +䬿��Ѵ��� . . . . ����� ������ ʫ�R ! ! !��.�� +������ �������٤�_�� � ����O ���� �h������ ����O +���j ʫ�R . . . . ���֚f �ѱ� �ސt�ƒO ���������� +�;��R ! ! 䬿��� ʫ�ɱ� ��߾�� ���ɱŴ� 䬴ɵ��R �۾֊� +�޷����� ��Ǵ���R � ����������Ĩ���!! �������� �ҡ�* + diff --git a/lib-python/2.7/test/cjkencodings/euc_jisx0213-utf8.txt b/lib-python/2.7/test/cjkencodings/euc_jisx0213-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/euc_jisx0213-utf8.txt @@ -0,0 +1,8 @@ +Python の開発は、1990 年ごろから開始されています。 +開発者の Guido van Rossum は教育用のプログラミング言語「ABC」の開発に参加していましたが、ABC は実用上の目的にはあまり適していませんでした。 +このため、Guido はより実用的なプログラミング言語の開発を開始し、英国 BBS 放送のコメディ番組「モンティ パイソン」のファンである Guido はこの言語を「Python」と名づけました。 +このような背景から生まれた Python の言語設計は、「シンプル」で「習得が容易」という目標に重点が置かれています。 +多くのスクリプト系言語ではユーザの目先の利便性を優先して色々な機能を言語要素として取り入れる場合が多いのですが、Python ではそういった小細工が追加されることはあまりありません。 +言語自体の機能は最小限に押さえ、必要な機能は拡張モジュールとして追加する、というのが Python のポリシーです。 + +ノか゚ ト゚ トキ喝塀 𡚴𪎌 麀齁𩛰 diff --git a/lib-python/2.7/test/cjkencodings/euc_jisx0213.txt b/lib-python/2.7/test/cjkencodings/euc_jisx0213.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/euc_jisx0213.txt @@ -0,0 +1,8 @@ +Python �γ�ȯ�ϡ�1990 ǯ�����鳫�Ϥ���Ƥ��ޤ��� +��ȯ�Ԥ� Guido van Rossum �϶����ѤΥץ���ߥ󥰸����ABC�פγ�ȯ�˻��ä��Ƥ��ޤ�������ABC �ϼ��Ѿ����Ū�ˤϤ��ޤ�Ŭ���Ƥ��ޤ���Ǥ����� +���Τ��ᡢGuido �Ϥ�����Ū�ʥץ���ߥ󥰸���γ�ȯ�򳫻Ϥ����ѹ� BBS �����Υ���ǥ����ȡ֥��ƥ� �ѥ�����פΥե���Ǥ��� Guido �Ϥ��θ�����Python�פ�̾�Ť��ޤ����� +���Τ褦���طʤ������ޤ줿 Python �θ����߷פϡ��֥���ץ�פǡֽ������ưספȤ�����ɸ�˽������֤���Ƥ��ޤ��� +¿���Υ�����ץȷϸ���Ǥϥ桼�����������������ͥ�褷�ƿ����ʵ�ǽ��������ǤȤ��Ƽ��������礬¿���ΤǤ�����Python �ǤϤ������ä����ٹ����ɲä���뤳�ȤϤ��ޤꤢ��ޤ��� +���켫�Τε�ǽ�ϺǾ��¤˲�������ɬ�פʵ�ǽ�ϳ�ĥ�⥸�塼��Ȥ����ɲä��롢�Ȥ����Τ� Python �Υݥꥷ���Ǥ��� + +�Τ� �� �ȥ����� ���� ��ԏ���� diff --git a/lib-python/2.7/test/cjkencodings/euc_jp-utf8.txt b/lib-python/2.7/test/cjkencodings/euc_jp-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/euc_jp-utf8.txt @@ -0,0 +1,7 @@ +Python の開発は、1990 年ごろから開始されています。 +開発者の Guido van Rossum は教育用のプログラミング言語「ABC」の開発に参加していましたが、ABC は実用上の目的にはあまり適していませんでした。 +このため、Guido はより実用的なプログラミング言語の開発を開始し、英国 BBS 放送のコメディ番組「モンティ パイソン」のファンである Guido はこの言語を「Python」と名づけました。 +このような背景から生まれた Python の言語設計は、「シンプル」で「習得が容易」という目標に重点が置かれています。 +多くのスクリプト系言語ではユーザの目先の利便性を優先して色々な機能を言語要素として取り入れる場合が多いのですが、Python ではそういった小細工が追加されることはあまりありません。 +言語自体の機能は最小限に押さえ、必要な機能は拡張モジュールとして追加する、というのが Python のポリシーです。 + diff --git a/lib-python/2.7/test/cjkencodings/euc_jp.txt b/lib-python/2.7/test/cjkencodings/euc_jp.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/euc_jp.txt @@ -0,0 +1,7 @@ +Python �γ�ȯ�ϡ�1990 ǯ�����鳫�Ϥ���Ƥ��ޤ��� +��ȯ�Ԥ� Guido van Rossum �϶����ѤΥץ���ߥ󥰸����ABC�פγ�ȯ�˻��ä��Ƥ��ޤ�������ABC �ϼ��Ѿ����Ū�ˤϤ��ޤ�Ŭ���Ƥ��ޤ���Ǥ����� +���Τ��ᡢGuido �Ϥ�����Ū�ʥץ���ߥ󥰸���γ�ȯ�򳫻Ϥ����ѹ� BBS �����Υ���ǥ����ȡ֥��ƥ� �ѥ�����פΥե���Ǥ��� Guido �Ϥ��θ�����Python�פ�̾�Ť��ޤ����� +���Τ褦���طʤ������ޤ줿 Python �θ����߷פϡ��֥���ץ�פǡֽ������ưספȤ�����ɸ�˽������֤���Ƥ��ޤ��� +¿���Υ�����ץȷϸ���Ǥϥ桼�����������������ͥ�褷�ƿ����ʵ�ǽ��������ǤȤ��Ƽ��������礬¿���ΤǤ�����Python �ǤϤ������ä����ٹ����ɲä���뤳�ȤϤ��ޤꤢ��ޤ��� +���켫�Τε�ǽ�ϺǾ��¤˲�������ɬ�פʵ�ǽ�ϳ�ĥ�⥸�塼��Ȥ����ɲä��롢�Ȥ����Τ� Python �Υݥꥷ���Ǥ��� + diff --git a/lib-python/2.7/test/cjkencodings/euc_kr-utf8.txt 
b/lib-python/2.7/test/cjkencodings/euc_kr-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/euc_kr-utf8.txt @@ -0,0 +1,7 @@ +◎ 파이썬(Python)은 배우기 쉽고, 강력한 프로그래밍 언어입니다. 파이썬은 +효율적인 고수준 데이터 구조와 간단하지만 효율적인 객체지향프로그래밍을 +지원합니다. 파이썬의 우아(優雅)한 문법과 동적 타이핑, 그리고 인터프리팅 +환경은 파이썬을 스크립팅과 여러 분야에서와 대부분의 플랫폼에서의 빠른 +애플리케이션 개발을 할 수 있는 이상적인 언어로 만들어줍니다. + +☆첫가끝: 날아라 쓔쓔쓩~ 닁큼! 뜽금없이 전홥니다. 뷁. 그런거 읎다. diff --git a/lib-python/2.7/test/cjkencodings/euc_kr.txt b/lib-python/2.7/test/cjkencodings/euc_kr.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/euc_kr.txt @@ -0,0 +1,7 @@ +�� ���̽�(Python)�� ���� ����, ������ ���α׷��� ����Դϴ�. ���̽��� +ȿ������ ����� ������ ������ ���������� ȿ������ ��ü�������α׷����� +�����մϴ�. ���̽��� ���(���)�� ������ ���� Ÿ����, �׸��� ���������� +ȯ���� ���̽��� ��ũ���ð� ���� �о߿����� ��κ��� �÷��������� ���� +���ø����̼� ������ �� �� �ִ� �̻����� ���� ������ݴϴ�. + +��ù����: ���ƶ� �Ԥ��ФԤԤ��ФԾ�~ �Ԥ��Ҥ�ŭ! �Ԥ��Ѥ��ݾ��� ���Ԥ��Ȥ��ϴ�. �Ԥ��Τ�. �׷��� �Ԥ��Ѥ���. diff --git a/lib-python/2.7/test/cjkencodings/gb18030-utf8.txt b/lib-python/2.7/test/cjkencodings/gb18030-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/gb18030-utf8.txt @@ -0,0 +1,15 @@ +Python(派森)语言是一种功能强大而完善的通用型计算机程序设计语言, +已经具有十多年的发展历史,成熟且稳定。这种语言具有非常简捷而清晰 +的语法特点,适合完成各种高层任务,几乎可以在所有的操作系统中 +运行。这种语言简单而强大,适合各种人士学习使用。目前,基于这 +种语言的相关技术正在飞速的发展,用户数量急剧扩大,相关的资源非常多。 +如何在 Python 中使用既有的 C library? + 在資訊科技快速發展的今天, 開發及測試軟體的速度是不容忽視的 +課題. 為加快開發及測試的速度, 我們便常希望能利用一些已開發好的 +library, 並有一個 fast prototyping 的 programming language 可 +供使用. 目前有許許多多的 library 是以 C 寫成, 而 Python 是一個 +fast prototyping 的 programming language. 故我們希望能將既有的 +C library 拿到 Python 的環境中測試及整合. 其中最主要也是我們所 +要討論的問題就是: +파이썬은 강력한 기능을 지닌 범용 컴퓨터 프로그래밍 언어다. + diff --git a/lib-python/2.7/test/cjkencodings/gb18030.txt b/lib-python/2.7/test/cjkencodings/gb18030.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/gb18030.txt @@ -0,0 +1,15 @@ +Python����ɭ��������һ�ֹ���ǿ������Ƶ�ͨ���ͼ��������������ԣ� +�Ѿ�����ʮ����ķ�չ��ʷ���������ȶ����������Ծ��зdz���ݶ����� +���﷨�ص㣬�ʺ���ɸ��ָ߲����񣬼������������еIJ���ϵͳ�� +���С��������Լ򵥶�ǿ���ʺϸ�����ʿѧϰʹ�á�Ŀǰ�������� +�����Ե���ؼ������ڷ��ٵķ�չ���û���������������ص���Դ�dz��ࡣ +����� Python ��ʹ�ü��е� C library? +�����YӍ�Ƽ����ٰlչ�Ľ���, �_�l���yԇܛ�w���ٶ��Dz��ݺ�ҕ�� +�n�}. ��ӿ��_�l���yԇ���ٶ�, �҂��㳣ϣ��������һЩ���_�l�õ� +library, �K��һ�� fast prototyping �� programming language �� +��ʹ��. Ŀǰ���S�S���� library ���� C ����, �� Python ��һ�� +fast prototyping �� programming language. ���҂�ϣ���܌����е� +C library �õ� Python �ĭh���Мyԇ������. ��������ҪҲ���҂��� +ҪӑՓ�Ć��}����: +�5�1�3�3�2�1�3�1 �7�6�0�4�6�3 �8�5�8�6�3�5 �3�1�9�5 �0�9�3�0 �4�3�5�7�5�5 �5�5�0�9�8�9�9�3�0�4 �2�9�2�5�9�9. 
+ diff --git a/lib-python/2.7/test/cjkencodings/gb2312-utf8.txt b/lib-python/2.7/test/cjkencodings/gb2312-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/gb2312-utf8.txt @@ -0,0 +1,6 @@ +Python(派森)语言是一种功能强大而完善的通用型计算机程序设计语言, +已经具有十多年的发展历史,成熟且稳定。这种语言具有非常简捷而清晰 +的语法特点,适合完成各种高层任务,几乎可以在所有的操作系统中 +运行。这种语言简单而强大,适合各种人士学习使用。目前,基于这 +种语言的相关技术正在飞速的发展,用户数量急剧扩大,相关的资源非常多。 + diff --git a/lib-python/2.7/test/cjkencodings/gb2312.txt b/lib-python/2.7/test/cjkencodings/gb2312.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/gb2312.txt @@ -0,0 +1,6 @@ +Python����ɭ��������һ�ֹ���ǿ������Ƶ�ͨ���ͼ��������������ԣ� +�Ѿ�����ʮ����ķ�չ��ʷ���������ȶ����������Ծ��зdz���ݶ����� +���﷨�ص㣬�ʺ���ɸ��ָ߲����񣬼������������еIJ���ϵͳ�� +���С��������Լ򵥶�ǿ���ʺϸ�����ʿѧϰʹ�á�Ŀǰ�������� +�����Ե���ؼ������ڷ��ٵķ�չ���û���������������ص���Դ�dz��ࡣ + diff --git a/lib-python/2.7/test/cjkencodings/gbk-utf8.txt b/lib-python/2.7/test/cjkencodings/gbk-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/gbk-utf8.txt @@ -0,0 +1,14 @@ +Python(派森)语言是一种功能强大而完善的通用型计算机程序设计语言, +已经具有十多年的发展历史,成熟且稳定。这种语言具有非常简捷而清晰 +的语法特点,适合完成各种高层任务,几乎可以在所有的操作系统中 +运行。这种语言简单而强大,适合各种人士学习使用。目前,基于这 +种语言的相关技术正在飞速的发展,用户数量急剧扩大,相关的资源非常多。 +如何在 Python 中使用既有的 C library? + 在資訊科技快速發展的今天, 開發及測試軟體的速度是不容忽視的 +課題. 為加快開發及測試的速度, 我們便常希望能利用一些已開發好的 +library, 並有一個 fast prototyping 的 programming language 可 +供使用. 目前有許許多多的 library 是以 C 寫成, 而 Python 是一個 +fast prototyping 的 programming language. 故我們希望能將既有的 +C library 拿到 Python 的環境中測試及整合. 其中最主要也是我們所 +要討論的問題就是: + diff --git a/lib-python/2.7/test/cjkencodings/gbk.txt b/lib-python/2.7/test/cjkencodings/gbk.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/gbk.txt @@ -0,0 +1,14 @@ +Python����ɭ��������һ�ֹ���ǿ������Ƶ�ͨ���ͼ��������������ԣ� +�Ѿ�����ʮ����ķ�չ��ʷ���������ȶ����������Ծ��зdz���ݶ����� +���﷨�ص㣬�ʺ���ɸ��ָ߲����񣬼������������еIJ���ϵͳ�� +���С��������Լ򵥶�ǿ���ʺϸ�����ʿѧϰʹ�á�Ŀǰ�������� +�����Ե���ؼ������ڷ��ٵķ�չ���û���������������ص���Դ�dz��ࡣ +����� Python ��ʹ�ü��е� C library? +�����YӍ�Ƽ����ٰlչ�Ľ���, �_�l���yԇܛ�w���ٶ��Dz��ݺ�ҕ�� +�n�}. ��ӿ��_�l���yԇ���ٶ�, �҂��㳣ϣ��������һЩ���_�l�õ� +library, �K��һ�� fast prototyping �� programming language �� +��ʹ��. Ŀǰ���S�S���� library ���� C ����, �� Python ��һ�� +fast prototyping �� programming language. ���҂�ϣ���܌����е� +C library �õ� Python �ĭh���Мyԇ������. ��������ҪҲ���҂��� +ҪӑՓ�Ć��}����: + diff --git a/lib-python/2.7/test/cjkencodings/hz-utf8.txt b/lib-python/2.7/test/cjkencodings/hz-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/hz-utf8.txt @@ -0,0 +1,2 @@ +This sentence is in ASCII. +The next sentence is in GB.己所不欲,勿施於人。Bye. diff --git a/lib-python/2.7/test/cjkencodings/hz.txt b/lib-python/2.7/test/cjkencodings/hz.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/hz.txt @@ -0,0 +1,2 @@ +This sentence is in ASCII. +The next sentence is in GB.~{<:Ky2;S{#,NpJ)l6HK!#~}Bye. diff --git a/lib-python/2.7/test/cjkencodings/johab-utf8.txt b/lib-python/2.7/test/cjkencodings/johab-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/johab-utf8.txt @@ -0,0 +1,9 @@ +똠방각하 펲시콜라 + +㉯㉯납!! 因九月패믤릔궈 ⓡⓖ훀¿¿¿ 긍뒙 ⓔ뎨 ㉯. . +亞영ⓔ능횹 . . . . 서울뤄 뎐학乙 家훀 ! ! !ㅠ.ㅠ +흐흐흐 ㄱㄱㄱ☆ㅠ_ㅠ 어릨 탸콰긐 뎌응 칑九들乙 ㉯드긐 +설릌 家훀 . . . . 굴애쉌 ⓔ궈 ⓡ릘㉱긐 因仁川女中까즼 +와쒀훀 ! ! 亞영ⓔ 家능궈 ☆上관 없능궈능 亞능뒈훀 글애듴 +ⓡ려듀九 싀풔숴훀 어릨 因仁川女中싁⑨들앜!! 
㉯㉯납♡ ⌒⌒* + diff --git a/lib-python/2.7/test/cjkencodings/johab.txt b/lib-python/2.7/test/cjkencodings/johab.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/johab.txt @@ -0,0 +1,9 @@ +���w�b�a �\��ũ�a + +�����s!! �g��Ú������ �����zٯٯٯ �w�� �ѕ� ��. . +�<�w�ѓw�s . . . . �ᶉ�� �e�b�� �;�z ! ! !�A.�A +�a�a�a �A�A�A�i�A_�A �៚ ȡ���z �a�w ×✗i�� ���a�z +��z �;�z . . . . ������ �ъ� �ޟ��‹z �g�b�I����a�� +�����z ! ! �<�w�� �;�w�� �i꾉� ���w���w �<�w���z �i���z +�ޝa�A� ��Ρ���z �៚ �g�b�I���鯂��i�z!! �����sٽ �b�b* + diff --git a/lib-python/2.7/test/cjkencodings/shift_jis-utf8.txt b/lib-python/2.7/test/cjkencodings/shift_jis-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/shift_jis-utf8.txt @@ -0,0 +1,7 @@ +Python の開発は、1990 年ごろから開始されています。 +開発者の Guido van Rossum は教育用のプログラミング言語「ABC」の開発に参加していましたが、ABC は実用上の目的にはあまり適していませんでした。 +このため、Guido はより実用的なプログラミング言語の開発を開始し、英国 BBS 放送のコメディ番組「モンティ パイソン」のファンである Guido はこの言語を「Python」と名づけました。 +このような背景から生まれた Python の言語設計は、「シンプル」で「習得が容易」という目標に重点が置かれています。 +多くのスクリプト系言語ではユーザの目先の利便性を優先して色々な機能を言語要素として取り入れる場合が多いのですが、Python ではそういった小細工が追加されることはあまりありません。 +言語自体の機能は最小限に押さえ、必要な機能は拡張モジュールとして追加する、というのが Python のポリシーです。 + diff --git a/lib-python/2.7/test/cjkencodings/shift_jis.txt b/lib-python/2.7/test/cjkencodings/shift_jis.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/shift_jis.txt @@ -0,0 +1,7 @@ +Python �̊J���́A1990 �N���납��J�n����Ă��܂��B +�J���҂� Guido van Rossum �͋���p�̃v���O���~���O����uABC�v�̊J���ɎQ�����Ă��܂������AABC �͎��p��̖ړI�ɂ͂��܂�K���Ă��܂���ł����B +���̂��߁AGuido �͂����p�I�ȃv���O���~���O����̊J�����J�n���A�p�� BBS �����̃R���f�B�ԑg�u�����e�B �p�C�\���v�̃t�@���ł��� Guido �͂��̌�����uPython�v�Ɩ��Â��܂����B +���̂悤�Ȕw�i���琶�܂ꂽ Python �̌���݌v�́A�u�V���v���v�Łu�K�����e�Ձv�Ƃ����ڕW�ɏd�_���u����Ă��܂��B +�����̃X�N���v�g�n����ł̓��[�U�̖ڐ�̗��֐���D�悵�ĐF�X�ȋ@�\������v�f�Ƃ��Ď������ꍇ�������̂ł����APython �ł͂������������׍H���lj�����邱�Ƃ͂��܂肠��܂���B +���ꎩ�̂̋@�\�͍ŏ����ɉ������A�K�v�ȋ@�\�͊g�����W���[���Ƃ��Ēlj�����A�Ƃ����̂� Python �̃|���V�[�ł��B + diff --git a/lib-python/2.7/test/cjkencodings/shift_jisx0213-utf8.txt b/lib-python/2.7/test/cjkencodings/shift_jisx0213-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/shift_jisx0213-utf8.txt @@ -0,0 +1,8 @@ +Python の開発は、1990 年ごろから開始されています。 +開発者の Guido van Rossum は教育用のプログラミング言語「ABC」の開発に参加していましたが、ABC は実用上の目的にはあまり適していませんでした。 +このため、Guido はより実用的なプログラミング言語の開発を開始し、英国 BBS 放送のコメディ番組「モンティ パイソン」のファンである Guido はこの言語を「Python」と名づけました。 +このような背景から生まれた Python の言語設計は、「シンプル」で「習得が容易」という目標に重点が置かれています。 +多くのスクリプト系言語ではユーザの目先の利便性を優先して色々な機能を言語要素として取り入れる場合が多いのですが、Python ではそういった小細工が追加されることはあまりありません。 +言語自体の機能は最小限に押さえ、必要な機能は拡張モジュールとして追加する、というのが Python のポリシーです。 + +ノか゚ ト゚ トキ喝塀 𡚴𪎌 麀齁𩛰 diff --git a/lib-python/2.7/test/cjkencodings/shift_jisx0213.txt b/lib-python/2.7/test/cjkencodings/shift_jisx0213.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/shift_jisx0213.txt @@ -0,0 +1,8 @@ +Python �̊J���́A1990 �N���납��J�n����Ă��܂��B +�J���҂� Guido van Rossum �͋���p�̃v���O���~���O����uABC�v�̊J���ɎQ�����Ă��܂������AABC �͎��p��̖ړI�ɂ͂��܂�K���Ă��܂���ł����B +���̂��߁AGuido �͂����p�I�ȃv���O���~���O����̊J�����J�n���A�p�� BBS �����̃R���f�B�ԑg�u�����e�B �p�C�\���v�̃t�@���ł��� Guido �͂��̌�����uPython�v�Ɩ��Â��܂����B +���̂悤�Ȕw�i���琶�܂ꂽ Python �̌���݌v�́A�u�V���v���v�Łu�K�����e�Ձv�Ƃ����ڕW�ɏd�_���u����Ă��܂��B +�����̃X�N���v�g�n����ł̓��[�U�̖ڐ�̗��֐���D�悵�ĐF�X�ȋ@�\������v�f�Ƃ��Ď������ꍇ�������̂ł����APython �ł͂������������׍H���lj�����邱�Ƃ͂��܂肠��܂���B 
+���ꎩ�̂̋@�\�͍ŏ����ɉ������A�K�v�ȋ@�\�͊g�����W���[���Ƃ��Ēlj�����A�Ƃ����̂� Python �̃|���V�[�ł��B + +�m�� �� �g�L�K�y ���� ������ diff --git a/lib-python/2.7/test/cjkencodings_test.py b/lib-python/2.7/test/cjkencodings_test.py deleted file mode 100644 --- a/lib-python/2.7/test/cjkencodings_test.py +++ /dev/null @@ -1,1019 +0,0 @@ -teststring = { -'big5': ( -"\xa6\x70\xa6\xf3\xa6\x62\x20\x50\x79\x74\x68\x6f\x6e\x20\xa4\xa4" -"\xa8\xcf\xa5\xce\xac\x4a\xa6\xb3\xaa\xba\x20\x43\x20\x6c\x69\x62" -"\x72\x61\x72\x79\x3f\x0a\xa1\x40\xa6\x62\xb8\xea\xb0\x54\xac\xec" -"\xa7\xde\xa7\xd6\xb3\x74\xb5\x6f\xae\x69\xaa\xba\xa4\xb5\xa4\xd1" -"\x2c\x20\xb6\x7d\xb5\x6f\xa4\xce\xb4\xfa\xb8\xd5\xb3\x6e\xc5\xe9" -"\xaa\xba\xb3\x74\xab\xd7\xac\x4f\xa4\xa3\xae\x65\xa9\xbf\xb5\xf8" -"\xaa\xba\x0a\xbd\xd2\xc3\x44\x2e\x20\xac\xb0\xa5\x5b\xa7\xd6\xb6" -"\x7d\xb5\x6f\xa4\xce\xb4\xfa\xb8\xd5\xaa\xba\xb3\x74\xab\xd7\x2c" -"\x20\xa7\xda\xad\xcc\xab\x4b\xb1\x60\xa7\xc6\xb1\xe6\xaf\xe0\xa7" -"\x51\xa5\xce\xa4\x40\xa8\xc7\xa4\x77\xb6\x7d\xb5\x6f\xa6\x6e\xaa" -"\xba\x0a\x6c\x69\x62\x72\x61\x72\x79\x2c\x20\xa8\xc3\xa6\xb3\xa4" -"\x40\xad\xd3\x20\x66\x61\x73\x74\x20\x70\x72\x6f\x74\x6f\x74\x79" -"\x70\x69\x6e\x67\x20\xaa\xba\x20\x70\x72\x6f\x67\x72\x61\x6d\x6d" -"\x69\x6e\x67\x20\x6c\x61\x6e\x67\x75\x61\x67\x65\x20\xa5\x69\x0a" -"\xa8\xd1\xa8\xcf\xa5\xce\x2e\x20\xa5\xd8\xab\x65\xa6\xb3\xb3\x5c" -"\xb3\x5c\xa6\x68\xa6\x68\xaa\xba\x20\x6c\x69\x62\x72\x61\x72\x79" -"\x20\xac\x4f\xa5\x48\x20\x43\x20\xbc\x67\xa6\xa8\x2c\x20\xa6\xd3" -"\x20\x50\x79\x74\x68\x6f\x6e\x20\xac\x4f\xa4\x40\xad\xd3\x0a\x66" -"\x61\x73\x74\x20\x70\x72\x6f\x74\x6f\x74\x79\x70\x69\x6e\x67\x20" -"\xaa\xba\x20\x70\x72\x6f\x67\x72\x61\x6d\x6d\x69\x6e\x67\x20\x6c" -"\x61\x6e\x67\x75\x61\x67\x65\x2e\x20\xac\x47\xa7\xda\xad\xcc\xa7" -"\xc6\xb1\xe6\xaf\xe0\xb1\x4e\xac\x4a\xa6\xb3\xaa\xba\x0a\x43\x20" -"\x6c\x69\x62\x72\x61\x72\x79\x20\xae\xb3\xa8\xec\x20\x50\x79\x74" -"\x68\x6f\x6e\x20\xaa\xba\xc0\xf4\xb9\xd2\xa4\xa4\xb4\xfa\xb8\xd5" -"\xa4\xce\xbe\xe3\xa6\x58\x2e\x20\xa8\xe4\xa4\xa4\xb3\xcc\xa5\x44" -"\xad\x6e\xa4\x5d\xac\x4f\xa7\xda\xad\xcc\xa9\xd2\x0a\xad\x6e\xb0" -"\x51\xbd\xd7\xaa\xba\xb0\xdd\xc3\x44\xb4\x4e\xac\x4f\x3a\x0a\x0a", -"\xe5\xa6\x82\xe4\xbd\x95\xe5\x9c\xa8\x20\x50\x79\x74\x68\x6f\x6e" -"\x20\xe4\xb8\xad\xe4\xbd\xbf\xe7\x94\xa8\xe6\x97\xa2\xe6\x9c\x89" -"\xe7\x9a\x84\x20\x43\x20\x6c\x69\x62\x72\x61\x72\x79\x3f\x0a\xe3" -"\x80\x80\xe5\x9c\xa8\xe8\xb3\x87\xe8\xa8\x8a\xe7\xa7\x91\xe6\x8a" -"\x80\xe5\xbf\xab\xe9\x80\x9f\xe7\x99\xbc\xe5\xb1\x95\xe7\x9a\x84" -"\xe4\xbb\x8a\xe5\xa4\xa9\x2c\x20\xe9\x96\x8b\xe7\x99\xbc\xe5\x8f" -"\x8a\xe6\xb8\xac\xe8\xa9\xa6\xe8\xbb\x9f\xe9\xab\x94\xe7\x9a\x84" -"\xe9\x80\x9f\xe5\xba\xa6\xe6\x98\xaf\xe4\xb8\x8d\xe5\xae\xb9\xe5" -"\xbf\xbd\xe8\xa6\x96\xe7\x9a\x84\x0a\xe8\xaa\xb2\xe9\xa1\x8c\x2e" -"\x20\xe7\x82\xba\xe5\x8a\xa0\xe5\xbf\xab\xe9\x96\x8b\xe7\x99\xbc" -"\xe5\x8f\x8a\xe6\xb8\xac\xe8\xa9\xa6\xe7\x9a\x84\xe9\x80\x9f\xe5" -"\xba\xa6\x2c\x20\xe6\x88\x91\xe5\x80\x91\xe4\xbe\xbf\xe5\xb8\xb8" -"\xe5\xb8\x8c\xe6\x9c\x9b\xe8\x83\xbd\xe5\x88\xa9\xe7\x94\xa8\xe4" -"\xb8\x80\xe4\xba\x9b\xe5\xb7\xb2\xe9\x96\x8b\xe7\x99\xbc\xe5\xa5" -"\xbd\xe7\x9a\x84\x0a\x6c\x69\x62\x72\x61\x72\x79\x2c\x20\xe4\xb8" -"\xa6\xe6\x9c\x89\xe4\xb8\x80\xe5\x80\x8b\x20\x66\x61\x73\x74\x20" -"\x70\x72\x6f\x74\x6f\x74\x79\x70\x69\x6e\x67\x20\xe7\x9a\x84\x20" -"\x70\x72\x6f\x67\x72\x61\x6d\x6d\x69\x6e\x67\x20\x6c\x61\x6e\x67" -"\x75\x61\x67\x65\x20\xe5\x8f\xaf\x0a\xe4\xbe\x9b\xe4\xbd\xbf\xe7" -"\x94\xa8\x2e\x20\xe7\x9b\xae\xe5\x89\x8d\xe6\x9c\x89\xe8\xa8\xb1" 
-"\xe8\xa8\xb1\xe5\xa4\x9a\xe5\xa4\x9a\xe7\x9a\x84\x20\x6c\x69\x62" -"\x72\x61\x72\x79\x20\xe6\x98\xaf\xe4\xbb\xa5\x20\x43\x20\xe5\xaf" -"\xab\xe6\x88\x90\x2c\x20\xe8\x80\x8c\x20\x50\x79\x74\x68\x6f\x6e" -"\x20\xe6\x98\xaf\xe4\xb8\x80\xe5\x80\x8b\x0a\x66\x61\x73\x74\x20" -"\x70\x72\x6f\x74\x6f\x74\x79\x70\x69\x6e\x67\x20\xe7\x9a\x84\x20" -"\x70\x72\x6f\x67\x72\x61\x6d\x6d\x69\x6e\x67\x20\x6c\x61\x6e\x67" -"\x75\x61\x67\x65\x2e\x20\xe6\x95\x85\xe6\x88\x91\xe5\x80\x91\xe5" -"\xb8\x8c\xe6\x9c\x9b\xe8\x83\xbd\xe5\xb0\x87\xe6\x97\xa2\xe6\x9c" -"\x89\xe7\x9a\x84\x0a\x43\x20\x6c\x69\x62\x72\x61\x72\x79\x20\xe6" -"\x8b\xbf\xe5\x88\xb0\x20\x50\x79\x74\x68\x6f\x6e\x20\xe7\x9a\x84" -"\xe7\x92\xb0\xe5\xa2\x83\xe4\xb8\xad\xe6\xb8\xac\xe8\xa9\xa6\xe5" -"\x8f\x8a\xe6\x95\xb4\xe5\x90\x88\x2e\x20\xe5\x85\xb6\xe4\xb8\xad" -"\xe6\x9c\x80\xe4\xb8\xbb\xe8\xa6\x81\xe4\xb9\x9f\xe6\x98\xaf\xe6" -"\x88\x91\xe5\x80\x91\xe6\x89\x80\x0a\xe8\xa6\x81\xe8\xa8\x8e\xe8" -"\xab\x96\xe7\x9a\x84\xe5\x95\x8f\xe9\xa1\x8c\xe5\xb0\xb1\xe6\x98" -"\xaf\x3a\x0a\x0a"), -'big5hkscs': ( -"\x88\x45\x88\x5c\x8a\x73\x8b\xda\x8d\xd8\x0a\x88\x66\x88\x62\x88" -"\xa7\x20\x88\xa7\x88\xa3\x0a", -"\xf0\xa0\x84\x8c\xc4\x9a\xe9\xb5\xae\xe7\xbd\x93\xe6\xb4\x86\x0a" -"\xc3\x8a\xc3\x8a\xcc\x84\xc3\xaa\x20\xc3\xaa\xc3\xaa\xcc\x84\x0a"), -'cp949': ( -"\x8c\x63\xb9\xe6\xb0\xa2\xc7\xcf\x20\xbc\x84\xbd\xc3\xc4\xdd\xb6" -"\xf3\x0a\x0a\xa8\xc0\xa8\xc0\xb3\xb3\x21\x21\x20\xec\xd7\xce\xfa" -"\xea\xc5\xc6\xd0\x92\xe6\x90\x70\xb1\xc5\x20\xa8\xde\xa8\xd3\xc4" -"\x52\xa2\xaf\xa2\xaf\xa2\xaf\x20\xb1\xe0\x8a\x96\x20\xa8\xd1\xb5" -"\xb3\x20\xa8\xc0\x2e\x20\x2e\x0a\xe4\xac\xbf\xb5\xa8\xd1\xb4\xc9" -"\xc8\xc2\x20\x2e\x20\x2e\x20\x2e\x20\x2e\x20\xbc\xad\xbf\xef\xb7" -"\xef\x20\xb5\xaf\xc7\xd0\xeb\xe0\x20\xca\xab\xc4\x52\x20\x21\x20" -"\x21\x20\x21\xa4\xd0\x2e\xa4\xd0\x0a\xc8\xe5\xc8\xe5\xc8\xe5\x20" -"\xa4\xa1\xa4\xa1\xa4\xa1\xa1\xd9\xa4\xd0\x5f\xa4\xd0\x20\xbe\xee" -"\x90\x8a\x20\xc5\xcb\xc4\xe2\x83\x4f\x20\xb5\xae\xc0\xc0\x20\xaf" -"\x68\xce\xfa\xb5\xe9\xeb\xe0\x20\xa8\xc0\xb5\xe5\x83\x4f\x0a\xbc" -"\xb3\x90\x6a\x20\xca\xab\xc4\x52\x20\x2e\x20\x2e\x20\x2e\x20\x2e" -"\x20\xb1\xbc\xbe\xd6\x9a\x66\x20\xa8\xd1\xb1\xc5\x20\xa8\xde\x90" -"\x74\xa8\xc2\x83\x4f\x20\xec\xd7\xec\xd2\xf4\xb9\xe5\xfc\xf1\xe9" -"\xb1\xee\xa3\x8e\x0a\xbf\xcd\xbe\xac\xc4\x52\x20\x21\x20\x21\x20" -"\xe4\xac\xbf\xb5\xa8\xd1\x20\xca\xab\xb4\xc9\xb1\xc5\x20\xa1\xd9" -"\xdf\xbe\xb0\xfc\x20\xbe\xf8\xb4\xc9\xb1\xc5\xb4\xc9\x20\xe4\xac" -"\xb4\xc9\xb5\xd8\xc4\x52\x20\xb1\xdb\xbe\xd6\x8a\xdb\x0a\xa8\xde" -"\xb7\xc1\xb5\xe0\xce\xfa\x20\x9a\xc3\xc7\xb4\xbd\xa4\xc4\x52\x20" -"\xbe\xee\x90\x8a\x20\xec\xd7\xec\xd2\xf4\xb9\xe5\xfc\xf1\xe9\x9a" -"\xc4\xa8\xef\xb5\xe9\x9d\xda\x21\x21\x20\xa8\xc0\xa8\xc0\xb3\xb3" -"\xa2\xbd\x20\xa1\xd2\xa1\xd2\x2a\x0a\x0a", -"\xeb\x98\xa0\xeb\xb0\xa9\xea\xb0\x81\xed\x95\x98\x20\xed\x8e\xb2" -"\xec\x8b\x9c\xec\xbd\x9c\xeb\x9d\xbc\x0a\x0a\xe3\x89\xaf\xe3\x89" -"\xaf\xeb\x82\xa9\x21\x21\x20\xe5\x9b\xa0\xe4\xb9\x9d\xe6\x9c\x88" -"\xed\x8c\xa8\xeb\xaf\xa4\xeb\xa6\x94\xea\xb6\x88\x20\xe2\x93\xa1" -"\xe2\x93\x96\xed\x9b\x80\xc2\xbf\xc2\xbf\xc2\xbf\x20\xea\xb8\x8d" -"\xeb\x92\x99\x20\xe2\x93\x94\xeb\x8e\xa8\x20\xe3\x89\xaf\x2e\x20" -"\x2e\x0a\xe4\xba\x9e\xec\x98\x81\xe2\x93\x94\xeb\x8a\xa5\xed\x9a" -"\xb9\x20\x2e\x20\x2e\x20\x2e\x20\x2e\x20\xec\x84\x9c\xec\x9a\xb8" -"\xeb\xa4\x84\x20\xeb\x8e\x90\xed\x95\x99\xe4\xb9\x99\x20\xe5\xae" -"\xb6\xed\x9b\x80\x20\x21\x20\x21\x20\x21\xe3\x85\xa0\x2e\xe3\x85" -"\xa0\x0a\xed\x9d\x90\xed\x9d\x90\xed\x9d\x90\x20\xe3\x84\xb1\xe3" 
-"\x84\xb1\xe3\x84\xb1\xe2\x98\x86\xe3\x85\xa0\x5f\xe3\x85\xa0\x20" -"\xec\x96\xb4\xeb\xa6\xa8\x20\xed\x83\xb8\xec\xbd\xb0\xea\xb8\x90" -"\x20\xeb\x8e\x8c\xec\x9d\x91\x20\xec\xb9\x91\xe4\xb9\x9d\xeb\x93" -"\xa4\xe4\xb9\x99\x20\xe3\x89\xaf\xeb\x93\x9c\xea\xb8\x90\x0a\xec" -"\x84\xa4\xeb\xa6\x8c\x20\xe5\xae\xb6\xed\x9b\x80\x20\x2e\x20\x2e" -"\x20\x2e\x20\x2e\x20\xea\xb5\xb4\xec\x95\xa0\xec\x89\x8c\x20\xe2" -"\x93\x94\xea\xb6\x88\x20\xe2\x93\xa1\xeb\xa6\x98\xe3\x89\xb1\xea" -"\xb8\x90\x20\xe5\x9b\xa0\xe4\xbb\x81\xe5\xb7\x9d\xef\xa6\x81\xe4" -"\xb8\xad\xea\xb9\x8c\xec\xa6\xbc\x0a\xec\x99\x80\xec\x92\x80\xed" -"\x9b\x80\x20\x21\x20\x21\x20\xe4\xba\x9e\xec\x98\x81\xe2\x93\x94" -"\x20\xe5\xae\xb6\xeb\x8a\xa5\xea\xb6\x88\x20\xe2\x98\x86\xe4\xb8" -"\x8a\xea\xb4\x80\x20\xec\x97\x86\xeb\x8a\xa5\xea\xb6\x88\xeb\x8a" -"\xa5\x20\xe4\xba\x9e\xeb\x8a\xa5\xeb\x92\x88\xed\x9b\x80\x20\xea" -"\xb8\x80\xec\x95\xa0\xeb\x93\xb4\x0a\xe2\x93\xa1\xeb\xa0\xa4\xeb" -"\x93\x80\xe4\xb9\x9d\x20\xec\x8b\x80\xed\x92\x94\xec\x88\xb4\xed" -"\x9b\x80\x20\xec\x96\xb4\xeb\xa6\xa8\x20\xe5\x9b\xa0\xe4\xbb\x81" -"\xe5\xb7\x9d\xef\xa6\x81\xe4\xb8\xad\xec\x8b\x81\xe2\x91\xa8\xeb" -"\x93\xa4\xec\x95\x9c\x21\x21\x20\xe3\x89\xaf\xe3\x89\xaf\xeb\x82" -"\xa9\xe2\x99\xa1\x20\xe2\x8c\x92\xe2\x8c\x92\x2a\x0a\x0a"), -'euc_jisx0213': ( -"\x50\x79\x74\x68\x6f\x6e\x20\xa4\xce\xb3\xab\xc8\xaf\xa4\xcf\xa1" -"\xa2\x31\x39\x39\x30\x20\xc7\xaf\xa4\xb4\xa4\xed\xa4\xab\xa4\xe9" -"\xb3\xab\xbb\xcf\xa4\xb5\xa4\xec\xa4\xc6\xa4\xa4\xa4\xde\xa4\xb9" -"\xa1\xa3\x0a\xb3\xab\xc8\xaf\xbc\xd4\xa4\xce\x20\x47\x75\x69\x64" -"\x6f\x20\x76\x61\x6e\x20\x52\x6f\x73\x73\x75\x6d\x20\xa4\xcf\xb6" -"\xb5\xb0\xe9\xcd\xd1\xa4\xce\xa5\xd7\xa5\xed\xa5\xb0\xa5\xe9\xa5" -"\xdf\xa5\xf3\xa5\xb0\xb8\xc0\xb8\xec\xa1\xd6\x41\x42\x43\xa1\xd7" -"\xa4\xce\xb3\xab\xc8\xaf\xa4\xcb\xbb\xb2\xb2\xc3\xa4\xb7\xa4\xc6" -"\xa4\xa4\xa4\xde\xa4\xb7\xa4\xbf\xa4\xac\xa1\xa2\x41\x42\x43\x20" -"\xa4\xcf\xbc\xc2\xcd\xd1\xbe\xe5\xa4\xce\xcc\xdc\xc5\xaa\xa4\xcb" -"\xa4\xcf\xa4\xa2\xa4\xde\xa4\xea\xc5\xac\xa4\xb7\xa4\xc6\xa4\xa4" -"\xa4\xde\xa4\xbb\xa4\xf3\xa4\xc7\xa4\xb7\xa4\xbf\xa1\xa3\x0a\xa4" -"\xb3\xa4\xce\xa4\xbf\xa4\xe1\xa1\xa2\x47\x75\x69\x64\x6f\x20\xa4" -"\xcf\xa4\xe8\xa4\xea\xbc\xc2\xcd\xd1\xc5\xaa\xa4\xca\xa5\xd7\xa5" -"\xed\xa5\xb0\xa5\xe9\xa5\xdf\xa5\xf3\xa5\xb0\xb8\xc0\xb8\xec\xa4" -"\xce\xb3\xab\xc8\xaf\xa4\xf2\xb3\xab\xbb\xcf\xa4\xb7\xa1\xa2\xb1" -"\xd1\xb9\xf1\x20\x42\x42\x53\x20\xca\xfc\xc1\xf7\xa4\xce\xa5\xb3" -"\xa5\xe1\xa5\xc7\xa5\xa3\xc8\xd6\xc1\xc8\xa1\xd6\xa5\xe2\xa5\xf3" -"\xa5\xc6\xa5\xa3\x20\xa5\xd1\xa5\xa4\xa5\xbd\xa5\xf3\xa1\xd7\xa4" -"\xce\xa5\xd5\xa5\xa1\xa5\xf3\xa4\xc7\xa4\xa2\xa4\xeb\x20\x47\x75" -"\x69\x64\x6f\x20\xa4\xcf\xa4\xb3\xa4\xce\xb8\xc0\xb8\xec\xa4\xf2" -"\xa1\xd6\x50\x79\x74\x68\x6f\x6e\xa1\xd7\xa4\xc8\xcc\xbe\xa4\xc5" -"\xa4\xb1\xa4\xde\xa4\xb7\xa4\xbf\xa1\xa3\x0a\xa4\xb3\xa4\xce\xa4" -"\xe8\xa4\xa6\xa4\xca\xc7\xd8\xb7\xca\xa4\xab\xa4\xe9\xc0\xb8\xa4" -"\xde\xa4\xec\xa4\xbf\x20\x50\x79\x74\x68\x6f\x6e\x20\xa4\xce\xb8" -"\xc0\xb8\xec\xc0\xdf\xb7\xd7\xa4\xcf\xa1\xa2\xa1\xd6\xa5\xb7\xa5" -"\xf3\xa5\xd7\xa5\xeb\xa1\xd7\xa4\xc7\xa1\xd6\xbd\xac\xc6\xc0\xa4" -"\xac\xcd\xc6\xb0\xd7\xa1\xd7\xa4\xc8\xa4\xa4\xa4\xa6\xcc\xdc\xc9" -"\xb8\xa4\xcb\xbd\xc5\xc5\xc0\xa4\xac\xc3\xd6\xa4\xab\xa4\xec\xa4" -"\xc6\xa4\xa4\xa4\xde\xa4\xb9\xa1\xa3\x0a\xc2\xbf\xa4\xaf\xa4\xce" -"\xa5\xb9\xa5\xaf\xa5\xea\xa5\xd7\xa5\xc8\xb7\xcf\xb8\xc0\xb8\xec" -"\xa4\xc7\xa4\xcf\xa5\xe6\xa1\xbc\xa5\xb6\xa4\xce\xcc\xdc\xc0\xe8" -"\xa4\xce\xcd\xf8\xca\xd8\xc0\xad\xa4\xf2\xcd\xa5\xc0\xe8\xa4\xb7" 
-"\xa4\xc6\xbf\xa7\xa1\xb9\xa4\xca\xb5\xa1\xc7\xbd\xa4\xf2\xb8\xc0" -"\xb8\xec\xcd\xd7\xc1\xc7\xa4\xc8\xa4\xb7\xa4\xc6\xbc\xe8\xa4\xea" -"\xc6\xfe\xa4\xec\xa4\xeb\xbe\xec\xb9\xe7\xa4\xac\xc2\xbf\xa4\xa4" -"\xa4\xce\xa4\xc7\xa4\xb9\xa4\xac\xa1\xa2\x50\x79\x74\x68\x6f\x6e" -"\x20\xa4\xc7\xa4\xcf\xa4\xbd\xa4\xa6\xa4\xa4\xa4\xc3\xa4\xbf\xbe" -"\xae\xba\xd9\xb9\xa9\xa4\xac\xc4\xc9\xb2\xc3\xa4\xb5\xa4\xec\xa4" -"\xeb\xa4\xb3\xa4\xc8\xa4\xcf\xa4\xa2\xa4\xde\xa4\xea\xa4\xa2\xa4" -"\xea\xa4\xde\xa4\xbb\xa4\xf3\xa1\xa3\x0a\xb8\xc0\xb8\xec\xbc\xab" -"\xc2\xce\xa4\xce\xb5\xa1\xc7\xbd\xa4\xcf\xba\xc7\xbe\xae\xb8\xc2" -"\xa4\xcb\xb2\xa1\xa4\xb5\xa4\xa8\xa1\xa2\xc9\xac\xcd\xd7\xa4\xca" -"\xb5\xa1\xc7\xbd\xa4\xcf\xb3\xc8\xc4\xa5\xa5\xe2\xa5\xb8\xa5\xe5" -"\xa1\xbc\xa5\xeb\xa4\xc8\xa4\xb7\xa4\xc6\xc4\xc9\xb2\xc3\xa4\xb9" -"\xa4\xeb\xa1\xa2\xa4\xc8\xa4\xa4\xa4\xa6\xa4\xce\xa4\xac\x20\x50" -"\x79\x74\x68\x6f\x6e\x20\xa4\xce\xa5\xdd\xa5\xea\xa5\xb7\xa1\xbc" -"\xa4\xc7\xa4\xb9\xa1\xa3\x0a\x0a\xa5\xce\xa4\xf7\x20\xa5\xfe\x20" -"\xa5\xc8\xa5\xad\xaf\xac\xaf\xda\x20\xcf\xe3\x8f\xfe\xd8\x20\x8f" -"\xfe\xd4\x8f\xfe\xe8\x8f\xfc\xd6\x0a", -"\x50\x79\x74\x68\x6f\x6e\x20\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba" -"\xe3\x81\xaf\xe3\x80\x81\x31\x39\x39\x30\x20\xe5\xb9\xb4\xe3\x81" -"\x94\xe3\x82\x8d\xe3\x81\x8b\xe3\x82\x89\xe9\x96\x8b\xe5\xa7\x8b" -"\xe3\x81\x95\xe3\x82\x8c\xe3\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3" -"\x81\x99\xe3\x80\x82\x0a\xe9\x96\x8b\xe7\x99\xba\xe8\x80\x85\xe3" -"\x81\xae\x20\x47\x75\x69\x64\x6f\x20\x76\x61\x6e\x20\x52\x6f\x73" -"\x73\x75\x6d\x20\xe3\x81\xaf\xe6\x95\x99\xe8\x82\xb2\xe7\x94\xa8" -"\xe3\x81\xae\xe3\x83\x97\xe3\x83\xad\xe3\x82\xb0\xe3\x83\xa9\xe3" -"\x83\x9f\xe3\x83\xb3\xe3\x82\xb0\xe8\xa8\x80\xe8\xaa\x9e\xe3\x80" -"\x8c\x41\x42\x43\xe3\x80\x8d\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba" -"\xe3\x81\xab\xe5\x8f\x82\xe5\x8a\xa0\xe3\x81\x97\xe3\x81\xa6\xe3" -"\x81\x84\xe3\x81\xbe\xe3\x81\x97\xe3\x81\x9f\xe3\x81\x8c\xe3\x80" -"\x81\x41\x42\x43\x20\xe3\x81\xaf\xe5\xae\x9f\xe7\x94\xa8\xe4\xb8" -"\x8a\xe3\x81\xae\xe7\x9b\xae\xe7\x9a\x84\xe3\x81\xab\xe3\x81\xaf" -"\xe3\x81\x82\xe3\x81\xbe\xe3\x82\x8a\xe9\x81\xa9\xe3\x81\x97\xe3" -"\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3\x81\x9b\xe3\x82\x93\xe3\x81" -"\xa7\xe3\x81\x97\xe3\x81\x9f\xe3\x80\x82\x0a\xe3\x81\x93\xe3\x81" -"\xae\xe3\x81\x9f\xe3\x82\x81\xe3\x80\x81\x47\x75\x69\x64\x6f\x20" -"\xe3\x81\xaf\xe3\x82\x88\xe3\x82\x8a\xe5\xae\x9f\xe7\x94\xa8\xe7" -"\x9a\x84\xe3\x81\xaa\xe3\x83\x97\xe3\x83\xad\xe3\x82\xb0\xe3\x83" -"\xa9\xe3\x83\x9f\xe3\x83\xb3\xe3\x82\xb0\xe8\xa8\x80\xe8\xaa\x9e" -"\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba\xe3\x82\x92\xe9\x96\x8b\xe5" -"\xa7\x8b\xe3\x81\x97\xe3\x80\x81\xe8\x8b\xb1\xe5\x9b\xbd\x20\x42" -"\x42\x53\x20\xe6\x94\xbe\xe9\x80\x81\xe3\x81\xae\xe3\x82\xb3\xe3" -"\x83\xa1\xe3\x83\x87\xe3\x82\xa3\xe7\x95\xaa\xe7\xb5\x84\xe3\x80" -"\x8c\xe3\x83\xa2\xe3\x83\xb3\xe3\x83\x86\xe3\x82\xa3\x20\xe3\x83" -"\x91\xe3\x82\xa4\xe3\x82\xbd\xe3\x83\xb3\xe3\x80\x8d\xe3\x81\xae" -"\xe3\x83\x95\xe3\x82\xa1\xe3\x83\xb3\xe3\x81\xa7\xe3\x81\x82\xe3" -"\x82\x8b\x20\x47\x75\x69\x64\x6f\x20\xe3\x81\xaf\xe3\x81\x93\xe3" -"\x81\xae\xe8\xa8\x80\xe8\xaa\x9e\xe3\x82\x92\xe3\x80\x8c\x50\x79" -"\x74\x68\x6f\x6e\xe3\x80\x8d\xe3\x81\xa8\xe5\x90\x8d\xe3\x81\xa5" -"\xe3\x81\x91\xe3\x81\xbe\xe3\x81\x97\xe3\x81\x9f\xe3\x80\x82\x0a" -"\xe3\x81\x93\xe3\x81\xae\xe3\x82\x88\xe3\x81\x86\xe3\x81\xaa\xe8" -"\x83\x8c\xe6\x99\xaf\xe3\x81\x8b\xe3\x82\x89\xe7\x94\x9f\xe3\x81" -"\xbe\xe3\x82\x8c\xe3\x81\x9f\x20\x50\x79\x74\x68\x6f\x6e\x20\xe3" 
-"\x81\xae\xe8\xa8\x80\xe8\xaa\x9e\xe8\xa8\xad\xe8\xa8\x88\xe3\x81" -"\xaf\xe3\x80\x81\xe3\x80\x8c\xe3\x82\xb7\xe3\x83\xb3\xe3\x83\x97" -"\xe3\x83\xab\xe3\x80\x8d\xe3\x81\xa7\xe3\x80\x8c\xe7\xbf\x92\xe5" -"\xbe\x97\xe3\x81\x8c\xe5\xae\xb9\xe6\x98\x93\xe3\x80\x8d\xe3\x81" -"\xa8\xe3\x81\x84\xe3\x81\x86\xe7\x9b\xae\xe6\xa8\x99\xe3\x81\xab" -"\xe9\x87\x8d\xe7\x82\xb9\xe3\x81\x8c\xe7\xbd\xae\xe3\x81\x8b\xe3" -"\x82\x8c\xe3\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3\x81\x99\xe3\x80" -"\x82\x0a\xe5\xa4\x9a\xe3\x81\x8f\xe3\x81\xae\xe3\x82\xb9\xe3\x82" -"\xaf\xe3\x83\xaa\xe3\x83\x97\xe3\x83\x88\xe7\xb3\xbb\xe8\xa8\x80" -"\xe8\xaa\x9e\xe3\x81\xa7\xe3\x81\xaf\xe3\x83\xa6\xe3\x83\xbc\xe3" -"\x82\xb6\xe3\x81\xae\xe7\x9b\xae\xe5\x85\x88\xe3\x81\xae\xe5\x88" -"\xa9\xe4\xbe\xbf\xe6\x80\xa7\xe3\x82\x92\xe5\x84\xaa\xe5\x85\x88" -"\xe3\x81\x97\xe3\x81\xa6\xe8\x89\xb2\xe3\x80\x85\xe3\x81\xaa\xe6" -"\xa9\x9f\xe8\x83\xbd\xe3\x82\x92\xe8\xa8\x80\xe8\xaa\x9e\xe8\xa6" -"\x81\xe7\xb4\xa0\xe3\x81\xa8\xe3\x81\x97\xe3\x81\xa6\xe5\x8f\x96" -"\xe3\x82\x8a\xe5\x85\xa5\xe3\x82\x8c\xe3\x82\x8b\xe5\xa0\xb4\xe5" -"\x90\x88\xe3\x81\x8c\xe5\xa4\x9a\xe3\x81\x84\xe3\x81\xae\xe3\x81" -"\xa7\xe3\x81\x99\xe3\x81\x8c\xe3\x80\x81\x50\x79\x74\x68\x6f\x6e" -"\x20\xe3\x81\xa7\xe3\x81\xaf\xe3\x81\x9d\xe3\x81\x86\xe3\x81\x84" -"\xe3\x81\xa3\xe3\x81\x9f\xe5\xb0\x8f\xe7\xb4\xb0\xe5\xb7\xa5\xe3" -"\x81\x8c\xe8\xbf\xbd\xe5\x8a\xa0\xe3\x81\x95\xe3\x82\x8c\xe3\x82" -"\x8b\xe3\x81\x93\xe3\x81\xa8\xe3\x81\xaf\xe3\x81\x82\xe3\x81\xbe" -"\xe3\x82\x8a\xe3\x81\x82\xe3\x82\x8a\xe3\x81\xbe\xe3\x81\x9b\xe3" -"\x82\x93\xe3\x80\x82\x0a\xe8\xa8\x80\xe8\xaa\x9e\xe8\x87\xaa\xe4" -"\xbd\x93\xe3\x81\xae\xe6\xa9\x9f\xe8\x83\xbd\xe3\x81\xaf\xe6\x9c" -"\x80\xe5\xb0\x8f\xe9\x99\x90\xe3\x81\xab\xe6\x8a\xbc\xe3\x81\x95" -"\xe3\x81\x88\xe3\x80\x81\xe5\xbf\x85\xe8\xa6\x81\xe3\x81\xaa\xe6" -"\xa9\x9f\xe8\x83\xbd\xe3\x81\xaf\xe6\x8b\xa1\xe5\xbc\xb5\xe3\x83" -"\xa2\xe3\x82\xb8\xe3\x83\xa5\xe3\x83\xbc\xe3\x83\xab\xe3\x81\xa8" -"\xe3\x81\x97\xe3\x81\xa6\xe8\xbf\xbd\xe5\x8a\xa0\xe3\x81\x99\xe3" -"\x82\x8b\xe3\x80\x81\xe3\x81\xa8\xe3\x81\x84\xe3\x81\x86\xe3\x81" -"\xae\xe3\x81\x8c\x20\x50\x79\x74\x68\x6f\x6e\x20\xe3\x81\xae\xe3" -"\x83\x9d\xe3\x83\xaa\xe3\x82\xb7\xe3\x83\xbc\xe3\x81\xa7\xe3\x81" -"\x99\xe3\x80\x82\x0a\x0a\xe3\x83\x8e\xe3\x81\x8b\xe3\x82\x9a\x20" -"\xe3\x83\x88\xe3\x82\x9a\x20\xe3\x83\x88\xe3\x82\xad\xef\xa8\xb6" -"\xef\xa8\xb9\x20\xf0\xa1\x9a\xb4\xf0\xaa\x8e\x8c\x20\xe9\xba\x80" -"\xe9\xbd\x81\xf0\xa9\x9b\xb0\x0a"), -'euc_jp': ( -"\x50\x79\x74\x68\x6f\x6e\x20\xa4\xce\xb3\xab\xc8\xaf\xa4\xcf\xa1" -"\xa2\x31\x39\x39\x30\x20\xc7\xaf\xa4\xb4\xa4\xed\xa4\xab\xa4\xe9" -"\xb3\xab\xbb\xcf\xa4\xb5\xa4\xec\xa4\xc6\xa4\xa4\xa4\xde\xa4\xb9" -"\xa1\xa3\x0a\xb3\xab\xc8\xaf\xbc\xd4\xa4\xce\x20\x47\x75\x69\x64" -"\x6f\x20\x76\x61\x6e\x20\x52\x6f\x73\x73\x75\x6d\x20\xa4\xcf\xb6" -"\xb5\xb0\xe9\xcd\xd1\xa4\xce\xa5\xd7\xa5\xed\xa5\xb0\xa5\xe9\xa5" -"\xdf\xa5\xf3\xa5\xb0\xb8\xc0\xb8\xec\xa1\xd6\x41\x42\x43\xa1\xd7" -"\xa4\xce\xb3\xab\xc8\xaf\xa4\xcb\xbb\xb2\xb2\xc3\xa4\xb7\xa4\xc6" -"\xa4\xa4\xa4\xde\xa4\xb7\xa4\xbf\xa4\xac\xa1\xa2\x41\x42\x43\x20" -"\xa4\xcf\xbc\xc2\xcd\xd1\xbe\xe5\xa4\xce\xcc\xdc\xc5\xaa\xa4\xcb" -"\xa4\xcf\xa4\xa2\xa4\xde\xa4\xea\xc5\xac\xa4\xb7\xa4\xc6\xa4\xa4" -"\xa4\xde\xa4\xbb\xa4\xf3\xa4\xc7\xa4\xb7\xa4\xbf\xa1\xa3\x0a\xa4" -"\xb3\xa4\xce\xa4\xbf\xa4\xe1\xa1\xa2\x47\x75\x69\x64\x6f\x20\xa4" -"\xcf\xa4\xe8\xa4\xea\xbc\xc2\xcd\xd1\xc5\xaa\xa4\xca\xa5\xd7\xa5" -"\xed\xa5\xb0\xa5\xe9\xa5\xdf\xa5\xf3\xa5\xb0\xb8\xc0\xb8\xec\xa4" 
-"\xce\xb3\xab\xc8\xaf\xa4\xf2\xb3\xab\xbb\xcf\xa4\xb7\xa1\xa2\xb1" -"\xd1\xb9\xf1\x20\x42\x42\x53\x20\xca\xfc\xc1\xf7\xa4\xce\xa5\xb3" -"\xa5\xe1\xa5\xc7\xa5\xa3\xc8\xd6\xc1\xc8\xa1\xd6\xa5\xe2\xa5\xf3" -"\xa5\xc6\xa5\xa3\x20\xa5\xd1\xa5\xa4\xa5\xbd\xa5\xf3\xa1\xd7\xa4" -"\xce\xa5\xd5\xa5\xa1\xa5\xf3\xa4\xc7\xa4\xa2\xa4\xeb\x20\x47\x75" -"\x69\x64\x6f\x20\xa4\xcf\xa4\xb3\xa4\xce\xb8\xc0\xb8\xec\xa4\xf2" -"\xa1\xd6\x50\x79\x74\x68\x6f\x6e\xa1\xd7\xa4\xc8\xcc\xbe\xa4\xc5" -"\xa4\xb1\xa4\xde\xa4\xb7\xa4\xbf\xa1\xa3\x0a\xa4\xb3\xa4\xce\xa4" -"\xe8\xa4\xa6\xa4\xca\xc7\xd8\xb7\xca\xa4\xab\xa4\xe9\xc0\xb8\xa4" -"\xde\xa4\xec\xa4\xbf\x20\x50\x79\x74\x68\x6f\x6e\x20\xa4\xce\xb8" -"\xc0\xb8\xec\xc0\xdf\xb7\xd7\xa4\xcf\xa1\xa2\xa1\xd6\xa5\xb7\xa5" -"\xf3\xa5\xd7\xa5\xeb\xa1\xd7\xa4\xc7\xa1\xd6\xbd\xac\xc6\xc0\xa4" -"\xac\xcd\xc6\xb0\xd7\xa1\xd7\xa4\xc8\xa4\xa4\xa4\xa6\xcc\xdc\xc9" -"\xb8\xa4\xcb\xbd\xc5\xc5\xc0\xa4\xac\xc3\xd6\xa4\xab\xa4\xec\xa4" -"\xc6\xa4\xa4\xa4\xde\xa4\xb9\xa1\xa3\x0a\xc2\xbf\xa4\xaf\xa4\xce" -"\xa5\xb9\xa5\xaf\xa5\xea\xa5\xd7\xa5\xc8\xb7\xcf\xb8\xc0\xb8\xec" -"\xa4\xc7\xa4\xcf\xa5\xe6\xa1\xbc\xa5\xb6\xa4\xce\xcc\xdc\xc0\xe8" -"\xa4\xce\xcd\xf8\xca\xd8\xc0\xad\xa4\xf2\xcd\xa5\xc0\xe8\xa4\xb7" -"\xa4\xc6\xbf\xa7\xa1\xb9\xa4\xca\xb5\xa1\xc7\xbd\xa4\xf2\xb8\xc0" -"\xb8\xec\xcd\xd7\xc1\xc7\xa4\xc8\xa4\xb7\xa4\xc6\xbc\xe8\xa4\xea" -"\xc6\xfe\xa4\xec\xa4\xeb\xbe\xec\xb9\xe7\xa4\xac\xc2\xbf\xa4\xa4" -"\xa4\xce\xa4\xc7\xa4\xb9\xa4\xac\xa1\xa2\x50\x79\x74\x68\x6f\x6e" -"\x20\xa4\xc7\xa4\xcf\xa4\xbd\xa4\xa6\xa4\xa4\xa4\xc3\xa4\xbf\xbe" -"\xae\xba\xd9\xb9\xa9\xa4\xac\xc4\xc9\xb2\xc3\xa4\xb5\xa4\xec\xa4" -"\xeb\xa4\xb3\xa4\xc8\xa4\xcf\xa4\xa2\xa4\xde\xa4\xea\xa4\xa2\xa4" -"\xea\xa4\xde\xa4\xbb\xa4\xf3\xa1\xa3\x0a\xb8\xc0\xb8\xec\xbc\xab" -"\xc2\xce\xa4\xce\xb5\xa1\xc7\xbd\xa4\xcf\xba\xc7\xbe\xae\xb8\xc2" -"\xa4\xcb\xb2\xa1\xa4\xb5\xa4\xa8\xa1\xa2\xc9\xac\xcd\xd7\xa4\xca" -"\xb5\xa1\xc7\xbd\xa4\xcf\xb3\xc8\xc4\xa5\xa5\xe2\xa5\xb8\xa5\xe5" -"\xa1\xbc\xa5\xeb\xa4\xc8\xa4\xb7\xa4\xc6\xc4\xc9\xb2\xc3\xa4\xb9" -"\xa4\xeb\xa1\xa2\xa4\xc8\xa4\xa4\xa4\xa6\xa4\xce\xa4\xac\x20\x50" -"\x79\x74\x68\x6f\x6e\x20\xa4\xce\xa5\xdd\xa5\xea\xa5\xb7\xa1\xbc" -"\xa4\xc7\xa4\xb9\xa1\xa3\x0a\x0a", -"\x50\x79\x74\x68\x6f\x6e\x20\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba" -"\xe3\x81\xaf\xe3\x80\x81\x31\x39\x39\x30\x20\xe5\xb9\xb4\xe3\x81" -"\x94\xe3\x82\x8d\xe3\x81\x8b\xe3\x82\x89\xe9\x96\x8b\xe5\xa7\x8b" -"\xe3\x81\x95\xe3\x82\x8c\xe3\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3" -"\x81\x99\xe3\x80\x82\x0a\xe9\x96\x8b\xe7\x99\xba\xe8\x80\x85\xe3" -"\x81\xae\x20\x47\x75\x69\x64\x6f\x20\x76\x61\x6e\x20\x52\x6f\x73" -"\x73\x75\x6d\x20\xe3\x81\xaf\xe6\x95\x99\xe8\x82\xb2\xe7\x94\xa8" -"\xe3\x81\xae\xe3\x83\x97\xe3\x83\xad\xe3\x82\xb0\xe3\x83\xa9\xe3" -"\x83\x9f\xe3\x83\xb3\xe3\x82\xb0\xe8\xa8\x80\xe8\xaa\x9e\xe3\x80" -"\x8c\x41\x42\x43\xe3\x80\x8d\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba" -"\xe3\x81\xab\xe5\x8f\x82\xe5\x8a\xa0\xe3\x81\x97\xe3\x81\xa6\xe3" -"\x81\x84\xe3\x81\xbe\xe3\x81\x97\xe3\x81\x9f\xe3\x81\x8c\xe3\x80" -"\x81\x41\x42\x43\x20\xe3\x81\xaf\xe5\xae\x9f\xe7\x94\xa8\xe4\xb8" -"\x8a\xe3\x81\xae\xe7\x9b\xae\xe7\x9a\x84\xe3\x81\xab\xe3\x81\xaf" -"\xe3\x81\x82\xe3\x81\xbe\xe3\x82\x8a\xe9\x81\xa9\xe3\x81\x97\xe3" -"\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3\x81\x9b\xe3\x82\x93\xe3\x81" -"\xa7\xe3\x81\x97\xe3\x81\x9f\xe3\x80\x82\x0a\xe3\x81\x93\xe3\x81" -"\xae\xe3\x81\x9f\xe3\x82\x81\xe3\x80\x81\x47\x75\x69\x64\x6f\x20" -"\xe3\x81\xaf\xe3\x82\x88\xe3\x82\x8a\xe5\xae\x9f\xe7\x94\xa8\xe7" 
-"\x9a\x84\xe3\x81\xaa\xe3\x83\x97\xe3\x83\xad\xe3\x82\xb0\xe3\x83" -"\xa9\xe3\x83\x9f\xe3\x83\xb3\xe3\x82\xb0\xe8\xa8\x80\xe8\xaa\x9e" -"\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba\xe3\x82\x92\xe9\x96\x8b\xe5" -"\xa7\x8b\xe3\x81\x97\xe3\x80\x81\xe8\x8b\xb1\xe5\x9b\xbd\x20\x42" -"\x42\x53\x20\xe6\x94\xbe\xe9\x80\x81\xe3\x81\xae\xe3\x82\xb3\xe3" -"\x83\xa1\xe3\x83\x87\xe3\x82\xa3\xe7\x95\xaa\xe7\xb5\x84\xe3\x80" -"\x8c\xe3\x83\xa2\xe3\x83\xb3\xe3\x83\x86\xe3\x82\xa3\x20\xe3\x83" -"\x91\xe3\x82\xa4\xe3\x82\xbd\xe3\x83\xb3\xe3\x80\x8d\xe3\x81\xae" -"\xe3\x83\x95\xe3\x82\xa1\xe3\x83\xb3\xe3\x81\xa7\xe3\x81\x82\xe3" -"\x82\x8b\x20\x47\x75\x69\x64\x6f\x20\xe3\x81\xaf\xe3\x81\x93\xe3" -"\x81\xae\xe8\xa8\x80\xe8\xaa\x9e\xe3\x82\x92\xe3\x80\x8c\x50\x79" -"\x74\x68\x6f\x6e\xe3\x80\x8d\xe3\x81\xa8\xe5\x90\x8d\xe3\x81\xa5" -"\xe3\x81\x91\xe3\x81\xbe\xe3\x81\x97\xe3\x81\x9f\xe3\x80\x82\x0a" -"\xe3\x81\x93\xe3\x81\xae\xe3\x82\x88\xe3\x81\x86\xe3\x81\xaa\xe8" -"\x83\x8c\xe6\x99\xaf\xe3\x81\x8b\xe3\x82\x89\xe7\x94\x9f\xe3\x81" -"\xbe\xe3\x82\x8c\xe3\x81\x9f\x20\x50\x79\x74\x68\x6f\x6e\x20\xe3" -"\x81\xae\xe8\xa8\x80\xe8\xaa\x9e\xe8\xa8\xad\xe8\xa8\x88\xe3\x81" -"\xaf\xe3\x80\x81\xe3\x80\x8c\xe3\x82\xb7\xe3\x83\xb3\xe3\x83\x97" -"\xe3\x83\xab\xe3\x80\x8d\xe3\x81\xa7\xe3\x80\x8c\xe7\xbf\x92\xe5" -"\xbe\x97\xe3\x81\x8c\xe5\xae\xb9\xe6\x98\x93\xe3\x80\x8d\xe3\x81" -"\xa8\xe3\x81\x84\xe3\x81\x86\xe7\x9b\xae\xe6\xa8\x99\xe3\x81\xab" -"\xe9\x87\x8d\xe7\x82\xb9\xe3\x81\x8c\xe7\xbd\xae\xe3\x81\x8b\xe3" -"\x82\x8c\xe3\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3\x81\x99\xe3\x80" -"\x82\x0a\xe5\xa4\x9a\xe3\x81\x8f\xe3\x81\xae\xe3\x82\xb9\xe3\x82" -"\xaf\xe3\x83\xaa\xe3\x83\x97\xe3\x83\x88\xe7\xb3\xbb\xe8\xa8\x80" -"\xe8\xaa\x9e\xe3\x81\xa7\xe3\x81\xaf\xe3\x83\xa6\xe3\x83\xbc\xe3" -"\x82\xb6\xe3\x81\xae\xe7\x9b\xae\xe5\x85\x88\xe3\x81\xae\xe5\x88" -"\xa9\xe4\xbe\xbf\xe6\x80\xa7\xe3\x82\x92\xe5\x84\xaa\xe5\x85\x88" -"\xe3\x81\x97\xe3\x81\xa6\xe8\x89\xb2\xe3\x80\x85\xe3\x81\xaa\xe6" -"\xa9\x9f\xe8\x83\xbd\xe3\x82\x92\xe8\xa8\x80\xe8\xaa\x9e\xe8\xa6" -"\x81\xe7\xb4\xa0\xe3\x81\xa8\xe3\x81\x97\xe3\x81\xa6\xe5\x8f\x96" -"\xe3\x82\x8a\xe5\x85\xa5\xe3\x82\x8c\xe3\x82\x8b\xe5\xa0\xb4\xe5" -"\x90\x88\xe3\x81\x8c\xe5\xa4\x9a\xe3\x81\x84\xe3\x81\xae\xe3\x81" -"\xa7\xe3\x81\x99\xe3\x81\x8c\xe3\x80\x81\x50\x79\x74\x68\x6f\x6e" -"\x20\xe3\x81\xa7\xe3\x81\xaf\xe3\x81\x9d\xe3\x81\x86\xe3\x81\x84" -"\xe3\x81\xa3\xe3\x81\x9f\xe5\xb0\x8f\xe7\xb4\xb0\xe5\xb7\xa5\xe3" -"\x81\x8c\xe8\xbf\xbd\xe5\x8a\xa0\xe3\x81\x95\xe3\x82\x8c\xe3\x82" -"\x8b\xe3\x81\x93\xe3\x81\xa8\xe3\x81\xaf\xe3\x81\x82\xe3\x81\xbe" -"\xe3\x82\x8a\xe3\x81\x82\xe3\x82\x8a\xe3\x81\xbe\xe3\x81\x9b\xe3" -"\x82\x93\xe3\x80\x82\x0a\xe8\xa8\x80\xe8\xaa\x9e\xe8\x87\xaa\xe4" -"\xbd\x93\xe3\x81\xae\xe6\xa9\x9f\xe8\x83\xbd\xe3\x81\xaf\xe6\x9c" -"\x80\xe5\xb0\x8f\xe9\x99\x90\xe3\x81\xab\xe6\x8a\xbc\xe3\x81\x95" -"\xe3\x81\x88\xe3\x80\x81\xe5\xbf\x85\xe8\xa6\x81\xe3\x81\xaa\xe6" -"\xa9\x9f\xe8\x83\xbd\xe3\x81\xaf\xe6\x8b\xa1\xe5\xbc\xb5\xe3\x83" -"\xa2\xe3\x82\xb8\xe3\x83\xa5\xe3\x83\xbc\xe3\x83\xab\xe3\x81\xa8" -"\xe3\x81\x97\xe3\x81\xa6\xe8\xbf\xbd\xe5\x8a\xa0\xe3\x81\x99\xe3" -"\x82\x8b\xe3\x80\x81\xe3\x81\xa8\xe3\x81\x84\xe3\x81\x86\xe3\x81" -"\xae\xe3\x81\x8c\x20\x50\x79\x74\x68\x6f\x6e\x20\xe3\x81\xae\xe3" -"\x83\x9d\xe3\x83\xaa\xe3\x82\xb7\xe3\x83\xbc\xe3\x81\xa7\xe3\x81" -"\x99\xe3\x80\x82\x0a\x0a"), -'euc_kr': ( -"\xa1\xdd\x20\xc6\xc4\xc0\xcc\xbd\xe3\x28\x50\x79\x74\x68\x6f\x6e" -"\x29\xc0\xba\x20\xb9\xe8\xbf\xec\xb1\xe2\x20\xbd\xb1\xb0\xed\x2c" 
-"\x20\xb0\xad\xb7\xc2\xc7\xd1\x20\xc7\xc1\xb7\xce\xb1\xd7\xb7\xa1" -"\xb9\xd6\x20\xbe\xf0\xbe\xee\xc0\xd4\xb4\xcf\xb4\xd9\x2e\x20\xc6" -"\xc4\xc0\xcc\xbd\xe3\xc0\xba\x0a\xc8\xbf\xc0\xb2\xc0\xfb\xc0\xce" -"\x20\xb0\xed\xbc\xf6\xc1\xd8\x20\xb5\xa5\xc0\xcc\xc5\xcd\x20\xb1" -"\xb8\xc1\xb6\xbf\xcd\x20\xb0\xa3\xb4\xdc\xc7\xcf\xc1\xf6\xb8\xb8" -"\x20\xc8\xbf\xc0\xb2\xc0\xfb\xc0\xce\x20\xb0\xb4\xc3\xbc\xc1\xf6" -"\xc7\xe2\xc7\xc1\xb7\xce\xb1\xd7\xb7\xa1\xb9\xd6\xc0\xbb\x0a\xc1" -"\xf6\xbf\xf8\xc7\xd5\xb4\xcf\xb4\xd9\x2e\x20\xc6\xc4\xc0\xcc\xbd" -"\xe3\xc0\xc7\x20\xbf\xec\xbe\xc6\x28\xe9\xd0\xe4\xba\x29\xc7\xd1" -"\x20\xb9\xae\xb9\xfd\xb0\xfa\x20\xb5\xbf\xc0\xfb\x20\xc5\xb8\xc0" -"\xcc\xc7\xce\x2c\x20\xb1\xd7\xb8\xae\xb0\xed\x20\xc0\xce\xc5\xcd" -"\xc7\xc1\xb8\xae\xc6\xc3\x0a\xc8\xaf\xb0\xe6\xc0\xba\x20\xc6\xc4" -"\xc0\xcc\xbd\xe3\xc0\xbb\x20\xbd\xba\xc5\xa9\xb8\xb3\xc6\xc3\xb0" -"\xfa\x20\xbf\xa9\xb7\xaf\x20\xba\xd0\xbe\xdf\xbf\xa1\xbc\xad\xbf" -"\xcd\x20\xb4\xeb\xba\xce\xba\xd0\xc0\xc7\x20\xc7\xc3\xb7\xa7\xc6" -"\xfb\xbf\xa1\xbc\xad\xc0\xc7\x20\xba\xfc\xb8\xa5\x0a\xbe\xd6\xc7" -"\xc3\xb8\xae\xc4\xc9\xc0\xcc\xbc\xc7\x20\xb0\xb3\xb9\xdf\xc0\xbb" -"\x20\xc7\xd2\x20\xbc\xf6\x20\xc0\xd6\xb4\xc2\x20\xc0\xcc\xbb\xf3" -"\xc0\xfb\xc0\xce\x20\xbe\xf0\xbe\xee\xb7\xce\x20\xb8\xb8\xb5\xe9" -"\xbe\xee\xc1\xdd\xb4\xcf\xb4\xd9\x2e\x0a\x0a\xa1\xd9\xc3\xb9\xb0" -"\xa1\xb3\xa1\x3a\x20\xb3\xaf\xbe\xc6\xb6\xf3\x20\xa4\xd4\xa4\xb6" -"\xa4\xd0\xa4\xd4\xa4\xd4\xa4\xb6\xa4\xd0\xa4\xd4\xbe\xb1\x7e\x20" -"\xa4\xd4\xa4\xa4\xa4\xd2\xa4\xb7\xc5\xad\x21\x20\xa4\xd4\xa4\xa8" -"\xa4\xd1\xa4\xb7\xb1\xdd\xbe\xf8\xc0\xcc\x20\xc0\xfc\xa4\xd4\xa4" -"\xbe\xa4\xc8\xa4\xb2\xb4\xcf\xb4\xd9\x2e\x20\xa4\xd4\xa4\xb2\xa4" -"\xce\xa4\xaa\x2e\x20\xb1\xd7\xb7\xb1\xb0\xc5\x20\xa4\xd4\xa4\xb7" -"\xa4\xd1\xa4\xb4\xb4\xd9\x2e\x0a", -"\xe2\x97\x8e\x20\xed\x8c\x8c\xec\x9d\xb4\xec\x8d\xac\x28\x50\x79" -"\x74\x68\x6f\x6e\x29\xec\x9d\x80\x20\xeb\xb0\xb0\xec\x9a\xb0\xea" -"\xb8\xb0\x20\xec\x89\xbd\xea\xb3\xa0\x2c\x20\xea\xb0\x95\xeb\xa0" -"\xa5\xed\x95\x9c\x20\xed\x94\x84\xeb\xa1\x9c\xea\xb7\xb8\xeb\x9e" -"\x98\xeb\xb0\x8d\x20\xec\x96\xb8\xec\x96\xb4\xec\x9e\x85\xeb\x8b" -"\x88\xeb\x8b\xa4\x2e\x20\xed\x8c\x8c\xec\x9d\xb4\xec\x8d\xac\xec" -"\x9d\x80\x0a\xed\x9a\xa8\xec\x9c\xa8\xec\xa0\x81\xec\x9d\xb8\x20" -"\xea\xb3\xa0\xec\x88\x98\xec\xa4\x80\x20\xeb\x8d\xb0\xec\x9d\xb4" -"\xed\x84\xb0\x20\xea\xb5\xac\xec\xa1\xb0\xec\x99\x80\x20\xea\xb0" -"\x84\xeb\x8b\xa8\xed\x95\x98\xec\xa7\x80\xeb\xa7\x8c\x20\xed\x9a" -"\xa8\xec\x9c\xa8\xec\xa0\x81\xec\x9d\xb8\x20\xea\xb0\x9d\xec\xb2" -"\xb4\xec\xa7\x80\xed\x96\xa5\xed\x94\x84\xeb\xa1\x9c\xea\xb7\xb8" -"\xeb\x9e\x98\xeb\xb0\x8d\xec\x9d\x84\x0a\xec\xa7\x80\xec\x9b\x90" -"\xed\x95\xa9\xeb\x8b\x88\xeb\x8b\xa4\x2e\x20\xed\x8c\x8c\xec\x9d" -"\xb4\xec\x8d\xac\xec\x9d\x98\x20\xec\x9a\xb0\xec\x95\x84\x28\xe5" -"\x84\xaa\xe9\x9b\x85\x29\xed\x95\x9c\x20\xeb\xac\xb8\xeb\xb2\x95" -"\xea\xb3\xbc\x20\xeb\x8f\x99\xec\xa0\x81\x20\xed\x83\x80\xec\x9d" -"\xb4\xed\x95\x91\x2c\x20\xea\xb7\xb8\xeb\xa6\xac\xea\xb3\xa0\x20" -"\xec\x9d\xb8\xed\x84\xb0\xed\x94\x84\xeb\xa6\xac\xed\x8c\x85\x0a" -"\xed\x99\x98\xea\xb2\xbd\xec\x9d\x80\x20\xed\x8c\x8c\xec\x9d\xb4" -"\xec\x8d\xac\xec\x9d\x84\x20\xec\x8a\xa4\xed\x81\xac\xeb\xa6\xbd" -"\xed\x8c\x85\xea\xb3\xbc\x20\xec\x97\xac\xeb\x9f\xac\x20\xeb\xb6" -"\x84\xec\x95\xbc\xec\x97\x90\xec\x84\x9c\xec\x99\x80\x20\xeb\x8c" -"\x80\xeb\xb6\x80\xeb\xb6\x84\xec\x9d\x98\x20\xed\x94\x8c\xeb\x9e" -"\xab\xed\x8f\xbc\xec\x97\x90\xec\x84\x9c\xec\x9d\x98\x20\xeb\xb9" 
-"\xa0\xeb\xa5\xb8\x0a\xec\x95\xa0\xed\x94\x8c\xeb\xa6\xac\xec\xbc" -"\x80\xec\x9d\xb4\xec\x85\x98\x20\xea\xb0\x9c\xeb\xb0\x9c\xec\x9d" -"\x84\x20\xed\x95\xa0\x20\xec\x88\x98\x20\xec\x9e\x88\xeb\x8a\x94" -"\x20\xec\x9d\xb4\xec\x83\x81\xec\xa0\x81\xec\x9d\xb8\x20\xec\x96" -"\xb8\xec\x96\xb4\xeb\xa1\x9c\x20\xeb\xa7\x8c\xeb\x93\xa4\xec\x96" -"\xb4\xec\xa4\x8d\xeb\x8b\x88\xeb\x8b\xa4\x2e\x0a\x0a\xe2\x98\x86" -"\xec\xb2\xab\xea\xb0\x80\xeb\x81\x9d\x3a\x20\xeb\x82\xa0\xec\x95" -"\x84\xeb\x9d\xbc\x20\xec\x93\x94\xec\x93\x94\xec\x93\xa9\x7e\x20" -"\xeb\x8b\x81\xed\x81\xbc\x21\x20\xeb\x9c\xbd\xea\xb8\x88\xec\x97" -"\x86\xec\x9d\xb4\x20\xec\xa0\x84\xed\x99\xa5\xeb\x8b\x88\xeb\x8b" -"\xa4\x2e\x20\xeb\xb7\x81\x2e\x20\xea\xb7\xb8\xeb\x9f\xb0\xea\xb1" -"\xb0\x20\xec\x9d\x8e\xeb\x8b\xa4\x2e\x0a"), -'gb18030': ( -"\x50\x79\x74\x68\x6f\x6e\xa3\xa8\xc5\xc9\xc9\xad\xa3\xa9\xd3\xef" -"\xd1\xd4\xca\xc7\xd2\xbb\xd6\xd6\xb9\xa6\xc4\xdc\xc7\xbf\xb4\xf3" -"\xb6\xf8\xcd\xea\xc9\xc6\xb5\xc4\xcd\xa8\xd3\xc3\xd0\xcd\xbc\xc6" -"\xcb\xe3\xbb\xfa\xb3\xcc\xd0\xf2\xc9\xe8\xbc\xc6\xd3\xef\xd1\xd4" -"\xa3\xac\x0a\xd2\xd1\xbe\xad\xbe\xdf\xd3\xd0\xca\xae\xb6\xe0\xc4" -"\xea\xb5\xc4\xb7\xa2\xd5\xb9\xc0\xfa\xca\xb7\xa3\xac\xb3\xc9\xca" -"\xec\xc7\xd2\xce\xc8\xb6\xa8\xa1\xa3\xd5\xe2\xd6\xd6\xd3\xef\xd1" -"\xd4\xbe\xdf\xd3\xd0\xb7\xc7\xb3\xa3\xbc\xf2\xbd\xdd\xb6\xf8\xc7" -"\xe5\xce\xfa\x0a\xb5\xc4\xd3\xef\xb7\xa8\xcc\xd8\xb5\xe3\xa3\xac" -"\xca\xca\xba\xcf\xcd\xea\xb3\xc9\xb8\xf7\xd6\xd6\xb8\xdf\xb2\xe3" -"\xc8\xce\xce\xf1\xa3\xac\xbc\xb8\xba\xf5\xbf\xc9\xd2\xd4\xd4\xda" -"\xcb\xf9\xd3\xd0\xb5\xc4\xb2\xd9\xd7\xf7\xcf\xb5\xcd\xb3\xd6\xd0" -"\x0a\xd4\xcb\xd0\xd0\xa1\xa3\xd5\xe2\xd6\xd6\xd3\xef\xd1\xd4\xbc" -"\xf2\xb5\xa5\xb6\xf8\xc7\xbf\xb4\xf3\xa3\xac\xca\xca\xba\xcf\xb8" -"\xf7\xd6\xd6\xc8\xcb\xca\xbf\xd1\xa7\xcf\xb0\xca\xb9\xd3\xc3\xa1" -"\xa3\xc4\xbf\xc7\xb0\xa3\xac\xbb\xf9\xd3\xda\xd5\xe2\x0a\xd6\xd6" -"\xd3\xef\xd1\xd4\xb5\xc4\xcf\xe0\xb9\xd8\xbc\xbc\xca\xf5\xd5\xfd" -"\xd4\xda\xb7\xc9\xcb\xd9\xb5\xc4\xb7\xa2\xd5\xb9\xa3\xac\xd3\xc3" -"\xbb\xa7\xca\xfd\xc1\xbf\xbc\xb1\xbe\xe7\xc0\xa9\xb4\xf3\xa3\xac" -"\xcf\xe0\xb9\xd8\xb5\xc4\xd7\xca\xd4\xb4\xb7\xc7\xb3\xa3\xb6\xe0" -"\xa1\xa3\x0a\xc8\xe7\xba\xce\xd4\xda\x20\x50\x79\x74\x68\x6f\x6e" -"\x20\xd6\xd0\xca\xb9\xd3\xc3\xbc\xc8\xd3\xd0\xb5\xc4\x20\x43\x20" -"\x6c\x69\x62\x72\x61\x72\x79\x3f\x0a\xa1\xa1\xd4\xda\xd9\x59\xd3" -"\x8d\xbf\xc6\xbc\xbc\xbf\xec\xcb\xd9\xb0\x6c\xd5\xb9\xb5\xc4\xbd" -"\xf1\xcc\xec\x2c\x20\xe9\x5f\xb0\x6c\xbc\xb0\x9c\x79\xd4\x87\xdc" -"\x9b\xf3\x77\xb5\xc4\xcb\xd9\xb6\xc8\xca\xc7\xb2\xbb\xc8\xdd\xba" -"\xf6\xd2\x95\xb5\xc4\x0a\xd5\x6e\xee\x7d\x2e\x20\x9e\xe9\xbc\xd3" -"\xbf\xec\xe9\x5f\xb0\x6c\xbc\xb0\x9c\x79\xd4\x87\xb5\xc4\xcb\xd9" -"\xb6\xc8\x2c\x20\xce\xd2\x82\x83\xb1\xe3\xb3\xa3\xcf\xa3\xcd\xfb" -"\xc4\xdc\xc0\xfb\xd3\xc3\xd2\xbb\xd0\xa9\xd2\xd1\xe9\x5f\xb0\x6c" -"\xba\xc3\xb5\xc4\x0a\x6c\x69\x62\x72\x61\x72\x79\x2c\x20\x81\x4b" -"\xd3\xd0\xd2\xbb\x82\x80\x20\x66\x61\x73\x74\x20\x70\x72\x6f\x74" -"\x6f\x74\x79\x70\x69\x6e\x67\x20\xb5\xc4\x20\x70\x72\x6f\x67\x72" -"\x61\x6d\x6d\x69\x6e\x67\x20\x6c\x61\x6e\x67\x75\x61\x67\x65\x20" -"\xbf\xc9\x0a\xb9\xa9\xca\xb9\xd3\xc3\x2e\x20\xc4\xbf\xc7\xb0\xd3" -"\xd0\xd4\x53\xd4\x53\xb6\xe0\xb6\xe0\xb5\xc4\x20\x6c\x69\x62\x72" -"\x61\x72\x79\x20\xca\xc7\xd2\xd4\x20\x43\x20\x8c\x91\xb3\xc9\x2c" -"\x20\xb6\xf8\x20\x50\x79\x74\x68\x6f\x6e\x20\xca\xc7\xd2\xbb\x82" -"\x80\x0a\x66\x61\x73\x74\x20\x70\x72\x6f\x74\x6f\x74\x79\x70\x69" -"\x6e\x67\x20\xb5\xc4\x20\x70\x72\x6f\x67\x72\x61\x6d\x6d\x69\x6e" 
-"\x67\x20\x6c\x61\x6e\x67\x75\x61\x67\x65\x2e\x20\xb9\xca\xce\xd2" -"\x82\x83\xcf\xa3\xcd\xfb\xc4\xdc\x8c\xa2\xbc\xc8\xd3\xd0\xb5\xc4" -"\x0a\x43\x20\x6c\x69\x62\x72\x61\x72\x79\x20\xc4\xc3\xb5\xbd\x20" -"\x50\x79\x74\x68\x6f\x6e\x20\xb5\xc4\xad\x68\xbe\xb3\xd6\xd0\x9c" -"\x79\xd4\x87\xbc\xb0\xd5\xfb\xba\xcf\x2e\x20\xc6\xe4\xd6\xd0\xd7" -"\xee\xd6\xf7\xd2\xaa\xd2\xb2\xca\xc7\xce\xd2\x82\x83\xcb\xf9\x0a" -"\xd2\xaa\xd3\x91\xd5\x93\xb5\xc4\x86\x96\xee\x7d\xbe\xcd\xca\xc7" -"\x3a\x0a\x83\x35\xc7\x31\x83\x33\x9a\x33\x83\x32\xb1\x31\x83\x33" -"\x95\x31\x20\x82\x37\xd1\x36\x83\x30\x8c\x34\x83\x36\x84\x33\x20" -"\x82\x38\x89\x35\x82\x38\xfb\x36\x83\x33\x95\x35\x20\x83\x33\xd5" -"\x31\x82\x39\x81\x35\x20\x83\x30\xfd\x39\x83\x33\x86\x30\x20\x83" -"\x34\xdc\x33\x83\x35\xf6\x37\x83\x35\x97\x35\x20\x83\x35\xf9\x35" -"\x83\x30\x91\x39\x82\x38\x83\x39\x82\x39\xfc\x33\x83\x30\xf0\x34" -"\x20\x83\x32\xeb\x39\x83\x32\xeb\x35\x82\x39\x83\x39\x2e\x0a\x0a", -"\x50\x79\x74\x68\x6f\x6e\xef\xbc\x88\xe6\xb4\xbe\xe6\xa3\xae\xef" -"\xbc\x89\xe8\xaf\xad\xe8\xa8\x80\xe6\x98\xaf\xe4\xb8\x80\xe7\xa7" -"\x8d\xe5\x8a\x9f\xe8\x83\xbd\xe5\xbc\xba\xe5\xa4\xa7\xe8\x80\x8c" -"\xe5\xae\x8c\xe5\x96\x84\xe7\x9a\x84\xe9\x80\x9a\xe7\x94\xa8\xe5" -"\x9e\x8b\xe8\xae\xa1\xe7\xae\x97\xe6\x9c\xba\xe7\xa8\x8b\xe5\xba" -"\x8f\xe8\xae\xbe\xe8\xae\xa1\xe8\xaf\xad\xe8\xa8\x80\xef\xbc\x8c" -"\x0a\xe5\xb7\xb2\xe7\xbb\x8f\xe5\x85\xb7\xe6\x9c\x89\xe5\x8d\x81" -"\xe5\xa4\x9a\xe5\xb9\xb4\xe7\x9a\x84\xe5\x8f\x91\xe5\xb1\x95\xe5" -"\x8e\x86\xe5\x8f\xb2\xef\xbc\x8c\xe6\x88\x90\xe7\x86\x9f\xe4\xb8" -"\x94\xe7\xa8\xb3\xe5\xae\x9a\xe3\x80\x82\xe8\xbf\x99\xe7\xa7\x8d" -"\xe8\xaf\xad\xe8\xa8\x80\xe5\x85\xb7\xe6\x9c\x89\xe9\x9d\x9e\xe5" -"\xb8\xb8\xe7\xae\x80\xe6\x8d\xb7\xe8\x80\x8c\xe6\xb8\x85\xe6\x99" -"\xb0\x0a\xe7\x9a\x84\xe8\xaf\xad\xe6\xb3\x95\xe7\x89\xb9\xe7\x82" -"\xb9\xef\xbc\x8c\xe9\x80\x82\xe5\x90\x88\xe5\xae\x8c\xe6\x88\x90" -"\xe5\x90\x84\xe7\xa7\x8d\xe9\xab\x98\xe5\xb1\x82\xe4\xbb\xbb\xe5" -"\x8a\xa1\xef\xbc\x8c\xe5\x87\xa0\xe4\xb9\x8e\xe5\x8f\xaf\xe4\xbb" -"\xa5\xe5\x9c\xa8\xe6\x89\x80\xe6\x9c\x89\xe7\x9a\x84\xe6\x93\x8d" -"\xe4\xbd\x9c\xe7\xb3\xbb\xe7\xbb\x9f\xe4\xb8\xad\x0a\xe8\xbf\x90" -"\xe8\xa1\x8c\xe3\x80\x82\xe8\xbf\x99\xe7\xa7\x8d\xe8\xaf\xad\xe8" -"\xa8\x80\xe7\xae\x80\xe5\x8d\x95\xe8\x80\x8c\xe5\xbc\xba\xe5\xa4" -"\xa7\xef\xbc\x8c\xe9\x80\x82\xe5\x90\x88\xe5\x90\x84\xe7\xa7\x8d" -"\xe4\xba\xba\xe5\xa3\xab\xe5\xad\xa6\xe4\xb9\xa0\xe4\xbd\xbf\xe7" -"\x94\xa8\xe3\x80\x82\xe7\x9b\xae\xe5\x89\x8d\xef\xbc\x8c\xe5\x9f" -"\xba\xe4\xba\x8e\xe8\xbf\x99\x0a\xe7\xa7\x8d\xe8\xaf\xad\xe8\xa8" -"\x80\xe7\x9a\x84\xe7\x9b\xb8\xe5\x85\xb3\xe6\x8a\x80\xe6\x9c\xaf" -"\xe6\xad\xa3\xe5\x9c\xa8\xe9\xa3\x9e\xe9\x80\x9f\xe7\x9a\x84\xe5" -"\x8f\x91\xe5\xb1\x95\xef\xbc\x8c\xe7\x94\xa8\xe6\x88\xb7\xe6\x95" -"\xb0\xe9\x87\x8f\xe6\x80\xa5\xe5\x89\xa7\xe6\x89\xa9\xe5\xa4\xa7" -"\xef\xbc\x8c\xe7\x9b\xb8\xe5\x85\xb3\xe7\x9a\x84\xe8\xb5\x84\xe6" -"\xba\x90\xe9\x9d\x9e\xe5\xb8\xb8\xe5\xa4\x9a\xe3\x80\x82\x0a\xe5" -"\xa6\x82\xe4\xbd\x95\xe5\x9c\xa8\x20\x50\x79\x74\x68\x6f\x6e\x20" -"\xe4\xb8\xad\xe4\xbd\xbf\xe7\x94\xa8\xe6\x97\xa2\xe6\x9c\x89\xe7" -"\x9a\x84\x20\x43\x20\x6c\x69\x62\x72\x61\x72\x79\x3f\x0a\xe3\x80" -"\x80\xe5\x9c\xa8\xe8\xb3\x87\xe8\xa8\x8a\xe7\xa7\x91\xe6\x8a\x80" -"\xe5\xbf\xab\xe9\x80\x9f\xe7\x99\xbc\xe5\xb1\x95\xe7\x9a\x84\xe4" -"\xbb\x8a\xe5\xa4\xa9\x2c\x20\xe9\x96\x8b\xe7\x99\xbc\xe5\x8f\x8a" -"\xe6\xb8\xac\xe8\xa9\xa6\xe8\xbb\x9f\xe9\xab\x94\xe7\x9a\x84\xe9" -"\x80\x9f\xe5\xba\xa6\xe6\x98\xaf\xe4\xb8\x8d\xe5\xae\xb9\xe5\xbf" 
-"\xbd\xe8\xa6\x96\xe7\x9a\x84\x0a\xe8\xaa\xb2\xe9\xa1\x8c\x2e\x20" -"\xe7\x82\xba\xe5\x8a\xa0\xe5\xbf\xab\xe9\x96\x8b\xe7\x99\xbc\xe5" -"\x8f\x8a\xe6\xb8\xac\xe8\xa9\xa6\xe7\x9a\x84\xe9\x80\x9f\xe5\xba" -"\xa6\x2c\x20\xe6\x88\x91\xe5\x80\x91\xe4\xbe\xbf\xe5\xb8\xb8\xe5" -"\xb8\x8c\xe6\x9c\x9b\xe8\x83\xbd\xe5\x88\xa9\xe7\x94\xa8\xe4\xb8" -"\x80\xe4\xba\x9b\xe5\xb7\xb2\xe9\x96\x8b\xe7\x99\xbc\xe5\xa5\xbd" -"\xe7\x9a\x84\x0a\x6c\x69\x62\x72\x61\x72\x79\x2c\x20\xe4\xb8\xa6" -"\xe6\x9c\x89\xe4\xb8\x80\xe5\x80\x8b\x20\x66\x61\x73\x74\x20\x70" -"\x72\x6f\x74\x6f\x74\x79\x70\x69\x6e\x67\x20\xe7\x9a\x84\x20\x70" -"\x72\x6f\x67\x72\x61\x6d\x6d\x69\x6e\x67\x20\x6c\x61\x6e\x67\x75" -"\x61\x67\x65\x20\xe5\x8f\xaf\x0a\xe4\xbe\x9b\xe4\xbd\xbf\xe7\x94" -"\xa8\x2e\x20\xe7\x9b\xae\xe5\x89\x8d\xe6\x9c\x89\xe8\xa8\xb1\xe8" -"\xa8\xb1\xe5\xa4\x9a\xe5\xa4\x9a\xe7\x9a\x84\x20\x6c\x69\x62\x72" -"\x61\x72\x79\x20\xe6\x98\xaf\xe4\xbb\xa5\x20\x43\x20\xe5\xaf\xab" -"\xe6\x88\x90\x2c\x20\xe8\x80\x8c\x20\x50\x79\x74\x68\x6f\x6e\x20" -"\xe6\x98\xaf\xe4\xb8\x80\xe5\x80\x8b\x0a\x66\x61\x73\x74\x20\x70" -"\x72\x6f\x74\x6f\x74\x79\x70\x69\x6e\x67\x20\xe7\x9a\x84\x20\x70" -"\x72\x6f\x67\x72\x61\x6d\x6d\x69\x6e\x67\x20\x6c\x61\x6e\x67\x75" -"\x61\x67\x65\x2e\x20\xe6\x95\x85\xe6\x88\x91\xe5\x80\x91\xe5\xb8" -"\x8c\xe6\x9c\x9b\xe8\x83\xbd\xe5\xb0\x87\xe6\x97\xa2\xe6\x9c\x89" -"\xe7\x9a\x84\x0a\x43\x20\x6c\x69\x62\x72\x61\x72\x79\x20\xe6\x8b" -"\xbf\xe5\x88\xb0\x20\x50\x79\x74\x68\x6f\x6e\x20\xe7\x9a\x84\xe7" -"\x92\xb0\xe5\xa2\x83\xe4\xb8\xad\xe6\xb8\xac\xe8\xa9\xa6\xe5\x8f" -"\x8a\xe6\x95\xb4\xe5\x90\x88\x2e\x20\xe5\x85\xb6\xe4\xb8\xad\xe6" -"\x9c\x80\xe4\xb8\xbb\xe8\xa6\x81\xe4\xb9\x9f\xe6\x98\xaf\xe6\x88" -"\x91\xe5\x80\x91\xe6\x89\x80\x0a\xe8\xa6\x81\xe8\xa8\x8e\xe8\xab" -"\x96\xe7\x9a\x84\xe5\x95\x8f\xe9\xa1\x8c\xe5\xb0\xb1\xe6\x98\xaf" -"\x3a\x0a\xed\x8c\x8c\xec\x9d\xb4\xec\x8d\xac\xec\x9d\x80\x20\xea" -"\xb0\x95\xeb\xa0\xa5\xed\x95\x9c\x20\xea\xb8\xb0\xeb\x8a\xa5\xec" -"\x9d\x84\x20\xec\xa7\x80\xeb\x8b\x8c\x20\xeb\xb2\x94\xec\x9a\xa9" -"\x20\xec\xbb\xb4\xed\x93\xa8\xed\x84\xb0\x20\xed\x94\x84\xeb\xa1" -"\x9c\xea\xb7\xb8\xeb\x9e\x98\xeb\xb0\x8d\x20\xec\x96\xb8\xec\x96" -"\xb4\xeb\x8b\xa4\x2e\x0a\x0a"), -'gb2312': ( -"\x50\x79\x74\x68\x6f\x6e\xa3\xa8\xc5\xc9\xc9\xad\xa3\xa9\xd3\xef" -"\xd1\xd4\xca\xc7\xd2\xbb\xd6\xd6\xb9\xa6\xc4\xdc\xc7\xbf\xb4\xf3" -"\xb6\xf8\xcd\xea\xc9\xc6\xb5\xc4\xcd\xa8\xd3\xc3\xd0\xcd\xbc\xc6" -"\xcb\xe3\xbb\xfa\xb3\xcc\xd0\xf2\xc9\xe8\xbc\xc6\xd3\xef\xd1\xd4" -"\xa3\xac\x0a\xd2\xd1\xbe\xad\xbe\xdf\xd3\xd0\xca\xae\xb6\xe0\xc4" -"\xea\xb5\xc4\xb7\xa2\xd5\xb9\xc0\xfa\xca\xb7\xa3\xac\xb3\xc9\xca" -"\xec\xc7\xd2\xce\xc8\xb6\xa8\xa1\xa3\xd5\xe2\xd6\xd6\xd3\xef\xd1" -"\xd4\xbe\xdf\xd3\xd0\xb7\xc7\xb3\xa3\xbc\xf2\xbd\xdd\xb6\xf8\xc7" -"\xe5\xce\xfa\x0a\xb5\xc4\xd3\xef\xb7\xa8\xcc\xd8\xb5\xe3\xa3\xac" -"\xca\xca\xba\xcf\xcd\xea\xb3\xc9\xb8\xf7\xd6\xd6\xb8\xdf\xb2\xe3" -"\xc8\xce\xce\xf1\xa3\xac\xbc\xb8\xba\xf5\xbf\xc9\xd2\xd4\xd4\xda" -"\xcb\xf9\xd3\xd0\xb5\xc4\xb2\xd9\xd7\xf7\xcf\xb5\xcd\xb3\xd6\xd0" -"\x0a\xd4\xcb\xd0\xd0\xa1\xa3\xd5\xe2\xd6\xd6\xd3\xef\xd1\xd4\xbc" -"\xf2\xb5\xa5\xb6\xf8\xc7\xbf\xb4\xf3\xa3\xac\xca\xca\xba\xcf\xb8" -"\xf7\xd6\xd6\xc8\xcb\xca\xbf\xd1\xa7\xcf\xb0\xca\xb9\xd3\xc3\xa1" -"\xa3\xc4\xbf\xc7\xb0\xa3\xac\xbb\xf9\xd3\xda\xd5\xe2\x0a\xd6\xd6" -"\xd3\xef\xd1\xd4\xb5\xc4\xcf\xe0\xb9\xd8\xbc\xbc\xca\xf5\xd5\xfd" -"\xd4\xda\xb7\xc9\xcb\xd9\xb5\xc4\xb7\xa2\xd5\xb9\xa3\xac\xd3\xc3" -"\xbb\xa7\xca\xfd\xc1\xbf\xbc\xb1\xbe\xe7\xc0\xa9\xb4\xf3\xa3\xac" 
-"\xcf\xe0\xb9\xd8\xb5\xc4\xd7\xca\xd4\xb4\xb7\xc7\xb3\xa3\xb6\xe0" -"\xa1\xa3\x0a\x0a", -"\x50\x79\x74\x68\x6f\x6e\xef\xbc\x88\xe6\xb4\xbe\xe6\xa3\xae\xef" -"\xbc\x89\xe8\xaf\xad\xe8\xa8\x80\xe6\x98\xaf\xe4\xb8\x80\xe7\xa7" -"\x8d\xe5\x8a\x9f\xe8\x83\xbd\xe5\xbc\xba\xe5\xa4\xa7\xe8\x80\x8c" -"\xe5\xae\x8c\xe5\x96\x84\xe7\x9a\x84\xe9\x80\x9a\xe7\x94\xa8\xe5" -"\x9e\x8b\xe8\xae\xa1\xe7\xae\x97\xe6\x9c\xba\xe7\xa8\x8b\xe5\xba" -"\x8f\xe8\xae\xbe\xe8\xae\xa1\xe8\xaf\xad\xe8\xa8\x80\xef\xbc\x8c" -"\x0a\xe5\xb7\xb2\xe7\xbb\x8f\xe5\x85\xb7\xe6\x9c\x89\xe5\x8d\x81" -"\xe5\xa4\x9a\xe5\xb9\xb4\xe7\x9a\x84\xe5\x8f\x91\xe5\xb1\x95\xe5" -"\x8e\x86\xe5\x8f\xb2\xef\xbc\x8c\xe6\x88\x90\xe7\x86\x9f\xe4\xb8" -"\x94\xe7\xa8\xb3\xe5\xae\x9a\xe3\x80\x82\xe8\xbf\x99\xe7\xa7\x8d" -"\xe8\xaf\xad\xe8\xa8\x80\xe5\x85\xb7\xe6\x9c\x89\xe9\x9d\x9e\xe5" -"\xb8\xb8\xe7\xae\x80\xe6\x8d\xb7\xe8\x80\x8c\xe6\xb8\x85\xe6\x99" -"\xb0\x0a\xe7\x9a\x84\xe8\xaf\xad\xe6\xb3\x95\xe7\x89\xb9\xe7\x82" -"\xb9\xef\xbc\x8c\xe9\x80\x82\xe5\x90\x88\xe5\xae\x8c\xe6\x88\x90" -"\xe5\x90\x84\xe7\xa7\x8d\xe9\xab\x98\xe5\xb1\x82\xe4\xbb\xbb\xe5" -"\x8a\xa1\xef\xbc\x8c\xe5\x87\xa0\xe4\xb9\x8e\xe5\x8f\xaf\xe4\xbb" -"\xa5\xe5\x9c\xa8\xe6\x89\x80\xe6\x9c\x89\xe7\x9a\x84\xe6\x93\x8d" -"\xe4\xbd\x9c\xe7\xb3\xbb\xe7\xbb\x9f\xe4\xb8\xad\x0a\xe8\xbf\x90" -"\xe8\xa1\x8c\xe3\x80\x82\xe8\xbf\x99\xe7\xa7\x8d\xe8\xaf\xad\xe8" -"\xa8\x80\xe7\xae\x80\xe5\x8d\x95\xe8\x80\x8c\xe5\xbc\xba\xe5\xa4" -"\xa7\xef\xbc\x8c\xe9\x80\x82\xe5\x90\x88\xe5\x90\x84\xe7\xa7\x8d" -"\xe4\xba\xba\xe5\xa3\xab\xe5\xad\xa6\xe4\xb9\xa0\xe4\xbd\xbf\xe7" -"\x94\xa8\xe3\x80\x82\xe7\x9b\xae\xe5\x89\x8d\xef\xbc\x8c\xe5\x9f" -"\xba\xe4\xba\x8e\xe8\xbf\x99\x0a\xe7\xa7\x8d\xe8\xaf\xad\xe8\xa8" -"\x80\xe7\x9a\x84\xe7\x9b\xb8\xe5\x85\xb3\xe6\x8a\x80\xe6\x9c\xaf" -"\xe6\xad\xa3\xe5\x9c\xa8\xe9\xa3\x9e\xe9\x80\x9f\xe7\x9a\x84\xe5" -"\x8f\x91\xe5\xb1\x95\xef\xbc\x8c\xe7\x94\xa8\xe6\x88\xb7\xe6\x95" -"\xb0\xe9\x87\x8f\xe6\x80\xa5\xe5\x89\xa7\xe6\x89\xa9\xe5\xa4\xa7" -"\xef\xbc\x8c\xe7\x9b\xb8\xe5\x85\xb3\xe7\x9a\x84\xe8\xb5\x84\xe6" -"\xba\x90\xe9\x9d\x9e\xe5\xb8\xb8\xe5\xa4\x9a\xe3\x80\x82\x0a\x0a"), -'gbk': ( -"\x50\x79\x74\x68\x6f\x6e\xa3\xa8\xc5\xc9\xc9\xad\xa3\xa9\xd3\xef" -"\xd1\xd4\xca\xc7\xd2\xbb\xd6\xd6\xb9\xa6\xc4\xdc\xc7\xbf\xb4\xf3" -"\xb6\xf8\xcd\xea\xc9\xc6\xb5\xc4\xcd\xa8\xd3\xc3\xd0\xcd\xbc\xc6" -"\xcb\xe3\xbb\xfa\xb3\xcc\xd0\xf2\xc9\xe8\xbc\xc6\xd3\xef\xd1\xd4" -"\xa3\xac\x0a\xd2\xd1\xbe\xad\xbe\xdf\xd3\xd0\xca\xae\xb6\xe0\xc4" -"\xea\xb5\xc4\xb7\xa2\xd5\xb9\xc0\xfa\xca\xb7\xa3\xac\xb3\xc9\xca" -"\xec\xc7\xd2\xce\xc8\xb6\xa8\xa1\xa3\xd5\xe2\xd6\xd6\xd3\xef\xd1" -"\xd4\xbe\xdf\xd3\xd0\xb7\xc7\xb3\xa3\xbc\xf2\xbd\xdd\xb6\xf8\xc7" -"\xe5\xce\xfa\x0a\xb5\xc4\xd3\xef\xb7\xa8\xcc\xd8\xb5\xe3\xa3\xac" -"\xca\xca\xba\xcf\xcd\xea\xb3\xc9\xb8\xf7\xd6\xd6\xb8\xdf\xb2\xe3" -"\xc8\xce\xce\xf1\xa3\xac\xbc\xb8\xba\xf5\xbf\xc9\xd2\xd4\xd4\xda" -"\xcb\xf9\xd3\xd0\xb5\xc4\xb2\xd9\xd7\xf7\xcf\xb5\xcd\xb3\xd6\xd0" -"\x0a\xd4\xcb\xd0\xd0\xa1\xa3\xd5\xe2\xd6\xd6\xd3\xef\xd1\xd4\xbc" -"\xf2\xb5\xa5\xb6\xf8\xc7\xbf\xb4\xf3\xa3\xac\xca\xca\xba\xcf\xb8" -"\xf7\xd6\xd6\xc8\xcb\xca\xbf\xd1\xa7\xcf\xb0\xca\xb9\xd3\xc3\xa1" -"\xa3\xc4\xbf\xc7\xb0\xa3\xac\xbb\xf9\xd3\xda\xd5\xe2\x0a\xd6\xd6" -"\xd3\xef\xd1\xd4\xb5\xc4\xcf\xe0\xb9\xd8\xbc\xbc\xca\xf5\xd5\xfd" -"\xd4\xda\xb7\xc9\xcb\xd9\xb5\xc4\xb7\xa2\xd5\xb9\xa3\xac\xd3\xc3" -"\xbb\xa7\xca\xfd\xc1\xbf\xbc\xb1\xbe\xe7\xc0\xa9\xb4\xf3\xa3\xac" -"\xcf\xe0\xb9\xd8\xb5\xc4\xd7\xca\xd4\xb4\xb7\xc7\xb3\xa3\xb6\xe0" 
-"\xa1\xa3\x0a\xc8\xe7\xba\xce\xd4\xda\x20\x50\x79\x74\x68\x6f\x6e" -"\x20\xd6\xd0\xca\xb9\xd3\xc3\xbc\xc8\xd3\xd0\xb5\xc4\x20\x43\x20" -"\x6c\x69\x62\x72\x61\x72\x79\x3f\x0a\xa1\xa1\xd4\xda\xd9\x59\xd3" -"\x8d\xbf\xc6\xbc\xbc\xbf\xec\xcb\xd9\xb0\x6c\xd5\xb9\xb5\xc4\xbd" -"\xf1\xcc\xec\x2c\x20\xe9\x5f\xb0\x6c\xbc\xb0\x9c\x79\xd4\x87\xdc" -"\x9b\xf3\x77\xb5\xc4\xcb\xd9\xb6\xc8\xca\xc7\xb2\xbb\xc8\xdd\xba" -"\xf6\xd2\x95\xb5\xc4\x0a\xd5\x6e\xee\x7d\x2e\x20\x9e\xe9\xbc\xd3" -"\xbf\xec\xe9\x5f\xb0\x6c\xbc\xb0\x9c\x79\xd4\x87\xb5\xc4\xcb\xd9" -"\xb6\xc8\x2c\x20\xce\xd2\x82\x83\xb1\xe3\xb3\xa3\xcf\xa3\xcd\xfb" -"\xc4\xdc\xc0\xfb\xd3\xc3\xd2\xbb\xd0\xa9\xd2\xd1\xe9\x5f\xb0\x6c" -"\xba\xc3\xb5\xc4\x0a\x6c\x69\x62\x72\x61\x72\x79\x2c\x20\x81\x4b" -"\xd3\xd0\xd2\xbb\x82\x80\x20\x66\x61\x73\x74\x20\x70\x72\x6f\x74" -"\x6f\x74\x79\x70\x69\x6e\x67\x20\xb5\xc4\x20\x70\x72\x6f\x67\x72" -"\x61\x6d\x6d\x69\x6e\x67\x20\x6c\x61\x6e\x67\x75\x61\x67\x65\x20" -"\xbf\xc9\x0a\xb9\xa9\xca\xb9\xd3\xc3\x2e\x20\xc4\xbf\xc7\xb0\xd3" -"\xd0\xd4\x53\xd4\x53\xb6\xe0\xb6\xe0\xb5\xc4\x20\x6c\x69\x62\x72" -"\x61\x72\x79\x20\xca\xc7\xd2\xd4\x20\x43\x20\x8c\x91\xb3\xc9\x2c" -"\x20\xb6\xf8\x20\x50\x79\x74\x68\x6f\x6e\x20\xca\xc7\xd2\xbb\x82" -"\x80\x0a\x66\x61\x73\x74\x20\x70\x72\x6f\x74\x6f\x74\x79\x70\x69" -"\x6e\x67\x20\xb5\xc4\x20\x70\x72\x6f\x67\x72\x61\x6d\x6d\x69\x6e" -"\x67\x20\x6c\x61\x6e\x67\x75\x61\x67\x65\x2e\x20\xb9\xca\xce\xd2" -"\x82\x83\xcf\xa3\xcd\xfb\xc4\xdc\x8c\xa2\xbc\xc8\xd3\xd0\xb5\xc4" -"\x0a\x43\x20\x6c\x69\x62\x72\x61\x72\x79\x20\xc4\xc3\xb5\xbd\x20" -"\x50\x79\x74\x68\x6f\x6e\x20\xb5\xc4\xad\x68\xbe\xb3\xd6\xd0\x9c" -"\x79\xd4\x87\xbc\xb0\xd5\xfb\xba\xcf\x2e\x20\xc6\xe4\xd6\xd0\xd7" -"\xee\xd6\xf7\xd2\xaa\xd2\xb2\xca\xc7\xce\xd2\x82\x83\xcb\xf9\x0a" -"\xd2\xaa\xd3\x91\xd5\x93\xb5\xc4\x86\x96\xee\x7d\xbe\xcd\xca\xc7" -"\x3a\x0a\x0a", -"\x50\x79\x74\x68\x6f\x6e\xef\xbc\x88\xe6\xb4\xbe\xe6\xa3\xae\xef" -"\xbc\x89\xe8\xaf\xad\xe8\xa8\x80\xe6\x98\xaf\xe4\xb8\x80\xe7\xa7" -"\x8d\xe5\x8a\x9f\xe8\x83\xbd\xe5\xbc\xba\xe5\xa4\xa7\xe8\x80\x8c" -"\xe5\xae\x8c\xe5\x96\x84\xe7\x9a\x84\xe9\x80\x9a\xe7\x94\xa8\xe5" -"\x9e\x8b\xe8\xae\xa1\xe7\xae\x97\xe6\x9c\xba\xe7\xa8\x8b\xe5\xba" -"\x8f\xe8\xae\xbe\xe8\xae\xa1\xe8\xaf\xad\xe8\xa8\x80\xef\xbc\x8c" -"\x0a\xe5\xb7\xb2\xe7\xbb\x8f\xe5\x85\xb7\xe6\x9c\x89\xe5\x8d\x81" -"\xe5\xa4\x9a\xe5\xb9\xb4\xe7\x9a\x84\xe5\x8f\x91\xe5\xb1\x95\xe5" -"\x8e\x86\xe5\x8f\xb2\xef\xbc\x8c\xe6\x88\x90\xe7\x86\x9f\xe4\xb8" -"\x94\xe7\xa8\xb3\xe5\xae\x9a\xe3\x80\x82\xe8\xbf\x99\xe7\xa7\x8d" -"\xe8\xaf\xad\xe8\xa8\x80\xe5\x85\xb7\xe6\x9c\x89\xe9\x9d\x9e\xe5" -"\xb8\xb8\xe7\xae\x80\xe6\x8d\xb7\xe8\x80\x8c\xe6\xb8\x85\xe6\x99" -"\xb0\x0a\xe7\x9a\x84\xe8\xaf\xad\xe6\xb3\x95\xe7\x89\xb9\xe7\x82" -"\xb9\xef\xbc\x8c\xe9\x80\x82\xe5\x90\x88\xe5\xae\x8c\xe6\x88\x90" -"\xe5\x90\x84\xe7\xa7\x8d\xe9\xab\x98\xe5\xb1\x82\xe4\xbb\xbb\xe5" -"\x8a\xa1\xef\xbc\x8c\xe5\x87\xa0\xe4\xb9\x8e\xe5\x8f\xaf\xe4\xbb" -"\xa5\xe5\x9c\xa8\xe6\x89\x80\xe6\x9c\x89\xe7\x9a\x84\xe6\x93\x8d" -"\xe4\xbd\x9c\xe7\xb3\xbb\xe7\xbb\x9f\xe4\xb8\xad\x0a\xe8\xbf\x90" -"\xe8\xa1\x8c\xe3\x80\x82\xe8\xbf\x99\xe7\xa7\x8d\xe8\xaf\xad\xe8" -"\xa8\x80\xe7\xae\x80\xe5\x8d\x95\xe8\x80\x8c\xe5\xbc\xba\xe5\xa4" -"\xa7\xef\xbc\x8c\xe9\x80\x82\xe5\x90\x88\xe5\x90\x84\xe7\xa7\x8d" -"\xe4\xba\xba\xe5\xa3\xab\xe5\xad\xa6\xe4\xb9\xa0\xe4\xbd\xbf\xe7" -"\x94\xa8\xe3\x80\x82\xe7\x9b\xae\xe5\x89\x8d\xef\xbc\x8c\xe5\x9f" -"\xba\xe4\xba\x8e\xe8\xbf\x99\x0a\xe7\xa7\x8d\xe8\xaf\xad\xe8\xa8" -"\x80\xe7\x9a\x84\xe7\x9b\xb8\xe5\x85\xb3\xe6\x8a\x80\xe6\x9c\xaf" 
-"\xe6\xad\xa3\xe5\x9c\xa8\xe9\xa3\x9e\xe9\x80\x9f\xe7\x9a\x84\xe5" -"\x8f\x91\xe5\xb1\x95\xef\xbc\x8c\xe7\x94\xa8\xe6\x88\xb7\xe6\x95" -"\xb0\xe9\x87\x8f\xe6\x80\xa5\xe5\x89\xa7\xe6\x89\xa9\xe5\xa4\xa7" -"\xef\xbc\x8c\xe7\x9b\xb8\xe5\x85\xb3\xe7\x9a\x84\xe8\xb5\x84\xe6" -"\xba\x90\xe9\x9d\x9e\xe5\xb8\xb8\xe5\xa4\x9a\xe3\x80\x82\x0a\xe5" -"\xa6\x82\xe4\xbd\x95\xe5\x9c\xa8\x20\x50\x79\x74\x68\x6f\x6e\x20" -"\xe4\xb8\xad\xe4\xbd\xbf\xe7\x94\xa8\xe6\x97\xa2\xe6\x9c\x89\xe7" -"\x9a\x84\x20\x43\x20\x6c\x69\x62\x72\x61\x72\x79\x3f\x0a\xe3\x80" -"\x80\xe5\x9c\xa8\xe8\xb3\x87\xe8\xa8\x8a\xe7\xa7\x91\xe6\x8a\x80" -"\xe5\xbf\xab\xe9\x80\x9f\xe7\x99\xbc\xe5\xb1\x95\xe7\x9a\x84\xe4" -"\xbb\x8a\xe5\xa4\xa9\x2c\x20\xe9\x96\x8b\xe7\x99\xbc\xe5\x8f\x8a" -"\xe6\xb8\xac\xe8\xa9\xa6\xe8\xbb\x9f\xe9\xab\x94\xe7\x9a\x84\xe9" -"\x80\x9f\xe5\xba\xa6\xe6\x98\xaf\xe4\xb8\x8d\xe5\xae\xb9\xe5\xbf" -"\xbd\xe8\xa6\x96\xe7\x9a\x84\x0a\xe8\xaa\xb2\xe9\xa1\x8c\x2e\x20" -"\xe7\x82\xba\xe5\x8a\xa0\xe5\xbf\xab\xe9\x96\x8b\xe7\x99\xbc\xe5" -"\x8f\x8a\xe6\xb8\xac\xe8\xa9\xa6\xe7\x9a\x84\xe9\x80\x9f\xe5\xba" -"\xa6\x2c\x20\xe6\x88\x91\xe5\x80\x91\xe4\xbe\xbf\xe5\xb8\xb8\xe5" -"\xb8\x8c\xe6\x9c\x9b\xe8\x83\xbd\xe5\x88\xa9\xe7\x94\xa8\xe4\xb8" -"\x80\xe4\xba\x9b\xe5\xb7\xb2\xe9\x96\x8b\xe7\x99\xbc\xe5\xa5\xbd" -"\xe7\x9a\x84\x0a\x6c\x69\x62\x72\x61\x72\x79\x2c\x20\xe4\xb8\xa6" -"\xe6\x9c\x89\xe4\xb8\x80\xe5\x80\x8b\x20\x66\x61\x73\x74\x20\x70" -"\x72\x6f\x74\x6f\x74\x79\x70\x69\x6e\x67\x20\xe7\x9a\x84\x20\x70" -"\x72\x6f\x67\x72\x61\x6d\x6d\x69\x6e\x67\x20\x6c\x61\x6e\x67\x75" -"\x61\x67\x65\x20\xe5\x8f\xaf\x0a\xe4\xbe\x9b\xe4\xbd\xbf\xe7\x94" -"\xa8\x2e\x20\xe7\x9b\xae\xe5\x89\x8d\xe6\x9c\x89\xe8\xa8\xb1\xe8" -"\xa8\xb1\xe5\xa4\x9a\xe5\xa4\x9a\xe7\x9a\x84\x20\x6c\x69\x62\x72" -"\x61\x72\x79\x20\xe6\x98\xaf\xe4\xbb\xa5\x20\x43\x20\xe5\xaf\xab" -"\xe6\x88\x90\x2c\x20\xe8\x80\x8c\x20\x50\x79\x74\x68\x6f\x6e\x20" -"\xe6\x98\xaf\xe4\xb8\x80\xe5\x80\x8b\x0a\x66\x61\x73\x74\x20\x70" -"\x72\x6f\x74\x6f\x74\x79\x70\x69\x6e\x67\x20\xe7\x9a\x84\x20\x70" -"\x72\x6f\x67\x72\x61\x6d\x6d\x69\x6e\x67\x20\x6c\x61\x6e\x67\x75" -"\x61\x67\x65\x2e\x20\xe6\x95\x85\xe6\x88\x91\xe5\x80\x91\xe5\xb8" -"\x8c\xe6\x9c\x9b\xe8\x83\xbd\xe5\xb0\x87\xe6\x97\xa2\xe6\x9c\x89" -"\xe7\x9a\x84\x0a\x43\x20\x6c\x69\x62\x72\x61\x72\x79\x20\xe6\x8b" -"\xbf\xe5\x88\xb0\x20\x50\x79\x74\x68\x6f\x6e\x20\xe7\x9a\x84\xe7" -"\x92\xb0\xe5\xa2\x83\xe4\xb8\xad\xe6\xb8\xac\xe8\xa9\xa6\xe5\x8f" -"\x8a\xe6\x95\xb4\xe5\x90\x88\x2e\x20\xe5\x85\xb6\xe4\xb8\xad\xe6" -"\x9c\x80\xe4\xb8\xbb\xe8\xa6\x81\xe4\xb9\x9f\xe6\x98\xaf\xe6\x88" -"\x91\xe5\x80\x91\xe6\x89\x80\x0a\xe8\xa6\x81\xe8\xa8\x8e\xe8\xab" -"\x96\xe7\x9a\x84\xe5\x95\x8f\xe9\xa1\x8c\xe5\xb0\xb1\xe6\x98\xaf" -"\x3a\x0a\x0a"), -'johab': ( -"\x99\xb1\xa4\x77\x88\x62\xd0\x61\x20\xcd\x5c\xaf\xa1\xc5\xa9\x9c" -"\x61\x0a\x0a\xdc\xc0\xdc\xc0\x90\x73\x21\x21\x20\xf1\x67\xe2\x9c" -"\xf0\x55\xcc\x81\xa3\x89\x9f\x85\x8a\xa1\x20\xdc\xde\xdc\xd3\xd2" -"\x7a\xd9\xaf\xd9\xaf\xd9\xaf\x20\x8b\x77\x96\xd3\x20\xdc\xd1\x95" -"\x81\x20\xdc\xc0\x2e\x20\x2e\x0a\xed\x3c\xb5\x77\xdc\xd1\x93\x77" -"\xd2\x73\x20\x2e\x20\x2e\x20\x2e\x20\x2e\x20\xac\xe1\xb6\x89\x9e" -"\xa1\x20\x95\x65\xd0\x62\xf0\xe0\x20\xe0\x3b\xd2\x7a\x20\x21\x20" -"\x21\x20\x21\x87\x41\x2e\x87\x41\x0a\xd3\x61\xd3\x61\xd3\x61\x20" -"\x88\x41\x88\x41\x88\x41\xd9\x69\x87\x41\x5f\x87\x41\x20\xb4\xe1" -"\x9f\x9a\x20\xc8\xa1\xc5\xc1\x8b\x7a\x20\x95\x61\xb7\x77\x20\xc3" -"\x97\xe2\x9c\x97\x69\xf0\xe0\x20\xdc\xc0\x97\x61\x8b\x7a\x0a\xac" 
-"\xe9\x9f\x7a\x20\xe0\x3b\xd2\x7a\x20\x2e\x20\x2e\x20\x2e\x20\x2e" -"\x20\x8a\x89\xb4\x81\xae\xba\x20\xdc\xd1\x8a\xa1\x20\xdc\xde\x9f" -"\x89\xdc\xc2\x8b\x7a\x20\xf1\x67\xf1\x62\xf5\x49\xed\xfc\xf3\xe9" -"\x8c\x61\xbb\x9a\x0a\xb5\xc1\xb2\xa1\xd2\x7a\x20\x21\x20\x21\x20" -"\xed\x3c\xb5\x77\xdc\xd1\x20\xe0\x3b\x93\x77\x8a\xa1\x20\xd9\x69" -"\xea\xbe\x89\xc5\x20\xb4\xf4\x93\x77\x8a\xa1\x93\x77\x20\xed\x3c" -"\x93\x77\x96\xc1\xd2\x7a\x20\x8b\x69\xb4\x81\x97\x7a\x0a\xdc\xde" -"\x9d\x61\x97\x41\xe2\x9c\x20\xaf\x81\xce\xa1\xae\xa1\xd2\x7a\x20" -"\xb4\xe1\x9f\x9a\x20\xf1\x67\xf1\x62\xf5\x49\xed\xfc\xf3\xe9\xaf" -"\x82\xdc\xef\x97\x69\xb4\x7a\x21\x21\x20\xdc\xc0\xdc\xc0\x90\x73" -"\xd9\xbd\x20\xd9\x62\xd9\x62\x2a\x0a\x0a", -"\xeb\x98\xa0\xeb\xb0\xa9\xea\xb0\x81\xed\x95\x98\x20\xed\x8e\xb2" -"\xec\x8b\x9c\xec\xbd\x9c\xeb\x9d\xbc\x0a\x0a\xe3\x89\xaf\xe3\x89" -"\xaf\xeb\x82\xa9\x21\x21\x20\xe5\x9b\xa0\xe4\xb9\x9d\xe6\x9c\x88" -"\xed\x8c\xa8\xeb\xaf\xa4\xeb\xa6\x94\xea\xb6\x88\x20\xe2\x93\xa1" -"\xe2\x93\x96\xed\x9b\x80\xc2\xbf\xc2\xbf\xc2\xbf\x20\xea\xb8\x8d" -"\xeb\x92\x99\x20\xe2\x93\x94\xeb\x8e\xa8\x20\xe3\x89\xaf\x2e\x20" -"\x2e\x0a\xe4\xba\x9e\xec\x98\x81\xe2\x93\x94\xeb\x8a\xa5\xed\x9a" -"\xb9\x20\x2e\x20\x2e\x20\x2e\x20\x2e\x20\xec\x84\x9c\xec\x9a\xb8" -"\xeb\xa4\x84\x20\xeb\x8e\x90\xed\x95\x99\xe4\xb9\x99\x20\xe5\xae" -"\xb6\xed\x9b\x80\x20\x21\x20\x21\x20\x21\xe3\x85\xa0\x2e\xe3\x85" -"\xa0\x0a\xed\x9d\x90\xed\x9d\x90\xed\x9d\x90\x20\xe3\x84\xb1\xe3" -"\x84\xb1\xe3\x84\xb1\xe2\x98\x86\xe3\x85\xa0\x5f\xe3\x85\xa0\x20" -"\xec\x96\xb4\xeb\xa6\xa8\x20\xed\x83\xb8\xec\xbd\xb0\xea\xb8\x90" -"\x20\xeb\x8e\x8c\xec\x9d\x91\x20\xec\xb9\x91\xe4\xb9\x9d\xeb\x93" -"\xa4\xe4\xb9\x99\x20\xe3\x89\xaf\xeb\x93\x9c\xea\xb8\x90\x0a\xec" -"\x84\xa4\xeb\xa6\x8c\x20\xe5\xae\xb6\xed\x9b\x80\x20\x2e\x20\x2e" -"\x20\x2e\x20\x2e\x20\xea\xb5\xb4\xec\x95\xa0\xec\x89\x8c\x20\xe2" -"\x93\x94\xea\xb6\x88\x20\xe2\x93\xa1\xeb\xa6\x98\xe3\x89\xb1\xea" -"\xb8\x90\x20\xe5\x9b\xa0\xe4\xbb\x81\xe5\xb7\x9d\xef\xa6\x81\xe4" -"\xb8\xad\xea\xb9\x8c\xec\xa6\xbc\x0a\xec\x99\x80\xec\x92\x80\xed" -"\x9b\x80\x20\x21\x20\x21\x20\xe4\xba\x9e\xec\x98\x81\xe2\x93\x94" -"\x20\xe5\xae\xb6\xeb\x8a\xa5\xea\xb6\x88\x20\xe2\x98\x86\xe4\xb8" -"\x8a\xea\xb4\x80\x20\xec\x97\x86\xeb\x8a\xa5\xea\xb6\x88\xeb\x8a" -"\xa5\x20\xe4\xba\x9e\xeb\x8a\xa5\xeb\x92\x88\xed\x9b\x80\x20\xea" -"\xb8\x80\xec\x95\xa0\xeb\x93\xb4\x0a\xe2\x93\xa1\xeb\xa0\xa4\xeb" -"\x93\x80\xe4\xb9\x9d\x20\xec\x8b\x80\xed\x92\x94\xec\x88\xb4\xed" -"\x9b\x80\x20\xec\x96\xb4\xeb\xa6\xa8\x20\xe5\x9b\xa0\xe4\xbb\x81" -"\xe5\xb7\x9d\xef\xa6\x81\xe4\xb8\xad\xec\x8b\x81\xe2\x91\xa8\xeb" -"\x93\xa4\xec\x95\x9c\x21\x21\x20\xe3\x89\xaf\xe3\x89\xaf\xeb\x82" -"\xa9\xe2\x99\xa1\x20\xe2\x8c\x92\xe2\x8c\x92\x2a\x0a\x0a"), -'shift_jis': ( -"\x50\x79\x74\x68\x6f\x6e\x20\x82\xcc\x8a\x4a\x94\xad\x82\xcd\x81" -"\x41\x31\x39\x39\x30\x20\x94\x4e\x82\xb2\x82\xeb\x82\xa9\x82\xe7" -"\x8a\x4a\x8e\x6e\x82\xb3\x82\xea\x82\xc4\x82\xa2\x82\xdc\x82\xb7" -"\x81\x42\x0a\x8a\x4a\x94\xad\x8e\xd2\x82\xcc\x20\x47\x75\x69\x64" -"\x6f\x20\x76\x61\x6e\x20\x52\x6f\x73\x73\x75\x6d\x20\x82\xcd\x8b" -"\xb3\x88\xe7\x97\x70\x82\xcc\x83\x76\x83\x8d\x83\x4f\x83\x89\x83" -"\x7e\x83\x93\x83\x4f\x8c\xbe\x8c\xea\x81\x75\x41\x42\x43\x81\x76" -"\x82\xcc\x8a\x4a\x94\xad\x82\xc9\x8e\x51\x89\xc1\x82\xb5\x82\xc4" -"\x82\xa2\x82\xdc\x82\xb5\x82\xbd\x82\xaa\x81\x41\x41\x42\x43\x20" -"\x82\xcd\x8e\xc0\x97\x70\x8f\xe3\x82\xcc\x96\xda\x93\x49\x82\xc9" -"\x82\xcd\x82\xa0\x82\xdc\x82\xe8\x93\x4b\x82\xb5\x82\xc4\x82\xa2" 
-"\x82\xdc\x82\xb9\x82\xf1\x82\xc5\x82\xb5\x82\xbd\x81\x42\x0a\x82" -"\xb1\x82\xcc\x82\xbd\x82\xdf\x81\x41\x47\x75\x69\x64\x6f\x20\x82" -"\xcd\x82\xe6\x82\xe8\x8e\xc0\x97\x70\x93\x49\x82\xc8\x83\x76\x83" -"\x8d\x83\x4f\x83\x89\x83\x7e\x83\x93\x83\x4f\x8c\xbe\x8c\xea\x82" -"\xcc\x8a\x4a\x94\xad\x82\xf0\x8a\x4a\x8e\x6e\x82\xb5\x81\x41\x89" -"\x70\x8d\x91\x20\x42\x42\x53\x20\x95\xfa\x91\x97\x82\xcc\x83\x52" -"\x83\x81\x83\x66\x83\x42\x94\xd4\x91\x67\x81\x75\x83\x82\x83\x93" -"\x83\x65\x83\x42\x20\x83\x70\x83\x43\x83\x5c\x83\x93\x81\x76\x82" -"\xcc\x83\x74\x83\x40\x83\x93\x82\xc5\x82\xa0\x82\xe9\x20\x47\x75" -"\x69\x64\x6f\x20\x82\xcd\x82\xb1\x82\xcc\x8c\xbe\x8c\xea\x82\xf0" -"\x81\x75\x50\x79\x74\x68\x6f\x6e\x81\x76\x82\xc6\x96\xbc\x82\xc3" -"\x82\xaf\x82\xdc\x82\xb5\x82\xbd\x81\x42\x0a\x82\xb1\x82\xcc\x82" -"\xe6\x82\xa4\x82\xc8\x94\x77\x8c\x69\x82\xa9\x82\xe7\x90\xb6\x82" -"\xdc\x82\xea\x82\xbd\x20\x50\x79\x74\x68\x6f\x6e\x20\x82\xcc\x8c" -"\xbe\x8c\xea\x90\xdd\x8c\x76\x82\xcd\x81\x41\x81\x75\x83\x56\x83" -"\x93\x83\x76\x83\x8b\x81\x76\x82\xc5\x81\x75\x8f\x4b\x93\xbe\x82" -"\xaa\x97\x65\x88\xd5\x81\x76\x82\xc6\x82\xa2\x82\xa4\x96\xda\x95" -"\x57\x82\xc9\x8f\x64\x93\x5f\x82\xaa\x92\x75\x82\xa9\x82\xea\x82" -"\xc4\x82\xa2\x82\xdc\x82\xb7\x81\x42\x0a\x91\xbd\x82\xad\x82\xcc" -"\x83\x58\x83\x4e\x83\x8a\x83\x76\x83\x67\x8c\x6e\x8c\xbe\x8c\xea" -"\x82\xc5\x82\xcd\x83\x86\x81\x5b\x83\x55\x82\xcc\x96\xda\x90\xe6" -"\x82\xcc\x97\x98\x95\xd6\x90\xab\x82\xf0\x97\x44\x90\xe6\x82\xb5" -"\x82\xc4\x90\x46\x81\x58\x82\xc8\x8b\x40\x94\x5c\x82\xf0\x8c\xbe" -"\x8c\xea\x97\x76\x91\x66\x82\xc6\x82\xb5\x82\xc4\x8e\xe6\x82\xe8" -"\x93\xfc\x82\xea\x82\xe9\x8f\xea\x8d\x87\x82\xaa\x91\xbd\x82\xa2" -"\x82\xcc\x82\xc5\x82\xb7\x82\xaa\x81\x41\x50\x79\x74\x68\x6f\x6e" -"\x20\x82\xc5\x82\xcd\x82\xbb\x82\xa4\x82\xa2\x82\xc1\x82\xbd\x8f" -"\xac\x8d\xd7\x8d\x48\x82\xaa\x92\xc7\x89\xc1\x82\xb3\x82\xea\x82" -"\xe9\x82\xb1\x82\xc6\x82\xcd\x82\xa0\x82\xdc\x82\xe8\x82\xa0\x82" -"\xe8\x82\xdc\x82\xb9\x82\xf1\x81\x42\x0a\x8c\xbe\x8c\xea\x8e\xa9" -"\x91\xcc\x82\xcc\x8b\x40\x94\x5c\x82\xcd\x8d\xc5\x8f\xac\x8c\xc0" -"\x82\xc9\x89\x9f\x82\xb3\x82\xa6\x81\x41\x95\x4b\x97\x76\x82\xc8" -"\x8b\x40\x94\x5c\x82\xcd\x8a\x67\x92\xa3\x83\x82\x83\x57\x83\x85" -"\x81\x5b\x83\x8b\x82\xc6\x82\xb5\x82\xc4\x92\xc7\x89\xc1\x82\xb7" -"\x82\xe9\x81\x41\x82\xc6\x82\xa2\x82\xa4\x82\xcc\x82\xaa\x20\x50" -"\x79\x74\x68\x6f\x6e\x20\x82\xcc\x83\x7c\x83\x8a\x83\x56\x81\x5b" -"\x82\xc5\x82\xb7\x81\x42\x0a\x0a", -"\x50\x79\x74\x68\x6f\x6e\x20\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba" -"\xe3\x81\xaf\xe3\x80\x81\x31\x39\x39\x30\x20\xe5\xb9\xb4\xe3\x81" -"\x94\xe3\x82\x8d\xe3\x81\x8b\xe3\x82\x89\xe9\x96\x8b\xe5\xa7\x8b" -"\xe3\x81\x95\xe3\x82\x8c\xe3\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3" -"\x81\x99\xe3\x80\x82\x0a\xe9\x96\x8b\xe7\x99\xba\xe8\x80\x85\xe3" -"\x81\xae\x20\x47\x75\x69\x64\x6f\x20\x76\x61\x6e\x20\x52\x6f\x73" -"\x73\x75\x6d\x20\xe3\x81\xaf\xe6\x95\x99\xe8\x82\xb2\xe7\x94\xa8" -"\xe3\x81\xae\xe3\x83\x97\xe3\x83\xad\xe3\x82\xb0\xe3\x83\xa9\xe3" -"\x83\x9f\xe3\x83\xb3\xe3\x82\xb0\xe8\xa8\x80\xe8\xaa\x9e\xe3\x80" -"\x8c\x41\x42\x43\xe3\x80\x8d\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba" -"\xe3\x81\xab\xe5\x8f\x82\xe5\x8a\xa0\xe3\x81\x97\xe3\x81\xa6\xe3" -"\x81\x84\xe3\x81\xbe\xe3\x81\x97\xe3\x81\x9f\xe3\x81\x8c\xe3\x80" -"\x81\x41\x42\x43\x20\xe3\x81\xaf\xe5\xae\x9f\xe7\x94\xa8\xe4\xb8" -"\x8a\xe3\x81\xae\xe7\x9b\xae\xe7\x9a\x84\xe3\x81\xab\xe3\x81\xaf" -"\xe3\x81\x82\xe3\x81\xbe\xe3\x82\x8a\xe9\x81\xa9\xe3\x81\x97\xe3" 
-"\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3\x81\x9b\xe3\x82\x93\xe3\x81" -"\xa7\xe3\x81\x97\xe3\x81\x9f\xe3\x80\x82\x0a\xe3\x81\x93\xe3\x81" -"\xae\xe3\x81\x9f\xe3\x82\x81\xe3\x80\x81\x47\x75\x69\x64\x6f\x20" -"\xe3\x81\xaf\xe3\x82\x88\xe3\x82\x8a\xe5\xae\x9f\xe7\x94\xa8\xe7" -"\x9a\x84\xe3\x81\xaa\xe3\x83\x97\xe3\x83\xad\xe3\x82\xb0\xe3\x83" -"\xa9\xe3\x83\x9f\xe3\x83\xb3\xe3\x82\xb0\xe8\xa8\x80\xe8\xaa\x9e" -"\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba\xe3\x82\x92\xe9\x96\x8b\xe5" -"\xa7\x8b\xe3\x81\x97\xe3\x80\x81\xe8\x8b\xb1\xe5\x9b\xbd\x20\x42" -"\x42\x53\x20\xe6\x94\xbe\xe9\x80\x81\xe3\x81\xae\xe3\x82\xb3\xe3" -"\x83\xa1\xe3\x83\x87\xe3\x82\xa3\xe7\x95\xaa\xe7\xb5\x84\xe3\x80" -"\x8c\xe3\x83\xa2\xe3\x83\xb3\xe3\x83\x86\xe3\x82\xa3\x20\xe3\x83" -"\x91\xe3\x82\xa4\xe3\x82\xbd\xe3\x83\xb3\xe3\x80\x8d\xe3\x81\xae" -"\xe3\x83\x95\xe3\x82\xa1\xe3\x83\xb3\xe3\x81\xa7\xe3\x81\x82\xe3" -"\x82\x8b\x20\x47\x75\x69\x64\x6f\x20\xe3\x81\xaf\xe3\x81\x93\xe3" -"\x81\xae\xe8\xa8\x80\xe8\xaa\x9e\xe3\x82\x92\xe3\x80\x8c\x50\x79" -"\x74\x68\x6f\x6e\xe3\x80\x8d\xe3\x81\xa8\xe5\x90\x8d\xe3\x81\xa5" -"\xe3\x81\x91\xe3\x81\xbe\xe3\x81\x97\xe3\x81\x9f\xe3\x80\x82\x0a" -"\xe3\x81\x93\xe3\x81\xae\xe3\x82\x88\xe3\x81\x86\xe3\x81\xaa\xe8" -"\x83\x8c\xe6\x99\xaf\xe3\x81\x8b\xe3\x82\x89\xe7\x94\x9f\xe3\x81" -"\xbe\xe3\x82\x8c\xe3\x81\x9f\x20\x50\x79\x74\x68\x6f\x6e\x20\xe3" -"\x81\xae\xe8\xa8\x80\xe8\xaa\x9e\xe8\xa8\xad\xe8\xa8\x88\xe3\x81" -"\xaf\xe3\x80\x81\xe3\x80\x8c\xe3\x82\xb7\xe3\x83\xb3\xe3\x83\x97" -"\xe3\x83\xab\xe3\x80\x8d\xe3\x81\xa7\xe3\x80\x8c\xe7\xbf\x92\xe5" -"\xbe\x97\xe3\x81\x8c\xe5\xae\xb9\xe6\x98\x93\xe3\x80\x8d\xe3\x81" -"\xa8\xe3\x81\x84\xe3\x81\x86\xe7\x9b\xae\xe6\xa8\x99\xe3\x81\xab" -"\xe9\x87\x8d\xe7\x82\xb9\xe3\x81\x8c\xe7\xbd\xae\xe3\x81\x8b\xe3" -"\x82\x8c\xe3\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3\x81\x99\xe3\x80" -"\x82\x0a\xe5\xa4\x9a\xe3\x81\x8f\xe3\x81\xae\xe3\x82\xb9\xe3\x82" -"\xaf\xe3\x83\xaa\xe3\x83\x97\xe3\x83\x88\xe7\xb3\xbb\xe8\xa8\x80" -"\xe8\xaa\x9e\xe3\x81\xa7\xe3\x81\xaf\xe3\x83\xa6\xe3\x83\xbc\xe3" -"\x82\xb6\xe3\x81\xae\xe7\x9b\xae\xe5\x85\x88\xe3\x81\xae\xe5\x88" -"\xa9\xe4\xbe\xbf\xe6\x80\xa7\xe3\x82\x92\xe5\x84\xaa\xe5\x85\x88" -"\xe3\x81\x97\xe3\x81\xa6\xe8\x89\xb2\xe3\x80\x85\xe3\x81\xaa\xe6" -"\xa9\x9f\xe8\x83\xbd\xe3\x82\x92\xe8\xa8\x80\xe8\xaa\x9e\xe8\xa6" -"\x81\xe7\xb4\xa0\xe3\x81\xa8\xe3\x81\x97\xe3\x81\xa6\xe5\x8f\x96" -"\xe3\x82\x8a\xe5\x85\xa5\xe3\x82\x8c\xe3\x82\x8b\xe5\xa0\xb4\xe5" -"\x90\x88\xe3\x81\x8c\xe5\xa4\x9a\xe3\x81\x84\xe3\x81\xae\xe3\x81" -"\xa7\xe3\x81\x99\xe3\x81\x8c\xe3\x80\x81\x50\x79\x74\x68\x6f\x6e" -"\x20\xe3\x81\xa7\xe3\x81\xaf\xe3\x81\x9d\xe3\x81\x86\xe3\x81\x84" -"\xe3\x81\xa3\xe3\x81\x9f\xe5\xb0\x8f\xe7\xb4\xb0\xe5\xb7\xa5\xe3" -"\x81\x8c\xe8\xbf\xbd\xe5\x8a\xa0\xe3\x81\x95\xe3\x82\x8c\xe3\x82" -"\x8b\xe3\x81\x93\xe3\x81\xa8\xe3\x81\xaf\xe3\x81\x82\xe3\x81\xbe" -"\xe3\x82\x8a\xe3\x81\x82\xe3\x82\x8a\xe3\x81\xbe\xe3\x81\x9b\xe3" -"\x82\x93\xe3\x80\x82\x0a\xe8\xa8\x80\xe8\xaa\x9e\xe8\x87\xaa\xe4" -"\xbd\x93\xe3\x81\xae\xe6\xa9\x9f\xe8\x83\xbd\xe3\x81\xaf\xe6\x9c" -"\x80\xe5\xb0\x8f\xe9\x99\x90\xe3\x81\xab\xe6\x8a\xbc\xe3\x81\x95" -"\xe3\x81\x88\xe3\x80\x81\xe5\xbf\x85\xe8\xa6\x81\xe3\x81\xaa\xe6" -"\xa9\x9f\xe8\x83\xbd\xe3\x81\xaf\xe6\x8b\xa1\xe5\xbc\xb5\xe3\x83" -"\xa2\xe3\x82\xb8\xe3\x83\xa5\xe3\x83\xbc\xe3\x83\xab\xe3\x81\xa8" -"\xe3\x81\x97\xe3\x81\xa6\xe8\xbf\xbd\xe5\x8a\xa0\xe3\x81\x99\xe3" -"\x82\x8b\xe3\x80\x81\xe3\x81\xa8\xe3\x81\x84\xe3\x81\x86\xe3\x81" -"\xae\xe3\x81\x8c\x20\x50\x79\x74\x68\x6f\x6e\x20\xe3\x81\xae\xe3" 
-"\x83\x9d\xe3\x83\xaa\xe3\x82\xb7\xe3\x83\xbc\xe3\x81\xa7\xe3\x81" -"\x99\xe3\x80\x82\x0a\x0a"), -'shift_jisx0213': ( -"\x50\x79\x74\x68\x6f\x6e\x20\x82\xcc\x8a\x4a\x94\xad\x82\xcd\x81" -"\x41\x31\x39\x39\x30\x20\x94\x4e\x82\xb2\x82\xeb\x82\xa9\x82\xe7" -"\x8a\x4a\x8e\x6e\x82\xb3\x82\xea\x82\xc4\x82\xa2\x82\xdc\x82\xb7" -"\x81\x42\x0a\x8a\x4a\x94\xad\x8e\xd2\x82\xcc\x20\x47\x75\x69\x64" -"\x6f\x20\x76\x61\x6e\x20\x52\x6f\x73\x73\x75\x6d\x20\x82\xcd\x8b" -"\xb3\x88\xe7\x97\x70\x82\xcc\x83\x76\x83\x8d\x83\x4f\x83\x89\x83" -"\x7e\x83\x93\x83\x4f\x8c\xbe\x8c\xea\x81\x75\x41\x42\x43\x81\x76" -"\x82\xcc\x8a\x4a\x94\xad\x82\xc9\x8e\x51\x89\xc1\x82\xb5\x82\xc4" -"\x82\xa2\x82\xdc\x82\xb5\x82\xbd\x82\xaa\x81\x41\x41\x42\x43\x20" -"\x82\xcd\x8e\xc0\x97\x70\x8f\xe3\x82\xcc\x96\xda\x93\x49\x82\xc9" -"\x82\xcd\x82\xa0\x82\xdc\x82\xe8\x93\x4b\x82\xb5\x82\xc4\x82\xa2" -"\x82\xdc\x82\xb9\x82\xf1\x82\xc5\x82\xb5\x82\xbd\x81\x42\x0a\x82" -"\xb1\x82\xcc\x82\xbd\x82\xdf\x81\x41\x47\x75\x69\x64\x6f\x20\x82" -"\xcd\x82\xe6\x82\xe8\x8e\xc0\x97\x70\x93\x49\x82\xc8\x83\x76\x83" -"\x8d\x83\x4f\x83\x89\x83\x7e\x83\x93\x83\x4f\x8c\xbe\x8c\xea\x82" -"\xcc\x8a\x4a\x94\xad\x82\xf0\x8a\x4a\x8e\x6e\x82\xb5\x81\x41\x89" -"\x70\x8d\x91\x20\x42\x42\x53\x20\x95\xfa\x91\x97\x82\xcc\x83\x52" -"\x83\x81\x83\x66\x83\x42\x94\xd4\x91\x67\x81\x75\x83\x82\x83\x93" -"\x83\x65\x83\x42\x20\x83\x70\x83\x43\x83\x5c\x83\x93\x81\x76\x82" -"\xcc\x83\x74\x83\x40\x83\x93\x82\xc5\x82\xa0\x82\xe9\x20\x47\x75" -"\x69\x64\x6f\x20\x82\xcd\x82\xb1\x82\xcc\x8c\xbe\x8c\xea\x82\xf0" -"\x81\x75\x50\x79\x74\x68\x6f\x6e\x81\x76\x82\xc6\x96\xbc\x82\xc3" -"\x82\xaf\x82\xdc\x82\xb5\x82\xbd\x81\x42\x0a\x82\xb1\x82\xcc\x82" -"\xe6\x82\xa4\x82\xc8\x94\x77\x8c\x69\x82\xa9\x82\xe7\x90\xb6\x82" -"\xdc\x82\xea\x82\xbd\x20\x50\x79\x74\x68\x6f\x6e\x20\x82\xcc\x8c" -"\xbe\x8c\xea\x90\xdd\x8c\x76\x82\xcd\x81\x41\x81\x75\x83\x56\x83" -"\x93\x83\x76\x83\x8b\x81\x76\x82\xc5\x81\x75\x8f\x4b\x93\xbe\x82" -"\xaa\x97\x65\x88\xd5\x81\x76\x82\xc6\x82\xa2\x82\xa4\x96\xda\x95" -"\x57\x82\xc9\x8f\x64\x93\x5f\x82\xaa\x92\x75\x82\xa9\x82\xea\x82" -"\xc4\x82\xa2\x82\xdc\x82\xb7\x81\x42\x0a\x91\xbd\x82\xad\x82\xcc" -"\x83\x58\x83\x4e\x83\x8a\x83\x76\x83\x67\x8c\x6e\x8c\xbe\x8c\xea" -"\x82\xc5\x82\xcd\x83\x86\x81\x5b\x83\x55\x82\xcc\x96\xda\x90\xe6" -"\x82\xcc\x97\x98\x95\xd6\x90\xab\x82\xf0\x97\x44\x90\xe6\x82\xb5" -"\x82\xc4\x90\x46\x81\x58\x82\xc8\x8b\x40\x94\x5c\x82\xf0\x8c\xbe" -"\x8c\xea\x97\x76\x91\x66\x82\xc6\x82\xb5\x82\xc4\x8e\xe6\x82\xe8" -"\x93\xfc\x82\xea\x82\xe9\x8f\xea\x8d\x87\x82\xaa\x91\xbd\x82\xa2" -"\x82\xcc\x82\xc5\x82\xb7\x82\xaa\x81\x41\x50\x79\x74\x68\x6f\x6e" -"\x20\x82\xc5\x82\xcd\x82\xbb\x82\xa4\x82\xa2\x82\xc1\x82\xbd\x8f" -"\xac\x8d\xd7\x8d\x48\x82\xaa\x92\xc7\x89\xc1\x82\xb3\x82\xea\x82" -"\xe9\x82\xb1\x82\xc6\x82\xcd\x82\xa0\x82\xdc\x82\xe8\x82\xa0\x82" -"\xe8\x82\xdc\x82\xb9\x82\xf1\x81\x42\x0a\x8c\xbe\x8c\xea\x8e\xa9" -"\x91\xcc\x82\xcc\x8b\x40\x94\x5c\x82\xcd\x8d\xc5\x8f\xac\x8c\xc0" -"\x82\xc9\x89\x9f\x82\xb3\x82\xa6\x81\x41\x95\x4b\x97\x76\x82\xc8" -"\x8b\x40\x94\x5c\x82\xcd\x8a\x67\x92\xa3\x83\x82\x83\x57\x83\x85" -"\x81\x5b\x83\x8b\x82\xc6\x82\xb5\x82\xc4\x92\xc7\x89\xc1\x82\xb7" -"\x82\xe9\x81\x41\x82\xc6\x82\xa2\x82\xa4\x82\xcc\x82\xaa\x20\x50" -"\x79\x74\x68\x6f\x6e\x20\x82\xcc\x83\x7c\x83\x8a\x83\x56\x81\x5b" -"\x82\xc5\x82\xb7\x81\x42\x0a\x0a\x83\x6d\x82\xf5\x20\x83\x9e\x20" -"\x83\x67\x83\x4c\x88\x4b\x88\x79\x20\x98\x83\xfc\xd6\x20\xfc\xd2" -"\xfc\xe6\xfb\xd4\x0a", -"\x50\x79\x74\x68\x6f\x6e\x20\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba" 
-"\xe3\x81\xaf\xe3\x80\x81\x31\x39\x39\x30\x20\xe5\xb9\xb4\xe3\x81" -"\x94\xe3\x82\x8d\xe3\x81\x8b\xe3\x82\x89\xe9\x96\x8b\xe5\xa7\x8b" -"\xe3\x81\x95\xe3\x82\x8c\xe3\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3" -"\x81\x99\xe3\x80\x82\x0a\xe9\x96\x8b\xe7\x99\xba\xe8\x80\x85\xe3" -"\x81\xae\x20\x47\x75\x69\x64\x6f\x20\x76\x61\x6e\x20\x52\x6f\x73" -"\x73\x75\x6d\x20\xe3\x81\xaf\xe6\x95\x99\xe8\x82\xb2\xe7\x94\xa8" -"\xe3\x81\xae\xe3\x83\x97\xe3\x83\xad\xe3\x82\xb0\xe3\x83\xa9\xe3" -"\x83\x9f\xe3\x83\xb3\xe3\x82\xb0\xe8\xa8\x80\xe8\xaa\x9e\xe3\x80" -"\x8c\x41\x42\x43\xe3\x80\x8d\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba" -"\xe3\x81\xab\xe5\x8f\x82\xe5\x8a\xa0\xe3\x81\x97\xe3\x81\xa6\xe3" -"\x81\x84\xe3\x81\xbe\xe3\x81\x97\xe3\x81\x9f\xe3\x81\x8c\xe3\x80" -"\x81\x41\x42\x43\x20\xe3\x81\xaf\xe5\xae\x9f\xe7\x94\xa8\xe4\xb8" -"\x8a\xe3\x81\xae\xe7\x9b\xae\xe7\x9a\x84\xe3\x81\xab\xe3\x81\xaf" -"\xe3\x81\x82\xe3\x81\xbe\xe3\x82\x8a\xe9\x81\xa9\xe3\x81\x97\xe3" -"\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3\x81\x9b\xe3\x82\x93\xe3\x81" -"\xa7\xe3\x81\x97\xe3\x81\x9f\xe3\x80\x82\x0a\xe3\x81\x93\xe3\x81" -"\xae\xe3\x81\x9f\xe3\x82\x81\xe3\x80\x81\x47\x75\x69\x64\x6f\x20" -"\xe3\x81\xaf\xe3\x82\x88\xe3\x82\x8a\xe5\xae\x9f\xe7\x94\xa8\xe7" -"\x9a\x84\xe3\x81\xaa\xe3\x83\x97\xe3\x83\xad\xe3\x82\xb0\xe3\x83" -"\xa9\xe3\x83\x9f\xe3\x83\xb3\xe3\x82\xb0\xe8\xa8\x80\xe8\xaa\x9e" -"\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba\xe3\x82\x92\xe9\x96\x8b\xe5" -"\xa7\x8b\xe3\x81\x97\xe3\x80\x81\xe8\x8b\xb1\xe5\x9b\xbd\x20\x42" -"\x42\x53\x20\xe6\x94\xbe\xe9\x80\x81\xe3\x81\xae\xe3\x82\xb3\xe3" -"\x83\xa1\xe3\x83\x87\xe3\x82\xa3\xe7\x95\xaa\xe7\xb5\x84\xe3\x80" -"\x8c\xe3\x83\xa2\xe3\x83\xb3\xe3\x83\x86\xe3\x82\xa3\x20\xe3\x83" -"\x91\xe3\x82\xa4\xe3\x82\xbd\xe3\x83\xb3\xe3\x80\x8d\xe3\x81\xae" -"\xe3\x83\x95\xe3\x82\xa1\xe3\x83\xb3\xe3\x81\xa7\xe3\x81\x82\xe3" -"\x82\x8b\x20\x47\x75\x69\x64\x6f\x20\xe3\x81\xaf\xe3\x81\x93\xe3" -"\x81\xae\xe8\xa8\x80\xe8\xaa\x9e\xe3\x82\x92\xe3\x80\x8c\x50\x79" -"\x74\x68\x6f\x6e\xe3\x80\x8d\xe3\x81\xa8\xe5\x90\x8d\xe3\x81\xa5" -"\xe3\x81\x91\xe3\x81\xbe\xe3\x81\x97\xe3\x81\x9f\xe3\x80\x82\x0a" -"\xe3\x81\x93\xe3\x81\xae\xe3\x82\x88\xe3\x81\x86\xe3\x81\xaa\xe8" -"\x83\x8c\xe6\x99\xaf\xe3\x81\x8b\xe3\x82\x89\xe7\x94\x9f\xe3\x81" -"\xbe\xe3\x82\x8c\xe3\x81\x9f\x20\x50\x79\x74\x68\x6f\x6e\x20\xe3" -"\x81\xae\xe8\xa8\x80\xe8\xaa\x9e\xe8\xa8\xad\xe8\xa8\x88\xe3\x81" -"\xaf\xe3\x80\x81\xe3\x80\x8c\xe3\x82\xb7\xe3\x83\xb3\xe3\x83\x97" -"\xe3\x83\xab\xe3\x80\x8d\xe3\x81\xa7\xe3\x80\x8c\xe7\xbf\x92\xe5" -"\xbe\x97\xe3\x81\x8c\xe5\xae\xb9\xe6\x98\x93\xe3\x80\x8d\xe3\x81" -"\xa8\xe3\x81\x84\xe3\x81\x86\xe7\x9b\xae\xe6\xa8\x99\xe3\x81\xab" -"\xe9\x87\x8d\xe7\x82\xb9\xe3\x81\x8c\xe7\xbd\xae\xe3\x81\x8b\xe3" -"\x82\x8c\xe3\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3\x81\x99\xe3\x80" -"\x82\x0a\xe5\xa4\x9a\xe3\x81\x8f\xe3\x81\xae\xe3\x82\xb9\xe3\x82" -"\xaf\xe3\x83\xaa\xe3\x83\x97\xe3\x83\x88\xe7\xb3\xbb\xe8\xa8\x80" -"\xe8\xaa\x9e\xe3\x81\xa7\xe3\x81\xaf\xe3\x83\xa6\xe3\x83\xbc\xe3" -"\x82\xb6\xe3\x81\xae\xe7\x9b\xae\xe5\x85\x88\xe3\x81\xae\xe5\x88" -"\xa9\xe4\xbe\xbf\xe6\x80\xa7\xe3\x82\x92\xe5\x84\xaa\xe5\x85\x88" -"\xe3\x81\x97\xe3\x81\xa6\xe8\x89\xb2\xe3\x80\x85\xe3\x81\xaa\xe6" -"\xa9\x9f\xe8\x83\xbd\xe3\x82\x92\xe8\xa8\x80\xe8\xaa\x9e\xe8\xa6" -"\x81\xe7\xb4\xa0\xe3\x81\xa8\xe3\x81\x97\xe3\x81\xa6\xe5\x8f\x96" -"\xe3\x82\x8a\xe5\x85\xa5\xe3\x82\x8c\xe3\x82\x8b\xe5\xa0\xb4\xe5" -"\x90\x88\xe3\x81\x8c\xe5\xa4\x9a\xe3\x81\x84\xe3\x81\xae\xe3\x81" -"\xa7\xe3\x81\x99\xe3\x81\x8c\xe3\x80\x81\x50\x79\x74\x68\x6f\x6e" 
-"\x20\xe3\x81\xa7\xe3\x81\xaf\xe3\x81\x9d\xe3\x81\x86\xe3\x81\x84" -"\xe3\x81\xa3\xe3\x81\x9f\xe5\xb0\x8f\xe7\xb4\xb0\xe5\xb7\xa5\xe3" -"\x81\x8c\xe8\xbf\xbd\xe5\x8a\xa0\xe3\x81\x95\xe3\x82\x8c\xe3\x82" -"\x8b\xe3\x81\x93\xe3\x81\xa8\xe3\x81\xaf\xe3\x81\x82\xe3\x81\xbe" -"\xe3\x82\x8a\xe3\x81\x82\xe3\x82\x8a\xe3\x81\xbe\xe3\x81\x9b\xe3" -"\x82\x93\xe3\x80\x82\x0a\xe8\xa8\x80\xe8\xaa\x9e\xe8\x87\xaa\xe4" -"\xbd\x93\xe3\x81\xae\xe6\xa9\x9f\xe8\x83\xbd\xe3\x81\xaf\xe6\x9c" -"\x80\xe5\xb0\x8f\xe9\x99\x90\xe3\x81\xab\xe6\x8a\xbc\xe3\x81\x95" -"\xe3\x81\x88\xe3\x80\x81\xe5\xbf\x85\xe8\xa6\x81\xe3\x81\xaa\xe6" -"\xa9\x9f\xe8\x83\xbd\xe3\x81\xaf\xe6\x8b\xa1\xe5\xbc\xb5\xe3\x83" -"\xa2\xe3\x82\xb8\xe3\x83\xa5\xe3\x83\xbc\xe3\x83\xab\xe3\x81\xa8" -"\xe3\x81\x97\xe3\x81\xa6\xe8\xbf\xbd\xe5\x8a\xa0\xe3\x81\x99\xe3" -"\x82\x8b\xe3\x80\x81\xe3\x81\xa8\xe3\x81\x84\xe3\x81\x86\xe3\x81" -"\xae\xe3\x81\x8c\x20\x50\x79\x74\x68\x6f\x6e\x20\xe3\x81\xae\xe3" -"\x83\x9d\xe3\x83\xaa\xe3\x82\xb7\xe3\x83\xbc\xe3\x81\xa7\xe3\x81" -"\x99\xe3\x80\x82\x0a\x0a\xe3\x83\x8e\xe3\x81\x8b\xe3\x82\x9a\x20" -"\xe3\x83\x88\xe3\x82\x9a\x20\xe3\x83\x88\xe3\x82\xad\xef\xa8\xb6" -"\xef\xa8\xb9\x20\xf0\xa1\x9a\xb4\xf0\xaa\x8e\x8c\x20\xe9\xba\x80" -"\xe9\xbd\x81\xf0\xa9\x9b\xb0\x0a"), -} diff --git a/lib-python/2.7/test/crashers/README b/lib-python/2.7/test/crashers/README --- a/lib-python/2.7/test/crashers/README +++ b/lib-python/2.7/test/crashers/README @@ -1,20 +1,16 @@ -This directory only contains tests for outstanding bugs that cause -the interpreter to segfault. Ideally this directory should always -be empty. Sometimes it may not be easy to fix the underlying cause. +This directory only contains tests for outstanding bugs that cause the +interpreter to segfault. Ideally this directory should always be empty, but +sometimes it may not be easy to fix the underlying cause and the bug is deemed +too obscure to invest the effort. Each test should fail when run from the command line: ./python Lib/test/crashers/weakref_in_del.py -Each test should have a link to the bug report: +Put as much info into a docstring or comments to help determine the cause of the +failure, as well as a bugs.python.org issue number if it exists. Particularly +note if the cause is system or environment dependent and what the variables are. - # http://python.org/sf/BUG# - -Put as much info into a docstring or comments to help determine -the cause of the failure. Particularly note if the cause is -system or environment dependent and what the variables are. - -Once the crash is fixed, the test case should be moved into an appropriate -test (even if it was originally from the test suite). This ensures the -regression doesn't happen again. And if it does, it should be easier -to track down. +Once the crash is fixed, the test case should be moved into an appropriate test +(even if it was originally from the test suite). This ensures the regression +doesn't happen again. And if it does, it should be easier to track down. diff --git a/lib-python/2.7/test/crashers/recursion_limit_too_high.py b/lib-python/2.7/test/crashers/recursion_limit_too_high.py --- a/lib-python/2.7/test/crashers/recursion_limit_too_high.py +++ b/lib-python/2.7/test/crashers/recursion_limit_too_high.py @@ -5,7 +5,7 @@ # file handles. # The point of this example is to show that sys.setrecursionlimit() is a -# hack, and not a robust solution. This example simply exercices a path +# hack, and not a robust solution. 
This example simply exercises a path # where it takes many C-level recursions, consuming a lot of stack # space, for each Python-level recursion. So 1000 times this amount of # stack space may be too much for standard platforms already. diff --git a/lib-python/2.7/test/decimaltestdata/and.decTest b/lib-python/2.7/test/decimaltestdata/and.decTest --- a/lib-python/2.7/test/decimaltestdata/and.decTest +++ b/lib-python/2.7/test/decimaltestdata/and.decTest @@ -1,338 +1,338 @@ ------------------------------------------------------------------------- --- and.decTest -- digitwise logical AND -- --- Copyright (c) IBM Corporation, 1981, 2008. All rights reserved. -- ------------------------------------------------------------------------- --- Please see the document "General Decimal Arithmetic Testcases" -- --- at http://www2.hursley.ibm.com/decimal for the description of -- --- these testcases. -- --- -- --- These testcases are experimental ('beta' versions), and they -- --- may contain errors. They are offered on an as-is basis. In -- --- particular, achieving the same results as the tests here is not -- --- a guarantee that an implementation complies with any Standard -- --- or specification. The tests are not exhaustive. -- --- -- --- Please send comments, suggestions, and corrections to the author: -- --- Mike Cowlishaw, IBM Fellow -- --- IBM UK, PO Box 31, Birmingham Road, Warwick CV34 5JL, UK -- --- mfc at uk.ibm.com -- ------------------------------------------------------------------------- -version: 2.59 - -extended: 1 -precision: 9 -rounding: half_up -maxExponent: 999 -minExponent: -999 - --- Sanity check (truth table) -andx001 and 0 0 -> 0 -andx002 and 0 1 -> 0 -andx003 and 1 0 -> 0 -andx004 and 1 1 -> 1 -andx005 and 1100 1010 -> 1000 -andx006 and 1111 10 -> 10 -andx007 and 1111 1010 -> 1010 - --- and at msd and msd-1 -andx010 and 000000000 000000000 -> 0 -andx011 and 000000000 100000000 -> 0 -andx012 and 100000000 000000000 -> 0 -andx013 and 100000000 100000000 -> 100000000 -andx014 and 000000000 000000000 -> 0 -andx015 and 000000000 010000000 -> 0 -andx016 and 010000000 000000000 -> 0 -andx017 and 010000000 010000000 -> 10000000 - --- Various lengths --- 123456789 123456789 123456789 -andx021 and 111111111 111111111 -> 111111111 -andx022 and 111111111111 111111111 -> 111111111 -andx023 and 111111111111 11111111 -> 11111111 -andx024 and 111111111 11111111 -> 11111111 -andx025 and 111111111 1111111 -> 1111111 -andx026 and 111111111111 111111 -> 111111 -andx027 and 111111111111 11111 -> 11111 -andx028 and 111111111111 1111 -> 1111 -andx029 and 111111111111 111 -> 111 -andx031 and 111111111111 11 -> 11 -andx032 and 111111111111 1 -> 1 -andx033 and 111111111111 1111111111 -> 111111111 -andx034 and 11111111111 11111111111 -> 111111111 -andx035 and 1111111111 111111111111 -> 111111111 -andx036 and 111111111 1111111111111 -> 111111111 - -andx040 and 111111111 111111111111 -> 111111111 -andx041 and 11111111 111111111111 -> 11111111 -andx042 and 11111111 111111111 -> 11111111 -andx043 and 1111111 111111111 -> 1111111 -andx044 and 111111 111111111 -> 111111 -andx045 and 11111 111111111 -> 11111 -andx046 and 1111 111111111 -> 1111 -andx047 and 111 111111111 -> 111 -andx048 and 11 111111111 -> 11 -andx049 and 1 111111111 -> 1 - -andx050 and 1111111111 1 -> 1 -andx051 and 111111111 1 -> 1 -andx052 and 11111111 1 -> 1 -andx053 and 1111111 1 -> 1 -andx054 and 111111 1 -> 1 -andx055 and 11111 1 -> 1 -andx056 and 1111 1 -> 1 -andx057 and 111 1 -> 1 -andx058 and 11 1 -> 1 -andx059 and 1 1 -> 1 - -andx060 
and 1111111111 0 -> 0 -andx061 and 111111111 0 -> 0 -andx062 and 11111111 0 -> 0 -andx063 and 1111111 0 -> 0 -andx064 and 111111 0 -> 0 -andx065 and 11111 0 -> 0 -andx066 and 1111 0 -> 0 -andx067 and 111 0 -> 0 -andx068 and 11 0 -> 0 -andx069 and 1 0 -> 0 - -andx070 and 1 1111111111 -> 1 -andx071 and 1 111111111 -> 1 -andx072 and 1 11111111 -> 1 -andx073 and 1 1111111 -> 1 -andx074 and 1 111111 -> 1 -andx075 and 1 11111 -> 1 -andx076 and 1 1111 -> 1 -andx077 and 1 111 -> 1 -andx078 and 1 11 -> 1 -andx079 and 1 1 -> 1 - -andx080 and 0 1111111111 -> 0 -andx081 and 0 111111111 -> 0 -andx082 and 0 11111111 -> 0 -andx083 and 0 1111111 -> 0 -andx084 and 0 111111 -> 0 -andx085 and 0 11111 -> 0 -andx086 and 0 1111 -> 0 -andx087 and 0 111 -> 0 -andx088 and 0 11 -> 0 -andx089 and 0 1 -> 0 - -andx090 and 011111111 111111111 -> 11111111 -andx091 and 101111111 111111111 -> 101111111 -andx092 and 110111111 111111111 -> 110111111 -andx093 and 111011111 111111111 -> 111011111 -andx094 and 111101111 111111111 -> 111101111 -andx095 and 111110111 111111111 -> 111110111 -andx096 and 111111011 111111111 -> 111111011 -andx097 and 111111101 111111111 -> 111111101 -andx098 and 111111110 111111111 -> 111111110 - -andx100 and 111111111 011111111 -> 11111111 -andx101 and 111111111 101111111 -> 101111111 -andx102 and 111111111 110111111 -> 110111111 -andx103 and 111111111 111011111 -> 111011111 -andx104 and 111111111 111101111 -> 111101111 -andx105 and 111111111 111110111 -> 111110111 -andx106 and 111111111 111111011 -> 111111011 -andx107 and 111111111 111111101 -> 111111101 -andx108 and 111111111 111111110 -> 111111110 - --- non-0/1 should not be accepted, nor should signs -andx220 and 111111112 111111111 -> NaN Invalid_operation -andx221 and 333333333 333333333 -> NaN Invalid_operation -andx222 and 555555555 555555555 -> NaN Invalid_operation -andx223 and 777777777 777777777 -> NaN Invalid_operation -andx224 and 999999999 999999999 -> NaN Invalid_operation -andx225 and 222222222 999999999 -> NaN Invalid_operation -andx226 and 444444444 999999999 -> NaN Invalid_operation -andx227 and 666666666 999999999 -> NaN Invalid_operation -andx228 and 888888888 999999999 -> NaN Invalid_operation -andx229 and 999999999 222222222 -> NaN Invalid_operation -andx230 and 999999999 444444444 -> NaN Invalid_operation -andx231 and 999999999 666666666 -> NaN Invalid_operation -andx232 and 999999999 888888888 -> NaN Invalid_operation --- a few randoms -andx240 and 567468689 -934981942 -> NaN Invalid_operation -andx241 and 567367689 934981942 -> NaN Invalid_operation -andx242 and -631917772 -706014634 -> NaN Invalid_operation -andx243 and -756253257 138579234 -> NaN Invalid_operation -andx244 and 835590149 567435400 -> NaN Invalid_operation --- test MSD -andx250 and 200000000 100000000 -> NaN Invalid_operation -andx251 and 700000000 100000000 -> NaN Invalid_operation -andx252 and 800000000 100000000 -> NaN Invalid_operation -andx253 and 900000000 100000000 -> NaN Invalid_operation -andx254 and 200000000 000000000 -> NaN Invalid_operation -andx255 and 700000000 000000000 -> NaN Invalid_operation -andx256 and 800000000 000000000 -> NaN Invalid_operation -andx257 and 900000000 000000000 -> NaN Invalid_operation -andx258 and 100000000 200000000 -> NaN Invalid_operation -andx259 and 100000000 700000000 -> NaN Invalid_operation -andx260 and 100000000 800000000 -> NaN Invalid_operation -andx261 and 100000000 900000000 -> NaN Invalid_operation -andx262 and 000000000 200000000 -> NaN Invalid_operation -andx263 and 000000000 700000000 -> NaN 
Invalid_operation -andx264 and 000000000 800000000 -> NaN Invalid_operation -andx265 and 000000000 900000000 -> NaN Invalid_operation --- test MSD-1 -andx270 and 020000000 100000000 -> NaN Invalid_operation -andx271 and 070100000 100000000 -> NaN Invalid_operation -andx272 and 080010000 100000001 -> NaN Invalid_operation -andx273 and 090001000 100000010 -> NaN Invalid_operation -andx274 and 100000100 020010100 -> NaN Invalid_operation -andx275 and 100000000 070001000 -> NaN Invalid_operation -andx276 and 100000010 080010100 -> NaN Invalid_operation -andx277 and 100000000 090000010 -> NaN Invalid_operation --- test LSD -andx280 and 001000002 100000000 -> NaN Invalid_operation -andx281 and 000000007 100000000 -> NaN Invalid_operation -andx282 and 000000008 100000000 -> NaN Invalid_operation -andx283 and 000000009 100000000 -> NaN Invalid_operation -andx284 and 100000000 000100002 -> NaN Invalid_operation -andx285 and 100100000 001000007 -> NaN Invalid_operation -andx286 and 100010000 010000008 -> NaN Invalid_operation -andx287 and 100001000 100000009 -> NaN Invalid_operation --- test Middie -andx288 and 001020000 100000000 -> NaN Invalid_operation -andx289 and 000070001 100000000 -> NaN Invalid_operation -andx290 and 000080000 100010000 -> NaN Invalid_operation -andx291 and 000090000 100001000 -> NaN Invalid_operation -andx292 and 100000010 000020100 -> NaN Invalid_operation -andx293 and 100100000 000070010 -> NaN Invalid_operation -andx294 and 100010100 000080001 -> NaN Invalid_operation -andx295 and 100001000 000090000 -> NaN Invalid_operation --- signs -andx296 and -100001000 -000000000 -> NaN Invalid_operation -andx297 and -100001000 000010000 -> NaN Invalid_operation -andx298 and 100001000 -000000000 -> NaN Invalid_operation -andx299 and 100001000 000011000 -> 1000 - --- Nmax, Nmin, Ntiny -andx331 and 2 9.99999999E+999 -> NaN Invalid_operation -andx332 and 3 1E-999 -> NaN Invalid_operation -andx333 and 4 1.00000000E-999 -> NaN Invalid_operation -andx334 and 5 1E-1007 -> NaN Invalid_operation -andx335 and 6 -1E-1007 -> NaN Invalid_operation -andx336 and 7 -1.00000000E-999 -> NaN Invalid_operation -andx337 and 8 -1E-999 -> NaN Invalid_operation -andx338 and 9 -9.99999999E+999 -> NaN Invalid_operation -andx341 and 9.99999999E+999 -18 -> NaN Invalid_operation -andx342 and 1E-999 01 -> NaN Invalid_operation -andx343 and 1.00000000E-999 -18 -> NaN Invalid_operation -andx344 and 1E-1007 18 -> NaN Invalid_operation -andx345 and -1E-1007 -10 -> NaN Invalid_operation -andx346 and -1.00000000E-999 18 -> NaN Invalid_operation -andx347 and -1E-999 10 -> NaN Invalid_operation -andx348 and -9.99999999E+999 -18 -> NaN Invalid_operation - --- A few other non-integers -andx361 and 1.0 1 -> NaN Invalid_operation -andx362 and 1E+1 1 -> NaN Invalid_operation -andx363 and 0.0 1 -> NaN Invalid_operation -andx364 and 0E+1 1 -> NaN Invalid_operation -andx365 and 9.9 1 -> NaN Invalid_operation -andx366 and 9E+1 1 -> NaN Invalid_operation -andx371 and 0 1.0 -> NaN Invalid_operation -andx372 and 0 1E+1 -> NaN Invalid_operation -andx373 and 0 0.0 -> NaN Invalid_operation -andx374 and 0 0E+1 -> NaN Invalid_operation -andx375 and 0 9.9 -> NaN Invalid_operation -andx376 and 0 9E+1 -> NaN Invalid_operation - --- All Specials are in error -andx780 and -Inf -Inf -> NaN Invalid_operation -andx781 and -Inf -1000 -> NaN Invalid_operation -andx782 and -Inf -1 -> NaN Invalid_operation -andx783 and -Inf -0 -> NaN Invalid_operation -andx784 and -Inf 0 -> NaN Invalid_operation -andx785 and -Inf 1 -> NaN Invalid_operation 
-andx786 and -Inf 1000 -> NaN Invalid_operation -andx787 and -1000 -Inf -> NaN Invalid_operation -andx788 and -Inf -Inf -> NaN Invalid_operation -andx789 and -1 -Inf -> NaN Invalid_operation -andx790 and -0 -Inf -> NaN Invalid_operation -andx791 and 0 -Inf -> NaN Invalid_operation -andx792 and 1 -Inf -> NaN Invalid_operation -andx793 and 1000 -Inf -> NaN Invalid_operation -andx794 and Inf -Inf -> NaN Invalid_operation - -andx800 and Inf -Inf -> NaN Invalid_operation -andx801 and Inf -1000 -> NaN Invalid_operation -andx802 and Inf -1 -> NaN Invalid_operation -andx803 and Inf -0 -> NaN Invalid_operation -andx804 and Inf 0 -> NaN Invalid_operation -andx805 and Inf 1 -> NaN Invalid_operation -andx806 and Inf 1000 -> NaN Invalid_operation -andx807 and Inf Inf -> NaN Invalid_operation -andx808 and -1000 Inf -> NaN Invalid_operation -andx809 and -Inf Inf -> NaN Invalid_operation -andx810 and -1 Inf -> NaN Invalid_operation -andx811 and -0 Inf -> NaN Invalid_operation -andx812 and 0 Inf -> NaN Invalid_operation -andx813 and 1 Inf -> NaN Invalid_operation -andx814 and 1000 Inf -> NaN Invalid_operation -andx815 and Inf Inf -> NaN Invalid_operation - -andx821 and NaN -Inf -> NaN Invalid_operation -andx822 and NaN -1000 -> NaN Invalid_operation -andx823 and NaN -1 -> NaN Invalid_operation -andx824 and NaN -0 -> NaN Invalid_operation -andx825 and NaN 0 -> NaN Invalid_operation -andx826 and NaN 1 -> NaN Invalid_operation -andx827 and NaN 1000 -> NaN Invalid_operation -andx828 and NaN Inf -> NaN Invalid_operation -andx829 and NaN NaN -> NaN Invalid_operation -andx830 and -Inf NaN -> NaN Invalid_operation -andx831 and -1000 NaN -> NaN Invalid_operation -andx832 and -1 NaN -> NaN Invalid_operation -andx833 and -0 NaN -> NaN Invalid_operation -andx834 and 0 NaN -> NaN Invalid_operation -andx835 and 1 NaN -> NaN Invalid_operation -andx836 and 1000 NaN -> NaN Invalid_operation -andx837 and Inf NaN -> NaN Invalid_operation - -andx841 and sNaN -Inf -> NaN Invalid_operation -andx842 and sNaN -1000 -> NaN Invalid_operation -andx843 and sNaN -1 -> NaN Invalid_operation -andx844 and sNaN -0 -> NaN Invalid_operation -andx845 and sNaN 0 -> NaN Invalid_operation -andx846 and sNaN 1 -> NaN Invalid_operation -andx847 and sNaN 1000 -> NaN Invalid_operation -andx848 and sNaN NaN -> NaN Invalid_operation -andx849 and sNaN sNaN -> NaN Invalid_operation -andx850 and NaN sNaN -> NaN Invalid_operation -andx851 and -Inf sNaN -> NaN Invalid_operation -andx852 and -1000 sNaN -> NaN Invalid_operation -andx853 and -1 sNaN -> NaN Invalid_operation -andx854 and -0 sNaN -> NaN Invalid_operation -andx855 and 0 sNaN -> NaN Invalid_operation -andx856 and 1 sNaN -> NaN Invalid_operation -andx857 and 1000 sNaN -> NaN Invalid_operation -andx858 and Inf sNaN -> NaN Invalid_operation -andx859 and NaN sNaN -> NaN Invalid_operation - --- propagating NaNs -andx861 and NaN1 -Inf -> NaN Invalid_operation -andx862 and +NaN2 -1000 -> NaN Invalid_operation -andx863 and NaN3 1000 -> NaN Invalid_operation -andx864 and NaN4 Inf -> NaN Invalid_operation -andx865 and NaN5 +NaN6 -> NaN Invalid_operation -andx866 and -Inf NaN7 -> NaN Invalid_operation -andx867 and -1000 NaN8 -> NaN Invalid_operation -andx868 and 1000 NaN9 -> NaN Invalid_operation -andx869 and Inf +NaN10 -> NaN Invalid_operation -andx871 and sNaN11 -Inf -> NaN Invalid_operation -andx872 and sNaN12 -1000 -> NaN Invalid_operation -andx873 and sNaN13 1000 -> NaN Invalid_operation -andx874 and sNaN14 NaN17 -> NaN Invalid_operation -andx875 and sNaN15 sNaN18 -> NaN Invalid_operation -andx876 and 
NaN16 sNaN19 -> NaN Invalid_operation -andx877 and -Inf +sNaN20 -> NaN Invalid_operation -andx878 and -1000 sNaN21 -> NaN Invalid_operation -andx879 and 1000 sNaN22 -> NaN Invalid_operation -andx880 and Inf sNaN23 -> NaN Invalid_operation -andx881 and +NaN25 +sNaN24 -> NaN Invalid_operation -andx882 and -NaN26 NaN28 -> NaN Invalid_operation -andx883 and -sNaN27 sNaN29 -> NaN Invalid_operation -andx884 and 1000 -NaN30 -> NaN Invalid_operation -andx885 and 1000 -sNaN31 -> NaN Invalid_operation +------------------------------------------------------------------------ +-- and.decTest -- digitwise logical AND -- +-- Copyright (c) IBM Corporation, 1981, 2008. All rights reserved. -- +------------------------------------------------------------------------ +-- Please see the document "General Decimal Arithmetic Testcases" -- +-- at http://www2.hursley.ibm.com/decimal for the description of -- +-- these testcases. -- +-- -- +-- These testcases are experimental ('beta' versions), and they -- +-- may contain errors. They are offered on an as-is basis. In -- +-- particular, achieving the same results as the tests here is not -- +-- a guarantee that an implementation complies with any Standard -- +-- or specification. The tests are not exhaustive. -- +-- -- +-- Please send comments, suggestions, and corrections to the author: -- +-- Mike Cowlishaw, IBM Fellow -- +-- IBM UK, PO Box 31, Birmingham Road, Warwick CV34 5JL, UK -- +-- mfc at uk.ibm.com -- +------------------------------------------------------------------------ +version: 2.59 + +extended: 1 +precision: 9 +rounding: half_up +maxExponent: 999 +minExponent: -999 + +-- Sanity check (truth table) +andx001 and 0 0 -> 0 +andx002 and 0 1 -> 0 +andx003 and 1 0 -> 0 +andx004 and 1 1 -> 1 +andx005 and 1100 1010 -> 1000 +andx006 and 1111 10 -> 10 +andx007 and 1111 1010 -> 1010 + +-- and at msd and msd-1 +andx010 and 000000000 000000000 -> 0 +andx011 and 000000000 100000000 -> 0 +andx012 and 100000000 000000000 -> 0 +andx013 and 100000000 100000000 -> 100000000 +andx014 and 000000000 000000000 -> 0 +andx015 and 000000000 010000000 -> 0 +andx016 and 010000000 000000000 -> 0 +andx017 and 010000000 010000000 -> 10000000 + +-- Various lengths +-- 123456789 123456789 123456789 +andx021 and 111111111 111111111 -> 111111111 +andx022 and 111111111111 111111111 -> 111111111 +andx023 and 111111111111 11111111 -> 11111111 +andx024 and 111111111 11111111 -> 11111111 +andx025 and 111111111 1111111 -> 1111111 +andx026 and 111111111111 111111 -> 111111 +andx027 and 111111111111 11111 -> 11111 +andx028 and 111111111111 1111 -> 1111 +andx029 and 111111111111 111 -> 111 +andx031 and 111111111111 11 -> 11 +andx032 and 111111111111 1 -> 1 +andx033 and 111111111111 1111111111 -> 111111111 +andx034 and 11111111111 11111111111 -> 111111111 +andx035 and 1111111111 111111111111 -> 111111111 +andx036 and 111111111 1111111111111 -> 111111111 + +andx040 and 111111111 111111111111 -> 111111111 +andx041 and 11111111 111111111111 -> 11111111 +andx042 and 11111111 111111111 -> 11111111 +andx043 and 1111111 111111111 -> 1111111 +andx044 and 111111 111111111 -> 111111 +andx045 and 11111 111111111 -> 11111 +andx046 and 1111 111111111 -> 1111 +andx047 and 111 111111111 -> 111 +andx048 and 11 111111111 -> 11 +andx049 and 1 111111111 -> 1 + +andx050 and 1111111111 1 -> 1 +andx051 and 111111111 1 -> 1 +andx052 and 11111111 1 -> 1 +andx053 and 1111111 1 -> 1 +andx054 and 111111 1 -> 1 +andx055 and 11111 1 -> 1 +andx056 and 1111 1 -> 1 +andx057 and 111 1 -> 1 +andx058 and 11 1 -> 1 +andx059 
and 1 1 -> 1 + +andx060 and 1111111111 0 -> 0 +andx061 and 111111111 0 -> 0 +andx062 and 11111111 0 -> 0 +andx063 and 1111111 0 -> 0 +andx064 and 111111 0 -> 0 +andx065 and 11111 0 -> 0 +andx066 and 1111 0 -> 0 +andx067 and 111 0 -> 0 +andx068 and 11 0 -> 0 +andx069 and 1 0 -> 0 + +andx070 and 1 1111111111 -> 1 +andx071 and 1 111111111 -> 1 +andx072 and 1 11111111 -> 1 +andx073 and 1 1111111 -> 1 +andx074 and 1 111111 -> 1 +andx075 and 1 11111 -> 1 +andx076 and 1 1111 -> 1 +andx077 and 1 111 -> 1 +andx078 and 1 11 -> 1 +andx079 and 1 1 -> 1 + +andx080 and 0 1111111111 -> 0 +andx081 and 0 111111111 -> 0 +andx082 and 0 11111111 -> 0 +andx083 and 0 1111111 -> 0 +andx084 and 0 111111 -> 0 +andx085 and 0 11111 -> 0 +andx086 and 0 1111 -> 0 +andx087 and 0 111 -> 0 +andx088 and 0 11 -> 0 +andx089 and 0 1 -> 0 + +andx090 and 011111111 111111111 -> 11111111 +andx091 and 101111111 111111111 -> 101111111 +andx092 and 110111111 111111111 -> 110111111 +andx093 and 111011111 111111111 -> 111011111 +andx094 and 111101111 111111111 -> 111101111 +andx095 and 111110111 111111111 -> 111110111 +andx096 and 111111011 111111111 -> 111111011 +andx097 and 111111101 111111111 -> 111111101 +andx098 and 111111110 111111111 -> 111111110 + +andx100 and 111111111 011111111 -> 11111111 +andx101 and 111111111 101111111 -> 101111111 +andx102 and 111111111 110111111 -> 110111111 +andx103 and 111111111 111011111 -> 111011111 +andx104 and 111111111 111101111 -> 111101111 +andx105 and 111111111 111110111 -> 111110111 +andx106 and 111111111 111111011 -> 111111011 +andx107 and 111111111 111111101 -> 111111101 +andx108 and 111111111 111111110 -> 111111110 + +-- non-0/1 should not be accepted, nor should signs +andx220 and 111111112 111111111 -> NaN Invalid_operation +andx221 and 333333333 333333333 -> NaN Invalid_operation +andx222 and 555555555 555555555 -> NaN Invalid_operation +andx223 and 777777777 777777777 -> NaN Invalid_operation +andx224 and 999999999 999999999 -> NaN Invalid_operation +andx225 and 222222222 999999999 -> NaN Invalid_operation +andx226 and 444444444 999999999 -> NaN Invalid_operation +andx227 and 666666666 999999999 -> NaN Invalid_operation +andx228 and 888888888 999999999 -> NaN Invalid_operation +andx229 and 999999999 222222222 -> NaN Invalid_operation +andx230 and 999999999 444444444 -> NaN Invalid_operation +andx231 and 999999999 666666666 -> NaN Invalid_operation +andx232 and 999999999 888888888 -> NaN Invalid_operation +-- a few randoms +andx240 and 567468689 -934981942 -> NaN Invalid_operation +andx241 and 567367689 934981942 -> NaN Invalid_operation +andx242 and -631917772 -706014634 -> NaN Invalid_operation +andx243 and -756253257 138579234 -> NaN Invalid_operation +andx244 and 835590149 567435400 -> NaN Invalid_operation +-- test MSD +andx250 and 200000000 100000000 -> NaN Invalid_operation +andx251 and 700000000 100000000 -> NaN Invalid_operation +andx252 and 800000000 100000000 -> NaN Invalid_operation +andx253 and 900000000 100000000 -> NaN Invalid_operation +andx254 and 200000000 000000000 -> NaN Invalid_operation +andx255 and 700000000 000000000 -> NaN Invalid_operation +andx256 and 800000000 000000000 -> NaN Invalid_operation +andx257 and 900000000 000000000 -> NaN Invalid_operation +andx258 and 100000000 200000000 -> NaN Invalid_operation +andx259 and 100000000 700000000 -> NaN Invalid_operation +andx260 and 100000000 800000000 -> NaN Invalid_operation +andx261 and 100000000 900000000 -> NaN Invalid_operation +andx262 and 000000000 200000000 -> NaN Invalid_operation +andx263 and 000000000 
700000000 -> NaN Invalid_operation +andx264 and 000000000 800000000 -> NaN Invalid_operation +andx265 and 000000000 900000000 -> NaN Invalid_operation +-- test MSD-1 +andx270 and 020000000 100000000 -> NaN Invalid_operation +andx271 and 070100000 100000000 -> NaN Invalid_operation +andx272 and 080010000 100000001 -> NaN Invalid_operation +andx273 and 090001000 100000010 -> NaN Invalid_operation +andx274 and 100000100 020010100 -> NaN Invalid_operation +andx275 and 100000000 070001000 -> NaN Invalid_operation +andx276 and 100000010 080010100 -> NaN Invalid_operation +andx277 and 100000000 090000010 -> NaN Invalid_operation +-- test LSD +andx280 and 001000002 100000000 -> NaN Invalid_operation +andx281 and 000000007 100000000 -> NaN Invalid_operation +andx282 and 000000008 100000000 -> NaN Invalid_operation +andx283 and 000000009 100000000 -> NaN Invalid_operation +andx284 and 100000000 000100002 -> NaN Invalid_operation +andx285 and 100100000 001000007 -> NaN Invalid_operation +andx286 and 100010000 010000008 -> NaN Invalid_operation +andx287 and 100001000 100000009 -> NaN Invalid_operation +-- test Middie +andx288 and 001020000 100000000 -> NaN Invalid_operation +andx289 and 000070001 100000000 -> NaN Invalid_operation +andx290 and 000080000 100010000 -> NaN Invalid_operation +andx291 and 000090000 100001000 -> NaN Invalid_operation +andx292 and 100000010 000020100 -> NaN Invalid_operation +andx293 and 100100000 000070010 -> NaN Invalid_operation +andx294 and 100010100 000080001 -> NaN Invalid_operation +andx295 and 100001000 000090000 -> NaN Invalid_operation +-- signs +andx296 and -100001000 -000000000 -> NaN Invalid_operation +andx297 and -100001000 000010000 -> NaN Invalid_operation +andx298 and 100001000 -000000000 -> NaN Invalid_operation +andx299 and 100001000 000011000 -> 1000 + +-- Nmax, Nmin, Ntiny +andx331 and 2 9.99999999E+999 -> NaN Invalid_operation +andx332 and 3 1E-999 -> NaN Invalid_operation +andx333 and 4 1.00000000E-999 -> NaN Invalid_operation +andx334 and 5 1E-1007 -> NaN Invalid_operation +andx335 and 6 -1E-1007 -> NaN Invalid_operation +andx336 and 7 -1.00000000E-999 -> NaN Invalid_operation +andx337 and 8 -1E-999 -> NaN Invalid_operation +andx338 and 9 -9.99999999E+999 -> NaN Invalid_operation +andx341 and 9.99999999E+999 -18 -> NaN Invalid_operation +andx342 and 1E-999 01 -> NaN Invalid_operation +andx343 and 1.00000000E-999 -18 -> NaN Invalid_operation +andx344 and 1E-1007 18 -> NaN Invalid_operation +andx345 and -1E-1007 -10 -> NaN Invalid_operation +andx346 and -1.00000000E-999 18 -> NaN Invalid_operation +andx347 and -1E-999 10 -> NaN Invalid_operation +andx348 and -9.99999999E+999 -18 -> NaN Invalid_operation + +-- A few other non-integers +andx361 and 1.0 1 -> NaN Invalid_operation +andx362 and 1E+1 1 -> NaN Invalid_operation +andx363 and 0.0 1 -> NaN Invalid_operation +andx364 and 0E+1 1 -> NaN Invalid_operation +andx365 and 9.9 1 -> NaN Invalid_operation +andx366 and 9E+1 1 -> NaN Invalid_operation +andx371 and 0 1.0 -> NaN Invalid_operation +andx372 and 0 1E+1 -> NaN Invalid_operation +andx373 and 0 0.0 -> NaN Invalid_operation +andx374 and 0 0E+1 -> NaN Invalid_operation +andx375 and 0 9.9 -> NaN Invalid_operation +andx376 and 0 9E+1 -> NaN Invalid_operation + +-- All Specials are in error +andx780 and -Inf -Inf -> NaN Invalid_operation +andx781 and -Inf -1000 -> NaN Invalid_operation +andx782 and -Inf -1 -> NaN Invalid_operation +andx783 and -Inf -0 -> NaN Invalid_operation +andx784 and -Inf 0 -> NaN Invalid_operation +andx785 and -Inf 1 -> NaN 
Invalid_operation +andx786 and -Inf 1000 -> NaN Invalid_operation +andx787 and -1000 -Inf -> NaN Invalid_operation +andx788 and -Inf -Inf -> NaN Invalid_operation +andx789 and -1 -Inf -> NaN Invalid_operation +andx790 and -0 -Inf -> NaN Invalid_operation +andx791 and 0 -Inf -> NaN Invalid_operation +andx792 and 1 -Inf -> NaN Invalid_operation +andx793 and 1000 -Inf -> NaN Invalid_operation +andx794 and Inf -Inf -> NaN Invalid_operation + +andx800 and Inf -Inf -> NaN Invalid_operation +andx801 and Inf -1000 -> NaN Invalid_operation +andx802 and Inf -1 -> NaN Invalid_operation +andx803 and Inf -0 -> NaN Invalid_operation +andx804 and Inf 0 -> NaN Invalid_operation +andx805 and Inf 1 -> NaN Invalid_operation +andx806 and Inf 1000 -> NaN Invalid_operation +andx807 and Inf Inf -> NaN Invalid_operation +andx808 and -1000 Inf -> NaN Invalid_operation +andx809 and -Inf Inf -> NaN Invalid_operation +andx810 and -1 Inf -> NaN Invalid_operation +andx811 and -0 Inf -> NaN Invalid_operation +andx812 and 0 Inf -> NaN Invalid_operation +andx813 and 1 Inf -> NaN Invalid_operation +andx814 and 1000 Inf -> NaN Invalid_operation +andx815 and Inf Inf -> NaN Invalid_operation + +andx821 and NaN -Inf -> NaN Invalid_operation +andx822 and NaN -1000 -> NaN Invalid_operation +andx823 and NaN -1 -> NaN Invalid_operation +andx824 and NaN -0 -> NaN Invalid_operation +andx825 and NaN 0 -> NaN Invalid_operation +andx826 and NaN 1 -> NaN Invalid_operation +andx827 and NaN 1000 -> NaN Invalid_operation +andx828 and NaN Inf -> NaN Invalid_operation +andx829 and NaN NaN -> NaN Invalid_operation +andx830 and -Inf NaN -> NaN Invalid_operation +andx831 and -1000 NaN -> NaN Invalid_operation +andx832 and -1 NaN -> NaN Invalid_operation +andx833 and -0 NaN -> NaN Invalid_operation +andx834 and 0 NaN -> NaN Invalid_operation +andx835 and 1 NaN -> NaN Invalid_operation +andx836 and 1000 NaN -> NaN Invalid_operation +andx837 and Inf NaN -> NaN Invalid_operation + +andx841 and sNaN -Inf -> NaN Invalid_operation +andx842 and sNaN -1000 -> NaN Invalid_operation +andx843 and sNaN -1 -> NaN Invalid_operation +andx844 and sNaN -0 -> NaN Invalid_operation +andx845 and sNaN 0 -> NaN Invalid_operation +andx846 and sNaN 1 -> NaN Invalid_operation +andx847 and sNaN 1000 -> NaN Invalid_operation +andx848 and sNaN NaN -> NaN Invalid_operation +andx849 and sNaN sNaN -> NaN Invalid_operation +andx850 and NaN sNaN -> NaN Invalid_operation +andx851 and -Inf sNaN -> NaN Invalid_operation +andx852 and -1000 sNaN -> NaN Invalid_operation +andx853 and -1 sNaN -> NaN Invalid_operation +andx854 and -0 sNaN -> NaN Invalid_operation +andx855 and 0 sNaN -> NaN Invalid_operation +andx856 and 1 sNaN -> NaN Invalid_operation +andx857 and 1000 sNaN -> NaN Invalid_operation +andx858 and Inf sNaN -> NaN Invalid_operation +andx859 and NaN sNaN -> NaN Invalid_operation + +-- propagating NaNs +andx861 and NaN1 -Inf -> NaN Invalid_operation +andx862 and +NaN2 -1000 -> NaN Invalid_operation +andx863 and NaN3 1000 -> NaN Invalid_operation +andx864 and NaN4 Inf -> NaN Invalid_operation +andx865 and NaN5 +NaN6 -> NaN Invalid_operation +andx866 and -Inf NaN7 -> NaN Invalid_operation +andx867 and -1000 NaN8 -> NaN Invalid_operation +andx868 and 1000 NaN9 -> NaN Invalid_operation +andx869 and Inf +NaN10 -> NaN Invalid_operation +andx871 and sNaN11 -Inf -> NaN Invalid_operation +andx872 and sNaN12 -1000 -> NaN Invalid_operation +andx873 and sNaN13 1000 -> NaN Invalid_operation +andx874 and sNaN14 NaN17 -> NaN Invalid_operation +andx875 and sNaN15 sNaN18 -> NaN 
Invalid_operation +andx876 and NaN16 sNaN19 -> NaN Invalid_operation +andx877 and -Inf +sNaN20 -> NaN Invalid_operation +andx878 and -1000 sNaN21 -> NaN Invalid_operation +andx879 and 1000 sNaN22 -> NaN Invalid_operation +andx880 and Inf sNaN23 -> NaN Invalid_operation +andx881 and +NaN25 +sNaN24 -> NaN Invalid_operation +andx882 and -NaN26 NaN28 -> NaN Invalid_operation +andx883 and -sNaN27 sNaN29 -> NaN Invalid_operation +andx884 and 1000 -NaN30 -> NaN Invalid_operation +andx885 and 1000 -sNaN31 -> NaN Invalid_operation diff --git a/lib-python/2.7/test/decimaltestdata/class.decTest b/lib-python/2.7/test/decimaltestdata/class.decTest --- a/lib-python/2.7/test/decimaltestdata/class.decTest +++ b/lib-python/2.7/test/decimaltestdata/class.decTest @@ -1,131 +1,131 @@ ------------------------------------------------------------------------- --- class.decTest -- Class operations -- --- Copyright (c) IBM Corporation, 1981, 2008. All rights reserved. -- ------------------------------------------------------------------------- --- Please see the document "General Decimal Arithmetic Testcases" -- --- at http://www2.hursley.ibm.com/decimal for the description of -- --- these testcases. -- --- -- --- These testcases are experimental ('beta' versions), and they -- --- may contain errors. They are offered on an as-is basis. In -- --- particular, achieving the same results as the tests here is not -- --- a guarantee that an implementation complies with any Standard -- --- or specification. The tests are not exhaustive. -- --- -- --- Please send comments, suggestions, and corrections to the author: -- --- Mike Cowlishaw, IBM Fellow -- --- IBM UK, PO Box 31, Birmingham Road, Warwick CV34 5JL, UK -- --- mfc at uk.ibm.com -- ------------------------------------------------------------------------- -version: 2.59 - --- [New 2006.11.27] - -precision: 9 -maxExponent: 999 -minExponent: -999 -extended: 1 -clamp: 1 -rounding: half_even - -clasx001 class 0 -> +Zero -clasx002 class 0.00 -> +Zero -clasx003 class 0E+5 -> +Zero -clasx004 class 1E-1007 -> +Subnormal -clasx005 class 0.1E-999 -> +Subnormal -clasx006 class 0.99999999E-999 -> +Subnormal -clasx007 class 1.00000000E-999 -> +Normal -clasx008 class 1E-999 -> +Normal -clasx009 class 1E-100 -> +Normal -clasx010 class 1E-10 -> +Normal -clasx012 class 1E-1 -> +Normal -clasx013 class 1 -> +Normal -clasx014 class 2.50 -> +Normal -clasx015 class 100.100 -> +Normal -clasx016 class 1E+30 -> +Normal -clasx017 class 1E+999 -> +Normal -clasx018 class 9.99999999E+999 -> +Normal -clasx019 class Inf -> +Infinity - -clasx021 class -0 -> -Zero -clasx022 class -0.00 -> -Zero -clasx023 class -0E+5 -> -Zero -clasx024 class -1E-1007 -> -Subnormal -clasx025 class -0.1E-999 -> -Subnormal -clasx026 class -0.99999999E-999 -> -Subnormal -clasx027 class -1.00000000E-999 -> -Normal -clasx028 class -1E-999 -> -Normal -clasx029 class -1E-100 -> -Normal -clasx030 class -1E-10 -> -Normal -clasx032 class -1E-1 -> -Normal -clasx033 class -1 -> -Normal -clasx034 class -2.50 -> -Normal -clasx035 class -100.100 -> -Normal -clasx036 class -1E+30 -> -Normal -clasx037 class -1E+999 -> -Normal -clasx038 class -9.99999999E+999 -> -Normal -clasx039 class -Inf -> -Infinity - -clasx041 class NaN -> NaN -clasx042 class -NaN -> NaN -clasx043 class +NaN12345 -> NaN -clasx044 class sNaN -> sNaN -clasx045 class -sNaN -> sNaN -clasx046 class +sNaN12345 -> sNaN - - --- decimal64 bounds - -precision: 16 -maxExponent: 384 -minExponent: -383 -clamp: 1 -rounding: half_even - -clasx201 class 0 -> +Zero -clasx202 
class 0.00 -> +Zero -clasx203 class 0E+5 -> +Zero -clasx204 class 1E-396 -> +Subnormal -clasx205 class 0.1E-383 -> +Subnormal -clasx206 class 0.999999999999999E-383 -> +Subnormal -clasx207 class 1.000000000000000E-383 -> +Normal -clasx208 class 1E-383 -> +Normal -clasx209 class 1E-100 -> +Normal -clasx210 class 1E-10 -> +Normal -clasx212 class 1E-1 -> +Normal -clasx213 class 1 -> +Normal -clasx214 class 2.50 -> +Normal -clasx215 class 100.100 -> +Normal -clasx216 class 1E+30 -> +Normal -clasx217 class 1E+384 -> +Normal -clasx218 class 9.999999999999999E+384 -> +Normal -clasx219 class Inf -> +Infinity - -clasx221 class -0 -> -Zero -clasx222 class -0.00 -> -Zero -clasx223 class -0E+5 -> -Zero -clasx224 class -1E-396 -> -Subnormal -clasx225 class -0.1E-383 -> -Subnormal -clasx226 class -0.999999999999999E-383 -> -Subnormal -clasx227 class -1.000000000000000E-383 -> -Normal -clasx228 class -1E-383 -> -Normal -clasx229 class -1E-100 -> -Normal -clasx230 class -1E-10 -> -Normal -clasx232 class -1E-1 -> -Normal -clasx233 class -1 -> -Normal -clasx234 class -2.50 -> -Normal -clasx235 class -100.100 -> -Normal -clasx236 class -1E+30 -> -Normal -clasx237 class -1E+384 -> -Normal -clasx238 class -9.999999999999999E+384 -> -Normal -clasx239 class -Inf -> -Infinity - -clasx241 class NaN -> NaN -clasx242 class -NaN -> NaN -clasx243 class +NaN12345 -> NaN -clasx244 class sNaN -> sNaN -clasx245 class -sNaN -> sNaN -clasx246 class +sNaN12345 -> sNaN - - - +------------------------------------------------------------------------ +-- class.decTest -- Class operations -- +-- Copyright (c) IBM Corporation, 1981, 2008. All rights reserved. -- +------------------------------------------------------------------------ +-- Please see the document "General Decimal Arithmetic Testcases" -- +-- at http://www2.hursley.ibm.com/decimal for the description of -- +-- these testcases. -- +-- -- +-- These testcases are experimental ('beta' versions), and they -- +-- may contain errors. They are offered on an as-is basis. In -- +-- particular, achieving the same results as the tests here is not -- +-- a guarantee that an implementation complies with any Standard -- +-- or specification. The tests are not exhaustive. 
-- +-- -- +-- Please send comments, suggestions, and corrections to the author: -- +-- Mike Cowlishaw, IBM Fellow -- +-- IBM UK, PO Box 31, Birmingham Road, Warwick CV34 5JL, UK -- +-- mfc at uk.ibm.com -- +------------------------------------------------------------------------ +version: 2.59 + +-- [New 2006.11.27] + +precision: 9 +maxExponent: 999 +minExponent: -999 +extended: 1 +clamp: 1 +rounding: half_even + +clasx001 class 0 -> +Zero +clasx002 class 0.00 -> +Zero +clasx003 class 0E+5 -> +Zero +clasx004 class 1E-1007 -> +Subnormal +clasx005 class 0.1E-999 -> +Subnormal +clasx006 class 0.99999999E-999 -> +Subnormal +clasx007 class 1.00000000E-999 -> +Normal +clasx008 class 1E-999 -> +Normal +clasx009 class 1E-100 -> +Normal +clasx010 class 1E-10 -> +Normal +clasx012 class 1E-1 -> +Normal +clasx013 class 1 -> +Normal +clasx014 class 2.50 -> +Normal +clasx015 class 100.100 -> +Normal +clasx016 class 1E+30 -> +Normal +clasx017 class 1E+999 -> +Normal +clasx018 class 9.99999999E+999 -> +Normal +clasx019 class Inf -> +Infinity + +clasx021 class -0 -> -Zero +clasx022 class -0.00 -> -Zero +clasx023 class -0E+5 -> -Zero +clasx024 class -1E-1007 -> -Subnormal +clasx025 class -0.1E-999 -> -Subnormal +clasx026 class -0.99999999E-999 -> -Subnormal +clasx027 class -1.00000000E-999 -> -Normal +clasx028 class -1E-999 -> -Normal +clasx029 class -1E-100 -> -Normal +clasx030 class -1E-10 -> -Normal +clasx032 class -1E-1 -> -Normal +clasx033 class -1 -> -Normal +clasx034 class -2.50 -> -Normal +clasx035 class -100.100 -> -Normal +clasx036 class -1E+30 -> -Normal +clasx037 class -1E+999 -> -Normal +clasx038 class -9.99999999E+999 -> -Normal +clasx039 class -Inf -> -Infinity + +clasx041 class NaN -> NaN +clasx042 class -NaN -> NaN +clasx043 class +NaN12345 -> NaN +clasx044 class sNaN -> sNaN +clasx045 class -sNaN -> sNaN +clasx046 class +sNaN12345 -> sNaN + + +-- decimal64 bounds + +precision: 16 +maxExponent: 384 +minExponent: -383 +clamp: 1 +rounding: half_even + +clasx201 class 0 -> +Zero +clasx202 class 0.00 -> +Zero +clasx203 class 0E+5 -> +Zero +clasx204 class 1E-396 -> +Subnormal +clasx205 class 0.1E-383 -> +Subnormal +clasx206 class 0.999999999999999E-383 -> +Subnormal +clasx207 class 1.000000000000000E-383 -> +Normal +clasx208 class 1E-383 -> +Normal +clasx209 class 1E-100 -> +Normal +clasx210 class 1E-10 -> +Normal +clasx212 class 1E-1 -> +Normal +clasx213 class 1 -> +Normal +clasx214 class 2.50 -> +Normal +clasx215 class 100.100 -> +Normal +clasx216 class 1E+30 -> +Normal +clasx217 class 1E+384 -> +Normal +clasx218 class 9.999999999999999E+384 -> +Normal +clasx219 class Inf -> +Infinity + +clasx221 class -0 -> -Zero +clasx222 class -0.00 -> -Zero +clasx223 class -0E+5 -> -Zero +clasx224 class -1E-396 -> -Subnormal +clasx225 class -0.1E-383 -> -Subnormal +clasx226 class -0.999999999999999E-383 -> -Subnormal +clasx227 class -1.000000000000000E-383 -> -Normal +clasx228 class -1E-383 -> -Normal +clasx229 class -1E-100 -> -Normal +clasx230 class -1E-10 -> -Normal +clasx232 class -1E-1 -> -Normal +clasx233 class -1 -> -Normal +clasx234 class -2.50 -> -Normal +clasx235 class -100.100 -> -Normal +clasx236 class -1E+30 -> -Normal +clasx237 class -1E+384 -> -Normal +clasx238 class -9.999999999999999E+384 -> -Normal +clasx239 class -Inf -> -Infinity + +clasx241 class NaN -> NaN +clasx242 class -NaN -> NaN +clasx243 class +NaN12345 -> NaN +clasx244 class sNaN -> sNaN +clasx245 class -sNaN -> sNaN +clasx246 class +sNaN12345 -> sNaN + + + diff --git a/lib-python/2.7/test/decimaltestdata/comparetotal.decTest 
b/lib-python/2.7/test/decimaltestdata/comparetotal.decTest --- a/lib-python/2.7/test/decimaltestdata/comparetotal.decTest +++ b/lib-python/2.7/test/decimaltestdata/comparetotal.decTest @@ -1,798 +1,798 @@ ------------------------------------------------------------------------- --- comparetotal.decTest -- decimal comparison using total ordering -- --- Copyright (c) IBM Corporation, 1981, 2008. All rights reserved. -- ------------------------------------------------------------------------- --- Please see the document "General Decimal Arithmetic Testcases" -- --- at http://www2.hursley.ibm.com/decimal for the description of -- --- these testcases. -- --- -- --- These testcases are experimental ('beta' versions), and they -- --- may contain errors. They are offered on an as-is basis. In -- --- particular, achieving the same results as the tests here is not -- --- a guarantee that an implementation complies with any Standard -- --- or specification. The tests are not exhaustive. -- --- -- --- Please send comments, suggestions, and corrections to the author: -- --- Mike Cowlishaw, IBM Fellow -- --- IBM UK, PO Box 31, Birmingham Road, Warwick CV34 5JL, UK -- --- mfc at uk.ibm.com -- ------------------------------------------------------------------------- -version: 2.59 - --- Note that we cannot assume add/subtract tests cover paths adequately, --- here, because the code might be quite different (comparison cannot --- overflow or underflow, so actual subtractions are not necessary). --- Similarly, comparetotal will have some radically different paths --- than compare. - -extended: 1 -precision: 16 -rounding: half_up -maxExponent: 384 -minExponent: -383 - --- sanity checks -cotx001 comparetotal -2 -2 -> 0 -cotx002 comparetotal -2 -1 -> -1 -cotx003 comparetotal -2 0 -> -1 -cotx004 comparetotal -2 1 -> -1 -cotx005 comparetotal -2 2 -> -1 -cotx006 comparetotal -1 -2 -> 1 -cotx007 comparetotal -1 -1 -> 0 -cotx008 comparetotal -1 0 -> -1 -cotx009 comparetotal -1 1 -> -1 -cotx010 comparetotal -1 2 -> -1 -cotx011 comparetotal 0 -2 -> 1 -cotx012 comparetotal 0 -1 -> 1 -cotx013 comparetotal 0 0 -> 0 -cotx014 comparetotal 0 1 -> -1 -cotx015 comparetotal 0 2 -> -1 -cotx016 comparetotal 1 -2 -> 1 -cotx017 comparetotal 1 -1 -> 1 -cotx018 comparetotal 1 0 -> 1 -cotx019 comparetotal 1 1 -> 0 -cotx020 comparetotal 1 2 -> -1 -cotx021 comparetotal 2 -2 -> 1 -cotx022 comparetotal 2 -1 -> 1 -cotx023 comparetotal 2 0 -> 1 -cotx025 comparetotal 2 1 -> 1 -cotx026 comparetotal 2 2 -> 0 - -cotx031 comparetotal -20 -20 -> 0 -cotx032 comparetotal -20 -10 -> -1 -cotx033 comparetotal -20 00 -> -1 -cotx034 comparetotal -20 10 -> -1 -cotx035 comparetotal -20 20 -> -1 -cotx036 comparetotal -10 -20 -> 1 -cotx037 comparetotal -10 -10 -> 0 -cotx038 comparetotal -10 00 -> -1 -cotx039 comparetotal -10 10 -> -1 -cotx040 comparetotal -10 20 -> -1 -cotx041 comparetotal 00 -20 -> 1 -cotx042 comparetotal 00 -10 -> 1 -cotx043 comparetotal 00 00 -> 0 -cotx044 comparetotal 00 10 -> -1 -cotx045 comparetotal 00 20 -> -1 -cotx046 comparetotal 10 -20 -> 1 -cotx047 comparetotal 10 -10 -> 1 -cotx048 comparetotal 10 00 -> 1 -cotx049 comparetotal 10 10 -> 0 -cotx050 comparetotal 10 20 -> -1 -cotx051 comparetotal 20 -20 -> 1 -cotx052 comparetotal 20 -10 -> 1 -cotx053 comparetotal 20 00 -> 1 -cotx055 comparetotal 20 10 -> 1 -cotx056 comparetotal 20 20 -> 0 - -cotx061 comparetotal -2.0 -2.0 -> 0 -cotx062 comparetotal -2.0 -1.0 -> -1 -cotx063 comparetotal -2.0 0.0 -> -1 -cotx064 comparetotal -2.0 1.0 -> -1 -cotx065 comparetotal -2.0 2.0 -> -1 -cotx066 
comparetotal -1.0 -2.0 -> 1 -cotx067 comparetotal -1.0 -1.0 -> 0 -cotx068 comparetotal -1.0 0.0 -> -1 -cotx069 comparetotal -1.0 1.0 -> -1 -cotx070 comparetotal -1.0 2.0 -> -1 -cotx071 comparetotal 0.0 -2.0 -> 1 -cotx072 comparetotal 0.0 -1.0 -> 1 -cotx073 comparetotal 0.0 0.0 -> 0 -cotx074 comparetotal 0.0 1.0 -> -1 -cotx075 comparetotal 0.0 2.0 -> -1 -cotx076 comparetotal 1.0 -2.0 -> 1 -cotx077 comparetotal 1.0 -1.0 -> 1 -cotx078 comparetotal 1.0 0.0 -> 1 -cotx079 comparetotal 1.0 1.0 -> 0 -cotx080 comparetotal 1.0 2.0 -> -1 -cotx081 comparetotal 2.0 -2.0 -> 1 -cotx082 comparetotal 2.0 -1.0 -> 1 -cotx083 comparetotal 2.0 0.0 -> 1 -cotx085 comparetotal 2.0 1.0 -> 1 -cotx086 comparetotal 2.0 2.0 -> 0 - --- now some cases which might overflow if subtract were used -maxexponent: 999999999 -minexponent: -999999999 -cotx090 comparetotal 9.99999999E+999999999 9.99999999E+999999999 -> 0 -cotx091 comparetotal -9.99999999E+999999999 9.99999999E+999999999 -> -1 -cotx092 comparetotal 9.99999999E+999999999 -9.99999999E+999999999 -> 1 -cotx093 comparetotal -9.99999999E+999999999 -9.99999999E+999999999 -> 0 - --- Examples -cotx094 comparetotal 12.73 127.9 -> -1 -cotx095 comparetotal -127 12 -> -1 -cotx096 comparetotal 12.30 12.3 -> -1 -cotx097 comparetotal 12.30 12.30 -> 0 -cotx098 comparetotal 12.3 12.300 -> 1 -cotx099 comparetotal 12.3 NaN -> -1 - --- some differing length/exponent cases --- in this first group, compare would compare all equal -cotx100 comparetotal 7.0 7.0 -> 0 -cotx101 comparetotal 7.0 7 -> -1 -cotx102 comparetotal 7 7.0 -> 1 -cotx103 comparetotal 7E+0 7.0 -> 1 -cotx104 comparetotal 70E-1 7.0 -> 0 -cotx105 comparetotal 0.7E+1 7 -> 0 -cotx106 comparetotal 70E-1 7 -> -1 -cotx107 comparetotal 7.0 7E+0 -> -1 -cotx108 comparetotal 7.0 70E-1 -> 0 -cotx109 comparetotal 7 0.7E+1 -> 0 -cotx110 comparetotal 7 70E-1 -> 1 - -cotx120 comparetotal 8.0 7.0 -> 1 -cotx121 comparetotal 8.0 7 -> 1 -cotx122 comparetotal 8 7.0 -> 1 -cotx123 comparetotal 8E+0 7.0 -> 1 -cotx124 comparetotal 80E-1 7.0 -> 1 -cotx125 comparetotal 0.8E+1 7 -> 1 -cotx126 comparetotal 80E-1 7 -> 1 -cotx127 comparetotal 8.0 7E+0 -> 1 -cotx128 comparetotal 8.0 70E-1 -> 1 -cotx129 comparetotal 8 0.7E+1 -> 1 -cotx130 comparetotal 8 70E-1 -> 1 - -cotx140 comparetotal 8.0 9.0 -> -1 -cotx141 comparetotal 8.0 9 -> -1 -cotx142 comparetotal 8 9.0 -> -1 -cotx143 comparetotal 8E+0 9.0 -> -1 -cotx144 comparetotal 80E-1 9.0 -> -1 -cotx145 comparetotal 0.8E+1 9 -> -1 -cotx146 comparetotal 80E-1 9 -> -1 -cotx147 comparetotal 8.0 9E+0 -> -1 -cotx148 comparetotal 8.0 90E-1 -> -1 -cotx149 comparetotal 8 0.9E+1 -> -1 -cotx150 comparetotal 8 90E-1 -> -1 - --- and again, with sign changes -+ .. 
-cotx200 comparetotal -7.0 7.0 -> -1 -cotx201 comparetotal -7.0 7 -> -1 -cotx202 comparetotal -7 7.0 -> -1 -cotx203 comparetotal -7E+0 7.0 -> -1 -cotx204 comparetotal -70E-1 7.0 -> -1 -cotx205 comparetotal -0.7E+1 7 -> -1 -cotx206 comparetotal -70E-1 7 -> -1 -cotx207 comparetotal -7.0 7E+0 -> -1 -cotx208 comparetotal -7.0 70E-1 -> -1 -cotx209 comparetotal -7 0.7E+1 -> -1 -cotx210 comparetotal -7 70E-1 -> -1 - -cotx220 comparetotal -8.0 7.0 -> -1 -cotx221 comparetotal -8.0 7 -> -1 -cotx222 comparetotal -8 7.0 -> -1 -cotx223 comparetotal -8E+0 7.0 -> -1 -cotx224 comparetotal -80E-1 7.0 -> -1 -cotx225 comparetotal -0.8E+1 7 -> -1 -cotx226 comparetotal -80E-1 7 -> -1 -cotx227 comparetotal -8.0 7E+0 -> -1 -cotx228 comparetotal -8.0 70E-1 -> -1 -cotx229 comparetotal -8 0.7E+1 -> -1 -cotx230 comparetotal -8 70E-1 -> -1 - -cotx240 comparetotal -8.0 9.0 -> -1 -cotx241 comparetotal -8.0 9 -> -1 -cotx242 comparetotal -8 9.0 -> -1 From noreply at buildbot.pypy.org Fri Feb 17 12:21:27 2012 From: noreply at buildbot.pypy.org (mattip) Date: Fri, 17 Feb 2012 12:21:27 +0100 (CET) Subject: [pypy-commit] pypy win32-cleanup_2: (sthalik) whoops Message-ID: <20120217112127.9041B8204C@wyvern.cs.uni-duesseldorf.de> Author: mattip Branch: win32-cleanup_2 Changeset: r52581:cbfdff9f4414 Date: 2012-02-17 13:15 +0200 http://bitbucket.org/pypy/pypy/changeset/cbfdff9f4414/ Log: (sthalik) whoops diff --git a/pypy/translator/driver.py b/pypy/translator/driver.py --- a/pypy/translator/driver.py +++ b/pypy/translator/driver.py @@ -560,8 +560,8 @@ shutil.copy(str(soname), str(newsoname)) self.log.info("copied: %s" % (newsoname,)) if sys.platform == 'win32': - shutil.copyfile(soname.new(ext='lib'), - newsoname.new(ext='lib')) + shutil.copyfile(str(soname.new(ext='lib')), + str(newsoname.new(ext='lib'))) self.c_entryp = newexename self.log.info('usession directory: %s' % (udir,)) self.log.info("created: %s" % (self.c_entryp,)) From noreply at buildbot.pypy.org Fri Feb 17 12:32:12 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Fri, 17 Feb 2012 12:32:12 +0100 (CET) Subject: [pypy-commit] pypy ffistruct: hg merge default. Message-ID: <20120217113212.6137B8204C@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: ffistruct Changeset: r52582:e582a6f9ec55 Date: 2012-02-17 12:31 +0100 http://bitbucket.org/pypy/pypy/changeset/e582a6f9ec55/ Log: hg merge default. This merge was painful because: 1. on default there was support for libffi.array_{get,set}item, which was mostly copied&adapted from this branch's struct_{get,set}field, and it conflicted all over the place 2. llmodel.descr was refactored on default. The net result is a simplification, because we no longer need a separate class for dynamic field descrs, we can just use instantiate the normal one. diff too long, truncating to 10000 out of 209370 lines diff --git a/.hgignore b/.hgignore --- a/.hgignore +++ b/.hgignore @@ -2,6 +2,9 @@ *.py[co] *~ .*.swp +.idea +.project +.pydevproject syntax: regexp ^testresult$ diff --git a/.hgtags b/.hgtags --- a/.hgtags +++ b/.hgtags @@ -1,3 +1,4 @@ b590cf6de4190623aad9aa698694c22e614d67b9 release-1.5 b48df0bf4e75b81d98f19ce89d4a7dc3e1dab5e5 benchmarked d8ac7d23d3ec5f9a0fa1264972f74a010dbfd07f release-1.6 +ff4af8f318821f7f5ca998613a60fca09aa137da release-1.7 diff --git a/LICENSE b/LICENSE --- a/LICENSE +++ b/LICENSE @@ -27,7 +27,7 @@ DEALINGS IN THE SOFTWARE. 
-PyPy Copyright holders 2003-2011 +PyPy Copyright holders 2003-2012 ----------------------------------- Except when otherwise stated (look for LICENSE files or information at @@ -37,43 +37,47 @@ Armin Rigo Maciej Fijalkowski Carl Friedrich Bolz + Amaury Forgeot d'Arc Antonio Cuni - Amaury Forgeot d'Arc Samuele Pedroni Michael Hudson Holger Krekel - Benjamin Peterson + Alex Gaynor Christian Tismer Hakan Ardo - Alex Gaynor + Benjamin Peterson + David Schneider Eric van Riet Paap Anders Chrigstrom - David Schneider Richard Emslie Dan Villiom Podlaski Christiansen Alexander Schremmer + Lukas Diekmann Aurelien Campeas Anders Lehmann Camillo Bruni Niklaus Haldimann + Sven Hager Leonardo Santagada Toon Verwaest Seo Sanghyeon + Justin Peel Lawrence Oluyede Bartosz Skowron Jakub Gustak Guido Wesdorp Daniel Roberts + Laura Creighton Adrien Di Mascio - Laura Creighton Ludovic Aubry Niko Matsakis + Wim Lavrijsen + Matti Picus Jason Creighton Jacob Hallen Alex Martelli Anders Hammarquist Jan de Mooij - Wim Lavrijsen Stephan Diehl Michael Foord Stefan Schwarzer @@ -84,34 +88,36 @@ Alexandre Fayolle Marius Gedminas Simon Burton - Justin Peel + David Edelsohn Jean-Paul Calderone John Witulski - Lukas Diekmann + Timo Paulssen holger krekel - Wim Lavrijsen Dario Bertini + Mark Pearse Andreas Stührk Jean-Philippe St. Pierre Guido van Rossum Pavel Vinogradov Valentino Volonghi Paul deGrandis + Ilya Osadchiy + Ronny Pfannschmidt Adrian Kuhn tav Georg Brandl + Philip Jenvey Gerald Klix Wanja Saatkamp - Ronny Pfannschmidt Boris Feigin Oscar Nierstrasz David Malcolm Eugene Oden Henry Mason - Sven Hager + Jeff Terrace Lukas Renggli - Ilya Osadchiy Guenter Jantzen + Ned Batchelder Bert Freudenberg Amit Regmi Ben Young @@ -142,7 +148,6 @@ Anders Qvist Beatrice During Alexander Sedov - Timo Paulssen Corbin Simpson Vincent Legoll Romain Guillebert @@ -165,9 +170,10 @@ Lucio Torre Lene Wagner Miguel de Val Borro + Artur Lisiecki + Bruno Gola Ignas Mikalajunas - Artur Lisiecki - Philip Jenvey + Stefano Rivera Joshua Gilbert Godefroid Chappelle Yusei Tahara @@ -179,17 +185,17 @@ Kristjan Valur Jonsson Bobby Impollonia Michael Hudson-Doyle + Laurence Tratt + Yasir Suhail Andrew Thompson Anders Sigfridsson Floris Bruynooghe Jacek Generowicz Dan Colish Zooko Wilcox-O Hearn - Dan Villiom Podlaski Christiansen - Anders Hammarquist + Dan Loewenherz Chris Lambacher Dinu Gherman - Dan Colish Brett Cannon Daniel Neuhäuser Michael Chermside diff --git a/ctypes_configure/cbuild.py b/ctypes_configure/cbuild.py --- a/ctypes_configure/cbuild.py +++ b/ctypes_configure/cbuild.py @@ -206,8 +206,9 @@ cfiles += eci.separate_module_files include_dirs = list(eci.include_dirs) library_dirs = list(eci.library_dirs) - if sys.platform == 'darwin': # support Fink & Darwinports - for s in ('/sw/', '/opt/local/'): + if (sys.platform == 'darwin' or # support Fink & Darwinports + sys.platform.startswith('freebsd')): + for s in ('/sw/', '/opt/local/', '/usr/local/'): if s + 'include' not in include_dirs and \ os.path.exists(s + 'include'): include_dirs.append(s + 'include') @@ -380,9 +381,9 @@ self.link_extra += ['-pthread'] if sys.platform == 'win32': self.link_extra += ['/DEBUG'] # generate .pdb file - if sys.platform == 'darwin': - # support Fink & Darwinports - for s in ('/sw/', '/opt/local/'): + if (sys.platform == 'darwin' or # support Fink & Darwinports + sys.platform.startswith('freebsd')): + for s in ('/sw/', '/opt/local/', '/usr/local/'): if s + 'include' not in self.include_dirs and \ os.path.exists(s + 'include'): 
self.include_dirs.append(s + 'include') @@ -395,7 +396,6 @@ self.outputfilename = py.path.local(cfilenames[0]).new(ext=ext) else: self.outputfilename = py.path.local(outputfilename) - self.eci = eci def build(self, noerr=False): basename = self.outputfilename.new(ext='') @@ -436,7 +436,7 @@ old = cfile.dirpath().chdir() try: res = compiler.compile([cfile.basename], - include_dirs=self.eci.include_dirs, + include_dirs=self.include_dirs, extra_preargs=self.compile_extra) assert len(res) == 1 cobjfile = py.path.local(res[0]) @@ -445,9 +445,9 @@ finally: old.chdir() compiler.link_executable(objects, str(self.outputfilename), - libraries=self.eci.libraries, + libraries=self.libraries, extra_preargs=self.link_extra, - library_dirs=self.eci.library_dirs) + library_dirs=self.library_dirs) def build_executable(*args, **kwds): noerr = kwds.pop('noerr', False) diff --git a/lib-python/2.7/BaseHTTPServer.py b/lib-python/2.7/BaseHTTPServer.py --- a/lib-python/2.7/BaseHTTPServer.py +++ b/lib-python/2.7/BaseHTTPServer.py @@ -310,7 +310,13 @@ """ try: - self.raw_requestline = self.rfile.readline() + self.raw_requestline = self.rfile.readline(65537) + if len(self.raw_requestline) > 65536: + self.requestline = '' + self.request_version = '' + self.command = '' + self.send_error(414) + return if not self.raw_requestline: self.close_connection = 1 return diff --git a/lib-python/2.7/ConfigParser.py b/lib-python/2.7/ConfigParser.py --- a/lib-python/2.7/ConfigParser.py +++ b/lib-python/2.7/ConfigParser.py @@ -545,6 +545,38 @@ if isinstance(val, list): options[name] = '\n'.join(val) +import UserDict as _UserDict + +class _Chainmap(_UserDict.DictMixin): + """Combine multiple mappings for successive lookups. + + For example, to emulate Python's normal lookup sequence: + + import __builtin__ + pylookup = _Chainmap(locals(), globals(), vars(__builtin__)) + """ + + def __init__(self, *maps): + self._maps = maps + + def __getitem__(self, key): + for mapping in self._maps: + try: + return mapping[key] + except KeyError: + pass + raise KeyError(key) + + def keys(self): + result = [] + seen = set() + for mapping in self_maps: + for key in mapping: + if key not in seen: + result.append(key) + seen.add(key) + return result + class ConfigParser(RawConfigParser): def get(self, section, option, raw=False, vars=None): @@ -559,16 +591,18 @@ The section DEFAULT is special. 
""" - d = self._defaults.copy() + sectiondict = {} try: - d.update(self._sections[section]) + sectiondict = self._sections[section] except KeyError: if section != DEFAULTSECT: raise NoSectionError(section) # Update with the entry specific variables + vardict = {} if vars: for key, value in vars.items(): - d[self.optionxform(key)] = value + vardict[self.optionxform(key)] = value + d = _Chainmap(vardict, sectiondict, self._defaults) option = self.optionxform(option) try: value = d[option] diff --git a/lib-python/2.7/Cookie.py b/lib-python/2.7/Cookie.py --- a/lib-python/2.7/Cookie.py +++ b/lib-python/2.7/Cookie.py @@ -258,6 +258,11 @@ '\033' : '\\033', '\034' : '\\034', '\035' : '\\035', '\036' : '\\036', '\037' : '\\037', + # Because of the way browsers really handle cookies (as opposed + # to what the RFC says) we also encode , and ; + + ',' : '\\054', ';' : '\\073', + '"' : '\\"', '\\' : '\\\\', '\177' : '\\177', '\200' : '\\200', '\201' : '\\201', diff --git a/lib-python/2.7/HTMLParser.py b/lib-python/2.7/HTMLParser.py --- a/lib-python/2.7/HTMLParser.py +++ b/lib-python/2.7/HTMLParser.py @@ -26,7 +26,7 @@ tagfind = re.compile('[a-zA-Z][-.a-zA-Z0-9:_]*') attrfind = re.compile( r'\s*([a-zA-Z_][-.:a-zA-Z_0-9]*)(\s*=\s*' - r'(\'[^\']*\'|"[^"]*"|[-a-zA-Z0-9./,:;+*%?!&$\(\)_#=~@]*))?') + r'(\'[^\']*\'|"[^"]*"|[^\s"\'=<>`]*))?') locatestarttagend = re.compile(r""" <[a-zA-Z][-.a-zA-Z0-9:_]* # tag name @@ -99,7 +99,7 @@ markupbase.ParserBase.reset(self) def feed(self, data): - """Feed data to the parser. + r"""Feed data to the parser. Call this as often as you want, with as little or as much text as you want (may include '\n'). @@ -367,13 +367,16 @@ return s def replaceEntities(s): s = s.groups()[0] - if s[0] == "#": - s = s[1:] - if s[0] in ['x','X']: - c = int(s[1:], 16) - else: - c = int(s) - return unichr(c) + try: + if s[0] == "#": + s = s[1:] + if s[0] in ['x','X']: + c = int(s[1:], 16) + else: + c = int(s) + return unichr(c) + except ValueError: + return '&#'+s+';' else: # Cannot use name2codepoint directly, because HTMLParser supports apos, # which is not part of HTML 4 diff --git a/lib-python/2.7/SimpleHTTPServer.py b/lib-python/2.7/SimpleHTTPServer.py --- a/lib-python/2.7/SimpleHTTPServer.py +++ b/lib-python/2.7/SimpleHTTPServer.py @@ -15,6 +15,7 @@ import BaseHTTPServer import urllib import cgi +import sys import shutil import mimetypes try: @@ -131,7 +132,8 @@ length = f.tell() f.seek(0) self.send_response(200) - self.send_header("Content-type", "text/html") + encoding = sys.getfilesystemencoding() + self.send_header("Content-type", "text/html; charset=%s" % encoding) self.send_header("Content-Length", str(length)) self.end_headers() return f diff --git a/lib-python/2.7/SimpleXMLRPCServer.py b/lib-python/2.7/SimpleXMLRPCServer.py --- a/lib-python/2.7/SimpleXMLRPCServer.py +++ b/lib-python/2.7/SimpleXMLRPCServer.py @@ -246,7 +246,7 @@ marshalled data. For backwards compatibility, a dispatch function can be provided as an argument (see comment in SimpleXMLRPCRequestHandler.do_POST) but overriding the - existing method through subclassing is the prefered means + existing method through subclassing is the preferred means of changing method dispatch behavior. """ diff --git a/lib-python/2.7/SocketServer.py b/lib-python/2.7/SocketServer.py --- a/lib-python/2.7/SocketServer.py +++ b/lib-python/2.7/SocketServer.py @@ -675,7 +675,7 @@ # A timeout to apply to the request socket, if not None. timeout = None - # Disable nagle algoritm for this socket, if True. 
+ # Disable nagle algorithm for this socket, if True. # Use only when wbufsize != 0, to avoid small packets. disable_nagle_algorithm = False diff --git a/lib-python/2.7/StringIO.py b/lib-python/2.7/StringIO.py --- a/lib-python/2.7/StringIO.py +++ b/lib-python/2.7/StringIO.py @@ -266,6 +266,7 @@ 8th bit) will cause a UnicodeError to be raised when getvalue() is called. """ + _complain_ifclosed(self.closed) if self.buflist: self.buf += ''.join(self.buflist) self.buflist = [] diff --git a/lib-python/2.7/_abcoll.py b/lib-python/2.7/_abcoll.py --- a/lib-python/2.7/_abcoll.py +++ b/lib-python/2.7/_abcoll.py @@ -82,7 +82,7 @@ @classmethod def __subclasshook__(cls, C): if cls is Iterator: - if _hasattr(C, "next"): + if _hasattr(C, "next") and _hasattr(C, "__iter__"): return True return NotImplemented diff --git a/lib-python/2.7/_pyio.py b/lib-python/2.7/_pyio.py --- a/lib-python/2.7/_pyio.py +++ b/lib-python/2.7/_pyio.py @@ -16,6 +16,7 @@ import io from io import (__all__, SEEK_SET, SEEK_CUR, SEEK_END) +from errno import EINTR __metaclass__ = type @@ -559,7 +560,11 @@ if not data: break res += data - return bytes(res) + if res: + return bytes(res) + else: + # b'' or None + return data def readinto(self, b): """Read up to len(b) bytes into b. @@ -678,7 +683,7 @@ """ def __init__(self, raw): - self.raw = raw + self._raw = raw ### Positioning ### @@ -722,8 +727,8 @@ if self.raw is None: raise ValueError("raw stream already detached") self.flush() - raw = self.raw - self.raw = None + raw = self._raw + self._raw = None return raw ### Inquiries ### @@ -738,6 +743,10 @@ return self.raw.writable() @property + def raw(self): + return self._raw + + @property def closed(self): return self.raw.closed @@ -933,7 +942,12 @@ current_size = 0 while True: # Read until EOF or until read() would block. 
- chunk = self.raw.read() + try: + chunk = self.raw.read() + except IOError as e: + if e.errno != EINTR: + raise + continue if chunk in empty_values: nodata_val = chunk break @@ -952,7 +966,12 @@ chunks = [buf[pos:]] wanted = max(self.buffer_size, n) while avail < n: - chunk = self.raw.read(wanted) + try: + chunk = self.raw.read(wanted) + except IOError as e: + if e.errno != EINTR: + raise + continue if chunk in empty_values: nodata_val = chunk break @@ -981,7 +1000,14 @@ have = len(self._read_buf) - self._read_pos if have < want or have <= 0: to_read = self.buffer_size - have - current = self.raw.read(to_read) + while True: + try: + current = self.raw.read(to_read) + except IOError as e: + if e.errno != EINTR: + raise + continue + break if current: self._read_buf = self._read_buf[self._read_pos:] + current self._read_pos = 0 @@ -1088,7 +1114,12 @@ written = 0 try: while self._write_buf: - n = self.raw.write(self._write_buf) + try: + n = self.raw.write(self._write_buf) + except IOError as e: + if e.errno != EINTR: + raise + continue if n > len(self._write_buf) or n < 0: raise IOError("write() returned incorrect number of bytes") del self._write_buf[:n] @@ -1456,7 +1487,7 @@ if not isinstance(errors, basestring): raise ValueError("invalid errors: %r" % errors) - self.buffer = buffer + self._buffer = buffer self._line_buffering = line_buffering self._encoding = encoding self._errors = errors @@ -1511,6 +1542,10 @@ def line_buffering(self): return self._line_buffering + @property + def buffer(self): + return self._buffer + def seekable(self): return self._seekable @@ -1724,8 +1759,8 @@ if self.buffer is None: raise ValueError("buffer is already detached") self.flush() - buffer = self.buffer - self.buffer = None + buffer = self._buffer + self._buffer = None return buffer def seek(self, cookie, whence=0): diff --git a/lib-python/2.7/_weakrefset.py b/lib-python/2.7/_weakrefset.py --- a/lib-python/2.7/_weakrefset.py +++ b/lib-python/2.7/_weakrefset.py @@ -66,7 +66,11 @@ return sum(x() is not None for x in self.data) def __contains__(self, item): - return ref(item) in self.data + try: + wr = ref(item) + except TypeError: + return False + return wr in self.data def __reduce__(self): return (self.__class__, (list(self),), diff --git a/lib-python/2.7/anydbm.py b/lib-python/2.7/anydbm.py --- a/lib-python/2.7/anydbm.py +++ b/lib-python/2.7/anydbm.py @@ -29,17 +29,8 @@ list = d.keys() # return a list of all existing keys (slow!) Future versions may change the order in which implementations are -tested for existence, add interfaces to other dbm-like +tested for existence, and add interfaces to other dbm-like implementations. - -The open function has an optional second argument. This can be 'r', -for read-only access, 'w', for read-write access of an existing -database, 'c' for read-write access to a new or existing database, and -'n' for read-write access to a new database. The default is 'r'. - -Note: 'r' and 'w' fail if the database doesn't exist; 'c' creates it -only if it doesn't exist; and 'n' always creates a new database. - """ class error(Exception): @@ -63,7 +54,18 @@ error = tuple(_errors) -def open(file, flag = 'r', mode = 0666): +def open(file, flag='r', mode=0666): + """Open or create database at path given by *file*. + + Optional argument *flag* can be 'r' (default) for read-only access, 'w' + for read-write access of an existing database, 'c' for read-write access + to a new or existing database, and 'n' for read-write access to a new + database. 
+ + Note: 'r' and 'w' fail if the database doesn't exist; 'c' creates it + only if it doesn't exist; and 'n' always creates a new database. + """ + # guess the type of an existing database from whichdb import whichdb result=whichdb(file) diff --git a/lib-python/2.7/argparse.py b/lib-python/2.7/argparse.py --- a/lib-python/2.7/argparse.py +++ b/lib-python/2.7/argparse.py @@ -82,6 +82,7 @@ ] +import collections as _collections import copy as _copy import os as _os import re as _re @@ -1037,7 +1038,7 @@ self._prog_prefix = prog self._parser_class = parser_class - self._name_parser_map = {} + self._name_parser_map = _collections.OrderedDict() self._choices_actions = [] super(_SubParsersAction, self).__init__( @@ -1080,7 +1081,7 @@ parser = self._name_parser_map[parser_name] except KeyError: tup = parser_name, ', '.join(self._name_parser_map) - msg = _('unknown parser %r (choices: %s)' % tup) + msg = _('unknown parser %r (choices: %s)') % tup raise ArgumentError(self, msg) # parse all the remaining options into the namespace @@ -1109,7 +1110,7 @@ the builtin open() function. """ - def __init__(self, mode='r', bufsize=None): + def __init__(self, mode='r', bufsize=-1): self._mode = mode self._bufsize = bufsize @@ -1121,18 +1122,19 @@ elif 'w' in self._mode: return _sys.stdout else: - msg = _('argument "-" with mode %r' % self._mode) + msg = _('argument "-" with mode %r') % self._mode raise ValueError(msg) # all other arguments are used as file names - if self._bufsize: + try: return open(string, self._mode, self._bufsize) - else: - return open(string, self._mode) + except IOError as e: + message = _("can't open '%s': %s") + raise ArgumentTypeError(message % (string, e)) def __repr__(self): - args = [self._mode, self._bufsize] - args_str = ', '.join([repr(arg) for arg in args if arg is not None]) + args = self._mode, self._bufsize + args_str = ', '.join(repr(arg) for arg in args if arg != -1) return '%s(%s)' % (type(self).__name__, args_str) # =========================== @@ -1275,13 +1277,20 @@ # create the action object, and add it to the parser action_class = self._pop_action_class(kwargs) if not _callable(action_class): - raise ValueError('unknown action "%s"' % action_class) + raise ValueError('unknown action "%s"' % (action_class,)) action = action_class(**kwargs) # raise an error if the action type is not callable type_func = self._registry_get('type', action.type, action.type) if not _callable(type_func): - raise ValueError('%r is not callable' % type_func) + raise ValueError('%r is not callable' % (type_func,)) + + # raise an error if the metavar does not match the type + if hasattr(self, "_get_formatter"): + try: + self._get_formatter()._format_args(action, None) + except TypeError: + raise ValueError("length of metavar tuple does not match nargs") return self._add_action(action) @@ -1481,6 +1490,7 @@ self._defaults = container._defaults self._has_negative_number_optionals = \ container._has_negative_number_optionals + self._mutually_exclusive_groups = container._mutually_exclusive_groups def _add_action(self, action): action = super(_ArgumentGroup, self)._add_action(action) diff --git a/lib-python/2.7/ast.py b/lib-python/2.7/ast.py --- a/lib-python/2.7/ast.py +++ b/lib-python/2.7/ast.py @@ -29,12 +29,12 @@ from _ast import __version__ -def parse(expr, filename='', mode='exec'): +def parse(source, filename='', mode='exec'): """ - Parse an expression into an AST node. - Equivalent to compile(expr, filename, mode, PyCF_ONLY_AST). + Parse the source into an AST node. 
+ Equivalent to compile(source, filename, mode, PyCF_ONLY_AST). """ - return compile(expr, filename, mode, PyCF_ONLY_AST) + return compile(source, filename, mode, PyCF_ONLY_AST) def literal_eval(node_or_string): @@ -152,8 +152,6 @@ Increment the line number of each node in the tree starting at *node* by *n*. This is useful to "move code" to a different location in a file. """ - if 'lineno' in node._attributes: - node.lineno = getattr(node, 'lineno', 0) + n for child in walk(node): if 'lineno' in child._attributes: child.lineno = getattr(child, 'lineno', 0) + n @@ -204,9 +202,9 @@ def walk(node): """ - Recursively yield all child nodes of *node*, in no specified order. This is - useful if you only want to modify nodes in place and don't care about the - context. + Recursively yield all descendant nodes in the tree starting at *node* + (including *node* itself), in no specified order. This is useful if you + only want to modify nodes in place and don't care about the context. """ from collections import deque todo = deque([node]) diff --git a/lib-python/2.7/asyncore.py b/lib-python/2.7/asyncore.py --- a/lib-python/2.7/asyncore.py +++ b/lib-python/2.7/asyncore.py @@ -54,7 +54,11 @@ import os from errno import EALREADY, EINPROGRESS, EWOULDBLOCK, ECONNRESET, EINVAL, \ - ENOTCONN, ESHUTDOWN, EINTR, EISCONN, EBADF, ECONNABORTED, errorcode + ENOTCONN, ESHUTDOWN, EINTR, EISCONN, EBADF, ECONNABORTED, EPIPE, EAGAIN, \ + errorcode + +_DISCONNECTED = frozenset((ECONNRESET, ENOTCONN, ESHUTDOWN, ECONNABORTED, EPIPE, + EBADF)) try: socket_map @@ -109,7 +113,7 @@ if flags & (select.POLLHUP | select.POLLERR | select.POLLNVAL): obj.handle_close() except socket.error, e: - if e.args[0] not in (EBADF, ECONNRESET, ENOTCONN, ESHUTDOWN, ECONNABORTED): + if e.args[0] not in _DISCONNECTED: obj.handle_error() else: obj.handle_close() @@ -353,7 +357,7 @@ except TypeError: return None except socket.error as why: - if why.args[0] in (EWOULDBLOCK, ECONNABORTED): + if why.args[0] in (EWOULDBLOCK, ECONNABORTED, EAGAIN): return None else: raise @@ -367,7 +371,7 @@ except socket.error, why: if why.args[0] == EWOULDBLOCK: return 0 - elif why.args[0] in (ECONNRESET, ENOTCONN, ESHUTDOWN, ECONNABORTED): + elif why.args[0] in _DISCONNECTED: self.handle_close() return 0 else: @@ -385,7 +389,7 @@ return data except socket.error, why: # winsock sometimes throws ENOTCONN - if why.args[0] in [ECONNRESET, ENOTCONN, ESHUTDOWN, ECONNABORTED]: + if why.args[0] in _DISCONNECTED: self.handle_close() return '' else: diff --git a/lib-python/2.7/bdb.py b/lib-python/2.7/bdb.py --- a/lib-python/2.7/bdb.py +++ b/lib-python/2.7/bdb.py @@ -250,6 +250,12 @@ list.append(lineno) bp = Breakpoint(filename, lineno, temporary, cond, funcname) + def _prune_breaks(self, filename, lineno): + if (filename, lineno) not in Breakpoint.bplist: + self.breaks[filename].remove(lineno) + if not self.breaks[filename]: + del self.breaks[filename] + def clear_break(self, filename, lineno): filename = self.canonic(filename) if not filename in self.breaks: @@ -261,10 +267,7 @@ # pair, then remove the breaks entry for bp in Breakpoint.bplist[filename, lineno][:]: bp.deleteMe() - if (filename, lineno) not in Breakpoint.bplist: - self.breaks[filename].remove(lineno) - if not self.breaks[filename]: - del self.breaks[filename] + self._prune_breaks(filename, lineno) def clear_bpbynumber(self, arg): try: @@ -277,7 +280,8 @@ return 'Breakpoint number (%d) out of range' % number if not bp: return 'Breakpoint (%d) already deleted' % number - self.clear_break(bp.file, bp.line) + 
bp.deleteMe() + self._prune_breaks(bp.file, bp.line) def clear_all_file_breaks(self, filename): filename = self.canonic(filename) diff --git a/lib-python/2.7/collections.py b/lib-python/2.7/collections.py --- a/lib-python/2.7/collections.py +++ b/lib-python/2.7/collections.py @@ -6,59 +6,38 @@ __all__ += _abcoll.__all__ from _collections import deque, defaultdict -from operator import itemgetter as _itemgetter, eq as _eq +from operator import itemgetter as _itemgetter from keyword import iskeyword as _iskeyword import sys as _sys import heapq as _heapq -from itertools import repeat as _repeat, chain as _chain, starmap as _starmap, \ - ifilter as _ifilter, imap as _imap +from itertools import repeat as _repeat, chain as _chain, starmap as _starmap + try: - from thread import get_ident + from thread import get_ident as _get_ident except ImportError: - from dummy_thread import get_ident - -def _recursive_repr(user_function): - 'Decorator to make a repr function return "..." for a recursive call' - repr_running = set() - - def wrapper(self): - key = id(self), get_ident() - if key in repr_running: - return '...' - repr_running.add(key) - try: - result = user_function(self) - finally: - repr_running.discard(key) - return result - - # Can't use functools.wraps() here because of bootstrap issues - wrapper.__module__ = getattr(user_function, '__module__') - wrapper.__doc__ = getattr(user_function, '__doc__') - wrapper.__name__ = getattr(user_function, '__name__') - return wrapper + from dummy_thread import get_ident as _get_ident ################################################################################ ### OrderedDict ################################################################################ -class OrderedDict(dict, MutableMapping): +class OrderedDict(dict): 'Dictionary that remembers insertion order' # An inherited dict maps keys to values. # The inherited dict provides __getitem__, __len__, __contains__, and get. # The remaining methods are order-aware. - # Big-O running times for all methods are the same as for regular dictionaries. + # Big-O running times for all methods are the same as regular dictionaries. - # The internal self.__map dictionary maps keys to links in a doubly linked list. + # The internal self.__map dict maps keys to links in a doubly linked list. # The circular doubly linked list starts and ends with a sentinel element. # The sentinel element never gets deleted (this simplifies the algorithm). # Each link is stored as a list of length three: [PREV, NEXT, KEY]. def __init__(self, *args, **kwds): - '''Initialize an ordered dictionary. Signature is the same as for - regular dictionaries, but keyword arguments are not recommended - because their insertion order is arbitrary. + '''Initialize an ordered dictionary. The signature is the same as + regular dictionaries, but keyword arguments are not recommended because + their insertion order is arbitrary. ''' if len(args) > 1: @@ -66,17 +45,15 @@ try: self.__root except AttributeError: - self.__root = root = [None, None, None] # sentinel node - PREV = 0 - NEXT = 1 - root[PREV] = root[NEXT] = root + self.__root = root = [] # sentinel node + root[:] = [root, root, None] self.__map = {} - self.update(*args, **kwds) + self.__update(*args, **kwds) def __setitem__(self, key, value, PREV=0, NEXT=1, dict_setitem=dict.__setitem__): 'od.__setitem__(i, y) <==> od[i]=y' - # Setting a new item creates a new link which goes at the end of the linked - # list, and the inherited dictionary is updated with the new key/value pair. 
+ # Setting a new item creates a new link at the end of the linked list, + # and the inherited dictionary is updated with the new key/value pair. if key not in self: root = self.__root last = root[PREV] @@ -85,65 +62,160 @@ def __delitem__(self, key, PREV=0, NEXT=1, dict_delitem=dict.__delitem__): 'od.__delitem__(y) <==> del od[y]' - # Deleting an existing item uses self.__map to find the link which is - # then removed by updating the links in the predecessor and successor nodes. + # Deleting an existing item uses self.__map to find the link which gets + # removed by updating the links in the predecessor and successor nodes. dict_delitem(self, key) - link = self.__map.pop(key) - link_prev = link[PREV] - link_next = link[NEXT] + link_prev, link_next, key = self.__map.pop(key) link_prev[NEXT] = link_next link_next[PREV] = link_prev - def __iter__(self, NEXT=1, KEY=2): + def __iter__(self): 'od.__iter__() <==> iter(od)' # Traverse the linked list in order. + NEXT, KEY = 1, 2 root = self.__root curr = root[NEXT] while curr is not root: yield curr[KEY] curr = curr[NEXT] - def __reversed__(self, PREV=0, KEY=2): + def __reversed__(self): 'od.__reversed__() <==> reversed(od)' # Traverse the linked list in reverse order. + PREV, KEY = 0, 2 root = self.__root curr = root[PREV] while curr is not root: yield curr[KEY] curr = curr[PREV] + def clear(self): + 'od.clear() -> None. Remove all items from od.' + for node in self.__map.itervalues(): + del node[:] + root = self.__root + root[:] = [root, root, None] + self.__map.clear() + dict.clear(self) + + # -- the following methods do not depend on the internal structure -- + + def keys(self): + 'od.keys() -> list of keys in od' + return list(self) + + def values(self): + 'od.values() -> list of values in od' + return [self[key] for key in self] + + def items(self): + 'od.items() -> list of (key, value) pairs in od' + return [(key, self[key]) for key in self] + + def iterkeys(self): + 'od.iterkeys() -> an iterator over the keys in od' + return iter(self) + + def itervalues(self): + 'od.itervalues -> an iterator over the values in od' + for k in self: + yield self[k] + + def iteritems(self): + 'od.iteritems -> an iterator over the (key, value) pairs in od' + for k in self: + yield (k, self[k]) + + update = MutableMapping.update + + __update = update # let subclasses override update without breaking __init__ + + __marker = object() + + def pop(self, key, default=__marker): + '''od.pop(k[,d]) -> v, remove specified key and return the corresponding + value. If key is not found, d is returned if given, otherwise KeyError + is raised. + + ''' + if key in self: + result = self[key] + del self[key] + return result + if default is self.__marker: + raise KeyError(key) + return default + + def setdefault(self, key, default=None): + 'od.setdefault(k[,d]) -> od.get(k,d), also set od[k]=d if k not in od' + if key in self: + return self[key] + self[key] = default + return default + + def popitem(self, last=True): + '''od.popitem() -> (k, v), return and remove a (key, value) pair. + Pairs are returned in LIFO order if last is true or FIFO order if false. + + ''' + if not self: + raise KeyError('dictionary is empty') + key = next(reversed(self) if last else iter(self)) + value = self.pop(key) + return key, value + + def __repr__(self, _repr_running={}): + 'od.__repr__() <==> repr(od)' + call_key = id(self), _get_ident() + if call_key in _repr_running: + return '...' 
+ _repr_running[call_key] = 1 + try: + if not self: + return '%s()' % (self.__class__.__name__,) + return '%s(%r)' % (self.__class__.__name__, self.items()) + finally: + del _repr_running[call_key] + def __reduce__(self): 'Return state information for pickling' items = [[k, self[k]] for k in self] - tmp = self.__map, self.__root - del self.__map, self.__root inst_dict = vars(self).copy() - self.__map, self.__root = tmp + for k in vars(OrderedDict()): + inst_dict.pop(k, None) if inst_dict: return (self.__class__, (items,), inst_dict) return self.__class__, (items,) - def clear(self): - 'od.clear() -> None. Remove all items from od.' - try: - for node in self.__map.itervalues(): - del node[:] - self.__root[:] = [self.__root, self.__root, None] - self.__map.clear() - except AttributeError: - pass - dict.clear(self) + def copy(self): + 'od.copy() -> a shallow copy of od' + return self.__class__(self) - setdefault = MutableMapping.setdefault - update = MutableMapping.update - pop = MutableMapping.pop - keys = MutableMapping.keys - values = MutableMapping.values - items = MutableMapping.items - iterkeys = MutableMapping.iterkeys - itervalues = MutableMapping.itervalues - iteritems = MutableMapping.iteritems - __ne__ = MutableMapping.__ne__ + @classmethod + def fromkeys(cls, iterable, value=None): + '''OD.fromkeys(S[, v]) -> New ordered dictionary with keys from S. + If not specified, the value defaults to None. + + ''' + self = cls() + for key in iterable: + self[key] = value + return self + + def __eq__(self, other): + '''od.__eq__(y) <==> od==y. Comparison to another OD is order-sensitive + while comparison to a regular mapping is order-insensitive. + + ''' + if isinstance(other, OrderedDict): + return len(self)==len(other) and self.items() == other.items() + return dict.__eq__(self, other) + + def __ne__(self, other): + 'od.__ne__(y) <==> od!=y' + return not self == other + + # -- the following methods support python 3.x style dictionary views -- def viewkeys(self): "od.viewkeys() -> a set-like object providing a view on od's keys" @@ -157,49 +229,6 @@ "od.viewitems() -> a set-like object providing a view on od's items" return ItemsView(self) - def popitem(self, last=True): - '''od.popitem() -> (k, v), return and remove a (key, value) pair. - Pairs are returned in LIFO order if last is true or FIFO order if false. - - ''' - if not self: - raise KeyError('dictionary is empty') - key = next(reversed(self) if last else iter(self)) - value = self.pop(key) - return key, value - - @_recursive_repr - def __repr__(self): - 'od.__repr__() <==> repr(od)' - if not self: - return '%s()' % (self.__class__.__name__,) - return '%s(%r)' % (self.__class__.__name__, self.items()) - - def copy(self): - 'od.copy() -> a shallow copy of od' - return self.__class__(self) - - @classmethod - def fromkeys(cls, iterable, value=None): - '''OD.fromkeys(S[, v]) -> New ordered dictionary with keys from S - and values equal to v (which defaults to None). - - ''' - d = cls() - for key in iterable: - d[key] = value - return d - - def __eq__(self, other): - '''od.__eq__(y) <==> od==y. Comparison to another OD is order-sensitive - while comparison to a regular mapping is order-insensitive. - - ''' - if isinstance(other, OrderedDict): - return len(self)==len(other) and \ - all(_imap(_eq, self.iteritems(), other.iteritems())) - return dict.__eq__(self, other) - ################################################################################ ### namedtuple @@ -328,16 +357,16 @@ or multiset. 
Elements are stored as dictionary keys and their counts are stored as dictionary values. - >>> c = Counter('abracadabra') # count elements from a string + >>> c = Counter('abcdeabcdabcaba') # count elements from a string >>> c.most_common(3) # three most common elements - [('a', 5), ('r', 2), ('b', 2)] + [('a', 5), ('b', 4), ('c', 3)] >>> sorted(c) # list all unique elements - ['a', 'b', 'c', 'd', 'r'] + ['a', 'b', 'c', 'd', 'e'] >>> ''.join(sorted(c.elements())) # list elements with repetitions - 'aaaaabbcdrr' + 'aaaaabbbbcccdde' >>> sum(c.values()) # total of all counts - 11 + 15 >>> c['a'] # count of letter 'a' 5 @@ -345,8 +374,8 @@ ... c[elem] += 1 # by adding 1 to each element's count >>> c['a'] # now there are seven 'a' 7 - >>> del c['r'] # remove all 'r' - >>> c['r'] # now there are zero 'r' + >>> del c['b'] # remove all 'b' + >>> c['b'] # now there are zero 'b' 0 >>> d = Counter('simsalabim') # make another counter @@ -385,6 +414,7 @@ >>> c = Counter(a=4, b=2) # a new counter from keyword args ''' + super(Counter, self).__init__() self.update(iterable, **kwds) def __missing__(self, key): @@ -396,8 +426,8 @@ '''List the n most common elements and their counts from the most common to the least. If n is None, then list all element counts. - >>> Counter('abracadabra').most_common(3) - [('a', 5), ('r', 2), ('b', 2)] + >>> Counter('abcdeabcdabcaba').most_common(3) + [('a', 5), ('b', 4), ('c', 3)] ''' # Emulate Bag.sortedByCount from Smalltalk @@ -463,7 +493,7 @@ for elem, count in iterable.iteritems(): self[elem] = self_get(elem, 0) + count else: - dict.update(self, iterable) # fast path when counter is empty + super(Counter, self).update(iterable) # fast path when counter is empty else: self_get = self.get for elem in iterable: @@ -499,13 +529,16 @@ self.subtract(kwds) def copy(self): - 'Like dict.copy() but returns a Counter instance instead of a dict.' - return Counter(self) + 'Return a shallow copy.' + return self.__class__(self) + + def __reduce__(self): + return self.__class__, (dict(self),) def __delitem__(self, elem): 'Like dict.__delitem__() but does not raise KeyError for missing values.' 
if elem in self: - dict.__delitem__(self, elem) + super(Counter, self).__delitem__(elem) def __repr__(self): if not self: @@ -532,10 +565,13 @@ if not isinstance(other, Counter): return NotImplemented result = Counter() - for elem in set(self) | set(other): - newcount = self[elem] + other[elem] + for elem, count in self.items(): + newcount = count + other[elem] if newcount > 0: result[elem] = newcount + for elem, count in other.items(): + if elem not in self and count > 0: + result[elem] = count return result def __sub__(self, other): @@ -548,10 +584,13 @@ if not isinstance(other, Counter): return NotImplemented result = Counter() - for elem in set(self) | set(other): - newcount = self[elem] - other[elem] + for elem, count in self.items(): + newcount = count - other[elem] if newcount > 0: result[elem] = newcount + for elem, count in other.items(): + if elem not in self and count < 0: + result[elem] = 0 - count return result def __or__(self, other): @@ -564,11 +603,14 @@ if not isinstance(other, Counter): return NotImplemented result = Counter() - for elem in set(self) | set(other): - p, q = self[elem], other[elem] - newcount = q if p < q else p + for elem, count in self.items(): + other_count = other[elem] + newcount = other_count if count < other_count else count if newcount > 0: result[elem] = newcount + for elem, count in other.items(): + if elem not in self and count > 0: + result[elem] = count return result def __and__(self, other): @@ -581,11 +623,9 @@ if not isinstance(other, Counter): return NotImplemented result = Counter() - if len(self) < len(other): - self, other = other, self - for elem in _ifilter(self.__contains__, other): - p, q = self[elem], other[elem] - newcount = p if p < q else q + for elem, count in self.items(): + other_count = other[elem] + newcount = count if count < other_count else other_count if newcount > 0: result[elem] = newcount return result diff --git a/lib-python/2.7/compileall.py b/lib-python/2.7/compileall.py --- a/lib-python/2.7/compileall.py +++ b/lib-python/2.7/compileall.py @@ -9,7 +9,6 @@ packages -- for now, you'll have to deal with packages separately.) See module py_compile for details of the actual byte-compilation. - """ import os import sys @@ -31,7 +30,6 @@ directory name that will show up in error messages) force: if 1, force compilation, even if timestamps are up-to-date quiet: if 1, be quiet during compilation - """ if not quiet: print 'Listing', dir, '...' @@ -61,15 +59,16 @@ return success def compile_file(fullname, ddir=None, force=0, rx=None, quiet=0): - """Byte-compile file. - file: the file to byte-compile + """Byte-compile one file. + + Arguments (only fullname is required): + + fullname: the file to byte-compile ddir: if given, purported directory name (this is the directory name that will show up in error messages) force: if 1, force compilation, even if timestamps are up-to-date quiet: if 1, be quiet during compilation - """ - success = 1 name = os.path.basename(fullname) if ddir is not None: @@ -120,7 +119,6 @@ maxlevels: max recursion level (default 0) force: as for compile_dir() (default 0) quiet: as for compile_dir() (default 0) - """ success = 1 for dir in sys.path: diff --git a/lib-python/2.7/csv.py b/lib-python/2.7/csv.py --- a/lib-python/2.7/csv.py +++ b/lib-python/2.7/csv.py @@ -281,7 +281,7 @@ an all or nothing approach, so we allow for small variations in this number. 1) build a table of the frequency of each character on every line. 
- 2) build a table of freqencies of this frequency (meta-frequency?), + 2) build a table of frequencies of this frequency (meta-frequency?), e.g. 'x occurred 5 times in 10 rows, 6 times in 1000 rows, 7 times in 2 rows' 3) use the mode of the meta-frequency to determine the /expected/ diff --git a/lib-python/2.7/ctypes/test/test_arrays.py b/lib-python/2.7/ctypes/test/test_arrays.py --- a/lib-python/2.7/ctypes/test/test_arrays.py +++ b/lib-python/2.7/ctypes/test/test_arrays.py @@ -37,7 +37,7 @@ values = [ia[i] for i in range(len(init))] self.assertEqual(values, [0] * len(init)) - # Too many in itializers should be caught + # Too many initializers should be caught self.assertRaises(IndexError, int_array, *range(alen*2)) CharArray = ARRAY(c_char, 3) diff --git a/lib-python/2.7/ctypes/test/test_as_parameter.py b/lib-python/2.7/ctypes/test/test_as_parameter.py --- a/lib-python/2.7/ctypes/test/test_as_parameter.py +++ b/lib-python/2.7/ctypes/test/test_as_parameter.py @@ -187,6 +187,18 @@ self.assertEqual((s8i.a, s8i.b, s8i.c, s8i.d, s8i.e, s8i.f, s8i.g, s8i.h), (9*2, 8*3, 7*4, 6*5, 5*6, 4*7, 3*8, 2*9)) + def test_recursive_as_param(self): + from ctypes import c_int + + class A(object): + pass + + a = A() + a._as_parameter_ = a + with self.assertRaises(RuntimeError): + c_int.from_param(a) + + #~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ class AsParamWrapper(object): diff --git a/lib-python/2.7/ctypes/test/test_callbacks.py b/lib-python/2.7/ctypes/test/test_callbacks.py --- a/lib-python/2.7/ctypes/test/test_callbacks.py +++ b/lib-python/2.7/ctypes/test/test_callbacks.py @@ -206,6 +206,42 @@ windll.user32.EnumWindows(EnumWindowsCallbackFunc, 0) + def test_callback_register_int(self): + # Issue #8275: buggy handling of callback args under Win64 + # NOTE: should be run on release builds as well + dll = CDLL(_ctypes_test.__file__) + CALLBACK = CFUNCTYPE(c_int, c_int, c_int, c_int, c_int, c_int) + # All this function does is call the callback with its args squared + func = dll._testfunc_cbk_reg_int + func.argtypes = (c_int, c_int, c_int, c_int, c_int, CALLBACK) + func.restype = c_int + + def callback(a, b, c, d, e): + return a + b + c + d + e + + result = func(2, 3, 4, 5, 6, CALLBACK(callback)) + self.assertEqual(result, callback(2*2, 3*3, 4*4, 5*5, 6*6)) + + def test_callback_register_double(self): + # Issue #8275: buggy handling of callback args under Win64 + # NOTE: should be run on release builds as well + dll = CDLL(_ctypes_test.__file__) + CALLBACK = CFUNCTYPE(c_double, c_double, c_double, c_double, + c_double, c_double) + # All this function does is call the callback with its args squared + func = dll._testfunc_cbk_reg_double + func.argtypes = (c_double, c_double, c_double, + c_double, c_double, CALLBACK) + func.restype = c_double + + def callback(a, b, c, d, e): + return a + b + c + d + e + + result = func(1.1, 2.2, 3.3, 4.4, 5.5, CALLBACK(callback)) + self.assertEqual(result, + callback(1.1*1.1, 2.2*2.2, 3.3*3.3, 4.4*4.4, 5.5*5.5)) + + ################################################################ if __name__ == '__main__': diff --git a/lib-python/2.7/ctypes/test/test_functions.py b/lib-python/2.7/ctypes/test/test_functions.py --- a/lib-python/2.7/ctypes/test/test_functions.py +++ b/lib-python/2.7/ctypes/test/test_functions.py @@ -116,7 +116,7 @@ self.assertEqual(result, 21) self.assertEqual(type(result), int) - # You cannot assing character format codes as restype any longer + # You cannot assign character format codes as restype any longer self.assertRaises(TypeError, setattr, f, 
"restype", "i") def test_floatresult(self): diff --git a/lib-python/2.7/ctypes/test/test_init.py b/lib-python/2.7/ctypes/test/test_init.py --- a/lib-python/2.7/ctypes/test/test_init.py +++ b/lib-python/2.7/ctypes/test/test_init.py @@ -27,7 +27,7 @@ self.assertEqual((y.x.a, y.x.b), (0, 0)) self.assertEqual(y.x.new_was_called, False) - # But explicitely creating an X structure calls __new__ and __init__, of course. + # But explicitly creating an X structure calls __new__ and __init__, of course. x = X() self.assertEqual((x.a, x.b), (9, 12)) self.assertEqual(x.new_was_called, True) diff --git a/lib-python/2.7/ctypes/test/test_numbers.py b/lib-python/2.7/ctypes/test/test_numbers.py --- a/lib-python/2.7/ctypes/test/test_numbers.py +++ b/lib-python/2.7/ctypes/test/test_numbers.py @@ -157,7 +157,7 @@ def test_int_from_address(self): from array import array for t in signed_types + unsigned_types: - # the array module doesn't suppport all format codes + # the array module doesn't support all format codes # (no 'q' or 'Q') try: array(t._type_) diff --git a/lib-python/2.7/ctypes/test/test_win32.py b/lib-python/2.7/ctypes/test/test_win32.py --- a/lib-python/2.7/ctypes/test/test_win32.py +++ b/lib-python/2.7/ctypes/test/test_win32.py @@ -17,7 +17,7 @@ # ValueError: Procedure probably called with not enough arguments (4 bytes missing) self.assertRaises(ValueError, IsWindow) - # This one should succeeed... + # This one should succeed... self.assertEqual(0, IsWindow(0)) # ValueError: Procedure probably called with too many arguments (8 bytes in excess) diff --git a/lib-python/2.7/curses/wrapper.py b/lib-python/2.7/curses/wrapper.py --- a/lib-python/2.7/curses/wrapper.py +++ b/lib-python/2.7/curses/wrapper.py @@ -43,7 +43,8 @@ return func(stdscr, *args, **kwds) finally: # Set everything back to normal - stdscr.keypad(0) - curses.echo() - curses.nocbreak() - curses.endwin() + if 'stdscr' in locals(): + stdscr.keypad(0) + curses.echo() + curses.nocbreak() + curses.endwin() diff --git a/lib-python/2.7/decimal.py b/lib-python/2.7/decimal.py --- a/lib-python/2.7/decimal.py +++ b/lib-python/2.7/decimal.py @@ -1068,14 +1068,16 @@ if ans: return ans - if not self: - # -Decimal('0') is Decimal('0'), not Decimal('-0') + if context is None: + context = getcontext() + + if not self and context.rounding != ROUND_FLOOR: + # -Decimal('0') is Decimal('0'), not Decimal('-0'), except + # in ROUND_FLOOR rounding mode. ans = self.copy_abs() else: ans = self.copy_negate() - if context is None: - context = getcontext() return ans._fix(context) def __pos__(self, context=None): @@ -1088,14 +1090,15 @@ if ans: return ans - if not self: - # + (-0) = 0 + if context is None: + context = getcontext() + + if not self and context.rounding != ROUND_FLOOR: + # + (-0) = 0, except in ROUND_FLOOR rounding mode. 
ans = self.copy_abs() else: ans = Decimal(self) - if context is None: - context = getcontext() return ans._fix(context) def __abs__(self, round=True, context=None): @@ -1680,7 +1683,7 @@ self = _dec_from_triple(self._sign, '1', exp_min-1) digits = 0 rounding_method = self._pick_rounding_function[context.rounding] - changed = getattr(self, rounding_method)(digits) + changed = rounding_method(self, digits) coeff = self._int[:digits] or '0' if changed > 0: coeff = str(int(coeff)+1) @@ -1720,8 +1723,6 @@ # here self was representable to begin with; return unchanged return Decimal(self) - _pick_rounding_function = {} - # for each of the rounding functions below: # self is a finite, nonzero Decimal # prec is an integer satisfying 0 <= prec < len(self._int) @@ -1788,6 +1789,17 @@ else: return -self._round_down(prec) + _pick_rounding_function = dict( + ROUND_DOWN = _round_down, + ROUND_UP = _round_up, + ROUND_HALF_UP = _round_half_up, + ROUND_HALF_DOWN = _round_half_down, + ROUND_HALF_EVEN = _round_half_even, + ROUND_CEILING = _round_ceiling, + ROUND_FLOOR = _round_floor, + ROUND_05UP = _round_05up, + ) + def fma(self, other, third, context=None): """Fused multiply-add. @@ -2492,8 +2504,8 @@ if digits < 0: self = _dec_from_triple(self._sign, '1', exp-1) digits = 0 - this_function = getattr(self, self._pick_rounding_function[rounding]) - changed = this_function(digits) + this_function = self._pick_rounding_function[rounding] + changed = this_function(self, digits) coeff = self._int[:digits] or '0' if changed == 1: coeff = str(int(coeff)+1) @@ -3705,18 +3717,6 @@ ##### Context class ####################################################### - -# get rounding method function: -rounding_functions = [name for name in Decimal.__dict__.keys() - if name.startswith('_round_')] -for name in rounding_functions: - # name is like _round_half_even, goes to the global ROUND_HALF_EVEN value. - globalname = name[1:].upper() - val = globals()[globalname] - Decimal._pick_rounding_function[val] = name - -del name, val, globalname, rounding_functions - class _ContextManager(object): """Context manager class to support localcontext(). @@ -5990,7 +5990,7 @@ def _format_align(sign, body, spec): """Given an unpadded, non-aligned numeric string 'body' and sign - string 'sign', add padding and aligment conforming to the given + string 'sign', add padding and alignment conforming to the given format specifier dictionary 'spec' (as produced by parse_format_specifier). 
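
A note on the decimal.py hunks above: the upgrade drops the old getattr-by-name rounding dispatch (a module-level loop that filled _pick_rounding_function with method-name strings) in favour of a class-level dict that maps each rounding constant straight to its rounding function, which the callers then invoke with the instance passed explicitly. The following is only a rough, self-contained sketch of that dispatch pattern -- the Number class, its two rounding rules and round_to() are invented for illustration and are not decimal.py's own code:

    import math

    ROUND_DOWN, ROUND_UP = 'ROUND_DOWN', 'ROUND_UP'   # like decimal, each constant is its own name

    class Number(object):
        def __init__(self, value):
            self.value = value

        def _round_down(self, digits):
            # illustrative rule only: truncate toward zero (digits ignored)
            return int(self.value)

        def _round_up(self, digits):
            # illustrative rule only: round away from zero (digits ignored)
            return int(math.copysign(math.ceil(abs(self.value)), self.value))

        # class-level table built once: rounding constant -> plain function
        _pick_rounding_function = dict(
            ROUND_DOWN=_round_down,
            ROUND_UP=_round_up,
        )

        def round_to(self, mode, digits=0):
            func = self._pick_rounding_function[mode]   # a plain function, not a bound method
            return func(self, digits)                   # instance passed explicitly

    print(Number(2.7).round_to(ROUND_DOWN))    # 2
    print(Number(-2.1).round_to(ROUND_UP))     # -3

Because the table stores callables rather than names, the per-call getattr() disappears along with the module-level name-mangling loop that the hunk deletes; storing the table as keyword arguments to dict() works because the ROUND_* constants are strings equal to their own names.
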
diff --git a/lib-python/2.7/difflib.py b/lib-python/2.7/difflib.py --- a/lib-python/2.7/difflib.py +++ b/lib-python/2.7/difflib.py @@ -1140,6 +1140,21 @@ return ch in ws +######################################################################## +### Unified Diff +######################################################################## + +def _format_range_unified(start, stop): + 'Convert range to the "ed" format' + # Per the diff spec at http://www.unix.org/single_unix_specification/ + beginning = start + 1 # lines start numbering with one + length = stop - start + if length == 1: + return '{}'.format(beginning) + if not length: + beginning -= 1 # empty ranges begin at line just before the range + return '{},{}'.format(beginning, length) + def unified_diff(a, b, fromfile='', tofile='', fromfiledate='', tofiledate='', n=3, lineterm='\n'): r""" @@ -1184,25 +1199,45 @@ started = False for group in SequenceMatcher(None,a,b).get_grouped_opcodes(n): if not started: - fromdate = '\t%s' % fromfiledate if fromfiledate else '' - todate = '\t%s' % tofiledate if tofiledate else '' - yield '--- %s%s%s' % (fromfile, fromdate, lineterm) - yield '+++ %s%s%s' % (tofile, todate, lineterm) started = True - i1, i2, j1, j2 = group[0][1], group[-1][2], group[0][3], group[-1][4] - yield "@@ -%d,%d +%d,%d @@%s" % (i1+1, i2-i1, j1+1, j2-j1, lineterm) + fromdate = '\t{}'.format(fromfiledate) if fromfiledate else '' + todate = '\t{}'.format(tofiledate) if tofiledate else '' + yield '--- {}{}{}'.format(fromfile, fromdate, lineterm) + yield '+++ {}{}{}'.format(tofile, todate, lineterm) + + first, last = group[0], group[-1] + file1_range = _format_range_unified(first[1], last[2]) + file2_range = _format_range_unified(first[3], last[4]) + yield '@@ -{} +{} @@{}'.format(file1_range, file2_range, lineterm) + for tag, i1, i2, j1, j2 in group: if tag == 'equal': for line in a[i1:i2]: yield ' ' + line continue - if tag == 'replace' or tag == 'delete': + if tag in ('replace', 'delete'): for line in a[i1:i2]: yield '-' + line - if tag == 'replace' or tag == 'insert': + if tag in ('replace', 'insert'): for line in b[j1:j2]: yield '+' + line + +######################################################################## +### Context Diff +######################################################################## + +def _format_range_context(start, stop): + 'Convert range to the "ed" format' + # Per the diff spec at http://www.unix.org/single_unix_specification/ + beginning = start + 1 # lines start numbering with one + length = stop - start + if not length: + beginning -= 1 # empty ranges begin at line just before the range + if length <= 1: + return '{}'.format(beginning) + return '{},{}'.format(beginning, beginning + length - 1) + # See http://www.unix.org/single_unix_specification/ def context_diff(a, b, fromfile='', tofile='', fromfiledate='', tofiledate='', n=3, lineterm='\n'): @@ -1247,38 +1282,36 @@ four """ + prefix = dict(insert='+ ', delete='- ', replace='! ', equal=' ') started = False - prefixmap = {'insert':'+ ', 'delete':'- ', 'replace':'! 
', 'equal':' '} for group in SequenceMatcher(None,a,b).get_grouped_opcodes(n): if not started: - fromdate = '\t%s' % fromfiledate if fromfiledate else '' - todate = '\t%s' % tofiledate if tofiledate else '' - yield '*** %s%s%s' % (fromfile, fromdate, lineterm) - yield '--- %s%s%s' % (tofile, todate, lineterm) started = True + fromdate = '\t{}'.format(fromfiledate) if fromfiledate else '' + todate = '\t{}'.format(tofiledate) if tofiledate else '' + yield '*** {}{}{}'.format(fromfile, fromdate, lineterm) + yield '--- {}{}{}'.format(tofile, todate, lineterm) - yield '***************%s' % (lineterm,) - if group[-1][2] - group[0][1] >= 2: - yield '*** %d,%d ****%s' % (group[0][1]+1, group[-1][2], lineterm) - else: - yield '*** %d ****%s' % (group[-1][2], lineterm) - visiblechanges = [e for e in group if e[0] in ('replace', 'delete')] - if visiblechanges: + first, last = group[0], group[-1] + yield '***************' + lineterm + + file1_range = _format_range_context(first[1], last[2]) + yield '*** {} ****{}'.format(file1_range, lineterm) + + if any(tag in ('replace', 'delete') for tag, _, _, _, _ in group): for tag, i1, i2, _, _ in group: if tag != 'insert': for line in a[i1:i2]: - yield prefixmap[tag] + line + yield prefix[tag] + line - if group[-1][4] - group[0][3] >= 2: - yield '--- %d,%d ----%s' % (group[0][3]+1, group[-1][4], lineterm) - else: - yield '--- %d ----%s' % (group[-1][4], lineterm) - visiblechanges = [e for e in group if e[0] in ('replace', 'insert')] - if visiblechanges: + file2_range = _format_range_context(first[3], last[4]) + yield '--- {} ----{}'.format(file2_range, lineterm) + + if any(tag in ('replace', 'insert') for tag, _, _, _, _ in group): for tag, _, _, j1, j2 in group: if tag != 'delete': for line in b[j1:j2]: - yield prefixmap[tag] + line + yield prefix[tag] + line def ndiff(a, b, linejunk=None, charjunk=IS_CHARACTER_JUNK): r""" @@ -1714,7 +1747,7 @@ line = line.replace(' ','\0') # expand tabs into spaces line = line.expandtabs(self._tabsize) - # relace spaces from expanded tabs back into tab characters + # replace spaces from expanded tabs back into tab characters # (we'll replace them with markup after we do differencing) line = line.replace(' ','\t') return line.replace('\0',' ').rstrip('\n') diff --git a/lib-python/2.7/distutils/__init__.py b/lib-python/2.7/distutils/__init__.py --- a/lib-python/2.7/distutils/__init__.py +++ b/lib-python/2.7/distutils/__init__.py @@ -15,5 +15,5 @@ # Updated automatically by the Python release process. # #--start constants-- -__version__ = "2.7.1" +__version__ = "2.7.2" #--end constants-- diff --git a/lib-python/2.7/distutils/archive_util.py b/lib-python/2.7/distutils/archive_util.py --- a/lib-python/2.7/distutils/archive_util.py +++ b/lib-python/2.7/distutils/archive_util.py @@ -121,7 +121,7 @@ def make_zipfile(base_name, base_dir, verbose=0, dry_run=0): """Create a zip file from all the files under 'base_dir'. - The output zip file will be named 'base_dir' + ".zip". Uses either the + The output zip file will be named 'base_name' + ".zip". Uses either the "zipfile" Python module (if available) or the InfoZIP "zip" utility (if installed and found on the default search path). If neither tool is available, raises DistutilsExecError. 
Returns the name of the output zip diff --git a/lib-python/2.7/distutils/cmd.py b/lib-python/2.7/distutils/cmd.py --- a/lib-python/2.7/distutils/cmd.py +++ b/lib-python/2.7/distutils/cmd.py @@ -377,7 +377,7 @@ dry_run=self.dry_run) def move_file (self, src, dst, level=1): - """Move a file respectin dry-run flag.""" + """Move a file respecting dry-run flag.""" return file_util.move_file(src, dst, dry_run = self.dry_run) def spawn (self, cmd, search_path=1, level=1): diff --git a/lib-python/2.7/distutils/command/build_ext.py b/lib-python/2.7/distutils/command/build_ext.py --- a/lib-python/2.7/distutils/command/build_ext.py +++ b/lib-python/2.7/distutils/command/build_ext.py @@ -207,7 +207,7 @@ elif MSVC_VERSION == 8: self.library_dirs.append(os.path.join(sys.exec_prefix, - 'PC', 'VS8.0', 'win32release')) + 'PC', 'VS8.0')) elif MSVC_VERSION == 7: self.library_dirs.append(os.path.join(sys.exec_prefix, 'PC', 'VS7.1')) diff --git a/lib-python/2.7/distutils/command/sdist.py b/lib-python/2.7/distutils/command/sdist.py --- a/lib-python/2.7/distutils/command/sdist.py +++ b/lib-python/2.7/distutils/command/sdist.py @@ -306,17 +306,20 @@ rstrip_ws=1, collapse_join=1) - while 1: - line = template.readline() - if line is None: # end of file - break + try: + while 1: + line = template.readline() + if line is None: # end of file + break - try: - self.filelist.process_template_line(line) - except DistutilsTemplateError, msg: - self.warn("%s, line %d: %s" % (template.filename, - template.current_line, - msg)) + try: + self.filelist.process_template_line(line) + except DistutilsTemplateError, msg: + self.warn("%s, line %d: %s" % (template.filename, + template.current_line, + msg)) + finally: + template.close() def prune_file_list(self): """Prune off branches that might slip into the file list as created diff --git a/lib-python/2.7/distutils/command/upload.py b/lib-python/2.7/distutils/command/upload.py --- a/lib-python/2.7/distutils/command/upload.py +++ b/lib-python/2.7/distutils/command/upload.py @@ -176,6 +176,9 @@ result = urlopen(request) status = result.getcode() reason = result.msg + if self.show_response: + msg = '\n'.join(('-' * 75, r.read(), '-' * 75)) + self.announce(msg, log.INFO) except socket.error, e: self.announce(str(e), log.ERROR) return @@ -189,6 +192,3 @@ else: self.announce('Upload failed (%s): %s' % (status, reason), log.ERROR) - if self.show_response: - msg = '\n'.join(('-' * 75, r.read(), '-' * 75)) - self.announce(msg, log.INFO) diff --git a/lib-python/2.7/distutils/sysconfig.py b/lib-python/2.7/distutils/sysconfig.py --- a/lib-python/2.7/distutils/sysconfig.py +++ b/lib-python/2.7/distutils/sysconfig.py @@ -389,7 +389,7 @@ cur_target = os.getenv('MACOSX_DEPLOYMENT_TARGET', '') if cur_target == '': cur_target = cfg_target - os.putenv('MACOSX_DEPLOYMENT_TARGET', cfg_target) + os.environ['MACOSX_DEPLOYMENT_TARGET'] = cfg_target elif map(int, cfg_target.split('.')) > map(int, cur_target.split('.')): my_msg = ('$MACOSX_DEPLOYMENT_TARGET mismatch: now "%s" but "%s" during configure' % (cur_target, cfg_target)) diff --git a/lib-python/2.7/distutils/tests/__init__.py b/lib-python/2.7/distutils/tests/__init__.py --- a/lib-python/2.7/distutils/tests/__init__.py +++ b/lib-python/2.7/distutils/tests/__init__.py @@ -15,9 +15,10 @@ import os import sys import unittest +from test.test_support import run_unittest -here = os.path.dirname(__file__) +here = os.path.dirname(__file__) or os.curdir def test_suite(): @@ -32,4 +33,4 @@ if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + 
run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_archive_util.py b/lib-python/2.7/distutils/tests/test_archive_util.py --- a/lib-python/2.7/distutils/tests/test_archive_util.py +++ b/lib-python/2.7/distutils/tests/test_archive_util.py @@ -12,7 +12,7 @@ ARCHIVE_FORMATS) from distutils.spawn import find_executable, spawn from distutils.tests import support -from test.test_support import check_warnings +from test.test_support import check_warnings, run_unittest try: import grp @@ -281,4 +281,4 @@ return unittest.makeSuite(ArchiveUtilTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_bdist_msi.py b/lib-python/2.7/distutils/tests/test_bdist_msi.py --- a/lib-python/2.7/distutils/tests/test_bdist_msi.py +++ b/lib-python/2.7/distutils/tests/test_bdist_msi.py @@ -11,7 +11,7 @@ support.LoggingSilencer, unittest.TestCase): - def test_minial(self): + def test_minimal(self): # minimal test XXX need more tests from distutils.command.bdist_msi import bdist_msi pkg_pth, dist = self.create_dist() diff --git a/lib-python/2.7/distutils/tests/test_build.py b/lib-python/2.7/distutils/tests/test_build.py --- a/lib-python/2.7/distutils/tests/test_build.py +++ b/lib-python/2.7/distutils/tests/test_build.py @@ -2,6 +2,7 @@ import unittest import os import sys +from test.test_support import run_unittest from distutils.command.build import build from distutils.tests import support @@ -51,4 +52,4 @@ return unittest.makeSuite(BuildTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_build_clib.py b/lib-python/2.7/distutils/tests/test_build_clib.py --- a/lib-python/2.7/distutils/tests/test_build_clib.py +++ b/lib-python/2.7/distutils/tests/test_build_clib.py @@ -3,6 +3,8 @@ import os import sys +from test.test_support import run_unittest + from distutils.command.build_clib import build_clib from distutils.errors import DistutilsSetupError from distutils.tests import support @@ -140,4 +142,4 @@ return unittest.makeSuite(BuildCLibTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_build_ext.py b/lib-python/2.7/distutils/tests/test_build_ext.py --- a/lib-python/2.7/distutils/tests/test_build_ext.py +++ b/lib-python/2.7/distutils/tests/test_build_ext.py @@ -3,12 +3,13 @@ import tempfile import shutil from StringIO import StringIO +import textwrap from distutils.core import Extension, Distribution from distutils.command.build_ext import build_ext from distutils import sysconfig from distutils.tests import support -from distutils.errors import DistutilsSetupError +from distutils.errors import DistutilsSetupError, CompileError import unittest from test import test_support @@ -430,6 +431,59 @@ wanted = os.path.join(cmd.build_lib, 'UpdateManager', 'fdsend' + ext) self.assertEqual(ext_path, wanted) + @unittest.skipUnless(sys.platform == 'darwin', 'test only relevant for MacOSX') + def test_deployment_target(self): + self._try_compile_deployment_target() + + orig_environ = os.environ + os.environ = orig_environ.copy() + self.addCleanup(setattr, os, 'environ', orig_environ) + + os.environ['MACOSX_DEPLOYMENT_TARGET']='10.1' + self._try_compile_deployment_target() + + + def _try_compile_deployment_target(self): + deptarget_c = os.path.join(self.tmp_dir, 'deptargetmodule.c') + + with 
open(deptarget_c, 'w') as fp: + fp.write(textwrap.dedent('''\ + #include + + int dummy; + + #if TARGET != MAC_OS_X_VERSION_MIN_REQUIRED + #error "Unexpected target" + #endif + + ''')) + + target = sysconfig.get_config_var('MACOSX_DEPLOYMENT_TARGET') + target = tuple(map(int, target.split('.'))) + target = '%02d%01d0' % target + + deptarget_ext = Extension( + 'deptarget', + [deptarget_c], + extra_compile_args=['-DTARGET=%s'%(target,)], + ) + dist = Distribution({ + 'name': 'deptarget', + 'ext_modules': [deptarget_ext] + }) + dist.package_dir = self.tmp_dir + cmd = build_ext(dist) + cmd.build_lib = self.tmp_dir + cmd.build_temp = self.tmp_dir + + try: + old_stdout = sys.stdout + cmd.ensure_finalized() + cmd.run() + + except CompileError: + self.fail("Wrong deployment target during compilation") + def test_suite(): return unittest.makeSuite(BuildExtTestCase) diff --git a/lib-python/2.7/distutils/tests/test_build_py.py b/lib-python/2.7/distutils/tests/test_build_py.py --- a/lib-python/2.7/distutils/tests/test_build_py.py +++ b/lib-python/2.7/distutils/tests/test_build_py.py @@ -10,13 +10,14 @@ from distutils.errors import DistutilsFileError from distutils.tests import support +from test.test_support import run_unittest class BuildPyTestCase(support.TempdirManager, support.LoggingSilencer, unittest.TestCase): - def _setup_package_data(self): + def test_package_data(self): sources = self.mkdtemp() f = open(os.path.join(sources, "__init__.py"), "w") try: @@ -56,20 +57,15 @@ self.assertEqual(len(cmd.get_outputs()), 3) pkgdest = os.path.join(destination, "pkg") files = os.listdir(pkgdest) - return files + self.assertIn("__init__.py", files) + self.assertIn("README.txt", files) + # XXX even with -O, distutils writes pyc, not pyo; bug? + if sys.dont_write_bytecode: + self.assertNotIn("__init__.pyc", files) + else: + self.assertIn("__init__.pyc", files) - def test_package_data(self): - files = self._setup_package_data() - self.assertTrue("__init__.py" in files) - self.assertTrue("README.txt" in files) - - @unittest.skipIf(sys.flags.optimize >= 2, - "pyc files are not written with -O2 and above") - def test_package_data_pyc(self): - files = self._setup_package_data() - self.assertTrue("__init__.pyc" in files) - - def test_empty_package_dir (self): + def test_empty_package_dir(self): # See SF 1668596/1720897. 
cwd = os.getcwd() @@ -117,10 +113,10 @@ finally: sys.dont_write_bytecode = old_dont_write_bytecode - self.assertTrue('byte-compiling is disabled' in self.logs[0][1]) + self.assertIn('byte-compiling is disabled', self.logs[0][1]) def test_suite(): return unittest.makeSuite(BuildPyTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_build_scripts.py b/lib-python/2.7/distutils/tests/test_build_scripts.py --- a/lib-python/2.7/distutils/tests/test_build_scripts.py +++ b/lib-python/2.7/distutils/tests/test_build_scripts.py @@ -8,6 +8,7 @@ import sysconfig from distutils.tests import support +from test.test_support import run_unittest class BuildScriptsTestCase(support.TempdirManager, @@ -108,4 +109,4 @@ return unittest.makeSuite(BuildScriptsTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_check.py b/lib-python/2.7/distutils/tests/test_check.py --- a/lib-python/2.7/distutils/tests/test_check.py +++ b/lib-python/2.7/distutils/tests/test_check.py @@ -1,5 +1,6 @@ """Tests for distutils.command.check.""" import unittest +from test.test_support import run_unittest from distutils.command.check import check, HAS_DOCUTILS from distutils.tests import support @@ -95,4 +96,4 @@ return unittest.makeSuite(CheckTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_clean.py b/lib-python/2.7/distutils/tests/test_clean.py --- a/lib-python/2.7/distutils/tests/test_clean.py +++ b/lib-python/2.7/distutils/tests/test_clean.py @@ -6,6 +6,7 @@ from distutils.command.clean import clean from distutils.tests import support +from test.test_support import run_unittest class cleanTestCase(support.TempdirManager, support.LoggingSilencer, @@ -38,7 +39,7 @@ self.assertTrue(not os.path.exists(path), '%s was not removed' % path) - # let's run the command again (should spit warnings but suceed) + # let's run the command again (should spit warnings but succeed) cmd.all = 1 cmd.ensure_finalized() cmd.run() @@ -47,4 +48,4 @@ return unittest.makeSuite(cleanTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_cmd.py b/lib-python/2.7/distutils/tests/test_cmd.py --- a/lib-python/2.7/distutils/tests/test_cmd.py +++ b/lib-python/2.7/distutils/tests/test_cmd.py @@ -99,7 +99,7 @@ def test_ensure_dirname(self): cmd = self.cmd - cmd.option1 = os.path.dirname(__file__) + cmd.option1 = os.path.dirname(__file__) or os.curdir cmd.ensure_dirname('option1') cmd.option2 = 'xxx' self.assertRaises(DistutilsOptionError, cmd.ensure_dirname, 'option2') diff --git a/lib-python/2.7/distutils/tests/test_config.py b/lib-python/2.7/distutils/tests/test_config.py --- a/lib-python/2.7/distutils/tests/test_config.py +++ b/lib-python/2.7/distutils/tests/test_config.py @@ -11,6 +11,7 @@ from distutils.log import WARN from distutils.tests import support +from test.test_support import run_unittest PYPIRC = """\ [distutils] @@ -119,4 +120,4 @@ return unittest.makeSuite(PyPIRCCommandTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_config_cmd.py b/lib-python/2.7/distutils/tests/test_config_cmd.py --- a/lib-python/2.7/distutils/tests/test_config_cmd.py 
+++ b/lib-python/2.7/distutils/tests/test_config_cmd.py @@ -2,6 +2,7 @@ import unittest import os import sys +from test.test_support import run_unittest from distutils.command.config import dump_file, config from distutils.tests import support @@ -86,4 +87,4 @@ return unittest.makeSuite(ConfigTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_core.py b/lib-python/2.7/distutils/tests/test_core.py --- a/lib-python/2.7/distutils/tests/test_core.py +++ b/lib-python/2.7/distutils/tests/test_core.py @@ -6,7 +6,7 @@ import shutil import sys import test.test_support -from test.test_support import captured_stdout +from test.test_support import captured_stdout, run_unittest import unittest from distutils.tests import support @@ -105,4 +105,4 @@ return unittest.makeSuite(CoreTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_dep_util.py b/lib-python/2.7/distutils/tests/test_dep_util.py --- a/lib-python/2.7/distutils/tests/test_dep_util.py +++ b/lib-python/2.7/distutils/tests/test_dep_util.py @@ -6,6 +6,7 @@ from distutils.dep_util import newer, newer_pairwise, newer_group from distutils.errors import DistutilsFileError from distutils.tests import support +from test.test_support import run_unittest class DepUtilTestCase(support.TempdirManager, unittest.TestCase): @@ -77,4 +78,4 @@ return unittest.makeSuite(DepUtilTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_dir_util.py b/lib-python/2.7/distutils/tests/test_dir_util.py --- a/lib-python/2.7/distutils/tests/test_dir_util.py +++ b/lib-python/2.7/distutils/tests/test_dir_util.py @@ -10,6 +10,7 @@ from distutils import log from distutils.tests import support +from test.test_support import run_unittest class DirUtilTestCase(support.TempdirManager, unittest.TestCase): @@ -112,4 +113,4 @@ return unittest.makeSuite(DirUtilTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_dist.py b/lib-python/2.7/distutils/tests/test_dist.py --- a/lib-python/2.7/distutils/tests/test_dist.py +++ b/lib-python/2.7/distutils/tests/test_dist.py @@ -11,7 +11,7 @@ from distutils.dist import Distribution, fix_help_options, DistributionMetadata from distutils.cmd import Command import distutils.dist -from test.test_support import TESTFN, captured_stdout +from test.test_support import TESTFN, captured_stdout, run_unittest from distutils.tests import support class test_dist(Command): @@ -433,4 +433,4 @@ return suite if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_file_util.py b/lib-python/2.7/distutils/tests/test_file_util.py --- a/lib-python/2.7/distutils/tests/test_file_util.py +++ b/lib-python/2.7/distutils/tests/test_file_util.py @@ -6,6 +6,7 @@ from distutils.file_util import move_file, write_file, copy_file from distutils import log from distutils.tests import support +from test.test_support import run_unittest class FileUtilTestCase(support.TempdirManager, unittest.TestCase): @@ -77,4 +78,4 @@ return unittest.makeSuite(FileUtilTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git 
a/lib-python/2.7/distutils/tests/test_filelist.py b/lib-python/2.7/distutils/tests/test_filelist.py --- a/lib-python/2.7/distutils/tests/test_filelist.py +++ b/lib-python/2.7/distutils/tests/test_filelist.py @@ -1,7 +1,7 @@ """Tests for distutils.filelist.""" from os.path import join import unittest -from test.test_support import captured_stdout +from test.test_support import captured_stdout, run_unittest from distutils.filelist import glob_to_re, FileList from distutils import debug @@ -82,4 +82,4 @@ return unittest.makeSuite(FileListTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_install.py b/lib-python/2.7/distutils/tests/test_install.py --- a/lib-python/2.7/distutils/tests/test_install.py +++ b/lib-python/2.7/distutils/tests/test_install.py @@ -3,6 +3,8 @@ import os import unittest +from test.test_support import run_unittest + from distutils.command.install import install from distutils.core import Distribution @@ -52,4 +54,4 @@ return unittest.makeSuite(InstallTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_install_data.py b/lib-python/2.7/distutils/tests/test_install_data.py --- a/lib-python/2.7/distutils/tests/test_install_data.py +++ b/lib-python/2.7/distutils/tests/test_install_data.py @@ -6,6 +6,7 @@ from distutils.command.install_data import install_data from distutils.tests import support +from test.test_support import run_unittest class InstallDataTestCase(support.TempdirManager, support.LoggingSilencer, @@ -73,4 +74,4 @@ return unittest.makeSuite(InstallDataTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_install_headers.py b/lib-python/2.7/distutils/tests/test_install_headers.py --- a/lib-python/2.7/distutils/tests/test_install_headers.py +++ b/lib-python/2.7/distutils/tests/test_install_headers.py @@ -6,6 +6,7 @@ from distutils.command.install_headers import install_headers from distutils.tests import support +from test.test_support import run_unittest class InstallHeadersTestCase(support.TempdirManager, support.LoggingSilencer, @@ -37,4 +38,4 @@ return unittest.makeSuite(InstallHeadersTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_install_lib.py b/lib-python/2.7/distutils/tests/test_install_lib.py --- a/lib-python/2.7/distutils/tests/test_install_lib.py +++ b/lib-python/2.7/distutils/tests/test_install_lib.py @@ -7,6 +7,7 @@ from distutils.extension import Extension from distutils.tests import support from distutils.errors import DistutilsOptionError +from test.test_support import run_unittest class InstallLibTestCase(support.TempdirManager, support.LoggingSilencer, @@ -103,4 +104,4 @@ return unittest.makeSuite(InstallLibTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_install_scripts.py b/lib-python/2.7/distutils/tests/test_install_scripts.py --- a/lib-python/2.7/distutils/tests/test_install_scripts.py +++ b/lib-python/2.7/distutils/tests/test_install_scripts.py @@ -7,6 +7,7 @@ from distutils.core import Distribution from distutils.tests import support +from test.test_support import run_unittest class 
InstallScriptsTestCase(support.TempdirManager, @@ -78,4 +79,4 @@ return unittest.makeSuite(InstallScriptsTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_msvc9compiler.py b/lib-python/2.7/distutils/tests/test_msvc9compiler.py --- a/lib-python/2.7/distutils/tests/test_msvc9compiler.py +++ b/lib-python/2.7/distutils/tests/test_msvc9compiler.py @@ -5,6 +5,7 @@ from distutils.errors import DistutilsPlatformError from distutils.tests import support +from test.test_support import run_unittest _MANIFEST = """\ @@ -137,4 +138,4 @@ return unittest.makeSuite(msvc9compilerTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_register.py b/lib-python/2.7/distutils/tests/test_register.py --- a/lib-python/2.7/distutils/tests/test_register.py +++ b/lib-python/2.7/distutils/tests/test_register.py @@ -7,7 +7,7 @@ import urllib2 import warnings -from test.test_support import check_warnings +from test.test_support import check_warnings, run_unittest from distutils.command import register as register_module from distutils.command.register import register @@ -138,7 +138,7 @@ # let's see what the server received : we should # have 2 similar requests - self.assertTrue(self.conn.reqs, 2) + self.assertEqual(len(self.conn.reqs), 2) req1 = dict(self.conn.reqs[0].headers) req2 = dict(self.conn.reqs[1].headers) self.assertEqual(req2['Content-length'], req1['Content-length']) @@ -168,7 +168,7 @@ del register_module.raw_input # we should have send a request - self.assertTrue(self.conn.reqs, 1) + self.assertEqual(len(self.conn.reqs), 1) req = self.conn.reqs[0] headers = dict(req.headers) self.assertEqual(headers['Content-length'], '608') @@ -186,7 +186,7 @@ del register_module.raw_input # we should have send a request - self.assertTrue(self.conn.reqs, 1) + self.assertEqual(len(self.conn.reqs), 1) req = self.conn.reqs[0] headers = dict(req.headers) self.assertEqual(headers['Content-length'], '290') @@ -258,4 +258,4 @@ return unittest.makeSuite(RegisterTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_sdist.py b/lib-python/2.7/distutils/tests/test_sdist.py --- a/lib-python/2.7/distutils/tests/test_sdist.py +++ b/lib-python/2.7/distutils/tests/test_sdist.py @@ -24,11 +24,9 @@ import tempfile import warnings -from test.test_support import check_warnings -from test.test_support import captured_stdout +from test.test_support import captured_stdout, check_warnings, run_unittest -from distutils.command.sdist import sdist -from distutils.command.sdist import show_formats +from distutils.command.sdist import sdist, show_formats from distutils.core import Distribution from distutils.tests.test_config import PyPIRCCommandTestCase from distutils.errors import DistutilsExecError, DistutilsOptionError @@ -372,7 +370,7 @@ # adding a file self.write_file((self.tmp_dir, 'somecode', 'doc2.txt'), '#') - # make sure build_py is reinitinialized, like a fresh run + # make sure build_py is reinitialized, like a fresh run build_py = dist.get_command_obj('build_py') build_py.finalized = False build_py.ensure_finalized() @@ -390,6 +388,7 @@ self.assertEqual(len(manifest2), 6) self.assertIn('doc2.txt', manifest2[-1]) + @unittest.skipUnless(zlib, "requires zlib") def test_manifest_marker(self): # check that autogenerated MANIFESTs have a 
marker dist, cmd = self.get_cmd() @@ -406,6 +405,7 @@ self.assertEqual(manifest[0], '# file GENERATED by distutils, do NOT edit') + @unittest.skipUnless(zlib, "requires zlib") def test_manual_manifest(self): # check that a MANIFEST without a marker is left alone dist, cmd = self.get_cmd() @@ -426,4 +426,4 @@ return unittest.makeSuite(SDistTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_spawn.py b/lib-python/2.7/distutils/tests/test_spawn.py --- a/lib-python/2.7/distutils/tests/test_spawn.py +++ b/lib-python/2.7/distutils/tests/test_spawn.py @@ -2,7 +2,7 @@ import unittest import os import time -from test.test_support import captured_stdout +from test.test_support import captured_stdout, run_unittest from distutils.spawn import _nt_quote_args from distutils.spawn import spawn, find_executable @@ -57,4 +57,4 @@ return unittest.makeSuite(SpawnTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_text_file.py b/lib-python/2.7/distutils/tests/test_text_file.py --- a/lib-python/2.7/distutils/tests/test_text_file.py +++ b/lib-python/2.7/distutils/tests/test_text_file.py @@ -3,6 +3,7 @@ import unittest from distutils.text_file import TextFile from distutils.tests import support +from test.test_support import run_unittest TEST_DATA = """# test file @@ -103,4 +104,4 @@ return unittest.makeSuite(TextFileTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_unixccompiler.py b/lib-python/2.7/distutils/tests/test_unixccompiler.py --- a/lib-python/2.7/distutils/tests/test_unixccompiler.py +++ b/lib-python/2.7/distutils/tests/test_unixccompiler.py @@ -1,6 +1,7 @@ """Tests for distutils.unixccompiler.""" import sys import unittest +from test.test_support import run_unittest from distutils import sysconfig from distutils.unixccompiler import UnixCCompiler @@ -126,4 +127,4 @@ return unittest.makeSuite(UnixCCompilerTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_upload.py b/lib-python/2.7/distutils/tests/test_upload.py --- a/lib-python/2.7/distutils/tests/test_upload.py +++ b/lib-python/2.7/distutils/tests/test_upload.py @@ -1,14 +1,13 @@ +# -*- encoding: utf8 -*- """Tests for distutils.command.upload.""" -# -*- encoding: utf8 -*- -import sys import os import unittest +from test.test_support import run_unittest from distutils.command import upload as upload_mod from distutils.command.upload import upload from distutils.core import Distribution -from distutils.tests import support from distutils.tests.test_config import PYPIRC, PyPIRCCommandTestCase PYPIRC_LONG_PASSWORD = """\ @@ -129,4 +128,4 @@ return unittest.makeSuite(uploadTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_util.py b/lib-python/2.7/distutils/tests/test_util.py --- a/lib-python/2.7/distutils/tests/test_util.py +++ b/lib-python/2.7/distutils/tests/test_util.py @@ -1,6 +1,7 @@ """Tests for distutils.util.""" import sys import unittest +from test.test_support import run_unittest from distutils.errors import DistutilsPlatformError, DistutilsByteCompileError from distutils.util import byte_compile @@ -21,4 +22,4 @@ return 
unittest.makeSuite(UtilTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_version.py b/lib-python/2.7/distutils/tests/test_version.py --- a/lib-python/2.7/distutils/tests/test_version.py +++ b/lib-python/2.7/distutils/tests/test_version.py @@ -2,6 +2,7 @@ import unittest from distutils.version import LooseVersion from distutils.version import StrictVersion +from test.test_support import run_unittest class VersionTestCase(unittest.TestCase): @@ -67,4 +68,4 @@ return unittest.makeSuite(VersionTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_versionpredicate.py b/lib-python/2.7/distutils/tests/test_versionpredicate.py --- a/lib-python/2.7/distutils/tests/test_versionpredicate.py +++ b/lib-python/2.7/distutils/tests/test_versionpredicate.py @@ -4,6 +4,10 @@ import distutils.versionpredicate import doctest +from test.test_support import run_unittest def test_suite(): return doctest.DocTestSuite(distutils.versionpredicate) + +if __name__ == '__main__': + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/util.py b/lib-python/2.7/distutils/util.py --- a/lib-python/2.7/distutils/util.py +++ b/lib-python/2.7/distutils/util.py @@ -97,9 +97,7 @@ from distutils.sysconfig import get_config_vars cfgvars = get_config_vars() - macver = os.environ.get('MACOSX_DEPLOYMENT_TARGET') - if not macver: - macver = cfgvars.get('MACOSX_DEPLOYMENT_TARGET') + macver = cfgvars.get('MACOSX_DEPLOYMENT_TARGET') if 1: # Always calculate the release of the running machine, diff --git a/lib-python/2.7/doctest.py b/lib-python/2.7/doctest.py --- a/lib-python/2.7/doctest.py +++ b/lib-python/2.7/doctest.py @@ -1217,7 +1217,7 @@ # Process each example. for examplenum, example in enumerate(test.examples): - # If REPORT_ONLY_FIRST_FAILURE is set, then supress + # If REPORT_ONLY_FIRST_FAILURE is set, then suppress # reporting after the first failure. quiet = (self.optionflags & REPORT_ONLY_FIRST_FAILURE and failures > 0) @@ -2186,7 +2186,7 @@ caller can catch the errors and initiate post-mortem debugging. The DocTestCase provides a debug method that raises - UnexpectedException errors if there is an unexepcted + UnexpectedException errors if there is an unexpected exception: >>> test = DocTestParser().get_doctest('>>> raise KeyError\n42', diff --git a/lib-python/2.7/email/charset.py b/lib-python/2.7/email/charset.py --- a/lib-python/2.7/email/charset.py +++ b/lib-python/2.7/email/charset.py @@ -209,7 +209,7 @@ input_charset = unicode(input_charset, 'ascii') except UnicodeError: raise errors.CharsetError(input_charset) - input_charset = input_charset.lower() + input_charset = input_charset.lower().encode('ascii') # Set the input charset after filtering through the aliases and/or codecs if not (input_charset in ALIASES or input_charset in CHARSETS): try: diff --git a/lib-python/2.7/email/generator.py b/lib-python/2.7/email/generator.py --- a/lib-python/2.7/email/generator.py +++ b/lib-python/2.7/email/generator.py @@ -202,18 +202,13 @@ g = self.clone(s) g.flatten(part, unixfrom=False) msgtexts.append(s.getvalue()) - # Now make sure the boundary we've selected doesn't appear in any of - # the message texts. - alltext = NL.join(msgtexts) # BAW: What about boundaries that are wrapped in double-quotes? 
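The change repeated across the distutils test modules above is purely mechanical: each __main__ block stops calling unittest.main(defaultTest="test_suite") and instead hands the suite to test.test_support.run_unittest. A minimal sketch of the resulting module skeleton, with ExampleTestCase as a placeholder name that is not part of the patch:

    import unittest
    from test.test_support import run_unittest

    class ExampleTestCase(unittest.TestCase):      # placeholder test case
        def test_trivial(self):
            self.assertEqual(1 + 1, 2)

    def test_suite():
        # every distutils test module exposes its cases through test_suite()
        return unittest.makeSuite(ExampleTestCase)

    if __name__ == "__main__":
        # previously: unittest.main(defaultTest="test_suite")
        run_unittest(test_suite())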
- boundary = msg.get_boundary(failobj=_make_boundary(alltext)) - # If we had to calculate a new boundary because the body text - # contained that string, set the new boundary. We don't do it - # unconditionally because, while set_boundary() preserves order, it - # doesn't preserve newlines/continuations in headers. This is no big - # deal in practice, but turns out to be inconvenient for the unittest - # suite. - if msg.get_boundary() != boundary: + boundary = msg.get_boundary() + if not boundary: + # Create a boundary that doesn't appear in any of the + # message texts. + alltext = NL.join(msgtexts) + boundary = _make_boundary(alltext) msg.set_boundary(boundary) # If there's a preamble, write it out, with a trailing CRLF if msg.preamble is not None: @@ -292,7 +287,7 @@ _FMT = '[Non-text (%(type)s) part of message omitted, filename %(filename)s]' class DecodedGenerator(Generator): - """Generator a text representation of a message. + """Generates a text representation of a message. Like the Generator base class, except that non-text parts are substituted with a format string representing the part. diff --git a/lib-python/2.7/email/header.py b/lib-python/2.7/email/header.py --- a/lib-python/2.7/email/header.py +++ b/lib-python/2.7/email/header.py @@ -47,6 +47,10 @@ # For use with .match() fcre = re.compile(r'[\041-\176]+:$') +# Find a header embedded in a putative header value. Used to check for +# header injection attack. +_embeded_header = re.compile(r'\n[^ \t]+:') + # Helpers @@ -403,7 +407,11 @@ newchunks += self._split(s, charset, targetlen, splitchars) lastchunk, lastcharset = newchunks[-1] lastlen = lastcharset.encoded_header_len(lastchunk) - return self._encode_chunks(newchunks, maxlinelen) + value = self._encode_chunks(newchunks, maxlinelen) + if _embeded_header.search(value): + raise HeaderParseError("header value appears to contain " + "an embedded header: {!r}".format(value)) + return value diff --git a/lib-python/2.7/email/message.py b/lib-python/2.7/email/message.py --- a/lib-python/2.7/email/message.py +++ b/lib-python/2.7/email/message.py @@ -38,7 +38,9 @@ def _formatparam(param, value=None, quote=True): """Convenience function to format and return a key=value pair. - This will quote the value if needed or if quote is true. + This will quote the value if needed or if quote is true. If value is a + three tuple (charset, language, value), it will be encoded according + to RFC2231 rules. """ if value is not None and len(value) > 0: # A tuple is used for RFC 2231 encoded parameter values where items @@ -97,7 +99,7 @@ objects, otherwise it is a string. Message objects implement part of the `mapping' interface, which assumes - there is exactly one occurrance of the header per message. Some headers + there is exactly one occurrence of the header per message. Some headers do in fact appear multiple times (e.g. Received) and for those headers, you must use the explicit API to set or get all the headers. Not all of the mapping methods are implemented. @@ -286,7 +288,7 @@ Return None if the header is missing instead of raising an exception. Note that if the header appeared multiple times, exactly which - occurrance gets returned is undefined. Use get_all() to get all + occurrence gets returned is undefined. Use get_all() to get all the values matching a header field name. """ return self.get(name) @@ -389,7 +391,10 @@ name is the header field to add. keyword arguments can be used to set additional parameters for the header field, with underscores converted to dashes. 
Normally the parameter will be added as key="value" unless - value is None, in which case only the key will be added. + value is None, in which case only the key will be added. If a + parameter value contains non-ASCII characters it must be specified as a + three-tuple of (charset, language, value), in which case it will be + encoded according to RFC2231 rules. Example: diff --git a/lib-python/2.7/email/mime/application.py b/lib-python/2.7/email/mime/application.py --- a/lib-python/2.7/email/mime/application.py +++ b/lib-python/2.7/email/mime/application.py @@ -17,7 +17,7 @@ _encoder=encoders.encode_base64, **_params): """Create an application/* type MIME document. - _data is a string containing the raw applicatoin data. + _data is a string containing the raw application data. _subtype is the MIME content type subtype, defaulting to 'octet-stream'. diff --git a/lib-python/2.7/email/test/data/msg_26.txt b/lib-python/2.7/email/test/data/msg_26.txt --- a/lib-python/2.7/email/test/data/msg_26.txt +++ b/lib-python/2.7/email/test/data/msg_26.txt @@ -42,4 +42,4 @@ MzMAAAAACH97tzAAAAALu3c3gAAAAAAL+7tzDABAu7f7cAAAAAAACA+3MA7EQAv/sIAA AAAAAAAIAAAAAAAAAIAAAAAA ---1618492860--2051301190--113853680-- +--1618492860--2051301190--113853680-- \ No newline at end of file diff --git a/lib-python/2.7/email/test/test_email.py b/lib-python/2.7/email/test/test_email.py --- a/lib-python/2.7/email/test/test_email.py +++ b/lib-python/2.7/email/test/test_email.py @@ -179,6 +179,17 @@ self.assertRaises(Errors.HeaderParseError, msg.set_boundary, 'BOUNDARY') + def test_make_boundary(self): + msg = MIMEMultipart('form-data') + # Note that when the boundary gets created is an implementation + # detail and might change. + self.assertEqual(msg.items()[0][1], 'multipart/form-data') + # Trigger creation of boundary + msg.as_string() + self.assertEqual(msg.items()[0][1][:33], + 'multipart/form-data; boundary="==') + # XXX: there ought to be tests of the uniqueness of the boundary, too. + def test_message_rfc822_only(self): # Issue 7970: message/rfc822 not in multipart parsed by # HeaderParser caused an exception when flattened. @@ -542,6 +553,17 @@ msg.set_charset(u'us-ascii') self.assertEqual('us-ascii', msg.get_content_charset()) + # Issue 5871: reject an attempt to embed a header inside a header value + # (header injection attack). 
+ def test_embeded_header_via_Header_rejected(self): + msg = Message() + msg['Dummy'] = Header('dummy\nX-Injected-Header: test') + self.assertRaises(Errors.HeaderParseError, msg.as_string) + + def test_embeded_header_via_string_rejected(self): + msg = Message() + msg['Dummy'] = 'dummy\nX-Injected-Header: test' + self.assertRaises(Errors.HeaderParseError, msg.as_string) # Test the email.Encoders module @@ -3113,6 +3135,28 @@ s = 'Subject: =?EUC-KR?B?CSixpLDtKSC/7Liuvsax4iC6uLmwMcijIKHaILzSwd/H0SC8+LCjwLsgv7W/+Mj3I ?=' raises(Errors.HeaderParseError, decode_header, s) + # Issue 1078919 + def test_ascii_add_header(self): + msg = Message() + msg.add_header('Content-Disposition', 'attachment', + filename='bud.gif') + self.assertEqual('attachment; filename="bud.gif"', + msg['Content-Disposition']) + + def test_nonascii_add_header_via_triple(self): + msg = Message() + msg.add_header('Content-Disposition', 'attachment', + filename=('iso-8859-1', '', 'Fu\xdfballer.ppt')) + self.assertEqual( + 'attachment; filename*="iso-8859-1\'\'Fu%DFballer.ppt"', + msg['Content-Disposition']) + + def test_encode_unaliased_charset(self): + # Issue 1379416: when the charset has no output conversion, + # output was accidentally getting coerced to unicode. + res = Header('abc','iso-8859-2').encode() + self.assertEqual(res, '=?iso-8859-2?q?abc?=') + self.assertIsInstance(res, str) # Test RFC 2231 header parameters (en/de)coding diff --git a/lib-python/2.7/ftplib.py b/lib-python/2.7/ftplib.py --- a/lib-python/2.7/ftplib.py +++ b/lib-python/2.7/ftplib.py @@ -599,7 +599,7 @@ Usage example: >>> from ftplib import FTP_TLS >>> ftps = FTP_TLS('ftp.python.org') - >>> ftps.login() # login anonimously previously securing control channel + >>> ftps.login() # login anonymously previously securing control channel '230 Guest login ok, access restrictions apply.' 
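The _embeded_header guard added to email/header.py and the add_header() three-tuple support are exactly what the new tests above exercise; roughly, the observable behaviour after this change is the following (a small sketch, not a new API):

    from email.message import Message
    from email.header import Header
    from email.errors import HeaderParseError

    msg = Message()
    msg['Dummy'] = Header('dummy\nX-Injected-Header: test')
    try:
        msg.as_string()      # flattening now refuses to emit the injected header
    except HeaderParseError:
        pass

    # a (charset, language, value) triple is encoded per RFC 2231:
    msg2 = Message()
    msg2.add_header('Content-Disposition', 'attachment',
                    filename=('iso-8859-1', '', 'Fu\xdfballer.ppt'))
    print(msg2['Content-Disposition'])
    # attachment; filename*="iso-8859-1''Fu%DFballer.ppt"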
>>> ftps.prot_p() # switch to secure data connection '200 Protection level set to P' diff --git a/lib-python/2.7/functools.py b/lib-python/2.7/functools.py --- a/lib-python/2.7/functools.py +++ b/lib-python/2.7/functools.py @@ -53,17 +53,17 @@ def total_ordering(cls): """Class decorator that fills in missing ordering methods""" convert = { - '__lt__': [('__gt__', lambda self, other: other < self), - ('__le__', lambda self, other: not other < self), + '__lt__': [('__gt__', lambda self, other: not (self < other or self == other)), + ('__le__', lambda self, other: self < other or self == other), ('__ge__', lambda self, other: not self < other)], - '__le__': [('__ge__', lambda self, other: other <= self), - ('__lt__', lambda self, other: not other <= self), + '__le__': [('__ge__', lambda self, other: not self <= other or self == other), + ('__lt__', lambda self, other: self <= other and not self == other), ('__gt__', lambda self, other: not self <= other)], - '__gt__': [('__lt__', lambda self, other: other > self), - ('__ge__', lambda self, other: not other > self), + '__gt__': [('__lt__', lambda self, other: not (self > other or self == other)), + ('__ge__', lambda self, other: self > other or self == other), ('__le__', lambda self, other: not self > other)], - '__ge__': [('__le__', lambda self, other: other >= self), - ('__gt__', lambda self, other: not other >= self), + '__ge__': [('__le__', lambda self, other: (not self >= other) or self == other), + ('__gt__', lambda self, other: self >= other and not self == other), ('__lt__', lambda self, other: not self >= other)] } roots = set(dir(cls)) & set(convert) @@ -80,6 +80,7 @@ def cmp_to_key(mycmp): """Convert a cmp= function into a key= function""" class K(object): + __slots__ = ['obj'] def __init__(self, obj, *args): self.obj = obj def __lt__(self, other): diff --git a/lib-python/2.7/getpass.py b/lib-python/2.7/getpass.py --- a/lib-python/2.7/getpass.py +++ b/lib-python/2.7/getpass.py @@ -62,7 +62,7 @@ try: old = termios.tcgetattr(fd) # a copy to save new = old[:] - new[3] &= ~(termios.ECHO|termios.ISIG) # 3 == 'lflags' + new[3] &= ~termios.ECHO # 3 == 'lflags' tcsetattr_flags = termios.TCSAFLUSH if hasattr(termios, 'TCSASOFT'): tcsetattr_flags |= termios.TCSASOFT diff --git a/lib-python/2.7/gettext.py b/lib-python/2.7/gettext.py --- a/lib-python/2.7/gettext.py +++ b/lib-python/2.7/gettext.py @@ -316,7 +316,7 @@ # Note: we unconditionally convert both msgids and msgstrs to # Unicode using the character encoding specified in the charset # parameter of the Content-Type header. The gettext documentation - # strongly encourages msgids to be us-ascii, but some appliations + # strongly encourages msgids to be us-ascii, but some applications # require alternative encodings (e.g. Zope's ZCML and ZPT). 
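The rewritten total_ordering table above derives each missing comparison from the decorated operator plus __eq__ on the object itself, instead of delegating to the reflected operator on the other operand, and cmp_to_key gains __slots__. A small usage sketch; the Version class is illustrative only and not part of the patch:

    from functools import total_ordering, cmp_to_key

    @total_ordering
    class Version(object):                 # made-up example class
        def __init__(self, n):
            self.n = n
        def __eq__(self, other):
            return self.n == other.n
        def __lt__(self, other):
            return self.n < other.n

    # __gt__, __ge__ and __le__ are synthesized from __lt__ and __eq__
    assert Version(2) > Version(1)
    assert Version(1) <= Version(1)

    # cmp_to_key wraps an old-style cmp function for use as a sort key
    print(sorted([3, 1, 2], key=cmp_to_key(cmp)))    # [1, 2, 3]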
For # traditional gettext applications, the msgid conversion will # cause no problems since us-ascii should always be a subset of diff --git a/lib-python/2.7/hashlib.py b/lib-python/2.7/hashlib.py --- a/lib-python/2.7/hashlib.py +++ b/lib-python/2.7/hashlib.py @@ -64,26 +64,29 @@ def __get_builtin_constructor(name): - if name in ('SHA1', 'sha1'): - import _sha - return _sha.new - elif name in ('MD5', 'md5'): - import _md5 - return _md5.new - elif name in ('SHA256', 'sha256', 'SHA224', 'sha224'): - import _sha256 - bs = name[3:] - if bs == '256': - return _sha256.sha256 - elif bs == '224': - return _sha256.sha224 - elif name in ('SHA512', 'sha512', 'SHA384', 'sha384'): - import _sha512 - bs = name[3:] - if bs == '512': - return _sha512.sha512 - elif bs == '384': - return _sha512.sha384 + try: + if name in ('SHA1', 'sha1'): + import _sha + return _sha.new + elif name in ('MD5', 'md5'): + import _md5 + return _md5.new + elif name in ('SHA256', 'sha256', 'SHA224', 'sha224'): + import _sha256 + bs = name[3:] + if bs == '256': + return _sha256.sha256 + elif bs == '224': + return _sha256.sha224 + elif name in ('SHA512', 'sha512', 'SHA384', 'sha384'): + import _sha512 + bs = name[3:] + if bs == '512': + return _sha512.sha512 + elif bs == '384': + return _sha512.sha384 + except ImportError: + pass # no extension module, this hash is unsupported. raise ValueError('unsupported hash type %s' % name) diff --git a/lib-python/2.7/heapq.py b/lib-python/2.7/heapq.py --- a/lib-python/2.7/heapq.py +++ b/lib-python/2.7/heapq.py @@ -133,6 +133,11 @@ from operator import itemgetter import bisect +def cmp_lt(x, y): + # Use __lt__ if available; otherwise, try __le__. + # In Py3.x, only __lt__ will be called. + return (x < y) if hasattr(x, '__lt__') else (not y <= x) + def heappush(heap, item): """Push item onto heap, maintaining the heap invariant.""" heap.append(item) @@ -167,13 +172,13 @@ def heappushpop(heap, item): """Fast version of a heappush followed by a heappop.""" - if heap and heap[0] < item: + if heap and cmp_lt(heap[0], item): item, heap[0] = heap[0], item _siftup(heap, 0) return item def heapify(x): - """Transform list into a heap, in-place, in O(len(heap)) time.""" + """Transform list into a heap, in-place, in O(len(x)) time.""" n = len(x) # Transform bottom-up. The largest index there's any point to looking at # is the largest with a child index in-range, so must have 2*i + 1 < n, @@ -215,11 +220,10 @@ pop = result.pop los = result[-1] # los --> Largest of the nsmallest for elem in it: - if los <= elem: - continue - insort(result, elem) - pop() - los = result[-1] + if cmp_lt(elem, los): + insort(result, elem) + pop() + los = result[-1] return result # An alternative approach manifests the whole iterable in memory but # saves comparisons by heapifying all at once. Also, saves time @@ -240,7 +244,7 @@ while pos > startpos: parentpos = (pos - 1) >> 1 parent = heap[parentpos] - if newitem < parent: + if cmp_lt(newitem, parent): heap[pos] = parent pos = parentpos continue @@ -295,7 +299,7 @@ while childpos < endpos: # Set childpos to index of smaller child. rightpos = childpos + 1 - if rightpos < endpos and not heap[childpos] < heap[rightpos]: + if rightpos < endpos and not cmp_lt(heap[childpos], heap[rightpos]): childpos = rightpos # Move the smaller child up. 
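The new cmp_lt() helper above makes the pure-Python heap routines prefer __lt__ but fall back to __le__ when only the latter is defined, mirroring Py3's use of __lt__ alone. A quick sketch of the effect; LeOnly is a made-up class used purely for illustration:

    import heapq

    class LeOnly(object):
        # defines only __le__; cmp_lt() then evaluates "not y <= x"
        def __init__(self, v):
            self.v = v
        def __le__(self, other):
            return self.v <= other.v

    heap = []
    for v in (3, 1, 2):
        heapq.heappush(heap, LeOnly(v))
    print([heapq.heappop(heap).v for _ in range(3)])   # [1, 2, 3]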
heap[pos] = heap[childpos] @@ -364,7 +368,7 @@ return [min(chain(head, it))] return [min(chain(head, it), key=key)] - # When n>=size, it's faster to use sort() + # When n>=size, it's faster to use sorted() try: size = len(iterable) except (TypeError, AttributeError): @@ -402,7 +406,7 @@ return [max(chain(head, it))] return [max(chain(head, it), key=key)] - # When n>=size, it's faster to use sort() + # When n>=size, it's faster to use sorted() try: size = len(iterable) except (TypeError, AttributeError): diff --git a/lib-python/2.7/httplib.py b/lib-python/2.7/httplib.py --- a/lib-python/2.7/httplib.py +++ b/lib-python/2.7/httplib.py @@ -212,6 +212,9 @@ # maximal amount of data to read at one time in _safe_read MAXAMOUNT = 1048576 +# maximal line length when calling readline(). +_MAXLINE = 65536 + class HTTPMessage(mimetools.Message): def addheader(self, key, value): @@ -274,7 +277,9 @@ except IOError: startofline = tell = None self.seekable = 0 - line = self.fp.readline() + line = self.fp.readline(_MAXLINE + 1) + if len(line) > _MAXLINE: + raise LineTooLong("header line") if not line: self.status = 'EOF in headers' break @@ -404,7 +409,10 @@ break # skip the header from the 100 response while True: - skip = self.fp.readline().strip() + skip = self.fp.readline(_MAXLINE + 1) + if len(skip) > _MAXLINE: + raise LineTooLong("header line") + skip = skip.strip() if not skip: break if self.debuglevel > 0: @@ -563,7 +571,9 @@ value = [] while True: if chunk_left is None: - line = self.fp.readline() + line = self.fp.readline(_MAXLINE + 1) + if len(line) > _MAXLINE: + raise LineTooLong("chunk size") i = line.find(';') if i >= 0: line = line[:i] # strip chunk-extensions @@ -598,7 +608,9 @@ # read and discard trailer up to the CRLF terminator ### note: we shouldn't have any trailers! while True: - line = self.fp.readline() + line = self.fp.readline(_MAXLINE + 1) + if len(line) > _MAXLINE: + raise LineTooLong("trailer line") if not line: # a vanishingly small number of sites EOF without # sending the trailer @@ -730,7 +742,9 @@ raise socket.error("Tunnel connection failed: %d %s" % (code, message.strip())) while True: - line = response.fp.readline() + line = response.fp.readline(_MAXLINE + 1) + if len(line) > _MAXLINE: + raise LineTooLong("header line") if line == '\r\n': break @@ -790,7 +804,7 @@ del self._buffer[:] # If msg and message_body are sent in a single send() call, # it will avoid performance problems caused by the interaction - # between delayed ack and the Nagle algorithim. + # between delayed ack and the Nagle algorithm. 
if isinstance(message_body, str): msg += message_body message_body = None @@ -1233,6 +1247,11 @@ self.args = line, self.line = line +class LineTooLong(HTTPException): + def __init__(self, line_type): + HTTPException.__init__(self, "got more than %d bytes when reading %s" + % (_MAXLINE, line_type)) + # for backwards compatibility error = HTTPException diff --git a/lib-python/2.7/idlelib/Bindings.py b/lib-python/2.7/idlelib/Bindings.py --- a/lib-python/2.7/idlelib/Bindings.py +++ b/lib-python/2.7/idlelib/Bindings.py @@ -98,14 +98,6 @@ # menu del menudefs[-1][1][0:2] - menudefs.insert(0, - ('application', [ - ('About IDLE', '<>'), - None, - ('_Preferences....', '<>'), - ])) - - default_keydefs = idleConf.GetCurrentKeySet() del sys diff --git a/lib-python/2.7/idlelib/EditorWindow.py b/lib-python/2.7/idlelib/EditorWindow.py --- a/lib-python/2.7/idlelib/EditorWindow.py +++ b/lib-python/2.7/idlelib/EditorWindow.py @@ -48,6 +48,21 @@ path = module.__path__ except AttributeError: raise ImportError, 'No source for module ' + module.__name__ + if descr[2] != imp.PY_SOURCE: + # If all of the above fails and didn't raise an exception,fallback + # to a straight import which can find __init__.py in a package. + m = __import__(fullname) + try: + filename = m.__file__ + except AttributeError: + pass + else: + file = None + base, ext = os.path.splitext(filename) + if ext == '.pyc': + ext = '.py' + filename = base + ext + descr = filename, None, imp.PY_SOURCE return file, filename, descr class EditorWindow(object): @@ -102,8 +117,8 @@ self.top = top = WindowList.ListedToplevel(root, menu=self.menubar) if flist: self.tkinter_vars = flist.vars - #self.top.instance_dict makes flist.inversedict avalable to - #configDialog.py so it can access all EditorWindow instaces + #self.top.instance_dict makes flist.inversedict available to + #configDialog.py so it can access all EditorWindow instances self.top.instance_dict = flist.inversedict else: self.tkinter_vars = {} # keys: Tkinter event names @@ -136,6 +151,14 @@ if macosxSupport.runningAsOSXApp(): # Command-W on editorwindows doesn't work without this. text.bind('<>', self.close_event) + # Some OS X systems have only one mouse button, + # so use control-click for pulldown menus there. + # (Note, AquaTk defines <2> as the right button if + # present and the Tk Text widget already binds <2>.) + text.bind("",self.right_menu_event) + else: + # Elsewhere, use right-click for pulldown menus. + text.bind("<3>",self.right_menu_event) text.bind("<>", self.cut) text.bind("<>", self.copy) text.bind("<>", self.paste) @@ -154,7 +177,6 @@ text.bind("<>", self.find_selection_event) text.bind("<>", self.replace_event) text.bind("<>", self.goto_line_event) - text.bind("<3>", self.right_menu_event) text.bind("<>",self.smart_backspace_event) text.bind("<>",self.newline_and_indent_event) text.bind("<>",self.smart_indent_event) @@ -300,13 +322,13 @@ return "break" def home_callback(self, event): - if (event.state & 12) != 0 and event.keysym == "Home": - # state&1==shift, state&4==control, state&8==alt - return # ; fall back to class binding - + if (event.state & 4) != 0 and event.keysym == "Home": + # state&4==Control. If , use the Tk binding. 
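With the readline() calls above capped at _MAXLINE bytes, an over-long status, header, chunk-size or trailer line now raises the new LineTooLong exception instead of being buffered without bound. LineTooLong subclasses HTTPException, so existing handlers keep catching it; a minimal check:

    import httplib

    assert issubclass(httplib.LineTooLong, httplib.HTTPException)
    print(httplib.LineTooLong('header line'))
    # got more than 65536 bytes when reading header line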
+ return if self.text.index("iomark") and \ self.text.compare("iomark", "<=", "insert lineend") and \ self.text.compare("insert linestart", "<=", "iomark"): + # In Shell on input line, go to just after prompt insertpt = int(self.text.index("iomark").split(".")[1]) else: line = self.text.get("insert linestart", "insert lineend") @@ -315,30 +337,27 @@ break else: insertpt=len(line) - lineat = int(self.text.index("insert").split('.')[1]) - if insertpt == lineat: insertpt = 0 - dest = "insert linestart+"+str(insertpt)+"c" - if (event.state&1) == 0: - # shift not pressed + # shift was not pressed self.text.tag_remove("sel", "1.0", "end") else: if not self.text.index("sel.first"): - self.text.mark_set("anchor","insert") - + self.text.mark_set("my_anchor", "insert") # there was no previous selection + else: + if self.text.compare(self.text.index("sel.first"), "<", self.text.index("insert")): + self.text.mark_set("my_anchor", "sel.first") # extend back + else: + self.text.mark_set("my_anchor", "sel.last") # extend forward first = self.text.index(dest) - last = self.text.index("anchor") - + last = self.text.index("my_anchor") if self.text.compare(first,">",last): first,last = last,first - self.text.tag_remove("sel", "1.0", "end") self.text.tag_add("sel", first, last) - self.text.mark_set("insert", dest) self.text.see("insert") return "break" @@ -385,7 +404,7 @@ menudict[name] = menu = Menu(mbar, name=name) mbar.add_cascade(label=label, menu=menu, underline=underline) - if macosxSupport.runningAsOSXApp(): + if macosxSupport.isCarbonAquaTk(self.root): # Insert the application menu menudict['application'] = menu = Menu(mbar, name='apple') mbar.add_cascade(label='IDLE', menu=menu) @@ -445,7 +464,11 @@ def python_docs(self, event=None): if sys.platform[:3] == 'win': - os.startfile(self.help_url) + try: + os.startfile(self.help_url) + except WindowsError as why: + tkMessageBox.showerror(title='Document Start Failure', + message=str(why), parent=self.text) else: webbrowser.open(self.help_url) return "break" @@ -740,9 +763,13 @@ "Create a callback with the helpfile value frozen at definition time" def display_extra_help(helpfile=helpfile): if not helpfile.startswith(('www', 'http')): - url = os.path.normpath(helpfile) + helpfile = os.path.normpath(helpfile) if sys.platform[:3] == 'win': - os.startfile(helpfile) + try: + os.startfile(helpfile) + except WindowsError as why: + tkMessageBox.showerror(title='Document Start Failure', + message=str(why), parent=self.text) else: webbrowser.open(helpfile) return display_extra_help @@ -1526,7 +1553,12 @@ def get_accelerator(keydefs, eventname): keylist = keydefs.get(eventname) - if not keylist: + # issue10940: temporary workaround to prevent hang with OS X Cocoa Tk 8.5 + # if not keylist: + if (not keylist) or (macosxSupport.runningAsOSXApp() and eventname in { + "<>", + "<>", + "<>"}): return "" s = keylist[0] s = re.sub(r"-[a-z]\b", lambda m: m.group().upper(), s) diff --git a/lib-python/2.7/idlelib/FileList.py b/lib-python/2.7/idlelib/FileList.py --- a/lib-python/2.7/idlelib/FileList.py +++ b/lib-python/2.7/idlelib/FileList.py @@ -43,7 +43,7 @@ def new(self, filename=None): return self.EditorWindow(self, filename) - def close_all_callback(self, event): + def close_all_callback(self, *args, **kwds): for edit in self.inversedict.keys(): reply = edit.close() if reply == "cancel": diff --git a/lib-python/2.7/idlelib/FormatParagraph.py b/lib-python/2.7/idlelib/FormatParagraph.py --- a/lib-python/2.7/idlelib/FormatParagraph.py +++ 
b/lib-python/2.7/idlelib/FormatParagraph.py @@ -54,7 +54,7 @@ # If the block ends in a \n, we dont want the comment # prefix inserted after it. (Im not sure it makes sense to # reformat a comment block that isnt made of complete - # lines, but whatever!) Can't think of a clean soltution, + # lines, but whatever!) Can't think of a clean solution, # so we hack away block_suffix = "" if not newdata[-1]: diff --git a/lib-python/2.7/idlelib/HISTORY.txt b/lib-python/2.7/idlelib/HISTORY.txt --- a/lib-python/2.7/idlelib/HISTORY.txt +++ b/lib-python/2.7/idlelib/HISTORY.txt @@ -13,7 +13,7 @@ - New tarball released as a result of the 'revitalisation' of the IDLEfork project. -- This release requires python 2.1 or better. Compatability with earlier +- This release requires python 2.1 or better. Compatibility with earlier versions of python (especially ancient ones like 1.5x) is no longer a priority in IDLEfork development. diff --git a/lib-python/2.7/idlelib/IOBinding.py b/lib-python/2.7/idlelib/IOBinding.py --- a/lib-python/2.7/idlelib/IOBinding.py +++ b/lib-python/2.7/idlelib/IOBinding.py @@ -320,17 +320,20 @@ return "yes" message = "Do you want to save %s before closing?" % ( self.filename or "this untitled document") - m = tkMessageBox.Message( - title="Save On Close", - message=message, - icon=tkMessageBox.QUESTION, - type=tkMessageBox.YESNOCANCEL, - master=self.text) - reply = m.show() - if reply == "yes": + confirm = tkMessageBox.askyesnocancel( + title="Save On Close", + message=message, + default=tkMessageBox.YES, + master=self.text) + if confirm: + reply = "yes" self.save(None) if not self.get_saved(): reply = "cancel" + elif confirm is None: + reply = "cancel" + else: + reply = "no" self.text.focus_set() return reply @@ -339,7 +342,7 @@ self.save_as(event) else: if self.writefile(self.filename): - self.set_saved(1) + self.set_saved(True) try: self.editwin.store_file_breaks() except AttributeError: # may be a PyShell @@ -465,15 +468,12 @@ self.text.insert("end-1c", "\n") def print_window(self, event): - m = tkMessageBox.Message( - title="Print", - message="Print to Default Printer", - icon=tkMessageBox.QUESTION, - type=tkMessageBox.OKCANCEL, - default=tkMessageBox.OK, - master=self.text) - reply = m.show() - if reply != tkMessageBox.OK: + confirm = tkMessageBox.askokcancel( + title="Print", + message="Print to Default Printer", + default=tkMessageBox.OK, + master=self.text) + if not confirm: self.text.focus_set() return "break" tempfilename = None @@ -488,8 +488,8 @@ if not self.writefile(tempfilename): os.unlink(tempfilename) return "break" - platform=os.name - printPlatform=1 + platform = os.name + printPlatform = True if platform == 'posix': #posix platform command = idleConf.GetOption('main','General', 'print-command-posix') @@ -497,7 +497,7 @@ elif platform == 'nt': #win32 platform command = idleConf.GetOption('main','General','print-command-win') else: #no printing for this platform - printPlatform=0 + printPlatform = False if printPlatform: #we can try to print for this platform command = command % filename pipe = os.popen(command, "r") @@ -511,7 +511,7 @@ output = "Printing command: %s\n" % repr(command) + output tkMessageBox.showerror("Print status", output, master=self.text) else: #no printing for this platform - message="Printing is not enabled for this platform: %s" % platform + message = "Printing is not enabled for this platform: %s" % platform tkMessageBox.showinfo("Print status", message, master=self.text) if tempfilename: os.unlink(tempfilename) diff --git 
a/lib-python/2.7/idlelib/NEWS.txt b/lib-python/2.7/idlelib/NEWS.txt --- a/lib-python/2.7/idlelib/NEWS.txt +++ b/lib-python/2.7/idlelib/NEWS.txt @@ -1,3 +1,18 @@ +What's New in IDLE 2.7.2? +======================= + +*Release date: 29-May-2011* + +- Issue #6378: Further adjust idle.bat to start associated Python + +- Issue #11896: Save on Close failed despite selecting "Yes" in dialog. + +- toggle failing on Tk 8.5, causing IDLE exits and strange selection + behavior. Issue 4676. Improve selection extension behaviour. + +- toggle non-functional when NumLock set on Windows. Issue 3851. + + What's New in IDLE 2.7? ======================= @@ -21,7 +36,7 @@ - Tk 8.5 Text widget requires 'wordprocessor' tabstyle attr to handle mixed space/tab properly. Issue 5129, patch by Guilherme Polo. - + - Issue #3549: On MacOS the preferences menu was not present diff --git a/lib-python/2.7/idlelib/PyShell.py b/lib-python/2.7/idlelib/PyShell.py --- a/lib-python/2.7/idlelib/PyShell.py +++ b/lib-python/2.7/idlelib/PyShell.py @@ -1432,6 +1432,13 @@ shell.interp.prepend_syspath(script) shell.interp.execfile(script) + # Check for problematic OS X Tk versions and print a warning message + # in the IDLE shell window; this is less intrusive than always opening + # a separate window. + tkversionwarning = macosxSupport.tkVersionWarning(root) + if tkversionwarning: + shell.interp.runcommand(''.join(("print('", tkversionwarning, "')"))) + root.mainloop() root.destroy() diff --git a/lib-python/2.7/idlelib/ScriptBinding.py b/lib-python/2.7/idlelib/ScriptBinding.py --- a/lib-python/2.7/idlelib/ScriptBinding.py +++ b/lib-python/2.7/idlelib/ScriptBinding.py @@ -26,6 +26,7 @@ from idlelib import PyShell from idlelib.configHandler import idleConf +from idlelib import macosxSupport IDENTCHARS = string.ascii_letters + string.digits + "_" @@ -53,6 +54,9 @@ self.flist = self.editwin.flist self.root = self.editwin.root + if macosxSupport.runningAsOSXApp(): + self.editwin.text_frame.bind('<>', self._run_module_event) + def check_module_event(self, event): filename = self.getfilename() if not filename: @@ -166,6 +170,19 @@ interp.runcode(code) return 'break' + if macosxSupport.runningAsOSXApp(): + # Tk-Cocoa in MacOSX is broken until at least + # Tk 8.5.9, and without this rather + # crude workaround IDLE would hang when a user + # tries to run a module using the keyboard shortcut + # (the menu item works fine). + _run_module_event = run_module_event + + def run_module_event(self, event): + self.editwin.text_frame.after(200, + lambda: self.editwin.text_frame.event_generate('<>')) + return 'break' + def getfilename(self): """Get source filename. If not saved, offer to save (or create) file @@ -184,9 +201,9 @@ if autosave and filename: self.editwin.io.save(None) else: - reply = self.ask_save_dialog() + confirm = self.ask_save_dialog() self.editwin.text.focus_set() - if reply == "ok": + if confirm: self.editwin.io.save(None) filename = self.editwin.io.filename else: @@ -195,13 +212,11 @@ def ask_save_dialog(self): msg = "Source Must Be Saved\n" + 5*' ' + "OK to Save?" - mb = tkMessageBox.Message(title="Save Before Run or Check", - message=msg, - icon=tkMessageBox.QUESTION, - type=tkMessageBox.OKCANCEL, - default=tkMessageBox.OK, - master=self.editwin.text) - return mb.show() + confirm = tkMessageBox.askokcancel(title="Save Before Run or Check", + message=msg, + default=tkMessageBox.OK, + master=self.editwin.text) + return confirm def errorbox(self, title, message): # XXX This should really be a function of EditorWindow... 
diff --git a/lib-python/2.7/idlelib/config-keys.def b/lib-python/2.7/idlelib/config-keys.def --- a/lib-python/2.7/idlelib/config-keys.def +++ b/lib-python/2.7/idlelib/config-keys.def @@ -176,7 +176,7 @@ redo = close-window = restart-shell = -save-window-as-file = +save-window-as-file = close-all-windows = view-restart = tabify-region = @@ -208,7 +208,7 @@ open-module = find-selection = python-context-help = -save-copy-of-window-as-file = +save-copy-of-window-as-file = open-window-from-file = python-docs = diff --git a/lib-python/2.7/idlelib/extend.txt b/lib-python/2.7/idlelib/extend.txt --- a/lib-python/2.7/idlelib/extend.txt +++ b/lib-python/2.7/idlelib/extend.txt @@ -18,7 +18,7 @@ An IDLE extension class is instantiated with a single argument, `editwin', an EditorWindow instance. The extension cannot assume much -about this argument, but it is guarateed to have the following instance +about this argument, but it is guaranteed to have the following instance variables: text a Text instance (a widget) diff --git a/lib-python/2.7/idlelib/idle.bat b/lib-python/2.7/idlelib/idle.bat --- a/lib-python/2.7/idlelib/idle.bat +++ b/lib-python/2.7/idlelib/idle.bat @@ -1,4 +1,4 @@ @echo off rem Start IDLE using the appropriate Python interpreter set CURRDIR=%~dp0 -start "%CURRDIR%..\..\pythonw.exe" "%CURRDIR%idle.pyw" %1 %2 %3 %4 %5 %6 %7 %8 %9 +start "IDLE" "%CURRDIR%..\..\pythonw.exe" "%CURRDIR%idle.pyw" %1 %2 %3 %4 %5 %6 %7 %8 %9 diff --git a/lib-python/2.7/idlelib/idlever.py b/lib-python/2.7/idlelib/idlever.py --- a/lib-python/2.7/idlelib/idlever.py +++ b/lib-python/2.7/idlelib/idlever.py @@ -1,1 +1,1 @@ -IDLE_VERSION = "2.7.1" +IDLE_VERSION = "2.7.2" diff --git a/lib-python/2.7/idlelib/macosxSupport.py b/lib-python/2.7/idlelib/macosxSupport.py --- a/lib-python/2.7/idlelib/macosxSupport.py +++ b/lib-python/2.7/idlelib/macosxSupport.py @@ -4,6 +4,7 @@ """ import sys import Tkinter +from os import path _appbundle = None @@ -19,10 +20,41 @@ _appbundle = (sys.platform == 'darwin' and '.app' in sys.executable) return _appbundle +_carbonaquatk = None + +def isCarbonAquaTk(root): + """ + Returns True if IDLE is using a Carbon Aqua Tk (instead of the + newer Cocoa Aqua Tk). + """ + global _carbonaquatk + if _carbonaquatk is None: + _carbonaquatk = (runningAsOSXApp() and + 'aqua' in root.tk.call('tk', 'windowingsystem') and + 'AppKit' not in root.tk.call('winfo', 'server', '.')) + return _carbonaquatk + +def tkVersionWarning(root): + """ + Returns a string warning message if the Tk version in use appears to + be one known to cause problems with IDLE. The Apple Cocoa-based Tk 8.5 + that was shipped with Mac OS X 10.6. + """ + + if (runningAsOSXApp() and + ('AppKit' in root.tk.call('winfo', 'server', '.')) and + (root.tk.call('info', 'patchlevel') == '8.5.7') ): + return (r"WARNING: The version of Tcl/Tk (8.5.7) in use may" + r" be unstable.\n" + r"Visit http://www.python.org/download/mac/tcltk/" + r" for current information.") + else: + return False + def addOpenEventSupport(root, flist): """ - This ensures that the application will respont to open AppleEvents, which - makes is feaseable to use IDLE as the default application for python files. + This ensures that the application will respond to open AppleEvents, which + makes is feasible to use IDLE as the default application for python files. 
""" def doOpenFile(*args): for fn in args: @@ -79,9 +111,6 @@ WindowList.add_windows_to_menu(menu) WindowList.register_callback(postwindowsmenu) - menudict['application'] = menu = Menu(menubar, name='apple') - menubar.add_cascade(label='IDLE', menu=menu) - def about_dialog(event=None): from idlelib import aboutDialog aboutDialog.AboutDialog(root, 'About IDLE') @@ -91,41 +120,45 @@ root.instance_dict = flist.inversedict configDialog.ConfigDialog(root, 'Settings') + def help_dialog(event=None): + from idlelib import textView + fn = path.join(path.abspath(path.dirname(__file__)), 'help.txt') + textView.view_file(root, 'Help', fn) root.bind('<>', about_dialog) root.bind('<>', config_dialog) + root.createcommand('::tk::mac::ShowPreferences', config_dialog) if flist: root.bind('<>', flist.close_all_callback) + # The binding above doesn't reliably work on all versions of Tk + # on MacOSX. Adding command definition below does seem to do the + # right thing for now. + root.createcommand('exit', flist.close_all_callback) - ###check if Tk version >= 8.4.14; if so, use hard-coded showprefs binding - tkversion = root.tk.eval('info patchlevel') - # Note: we cannot check if the string tkversion >= '8.4.14', because - # the string '8.4.7' is greater than the string '8.4.14'. - if tuple(map(int, tkversion.split('.'))) >= (8, 4, 14): - Bindings.menudefs[0] = ('application', [ + if isCarbonAquaTk(root): + # for Carbon AquaTk, replace the default Tk apple menu + menudict['application'] = menu = Menu(menubar, name='apple') + menubar.add_cascade(label='IDLE', menu=menu) + Bindings.menudefs.insert(0, + ('application', [ ('About IDLE', '<>'), - None, - ]) - root.createcommand('::tk::mac::ShowPreferences', config_dialog) + None, + ])) + tkversion = root.tk.eval('info patchlevel') + if tuple(map(int, tkversion.split('.'))) < (8, 4, 14): + # for earlier AquaTk versions, supply a Preferences menu item + Bindings.menudefs[0][1].append( + ('_Preferences....', '<>'), + ) else: - for mname, entrylist in Bindings.menudefs: - menu = menudict.get(mname) - if not menu: - continue - else: - for entry in entrylist: - if not entry: - menu.add_separator() - else: - label, eventname = entry - underline, label = prepstr(label) - accelerator = get_accelerator(Bindings.default_keydefs, - eventname) - def command(text=root, eventname=eventname): - text.event_generate(eventname) - menu.add_command(label=label, underline=underline, - command=command, accelerator=accelerator) + # assume Cocoa AquaTk + # replace default About dialog with About IDLE one + root.createcommand('tkAboutDialog', about_dialog) + # replace default "Help" item in Help menu + root.createcommand('::tk::mac::ShowHelp', help_dialog) + # remove redundant "IDLE Help" from menu + del Bindings.menudefs[-1][1][0] def setupApp(root, flist): """ diff --git a/lib-python/2.7/imaplib.py b/lib-python/2.7/imaplib.py --- a/lib-python/2.7/imaplib.py +++ b/lib-python/2.7/imaplib.py @@ -1158,28 +1158,17 @@ self.port = port self.sock = socket.create_connection((host, port)) self.sslobj = ssl.wrap_socket(self.sock, self.keyfile, self.certfile) + self.file = self.sslobj.makefile('rb') def read(self, size): """Read 'size' bytes from remote.""" - # sslobj.read() sometimes returns < size bytes - chunks = [] - read = 0 - while read < size: - data = self.sslobj.read(min(size-read, 16384)) - read += len(data) - chunks.append(data) - - return ''.join(chunks) + return self.file.read(size) def readline(self): """Read line from remote.""" - line = [] - while 1: - char = self.sslobj.read(1) - 
line.append(char) - if char in ("\n", ""): return ''.join(line) + return self.file.readline() def send(self, data): @@ -1195,6 +1184,7 @@ def shutdown(self): """Close I/O established in "open".""" + self.file.close() self.sock.close() @@ -1321,9 +1311,10 @@ 'Jul': 7, 'Aug': 8, 'Sep': 9, 'Oct': 10, 'Nov': 11, 'Dec': 12} def Internaldate2tuple(resp): - """Convert IMAP4 INTERNALDATE to UT. + """Parse an IMAP4 INTERNALDATE string. - Returns Python time module tuple. + Return corresponding local time. The return value is a + time.struct_time instance or None if the string has wrong format. """ mo = InternalDate.match(resp) @@ -1390,9 +1381,14 @@ def Time2Internaldate(date_time): - """Convert 'date_time' to IMAP4 INTERNALDATE representation. + """Convert date_time to IMAP4 INTERNALDATE representation. - Return string in form: '"DD-Mmm-YYYY HH:MM:SS +HHMM"' + Return string in form: '"DD-Mmm-YYYY HH:MM:SS +HHMM"'. The + date_time argument can be a number (int or float) representing + seconds since epoch (as returned by time.time()), a 9-tuple + representing local time (as returned by time.localtime()), or a + double-quoted string. In the last case, it is assumed to already + be in the correct format. """ if isinstance(date_time, (int, float)): diff --git a/lib-python/2.7/inspect.py b/lib-python/2.7/inspect.py --- a/lib-python/2.7/inspect.py +++ b/lib-python/2.7/inspect.py @@ -943,8 +943,14 @@ f_name, 'at most' if defaults else 'exactly', num_args, 'arguments' if num_args > 1 else 'argument', num_total)) elif num_args == 0 and num_total: - raise TypeError('%s() takes no arguments (%d given)' % - (f_name, num_total)) + if varkw: + if num_pos: + # XXX: We should use num_pos, but Python also uses num_total: + raise TypeError('%s() takes exactly 0 arguments ' + '(%d given)' % (f_name, num_total)) + else: + raise TypeError('%s() takes no arguments (%d given)' % + (f_name, num_total)) for arg in args: if isinstance(arg, str) and arg in named: if is_assigned(arg): diff --git a/lib-python/2.7/json/decoder.py b/lib-python/2.7/json/decoder.py --- a/lib-python/2.7/json/decoder.py +++ b/lib-python/2.7/json/decoder.py @@ -4,7 +4,7 @@ import sys import struct -from json.scanner import make_scanner +from json import scanner try: from _json import scanstring as c_scanstring except ImportError: @@ -161,6 +161,12 @@ nextchar = s[end:end + 1] # Trivial empty object if nextchar == '}': + if object_pairs_hook is not None: + result = object_pairs_hook(pairs) + return result, end + pairs = {} + if object_hook is not None: + pairs = object_hook(pairs) return pairs, end + 1 elif nextchar != '"': raise ValueError(errmsg("Expecting property name", s, end)) @@ -350,7 +356,7 @@ self.parse_object = JSONObject self.parse_array = JSONArray self.parse_string = scanstring - self.scan_once = make_scanner(self) + self.scan_once = scanner.make_scanner(self) def decode(self, s, _w=WHITESPACE.match): """Return the Python representation of ``s`` (a ``str`` or ``unicode`` diff --git a/lib-python/2.7/json/encoder.py b/lib-python/2.7/json/encoder.py --- a/lib-python/2.7/json/encoder.py +++ b/lib-python/2.7/json/encoder.py @@ -251,7 +251,7 @@ if (_one_shot and c_make_encoder is not None - and not self.indent and not self.sort_keys): + and self.indent is None and not self.sort_keys): _iterencode = c_make_encoder( markers, self.default, _encoder, self.indent, self.key_separator, self.item_separator, self.sort_keys, diff --git a/lib-python/2.7/json/tests/__init__.py b/lib-python/2.7/json/tests/__init__.py --- 
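The JSONObject change above makes the empty object honour object_pairs_hook and object_hook, which previously fired only for non-empty objects (compare test_empty_objects and test_object_pairs_hook further down). A quick illustration:

    import json
    from collections import OrderedDict

    print(json.loads('{}', object_pairs_hook=OrderedDict))        # OrderedDict()
    print(json.loads('{}', object_hook=lambda d: ('hooked', d)))  # ('hooked', {})
    print(repr(json.loads('""')))                                 # u'' (always unicode)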
a/lib-python/2.7/json/tests/__init__.py +++ b/lib-python/2.7/json/tests/__init__.py @@ -1,7 +1,46 @@ import os import sys +import json +import doctest import unittest -import doctest + +from test import test_support + +# import json with and without accelerations +cjson = test_support.import_fresh_module('json', fresh=['_json']) +pyjson = test_support.import_fresh_module('json', blocked=['_json']) + +# create two base classes that will be used by the other tests +class PyTest(unittest.TestCase): + json = pyjson + loads = staticmethod(pyjson.loads) + dumps = staticmethod(pyjson.dumps) + + at unittest.skipUnless(cjson, 'requires _json') +class CTest(unittest.TestCase): + if cjson is not None: + json = cjson + loads = staticmethod(cjson.loads) + dumps = staticmethod(cjson.dumps) + +# test PyTest and CTest checking if the functions come from the right module +class TestPyTest(PyTest): + def test_pyjson(self): + self.assertEqual(self.json.scanner.make_scanner.__module__, + 'json.scanner') + self.assertEqual(self.json.decoder.scanstring.__module__, + 'json.decoder') + self.assertEqual(self.json.encoder.encode_basestring_ascii.__module__, + 'json.encoder') + +class TestCTest(CTest): + def test_cjson(self): + self.assertEqual(self.json.scanner.make_scanner.__module__, '_json') + self.assertEqual(self.json.decoder.scanstring.__module__, '_json') + self.assertEqual(self.json.encoder.c_make_encoder.__module__, '_json') + self.assertEqual(self.json.encoder.encode_basestring_ascii.__module__, + '_json') + here = os.path.dirname(__file__) @@ -17,12 +56,11 @@ return suite def additional_tests(): - import json - import json.encoder - import json.decoder suite = unittest.TestSuite() for mod in (json, json.encoder, json.decoder): suite.addTest(doctest.DocTestSuite(mod)) + suite.addTest(TestPyTest('test_pyjson')) + suite.addTest(TestCTest('test_cjson')) return suite def main(): diff --git a/lib-python/2.7/json/tests/test_check_circular.py b/lib-python/2.7/json/tests/test_check_circular.py --- a/lib-python/2.7/json/tests/test_check_circular.py +++ b/lib-python/2.7/json/tests/test_check_circular.py @@ -1,30 +1,34 @@ -from unittest import TestCase -import json +from json.tests import PyTest, CTest + def default_iterable(obj): return list(obj) -class TestCheckCircular(TestCase): +class TestCheckCircular(object): def test_circular_dict(self): dct = {} dct['a'] = dct - self.assertRaises(ValueError, json.dumps, dct) + self.assertRaises(ValueError, self.dumps, dct) def test_circular_list(self): lst = [] lst.append(lst) - self.assertRaises(ValueError, json.dumps, lst) + self.assertRaises(ValueError, self.dumps, lst) def test_circular_composite(self): dct2 = {} dct2['a'] = [] dct2['a'].append(dct2) - self.assertRaises(ValueError, json.dumps, dct2) + self.assertRaises(ValueError, self.dumps, dct2) def test_circular_default(self): - json.dumps([set()], default=default_iterable) - self.assertRaises(TypeError, json.dumps, [set()]) + self.dumps([set()], default=default_iterable) + self.assertRaises(TypeError, self.dumps, [set()]) def test_circular_off_default(self): - json.dumps([set()], default=default_iterable, check_circular=False) - self.assertRaises(TypeError, json.dumps, [set()], check_circular=False) + self.dumps([set()], default=default_iterable, check_circular=False) + self.assertRaises(TypeError, self.dumps, [set()], check_circular=False) + + +class TestPyCheckCircular(TestCheckCircular, PyTest): pass +class TestCCheckCircular(TestCheckCircular, CTest): pass diff --git a/lib-python/2.7/json/tests/test_decode.py 
b/lib-python/2.7/json/tests/test_decode.py --- a/lib-python/2.7/json/tests/test_decode.py +++ b/lib-python/2.7/json/tests/test_decode.py @@ -1,18 +1,17 @@ import decimal -from unittest import TestCase from StringIO import StringIO +from collections import OrderedDict +from json.tests import PyTest, CTest -import json -from collections import OrderedDict -class TestDecode(TestCase): +class TestDecode(object): def test_decimal(self): - rval = json.loads('1.1', parse_float=decimal.Decimal) + rval = self.loads('1.1', parse_float=decimal.Decimal) self.assertTrue(isinstance(rval, decimal.Decimal)) self.assertEqual(rval, decimal.Decimal('1.1')) def test_float(self): - rval = json.loads('1', parse_int=float) + rval = self.loads('1', parse_int=float) self.assertTrue(isinstance(rval, float)) self.assertEqual(rval, 1.0) @@ -20,22 +19,32 @@ # Several optimizations were made that skip over calls to # the whitespace regex, so this test is designed to try and # exercise the uncommon cases. The array cases are already covered. - rval = json.loads('{ "key" : "value" , "k":"v" }') + rval = self.loads('{ "key" : "value" , "k":"v" }') self.assertEqual(rval, {"key":"value", "k":"v"}) + def test_empty_objects(self): + self.assertEqual(self.loads('{}'), {}) + self.assertEqual(self.loads('[]'), []) + self.assertEqual(self.loads('""'), u"") + self.assertIsInstance(self.loads('""'), unicode) + def test_object_pairs_hook(self): s = '{"xkd":1, "kcw":2, "art":3, "hxm":4, "qrt":5, "pad":6, "hoy":7}' p = [("xkd", 1), ("kcw", 2), ("art", 3), ("hxm", 4), ("qrt", 5), ("pad", 6), ("hoy", 7)] - self.assertEqual(json.loads(s), eval(s)) - self.assertEqual(json.loads(s, object_pairs_hook=lambda x: x), p) - self.assertEqual(json.load(StringIO(s), - object_pairs_hook=lambda x: x), p) - od = json.loads(s, object_pairs_hook=OrderedDict) + self.assertEqual(self.loads(s), eval(s)) + self.assertEqual(self.loads(s, object_pairs_hook=lambda x: x), p) + self.assertEqual(self.json.load(StringIO(s), + object_pairs_hook=lambda x: x), p) + od = self.loads(s, object_pairs_hook=OrderedDict) self.assertEqual(od, OrderedDict(p)) self.assertEqual(type(od), OrderedDict) # the object_pairs_hook takes priority over the object_hook - self.assertEqual(json.loads(s, + self.assertEqual(self.loads(s, object_pairs_hook=OrderedDict, object_hook=lambda x: None), OrderedDict(p)) + + +class TestPyDecode(TestDecode, PyTest): pass +class TestCDecode(TestDecode, CTest): pass diff --git a/lib-python/2.7/json/tests/test_default.py b/lib-python/2.7/json/tests/test_default.py --- a/lib-python/2.7/json/tests/test_default.py +++ b/lib-python/2.7/json/tests/test_default.py @@ -1,9 +1,12 @@ -from unittest import TestCase +from json.tests import PyTest, CTest -import json -class TestDefault(TestCase): +class TestDefault(object): def test_default(self): self.assertEqual( - json.dumps(type, default=repr), - json.dumps(repr(type))) + self.dumps(type, default=repr), + self.dumps(repr(type))) + + +class TestPyDefault(TestDefault, PyTest): pass +class TestCDefault(TestDefault, CTest): pass diff --git a/lib-python/2.7/json/tests/test_dump.py b/lib-python/2.7/json/tests/test_dump.py --- a/lib-python/2.7/json/tests/test_dump.py +++ b/lib-python/2.7/json/tests/test_dump.py @@ -1,21 +1,23 @@ -from unittest import TestCase from cStringIO import StringIO +from json.tests import PyTest, CTest -import json -class TestDump(TestCase): +class TestDump(object): def test_dump(self): sio = StringIO() - json.dump({}, sio) + self.json.dump({}, sio) self.assertEqual(sio.getvalue(), '{}') def 
test_dumps(self): - self.assertEqual(json.dumps({}), '{}') + self.assertEqual(self.dumps({}), '{}') def test_encode_truefalse(self): - self.assertEqual(json.dumps( + self.assertEqual(self.dumps( {True: False, False: True}, sort_keys=True), '{"false": true, "true": false}') - self.assertEqual(json.dumps( + self.assertEqual(self.dumps( {2: 3.0, 4.0: 5L, False: 1, 6L: True}, sort_keys=True), '{"false": 1, "2": 3.0, "4.0": 5, "6": true}') + +class TestPyDump(TestDump, PyTest): pass +class TestCDump(TestDump, CTest): pass diff --git a/lib-python/2.7/json/tests/test_encode_basestring_ascii.py b/lib-python/2.7/json/tests/test_encode_basestring_ascii.py --- a/lib-python/2.7/json/tests/test_encode_basestring_ascii.py +++ b/lib-python/2.7/json/tests/test_encode_basestring_ascii.py @@ -1,8 +1,6 @@ -from unittest import TestCase +from collections import OrderedDict +from json.tests import PyTest, CTest -import json.encoder -from json import dumps -from collections import OrderedDict CASES = [ (u'/\\"\ucafe\ubabe\uab98\ufcde\ubcda\uef4a\x08\x0c\n\r\t`1~!@#$%^&*()_+-=[]{}|;:\',./<>?', '"/\\\\\\"\\ucafe\\ubabe\\uab98\\ufcde\\ubcda\\uef4a\\b\\f\\n\\r\\t`1~!@#$%^&*()_+-=[]{}|;:\',./<>?"'), @@ -23,19 +21,11 @@ (u'\u0123\u4567\u89ab\ucdef\uabcd\uef4a', '"\\u0123\\u4567\\u89ab\\ucdef\\uabcd\\uef4a"'), ] -class TestEncodeBaseStringAscii(TestCase): - def test_py_encode_basestring_ascii(self): - self._test_encode_basestring_ascii(json.encoder.py_encode_basestring_ascii) - - def test_c_encode_basestring_ascii(self): - if not json.encoder.c_encode_basestring_ascii: - return - self._test_encode_basestring_ascii(json.encoder.c_encode_basestring_ascii) - - def _test_encode_basestring_ascii(self, encode_basestring_ascii): - fname = encode_basestring_ascii.__name__ +class TestEncodeBasestringAscii(object): + def test_encode_basestring_ascii(self): + fname = self.json.encoder.encode_basestring_ascii.__name__ for input_string, expect in CASES: - result = encode_basestring_ascii(input_string) + result = self.json.encoder.encode_basestring_ascii(input_string) self.assertEqual(result, expect, '{0!r} != {1!r} for {2}({3!r})'.format( result, expect, fname, input_string)) @@ -43,5 +33,9 @@ def test_ordered_dict(self): # See issue 6105 items = [('one', 1), ('two', 2), ('three', 3), ('four', 4), ('five', 5)] - s = json.dumps(OrderedDict(items)) + s = self.dumps(OrderedDict(items)) self.assertEqual(s, '{"one": 1, "two": 2, "three": 3, "four": 4, "five": 5}') + + +class TestPyEncodeBasestringAscii(TestEncodeBasestringAscii, PyTest): pass +class TestCEncodeBasestringAscii(TestEncodeBasestringAscii, CTest): pass diff --git a/lib-python/2.7/json/tests/test_fail.py b/lib-python/2.7/json/tests/test_fail.py --- a/lib-python/2.7/json/tests/test_fail.py +++ b/lib-python/2.7/json/tests/test_fail.py @@ -1,6 +1,4 @@ -from unittest import TestCase - -import json +from json.tests import PyTest, CTest # Fri Dec 30 18:57:26 2005 JSONDOCS = [ @@ -61,15 +59,15 @@ 18: "spec doesn't specify any nesting limitations", } -class TestFail(TestCase): +class TestFail(object): def test_failures(self): for idx, doc in enumerate(JSONDOCS): idx = idx + 1 if idx in SKIPS: - json.loads(doc) + self.loads(doc) continue try: - json.loads(doc) + self.loads(doc) except ValueError: pass else: @@ -79,7 +77,11 @@ data = {'a' : 1, (1, 2) : 2} #This is for c encoder - self.assertRaises(TypeError, json.dumps, data) + self.assertRaises(TypeError, self.dumps, data) #This is for python encoder - self.assertRaises(TypeError, json.dumps, data, indent=True) + 
self.assertRaises(TypeError, self.dumps, data, indent=True) + + +class TestPyFail(TestFail, PyTest): pass +class TestCFail(TestFail, CTest): pass diff --git a/lib-python/2.7/json/tests/test_float.py b/lib-python/2.7/json/tests/test_float.py --- a/lib-python/2.7/json/tests/test_float.py +++ b/lib-python/2.7/json/tests/test_float.py @@ -1,19 +1,22 @@ import math -from unittest import TestCase +from json.tests import PyTest, CTest -import json -class TestFloat(TestCase): +class TestFloat(object): def test_floats(self): for num in [1617161771.7650001, math.pi, math.pi**100, math.pi**-100, 3.1]: - self.assertEqual(float(json.dumps(num)), num) - self.assertEqual(json.loads(json.dumps(num)), num) - self.assertEqual(json.loads(unicode(json.dumps(num))), num) + self.assertEqual(float(self.dumps(num)), num) + self.assertEqual(self.loads(self.dumps(num)), num) + self.assertEqual(self.loads(unicode(self.dumps(num))), num) def test_ints(self): for num in [1, 1L, 1<<32, 1<<64]: - self.assertEqual(json.dumps(num), str(num)) - self.assertEqual(int(json.dumps(num)), num) - self.assertEqual(json.loads(json.dumps(num)), num) - self.assertEqual(json.loads(unicode(json.dumps(num))), num) + self.assertEqual(self.dumps(num), str(num)) + self.assertEqual(int(self.dumps(num)), num) + self.assertEqual(self.loads(self.dumps(num)), num) + self.assertEqual(self.loads(unicode(self.dumps(num))), num) + + +class TestPyFloat(TestFloat, PyTest): pass +class TestCFloat(TestFloat, CTest): pass diff --git a/lib-python/2.7/json/tests/test_indent.py b/lib-python/2.7/json/tests/test_indent.py --- a/lib-python/2.7/json/tests/test_indent.py +++ b/lib-python/2.7/json/tests/test_indent.py @@ -1,9 +1,9 @@ -from unittest import TestCase +import textwrap +from StringIO import StringIO +from json.tests import PyTest, CTest -import json -import textwrap -class TestIndent(TestCase): +class TestIndent(object): def test_indent(self): h = [['blorpie'], ['whoops'], [], 'd-shtaeou', 'd-nthiouh', 'i-vhbjkhnth', {'nifty': 87}, {'field': 'yes', 'morefield': False} ] @@ -30,12 +30,31 @@ ]""") - d1 = json.dumps(h) - d2 = json.dumps(h, indent=2, sort_keys=True, separators=(',', ': ')) + d1 = self.dumps(h) + d2 = self.dumps(h, indent=2, sort_keys=True, separators=(',', ': ')) - h1 = json.loads(d1) - h2 = json.loads(d2) + h1 = self.loads(d1) + h2 = self.loads(d2) self.assertEqual(h1, h) self.assertEqual(h2, h) self.assertEqual(d2, expect) + + def test_indent0(self): + h = {3: 1} + def check(indent, expected): + d1 = self.dumps(h, indent=indent) + self.assertEqual(d1, expected) + + sio = StringIO() + self.json.dump(h, sio, indent=indent) + self.assertEqual(sio.getvalue(), expected) + + # indent=0 should emit newlines + check(0, '{\n"3": 1\n}') + # indent=None is more compact + check(None, '{"3": 1}') + + +class TestPyIndent(TestIndent, PyTest): pass +class TestCIndent(TestIndent, CTest): pass diff --git a/lib-python/2.7/json/tests/test_pass1.py b/lib-python/2.7/json/tests/test_pass1.py --- a/lib-python/2.7/json/tests/test_pass1.py +++ b/lib-python/2.7/json/tests/test_pass1.py @@ -1,6 +1,5 @@ -from unittest import TestCase +from json.tests import PyTest, CTest -import json # from http://json.org/JSON_checker/test/pass1.json JSON = r''' @@ -62,15 +61,19 @@ ,"rosebud"] ''' -class TestPass1(TestCase): +class TestPass1(object): def test_parse(self): # test in/out equivalence and parsing - res = json.loads(JSON) - out = json.dumps(res) - self.assertEqual(res, json.loads(out)) + res = self.loads(JSON) + out = self.dumps(res) + self.assertEqual(res, 
self.loads(out)) try: - json.dumps(res, allow_nan=False) + self.dumps(res, allow_nan=False) except ValueError: pass else: self.fail("23456789012E666 should be out of range") + + +class TestPyPass1(TestPass1, PyTest): pass +class TestCPass1(TestPass1, CTest): pass diff --git a/lib-python/2.7/json/tests/test_pass2.py b/lib-python/2.7/json/tests/test_pass2.py --- a/lib-python/2.7/json/tests/test_pass2.py +++ b/lib-python/2.7/json/tests/test_pass2.py @@ -1,14 +1,18 @@ -from unittest import TestCase -import json +from json.tests import PyTest, CTest + # from http://json.org/JSON_checker/test/pass2.json JSON = r''' [[[[[[[[[[[[[[[[[[["Not too deep"]]]]]]]]]]]]]]]]]]] ''' -class TestPass2(TestCase): +class TestPass2(object): def test_parse(self): # test in/out equivalence and parsing - res = json.loads(JSON) - out = json.dumps(res) - self.assertEqual(res, json.loads(out)) + res = self.loads(JSON) + out = self.dumps(res) + self.assertEqual(res, self.loads(out)) + + +class TestPyPass2(TestPass2, PyTest): pass +class TestCPass2(TestPass2, CTest): pass diff --git a/lib-python/2.7/json/tests/test_pass3.py b/lib-python/2.7/json/tests/test_pass3.py --- a/lib-python/2.7/json/tests/test_pass3.py +++ b/lib-python/2.7/json/tests/test_pass3.py @@ -1,6 +1,5 @@ -from unittest import TestCase +from json.tests import PyTest, CTest -import json # from http://json.org/JSON_checker/test/pass3.json JSON = r''' @@ -12,9 +11,14 @@ } ''' -class TestPass3(TestCase): + +class TestPass3(object): def test_parse(self): # test in/out equivalence and parsing - res = json.loads(JSON) - out = json.dumps(res) - self.assertEqual(res, json.loads(out)) + res = self.loads(JSON) + out = self.dumps(res) + self.assertEqual(res, self.loads(out)) + + +class TestPyPass3(TestPass3, PyTest): pass +class TestCPass3(TestPass3, CTest): pass diff --git a/lib-python/2.7/json/tests/test_recursion.py b/lib-python/2.7/json/tests/test_recursion.py --- a/lib-python/2.7/json/tests/test_recursion.py +++ b/lib-python/2.7/json/tests/test_recursion.py @@ -1,28 +1,16 @@ -from unittest import TestCase +from json.tests import PyTest, CTest -import json class JSONTestObject: pass -class RecursiveJSONEncoder(json.JSONEncoder): - recurse = False - def default(self, o): - if o is JSONTestObject: - if self.recurse: - return [JSONTestObject] - else: - return 'JSONTestObject' - return json.JSONEncoder.default(o) - - -class TestRecursion(TestCase): +class TestRecursion(object): def test_listrecursion(self): x = [] x.append(x) try: - json.dumps(x) + self.dumps(x) except ValueError: pass else: @@ -31,7 +19,7 @@ y = [x] x.append(y) try: - json.dumps(x) + self.dumps(x) except ValueError: pass else: @@ -39,13 +27,13 @@ y = [] x = [y, y] # ensure that the marker is cleared - json.dumps(x) + self.dumps(x) def test_dictrecursion(self): x = {} x["test"] = x try: - json.dumps(x) + self.dumps(x) except ValueError: pass else: @@ -53,9 +41,19 @@ x = {} y = {"a": x, "b": x} # ensure that the marker is cleared - json.dumps(x) + self.dumps(x) def test_defaultrecursion(self): + class RecursiveJSONEncoder(self.json.JSONEncoder): + recurse = False + def default(self, o): + if o is JSONTestObject: + if self.recurse: + return [JSONTestObject] + else: + return 'JSONTestObject' + return pyjson.JSONEncoder.default(o) + enc = RecursiveJSONEncoder() self.assertEqual(enc.encode(JSONTestObject), '"JSONTestObject"') enc.recurse = True @@ -65,3 +63,46 @@ pass else: self.fail("didn't raise ValueError on default recursion") + + + def test_highly_nested_objects_decoding(self): + # test that loading 
highly-nested objects doesn't segfault when C + # accelerations are used. See #12017 + # str + with self.assertRaises(RuntimeError): + self.loads('{"a":' * 100000 + '1' + '}' * 100000) + with self.assertRaises(RuntimeError): + self.loads('{"a":' * 100000 + '[1]' + '}' * 100000) + with self.assertRaises(RuntimeError): + self.loads('[' * 100000 + '1' + ']' * 100000) + # unicode + with self.assertRaises(RuntimeError): + self.loads(u'{"a":' * 100000 + u'1' + u'}' * 100000) + with self.assertRaises(RuntimeError): + self.loads(u'{"a":' * 100000 + u'[1]' + u'}' * 100000) + with self.assertRaises(RuntimeError): + self.loads(u'[' * 100000 + u'1' + u']' * 100000) + + def test_highly_nested_objects_encoding(self): + # See #12051 + l, d = [], {} + for x in xrange(100000): + l, d = [l], {'k':d} + with self.assertRaises(RuntimeError): + self.dumps(l) + with self.assertRaises(RuntimeError): + self.dumps(d) + + def test_endless_recursion(self): + # See #12051 + class EndlessJSONEncoder(self.json.JSONEncoder): + def default(self, o): + """If check_circular is False, this will keep adding another list.""" + return [o] + + with self.assertRaises(RuntimeError): + EndlessJSONEncoder(check_circular=False).encode(5j) + + +class TestPyRecursion(TestRecursion, PyTest): pass +class TestCRecursion(TestRecursion, CTest): pass diff --git a/lib-python/2.7/json/tests/test_scanstring.py b/lib-python/2.7/json/tests/test_scanstring.py --- a/lib-python/2.7/json/tests/test_scanstring.py +++ b/lib-python/2.7/json/tests/test_scanstring.py @@ -1,18 +1,10 @@ import sys -import decimal -from unittest import TestCase +from json.tests import PyTest, CTest -import json -import json.decoder -class TestScanString(TestCase): - def test_py_scanstring(self): - self._test_scanstring(json.decoder.py_scanstring) - - def test_c_scanstring(self): - self._test_scanstring(json.decoder.c_scanstring) - - def _test_scanstring(self, scanstring): +class TestScanstring(object): + def test_scanstring(self): + scanstring = self.json.decoder.scanstring self.assertEqual( scanstring('"z\\ud834\\udd20x"', 1, None, True), (u'z\U0001d120x', 16)) @@ -103,10 +95,15 @@ (u'Bad value', 12)) def test_issue3623(self): - self.assertRaises(ValueError, json.decoder.scanstring, b"xxx", 1, + self.assertRaises(ValueError, self.json.decoder.scanstring, b"xxx", 1, "xxx") self.assertRaises(UnicodeDecodeError, - json.encoder.encode_basestring_ascii, b"xx\xff") + self.json.encoder.encode_basestring_ascii, b"xx\xff") def test_overflow(self): - self.assertRaises(OverflowError, json.decoder.scanstring, b"xxx", sys.maxsize+1) + with self.assertRaises(OverflowError): + self.json.decoder.scanstring(b"xxx", sys.maxsize+1) + + +class TestPyScanstring(TestScanstring, PyTest): pass +class TestCScanstring(TestScanstring, CTest): pass diff --git a/lib-python/2.7/json/tests/test_separators.py b/lib-python/2.7/json/tests/test_separators.py --- a/lib-python/2.7/json/tests/test_separators.py +++ b/lib-python/2.7/json/tests/test_separators.py @@ -1,10 +1,8 @@ import textwrap -from unittest import TestCase +from json.tests import PyTest, CTest -import json - -class TestSeparators(TestCase): +class TestSeparators(object): def test_separators(self): h = [['blorpie'], ['whoops'], [], 'd-shtaeou', 'd-nthiouh', 'i-vhbjkhnth', {'nifty': 87}, {'field': 'yes', 'morefield': False} ] @@ -31,12 +29,16 @@ ]""") - d1 = json.dumps(h) - d2 = json.dumps(h, indent=2, sort_keys=True, separators=(' ,', ' : ')) + d1 = self.dumps(h) + d2 = self.dumps(h, indent=2, sort_keys=True, separators=(' ,', ' : ')) - h1 = 
json.loads(d1) - h2 = json.loads(d2) + h1 = self.loads(d1) + h2 = self.loads(d2) self.assertEqual(h1, h) self.assertEqual(h2, h) self.assertEqual(d2, expect) + + +class TestPySeparators(TestSeparators, PyTest): pass +class TestCSeparators(TestSeparators, CTest): pass diff --git a/lib-python/2.7/json/tests/test_speedups.py b/lib-python/2.7/json/tests/test_speedups.py --- a/lib-python/2.7/json/tests/test_speedups.py +++ b/lib-python/2.7/json/tests/test_speedups.py @@ -1,24 +1,23 @@ -import decimal -from unittest import TestCase +from json.tests import CTest -from json import decoder, encoder, scanner -class TestSpeedups(TestCase): +class TestSpeedups(CTest): def test_scanstring(self): - self.assertEqual(decoder.scanstring.__module__, "_json") - self.assertTrue(decoder.scanstring is decoder.c_scanstring) + self.assertEqual(self.json.decoder.scanstring.__module__, "_json") + self.assertIs(self.json.decoder.scanstring, self.json.decoder.c_scanstring) def test_encode_basestring_ascii(self): - self.assertEqual(encoder.encode_basestring_ascii.__module__, "_json") - self.assertTrue(encoder.encode_basestring_ascii is - encoder.c_encode_basestring_ascii) + self.assertEqual(self.json.encoder.encode_basestring_ascii.__module__, + "_json") + self.assertIs(self.json.encoder.encode_basestring_ascii, + self.json.encoder.c_encode_basestring_ascii) -class TestDecode(TestCase): +class TestDecode(CTest): def test_make_scanner(self): - self.assertRaises(AttributeError, scanner.c_make_scanner, 1) + self.assertRaises(AttributeError, self.json.scanner.c_make_scanner, 1) def test_make_encoder(self): - self.assertRaises(TypeError, encoder.c_make_encoder, + self.assertRaises(TypeError, self.json.encoder.c_make_encoder, None, "\xCD\x7D\x3D\x4E\x12\x4C\xF9\x79\xD7\x52\xBA\x82\xF2\x27\x4A\x7D\xA0\xCA\x75", None) diff --git a/lib-python/2.7/json/tests/test_unicode.py b/lib-python/2.7/json/tests/test_unicode.py --- a/lib-python/2.7/json/tests/test_unicode.py +++ b/lib-python/2.7/json/tests/test_unicode.py @@ -1,11 +1,10 @@ -from unittest import TestCase +from collections import OrderedDict +from json.tests import PyTest, CTest -import json -from collections import OrderedDict -class TestUnicode(TestCase): +class TestUnicode(object): def test_encoding1(self): - encoder = json.JSONEncoder(encoding='utf-8') + encoder = self.json.JSONEncoder(encoding='utf-8') u = u'\N{GREEK SMALL LETTER ALPHA}\N{GREEK CAPITAL LETTER OMEGA}' s = u.encode('utf-8') ju = encoder.encode(u) @@ -15,68 +14,72 @@ def test_encoding2(self): u = u'\N{GREEK SMALL LETTER ALPHA}\N{GREEK CAPITAL LETTER OMEGA}' s = u.encode('utf-8') - ju = json.dumps(u, encoding='utf-8') - js = json.dumps(s, encoding='utf-8') + ju = self.dumps(u, encoding='utf-8') + js = self.dumps(s, encoding='utf-8') self.assertEqual(ju, js) def test_encoding3(self): u = u'\N{GREEK SMALL LETTER ALPHA}\N{GREEK CAPITAL LETTER OMEGA}' - j = json.dumps(u) + j = self.dumps(u) self.assertEqual(j, '"\\u03b1\\u03a9"') def test_encoding4(self): u = u'\N{GREEK SMALL LETTER ALPHA}\N{GREEK CAPITAL LETTER OMEGA}' - j = json.dumps([u]) + j = self.dumps([u]) self.assertEqual(j, '["\\u03b1\\u03a9"]') def test_encoding5(self): u = u'\N{GREEK SMALL LETTER ALPHA}\N{GREEK CAPITAL LETTER OMEGA}' - j = json.dumps(u, ensure_ascii=False) + j = self.dumps(u, ensure_ascii=False) self.assertEqual(j, u'"{0}"'.format(u)) def test_encoding6(self): u = u'\N{GREEK SMALL LETTER ALPHA}\N{GREEK CAPITAL LETTER OMEGA}' - j = json.dumps([u], ensure_ascii=False) + j = self.dumps([u], ensure_ascii=False) self.assertEqual(j, 
u'["{0}"]'.format(u)) def test_big_unicode_encode(self): u = u'\U0001d120' - self.assertEqual(json.dumps(u), '"\\ud834\\udd20"') - self.assertEqual(json.dumps(u, ensure_ascii=False), u'"\U0001d120"') + self.assertEqual(self.dumps(u), '"\\ud834\\udd20"') + self.assertEqual(self.dumps(u, ensure_ascii=False), u'"\U0001d120"') def test_big_unicode_decode(self): u = u'z\U0001d120x' - self.assertEqual(json.loads('"' + u + '"'), u) - self.assertEqual(json.loads('"z\\ud834\\udd20x"'), u) + self.assertEqual(self.loads('"' + u + '"'), u) + self.assertEqual(self.loads('"z\\ud834\\udd20x"'), u) def test_unicode_decode(self): for i in range(0, 0xd7ff): u = unichr(i) s = '"\\u{0:04x}"'.format(i) - self.assertEqual(json.loads(s), u) + self.assertEqual(self.loads(s), u) def test_object_pairs_hook_with_unicode(self): s = u'{"xkd":1, "kcw":2, "art":3, "hxm":4, "qrt":5, "pad":6, "hoy":7}' p = [(u"xkd", 1), (u"kcw", 2), (u"art", 3), (u"hxm", 4), (u"qrt", 5), (u"pad", 6), (u"hoy", 7)] - self.assertEqual(json.loads(s), eval(s)) - self.assertEqual(json.loads(s, object_pairs_hook = lambda x: x), p) - od = json.loads(s, object_pairs_hook = OrderedDict) + self.assertEqual(self.loads(s), eval(s)) + self.assertEqual(self.loads(s, object_pairs_hook = lambda x: x), p) + od = self.loads(s, object_pairs_hook = OrderedDict) self.assertEqual(od, OrderedDict(p)) self.assertEqual(type(od), OrderedDict) # the object_pairs_hook takes priority over the object_hook - self.assertEqual(json.loads(s, + self.assertEqual(self.loads(s, object_pairs_hook = OrderedDict, object_hook = lambda x: None), OrderedDict(p)) def test_default_encoding(self): - self.assertEqual(json.loads(u'{"a": "\xe9"}'.encode('utf-8')), + self.assertEqual(self.loads(u'{"a": "\xe9"}'.encode('utf-8')), {'a': u'\xe9'}) def test_unicode_preservation(self): - self.assertEqual(type(json.loads(u'""')), unicode) - self.assertEqual(type(json.loads(u'"a"')), unicode) - self.assertEqual(type(json.loads(u'["a"]')[0]), unicode) + self.assertEqual(type(self.loads(u'""')), unicode) + self.assertEqual(type(self.loads(u'"a"')), unicode) + self.assertEqual(type(self.loads(u'["a"]')[0]), unicode) # Issue 10038. - self.assertEqual(type(json.loads('"foo"')), unicode) + self.assertEqual(type(self.loads('"foo"')), unicode) + + +class TestPyUnicode(TestUnicode, PyTest): pass +class TestCUnicode(TestUnicode, CTest): pass diff --git a/lib-python/2.7/lib-tk/Tix.py b/lib-python/2.7/lib-tk/Tix.py --- a/lib-python/2.7/lib-tk/Tix.py +++ b/lib-python/2.7/lib-tk/Tix.py @@ -163,7 +163,7 @@ extensions) exist, then the image type is chosen according to the depth of the X display: xbm images are chosen on monochrome displays and color images are chosen on color displays. By using - tix_ getimage, you can advoid hard coding the pathnames of the + tix_ getimage, you can avoid hard coding the pathnames of the image files in your application. When successful, this command returns the name of the newly created image, which can be used to configure the -image option of the Tk and Tix widgets. @@ -171,7 +171,7 @@ return self.tk.call('tix', 'getimage', name) def tix_option_get(self, name): - """Gets the options manitained by the Tix + """Gets the options maintained by the Tix scheme mechanism. Available options include: active_bg active_fg bg @@ -576,7 +576,7 @@ class ComboBox(TixWidget): """ComboBox - an Entry field with a dropdown menu. 
The user can select a - choice by either typing in the entry subwdget or selecting from the + choice by either typing in the entry subwidget or selecting from the listbox subwidget. Subwidget Class @@ -869,7 +869,7 @@ """HList - Hierarchy display widget can be used to display any data that have a hierarchical structure, for example, file system directory trees. The list entries are indented and connected by branch lines - according to their places in the hierachy. + according to their places in the hierarchy. Subwidgets - None""" @@ -1520,7 +1520,7 @@ self.tk.call(self._w, 'selection', 'set', first, last) class Tree(TixWidget): - """Tree - The tixTree widget can be used to display hierachical + """Tree - The tixTree widget can be used to display hierarchical data in a tree form. The user can adjust the view of the tree by opening or closing parts of the tree.""" diff --git a/lib-python/2.7/lib-tk/Tkinter.py b/lib-python/2.7/lib-tk/Tkinter.py --- a/lib-python/2.7/lib-tk/Tkinter.py +++ b/lib-python/2.7/lib-tk/Tkinter.py @@ -1660,7 +1660,7 @@ class Tk(Misc, Wm): """Toplevel widget of Tk which represents mostly the main window - of an appliation. It has an associated Tcl interpreter.""" + of an application. It has an associated Tcl interpreter.""" _w = '.' def __init__(self, screenName=None, baseName=None, className='Tk', useTk=1, sync=0, use=None): diff --git a/lib-python/2.7/lib-tk/test/test_ttk/test_functions.py b/lib-python/2.7/lib-tk/test/test_ttk/test_functions.py --- a/lib-python/2.7/lib-tk/test/test_ttk/test_functions.py +++ b/lib-python/2.7/lib-tk/test/test_ttk/test_functions.py @@ -136,7 +136,7 @@ # minimum acceptable for image type self.assertEqual(ttk._format_elemcreate('image', False, 'test'), ("test ", ())) - # specifiyng a state spec + # specifying a state spec self.assertEqual(ttk._format_elemcreate('image', False, 'test', ('', 'a')), ("test {} a", ())) # state spec with multiple states diff --git a/lib-python/2.7/lib-tk/ttk.py b/lib-python/2.7/lib-tk/ttk.py --- a/lib-python/2.7/lib-tk/ttk.py +++ b/lib-python/2.7/lib-tk/ttk.py @@ -707,7 +707,7 @@ textvariable, values, width """ # The "values" option may need special formatting, so leave to - # _format_optdict the responsability to format it + # _format_optdict the responsibility to format it if "values" in kw: kw["values"] = _format_optdict({'v': kw["values"]})[1] @@ -993,7 +993,7 @@ pane is either an integer index or the name of a managed subwindow. If kw is not given, returns a dict of the pane option values. If option is specified then the value for that option is returned. - Otherwise, sets the options to the correspoding values.""" + Otherwise, sets the options to the corresponding values.""" if option is not None: kw[option] = None return _val_or_dict(kw, self.tk.call, self._w, "pane", pane) diff --git a/lib-python/2.7/lib-tk/turtle.py b/lib-python/2.7/lib-tk/turtle.py --- a/lib-python/2.7/lib-tk/turtle.py +++ b/lib-python/2.7/lib-tk/turtle.py @@ -1385,7 +1385,7 @@ Optional argument: picname -- a string, name of a gif-file or "nopic". - If picname is a filename, set the corresponing image as background. + If picname is a filename, set the corresponding image as background. If picname is "nopic", delete backgroundimage, if present. If picname is None, return the filename of the current backgroundimage. 
@@ -1409,7 +1409,7 @@ Optional arguments: canvwidth -- positive integer, new width of canvas in pixels canvheight -- positive integer, new height of canvas in pixels - bg -- colorstring or color-tupel, new backgroundcolor + bg -- colorstring or color-tuple, new backgroundcolor If no arguments are given, return current (canvaswidth, canvasheight) Do not alter the drawing window. To observe hidden parts of @@ -3079,9 +3079,9 @@ fill="", width=ps) # Turtle now at position old, self._position = old - ## if undo is done during crating a polygon, the last vertex - ## will be deleted. if the polygon is entirel deleted, - ## creatigPoly will be set to False. + ## if undo is done during creating a polygon, the last vertex + ## will be deleted. if the polygon is entirely deleted, + ## creatingPoly will be set to False. ## Polygons created before the last one will not be affected by undo() if self._creatingPoly: if len(self._poly) > 0: @@ -3221,7 +3221,7 @@ def dot(self, size=None, *color): """Draw a dot with diameter size, using color. - Optional argumentS: + Optional arguments: size -- an integer >= 1 (if given) color -- a colorstring or a numeric color tuple @@ -3691,7 +3691,7 @@ class Turtle(RawTurtle): - """RawTurtle auto-crating (scrolled) canvas. + """RawTurtle auto-creating (scrolled) canvas. When a Turtle object is created or a function derived from some Turtle method is called a TurtleScreen object is automatically created. @@ -3731,7 +3731,7 @@ filename -- a string, used as filename default value is turtle_docstringdict - Has to be called explicitely, (not used by the turtle-graphics classes) + Has to be called explicitly, (not used by the turtle-graphics classes) The docstring dictionary will be written to the Python script .py It is intended to serve as a template for translation of the docstrings into different languages. 
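(Not part of the patch above.) The turtle.py hunk just quoted corrects the docstring of write_docstringdict(), the helper that dumps turtle's docstring dictionary as a translation template. A minimal sketch of how that helper is typically invoked, assuming a standard CPython 2.7 installation where the Tk-based turtle module imports cleanly; the filename is the documented default, nothing introduced here:

    # Write turtle_docstringdict.py in the current directory: a dict mapping
    # "Class.method" names to their docstrings, intended as a template for
    # translating the docstrings into other languages (per the docstring above).
    import turtle
    turtle.write_docstringdict("turtle_docstringdict")
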
diff --git a/lib-python/2.7/lib2to3/__main__.py b/lib-python/2.7/lib2to3/__main__.py new file mode 100644 --- /dev/null +++ b/lib-python/2.7/lib2to3/__main__.py @@ -0,0 +1,4 @@ +import sys +from .main import main + +sys.exit(main("lib2to3.fixes")) diff --git a/lib-python/2.7/lib2to3/fixes/fix_itertools.py b/lib-python/2.7/lib2to3/fixes/fix_itertools.py --- a/lib-python/2.7/lib2to3/fixes/fix_itertools.py +++ b/lib-python/2.7/lib2to3/fixes/fix_itertools.py @@ -13,7 +13,7 @@ class FixItertools(fixer_base.BaseFix): BM_compatible = True - it_funcs = "('imap'|'ifilter'|'izip'|'ifilterfalse')" + it_funcs = "('imap'|'ifilter'|'izip'|'izip_longest'|'ifilterfalse')" PATTERN = """ power< it='itertools' trailer< @@ -28,7 +28,8 @@ def transform(self, node, results): prefix = None func = results['func'][0] - if 'it' in results and func.value != u'ifilterfalse': + if ('it' in results and + func.value not in (u'ifilterfalse', u'izip_longest')): dot, it = (results['dot'], results['it']) # Remove the 'itertools' prefix = it.prefix diff --git a/lib-python/2.7/lib2to3/fixes/fix_itertools_imports.py b/lib-python/2.7/lib2to3/fixes/fix_itertools_imports.py --- a/lib-python/2.7/lib2to3/fixes/fix_itertools_imports.py +++ b/lib-python/2.7/lib2to3/fixes/fix_itertools_imports.py @@ -31,9 +31,10 @@ if member_name in (u'imap', u'izip', u'ifilter'): child.value = None child.remove() - elif member_name == u'ifilterfalse': + elif member_name in (u'ifilterfalse', u'izip_longest'): node.changed() - name_node.value = u'filterfalse' + name_node.value = (u'filterfalse' if member_name[1] == u'f' + else u'zip_longest') # Make sure the import statement is still sane children = imports.children[:] or [imports] diff --git a/lib-python/2.7/lib2to3/fixes/fix_metaclass.py b/lib-python/2.7/lib2to3/fixes/fix_metaclass.py --- a/lib-python/2.7/lib2to3/fixes/fix_metaclass.py +++ b/lib-python/2.7/lib2to3/fixes/fix_metaclass.py @@ -48,7 +48,7 @@ """ for node in cls_node.children: if node.type == syms.suite: - # already in the prefered format, do nothing + # already in the preferred format, do nothing return # !%@#! 
oneliners have no suite node, we have to fake one up diff --git a/lib-python/2.7/lib2to3/fixes/fix_urllib.py b/lib-python/2.7/lib2to3/fixes/fix_urllib.py --- a/lib-python/2.7/lib2to3/fixes/fix_urllib.py +++ b/lib-python/2.7/lib2to3/fixes/fix_urllib.py @@ -12,7 +12,7 @@ MAPPING = {"urllib": [ ("urllib.request", - ["URLOpener", "FancyURLOpener", "urlretrieve", + ["URLopener", "FancyURLopener", "urlretrieve", "_urlopener", "urlopen", "urlcleanup", "pathname2url", "url2pathname"]), ("urllib.parse", diff --git a/lib-python/2.7/lib2to3/main.py b/lib-python/2.7/lib2to3/main.py --- a/lib-python/2.7/lib2to3/main.py +++ b/lib-python/2.7/lib2to3/main.py @@ -101,7 +101,7 @@ parser.add_option("-j", "--processes", action="store", default=1, type="int", help="Run 2to3 concurrently") parser.add_option("-x", "--nofix", action="append", default=[], - help="Prevent a fixer from being run.") + help="Prevent a transformation from being run") parser.add_option("-l", "--list-fixes", action="store_true", help="List available transformations") parser.add_option("-p", "--print-function", action="store_true", @@ -113,7 +113,7 @@ parser.add_option("-w", "--write", action="store_true", help="Write back modified files") parser.add_option("-n", "--nobackups", action="store_true", default=False, - help="Don't write backups for modified files.") + help="Don't write backups for modified files") # Parse command line arguments refactor_stdin = False diff --git a/lib-python/2.7/lib2to3/patcomp.py b/lib-python/2.7/lib2to3/patcomp.py --- a/lib-python/2.7/lib2to3/patcomp.py +++ b/lib-python/2.7/lib2to3/patcomp.py @@ -12,6 +12,7 @@ # Python imports import os +import StringIO # Fairly local imports from .pgen2 import driver, literals, token, tokenize, parse, grammar @@ -32,7 +33,7 @@ def tokenize_wrapper(input): """Tokenizes a string suppressing significant whitespace.""" skip = set((token.NEWLINE, token.INDENT, token.DEDENT)) - tokens = tokenize.generate_tokens(driver.generate_lines(input).next) + tokens = tokenize.generate_tokens(StringIO.StringIO(input).readline) for quintuple in tokens: type, value, start, end, line_text = quintuple if type not in skip: diff --git a/lib-python/2.7/lib2to3/pgen2/conv.py b/lib-python/2.7/lib2to3/pgen2/conv.py --- a/lib-python/2.7/lib2to3/pgen2/conv.py +++ b/lib-python/2.7/lib2to3/pgen2/conv.py @@ -51,7 +51,7 @@ self.finish_off() def parse_graminit_h(self, filename): - """Parse the .h file writen by pgen. (Internal) + """Parse the .h file written by pgen. (Internal) This file is a sequence of #define statements defining the nonterminals of the grammar as numbers. We build two tables @@ -82,7 +82,7 @@ return True def parse_graminit_c(self, filename): - """Parse the .c file writen by pgen. (Internal) + """Parse the .c file written by pgen. (Internal) The file looks as follows. 
The first two lines are always this: diff --git a/lib-python/2.7/lib2to3/pgen2/driver.py b/lib-python/2.7/lib2to3/pgen2/driver.py --- a/lib-python/2.7/lib2to3/pgen2/driver.py +++ b/lib-python/2.7/lib2to3/pgen2/driver.py @@ -19,6 +19,7 @@ import codecs import os import logging +import StringIO import sys # Pgen imports @@ -101,18 +102,10 @@ def parse_string(self, text, debug=False): """Parse a string and return the syntax tree.""" - tokens = tokenize.generate_tokens(generate_lines(text).next) + tokens = tokenize.generate_tokens(StringIO.StringIO(text).readline) return self.parse_tokens(tokens, debug) -def generate_lines(text): - """Generator that behaves like readline without using StringIO.""" - for line in text.splitlines(True): - yield line - while True: - yield "" - - def load_grammar(gt="Grammar.txt", gp=None, save=True, force=False, logger=None): """Load the grammar (maybe from a pickle).""" diff --git a/lib-python/2.7/lib2to3/pytree.py b/lib-python/2.7/lib2to3/pytree.py --- a/lib-python/2.7/lib2to3/pytree.py +++ b/lib-python/2.7/lib2to3/pytree.py @@ -658,8 +658,8 @@ content: optional sequence of subsequences of patterns; if absent, matches one node; if present, each subsequence is an alternative [*] - min: optinal minumum number of times to match, default 0 - max: optional maximum number of times tro match, default HUGE + min: optional minimum number of times to match, default 0 + max: optional maximum number of times to match, default HUGE name: optional name assigned to this match [*] Thus, if content is [[a, b, c], [d, e], [f, g, h]] this is @@ -743,9 +743,11 @@ else: # The reason for this is that hitting the recursion limit usually # results in some ugly messages about how RuntimeErrors are being - # ignored. - save_stderr = sys.stderr - sys.stderr = StringIO() + # ignored. We don't do this on non-CPython implementation because + # they don't have this problem. + if hasattr(sys, "getrefcount"): + save_stderr = sys.stderr + sys.stderr = StringIO() try: for count, r in self._recursive_matches(nodes, 0): if self.name: @@ -759,7 +761,8 @@ r[self.name] = nodes[:count] yield count, r finally: - sys.stderr = save_stderr + if hasattr(sys, "getrefcount"): + sys.stderr = save_stderr def _iterative_matches(self, nodes): """Helper to iteratively yield the matches.""" diff --git a/lib-python/2.7/lib2to3/refactor.py b/lib-python/2.7/lib2to3/refactor.py --- a/lib-python/2.7/lib2to3/refactor.py +++ b/lib-python/2.7/lib2to3/refactor.py @@ -302,13 +302,14 @@ Files and subdirectories starting with '.' are skipped. 
""" + py_ext = os.extsep + "py" for dirpath, dirnames, filenames in os.walk(dir_name): self.log_debug("Descending into %s", dirpath) dirnames.sort() filenames.sort() for name in filenames: - if not name.startswith(".") and \ - os.path.splitext(name)[1].endswith("py"): + if (not name.startswith(".") and + os.path.splitext(name)[1] == py_ext): fullname = os.path.join(dirpath, name) self.refactor_file(fullname, write, doctests_only) # Modify dirnames in-place to remove subdirs with leading dots diff --git a/lib-python/2.7/lib2to3/tests/data/py2_test_grammar.py b/lib-python/2.7/lib2to3/tests/data/py2_test_grammar.py --- a/lib-python/2.7/lib2to3/tests/data/py2_test_grammar.py +++ b/lib-python/2.7/lib2to3/tests/data/py2_test_grammar.py @@ -316,7 +316,7 @@ ### simple_stmt: small_stmt (';' small_stmt)* [';'] x = 1; pass; del x def foo(): - # verify statments that end with semi-colons + # verify statements that end with semi-colons x = 1; pass; del x; foo() diff --git a/lib-python/2.7/lib2to3/tests/data/py3_test_grammar.py b/lib-python/2.7/lib2to3/tests/data/py3_test_grammar.py --- a/lib-python/2.7/lib2to3/tests/data/py3_test_grammar.py +++ b/lib-python/2.7/lib2to3/tests/data/py3_test_grammar.py @@ -356,7 +356,7 @@ ### simple_stmt: small_stmt (';' small_stmt)* [';'] x = 1; pass; del x def foo(): - # verify statments that end with semi-colons + # verify statements that end with semi-colons x = 1; pass; del x; foo() diff --git a/lib-python/2.7/lib2to3/tests/test_fixers.py b/lib-python/2.7/lib2to3/tests/test_fixers.py --- a/lib-python/2.7/lib2to3/tests/test_fixers.py +++ b/lib-python/2.7/lib2to3/tests/test_fixers.py @@ -3623,16 +3623,24 @@ a = """%s(f, a)""" self.checkall(b, a) - def test_2(self): + def test_qualified(self): b = """itertools.ifilterfalse(a, b)""" a = """itertools.filterfalse(a, b)""" self.check(b, a) - def test_4(self): + b = """itertools.izip_longest(a, b)""" + a = """itertools.zip_longest(a, b)""" + self.check(b, a) + + def test_2(self): b = """ifilterfalse(a, b)""" a = """filterfalse(a, b)""" self.check(b, a) + b = """izip_longest(a, b)""" + a = """zip_longest(a, b)""" + self.check(b, a) + def test_space_1(self): b = """ %s(f, a)""" a = """ %s(f, a)""" @@ -3643,9 +3651,14 @@ a = """ itertools.filterfalse(a, b)""" self.check(b, a) + b = """ itertools.izip_longest(a, b)""" + a = """ itertools.zip_longest(a, b)""" + self.check(b, a) + def test_run_order(self): self.assert_runs_after('map', 'zip', 'filter') + class Test_itertools_imports(FixerTestCase): fixer = 'itertools_imports' @@ -3696,18 +3709,19 @@ s = "from itertools import bar as bang" self.unchanged(s) - def test_ifilter(self): - b = "from itertools import ifilterfalse" - a = "from itertools import filterfalse" - self.check(b, a) - - b = "from itertools import imap, ifilterfalse, foo" - a = "from itertools import filterfalse, foo" - self.check(b, a) - - b = "from itertools import bar, ifilterfalse, foo" - a = "from itertools import bar, filterfalse, foo" - self.check(b, a) + def test_ifilter_and_zip_longest(self): + for name in "filterfalse", "zip_longest": + b = "from itertools import i%s" % (name,) + a = "from itertools import %s" % (name,) + self.check(b, a) + + b = "from itertools import imap, i%s, foo" % (name,) + a = "from itertools import %s, foo" % (name,) + self.check(b, a) + + b = "from itertools import bar, i%s, foo" % (name,) + a = "from itertools import bar, %s, foo" % (name,) + self.check(b, a) def test_import_star(self): s = "from itertools import *" diff --git a/lib-python/2.7/lib2to3/tests/test_parser.py 
b/lib-python/2.7/lib2to3/tests/test_parser.py --- a/lib-python/2.7/lib2to3/tests/test_parser.py +++ b/lib-python/2.7/lib2to3/tests/test_parser.py @@ -19,6 +19,16 @@ # Local imports from lib2to3.pgen2 import tokenize from ..pgen2.parse import ParseError +from lib2to3.pygram import python_symbols as syms + + +class TestDriver(support.TestCase): + + def test_formfeed(self): + s = """print 1\n\x0Cprint 2\n""" + t = driver.parse_string(s) + self.assertEqual(t.children[0].children[0].type, syms.print_stmt) + self.assertEqual(t.children[1].children[0].type, syms.print_stmt) class GrammarTest(support.TestCase): diff --git a/lib-python/2.7/lib2to3/tests/test_refactor.py b/lib-python/2.7/lib2to3/tests/test_refactor.py --- a/lib-python/2.7/lib2to3/tests/test_refactor.py +++ b/lib-python/2.7/lib2to3/tests/test_refactor.py @@ -223,6 +223,7 @@ "hi.py", ".dumb", ".after.py", + "notpy.npy", "sappy"] expected = ["hi.py"] check(tree, expected) diff --git a/lib-python/2.7/lib2to3/tests/test_util.py b/lib-python/2.7/lib2to3/tests/test_util.py --- a/lib-python/2.7/lib2to3/tests/test_util.py +++ b/lib-python/2.7/lib2to3/tests/test_util.py @@ -568,8 +568,8 @@ def test_from_import(self): node = parse('bar()') - fixer_util.touch_import("cgi", "escape", node) - self.assertEqual(str(node), 'from cgi import escape\nbar()\n\n') + fixer_util.touch_import("html", "escape", node) + self.assertEqual(str(node), 'from html import escape\nbar()\n\n') def test_name_import(self): node = parse('bar()') diff --git a/lib-python/2.7/locale.py b/lib-python/2.7/locale.py --- a/lib-python/2.7/locale.py +++ b/lib-python/2.7/locale.py @@ -621,7 +621,7 @@ 'tactis': 'TACTIS', 'euc_jp': 'eucJP', 'euc_kr': 'eucKR', - 'utf_8': 'UTF8', + 'utf_8': 'UTF-8', 'koi8_r': 'KOI8-R', 'koi8_u': 'KOI8-U', # XXX This list is still incomplete. If you know more diff --git a/lib-python/2.7/logging/__init__.py b/lib-python/2.7/logging/__init__.py --- a/lib-python/2.7/logging/__init__.py +++ b/lib-python/2.7/logging/__init__.py @@ -1627,6 +1627,7 @@ h = wr() if h: try: + h.acquire() h.flush() h.close() except (IOError, ValueError): @@ -1635,6 +1636,8 @@ # references to them are still around at # application exit. pass + finally: + h.release() except: if raiseExceptions: raise diff --git a/lib-python/2.7/logging/config.py b/lib-python/2.7/logging/config.py --- a/lib-python/2.7/logging/config.py +++ b/lib-python/2.7/logging/config.py @@ -226,14 +226,14 @@ propagate = 1 logger = logging.getLogger(qn) if qn in existing: - i = existing.index(qn) + i = existing.index(qn) + 1 # start with the entry after qn prefixed = qn + "." 
pflen = len(prefixed) num_existing = len(existing) - i = i + 1 # look at the entry after qn - while (i < num_existing) and (existing[i][:pflen] == prefixed): - child_loggers.append(existing[i]) - i = i + 1 + while i < num_existing: + if existing[i][:pflen] == prefixed: + child_loggers.append(existing[i]) + i += 1 existing.remove(qn) if "level" in opts: level = cp.get(sectname, "level") diff --git a/lib-python/2.7/logging/handlers.py b/lib-python/2.7/logging/handlers.py --- a/lib-python/2.7/logging/handlers.py +++ b/lib-python/2.7/logging/handlers.py @@ -125,6 +125,7 @@ """ if self.stream: self.stream.close() + self.stream = None if self.backupCount > 0: for i in range(self.backupCount - 1, 0, -1): sfn = "%s.%d" % (self.baseFilename, i) @@ -324,6 +325,7 @@ """ if self.stream: self.stream.close() + self.stream = None # get the time that this sequence started at and make it a TimeTuple t = self.rolloverAt - self.interval if self.utc: diff --git a/lib-python/2.7/mailbox.py b/lib-python/2.7/mailbox.py --- a/lib-python/2.7/mailbox.py +++ b/lib-python/2.7/mailbox.py @@ -234,27 +234,35 @@ def __init__(self, dirname, factory=rfc822.Message, create=True): """Initialize a Maildir instance.""" Mailbox.__init__(self, dirname, factory, create) + self._paths = { + 'tmp': os.path.join(self._path, 'tmp'), + 'new': os.path.join(self._path, 'new'), + 'cur': os.path.join(self._path, 'cur'), + } if not os.path.exists(self._path): if create: os.mkdir(self._path, 0700) - os.mkdir(os.path.join(self._path, 'tmp'), 0700) - os.mkdir(os.path.join(self._path, 'new'), 0700) - os.mkdir(os.path.join(self._path, 'cur'), 0700) + for path in self._paths.values(): + os.mkdir(path, 0o700) else: raise NoSuchMailboxError(self._path) self._toc = {} - self._last_read = None # Records last time we read cur/new - # NOTE: we manually invalidate _last_read each time we do any - # modifications ourselves, otherwise we might get tripped up by - # bogus mtime behaviour on some systems (see issue #6896). 
+ self._toc_mtimes = {} + for subdir in ('cur', 'new'): + self._toc_mtimes[subdir] = os.path.getmtime(self._paths[subdir]) + self._last_read = time.time() # Records last time we read cur/new + self._skewfactor = 0.1 # Adjust if os/fs clocks are skewing def add(self, message): """Add message and return assigned key.""" tmp_file = self._create_tmp() try: self._dump_message(message, tmp_file) - finally: - _sync_close(tmp_file) + except BaseException: + tmp_file.close() + os.remove(tmp_file.name) + raise + _sync_close(tmp_file) if isinstance(message, MaildirMessage): subdir = message.get_subdir() suffix = self.colon + message.get_info() @@ -280,15 +288,11 @@ raise if isinstance(message, MaildirMessage): os.utime(dest, (os.path.getatime(dest), message.get_date())) - # Invalidate cached toc - self._last_read = None return uniq def remove(self, key): """Remove the keyed message; raise KeyError if it doesn't exist.""" os.remove(os.path.join(self._path, self._lookup(key))) - # Invalidate cached toc (only on success) - self._last_read = None def discard(self, key): """If the keyed message exists, remove it.""" @@ -323,8 +327,6 @@ if isinstance(message, MaildirMessage): os.utime(new_path, (os.path.getatime(new_path), message.get_date())) - # Invalidate cached toc - self._last_read = None def get_message(self, key): """Return a Message representation or raise a KeyError.""" @@ -380,8 +382,8 @@ def flush(self): """Write any pending changes to disk.""" # Maildir changes are always written immediately, so there's nothing - # to do except invalidate our cached toc. - self._last_read = None + # to do. + pass def lock(self): """Lock the mailbox.""" @@ -479,36 +481,39 @@ def _refresh(self): """Update table of contents mapping.""" - if self._last_read is not None: - for subdir in ('new', 'cur'): - mtime = os.path.getmtime(os.path.join(self._path, subdir)) - if mtime > self._last_read: - break - else: + # If it has been less than two seconds since the last _refresh() call, + # we have to unconditionally re-read the mailbox just in case it has + # been modified, because os.path.mtime() has a 2 sec resolution in the + # most common worst case (FAT) and a 1 sec resolution typically. This + # results in a few unnecessary re-reads when _refresh() is called + # multiple times in that interval, but once the clock ticks over, we + # will only re-read as needed. Because the filesystem might be being + # served by an independent system with its own clock, we record and + # compare with the mtimes from the filesystem. Because the other + # system's clock might be skewing relative to our clock, we add an + # extra delta to our wait. The default is one tenth second, but is an + # instance variable and so can be adjusted if dealing with a + # particularly skewed or irregular system. + if time.time() - self._last_read > 2 + self._skewfactor: + refresh = False + for subdir in self._toc_mtimes: + mtime = os.path.getmtime(self._paths[subdir]) + if mtime > self._toc_mtimes[subdir]: + refresh = True + self._toc_mtimes[subdir] = mtime + if not refresh: return - - # We record the current time - 1sec so that, if _refresh() is called - # again in the same second, we will always re-read the mailbox - # just in case it's been modified. (os.path.mtime() only has - # 1sec resolution.) This results in a few unnecessary re-reads - # when _refresh() is called multiple times in the same second, - # but once the clock ticks over, we will only re-read as needed. 
- now = time.time() - 1 - + # Refresh toc self._toc = {} - def update_dir (subdir): - path = os.path.join(self._path, subdir) + for subdir in self._toc_mtimes: + path = self._paths[subdir] for entry in os.listdir(path): p = os.path.join(path, entry) if os.path.isdir(p): continue uniq = entry.split(self.colon)[0] self._toc[uniq] = os.path.join(subdir, entry) - - update_dir('new') - update_dir('cur') - - self._last_read = now + self._last_read = time.time() def _lookup(self, key): """Use TOC to return subpath for given key, or raise a KeyError.""" @@ -551,7 +556,7 @@ f = open(self._path, 'wb+') else: raise NoSuchMailboxError(self._path) - elif e.errno == errno.EACCES: + elif e.errno in (errno.EACCES, errno.EROFS): f = open(self._path, 'rb') else: raise @@ -700,9 +705,14 @@ def _append_message(self, message): """Append message to mailbox and return (start, stop) offsets.""" self._file.seek(0, 2) - self._pre_message_hook(self._file) - offsets = self._install_message(message) - self._post_message_hook(self._file) + before = self._file.tell() + try: + self._pre_message_hook(self._file) + offsets = self._install_message(message) + self._post_message_hook(self._file) + except BaseException: + self._file.truncate(before) + raise self._file.flush() self._file_length = self._file.tell() # Record current length of mailbox return offsets @@ -868,18 +878,29 @@ new_key = max(keys) + 1 new_path = os.path.join(self._path, str(new_key)) f = _create_carefully(new_path) + closed = False try: if self._locked: _lock_file(f) try: - self._dump_message(message, f) + try: + self._dump_message(message, f) + except BaseException: + # Unlock and close so it can be deleted on Windows + if self._locked: + _unlock_file(f) + _sync_close(f) + closed = True + os.remove(new_path) + raise if isinstance(message, MHMessage): self._dump_sequences(message, new_key) finally: if self._locked: _unlock_file(f) finally: - _sync_close(f) + if not closed: + _sync_close(f) return new_key def remove(self, key): @@ -1886,7 +1907,7 @@ try: fcntl.lockf(f, fcntl.LOCK_EX | fcntl.LOCK_NB) except IOError, e: - if e.errno in (errno.EAGAIN, errno.EACCES): + if e.errno in (errno.EAGAIN, errno.EACCES, errno.EROFS): raise ExternalClashError('lockf: lock unavailable: %s' % f.name) else: @@ -1896,7 +1917,7 @@ pre_lock = _create_temporary(f.name + '.lock') pre_lock.close() except IOError, e: - if e.errno == errno.EACCES: + if e.errno in (errno.EACCES, errno.EROFS): return # Without write access, just skip dotlocking. 
else: raise diff --git a/lib-python/2.7/msilib/__init__.py b/lib-python/2.7/msilib/__init__.py --- a/lib-python/2.7/msilib/__init__.py +++ b/lib-python/2.7/msilib/__init__.py @@ -173,11 +173,10 @@ add_data(db, table, getattr(module, table)) def make_id(str): - #str = str.replace(".", "_") # colons are allowed - str = str.replace(" ", "_") - str = str.replace("-", "_") - if str[0] in string.digits: - str = "_"+str + identifier_chars = string.ascii_letters + string.digits + "._" + str = "".join([c if c in identifier_chars else "_" for c in str]) + if str[0] in (string.digits + "."): + str = "_" + str assert re.match("^[A-Za-z_][A-Za-z0-9_.]*$", str), "FILE"+str return str @@ -285,19 +284,28 @@ [(feature.id, component)]) def make_short(self, file): + oldfile = file + file = file.replace('+', '_') + file = ''.join(c for c in file if not c in ' "/\[]:;=,') parts = file.split(".") - if len(parts)>1: + if len(parts) > 1: + prefix = "".join(parts[:-1]).upper() suffix = parts[-1].upper() + if not prefix: + prefix = suffix + suffix = None else: + prefix = file.upper() suffix = None - prefix = parts[0].upper() - if len(prefix) <= 8 and (not suffix or len(suffix)<=3): + if len(parts) < 3 and len(prefix) <= 8 and file == oldfile and ( + not suffix or len(suffix) <= 3): if suffix: file = prefix+"."+suffix else: file = prefix - assert file not in self.short_names else: + file = None + if file is None or file in self.short_names: prefix = prefix[:6] if suffix: suffix = suffix[:3] diff --git a/lib-python/2.7/multiprocessing/__init__.py b/lib-python/2.7/multiprocessing/__init__.py --- a/lib-python/2.7/multiprocessing/__init__.py +++ b/lib-python/2.7/multiprocessing/__init__.py @@ -38,6 +38,7 @@ # HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT # LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY # OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __version__ = '0.70a1' @@ -115,8 +116,11 @@ except (ValueError, KeyError): num = 0 elif 'bsd' in sys.platform or sys.platform == 'darwin': + comm = '/sbin/sysctl -n hw.ncpu' + if sys.platform == 'darwin': + comm = '/usr' + comm try: - with os.popen('sysctl -n hw.ncpu') as p: + with os.popen(comm) as p: num = int(p.read()) except ValueError: num = 0 diff --git a/lib-python/2.7/multiprocessing/connection.py b/lib-python/2.7/multiprocessing/connection.py --- a/lib-python/2.7/multiprocessing/connection.py +++ b/lib-python/2.7/multiprocessing/connection.py @@ -3,7 +3,33 @@ # # multiprocessing/connection.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. 
+# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __all__ = [ 'Client', 'Listener', 'Pipe' ] diff --git a/lib-python/2.7/multiprocessing/dummy/__init__.py b/lib-python/2.7/multiprocessing/dummy/__init__.py --- a/lib-python/2.7/multiprocessing/dummy/__init__.py +++ b/lib-python/2.7/multiprocessing/dummy/__init__.py @@ -3,7 +3,33 @@ # # multiprocessing/dummy/__init__.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __all__ = [ diff --git a/lib-python/2.7/multiprocessing/dummy/connection.py b/lib-python/2.7/multiprocessing/dummy/connection.py --- a/lib-python/2.7/multiprocessing/dummy/connection.py +++ b/lib-python/2.7/multiprocessing/dummy/connection.py @@ -3,7 +3,33 @@ # # multiprocessing/dummy/connection.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. 
Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __all__ = [ 'Client', 'Listener', 'Pipe' ] diff --git a/lib-python/2.7/multiprocessing/forking.py b/lib-python/2.7/multiprocessing/forking.py --- a/lib-python/2.7/multiprocessing/forking.py +++ b/lib-python/2.7/multiprocessing/forking.py @@ -3,7 +3,33 @@ # # multiprocessing/forking.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # import os @@ -172,6 +198,7 @@ TERMINATE = 0x10000 WINEXE = (sys.platform == 'win32' and getattr(sys, 'frozen', False)) + WINSERVICE = sys.executable.lower().endswith("pythonservice.exe") exit = win32.ExitProcess close = win32.CloseHandle @@ -181,7 +208,7 @@ # People embedding Python want to modify it. 
# - if sys.executable.lower().endswith('pythonservice.exe'): + if WINSERVICE: _python_exe = os.path.join(sys.exec_prefix, 'python.exe') else: _python_exe = sys.executable @@ -371,7 +398,7 @@ if _logger is not None: d['log_level'] = _logger.getEffectiveLevel() - if not WINEXE: + if not WINEXE and not WINSERVICE: main_path = getattr(sys.modules['__main__'], '__file__', None) if not main_path and sys.argv[0] not in ('', '-c'): main_path = sys.argv[0] diff --git a/lib-python/2.7/multiprocessing/heap.py b/lib-python/2.7/multiprocessing/heap.py --- a/lib-python/2.7/multiprocessing/heap.py +++ b/lib-python/2.7/multiprocessing/heap.py @@ -3,7 +3,33 @@ # # multiprocessing/heap.py # -# Copyright (c) 2007-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # import bisect diff --git a/lib-python/2.7/multiprocessing/managers.py b/lib-python/2.7/multiprocessing/managers.py --- a/lib-python/2.7/multiprocessing/managers.py +++ b/lib-python/2.7/multiprocessing/managers.py @@ -4,7 +4,33 @@ # # multiprocessing/managers.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. 
+# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __all__ = [ 'BaseManager', 'SyncManager', 'BaseProxy', 'Token' ] diff --git a/lib-python/2.7/multiprocessing/pool.py b/lib-python/2.7/multiprocessing/pool.py --- a/lib-python/2.7/multiprocessing/pool.py +++ b/lib-python/2.7/multiprocessing/pool.py @@ -3,7 +3,33 @@ # # multiprocessing/pool.py # -# Copyright (c) 2007-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __all__ = ['Pool'] @@ -269,6 +295,8 @@ while pool._worker_handler._state == RUN and pool._state == RUN: pool._maintain_pool() time.sleep(0.1) + # send sentinel to stop workers + pool._taskqueue.put(None) debug('worker handler exiting') @staticmethod @@ -387,7 +415,6 @@ if self._state == RUN: self._state = CLOSE self._worker_handler._state = CLOSE - self._taskqueue.put(None) def terminate(self): debug('terminating pool') @@ -421,7 +448,6 @@ worker_handler._state = TERMINATE task_handler._state = TERMINATE - taskqueue.put(None) # sentinel debug('helping task handler/workers to finish') cls._help_stuff_finish(inqueue, task_handler, len(pool)) @@ -431,6 +457,11 @@ result_handler._state = TERMINATE outqueue.put(None) # sentinel + # We must wait for the worker handler to exit before terminating + # workers because we don't want workers to be restarted behind our back. 
+ debug('joining worker handler') + worker_handler.join() + # Terminate workers which haven't already finished. if pool and hasattr(pool[0], 'terminate'): debug('terminating workers') diff --git a/lib-python/2.7/multiprocessing/process.py b/lib-python/2.7/multiprocessing/process.py --- a/lib-python/2.7/multiprocessing/process.py +++ b/lib-python/2.7/multiprocessing/process.py @@ -3,7 +3,33 @@ # # multiprocessing/process.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __all__ = ['Process', 'current_process', 'active_children'] diff --git a/lib-python/2.7/multiprocessing/queues.py b/lib-python/2.7/multiprocessing/queues.py --- a/lib-python/2.7/multiprocessing/queues.py +++ b/lib-python/2.7/multiprocessing/queues.py @@ -3,7 +3,33 @@ # # multiprocessing/queues.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. 
IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __all__ = ['Queue', 'SimpleQueue', 'JoinableQueue'] diff --git a/lib-python/2.7/multiprocessing/reduction.py b/lib-python/2.7/multiprocessing/reduction.py --- a/lib-python/2.7/multiprocessing/reduction.py +++ b/lib-python/2.7/multiprocessing/reduction.py @@ -4,7 +4,33 @@ # # multiprocessing/reduction.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __all__ = [] diff --git a/lib-python/2.7/multiprocessing/sharedctypes.py b/lib-python/2.7/multiprocessing/sharedctypes.py --- a/lib-python/2.7/multiprocessing/sharedctypes.py +++ b/lib-python/2.7/multiprocessing/sharedctypes.py @@ -3,7 +3,33 @@ # # multiprocessing/sharedctypes.py # -# Copyright (c) 2007-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. 
+# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # import sys @@ -52,9 +78,11 @@ Returns a ctypes array allocated from shared memory ''' type_ = typecode_to_type.get(typecode_or_type, typecode_or_type) - if isinstance(size_or_initializer, int): + if isinstance(size_or_initializer, (int, long)): type_ = type_ * size_or_initializer - return _new_value(type_) + obj = _new_value(type_) + ctypes.memset(ctypes.addressof(obj), 0, ctypes.sizeof(obj)) + return obj else: type_ = type_ * len(size_or_initializer) result = _new_value(type_) diff --git a/lib-python/2.7/multiprocessing/synchronize.py b/lib-python/2.7/multiprocessing/synchronize.py --- a/lib-python/2.7/multiprocessing/synchronize.py +++ b/lib-python/2.7/multiprocessing/synchronize.py @@ -3,7 +3,33 @@ # # multiprocessing/synchronize.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __all__ = [ diff --git a/lib-python/2.7/multiprocessing/util.py b/lib-python/2.7/multiprocessing/util.py --- a/lib-python/2.7/multiprocessing/util.py +++ b/lib-python/2.7/multiprocessing/util.py @@ -3,7 +3,33 @@ # # multiprocessing/util.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. 
+# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # import itertools diff --git a/lib-python/2.7/netrc.py b/lib-python/2.7/netrc.py --- a/lib-python/2.7/netrc.py +++ b/lib-python/2.7/netrc.py @@ -34,11 +34,19 @@ def _parse(self, file, fp): lexer = shlex.shlex(fp) lexer.wordchars += r"""!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~""" + lexer.commenters = lexer.commenters.replace('#', '') while 1: # Look for a machine, default, or macdef top-level keyword toplevel = tt = lexer.get_token() if not tt: break + elif tt[0] == '#': + # seek to beginning of comment, in case reading the token put + # us on a new line, and then skip the rest of the line. + pos = len(tt) + 1 + lexer.instream.seek(-pos, 1) + lexer.instream.readline() + continue elif tt == 'machine': entryname = lexer.get_token() elif tt == 'default': @@ -64,8 +72,8 @@ self.hosts[entryname] = {} while 1: tt = lexer.get_token() - if (tt=='' or tt == 'machine' or - tt == 'default' or tt =='macdef'): + if (tt.startswith('#') or + tt in {'', 'machine', 'default', 'macdef'}): if password: self.hosts[entryname] = (login, account, password) lexer.push_token(tt) diff --git a/lib-python/2.7/nntplib.py b/lib-python/2.7/nntplib.py --- a/lib-python/2.7/nntplib.py +++ b/lib-python/2.7/nntplib.py @@ -103,7 +103,7 @@ readermode is sometimes necessary if you are connecting to an NNTP server on the local machine and intend to call - reader-specific comamnds, such as `group'. If you get + reader-specific commands, such as `group'. If you get unexpected NNTPPermanentErrors, you might need to set readermode. """ diff --git a/lib-python/2.7/ntpath.py b/lib-python/2.7/ntpath.py --- a/lib-python/2.7/ntpath.py +++ b/lib-python/2.7/ntpath.py @@ -310,7 +310,7 @@ # - $varname is accepted. # - %varname% is accepted. # - varnames can be made out of letters, digits and the characters '_-' -# (though is not verifed in the ${varname} and %varname% cases) +# (though is not verified in the ${varname} and %varname% cases) # XXX With COMMAND.COM you can use any characters in a variable name, # XXX except '^|<>='. 
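
As an aside to the netrc.py hunk above: that change removes '#' from shlex's comment characters and skips comment lines explicitly, so tokens such as passwords may now contain '#'. Below is a minimal sketch, not part of the patch, showing the behaviour it enables; the file contents, host name and credentials are made up for illustration.

    import os
    import tempfile
    import netrc

    # A throwaway netrc-style file: one comment line plus one entry whose
    # password contains a '#'.
    sample = (
        "# this whole line is a comment and is skipped\n"
        "machine example.org login alice password pass#word\n"
    )

    fd, path = tempfile.mkstemp()
    try:
        with os.fdopen(fd, "w") as f:
            f.write(sample)
        info = netrc.netrc(path)
        # With the patched parser the '#' inside the password is kept;
        # before the change, shlex treated '#' as a comment character and
        # cut the token off there.
        print info.authenticators("example.org")   # ('alice', None, 'pass#word')
    finally:
        os.remove(path)
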
diff --git a/lib-python/2.7/nturl2path.py b/lib-python/2.7/nturl2path.py --- a/lib-python/2.7/nturl2path.py +++ b/lib-python/2.7/nturl2path.py @@ -25,11 +25,14 @@ error = 'Bad URL: ' + url raise IOError, error drive = comp[0][-1].upper() + path = drive + ':' components = comp[1].split('/') - path = drive + ':' - for comp in components: + for comp in components: if comp: path = path + '\\' + urllib.unquote(comp) + # Issue #11474: url like '/C|/' should convert into 'C:\\' + if path.endswith(':') and url.endswith('/'): + path += '\\' return path def pathname2url(p): diff --git a/lib-python/2.7/numbers.py b/lib-python/2.7/numbers.py --- a/lib-python/2.7/numbers.py +++ b/lib-python/2.7/numbers.py @@ -63,7 +63,7 @@ @abstractproperty def imag(self): - """Retrieve the real component of this number. + """Retrieve the imaginary component of this number. This should subclass Real. """ diff --git a/lib-python/2.7/optparse.py b/lib-python/2.7/optparse.py --- a/lib-python/2.7/optparse.py +++ b/lib-python/2.7/optparse.py @@ -1131,6 +1131,11 @@ prog : string the name of the current program (to override os.path.basename(sys.argv[0])). + description : string + A paragraph of text giving a brief overview of your program. + optparse reformats this paragraph to fit the current terminal + width and prints it when the user requests help (after usage, + but before the list of options). epilog : string paragraph of help text to print after option help diff --git a/lib-python/2.7/pickletools.py b/lib-python/2.7/pickletools.py --- a/lib-python/2.7/pickletools.py +++ b/lib-python/2.7/pickletools.py @@ -1370,7 +1370,7 @@ proto=0, doc="""Read an object from the memo and push it on the stack. - The index of the memo object to push is given by the newline-teriminated + The index of the memo object to push is given by the newline-terminated decimal string following. BINGET and LONG_BINGET are space-optimized versions. """), diff --git a/lib-python/2.7/pkgutil.py b/lib-python/2.7/pkgutil.py --- a/lib-python/2.7/pkgutil.py +++ b/lib-python/2.7/pkgutil.py @@ -11,7 +11,7 @@ __all__ = [ 'get_importer', 'iter_importers', 'get_loader', 'find_loader', - 'walk_packages', 'iter_modules', + 'walk_packages', 'iter_modules', 'get_data', 'ImpImporter', 'ImpLoader', 'read_code', 'extend_path', ] diff --git a/lib-python/2.7/platform.py b/lib-python/2.7/platform.py --- a/lib-python/2.7/platform.py +++ b/lib-python/2.7/platform.py @@ -503,7 +503,7 @@ info = pipe.read() if pipe.close(): raise os.error,'command failed' - # XXX How can I supress shell errors from being written + # XXX How can I suppress shell errors from being written # to stderr ? except os.error,why: #print 'Command %s failed: %s' % (cmd,why) @@ -1448,9 +1448,10 @@ """ Returns a string identifying the Python implementation. Currently, the following implementations are identified: - 'CPython' (C implementation of Python), - 'IronPython' (.NET implementation of Python), - 'Jython' (Java implementation of Python). + 'CPython' (C implementation of Python), + 'IronPython' (.NET implementation of Python), + 'Jython' (Java implementation of Python), + 'PyPy' (Python implementation of Python). """ return _sys_version()[0] diff --git a/lib-python/2.7/pydoc.py b/lib-python/2.7/pydoc.py --- a/lib-python/2.7/pydoc.py +++ b/lib-python/2.7/pydoc.py @@ -156,7 +156,7 @@ no.append(x) return yes, no -def visiblename(name, all=None): +def visiblename(name, all=None, obj=None): """Decide whether to show documentation on a variable.""" # Certain special names are redundant. 
_hidden_names = ('__builtins__', '__doc__', '__file__', '__path__', @@ -164,6 +164,9 @@ if name in _hidden_names: return 0 # Private names are hidden, but special names are displayed. if name.startswith('__') and name.endswith('__'): return 1 + # Namedtuples have public fields and methods with a single leading underscore + if name.startswith('_') and hasattr(obj, '_fields'): + return 1 if all is not None: # only document that which the programmer exported in __all__ return name in all @@ -475,9 +478,9 @@ def multicolumn(self, list, format, cols=4): """Format a list of items into a multi-column list.""" result = '' - rows = (len(list)+cols-1)/cols + rows = (len(list)+cols-1)//cols for col in range(cols): - result = result + '' % (100/cols) + result = result + '' % (100//cols) for i in range(rows*col, rows*col+rows): if i < len(list): result = result + format(list[i]) + '
    \n' @@ -627,7 +630,7 @@ # if __all__ exists, believe it. Otherwise use old heuristic. if (all is not None or (inspect.getmodule(value) or object) is object): - if visiblename(key, all): + if visiblename(key, all, object): classes.append((key, value)) cdict[key] = cdict[value] = '#' + key for key, value in classes: @@ -643,13 +646,13 @@ # if __all__ exists, believe it. Otherwise use old heuristic. if (all is not None or inspect.isbuiltin(value) or inspect.getmodule(value) is object): - if visiblename(key, all): + if visiblename(key, all, object): funcs.append((key, value)) fdict[key] = '#-' + key if inspect.isfunction(value): fdict[value] = fdict[key] data = [] for key, value in inspect.getmembers(object, isdata): - if visiblename(key, all): + if visiblename(key, all, object): data.append((key, value)) doc = self.markup(getdoc(object), self.preformat, fdict, cdict) @@ -773,7 +776,7 @@ push('\n') return attrs - attrs = filter(lambda data: visiblename(data[0]), + attrs = filter(lambda data: visiblename(data[0], obj=object), classify_class_attrs(object)) mdict = {} for key, kind, homecls, value in attrs: @@ -1042,18 +1045,18 @@ # if __all__ exists, believe it. Otherwise use old heuristic. if (all is not None or (inspect.getmodule(value) or object) is object): - if visiblename(key, all): + if visiblename(key, all, object): classes.append((key, value)) funcs = [] for key, value in inspect.getmembers(object, inspect.isroutine): # if __all__ exists, believe it. Otherwise use old heuristic. if (all is not None or inspect.isbuiltin(value) or inspect.getmodule(value) is object): - if visiblename(key, all): + if visiblename(key, all, object): funcs.append((key, value)) data = [] for key, value in inspect.getmembers(object, isdata): - if visiblename(key, all): + if visiblename(key, all, object): data.append((key, value)) modpkgs = [] @@ -1113,7 +1116,7 @@ result = result + self.section('CREDITS', str(object.__credits__)) return result - def docclass(self, object, name=None, mod=None): + def docclass(self, object, name=None, mod=None, *ignored): """Produce text documentation for a given class object.""" realname = object.__name__ name = name or realname @@ -1186,7 +1189,7 @@ name, mod, maxlen=70, doc=doc) + '\n') return attrs - attrs = filter(lambda data: visiblename(data[0]), + attrs = filter(lambda data: visiblename(data[0], obj=object), classify_class_attrs(object)) while attrs: if mro: @@ -1718,8 +1721,9 @@ return '' return '' - def __call__(self, request=None): - if request is not None: + _GoInteractive = object() + def __call__(self, request=_GoInteractive): + if request is not self._GoInteractive: self.help(request) else: self.intro() diff --git a/lib-python/2.7/pydoc_data/topics.py b/lib-python/2.7/pydoc_data/topics.py --- a/lib-python/2.7/pydoc_data/topics.py +++ b/lib-python/2.7/pydoc_data/topics.py @@ -1,16 +1,16 @@ -# Autogenerated by Sphinx on Sat Jul 3 08:52:04 2010 +# Autogenerated by Sphinx on Sat Jun 11 09:49:30 2011 topics = {'assert': u'\nThe ``assert`` statement\n************************\n\nAssert statements are a convenient way to insert debugging assertions\ninto a program:\n\n assert_stmt ::= "assert" expression ["," expression]\n\nThe simple form, ``assert expression``, is equivalent to\n\n if __debug__:\n if not expression: raise AssertionError\n\nThe extended form, ``assert expression1, expression2``, is equivalent\nto\n\n if __debug__:\n if not expression1: raise AssertionError(expression2)\n\nThese equivalences assume that ``__debug__`` and ``AssertionError``\nrefer to 
the built-in variables with those names. In the current\nimplementation, the built-in variable ``__debug__`` is ``True`` under\nnormal circumstances, ``False`` when optimization is requested\n(command line option -O). The current code generator emits no code\nfor an assert statement when optimization is requested at compile\ntime. Note that it is unnecessary to include the source code for the\nexpression that failed in the error message; it will be displayed as\npart of the stack trace.\n\nAssignments to ``__debug__`` are illegal. The value for the built-in\nvariable is determined when the interpreter starts.\n', - 'assignment': u'\nAssignment statements\n*********************\n\nAssignment statements are used to (re)bind names to values and to\nmodify attributes or items of mutable objects:\n\n assignment_stmt ::= (target_list "=")+ (expression_list | yield_expression)\n target_list ::= target ("," target)* [","]\n target ::= identifier\n | "(" target_list ")"\n | "[" target_list "]"\n | attributeref\n | subscription\n | slicing\n\n(See section *Primaries* for the syntax definitions for the last three\nsymbols.)\n\nAn assignment statement evaluates the expression list (remember that\nthis can be a single expression or a comma-separated list, the latter\nyielding a tuple) and assigns the single resulting object to each of\nthe target lists, from left to right.\n\nAssignment is defined recursively depending on the form of the target\n(list). When a target is part of a mutable object (an attribute\nreference, subscription or slicing), the mutable object must\nultimately perform the assignment and decide about its validity, and\nmay raise an exception if the assignment is unacceptable. The rules\nobserved by various types and the exceptions raised are given with the\ndefinition of the object types (see section *The standard type\nhierarchy*).\n\nAssignment of an object to a target list is recursively defined as\nfollows.\n\n* If the target list is a single target: The object is assigned to\n that target.\n\n* If the target list is a comma-separated list of targets: The object\n must be an iterable with the same number of items as there are\n targets in the target list, and the items are assigned, from left to\n right, to the corresponding targets. (This rule is relaxed as of\n Python 1.5; in earlier versions, the object had to be a tuple.\n Since strings are sequences, an assignment like ``a, b = "xy"`` is\n now legal as long as the string has the right length.)\n\nAssignment of an object to a single target is recursively defined as\nfollows.\n\n* If the target is an identifier (name):\n\n * If the name does not occur in a ``global`` statement in the\n current code block: the name is bound to the object in the current\n local namespace.\n\n * Otherwise: the name is bound to the object in the current global\n namespace.\n\n The name is rebound if it was already bound. This may cause the\n reference count for the object previously bound to the name to reach\n zero, causing the object to be deallocated and its destructor (if it\n has one) to be called.\n\n* If the target is a target list enclosed in parentheses or in square\n brackets: The object must be an iterable with the same number of\n items as there are targets in the target list, and its items are\n assigned, from left to right, to the corresponding targets.\n\n* If the target is an attribute reference: The primary expression in\n the reference is evaluated. 
It should yield an object with\n assignable attributes; if this is not the case, ``TypeError`` is\n raised. That object is then asked to assign the assigned object to\n the given attribute; if it cannot perform the assignment, it raises\n an exception (usually but not necessarily ``AttributeError``).\n\n Note: If the object is a class instance and the attribute reference\n occurs on both sides of the assignment operator, the RHS expression,\n ``a.x`` can access either an instance attribute or (if no instance\n attribute exists) a class attribute. The LHS target ``a.x`` is\n always set as an instance attribute, creating it if necessary.\n Thus, the two occurrences of ``a.x`` do not necessarily refer to the\n same attribute: if the RHS expression refers to a class attribute,\n the LHS creates a new instance attribute as the target of the\n assignment:\n\n class Cls:\n x = 3 # class variable\n inst = Cls()\n inst.x = inst.x + 1 # writes inst.x as 4 leaving Cls.x as 3\n\n This description does not necessarily apply to descriptor\n attributes, such as properties created with ``property()``.\n\n* If the target is a subscription: The primary expression in the\n reference is evaluated. It should yield either a mutable sequence\n object (such as a list) or a mapping object (such as a dictionary).\n Next, the subscript expression is evaluated.\n\n If the primary is a mutable sequence object (such as a list), the\n subscript must yield a plain integer. If it is negative, the\n sequence\'s length is added to it. The resulting value must be a\n nonnegative integer less than the sequence\'s length, and the\n sequence is asked to assign the assigned object to its item with\n that index. If the index is out of range, ``IndexError`` is raised\n (assignment to a subscripted sequence cannot add new items to a\n list).\n\n If the primary is a mapping object (such as a dictionary), the\n subscript must have a type compatible with the mapping\'s key type,\n and the mapping is then asked to create a key/datum pair which maps\n the subscript to the assigned object. This can either replace an\n existing key/value pair with the same key value, or insert a new\n key/value pair (if no key with the same value existed).\n\n* If the target is a slicing: The primary expression in the reference\n is evaluated. It should yield a mutable sequence object (such as a\n list). The assigned object should be a sequence object of the same\n type. Next, the lower and upper bound expressions are evaluated,\n insofar they are present; defaults are zero and the sequence\'s\n length. The bounds should evaluate to (small) integers. If either\n bound is negative, the sequence\'s length is added to it. The\n resulting bounds are clipped to lie between zero and the sequence\'s\n length, inclusive. Finally, the sequence object is asked to replace\n the slice with the items of the assigned sequence. 
The length of\n the slice may be different from the length of the assigned sequence,\n thus changing the length of the target sequence, if the object\n allows it.\n\n**CPython implementation detail:** In the current implementation, the\nsyntax for targets is taken to be the same as for expressions, and\ninvalid syntax is rejected during the code generation phase, causing\nless detailed error messages.\n\nWARNING: Although the definition of assignment implies that overlaps\nbetween the left-hand side and the right-hand side are \'safe\' (for\nexample ``a, b = b, a`` swaps two variables), overlaps *within* the\ncollection of assigned-to variables are not safe! For instance, the\nfollowing program prints ``[0, 2]``:\n\n x = [0, 1]\n i = 0\n i, x[i] = 1, 2\n print x\n\n\nAugmented assignment statements\n===============================\n\nAugmented assignment is the combination, in a single statement, of a\nbinary operation and an assignment statement:\n\n augmented_assignment_stmt ::= augtarget augop (expression_list | yield_expression)\n augtarget ::= identifier | attributeref | subscription | slicing\n augop ::= "+=" | "-=" | "*=" | "/=" | "//=" | "%=" | "**="\n | ">>=" | "<<=" | "&=" | "^=" | "|="\n\n(See section *Primaries* for the syntax definitions for the last three\nsymbols.)\n\nAn augmented assignment evaluates the target (which, unlike normal\nassignment statements, cannot be an unpacking) and the expression\nlist, performs the binary operation specific to the type of assignment\non the two operands, and assigns the result to the original target.\nThe target is only evaluated once.\n\nAn augmented assignment expression like ``x += 1`` can be rewritten as\n``x = x + 1`` to achieve a similar, but not exactly equal effect. In\nthe augmented version, ``x`` is only evaluated once. Also, when\npossible, the actual operation is performed *in-place*, meaning that\nrather than creating a new object and assigning that to the target,\nthe old object is modified instead.\n\nWith the exception of assigning to tuples and multiple targets in a\nsingle statement, the assignment done by augmented assignment\nstatements is handled the same way as normal assignments. Similarly,\nwith the exception of the possible *in-place* behavior, the binary\noperation performed by augmented assignment is the same as the normal\nbinary operations.\n\nFor targets which are attribute references, the same *caveat about\nclass and instance attributes* applies as for regular assignments.\n', + 'assignment': u'\nAssignment statements\n*********************\n\nAssignment statements are used to (re)bind names to values and to\nmodify attributes or items of mutable objects:\n\n assignment_stmt ::= (target_list "=")+ (expression_list | yield_expression)\n target_list ::= target ("," target)* [","]\n target ::= identifier\n | "(" target_list ")"\n | "[" target_list "]"\n | attributeref\n | subscription\n | slicing\n\n(See section *Primaries* for the syntax definitions for the last three\nsymbols.)\n\nAn assignment statement evaluates the expression list (remember that\nthis can be a single expression or a comma-separated list, the latter\nyielding a tuple) and assigns the single resulting object to each of\nthe target lists, from left to right.\n\nAssignment is defined recursively depending on the form of the target\n(list). 
When a target is part of a mutable object (an attribute\nreference, subscription or slicing), the mutable object must\nultimately perform the assignment and decide about its validity, and\nmay raise an exception if the assignment is unacceptable. The rules\nobserved by various types and the exceptions raised are given with the\ndefinition of the object types (see section *The standard type\nhierarchy*).\n\nAssignment of an object to a target list is recursively defined as\nfollows.\n\n* If the target list is a single target: The object is assigned to\n that target.\n\n* If the target list is a comma-separated list of targets: The object\n must be an iterable with the same number of items as there are\n targets in the target list, and the items are assigned, from left to\n right, to the corresponding targets.\n\nAssignment of an object to a single target is recursively defined as\nfollows.\n\n* If the target is an identifier (name):\n\n * If the name does not occur in a ``global`` statement in the\n current code block: the name is bound to the object in the current\n local namespace.\n\n * Otherwise: the name is bound to the object in the current global\n namespace.\n\n The name is rebound if it was already bound. This may cause the\n reference count for the object previously bound to the name to reach\n zero, causing the object to be deallocated and its destructor (if it\n has one) to be called.\n\n* If the target is a target list enclosed in parentheses or in square\n brackets: The object must be an iterable with the same number of\n items as there are targets in the target list, and its items are\n assigned, from left to right, to the corresponding targets.\n\n* If the target is an attribute reference: The primary expression in\n the reference is evaluated. It should yield an object with\n assignable attributes; if this is not the case, ``TypeError`` is\n raised. That object is then asked to assign the assigned object to\n the given attribute; if it cannot perform the assignment, it raises\n an exception (usually but not necessarily ``AttributeError``).\n\n Note: If the object is a class instance and the attribute reference\n occurs on both sides of the assignment operator, the RHS expression,\n ``a.x`` can access either an instance attribute or (if no instance\n attribute exists) a class attribute. The LHS target ``a.x`` is\n always set as an instance attribute, creating it if necessary.\n Thus, the two occurrences of ``a.x`` do not necessarily refer to the\n same attribute: if the RHS expression refers to a class attribute,\n the LHS creates a new instance attribute as the target of the\n assignment:\n\n class Cls:\n x = 3 # class variable\n inst = Cls()\n inst.x = inst.x + 1 # writes inst.x as 4 leaving Cls.x as 3\n\n This description does not necessarily apply to descriptor\n attributes, such as properties created with ``property()``.\n\n* If the target is a subscription: The primary expression in the\n reference is evaluated. It should yield either a mutable sequence\n object (such as a list) or a mapping object (such as a dictionary).\n Next, the subscript expression is evaluated.\n\n If the primary is a mutable sequence object (such as a list), the\n subscript must yield a plain integer. If it is negative, the\n sequence\'s length is added to it. The resulting value must be a\n nonnegative integer less than the sequence\'s length, and the\n sequence is asked to assign the assigned object to its item with\n that index. 
If the index is out of range, ``IndexError`` is raised\n (assignment to a subscripted sequence cannot add new items to a\n list).\n\n If the primary is a mapping object (such as a dictionary), the\n subscript must have a type compatible with the mapping\'s key type,\n and the mapping is then asked to create a key/datum pair which maps\n the subscript to the assigned object. This can either replace an\n existing key/value pair with the same key value, or insert a new\n key/value pair (if no key with the same value existed).\n\n* If the target is a slicing: The primary expression in the reference\n is evaluated. It should yield a mutable sequence object (such as a\n list). The assigned object should be a sequence object of the same\n type. Next, the lower and upper bound expressions are evaluated,\n insofar they are present; defaults are zero and the sequence\'s\n length. The bounds should evaluate to (small) integers. If either\n bound is negative, the sequence\'s length is added to it. The\n resulting bounds are clipped to lie between zero and the sequence\'s\n length, inclusive. Finally, the sequence object is asked to replace\n the slice with the items of the assigned sequence. The length of\n the slice may be different from the length of the assigned sequence,\n thus changing the length of the target sequence, if the object\n allows it.\n\n**CPython implementation detail:** In the current implementation, the\nsyntax for targets is taken to be the same as for expressions, and\ninvalid syntax is rejected during the code generation phase, causing\nless detailed error messages.\n\nWARNING: Although the definition of assignment implies that overlaps\nbetween the left-hand side and the right-hand side are \'safe\' (for\nexample ``a, b = b, a`` swaps two variables), overlaps *within* the\ncollection of assigned-to variables are not safe! For instance, the\nfollowing program prints ``[0, 2]``:\n\n x = [0, 1]\n i = 0\n i, x[i] = 1, 2\n print x\n\n\nAugmented assignment statements\n===============================\n\nAugmented assignment is the combination, in a single statement, of a\nbinary operation and an assignment statement:\n\n augmented_assignment_stmt ::= augtarget augop (expression_list | yield_expression)\n augtarget ::= identifier | attributeref | subscription | slicing\n augop ::= "+=" | "-=" | "*=" | "/=" | "//=" | "%=" | "**="\n | ">>=" | "<<=" | "&=" | "^=" | "|="\n\n(See section *Primaries* for the syntax definitions for the last three\nsymbols.)\n\nAn augmented assignment evaluates the target (which, unlike normal\nassignment statements, cannot be an unpacking) and the expression\nlist, performs the binary operation specific to the type of assignment\non the two operands, and assigns the result to the original target.\nThe target is only evaluated once.\n\nAn augmented assignment expression like ``x += 1`` can be rewritten as\n``x = x + 1`` to achieve a similar, but not exactly equal effect. In\nthe augmented version, ``x`` is only evaluated once. Also, when\npossible, the actual operation is performed *in-place*, meaning that\nrather than creating a new object and assigning that to the target,\nthe old object is modified instead.\n\nWith the exception of assigning to tuples and multiple targets in a\nsingle statement, the assignment done by augmented assignment\nstatements is handled the same way as normal assignments. 
Similarly,\nwith the exception of the possible *in-place* behavior, the binary\noperation performed by augmented assignment is the same as the normal\nbinary operations.\n\nFor targets which are attribute references, the same *caveat about\nclass and instance attributes* applies as for regular assignments.\n', 'atom-identifiers': u'\nIdentifiers (Names)\n*******************\n\nAn identifier occurring as an atom is a name. See section\n*Identifiers and keywords* for lexical definition and section *Naming\nand binding* for documentation of naming and binding.\n\nWhen the name is bound to an object, evaluation of the atom yields\nthat object. When a name is not bound, an attempt to evaluate it\nraises a ``NameError`` exception.\n\n**Private name mangling:** When an identifier that textually occurs in\na class definition begins with two or more underscore characters and\ndoes not end in two or more underscores, it is considered a *private\nname* of that class. Private names are transformed to a longer form\nbefore code is generated for them. The transformation inserts the\nclass name in front of the name, with leading underscores removed, and\na single underscore inserted in front of the class name. For example,\nthe identifier ``__spam`` occurring in a class named ``Ham`` will be\ntransformed to ``_Ham__spam``. This transformation is independent of\nthe syntactical context in which the identifier is used. If the\ntransformed name is extremely long (longer than 255 characters),\nimplementation defined truncation may happen. If the class name\nconsists only of underscores, no transformation is done.\n', 'atom-literals': u"\nLiterals\n********\n\nPython supports string literals and various numeric literals:\n\n literal ::= stringliteral | integer | longinteger\n | floatnumber | imagnumber\n\nEvaluation of a literal yields an object of the given type (string,\ninteger, long integer, floating point number, complex number) with the\ngiven value. The value may be approximated in the case of floating\npoint and imaginary (complex) literals. See section *Literals* for\ndetails.\n\nAll literals correspond to immutable data types, and hence the\nobject's identity is less important than its value. Multiple\nevaluations of literals with the same value (either the same\noccurrence in the program text or a different occurrence) may obtain\nthe same object or a different object with the same value.\n", - 'attribute-access': u'\nCustomizing attribute access\n****************************\n\nThe following methods can be defined to customize the meaning of\nattribute access (use of, assignment to, or deletion of ``x.name``)\nfor class instances.\n\nobject.__getattr__(self, name)\n\n Called when an attribute lookup has not found the attribute in the\n usual places (i.e. it is not an instance attribute nor is it found\n in the class tree for ``self``). ``name`` is the attribute name.\n This method should return the (computed) attribute value or raise\n an ``AttributeError`` exception.\n\n Note that if the attribute is found through the normal mechanism,\n ``__getattr__()`` is not called. (This is an intentional asymmetry\n between ``__getattr__()`` and ``__setattr__()``.) This is done both\n for efficiency reasons and because otherwise ``__getattr__()``\n would have no way to access other attributes of the instance. Note\n that at least for instance variables, you can fake total control by\n not inserting any values in the instance attribute dictionary (but\n instead inserting them in another object). 
See the\n ``__getattribute__()`` method below for a way to actually get total\n control in new-style classes.\n\nobject.__setattr__(self, name, value)\n\n Called when an attribute assignment is attempted. This is called\n instead of the normal mechanism (i.e. store the value in the\n instance dictionary). *name* is the attribute name, *value* is the\n value to be assigned to it.\n\n If ``__setattr__()`` wants to assign to an instance attribute, it\n should not simply execute ``self.name = value`` --- this would\n cause a recursive call to itself. Instead, it should insert the\n value in the dictionary of instance attributes, e.g.,\n ``self.__dict__[name] = value``. For new-style classes, rather\n than accessing the instance dictionary, it should call the base\n class method with the same name, for example,\n ``object.__setattr__(self, name, value)``.\n\nobject.__delattr__(self, name)\n\n Like ``__setattr__()`` but for attribute deletion instead of\n assignment. This should only be implemented if ``del obj.name`` is\n meaningful for the object.\n\n\nMore attribute access for new-style classes\n===========================================\n\nThe following methods only apply to new-style classes.\n\nobject.__getattribute__(self, name)\n\n Called unconditionally to implement attribute accesses for\n instances of the class. If the class also defines\n ``__getattr__()``, the latter will not be called unless\n ``__getattribute__()`` either calls it explicitly or raises an\n ``AttributeError``. This method should return the (computed)\n attribute value or raise an ``AttributeError`` exception. In order\n to avoid infinite recursion in this method, its implementation\n should always call the base class method with the same name to\n access any attributes it needs, for example,\n ``object.__getattribute__(self, name)``.\n\n Note: This method may still be bypassed when looking up special methods\n as the result of implicit invocation via language syntax or\n built-in functions. See *Special method lookup for new-style\n classes*.\n\n\nImplementing Descriptors\n========================\n\nThe following methods only apply when an instance of the class\ncontaining the method (a so-called *descriptor* class) appears in the\nclass dictionary of another new-style class, known as the *owner*\nclass. In the examples below, "the attribute" refers to the attribute\nwhose name is the key of the property in the owner class\'\n``__dict__``. Descriptors can only be implemented as new-style\nclasses themselves.\n\nobject.__get__(self, instance, owner)\n\n Called to get the attribute of the owner class (class attribute\n access) or of an instance of that class (instance attribute\n access). *owner* is always the owner class, while *instance* is the\n instance that the attribute was accessed through, or ``None`` when\n the attribute is accessed through the *owner*. This method should\n return the (computed) attribute value or raise an\n ``AttributeError`` exception.\n\nobject.__set__(self, instance, value)\n\n Called to set the attribute on an instance *instance* of the owner\n class to a new value, *value*.\n\nobject.__delete__(self, instance)\n\n Called to delete the attribute on an instance *instance* of the\n owner class.\n\n\nInvoking Descriptors\n====================\n\nIn general, a descriptor is an object attribute with "binding\nbehavior", one whose attribute access has been overridden by methods\nin the descriptor protocol: ``__get__()``, ``__set__()``, and\n``__delete__()``. 
If any of those methods are defined for an object,\nit is said to be a descriptor.\n\nThe default behavior for attribute access is to get, set, or delete\nthe attribute from an object\'s dictionary. For instance, ``a.x`` has a\nlookup chain starting with ``a.__dict__[\'x\']``, then\n``type(a).__dict__[\'x\']``, and continuing through the base classes of\n``type(a)`` excluding metaclasses.\n\nHowever, if the looked-up value is an object defining one of the\ndescriptor methods, then Python may override the default behavior and\ninvoke the descriptor method instead. Where this occurs in the\nprecedence chain depends on which descriptor methods were defined and\nhow they were called. Note that descriptors are only invoked for new\nstyle objects or classes (ones that subclass ``object()`` or\n``type()``).\n\nThe starting point for descriptor invocation is a binding, ``a.x``.\nHow the arguments are assembled depends on ``a``:\n\nDirect Call\n The simplest and least common call is when user code directly\n invokes a descriptor method: ``x.__get__(a)``.\n\nInstance Binding\n If binding to a new-style object instance, ``a.x`` is transformed\n into the call: ``type(a).__dict__[\'x\'].__get__(a, type(a))``.\n\nClass Binding\n If binding to a new-style class, ``A.x`` is transformed into the\n call: ``A.__dict__[\'x\'].__get__(None, A)``.\n\nSuper Binding\n If ``a`` is an instance of ``super``, then the binding ``super(B,\n obj).m()`` searches ``obj.__class__.__mro__`` for the base class\n ``A`` immediately preceding ``B`` and then invokes the descriptor\n with the call: ``A.__dict__[\'m\'].__get__(obj, A)``.\n\nFor instance bindings, the precedence of descriptor invocation depends\non the which descriptor methods are defined. A descriptor can define\nany combination of ``__get__()``, ``__set__()`` and ``__delete__()``.\nIf it does not define ``__get__()``, then accessing the attribute will\nreturn the descriptor object itself unless there is a value in the\nobject\'s instance dictionary. If the descriptor defines ``__set__()``\nand/or ``__delete__()``, it is a data descriptor; if it defines\nneither, it is a non-data descriptor. Normally, data descriptors\ndefine both ``__get__()`` and ``__set__()``, while non-data\ndescriptors have just the ``__get__()`` method. Data descriptors with\n``__set__()`` and ``__get__()`` defined always override a redefinition\nin an instance dictionary. In contrast, non-data descriptors can be\noverridden by instances.\n\nPython methods (including ``staticmethod()`` and ``classmethod()``)\nare implemented as non-data descriptors. Accordingly, instances can\nredefine and override methods. This allows individual instances to\nacquire behaviors that differ from other instances of the same class.\n\nThe ``property()`` function is implemented as a data descriptor.\nAccordingly, instances cannot override the behavior of a property.\n\n\n__slots__\n=========\n\nBy default, instances of both old and new-style classes have a\ndictionary for attribute storage. This wastes space for objects\nhaving very few instance variables. The space consumption can become\nacute when creating large numbers of instances.\n\nThe default can be overridden by defining *__slots__* in a new-style\nclass definition. The *__slots__* declaration takes a sequence of\ninstance variables and reserves just enough space in each instance to\nhold a value for each variable. 
Space is saved because *__dict__* is\nnot created for each instance.\n\n__slots__\n\n This class variable can be assigned a string, iterable, or sequence\n of strings with variable names used by instances. If defined in a\n new-style class, *__slots__* reserves space for the declared\n variables and prevents the automatic creation of *__dict__* and\n *__weakref__* for each instance.\n\n New in version 2.2.\n\nNotes on using *__slots__*\n\n* When inheriting from a class without *__slots__*, the *__dict__*\n attribute of that class will always be accessible, so a *__slots__*\n definition in the subclass is meaningless.\n\n* Without a *__dict__* variable, instances cannot be assigned new\n variables not listed in the *__slots__* definition. Attempts to\n assign to an unlisted variable name raises ``AttributeError``. If\n dynamic assignment of new variables is desired, then add\n ``\'__dict__\'`` to the sequence of strings in the *__slots__*\n declaration.\n\n Changed in version 2.3: Previously, adding ``\'__dict__\'`` to the\n *__slots__* declaration would not enable the assignment of new\n attributes not specifically listed in the sequence of instance\n variable names.\n\n* Without a *__weakref__* variable for each instance, classes defining\n *__slots__* do not support weak references to its instances. If weak\n reference support is needed, then add ``\'__weakref__\'`` to the\n sequence of strings in the *__slots__* declaration.\n\n Changed in version 2.3: Previously, adding ``\'__weakref__\'`` to the\n *__slots__* declaration would not enable support for weak\n references.\n\n* *__slots__* are implemented at the class level by creating\n descriptors (*Implementing Descriptors*) for each variable name. As\n a result, class attributes cannot be used to set default values for\n instance variables defined by *__slots__*; otherwise, the class\n attribute would overwrite the descriptor assignment.\n\n* The action of a *__slots__* declaration is limited to the class\n where it is defined. As a result, subclasses will have a *__dict__*\n unless they also define *__slots__* (which must only contain names\n of any *additional* slots).\n\n* If a class defines a slot also defined in a base class, the instance\n variable defined by the base class slot is inaccessible (except by\n retrieving its descriptor directly from the base class). This\n renders the meaning of the program undefined. In the future, a\n check may be added to prevent this.\n\n* Nonempty *__slots__* does not work for classes derived from\n "variable-length" built-in types such as ``long``, ``str`` and\n ``tuple``.\n\n* Any non-string iterable may be assigned to *__slots__*. Mappings may\n also be used; however, in the future, special meaning may be\n assigned to the values corresponding to each key.\n\n* *__class__* assignment works only if both classes have the same\n *__slots__*.\n\n Changed in version 2.6: Previously, *__class__* assignment raised an\n error if either new or old class had *__slots__*.\n', + 'attribute-access': u'\nCustomizing attribute access\n****************************\n\nThe following methods can be defined to customize the meaning of\nattribute access (use of, assignment to, or deletion of ``x.name``)\nfor class instances.\n\nobject.__getattr__(self, name)\n\n Called when an attribute lookup has not found the attribute in the\n usual places (i.e. it is not an instance attribute nor is it found\n in the class tree for ``self``). 
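A small sketch of such a fallback, assuming an invented class ``Defaults`` that serves attribute values out of a private mapping:

   class Defaults(object):
       def __init__(self, **defaults):
           self._defaults = defaults

       def __getattr__(self, name):
           # Reached only when normal lookup on the instance and its class fails.
           try:
               return self._defaults[name]
           except KeyError:
               raise AttributeError(name)

   cfg = Defaults(colour='blue')
   cfg.colour          # not a real attribute, so __getattr__() supplies 'blue'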
``name`` is the attribute name.\n This method should return the (computed) attribute value or raise\n an ``AttributeError`` exception.\n\n Note that if the attribute is found through the normal mechanism,\n ``__getattr__()`` is not called. (This is an intentional asymmetry\n between ``__getattr__()`` and ``__setattr__()``.) This is done both\n for efficiency reasons and because otherwise ``__getattr__()``\n would have no way to access other attributes of the instance. Note\n that at least for instance variables, you can fake total control by\n not inserting any values in the instance attribute dictionary (but\n instead inserting them in another object). See the\n ``__getattribute__()`` method below for a way to actually get total\n control in new-style classes.\n\nobject.__setattr__(self, name, value)\n\n Called when an attribute assignment is attempted. This is called\n instead of the normal mechanism (i.e. store the value in the\n instance dictionary). *name* is the attribute name, *value* is the\n value to be assigned to it.\n\n If ``__setattr__()`` wants to assign to an instance attribute, it\n should not simply execute ``self.name = value`` --- this would\n cause a recursive call to itself. Instead, it should insert the\n value in the dictionary of instance attributes, e.g.,\n ``self.__dict__[name] = value``. For new-style classes, rather\n than accessing the instance dictionary, it should call the base\n class method with the same name, for example,\n ``object.__setattr__(self, name, value)``.\n\nobject.__delattr__(self, name)\n\n Like ``__setattr__()`` but for attribute deletion instead of\n assignment. This should only be implemented if ``del obj.name`` is\n meaningful for the object.\n\n\nMore attribute access for new-style classes\n===========================================\n\nThe following methods only apply to new-style classes.\n\nobject.__getattribute__(self, name)\n\n Called unconditionally to implement attribute accesses for\n instances of the class. If the class also defines\n ``__getattr__()``, the latter will not be called unless\n ``__getattribute__()`` either calls it explicitly or raises an\n ``AttributeError``. This method should return the (computed)\n attribute value or raise an ``AttributeError`` exception. In order\n to avoid infinite recursion in this method, its implementation\n should always call the base class method with the same name to\n access any attributes it needs, for example,\n ``object.__getattribute__(self, name)``.\n\n Note: This method may still be bypassed when looking up special methods\n as the result of implicit invocation via language syntax or\n built-in functions. See *Special method lookup for new-style\n classes*.\n\n\nImplementing Descriptors\n========================\n\nThe following methods only apply when an instance of the class\ncontaining the method (a so-called *descriptor* class) appears in an\n*owner* class (the descriptor must be in either the owner\'s class\ndictionary or in the class dictionary for one of its parents). In the\nexamples below, "the attribute" refers to the attribute whose name is\nthe key of the property in the owner class\' ``__dict__``.\n\nobject.__get__(self, instance, owner)\n\n Called to get the attribute of the owner class (class attribute\n access) or of an instance of that class (instance attribute\n access). *owner* is always the owner class, while *instance* is the\n instance that the attribute was accessed through, or ``None`` when\n the attribute is accessed through the *owner*. 
This method should\n return the (computed) attribute value or raise an\n ``AttributeError`` exception.\n\nobject.__set__(self, instance, value)\n\n Called to set the attribute on an instance *instance* of the owner\n class to a new value, *value*.\n\nobject.__delete__(self, instance)\n\n Called to delete the attribute on an instance *instance* of the\n owner class.\n\n\nInvoking Descriptors\n====================\n\nIn general, a descriptor is an object attribute with "binding\nbehavior", one whose attribute access has been overridden by methods\nin the descriptor protocol: ``__get__()``, ``__set__()``, and\n``__delete__()``. If any of those methods are defined for an object,\nit is said to be a descriptor.\n\nThe default behavior for attribute access is to get, set, or delete\nthe attribute from an object\'s dictionary. For instance, ``a.x`` has a\nlookup chain starting with ``a.__dict__[\'x\']``, then\n``type(a).__dict__[\'x\']``, and continuing through the base classes of\n``type(a)`` excluding metaclasses.\n\nHowever, if the looked-up value is an object defining one of the\ndescriptor methods, then Python may override the default behavior and\ninvoke the descriptor method instead. Where this occurs in the\nprecedence chain depends on which descriptor methods were defined and\nhow they were called. Note that descriptors are only invoked for new\nstyle objects or classes (ones that subclass ``object()`` or\n``type()``).\n\nThe starting point for descriptor invocation is a binding, ``a.x``.\nHow the arguments are assembled depends on ``a``:\n\nDirect Call\n The simplest and least common call is when user code directly\n invokes a descriptor method: ``x.__get__(a)``.\n\nInstance Binding\n If binding to a new-style object instance, ``a.x`` is transformed\n into the call: ``type(a).__dict__[\'x\'].__get__(a, type(a))``.\n\nClass Binding\n If binding to a new-style class, ``A.x`` is transformed into the\n call: ``A.__dict__[\'x\'].__get__(None, A)``.\n\nSuper Binding\n If ``a`` is an instance of ``super``, then the binding ``super(B,\n obj).m()`` searches ``obj.__class__.__mro__`` for the base class\n ``A`` immediately preceding ``B`` and then invokes the descriptor\n with the call: ``A.__dict__[\'m\'].__get__(obj, obj.__class__)``.\n\nFor instance bindings, the precedence of descriptor invocation depends\non the which descriptor methods are defined. A descriptor can define\nany combination of ``__get__()``, ``__set__()`` and ``__delete__()``.\nIf it does not define ``__get__()``, then accessing the attribute will\nreturn the descriptor object itself unless there is a value in the\nobject\'s instance dictionary. If the descriptor defines ``__set__()``\nand/or ``__delete__()``, it is a data descriptor; if it defines\nneither, it is a non-data descriptor. Normally, data descriptors\ndefine both ``__get__()`` and ``__set__()``, while non-data\ndescriptors have just the ``__get__()`` method. Data descriptors with\n``__set__()`` and ``__get__()`` defined always override a redefinition\nin an instance dictionary. In contrast, non-data descriptors can be\noverridden by instances.\n\nPython methods (including ``staticmethod()`` and ``classmethod()``)\nare implemented as non-data descriptors. Accordingly, instances can\nredefine and override methods. 
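For instance (the class ``Greeter`` and the replacement function are invented):

   class Greeter(object):
       def greet(self):
           return "hello"

   g = Greeter()
   g.greet = lambda: "hi there"   # an instance attribute now shadows the class's method
   g.greet()                      # -> "hi there"; other Greeter instances still say "hello"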
This allows individual instances to\nacquire behaviors that differ from other instances of the same class.\n\nThe ``property()`` function is implemented as a data descriptor.\nAccordingly, instances cannot override the behavior of a property.\n\n\n__slots__\n=========\n\nBy default, instances of both old and new-style classes have a\ndictionary for attribute storage. This wastes space for objects\nhaving very few instance variables. The space consumption can become\nacute when creating large numbers of instances.\n\nThe default can be overridden by defining *__slots__* in a new-style\nclass definition. The *__slots__* declaration takes a sequence of\ninstance variables and reserves just enough space in each instance to\nhold a value for each variable. Space is saved because *__dict__* is\nnot created for each instance.\n\n__slots__\n\n This class variable can be assigned a string, iterable, or sequence\n of strings with variable names used by instances. If defined in a\n new-style class, *__slots__* reserves space for the declared\n variables and prevents the automatic creation of *__dict__* and\n *__weakref__* for each instance.\n\n New in version 2.2.\n\nNotes on using *__slots__*\n\n* When inheriting from a class without *__slots__*, the *__dict__*\n attribute of that class will always be accessible, so a *__slots__*\n definition in the subclass is meaningless.\n\n* Without a *__dict__* variable, instances cannot be assigned new\n variables not listed in the *__slots__* definition. Attempts to\n assign to an unlisted variable name raises ``AttributeError``. If\n dynamic assignment of new variables is desired, then add\n ``\'__dict__\'`` to the sequence of strings in the *__slots__*\n declaration.\n\n Changed in version 2.3: Previously, adding ``\'__dict__\'`` to the\n *__slots__* declaration would not enable the assignment of new\n attributes not specifically listed in the sequence of instance\n variable names.\n\n* Without a *__weakref__* variable for each instance, classes defining\n *__slots__* do not support weak references to its instances. If weak\n reference support is needed, then add ``\'__weakref__\'`` to the\n sequence of strings in the *__slots__* declaration.\n\n Changed in version 2.3: Previously, adding ``\'__weakref__\'`` to the\n *__slots__* declaration would not enable support for weak\n references.\n\n* *__slots__* are implemented at the class level by creating\n descriptors (*Implementing Descriptors*) for each variable name. As\n a result, class attributes cannot be used to set default values for\n instance variables defined by *__slots__*; otherwise, the class\n attribute would overwrite the descriptor assignment.\n\n* The action of a *__slots__* declaration is limited to the class\n where it is defined. As a result, subclasses will have a *__dict__*\n unless they also define *__slots__* (which must only contain names\n of any *additional* slots).\n\n* If a class defines a slot also defined in a base class, the instance\n variable defined by the base class slot is inaccessible (except by\n retrieving its descriptor directly from the base class). This\n renders the meaning of the program undefined. In the future, a\n check may be added to prevent this.\n\n* Nonempty *__slots__* does not work for classes derived from\n "variable-length" built-in types such as ``long``, ``str`` and\n ``tuple``.\n\n* Any non-string iterable may be assigned to *__slots__*. 
Mappings may\n also be used; however, in the future, special meaning may be\n assigned to the values corresponding to each key.\n\n* *__class__* assignment works only if both classes have the same\n *__slots__*.\n\n Changed in version 2.6: Previously, *__class__* assignment raised an\n error if either new or old class had *__slots__*.\n', 'attribute-references': u'\nAttribute references\n********************\n\nAn attribute reference is a primary followed by a period and a name:\n\n attributeref ::= primary "." identifier\n\nThe primary must evaluate to an object of a type that supports\nattribute references, e.g., a module, list, or an instance. This\nobject is then asked to produce the attribute whose name is the\nidentifier. If this attribute is not available, the exception\n``AttributeError`` is raised. Otherwise, the type and value of the\nobject produced is determined by the object. Multiple evaluations of\nthe same attribute reference may yield different objects.\n', 'augassign': u'\nAugmented assignment statements\n*******************************\n\nAugmented assignment is the combination, in a single statement, of a\nbinary operation and an assignment statement:\n\n augmented_assignment_stmt ::= augtarget augop (expression_list | yield_expression)\n augtarget ::= identifier | attributeref | subscription | slicing\n augop ::= "+=" | "-=" | "*=" | "/=" | "//=" | "%=" | "**="\n | ">>=" | "<<=" | "&=" | "^=" | "|="\n\n(See section *Primaries* for the syntax definitions for the last three\nsymbols.)\n\nAn augmented assignment evaluates the target (which, unlike normal\nassignment statements, cannot be an unpacking) and the expression\nlist, performs the binary operation specific to the type of assignment\non the two operands, and assigns the result to the original target.\nThe target is only evaluated once.\n\nAn augmented assignment expression like ``x += 1`` can be rewritten as\n``x = x + 1`` to achieve a similar, but not exactly equal effect. In\nthe augmented version, ``x`` is only evaluated once. Also, when\npossible, the actual operation is performed *in-place*, meaning that\nrather than creating a new object and assigning that to the target,\nthe old object is modified instead.\n\nWith the exception of assigning to tuples and multiple targets in a\nsingle statement, the assignment done by augmented assignment\nstatements is handled the same way as normal assignments. Similarly,\nwith the exception of the possible *in-place* behavior, the binary\noperation performed by augmented assignment is the same as the normal\nbinary operations.\n\nFor targets which are attribute references, the same *caveat about\nclass and instance attributes* applies as for regular assignments.\n', 'binary': u'\nBinary arithmetic operations\n****************************\n\nThe binary arithmetic operations have the conventional priority\nlevels. Note that some of these operations also apply to certain non-\nnumeric types. Apart from the power operator, there are only two\nlevels, one for multiplicative operators and one for additive\noperators:\n\n m_expr ::= u_expr | m_expr "*" u_expr | m_expr "//" u_expr | m_expr "/" u_expr\n | m_expr "%" u_expr\n a_expr ::= m_expr | a_expr "+" m_expr | a_expr "-" m_expr\n\nThe ``*`` (multiplication) operator yields the product of its\narguments. The arguments must either both be numbers, or one argument\nmust be an integer (plain or long) and the other must be a sequence.\nIn the former case, the numbers are converted to a common type and\nthen multiplied together. 
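For example (the operand values are arbitrary):

   2 * 3.5        # int and float are converted to a common type; the result is 7.0
   10L * 3        # plain and long integers mix the same way; the result is 30L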
In the latter case, sequence repetition is\nperformed; a negative repetition factor yields an empty sequence.\n\nThe ``/`` (division) and ``//`` (floor division) operators yield the\nquotient of their arguments. The numeric arguments are first\nconverted to a common type. Plain or long integer division yields an\ninteger of the same type; the result is that of mathematical division\nwith the \'floor\' function applied to the result. Division by zero\nraises the ``ZeroDivisionError`` exception.\n\nThe ``%`` (modulo) operator yields the remainder from the division of\nthe first argument by the second. The numeric arguments are first\nconverted to a common type. A zero right argument raises the\n``ZeroDivisionError`` exception. The arguments may be floating point\nnumbers, e.g., ``3.14%0.7`` equals ``0.34`` (since ``3.14`` equals\n``4*0.7 + 0.34``.) The modulo operator always yields a result with\nthe same sign as its second operand (or zero); the absolute value of\nthe result is strictly smaller than the absolute value of the second\noperand [2].\n\nThe integer division and modulo operators are connected by the\nfollowing identity: ``x == (x/y)*y + (x%y)``. Integer division and\nmodulo are also connected with the built-in function ``divmod()``:\n``divmod(x, y) == (x/y, x%y)``. These identities don\'t hold for\nfloating point numbers; there similar identities hold approximately\nwhere ``x/y`` is replaced by ``floor(x/y)`` or ``floor(x/y) - 1`` [3].\n\nIn addition to performing the modulo operation on numbers, the ``%``\noperator is also overloaded by string and unicode objects to perform\nstring formatting (also known as interpolation). The syntax for string\nformatting is described in the Python Library Reference, section\n*String Formatting Operations*.\n\nDeprecated since version 2.3: The floor division operator, the modulo\noperator, and the ``divmod()`` function are no longer defined for\ncomplex numbers. Instead, convert to a floating point number using\nthe ``abs()`` function if appropriate.\n\nThe ``+`` (addition) operator yields the sum of its arguments. The\narguments must either both be numbers or both sequences of the same\ntype. In the former case, the numbers are converted to a common type\nand then added together. In the latter case, the sequences are\nconcatenated.\n\nThe ``-`` (subtraction) operator yields the difference of its\narguments. The numeric arguments are first converted to a common\ntype.\n', 'bitwise': u'\nBinary bitwise operations\n*************************\n\nEach of the three bitwise operations has a different priority level:\n\n and_expr ::= shift_expr | and_expr "&" shift_expr\n xor_expr ::= and_expr | xor_expr "^" and_expr\n or_expr ::= xor_expr | or_expr "|" xor_expr\n\nThe ``&`` operator yields the bitwise AND of its arguments, which must\nbe plain or long integers. The arguments are converted to a common\ntype.\n\nThe ``^`` operator yields the bitwise XOR (exclusive OR) of its\narguments, which must be plain or long integers. The arguments are\nconverted to a common type.\n\nThe ``|`` operator yields the bitwise (inclusive) OR of its arguments,\nwhich must be plain or long integers. The arguments are converted to\na common type.\n', 'bltin-code-objects': u'\nCode Objects\n************\n\nCode objects are used by the implementation to represent "pseudo-\ncompiled" executable Python code such as a function body. They differ\nfrom function objects because they don\'t contain a reference to their\nglobal execution environment. 
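A small sketch of that separation, using the built-in ``compile()`` described just below (the expression and the values are arbitrary):

   code = compile("x * 2", "<string>", "eval")
   eval(code, {"x": 21})          # -> 42
   eval(code, {"x": 5})           # -> 10; the globals are supplied at evaluation time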
Code objects are returned by the built-\nin ``compile()`` function and can be extracted from function objects\nthrough their ``func_code`` attribute. See also the ``code`` module.\n\nA code object can be executed or evaluated by passing it (instead of a\nsource string) to the ``exec`` statement or the built-in ``eval()``\nfunction.\n\nSee *The standard type hierarchy* for more information.\n', 'bltin-ellipsis-object': u'\nThe Ellipsis Object\n*******************\n\nThis object is used by extended slice notation (see *Slicings*). It\nsupports no special operations. There is exactly one ellipsis object,\nnamed ``Ellipsis`` (a built-in name).\n\nIt is written as ``Ellipsis``.\n', - 'bltin-file-objects': u'\nFile Objects\n************\n\nFile objects are implemented using C\'s ``stdio`` package and can be\ncreated with the built-in ``open()`` function. File objects are also\nreturned by some other built-in functions and methods, such as\n``os.popen()`` and ``os.fdopen()`` and the ``makefile()`` method of\nsocket objects. Temporary files can be created using the ``tempfile``\nmodule, and high-level file operations such as copying, moving, and\ndeleting files and directories can be achieved with the ``shutil``\nmodule.\n\nWhen a file operation fails for an I/O-related reason, the exception\n``IOError`` is raised. This includes situations where the operation\nis not defined for some reason, like ``seek()`` on a tty device or\nwriting a file opened for reading.\n\nFiles have the following methods:\n\nfile.close()\n\n Close the file. A closed file cannot be read or written any more.\n Any operation which requires that the file be open will raise a\n ``ValueError`` after the file has been closed. Calling ``close()``\n more than once is allowed.\n\n As of Python 2.5, you can avoid having to call this method\n explicitly if you use the ``with`` statement. For example, the\n following code will automatically close *f* when the ``with`` block\n is exited:\n\n from __future__ import with_statement # This isn\'t required in Python 2.6\n\n with open("hello.txt") as f:\n for line in f:\n print line\n\n In older versions of Python, you would have needed to do this to\n get the same effect:\n\n f = open("hello.txt")\n try:\n for line in f:\n print line\n finally:\n f.close()\n\n Note: Not all "file-like" types in Python support use as a context\n manager for the ``with`` statement. If your code is intended to\n work with any file-like object, you can use the function\n ``contextlib.closing()`` instead of using the object directly.\n\nfile.flush()\n\n Flush the internal buffer, like ``stdio``\'s ``fflush()``. 
This may\n be a no-op on some file-like objects.\n\n Note: ``flush()`` does not necessarily write the file\'s data to disk.\n Use ``flush()`` followed by ``os.fsync()`` to ensure this\n behavior.\n\nfile.fileno()\n\n Return the integer "file descriptor" that is used by the underlying\n implementation to request I/O operations from the operating system.\n This can be useful for other, lower level interfaces that use file\n descriptors, such as the ``fcntl`` module or ``os.read()`` and\n friends.\n\n Note: File-like objects which do not have a real file descriptor should\n *not* provide this method!\n\nfile.isatty()\n\n Return ``True`` if the file is connected to a tty(-like) device,\n else ``False``.\n\n Note: If a file-like object is not associated with a real file, this\n method should *not* be implemented.\n\nfile.next()\n\n A file object is its own iterator, for example ``iter(f)`` returns\n *f* (unless *f* is closed). When a file is used as an iterator,\n typically in a ``for`` loop (for example, ``for line in f: print\n line``), the ``next()`` method is called repeatedly. This method\n returns the next input line, or raises ``StopIteration`` when EOF\n is hit when the file is open for reading (behavior is undefined\n when the file is open for writing). In order to make a ``for``\n loop the most efficient way of looping over the lines of a file (a\n very common operation), the ``next()`` method uses a hidden read-\n ahead buffer. As a consequence of using a read-ahead buffer,\n combining ``next()`` with other file methods (like ``readline()``)\n does not work right. However, using ``seek()`` to reposition the\n file to an absolute position will flush the read-ahead buffer.\n\n New in version 2.3.\n\nfile.read([size])\n\n Read at most *size* bytes from the file (less if the read hits EOF\n before obtaining *size* bytes). If the *size* argument is negative\n or omitted, read all data until EOF is reached. The bytes are\n returned as a string object. An empty string is returned when EOF\n is encountered immediately. (For certain files, like ttys, it\n makes sense to continue reading after an EOF is hit.) Note that\n this method may call the underlying C function ``fread()`` more\n than once in an effort to acquire as close to *size* bytes as\n possible. Also note that when in non-blocking mode, less data than\n was requested may be returned, even if no *size* parameter was\n given.\n\n Note: This function is simply a wrapper for the underlying ``fread()``\n C function, and will behave the same in corner cases, such as\n whether the EOF value is cached.\n\nfile.readline([size])\n\n Read one entire line from the file. A trailing newline character\n is kept in the string (but may be absent when a file ends with an\n incomplete line). [5] If the *size* argument is present and non-\n negative, it is a maximum byte count (including the trailing\n newline) and an incomplete line may be returned. An empty string is\n returned *only* when EOF is encountered immediately.\n\n Note: Unlike ``stdio``\'s ``fgets()``, the returned string contains null\n characters (``\'\\0\'``) if they occurred in the input.\n\nfile.readlines([sizehint])\n\n Read until EOF using ``readline()`` and return a list containing\n the lines thus read. If the optional *sizehint* argument is\n present, instead of reading up to EOF, whole lines totalling\n approximately *sizehint* bytes (possibly after rounding up to an\n internal buffer size) are read. 
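For example, a bounded-memory processing loop along these lines (the file name and the *sizehint* value are only illustrative):

   f = open("big.log")
   while True:
       lines = f.readlines(100000)    # roughly 100 kB worth of complete lines
       if not lines:
           break
       for line in lines:
           pass                       # process each line here
   f.close()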
Objects implementing a file-like\n interface may choose to ignore *sizehint* if it cannot be\n implemented, or cannot be implemented efficiently.\n\nfile.xreadlines()\n\n This method returns the same thing as ``iter(f)``.\n\n New in version 2.1.\n\n Deprecated since version 2.3: Use ``for line in file`` instead.\n\nfile.seek(offset[, whence])\n\n Set the file\'s current position, like ``stdio``\'s ``fseek()``. The\n *whence* argument is optional and defaults to ``os.SEEK_SET`` or\n ``0`` (absolute file positioning); other values are ``os.SEEK_CUR``\n or ``1`` (seek relative to the current position) and\n ``os.SEEK_END`` or ``2`` (seek relative to the file\'s end). There\n is no return value.\n\n For example, ``f.seek(2, os.SEEK_CUR)`` advances the position by\n two and ``f.seek(-3, os.SEEK_END)`` sets the position to the third\n to last.\n\n Note that if the file is opened for appending (mode ``\'a\'`` or\n ``\'a+\'``), any ``seek()`` operations will be undone at the next\n write. If the file is only opened for writing in append mode (mode\n ``\'a\'``), this method is essentially a no-op, but it remains useful\n for files opened in append mode with reading enabled (mode\n ``\'a+\'``). If the file is opened in text mode (without ``\'b\'``),\n only offsets returned by ``tell()`` are legal. Use of other\n offsets causes undefined behavior.\n\n Note that not all file objects are seekable.\n\n Changed in version 2.6: Passing float values as offset has been\n deprecated.\n\nfile.tell()\n\n Return the file\'s current position, like ``stdio``\'s ``ftell()``.\n\n Note: On Windows, ``tell()`` can return illegal values (after an\n ``fgets()``) when reading files with Unix-style line-endings. Use\n binary mode (``\'rb\'``) to circumvent this problem.\n\nfile.truncate([size])\n\n Truncate the file\'s size. If the optional *size* argument is\n present, the file is truncated to (at most) that size. The size\n defaults to the current position. The current file position is not\n changed. Note that if a specified size exceeds the file\'s current\n size, the result is platform-dependent: possibilities include that\n the file may remain unchanged, increase to the specified size as if\n zero-filled, or increase to the specified size with undefined new\n content. Availability: Windows, many Unix variants.\n\nfile.write(str)\n\n Write a string to the file. There is no return value. Due to\n buffering, the string may not actually show up in the file until\n the ``flush()`` or ``close()`` method is called.\n\nfile.writelines(sequence)\n\n Write a sequence of strings to the file. The sequence can be any\n iterable object producing strings, typically a list of strings.\n There is no return value. (The name is intended to match\n ``readlines()``; ``writelines()`` does not add line separators.)\n\nFiles support the iterator protocol. Each iteration returns the same\nresult as ``file.readline()``, and iteration ends when the\n``readline()`` method returns an empty string.\n\nFile objects also offer a number of other interesting attributes.\nThese are not required for file-like objects, but should be\nimplemented if they make sense for the particular object.\n\nfile.closed\n\n bool indicating the current state of the file object. This is a\n read-only attribute; the ``close()`` method changes the value. It\n may not be available on all file-like objects.\n\nfile.encoding\n\n The encoding that this file uses. When Unicode strings are written\n to a file, they will be converted to byte strings using this\n encoding. 
In addition, when the file is connected to a terminal,\n the attribute gives the encoding that the terminal is likely to use\n (that information might be incorrect if the user has misconfigured\n the terminal). The attribute is read-only and may not be present\n on all file-like objects. It may also be ``None``, in which case\n the file uses the system default encoding for converting Unicode\n strings.\n\n New in version 2.3.\n\nfile.errors\n\n The Unicode error handler used along with the encoding.\n\n New in version 2.6.\n\nfile.mode\n\n The I/O mode for the file. If the file was created using the\n ``open()`` built-in function, this will be the value of the *mode*\n parameter. This is a read-only attribute and may not be present on\n all file-like objects.\n\nfile.name\n\n If the file object was created using ``open()``, the name of the\n file. Otherwise, some string that indicates the source of the file\n object, of the form ``<...>``. This is a read-only attribute and\n may not be present on all file-like objects.\n\nfile.newlines\n\n If Python was built with the *--with-universal-newlines* option to\n **configure** (the default) this read-only attribute exists, and\n for files opened in universal newline read mode it keeps track of\n the types of newlines encountered while reading the file. The\n values it can take are ``\'\\r\'``, ``\'\\n\'``, ``\'\\r\\n\'``, ``None``\n (unknown, no newlines read yet) or a tuple containing all the\n newline types seen, to indicate that multiple newline conventions\n were encountered. For files not opened in universal newline read\n mode the value of this attribute will be ``None``.\n\nfile.softspace\n\n Boolean that indicates whether a space character needs to be\n printed before another value when using the ``print`` statement.\n Classes that are trying to simulate a file object should also have\n a writable ``softspace`` attribute, which should be initialized to\n zero. This will be automatic for most classes implemented in\n Python (care may be needed for objects that override attribute\n access); types implemented in C will have to provide a writable\n ``softspace`` attribute.\n\n Note: This attribute is not used to control the ``print`` statement,\n but to allow the implementation of ``print`` to keep track of its\n internal state.\n', + 'bltin-file-objects': u'\nFile Objects\n************\n\nFile objects are implemented using C\'s ``stdio`` package and can be\ncreated with the built-in ``open()`` function. File objects are also\nreturned by some other built-in functions and methods, such as\n``os.popen()`` and ``os.fdopen()`` and the ``makefile()`` method of\nsocket objects. Temporary files can be created using the ``tempfile``\nmodule, and high-level file operations such as copying, moving, and\ndeleting files and directories can be achieved with the ``shutil``\nmodule.\n\nWhen a file operation fails for an I/O-related reason, the exception\n``IOError`` is raised. This includes situations where the operation\nis not defined for some reason, like ``seek()`` on a tty device or\nwriting a file opened for reading.\n\nFiles have the following methods:\n\nfile.close()\n\n Close the file. A closed file cannot be read or written any more.\n Any operation which requires that the file be open will raise a\n ``ValueError`` after the file has been closed. Calling ``close()``\n more than once is allowed.\n\n As of Python 2.5, you can avoid having to call this method\n explicitly if you use the ``with`` statement. 
For example, the\n following code will automatically close *f* when the ``with`` block\n is exited:\n\n from __future__ import with_statement # This isn\'t required in Python 2.6\n\n with open("hello.txt") as f:\n for line in f:\n print line\n\n In older versions of Python, you would have needed to do this to\n get the same effect:\n\n f = open("hello.txt")\n try:\n for line in f:\n print line\n finally:\n f.close()\n\n Note: Not all "file-like" types in Python support use as a context\n manager for the ``with`` statement. If your code is intended to\n work with any file-like object, you can use the function\n ``contextlib.closing()`` instead of using the object directly.\n\nfile.flush()\n\n Flush the internal buffer, like ``stdio``\'s ``fflush()``. This may\n be a no-op on some file-like objects.\n\n Note: ``flush()`` does not necessarily write the file\'s data to disk.\n Use ``flush()`` followed by ``os.fsync()`` to ensure this\n behavior.\n\nfile.fileno()\n\n Return the integer "file descriptor" that is used by the underlying\n implementation to request I/O operations from the operating system.\n This can be useful for other, lower level interfaces that use file\n descriptors, such as the ``fcntl`` module or ``os.read()`` and\n friends.\n\n Note: File-like objects which do not have a real file descriptor should\n *not* provide this method!\n\nfile.isatty()\n\n Return ``True`` if the file is connected to a tty(-like) device,\n else ``False``.\n\n Note: If a file-like object is not associated with a real file, this\n method should *not* be implemented.\n\nfile.next()\n\n A file object is its own iterator, for example ``iter(f)`` returns\n *f* (unless *f* is closed). When a file is used as an iterator,\n typically in a ``for`` loop (for example, ``for line in f: print\n line``), the ``next()`` method is called repeatedly. This method\n returns the next input line, or raises ``StopIteration`` when EOF\n is hit when the file is open for reading (behavior is undefined\n when the file is open for writing). In order to make a ``for``\n loop the most efficient way of looping over the lines of a file (a\n very common operation), the ``next()`` method uses a hidden read-\n ahead buffer. As a consequence of using a read-ahead buffer,\n combining ``next()`` with other file methods (like ``readline()``)\n does not work right. However, using ``seek()`` to reposition the\n file to an absolute position will flush the read-ahead buffer.\n\n New in version 2.3.\n\nfile.read([size])\n\n Read at most *size* bytes from the file (less if the read hits EOF\n before obtaining *size* bytes). If the *size* argument is negative\n or omitted, read all data until EOF is reached. The bytes are\n returned as a string object. An empty string is returned when EOF\n is encountered immediately. (For certain files, like ttys, it\n makes sense to continue reading after an EOF is hit.) Note that\n this method may call the underlying C function ``fread()`` more\n than once in an effort to acquire as close to *size* bytes as\n possible. Also note that when in non-blocking mode, less data than\n was requested may be returned, even if no *size* parameter was\n given.\n\n Note: This function is simply a wrapper for the underlying ``fread()``\n C function, and will behave the same in corner cases, such as\n whether the EOF value is cached.\n\nfile.readline([size])\n\n Read one entire line from the file. A trailing newline character\n is kept in the string (but may be absent when a file ends with an\n incomplete line). 
[5] If the *size* argument is present and non-\n negative, it is a maximum byte count (including the trailing\n newline) and an incomplete line may be returned. When *size* is not\n 0, an empty string is returned *only* when EOF is encountered\n immediately.\n\n Note: Unlike ``stdio``\'s ``fgets()``, the returned string contains null\n characters (``\'\\0\'``) if they occurred in the input.\n\nfile.readlines([sizehint])\n\n Read until EOF using ``readline()`` and return a list containing\n the lines thus read. If the optional *sizehint* argument is\n present, instead of reading up to EOF, whole lines totalling\n approximately *sizehint* bytes (possibly after rounding up to an\n internal buffer size) are read. Objects implementing a file-like\n interface may choose to ignore *sizehint* if it cannot be\n implemented, or cannot be implemented efficiently.\n\nfile.xreadlines()\n\n This method returns the same thing as ``iter(f)``.\n\n New in version 2.1.\n\n Deprecated since version 2.3: Use ``for line in file`` instead.\n\nfile.seek(offset[, whence])\n\n Set the file\'s current position, like ``stdio``\'s ``fseek()``. The\n *whence* argument is optional and defaults to ``os.SEEK_SET`` or\n ``0`` (absolute file positioning); other values are ``os.SEEK_CUR``\n or ``1`` (seek relative to the current position) and\n ``os.SEEK_END`` or ``2`` (seek relative to the file\'s end). There\n is no return value.\n\n For example, ``f.seek(2, os.SEEK_CUR)`` advances the position by\n two and ``f.seek(-3, os.SEEK_END)`` sets the position to the third\n to last.\n\n Note that if the file is opened for appending (mode ``\'a\'`` or\n ``\'a+\'``), any ``seek()`` operations will be undone at the next\n write. If the file is only opened for writing in append mode (mode\n ``\'a\'``), this method is essentially a no-op, but it remains useful\n for files opened in append mode with reading enabled (mode\n ``\'a+\'``). If the file is opened in text mode (without ``\'b\'``),\n only offsets returned by ``tell()`` are legal. Use of other\n offsets causes undefined behavior.\n\n Note that not all file objects are seekable.\n\n Changed in version 2.6: Passing float values as offset has been\n deprecated.\n\nfile.tell()\n\n Return the file\'s current position, like ``stdio``\'s ``ftell()``.\n\n Note: On Windows, ``tell()`` can return illegal values (after an\n ``fgets()``) when reading files with Unix-style line-endings. Use\n binary mode (``\'rb\'``) to circumvent this problem.\n\nfile.truncate([size])\n\n Truncate the file\'s size. If the optional *size* argument is\n present, the file is truncated to (at most) that size. The size\n defaults to the current position. The current file position is not\n changed. Note that if a specified size exceeds the file\'s current\n size, the result is platform-dependent: possibilities include that\n the file may remain unchanged, increase to the specified size as if\n zero-filled, or increase to the specified size with undefined new\n content. Availability: Windows, many Unix variants.\n\nfile.write(str)\n\n Write a string to the file. There is no return value. Due to\n buffering, the string may not actually show up in the file until\n the ``flush()`` or ``close()`` method is called.\n\nfile.writelines(sequence)\n\n Write a sequence of strings to the file. The sequence can be any\n iterable object producing strings, typically a list of strings.\n There is no return value. 
(The name is intended to match\n ``readlines()``; ``writelines()`` does not add line separators.)\n\nFiles support the iterator protocol. Each iteration returns the same\nresult as ``file.readline()``, and iteration ends when the\n``readline()`` method returns an empty string.\n\nFile objects also offer a number of other interesting attributes.\nThese are not required for file-like objects, but should be\nimplemented if they make sense for the particular object.\n\nfile.closed\n\n bool indicating the current state of the file object. This is a\n read-only attribute; the ``close()`` method changes the value. It\n may not be available on all file-like objects.\n\nfile.encoding\n\n The encoding that this file uses. When Unicode strings are written\n to a file, they will be converted to byte strings using this\n encoding. In addition, when the file is connected to a terminal,\n the attribute gives the encoding that the terminal is likely to use\n (that information might be incorrect if the user has misconfigured\n the terminal). The attribute is read-only and may not be present\n on all file-like objects. It may also be ``None``, in which case\n the file uses the system default encoding for converting Unicode\n strings.\n\n New in version 2.3.\n\nfile.errors\n\n The Unicode error handler used along with the encoding.\n\n New in version 2.6.\n\nfile.mode\n\n The I/O mode for the file. If the file was created using the\n ``open()`` built-in function, this will be the value of the *mode*\n parameter. This is a read-only attribute and may not be present on\n all file-like objects.\n\nfile.name\n\n If the file object was created using ``open()``, the name of the\n file. Otherwise, some string that indicates the source of the file\n object, of the form ``<...>``. This is a read-only attribute and\n may not be present on all file-like objects.\n\nfile.newlines\n\n If Python was built with universal newlines enabled (the default)\n this read-only attribute exists, and for files opened in universal\n newline read mode it keeps track of the types of newlines\n encountered while reading the file. The values it can take are\n ``\'\\r\'``, ``\'\\n\'``, ``\'\\r\\n\'``, ``None`` (unknown, no newlines read\n yet) or a tuple containing all the newline types seen, to indicate\n that multiple newline conventions were encountered. For files not\n opened in universal newline read mode the value of this attribute\n will be ``None``.\n\nfile.softspace\n\n Boolean that indicates whether a space character needs to be\n printed before another value when using the ``print`` statement.\n Classes that are trying to simulate a file object should also have\n a writable ``softspace`` attribute, which should be initialized to\n zero. This will be automatic for most classes implemented in\n Python (care may be needed for objects that override attribute\n access); types implemented in C will have to provide a writable\n ``softspace`` attribute.\n\n Note: This attribute is not used to control the ``print`` statement,\n but to allow the implementation of ``print`` to keep track of its\n internal state.\n', 'bltin-null-object': u"\nThe Null Object\n***************\n\nThis object is returned by functions that don't explicitly return a\nvalue. It supports no special operations. There is exactly one null\nobject, named ``None`` (a built-in name).\n\nIt is written as ``None``.\n", 'bltin-type-objects': u"\nType Objects\n************\n\nType objects represent the various object types. 
An object's type is\naccessed by the built-in function ``type()``. There are no special\noperations on types. The standard module ``types`` defines names for\nall standard built-in types.\n\nTypes are written like this: ````.\n", 'booleans': u'\nBoolean operations\n******************\n\n or_test ::= and_test | or_test "or" and_test\n and_test ::= not_test | and_test "and" not_test\n not_test ::= comparison | "not" not_test\n\nIn the context of Boolean operations, and also when expressions are\nused by control flow statements, the following values are interpreted\nas false: ``False``, ``None``, numeric zero of all types, and empty\nstrings and containers (including strings, tuples, lists,\ndictionaries, sets and frozensets). All other values are interpreted\nas true. (See the ``__nonzero__()`` special method for a way to\nchange this.)\n\nThe operator ``not`` yields ``True`` if its argument is false,\n``False`` otherwise.\n\nThe expression ``x and y`` first evaluates *x*; if *x* is false, its\nvalue is returned; otherwise, *y* is evaluated and the resulting value\nis returned.\n\nThe expression ``x or y`` first evaluates *x*; if *x* is true, its\nvalue is returned; otherwise, *y* is evaluated and the resulting value\nis returned.\n\n(Note that neither ``and`` nor ``or`` restrict the value and type they\nreturn to ``False`` and ``True``, but rather return the last evaluated\nargument. This is sometimes useful, e.g., if ``s`` is a string that\nshould be replaced by a default value if it is empty, the expression\n``s or \'foo\'`` yields the desired value. Because ``not`` has to\ninvent a value anyway, it does not bother to return a value of the\nsame type as its argument, so e.g., ``not \'foo\'`` yields ``False``,\nnot ``\'\'``.)\n', @@ -20,39 +20,39 @@ 'class': u'\nClass definitions\n*****************\n\nA class definition defines a class object (see section *The standard\ntype hierarchy*):\n\n classdef ::= "class" classname [inheritance] ":" suite\n inheritance ::= "(" [expression_list] ")"\n classname ::= identifier\n\nA class definition is an executable statement. It first evaluates the\ninheritance list, if present. Each item in the inheritance list\nshould evaluate to a class object or class type which allows\nsubclassing. The class\'s suite is then executed in a new execution\nframe (see section *Naming and binding*), using a newly created local\nnamespace and the original global namespace. (Usually, the suite\ncontains only function definitions.) When the class\'s suite finishes\nexecution, its execution frame is discarded but its local namespace is\nsaved. [4] A class object is then created using the inheritance list\nfor the base classes and the saved local namespace for the attribute\ndictionary. The class name is bound to this class object in the\noriginal local namespace.\n\n**Programmer\'s note:** Variables defined in the class definition are\nclass variables; they are shared by all instances. To create instance\nvariables, they can be set in a method with ``self.name = value``.\nBoth class and instance variables are accessible through the notation\n"``self.name``", and an instance variable hides a class variable with\nthe same name when accessed in this way. Class variables can be used\nas defaults for instance variables, but using mutable values there can\nlead to unexpected results. 
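Two common uses of the short-circuit behaviour described in the ``booleans`` entry above (the variable names and values are invented):

   s = ''
   label = s or 'anonymous'       # -> 'anonymous', because '' is false
   words = ['spam', 'eggs']
   first = words and words[0]     # -> 'spam'; words[0] is evaluated only if words is true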
For *new-style class*es, descriptors can\nbe used to create instance variables with different implementation\ndetails.\n\nClass definitions, like function definitions, may be wrapped by one or\nmore *decorator* expressions. The evaluation rules for the decorator\nexpressions are the same as for functions. The result must be a class\nobject, which is then bound to the class name.\n\n-[ Footnotes ]-\n\n[1] The exception is propagated to the invocation stack only if there\n is no ``finally`` clause that negates the exception.\n\n[2] Currently, control "flows off the end" except in the case of an\n exception or the execution of a ``return``, ``continue``, or\n ``break`` statement.\n\n[3] A string literal appearing as the first statement in the function\n body is transformed into the function\'s ``__doc__`` attribute and\n therefore the function\'s *docstring*.\n\n[4] A string literal appearing as the first statement in the class\n body is transformed into the namespace\'s ``__doc__`` item and\n therefore the class\'s *docstring*.\n', 'coercion-rules': u"\nCoercion rules\n**************\n\nThis section used to document the rules for coercion. As the language\nhas evolved, the coercion rules have become hard to document\nprecisely; documenting what one version of one particular\nimplementation does is undesirable. Instead, here are some informal\nguidelines regarding coercion. In Python 3.0, coercion will not be\nsupported.\n\n* If the left operand of a % operator is a string or Unicode object,\n no coercion takes place and the string formatting operation is\n invoked instead.\n\n* It is no longer recommended to define a coercion operation. Mixed-\n mode operations on types that don't define coercion pass the\n original arguments to the operation.\n\n* New-style classes (those derived from ``object``) never invoke the\n ``__coerce__()`` method in response to a binary operator; the only\n time ``__coerce__()`` is invoked is when the built-in function\n ``coerce()`` is called.\n\n* For most intents and purposes, an operator that returns\n ``NotImplemented`` is treated the same as one that is not\n implemented at all.\n\n* Below, ``__op__()`` and ``__rop__()`` are used to signify the\n generic method names corresponding to an operator; ``__iop__()`` is\n used for the corresponding in-place operator. For example, for the\n operator '``+``', ``__add__()`` and ``__radd__()`` are used for the\n left and right variant of the binary operator, and ``__iadd__()``\n for the in-place variant.\n\n* For objects *x* and *y*, first ``x.__op__(y)`` is tried. If this is\n not implemented or returns ``NotImplemented``, ``y.__rop__(x)`` is\n tried. If this is also not implemented or returns\n ``NotImplemented``, a ``TypeError`` exception is raised. But see\n the following exception:\n\n* Exception to the previous item: if the left operand is an instance\n of a built-in type or a new-style class, and the right operand is an\n instance of a proper subclass of that type or class and overrides\n the base's ``__rop__()`` method, the right operand's ``__rop__()``\n method is tried *before* the left operand's ``__op__()`` method.\n\n This is done so that a subclass can completely override binary\n operators. 
Otherwise, the left operand's ``__op__()`` method would\n always accept the right operand: when an instance of a given class\n is expected, an instance of a subclass of that class is always\n acceptable.\n\n* When either operand type defines a coercion, this coercion is called\n before that type's ``__op__()`` or ``__rop__()`` method is called,\n but no sooner. If the coercion returns an object of a different\n type for the operand whose coercion is invoked, part of the process\n is redone using the new object.\n\n* When an in-place operator (like '``+=``') is used, if the left\n operand implements ``__iop__()``, it is invoked without any\n coercion. When the operation falls back to ``__op__()`` and/or\n ``__rop__()``, the normal coercion rules apply.\n\n* In ``x + y``, if *x* is a sequence that implements sequence\n concatenation, sequence concatenation is invoked.\n\n* In ``x * y``, if one operator is a sequence that implements sequence\n repetition, and the other is an integer (``int`` or ``long``),\n sequence repetition is invoked.\n\n* Rich comparisons (implemented by methods ``__eq__()`` and so on)\n never use coercion. Three-way comparison (implemented by\n ``__cmp__()``) does use coercion under the same conditions as other\n binary operations use it.\n\n* In the current implementation, the built-in numeric types ``int``,\n ``long``, ``float``, and ``complex`` do not use coercion. All these\n types implement a ``__coerce__()`` method, for use by the built-in\n ``coerce()`` function.\n\n Changed in version 2.7.\n", 'comparisons': u'\nComparisons\n***********\n\nUnlike C, all comparison operations in Python have the same priority,\nwhich is lower than that of any arithmetic, shifting or bitwise\noperation. Also unlike C, expressions like ``a < b < c`` have the\ninterpretation that is conventional in mathematics:\n\n comparison ::= or_expr ( comp_operator or_expr )*\n comp_operator ::= "<" | ">" | "==" | ">=" | "<=" | "<>" | "!="\n | "is" ["not"] | ["not"] "in"\n\nComparisons yield boolean values: ``True`` or ``False``.\n\nComparisons can be chained arbitrarily, e.g., ``x < y <= z`` is\nequivalent to ``x < y and y <= z``, except that ``y`` is evaluated\nonly once (but in both cases ``z`` is not evaluated at all when ``x <\ny`` is found to be false).\n\nFormally, if *a*, *b*, *c*, ..., *y*, *z* are expressions and *op1*,\n*op2*, ..., *opN* are comparison operators, then ``a op1 b op2 c ... y\nopN z`` is equivalent to ``a op1 b and b op2 c and ... y opN z``,\nexcept that each expression is evaluated at most once.\n\nNote that ``a op1 b op2 c`` doesn\'t imply any kind of comparison\nbetween *a* and *c*, so that, e.g., ``x < y > z`` is perfectly legal\n(though perhaps not pretty).\n\nThe forms ``<>`` and ``!=`` are equivalent; for consistency with C,\n``!=`` is preferred; where ``!=`` is mentioned below ``<>`` is also\naccepted. The ``<>`` spelling is considered obsolescent.\n\nThe operators ``<``, ``>``, ``==``, ``>=``, ``<=``, and ``!=`` compare\nthe values of two objects. The objects need not have the same type.\nIf both are numbers, they are converted to a common type. Otherwise,\nobjects of different types *always* compare unequal, and are ordered\nconsistently but arbitrarily. 
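To make the chaining rule above concrete (the numbers are arbitrary):

   1 < 2 < 3        # True: equivalent to (1 < 2) and (2 < 3), with 2 evaluated once
   1 < 3 > 2        # also True; note that 1 and 2 are never compared with each other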
You can control comparison behavior of\nobjects of non-built-in types by defining a ``__cmp__`` method or rich\ncomparison methods like ``__gt__``, described in section *Special\nmethod names*.\n\n(This unusual definition of comparison was used to simplify the\ndefinition of operations like sorting and the ``in`` and ``not in``\noperators. In the future, the comparison rules for objects of\ndifferent types are likely to change.)\n\nComparison of objects of the same type depends on the type:\n\n* Numbers are compared arithmetically.\n\n* Strings are compared lexicographically using the numeric equivalents\n (the result of the built-in function ``ord()``) of their characters.\n Unicode and 8-bit strings are fully interoperable in this behavior.\n [4]\n\n* Tuples and lists are compared lexicographically using comparison of\n corresponding elements. This means that to compare equal, each\n element must compare equal and the two sequences must be of the same\n type and have the same length.\n\n If not equal, the sequences are ordered the same as their first\n differing elements. For example, ``cmp([1,2,x], [1,2,y])`` returns\n the same as ``cmp(x,y)``. If the corresponding element does not\n exist, the shorter sequence is ordered first (for example, ``[1,2] <\n [1,2,3]``).\n\n* Mappings (dictionaries) compare equal if and only if their sorted\n (key, value) lists compare equal. [5] Outcomes other than equality\n are resolved consistently, but are not otherwise defined. [6]\n\n* Most other objects of built-in types compare unequal unless they are\n the same object; the choice whether one object is considered smaller\n or larger than another one is made arbitrarily but consistently\n within one execution of a program.\n\nThe operators ``in`` and ``not in`` test for collection membership.\n``x in s`` evaluates to true if *x* is a member of the collection *s*,\nand false otherwise. ``x not in s`` returns the negation of ``x in\ns``. The collection membership test has traditionally been bound to\nsequences; an object is a member of a collection if the collection is\na sequence and contains an element equal to that object. However, it\nmake sense for many other object types to support membership tests\nwithout being a sequence. In particular, dictionaries (for keys) and\nsets support membership testing.\n\nFor the list and tuple types, ``x in y`` is true if and only if there\nexists an index *i* such that ``x == y[i]`` is true.\n\nFor the Unicode and string types, ``x in y`` is true if and only if\n*x* is a substring of *y*. An equivalent test is ``y.find(x) != -1``.\nNote, *x* and *y* need not be the same type; consequently, ``u\'ab\' in\n\'abc\'`` will return ``True``. Empty strings are always considered to\nbe a substring of any other string, so ``"" in "abc"`` will return\n``True``.\n\nChanged in version 2.3: Previously, *x* was required to be a string of\nlength ``1``.\n\nFor user-defined classes which define the ``__contains__()`` method,\n``x in y`` is true if and only if ``y.__contains__(x)`` is true.\n\nFor user-defined classes which do not define ``__contains__()`` but do\ndefine ``__iter__()``, ``x in y`` is true if some value ``z`` with ``x\n== z`` is produced while iterating over ``y``. 
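For example (the class ``Countdown`` and its contents are invented):

   class Countdown(object):
       def __iter__(self):
           return iter([3, 2, 1])

   2 in Countdown()       # True: a value equal to 2 is produced while iterating
   7 in Countdown()       # False: the iterator is exhausted without a match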
If an exception is\nraised during the iteration, it is as if ``in`` raised that exception.\n\nLastly, the old-style iteration protocol is tried: if a class defines\n``__getitem__()``, ``x in y`` is true if and only if there is a non-\nnegative integer index *i* such that ``x == y[i]``, and all lower\ninteger indices do not raise ``IndexError`` exception. (If any other\nexception is raised, it is as if ``in`` raised that exception).\n\nThe operator ``not in`` is defined to have the inverse true value of\n``in``.\n\nThe operators ``is`` and ``is not`` test for object identity: ``x is\ny`` is true if and only if *x* and *y* are the same object. ``x is\nnot y`` yields the inverse truth value. [7]\n', - 'compound': u'\nCompound statements\n*******************\n\nCompound statements contain (groups of) other statements; they affect\nor control the execution of those other statements in some way. In\ngeneral, compound statements span multiple lines, although in simple\nincarnations a whole compound statement may be contained in one line.\n\nThe ``if``, ``while`` and ``for`` statements implement traditional\ncontrol flow constructs. ``try`` specifies exception handlers and/or\ncleanup code for a group of statements. Function and class\ndefinitions are also syntactically compound statements.\n\nCompound statements consist of one or more \'clauses.\' A clause\nconsists of a header and a \'suite.\' The clause headers of a\nparticular compound statement are all at the same indentation level.\nEach clause header begins with a uniquely identifying keyword and ends\nwith a colon. A suite is a group of statements controlled by a\nclause. A suite can be one or more semicolon-separated simple\nstatements on the same line as the header, following the header\'s\ncolon, or it can be one or more indented statements on subsequent\nlines. Only the latter form of suite can contain nested compound\nstatements; the following is illegal, mostly because it wouldn\'t be\nclear to which ``if`` clause a following ``else`` clause would belong:\n\n if test1: if test2: print x\n\nAlso note that the semicolon binds tighter than the colon in this\ncontext, so that in the following example, either all or none of the\n``print`` statements are executed:\n\n if x < y < z: print x; print y; print z\n\nSummarizing:\n\n compound_stmt ::= if_stmt\n | while_stmt\n | for_stmt\n | try_stmt\n | with_stmt\n | funcdef\n | classdef\n | decorated\n suite ::= stmt_list NEWLINE | NEWLINE INDENT statement+ DEDENT\n statement ::= stmt_list NEWLINE | compound_stmt\n stmt_list ::= simple_stmt (";" simple_stmt)* [";"]\n\nNote that statements always end in a ``NEWLINE`` possibly followed by\na ``DEDENT``. 
Also note that optional continuation clauses always\nbegin with a keyword that cannot start a statement, thus there are no\nambiguities (the \'dangling ``else``\' problem is solved in Python by\nrequiring nested ``if`` statements to be indented).\n\nThe formatting of the grammar rules in the following sections places\neach clause on a separate line for clarity.\n\n\nThe ``if`` statement\n====================\n\nThe ``if`` statement is used for conditional execution:\n\n if_stmt ::= "if" expression ":" suite\n ( "elif" expression ":" suite )*\n ["else" ":" suite]\n\nIt selects exactly one of the suites by evaluating the expressions one\nby one until one is found to be true (see section *Boolean operations*\nfor the definition of true and false); then that suite is executed\n(and no other part of the ``if`` statement is executed or evaluated).\nIf all expressions are false, the suite of the ``else`` clause, if\npresent, is executed.\n\n\nThe ``while`` statement\n=======================\n\nThe ``while`` statement is used for repeated execution as long as an\nexpression is true:\n\n while_stmt ::= "while" expression ":" suite\n ["else" ":" suite]\n\nThis repeatedly tests the expression and, if it is true, executes the\nfirst suite; if the expression is false (which may be the first time\nit is tested) the suite of the ``else`` clause, if present, is\nexecuted and the loop terminates.\n\nA ``break`` statement executed in the first suite terminates the loop\nwithout executing the ``else`` clause\'s suite. A ``continue``\nstatement executed in the first suite skips the rest of the suite and\ngoes back to testing the expression.\n\n\nThe ``for`` statement\n=====================\n\nThe ``for`` statement is used to iterate over the elements of a\nsequence (such as a string, tuple or list) or other iterable object:\n\n for_stmt ::= "for" target_list "in" expression_list ":" suite\n ["else" ":" suite]\n\nThe expression list is evaluated once; it should yield an iterable\nobject. An iterator is created for the result of the\n``expression_list``. The suite is then executed once for each item\nprovided by the iterator, in the order of ascending indices. Each\nitem in turn is assigned to the target list using the standard rules\nfor assignments, and then the suite is executed. When the items are\nexhausted (which is immediately when the sequence is empty), the suite\nin the ``else`` clause, if present, is executed, and the loop\nterminates.\n\nA ``break`` statement executed in the first suite terminates the loop\nwithout executing the ``else`` clause\'s suite. A ``continue``\nstatement executed in the first suite skips the rest of the suite and\ncontinues with the next item, or with the ``else`` clause if there was\nno next item.\n\nThe suite may assign to the variable(s) in the target list; this does\nnot affect the next item assigned to it.\n\nThe target list is not deleted when the loop is finished, but if the\nsequence is empty, it will not have been assigned to at all by the\nloop. Hint: the built-in function ``range()`` returns a sequence of\nintegers suitable to emulate the effect of Pascal\'s ``for i := a to b\ndo``; e.g., ``range(3)`` returns the list ``[0, 1, 2]``.\n\nNote: There is a subtlety when the sequence is being modified by the loop\n (this can only occur for mutable sequences, i.e. lists). An internal\n counter is used to keep track of which item is used next, and this\n is incremented on each iteration. When this counter has reached the\n length of the sequence the loop terminates. 
This means that if the\n suite deletes the current (or a previous) item from the sequence,\n the next item will be skipped (since it gets the index of the\n current item which has already been treated). Likewise, if the\n suite inserts an item in the sequence before the current item, the\n current item will be treated again the next time through the loop.\n This can lead to nasty bugs that can be avoided by making a\n temporary copy using a slice of the whole sequence, e.g.,\n\n for x in a[:]:\n if x < 0: a.remove(x)\n\n\nThe ``try`` statement\n=====================\n\nThe ``try`` statement specifies exception handlers and/or cleanup code\nfor a group of statements:\n\n try_stmt ::= try1_stmt | try2_stmt\n try1_stmt ::= "try" ":" suite\n ("except" [expression [("as" | ",") target]] ":" suite)+\n ["else" ":" suite]\n ["finally" ":" suite]\n try2_stmt ::= "try" ":" suite\n "finally" ":" suite\n\nChanged in version 2.5: In previous versions of Python,\n``try``...``except``...``finally`` did not work. ``try``...``except``\nhad to be nested in ``try``...``finally``.\n\nThe ``except`` clause(s) specify one or more exception handlers. When\nno exception occurs in the ``try`` clause, no exception handler is\nexecuted. When an exception occurs in the ``try`` suite, a search for\nan exception handler is started. This search inspects the except\nclauses in turn until one is found that matches the exception. An\nexpression-less except clause, if present, must be last; it matches\nany exception. For an except clause with an expression, that\nexpression is evaluated, and the clause matches the exception if the\nresulting object is "compatible" with the exception. An object is\ncompatible with an exception if it is the class or a base class of the\nexception object, a tuple containing an item compatible with the\nexception, or, in the (deprecated) case of string exceptions, is the\nraised string itself (note that the object identities must match, i.e.\nit must be the same string object, not just a string with the same\nvalue).\n\nIf no except clause matches the exception, the search for an exception\nhandler continues in the surrounding code and on the invocation stack.\n[1]\n\nIf the evaluation of an expression in the header of an except clause\nraises an exception, the original search for a handler is canceled and\na search starts for the new exception in the surrounding code and on\nthe call stack (it is treated as if the entire ``try`` statement\nraised the exception).\n\nWhen a matching except clause is found, the exception is assigned to\nthe target specified in that except clause, if present, and the except\nclause\'s suite is executed. All except clauses must have an\nexecutable block. When the end of this block is reached, execution\ncontinues normally after the entire try statement. (This means that\nif two nested handlers exist for the same exception, and the exception\noccurs in the try clause of the inner handler, the outer handler will\nnot handle the exception.)\n\nBefore an except clause\'s suite is executed, details about the\nexception are assigned to three variables in the ``sys`` module:\n``sys.exc_type`` receives the object identifying the exception;\n``sys.exc_value`` receives the exception\'s parameter;\n``sys.exc_traceback`` receives a traceback object (see section *The\nstandard type hierarchy*) identifying the point in the program where\nthe exception occurred. 
These details are also available through the\n``sys.exc_info()`` function, which returns a tuple ``(exc_type,\nexc_value, exc_traceback)``. Use of the corresponding variables is\ndeprecated in favor of this function, since their use is unsafe in a\nthreaded program. As of Python 1.5, the variables are restored to\ntheir previous values (before the call) when returning from a function\nthat handled an exception.\n\nThe optional ``else`` clause is executed if and when control flows off\nthe end of the ``try`` clause. [2] Exceptions in the ``else`` clause\nare not handled by the preceding ``except`` clauses.\n\nIf ``finally`` is present, it specifies a \'cleanup\' handler. The\n``try`` clause is executed, including any ``except`` and ``else``\nclauses. If an exception occurs in any of the clauses and is not\nhandled, the exception is temporarily saved. The ``finally`` clause is\nexecuted. If there is a saved exception, it is re-raised at the end\nof the ``finally`` clause. If the ``finally`` clause raises another\nexception or executes a ``return`` or ``break`` statement, the saved\nexception is lost. The exception information is not available to the\nprogram during execution of the ``finally`` clause.\n\nWhen a ``return``, ``break`` or ``continue`` statement is executed in\nthe ``try`` suite of a ``try``...``finally`` statement, the\n``finally`` clause is also executed \'on the way out.\' A ``continue``\nstatement is illegal in the ``finally`` clause. (The reason is a\nproblem with the current implementation --- this restriction may be\nlifted in the future).\n\nAdditional information on exceptions can be found in section\n*Exceptions*, and information on using the ``raise`` statement to\ngenerate exceptions may be found in section *The raise statement*.\n\n\nThe ``with`` statement\n======================\n\nNew in version 2.5.\n\nThe ``with`` statement is used to wrap the execution of a block with\nmethods defined by a context manager (see section *With Statement\nContext Managers*). This allows common\n``try``...``except``...``finally`` usage patterns to be encapsulated\nfor convenient reuse.\n\n with_stmt ::= "with" with_item ("," with_item)* ":" suite\n with_item ::= expression ["as" target]\n\nThe execution of the ``with`` statement with one "item" proceeds as\nfollows:\n\n1. The context expression is evaluated to obtain a context manager.\n\n2. The context manager\'s ``__exit__()`` is loaded for later use.\n\n3. The context manager\'s ``__enter__()`` method is invoked.\n\n4. If a target was included in the ``with`` statement, the return\n value from ``__enter__()`` is assigned to it.\n\n Note: The ``with`` statement guarantees that if the ``__enter__()``\n method returns without an error, then ``__exit__()`` will always\n be called. Thus, if an error occurs during the assignment to the\n target list, it will be treated the same as an error occurring\n within the suite would be. See step 6 below.\n\n5. The suite is executed.\n\n6. The context manager\'s ``__exit__()`` method is invoked. If an\n exception caused the suite to be exited, its type, value, and\n traceback are passed as arguments to ``__exit__()``. Otherwise,\n three ``None`` arguments are supplied.\n\n If the suite was exited due to an exception, and the return value\n from the ``__exit__()`` method was false, the exception is\n reraised. 
If the return value was true, the exception is\n suppressed, and execution continues with the statement following\n the ``with`` statement.\n\n If the suite was exited for any reason other than an exception, the\n return value from ``__exit__()`` is ignored, and execution proceeds\n at the normal location for the kind of exit that was taken.\n\nWith more than one item, the context managers are processed as if\nmultiple ``with`` statements were nested:\n\n with A() as a, B() as b:\n suite\n\nis equivalent to\n\n with A() as a:\n with B() as b:\n suite\n\nNote: In Python 2.5, the ``with`` statement is only allowed when the\n ``with_statement`` feature has been enabled. It is always enabled\n in Python 2.6.\n\nChanged in version 2.7: Support for multiple context expressions.\n\nSee also:\n\n **PEP 0343** - The "with" statement\n The specification, background, and examples for the Python\n ``with`` statement.\n\n\nFunction definitions\n====================\n\nA function definition defines a user-defined function object (see\nsection *The standard type hierarchy*):\n\n decorated ::= decorators (classdef | funcdef)\n decorators ::= decorator+\n decorator ::= "@" dotted_name ["(" [argument_list [","]] ")"] NEWLINE\n funcdef ::= "def" funcname "(" [parameter_list] ")" ":" suite\n dotted_name ::= identifier ("." identifier)*\n parameter_list ::= (defparameter ",")*\n ( "*" identifier [, "**" identifier]\n | "**" identifier\n | defparameter [","] )\n defparameter ::= parameter ["=" expression]\n sublist ::= parameter ("," parameter)* [","]\n parameter ::= identifier | "(" sublist ")"\n funcname ::= identifier\n\nA function definition is an executable statement. Its execution binds\nthe function name in the current local namespace to a function object\n(a wrapper around the executable code for the function). This\nfunction object contains a reference to the current global namespace\nas the global namespace to be used when the function is called.\n\nThe function definition does not execute the function body; this gets\nexecuted only when the function is called. [3]\n\nA function definition may be wrapped by one or more *decorator*\nexpressions. Decorator expressions are evaluated when the function is\ndefined, in the scope that contains the function definition. The\nresult must be a callable, which is invoked with the function object\nas the only argument. The returned value is bound to the function name\ninstead of the function object. Multiple decorators are applied in\nnested fashion. For example, the following code:\n\n @f1(arg)\n @f2\n def func(): pass\n\nis equivalent to:\n\n def func(): pass\n func = f1(arg)(f2(func))\n\nWhen one or more top-level parameters have the form *parameter* ``=``\n*expression*, the function is said to have "default parameter values."\nFor a parameter with a default value, the corresponding argument may\nbe omitted from a call, in which case the parameter\'s default value is\nsubstituted. If a parameter has a default value, all following\nparameters must also have a default value --- this is a syntactic\nrestriction that is not expressed by the grammar.\n\n**Default parameter values are evaluated when the function definition\nis executed.** This means that the expression is evaluated once, when\nthe function is defined, and that that same "pre-computed" value is\nused for each call. This is especially important to understand when a\ndefault parameter is a mutable object, such as a list or a dictionary:\nif the function modifies the object (e.g. 
by appending an item to a\nlist), the default value is in effect modified. This is generally not\nwhat was intended. A way around this is to use ``None`` as the\ndefault, and explicitly test for it in the body of the function, e.g.:\n\n def whats_on_the_telly(penguin=None):\n if penguin is None:\n penguin = []\n penguin.append("property of the zoo")\n return penguin\n\nFunction call semantics are described in more detail in section\n*Calls*. A function call always assigns values to all parameters\nmentioned in the parameter list, either from position arguments, from\nkeyword arguments, or from default values. If the form\n"``*identifier``" is present, it is initialized to a tuple receiving\nany excess positional parameters, defaulting to the empty tuple. If\nthe form "``**identifier``" is present, it is initialized to a new\ndictionary receiving any excess keyword arguments, defaulting to a new\nempty dictionary.\n\nIt is also possible to create anonymous functions (functions not bound\nto a name), for immediate use in expressions. This uses lambda forms,\ndescribed in section *Lambdas*. Note that the lambda form is merely a\nshorthand for a simplified function definition; a function defined in\na "``def``" statement can be passed around or assigned to another name\njust like a function defined by a lambda form. The "``def``" form is\nactually more powerful since it allows the execution of multiple\nstatements.\n\n**Programmer\'s note:** Functions are first-class objects. A "``def``"\nform executed inside a function definition defines a local function\nthat can be returned or passed around. Free variables used in the\nnested function can access the local variables of the function\ncontaining the def. See section *Naming and binding* for details.\n\n\nClass definitions\n=================\n\nA class definition defines a class object (see section *The standard\ntype hierarchy*):\n\n classdef ::= "class" classname [inheritance] ":" suite\n inheritance ::= "(" [expression_list] ")"\n classname ::= identifier\n\nA class definition is an executable statement. It first evaluates the\ninheritance list, if present. Each item in the inheritance list\nshould evaluate to a class object or class type which allows\nsubclassing. The class\'s suite is then executed in a new execution\nframe (see section *Naming and binding*), using a newly created local\nnamespace and the original global namespace. (Usually, the suite\ncontains only function definitions.) When the class\'s suite finishes\nexecution, its execution frame is discarded but its local namespace is\nsaved. [4] A class object is then created using the inheritance list\nfor the base classes and the saved local namespace for the attribute\ndictionary. The class name is bound to this class object in the\noriginal local namespace.\n\n**Programmer\'s note:** Variables defined in the class definition are\nclass variables; they are shared by all instances. To create instance\nvariables, they can be set in a method with ``self.name = value``.\nBoth class and instance variables are accessible through the notation\n"``self.name``", and an instance variable hides a class variable with\nthe same name when accessed in this way. Class variables can be used\nas defaults for instance variables, but using mutable values there can\nlead to unexpected results. 
For *new-style class*es, descriptors can\nbe used to create instance variables with different implementation\ndetails.\n\nClass definitions, like function definitions, may be wrapped by one or\nmore *decorator* expressions. The evaluation rules for the decorator\nexpressions are the same as for functions. The result must be a class\nobject, which is then bound to the class name.\n\n-[ Footnotes ]-\n\n[1] The exception is propagated to the invocation stack only if there\n is no ``finally`` clause that negates the exception.\n\n[2] Currently, control "flows off the end" except in the case of an\n exception or the execution of a ``return``, ``continue``, or\n ``break`` statement.\n\n[3] A string literal appearing as the first statement in the function\n body is transformed into the function\'s ``__doc__`` attribute and\n therefore the function\'s *docstring*.\n\n[4] A string literal appearing as the first statement in the class\n body is transformed into the namespace\'s ``__doc__`` item and\n therefore the class\'s *docstring*.\n', + 'compound': u'\nCompound statements\n*******************\n\nCompound statements contain (groups of) other statements; they affect\nor control the execution of those other statements in some way. In\ngeneral, compound statements span multiple lines, although in simple\nincarnations a whole compound statement may be contained in one line.\n\nThe ``if``, ``while`` and ``for`` statements implement traditional\ncontrol flow constructs. ``try`` specifies exception handlers and/or\ncleanup code for a group of statements. Function and class\ndefinitions are also syntactically compound statements.\n\nCompound statements consist of one or more \'clauses.\' A clause\nconsists of a header and a \'suite.\' The clause headers of a\nparticular compound statement are all at the same indentation level.\nEach clause header begins with a uniquely identifying keyword and ends\nwith a colon. A suite is a group of statements controlled by a\nclause. A suite can be one or more semicolon-separated simple\nstatements on the same line as the header, following the header\'s\ncolon, or it can be one or more indented statements on subsequent\nlines. Only the latter form of suite can contain nested compound\nstatements; the following is illegal, mostly because it wouldn\'t be\nclear to which ``if`` clause a following ``else`` clause would belong:\n\n if test1: if test2: print x\n\nAlso note that the semicolon binds tighter than the colon in this\ncontext, so that in the following example, either all or none of the\n``print`` statements are executed:\n\n if x < y < z: print x; print y; print z\n\nSummarizing:\n\n compound_stmt ::= if_stmt\n | while_stmt\n | for_stmt\n | try_stmt\n | with_stmt\n | funcdef\n | classdef\n | decorated\n suite ::= stmt_list NEWLINE | NEWLINE INDENT statement+ DEDENT\n statement ::= stmt_list NEWLINE | compound_stmt\n stmt_list ::= simple_stmt (";" simple_stmt)* [";"]\n\nNote that statements always end in a ``NEWLINE`` possibly followed by\na ``DEDENT``. 
Also note that optional continuation clauses always\nbegin with a keyword that cannot start a statement, thus there are no\nambiguities (the \'dangling ``else``\' problem is solved in Python by\nrequiring nested ``if`` statements to be indented).\n\nThe formatting of the grammar rules in the following sections places\neach clause on a separate line for clarity.\n\n\nThe ``if`` statement\n====================\n\nThe ``if`` statement is used for conditional execution:\n\n if_stmt ::= "if" expression ":" suite\n ( "elif" expression ":" suite )*\n ["else" ":" suite]\n\nIt selects exactly one of the suites by evaluating the expressions one\nby one until one is found to be true (see section *Boolean operations*\nfor the definition of true and false); then that suite is executed\n(and no other part of the ``if`` statement is executed or evaluated).\nIf all expressions are false, the suite of the ``else`` clause, if\npresent, is executed.\n\n\nThe ``while`` statement\n=======================\n\nThe ``while`` statement is used for repeated execution as long as an\nexpression is true:\n\n while_stmt ::= "while" expression ":" suite\n ["else" ":" suite]\n\nThis repeatedly tests the expression and, if it is true, executes the\nfirst suite; if the expression is false (which may be the first time\nit is tested) the suite of the ``else`` clause, if present, is\nexecuted and the loop terminates.\n\nA ``break`` statement executed in the first suite terminates the loop\nwithout executing the ``else`` clause\'s suite. A ``continue``\nstatement executed in the first suite skips the rest of the suite and\ngoes back to testing the expression.\n\n\nThe ``for`` statement\n=====================\n\nThe ``for`` statement is used to iterate over the elements of a\nsequence (such as a string, tuple or list) or other iterable object:\n\n for_stmt ::= "for" target_list "in" expression_list ":" suite\n ["else" ":" suite]\n\nThe expression list is evaluated once; it should yield an iterable\nobject. An iterator is created for the result of the\n``expression_list``. The suite is then executed once for each item\nprovided by the iterator, in the order of ascending indices. Each\nitem in turn is assigned to the target list using the standard rules\nfor assignments, and then the suite is executed. When the items are\nexhausted (which is immediately when the sequence is empty), the suite\nin the ``else`` clause, if present, is executed, and the loop\nterminates.\n\nA ``break`` statement executed in the first suite terminates the loop\nwithout executing the ``else`` clause\'s suite. A ``continue``\nstatement executed in the first suite skips the rest of the suite and\ncontinues with the next item, or with the ``else`` clause if there was\nno next item.\n\nThe suite may assign to the variable(s) in the target list; this does\nnot affect the next item assigned to it.\n\nThe target list is not deleted when the loop is finished, but if the\nsequence is empty, it will not have been assigned to at all by the\nloop. Hint: the built-in function ``range()`` returns a sequence of\nintegers suitable to emulate the effect of Pascal\'s ``for i := a to b\ndo``; e.g., ``range(3)`` returns the list ``[0, 1, 2]``.\n\nNote: There is a subtlety when the sequence is being modified by the loop\n (this can only occur for mutable sequences, i.e. lists). An internal\n counter is used to keep track of which item is used next, and this\n is incremented on each iteration. When this counter has reached the\n length of the sequence the loop terminates. 
This means that if the\n suite deletes the current (or a previous) item from the sequence,\n the next item will be skipped (since it gets the index of the\n current item which has already been treated). Likewise, if the\n suite inserts an item in the sequence before the current item, the\n current item will be treated again the next time through the loop.\n This can lead to nasty bugs that can be avoided by making a\n temporary copy using a slice of the whole sequence, e.g.,\n\n for x in a[:]:\n if x < 0: a.remove(x)\n\n\nThe ``try`` statement\n=====================\n\nThe ``try`` statement specifies exception handlers and/or cleanup code\nfor a group of statements:\n\n try_stmt ::= try1_stmt | try2_stmt\n try1_stmt ::= "try" ":" suite\n ("except" [expression [("as" | ",") target]] ":" suite)+\n ["else" ":" suite]\n ["finally" ":" suite]\n try2_stmt ::= "try" ":" suite\n "finally" ":" suite\n\nChanged in version 2.5: In previous versions of Python,\n``try``...``except``...``finally`` did not work. ``try``...``except``\nhad to be nested in ``try``...``finally``.\n\nThe ``except`` clause(s) specify one or more exception handlers. When\nno exception occurs in the ``try`` clause, no exception handler is\nexecuted. When an exception occurs in the ``try`` suite, a search for\nan exception handler is started. This search inspects the except\nclauses in turn until one is found that matches the exception. An\nexpression-less except clause, if present, must be last; it matches\nany exception. For an except clause with an expression, that\nexpression is evaluated, and the clause matches the exception if the\nresulting object is "compatible" with the exception. An object is\ncompatible with an exception if it is the class or a base class of the\nexception object, a tuple containing an item compatible with the\nexception, or, in the (deprecated) case of string exceptions, is the\nraised string itself (note that the object identities must match, i.e.\nit must be the same string object, not just a string with the same\nvalue).\n\nIf no except clause matches the exception, the search for an exception\nhandler continues in the surrounding code and on the invocation stack.\n[1]\n\nIf the evaluation of an expression in the header of an except clause\nraises an exception, the original search for a handler is canceled and\na search starts for the new exception in the surrounding code and on\nthe call stack (it is treated as if the entire ``try`` statement\nraised the exception).\n\nWhen a matching except clause is found, the exception is assigned to\nthe target specified in that except clause, if present, and the except\nclause\'s suite is executed. All except clauses must have an\nexecutable block. When the end of this block is reached, execution\ncontinues normally after the entire try statement. (This means that\nif two nested handlers exist for the same exception, and the exception\noccurs in the try clause of the inner handler, the outer handler will\nnot handle the exception.)\n\nBefore an except clause\'s suite is executed, details about the\nexception are assigned to three variables in the ``sys`` module:\n``sys.exc_type`` receives the object identifying the exception;\n``sys.exc_value`` receives the exception\'s parameter;\n``sys.exc_traceback`` receives a traceback object (see section *The\nstandard type hierarchy*) identifying the point in the program where\nthe exception occurred. 
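To make the ``try``/``except`` matching rules above concrete, here is a small sketch (the function name ``parse_int`` is illustrative only) combining ``except``, ``else`` and ``finally`` clauses:

    def parse_int(text):
        try:
            value = int(text)
        except (TypeError, ValueError), exc:   # matches either exception class
            print "could not parse:", exc
            return None
        else:
            print "parsed without error"
            return value
        finally:
            print "finally runs on every exit path"

    parse_int("42")      # parsed without error, then the finally message
    parse_int("spam")    # could not parse ..., then the finally message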
These details are also available through the\n``sys.exc_info()`` function, which returns a tuple ``(exc_type,\nexc_value, exc_traceback)``. Use of the corresponding variables is\ndeprecated in favor of this function, since their use is unsafe in a\nthreaded program. As of Python 1.5, the variables are restored to\ntheir previous values (before the call) when returning from a function\nthat handled an exception.\n\nThe optional ``else`` clause is executed if and when control flows off\nthe end of the ``try`` clause. [2] Exceptions in the ``else`` clause\nare not handled by the preceding ``except`` clauses.\n\nIf ``finally`` is present, it specifies a \'cleanup\' handler. The\n``try`` clause is executed, including any ``except`` and ``else``\nclauses. If an exception occurs in any of the clauses and is not\nhandled, the exception is temporarily saved. The ``finally`` clause is\nexecuted. If there is a saved exception, it is re-raised at the end\nof the ``finally`` clause. If the ``finally`` clause raises another\nexception or executes a ``return`` or ``break`` statement, the saved\nexception is lost. The exception information is not available to the\nprogram during execution of the ``finally`` clause.\n\nWhen a ``return``, ``break`` or ``continue`` statement is executed in\nthe ``try`` suite of a ``try``...``finally`` statement, the\n``finally`` clause is also executed \'on the way out.\' A ``continue``\nstatement is illegal in the ``finally`` clause. (The reason is a\nproblem with the current implementation --- this restriction may be\nlifted in the future).\n\nAdditional information on exceptions can be found in section\n*Exceptions*, and information on using the ``raise`` statement to\ngenerate exceptions may be found in section *The raise statement*.\n\n\nThe ``with`` statement\n======================\n\nNew in version 2.5.\n\nThe ``with`` statement is used to wrap the execution of a block with\nmethods defined by a context manager (see section *With Statement\nContext Managers*). This allows common\n``try``...``except``...``finally`` usage patterns to be encapsulated\nfor convenient reuse.\n\n with_stmt ::= "with" with_item ("," with_item)* ":" suite\n with_item ::= expression ["as" target]\n\nThe execution of the ``with`` statement with one "item" proceeds as\nfollows:\n\n1. The context expression (the expression given in the **with_item**)\n is evaluated to obtain a context manager.\n\n2. The context manager\'s ``__exit__()`` is loaded for later use.\n\n3. The context manager\'s ``__enter__()`` method is invoked.\n\n4. If a target was included in the ``with`` statement, the return\n value from ``__enter__()`` is assigned to it.\n\n Note: The ``with`` statement guarantees that if the ``__enter__()``\n method returns without an error, then ``__exit__()`` will always\n be called. Thus, if an error occurs during the assignment to the\n target list, it will be treated the same as an error occurring\n within the suite would be. See step 6 below.\n\n5. The suite is executed.\n\n6. The context manager\'s ``__exit__()`` method is invoked. If an\n exception caused the suite to be exited, its type, value, and\n traceback are passed as arguments to ``__exit__()``. Otherwise,\n three ``None`` arguments are supplied.\n\n If the suite was exited due to an exception, and the return value\n from the ``__exit__()`` method was false, the exception is\n reraised. 
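A minimal context manager sketch following the steps listed above (the class name ``tag`` is hypothetical); ``__enter__()`` supplies the ``as`` target and ``__exit__()`` returns a false value, so exceptions raised in the suite are not suppressed:

    class tag(object):
        def __init__(self, name):
            self.name = name
        def __enter__(self):
            print "<%s>" % self.name
            return self.name          # bound to the 'as' target, if any
        def __exit__(self, exc_type, exc_value, traceback):
            print "</%s>" % self.name
            return False              # do not suppress exceptions

    with tag("body") as name:
        print "inside", name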
If the return value was true, the exception is\n suppressed, and execution continues with the statement following\n the ``with`` statement.\n\n If the suite was exited for any reason other than an exception, the\n return value from ``__exit__()`` is ignored, and execution proceeds\n at the normal location for the kind of exit that was taken.\n\nWith more than one item, the context managers are processed as if\nmultiple ``with`` statements were nested:\n\n with A() as a, B() as b:\n suite\n\nis equivalent to\n\n with A() as a:\n with B() as b:\n suite\n\nNote: In Python 2.5, the ``with`` statement is only allowed when the\n ``with_statement`` feature has been enabled. It is always enabled\n in Python 2.6.\n\nChanged in version 2.7: Support for multiple context expressions.\n\nSee also:\n\n **PEP 0343** - The "with" statement\n The specification, background, and examples for the Python\n ``with`` statement.\n\n\nFunction definitions\n====================\n\nA function definition defines a user-defined function object (see\nsection *The standard type hierarchy*):\n\n decorated ::= decorators (classdef | funcdef)\n decorators ::= decorator+\n decorator ::= "@" dotted_name ["(" [argument_list [","]] ")"] NEWLINE\n funcdef ::= "def" funcname "(" [parameter_list] ")" ":" suite\n dotted_name ::= identifier ("." identifier)*\n parameter_list ::= (defparameter ",")*\n ( "*" identifier [, "**" identifier]\n | "**" identifier\n | defparameter [","] )\n defparameter ::= parameter ["=" expression]\n sublist ::= parameter ("," parameter)* [","]\n parameter ::= identifier | "(" sublist ")"\n funcname ::= identifier\n\nA function definition is an executable statement. Its execution binds\nthe function name in the current local namespace to a function object\n(a wrapper around the executable code for the function). This\nfunction object contains a reference to the current global namespace\nas the global namespace to be used when the function is called.\n\nThe function definition does not execute the function body; this gets\nexecuted only when the function is called. [3]\n\nA function definition may be wrapped by one or more *decorator*\nexpressions. Decorator expressions are evaluated when the function is\ndefined, in the scope that contains the function definition. The\nresult must be a callable, which is invoked with the function object\nas the only argument. The returned value is bound to the function name\ninstead of the function object. Multiple decorators are applied in\nnested fashion. For example, the following code:\n\n @f1(arg)\n @f2\n def func(): pass\n\nis equivalent to:\n\n def func(): pass\n func = f1(arg)(f2(func))\n\nWhen one or more top-level parameters have the form *parameter* ``=``\n*expression*, the function is said to have "default parameter values."\nFor a parameter with a default value, the corresponding argument may\nbe omitted from a call, in which case the parameter\'s default value is\nsubstituted. If a parameter has a default value, all following\nparameters must also have a default value --- this is a syntactic\nrestriction that is not expressed by the grammar.\n\n**Default parameter values are evaluated when the function definition\nis executed.** This means that the expression is evaluated once, when\nthe function is defined, and that that same "pre-computed" value is\nused for each call. This is especially important to understand when a\ndefault parameter is a mutable object, such as a list or a dictionary:\nif the function modifies the object (e.g. 
by appending an item to a\nlist), the default value is in effect modified. This is generally not\nwhat was intended. A way around this is to use ``None`` as the\ndefault, and explicitly test for it in the body of the function, e.g.:\n\n def whats_on_the_telly(penguin=None):\n if penguin is None:\n penguin = []\n penguin.append("property of the zoo")\n return penguin\n\nFunction call semantics are described in more detail in section\n*Calls*. A function call always assigns values to all parameters\nmentioned in the parameter list, either from position arguments, from\nkeyword arguments, or from default values. If the form\n"``*identifier``" is present, it is initialized to a tuple receiving\nany excess positional parameters, defaulting to the empty tuple. If\nthe form "``**identifier``" is present, it is initialized to a new\ndictionary receiving any excess keyword arguments, defaulting to a new\nempty dictionary.\n\nIt is also possible to create anonymous functions (functions not bound\nto a name), for immediate use in expressions. This uses lambda forms,\ndescribed in section *Lambdas*. Note that the lambda form is merely a\nshorthand for a simplified function definition; a function defined in\na "``def``" statement can be passed around or assigned to another name\njust like a function defined by a lambda form. The "``def``" form is\nactually more powerful since it allows the execution of multiple\nstatements.\n\n**Programmer\'s note:** Functions are first-class objects. A "``def``"\nform executed inside a function definition defines a local function\nthat can be returned or passed around. Free variables used in the\nnested function can access the local variables of the function\ncontaining the def. See section *Naming and binding* for details.\n\n\nClass definitions\n=================\n\nA class definition defines a class object (see section *The standard\ntype hierarchy*):\n\n classdef ::= "class" classname [inheritance] ":" suite\n inheritance ::= "(" [expression_list] ")"\n classname ::= identifier\n\nA class definition is an executable statement. It first evaluates the\ninheritance list, if present. Each item in the inheritance list\nshould evaluate to a class object or class type which allows\nsubclassing. The class\'s suite is then executed in a new execution\nframe (see section *Naming and binding*), using a newly created local\nnamespace and the original global namespace. (Usually, the suite\ncontains only function definitions.) When the class\'s suite finishes\nexecution, its execution frame is discarded but its local namespace is\nsaved. [4] A class object is then created using the inheritance list\nfor the base classes and the saved local namespace for the attribute\ndictionary. The class name is bound to this class object in the\noriginal local namespace.\n\n**Programmer\'s note:** Variables defined in the class definition are\nclass variables; they are shared by all instances. To create instance\nvariables, they can be set in a method with ``self.name = value``.\nBoth class and instance variables are accessible through the notation\n"``self.name``", and an instance variable hides a class variable with\nthe same name when accessed in this way. Class variables can be used\nas defaults for instance variables, but using mutable values there can\nlead to unexpected results. 
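The pitfall with mutable class variables mentioned above can be seen in a short sketch (both class names are hypothetical):

    class Bag(object):
        items = []                  # class variable, shared by all instances

        def add(self, x):
            self.items.append(x)    # mutates the shared list

    class SafeBag(object):
        def __init__(self):
            self.items = []         # instance variable, one list per instance
        def add(self, x):
            self.items.append(x)

    a, b = Bag(), Bag()
    a.add(1)
    print b.items                   # [1]  -- surprising sharing
    c, d = SafeBag(), SafeBag()
    c.add(1)
    print d.items                   # []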
For *new-style class*es, descriptors can\nbe used to create instance variables with different implementation\ndetails.\n\nClass definitions, like function definitions, may be wrapped by one or\nmore *decorator* expressions. The evaluation rules for the decorator\nexpressions are the same as for functions. The result must be a class\nobject, which is then bound to the class name.\n\n-[ Footnotes ]-\n\n[1] The exception is propagated to the invocation stack only if there\n is no ``finally`` clause that negates the exception.\n\n[2] Currently, control "flows off the end" except in the case of an\n exception or the execution of a ``return``, ``continue``, or\n ``break`` statement.\n\n[3] A string literal appearing as the first statement in the function\n body is transformed into the function\'s ``__doc__`` attribute and\n therefore the function\'s *docstring*.\n\n[4] A string literal appearing as the first statement in the class\n body is transformed into the namespace\'s ``__doc__`` item and\n therefore the class\'s *docstring*.\n', 'context-managers': u'\nWith Statement Context Managers\n*******************************\n\nNew in version 2.5.\n\nA *context manager* is an object that defines the runtime context to\nbe established when executing a ``with`` statement. The context\nmanager handles the entry into, and the exit from, the desired runtime\ncontext for the execution of the block of code. Context managers are\nnormally invoked using the ``with`` statement (described in section\n*The with statement*), but can also be used by directly invoking their\nmethods.\n\nTypical uses of context managers include saving and restoring various\nkinds of global state, locking and unlocking resources, closing opened\nfiles, etc.\n\nFor more information on context managers, see *Context Manager Types*.\n\nobject.__enter__(self)\n\n Enter the runtime context related to this object. The ``with``\n statement will bind this method\'s return value to the target(s)\n specified in the ``as`` clause of the statement, if any.\n\nobject.__exit__(self, exc_type, exc_value, traceback)\n\n Exit the runtime context related to this object. The parameters\n describe the exception that caused the context to be exited. If the\n context was exited without an exception, all three arguments will\n be ``None``.\n\n If an exception is supplied, and the method wishes to suppress the\n exception (i.e., prevent it from being propagated), it should\n return a true value. Otherwise, the exception will be processed\n normally upon exit from this method.\n\n Note that ``__exit__()`` methods should not reraise the passed-in\n exception; this is the caller\'s responsibility.\n\nSee also:\n\n **PEP 0343** - The "with" statement\n The specification, background, and examples for the Python\n ``with`` statement.\n', 'continue': u'\nThe ``continue`` statement\n**************************\n\n continue_stmt ::= "continue"\n\n``continue`` may only occur syntactically nested in a ``for`` or\n``while`` loop, but not nested in a function or class definition or\n``finally`` clause within that loop. 
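A small sketch of how ``continue``, ``break`` and a loop's ``else`` clause interact (the function ``first_odd`` is a made-up example):

    def first_odd(values):
        for v in values:
            if v % 2 == 0:
                continue            # skip to the next cycle of the loop
            print "first odd value:", v
            break                   # leaving via break skips the else clause
        else:
            print "no odd value found"

    first_odd([2, 4, 5, 6])   # first odd value: 5
    first_odd([2, 4, 6])      # no odd value found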
It continues with the next cycle\nof the nearest enclosing loop.\n\nWhen ``continue`` passes control out of a ``try`` statement with a\n``finally`` clause, that ``finally`` clause is executed before really\nstarting the next loop cycle.\n', 'conversions': u'\nArithmetic conversions\n**********************\n\nWhen a description of an arithmetic operator below uses the phrase\n"the numeric arguments are converted to a common type," the arguments\nare coerced using the coercion rules listed at *Coercion rules*. If\nboth arguments are standard numeric types, the following coercions are\napplied:\n\n* If either argument is a complex number, the other is converted to\n complex;\n\n* otherwise, if either argument is a floating point number, the other\n is converted to floating point;\n\n* otherwise, if either argument is a long integer, the other is\n converted to long integer;\n\n* otherwise, both must be plain integers and no conversion is\n necessary.\n\nSome additional rules apply for certain operators (e.g., a string left\nargument to the \'%\' operator). Extensions can define their own\ncoercions.\n', 'customization': u'\nBasic customization\n*******************\n\nobject.__new__(cls[, ...])\n\n Called to create a new instance of class *cls*. ``__new__()`` is a\n static method (special-cased so you need not declare it as such)\n that takes the class of which an instance was requested as its\n first argument. The remaining arguments are those passed to the\n object constructor expression (the call to the class). The return\n value of ``__new__()`` should be the new object instance (usually\n an instance of *cls*).\n\n Typical implementations create a new instance of the class by\n invoking the superclass\'s ``__new__()`` method using\n ``super(currentclass, cls).__new__(cls[, ...])`` with appropriate\n arguments and then modifying the newly-created instance as\n necessary before returning it.\n\n If ``__new__()`` returns an instance of *cls*, then the new\n instance\'s ``__init__()`` method will be invoked like\n ``__init__(self[, ...])``, where *self* is the new instance and the\n remaining arguments are the same as were passed to ``__new__()``.\n\n If ``__new__()`` does not return an instance of *cls*, then the new\n instance\'s ``__init__()`` method will not be invoked.\n\n ``__new__()`` is intended mainly to allow subclasses of immutable\n types (like int, str, or tuple) to customize instance creation. It\n is also commonly overridden in custom metaclasses in order to\n customize class creation.\n\nobject.__init__(self[, ...])\n\n Called when the instance is created. The arguments are those\n passed to the class constructor expression. If a base class has an\n ``__init__()`` method, the derived class\'s ``__init__()`` method,\n if any, must explicitly call it to ensure proper initialization of\n the base class part of the instance; for example:\n ``BaseClass.__init__(self, [args...])``. As a special constraint\n on constructors, no value may be returned; doing so will cause a\n ``TypeError`` to be raised at runtime.\n\nobject.__del__(self)\n\n Called when the instance is about to be destroyed. This is also\n called a destructor. If a base class has a ``__del__()`` method,\n the derived class\'s ``__del__()`` method, if any, must explicitly\n call it to ensure proper deletion of the base class part of the\n instance. Note that it is possible (though not recommended!) for\n the ``__del__()`` method to postpone destruction of the instance by\n creating a new reference to it. 
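As a hedged sketch of why ``__new__()`` matters for immutable types (the ``Celsius`` class below is purely illustrative), any validation has to happen before the ``float`` value is created, because it cannot be changed afterwards in ``__init__()``:

    class Celsius(float):
        # Immutable types are customized in __new__(), not __init__():
        # the float value is fixed at creation time.
        def __new__(cls, degrees):
            if degrees < -273.15:
                raise ValueError("below absolute zero")
            return super(Celsius, cls).__new__(cls, degrees)

    print Celsius(21.5) * 2       # 43.0 -- otherwise behaves like a float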
It may then be called at a later\n time when this new reference is deleted. It is not guaranteed that\n ``__del__()`` methods are called for objects that still exist when\n the interpreter exits.\n\n Note: ``del x`` doesn\'t directly call ``x.__del__()`` --- the former\n decrements the reference count for ``x`` by one, and the latter\n is only called when ``x``\'s reference count reaches zero. Some\n common situations that may prevent the reference count of an\n object from going to zero include: circular references between\n objects (e.g., a doubly-linked list or a tree data structure with\n parent and child pointers); a reference to the object on the\n stack frame of a function that caught an exception (the traceback\n stored in ``sys.exc_traceback`` keeps the stack frame alive); or\n a reference to the object on the stack frame that raised an\n unhandled exception in interactive mode (the traceback stored in\n ``sys.last_traceback`` keeps the stack frame alive). The first\n situation can only be remedied by explicitly breaking the cycles;\n the latter two situations can be resolved by storing ``None`` in\n ``sys.exc_traceback`` or ``sys.last_traceback``. Circular\n references which are garbage are detected when the option cycle\n detector is enabled (it\'s on by default), but can only be cleaned\n up if there are no Python-level ``__del__()`` methods involved.\n Refer to the documentation for the ``gc`` module for more\n information about how ``__del__()`` methods are handled by the\n cycle detector, particularly the description of the ``garbage``\n value.\n\n Warning: Due to the precarious circumstances under which ``__del__()``\n methods are invoked, exceptions that occur during their execution\n are ignored, and a warning is printed to ``sys.stderr`` instead.\n Also, when ``__del__()`` is invoked in response to a module being\n deleted (e.g., when execution of the program is done), other\n globals referenced by the ``__del__()`` method may already have\n been deleted or in the process of being torn down (e.g. the\n import machinery shutting down). For this reason, ``__del__()``\n methods should do the absolute minimum needed to maintain\n external invariants. Starting with version 1.5, Python\n guarantees that globals whose name begins with a single\n underscore are deleted from their module before other globals are\n deleted; if no other references to such globals exist, this may\n help in assuring that imported modules are still available at the\n time when the ``__del__()`` method is called.\n\nobject.__repr__(self)\n\n Called by the ``repr()`` built-in function and by string\n conversions (reverse quotes) to compute the "official" string\n representation of an object. If at all possible, this should look\n like a valid Python expression that could be used to recreate an\n object with the same value (given an appropriate environment). If\n this is not possible, a string of the form ``<...some useful\n description...>`` should be returned. The return value must be a\n string object. If a class defines ``__repr__()`` but not\n ``__str__()``, then ``__repr__()`` is also used when an "informal"\n string representation of instances of that class is required.\n\n This is typically used for debugging, so it is important that the\n representation is information-rich and unambiguous.\n\nobject.__str__(self)\n\n Called by the ``str()`` built-in function and by the ``print``\n statement to compute the "informal" string representation of an\n object. 
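A short illustrative pair of ``__repr__()`` and ``__str__()`` implementations (the ``Point`` class is hypothetical), showing which built-in operations call which method:

    class Point(object):
        def __init__(self, x, y):
            self.x, self.y = x, y
        def __repr__(self):
            # Unambiguous form, ideally a valid expression recreating the object.
            return "Point(%r, %r)" % (self.x, self.y)
        def __str__(self):
            # Friendlier form used by print and str().
            return "(%s, %s)" % (self.x, self.y)

    p = Point(2, 3)
    print p            # (2, 3)        -- uses __str__()
    print repr(p)      # Point(2, 3)   -- uses __repr__()
    print [p]          # [Point(2, 3)] -- containers use __repr__()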
This differs from ``__repr__()`` in that it does not have\n to be a valid Python expression: a more convenient or concise\n representation may be used instead. The return value must be a\n string object.\n\nobject.__lt__(self, other)\nobject.__le__(self, other)\nobject.__eq__(self, other)\nobject.__ne__(self, other)\nobject.__gt__(self, other)\nobject.__ge__(self, other)\n\n New in version 2.1.\n\n These are the so-called "rich comparison" methods, and are called\n for comparison operators in preference to ``__cmp__()`` below. The\n correspondence between operator symbols and method names is as\n follows: ``xy`` call ``x.__ne__(y)``, ``x>y`` calls ``x.__gt__(y)``, and\n ``x>=y`` calls ``x.__ge__(y)``.\n\n A rich comparison method may return the singleton\n ``NotImplemented`` if it does not implement the operation for a\n given pair of arguments. By convention, ``False`` and ``True`` are\n returned for a successful comparison. However, these methods can\n return any value, so if the comparison operator is used in a\n Boolean context (e.g., in the condition of an ``if`` statement),\n Python will call ``bool()`` on the value to determine if the result\n is true or false.\n\n There are no implied relationships among the comparison operators.\n The truth of ``x==y`` does not imply that ``x!=y`` is false.\n Accordingly, when defining ``__eq__()``, one should also define\n ``__ne__()`` so that the operators will behave as expected. See\n the paragraph on ``__hash__()`` for some important notes on\n creating *hashable* objects which support custom comparison\n operations and are usable as dictionary keys.\n\n There are no swapped-argument versions of these methods (to be used\n when the left argument does not support the operation but the right\n argument does); rather, ``__lt__()`` and ``__gt__()`` are each\n other\'s reflection, ``__le__()`` and ``__ge__()`` are each other\'s\n reflection, and ``__eq__()`` and ``__ne__()`` are their own\n reflection.\n\n Arguments to rich comparison methods are never coerced.\n\n To automatically generate ordering operations from a single root\n operation, see ``functools.total_ordering()``.\n\nobject.__cmp__(self, other)\n\n Called by comparison operations if rich comparison (see above) is\n not defined. Should return a negative integer if ``self < other``,\n zero if ``self == other``, a positive integer if ``self > other``.\n If no ``__cmp__()``, ``__eq__()`` or ``__ne__()`` operation is\n defined, class instances are compared by object identity\n ("address"). See also the description of ``__hash__()`` for some\n important notes on creating *hashable* objects which support custom\n comparison operations and are usable as dictionary keys. (Note: the\n restriction that exceptions are not propagated by ``__cmp__()`` has\n been removed since Python 1.5.)\n\nobject.__rcmp__(self, other)\n\n Changed in version 2.1: No longer supported.\n\nobject.__hash__(self)\n\n Called by built-in function ``hash()`` and for operations on\n members of hashed collections including ``set``, ``frozenset``, and\n ``dict``. ``__hash__()`` should return an integer. The only\n required property is that objects which compare equal have the same\n hash value; it is advised to somehow mix together (e.g. 
using\n exclusive or) the hash values for the components of the object that\n also play a part in comparison of objects.\n\n If a class does not define a ``__cmp__()`` or ``__eq__()`` method\n it should not define a ``__hash__()`` operation either; if it\n defines ``__cmp__()`` or ``__eq__()`` but not ``__hash__()``, its\n instances will not be usable in hashed collections. If a class\n defines mutable objects and implements a ``__cmp__()`` or\n ``__eq__()`` method, it should not implement ``__hash__()``, since\n hashable collection implementations require that a object\'s hash\n value is immutable (if the object\'s hash value changes, it will be\n in the wrong hash bucket).\n\n User-defined classes have ``__cmp__()`` and ``__hash__()`` methods\n by default; with them, all objects compare unequal (except with\n themselves) and ``x.__hash__()`` returns ``id(x)``.\n\n Classes which inherit a ``__hash__()`` method from a parent class\n but change the meaning of ``__cmp__()`` or ``__eq__()`` such that\n the hash value returned is no longer appropriate (e.g. by switching\n to a value-based concept of equality instead of the default\n identity based equality) can explicitly flag themselves as being\n unhashable by setting ``__hash__ = None`` in the class definition.\n Doing so means that not only will instances of the class raise an\n appropriate ``TypeError`` when a program attempts to retrieve their\n hash value, but they will also be correctly identified as\n unhashable when checking ``isinstance(obj, collections.Hashable)``\n (unlike classes which define their own ``__hash__()`` to explicitly\n raise ``TypeError``).\n\n Changed in version 2.5: ``__hash__()`` may now also return a long\n integer object; the 32-bit integer is then derived from the hash of\n that object.\n\n Changed in version 2.6: ``__hash__`` may now be set to ``None`` to\n explicitly flag instances of a class as unhashable.\n\nobject.__nonzero__(self)\n\n Called to implement truth value testing and the built-in operation\n ``bool()``; should return ``False`` or ``True``, or their integer\n equivalents ``0`` or ``1``. When this method is not defined,\n ``__len__()`` is called, if it is defined, and the object is\n considered true if its result is nonzero. If a class defines\n neither ``__len__()`` nor ``__nonzero__()``, all its instances are\n considered true.\n\nobject.__unicode__(self)\n\n Called to implement ``unicode()`` built-in; should return a Unicode\n object. When this method is not defined, string conversion is\n attempted, and the result of string conversion is converted to\n Unicode using the system default encoding.\n', - 'debugger': u'\n``pdb`` --- The Python Debugger\n*******************************\n\nThe module ``pdb`` defines an interactive source code debugger for\nPython programs. It supports setting (conditional) breakpoints and\nsingle stepping at the source line level, inspection of stack frames,\nsource code listing, and evaluation of arbitrary Python code in the\ncontext of any stack frame. It also supports post-mortem debugging\nand can be called under program control.\n\nThe debugger is extensible --- it is actually defined as the class\n``Pdb``. This is currently undocumented but easily understood by\nreading the source. The extension interface uses the modules ``bdb``\nand ``cmd``.\n\nThe debugger\'s prompt is ``(Pdb)``. 
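Tying together the rich-comparison and hashing rules described above, here is an illustrative class (``Version`` is a made-up name) whose equal instances also hash equal, so they can be used as dictionary keys:

    class Version(object):
        def __init__(self, major, minor):
            self.key = (major, minor)
        # x == y calls __eq__(), x != y calls __ne__(), x < y calls __lt__(), etc.
        def __eq__(self, other):
            if not isinstance(other, Version):
                return NotImplemented
            return self.key == other.key
        def __ne__(self, other):
            result = self.__eq__(other)
            return result if result is NotImplemented else not result
        def __lt__(self, other):
            if not isinstance(other, Version):
                return NotImplemented
            return self.key < other.key
        # Objects that compare equal must hash equal.
        def __hash__(self):
            return hash(self.key)

    print Version(1, 9) < Version(1, 10)            # True
    print {Version(2, 7): "ok"}[Version(2, 7)]      # ok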
Typical usage to run a program\nunder control of the debugger is:\n\n >>> import pdb\n >>> import mymodule\n >>> pdb.run(\'mymodule.test()\')\n > (0)?()\n (Pdb) continue\n > (1)?()\n (Pdb) continue\n NameError: \'spam\'\n > (1)?()\n (Pdb)\n\n``pdb.py`` can also be invoked as a script to debug other scripts.\nFor example:\n\n python -m pdb myscript.py\n\nWhen invoked as a script, pdb will automatically enter post-mortem\ndebugging if the program being debugged exits abnormally. After post-\nmortem debugging (or after normal exit of the program), pdb will\nrestart the program. Automatic restarting preserves pdb\'s state (such\nas breakpoints) and in most cases is more useful than quitting the\ndebugger upon program\'s exit.\n\nNew in version 2.4: Restarting post-mortem behavior added.\n\nThe typical usage to break into the debugger from a running program is\nto insert\n\n import pdb; pdb.set_trace()\n\nat the location you want to break into the debugger. You can then\nstep through the code following this statement, and continue running\nwithout the debugger using the ``c`` command.\n\nThe typical usage to inspect a crashed program is:\n\n >>> import pdb\n >>> import mymodule\n >>> mymodule.test()\n Traceback (most recent call last):\n File "", line 1, in ?\n File "./mymodule.py", line 4, in test\n test2()\n File "./mymodule.py", line 3, in test2\n print spam\n NameError: spam\n >>> pdb.pm()\n > ./mymodule.py(3)test2()\n -> print spam\n (Pdb)\n\nThe module defines the following functions; each enters the debugger\nin a slightly different way:\n\npdb.run(statement[, globals[, locals]])\n\n Execute the *statement* (given as a string) under debugger control.\n The debugger prompt appears before any code is executed; you can\n set breakpoints and type ``continue``, or you can step through the\n statement using ``step`` or ``next`` (all these commands are\n explained below). The optional *globals* and *locals* arguments\n specify the environment in which the code is executed; by default\n the dictionary of the module ``__main__`` is used. (See the\n explanation of the ``exec`` statement or the ``eval()`` built-in\n function.)\n\npdb.runeval(expression[, globals[, locals]])\n\n Evaluate the *expression* (given as a string) under debugger\n control. When ``runeval()`` returns, it returns the value of the\n expression. Otherwise this function is similar to ``run()``.\n\npdb.runcall(function[, argument, ...])\n\n Call the *function* (a function or method object, not a string)\n with the given arguments. When ``runcall()`` returns, it returns\n whatever the function call returned. The debugger prompt appears\n as soon as the function is entered.\n\npdb.set_trace()\n\n Enter the debugger at the calling stack frame. This is useful to\n hard-code a breakpoint at a given point in a program, even if the\n code is not otherwise being debugged (e.g. when an assertion\n fails).\n\npdb.post_mortem([traceback])\n\n Enter post-mortem debugging of the given *traceback* object. If no\n *traceback* is given, it uses the one of the exception that is\n currently being handled (an exception must be being handled if the\n default is to be used).\n\npdb.pm()\n\n Enter post-mortem debugging of the traceback found in\n ``sys.last_traceback``.\n\nThe ``run_*`` functions and ``set_trace()`` are aliases for\ninstantiating the ``Pdb`` class and calling the method of the same\nname. 
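For instance, a minimal sketch of entering the debugger through ``runcall()`` (``divide`` is just an illustrative function):

    import pdb

    def divide(a, b):
        return a / b

    # The (Pdb) prompt appears as soon as divide() is entered, before its
    # first line runs; 'c' continues execution, 's' steps into the body.
    pdb.runcall(divide, 6, 2)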
If you want to access further features, you have to do this\nyourself:\n\nclass class pdb.Pdb(completekey=\'tab\', stdin=None, stdout=None, skip=None)\n\n ``Pdb`` is the debugger class.\n\n The *completekey*, *stdin* and *stdout* arguments are passed to the\n underlying ``cmd.Cmd`` class; see the description there.\n\n The *skip* argument, if given, must be an iterable of glob-style\n module name patterns. The debugger will not step into frames that\n originate in a module that matches one of these patterns. [1]\n\n Example call to enable tracing with *skip*:\n\n import pdb; pdb.Pdb(skip=[\'django.*\']).set_trace()\n\n New in version 2.7: The *skip* argument.\n\n run(statement[, globals[, locals]])\n runeval(expression[, globals[, locals]])\n runcall(function[, argument, ...])\n set_trace()\n\n See the documentation for the functions explained above.\n', + 'debugger': u'\n``pdb`` --- The Python Debugger\n*******************************\n\nThe module ``pdb`` defines an interactive source code debugger for\nPython programs. It supports setting (conditional) breakpoints and\nsingle stepping at the source line level, inspection of stack frames,\nsource code listing, and evaluation of arbitrary Python code in the\ncontext of any stack frame. It also supports post-mortem debugging\nand can be called under program control.\n\nThe debugger is extensible --- it is actually defined as the class\n``Pdb``. This is currently undocumented but easily understood by\nreading the source. The extension interface uses the modules ``bdb``\nand ``cmd``.\n\nThe debugger\'s prompt is ``(Pdb)``. Typical usage to run a program\nunder control of the debugger is:\n\n >>> import pdb\n >>> import mymodule\n >>> pdb.run(\'mymodule.test()\')\n > (0)?()\n (Pdb) continue\n > (1)?()\n (Pdb) continue\n NameError: \'spam\'\n > (1)?()\n (Pdb)\n\n``pdb.py`` can also be invoked as a script to debug other scripts.\nFor example:\n\n python -m pdb myscript.py\n\nWhen invoked as a script, pdb will automatically enter post-mortem\ndebugging if the program being debugged exits abnormally. After post-\nmortem debugging (or after normal exit of the program), pdb will\nrestart the program. Automatic restarting preserves pdb\'s state (such\nas breakpoints) and in most cases is more useful than quitting the\ndebugger upon program\'s exit.\n\nNew in version 2.4: Restarting post-mortem behavior added.\n\nThe typical usage to break into the debugger from a running program is\nto insert\n\n import pdb; pdb.set_trace()\n\nat the location you want to break into the debugger. You can then\nstep through the code following this statement, and continue running\nwithout the debugger using the ``c`` command.\n\nThe typical usage to inspect a crashed program is:\n\n >>> import pdb\n >>> import mymodule\n >>> mymodule.test()\n Traceback (most recent call last):\n File "", line 1, in ?\n File "./mymodule.py", line 4, in test\n test2()\n File "./mymodule.py", line 3, in test2\n print spam\n NameError: spam\n >>> pdb.pm()\n > ./mymodule.py(3)test2()\n -> print spam\n (Pdb)\n\nThe module defines the following functions; each enters the debugger\nin a slightly different way:\n\npdb.run(statement[, globals[, locals]])\n\n Execute the *statement* (given as a string) under debugger control.\n The debugger prompt appears before any code is executed; you can\n set breakpoints and type ``continue``, or you can step through the\n statement using ``step`` or ``next`` (all these commands are\n explained below). 
The optional *globals* and *locals* arguments\n specify the environment in which the code is executed; by default\n the dictionary of the module ``__main__`` is used. (See the\n explanation of the ``exec`` statement or the ``eval()`` built-in\n function.)\n\npdb.runeval(expression[, globals[, locals]])\n\n Evaluate the *expression* (given as a string) under debugger\n control. When ``runeval()`` returns, it returns the value of the\n expression. Otherwise this function is similar to ``run()``.\n\npdb.runcall(function[, argument, ...])\n\n Call the *function* (a function or method object, not a string)\n with the given arguments. When ``runcall()`` returns, it returns\n whatever the function call returned. The debugger prompt appears\n as soon as the function is entered.\n\npdb.set_trace()\n\n Enter the debugger at the calling stack frame. This is useful to\n hard-code a breakpoint at a given point in a program, even if the\n code is not otherwise being debugged (e.g. when an assertion\n fails).\n\npdb.post_mortem([traceback])\n\n Enter post-mortem debugging of the given *traceback* object. If no\n *traceback* is given, it uses the one of the exception that is\n currently being handled (an exception must be being handled if the\n default is to be used).\n\npdb.pm()\n\n Enter post-mortem debugging of the traceback found in\n ``sys.last_traceback``.\n\nThe ``run*`` functions and ``set_trace()`` are aliases for\ninstantiating the ``Pdb`` class and calling the method of the same\nname. If you want to access further features, you have to do this\nyourself:\n\nclass class pdb.Pdb(completekey=\'tab\', stdin=None, stdout=None, skip=None)\n\n ``Pdb`` is the debugger class.\n\n The *completekey*, *stdin* and *stdout* arguments are passed to the\n underlying ``cmd.Cmd`` class; see the description there.\n\n The *skip* argument, if given, must be an iterable of glob-style\n module name patterns. The debugger will not step into frames that\n originate in a module that matches one of these patterns. [1]\n\n Example call to enable tracing with *skip*:\n\n import pdb; pdb.Pdb(skip=[\'django.*\']).set_trace()\n\n New in version 2.7: The *skip* argument.\n\n run(statement[, globals[, locals]])\n runeval(expression[, globals[, locals]])\n runcall(function[, argument, ...])\n set_trace()\n\n See the documentation for the functions explained above.\n', 'del': u'\nThe ``del`` statement\n*********************\n\n del_stmt ::= "del" target_list\n\nDeletion is recursively defined very similar to the way assignment is\ndefined. Rather that spelling it out in full details, here are some\nhints.\n\nDeletion of a target list recursively deletes each target, from left\nto right.\n\nDeletion of a name removes the binding of that name from the local or\nglobal namespace, depending on whether the name occurs in a ``global``\nstatement in the same code block. 
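A small sketch of name, slice and subscription deletion (the unbound-name case is described next):

    spam = 42
    del spam                # removes the binding; the name is now undefined

    items = [1, 2, 3, 4]
    del items[1:3]          # like assigning an empty slice: items == [1, 4]
    del items[0]            # deletion of a subscription: items == [4]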
If the name is unbound, a\n``NameError`` exception will be raised.\n\nIt is illegal to delete a name from the local namespace if it occurs\nas a free variable in a nested block.\n\nDeletion of attribute references, subscriptions and slicings is passed\nto the primary object involved; deletion of a slicing is in general\nequivalent to assignment of an empty slice of the right type (but even\nthis is determined by the sliced object).\n', 'dict': u'\nDictionary displays\n*******************\n\nA dictionary display is a possibly empty series of key/datum pairs\nenclosed in curly braces:\n\n dict_display ::= "{" [key_datum_list | dict_comprehension] "}"\n key_datum_list ::= key_datum ("," key_datum)* [","]\n key_datum ::= expression ":" expression\n dict_comprehension ::= expression ":" expression comp_for\n\nA dictionary display yields a new dictionary object.\n\nIf a comma-separated sequence of key/datum pairs is given, they are\nevaluated from left to right to define the entries of the dictionary:\neach key object is used as a key into the dictionary to store the\ncorresponding datum. This means that you can specify the same key\nmultiple times in the key/datum list, and the final dictionary\'s value\nfor that key will be the last one given.\n\nA dict comprehension, in contrast to list and set comprehensions,\nneeds two expressions separated with a colon followed by the usual\n"for" and "if" clauses. When the comprehension is run, the resulting\nkey and value elements are inserted in the new dictionary in the order\nthey are produced.\n\nRestrictions on the types of the key values are listed earlier in\nsection *The standard type hierarchy*. (To summarize, the key type\nshould be *hashable*, which excludes all mutable objects.) Clashes\nbetween duplicate keys are not detected; the last datum (textually\nrightmost in the display) stored for a given key value prevails.\n', 'dynamic-features': u'\nInteraction with dynamic features\n*********************************\n\nThere are several cases where Python statements are illegal when used\nin conjunction with nested scopes that contain free variables.\n\nIf a variable is referenced in an enclosing scope, it is illegal to\ndelete the name. An error will be reported at compile time.\n\nIf the wild card form of import --- ``import *`` --- is used in a\nfunction and the function contains or is a nested block with free\nvariables, the compiler will raise a ``SyntaxError``.\n\nIf ``exec`` is used in a function and the function contains or is a\nnested block with free variables, the compiler will raise a\n``SyntaxError`` unless the exec explicitly specifies the local\nnamespace for the ``exec``. (In other words, ``exec obj`` would be\nillegal, but ``exec obj in ns`` would be legal.)\n\nThe ``eval()``, ``execfile()``, and ``input()`` functions and the\n``exec`` statement do not have access to the full environment for\nresolving names. Names may be resolved in the local and global\nnamespaces of the caller. Free variables are not resolved in the\nnearest enclosing namespace, but in the global namespace. 
[1] The\n``exec`` statement and the ``eval()`` and ``execfile()`` functions\nhave optional arguments to override the global and local namespace.\nIf only one namespace is specified, it is used for both.\n', 'else': u'\nThe ``if`` statement\n********************\n\nThe ``if`` statement is used for conditional execution:\n\n if_stmt ::= "if" expression ":" suite\n ( "elif" expression ":" suite )*\n ["else" ":" suite]\n\nIt selects exactly one of the suites by evaluating the expressions one\nby one until one is found to be true (see section *Boolean operations*\nfor the definition of true and false); then that suite is executed\n(and no other part of the ``if`` statement is executed or evaluated).\nIf all expressions are false, the suite of the ``else`` clause, if\npresent, is executed.\n', 'exceptions': u'\nExceptions\n**********\n\nExceptions are a means of breaking out of the normal flow of control\nof a code block in order to handle errors or other exceptional\nconditions. An exception is *raised* at the point where the error is\ndetected; it may be *handled* by the surrounding code block or by any\ncode block that directly or indirectly invoked the code block where\nthe error occurred.\n\nThe Python interpreter raises an exception when it detects a run-time\nerror (such as division by zero). A Python program can also\nexplicitly raise an exception with the ``raise`` statement. Exception\nhandlers are specified with the ``try`` ... ``except`` statement. The\n``finally`` clause of such a statement can be used to specify cleanup\ncode which does not handle the exception, but is executed whether an\nexception occurred or not in the preceding code.\n\nPython uses the "termination" model of error handling: an exception\nhandler can find out what happened and continue execution at an outer\nlevel, but it cannot repair the cause of the error and retry the\nfailing operation (except by re-entering the offending piece of code\nfrom the top).\n\nWhen an exception is not handled at all, the interpreter terminates\nexecution of the program, or returns to its interactive main loop. In\neither case, it prints a stack backtrace, except when the exception is\n``SystemExit``.\n\nExceptions are identified by class instances. The ``except`` clause\nis selected depending on the class of the instance: it must reference\nthe class of the instance or a base class thereof. The instance can\nbe received by the handler and can carry additional information about\nthe exceptional condition.\n\nExceptions can also be identified by strings, in which case the\n``except`` clause is selected by object identity. An arbitrary value\ncan be raised along with the identifying string which can be passed to\nthe handler.\n\nNote: Messages to exceptions are not part of the Python API. Their\n contents may change from one version of Python to the next without\n warning and should not be relied on by code which will run under\n multiple versions of the interpreter.\n\nSee also the description of the ``try`` statement in section *The try\nstatement* and ``raise`` statement in section *The raise statement*.\n\n-[ Footnotes ]-\n\n[1] This limitation occurs because the code that is executed by these\n operations is not available at the time the module is compiled.\n', 'exec': u'\nThe ``exec`` statement\n**********************\n\n exec_stmt ::= "exec" or_expr ["in" expression ["," expression]]\n\nThis statement supports dynamic execution of Python code. 
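A minimal sketch of the string form with an explicit namespace (the accepted argument types are detailed next):

    namespace = {}
    exec "x = 6 * 7" in namespace     # the string is parsed as a suite and run in 'namespace'
    print namespace['x']              # prints 42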
The first\nexpression should evaluate to either a string, an open file object, or\na code object. If it is a string, the string is parsed as a suite of\nPython statements which is then executed (unless a syntax error\noccurs). [1] If it is an open file, the file is parsed until EOF and\nexecuted. If it is a code object, it is simply executed. In all\ncases, the code that\'s executed is expected to be valid as file input\n(see section *File input*). Be aware that the ``return`` and\n``yield`` statements may not be used outside of function definitions\neven within the context of code passed to the ``exec`` statement.\n\nIn all cases, if the optional parts are omitted, the code is executed\nin the current scope. If only the first expression after ``in`` is\nspecified, it should be a dictionary, which will be used for both the\nglobal and the local variables. If two expressions are given, they\nare used for the global and local variables, respectively. If\nprovided, *locals* can be any mapping object.\n\nChanged in version 2.4: Formerly, *locals* was required to be a\ndictionary.\n\nAs a side effect, an implementation may insert additional keys into\nthe dictionaries given besides those corresponding to variable names\nset by the executed code. For example, the current implementation may\nadd a reference to the dictionary of the built-in module\n``__builtin__`` under the key ``__builtins__`` (!).\n\n**Programmer\'s hints:** dynamic evaluation of expressions is supported\nby the built-in function ``eval()``. The built-in functions\n``globals()`` and ``locals()`` return the current global and local\ndictionary, respectively, which may be useful to pass around for use\nby ``exec``.\n\n-[ Footnotes ]-\n\n[1] Note that the parser only accepts the Unix-style end of line\n convention. If you are reading the code from a file, make sure to\n use universal newline mode to convert Windows or Mac-style\n newlines.\n', - 'execmodel': u'\nExecution model\n***************\n\n\nNaming and binding\n==================\n\n*Names* refer to objects. Names are introduced by name binding\noperations. Each occurrence of a name in the program text refers to\nthe *binding* of that name established in the innermost function block\ncontaining the use.\n\nA *block* is a piece of Python program text that is executed as a\nunit. The following are blocks: a module, a function body, and a class\ndefinition. Each command typed interactively is a block. A script\nfile (a file given as standard input to the interpreter or specified\non the interpreter command line the first argument) is a code block.\nA script command (a command specified on the interpreter command line\nwith the \'**-c**\' option) is a code block. The file read by the\nbuilt-in function ``execfile()`` is a code block. The string argument\npassed to the built-in function ``eval()`` and to the ``exec``\nstatement is a code block. The expression read and evaluated by the\nbuilt-in function ``input()`` is a code block.\n\nA code block is executed in an *execution frame*. A frame contains\nsome administrative information (used for debugging) and determines\nwhere and how execution continues after the code block\'s execution has\ncompleted.\n\nA *scope* defines the visibility of a name within a block. If a local\nvariable is defined in a block, its scope includes that block. If the\ndefinition occurs in a function block, the scope extends to any blocks\ncontained within the defining one, unless a contained block introduces\na different binding for the name. 
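A small sketch of a name defined in a function block being visible in a contained block (the class-block exception follows; ``outer`` and ``inner`` are illustrative names):

    def outer():
        greeting = 'hello'
        def inner():
            # 'greeting' is a free variable here; it resolves to the binding
            # in the nearest enclosing function block, not to a global.
            return greeting
        return inner()

    print outer()           # prints 'hello'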
The scope of names defined in a\nclass block is limited to the class block; it does not extend to the\ncode blocks of methods -- this includes generator expressions since\nthey are implemented using a function scope. This means that the\nfollowing will fail:\n\n class A:\n a = 42\n b = list(a + i for i in range(10))\n\nWhen a name is used in a code block, it is resolved using the nearest\nenclosing scope. The set of all such scopes visible to a code block\nis called the block\'s *environment*.\n\nIf a name is bound in a block, it is a local variable of that block.\nIf a name is bound at the module level, it is a global variable. (The\nvariables of the module code block are local and global.) If a\nvariable is used in a code block but not defined there, it is a *free\nvariable*.\n\nWhen a name is not found at all, a ``NameError`` exception is raised.\nIf the name refers to a local variable that has not been bound, a\n``UnboundLocalError`` exception is raised. ``UnboundLocalError`` is a\nsubclass of ``NameError``.\n\nThe following constructs bind names: formal parameters to functions,\n``import`` statements, class and function definitions (these bind the\nclass or function name in the defining block), and targets that are\nidentifiers if occurring in an assignment, ``for`` loop header, in the\nsecond position of an ``except`` clause header or after ``as`` in a\n``with`` statement. The ``import`` statement of the form ``from ...\nimport *`` binds all names defined in the imported module, except\nthose beginning with an underscore. This form may only be used at the\nmodule level.\n\nA target occurring in a ``del`` statement is also considered bound for\nthis purpose (though the actual semantics are to unbind the name). It\nis illegal to unbind a name that is referenced by an enclosing scope;\nthe compiler will report a ``SyntaxError``.\n\nEach assignment or import statement occurs within a block defined by a\nclass or function definition or at the module level (the top-level\ncode block).\n\nIf a name binding operation occurs anywhere within a code block, all\nuses of the name within the block are treated as references to the\ncurrent block. This can lead to errors when a name is used within a\nblock before it is bound. This rule is subtle. Python lacks\ndeclarations and allows name binding operations to occur anywhere\nwithin a code block. The local variables of a code block can be\ndetermined by scanning the entire text of the block for name binding\noperations.\n\nIf the global statement occurs within a block, all uses of the name\nspecified in the statement refer to the binding of that name in the\ntop-level namespace. Names are resolved in the top-level namespace by\nsearching the global namespace, i.e. the namespace of the module\ncontaining the code block, and the builtins namespace, the namespace\nof the module ``__builtin__``. The global namespace is searched\nfirst. If the name is not found there, the builtins namespace is\nsearched. The global statement must precede all uses of the name.\n\nThe builtins namespace associated with the execution of a code block\nis actually found by looking up the name ``__builtins__`` in its\nglobal namespace; this should be a dictionary or a module (in the\nlatter case the module\'s dictionary is used). By default, when in the\n``__main__`` module, ``__builtins__`` is the built-in module\n``__builtin__`` (note: no \'s\'); when in any other module,\n``__builtins__`` is an alias for the dictionary of the ``__builtin__``\nmodule itself. 
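A sketch of the ``global`` statement rule described above (``bump`` is an illustrative name):

    counter = 0

    def bump():
        global counter      # uses of 'counter' now refer to the module-level binding
        counter += 1

    bump()
    print counter           # prints 1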
``__builtins__`` can be set to a user-created\ndictionary to create a weak form of restricted execution.\n\n**CPython implementation detail:** Users should not touch\n``__builtins__``; it is strictly an implementation detail. Users\nwanting to override values in the builtins namespace should ``import``\nthe ``__builtin__`` (no \'s\') module and modify its attributes\nappropriately.\n\nThe namespace for a module is automatically created the first time a\nmodule is imported. The main module for a script is always called\n``__main__``.\n\nThe global statement has the same scope as a name binding operation in\nthe same block. If the nearest enclosing scope for a free variable\ncontains a global statement, the free variable is treated as a global.\n\nA class definition is an executable statement that may use and define\nnames. These references follow the normal rules for name resolution.\nThe namespace of the class definition becomes the attribute dictionary\nof the class. Names defined at the class scope are not visible in\nmethods.\n\n\nInteraction with dynamic features\n---------------------------------\n\nThere are several cases where Python statements are illegal when used\nin conjunction with nested scopes that contain free variables.\n\nIf a variable is referenced in an enclosing scope, it is illegal to\ndelete the name. An error will be reported at compile time.\n\nIf the wild card form of import --- ``import *`` --- is used in a\nfunction and the function contains or is a nested block with free\nvariables, the compiler will raise a ``SyntaxError``.\n\nIf ``exec`` is used in a function and the function contains or is a\nnested block with free variables, the compiler will raise a\n``SyntaxError`` unless the exec explicitly specifies the local\nnamespace for the ``exec``. (In other words, ``exec obj`` would be\nillegal, but ``exec obj in ns`` would be legal.)\n\nThe ``eval()``, ``execfile()``, and ``input()`` functions and the\n``exec`` statement do not have access to the full environment for\nresolving names. Names may be resolved in the local and global\nnamespaces of the caller. Free variables are not resolved in the\nnearest enclosing namespace, but in the global namespace. [1] The\n``exec`` statement and the ``eval()`` and ``execfile()`` functions\nhave optional arguments to override the global and local namespace.\nIf only one namespace is specified, it is used for both.\n\n\nExceptions\n==========\n\nExceptions are a means of breaking out of the normal flow of control\nof a code block in order to handle errors or other exceptional\nconditions. An exception is *raised* at the point where the error is\ndetected; it may be *handled* by the surrounding code block or by any\ncode block that directly or indirectly invoked the code block where\nthe error occurred.\n\nThe Python interpreter raises an exception when it detects a run-time\nerror (such as division by zero). A Python program can also\nexplicitly raise an exception with the ``raise`` statement. Exception\nhandlers are specified with the ``try`` ... ``except`` statement. 
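A small sketch of a handler selected by the exception's class, plus cleanup code (the ``finally`` clause is described next):

    try:
        result = 1 / 0
    except ZeroDivisionError, exc:      # chosen because the instance's class matches
        result = None
    finally:
        print 'cleanup runs whether or not an exception occurred'

    print result                        # prints None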
The\n``finally`` clause of such a statement can be used to specify cleanup\ncode which does not handle the exception, but is executed whether an\nexception occurred or not in the preceding code.\n\nPython uses the "termination" model of error handling: an exception\nhandler can find out what happened and continue execution at an outer\nlevel, but it cannot repair the cause of the error and retry the\nfailing operation (except by re-entering the offending piece of code\nfrom the top).\n\nWhen an exception is not handled at all, the interpreter terminates\nexecution of the program, or returns to its interactive main loop. In\neither case, it prints a stack backtrace, except when the exception is\n``SystemExit``.\n\nExceptions are identified by class instances. The ``except`` clause\nis selected depending on the class of the instance: it must reference\nthe class of the instance or a base class thereof. The instance can\nbe received by the handler and can carry additional information about\nthe exceptional condition.\n\nExceptions can also be identified by strings, in which case the\n``except`` clause is selected by object identity. An arbitrary value\ncan be raised along with the identifying string which can be passed to\nthe handler.\n\nNote: Messages to exceptions are not part of the Python API. Their\n contents may change from one version of Python to the next without\n warning and should not be relied on by code which will run under\n multiple versions of the interpreter.\n\nSee also the description of the ``try`` statement in section *The try\nstatement* and ``raise`` statement in section *The raise statement*.\n\n-[ Footnotes ]-\n\n[1] This limitation occurs because the code that is executed by these\n operations is not available at the time the module is compiled.\n', + 'execmodel': u'\nExecution model\n***************\n\n\nNaming and binding\n==================\n\n*Names* refer to objects. Names are introduced by name binding\noperations. Each occurrence of a name in the program text refers to\nthe *binding* of that name established in the innermost function block\ncontaining the use.\n\nA *block* is a piece of Python program text that is executed as a\nunit. The following are blocks: a module, a function body, and a class\ndefinition. Each command typed interactively is a block. A script\nfile (a file given as standard input to the interpreter or specified\non the interpreter command line the first argument) is a code block.\nA script command (a command specified on the interpreter command line\nwith the \'**-c**\' option) is a code block. The file read by the\nbuilt-in function ``execfile()`` is a code block. The string argument\npassed to the built-in function ``eval()`` and to the ``exec``\nstatement is a code block. The expression read and evaluated by the\nbuilt-in function ``input()`` is a code block.\n\nA code block is executed in an *execution frame*. A frame contains\nsome administrative information (used for debugging) and determines\nwhere and how execution continues after the code block\'s execution has\ncompleted.\n\nA *scope* defines the visibility of a name within a block. If a local\nvariable is defined in a block, its scope includes that block. If the\ndefinition occurs in a function block, the scope extends to any blocks\ncontained within the defining one, unless a contained block introduces\na different binding for the name. 
The scope of names defined in a\nclass block is limited to the class block; it does not extend to the\ncode blocks of methods -- this includes generator expressions since\nthey are implemented using a function scope. This means that the\nfollowing will fail:\n\n class A:\n a = 42\n b = list(a + i for i in range(10))\n\nWhen a name is used in a code block, it is resolved using the nearest\nenclosing scope. The set of all such scopes visible to a code block\nis called the block\'s *environment*.\n\nIf a name is bound in a block, it is a local variable of that block.\nIf a name is bound at the module level, it is a global variable. (The\nvariables of the module code block are local and global.) If a\nvariable is used in a code block but not defined there, it is a *free\nvariable*.\n\nWhen a name is not found at all, a ``NameError`` exception is raised.\nIf the name refers to a local variable that has not been bound, a\n``UnboundLocalError`` exception is raised. ``UnboundLocalError`` is a\nsubclass of ``NameError``.\n\nThe following constructs bind names: formal parameters to functions,\n``import`` statements, class and function definitions (these bind the\nclass or function name in the defining block), and targets that are\nidentifiers if occurring in an assignment, ``for`` loop header, in the\nsecond position of an ``except`` clause header or after ``as`` in a\n``with`` statement. The ``import`` statement of the form ``from ...\nimport *`` binds all names defined in the imported module, except\nthose beginning with an underscore. This form may only be used at the\nmodule level.\n\nA target occurring in a ``del`` statement is also considered bound for\nthis purpose (though the actual semantics are to unbind the name). It\nis illegal to unbind a name that is referenced by an enclosing scope;\nthe compiler will report a ``SyntaxError``.\n\nEach assignment or import statement occurs within a block defined by a\nclass or function definition or at the module level (the top-level\ncode block).\n\nIf a name binding operation occurs anywhere within a code block, all\nuses of the name within the block are treated as references to the\ncurrent block. This can lead to errors when a name is used within a\nblock before it is bound. This rule is subtle. Python lacks\ndeclarations and allows name binding operations to occur anywhere\nwithin a code block. The local variables of a code block can be\ndetermined by scanning the entire text of the block for name binding\noperations.\n\nIf the global statement occurs within a block, all uses of the name\nspecified in the statement refer to the binding of that name in the\ntop-level namespace. Names are resolved in the top-level namespace by\nsearching the global namespace, i.e. the namespace of the module\ncontaining the code block, and the builtins namespace, the namespace\nof the module ``__builtin__``. The global namespace is searched\nfirst. If the name is not found there, the builtins namespace is\nsearched. The global statement must precede all uses of the name.\n\nThe builtins namespace associated with the execution of a code block\nis actually found by looking up the name ``__builtins__`` in its\nglobal namespace; this should be a dictionary or a module (in the\nlatter case the module\'s dictionary is used). By default, when in the\n``__main__`` module, ``__builtins__`` is the built-in module\n``__builtin__`` (note: no \'s\'); when in any other module,\n``__builtins__`` is an alias for the dictionary of the ``__builtin__``\nmodule itself. 
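Illustrating the rule above that a binding anywhere in a code block makes the name local throughout that block (a sketch; ``report`` is an illustrative name):

    message = 'module level'

    def report():
        # 'message' is assigned later in this block, so it is local throughout
        # report(); reading it before the assignment does not fall back to the
        # module-level binding.
        try:
            print message
        except UnboundLocalError:
            print 'not yet bound'
        message = 'local'

    report()                # prints 'not yet bound'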
``__builtins__`` can be set to a user-created\ndictionary to create a weak form of restricted execution.\n\n**CPython implementation detail:** Users should not touch\n``__builtins__``; it is strictly an implementation detail. Users\nwanting to override values in the builtins namespace should ``import``\nthe ``__builtin__`` (no \'s\') module and modify its attributes\nappropriately.\n\nThe namespace for a module is automatically created the first time a\nmodule is imported. The main module for a script is always called\n``__main__``.\n\nThe ``global`` statement has the same scope as a name binding\noperation in the same block. If the nearest enclosing scope for a\nfree variable contains a global statement, the free variable is\ntreated as a global.\n\nA class definition is an executable statement that may use and define\nnames. These references follow the normal rules for name resolution.\nThe namespace of the class definition becomes the attribute dictionary\nof the class. Names defined at the class scope are not visible in\nmethods.\n\n\nInteraction with dynamic features\n---------------------------------\n\nThere are several cases where Python statements are illegal when used\nin conjunction with nested scopes that contain free variables.\n\nIf a variable is referenced in an enclosing scope, it is illegal to\ndelete the name. An error will be reported at compile time.\n\nIf the wild card form of import --- ``import *`` --- is used in a\nfunction and the function contains or is a nested block with free\nvariables, the compiler will raise a ``SyntaxError``.\n\nIf ``exec`` is used in a function and the function contains or is a\nnested block with free variables, the compiler will raise a\n``SyntaxError`` unless the exec explicitly specifies the local\nnamespace for the ``exec``. (In other words, ``exec obj`` would be\nillegal, but ``exec obj in ns`` would be legal.)\n\nThe ``eval()``, ``execfile()``, and ``input()`` functions and the\n``exec`` statement do not have access to the full environment for\nresolving names. Names may be resolved in the local and global\nnamespaces of the caller. Free variables are not resolved in the\nnearest enclosing namespace, but in the global namespace. [1] The\n``exec`` statement and the ``eval()`` and ``execfile()`` functions\nhave optional arguments to override the global and local namespace.\nIf only one namespace is specified, it is used for both.\n\n\nExceptions\n==========\n\nExceptions are a means of breaking out of the normal flow of control\nof a code block in order to handle errors or other exceptional\nconditions. An exception is *raised* at the point where the error is\ndetected; it may be *handled* by the surrounding code block or by any\ncode block that directly or indirectly invoked the code block where\nthe error occurred.\n\nThe Python interpreter raises an exception when it detects a run-time\nerror (such as division by zero). A Python program can also\nexplicitly raise an exception with the ``raise`` statement. Exception\nhandlers are specified with the ``try`` ... ``except`` statement. 
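Returning to the class-scope rule above (names defined at class scope are not visible in methods), a sketch with the illustrative name ``Config``:

    class Config(object):
        default = 'on'                  # a name defined at class scope
        def describe(self):
            # 'default' is not visible here as a bare name; it has to be
            # reached through the class or the instance.
            return Config.default

    print Config().describe()           # prints 'on'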
The\n``finally`` clause of such a statement can be used to specify cleanup\ncode which does not handle the exception, but is executed whether an\nexception occurred or not in the preceding code.\n\nPython uses the "termination" model of error handling: an exception\nhandler can find out what happened and continue execution at an outer\nlevel, but it cannot repair the cause of the error and retry the\nfailing operation (except by re-entering the offending piece of code\nfrom the top).\n\nWhen an exception is not handled at all, the interpreter terminates\nexecution of the program, or returns to its interactive main loop. In\neither case, it prints a stack backtrace, except when the exception is\n``SystemExit``.\n\nExceptions are identified by class instances. The ``except`` clause\nis selected depending on the class of the instance: it must reference\nthe class of the instance or a base class thereof. The instance can\nbe received by the handler and can carry additional information about\nthe exceptional condition.\n\nExceptions can also be identified by strings, in which case the\n``except`` clause is selected by object identity. An arbitrary value\ncan be raised along with the identifying string which can be passed to\nthe handler.\n\nNote: Messages to exceptions are not part of the Python API. Their\n contents may change from one version of Python to the next without\n warning and should not be relied on by code which will run under\n multiple versions of the interpreter.\n\nSee also the description of the ``try`` statement in section *The try\nstatement* and ``raise`` statement in section *The raise statement*.\n\n-[ Footnotes ]-\n\n[1] This limitation occurs because the code that is executed by these\n operations is not available at the time the module is compiled.\n', 'exprlists': u'\nExpression lists\n****************\n\n expression_list ::= expression ( "," expression )* [","]\n\nAn expression list containing at least one comma yields a tuple. The\nlength of the tuple is the number of expressions in the list. The\nexpressions are evaluated from left to right.\n\nThe trailing comma is required only to create a single tuple (a.k.a. a\n*singleton*); it is optional in all other cases. A single expression\nwithout a trailing comma doesn\'t create a tuple, but rather yields the\nvalue of that expression. (To create an empty tuple, use an empty pair\nof parentheses: ``()``.)\n', 'floating': u'\nFloating point literals\n***********************\n\nFloating point literals are described by the following lexical\ndefinitions:\n\n floatnumber ::= pointfloat | exponentfloat\n pointfloat ::= [intpart] fraction | intpart "."\n exponentfloat ::= (intpart | pointfloat) exponent\n intpart ::= digit+\n fraction ::= "." digit+\n exponent ::= ("e" | "E") ["+" | "-"] digit+\n\nNote that the integer and exponent parts of floating point numbers can\nlook like octal integers, but are interpreted using radix 10. For\nexample, ``077e010`` is legal, and denotes the same number as\n``77e10``. The allowed range of floating point literals is\nimplementation-dependent. Some examples of floating point literals:\n\n 3.14 10. 
.001 1e100 3.14e-10 0e0\n\nNote that numeric literals do not include a sign; a phrase like ``-1``\nis actually an expression composed of the unary operator ``-`` and the\nliteral ``1``.\n', 'for': u'\nThe ``for`` statement\n*********************\n\nThe ``for`` statement is used to iterate over the elements of a\nsequence (such as a string, tuple or list) or other iterable object:\n\n for_stmt ::= "for" target_list "in" expression_list ":" suite\n ["else" ":" suite]\n\nThe expression list is evaluated once; it should yield an iterable\nobject. An iterator is created for the result of the\n``expression_list``. The suite is then executed once for each item\nprovided by the iterator, in the order of ascending indices. Each\nitem in turn is assigned to the target list using the standard rules\nfor assignments, and then the suite is executed. When the items are\nexhausted (which is immediately when the sequence is empty), the suite\nin the ``else`` clause, if present, is executed, and the loop\nterminates.\n\nA ``break`` statement executed in the first suite terminates the loop\nwithout executing the ``else`` clause\'s suite. A ``continue``\nstatement executed in the first suite skips the rest of the suite and\ncontinues with the next item, or with the ``else`` clause if there was\nno next item.\n\nThe suite may assign to the variable(s) in the target list; this does\nnot affect the next item assigned to it.\n\nThe target list is not deleted when the loop is finished, but if the\nsequence is empty, it will not have been assigned to at all by the\nloop. Hint: the built-in function ``range()`` returns a sequence of\nintegers suitable to emulate the effect of Pascal\'s ``for i := a to b\ndo``; e.g., ``range(3)`` returns the list ``[0, 1, 2]``.\n\nNote: There is a subtlety when the sequence is being modified by the loop\n (this can only occur for mutable sequences, i.e. lists). An internal\n counter is used to keep track of which item is used next, and this\n is incremented on each iteration. When this counter has reached the\n length of the sequence the loop terminates. This means that if the\n suite deletes the current (or a previous) item from the sequence,\n the next item will be skipped (since it gets the index of the\n current item which has already been treated). Likewise, if the\n suite inserts an item in the sequence before the current item, the\n current item will be treated again the next time through the loop.\n This can lead to nasty bugs that can be avoided by making a\n temporary copy using a slice of the whole sequence, e.g.,\n\n for x in a[:]:\n if x < 0: a.remove(x)\n', - 'formatstrings': u'\nFormat String Syntax\n********************\n\nThe ``str.format()`` method and the ``Formatter`` class share the same\nsyntax for format strings (although in the case of ``Formatter``,\nsubclasses can define their own format string syntax).\n\nFormat strings contain "replacement fields" surrounded by curly braces\n``{}``. Anything that is not contained in braces is considered literal\ntext, which is copied unchanged to the output. If you need to include\na brace character in the literal text, it can be escaped by doubling:\n``{{`` and ``}}``.\n\nThe grammar for a replacement field is as follows:\n\n replacement_field ::= "{" [field_name] ["!" conversion] [":" format_spec] "}"\n field_name ::= arg_name ("." 
attribute_name | "[" element_index "]")*\n arg_name ::= [identifier | integer]\n attribute_name ::= identifier\n element_index ::= integer | index_string\n index_string ::= +\n conversion ::= "r" | "s"\n format_spec ::= \n\nIn less formal terms, the replacement field can start with a\n*field_name* that specifies the object whose value is to be formatted\nand inserted into the output instead of the replacement field. The\n*field_name* is optionally followed by a *conversion* field, which is\npreceded by an exclamation point ``\'!\'``, and a *format_spec*, which\nis preceded by a colon ``\':\'``. These specify a non-default format\nfor the replacement value.\n\nSee also the *Format Specification Mini-Language* section.\n\nThe *field_name* itself begins with an *arg_name* that is either\neither a number or a keyword. If it\'s a number, it refers to a\npositional argument, and if it\'s a keyword, it refers to a named\nkeyword argument. If the numerical arg_names in a format string are\n0, 1, 2, ... in sequence, they can all be omitted (not just some) and\nthe numbers 0, 1, 2, ... will be automatically inserted in that order.\nThe *arg_name* can be followed by any number of index or attribute\nexpressions. An expression of the form ``\'.name\'`` selects the named\nattribute using ``getattr()``, while an expression of the form\n``\'[index]\'`` does an index lookup using ``__getitem__()``.\n\nChanged in version 2.7: The positional argument specifiers can be\nomitted, so ``\'{} {}\'`` is equivalent to ``\'{0} {1}\'``.\n\nSome simple format string examples:\n\n "First, thou shalt count to {0}" # References first positional argument\n "Bring me a {}" # Implicitly references the first positional argument\n "From {} to {}" # Same as "From {0} to {1}"\n "My quest is {name}" # References keyword argument \'name\'\n "Weight in tons {0.weight}" # \'weight\' attribute of first positional arg\n "Units destroyed: {players[0]}" # First element of keyword argument \'players\'.\n\nThe *conversion* field causes a type coercion before formatting.\nNormally, the job of formatting a value is done by the\n``__format__()`` method of the value itself. However, in some cases\nit is desirable to force a type to be formatted as a string,\noverriding its own definition of formatting. By converting the value\nto a string before calling ``__format__()``, the normal formatting\nlogic is bypassed.\n\nTwo conversion flags are currently supported: ``\'!s\'`` which calls\n``str()`` on the value, and ``\'!r\'`` which calls ``repr()``.\n\nSome examples:\n\n "Harold\'s a clever {0!s}" # Calls str() on the argument first\n "Bring out the holy {name!r}" # Calls repr() on the argument first\n\nThe *format_spec* field contains a specification of how the value\nshould be presented, including such details as field width, alignment,\npadding, decimal precision and so on. Each value type can define its\nown "formatting mini-language" or interpretation of the *format_spec*.\n\nMost built-in types support a common formatting mini-language, which\nis described in the next section.\n\nA *format_spec* field can also include nested replacement fields\nwithin it. These nested replacement fields can contain only a field\nname; conversion flags and format specifications are not allowed. The\nreplacement fields within the format_spec are substituted before the\n*format_spec* string is interpreted. 
This allows the formatting of a\nvalue to be dynamically specified.\n\nSee the *Format examples* section for some examples.\n\n\nFormat Specification Mini-Language\n==================================\n\n"Format specifications" are used within replacement fields contained\nwithin a format string to define how individual values are presented\n(see *Format String Syntax*). They can also be passed directly to the\nbuilt-in ``format()`` function. Each formattable type may define how\nthe format specification is to be interpreted.\n\nMost built-in types implement the following options for format\nspecifications, although some of the formatting options are only\nsupported by the numeric types.\n\nA general convention is that an empty format string (``""``) produces\nthe same result as if you had called ``str()`` on the value. A non-\nempty format string typically modifies the result.\n\nThe general form of a *standard format specifier* is:\n\n format_spec ::= [[fill]align][sign][#][0][width][,][.precision][type]\n fill ::=
    \n align ::= "<" | ">" | "=" | "^"\n sign ::= "+" | "-" | " "\n width ::= integer\n precision ::= integer\n type ::= "b" | "c" | "d" | "e" | "E" | "f" | "F" | "g" | "G" | "n" | "o" | "s" | "x" | "X" | "%"\n\nThe *fill* character can be any character other than \'}\' (which\nsignifies the end of the field). The presence of a fill character is\nsignaled by the *next* character, which must be one of the alignment\noptions. If the second character of *format_spec* is not a valid\nalignment option, then it is assumed that both the fill character and\nthe alignment option are absent.\n\nThe meaning of the various alignment options is as follows:\n\n +-----------+------------------------------------------------------------+\n | Option | Meaning |\n +===========+============================================================+\n | ``\'<\'`` | Forces the field to be left-aligned within the available |\n | | space (this is the default). |\n +-----------+------------------------------------------------------------+\n | ``\'>\'`` | Forces the field to be right-aligned within the available |\n | | space. |\n +-----------+------------------------------------------------------------+\n | ``\'=\'`` | Forces the padding to be placed after the sign (if any) |\n | | but before the digits. This is used for printing fields |\n | | in the form \'+000000120\'. This alignment option is only |\n | | valid for numeric types. |\n +-----------+------------------------------------------------------------+\n | ``\'^\'`` | Forces the field to be centered within the available |\n | | space. |\n +-----------+------------------------------------------------------------+\n\nNote that unless a minimum field width is defined, the field width\nwill always be the same size as the data to fill it, so that the\nalignment option has no meaning in this case.\n\nThe *sign* option is only valid for number types, and can be one of\nthe following:\n\n +-----------+------------------------------------------------------------+\n | Option | Meaning |\n +===========+============================================================+\n | ``\'+\'`` | indicates that a sign should be used for both positive as |\n | | well as negative numbers. |\n +-----------+------------------------------------------------------------+\n | ``\'-\'`` | indicates that a sign should be used only for negative |\n | | numbers (this is the default behavior). |\n +-----------+------------------------------------------------------------+\n | space | indicates that a leading space should be used on positive |\n | | numbers, and a minus sign on negative numbers. |\n +-----------+------------------------------------------------------------+\n\nThe ``\'#\'`` option is only valid for integers, and only for binary,\noctal, or hexadecimal output. If present, it specifies that the\noutput will be prefixed by ``\'0b\'``, ``\'0o\'``, or ``\'0x\'``,\nrespectively.\n\nThe ``\',\'`` option signals the use of a comma for a thousands\nseparator. For a locale aware separator, use the ``\'n\'`` integer\npresentation type instead.\n\nChanged in version 2.7: Added the ``\',\'`` option (see also **PEP\n378**).\n\n*width* is a decimal integer defining the minimum field width. If not\nspecified, then the field width will be determined by the content.\n\nIf the *width* field is preceded by a zero (``\'0\'``) character, this\nenables zero-padding. 
This is equivalent to an *alignment* type of\n``\'=\'`` and a *fill* character of ``\'0\'``.\n\nThe *precision* is a decimal number indicating how many digits should\nbe displayed after the decimal point for a floating point value\nformatted with ``\'f\'`` and ``\'F\'``, or before and after the decimal\npoint for a floating point value formatted with ``\'g\'`` or ``\'G\'``.\nFor non-number types the field indicates the maximum field size - in\nother words, how many characters will be used from the field content.\nThe *precision* is not allowed for integer values.\n\nFinally, the *type* determines how the data should be presented.\n\nThe available string presentation types are:\n\n +-----------+------------------------------------------------------------+\n | Type | Meaning |\n +===========+============================================================+\n | ``\'s\'`` | String format. This is the default type for strings and |\n | | may be omitted. |\n +-----------+------------------------------------------------------------+\n | None | The same as ``\'s\'``. |\n +-----------+------------------------------------------------------------+\n\nThe available integer presentation types are:\n\n +-----------+------------------------------------------------------------+\n | Type | Meaning |\n +===========+============================================================+\n | ``\'b\'`` | Binary format. Outputs the number in base 2. |\n +-----------+------------------------------------------------------------+\n | ``\'c\'`` | Character. Converts the integer to the corresponding |\n | | unicode character before printing. |\n +-----------+------------------------------------------------------------+\n | ``\'d\'`` | Decimal Integer. Outputs the number in base 10. |\n +-----------+------------------------------------------------------------+\n | ``\'o\'`` | Octal format. Outputs the number in base 8. |\n +-----------+------------------------------------------------------------+\n | ``\'x\'`` | Hex format. Outputs the number in base 16, using lower- |\n | | case letters for the digits above 9. |\n +-----------+------------------------------------------------------------+\n | ``\'X\'`` | Hex format. Outputs the number in base 16, using upper- |\n | | case letters for the digits above 9. |\n +-----------+------------------------------------------------------------+\n | ``\'n\'`` | Number. This is the same as ``\'d\'``, except that it uses |\n | | the current locale setting to insert the appropriate |\n | | number separator characters. |\n +-----------+------------------------------------------------------------+\n | None | The same as ``\'d\'``. |\n +-----------+------------------------------------------------------------+\n\nIn addition to the above presentation types, integers can be formatted\nwith the floating point presentation types listed below (except\n``\'n\'`` and None). When doing so, ``float()`` is used to convert the\ninteger to a floating point number before formatting.\n\nThe available presentation types for floating point and decimal values\nare:\n\n +-----------+------------------------------------------------------------+\n | Type | Meaning |\n +===========+============================================================+\n | ``\'e\'`` | Exponent notation. Prints the number in scientific |\n | | notation using the letter \'e\' to indicate the exponent. |\n +-----------+------------------------------------------------------------+\n | ``\'E\'`` | Exponent notation. 
Same as ``\'e\'`` except it uses an upper |\n | | case \'E\' as the separator character. |\n +-----------+------------------------------------------------------------+\n | ``\'f\'`` | Fixed point. Displays the number as a fixed-point number. |\n +-----------+------------------------------------------------------------+\n | ``\'F\'`` | Fixed point. Same as ``\'f\'``. |\n +-----------+------------------------------------------------------------+\n | ``\'g\'`` | General format. For a given precision ``p >= 1``, this |\n | | rounds the number to ``p`` significant digits and then |\n | | formats the result in either fixed-point format or in |\n | | scientific notation, depending on its magnitude. The |\n | | precise rules are as follows: suppose that the result |\n | | formatted with presentation type ``\'e\'`` and precision |\n | | ``p-1`` would have exponent ``exp``. Then if ``-4 <= exp |\n | | < p``, the number is formatted with presentation type |\n | | ``\'f\'`` and precision ``p-1-exp``. Otherwise, the number |\n | | is formatted with presentation type ``\'e\'`` and precision |\n | | ``p-1``. In both cases insignificant trailing zeros are |\n | | removed from the significand, and the decimal point is |\n | | also removed if there are no remaining digits following |\n | | it. Postive and negative infinity, positive and negative |\n | | zero, and nans, are formatted as ``inf``, ``-inf``, ``0``, |\n | | ``-0`` and ``nan`` respectively, regardless of the |\n | | precision. A precision of ``0`` is treated as equivalent |\n | | to a precision of ``1``. |\n +-----------+------------------------------------------------------------+\n | ``\'G\'`` | General format. Same as ``\'g\'`` except switches to ``\'E\'`` |\n | | if the number gets too large. The representations of |\n | | infinity and NaN are uppercased, too. |\n +-----------+------------------------------------------------------------+\n | ``\'n\'`` | Number. This is the same as ``\'g\'``, except that it uses |\n | | the current locale setting to insert the appropriate |\n | | number separator characters. |\n +-----------+------------------------------------------------------------+\n | ``\'%\'`` | Percentage. Multiplies the number by 100 and displays in |\n | | fixed (``\'f\'``) format, followed by a percent sign. |\n +-----------+------------------------------------------------------------+\n | None | The same as ``\'g\'``. |\n +-----------+------------------------------------------------------------+\n\n\nFormat examples\n===============\n\nThis section contains examples of the new format syntax and comparison\nwith the old ``%``-formatting.\n\nIn most of the cases the syntax is similar to the old\n``%``-formatting, with the addition of the ``{}`` and with ``:`` used\ninstead of ``%``. 
For example, ``\'%03.2f\'`` can be translated to\n``\'{:03.2f}\'``.\n\nThe new format syntax also supports new and different options, shown\nin the follow examples.\n\nAccessing arguments by position:\n\n >>> \'{0}, {1}, {2}\'.format(\'a\', \'b\', \'c\')\n \'a, b, c\'\n >>> \'{}, {}, {}\'.format(\'a\', \'b\', \'c\') # 2.7+ only\n \'a, b, c\'\n >>> \'{2}, {1}, {0}\'.format(\'a\', \'b\', \'c\')\n \'c, b, a\'\n >>> \'{2}, {1}, {0}\'.format(*\'abc\') # unpacking argument sequence\n \'c, b, a\'\n >>> \'{0}{1}{0}\'.format(\'abra\', \'cad\') # arguments\' indices can be repeated\n \'abracadabra\'\n\nAccessing arguments by name:\n\n >>> \'Coordinates: {latitude}, {longitude}\'.format(latitude=\'37.24N\', longitude=\'-115.81W\')\n \'Coordinates: 37.24N, -115.81W\'\n >>> coord = {\'latitude\': \'37.24N\', \'longitude\': \'-115.81W\'}\n >>> \'Coordinates: {latitude}, {longitude}\'.format(**coord)\n \'Coordinates: 37.24N, -115.81W\'\n\nAccessing arguments\' attributes:\n\n >>> c = 3-5j\n >>> (\'The complex number {0} is formed from the real part {0.real} \'\n ... \'and the imaginary part {0.imag}.\').format(c)\n \'The complex number (3-5j) is formed from the real part 3.0 and the imaginary part -5.0.\'\n >>> class Point(object):\n ... def __init__(self, x, y):\n ... self.x, self.y = x, y\n ... def __str__(self):\n ... return \'Point({self.x}, {self.y})\'.format(self=self)\n ...\n >>> str(Point(4, 2))\n \'Point(4, 2)\'\n\nAccessing arguments\' items:\n\n >>> coord = (3, 5)\n >>> \'X: {0[0]}; Y: {0[1]}\'.format(coord)\n \'X: 3; Y: 5\'\n\nReplacing ``%s`` and ``%r``:\n\n >>> "repr() shows quotes: {!r}; str() doesn\'t: {!s}".format(\'test1\', \'test2\')\n "repr() shows quotes: \'test1\'; str() doesn\'t: test2"\n\nAligning the text and specifying a width:\n\n >>> \'{:<30}\'.format(\'left aligned\')\n \'left aligned \'\n >>> \'{:>30}\'.format(\'right aligned\')\n \' right aligned\'\n >>> \'{:^30}\'.format(\'centered\')\n \' centered \'\n >>> \'{:*^30}\'.format(\'centered\') # use \'*\' as a fill char\n \'***********centered***********\'\n\nReplacing ``%+f``, ``%-f``, and ``% f`` and specifying a sign:\n\n >>> \'{:+f}; {:+f}\'.format(3.14, -3.14) # show it always\n \'+3.140000; -3.140000\'\n >>> \'{: f}; {: f}\'.format(3.14, -3.14) # show a space for positive numbers\n \' 3.140000; -3.140000\'\n >>> \'{:-f}; {:-f}\'.format(3.14, -3.14) # show only the minus -- same as \'{:f}; {:f}\'\n \'3.140000; -3.140000\'\n\nReplacing ``%x`` and ``%o`` and converting the value to different\nbases:\n\n >>> # format also supports binary numbers\n >>> "int: {0:d}; hex: {0:x}; oct: {0:o}; bin: {0:b}".format(42)\n \'int: 42; hex: 2a; oct: 52; bin: 101010\'\n >>> # with 0x, 0o, or 0b as prefix:\n >>> "int: {0:d}; hex: {0:#x}; oct: {0:#o}; bin: {0:#b}".format(42)\n \'int: 42; hex: 0x2a; oct: 0o52; bin: 0b101010\'\n\nUsing the comma as a thousands separator:\n\n >>> \'{:,}\'.format(1234567890)\n \'1,234,567,890\'\n\nExpressing a percentage:\n\n >>> points = 19.5\n >>> total = 22\n >>> \'Correct answers: {:.2%}.\'.format(points/total)\n \'Correct answers: 88.64%\'\n\nUsing type-specific formatting:\n\n >>> import datetime\n >>> d = datetime.datetime(2010, 7, 4, 12, 15, 58)\n >>> \'{:%Y-%m-%d %H:%M:%S}\'.format(d)\n \'2010-07-04 12:15:58\'\n\nNesting arguments and more complex examples:\n\n >>> for align, text in zip(\'<^>\', [\'left\', \'center\', \'right\']):\n ... 
\'{0:{align}{fill}16}\'.format(text, fill=align, align=align)\n ...\n \'left<<<<<<<<<<<<\'\n \'^^^^^center^^^^^\'\n \'>>>>>>>>>>>right\'\n >>>\n >>> octets = [192, 168, 0, 1]\n >>> \'{:02X}{:02X}{:02X}{:02X}\'.format(*octets)\n \'C0A80001\'\n >>> int(_, 16)\n 3232235521\n >>>\n >>> width = 5\n >>> for num in range(5,12):\n ... for base in \'dXob\':\n ... print \'{0:{width}{base}}\'.format(num, base=base, width=width),\n ... print\n ...\n 5 5 5 101\n 6 6 6 110\n 7 7 7 111\n 8 8 10 1000\n 9 9 11 1001\n 10 A 12 1010\n 11 B 13 1011\n', + 'formatstrings': u'\nFormat String Syntax\n********************\n\nThe ``str.format()`` method and the ``Formatter`` class share the same\nsyntax for format strings (although in the case of ``Formatter``,\nsubclasses can define their own format string syntax).\n\nFormat strings contain "replacement fields" surrounded by curly braces\n``{}``. Anything that is not contained in braces is considered literal\ntext, which is copied unchanged to the output. If you need to include\na brace character in the literal text, it can be escaped by doubling:\n``{{`` and ``}}``.\n\nThe grammar for a replacement field is as follows:\n\n replacement_field ::= "{" [field_name] ["!" conversion] [":" format_spec] "}"\n field_name ::= arg_name ("." attribute_name | "[" element_index "]")*\n arg_name ::= [identifier | integer]\n attribute_name ::= identifier\n element_index ::= integer | index_string\n index_string ::= +\n conversion ::= "r" | "s"\n format_spec ::= \n\nIn less formal terms, the replacement field can start with a\n*field_name* that specifies the object whose value is to be formatted\nand inserted into the output instead of the replacement field. The\n*field_name* is optionally followed by a *conversion* field, which is\npreceded by an exclamation point ``\'!\'``, and a *format_spec*, which\nis preceded by a colon ``\':\'``. These specify a non-default format\nfor the replacement value.\n\nSee also the *Format Specification Mini-Language* section.\n\nThe *field_name* itself begins with an *arg_name* that is either\neither a number or a keyword. If it\'s a number, it refers to a\npositional argument, and if it\'s a keyword, it refers to a named\nkeyword argument. If the numerical arg_names in a format string are\n0, 1, 2, ... in sequence, they can all be omitted (not just some) and\nthe numbers 0, 1, 2, ... will be automatically inserted in that order.\nThe *arg_name* can be followed by any number of index or attribute\nexpressions. An expression of the form ``\'.name\'`` selects the named\nattribute using ``getattr()``, while an expression of the form\n``\'[index]\'`` does an index lookup using ``__getitem__()``.\n\nChanged in version 2.7: The positional argument specifiers can be\nomitted, so ``\'{} {}\'`` is equivalent to ``\'{0} {1}\'``.\n\nSome simple format string examples:\n\n "First, thou shalt count to {0}" # References first positional argument\n "Bring me a {}" # Implicitly references the first positional argument\n "From {} to {}" # Same as "From {0} to {1}"\n "My quest is {name}" # References keyword argument \'name\'\n "Weight in tons {0.weight}" # \'weight\' attribute of first positional arg\n "Units destroyed: {players[0]}" # First element of keyword argument \'players\'.\n\nThe *conversion* field causes a type coercion before formatting.\nNormally, the job of formatting a value is done by the\n``__format__()`` method of the value itself. 
However, in some cases\nit is desirable to force a type to be formatted as a string,\noverriding its own definition of formatting. By converting the value\nto a string before calling ``__format__()``, the normal formatting\nlogic is bypassed.\n\nTwo conversion flags are currently supported: ``\'!s\'`` which calls\n``str()`` on the value, and ``\'!r\'`` which calls ``repr()``.\n\nSome examples:\n\n "Harold\'s a clever {0!s}" # Calls str() on the argument first\n "Bring out the holy {name!r}" # Calls repr() on the argument first\n\nThe *format_spec* field contains a specification of how the value\nshould be presented, including such details as field width, alignment,\npadding, decimal precision and so on. Each value type can define its\nown "formatting mini-language" or interpretation of the *format_spec*.\n\nMost built-in types support a common formatting mini-language, which\nis described in the next section.\n\nA *format_spec* field can also include nested replacement fields\nwithin it. These nested replacement fields can contain only a field\nname; conversion flags and format specifications are not allowed. The\nreplacement fields within the format_spec are substituted before the\n*format_spec* string is interpreted. This allows the formatting of a\nvalue to be dynamically specified.\n\nSee the *Format examples* section for some examples.\n\n\nFormat Specification Mini-Language\n==================================\n\n"Format specifications" are used within replacement fields contained\nwithin a format string to define how individual values are presented\n(see *Format String Syntax*). They can also be passed directly to the\nbuilt-in ``format()`` function. Each formattable type may define how\nthe format specification is to be interpreted.\n\nMost built-in types implement the following options for format\nspecifications, although some of the formatting options are only\nsupported by the numeric types.\n\nA general convention is that an empty format string (``""``) produces\nthe same result as if you had called ``str()`` on the value. A non-\nempty format string typically modifies the result.\n\nThe general form of a *standard format specifier* is:\n\n format_spec ::= [[fill]align][sign][#][0][width][,][.precision][type]\n fill ::= \n align ::= "<" | ">" | "=" | "^"\n sign ::= "+" | "-" | " "\n width ::= integer\n precision ::= integer\n type ::= "b" | "c" | "d" | "e" | "E" | "f" | "F" | "g" | "G" | "n" | "o" | "s" | "x" | "X" | "%"\n\nThe *fill* character can be any character other than \'{\' or \'}\'. The\npresence of a fill character is signaled by the character following\nit, which must be one of the alignment options. If the second\ncharacter of *format_spec* is not a valid alignment option, then it is\nassumed that both the fill character and the alignment option are\nabsent.\n\nThe meaning of the various alignment options is as follows:\n\n +-----------+------------------------------------------------------------+\n | Option | Meaning |\n +===========+============================================================+\n | ``\'<\'`` | Forces the field to be left-aligned within the available |\n | | space (this is the default for most objects). |\n +-----------+------------------------------------------------------------+\n | ``\'>\'`` | Forces the field to be right-aligned within the available |\n | | space (this is the default for numbers). 
|\n +-----------+------------------------------------------------------------+\n | ``\'=\'`` | Forces the padding to be placed after the sign (if any) |\n | | but before the digits. This is used for printing fields |\n | | in the form \'+000000120\'. This alignment option is only |\n | | valid for numeric types. |\n +-----------+------------------------------------------------------------+\n | ``\'^\'`` | Forces the field to be centered within the available |\n | | space. |\n +-----------+------------------------------------------------------------+\n\nNote that unless a minimum field width is defined, the field width\nwill always be the same size as the data to fill it, so that the\nalignment option has no meaning in this case.\n\nThe *sign* option is only valid for number types, and can be one of\nthe following:\n\n +-----------+------------------------------------------------------------+\n | Option | Meaning |\n +===========+============================================================+\n | ``\'+\'`` | indicates that a sign should be used for both positive as |\n | | well as negative numbers. |\n +-----------+------------------------------------------------------------+\n | ``\'-\'`` | indicates that a sign should be used only for negative |\n | | numbers (this is the default behavior). |\n +-----------+------------------------------------------------------------+\n | space | indicates that a leading space should be used on positive |\n | | numbers, and a minus sign on negative numbers. |\n +-----------+------------------------------------------------------------+\n\nThe ``\'#\'`` option is only valid for integers, and only for binary,\noctal, or hexadecimal output. If present, it specifies that the\noutput will be prefixed by ``\'0b\'``, ``\'0o\'``, or ``\'0x\'``,\nrespectively.\n\nThe ``\',\'`` option signals the use of a comma for a thousands\nseparator. For a locale aware separator, use the ``\'n\'`` integer\npresentation type instead.\n\nChanged in version 2.7: Added the ``\',\'`` option (see also **PEP\n378**).\n\n*width* is a decimal integer defining the minimum field width. If not\nspecified, then the field width will be determined by the content.\n\nIf the *width* field is preceded by a zero (``\'0\'``) character, this\nenables zero-padding. This is equivalent to an *alignment* type of\n``\'=\'`` and a *fill* character of ``\'0\'``.\n\nThe *precision* is a decimal number indicating how many digits should\nbe displayed after the decimal point for a floating point value\nformatted with ``\'f\'`` and ``\'F\'``, or before and after the decimal\npoint for a floating point value formatted with ``\'g\'`` or ``\'G\'``.\nFor non-number types the field indicates the maximum field size - in\nother words, how many characters will be used from the field content.\nThe *precision* is not allowed for integer values.\n\nFinally, the *type* determines how the data should be presented.\n\nThe available string presentation types are:\n\n +-----------+------------------------------------------------------------+\n | Type | Meaning |\n +===========+============================================================+\n | ``\'s\'`` | String format. This is the default type for strings and |\n | | may be omitted. |\n +-----------+------------------------------------------------------------+\n | None | The same as ``\'s\'``. 
|\n +-----------+------------------------------------------------------------+\n\nThe available integer presentation types are:\n\n +-----------+------------------------------------------------------------+\n | Type | Meaning |\n +===========+============================================================+\n | ``\'b\'`` | Binary format. Outputs the number in base 2. |\n +-----------+------------------------------------------------------------+\n | ``\'c\'`` | Character. Converts the integer to the corresponding |\n | | unicode character before printing. |\n +-----------+------------------------------------------------------------+\n | ``\'d\'`` | Decimal Integer. Outputs the number in base 10. |\n +-----------+------------------------------------------------------------+\n | ``\'o\'`` | Octal format. Outputs the number in base 8. |\n +-----------+------------------------------------------------------------+\n | ``\'x\'`` | Hex format. Outputs the number in base 16, using lower- |\n | | case letters for the digits above 9. |\n +-----------+------------------------------------------------------------+\n | ``\'X\'`` | Hex format. Outputs the number in base 16, using upper- |\n | | case letters for the digits above 9. |\n +-----------+------------------------------------------------------------+\n | ``\'n\'`` | Number. This is the same as ``\'d\'``, except that it uses |\n | | the current locale setting to insert the appropriate |\n | | number separator characters. |\n +-----------+------------------------------------------------------------+\n | None | The same as ``\'d\'``. |\n +-----------+------------------------------------------------------------+\n\nIn addition to the above presentation types, integers can be formatted\nwith the floating point presentation types listed below (except\n``\'n\'`` and None). When doing so, ``float()`` is used to convert the\ninteger to a floating point number before formatting.\n\nThe available presentation types for floating point and decimal values\nare:\n\n +-----------+------------------------------------------------------------+\n | Type | Meaning |\n +===========+============================================================+\n | ``\'e\'`` | Exponent notation. Prints the number in scientific |\n | | notation using the letter \'e\' to indicate the exponent. |\n +-----------+------------------------------------------------------------+\n | ``\'E\'`` | Exponent notation. Same as ``\'e\'`` except it uses an upper |\n | | case \'E\' as the separator character. |\n +-----------+------------------------------------------------------------+\n | ``\'f\'`` | Fixed point. Displays the number as a fixed-point number. |\n +-----------+------------------------------------------------------------+\n | ``\'F\'`` | Fixed point. Same as ``\'f\'``. |\n +-----------+------------------------------------------------------------+\n | ``\'g\'`` | General format. For a given precision ``p >= 1``, this |\n | | rounds the number to ``p`` significant digits and then |\n | | formats the result in either fixed-point format or in |\n | | scientific notation, depending on its magnitude. The |\n | | precise rules are as follows: suppose that the result |\n | | formatted with presentation type ``\'e\'`` and precision |\n | | ``p-1`` would have exponent ``exp``. Then if ``-4 <= exp |\n | | < p``, the number is formatted with presentation type |\n | | ``\'f\'`` and precision ``p-1-exp``. 
Otherwise, the number |\n | | is formatted with presentation type ``\'e\'`` and precision |\n | | ``p-1``. In both cases insignificant trailing zeros are |\n | | removed from the significand, and the decimal point is |\n | | also removed if there are no remaining digits following |\n | | it. Positive and negative infinity, positive and negative |\n | | zero, and nans, are formatted as ``inf``, ``-inf``, ``0``, |\n | | ``-0`` and ``nan`` respectively, regardless of the |\n | | precision. A precision of ``0`` is treated as equivalent |\n | | to a precision of ``1``. |\n +-----------+------------------------------------------------------------+\n | ``\'G\'`` | General format. Same as ``\'g\'`` except switches to ``\'E\'`` |\n | | if the number gets too large. The representations of |\n | | infinity and NaN are uppercased, too. |\n +-----------+------------------------------------------------------------+\n | ``\'n\'`` | Number. This is the same as ``\'g\'``, except that it uses |\n | | the current locale setting to insert the appropriate |\n | | number separator characters. |\n +-----------+------------------------------------------------------------+\n | ``\'%\'`` | Percentage. Multiplies the number by 100 and displays in |\n | | fixed (``\'f\'``) format, followed by a percent sign. |\n +-----------+------------------------------------------------------------+\n | None | The same as ``\'g\'``. |\n +-----------+------------------------------------------------------------+\n\n\nFormat examples\n===============\n\nThis section contains examples of the new format syntax and comparison\nwith the old ``%``-formatting.\n\nIn most of the cases the syntax is similar to the old\n``%``-formatting, with the addition of the ``{}`` and with ``:`` used\ninstead of ``%``. For example, ``\'%03.2f\'`` can be translated to\n``\'{:03.2f}\'``.\n\nThe new format syntax also supports new and different options, shown\nin the follow examples.\n\nAccessing arguments by position:\n\n >>> \'{0}, {1}, {2}\'.format(\'a\', \'b\', \'c\')\n \'a, b, c\'\n >>> \'{}, {}, {}\'.format(\'a\', \'b\', \'c\') # 2.7+ only\n \'a, b, c\'\n >>> \'{2}, {1}, {0}\'.format(\'a\', \'b\', \'c\')\n \'c, b, a\'\n >>> \'{2}, {1}, {0}\'.format(*\'abc\') # unpacking argument sequence\n \'c, b, a\'\n >>> \'{0}{1}{0}\'.format(\'abra\', \'cad\') # arguments\' indices can be repeated\n \'abracadabra\'\n\nAccessing arguments by name:\n\n >>> \'Coordinates: {latitude}, {longitude}\'.format(latitude=\'37.24N\', longitude=\'-115.81W\')\n \'Coordinates: 37.24N, -115.81W\'\n >>> coord = {\'latitude\': \'37.24N\', \'longitude\': \'-115.81W\'}\n >>> \'Coordinates: {latitude}, {longitude}\'.format(**coord)\n \'Coordinates: 37.24N, -115.81W\'\n\nAccessing arguments\' attributes:\n\n >>> c = 3-5j\n >>> (\'The complex number {0} is formed from the real part {0.real} \'\n ... \'and the imaginary part {0.imag}.\').format(c)\n \'The complex number (3-5j) is formed from the real part 3.0 and the imaginary part -5.0.\'\n >>> class Point(object):\n ... def __init__(self, x, y):\n ... self.x, self.y = x, y\n ... def __str__(self):\n ... 
return \'Point({self.x}, {self.y})\'.format(self=self)\n ...\n >>> str(Point(4, 2))\n \'Point(4, 2)\'\n\nAccessing arguments\' items:\n\n >>> coord = (3, 5)\n >>> \'X: {0[0]}; Y: {0[1]}\'.format(coord)\n \'X: 3; Y: 5\'\n\nReplacing ``%s`` and ``%r``:\n\n >>> "repr() shows quotes: {!r}; str() doesn\'t: {!s}".format(\'test1\', \'test2\')\n "repr() shows quotes: \'test1\'; str() doesn\'t: test2"\n\nAligning the text and specifying a width:\n\n >>> \'{:<30}\'.format(\'left aligned\')\n \'left aligned \'\n >>> \'{:>30}\'.format(\'right aligned\')\n \' right aligned\'\n >>> \'{:^30}\'.format(\'centered\')\n \' centered \'\n >>> \'{:*^30}\'.format(\'centered\') # use \'*\' as a fill char\n \'***********centered***********\'\n\nReplacing ``%+f``, ``%-f``, and ``% f`` and specifying a sign:\n\n >>> \'{:+f}; {:+f}\'.format(3.14, -3.14) # show it always\n \'+3.140000; -3.140000\'\n >>> \'{: f}; {: f}\'.format(3.14, -3.14) # show a space for positive numbers\n \' 3.140000; -3.140000\'\n >>> \'{:-f}; {:-f}\'.format(3.14, -3.14) # show only the minus -- same as \'{:f}; {:f}\'\n \'3.140000; -3.140000\'\n\nReplacing ``%x`` and ``%o`` and converting the value to different\nbases:\n\n >>> # format also supports binary numbers\n >>> "int: {0:d}; hex: {0:x}; oct: {0:o}; bin: {0:b}".format(42)\n \'int: 42; hex: 2a; oct: 52; bin: 101010\'\n >>> # with 0x, 0o, or 0b as prefix:\n >>> "int: {0:d}; hex: {0:#x}; oct: {0:#o}; bin: {0:#b}".format(42)\n \'int: 42; hex: 0x2a; oct: 0o52; bin: 0b101010\'\n\nUsing the comma as a thousands separator:\n\n >>> \'{:,}\'.format(1234567890)\n \'1,234,567,890\'\n\nExpressing a percentage:\n\n >>> points = 19.5\n >>> total = 22\n >>> \'Correct answers: {:.2%}.\'.format(points/total)\n \'Correct answers: 88.64%\'\n\nUsing type-specific formatting:\n\n >>> import datetime\n >>> d = datetime.datetime(2010, 7, 4, 12, 15, 58)\n >>> \'{:%Y-%m-%d %H:%M:%S}\'.format(d)\n \'2010-07-04 12:15:58\'\n\nNesting arguments and more complex examples:\n\n >>> for align, text in zip(\'<^>\', [\'left\', \'center\', \'right\']):\n ... \'{0:{fill}{align}16}\'.format(text, fill=align, align=align)\n ...\n \'left<<<<<<<<<<<<\'\n \'^^^^^center^^^^^\'\n \'>>>>>>>>>>>right\'\n >>>\n >>> octets = [192, 168, 0, 1]\n >>> \'{:02X}{:02X}{:02X}{:02X}\'.format(*octets)\n \'C0A80001\'\n >>> int(_, 16)\n 3232235521\n >>>\n >>> width = 5\n >>> for num in range(5,12):\n ... for base in \'dXob\':\n ... print \'{0:{width}{base}}\'.format(num, base=base, width=width),\n ... print\n ...\n 5 5 5 101\n 6 6 6 110\n 7 7 7 111\n 8 8 10 1000\n 9 9 11 1001\n 10 A 12 1010\n 11 B 13 1011\n', 'function': u'\nFunction definitions\n********************\n\nA function definition defines a user-defined function object (see\nsection *The standard type hierarchy*):\n\n decorated ::= decorators (classdef | funcdef)\n decorators ::= decorator+\n decorator ::= "@" dotted_name ["(" [argument_list [","]] ")"] NEWLINE\n funcdef ::= "def" funcname "(" [parameter_list] ")" ":" suite\n dotted_name ::= identifier ("." identifier)*\n parameter_list ::= (defparameter ",")*\n ( "*" identifier [, "**" identifier]\n | "**" identifier\n | defparameter [","] )\n defparameter ::= parameter ["=" expression]\n sublist ::= parameter ("," parameter)* [","]\n parameter ::= identifier | "(" sublist ")"\n funcname ::= identifier\n\nA function definition is an executable statement. Its execution binds\nthe function name in the current local namespace to a function object\n(a wrapper around the executable code for the function). 
This\nfunction object contains a reference to the current global namespace\nas the global namespace to be used when the function is called.\n\nThe function definition does not execute the function body; this gets\nexecuted only when the function is called. [3]\n\nA function definition may be wrapped by one or more *decorator*\nexpressions. Decorator expressions are evaluated when the function is\ndefined, in the scope that contains the function definition. The\nresult must be a callable, which is invoked with the function object\nas the only argument. The returned value is bound to the function name\ninstead of the function object. Multiple decorators are applied in\nnested fashion. For example, the following code:\n\n @f1(arg)\n @f2\n def func(): pass\n\nis equivalent to:\n\n def func(): pass\n func = f1(arg)(f2(func))\n\nWhen one or more top-level parameters have the form *parameter* ``=``\n*expression*, the function is said to have "default parameter values."\nFor a parameter with a default value, the corresponding argument may\nbe omitted from a call, in which case the parameter\'s default value is\nsubstituted. If a parameter has a default value, all following\nparameters must also have a default value --- this is a syntactic\nrestriction that is not expressed by the grammar.\n\n**Default parameter values are evaluated when the function definition\nis executed.** This means that the expression is evaluated once, when\nthe function is defined, and that that same "pre-computed" value is\nused for each call. This is especially important to understand when a\ndefault parameter is a mutable object, such as a list or a dictionary:\nif the function modifies the object (e.g. by appending an item to a\nlist), the default value is in effect modified. This is generally not\nwhat was intended. A way around this is to use ``None`` as the\ndefault, and explicitly test for it in the body of the function, e.g.:\n\n def whats_on_the_telly(penguin=None):\n if penguin is None:\n penguin = []\n penguin.append("property of the zoo")\n return penguin\n\nFunction call semantics are described in more detail in section\n*Calls*. A function call always assigns values to all parameters\nmentioned in the parameter list, either from position arguments, from\nkeyword arguments, or from default values. If the form\n"``*identifier``" is present, it is initialized to a tuple receiving\nany excess positional parameters, defaulting to the empty tuple. If\nthe form "``**identifier``" is present, it is initialized to a new\ndictionary receiving any excess keyword arguments, defaulting to a new\nempty dictionary.\n\nIt is also possible to create anonymous functions (functions not bound\nto a name), for immediate use in expressions. This uses lambda forms,\ndescribed in section *Lambdas*. Note that the lambda form is merely a\nshorthand for a simplified function definition; a function defined in\na "``def``" statement can be passed around or assigned to another name\njust like a function defined by a lambda form. The "``def``" form is\nactually more powerful since it allows the execution of multiple\nstatements.\n\n**Programmer\'s note:** Functions are first-class objects. A "``def``"\nform executed inside a function definition defines a local function\nthat can be returned or passed around. Free variables used in the\nnested function can access the local variables of the function\ncontaining the def. 
See section *Naming and binding* for details.\n', 'global': u'\nThe ``global`` statement\n************************\n\n global_stmt ::= "global" identifier ("," identifier)*\n\nThe ``global`` statement is a declaration which holds for the entire\ncurrent code block. It means that the listed identifiers are to be\ninterpreted as globals. It would be impossible to assign to a global\nvariable without ``global``, although free variables may refer to\nglobals without being declared global.\n\nNames listed in a ``global`` statement must not be used in the same\ncode block textually preceding that ``global`` statement.\n\nNames listed in a ``global`` statement must not be defined as formal\nparameters or in a ``for`` loop control target, ``class`` definition,\nfunction definition, or ``import`` statement.\n\n**CPython implementation detail:** The current implementation does not\nenforce the latter two restrictions, but programs should not abuse\nthis freedom, as future implementations may enforce them or silently\nchange the meaning of the program.\n\n**Programmer\'s note:** the ``global`` is a directive to the parser.\nIt applies only to code parsed at the same time as the ``global``\nstatement. In particular, a ``global`` statement contained in an\n``exec`` statement does not affect the code block *containing* the\n``exec`` statement, and code contained in an ``exec`` statement is\nunaffected by ``global`` statements in the code containing the\n``exec`` statement. The same applies to the ``eval()``,\n``execfile()`` and ``compile()`` functions.\n', - 'id-classes': u'\nReserved classes of identifiers\n*******************************\n\nCertain classes of identifiers (besides keywords) have special\nmeanings. These classes are identified by the patterns of leading and\ntrailing underscore characters:\n\n``_*``\n Not imported by ``from module import *``. The special identifier\n ``_`` is used in the interactive interpreter to store the result of\n the last evaluation; it is stored in the ``__builtin__`` module.\n When not in interactive mode, ``_`` has no special meaning and is\n not defined. See section *The import statement*.\n\n Note: The name ``_`` is often used in conjunction with\n internationalization; refer to the documentation for the\n ``gettext`` module for more information on this convention.\n\n``__*__``\n System-defined names. These names are defined by the interpreter\n and its implementation (including the standard library);\n applications should not expect to define additional names using\n this convention. The set of names of this class defined by Python\n may be extended in future versions. See section *Special method\n names*.\n\n``__*``\n Class-private names. Names in this category, when used within the\n context of a class definition, are re-written to use a mangled form\n to help avoid name clashes between "private" attributes of base and\n derived classes. See section *Identifiers (Names)*.\n', - 'identifiers': u'\nIdentifiers and keywords\n************************\n\nIdentifiers (also referred to as *names*) are described by the\nfollowing lexical definitions:\n\n identifier ::= (letter|"_") (letter | digit | "_")*\n letter ::= lowercase | uppercase\n lowercase ::= "a"..."z"\n uppercase ::= "A"..."Z"\n digit ::= "0"..."9"\n\nIdentifiers are unlimited in length. Case is significant.\n\n\nKeywords\n========\n\nThe following identifiers are used as reserved words, or *keywords* of\nthe language, and cannot be used as ordinary identifiers. 
They must\nbe spelled exactly as written here:\n\n and del from not while\n as elif global or with\n assert else if pass yield\n break except import print\n class exec in raise\n continue finally is return\n def for lambda try\n\nChanged in version 2.4: ``None`` became a constant and is now\nrecognized by the compiler as a name for the built-in object ``None``.\nAlthough it is not a keyword, you cannot assign a different object to\nit.\n\nChanged in version 2.5: Both ``as`` and ``with`` are only recognized\nwhen the ``with_statement`` future feature has been enabled. It will\nalways be enabled in Python 2.6. See section *The with statement* for\ndetails. Note that using ``as`` and ``with`` as identifiers will\nalways issue a warning, even when the ``with_statement`` future\ndirective is not in effect.\n\n\nReserved classes of identifiers\n===============================\n\nCertain classes of identifiers (besides keywords) have special\nmeanings. These classes are identified by the patterns of leading and\ntrailing underscore characters:\n\n``_*``\n Not imported by ``from module import *``. The special identifier\n ``_`` is used in the interactive interpreter to store the result of\n the last evaluation; it is stored in the ``__builtin__`` module.\n When not in interactive mode, ``_`` has no special meaning and is\n not defined. See section *The import statement*.\n\n Note: The name ``_`` is often used in conjunction with\n internationalization; refer to the documentation for the\n ``gettext`` module for more information on this convention.\n\n``__*__``\n System-defined names. These names are defined by the interpreter\n and its implementation (including the standard library);\n applications should not expect to define additional names using\n this convention. The set of names of this class defined by Python\n may be extended in future versions. See section *Special method\n names*.\n\n``__*``\n Class-private names. Names in this category, when used within the\n context of a class definition, are re-written to use a mangled form\n to help avoid name clashes between "private" attributes of base and\n derived classes. See section *Identifiers (Names)*.\n', + 'id-classes': u'\nReserved classes of identifiers\n*******************************\n\nCertain classes of identifiers (besides keywords) have special\nmeanings. These classes are identified by the patterns of leading and\ntrailing underscore characters:\n\n``_*``\n Not imported by ``from module import *``. The special identifier\n ``_`` is used in the interactive interpreter to store the result of\n the last evaluation; it is stored in the ``__builtin__`` module.\n When not in interactive mode, ``_`` has no special meaning and is\n not defined. See section *The import statement*.\n\n Note: The name ``_`` is often used in conjunction with\n internationalization; refer to the documentation for the\n ``gettext`` module for more information on this convention.\n\n``__*__``\n System-defined names. These names are defined by the interpreter\n and its implementation (including the standard library). Current\n system names are discussed in the *Special method names* section\n and elsewhere. More will likely be defined in future versions of\n Python. *Any* use of ``__*__`` names, in any context, that does\n not follow explicitly documented use, is subject to breakage\n without warning.\n\n``__*``\n Class-private names. 
Names in this category, when used within the\n context of a class definition, are re-written to use a mangled form\n to help avoid name clashes between "private" attributes of base and\n derived classes. See section *Identifiers (Names)*.\n', + 'identifiers': u'\nIdentifiers and keywords\n************************\n\nIdentifiers (also referred to as *names*) are described by the\nfollowing lexical definitions:\n\n identifier ::= (letter|"_") (letter | digit | "_")*\n letter ::= lowercase | uppercase\n lowercase ::= "a"..."z"\n uppercase ::= "A"..."Z"\n digit ::= "0"..."9"\n\nIdentifiers are unlimited in length. Case is significant.\n\n\nKeywords\n========\n\nThe following identifiers are used as reserved words, or *keywords* of\nthe language, and cannot be used as ordinary identifiers. They must\nbe spelled exactly as written here:\n\n and del from not while\n as elif global or with\n assert else if pass yield\n break except import print\n class exec in raise\n continue finally is return\n def for lambda try\n\nChanged in version 2.4: ``None`` became a constant and is now\nrecognized by the compiler as a name for the built-in object ``None``.\nAlthough it is not a keyword, you cannot assign a different object to\nit.\n\nChanged in version 2.5: Both ``as`` and ``with`` are only recognized\nwhen the ``with_statement`` future feature has been enabled. It will\nalways be enabled in Python 2.6. See section *The with statement* for\ndetails. Note that using ``as`` and ``with`` as identifiers will\nalways issue a warning, even when the ``with_statement`` future\ndirective is not in effect.\n\n\nReserved classes of identifiers\n===============================\n\nCertain classes of identifiers (besides keywords) have special\nmeanings. These classes are identified by the patterns of leading and\ntrailing underscore characters:\n\n``_*``\n Not imported by ``from module import *``. The special identifier\n ``_`` is used in the interactive interpreter to store the result of\n the last evaluation; it is stored in the ``__builtin__`` module.\n When not in interactive mode, ``_`` has no special meaning and is\n not defined. See section *The import statement*.\n\n Note: The name ``_`` is often used in conjunction with\n internationalization; refer to the documentation for the\n ``gettext`` module for more information on this convention.\n\n``__*__``\n System-defined names. These names are defined by the interpreter\n and its implementation (including the standard library). Current\n system names are discussed in the *Special method names* section\n and elsewhere. More will likely be defined in future versions of\n Python. *Any* use of ``__*__`` names, in any context, that does\n not follow explicitly documented use, is subject to breakage\n without warning.\n\n``__*``\n Class-private names. Names in this category, when used within the\n context of a class definition, are re-written to use a mangled form\n to help avoid name clashes between "private" attributes of base and\n derived classes. 
See section *Identifiers (Names)*.\n', 'if': u'\nThe ``if`` statement\n********************\n\nThe ``if`` statement is used for conditional execution:\n\n if_stmt ::= "if" expression ":" suite\n ( "elif" expression ":" suite )*\n ["else" ":" suite]\n\nIt selects exactly one of the suites by evaluating the expressions one\nby one until one is found to be true (see section *Boolean operations*\nfor the definition of true and false); then that suite is executed\n(and no other part of the ``if`` statement is executed or evaluated).\nIf all expressions are false, the suite of the ``else`` clause, if\npresent, is executed.\n', 'imaginary': u'\nImaginary literals\n******************\n\nImaginary literals are described by the following lexical definitions:\n\n imagnumber ::= (floatnumber | intpart) ("j" | "J")\n\nAn imaginary literal yields a complex number with a real part of 0.0.\nComplex numbers are represented as a pair of floating point numbers\nand have the same restrictions on their range. To create a complex\nnumber with a nonzero real part, add a floating point number to it,\ne.g., ``(3+4j)``. Some examples of imaginary literals:\n\n 3.14j 10.j 10j .001j 1e100j 3.14e-10j\n', - 'import': u'\nThe ``import`` statement\n************************\n\n import_stmt ::= "import" module ["as" name] ( "," module ["as" name] )*\n | "from" relative_module "import" identifier ["as" name]\n ( "," identifier ["as" name] )*\n | "from" relative_module "import" "(" identifier ["as" name]\n ( "," identifier ["as" name] )* [","] ")"\n | "from" module "import" "*"\n module ::= (identifier ".")* identifier\n relative_module ::= "."* module | "."+\n name ::= identifier\n\nImport statements are executed in two steps: (1) find a module, and\ninitialize it if necessary; (2) define a name or names in the local\nnamespace (of the scope where the ``import`` statement occurs). The\nstatement comes in two forms differing on whether it uses the ``from``\nkeyword. The first form (without ``from``) repeats these steps for\neach identifier in the list. The form with ``from`` performs step (1)\nonce, and then performs step (2) repeatedly.\n\nTo understand how step (1) occurs, one must first understand how\nPython handles hierarchical naming of modules. To help organize\nmodules and provide a hierarchy in naming, Python has a concept of\npackages. A package can contain other packages and modules while\nmodules cannot contain other modules or packages. From a file system\nperspective, packages are directories and modules are files. The\noriginal specification for packages is still available to read,\nalthough minor details have changed since the writing of that\ndocument.\n\nOnce the name of the module is known (unless otherwise specified, the\nterm "module" will refer to both packages and modules), searching for\nthe module or package can begin. The first place checked is\n``sys.modules``, the cache of all modules that have been imported\npreviously. If the module is found there then it is used in step (2)\nof import.\n\nIf the module is not found in the cache, then ``sys.meta_path`` is\nsearched (the specification for ``sys.meta_path`` can be found in\n**PEP 302**). The object is a list of *finder* objects which are\nqueried in order as to whether they know how to load the module by\ncalling their ``find_module()`` method with the name of the module. 
If\nthe module happens to be contained within a package (as denoted by the\nexistence of a dot in the name), then a second argument to\n``find_module()`` is given as the value of the ``__path__`` attribute\nfrom the parent package (everything up to the last dot in the name of\nthe module being imported). If a finder can find the module it returns\na *loader* (discussed later) or returns ``None``.\n\nIf none of the finders on ``sys.meta_path`` are able to find the\nmodule then some implicitly defined finders are queried.\nImplementations of Python vary in what implicit meta path finders are\ndefined. The one they all do define, though, is one that handles\n``sys.path_hooks``, ``sys.path_importer_cache``, and ``sys.path``.\n\nThe implicit finder searches for the requested module in the "paths"\nspecified in one of two places ("paths" do not have to be file system\npaths). If the module being imported is supposed to be contained\nwithin a package then the second argument passed to ``find_module()``,\n``__path__`` on the parent package, is used as the source of paths. If\nthe module is not contained in a package then ``sys.path`` is used as\nthe source of paths.\n\nOnce the source of paths is chosen it is iterated over to find a\nfinder that can handle that path. The dict at\n``sys.path_importer_cache`` caches finders for paths and is checked\nfor a finder. If the path does not have a finder cached then\n``sys.path_hooks`` is searched by calling each object in the list with\na single argument of the path, returning a finder or raises\n``ImportError``. If a finder is returned then it is cached in\n``sys.path_importer_cache`` and then used for that path entry. If no\nfinder can be found but the path exists then a value of ``None`` is\nstored in ``sys.path_importer_cache`` to signify that an implicit,\nfile-based finder that handles modules stored as individual files\nshould be used for that path. If the path does not exist then a finder\nwhich always returns ``None`` is placed in the cache for the path.\n\nIf no finder can find the module then ``ImportError`` is raised.\nOtherwise some finder returned a loader whose ``load_module()`` method\nis called with the name of the module to load (see **PEP 302** for the\noriginal definition of loaders). A loader has several responsibilities\nto perform on a module it loads. First, if the module already exists\nin ``sys.modules`` (a possibility if the loader is called outside of\nthe import machinery) then it is to use that module for initialization\nand not a new module. But if the module does not exist in\n``sys.modules`` then it is to be added to that dict before\ninitialization begins. If an error occurs during loading of the module\nand it was added to ``sys.modules`` it is to be removed from the dict.\nIf an error occurs but the module was already in ``sys.modules`` it is\nleft in the dict.\n\nThe loader must set several attributes on the module. ``__name__`` is\nto be set to the name of the module. ``__file__`` is to be the "path"\nto the file unless the module is built-in (and thus listed in\n``sys.builtin_module_names``) in which case the attribute is not set.\nIf what is being imported is a package then ``__path__`` is to be set\nto a list of paths to be searched when looking for modules and\npackages contained within the package being imported. ``__package__``\nis optional but should be set to the name of package that contains the\nmodule or package (the empty string is used for module not contained\nin a package). 
``__loader__`` is also optional but should be set to\nthe loader object that is loading the module.\n\nIf an error occurs during loading then the loader raises\n``ImportError`` if some other exception is not already being\npropagated. Otherwise the loader returns the module that was loaded\nand initialized.\n\nWhen step (1) finishes without raising an exception, step (2) can\nbegin.\n\nThe first form of ``import`` statement binds the module name in the\nlocal namespace to the module object, and then goes on to import the\nnext identifier, if any. If the module name is followed by ``as``,\nthe name following ``as`` is used as the local name for the module.\n\nThe ``from`` form does not bind the module name: it goes through the\nlist of identifiers, looks each one of them up in the module found in\nstep (1), and binds the name in the local namespace to the object thus\nfound. As with the first form of ``import``, an alternate local name\ncan be supplied by specifying "``as`` localname". If a name is not\nfound, ``ImportError`` is raised. If the list of identifiers is\nreplaced by a star (``\'*\'``), all public names defined in the module\nare bound in the local namespace of the ``import`` statement..\n\nThe *public names* defined by a module are determined by checking the\nmodule\'s namespace for a variable named ``__all__``; if defined, it\nmust be a sequence of strings which are names defined or imported by\nthat module. The names given in ``__all__`` are all considered public\nand are required to exist. If ``__all__`` is not defined, the set of\npublic names includes all names found in the module\'s namespace which\ndo not begin with an underscore character (``\'_\'``). ``__all__``\nshould contain the entire public API. It is intended to avoid\naccidentally exporting items that are not part of the API (such as\nlibrary modules which were imported and used within the module).\n\nThe ``from`` form with ``*`` may only occur in a module scope. If the\nwild card form of import --- ``import *`` --- is used in a function\nand the function contains or is a nested block with free variables,\nthe compiler will raise a ``SyntaxError``.\n\nWhen specifying what module to import you do not have to specify the\nabsolute name of the module. When a module or package is contained\nwithin another package it is possible to make a relative import within\nthe same top package without having to mention the package name. By\nusing leading dots in the specified module or package after ``from``\nyou can specify how high to traverse up the current package hierarchy\nwithout specifying exact names. One leading dot means the current\npackage where the module making the import exists. Two dots means up\none package level. Three dots is up two levels, etc. So if you execute\n``from . import mod`` from a module in the ``pkg`` package then you\nwill end up importing ``pkg.mod``. If you execute ``from ..subpkg2\nimprt mod`` from within ``pkg.subpkg1`` you will import\n``pkg.subpkg2.mod``. The specification for relative imports is\ncontained within **PEP 328**.\n\n``importlib.import_module()`` is provided to support applications that\ndetermine which modules need to be loaded dynamically.\n\n\nFuture statements\n=================\n\nA *future statement* is a directive to the compiler that a particular\nmodule should be compiled using syntax or semantics that will be\navailable in a specified future release of Python. 
The future\nstatement is intended to ease migration to future versions of Python\nthat introduce incompatible changes to the language. It allows use of\nthe new features on a per-module basis before the release in which the\nfeature becomes standard.\n\n future_statement ::= "from" "__future__" "import" feature ["as" name]\n ("," feature ["as" name])*\n | "from" "__future__" "import" "(" feature ["as" name]\n ("," feature ["as" name])* [","] ")"\n feature ::= identifier\n name ::= identifier\n\nA future statement must appear near the top of the module. The only\nlines that can appear before a future statement are:\n\n* the module docstring (if any),\n\n* comments,\n\n* blank lines, and\n\n* other future statements.\n\nThe features recognized by Python 2.6 are ``unicode_literals``,\n``print_function``, ``absolute_import``, ``division``, ``generators``,\n``nested_scopes`` and ``with_statement``. ``generators``,\n``with_statement``, ``nested_scopes`` are redundant in Python version\n2.6 and above because they are always enabled.\n\nA future statement is recognized and treated specially at compile\ntime: Changes to the semantics of core constructs are often\nimplemented by generating different code. It may even be the case\nthat a new feature introduces new incompatible syntax (such as a new\nreserved word), in which case the compiler may need to parse the\nmodule differently. Such decisions cannot be pushed off until\nruntime.\n\nFor any given release, the compiler knows which feature names have\nbeen defined, and raises a compile-time error if a future statement\ncontains a feature not known to it.\n\nThe direct runtime semantics are the same as for any import statement:\nthere is a standard module ``__future__``, described later, and it\nwill be imported in the usual way at the time the future statement is\nexecuted.\n\nThe interesting runtime semantics depend on the specific feature\nenabled by the future statement.\n\nNote that there is nothing special about the statement:\n\n import __future__ [as name]\n\nThat is not a future statement; it\'s an ordinary import statement with\nno special semantics or syntax restrictions.\n\nCode compiled by an ``exec`` statement or calls to the built-in\nfunctions ``compile()`` and ``execfile()`` that occur in a module\n``M`` containing a future statement will, by default, use the new\nsyntax or semantics associated with the future statement. This can,\nstarting with Python 2.2 be controlled by optional arguments to\n``compile()`` --- see the documentation of that function for details.\n\nA future statement typed at an interactive interpreter prompt will\ntake effect for the rest of the interpreter session. 
If an\ninterpreter is started with the *-i* option, is passed a script name\nto execute, and the script includes a future statement, it will be in\neffect in the interactive session started after the script is\nexecuted.\n\nSee also:\n\n **PEP 236** - Back to the __future__\n The original proposal for the __future__ mechanism.\n', + 'import': u'\nThe ``import`` statement\n************************\n\n import_stmt ::= "import" module ["as" name] ( "," module ["as" name] )*\n | "from" relative_module "import" identifier ["as" name]\n ( "," identifier ["as" name] )*\n | "from" relative_module "import" "(" identifier ["as" name]\n ( "," identifier ["as" name] )* [","] ")"\n | "from" module "import" "*"\n module ::= (identifier ".")* identifier\n relative_module ::= "."* module | "."+\n name ::= identifier\n\nImport statements are executed in two steps: (1) find a module, and\ninitialize it if necessary; (2) define a name or names in the local\nnamespace (of the scope where the ``import`` statement occurs). The\nstatement comes in two forms differing on whether it uses the ``from``\nkeyword. The first form (without ``from``) repeats these steps for\neach identifier in the list. The form with ``from`` performs step (1)\nonce, and then performs step (2) repeatedly.\n\nTo understand how step (1) occurs, one must first understand how\nPython handles hierarchical naming of modules. To help organize\nmodules and provide a hierarchy in naming, Python has a concept of\npackages. A package can contain other packages and modules while\nmodules cannot contain other modules or packages. From a file system\nperspective, packages are directories and modules are files. The\noriginal specification for packages is still available to read,\nalthough minor details have changed since the writing of that\ndocument.\n\nOnce the name of the module is known (unless otherwise specified, the\nterm "module" will refer to both packages and modules), searching for\nthe module or package can begin. The first place checked is\n``sys.modules``, the cache of all modules that have been imported\npreviously. If the module is found there then it is used in step (2)\nof import.\n\nIf the module is not found in the cache, then ``sys.meta_path`` is\nsearched (the specification for ``sys.meta_path`` can be found in\n**PEP 302**). The object is a list of *finder* objects which are\nqueried in order as to whether they know how to load the module by\ncalling their ``find_module()`` method with the name of the module. If\nthe module happens to be contained within a package (as denoted by the\nexistence of a dot in the name), then a second argument to\n``find_module()`` is given as the value of the ``__path__`` attribute\nfrom the parent package (everything up to the last dot in the name of\nthe module being imported). If a finder can find the module it returns\na *loader* (discussed later) or returns ``None``.\n\nIf none of the finders on ``sys.meta_path`` are able to find the\nmodule then some implicitly defined finders are queried.\nImplementations of Python vary in what implicit meta path finders are\ndefined. The one they all do define, though, is one that handles\n``sys.path_hooks``, ``sys.path_importer_cache``, and ``sys.path``.\n\nThe implicit finder searches for the requested module in the "paths"\nspecified in one of two places ("paths" do not have to be file system\npaths). 
If the module being imported is supposed to be contained\nwithin a package then the second argument passed to ``find_module()``,\n``__path__`` on the parent package, is used as the source of paths. If\nthe module is not contained in a package then ``sys.path`` is used as\nthe source of paths.\n\nOnce the source of paths is chosen it is iterated over to find a\nfinder that can handle that path. The dict at\n``sys.path_importer_cache`` caches finders for paths and is checked\nfor a finder. If the path does not have a finder cached then\n``sys.path_hooks`` is searched by calling each object in the list with\na single argument of the path, returning a finder or raises\n``ImportError``. If a finder is returned then it is cached in\n``sys.path_importer_cache`` and then used for that path entry. If no\nfinder can be found but the path exists then a value of ``None`` is\nstored in ``sys.path_importer_cache`` to signify that an implicit,\nfile-based finder that handles modules stored as individual files\nshould be used for that path. If the path does not exist then a finder\nwhich always returns ``None`` is placed in the cache for the path.\n\nIf no finder can find the module then ``ImportError`` is raised.\nOtherwise some finder returned a loader whose ``load_module()`` method\nis called with the name of the module to load (see **PEP 302** for the\noriginal definition of loaders). A loader has several responsibilities\nto perform on a module it loads. First, if the module already exists\nin ``sys.modules`` (a possibility if the loader is called outside of\nthe import machinery) then it is to use that module for initialization\nand not a new module. But if the module does not exist in\n``sys.modules`` then it is to be added to that dict before\ninitialization begins. If an error occurs during loading of the module\nand it was added to ``sys.modules`` it is to be removed from the dict.\nIf an error occurs but the module was already in ``sys.modules`` it is\nleft in the dict.\n\nThe loader must set several attributes on the module. ``__name__`` is\nto be set to the name of the module. ``__file__`` is to be the "path"\nto the file unless the module is built-in (and thus listed in\n``sys.builtin_module_names``) in which case the attribute is not set.\nIf what is being imported is a package then ``__path__`` is to be set\nto a list of paths to be searched when looking for modules and\npackages contained within the package being imported. ``__package__``\nis optional but should be set to the name of package that contains the\nmodule or package (the empty string is used for module not contained\nin a package). ``__loader__`` is also optional but should be set to\nthe loader object that is loading the module.\n\nIf an error occurs during loading then the loader raises\n``ImportError`` if some other exception is not already being\npropagated. Otherwise the loader returns the module that was loaded\nand initialized.\n\nWhen step (1) finishes without raising an exception, step (2) can\nbegin.\n\nThe first form of ``import`` statement binds the module name in the\nlocal namespace to the module object, and then goes on to import the\nnext identifier, if any. If the module name is followed by ``as``,\nthe name following ``as`` is used as the local name for the module.\n\nThe ``from`` form does not bind the module name: it goes through the\nlist of identifiers, looks each one of them up in the module found in\nstep (1), and binds the name in the local namespace to the object thus\nfound. 
As with the first form of ``import``, an alternate local name\ncan be supplied by specifying "``as`` localname". If a name is not\nfound, ``ImportError`` is raised. If the list of identifiers is\nreplaced by a star (``\'*\'``), all public names defined in the module\nare bound in the local namespace of the ``import`` statement..\n\nThe *public names* defined by a module are determined by checking the\nmodule\'s namespace for a variable named ``__all__``; if defined, it\nmust be a sequence of strings which are names defined or imported by\nthat module. The names given in ``__all__`` are all considered public\nand are required to exist. If ``__all__`` is not defined, the set of\npublic names includes all names found in the module\'s namespace which\ndo not begin with an underscore character (``\'_\'``). ``__all__``\nshould contain the entire public API. It is intended to avoid\naccidentally exporting items that are not part of the API (such as\nlibrary modules which were imported and used within the module).\n\nThe ``from`` form with ``*`` may only occur in a module scope. If the\nwild card form of import --- ``import *`` --- is used in a function\nand the function contains or is a nested block with free variables,\nthe compiler will raise a ``SyntaxError``.\n\nWhen specifying what module to import you do not have to specify the\nabsolute name of the module. When a module or package is contained\nwithin another package it is possible to make a relative import within\nthe same top package without having to mention the package name. By\nusing leading dots in the specified module or package after ``from``\nyou can specify how high to traverse up the current package hierarchy\nwithout specifying exact names. One leading dot means the current\npackage where the module making the import exists. Two dots means up\none package level. Three dots is up two levels, etc. So if you execute\n``from . import mod`` from a module in the ``pkg`` package then you\nwill end up importing ``pkg.mod``. If you execute ``from ..subpkg2\nimport mod`` from within ``pkg.subpkg1`` you will import\n``pkg.subpkg2.mod``. The specification for relative imports is\ncontained within **PEP 328**.\n\n``importlib.import_module()`` is provided to support applications that\ndetermine which modules need to be loaded dynamically.\n\n\nFuture statements\n=================\n\nA *future statement* is a directive to the compiler that a particular\nmodule should be compiled using syntax or semantics that will be\navailable in a specified future release of Python. The future\nstatement is intended to ease migration to future versions of Python\nthat introduce incompatible changes to the language. It allows use of\nthe new features on a per-module basis before the release in which the\nfeature becomes standard.\n\n future_statement ::= "from" "__future__" "import" feature ["as" name]\n ("," feature ["as" name])*\n | "from" "__future__" "import" "(" feature ["as" name]\n ("," feature ["as" name])* [","] ")"\n feature ::= identifier\n name ::= identifier\n\nA future statement must appear near the top of the module. The only\nlines that can appear before a future statement are:\n\n* the module docstring (if any),\n\n* comments,\n\n* blank lines, and\n\n* other future statements.\n\nThe features recognized by Python 2.6 are ``unicode_literals``,\n``print_function``, ``absolute_import``, ``division``, ``generators``,\n``nested_scopes`` and ``with_statement``. 
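The relative-import rules and ``importlib.import_module()`` mentioned above can also be exercised programmatically. A minimal sketch, assuming a hypothetical package ``pkg`` with subpackages ``subpkg1`` and ``subpkg2`` laid out as in the quoted example:

    import importlib

    # Dynamic equivalent of "import pkg.mod":
    mod = importlib.import_module('pkg.mod')

    # Relative names need the anchoring package passed explicitly; two
    # leading dots go up one package level, so this resolves to
    # pkg.subpkg2.mod, matching "from ..subpkg2 import mod" written
    # inside pkg.subpkg1.
    mod2 = importlib.import_module('..subpkg2.mod', package='pkg.subpkg1')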
``generators``,\n``with_statement``, ``nested_scopes`` are redundant in Python version\n2.6 and above because they are always enabled.\n\nA future statement is recognized and treated specially at compile\ntime: Changes to the semantics of core constructs are often\nimplemented by generating different code. It may even be the case\nthat a new feature introduces new incompatible syntax (such as a new\nreserved word), in which case the compiler may need to parse the\nmodule differently. Such decisions cannot be pushed off until\nruntime.\n\nFor any given release, the compiler knows which feature names have\nbeen defined, and raises a compile-time error if a future statement\ncontains a feature not known to it.\n\nThe direct runtime semantics are the same as for any import statement:\nthere is a standard module ``__future__``, described later, and it\nwill be imported in the usual way at the time the future statement is\nexecuted.\n\nThe interesting runtime semantics depend on the specific feature\nenabled by the future statement.\n\nNote that there is nothing special about the statement:\n\n import __future__ [as name]\n\nThat is not a future statement; it\'s an ordinary import statement with\nno special semantics or syntax restrictions.\n\nCode compiled by an ``exec`` statement or calls to the built-in\nfunctions ``compile()`` and ``execfile()`` that occur in a module\n``M`` containing a future statement will, by default, use the new\nsyntax or semantics associated with the future statement. This can,\nstarting with Python 2.2 be controlled by optional arguments to\n``compile()`` --- see the documentation of that function for details.\n\nA future statement typed at an interactive interpreter prompt will\ntake effect for the rest of the interpreter session. If an\ninterpreter is started with the *-i* option, is passed a script name\nto execute, and the script includes a future statement, it will be in\neffect in the interactive session started after the script is\nexecuted.\n\nSee also:\n\n **PEP 236** - Back to the __future__\n The original proposal for the __future__ mechanism.\n', 'in': u'\nComparisons\n***********\n\nUnlike C, all comparison operations in Python have the same priority,\nwhich is lower than that of any arithmetic, shifting or bitwise\noperation. Also unlike C, expressions like ``a < b < c`` have the\ninterpretation that is conventional in mathematics:\n\n comparison ::= or_expr ( comp_operator or_expr )*\n comp_operator ::= "<" | ">" | "==" | ">=" | "<=" | "<>" | "!="\n | "is" ["not"] | ["not"] "in"\n\nComparisons yield boolean values: ``True`` or ``False``.\n\nComparisons can be chained arbitrarily, e.g., ``x < y <= z`` is\nequivalent to ``x < y and y <= z``, except that ``y`` is evaluated\nonly once (but in both cases ``z`` is not evaluated at all when ``x <\ny`` is found to be false).\n\nFormally, if *a*, *b*, *c*, ..., *y*, *z* are expressions and *op1*,\n*op2*, ..., *opN* are comparison operators, then ``a op1 b op2 c ... y\nopN z`` is equivalent to ``a op1 b and b op2 c and ... y opN z``,\nexcept that each expression is evaluated at most once.\n\nNote that ``a op1 b op2 c`` doesn\'t imply any kind of comparison\nbetween *a* and *c*, so that, e.g., ``x < y > z`` is perfectly legal\n(though perhaps not pretty).\n\nThe forms ``<>`` and ``!=`` are equivalent; for consistency with C,\n``!=`` is preferred; where ``!=`` is mentioned below ``<>`` is also\naccepted. 
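The chaining rule quoted above (each operand is evaluated at most once, and no comparison between the outer operands is implied) is easy to observe directly; ``noisy()`` below is a hypothetical helper used only to show how often the middle operand is evaluated:

    def noisy():
        print 'evaluating the middle operand'
        return 5

    # Chained form: the middle operand is evaluated only once.
    print 1 < noisy() <= 10             # one message, then True

    # Unchained equivalent: noisy() runs twice.
    print 1 < noisy() and noisy() <= 10

    # Chaining implies nothing about the outer operands:
    print 1 < 3 > 2                     # True, even though 1 < 2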
The ``<>`` spelling is considered obsolescent.\n\nThe operators ``<``, ``>``, ``==``, ``>=``, ``<=``, and ``!=`` compare\nthe values of two objects. The objects need not have the same type.\nIf both are numbers, they are converted to a common type. Otherwise,\nobjects of different types *always* compare unequal, and are ordered\nconsistently but arbitrarily. You can control comparison behavior of\nobjects of non-built-in types by defining a ``__cmp__`` method or rich\ncomparison methods like ``__gt__``, described in section *Special\nmethod names*.\n\n(This unusual definition of comparison was used to simplify the\ndefinition of operations like sorting and the ``in`` and ``not in``\noperators. In the future, the comparison rules for objects of\ndifferent types are likely to change.)\n\nComparison of objects of the same type depends on the type:\n\n* Numbers are compared arithmetically.\n\n* Strings are compared lexicographically using the numeric equivalents\n (the result of the built-in function ``ord()``) of their characters.\n Unicode and 8-bit strings are fully interoperable in this behavior.\n [4]\n\n* Tuples and lists are compared lexicographically using comparison of\n corresponding elements. This means that to compare equal, each\n element must compare equal and the two sequences must be of the same\n type and have the same length.\n\n If not equal, the sequences are ordered the same as their first\n differing elements. For example, ``cmp([1,2,x], [1,2,y])`` returns\n the same as ``cmp(x,y)``. If the corresponding element does not\n exist, the shorter sequence is ordered first (for example, ``[1,2] <\n [1,2,3]``).\n\n* Mappings (dictionaries) compare equal if and only if their sorted\n (key, value) lists compare equal. [5] Outcomes other than equality\n are resolved consistently, but are not otherwise defined. [6]\n\n* Most other objects of built-in types compare unequal unless they are\n the same object; the choice whether one object is considered smaller\n or larger than another one is made arbitrarily but consistently\n within one execution of a program.\n\nThe operators ``in`` and ``not in`` test for collection membership.\n``x in s`` evaluates to true if *x* is a member of the collection *s*,\nand false otherwise. ``x not in s`` returns the negation of ``x in\ns``. The collection membership test has traditionally been bound to\nsequences; an object is a member of a collection if the collection is\na sequence and contains an element equal to that object. However, it\nmake sense for many other object types to support membership tests\nwithout being a sequence. In particular, dictionaries (for keys) and\nsets support membership testing.\n\nFor the list and tuple types, ``x in y`` is true if and only if there\nexists an index *i* such that ``x == y[i]`` is true.\n\nFor the Unicode and string types, ``x in y`` is true if and only if\n*x* is a substring of *y*. An equivalent test is ``y.find(x) != -1``.\nNote, *x* and *y* need not be the same type; consequently, ``u\'ab\' in\n\'abc\'`` will return ``True``. 
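A few concrete cases of the sequence-comparison and membership rules just quoted, shown as a short sketch:

    # Tuples and lists compare lexicographically, element by element;
    # the first differing element decides, and a missing element orders
    # the shorter sequence first.
    print (1, 2, 3) < (1, 2, 4)      # -> True
    print [1, 2] < [1, 2, 3]         # -> True

    # Membership tests: substring test for strings, element test for
    # other sequences; unicode and str interoperate here.
    print u'ab' in 'abc'             # -> True
    print 3 not in [1, 2]            # -> True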
Empty strings are always considered to\nbe a substring of any other string, so ``"" in "abc"`` will return\n``True``.\n\nChanged in version 2.3: Previously, *x* was required to be a string of\nlength ``1``.\n\nFor user-defined classes which define the ``__contains__()`` method,\n``x in y`` is true if and only if ``y.__contains__(x)`` is true.\n\nFor user-defined classes which do not define ``__contains__()`` but do\ndefine ``__iter__()``, ``x in y`` is true if some value ``z`` with ``x\n== z`` is produced while iterating over ``y``. If an exception is\nraised during the iteration, it is as if ``in`` raised that exception.\n\nLastly, the old-style iteration protocol is tried: if a class defines\n``__getitem__()``, ``x in y`` is true if and only if there is a non-\nnegative integer index *i* such that ``x == y[i]``, and all lower\ninteger indices do not raise ``IndexError`` exception. (If any other\nexception is raised, it is as if ``in`` raised that exception).\n\nThe operator ``not in`` is defined to have the inverse true value of\n``in``.\n\nThe operators ``is`` and ``is not`` test for object identity: ``x is\ny`` is true if and only if *x* and *y* are the same object. ``x is\nnot y`` yields the inverse truth value. [7]\n', 'integers': u'\nInteger and long integer literals\n*********************************\n\nInteger and long integer literals are described by the following\nlexical definitions:\n\n longinteger ::= integer ("l" | "L")\n integer ::= decimalinteger | octinteger | hexinteger | bininteger\n decimalinteger ::= nonzerodigit digit* | "0"\n octinteger ::= "0" ("o" | "O") octdigit+ | "0" octdigit+\n hexinteger ::= "0" ("x" | "X") hexdigit+\n bininteger ::= "0" ("b" | "B") bindigit+\n nonzerodigit ::= "1"..."9"\n octdigit ::= "0"..."7"\n bindigit ::= "0" | "1"\n hexdigit ::= digit | "a"..."f" | "A"..."F"\n\nAlthough both lower case ``\'l\'`` and upper case ``\'L\'`` are allowed as\nsuffix for long integers, it is strongly recommended to always use\n``\'L\'``, since the letter ``\'l\'`` looks too much like the digit\n``\'1\'``.\n\nPlain integer literals that are above the largest representable plain\ninteger (e.g., 2147483647 when using 32-bit arithmetic) are accepted\nas if they were long integers instead. [1] There is no limit for long\ninteger literals apart from what can be stored in available memory.\n\nSome examples of plain integer literals (first row) and long integer\nliterals (second and third rows):\n\n 7 2147483647 0177\n 3L 79228162514264337593543950336L 0377L 0x100000000L\n 79228162514264337593543950336 0xdeadbeef\n', 'lambda': u'\nLambdas\n*******\n\n lambda_form ::= "lambda" [parameter_list]: expression\n old_lambda_form ::= "lambda" [parameter_list]: old_expression\n\nLambda forms (lambda expressions) have the same syntactic position as\nexpressions. 
They are a shorthand to create anonymous functions; the\nexpression ``lambda arguments: expression`` yields a function object.\nThe unnamed object behaves like a function object defined with\n\n def name(arguments):\n return expression\n\nSee section *Function definitions* for the syntax of parameter lists.\nNote that functions created with lambda forms cannot contain\nstatements.\n', 'lists': u'\nList displays\n*************\n\nA list display is a possibly empty series of expressions enclosed in\nsquare brackets:\n\n list_display ::= "[" [expression_list | list_comprehension] "]"\n list_comprehension ::= expression list_for\n list_for ::= "for" target_list "in" old_expression_list [list_iter]\n old_expression_list ::= old_expression [("," old_expression)+ [","]]\n old_expression ::= or_test | old_lambda_form\n list_iter ::= list_for | list_if\n list_if ::= "if" old_expression [list_iter]\n\nA list display yields a new list object. Its contents are specified\nby providing either a list of expressions or a list comprehension.\nWhen a comma-separated list of expressions is supplied, its elements\nare evaluated from left to right and placed into the list object in\nthat order. When a list comprehension is supplied, it consists of a\nsingle expression followed by at least one ``for`` clause and zero or\nmore ``for`` or ``if`` clauses. In this case, the elements of the new\nlist are those that would be produced by considering each of the\n``for`` or ``if`` clauses a block, nesting from left to right, and\nevaluating the expression to produce a list element each time the\ninnermost block is reached [1].\n', - 'naming': u"\nNaming and binding\n******************\n\n*Names* refer to objects. Names are introduced by name binding\noperations. Each occurrence of a name in the program text refers to\nthe *binding* of that name established in the innermost function block\ncontaining the use.\n\nA *block* is a piece of Python program text that is executed as a\nunit. The following are blocks: a module, a function body, and a class\ndefinition. Each command typed interactively is a block. A script\nfile (a file given as standard input to the interpreter or specified\non the interpreter command line the first argument) is a code block.\nA script command (a command specified on the interpreter command line\nwith the '**-c**' option) is a code block. The file read by the\nbuilt-in function ``execfile()`` is a code block. The string argument\npassed to the built-in function ``eval()`` and to the ``exec``\nstatement is a code block. The expression read and evaluated by the\nbuilt-in function ``input()`` is a code block.\n\nA code block is executed in an *execution frame*. A frame contains\nsome administrative information (used for debugging) and determines\nwhere and how execution continues after the code block's execution has\ncompleted.\n\nA *scope* defines the visibility of a name within a block. If a local\nvariable is defined in a block, its scope includes that block. If the\ndefinition occurs in a function block, the scope extends to any blocks\ncontained within the defining one, unless a contained block introduces\na different binding for the name. The scope of names defined in a\nclass block is limited to the class block; it does not extend to the\ncode blocks of methods -- this includes generator expressions since\nthey are implemented using a function scope. 
This means that the\nfollowing will fail:\n\n class A:\n a = 42\n b = list(a + i for i in range(10))\n\nWhen a name is used in a code block, it is resolved using the nearest\nenclosing scope. The set of all such scopes visible to a code block\nis called the block's *environment*.\n\nIf a name is bound in a block, it is a local variable of that block.\nIf a name is bound at the module level, it is a global variable. (The\nvariables of the module code block are local and global.) If a\nvariable is used in a code block but not defined there, it is a *free\nvariable*.\n\nWhen a name is not found at all, a ``NameError`` exception is raised.\nIf the name refers to a local variable that has not been bound, a\n``UnboundLocalError`` exception is raised. ``UnboundLocalError`` is a\nsubclass of ``NameError``.\n\nThe following constructs bind names: formal parameters to functions,\n``import`` statements, class and function definitions (these bind the\nclass or function name in the defining block), and targets that are\nidentifiers if occurring in an assignment, ``for`` loop header, in the\nsecond position of an ``except`` clause header or after ``as`` in a\n``with`` statement. The ``import`` statement of the form ``from ...\nimport *`` binds all names defined in the imported module, except\nthose beginning with an underscore. This form may only be used at the\nmodule level.\n\nA target occurring in a ``del`` statement is also considered bound for\nthis purpose (though the actual semantics are to unbind the name). It\nis illegal to unbind a name that is referenced by an enclosing scope;\nthe compiler will report a ``SyntaxError``.\n\nEach assignment or import statement occurs within a block defined by a\nclass or function definition or at the module level (the top-level\ncode block).\n\nIf a name binding operation occurs anywhere within a code block, all\nuses of the name within the block are treated as references to the\ncurrent block. This can lead to errors when a name is used within a\nblock before it is bound. This rule is subtle. Python lacks\ndeclarations and allows name binding operations to occur anywhere\nwithin a code block. The local variables of a code block can be\ndetermined by scanning the entire text of the block for name binding\noperations.\n\nIf the global statement occurs within a block, all uses of the name\nspecified in the statement refer to the binding of that name in the\ntop-level namespace. Names are resolved in the top-level namespace by\nsearching the global namespace, i.e. the namespace of the module\ncontaining the code block, and the builtins namespace, the namespace\nof the module ``__builtin__``. The global namespace is searched\nfirst. If the name is not found there, the builtins namespace is\nsearched. The global statement must precede all uses of the name.\n\nThe builtins namespace associated with the execution of a code block\nis actually found by looking up the name ``__builtins__`` in its\nglobal namespace; this should be a dictionary or a module (in the\nlatter case the module's dictionary is used). By default, when in the\n``__main__`` module, ``__builtins__`` is the built-in module\n``__builtin__`` (note: no 's'); when in any other module,\n``__builtins__`` is an alias for the dictionary of the ``__builtin__``\nmodule itself. ``__builtins__`` can be set to a user-created\ndictionary to create a weak form of restricted execution.\n\n**CPython implementation detail:** Users should not touch\n``__builtins__``; it is strictly an implementation detail. 
Users\nwanting to override values in the builtins namespace should ``import``\nthe ``__builtin__`` (no 's') module and modify its attributes\nappropriately.\n\nThe namespace for a module is automatically created the first time a\nmodule is imported. The main module for a script is always called\n``__main__``.\n\nThe global statement has the same scope as a name binding operation in\nthe same block. If the nearest enclosing scope for a free variable\ncontains a global statement, the free variable is treated as a global.\n\nA class definition is an executable statement that may use and define\nnames. These references follow the normal rules for name resolution.\nThe namespace of the class definition becomes the attribute dictionary\nof the class. Names defined at the class scope are not visible in\nmethods.\n\n\nInteraction with dynamic features\n=================================\n\nThere are several cases where Python statements are illegal when used\nin conjunction with nested scopes that contain free variables.\n\nIf a variable is referenced in an enclosing scope, it is illegal to\ndelete the name. An error will be reported at compile time.\n\nIf the wild card form of import --- ``import *`` --- is used in a\nfunction and the function contains or is a nested block with free\nvariables, the compiler will raise a ``SyntaxError``.\n\nIf ``exec`` is used in a function and the function contains or is a\nnested block with free variables, the compiler will raise a\n``SyntaxError`` unless the exec explicitly specifies the local\nnamespace for the ``exec``. (In other words, ``exec obj`` would be\nillegal, but ``exec obj in ns`` would be legal.)\n\nThe ``eval()``, ``execfile()``, and ``input()`` functions and the\n``exec`` statement do not have access to the full environment for\nresolving names. Names may be resolved in the local and global\nnamespaces of the caller. Free variables are not resolved in the\nnearest enclosing namespace, but in the global namespace. [1] The\n``exec`` statement and the ``eval()`` and ``execfile()`` functions\nhave optional arguments to override the global and local namespace.\nIf only one namespace is specified, it is used for both.\n", + 'naming': u"\nNaming and binding\n******************\n\n*Names* refer to objects. Names are introduced by name binding\noperations. Each occurrence of a name in the program text refers to\nthe *binding* of that name established in the innermost function block\ncontaining the use.\n\nA *block* is a piece of Python program text that is executed as a\nunit. The following are blocks: a module, a function body, and a class\ndefinition. Each command typed interactively is a block. A script\nfile (a file given as standard input to the interpreter or specified\non the interpreter command line the first argument) is a code block.\nA script command (a command specified on the interpreter command line\nwith the '**-c**' option) is a code block. The file read by the\nbuilt-in function ``execfile()`` is a code block. The string argument\npassed to the built-in function ``eval()`` and to the ``exec``\nstatement is a code block. The expression read and evaluated by the\nbuilt-in function ``input()`` is a code block.\n\nA code block is executed in an *execution frame*. A frame contains\nsome administrative information (used for debugging) and determines\nwhere and how execution continues after the code block's execution has\ncompleted.\n\nA *scope* defines the visibility of a name within a block. 
If a local\nvariable is defined in a block, its scope includes that block. If the\ndefinition occurs in a function block, the scope extends to any blocks\ncontained within the defining one, unless a contained block introduces\na different binding for the name. The scope of names defined in a\nclass block is limited to the class block; it does not extend to the\ncode blocks of methods -- this includes generator expressions since\nthey are implemented using a function scope. This means that the\nfollowing will fail:\n\n class A:\n a = 42\n b = list(a + i for i in range(10))\n\nWhen a name is used in a code block, it is resolved using the nearest\nenclosing scope. The set of all such scopes visible to a code block\nis called the block's *environment*.\n\nIf a name is bound in a block, it is a local variable of that block.\nIf a name is bound at the module level, it is a global variable. (The\nvariables of the module code block are local and global.) If a\nvariable is used in a code block but not defined there, it is a *free\nvariable*.\n\nWhen a name is not found at all, a ``NameError`` exception is raised.\nIf the name refers to a local variable that has not been bound, a\n``UnboundLocalError`` exception is raised. ``UnboundLocalError`` is a\nsubclass of ``NameError``.\n\nThe following constructs bind names: formal parameters to functions,\n``import`` statements, class and function definitions (these bind the\nclass or function name in the defining block), and targets that are\nidentifiers if occurring in an assignment, ``for`` loop header, in the\nsecond position of an ``except`` clause header or after ``as`` in a\n``with`` statement. The ``import`` statement of the form ``from ...\nimport *`` binds all names defined in the imported module, except\nthose beginning with an underscore. This form may only be used at the\nmodule level.\n\nA target occurring in a ``del`` statement is also considered bound for\nthis purpose (though the actual semantics are to unbind the name). It\nis illegal to unbind a name that is referenced by an enclosing scope;\nthe compiler will report a ``SyntaxError``.\n\nEach assignment or import statement occurs within a block defined by a\nclass or function definition or at the module level (the top-level\ncode block).\n\nIf a name binding operation occurs anywhere within a code block, all\nuses of the name within the block are treated as references to the\ncurrent block. This can lead to errors when a name is used within a\nblock before it is bound. This rule is subtle. Python lacks\ndeclarations and allows name binding operations to occur anywhere\nwithin a code block. The local variables of a code block can be\ndetermined by scanning the entire text of the block for name binding\noperations.\n\nIf the global statement occurs within a block, all uses of the name\nspecified in the statement refer to the binding of that name in the\ntop-level namespace. Names are resolved in the top-level namespace by\nsearching the global namespace, i.e. the namespace of the module\ncontaining the code block, and the builtins namespace, the namespace\nof the module ``__builtin__``. The global namespace is searched\nfirst. If the name is not found there, the builtins namespace is\nsearched. 
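The binding rules described above are a common source of ``UnboundLocalError``; the following sketch (with a hypothetical module-level name ``counter``) shows both the failure and the ``global`` declaration that avoids it:

    counter = 0

    def broken():
        # The assignment makes ``counter`` local to the whole block, so the
        # read on the right-hand side raises UnboundLocalError when called.
        counter = counter + 1

    def bump():
        global counter               # resolve ``counter`` in the module namespace
        counter = counter + 1

    bump()
    print counter                    # -> 1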
The global statement must precede all uses of the name.\n\nThe builtins namespace associated with the execution of a code block\nis actually found by looking up the name ``__builtins__`` in its\nglobal namespace; this should be a dictionary or a module (in the\nlatter case the module's dictionary is used). By default, when in the\n``__main__`` module, ``__builtins__`` is the built-in module\n``__builtin__`` (note: no 's'); when in any other module,\n``__builtins__`` is an alias for the dictionary of the ``__builtin__``\nmodule itself. ``__builtins__`` can be set to a user-created\ndictionary to create a weak form of restricted execution.\n\n**CPython implementation detail:** Users should not touch\n``__builtins__``; it is strictly an implementation detail. Users\nwanting to override values in the builtins namespace should ``import``\nthe ``__builtin__`` (no 's') module and modify its attributes\nappropriately.\n\nThe namespace for a module is automatically created the first time a\nmodule is imported. The main module for a script is always called\n``__main__``.\n\nThe ``global`` statement has the same scope as a name binding\noperation in the same block. If the nearest enclosing scope for a\nfree variable contains a global statement, the free variable is\ntreated as a global.\n\nA class definition is an executable statement that may use and define\nnames. These references follow the normal rules for name resolution.\nThe namespace of the class definition becomes the attribute dictionary\nof the class. Names defined at the class scope are not visible in\nmethods.\n\n\nInteraction with dynamic features\n=================================\n\nThere are several cases where Python statements are illegal when used\nin conjunction with nested scopes that contain free variables.\n\nIf a variable is referenced in an enclosing scope, it is illegal to\ndelete the name. An error will be reported at compile time.\n\nIf the wild card form of import --- ``import *`` --- is used in a\nfunction and the function contains or is a nested block with free\nvariables, the compiler will raise a ``SyntaxError``.\n\nIf ``exec`` is used in a function and the function contains or is a\nnested block with free variables, the compiler will raise a\n``SyntaxError`` unless the exec explicitly specifies the local\nnamespace for the ``exec``. (In other words, ``exec obj`` would be\nillegal, but ``exec obj in ns`` would be legal.)\n\nThe ``eval()``, ``execfile()``, and ``input()`` functions and the\n``exec`` statement do not have access to the full environment for\nresolving names. Names may be resolved in the local and global\nnamespaces of the caller. Free variables are not resolved in the\nnearest enclosing namespace, but in the global namespace. [1] The\n``exec`` statement and the ``eval()`` and ``execfile()`` functions\nhave optional arguments to override the global and local namespace.\nIf only one namespace is specified, it is used for both.\n", 'numbers': u"\nNumeric literals\n****************\n\nThere are four types of numeric literals: plain integers, long\nintegers, floating point numbers, and imaginary numbers. 
There are no\ncomplex literals (complex numbers can be formed by adding a real\nnumber and an imaginary number).\n\nNote that numeric literals do not include a sign; a phrase like ``-1``\nis actually an expression composed of the unary operator '``-``' and\nthe literal ``1``.\n", 'numeric-types': u'\nEmulating numeric types\n***********************\n\nThe following methods can be defined to emulate numeric objects.\nMethods corresponding to operations that are not supported by the\nparticular kind of number implemented (e.g., bitwise operations for\nnon-integral numbers) should be left undefined.\n\nobject.__add__(self, other)\nobject.__sub__(self, other)\nobject.__mul__(self, other)\nobject.__floordiv__(self, other)\nobject.__mod__(self, other)\nobject.__divmod__(self, other)\nobject.__pow__(self, other[, modulo])\nobject.__lshift__(self, other)\nobject.__rshift__(self, other)\nobject.__and__(self, other)\nobject.__xor__(self, other)\nobject.__or__(self, other)\n\n These methods are called to implement the binary arithmetic\n operations (``+``, ``-``, ``*``, ``//``, ``%``, ``divmod()``,\n ``pow()``, ``**``, ``<<``, ``>>``, ``&``, ``^``, ``|``). For\n instance, to evaluate the expression ``x + y``, where *x* is an\n instance of a class that has an ``__add__()`` method,\n ``x.__add__(y)`` is called. The ``__divmod__()`` method should be\n the equivalent to using ``__floordiv__()`` and ``__mod__()``; it\n should not be related to ``__truediv__()`` (described below). Note\n that ``__pow__()`` should be defined to accept an optional third\n argument if the ternary version of the built-in ``pow()`` function\n is to be supported.\n\n If one of those methods does not support the operation with the\n supplied arguments, it should return ``NotImplemented``.\n\nobject.__div__(self, other)\nobject.__truediv__(self, other)\n\n The division operator (``/``) is implemented by these methods. The\n ``__truediv__()`` method is used when ``__future__.division`` is in\n effect, otherwise ``__div__()`` is used. If only one of these two\n methods is defined, the object will not support division in the\n alternate context; ``TypeError`` will be raised instead.\n\nobject.__radd__(self, other)\nobject.__rsub__(self, other)\nobject.__rmul__(self, other)\nobject.__rdiv__(self, other)\nobject.__rtruediv__(self, other)\nobject.__rfloordiv__(self, other)\nobject.__rmod__(self, other)\nobject.__rdivmod__(self, other)\nobject.__rpow__(self, other)\nobject.__rlshift__(self, other)\nobject.__rrshift__(self, other)\nobject.__rand__(self, other)\nobject.__rxor__(self, other)\nobject.__ror__(self, other)\n\n These methods are called to implement the binary arithmetic\n operations (``+``, ``-``, ``*``, ``/``, ``%``, ``divmod()``,\n ``pow()``, ``**``, ``<<``, ``>>``, ``&``, ``^``, ``|``) with\n reflected (swapped) operands. These functions are only called if\n the left operand does not support the corresponding operation and\n the operands are of different types. [2] For instance, to evaluate\n the expression ``x - y``, where *y* is an instance of a class that\n has an ``__rsub__()`` method, ``y.__rsub__(x)`` is called if\n ``x.__sub__(y)`` returns *NotImplemented*.\n\n Note that ternary ``pow()`` will not try calling ``__rpow__()``\n (the coercion rules would become too complicated).\n\n Note: If the right operand\'s type is a subclass of the left operand\'s\n type and that subclass provides the reflected method for the\n operation, this method will be called before the left operand\'s\n non-reflected method. 
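The reflected-operand protocol described above (returning ``NotImplemented`` and falling back to the ``__r*__`` methods) can be seen in a short sketch; ``Metres`` is a made-up class used only for illustration:

    class Metres(object):
        def __init__(self, value):
            self.value = value

        def __add__(self, other):
            if isinstance(other, (int, long, float)):
                return Metres(self.value + other)
            return NotImplemented        # let the other operand have a try

        def __radd__(self, other):
            # Called for "other + self" once other.__add__ has returned
            # NotImplemented (or does not exist).
            return self.__add__(other)

    print (Metres(3) + 2).value          # uses __add__           -> 5
    print (2 + Metres(3)).value          # falls back to __radd__ -> 5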
This behavior allows subclasses to\n override their ancestors\' operations.\n\nobject.__iadd__(self, other)\nobject.__isub__(self, other)\nobject.__imul__(self, other)\nobject.__idiv__(self, other)\nobject.__itruediv__(self, other)\nobject.__ifloordiv__(self, other)\nobject.__imod__(self, other)\nobject.__ipow__(self, other[, modulo])\nobject.__ilshift__(self, other)\nobject.__irshift__(self, other)\nobject.__iand__(self, other)\nobject.__ixor__(self, other)\nobject.__ior__(self, other)\n\n These methods are called to implement the augmented arithmetic\n assignments (``+=``, ``-=``, ``*=``, ``/=``, ``//=``, ``%=``,\n ``**=``, ``<<=``, ``>>=``, ``&=``, ``^=``, ``|=``). These methods\n should attempt to do the operation in-place (modifying *self*) and\n return the result (which could be, but does not have to be,\n *self*). If a specific method is not defined, the augmented\n assignment falls back to the normal methods. For instance, to\n execute the statement ``x += y``, where *x* is an instance of a\n class that has an ``__iadd__()`` method, ``x.__iadd__(y)`` is\n called. If *x* is an instance of a class that does not define a\n ``__iadd__()`` method, ``x.__add__(y)`` and ``y.__radd__(x)`` are\n considered, as with the evaluation of ``x + y``.\n\nobject.__neg__(self)\nobject.__pos__(self)\nobject.__abs__(self)\nobject.__invert__(self)\n\n Called to implement the unary arithmetic operations (``-``, ``+``,\n ``abs()`` and ``~``).\n\nobject.__complex__(self)\nobject.__int__(self)\nobject.__long__(self)\nobject.__float__(self)\n\n Called to implement the built-in functions ``complex()``,\n ``int()``, ``long()``, and ``float()``. Should return a value of\n the appropriate type.\n\nobject.__oct__(self)\nobject.__hex__(self)\n\n Called to implement the built-in functions ``oct()`` and ``hex()``.\n Should return a string value.\n\nobject.__index__(self)\n\n Called to implement ``operator.index()``. Also called whenever\n Python needs an integer object (such as in slicing). Must return\n an integer (int or long).\n\n New in version 2.5.\n\nobject.__coerce__(self, other)\n\n Called to implement "mixed-mode" numeric arithmetic. Should either\n return a 2-tuple containing *self* and *other* converted to a\n common numeric type, or ``None`` if conversion is impossible. When\n the common type would be the type of ``other``, it is sufficient to\n return ``None``, since the interpreter will also ask the other\n object to attempt a coercion (but sometimes, if the implementation\n of the other type cannot be changed, it is useful to do the\n conversion to the other type here). A return value of\n ``NotImplemented`` is equivalent to returning ``None``.\n', - 'objects': u'\nObjects, values and types\n*************************\n\n*Objects* are Python\'s abstraction for data. All data in a Python\nprogram is represented by objects or by relations between objects. (In\na sense, and in conformance to Von Neumann\'s model of a "stored\nprogram computer," code is also represented by objects.)\n\nEvery object has an identity, a type and a value. An object\'s\n*identity* never changes once it has been created; you may think of it\nas the object\'s address in memory. The \'``is``\' operator compares the\nidentity of two objects; the ``id()`` function returns an integer\nrepresenting its identity (currently implemented as its address). An\nobject\'s *type* is also unchangeable. 
[1] An object\'s type determines\nthe operations that the object supports (e.g., "does it have a\nlength?") and also defines the possible values for objects of that\ntype. The ``type()`` function returns an object\'s type (which is an\nobject itself). The *value* of some objects can change. Objects\nwhose value can change are said to be *mutable*; objects whose value\nis unchangeable once they are created are called *immutable*. (The\nvalue of an immutable container object that contains a reference to a\nmutable object can change when the latter\'s value is changed; however\nthe container is still considered immutable, because the collection of\nobjects it contains cannot be changed. So, immutability is not\nstrictly the same as having an unchangeable value, it is more subtle.)\nAn object\'s mutability is determined by its type; for instance,\nnumbers, strings and tuples are immutable, while dictionaries and\nlists are mutable.\n\nObjects are never explicitly destroyed; however, when they become\nunreachable they may be garbage-collected. An implementation is\nallowed to postpone garbage collection or omit it altogether --- it is\na matter of implementation quality how garbage collection is\nimplemented, as long as no objects are collected that are still\nreachable.\n\n**CPython implementation detail:** CPython currently uses a reference-\ncounting scheme with (optional) delayed detection of cyclically linked\ngarbage, which collects most objects as soon as they become\nunreachable, but is not guaranteed to collect garbage containing\ncircular references. See the documentation of the ``gc`` module for\ninformation on controlling the collection of cyclic garbage. Other\nimplementations act differently and CPython may change.\n\nNote that the use of the implementation\'s tracing or debugging\nfacilities may keep objects alive that would normally be collectable.\nAlso note that catching an exception with a \'``try``...``except``\'\nstatement may keep objects alive.\n\nSome objects contain references to "external" resources such as open\nfiles or windows. It is understood that these resources are freed\nwhen the object is garbage-collected, but since garbage collection is\nnot guaranteed to happen, such objects also provide an explicit way to\nrelease the external resource, usually a ``close()`` method. Programs\nare strongly recommended to explicitly close such objects. The\n\'``try``...``finally``\' statement provides a convenient way to do\nthis.\n\nSome objects contain references to other objects; these are called\n*containers*. Examples of containers are tuples, lists and\ndictionaries. The references are part of a container\'s value. In\nmost cases, when we talk about the value of a container, we imply the\nvalues, not the identities of the contained objects; however, when we\ntalk about the mutability of a container, only the identities of the\nimmediately contained objects are implied. So, if an immutable\ncontainer (like a tuple) contains a reference to a mutable object, its\nvalue changes if that mutable object is changed.\n\nTypes affect almost all aspects of object behavior. Even the\nimportance of object identity is affected in some sense: for immutable\ntypes, operations that compute new values may actually return a\nreference to any existing object with the same type and value, while\nfor mutable objects this is not allowed. 
E.g., after ``a = 1; b =\n1``, ``a`` and ``b`` may or may not refer to the same object with the\nvalue one, depending on the implementation, but after ``c = []; d =\n[]``, ``c`` and ``d`` are guaranteed to refer to two different,\nunique, newly created empty lists. (Note that ``c = d = []`` assigns\nthe same object to both ``c`` and ``d``.)\n', - 'operator-summary': u'\nSummary\n*******\n\nThe following table summarizes the operator precedences in Python,\nfrom lowest precedence (least binding) to highest precedence (most\nbinding). Operators in the same box have the same precedence. Unless\nthe syntax is explicitly given, operators are binary. Operators in\nthe same box group left to right (except for comparisons, including\ntests, which all have the same precedence and chain from left to right\n--- see section *Comparisons* --- and exponentiation, which groups\nfrom right to left).\n\n+-------------------------------------------------+---------------------------------------+\n| Operator | Description |\n+=================================================+=======================================+\n| ``lambda`` | Lambda expression |\n+-------------------------------------------------+---------------------------------------+\n| ``if`` -- ``else`` | Conditional expression |\n+-------------------------------------------------+---------------------------------------+\n| ``or`` | Boolean OR |\n+-------------------------------------------------+---------------------------------------+\n| ``and`` | Boolean AND |\n+-------------------------------------------------+---------------------------------------+\n| ``not`` *x* | Boolean NOT |\n+-------------------------------------------------+---------------------------------------+\n| ``in``, ``not`` ``in``, ``is``, ``is not``, | Comparisons, including membership |\n| ``<``, ``<=``, ``>``, ``>=``, ``<>``, ``!=``, | tests and identity tests, |\n| ``==`` | |\n+-------------------------------------------------+---------------------------------------+\n| ``|`` | Bitwise OR |\n+-------------------------------------------------+---------------------------------------+\n| ``^`` | Bitwise XOR |\n+-------------------------------------------------+---------------------------------------+\n| ``&`` | Bitwise AND |\n+-------------------------------------------------+---------------------------------------+\n| ``<<``, ``>>`` | Shifts |\n+-------------------------------------------------+---------------------------------------+\n| ``+``, ``-`` | Addition and subtraction |\n+-------------------------------------------------+---------------------------------------+\n| ``*``, ``/``, ``//``, ``%`` | Multiplication, division, remainder |\n+-------------------------------------------------+---------------------------------------+\n| ``+x``, ``-x``, ``~x`` | Positive, negative, bitwise NOT |\n+-------------------------------------------------+---------------------------------------+\n| ``**`` | Exponentiation [8] |\n+-------------------------------------------------+---------------------------------------+\n| ``x[index]``, ``x[index:index]``, | Subscription, slicing, call, |\n| ``x(arguments...)``, ``x.attribute`` | attribute reference |\n+-------------------------------------------------+---------------------------------------+\n| ``(expressions...)``, ``[expressions...]``, | Binding or tuple display, list |\n| ``{key:datum...}``, ```expressions...``` | display, dictionary display, string |\n| | conversion 
|\n+-------------------------------------------------+---------------------------------------+\n\n-[ Footnotes ]-\n\n[1] In Python 2.3 and later releases, a list comprehension "leaks" the\n control variables of each ``for`` it contains into the containing\n scope. However, this behavior is deprecated, and relying on it\n will not work in Python 3.0\n\n[2] While ``abs(x%y) < abs(y)`` is true mathematically, for floats it\n may not be true numerically due to roundoff. For example, and\n assuming a platform on which a Python float is an IEEE 754 double-\n precision number, in order that ``-1e-100 % 1e100`` have the same\n sign as ``1e100``, the computed result is ``-1e-100 + 1e100``,\n which is numerically exactly equal to ``1e100``. Function\n ``fmod()`` in the ``math`` module returns a result whose sign\n matches the sign of the first argument instead, and so returns\n ``-1e-100`` in this case. Which approach is more appropriate\n depends on the application.\n\n[3] If x is very close to an exact integer multiple of y, it\'s\n possible for ``floor(x/y)`` to be one larger than ``(x-x%y)/y``\n due to rounding. In such cases, Python returns the latter result,\n in order to preserve that ``divmod(x,y)[0] * y + x % y`` be very\n close to ``x``.\n\n[4] While comparisons between unicode strings make sense at the byte\n level, they may be counter-intuitive to users. For example, the\n strings ``u"\\u00C7"`` and ``u"\\u0043\\u0327"`` compare differently,\n even though they both represent the same unicode character (LATIN\n CAPITAL LETTER C WITH CEDILLA). To compare strings in a human\n recognizable way, compare using ``unicodedata.normalize()``.\n\n[5] The implementation computes this efficiently, without constructing\n lists or sorting.\n\n[6] Earlier versions of Python used lexicographic comparison of the\n sorted (key, value) lists, but this was very expensive for the\n common case of comparing for equality. An even earlier version of\n Python compared dictionaries by identity only, but this caused\n surprises because people expected to be able to test a dictionary\n for emptiness by comparing it to ``{}``.\n\n[7] Due to automatic garbage-collection, free lists, and the dynamic\n nature of descriptors, you may notice seemingly unusual behaviour\n in certain uses of the ``is`` operator, like those involving\n comparisons between instance methods, or constants. Check their\n documentation for more info.\n\n[8] The power operator ``**`` binds less tightly than an arithmetic or\n bitwise unary operator on its right, that is, ``2**-1`` is\n ``0.5``.\n', + 'objects': u'\nObjects, values and types\n*************************\n\n*Objects* are Python\'s abstraction for data. All data in a Python\nprogram is represented by objects or by relations between objects. (In\na sense, and in conformance to Von Neumann\'s model of a "stored\nprogram computer," code is also represented by objects.)\n\nEvery object has an identity, a type and a value. An object\'s\n*identity* never changes once it has been created; you may think of it\nas the object\'s address in memory. The \'``is``\' operator compares the\nidentity of two objects; the ``id()`` function returns an integer\nrepresenting its identity (currently implemented as its address). An\nobject\'s *type* is also unchangeable. [1] An object\'s type determines\nthe operations that the object supports (e.g., "does it have a\nlength?") and also defines the possible values for objects of that\ntype. 
The ``type()`` function returns an object\'s type (which is an\nobject itself). The *value* of some objects can change. Objects\nwhose value can change are said to be *mutable*; objects whose value\nis unchangeable once they are created are called *immutable*. (The\nvalue of an immutable container object that contains a reference to a\nmutable object can change when the latter\'s value is changed; however\nthe container is still considered immutable, because the collection of\nobjects it contains cannot be changed. So, immutability is not\nstrictly the same as having an unchangeable value, it is more subtle.)\nAn object\'s mutability is determined by its type; for instance,\nnumbers, strings and tuples are immutable, while dictionaries and\nlists are mutable.\n\nObjects are never explicitly destroyed; however, when they become\nunreachable they may be garbage-collected. An implementation is\nallowed to postpone garbage collection or omit it altogether --- it is\na matter of implementation quality how garbage collection is\nimplemented, as long as no objects are collected that are still\nreachable.\n\n**CPython implementation detail:** CPython currently uses a reference-\ncounting scheme with (optional) delayed detection of cyclically linked\ngarbage, which collects most objects as soon as they become\nunreachable, but is not guaranteed to collect garbage containing\ncircular references. See the documentation of the ``gc`` module for\ninformation on controlling the collection of cyclic garbage. Other\nimplementations act differently and CPython may change. Do not depend\non immediate finalization of objects when they become unreachable (ex:\nalways close files).\n\nNote that the use of the implementation\'s tracing or debugging\nfacilities may keep objects alive that would normally be collectable.\nAlso note that catching an exception with a \'``try``...``except``\'\nstatement may keep objects alive.\n\nSome objects contain references to "external" resources such as open\nfiles or windows. It is understood that these resources are freed\nwhen the object is garbage-collected, but since garbage collection is\nnot guaranteed to happen, such objects also provide an explicit way to\nrelease the external resource, usually a ``close()`` method. Programs\nare strongly recommended to explicitly close such objects. The\n\'``try``...``finally``\' statement provides a convenient way to do\nthis.\n\nSome objects contain references to other objects; these are called\n*containers*. Examples of containers are tuples, lists and\ndictionaries. The references are part of a container\'s value. In\nmost cases, when we talk about the value of a container, we imply the\nvalues, not the identities of the contained objects; however, when we\ntalk about the mutability of a container, only the identities of the\nimmediately contained objects are implied. So, if an immutable\ncontainer (like a tuple) contains a reference to a mutable object, its\nvalue changes if that mutable object is changed.\n\nTypes affect almost all aspects of object behavior. Even the\nimportance of object identity is affected in some sense: for immutable\ntypes, operations that compute new values may actually return a\nreference to any existing object with the same type and value, while\nfor mutable objects this is not allowed. 
E.g., after ``a = 1; b =\n1``, ``a`` and ``b`` may or may not refer to the same object with the\nvalue one, depending on the implementation, but after ``c = []; d =\n[]``, ``c`` and ``d`` are guaranteed to refer to two different,\nunique, newly created empty lists. (Note that ``c = d = []`` assigns\nthe same object to both ``c`` and ``d``.)\n', + 'operator-summary': u'\nSummary\n*******\n\nThe following table summarizes the operator precedences in Python,\nfrom lowest precedence (least binding) to highest precedence (most\nbinding). Operators in the same box have the same precedence. Unless\nthe syntax is explicitly given, operators are binary. Operators in\nthe same box group left to right (except for comparisons, including\ntests, which all have the same precedence and chain from left to right\n--- see section *Comparisons* --- and exponentiation, which groups\nfrom right to left).\n\n+-------------------------------------------------+---------------------------------------+\n| Operator | Description |\n+=================================================+=======================================+\n| ``lambda`` | Lambda expression |\n+-------------------------------------------------+---------------------------------------+\n| ``if`` -- ``else`` | Conditional expression |\n+-------------------------------------------------+---------------------------------------+\n| ``or`` | Boolean OR |\n+-------------------------------------------------+---------------------------------------+\n| ``and`` | Boolean AND |\n+-------------------------------------------------+---------------------------------------+\n| ``not`` *x* | Boolean NOT |\n+-------------------------------------------------+---------------------------------------+\n| ``in``, ``not`` ``in``, ``is``, ``is not``, | Comparisons, including membership |\n| ``<``, ``<=``, ``>``, ``>=``, ``<>``, ``!=``, | tests and identity tests, |\n| ``==`` | |\n+-------------------------------------------------+---------------------------------------+\n| ``|`` | Bitwise OR |\n+-------------------------------------------------+---------------------------------------+\n| ``^`` | Bitwise XOR |\n+-------------------------------------------------+---------------------------------------+\n| ``&`` | Bitwise AND |\n+-------------------------------------------------+---------------------------------------+\n| ``<<``, ``>>`` | Shifts |\n+-------------------------------------------------+---------------------------------------+\n| ``+``, ``-`` | Addition and subtraction |\n+-------------------------------------------------+---------------------------------------+\n| ``*``, ``/``, ``//``, ``%`` | Multiplication, division, remainder |\n| | [8] |\n+-------------------------------------------------+---------------------------------------+\n| ``+x``, ``-x``, ``~x`` | Positive, negative, bitwise NOT |\n+-------------------------------------------------+---------------------------------------+\n| ``**`` | Exponentiation [9] |\n+-------------------------------------------------+---------------------------------------+\n| ``x[index]``, ``x[index:index]``, | Subscription, slicing, call, |\n| ``x(arguments...)``, ``x.attribute`` | attribute reference |\n+-------------------------------------------------+---------------------------------------+\n| ``(expressions...)``, ``[expressions...]``, | Binding or tuple display, list |\n| ``{key:datum...}``, ```expressions...``` | display, dictionary display, string |\n| | conversion 
|\n+-------------------------------------------------+---------------------------------------+\n\n-[ Footnotes ]-\n\n[1] In Python 2.3 and later releases, a list comprehension "leaks" the\n control variables of each ``for`` it contains into the containing\n scope. However, this behavior is deprecated, and relying on it\n will not work in Python 3.0\n\n[2] While ``abs(x%y) < abs(y)`` is true mathematically, for floats it\n may not be true numerically due to roundoff. For example, and\n assuming a platform on which a Python float is an IEEE 754 double-\n precision number, in order that ``-1e-100 % 1e100`` have the same\n sign as ``1e100``, the computed result is ``-1e-100 + 1e100``,\n which is numerically exactly equal to ``1e100``. The function\n ``math.fmod()`` returns a result whose sign matches the sign of\n the first argument instead, and so returns ``-1e-100`` in this\n case. Which approach is more appropriate depends on the\n application.\n\n[3] If x is very close to an exact integer multiple of y, it\'s\n possible for ``floor(x/y)`` to be one larger than ``(x-x%y)/y``\n due to rounding. In such cases, Python returns the latter result,\n in order to preserve that ``divmod(x,y)[0] * y + x % y`` be very\n close to ``x``.\n\n[4] While comparisons between unicode strings make sense at the byte\n level, they may be counter-intuitive to users. For example, the\n strings ``u"\\u00C7"`` and ``u"\\u0043\\u0327"`` compare differently,\n even though they both represent the same unicode character (LATIN\n CAPITAL LETTER C WITH CEDILLA). To compare strings in a human\n recognizable way, compare using ``unicodedata.normalize()``.\n\n[5] The implementation computes this efficiently, without constructing\n lists or sorting.\n\n[6] Earlier versions of Python used lexicographic comparison of the\n sorted (key, value) lists, but this was very expensive for the\n common case of comparing for equality. An even earlier version of\n Python compared dictionaries by identity only, but this caused\n surprises because people expected to be able to test a dictionary\n for emptiness by comparing it to ``{}``.\n\n[7] Due to automatic garbage-collection, free lists, and the dynamic\n nature of descriptors, you may notice seemingly unusual behaviour\n in certain uses of the ``is`` operator, like those involving\n comparisons between instance methods, or constants. Check their\n documentation for more info.\n\n[8] The ``%`` operator is also used for string formatting; the same\n precedence applies.\n\n[9] The power operator ``**`` binds less tightly than an arithmetic or\n bitwise unary operator on its right, that is, ``2**-1`` is\n ``0.5``.\n', 'pass': u'\nThe ``pass`` statement\n**********************\n\n pass_stmt ::= "pass"\n\n``pass`` is a null operation --- when it is executed, nothing happens.\nIt is useful as a placeholder when a statement is required\nsyntactically, but no code needs to be executed, for example:\n\n def f(arg): pass # a function that does nothing (yet)\n\n class C: pass # a class with no methods (yet)\n', 'power': u'\nThe power operator\n******************\n\nThe power operator binds more tightly than unary operators on its\nleft; it binds less tightly than unary operators on its right. 
The\nsyntax is:\n\n power ::= primary ["**" u_expr]\n\nThus, in an unparenthesized sequence of power and unary operators, the\noperators are evaluated from right to left (this does not constrain\nthe evaluation order for the operands): ``-1**2`` results in ``-1``.\n\nThe power operator has the same semantics as the built-in ``pow()``\nfunction, when called with two arguments: it yields its left argument\nraised to the power of its right argument. The numeric arguments are\nfirst converted to a common type. The result type is that of the\narguments after coercion.\n\nWith mixed operand types, the coercion rules for binary arithmetic\noperators apply. For int and long int operands, the result has the\nsame type as the operands (after coercion) unless the second argument\nis negative; in that case, all arguments are converted to float and a\nfloat result is delivered. For example, ``10**2`` returns ``100``, but\n``10**-2`` returns ``0.01``. (This last feature was added in Python\n2.2. In Python 2.1 and before, if both arguments were of integer types\nand the second argument was negative, an exception was raised).\n\nRaising ``0.0`` to a negative power results in a\n``ZeroDivisionError``. Raising a negative number to a fractional power\nresults in a ``ValueError``.\n', 'print': u'\nThe ``print`` statement\n***********************\n\n print_stmt ::= "print" ([expression ("," expression)* [","]]\n | ">>" expression [("," expression)+ [","]])\n\n``print`` evaluates each expression in turn and writes the resulting\nobject to standard output (see below). If an object is not a string,\nit is first converted to a string using the rules for string\nconversions. The (resulting or original) string is then written. A\nspace is written before each object is (converted and) written, unless\nthe output system believes it is positioned at the beginning of a\nline. This is the case (1) when no characters have yet been written\nto standard output, (2) when the last character written to standard\noutput is a whitespace character except ``\' \'``, or (3) when the last\nwrite operation on standard output was not a ``print`` statement. (In\nsome cases it may be functional to write an empty string to standard\noutput for this reason.)\n\nNote: Objects which act like file objects but which are not the built-in\n file objects often do not properly emulate this aspect of the file\n object\'s behavior, so it is best not to rely on this.\n\nA ``\'\\n\'`` character is written at the end, unless the ``print``\nstatement ends with a comma. This is the only action if the statement\ncontains just the keyword ``print``.\n\nStandard output is defined as the file object named ``stdout`` in the\nbuilt-in module ``sys``. If no such object exists, or if it does not\nhave a ``write()`` method, a ``RuntimeError`` exception is raised.\n\n``print`` also has an extended form, defined by the second portion of\nthe syntax described above. This form is sometimes referred to as\n"``print`` chevron." In this form, the first expression after the\n``>>`` must evaluate to a "file-like" object, specifically an object\nthat has a ``write()`` method as described above. With this extended\nform, the subsequent expressions are printed to this file object. 
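A minimal sketch of both ``print`` forms, assuming Python 2 and using a ``StringIO`` buffer as a stand-in for any object with a ``write()`` method:

    from StringIO import StringIO

    buf = StringIO()
    print >> buf, "hello", "world"        # extended "print chevron" form
    assert buf.getvalue() == "hello world\n"

    print "no newline here",              # trailing comma suppresses the '\n'
    print >> None, "falls back to sys.stdout"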
If\nthe first expression evaluates to ``None``, then ``sys.stdout`` is\nused as the file for output.\n', @@ -63,21 +63,21 @@ 'shifting': u'\nShifting operations\n*******************\n\nThe shifting operations have lower priority than the arithmetic\noperations:\n\n shift_expr ::= a_expr | shift_expr ( "<<" | ">>" ) a_expr\n\nThese operators accept plain or long integers as arguments. The\narguments are converted to a common type. They shift the first\nargument to the left or right by the number of bits given by the\nsecond argument.\n\nA right shift by *n* bits is defined as division by ``pow(2, n)``. A\nleft shift by *n* bits is defined as multiplication with ``pow(2,\nn)``. Negative shift counts raise a ``ValueError`` exception.\n\nNote: In the current implementation, the right-hand operand is required to\n be at most ``sys.maxsize``. If the right-hand operand is larger\n than ``sys.maxsize`` an ``OverflowError`` exception is raised.\n', 'slicings': u'\nSlicings\n********\n\nA slicing selects a range of items in a sequence object (e.g., a\nstring, tuple or list). Slicings may be used as expressions or as\ntargets in assignment or ``del`` statements. The syntax for a\nslicing:\n\n slicing ::= simple_slicing | extended_slicing\n simple_slicing ::= primary "[" short_slice "]"\n extended_slicing ::= primary "[" slice_list "]"\n slice_list ::= slice_item ("," slice_item)* [","]\n slice_item ::= expression | proper_slice | ellipsis\n proper_slice ::= short_slice | long_slice\n short_slice ::= [lower_bound] ":" [upper_bound]\n long_slice ::= short_slice ":" [stride]\n lower_bound ::= expression\n upper_bound ::= expression\n stride ::= expression\n ellipsis ::= "..."\n\nThere is ambiguity in the formal syntax here: anything that looks like\nan expression list also looks like a slice list, so any subscription\ncan be interpreted as a slicing. Rather than further complicating the\nsyntax, this is disambiguated by defining that in this case the\ninterpretation as a subscription takes priority over the\ninterpretation as a slicing (this is the case if the slice list\ncontains no proper slice nor ellipses). Similarly, when the slice\nlist has exactly one short slice and no trailing comma, the\ninterpretation as a simple slicing takes priority over that as an\nextended slicing.\n\nThe semantics for a simple slicing are as follows. The primary must\nevaluate to a sequence object. The lower and upper bound expressions,\nif present, must evaluate to plain integers; defaults are zero and the\n``sys.maxint``, respectively. If either bound is negative, the\nsequence\'s length is added to it. The slicing now selects all items\nwith index *k* such that ``i <= k < j`` where *i* and *j* are the\nspecified lower and upper bounds. This may be an empty sequence. It\nis not an error if *i* or *j* lie outside the range of valid indexes\n(such items don\'t exist so they aren\'t selected).\n\nThe semantics for an extended slicing are as follows. The primary\nmust evaluate to a mapping object, and it is indexed with a key that\nis constructed from the slice list, as follows. If the slice list\ncontains at least one comma, the key is a tuple containing the\nconversion of the slice items; otherwise, the conversion of the lone\nslice item is the key. The conversion of a slice item that is an\nexpression is that expression. The conversion of an ellipsis slice\nitem is the built-in ``Ellipsis`` object. 
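As a minimal sketch (hypothetical ``Demo`` class, assuming Python 2), this shows what keys the slicing machinery constructs and passes to ``__getitem__()`` for the forms described here:

    class Demo(object):
        def __getitem__(self, key):
            return key                  # simply report the constructed key

    d = Demo()
    print d[1:2]        # slice(1, 2, None)
    print d[1:2:3]      # slice(1, 2, 3)
    print d[1:2, 3:4]   # (slice(1, 2, None), slice(3, 4, None))
    print d[..., 0]     # (Ellipsis, 0)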
The conversion of a proper\nslice is a slice object (see section *The standard type hierarchy*)\nwhose ``start``, ``stop`` and ``step`` attributes are the values of\nthe expressions given as lower bound, upper bound and stride,\nrespectively, substituting ``None`` for missing expressions.\n', 'specialattrs': u"\nSpecial Attributes\n******************\n\nThe implementation adds a few special read-only attributes to several\nobject types, where they are relevant. Some of these are not reported\nby the ``dir()`` built-in function.\n\nobject.__dict__\n\n A dictionary or other mapping object used to store an object's\n (writable) attributes.\n\nobject.__methods__\n\n Deprecated since version 2.2: Use the built-in function ``dir()``\n to get a list of an object's attributes. This attribute is no\n longer available.\n\nobject.__members__\n\n Deprecated since version 2.2: Use the built-in function ``dir()``\n to get a list of an object's attributes. This attribute is no\n longer available.\n\ninstance.__class__\n\n The class to which a class instance belongs.\n\nclass.__bases__\n\n The tuple of base classes of a class object.\n\nclass.__name__\n\n The name of the class or type.\n\nThe following attributes are only supported by *new-style class*es.\n\nclass.__mro__\n\n This attribute is a tuple of classes that are considered when\n looking for base classes during method resolution.\n\nclass.mro()\n\n This method can be overridden by a metaclass to customize the\n method resolution order for its instances. It is called at class\n instantiation, and its result is stored in ``__mro__``.\n\nclass.__subclasses__()\n\n Each new-style class keeps a list of weak references to its\n immediate subclasses. This method returns a list of all those\n references still alive. Example:\n\n >>> int.__subclasses__()\n []\n\n-[ Footnotes ]-\n\n[1] Additional information on these special methods may be found in\n the Python Reference Manual (*Basic customization*).\n\n[2] As a consequence, the list ``[1, 2]`` is considered equal to\n ``[1.0, 2.0]``, and similarly for tuples.\n\n[3] They must have since the parser can't tell the type of the\n operands.\n\n[4] To format only a tuple you should therefore provide a singleton\n tuple whose only element is the tuple to be formatted.\n\n[5] The advantage of leaving the newline on is that returning an empty\n string is then an unambiguous EOF indication. It is also possible\n (in cases where it might matter, for example, if you want to make\n an exact copy of a file while scanning its lines) to tell whether\n the last line of a file ended in a newline or not (yes this\n happens!).\n", - 'specialnames': u'\nSpecial method names\n********************\n\nA class can implement certain operations that are invoked by special\nsyntax (such as arithmetic operations or subscripting and slicing) by\ndefining methods with special names. This is Python\'s approach to\n*operator overloading*, allowing classes to define their own behavior\nwith respect to language operators. For instance, if a class defines\na method named ``__getitem__()``, and ``x`` is an instance of this\nclass, then ``x[i]`` is roughly equivalent to ``x.__getitem__(i)`` for\nold-style classes and ``type(x).__getitem__(x, i)`` for new-style\nclasses. 
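A minimal sketch of the special attributes listed above, using a hypothetical ``Base``/``Derived`` pair:

    class Base(object):
        pass

    class Derived(Base):
        pass

    assert Derived.__bases__ == (Base,)
    assert Derived.__name__ == 'Derived'
    assert Derived.__mro__ == (Derived, Base, object)
    assert Derived in Base.__subclasses__()

    d = Derived()
    assert d.__class__ is Derived
    d.x = 1
    assert d.__dict__ == {'x': 1}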
Except where mentioned, attempts to execute an operation\nraise an exception when no appropriate method is defined (typically\n``AttributeError`` or ``TypeError``).\n\nWhen implementing a class that emulates any built-in type, it is\nimportant that the emulation only be implemented to the degree that it\nmakes sense for the object being modelled. For example, some\nsequences may work well with retrieval of individual elements, but\nextracting a slice may not make sense. (One example of this is the\n``NodeList`` interface in the W3C\'s Document Object Model.)\n\n\nBasic customization\n===================\n\nobject.__new__(cls[, ...])\n\n Called to create a new instance of class *cls*. ``__new__()`` is a\n static method (special-cased so you need not declare it as such)\n that takes the class of which an instance was requested as its\n first argument. The remaining arguments are those passed to the\n object constructor expression (the call to the class). The return\n value of ``__new__()`` should be the new object instance (usually\n an instance of *cls*).\n\n Typical implementations create a new instance of the class by\n invoking the superclass\'s ``__new__()`` method using\n ``super(currentclass, cls).__new__(cls[, ...])`` with appropriate\n arguments and then modifying the newly-created instance as\n necessary before returning it.\n\n If ``__new__()`` returns an instance of *cls*, then the new\n instance\'s ``__init__()`` method will be invoked like\n ``__init__(self[, ...])``, where *self* is the new instance and the\n remaining arguments are the same as were passed to ``__new__()``.\n\n If ``__new__()`` does not return an instance of *cls*, then the new\n instance\'s ``__init__()`` method will not be invoked.\n\n ``__new__()`` is intended mainly to allow subclasses of immutable\n types (like int, str, or tuple) to customize instance creation. It\n is also commonly overridden in custom metaclasses in order to\n customize class creation.\n\nobject.__init__(self[, ...])\n\n Called when the instance is created. The arguments are those\n passed to the class constructor expression. If a base class has an\n ``__init__()`` method, the derived class\'s ``__init__()`` method,\n if any, must explicitly call it to ensure proper initialization of\n the base class part of the instance; for example:\n ``BaseClass.__init__(self, [args...])``. As a special constraint\n on constructors, no value may be returned; doing so will cause a\n ``TypeError`` to be raised at runtime.\n\nobject.__del__(self)\n\n Called when the instance is about to be destroyed. This is also\n called a destructor. If a base class has a ``__del__()`` method,\n the derived class\'s ``__del__()`` method, if any, must explicitly\n call it to ensure proper deletion of the base class part of the\n instance. Note that it is possible (though not recommended!) for\n the ``__del__()`` method to postpone destruction of the instance by\n creating a new reference to it. It may then be called at a later\n time when this new reference is deleted. It is not guaranteed that\n ``__del__()`` methods are called for objects that still exist when\n the interpreter exits.\n\n Note: ``del x`` doesn\'t directly call ``x.__del__()`` --- the former\n decrements the reference count for ``x`` by one, and the latter\n is only called when ``x``\'s reference count reaches zero. 
Some\n common situations that may prevent the reference count of an\n object from going to zero include: circular references between\n objects (e.g., a doubly-linked list or a tree data structure with\n parent and child pointers); a reference to the object on the\n stack frame of a function that caught an exception (the traceback\n stored in ``sys.exc_traceback`` keeps the stack frame alive); or\n a reference to the object on the stack frame that raised an\n unhandled exception in interactive mode (the traceback stored in\n ``sys.last_traceback`` keeps the stack frame alive). The first\n situation can only be remedied by explicitly breaking the cycles;\n the latter two situations can be resolved by storing ``None`` in\n ``sys.exc_traceback`` or ``sys.last_traceback``. Circular\n references which are garbage are detected when the option cycle\n detector is enabled (it\'s on by default), but can only be cleaned\n up if there are no Python-level ``__del__()`` methods involved.\n Refer to the documentation for the ``gc`` module for more\n information about how ``__del__()`` methods are handled by the\n cycle detector, particularly the description of the ``garbage``\n value.\n\n Warning: Due to the precarious circumstances under which ``__del__()``\n methods are invoked, exceptions that occur during their execution\n are ignored, and a warning is printed to ``sys.stderr`` instead.\n Also, when ``__del__()`` is invoked in response to a module being\n deleted (e.g., when execution of the program is done), other\n globals referenced by the ``__del__()`` method may already have\n been deleted or in the process of being torn down (e.g. the\n import machinery shutting down). For this reason, ``__del__()``\n methods should do the absolute minimum needed to maintain\n external invariants. Starting with version 1.5, Python\n guarantees that globals whose name begins with a single\n underscore are deleted from their module before other globals are\n deleted; if no other references to such globals exist, this may\n help in assuring that imported modules are still available at the\n time when the ``__del__()`` method is called.\n\nobject.__repr__(self)\n\n Called by the ``repr()`` built-in function and by string\n conversions (reverse quotes) to compute the "official" string\n representation of an object. If at all possible, this should look\n like a valid Python expression that could be used to recreate an\n object with the same value (given an appropriate environment). If\n this is not possible, a string of the form ``<...some useful\n description...>`` should be returned. The return value must be a\n string object. If a class defines ``__repr__()`` but not\n ``__str__()``, then ``__repr__()`` is also used when an "informal"\n string representation of instances of that class is required.\n\n This is typically used for debugging, so it is important that the\n representation is information-rich and unambiguous.\n\nobject.__str__(self)\n\n Called by the ``str()`` built-in function and by the ``print``\n statement to compute the "informal" string representation of an\n object. This differs from ``__repr__()`` in that it does not have\n to be a valid Python expression: a more convenient or concise\n representation may be used instead. 
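A minimal sketch tying ``__new__()``, ``__repr__()`` and ``__str__()`` together; the ``Celsius`` subclass of ``float`` is hypothetical:

    class Celsius(float):
        """__new__ customizes creation of the immutable value;
        __repr__ and __str__ provide the two string forms."""
        def __new__(cls, value):
            return super(Celsius, cls).__new__(cls, value)
        def __repr__(self):
            return 'Celsius(%r)' % float(self)      # "official", unambiguous
        def __str__(self):
            return '%g degrees C' % float(self)     # "informal", used by print

    t = Celsius(21.5)
    assert isinstance(t, float) and t == 21.5
    print repr(t)   # Celsius(21.5)
    print t         # 21.5 degrees C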
The return value must be a\n string object.\n\nobject.__lt__(self, other)\nobject.__le__(self, other)\nobject.__eq__(self, other)\nobject.__ne__(self, other)\nobject.__gt__(self, other)\nobject.__ge__(self, other)\n\n New in version 2.1.\n\n These are the so-called "rich comparison" methods, and are called\n for comparison operators in preference to ``__cmp__()`` below. The\n correspondence between operator symbols and method names is as\n follows: ``xy`` call ``x.__ne__(y)``, ``x>y`` calls ``x.__gt__(y)``, and\n ``x>=y`` calls ``x.__ge__(y)``.\n\n A rich comparison method may return the singleton\n ``NotImplemented`` if it does not implement the operation for a\n given pair of arguments. By convention, ``False`` and ``True`` are\n returned for a successful comparison. However, these methods can\n return any value, so if the comparison operator is used in a\n Boolean context (e.g., in the condition of an ``if`` statement),\n Python will call ``bool()`` on the value to determine if the result\n is true or false.\n\n There are no implied relationships among the comparison operators.\n The truth of ``x==y`` does not imply that ``x!=y`` is false.\n Accordingly, when defining ``__eq__()``, one should also define\n ``__ne__()`` so that the operators will behave as expected. See\n the paragraph on ``__hash__()`` for some important notes on\n creating *hashable* objects which support custom comparison\n operations and are usable as dictionary keys.\n\n There are no swapped-argument versions of these methods (to be used\n when the left argument does not support the operation but the right\n argument does); rather, ``__lt__()`` and ``__gt__()`` are each\n other\'s reflection, ``__le__()`` and ``__ge__()`` are each other\'s\n reflection, and ``__eq__()`` and ``__ne__()`` are their own\n reflection.\n\n Arguments to rich comparison methods are never coerced.\n\n To automatically generate ordering operations from a single root\n operation, see ``functools.total_ordering()``.\n\nobject.__cmp__(self, other)\n\n Called by comparison operations if rich comparison (see above) is\n not defined. Should return a negative integer if ``self < other``,\n zero if ``self == other``, a positive integer if ``self > other``.\n If no ``__cmp__()``, ``__eq__()`` or ``__ne__()`` operation is\n defined, class instances are compared by object identity\n ("address"). See also the description of ``__hash__()`` for some\n important notes on creating *hashable* objects which support custom\n comparison operations and are usable as dictionary keys. (Note: the\n restriction that exceptions are not propagated by ``__cmp__()`` has\n been removed since Python 1.5.)\n\nobject.__rcmp__(self, other)\n\n Changed in version 2.1: No longer supported.\n\nobject.__hash__(self)\n\n Called by built-in function ``hash()`` and for operations on\n members of hashed collections including ``set``, ``frozenset``, and\n ``dict``. ``__hash__()`` should return an integer. The only\n required property is that objects which compare equal have the same\n hash value; it is advised to somehow mix together (e.g. using\n exclusive or) the hash values for the components of the object that\n also play a part in comparison of objects.\n\n If a class does not define a ``__cmp__()`` or ``__eq__()`` method\n it should not define a ``__hash__()`` operation either; if it\n defines ``__cmp__()`` or ``__eq__()`` but not ``__hash__()``, its\n instances will not be usable in hashed collections. 
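A minimal sketch of keeping ``__eq__()``, ``__ne__()`` and ``__hash__()`` consistent so that instances work as dictionary and set keys; the ``Point`` class is hypothetical:

    class Point(object):
        def __init__(self, x, y):
            self.x, self.y = x, y
        def __eq__(self, other):
            if not isinstance(other, Point):
                return NotImplemented
            return (self.x, self.y) == (other.x, other.y)
        def __ne__(self, other):
            result = self.__eq__(other)
            return result if result is NotImplemented else not result
        def __hash__(self):
            return hash((self.x, self.y))   # equal points must hash equal

    assert Point(1, 2) == Point(1, 2)
    assert len(set([Point(1, 2), Point(1, 2)])) == 1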
If a class\n defines mutable objects and implements a ``__cmp__()`` or\n ``__eq__()`` method, it should not implement ``__hash__()``, since\n hashable collection implementations require that a object\'s hash\n value is immutable (if the object\'s hash value changes, it will be\n in the wrong hash bucket).\n\n User-defined classes have ``__cmp__()`` and ``__hash__()`` methods\n by default; with them, all objects compare unequal (except with\n themselves) and ``x.__hash__()`` returns ``id(x)``.\n\n Classes which inherit a ``__hash__()`` method from a parent class\n but change the meaning of ``__cmp__()`` or ``__eq__()`` such that\n the hash value returned is no longer appropriate (e.g. by switching\n to a value-based concept of equality instead of the default\n identity based equality) can explicitly flag themselves as being\n unhashable by setting ``__hash__ = None`` in the class definition.\n Doing so means that not only will instances of the class raise an\n appropriate ``TypeError`` when a program attempts to retrieve their\n hash value, but they will also be correctly identified as\n unhashable when checking ``isinstance(obj, collections.Hashable)``\n (unlike classes which define their own ``__hash__()`` to explicitly\n raise ``TypeError``).\n\n Changed in version 2.5: ``__hash__()`` may now also return a long\n integer object; the 32-bit integer is then derived from the hash of\n that object.\n\n Changed in version 2.6: ``__hash__`` may now be set to ``None`` to\n explicitly flag instances of a class as unhashable.\n\nobject.__nonzero__(self)\n\n Called to implement truth value testing and the built-in operation\n ``bool()``; should return ``False`` or ``True``, or their integer\n equivalents ``0`` or ``1``. When this method is not defined,\n ``__len__()`` is called, if it is defined, and the object is\n considered true if its result is nonzero. If a class defines\n neither ``__len__()`` nor ``__nonzero__()``, all its instances are\n considered true.\n\nobject.__unicode__(self)\n\n Called to implement ``unicode()`` built-in; should return a Unicode\n object. When this method is not defined, string conversion is\n attempted, and the result of string conversion is converted to\n Unicode using the system default encoding.\n\n\nCustomizing attribute access\n============================\n\nThe following methods can be defined to customize the meaning of\nattribute access (use of, assignment to, or deletion of ``x.name``)\nfor class instances.\n\nobject.__getattr__(self, name)\n\n Called when an attribute lookup has not found the attribute in the\n usual places (i.e. it is not an instance attribute nor is it found\n in the class tree for ``self``). ``name`` is the attribute name.\n This method should return the (computed) attribute value or raise\n an ``AttributeError`` exception.\n\n Note that if the attribute is found through the normal mechanism,\n ``__getattr__()`` is not called. (This is an intentional asymmetry\n between ``__getattr__()`` and ``__setattr__()``.) This is done both\n for efficiency reasons and because otherwise ``__getattr__()``\n would have no way to access other attributes of the instance. Note\n that at least for instance variables, you can fake total control by\n not inserting any values in the instance attribute dictionary (but\n instead inserting them in another object). 
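A minimal sketch of ``__getattr__()`` and ``__setattr__()`` working together; the ``Tracked`` class is hypothetical and delegates the actual store to ``object.__setattr__()`` to avoid recursion:

    class Tracked(object):
        def __setattr__(self, name, value):
            print "setting %s = %r" % (name, value)
            object.__setattr__(self, name, value)   # not self.name = value
        def __getattr__(self, name):
            # only reached when normal lookup fails
            return 'default for %s' % name

    t = Tracked()
    t.x = 1                                  # prints: setting x = 1
    assert t.x == 1                          # found normally; __getattr__ not called
    assert t.missing == 'default for missing'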
See the\n ``__getattribute__()`` method below for a way to actually get total\n control in new-style classes.\n\nobject.__setattr__(self, name, value)\n\n Called when an attribute assignment is attempted. This is called\n instead of the normal mechanism (i.e. store the value in the\n instance dictionary). *name* is the attribute name, *value* is the\n value to be assigned to it.\n\n If ``__setattr__()`` wants to assign to an instance attribute, it\n should not simply execute ``self.name = value`` --- this would\n cause a recursive call to itself. Instead, it should insert the\n value in the dictionary of instance attributes, e.g.,\n ``self.__dict__[name] = value``. For new-style classes, rather\n than accessing the instance dictionary, it should call the base\n class method with the same name, for example,\n ``object.__setattr__(self, name, value)``.\n\nobject.__delattr__(self, name)\n\n Like ``__setattr__()`` but for attribute deletion instead of\n assignment. This should only be implemented if ``del obj.name`` is\n meaningful for the object.\n\n\nMore attribute access for new-style classes\n-------------------------------------------\n\nThe following methods only apply to new-style classes.\n\nobject.__getattribute__(self, name)\n\n Called unconditionally to implement attribute accesses for\n instances of the class. If the class also defines\n ``__getattr__()``, the latter will not be called unless\n ``__getattribute__()`` either calls it explicitly or raises an\n ``AttributeError``. This method should return the (computed)\n attribute value or raise an ``AttributeError`` exception. In order\n to avoid infinite recursion in this method, its implementation\n should always call the base class method with the same name to\n access any attributes it needs, for example,\n ``object.__getattribute__(self, name)``.\n\n Note: This method may still be bypassed when looking up special methods\n as the result of implicit invocation via language syntax or\n built-in functions. See *Special method lookup for new-style\n classes*.\n\n\nImplementing Descriptors\n------------------------\n\nThe following methods only apply when an instance of the class\ncontaining the method (a so-called *descriptor* class) appears in the\nclass dictionary of another new-style class, known as the *owner*\nclass. In the examples below, "the attribute" refers to the attribute\nwhose name is the key of the property in the owner class\'\n``__dict__``. Descriptors can only be implemented as new-style\nclasses themselves.\n\nobject.__get__(self, instance, owner)\n\n Called to get the attribute of the owner class (class attribute\n access) or of an instance of that class (instance attribute\n access). *owner* is always the owner class, while *instance* is the\n instance that the attribute was accessed through, or ``None`` when\n the attribute is accessed through the *owner*. This method should\n return the (computed) attribute value or raise an\n ``AttributeError`` exception.\n\nobject.__set__(self, instance, value)\n\n Called to set the attribute on an instance *instance* of the owner\n class to a new value, *value*.\n\nobject.__delete__(self, instance)\n\n Called to delete the attribute on an instance *instance* of the\n owner class.\n\n\nInvoking Descriptors\n--------------------\n\nIn general, a descriptor is an object attribute with "binding\nbehavior", one whose attribute access has been overridden by methods\nin the descriptor protocol: ``__get__()``, ``__set__()``, and\n``__delete__()``. 
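A minimal sketch of a data descriptor (one that defines both ``__get__()`` and ``__set__()``); the ``Positive``/``Account`` names are hypothetical:

    class Positive(object):
        """Stores the value in the instance __dict__ and validates writes."""
        def __init__(self, name):
            self.name = name
        def __get__(self, instance, owner):
            if instance is None:
                return self                  # class access returns the descriptor
            return instance.__dict__[self.name]
        def __set__(self, instance, value):
            if value <= 0:
                raise ValueError("must be positive")
            instance.__dict__[self.name] = value

    class Account(object):
        balance = Positive('balance')

    a = Account()
    a.balance = 10          # goes through Positive.__set__
    assert a.balance == 10  # goes through Positive.__get__

Because ``Positive`` is a data descriptor, the class-level entry always takes precedence over the instance dictionary, which is what makes the validation in ``__set__()`` unavoidable.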
If any of those methods are defined for an object,\nit is said to be a descriptor.\n\nThe default behavior for attribute access is to get, set, or delete\nthe attribute from an object\'s dictionary. For instance, ``a.x`` has a\nlookup chain starting with ``a.__dict__[\'x\']``, then\n``type(a).__dict__[\'x\']``, and continuing through the base classes of\n``type(a)`` excluding metaclasses.\n\nHowever, if the looked-up value is an object defining one of the\ndescriptor methods, then Python may override the default behavior and\ninvoke the descriptor method instead. Where this occurs in the\nprecedence chain depends on which descriptor methods were defined and\nhow they were called. Note that descriptors are only invoked for new\nstyle objects or classes (ones that subclass ``object()`` or\n``type()``).\n\nThe starting point for descriptor invocation is a binding, ``a.x``.\nHow the arguments are assembled depends on ``a``:\n\nDirect Call\n The simplest and least common call is when user code directly\n invokes a descriptor method: ``x.__get__(a)``.\n\nInstance Binding\n If binding to a new-style object instance, ``a.x`` is transformed\n into the call: ``type(a).__dict__[\'x\'].__get__(a, type(a))``.\n\nClass Binding\n If binding to a new-style class, ``A.x`` is transformed into the\n call: ``A.__dict__[\'x\'].__get__(None, A)``.\n\nSuper Binding\n If ``a`` is an instance of ``super``, then the binding ``super(B,\n obj).m()`` searches ``obj.__class__.__mro__`` for the base class\n ``A`` immediately preceding ``B`` and then invokes the descriptor\n with the call: ``A.__dict__[\'m\'].__get__(obj, A)``.\n\nFor instance bindings, the precedence of descriptor invocation depends\non the which descriptor methods are defined. A descriptor can define\nany combination of ``__get__()``, ``__set__()`` and ``__delete__()``.\nIf it does not define ``__get__()``, then accessing the attribute will\nreturn the descriptor object itself unless there is a value in the\nobject\'s instance dictionary. If the descriptor defines ``__set__()``\nand/or ``__delete__()``, it is a data descriptor; if it defines\nneither, it is a non-data descriptor. Normally, data descriptors\ndefine both ``__get__()`` and ``__set__()``, while non-data\ndescriptors have just the ``__get__()`` method. Data descriptors with\n``__set__()`` and ``__get__()`` defined always override a redefinition\nin an instance dictionary. In contrast, non-data descriptors can be\noverridden by instances.\n\nPython methods (including ``staticmethod()`` and ``classmethod()``)\nare implemented as non-data descriptors. Accordingly, instances can\nredefine and override methods. This allows individual instances to\nacquire behaviors that differ from other instances of the same class.\n\nThe ``property()`` function is implemented as a data descriptor.\nAccordingly, instances cannot override the behavior of a property.\n\n\n__slots__\n---------\n\nBy default, instances of both old and new-style classes have a\ndictionary for attribute storage. This wastes space for objects\nhaving very few instance variables. The space consumption can become\nacute when creating large numbers of instances.\n\nThe default can be overridden by defining *__slots__* in a new-style\nclass definition. The *__slots__* declaration takes a sequence of\ninstance variables and reserves just enough space in each instance to\nhold a value for each variable. 
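A minimal sketch of a *__slots__* declaration and its effect; the ``Pixel`` class is hypothetical:

    class Pixel(object):
        __slots__ = ('x', 'y')          # no per-instance __dict__ is created

        def __init__(self, x, y):
            self.x = x
            self.y = y

    p = Pixel(1, 2)
    assert not hasattr(p, '__dict__')
    try:
        p.z = 3                         # 'z' is not listed in __slots__
    except AttributeError:
        pass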
Space is saved because *__dict__* is\nnot created for each instance.\n\n__slots__\n\n This class variable can be assigned a string, iterable, or sequence\n of strings with variable names used by instances. If defined in a\n new-style class, *__slots__* reserves space for the declared\n variables and prevents the automatic creation of *__dict__* and\n *__weakref__* for each instance.\n\n New in version 2.2.\n\nNotes on using *__slots__*\n\n* When inheriting from a class without *__slots__*, the *__dict__*\n attribute of that class will always be accessible, so a *__slots__*\n definition in the subclass is meaningless.\n\n* Without a *__dict__* variable, instances cannot be assigned new\n variables not listed in the *__slots__* definition. Attempts to\n assign to an unlisted variable name raises ``AttributeError``. If\n dynamic assignment of new variables is desired, then add\n ``\'__dict__\'`` to the sequence of strings in the *__slots__*\n declaration.\n\n Changed in version 2.3: Previously, adding ``\'__dict__\'`` to the\n *__slots__* declaration would not enable the assignment of new\n attributes not specifically listed in the sequence of instance\n variable names.\n\n* Without a *__weakref__* variable for each instance, classes defining\n *__slots__* do not support weak references to its instances. If weak\n reference support is needed, then add ``\'__weakref__\'`` to the\n sequence of strings in the *__slots__* declaration.\n\n Changed in version 2.3: Previously, adding ``\'__weakref__\'`` to the\n *__slots__* declaration would not enable support for weak\n references.\n\n* *__slots__* are implemented at the class level by creating\n descriptors (*Implementing Descriptors*) for each variable name. As\n a result, class attributes cannot be used to set default values for\n instance variables defined by *__slots__*; otherwise, the class\n attribute would overwrite the descriptor assignment.\n\n* The action of a *__slots__* declaration is limited to the class\n where it is defined. As a result, subclasses will have a *__dict__*\n unless they also define *__slots__* (which must only contain names\n of any *additional* slots).\n\n* If a class defines a slot also defined in a base class, the instance\n variable defined by the base class slot is inaccessible (except by\n retrieving its descriptor directly from the base class). This\n renders the meaning of the program undefined. In the future, a\n check may be added to prevent this.\n\n* Nonempty *__slots__* does not work for classes derived from\n "variable-length" built-in types such as ``long``, ``str`` and\n ``tuple``.\n\n* Any non-string iterable may be assigned to *__slots__*. Mappings may\n also be used; however, in the future, special meaning may be\n assigned to the values corresponding to each key.\n\n* *__class__* assignment works only if both classes have the same\n *__slots__*.\n\n Changed in version 2.6: Previously, *__class__* assignment raised an\n error if either new or old class had *__slots__*.\n\n\nCustomizing class creation\n==========================\n\nBy default, new-style classes are constructed using ``type()``. A\nclass definition is read into a separate namespace and the value of\nclass name is bound to the result of ``type(name, bases, dict)``.\n\nWhen the class definition is read, if *__metaclass__* is defined then\nthe callable assigned to it will be called instead of ``type()``. 
This\nallows classes or functions to be written which monitor or alter the\nclass creation process:\n\n* Modifying the class dictionary prior to the class being created.\n\n* Returning an instance of another class -- essentially performing the\n role of a factory function.\n\nThese steps will have to be performed in the metaclass\'s ``__new__()``\nmethod -- ``type.__new__()`` can then be called from this method to\ncreate a class with different properties. This example adds a new\nelement to the class dictionary before creating the class:\n\n class metacls(type):\n def __new__(mcs, name, bases, dict):\n dict[\'foo\'] = \'metacls was here\'\n return type.__new__(mcs, name, bases, dict)\n\nYou can of course also override other class methods (or add new\nmethods); for example defining a custom ``__call__()`` method in the\nmetaclass allows custom behavior when the class is called, e.g. not\nalways creating a new instance.\n\n__metaclass__\n\n This variable can be any callable accepting arguments for ``name``,\n ``bases``, and ``dict``. Upon class creation, the callable is used\n instead of the built-in ``type()``.\n\n New in version 2.2.\n\nThe appropriate metaclass is determined by the following precedence\nrules:\n\n* If ``dict[\'__metaclass__\']`` exists, it is used.\n\n* Otherwise, if there is at least one base class, its metaclass is\n used (this looks for a *__class__* attribute first and if not found,\n uses its type).\n\n* Otherwise, if a global variable named __metaclass__ exists, it is\n used.\n\n* Otherwise, the old-style, classic metaclass (types.ClassType) is\n used.\n\nThe potential uses for metaclasses are boundless. Some ideas that have\nbeen explored including logging, interface checking, automatic\ndelegation, automatic property creation, proxies, frameworks, and\nautomatic resource locking/synchronization.\n\n\nCustomizing instance and subclass checks\n========================================\n\nNew in version 2.6.\n\nThe following methods are used to override the default behavior of the\n``isinstance()`` and ``issubclass()`` built-in functions.\n\nIn particular, the metaclass ``abc.ABCMeta`` implements these methods\nin order to allow the addition of Abstract Base Classes (ABCs) as\n"virtual base classes" to any class or type (including built-in\ntypes), including other ABCs.\n\nclass.__instancecheck__(self, instance)\n\n Return true if *instance* should be considered a (direct or\n indirect) instance of *class*. If defined, called to implement\n ``isinstance(instance, class)``.\n\nclass.__subclasscheck__(self, subclass)\n\n Return true if *subclass* should be considered a (direct or\n indirect) subclass of *class*. If defined, called to implement\n ``issubclass(subclass, class)``.\n\nNote that these methods are looked up on the type (metaclass) of a\nclass. 
They cannot be defined as class methods in the actual class.\nThis is consistent with the lookup of special methods that are called\non instances, only in this case the instance is itself a class.\n\nSee also:\n\n **PEP 3119** - Introducing Abstract Base Classes\n Includes the specification for customizing ``isinstance()`` and\n ``issubclass()`` behavior through ``__instancecheck__()`` and\n ``__subclasscheck__()``, with motivation for this functionality\n in the context of adding Abstract Base Classes (see the ``abc``\n module) to the language.\n\n\nEmulating callable objects\n==========================\n\nobject.__call__(self[, args...])\n\n Called when the instance is "called" as a function; if this method\n is defined, ``x(arg1, arg2, ...)`` is a shorthand for\n ``x.__call__(arg1, arg2, ...)``.\n\n\nEmulating container types\n=========================\n\nThe following methods can be defined to implement container objects.\nContainers usually are sequences (such as lists or tuples) or mappings\n(like dictionaries), but can represent other containers as well. The\nfirst set of methods is used either to emulate a sequence or to\nemulate a mapping; the difference is that for a sequence, the\nallowable keys should be the integers *k* for which ``0 <= k < N``\nwhere *N* is the length of the sequence, or slice objects, which\ndefine a range of items. (For backwards compatibility, the method\n``__getslice__()`` (see below) can also be defined to handle simple,\nbut not extended slices.) It is also recommended that mappings provide\nthe methods ``keys()``, ``values()``, ``items()``, ``has_key()``,\n``get()``, ``clear()``, ``setdefault()``, ``iterkeys()``,\n``itervalues()``, ``iteritems()``, ``pop()``, ``popitem()``,\n``copy()``, and ``update()`` behaving similar to those for Python\'s\nstandard dictionary objects. The ``UserDict`` module provides a\n``DictMixin`` class to help create those methods from a base set of\n``__getitem__()``, ``__setitem__()``, ``__delitem__()``, and\n``keys()``. Mutable sequences should provide methods ``append()``,\n``count()``, ``index()``, ``extend()``, ``insert()``, ``pop()``,\n``remove()``, ``reverse()`` and ``sort()``, like Python standard list\nobjects. Finally, sequence types should implement addition (meaning\nconcatenation) and multiplication (meaning repetition) by defining the\nmethods ``__add__()``, ``__radd__()``, ``__iadd__()``, ``__mul__()``,\n``__rmul__()`` and ``__imul__()`` described below; they should not\ndefine ``__coerce__()`` or other numerical operators. It is\nrecommended that both mappings and sequences implement the\n``__contains__()`` method to allow efficient use of the ``in``\noperator; for mappings, ``in`` should be equivalent of ``has_key()``;\nfor sequences, it should search through the values. It is further\nrecommended that both mappings and sequences implement the\n``__iter__()`` method to allow efficient iteration through the\ncontainer; for mappings, ``__iter__()`` should be the same as\n``iterkeys()``; for sequences, it should iterate through the values.\n\nobject.__len__(self)\n\n Called to implement the built-in function ``len()``. Should return\n the length of the object, an integer ``>=`` 0. Also, an object\n that doesn\'t define a ``__nonzero__()`` method and whose\n ``__len__()`` method returns zero is considered to be false in a\n Boolean context.\n\nobject.__getitem__(self, key)\n\n Called to implement evaluation of ``self[key]``. 
For sequence\n types, the accepted keys should be integers and slice objects.\n Note that the special interpretation of negative indexes (if the\n class wishes to emulate a sequence type) is up to the\n ``__getitem__()`` method. If *key* is of an inappropriate type,\n ``TypeError`` may be raised; if of a value outside the set of\n indexes for the sequence (after any special interpretation of\n negative values), ``IndexError`` should be raised. For mapping\n types, if *key* is missing (not in the container), ``KeyError``\n should be raised.\n\n Note: ``for`` loops expect that an ``IndexError`` will be raised for\n illegal indexes to allow proper detection of the end of the\n sequence.\n\nobject.__setitem__(self, key, value)\n\n Called to implement assignment to ``self[key]``. Same note as for\n ``__getitem__()``. This should only be implemented for mappings if\n the objects support changes to the values for keys, or if new keys\n can be added, or for sequences if elements can be replaced. The\n same exceptions should be raised for improper *key* values as for\n the ``__getitem__()`` method.\n\nobject.__delitem__(self, key)\n\n Called to implement deletion of ``self[key]``. Same note as for\n ``__getitem__()``. This should only be implemented for mappings if\n the objects support removal of keys, or for sequences if elements\n can be removed from the sequence. The same exceptions should be\n raised for improper *key* values as for the ``__getitem__()``\n method.\n\nobject.__iter__(self)\n\n This method is called when an iterator is required for a container.\n This method should return a new iterator object that can iterate\n over all the objects in the container. For mappings, it should\n iterate over the keys of the container, and should also be made\n available as the method ``iterkeys()``.\n\n Iterator objects also need to implement this method; they are\n required to return themselves. For more information on iterator\n objects, see *Iterator Types*.\n\nobject.__reversed__(self)\n\n Called (if present) by the ``reversed()`` built-in to implement\n reverse iteration. It should return a new iterator object that\n iterates over all the objects in the container in reverse order.\n\n If the ``__reversed__()`` method is not provided, the\n ``reversed()`` built-in will fall back to using the sequence\n protocol (``__len__()`` and ``__getitem__()``). Objects that\n support the sequence protocol should only provide\n ``__reversed__()`` if they can provide an implementation that is\n more efficient than the one provided by ``reversed()``.\n\n New in version 2.6.\n\nThe membership test operators (``in`` and ``not in``) are normally\nimplemented as an iteration through a sequence. However, container\nobjects can supply the following special method with a more efficient\nimplementation, which also does not require the object be a sequence.\n\nobject.__contains__(self, item)\n\n Called to implement membership test operators. Should return true\n if *item* is in *self*, false otherwise. 
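A minimal sketch of a sequence-like container implementing several of the methods above; the ``Deck`` class is hypothetical:

    class Deck(object):
        def __init__(self, cards):
            self._cards = list(cards)
        def __len__(self):
            return len(self._cards)
        def __getitem__(self, index):
            return self._cards[index]    # IndexError propagates for bad indexes
        def __iter__(self):
            return iter(self._cards)
        def __contains__(self, card):
            return card in self._cards

    d = Deck(['7H', 'QS', 'AC'])
    assert len(d) == 3 and d[0] == '7H'
    assert 'QS' in d
    assert [c for c in d] == ['7H', 'QS', 'AC']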
For mapping objects, this\n should consider the keys of the mapping rather than the values or\n the key-item pairs.\n\n For objects that don\'t define ``__contains__()``, the membership\n test first tries iteration via ``__iter__()``, then the old\n sequence iteration protocol via ``__getitem__()``, see *this\n section in the language reference*.\n\n\nAdditional methods for emulation of sequence types\n==================================================\n\nThe following optional methods can be defined to further emulate\nsequence objects. Immutable sequences methods should at most only\ndefine ``__getslice__()``; mutable sequences might define all three\nmethods.\n\nobject.__getslice__(self, i, j)\n\n Deprecated since version 2.0: Support slice objects as parameters\n to the ``__getitem__()`` method. (However, built-in types in\n CPython currently still implement ``__getslice__()``. Therefore,\n you have to override it in derived classes when implementing\n slicing.)\n\n Called to implement evaluation of ``self[i:j]``. The returned\n object should be of the same type as *self*. Note that missing *i*\n or *j* in the slice expression are replaced by zero or\n ``sys.maxint``, respectively. If negative indexes are used in the\n slice, the length of the sequence is added to that index. If the\n instance does not implement the ``__len__()`` method, an\n ``AttributeError`` is raised. No guarantee is made that indexes\n adjusted this way are not still negative. Indexes which are\n greater than the length of the sequence are not modified. If no\n ``__getslice__()`` is found, a slice object is created instead, and\n passed to ``__getitem__()`` instead.\n\nobject.__setslice__(self, i, j, sequence)\n\n Called to implement assignment to ``self[i:j]``. Same notes for *i*\n and *j* as for ``__getslice__()``.\n\n This method is deprecated. If no ``__setslice__()`` is found, or\n for extended slicing of the form ``self[i:j:k]``, a slice object is\n created, and passed to ``__setitem__()``, instead of\n ``__setslice__()`` being called.\n\nobject.__delslice__(self, i, j)\n\n Called to implement deletion of ``self[i:j]``. Same notes for *i*\n and *j* as for ``__getslice__()``. This method is deprecated. If no\n ``__delslice__()`` is found, or for extended slicing of the form\n ``self[i:j:k]``, a slice object is created, and passed to\n ``__delitem__()``, instead of ``__delslice__()`` being called.\n\nNotice that these methods are only invoked when a single slice with a\nsingle colon is used, and the slice method is available. 
For slice\noperations involving extended slice notation, or in absence of the\nslice methods, ``__getitem__()``, ``__setitem__()`` or\n``__delitem__()`` is called with a slice object as argument.\n\nThe following example demonstrate how to make your program or module\ncompatible with earlier versions of Python (assuming that methods\n``__getitem__()``, ``__setitem__()`` and ``__delitem__()`` support\nslice objects as arguments):\n\n class MyClass:\n ...\n def __getitem__(self, index):\n ...\n def __setitem__(self, index, value):\n ...\n def __delitem__(self, index):\n ...\n\n if sys.version_info < (2, 0):\n # They won\'t be defined if version is at least 2.0 final\n\n def __getslice__(self, i, j):\n return self[max(0, i):max(0, j):]\n def __setslice__(self, i, j, seq):\n self[max(0, i):max(0, j):] = seq\n def __delslice__(self, i, j):\n del self[max(0, i):max(0, j):]\n ...\n\nNote the calls to ``max()``; these are necessary because of the\nhandling of negative indices before the ``__*slice__()`` methods are\ncalled. When negative indexes are used, the ``__*item__()`` methods\nreceive them as provided, but the ``__*slice__()`` methods get a\n"cooked" form of the index values. For each negative index value, the\nlength of the sequence is added to the index before calling the method\n(which may still result in a negative index); this is the customary\nhandling of negative indexes by the built-in sequence types, and the\n``__*item__()`` methods are expected to do this as well. However,\nsince they should already be doing that, negative indexes cannot be\npassed in; they must be constrained to the bounds of the sequence\nbefore being passed to the ``__*item__()`` methods. Calling ``max(0,\ni)`` conveniently returns the proper value.\n\n\nEmulating numeric types\n=======================\n\nThe following methods can be defined to emulate numeric objects.\nMethods corresponding to operations that are not supported by the\nparticular kind of number implemented (e.g., bitwise operations for\nnon-integral numbers) should be left undefined.\n\nobject.__add__(self, other)\nobject.__sub__(self, other)\nobject.__mul__(self, other)\nobject.__floordiv__(self, other)\nobject.__mod__(self, other)\nobject.__divmod__(self, other)\nobject.__pow__(self, other[, modulo])\nobject.__lshift__(self, other)\nobject.__rshift__(self, other)\nobject.__and__(self, other)\nobject.__xor__(self, other)\nobject.__or__(self, other)\n\n These methods are called to implement the binary arithmetic\n operations (``+``, ``-``, ``*``, ``//``, ``%``, ``divmod()``,\n ``pow()``, ``**``, ``<<``, ``>>``, ``&``, ``^``, ``|``). For\n instance, to evaluate the expression ``x + y``, where *x* is an\n instance of a class that has an ``__add__()`` method,\n ``x.__add__(y)`` is called. The ``__divmod__()`` method should be\n the equivalent to using ``__floordiv__()`` and ``__mod__()``; it\n should not be related to ``__truediv__()`` (described below). Note\n that ``__pow__()`` should be defined to accept an optional third\n argument if the ternary version of the built-in ``pow()`` function\n is to be supported.\n\n If one of those methods does not support the operation with the\n supplied arguments, it should return ``NotImplemented``.\n\nobject.__div__(self, other)\nobject.__truediv__(self, other)\n\n The division operator (``/``) is implemented by these methods. The\n ``__truediv__()`` method is used when ``__future__.division`` is in\n effect, otherwise ``__div__()`` is used. 
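A minimal sketch of a binary operator with its reflected variant, returning ``NotImplemented`` for unsupported operand types so that Python can try the other operand; the ``Seconds`` class is hypothetical:

    class Seconds(object):
        def __init__(self, n):
            self.n = n
        def __add__(self, other):
            if isinstance(other, Seconds):
                return Seconds(self.n + other.n)
            if isinstance(other, (int, long)):
                return Seconds(self.n + other)
            return NotImplemented           # let the other operand have a try
        __radd__ = __add__                  # addition is symmetric here
        def __repr__(self):
            return 'Seconds(%d)' % self.n

    assert (Seconds(2) + 3).n == 5
    assert (3 + Seconds(2)).n == 5          # int.__add__ fails, Seconds.__radd__ runs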
If only one of these two\n methods is defined, the object will not support division in the\n alternate context; ``TypeError`` will be raised instead.\n\nobject.__radd__(self, other)\nobject.__rsub__(self, other)\nobject.__rmul__(self, other)\nobject.__rdiv__(self, other)\nobject.__rtruediv__(self, other)\nobject.__rfloordiv__(self, other)\nobject.__rmod__(self, other)\nobject.__rdivmod__(self, other)\nobject.__rpow__(self, other)\nobject.__rlshift__(self, other)\nobject.__rrshift__(self, other)\nobject.__rand__(self, other)\nobject.__rxor__(self, other)\nobject.__ror__(self, other)\n\n These methods are called to implement the binary arithmetic\n operations (``+``, ``-``, ``*``, ``/``, ``%``, ``divmod()``,\n ``pow()``, ``**``, ``<<``, ``>>``, ``&``, ``^``, ``|``) with\n reflected (swapped) operands. These functions are only called if\n the left operand does not support the corresponding operation and\n the operands are of different types. [2] For instance, to evaluate\n the expression ``x - y``, where *y* is an instance of a class that\n has an ``__rsub__()`` method, ``y.__rsub__(x)`` is called if\n ``x.__sub__(y)`` returns *NotImplemented*.\n\n Note that ternary ``pow()`` will not try calling ``__rpow__()``\n (the coercion rules would become too complicated).\n\n Note: If the right operand\'s type is a subclass of the left operand\'s\n type and that subclass provides the reflected method for the\n operation, this method will be called before the left operand\'s\n non-reflected method. This behavior allows subclasses to\n override their ancestors\' operations.\n\nobject.__iadd__(self, other)\nobject.__isub__(self, other)\nobject.__imul__(self, other)\nobject.__idiv__(self, other)\nobject.__itruediv__(self, other)\nobject.__ifloordiv__(self, other)\nobject.__imod__(self, other)\nobject.__ipow__(self, other[, modulo])\nobject.__ilshift__(self, other)\nobject.__irshift__(self, other)\nobject.__iand__(self, other)\nobject.__ixor__(self, other)\nobject.__ior__(self, other)\n\n These methods are called to implement the augmented arithmetic\n assignments (``+=``, ``-=``, ``*=``, ``/=``, ``//=``, ``%=``,\n ``**=``, ``<<=``, ``>>=``, ``&=``, ``^=``, ``|=``). These methods\n should attempt to do the operation in-place (modifying *self*) and\n return the result (which could be, but does not have to be,\n *self*). If a specific method is not defined, the augmented\n assignment falls back to the normal methods. For instance, to\n execute the statement ``x += y``, where *x* is an instance of a\n class that has an ``__iadd__()`` method, ``x.__iadd__(y)`` is\n called. If *x* is an instance of a class that does not define a\n ``__iadd__()`` method, ``x.__add__(y)`` and ``y.__radd__(x)`` are\n considered, as with the evaluation of ``x + y``.\n\nobject.__neg__(self)\nobject.__pos__(self)\nobject.__abs__(self)\nobject.__invert__(self)\n\n Called to implement the unary arithmetic operations (``-``, ``+``,\n ``abs()`` and ``~``).\n\nobject.__complex__(self)\nobject.__int__(self)\nobject.__long__(self)\nobject.__float__(self)\n\n Called to implement the built-in functions ``complex()``,\n ``int()``, ``long()``, and ``float()``. Should return a value of\n the appropriate type.\n\nobject.__oct__(self)\nobject.__hex__(self)\n\n Called to implement the built-in functions ``oct()`` and ``hex()``.\n Should return a string value.\n\nobject.__index__(self)\n\n Called to implement ``operator.index()``. Also called whenever\n Python needs an integer object (such as in slicing). 
Must return\n an integer (int or long).\n\n New in version 2.5.\n\nobject.__coerce__(self, other)\n\n Called to implement "mixed-mode" numeric arithmetic. Should either\n return a 2-tuple containing *self* and *other* converted to a\n common numeric type, or ``None`` if conversion is impossible. When\n the common type would be the type of ``other``, it is sufficient to\n return ``None``, since the interpreter will also ask the other\n object to attempt a coercion (but sometimes, if the implementation\n of the other type cannot be changed, it is useful to do the\n conversion to the other type here). A return value of\n ``NotImplemented`` is equivalent to returning ``None``.\n\n\nCoercion rules\n==============\n\nThis section used to document the rules for coercion. As the language\nhas evolved, the coercion rules have become hard to document\nprecisely; documenting what one version of one particular\nimplementation does is undesirable. Instead, here are some informal\nguidelines regarding coercion. In Python 3.0, coercion will not be\nsupported.\n\n* If the left operand of a % operator is a string or Unicode object,\n no coercion takes place and the string formatting operation is\n invoked instead.\n\n* It is no longer recommended to define a coercion operation. Mixed-\n mode operations on types that don\'t define coercion pass the\n original arguments to the operation.\n\n* New-style classes (those derived from ``object``) never invoke the\n ``__coerce__()`` method in response to a binary operator; the only\n time ``__coerce__()`` is invoked is when the built-in function\n ``coerce()`` is called.\n\n* For most intents and purposes, an operator that returns\n ``NotImplemented`` is treated the same as one that is not\n implemented at all.\n\n* Below, ``__op__()`` and ``__rop__()`` are used to signify the\n generic method names corresponding to an operator; ``__iop__()`` is\n used for the corresponding in-place operator. For example, for the\n operator \'``+``\', ``__add__()`` and ``__radd__()`` are used for the\n left and right variant of the binary operator, and ``__iadd__()``\n for the in-place variant.\n\n* For objects *x* and *y*, first ``x.__op__(y)`` is tried. If this is\n not implemented or returns ``NotImplemented``, ``y.__rop__(x)`` is\n tried. If this is also not implemented or returns\n ``NotImplemented``, a ``TypeError`` exception is raised. But see\n the following exception:\n\n* Exception to the previous item: if the left operand is an instance\n of a built-in type or a new-style class, and the right operand is an\n instance of a proper subclass of that type or class and overrides\n the base\'s ``__rop__()`` method, the right operand\'s ``__rop__()``\n method is tried *before* the left operand\'s ``__op__()`` method.\n\n This is done so that a subclass can completely override binary\n operators. Otherwise, the left operand\'s ``__op__()`` method would\n always accept the right operand: when an instance of a given class\n is expected, an instance of a subclass of that class is always\n acceptable.\n\n* When either operand type defines a coercion, this coercion is called\n before that type\'s ``__op__()`` or ``__rop__()`` method is called,\n but no sooner. If the coercion returns an object of a different\n type for the operand whose coercion is invoked, part of the process\n is redone using the new object.\n\n* When an in-place operator (like \'``+=``\') is used, if the left\n operand implements ``__iop__()``, it is invoked without any\n coercion. 
When the operation falls back to ``__op__()`` and/or\n ``__rop__()``, the normal coercion rules apply.\n\n* In ``x + y``, if *x* is a sequence that implements sequence\n concatenation, sequence concatenation is invoked.\n\n* In ``x * y``, if one operator is a sequence that implements sequence\n repetition, and the other is an integer (``int`` or ``long``),\n sequence repetition is invoked.\n\n* Rich comparisons (implemented by methods ``__eq__()`` and so on)\n never use coercion. Three-way comparison (implemented by\n ``__cmp__()``) does use coercion under the same conditions as other\n binary operations use it.\n\n* In the current implementation, the built-in numeric types ``int``,\n ``long``, ``float``, and ``complex`` do not use coercion. All these\n types implement a ``__coerce__()`` method, for use by the built-in\n ``coerce()`` function.\n\n Changed in version 2.7.\n\n\nWith Statement Context Managers\n===============================\n\nNew in version 2.5.\n\nA *context manager* is an object that defines the runtime context to\nbe established when executing a ``with`` statement. The context\nmanager handles the entry into, and the exit from, the desired runtime\ncontext for the execution of the block of code. Context managers are\nnormally invoked using the ``with`` statement (described in section\n*The with statement*), but can also be used by directly invoking their\nmethods.\n\nTypical uses of context managers include saving and restoring various\nkinds of global state, locking and unlocking resources, closing opened\nfiles, etc.\n\nFor more information on context managers, see *Context Manager Types*.\n\nobject.__enter__(self)\n\n Enter the runtime context related to this object. The ``with``\n statement will bind this method\'s return value to the target(s)\n specified in the ``as`` clause of the statement, if any.\n\nobject.__exit__(self, exc_type, exc_value, traceback)\n\n Exit the runtime context related to this object. The parameters\n describe the exception that caused the context to be exited. If the\n context was exited without an exception, all three arguments will\n be ``None``.\n\n If an exception is supplied, and the method wishes to suppress the\n exception (i.e., prevent it from being propagated), it should\n return a true value. Otherwise, the exception will be processed\n normally upon exit from this method.\n\n Note that ``__exit__()`` methods should not reraise the passed-in\n exception; this is the caller\'s responsibility.\n\nSee also:\n\n **PEP 0343** - The "with" statement\n The specification, background, and examples for the Python\n ``with`` statement.\n\n\nSpecial method lookup for old-style classes\n===========================================\n\nFor old-style classes, special methods are always looked up in exactly\nthe same way as any other method or attribute. This is the case\nregardless of whether the method is being looked up explicitly as in\n``x.__getitem__(i)`` or implicitly as in ``x[i]``.\n\nThis behaviour means that special methods may exhibit different\nbehaviour for different instances of a single old-style class if the\nappropriate special attributes are set differently:\n\n >>> class C:\n ... 
pass\n ...\n >>> c1 = C()\n >>> c2 = C()\n >>> c1.__len__ = lambda: 5\n >>> c2.__len__ = lambda: 9\n >>> len(c1)\n 5\n >>> len(c2)\n 9\n\n\nSpecial method lookup for new-style classes\n===========================================\n\nFor new-style classes, implicit invocations of special methods are\nonly guaranteed to work correctly if defined on an object\'s type, not\nin the object\'s instance dictionary. That behaviour is the reason why\nthe following code raises an exception (unlike the equivalent example\nwith old-style classes):\n\n >>> class C(object):\n ... pass\n ...\n >>> c = C()\n >>> c.__len__ = lambda: 5\n >>> len(c)\n Traceback (most recent call last):\n File "", line 1, in \n TypeError: object of type \'C\' has no len()\n\nThe rationale behind this behaviour lies with a number of special\nmethods such as ``__hash__()`` and ``__repr__()`` that are implemented\nby all objects, including type objects. If the implicit lookup of\nthese methods used the conventional lookup process, they would fail\nwhen invoked on the type object itself:\n\n >>> 1 .__hash__() == hash(1)\n True\n >>> int.__hash__() == hash(int)\n Traceback (most recent call last):\n File "", line 1, in \n TypeError: descriptor \'__hash__\' of \'int\' object needs an argument\n\nIncorrectly attempting to invoke an unbound method of a class in this\nway is sometimes referred to as \'metaclass confusion\', and is avoided\nby bypassing the instance when looking up special methods:\n\n >>> type(1).__hash__(1) == hash(1)\n True\n >>> type(int).__hash__(int) == hash(int)\n True\n\nIn addition to bypassing any instance attributes in the interest of\ncorrectness, implicit special method lookup generally also bypasses\nthe ``__getattribute__()`` method even of the object\'s metaclass:\n\n >>> class Meta(type):\n ... def __getattribute__(*args):\n ... print "Metaclass getattribute invoked"\n ... return type.__getattribute__(*args)\n ...\n >>> class C(object):\n ... __metaclass__ = Meta\n ... def __len__(self):\n ... return 10\n ... def __getattribute__(*args):\n ... print "Class getattribute invoked"\n ... return object.__getattribute__(*args)\n ...\n >>> c = C()\n >>> c.__len__() # Explicit lookup via instance\n Class getattribute invoked\n 10\n >>> type(c).__len__(c) # Explicit lookup via type\n Metaclass getattribute invoked\n 10\n >>> len(c) # Implicit lookup\n 10\n\nBypassing the ``__getattribute__()`` machinery in this fashion\nprovides significant scope for speed optimisations within the\ninterpreter, at the cost of some flexibility in the handling of\nspecial methods (the special method *must* be set on the class object\nitself in order to be consistently invoked by the interpreter).\n\n-[ Footnotes ]-\n\n[1] It *is* possible in some cases to change an object\'s type, under\n certain controlled conditions. It generally isn\'t a good idea\n though, since it can lead to some very strange behaviour if it is\n handled incorrectly.\n\n[2] For operands of the same type, it is assumed that if the non-\n reflected method (such as ``__add__()``) fails the operation is\n not supported, which is why the reflected method is not called.\n', + 'specialnames': u'\nSpecial method names\n********************\n\nA class can implement certain operations that are invoked by special\nsyntax (such as arithmetic operations or subscripting and slicing) by\ndefining methods with special names. This is Python\'s approach to\n*operator overloading*, allowing classes to define their own behavior\nwith respect to language operators. 
For instance, if a class defines\na method named ``__getitem__()``, and ``x`` is an instance of this\nclass, then ``x[i]`` is roughly equivalent to ``x.__getitem__(i)`` for\nold-style classes and ``type(x).__getitem__(x, i)`` for new-style\nclasses. Except where mentioned, attempts to execute an operation\nraise an exception when no appropriate method is defined (typically\n``AttributeError`` or ``TypeError``).\n\nWhen implementing a class that emulates any built-in type, it is\nimportant that the emulation only be implemented to the degree that it\nmakes sense for the object being modelled. For example, some\nsequences may work well with retrieval of individual elements, but\nextracting a slice may not make sense. (One example of this is the\n``NodeList`` interface in the W3C\'s Document Object Model.)\n\n\nBasic customization\n===================\n\nobject.__new__(cls[, ...])\n\n Called to create a new instance of class *cls*. ``__new__()`` is a\n static method (special-cased so you need not declare it as such)\n that takes the class of which an instance was requested as its\n first argument. The remaining arguments are those passed to the\n object constructor expression (the call to the class). The return\n value of ``__new__()`` should be the new object instance (usually\n an instance of *cls*).\n\n Typical implementations create a new instance of the class by\n invoking the superclass\'s ``__new__()`` method using\n ``super(currentclass, cls).__new__(cls[, ...])`` with appropriate\n arguments and then modifying the newly-created instance as\n necessary before returning it.\n\n If ``__new__()`` returns an instance of *cls*, then the new\n instance\'s ``__init__()`` method will be invoked like\n ``__init__(self[, ...])``, where *self* is the new instance and the\n remaining arguments are the same as were passed to ``__new__()``.\n\n If ``__new__()`` does not return an instance of *cls*, then the new\n instance\'s ``__init__()`` method will not be invoked.\n\n ``__new__()`` is intended mainly to allow subclasses of immutable\n types (like int, str, or tuple) to customize instance creation. It\n is also commonly overridden in custom metaclasses in order to\n customize class creation.\n\nobject.__init__(self[, ...])\n\n Called when the instance is created. The arguments are those\n passed to the class constructor expression. If a base class has an\n ``__init__()`` method, the derived class\'s ``__init__()`` method,\n if any, must explicitly call it to ensure proper initialization of\n the base class part of the instance; for example:\n ``BaseClass.__init__(self, [args...])``. As a special constraint\n on constructors, no value may be returned; doing so will cause a\n ``TypeError`` to be raised at runtime.\n\nobject.__del__(self)\n\n Called when the instance is about to be destroyed. This is also\n called a destructor. If a base class has a ``__del__()`` method,\n the derived class\'s ``__del__()`` method, if any, must explicitly\n call it to ensure proper deletion of the base class part of the\n instance. Note that it is possible (though not recommended!) for\n the ``__del__()`` method to postpone destruction of the instance by\n creating a new reference to it. It may then be called at a later\n time when this new reference is deleted. 
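To illustrate the ``__new__()``/``__init__()`` split described above, here is a minimal sketch of subclassing an immutable type; the ``Celsius`` class name is invented for illustration:

   class Celsius(float):
       # __new__ builds the immutable float value; __init__ could not
       # change it after the fact.
       def __new__(cls, degrees):
           return super(Celsius, cls).__new__(cls, degrees)

       def __init__(self, degrees):
           # The float value is already fixed; only add extra attributes.
           self.unit = 'C'

   t = Celsius(21.5)
   assert float(t) == 21.5 and t.unit == 'C'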
It is not guaranteed that\n ``__del__()`` methods are called for objects that still exist when\n the interpreter exits.\n\n Note: ``del x`` doesn\'t directly call ``x.__del__()`` --- the former\n decrements the reference count for ``x`` by one, and the latter\n is only called when ``x``\'s reference count reaches zero. Some\n common situations that may prevent the reference count of an\n object from going to zero include: circular references between\n objects (e.g., a doubly-linked list or a tree data structure with\n parent and child pointers); a reference to the object on the\n stack frame of a function that caught an exception (the traceback\n stored in ``sys.exc_traceback`` keeps the stack frame alive); or\n a reference to the object on the stack frame that raised an\n unhandled exception in interactive mode (the traceback stored in\n ``sys.last_traceback`` keeps the stack frame alive). The first\n situation can only be remedied by explicitly breaking the cycles;\n the latter two situations can be resolved by storing ``None`` in\n ``sys.exc_traceback`` or ``sys.last_traceback``. Circular\n references which are garbage are detected when the option cycle\n detector is enabled (it\'s on by default), but can only be cleaned\n up if there are no Python-level ``__del__()`` methods involved.\n Refer to the documentation for the ``gc`` module for more\n information about how ``__del__()`` methods are handled by the\n cycle detector, particularly the description of the ``garbage``\n value.\n\n Warning: Due to the precarious circumstances under which ``__del__()``\n methods are invoked, exceptions that occur during their execution\n are ignored, and a warning is printed to ``sys.stderr`` instead.\n Also, when ``__del__()`` is invoked in response to a module being\n deleted (e.g., when execution of the program is done), other\n globals referenced by the ``__del__()`` method may already have\n been deleted or in the process of being torn down (e.g. the\n import machinery shutting down). For this reason, ``__del__()``\n methods should do the absolute minimum needed to maintain\n external invariants. Starting with version 1.5, Python\n guarantees that globals whose name begins with a single\n underscore are deleted from their module before other globals are\n deleted; if no other references to such globals exist, this may\n help in assuring that imported modules are still available at the\n time when the ``__del__()`` method is called.\n\nobject.__repr__(self)\n\n Called by the ``repr()`` built-in function and by string\n conversions (reverse quotes) to compute the "official" string\n representation of an object. If at all possible, this should look\n like a valid Python expression that could be used to recreate an\n object with the same value (given an appropriate environment). If\n this is not possible, a string of the form ``<...some useful\n description...>`` should be returned. The return value must be a\n string object. If a class defines ``__repr__()`` but not\n ``__str__()``, then ``__repr__()`` is also used when an "informal"\n string representation of instances of that class is required.\n\n This is typically used for debugging, so it is important that the\n representation is information-rich and unambiguous.\n\nobject.__str__(self)\n\n Called by the ``str()`` built-in function and by the ``print``\n statement to compute the "informal" string representation of an\n object. 
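A short sketch of the usual division of labour between ``__repr__()`` and ``__str__()``; the ``Point`` class is made up for illustration:

   class Point(object):
       def __init__(self, x, y):
           self.x, self.y = x, y

       def __repr__(self):
           # Unambiguous, ideally eval()-able representation.
           return 'Point(%r, %r)' % (self.x, self.y)

       def __str__(self):
           # Friendlier form for print and str().
           return '(%s, %s)' % (self.x, self.y)

   p = Point(1, 2)
   repr(p)    # 'Point(1, 2)'
   print p    # (1, 2)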
This differs from ``__repr__()`` in that it does not have\n to be a valid Python expression: a more convenient or concise\n representation may be used instead. The return value must be a\n string object.\n\nobject.__lt__(self, other)\nobject.__le__(self, other)\nobject.__eq__(self, other)\nobject.__ne__(self, other)\nobject.__gt__(self, other)\nobject.__ge__(self, other)\n\n New in version 2.1.\n\n These are the so-called "rich comparison" methods, and are called\n for comparison operators in preference to ``__cmp__()`` below. The\n correspondence between operator symbols and method names is as\n follows: ``x<y`` calls ``x.__lt__(y)``, ``x<=y`` calls\n ``x.__le__(y)``, ``x==y`` calls ``x.__eq__(y)``, ``x!=y`` and\n ``x<>y`` call ``x.__ne__(y)``, ``x>y`` calls ``x.__gt__(y)``, and\n ``x>=y`` calls ``x.__ge__(y)``.\n\n A rich comparison method may return the singleton\n ``NotImplemented`` if it does not implement the operation for a\n given pair of arguments. By convention, ``False`` and ``True`` are\n returned for a successful comparison. However, these methods can\n return any value, so if the comparison operator is used in a\n Boolean context (e.g., in the condition of an ``if`` statement),\n Python will call ``bool()`` on the value to determine if the result\n is true or false.\n\n There are no implied relationships among the comparison operators.\n The truth of ``x==y`` does not imply that ``x!=y`` is false.\n Accordingly, when defining ``__eq__()``, one should also define\n ``__ne__()`` so that the operators will behave as expected. See\n the paragraph on ``__hash__()`` for some important notes on\n creating *hashable* objects which support custom comparison\n operations and are usable as dictionary keys.\n\n There are no swapped-argument versions of these methods (to be used\n when the left argument does not support the operation but the right\n argument does); rather, ``__lt__()`` and ``__gt__()`` are each\n other\'s reflection, ``__le__()`` and ``__ge__()`` are each other\'s\n reflection, and ``__eq__()`` and ``__ne__()`` are their own\n reflection.\n\n Arguments to rich comparison methods are never coerced.\n\n To automatically generate ordering operations from a single root\n operation, see ``functools.total_ordering()``.\n\nobject.__cmp__(self, other)\n\n Called by comparison operations if rich comparison (see above) is\n not defined. Should return a negative integer if ``self < other``,\n zero if ``self == other``, a positive integer if ``self > other``.\n If no ``__cmp__()``, ``__eq__()`` or ``__ne__()`` operation is\n defined, class instances are compared by object identity\n ("address"). See also the description of ``__hash__()`` for some\n important notes on creating *hashable* objects which support custom\n comparison operations and are usable as dictionary keys. (Note: the\n restriction that exceptions are not propagated by ``__cmp__()`` has\n been removed since Python 1.5.)\n\nobject.__rcmp__(self, other)\n\n Changed in version 2.1: No longer supported.\n\nobject.__hash__(self)\n\n Called by built-in function ``hash()`` and for operations on\n members of hashed collections including ``set``, ``frozenset``, and\n ``dict``. ``__hash__()`` should return an integer. The only\n required property is that objects which compare equal have the same\n hash value; it is advised to somehow mix together (e.g.
using\n exclusive or) the hash values for the components of the object that\n also play a part in comparison of objects.\n\n If a class does not define a ``__cmp__()`` or ``__eq__()`` method\n it should not define a ``__hash__()`` operation either; if it\n defines ``__cmp__()`` or ``__eq__()`` but not ``__hash__()``, its\n instances will not be usable in hashed collections. If a class\n defines mutable objects and implements a ``__cmp__()`` or\n ``__eq__()`` method, it should not implement ``__hash__()``, since\n hashable collection implementations require that a object\'s hash\n value is immutable (if the object\'s hash value changes, it will be\n in the wrong hash bucket).\n\n User-defined classes have ``__cmp__()`` and ``__hash__()`` methods\n by default; with them, all objects compare unequal (except with\n themselves) and ``x.__hash__()`` returns ``id(x)``.\n\n Classes which inherit a ``__hash__()`` method from a parent class\n but change the meaning of ``__cmp__()`` or ``__eq__()`` such that\n the hash value returned is no longer appropriate (e.g. by switching\n to a value-based concept of equality instead of the default\n identity based equality) can explicitly flag themselves as being\n unhashable by setting ``__hash__ = None`` in the class definition.\n Doing so means that not only will instances of the class raise an\n appropriate ``TypeError`` when a program attempts to retrieve their\n hash value, but they will also be correctly identified as\n unhashable when checking ``isinstance(obj, collections.Hashable)``\n (unlike classes which define their own ``__hash__()`` to explicitly\n raise ``TypeError``).\n\n Changed in version 2.5: ``__hash__()`` may now also return a long\n integer object; the 32-bit integer is then derived from the hash of\n that object.\n\n Changed in version 2.6: ``__hash__`` may now be set to ``None`` to\n explicitly flag instances of a class as unhashable.\n\nobject.__nonzero__(self)\n\n Called to implement truth value testing and the built-in operation\n ``bool()``; should return ``False`` or ``True``, or their integer\n equivalents ``0`` or ``1``. When this method is not defined,\n ``__len__()`` is called, if it is defined, and the object is\n considered true if its result is nonzero. If a class defines\n neither ``__len__()`` nor ``__nonzero__()``, all its instances are\n considered true.\n\nobject.__unicode__(self)\n\n Called to implement ``unicode()`` built-in; should return a Unicode\n object. When this method is not defined, string conversion is\n attempted, and the result of string conversion is converted to\n Unicode using the system default encoding.\n\n\nCustomizing attribute access\n============================\n\nThe following methods can be defined to customize the meaning of\nattribute access (use of, assignment to, or deletion of ``x.name``)\nfor class instances.\n\nobject.__getattr__(self, name)\n\n Called when an attribute lookup has not found the attribute in the\n usual places (i.e. it is not an instance attribute nor is it found\n in the class tree for ``self``). ``name`` is the attribute name.\n This method should return the (computed) attribute value or raise\n an ``AttributeError`` exception.\n\n Note that if the attribute is found through the normal mechanism,\n ``__getattr__()`` is not called. (This is an intentional asymmetry\n between ``__getattr__()`` and ``__setattr__()``.) This is done both\n for efficiency reasons and because otherwise ``__getattr__()``\n would have no way to access other attributes of the instance. 
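A minimal sketch of comparison and hashing working together as described above; the ``Account`` class is hypothetical, and the key point is that objects which compare equal must hash equal:

   class Account(object):
       def __init__(self, number):
           self.number = number

       def __eq__(self, other):
           if not isinstance(other, Account):
               return NotImplemented
           return self.number == other.number

       def __ne__(self, other):
           result = self.__eq__(other)
           return result if result is NotImplemented else not result

       def __hash__(self):
           # Derive the hash from the same data used by __eq__().
           return hash(self.number)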
Note\n that at least for instance variables, you can fake total control by\n not inserting any values in the instance attribute dictionary (but\n instead inserting them in another object). See the\n ``__getattribute__()`` method below for a way to actually get total\n control in new-style classes.\n\nobject.__setattr__(self, name, value)\n\n Called when an attribute assignment is attempted. This is called\n instead of the normal mechanism (i.e. store the value in the\n instance dictionary). *name* is the attribute name, *value* is the\n value to be assigned to it.\n\n If ``__setattr__()`` wants to assign to an instance attribute, it\n should not simply execute ``self.name = value`` --- this would\n cause a recursive call to itself. Instead, it should insert the\n value in the dictionary of instance attributes, e.g.,\n ``self.__dict__[name] = value``. For new-style classes, rather\n than accessing the instance dictionary, it should call the base\n class method with the same name, for example,\n ``object.__setattr__(self, name, value)``.\n\nobject.__delattr__(self, name)\n\n Like ``__setattr__()`` but for attribute deletion instead of\n assignment. This should only be implemented if ``del obj.name`` is\n meaningful for the object.\n\n\nMore attribute access for new-style classes\n-------------------------------------------\n\nThe following methods only apply to new-style classes.\n\nobject.__getattribute__(self, name)\n\n Called unconditionally to implement attribute accesses for\n instances of the class. If the class also defines\n ``__getattr__()``, the latter will not be called unless\n ``__getattribute__()`` either calls it explicitly or raises an\n ``AttributeError``. This method should return the (computed)\n attribute value or raise an ``AttributeError`` exception. In order\n to avoid infinite recursion in this method, its implementation\n should always call the base class method with the same name to\n access any attributes it needs, for example,\n ``object.__getattribute__(self, name)``.\n\n Note: This method may still be bypassed when looking up special methods\n as the result of implicit invocation via language syntax or\n built-in functions. See *Special method lookup for new-style\n classes*.\n\n\nImplementing Descriptors\n------------------------\n\nThe following methods only apply when an instance of the class\ncontaining the method (a so-called *descriptor* class) appears in an\n*owner* class (the descriptor must be in either the owner\'s class\ndictionary or in the class dictionary for one of its parents). In the\nexamples below, "the attribute" refers to the attribute whose name is\nthe key of the property in the owner class\' ``__dict__``.\n\nobject.__get__(self, instance, owner)\n\n Called to get the attribute of the owner class (class attribute\n access) or of an instance of that class (instance attribute\n access). *owner* is always the owner class, while *instance* is the\n instance that the attribute was accessed through, or ``None`` when\n the attribute is accessed through the *owner*. 
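A small sketch of ``__getattr__()`` and ``__setattr__()`` cooperating without infinite recursion; the ``Recorder`` class is invented for illustration:

   class Recorder(object):
       def __init__(self):
           # Bypass our own __setattr__ to avoid recursion.
           object.__setattr__(self, 'log', [])

       def __setattr__(self, name, value):
           self.log.append((name, value))
           object.__setattr__(self, name, value)

       def __getattr__(self, name):
           # Only called when normal lookup fails.
           raise AttributeError('no attribute %r' % name)

   r = Recorder()
   r.x = 1
   assert r.log == [('x', 1)]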
This method should\n return the (computed) attribute value or raise an\n ``AttributeError`` exception.\n\nobject.__set__(self, instance, value)\n\n Called to set the attribute on an instance *instance* of the owner\n class to a new value, *value*.\n\nobject.__delete__(self, instance)\n\n Called to delete the attribute on an instance *instance* of the\n owner class.\n\n\nInvoking Descriptors\n--------------------\n\nIn general, a descriptor is an object attribute with "binding\nbehavior", one whose attribute access has been overridden by methods\nin the descriptor protocol: ``__get__()``, ``__set__()``, and\n``__delete__()``. If any of those methods are defined for an object,\nit is said to be a descriptor.\n\nThe default behavior for attribute access is to get, set, or delete\nthe attribute from an object\'s dictionary. For instance, ``a.x`` has a\nlookup chain starting with ``a.__dict__[\'x\']``, then\n``type(a).__dict__[\'x\']``, and continuing through the base classes of\n``type(a)`` excluding metaclasses.\n\nHowever, if the looked-up value is an object defining one of the\ndescriptor methods, then Python may override the default behavior and\ninvoke the descriptor method instead. Where this occurs in the\nprecedence chain depends on which descriptor methods were defined and\nhow they were called. Note that descriptors are only invoked for new\nstyle objects or classes (ones that subclass ``object()`` or\n``type()``).\n\nThe starting point for descriptor invocation is a binding, ``a.x``.\nHow the arguments are assembled depends on ``a``:\n\nDirect Call\n The simplest and least common call is when user code directly\n invokes a descriptor method: ``x.__get__(a)``.\n\nInstance Binding\n If binding to a new-style object instance, ``a.x`` is transformed\n into the call: ``type(a).__dict__[\'x\'].__get__(a, type(a))``.\n\nClass Binding\n If binding to a new-style class, ``A.x`` is transformed into the\n call: ``A.__dict__[\'x\'].__get__(None, A)``.\n\nSuper Binding\n If ``a`` is an instance of ``super``, then the binding ``super(B,\n obj).m()`` searches ``obj.__class__.__mro__`` for the base class\n ``A`` immediately preceding ``B`` and then invokes the descriptor\n with the call: ``A.__dict__[\'m\'].__get__(obj, obj.__class__)``.\n\nFor instance bindings, the precedence of descriptor invocation depends\non the which descriptor methods are defined. A descriptor can define\nany combination of ``__get__()``, ``__set__()`` and ``__delete__()``.\nIf it does not define ``__get__()``, then accessing the attribute will\nreturn the descriptor object itself unless there is a value in the\nobject\'s instance dictionary. If the descriptor defines ``__set__()``\nand/or ``__delete__()``, it is a data descriptor; if it defines\nneither, it is a non-data descriptor. Normally, data descriptors\ndefine both ``__get__()`` and ``__set__()``, while non-data\ndescriptors have just the ``__get__()`` method. Data descriptors with\n``__set__()`` and ``__get__()`` defined always override a redefinition\nin an instance dictionary. In contrast, non-data descriptors can be\noverridden by instances.\n\nPython methods (including ``staticmethod()`` and ``classmethod()``)\nare implemented as non-data descriptors. Accordingly, instances can\nredefine and override methods. 
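For example, a minimal data descriptor along the lines described above; ``Positive`` and ``Order`` are made-up names:

   class Positive(object):
       # Data descriptor: defines both __get__ and __set__.
       def __init__(self, name):
           self.name = name             # key used in the instance __dict__

       def __get__(self, instance, owner):
           if instance is None:
               return self              # accessed on the class itself
           return instance.__dict__[self.name]

       def __set__(self, instance, value):
           if value <= 0:
               raise ValueError('%s must be positive' % self.name)
           instance.__dict__[self.name] = value

   class Order(object):
       quantity = Positive('quantity')

   o = Order()
   o.quantity = 3        # goes through Positive.__set__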
This allows individual instances to\nacquire behaviors that differ from other instances of the same class.\n\nThe ``property()`` function is implemented as a data descriptor.\nAccordingly, instances cannot override the behavior of a property.\n\n\n__slots__\n---------\n\nBy default, instances of both old and new-style classes have a\ndictionary for attribute storage. This wastes space for objects\nhaving very few instance variables. The space consumption can become\nacute when creating large numbers of instances.\n\nThe default can be overridden by defining *__slots__* in a new-style\nclass definition. The *__slots__* declaration takes a sequence of\ninstance variables and reserves just enough space in each instance to\nhold a value for each variable. Space is saved because *__dict__* is\nnot created for each instance.\n\n__slots__\n\n This class variable can be assigned a string, iterable, or sequence\n of strings with variable names used by instances. If defined in a\n new-style class, *__slots__* reserves space for the declared\n variables and prevents the automatic creation of *__dict__* and\n *__weakref__* for each instance.\n\n New in version 2.2.\n\nNotes on using *__slots__*\n\n* When inheriting from a class without *__slots__*, the *__dict__*\n attribute of that class will always be accessible, so a *__slots__*\n definition in the subclass is meaningless.\n\n* Without a *__dict__* variable, instances cannot be assigned new\n variables not listed in the *__slots__* definition. Attempts to\n assign to an unlisted variable name raises ``AttributeError``. If\n dynamic assignment of new variables is desired, then add\n ``\'__dict__\'`` to the sequence of strings in the *__slots__*\n declaration.\n\n Changed in version 2.3: Previously, adding ``\'__dict__\'`` to the\n *__slots__* declaration would not enable the assignment of new\n attributes not specifically listed in the sequence of instance\n variable names.\n\n* Without a *__weakref__* variable for each instance, classes defining\n *__slots__* do not support weak references to its instances. If weak\n reference support is needed, then add ``\'__weakref__\'`` to the\n sequence of strings in the *__slots__* declaration.\n\n Changed in version 2.3: Previously, adding ``\'__weakref__\'`` to the\n *__slots__* declaration would not enable support for weak\n references.\n\n* *__slots__* are implemented at the class level by creating\n descriptors (*Implementing Descriptors*) for each variable name. As\n a result, class attributes cannot be used to set default values for\n instance variables defined by *__slots__*; otherwise, the class\n attribute would overwrite the descriptor assignment.\n\n* The action of a *__slots__* declaration is limited to the class\n where it is defined. As a result, subclasses will have a *__dict__*\n unless they also define *__slots__* (which must only contain names\n of any *additional* slots).\n\n* If a class defines a slot also defined in a base class, the instance\n variable defined by the base class slot is inaccessible (except by\n retrieving its descriptor directly from the base class). This\n renders the meaning of the program undefined. In the future, a\n check may be added to prevent this.\n\n* Nonempty *__slots__* does not work for classes derived from\n "variable-length" built-in types such as ``long``, ``str`` and\n ``tuple``.\n\n* Any non-string iterable may be assigned to *__slots__*. 
Mappings may\n also be used; however, in the future, special meaning may be\n assigned to the values corresponding to each key.\n\n* *__class__* assignment works only if both classes have the same\n *__slots__*.\n\n Changed in version 2.6: Previously, *__class__* assignment raised an\n error if either new or old class had *__slots__*.\n\n\nCustomizing class creation\n==========================\n\nBy default, new-style classes are constructed using ``type()``. A\nclass definition is read into a separate namespace and the value of\nclass name is bound to the result of ``type(name, bases, dict)``.\n\nWhen the class definition is read, if *__metaclass__* is defined then\nthe callable assigned to it will be called instead of ``type()``. This\nallows classes or functions to be written which monitor or alter the\nclass creation process:\n\n* Modifying the class dictionary prior to the class being created.\n\n* Returning an instance of another class -- essentially performing the\n role of a factory function.\n\nThese steps will have to be performed in the metaclass\'s ``__new__()``\nmethod -- ``type.__new__()`` can then be called from this method to\ncreate a class with different properties. This example adds a new\nelement to the class dictionary before creating the class:\n\n class metacls(type):\n def __new__(mcs, name, bases, dict):\n dict[\'foo\'] = \'metacls was here\'\n return type.__new__(mcs, name, bases, dict)\n\nYou can of course also override other class methods (or add new\nmethods); for example defining a custom ``__call__()`` method in the\nmetaclass allows custom behavior when the class is called, e.g. not\nalways creating a new instance.\n\n__metaclass__\n\n This variable can be any callable accepting arguments for ``name``,\n ``bases``, and ``dict``. Upon class creation, the callable is used\n instead of the built-in ``type()``.\n\n New in version 2.2.\n\nThe appropriate metaclass is determined by the following precedence\nrules:\n\n* If ``dict[\'__metaclass__\']`` exists, it is used.\n\n* Otherwise, if there is at least one base class, its metaclass is\n used (this looks for a *__class__* attribute first and if not found,\n uses its type).\n\n* Otherwise, if a global variable named __metaclass__ exists, it is\n used.\n\n* Otherwise, the old-style, classic metaclass (types.ClassType) is\n used.\n\nThe potential uses for metaclasses are boundless. Some ideas that have\nbeen explored including logging, interface checking, automatic\ndelegation, automatic property creation, proxies, frameworks, and\nautomatic resource locking/synchronization.\n\n\nCustomizing instance and subclass checks\n========================================\n\nNew in version 2.6.\n\nThe following methods are used to override the default behavior of the\n``isinstance()`` and ``issubclass()`` built-in functions.\n\nIn particular, the metaclass ``abc.ABCMeta`` implements these methods\nin order to allow the addition of Abstract Base Classes (ABCs) as\n"virtual base classes" to any class or type (including built-in\ntypes), including other ABCs.\n\nclass.__instancecheck__(self, instance)\n\n Return true if *instance* should be considered a (direct or\n indirect) instance of *class*. If defined, called to implement\n ``isinstance(instance, class)``.\n\nclass.__subclasscheck__(self, subclass)\n\n Return true if *subclass* should be considered a (direct or\n indirect) subclass of *class*. 
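A minimal, made-up sketch of ``__instancecheck__()`` defined on a metaclass to implement a duck-typed ``isinstance()`` check:

   class StringLike(type):
       # Looked up on the type (metaclass) of the class being checked.
       def __instancecheck__(cls, instance):
           return hasattr(instance, 'lower') and hasattr(instance, 'upper')

   class TextDuck(object):
       __metaclass__ = StringLike

   assert isinstance('abc', TextDuck)
   assert not isinstance(42, TextDuck)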
If defined, called to implement\n ``issubclass(subclass, class)``.\n\nNote that these methods are looked up on the type (metaclass) of a\nclass. They cannot be defined as class methods in the actual class.\nThis is consistent with the lookup of special methods that are called\non instances, only in this case the instance is itself a class.\n\nSee also:\n\n **PEP 3119** - Introducing Abstract Base Classes\n Includes the specification for customizing ``isinstance()`` and\n ``issubclass()`` behavior through ``__instancecheck__()`` and\n ``__subclasscheck__()``, with motivation for this functionality\n in the context of adding Abstract Base Classes (see the ``abc``\n module) to the language.\n\n\nEmulating callable objects\n==========================\n\nobject.__call__(self[, args...])\n\n Called when the instance is "called" as a function; if this method\n is defined, ``x(arg1, arg2, ...)`` is a shorthand for\n ``x.__call__(arg1, arg2, ...)``.\n\n\nEmulating container types\n=========================\n\nThe following methods can be defined to implement container objects.\nContainers usually are sequences (such as lists or tuples) or mappings\n(like dictionaries), but can represent other containers as well. The\nfirst set of methods is used either to emulate a sequence or to\nemulate a mapping; the difference is that for a sequence, the\nallowable keys should be the integers *k* for which ``0 <= k < N``\nwhere *N* is the length of the sequence, or slice objects, which\ndefine a range of items. (For backwards compatibility, the method\n``__getslice__()`` (see below) can also be defined to handle simple,\nbut not extended slices.) It is also recommended that mappings provide\nthe methods ``keys()``, ``values()``, ``items()``, ``has_key()``,\n``get()``, ``clear()``, ``setdefault()``, ``iterkeys()``,\n``itervalues()``, ``iteritems()``, ``pop()``, ``popitem()``,\n``copy()``, and ``update()`` behaving similar to those for Python\'s\nstandard dictionary objects. The ``UserDict`` module provides a\n``DictMixin`` class to help create those methods from a base set of\n``__getitem__()``, ``__setitem__()``, ``__delitem__()``, and\n``keys()``. Mutable sequences should provide methods ``append()``,\n``count()``, ``index()``, ``extend()``, ``insert()``, ``pop()``,\n``remove()``, ``reverse()`` and ``sort()``, like Python standard list\nobjects. Finally, sequence types should implement addition (meaning\nconcatenation) and multiplication (meaning repetition) by defining the\nmethods ``__add__()``, ``__radd__()``, ``__iadd__()``, ``__mul__()``,\n``__rmul__()`` and ``__imul__()`` described below; they should not\ndefine ``__coerce__()`` or other numerical operators. It is\nrecommended that both mappings and sequences implement the\n``__contains__()`` method to allow efficient use of the ``in``\noperator; for mappings, ``in`` should be equivalent of ``has_key()``;\nfor sequences, it should search through the values. It is further\nrecommended that both mappings and sequences implement the\n``__iter__()`` method to allow efficient iteration through the\ncontainer; for mappings, ``__iter__()`` should be the same as\n``iterkeys()``; for sequences, it should iterate through the values.\n\nobject.__len__(self)\n\n Called to implement the built-in function ``len()``. Should return\n the length of the object, an integer ``>=`` 0. 
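For instance, a minimal callable object; the ``Adder`` class is invented for illustration:

   class Adder(object):
       def __init__(self, amount):
           self.amount = amount

       def __call__(self, value):
           # Adder(3)(4) is shorthand for Adder(3).__call__(4).
           return value + self.amount

   add3 = Adder(3)
   assert add3(4) == 7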
Also, an object\n that doesn\'t define a ``__nonzero__()`` method and whose\n ``__len__()`` method returns zero is considered to be false in a\n Boolean context.\n\nobject.__getitem__(self, key)\n\n Called to implement evaluation of ``self[key]``. For sequence\n types, the accepted keys should be integers and slice objects.\n Note that the special interpretation of negative indexes (if the\n class wishes to emulate a sequence type) is up to the\n ``__getitem__()`` method. If *key* is of an inappropriate type,\n ``TypeError`` may be raised; if of a value outside the set of\n indexes for the sequence (after any special interpretation of\n negative values), ``IndexError`` should be raised. For mapping\n types, if *key* is missing (not in the container), ``KeyError``\n should be raised.\n\n Note: ``for`` loops expect that an ``IndexError`` will be raised for\n illegal indexes to allow proper detection of the end of the\n sequence.\n\nobject.__setitem__(self, key, value)\n\n Called to implement assignment to ``self[key]``. Same note as for\n ``__getitem__()``. This should only be implemented for mappings if\n the objects support changes to the values for keys, or if new keys\n can be added, or for sequences if elements can be replaced. The\n same exceptions should be raised for improper *key* values as for\n the ``__getitem__()`` method.\n\nobject.__delitem__(self, key)\n\n Called to implement deletion of ``self[key]``. Same note as for\n ``__getitem__()``. This should only be implemented for mappings if\n the objects support removal of keys, or for sequences if elements\n can be removed from the sequence. The same exceptions should be\n raised for improper *key* values as for the ``__getitem__()``\n method.\n\nobject.__iter__(self)\n\n This method is called when an iterator is required for a container.\n This method should return a new iterator object that can iterate\n over all the objects in the container. For mappings, it should\n iterate over the keys of the container, and should also be made\n available as the method ``iterkeys()``.\n\n Iterator objects also need to implement this method; they are\n required to return themselves. For more information on iterator\n objects, see *Iterator Types*.\n\nobject.__reversed__(self)\n\n Called (if present) by the ``reversed()`` built-in to implement\n reverse iteration. It should return a new iterator object that\n iterates over all the objects in the container in reverse order.\n\n If the ``__reversed__()`` method is not provided, the\n ``reversed()`` built-in will fall back to using the sequence\n protocol (``__len__()`` and ``__getitem__()``). Objects that\n support the sequence protocol should only provide\n ``__reversed__()`` if they can provide an implementation that is\n more efficient than the one provided by ``reversed()``.\n\n New in version 2.6.\n\nThe membership test operators (``in`` and ``not in``) are normally\nimplemented as an iteration through a sequence. However, container\nobjects can supply the following special method with a more efficient\nimplementation, which also does not require the object be a sequence.\n\nobject.__contains__(self, item)\n\n Called to implement membership test operators. Should return true\n if *item* is in *self*, false otherwise. 
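A small sketch of a sequence-like container built from the methods above; the ``SparseRow`` class is hypothetical:

   class SparseRow(object):
       # Stores only the non-zero cells; everything else reads as 0.
       def __init__(self, length):
           self._cells = {}
           self._length = length

       def __len__(self):
           return self._length

       def __getitem__(self, index):
           if not 0 <= index < self._length:
               raise IndexError(index)
           return self._cells.get(index, 0)

       def __setitem__(self, index, value):
           self._cells[index] = value

       def __iter__(self):
           for i in range(self._length):
               yield self[i]

   row = SparseRow(4)
   row[2] = 5
   assert list(row) == [0, 0, 5, 0]
   assert 5 in row          # no __contains__, so iteration is used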
For mapping objects, this\n should consider the keys of the mapping rather than the values or\n the key-item pairs.\n\n For objects that don\'t define ``__contains__()``, the membership\n test first tries iteration via ``__iter__()``, then the old\n sequence iteration protocol via ``__getitem__()``, see *this\n section in the language reference*.\n\n\nAdditional methods for emulation of sequence types\n==================================================\n\nThe following optional methods can be defined to further emulate\nsequence objects. Immutable sequences methods should at most only\ndefine ``__getslice__()``; mutable sequences might define all three\nmethods.\n\nobject.__getslice__(self, i, j)\n\n Deprecated since version 2.0: Support slice objects as parameters\n to the ``__getitem__()`` method. (However, built-in types in\n CPython currently still implement ``__getslice__()``. Therefore,\n you have to override it in derived classes when implementing\n slicing.)\n\n Called to implement evaluation of ``self[i:j]``. The returned\n object should be of the same type as *self*. Note that missing *i*\n or *j* in the slice expression are replaced by zero or\n ``sys.maxint``, respectively. If negative indexes are used in the\n slice, the length of the sequence is added to that index. If the\n instance does not implement the ``__len__()`` method, an\n ``AttributeError`` is raised. No guarantee is made that indexes\n adjusted this way are not still negative. Indexes which are\n greater than the length of the sequence are not modified. If no\n ``__getslice__()`` is found, a slice object is created instead, and\n passed to ``__getitem__()`` instead.\n\nobject.__setslice__(self, i, j, sequence)\n\n Called to implement assignment to ``self[i:j]``. Same notes for *i*\n and *j* as for ``__getslice__()``.\n\n This method is deprecated. If no ``__setslice__()`` is found, or\n for extended slicing of the form ``self[i:j:k]``, a slice object is\n created, and passed to ``__setitem__()``, instead of\n ``__setslice__()`` being called.\n\nobject.__delslice__(self, i, j)\n\n Called to implement deletion of ``self[i:j]``. Same notes for *i*\n and *j* as for ``__getslice__()``. This method is deprecated. If no\n ``__delslice__()`` is found, or for extended slicing of the form\n ``self[i:j:k]``, a slice object is created, and passed to\n ``__delitem__()``, instead of ``__delslice__()`` being called.\n\nNotice that these methods are only invoked when a single slice with a\nsingle colon is used, and the slice method is available. 
For slice\noperations involving extended slice notation, or in absence of the\nslice methods, ``__getitem__()``, ``__setitem__()`` or\n``__delitem__()`` is called with a slice object as argument.\n\nThe following example demonstrate how to make your program or module\ncompatible with earlier versions of Python (assuming that methods\n``__getitem__()``, ``__setitem__()`` and ``__delitem__()`` support\nslice objects as arguments):\n\n class MyClass:\n ...\n def __getitem__(self, index):\n ...\n def __setitem__(self, index, value):\n ...\n def __delitem__(self, index):\n ...\n\n if sys.version_info < (2, 0):\n # They won\'t be defined if version is at least 2.0 final\n\n def __getslice__(self, i, j):\n return self[max(0, i):max(0, j):]\n def __setslice__(self, i, j, seq):\n self[max(0, i):max(0, j):] = seq\n def __delslice__(self, i, j):\n del self[max(0, i):max(0, j):]\n ...\n\nNote the calls to ``max()``; these are necessary because of the\nhandling of negative indices before the ``__*slice__()`` methods are\ncalled. When negative indexes are used, the ``__*item__()`` methods\nreceive them as provided, but the ``__*slice__()`` methods get a\n"cooked" form of the index values. For each negative index value, the\nlength of the sequence is added to the index before calling the method\n(which may still result in a negative index); this is the customary\nhandling of negative indexes by the built-in sequence types, and the\n``__*item__()`` methods are expected to do this as well. However,\nsince they should already be doing that, negative indexes cannot be\npassed in; they must be constrained to the bounds of the sequence\nbefore being passed to the ``__*item__()`` methods. Calling ``max(0,\ni)`` conveniently returns the proper value.\n\n\nEmulating numeric types\n=======================\n\nThe following methods can be defined to emulate numeric objects.\nMethods corresponding to operations that are not supported by the\nparticular kind of number implemented (e.g., bitwise operations for\nnon-integral numbers) should be left undefined.\n\nobject.__add__(self, other)\nobject.__sub__(self, other)\nobject.__mul__(self, other)\nobject.__floordiv__(self, other)\nobject.__mod__(self, other)\nobject.__divmod__(self, other)\nobject.__pow__(self, other[, modulo])\nobject.__lshift__(self, other)\nobject.__rshift__(self, other)\nobject.__and__(self, other)\nobject.__xor__(self, other)\nobject.__or__(self, other)\n\n These methods are called to implement the binary arithmetic\n operations (``+``, ``-``, ``*``, ``//``, ``%``, ``divmod()``,\n ``pow()``, ``**``, ``<<``, ``>>``, ``&``, ``^``, ``|``). For\n instance, to evaluate the expression ``x + y``, where *x* is an\n instance of a class that has an ``__add__()`` method,\n ``x.__add__(y)`` is called. The ``__divmod__()`` method should be\n the equivalent to using ``__floordiv__()`` and ``__mod__()``; it\n should not be related to ``__truediv__()`` (described below). Note\n that ``__pow__()`` should be defined to accept an optional third\n argument if the ternary version of the built-in ``pow()`` function\n is to be supported.\n\n If one of those methods does not support the operation with the\n supplied arguments, it should return ``NotImplemented``.\n\nobject.__div__(self, other)\nobject.__truediv__(self, other)\n\n The division operator (``/``) is implemented by these methods. The\n ``__truediv__()`` method is used when ``__future__.division`` is in\n effect, otherwise ``__div__()`` is used. 
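A minimal sketch of the binary arithmetic methods, returning ``NotImplemented`` for unsupported operand types; the ``Vector`` class is made up for illustration:

   class Vector(object):
       def __init__(self, x, y):
           self.x, self.y = x, y

       def __add__(self, other):
           if not isinstance(other, Vector):
               return NotImplemented    # signals "I don't handle this type"
           return Vector(self.x + other.x, self.y + other.y)

       def __mul__(self, scalar):
           return Vector(self.x * scalar, self.y * scalar)

   v = Vector(1, 2) + Vector(3, 4)      # components (4, 6)
   w = Vector(1, 2) * 3                 # components (3, 6)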
If only one of these two\n methods is defined, the object will not support division in the\n alternate context; ``TypeError`` will be raised instead.\n\nobject.__radd__(self, other)\nobject.__rsub__(self, other)\nobject.__rmul__(self, other)\nobject.__rdiv__(self, other)\nobject.__rtruediv__(self, other)\nobject.__rfloordiv__(self, other)\nobject.__rmod__(self, other)\nobject.__rdivmod__(self, other)\nobject.__rpow__(self, other)\nobject.__rlshift__(self, other)\nobject.__rrshift__(self, other)\nobject.__rand__(self, other)\nobject.__rxor__(self, other)\nobject.__ror__(self, other)\n\n These methods are called to implement the binary arithmetic\n operations (``+``, ``-``, ``*``, ``/``, ``%``, ``divmod()``,\n ``pow()``, ``**``, ``<<``, ``>>``, ``&``, ``^``, ``|``) with\n reflected (swapped) operands. These functions are only called if\n the left operand does not support the corresponding operation and\n the operands are of different types. [2] For instance, to evaluate\n the expression ``x - y``, where *y* is an instance of a class that\n has an ``__rsub__()`` method, ``y.__rsub__(x)`` is called if\n ``x.__sub__(y)`` returns *NotImplemented*.\n\n Note that ternary ``pow()`` will not try calling ``__rpow__()``\n (the coercion rules would become too complicated).\n\n Note: If the right operand\'s type is a subclass of the left operand\'s\n type and that subclass provides the reflected method for the\n operation, this method will be called before the left operand\'s\n non-reflected method. This behavior allows subclasses to\n override their ancestors\' operations.\n\nobject.__iadd__(self, other)\nobject.__isub__(self, other)\nobject.__imul__(self, other)\nobject.__idiv__(self, other)\nobject.__itruediv__(self, other)\nobject.__ifloordiv__(self, other)\nobject.__imod__(self, other)\nobject.__ipow__(self, other[, modulo])\nobject.__ilshift__(self, other)\nobject.__irshift__(self, other)\nobject.__iand__(self, other)\nobject.__ixor__(self, other)\nobject.__ior__(self, other)\n\n These methods are called to implement the augmented arithmetic\n assignments (``+=``, ``-=``, ``*=``, ``/=``, ``//=``, ``%=``,\n ``**=``, ``<<=``, ``>>=``, ``&=``, ``^=``, ``|=``). These methods\n should attempt to do the operation in-place (modifying *self*) and\n return the result (which could be, but does not have to be,\n *self*). If a specific method is not defined, the augmented\n assignment falls back to the normal methods. For instance, to\n execute the statement ``x += y``, where *x* is an instance of a\n class that has an ``__iadd__()`` method, ``x.__iadd__(y)`` is\n called. If *x* is an instance of a class that does not define a\n ``__iadd__()`` method, ``x.__add__(y)`` and ``y.__radd__(x)`` are\n considered, as with the evaluation of ``x + y``.\n\nobject.__neg__(self)\nobject.__pos__(self)\nobject.__abs__(self)\nobject.__invert__(self)\n\n Called to implement the unary arithmetic operations (``-``, ``+``,\n ``abs()`` and ``~``).\n\nobject.__complex__(self)\nobject.__int__(self)\nobject.__long__(self)\nobject.__float__(self)\n\n Called to implement the built-in functions ``complex()``,\n ``int()``, ``long()``, and ``float()``. Should return a value of\n the appropriate type.\n\nobject.__oct__(self)\nobject.__hex__(self)\n\n Called to implement the built-in functions ``oct()`` and ``hex()``.\n Should return a string value.\n\nobject.__index__(self)\n\n Called to implement ``operator.index()``. Also called whenever\n Python needs an integer object (such as in slicing). 
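For example, a minimal in-place operator that modifies *self* and returns it; the ``Counter`` class is hypothetical:

   class Counter(object):
       def __init__(self):
           self.count = 0

       def __iadd__(self, other):
           # Modify in place and return the result, here self itself.
           self.count += other
           return self

   c = Counter()
   c += 5
   assert c.count == 5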
Must return\n an integer (int or long).\n\n New in version 2.5.\n\nobject.__coerce__(self, other)\n\n Called to implement "mixed-mode" numeric arithmetic. Should either\n return a 2-tuple containing *self* and *other* converted to a\n common numeric type, or ``None`` if conversion is impossible. When\n the common type would be the type of ``other``, it is sufficient to\n return ``None``, since the interpreter will also ask the other\n object to attempt a coercion (but sometimes, if the implementation\n of the other type cannot be changed, it is useful to do the\n conversion to the other type here). A return value of\n ``NotImplemented`` is equivalent to returning ``None``.\n\n\nCoercion rules\n==============\n\nThis section used to document the rules for coercion. As the language\nhas evolved, the coercion rules have become hard to document\nprecisely; documenting what one version of one particular\nimplementation does is undesirable. Instead, here are some informal\nguidelines regarding coercion. In Python 3.0, coercion will not be\nsupported.\n\n* If the left operand of a % operator is a string or Unicode object,\n no coercion takes place and the string formatting operation is\n invoked instead.\n\n* It is no longer recommended to define a coercion operation. Mixed-\n mode operations on types that don\'t define coercion pass the\n original arguments to the operation.\n\n* New-style classes (those derived from ``object``) never invoke the\n ``__coerce__()`` method in response to a binary operator; the only\n time ``__coerce__()`` is invoked is when the built-in function\n ``coerce()`` is called.\n\n* For most intents and purposes, an operator that returns\n ``NotImplemented`` is treated the same as one that is not\n implemented at all.\n\n* Below, ``__op__()`` and ``__rop__()`` are used to signify the\n generic method names corresponding to an operator; ``__iop__()`` is\n used for the corresponding in-place operator. For example, for the\n operator \'``+``\', ``__add__()`` and ``__radd__()`` are used for the\n left and right variant of the binary operator, and ``__iadd__()``\n for the in-place variant.\n\n* For objects *x* and *y*, first ``x.__op__(y)`` is tried. If this is\n not implemented or returns ``NotImplemented``, ``y.__rop__(x)`` is\n tried. If this is also not implemented or returns\n ``NotImplemented``, a ``TypeError`` exception is raised. But see\n the following exception:\n\n* Exception to the previous item: if the left operand is an instance\n of a built-in type or a new-style class, and the right operand is an\n instance of a proper subclass of that type or class and overrides\n the base\'s ``__rop__()`` method, the right operand\'s ``__rop__()``\n method is tried *before* the left operand\'s ``__op__()`` method.\n\n This is done so that a subclass can completely override binary\n operators. Otherwise, the left operand\'s ``__op__()`` method would\n always accept the right operand: when an instance of a given class\n is expected, an instance of a subclass of that class is always\n acceptable.\n\n* When either operand type defines a coercion, this coercion is called\n before that type\'s ``__op__()`` or ``__rop__()`` method is called,\n but no sooner. If the coercion returns an object of a different\n type for the operand whose coercion is invoked, part of the process\n is redone using the new object.\n\n* When an in-place operator (like \'``+=``\') is used, if the left\n operand implements ``__iop__()``, it is invoked without any\n coercion. 
When the operation falls back to ``__op__()`` and/or\n ``__rop__()``, the normal coercion rules apply.\n\n* In ``x + y``, if *x* is a sequence that implements sequence\n concatenation, sequence concatenation is invoked.\n\n* In ``x * y``, if one operator is a sequence that implements sequence\n repetition, and the other is an integer (``int`` or ``long``),\n sequence repetition is invoked.\n\n* Rich comparisons (implemented by methods ``__eq__()`` and so on)\n never use coercion. Three-way comparison (implemented by\n ``__cmp__()``) does use coercion under the same conditions as other\n binary operations use it.\n\n* In the current implementation, the built-in numeric types ``int``,\n ``long``, ``float``, and ``complex`` do not use coercion. All these\n types implement a ``__coerce__()`` method, for use by the built-in\n ``coerce()`` function.\n\n Changed in version 2.7.\n\n\nWith Statement Context Managers\n===============================\n\nNew in version 2.5.\n\nA *context manager* is an object that defines the runtime context to\nbe established when executing a ``with`` statement. The context\nmanager handles the entry into, and the exit from, the desired runtime\ncontext for the execution of the block of code. Context managers are\nnormally invoked using the ``with`` statement (described in section\n*The with statement*), but can also be used by directly invoking their\nmethods.\n\nTypical uses of context managers include saving and restoring various\nkinds of global state, locking and unlocking resources, closing opened\nfiles, etc.\n\nFor more information on context managers, see *Context Manager Types*.\n\nobject.__enter__(self)\n\n Enter the runtime context related to this object. The ``with``\n statement will bind this method\'s return value to the target(s)\n specified in the ``as`` clause of the statement, if any.\n\nobject.__exit__(self, exc_type, exc_value, traceback)\n\n Exit the runtime context related to this object. The parameters\n describe the exception that caused the context to be exited. If the\n context was exited without an exception, all three arguments will\n be ``None``.\n\n If an exception is supplied, and the method wishes to suppress the\n exception (i.e., prevent it from being propagated), it should\n return a true value. Otherwise, the exception will be processed\n normally upon exit from this method.\n\n Note that ``__exit__()`` methods should not reraise the passed-in\n exception; this is the caller\'s responsibility.\n\nSee also:\n\n **PEP 0343** - The "with" statement\n The specification, background, and examples for the Python\n ``with`` statement.\n\n\nSpecial method lookup for old-style classes\n===========================================\n\nFor old-style classes, special methods are always looked up in exactly\nthe same way as any other method or attribute. This is the case\nregardless of whether the method is being looked up explicitly as in\n``x.__getitem__(i)`` or implicitly as in ``x[i]``.\n\nThis behaviour means that special methods may exhibit different\nbehaviour for different instances of a single old-style class if the\nappropriate special attributes are set differently:\n\n >>> class C:\n ... 
pass\n ...\n >>> c1 = C()\n >>> c2 = C()\n >>> c1.__len__ = lambda: 5\n >>> c2.__len__ = lambda: 9\n >>> len(c1)\n 5\n >>> len(c2)\n 9\n\n\nSpecial method lookup for new-style classes\n===========================================\n\nFor new-style classes, implicit invocations of special methods are\nonly guaranteed to work correctly if defined on an object\'s type, not\nin the object\'s instance dictionary. That behaviour is the reason why\nthe following code raises an exception (unlike the equivalent example\nwith old-style classes):\n\n >>> class C(object):\n ... pass\n ...\n >>> c = C()\n >>> c.__len__ = lambda: 5\n >>> len(c)\n Traceback (most recent call last):\n File "", line 1, in \n TypeError: object of type \'C\' has no len()\n\nThe rationale behind this behaviour lies with a number of special\nmethods such as ``__hash__()`` and ``__repr__()`` that are implemented\nby all objects, including type objects. If the implicit lookup of\nthese methods used the conventional lookup process, they would fail\nwhen invoked on the type object itself:\n\n >>> 1 .__hash__() == hash(1)\n True\n >>> int.__hash__() == hash(int)\n Traceback (most recent call last):\n File "", line 1, in \n TypeError: descriptor \'__hash__\' of \'int\' object needs an argument\n\nIncorrectly attempting to invoke an unbound method of a class in this\nway is sometimes referred to as \'metaclass confusion\', and is avoided\nby bypassing the instance when looking up special methods:\n\n >>> type(1).__hash__(1) == hash(1)\n True\n >>> type(int).__hash__(int) == hash(int)\n True\n\nIn addition to bypassing any instance attributes in the interest of\ncorrectness, implicit special method lookup generally also bypasses\nthe ``__getattribute__()`` method even of the object\'s metaclass:\n\n >>> class Meta(type):\n ... def __getattribute__(*args):\n ... print "Metaclass getattribute invoked"\n ... return type.__getattribute__(*args)\n ...\n >>> class C(object):\n ... __metaclass__ = Meta\n ... def __len__(self):\n ... return 10\n ... def __getattribute__(*args):\n ... print "Class getattribute invoked"\n ... return object.__getattribute__(*args)\n ...\n >>> c = C()\n >>> c.__len__() # Explicit lookup via instance\n Class getattribute invoked\n 10\n >>> type(c).__len__(c) # Explicit lookup via type\n Metaclass getattribute invoked\n 10\n >>> len(c) # Implicit lookup\n 10\n\nBypassing the ``__getattribute__()`` machinery in this fashion\nprovides significant scope for speed optimisations within the\ninterpreter, at the cost of some flexibility in the handling of\nspecial methods (the special method *must* be set on the class object\nitself in order to be consistently invoked by the interpreter).\n\n-[ Footnotes ]-\n\n[1] It *is* possible in some cases to change an object\'s type, under\n certain controlled conditions. 
It generally isn\'t a good idea\n though, since it can lead to some very strange behaviour if it is\n handled incorrectly.\n\n[2] For operands of the same type, it is assumed that if the non-\n reflected method (such as ``__add__()``) fails the operation is\n not supported, which is why the reflected method is not called.\n', 'string-conversions': u'\nString conversions\n******************\n\nA string conversion is an expression list enclosed in reverse (a.k.a.\nbackward) quotes:\n\n string_conversion ::= "\'" expression_list "\'"\n\nA string conversion evaluates the contained expression list and\nconverts the resulting object into a string according to rules\nspecific to its type.\n\nIf the object is a string, a number, ``None``, or a tuple, list or\ndictionary containing only objects whose type is one of these, the\nresulting string is a valid Python expression which can be passed to\nthe built-in function ``eval()`` to yield an expression with the same\nvalue (or an approximation, if floating point numbers are involved).\n\n(In particular, converting a string adds quotes around it and converts\n"funny" characters to escape sequences that are safe to print.)\n\nRecursive objects (for example, lists or dictionaries that contain a\nreference to themselves, directly or indirectly) use ``...`` to\nindicate a recursive reference, and the result cannot be passed to\n``eval()`` to get an equal value (``SyntaxError`` will be raised\ninstead).\n\nThe built-in function ``repr()`` performs exactly the same conversion\nin its argument as enclosing it in parentheses and reverse quotes\ndoes. The built-in function ``str()`` performs a similar but more\nuser-friendly conversion.\n', - 'string-methods': u'\nString Methods\n**************\n\nBelow are listed the string methods which both 8-bit strings and\nUnicode objects support.\n\nIn addition, Python\'s strings support the sequence type methods\ndescribed in the *Sequence Types --- str, unicode, list, tuple,\nbuffer, xrange* section. To output formatted strings use template\nstrings or the ``%`` operator described in the *String Formatting\nOperations* section. Also, see the ``re`` module for string functions\nbased on regular expressions.\n\nstr.capitalize()\n\n Return a copy of the string with only its first character\n capitalized.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.center(width[, fillchar])\n\n Return centered in a string of length *width*. Padding is done\n using the specified *fillchar* (default is a space).\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.count(sub[, start[, end]])\n\n Return the number of non-overlapping occurrences of substring *sub*\n in the range [*start*, *end*]. Optional arguments *start* and\n *end* are interpreted as in slice notation.\n\nstr.decode([encoding[, errors]])\n\n Decodes the string using the codec registered for *encoding*.\n *encoding* defaults to the default string encoding. *errors* may\n be given to set a different error handling scheme. The default is\n ``\'strict\'``, meaning that encoding errors raise ``UnicodeError``.\n Other possible values are ``\'ignore\'``, ``\'replace\'`` and any other\n name registered via ``codecs.register_error()``, see section *Codec\n Base Classes*.\n\n New in version 2.2.\n\n Changed in version 2.3: Support for other error handling schemes\n added.\n\n Changed in version 2.7: Support for keyword arguments added.\n\nstr.encode([encoding[, errors]])\n\n Return an encoded version of the string. 
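A short, illustrative round trip between 8-bit strings and Unicode using ``decode()`` and ``encode()``; the byte values shown assume UTF-8 input:

   u = 'caf\xc3\xa9'.decode('utf-8')    # 8-bit string -> unicode
   s = u.encode('utf-8')                # unicode -> 8-bit string
   assert s == 'caf\xc3\xa9'
   u.encode('ascii', 'replace')         # 'caf?' -- lossy, but no UnicodeError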
Default encoding is the\n current default string encoding. *errors* may be given to set a\n different error handling scheme. The default for *errors* is\n ``\'strict\'``, meaning that encoding errors raise a\n ``UnicodeError``. Other possible values are ``\'ignore\'``,\n ``\'replace\'``, ``\'xmlcharrefreplace\'``, ``\'backslashreplace\'`` and\n any other name registered via ``codecs.register_error()``, see\n section *Codec Base Classes*. For a list of possible encodings, see\n section *Standard Encodings*.\n\n New in version 2.0.\n\n Changed in version 2.3: Support for ``\'xmlcharrefreplace\'`` and\n ``\'backslashreplace\'`` and other error handling schemes added.\n\n Changed in version 2.7: Support for keyword arguments added.\n\nstr.endswith(suffix[, start[, end]])\n\n Return ``True`` if the string ends with the specified *suffix*,\n otherwise return ``False``. *suffix* can also be a tuple of\n suffixes to look for. With optional *start*, test beginning at\n that position. With optional *end*, stop comparing at that\n position.\n\n Changed in version 2.5: Accept tuples as *suffix*.\n\nstr.expandtabs([tabsize])\n\n Return a copy of the string where all tab characters are replaced\n by one or more spaces, depending on the current column and the\n given tab size. The column number is reset to zero after each\n newline occurring in the string. If *tabsize* is not given, a tab\n size of ``8`` characters is assumed. This doesn\'t understand other\n non-printing characters or escape sequences.\n\nstr.find(sub[, start[, end]])\n\n Return the lowest index in the string where substring *sub* is\n found, such that *sub* is contained in the slice ``s[start:end]``.\n Optional arguments *start* and *end* are interpreted as in slice\n notation. Return ``-1`` if *sub* is not found.\n\nstr.format(*args, **kwargs)\n\n Perform a string formatting operation. The string on which this\n method is called can contain literal text or replacement fields\n delimited by braces ``{}``. Each replacement field contains either\n the numeric index of a positional argument, or the name of a\n keyword argument. 
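As a small supplement to the replacement-field syntax just described for ``str.format()`` (the names and values are invented for illustration), positional and keyword fields in a CPython 2.7 session:

    >>> "{0} is {age} years old".format('Guido', age=42)
    'Guido is 42 years old'
    >>> "{1}, {0}".format('a', 'b')
    'b, a'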
Returns a copy of the string where each\n replacement field is replaced with the string value of the\n corresponding argument.\n\n >>> "The sum of 1 + 2 is {0}".format(1+2)\n \'The sum of 1 + 2 is 3\'\n\n See *Format String Syntax* for a description of the various\n formatting options that can be specified in format strings.\n\n This method of string formatting is the new standard in Python 3.0,\n and should be preferred to the ``%`` formatting described in\n *String Formatting Operations* in new code.\n\n New in version 2.6.\n\nstr.index(sub[, start[, end]])\n\n Like ``find()``, but raise ``ValueError`` when the substring is not\n found.\n\nstr.isalnum()\n\n Return true if all characters in the string are alphanumeric and\n there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isalpha()\n\n Return true if all characters in the string are alphabetic and\n there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isdigit()\n\n Return true if all characters in the string are digits and there is\n at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.islower()\n\n Return true if all cased characters in the string are lowercase and\n there is at least one cased character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isspace()\n\n Return true if there are only whitespace characters in the string\n and there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.istitle()\n\n Return true if the string is a titlecased string and there is at\n least one character, for example uppercase characters may only\n follow uncased characters and lowercase characters only cased ones.\n Return false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isupper()\n\n Return true if all cased characters in the string are uppercase and\n there is at least one cased character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.join(iterable)\n\n Return a string which is the concatenation of the strings in the\n *iterable* *iterable*. The separator between elements is the\n string providing this method.\n\nstr.ljust(width[, fillchar])\n\n Return the string left justified in a string of length *width*.\n Padding is done using the specified *fillchar* (default is a\n space). The original string is returned if *width* is less than\n ``len(s)``.\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.lower()\n\n Return a copy of the string converted to lowercase.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.lstrip([chars])\n\n Return a copy of the string with leading characters removed. The\n *chars* argument is a string specifying the set of characters to be\n removed. If omitted or ``None``, the *chars* argument defaults to\n removing whitespace. The *chars* argument is not a prefix; rather,\n all combinations of its values are stripped:\n\n >>> \' spacious \'.lstrip()\n \'spacious \'\n >>> \'www.example.com\'.lstrip(\'cmowz.\')\n \'example.com\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.partition(sep)\n\n Split the string at the first occurrence of *sep*, and return a\n 3-tuple containing the part before the separator, the separator\n itself, and the part after the separator. 
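A short illustrative doctest for ``str.partition()`` as just described (the e-mail-like sample string is arbitrary), assuming a CPython 2.7 session:

    >>> 'user@example.com'.partition('@')
    ('user', '@', 'example.com')
    >>> 'no-separator-here'.partition('@')
    ('no-separator-here', '', '')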
If the separator is not\n found, return a 3-tuple containing the string itself, followed by\n two empty strings.\n\n New in version 2.5.\n\nstr.replace(old, new[, count])\n\n Return a copy of the string with all occurrences of substring *old*\n replaced by *new*. If the optional argument *count* is given, only\n the first *count* occurrences are replaced.\n\nstr.rfind(sub[, start[, end]])\n\n Return the highest index in the string where substring *sub* is\n found, such that *sub* is contained within ``s[start:end]``.\n Optional arguments *start* and *end* are interpreted as in slice\n notation. Return ``-1`` on failure.\n\nstr.rindex(sub[, start[, end]])\n\n Like ``rfind()`` but raises ``ValueError`` when the substring *sub*\n is not found.\n\nstr.rjust(width[, fillchar])\n\n Return the string right justified in a string of length *width*.\n Padding is done using the specified *fillchar* (default is a\n space). The original string is returned if *width* is less than\n ``len(s)``.\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.rpartition(sep)\n\n Split the string at the last occurrence of *sep*, and return a\n 3-tuple containing the part before the separator, the separator\n itself, and the part after the separator. If the separator is not\n found, return a 3-tuple containing two empty strings, followed by\n the string itself.\n\n New in version 2.5.\n\nstr.rsplit([sep[, maxsplit]])\n\n Return a list of the words in the string, using *sep* as the\n delimiter string. If *maxsplit* is given, at most *maxsplit* splits\n are done, the *rightmost* ones. If *sep* is not specified or\n ``None``, any whitespace string is a separator. Except for\n splitting from the right, ``rsplit()`` behaves like ``split()``\n which is described in detail below.\n\n New in version 2.4.\n\nstr.rstrip([chars])\n\n Return a copy of the string with trailing characters removed. The\n *chars* argument is a string specifying the set of characters to be\n removed. If omitted or ``None``, the *chars* argument defaults to\n removing whitespace. The *chars* argument is not a suffix; rather,\n all combinations of its values are stripped:\n\n >>> \' spacious \'.rstrip()\n \' spacious\'\n >>> \'mississippi\'.rstrip(\'ipz\')\n \'mississ\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.split([sep[, maxsplit]])\n\n Return a list of the words in the string, using *sep* as the\n delimiter string. If *maxsplit* is given, at most *maxsplit*\n splits are done (thus, the list will have at most ``maxsplit+1``\n elements). If *maxsplit* is not specified, then there is no limit\n on the number of splits (all possible splits are made).\n\n If *sep* is given, consecutive delimiters are not grouped together\n and are deemed to delimit empty strings (for example,\n ``\'1,,2\'.split(\',\')`` returns ``[\'1\', \'\', \'2\']``). The *sep*\n argument may consist of multiple characters (for example,\n ``\'1<>2<>3\'.split(\'<>\')`` returns ``[\'1\', \'2\', \'3\']``). Splitting\n an empty string with a specified separator returns ``[\'\']``.\n\n If *sep* is not specified or is ``None``, a different splitting\n algorithm is applied: runs of consecutive whitespace are regarded\n as a single separator, and the result will contain no empty strings\n at the start or end if the string has leading or trailing\n whitespace. 
Consequently, splitting an empty string or a string\n consisting of just whitespace with a ``None`` separator returns\n ``[]``.\n\n For example, ``\' 1 2 3 \'.split()`` returns ``[\'1\', \'2\', \'3\']``,\n and ``\' 1 2 3 \'.split(None, 1)`` returns ``[\'1\', \'2 3 \']``.\n\nstr.splitlines([keepends])\n\n Return a list of the lines in the string, breaking at line\n boundaries. Line breaks are not included in the resulting list\n unless *keepends* is given and true.\n\nstr.startswith(prefix[, start[, end]])\n\n Return ``True`` if string starts with the *prefix*, otherwise\n return ``False``. *prefix* can also be a tuple of prefixes to look\n for. With optional *start*, test string beginning at that\n position. With optional *end*, stop comparing string at that\n position.\n\n Changed in version 2.5: Accept tuples as *prefix*.\n\nstr.strip([chars])\n\n Return a copy of the string with the leading and trailing\n characters removed. The *chars* argument is a string specifying the\n set of characters to be removed. If omitted or ``None``, the\n *chars* argument defaults to removing whitespace. The *chars*\n argument is not a prefix or suffix; rather, all combinations of its\n values are stripped:\n\n >>> \' spacious \'.strip()\n \'spacious\'\n >>> \'www.example.com\'.strip(\'cmowz.\')\n \'example\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.swapcase()\n\n Return a copy of the string with uppercase characters converted to\n lowercase and vice versa.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.title()\n\n Return a titlecased version of the string where words start with an\n uppercase character and the remaining characters are lowercase.\n\n The algorithm uses a simple language-independent definition of a\n word as groups of consecutive letters. The definition works in\n many contexts but it means that apostrophes in contractions and\n possessives form word boundaries, which may not be the desired\n result:\n\n >>> "they\'re bill\'s friends from the UK".title()\n "They\'Re Bill\'S Friends From The Uk"\n\n A workaround for apostrophes can be constructed using regular\n expressions:\n\n >>> import re\n >>> def titlecase(s):\n return re.sub(r"[A-Za-z]+(\'[A-Za-z]+)?",\n lambda mo: mo.group(0)[0].upper() +\n mo.group(0)[1:].lower(),\n s)\n\n >>> titlecase("they\'re bill\'s friends.")\n "They\'re Bill\'s Friends."\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.translate(table[, deletechars])\n\n Return a copy of the string where all characters occurring in the\n optional argument *deletechars* are removed, and the remaining\n characters have been mapped through the given translation table,\n which must be a string of length 256.\n\n You can use the ``maketrans()`` helper function in the ``string``\n module to create a translation table. For string objects, set the\n *table* argument to ``None`` for translations that only delete\n characters:\n\n >>> \'read this short text\'.translate(None, \'aeiou\')\n \'rd ths shrt txt\'\n\n New in version 2.6: Support for a ``None`` *table* argument.\n\n For Unicode objects, the ``translate()`` method does not accept the\n optional *deletechars* argument. Instead, it returns a copy of the\n *s* where all characters have been mapped through the given\n translation table which must be a mapping of Unicode ordinals to\n Unicode ordinals, Unicode strings or ``None``. Unmapped characters\n are left untouched. 
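For the ``string.maketrans()`` helper mentioned above in the 8-bit ``translate()`` description, a minimal sketch (the translation pairs are chosen arbitrarily), assuming a CPython 2.7 session:

    >>> import string
    >>> table = string.maketrans('abc', 'xyz')
    >>> 'aabbcc'.translate(table)
    'xxyyzz'
    >>> 'aabbcc'.translate(table, 'b')   # delete 'b', then translate the rest
    'xxzz'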
Characters mapped to ``None`` are deleted.\n Note, a more flexible approach is to create a custom character\n mapping codec using the ``codecs`` module (see ``encodings.cp1251``\n for an example).\n\nstr.upper()\n\n Return a copy of the string converted to uppercase.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.zfill(width)\n\n Return the numeric string left filled with zeros in a string of\n length *width*. A sign prefix is handled correctly. The original\n string is returned if *width* is less than ``len(s)``.\n\n New in version 2.2.2.\n\nThe following methods are present only on unicode objects:\n\nunicode.isnumeric()\n\n Return ``True`` if there are only numeric characters in S,\n ``False`` otherwise. Numeric characters include digit characters,\n and all characters that have the Unicode numeric value property,\n e.g. U+2155, VULGAR FRACTION ONE FIFTH.\n\nunicode.isdecimal()\n\n Return ``True`` if there are only decimal characters in S,\n ``False`` otherwise. Decimal characters include digit characters,\n and all characters that that can be used to form decimal-radix\n numbers, e.g. U+0660, ARABIC-INDIC DIGIT ZERO.\n', - 'strings': u'\nString literals\n***************\n\nString literals are described by the following lexical definitions:\n\n stringliteral ::= [stringprefix](shortstring | longstring)\n stringprefix ::= "r" | "u" | "ur" | "R" | "U" | "UR" | "Ur" | "uR"\n shortstring ::= "\'" shortstringitem* "\'" | \'"\' shortstringitem* \'"\'\n longstring ::= "\'\'\'" longstringitem* "\'\'\'"\n | \'"""\' longstringitem* \'"""\'\n shortstringitem ::= shortstringchar | escapeseq\n longstringitem ::= longstringchar | escapeseq\n shortstringchar ::= \n longstringchar ::= \n escapeseq ::= "\\" \n\nOne syntactic restriction not indicated by these productions is that\nwhitespace is not allowed between the **stringprefix** and the rest of\nthe string literal. The source character set is defined by the\nencoding declaration; it is ASCII if no encoding declaration is given\nin the source file; see section *Encoding declarations*.\n\nIn plain English: String literals can be enclosed in matching single\nquotes (``\'``) or double quotes (``"``). They can also be enclosed in\nmatching groups of three single or double quotes (these are generally\nreferred to as *triple-quoted strings*). The backslash (``\\``)\ncharacter is used to escape characters that otherwise have a special\nmeaning, such as newline, backslash itself, or the quote character.\nString literals may optionally be prefixed with a letter ``\'r\'`` or\n``\'R\'``; such strings are called *raw strings* and use different rules\nfor interpreting backslash escape sequences. A prefix of ``\'u\'`` or\n``\'U\'`` makes the string a Unicode string. Unicode strings use the\nUnicode character set as defined by the Unicode Consortium and ISO\n10646. Some additional escape sequences, described below, are\navailable in Unicode strings. The two prefix characters may be\ncombined; in this case, ``\'u\'`` must appear before ``\'r\'``.\n\nIn triple-quoted strings, unescaped newlines and quotes are allowed\n(and are retained), except that three unescaped quotes in a row\nterminate the string. (A "quote" is the character used to open the\nstring, i.e. either ``\'`` or ``"``.)\n\nUnless an ``\'r\'`` or ``\'R\'`` prefix is present, escape sequences in\nstrings are interpreted according to rules similar to those used by\nStandard C. 
The recognized escape sequences are:\n\n+-------------------+-----------------------------------+---------+\n| Escape Sequence | Meaning | Notes |\n+===================+===================================+=========+\n| ``\\newline`` | Ignored | |\n+-------------------+-----------------------------------+---------+\n| ``\\\\`` | Backslash (``\\``) | |\n+-------------------+-----------------------------------+---------+\n| ``\\\'`` | Single quote (``\'``) | |\n+-------------------+-----------------------------------+---------+\n| ``\\"`` | Double quote (``"``) | |\n+-------------------+-----------------------------------+---------+\n| ``\\a`` | ASCII Bell (BEL) | |\n+-------------------+-----------------------------------+---------+\n| ``\\b`` | ASCII Backspace (BS) | |\n+-------------------+-----------------------------------+---------+\n| ``\\f`` | ASCII Formfeed (FF) | |\n+-------------------+-----------------------------------+---------+\n| ``\\n`` | ASCII Linefeed (LF) | |\n+-------------------+-----------------------------------+---------+\n| ``\\N{name}`` | Character named *name* in the | |\n| | Unicode database (Unicode only) | |\n+-------------------+-----------------------------------+---------+\n| ``\\r`` | ASCII Carriage Return (CR) | |\n+-------------------+-----------------------------------+---------+\n| ``\\t`` | ASCII Horizontal Tab (TAB) | |\n+-------------------+-----------------------------------+---------+\n| ``\\uxxxx`` | Character with 16-bit hex value | (1) |\n| | *xxxx* (Unicode only) | |\n+-------------------+-----------------------------------+---------+\n| ``\\Uxxxxxxxx`` | Character with 32-bit hex value | (2) |\n| | *xxxxxxxx* (Unicode only) | |\n+-------------------+-----------------------------------+---------+\n| ``\\v`` | ASCII Vertical Tab (VT) | |\n+-------------------+-----------------------------------+---------+\n| ``\\ooo`` | Character with octal value *ooo* | (3,5) |\n+-------------------+-----------------------------------+---------+\n| ``\\xhh`` | Character with hex value *hh* | (4,5) |\n+-------------------+-----------------------------------+---------+\n\nNotes:\n\n1. Individual code units which form parts of a surrogate pair can be\n encoded using this escape sequence.\n\n2. Any Unicode character can be encoded this way, but characters\n outside the Basic Multilingual Plane (BMP) will be encoded using a\n surrogate pair if Python is compiled to use 16-bit code units (the\n default). Individual code units which form parts of a surrogate\n pair can be encoded using this escape sequence.\n\n3. As in Standard C, up to three octal digits are accepted.\n\n4. Unlike in Standard C, exactly two hex digits are required.\n\n5. In a string literal, hexadecimal and octal escapes denote the byte\n with the given value; it is not necessary that the byte encodes a\n character in the source character set. In a Unicode literal, these\n escapes denote a Unicode character with the given value.\n\nUnlike Standard C, all unrecognized escape sequences are left in the\nstring unchanged, i.e., *the backslash is left in the string*. (This\nbehavior is useful when debugging: if an escape sequence is mistyped,\nthe resulting output is more easily recognized as broken.) 
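A brief doctest-style illustration of the "unrecognized escape sequences are left unchanged" rule above, assuming a CPython 2.7 session:

    >>> len('\n'), len('\d')     # '\n' is recognized, '\d' is not
    (1, 2)
    >>> '\d'                     # the backslash stays in the string
    '\\d'
    >>> print '\d'
    \d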
It is also\nimportant to note that the escape sequences marked as "(Unicode only)"\nin the table above fall into the category of unrecognized escapes for\nnon-Unicode string literals.\n\nWhen an ``\'r\'`` or ``\'R\'`` prefix is present, a character following a\nbackslash is included in the string without change, and *all\nbackslashes are left in the string*. For example, the string literal\n``r"\\n"`` consists of two characters: a backslash and a lowercase\n``\'n\'``. String quotes can be escaped with a backslash, but the\nbackslash remains in the string; for example, ``r"\\""`` is a valid\nstring literal consisting of two characters: a backslash and a double\nquote; ``r"\\"`` is not a valid string literal (even a raw string\ncannot end in an odd number of backslashes). Specifically, *a raw\nstring cannot end in a single backslash* (since the backslash would\nescape the following quote character). Note also that a single\nbackslash followed by a newline is interpreted as those two characters\nas part of the string, *not* as a line continuation.\n\nWhen an ``\'r\'`` or ``\'R\'`` prefix is used in conjunction with a\n``\'u\'`` or ``\'U\'`` prefix, then the ``\\uXXXX`` and ``\\UXXXXXXXX``\nescape sequences are processed while *all other backslashes are left\nin the string*. For example, the string literal ``ur"\\u0062\\n"``\nconsists of three Unicode characters: \'LATIN SMALL LETTER B\', \'REVERSE\nSOLIDUS\', and \'LATIN SMALL LETTER N\'. Backslashes can be escaped with\na preceding backslash; however, both remain in the string. As a\nresult, ``\\uXXXX`` escape sequences are only recognized when there are\nan odd number of backslashes.\n', + 'string-methods': u'\nString Methods\n**************\n\nBelow are listed the string methods which both 8-bit strings and\nUnicode objects support. Some of them are also available on\n``bytearray`` objects.\n\nIn addition, Python\'s strings support the sequence type methods\ndescribed in the *Sequence Types --- str, unicode, list, tuple,\nbytearray, buffer, xrange* section. To output formatted strings use\ntemplate strings or the ``%`` operator described in the *String\nFormatting Operations* section. Also, see the ``re`` module for string\nfunctions based on regular expressions.\n\nstr.capitalize()\n\n Return a copy of the string with its first character capitalized\n and the rest lowercased.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.center(width[, fillchar])\n\n Return centered in a string of length *width*. Padding is done\n using the specified *fillchar* (default is a space).\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.count(sub[, start[, end]])\n\n Return the number of non-overlapping occurrences of substring *sub*\n in the range [*start*, *end*]. Optional arguments *start* and\n *end* are interpreted as in slice notation.\n\nstr.decode([encoding[, errors]])\n\n Decodes the string using the codec registered for *encoding*.\n *encoding* defaults to the default string encoding. *errors* may\n be given to set a different error handling scheme. 
The default is\n ``\'strict\'``, meaning that encoding errors raise ``UnicodeError``.\n Other possible values are ``\'ignore\'``, ``\'replace\'`` and any other\n name registered via ``codecs.register_error()``, see section *Codec\n Base Classes*.\n\n New in version 2.2.\n\n Changed in version 2.3: Support for other error handling schemes\n added.\n\n Changed in version 2.7: Support for keyword arguments added.\n\nstr.encode([encoding[, errors]])\n\n Return an encoded version of the string. Default encoding is the\n current default string encoding. *errors* may be given to set a\n different error handling scheme. The default for *errors* is\n ``\'strict\'``, meaning that encoding errors raise a\n ``UnicodeError``. Other possible values are ``\'ignore\'``,\n ``\'replace\'``, ``\'xmlcharrefreplace\'``, ``\'backslashreplace\'`` and\n any other name registered via ``codecs.register_error()``, see\n section *Codec Base Classes*. For a list of possible encodings, see\n section *Standard Encodings*.\n\n New in version 2.0.\n\n Changed in version 2.3: Support for ``\'xmlcharrefreplace\'`` and\n ``\'backslashreplace\'`` and other error handling schemes added.\n\n Changed in version 2.7: Support for keyword arguments added.\n\nstr.endswith(suffix[, start[, end]])\n\n Return ``True`` if the string ends with the specified *suffix*,\n otherwise return ``False``. *suffix* can also be a tuple of\n suffixes to look for. With optional *start*, test beginning at\n that position. With optional *end*, stop comparing at that\n position.\n\n Changed in version 2.5: Accept tuples as *suffix*.\n\nstr.expandtabs([tabsize])\n\n Return a copy of the string where all tab characters are replaced\n by one or more spaces, depending on the current column and the\n given tab size. The column number is reset to zero after each\n newline occurring in the string. If *tabsize* is not given, a tab\n size of ``8`` characters is assumed. This doesn\'t understand other\n non-printing characters or escape sequences.\n\nstr.find(sub[, start[, end]])\n\n Return the lowest index in the string where substring *sub* is\n found, such that *sub* is contained in the slice ``s[start:end]``.\n Optional arguments *start* and *end* are interpreted as in slice\n notation. Return ``-1`` if *sub* is not found.\n\n Note: The ``find()`` method should be used only if you need to know the\n position of *sub*. To check if *sub* is a substring or not, use\n the ``in`` operator:\n\n >>> \'Py\' in \'Python\'\n True\n\nstr.format(*args, **kwargs)\n\n Perform a string formatting operation. The string on which this\n method is called can contain literal text or replacement fields\n delimited by braces ``{}``. Each replacement field contains either\n the numeric index of a positional argument, or the name of a\n keyword argument. 
Returns a copy of the string where each\n replacement field is replaced with the string value of the\n corresponding argument.\n\n >>> "The sum of 1 + 2 is {0}".format(1+2)\n \'The sum of 1 + 2 is 3\'\n\n See *Format String Syntax* for a description of the various\n formatting options that can be specified in format strings.\n\n This method of string formatting is the new standard in Python 3.0,\n and should be preferred to the ``%`` formatting described in\n *String Formatting Operations* in new code.\n\n New in version 2.6.\n\nstr.index(sub[, start[, end]])\n\n Like ``find()``, but raise ``ValueError`` when the substring is not\n found.\n\nstr.isalnum()\n\n Return true if all characters in the string are alphanumeric and\n there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isalpha()\n\n Return true if all characters in the string are alphabetic and\n there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isdigit()\n\n Return true if all characters in the string are digits and there is\n at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.islower()\n\n Return true if all cased characters in the string are lowercase and\n there is at least one cased character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isspace()\n\n Return true if there are only whitespace characters in the string\n and there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.istitle()\n\n Return true if the string is a titlecased string and there is at\n least one character, for example uppercase characters may only\n follow uncased characters and lowercase characters only cased ones.\n Return false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isupper()\n\n Return true if all cased characters in the string are uppercase and\n there is at least one cased character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.join(iterable)\n\n Return a string which is the concatenation of the strings in the\n *iterable* *iterable*. The separator between elements is the\n string providing this method.\n\nstr.ljust(width[, fillchar])\n\n Return the string left justified in a string of length *width*.\n Padding is done using the specified *fillchar* (default is a\n space). The original string is returned if *width* is less than\n ``len(s)``.\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.lower()\n\n Return a copy of the string converted to lowercase.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.lstrip([chars])\n\n Return a copy of the string with leading characters removed. The\n *chars* argument is a string specifying the set of characters to be\n removed. If omitted or ``None``, the *chars* argument defaults to\n removing whitespace. The *chars* argument is not a prefix; rather,\n all combinations of its values are stripped:\n\n >>> \' spacious \'.lstrip()\n \'spacious \'\n >>> \'www.example.com\'.lstrip(\'cmowz.\')\n \'example.com\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.partition(sep)\n\n Split the string at the first occurrence of *sep*, and return a\n 3-tuple containing the part before the separator, the separator\n itself, and the part after the separator. 
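As a small added illustration of ``str.join()`` described above (the date-like pieces are arbitrary sample data), in a CPython 2.7 session:

    >>> '-'.join(['2012', '02', '01'])
    '2012-02-01'
    >>> ', '.join('abc')          # any iterable of strings will do
    'a, b, c'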
If the separator is not\n found, return a 3-tuple containing the string itself, followed by\n two empty strings.\n\n New in version 2.5.\n\nstr.replace(old, new[, count])\n\n Return a copy of the string with all occurrences of substring *old*\n replaced by *new*. If the optional argument *count* is given, only\n the first *count* occurrences are replaced.\n\nstr.rfind(sub[, start[, end]])\n\n Return the highest index in the string where substring *sub* is\n found, such that *sub* is contained within ``s[start:end]``.\n Optional arguments *start* and *end* are interpreted as in slice\n notation. Return ``-1`` on failure.\n\nstr.rindex(sub[, start[, end]])\n\n Like ``rfind()`` but raises ``ValueError`` when the substring *sub*\n is not found.\n\nstr.rjust(width[, fillchar])\n\n Return the string right justified in a string of length *width*.\n Padding is done using the specified *fillchar* (default is a\n space). The original string is returned if *width* is less than\n ``len(s)``.\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.rpartition(sep)\n\n Split the string at the last occurrence of *sep*, and return a\n 3-tuple containing the part before the separator, the separator\n itself, and the part after the separator. If the separator is not\n found, return a 3-tuple containing two empty strings, followed by\n the string itself.\n\n New in version 2.5.\n\nstr.rsplit([sep[, maxsplit]])\n\n Return a list of the words in the string, using *sep* as the\n delimiter string. If *maxsplit* is given, at most *maxsplit* splits\n are done, the *rightmost* ones. If *sep* is not specified or\n ``None``, any whitespace string is a separator. Except for\n splitting from the right, ``rsplit()`` behaves like ``split()``\n which is described in detail below.\n\n New in version 2.4.\n\nstr.rstrip([chars])\n\n Return a copy of the string with trailing characters removed. The\n *chars* argument is a string specifying the set of characters to be\n removed. If omitted or ``None``, the *chars* argument defaults to\n removing whitespace. The *chars* argument is not a suffix; rather,\n all combinations of its values are stripped:\n\n >>> \' spacious \'.rstrip()\n \' spacious\'\n >>> \'mississippi\'.rstrip(\'ipz\')\n \'mississ\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.split([sep[, maxsplit]])\n\n Return a list of the words in the string, using *sep* as the\n delimiter string. If *maxsplit* is given, at most *maxsplit*\n splits are done (thus, the list will have at most ``maxsplit+1``\n elements). If *maxsplit* is not specified, then there is no limit\n on the number of splits (all possible splits are made).\n\n If *sep* is given, consecutive delimiters are not grouped together\n and are deemed to delimit empty strings (for example,\n ``\'1,,2\'.split(\',\')`` returns ``[\'1\', \'\', \'2\']``). The *sep*\n argument may consist of multiple characters (for example,\n ``\'1<>2<>3\'.split(\'<>\')`` returns ``[\'1\', \'2\', \'3\']``). Splitting\n an empty string with a specified separator returns ``[\'\']``.\n\n If *sep* is not specified or is ``None``, a different splitting\n algorithm is applied: runs of consecutive whitespace are regarded\n as a single separator, and the result will contain no empty strings\n at the start or end if the string has leading or trailing\n whitespace. 
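To contrast ``rsplit()`` with ``split()`` as described above, a short illustrative doctest (sample string chosen arbitrarily), assuming CPython 2.7:

    >>> 'a,b,c,d'.split(',', 1)      # leftmost split
    ['a', 'b,c,d']
    >>> 'a,b,c,d'.rsplit(',', 1)     # rightmost split
    ['a,b,c', 'd']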
Consequently, splitting an empty string or a string\n consisting of just whitespace with a ``None`` separator returns\n ``[]``.\n\n For example, ``\' 1 2 3 \'.split()`` returns ``[\'1\', \'2\', \'3\']``,\n and ``\' 1 2 3 \'.split(None, 1)`` returns ``[\'1\', \'2 3 \']``.\n\nstr.splitlines([keepends])\n\n Return a list of the lines in the string, breaking at line\n boundaries. Line breaks are not included in the resulting list\n unless *keepends* is given and true.\n\nstr.startswith(prefix[, start[, end]])\n\n Return ``True`` if string starts with the *prefix*, otherwise\n return ``False``. *prefix* can also be a tuple of prefixes to look\n for. With optional *start*, test string beginning at that\n position. With optional *end*, stop comparing string at that\n position.\n\n Changed in version 2.5: Accept tuples as *prefix*.\n\nstr.strip([chars])\n\n Return a copy of the string with the leading and trailing\n characters removed. The *chars* argument is a string specifying the\n set of characters to be removed. If omitted or ``None``, the\n *chars* argument defaults to removing whitespace. The *chars*\n argument is not a prefix or suffix; rather, all combinations of its\n values are stripped:\n\n >>> \' spacious \'.strip()\n \'spacious\'\n >>> \'www.example.com\'.strip(\'cmowz.\')\n \'example\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.swapcase()\n\n Return a copy of the string with uppercase characters converted to\n lowercase and vice versa.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.title()\n\n Return a titlecased version of the string where words start with an\n uppercase character and the remaining characters are lowercase.\n\n The algorithm uses a simple language-independent definition of a\n word as groups of consecutive letters. The definition works in\n many contexts but it means that apostrophes in contractions and\n possessives form word boundaries, which may not be the desired\n result:\n\n >>> "they\'re bill\'s friends from the UK".title()\n "They\'Re Bill\'S Friends From The Uk"\n\n A workaround for apostrophes can be constructed using regular\n expressions:\n\n >>> import re\n >>> def titlecase(s):\n return re.sub(r"[A-Za-z]+(\'[A-Za-z]+)?",\n lambda mo: mo.group(0)[0].upper() +\n mo.group(0)[1:].lower(),\n s)\n\n >>> titlecase("they\'re bill\'s friends.")\n "They\'re Bill\'s Friends."\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.translate(table[, deletechars])\n\n Return a copy of the string where all characters occurring in the\n optional argument *deletechars* are removed, and the remaining\n characters have been mapped through the given translation table,\n which must be a string of length 256.\n\n You can use the ``maketrans()`` helper function in the ``string``\n module to create a translation table. For string objects, set the\n *table* argument to ``None`` for translations that only delete\n characters:\n\n >>> \'read this short text\'.translate(None, \'aeiou\')\n \'rd ths shrt txt\'\n\n New in version 2.6: Support for a ``None`` *table* argument.\n\n For Unicode objects, the ``translate()`` method does not accept the\n optional *deletechars* argument. Instead, it returns a copy of the\n *s* where all characters have been mapped through the given\n translation table which must be a mapping of Unicode ordinals to\n Unicode ordinals, Unicode strings or ``None``. Unmapped characters\n are left untouched. 
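An illustrative sketch of ``str.splitlines()`` from the listing above (the sample text mixes line endings on purpose), assuming CPython 2.7:

    >>> 'one\ntwo\r\nthree\n'.splitlines()
    ['one', 'two', 'three']
    >>> 'one\ntwo\r\nthree\n'.splitlines(True)   # keepends
    ['one\n', 'two\r\n', 'three\n']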
Characters mapped to ``None`` are deleted.\n Note, a more flexible approach is to create a custom character\n mapping codec using the ``codecs`` module (see ``encodings.cp1251``\n for an example).\n\nstr.upper()\n\n Return a copy of the string converted to uppercase.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.zfill(width)\n\n Return the numeric string left filled with zeros in a string of\n length *width*. A sign prefix is handled correctly. The original\n string is returned if *width* is less than ``len(s)``.\n\n New in version 2.2.2.\n\nThe following methods are present only on unicode objects:\n\nunicode.isnumeric()\n\n Return ``True`` if there are only numeric characters in S,\n ``False`` otherwise. Numeric characters include digit characters,\n and all characters that have the Unicode numeric value property,\n e.g. U+2155, VULGAR FRACTION ONE FIFTH.\n\nunicode.isdecimal()\n\n Return ``True`` if there are only decimal characters in S,\n ``False`` otherwise. Decimal characters include digit characters,\n and all characters that that can be used to form decimal-radix\n numbers, e.g. U+0660, ARABIC-INDIC DIGIT ZERO.\n', + 'strings': u'\nString literals\n***************\n\nString literals are described by the following lexical definitions:\n\n stringliteral ::= [stringprefix](shortstring | longstring)\n stringprefix ::= "r" | "u" | "ur" | "R" | "U" | "UR" | "Ur" | "uR"\n | "b" | "B" | "br" | "Br" | "bR" | "BR"\n shortstring ::= "\'" shortstringitem* "\'" | \'"\' shortstringitem* \'"\'\n longstring ::= "\'\'\'" longstringitem* "\'\'\'"\n | \'"""\' longstringitem* \'"""\'\n shortstringitem ::= shortstringchar | escapeseq\n longstringitem ::= longstringchar | escapeseq\n shortstringchar ::= \n longstringchar ::= \n escapeseq ::= "\\" \n\nOne syntactic restriction not indicated by these productions is that\nwhitespace is not allowed between the **stringprefix** and the rest of\nthe string literal. The source character set is defined by the\nencoding declaration; it is ASCII if no encoding declaration is given\nin the source file; see section *Encoding declarations*.\n\nIn plain English: String literals can be enclosed in matching single\nquotes (``\'``) or double quotes (``"``). They can also be enclosed in\nmatching groups of three single or double quotes (these are generally\nreferred to as *triple-quoted strings*). The backslash (``\\``)\ncharacter is used to escape characters that otherwise have a special\nmeaning, such as newline, backslash itself, or the quote character.\nString literals may optionally be prefixed with a letter ``\'r\'`` or\n``\'R\'``; such strings are called *raw strings* and use different rules\nfor interpreting backslash escape sequences. A prefix of ``\'u\'`` or\n``\'U\'`` makes the string a Unicode string. Unicode strings use the\nUnicode character set as defined by the Unicode Consortium and ISO\n10646. Some additional escape sequences, described below, are\navailable in Unicode strings. A prefix of ``\'b\'`` or ``\'B\'`` is\nignored in Python 2; it indicates that the literal should become a\nbytes literal in Python 3 (e.g. when code is automatically converted\nwith 2to3). A ``\'u\'`` or ``\'b\'`` prefix may be followed by an ``\'r\'``\nprefix.\n\nIn triple-quoted strings, unescaped newlines and quotes are allowed\n(and are retained), except that three unescaped quotes in a row\nterminate the string. (A "quote" is the character used to open the\nstring, i.e. 
either ``\'`` or ``"``.)\n\nUnless an ``\'r\'`` or ``\'R\'`` prefix is present, escape sequences in\nstrings are interpreted according to rules similar to those used by\nStandard C. The recognized escape sequences are:\n\n+-------------------+-----------------------------------+---------+\n| Escape Sequence | Meaning | Notes |\n+===================+===================================+=========+\n| ``\\newline`` | Ignored | |\n+-------------------+-----------------------------------+---------+\n| ``\\\\`` | Backslash (``\\``) | |\n+-------------------+-----------------------------------+---------+\n| ``\\\'`` | Single quote (``\'``) | |\n+-------------------+-----------------------------------+---------+\n| ``\\"`` | Double quote (``"``) | |\n+-------------------+-----------------------------------+---------+\n| ``\\a`` | ASCII Bell (BEL) | |\n+-------------------+-----------------------------------+---------+\n| ``\\b`` | ASCII Backspace (BS) | |\n+-------------------+-----------------------------------+---------+\n| ``\\f`` | ASCII Formfeed (FF) | |\n+-------------------+-----------------------------------+---------+\n| ``\\n`` | ASCII Linefeed (LF) | |\n+-------------------+-----------------------------------+---------+\n| ``\\N{name}`` | Character named *name* in the | |\n| | Unicode database (Unicode only) | |\n+-------------------+-----------------------------------+---------+\n| ``\\r`` | ASCII Carriage Return (CR) | |\n+-------------------+-----------------------------------+---------+\n| ``\\t`` | ASCII Horizontal Tab (TAB) | |\n+-------------------+-----------------------------------+---------+\n| ``\\uxxxx`` | Character with 16-bit hex value | (1) |\n| | *xxxx* (Unicode only) | |\n+-------------------+-----------------------------------+---------+\n| ``\\Uxxxxxxxx`` | Character with 32-bit hex value | (2) |\n| | *xxxxxxxx* (Unicode only) | |\n+-------------------+-----------------------------------+---------+\n| ``\\v`` | ASCII Vertical Tab (VT) | |\n+-------------------+-----------------------------------+---------+\n| ``\\ooo`` | Character with octal value *ooo* | (3,5) |\n+-------------------+-----------------------------------+---------+\n| ``\\xhh`` | Character with hex value *hh* | (4,5) |\n+-------------------+-----------------------------------+---------+\n\nNotes:\n\n1. Individual code units which form parts of a surrogate pair can be\n encoded using this escape sequence.\n\n2. Any Unicode character can be encoded this way, but characters\n outside the Basic Multilingual Plane (BMP) will be encoded using a\n surrogate pair if Python is compiled to use 16-bit code units (the\n default). Individual code units which form parts of a surrogate\n pair can be encoded using this escape sequence.\n\n3. As in Standard C, up to three octal digits are accepted.\n\n4. Unlike in Standard C, exactly two hex digits are required.\n\n5. In a string literal, hexadecimal and octal escapes denote the byte\n with the given value; it is not necessary that the byte encodes a\n character in the source character set. In a Unicode literal, these\n escapes denote a Unicode character with the given value.\n\nUnlike Standard C, all unrecognized escape sequences are left in the\nstring unchanged, i.e., *the backslash is left in the string*. (This\nbehavior is useful when debugging: if an escape sequence is mistyped,\nthe resulting output is more easily recognized as broken.) 
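A small doctest-style illustration of the Unicode-only escapes listed in the table above (``\N{...}`` and ``\uxxxx``), assuming a CPython 2.7 session:

    >>> u'\N{LATIN SMALL LETTER B}' == u'\u0062' == u'b'
    True
    >>> u'\u2155'                  # VULGAR FRACTION ONE FIFTH
    u'\u2155'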
It is also\nimportant to note that the escape sequences marked as "(Unicode only)"\nin the table above fall into the category of unrecognized escapes for\nnon-Unicode string literals.\n\nWhen an ``\'r\'`` or ``\'R\'`` prefix is present, a character following a\nbackslash is included in the string without change, and *all\nbackslashes are left in the string*. For example, the string literal\n``r"\\n"`` consists of two characters: a backslash and a lowercase\n``\'n\'``. String quotes can be escaped with a backslash, but the\nbackslash remains in the string; for example, ``r"\\""`` is a valid\nstring literal consisting of two characters: a backslash and a double\nquote; ``r"\\"`` is not a valid string literal (even a raw string\ncannot end in an odd number of backslashes). Specifically, *a raw\nstring cannot end in a single backslash* (since the backslash would\nescape the following quote character). Note also that a single\nbackslash followed by a newline is interpreted as those two characters\nas part of the string, *not* as a line continuation.\n\nWhen an ``\'r\'`` or ``\'R\'`` prefix is used in conjunction with a\n``\'u\'`` or ``\'U\'`` prefix, then the ``\\uXXXX`` and ``\\UXXXXXXXX``\nescape sequences are processed while *all other backslashes are left\nin the string*. For example, the string literal ``ur"\\u0062\\n"``\nconsists of three Unicode characters: \'LATIN SMALL LETTER B\', \'REVERSE\nSOLIDUS\', and \'LATIN SMALL LETTER N\'. Backslashes can be escaped with\na preceding backslash; however, both remain in the string. As a\nresult, ``\\uXXXX`` escape sequences are only recognized when there are\nan odd number of backslashes.\n', 'subscriptions': u'\nSubscriptions\n*************\n\nA subscription selects an item of a sequence (string, tuple or list)\nor mapping (dictionary) object:\n\n subscription ::= primary "[" expression_list "]"\n\nThe primary must evaluate to an object of a sequence or mapping type.\n\nIf the primary is a mapping, the expression list must evaluate to an\nobject whose value is one of the keys of the mapping, and the\nsubscription selects the value in the mapping that corresponds to that\nkey. (The expression list is a tuple except if it has exactly one\nitem.)\n\nIf the primary is a sequence, the expression (list) must evaluate to a\nplain integer. If this value is negative, the length of the sequence\nis added to it (so that, e.g., ``x[-1]`` selects the last item of\n``x``.) The resulting value must be a nonnegative integer less than\nthe number of items in the sequence, and the subscription selects the\nitem whose index is that value (counting from zero).\n\nA string\'s items are characters. A character is not a separate data\ntype but a string of exactly one character.\n', 'truth': u"\nTruth Value Testing\n*******************\n\nAny object can be tested for truth value, for use in an ``if`` or\n``while`` condition or as operand of the Boolean operations below. The\nfollowing values are considered false:\n\n* ``None``\n\n* ``False``\n\n* zero of any numeric type, for example, ``0``, ``0L``, ``0.0``,\n ``0j``.\n\n* any empty sequence, for example, ``''``, ``()``, ``[]``.\n\n* any empty mapping, for example, ``{}``.\n\n* instances of user-defined classes, if the class defines a\n ``__nonzero__()`` or ``__len__()`` method, when that method returns\n the integer zero or ``bool`` value ``False``. 
[1]\n\nAll other values are considered true --- so objects of many types are\nalways true.\n\nOperations and built-in functions that have a Boolean result always\nreturn ``0`` or ``False`` for false and ``1`` or ``True`` for true,\nunless otherwise stated. (Important exception: the Boolean operations\n``or`` and ``and`` always return one of their operands.)\n", 'try': u'\nThe ``try`` statement\n*********************\n\nThe ``try`` statement specifies exception handlers and/or cleanup code\nfor a group of statements:\n\n try_stmt ::= try1_stmt | try2_stmt\n try1_stmt ::= "try" ":" suite\n ("except" [expression [("as" | ",") target]] ":" suite)+\n ["else" ":" suite]\n ["finally" ":" suite]\n try2_stmt ::= "try" ":" suite\n "finally" ":" suite\n\nChanged in version 2.5: In previous versions of Python,\n``try``...``except``...``finally`` did not work. ``try``...``except``\nhad to be nested in ``try``...``finally``.\n\nThe ``except`` clause(s) specify one or more exception handlers. When\nno exception occurs in the ``try`` clause, no exception handler is\nexecuted. When an exception occurs in the ``try`` suite, a search for\nan exception handler is started. This search inspects the except\nclauses in turn until one is found that matches the exception. An\nexpression-less except clause, if present, must be last; it matches\nany exception. For an except clause with an expression, that\nexpression is evaluated, and the clause matches the exception if the\nresulting object is "compatible" with the exception. An object is\ncompatible with an exception if it is the class or a base class of the\nexception object, a tuple containing an item compatible with the\nexception, or, in the (deprecated) case of string exceptions, is the\nraised string itself (note that the object identities must match, i.e.\nit must be the same string object, not just a string with the same\nvalue).\n\nIf no except clause matches the exception, the search for an exception\nhandler continues in the surrounding code and on the invocation stack.\n[1]\n\nIf the evaluation of an expression in the header of an except clause\nraises an exception, the original search for a handler is canceled and\na search starts for the new exception in the surrounding code and on\nthe call stack (it is treated as if the entire ``try`` statement\nraised the exception).\n\nWhen a matching except clause is found, the exception is assigned to\nthe target specified in that except clause, if present, and the except\nclause\'s suite is executed. All except clauses must have an\nexecutable block. When the end of this block is reached, execution\ncontinues normally after the entire try statement. (This means that\nif two nested handlers exist for the same exception, and the exception\noccurs in the try clause of the inner handler, the outer handler will\nnot handle the exception.)\n\nBefore an except clause\'s suite is executed, details about the\nexception are assigned to three variables in the ``sys`` module:\n``sys.exc_type`` receives the object identifying the exception;\n``sys.exc_value`` receives the exception\'s parameter;\n``sys.exc_traceback`` receives a traceback object (see section *The\nstandard type hierarchy*) identifying the point in the program where\nthe exception occurred. These details are also available through the\n``sys.exc_info()`` function, which returns a tuple ``(exc_type,\nexc_value, exc_traceback)``. Use of the corresponding variables is\ndeprecated in favor of this function, since their use is unsafe in a\nthreaded program. 
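For the ``sys.exc_info()`` mechanism described just above, a minimal sketch in a CPython 2.7 session (the ``ZeroDivisionError`` is provoked deliberately):

    >>> import sys
    >>> try:
    ...     1 / 0
    ... except ZeroDivisionError:
    ...     exc_type, exc_value, exc_tb = sys.exc_info()
    ...     print exc_type.__name__, exc_value
    ...
    ZeroDivisionError integer division or modulo by zero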
As of Python 1.5, the variables are restored to\ntheir previous values (before the call) when returning from a function\nthat handled an exception.\n\nThe optional ``else`` clause is executed if and when control flows off\nthe end of the ``try`` clause. [2] Exceptions in the ``else`` clause\nare not handled by the preceding ``except`` clauses.\n\nIf ``finally`` is present, it specifies a \'cleanup\' handler. The\n``try`` clause is executed, including any ``except`` and ``else``\nclauses. If an exception occurs in any of the clauses and is not\nhandled, the exception is temporarily saved. The ``finally`` clause is\nexecuted. If there is a saved exception, it is re-raised at the end\nof the ``finally`` clause. If the ``finally`` clause raises another\nexception or executes a ``return`` or ``break`` statement, the saved\nexception is lost. The exception information is not available to the\nprogram during execution of the ``finally`` clause.\n\nWhen a ``return``, ``break`` or ``continue`` statement is executed in\nthe ``try`` suite of a ``try``...``finally`` statement, the\n``finally`` clause is also executed \'on the way out.\' A ``continue``\nstatement is illegal in the ``finally`` clause. (The reason is a\nproblem with the current implementation --- this restriction may be\nlifted in the future).\n\nAdditional information on exceptions can be found in section\n*Exceptions*, and information on using the ``raise`` statement to\ngenerate exceptions may be found in section *The raise statement*.\n', - 'types': u'\nThe standard type hierarchy\n***************************\n\nBelow is a list of the types that are built into Python. Extension\nmodules (written in C, Java, or other languages, depending on the\nimplementation) can define additional types. Future versions of\nPython may add types to the type hierarchy (e.g., rational numbers,\nefficiently stored arrays of integers, etc.).\n\nSome of the type descriptions below contain a paragraph listing\n\'special attributes.\' These are attributes that provide access to the\nimplementation and are not intended for general use. Their definition\nmay change in the future.\n\nNone\n This type has a single value. There is a single object with this\n value. This object is accessed through the built-in name ``None``.\n It is used to signify the absence of a value in many situations,\n e.g., it is returned from functions that don\'t explicitly return\n anything. Its truth value is false.\n\nNotImplemented\n This type has a single value. There is a single object with this\n value. This object is accessed through the built-in name\n ``NotImplemented``. Numeric methods and rich comparison methods may\n return this value if they do not implement the operation for the\n operands provided. (The interpreter will then try the reflected\n operation, or some other fallback, depending on the operator.) Its\n truth value is true.\n\nEllipsis\n This type has a single value. There is a single object with this\n value. This object is accessed through the built-in name\n ``Ellipsis``. It is used to indicate the presence of the ``...``\n syntax in a slice. Its truth value is true.\n\n``numbers.Number``\n These are created by numeric literals and returned as results by\n arithmetic operators and arithmetic built-in functions. 
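As a hedged illustration of the ``NotImplemented`` convention described above, a sketch with an invented class ``Metre``, assuming CPython 2.7:

    >>> class Metre(object):
    ...     def __init__(self, value):
    ...         self.value = value
    ...     def __eq__(self, other):
    ...         if not isinstance(other, Metre):
    ...             return NotImplemented    # let the other operand try
    ...         return self.value == other.value
    ...
    >>> Metre(3) == Metre(3)
    True
    >>> Metre(3) == 'three'      # both sides decline, default comparison applies
    False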
Numeric\n objects are immutable; once created their value never changes.\n Python numbers are of course strongly related to mathematical\n numbers, but subject to the limitations of numerical representation\n in computers.\n\n Python distinguishes between integers, floating point numbers, and\n complex numbers:\n\n ``numbers.Integral``\n These represent elements from the mathematical set of integers\n (positive and negative).\n\n There are three types of integers:\n\n Plain integers\n These represent numbers in the range -2147483648 through\n 2147483647. (The range may be larger on machines with a\n larger natural word size, but not smaller.) When the result\n of an operation would fall outside this range, the result is\n normally returned as a long integer (in some cases, the\n exception ``OverflowError`` is raised instead). For the\n purpose of shift and mask operations, integers are assumed to\n have a binary, 2\'s complement notation using 32 or more bits,\n and hiding no bits from the user (i.e., all 4294967296\n different bit patterns correspond to different values).\n\n Long integers\n These represent numbers in an unlimited range, subject to\n available (virtual) memory only. For the purpose of shift\n and mask operations, a binary representation is assumed, and\n negative numbers are represented in a variant of 2\'s\n complement which gives the illusion of an infinite string of\n sign bits extending to the left.\n\n Booleans\n These represent the truth values False and True. The two\n objects representing the values False and True are the only\n Boolean objects. The Boolean type is a subtype of plain\n integers, and Boolean values behave like the values 0 and 1,\n respectively, in almost all contexts, the exception being\n that when converted to a string, the strings ``"False"`` or\n ``"True"`` are returned, respectively.\n\n The rules for integer representation are intended to give the\n most meaningful interpretation of shift and mask operations\n involving negative integers and the least surprises when\n switching between the plain and long integer domains. Any\n operation, if it yields a result in the plain integer domain,\n will yield the same result in the long integer domain or when\n using mixed operands. The switch between domains is transparent\n to the programmer.\n\n ``numbers.Real`` (``float``)\n These represent machine-level double precision floating point\n numbers. You are at the mercy of the underlying machine\n architecture (and C or Java implementation) for the accepted\n range and handling of overflow. Python does not support single-\n precision floating point numbers; the savings in processor and\n memory usage that are usually the reason for using these is\n dwarfed by the overhead of using objects in Python, so there is\n no reason to complicate the language with two kinds of floating\n point numbers.\n\n ``numbers.Complex``\n These represent complex numbers as a pair of machine-level\n double precision floating point numbers. The same caveats apply\n as for floating point numbers. The real and imaginary parts of a\n complex number ``z`` can be retrieved through the read-only\n attributes ``z.real`` and ``z.imag``.\n\nSequences\n These represent finite ordered sets indexed by non-negative\n numbers. The built-in function ``len()`` returns the number of\n items of a sequence. When the length of a sequence is *n*, the\n index set contains the numbers 0, 1, ..., *n*-1. 
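A few doctest-style checks of the integer, Boolean and complex behaviour described above, assuming CPython 2.7:

    >>> 2 ** 100                       # too large for a plain int: promoted to long
    1267650600228229401496703205376L
    >>> type(2 ** 100)
    <type 'long'>
    >>> True + True                    # bool is a subtype of plain int
    2
    >>> z = 3 + 4j
    >>> z.real, z.imag                 # read-only attributes
    (3.0, 4.0)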
Item *i* of\n sequence *a* is selected by ``a[i]``.\n\n Sequences also support slicing: ``a[i:j]`` selects all items with\n index *k* such that *i* ``<=`` *k* ``<`` *j*. When used as an\n expression, a slice is a sequence of the same type. This implies\n that the index set is renumbered so that it starts at 0.\n\n Some sequences also support "extended slicing" with a third "step"\n parameter: ``a[i:j:k]`` selects all items of *a* with index *x*\n where ``x = i + n*k``, *n* ``>=`` ``0`` and *i* ``<=`` *x* ``<``\n *j*.\n\n Sequences are distinguished according to their mutability:\n\n Immutable sequences\n An object of an immutable sequence type cannot change once it is\n created. (If the object contains references to other objects,\n these other objects may be mutable and may be changed; however,\n the collection of objects directly referenced by an immutable\n object cannot change.)\n\n The following types are immutable sequences:\n\n Strings\n The items of a string are characters. There is no separate\n character type; a character is represented by a string of one\n item. Characters represent (at least) 8-bit bytes. The\n built-in functions ``chr()`` and ``ord()`` convert between\n characters and nonnegative integers representing the byte\n values. Bytes with the values 0-127 usually represent the\n corresponding ASCII values, but the interpretation of values\n is up to the program. The string data type is also used to\n represent arrays of bytes, e.g., to hold data read from a\n file.\n\n (On systems whose native character set is not ASCII, strings\n may use EBCDIC in their internal representation, provided the\n functions ``chr()`` and ``ord()`` implement a mapping between\n ASCII and EBCDIC, and string comparison preserves the ASCII\n order. Or perhaps someone can propose a better rule?)\n\n Unicode\n The items of a Unicode object are Unicode code units. A\n Unicode code unit is represented by a Unicode object of one\n item and can hold either a 16-bit or 32-bit value\n representing a Unicode ordinal (the maximum value for the\n ordinal is given in ``sys.maxunicode``, and depends on how\n Python is configured at compile time). Surrogate pairs may\n be present in the Unicode object, and will be reported as two\n separate items. The built-in functions ``unichr()`` and\n ``ord()`` convert between code units and nonnegative integers\n representing the Unicode ordinals as defined in the Unicode\n Standard 3.0. Conversion from and to other encodings are\n possible through the Unicode method ``encode()`` and the\n built-in function ``unicode()``.\n\n Tuples\n The items of a tuple are arbitrary Python objects. Tuples of\n two or more items are formed by comma-separated lists of\n expressions. A tuple of one item (a \'singleton\') can be\n formed by affixing a comma to an expression (an expression by\n itself does not create a tuple, since parentheses must be\n usable for grouping of expressions). An empty tuple can be\n formed by an empty pair of parentheses.\n\n Mutable sequences\n Mutable sequences can be changed after they are created. The\n subscription and slicing notations can be used as the target of\n assignment and ``del`` (delete) statements.\n\n There are currently two intrinsic mutable sequence types:\n\n Lists\n The items of a list are arbitrary Python objects. Lists are\n formed by placing a comma-separated list of expressions in\n square brackets. 
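An illustrative sketch of the indexing, slicing and singleton-tuple rules above (sample values arbitrary), assuming CPython 2.7:

    >>> a = range(10)
    >>> a[2:5]                  # items with index 2 <= k < 5
    [2, 3, 4]
    >>> a[1:8:3]                # extended slicing with step 3
    [1, 4, 7]
    >>> (42,)                   # the comma makes the singleton tuple
    (42,)
    >>> type((42))              # without it, just a parenthesized integer
    <type 'int'>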
(Note that there are no special cases needed\n to form lists of length 0 or 1.)\n\n Byte Arrays\n A bytearray object is a mutable array. They are created by\n the built-in ``bytearray()`` constructor. Aside from being\n mutable (and hence unhashable), byte arrays otherwise provide\n the same interface and functionality as immutable bytes\n objects.\n\n The extension module ``array`` provides an additional example of\n a mutable sequence type.\n\nSet types\n These represent unordered, finite sets of unique, immutable\n objects. As such, they cannot be indexed by any subscript. However,\n they can be iterated over, and the built-in function ``len()``\n returns the number of items in a set. Common uses for sets are fast\n membership testing, removing duplicates from a sequence, and\n computing mathematical operations such as intersection, union,\n difference, and symmetric difference.\n\n For set elements, the same immutability rules apply as for\n dictionary keys. Note that numeric types obey the normal rules for\n numeric comparison: if two numbers compare equal (e.g., ``1`` and\n ``1.0``), only one of them can be contained in a set.\n\n There are currently two intrinsic set types:\n\n Sets\n These represent a mutable set. They are created by the built-in\n ``set()`` constructor and can be modified afterwards by several\n methods, such as ``add()``.\n\n Frozen sets\n These represent an immutable set. They are created by the\n built-in ``frozenset()`` constructor. As a frozenset is\n immutable and *hashable*, it can be used again as an element of\n another set, or as a dictionary key.\n\nMappings\n These represent finite sets of objects indexed by arbitrary index\n sets. The subscript notation ``a[k]`` selects the item indexed by\n ``k`` from the mapping ``a``; this can be used in expressions and\n as the target of assignments or ``del`` statements. The built-in\n function ``len()`` returns the number of items in a mapping.\n\n There is currently a single intrinsic mapping type:\n\n Dictionaries\n These represent finite sets of objects indexed by nearly\n arbitrary values. The only types of values not acceptable as\n keys are values containing lists or dictionaries or other\n mutable types that are compared by value rather than by object\n identity, the reason being that the efficient implementation of\n dictionaries requires a key\'s hash value to remain constant.\n Numeric types used for keys obey the normal rules for numeric\n comparison: if two numbers compare equal (e.g., ``1`` and\n ``1.0``) then they can be used interchangeably to index the same\n dictionary entry.\n\n Dictionaries are mutable; they can be created by the ``{...}``\n notation (see section *Dictionary displays*).\n\n The extension modules ``dbm``, ``gdbm``, and ``bsddb`` provide\n additional examples of mapping types.\n\nCallable types\n These are the types to which the function call operation (see\n section *Calls*) can be applied:\n\n User-defined functions\n A user-defined function object is created by a function\n definition (see section *Function definitions*). 
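A small sketch of the set and dictionary-key rules above (Python 2.x assumed; the
keys and values are invented):

    s = set([1, 2, 2, 3])
    assert len(s) == 3                       # duplicates are removed
    assert 1.0 in s                          # 1 and 1.0 compare equal; one entry

    perms = frozenset(['read', 'write'])     # immutable and hashable...
    table = {perms: 'rw'}                    # ...so usable as a dictionary key
    assert table[frozenset(['write', 'read'])] == 'rw'

    d = {1: 'one'}
    d[1.0] = 'still one'                     # equal numeric keys share one slot
    assert d == {1: 'still one'}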
It should be\n called with an argument list containing the same number of items\n as the function\'s formal parameter list.\n\n Special attributes:\n\n +-------------------------+---------------------------------+-------------+\n | Attribute | Meaning | |\n +=========================+=================================+=============+\n | ``func_doc`` | The function\'s documentation | Writable |\n | | string, or ``None`` if | |\n | | unavailable | |\n +-------------------------+---------------------------------+-------------+\n | ``__doc__`` | Another way of spelling | Writable |\n | | ``func_doc`` | |\n +-------------------------+---------------------------------+-------------+\n | ``func_name`` | The function\'s name | Writable |\n +-------------------------+---------------------------------+-------------+\n | ``__name__`` | Another way of spelling | Writable |\n | | ``func_name`` | |\n +-------------------------+---------------------------------+-------------+\n | ``__module__`` | The name of the module the | Writable |\n | | function was defined in, or | |\n | | ``None`` if unavailable. | |\n +-------------------------+---------------------------------+-------------+\n | ``func_defaults`` | A tuple containing default | Writable |\n | | argument values for those | |\n | | arguments that have defaults, | |\n | | or ``None`` if no arguments | |\n | | have a default value | |\n +-------------------------+---------------------------------+-------------+\n | ``func_code`` | The code object representing | Writable |\n | | the compiled function body. | |\n +-------------------------+---------------------------------+-------------+\n | ``func_globals`` | A reference to the dictionary | Read-only |\n | | that holds the function\'s | |\n | | global variables --- the global | |\n | | namespace of the module in | |\n | | which the function was defined. | |\n +-------------------------+---------------------------------+-------------+\n | ``func_dict`` | The namespace supporting | Writable |\n | | arbitrary function attributes. | |\n +-------------------------+---------------------------------+-------------+\n | ``func_closure`` | ``None`` or a tuple of cells | Read-only |\n | | that contain bindings for the | |\n | | function\'s free variables. | |\n +-------------------------+---------------------------------+-------------+\n\n Most of the attributes labelled "Writable" check the type of the\n assigned value.\n\n Changed in version 2.4: ``func_name`` is now writable.\n\n Function objects also support getting and setting arbitrary\n attributes, which can be used, for example, to attach metadata\n to functions. Regular attribute dot-notation is used to get and\n set such attributes. *Note that the current implementation only\n supports function attributes on user-defined functions. 
Function\n attributes on built-in functions may be supported in the\n future.*\n\n Additional information about a function\'s definition can be\n retrieved from its code object; see the description of internal\n types below.\n\n User-defined methods\n A user-defined method object combines a class, a class instance\n (or ``None``) and any callable object (normally a user-defined\n function).\n\n Special read-only attributes: ``im_self`` is the class instance\n object, ``im_func`` is the function object; ``im_class`` is the\n class of ``im_self`` for bound methods or the class that asked\n for the method for unbound methods; ``__doc__`` is the method\'s\n documentation (same as ``im_func.__doc__``); ``__name__`` is the\n method name (same as ``im_func.__name__``); ``__module__`` is\n the name of the module the method was defined in, or ``None`` if\n unavailable.\n\n Changed in version 2.2: ``im_self`` used to refer to the class\n that defined the method.\n\n Changed in version 2.6: For 3.0 forward-compatibility,\n ``im_func`` is also available as ``__func__``, and ``im_self``\n as ``__self__``.\n\n Methods also support accessing (but not setting) the arbitrary\n function attributes on the underlying function object.\n\n User-defined method objects may be created when getting an\n attribute of a class (perhaps via an instance of that class), if\n that attribute is a user-defined function object, an unbound\n user-defined method object, or a class method object. When the\n attribute is a user-defined method object, a new method object\n is only created if the class from which it is being retrieved is\n the same as, or a derived class of, the class stored in the\n original method object; otherwise, the original method object is\n used as it is.\n\n When a user-defined method object is created by retrieving a\n user-defined function object from a class, its ``im_self``\n attribute is ``None`` and the method object is said to be\n unbound. When one is created by retrieving a user-defined\n function object from a class via one of its instances, its\n ``im_self`` attribute is the instance, and the method object is\n said to be bound. In either case, the new method\'s ``im_class``\n attribute is the class from which the retrieval takes place, and\n its ``im_func`` attribute is the original function object.\n\n When a user-defined method object is created by retrieving\n another method object from a class or instance, the behaviour is\n the same as for a function object, except that the ``im_func``\n attribute of the new instance is not the original method object\n but its ``im_func`` attribute.\n\n When a user-defined method object is created by retrieving a\n class method object from a class or instance, its ``im_self``\n attribute is the class itself (the same as the ``im_class``\n attribute), and its ``im_func`` attribute is the function object\n underlying the class method.\n\n When an unbound user-defined method object is called, the\n underlying function (``im_func``) is called, with the\n restriction that the first argument must be an instance of the\n proper class (``im_class``) or of a derived class thereof.\n\n When a bound user-defined method object is called, the\n underlying function (``im_func``) is called, inserting the class\n instance (``im_self``) in front of the argument list. 
For\n instance, when ``C`` is a class which contains a definition for\n a function ``f()``, and ``x`` is an instance of ``C``, calling\n ``x.f(1)`` is equivalent to calling ``C.f(x, 1)``.\n\n When a user-defined method object is derived from a class method\n object, the "class instance" stored in ``im_self`` will actually\n be the class itself, so that calling either ``x.f(1)`` or\n ``C.f(1)`` is equivalent to calling ``f(C,1)`` where ``f`` is\n the underlying function.\n\n Note that the transformation from function object to (unbound or\n bound) method object happens each time the attribute is\n retrieved from the class or instance. In some cases, a fruitful\n optimization is to assign the attribute to a local variable and\n call that local variable. Also notice that this transformation\n only happens for user-defined functions; other callable objects\n (and all non-callable objects) are retrieved without\n transformation. It is also important to note that user-defined\n functions which are attributes of a class instance are not\n converted to bound methods; this *only* happens when the\n function is an attribute of the class.\n\n Generator functions\n A function or method which uses the ``yield`` statement (see\n section *The yield statement*) is called a *generator function*.\n Such a function, when called, always returns an iterator object\n which can be used to execute the body of the function: calling\n the iterator\'s ``next()`` method will cause the function to\n execute until it provides a value using the ``yield`` statement.\n When the function executes a ``return`` statement or falls off\n the end, a ``StopIteration`` exception is raised and the\n iterator will have reached the end of the set of values to be\n returned.\n\n Built-in functions\n A built-in function object is a wrapper around a C function.\n Examples of built-in functions are ``len()`` and ``math.sin()``\n (``math`` is a standard built-in module). The number and type of\n the arguments are determined by the C function. Special read-\n only attributes: ``__doc__`` is the function\'s documentation\n string, or ``None`` if unavailable; ``__name__`` is the\n function\'s name; ``__self__`` is set to ``None`` (but see the\n next item); ``__module__`` is the name of the module the\n function was defined in or ``None`` if unavailable.\n\n Built-in methods\n This is really a different disguise of a built-in function, this\n time containing an object passed to the C function as an\n implicit extra argument. An example of a built-in method is\n ``alist.append()``, assuming *alist* is a list object. In this\n case, the special read-only attribute ``__self__`` is set to the\n object denoted by *list*.\n\n Class Types\n Class types, or "new-style classes," are callable. These\n objects normally act as factories for new instances of\n themselves, but variations are possible for class types that\n override ``__new__()``. The arguments of the call are passed to\n ``__new__()`` and, in the typical case, to ``__init__()`` to\n initialize the new instance.\n\n Classic Classes\n Class objects are described below. When a class object is\n called, a new class instance (also described below) is created\n and returned. This implies a call to the class\'s ``__init__()``\n method if it has one. Any arguments are passed on to the\n ``__init__()`` method. If there is no ``__init__()`` method,\n the class must be called without arguments.\n\n Class instances\n Class instances are described below. 
Class instances are\n callable only when the class has a ``__call__()`` method;\n ``x(arguments)`` is a shorthand for ``x.__call__(arguments)``.\n\nModules\n Modules are imported by the ``import`` statement (see section *The\n import statement*). A module object has a namespace implemented by\n a dictionary object (this is the dictionary referenced by the\n func_globals attribute of functions defined in the module).\n Attribute references are translated to lookups in this dictionary,\n e.g., ``m.x`` is equivalent to ``m.__dict__["x"]``. A module object\n does not contain the code object used to initialize the module\n (since it isn\'t needed once the initialization is done).\n\n Attribute assignment updates the module\'s namespace dictionary,\n e.g., ``m.x = 1`` is equivalent to ``m.__dict__["x"] = 1``.\n\n Special read-only attribute: ``__dict__`` is the module\'s namespace\n as a dictionary object.\n\n Predefined (writable) attributes: ``__name__`` is the module\'s\n name; ``__doc__`` is the module\'s documentation string, or ``None``\n if unavailable; ``__file__`` is the pathname of the file from which\n the module was loaded, if it was loaded from a file. The\n ``__file__`` attribute is not present for C modules that are\n statically linked into the interpreter; for extension modules\n loaded dynamically from a shared library, it is the pathname of the\n shared library file.\n\nClasses\n Both class types (new-style classes) and class objects (old-\n style/classic classes) are typically created by class definitions\n (see section *Class definitions*). A class has a namespace\n implemented by a dictionary object. Class attribute references are\n translated to lookups in this dictionary, e.g., ``C.x`` is\n translated to ``C.__dict__["x"]`` (although for new-style classes\n in particular there are a number of hooks which allow for other\n means of locating attributes). When the attribute name is not found\n there, the attribute search continues in the base classes. For\n old-style classes, the search is depth-first, left-to-right in the\n order of occurrence in the base class list. New-style classes use\n the more complex C3 method resolution order which behaves correctly\n even in the presence of \'diamond\' inheritance structures where\n there are multiple inheritance paths leading back to a common\n ancestor. Additional details on the C3 MRO used by new-style\n classes can be found in the documentation accompanying the 2.3\n release at http://www.python.org/download/releases/2.3/mro/.\n\n When a class attribute reference (for class ``C``, say) would yield\n a user-defined function object or an unbound user-defined method\n object whose associated class is either ``C`` or one of its base\n classes, it is transformed into an unbound user-defined method\n object whose ``im_class`` attribute is ``C``. When it would yield a\n class method object, it is transformed into a bound user-defined\n method object whose ``im_class`` and ``im_self`` attributes are\n both ``C``. 
When it would yield a static method object, it is\n transformed into the object wrapped by the static method object.\n See section *Implementing Descriptors* for another way in which\n attributes retrieved from a class may differ from those actually\n contained in its ``__dict__`` (note that only new-style classes\n support descriptors).\n\n Class attribute assignments update the class\'s dictionary, never\n the dictionary of a base class.\n\n A class object can be called (see above) to yield a class instance\n (see below).\n\n Special attributes: ``__name__`` is the class name; ``__module__``\n is the module name in which the class was defined; ``__dict__`` is\n the dictionary containing the class\'s namespace; ``__bases__`` is a\n tuple (possibly empty or a singleton) containing the base classes,\n in the order of their occurrence in the base class list;\n ``__doc__`` is the class\'s documentation string, or None if\n undefined.\n\nClass instances\n A class instance is created by calling a class object (see above).\n A class instance has a namespace implemented as a dictionary which\n is the first place in which attribute references are searched.\n When an attribute is not found there, and the instance\'s class has\n an attribute by that name, the search continues with the class\n attributes. If a class attribute is found that is a user-defined\n function object or an unbound user-defined method object whose\n associated class is the class (call it ``C``) of the instance for\n which the attribute reference was initiated or one of its bases, it\n is transformed into a bound user-defined method object whose\n ``im_class`` attribute is ``C`` and whose ``im_self`` attribute is\n the instance. Static method and class method objects are also\n transformed, as if they had been retrieved from class ``C``; see\n above under "Classes". See section *Implementing Descriptors* for\n another way in which attributes of a class retrieved via its\n instances may differ from the objects actually stored in the\n class\'s ``__dict__``. If no class attribute is found, and the\n object\'s class has a ``__getattr__()`` method, that is called to\n satisfy the lookup.\n\n Attribute assignments and deletions update the instance\'s\n dictionary, never a class\'s dictionary. If the class has a\n ``__setattr__()`` or ``__delattr__()`` method, this is called\n instead of updating the instance dictionary directly.\n\n Class instances can pretend to be numbers, sequences, or mappings\n if they have methods with certain special names. See section\n *Special method names*.\n\n Special attributes: ``__dict__`` is the attribute dictionary;\n ``__class__`` is the instance\'s class.\n\nFiles\n A file object represents an open file. File objects are created by\n the ``open()`` built-in function, and also by ``os.popen()``,\n ``os.fdopen()``, and the ``makefile()`` method of socket objects\n (and perhaps by other functions or methods provided by extension\n modules). The objects ``sys.stdin``, ``sys.stdout`` and\n ``sys.stderr`` are initialized to file objects corresponding to the\n interpreter\'s standard input, output and error streams. See *File\n Objects* for complete documentation of file objects.\n\nInternal types\n A few types used internally by the interpreter are exposed to the\n user. Their definitions may change with future versions of the\n interpreter, but they are mentioned here for completeness.\n\n Code objects\n Code objects represent *byte-compiled* executable Python code,\n or *bytecode*. 
The difference between a code object and a\n function object is that the function object contains an explicit\n reference to the function\'s globals (the module in which it was\n defined), while a code object contains no context; also the\n default argument values are stored in the function object, not\n in the code object (because they represent values calculated at\n run-time). Unlike function objects, code objects are immutable\n and contain no references (directly or indirectly) to mutable\n objects.\n\n Special read-only attributes: ``co_name`` gives the function\n name; ``co_argcount`` is the number of positional arguments\n (including arguments with default values); ``co_nlocals`` is the\n number of local variables used by the function (including\n arguments); ``co_varnames`` is a tuple containing the names of\n the local variables (starting with the argument names);\n ``co_cellvars`` is a tuple containing the names of local\n variables that are referenced by nested functions;\n ``co_freevars`` is a tuple containing the names of free\n variables; ``co_code`` is a string representing the sequence of\n bytecode instructions; ``co_consts`` is a tuple containing the\n literals used by the bytecode; ``co_names`` is a tuple\n containing the names used by the bytecode; ``co_filename`` is\n the filename from which the code was compiled;\n ``co_firstlineno`` is the first line number of the function;\n ``co_lnotab`` is a string encoding the mapping from bytecode\n offsets to line numbers (for details see the source code of the\n interpreter); ``co_stacksize`` is the required stack size\n (including local variables); ``co_flags`` is an integer encoding\n a number of flags for the interpreter.\n\n The following flag bits are defined for ``co_flags``: bit\n ``0x04`` is set if the function uses the ``*arguments`` syntax\n to accept an arbitrary number of positional arguments; bit\n ``0x08`` is set if the function uses the ``**keywords`` syntax\n to accept arbitrary keyword arguments; bit ``0x20`` is set if\n the function is a generator.\n\n Future feature declarations (``from __future__ import\n division``) also use bits in ``co_flags`` to indicate whether a\n code object was compiled with a particular feature enabled: bit\n ``0x2000`` is set if the function was compiled with future\n division enabled; bits ``0x10`` and ``0x1000`` were used in\n earlier versions of Python.\n\n Other bits in ``co_flags`` are reserved for internal use.\n\n If a code object represents a function, the first item in\n ``co_consts`` is the documentation string of the function, or\n ``None`` if undefined.\n\n Frame objects\n Frame objects represent execution frames. 
They may occur in\n traceback objects (see below).\n\n Special read-only attributes: ``f_back`` is to the previous\n stack frame (towards the caller), or ``None`` if this is the\n bottom stack frame; ``f_code`` is the code object being executed\n in this frame; ``f_locals`` is the dictionary used to look up\n local variables; ``f_globals`` is used for global variables;\n ``f_builtins`` is used for built-in (intrinsic) names;\n ``f_restricted`` is a flag indicating whether the function is\n executing in restricted execution mode; ``f_lasti`` gives the\n precise instruction (this is an index into the bytecode string\n of the code object).\n\n Special writable attributes: ``f_trace``, if not ``None``, is a\n function called at the start of each source code line (this is\n used by the debugger); ``f_exc_type``, ``f_exc_value``,\n ``f_exc_traceback`` represent the last exception raised in the\n parent frame provided another exception was ever raised in the\n current frame (in all other cases they are None); ``f_lineno``\n is the current line number of the frame --- writing to this from\n within a trace function jumps to the given line (only for the\n bottom-most frame). A debugger can implement a Jump command\n (aka Set Next Statement) by writing to f_lineno.\n\n Traceback objects\n Traceback objects represent a stack trace of an exception. A\n traceback object is created when an exception occurs. When the\n search for an exception handler unwinds the execution stack, at\n each unwound level a traceback object is inserted in front of\n the current traceback. When an exception handler is entered,\n the stack trace is made available to the program. (See section\n *The try statement*.) It is accessible as ``sys.exc_traceback``,\n and also as the third item of the tuple returned by\n ``sys.exc_info()``. The latter is the preferred interface,\n since it works correctly when the program is using multiple\n threads. When the program contains no suitable handler, the\n stack trace is written (nicely formatted) to the standard error\n stream; if the interpreter is interactive, it is also made\n available to the user as ``sys.last_traceback``.\n\n Special read-only attributes: ``tb_next`` is the next level in\n the stack trace (towards the frame where the exception\n occurred), or ``None`` if there is no next level; ``tb_frame``\n points to the execution frame of the current level;\n ``tb_lineno`` gives the line number where the exception\n occurred; ``tb_lasti`` indicates the precise instruction. The\n line number and last instruction in the traceback may differ\n from the line number of its frame object if the exception\n occurred in a ``try`` statement with no matching except clause\n or with a finally clause.\n\n Slice objects\n Slice objects are used to represent slices when *extended slice\n syntax* is used. This is a slice using two colons, or multiple\n slices or ellipses separated by commas, e.g., ``a[i:j:step]``,\n ``a[i:j, k:l]``, or ``a[..., i:j]``. They are also created by\n the built-in ``slice()`` function.\n\n Special read-only attributes: ``start`` is the lower bound;\n ``stop`` is the upper bound; ``step`` is the step value; each is\n ``None`` if omitted. These attributes can have any type.\n\n Slice objects support one method:\n\n slice.indices(self, length)\n\n This method takes a single integer argument *length* and\n computes information about the extended slice that the slice\n object would describe if applied to a sequence of *length*\n items. 
It returns a tuple of three integers; respectively\n these are the *start* and *stop* indices and the *step* or\n stride length of the slice. Missing or out-of-bounds indices\n are handled in a manner consistent with regular slices.\n\n New in version 2.3.\n\n Static method objects\n Static method objects provide a way of defeating the\n transformation of function objects to method objects described\n above. A static method object is a wrapper around any other\n object, usually a user-defined method object. When a static\n method object is retrieved from a class or a class instance, the\n object actually returned is the wrapped object, which is not\n subject to any further transformation. Static method objects are\n not themselves callable, although the objects they wrap usually\n are. Static method objects are created by the built-in\n ``staticmethod()`` constructor.\n\n Class method objects\n A class method object, like a static method object, is a wrapper\n around another object that alters the way in which that object\n is retrieved from classes and class instances. The behaviour of\n class method objects upon such retrieval is described above,\n under "User-defined methods". Class method objects are created\n by the built-in ``classmethod()`` constructor.\n', + 'types': u'\nThe standard type hierarchy\n***************************\n\nBelow is a list of the types that are built into Python. Extension\nmodules (written in C, Java, or other languages, depending on the\nimplementation) can define additional types. Future versions of\nPython may add types to the type hierarchy (e.g., rational numbers,\nefficiently stored arrays of integers, etc.).\n\nSome of the type descriptions below contain a paragraph listing\n\'special attributes.\' These are attributes that provide access to the\nimplementation and are not intended for general use. Their definition\nmay change in the future.\n\nNone\n This type has a single value. There is a single object with this\n value. This object is accessed through the built-in name ``None``.\n It is used to signify the absence of a value in many situations,\n e.g., it is returned from functions that don\'t explicitly return\n anything. Its truth value is false.\n\nNotImplemented\n This type has a single value. There is a single object with this\n value. This object is accessed through the built-in name\n ``NotImplemented``. Numeric methods and rich comparison methods may\n return this value if they do not implement the operation for the\n operands provided. (The interpreter will then try the reflected\n operation, or some other fallback, depending on the operator.) Its\n truth value is true.\n\nEllipsis\n This type has a single value. There is a single object with this\n value. This object is accessed through the built-in name\n ``Ellipsis``. It is used to indicate the presence of the ``...``\n syntax in a slice. Its truth value is true.\n\n``numbers.Number``\n These are created by numeric literals and returned as results by\n arithmetic operators and arithmetic built-in functions. 
Numeric\n objects are immutable; once created their value never changes.\n Python numbers are of course strongly related to mathematical\n numbers, but subject to the limitations of numerical representation\n in computers.\n\n Python distinguishes between integers, floating point numbers, and\n complex numbers:\n\n ``numbers.Integral``\n These represent elements from the mathematical set of integers\n (positive and negative).\n\n There are three types of integers:\n\n Plain integers\n These represent numbers in the range -2147483648 through\n 2147483647. (The range may be larger on machines with a\n larger natural word size, but not smaller.) When the result\n of an operation would fall outside this range, the result is\n normally returned as a long integer (in some cases, the\n exception ``OverflowError`` is raised instead). For the\n purpose of shift and mask operations, integers are assumed to\n have a binary, 2\'s complement notation using 32 or more bits,\n and hiding no bits from the user (i.e., all 4294967296\n different bit patterns correspond to different values).\n\n Long integers\n These represent numbers in an unlimited range, subject to\n available (virtual) memory only. For the purpose of shift\n and mask operations, a binary representation is assumed, and\n negative numbers are represented in a variant of 2\'s\n complement which gives the illusion of an infinite string of\n sign bits extending to the left.\n\n Booleans\n These represent the truth values False and True. The two\n objects representing the values False and True are the only\n Boolean objects. The Boolean type is a subtype of plain\n integers, and Boolean values behave like the values 0 and 1,\n respectively, in almost all contexts, the exception being\n that when converted to a string, the strings ``"False"`` or\n ``"True"`` are returned, respectively.\n\n The rules for integer representation are intended to give the\n most meaningful interpretation of shift and mask operations\n involving negative integers and the least surprises when\n switching between the plain and long integer domains. Any\n operation, if it yields a result in the plain integer domain,\n will yield the same result in the long integer domain or when\n using mixed operands. The switch between domains is transparent\n to the programmer.\n\n ``numbers.Real`` (``float``)\n These represent machine-level double precision floating point\n numbers. You are at the mercy of the underlying machine\n architecture (and C or Java implementation) for the accepted\n range and handling of overflow. Python does not support single-\n precision floating point numbers; the savings in processor and\n memory usage that are usually the reason for using these is\n dwarfed by the overhead of using objects in Python, so there is\n no reason to complicate the language with two kinds of floating\n point numbers.\n\n ``numbers.Complex``\n These represent complex numbers as a pair of machine-level\n double precision floating point numbers. The same caveats apply\n as for floating point numbers. The real and imaginary parts of a\n complex number ``z`` can be retrieved through the read-only\n attributes ``z.real`` and ``z.imag``.\n\nSequences\n These represent finite ordered sets indexed by non-negative\n numbers. The built-in function ``len()`` returns the number of\n items of a sequence. When the length of a sequence is *n*, the\n index set contains the numbers 0, 1, ..., *n*-1. 
Item *i* of\n sequence *a* is selected by ``a[i]``.\n\n Sequences also support slicing: ``a[i:j]`` selects all items with\n index *k* such that *i* ``<=`` *k* ``<`` *j*. When used as an\n expression, a slice is a sequence of the same type. This implies\n that the index set is renumbered so that it starts at 0.\n\n Some sequences also support "extended slicing" with a third "step"\n parameter: ``a[i:j:k]`` selects all items of *a* with index *x*\n where ``x = i + n*k``, *n* ``>=`` ``0`` and *i* ``<=`` *x* ``<``\n *j*.\n\n Sequences are distinguished according to their mutability:\n\n Immutable sequences\n An object of an immutable sequence type cannot change once it is\n created. (If the object contains references to other objects,\n these other objects may be mutable and may be changed; however,\n the collection of objects directly referenced by an immutable\n object cannot change.)\n\n The following types are immutable sequences:\n\n Strings\n The items of a string are characters. There is no separate\n character type; a character is represented by a string of one\n item. Characters represent (at least) 8-bit bytes. The\n built-in functions ``chr()`` and ``ord()`` convert between\n characters and nonnegative integers representing the byte\n values. Bytes with the values 0-127 usually represent the\n corresponding ASCII values, but the interpretation of values\n is up to the program. The string data type is also used to\n represent arrays of bytes, e.g., to hold data read from a\n file.\n\n (On systems whose native character set is not ASCII, strings\n may use EBCDIC in their internal representation, provided the\n functions ``chr()`` and ``ord()`` implement a mapping between\n ASCII and EBCDIC, and string comparison preserves the ASCII\n order. Or perhaps someone can propose a better rule?)\n\n Unicode\n The items of a Unicode object are Unicode code units. A\n Unicode code unit is represented by a Unicode object of one\n item and can hold either a 16-bit or 32-bit value\n representing a Unicode ordinal (the maximum value for the\n ordinal is given in ``sys.maxunicode``, and depends on how\n Python is configured at compile time). Surrogate pairs may\n be present in the Unicode object, and will be reported as two\n separate items. The built-in functions ``unichr()`` and\n ``ord()`` convert between code units and nonnegative integers\n representing the Unicode ordinals as defined in the Unicode\n Standard 3.0. Conversion from and to other encodings are\n possible through the Unicode method ``encode()`` and the\n built-in function ``unicode()``.\n\n Tuples\n The items of a tuple are arbitrary Python objects. Tuples of\n two or more items are formed by comma-separated lists of\n expressions. A tuple of one item (a \'singleton\') can be\n formed by affixing a comma to an expression (an expression by\n itself does not create a tuple, since parentheses must be\n usable for grouping of expressions). An empty tuple can be\n formed by an empty pair of parentheses.\n\n Mutable sequences\n Mutable sequences can be changed after they are created. The\n subscription and slicing notations can be used as the target of\n assignment and ``del`` (delete) statements.\n\n There are currently two intrinsic mutable sequence types:\n\n Lists\n The items of a list are arbitrary Python objects. Lists are\n formed by placing a comma-separated list of expressions in\n square brackets. 
(Note that there are no special cases needed\n to form lists of length 0 or 1.)\n\n Byte Arrays\n A bytearray object is a mutable array. They are created by\n the built-in ``bytearray()`` constructor. Aside from being\n mutable (and hence unhashable), byte arrays otherwise provide\n the same interface and functionality as immutable bytes\n objects.\n\n The extension module ``array`` provides an additional example of\n a mutable sequence type.\n\nSet types\n These represent unordered, finite sets of unique, immutable\n objects. As such, they cannot be indexed by any subscript. However,\n they can be iterated over, and the built-in function ``len()``\n returns the number of items in a set. Common uses for sets are fast\n membership testing, removing duplicates from a sequence, and\n computing mathematical operations such as intersection, union,\n difference, and symmetric difference.\n\n For set elements, the same immutability rules apply as for\n dictionary keys. Note that numeric types obey the normal rules for\n numeric comparison: if two numbers compare equal (e.g., ``1`` and\n ``1.0``), only one of them can be contained in a set.\n\n There are currently two intrinsic set types:\n\n Sets\n These represent a mutable set. They are created by the built-in\n ``set()`` constructor and can be modified afterwards by several\n methods, such as ``add()``.\n\n Frozen sets\n These represent an immutable set. They are created by the\n built-in ``frozenset()`` constructor. As a frozenset is\n immutable and *hashable*, it can be used again as an element of\n another set, or as a dictionary key.\n\nMappings\n These represent finite sets of objects indexed by arbitrary index\n sets. The subscript notation ``a[k]`` selects the item indexed by\n ``k`` from the mapping ``a``; this can be used in expressions and\n as the target of assignments or ``del`` statements. The built-in\n function ``len()`` returns the number of items in a mapping.\n\n There is currently a single intrinsic mapping type:\n\n Dictionaries\n These represent finite sets of objects indexed by nearly\n arbitrary values. The only types of values not acceptable as\n keys are values containing lists or dictionaries or other\n mutable types that are compared by value rather than by object\n identity, the reason being that the efficient implementation of\n dictionaries requires a key\'s hash value to remain constant.\n Numeric types used for keys obey the normal rules for numeric\n comparison: if two numbers compare equal (e.g., ``1`` and\n ``1.0``) then they can be used interchangeably to index the same\n dictionary entry.\n\n Dictionaries are mutable; they can be created by the ``{...}``\n notation (see section *Dictionary displays*).\n\n The extension modules ``dbm``, ``gdbm``, and ``bsddb`` provide\n additional examples of mapping types.\n\nCallable types\n These are the types to which the function call operation (see\n section *Calls*) can be applied:\n\n User-defined functions\n A user-defined function object is created by a function\n definition (see section *Function definitions*). 
It should be\n called with an argument list containing the same number of items\n as the function\'s formal parameter list.\n\n Special attributes:\n\n +-------------------------+---------------------------------+-------------+\n | Attribute | Meaning | |\n +=========================+=================================+=============+\n | ``func_doc`` | The function\'s documentation | Writable |\n | | string, or ``None`` if | |\n | | unavailable | |\n +-------------------------+---------------------------------+-------------+\n | ``__doc__`` | Another way of spelling | Writable |\n | | ``func_doc`` | |\n +-------------------------+---------------------------------+-------------+\n | ``func_name`` | The function\'s name | Writable |\n +-------------------------+---------------------------------+-------------+\n | ``__name__`` | Another way of spelling | Writable |\n | | ``func_name`` | |\n +-------------------------+---------------------------------+-------------+\n | ``__module__`` | The name of the module the | Writable |\n | | function was defined in, or | |\n | | ``None`` if unavailable. | |\n +-------------------------+---------------------------------+-------------+\n | ``func_defaults`` | A tuple containing default | Writable |\n | | argument values for those | |\n | | arguments that have defaults, | |\n | | or ``None`` if no arguments | |\n | | have a default value | |\n +-------------------------+---------------------------------+-------------+\n | ``func_code`` | The code object representing | Writable |\n | | the compiled function body. | |\n +-------------------------+---------------------------------+-------------+\n | ``func_globals`` | A reference to the dictionary | Read-only |\n | | that holds the function\'s | |\n | | global variables --- the global | |\n | | namespace of the module in | |\n | | which the function was defined. | |\n +-------------------------+---------------------------------+-------------+\n | ``func_dict`` | The namespace supporting | Writable |\n | | arbitrary function attributes. | |\n +-------------------------+---------------------------------+-------------+\n | ``func_closure`` | ``None`` or a tuple of cells | Read-only |\n | | that contain bindings for the | |\n | | function\'s free variables. | |\n +-------------------------+---------------------------------+-------------+\n\n Most of the attributes labelled "Writable" check the type of the\n assigned value.\n\n Changed in version 2.4: ``func_name`` is now writable.\n\n Function objects also support getting and setting arbitrary\n attributes, which can be used, for example, to attach metadata\n to functions. Regular attribute dot-notation is used to get and\n set such attributes. *Note that the current implementation only\n supports function attributes on user-defined functions. 
Function\n attributes on built-in functions may be supported in the\n future.*\n\n Additional information about a function\'s definition can be\n retrieved from its code object; see the description of internal\n types below.\n\n User-defined methods\n A user-defined method object combines a class, a class instance\n (or ``None``) and any callable object (normally a user-defined\n function).\n\n Special read-only attributes: ``im_self`` is the class instance\n object, ``im_func`` is the function object; ``im_class`` is the\n class of ``im_self`` for bound methods or the class that asked\n for the method for unbound methods; ``__doc__`` is the method\'s\n documentation (same as ``im_func.__doc__``); ``__name__`` is the\n method name (same as ``im_func.__name__``); ``__module__`` is\n the name of the module the method was defined in, or ``None`` if\n unavailable.\n\n Changed in version 2.2: ``im_self`` used to refer to the class\n that defined the method.\n\n Changed in version 2.6: For 3.0 forward-compatibility,\n ``im_func`` is also available as ``__func__``, and ``im_self``\n as ``__self__``.\n\n Methods also support accessing (but not setting) the arbitrary\n function attributes on the underlying function object.\n\n User-defined method objects may be created when getting an\n attribute of a class (perhaps via an instance of that class), if\n that attribute is a user-defined function object, an unbound\n user-defined method object, or a class method object. When the\n attribute is a user-defined method object, a new method object\n is only created if the class from which it is being retrieved is\n the same as, or a derived class of, the class stored in the\n original method object; otherwise, the original method object is\n used as it is.\n\n When a user-defined method object is created by retrieving a\n user-defined function object from a class, its ``im_self``\n attribute is ``None`` and the method object is said to be\n unbound. When one is created by retrieving a user-defined\n function object from a class via one of its instances, its\n ``im_self`` attribute is the instance, and the method object is\n said to be bound. In either case, the new method\'s ``im_class``\n attribute is the class from which the retrieval takes place, and\n its ``im_func`` attribute is the original function object.\n\n When a user-defined method object is created by retrieving\n another method object from a class or instance, the behaviour is\n the same as for a function object, except that the ``im_func``\n attribute of the new instance is not the original method object\n but its ``im_func`` attribute.\n\n When a user-defined method object is created by retrieving a\n class method object from a class or instance, its ``im_self``\n attribute is the class itself (the same as the ``im_class``\n attribute), and its ``im_func`` attribute is the function object\n underlying the class method.\n\n When an unbound user-defined method object is called, the\n underlying function (``im_func``) is called, with the\n restriction that the first argument must be an instance of the\n proper class (``im_class``) or of a derived class thereof.\n\n When a bound user-defined method object is called, the\n underlying function (``im_func``) is called, inserting the class\n instance (``im_self``) in front of the argument list. 
For\n instance, when ``C`` is a class which contains a definition for\n a function ``f()``, and ``x`` is an instance of ``C``, calling\n ``x.f(1)`` is equivalent to calling ``C.f(x, 1)``.\n\n When a user-defined method object is derived from a class method\n object, the "class instance" stored in ``im_self`` will actually\n be the class itself, so that calling either ``x.f(1)`` or\n ``C.f(1)`` is equivalent to calling ``f(C,1)`` where ``f`` is\n the underlying function.\n\n Note that the transformation from function object to (unbound or\n bound) method object happens each time the attribute is\n retrieved from the class or instance. In some cases, a fruitful\n optimization is to assign the attribute to a local variable and\n call that local variable. Also notice that this transformation\n only happens for user-defined functions; other callable objects\n (and all non-callable objects) are retrieved without\n transformation. It is also important to note that user-defined\n functions which are attributes of a class instance are not\n converted to bound methods; this *only* happens when the\n function is an attribute of the class.\n\n Generator functions\n A function or method which uses the ``yield`` statement (see\n section *The yield statement*) is called a *generator function*.\n Such a function, when called, always returns an iterator object\n which can be used to execute the body of the function: calling\n the iterator\'s ``next()`` method will cause the function to\n execute until it provides a value using the ``yield`` statement.\n When the function executes a ``return`` statement or falls off\n the end, a ``StopIteration`` exception is raised and the\n iterator will have reached the end of the set of values to be\n returned.\n\n Built-in functions\n A built-in function object is a wrapper around a C function.\n Examples of built-in functions are ``len()`` and ``math.sin()``\n (``math`` is a standard built-in module). The number and type of\n the arguments are determined by the C function. Special read-\n only attributes: ``__doc__`` is the function\'s documentation\n string, or ``None`` if unavailable; ``__name__`` is the\n function\'s name; ``__self__`` is set to ``None`` (but see the\n next item); ``__module__`` is the name of the module the\n function was defined in or ``None`` if unavailable.\n\n Built-in methods\n This is really a different disguise of a built-in function, this\n time containing an object passed to the C function as an\n implicit extra argument. An example of a built-in method is\n ``alist.append()``, assuming *alist* is a list object. In this\n case, the special read-only attribute ``__self__`` is set to the\n object denoted by *alist*.\n\n Class Types\n Class types, or "new-style classes," are callable. These\n objects normally act as factories for new instances of\n themselves, but variations are possible for class types that\n override ``__new__()``. The arguments of the call are passed to\n ``__new__()`` and, in the typical case, to ``__init__()`` to\n initialize the new instance.\n\n Classic Classes\n Class objects are described below. When a class object is\n called, a new class instance (also described below) is created\n and returned. This implies a call to the class\'s ``__init__()``\n method if it has one. Any arguments are passed on to the\n ``__init__()`` method. If there is no ``__init__()`` method,\n the class must be called without arguments.\n\n Class instances\n Class instances are described below. 
Class instances are\n callable only when the class has a ``__call__()`` method;\n ``x(arguments)`` is a shorthand for ``x.__call__(arguments)``.\n\nModules\n Modules are imported by the ``import`` statement (see section *The\n import statement*). A module object has a namespace implemented by\n a dictionary object (this is the dictionary referenced by the\n func_globals attribute of functions defined in the module).\n Attribute references are translated to lookups in this dictionary,\n e.g., ``m.x`` is equivalent to ``m.__dict__["x"]``. A module object\n does not contain the code object used to initialize the module\n (since it isn\'t needed once the initialization is done).\n\n Attribute assignment updates the module\'s namespace dictionary,\n e.g., ``m.x = 1`` is equivalent to ``m.__dict__["x"] = 1``.\n\n Special read-only attribute: ``__dict__`` is the module\'s namespace\n as a dictionary object.\n\n **CPython implementation detail:** Because of the way CPython\n clears module dictionaries, the module dictionary will be cleared\n when the module falls out of scope even if the dictionary still has\n live references. To avoid this, copy the dictionary or keep the\n module around while using its dictionary directly.\n\n Predefined (writable) attributes: ``__name__`` is the module\'s\n name; ``__doc__`` is the module\'s documentation string, or ``None``\n if unavailable; ``__file__`` is the pathname of the file from which\n the module was loaded, if it was loaded from a file. The\n ``__file__`` attribute is not present for C modules that are\n statically linked into the interpreter; for extension modules\n loaded dynamically from a shared library, it is the pathname of the\n shared library file.\n\nClasses\n Both class types (new-style classes) and class objects (old-\n style/classic classes) are typically created by class definitions\n (see section *Class definitions*). A class has a namespace\n implemented by a dictionary object. Class attribute references are\n translated to lookups in this dictionary, e.g., ``C.x`` is\n translated to ``C.__dict__["x"]`` (although for new-style classes\n in particular there are a number of hooks which allow for other\n means of locating attributes). When the attribute name is not found\n there, the attribute search continues in the base classes. For\n old-style classes, the search is depth-first, left-to-right in the\n order of occurrence in the base class list. New-style classes use\n the more complex C3 method resolution order which behaves correctly\n even in the presence of \'diamond\' inheritance structures where\n there are multiple inheritance paths leading back to a common\n ancestor. Additional details on the C3 MRO used by new-style\n classes can be found in the documentation accompanying the 2.3\n release at http://www.python.org/download/releases/2.3/mro/.\n\n When a class attribute reference (for class ``C``, say) would yield\n a user-defined function object or an unbound user-defined method\n object whose associated class is either ``C`` or one of its base\n classes, it is transformed into an unbound user-defined method\n object whose ``im_class`` attribute is ``C``. When it would yield a\n class method object, it is transformed into a bound user-defined\n method object whose ``im_class`` and ``im_self`` attributes are\n both ``C``. 
When it would yield a static method object, it is\n transformed into the object wrapped by the static method object.\n See section *Implementing Descriptors* for another way in which\n attributes retrieved from a class may differ from those actually\n contained in its ``__dict__`` (note that only new-style classes\n support descriptors).\n\n Class attribute assignments update the class\'s dictionary, never\n the dictionary of a base class.\n\n A class object can be called (see above) to yield a class instance\n (see below).\n\n Special attributes: ``__name__`` is the class name; ``__module__``\n is the module name in which the class was defined; ``__dict__`` is\n the dictionary containing the class\'s namespace; ``__bases__`` is a\n tuple (possibly empty or a singleton) containing the base classes,\n in the order of their occurrence in the base class list;\n ``__doc__`` is the class\'s documentation string, or None if\n undefined.\n\nClass instances\n A class instance is created by calling a class object (see above).\n A class instance has a namespace implemented as a dictionary which\n is the first place in which attribute references are searched.\n When an attribute is not found there, and the instance\'s class has\n an attribute by that name, the search continues with the class\n attributes. If a class attribute is found that is a user-defined\n function object or an unbound user-defined method object whose\n associated class is the class (call it ``C``) of the instance for\n which the attribute reference was initiated or one of its bases, it\n is transformed into a bound user-defined method object whose\n ``im_class`` attribute is ``C`` and whose ``im_self`` attribute is\n the instance. Static method and class method objects are also\n transformed, as if they had been retrieved from class ``C``; see\n above under "Classes". See section *Implementing Descriptors* for\n another way in which attributes of a class retrieved via its\n instances may differ from the objects actually stored in the\n class\'s ``__dict__``. If no class attribute is found, and the\n object\'s class has a ``__getattr__()`` method, that is called to\n satisfy the lookup.\n\n Attribute assignments and deletions update the instance\'s\n dictionary, never a class\'s dictionary. If the class has a\n ``__setattr__()`` or ``__delattr__()`` method, this is called\n instead of updating the instance dictionary directly.\n\n Class instances can pretend to be numbers, sequences, or mappings\n if they have methods with certain special names. See section\n *Special method names*.\n\n Special attributes: ``__dict__`` is the attribute dictionary;\n ``__class__`` is the instance\'s class.\n\nFiles\n A file object represents an open file. File objects are created by\n the ``open()`` built-in function, and also by ``os.popen()``,\n ``os.fdopen()``, and the ``makefile()`` method of socket objects\n (and perhaps by other functions or methods provided by extension\n modules). The objects ``sys.stdin``, ``sys.stdout`` and\n ``sys.stderr`` are initialized to file objects corresponding to the\n interpreter\'s standard input, output and error streams. See *File\n Objects* for complete documentation of file objects.\n\nInternal types\n A few types used internally by the interpreter are exposed to the\n user. Their definitions may change with future versions of the\n interpreter, but they are mentioned here for completeness.\n\n Code objects\n Code objects represent *byte-compiled* executable Python code,\n or *bytecode*. 
The difference between a code object and a\n function object is that the function object contains an explicit\n reference to the function\'s globals (the module in which it was\n defined), while a code object contains no context; also the\n default argument values are stored in the function object, not\n in the code object (because they represent values calculated at\n run-time). Unlike function objects, code objects are immutable\n and contain no references (directly or indirectly) to mutable\n objects.\n\n Special read-only attributes: ``co_name`` gives the function\n name; ``co_argcount`` is the number of positional arguments\n (including arguments with default values); ``co_nlocals`` is the\n number of local variables used by the function (including\n arguments); ``co_varnames`` is a tuple containing the names of\n the local variables (starting with the argument names);\n ``co_cellvars`` is a tuple containing the names of local\n variables that are referenced by nested functions;\n ``co_freevars`` is a tuple containing the names of free\n variables; ``co_code`` is a string representing the sequence of\n bytecode instructions; ``co_consts`` is a tuple containing the\n literals used by the bytecode; ``co_names`` is a tuple\n containing the names used by the bytecode; ``co_filename`` is\n the filename from which the code was compiled;\n ``co_firstlineno`` is the first line number of the function;\n ``co_lnotab`` is a string encoding the mapping from bytecode\n offsets to line numbers (for details see the source code of the\n interpreter); ``co_stacksize`` is the required stack size\n (including local variables); ``co_flags`` is an integer encoding\n a number of flags for the interpreter.\n\n The following flag bits are defined for ``co_flags``: bit\n ``0x04`` is set if the function uses the ``*arguments`` syntax\n to accept an arbitrary number of positional arguments; bit\n ``0x08`` is set if the function uses the ``**keywords`` syntax\n to accept arbitrary keyword arguments; bit ``0x20`` is set if\n the function is a generator.\n\n Future feature declarations (``from __future__ import\n division``) also use bits in ``co_flags`` to indicate whether a\n code object was compiled with a particular feature enabled: bit\n ``0x2000`` is set if the function was compiled with future\n division enabled; bits ``0x10`` and ``0x1000`` were used in\n earlier versions of Python.\n\n Other bits in ``co_flags`` are reserved for internal use.\n\n If a code object represents a function, the first item in\n ``co_consts`` is the documentation string of the function, or\n ``None`` if undefined.\n\n Frame objects\n Frame objects represent execution frames. 
They may occur in\n traceback objects (see below).\n\n Special read-only attributes: ``f_back`` is to the previous\n stack frame (towards the caller), or ``None`` if this is the\n bottom stack frame; ``f_code`` is the code object being executed\n in this frame; ``f_locals`` is the dictionary used to look up\n local variables; ``f_globals`` is used for global variables;\n ``f_builtins`` is used for built-in (intrinsic) names;\n ``f_restricted`` is a flag indicating whether the function is\n executing in restricted execution mode; ``f_lasti`` gives the\n precise instruction (this is an index into the bytecode string\n of the code object).\n\n Special writable attributes: ``f_trace``, if not ``None``, is a\n function called at the start of each source code line (this is\n used by the debugger); ``f_exc_type``, ``f_exc_value``,\n ``f_exc_traceback`` represent the last exception raised in the\n parent frame provided another exception was ever raised in the\n current frame (in all other cases they are None); ``f_lineno``\n is the current line number of the frame --- writing to this from\n within a trace function jumps to the given line (only for the\n bottom-most frame). A debugger can implement a Jump command\n (aka Set Next Statement) by writing to f_lineno.\n\n Traceback objects\n Traceback objects represent a stack trace of an exception. A\n traceback object is created when an exception occurs. When the\n search for an exception handler unwinds the execution stack, at\n each unwound level a traceback object is inserted in front of\n the current traceback. When an exception handler is entered,\n the stack trace is made available to the program. (See section\n *The try statement*.) It is accessible as ``sys.exc_traceback``,\n and also as the third item of the tuple returned by\n ``sys.exc_info()``. The latter is the preferred interface,\n since it works correctly when the program is using multiple\n threads. When the program contains no suitable handler, the\n stack trace is written (nicely formatted) to the standard error\n stream; if the interpreter is interactive, it is also made\n available to the user as ``sys.last_traceback``.\n\n Special read-only attributes: ``tb_next`` is the next level in\n the stack trace (towards the frame where the exception\n occurred), or ``None`` if there is no next level; ``tb_frame``\n points to the execution frame of the current level;\n ``tb_lineno`` gives the line number where the exception\n occurred; ``tb_lasti`` indicates the precise instruction. The\n line number and last instruction in the traceback may differ\n from the line number of its frame object if the exception\n occurred in a ``try`` statement with no matching except clause\n or with a finally clause.\n\n Slice objects\n Slice objects are used to represent slices when *extended slice\n syntax* is used. This is a slice using two colons, or multiple\n slices or ellipses separated by commas, e.g., ``a[i:j:step]``,\n ``a[i:j, k:l]``, or ``a[..., i:j]``. They are also created by\n the built-in ``slice()`` function.\n\n Special read-only attributes: ``start`` is the lower bound;\n ``stop`` is the upper bound; ``step`` is the step value; each is\n ``None`` if omitted. These attributes can have any type.\n\n Slice objects support one method:\n\n slice.indices(self, length)\n\n This method takes a single integer argument *length* and\n computes information about the extended slice that the slice\n object would describe if applied to a sequence of *length*\n items. 
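A minimal interactive sketch of ``indices()`` (the concrete slice and length are arbitrary; the three-integer result is described in the continuation below):

   >>> s = slice(2, 10, 2)               # what a[2:10:2] would use
   >>> s.start, s.stop, s.step
   (2, 10, 2)
   >>> s.indices(6)                      # clipped to a sequence of length 6
   (2, 6, 2)
   >>> slice(None, None, -1).indices(4)  # a[::-1] on a length-4 sequence
   (3, -1, -1)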
It returns a tuple of three integers; respectively\n these are the *start* and *stop* indices and the *step* or\n stride length of the slice. Missing or out-of-bounds indices\n are handled in a manner consistent with regular slices.\n\n New in version 2.3.\n\n Static method objects\n Static method objects provide a way of defeating the\n transformation of function objects to method objects described\n above. A static method object is a wrapper around any other\n object, usually a user-defined method object. When a static\n method object is retrieved from a class or a class instance, the\n object actually returned is the wrapped object, which is not\n subject to any further transformation. Static method objects are\n not themselves callable, although the objects they wrap usually\n are. Static method objects are created by the built-in\n ``staticmethod()`` constructor.\n\n Class method objects\n A class method object, like a static method object, is a wrapper\n around another object that alters the way in which that object\n is retrieved from classes and class instances. The behaviour of\n class method objects upon such retrieval is described above,\n under "User-defined methods". Class method objects are created\n by the built-in ``classmethod()`` constructor.\n', 'typesfunctions': u'\nFunctions\n*********\n\nFunction objects are created by function definitions. The only\noperation on a function object is to call it: ``func(argument-list)``.\n\nThere are really two flavors of function objects: built-in functions\nand user-defined functions. Both support the same operation (to call\nthe function), but the implementation is different, hence the\ndifferent object types.\n\nSee *Function definitions* for more information.\n', - 'typesmapping': u'\nMapping Types --- ``dict``\n**************************\n\nA *mapping* object maps *hashable* values to arbitrary objects.\nMappings are mutable objects. There is currently only one standard\nmapping type, the *dictionary*. (For other containers see the built\nin ``list``, ``set``, and ``tuple`` classes, and the ``collections``\nmodule.)\n\nA dictionary\'s keys are *almost* arbitrary values. Values that are\nnot *hashable*, that is, values containing lists, dictionaries or\nother mutable types (that are compared by value rather than by object\nidentity) may not be used as keys. Numeric types used for keys obey\nthe normal rules for numeric comparison: if two numbers compare equal\n(such as ``1`` and ``1.0``) then they can be used interchangeably to\nindex the same dictionary entry. (Note however, that since computers\nstore floating-point numbers as approximations it is usually unwise to\nuse them as dictionary keys.)\n\nDictionaries can be created by placing a comma-separated list of\n``key: value`` pairs within braces, for example: ``{\'jack\': 4098,\n\'sjoerd\': 4127}`` or ``{4098: \'jack\', 4127: \'sjoerd\'}``, or by the\n``dict`` constructor.\n\nclass class dict([arg])\n\n Return a new dictionary initialized from an optional positional\n argument or from a set of keyword arguments. If no arguments are\n given, return a new empty dictionary. If the positional argument\n *arg* is a mapping object, return a dictionary mapping the same\n keys to the same values as does the mapping object. Otherwise the\n positional argument must be a sequence, a container that supports\n iteration, or an iterator object. The elements of the argument\n must each also be of one of those kinds, and each must in turn\n contain exactly two objects. 
The first is used as a key in the new\n dictionary, and the second as the key\'s value. If a given key is\n seen more than once, the last value associated with it is retained\n in the new dictionary.\n\n If keyword arguments are given, the keywords themselves with their\n associated values are added as items to the dictionary. If a key is\n specified both in the positional argument and as a keyword\n argument, the value associated with the keyword is retained in the\n dictionary. For example, these all return a dictionary equal to\n ``{"one": 2, "two": 3}``:\n\n * ``dict(one=2, two=3)``\n\n * ``dict({\'one\': 2, \'two\': 3})``\n\n * ``dict(zip((\'one\', \'two\'), (2, 3)))``\n\n * ``dict([[\'two\', 3], [\'one\', 2]])``\n\n The first example only works for keys that are valid Python\n identifiers; the others work with any valid keys.\n\n New in version 2.2.\n\n Changed in version 2.3: Support for building a dictionary from\n keyword arguments added.\n\n These are the operations that dictionaries support (and therefore,\n custom mapping types should support too):\n\n len(d)\n\n Return the number of items in the dictionary *d*.\n\n d[key]\n\n Return the item of *d* with key *key*. Raises a ``KeyError`` if\n *key* is not in the map.\n\n New in version 2.5: If a subclass of dict defines a method\n ``__missing__()``, if the key *key* is not present, the\n ``d[key]`` operation calls that method with the key *key* as\n argument. The ``d[key]`` operation then returns or raises\n whatever is returned or raised by the ``__missing__(key)`` call\n if the key is not present. No other operations or methods invoke\n ``__missing__()``. If ``__missing__()`` is not defined,\n ``KeyError`` is raised. ``__missing__()`` must be a method; it\n cannot be an instance variable. For an example, see\n ``collections.defaultdict``.\n\n d[key] = value\n\n Set ``d[key]`` to *value*.\n\n del d[key]\n\n Remove ``d[key]`` from *d*. Raises a ``KeyError`` if *key* is\n not in the map.\n\n key in d\n\n Return ``True`` if *d* has a key *key*, else ``False``.\n\n New in version 2.2.\n\n key not in d\n\n Equivalent to ``not key in d``.\n\n New in version 2.2.\n\n iter(d)\n\n Return an iterator over the keys of the dictionary. This is a\n shortcut for ``iterkeys()``.\n\n clear()\n\n Remove all items from the dictionary.\n\n copy()\n\n Return a shallow copy of the dictionary.\n\n fromkeys(seq[, value])\n\n Create a new dictionary with keys from *seq* and values set to\n *value*.\n\n ``fromkeys()`` is a class method that returns a new dictionary.\n *value* defaults to ``None``.\n\n New in version 2.3.\n\n get(key[, default])\n\n Return the value for *key* if *key* is in the dictionary, else\n *default*. If *default* is not given, it defaults to ``None``,\n so that this method never raises a ``KeyError``.\n\n has_key(key)\n\n Test for the presence of *key* in the dictionary. ``has_key()``\n is deprecated in favor of ``key in d``.\n\n items()\n\n Return a copy of the dictionary\'s list of ``(key, value)``\n pairs.\n\n **CPython implementation detail:** Keys and values are listed in\n an arbitrary order which is non-random, varies across Python\n implementations, and depends on the dictionary\'s history of\n insertions and deletions.\n\n If ``items()``, ``keys()``, ``values()``, ``iteritems()``,\n ``iterkeys()``, and ``itervalues()`` are called with no\n intervening modifications to the dictionary, the lists will\n directly correspond. 
This allows the creation of ``(value,\n key)`` pairs using ``zip()``: ``pairs = zip(d.values(),\n d.keys())``. The same relationship holds for the ``iterkeys()``\n and ``itervalues()`` methods: ``pairs = zip(d.itervalues(),\n d.iterkeys())`` provides the same value for ``pairs``. Another\n way to create the same list is ``pairs = [(v, k) for (k, v) in\n d.iteritems()]``.\n\n iteritems()\n\n Return an iterator over the dictionary\'s ``(key, value)`` pairs.\n See the note for ``dict.items()``.\n\n Using ``iteritems()`` while adding or deleting entries in the\n dictionary may raise a ``RuntimeError`` or fail to iterate over\n all entries.\n\n New in version 2.2.\n\n iterkeys()\n\n Return an iterator over the dictionary\'s keys. See the note for\n ``dict.items()``.\n\n Using ``iterkeys()`` while adding or deleting entries in the\n dictionary may raise a ``RuntimeError`` or fail to iterate over\n all entries.\n\n New in version 2.2.\n\n itervalues()\n\n Return an iterator over the dictionary\'s values. See the note\n for ``dict.items()``.\n\n Using ``itervalues()`` while adding or deleting entries in the\n dictionary may raise a ``RuntimeError`` or fail to iterate over\n all entries.\n\n New in version 2.2.\n\n keys()\n\n Return a copy of the dictionary\'s list of keys. See the note\n for ``dict.items()``.\n\n pop(key[, default])\n\n If *key* is in the dictionary, remove it and return its value,\n else return *default*. If *default* is not given and *key* is\n not in the dictionary, a ``KeyError`` is raised.\n\n New in version 2.3.\n\n popitem()\n\n Remove and return an arbitrary ``(key, value)`` pair from the\n dictionary.\n\n ``popitem()`` is useful to destructively iterate over a\n dictionary, as often used in set algorithms. If the dictionary\n is empty, calling ``popitem()`` raises a ``KeyError``.\n\n setdefault(key[, default])\n\n If *key* is in the dictionary, return its value. If not, insert\n *key* with a value of *default* and return *default*. *default*\n defaults to ``None``.\n\n update([other])\n\n Update the dictionary with the key/value pairs from *other*,\n overwriting existing keys. Return ``None``.\n\n ``update()`` accepts either another dictionary object or an\n iterable of key/value pairs (as a tuple or other iterable of\n length two). If keyword arguments are specified, the dictionary\n is then updated with those key/value pairs: ``d.update(red=1,\n blue=2)``.\n\n Changed in version 2.4: Allowed the argument to be an iterable\n of key/value pairs and allowed keyword arguments.\n\n values()\n\n Return a copy of the dictionary\'s list of values. See the note\n for ``dict.items()``.\n\n viewitems()\n\n Return a new view of the dictionary\'s items (``(key, value)``\n pairs). See below for documentation of view objects.\n\n New in version 2.7.\n\n viewkeys()\n\n Return a new view of the dictionary\'s keys. See below for\n documentation of view objects.\n\n New in version 2.7.\n\n viewvalues()\n\n Return a new view of the dictionary\'s values. See below for\n documentation of view objects.\n\n New in version 2.7.\n\n\nDictionary view objects\n=======================\n\nThe objects returned by ``dict.viewkeys()``, ``dict.viewvalues()`` and\n``dict.viewitems()`` are *view objects*. 
They provide a dynamic view\non the dictionary\'s entries, which means that when the dictionary\nchanges, the view reflects these changes.\n\nDictionary views can be iterated over to yield their respective data,\nand support membership tests:\n\nlen(dictview)\n\n Return the number of entries in the dictionary.\n\niter(dictview)\n\n Return an iterator over the keys, values or items (represented as\n tuples of ``(key, value)``) in the dictionary.\n\n Keys and values are iterated over in an arbitrary order which is\n non-random, varies across Python implementations, and depends on\n the dictionary\'s history of insertions and deletions. If keys,\n values and items views are iterated over with no intervening\n modifications to the dictionary, the order of items will directly\n correspond. This allows the creation of ``(value, key)`` pairs\n using ``zip()``: ``pairs = zip(d.values(), d.keys())``. Another\n way to create the same list is ``pairs = [(v, k) for (k, v) in\n d.items()]``.\n\n Iterating views while adding or deleting entries in the dictionary\n may raise a ``RuntimeError`` or fail to iterate over all entries.\n\nx in dictview\n\n Return ``True`` if *x* is in the underlying dictionary\'s keys,\n values or items (in the latter case, *x* should be a ``(key,\n value)`` tuple).\n\nKeys views are set-like since their entries are unique and hashable.\nIf all values are hashable, so that (key, value) pairs are unique and\nhashable, then the items view is also set-like. (Values views are not\ntreated as set-like since the entries are generally not unique.) Then\nthese set operations are available ("other" refers either to another\nview or a set):\n\ndictview & other\n\n Return the intersection of the dictview and the other object as a\n new set.\n\ndictview | other\n\n Return the union of the dictview and the other object as a new set.\n\ndictview - other\n\n Return the difference between the dictview and the other object\n (all elements in *dictview* that aren\'t in *other*) as a new set.\n\ndictview ^ other\n\n Return the symmetric difference (all elements either in *dictview*\n or *other*, but not in both) of the dictview and the other object\n as a new set.\n\nAn example of dictionary view usage:\n\n >>> dishes = {\'eggs\': 2, \'sausage\': 1, \'bacon\': 1, \'spam\': 500}\n >>> keys = dishes.viewkeys()\n >>> values = dishes.viewvalues()\n\n >>> # iteration\n >>> n = 0\n >>> for val in values:\n ... n += val\n >>> print(n)\n 504\n\n >>> # keys and values are iterated over in the same order\n >>> list(keys)\n [\'eggs\', \'bacon\', \'sausage\', \'spam\']\n >>> list(values)\n [2, 1, 1, 500]\n\n >>> # view objects are dynamic and reflect dict changes\n >>> del dishes[\'eggs\']\n >>> del dishes[\'sausage\']\n >>> list(keys)\n [\'spam\', \'bacon\']\n\n >>> # set operations\n >>> keys & {\'eggs\', \'bacon\', \'salad\'}\n {\'bacon\'}\n', + 'typesmapping': u'\nMapping Types --- ``dict``\n**************************\n\nA *mapping* object maps *hashable* values to arbitrary objects.\nMappings are mutable objects. There is currently only one standard\nmapping type, the *dictionary*. (For other containers see the built\nin ``list``, ``set``, and ``tuple`` classes, and the ``collections``\nmodule.)\n\nA dictionary\'s keys are *almost* arbitrary values. Values that are\nnot *hashable*, that is, values containing lists, dictionaries or\nother mutable types (that are compared by value rather than by object\nidentity) may not be used as keys. 
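A small sketch of these key restrictions (the dictionary ``d`` is invented for the example; the behaviour of numeric keys is described in the lines that follow):

   >>> d = {}
   >>> d[[1, 2]] = 'x'                   # mutable values are not hashable
   Traceback (most recent call last):
     ...
   TypeError: unhashable type: 'list'
   >>> d[(1, 2)] = 'x'                   # a tuple of hashable values is fine
   >>> d[1] = 'one'
   >>> d[1.0]                            # 1 and 1.0 select the same entry
   'one'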
Numeric types used for keys obey\nthe normal rules for numeric comparison: if two numbers compare equal\n(such as ``1`` and ``1.0``) then they can be used interchangeably to\nindex the same dictionary entry. (Note however, that since computers\nstore floating-point numbers as approximations it is usually unwise to\nuse them as dictionary keys.)\n\nDictionaries can be created by placing a comma-separated list of\n``key: value`` pairs within braces, for example: ``{\'jack\': 4098,\n\'sjoerd\': 4127}`` or ``{4098: \'jack\', 4127: \'sjoerd\'}``, or by the\n``dict`` constructor.\n\nclass class dict([arg])\n\n Return a new dictionary initialized from an optional positional\n argument or from a set of keyword arguments. If no arguments are\n given, return a new empty dictionary. If the positional argument\n *arg* is a mapping object, return a dictionary mapping the same\n keys to the same values as does the mapping object. Otherwise the\n positional argument must be a sequence, a container that supports\n iteration, or an iterator object. The elements of the argument\n must each also be of one of those kinds, and each must in turn\n contain exactly two objects. The first is used as a key in the new\n dictionary, and the second as the key\'s value. If a given key is\n seen more than once, the last value associated with it is retained\n in the new dictionary.\n\n If keyword arguments are given, the keywords themselves with their\n associated values are added as items to the dictionary. If a key is\n specified both in the positional argument and as a keyword\n argument, the value associated with the keyword is retained in the\n dictionary. For example, these all return a dictionary equal to\n ``{"one": 1, "two": 2}``:\n\n * ``dict(one=1, two=2)``\n\n * ``dict({\'one\': 1, \'two\': 2})``\n\n * ``dict(zip((\'one\', \'two\'), (1, 2)))``\n\n * ``dict([[\'two\', 2], [\'one\', 1]])``\n\n The first example only works for keys that are valid Python\n identifiers; the others work with any valid keys.\n\n New in version 2.2.\n\n Changed in version 2.3: Support for building a dictionary from\n keyword arguments added.\n\n These are the operations that dictionaries support (and therefore,\n custom mapping types should support too):\n\n len(d)\n\n Return the number of items in the dictionary *d*.\n\n d[key]\n\n Return the item of *d* with key *key*. Raises a ``KeyError`` if\n *key* is not in the map.\n\n New in version 2.5: If a subclass of dict defines a method\n ``__missing__()``, if the key *key* is not present, the\n ``d[key]`` operation calls that method with the key *key* as\n argument. The ``d[key]`` operation then returns or raises\n whatever is returned or raised by the ``__missing__(key)`` call\n if the key is not present. No other operations or methods invoke\n ``__missing__()``. If ``__missing__()`` is not defined,\n ``KeyError`` is raised. ``__missing__()`` must be a method; it\n cannot be an instance variable. For an example, see\n ``collections.defaultdict``.\n\n d[key] = value\n\n Set ``d[key]`` to *value*.\n\n del d[key]\n\n Remove ``d[key]`` from *d*. Raises a ``KeyError`` if *key* is\n not in the map.\n\n key in d\n\n Return ``True`` if *d* has a key *key*, else ``False``.\n\n New in version 2.2.\n\n key not in d\n\n Equivalent to ``not key in d``.\n\n New in version 2.2.\n\n iter(d)\n\n Return an iterator over the keys of the dictionary. 
This is a\n shortcut for ``iterkeys()``.\n\n clear()\n\n Remove all items from the dictionary.\n\n copy()\n\n Return a shallow copy of the dictionary.\n\n fromkeys(seq[, value])\n\n Create a new dictionary with keys from *seq* and values set to\n *value*.\n\n ``fromkeys()`` is a class method that returns a new dictionary.\n *value* defaults to ``None``.\n\n New in version 2.3.\n\n get(key[, default])\n\n Return the value for *key* if *key* is in the dictionary, else\n *default*. If *default* is not given, it defaults to ``None``,\n so that this method never raises a ``KeyError``.\n\n has_key(key)\n\n Test for the presence of *key* in the dictionary. ``has_key()``\n is deprecated in favor of ``key in d``.\n\n items()\n\n Return a copy of the dictionary\'s list of ``(key, value)``\n pairs.\n\n **CPython implementation detail:** Keys and values are listed in\n an arbitrary order which is non-random, varies across Python\n implementations, and depends on the dictionary\'s history of\n insertions and deletions.\n\n If ``items()``, ``keys()``, ``values()``, ``iteritems()``,\n ``iterkeys()``, and ``itervalues()`` are called with no\n intervening modifications to the dictionary, the lists will\n directly correspond. This allows the creation of ``(value,\n key)`` pairs using ``zip()``: ``pairs = zip(d.values(),\n d.keys())``. The same relationship holds for the ``iterkeys()``\n and ``itervalues()`` methods: ``pairs = zip(d.itervalues(),\n d.iterkeys())`` provides the same value for ``pairs``. Another\n way to create the same list is ``pairs = [(v, k) for (k, v) in\n d.iteritems()]``.\n\n iteritems()\n\n Return an iterator over the dictionary\'s ``(key, value)`` pairs.\n See the note for ``dict.items()``.\n\n Using ``iteritems()`` while adding or deleting entries in the\n dictionary may raise a ``RuntimeError`` or fail to iterate over\n all entries.\n\n New in version 2.2.\n\n iterkeys()\n\n Return an iterator over the dictionary\'s keys. See the note for\n ``dict.items()``.\n\n Using ``iterkeys()`` while adding or deleting entries in the\n dictionary may raise a ``RuntimeError`` or fail to iterate over\n all entries.\n\n New in version 2.2.\n\n itervalues()\n\n Return an iterator over the dictionary\'s values. See the note\n for ``dict.items()``.\n\n Using ``itervalues()`` while adding or deleting entries in the\n dictionary may raise a ``RuntimeError`` or fail to iterate over\n all entries.\n\n New in version 2.2.\n\n keys()\n\n Return a copy of the dictionary\'s list of keys. See the note\n for ``dict.items()``.\n\n pop(key[, default])\n\n If *key* is in the dictionary, remove it and return its value,\n else return *default*. If *default* is not given and *key* is\n not in the dictionary, a ``KeyError`` is raised.\n\n New in version 2.3.\n\n popitem()\n\n Remove and return an arbitrary ``(key, value)`` pair from the\n dictionary.\n\n ``popitem()`` is useful to destructively iterate over a\n dictionary, as often used in set algorithms. If the dictionary\n is empty, calling ``popitem()`` raises a ``KeyError``.\n\n setdefault(key[, default])\n\n If *key* is in the dictionary, return its value. If not, insert\n *key* with a value of *default* and return *default*. *default*\n defaults to ``None``.\n\n update([other])\n\n Update the dictionary with the key/value pairs from *other*,\n overwriting existing keys. Return ``None``.\n\n ``update()`` accepts either another dictionary object or an\n iterable of key/value pairs (as tuples or other iterables of\n length two). 
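A few of the methods listed above in action; the dictionary ``d`` below is purely illustrative:

   >>> d = {'spam': 2}
   >>> d.get('eggs')                     # missing key: returns None, no KeyError
   >>> d.get('eggs', 0)
   0
   >>> d.setdefault('eggs', 0)           # inserts the default and returns it
   0
   >>> d.pop('spam')
   2
   >>> d.update({'bacon': 1, 'eggs': 4})
   >>> sorted(d.items())
   [('bacon', 1), ('eggs', 4)]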
If keyword arguments are specified, the dictionary\n is then updated with those key/value pairs: ``d.update(red=1,\n blue=2)``.\n\n Changed in version 2.4: Allowed the argument to be an iterable\n of key/value pairs and allowed keyword arguments.\n\n values()\n\n Return a copy of the dictionary\'s list of values. See the note\n for ``dict.items()``.\n\n viewitems()\n\n Return a new view of the dictionary\'s items (``(key, value)``\n pairs). See below for documentation of view objects.\n\n New in version 2.7.\n\n viewkeys()\n\n Return a new view of the dictionary\'s keys. See below for\n documentation of view objects.\n\n New in version 2.7.\n\n viewvalues()\n\n Return a new view of the dictionary\'s values. See below for\n documentation of view objects.\n\n New in version 2.7.\n\n\nDictionary view objects\n=======================\n\nThe objects returned by ``dict.viewkeys()``, ``dict.viewvalues()`` and\n``dict.viewitems()`` are *view objects*. They provide a dynamic view\non the dictionary\'s entries, which means that when the dictionary\nchanges, the view reflects these changes.\n\nDictionary views can be iterated over to yield their respective data,\nand support membership tests:\n\nlen(dictview)\n\n Return the number of entries in the dictionary.\n\niter(dictview)\n\n Return an iterator over the keys, values or items (represented as\n tuples of ``(key, value)``) in the dictionary.\n\n Keys and values are iterated over in an arbitrary order which is\n non-random, varies across Python implementations, and depends on\n the dictionary\'s history of insertions and deletions. If keys,\n values and items views are iterated over with no intervening\n modifications to the dictionary, the order of items will directly\n correspond. This allows the creation of ``(value, key)`` pairs\n using ``zip()``: ``pairs = zip(d.values(), d.keys())``. Another\n way to create the same list is ``pairs = [(v, k) for (k, v) in\n d.items()]``.\n\n Iterating views while adding or deleting entries in the dictionary\n may raise a ``RuntimeError`` or fail to iterate over all entries.\n\nx in dictview\n\n Return ``True`` if *x* is in the underlying dictionary\'s keys,\n values or items (in the latter case, *x* should be a ``(key,\n value)`` tuple).\n\nKeys views are set-like since their entries are unique and hashable.\nIf all values are hashable, so that (key, value) pairs are unique and\nhashable, then the items view is also set-like. (Values views are not\ntreated as set-like since the entries are generally not unique.) Then\nthese set operations are available ("other" refers either to another\nview or a set):\n\ndictview & other\n\n Return the intersection of the dictview and the other object as a\n new set.\n\ndictview | other\n\n Return the union of the dictview and the other object as a new set.\n\ndictview - other\n\n Return the difference between the dictview and the other object\n (all elements in *dictview* that aren\'t in *other*) as a new set.\n\ndictview ^ other\n\n Return the symmetric difference (all elements either in *dictview*\n or *other*, but not in both) of the dictview and the other object\n as a new set.\n\nAn example of dictionary view usage:\n\n >>> dishes = {\'eggs\': 2, \'sausage\': 1, \'bacon\': 1, \'spam\': 500}\n >>> keys = dishes.viewkeys()\n >>> values = dishes.viewvalues()\n\n >>> # iteration\n >>> n = 0\n >>> for val in values:\n ... 
n += val\n >>> print(n)\n 504\n\n >>> # keys and values are iterated over in the same order\n >>> list(keys)\n [\'eggs\', \'bacon\', \'sausage\', \'spam\']\n >>> list(values)\n [2, 1, 1, 500]\n\n >>> # view objects are dynamic and reflect dict changes\n >>> del dishes[\'eggs\']\n >>> del dishes[\'sausage\']\n >>> list(keys)\n [\'spam\', \'bacon\']\n\n >>> # set operations\n >>> keys & {\'eggs\', \'bacon\', \'salad\'}\n {\'bacon\'}\n', 'typesmethods': u"\nMethods\n*******\n\nMethods are functions that are called using the attribute notation.\nThere are two flavors: built-in methods (such as ``append()`` on\nlists) and class instance methods. Built-in methods are described\nwith the types that support them.\n\nThe implementation adds two special read-only attributes to class\ninstance methods: ``m.im_self`` is the object on which the method\noperates, and ``m.im_func`` is the function implementing the method.\nCalling ``m(arg-1, arg-2, ..., arg-n)`` is completely equivalent to\ncalling ``m.im_func(m.im_self, arg-1, arg-2, ..., arg-n)``.\n\nClass instance methods are either *bound* or *unbound*, referring to\nwhether the method was accessed through an instance or a class,\nrespectively. When a method is unbound, its ``im_self`` attribute\nwill be ``None`` and if called, an explicit ``self`` object must be\npassed as the first argument. In this case, ``self`` must be an\ninstance of the unbound method's class (or a subclass of that class),\notherwise a ``TypeError`` is raised.\n\nLike function objects, methods objects support getting arbitrary\nattributes. However, since method attributes are actually stored on\nthe underlying function object (``meth.im_func``), setting method\nattributes on either bound or unbound methods is disallowed.\nAttempting to set a method attribute results in a ``TypeError`` being\nraised. In order to set a method attribute, you need to explicitly\nset it on the underlying function object:\n\n class C:\n def method(self):\n pass\n\n c = C()\n c.method.im_func.whoami = 'my name is c'\n\nSee *The standard type hierarchy* for more information.\n", 'typesmodules': u"\nModules\n*******\n\nThe only special operation on a module is attribute access:\n``m.name``, where *m* is a module and *name* accesses a name defined\nin *m*'s symbol table. Module attributes can be assigned to. (Note\nthat the ``import`` statement is not, strictly speaking, an operation\non a module object; ``import foo`` does not require a module object\nnamed *foo* to exist, rather it requires an (external) *definition*\nfor a module named *foo* somewhere.)\n\nA special member of every module is ``__dict__``. This is the\ndictionary containing the module's symbol table. Modifying this\ndictionary will actually change the module's symbol table, but direct\nassignment to the ``__dict__`` attribute is not possible (you can\nwrite ``m.__dict__['a'] = 1``, which defines ``m.a`` to be ``1``, but\nyou can't write ``m.__dict__ = {}``). Modifying ``__dict__`` directly\nis not recommended.\n\nModules built into the interpreter are written like this: ````. 
If loaded from a file, they are written as\n````.\n", - 'typesseq': u'\nSequence Types --- ``str``, ``unicode``, ``list``, ``tuple``, ``buffer``, ``xrange``\n************************************************************************************\n\nThere are six sequence types: strings, Unicode strings, lists, tuples,\nbuffers, and xrange objects.\n\nFor other containers see the built in ``dict`` and ``set`` classes,\nand the ``collections`` module.\n\nString literals are written in single or double quotes: ``\'xyzzy\'``,\n``"frobozz"``. See *String literals* for more about string literals.\nUnicode strings are much like strings, but are specified in the syntax\nusing a preceding ``\'u\'`` character: ``u\'abc\'``, ``u"def"``. In\naddition to the functionality described here, there are also string-\nspecific methods described in the *String Methods* section. Lists are\nconstructed with square brackets, separating items with commas: ``[a,\nb, c]``. Tuples are constructed by the comma operator (not within\nsquare brackets), with or without enclosing parentheses, but an empty\ntuple must have the enclosing parentheses, such as ``a, b, c`` or\n``()``. A single item tuple must have a trailing comma, such as\n``(d,)``.\n\nBuffer objects are not directly supported by Python syntax, but can be\ncreated by calling the built-in function ``buffer()``. They don\'t\nsupport concatenation or repetition.\n\nObjects of type xrange are similar to buffers in that there is no\nspecific syntax to create them, but they are created using the\n``xrange()`` function. They don\'t support slicing, concatenation or\nrepetition, and using ``in``, ``not in``, ``min()`` or ``max()`` on\nthem is inefficient.\n\nMost sequence types support the following operations. The ``in`` and\n``not in`` operations have the same priorities as the comparison\noperations. The ``+`` and ``*`` operations have the same priority as\nthe corresponding numeric operations. [3] Additional methods are\nprovided for *Mutable Sequence Types*.\n\nThis table lists the sequence operations sorted in ascending priority\n(operations in the same box have the same priority). 
In the table,\n*s* and *t* are sequences of the same type; *n*, *i* and *j* are\nintegers:\n\n+--------------------+----------------------------------+------------+\n| Operation | Result | Notes |\n+====================+==================================+============+\n| ``x in s`` | ``True`` if an item of *s* is | (1) |\n| | equal to *x*, else ``False`` | |\n+--------------------+----------------------------------+------------+\n| ``x not in s`` | ``False`` if an item of *s* is | (1) |\n| | equal to *x*, else ``True`` | |\n+--------------------+----------------------------------+------------+\n| ``s + t`` | the concatenation of *s* and *t* | (6) |\n+--------------------+----------------------------------+------------+\n| ``s * n, n * s`` | *n* shallow copies of *s* | (2) |\n| | concatenated | |\n+--------------------+----------------------------------+------------+\n| ``s[i]`` | *i*\'th item of *s*, origin 0 | (3) |\n+--------------------+----------------------------------+------------+\n| ``s[i:j]`` | slice of *s* from *i* to *j* | (3)(4) |\n+--------------------+----------------------------------+------------+\n| ``s[i:j:k]`` | slice of *s* from *i* to *j* | (3)(5) |\n| | with step *k* | |\n+--------------------+----------------------------------+------------+\n| ``len(s)`` | length of *s* | |\n+--------------------+----------------------------------+------------+\n| ``min(s)`` | smallest item of *s* | |\n+--------------------+----------------------------------+------------+\n| ``max(s)`` | largest item of *s* | |\n+--------------------+----------------------------------+------------+\n\nSequence types also support comparisons. In particular, tuples and\nlists are compared lexicographically by comparing corresponding\nelements. This means that to compare equal, every element must compare\nequal and the two sequences must be of the same type and have the same\nlength. (For full details see *Comparisons* in the language\nreference.)\n\nNotes:\n\n1. When *s* is a string or Unicode string object the ``in`` and ``not\n in`` operations act like a substring test. In Python versions\n before 2.3, *x* had to be a string of length 1. In Python 2.3 and\n beyond, *x* may be a string of any length.\n\n2. Values of *n* less than ``0`` are treated as ``0`` (which yields an\n empty sequence of the same type as *s*). Note also that the copies\n are shallow; nested structures are not copied. This often haunts\n new Python programmers; consider:\n\n >>> lists = [[]] * 3\n >>> lists\n [[], [], []]\n >>> lists[0].append(3)\n >>> lists\n [[3], [3], [3]]\n\n What has happened is that ``[[]]`` is a one-element list containing\n an empty list, so all three elements of ``[[]] * 3`` are (pointers\n to) this single empty list. Modifying any of the elements of\n ``lists`` modifies this single list. You can create a list of\n different lists this way:\n\n >>> lists = [[] for i in range(3)]\n >>> lists[0].append(3)\n >>> lists[1].append(5)\n >>> lists[2].append(7)\n >>> lists\n [[3], [5], [7]]\n\n3. If *i* or *j* is negative, the index is relative to the end of the\n string: ``len(s) + i`` or ``len(s) + j`` is substituted. But note\n that ``-0`` is still ``0``.\n\n4. The slice of *s* from *i* to *j* is defined as the sequence of\n items with index *k* such that ``i <= k < j``. If *i* or *j* is\n greater than ``len(s)``, use ``len(s)``. If *i* is omitted or\n ``None``, use ``0``. If *j* is omitted or ``None``, use\n ``len(s)``. If *i* is greater than or equal to *j*, the slice is\n empty.\n\n5. 
The slice of *s* from *i* to *j* with step *k* is defined as the\n sequence of items with index ``x = i + n*k`` such that ``0 <= n <\n (j-i)/k``. In other words, the indices are ``i``, ``i+k``,\n ``i+2*k``, ``i+3*k`` and so on, stopping when *j* is reached (but\n never including *j*). If *i* or *j* is greater than ``len(s)``,\n use ``len(s)``. If *i* or *j* are omitted or ``None``, they become\n "end" values (which end depends on the sign of *k*). Note, *k*\n cannot be zero. If *k* is ``None``, it is treated like ``1``.\n\n6. **CPython implementation detail:** If *s* and *t* are both strings,\n some Python implementations such as CPython can usually perform an\n in-place optimization for assignments of the form ``s = s + t`` or\n ``s += t``. When applicable, this optimization makes quadratic\n run-time much less likely. This optimization is both version and\n implementation dependent. For performance sensitive code, it is\n preferable to use the ``str.join()`` method which assures\n consistent linear concatenation performance across versions and\n implementations.\n\n Changed in version 2.4: Formerly, string concatenation never\n occurred in-place.\n\n\nString Methods\n==============\n\nBelow are listed the string methods which both 8-bit strings and\nUnicode objects support.\n\nIn addition, Python\'s strings support the sequence type methods\ndescribed in the *Sequence Types --- str, unicode, list, tuple,\nbuffer, xrange* section. To output formatted strings use template\nstrings or the ``%`` operator described in the *String Formatting\nOperations* section. Also, see the ``re`` module for string functions\nbased on regular expressions.\n\nstr.capitalize()\n\n Return a copy of the string with only its first character\n capitalized.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.center(width[, fillchar])\n\n Return centered in a string of length *width*. Padding is done\n using the specified *fillchar* (default is a space).\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.count(sub[, start[, end]])\n\n Return the number of non-overlapping occurrences of substring *sub*\n in the range [*start*, *end*]. Optional arguments *start* and\n *end* are interpreted as in slice notation.\n\nstr.decode([encoding[, errors]])\n\n Decodes the string using the codec registered for *encoding*.\n *encoding* defaults to the default string encoding. *errors* may\n be given to set a different error handling scheme. The default is\n ``\'strict\'``, meaning that encoding errors raise ``UnicodeError``.\n Other possible values are ``\'ignore\'``, ``\'replace\'`` and any other\n name registered via ``codecs.register_error()``, see section *Codec\n Base Classes*.\n\n New in version 2.2.\n\n Changed in version 2.3: Support for other error handling schemes\n added.\n\n Changed in version 2.7: Support for keyword arguments added.\n\nstr.encode([encoding[, errors]])\n\n Return an encoded version of the string. Default encoding is the\n current default string encoding. *errors* may be given to set a\n different error handling scheme. The default for *errors* is\n ``\'strict\'``, meaning that encoding errors raise a\n ``UnicodeError``. Other possible values are ``\'ignore\'``,\n ``\'replace\'``, ``\'xmlcharrefreplace\'``, ``\'backslashreplace\'`` and\n any other name registered via ``codecs.register_error()``, see\n section *Codec Base Classes*. 
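A minimal sketch of ``decode()``/``encode()`` and the error-handling schemes mentioned above (the byte strings are just examples):

   >>> 'caf\xc3\xa9'.decode('utf-8')     # byte string -> unicode
   u'caf\xe9'
   >>> u'caf\xe9'.encode('utf-8')        # unicode -> byte string
   'caf\xc3\xa9'
   >>> u'caf\xe9'.encode('ascii', 'replace')
   'caf?'
   >>> u'caf\xe9'.encode('ascii', 'ignore')
   'caf'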
For a list of possible encodings, see\n section *Standard Encodings*.\n\n New in version 2.0.\n\n Changed in version 2.3: Support for ``\'xmlcharrefreplace\'`` and\n ``\'backslashreplace\'`` and other error handling schemes added.\n\n Changed in version 2.7: Support for keyword arguments added.\n\nstr.endswith(suffix[, start[, end]])\n\n Return ``True`` if the string ends with the specified *suffix*,\n otherwise return ``False``. *suffix* can also be a tuple of\n suffixes to look for. With optional *start*, test beginning at\n that position. With optional *end*, stop comparing at that\n position.\n\n Changed in version 2.5: Accept tuples as *suffix*.\n\nstr.expandtabs([tabsize])\n\n Return a copy of the string where all tab characters are replaced\n by one or more spaces, depending on the current column and the\n given tab size. The column number is reset to zero after each\n newline occurring in the string. If *tabsize* is not given, a tab\n size of ``8`` characters is assumed. This doesn\'t understand other\n non-printing characters or escape sequences.\n\nstr.find(sub[, start[, end]])\n\n Return the lowest index in the string where substring *sub* is\n found, such that *sub* is contained in the slice ``s[start:end]``.\n Optional arguments *start* and *end* are interpreted as in slice\n notation. Return ``-1`` if *sub* is not found.\n\nstr.format(*args, **kwargs)\n\n Perform a string formatting operation. The string on which this\n method is called can contain literal text or replacement fields\n delimited by braces ``{}``. Each replacement field contains either\n the numeric index of a positional argument, or the name of a\n keyword argument. Returns a copy of the string where each\n replacement field is replaced with the string value of the\n corresponding argument.\n\n >>> "The sum of 1 + 2 is {0}".format(1+2)\n \'The sum of 1 + 2 is 3\'\n\n See *Format String Syntax* for a description of the various\n formatting options that can be specified in format strings.\n\n This method of string formatting is the new standard in Python 3.0,\n and should be preferred to the ``%`` formatting described in\n *String Formatting Operations* in new code.\n\n New in version 2.6.\n\nstr.index(sub[, start[, end]])\n\n Like ``find()``, but raise ``ValueError`` when the substring is not\n found.\n\nstr.isalnum()\n\n Return true if all characters in the string are alphanumeric and\n there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isalpha()\n\n Return true if all characters in the string are alphabetic and\n there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isdigit()\n\n Return true if all characters in the string are digits and there is\n at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.islower()\n\n Return true if all cased characters in the string are lowercase and\n there is at least one cased character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isspace()\n\n Return true if there are only whitespace characters in the string\n and there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.istitle()\n\n Return true if the string is a titlecased string and there is at\n least one character, for example uppercase characters may only\n follow uncased characters and lowercase characters only cased ones.\n Return false otherwise.\n\n For 
8-bit strings, this method is locale-dependent.\n\nstr.isupper()\n\n Return true if all cased characters in the string are uppercase and\n there is at least one cased character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.join(iterable)\n\n Return a string which is the concatenation of the strings in the\n *iterable* *iterable*. The separator between elements is the\n string providing this method.\n\nstr.ljust(width[, fillchar])\n\n Return the string left justified in a string of length *width*.\n Padding is done using the specified *fillchar* (default is a\n space). The original string is returned if *width* is less than\n ``len(s)``.\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.lower()\n\n Return a copy of the string converted to lowercase.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.lstrip([chars])\n\n Return a copy of the string with leading characters removed. The\n *chars* argument is a string specifying the set of characters to be\n removed. If omitted or ``None``, the *chars* argument defaults to\n removing whitespace. The *chars* argument is not a prefix; rather,\n all combinations of its values are stripped:\n\n >>> \' spacious \'.lstrip()\n \'spacious \'\n >>> \'www.example.com\'.lstrip(\'cmowz.\')\n \'example.com\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.partition(sep)\n\n Split the string at the first occurrence of *sep*, and return a\n 3-tuple containing the part before the separator, the separator\n itself, and the part after the separator. If the separator is not\n found, return a 3-tuple containing the string itself, followed by\n two empty strings.\n\n New in version 2.5.\n\nstr.replace(old, new[, count])\n\n Return a copy of the string with all occurrences of substring *old*\n replaced by *new*. If the optional argument *count* is given, only\n the first *count* occurrences are replaced.\n\nstr.rfind(sub[, start[, end]])\n\n Return the highest index in the string where substring *sub* is\n found, such that *sub* is contained within ``s[start:end]``.\n Optional arguments *start* and *end* are interpreted as in slice\n notation. Return ``-1`` on failure.\n\nstr.rindex(sub[, start[, end]])\n\n Like ``rfind()`` but raises ``ValueError`` when the substring *sub*\n is not found.\n\nstr.rjust(width[, fillchar])\n\n Return the string right justified in a string of length *width*.\n Padding is done using the specified *fillchar* (default is a\n space). The original string is returned if *width* is less than\n ``len(s)``.\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.rpartition(sep)\n\n Split the string at the last occurrence of *sep*, and return a\n 3-tuple containing the part before the separator, the separator\n itself, and the part after the separator. If the separator is not\n found, return a 3-tuple containing two empty strings, followed by\n the string itself.\n\n New in version 2.5.\n\nstr.rsplit([sep[, maxsplit]])\n\n Return a list of the words in the string, using *sep* as the\n delimiter string. If *maxsplit* is given, at most *maxsplit* splits\n are done, the *rightmost* ones. If *sep* is not specified or\n ``None``, any whitespace string is a separator. Except for\n splitting from the right, ``rsplit()`` behaves like ``split()``\n which is described in detail below.\n\n New in version 2.4.\n\nstr.rstrip([chars])\n\n Return a copy of the string with trailing characters removed. 
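Illustrative calls for ``join()``, ``partition()``, ``rpartition()`` and ``rsplit()``, which have no inline examples above (the strings are arbitrary):

   >>> '-'.join(['2012', '02', '01'])
   '2012-02-01'
   >>> 'www.example.com'.partition('.')  # split at the first separator
   ('www', '.', 'example.com')
   >>> 'www.example.com'.rpartition('.') # ... or at the last one
   ('www.example', '.', 'com')
   >>> 'a,b,c'.rsplit(',', 1)            # at most one split, from the right
   ['a,b', 'c']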
The\n *chars* argument is a string specifying the set of characters to be\n removed. If omitted or ``None``, the *chars* argument defaults to\n removing whitespace. The *chars* argument is not a suffix; rather,\n all combinations of its values are stripped:\n\n >>> \' spacious \'.rstrip()\n \' spacious\'\n >>> \'mississippi\'.rstrip(\'ipz\')\n \'mississ\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.split([sep[, maxsplit]])\n\n Return a list of the words in the string, using *sep* as the\n delimiter string. If *maxsplit* is given, at most *maxsplit*\n splits are done (thus, the list will have at most ``maxsplit+1``\n elements). If *maxsplit* is not specified, then there is no limit\n on the number of splits (all possible splits are made).\n\n If *sep* is given, consecutive delimiters are not grouped together\n and are deemed to delimit empty strings (for example,\n ``\'1,,2\'.split(\',\')`` returns ``[\'1\', \'\', \'2\']``). The *sep*\n argument may consist of multiple characters (for example,\n ``\'1<>2<>3\'.split(\'<>\')`` returns ``[\'1\', \'2\', \'3\']``). Splitting\n an empty string with a specified separator returns ``[\'\']``.\n\n If *sep* is not specified or is ``None``, a different splitting\n algorithm is applied: runs of consecutive whitespace are regarded\n as a single separator, and the result will contain no empty strings\n at the start or end if the string has leading or trailing\n whitespace. Consequently, splitting an empty string or a string\n consisting of just whitespace with a ``None`` separator returns\n ``[]``.\n\n For example, ``\' 1 2 3 \'.split()`` returns ``[\'1\', \'2\', \'3\']``,\n and ``\' 1 2 3 \'.split(None, 1)`` returns ``[\'1\', \'2 3 \']``.\n\nstr.splitlines([keepends])\n\n Return a list of the lines in the string, breaking at line\n boundaries. Line breaks are not included in the resulting list\n unless *keepends* is given and true.\n\nstr.startswith(prefix[, start[, end]])\n\n Return ``True`` if string starts with the *prefix*, otherwise\n return ``False``. *prefix* can also be a tuple of prefixes to look\n for. With optional *start*, test string beginning at that\n position. With optional *end*, stop comparing string at that\n position.\n\n Changed in version 2.5: Accept tuples as *prefix*.\n\nstr.strip([chars])\n\n Return a copy of the string with the leading and trailing\n characters removed. The *chars* argument is a string specifying the\n set of characters to be removed. If omitted or ``None``, the\n *chars* argument defaults to removing whitespace. The *chars*\n argument is not a prefix or suffix; rather, all combinations of its\n values are stripped:\n\n >>> \' spacious \'.strip()\n \'spacious\'\n >>> \'www.example.com\'.strip(\'cmowz.\')\n \'example\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.swapcase()\n\n Return a copy of the string with uppercase characters converted to\n lowercase and vice versa.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.title()\n\n Return a titlecased version of the string where words start with an\n uppercase character and the remaining characters are lowercase.\n\n The algorithm uses a simple language-independent definition of a\n word as groups of consecutive letters. 
The definition works in\n many contexts but it means that apostrophes in contractions and\n possessives form word boundaries, which may not be the desired\n result:\n\n >>> "they\'re bill\'s friends from the UK".title()\n "They\'Re Bill\'S Friends From The Uk"\n\n A workaround for apostrophes can be constructed using regular\n expressions:\n\n >>> import re\n >>> def titlecase(s):\n return re.sub(r"[A-Za-z]+(\'[A-Za-z]+)?",\n lambda mo: mo.group(0)[0].upper() +\n mo.group(0)[1:].lower(),\n s)\n\n >>> titlecase("they\'re bill\'s friends.")\n "They\'re Bill\'s Friends."\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.translate(table[, deletechars])\n\n Return a copy of the string where all characters occurring in the\n optional argument *deletechars* are removed, and the remaining\n characters have been mapped through the given translation table,\n which must be a string of length 256.\n\n You can use the ``maketrans()`` helper function in the ``string``\n module to create a translation table. For string objects, set the\n *table* argument to ``None`` for translations that only delete\n characters:\n\n >>> \'read this short text\'.translate(None, \'aeiou\')\n \'rd ths shrt txt\'\n\n New in version 2.6: Support for a ``None`` *table* argument.\n\n For Unicode objects, the ``translate()`` method does not accept the\n optional *deletechars* argument. Instead, it returns a copy of the\n *s* where all characters have been mapped through the given\n translation table which must be a mapping of Unicode ordinals to\n Unicode ordinals, Unicode strings or ``None``. Unmapped characters\n are left untouched. Characters mapped to ``None`` are deleted.\n Note, a more flexible approach is to create a custom character\n mapping codec using the ``codecs`` module (see ``encodings.cp1251``\n for an example).\n\nstr.upper()\n\n Return a copy of the string converted to uppercase.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.zfill(width)\n\n Return the numeric string left filled with zeros in a string of\n length *width*. A sign prefix is handled correctly. The original\n string is returned if *width* is less than ``len(s)``.\n\n New in version 2.2.2.\n\nThe following methods are present only on unicode objects:\n\nunicode.isnumeric()\n\n Return ``True`` if there are only numeric characters in S,\n ``False`` otherwise. Numeric characters include digit characters,\n and all characters that have the Unicode numeric value property,\n e.g. U+2155, VULGAR FRACTION ONE FIFTH.\n\nunicode.isdecimal()\n\n Return ``True`` if there are only decimal characters in S,\n ``False`` otherwise. Decimal characters include digit characters,\n and all characters that that can be used to form decimal-radix\n numbers, e.g. U+0660, ARABIC-INDIC DIGIT ZERO.\n\n\nString Formatting Operations\n============================\n\nString and Unicode objects have one unique built-in operation: the\n``%`` operator (modulo). This is also known as the string\n*formatting* or *interpolation* operator. Given ``format % values``\n(where *format* is a string or Unicode object), ``%`` conversion\nspecifications in *format* are replaced with zero or more elements of\n*values*. The effect is similar to the using ``sprintf()`` in the C\nlanguage. If *format* is a Unicode object, or if any of the objects\nbeing converted using the ``%s`` conversion are Unicode objects, the\nresult will also be a Unicode object.\n\nIf *format* requires a single argument, *values* may be a single non-\ntuple object. 
[4] Otherwise, *values* must be a tuple with exactly\nthe number of items specified by the format string, or a single\nmapping object (for example, a dictionary).\n\nA conversion specifier contains two or more characters and has the\nfollowing components, which must occur in this order:\n\n1. The ``\'%\'`` character, which marks the start of the specifier.\n\n2. Mapping key (optional), consisting of a parenthesised sequence of\n characters (for example, ``(somename)``).\n\n3. Conversion flags (optional), which affect the result of some\n conversion types.\n\n4. Minimum field width (optional). If specified as an ``\'*\'``\n (asterisk), the actual width is read from the next element of the\n tuple in *values*, and the object to convert comes after the\n minimum field width and optional precision.\n\n5. Precision (optional), given as a ``\'.\'`` (dot) followed by the\n precision. If specified as ``\'*\'`` (an asterisk), the actual width\n is read from the next element of the tuple in *values*, and the\n value to convert comes after the precision.\n\n6. Length modifier (optional).\n\n7. Conversion type.\n\nWhen the right argument is a dictionary (or other mapping type), then\nthe formats in the string *must* include a parenthesised mapping key\ninto that dictionary inserted immediately after the ``\'%\'`` character.\nThe mapping key selects the value to be formatted from the mapping.\nFor example:\n\n>>> print \'%(language)s has %(#)03d quote types.\' % \\\n... {\'language\': "Python", "#": 2}\nPython has 002 quote types.\n\nIn this case no ``*`` specifiers may occur in a format (since they\nrequire a sequential parameter list).\n\nThe conversion flag characters are:\n\n+-----------+-----------------------------------------------------------------------+\n| Flag | Meaning |\n+===========+=======================================================================+\n| ``\'#\'`` | The value conversion will use the "alternate form" (where defined |\n| | below). |\n+-----------+-----------------------------------------------------------------------+\n| ``\'0\'`` | The conversion will be zero padded for numeric values. |\n+-----------+-----------------------------------------------------------------------+\n| ``\'-\'`` | The converted value is left adjusted (overrides the ``\'0\'`` |\n| | conversion if both are given). |\n+-----------+-----------------------------------------------------------------------+\n| ``\' \'`` | (a space) A blank should be left before a positive number (or empty |\n| | string) produced by a signed conversion. |\n+-----------+-----------------------------------------------------------------------+\n| ``\'+\'`` | A sign character (``\'+\'`` or ``\'-\'``) will precede the conversion |\n| | (overrides a "space" flag). |\n+-----------+-----------------------------------------------------------------------+\n\nA length modifier (``h``, ``l``, or ``L``) may be present, but is\nignored as it is not necessary for Python -- so e.g. ``%ld`` is\nidentical to ``%d``.\n\nThe conversion types are:\n\n+--------------+-------------------------------------------------------+---------+\n| Conversion | Meaning | Notes |\n+==============+=======================================================+=========+\n| ``\'d\'`` | Signed integer decimal. | |\n+--------------+-------------------------------------------------------+---------+\n| ``\'i\'`` | Signed integer decimal. | |\n+--------------+-------------------------------------------------------+---------+\n| ``\'o\'`` | Signed octal value. 
| (1) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'u\'`` | Obsolete type -- it is identical to ``\'d\'``. | (7) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'x\'`` | Signed hexadecimal (lowercase). | (2) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'X\'`` | Signed hexadecimal (uppercase). | (2) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'e\'`` | Floating point exponential format (lowercase). | (3) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'E\'`` | Floating point exponential format (uppercase). | (3) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'f\'`` | Floating point decimal format. | (3) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'F\'`` | Floating point decimal format. | (3) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'g\'`` | Floating point format. Uses lowercase exponential | (4) |\n| | format if exponent is less than -4 or not less than | |\n| | precision, decimal format otherwise. | |\n+--------------+-------------------------------------------------------+---------+\n| ``\'G\'`` | Floating point format. Uses uppercase exponential | (4) |\n| | format if exponent is less than -4 or not less than | |\n| | precision, decimal format otherwise. | |\n+--------------+-------------------------------------------------------+---------+\n| ``\'c\'`` | Single character (accepts integer or single character | |\n| | string). | |\n+--------------+-------------------------------------------------------+---------+\n| ``\'r\'`` | String (converts any Python object using ``repr()``). | (5) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'s\'`` | String (converts any Python object using ``str()``). | (6) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'%\'`` | No argument is converted, results in a ``\'%\'`` | |\n| | character in the result. | |\n+--------------+-------------------------------------------------------+---------+\n\nNotes:\n\n1. The alternate form causes a leading zero (``\'0\'``) to be inserted\n between left-hand padding and the formatting of the number if the\n leading character of the result is not already a zero.\n\n2. The alternate form causes a leading ``\'0x\'`` or ``\'0X\'`` (depending\n on whether the ``\'x\'`` or ``\'X\'`` format was used) to be inserted\n between left-hand padding and the formatting of the number if the\n leading character of the result is not already a zero.\n\n3. The alternate form causes the result to always contain a decimal\n point, even if no digits follow it.\n\n The precision determines the number of digits after the decimal\n point and defaults to 6.\n\n4. The alternate form causes the result to always contain a decimal\n point, and trailing zeroes are not removed as they would otherwise\n be.\n\n The precision determines the number of significant digits before\n and after the decimal point and defaults to 6.\n\n5. The ``%r`` conversion was added in Python 2.0.\n\n The precision determines the maximal number of characters used.\n\n6. 
If the object or format provided is a ``unicode`` string, the\n resulting string will also be ``unicode``.\n\n The precision determines the maximal number of characters used.\n\n7. See **PEP 237**.\n\nSince Python strings have an explicit length, ``%s`` conversions do\nnot assume that ``\'\\0\'`` is the end of the string.\n\nChanged in version 2.7: ``%f`` conversions for numbers whose absolute\nvalue is over 1e50 are no longer replaced by ``%g`` conversions.\n\nAdditional string operations are defined in standard modules\n``string`` and ``re``.\n\n\nXRange Type\n===========\n\nThe ``xrange`` type is an immutable sequence which is commonly used\nfor looping. The advantage of the ``xrange`` type is that an\n``xrange`` object will always take the same amount of memory, no\nmatter the size of the range it represents. There are no consistent\nperformance advantages.\n\nXRange objects have very little behavior: they only support indexing,\niteration, and the ``len()`` function.\n\n\nMutable Sequence Types\n======================\n\nList objects support additional operations that allow in-place\nmodification of the object. Other mutable sequence types (when added\nto the language) should also support these operations. Strings and\ntuples are immutable sequence types: such objects cannot be modified\nonce created. The following operations are defined on mutable sequence\ntypes (where *x* is an arbitrary object):\n\n+--------------------------------+----------------------------------+-----------------------+\n| Operation | Result | Notes |\n+================================+==================================+=======================+\n| ``s[i] = x`` | item *i* of *s* is replaced by | |\n| | *x* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s[i:j] = t`` | slice of *s* from *i* to *j* is | |\n| | replaced by the contents of the | |\n| | iterable *t* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``del s[i:j]`` | same as ``s[i:j] = []`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s[i:j:k] = t`` | the elements of ``s[i:j:k]`` are | (1) |\n| | replaced by those of *t* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``del s[i:j:k]`` | removes the elements of | |\n| | ``s[i:j:k]`` from the list | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.append(x)`` | same as ``s[len(s):len(s)] = | (2) |\n| | [x]`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.extend(x)`` | same as ``s[len(s):len(s)] = x`` | (3) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.count(x)`` | return number of *i*\'s for which | |\n| | ``s[i] == x`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.index(x[, i[, j]])`` | return smallest *k* such that | (4) |\n| | ``s[k] == x`` and ``i <= k < j`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.insert(i, x)`` | same as ``s[i:i] = [x]`` | (5) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.pop([i])`` | same as ``x = s[i]; del s[i]; | (6) |\n| | return x`` | 
|\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.remove(x)`` | same as ``del s[s.index(x)]`` | (4) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.reverse()`` | reverses the items of *s* in | (7) |\n| | place | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.sort([cmp[, key[, | sort the items of *s* in place | (7)(8)(9)(10) |\n| reverse]]])`` | | |\n+--------------------------------+----------------------------------+-----------------------+\n\nNotes:\n\n1. *t* must have the same length as the slice it is replacing.\n\n2. The C implementation of Python has historically accepted multiple\n parameters and implicitly joined them into a tuple; this no longer\n works in Python 2.0. Use of this misfeature has been deprecated\n since Python 1.4.\n\n3. *x* can be any iterable object.\n\n4. Raises ``ValueError`` when *x* is not found in *s*. When a negative\n index is passed as the second or third parameter to the ``index()``\n method, the list length is added, as for slice indices. If it is\n still negative, it is truncated to zero, as for slice indices.\n\n Changed in version 2.3: Previously, ``index()`` didn\'t have\n arguments for specifying start and stop positions.\n\n5. When a negative index is passed as the first parameter to the\n ``insert()`` method, the list length is added, as for slice\n indices. If it is still negative, it is truncated to zero, as for\n slice indices.\n\n Changed in version 2.3: Previously, all negative indices were\n truncated to zero.\n\n6. The ``pop()`` method is only supported by the list and array types.\n The optional argument *i* defaults to ``-1``, so that by default\n the last item is removed and returned.\n\n7. The ``sort()`` and ``reverse()`` methods modify the list in place\n for economy of space when sorting or reversing a large list. To\n remind you that they operate by side effect, they don\'t return the\n sorted or reversed list.\n\n8. The ``sort()`` method takes optional arguments for controlling the\n comparisons.\n\n *cmp* specifies a custom comparison function of two arguments (list\n items) which should return a negative, zero or positive number\n depending on whether the first argument is considered smaller than,\n equal to, or larger than the second argument: ``cmp=lambda x,y:\n cmp(x.lower(), y.lower())``. The default value is ``None``.\n\n *key* specifies a function of one argument that is used to extract\n a comparison key from each list element: ``key=str.lower``. The\n default value is ``None``.\n\n *reverse* is a boolean value. If set to ``True``, then the list\n elements are sorted as if each comparison were reversed.\n\n In general, the *key* and *reverse* conversion processes are much\n faster than specifying an equivalent *cmp* function. This is\n because *cmp* is called multiple times for each list element while\n *key* and *reverse* touch each element only once. Use\n ``functools.cmp_to_key()`` to convert an old-style *cmp* function\n to a *key* function.\n\n Changed in version 2.3: Support for ``None`` as an equivalent to\n omitting *cmp* was added.\n\n Changed in version 2.4: Support for *key* and *reverse* was added.\n\n9. Starting with Python 2.3, the ``sort()`` method is guaranteed to be\n stable. 
A sort is stable if it guarantees not to change the\n relative order of elements that compare equal --- this is helpful\n for sorting in multiple passes (for example, sort by department,\n then by salary grade).\n\n10. **CPython implementation detail:** While a list is being sorted,\n the effect of attempting to mutate, or even inspect, the list is\n undefined. The C implementation of Python 2.3 and newer makes the\n list appear empty for the duration, and raises ``ValueError`` if\n it can detect that the list has been mutated during a sort.\n', - 'typesseq-mutable': u"\nMutable Sequence Types\n**********************\n\nList objects support additional operations that allow in-place\nmodification of the object. Other mutable sequence types (when added\nto the language) should also support these operations. Strings and\ntuples are immutable sequence types: such objects cannot be modified\nonce created. The following operations are defined on mutable sequence\ntypes (where *x* is an arbitrary object):\n\n+--------------------------------+----------------------------------+-----------------------+\n| Operation | Result | Notes |\n+================================+==================================+=======================+\n| ``s[i] = x`` | item *i* of *s* is replaced by | |\n| | *x* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s[i:j] = t`` | slice of *s* from *i* to *j* is | |\n| | replaced by the contents of the | |\n| | iterable *t* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``del s[i:j]`` | same as ``s[i:j] = []`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s[i:j:k] = t`` | the elements of ``s[i:j:k]`` are | (1) |\n| | replaced by those of *t* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``del s[i:j:k]`` | removes the elements of | |\n| | ``s[i:j:k]`` from the list | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.append(x)`` | same as ``s[len(s):len(s)] = | (2) |\n| | [x]`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.extend(x)`` | same as ``s[len(s):len(s)] = x`` | (3) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.count(x)`` | return number of *i*'s for which | |\n| | ``s[i] == x`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.index(x[, i[, j]])`` | return smallest *k* such that | (4) |\n| | ``s[k] == x`` and ``i <= k < j`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.insert(i, x)`` | same as ``s[i:i] = [x]`` | (5) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.pop([i])`` | same as ``x = s[i]; del s[i]; | (6) |\n| | return x`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.remove(x)`` | same as ``del s[s.index(x)]`` | (4) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.reverse()`` | reverses the items of *s* in | (7) |\n| | place | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.sort([cmp[, key[, | sort the items of *s* in place | (7)(8)(9)(10) 
|\n| reverse]]])`` | | |\n+--------------------------------+----------------------------------+-----------------------+\n\nNotes:\n\n1. *t* must have the same length as the slice it is replacing.\n\n2. The C implementation of Python has historically accepted multiple\n parameters and implicitly joined them into a tuple; this no longer\n works in Python 2.0. Use of this misfeature has been deprecated\n since Python 1.4.\n\n3. *x* can be any iterable object.\n\n4. Raises ``ValueError`` when *x* is not found in *s*. When a negative\n index is passed as the second or third parameter to the ``index()``\n method, the list length is added, as for slice indices. If it is\n still negative, it is truncated to zero, as for slice indices.\n\n Changed in version 2.3: Previously, ``index()`` didn't have\n arguments for specifying start and stop positions.\n\n5. When a negative index is passed as the first parameter to the\n ``insert()`` method, the list length is added, as for slice\n indices. If it is still negative, it is truncated to zero, as for\n slice indices.\n\n Changed in version 2.3: Previously, all negative indices were\n truncated to zero.\n\n6. The ``pop()`` method is only supported by the list and array types.\n The optional argument *i* defaults to ``-1``, so that by default\n the last item is removed and returned.\n\n7. The ``sort()`` and ``reverse()`` methods modify the list in place\n for economy of space when sorting or reversing a large list. To\n remind you that they operate by side effect, they don't return the\n sorted or reversed list.\n\n8. The ``sort()`` method takes optional arguments for controlling the\n comparisons.\n\n *cmp* specifies a custom comparison function of two arguments (list\n items) which should return a negative, zero or positive number\n depending on whether the first argument is considered smaller than,\n equal to, or larger than the second argument: ``cmp=lambda x,y:\n cmp(x.lower(), y.lower())``. The default value is ``None``.\n\n *key* specifies a function of one argument that is used to extract\n a comparison key from each list element: ``key=str.lower``. The\n default value is ``None``.\n\n *reverse* is a boolean value. If set to ``True``, then the list\n elements are sorted as if each comparison were reversed.\n\n In general, the *key* and *reverse* conversion processes are much\n faster than specifying an equivalent *cmp* function. This is\n because *cmp* is called multiple times for each list element while\n *key* and *reverse* touch each element only once. Use\n ``functools.cmp_to_key()`` to convert an old-style *cmp* function\n to a *key* function.\n\n Changed in version 2.3: Support for ``None`` as an equivalent to\n omitting *cmp* was added.\n\n Changed in version 2.4: Support for *key* and *reverse* was added.\n\n9. Starting with Python 2.3, the ``sort()`` method is guaranteed to be\n stable. A sort is stable if it guarantees not to change the\n relative order of elements that compare equal --- this is helpful\n for sorting in multiple passes (for example, sort by department,\n then by salary grade).\n\n10. **CPython implementation detail:** While a list is being sorted,\n the effect of attempting to mutate, or even inspect, the list is\n undefined. 
The C implementation of Python 2.3 and newer makes the\n list appear empty for the duration, and raises ``ValueError`` if\n it can detect that the list has been mutated during a sort.\n", + 'typesseq': u'\nSequence Types --- ``str``, ``unicode``, ``list``, ``tuple``, ``bytearray``, ``buffer``, ``xrange``\n***************************************************************************************************\n\nThere are seven sequence types: strings, Unicode strings, lists,\ntuples, bytearrays, buffers, and xrange objects.\n\nFor other containers see the built in ``dict`` and ``set`` classes,\nand the ``collections`` module.\n\nString literals are written in single or double quotes: ``\'xyzzy\'``,\n``"frobozz"``. See *String literals* for more about string literals.\nUnicode strings are much like strings, but are specified in the syntax\nusing a preceding ``\'u\'`` character: ``u\'abc\'``, ``u"def"``. In\naddition to the functionality described here, there are also string-\nspecific methods described in the *String Methods* section. Lists are\nconstructed with square brackets, separating items with commas: ``[a,\nb, c]``. Tuples are constructed by the comma operator (not within\nsquare brackets), with or without enclosing parentheses, but an empty\ntuple must have the enclosing parentheses, such as ``a, b, c`` or\n``()``. A single item tuple must have a trailing comma, such as\n``(d,)``.\n\nBytearray objects are created with the built-in function\n``bytearray()``.\n\nBuffer objects are not directly supported by Python syntax, but can be\ncreated by calling the built-in function ``buffer()``. They don\'t\nsupport concatenation or repetition.\n\nObjects of type xrange are similar to buffers in that there is no\nspecific syntax to create them, but they are created using the\n``xrange()`` function. They don\'t support slicing, concatenation or\nrepetition, and using ``in``, ``not in``, ``min()`` or ``max()`` on\nthem is inefficient.\n\nMost sequence types support the following operations. The ``in`` and\n``not in`` operations have the same priorities as the comparison\noperations. The ``+`` and ``*`` operations have the same priority as\nthe corresponding numeric operations. [3] Additional methods are\nprovided for *Mutable Sequence Types*.\n\nThis table lists the sequence operations sorted in ascending priority\n(operations in the same box have the same priority). 
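Before the table itself, a quick doctest-style sketch of a few of these operations (membership, concatenation, slicing, repetition), assuming CPython 2.7:

   >>> s = 'Python'
   >>> 'th' in s
   True
   >>> s + '!', s[1:4], s[::2]
   ('Python!', 'yth', 'Pto')
   >>> [0, 1] * 3
   [0, 1, 0, 1, 0, 1]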
In the table,\n*s* and *t* are sequences of the same type; *n*, *i* and *j* are\nintegers:\n\n+--------------------+----------------------------------+------------+\n| Operation | Result | Notes |\n+====================+==================================+============+\n| ``x in s`` | ``True`` if an item of *s* is | (1) |\n| | equal to *x*, else ``False`` | |\n+--------------------+----------------------------------+------------+\n| ``x not in s`` | ``False`` if an item of *s* is | (1) |\n| | equal to *x*, else ``True`` | |\n+--------------------+----------------------------------+------------+\n| ``s + t`` | the concatenation of *s* and *t* | (6) |\n+--------------------+----------------------------------+------------+\n| ``s * n, n * s`` | *n* shallow copies of *s* | (2) |\n| | concatenated | |\n+--------------------+----------------------------------+------------+\n| ``s[i]`` | *i*\'th item of *s*, origin 0 | (3) |\n+--------------------+----------------------------------+------------+\n| ``s[i:j]`` | slice of *s* from *i* to *j* | (3)(4) |\n+--------------------+----------------------------------+------------+\n| ``s[i:j:k]`` | slice of *s* from *i* to *j* | (3)(5) |\n| | with step *k* | |\n+--------------------+----------------------------------+------------+\n| ``len(s)`` | length of *s* | |\n+--------------------+----------------------------------+------------+\n| ``min(s)`` | smallest item of *s* | |\n+--------------------+----------------------------------+------------+\n| ``max(s)`` | largest item of *s* | |\n+--------------------+----------------------------------+------------+\n| ``s.index(i)`` | index of the first occurence of | |\n| | *i* in *s* | |\n+--------------------+----------------------------------+------------+\n| ``s.count(i)`` | total number of occurences of | |\n| | *i* in *s* | |\n+--------------------+----------------------------------+------------+\n\nSequence types also support comparisons. In particular, tuples and\nlists are compared lexicographically by comparing corresponding\nelements. This means that to compare equal, every element must compare\nequal and the two sequences must be of the same type and have the same\nlength. (For full details see *Comparisons* in the language\nreference.)\n\nNotes:\n\n1. When *s* is a string or Unicode string object the ``in`` and ``not\n in`` operations act like a substring test. In Python versions\n before 2.3, *x* had to be a string of length 1. In Python 2.3 and\n beyond, *x* may be a string of any length.\n\n2. Values of *n* less than ``0`` are treated as ``0`` (which yields an\n empty sequence of the same type as *s*). Note also that the copies\n are shallow; nested structures are not copied. This often haunts\n new Python programmers; consider:\n\n >>> lists = [[]] * 3\n >>> lists\n [[], [], []]\n >>> lists[0].append(3)\n >>> lists\n [[3], [3], [3]]\n\n What has happened is that ``[[]]`` is a one-element list containing\n an empty list, so all three elements of ``[[]] * 3`` are (pointers\n to) this single empty list. Modifying any of the elements of\n ``lists`` modifies this single list. You can create a list of\n different lists this way:\n\n >>> lists = [[] for i in range(3)]\n >>> lists[0].append(3)\n >>> lists[1].append(5)\n >>> lists[2].append(7)\n >>> lists\n [[3], [5], [7]]\n\n3. If *i* or *j* is negative, the index is relative to the end of the\n string: ``len(s) + i`` or ``len(s) + j`` is substituted. But note\n that ``-0`` is still ``0``.\n\n4. 
The slice of *s* from *i* to *j* is defined as the sequence of\n items with index *k* such that ``i <= k < j``. If *i* or *j* is\n greater than ``len(s)``, use ``len(s)``. If *i* is omitted or\n ``None``, use ``0``. If *j* is omitted or ``None``, use\n ``len(s)``. If *i* is greater than or equal to *j*, the slice is\n empty.\n\n5. The slice of *s* from *i* to *j* with step *k* is defined as the\n sequence of items with index ``x = i + n*k`` such that ``0 <= n <\n (j-i)/k``. In other words, the indices are ``i``, ``i+k``,\n ``i+2*k``, ``i+3*k`` and so on, stopping when *j* is reached (but\n never including *j*). If *i* or *j* is greater than ``len(s)``,\n use ``len(s)``. If *i* or *j* are omitted or ``None``, they become\n "end" values (which end depends on the sign of *k*). Note, *k*\n cannot be zero. If *k* is ``None``, it is treated like ``1``.\n\n6. **CPython implementation detail:** If *s* and *t* are both strings,\n some Python implementations such as CPython can usually perform an\n in-place optimization for assignments of the form ``s = s + t`` or\n ``s += t``. When applicable, this optimization makes quadratic\n run-time much less likely. This optimization is both version and\n implementation dependent. For performance sensitive code, it is\n preferable to use the ``str.join()`` method which assures\n consistent linear concatenation performance across versions and\n implementations.\n\n Changed in version 2.4: Formerly, string concatenation never\n occurred in-place.\n\n\nString Methods\n==============\n\nBelow are listed the string methods which both 8-bit strings and\nUnicode objects support. Some of them are also available on\n``bytearray`` objects.\n\nIn addition, Python\'s strings support the sequence type methods\ndescribed in the *Sequence Types --- str, unicode, list, tuple,\nbytearray, buffer, xrange* section. To output formatted strings use\ntemplate strings or the ``%`` operator described in the *String\nFormatting Operations* section. Also, see the ``re`` module for string\nfunctions based on regular expressions.\n\nstr.capitalize()\n\n Return a copy of the string with its first character capitalized\n and the rest lowercased.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.center(width[, fillchar])\n\n Return centered in a string of length *width*. Padding is done\n using the specified *fillchar* (default is a space).\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.count(sub[, start[, end]])\n\n Return the number of non-overlapping occurrences of substring *sub*\n in the range [*start*, *end*]. Optional arguments *start* and\n *end* are interpreted as in slice notation.\n\nstr.decode([encoding[, errors]])\n\n Decodes the string using the codec registered for *encoding*.\n *encoding* defaults to the default string encoding. *errors* may\n be given to set a different error handling scheme. The default is\n ``\'strict\'``, meaning that encoding errors raise ``UnicodeError``.\n Other possible values are ``\'ignore\'``, ``\'replace\'`` and any other\n name registered via ``codecs.register_error()``, see section *Codec\n Base Classes*.\n\n New in version 2.2.\n\n Changed in version 2.3: Support for other error handling schemes\n added.\n\n Changed in version 2.7: Support for keyword arguments added.\n\nstr.encode([encoding[, errors]])\n\n Return an encoded version of the string. Default encoding is the\n current default string encoding. *errors* may be given to set a\n different error handling scheme. 
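A short round-trip sketch of ``encode()`` and ``decode()`` (illustrative only; assumes the UTF-8 and ASCII codecs and a CPython 2.7 interpreter):

   >>> u = u'caf\xe9'
   >>> data = u.encode('utf-8')
   >>> data
   'caf\xc3\xa9'
   >>> data.decode('utf-8') == u
   True
   >>> u.encode('ascii', 'ignore')
   'caf'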
The default for *errors* is\n ``\'strict\'``, meaning that encoding errors raise a\n ``UnicodeError``. Other possible values are ``\'ignore\'``,\n ``\'replace\'``, ``\'xmlcharrefreplace\'``, ``\'backslashreplace\'`` and\n any other name registered via ``codecs.register_error()``, see\n section *Codec Base Classes*. For a list of possible encodings, see\n section *Standard Encodings*.\n\n New in version 2.0.\n\n Changed in version 2.3: Support for ``\'xmlcharrefreplace\'`` and\n ``\'backslashreplace\'`` and other error handling schemes added.\n\n Changed in version 2.7: Support for keyword arguments added.\n\nstr.endswith(suffix[, start[, end]])\n\n Return ``True`` if the string ends with the specified *suffix*,\n otherwise return ``False``. *suffix* can also be a tuple of\n suffixes to look for. With optional *start*, test beginning at\n that position. With optional *end*, stop comparing at that\n position.\n\n Changed in version 2.5: Accept tuples as *suffix*.\n\nstr.expandtabs([tabsize])\n\n Return a copy of the string where all tab characters are replaced\n by one or more spaces, depending on the current column and the\n given tab size. The column number is reset to zero after each\n newline occurring in the string. If *tabsize* is not given, a tab\n size of ``8`` characters is assumed. This doesn\'t understand other\n non-printing characters or escape sequences.\n\nstr.find(sub[, start[, end]])\n\n Return the lowest index in the string where substring *sub* is\n found, such that *sub* is contained in the slice ``s[start:end]``.\n Optional arguments *start* and *end* are interpreted as in slice\n notation. Return ``-1`` if *sub* is not found.\n\n Note: The ``find()`` method should be used only if you need to know the\n position of *sub*. To check if *sub* is a substring or not, use\n the ``in`` operator:\n\n >>> \'Py\' in \'Python\'\n True\n\nstr.format(*args, **kwargs)\n\n Perform a string formatting operation. The string on which this\n method is called can contain literal text or replacement fields\n delimited by braces ``{}``. Each replacement field contains either\n the numeric index of a positional argument, or the name of a\n keyword argument. 
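For the keyword form with a format specification, a one-line sketch (the field names ``name`` and ``value`` are arbitrary illustrations):

   >>> '{name} = {value:04d}'.format(name='count', value=7)
   'count = 0007'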
Returns a copy of the string where each\n replacement field is replaced with the string value of the\n corresponding argument.\n\n >>> "The sum of 1 + 2 is {0}".format(1+2)\n \'The sum of 1 + 2 is 3\'\n\n See *Format String Syntax* for a description of the various\n formatting options that can be specified in format strings.\n\n This method of string formatting is the new standard in Python 3.0,\n and should be preferred to the ``%`` formatting described in\n *String Formatting Operations* in new code.\n\n New in version 2.6.\n\nstr.index(sub[, start[, end]])\n\n Like ``find()``, but raise ``ValueError`` when the substring is not\n found.\n\nstr.isalnum()\n\n Return true if all characters in the string are alphanumeric and\n there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isalpha()\n\n Return true if all characters in the string are alphabetic and\n there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isdigit()\n\n Return true if all characters in the string are digits and there is\n at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.islower()\n\n Return true if all cased characters in the string are lowercase and\n there is at least one cased character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isspace()\n\n Return true if there are only whitespace characters in the string\n and there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.istitle()\n\n Return true if the string is a titlecased string and there is at\n least one character, for example uppercase characters may only\n follow uncased characters and lowercase characters only cased ones.\n Return false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isupper()\n\n Return true if all cased characters in the string are uppercase and\n there is at least one cased character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.join(iterable)\n\n Return a string which is the concatenation of the strings in the\n *iterable* *iterable*. The separator between elements is the\n string providing this method.\n\nstr.ljust(width[, fillchar])\n\n Return the string left justified in a string of length *width*.\n Padding is done using the specified *fillchar* (default is a\n space). The original string is returned if *width* is less than\n ``len(s)``.\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.lower()\n\n Return a copy of the string converted to lowercase.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.lstrip([chars])\n\n Return a copy of the string with leading characters removed. The\n *chars* argument is a string specifying the set of characters to be\n removed. If omitted or ``None``, the *chars* argument defaults to\n removing whitespace. The *chars* argument is not a prefix; rather,\n all combinations of its values are stripped:\n\n >>> \' spacious \'.lstrip()\n \'spacious \'\n >>> \'www.example.com\'.lstrip(\'cmowz.\')\n \'example.com\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.partition(sep)\n\n Split the string at the first occurrence of *sep*, and return a\n 3-tuple containing the part before the separator, the separator\n itself, and the part after the separator. 
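For example, on an assumed sample string (only the first ``'='`` splits):

   >>> 'key=value=more'.partition('=')
   ('key', '=', 'value=more')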
If the separator is not\n found, return a 3-tuple containing the string itself, followed by\n two empty strings.\n\n New in version 2.5.\n\nstr.replace(old, new[, count])\n\n Return a copy of the string with all occurrences of substring *old*\n replaced by *new*. If the optional argument *count* is given, only\n the first *count* occurrences are replaced.\n\nstr.rfind(sub[, start[, end]])\n\n Return the highest index in the string where substring *sub* is\n found, such that *sub* is contained within ``s[start:end]``.\n Optional arguments *start* and *end* are interpreted as in slice\n notation. Return ``-1`` on failure.\n\nstr.rindex(sub[, start[, end]])\n\n Like ``rfind()`` but raises ``ValueError`` when the substring *sub*\n is not found.\n\nstr.rjust(width[, fillchar])\n\n Return the string right justified in a string of length *width*.\n Padding is done using the specified *fillchar* (default is a\n space). The original string is returned if *width* is less than\n ``len(s)``.\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.rpartition(sep)\n\n Split the string at the last occurrence of *sep*, and return a\n 3-tuple containing the part before the separator, the separator\n itself, and the part after the separator. If the separator is not\n found, return a 3-tuple containing two empty strings, followed by\n the string itself.\n\n New in version 2.5.\n\nstr.rsplit([sep[, maxsplit]])\n\n Return a list of the words in the string, using *sep* as the\n delimiter string. If *maxsplit* is given, at most *maxsplit* splits\n are done, the *rightmost* ones. If *sep* is not specified or\n ``None``, any whitespace string is a separator. Except for\n splitting from the right, ``rsplit()`` behaves like ``split()``\n which is described in detail below.\n\n New in version 2.4.\n\nstr.rstrip([chars])\n\n Return a copy of the string with trailing characters removed. The\n *chars* argument is a string specifying the set of characters to be\n removed. If omitted or ``None``, the *chars* argument defaults to\n removing whitespace. The *chars* argument is not a suffix; rather,\n all combinations of its values are stripped:\n\n >>> \' spacious \'.rstrip()\n \' spacious\'\n >>> \'mississippi\'.rstrip(\'ipz\')\n \'mississ\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.split([sep[, maxsplit]])\n\n Return a list of the words in the string, using *sep* as the\n delimiter string. If *maxsplit* is given, at most *maxsplit*\n splits are done (thus, the list will have at most ``maxsplit+1``\n elements). If *maxsplit* is not specified, then there is no limit\n on the number of splits (all possible splits are made).\n\n If *sep* is given, consecutive delimiters are not grouped together\n and are deemed to delimit empty strings (for example,\n ``\'1,,2\'.split(\',\')`` returns ``[\'1\', \'\', \'2\']``). The *sep*\n argument may consist of multiple characters (for example,\n ``\'1<>2<>3\'.split(\'<>\')`` returns ``[\'1\', \'2\', \'3\']``). Splitting\n an empty string with a specified separator returns ``[\'\']``.\n\n If *sep* is not specified or is ``None``, a different splitting\n algorithm is applied: runs of consecutive whitespace are regarded\n as a single separator, and the result will contain no empty strings\n at the start or end if the string has leading or trailing\n whitespace. 
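A brief sketch contrasting the two splitting algorithms on a sample string with leading, trailing and repeated whitespace:

   >>> ' a  b '.split()
   ['a', 'b']
   >>> ' a  b '.split(' ')
   ['', 'a', '', 'b', '']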
Consequently, splitting an empty string or a string\n consisting of just whitespace with a ``None`` separator returns\n ``[]``.\n\n For example, ``\' 1 2 3 \'.split()`` returns ``[\'1\', \'2\', \'3\']``,\n and ``\' 1 2 3 \'.split(None, 1)`` returns ``[\'1\', \'2 3 \']``.\n\nstr.splitlines([keepends])\n\n Return a list of the lines in the string, breaking at line\n boundaries. Line breaks are not included in the resulting list\n unless *keepends* is given and true.\n\nstr.startswith(prefix[, start[, end]])\n\n Return ``True`` if string starts with the *prefix*, otherwise\n return ``False``. *prefix* can also be a tuple of prefixes to look\n for. With optional *start*, test string beginning at that\n position. With optional *end*, stop comparing string at that\n position.\n\n Changed in version 2.5: Accept tuples as *prefix*.\n\nstr.strip([chars])\n\n Return a copy of the string with the leading and trailing\n characters removed. The *chars* argument is a string specifying the\n set of characters to be removed. If omitted or ``None``, the\n *chars* argument defaults to removing whitespace. The *chars*\n argument is not a prefix or suffix; rather, all combinations of its\n values are stripped:\n\n >>> \' spacious \'.strip()\n \'spacious\'\n >>> \'www.example.com\'.strip(\'cmowz.\')\n \'example\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.swapcase()\n\n Return a copy of the string with uppercase characters converted to\n lowercase and vice versa.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.title()\n\n Return a titlecased version of the string where words start with an\n uppercase character and the remaining characters are lowercase.\n\n The algorithm uses a simple language-independent definition of a\n word as groups of consecutive letters. The definition works in\n many contexts but it means that apostrophes in contractions and\n possessives form word boundaries, which may not be the desired\n result:\n\n >>> "they\'re bill\'s friends from the UK".title()\n "They\'Re Bill\'S Friends From The Uk"\n\n A workaround for apostrophes can be constructed using regular\n expressions:\n\n >>> import re\n >>> def titlecase(s):\n return re.sub(r"[A-Za-z]+(\'[A-Za-z]+)?",\n lambda mo: mo.group(0)[0].upper() +\n mo.group(0)[1:].lower(),\n s)\n\n >>> titlecase("they\'re bill\'s friends.")\n "They\'re Bill\'s Friends."\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.translate(table[, deletechars])\n\n Return a copy of the string where all characters occurring in the\n optional argument *deletechars* are removed, and the remaining\n characters have been mapped through the given translation table,\n which must be a string of length 256.\n\n You can use the ``maketrans()`` helper function in the ``string``\n module to create a translation table. For string objects, set the\n *table* argument to ``None`` for translations that only delete\n characters:\n\n >>> \'read this short text\'.translate(None, \'aeiou\')\n \'rd ths shrt txt\'\n\n New in version 2.6: Support for a ``None`` *table* argument.\n\n For Unicode objects, the ``translate()`` method does not accept the\n optional *deletechars* argument. Instead, it returns a copy of the\n *s* where all characters have been mapped through the given\n translation table which must be a mapping of Unicode ordinals to\n Unicode ordinals, Unicode strings or ``None``. Unmapped characters\n are left untouched. 
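For instance, a one-line sketch of such a translation table built with ``ord()`` (illustrative characters):

   >>> u'x-y-z'.translate({ord(u'-'): u'.'})
   u'x.y.z'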
Characters mapped to ``None`` are deleted.\n Note, a more flexible approach is to create a custom character\n mapping codec using the ``codecs`` module (see ``encodings.cp1251``\n for an example).\n\nstr.upper()\n\n Return a copy of the string converted to uppercase.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.zfill(width)\n\n Return the numeric string left filled with zeros in a string of\n length *width*. A sign prefix is handled correctly. The original\n string is returned if *width* is less than ``len(s)``.\n\n New in version 2.2.2.\n\nThe following methods are present only on unicode objects:\n\nunicode.isnumeric()\n\n Return ``True`` if there are only numeric characters in S,\n ``False`` otherwise. Numeric characters include digit characters,\n and all characters that have the Unicode numeric value property,\n e.g. U+2155, VULGAR FRACTION ONE FIFTH.\n\nunicode.isdecimal()\n\n Return ``True`` if there are only decimal characters in S,\n ``False`` otherwise. Decimal characters include digit characters,\n and all characters that that can be used to form decimal-radix\n numbers, e.g. U+0660, ARABIC-INDIC DIGIT ZERO.\n\n\nString Formatting Operations\n============================\n\nString and Unicode objects have one unique built-in operation: the\n``%`` operator (modulo). This is also known as the string\n*formatting* or *interpolation* operator. Given ``format % values``\n(where *format* is a string or Unicode object), ``%`` conversion\nspecifications in *format* are replaced with zero or more elements of\n*values*. The effect is similar to the using ``sprintf()`` in the C\nlanguage. If *format* is a Unicode object, or if any of the objects\nbeing converted using the ``%s`` conversion are Unicode objects, the\nresult will also be a Unicode object.\n\nIf *format* requires a single argument, *values* may be a single non-\ntuple object. [4] Otherwise, *values* must be a tuple with exactly\nthe number of items specified by the format string, or a single\nmapping object (for example, a dictionary).\n\nA conversion specifier contains two or more characters and has the\nfollowing components, which must occur in this order:\n\n1. The ``\'%\'`` character, which marks the start of the specifier.\n\n2. Mapping key (optional), consisting of a parenthesised sequence of\n characters (for example, ``(somename)``).\n\n3. Conversion flags (optional), which affect the result of some\n conversion types.\n\n4. Minimum field width (optional). If specified as an ``\'*\'``\n (asterisk), the actual width is read from the next element of the\n tuple in *values*, and the object to convert comes after the\n minimum field width and optional precision.\n\n5. Precision (optional), given as a ``\'.\'`` (dot) followed by the\n precision. If specified as ``\'*\'`` (an asterisk), the actual width\n is read from the next element of the tuple in *values*, and the\n value to convert comes after the precision.\n\n6. Length modifier (optional).\n\n7. Conversion type.\n\nWhen the right argument is a dictionary (or other mapping type), then\nthe formats in the string *must* include a parenthesised mapping key\ninto that dictionary inserted immediately after the ``\'%\'`` character.\nThe mapping key selects the value to be formatted from the mapping.\nFor example:\n\n>>> print \'%(language)s has %(number)03d quote types.\' % \\\n... 
{"language": "Python", "number": 2}\nPython has 002 quote types.\n\nIn this case no ``*`` specifiers may occur in a format (since they\nrequire a sequential parameter list).\n\nThe conversion flag characters are:\n\n+-----------+-----------------------------------------------------------------------+\n| Flag | Meaning |\n+===========+=======================================================================+\n| ``\'#\'`` | The value conversion will use the "alternate form" (where defined |\n| | below). |\n+-----------+-----------------------------------------------------------------------+\n| ``\'0\'`` | The conversion will be zero padded for numeric values. |\n+-----------+-----------------------------------------------------------------------+\n| ``\'-\'`` | The converted value is left adjusted (overrides the ``\'0\'`` |\n| | conversion if both are given). |\n+-----------+-----------------------------------------------------------------------+\n| ``\' \'`` | (a space) A blank should be left before a positive number (or empty |\n| | string) produced by a signed conversion. |\n+-----------+-----------------------------------------------------------------------+\n| ``\'+\'`` | A sign character (``\'+\'`` or ``\'-\'``) will precede the conversion |\n| | (overrides a "space" flag). |\n+-----------+-----------------------------------------------------------------------+\n\nA length modifier (``h``, ``l``, or ``L``) may be present, but is\nignored as it is not necessary for Python -- so e.g. ``%ld`` is\nidentical to ``%d``.\n\nThe conversion types are:\n\n+--------------+-------------------------------------------------------+---------+\n| Conversion | Meaning | Notes |\n+==============+=======================================================+=========+\n| ``\'d\'`` | Signed integer decimal. | |\n+--------------+-------------------------------------------------------+---------+\n| ``\'i\'`` | Signed integer decimal. | |\n+--------------+-------------------------------------------------------+---------+\n| ``\'o\'`` | Signed octal value. | (1) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'u\'`` | Obsolete type -- it is identical to ``\'d\'``. | (7) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'x\'`` | Signed hexadecimal (lowercase). | (2) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'X\'`` | Signed hexadecimal (uppercase). | (2) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'e\'`` | Floating point exponential format (lowercase). | (3) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'E\'`` | Floating point exponential format (uppercase). | (3) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'f\'`` | Floating point decimal format. | (3) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'F\'`` | Floating point decimal format. | (3) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'g\'`` | Floating point format. Uses lowercase exponential | (4) |\n| | format if exponent is less than -4 or not less than | |\n| | precision, decimal format otherwise. | |\n+--------------+-------------------------------------------------------+---------+\n| ``\'G\'`` | Floating point format. 
Uses uppercase exponential | (4) |\n| | format if exponent is less than -4 or not less than | |\n| | precision, decimal format otherwise. | |\n+--------------+-------------------------------------------------------+---------+\n| ``\'c\'`` | Single character (accepts integer or single character | |\n| | string). | |\n+--------------+-------------------------------------------------------+---------+\n| ``\'r\'`` | String (converts any Python object using ``repr()``). | (5) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'s\'`` | String (converts any Python object using ``str()``). | (6) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'%\'`` | No argument is converted, results in a ``\'%\'`` | |\n| | character in the result. | |\n+--------------+-------------------------------------------------------+---------+\n\nNotes:\n\n1. The alternate form causes a leading zero (``\'0\'``) to be inserted\n between left-hand padding and the formatting of the number if the\n leading character of the result is not already a zero.\n\n2. The alternate form causes a leading ``\'0x\'`` or ``\'0X\'`` (depending\n on whether the ``\'x\'`` or ``\'X\'`` format was used) to be inserted\n between left-hand padding and the formatting of the number if the\n leading character of the result is not already a zero.\n\n3. The alternate form causes the result to always contain a decimal\n point, even if no digits follow it.\n\n The precision determines the number of digits after the decimal\n point and defaults to 6.\n\n4. The alternate form causes the result to always contain a decimal\n point, and trailing zeroes are not removed as they would otherwise\n be.\n\n The precision determines the number of significant digits before\n and after the decimal point and defaults to 6.\n\n5. The ``%r`` conversion was added in Python 2.0.\n\n The precision determines the maximal number of characters used.\n\n6. If the object or format provided is a ``unicode`` string, the\n resulting string will also be ``unicode``.\n\n The precision determines the maximal number of characters used.\n\n7. See **PEP 237**.\n\nSince Python strings have an explicit length, ``%s`` conversions do\nnot assume that ``\'\\0\'`` is the end of the string.\n\nChanged in version 2.7: ``%f`` conversions for numbers whose absolute\nvalue is over 1e50 are no longer replaced by ``%g`` conversions.\n\nAdditional string operations are defined in standard modules\n``string`` and ``re``.\n\n\nXRange Type\n===========\n\nThe ``xrange`` type is an immutable sequence which is commonly used\nfor looping. The advantage of the ``xrange`` type is that an\n``xrange`` object will always take the same amount of memory, no\nmatter the size of the range it represents. There are no consistent\nperformance advantages.\n\nXRange objects have very little behavior: they only support indexing,\niteration, and the ``len()`` function.\n\n\nMutable Sequence Types\n======================\n\nList and ``bytearray`` objects support additional operations that\nallow in-place modification of the object. Other mutable sequence\ntypes (when added to the language) should also support these\noperations. Strings and tuples are immutable sequence types: such\nobjects cannot be modified once created. 
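A brief doctest-style sketch of that distinction (assuming CPython 2.7):

   >>> items = [1, 2, 3]
   >>> items[0] = 99
   >>> items
   [99, 2, 3]
   >>> t = (1, 2, 3)
   >>> t[0] = 99
   Traceback (most recent call last):
     ...
   TypeError: 'tuple' object does not support item assignment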
The following operations are\ndefined on mutable sequence types (where *x* is an arbitrary object):\n\n+--------------------------------+----------------------------------+-----------------------+\n| Operation | Result | Notes |\n+================================+==================================+=======================+\n| ``s[i] = x`` | item *i* of *s* is replaced by | |\n| | *x* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s[i:j] = t`` | slice of *s* from *i* to *j* is | |\n| | replaced by the contents of the | |\n| | iterable *t* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``del s[i:j]`` | same as ``s[i:j] = []`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s[i:j:k] = t`` | the elements of ``s[i:j:k]`` are | (1) |\n| | replaced by those of *t* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``del s[i:j:k]`` | removes the elements of | |\n| | ``s[i:j:k]`` from the list | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.append(x)`` | same as ``s[len(s):len(s)] = | (2) |\n| | [x]`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.extend(x)`` | same as ``s[len(s):len(s)] = x`` | (3) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.count(x)`` | return number of *i*\'s for which | |\n| | ``s[i] == x`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.index(x[, i[, j]])`` | return smallest *k* such that | (4) |\n| | ``s[k] == x`` and ``i <= k < j`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.insert(i, x)`` | same as ``s[i:i] = [x]`` | (5) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.pop([i])`` | same as ``x = s[i]; del s[i]; | (6) |\n| | return x`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.remove(x)`` | same as ``del s[s.index(x)]`` | (4) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.reverse()`` | reverses the items of *s* in | (7) |\n| | place | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.sort([cmp[, key[, | sort the items of *s* in place | (7)(8)(9)(10) |\n| reverse]]])`` | | |\n+--------------------------------+----------------------------------+-----------------------+\n\nNotes:\n\n1. *t* must have the same length as the slice it is replacing.\n\n2. The C implementation of Python has historically accepted multiple\n parameters and implicitly joined them into a tuple; this no longer\n works in Python 2.0. Use of this misfeature has been deprecated\n since Python 1.4.\n\n3. *x* can be any iterable object.\n\n4. Raises ``ValueError`` when *x* is not found in *s*. When a negative\n index is passed as the second or third parameter to the ``index()``\n method, the list length is added, as for slice indices. If it is\n still negative, it is truncated to zero, as for slice indices.\n\n Changed in version 2.3: Previously, ``index()`` didn\'t have\n arguments for specifying start and stop positions.\n\n5. 
When a negative index is passed as the first parameter to the\n ``insert()`` method, the list length is added, as for slice\n indices. If it is still negative, it is truncated to zero, as for\n slice indices.\n\n Changed in version 2.3: Previously, all negative indices were\n truncated to zero.\n\n6. The ``pop()`` method is only supported by the list and array types.\n The optional argument *i* defaults to ``-1``, so that by default\n the last item is removed and returned.\n\n7. The ``sort()`` and ``reverse()`` methods modify the list in place\n for economy of space when sorting or reversing a large list. To\n remind you that they operate by side effect, they don\'t return the\n sorted or reversed list.\n\n8. The ``sort()`` method takes optional arguments for controlling the\n comparisons.\n\n *cmp* specifies a custom comparison function of two arguments (list\n items) which should return a negative, zero or positive number\n depending on whether the first argument is considered smaller than,\n equal to, or larger than the second argument: ``cmp=lambda x,y:\n cmp(x.lower(), y.lower())``. The default value is ``None``.\n\n *key* specifies a function of one argument that is used to extract\n a comparison key from each list element: ``key=str.lower``. The\n default value is ``None``.\n\n *reverse* is a boolean value. If set to ``True``, then the list\n elements are sorted as if each comparison were reversed.\n\n In general, the *key* and *reverse* conversion processes are much\n faster than specifying an equivalent *cmp* function. This is\n because *cmp* is called multiple times for each list element while\n *key* and *reverse* touch each element only once. Use\n ``functools.cmp_to_key()`` to convert an old-style *cmp* function\n to a *key* function.\n\n Changed in version 2.3: Support for ``None`` as an equivalent to\n omitting *cmp* was added.\n\n Changed in version 2.4: Support for *key* and *reverse* was added.\n\n9. Starting with Python 2.3, the ``sort()`` method is guaranteed to be\n stable. A sort is stable if it guarantees not to change the\n relative order of elements that compare equal --- this is helpful\n for sorting in multiple passes (for example, sort by department,\n then by salary grade).\n\n10. **CPython implementation detail:** While a list is being sorted,\n the effect of attempting to mutate, or even inspect, the list is\n undefined. The C implementation of Python 2.3 and newer makes the\n list appear empty for the duration, and raises ``ValueError`` if\n it can detect that the list has been mutated during a sort.\n', + 'typesseq-mutable': u"\nMutable Sequence Types\n**********************\n\nList and ``bytearray`` objects support additional operations that\nallow in-place modification of the object. Other mutable sequence\ntypes (when added to the language) should also support these\noperations. Strings and tuples are immutable sequence types: such\nobjects cannot be modified once created. 
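Since ``bytearray`` is covered here as well, a minimal sketch of modifying one in place (assuming CPython 2.7):

   >>> b = bytearray(b'spam')
   >>> b[0] = ord('S')
   >>> b
   bytearray(b'Spam')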
The following operations are\ndefined on mutable sequence types (where *x* is an arbitrary object):\n\n+--------------------------------+----------------------------------+-----------------------+\n| Operation | Result | Notes |\n+================================+==================================+=======================+\n| ``s[i] = x`` | item *i* of *s* is replaced by | |\n| | *x* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s[i:j] = t`` | slice of *s* from *i* to *j* is | |\n| | replaced by the contents of the | |\n| | iterable *t* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``del s[i:j]`` | same as ``s[i:j] = []`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s[i:j:k] = t`` | the elements of ``s[i:j:k]`` are | (1) |\n| | replaced by those of *t* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``del s[i:j:k]`` | removes the elements of | |\n| | ``s[i:j:k]`` from the list | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.append(x)`` | same as ``s[len(s):len(s)] = | (2) |\n| | [x]`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.extend(x)`` | same as ``s[len(s):len(s)] = x`` | (3) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.count(x)`` | return number of *i*'s for which | |\n| | ``s[i] == x`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.index(x[, i[, j]])`` | return smallest *k* such that | (4) |\n| | ``s[k] == x`` and ``i <= k < j`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.insert(i, x)`` | same as ``s[i:i] = [x]`` | (5) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.pop([i])`` | same as ``x = s[i]; del s[i]; | (6) |\n| | return x`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.remove(x)`` | same as ``del s[s.index(x)]`` | (4) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.reverse()`` | reverses the items of *s* in | (7) |\n| | place | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.sort([cmp[, key[, | sort the items of *s* in place | (7)(8)(9)(10) |\n| reverse]]])`` | | |\n+--------------------------------+----------------------------------+-----------------------+\n\nNotes:\n\n1. *t* must have the same length as the slice it is replacing.\n\n2. The C implementation of Python has historically accepted multiple\n parameters and implicitly joined them into a tuple; this no longer\n works in Python 2.0. Use of this misfeature has been deprecated\n since Python 1.4.\n\n3. *x* can be any iterable object.\n\n4. Raises ``ValueError`` when *x* is not found in *s*. When a negative\n index is passed as the second or third parameter to the ``index()``\n method, the list length is added, as for slice indices. If it is\n still negative, it is truncated to zero, as for slice indices.\n\n Changed in version 2.3: Previously, ``index()`` didn't have\n arguments for specifying start and stop positions.\n\n5. 
When a negative index is passed as the first parameter to the\n ``insert()`` method, the list length is added, as for slice\n indices. If it is still negative, it is truncated to zero, as for\n slice indices.\n\n Changed in version 2.3: Previously, all negative indices were\n truncated to zero.\n\n6. The ``pop()`` method is only supported by the list and array types.\n The optional argument *i* defaults to ``-1``, so that by default\n the last item is removed and returned.\n\n7. The ``sort()`` and ``reverse()`` methods modify the list in place\n for economy of space when sorting or reversing a large list. To\n remind you that they operate by side effect, they don't return the\n sorted or reversed list.\n\n8. The ``sort()`` method takes optional arguments for controlling the\n comparisons.\n\n *cmp* specifies a custom comparison function of two arguments (list\n items) which should return a negative, zero or positive number\n depending on whether the first argument is considered smaller than,\n equal to, or larger than the second argument: ``cmp=lambda x,y:\n cmp(x.lower(), y.lower())``. The default value is ``None``.\n\n *key* specifies a function of one argument that is used to extract\n a comparison key from each list element: ``key=str.lower``. The\n default value is ``None``.\n\n *reverse* is a boolean value. If set to ``True``, then the list\n elements are sorted as if each comparison were reversed.\n\n In general, the *key* and *reverse* conversion processes are much\n faster than specifying an equivalent *cmp* function. This is\n because *cmp* is called multiple times for each list element while\n *key* and *reverse* touch each element only once. Use\n ``functools.cmp_to_key()`` to convert an old-style *cmp* function\n to a *key* function.\n\n Changed in version 2.3: Support for ``None`` as an equivalent to\n omitting *cmp* was added.\n\n Changed in version 2.4: Support for *key* and *reverse* was added.\n\n9. Starting with Python 2.3, the ``sort()`` method is guaranteed to be\n stable. A sort is stable if it guarantees not to change the\n relative order of elements that compare equal --- this is helpful\n for sorting in multiple passes (for example, sort by department,\n then by salary grade).\n\n10. **CPython implementation detail:** While a list is being sorted,\n the effect of attempting to mutate, or even inspect, the list is\n undefined. The C implementation of Python 2.3 and newer makes the\n list appear empty for the duration, and raises ``ValueError`` if\n it can detect that the list has been mutated during a sort.\n", 'unary': u'\nUnary arithmetic and bitwise operations\n***************************************\n\nAll unary arithmetic and bitwise operations have the same priority:\n\n u_expr ::= power | "-" u_expr | "+" u_expr | "~" u_expr\n\nThe unary ``-`` (minus) operator yields the negation of its numeric\nargument.\n\nThe unary ``+`` (plus) operator yields its numeric argument unchanged.\n\nThe unary ``~`` (invert) operator yields the bitwise inversion of its\nplain or long integer argument. The bitwise inversion of ``x`` is\ndefined as ``-(x+1)``. 
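The ``sort()`` notes above contrast the old *cmp* argument with *key*; a minimal runnable sketch of the equivalence via ``functools.cmp_to_key()`` (illustrative only, not taken from the diff; assumes Python 2.7, and the list ``words`` is arbitrary sample data):

    # Case-insensitive ordering, once with *key* and once with an
    # old-style *cmp* function adapted through functools.cmp_to_key().
    import functools

    words = ["banana", "Apple", "cherry"]

    by_key = sorted(words, key=str.lower)          # key touches each item once
    by_cmp = sorted(words, key=functools.cmp_to_key(
        lambda x, y: cmp(x.lower(), y.lower())))   # cmp runs per comparison

    assert by_key == by_cmp == ["Apple", "banana", "cherry"]

Because *key* is computed once per element while *cmp* is invoked for every pairwise comparison, the *key* form is the faster of the two, as the note above states.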
It only applies to integral numbers.\n\nIn all three cases, if the argument does not have the proper type, a\n``TypeError`` exception is raised.\n', 'while': u'\nThe ``while`` statement\n***********************\n\nThe ``while`` statement is used for repeated execution as long as an\nexpression is true:\n\n while_stmt ::= "while" expression ":" suite\n ["else" ":" suite]\n\nThis repeatedly tests the expression and, if it is true, executes the\nfirst suite; if the expression is false (which may be the first time\nit is tested) the suite of the ``else`` clause, if present, is\nexecuted and the loop terminates.\n\nA ``break`` statement executed in the first suite terminates the loop\nwithout executing the ``else`` clause\'s suite. A ``continue``\nstatement executed in the first suite skips the rest of the suite and\ngoes back to testing the expression.\n', - 'with': u'\nThe ``with`` statement\n**********************\n\nNew in version 2.5.\n\nThe ``with`` statement is used to wrap the execution of a block with\nmethods defined by a context manager (see section *With Statement\nContext Managers*). This allows common\n``try``...``except``...``finally`` usage patterns to be encapsulated\nfor convenient reuse.\n\n with_stmt ::= "with" with_item ("," with_item)* ":" suite\n with_item ::= expression ["as" target]\n\nThe execution of the ``with`` statement with one "item" proceeds as\nfollows:\n\n1. The context expression is evaluated to obtain a context manager.\n\n2. The context manager\'s ``__exit__()`` is loaded for later use.\n\n3. The context manager\'s ``__enter__()`` method is invoked.\n\n4. If a target was included in the ``with`` statement, the return\n value from ``__enter__()`` is assigned to it.\n\n Note: The ``with`` statement guarantees that if the ``__enter__()``\n method returns without an error, then ``__exit__()`` will always\n be called. Thus, if an error occurs during the assignment to the\n target list, it will be treated the same as an error occurring\n within the suite would be. See step 6 below.\n\n5. The suite is executed.\n\n6. The context manager\'s ``__exit__()`` method is invoked. If an\n exception caused the suite to be exited, its type, value, and\n traceback are passed as arguments to ``__exit__()``. Otherwise,\n three ``None`` arguments are supplied.\n\n If the suite was exited due to an exception, and the return value\n from the ``__exit__()`` method was false, the exception is\n reraised. If the return value was true, the exception is\n suppressed, and execution continues with the statement following\n the ``with`` statement.\n\n If the suite was exited for any reason other than an exception, the\n return value from ``__exit__()`` is ignored, and execution proceeds\n at the normal location for the kind of exit that was taken.\n\nWith more than one item, the context managers are processed as if\nmultiple ``with`` statements were nested:\n\n with A() as a, B() as b:\n suite\n\nis equivalent to\n\n with A() as a:\n with B() as b:\n suite\n\nNote: In Python 2.5, the ``with`` statement is only allowed when the\n ``with_statement`` feature has been enabled. 
It is always enabled\n in Python 2.6.\n\nChanged in version 2.7: Support for multiple context expressions.\n\nSee also:\n\n **PEP 0343** - The "with" statement\n The specification, background, and examples for the Python\n ``with`` statement.\n', + 'with': u'\nThe ``with`` statement\n**********************\n\nNew in version 2.5.\n\nThe ``with`` statement is used to wrap the execution of a block with\nmethods defined by a context manager (see section *With Statement\nContext Managers*). This allows common\n``try``...``except``...``finally`` usage patterns to be encapsulated\nfor convenient reuse.\n\n with_stmt ::= "with" with_item ("," with_item)* ":" suite\n with_item ::= expression ["as" target]\n\nThe execution of the ``with`` statement with one "item" proceeds as\nfollows:\n\n1. The context expression (the expression given in the **with_item**)\n is evaluated to obtain a context manager.\n\n2. The context manager\'s ``__exit__()`` is loaded for later use.\n\n3. The context manager\'s ``__enter__()`` method is invoked.\n\n4. If a target was included in the ``with`` statement, the return\n value from ``__enter__()`` is assigned to it.\n\n Note: The ``with`` statement guarantees that if the ``__enter__()``\n method returns without an error, then ``__exit__()`` will always\n be called. Thus, if an error occurs during the assignment to the\n target list, it will be treated the same as an error occurring\n within the suite would be. See step 6 below.\n\n5. The suite is executed.\n\n6. The context manager\'s ``__exit__()`` method is invoked. If an\n exception caused the suite to be exited, its type, value, and\n traceback are passed as arguments to ``__exit__()``. Otherwise,\n three ``None`` arguments are supplied.\n\n If the suite was exited due to an exception, and the return value\n from the ``__exit__()`` method was false, the exception is\n reraised. If the return value was true, the exception is\n suppressed, and execution continues with the statement following\n the ``with`` statement.\n\n If the suite was exited for any reason other than an exception, the\n return value from ``__exit__()`` is ignored, and execution proceeds\n at the normal location for the kind of exit that was taken.\n\nWith more than one item, the context managers are processed as if\nmultiple ``with`` statements were nested:\n\n with A() as a, B() as b:\n suite\n\nis equivalent to\n\n with A() as a:\n with B() as b:\n suite\n\nNote: In Python 2.5, the ``with`` statement is only allowed when the\n ``with_statement`` feature has been enabled. It is always enabled\n in Python 2.6.\n\nChanged in version 2.7: Support for multiple context expressions.\n\nSee also:\n\n **PEP 0343** - The "with" statement\n The specification, background, and examples for the Python\n ``with`` statement.\n', 'yield': u'\nThe ``yield`` statement\n***********************\n\n yield_stmt ::= yield_expression\n\nThe ``yield`` statement is only used when defining a generator\nfunction, and is only used in the body of the generator function.\nUsing a ``yield`` statement in a function definition is sufficient to\ncause that definition to create a generator function instead of a\nnormal function.\n\nWhen a generator function is called, it returns an iterator known as a\ngenerator iterator, or more commonly, a generator. 
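The updated ``with`` entry above spells out the ``__enter__``/``__exit__`` sequence step by step; a small sketch with a hand-written context manager makes that sequence visible (illustrative only, not from the diff; the names ``Tracer`` and ``value`` are arbitrary, semantics as of Python 2.6+):

    class Tracer(object):
        def __enter__(self):
            print "enter"            # step 3: __enter__ is invoked
            return 42                # step 4: bound to the 'as' target
        def __exit__(self, exc_type, exc_value, tb):
            print "exit", exc_type   # step 6: called even if the suite raised
            return False             # a false return re-raises any exception

    with Tracer() as value:          # steps 1-5
        print "suite", value
    # prints: enter / suite 42 / exit None

With more than one item the same sequence simply nests, exactly as the equivalence shown in the entry.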
The body of the\ngenerator function is executed by calling the generator\'s ``next()``\nmethod repeatedly until it raises an exception.\n\nWhen a ``yield`` statement is executed, the state of the generator is\nfrozen and the value of **expression_list** is returned to\n``next()``\'s caller. By "frozen" we mean that all local state is\nretained, including the current bindings of local variables, the\ninstruction pointer, and the internal evaluation stack: enough\ninformation is saved so that the next time ``next()`` is invoked, the\nfunction can proceed exactly as if the ``yield`` statement were just\nanother external call.\n\nAs of Python version 2.5, the ``yield`` statement is now allowed in\nthe ``try`` clause of a ``try`` ... ``finally`` construct. If the\ngenerator is not resumed before it is finalized (by reaching a zero\nreference count or by being garbage collected), the generator-\niterator\'s ``close()`` method will be called, allowing any pending\n``finally`` clauses to execute.\n\nNote: In Python 2.2, the ``yield`` statement was only allowed when the\n ``generators`` feature has been enabled. This ``__future__`` import\n statement was used to enable the feature:\n\n from __future__ import generators\n\nSee also:\n\n **PEP 0255** - Simple Generators\n The proposal for adding generators and the ``yield`` statement\n to Python.\n\n **PEP 0342** - Coroutines via Enhanced Generators\n The proposal that, among other generator enhancements, proposed\n allowing ``yield`` to appear inside a ``try`` ... ``finally``\n block.\n'} diff --git a/lib-python/2.7/random.py b/lib-python/2.7/random.py --- a/lib-python/2.7/random.py +++ b/lib-python/2.7/random.py @@ -317,7 +317,7 @@ n = len(population) if not 0 <= k <= n: - raise ValueError, "sample larger than population" + raise ValueError("sample larger than population") random = self.random _int = int result = [None] * k @@ -490,6 +490,12 @@ Conditions on the parameters are alpha > 0 and beta > 0. + The probability distribution function is: + + x ** (alpha - 1) * math.exp(-x / beta) + pdf(x) = -------------------------------------- + math.gamma(alpha) * beta ** alpha + """ # alpha > 0, beta > 0, mean is alpha*beta, variance is alpha*beta**2 @@ -592,7 +598,7 @@ ## -------------------- beta -------------------- ## See -## http://sourceforge.net/bugs/?func=detailbug&bug_id=130030&group_id=5470 +## http://mail.python.org/pipermail/python-bugs-list/2001-January/003752.html ## for Ivan Frohne's insightful analysis of why the original implementation: ## ## def betavariate(self, alpha, beta): diff --git a/lib-python/2.7/re.py b/lib-python/2.7/re.py --- a/lib-python/2.7/re.py +++ b/lib-python/2.7/re.py @@ -207,8 +207,7 @@ "Escape all non-alphanumeric characters in pattern." s = list(pattern) alphanum = _alphanum - for i in range(len(pattern)): - c = pattern[i] + for i, c in enumerate(pattern): if c not in alphanum: if c == "\000": s[i] = "\\000" diff --git a/lib-python/2.7/shutil.py b/lib-python/2.7/shutil.py --- a/lib-python/2.7/shutil.py +++ b/lib-python/2.7/shutil.py @@ -277,6 +277,12 @@ """ real_dst = dst if os.path.isdir(dst): + if _samefile(src, dst): + # We might be on a case insensitive filesystem, + # perform the rename anyway. + os.rename(src, dst) + return + real_dst = os.path.join(dst, _basename(src)) if os.path.exists(real_dst): raise Error, "Destination path '%s' already exists" % real_dst @@ -336,7 +342,7 @@ archive that is being built. If not provided, the current owner and group will be used. 
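Stepping back to the ``yield`` topic above: the "frozen" local state it describes is easiest to see with a tiny generator (illustrative sketch, not part of the diff; Python 2 spelling of ``next()``):

    def counter(limit):
        n = 0
        while n < limit:
            yield n         # locals and instruction pointer are retained here
            n += 1

    gen = counter(3)
    assert gen.next() == 0
    assert gen.next() == 1      # resumes right after the yield
    assert list(gen) == [2]     # exhaustion raises StopIteration internally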
- The output tar file will be named 'base_dir' + ".tar", possibly plus + The output tar file will be named 'base_name' + ".tar", possibly plus the appropriate compression extension (".gz", or ".bz2"). Returns the output filename. @@ -406,7 +412,7 @@ def _make_zipfile(base_name, base_dir, verbose=0, dry_run=0, logger=None): """Create a zip file from all the files under 'base_dir'. - The output zip file will be named 'base_dir' + ".zip". Uses either the + The output zip file will be named 'base_name' + ".zip". Uses either the "zipfile" Python module (if available) or the InfoZIP "zip" utility (if installed and found on the default search path). If neither tool is available, raises ExecError. Returns the name of the output zip diff --git a/lib-python/2.7/site.py b/lib-python/2.7/site.py --- a/lib-python/2.7/site.py +++ b/lib-python/2.7/site.py @@ -61,6 +61,7 @@ import sys import os import __builtin__ +import traceback # Prefixes for site-packages; add additional prefixes like /usr/local here PREFIXES = [sys.prefix, sys.exec_prefix] @@ -155,17 +156,26 @@ except IOError: return with f: - for line in f: + for n, line in enumerate(f): if line.startswith("#"): continue - if line.startswith(("import ", "import\t")): - exec line - continue - line = line.rstrip() - dir, dircase = makepath(sitedir, line) - if not dircase in known_paths and os.path.exists(dir): - sys.path.append(dir) - known_paths.add(dircase) + try: + if line.startswith(("import ", "import\t")): + exec line + continue + line = line.rstrip() + dir, dircase = makepath(sitedir, line) + if not dircase in known_paths and os.path.exists(dir): + sys.path.append(dir) + known_paths.add(dircase) + except Exception as err: + print >>sys.stderr, "Error processing line {:d} of {}:\n".format( + n+1, fullname) + for record in traceback.format_exception(*sys.exc_info()): + for line in record.splitlines(): + print >>sys.stderr, ' '+line + print >>sys.stderr, "\nRemainder of file ignored" + break if reset: known_paths = None return known_paths diff --git a/lib-python/2.7/smtplib.py b/lib-python/2.7/smtplib.py --- a/lib-python/2.7/smtplib.py +++ b/lib-python/2.7/smtplib.py @@ -49,17 +49,18 @@ from email.base64mime import encode as encode_base64 from sys import stderr -__all__ = ["SMTPException","SMTPServerDisconnected","SMTPResponseException", - "SMTPSenderRefused","SMTPRecipientsRefused","SMTPDataError", - "SMTPConnectError","SMTPHeloError","SMTPAuthenticationError", - "quoteaddr","quotedata","SMTP"] +__all__ = ["SMTPException", "SMTPServerDisconnected", "SMTPResponseException", + "SMTPSenderRefused", "SMTPRecipientsRefused", "SMTPDataError", + "SMTPConnectError", "SMTPHeloError", "SMTPAuthenticationError", + "quoteaddr", "quotedata", "SMTP"] SMTP_PORT = 25 SMTP_SSL_PORT = 465 -CRLF="\r\n" +CRLF = "\r\n" OLDSTYLE_AUTH = re.compile(r"auth=(.*)", re.I) + # Exception classes used by this module. class SMTPException(Exception): """Base class for all exceptions raised by this module.""" @@ -109,7 +110,7 @@ def __init__(self, recipients): self.recipients = recipients - self.args = ( recipients,) + self.args = (recipients,) class SMTPDataError(SMTPResponseException): @@ -128,6 +129,7 @@ combination provided. """ + def quoteaddr(addr): """Quote a subset of the email addresses defined by RFC 821. @@ -138,7 +140,7 @@ m = email.utils.parseaddr(addr)[1] except AttributeError: pass - if m == (None, None): # Indicates parse failure or AttributeError + if m == (None, None): # Indicates parse failure or AttributeError # something weird here.. 
punt -ddm return "<%s>" % addr elif m is None: @@ -175,7 +177,8 @@ chr = None while chr != "\n": chr = self.sslobj.read(1) - if not chr: break + if not chr: + break str += chr return str @@ -219,6 +222,7 @@ ehlo_msg = "ehlo" ehlo_resp = None does_esmtp = 0 + default_port = SMTP_PORT def __init__(self, host='', port=0, local_hostname=None, timeout=socket._GLOBAL_DEFAULT_TIMEOUT): @@ -234,7 +238,6 @@ """ self.timeout = timeout self.esmtp_features = {} - self.default_port = SMTP_PORT if host: (code, msg) = self.connect(host, port) if code != 220: @@ -269,10 +272,11 @@ def _get_socket(self, port, host, timeout): # This makes it simpler for SMTP_SSL to use the SMTP connect code # and just alter the socket connection bit. - if self.debuglevel > 0: print>>stderr, 'connect:', (host, port) + if self.debuglevel > 0: + print>>stderr, 'connect:', (host, port) return socket.create_connection((port, host), timeout) - def connect(self, host='localhost', port = 0): + def connect(self, host='localhost', port=0): """Connect to a host on a given port. If the hostname ends with a colon (`:') followed by a number, and @@ -286,20 +290,25 @@ if not port and (host.find(':') == host.rfind(':')): i = host.rfind(':') if i >= 0: - host, port = host[:i], host[i+1:] - try: port = int(port) + host, port = host[:i], host[i + 1:] + try: + port = int(port) except ValueError: raise socket.error, "nonnumeric port" - if not port: port = self.default_port - if self.debuglevel > 0: print>>stderr, 'connect:', (host, port) + if not port: + port = self.default_port + if self.debuglevel > 0: + print>>stderr, 'connect:', (host, port) self.sock = self._get_socket(host, port, self.timeout) (code, msg) = self.getreply() - if self.debuglevel > 0: print>>stderr, "connect:", msg + if self.debuglevel > 0: + print>>stderr, "connect:", msg return (code, msg) def send(self, str): """Send `str' to the server.""" - if self.debuglevel > 0: print>>stderr, 'send:', repr(str) + if self.debuglevel > 0: + print>>stderr, 'send:', repr(str) if hasattr(self, 'sock') and self.sock: try: self.sock.sendall(str) @@ -330,7 +339,7 @@ Raises SMTPServerDisconnected if end-of-file is reached. """ - resp=[] + resp = [] if self.file is None: self.file = self.sock.makefile('rb') while 1: @@ -341,9 +350,10 @@ if line == '': self.close() raise SMTPServerDisconnected("Connection unexpectedly closed") - if self.debuglevel > 0: print>>stderr, 'reply:', repr(line) + if self.debuglevel > 0: + print>>stderr, 'reply:', repr(line) resp.append(line[4:].strip()) - code=line[:3] + code = line[:3] # Check that the error code is syntactically correct. # Don't attempt to read a continuation line if it is broken. try: @@ -352,17 +362,17 @@ errcode = -1 break # Check if multiline response. - if line[3:4]!="-": + if line[3:4] != "-": break errmsg = "\n".join(resp) if self.debuglevel > 0: - print>>stderr, 'reply: retcode (%s); Msg: %s' % (errcode,errmsg) + print>>stderr, 'reply: retcode (%s); Msg: %s' % (errcode, errmsg) return errcode, errmsg def docmd(self, cmd, args=""): """Send a command, and return its response code.""" - self.putcmd(cmd,args) + self.putcmd(cmd, args) return self.getreply() # std smtp commands @@ -372,9 +382,9 @@ host. """ self.putcmd("helo", name or self.local_hostname) - (code,msg)=self.getreply() - self.helo_resp=msg - return (code,msg) + (code, msg) = self.getreply() + self.helo_resp = msg + return (code, msg) def ehlo(self, name=''): """ SMTP 'ehlo' command. 
@@ -383,19 +393,19 @@ """ self.esmtp_features = {} self.putcmd(self.ehlo_msg, name or self.local_hostname) - (code,msg)=self.getreply() + (code, msg) = self.getreply() # According to RFC1869 some (badly written) # MTA's will disconnect on an ehlo. Toss an exception if # that happens -ddm if code == -1 and len(msg) == 0: self.close() raise SMTPServerDisconnected("Server not connected") - self.ehlo_resp=msg + self.ehlo_resp = msg if code != 250: - return (code,msg) - self.does_esmtp=1 + return (code, msg) + self.does_esmtp = 1 #parse the ehlo response -ddm - resp=self.ehlo_resp.split('\n') + resp = self.ehlo_resp.split('\n') del resp[0] for each in resp: # To be able to communicate with as many SMTP servers as possible, @@ -415,16 +425,16 @@ # It's actually stricter, in that only spaces are allowed between # parameters, but were not going to check for that here. Note # that the space isn't present if there are no parameters. - m=re.match(r'(?P[A-Za-z0-9][A-Za-z0-9\-]*) ?',each) + m = re.match(r'(?P[A-Za-z0-9][A-Za-z0-9\-]*) ?', each) if m: - feature=m.group("feature").lower() - params=m.string[m.end("feature"):].strip() + feature = m.group("feature").lower() + params = m.string[m.end("feature"):].strip() if feature == "auth": self.esmtp_features[feature] = self.esmtp_features.get(feature, "") \ + " " + params else: - self.esmtp_features[feature]=params - return (code,msg) + self.esmtp_features[feature] = params + return (code, msg) def has_extn(self, opt): """Does the server support a given SMTP service extension?""" @@ -444,23 +454,23 @@ """SMTP 'noop' command -- doesn't do anything :>""" return self.docmd("noop") - def mail(self,sender,options=[]): + def mail(self, sender, options=[]): """SMTP 'mail' command -- begins mail xfer session.""" optionlist = '' if options and self.does_esmtp: optionlist = ' ' + ' '.join(options) - self.putcmd("mail", "FROM:%s%s" % (quoteaddr(sender) ,optionlist)) + self.putcmd("mail", "FROM:%s%s" % (quoteaddr(sender), optionlist)) return self.getreply() - def rcpt(self,recip,options=[]): + def rcpt(self, recip, options=[]): """SMTP 'rcpt' command -- indicates 1 recipient for this mail.""" optionlist = '' if options and self.does_esmtp: optionlist = ' ' + ' '.join(options) - self.putcmd("rcpt","TO:%s%s" % (quoteaddr(recip),optionlist)) + self.putcmd("rcpt", "TO:%s%s" % (quoteaddr(recip), optionlist)) return self.getreply() - def data(self,msg): + def data(self, msg): """SMTP 'DATA' command -- sends message data to server. Automatically quotes lines beginning with a period per rfc821. @@ -469,26 +479,28 @@ response code received when the all data is sent. """ self.putcmd("data") - (code,repl)=self.getreply() - if self.debuglevel >0 : print>>stderr, "data:", (code,repl) + (code, repl) = self.getreply() + if self.debuglevel > 0: + print>>stderr, "data:", (code, repl) if code != 354: - raise SMTPDataError(code,repl) + raise SMTPDataError(code, repl) else: q = quotedata(msg) if q[-2:] != CRLF: q = q + CRLF q = q + "." + CRLF self.send(q) - (code,msg)=self.getreply() - if self.debuglevel >0 : print>>stderr, "data:", (code,msg) - return (code,msg) + (code, msg) = self.getreply() + if self.debuglevel > 0: + print>>stderr, "data:", (code, msg) + return (code, msg) def verify(self, address): """SMTP 'verify' command -- checks for address validity.""" self.putcmd("vrfy", quoteaddr(address)) return self.getreply() # a.k.a. 
- vrfy=verify + vrfy = verify def expn(self, address): """SMTP 'expn' command -- expands a mailing list.""" @@ -592,7 +604,7 @@ raise SMTPAuthenticationError(code, resp) return (code, resp) - def starttls(self, keyfile = None, certfile = None): + def starttls(self, keyfile=None, certfile=None): """Puts the connection to the SMTP server into TLS mode. If there has been no previous EHLO or HELO command this session, this @@ -695,22 +707,22 @@ for option in mail_options: esmtp_opts.append(option) - (code,resp) = self.mail(from_addr, esmtp_opts) + (code, resp) = self.mail(from_addr, esmtp_opts) if code != 250: self.rset() raise SMTPSenderRefused(code, resp, from_addr) - senderrs={} + senderrs = {} if isinstance(to_addrs, basestring): to_addrs = [to_addrs] for each in to_addrs: - (code,resp)=self.rcpt(each, rcpt_options) + (code, resp) = self.rcpt(each, rcpt_options) if (code != 250) and (code != 251): - senderrs[each]=(code,resp) - if len(senderrs)==len(to_addrs): + senderrs[each] = (code, resp) + if len(senderrs) == len(to_addrs): # the server refused all our recipients self.rset() raise SMTPRecipientsRefused(senderrs) - (code,resp) = self.data(msg) + (code, resp) = self.data(msg) if code != 250: self.rset() raise SMTPDataError(code, resp) @@ -744,16 +756,19 @@ are also optional - they can contain a PEM formatted private key and certificate chain file for the SSL connection. """ + + default_port = SMTP_SSL_PORT + def __init__(self, host='', port=0, local_hostname=None, keyfile=None, certfile=None, timeout=socket._GLOBAL_DEFAULT_TIMEOUT): self.keyfile = keyfile self.certfile = certfile SMTP.__init__(self, host, port, local_hostname, timeout) - self.default_port = SMTP_SSL_PORT def _get_socket(self, host, port, timeout): - if self.debuglevel > 0: print>>stderr, 'connect:', (host, port) + if self.debuglevel > 0: + print>>stderr, 'connect:', (host, port) new_socket = socket.create_connection((host, port), timeout) new_socket = ssl.wrap_socket(new_socket, self.keyfile, self.certfile) self.file = SSLFakeFile(new_socket) @@ -781,11 +796,11 @@ ehlo_msg = "lhlo" - def __init__(self, host = '', port = LMTP_PORT, local_hostname = None): + def __init__(self, host='', port=LMTP_PORT, local_hostname=None): """Initialize a new instance.""" SMTP.__init__(self, host, port, local_hostname) - def connect(self, host = 'localhost', port = 0): + def connect(self, host='localhost', port=0): """Connect to the LMTP daemon, on either a Unix or a TCP socket.""" if host[0] != '/': return SMTP.connect(self, host, port) @@ -795,13 +810,15 @@ self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) self.sock.connect(host) except socket.error, msg: - if self.debuglevel > 0: print>>stderr, 'connect fail:', host + if self.debuglevel > 0: + print>>stderr, 'connect fail:', host if self.sock: self.sock.close() self.sock = None raise socket.error, msg (code, msg) = self.getreply() - if self.debuglevel > 0: print>>stderr, "connect:", msg + if self.debuglevel > 0: + print>>stderr, "connect:", msg return (code, msg) @@ -815,7 +832,7 @@ return sys.stdin.readline().strip() fromaddr = prompt("From") - toaddrs = prompt("To").split(',') + toaddrs = prompt("To").split(',') print "Enter message, end with ^D:" msg = '' while 1: diff --git a/lib-python/2.7/ssl.py b/lib-python/2.7/ssl.py --- a/lib-python/2.7/ssl.py +++ b/lib-python/2.7/ssl.py @@ -121,9 +121,11 @@ if e.errno != errno.ENOTCONN: raise # no, no connection yet + self._connected = False self._sslobj = None else: # yes, create the SSL object + self._connected = True 
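One behavioural detail in the smtplib hunks above: ``default_port`` moves from an instance attribute assigned in ``__init__`` to a class attribute, so that ``SMTP_SSL``'s value is already in effect for the ``connect()`` call made inside ``SMTP.__init__``. A hedged sanity check against the patched module (assumes the ``ssl`` module is available so that ``SMTP_SSL`` is defined):

    import smtplib

    assert smtplib.SMTP.default_port == smtplib.SMTP_PORT          # 25
    assert smtplib.SMTP_SSL.default_port == smtplib.SMTP_SSL_PORT  # 465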
self._sslobj = _ssl.sslwrap(self._sock, server_side, keyfile, certfile, cert_reqs, ssl_version, ca_certs, @@ -293,21 +295,36 @@ self._sslobj.do_handshake() - def connect(self, addr): - - """Connects to remote ADDR, and then wraps the connection in - an SSL channel.""" - + def _real_connect(self, addr, return_errno): # Here we assume that the socket is client-side, and not # connected at the time of the call. We connect it, then wrap it. - if self._sslobj: + if self._connected: raise ValueError("attempt to connect already-connected SSLSocket!") - socket.connect(self, addr) self._sslobj = _ssl.sslwrap(self._sock, False, self.keyfile, self.certfile, self.cert_reqs, self.ssl_version, self.ca_certs, self.ciphers) - if self.do_handshake_on_connect: - self.do_handshake() + try: + socket.connect(self, addr) + if self.do_handshake_on_connect: + self.do_handshake() + except socket_error as e: + if return_errno: + return e.errno + else: + self._sslobj = None + raise e + self._connected = True + return 0 + + def connect(self, addr): + """Connects to remote ADDR, and then wraps the connection in + an SSL channel.""" + self._real_connect(addr, False) + + def connect_ex(self, addr): + """Connects to remote ADDR, and then wraps the connection in + an SSL channel.""" + return self._real_connect(addr, True) def accept(self): diff --git a/lib-python/2.7/subprocess.py b/lib-python/2.7/subprocess.py --- a/lib-python/2.7/subprocess.py +++ b/lib-python/2.7/subprocess.py @@ -396,6 +396,7 @@ import traceback import gc import signal +import errno # Exception classes used by this module. class CalledProcessError(Exception): @@ -427,7 +428,6 @@ else: import select _has_poll = hasattr(select, 'poll') - import errno import fcntl import pickle @@ -441,8 +441,15 @@ "check_output", "CalledProcessError"] if mswindows: - from _subprocess import CREATE_NEW_CONSOLE, CREATE_NEW_PROCESS_GROUP - __all__.extend(["CREATE_NEW_CONSOLE", "CREATE_NEW_PROCESS_GROUP"]) + from _subprocess import (CREATE_NEW_CONSOLE, CREATE_NEW_PROCESS_GROUP, + STD_INPUT_HANDLE, STD_OUTPUT_HANDLE, + STD_ERROR_HANDLE, SW_HIDE, + STARTF_USESTDHANDLES, STARTF_USESHOWWINDOW) + + __all__.extend(["CREATE_NEW_CONSOLE", "CREATE_NEW_PROCESS_GROUP", + "STD_INPUT_HANDLE", "STD_OUTPUT_HANDLE", + "STD_ERROR_HANDLE", "SW_HIDE", + "STARTF_USESTDHANDLES", "STARTF_USESHOWWINDOW"]) try: MAXFD = os.sysconf("SC_OPEN_MAX") except: @@ -726,7 +733,11 @@ stderr = None if self.stdin: if input: - self.stdin.write(input) + try: + self.stdin.write(input) + except IOError as e: + if e.errno != errno.EPIPE and e.errno != errno.EINVAL: + raise self.stdin.close() elif self.stdout: stdout = self.stdout.read() @@ -883,7 +894,7 @@ except pywintypes.error, e: # Translate pywintypes.error to WindowsError, which is # a subclass of OSError. FIXME: We should really - # translate errno using _sys_errlist (or simliar), but + # translate errno using _sys_errlist (or similar), but # how can this be done from Python? 
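The ssl.py hunk above refactors ``connect()`` into ``_real_connect()`` and adds ``connect_ex()``, which reports a connection failure as an errno instead of raising. A rough usage sketch (not from the diff; host and port are placeholders, and it assumes the patched 2.7 ``ssl`` module):

    import socket
    import ssl

    raw = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    wrapped = ssl.wrap_socket(raw)                 # client-side, not yet connected
    err = wrapped.connect_ex(('example.invalid', 443))
    if err != 0:
        print "TLS connect failed, errno", err     # no exception raised
    wrapped.close()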
raise WindowsError(*e.args) finally: @@ -956,7 +967,11 @@ if self.stdin: if input is not None: - self.stdin.write(input) + try: + self.stdin.write(input) + except IOError as e: + if e.errno != errno.EPIPE: + raise self.stdin.close() if self.stdout: @@ -1051,14 +1066,17 @@ errread, errwrite) - def _set_cloexec_flag(self, fd): + def _set_cloexec_flag(self, fd, cloexec=True): try: cloexec_flag = fcntl.FD_CLOEXEC except AttributeError: cloexec_flag = 1 old = fcntl.fcntl(fd, fcntl.F_GETFD) - fcntl.fcntl(fd, fcntl.F_SETFD, old | cloexec_flag) + if cloexec: + fcntl.fcntl(fd, fcntl.F_SETFD, old | cloexec_flag) + else: + fcntl.fcntl(fd, fcntl.F_SETFD, old & ~cloexec_flag) def _close_fds(self, but): @@ -1128,21 +1146,25 @@ os.close(errpipe_read) # Dup fds for child - if p2cread is not None: - os.dup2(p2cread, 0) - if c2pwrite is not None: - os.dup2(c2pwrite, 1) - if errwrite is not None: - os.dup2(errwrite, 2) + def _dup2(a, b): + # dup2() removes the CLOEXEC flag but + # we must do it ourselves if dup2() + # would be a no-op (issue #10806). + if a == b: + self._set_cloexec_flag(a, False) + elif a is not None: + os.dup2(a, b) + _dup2(p2cread, 0) + _dup2(c2pwrite, 1) + _dup2(errwrite, 2) - # Close pipe fds. Make sure we don't close the same - # fd more than once, or standard fds. - if p2cread is not None and p2cread not in (0,): - os.close(p2cread) - if c2pwrite is not None and c2pwrite not in (p2cread, 1): - os.close(c2pwrite) - if errwrite is not None and errwrite not in (p2cread, c2pwrite, 2): - os.close(errwrite) + # Close pipe fds. Make sure we don't close the + # same fd more than once, or standard fds. + closed = { None } + for fd in [p2cread, c2pwrite, errwrite]: + if fd not in closed and fd > 2: + os.close(fd) + closed.add(fd) # Close all other fds, if asked for if close_fds: @@ -1194,7 +1216,11 @@ os.close(errpipe_read) if data != "": - _eintr_retry_call(os.waitpid, self.pid, 0) + try: + _eintr_retry_call(os.waitpid, self.pid, 0) + except OSError as e: + if e.errno != errno.ECHILD: + raise child_exception = pickle.loads(data) for fd in (p2cwrite, c2pread, errread): if fd is not None: @@ -1240,7 +1266,15 @@ """Wait for child process to terminate. Returns returncode attribute.""" if self.returncode is None: - pid, sts = _eintr_retry_call(os.waitpid, self.pid, 0) + try: + pid, sts = _eintr_retry_call(os.waitpid, self.pid, 0) + except OSError as e: + if e.errno != errno.ECHILD: + raise + # This happens if SIGCLD is set to be ignored or waiting + # for child processes has otherwise been disabled for our + # process. This child is dead, we can't get the status. 
+ sts = 0 self._handle_exitstatus(sts) return self.returncode @@ -1317,9 +1351,16 @@ for fd, mode in ready: if mode & select.POLLOUT: chunk = input[input_offset : input_offset + _PIPE_BUF] - input_offset += os.write(fd, chunk) - if input_offset >= len(input): - close_unregister_and_remove(fd) + try: + input_offset += os.write(fd, chunk) + except OSError as e: + if e.errno == errno.EPIPE: + close_unregister_and_remove(fd) + else: + raise + else: + if input_offset >= len(input): + close_unregister_and_remove(fd) elif mode & select_POLLIN_POLLPRI: data = os.read(fd, 4096) if not data: @@ -1358,11 +1399,19 @@ if self.stdin in wlist: chunk = input[input_offset : input_offset + _PIPE_BUF] - bytes_written = os.write(self.stdin.fileno(), chunk) - input_offset += bytes_written - if input_offset >= len(input): - self.stdin.close() - write_set.remove(self.stdin) + try: + bytes_written = os.write(self.stdin.fileno(), chunk) + except OSError as e: + if e.errno == errno.EPIPE: + self.stdin.close() + write_set.remove(self.stdin) + else: + raise + else: + input_offset += bytes_written + if input_offset >= len(input): + self.stdin.close() + write_set.remove(self.stdin) if self.stdout in rlist: data = os.read(self.stdout.fileno(), 1024) diff --git a/lib-python/2.7/symbol.py b/lib-python/2.7/symbol.py --- a/lib-python/2.7/symbol.py +++ b/lib-python/2.7/symbol.py @@ -82,20 +82,19 @@ sliceop = 325 exprlist = 326 testlist = 327 -dictmaker = 328 -dictorsetmaker = 329 -classdef = 330 -arglist = 331 -argument = 332 -list_iter = 333 -list_for = 334 -list_if = 335 -comp_iter = 336 -comp_for = 337 -comp_if = 338 -testlist1 = 339 -encoding_decl = 340 -yield_expr = 341 +dictorsetmaker = 328 +classdef = 329 +arglist = 330 +argument = 331 +list_iter = 332 +list_for = 333 +list_if = 334 +comp_iter = 335 +comp_for = 336 +comp_if = 337 +testlist1 = 338 +encoding_decl = 339 +yield_expr = 340 #--end constants-- sym_name = {} diff --git a/lib-python/2.7/sysconfig.py b/lib-python/2.7/sysconfig.py --- a/lib-python/2.7/sysconfig.py +++ b/lib-python/2.7/sysconfig.py @@ -271,7 +271,7 @@ def _get_makefile_filename(): if _PYTHON_BUILD: return os.path.join(_PROJECT_BASE, "Makefile") - return os.path.join(get_path('stdlib'), "config", "Makefile") + return os.path.join(get_path('platstdlib'), "config", "Makefile") def _init_posix(vars): @@ -297,21 +297,6 @@ msg = msg + " (%s)" % e.strerror raise IOError(msg) - # On MacOSX we need to check the setting of the environment variable - # MACOSX_DEPLOYMENT_TARGET: configure bases some choices on it so - # it needs to be compatible. - # If it isn't set we set it to the configure-time value - if sys.platform == 'darwin' and 'MACOSX_DEPLOYMENT_TARGET' in vars: - cfg_target = vars['MACOSX_DEPLOYMENT_TARGET'] - cur_target = os.getenv('MACOSX_DEPLOYMENT_TARGET', '') - if cur_target == '': - cur_target = cfg_target - os.putenv('MACOSX_DEPLOYMENT_TARGET', cfg_target) - elif map(int, cfg_target.split('.')) > map(int, cur_target.split('.')): - msg = ('$MACOSX_DEPLOYMENT_TARGET mismatch: now "%s" but "%s" ' - 'during configure' % (cur_target, cfg_target)) - raise IOError(msg) - # On AIX, there are wrong paths to the linker scripts in the Makefile # -- these paths are relative to the Python source, but when installed # the scripts are in another directory. @@ -616,9 +601,7 @@ # machine is going to compile and link as if it were # MACOSX_DEPLOYMENT_TARGET. 
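The subprocess.py changes above make ``communicate()`` and ``wait()`` tolerant of ``EPIPE``/``ECHILD`` when the child exits without draining its stdin. A POSIX-only sketch of the situation they cover (illustrative, not part of the patch; the ``true`` utility and the input size are arbitrary choices):

    import subprocess

    child = subprocess.Popen(['true'],              # exits without reading stdin
                             stdin=subprocess.PIPE,
                             stdout=subprocess.PIPE)
    out, err = child.communicate('unread input\n' * 100000)
    assert child.returncode == 0                    # the broken pipe is swallowed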
cfgvars = get_config_vars() - macver = os.environ.get('MACOSX_DEPLOYMENT_TARGET') - if not macver: - macver = cfgvars.get('MACOSX_DEPLOYMENT_TARGET') + macver = cfgvars.get('MACOSX_DEPLOYMENT_TARGET') if 1: # Always calculate the release of the running machine, @@ -639,7 +622,6 @@ m = re.search( r'ProductUserVisibleVersion\s*' + r'(.*?)', f.read()) - f.close() if m is not None: macrelease = '.'.join(m.group(1).split('.')[:2]) # else: fall back to the default behaviour diff --git a/lib-python/2.7/tarfile.py b/lib-python/2.7/tarfile.py --- a/lib-python/2.7/tarfile.py +++ b/lib-python/2.7/tarfile.py @@ -2239,10 +2239,14 @@ if hasattr(os, "symlink") and hasattr(os, "link"): # For systems that support symbolic and hard links. if tarinfo.issym(): + if os.path.lexists(targetpath): + os.unlink(targetpath) os.symlink(tarinfo.linkname, targetpath) else: # See extract(). if os.path.exists(tarinfo._link_target): + if os.path.lexists(targetpath): + os.unlink(targetpath) os.link(tarinfo._link_target, targetpath) else: self._extract_member(self._find_link_target(tarinfo), targetpath) diff --git a/lib-python/2.7/telnetlib.py b/lib-python/2.7/telnetlib.py --- a/lib-python/2.7/telnetlib.py +++ b/lib-python/2.7/telnetlib.py @@ -236,7 +236,7 @@ """ if self.debuglevel > 0: - print 'Telnet(%s,%d):' % (self.host, self.port), + print 'Telnet(%s,%s):' % (self.host, self.port), if args: print msg % args else: diff --git a/lib-python/2.7/test/cjkencodings/big5-utf8.txt b/lib-python/2.7/test/cjkencodings/big5-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/big5-utf8.txt @@ -0,0 +1,9 @@ +如何在 Python 中使用既有的 C library? + 在資訊科技快速發展的今天, 開發及測試軟體的速度是不容忽視的 +課題. 為加快開發及測試的速度, 我們便常希望能利用一些已開發好的 +library, 並有一個 fast prototyping 的 programming language 可 +供使用. 目前有許許多多的 library 是以 C 寫成, 而 Python 是一個 +fast prototyping 的 programming language. 故我們希望能將既有的 +C library 拿到 Python 的環境中測試及整合. 其中最主要也是我們所 +要討論的問題就是: + diff --git a/lib-python/2.7/test/cjkencodings/big5.txt b/lib-python/2.7/test/cjkencodings/big5.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/big5.txt @@ -0,0 +1,9 @@ +�p��b Python ���ϥάJ���� C library? +�@�b��T��ާֳt�o�i������, �}�o�δ��ճn�骺�t�׬O���e������ +���D. ���[�ֶ}�o�δ��ժ��t��, �ڭ̫K�`�Ʊ��Q�Τ@�Ǥw�}�o�n�� +library, �æ��@�� fast prototyping �� programming language �i +�Ѩϥ�. �ثe���\�\�h�h�� library �O�H C �g��, �� Python �O�@�� +fast prototyping �� programming language. �G�ڭ̧Ʊ��N�J���� +C library ���� Python �����Ҥ����դξ�X. �䤤�̥D�n�]�O�ڭ̩� +�n�Q�ת����D�N�O: + diff --git a/lib-python/2.7/test/cjkencodings/big5hkscs-utf8.txt b/lib-python/2.7/test/cjkencodings/big5hkscs-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/big5hkscs-utf8.txt @@ -0,0 +1,2 @@ +𠄌Ě鵮罓洆 +ÊÊ̄ê êê̄ diff --git a/lib-python/2.7/test/cjkencodings/big5hkscs.txt b/lib-python/2.7/test/cjkencodings/big5hkscs.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/big5hkscs.txt @@ -0,0 +1,2 @@ +�E�\�s�ڍ� +�f�b�� ���� diff --git a/lib-python/2.7/test/cjkencodings/cp949-utf8.txt b/lib-python/2.7/test/cjkencodings/cp949-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/cp949-utf8.txt @@ -0,0 +1,9 @@ +똠방각하 펲시콜라 + +㉯㉯납!! 因九月패믤릔궈 ⓡⓖ훀¿¿¿ 긍뒙 ⓔ뎨 ㉯. . +亞영ⓔ능횹 . . . . 서울뤄 뎐학乙 家훀 ! ! !ㅠ.ㅠ +흐흐흐 ㄱㄱㄱ☆ㅠ_ㅠ 어릨 탸콰긐 뎌응 칑九들乙 ㉯드긐 +설릌 家훀 . . . . 굴애쉌 ⓔ궈 ⓡ릘㉱긐 因仁川女中까즼 +와쒀훀 ! ! 亞영ⓔ 家능궈 ☆上관 없능궈능 亞능뒈훀 글애듴 +ⓡ려듀九 싀풔숴훀 어릨 因仁川女中싁⑨들앜!! 
㉯㉯납♡ ⌒⌒* + diff --git a/lib-python/2.7/test/cjkencodings/cp949.txt b/lib-python/2.7/test/cjkencodings/cp949.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/cp949.txt @@ -0,0 +1,9 @@ +�c�氢�� �����ݶ� + +������!! �������В�p�� �ި��R������ ���� �ѵ� ��. . +䬿��Ѵ��� . . . . ����� ������ ʫ�R ! ! !��.�� +������ �������٤�_�� � ����O ���� �h������ ����O +���j ʫ�R . . . . ���֚f �ѱ� �ސt�ƒO ���������� +�;��R ! ! 䬿��� ʫ�ɱ� ��߾�� ���ɱŴ� 䬴ɵ��R �۾֊� +�޷����� ��Ǵ���R � ����������Ĩ���!! �������� �ҡ�* + diff --git a/lib-python/2.7/test/cjkencodings/euc_jisx0213-utf8.txt b/lib-python/2.7/test/cjkencodings/euc_jisx0213-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/euc_jisx0213-utf8.txt @@ -0,0 +1,8 @@ +Python の開発は、1990 年ごろから開始されています。 +開発者の Guido van Rossum は教育用のプログラミング言語「ABC」の開発に参加していましたが、ABC は実用上の目的にはあまり適していませんでした。 +このため、Guido はより実用的なプログラミング言語の開発を開始し、英国 BBS 放送のコメディ番組「モンティ パイソン」のファンである Guido はこの言語を「Python」と名づけました。 +このような背景から生まれた Python の言語設計は、「シンプル」で「習得が容易」という目標に重点が置かれています。 +多くのスクリプト系言語ではユーザの目先の利便性を優先して色々な機能を言語要素として取り入れる場合が多いのですが、Python ではそういった小細工が追加されることはあまりありません。 +言語自体の機能は最小限に押さえ、必要な機能は拡張モジュールとして追加する、というのが Python のポリシーです。 + +ノか゚ ト゚ トキ喝塀 𡚴𪎌 麀齁𩛰 diff --git a/lib-python/2.7/test/cjkencodings/euc_jisx0213.txt b/lib-python/2.7/test/cjkencodings/euc_jisx0213.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/euc_jisx0213.txt @@ -0,0 +1,8 @@ +Python �γ�ȯ�ϡ�1990 ǯ�����鳫�Ϥ���Ƥ��ޤ��� +��ȯ�Ԥ� Guido van Rossum �϶����ѤΥץ���ߥ󥰸����ABC�פγ�ȯ�˻��ä��Ƥ��ޤ�������ABC �ϼ��Ѿ����Ū�ˤϤ��ޤ�Ŭ���Ƥ��ޤ���Ǥ����� +���Τ��ᡢGuido �Ϥ�����Ū�ʥץ���ߥ󥰸���γ�ȯ�򳫻Ϥ����ѹ� BBS �����Υ���ǥ����ȡ֥��ƥ� �ѥ�����פΥե���Ǥ��� Guido �Ϥ��θ�����Python�פ�̾�Ť��ޤ����� +���Τ褦���طʤ������ޤ줿 Python �θ����߷פϡ��֥���ץ�פǡֽ������ưספȤ�����ɸ�˽������֤���Ƥ��ޤ��� +¿���Υ�����ץȷϸ���Ǥϥ桼�����������������ͥ�褷�ƿ����ʵ�ǽ��������ǤȤ��Ƽ��������礬¿���ΤǤ�����Python �ǤϤ������ä����ٹ����ɲä���뤳�ȤϤ��ޤꤢ��ޤ��� +���켫�Τε�ǽ�ϺǾ��¤˲�������ɬ�פʵ�ǽ�ϳ�ĥ�⥸�塼��Ȥ����ɲä��롢�Ȥ����Τ� Python �Υݥꥷ���Ǥ��� + +�Τ� �� �ȥ����� ���� ��ԏ���� diff --git a/lib-python/2.7/test/cjkencodings/euc_jp-utf8.txt b/lib-python/2.7/test/cjkencodings/euc_jp-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/euc_jp-utf8.txt @@ -0,0 +1,7 @@ +Python の開発は、1990 年ごろから開始されています。 +開発者の Guido van Rossum は教育用のプログラミング言語「ABC」の開発に参加していましたが、ABC は実用上の目的にはあまり適していませんでした。 +このため、Guido はより実用的なプログラミング言語の開発を開始し、英国 BBS 放送のコメディ番組「モンティ パイソン」のファンである Guido はこの言語を「Python」と名づけました。 +このような背景から生まれた Python の言語設計は、「シンプル」で「習得が容易」という目標に重点が置かれています。 +多くのスクリプト系言語ではユーザの目先の利便性を優先して色々な機能を言語要素として取り入れる場合が多いのですが、Python ではそういった小細工が追加されることはあまりありません。 +言語自体の機能は最小限に押さえ、必要な機能は拡張モジュールとして追加する、というのが Python のポリシーです。 + diff --git a/lib-python/2.7/test/cjkencodings/euc_jp.txt b/lib-python/2.7/test/cjkencodings/euc_jp.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/euc_jp.txt @@ -0,0 +1,7 @@ +Python �γ�ȯ�ϡ�1990 ǯ�����鳫�Ϥ���Ƥ��ޤ��� +��ȯ�Ԥ� Guido van Rossum �϶����ѤΥץ���ߥ󥰸����ABC�פγ�ȯ�˻��ä��Ƥ��ޤ�������ABC �ϼ��Ѿ����Ū�ˤϤ��ޤ�Ŭ���Ƥ��ޤ���Ǥ����� +���Τ��ᡢGuido �Ϥ�����Ū�ʥץ���ߥ󥰸���γ�ȯ�򳫻Ϥ����ѹ� BBS �����Υ���ǥ����ȡ֥��ƥ� �ѥ�����פΥե���Ǥ��� Guido �Ϥ��θ�����Python�פ�̾�Ť��ޤ����� +���Τ褦���طʤ������ޤ줿 Python �θ����߷פϡ��֥���ץ�פǡֽ������ưספȤ�����ɸ�˽������֤���Ƥ��ޤ��� +¿���Υ�����ץȷϸ���Ǥϥ桼�����������������ͥ�褷�ƿ����ʵ�ǽ��������ǤȤ��Ƽ��������礬¿���ΤǤ�����Python �ǤϤ������ä����ٹ����ɲä���뤳�ȤϤ��ޤꤢ��ޤ��� +���켫�Τε�ǽ�ϺǾ��¤˲�������ɬ�פʵ�ǽ�ϳ�ĥ�⥸�塼��Ȥ����ɲä��롢�Ȥ����Τ� Python �Υݥꥷ���Ǥ��� + diff --git a/lib-python/2.7/test/cjkencodings/euc_kr-utf8.txt 
b/lib-python/2.7/test/cjkencodings/euc_kr-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/euc_kr-utf8.txt @@ -0,0 +1,7 @@ +◎ 파이썬(Python)은 배우기 쉽고, 강력한 프로그래밍 언어입니다. 파이썬은 +효율적인 고수준 데이터 구조와 간단하지만 효율적인 객체지향프로그래밍을 +지원합니다. 파이썬의 우아(優雅)한 문법과 동적 타이핑, 그리고 인터프리팅 +환경은 파이썬을 스크립팅과 여러 분야에서와 대부분의 플랫폼에서의 빠른 +애플리케이션 개발을 할 수 있는 이상적인 언어로 만들어줍니다. + +☆첫가끝: 날아라 쓔쓔쓩~ 닁큼! 뜽금없이 전홥니다. 뷁. 그런거 읎다. diff --git a/lib-python/2.7/test/cjkencodings/euc_kr.txt b/lib-python/2.7/test/cjkencodings/euc_kr.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/euc_kr.txt @@ -0,0 +1,7 @@ +�� ���̽�(Python)�� ���� ����, ������ ���α׷��� ����Դϴ�. ���̽��� +ȿ������ ����� ������ ������ ���������� ȿ������ ��ü�������α׷����� +�����մϴ�. ���̽��� ���(���)�� ������ ���� Ÿ����, �׸��� ���������� +ȯ���� ���̽��� ��ũ���ð� ���� �о߿����� ��κ��� �÷��������� ���� +���ø����̼� ������ �� �� �ִ� �̻����� ���� ������ݴϴ�. + +��ù����: ���ƶ� �Ԥ��ФԤԤ��ФԾ�~ �Ԥ��Ҥ�ŭ! �Ԥ��Ѥ��ݾ��� ���Ԥ��Ȥ��ϴ�. �Ԥ��Τ�. �׷��� �Ԥ��Ѥ���. diff --git a/lib-python/2.7/test/cjkencodings/gb18030-utf8.txt b/lib-python/2.7/test/cjkencodings/gb18030-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/gb18030-utf8.txt @@ -0,0 +1,15 @@ +Python(派森)语言是一种功能强大而完善的通用型计算机程序设计语言, +已经具有十多年的发展历史,成熟且稳定。这种语言具有非常简捷而清晰 +的语法特点,适合完成各种高层任务,几乎可以在所有的操作系统中 +运行。这种语言简单而强大,适合各种人士学习使用。目前,基于这 +种语言的相关技术正在飞速的发展,用户数量急剧扩大,相关的资源非常多。 +如何在 Python 中使用既有的 C library? + 在資訊科技快速發展的今天, 開發及測試軟體的速度是不容忽視的 +課題. 為加快開發及測試的速度, 我們便常希望能利用一些已開發好的 +library, 並有一個 fast prototyping 的 programming language 可 +供使用. 目前有許許多多的 library 是以 C 寫成, 而 Python 是一個 +fast prototyping 的 programming language. 故我們希望能將既有的 +C library 拿到 Python 的環境中測試及整合. 其中最主要也是我們所 +要討論的問題就是: +파이썬은 강력한 기능을 지닌 범용 컴퓨터 프로그래밍 언어다. + diff --git a/lib-python/2.7/test/cjkencodings/gb18030.txt b/lib-python/2.7/test/cjkencodings/gb18030.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/gb18030.txt @@ -0,0 +1,15 @@ +Python����ɭ��������һ�ֹ���ǿ������Ƶ�ͨ���ͼ��������������ԣ� +�Ѿ�����ʮ����ķ�չ��ʷ���������ȶ����������Ծ��зdz���ݶ����� +���﷨�ص㣬�ʺ���ɸ��ָ߲����񣬼������������еIJ���ϵͳ�� +���С��������Լ򵥶�ǿ���ʺϸ�����ʿѧϰʹ�á�Ŀǰ�������� +�����Ե���ؼ������ڷ��ٵķ�չ���û���������������ص���Դ�dz��ࡣ +����� Python ��ʹ�ü��е� C library? +�����YӍ�Ƽ����ٰlչ�Ľ���, �_�l���yԇܛ�w���ٶ��Dz��ݺ�ҕ�� +�n�}. ��ӿ��_�l���yԇ���ٶ�, �҂��㳣ϣ��������һЩ���_�l�õ� +library, �K��һ�� fast prototyping �� programming language �� +��ʹ��. Ŀǰ���S�S���� library ���� C ����, �� Python ��һ�� +fast prototyping �� programming language. ���҂�ϣ���܌����е� +C library �õ� Python �ĭh���Мyԇ������. ��������ҪҲ���҂��� +ҪӑՓ�Ć��}����: +�5�1�3�3�2�1�3�1 �7�6�0�4�6�3 �8�5�8�6�3�5 �3�1�9�5 �0�9�3�0 �4�3�5�7�5�5 �5�5�0�9�8�9�9�3�0�4 �2�9�2�5�9�9. 
+ diff --git a/lib-python/2.7/test/cjkencodings/gb2312-utf8.txt b/lib-python/2.7/test/cjkencodings/gb2312-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/gb2312-utf8.txt @@ -0,0 +1,6 @@ +Python(派森)语言是一种功能强大而完善的通用型计算机程序设计语言, +已经具有十多年的发展历史,成熟且稳定。这种语言具有非常简捷而清晰 +的语法特点,适合完成各种高层任务,几乎可以在所有的操作系统中 +运行。这种语言简单而强大,适合各种人士学习使用。目前,基于这 +种语言的相关技术正在飞速的发展,用户数量急剧扩大,相关的资源非常多。 + diff --git a/lib-python/2.7/test/cjkencodings/gb2312.txt b/lib-python/2.7/test/cjkencodings/gb2312.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/gb2312.txt @@ -0,0 +1,6 @@ +Python����ɭ��������һ�ֹ���ǿ������Ƶ�ͨ���ͼ��������������ԣ� +�Ѿ�����ʮ����ķ�չ��ʷ���������ȶ����������Ծ��зdz���ݶ����� +���﷨�ص㣬�ʺ���ɸ��ָ߲����񣬼������������еIJ���ϵͳ�� +���С��������Լ򵥶�ǿ���ʺϸ�����ʿѧϰʹ�á�Ŀǰ�������� +�����Ե���ؼ������ڷ��ٵķ�չ���û���������������ص���Դ�dz��ࡣ + diff --git a/lib-python/2.7/test/cjkencodings/gbk-utf8.txt b/lib-python/2.7/test/cjkencodings/gbk-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/gbk-utf8.txt @@ -0,0 +1,14 @@ +Python(派森)语言是一种功能强大而完善的通用型计算机程序设计语言, +已经具有十多年的发展历史,成熟且稳定。这种语言具有非常简捷而清晰 +的语法特点,适合完成各种高层任务,几乎可以在所有的操作系统中 +运行。这种语言简单而强大,适合各种人士学习使用。目前,基于这 +种语言的相关技术正在飞速的发展,用户数量急剧扩大,相关的资源非常多。 +如何在 Python 中使用既有的 C library? + 在資訊科技快速發展的今天, 開發及測試軟體的速度是不容忽視的 +課題. 為加快開發及測試的速度, 我們便常希望能利用一些已開發好的 +library, 並有一個 fast prototyping 的 programming language 可 +供使用. 目前有許許多多的 library 是以 C 寫成, 而 Python 是一個 +fast prototyping 的 programming language. 故我們希望能將既有的 +C library 拿到 Python 的環境中測試及整合. 其中最主要也是我們所 +要討論的問題就是: + diff --git a/lib-python/2.7/test/cjkencodings/gbk.txt b/lib-python/2.7/test/cjkencodings/gbk.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/gbk.txt @@ -0,0 +1,14 @@ +Python����ɭ��������һ�ֹ���ǿ������Ƶ�ͨ���ͼ��������������ԣ� +�Ѿ�����ʮ����ķ�չ��ʷ���������ȶ����������Ծ��зdz���ݶ����� +���﷨�ص㣬�ʺ���ɸ��ָ߲����񣬼������������еIJ���ϵͳ�� +���С��������Լ򵥶�ǿ���ʺϸ�����ʿѧϰʹ�á�Ŀǰ�������� +�����Ե���ؼ������ڷ��ٵķ�չ���û���������������ص���Դ�dz��ࡣ +����� Python ��ʹ�ü��е� C library? +�����YӍ�Ƽ����ٰlչ�Ľ���, �_�l���yԇܛ�w���ٶ��Dz��ݺ�ҕ�� +�n�}. ��ӿ��_�l���yԇ���ٶ�, �҂��㳣ϣ��������һЩ���_�l�õ� +library, �K��һ�� fast prototyping �� programming language �� +��ʹ��. Ŀǰ���S�S���� library ���� C ����, �� Python ��һ�� +fast prototyping �� programming language. ���҂�ϣ���܌����е� +C library �õ� Python �ĭh���Мyԇ������. ��������ҪҲ���҂��� +ҪӑՓ�Ć��}����: + diff --git a/lib-python/2.7/test/cjkencodings/hz-utf8.txt b/lib-python/2.7/test/cjkencodings/hz-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/hz-utf8.txt @@ -0,0 +1,2 @@ +This sentence is in ASCII. +The next sentence is in GB.己所不欲,勿施於人。Bye. diff --git a/lib-python/2.7/test/cjkencodings/hz.txt b/lib-python/2.7/test/cjkencodings/hz.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/hz.txt @@ -0,0 +1,2 @@ +This sentence is in ASCII. +The next sentence is in GB.~{<:Ky2;S{#,NpJ)l6HK!#~}Bye. diff --git a/lib-python/2.7/test/cjkencodings/johab-utf8.txt b/lib-python/2.7/test/cjkencodings/johab-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/johab-utf8.txt @@ -0,0 +1,9 @@ +똠방각하 펲시콜라 + +㉯㉯납!! 因九月패믤릔궈 ⓡⓖ훀¿¿¿ 긍뒙 ⓔ뎨 ㉯. . +亞영ⓔ능횹 . . . . 서울뤄 뎐학乙 家훀 ! ! !ㅠ.ㅠ +흐흐흐 ㄱㄱㄱ☆ㅠ_ㅠ 어릨 탸콰긐 뎌응 칑九들乙 ㉯드긐 +설릌 家훀 . . . . 굴애쉌 ⓔ궈 ⓡ릘㉱긐 因仁川女中까즼 +와쒀훀 ! ! 亞영ⓔ 家능궈 ☆上관 없능궈능 亞능뒈훀 글애듴 +ⓡ려듀九 싀풔숴훀 어릨 因仁川女中싁⑨들앜!! 
㉯㉯납♡ ⌒⌒* + diff --git a/lib-python/2.7/test/cjkencodings/johab.txt b/lib-python/2.7/test/cjkencodings/johab.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/johab.txt @@ -0,0 +1,9 @@ +���w�b�a �\��ũ�a + +�����s!! �g��Ú������ �����zٯٯٯ �w�� �ѕ� ��. . +�<�w�ѓw�s . . . . �ᶉ�� �e�b�� �;�z ! ! !�A.�A +�a�a�a �A�A�A�i�A_�A �៚ ȡ���z �a�w ×✗i�� ���a�z +��z �;�z . . . . ������ �ъ� �ޟ��‹z �g�b�I����a�� +�����z ! ! �<�w�� �;�w�� �i꾉� ���w���w �<�w���z �i���z +�ޝa�A� ��Ρ���z �៚ �g�b�I���鯂��i�z!! �����sٽ �b�b* + diff --git a/lib-python/2.7/test/cjkencodings/shift_jis-utf8.txt b/lib-python/2.7/test/cjkencodings/shift_jis-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/shift_jis-utf8.txt @@ -0,0 +1,7 @@ +Python の開発は、1990 年ごろから開始されています。 +開発者の Guido van Rossum は教育用のプログラミング言語「ABC」の開発に参加していましたが、ABC は実用上の目的にはあまり適していませんでした。 +このため、Guido はより実用的なプログラミング言語の開発を開始し、英国 BBS 放送のコメディ番組「モンティ パイソン」のファンである Guido はこの言語を「Python」と名づけました。 +このような背景から生まれた Python の言語設計は、「シンプル」で「習得が容易」という目標に重点が置かれています。 +多くのスクリプト系言語ではユーザの目先の利便性を優先して色々な機能を言語要素として取り入れる場合が多いのですが、Python ではそういった小細工が追加されることはあまりありません。 +言語自体の機能は最小限に押さえ、必要な機能は拡張モジュールとして追加する、というのが Python のポリシーです。 + diff --git a/lib-python/2.7/test/cjkencodings/shift_jis.txt b/lib-python/2.7/test/cjkencodings/shift_jis.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/shift_jis.txt @@ -0,0 +1,7 @@ +Python �̊J���́A1990 �N���납��J�n����Ă��܂��B +�J���҂� Guido van Rossum �͋���p�̃v���O���~���O����uABC�v�̊J���ɎQ�����Ă��܂������AABC �͎��p��̖ړI�ɂ͂��܂�K���Ă��܂���ł����B +���̂��߁AGuido �͂����p�I�ȃv���O���~���O����̊J�����J�n���A�p�� BBS �����̃R���f�B�ԑg�u�����e�B �p�C�\���v�̃t�@���ł��� Guido �͂��̌�����uPython�v�Ɩ��Â��܂����B +���̂悤�Ȕw�i���琶�܂ꂽ Python �̌���݌v�́A�u�V���v���v�Łu�K�����e�Ձv�Ƃ����ڕW�ɏd�_���u����Ă��܂��B +�����̃X�N���v�g�n����ł̓��[�U�̖ڐ�̗��֐���D�悵�ĐF�X�ȋ@�\������v�f�Ƃ��Ď������ꍇ�������̂ł����APython �ł͂������������׍H���lj�����邱�Ƃ͂��܂肠��܂���B +���ꎩ�̂̋@�\�͍ŏ����ɉ������A�K�v�ȋ@�\�͊g�����W���[���Ƃ��Ēlj�����A�Ƃ����̂� Python �̃|���V�[�ł��B + diff --git a/lib-python/2.7/test/cjkencodings/shift_jisx0213-utf8.txt b/lib-python/2.7/test/cjkencodings/shift_jisx0213-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/shift_jisx0213-utf8.txt @@ -0,0 +1,8 @@ +Python の開発は、1990 年ごろから開始されています。 +開発者の Guido van Rossum は教育用のプログラミング言語「ABC」の開発に参加していましたが、ABC は実用上の目的にはあまり適していませんでした。 +このため、Guido はより実用的なプログラミング言語の開発を開始し、英国 BBS 放送のコメディ番組「モンティ パイソン」のファンである Guido はこの言語を「Python」と名づけました。 +このような背景から生まれた Python の言語設計は、「シンプル」で「習得が容易」という目標に重点が置かれています。 +多くのスクリプト系言語ではユーザの目先の利便性を優先して色々な機能を言語要素として取り入れる場合が多いのですが、Python ではそういった小細工が追加されることはあまりありません。 +言語自体の機能は最小限に押さえ、必要な機能は拡張モジュールとして追加する、というのが Python のポリシーです。 + +ノか゚ ト゚ トキ喝塀 𡚴𪎌 麀齁𩛰 diff --git a/lib-python/2.7/test/cjkencodings/shift_jisx0213.txt b/lib-python/2.7/test/cjkencodings/shift_jisx0213.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/shift_jisx0213.txt @@ -0,0 +1,8 @@ +Python �̊J���́A1990 �N���납��J�n����Ă��܂��B +�J���҂� Guido van Rossum �͋���p�̃v���O���~���O����uABC�v�̊J���ɎQ�����Ă��܂������AABC �͎��p��̖ړI�ɂ͂��܂�K���Ă��܂���ł����B +���̂��߁AGuido �͂����p�I�ȃv���O���~���O����̊J�����J�n���A�p�� BBS �����̃R���f�B�ԑg�u�����e�B �p�C�\���v�̃t�@���ł��� Guido �͂��̌�����uPython�v�Ɩ��Â��܂����B +���̂悤�Ȕw�i���琶�܂ꂽ Python �̌���݌v�́A�u�V���v���v�Łu�K�����e�Ձv�Ƃ����ڕW�ɏd�_���u����Ă��܂��B +�����̃X�N���v�g�n����ł̓��[�U�̖ڐ�̗��֐���D�悵�ĐF�X�ȋ@�\������v�f�Ƃ��Ď������ꍇ�������̂ł����APython �ł͂������������׍H���lj�����邱�Ƃ͂��܂肠��܂���B 
+���ꎩ�̂̋@�\�͍ŏ����ɉ������A�K�v�ȋ@�\�͊g�����W���[���Ƃ��Ēlj�����A�Ƃ����̂� Python �̃|���V�[�ł��B + +�m�� �� �g�L�K�y ���� ������ diff --git a/lib-python/2.7/test/cjkencodings_test.py b/lib-python/2.7/test/cjkencodings_test.py deleted file mode 100644 --- a/lib-python/2.7/test/cjkencodings_test.py +++ /dev/null @@ -1,1019 +0,0 @@ -teststring = { -'big5': ( -"\xa6\x70\xa6\xf3\xa6\x62\x20\x50\x79\x74\x68\x6f\x6e\x20\xa4\xa4" -"\xa8\xcf\xa5\xce\xac\x4a\xa6\xb3\xaa\xba\x20\x43\x20\x6c\x69\x62" -"\x72\x61\x72\x79\x3f\x0a\xa1\x40\xa6\x62\xb8\xea\xb0\x54\xac\xec" -"\xa7\xde\xa7\xd6\xb3\x74\xb5\x6f\xae\x69\xaa\xba\xa4\xb5\xa4\xd1" -"\x2c\x20\xb6\x7d\xb5\x6f\xa4\xce\xb4\xfa\xb8\xd5\xb3\x6e\xc5\xe9" -"\xaa\xba\xb3\x74\xab\xd7\xac\x4f\xa4\xa3\xae\x65\xa9\xbf\xb5\xf8" -"\xaa\xba\x0a\xbd\xd2\xc3\x44\x2e\x20\xac\xb0\xa5\x5b\xa7\xd6\xb6" -"\x7d\xb5\x6f\xa4\xce\xb4\xfa\xb8\xd5\xaa\xba\xb3\x74\xab\xd7\x2c" -"\x20\xa7\xda\xad\xcc\xab\x4b\xb1\x60\xa7\xc6\xb1\xe6\xaf\xe0\xa7" -"\x51\xa5\xce\xa4\x40\xa8\xc7\xa4\x77\xb6\x7d\xb5\x6f\xa6\x6e\xaa" -"\xba\x0a\x6c\x69\x62\x72\x61\x72\x79\x2c\x20\xa8\xc3\xa6\xb3\xa4" -"\x40\xad\xd3\x20\x66\x61\x73\x74\x20\x70\x72\x6f\x74\x6f\x74\x79" -"\x70\x69\x6e\x67\x20\xaa\xba\x20\x70\x72\x6f\x67\x72\x61\x6d\x6d" -"\x69\x6e\x67\x20\x6c\x61\x6e\x67\x75\x61\x67\x65\x20\xa5\x69\x0a" -"\xa8\xd1\xa8\xcf\xa5\xce\x2e\x20\xa5\xd8\xab\x65\xa6\xb3\xb3\x5c" -"\xb3\x5c\xa6\x68\xa6\x68\xaa\xba\x20\x6c\x69\x62\x72\x61\x72\x79" -"\x20\xac\x4f\xa5\x48\x20\x43\x20\xbc\x67\xa6\xa8\x2c\x20\xa6\xd3" -"\x20\x50\x79\x74\x68\x6f\x6e\x20\xac\x4f\xa4\x40\xad\xd3\x0a\x66" -"\x61\x73\x74\x20\x70\x72\x6f\x74\x6f\x74\x79\x70\x69\x6e\x67\x20" -"\xaa\xba\x20\x70\x72\x6f\x67\x72\x61\x6d\x6d\x69\x6e\x67\x20\x6c" -"\x61\x6e\x67\x75\x61\x67\x65\x2e\x20\xac\x47\xa7\xda\xad\xcc\xa7" -"\xc6\xb1\xe6\xaf\xe0\xb1\x4e\xac\x4a\xa6\xb3\xaa\xba\x0a\x43\x20" -"\x6c\x69\x62\x72\x61\x72\x79\x20\xae\xb3\xa8\xec\x20\x50\x79\x74" -"\x68\x6f\x6e\x20\xaa\xba\xc0\xf4\xb9\xd2\xa4\xa4\xb4\xfa\xb8\xd5" -"\xa4\xce\xbe\xe3\xa6\x58\x2e\x20\xa8\xe4\xa4\xa4\xb3\xcc\xa5\x44" -"\xad\x6e\xa4\x5d\xac\x4f\xa7\xda\xad\xcc\xa9\xd2\x0a\xad\x6e\xb0" -"\x51\xbd\xd7\xaa\xba\xb0\xdd\xc3\x44\xb4\x4e\xac\x4f\x3a\x0a\x0a", -"\xe5\xa6\x82\xe4\xbd\x95\xe5\x9c\xa8\x20\x50\x79\x74\x68\x6f\x6e" -"\x20\xe4\xb8\xad\xe4\xbd\xbf\xe7\x94\xa8\xe6\x97\xa2\xe6\x9c\x89" -"\xe7\x9a\x84\x20\x43\x20\x6c\x69\x62\x72\x61\x72\x79\x3f\x0a\xe3" -"\x80\x80\xe5\x9c\xa8\xe8\xb3\x87\xe8\xa8\x8a\xe7\xa7\x91\xe6\x8a" -"\x80\xe5\xbf\xab\xe9\x80\x9f\xe7\x99\xbc\xe5\xb1\x95\xe7\x9a\x84" -"\xe4\xbb\x8a\xe5\xa4\xa9\x2c\x20\xe9\x96\x8b\xe7\x99\xbc\xe5\x8f" -"\x8a\xe6\xb8\xac\xe8\xa9\xa6\xe8\xbb\x9f\xe9\xab\x94\xe7\x9a\x84" -"\xe9\x80\x9f\xe5\xba\xa6\xe6\x98\xaf\xe4\xb8\x8d\xe5\xae\xb9\xe5" -"\xbf\xbd\xe8\xa6\x96\xe7\x9a\x84\x0a\xe8\xaa\xb2\xe9\xa1\x8c\x2e" -"\x20\xe7\x82\xba\xe5\x8a\xa0\xe5\xbf\xab\xe9\x96\x8b\xe7\x99\xbc" -"\xe5\x8f\x8a\xe6\xb8\xac\xe8\xa9\xa6\xe7\x9a\x84\xe9\x80\x9f\xe5" -"\xba\xa6\x2c\x20\xe6\x88\x91\xe5\x80\x91\xe4\xbe\xbf\xe5\xb8\xb8" -"\xe5\xb8\x8c\xe6\x9c\x9b\xe8\x83\xbd\xe5\x88\xa9\xe7\x94\xa8\xe4" -"\xb8\x80\xe4\xba\x9b\xe5\xb7\xb2\xe9\x96\x8b\xe7\x99\xbc\xe5\xa5" -"\xbd\xe7\x9a\x84\x0a\x6c\x69\x62\x72\x61\x72\x79\x2c\x20\xe4\xb8" -"\xa6\xe6\x9c\x89\xe4\xb8\x80\xe5\x80\x8b\x20\x66\x61\x73\x74\x20" -"\x70\x72\x6f\x74\x6f\x74\x79\x70\x69\x6e\x67\x20\xe7\x9a\x84\x20" -"\x70\x72\x6f\x67\x72\x61\x6d\x6d\x69\x6e\x67\x20\x6c\x61\x6e\x67" -"\x75\x61\x67\x65\x20\xe5\x8f\xaf\x0a\xe4\xbe\x9b\xe4\xbd\xbf\xe7" -"\x94\xa8\x2e\x20\xe7\x9b\xae\xe5\x89\x8d\xe6\x9c\x89\xe8\xa8\xb1" 
-"\xe8\xa8\xb1\xe5\xa4\x9a\xe5\xa4\x9a\xe7\x9a\x84\x20\x6c\x69\x62" -"\x72\x61\x72\x79\x20\xe6\x98\xaf\xe4\xbb\xa5\x20\x43\x20\xe5\xaf" -"\xab\xe6\x88\x90\x2c\x20\xe8\x80\x8c\x20\x50\x79\x74\x68\x6f\x6e" -"\x20\xe6\x98\xaf\xe4\xb8\x80\xe5\x80\x8b\x0a\x66\x61\x73\x74\x20" -"\x70\x72\x6f\x74\x6f\x74\x79\x70\x69\x6e\x67\x20\xe7\x9a\x84\x20" -"\x70\x72\x6f\x67\x72\x61\x6d\x6d\x69\x6e\x67\x20\x6c\x61\x6e\x67" -"\x75\x61\x67\x65\x2e\x20\xe6\x95\x85\xe6\x88\x91\xe5\x80\x91\xe5" -"\xb8\x8c\xe6\x9c\x9b\xe8\x83\xbd\xe5\xb0\x87\xe6\x97\xa2\xe6\x9c" -"\x89\xe7\x9a\x84\x0a\x43\x20\x6c\x69\x62\x72\x61\x72\x79\x20\xe6" -"\x8b\xbf\xe5\x88\xb0\x20\x50\x79\x74\x68\x6f\x6e\x20\xe7\x9a\x84" -"\xe7\x92\xb0\xe5\xa2\x83\xe4\xb8\xad\xe6\xb8\xac\xe8\xa9\xa6\xe5" -"\x8f\x8a\xe6\x95\xb4\xe5\x90\x88\x2e\x20\xe5\x85\xb6\xe4\xb8\xad" -"\xe6\x9c\x80\xe4\xb8\xbb\xe8\xa6\x81\xe4\xb9\x9f\xe6\x98\xaf\xe6" -"\x88\x91\xe5\x80\x91\xe6\x89\x80\x0a\xe8\xa6\x81\xe8\xa8\x8e\xe8" -"\xab\x96\xe7\x9a\x84\xe5\x95\x8f\xe9\xa1\x8c\xe5\xb0\xb1\xe6\x98" -"\xaf\x3a\x0a\x0a"), -'big5hkscs': ( -"\x88\x45\x88\x5c\x8a\x73\x8b\xda\x8d\xd8\x0a\x88\x66\x88\x62\x88" -"\xa7\x20\x88\xa7\x88\xa3\x0a", -"\xf0\xa0\x84\x8c\xc4\x9a\xe9\xb5\xae\xe7\xbd\x93\xe6\xb4\x86\x0a" -"\xc3\x8a\xc3\x8a\xcc\x84\xc3\xaa\x20\xc3\xaa\xc3\xaa\xcc\x84\x0a"), -'cp949': ( -"\x8c\x63\xb9\xe6\xb0\xa2\xc7\xcf\x20\xbc\x84\xbd\xc3\xc4\xdd\xb6" -"\xf3\x0a\x0a\xa8\xc0\xa8\xc0\xb3\xb3\x21\x21\x20\xec\xd7\xce\xfa" -"\xea\xc5\xc6\xd0\x92\xe6\x90\x70\xb1\xc5\x20\xa8\xde\xa8\xd3\xc4" -"\x52\xa2\xaf\xa2\xaf\xa2\xaf\x20\xb1\xe0\x8a\x96\x20\xa8\xd1\xb5" -"\xb3\x20\xa8\xc0\x2e\x20\x2e\x0a\xe4\xac\xbf\xb5\xa8\xd1\xb4\xc9" -"\xc8\xc2\x20\x2e\x20\x2e\x20\x2e\x20\x2e\x20\xbc\xad\xbf\xef\xb7" -"\xef\x20\xb5\xaf\xc7\xd0\xeb\xe0\x20\xca\xab\xc4\x52\x20\x21\x20" -"\x21\x20\x21\xa4\xd0\x2e\xa4\xd0\x0a\xc8\xe5\xc8\xe5\xc8\xe5\x20" -"\xa4\xa1\xa4\xa1\xa4\xa1\xa1\xd9\xa4\xd0\x5f\xa4\xd0\x20\xbe\xee" -"\x90\x8a\x20\xc5\xcb\xc4\xe2\x83\x4f\x20\xb5\xae\xc0\xc0\x20\xaf" -"\x68\xce\xfa\xb5\xe9\xeb\xe0\x20\xa8\xc0\xb5\xe5\x83\x4f\x0a\xbc" -"\xb3\x90\x6a\x20\xca\xab\xc4\x52\x20\x2e\x20\x2e\x20\x2e\x20\x2e" -"\x20\xb1\xbc\xbe\xd6\x9a\x66\x20\xa8\xd1\xb1\xc5\x20\xa8\xde\x90" -"\x74\xa8\xc2\x83\x4f\x20\xec\xd7\xec\xd2\xf4\xb9\xe5\xfc\xf1\xe9" -"\xb1\xee\xa3\x8e\x0a\xbf\xcd\xbe\xac\xc4\x52\x20\x21\x20\x21\x20" -"\xe4\xac\xbf\xb5\xa8\xd1\x20\xca\xab\xb4\xc9\xb1\xc5\x20\xa1\xd9" -"\xdf\xbe\xb0\xfc\x20\xbe\xf8\xb4\xc9\xb1\xc5\xb4\xc9\x20\xe4\xac" -"\xb4\xc9\xb5\xd8\xc4\x52\x20\xb1\xdb\xbe\xd6\x8a\xdb\x0a\xa8\xde" -"\xb7\xc1\xb5\xe0\xce\xfa\x20\x9a\xc3\xc7\xb4\xbd\xa4\xc4\x52\x20" -"\xbe\xee\x90\x8a\x20\xec\xd7\xec\xd2\xf4\xb9\xe5\xfc\xf1\xe9\x9a" -"\xc4\xa8\xef\xb5\xe9\x9d\xda\x21\x21\x20\xa8\xc0\xa8\xc0\xb3\xb3" -"\xa2\xbd\x20\xa1\xd2\xa1\xd2\x2a\x0a\x0a", -"\xeb\x98\xa0\xeb\xb0\xa9\xea\xb0\x81\xed\x95\x98\x20\xed\x8e\xb2" -"\xec\x8b\x9c\xec\xbd\x9c\xeb\x9d\xbc\x0a\x0a\xe3\x89\xaf\xe3\x89" -"\xaf\xeb\x82\xa9\x21\x21\x20\xe5\x9b\xa0\xe4\xb9\x9d\xe6\x9c\x88" -"\xed\x8c\xa8\xeb\xaf\xa4\xeb\xa6\x94\xea\xb6\x88\x20\xe2\x93\xa1" -"\xe2\x93\x96\xed\x9b\x80\xc2\xbf\xc2\xbf\xc2\xbf\x20\xea\xb8\x8d" -"\xeb\x92\x99\x20\xe2\x93\x94\xeb\x8e\xa8\x20\xe3\x89\xaf\x2e\x20" -"\x2e\x0a\xe4\xba\x9e\xec\x98\x81\xe2\x93\x94\xeb\x8a\xa5\xed\x9a" -"\xb9\x20\x2e\x20\x2e\x20\x2e\x20\x2e\x20\xec\x84\x9c\xec\x9a\xb8" -"\xeb\xa4\x84\x20\xeb\x8e\x90\xed\x95\x99\xe4\xb9\x99\x20\xe5\xae" -"\xb6\xed\x9b\x80\x20\x21\x20\x21\x20\x21\xe3\x85\xa0\x2e\xe3\x85" -"\xa0\x0a\xed\x9d\x90\xed\x9d\x90\xed\x9d\x90\x20\xe3\x84\xb1\xe3" 
-"\x84\xb1\xe3\x84\xb1\xe2\x98\x86\xe3\x85\xa0\x5f\xe3\x85\xa0\x20" -"\xec\x96\xb4\xeb\xa6\xa8\x20\xed\x83\xb8\xec\xbd\xb0\xea\xb8\x90" -"\x20\xeb\x8e\x8c\xec\x9d\x91\x20\xec\xb9\x91\xe4\xb9\x9d\xeb\x93" -"\xa4\xe4\xb9\x99\x20\xe3\x89\xaf\xeb\x93\x9c\xea\xb8\x90\x0a\xec" -"\x84\xa4\xeb\xa6\x8c\x20\xe5\xae\xb6\xed\x9b\x80\x20\x2e\x20\x2e" -"\x20\x2e\x20\x2e\x20\xea\xb5\xb4\xec\x95\xa0\xec\x89\x8c\x20\xe2" -"\x93\x94\xea\xb6\x88\x20\xe2\x93\xa1\xeb\xa6\x98\xe3\x89\xb1\xea" -"\xb8\x90\x20\xe5\x9b\xa0\xe4\xbb\x81\xe5\xb7\x9d\xef\xa6\x81\xe4" -"\xb8\xad\xea\xb9\x8c\xec\xa6\xbc\x0a\xec\x99\x80\xec\x92\x80\xed" -"\x9b\x80\x20\x21\x20\x21\x20\xe4\xba\x9e\xec\x98\x81\xe2\x93\x94" -"\x20\xe5\xae\xb6\xeb\x8a\xa5\xea\xb6\x88\x20\xe2\x98\x86\xe4\xb8" -"\x8a\xea\xb4\x80\x20\xec\x97\x86\xeb\x8a\xa5\xea\xb6\x88\xeb\x8a" -"\xa5\x20\xe4\xba\x9e\xeb\x8a\xa5\xeb\x92\x88\xed\x9b\x80\x20\xea" -"\xb8\x80\xec\x95\xa0\xeb\x93\xb4\x0a\xe2\x93\xa1\xeb\xa0\xa4\xeb" -"\x93\x80\xe4\xb9\x9d\x20\xec\x8b\x80\xed\x92\x94\xec\x88\xb4\xed" -"\x9b\x80\x20\xec\x96\xb4\xeb\xa6\xa8\x20\xe5\x9b\xa0\xe4\xbb\x81" -"\xe5\xb7\x9d\xef\xa6\x81\xe4\xb8\xad\xec\x8b\x81\xe2\x91\xa8\xeb" -"\x93\xa4\xec\x95\x9c\x21\x21\x20\xe3\x89\xaf\xe3\x89\xaf\xeb\x82" -"\xa9\xe2\x99\xa1\x20\xe2\x8c\x92\xe2\x8c\x92\x2a\x0a\x0a"), -'euc_jisx0213': ( -"\x50\x79\x74\x68\x6f\x6e\x20\xa4\xce\xb3\xab\xc8\xaf\xa4\xcf\xa1" -"\xa2\x31\x39\x39\x30\x20\xc7\xaf\xa4\xb4\xa4\xed\xa4\xab\xa4\xe9" -"\xb3\xab\xbb\xcf\xa4\xb5\xa4\xec\xa4\xc6\xa4\xa4\xa4\xde\xa4\xb9" -"\xa1\xa3\x0a\xb3\xab\xc8\xaf\xbc\xd4\xa4\xce\x20\x47\x75\x69\x64" -"\x6f\x20\x76\x61\x6e\x20\x52\x6f\x73\x73\x75\x6d\x20\xa4\xcf\xb6" -"\xb5\xb0\xe9\xcd\xd1\xa4\xce\xa5\xd7\xa5\xed\xa5\xb0\xa5\xe9\xa5" -"\xdf\xa5\xf3\xa5\xb0\xb8\xc0\xb8\xec\xa1\xd6\x41\x42\x43\xa1\xd7" -"\xa4\xce\xb3\xab\xc8\xaf\xa4\xcb\xbb\xb2\xb2\xc3\xa4\xb7\xa4\xc6" -"\xa4\xa4\xa4\xde\xa4\xb7\xa4\xbf\xa4\xac\xa1\xa2\x41\x42\x43\x20" -"\xa4\xcf\xbc\xc2\xcd\xd1\xbe\xe5\xa4\xce\xcc\xdc\xc5\xaa\xa4\xcb" -"\xa4\xcf\xa4\xa2\xa4\xde\xa4\xea\xc5\xac\xa4\xb7\xa4\xc6\xa4\xa4" -"\xa4\xde\xa4\xbb\xa4\xf3\xa4\xc7\xa4\xb7\xa4\xbf\xa1\xa3\x0a\xa4" -"\xb3\xa4\xce\xa4\xbf\xa4\xe1\xa1\xa2\x47\x75\x69\x64\x6f\x20\xa4" -"\xcf\xa4\xe8\xa4\xea\xbc\xc2\xcd\xd1\xc5\xaa\xa4\xca\xa5\xd7\xa5" -"\xed\xa5\xb0\xa5\xe9\xa5\xdf\xa5\xf3\xa5\xb0\xb8\xc0\xb8\xec\xa4" -"\xce\xb3\xab\xc8\xaf\xa4\xf2\xb3\xab\xbb\xcf\xa4\xb7\xa1\xa2\xb1" -"\xd1\xb9\xf1\x20\x42\x42\x53\x20\xca\xfc\xc1\xf7\xa4\xce\xa5\xb3" -"\xa5\xe1\xa5\xc7\xa5\xa3\xc8\xd6\xc1\xc8\xa1\xd6\xa5\xe2\xa5\xf3" -"\xa5\xc6\xa5\xa3\x20\xa5\xd1\xa5\xa4\xa5\xbd\xa5\xf3\xa1\xd7\xa4" -"\xce\xa5\xd5\xa5\xa1\xa5\xf3\xa4\xc7\xa4\xa2\xa4\xeb\x20\x47\x75" -"\x69\x64\x6f\x20\xa4\xcf\xa4\xb3\xa4\xce\xb8\xc0\xb8\xec\xa4\xf2" -"\xa1\xd6\x50\x79\x74\x68\x6f\x6e\xa1\xd7\xa4\xc8\xcc\xbe\xa4\xc5" -"\xa4\xb1\xa4\xde\xa4\xb7\xa4\xbf\xa1\xa3\x0a\xa4\xb3\xa4\xce\xa4" -"\xe8\xa4\xa6\xa4\xca\xc7\xd8\xb7\xca\xa4\xab\xa4\xe9\xc0\xb8\xa4" -"\xde\xa4\xec\xa4\xbf\x20\x50\x79\x74\x68\x6f\x6e\x20\xa4\xce\xb8" -"\xc0\xb8\xec\xc0\xdf\xb7\xd7\xa4\xcf\xa1\xa2\xa1\xd6\xa5\xb7\xa5" -"\xf3\xa5\xd7\xa5\xeb\xa1\xd7\xa4\xc7\xa1\xd6\xbd\xac\xc6\xc0\xa4" -"\xac\xcd\xc6\xb0\xd7\xa1\xd7\xa4\xc8\xa4\xa4\xa4\xa6\xcc\xdc\xc9" -"\xb8\xa4\xcb\xbd\xc5\xc5\xc0\xa4\xac\xc3\xd6\xa4\xab\xa4\xec\xa4" -"\xc6\xa4\xa4\xa4\xde\xa4\xb9\xa1\xa3\x0a\xc2\xbf\xa4\xaf\xa4\xce" -"\xa5\xb9\xa5\xaf\xa5\xea\xa5\xd7\xa5\xc8\xb7\xcf\xb8\xc0\xb8\xec" -"\xa4\xc7\xa4\xcf\xa5\xe6\xa1\xbc\xa5\xb6\xa4\xce\xcc\xdc\xc0\xe8" -"\xa4\xce\xcd\xf8\xca\xd8\xc0\xad\xa4\xf2\xcd\xa5\xc0\xe8\xa4\xb7" 
-"\xa4\xc6\xbf\xa7\xa1\xb9\xa4\xca\xb5\xa1\xc7\xbd\xa4\xf2\xb8\xc0" -"\xb8\xec\xcd\xd7\xc1\xc7\xa4\xc8\xa4\xb7\xa4\xc6\xbc\xe8\xa4\xea" -"\xc6\xfe\xa4\xec\xa4\xeb\xbe\xec\xb9\xe7\xa4\xac\xc2\xbf\xa4\xa4" -"\xa4\xce\xa4\xc7\xa4\xb9\xa4\xac\xa1\xa2\x50\x79\x74\x68\x6f\x6e" -"\x20\xa4\xc7\xa4\xcf\xa4\xbd\xa4\xa6\xa4\xa4\xa4\xc3\xa4\xbf\xbe" -"\xae\xba\xd9\xb9\xa9\xa4\xac\xc4\xc9\xb2\xc3\xa4\xb5\xa4\xec\xa4" -"\xeb\xa4\xb3\xa4\xc8\xa4\xcf\xa4\xa2\xa4\xde\xa4\xea\xa4\xa2\xa4" -"\xea\xa4\xde\xa4\xbb\xa4\xf3\xa1\xa3\x0a\xb8\xc0\xb8\xec\xbc\xab" -"\xc2\xce\xa4\xce\xb5\xa1\xc7\xbd\xa4\xcf\xba\xc7\xbe\xae\xb8\xc2" -"\xa4\xcb\xb2\xa1\xa4\xb5\xa4\xa8\xa1\xa2\xc9\xac\xcd\xd7\xa4\xca" -"\xb5\xa1\xc7\xbd\xa4\xcf\xb3\xc8\xc4\xa5\xa5\xe2\xa5\xb8\xa5\xe5" -"\xa1\xbc\xa5\xeb\xa4\xc8\xa4\xb7\xa4\xc6\xc4\xc9\xb2\xc3\xa4\xb9" -"\xa4\xeb\xa1\xa2\xa4\xc8\xa4\xa4\xa4\xa6\xa4\xce\xa4\xac\x20\x50" -"\x79\x74\x68\x6f\x6e\x20\xa4\xce\xa5\xdd\xa5\xea\xa5\xb7\xa1\xbc" -"\xa4\xc7\xa4\xb9\xa1\xa3\x0a\x0a\xa5\xce\xa4\xf7\x20\xa5\xfe\x20" -"\xa5\xc8\xa5\xad\xaf\xac\xaf\xda\x20\xcf\xe3\x8f\xfe\xd8\x20\x8f" -"\xfe\xd4\x8f\xfe\xe8\x8f\xfc\xd6\x0a", -"\x50\x79\x74\x68\x6f\x6e\x20\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba" -"\xe3\x81\xaf\xe3\x80\x81\x31\x39\x39\x30\x20\xe5\xb9\xb4\xe3\x81" -"\x94\xe3\x82\x8d\xe3\x81\x8b\xe3\x82\x89\xe9\x96\x8b\xe5\xa7\x8b" -"\xe3\x81\x95\xe3\x82\x8c\xe3\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3" -"\x81\x99\xe3\x80\x82\x0a\xe9\x96\x8b\xe7\x99\xba\xe8\x80\x85\xe3" -"\x81\xae\x20\x47\x75\x69\x64\x6f\x20\x76\x61\x6e\x20\x52\x6f\x73" -"\x73\x75\x6d\x20\xe3\x81\xaf\xe6\x95\x99\xe8\x82\xb2\xe7\x94\xa8" -"\xe3\x81\xae\xe3\x83\x97\xe3\x83\xad\xe3\x82\xb0\xe3\x83\xa9\xe3" -"\x83\x9f\xe3\x83\xb3\xe3\x82\xb0\xe8\xa8\x80\xe8\xaa\x9e\xe3\x80" -"\x8c\x41\x42\x43\xe3\x80\x8d\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba" -"\xe3\x81\xab\xe5\x8f\x82\xe5\x8a\xa0\xe3\x81\x97\xe3\x81\xa6\xe3" -"\x81\x84\xe3\x81\xbe\xe3\x81\x97\xe3\x81\x9f\xe3\x81\x8c\xe3\x80" -"\x81\x41\x42\x43\x20\xe3\x81\xaf\xe5\xae\x9f\xe7\x94\xa8\xe4\xb8" -"\x8a\xe3\x81\xae\xe7\x9b\xae\xe7\x9a\x84\xe3\x81\xab\xe3\x81\xaf" -"\xe3\x81\x82\xe3\x81\xbe\xe3\x82\x8a\xe9\x81\xa9\xe3\x81\x97\xe3" -"\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3\x81\x9b\xe3\x82\x93\xe3\x81" -"\xa7\xe3\x81\x97\xe3\x81\x9f\xe3\x80\x82\x0a\xe3\x81\x93\xe3\x81" -"\xae\xe3\x81\x9f\xe3\x82\x81\xe3\x80\x81\x47\x75\x69\x64\x6f\x20" -"\xe3\x81\xaf\xe3\x82\x88\xe3\x82\x8a\xe5\xae\x9f\xe7\x94\xa8\xe7" -"\x9a\x84\xe3\x81\xaa\xe3\x83\x97\xe3\x83\xad\xe3\x82\xb0\xe3\x83" -"\xa9\xe3\x83\x9f\xe3\x83\xb3\xe3\x82\xb0\xe8\xa8\x80\xe8\xaa\x9e" -"\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba\xe3\x82\x92\xe9\x96\x8b\xe5" -"\xa7\x8b\xe3\x81\x97\xe3\x80\x81\xe8\x8b\xb1\xe5\x9b\xbd\x20\x42" -"\x42\x53\x20\xe6\x94\xbe\xe9\x80\x81\xe3\x81\xae\xe3\x82\xb3\xe3" -"\x83\xa1\xe3\x83\x87\xe3\x82\xa3\xe7\x95\xaa\xe7\xb5\x84\xe3\x80" -"\x8c\xe3\x83\xa2\xe3\x83\xb3\xe3\x83\x86\xe3\x82\xa3\x20\xe3\x83" -"\x91\xe3\x82\xa4\xe3\x82\xbd\xe3\x83\xb3\xe3\x80\x8d\xe3\x81\xae" -"\xe3\x83\x95\xe3\x82\xa1\xe3\x83\xb3\xe3\x81\xa7\xe3\x81\x82\xe3" -"\x82\x8b\x20\x47\x75\x69\x64\x6f\x20\xe3\x81\xaf\xe3\x81\x93\xe3" -"\x81\xae\xe8\xa8\x80\xe8\xaa\x9e\xe3\x82\x92\xe3\x80\x8c\x50\x79" -"\x74\x68\x6f\x6e\xe3\x80\x8d\xe3\x81\xa8\xe5\x90\x8d\xe3\x81\xa5" -"\xe3\x81\x91\xe3\x81\xbe\xe3\x81\x97\xe3\x81\x9f\xe3\x80\x82\x0a" -"\xe3\x81\x93\xe3\x81\xae\xe3\x82\x88\xe3\x81\x86\xe3\x81\xaa\xe8" -"\x83\x8c\xe6\x99\xaf\xe3\x81\x8b\xe3\x82\x89\xe7\x94\x9f\xe3\x81" -"\xbe\xe3\x82\x8c\xe3\x81\x9f\x20\x50\x79\x74\x68\x6f\x6e\x20\xe3" 
-"\x81\xae\xe8\xa8\x80\xe8\xaa\x9e\xe8\xa8\xad\xe8\xa8\x88\xe3\x81" -"\xaf\xe3\x80\x81\xe3\x80\x8c\xe3\x82\xb7\xe3\x83\xb3\xe3\x83\x97" -"\xe3\x83\xab\xe3\x80\x8d\xe3\x81\xa7\xe3\x80\x8c\xe7\xbf\x92\xe5" -"\xbe\x97\xe3\x81\x8c\xe5\xae\xb9\xe6\x98\x93\xe3\x80\x8d\xe3\x81" -"\xa8\xe3\x81\x84\xe3\x81\x86\xe7\x9b\xae\xe6\xa8\x99\xe3\x81\xab" -"\xe9\x87\x8d\xe7\x82\xb9\xe3\x81\x8c\xe7\xbd\xae\xe3\x81\x8b\xe3" -"\x82\x8c\xe3\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3\x81\x99\xe3\x80" -"\x82\x0a\xe5\xa4\x9a\xe3\x81\x8f\xe3\x81\xae\xe3\x82\xb9\xe3\x82" -"\xaf\xe3\x83\xaa\xe3\x83\x97\xe3\x83\x88\xe7\xb3\xbb\xe8\xa8\x80" -"\xe8\xaa\x9e\xe3\x81\xa7\xe3\x81\xaf\xe3\x83\xa6\xe3\x83\xbc\xe3" -"\x82\xb6\xe3\x81\xae\xe7\x9b\xae\xe5\x85\x88\xe3\x81\xae\xe5\x88" -"\xa9\xe4\xbe\xbf\xe6\x80\xa7\xe3\x82\x92\xe5\x84\xaa\xe5\x85\x88" -"\xe3\x81\x97\xe3\x81\xa6\xe8\x89\xb2\xe3\x80\x85\xe3\x81\xaa\xe6" -"\xa9\x9f\xe8\x83\xbd\xe3\x82\x92\xe8\xa8\x80\xe8\xaa\x9e\xe8\xa6" -"\x81\xe7\xb4\xa0\xe3\x81\xa8\xe3\x81\x97\xe3\x81\xa6\xe5\x8f\x96" -"\xe3\x82\x8a\xe5\x85\xa5\xe3\x82\x8c\xe3\x82\x8b\xe5\xa0\xb4\xe5" -"\x90\x88\xe3\x81\x8c\xe5\xa4\x9a\xe3\x81\x84\xe3\x81\xae\xe3\x81" -"\xa7\xe3\x81\x99\xe3\x81\x8c\xe3\x80\x81\x50\x79\x74\x68\x6f\x6e" -"\x20\xe3\x81\xa7\xe3\x81\xaf\xe3\x81\x9d\xe3\x81\x86\xe3\x81\x84" -"\xe3\x81\xa3\xe3\x81\x9f\xe5\xb0\x8f\xe7\xb4\xb0\xe5\xb7\xa5\xe3" -"\x81\x8c\xe8\xbf\xbd\xe5\x8a\xa0\xe3\x81\x95\xe3\x82\x8c\xe3\x82" -"\x8b\xe3\x81\x93\xe3\x81\xa8\xe3\x81\xaf\xe3\x81\x82\xe3\x81\xbe" -"\xe3\x82\x8a\xe3\x81\x82\xe3\x82\x8a\xe3\x81\xbe\xe3\x81\x9b\xe3" -"\x82\x93\xe3\x80\x82\x0a\xe8\xa8\x80\xe8\xaa\x9e\xe8\x87\xaa\xe4" -"\xbd\x93\xe3\x81\xae\xe6\xa9\x9f\xe8\x83\xbd\xe3\x81\xaf\xe6\x9c" -"\x80\xe5\xb0\x8f\xe9\x99\x90\xe3\x81\xab\xe6\x8a\xbc\xe3\x81\x95" -"\xe3\x81\x88\xe3\x80\x81\xe5\xbf\x85\xe8\xa6\x81\xe3\x81\xaa\xe6" -"\xa9\x9f\xe8\x83\xbd\xe3\x81\xaf\xe6\x8b\xa1\xe5\xbc\xb5\xe3\x83" -"\xa2\xe3\x82\xb8\xe3\x83\xa5\xe3\x83\xbc\xe3\x83\xab\xe3\x81\xa8" -"\xe3\x81\x97\xe3\x81\xa6\xe8\xbf\xbd\xe5\x8a\xa0\xe3\x81\x99\xe3" -"\x82\x8b\xe3\x80\x81\xe3\x81\xa8\xe3\x81\x84\xe3\x81\x86\xe3\x81" -"\xae\xe3\x81\x8c\x20\x50\x79\x74\x68\x6f\x6e\x20\xe3\x81\xae\xe3" -"\x83\x9d\xe3\x83\xaa\xe3\x82\xb7\xe3\x83\xbc\xe3\x81\xa7\xe3\x81" -"\x99\xe3\x80\x82\x0a\x0a\xe3\x83\x8e\xe3\x81\x8b\xe3\x82\x9a\x20" -"\xe3\x83\x88\xe3\x82\x9a\x20\xe3\x83\x88\xe3\x82\xad\xef\xa8\xb6" -"\xef\xa8\xb9\x20\xf0\xa1\x9a\xb4\xf0\xaa\x8e\x8c\x20\xe9\xba\x80" -"\xe9\xbd\x81\xf0\xa9\x9b\xb0\x0a"), -'euc_jp': ( -"\x50\x79\x74\x68\x6f\x6e\x20\xa4\xce\xb3\xab\xc8\xaf\xa4\xcf\xa1" -"\xa2\x31\x39\x39\x30\x20\xc7\xaf\xa4\xb4\xa4\xed\xa4\xab\xa4\xe9" -"\xb3\xab\xbb\xcf\xa4\xb5\xa4\xec\xa4\xc6\xa4\xa4\xa4\xde\xa4\xb9" -"\xa1\xa3\x0a\xb3\xab\xc8\xaf\xbc\xd4\xa4\xce\x20\x47\x75\x69\x64" -"\x6f\x20\x76\x61\x6e\x20\x52\x6f\x73\x73\x75\x6d\x20\xa4\xcf\xb6" -"\xb5\xb0\xe9\xcd\xd1\xa4\xce\xa5\xd7\xa5\xed\xa5\xb0\xa5\xe9\xa5" -"\xdf\xa5\xf3\xa5\xb0\xb8\xc0\xb8\xec\xa1\xd6\x41\x42\x43\xa1\xd7" -"\xa4\xce\xb3\xab\xc8\xaf\xa4\xcb\xbb\xb2\xb2\xc3\xa4\xb7\xa4\xc6" -"\xa4\xa4\xa4\xde\xa4\xb7\xa4\xbf\xa4\xac\xa1\xa2\x41\x42\x43\x20" -"\xa4\xcf\xbc\xc2\xcd\xd1\xbe\xe5\xa4\xce\xcc\xdc\xc5\xaa\xa4\xcb" -"\xa4\xcf\xa4\xa2\xa4\xde\xa4\xea\xc5\xac\xa4\xb7\xa4\xc6\xa4\xa4" -"\xa4\xde\xa4\xbb\xa4\xf3\xa4\xc7\xa4\xb7\xa4\xbf\xa1\xa3\x0a\xa4" -"\xb3\xa4\xce\xa4\xbf\xa4\xe1\xa1\xa2\x47\x75\x69\x64\x6f\x20\xa4" -"\xcf\xa4\xe8\xa4\xea\xbc\xc2\xcd\xd1\xc5\xaa\xa4\xca\xa5\xd7\xa5" -"\xed\xa5\xb0\xa5\xe9\xa5\xdf\xa5\xf3\xa5\xb0\xb8\xc0\xb8\xec\xa4" 
-"\xce\xb3\xab\xc8\xaf\xa4\xf2\xb3\xab\xbb\xcf\xa4\xb7\xa1\xa2\xb1" -"\xd1\xb9\xf1\x20\x42\x42\x53\x20\xca\xfc\xc1\xf7\xa4\xce\xa5\xb3" -"\xa5\xe1\xa5\xc7\xa5\xa3\xc8\xd6\xc1\xc8\xa1\xd6\xa5\xe2\xa5\xf3" -"\xa5\xc6\xa5\xa3\x20\xa5\xd1\xa5\xa4\xa5\xbd\xa5\xf3\xa1\xd7\xa4" -"\xce\xa5\xd5\xa5\xa1\xa5\xf3\xa4\xc7\xa4\xa2\xa4\xeb\x20\x47\x75" -"\x69\x64\x6f\x20\xa4\xcf\xa4\xb3\xa4\xce\xb8\xc0\xb8\xec\xa4\xf2" -"\xa1\xd6\x50\x79\x74\x68\x6f\x6e\xa1\xd7\xa4\xc8\xcc\xbe\xa4\xc5" -"\xa4\xb1\xa4\xde\xa4\xb7\xa4\xbf\xa1\xa3\x0a\xa4\xb3\xa4\xce\xa4" -"\xe8\xa4\xa6\xa4\xca\xc7\xd8\xb7\xca\xa4\xab\xa4\xe9\xc0\xb8\xa4" -"\xde\xa4\xec\xa4\xbf\x20\x50\x79\x74\x68\x6f\x6e\x20\xa4\xce\xb8" -"\xc0\xb8\xec\xc0\xdf\xb7\xd7\xa4\xcf\xa1\xa2\xa1\xd6\xa5\xb7\xa5" -"\xf3\xa5\xd7\xa5\xeb\xa1\xd7\xa4\xc7\xa1\xd6\xbd\xac\xc6\xc0\xa4" -"\xac\xcd\xc6\xb0\xd7\xa1\xd7\xa4\xc8\xa4\xa4\xa4\xa6\xcc\xdc\xc9" -"\xb8\xa4\xcb\xbd\xc5\xc5\xc0\xa4\xac\xc3\xd6\xa4\xab\xa4\xec\xa4" -"\xc6\xa4\xa4\xa4\xde\xa4\xb9\xa1\xa3\x0a\xc2\xbf\xa4\xaf\xa4\xce" -"\xa5\xb9\xa5\xaf\xa5\xea\xa5\xd7\xa5\xc8\xb7\xcf\xb8\xc0\xb8\xec" -"\xa4\xc7\xa4\xcf\xa5\xe6\xa1\xbc\xa5\xb6\xa4\xce\xcc\xdc\xc0\xe8" -"\xa4\xce\xcd\xf8\xca\xd8\xc0\xad\xa4\xf2\xcd\xa5\xc0\xe8\xa4\xb7" -"\xa4\xc6\xbf\xa7\xa1\xb9\xa4\xca\xb5\xa1\xc7\xbd\xa4\xf2\xb8\xc0" -"\xb8\xec\xcd\xd7\xc1\xc7\xa4\xc8\xa4\xb7\xa4\xc6\xbc\xe8\xa4\xea" -"\xc6\xfe\xa4\xec\xa4\xeb\xbe\xec\xb9\xe7\xa4\xac\xc2\xbf\xa4\xa4" -"\xa4\xce\xa4\xc7\xa4\xb9\xa4\xac\xa1\xa2\x50\x79\x74\x68\x6f\x6e" -"\x20\xa4\xc7\xa4\xcf\xa4\xbd\xa4\xa6\xa4\xa4\xa4\xc3\xa4\xbf\xbe" -"\xae\xba\xd9\xb9\xa9\xa4\xac\xc4\xc9\xb2\xc3\xa4\xb5\xa4\xec\xa4" -"\xeb\xa4\xb3\xa4\xc8\xa4\xcf\xa4\xa2\xa4\xde\xa4\xea\xa4\xa2\xa4" -"\xea\xa4\xde\xa4\xbb\xa4\xf3\xa1\xa3\x0a\xb8\xc0\xb8\xec\xbc\xab" -"\xc2\xce\xa4\xce\xb5\xa1\xc7\xbd\xa4\xcf\xba\xc7\xbe\xae\xb8\xc2" -"\xa4\xcb\xb2\xa1\xa4\xb5\xa4\xa8\xa1\xa2\xc9\xac\xcd\xd7\xa4\xca" -"\xb5\xa1\xc7\xbd\xa4\xcf\xb3\xc8\xc4\xa5\xa5\xe2\xa5\xb8\xa5\xe5" -"\xa1\xbc\xa5\xeb\xa4\xc8\xa4\xb7\xa4\xc6\xc4\xc9\xb2\xc3\xa4\xb9" -"\xa4\xeb\xa1\xa2\xa4\xc8\xa4\xa4\xa4\xa6\xa4\xce\xa4\xac\x20\x50" -"\x79\x74\x68\x6f\x6e\x20\xa4\xce\xa5\xdd\xa5\xea\xa5\xb7\xa1\xbc" -"\xa4\xc7\xa4\xb9\xa1\xa3\x0a\x0a", -"\x50\x79\x74\x68\x6f\x6e\x20\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba" -"\xe3\x81\xaf\xe3\x80\x81\x31\x39\x39\x30\x20\xe5\xb9\xb4\xe3\x81" -"\x94\xe3\x82\x8d\xe3\x81\x8b\xe3\x82\x89\xe9\x96\x8b\xe5\xa7\x8b" -"\xe3\x81\x95\xe3\x82\x8c\xe3\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3" -"\x81\x99\xe3\x80\x82\x0a\xe9\x96\x8b\xe7\x99\xba\xe8\x80\x85\xe3" -"\x81\xae\x20\x47\x75\x69\x64\x6f\x20\x76\x61\x6e\x20\x52\x6f\x73" -"\x73\x75\x6d\x20\xe3\x81\xaf\xe6\x95\x99\xe8\x82\xb2\xe7\x94\xa8" -"\xe3\x81\xae\xe3\x83\x97\xe3\x83\xad\xe3\x82\xb0\xe3\x83\xa9\xe3" -"\x83\x9f\xe3\x83\xb3\xe3\x82\xb0\xe8\xa8\x80\xe8\xaa\x9e\xe3\x80" -"\x8c\x41\x42\x43\xe3\x80\x8d\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba" -"\xe3\x81\xab\xe5\x8f\x82\xe5\x8a\xa0\xe3\x81\x97\xe3\x81\xa6\xe3" -"\x81\x84\xe3\x81\xbe\xe3\x81\x97\xe3\x81\x9f\xe3\x81\x8c\xe3\x80" -"\x81\x41\x42\x43\x20\xe3\x81\xaf\xe5\xae\x9f\xe7\x94\xa8\xe4\xb8" -"\x8a\xe3\x81\xae\xe7\x9b\xae\xe7\x9a\x84\xe3\x81\xab\xe3\x81\xaf" -"\xe3\x81\x82\xe3\x81\xbe\xe3\x82\x8a\xe9\x81\xa9\xe3\x81\x97\xe3" -"\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3\x81\x9b\xe3\x82\x93\xe3\x81" -"\xa7\xe3\x81\x97\xe3\x81\x9f\xe3\x80\x82\x0a\xe3\x81\x93\xe3\x81" -"\xae\xe3\x81\x9f\xe3\x82\x81\xe3\x80\x81\x47\x75\x69\x64\x6f\x20" -"\xe3\x81\xaf\xe3\x82\x88\xe3\x82\x8a\xe5\xae\x9f\xe7\x94\xa8\xe7" 
-"\x9a\x84\xe3\x81\xaa\xe3\x83\x97\xe3\x83\xad\xe3\x82\xb0\xe3\x83" -"\xa9\xe3\x83\x9f\xe3\x83\xb3\xe3\x82\xb0\xe8\xa8\x80\xe8\xaa\x9e" -"\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba\xe3\x82\x92\xe9\x96\x8b\xe5" -"\xa7\x8b\xe3\x81\x97\xe3\x80\x81\xe8\x8b\xb1\xe5\x9b\xbd\x20\x42" -"\x42\x53\x20\xe6\x94\xbe\xe9\x80\x81\xe3\x81\xae\xe3\x82\xb3\xe3" -"\x83\xa1\xe3\x83\x87\xe3\x82\xa3\xe7\x95\xaa\xe7\xb5\x84\xe3\x80" -"\x8c\xe3\x83\xa2\xe3\x83\xb3\xe3\x83\x86\xe3\x82\xa3\x20\xe3\x83" -"\x91\xe3\x82\xa4\xe3\x82\xbd\xe3\x83\xb3\xe3\x80\x8d\xe3\x81\xae" -"\xe3\x83\x95\xe3\x82\xa1\xe3\x83\xb3\xe3\x81\xa7\xe3\x81\x82\xe3" -"\x82\x8b\x20\x47\x75\x69\x64\x6f\x20\xe3\x81\xaf\xe3\x81\x93\xe3" -"\x81\xae\xe8\xa8\x80\xe8\xaa\x9e\xe3\x82\x92\xe3\x80\x8c\x50\x79" -"\x74\x68\x6f\x6e\xe3\x80\x8d\xe3\x81\xa8\xe5\x90\x8d\xe3\x81\xa5" -"\xe3\x81\x91\xe3\x81\xbe\xe3\x81\x97\xe3\x81\x9f\xe3\x80\x82\x0a" -"\xe3\x81\x93\xe3\x81\xae\xe3\x82\x88\xe3\x81\x86\xe3\x81\xaa\xe8" -"\x83\x8c\xe6\x99\xaf\xe3\x81\x8b\xe3\x82\x89\xe7\x94\x9f\xe3\x81" -"\xbe\xe3\x82\x8c\xe3\x81\x9f\x20\x50\x79\x74\x68\x6f\x6e\x20\xe3" -"\x81\xae\xe8\xa8\x80\xe8\xaa\x9e\xe8\xa8\xad\xe8\xa8\x88\xe3\x81" -"\xaf\xe3\x80\x81\xe3\x80\x8c\xe3\x82\xb7\xe3\x83\xb3\xe3\x83\x97" -"\xe3\x83\xab\xe3\x80\x8d\xe3\x81\xa7\xe3\x80\x8c\xe7\xbf\x92\xe5" -"\xbe\x97\xe3\x81\x8c\xe5\xae\xb9\xe6\x98\x93\xe3\x80\x8d\xe3\x81" -"\xa8\xe3\x81\x84\xe3\x81\x86\xe7\x9b\xae\xe6\xa8\x99\xe3\x81\xab" -"\xe9\x87\x8d\xe7\x82\xb9\xe3\x81\x8c\xe7\xbd\xae\xe3\x81\x8b\xe3" -"\x82\x8c\xe3\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3\x81\x99\xe3\x80" -"\x82\x0a\xe5\xa4\x9a\xe3\x81\x8f\xe3\x81\xae\xe3\x82\xb9\xe3\x82" -"\xaf\xe3\x83\xaa\xe3\x83\x97\xe3\x83\x88\xe7\xb3\xbb\xe8\xa8\x80" -"\xe8\xaa\x9e\xe3\x81\xa7\xe3\x81\xaf\xe3\x83\xa6\xe3\x83\xbc\xe3" -"\x82\xb6\xe3\x81\xae\xe7\x9b\xae\xe5\x85\x88\xe3\x81\xae\xe5\x88" -"\xa9\xe4\xbe\xbf\xe6\x80\xa7\xe3\x82\x92\xe5\x84\xaa\xe5\x85\x88" -"\xe3\x81\x97\xe3\x81\xa6\xe8\x89\xb2\xe3\x80\x85\xe3\x81\xaa\xe6" -"\xa9\x9f\xe8\x83\xbd\xe3\x82\x92\xe8\xa8\x80\xe8\xaa\x9e\xe8\xa6" -"\x81\xe7\xb4\xa0\xe3\x81\xa8\xe3\x81\x97\xe3\x81\xa6\xe5\x8f\x96" -"\xe3\x82\x8a\xe5\x85\xa5\xe3\x82\x8c\xe3\x82\x8b\xe5\xa0\xb4\xe5" -"\x90\x88\xe3\x81\x8c\xe5\xa4\x9a\xe3\x81\x84\xe3\x81\xae\xe3\x81" -"\xa7\xe3\x81\x99\xe3\x81\x8c\xe3\x80\x81\x50\x79\x74\x68\x6f\x6e" -"\x20\xe3\x81\xa7\xe3\x81\xaf\xe3\x81\x9d\xe3\x81\x86\xe3\x81\x84" -"\xe3\x81\xa3\xe3\x81\x9f\xe5\xb0\x8f\xe7\xb4\xb0\xe5\xb7\xa5\xe3" -"\x81\x8c\xe8\xbf\xbd\xe5\x8a\xa0\xe3\x81\x95\xe3\x82\x8c\xe3\x82" -"\x8b\xe3\x81\x93\xe3\x81\xa8\xe3\x81\xaf\xe3\x81\x82\xe3\x81\xbe" -"\xe3\x82\x8a\xe3\x81\x82\xe3\x82\x8a\xe3\x81\xbe\xe3\x81\x9b\xe3" -"\x82\x93\xe3\x80\x82\x0a\xe8\xa8\x80\xe8\xaa\x9e\xe8\x87\xaa\xe4" -"\xbd\x93\xe3\x81\xae\xe6\xa9\x9f\xe8\x83\xbd\xe3\x81\xaf\xe6\x9c" -"\x80\xe5\xb0\x8f\xe9\x99\x90\xe3\x81\xab\xe6\x8a\xbc\xe3\x81\x95" -"\xe3\x81\x88\xe3\x80\x81\xe5\xbf\x85\xe8\xa6\x81\xe3\x81\xaa\xe6" -"\xa9\x9f\xe8\x83\xbd\xe3\x81\xaf\xe6\x8b\xa1\xe5\xbc\xb5\xe3\x83" -"\xa2\xe3\x82\xb8\xe3\x83\xa5\xe3\x83\xbc\xe3\x83\xab\xe3\x81\xa8" -"\xe3\x81\x97\xe3\x81\xa6\xe8\xbf\xbd\xe5\x8a\xa0\xe3\x81\x99\xe3" -"\x82\x8b\xe3\x80\x81\xe3\x81\xa8\xe3\x81\x84\xe3\x81\x86\xe3\x81" -"\xae\xe3\x81\x8c\x20\x50\x79\x74\x68\x6f\x6e\x20\xe3\x81\xae\xe3" -"\x83\x9d\xe3\x83\xaa\xe3\x82\xb7\xe3\x83\xbc\xe3\x81\xa7\xe3\x81" -"\x99\xe3\x80\x82\x0a\x0a"), -'euc_kr': ( -"\xa1\xdd\x20\xc6\xc4\xc0\xcc\xbd\xe3\x28\x50\x79\x74\x68\x6f\x6e" -"\x29\xc0\xba\x20\xb9\xe8\xbf\xec\xb1\xe2\x20\xbd\xb1\xb0\xed\x2c" 
-"\x20\xb0\xad\xb7\xc2\xc7\xd1\x20\xc7\xc1\xb7\xce\xb1\xd7\xb7\xa1" -"\xb9\xd6\x20\xbe\xf0\xbe\xee\xc0\xd4\xb4\xcf\xb4\xd9\x2e\x20\xc6" -"\xc4\xc0\xcc\xbd\xe3\xc0\xba\x0a\xc8\xbf\xc0\xb2\xc0\xfb\xc0\xce" -"\x20\xb0\xed\xbc\xf6\xc1\xd8\x20\xb5\xa5\xc0\xcc\xc5\xcd\x20\xb1" -"\xb8\xc1\xb6\xbf\xcd\x20\xb0\xa3\xb4\xdc\xc7\xcf\xc1\xf6\xb8\xb8" -"\x20\xc8\xbf\xc0\xb2\xc0\xfb\xc0\xce\x20\xb0\xb4\xc3\xbc\xc1\xf6" -"\xc7\xe2\xc7\xc1\xb7\xce\xb1\xd7\xb7\xa1\xb9\xd6\xc0\xbb\x0a\xc1" -"\xf6\xbf\xf8\xc7\xd5\xb4\xcf\xb4\xd9\x2e\x20\xc6\xc4\xc0\xcc\xbd" -"\xe3\xc0\xc7\x20\xbf\xec\xbe\xc6\x28\xe9\xd0\xe4\xba\x29\xc7\xd1" -"\x20\xb9\xae\xb9\xfd\xb0\xfa\x20\xb5\xbf\xc0\xfb\x20\xc5\xb8\xc0" -"\xcc\xc7\xce\x2c\x20\xb1\xd7\xb8\xae\xb0\xed\x20\xc0\xce\xc5\xcd" -"\xc7\xc1\xb8\xae\xc6\xc3\x0a\xc8\xaf\xb0\xe6\xc0\xba\x20\xc6\xc4" -"\xc0\xcc\xbd\xe3\xc0\xbb\x20\xbd\xba\xc5\xa9\xb8\xb3\xc6\xc3\xb0" -"\xfa\x20\xbf\xa9\xb7\xaf\x20\xba\xd0\xbe\xdf\xbf\xa1\xbc\xad\xbf" -"\xcd\x20\xb4\xeb\xba\xce\xba\xd0\xc0\xc7\x20\xc7\xc3\xb7\xa7\xc6" -"\xfb\xbf\xa1\xbc\xad\xc0\xc7\x20\xba\xfc\xb8\xa5\x0a\xbe\xd6\xc7" -"\xc3\xb8\xae\xc4\xc9\xc0\xcc\xbc\xc7\x20\xb0\xb3\xb9\xdf\xc0\xbb" -"\x20\xc7\xd2\x20\xbc\xf6\x20\xc0\xd6\xb4\xc2\x20\xc0\xcc\xbb\xf3" -"\xc0\xfb\xc0\xce\x20\xbe\xf0\xbe\xee\xb7\xce\x20\xb8\xb8\xb5\xe9" -"\xbe\xee\xc1\xdd\xb4\xcf\xb4\xd9\x2e\x0a\x0a\xa1\xd9\xc3\xb9\xb0" -"\xa1\xb3\xa1\x3a\x20\xb3\xaf\xbe\xc6\xb6\xf3\x20\xa4\xd4\xa4\xb6" -"\xa4\xd0\xa4\xd4\xa4\xd4\xa4\xb6\xa4\xd0\xa4\xd4\xbe\xb1\x7e\x20" -"\xa4\xd4\xa4\xa4\xa4\xd2\xa4\xb7\xc5\xad\x21\x20\xa4\xd4\xa4\xa8" -"\xa4\xd1\xa4\xb7\xb1\xdd\xbe\xf8\xc0\xcc\x20\xc0\xfc\xa4\xd4\xa4" -"\xbe\xa4\xc8\xa4\xb2\xb4\xcf\xb4\xd9\x2e\x20\xa4\xd4\xa4\xb2\xa4" -"\xce\xa4\xaa\x2e\x20\xb1\xd7\xb7\xb1\xb0\xc5\x20\xa4\xd4\xa4\xb7" -"\xa4\xd1\xa4\xb4\xb4\xd9\x2e\x0a", -"\xe2\x97\x8e\x20\xed\x8c\x8c\xec\x9d\xb4\xec\x8d\xac\x28\x50\x79" -"\x74\x68\x6f\x6e\x29\xec\x9d\x80\x20\xeb\xb0\xb0\xec\x9a\xb0\xea" -"\xb8\xb0\x20\xec\x89\xbd\xea\xb3\xa0\x2c\x20\xea\xb0\x95\xeb\xa0" -"\xa5\xed\x95\x9c\x20\xed\x94\x84\xeb\xa1\x9c\xea\xb7\xb8\xeb\x9e" -"\x98\xeb\xb0\x8d\x20\xec\x96\xb8\xec\x96\xb4\xec\x9e\x85\xeb\x8b" -"\x88\xeb\x8b\xa4\x2e\x20\xed\x8c\x8c\xec\x9d\xb4\xec\x8d\xac\xec" -"\x9d\x80\x0a\xed\x9a\xa8\xec\x9c\xa8\xec\xa0\x81\xec\x9d\xb8\x20" -"\xea\xb3\xa0\xec\x88\x98\xec\xa4\x80\x20\xeb\x8d\xb0\xec\x9d\xb4" -"\xed\x84\xb0\x20\xea\xb5\xac\xec\xa1\xb0\xec\x99\x80\x20\xea\xb0" -"\x84\xeb\x8b\xa8\xed\x95\x98\xec\xa7\x80\xeb\xa7\x8c\x20\xed\x9a" -"\xa8\xec\x9c\xa8\xec\xa0\x81\xec\x9d\xb8\x20\xea\xb0\x9d\xec\xb2" -"\xb4\xec\xa7\x80\xed\x96\xa5\xed\x94\x84\xeb\xa1\x9c\xea\xb7\xb8" -"\xeb\x9e\x98\xeb\xb0\x8d\xec\x9d\x84\x0a\xec\xa7\x80\xec\x9b\x90" -"\xed\x95\xa9\xeb\x8b\x88\xeb\x8b\xa4\x2e\x20\xed\x8c\x8c\xec\x9d" -"\xb4\xec\x8d\xac\xec\x9d\x98\x20\xec\x9a\xb0\xec\x95\x84\x28\xe5" -"\x84\xaa\xe9\x9b\x85\x29\xed\x95\x9c\x20\xeb\xac\xb8\xeb\xb2\x95" -"\xea\xb3\xbc\x20\xeb\x8f\x99\xec\xa0\x81\x20\xed\x83\x80\xec\x9d" -"\xb4\xed\x95\x91\x2c\x20\xea\xb7\xb8\xeb\xa6\xac\xea\xb3\xa0\x20" -"\xec\x9d\xb8\xed\x84\xb0\xed\x94\x84\xeb\xa6\xac\xed\x8c\x85\x0a" -"\xed\x99\x98\xea\xb2\xbd\xec\x9d\x80\x20\xed\x8c\x8c\xec\x9d\xb4" -"\xec\x8d\xac\xec\x9d\x84\x20\xec\x8a\xa4\xed\x81\xac\xeb\xa6\xbd" -"\xed\x8c\x85\xea\xb3\xbc\x20\xec\x97\xac\xeb\x9f\xac\x20\xeb\xb6" -"\x84\xec\x95\xbc\xec\x97\x90\xec\x84\x9c\xec\x99\x80\x20\xeb\x8c" -"\x80\xeb\xb6\x80\xeb\xb6\x84\xec\x9d\x98\x20\xed\x94\x8c\xeb\x9e" -"\xab\xed\x8f\xbc\xec\x97\x90\xec\x84\x9c\xec\x9d\x98\x20\xeb\xb9" 
-"\xa0\xeb\xa5\xb8\x0a\xec\x95\xa0\xed\x94\x8c\xeb\xa6\xac\xec\xbc" -"\x80\xec\x9d\xb4\xec\x85\x98\x20\xea\xb0\x9c\xeb\xb0\x9c\xec\x9d" -"\x84\x20\xed\x95\xa0\x20\xec\x88\x98\x20\xec\x9e\x88\xeb\x8a\x94" -"\x20\xec\x9d\xb4\xec\x83\x81\xec\xa0\x81\xec\x9d\xb8\x20\xec\x96" -"\xb8\xec\x96\xb4\xeb\xa1\x9c\x20\xeb\xa7\x8c\xeb\x93\xa4\xec\x96" -"\xb4\xec\xa4\x8d\xeb\x8b\x88\xeb\x8b\xa4\x2e\x0a\x0a\xe2\x98\x86" -"\xec\xb2\xab\xea\xb0\x80\xeb\x81\x9d\x3a\x20\xeb\x82\xa0\xec\x95" -"\x84\xeb\x9d\xbc\x20\xec\x93\x94\xec\x93\x94\xec\x93\xa9\x7e\x20" -"\xeb\x8b\x81\xed\x81\xbc\x21\x20\xeb\x9c\xbd\xea\xb8\x88\xec\x97" -"\x86\xec\x9d\xb4\x20\xec\xa0\x84\xed\x99\xa5\xeb\x8b\x88\xeb\x8b" -"\xa4\x2e\x20\xeb\xb7\x81\x2e\x20\xea\xb7\xb8\xeb\x9f\xb0\xea\xb1" -"\xb0\x20\xec\x9d\x8e\xeb\x8b\xa4\x2e\x0a"), -'gb18030': ( -"\x50\x79\x74\x68\x6f\x6e\xa3\xa8\xc5\xc9\xc9\xad\xa3\xa9\xd3\xef" -"\xd1\xd4\xca\xc7\xd2\xbb\xd6\xd6\xb9\xa6\xc4\xdc\xc7\xbf\xb4\xf3" -"\xb6\xf8\xcd\xea\xc9\xc6\xb5\xc4\xcd\xa8\xd3\xc3\xd0\xcd\xbc\xc6" -"\xcb\xe3\xbb\xfa\xb3\xcc\xd0\xf2\xc9\xe8\xbc\xc6\xd3\xef\xd1\xd4" -"\xa3\xac\x0a\xd2\xd1\xbe\xad\xbe\xdf\xd3\xd0\xca\xae\xb6\xe0\xc4" -"\xea\xb5\xc4\xb7\xa2\xd5\xb9\xc0\xfa\xca\xb7\xa3\xac\xb3\xc9\xca" -"\xec\xc7\xd2\xce\xc8\xb6\xa8\xa1\xa3\xd5\xe2\xd6\xd6\xd3\xef\xd1" -"\xd4\xbe\xdf\xd3\xd0\xb7\xc7\xb3\xa3\xbc\xf2\xbd\xdd\xb6\xf8\xc7" -"\xe5\xce\xfa\x0a\xb5\xc4\xd3\xef\xb7\xa8\xcc\xd8\xb5\xe3\xa3\xac" -"\xca\xca\xba\xcf\xcd\xea\xb3\xc9\xb8\xf7\xd6\xd6\xb8\xdf\xb2\xe3" -"\xc8\xce\xce\xf1\xa3\xac\xbc\xb8\xba\xf5\xbf\xc9\xd2\xd4\xd4\xda" -"\xcb\xf9\xd3\xd0\xb5\xc4\xb2\xd9\xd7\xf7\xcf\xb5\xcd\xb3\xd6\xd0" -"\x0a\xd4\xcb\xd0\xd0\xa1\xa3\xd5\xe2\xd6\xd6\xd3\xef\xd1\xd4\xbc" -"\xf2\xb5\xa5\xb6\xf8\xc7\xbf\xb4\xf3\xa3\xac\xca\xca\xba\xcf\xb8" -"\xf7\xd6\xd6\xc8\xcb\xca\xbf\xd1\xa7\xcf\xb0\xca\xb9\xd3\xc3\xa1" -"\xa3\xc4\xbf\xc7\xb0\xa3\xac\xbb\xf9\xd3\xda\xd5\xe2\x0a\xd6\xd6" -"\xd3\xef\xd1\xd4\xb5\xc4\xcf\xe0\xb9\xd8\xbc\xbc\xca\xf5\xd5\xfd" -"\xd4\xda\xb7\xc9\xcb\xd9\xb5\xc4\xb7\xa2\xd5\xb9\xa3\xac\xd3\xc3" -"\xbb\xa7\xca\xfd\xc1\xbf\xbc\xb1\xbe\xe7\xc0\xa9\xb4\xf3\xa3\xac" -"\xcf\xe0\xb9\xd8\xb5\xc4\xd7\xca\xd4\xb4\xb7\xc7\xb3\xa3\xb6\xe0" -"\xa1\xa3\x0a\xc8\xe7\xba\xce\xd4\xda\x20\x50\x79\x74\x68\x6f\x6e" -"\x20\xd6\xd0\xca\xb9\xd3\xc3\xbc\xc8\xd3\xd0\xb5\xc4\x20\x43\x20" -"\x6c\x69\x62\x72\x61\x72\x79\x3f\x0a\xa1\xa1\xd4\xda\xd9\x59\xd3" -"\x8d\xbf\xc6\xbc\xbc\xbf\xec\xcb\xd9\xb0\x6c\xd5\xb9\xb5\xc4\xbd" -"\xf1\xcc\xec\x2c\x20\xe9\x5f\xb0\x6c\xbc\xb0\x9c\x79\xd4\x87\xdc" -"\x9b\xf3\x77\xb5\xc4\xcb\xd9\xb6\xc8\xca\xc7\xb2\xbb\xc8\xdd\xba" -"\xf6\xd2\x95\xb5\xc4\x0a\xd5\x6e\xee\x7d\x2e\x20\x9e\xe9\xbc\xd3" -"\xbf\xec\xe9\x5f\xb0\x6c\xbc\xb0\x9c\x79\xd4\x87\xb5\xc4\xcb\xd9" -"\xb6\xc8\x2c\x20\xce\xd2\x82\x83\xb1\xe3\xb3\xa3\xcf\xa3\xcd\xfb" -"\xc4\xdc\xc0\xfb\xd3\xc3\xd2\xbb\xd0\xa9\xd2\xd1\xe9\x5f\xb0\x6c" -"\xba\xc3\xb5\xc4\x0a\x6c\x69\x62\x72\x61\x72\x79\x2c\x20\x81\x4b" -"\xd3\xd0\xd2\xbb\x82\x80\x20\x66\x61\x73\x74\x20\x70\x72\x6f\x74" -"\x6f\x74\x79\x70\x69\x6e\x67\x20\xb5\xc4\x20\x70\x72\x6f\x67\x72" -"\x61\x6d\x6d\x69\x6e\x67\x20\x6c\x61\x6e\x67\x75\x61\x67\x65\x20" -"\xbf\xc9\x0a\xb9\xa9\xca\xb9\xd3\xc3\x2e\x20\xc4\xbf\xc7\xb0\xd3" -"\xd0\xd4\x53\xd4\x53\xb6\xe0\xb6\xe0\xb5\xc4\x20\x6c\x69\x62\x72" -"\x61\x72\x79\x20\xca\xc7\xd2\xd4\x20\x43\x20\x8c\x91\xb3\xc9\x2c" -"\x20\xb6\xf8\x20\x50\x79\x74\x68\x6f\x6e\x20\xca\xc7\xd2\xbb\x82" -"\x80\x0a\x66\x61\x73\x74\x20\x70\x72\x6f\x74\x6f\x74\x79\x70\x69" -"\x6e\x67\x20\xb5\xc4\x20\x70\x72\x6f\x67\x72\x61\x6d\x6d\x69\x6e" 
-"\x67\x20\x6c\x61\x6e\x67\x75\x61\x67\x65\x2e\x20\xb9\xca\xce\xd2" -"\x82\x83\xcf\xa3\xcd\xfb\xc4\xdc\x8c\xa2\xbc\xc8\xd3\xd0\xb5\xc4" -"\x0a\x43\x20\x6c\x69\x62\x72\x61\x72\x79\x20\xc4\xc3\xb5\xbd\x20" -"\x50\x79\x74\x68\x6f\x6e\x20\xb5\xc4\xad\x68\xbe\xb3\xd6\xd0\x9c" -"\x79\xd4\x87\xbc\xb0\xd5\xfb\xba\xcf\x2e\x20\xc6\xe4\xd6\xd0\xd7" -"\xee\xd6\xf7\xd2\xaa\xd2\xb2\xca\xc7\xce\xd2\x82\x83\xcb\xf9\x0a" -"\xd2\xaa\xd3\x91\xd5\x93\xb5\xc4\x86\x96\xee\x7d\xbe\xcd\xca\xc7" -"\x3a\x0a\x83\x35\xc7\x31\x83\x33\x9a\x33\x83\x32\xb1\x31\x83\x33" -"\x95\x31\x20\x82\x37\xd1\x36\x83\x30\x8c\x34\x83\x36\x84\x33\x20" -"\x82\x38\x89\x35\x82\x38\xfb\x36\x83\x33\x95\x35\x20\x83\x33\xd5" -"\x31\x82\x39\x81\x35\x20\x83\x30\xfd\x39\x83\x33\x86\x30\x20\x83" -"\x34\xdc\x33\x83\x35\xf6\x37\x83\x35\x97\x35\x20\x83\x35\xf9\x35" -"\x83\x30\x91\x39\x82\x38\x83\x39\x82\x39\xfc\x33\x83\x30\xf0\x34" -"\x20\x83\x32\xeb\x39\x83\x32\xeb\x35\x82\x39\x83\x39\x2e\x0a\x0a", -"\x50\x79\x74\x68\x6f\x6e\xef\xbc\x88\xe6\xb4\xbe\xe6\xa3\xae\xef" -"\xbc\x89\xe8\xaf\xad\xe8\xa8\x80\xe6\x98\xaf\xe4\xb8\x80\xe7\xa7" -"\x8d\xe5\x8a\x9f\xe8\x83\xbd\xe5\xbc\xba\xe5\xa4\xa7\xe8\x80\x8c" -"\xe5\xae\x8c\xe5\x96\x84\xe7\x9a\x84\xe9\x80\x9a\xe7\x94\xa8\xe5" -"\x9e\x8b\xe8\xae\xa1\xe7\xae\x97\xe6\x9c\xba\xe7\xa8\x8b\xe5\xba" -"\x8f\xe8\xae\xbe\xe8\xae\xa1\xe8\xaf\xad\xe8\xa8\x80\xef\xbc\x8c" -"\x0a\xe5\xb7\xb2\xe7\xbb\x8f\xe5\x85\xb7\xe6\x9c\x89\xe5\x8d\x81" -"\xe5\xa4\x9a\xe5\xb9\xb4\xe7\x9a\x84\xe5\x8f\x91\xe5\xb1\x95\xe5" -"\x8e\x86\xe5\x8f\xb2\xef\xbc\x8c\xe6\x88\x90\xe7\x86\x9f\xe4\xb8" -"\x94\xe7\xa8\xb3\xe5\xae\x9a\xe3\x80\x82\xe8\xbf\x99\xe7\xa7\x8d" -"\xe8\xaf\xad\xe8\xa8\x80\xe5\x85\xb7\xe6\x9c\x89\xe9\x9d\x9e\xe5" -"\xb8\xb8\xe7\xae\x80\xe6\x8d\xb7\xe8\x80\x8c\xe6\xb8\x85\xe6\x99" -"\xb0\x0a\xe7\x9a\x84\xe8\xaf\xad\xe6\xb3\x95\xe7\x89\xb9\xe7\x82" -"\xb9\xef\xbc\x8c\xe9\x80\x82\xe5\x90\x88\xe5\xae\x8c\xe6\x88\x90" -"\xe5\x90\x84\xe7\xa7\x8d\xe9\xab\x98\xe5\xb1\x82\xe4\xbb\xbb\xe5" -"\x8a\xa1\xef\xbc\x8c\xe5\x87\xa0\xe4\xb9\x8e\xe5\x8f\xaf\xe4\xbb" -"\xa5\xe5\x9c\xa8\xe6\x89\x80\xe6\x9c\x89\xe7\x9a\x84\xe6\x93\x8d" -"\xe4\xbd\x9c\xe7\xb3\xbb\xe7\xbb\x9f\xe4\xb8\xad\x0a\xe8\xbf\x90" -"\xe8\xa1\x8c\xe3\x80\x82\xe8\xbf\x99\xe7\xa7\x8d\xe8\xaf\xad\xe8" -"\xa8\x80\xe7\xae\x80\xe5\x8d\x95\xe8\x80\x8c\xe5\xbc\xba\xe5\xa4" -"\xa7\xef\xbc\x8c\xe9\x80\x82\xe5\x90\x88\xe5\x90\x84\xe7\xa7\x8d" -"\xe4\xba\xba\xe5\xa3\xab\xe5\xad\xa6\xe4\xb9\xa0\xe4\xbd\xbf\xe7" -"\x94\xa8\xe3\x80\x82\xe7\x9b\xae\xe5\x89\x8d\xef\xbc\x8c\xe5\x9f" -"\xba\xe4\xba\x8e\xe8\xbf\x99\x0a\xe7\xa7\x8d\xe8\xaf\xad\xe8\xa8" -"\x80\xe7\x9a\x84\xe7\x9b\xb8\xe5\x85\xb3\xe6\x8a\x80\xe6\x9c\xaf" -"\xe6\xad\xa3\xe5\x9c\xa8\xe9\xa3\x9e\xe9\x80\x9f\xe7\x9a\x84\xe5" -"\x8f\x91\xe5\xb1\x95\xef\xbc\x8c\xe7\x94\xa8\xe6\x88\xb7\xe6\x95" -"\xb0\xe9\x87\x8f\xe6\x80\xa5\xe5\x89\xa7\xe6\x89\xa9\xe5\xa4\xa7" -"\xef\xbc\x8c\xe7\x9b\xb8\xe5\x85\xb3\xe7\x9a\x84\xe8\xb5\x84\xe6" -"\xba\x90\xe9\x9d\x9e\xe5\xb8\xb8\xe5\xa4\x9a\xe3\x80\x82\x0a\xe5" -"\xa6\x82\xe4\xbd\x95\xe5\x9c\xa8\x20\x50\x79\x74\x68\x6f\x6e\x20" -"\xe4\xb8\xad\xe4\xbd\xbf\xe7\x94\xa8\xe6\x97\xa2\xe6\x9c\x89\xe7" -"\x9a\x84\x20\x43\x20\x6c\x69\x62\x72\x61\x72\x79\x3f\x0a\xe3\x80" -"\x80\xe5\x9c\xa8\xe8\xb3\x87\xe8\xa8\x8a\xe7\xa7\x91\xe6\x8a\x80" -"\xe5\xbf\xab\xe9\x80\x9f\xe7\x99\xbc\xe5\xb1\x95\xe7\x9a\x84\xe4" -"\xbb\x8a\xe5\xa4\xa9\x2c\x20\xe9\x96\x8b\xe7\x99\xbc\xe5\x8f\x8a" -"\xe6\xb8\xac\xe8\xa9\xa6\xe8\xbb\x9f\xe9\xab\x94\xe7\x9a\x84\xe9" -"\x80\x9f\xe5\xba\xa6\xe6\x98\xaf\xe4\xb8\x8d\xe5\xae\xb9\xe5\xbf" 
-"\xbd\xe8\xa6\x96\xe7\x9a\x84\x0a\xe8\xaa\xb2\xe9\xa1\x8c\x2e\x20" -"\xe7\x82\xba\xe5\x8a\xa0\xe5\xbf\xab\xe9\x96\x8b\xe7\x99\xbc\xe5" -"\x8f\x8a\xe6\xb8\xac\xe8\xa9\xa6\xe7\x9a\x84\xe9\x80\x9f\xe5\xba" -"\xa6\x2c\x20\xe6\x88\x91\xe5\x80\x91\xe4\xbe\xbf\xe5\xb8\xb8\xe5" -"\xb8\x8c\xe6\x9c\x9b\xe8\x83\xbd\xe5\x88\xa9\xe7\x94\xa8\xe4\xb8" -"\x80\xe4\xba\x9b\xe5\xb7\xb2\xe9\x96\x8b\xe7\x99\xbc\xe5\xa5\xbd" -"\xe7\x9a\x84\x0a\x6c\x69\x62\x72\x61\x72\x79\x2c\x20\xe4\xb8\xa6" -"\xe6\x9c\x89\xe4\xb8\x80\xe5\x80\x8b\x20\x66\x61\x73\x74\x20\x70" -"\x72\x6f\x74\x6f\x74\x79\x70\x69\x6e\x67\x20\xe7\x9a\x84\x20\x70" -"\x72\x6f\x67\x72\x61\x6d\x6d\x69\x6e\x67\x20\x6c\x61\x6e\x67\x75" -"\x61\x67\x65\x20\xe5\x8f\xaf\x0a\xe4\xbe\x9b\xe4\xbd\xbf\xe7\x94" -"\xa8\x2e\x20\xe7\x9b\xae\xe5\x89\x8d\xe6\x9c\x89\xe8\xa8\xb1\xe8" -"\xa8\xb1\xe5\xa4\x9a\xe5\xa4\x9a\xe7\x9a\x84\x20\x6c\x69\x62\x72" -"\x61\x72\x79\x20\xe6\x98\xaf\xe4\xbb\xa5\x20\x43\x20\xe5\xaf\xab" -"\xe6\x88\x90\x2c\x20\xe8\x80\x8c\x20\x50\x79\x74\x68\x6f\x6e\x20" -"\xe6\x98\xaf\xe4\xb8\x80\xe5\x80\x8b\x0a\x66\x61\x73\x74\x20\x70" -"\x72\x6f\x74\x6f\x74\x79\x70\x69\x6e\x67\x20\xe7\x9a\x84\x20\x70" -"\x72\x6f\x67\x72\x61\x6d\x6d\x69\x6e\x67\x20\x6c\x61\x6e\x67\x75" -"\x61\x67\x65\x2e\x20\xe6\x95\x85\xe6\x88\x91\xe5\x80\x91\xe5\xb8" -"\x8c\xe6\x9c\x9b\xe8\x83\xbd\xe5\xb0\x87\xe6\x97\xa2\xe6\x9c\x89" -"\xe7\x9a\x84\x0a\x43\x20\x6c\x69\x62\x72\x61\x72\x79\x20\xe6\x8b" -"\xbf\xe5\x88\xb0\x20\x50\x79\x74\x68\x6f\x6e\x20\xe7\x9a\x84\xe7" -"\x92\xb0\xe5\xa2\x83\xe4\xb8\xad\xe6\xb8\xac\xe8\xa9\xa6\xe5\x8f" -"\x8a\xe6\x95\xb4\xe5\x90\x88\x2e\x20\xe5\x85\xb6\xe4\xb8\xad\xe6" -"\x9c\x80\xe4\xb8\xbb\xe8\xa6\x81\xe4\xb9\x9f\xe6\x98\xaf\xe6\x88" -"\x91\xe5\x80\x91\xe6\x89\x80\x0a\xe8\xa6\x81\xe8\xa8\x8e\xe8\xab" -"\x96\xe7\x9a\x84\xe5\x95\x8f\xe9\xa1\x8c\xe5\xb0\xb1\xe6\x98\xaf" -"\x3a\x0a\xed\x8c\x8c\xec\x9d\xb4\xec\x8d\xac\xec\x9d\x80\x20\xea" -"\xb0\x95\xeb\xa0\xa5\xed\x95\x9c\x20\xea\xb8\xb0\xeb\x8a\xa5\xec" -"\x9d\x84\x20\xec\xa7\x80\xeb\x8b\x8c\x20\xeb\xb2\x94\xec\x9a\xa9" -"\x20\xec\xbb\xb4\xed\x93\xa8\xed\x84\xb0\x20\xed\x94\x84\xeb\xa1" -"\x9c\xea\xb7\xb8\xeb\x9e\x98\xeb\xb0\x8d\x20\xec\x96\xb8\xec\x96" -"\xb4\xeb\x8b\xa4\x2e\x0a\x0a"), -'gb2312': ( -"\x50\x79\x74\x68\x6f\x6e\xa3\xa8\xc5\xc9\xc9\xad\xa3\xa9\xd3\xef" -"\xd1\xd4\xca\xc7\xd2\xbb\xd6\xd6\xb9\xa6\xc4\xdc\xc7\xbf\xb4\xf3" -"\xb6\xf8\xcd\xea\xc9\xc6\xb5\xc4\xcd\xa8\xd3\xc3\xd0\xcd\xbc\xc6" -"\xcb\xe3\xbb\xfa\xb3\xcc\xd0\xf2\xc9\xe8\xbc\xc6\xd3\xef\xd1\xd4" -"\xa3\xac\x0a\xd2\xd1\xbe\xad\xbe\xdf\xd3\xd0\xca\xae\xb6\xe0\xc4" -"\xea\xb5\xc4\xb7\xa2\xd5\xb9\xc0\xfa\xca\xb7\xa3\xac\xb3\xc9\xca" -"\xec\xc7\xd2\xce\xc8\xb6\xa8\xa1\xa3\xd5\xe2\xd6\xd6\xd3\xef\xd1" -"\xd4\xbe\xdf\xd3\xd0\xb7\xc7\xb3\xa3\xbc\xf2\xbd\xdd\xb6\xf8\xc7" -"\xe5\xce\xfa\x0a\xb5\xc4\xd3\xef\xb7\xa8\xcc\xd8\xb5\xe3\xa3\xac" -"\xca\xca\xba\xcf\xcd\xea\xb3\xc9\xb8\xf7\xd6\xd6\xb8\xdf\xb2\xe3" -"\xc8\xce\xce\xf1\xa3\xac\xbc\xb8\xba\xf5\xbf\xc9\xd2\xd4\xd4\xda" -"\xcb\xf9\xd3\xd0\xb5\xc4\xb2\xd9\xd7\xf7\xcf\xb5\xcd\xb3\xd6\xd0" -"\x0a\xd4\xcb\xd0\xd0\xa1\xa3\xd5\xe2\xd6\xd6\xd3\xef\xd1\xd4\xbc" -"\xf2\xb5\xa5\xb6\xf8\xc7\xbf\xb4\xf3\xa3\xac\xca\xca\xba\xcf\xb8" -"\xf7\xd6\xd6\xc8\xcb\xca\xbf\xd1\xa7\xcf\xb0\xca\xb9\xd3\xc3\xa1" -"\xa3\xc4\xbf\xc7\xb0\xa3\xac\xbb\xf9\xd3\xda\xd5\xe2\x0a\xd6\xd6" -"\xd3\xef\xd1\xd4\xb5\xc4\xcf\xe0\xb9\xd8\xbc\xbc\xca\xf5\xd5\xfd" -"\xd4\xda\xb7\xc9\xcb\xd9\xb5\xc4\xb7\xa2\xd5\xb9\xa3\xac\xd3\xc3" -"\xbb\xa7\xca\xfd\xc1\xbf\xbc\xb1\xbe\xe7\xc0\xa9\xb4\xf3\xa3\xac" 
-"\xcf\xe0\xb9\xd8\xb5\xc4\xd7\xca\xd4\xb4\xb7\xc7\xb3\xa3\xb6\xe0" -"\xa1\xa3\x0a\x0a", -"\x50\x79\x74\x68\x6f\x6e\xef\xbc\x88\xe6\xb4\xbe\xe6\xa3\xae\xef" -"\xbc\x89\xe8\xaf\xad\xe8\xa8\x80\xe6\x98\xaf\xe4\xb8\x80\xe7\xa7" -"\x8d\xe5\x8a\x9f\xe8\x83\xbd\xe5\xbc\xba\xe5\xa4\xa7\xe8\x80\x8c" -"\xe5\xae\x8c\xe5\x96\x84\xe7\x9a\x84\xe9\x80\x9a\xe7\x94\xa8\xe5" -"\x9e\x8b\xe8\xae\xa1\xe7\xae\x97\xe6\x9c\xba\xe7\xa8\x8b\xe5\xba" -"\x8f\xe8\xae\xbe\xe8\xae\xa1\xe8\xaf\xad\xe8\xa8\x80\xef\xbc\x8c" -"\x0a\xe5\xb7\xb2\xe7\xbb\x8f\xe5\x85\xb7\xe6\x9c\x89\xe5\x8d\x81" -"\xe5\xa4\x9a\xe5\xb9\xb4\xe7\x9a\x84\xe5\x8f\x91\xe5\xb1\x95\xe5" -"\x8e\x86\xe5\x8f\xb2\xef\xbc\x8c\xe6\x88\x90\xe7\x86\x9f\xe4\xb8" -"\x94\xe7\xa8\xb3\xe5\xae\x9a\xe3\x80\x82\xe8\xbf\x99\xe7\xa7\x8d" -"\xe8\xaf\xad\xe8\xa8\x80\xe5\x85\xb7\xe6\x9c\x89\xe9\x9d\x9e\xe5" -"\xb8\xb8\xe7\xae\x80\xe6\x8d\xb7\xe8\x80\x8c\xe6\xb8\x85\xe6\x99" -"\xb0\x0a\xe7\x9a\x84\xe8\xaf\xad\xe6\xb3\x95\xe7\x89\xb9\xe7\x82" -"\xb9\xef\xbc\x8c\xe9\x80\x82\xe5\x90\x88\xe5\xae\x8c\xe6\x88\x90" -"\xe5\x90\x84\xe7\xa7\x8d\xe9\xab\x98\xe5\xb1\x82\xe4\xbb\xbb\xe5" -"\x8a\xa1\xef\xbc\x8c\xe5\x87\xa0\xe4\xb9\x8e\xe5\x8f\xaf\xe4\xbb" -"\xa5\xe5\x9c\xa8\xe6\x89\x80\xe6\x9c\x89\xe7\x9a\x84\xe6\x93\x8d" -"\xe4\xbd\x9c\xe7\xb3\xbb\xe7\xbb\x9f\xe4\xb8\xad\x0a\xe8\xbf\x90" -"\xe8\xa1\x8c\xe3\x80\x82\xe8\xbf\x99\xe7\xa7\x8d\xe8\xaf\xad\xe8" -"\xa8\x80\xe7\xae\x80\xe5\x8d\x95\xe8\x80\x8c\xe5\xbc\xba\xe5\xa4" -"\xa7\xef\xbc\x8c\xe9\x80\x82\xe5\x90\x88\xe5\x90\x84\xe7\xa7\x8d" -"\xe4\xba\xba\xe5\xa3\xab\xe5\xad\xa6\xe4\xb9\xa0\xe4\xbd\xbf\xe7" -"\x94\xa8\xe3\x80\x82\xe7\x9b\xae\xe5\x89\x8d\xef\xbc\x8c\xe5\x9f" -"\xba\xe4\xba\x8e\xe8\xbf\x99\x0a\xe7\xa7\x8d\xe8\xaf\xad\xe8\xa8" -"\x80\xe7\x9a\x84\xe7\x9b\xb8\xe5\x85\xb3\xe6\x8a\x80\xe6\x9c\xaf" -"\xe6\xad\xa3\xe5\x9c\xa8\xe9\xa3\x9e\xe9\x80\x9f\xe7\x9a\x84\xe5" -"\x8f\x91\xe5\xb1\x95\xef\xbc\x8c\xe7\x94\xa8\xe6\x88\xb7\xe6\x95" -"\xb0\xe9\x87\x8f\xe6\x80\xa5\xe5\x89\xa7\xe6\x89\xa9\xe5\xa4\xa7" -"\xef\xbc\x8c\xe7\x9b\xb8\xe5\x85\xb3\xe7\x9a\x84\xe8\xb5\x84\xe6" -"\xba\x90\xe9\x9d\x9e\xe5\xb8\xb8\xe5\xa4\x9a\xe3\x80\x82\x0a\x0a"), -'gbk': ( -"\x50\x79\x74\x68\x6f\x6e\xa3\xa8\xc5\xc9\xc9\xad\xa3\xa9\xd3\xef" -"\xd1\xd4\xca\xc7\xd2\xbb\xd6\xd6\xb9\xa6\xc4\xdc\xc7\xbf\xb4\xf3" -"\xb6\xf8\xcd\xea\xc9\xc6\xb5\xc4\xcd\xa8\xd3\xc3\xd0\xcd\xbc\xc6" -"\xcb\xe3\xbb\xfa\xb3\xcc\xd0\xf2\xc9\xe8\xbc\xc6\xd3\xef\xd1\xd4" -"\xa3\xac\x0a\xd2\xd1\xbe\xad\xbe\xdf\xd3\xd0\xca\xae\xb6\xe0\xc4" -"\xea\xb5\xc4\xb7\xa2\xd5\xb9\xc0\xfa\xca\xb7\xa3\xac\xb3\xc9\xca" -"\xec\xc7\xd2\xce\xc8\xb6\xa8\xa1\xa3\xd5\xe2\xd6\xd6\xd3\xef\xd1" -"\xd4\xbe\xdf\xd3\xd0\xb7\xc7\xb3\xa3\xbc\xf2\xbd\xdd\xb6\xf8\xc7" -"\xe5\xce\xfa\x0a\xb5\xc4\xd3\xef\xb7\xa8\xcc\xd8\xb5\xe3\xa3\xac" -"\xca\xca\xba\xcf\xcd\xea\xb3\xc9\xb8\xf7\xd6\xd6\xb8\xdf\xb2\xe3" -"\xc8\xce\xce\xf1\xa3\xac\xbc\xb8\xba\xf5\xbf\xc9\xd2\xd4\xd4\xda" -"\xcb\xf9\xd3\xd0\xb5\xc4\xb2\xd9\xd7\xf7\xcf\xb5\xcd\xb3\xd6\xd0" -"\x0a\xd4\xcb\xd0\xd0\xa1\xa3\xd5\xe2\xd6\xd6\xd3\xef\xd1\xd4\xbc" -"\xf2\xb5\xa5\xb6\xf8\xc7\xbf\xb4\xf3\xa3\xac\xca\xca\xba\xcf\xb8" -"\xf7\xd6\xd6\xc8\xcb\xca\xbf\xd1\xa7\xcf\xb0\xca\xb9\xd3\xc3\xa1" -"\xa3\xc4\xbf\xc7\xb0\xa3\xac\xbb\xf9\xd3\xda\xd5\xe2\x0a\xd6\xd6" -"\xd3\xef\xd1\xd4\xb5\xc4\xcf\xe0\xb9\xd8\xbc\xbc\xca\xf5\xd5\xfd" -"\xd4\xda\xb7\xc9\xcb\xd9\xb5\xc4\xb7\xa2\xd5\xb9\xa3\xac\xd3\xc3" -"\xbb\xa7\xca\xfd\xc1\xbf\xbc\xb1\xbe\xe7\xc0\xa9\xb4\xf3\xa3\xac" -"\xcf\xe0\xb9\xd8\xb5\xc4\xd7\xca\xd4\xb4\xb7\xc7\xb3\xa3\xb6\xe0" 
-"\xa1\xa3\x0a\xc8\xe7\xba\xce\xd4\xda\x20\x50\x79\x74\x68\x6f\x6e" -"\x20\xd6\xd0\xca\xb9\xd3\xc3\xbc\xc8\xd3\xd0\xb5\xc4\x20\x43\x20" -"\x6c\x69\x62\x72\x61\x72\x79\x3f\x0a\xa1\xa1\xd4\xda\xd9\x59\xd3" -"\x8d\xbf\xc6\xbc\xbc\xbf\xec\xcb\xd9\xb0\x6c\xd5\xb9\xb5\xc4\xbd" -"\xf1\xcc\xec\x2c\x20\xe9\x5f\xb0\x6c\xbc\xb0\x9c\x79\xd4\x87\xdc" -"\x9b\xf3\x77\xb5\xc4\xcb\xd9\xb6\xc8\xca\xc7\xb2\xbb\xc8\xdd\xba" -"\xf6\xd2\x95\xb5\xc4\x0a\xd5\x6e\xee\x7d\x2e\x20\x9e\xe9\xbc\xd3" -"\xbf\xec\xe9\x5f\xb0\x6c\xbc\xb0\x9c\x79\xd4\x87\xb5\xc4\xcb\xd9" -"\xb6\xc8\x2c\x20\xce\xd2\x82\x83\xb1\xe3\xb3\xa3\xcf\xa3\xcd\xfb" -"\xc4\xdc\xc0\xfb\xd3\xc3\xd2\xbb\xd0\xa9\xd2\xd1\xe9\x5f\xb0\x6c" -"\xba\xc3\xb5\xc4\x0a\x6c\x69\x62\x72\x61\x72\x79\x2c\x20\x81\x4b" -"\xd3\xd0\xd2\xbb\x82\x80\x20\x66\x61\x73\x74\x20\x70\x72\x6f\x74" -"\x6f\x74\x79\x70\x69\x6e\x67\x20\xb5\xc4\x20\x70\x72\x6f\x67\x72" -"\x61\x6d\x6d\x69\x6e\x67\x20\x6c\x61\x6e\x67\x75\x61\x67\x65\x20" -"\xbf\xc9\x0a\xb9\xa9\xca\xb9\xd3\xc3\x2e\x20\xc4\xbf\xc7\xb0\xd3" -"\xd0\xd4\x53\xd4\x53\xb6\xe0\xb6\xe0\xb5\xc4\x20\x6c\x69\x62\x72" -"\x61\x72\x79\x20\xca\xc7\xd2\xd4\x20\x43\x20\x8c\x91\xb3\xc9\x2c" -"\x20\xb6\xf8\x20\x50\x79\x74\x68\x6f\x6e\x20\xca\xc7\xd2\xbb\x82" -"\x80\x0a\x66\x61\x73\x74\x20\x70\x72\x6f\x74\x6f\x74\x79\x70\x69" -"\x6e\x67\x20\xb5\xc4\x20\x70\x72\x6f\x67\x72\x61\x6d\x6d\x69\x6e" -"\x67\x20\x6c\x61\x6e\x67\x75\x61\x67\x65\x2e\x20\xb9\xca\xce\xd2" -"\x82\x83\xcf\xa3\xcd\xfb\xc4\xdc\x8c\xa2\xbc\xc8\xd3\xd0\xb5\xc4" -"\x0a\x43\x20\x6c\x69\x62\x72\x61\x72\x79\x20\xc4\xc3\xb5\xbd\x20" -"\x50\x79\x74\x68\x6f\x6e\x20\xb5\xc4\xad\x68\xbe\xb3\xd6\xd0\x9c" -"\x79\xd4\x87\xbc\xb0\xd5\xfb\xba\xcf\x2e\x20\xc6\xe4\xd6\xd0\xd7" -"\xee\xd6\xf7\xd2\xaa\xd2\xb2\xca\xc7\xce\xd2\x82\x83\xcb\xf9\x0a" -"\xd2\xaa\xd3\x91\xd5\x93\xb5\xc4\x86\x96\xee\x7d\xbe\xcd\xca\xc7" -"\x3a\x0a\x0a", -"\x50\x79\x74\x68\x6f\x6e\xef\xbc\x88\xe6\xb4\xbe\xe6\xa3\xae\xef" -"\xbc\x89\xe8\xaf\xad\xe8\xa8\x80\xe6\x98\xaf\xe4\xb8\x80\xe7\xa7" -"\x8d\xe5\x8a\x9f\xe8\x83\xbd\xe5\xbc\xba\xe5\xa4\xa7\xe8\x80\x8c" -"\xe5\xae\x8c\xe5\x96\x84\xe7\x9a\x84\xe9\x80\x9a\xe7\x94\xa8\xe5" -"\x9e\x8b\xe8\xae\xa1\xe7\xae\x97\xe6\x9c\xba\xe7\xa8\x8b\xe5\xba" -"\x8f\xe8\xae\xbe\xe8\xae\xa1\xe8\xaf\xad\xe8\xa8\x80\xef\xbc\x8c" -"\x0a\xe5\xb7\xb2\xe7\xbb\x8f\xe5\x85\xb7\xe6\x9c\x89\xe5\x8d\x81" -"\xe5\xa4\x9a\xe5\xb9\xb4\xe7\x9a\x84\xe5\x8f\x91\xe5\xb1\x95\xe5" -"\x8e\x86\xe5\x8f\xb2\xef\xbc\x8c\xe6\x88\x90\xe7\x86\x9f\xe4\xb8" -"\x94\xe7\xa8\xb3\xe5\xae\x9a\xe3\x80\x82\xe8\xbf\x99\xe7\xa7\x8d" -"\xe8\xaf\xad\xe8\xa8\x80\xe5\x85\xb7\xe6\x9c\x89\xe9\x9d\x9e\xe5" -"\xb8\xb8\xe7\xae\x80\xe6\x8d\xb7\xe8\x80\x8c\xe6\xb8\x85\xe6\x99" -"\xb0\x0a\xe7\x9a\x84\xe8\xaf\xad\xe6\xb3\x95\xe7\x89\xb9\xe7\x82" -"\xb9\xef\xbc\x8c\xe9\x80\x82\xe5\x90\x88\xe5\xae\x8c\xe6\x88\x90" -"\xe5\x90\x84\xe7\xa7\x8d\xe9\xab\x98\xe5\xb1\x82\xe4\xbb\xbb\xe5" -"\x8a\xa1\xef\xbc\x8c\xe5\x87\xa0\xe4\xb9\x8e\xe5\x8f\xaf\xe4\xbb" -"\xa5\xe5\x9c\xa8\xe6\x89\x80\xe6\x9c\x89\xe7\x9a\x84\xe6\x93\x8d" -"\xe4\xbd\x9c\xe7\xb3\xbb\xe7\xbb\x9f\xe4\xb8\xad\x0a\xe8\xbf\x90" -"\xe8\xa1\x8c\xe3\x80\x82\xe8\xbf\x99\xe7\xa7\x8d\xe8\xaf\xad\xe8" -"\xa8\x80\xe7\xae\x80\xe5\x8d\x95\xe8\x80\x8c\xe5\xbc\xba\xe5\xa4" -"\xa7\xef\xbc\x8c\xe9\x80\x82\xe5\x90\x88\xe5\x90\x84\xe7\xa7\x8d" -"\xe4\xba\xba\xe5\xa3\xab\xe5\xad\xa6\xe4\xb9\xa0\xe4\xbd\xbf\xe7" -"\x94\xa8\xe3\x80\x82\xe7\x9b\xae\xe5\x89\x8d\xef\xbc\x8c\xe5\x9f" -"\xba\xe4\xba\x8e\xe8\xbf\x99\x0a\xe7\xa7\x8d\xe8\xaf\xad\xe8\xa8" -"\x80\xe7\x9a\x84\xe7\x9b\xb8\xe5\x85\xb3\xe6\x8a\x80\xe6\x9c\xaf" 
-"\xe6\xad\xa3\xe5\x9c\xa8\xe9\xa3\x9e\xe9\x80\x9f\xe7\x9a\x84\xe5" -"\x8f\x91\xe5\xb1\x95\xef\xbc\x8c\xe7\x94\xa8\xe6\x88\xb7\xe6\x95" -"\xb0\xe9\x87\x8f\xe6\x80\xa5\xe5\x89\xa7\xe6\x89\xa9\xe5\xa4\xa7" -"\xef\xbc\x8c\xe7\x9b\xb8\xe5\x85\xb3\xe7\x9a\x84\xe8\xb5\x84\xe6" -"\xba\x90\xe9\x9d\x9e\xe5\xb8\xb8\xe5\xa4\x9a\xe3\x80\x82\x0a\xe5" -"\xa6\x82\xe4\xbd\x95\xe5\x9c\xa8\x20\x50\x79\x74\x68\x6f\x6e\x20" -"\xe4\xb8\xad\xe4\xbd\xbf\xe7\x94\xa8\xe6\x97\xa2\xe6\x9c\x89\xe7" -"\x9a\x84\x20\x43\x20\x6c\x69\x62\x72\x61\x72\x79\x3f\x0a\xe3\x80" -"\x80\xe5\x9c\xa8\xe8\xb3\x87\xe8\xa8\x8a\xe7\xa7\x91\xe6\x8a\x80" -"\xe5\xbf\xab\xe9\x80\x9f\xe7\x99\xbc\xe5\xb1\x95\xe7\x9a\x84\xe4" -"\xbb\x8a\xe5\xa4\xa9\x2c\x20\xe9\x96\x8b\xe7\x99\xbc\xe5\x8f\x8a" -"\xe6\xb8\xac\xe8\xa9\xa6\xe8\xbb\x9f\xe9\xab\x94\xe7\x9a\x84\xe9" -"\x80\x9f\xe5\xba\xa6\xe6\x98\xaf\xe4\xb8\x8d\xe5\xae\xb9\xe5\xbf" -"\xbd\xe8\xa6\x96\xe7\x9a\x84\x0a\xe8\xaa\xb2\xe9\xa1\x8c\x2e\x20" -"\xe7\x82\xba\xe5\x8a\xa0\xe5\xbf\xab\xe9\x96\x8b\xe7\x99\xbc\xe5" -"\x8f\x8a\xe6\xb8\xac\xe8\xa9\xa6\xe7\x9a\x84\xe9\x80\x9f\xe5\xba" -"\xa6\x2c\x20\xe6\x88\x91\xe5\x80\x91\xe4\xbe\xbf\xe5\xb8\xb8\xe5" -"\xb8\x8c\xe6\x9c\x9b\xe8\x83\xbd\xe5\x88\xa9\xe7\x94\xa8\xe4\xb8" -"\x80\xe4\xba\x9b\xe5\xb7\xb2\xe9\x96\x8b\xe7\x99\xbc\xe5\xa5\xbd" -"\xe7\x9a\x84\x0a\x6c\x69\x62\x72\x61\x72\x79\x2c\x20\xe4\xb8\xa6" -"\xe6\x9c\x89\xe4\xb8\x80\xe5\x80\x8b\x20\x66\x61\x73\x74\x20\x70" -"\x72\x6f\x74\x6f\x74\x79\x70\x69\x6e\x67\x20\xe7\x9a\x84\x20\x70" -"\x72\x6f\x67\x72\x61\x6d\x6d\x69\x6e\x67\x20\x6c\x61\x6e\x67\x75" -"\x61\x67\x65\x20\xe5\x8f\xaf\x0a\xe4\xbe\x9b\xe4\xbd\xbf\xe7\x94" -"\xa8\x2e\x20\xe7\x9b\xae\xe5\x89\x8d\xe6\x9c\x89\xe8\xa8\xb1\xe8" -"\xa8\xb1\xe5\xa4\x9a\xe5\xa4\x9a\xe7\x9a\x84\x20\x6c\x69\x62\x72" -"\x61\x72\x79\x20\xe6\x98\xaf\xe4\xbb\xa5\x20\x43\x20\xe5\xaf\xab" -"\xe6\x88\x90\x2c\x20\xe8\x80\x8c\x20\x50\x79\x74\x68\x6f\x6e\x20" -"\xe6\x98\xaf\xe4\xb8\x80\xe5\x80\x8b\x0a\x66\x61\x73\x74\x20\x70" -"\x72\x6f\x74\x6f\x74\x79\x70\x69\x6e\x67\x20\xe7\x9a\x84\x20\x70" -"\x72\x6f\x67\x72\x61\x6d\x6d\x69\x6e\x67\x20\x6c\x61\x6e\x67\x75" -"\x61\x67\x65\x2e\x20\xe6\x95\x85\xe6\x88\x91\xe5\x80\x91\xe5\xb8" -"\x8c\xe6\x9c\x9b\xe8\x83\xbd\xe5\xb0\x87\xe6\x97\xa2\xe6\x9c\x89" -"\xe7\x9a\x84\x0a\x43\x20\x6c\x69\x62\x72\x61\x72\x79\x20\xe6\x8b" -"\xbf\xe5\x88\xb0\x20\x50\x79\x74\x68\x6f\x6e\x20\xe7\x9a\x84\xe7" -"\x92\xb0\xe5\xa2\x83\xe4\xb8\xad\xe6\xb8\xac\xe8\xa9\xa6\xe5\x8f" -"\x8a\xe6\x95\xb4\xe5\x90\x88\x2e\x20\xe5\x85\xb6\xe4\xb8\xad\xe6" -"\x9c\x80\xe4\xb8\xbb\xe8\xa6\x81\xe4\xb9\x9f\xe6\x98\xaf\xe6\x88" -"\x91\xe5\x80\x91\xe6\x89\x80\x0a\xe8\xa6\x81\xe8\xa8\x8e\xe8\xab" -"\x96\xe7\x9a\x84\xe5\x95\x8f\xe9\xa1\x8c\xe5\xb0\xb1\xe6\x98\xaf" -"\x3a\x0a\x0a"), -'johab': ( -"\x99\xb1\xa4\x77\x88\x62\xd0\x61\x20\xcd\x5c\xaf\xa1\xc5\xa9\x9c" -"\x61\x0a\x0a\xdc\xc0\xdc\xc0\x90\x73\x21\x21\x20\xf1\x67\xe2\x9c" -"\xf0\x55\xcc\x81\xa3\x89\x9f\x85\x8a\xa1\x20\xdc\xde\xdc\xd3\xd2" -"\x7a\xd9\xaf\xd9\xaf\xd9\xaf\x20\x8b\x77\x96\xd3\x20\xdc\xd1\x95" -"\x81\x20\xdc\xc0\x2e\x20\x2e\x0a\xed\x3c\xb5\x77\xdc\xd1\x93\x77" -"\xd2\x73\x20\x2e\x20\x2e\x20\x2e\x20\x2e\x20\xac\xe1\xb6\x89\x9e" -"\xa1\x20\x95\x65\xd0\x62\xf0\xe0\x20\xe0\x3b\xd2\x7a\x20\x21\x20" -"\x21\x20\x21\x87\x41\x2e\x87\x41\x0a\xd3\x61\xd3\x61\xd3\x61\x20" -"\x88\x41\x88\x41\x88\x41\xd9\x69\x87\x41\x5f\x87\x41\x20\xb4\xe1" -"\x9f\x9a\x20\xc8\xa1\xc5\xc1\x8b\x7a\x20\x95\x61\xb7\x77\x20\xc3" -"\x97\xe2\x9c\x97\x69\xf0\xe0\x20\xdc\xc0\x97\x61\x8b\x7a\x0a\xac" 
-"\xe9\x9f\x7a\x20\xe0\x3b\xd2\x7a\x20\x2e\x20\x2e\x20\x2e\x20\x2e" -"\x20\x8a\x89\xb4\x81\xae\xba\x20\xdc\xd1\x8a\xa1\x20\xdc\xde\x9f" -"\x89\xdc\xc2\x8b\x7a\x20\xf1\x67\xf1\x62\xf5\x49\xed\xfc\xf3\xe9" -"\x8c\x61\xbb\x9a\x0a\xb5\xc1\xb2\xa1\xd2\x7a\x20\x21\x20\x21\x20" -"\xed\x3c\xb5\x77\xdc\xd1\x20\xe0\x3b\x93\x77\x8a\xa1\x20\xd9\x69" -"\xea\xbe\x89\xc5\x20\xb4\xf4\x93\x77\x8a\xa1\x93\x77\x20\xed\x3c" -"\x93\x77\x96\xc1\xd2\x7a\x20\x8b\x69\xb4\x81\x97\x7a\x0a\xdc\xde" -"\x9d\x61\x97\x41\xe2\x9c\x20\xaf\x81\xce\xa1\xae\xa1\xd2\x7a\x20" -"\xb4\xe1\x9f\x9a\x20\xf1\x67\xf1\x62\xf5\x49\xed\xfc\xf3\xe9\xaf" -"\x82\xdc\xef\x97\x69\xb4\x7a\x21\x21\x20\xdc\xc0\xdc\xc0\x90\x73" -"\xd9\xbd\x20\xd9\x62\xd9\x62\x2a\x0a\x0a", -"\xeb\x98\xa0\xeb\xb0\xa9\xea\xb0\x81\xed\x95\x98\x20\xed\x8e\xb2" -"\xec\x8b\x9c\xec\xbd\x9c\xeb\x9d\xbc\x0a\x0a\xe3\x89\xaf\xe3\x89" -"\xaf\xeb\x82\xa9\x21\x21\x20\xe5\x9b\xa0\xe4\xb9\x9d\xe6\x9c\x88" -"\xed\x8c\xa8\xeb\xaf\xa4\xeb\xa6\x94\xea\xb6\x88\x20\xe2\x93\xa1" -"\xe2\x93\x96\xed\x9b\x80\xc2\xbf\xc2\xbf\xc2\xbf\x20\xea\xb8\x8d" -"\xeb\x92\x99\x20\xe2\x93\x94\xeb\x8e\xa8\x20\xe3\x89\xaf\x2e\x20" -"\x2e\x0a\xe4\xba\x9e\xec\x98\x81\xe2\x93\x94\xeb\x8a\xa5\xed\x9a" -"\xb9\x20\x2e\x20\x2e\x20\x2e\x20\x2e\x20\xec\x84\x9c\xec\x9a\xb8" -"\xeb\xa4\x84\x20\xeb\x8e\x90\xed\x95\x99\xe4\xb9\x99\x20\xe5\xae" -"\xb6\xed\x9b\x80\x20\x21\x20\x21\x20\x21\xe3\x85\xa0\x2e\xe3\x85" -"\xa0\x0a\xed\x9d\x90\xed\x9d\x90\xed\x9d\x90\x20\xe3\x84\xb1\xe3" -"\x84\xb1\xe3\x84\xb1\xe2\x98\x86\xe3\x85\xa0\x5f\xe3\x85\xa0\x20" -"\xec\x96\xb4\xeb\xa6\xa8\x20\xed\x83\xb8\xec\xbd\xb0\xea\xb8\x90" -"\x20\xeb\x8e\x8c\xec\x9d\x91\x20\xec\xb9\x91\xe4\xb9\x9d\xeb\x93" -"\xa4\xe4\xb9\x99\x20\xe3\x89\xaf\xeb\x93\x9c\xea\xb8\x90\x0a\xec" -"\x84\xa4\xeb\xa6\x8c\x20\xe5\xae\xb6\xed\x9b\x80\x20\x2e\x20\x2e" -"\x20\x2e\x20\x2e\x20\xea\xb5\xb4\xec\x95\xa0\xec\x89\x8c\x20\xe2" -"\x93\x94\xea\xb6\x88\x20\xe2\x93\xa1\xeb\xa6\x98\xe3\x89\xb1\xea" -"\xb8\x90\x20\xe5\x9b\xa0\xe4\xbb\x81\xe5\xb7\x9d\xef\xa6\x81\xe4" -"\xb8\xad\xea\xb9\x8c\xec\xa6\xbc\x0a\xec\x99\x80\xec\x92\x80\xed" -"\x9b\x80\x20\x21\x20\x21\x20\xe4\xba\x9e\xec\x98\x81\xe2\x93\x94" -"\x20\xe5\xae\xb6\xeb\x8a\xa5\xea\xb6\x88\x20\xe2\x98\x86\xe4\xb8" -"\x8a\xea\xb4\x80\x20\xec\x97\x86\xeb\x8a\xa5\xea\xb6\x88\xeb\x8a" -"\xa5\x20\xe4\xba\x9e\xeb\x8a\xa5\xeb\x92\x88\xed\x9b\x80\x20\xea" -"\xb8\x80\xec\x95\xa0\xeb\x93\xb4\x0a\xe2\x93\xa1\xeb\xa0\xa4\xeb" -"\x93\x80\xe4\xb9\x9d\x20\xec\x8b\x80\xed\x92\x94\xec\x88\xb4\xed" -"\x9b\x80\x20\xec\x96\xb4\xeb\xa6\xa8\x20\xe5\x9b\xa0\xe4\xbb\x81" -"\xe5\xb7\x9d\xef\xa6\x81\xe4\xb8\xad\xec\x8b\x81\xe2\x91\xa8\xeb" -"\x93\xa4\xec\x95\x9c\x21\x21\x20\xe3\x89\xaf\xe3\x89\xaf\xeb\x82" -"\xa9\xe2\x99\xa1\x20\xe2\x8c\x92\xe2\x8c\x92\x2a\x0a\x0a"), -'shift_jis': ( -"\x50\x79\x74\x68\x6f\x6e\x20\x82\xcc\x8a\x4a\x94\xad\x82\xcd\x81" -"\x41\x31\x39\x39\x30\x20\x94\x4e\x82\xb2\x82\xeb\x82\xa9\x82\xe7" -"\x8a\x4a\x8e\x6e\x82\xb3\x82\xea\x82\xc4\x82\xa2\x82\xdc\x82\xb7" -"\x81\x42\x0a\x8a\x4a\x94\xad\x8e\xd2\x82\xcc\x20\x47\x75\x69\x64" -"\x6f\x20\x76\x61\x6e\x20\x52\x6f\x73\x73\x75\x6d\x20\x82\xcd\x8b" -"\xb3\x88\xe7\x97\x70\x82\xcc\x83\x76\x83\x8d\x83\x4f\x83\x89\x83" -"\x7e\x83\x93\x83\x4f\x8c\xbe\x8c\xea\x81\x75\x41\x42\x43\x81\x76" -"\x82\xcc\x8a\x4a\x94\xad\x82\xc9\x8e\x51\x89\xc1\x82\xb5\x82\xc4" -"\x82\xa2\x82\xdc\x82\xb5\x82\xbd\x82\xaa\x81\x41\x41\x42\x43\x20" -"\x82\xcd\x8e\xc0\x97\x70\x8f\xe3\x82\xcc\x96\xda\x93\x49\x82\xc9" -"\x82\xcd\x82\xa0\x82\xdc\x82\xe8\x93\x4b\x82\xb5\x82\xc4\x82\xa2" 
-"\x82\xdc\x82\xb9\x82\xf1\x82\xc5\x82\xb5\x82\xbd\x81\x42\x0a\x82" -"\xb1\x82\xcc\x82\xbd\x82\xdf\x81\x41\x47\x75\x69\x64\x6f\x20\x82" -"\xcd\x82\xe6\x82\xe8\x8e\xc0\x97\x70\x93\x49\x82\xc8\x83\x76\x83" -"\x8d\x83\x4f\x83\x89\x83\x7e\x83\x93\x83\x4f\x8c\xbe\x8c\xea\x82" -"\xcc\x8a\x4a\x94\xad\x82\xf0\x8a\x4a\x8e\x6e\x82\xb5\x81\x41\x89" -"\x70\x8d\x91\x20\x42\x42\x53\x20\x95\xfa\x91\x97\x82\xcc\x83\x52" -"\x83\x81\x83\x66\x83\x42\x94\xd4\x91\x67\x81\x75\x83\x82\x83\x93" -"\x83\x65\x83\x42\x20\x83\x70\x83\x43\x83\x5c\x83\x93\x81\x76\x82" -"\xcc\x83\x74\x83\x40\x83\x93\x82\xc5\x82\xa0\x82\xe9\x20\x47\x75" -"\x69\x64\x6f\x20\x82\xcd\x82\xb1\x82\xcc\x8c\xbe\x8c\xea\x82\xf0" -"\x81\x75\x50\x79\x74\x68\x6f\x6e\x81\x76\x82\xc6\x96\xbc\x82\xc3" -"\x82\xaf\x82\xdc\x82\xb5\x82\xbd\x81\x42\x0a\x82\xb1\x82\xcc\x82" -"\xe6\x82\xa4\x82\xc8\x94\x77\x8c\x69\x82\xa9\x82\xe7\x90\xb6\x82" -"\xdc\x82\xea\x82\xbd\x20\x50\x79\x74\x68\x6f\x6e\x20\x82\xcc\x8c" -"\xbe\x8c\xea\x90\xdd\x8c\x76\x82\xcd\x81\x41\x81\x75\x83\x56\x83" -"\x93\x83\x76\x83\x8b\x81\x76\x82\xc5\x81\x75\x8f\x4b\x93\xbe\x82" -"\xaa\x97\x65\x88\xd5\x81\x76\x82\xc6\x82\xa2\x82\xa4\x96\xda\x95" -"\x57\x82\xc9\x8f\x64\x93\x5f\x82\xaa\x92\x75\x82\xa9\x82\xea\x82" -"\xc4\x82\xa2\x82\xdc\x82\xb7\x81\x42\x0a\x91\xbd\x82\xad\x82\xcc" -"\x83\x58\x83\x4e\x83\x8a\x83\x76\x83\x67\x8c\x6e\x8c\xbe\x8c\xea" -"\x82\xc5\x82\xcd\x83\x86\x81\x5b\x83\x55\x82\xcc\x96\xda\x90\xe6" -"\x82\xcc\x97\x98\x95\xd6\x90\xab\x82\xf0\x97\x44\x90\xe6\x82\xb5" -"\x82\xc4\x90\x46\x81\x58\x82\xc8\x8b\x40\x94\x5c\x82\xf0\x8c\xbe" -"\x8c\xea\x97\x76\x91\x66\x82\xc6\x82\xb5\x82\xc4\x8e\xe6\x82\xe8" -"\x93\xfc\x82\xea\x82\xe9\x8f\xea\x8d\x87\x82\xaa\x91\xbd\x82\xa2" -"\x82\xcc\x82\xc5\x82\xb7\x82\xaa\x81\x41\x50\x79\x74\x68\x6f\x6e" -"\x20\x82\xc5\x82\xcd\x82\xbb\x82\xa4\x82\xa2\x82\xc1\x82\xbd\x8f" -"\xac\x8d\xd7\x8d\x48\x82\xaa\x92\xc7\x89\xc1\x82\xb3\x82\xea\x82" -"\xe9\x82\xb1\x82\xc6\x82\xcd\x82\xa0\x82\xdc\x82\xe8\x82\xa0\x82" -"\xe8\x82\xdc\x82\xb9\x82\xf1\x81\x42\x0a\x8c\xbe\x8c\xea\x8e\xa9" -"\x91\xcc\x82\xcc\x8b\x40\x94\x5c\x82\xcd\x8d\xc5\x8f\xac\x8c\xc0" -"\x82\xc9\x89\x9f\x82\xb3\x82\xa6\x81\x41\x95\x4b\x97\x76\x82\xc8" -"\x8b\x40\x94\x5c\x82\xcd\x8a\x67\x92\xa3\x83\x82\x83\x57\x83\x85" -"\x81\x5b\x83\x8b\x82\xc6\x82\xb5\x82\xc4\x92\xc7\x89\xc1\x82\xb7" -"\x82\xe9\x81\x41\x82\xc6\x82\xa2\x82\xa4\x82\xcc\x82\xaa\x20\x50" -"\x79\x74\x68\x6f\x6e\x20\x82\xcc\x83\x7c\x83\x8a\x83\x56\x81\x5b" -"\x82\xc5\x82\xb7\x81\x42\x0a\x0a", -"\x50\x79\x74\x68\x6f\x6e\x20\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba" -"\xe3\x81\xaf\xe3\x80\x81\x31\x39\x39\x30\x20\xe5\xb9\xb4\xe3\x81" -"\x94\xe3\x82\x8d\xe3\x81\x8b\xe3\x82\x89\xe9\x96\x8b\xe5\xa7\x8b" -"\xe3\x81\x95\xe3\x82\x8c\xe3\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3" -"\x81\x99\xe3\x80\x82\x0a\xe9\x96\x8b\xe7\x99\xba\xe8\x80\x85\xe3" -"\x81\xae\x20\x47\x75\x69\x64\x6f\x20\x76\x61\x6e\x20\x52\x6f\x73" -"\x73\x75\x6d\x20\xe3\x81\xaf\xe6\x95\x99\xe8\x82\xb2\xe7\x94\xa8" -"\xe3\x81\xae\xe3\x83\x97\xe3\x83\xad\xe3\x82\xb0\xe3\x83\xa9\xe3" -"\x83\x9f\xe3\x83\xb3\xe3\x82\xb0\xe8\xa8\x80\xe8\xaa\x9e\xe3\x80" -"\x8c\x41\x42\x43\xe3\x80\x8d\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba" -"\xe3\x81\xab\xe5\x8f\x82\xe5\x8a\xa0\xe3\x81\x97\xe3\x81\xa6\xe3" -"\x81\x84\xe3\x81\xbe\xe3\x81\x97\xe3\x81\x9f\xe3\x81\x8c\xe3\x80" -"\x81\x41\x42\x43\x20\xe3\x81\xaf\xe5\xae\x9f\xe7\x94\xa8\xe4\xb8" -"\x8a\xe3\x81\xae\xe7\x9b\xae\xe7\x9a\x84\xe3\x81\xab\xe3\x81\xaf" -"\xe3\x81\x82\xe3\x81\xbe\xe3\x82\x8a\xe9\x81\xa9\xe3\x81\x97\xe3" 
-"\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3\x81\x9b\xe3\x82\x93\xe3\x81" -"\xa7\xe3\x81\x97\xe3\x81\x9f\xe3\x80\x82\x0a\xe3\x81\x93\xe3\x81" -"\xae\xe3\x81\x9f\xe3\x82\x81\xe3\x80\x81\x47\x75\x69\x64\x6f\x20" -"\xe3\x81\xaf\xe3\x82\x88\xe3\x82\x8a\xe5\xae\x9f\xe7\x94\xa8\xe7" -"\x9a\x84\xe3\x81\xaa\xe3\x83\x97\xe3\x83\xad\xe3\x82\xb0\xe3\x83" -"\xa9\xe3\x83\x9f\xe3\x83\xb3\xe3\x82\xb0\xe8\xa8\x80\xe8\xaa\x9e" -"\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba\xe3\x82\x92\xe9\x96\x8b\xe5" -"\xa7\x8b\xe3\x81\x97\xe3\x80\x81\xe8\x8b\xb1\xe5\x9b\xbd\x20\x42" -"\x42\x53\x20\xe6\x94\xbe\xe9\x80\x81\xe3\x81\xae\xe3\x82\xb3\xe3" -"\x83\xa1\xe3\x83\x87\xe3\x82\xa3\xe7\x95\xaa\xe7\xb5\x84\xe3\x80" -"\x8c\xe3\x83\xa2\xe3\x83\xb3\xe3\x83\x86\xe3\x82\xa3\x20\xe3\x83" -"\x91\xe3\x82\xa4\xe3\x82\xbd\xe3\x83\xb3\xe3\x80\x8d\xe3\x81\xae" -"\xe3\x83\x95\xe3\x82\xa1\xe3\x83\xb3\xe3\x81\xa7\xe3\x81\x82\xe3" -"\x82\x8b\x20\x47\x75\x69\x64\x6f\x20\xe3\x81\xaf\xe3\x81\x93\xe3" -"\x81\xae\xe8\xa8\x80\xe8\xaa\x9e\xe3\x82\x92\xe3\x80\x8c\x50\x79" -"\x74\x68\x6f\x6e\xe3\x80\x8d\xe3\x81\xa8\xe5\x90\x8d\xe3\x81\xa5" -"\xe3\x81\x91\xe3\x81\xbe\xe3\x81\x97\xe3\x81\x9f\xe3\x80\x82\x0a" -"\xe3\x81\x93\xe3\x81\xae\xe3\x82\x88\xe3\x81\x86\xe3\x81\xaa\xe8" -"\x83\x8c\xe6\x99\xaf\xe3\x81\x8b\xe3\x82\x89\xe7\x94\x9f\xe3\x81" -"\xbe\xe3\x82\x8c\xe3\x81\x9f\x20\x50\x79\x74\x68\x6f\x6e\x20\xe3" -"\x81\xae\xe8\xa8\x80\xe8\xaa\x9e\xe8\xa8\xad\xe8\xa8\x88\xe3\x81" -"\xaf\xe3\x80\x81\xe3\x80\x8c\xe3\x82\xb7\xe3\x83\xb3\xe3\x83\x97" -"\xe3\x83\xab\xe3\x80\x8d\xe3\x81\xa7\xe3\x80\x8c\xe7\xbf\x92\xe5" -"\xbe\x97\xe3\x81\x8c\xe5\xae\xb9\xe6\x98\x93\xe3\x80\x8d\xe3\x81" -"\xa8\xe3\x81\x84\xe3\x81\x86\xe7\x9b\xae\xe6\xa8\x99\xe3\x81\xab" -"\xe9\x87\x8d\xe7\x82\xb9\xe3\x81\x8c\xe7\xbd\xae\xe3\x81\x8b\xe3" -"\x82\x8c\xe3\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3\x81\x99\xe3\x80" -"\x82\x0a\xe5\xa4\x9a\xe3\x81\x8f\xe3\x81\xae\xe3\x82\xb9\xe3\x82" -"\xaf\xe3\x83\xaa\xe3\x83\x97\xe3\x83\x88\xe7\xb3\xbb\xe8\xa8\x80" -"\xe8\xaa\x9e\xe3\x81\xa7\xe3\x81\xaf\xe3\x83\xa6\xe3\x83\xbc\xe3" -"\x82\xb6\xe3\x81\xae\xe7\x9b\xae\xe5\x85\x88\xe3\x81\xae\xe5\x88" -"\xa9\xe4\xbe\xbf\xe6\x80\xa7\xe3\x82\x92\xe5\x84\xaa\xe5\x85\x88" -"\xe3\x81\x97\xe3\x81\xa6\xe8\x89\xb2\xe3\x80\x85\xe3\x81\xaa\xe6" -"\xa9\x9f\xe8\x83\xbd\xe3\x82\x92\xe8\xa8\x80\xe8\xaa\x9e\xe8\xa6" -"\x81\xe7\xb4\xa0\xe3\x81\xa8\xe3\x81\x97\xe3\x81\xa6\xe5\x8f\x96" -"\xe3\x82\x8a\xe5\x85\xa5\xe3\x82\x8c\xe3\x82\x8b\xe5\xa0\xb4\xe5" -"\x90\x88\xe3\x81\x8c\xe5\xa4\x9a\xe3\x81\x84\xe3\x81\xae\xe3\x81" -"\xa7\xe3\x81\x99\xe3\x81\x8c\xe3\x80\x81\x50\x79\x74\x68\x6f\x6e" -"\x20\xe3\x81\xa7\xe3\x81\xaf\xe3\x81\x9d\xe3\x81\x86\xe3\x81\x84" -"\xe3\x81\xa3\xe3\x81\x9f\xe5\xb0\x8f\xe7\xb4\xb0\xe5\xb7\xa5\xe3" -"\x81\x8c\xe8\xbf\xbd\xe5\x8a\xa0\xe3\x81\x95\xe3\x82\x8c\xe3\x82" -"\x8b\xe3\x81\x93\xe3\x81\xa8\xe3\x81\xaf\xe3\x81\x82\xe3\x81\xbe" -"\xe3\x82\x8a\xe3\x81\x82\xe3\x82\x8a\xe3\x81\xbe\xe3\x81\x9b\xe3" -"\x82\x93\xe3\x80\x82\x0a\xe8\xa8\x80\xe8\xaa\x9e\xe8\x87\xaa\xe4" -"\xbd\x93\xe3\x81\xae\xe6\xa9\x9f\xe8\x83\xbd\xe3\x81\xaf\xe6\x9c" -"\x80\xe5\xb0\x8f\xe9\x99\x90\xe3\x81\xab\xe6\x8a\xbc\xe3\x81\x95" -"\xe3\x81\x88\xe3\x80\x81\xe5\xbf\x85\xe8\xa6\x81\xe3\x81\xaa\xe6" -"\xa9\x9f\xe8\x83\xbd\xe3\x81\xaf\xe6\x8b\xa1\xe5\xbc\xb5\xe3\x83" -"\xa2\xe3\x82\xb8\xe3\x83\xa5\xe3\x83\xbc\xe3\x83\xab\xe3\x81\xa8" -"\xe3\x81\x97\xe3\x81\xa6\xe8\xbf\xbd\xe5\x8a\xa0\xe3\x81\x99\xe3" -"\x82\x8b\xe3\x80\x81\xe3\x81\xa8\xe3\x81\x84\xe3\x81\x86\xe3\x81" -"\xae\xe3\x81\x8c\x20\x50\x79\x74\x68\x6f\x6e\x20\xe3\x81\xae\xe3" 
-"\x83\x9d\xe3\x83\xaa\xe3\x82\xb7\xe3\x83\xbc\xe3\x81\xa7\xe3\x81" -"\x99\xe3\x80\x82\x0a\x0a"), -'shift_jisx0213': ( -"\x50\x79\x74\x68\x6f\x6e\x20\x82\xcc\x8a\x4a\x94\xad\x82\xcd\x81" -"\x41\x31\x39\x39\x30\x20\x94\x4e\x82\xb2\x82\xeb\x82\xa9\x82\xe7" -"\x8a\x4a\x8e\x6e\x82\xb3\x82\xea\x82\xc4\x82\xa2\x82\xdc\x82\xb7" -"\x81\x42\x0a\x8a\x4a\x94\xad\x8e\xd2\x82\xcc\x20\x47\x75\x69\x64" -"\x6f\x20\x76\x61\x6e\x20\x52\x6f\x73\x73\x75\x6d\x20\x82\xcd\x8b" -"\xb3\x88\xe7\x97\x70\x82\xcc\x83\x76\x83\x8d\x83\x4f\x83\x89\x83" -"\x7e\x83\x93\x83\x4f\x8c\xbe\x8c\xea\x81\x75\x41\x42\x43\x81\x76" -"\x82\xcc\x8a\x4a\x94\xad\x82\xc9\x8e\x51\x89\xc1\x82\xb5\x82\xc4" -"\x82\xa2\x82\xdc\x82\xb5\x82\xbd\x82\xaa\x81\x41\x41\x42\x43\x20" -"\x82\xcd\x8e\xc0\x97\x70\x8f\xe3\x82\xcc\x96\xda\x93\x49\x82\xc9" -"\x82\xcd\x82\xa0\x82\xdc\x82\xe8\x93\x4b\x82\xb5\x82\xc4\x82\xa2" -"\x82\xdc\x82\xb9\x82\xf1\x82\xc5\x82\xb5\x82\xbd\x81\x42\x0a\x82" -"\xb1\x82\xcc\x82\xbd\x82\xdf\x81\x41\x47\x75\x69\x64\x6f\x20\x82" -"\xcd\x82\xe6\x82\xe8\x8e\xc0\x97\x70\x93\x49\x82\xc8\x83\x76\x83" -"\x8d\x83\x4f\x83\x89\x83\x7e\x83\x93\x83\x4f\x8c\xbe\x8c\xea\x82" -"\xcc\x8a\x4a\x94\xad\x82\xf0\x8a\x4a\x8e\x6e\x82\xb5\x81\x41\x89" -"\x70\x8d\x91\x20\x42\x42\x53\x20\x95\xfa\x91\x97\x82\xcc\x83\x52" -"\x83\x81\x83\x66\x83\x42\x94\xd4\x91\x67\x81\x75\x83\x82\x83\x93" -"\x83\x65\x83\x42\x20\x83\x70\x83\x43\x83\x5c\x83\x93\x81\x76\x82" -"\xcc\x83\x74\x83\x40\x83\x93\x82\xc5\x82\xa0\x82\xe9\x20\x47\x75" -"\x69\x64\x6f\x20\x82\xcd\x82\xb1\x82\xcc\x8c\xbe\x8c\xea\x82\xf0" -"\x81\x75\x50\x79\x74\x68\x6f\x6e\x81\x76\x82\xc6\x96\xbc\x82\xc3" -"\x82\xaf\x82\xdc\x82\xb5\x82\xbd\x81\x42\x0a\x82\xb1\x82\xcc\x82" -"\xe6\x82\xa4\x82\xc8\x94\x77\x8c\x69\x82\xa9\x82\xe7\x90\xb6\x82" -"\xdc\x82\xea\x82\xbd\x20\x50\x79\x74\x68\x6f\x6e\x20\x82\xcc\x8c" -"\xbe\x8c\xea\x90\xdd\x8c\x76\x82\xcd\x81\x41\x81\x75\x83\x56\x83" -"\x93\x83\x76\x83\x8b\x81\x76\x82\xc5\x81\x75\x8f\x4b\x93\xbe\x82" -"\xaa\x97\x65\x88\xd5\x81\x76\x82\xc6\x82\xa2\x82\xa4\x96\xda\x95" -"\x57\x82\xc9\x8f\x64\x93\x5f\x82\xaa\x92\x75\x82\xa9\x82\xea\x82" -"\xc4\x82\xa2\x82\xdc\x82\xb7\x81\x42\x0a\x91\xbd\x82\xad\x82\xcc" -"\x83\x58\x83\x4e\x83\x8a\x83\x76\x83\x67\x8c\x6e\x8c\xbe\x8c\xea" -"\x82\xc5\x82\xcd\x83\x86\x81\x5b\x83\x55\x82\xcc\x96\xda\x90\xe6" -"\x82\xcc\x97\x98\x95\xd6\x90\xab\x82\xf0\x97\x44\x90\xe6\x82\xb5" -"\x82\xc4\x90\x46\x81\x58\x82\xc8\x8b\x40\x94\x5c\x82\xf0\x8c\xbe" -"\x8c\xea\x97\x76\x91\x66\x82\xc6\x82\xb5\x82\xc4\x8e\xe6\x82\xe8" -"\x93\xfc\x82\xea\x82\xe9\x8f\xea\x8d\x87\x82\xaa\x91\xbd\x82\xa2" -"\x82\xcc\x82\xc5\x82\xb7\x82\xaa\x81\x41\x50\x79\x74\x68\x6f\x6e" -"\x20\x82\xc5\x82\xcd\x82\xbb\x82\xa4\x82\xa2\x82\xc1\x82\xbd\x8f" -"\xac\x8d\xd7\x8d\x48\x82\xaa\x92\xc7\x89\xc1\x82\xb3\x82\xea\x82" -"\xe9\x82\xb1\x82\xc6\x82\xcd\x82\xa0\x82\xdc\x82\xe8\x82\xa0\x82" -"\xe8\x82\xdc\x82\xb9\x82\xf1\x81\x42\x0a\x8c\xbe\x8c\xea\x8e\xa9" -"\x91\xcc\x82\xcc\x8b\x40\x94\x5c\x82\xcd\x8d\xc5\x8f\xac\x8c\xc0" -"\x82\xc9\x89\x9f\x82\xb3\x82\xa6\x81\x41\x95\x4b\x97\x76\x82\xc8" -"\x8b\x40\x94\x5c\x82\xcd\x8a\x67\x92\xa3\x83\x82\x83\x57\x83\x85" -"\x81\x5b\x83\x8b\x82\xc6\x82\xb5\x82\xc4\x92\xc7\x89\xc1\x82\xb7" -"\x82\xe9\x81\x41\x82\xc6\x82\xa2\x82\xa4\x82\xcc\x82\xaa\x20\x50" -"\x79\x74\x68\x6f\x6e\x20\x82\xcc\x83\x7c\x83\x8a\x83\x56\x81\x5b" -"\x82\xc5\x82\xb7\x81\x42\x0a\x0a\x83\x6d\x82\xf5\x20\x83\x9e\x20" -"\x83\x67\x83\x4c\x88\x4b\x88\x79\x20\x98\x83\xfc\xd6\x20\xfc\xd2" -"\xfc\xe6\xfb\xd4\x0a", -"\x50\x79\x74\x68\x6f\x6e\x20\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba" 
-"\xe3\x81\xaf\xe3\x80\x81\x31\x39\x39\x30\x20\xe5\xb9\xb4\xe3\x81" -"\x94\xe3\x82\x8d\xe3\x81\x8b\xe3\x82\x89\xe9\x96\x8b\xe5\xa7\x8b" -"\xe3\x81\x95\xe3\x82\x8c\xe3\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3" -"\x81\x99\xe3\x80\x82\x0a\xe9\x96\x8b\xe7\x99\xba\xe8\x80\x85\xe3" -"\x81\xae\x20\x47\x75\x69\x64\x6f\x20\x76\x61\x6e\x20\x52\x6f\x73" -"\x73\x75\x6d\x20\xe3\x81\xaf\xe6\x95\x99\xe8\x82\xb2\xe7\x94\xa8" -"\xe3\x81\xae\xe3\x83\x97\xe3\x83\xad\xe3\x82\xb0\xe3\x83\xa9\xe3" -"\x83\x9f\xe3\x83\xb3\xe3\x82\xb0\xe8\xa8\x80\xe8\xaa\x9e\xe3\x80" -"\x8c\x41\x42\x43\xe3\x80\x8d\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba" -"\xe3\x81\xab\xe5\x8f\x82\xe5\x8a\xa0\xe3\x81\x97\xe3\x81\xa6\xe3" -"\x81\x84\xe3\x81\xbe\xe3\x81\x97\xe3\x81\x9f\xe3\x81\x8c\xe3\x80" -"\x81\x41\x42\x43\x20\xe3\x81\xaf\xe5\xae\x9f\xe7\x94\xa8\xe4\xb8" -"\x8a\xe3\x81\xae\xe7\x9b\xae\xe7\x9a\x84\xe3\x81\xab\xe3\x81\xaf" -"\xe3\x81\x82\xe3\x81\xbe\xe3\x82\x8a\xe9\x81\xa9\xe3\x81\x97\xe3" -"\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3\x81\x9b\xe3\x82\x93\xe3\x81" -"\xa7\xe3\x81\x97\xe3\x81\x9f\xe3\x80\x82\x0a\xe3\x81\x93\xe3\x81" -"\xae\xe3\x81\x9f\xe3\x82\x81\xe3\x80\x81\x47\x75\x69\x64\x6f\x20" -"\xe3\x81\xaf\xe3\x82\x88\xe3\x82\x8a\xe5\xae\x9f\xe7\x94\xa8\xe7" -"\x9a\x84\xe3\x81\xaa\xe3\x83\x97\xe3\x83\xad\xe3\x82\xb0\xe3\x83" -"\xa9\xe3\x83\x9f\xe3\x83\xb3\xe3\x82\xb0\xe8\xa8\x80\xe8\xaa\x9e" -"\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba\xe3\x82\x92\xe9\x96\x8b\xe5" -"\xa7\x8b\xe3\x81\x97\xe3\x80\x81\xe8\x8b\xb1\xe5\x9b\xbd\x20\x42" -"\x42\x53\x20\xe6\x94\xbe\xe9\x80\x81\xe3\x81\xae\xe3\x82\xb3\xe3" -"\x83\xa1\xe3\x83\x87\xe3\x82\xa3\xe7\x95\xaa\xe7\xb5\x84\xe3\x80" -"\x8c\xe3\x83\xa2\xe3\x83\xb3\xe3\x83\x86\xe3\x82\xa3\x20\xe3\x83" -"\x91\xe3\x82\xa4\xe3\x82\xbd\xe3\x83\xb3\xe3\x80\x8d\xe3\x81\xae" -"\xe3\x83\x95\xe3\x82\xa1\xe3\x83\xb3\xe3\x81\xa7\xe3\x81\x82\xe3" -"\x82\x8b\x20\x47\x75\x69\x64\x6f\x20\xe3\x81\xaf\xe3\x81\x93\xe3" -"\x81\xae\xe8\xa8\x80\xe8\xaa\x9e\xe3\x82\x92\xe3\x80\x8c\x50\x79" -"\x74\x68\x6f\x6e\xe3\x80\x8d\xe3\x81\xa8\xe5\x90\x8d\xe3\x81\xa5" -"\xe3\x81\x91\xe3\x81\xbe\xe3\x81\x97\xe3\x81\x9f\xe3\x80\x82\x0a" -"\xe3\x81\x93\xe3\x81\xae\xe3\x82\x88\xe3\x81\x86\xe3\x81\xaa\xe8" -"\x83\x8c\xe6\x99\xaf\xe3\x81\x8b\xe3\x82\x89\xe7\x94\x9f\xe3\x81" -"\xbe\xe3\x82\x8c\xe3\x81\x9f\x20\x50\x79\x74\x68\x6f\x6e\x20\xe3" -"\x81\xae\xe8\xa8\x80\xe8\xaa\x9e\xe8\xa8\xad\xe8\xa8\x88\xe3\x81" -"\xaf\xe3\x80\x81\xe3\x80\x8c\xe3\x82\xb7\xe3\x83\xb3\xe3\x83\x97" -"\xe3\x83\xab\xe3\x80\x8d\xe3\x81\xa7\xe3\x80\x8c\xe7\xbf\x92\xe5" -"\xbe\x97\xe3\x81\x8c\xe5\xae\xb9\xe6\x98\x93\xe3\x80\x8d\xe3\x81" -"\xa8\xe3\x81\x84\xe3\x81\x86\xe7\x9b\xae\xe6\xa8\x99\xe3\x81\xab" -"\xe9\x87\x8d\xe7\x82\xb9\xe3\x81\x8c\xe7\xbd\xae\xe3\x81\x8b\xe3" -"\x82\x8c\xe3\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3\x81\x99\xe3\x80" -"\x82\x0a\xe5\xa4\x9a\xe3\x81\x8f\xe3\x81\xae\xe3\x82\xb9\xe3\x82" -"\xaf\xe3\x83\xaa\xe3\x83\x97\xe3\x83\x88\xe7\xb3\xbb\xe8\xa8\x80" -"\xe8\xaa\x9e\xe3\x81\xa7\xe3\x81\xaf\xe3\x83\xa6\xe3\x83\xbc\xe3" -"\x82\xb6\xe3\x81\xae\xe7\x9b\xae\xe5\x85\x88\xe3\x81\xae\xe5\x88" -"\xa9\xe4\xbe\xbf\xe6\x80\xa7\xe3\x82\x92\xe5\x84\xaa\xe5\x85\x88" -"\xe3\x81\x97\xe3\x81\xa6\xe8\x89\xb2\xe3\x80\x85\xe3\x81\xaa\xe6" -"\xa9\x9f\xe8\x83\xbd\xe3\x82\x92\xe8\xa8\x80\xe8\xaa\x9e\xe8\xa6" -"\x81\xe7\xb4\xa0\xe3\x81\xa8\xe3\x81\x97\xe3\x81\xa6\xe5\x8f\x96" -"\xe3\x82\x8a\xe5\x85\xa5\xe3\x82\x8c\xe3\x82\x8b\xe5\xa0\xb4\xe5" -"\x90\x88\xe3\x81\x8c\xe5\xa4\x9a\xe3\x81\x84\xe3\x81\xae\xe3\x81" -"\xa7\xe3\x81\x99\xe3\x81\x8c\xe3\x80\x81\x50\x79\x74\x68\x6f\x6e" 
-"\x20\xe3\x81\xa7\xe3\x81\xaf\xe3\x81\x9d\xe3\x81\x86\xe3\x81\x84" -"\xe3\x81\xa3\xe3\x81\x9f\xe5\xb0\x8f\xe7\xb4\xb0\xe5\xb7\xa5\xe3" -"\x81\x8c\xe8\xbf\xbd\xe5\x8a\xa0\xe3\x81\x95\xe3\x82\x8c\xe3\x82" -"\x8b\xe3\x81\x93\xe3\x81\xa8\xe3\x81\xaf\xe3\x81\x82\xe3\x81\xbe" -"\xe3\x82\x8a\xe3\x81\x82\xe3\x82\x8a\xe3\x81\xbe\xe3\x81\x9b\xe3" -"\x82\x93\xe3\x80\x82\x0a\xe8\xa8\x80\xe8\xaa\x9e\xe8\x87\xaa\xe4" -"\xbd\x93\xe3\x81\xae\xe6\xa9\x9f\xe8\x83\xbd\xe3\x81\xaf\xe6\x9c" -"\x80\xe5\xb0\x8f\xe9\x99\x90\xe3\x81\xab\xe6\x8a\xbc\xe3\x81\x95" -"\xe3\x81\x88\xe3\x80\x81\xe5\xbf\x85\xe8\xa6\x81\xe3\x81\xaa\xe6" -"\xa9\x9f\xe8\x83\xbd\xe3\x81\xaf\xe6\x8b\xa1\xe5\xbc\xb5\xe3\x83" -"\xa2\xe3\x82\xb8\xe3\x83\xa5\xe3\x83\xbc\xe3\x83\xab\xe3\x81\xa8" -"\xe3\x81\x97\xe3\x81\xa6\xe8\xbf\xbd\xe5\x8a\xa0\xe3\x81\x99\xe3" -"\x82\x8b\xe3\x80\x81\xe3\x81\xa8\xe3\x81\x84\xe3\x81\x86\xe3\x81" -"\xae\xe3\x81\x8c\x20\x50\x79\x74\x68\x6f\x6e\x20\xe3\x81\xae\xe3" -"\x83\x9d\xe3\x83\xaa\xe3\x82\xb7\xe3\x83\xbc\xe3\x81\xa7\xe3\x81" -"\x99\xe3\x80\x82\x0a\x0a\xe3\x83\x8e\xe3\x81\x8b\xe3\x82\x9a\x20" -"\xe3\x83\x88\xe3\x82\x9a\x20\xe3\x83\x88\xe3\x82\xad\xef\xa8\xb6" -"\xef\xa8\xb9\x20\xf0\xa1\x9a\xb4\xf0\xaa\x8e\x8c\x20\xe9\xba\x80" -"\xe9\xbd\x81\xf0\xa9\x9b\xb0\x0a"), -} diff --git a/lib-python/2.7/test/crashers/README b/lib-python/2.7/test/crashers/README --- a/lib-python/2.7/test/crashers/README +++ b/lib-python/2.7/test/crashers/README @@ -1,20 +1,16 @@ -This directory only contains tests for outstanding bugs that cause -the interpreter to segfault. Ideally this directory should always -be empty. Sometimes it may not be easy to fix the underlying cause. +This directory only contains tests for outstanding bugs that cause the +interpreter to segfault. Ideally this directory should always be empty, but +sometimes it may not be easy to fix the underlying cause and the bug is deemed +too obscure to invest the effort. Each test should fail when run from the command line: ./python Lib/test/crashers/weakref_in_del.py -Each test should have a link to the bug report: +Put as much info into a docstring or comments to help determine the cause of the +failure, as well as a bugs.python.org issue number if it exists. Particularly +note if the cause is system or environment dependent and what the variables are. - # http://python.org/sf/BUG# - -Put as much info into a docstring or comments to help determine -the cause of the failure. Particularly note if the cause is -system or environment dependent and what the variables are. - -Once the crash is fixed, the test case should be moved into an appropriate -test (even if it was originally from the test suite). This ensures the -regression doesn't happen again. And if it does, it should be easier -to track down. +Once the crash is fixed, the test case should be moved into an appropriate test +(even if it was originally from the test suite). This ensures the regression +doesn't happen again. And if it does, it should be easier to track down. diff --git a/lib-python/2.7/test/crashers/recursion_limit_too_high.py b/lib-python/2.7/test/crashers/recursion_limit_too_high.py --- a/lib-python/2.7/test/crashers/recursion_limit_too_high.py +++ b/lib-python/2.7/test/crashers/recursion_limit_too_high.py @@ -5,7 +5,7 @@ # file handles. # The point of this example is to show that sys.setrecursionlimit() is a -# hack, and not a robust solution. This example simply exercices a path +# hack, and not a robust solution. 
This example simply exercises a path # where it takes many C-level recursions, consuming a lot of stack # space, for each Python-level recursion. So 1000 times this amount of # stack space may be too much for standard platforms already. diff --git a/lib-python/2.7/test/decimaltestdata/and.decTest b/lib-python/2.7/test/decimaltestdata/and.decTest --- a/lib-python/2.7/test/decimaltestdata/and.decTest +++ b/lib-python/2.7/test/decimaltestdata/and.decTest @@ -1,338 +1,338 @@ ------------------------------------------------------------------------- --- and.decTest -- digitwise logical AND -- --- Copyright (c) IBM Corporation, 1981, 2008. All rights reserved. -- ------------------------------------------------------------------------- --- Please see the document "General Decimal Arithmetic Testcases" -- --- at http://www2.hursley.ibm.com/decimal for the description of -- --- these testcases. -- --- -- --- These testcases are experimental ('beta' versions), and they -- --- may contain errors. They are offered on an as-is basis. In -- --- particular, achieving the same results as the tests here is not -- --- a guarantee that an implementation complies with any Standard -- --- or specification. The tests are not exhaustive. -- --- -- --- Please send comments, suggestions, and corrections to the author: -- --- Mike Cowlishaw, IBM Fellow -- --- IBM UK, PO Box 31, Birmingham Road, Warwick CV34 5JL, UK -- --- mfc at uk.ibm.com -- ------------------------------------------------------------------------- -version: 2.59 - -extended: 1 -precision: 9 -rounding: half_up -maxExponent: 999 -minExponent: -999 - --- Sanity check (truth table) -andx001 and 0 0 -> 0 -andx002 and 0 1 -> 0 -andx003 and 1 0 -> 0 -andx004 and 1 1 -> 1 -andx005 and 1100 1010 -> 1000 -andx006 and 1111 10 -> 10 -andx007 and 1111 1010 -> 1010 - --- and at msd and msd-1 -andx010 and 000000000 000000000 -> 0 -andx011 and 000000000 100000000 -> 0 -andx012 and 100000000 000000000 -> 0 -andx013 and 100000000 100000000 -> 100000000 -andx014 and 000000000 000000000 -> 0 -andx015 and 000000000 010000000 -> 0 -andx016 and 010000000 000000000 -> 0 -andx017 and 010000000 010000000 -> 10000000 - --- Various lengths --- 123456789 123456789 123456789 -andx021 and 111111111 111111111 -> 111111111 -andx022 and 111111111111 111111111 -> 111111111 -andx023 and 111111111111 11111111 -> 11111111 -andx024 and 111111111 11111111 -> 11111111 -andx025 and 111111111 1111111 -> 1111111 -andx026 and 111111111111 111111 -> 111111 -andx027 and 111111111111 11111 -> 11111 -andx028 and 111111111111 1111 -> 1111 -andx029 and 111111111111 111 -> 111 -andx031 and 111111111111 11 -> 11 -andx032 and 111111111111 1 -> 1 -andx033 and 111111111111 1111111111 -> 111111111 -andx034 and 11111111111 11111111111 -> 111111111 -andx035 and 1111111111 111111111111 -> 111111111 -andx036 and 111111111 1111111111111 -> 111111111 - -andx040 and 111111111 111111111111 -> 111111111 -andx041 and 11111111 111111111111 -> 11111111 -andx042 and 11111111 111111111 -> 11111111 -andx043 and 1111111 111111111 -> 1111111 -andx044 and 111111 111111111 -> 111111 -andx045 and 11111 111111111 -> 11111 -andx046 and 1111 111111111 -> 1111 -andx047 and 111 111111111 -> 111 -andx048 and 11 111111111 -> 11 -andx049 and 1 111111111 -> 1 - -andx050 and 1111111111 1 -> 1 -andx051 and 111111111 1 -> 1 -andx052 and 11111111 1 -> 1 -andx053 and 1111111 1 -> 1 -andx054 and 111111 1 -> 1 -andx055 and 11111 1 -> 1 -andx056 and 1111 1 -> 1 -andx057 and 111 1 -> 1 -andx058 and 11 1 -> 1 -andx059 and 1 1 -> 1 - -andx060 
and 1111111111 0 -> 0 -andx061 and 111111111 0 -> 0 -andx062 and 11111111 0 -> 0 -andx063 and 1111111 0 -> 0 -andx064 and 111111 0 -> 0 -andx065 and 11111 0 -> 0 -andx066 and 1111 0 -> 0 -andx067 and 111 0 -> 0 -andx068 and 11 0 -> 0 -andx069 and 1 0 -> 0 - -andx070 and 1 1111111111 -> 1 -andx071 and 1 111111111 -> 1 -andx072 and 1 11111111 -> 1 -andx073 and 1 1111111 -> 1 -andx074 and 1 111111 -> 1 -andx075 and 1 11111 -> 1 -andx076 and 1 1111 -> 1 -andx077 and 1 111 -> 1 -andx078 and 1 11 -> 1 -andx079 and 1 1 -> 1 - -andx080 and 0 1111111111 -> 0 -andx081 and 0 111111111 -> 0 -andx082 and 0 11111111 -> 0 -andx083 and 0 1111111 -> 0 -andx084 and 0 111111 -> 0 -andx085 and 0 11111 -> 0 -andx086 and 0 1111 -> 0 -andx087 and 0 111 -> 0 -andx088 and 0 11 -> 0 -andx089 and 0 1 -> 0 - -andx090 and 011111111 111111111 -> 11111111 -andx091 and 101111111 111111111 -> 101111111 -andx092 and 110111111 111111111 -> 110111111 -andx093 and 111011111 111111111 -> 111011111 -andx094 and 111101111 111111111 -> 111101111 -andx095 and 111110111 111111111 -> 111110111 -andx096 and 111111011 111111111 -> 111111011 -andx097 and 111111101 111111111 -> 111111101 -andx098 and 111111110 111111111 -> 111111110 - -andx100 and 111111111 011111111 -> 11111111 -andx101 and 111111111 101111111 -> 101111111 -andx102 and 111111111 110111111 -> 110111111 -andx103 and 111111111 111011111 -> 111011111 -andx104 and 111111111 111101111 -> 111101111 -andx105 and 111111111 111110111 -> 111110111 -andx106 and 111111111 111111011 -> 111111011 -andx107 and 111111111 111111101 -> 111111101 -andx108 and 111111111 111111110 -> 111111110 - --- non-0/1 should not be accepted, nor should signs -andx220 and 111111112 111111111 -> NaN Invalid_operation -andx221 and 333333333 333333333 -> NaN Invalid_operation -andx222 and 555555555 555555555 -> NaN Invalid_operation -andx223 and 777777777 777777777 -> NaN Invalid_operation -andx224 and 999999999 999999999 -> NaN Invalid_operation -andx225 and 222222222 999999999 -> NaN Invalid_operation -andx226 and 444444444 999999999 -> NaN Invalid_operation -andx227 and 666666666 999999999 -> NaN Invalid_operation -andx228 and 888888888 999999999 -> NaN Invalid_operation -andx229 and 999999999 222222222 -> NaN Invalid_operation -andx230 and 999999999 444444444 -> NaN Invalid_operation -andx231 and 999999999 666666666 -> NaN Invalid_operation -andx232 and 999999999 888888888 -> NaN Invalid_operation --- a few randoms -andx240 and 567468689 -934981942 -> NaN Invalid_operation -andx241 and 567367689 934981942 -> NaN Invalid_operation -andx242 and -631917772 -706014634 -> NaN Invalid_operation -andx243 and -756253257 138579234 -> NaN Invalid_operation -andx244 and 835590149 567435400 -> NaN Invalid_operation --- test MSD -andx250 and 200000000 100000000 -> NaN Invalid_operation -andx251 and 700000000 100000000 -> NaN Invalid_operation -andx252 and 800000000 100000000 -> NaN Invalid_operation -andx253 and 900000000 100000000 -> NaN Invalid_operation -andx254 and 200000000 000000000 -> NaN Invalid_operation -andx255 and 700000000 000000000 -> NaN Invalid_operation -andx256 and 800000000 000000000 -> NaN Invalid_operation -andx257 and 900000000 000000000 -> NaN Invalid_operation -andx258 and 100000000 200000000 -> NaN Invalid_operation -andx259 and 100000000 700000000 -> NaN Invalid_operation -andx260 and 100000000 800000000 -> NaN Invalid_operation -andx261 and 100000000 900000000 -> NaN Invalid_operation -andx262 and 000000000 200000000 -> NaN Invalid_operation -andx263 and 000000000 700000000 -> NaN 
Invalid_operation -andx264 and 000000000 800000000 -> NaN Invalid_operation -andx265 and 000000000 900000000 -> NaN Invalid_operation --- test MSD-1 -andx270 and 020000000 100000000 -> NaN Invalid_operation -andx271 and 070100000 100000000 -> NaN Invalid_operation -andx272 and 080010000 100000001 -> NaN Invalid_operation -andx273 and 090001000 100000010 -> NaN Invalid_operation -andx274 and 100000100 020010100 -> NaN Invalid_operation -andx275 and 100000000 070001000 -> NaN Invalid_operation -andx276 and 100000010 080010100 -> NaN Invalid_operation -andx277 and 100000000 090000010 -> NaN Invalid_operation --- test LSD -andx280 and 001000002 100000000 -> NaN Invalid_operation -andx281 and 000000007 100000000 -> NaN Invalid_operation -andx282 and 000000008 100000000 -> NaN Invalid_operation -andx283 and 000000009 100000000 -> NaN Invalid_operation -andx284 and 100000000 000100002 -> NaN Invalid_operation -andx285 and 100100000 001000007 -> NaN Invalid_operation -andx286 and 100010000 010000008 -> NaN Invalid_operation -andx287 and 100001000 100000009 -> NaN Invalid_operation --- test Middie -andx288 and 001020000 100000000 -> NaN Invalid_operation -andx289 and 000070001 100000000 -> NaN Invalid_operation -andx290 and 000080000 100010000 -> NaN Invalid_operation -andx291 and 000090000 100001000 -> NaN Invalid_operation -andx292 and 100000010 000020100 -> NaN Invalid_operation -andx293 and 100100000 000070010 -> NaN Invalid_operation -andx294 and 100010100 000080001 -> NaN Invalid_operation -andx295 and 100001000 000090000 -> NaN Invalid_operation --- signs -andx296 and -100001000 -000000000 -> NaN Invalid_operation -andx297 and -100001000 000010000 -> NaN Invalid_operation -andx298 and 100001000 -000000000 -> NaN Invalid_operation -andx299 and 100001000 000011000 -> 1000 - --- Nmax, Nmin, Ntiny -andx331 and 2 9.99999999E+999 -> NaN Invalid_operation -andx332 and 3 1E-999 -> NaN Invalid_operation -andx333 and 4 1.00000000E-999 -> NaN Invalid_operation -andx334 and 5 1E-1007 -> NaN Invalid_operation -andx335 and 6 -1E-1007 -> NaN Invalid_operation -andx336 and 7 -1.00000000E-999 -> NaN Invalid_operation -andx337 and 8 -1E-999 -> NaN Invalid_operation -andx338 and 9 -9.99999999E+999 -> NaN Invalid_operation -andx341 and 9.99999999E+999 -18 -> NaN Invalid_operation -andx342 and 1E-999 01 -> NaN Invalid_operation -andx343 and 1.00000000E-999 -18 -> NaN Invalid_operation -andx344 and 1E-1007 18 -> NaN Invalid_operation -andx345 and -1E-1007 -10 -> NaN Invalid_operation -andx346 and -1.00000000E-999 18 -> NaN Invalid_operation -andx347 and -1E-999 10 -> NaN Invalid_operation -andx348 and -9.99999999E+999 -18 -> NaN Invalid_operation - --- A few other non-integers -andx361 and 1.0 1 -> NaN Invalid_operation -andx362 and 1E+1 1 -> NaN Invalid_operation -andx363 and 0.0 1 -> NaN Invalid_operation -andx364 and 0E+1 1 -> NaN Invalid_operation -andx365 and 9.9 1 -> NaN Invalid_operation -andx366 and 9E+1 1 -> NaN Invalid_operation -andx371 and 0 1.0 -> NaN Invalid_operation -andx372 and 0 1E+1 -> NaN Invalid_operation -andx373 and 0 0.0 -> NaN Invalid_operation -andx374 and 0 0E+1 -> NaN Invalid_operation -andx375 and 0 9.9 -> NaN Invalid_operation -andx376 and 0 9E+1 -> NaN Invalid_operation - --- All Specials are in error -andx780 and -Inf -Inf -> NaN Invalid_operation -andx781 and -Inf -1000 -> NaN Invalid_operation -andx782 and -Inf -1 -> NaN Invalid_operation -andx783 and -Inf -0 -> NaN Invalid_operation -andx784 and -Inf 0 -> NaN Invalid_operation -andx785 and -Inf 1 -> NaN Invalid_operation 
-andx786 and -Inf 1000 -> NaN Invalid_operation -andx787 and -1000 -Inf -> NaN Invalid_operation -andx788 and -Inf -Inf -> NaN Invalid_operation -andx789 and -1 -Inf -> NaN Invalid_operation -andx790 and -0 -Inf -> NaN Invalid_operation -andx791 and 0 -Inf -> NaN Invalid_operation -andx792 and 1 -Inf -> NaN Invalid_operation -andx793 and 1000 -Inf -> NaN Invalid_operation -andx794 and Inf -Inf -> NaN Invalid_operation - -andx800 and Inf -Inf -> NaN Invalid_operation -andx801 and Inf -1000 -> NaN Invalid_operation -andx802 and Inf -1 -> NaN Invalid_operation -andx803 and Inf -0 -> NaN Invalid_operation -andx804 and Inf 0 -> NaN Invalid_operation -andx805 and Inf 1 -> NaN Invalid_operation -andx806 and Inf 1000 -> NaN Invalid_operation -andx807 and Inf Inf -> NaN Invalid_operation -andx808 and -1000 Inf -> NaN Invalid_operation -andx809 and -Inf Inf -> NaN Invalid_operation -andx810 and -1 Inf -> NaN Invalid_operation -andx811 and -0 Inf -> NaN Invalid_operation -andx812 and 0 Inf -> NaN Invalid_operation -andx813 and 1 Inf -> NaN Invalid_operation -andx814 and 1000 Inf -> NaN Invalid_operation -andx815 and Inf Inf -> NaN Invalid_operation - -andx821 and NaN -Inf -> NaN Invalid_operation -andx822 and NaN -1000 -> NaN Invalid_operation -andx823 and NaN -1 -> NaN Invalid_operation -andx824 and NaN -0 -> NaN Invalid_operation -andx825 and NaN 0 -> NaN Invalid_operation -andx826 and NaN 1 -> NaN Invalid_operation -andx827 and NaN 1000 -> NaN Invalid_operation -andx828 and NaN Inf -> NaN Invalid_operation -andx829 and NaN NaN -> NaN Invalid_operation -andx830 and -Inf NaN -> NaN Invalid_operation -andx831 and -1000 NaN -> NaN Invalid_operation -andx832 and -1 NaN -> NaN Invalid_operation -andx833 and -0 NaN -> NaN Invalid_operation -andx834 and 0 NaN -> NaN Invalid_operation -andx835 and 1 NaN -> NaN Invalid_operation -andx836 and 1000 NaN -> NaN Invalid_operation -andx837 and Inf NaN -> NaN Invalid_operation - -andx841 and sNaN -Inf -> NaN Invalid_operation -andx842 and sNaN -1000 -> NaN Invalid_operation -andx843 and sNaN -1 -> NaN Invalid_operation -andx844 and sNaN -0 -> NaN Invalid_operation -andx845 and sNaN 0 -> NaN Invalid_operation -andx846 and sNaN 1 -> NaN Invalid_operation -andx847 and sNaN 1000 -> NaN Invalid_operation -andx848 and sNaN NaN -> NaN Invalid_operation -andx849 and sNaN sNaN -> NaN Invalid_operation -andx850 and NaN sNaN -> NaN Invalid_operation -andx851 and -Inf sNaN -> NaN Invalid_operation -andx852 and -1000 sNaN -> NaN Invalid_operation -andx853 and -1 sNaN -> NaN Invalid_operation -andx854 and -0 sNaN -> NaN Invalid_operation -andx855 and 0 sNaN -> NaN Invalid_operation -andx856 and 1 sNaN -> NaN Invalid_operation -andx857 and 1000 sNaN -> NaN Invalid_operation -andx858 and Inf sNaN -> NaN Invalid_operation -andx859 and NaN sNaN -> NaN Invalid_operation - --- propagating NaNs -andx861 and NaN1 -Inf -> NaN Invalid_operation -andx862 and +NaN2 -1000 -> NaN Invalid_operation -andx863 and NaN3 1000 -> NaN Invalid_operation -andx864 and NaN4 Inf -> NaN Invalid_operation -andx865 and NaN5 +NaN6 -> NaN Invalid_operation -andx866 and -Inf NaN7 -> NaN Invalid_operation -andx867 and -1000 NaN8 -> NaN Invalid_operation -andx868 and 1000 NaN9 -> NaN Invalid_operation -andx869 and Inf +NaN10 -> NaN Invalid_operation -andx871 and sNaN11 -Inf -> NaN Invalid_operation -andx872 and sNaN12 -1000 -> NaN Invalid_operation -andx873 and sNaN13 1000 -> NaN Invalid_operation -andx874 and sNaN14 NaN17 -> NaN Invalid_operation -andx875 and sNaN15 sNaN18 -> NaN Invalid_operation -andx876 and 
NaN16 sNaN19 -> NaN Invalid_operation -andx877 and -Inf +sNaN20 -> NaN Invalid_operation -andx878 and -1000 sNaN21 -> NaN Invalid_operation -andx879 and 1000 sNaN22 -> NaN Invalid_operation -andx880 and Inf sNaN23 -> NaN Invalid_operation -andx881 and +NaN25 +sNaN24 -> NaN Invalid_operation -andx882 and -NaN26 NaN28 -> NaN Invalid_operation -andx883 and -sNaN27 sNaN29 -> NaN Invalid_operation -andx884 and 1000 -NaN30 -> NaN Invalid_operation -andx885 and 1000 -sNaN31 -> NaN Invalid_operation +------------------------------------------------------------------------ +-- and.decTest -- digitwise logical AND -- +-- Copyright (c) IBM Corporation, 1981, 2008. All rights reserved. -- +------------------------------------------------------------------------ +-- Please see the document "General Decimal Arithmetic Testcases" -- +-- at http://www2.hursley.ibm.com/decimal for the description of -- +-- these testcases. -- +-- -- +-- These testcases are experimental ('beta' versions), and they -- +-- may contain errors. They are offered on an as-is basis. In -- +-- particular, achieving the same results as the tests here is not -- +-- a guarantee that an implementation complies with any Standard -- +-- or specification. The tests are not exhaustive. -- +-- -- +-- Please send comments, suggestions, and corrections to the author: -- +-- Mike Cowlishaw, IBM Fellow -- +-- IBM UK, PO Box 31, Birmingham Road, Warwick CV34 5JL, UK -- +-- mfc at uk.ibm.com -- +------------------------------------------------------------------------ +version: 2.59 + +extended: 1 +precision: 9 +rounding: half_up +maxExponent: 999 +minExponent: -999 + +-- Sanity check (truth table) +andx001 and 0 0 -> 0 +andx002 and 0 1 -> 0 +andx003 and 1 0 -> 0 +andx004 and 1 1 -> 1 +andx005 and 1100 1010 -> 1000 +andx006 and 1111 10 -> 10 +andx007 and 1111 1010 -> 1010 + +-- and at msd and msd-1 +andx010 and 000000000 000000000 -> 0 +andx011 and 000000000 100000000 -> 0 +andx012 and 100000000 000000000 -> 0 +andx013 and 100000000 100000000 -> 100000000 +andx014 and 000000000 000000000 -> 0 +andx015 and 000000000 010000000 -> 0 +andx016 and 010000000 000000000 -> 0 +andx017 and 010000000 010000000 -> 10000000 + +-- Various lengths +-- 123456789 123456789 123456789 +andx021 and 111111111 111111111 -> 111111111 +andx022 and 111111111111 111111111 -> 111111111 +andx023 and 111111111111 11111111 -> 11111111 +andx024 and 111111111 11111111 -> 11111111 +andx025 and 111111111 1111111 -> 1111111 +andx026 and 111111111111 111111 -> 111111 +andx027 and 111111111111 11111 -> 11111 +andx028 and 111111111111 1111 -> 1111 +andx029 and 111111111111 111 -> 111 +andx031 and 111111111111 11 -> 11 +andx032 and 111111111111 1 -> 1 +andx033 and 111111111111 1111111111 -> 111111111 +andx034 and 11111111111 11111111111 -> 111111111 +andx035 and 1111111111 111111111111 -> 111111111 +andx036 and 111111111 1111111111111 -> 111111111 + +andx040 and 111111111 111111111111 -> 111111111 +andx041 and 11111111 111111111111 -> 11111111 +andx042 and 11111111 111111111 -> 11111111 +andx043 and 1111111 111111111 -> 1111111 +andx044 and 111111 111111111 -> 111111 +andx045 and 11111 111111111 -> 11111 +andx046 and 1111 111111111 -> 1111 +andx047 and 111 111111111 -> 111 +andx048 and 11 111111111 -> 11 +andx049 and 1 111111111 -> 1 + +andx050 and 1111111111 1 -> 1 +andx051 and 111111111 1 -> 1 +andx052 and 11111111 1 -> 1 +andx053 and 1111111 1 -> 1 +andx054 and 111111 1 -> 1 +andx055 and 11111 1 -> 1 +andx056 and 1111 1 -> 1 +andx057 and 111 1 -> 1 +andx058 and 11 1 -> 1 +andx059 
and 1 1 -> 1 + +andx060 and 1111111111 0 -> 0 +andx061 and 111111111 0 -> 0 +andx062 and 11111111 0 -> 0 +andx063 and 1111111 0 -> 0 +andx064 and 111111 0 -> 0 +andx065 and 11111 0 -> 0 +andx066 and 1111 0 -> 0 +andx067 and 111 0 -> 0 +andx068 and 11 0 -> 0 +andx069 and 1 0 -> 0 + +andx070 and 1 1111111111 -> 1 +andx071 and 1 111111111 -> 1 +andx072 and 1 11111111 -> 1 +andx073 and 1 1111111 -> 1 +andx074 and 1 111111 -> 1 +andx075 and 1 11111 -> 1 +andx076 and 1 1111 -> 1 +andx077 and 1 111 -> 1 +andx078 and 1 11 -> 1 +andx079 and 1 1 -> 1 + +andx080 and 0 1111111111 -> 0 +andx081 and 0 111111111 -> 0 +andx082 and 0 11111111 -> 0 +andx083 and 0 1111111 -> 0 +andx084 and 0 111111 -> 0 +andx085 and 0 11111 -> 0 +andx086 and 0 1111 -> 0 +andx087 and 0 111 -> 0 +andx088 and 0 11 -> 0 +andx089 and 0 1 -> 0 + +andx090 and 011111111 111111111 -> 11111111 +andx091 and 101111111 111111111 -> 101111111 +andx092 and 110111111 111111111 -> 110111111 +andx093 and 111011111 111111111 -> 111011111 +andx094 and 111101111 111111111 -> 111101111 +andx095 and 111110111 111111111 -> 111110111 +andx096 and 111111011 111111111 -> 111111011 +andx097 and 111111101 111111111 -> 111111101 +andx098 and 111111110 111111111 -> 111111110 + +andx100 and 111111111 011111111 -> 11111111 +andx101 and 111111111 101111111 -> 101111111 +andx102 and 111111111 110111111 -> 110111111 +andx103 and 111111111 111011111 -> 111011111 +andx104 and 111111111 111101111 -> 111101111 +andx105 and 111111111 111110111 -> 111110111 +andx106 and 111111111 111111011 -> 111111011 +andx107 and 111111111 111111101 -> 111111101 +andx108 and 111111111 111111110 -> 111111110 + +-- non-0/1 should not be accepted, nor should signs +andx220 and 111111112 111111111 -> NaN Invalid_operation +andx221 and 333333333 333333333 -> NaN Invalid_operation +andx222 and 555555555 555555555 -> NaN Invalid_operation +andx223 and 777777777 777777777 -> NaN Invalid_operation +andx224 and 999999999 999999999 -> NaN Invalid_operation +andx225 and 222222222 999999999 -> NaN Invalid_operation +andx226 and 444444444 999999999 -> NaN Invalid_operation +andx227 and 666666666 999999999 -> NaN Invalid_operation +andx228 and 888888888 999999999 -> NaN Invalid_operation +andx229 and 999999999 222222222 -> NaN Invalid_operation +andx230 and 999999999 444444444 -> NaN Invalid_operation +andx231 and 999999999 666666666 -> NaN Invalid_operation +andx232 and 999999999 888888888 -> NaN Invalid_operation +-- a few randoms +andx240 and 567468689 -934981942 -> NaN Invalid_operation +andx241 and 567367689 934981942 -> NaN Invalid_operation +andx242 and -631917772 -706014634 -> NaN Invalid_operation +andx243 and -756253257 138579234 -> NaN Invalid_operation +andx244 and 835590149 567435400 -> NaN Invalid_operation +-- test MSD +andx250 and 200000000 100000000 -> NaN Invalid_operation +andx251 and 700000000 100000000 -> NaN Invalid_operation +andx252 and 800000000 100000000 -> NaN Invalid_operation +andx253 and 900000000 100000000 -> NaN Invalid_operation +andx254 and 200000000 000000000 -> NaN Invalid_operation +andx255 and 700000000 000000000 -> NaN Invalid_operation +andx256 and 800000000 000000000 -> NaN Invalid_operation +andx257 and 900000000 000000000 -> NaN Invalid_operation +andx258 and 100000000 200000000 -> NaN Invalid_operation +andx259 and 100000000 700000000 -> NaN Invalid_operation +andx260 and 100000000 800000000 -> NaN Invalid_operation +andx261 and 100000000 900000000 -> NaN Invalid_operation +andx262 and 000000000 200000000 -> NaN Invalid_operation +andx263 and 000000000 
700000000 -> NaN Invalid_operation +andx264 and 000000000 800000000 -> NaN Invalid_operation +andx265 and 000000000 900000000 -> NaN Invalid_operation +-- test MSD-1 +andx270 and 020000000 100000000 -> NaN Invalid_operation +andx271 and 070100000 100000000 -> NaN Invalid_operation +andx272 and 080010000 100000001 -> NaN Invalid_operation +andx273 and 090001000 100000010 -> NaN Invalid_operation +andx274 and 100000100 020010100 -> NaN Invalid_operation +andx275 and 100000000 070001000 -> NaN Invalid_operation +andx276 and 100000010 080010100 -> NaN Invalid_operation +andx277 and 100000000 090000010 -> NaN Invalid_operation +-- test LSD +andx280 and 001000002 100000000 -> NaN Invalid_operation +andx281 and 000000007 100000000 -> NaN Invalid_operation +andx282 and 000000008 100000000 -> NaN Invalid_operation +andx283 and 000000009 100000000 -> NaN Invalid_operation +andx284 and 100000000 000100002 -> NaN Invalid_operation +andx285 and 100100000 001000007 -> NaN Invalid_operation +andx286 and 100010000 010000008 -> NaN Invalid_operation +andx287 and 100001000 100000009 -> NaN Invalid_operation +-- test Middie +andx288 and 001020000 100000000 -> NaN Invalid_operation +andx289 and 000070001 100000000 -> NaN Invalid_operation +andx290 and 000080000 100010000 -> NaN Invalid_operation +andx291 and 000090000 100001000 -> NaN Invalid_operation +andx292 and 100000010 000020100 -> NaN Invalid_operation +andx293 and 100100000 000070010 -> NaN Invalid_operation +andx294 and 100010100 000080001 -> NaN Invalid_operation +andx295 and 100001000 000090000 -> NaN Invalid_operation +-- signs +andx296 and -100001000 -000000000 -> NaN Invalid_operation +andx297 and -100001000 000010000 -> NaN Invalid_operation +andx298 and 100001000 -000000000 -> NaN Invalid_operation +andx299 and 100001000 000011000 -> 1000 + +-- Nmax, Nmin, Ntiny +andx331 and 2 9.99999999E+999 -> NaN Invalid_operation +andx332 and 3 1E-999 -> NaN Invalid_operation +andx333 and 4 1.00000000E-999 -> NaN Invalid_operation +andx334 and 5 1E-1007 -> NaN Invalid_operation +andx335 and 6 -1E-1007 -> NaN Invalid_operation +andx336 and 7 -1.00000000E-999 -> NaN Invalid_operation +andx337 and 8 -1E-999 -> NaN Invalid_operation +andx338 and 9 -9.99999999E+999 -> NaN Invalid_operation +andx341 and 9.99999999E+999 -18 -> NaN Invalid_operation +andx342 and 1E-999 01 -> NaN Invalid_operation +andx343 and 1.00000000E-999 -18 -> NaN Invalid_operation +andx344 and 1E-1007 18 -> NaN Invalid_operation +andx345 and -1E-1007 -10 -> NaN Invalid_operation +andx346 and -1.00000000E-999 18 -> NaN Invalid_operation +andx347 and -1E-999 10 -> NaN Invalid_operation +andx348 and -9.99999999E+999 -18 -> NaN Invalid_operation + +-- A few other non-integers +andx361 and 1.0 1 -> NaN Invalid_operation +andx362 and 1E+1 1 -> NaN Invalid_operation +andx363 and 0.0 1 -> NaN Invalid_operation +andx364 and 0E+1 1 -> NaN Invalid_operation +andx365 and 9.9 1 -> NaN Invalid_operation +andx366 and 9E+1 1 -> NaN Invalid_operation +andx371 and 0 1.0 -> NaN Invalid_operation +andx372 and 0 1E+1 -> NaN Invalid_operation +andx373 and 0 0.0 -> NaN Invalid_operation +andx374 and 0 0E+1 -> NaN Invalid_operation +andx375 and 0 9.9 -> NaN Invalid_operation +andx376 and 0 9E+1 -> NaN Invalid_operation + +-- All Specials are in error +andx780 and -Inf -Inf -> NaN Invalid_operation +andx781 and -Inf -1000 -> NaN Invalid_operation +andx782 and -Inf -1 -> NaN Invalid_operation +andx783 and -Inf -0 -> NaN Invalid_operation +andx784 and -Inf 0 -> NaN Invalid_operation +andx785 and -Inf 1 -> NaN 
Invalid_operation +andx786 and -Inf 1000 -> NaN Invalid_operation +andx787 and -1000 -Inf -> NaN Invalid_operation +andx788 and -Inf -Inf -> NaN Invalid_operation +andx789 and -1 -Inf -> NaN Invalid_operation +andx790 and -0 -Inf -> NaN Invalid_operation +andx791 and 0 -Inf -> NaN Invalid_operation +andx792 and 1 -Inf -> NaN Invalid_operation +andx793 and 1000 -Inf -> NaN Invalid_operation +andx794 and Inf -Inf -> NaN Invalid_operation + +andx800 and Inf -Inf -> NaN Invalid_operation +andx801 and Inf -1000 -> NaN Invalid_operation +andx802 and Inf -1 -> NaN Invalid_operation +andx803 and Inf -0 -> NaN Invalid_operation +andx804 and Inf 0 -> NaN Invalid_operation +andx805 and Inf 1 -> NaN Invalid_operation +andx806 and Inf 1000 -> NaN Invalid_operation +andx807 and Inf Inf -> NaN Invalid_operation +andx808 and -1000 Inf -> NaN Invalid_operation +andx809 and -Inf Inf -> NaN Invalid_operation +andx810 and -1 Inf -> NaN Invalid_operation +andx811 and -0 Inf -> NaN Invalid_operation +andx812 and 0 Inf -> NaN Invalid_operation +andx813 and 1 Inf -> NaN Invalid_operation +andx814 and 1000 Inf -> NaN Invalid_operation +andx815 and Inf Inf -> NaN Invalid_operation + +andx821 and NaN -Inf -> NaN Invalid_operation +andx822 and NaN -1000 -> NaN Invalid_operation +andx823 and NaN -1 -> NaN Invalid_operation +andx824 and NaN -0 -> NaN Invalid_operation +andx825 and NaN 0 -> NaN Invalid_operation +andx826 and NaN 1 -> NaN Invalid_operation +andx827 and NaN 1000 -> NaN Invalid_operation +andx828 and NaN Inf -> NaN Invalid_operation +andx829 and NaN NaN -> NaN Invalid_operation +andx830 and -Inf NaN -> NaN Invalid_operation +andx831 and -1000 NaN -> NaN Invalid_operation +andx832 and -1 NaN -> NaN Invalid_operation +andx833 and -0 NaN -> NaN Invalid_operation +andx834 and 0 NaN -> NaN Invalid_operation +andx835 and 1 NaN -> NaN Invalid_operation +andx836 and 1000 NaN -> NaN Invalid_operation +andx837 and Inf NaN -> NaN Invalid_operation + +andx841 and sNaN -Inf -> NaN Invalid_operation +andx842 and sNaN -1000 -> NaN Invalid_operation +andx843 and sNaN -1 -> NaN Invalid_operation +andx844 and sNaN -0 -> NaN Invalid_operation +andx845 and sNaN 0 -> NaN Invalid_operation +andx846 and sNaN 1 -> NaN Invalid_operation +andx847 and sNaN 1000 -> NaN Invalid_operation +andx848 and sNaN NaN -> NaN Invalid_operation +andx849 and sNaN sNaN -> NaN Invalid_operation +andx850 and NaN sNaN -> NaN Invalid_operation +andx851 and -Inf sNaN -> NaN Invalid_operation +andx852 and -1000 sNaN -> NaN Invalid_operation +andx853 and -1 sNaN -> NaN Invalid_operation +andx854 and -0 sNaN -> NaN Invalid_operation +andx855 and 0 sNaN -> NaN Invalid_operation +andx856 and 1 sNaN -> NaN Invalid_operation +andx857 and 1000 sNaN -> NaN Invalid_operation +andx858 and Inf sNaN -> NaN Invalid_operation +andx859 and NaN sNaN -> NaN Invalid_operation + +-- propagating NaNs +andx861 and NaN1 -Inf -> NaN Invalid_operation +andx862 and +NaN2 -1000 -> NaN Invalid_operation +andx863 and NaN3 1000 -> NaN Invalid_operation +andx864 and NaN4 Inf -> NaN Invalid_operation +andx865 and NaN5 +NaN6 -> NaN Invalid_operation +andx866 and -Inf NaN7 -> NaN Invalid_operation +andx867 and -1000 NaN8 -> NaN Invalid_operation +andx868 and 1000 NaN9 -> NaN Invalid_operation +andx869 and Inf +NaN10 -> NaN Invalid_operation +andx871 and sNaN11 -Inf -> NaN Invalid_operation +andx872 and sNaN12 -1000 -> NaN Invalid_operation +andx873 and sNaN13 1000 -> NaN Invalid_operation +andx874 and sNaN14 NaN17 -> NaN Invalid_operation +andx875 and sNaN15 sNaN18 -> NaN 
Invalid_operation +andx876 and NaN16 sNaN19 -> NaN Invalid_operation +andx877 and -Inf +sNaN20 -> NaN Invalid_operation +andx878 and -1000 sNaN21 -> NaN Invalid_operation +andx879 and 1000 sNaN22 -> NaN Invalid_operation +andx880 and Inf sNaN23 -> NaN Invalid_operation +andx881 and +NaN25 +sNaN24 -> NaN Invalid_operation +andx882 and -NaN26 NaN28 -> NaN Invalid_operation +andx883 and -sNaN27 sNaN29 -> NaN Invalid_operation +andx884 and 1000 -NaN30 -> NaN Invalid_operation +andx885 and 1000 -sNaN31 -> NaN Invalid_operation diff --git a/lib-python/2.7/test/decimaltestdata/class.decTest b/lib-python/2.7/test/decimaltestdata/class.decTest --- a/lib-python/2.7/test/decimaltestdata/class.decTest +++ b/lib-python/2.7/test/decimaltestdata/class.decTest @@ -1,131 +1,131 @@ ------------------------------------------------------------------------- --- class.decTest -- Class operations -- --- Copyright (c) IBM Corporation, 1981, 2008. All rights reserved. -- ------------------------------------------------------------------------- --- Please see the document "General Decimal Arithmetic Testcases" -- --- at http://www2.hursley.ibm.com/decimal for the description of -- --- these testcases. -- --- -- --- These testcases are experimental ('beta' versions), and they -- --- may contain errors. They are offered on an as-is basis. In -- --- particular, achieving the same results as the tests here is not -- --- a guarantee that an implementation complies with any Standard -- --- or specification. The tests are not exhaustive. -- --- -- --- Please send comments, suggestions, and corrections to the author: -- --- Mike Cowlishaw, IBM Fellow -- --- IBM UK, PO Box 31, Birmingham Road, Warwick CV34 5JL, UK -- --- mfc at uk.ibm.com -- ------------------------------------------------------------------------- -version: 2.59 - --- [New 2006.11.27] - -precision: 9 -maxExponent: 999 -minExponent: -999 -extended: 1 -clamp: 1 -rounding: half_even - -clasx001 class 0 -> +Zero -clasx002 class 0.00 -> +Zero -clasx003 class 0E+5 -> +Zero -clasx004 class 1E-1007 -> +Subnormal -clasx005 class 0.1E-999 -> +Subnormal -clasx006 class 0.99999999E-999 -> +Subnormal -clasx007 class 1.00000000E-999 -> +Normal -clasx008 class 1E-999 -> +Normal -clasx009 class 1E-100 -> +Normal -clasx010 class 1E-10 -> +Normal -clasx012 class 1E-1 -> +Normal -clasx013 class 1 -> +Normal -clasx014 class 2.50 -> +Normal -clasx015 class 100.100 -> +Normal -clasx016 class 1E+30 -> +Normal -clasx017 class 1E+999 -> +Normal -clasx018 class 9.99999999E+999 -> +Normal -clasx019 class Inf -> +Infinity - -clasx021 class -0 -> -Zero -clasx022 class -0.00 -> -Zero -clasx023 class -0E+5 -> -Zero -clasx024 class -1E-1007 -> -Subnormal -clasx025 class -0.1E-999 -> -Subnormal -clasx026 class -0.99999999E-999 -> -Subnormal -clasx027 class -1.00000000E-999 -> -Normal -clasx028 class -1E-999 -> -Normal -clasx029 class -1E-100 -> -Normal -clasx030 class -1E-10 -> -Normal -clasx032 class -1E-1 -> -Normal -clasx033 class -1 -> -Normal -clasx034 class -2.50 -> -Normal -clasx035 class -100.100 -> -Normal -clasx036 class -1E+30 -> -Normal -clasx037 class -1E+999 -> -Normal -clasx038 class -9.99999999E+999 -> -Normal -clasx039 class -Inf -> -Infinity - -clasx041 class NaN -> NaN -clasx042 class -NaN -> NaN -clasx043 class +NaN12345 -> NaN -clasx044 class sNaN -> sNaN -clasx045 class -sNaN -> sNaN -clasx046 class +sNaN12345 -> sNaN - - --- decimal64 bounds - -precision: 16 -maxExponent: 384 -minExponent: -383 -clamp: 1 -rounding: half_even - -clasx201 class 0 -> +Zero -clasx202 
class 0.00 -> +Zero -clasx203 class 0E+5 -> +Zero -clasx204 class 1E-396 -> +Subnormal -clasx205 class 0.1E-383 -> +Subnormal -clasx206 class 0.999999999999999E-383 -> +Subnormal -clasx207 class 1.000000000000000E-383 -> +Normal -clasx208 class 1E-383 -> +Normal -clasx209 class 1E-100 -> +Normal -clasx210 class 1E-10 -> +Normal -clasx212 class 1E-1 -> +Normal -clasx213 class 1 -> +Normal -clasx214 class 2.50 -> +Normal -clasx215 class 100.100 -> +Normal -clasx216 class 1E+30 -> +Normal -clasx217 class 1E+384 -> +Normal -clasx218 class 9.999999999999999E+384 -> +Normal -clasx219 class Inf -> +Infinity - -clasx221 class -0 -> -Zero -clasx222 class -0.00 -> -Zero -clasx223 class -0E+5 -> -Zero -clasx224 class -1E-396 -> -Subnormal -clasx225 class -0.1E-383 -> -Subnormal -clasx226 class -0.999999999999999E-383 -> -Subnormal -clasx227 class -1.000000000000000E-383 -> -Normal -clasx228 class -1E-383 -> -Normal -clasx229 class -1E-100 -> -Normal -clasx230 class -1E-10 -> -Normal -clasx232 class -1E-1 -> -Normal -clasx233 class -1 -> -Normal -clasx234 class -2.50 -> -Normal -clasx235 class -100.100 -> -Normal -clasx236 class -1E+30 -> -Normal -clasx237 class -1E+384 -> -Normal -clasx238 class -9.999999999999999E+384 -> -Normal -clasx239 class -Inf -> -Infinity - -clasx241 class NaN -> NaN -clasx242 class -NaN -> NaN -clasx243 class +NaN12345 -> NaN -clasx244 class sNaN -> sNaN -clasx245 class -sNaN -> sNaN -clasx246 class +sNaN12345 -> sNaN - - - +------------------------------------------------------------------------ +-- class.decTest -- Class operations -- +-- Copyright (c) IBM Corporation, 1981, 2008. All rights reserved. -- +------------------------------------------------------------------------ +-- Please see the document "General Decimal Arithmetic Testcases" -- +-- at http://www2.hursley.ibm.com/decimal for the description of -- +-- these testcases. -- +-- -- +-- These testcases are experimental ('beta' versions), and they -- +-- may contain errors. They are offered on an as-is basis. In -- +-- particular, achieving the same results as the tests here is not -- +-- a guarantee that an implementation complies with any Standard -- +-- or specification. The tests are not exhaustive. 
-- +-- -- +-- Please send comments, suggestions, and corrections to the author: -- +-- Mike Cowlishaw, IBM Fellow -- +-- IBM UK, PO Box 31, Birmingham Road, Warwick CV34 5JL, UK -- +-- mfc at uk.ibm.com -- +------------------------------------------------------------------------ +version: 2.59 + +-- [New 2006.11.27] + +precision: 9 +maxExponent: 999 +minExponent: -999 +extended: 1 +clamp: 1 +rounding: half_even + +clasx001 class 0 -> +Zero +clasx002 class 0.00 -> +Zero +clasx003 class 0E+5 -> +Zero +clasx004 class 1E-1007 -> +Subnormal +clasx005 class 0.1E-999 -> +Subnormal +clasx006 class 0.99999999E-999 -> +Subnormal +clasx007 class 1.00000000E-999 -> +Normal +clasx008 class 1E-999 -> +Normal +clasx009 class 1E-100 -> +Normal +clasx010 class 1E-10 -> +Normal +clasx012 class 1E-1 -> +Normal +clasx013 class 1 -> +Normal +clasx014 class 2.50 -> +Normal +clasx015 class 100.100 -> +Normal +clasx016 class 1E+30 -> +Normal +clasx017 class 1E+999 -> +Normal +clasx018 class 9.99999999E+999 -> +Normal +clasx019 class Inf -> +Infinity + +clasx021 class -0 -> -Zero +clasx022 class -0.00 -> -Zero +clasx023 class -0E+5 -> -Zero +clasx024 class -1E-1007 -> -Subnormal +clasx025 class -0.1E-999 -> -Subnormal +clasx026 class -0.99999999E-999 -> -Subnormal +clasx027 class -1.00000000E-999 -> -Normal +clasx028 class -1E-999 -> -Normal +clasx029 class -1E-100 -> -Normal +clasx030 class -1E-10 -> -Normal +clasx032 class -1E-1 -> -Normal +clasx033 class -1 -> -Normal +clasx034 class -2.50 -> -Normal +clasx035 class -100.100 -> -Normal +clasx036 class -1E+30 -> -Normal +clasx037 class -1E+999 -> -Normal +clasx038 class -9.99999999E+999 -> -Normal +clasx039 class -Inf -> -Infinity + +clasx041 class NaN -> NaN +clasx042 class -NaN -> NaN +clasx043 class +NaN12345 -> NaN +clasx044 class sNaN -> sNaN +clasx045 class -sNaN -> sNaN +clasx046 class +sNaN12345 -> sNaN + + +-- decimal64 bounds + +precision: 16 +maxExponent: 384 +minExponent: -383 +clamp: 1 +rounding: half_even + +clasx201 class 0 -> +Zero +clasx202 class 0.00 -> +Zero +clasx203 class 0E+5 -> +Zero +clasx204 class 1E-396 -> +Subnormal +clasx205 class 0.1E-383 -> +Subnormal +clasx206 class 0.999999999999999E-383 -> +Subnormal +clasx207 class 1.000000000000000E-383 -> +Normal +clasx208 class 1E-383 -> +Normal +clasx209 class 1E-100 -> +Normal +clasx210 class 1E-10 -> +Normal +clasx212 class 1E-1 -> +Normal +clasx213 class 1 -> +Normal +clasx214 class 2.50 -> +Normal +clasx215 class 100.100 -> +Normal +clasx216 class 1E+30 -> +Normal +clasx217 class 1E+384 -> +Normal +clasx218 class 9.999999999999999E+384 -> +Normal +clasx219 class Inf -> +Infinity + +clasx221 class -0 -> -Zero +clasx222 class -0.00 -> -Zero +clasx223 class -0E+5 -> -Zero +clasx224 class -1E-396 -> -Subnormal +clasx225 class -0.1E-383 -> -Subnormal +clasx226 class -0.999999999999999E-383 -> -Subnormal +clasx227 class -1.000000000000000E-383 -> -Normal +clasx228 class -1E-383 -> -Normal +clasx229 class -1E-100 -> -Normal +clasx230 class -1E-10 -> -Normal +clasx232 class -1E-1 -> -Normal +clasx233 class -1 -> -Normal +clasx234 class -2.50 -> -Normal +clasx235 class -100.100 -> -Normal +clasx236 class -1E+30 -> -Normal +clasx237 class -1E+384 -> -Normal +clasx238 class -9.999999999999999E+384 -> -Normal +clasx239 class -Inf -> -Infinity + +clasx241 class NaN -> NaN +clasx242 class -NaN -> NaN +clasx243 class +NaN12345 -> NaN +clasx244 class sNaN -> sNaN +clasx245 class -sNaN -> sNaN +clasx246 class +sNaN12345 -> sNaN + + + diff --git a/lib-python/2.7/test/decimaltestdata/comparetotal.decTest 
b/lib-python/2.7/test/decimaltestdata/comparetotal.decTest --- a/lib-python/2.7/test/decimaltestdata/comparetotal.decTest +++ b/lib-python/2.7/test/decimaltestdata/comparetotal.decTest @@ -1,798 +1,798 @@ ------------------------------------------------------------------------- --- comparetotal.decTest -- decimal comparison using total ordering -- --- Copyright (c) IBM Corporation, 1981, 2008. All rights reserved. -- ------------------------------------------------------------------------- --- Please see the document "General Decimal Arithmetic Testcases" -- --- at http://www2.hursley.ibm.com/decimal for the description of -- --- these testcases. -- --- -- --- These testcases are experimental ('beta' versions), and they -- --- may contain errors. They are offered on an as-is basis. In -- --- particular, achieving the same results as the tests here is not -- --- a guarantee that an implementation complies with any Standard -- --- or specification. The tests are not exhaustive. -- --- -- --- Please send comments, suggestions, and corrections to the author: -- --- Mike Cowlishaw, IBM Fellow -- --- IBM UK, PO Box 31, Birmingham Road, Warwick CV34 5JL, UK -- --- mfc at uk.ibm.com -- ------------------------------------------------------------------------- -version: 2.59 - --- Note that we cannot assume add/subtract tests cover paths adequately, From noreply at buildbot.pypy.org Fri Feb 17 13:45:53 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Fri, 17 Feb 2012 13:45:53 +0100 (CET) Subject: [pypy-commit] pypy ffistruct: hg merge default (more up to date) Message-ID: <20120217124553.1C9E98204C@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: ffistruct Changeset: r52583:9d1c5ebd466a Date: 2012-02-17 13:45 +0100 http://bitbucket.org/pypy/pypy/changeset/9d1c5ebd466a/ Log: hg merge default (more up to date) diff --git a/pypy/config/translationoption.py b/pypy/config/translationoption.py --- a/pypy/config/translationoption.py +++ b/pypy/config/translationoption.py @@ -105,7 +105,8 @@ BoolOption("sandbox", "Produce a fully-sandboxed executable", default=False, cmdline="--sandbox", requires=[("translation.thread", False)], - suggests=[("translation.gc", "generation")]), + suggests=[("translation.gc", "generation"), + ("translation.gcrootfinder", "shadowstack")]), BoolOption("rweakref", "The backend supports RPython-level weakrefs", default=True), diff --git a/pypy/interpreter/pyframe.py b/pypy/interpreter/pyframe.py --- a/pypy/interpreter/pyframe.py +++ b/pypy/interpreter/pyframe.py @@ -60,11 +60,10 @@ self.pycode = code eval.Frame.__init__(self, space, w_globals) self.locals_stack_w = [None] * (code.co_nlocals + code.co_stacksize) - self.nlocals = code.co_nlocals self.valuestackdepth = code.co_nlocals self.lastblock = None make_sure_not_resized(self.locals_stack_w) - check_nonneg(self.nlocals) + check_nonneg(self.valuestackdepth) # if space.config.objspace.honor__builtins__: self.builtin = space.builtin.pick_builtin(w_globals) @@ -144,8 +143,8 @@ def execute_frame(self, w_inputvalue=None, operr=None): """Execute this frame. Main entry point to the interpreter. The optional arguments are there to handle a generator's frame: - w_inputvalue is for generator.send()) and operr is for - generator.throw()). + w_inputvalue is for generator.send() and operr is for + generator.throw(). 
""" # the following 'assert' is an annotation hint: it hides from # the annotator all methods that are defined in PyFrame but @@ -195,7 +194,7 @@ def popvalue(self): depth = self.valuestackdepth - 1 - assert depth >= self.nlocals, "pop from empty value stack" + assert depth >= self.pycode.co_nlocals, "pop from empty value stack" w_object = self.locals_stack_w[depth] self.locals_stack_w[depth] = None self.valuestackdepth = depth @@ -223,7 +222,7 @@ def peekvalues(self, n): values_w = [None] * n base = self.valuestackdepth - n - assert base >= self.nlocals + assert base >= self.pycode.co_nlocals while True: n -= 1 if n < 0: @@ -235,7 +234,8 @@ def dropvalues(self, n): n = hint(n, promote=True) finaldepth = self.valuestackdepth - n - assert finaldepth >= self.nlocals, "stack underflow in dropvalues()" + assert finaldepth >= self.pycode.co_nlocals, ( + "stack underflow in dropvalues()") while True: n -= 1 if n < 0: @@ -267,13 +267,15 @@ # Contrast this with CPython where it's PEEK(-1). index_from_top = hint(index_from_top, promote=True) index = self.valuestackdepth + ~index_from_top - assert index >= self.nlocals, "peek past the bottom of the stack" + assert index >= self.pycode.co_nlocals, ( + "peek past the bottom of the stack") return self.locals_stack_w[index] def settopvalue(self, w_object, index_from_top=0): index_from_top = hint(index_from_top, promote=True) index = self.valuestackdepth + ~index_from_top - assert index >= self.nlocals, "settop past the bottom of the stack" + assert index >= self.pycode.co_nlocals, ( + "settop past the bottom of the stack") self.locals_stack_w[index] = w_object @jit.unroll_safe @@ -320,12 +322,13 @@ else: f_lineno = self.f_lineno - values_w = self.locals_stack_w[self.nlocals:self.valuestackdepth] + nlocals = self.pycode.co_nlocals + values_w = self.locals_stack_w[nlocals:self.valuestackdepth] w_valuestack = maker.slp_into_tuple_with_nulls(space, values_w) w_blockstack = nt([block._get_state_(space) for block in self.get_blocklist()]) w_fastlocals = maker.slp_into_tuple_with_nulls( - space, self.locals_stack_w[:self.nlocals]) + space, self.locals_stack_w[:nlocals]) if self.last_exception is None: w_exc_value = space.w_None w_tb = space.w_None @@ -442,7 +445,7 @@ """Initialize the fast locals from a list of values, where the order is according to self.pycode.signature().""" scope_len = len(scope_w) - if scope_len > self.nlocals: + if scope_len > self.pycode.co_nlocals: raise ValueError, "new fastscope is longer than the allocated area" # don't assign directly to 'locals_stack_w[:scope_len]' to be # virtualizable-friendly @@ -456,7 +459,7 @@ pass def getfastscopelength(self): - return self.nlocals + return self.pycode.co_nlocals def getclosure(self): return None diff --git a/pypy/jit/backend/test/runner_test.py b/pypy/jit/backend/test/runner_test.py --- a/pypy/jit/backend/test/runner_test.py +++ b/pypy/jit/backend/test/runner_test.py @@ -2238,6 +2238,35 @@ print 'step 4 ok' print '-'*79 + def test_guard_not_invalidated_and_label(self): + # test that the guard_not_invalidated reserves enough room before + # the label. 
If it doesn't, then in this example after we invalidate + # the guard, jumping to the label will hit the invalidation code too + cpu = self.cpu + i0 = BoxInt() + faildescr = BasicFailDescr(1) + labeldescr = TargetToken() + ops = [ + ResOperation(rop.GUARD_NOT_INVALIDATED, [], None, descr=faildescr), + ResOperation(rop.LABEL, [i0], None, descr=labeldescr), + ResOperation(rop.FINISH, [i0], None, descr=BasicFailDescr(3)), + ] + ops[0].setfailargs([]) + looptoken = JitCellToken() + self.cpu.compile_loop([i0], ops, looptoken) + # mark as failing + self.cpu.invalidate_loop(looptoken) + # attach a bridge + i2 = BoxInt() + ops = [ + ResOperation(rop.JUMP, [ConstInt(333)], None, descr=labeldescr), + ] + self.cpu.compile_bridge(faildescr, [], ops, looptoken) + # run: must not be caught in an infinite loop + fail = self.cpu.execute_token(looptoken, 16) + assert fail.identifier == 3 + assert self.cpu.get_latest_value_int(0) == 333 + # pure do_ / descr features def test_do_operations(self): diff --git a/pypy/jit/backend/x86/regalloc.py b/pypy/jit/backend/x86/regalloc.py --- a/pypy/jit/backend/x86/regalloc.py +++ b/pypy/jit/backend/x86/regalloc.py @@ -165,7 +165,6 @@ self.jump_target_descr = None self.close_stack_struct = 0 self.final_jump_op = None - self.min_bytes_before_label = 0 def _prepare(self, inputargs, operations, allgcrefs): self.fm = X86FrameManager() @@ -199,8 +198,13 @@ operations = self._prepare(inputargs, operations, allgcrefs) self._update_bindings(arglocs, inputargs) self.param_depth = prev_depths[1] + self.min_bytes_before_label = 0 return operations + def ensure_next_label_is_at_least_at_position(self, at_least_position): + self.min_bytes_before_label = max(self.min_bytes_before_label, + at_least_position) + def reserve_param(self, n): self.param_depth = max(self.param_depth, n) @@ -468,7 +472,11 @@ self.assembler.mc.mark_op(None) # end of the loop def flush_loop(self): - # rare case: if the loop is too short, pad with NOPs + # rare case: if the loop is too short, or if we are just after + # a GUARD_NOT_INVALIDATED, pad with NOPs. Important! This must + # be called to ensure that there are enough bytes produced, + # because GUARD_NOT_INVALIDATED or redirect_call_assembler() + # will maybe overwrite them. mc = self.assembler.mc while mc.get_relative_pos() < self.min_bytes_before_label: mc.NOP() @@ -558,7 +566,15 @@ def consider_guard_no_exception(self, op): self.perform_guard(op, [], None) - consider_guard_not_invalidated = consider_guard_no_exception + def consider_guard_not_invalidated(self, op): + mc = self.assembler.mc + n = mc.get_relative_pos() + self.perform_guard(op, [], None) + assert n == mc.get_relative_pos() + # ensure that the next label is at least 5 bytes farther than + # the current position. Otherwise, when invalidating the guard, + # we would overwrite randomly the next label's position. 
+ self.ensure_next_label_is_at_least_at_position(n + 5) def consider_guard_exception(self, op): loc = self.rm.make_sure_var_in_reg(op.getarg(0)) diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py @@ -7760,6 +7760,42 @@ """ self.optimize_loop(ops, expected) + def test_constant_failargs(self): + ops = """ + [p1, i2, i3] + setfield_gc(p1, ConstPtr(myptr), descr=nextdescr) + p16 = getfield_gc(p1, descr=nextdescr) + guard_true(i2) [p16, i3] + jump(p1, i3, i2) + """ + preamble = """ + [p1, i2, i3] + setfield_gc(p1, ConstPtr(myptr), descr=nextdescr) + guard_true(i2) [i3] + jump(p1, i3) + """ + expected = """ + [p1, i3] + guard_true(i3) [] + jump(p1, 1) + """ + self.optimize_loop(ops, expected, preamble) + + def test_issue1048(self): + ops = """ + [p1, i2, i3] + p16 = getfield_gc(p1, descr=nextdescr) + guard_true(i2) [p16] + setfield_gc(p1, ConstPtr(myptr), descr=nextdescr) + jump(p1, i3, i2) + """ + expected = """ + [p1, i3] + guard_true(i3) [] + jump(p1, 1) + """ + self.optimize_loop(ops, expected) + class TestLLtype(OptimizeOptTest, LLtypeMixin): pass diff --git a/pypy/module/cpyext/api.py b/pypy/module/cpyext/api.py --- a/pypy/module/cpyext/api.py +++ b/pypy/module/cpyext/api.py @@ -384,6 +384,7 @@ "Dict": "space.w_dict", "Tuple": "space.w_tuple", "List": "space.w_list", + "Set": "space.w_set", "Int": "space.w_int", "Bool": "space.w_bool", "Float": "space.w_float", @@ -434,16 +435,16 @@ ('buf', rffi.VOIDP), ('obj', PyObject), ('len', Py_ssize_t), - # ('itemsize', Py_ssize_t), + ('itemsize', Py_ssize_t), - # ('readonly', lltype.Signed), - # ('ndim', lltype.Signed), - # ('format', rffi.CCHARP), - # ('shape', Py_ssize_tP), - # ('strides', Py_ssize_tP), - # ('suboffets', Py_ssize_tP), - # ('smalltable', rffi.CFixedArray(Py_ssize_t, 2)), - # ('internal', rffi.VOIDP) + ('readonly', lltype.Signed), + ('ndim', lltype.Signed), + ('format', rffi.CCHARP), + ('shape', Py_ssize_tP), + ('strides', Py_ssize_tP), + ('suboffsets', Py_ssize_tP), + #('smalltable', rffi.CFixedArray(Py_ssize_t, 2)), + ('internal', rffi.VOIDP) )) @specialize.memo() diff --git a/pypy/module/cpyext/dictobject.py b/pypy/module/cpyext/dictobject.py --- a/pypy/module/cpyext/dictobject.py +++ b/pypy/module/cpyext/dictobject.py @@ -6,6 +6,7 @@ from pypy.module.cpyext.pyobject import RefcountState from pypy.module.cpyext.pyerrors import PyErr_BadInternalCall from pypy.interpreter.error import OperationError +from pypy.rlib.objectmodel import specialize @cpython_api([], PyObject) def PyDict_New(space): @@ -191,3 +192,24 @@ raise return 0 return 1 + + at specialize.memo() +def make_frozendict(space): + return space.appexec([], '''(): + import collections + class FrozenDict(collections.Mapping): + def __init__(self, *args, **kwargs): + self._d = dict(*args, **kwargs) + def __iter__(self): + return iter(self._d) + def __len__(self): + return len(self._d) + def __getitem__(self, key): + return self._d[key] + return FrozenDict''') + + at cpython_api([PyObject], PyObject) +def PyDictProxy_New(space, w_dict): + w_frozendict = make_frozendict(space) + return space.call_function(w_frozendict, w_dict) + diff --git a/pypy/module/cpyext/include/methodobject.h b/pypy/module/cpyext/include/methodobject.h --- a/pypy/module/cpyext/include/methodobject.h +++ b/pypy/module/cpyext/include/methodobject.h @@ -26,6 +26,7 @@ PyObject_HEAD PyMethodDef *m_ml; /* 
Description of the C function to call */ PyObject *m_self; /* Passed as 'self' arg to the C func, can be NULL */ + PyObject *m_module; /* The __module__ attribute, can be anything */ } PyCFunctionObject; /* Flag passed to newmethodobject */ diff --git a/pypy/module/cpyext/include/object.h b/pypy/module/cpyext/include/object.h --- a/pypy/module/cpyext/include/object.h +++ b/pypy/module/cpyext/include/object.h @@ -131,18 +131,18 @@ /* This is Py_ssize_t so it can be pointed to by strides in simple case.*/ - /* Py_ssize_t itemsize; */ - /* int readonly; */ - /* int ndim; */ - /* char *format; */ - /* Py_ssize_t *shape; */ - /* Py_ssize_t *strides; */ - /* Py_ssize_t *suboffsets; */ + Py_ssize_t itemsize; + int readonly; + int ndim; + char *format; + Py_ssize_t *shape; + Py_ssize_t *strides; + Py_ssize_t *suboffsets; /* static store for shape and strides of mono-dimensional buffers. */ /* Py_ssize_t smalltable[2]; */ - /* void *internal; */ + void *internal; } Py_buffer; diff --git a/pypy/module/cpyext/include/pystate.h b/pypy/module/cpyext/include/pystate.h --- a/pypy/module/cpyext/include/pystate.h +++ b/pypy/module/cpyext/include/pystate.h @@ -10,6 +10,7 @@ typedef struct _ts { PyInterpreterState *interp; + PyObject *dict; /* Stores per-thread state */ } PyThreadState; #define Py_BEGIN_ALLOW_THREADS { \ @@ -24,4 +25,6 @@ enum {PyGILState_LOCKED, PyGILState_UNLOCKED} PyGILState_STATE; +#define PyThreadState_GET() PyThreadState_Get() + #endif /* !Py_PYSTATE_H */ diff --git a/pypy/module/cpyext/include/pythread.h b/pypy/module/cpyext/include/pythread.h --- a/pypy/module/cpyext/include/pythread.h +++ b/pypy/module/cpyext/include/pythread.h @@ -1,6 +1,8 @@ #ifndef Py_PYTHREAD_H #define Py_PYTHREAD_H +#define WITH_THREAD + typedef void *PyThread_type_lock; #define WAIT_LOCK 1 #define NOWAIT_LOCK 0 diff --git a/pypy/module/cpyext/include/structmember.h b/pypy/module/cpyext/include/structmember.h --- a/pypy/module/cpyext/include/structmember.h +++ b/pypy/module/cpyext/include/structmember.h @@ -20,7 +20,7 @@ } PyMemberDef; -/* Types */ +/* Types. These constants are also in structmemberdefs.py. */ #define T_SHORT 0 #define T_INT 1 #define T_LONG 2 @@ -42,9 +42,12 @@ #define T_LONGLONG 17 #define T_ULONGLONG 18 -/* Flags */ +/* Flags. These constants are also in structmemberdefs.py. 
*/ #define READONLY 1 #define RO READONLY /* Shorthand */ +#define READ_RESTRICTED 2 +#define PY_WRITE_RESTRICTED 4 +#define RESTRICTED (READ_RESTRICTED | PY_WRITE_RESTRICTED) #ifdef __cplusplus diff --git a/pypy/module/cpyext/methodobject.py b/pypy/module/cpyext/methodobject.py --- a/pypy/module/cpyext/methodobject.py +++ b/pypy/module/cpyext/methodobject.py @@ -32,6 +32,7 @@ PyObjectFields + ( ('m_ml', lltype.Ptr(PyMethodDef)), ('m_self', PyObject), + ('m_module', PyObject), )) PyCFunctionObject = lltype.Ptr(PyCFunctionObjectStruct) @@ -47,11 +48,13 @@ assert isinstance(w_obj, W_PyCFunctionObject) py_func.c_m_ml = w_obj.ml py_func.c_m_self = make_ref(space, w_obj.w_self) + py_func.c_m_module = make_ref(space, w_obj.w_module) @cpython_api([PyObject], lltype.Void, external=False) def cfunction_dealloc(space, py_obj): py_func = rffi.cast(PyCFunctionObject, py_obj) Py_DecRef(space, py_func.c_m_self) + Py_DecRef(space, py_func.c_m_module) from pypy.module.cpyext.object import PyObject_dealloc PyObject_dealloc(space, py_obj) diff --git a/pypy/module/cpyext/object.py b/pypy/module/cpyext/object.py --- a/pypy/module/cpyext/object.py +++ b/pypy/module/cpyext/object.py @@ -381,6 +381,15 @@ This is the equivalent of the Python expression hash(o).""" return space.int_w(space.hash(w_obj)) + at cpython_api([PyObject], PyObject) +def PyObject_Dir(space, w_o): + """This is equivalent to the Python expression dir(o), returning a (possibly + empty) list of strings appropriate for the object argument, or NULL if there + was an error. If the argument is NULL, this is like the Python dir(), + returning the names of the current locals; in this case, if no execution frame + is active then NULL is returned but PyErr_Occurred() will return false.""" + return space.call_function(space.builtin.get('dir'), w_o) + @cpython_api([PyObject, rffi.CCHARPP, Py_ssize_tP], rffi.INT_real, error=-1) def PyObject_AsCharBuffer(space, obj, bufferp, sizep): """Returns a pointer to a read-only memory location usable as @@ -430,6 +439,8 @@ return 0 +PyBUF_WRITABLE = 0x0001 # Copied from object.h + @cpython_api([lltype.Ptr(Py_buffer), PyObject, rffi.VOIDP, Py_ssize_t, lltype.Signed, lltype.Signed], rffi.INT, error=CANNOT_FAIL) def PyBuffer_FillInfo(space, view, obj, buf, length, readonly, flags): @@ -445,6 +456,18 @@ view.c_len = length view.c_obj = obj Py_IncRef(space, obj) + view.c_itemsize = 1 + if flags & PyBUF_WRITABLE: + rffi.setintfield(view, 'c_readonly', 0) + else: + rffi.setintfield(view, 'c_readonly', 1) + rffi.setintfield(view, 'c_ndim', 0) + view.c_format = lltype.nullptr(rffi.CCHARP.TO) + view.c_shape = lltype.nullptr(Py_ssize_tP.TO) + view.c_strides = lltype.nullptr(Py_ssize_tP.TO) + view.c_suboffsets = lltype.nullptr(Py_ssize_tP.TO) + view.c_internal = lltype.nullptr(rffi.VOIDP.TO) + return 0 diff --git a/pypy/module/cpyext/pyfile.py b/pypy/module/cpyext/pyfile.py --- a/pypy/module/cpyext/pyfile.py +++ b/pypy/module/cpyext/pyfile.py @@ -1,7 +1,8 @@ from pypy.rpython.lltypesystem import rffi, lltype from pypy.module.cpyext.api import ( - cpython_api, CONST_STRING, FILEP, build_type_checkers) + cpython_api, CANNOT_FAIL, CONST_STRING, FILEP, build_type_checkers) from pypy.module.cpyext.pyobject import PyObject, borrow_from +from pypy.module.cpyext.object import Py_PRINT_RAW from pypy.interpreter.error import OperationError from pypy.module._file.interp_file import W_File @@ -61,11 +62,49 @@ def PyFile_WriteString(space, s, w_p): """Write string s to file object p. 
Return 0 on success or -1 on failure; the appropriate exception will be set.""" - w_s = space.wrap(rffi.charp2str(s)) - space.call_method(w_p, "write", w_s) + w_str = space.wrap(rffi.charp2str(s)) + space.call_method(w_p, "write", w_str) + return 0 + + at cpython_api([PyObject, PyObject, rffi.INT_real], rffi.INT_real, error=-1) +def PyFile_WriteObject(space, w_obj, w_p, flags): + """ + Write object obj to file object p. The only supported flag for flags is + Py_PRINT_RAW; if given, the str() of the object is written + instead of the repr(). Return 0 on success or -1 on failure; the + appropriate exception will be set.""" + if rffi.cast(lltype.Signed, flags) & Py_PRINT_RAW: + w_str = space.str(w_obj) + else: + w_str = space.repr(w_obj) + space.call_method(w_p, "write", w_str) return 0 @cpython_api([PyObject], PyObject) def PyFile_Name(space, w_p): """Return the name of the file specified by p as a string object.""" - return borrow_from(w_p, space.getattr(w_p, space.wrap("name"))) \ No newline at end of file + return borrow_from(w_p, space.getattr(w_p, space.wrap("name"))) + + at cpython_api([PyObject, rffi.INT_real], rffi.INT_real, error=CANNOT_FAIL) +def PyFile_SoftSpace(space, w_p, newflag): + """ + This function exists for internal use by the interpreter. Set the + softspace attribute of p to newflag and return the previous value. + p does not have to be a file object for this function to work + properly; any object is supported (thought its only interesting if + the softspace attribute can be set). This function clears any + errors, and will return 0 as the previous value if the attribute + either does not exist or if there were errors in retrieving it. + There is no way to detect errors from this function, but doing so + should not be needed.""" + try: + if rffi.cast(lltype.Signed, newflag): + w_newflag = space.w_True + else: + w_newflag = space.w_False + oldflag = space.int_w(space.getattr(w_p, space.wrap("softspace"))) + space.setattr(w_p, space.wrap("softspace"), w_newflag) + return oldflag + except OperationError, e: + return 0 + diff --git a/pypy/module/cpyext/pystate.py b/pypy/module/cpyext/pystate.py --- a/pypy/module/cpyext/pystate.py +++ b/pypy/module/cpyext/pystate.py @@ -1,12 +1,19 @@ from pypy.module.cpyext.api import ( cpython_api, generic_cpy_call, CANNOT_FAIL, CConfig, cpython_struct) +from pypy.module.cpyext.pyobject import PyObject, Py_DecRef, make_ref from pypy.rpython.lltypesystem import rffi, lltype PyInterpreterStateStruct = lltype.ForwardReference() PyInterpreterState = lltype.Ptr(PyInterpreterStateStruct) cpython_struct( - "PyInterpreterState", [('next', PyInterpreterState)], PyInterpreterStateStruct) -PyThreadState = lltype.Ptr(cpython_struct("PyThreadState", [('interp', PyInterpreterState)])) + "PyInterpreterState", + [('next', PyInterpreterState)], + PyInterpreterStateStruct) +PyThreadState = lltype.Ptr(cpython_struct( + "PyThreadState", + [('interp', PyInterpreterState), + ('dict', PyObject), + ])) @cpython_api([], PyThreadState, error=CANNOT_FAIL) def PyEval_SaveThread(space): @@ -38,41 +45,49 @@ return 1 # XXX: might be generally useful -def encapsulator(T, flavor='raw'): +def encapsulator(T, flavor='raw', dealloc=None): class MemoryCapsule(object): - def __init__(self, alloc=True): - if alloc: + def __init__(self, space): + self.space = space + if space is not None: self.memory = lltype.malloc(T, flavor=flavor) else: self.memory = lltype.nullptr(T) def __del__(self): if self.memory: + if dealloc and self.space: + dealloc(self.memory, self.space) 
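PyFile_WriteObject() and PyFile_SoftSpace() above are the C-level pieces behind the print statement. A hedged sketch of how the two are conventionally combined (print_item is an invented helper, not code from this mail):

    #include <Python.h>

    static int
    print_item(PyObject *w_stdout, PyObject *obj)
    {
        /* write a separating space if the previous print left
           softspace set; PyFile_SoftSpace(w, 0) clears the flag and
           returns the old value */
        if (PyFile_SoftSpace(w_stdout, 0))
            if (PyFile_WriteString(" ", w_stdout) < 0)
                return -1;
        /* Py_PRINT_RAW selects str() instead of repr() */
        if (PyFile_WriteObject(obj, w_stdout, Py_PRINT_RAW) < 0)
            return -1;
        PyFile_SoftSpace(w_stdout, 1);
        return 0;
    }

The test_file_softspace test later in this changeset exercises the same interplay through the interpreter's own print statement.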
lltype.free(self.memory, flavor=flavor) return MemoryCapsule -ThreadStateCapsule = encapsulator(PyThreadState.TO) +def ThreadState_dealloc(ts, space): + assert space is not None + Py_DecRef(space, ts.c_dict) +ThreadStateCapsule = encapsulator(PyThreadState.TO, + dealloc=ThreadState_dealloc) from pypy.interpreter.executioncontext import ExecutionContext -ExecutionContext.cpyext_threadstate = ThreadStateCapsule(alloc=False) +ExecutionContext.cpyext_threadstate = ThreadStateCapsule(None) class InterpreterState(object): def __init__(self, space): self.interpreter_state = lltype.malloc( PyInterpreterState.TO, flavor='raw', zero=True, immortal=True) - def new_thread_state(self): - capsule = ThreadStateCapsule() + def new_thread_state(self, space): + capsule = ThreadStateCapsule(space) ts = capsule.memory ts.c_interp = self.interpreter_state + ts.c_dict = make_ref(space, space.newdict()) return capsule def get_thread_state(self, space): ec = space.getexecutioncontext() - return self._get_thread_state(ec).memory + return self._get_thread_state(space, ec).memory - def _get_thread_state(self, ec): + def _get_thread_state(self, space, ec): if ec.cpyext_threadstate.memory == lltype.nullptr(PyThreadState.TO): - ec.cpyext_threadstate = self.new_thread_state() + ec.cpyext_threadstate = self.new_thread_state(space) return ec.cpyext_threadstate @@ -81,6 +96,11 @@ state = space.fromcache(InterpreterState) return state.get_thread_state(space) + at cpython_api([], PyObject, error=CANNOT_FAIL) +def PyThreadState_GetDict(space): + state = space.fromcache(InterpreterState) + return state.get_thread_state(space).c_dict + @cpython_api([PyThreadState], PyThreadState, error=CANNOT_FAIL) def PyThreadState_Swap(space, tstate): """Swap the current thread state with the thread state given by the argument diff --git a/pypy/module/cpyext/pythonrun.py b/pypy/module/cpyext/pythonrun.py --- a/pypy/module/cpyext/pythonrun.py +++ b/pypy/module/cpyext/pythonrun.py @@ -14,6 +14,20 @@ value.""" return space.fromcache(State).get_programname() + at cpython_api([], rffi.CCHARP) +def Py_GetVersion(space): + """Return the version of this Python interpreter. This is a + string that looks something like + + "1.5 (\#67, Dec 31 1997, 22:34:28) [GCC 2.7.2.2]" + + The first word (up to the first space character) is the current + Python version; the first three characters are the major and minor + version separated by a period. The returned string points into + static storage; the caller should not modify its value. The value + is available to Python code as sys.version.""" + return space.fromcache(State).get_version() + @cpython_api([lltype.Ptr(lltype.FuncType([], lltype.Void))], rffi.INT_real, error=-1) def Py_AtExit(space, func_ptr): """Register a cleanup function to be called by Py_Finalize(). The cleanup diff --git a/pypy/module/cpyext/setobject.py b/pypy/module/cpyext/setobject.py --- a/pypy/module/cpyext/setobject.py +++ b/pypy/module/cpyext/setobject.py @@ -54,6 +54,20 @@ return 0 + at cpython_api([PyObject], PyObject) +def PySet_Pop(space, w_set): + """Return a new reference to an arbitrary object in the set, and removes the + object from the set. Return NULL on failure. Raise KeyError if the + set is empty. 
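PyThreadState_GetDict() above hands out the per-thread dict that new_thread_state() now allocates. For illustration only, the usual consumer pattern in an extension module looks like this; the key name "my_ext.cache" is invented:

    #include <Python.h>

    static PyObject *
    get_thread_cache(void)
    {
        PyObject *dict = PyThreadState_GetDict();   /* borrowed, may be NULL */
        PyObject *cache;
        if (dict == NULL)
            return NULL;
        cache = PyDict_GetItemString(dict, "my_ext.cache");  /* borrowed */
        if (cache == NULL) {
            cache = PyDict_New();
            if (cache == NULL)
                return NULL;
            if (PyDict_SetItemString(dict, "my_ext.cache", cache) < 0) {
                Py_DECREF(cache);
                return NULL;
            }
            Py_DECREF(cache);   /* the thread-state dict now owns it */
        }
        return cache;   /* borrowed reference */
    }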
Raise a SystemError if set is an not an instance of + set or its subtype.""" + return space.call_method(w_set, "pop") + + at cpython_api([PyObject], rffi.INT_real, error=-1) +def PySet_Clear(space, w_set): + """Empty an existing set of all elements.""" + space.call_method(w_set, 'clear') + return 0 + @cpython_api([PyObject], Py_ssize_t, error=CANNOT_FAIL) def PySet_GET_SIZE(space, w_s): """Macro form of PySet_Size() without error checking.""" diff --git a/pypy/module/cpyext/slotdefs.py b/pypy/module/cpyext/slotdefs.py --- a/pypy/module/cpyext/slotdefs.py +++ b/pypy/module/cpyext/slotdefs.py @@ -185,6 +185,15 @@ space.fromcache(State).check_and_raise_exception(always=True) return space.wrap(res) +def wrap_delitem(space, w_self, w_args, func): + func_target = rffi.cast(objobjargproc, func) + check_num_args(space, w_args, 1) + w_key, = space.fixedview(w_args) + res = generic_cpy_call(space, func_target, w_self, w_key, None) + if rffi.cast(lltype.Signed, res) == -1: + space.fromcache(State).check_and_raise_exception(always=True) + return space.w_None + def wrap_ssizessizeargfunc(space, w_self, w_args, func): func_target = rffi.cast(ssizessizeargfunc, func) check_num_args(space, w_args, 2) @@ -291,6 +300,14 @@ def slot_nb_int(space, w_self): return space.int(w_self) + at cpython_api([PyObject], PyObject, external=False) +def slot_tp_iter(space, w_self): + return space.iter(w_self) + + at cpython_api([PyObject], PyObject, external=False) +def slot_tp_iternext(space, w_self): + return space.next(w_self) + from pypy.rlib.nonconst import NonConstant SLOTS = {} @@ -632,6 +649,19 @@ TPSLOT("__buffer__", "tp_as_buffer.c_bf_getreadbuffer", None, "wrap_getreadbuffer", ""), ) +# partial sort to solve some slot conflicts: +# Number slots before Mapping slots before Sequence slots. +# These are the only conflicts between __name__ methods +def slotdef_sort_key(slotdef): + if slotdef.slot_name.startswith('tp_as_number'): + return 1 + if slotdef.slot_name.startswith('tp_as_mapping'): + return 2 + if slotdef.slot_name.startswith('tp_as_sequence'): + return 3 + return 0 +slotdefs = sorted(slotdefs, key=slotdef_sort_key) + slotdefs_for_tp_slots = unrolling_iterable( [(x.method_name, x.slot_name, x.slot_names, x.slot_func) for x in slotdefs]) diff --git a/pypy/module/cpyext/state.py b/pypy/module/cpyext/state.py --- a/pypy/module/cpyext/state.py +++ b/pypy/module/cpyext/state.py @@ -10,6 +10,7 @@ self.space = space self.reset() self.programname = lltype.nullptr(rffi.CCHARP.TO) + self.version = lltype.nullptr(rffi.CCHARP.TO) def reset(self): from pypy.module.cpyext.modsupport import PyMethodDef @@ -102,6 +103,15 @@ lltype.render_immortal(self.programname) return self.programname + def get_version(self): + if not self.version: + space = self.space + w_version = space.sys.get('version') + version = space.str_w(w_version) + self.version = rffi.str2charp(version) + lltype.render_immortal(self.version) + return self.version + def find_extension(self, name, path): from pypy.module.cpyext.modsupport import PyImport_AddModule from pypy.interpreter.module import Module diff --git a/pypy/module/cpyext/stringobject.py b/pypy/module/cpyext/stringobject.py --- a/pypy/module/cpyext/stringobject.py +++ b/pypy/module/cpyext/stringobject.py @@ -250,6 +250,26 @@ s = rffi.charp2str(string) return space.new_interned_str(s) + at cpython_api([PyObjectP], lltype.Void) +def PyString_InternInPlace(space, string): + """Intern the argument *string in place. 
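The new PySet_Pop()/PySet_Clear() entries near the top of this hunk follow the CPython semantics (a new reference from pop, 0 or -1 from clear). A speculative usage sketch, not taken from the patch:

    #include <Python.h>

    static int
    drain_set(PyObject *set)
    {
        while (PySet_GET_SIZE(set) > 0) {
            PyObject *item = PySet_Pop(set);   /* new reference */
            if (item == NULL)
                return -1;
            /* ... consume item ... */
            Py_DECREF(item);
        }
        return PySet_Clear(set);   /* a no-op here, shown for symmetry */
    }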
The argument must be the + address of a pointer variable pointing to a Python string object. + If there is an existing interned string that is the same as + *string, it sets *string to it (decrementing the reference count + of the old string object and incrementing the reference count of + the interned string object), otherwise it leaves *string alone and + interns it (incrementing its reference count). (Clarification: + even though there is a lot of talk about reference counts, think + of this function as reference-count-neutral; you own the object + after the call if and only if you owned it before the call.) + + This function is not available in 3.x and does not have a PyBytes + alias.""" + w_str = from_ref(space, string[0]) + w_str = space.new_interned_w_str(w_str) + Py_DecRef(space, string[0]) + string[0] = make_ref(space, w_str) + @cpython_api([PyObject, rffi.CCHARP, rffi.CCHARP], PyObject) def PyString_AsEncodedObject(space, w_str, encoding, errors): """Encode a string object using the codec registered for encoding and return diff --git a/pypy/module/cpyext/structmemberdefs.py b/pypy/module/cpyext/structmemberdefs.py --- a/pypy/module/cpyext/structmemberdefs.py +++ b/pypy/module/cpyext/structmemberdefs.py @@ -1,3 +1,5 @@ +# These constants are also in include/structmember.h + T_SHORT = 0 T_INT = 1 T_LONG = 2 @@ -18,3 +20,6 @@ T_ULONGLONG = 18 READONLY = RO = 1 +READ_RESTRICTED = 2 +WRITE_RESTRICTED = 4 +RESTRICTED = READ_RESTRICTED | WRITE_RESTRICTED diff --git a/pypy/module/cpyext/stubs.py b/pypy/module/cpyext/stubs.py --- a/pypy/module/cpyext/stubs.py +++ b/pypy/module/cpyext/stubs.py @@ -1,5 +1,5 @@ from pypy.module.cpyext.api import ( - cpython_api, PyObject, PyObjectP, CANNOT_FAIL, Py_buffer + cpython_api, PyObject, PyObjectP, CANNOT_FAIL ) from pypy.module.cpyext.complexobject import Py_complex_ptr as Py_complex from pypy.rpython.lltypesystem import rffi, lltype @@ -10,6 +10,7 @@ PyMethodDef = rffi.VOIDP PyGetSetDef = rffi.VOIDP PyMemberDef = rffi.VOIDP +Py_buffer = rffi.VOIDP va_list = rffi.VOIDP PyDateTime_Date = rffi.VOIDP PyDateTime_DateTime = rffi.VOIDP @@ -32,10 +33,6 @@ def _PyObject_Del(space, op): raise NotImplementedError - at cpython_api([PyObject], rffi.INT_real, error=CANNOT_FAIL) -def PyObject_CheckBuffer(space, obj): - raise NotImplementedError - @cpython_api([rffi.CCHARP], Py_ssize_t, error=CANNOT_FAIL) def PyBuffer_SizeFromFormat(space, format): """Return the implied ~Py_buffer.itemsize from the struct-stype @@ -684,28 +681,6 @@ """ raise NotImplementedError - at cpython_api([PyObject, rffi.INT_real], rffi.INT_real, error=CANNOT_FAIL) -def PyFile_SoftSpace(space, p, newflag): - """ - This function exists for internal use by the interpreter. Set the - softspace attribute of p to newflag and return the previous value. - p does not have to be a file object for this function to work properly; any - object is supported (thought its only interesting if the softspace - attribute can be set). This function clears any errors, and will return 0 - as the previous value if the attribute either does not exist or if there were - errors in retrieving it. There is no way to detect errors from this function, - but doing so should not be needed.""" - raise NotImplementedError - - at cpython_api([PyObject, PyObject, rffi.INT_real], rffi.INT_real, error=-1) -def PyFile_WriteObject(space, obj, p, flags): - """ - Write object obj to file object p. The only supported flag for flags is - Py_PRINT_RAW; if given, the str() of the object is written - instead of the repr(). 
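Given the reference-count-neutral contract the implementation above preserves, the canonical caller of PyString_InternInPlace() looks roughly like this (illustrative only, not code from this changeset):

    #include <Python.h>

    static void
    intern_example(void)
    {
        PyObject *name = PyString_FromString("some_attribute");
        if (name == NULL)
            return;
        PyString_InternInPlace(&name);   /* may replace name with the
                                            interned object; the caller
                                            still owns one reference */
        /* ... use name, e.g. as a dict key ... */
        Py_DECREF(name);
    }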
Return 0 on success or -1 on failure; the - appropriate exception will be set.""" - raise NotImplementedError - @cpython_api([], PyObject) def PyFloat_GetInfo(space): """Return a structseq instance which contains information about the @@ -1097,19 +1072,6 @@ raise NotImplementedError @cpython_api([], rffi.CCHARP) -def Py_GetVersion(space): - """Return the version of this Python interpreter. This is a string that looks - something like - - "1.5 (\#67, Dec 31 1997, 22:34:28) [GCC 2.7.2.2]" - - The first word (up to the first space character) is the current Python version; - the first three characters are the major and minor version separated by a - period. The returned string points into static storage; the caller should not - modify its value. The value is available to Python code as sys.version.""" - raise NotImplementedError - - at cpython_api([], rffi.CCHARP) def Py_GetPlatform(space): """Return the platform identifier for the current platform. On Unix, this is formed from the"official" name of the operating system, converted to lower @@ -1685,15 +1647,6 @@ """ raise NotImplementedError - at cpython_api([PyObject], PyObject) -def PyObject_Dir(space, o): - """This is equivalent to the Python expression dir(o), returning a (possibly - empty) list of strings appropriate for the object argument, or NULL if there - was an error. If the argument is NULL, this is like the Python dir(), - returning the names of the current locals; in this case, if no execution frame - is active then NULL is returned but PyErr_Occurred() will return false.""" - raise NotImplementedError - @cpython_api([], PyFrameObject) def PyEval_GetFrame(space): """Return the current thread state's frame, which is NULL if no frame is @@ -1802,34 +1755,6 @@ building-up new frozensets with PySet_Add().""" raise NotImplementedError - at cpython_api([PyObject], PyObject) -def PySet_Pop(space, set): - """Return a new reference to an arbitrary object in the set, and removes the - object from the set. Return NULL on failure. Raise KeyError if the - set is empty. Raise a SystemError if set is an not an instance of - set or its subtype.""" - raise NotImplementedError - - at cpython_api([PyObject], rffi.INT_real, error=-1) -def PySet_Clear(space, set): - """Empty an existing set of all elements.""" - raise NotImplementedError - - at cpython_api([PyObjectP], lltype.Void) -def PyString_InternInPlace(space, string): - """Intern the argument *string in place. The argument must be the address of a - pointer variable pointing to a Python string object. If there is an existing - interned string that is the same as *string, it sets *string to it - (decrementing the reference count of the old string object and incrementing the - reference count of the interned string object), otherwise it leaves *string - alone and interns it (incrementing its reference count). (Clarification: even - though there is a lot of talk about reference counts, think of this function as - reference-count-neutral; you own the object after the call if and only if you - owned it before the call.) 
- - This function is not available in 3.x and does not have a PyBytes alias.""" - raise NotImplementedError - @cpython_api([rffi.CCHARP, Py_ssize_t, rffi.CCHARP, rffi.CCHARP], PyObject) def PyString_Decode(space, s, size, encoding, errors): """Create an object by decoding size bytes of the encoded buffer s using the diff --git a/pypy/module/cpyext/test/test_arraymodule.py b/pypy/module/cpyext/test/test_arraymodule.py --- a/pypy/module/cpyext/test/test_arraymodule.py +++ b/pypy/module/cpyext/test/test_arraymodule.py @@ -43,6 +43,15 @@ assert arr[:2].tolist() == [1,2] assert arr[1:3].tolist() == [2,3] + def test_slice_object(self): + module = self.import_module(name='array') + arr = module.array('i', [1,2,3,4]) + assert arr[slice(1,3)].tolist() == [2,3] + arr[slice(1,3)] = module.array('i', [21, 22, 23]) + assert arr.tolist() == [1, 21, 22, 23, 4] + del arr[slice(1, 3)] + assert arr.tolist() == [1, 23, 4] + def test_buffer(self): module = self.import_module(name='array') arr = module.array('i', [1,2,3,4]) diff --git a/pypy/module/cpyext/test/test_cpyext.py b/pypy/module/cpyext/test/test_cpyext.py --- a/pypy/module/cpyext/test/test_cpyext.py +++ b/pypy/module/cpyext/test/test_cpyext.py @@ -744,6 +744,22 @@ print p assert 'py' in p + def test_get_version(self): + mod = self.import_extension('foo', [ + ('get_version', 'METH_NOARGS', + ''' + char* name1 = Py_GetVersion(); + char* name2 = Py_GetVersion(); + if (name1 != name2) + Py_RETURN_FALSE; + return PyString_FromString(name1); + ''' + ), + ]) + p = mod.get_version() + print p + assert 'PyPy' in p + def test_no_double_imports(self): import sys, os try: diff --git a/pypy/module/cpyext/test/test_dictobject.py b/pypy/module/cpyext/test/test_dictobject.py --- a/pypy/module/cpyext/test/test_dictobject.py +++ b/pypy/module/cpyext/test/test_dictobject.py @@ -2,6 +2,7 @@ from pypy.module.cpyext.test.test_api import BaseApiTest from pypy.module.cpyext.api import Py_ssize_tP, PyObjectP from pypy.module.cpyext.pyobject import make_ref, from_ref +from pypy.interpreter.error import OperationError class TestDictObject(BaseApiTest): def test_dict(self, space, api): @@ -110,3 +111,13 @@ assert space.eq_w(space.len(w_copy), space.len(w_dict)) assert space.eq_w(w_copy, w_dict) + + def test_dictproxy(self, space, api): + w_dict = space.sys.get('modules') + w_proxy = api.PyDictProxy_New(w_dict) + assert space.is_true(space.contains(w_proxy, space.wrap('sys'))) + raises(OperationError, space.setitem, + w_proxy, space.wrap('sys'), space.w_None) + raises(OperationError, space.delitem, + w_proxy, space.wrap('sys')) + raises(OperationError, space.call_method, w_proxy, 'clear') diff --git a/pypy/module/cpyext/test/test_methodobject.py b/pypy/module/cpyext/test/test_methodobject.py --- a/pypy/module/cpyext/test/test_methodobject.py +++ b/pypy/module/cpyext/test/test_methodobject.py @@ -9,7 +9,7 @@ class AppTestMethodObject(AppTestCpythonExtensionBase): def test_call_METH(self): - mod = self.import_extension('foo', [ + mod = self.import_extension('MyModule', [ ('getarg_O', 'METH_O', ''' Py_INCREF(args); @@ -51,11 +51,23 @@ } ''' ), + ('getModule', 'METH_O', + ''' + if(PyCFunction_Check(args)) { + PyCFunctionObject* func = (PyCFunctionObject*)args; + Py_INCREF(func->m_module); + return func->m_module; + } + else { + Py_RETURN_FALSE; + } + ''' + ), ('isSameFunction', 'METH_O', ''' PyCFunction ptr = PyCFunction_GetFunction(args); if (!ptr) return NULL; - if (ptr == foo_getarg_O) + if (ptr == MyModule_getarg_O) Py_RETURN_TRUE; else Py_RETURN_FALSE; @@ -76,6 +88,7 @@ assert 
mod.getarg_OLD(1, 2) == (1, 2) assert mod.isCFunction(mod.getarg_O) == "getarg_O" + assert mod.getModule(mod.getarg_O) == 'MyModule' assert mod.isSameFunction(mod.getarg_O) raises(TypeError, mod.isSameFunction, 1) diff --git a/pypy/module/cpyext/test/test_object.py b/pypy/module/cpyext/test/test_object.py --- a/pypy/module/cpyext/test/test_object.py +++ b/pypy/module/cpyext/test/test_object.py @@ -191,6 +191,11 @@ assert api.PyObject_Unicode(space.wrap("\xe9")) is None api.PyErr_Clear() + def test_dir(self, space, api): + w_dir = api.PyObject_Dir(space.sys) + assert space.isinstance_w(w_dir, space.w_list) + assert space.is_true(space.contains(w_dir, space.wrap('modules'))) + class AppTestObject(AppTestCpythonExtensionBase): def setup_class(cls): AppTestCpythonExtensionBase.setup_class.im_func(cls) diff --git a/pypy/module/cpyext/test/test_pyfile.py b/pypy/module/cpyext/test/test_pyfile.py --- a/pypy/module/cpyext/test/test_pyfile.py +++ b/pypy/module/cpyext/test/test_pyfile.py @@ -1,5 +1,6 @@ from pypy.module.cpyext.api import fopen, fclose, fwrite from pypy.module.cpyext.test.test_api import BaseApiTest +from pypy.module.cpyext.object import Py_PRINT_RAW from pypy.rpython.lltypesystem import rffi, lltype from pypy.tool.udir import udir import pytest @@ -77,3 +78,28 @@ out = out.replace('\r\n', '\n') assert out == "test\n" + def test_file_writeobject(self, space, api, capfd): + w_obj = space.wrap("test\n") + w_stdout = space.sys.get("stdout") + api.PyFile_WriteObject(w_obj, w_stdout, Py_PRINT_RAW) + api.PyFile_WriteObject(w_obj, w_stdout, 0) + space.call_method(w_stdout, "flush") + out, err = capfd.readouterr() + out = out.replace('\r\n', '\n') + assert out == "test\n'test\\n'" + + def test_file_softspace(self, space, api, capfd): + w_stdout = space.sys.get("stdout") + assert api.PyFile_SoftSpace(w_stdout, 1) == 0 + assert api.PyFile_SoftSpace(w_stdout, 0) == 1 + + api.PyFile_SoftSpace(w_stdout, 1) + w_ns = space.newdict() + space.exec_("print 1,", w_ns, w_ns) + space.exec_("print 2,", w_ns, w_ns) + api.PyFile_SoftSpace(w_stdout, 0) + space.exec_("print 3", w_ns, w_ns) + space.call_method(w_stdout, "flush") + out, err = capfd.readouterr() + out = out.replace('\r\n', '\n') + assert out == " 1 23\n" diff --git a/pypy/module/cpyext/test/test_pystate.py b/pypy/module/cpyext/test/test_pystate.py --- a/pypy/module/cpyext/test/test_pystate.py +++ b/pypy/module/cpyext/test/test_pystate.py @@ -2,6 +2,7 @@ from pypy.module.cpyext.test.test_api import BaseApiTest from pypy.rpython.lltypesystem.lltype import nullptr from pypy.module.cpyext.pystate import PyInterpreterState, PyThreadState +from pypy.module.cpyext.pyobject import from_ref class AppTestThreads(AppTestCpythonExtensionBase): def test_allow_threads(self): @@ -49,3 +50,10 @@ api.PyEval_AcquireThread(tstate) api.PyEval_ReleaseThread(tstate) + + def test_threadstate_dict(self, space, api): + ts = api.PyThreadState_Get() + ref = ts.c_dict + assert ref == api.PyThreadState_GetDict() + w_obj = from_ref(space, ref) + assert space.isinstance_w(w_obj, space.w_dict) diff --git a/pypy/module/cpyext/test/test_setobject.py b/pypy/module/cpyext/test/test_setobject.py --- a/pypy/module/cpyext/test/test_setobject.py +++ b/pypy/module/cpyext/test/test_setobject.py @@ -32,3 +32,13 @@ w_set = api.PySet_New(space.wrap([1,2,3,4])) assert api.PySet_Contains(w_set, space.wrap(1)) assert not api.PySet_Contains(w_set, space.wrap(0)) + + def test_set_pop_clear(self, space, api): + w_set = api.PySet_New(space.wrap([1,2,3,4])) + w_obj = api.PySet_Pop(w_set) + assert 
space.int_w(w_obj) in (1,2,3,4) + assert space.len_w(w_set) == 3 + api.PySet_Clear(w_set) + assert space.len_w(w_set) == 0 + + diff --git a/pypy/module/cpyext/test/test_stringobject.py b/pypy/module/cpyext/test/test_stringobject.py --- a/pypy/module/cpyext/test/test_stringobject.py +++ b/pypy/module/cpyext/test/test_stringobject.py @@ -166,6 +166,20 @@ res = module.test_string_format(1, "xyz") assert res == "bla 1 ble xyz\n" + def test_intern_inplace(self): + module = self.import_extension('foo', [ + ("test_intern_inplace", "METH_O", + ''' + PyObject *s = args; + Py_INCREF(s); + PyString_InternInPlace(&s); + return s; + ''' + ) + ]) + # This does not test much, but at least the refcounts are checked. + assert module.test_intern_inplace('s') == 's' + class TestString(BaseApiTest): def test_string_resize(self, space, api): py_str = new_empty_str(space, 10) diff --git a/pypy/module/cpyext/test/test_typeobject.py b/pypy/module/cpyext/test/test_typeobject.py --- a/pypy/module/cpyext/test/test_typeobject.py +++ b/pypy/module/cpyext/test/test_typeobject.py @@ -425,3 +425,32 @@ ''') obj = module.new_obj() raises(ZeroDivisionError, obj.__setitem__, 5, None) + + def test_tp_iter(self): + module = self.import_extension('foo', [ + ("tp_iter", "METH_O", + ''' + if (!args->ob_type->tp_iter) + { + PyErr_SetNone(PyExc_ValueError); + return NULL; + } + return args->ob_type->tp_iter(args); + ''' + ), + ("tp_iternext", "METH_O", + ''' + if (!args->ob_type->tp_iternext) + { + PyErr_SetNone(PyExc_ValueError); + return NULL; + } + return args->ob_type->tp_iternext(args); + ''' + ) + ]) + l = [1] + it = module.tp_iter(l) + assert type(it) is type(iter([])) + assert module.tp_iternext(it) == 1 + raises(StopIteration, module.tp_iternext, it) diff --git a/pypy/module/cpyext/test/test_unicodeobject.py b/pypy/module/cpyext/test/test_unicodeobject.py --- a/pypy/module/cpyext/test/test_unicodeobject.py +++ b/pypy/module/cpyext/test/test_unicodeobject.py @@ -420,3 +420,12 @@ w_seq = space.wrap([u'a', u'b']) w_joined = api.PyUnicode_Join(w_sep, w_seq) assert space.unwrap(w_joined) == u'ab' + + def test_fromordinal(self, space, api): + w_char = api.PyUnicode_FromOrdinal(65) + assert space.unwrap(w_char) == u'A' + w_char = api.PyUnicode_FromOrdinal(0) + assert space.unwrap(w_char) == u'\0' + w_char = api.PyUnicode_FromOrdinal(0xFFFF) + assert space.unwrap(w_char) == u'\uFFFF' + diff --git a/pypy/module/cpyext/unicodeobject.py b/pypy/module/cpyext/unicodeobject.py --- a/pypy/module/cpyext/unicodeobject.py +++ b/pypy/module/cpyext/unicodeobject.py @@ -395,6 +395,16 @@ w_str = space.wrap(rffi.charpsize2str(s, size)) return space.call_method(w_str, 'decode', space.wrap("utf-8")) + at cpython_api([rffi.INT_real], PyObject) +def PyUnicode_FromOrdinal(space, ordinal): + """Create a Unicode Object from the given Unicode code point ordinal. + + The ordinal must be in range(0x10000) on narrow Python builds + (UCS2), and range(0x110000) on wide builds (UCS4). 
A ValueError is + raised in case it is not.""" + w_ordinal = space.wrap(rffi.cast(lltype.Signed, ordinal)) + return space.call_function(space.builtin.get('unichr'), w_ordinal) + @cpython_api([PyObjectP, Py_ssize_t], rffi.INT_real, error=-1) def PyUnicode_Resize(space, ref, newsize): # XXX always create a new string so far diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -1297,6 +1297,7 @@ nbytes = GetSetProperty(BaseArray.descr_get_nbytes), T = GetSetProperty(BaseArray.descr_get_transpose), + transpose = interp2app(BaseArray.descr_get_transpose), flat = GetSetProperty(BaseArray.descr_get_flatiter), ravel = interp2app(BaseArray.descr_ravel), item = interp2app(BaseArray.descr_item), diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -1487,6 +1487,7 @@ a = array((range(10), range(20, 30))) b = a.T assert(b[:, 0] == a[0, :]).all() + assert (a.transpose() == b).all() def test_flatiter(self): from _numpypy import array, flatiter, arange diff --git a/pypy/module/oracle/interp_error.py b/pypy/module/oracle/interp_error.py --- a/pypy/module/oracle/interp_error.py +++ b/pypy/module/oracle/interp_error.py @@ -72,7 +72,7 @@ get(space).w_InternalError, space.wrap("No Oracle error?")) - self.code = codeptr[0] + self.code = rffi.cast(lltype.Signed, codeptr[0]) self.w_message = config.w_string(space, textbuf) finally: lltype.free(codeptr, flavor='raw') diff --git a/pypy/module/oracle/interp_variable.py b/pypy/module/oracle/interp_variable.py --- a/pypy/module/oracle/interp_variable.py +++ b/pypy/module/oracle/interp_variable.py @@ -359,14 +359,14 @@ # Verifies that truncation or other problems did not take place on # retrieve. 
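PyUnicode_FromOrdinal() above simply defers to unichr(), so from C the call is just (sketch, not from the patch):

    #include <Python.h>

    static PyObject *
    smiley(void)
    {
        /* valid range is range(0x10000) on narrow builds and
           range(0x110000) on wide builds; out of range -> ValueError */
        return PyUnicode_FromOrdinal(0x263A);   /* u'\u263a', or NULL on error */
    }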
if self.isVariableLength: - if rffi.cast(lltype.Signed, self.returnCode[pos]) != 0: + error_code = rffi.cast(lltype.Signed, self.returnCode[pos]) + if error_code != 0: error = W_Error(space, self.environment, "Variable_VerifyFetch()", 0) - error.code = self.returnCode[pos] + error.code = error_code error.message = space.wrap( "column at array pos %d fetched with error: %d" % - (pos, - rffi.cast(lltype.Signed, self.returnCode[pos]))) + (pos, error_code)) w_error = get(space).w_DatabaseError raise OperationError(get(space).w_DatabaseError, diff --git a/pypy/objspace/fake/objspace.py b/pypy/objspace/fake/objspace.py --- a/pypy/objspace/fake/objspace.py +++ b/pypy/objspace/fake/objspace.py @@ -326,4 +326,5 @@ return w_some_obj() FakeObjSpace.sys = FakeModule() FakeObjSpace.sys.filesystemencoding = 'foobar' +FakeObjSpace.sys.defaultencoding = 'ascii' FakeObjSpace.builtin = FakeModule() diff --git a/pypy/objspace/flow/flowcontext.py b/pypy/objspace/flow/flowcontext.py --- a/pypy/objspace/flow/flowcontext.py +++ b/pypy/objspace/flow/flowcontext.py @@ -410,7 +410,7 @@ w_new = Constant(newvalue) f = self.crnt_frame stack_items_w = f.locals_stack_w - for i in range(f.valuestackdepth-1, f.nlocals-1, -1): + for i in range(f.valuestackdepth-1, f.pycode.co_nlocals-1, -1): w_v = stack_items_w[i] if isinstance(w_v, Constant): if w_v.value is oldvalue: diff --git a/pypy/objspace/flow/test/test_framestate.py b/pypy/objspace/flow/test/test_framestate.py --- a/pypy/objspace/flow/test/test_framestate.py +++ b/pypy/objspace/flow/test/test_framestate.py @@ -25,7 +25,7 @@ dummy = Constant(None) #dummy.dummy = True arg_list = ([Variable() for i in range(formalargcount)] + - [dummy] * (frame.nlocals - formalargcount)) + [dummy] * (frame.pycode.co_nlocals - formalargcount)) frame.setfastscope(arg_list) return frame @@ -42,7 +42,7 @@ def test_neq_hacked_framestate(self): frame = self.getframe(self.func_simple) fs1 = FrameState(frame) - frame.locals_stack_w[frame.nlocals-1] = Variable() + frame.locals_stack_w[frame.pycode.co_nlocals-1] = Variable() fs2 = FrameState(frame) assert fs1 != fs2 @@ -55,7 +55,7 @@ def test_union_on_hacked_framestates(self): frame = self.getframe(self.func_simple) fs1 = FrameState(frame) - frame.locals_stack_w[frame.nlocals-1] = Variable() + frame.locals_stack_w[frame.pycode.co_nlocals-1] = Variable() fs2 = FrameState(frame) assert fs1.union(fs2) == fs2 # fs2 is more general assert fs2.union(fs1) == fs2 # fs2 is more general @@ -63,7 +63,7 @@ def test_restore_frame(self): frame = self.getframe(self.func_simple) fs1 = FrameState(frame) - frame.locals_stack_w[frame.nlocals-1] = Variable() + frame.locals_stack_w[frame.pycode.co_nlocals-1] = Variable() fs1.restoreframe(frame) assert fs1 == FrameState(frame) @@ -82,7 +82,7 @@ def test_getoutputargs(self): frame = self.getframe(self.func_simple) fs1 = FrameState(frame) - frame.locals_stack_w[frame.nlocals-1] = Variable() + frame.locals_stack_w[frame.pycode.co_nlocals-1] = Variable() fs2 = FrameState(frame) outputargs = fs1.getoutputargs(fs2) # 'x' -> 'x' is a Variable @@ -92,16 +92,16 @@ def test_union_different_constants(self): frame = self.getframe(self.func_simple) fs1 = FrameState(frame) - frame.locals_stack_w[frame.nlocals-1] = Constant(42) + frame.locals_stack_w[frame.pycode.co_nlocals-1] = Constant(42) fs2 = FrameState(frame) fs3 = fs1.union(fs2) fs3.restoreframe(frame) - assert isinstance(frame.locals_stack_w[frame.nlocals-1], Variable) - # ^^^ generalized + assert isinstance(frame.locals_stack_w[frame.pycode.co_nlocals-1], + Variable) # 
generalized def test_union_spectag(self): frame = self.getframe(self.func_simple) fs1 = FrameState(frame) - frame.locals_stack_w[frame.nlocals-1] = Constant(SpecTag()) + frame.locals_stack_w[frame.pycode.co_nlocals-1] = Constant(SpecTag()) fs2 = FrameState(frame) assert fs1.union(fs2) is None # UnionError diff --git a/pypy/translator/c/database.py b/pypy/translator/c/database.py --- a/pypy/translator/c/database.py +++ b/pypy/translator/c/database.py @@ -28,11 +28,13 @@ gctransformer = None def __init__(self, translator=None, standalone=False, + cpython_extension=False, gcpolicyclass=None, thread_enabled=False, sandbox=False): self.translator = translator self.standalone = standalone + self.cpython_extension = cpython_extension self.sandbox = sandbox if gcpolicyclass is None: gcpolicyclass = gc.RefcountingGcPolicy diff --git a/pypy/translator/c/dlltool.py b/pypy/translator/c/dlltool.py --- a/pypy/translator/c/dlltool.py +++ b/pypy/translator/c/dlltool.py @@ -14,11 +14,14 @@ CBuilder.__init__(self, *args, **kwds) def getentrypointptr(self): + entrypoints = [] bk = self.translator.annotator.bookkeeper - graphs = [bk.getdesc(f).cachedgraph(None) for f, _ in self.functions] - return [getfunctionptr(graph) for graph in graphs] + for f, _ in self.functions: + graph = bk.getdesc(f).getuniquegraph() + entrypoints.append(getfunctionptr(graph)) + return entrypoints - def gen_makefile(self, targetdir): + def gen_makefile(self, targetdir, exe_name=None): pass # XXX finish def compile(self): diff --git a/pypy/translator/c/extfunc.py b/pypy/translator/c/extfunc.py --- a/pypy/translator/c/extfunc.py +++ b/pypy/translator/c/extfunc.py @@ -106,7 +106,7 @@ yield ('RPYTHON_EXCEPTION_MATCH', exceptiondata.fn_exception_match) yield ('RPYTHON_TYPE_OF_EXC_INST', exceptiondata.fn_type_of_exc_inst) yield ('RPYTHON_RAISE_OSERROR', exceptiondata.fn_raise_OSError) - if not db.standalone: + if db.cpython_extension: yield ('RPYTHON_PYEXCCLASS2EXC', exceptiondata.fn_pyexcclass2exc) yield ('RPyExceptionOccurred1', exctransformer.rpyexc_occured_ptr.value) diff --git a/pypy/translator/c/gcc/trackgcroot.py b/pypy/translator/c/gcc/trackgcroot.py --- a/pypy/translator/c/gcc/trackgcroot.py +++ b/pypy/translator/c/gcc/trackgcroot.py @@ -471,8 +471,8 @@ return [] IGNORE_OPS_WITH_PREFIXES = dict.fromkeys([ - 'cmp', 'test', 'set', 'sahf', 'lahf', 'cltd', 'cld', 'std', - 'rep', 'movs', 'lods', 'stos', 'scas', 'cwtl', 'cwde', 'prefetch', + 'cmp', 'test', 'set', 'sahf', 'lahf', 'cld', 'std', + 'rep', 'movs', 'lods', 'stos', 'scas', 'cwde', 'prefetch', # floating-point operations cannot produce GC pointers 'f', 'cvt', 'ucomi', 'comi', 'subs', 'subp' , 'adds', 'addp', 'xorp', @@ -485,6 +485,8 @@ 'bswap', 'bt', 'rdtsc', 'punpck', 'pshufd', 'pcmp', 'pand', 'psllw', 'pslld', 'psllq', 'paddq', 'pinsr', + # sign-extending moves should not produce GC pointers + 'cbtw', 'cwtl', 'cwtd', 'cltd', 'cltq', 'cqto', # zero-extending moves should not produce GC pointers 'movz', # locked operations should not move GC pointers, at least so far @@ -1695,6 +1697,8 @@ } """ elif self.format in ('elf64', 'darwin64'): + if self.format == 'elf64': # gentoo patch: hardened systems + print >> output, "\t.section .note.GNU-stack,\"\",%progbits" print >> output, "\t.text" print >> output, "\t.globl %s" % _globalname('pypy_asm_stackwalk') _variant(elf64='.type pypy_asm_stackwalk, @function', diff --git a/pypy/translator/c/genc.py b/pypy/translator/c/genc.py --- a/pypy/translator/c/genc.py +++ b/pypy/translator/c/genc.py @@ -111,6 +111,7 @@ _compiled = False 
modulename = None split = False + cpython_extension = False def __init__(self, translator, entrypoint, config, gcpolicy=None, secondary_entrypoints=()): @@ -138,6 +139,7 @@ raise NotImplementedError("--gcrootfinder=asmgcc requires standalone") db = LowLevelDatabase(translator, standalone=self.standalone, + cpython_extension=self.cpython_extension, gcpolicyclass=gcpolicyclass, thread_enabled=self.config.translation.thread, sandbox=self.config.translation.sandbox) @@ -236,6 +238,8 @@ CBuilder.have___thread = self.translator.platform.check___thread() if not self.standalone: assert not self.config.translation.instrument + if self.cpython_extension: + defines['PYPY_CPYTHON_EXTENSION'] = 1 else: defines['PYPY_STANDALONE'] = db.get(pf) if self.config.translation.instrument: @@ -307,13 +311,18 @@ class CExtModuleBuilder(CBuilder): standalone = False + cpython_extension = True _module = None _wrapper = None def get_eci(self): from distutils import sysconfig python_inc = sysconfig.get_python_inc() - eci = ExternalCompilationInfo(include_dirs=[python_inc]) + eci = ExternalCompilationInfo( + include_dirs=[python_inc], + includes=["Python.h", + ], + ) return eci.merge(CBuilder.get_eci(self)) def getentrypointptr(self): # xxx diff --git a/pypy/translator/c/src/exception.h b/pypy/translator/c/src/exception.h --- a/pypy/translator/c/src/exception.h +++ b/pypy/translator/c/src/exception.h @@ -2,7 +2,7 @@ /************************************************************/ /*** C header subsection: exceptions ***/ -#if !defined(PYPY_STANDALONE) && !defined(PYPY_NOT_MAIN_FILE) +#if defined(PYPY_CPYTHON_EXTENSION) && !defined(PYPY_NOT_MAIN_FILE) PyObject *RPythonError; #endif @@ -74,7 +74,7 @@ RPyRaiseException(RPYTHON_TYPE_OF_EXC_INST(rexc), rexc); } -#ifndef PYPY_STANDALONE +#ifdef PYPY_CPYTHON_EXTENSION void RPyConvertExceptionFromCPython(void) { /* convert the CPython exception to an RPython one */ diff --git a/pypy/translator/c/src/g_include.h b/pypy/translator/c/src/g_include.h --- a/pypy/translator/c/src/g_include.h +++ b/pypy/translator/c/src/g_include.h @@ -2,7 +2,7 @@ /************************************************************/ /*** C header file for code produced by genc.py ***/ -#ifndef PYPY_STANDALONE +#ifdef PYPY_CPYTHON_EXTENSION # include "Python.h" # include "compile.h" # include "frameobject.h" diff --git a/pypy/translator/c/src/g_prerequisite.h b/pypy/translator/c/src/g_prerequisite.h --- a/pypy/translator/c/src/g_prerequisite.h +++ b/pypy/translator/c/src/g_prerequisite.h @@ -5,8 +5,6 @@ #ifdef PYPY_STANDALONE # include "src/commondefs.h" -#else -# include "Python.h" #endif #ifdef _WIN32 diff --git a/pypy/translator/c/src/pyobj.h b/pypy/translator/c/src/pyobj.h --- a/pypy/translator/c/src/pyobj.h +++ b/pypy/translator/c/src/pyobj.h @@ -2,7 +2,7 @@ /************************************************************/ /*** C header subsection: untyped operations ***/ /*** as OP_XXX() macros calling the CPython API ***/ - +#ifdef PYPY_CPYTHON_EXTENSION #define op_bool(r,what) { \ int _retval = what; \ @@ -261,3 +261,5 @@ } #endif + +#endif /* PYPY_CPYTHON_EXTENSION */ diff --git a/pypy/translator/c/src/support.h b/pypy/translator/c/src/support.h --- a/pypy/translator/c/src/support.h +++ b/pypy/translator/c/src/support.h @@ -104,7 +104,7 @@ # define RPyBareItem(array, index) ((array)[index]) #endif -#ifndef PYPY_STANDALONE +#ifdef PYPY_CPYTHON_EXTENSION /* prototypes */ diff --git a/pypy/translator/c/test/test_dlltool.py b/pypy/translator/c/test/test_dlltool.py --- 
a/pypy/translator/c/test/test_dlltool.py +++ b/pypy/translator/c/test/test_dlltool.py @@ -2,7 +2,6 @@ from pypy.translator.c.dlltool import DLLDef from ctypes import CDLL import py -py.test.skip("fix this if needed") class TestDLLTool(object): def test_basic(self): @@ -16,8 +15,8 @@ d = DLLDef('lib', [(f, [int]), (b, [int])]) so = d.compile() dll = CDLL(str(so)) - assert dll.f(3) == 3 - assert dll.b(10) == 12 + assert dll.pypy_g_f(3) == 3 + assert dll.pypy_g_b(10) == 12 def test_split_criteria(self): def f(x): @@ -28,4 +27,5 @@ d = DLLDef('lib', [(f, [int]), (b, [int])]) so = d.compile() - assert py.path.local(so).dirpath().join('implement.c').check() + dirpath = py.path.local(so).dirpath() + assert dirpath.join('translator_c_test_test_dlltool.c').check() diff --git a/pypy/translator/driver.py b/pypy/translator/driver.py --- a/pypy/translator/driver.py +++ b/pypy/translator/driver.py @@ -331,6 +331,7 @@ raise Exception("stand-alone program entry point must return an " "int (and not, e.g., None or always raise an " "exception).") + annotator.complete() annotator.simplify() return s From noreply at buildbot.pypy.org Fri Feb 17 14:42:35 2012 From: noreply at buildbot.pypy.org (fijal) Date: Fri, 17 Feb 2012 14:42:35 +0100 (CET) Subject: [pypy-commit] pypy numpy-record-dtypes: break the world and then fix it. strides are now based on bytes not on Message-ID: <20120217134235.937508204C@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-record-dtypes Changeset: r52584:de511ede85c1 Date: 2012-02-17 15:42 +0200 http://bitbucket.org/pypy/pypy/changeset/de511ede85c1/ Log: break the world and then fix it. strides are now based on bytes not on items. saves a mul in test_zjit as well diff --git a/pypy/module/micronumpy/interp_dtype.py b/pypy/module/micronumpy/interp_dtype.py --- a/pypy/module/micronumpy/interp_dtype.py +++ b/pypy/module/micronumpy/interp_dtype.py @@ -42,17 +42,16 @@ return self.itemtype.coerce(space, self, w_item) def getitem(self, arr, i): - return self.itemtype.read(arr, self.itemtype.get_element_size(), i, 0) + return self.itemtype.read(arr, 1, i, 0) def getitem_bool(self, arr, i): - isize = self.itemtype.get_element_size() - return self.itemtype.read_bool(arr, isize, i, 0) + return self.itemtype.read_bool(arr, 1, i, 0) def setitem(self, arr, i, box): - self.itemtype.store(arr, self.itemtype.get_element_size(), i, 0, box) + self.itemtype.store(arr, 1, i, 0, box) def fill(self, storage, box, start, stop): - self.itemtype.fill(storage, self.itemtype.get_element_size(), box, start, stop, 0) + self.itemtype.fill(storage, self.get_size(), box, start, stop, 0) def descr_str(self, space): return space.wrap(self.name) @@ -120,6 +119,9 @@ return '' % self.fields return '' % self.itemtype + def get_size(self): + return self.itemtype.get_element_size() + def dtype_from_list(space, w_lst): lst_w = space.listview(w_lst) fields = {} diff --git a/pypy/module/micronumpy/interp_iter.py b/pypy/module/micronumpy/interp_iter.py --- a/pypy/module/micronumpy/interp_iter.py +++ b/pypy/module/micronumpy/interp_iter.py @@ -42,7 +42,7 @@ we can go faster. All the calculations happen in next() -next_step_x() tries to do the iteration for a number of steps at once, +next_skip_x() tries to do the iteration for a number of steps at once, but then we cannot gaurentee that we only overflow one single shape dimension, perhaps we could overflow times in one big step. 
""" @@ -95,17 +95,19 @@ raise NotImplementedError class ArrayIterator(BaseIterator): - def __init__(self, size): + def __init__(self, size, element_size): self.offset = 0 self.size = size + self.element_size = element_size def next(self, shapelen): return self.next_skip_x(1) - def next_skip_x(self, ofs): + def next_skip_x(self, x): arr = instantiate(ArrayIterator) arr.size = self.size - arr.offset = self.offset + ofs + arr.offset = self.offset + x * self.element_size + arr.element_size = self.element_size return arr def next_no_increase(self, shapelen): diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -79,8 +79,8 @@ dtype = space.interp_w(interp_dtype.W_Dtype, space.call_function(space.gettypefor(interp_dtype.W_Dtype), w_dtype) ) - size, shape = _find_size_and_shape(space, w_size) - return space.wrap(W_NDimArray(size, shape[:], dtype=dtype)) + shape = _find_shape(space, w_size) + return space.wrap(W_NDimArray(shape[:], dtype=dtype)) def _unaryop_impl(ufunc_name): def impl(self, space): @@ -204,8 +204,7 @@ return scalar_w(space, dtype, space.wrap(0)) # Do the dims match? out_shape, other_critical_dim = match_dot_shapes(space, self, other) - out_size = support.product(out_shape) - result = W_NDimArray(out_size, out_shape, dtype) + result = W_NDimArray(out_shape, dtype) # This is the place to add fpypy and blas return multidim_dot(space, self.get_concrete(), other.get_concrete(), result, dtype, @@ -224,7 +223,7 @@ return space.wrap(self.find_dtype().itemtype.get_element_size()) def descr_get_nbytes(self, space): - return space.wrap(self.size * self.find_dtype().itemtype.get_element_size()) + return space.wrap(self.size) @jit.unroll_safe def descr_get_shape(self, space): @@ -232,13 +231,16 @@ def descr_set_shape(self, space, w_iterable): new_shape = get_shape_from_iterable(space, - self.size, w_iterable) + support.product(self.shape), w_iterable) if isinstance(self, Scalar): return self.get_concrete().setshape(space, new_shape) def descr_get_size(self, space): - return space.wrap(self.size) + return space.wrap(self.get_size()) + + def get_size(self): + return self.size // self.find_dtype().get_size() def descr_copy(self, space): return self.copy(space) @@ -258,7 +260,7 @@ def empty_copy(self, space, dtype): shape = self.shape - return W_NDimArray(support.product(shape), shape[:], dtype, 'C') + return W_NDimArray(shape[:], dtype, 'C') def descr_len(self, space): if len(self.shape): @@ -299,6 +301,8 @@ """ The result of getitem/setitem is a single item if w_idx is a list of scalars that match the size of shape """ + if space.isinstance_w(w_idx, space.w_str): + return False shape_len = len(self.shape) if shape_len == 0: raise OperationError(space.w_IndexError, space.wrap( @@ -330,28 +334,28 @@ return [Chunk(*space.decode_index4(w_item, self.shape[i])) for i, w_item in enumerate(space.fixedview(w_idx))] - def count_all_true(self, arr): - sig = arr.find_sig() - frame = sig.create_frame(arr) - shapelen = len(arr.shape) + def count_all_true(self): + sig = self.find_sig() + frame = sig.create_frame(self) + shapelen = len(self.shape) s = 0 iter = None while not frame.done(): - count_driver.jit_merge_point(arr=arr, frame=frame, iter=iter, s=s, + count_driver.jit_merge_point(arr=self, frame=frame, iter=iter, s=s, shapelen=shapelen) iter = frame.get_final_iter() - s += arr.dtype.getitem_bool(arr, iter.offset) + s += self.dtype.getitem_bool(self, iter.offset) 
frame.next(shapelen) return s def getitem_filter(self, space, arr): concr = arr.get_concrete() - if concr.size > self.size: + if concr.get_size() > self.get_size(): raise OperationError(space.w_IndexError, space.wrap("index out of range for array")) - size = self.count_all_true(concr) - res = W_NDimArray(size, [size], self.find_dtype()) - ri = ArrayIterator(size) + size = concr.count_all_true() + res = W_NDimArray([size], self.find_dtype()) + ri = res.create_iter() shapelen = len(self.shape) argi = concr.create_iter() sig = self.find_sig() @@ -372,7 +376,7 @@ return res def setitem_filter(self, space, idx, val): - size = self.count_all_true(idx) + size = idx.count_all_true() arr = SliceArray([size], self.dtype, self, val) sig = arr.find_sig() shapelen = len(self.shape) @@ -451,12 +455,13 @@ w_shape = args_w[0] else: w_shape = space.newtuple(args_w) - new_shape = get_shape_from_iterable(space, self.size, w_shape) + new_shape = get_shape_from_iterable(space, support.product(self.shape), + w_shape) return self.reshape(space, new_shape) def reshape(self, space, new_shape): concrete = self.get_concrete() - # Since we got to here, prod(new_shape) == self.size + # Since we got to here, prod(new_shape) == self.get_size() new_strides = calc_new_strides(new_shape, concrete.shape, concrete.strides, concrete.order) if new_strides: @@ -487,7 +492,7 @@ def descr_mean(self, space, w_axis=None): if space.is_w(w_axis, space.w_None): w_axis = space.wrap(-1) - w_denom = space.wrap(self.size) + w_denom = space.wrap(support.product(self.shape)) else: dim = space.int_w(w_axis) w_denom = space.wrap(self.shape[dim]) @@ -506,7 +511,7 @@ concr.fill(space, w_value) def descr_nonzero(self, space): - if self.size > 1: + if self.get_size() > 1: raise OperationError(space.w_ValueError, space.wrap( "The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()")) concr = self.get_concrete_or_scalar() @@ -585,8 +590,7 @@ space.wrap("axis unsupported for take")) index_i = index.create_iter() res_shape = index.shape - size = support.product(res_shape) - res = W_NDimArray(size, res_shape[:], concr.dtype, concr.order) + res = W_NDimArray(res_shape[:], concr.dtype, concr.order) res_i = res.create_iter() shapelen = len(index.shape) sig = concr.find_sig() @@ -651,8 +655,7 @@ """ Intermediate class representing a literal. 
""" - size = 1 - _attrs_ = ["dtype", "value", "shape"] + _attrs_ = ["dtype", "value", "shape", "size"] def __init__(self, dtype, value): self.shape = [] @@ -660,6 +663,7 @@ self.dtype = dtype assert isinstance(value, interp_boxes.W_GenericBox) self.value = value + self.size = dtype.get_size() def find_dtype(self): return self.dtype @@ -677,8 +681,7 @@ return self def reshape(self, space, new_shape): - size = support.product(new_shape) - res = W_NDimArray(size, new_shape, self.dtype, 'C') + res = W_NDimArray(new_shape, self.dtype, 'C') res.setitem(0, self.value) return res @@ -691,6 +694,7 @@ self.forced_result = None self.res_dtype = res_dtype self.name = name + self.size = support.product(self.shape) * res_dtype.get_size() def _del_sources(self): # Function for deleting references to source arrays, @@ -698,7 +702,7 @@ raise NotImplementedError def compute(self): - ra = ResultArray(self, self.size, self.shape, self.res_dtype) + ra = ResultArray(self, self.shape, self.res_dtype) loop.compute(ra) return ra.left @@ -726,7 +730,6 @@ def __init__(self, child, chunks, shape): self.child = child self.chunks = chunks - self.size = support.product(shape) VirtualArray.__init__(self, 'slice', shape, child.find_dtype()) def create_sig(self): @@ -773,7 +776,6 @@ self.left = left self.right = right self.calc_dtype = calc_dtype - self.size = support.product(self.shape) def _del_sources(self): self.left = None @@ -801,9 +803,9 @@ self.left.create_sig(), self.right.create_sig()) class ResultArray(Call2): - def __init__(self, child, size, shape, dtype, res=None, order='C'): + def __init__(self, child, shape, dtype, res=None, order='C'): if res is None: - res = W_NDimArray(size, shape, dtype, order) + res = W_NDimArray(shape, dtype, order) Call2.__init__(self, None, 'assign', shape, dtype, dtype, res, child) def create_sig(self): @@ -817,7 +819,7 @@ self.s = StringBuilder(child.size * self.itemsize) Call1.__init__(self, None, 'tostring', child.shape, dtype, dtype, child) - self.res = W_NDimArray(1, [1], dtype, 'C') + self.res = W_NDimArray([1], dtype, 'C') self.res_casted = rffi.cast(rffi.CArrayPtr(lltype.Char), self.res.storage) @@ -898,13 +900,13 @@ """ _immutable_fields_ = ['storage'] - def __init__(self, size, shape, dtype, order='C', parent=None): - self.size = size + def __init__(self, shape, dtype, order='C', parent=None): self.parent = parent + self.size = support.product(shape) * dtype.get_size() if parent is not None: self.storage = parent.storage else: - self.storage = dtype.itemtype.malloc(size) + self.storage = dtype.itemtype.malloc(self.size) self.order = order self.dtype = dtype if self.strides is None: @@ -930,6 +932,7 @@ self.dtype.setitem(self, item, value) def calc_strides(self, shape): + dtype = self.find_dtype() strides = [] backstrides = [] s = 1 @@ -937,8 +940,8 @@ if self.order == 'C': shape_rev.reverse() for sh in shape_rev: - strides.append(s) - backstrides.append(s * (sh - 1)) + strides.append(s * dtype.get_size()) + backstrides.append(s * (sh - 1) * dtype.get_size()) s *= sh if self.order == 'C': strides.reverse() @@ -986,9 +989,9 @@ shapelen = len(self.shape) if shapelen == 1: rffi.c_memcpy( - rffi.ptradd(self.storage, self.start * itemsize), - rffi.ptradd(w_value.storage, w_value.start * itemsize), - self.size * itemsize + rffi.ptradd(self.storage, self.start), + rffi.ptradd(w_value.storage, w_value.start), + self.size ) else: dest = SkipLastAxisIterator(self) @@ -1003,7 +1006,7 @@ dest.next() def copy(self, space): - array = W_NDimArray(self.size, self.shape[:], self.dtype, 
self.order) + array = W_NDimArray(self.shape[:], self.dtype, self.order) array.setslice(space, self) return array @@ -1023,8 +1026,7 @@ parent = parent.parent self.strides = strides self.backstrides = backstrides - ViewArray.__init__(self, support.product(shape), shape, parent.dtype, - parent.order, parent) + ViewArray.__init__(self, shape, parent.dtype, parent.order, parent) self.start = start def create_iter(self, transforms=None): @@ -1039,12 +1041,13 @@ # but then calc_strides would have to accept a stepping factor strides = [] backstrides = [] - s = self.strides[0] + dtype = self.find_dtype() + s = self.strides[0] // dtype.get_size() if self.order == 'C': new_shape.reverse() for sh in new_shape: - strides.append(s) - backstrides.append(s * (sh - 1)) + strides.append(s * dtype.get_size()) + backstrides.append(s * (sh - 1) * dtype.get_size()) s *= sh if self.order == 'C': strides.reverse() @@ -1079,7 +1082,9 @@ self.calc_strides(new_shape) def create_iter(self, transforms=None): - return ArrayIterator(self.size).apply_transformations(self, transforms) + esize = self.find_dtype().get_size() + return ArrayIterator(self.size, esize).apply_transformations(self, + transforms) def create_sig(self): return signature.ArraySignature(self.dtype) @@ -1087,18 +1092,13 @@ def __del__(self): lltype.free(self.storage, flavor='raw', track_allocation=False) -def _find_size_and_shape(space, w_size): +def _find_shape(space, w_size): if space.isinstance_w(w_size, space.w_int): - size = space.int_w(w_size) - shape = [size] - else: - size = 1 - shape = [] - for w_item in space.fixedview(w_size): - item = space.int_w(w_item) - size *= item - shape.append(item) - return size, shape + return [space.int_w(w_size)] + shape = [] + for w_item in space.fixedview(w_size): + shape.append(space.int_w(w_item)) + return shape @unwrap_spec(subok=bool, copy=bool, ownmaskna=bool) def array(space, w_item_or_iterable, w_dtype=None, w_order=None, @@ -1139,16 +1139,17 @@ space.call_function(space.gettypefor(interp_dtype.W_Dtype), w_dtype)) shape, elems_w = find_shape_and_elems(space, w_item_or_iterable, dtype) # they come back in C order - size = len(elems_w) if dtype is None: for w_elem in elems_w: dtype = interp_ufuncs.find_dtype_for_scalar(space, w_elem, - w_dtype) + dtype) if dtype is interp_dtype.get_dtype_cache(space).w_float64dtype: break - arr = W_NDimArray(size, shape[:], dtype=dtype, order=order) + if dtype is None: + dtype = interp_dtype.get_dtype_cache(space).w_float64dtype + arr = W_NDimArray(shape[:], dtype=dtype, order=order) shapelen = len(shape) - arr_iter = ArrayIterator(arr.size) + arr_iter = arr.create_iter() # XXX we might want to have a jitdriver here for i in range(len(elems_w)): w_elem = elems_w[i] @@ -1161,22 +1162,22 @@ dtype = space.interp_w(interp_dtype.W_Dtype, space.call_function(space.gettypefor(interp_dtype.W_Dtype), w_dtype) ) - size, shape = _find_size_and_shape(space, w_size) + shape = _find_shape(space, w_size) if not shape: return scalar_w(space, dtype, space.wrap(0)) - return space.wrap(W_NDimArray(size, shape[:], dtype=dtype)) + return space.wrap(W_NDimArray(shape[:], dtype=dtype)) def ones(space, w_size, w_dtype=None): dtype = space.interp_w(interp_dtype.W_Dtype, space.call_function(space.gettypefor(interp_dtype.W_Dtype), w_dtype) ) - size, shape = _find_size_and_shape(space, w_size) + shape = _find_shape(space, w_size) if not shape: return scalar_w(space, dtype, space.wrap(1)) - arr = W_NDimArray(size, shape[:], dtype=dtype) + arr = W_NDimArray(shape[:], dtype=dtype) one = dtype.box(1) - 
arr.dtype.fill(arr.storage, one, 0, size) + arr.dtype.fill(arr.storage, one, 0, arr.size) return space.wrap(arr) @unwrap_spec(arr=BaseArray, skipna=bool, keepdims=bool) @@ -1224,7 +1225,7 @@ "array dimensions must agree except for axis being concatenated")) elif i == axis: shape[i] += axis_size - res = W_NDimArray(support.product(shape), shape, dtype, 'C') + res = W_NDimArray(shape, dtype, 'C') chunks = [Chunk(0, i, 1, i) for i in shape] axis_start = 0 for arr in args_w: @@ -1327,7 +1328,7 @@ self.iter = sig.create_frame(arr).get_final_iter() self.base = arr self.index = 0 - ViewArray.__init__(self, arr.size, [arr.size], arr.dtype, arr.order, + ViewArray.__init__(self, [arr.get_size()], arr.dtype, arr.order, arr) def descr_next(self, space): @@ -1342,7 +1343,7 @@ return self def descr_len(self, space): - return space.wrap(self.size) + return space.wrap(self.get_size()) def descr_index(self, space): return space.wrap(self.index) @@ -1360,18 +1361,17 @@ raise OperationError(space.w_IndexError, space.wrap('unsupported iterator index')) base = self.base - start, stop, step, lngth = space.decode_index4(w_idx, base.size) + start, stop, step, lngth = space.decode_index4(w_idx, base.get_size()) # setslice would have been better, but flat[u:v] for arbitrary # shapes of array a cannot be represented as a[x1:x2, y1:y2] basei = ViewIterator(base.start, base.strides, - base.backstrides,base.shape) + base.backstrides, base.shape) shapelen = len(base.shape) basei = basei.next_skip_x(shapelen, start) if lngth <2: return base.getitem(basei.offset) - ri = ArrayIterator(lngth) - res = W_NDimArray(lngth, [lngth], base.dtype, - base.order) + res = W_NDimArray([lngth], base.dtype, base.order) + ri = res.create_iter() while not ri.done(): flat_get_driver.jit_merge_point(shapelen=shapelen, base=base, @@ -1381,7 +1381,7 @@ ri=ri, ) w_val = base.getitem(basei.offset) - res.setitem(ri.offset,w_val) + res.setitem(ri.offset, w_val) basei = basei.next_skip_x(shapelen, step) ri = ri.next(shapelen) return res @@ -1392,11 +1392,12 @@ raise OperationError(space.w_IndexError, space.wrap('unsupported iterator index')) base = self.base - start, stop, step, lngth = space.decode_index4(w_idx, base.size) + start, stop, step, lngth = space.decode_index4(w_idx, base.get_size()) arr = convert_to_array(space, w_value) + ri = arr.create_iter() ai = 0 basei = ViewIterator(base.start, base.strides, - base.backstrides,base.shape) + base.backstrides, base.shape) shapelen = len(base.shape) basei = basei.next_skip_x(shapelen, start) while lngth > 0: @@ -1408,11 +1409,13 @@ ai=ai, lngth=lngth, ) - v = arr.getitem(ai).convert_to(base.dtype) + v = arr.getitem(ri.offset).convert_to(base.dtype) base.setitem(basei.offset, v) # need to repeat input values until all assignments are done - ai = (ai + 1) % arr.size basei = basei.next_skip_x(shapelen, step) + ri = ri.next(shapelen) + # WTF is numpy thinking? 
+ ri.offset %= arr.size lngth -= 1 def create_sig(self): @@ -1420,9 +1423,9 @@ def create_iter(self, transforms=None): return ViewIterator(self.base.start, self.base.strides, - self.base.backstrides, - self.base.shape).apply_transformations(self.base, - transforms) + self.base.backstrides, + self.base.shape).apply_transformations(self.base, + transforms) def descr_base(self, space): return space.wrap(self.base) diff --git a/pypy/module/micronumpy/interp_support.py b/pypy/module/micronumpy/interp_support.py --- a/pypy/module/micronumpy/interp_support.py +++ b/pypy/module/micronumpy/interp_support.py @@ -51,9 +51,11 @@ raise OperationError(space.w_ValueError, space.wrap( "string is smaller than requested size")) - a = W_NDimArray(num_items, [num_items], dtype=dtype) - for i, val in enumerate(items): - a.dtype.setitem(a, i, val) + a = W_NDimArray([num_items], dtype=dtype) + ai = a.create_iter() + for val in items: + a.dtype.setitem(a, ai.offset, val) + ai = ai.next(1) return space.wrap(a) @@ -71,10 +73,12 @@ raise OperationError(space.w_ValueError, space.wrap( "string is smaller than requested size")) - a = W_NDimArray(count, [count], dtype=dtype) + a = W_NDimArray([count], dtype=dtype) + ai = a.create_iter() for i in range(count): val = dtype.itemtype.runpack_str(s[i*itemsize:i*itemsize + itemsize]) - a.dtype.setitem(a, i, val) + a.dtype.setitem(a, ai.offset, val) + ai = ai.next(1) return space.wrap(a) diff --git a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py --- a/pypy/module/micronumpy/interp_ufuncs.py +++ b/pypy/module/micronumpy/interp_ufuncs.py @@ -156,7 +156,7 @@ shape = obj.shape[:dim] + [1] + obj.shape[dim + 1:] else: shape = obj.shape[:dim] + obj.shape[dim + 1:] - result = W_NDimArray(support.product(shape), shape, dtype) + result = W_NDimArray(shape, dtype) arr = AxisReduce(self.func, self.name, self.identity, obj.shape, dtype, result, obj, dim) loop.compute(arr) diff --git a/pypy/module/micronumpy/strides.py b/pypy/module/micronumpy/strides.py --- a/pypy/module/micronumpy/strides.py +++ b/pypy/module/micronumpy/strides.py @@ -48,20 +48,20 @@ def find_shape_and_elems(space, w_iterable, dtype): shape = [space.len_w(w_iterable)] batch = space.listview(w_iterable) - is_rec_type = dtype.is_record_type() + is_rec_type = dtype is not None and dtype.is_record_type() while True: new_batch = [] if not batch: return shape, [] if is_single_elem(space, batch[0], is_rec_type): for w_elem in batch: - if is_single_elem(space, w_elem, is_rec_type): + if not is_single_elem(space, w_elem, is_rec_type): raise OperationError(space.w_ValueError, space.wrap( "setting an array element with a sequence")) return shape, batch size = space.len_w(batch[0]) for w_elem in batch: - if (not is_single_elem(space, w_elem, is_rec_type) or + if (is_single_elem(space, w_elem, is_rec_type) or space.len_w(w_elem) != size): raise OperationError(space.w_ValueError, space.wrap( "setting an array element with a sequence")) diff --git a/pypy/module/micronumpy/test/test_base.py b/pypy/module/micronumpy/test/test_base.py --- a/pypy/module/micronumpy/test/test_base.py +++ b/pypy/module/micronumpy/test/test_base.py @@ -21,8 +21,8 @@ float64_dtype = get_dtype_cache(space).w_float64dtype bool_dtype = get_dtype_cache(space).w_booldtype - ar = W_NDimArray(10, [10], dtype=float64_dtype) - ar2 = W_NDimArray(10, [10], dtype=float64_dtype) + ar = W_NDimArray([10], dtype=float64_dtype) + ar2 = W_NDimArray([10], dtype=float64_dtype) v1 = ar.descr_add(space, ar) v2 = ar.descr_add(space, 
Scalar(float64_dtype, W_Float64Box(2.0))) sig1 = v1.find_sig() @@ -40,7 +40,7 @@ v4 = ar.descr_add(space, ar) assert v1.find_sig() is v4.find_sig() - bool_ar = W_NDimArray(10, [10], dtype=bool_dtype) + bool_ar = W_NDimArray([10], dtype=bool_dtype) v5 = ar.descr_add(space, bool_ar) assert v5.find_sig() is not v1.find_sig() assert v5.find_sig() is not v2.find_sig() @@ -57,7 +57,7 @@ def test_slice_signature(self, space): float64_dtype = get_dtype_cache(space).w_float64dtype - ar = W_NDimArray(10, [10], dtype=float64_dtype) + ar = W_NDimArray([10], dtype=float64_dtype) v1 = ar.descr_getitem(space, space.wrap(slice(1, 3, 1))) v2 = ar.descr_getitem(space, space.wrap(slice(4, 6, 1))) assert v1.find_sig() is v2.find_sig() diff --git a/pypy/module/micronumpy/test/test_iter.py b/pypy/module/micronumpy/test/test_iter.py --- a/pypy/module/micronumpy/test/test_iter.py +++ b/pypy/module/micronumpy/test/test_iter.py @@ -49,17 +49,17 @@ backstrides = [x * (y - 1) for x,y in zip(strides, shape)] assert backstrides == [10, 4] i = ViewIterator(start, strides, backstrides, shape) - i = i.next_skip_x(2,2) - i = i.next_skip_x(2,2) - i = i.next_skip_x(2,2) + i = i.next_skip_ofs(2,2) + i = i.next_skip_ofs(2,2) + i = i.next_skip_ofs(2,2) assert i.offset == 6 assert not i.done() assert i.indices == [1,1] #And for some big skips - i = i.next_skip_x(2,5) + i = i.next_skip_ofs(2,5) assert i.offset == 11 assert i.indices == [2,1] - i = i.next_skip_x(2,5) + i = i.next_skip_ofs(2,5) # Note: the offset does not overflow but recycles, # this is good for broadcast assert i.offset == 1 @@ -72,17 +72,17 @@ backstrides = [x * (y - 1) for x,y in zip(strides, shape)] assert backstrides == [2, 12] i = ViewIterator(start, strides, backstrides, shape) - i = i.next_skip_x(2,2) - i = i.next_skip_x(2,2) - i = i.next_skip_x(2,2) + i = i.next_skip_ofs(2,2) + i = i.next_skip_ofs(2,2) + i = i.next_skip_ofs(2,2) assert i.offset == 4 assert i.indices == [1,1] assert not i.done() - i = i.next_skip_x(2,5) + i = i.next_skip_ofs(2,5) assert i.offset == 5 assert i.indices == [2,1] assert not i.done() - i = i.next_skip_x(2,5) + i = i.next_skip_ofs(2,5) assert i.indices == [0,1] assert i.offset == 3 assert i.done() diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -16,6 +16,9 @@ def malloc(size): return None + def get_size(self): + return 1 + class TestNumArrayDirect(object): def newslice(self, *args): @@ -31,17 +34,17 @@ return self.space.newtuple(args_w) def test_strides_f(self): - a = W_NDimArray(100, [10, 5, 3], MockDtype(), 'F') + a = W_NDimArray([10, 5, 3], MockDtype(), 'F') assert a.strides == [1, 10, 50] assert a.backstrides == [9, 40, 100] def test_strides_c(self): - a = W_NDimArray(100, [10, 5, 3], MockDtype(), 'C') + a = W_NDimArray([10, 5, 3], MockDtype(), 'C') assert a.strides == [15, 3, 1] assert a.backstrides == [135, 12, 2] def test_create_slice_f(self): - a = W_NDimArray(10 * 5 * 3, [10, 5, 3], MockDtype(), 'F') + a = W_NDimArray([10, 5, 3], MockDtype(), 'F') s = a.create_slice([Chunk(3, 0, 0, 1)]) assert s.start == 3 assert s.strides == [10, 50] @@ -59,7 +62,7 @@ assert s.shape == [10, 3] def test_create_slice_c(self): - a = W_NDimArray(10 * 5 * 3, [10, 5, 3], MockDtype(), 'C') + a = W_NDimArray([10, 5, 3], MockDtype(), 'C') s = a.create_slice([Chunk(3, 0, 0, 1)]) assert s.start == 45 assert s.strides == [3, 1] @@ -79,7 +82,7 @@ assert s.shape == [10, 3] def 
test_slice_of_slice_f(self): - a = W_NDimArray(10 * 5 * 3, [10, 5, 3], MockDtype(), 'F') + a = W_NDimArray([10, 5, 3], MockDtype(), 'F') s = a.create_slice([Chunk(5, 0, 0, 1)]) assert s.start == 5 s2 = s.create_slice([Chunk(3, 0, 0, 1)]) @@ -96,7 +99,7 @@ assert s2.start == 1 * 15 + 2 * 3 def test_slice_of_slice_c(self): - a = W_NDimArray(10 * 5 * 3, [10, 5, 3], MockDtype(), order='C') + a = W_NDimArray([10, 5, 3], MockDtype(), order='C') s = a.create_slice([Chunk(5, 0, 0, 1)]) assert s.start == 15 * 5 s2 = s.create_slice([Chunk(3, 0, 0, 1)]) @@ -113,21 +116,21 @@ assert s2.start == 1 * 15 + 2 * 3 def test_negative_step_f(self): - a = W_NDimArray(10 * 5 * 3, [10, 5, 3], MockDtype(), 'F') + a = W_NDimArray([10, 5, 3], MockDtype(), 'F') s = a.create_slice([Chunk(9, -1, -2, 5)]) assert s.start == 9 assert s.strides == [-2, 10, 50] assert s.backstrides == [-8, 40, 100] def test_negative_step_c(self): - a = W_NDimArray(10 * 5 * 3, [10, 5, 3], MockDtype(), order='C') + a = W_NDimArray([10, 5, 3], MockDtype(), order='C') s = a.create_slice([Chunk(9, -1, -2, 5)]) assert s.start == 135 assert s.strides == [-30, 3, 1] assert s.backstrides == [-120, 12, 2] def test_index_of_single_item_f(self): - a = W_NDimArray(10 * 5 * 3, [10, 5, 3], MockDtype(), 'F') + a = W_NDimArray([10, 5, 3], MockDtype(), 'F') r = a._index_of_single_item(self.space, self.newtuple(1, 2, 2)) assert r == 1 + 2 * 10 + 2 * 50 s = a.create_slice([Chunk(0, 10, 1, 10), Chunk(2, 0, 0, 1)]) @@ -137,7 +140,7 @@ assert r == a._index_of_single_item(self.space, self.newtuple(1, 2, 1)) def test_index_of_single_item_c(self): - a = W_NDimArray(10 * 5 * 3, [10, 5, 3], MockDtype(), 'C') + a = W_NDimArray([10, 5, 3], MockDtype(), 'C') r = a._index_of_single_item(self.space, self.newtuple(1, 2, 2)) assert r == 1 * 3 * 5 + 2 * 3 + 2 s = a.create_slice([Chunk(0, 10, 1, 10), Chunk(2, 0, 0, 1)]) @@ -1468,6 +1471,7 @@ a = arange(12).reshape(3,4) b = a.T.flat b[6::2] = [-1, -2] + print a == [[0, 1, -1, 3], [4, 5, 6, -1], [8, 9, -2, 11]] assert (a == [[0, 1, -1, 3], [4, 5, 6, -1], [8, 9, -2, 11]]).all() b[0:2] = [[[100]]] assert(a[0,0] == 100) diff --git a/pypy/module/micronumpy/test/test_zjit.py b/pypy/module/micronumpy/test/test_zjit.py --- a/pypy/module/micronumpy/test/test_zjit.py +++ b/pypy/module/micronumpy/test/test_zjit.py @@ -503,7 +503,7 @@ dtype = float64_dtype else: dtype = int32_dtype - ar = W_NDimArray(n, [n], dtype=dtype) + ar = W_NDimArray([n], dtype=dtype) i = 0 while i < n: ar.get_concrete().setitem(i, int32_dtype.box(7)) diff --git a/pypy/module/micronumpy/types.py b/pypy/module/micronumpy/types.py --- a/pypy/module/micronumpy/types.py +++ b/pypy/module/micronumpy/types.py @@ -69,10 +69,9 @@ # exp = sin = cos = tan = arcsin = arccos = arctan = arcsinh = \ # arctanh = _unimplemented_ufunc - def malloc(self, length): + def malloc(self, size): # XXX find out why test_zjit explodes with tracking of allocations - return lltype.malloc(VOID_STORAGE, - self.get_element_size() * length, + return lltype.malloc(VOID_STORAGE, size, zero=True, flavor="raw", track_allocation=False, add_memory_pressure=True) @@ -140,8 +139,8 @@ def fill(self, storage, width, box, start, stop, offset): value = self.unbox(box) - for i in xrange(start, stop): - self._write(storage, width, i, offset, value) + for i in xrange(start, stop, width): + self._write(storage, 1, i, offset, value) def runpack_str(self, s): return self.box(runpack(self.format_code, s)) @@ -667,7 +666,7 @@ items_w = space.fixedview(w_item) # XXX optimize it out one day, but for now we just allocate 
an # array - arr = W_NDimArray(1, [1], dtype) + arr = W_NDimArray([1], dtype) for i in range(len(items_w)): subdtype = dtype.fields[dtype.fieldnames[i]][1] ofs, itemtype = self.offsets_and_fields[i] From noreply at buildbot.pypy.org Fri Feb 17 15:17:39 2012 From: noreply at buildbot.pypy.org (fijal) Date: Fri, 17 Feb 2012 15:17:39 +0100 (CET) Subject: [pypy-commit] pypy numpy-record-dtypes: update record dtypes until they match Message-ID: <20120217141739.D8A848204C@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-record-dtypes Changeset: r52585:5fa9d10a397d Date: 2012-02-17 16:08 +0200 http://bitbucket.org/pypy/pypy/changeset/5fa9d10a397d/ Log: update record dtypes until they match diff --git a/pypy/module/micronumpy/interp_boxes.py b/pypy/module/micronumpy/interp_boxes.py --- a/pypy/module/micronumpy/interp_boxes.py +++ b/pypy/module/micronumpy/interp_boxes.py @@ -170,9 +170,9 @@ pass class W_VoidBox(W_FlexibleBox): - def __init__(self, arr, i): + def __init__(self, arr, ofs): self.arr = arr # we have to keep array alive - self.i = i + self.ofs = ofs def get_dtype(self, space): return self.arr.dtype @@ -184,8 +184,7 @@ except KeyError: raise OperationError(space.w_IndexError, space.wrap("Field %s does not exist" % item)) - width = self.arr.dtype.itemtype.get_element_size() - return dtype.itemtype.read(self.arr, width, self.i, ofs) + return dtype.itemtype.read(self.arr, 1, self.ofs, ofs) @unwrap_spec(item=str) def descr_setitem(self, space, item, w_value): @@ -194,8 +193,7 @@ except KeyError: raise OperationError(space.w_IndexError, space.wrap("Field %s does not exist" % item)) - width = self.arr.dtype.itemtype.get_element_size() - dtype.itemtype.store(self.arr, width, self.i, ofs, + dtype.itemtype.store(self.arr, 1, self.ofs, ofs, dtype.coerce(space, w_value)) class W_CharacterBox(W_FlexibleBox): diff --git a/pypy/module/micronumpy/types.py b/pypy/module/micronumpy/types.py --- a/pypy/module/micronumpy/types.py +++ b/pypy/module/micronumpy/types.py @@ -672,14 +672,13 @@ ofs, itemtype = self.offsets_and_fields[i] w_item = items_w[i] w_box = itemtype.coerce(space, subdtype, w_item) - width = itemtype.get_element_size() - itemtype.store(arr, width, 0, ofs, w_box) + itemtype.store(arr, 1, 0, ofs, w_box) return interp_boxes.W_VoidBox(arr, 0) @jit.unroll_safe - def store(self, arr, width, i, ofs, box): - for k in range(width): - arr.storage[k + i * width] = box.arr.storage[k + box.i * width] + def store(self, arr, _, i, ofs, box): + for k in range(self.get_element_size()): + arr.storage[k + i] = box.arr.storage[k + box.ofs] for tp in [Int32, Int64]: if tp.T == lltype.Signed: From noreply at buildbot.pypy.org Fri Feb 17 15:17:41 2012 From: noreply at buildbot.pypy.org (fijal) Date: Fri, 17 Feb 2012 15:17:41 +0100 (CET) Subject: [pypy-commit] pypy numpy-record-dtypes: make repr work Message-ID: <20120217141741.1117D8204C@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-record-dtypes Changeset: r52586:087544142212 Date: 2012-02-17 16:17 +0200 http://bitbucket.org/pypy/pypy/changeset/087544142212/ Log: make repr work diff --git a/pypy/module/micronumpy/types.py b/pypy/module/micronumpy/types.py --- a/pypy/module/micronumpy/types.py +++ b/pypy/module/micronumpy/types.py @@ -88,6 +88,9 @@ def box(self, value): return self.BoxType(rffi.cast(self.T, value)) + def str_format(self, box): + return self._str_format(self.unbox(box)) + def unbox(self, box): assert isinstance(box, self.BoxType) return box.value @@ -269,8 +272,7 @@ def to_builtin_type(self, space, 
w_item): return space.wrap(self.unbox(w_item)) - def str_format(self, box): - value = self.unbox(box) + def _str_format(self, value): return "True" if value else "False" def for_computation(self, v): @@ -301,8 +303,7 @@ def _coerce(self, space, w_item): return self._base_coerce(space, w_item) - def str_format(self, box): - value = self.unbox(box) + def _str_format(self, value): return str(self.for_computation(value)) def for_computation(self, v): @@ -473,9 +474,9 @@ def _coerce(self, space, w_item): return self.box(space.float_w(space.call_function(space.w_float, w_item))) - def str_format(self, box): - value = self.unbox(box) - return float2string(self.for_computation(value), "g", rfloat.DTSF_STR_PRECISION) + def _str_format(self, value): + return float2string(self.for_computation(value), "g", + rfloat.DTSF_STR_PRECISION) def for_computation(self, v): return float(v) @@ -680,6 +681,20 @@ for k in range(self.get_element_size()): arr.storage[k + i] = box.arr.storage[k + box.ofs] + @jit.unroll_safe + def str_format(self, box): + pieces = ["("] + first = True + for ofs, tp in self.offsets_and_fields: + if first: + first = False + else: + pieces.append(", ") + pieces.append(tp._str_format(tp._read(box.arr.storage, 1, box.ofs, + ofs))) + pieces.append(")") + return "".join(pieces) + for tp in [Int32, Int64]: if tp.T == lltype.Signed: IntP = tp From noreply at buildbot.pypy.org Fri Feb 17 16:07:16 2012 From: noreply at buildbot.pypy.org (mattip) Date: Fri, 17 Feb 2012 16:07:16 +0100 (CET) Subject: [pypy-commit] pypy numpypy-out: mark cause of failing test with xxx Message-ID: <20120217150716.8CD568204C@wyvern.cs.uni-duesseldorf.de> Author: mattip Branch: numpypy-out Changeset: r52587:e5f97ec08f2c Date: 2012-02-17 17:06 +0200 http://bitbucket.org/pypy/pypy/changeset/e5f97ec08f2c/ Log: mark cause of failing test with xxx diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -785,6 +785,8 @@ def create_sig(self): if self.forced_result is not None: return self.forced_result.create_sig() + if self.shape != self.values.shape: + xxx return signature.Call1(self.ufunc, self.name, self.calc_dtype, self.values.create_sig()) @@ -838,8 +840,9 @@ Call2.__init__(self, None, 'assign', shape, dtype, dtype, res, child) def create_sig(self): - return signature.ResultSignature(self.res_dtype, self.left.create_sig(), + sig = signature.ResultSignature(self.res_dtype, self.left.create_sig(), self.right.create_sig()) + return sig def done_if_true(dtype, val): return dtype.itemtype.bool(val) @@ -1088,7 +1091,7 @@ """ def setitem(self, item, value): self.invalidated() - self.dtype.setitem(self.storage, item, value) + self.dtype.setitem(self.storage, item, value.convert_to(self.dtype)) def setshape(self, space, new_shape): self.shape = new_shape diff --git a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py --- a/pypy/module/micronumpy/interp_ufuncs.py +++ b/pypy/module/micronumpy/interp_ufuncs.py @@ -259,12 +259,23 @@ else: out = arr return space.wrap(out) - w_res = Call1(self.func, self.name, w_obj.shape, calc_dtype, res_dtype, - w_obj, out) - w_obj.add_invalidates(w_res) if out: + #Test shape compatability + if not shape_agreement(space, w_obj.shape, out.shape): + raise operationerrfmt(space.w_ValueError, + 'output parameter shape mismatch, expecting [%s]' + + ' , got [%s]', + ",".join([str(x) for x in shape]), + ",".join([str(x) for x in 
out.shape]), + ) + w_res = Call1(self.func, self.name, out.shape, calc_dtype, + res_dtype, w_obj, out) #Force it immediately w_res.get_concrete() + else: + w_res = Call1(self.func, self.name, w_obj.shape, calc_dtype, + res_dtype, w_obj) + w_obj.add_invalidates(w_res) return w_res diff --git a/pypy/module/micronumpy/test/test_outarg.py b/pypy/module/micronumpy/test/test_outarg.py --- a/pypy/module/micronumpy/test/test_outarg.py +++ b/pypy/module/micronumpy/test/test_outarg.py @@ -32,7 +32,7 @@ assert(b == [False, True, True]).all() def test_ufunc_out(self): - from _numpypy import array, negative, zeros + from _numpypy import array, negative, zeros, sin a = array([[1, 2], [3, 4]]) c = zeros((2,2,2)) b = negative(a + a, out=c[1]) @@ -46,14 +46,16 @@ b = negative(3, out=c) assert b.dtype.kind == c.dtype.kind assert b.shape == c.shape + a = array([1, 2]) + b = sin(a, out=c) + assert(c == [[-1, -2], [-1, -2]]).all() + b = sin(a, out=c+c) + assert (c == b).all() #Test shape agreement a=zeros((3,4)) b=zeros((3,5)) raises(ValueError, 'negative(a, out=b)') - raises(ValueError, 'negative(a, out=b)') - - def test_ufunc_cast(self): from _numpypy import array, negative From noreply at buildbot.pypy.org Fri Feb 17 16:11:16 2012 From: noreply at buildbot.pypy.org (fijal) Date: Fri, 17 Feb 2012 16:11:16 +0100 (CET) Subject: [pypy-commit] pypy numpy-record-dtypes: remove silliness of str_format, make it back rpython. fix test Message-ID: <20120217151116.CE0448204C@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-record-dtypes Changeset: r52588:faae60a012bd Date: 2012-02-17 17:10 +0200 http://bitbucket.org/pypy/pypy/changeset/faae60a012bd/ Log: remove silliness of str_format, make it back rpython. fix test diff --git a/pypy/module/micronumpy/test/test_iter.py b/pypy/module/micronumpy/test/test_iter.py --- a/pypy/module/micronumpy/test/test_iter.py +++ b/pypy/module/micronumpy/test/test_iter.py @@ -49,17 +49,17 @@ backstrides = [x * (y - 1) for x,y in zip(strides, shape)] assert backstrides == [10, 4] i = ViewIterator(start, strides, backstrides, shape) - i = i.next_skip_ofs(2,2) - i = i.next_skip_ofs(2,2) - i = i.next_skip_ofs(2,2) + i = i.next_skip_x(2,2) + i = i.next_skip_x(2,2) + i = i.next_skip_x(2,2) assert i.offset == 6 assert not i.done() assert i.indices == [1,1] #And for some big skips - i = i.next_skip_ofs(2,5) + i = i.next_skip_x(2,5) assert i.offset == 11 assert i.indices == [2,1] - i = i.next_skip_ofs(2,5) + i = i.next_skip_x(2,5) # Note: the offset does not overflow but recycles, # this is good for broadcast assert i.offset == 1 @@ -72,17 +72,17 @@ backstrides = [x * (y - 1) for x,y in zip(strides, shape)] assert backstrides == [2, 12] i = ViewIterator(start, strides, backstrides, shape) - i = i.next_skip_ofs(2,2) - i = i.next_skip_ofs(2,2) - i = i.next_skip_ofs(2,2) + i = i.next_skip_x(2,2) + i = i.next_skip_x(2,2) + i = i.next_skip_x(2,2) assert i.offset == 4 assert i.indices == [1,1] assert not i.done() - i = i.next_skip_ofs(2,5) + i = i.next_skip_x(2,5) assert i.offset == 5 assert i.indices == [2,1] assert not i.done() - i = i.next_skip_ofs(2,5) + i = i.next_skip_x(2,5) assert i.indices == [0,1] assert i.offset == 3 assert i.done() diff --git a/pypy/module/micronumpy/types.py b/pypy/module/micronumpy/types.py --- a/pypy/module/micronumpy/types.py +++ b/pypy/module/micronumpy/types.py @@ -88,9 +88,6 @@ def box(self, value): return self.BoxType(rffi.cast(self.T, value)) - def str_format(self, box): - return self._str_format(self.unbox(box)) - def unbox(self, box): assert 
isinstance(box, self.BoxType) return box.value @@ -272,8 +269,8 @@ def to_builtin_type(self, space, w_item): return space.wrap(self.unbox(w_item)) - def _str_format(self, value): - return "True" if value else "False" + def str_format(self, box): + return "True" if self.unbox(box) else "False" def for_computation(self, v): return int(v) @@ -303,8 +300,8 @@ def _coerce(self, space, w_item): return self._base_coerce(space, w_item) - def _str_format(self, value): - return str(self.for_computation(value)) + def str_format(self, box): + return str(self.for_computation(self.unbox(box))) def for_computation(self, v): return widen(v) @@ -474,8 +471,8 @@ def _coerce(self, space, w_item): return self.box(space.float_w(space.call_function(space.w_float, w_item))) - def _str_format(self, value): - return float2string(self.for_computation(value), "g", + def str_format(self, box): + return float2string(self.for_computation(self.unbox(box)), "g", rfloat.DTSF_STR_PRECISION) def for_computation(self, v): @@ -690,8 +687,7 @@ first = False else: pieces.append(", ") - pieces.append(tp._str_format(tp._read(box.arr.storage, 1, box.ofs, - ofs))) + pieces.append(tp.str_format(tp.read(box.arr, 1, box.ofs, ofs))) pieces.append(")") return "".join(pieces) From noreply at buildbot.pypy.org Fri Feb 17 16:29:31 2012 From: noreply at buildbot.pypy.org (fijal) Date: Fri, 17 Feb 2012 16:29:31 +0100 (CET) Subject: [pypy-commit] pypy backend-vector-ops: add a working integration test (boring) Message-ID: <20120217152931.25E4B8204C@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: backend-vector-ops Changeset: r52589:2ff45be26c9c Date: 2012-02-17 17:29 +0200 http://bitbucket.org/pypy/pypy/changeset/2ff45be26c9c/ Log: add a working integration test (boring) diff --git a/pypy/jit/backend/x86/test/test_vectorize.py b/pypy/jit/backend/x86/test/test_vectorize.py new file mode 100644 --- /dev/null +++ b/pypy/jit/backend/x86/test/test_vectorize.py @@ -0,0 +1,41 @@ + +from pypy.rpython.lltypesystem import lltype, rffi +from pypy.jit.backend.x86.test.test_basic import Jit386Mixin +from pypy.rlib import jit + +class TestVectorize(Jit386Mixin): + def test_vectorize(self): + TP = rffi.CArray(lltype.Float) + + driver = jit.JitDriver(greens = [], reds = ['a', 'i', 'b', 'size']) + + def initialize(arr, size): + for i in range(size): + arr[i] = float(i) + + def sum(arr, size): + s = 0 + for i in range(size): + s += arr[i] + return s + + def f(size): + a = lltype.malloc(TP, size, flavor='raw') + b = lltype.malloc(TP, size, flavor='raw') + initialize(a, size) + initialize(b, size) + i = 0 + while i < size: + driver.jit_merge_point(a=a, i=i, size=size, b=b) + jit.assert_aligned(a, i) + jit.assert_aligned(b, i) + b[i] = a[i] + a[i] + i += 1 + b[i] = a[i] + a[i] + i += 1 + r = sum(b, size) + lltype.free(a, flavor='raw') + lltype.free(b, flavor='raw') + return r + + assert self.meta_interp(f, [20]) == f(20) From noreply at buildbot.pypy.org Fri Feb 17 17:02:41 2012 From: noreply at buildbot.pypy.org (fijal) Date: Fri, 17 Feb 2012 17:02:41 +0100 (CET) Subject: [pypy-commit] pypy backend-vector-ops: good finally a failing test Message-ID: <20120217160241.5EB508204C@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: backend-vector-ops Changeset: r52590:011140751305 Date: 2012-02-17 18:02 +0200 http://bitbucket.org/pypy/pypy/changeset/011140751305/ Log: good finally a failing test diff --git a/pypy/jit/backend/x86/test/test_vectorize.py b/pypy/jit/backend/x86/test/test_vectorize.py --- 
a/pypy/jit/backend/x86/test/test_vectorize.py +++ b/pypy/jit/backend/x86/test/test_vectorize.py @@ -1,7 +1,7 @@ from pypy.rpython.lltypesystem import lltype, rffi from pypy.jit.backend.x86.test.test_basic import Jit386Mixin -from pypy.rlib import jit +from pypy.rlib import jit, libffi, clibffi class TestVectorize(Jit386Mixin): def test_vectorize(self): @@ -39,3 +39,49 @@ return r assert self.meta_interp(f, [20]) == f(20) + + def test_vector_ops_libffi(self): + TP = rffi.CArray(lltype.Float) + elem_size = rffi.sizeof(lltype.Float) + ftype = clibffi.cast_type_to_ffitype(lltype.Float) + + driver = jit.JitDriver(greens = [], reds = ['a', 'i', 'b', 'size']) + + def read_item(arr, item): + return libffi.array_getitem(ftype, elem_size, arr, item, 0) + + def store_item(arr, item, v): + libffi.array_setitem(ftype, elem_size, arr, item, 0, v) + + def initialize(arr, size): + for i in range(size): + arr[i] = float(i) + + def sum(arr, size): + s = 0 + for i in range(size): + s += arr[i] + return s + + def f(size): + a = lltype.malloc(TP, size, flavor='raw') + b = lltype.malloc(TP, size, flavor='raw') + initialize(a, size) + initialize(b, size) + i = 0 + while i < size: + driver.jit_merge_point(a=a, i=i, size=size, b=b) + jit.assert_aligned(a, i) + jit.assert_aligned(b, i) + store_item(b, i, read_item(a, i) + read_item(a, i)) + i += 1 + store_item(b, i, read_item(a, i) + read_item(a, i)) + i += 1 + r = sum(b, size) + lltype.free(a, flavor='raw') + lltype.free(b, flavor='raw') + return r + + res = f(20) + assert self.meta_interp(f, [20]) == res + From noreply at buildbot.pypy.org Fri Feb 17 18:07:23 2012 From: noreply at buildbot.pypy.org (fijal) Date: Fri, 17 Feb 2012 18:07:23 +0100 (CET) Subject: [pypy-commit] pypy backend-vector-ops: Shave a giant yak adding a flavor to malloc, so C level calls posix_memalign Message-ID: <20120217170723.8B7A48204C@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: backend-vector-ops Changeset: r52591:de1abef70971 Date: 2012-02-17 19:06 +0200 http://bitbucket.org/pypy/pypy/changeset/de1abef70971/ Log: Shave a giant yak adding a flavor to malloc, so C level calls posix_memalign diff --git a/pypy/rpython/lltypesystem/lloperation.py b/pypy/rpython/lltypesystem/lloperation.py --- a/pypy/rpython/lltypesystem/lloperation.py +++ b/pypy/rpython/lltypesystem/lloperation.py @@ -391,6 +391,7 @@ 'boehm_register_finalizer': LLOp(), 'boehm_disappearing_link': LLOp(), 'raw_malloc': LLOp(), + 'raw_malloc_align': LLOp(), 'raw_malloc_usage': LLOp(sideeffects=False), 'raw_free': LLOp(), 'raw_memclear': LLOp(), diff --git a/pypy/rpython/memory/gctransform/transform.py b/pypy/rpython/memory/gctransform/transform.py --- a/pypy/rpython/memory/gctransform/transform.py +++ b/pypy/rpython/memory/gctransform/transform.py @@ -483,6 +483,15 @@ return result mh.ll_malloc_varsize_no_length_zero = _ll_malloc_varsize_no_length_zero + def _ll_malloc_varsize_zero_align(length, size, itemsize, align): + tot_size = _ll_compute_size(length, size, itemsize) + result = llop.raw_malloc_align(llmemory.Address, tot_size, align) + if not result: + raise MemoryError() + llmemory.raw_memclear(result, tot_size) + return result + mh.ll_malloc_varsize_zero_align = _ll_malloc_varsize_zero_align + return mh class GCTransformer(BaseGCTransformer): @@ -496,6 +505,7 @@ ll_raw_malloc_varsize_no_length = mh.ll_malloc_varsize_no_length ll_raw_malloc_varsize = mh.ll_malloc_varsize ll_raw_malloc_varsize_no_length_zero = mh.ll_malloc_varsize_no_length_zero + ll_raw_malloc_varsize_zero_align = 
mh.ll_malloc_varsize_zero_align stack_mh = mallocHelpers() stack_mh.allocate = lambda size: llop.stack_malloc(llmemory.Address, size) @@ -513,6 +523,9 @@ self.stack_malloc_fixedsize_ptr = self.inittime_helper( ll_stack_malloc_fixedsize, [lltype.Signed], llmemory.Address) + self.raw_malloc_varsize_align_zero_ptr = self.inittime_helper( + ll_raw_malloc_varsize_zero_align, [lltype.Signed] * 4, + llmemory.Address) def gct_malloc(self, hop, add_flags=None): TYPE = hop.spaceop.result.concretetype.TO @@ -601,16 +614,28 @@ [self.raw_malloc_memory_pressure_varsize_ptr, v_length, c_item_size]) if c_offset_to_length is None: - if flags.get('zero'): - fnptr = self.raw_malloc_varsize_no_length_zero_ptr + mpa = flags.get('memory_position_alignment') + if mpa is not None: + assert flags.get('zero') + fnptr = self.raw_malloc_varsize_align_zero_ptr + c_align = rmodel.inputconst(lltype.Signed, mpa) + v_raw = hop.genop("direct_call", [fnptr, v_length, c_const_size, + c_item_size, c_align], + resulttype=llmemory.Address) else: - fnptr = self.raw_malloc_varsize_no_length_ptr - v_raw = hop.genop("direct_call", - [fnptr, v_length, c_const_size, c_item_size], - resulttype=llmemory.Address) + if flags.get('zero'): + fnptr = self.raw_malloc_varsize_no_length_zero_ptr + else: + fnptr = self.raw_malloc_varsize_no_length_ptr + v_raw = hop.genop("direct_call", + [fnptr, v_length, c_const_size, + c_item_size], + resulttype=llmemory.Address) else: if flags.get('zero'): raise NotImplementedError("raw zero varsize malloc with length field") + if flags.get('memory_position_alignment'): + raise NotImplementedError('raw varsize alloc with length and alignment') v_raw = hop.genop("direct_call", [self.raw_malloc_varsize_ptr, v_length, c_const_size, c_item_size, c_offset_to_length], diff --git a/pypy/rpython/rbuiltin.py b/pypy/rpython/rbuiltin.py --- a/pypy/rpython/rbuiltin.py +++ b/pypy/rpython/rbuiltin.py @@ -362,6 +362,10 @@ flags['track_allocation'] = v_track_allocation.value if i_add_memory_pressure is not None: flags['add_memory_pressure'] = v_add_memory_pressure.value + mpa = hop.r_result.lowleveltype.TO._hints.get('memory_position_alignment', + None) + if mpa is not None: + flags['memory_position_alignment'] = mpa vlist.append(hop.inputconst(lltype.Void, flags)) assert 1 <= hop.nb_args <= 2 diff --git a/pypy/translator/c/src/mem.h b/pypy/translator/c/src/mem.h --- a/pypy/translator/c/src/mem.h +++ b/pypy/translator/c/src/mem.h @@ -110,6 +110,14 @@ } \ } +#define OP_RAW_MALLOC_ALIGN(size, align, r) { \ + posix_memalign(&r, align, size); \ + if (r != NULL) { \ + memset((void*)r, 0, size); \ + COUNT_MALLOC; \ + } \ + } + #endif #define OP_RAW_FREE(p, r) PyObject_Free(p); COUNT_FREE; diff --git a/pypy/translator/c/test/test_genc.py b/pypy/translator/c/test/test_genc.py --- a/pypy/translator/c/test/test_genc.py +++ b/pypy/translator/c/test/test_genc.py @@ -13,6 +13,7 @@ from pypy.translator.interactive import Translation from pypy.rlib.entrypoint import entrypoint from pypy.tool.nullpath import NullPyPathLocal +from pypy.rpython.lltypesystem import lltype def compile(fn, argtypes, view=False, gcpolicy="ref", backendopt=True, annotatorpolicy=None): @@ -462,11 +463,22 @@ assert ' BarStruct ' in t.driver.cbuilder.c_source_filename.read() free(foo, flavor="raw") +def test_malloc_aligned(): + T = lltype.Array(lltype.Signed, hints={'nolength': True, + 'memory_position_alignment': 16}) + + def f(): + a = lltype.malloc(T, 16, flavor='raw', zero=True) + lltype.free(a, flavor='raw') + + t = Translation(f, [], backend='c') + 
t.annotate() + t.compile_c() + assert 'OP_RAW_MALLOC_ALIGN' in t.driver.cbuilder.c_source_filename.read() + def test_recursive_llhelper(): from pypy.rpython.annlowlevel import llhelper - from pypy.rpython.lltypesystem import lltype from pypy.rlib.objectmodel import specialize - from pypy.rlib.nonconst import NonConstant FT = lltype.ForwardReference() FTPTR = lltype.Ptr(FT) STRUCT = lltype.Struct("foo", ("bar", FTPTR)) @@ -514,7 +526,6 @@ assert fn(True) def test_inhibit_tail_call(): - from pypy.rpython.lltypesystem import lltype def foobar_fn(n): return 42 foobar_fn._dont_inline_ = True diff --git a/pypy/translator/goal/targetvector.py b/pypy/translator/goal/targetvector.py new file mode 100644 --- /dev/null +++ b/pypy/translator/goal/targetvector.py @@ -0,0 +1,51 @@ + +from pypy.rpython.lltypesystem import lltype +from pypy.rlib import jit + +TP = lltype.Array(lltype.Float, hints={'nolength': True, + 'memory_position_alignment': 16}) + +driver = jit.JitDriver(greens = [], reds = ['a', 'i', 'b', 'size']) + +def initialize(arr, size): + for i in range(size): + arr[i] = float(i) + +def sum(arr, size): + s = 0 + for i in range(size): + s += arr[i] + return s + +def main(n, size): + a = lltype.malloc(TP, size, flavor='raw', zero=True) + b = lltype.malloc(TP, size, flavor='raw', zero=True) + initialize(a, size) + initialize(b, size) + for i in range(n): + f(a, b, size) + lltype.free(a, flavor='raw') + lltype.free(b, flavor='raw') + +def f(a, b, size): + i = 0 + while i < size: + driver.jit_merge_point(a=a, i=i, size=size, b=b) + jit.assert_aligned(a, i) + jit.assert_aligned(b, i) + b[i] = a[i] + a[i] + i += 1 + b[i] = a[i] + a[i] + i += 1 + +def entry_point(argv): + main(int(argv[1]), int(argv[2])) + return 0 + +def jitpolicy(driver): + return None + +# _____ Define and setup target ___ + +def target(*args): + return entry_point, None From noreply at buildbot.pypy.org Fri Feb 17 18:20:32 2012 From: noreply at buildbot.pypy.org (bivab) Date: Fri, 17 Feb 2012 18:20:32 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: (edelsohn, bivab): create a function descriptor for malloc_slowpath on PPC64 Message-ID: <20120217172032.6EE0B8204C@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: ppc-jit-backend Changeset: r52592:d24f788807b2 Date: 2012-02-17 09:19 -0800 http://bitbucket.org/pypy/pypy/changeset/d24f788807b2/ Log: (edelsohn, bivab): create a function descriptor for malloc_slowpath on PPC64 diff --git a/pypy/jit/backend/ppc/ppc_assembler.py b/pypy/jit/backend/ppc/ppc_assembler.py --- a/pypy/jit/backend/ppc/ppc_assembler.py +++ b/pypy/jit/backend/ppc/ppc_assembler.py @@ -297,6 +297,10 @@ def _build_malloc_slowpath(self): mc = PPCBuilder() + if IS_PPC_64: + for _ in range(6): + mc.write32(0) + with Saved_Volatiles(mc): # Values to compute size stored in r3 and r4 mc.subf(r.r3.value, r.r3.value, r.r4.value) @@ -315,6 +319,8 @@ pmc.overwrite() mc.b_abs(self.propagate_exception_path) rawstart = mc.materialize(self.cpu.asmmemmgr, []) + if IS_PPC_64: + self.write_64_bit_func_descr(rawstart, rawstart+3*WORD) self.malloc_slowpath = rawstart def _build_propagate_exception_path(self): From noreply at buildbot.pypy.org Fri Feb 17 20:57:48 2012 From: noreply at buildbot.pypy.org (bivab) Date: Fri, 17 Feb 2012 20:57:48 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: (edelsohn, bivab) Fix offsets where registers are stored around malloc calls and actually save them Message-ID: <20120217195748.0A97E8204C@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: ppc-jit-backend 
Changeset: r52593:a702c4ce1008 Date: 2012-02-17 11:56 -0800 http://bitbucket.org/pypy/pypy/changeset/a702c4ce1008/ Log: (edelsohn, bivab) Fix offsets where registers are stored around malloc calls and actually save them diff --git a/pypy/jit/backend/ppc/ppc_assembler.py b/pypy/jit/backend/ppc/ppc_assembler.py --- a/pypy/jit/backend/ppc/ppc_assembler.py +++ b/pypy/jit/backend/ppc/ppc_assembler.py @@ -3,7 +3,7 @@ from pypy.jit.backend.ppc.ppc_form import PPCForm as Form from pypy.jit.backend.ppc.ppc_field import ppc_fields from pypy.jit.backend.ppc.regalloc import (TempInt, PPCFrameManager, - Regalloc) + Regalloc, PPCRegisterManager) from pypy.jit.backend.ppc.assembler import Assembler from pypy.jit.backend.ppc.opassembler import OpAssembler from pypy.jit.backend.ppc.symbol_lookup import lookup @@ -305,7 +305,17 @@ # Values to compute size stored in r3 and r4 mc.subf(r.r3.value, r.r3.value, r.r4.value) addr = self.cpu.gc_ll_descr.get_malloc_slowpath_addr() + for reg, ofs in PPCRegisterManager.REGLOC_TO_COPY_AREA_OFS.items(): + if IS_PPC_32: + mc.stw(reg.value, r.SPP.value, ofs) + else: + mc.std(reg.value, r.SPP.value, ofs) mc.call(addr) + for reg, ofs in PPCRegisterManager.REGLOC_TO_COPY_AREA_OFS.items(): + if IS_PPC_32: + mc.lwz(reg.value, r.SPP.value, ofs) + else: + mc.ld(reg.value, r.SPP.value, ofs) mc.cmp_op(0, r.r3.value, 0, imm=True) jmp_pos = mc.currpos() diff --git a/pypy/jit/backend/ppc/regalloc.py b/pypy/jit/backend/ppc/regalloc.py --- a/pypy/jit/backend/ppc/regalloc.py +++ b/pypy/jit/backend/ppc/regalloc.py @@ -50,37 +50,33 @@ save_around_call_regs = r.VOLATILES REGLOC_TO_COPY_AREA_OFS = { - r.r0: MY_COPY_OF_REGS + 0 * WORD, - r.r2: MY_COPY_OF_REGS + 1 * WORD, - r.r3: MY_COPY_OF_REGS + 2 * WORD, - r.r4: MY_COPY_OF_REGS + 3 * WORD, - r.r5: MY_COPY_OF_REGS + 4 * WORD, - r.r6: MY_COPY_OF_REGS + 5 * WORD, - r.r7: MY_COPY_OF_REGS + 6 * WORD, - r.r8: MY_COPY_OF_REGS + 7 * WORD, - r.r9: MY_COPY_OF_REGS + 8 * WORD, - r.r10: MY_COPY_OF_REGS + 9 * WORD, - r.r11: MY_COPY_OF_REGS + 10 * WORD, - r.r12: MY_COPY_OF_REGS + 11 * WORD, - r.r13: MY_COPY_OF_REGS + 12 * WORD, - r.r14: MY_COPY_OF_REGS + 13 * WORD, - r.r15: MY_COPY_OF_REGS + 14 * WORD, - r.r16: MY_COPY_OF_REGS + 15 * WORD, - r.r17: MY_COPY_OF_REGS + 16 * WORD, - r.r18: MY_COPY_OF_REGS + 17 * WORD, - r.r19: MY_COPY_OF_REGS + 18 * WORD, - r.r20: MY_COPY_OF_REGS + 19 * WORD, - r.r21: MY_COPY_OF_REGS + 20 * WORD, - r.r22: MY_COPY_OF_REGS + 21 * WORD, - r.r23: MY_COPY_OF_REGS + 22 * WORD, - r.r24: MY_COPY_OF_REGS + 23 * WORD, - r.r25: MY_COPY_OF_REGS + 24 * WORD, - r.r26: MY_COPY_OF_REGS + 25 * WORD, - r.r27: MY_COPY_OF_REGS + 26 * WORD, - r.r28: MY_COPY_OF_REGS + 27 * WORD, - r.r29: MY_COPY_OF_REGS + 28 * WORD, - r.r30: MY_COPY_OF_REGS + 29 * WORD, - r.r31: MY_COPY_OF_REGS + 30 * WORD, + r.r3: MY_COPY_OF_REGS + 0 * WORD, + r.r4: MY_COPY_OF_REGS + 1 * WORD, + r.r5: MY_COPY_OF_REGS + 2 * WORD, + r.r6: MY_COPY_OF_REGS + 3 * WORD, + r.r7: MY_COPY_OF_REGS + 4 * WORD, + r.r8: MY_COPY_OF_REGS + 5 * WORD, + r.r9: MY_COPY_OF_REGS + 6 * WORD, + r.r10: MY_COPY_OF_REGS + 7 * WORD, + r.r11: MY_COPY_OF_REGS + 8 * WORD, + r.r12: MY_COPY_OF_REGS + 9 * WORD, + r.r14: MY_COPY_OF_REGS + 10 * WORD, + r.r15: MY_COPY_OF_REGS + 11 * WORD, + r.r16: MY_COPY_OF_REGS + 12 * WORD, + r.r17: MY_COPY_OF_REGS + 13 * WORD, + r.r18: MY_COPY_OF_REGS + 14 * WORD, + r.r19: MY_COPY_OF_REGS + 15 * WORD, + r.r20: MY_COPY_OF_REGS + 16 * WORD, + r.r21: MY_COPY_OF_REGS + 17 * WORD, + r.r22: MY_COPY_OF_REGS + 18 * WORD, + r.r23: MY_COPY_OF_REGS + 19 * WORD, + r.r24: MY_COPY_OF_REGS + 
20 * WORD, + r.r25: MY_COPY_OF_REGS + 21 * WORD, + r.r26: MY_COPY_OF_REGS + 22 * WORD, + r.r27: MY_COPY_OF_REGS + 23 * WORD, + r.r28: MY_COPY_OF_REGS + 24 * WORD, + r.r29: MY_COPY_OF_REGS + 25 * WORD, + r.r30: MY_COPY_OF_REGS + 26 * WORD, } def __init__(self, longevity, frame_manager=None, assembler=None): From noreply at buildbot.pypy.org Fri Feb 17 23:40:06 2012 From: noreply at buildbot.pypy.org (wlav) Date: Fri, 17 Feb 2012 23:40:06 +0100 (CET) Subject: [pypy-commit] pypy reflex-support: added overload tests Message-ID: <20120217224006.A7F708204C@wyvern.cs.uni-duesseldorf.de> Author: Wim Lavrijsen Branch: reflex-support Changeset: r52594:83cc067a5dfc Date: 2012-02-16 13:57 -0800 http://bitbucket.org/pypy/pypy/changeset/83cc067a5dfc/ Log: added overload tests diff --git a/pypy/module/cppyy/test/Makefile b/pypy/module/cppyy/test/Makefile --- a/pypy/module/cppyy/test/Makefile +++ b/pypy/module/cppyy/test/Makefile @@ -1,4 +1,4 @@ -dicts = example01Dict.so datatypesDict.so advancedcppDict.so stltypesDict.so operatorsDict.so fragileDict.so std_streamsDict.so +dicts = example01Dict.so datatypesDict.so advancedcppDict.so overloadsDict.so stltypesDict.so operatorsDict.so fragileDict.so std_streamsDict.so all : $(dicts) ROOTSYS := ${ROOTSYS} diff --git a/pypy/module/cppyy/test/overloads.cxx b/pypy/module/cppyy/test/overloads.cxx new file mode 100644 --- /dev/null +++ b/pypy/module/cppyy/test/overloads.cxx @@ -0,0 +1,49 @@ +#include "overloads.h" + + +a_overload::a_overload() { i1 = 42; i2 = -1; } + +ns_a_overload::a_overload::a_overload() { i1 = 88; i2 = -34; } +int ns_a_overload::b_overload::f(const std::vector* v) { return (*v)[0]; } + +ns_b_overload::a_overload::a_overload() { i1 = -33; i2 = 89; } + +b_overload::b_overload() { i1 = -2; i2 = 13; } + +c_overload::c_overload() {} +int c_overload::get_int(a_overload* a) { return a->i1; } +int c_overload::get_int(ns_a_overload::a_overload* a) { return a->i1; } +int c_overload::get_int(ns_b_overload::a_overload* a) { return a->i1; } +int c_overload::get_int(short* p) { return *p; } +int c_overload::get_int(b_overload* b) { return b->i2; } +int c_overload::get_int(int* p) { return *p; } + +d_overload::d_overload() {} +int d_overload::get_int(int* p) { return *p; } +int d_overload::get_int(b_overload* b) { return b->i2; } +int d_overload::get_int(short* p) { return *p; } +int d_overload::get_int(ns_b_overload::a_overload* a) { return a->i1; } +int d_overload::get_int(ns_a_overload::a_overload* a) { return a->i1; } +int d_overload::get_int(a_overload* a) { return a->i1; } + + +more_overloads::more_overloads() {} +std::string more_overloads::call(const aa_ol&) { return "aa_ol"; } +std::string more_overloads::call(const bb_ol&, void* n) { n = 0; return "bb_ol"; } +std::string more_overloads::call(const cc_ol&) { return "cc_ol"; } +std::string more_overloads::call(const dd_ol&) { return "dd_ol"; } + +std::string more_overloads::call_unknown(const dd_ol&) { return "dd_ol"; } + +std::string more_overloads::call(double) { return "double"; } +std::string more_overloads::call(int) { return "int"; } +std::string more_overloads::call1(int) { return "int"; } +std::string more_overloads::call1(double) { return "double"; } + + +more_overloads2::more_overloads2() {} +std::string more_overloads2::call(const bb_ol&) { return "bb_olref"; } +std::string more_overloads2::call(const bb_ol*) { return "bb_olptr"; } + +std::string more_overloads2::call(const dd_ol*, int) { return "dd_olptr"; } +std::string more_overloads2::call(const dd_ol&, int) { return "dd_olref"; } diff 
--git a/pypy/module/cppyy/test/overloads.h b/pypy/module/cppyy/test/overloads.h new file mode 100644 --- /dev/null +++ b/pypy/module/cppyy/test/overloads.h @@ -0,0 +1,90 @@ +#include +#include + +class a_overload { +public: + a_overload(); + int i1, i2; +}; + +namespace ns_a_overload { + class a_overload { + public: + a_overload(); + int i1, i2; + }; + + class b_overload { + public: + int f(const std::vector* v); + }; +} + +namespace ns_b_overload { + class a_overload { + public: + a_overload(); + int i1, i2; + }; +} + +class b_overload { +public: + b_overload(); + int i1, i2; +}; + +class c_overload { +public: + c_overload(); + int get_int(a_overload* a); + int get_int(ns_a_overload::a_overload* a); + int get_int(ns_b_overload::a_overload* a); + int get_int(short* p); + int get_int(b_overload* b); + int get_int(int* p); +}; + +class d_overload { +public: + d_overload(); +// int get_int(void* p) { return *(int*)p; } + int get_int(int* p); + int get_int(b_overload* b); + int get_int(short* p); + int get_int(ns_b_overload::a_overload* a); + int get_int(ns_a_overload::a_overload* a); + int get_int(a_overload* a); +}; + + +class aa_ol {}; +class bb_ol; +class cc_ol {}; +class dd_ol; + +class more_overloads { +public: + more_overloads(); + std::string call(const aa_ol&); + std::string call(const bb_ol&, void* n=0); + std::string call(const cc_ol&); + std::string call(const dd_ol&); + + std::string call_unknown(const dd_ol&); + + std::string call(double); + std::string call(int); + std::string call1(int); + std::string call1(double); +}; + +class more_overloads2 { +public: + more_overloads2(); + std::string call(const bb_ol&); + std::string call(const bb_ol*); + + std::string call(const dd_ol*, int); + std::string call(const dd_ol&, int); +}; diff --git a/pypy/module/cppyy/test/overloads.xml b/pypy/module/cppyy/test/overloads.xml new file mode 100644 --- /dev/null +++ b/pypy/module/cppyy/test/overloads.xml @@ -0,0 +1,14 @@ + + + + + + + + + + + + + + diff --git a/pypy/module/cppyy/test/test_advancedcpp.py b/pypy/module/cppyy/test/test_advancedcpp.py --- a/pypy/module/cppyy/test/test_advancedcpp.py +++ b/pypy/module/cppyy/test/test_advancedcpp.py @@ -18,8 +18,8 @@ def setup_class(cls): cls.space = space env = os.environ - cls.w_test_dct = space.wrap(test_dct) - cls.w_datatypes = cls.space.appexec([], """(): + cls.w_test_dct = space.wrap(test_dct) + cls.w_advanced = cls.space.appexec([], """(): import cppyy return cppyy.load_reflection_info(%r)""" % (test_dct, )) diff --git a/pypy/module/cppyy/test/test_operators.py b/pypy/module/cppyy/test/test_operators.py --- a/pypy/module/cppyy/test/test_operators.py +++ b/pypy/module/cppyy/test/test_operators.py @@ -20,7 +20,7 @@ env = os.environ cls.w_N = space.wrap(5) # should be imported from the dictionary cls.w_test_dct = space.wrap(test_dct) - cls.w_datatypes = cls.space.appexec([], """(): + cls.w_operators = cls.space.appexec([], """(): import cppyy return cppyy.load_reflection_info(%r)""" % (test_dct, )) diff --git a/pypy/module/cppyy/test/test_overloads.py b/pypy/module/cppyy/test/test_overloads.py new file mode 100644 --- /dev/null +++ b/pypy/module/cppyy/test/test_overloads.py @@ -0,0 +1,136 @@ +import py, os, sys +from pypy.conftest import gettestobjspace + + +currpath = py.path.local(__file__).dirpath() +test_dct = str(currpath.join("overloadsDict.so")) + +space = gettestobjspace(usemodules=['cppyy']) + +def setup_module(mod): + if sys.platform == 'win32': + py.test.skip("win32 not supported so far") + err = os.system("cd '%s' && make 
overloadsDict.so" % currpath) + if err: + raise OSError("'make' failed (see stderr)") + +class AppTestOVERLOADS: + def setup_class(cls): + cls.space = space + env = os.environ + cls.w_test_dct = space.wrap(test_dct) + cls.w_overloads = cls.space.appexec([], """(): + import cppyy + return cppyy.load_reflection_info(%r)""" % (test_dct, )) + + def test01_class_based_overloads(self): + """Test functions overloaded on different C++ clases""" + + import cppyy + a_overload = cppyy.gbl.a_overload + b_overload = cppyy.gbl.b_overload + c_overload = cppyy.gbl.c_overload + d_overload = cppyy.gbl.d_overload + + ns_a_overload = cppyy.gbl.ns_a_overload + ns_b_overload = cppyy.gbl.ns_b_overload + + assert c_overload().get_int(a_overload()) == 42 + assert c_overload().get_int(b_overload()) == 13 + assert d_overload().get_int(a_overload()) == 42 + assert d_overload().get_int(b_overload()) == 13 + + assert c_overload().get_int(ns_a_overload.a_overload()) == 88 + assert c_overload().get_int(ns_b_overload.a_overload()) == -33 + + assert d_overload().get_int(ns_a_overload.a_overload()) == 88 + assert d_overload().get_int(ns_b_overload.a_overload()) == -33 + + def test02_class_based_overloads_explicit_resolution(self): + """Test explicitly resolved function overloads""" + + # TODO: write disp() or equivalent on methods for ol selection + + import cppyy + a_overload = cppyy.gbl.a_overload + b_overload = cppyy.gbl.b_overload + c_overload = cppyy.gbl.c_overload + d_overload = cppyy.gbl.d_overload + + ns_a_overload = cppyy.gbl.ns_a_overload + + c = c_overload() +# raises(TypeError, c.get_int.disp, 12) +# assert c.get_int.disp('a_overload* a')(a_overload()) == 42 +# assert c.get_int.disp('b_overload* b')(b_overload()) == 13 + +# assert c_overload().get_int.disp('a_overload* a')(a_overload()) == 42 +# assert c_overload.get_int.disp('b_overload* b')(c, b_overload()) == 13 + + d = d_overload() +# assert d.get_int.disp('a_overload* a')(a_overload()) == 42 +# assert d.get_int.disp('b_overload* b')(b_overload()) == 13 + + nb = ns_a_overload.b_overload() + raises(TypeError, nb.f, c_overload()) + + def test03_fragile_class_based_overloads(self): + """Test functions overloaded on void* and non-existing classes""" + + # TODO: make Reflex generate unknown classes ... + + import cppyy + more_overloads = cppyy.gbl.more_overloads + aa_ol = cppyy.gbl.aa_ol +# bb_ol = cppyy.gbl.bb_ol + cc_ol = cppyy.gbl.cc_ol +# dd_ol = cppyy.gbl.dd_ol + + assert more_overloads().call(aa_ol()).c_str() == "aa_ol" +# assert more_overloads().call(bb_ol()).c_str() == "dd_ol" # <- bb_ol has an unknown + void* + assert more_overloads().call(cc_ol()).c_str() == "cc_ol" +# assert more_overloads().call(dd_ol()).c_str() == "dd_ol" # <- dd_ol has an unknown + + def test04_fully_fragile_overloads(self): + """Test that unknown* is preferred over unknown&""" + + # TODO: make Reflex generate unknown classes ... 
+ return + + import cppyy + more_overloads2 = cppyy.gbl.more_overloads2 + bb_ol = cppyy.gbl.bb_ol + dd_ol = cppyy.gbl.dd_ol + + assert more_overloads2().call(bb_ol()) == "bb_olptr" + assert more_overloads2().call(dd_ol(), 1) == "dd_olptr" + + def test05_array_overloads(self): + """Test functions overloaded on different arrays""" + + # TODO: buffer to pointer interface + return + + import cppyy + c_overload = cppyy.gbl.c_overload + + from array import array + + ai = array('i', [525252]) + assert c_overload().get_int(ai) == 525252 + assert d_overload().get_int(ai) == 525252 + + ah = array('h', [25]) + assert c_overload().get_int(ah) == 25 + assert d_overload().get_int(ah) == 25 + + def test06_double_int_overloads(self): + """Test overloads on int/doubles""" + + import cppyy + more_overloads = cppyy.gbl.more_overloads + +# assert more_overloads().call(1).c_str() == "int" +# assert more_overloads().call(1.).c_str() == "double" + assert more_overloads().call1(1).c_str() == "int" + assert more_overloads().call1(1.).c_str() == "double" From noreply at buildbot.pypy.org Fri Feb 17 23:40:07 2012 From: noreply at buildbot.pypy.org (wlav) Date: Fri, 17 Feb 2012 23:40:07 +0100 (CET) Subject: [pypy-commit] pypy reflex-support: o) prepared some more tests for future dev Message-ID: <20120217224007.DD6908204C@wyvern.cs.uni-duesseldorf.de> Author: Wim Lavrijsen Branch: reflex-support Changeset: r52595:074e97fc7fc9 Date: 2012-02-17 14:39 -0800 http://bitbucket.org/pypy/pypy/changeset/074e97fc7fc9/ Log: o) prepared some more tests for future dev o) std::string returns as python str o) string comparisons diff --git a/pypy/module/cppyy/capi/__init__.py b/pypy/module/cppyy/capi/__init__.py --- a/pypy/module/cppyy/capi/__init__.py +++ b/pypy/module/cppyy/capi/__init__.py @@ -130,6 +130,11 @@ [C_TYPEHANDLE, rffi.INT, C_OBJECT, rffi.INT, rffi.VOIDP], rffi.DOUBLE, compilation_info=backend.eci) +c_call_s = rffi.llexternal( + "cppyy_call_s", + [C_TYPEHANDLE, rffi.INT, C_OBJECT, rffi.INT, rffi.VOIDP], rffi.CCHARP, + compilation_info=backend.eci) + c_get_methptr_getter = rffi.llexternal( "cppyy_get_methptr_getter", [C_TYPEHANDLE, rffi.INT], C_METHPTRGETTER_PTR, diff --git a/pypy/module/cppyy/executor.py b/pypy/module/cppyy/executor.py --- a/pypy/module/cppyy/executor.py +++ b/pypy/module/cppyy/executor.py @@ -269,6 +269,18 @@ raise FastCallNotPossible +class StdStringExecutor(InstancePtrExecutor): + _immutable_ = True + + def execute(self, space, w_returntype, func, cppthis, num_args, args): + charp_result = capi.c_call_s(func.cpptype.handle, func.method_index, cppthis, num_args, args) + return space.wrap(capi.charp2str_free(charp_result)) + + def execute_libffi(self, space, w_returntype, libffifunc, argchain): + from pypy.module.cppyy.interp_cppyy import FastCallNotPossible + raise FastCallNotPossible + + _executors = {} def get_executor(space, name): # Matching of 'name' to an executor factory goes through up to four levels: @@ -355,3 +367,6 @@ _executors["float*"] = FloatPtrExecutor _executors["double"] = DoubleExecutor _executors["double*"] = DoublePtrExecutor + +# special cases +_executors["std::basic_string"] = StdStringExecutor diff --git a/pypy/module/cppyy/include/capi.h b/pypy/module/cppyy/include/capi.h --- a/pypy/module/cppyy/include/capi.h +++ b/pypy/module/cppyy/include/capi.h @@ -30,6 +30,8 @@ double cppyy_call_f(cppyy_typehandle_t handle, int method_index, cppyy_object_t self, int numargs, void* args); double cppyy_call_d(cppyy_typehandle_t handle, int method_index, cppyy_object_t self, int numargs, 
void* args); + char* cppyy_call_s(cppyy_typehandle_t handle, int method_index, cppyy_object_t self, int numargs, void* args); + cppyy_methptrgetter_t cppyy_get_methptr_getter(cppyy_typehandle_t handle, int method_index); /* handling of function argument buffer */ diff --git a/pypy/module/cppyy/pythonify.py b/pypy/module/cppyy/pythonify.py --- a/pypy/module/cppyy/pythonify.py +++ b/pypy/module/cppyy/pythonify.py @@ -250,6 +250,15 @@ raise StopIteration pyclass.__iter__ = __iter__ + # string comparisons + if pyclass.__name__ == 'std::string': + def eq(self, other): + if type(other) == pyclass: + return self.c_str() == other.c_str() + else: + return self.c_str() == other + pyclass.__eq__ = eq + _loaded_dictionaries = {} def load_reflection_info(name): diff --git a/pypy/module/cppyy/src/reflexcwrapper.cxx b/pypy/module/cppyy/src/reflexcwrapper.cxx --- a/pypy/module/cppyy/src/reflexcwrapper.cxx +++ b/pypy/module/cppyy/src/reflexcwrapper.cxx @@ -155,6 +155,20 @@ return cppyy_call_T(handle, method_index, self, numargs, args); } +char* cppyy_call_s(cppyy_typehandle_t handle, int method_index, + cppyy_object_t self, int numargs, void* args) { + std::string result(""); + std::vector arguments = build_args(numargs, args); + Reflex::Scope s = scope_from_handle(handle); + Reflex::Member m = s.FunctionMemberAt(method_index); + if (self) { + Reflex::Object o((Reflex::Type)s, self); + m.Invoke(o, result, arguments); + } else { + m.Invoke(result, arguments); + } + return cppstring_to_cstring(result); +} static cppyy_methptrgetter_t get_methptr_getter(Reflex::Member m) { Reflex::PropertyList plist = m.Properties(); diff --git a/pypy/module/cppyy/test/stltypes.cxx b/pypy/module/cppyy/test/stltypes.cxx --- a/pypy/module/cppyy/test/stltypes.cxx +++ b/pypy/module/cppyy/test/stltypes.cxx @@ -11,6 +11,16 @@ const std::STLTYPE< TTYPE >::iterator&); \ } + //- explicit instantiations of used types STLTYPES_EXPLICIT_INSTANTIATION(vector, int) STLTYPES_EXPLICIT_INSTANTIATION(vector, just_a_class) + +//- class with lots of std::string handling +stringy_class::stringy_class(const char* s) : m_string(s) {} + +std::string stringy_class::get_string1() { return m_string; } +void stringy_class::get_string2(std::string& s) { s = m_string; } + +void stringy_class::set_string1(const std::string& s) { m_string = s; } +void stringy_class::set_string2(std::string s) { m_string = s; } diff --git a/pypy/module/cppyy/test/stltypes.h b/pypy/module/cppyy/test/stltypes.h --- a/pypy/module/cppyy/test/stltypes.h +++ b/pypy/module/cppyy/test/stltypes.h @@ -25,3 +25,18 @@ //- explicit instantiations of used types STLTYPES_EXPLICIT_INSTANTIATION_DECL(vector, int) STLTYPES_EXPLICIT_INSTANTIATION_DECL(vector, just_a_class) + + +//- class with lots of std::string handling +class stringy_class { +public: + stringy_class(const char* s); + + std::string get_string1(); + void get_string2(std::string& s); + + void set_string1(const std::string& s); + void set_string2(std::string s); + + std::string m_string; +}; diff --git a/pypy/module/cppyy/test/stltypes.xml b/pypy/module/cppyy/test/stltypes.xml --- a/pypy/module/cppyy/test/stltypes.xml +++ b/pypy/module/cppyy/test/stltypes.xml @@ -1,5 +1,7 @@ + + @@ -12,4 +14,7 @@ + + + diff --git a/pypy/module/cppyy/test/test_overloads.py b/pypy/module/cppyy/test/test_overloads.py --- a/pypy/module/cppyy/test/test_overloads.py +++ b/pypy/module/cppyy/test/test_overloads.py @@ -86,10 +86,10 @@ cc_ol = cppyy.gbl.cc_ol # dd_ol = cppyy.gbl.dd_ol - assert more_overloads().call(aa_ol()).c_str() == "aa_ol" -# assert 
more_overloads().call(bb_ol()).c_str() == "dd_ol" # <- bb_ol has an unknown + void* - assert more_overloads().call(cc_ol()).c_str() == "cc_ol" -# assert more_overloads().call(dd_ol()).c_str() == "dd_ol" # <- dd_ol has an unknown + assert more_overloads().call(aa_ol()) == "aa_ol" +# assert more_overloads().call(bb_ol()) == "dd_ol" # <- bb_ol has an unknown + void* + assert more_overloads().call(cc_ol()) == "cc_ol" +# assert more_overloads().call(dd_ol()) == "dd_ol" # <- dd_ol has an unknown def test04_fully_fragile_overloads(self): """Test that unknown* is preferred over unknown&""" @@ -130,7 +130,7 @@ import cppyy more_overloads = cppyy.gbl.more_overloads -# assert more_overloads().call(1).c_str() == "int" -# assert more_overloads().call(1.).c_str() == "double" - assert more_overloads().call1(1).c_str() == "int" - assert more_overloads().call1(1.).c_str() == "double" +# assert more_overloads().call(1) == "int" +# assert more_overloads().call(1.) == "double" + assert more_overloads().call1(1) == "int" + assert more_overloads().call1(1.) == "double" diff --git a/pypy/module/cppyy/test/test_pythonify.py b/pypy/module/cppyy/test/test_pythonify.py --- a/pypy/module/cppyy/test/test_pythonify.py +++ b/pypy/module/cppyy/test/test_pythonify.py @@ -253,9 +253,9 @@ # NOTE: when called through the stub, default args are fine f = a.stringRef s = cppyy.gbl.std.string - assert f(s("aap"), 0, s("noot")).c_str() == "aap" - assert f(s("noot"), 1).c_str() == "default" - assert f(s("mies")).c_str() == "mies" + assert f(s("aap"), 0, s("noot")) == "aap" + assert f(s("noot"), 1) == "default" + assert f(s("mies")) == "mies" for itype in ['short', 'ushort', 'int', 'uint', 'long', 'ulong']: g = getattr(a, '%sValue' % itype) diff --git a/pypy/module/cppyy/test/test_stltypes.py b/pypy/module/cppyy/test/test_stltypes.py --- a/pypy/module/cppyy/test/test_stltypes.py +++ b/pypy/module/cppyy/test/test_stltypes.py @@ -14,13 +14,13 @@ if err: raise OSError("'make' failed (see stderr)") -class AppTestSTL: +class AppTestSTLVECTOR: def setup_class(cls): cls.space = space env = os.environ cls.w_N = space.wrap(13) cls.w_test_dct = space.wrap(test_dct) - cls.w_datatypes = cls.space.appexec([], """(): + cls.w_stlvector = cls.space.appexec([], """(): import cppyy return cppyy.load_reflection_info(%r)""" % (test_dct, )) @@ -129,3 +129,83 @@ assert list(v) == [i for i in range(self.N)] v.destruct() + + +class AppTestSTLSTRING: + def setup_class(cls): + cls.space = space + env = os.environ + cls.w_test_dct = space.wrap(test_dct) + cls.w_stlstring = cls.space.appexec([], """(): + import cppyy + return cppyy.load_reflection_info(%r)""" % (test_dct, )) + + def test01_string_argument_passing(self): + """Test mapping of python strings and std::string""" + + import cppyy + std = cppyy.gbl.std + stringy_class = cppyy.gbl.stringy_class + + c, s = stringy_class(""), std.string("test1") + + # pass through const std::string& + c.set_string1(s) + assert c.get_string1() == s + + return + + c.set_string1("test2") + assert c.get_string1() == "test2" + + # pass through std::string (by value) + s = std.string("test3") + c.set_string2(s) + assert c.get_string1() == s + + c.set_string2("test4") + assert c.get_string1() == "test4" + + # getting through std::string& + s2 = std.string() + c.get_string2(s2) + assert s2 == "test4" + + raises(TypeError, c.get_string2, "temp string") + + def test02_string_data_ccess(self): + """Test access to std::string object data members""" + + import cppyy + std = cppyy.gbl.std + stringy_class = cppyy.gbl.stringy_class + 
+ return + + c, s = stringy_class(""), std.string("test string") + + c.m_string = s + assert c.m_string == s + assert c.get_string1() == s + + c.m_string = "another test" + assert c.m_string == "another test" + assert c.get_string1() == "another test" + + def test03_string_with_null_character(self): + """Test that strings with NULL do not get truncated""" + + import cppyy + std = cppyy.gbl.std + stringy_class = cppyy.gbl.stringy_class + + return + + t0 = "aap\0noot" + self.assertEqual(t0, "aap\0noot") + + c, s = stringy_class(""), std.string(t0, len(t0)) + + c.set_string1(s) + assert t0 == c.get_string1() + assert s == c.get_string1() From noreply at buildbot.pypy.org Fri Feb 17 23:54:42 2012 From: noreply at buildbot.pypy.org (boemmels) Date: Fri, 17 Feb 2012 23:54:42 +0100 (CET) Subject: [pypy-commit] lang-scheme default: Add a test for the map function (untested so far). Message-ID: <20120217225442.2E8178204C@wyvern.cs.uni-duesseldorf.de> Author: Juergen Boemmels Branch: Changeset: r39:e5146e1917d1 Date: 2012-02-14 15:29 +0100 http://bitbucket.org/pypy/lang-scheme/changeset/e5146e1917d1/ Log: Add a test for the map function (untested so far). Modified cons to raise the propper exception if called with the wrong args number diff --git a/scheme/procedure.py b/scheme/procedure.py --- a/scheme/procedure.py +++ b/scheme/procedure.py @@ -154,6 +154,9 @@ _symbol_name = "cons" def procedure(self, ctx, lst): + if len(lst) != 2: + raise WrongArgsNumber() + w_car = lst[0] w_cdr = lst[1] #cons is always creating a new pair diff --git a/scheme/test/test_eval.py b/scheme/test/test_eval.py --- a/scheme/test/test_eval.py +++ b/scheme/test/test_eval.py @@ -1026,3 +1026,20 @@ """) assert w_res.eq(symbol("consonant")) +def test_map(): + w_res = eval_noctx("(map car '((1 2) (3 4) (5 6) (7 8)))") + assert w_res.equal(parse_("(1 3 5 7)")) + + w_res = eval_noctx("(map cons '(1 2 3) '(4 5 6))") + assert w_res.equal(parse_("((1 . 4) (2 . 5) (3 . 6))")) + +# w_res = eval_noctx("(map error '())") # empty list no calls +# assert w_res is w_nil + + w_res = eval_noctx("(map (lambda (a) (+ a 1)) '(1 2 3))") + assert w_res.equal(parse_("(2 3 4)")) + + py.test.raises(WrongArgType, eval_noctx, "(map car 2)") + py.test.raises(WrongArgType, eval_noctx, "(map 2 '(1 2 3))") + py.test.raises(WrongArgsNumber, eval_noctx, "(map list)") + py.test.raises(WrongArgsNumber, eval_noctx, "(map cons '(1 2 3))") From noreply at buildbot.pypy.org Fri Feb 17 23:54:43 2012 From: noreply at buildbot.pypy.org (boemmels) Date: Fri, 17 Feb 2012 23:54:43 +0100 (CET) Subject: [pypy-commit] lang-scheme default: Improve error message of WrongArgsNumber: Add actual and expected values Message-ID: <20120217225443.3E5D78204C@wyvern.cs.uni-duesseldorf.de> Author: Juergen Boemmels Branch: Changeset: r40:dc37076bac9a Date: 2012-02-17 23:54 +0100 http://bitbucket.org/pypy/lang-scheme/changeset/dc37076bac9a/ Log: Improve error message of WrongArgsNumber: Add actual and expected values diff --git a/scheme/object.py b/scheme/object.py --- a/scheme/object.py +++ b/scheme/object.py @@ -13,7 +13,11 @@ class WrongArgsNumber(SchemeException): def __str__(self): - return "Wrong number of args" + if len(self.args) == 2: + return ("Wrong number of args. Got: %d, expected: %s" % + (self.args[0], self.args[1])) + else: + return "Wrong number of args." 
class WrongArgType(SchemeException): def __str__(self): diff --git a/scheme/procedure.py b/scheme/procedure.py --- a/scheme/procedure.py +++ b/scheme/procedure.py @@ -12,7 +12,7 @@ def procedure(self, ctx, lst): if len(lst) == 0: if self.default_result is None: - raise WrongArgsNumber() + raise WrongArgsNumber(len(lst), ">1") return self.default_result @@ -155,7 +155,7 @@ def procedure(self, ctx, lst): if len(lst) != 2: - raise WrongArgsNumber() + raise WrongArgsNumber(len(lst), 2) w_car = lst[0] w_cdr = lst[1] @@ -183,7 +183,7 @@ class CarCdrCombination(W_Procedure): def procedure(self, ctx, lst): if len(lst) != 1: - raise WrongArgsNumber + raise WrongArgsNumber(len(lst), 1) w_pair = lst[0] return self.do_oper(w_pair) @@ -352,8 +352,8 @@ def procedure_tr(self, ctx, lst): if len(lst) != 2: - raise WrongArgsNumber - + raise WrongArgsNumber(len(lst), 2) + (w_procedure, w_lst) = lst if not isinstance(w_procedure, W_Procedure): #print w_procedure.to_repr(), "is not a procedure" @@ -376,7 +376,7 @@ def procedure(self, ctx, lst): if len(lst) != 1: - raise WrongArgsNumber + raise WrongArgsNumber(len(lst), 1) w_promise = lst[0] if not isinstance(w_promise, W_Promise): @@ -389,7 +389,7 @@ def procedure(self, ctx, lst): if len(lst) != 1: - raise WrongArgsNumber + raise WrongArgsNumber(len(lst), 1) w_inlist = lst[0] w_outlist = w_nil while w_inlist is not w_nil: @@ -405,7 +405,7 @@ def procedure_tr(self, ctx, lst): if len(lst) < 2: - raise WrongArgsNumber + raise WrongArgsNumber(len(lst), ">2") w_proc = lst[0] if not isinstance(w_proc, W_Procedure): @@ -452,7 +452,7 @@ def procedure(self, ctx, lst): if len(lst) < 1 or len(lst) > 2: - raise WrongArgsNumber + raise WrongArgsNumber(len(lst), "1-2") w_number = lst[0] if not isinstance(w_number, W_Integer): @@ -473,7 +473,7 @@ class AssocX(W_Procedure): def procedure(self, ctx, lst): if len(lst) != 2: - raise WrongArgsNumber + raise WrongArgsNumber(len(lst), 2) (w_obj, w_alst) = lst @@ -522,7 +522,7 @@ class MemX(W_Procedure): def procedure(self, ctx, lst): if len(lst) != 2: - raise WrongArgsNumber + raise WrongArgsNumber(len(lst), 2) (w_obj, w_lst) = lst @@ -565,7 +565,7 @@ class EquivalnecePredicate(W_Procedure): def procedure(self, ctx, lst): if len(lst) != 2: - raise WrongArgsNumber + raise WrongArgsNumber(len(lst), 2) (a, b) = lst return W_Boolean(self.predicate(a, b)) @@ -596,7 +596,7 @@ class PredicateNumber(W_Procedure): def procedure(self, ctx, lst): if len(lst) != 1: - raise WrongArgsNumber + raise WrongArgsNumber(len(lst), 1) w_obj = lst[0] if not isinstance(w_obj, W_Number): @@ -614,7 +614,7 @@ def procedure(self, ctx, lst): if len(lst) != 1: - raise WrongArgsNumber + raise WrongArgsNumber(len(lst), 1) w_obj = lst[0] if not isinstance(w_obj, W_Number): @@ -688,7 +688,7 @@ class TypePredicate(W_Procedure): def procedure(self, ctx, lst): if len(lst) != 1: - raise WrongArgsNumber + raise WrongArgsNumber(len(lst), 1) return W_Boolean(self.predicate(lst[0])) @@ -748,7 +748,7 @@ def procedure(self, ctx, lst): if len(lst) != 1: - raise WrongArgsNumber + raise WrongArgsNumber(len(lst), 1) w_bool = lst[0] if w_bool.to_boolean(): @@ -770,7 +770,7 @@ (obj, port) = lst raise NotImplementedError else: - raise WrongArgsNumber + raise WrongArgsNumber(len(lst), "1-2") print obj.to_string(), return w_undefined @@ -780,7 +780,7 @@ def procedure(self, ctx, lst): if len(lst) != 0: - raise WrongArgsNumber + raise WrongArgsNumber(len(lst), 0) print return w_undefined @@ -795,7 +795,7 @@ (obj, port) = lst raise NotImplementedError else: - raise WrongArgsNumber + 
raise WrongArgsNumber(len(lst), "1-2") print obj.to_repr(), return w_undefined From noreply at buildbot.pypy.org Sat Feb 18 03:16:57 2012 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Sat, 18 Feb 2012 03:16:57 +0100 (CET) Subject: [pypy-commit] pypy numpypy-ctypes: Added ctypes to arrays. Works, except 2 failing tests. Message-ID: <20120218021657.ED5D48204C@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: numpypy-ctypes Changeset: r52596:69d8667c5a60 Date: 2012-02-17 21:16 -0500 http://bitbucket.org/pypy/pypy/changeset/69d8667c5a60/ Log: Added ctypes to arrays. Works, except 2 failing tests. diff --git a/lib_pypy/numpypy/core/_internal.py b/lib_pypy/numpypy/core/_internal.py new file mode 100644 --- /dev/null +++ b/lib_pypy/numpypy/core/_internal.py @@ -0,0 +1,78 @@ +#A place for code to be called from C-code +# that implements more complicated stuff. + +def _getintp_ctype(): + from _numpypy import dtype + val = _getintp_ctype.cache + if val is not None: + return val + char = dtype('p').char + import ctypes + if (char == 'i'): + val = ctypes.c_int + elif char == 'l': + val = ctypes.c_long + elif char == 'q': + val = ctypes.c_longlong + else: + val = ctypes.c_long + _getintp_ctype.cache = val + return val +_getintp_ctype.cache = None + +# Used for .ctypes attribute of ndarray + +class _missing_ctypes(object): + def cast(self, num, obj): + return num + + def c_void_p(self, num): + return num + +class _ctypes(object): + def __init__(self, array, ptr=None): + try: + import ctypes + self._ctypes = ctypes + except ImportError: + self._ctypes = _missing_ctypes() + self._arr = array + self._data = ptr + if self._arr.ndim == 0: + self._zerod = True + else: + self._zerod = False + + def data_as(self, obj): + return self._ctypes.cast(self._data, obj) + + def shape_as(self, obj): + if self._zerod: + return None + return (obj*self._arr.ndim)(*self._arr.shape) + + def strides_as(self, obj): + if self._zerod: + return None + return (obj*self._arr.ndim)(*self._arr.strides) + + def get_data(self): + return self._data + + def get_shape(self): + if self._zerod: + return None + return (_getintp_ctype()*self._arr.ndim)(*self._arr.shape) + + def get_strides(self): + if self._zerod: + return None + return (_getintp_ctype()*self._arr.ndim)(*self._arr.strides) + + def get_as_parameter(self): + return self._ctypes.c_void_p(self._data) + + data = property(get_data, None, doc="c-types data") + shape = property(get_shape, None, doc="c-types shape") + strides = property(get_strides, None, doc="c-types strides") + _as_parameter_ = property(get_as_parameter, None, doc="_as parameter_") diff --git a/pypy/module/micronumpy/appbridge.py b/pypy/module/micronumpy/appbridge.py --- a/pypy/module/micronumpy/appbridge.py +++ b/pypy/module/micronumpy/appbridge.py @@ -2,28 +2,34 @@ from pypy.rlib.objectmodel import specialize class AppBridgeCache(object): + w_numpypy_core__methods_module = None w__var = None w__std = None - w_module = None w_array_repr = None w_array_str = None + w_numpypy_core__internal_module = None + w__ctypes = None + def __init__(self, space): self.w_import = space.appexec([], """(): - def f(): + def f(module): import sys - __import__('numpypy.core._methods') - return sys.modules['numpypy.core._methods'] + __import__(module) + return sys.modules[module] return f """) - - @specialize.arg(2) - def call_method(self, space, name, *args): - w_meth = getattr(self, 'w_' + name) + + @specialize.arg(2, 3) + def call_method(self, space, module, name, *args): + module_attr = "w_" + module.replace(".", 
"_") + "_module" + meth_attr = "w_" + name + w_meth = getattr(self, meth_attr) if w_meth is None: - if self.w_module is None: - self.w_module = space.call_function(self.w_import) - w_meth = space.getattr(self.w_module, space.wrap(name)) + if getattr(self, module_attr) is None: + w_mod = space.call_function(self.w_import, space.wrap(module)) + setattr(self, module_attr, w_mod) + w_meth = space.getattr(getattr(self, module_attr), space.wrap(name)) setattr(self, 'w_' + name, w_meth) return space.call_function(w_meth, *args) diff --git a/pypy/module/micronumpy/interp_dtype.py b/pypy/module/micronumpy/interp_dtype.py --- a/pypy/module/micronumpy/interp_dtype.py +++ b/pypy/module/micronumpy/interp_dtype.py @@ -121,11 +121,14 @@ itemsize = GetSetProperty(W_Dtype.descr_get_itemsize), shape = GetSetProperty(W_Dtype.descr_get_shape), name = interp_attrproperty('name', cls=W_Dtype), + char = interp_attrproperty("char", cls=W_Dtype), ) W_Dtype.typedef.acceptable_as_base_class = False class DtypeCache(object): def __init__(self, space): + ptr_size = rffi.sizeof(rffi.VOIDP) + self.w_booldtype = W_Dtype( types.Bool(), num=0, @@ -173,6 +176,7 @@ kind=SIGNEDLTR, name="int32", char="i", + aliases = ["p"] if ptr_size == 4 else [], w_box_type=space.gettypefor(interp_boxes.W_Int32Box), ) self.w_uint32dtype = W_Dtype( @@ -211,6 +215,7 @@ name="int64", char="q", w_box_type=space.gettypefor(interp_boxes.W_Int64Box), + aliases = ["p"] if ptr_size == 8 else [], alternate_constructors=[space.w_long], ) self.w_uint64dtype = W_Dtype( diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -256,9 +256,23 @@ return self.get_concrete().setshape(space, new_shape) + @jit.unroll_safe + def descr_get_strides(self, space): + return space.newtuple([space.wrap(i) for i in self.strides]) + def descr_get_size(self, space): return space.wrap(self.size) + def descr_get_ctypes(self, space): + if not self.shape: + raise OperationError(space.w_TypeError, space.wrap("Can't get the ctypes of a scalar yet.")) + concrete = self.get_concrete() + storage = concrete.storage + addr = rffi.cast(lltype.Signed, storage) + return get_appbridge_cache(space).call_method(space, + "numpypy.core._internal", "_ctypes", self, space.wrap(addr) + ) + def descr_copy(self, space): return self.copy(space) @@ -513,12 +527,14 @@ return space.div(self.descr_sum_promote(space, w_axis), w_denom) def descr_var(self, space, w_axis=None): - return get_appbridge_cache(space).call_method(space, '_var', self, - w_axis) + return get_appbridge_cache(space).call_method(space, + 'numpypy.core._methods', '_var', self, w_axis + ) def descr_std(self, space, w_axis=None): - return get_appbridge_cache(space).call_method(space, '_std', self, - w_axis) + return get_appbridge_cache(space).call_method(space, + 'numpypy.core._methods', '_std', self, w_axis + ) def descr_fill(self, space, w_value): concr = self.get_concrete_or_scalar() @@ -1291,10 +1307,12 @@ dtype = GetSetProperty(BaseArray.descr_get_dtype), shape = GetSetProperty(BaseArray.descr_get_shape, BaseArray.descr_set_shape), + strides = GetSetProperty(BaseArray.descr_get_strides), size = GetSetProperty(BaseArray.descr_get_size), ndim = GetSetProperty(BaseArray.descr_get_ndim), itemsize = GetSetProperty(BaseArray.descr_get_itemsize), nbytes = GetSetProperty(BaseArray.descr_get_nbytes), + ctypes = GetSetProperty(BaseArray.descr_get_ctypes), T = 
GetSetProperty(BaseArray.descr_get_transpose), transpose = interp2app(BaseArray.descr_get_transpose), diff --git a/pypy/module/micronumpy/test/test_base.py b/pypy/module/micronumpy/test/test_base.py --- a/pypy/module/micronumpy/test/test_base.py +++ b/pypy/module/micronumpy/test/test_base.py @@ -14,7 +14,7 @@ import numpy sys.modules['numpypy'] = numpy sys.modules['_numpypy'] = numpy - cls.space = gettestobjspace(usemodules=['micronumpy']) + cls.space = gettestobjspace(usemodules=['micronumpy', '_ffi', '_rawffi']) class TestSignature(object): def test_binop_signature(self, space): diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -1709,6 +1709,25 @@ assert (a + a).item(1) == 4 raises(ValueError, "array(5).item(1)") + def test_ctypes(self): + import gc + from _numpypy import array + + a = array([1, 2, 3, 4, 5]) + assert a.ctypes._data == a.__array_interface__["data"][0] + assert a is a.ctypes._arr + + shape = a.ctypes.get_shape() + assert len(shape) == 1 + assert shape[0] == 5 + + strides = a.ctypes.get_strides() + assert len(strides) == 1 + assert strides[0] == 1 + + a = array(2) + raises(TypeError, lambda: a.ctypes) + class AppTestSupport(BaseNumpyAppTest): def setup_class(cls): import struct From noreply at buildbot.pypy.org Sat Feb 18 06:36:05 2012 From: noreply at buildbot.pypy.org (wlav) Date: Sat, 18 Feb 2012 06:36:05 +0100 (CET) Subject: [pypy-commit] pypy reflex-support: allow python str to pass through std::string and const std::string& Message-ID: <20120218053605.8A6E011B2E77@wyvern.cs.uni-duesseldorf.de> Author: Wim Lavrijsen Branch: reflex-support Changeset: r52597:a11f7dc9ad01 Date: 2012-02-17 16:23 -0800 http://bitbucket.org/pypy/pypy/changeset/a11f7dc9ad01/ Log: allow python str to pass through std::string and const std::string& diff --git a/pypy/module/cppyy/capi/__init__.py b/pypy/module/cppyy/capi/__init__.py --- a/pypy/module/cppyy/capi/__init__.py +++ b/pypy/module/cppyy/capi/__init__.py @@ -244,3 +244,18 @@ voidp = rffi.cast(rffi.VOIDP, charp) c_free(voidp) return string + +c_charp2stdstring = rffi.llexternal( + "cppyy_charp2stdstring", + [rffi.CCHARP], rffi.VOIDP, + compilation_info=backend.eci) + +c_stdstring2stdstring = rffi.llexternal( + "cppyy_stdstring2stdstring", + [rffi.VOIDP], rffi.VOIDP, + compilation_info=backend.eci) + +c_free_stdstring = rffi.llexternal( + "cppyy_free_stdstring", + [rffi.VOIDP], lltype.Void, + compilation_info=backend.eci) diff --git a/pypy/module/cppyy/converter.py b/pypy/module/cppyy/converter.py --- a/pypy/module/cppyy/converter.py +++ b/pypy/module/cppyy/converter.py @@ -1,5 +1,7 @@ import sys +from pypy.interpreter.error import OperationError + from pypy.rpython.lltypesystem import rffi, lltype from pypy.rlib.rarithmetic import r_singlefloat from pypy.rlib import jit, libffi, clibffi, rfloat @@ -532,6 +534,36 @@ return interp_cppyy.new_instance(space, w_type, self.cpptype, address, False) +class StdStringConverter(InstanceConverter): + _immutable_ = True + + def __init__(self, space, extra): + from pypy.module.cppyy import interp_cppyy + cpptype = interp_cppyy.type_byname(space, "std::string") + InstanceConverter.__init__(self, space, cpptype, "std::string") + + def _unwrap_object(self, space, w_obj): + try: + charp = rffi.str2charp(space.str_w(w_obj)) + arg = capi.c_charp2stdstring(charp) + rffi.free_charp(charp) + return arg + except OperationError: + arg = 
InstanceConverter._unwrap_object(self, space, w_obj) + return capi.c_stdstring2stdstring(arg) + + def free_argument(self, arg): + capi.c_free_stdstring(rffi.cast(rffi.VOIDPP, arg)[0]) + +class StdStringRefConverter(InstancePtrConverter): + _immutable_ = True + + def __init__(self, space, extra): + from pypy.module.cppyy import interp_cppyy + cpptype = interp_cppyy.type_byname(space, "std::string") + InstancePtrConverter.__init__(self, space, cpptype, "std::string") + + _converters = {} # builtin and custom types _a_converters = {} # array and ptr versions of above def get_converter(space, name, default): @@ -614,6 +646,14 @@ _converters["void**"] = VoidPtrPtrConverter _converters["void*&"] = VoidPtrRefConverter +# special cases +_converters["std::string"] = StdStringConverter +_converters["std::basic_string"] = StdStringConverter +_converters["const std::string&"] = StdStringConverter # TODO: shouldn't copy +_converters["const std::basic_string&"] = StdStringConverter +_converters["std::string&"] = StdStringRefConverter +_converters["std::basic_string&"] = StdStringRefConverter + # it should be possible to generate these: _a_converters["short int*"] = ShortPtrConverter _a_converters["short*"] = _a_converters["short int*"] diff --git a/pypy/module/cppyy/include/capi.h b/pypy/module/cppyy/include/capi.h --- a/pypy/module/cppyy/include/capi.h +++ b/pypy/module/cppyy/include/capi.h @@ -79,6 +79,10 @@ long long cppyy_strtoll(const char* str); unsigned long long cppyy_strtuoll(const char* str); + void* cppyy_charp2stdstring(const char* str); + void* cppyy_stdstring2stdstring(void* ptr); + void cppyy_free_stdstring(void* ptr); + #ifdef __cplusplus } #endif // ifdef __cplusplus diff --git a/pypy/module/cppyy/src/reflexcwrapper.cxx b/pypy/module/cppyy/src/reflexcwrapper.cxx --- a/pypy/module/cppyy/src/reflexcwrapper.cxx +++ b/pypy/module/cppyy/src/reflexcwrapper.cxx @@ -408,3 +408,16 @@ void cppyy_free(void* ptr) { free(ptr); } + +void* cppyy_charp2stdstring(const char* str) { + return new std::string(str); +} + +void* cppyy_stdstring2stdstring(void* ptr) { + return new std::string(*(std::string*)ptr); +} + +void cppyy_free_stdstring(void* ptr) { + delete (std::string*)ptr; +} + diff --git a/pypy/module/cppyy/test/test_stltypes.py b/pypy/module/cppyy/test/test_stltypes.py --- a/pypy/module/cppyy/test/test_stltypes.py +++ b/pypy/module/cppyy/test/test_stltypes.py @@ -153,8 +153,6 @@ c.set_string1(s) assert c.get_string1() == s - return - c.set_string1("test2") assert c.get_string1() == "test2" From noreply at buildbot.pypy.org Sat Feb 18 06:36:07 2012 From: noreply at buildbot.pypy.org (wlav) Date: Sat, 18 Feb 2012 06:36:07 +0100 (CET) Subject: [pypy-commit] pypy reflex-support: merge default into branch Message-ID: <20120218053607.2243311B2E78@wyvern.cs.uni-duesseldorf.de> Author: Wim Lavrijsen Branch: reflex-support Changeset: r52598:63f05fc8565d Date: 2012-02-17 17:45 -0800 http://bitbucket.org/pypy/pypy/changeset/63f05fc8565d/ Log: merge default into branch diff --git a/pypy/config/translationoption.py b/pypy/config/translationoption.py --- a/pypy/config/translationoption.py +++ b/pypy/config/translationoption.py @@ -105,7 +105,8 @@ BoolOption("sandbox", "Produce a fully-sandboxed executable", default=False, cmdline="--sandbox", requires=[("translation.thread", False)], - suggests=[("translation.gc", "generation")]), + suggests=[("translation.gc", "generation"), + ("translation.gcrootfinder", "shadowstack")]), BoolOption("rweakref", "The backend supports RPython-level weakrefs", default=True), 
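The translationoption.py hunk above makes --sandbox suggest the shadowstack GC-root finder in addition to the generation GC. Below is a minimal sketch of what that is meant to do, assuming the usual get_combined_translation_config() helper in pypy/config/translationoption.py and that "suggests" values are applied eagerly when an option is set; this is an illustration, not part of the archived patch.

    from pypy.config.translationoption import get_combined_translation_config

    # Build a fresh translation config and turn on the sandbox option.
    config = get_combined_translation_config(translating=True)
    config.translation.sandbox = True

    # With the hunk above, the suggested values should now be in effect,
    # provided the user did not already force a different GC or root finder:
    print config.translation.gc             # expected: "generation"
    print config.translation.gcrootfinder   # expected: "shadowstack"

Since these are only suggestions (not requirements), an explicit --gcrootfinder choice on the command line would still win over the sandbox default.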
diff --git a/pypy/interpreter/pyframe.py b/pypy/interpreter/pyframe.py --- a/pypy/interpreter/pyframe.py +++ b/pypy/interpreter/pyframe.py @@ -60,11 +60,10 @@ self.pycode = code eval.Frame.__init__(self, space, w_globals) self.locals_stack_w = [None] * (code.co_nlocals + code.co_stacksize) - self.nlocals = code.co_nlocals self.valuestackdepth = code.co_nlocals self.lastblock = None make_sure_not_resized(self.locals_stack_w) - check_nonneg(self.nlocals) + check_nonneg(self.valuestackdepth) # if space.config.objspace.honor__builtins__: self.builtin = space.builtin.pick_builtin(w_globals) @@ -144,8 +143,8 @@ def execute_frame(self, w_inputvalue=None, operr=None): """Execute this frame. Main entry point to the interpreter. The optional arguments are there to handle a generator's frame: - w_inputvalue is for generator.send()) and operr is for - generator.throw()). + w_inputvalue is for generator.send() and operr is for + generator.throw(). """ # the following 'assert' is an annotation hint: it hides from # the annotator all methods that are defined in PyFrame but @@ -195,7 +194,7 @@ def popvalue(self): depth = self.valuestackdepth - 1 - assert depth >= self.nlocals, "pop from empty value stack" + assert depth >= self.pycode.co_nlocals, "pop from empty value stack" w_object = self.locals_stack_w[depth] self.locals_stack_w[depth] = None self.valuestackdepth = depth @@ -223,7 +222,7 @@ def peekvalues(self, n): values_w = [None] * n base = self.valuestackdepth - n - assert base >= self.nlocals + assert base >= self.pycode.co_nlocals while True: n -= 1 if n < 0: @@ -235,7 +234,8 @@ def dropvalues(self, n): n = hint(n, promote=True) finaldepth = self.valuestackdepth - n - assert finaldepth >= self.nlocals, "stack underflow in dropvalues()" + assert finaldepth >= self.pycode.co_nlocals, ( + "stack underflow in dropvalues()") while True: n -= 1 if n < 0: @@ -267,13 +267,15 @@ # Contrast this with CPython where it's PEEK(-1). 
index_from_top = hint(index_from_top, promote=True) index = self.valuestackdepth + ~index_from_top - assert index >= self.nlocals, "peek past the bottom of the stack" + assert index >= self.pycode.co_nlocals, ( + "peek past the bottom of the stack") return self.locals_stack_w[index] def settopvalue(self, w_object, index_from_top=0): index_from_top = hint(index_from_top, promote=True) index = self.valuestackdepth + ~index_from_top - assert index >= self.nlocals, "settop past the bottom of the stack" + assert index >= self.pycode.co_nlocals, ( + "settop past the bottom of the stack") self.locals_stack_w[index] = w_object @jit.unroll_safe @@ -320,12 +322,13 @@ else: f_lineno = self.f_lineno - values_w = self.locals_stack_w[self.nlocals:self.valuestackdepth] + nlocals = self.pycode.co_nlocals + values_w = self.locals_stack_w[nlocals:self.valuestackdepth] w_valuestack = maker.slp_into_tuple_with_nulls(space, values_w) w_blockstack = nt([block._get_state_(space) for block in self.get_blocklist()]) w_fastlocals = maker.slp_into_tuple_with_nulls( - space, self.locals_stack_w[:self.nlocals]) + space, self.locals_stack_w[:nlocals]) if self.last_exception is None: w_exc_value = space.w_None w_tb = space.w_None @@ -442,7 +445,7 @@ """Initialize the fast locals from a list of values, where the order is according to self.pycode.signature().""" scope_len = len(scope_w) - if scope_len > self.nlocals: + if scope_len > self.pycode.co_nlocals: raise ValueError, "new fastscope is longer than the allocated area" # don't assign directly to 'locals_stack_w[:scope_len]' to be # virtualizable-friendly @@ -456,7 +459,7 @@ pass def getfastscopelength(self): - return self.nlocals + return self.pycode.co_nlocals def getclosure(self): return None diff --git a/pypy/jit/backend/test/runner_test.py b/pypy/jit/backend/test/runner_test.py --- a/pypy/jit/backend/test/runner_test.py +++ b/pypy/jit/backend/test/runner_test.py @@ -2221,6 +2221,35 @@ print 'step 4 ok' print '-'*79 + def test_guard_not_invalidated_and_label(self): + # test that the guard_not_invalidated reserves enough room before + # the label. 
If it doesn't, then in this example after we invalidate + # the guard, jumping to the label will hit the invalidation code too + cpu = self.cpu + i0 = BoxInt() + faildescr = BasicFailDescr(1) + labeldescr = TargetToken() + ops = [ + ResOperation(rop.GUARD_NOT_INVALIDATED, [], None, descr=faildescr), + ResOperation(rop.LABEL, [i0], None, descr=labeldescr), + ResOperation(rop.FINISH, [i0], None, descr=BasicFailDescr(3)), + ] + ops[0].setfailargs([]) + looptoken = JitCellToken() + self.cpu.compile_loop([i0], ops, looptoken) + # mark as failing + self.cpu.invalidate_loop(looptoken) + # attach a bridge + i2 = BoxInt() + ops = [ + ResOperation(rop.JUMP, [ConstInt(333)], None, descr=labeldescr), + ] + self.cpu.compile_bridge(faildescr, [], ops, looptoken) + # run: must not be caught in an infinite loop + fail = self.cpu.execute_token(looptoken, 16) + assert fail.identifier == 3 + assert self.cpu.get_latest_value_int(0) == 333 + # pure do_ / descr features def test_do_operations(self): diff --git a/pypy/jit/backend/x86/regalloc.py b/pypy/jit/backend/x86/regalloc.py --- a/pypy/jit/backend/x86/regalloc.py +++ b/pypy/jit/backend/x86/regalloc.py @@ -165,7 +165,6 @@ self.jump_target_descr = None self.close_stack_struct = 0 self.final_jump_op = None - self.min_bytes_before_label = 0 def _prepare(self, inputargs, operations, allgcrefs): self.fm = X86FrameManager() @@ -199,8 +198,13 @@ operations = self._prepare(inputargs, operations, allgcrefs) self._update_bindings(arglocs, inputargs) self.param_depth = prev_depths[1] + self.min_bytes_before_label = 0 return operations + def ensure_next_label_is_at_least_at_position(self, at_least_position): + self.min_bytes_before_label = max(self.min_bytes_before_label, + at_least_position) + def reserve_param(self, n): self.param_depth = max(self.param_depth, n) @@ -468,7 +472,11 @@ self.assembler.mc.mark_op(None) # end of the loop def flush_loop(self): - # rare case: if the loop is too short, pad with NOPs + # rare case: if the loop is too short, or if we are just after + # a GUARD_NOT_INVALIDATED, pad with NOPs. Important! This must + # be called to ensure that there are enough bytes produced, + # because GUARD_NOT_INVALIDATED or redirect_call_assembler() + # will maybe overwrite them. mc = self.assembler.mc while mc.get_relative_pos() < self.min_bytes_before_label: mc.NOP() @@ -558,7 +566,15 @@ def consider_guard_no_exception(self, op): self.perform_guard(op, [], None) - consider_guard_not_invalidated = consider_guard_no_exception + def consider_guard_not_invalidated(self, op): + mc = self.assembler.mc + n = mc.get_relative_pos() + self.perform_guard(op, [], None) + assert n == mc.get_relative_pos() + # ensure that the next label is at least 5 bytes farther than + # the current position. Otherwise, when invalidating the guard, + # we would overwrite randomly the next label's position. 
+ self.ensure_next_label_is_at_least_at_position(n + 5) def consider_guard_exception(self, op): loc = self.rm.make_sure_var_in_reg(op.getarg(0)) diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py @@ -7760,6 +7760,42 @@ """ self.optimize_loop(ops, expected) + def test_constant_failargs(self): + ops = """ + [p1, i2, i3] + setfield_gc(p1, ConstPtr(myptr), descr=nextdescr) + p16 = getfield_gc(p1, descr=nextdescr) + guard_true(i2) [p16, i3] + jump(p1, i3, i2) + """ + preamble = """ + [p1, i2, i3] + setfield_gc(p1, ConstPtr(myptr), descr=nextdescr) + guard_true(i2) [i3] + jump(p1, i3) + """ + expected = """ + [p1, i3] + guard_true(i3) [] + jump(p1, 1) + """ + self.optimize_loop(ops, expected, preamble) + + def test_issue1048(self): + ops = """ + [p1, i2, i3] + p16 = getfield_gc(p1, descr=nextdescr) + guard_true(i2) [p16] + setfield_gc(p1, ConstPtr(myptr), descr=nextdescr) + jump(p1, i3, i2) + """ + expected = """ + [p1, i3] + guard_true(i3) [] + jump(p1, 1) + """ + self.optimize_loop(ops, expected) + class TestLLtype(OptimizeOptTest, LLtypeMixin): pass diff --git a/pypy/module/cpyext/api.py b/pypy/module/cpyext/api.py --- a/pypy/module/cpyext/api.py +++ b/pypy/module/cpyext/api.py @@ -384,6 +384,7 @@ "Dict": "space.w_dict", "Tuple": "space.w_tuple", "List": "space.w_list", + "Set": "space.w_set", "Int": "space.w_int", "Bool": "space.w_bool", "Float": "space.w_float", @@ -434,16 +435,16 @@ ('buf', rffi.VOIDP), ('obj', PyObject), ('len', Py_ssize_t), - # ('itemsize', Py_ssize_t), + ('itemsize', Py_ssize_t), - # ('readonly', lltype.Signed), - # ('ndim', lltype.Signed), - # ('format', rffi.CCHARP), - # ('shape', Py_ssize_tP), - # ('strides', Py_ssize_tP), - # ('suboffets', Py_ssize_tP), - # ('smalltable', rffi.CFixedArray(Py_ssize_t, 2)), - # ('internal', rffi.VOIDP) + ('readonly', lltype.Signed), + ('ndim', lltype.Signed), + ('format', rffi.CCHARP), + ('shape', Py_ssize_tP), + ('strides', Py_ssize_tP), + ('suboffsets', Py_ssize_tP), + #('smalltable', rffi.CFixedArray(Py_ssize_t, 2)), + ('internal', rffi.VOIDP) )) @specialize.memo() diff --git a/pypy/module/cpyext/dictobject.py b/pypy/module/cpyext/dictobject.py --- a/pypy/module/cpyext/dictobject.py +++ b/pypy/module/cpyext/dictobject.py @@ -6,6 +6,7 @@ from pypy.module.cpyext.pyobject import RefcountState from pypy.module.cpyext.pyerrors import PyErr_BadInternalCall from pypy.interpreter.error import OperationError +from pypy.rlib.objectmodel import specialize @cpython_api([], PyObject) def PyDict_New(space): @@ -191,3 +192,24 @@ raise return 0 return 1 + + at specialize.memo() +def make_frozendict(space): + return space.appexec([], '''(): + import collections + class FrozenDict(collections.Mapping): + def __init__(self, *args, **kwargs): + self._d = dict(*args, **kwargs) + def __iter__(self): + return iter(self._d) + def __len__(self): + return len(self._d) + def __getitem__(self, key): + return self._d[key] + return FrozenDict''') + + at cpython_api([PyObject], PyObject) +def PyDictProxy_New(space, w_dict): + w_frozendict = make_frozendict(space) + return space.call_function(w_frozendict, w_dict) + diff --git a/pypy/module/cpyext/include/object.h b/pypy/module/cpyext/include/object.h --- a/pypy/module/cpyext/include/object.h +++ b/pypy/module/cpyext/include/object.h @@ -131,18 +131,18 @@ /* This is Py_ssize_t so it can be pointed to by strides 
in simple case.*/ - /* Py_ssize_t itemsize; */ - /* int readonly; */ - /* int ndim; */ - /* char *format; */ - /* Py_ssize_t *shape; */ - /* Py_ssize_t *strides; */ - /* Py_ssize_t *suboffsets; */ + Py_ssize_t itemsize; + int readonly; + int ndim; + char *format; + Py_ssize_t *shape; + Py_ssize_t *strides; + Py_ssize_t *suboffsets; /* static store for shape and strides of mono-dimensional buffers. */ /* Py_ssize_t smalltable[2]; */ - /* void *internal; */ + void *internal; } Py_buffer; diff --git a/pypy/module/cpyext/include/pystate.h b/pypy/module/cpyext/include/pystate.h --- a/pypy/module/cpyext/include/pystate.h +++ b/pypy/module/cpyext/include/pystate.h @@ -10,6 +10,7 @@ typedef struct _ts { PyInterpreterState *interp; + PyObject *dict; /* Stores per-thread state */ } PyThreadState; #define Py_BEGIN_ALLOW_THREADS { \ diff --git a/pypy/module/cpyext/include/pythread.h b/pypy/module/cpyext/include/pythread.h --- a/pypy/module/cpyext/include/pythread.h +++ b/pypy/module/cpyext/include/pythread.h @@ -1,6 +1,8 @@ #ifndef Py_PYTHREAD_H #define Py_PYTHREAD_H +#define WITH_THREAD + typedef void *PyThread_type_lock; #define WAIT_LOCK 1 #define NOWAIT_LOCK 0 diff --git a/pypy/module/cpyext/object.py b/pypy/module/cpyext/object.py --- a/pypy/module/cpyext/object.py +++ b/pypy/module/cpyext/object.py @@ -439,6 +439,8 @@ return 0 +PyBUF_WRITABLE = 0x0001 # Copied from object.h + @cpython_api([lltype.Ptr(Py_buffer), PyObject, rffi.VOIDP, Py_ssize_t, lltype.Signed, lltype.Signed], rffi.INT, error=CANNOT_FAIL) def PyBuffer_FillInfo(space, view, obj, buf, length, readonly, flags): @@ -454,6 +456,18 @@ view.c_len = length view.c_obj = obj Py_IncRef(space, obj) + view.c_itemsize = 1 + if flags & PyBUF_WRITABLE: + rffi.setintfield(view, 'c_readonly', 0) + else: + rffi.setintfield(view, 'c_readonly', 1) + rffi.setintfield(view, 'c_ndim', 0) + view.c_format = lltype.nullptr(rffi.CCHARP.TO) + view.c_shape = lltype.nullptr(Py_ssize_tP.TO) + view.c_strides = lltype.nullptr(Py_ssize_tP.TO) + view.c_suboffsets = lltype.nullptr(Py_ssize_tP.TO) + view.c_internal = lltype.nullptr(rffi.VOIDP.TO) + return 0 diff --git a/pypy/module/cpyext/pystate.py b/pypy/module/cpyext/pystate.py --- a/pypy/module/cpyext/pystate.py +++ b/pypy/module/cpyext/pystate.py @@ -1,12 +1,19 @@ from pypy.module.cpyext.api import ( cpython_api, generic_cpy_call, CANNOT_FAIL, CConfig, cpython_struct) +from pypy.module.cpyext.pyobject import PyObject, Py_DecRef, make_ref from pypy.rpython.lltypesystem import rffi, lltype PyInterpreterStateStruct = lltype.ForwardReference() PyInterpreterState = lltype.Ptr(PyInterpreterStateStruct) cpython_struct( - "PyInterpreterState", [('next', PyInterpreterState)], PyInterpreterStateStruct) -PyThreadState = lltype.Ptr(cpython_struct("PyThreadState", [('interp', PyInterpreterState)])) + "PyInterpreterState", + [('next', PyInterpreterState)], + PyInterpreterStateStruct) +PyThreadState = lltype.Ptr(cpython_struct( + "PyThreadState", + [('interp', PyInterpreterState), + ('dict', PyObject), + ])) @cpython_api([], PyThreadState, error=CANNOT_FAIL) def PyEval_SaveThread(space): @@ -38,41 +45,49 @@ return 1 # XXX: might be generally useful -def encapsulator(T, flavor='raw'): +def encapsulator(T, flavor='raw', dealloc=None): class MemoryCapsule(object): - def __init__(self, alloc=True): - if alloc: + def __init__(self, space): + self.space = space + if space is not None: self.memory = lltype.malloc(T, flavor=flavor) else: self.memory = lltype.nullptr(T) def __del__(self): if self.memory: + if dealloc and self.space: + 
dealloc(self.memory, self.space) lltype.free(self.memory, flavor=flavor) return MemoryCapsule -ThreadStateCapsule = encapsulator(PyThreadState.TO) +def ThreadState_dealloc(ts, space): + assert space is not None + Py_DecRef(space, ts.c_dict) +ThreadStateCapsule = encapsulator(PyThreadState.TO, + dealloc=ThreadState_dealloc) from pypy.interpreter.executioncontext import ExecutionContext -ExecutionContext.cpyext_threadstate = ThreadStateCapsule(alloc=False) +ExecutionContext.cpyext_threadstate = ThreadStateCapsule(None) class InterpreterState(object): def __init__(self, space): self.interpreter_state = lltype.malloc( PyInterpreterState.TO, flavor='raw', zero=True, immortal=True) - def new_thread_state(self): - capsule = ThreadStateCapsule() + def new_thread_state(self, space): + capsule = ThreadStateCapsule(space) ts = capsule.memory ts.c_interp = self.interpreter_state + ts.c_dict = make_ref(space, space.newdict()) return capsule def get_thread_state(self, space): ec = space.getexecutioncontext() - return self._get_thread_state(ec).memory + return self._get_thread_state(space, ec).memory - def _get_thread_state(self, ec): + def _get_thread_state(self, space, ec): if ec.cpyext_threadstate.memory == lltype.nullptr(PyThreadState.TO): - ec.cpyext_threadstate = self.new_thread_state() + ec.cpyext_threadstate = self.new_thread_state(space) return ec.cpyext_threadstate @@ -81,6 +96,11 @@ state = space.fromcache(InterpreterState) return state.get_thread_state(space) + at cpython_api([], PyObject, error=CANNOT_FAIL) +def PyThreadState_GetDict(space): + state = space.fromcache(InterpreterState) + return state.get_thread_state(space).c_dict + @cpython_api([PyThreadState], PyThreadState, error=CANNOT_FAIL) def PyThreadState_Swap(space, tstate): """Swap the current thread state with the thread state given by the argument diff --git a/pypy/module/cpyext/setobject.py b/pypy/module/cpyext/setobject.py --- a/pypy/module/cpyext/setobject.py +++ b/pypy/module/cpyext/setobject.py @@ -54,6 +54,20 @@ return 0 + at cpython_api([PyObject], PyObject) +def PySet_Pop(space, w_set): + """Return a new reference to an arbitrary object in the set, and removes the + object from the set. Return NULL on failure. Raise KeyError if the + set is empty. 
Raise a SystemError if set is an not an instance of + set or its subtype.""" + return space.call_method(w_set, "pop") + + at cpython_api([PyObject], rffi.INT_real, error=-1) +def PySet_Clear(space, w_set): + """Empty an existing set of all elements.""" + space.call_method(w_set, 'clear') + return 0 + @cpython_api([PyObject], Py_ssize_t, error=CANNOT_FAIL) def PySet_GET_SIZE(space, w_s): """Macro form of PySet_Size() without error checking.""" diff --git a/pypy/module/cpyext/slotdefs.py b/pypy/module/cpyext/slotdefs.py --- a/pypy/module/cpyext/slotdefs.py +++ b/pypy/module/cpyext/slotdefs.py @@ -185,6 +185,15 @@ space.fromcache(State).check_and_raise_exception(always=True) return space.wrap(res) +def wrap_delitem(space, w_self, w_args, func): + func_target = rffi.cast(objobjargproc, func) + check_num_args(space, w_args, 1) + w_key, = space.fixedview(w_args) + res = generic_cpy_call(space, func_target, w_self, w_key, None) + if rffi.cast(lltype.Signed, res) == -1: + space.fromcache(State).check_and_raise_exception(always=True) + return space.w_None + def wrap_ssizessizeargfunc(space, w_self, w_args, func): func_target = rffi.cast(ssizessizeargfunc, func) check_num_args(space, w_args, 2) @@ -291,6 +300,14 @@ def slot_nb_int(space, w_self): return space.int(w_self) + at cpython_api([PyObject], PyObject, external=False) +def slot_tp_iter(space, w_self): + return space.iter(w_self) + + at cpython_api([PyObject], PyObject, external=False) +def slot_tp_iternext(space, w_self): + return space.next(w_self) + from pypy.rlib.nonconst import NonConstant SLOTS = {} @@ -632,6 +649,19 @@ TPSLOT("__buffer__", "tp_as_buffer.c_bf_getreadbuffer", None, "wrap_getreadbuffer", ""), ) +# partial sort to solve some slot conflicts: +# Number slots before Mapping slots before Sequence slots. +# These are the only conflicts between __name__ methods +def slotdef_sort_key(slotdef): + if slotdef.slot_name.startswith('tp_as_number'): + return 1 + if slotdef.slot_name.startswith('tp_as_mapping'): + return 2 + if slotdef.slot_name.startswith('tp_as_sequence'): + return 3 + return 0 +slotdefs = sorted(slotdefs, key=slotdef_sort_key) + slotdefs_for_tp_slots = unrolling_iterable( [(x.method_name, x.slot_name, x.slot_names, x.slot_func) for x in slotdefs]) diff --git a/pypy/module/cpyext/stringobject.py b/pypy/module/cpyext/stringobject.py --- a/pypy/module/cpyext/stringobject.py +++ b/pypy/module/cpyext/stringobject.py @@ -267,6 +267,7 @@ alias.""" w_str = from_ref(space, string[0]) w_str = space.new_interned_w_str(w_str) + Py_DecRef(space, string[0]) string[0] = make_ref(space, w_str) @cpython_api([PyObject, rffi.CCHARP, rffi.CCHARP], PyObject) diff --git a/pypy/module/cpyext/stubs.py b/pypy/module/cpyext/stubs.py --- a/pypy/module/cpyext/stubs.py +++ b/pypy/module/cpyext/stubs.py @@ -1755,19 +1755,6 @@ building-up new frozensets with PySet_Add().""" raise NotImplementedError - at cpython_api([PyObject], PyObject) -def PySet_Pop(space, set): - """Return a new reference to an arbitrary object in the set, and removes the - object from the set. Return NULL on failure. Raise KeyError if the - set is empty. 
Raise a SystemError if set is an not an instance of - set or its subtype.""" - raise NotImplementedError - - at cpython_api([PyObject], rffi.INT_real, error=-1) -def PySet_Clear(space, set): - """Empty an existing set of all elements.""" - raise NotImplementedError - @cpython_api([rffi.CCHARP, Py_ssize_t, rffi.CCHARP, rffi.CCHARP], PyObject) def PyString_Decode(space, s, size, encoding, errors): """Create an object by decoding size bytes of the encoded buffer s using the diff --git a/pypy/module/cpyext/test/test_arraymodule.py b/pypy/module/cpyext/test/test_arraymodule.py --- a/pypy/module/cpyext/test/test_arraymodule.py +++ b/pypy/module/cpyext/test/test_arraymodule.py @@ -43,6 +43,15 @@ assert arr[:2].tolist() == [1,2] assert arr[1:3].tolist() == [2,3] + def test_slice_object(self): + module = self.import_module(name='array') + arr = module.array('i', [1,2,3,4]) + assert arr[slice(1,3)].tolist() == [2,3] + arr[slice(1,3)] = module.array('i', [21, 22, 23]) + assert arr.tolist() == [1, 21, 22, 23, 4] + del arr[slice(1, 3)] + assert arr.tolist() == [1, 23, 4] + def test_buffer(self): module = self.import_module(name='array') arr = module.array('i', [1,2,3,4]) diff --git a/pypy/module/cpyext/test/test_dictobject.py b/pypy/module/cpyext/test/test_dictobject.py --- a/pypy/module/cpyext/test/test_dictobject.py +++ b/pypy/module/cpyext/test/test_dictobject.py @@ -2,6 +2,7 @@ from pypy.module.cpyext.test.test_api import BaseApiTest from pypy.module.cpyext.api import Py_ssize_tP, PyObjectP from pypy.module.cpyext.pyobject import make_ref, from_ref +from pypy.interpreter.error import OperationError class TestDictObject(BaseApiTest): def test_dict(self, space, api): @@ -110,3 +111,13 @@ assert space.eq_w(space.len(w_copy), space.len(w_dict)) assert space.eq_w(w_copy, w_dict) + + def test_dictproxy(self, space, api): + w_dict = space.sys.get('modules') + w_proxy = api.PyDictProxy_New(w_dict) + assert space.is_true(space.contains(w_proxy, space.wrap('sys'))) + raises(OperationError, space.setitem, + w_proxy, space.wrap('sys'), space.w_None) + raises(OperationError, space.delitem, + w_proxy, space.wrap('sys')) + raises(OperationError, space.call_method, w_proxy, 'clear') diff --git a/pypy/module/cpyext/test/test_pystate.py b/pypy/module/cpyext/test/test_pystate.py --- a/pypy/module/cpyext/test/test_pystate.py +++ b/pypy/module/cpyext/test/test_pystate.py @@ -2,6 +2,7 @@ from pypy.module.cpyext.test.test_api import BaseApiTest from pypy.rpython.lltypesystem.lltype import nullptr from pypy.module.cpyext.pystate import PyInterpreterState, PyThreadState +from pypy.module.cpyext.pyobject import from_ref class AppTestThreads(AppTestCpythonExtensionBase): def test_allow_threads(self): @@ -49,3 +50,10 @@ api.PyEval_AcquireThread(tstate) api.PyEval_ReleaseThread(tstate) + + def test_threadstate_dict(self, space, api): + ts = api.PyThreadState_Get() + ref = ts.c_dict + assert ref == api.PyThreadState_GetDict() + w_obj = from_ref(space, ref) + assert space.isinstance_w(w_obj, space.w_dict) diff --git a/pypy/module/cpyext/test/test_setobject.py b/pypy/module/cpyext/test/test_setobject.py --- a/pypy/module/cpyext/test/test_setobject.py +++ b/pypy/module/cpyext/test/test_setobject.py @@ -32,3 +32,13 @@ w_set = api.PySet_New(space.wrap([1,2,3,4])) assert api.PySet_Contains(w_set, space.wrap(1)) assert not api.PySet_Contains(w_set, space.wrap(0)) + + def test_set_pop_clear(self, space, api): + w_set = api.PySet_New(space.wrap([1,2,3,4])) + w_obj = api.PySet_Pop(w_set) + assert space.int_w(w_obj) in (1,2,3,4) + assert 
space.len_w(w_set) == 3 + api.PySet_Clear(w_set) + assert space.len_w(w_set) == 0 + + diff --git a/pypy/module/cpyext/test/test_typeobject.py b/pypy/module/cpyext/test/test_typeobject.py --- a/pypy/module/cpyext/test/test_typeobject.py +++ b/pypy/module/cpyext/test/test_typeobject.py @@ -425,3 +425,32 @@ ''') obj = module.new_obj() raises(ZeroDivisionError, obj.__setitem__, 5, None) + + def test_tp_iter(self): + module = self.import_extension('foo', [ + ("tp_iter", "METH_O", + ''' + if (!args->ob_type->tp_iter) + { + PyErr_SetNone(PyExc_ValueError); + return NULL; + } + return args->ob_type->tp_iter(args); + ''' + ), + ("tp_iternext", "METH_O", + ''' + if (!args->ob_type->tp_iternext) + { + PyErr_SetNone(PyExc_ValueError); + return NULL; + } + return args->ob_type->tp_iternext(args); + ''' + ) + ]) + l = [1] + it = module.tp_iter(l) + assert type(it) is type(iter([])) + assert module.tp_iternext(it) == 1 + raises(StopIteration, module.tp_iternext, it) diff --git a/pypy/module/cpyext/test/test_unicodeobject.py b/pypy/module/cpyext/test/test_unicodeobject.py --- a/pypy/module/cpyext/test/test_unicodeobject.py +++ b/pypy/module/cpyext/test/test_unicodeobject.py @@ -420,3 +420,12 @@ w_seq = space.wrap([u'a', u'b']) w_joined = api.PyUnicode_Join(w_sep, w_seq) assert space.unwrap(w_joined) == u'ab' + + def test_fromordinal(self, space, api): + w_char = api.PyUnicode_FromOrdinal(65) + assert space.unwrap(w_char) == u'A' + w_char = api.PyUnicode_FromOrdinal(0) + assert space.unwrap(w_char) == u'\0' + w_char = api.PyUnicode_FromOrdinal(0xFFFF) + assert space.unwrap(w_char) == u'\uFFFF' + diff --git a/pypy/module/cpyext/unicodeobject.py b/pypy/module/cpyext/unicodeobject.py --- a/pypy/module/cpyext/unicodeobject.py +++ b/pypy/module/cpyext/unicodeobject.py @@ -395,6 +395,16 @@ w_str = space.wrap(rffi.charpsize2str(s, size)) return space.call_method(w_str, 'decode', space.wrap("utf-8")) + at cpython_api([rffi.INT_real], PyObject) +def PyUnicode_FromOrdinal(space, ordinal): + """Create a Unicode Object from the given Unicode code point ordinal. + + The ordinal must be in range(0x10000) on narrow Python builds + (UCS2), and range(0x110000) on wide builds (UCS4). A ValueError is + raised in case it is not.""" + w_ordinal = space.wrap(rffi.cast(lltype.Signed, ordinal)) + return space.call_function(space.builtin.get('unichr'), w_ordinal) + @cpython_api([PyObjectP, Py_ssize_t], rffi.INT_real, error=-1) def PyUnicode_Resize(space, ref, newsize): # XXX always create a new string so far diff --git a/pypy/module/oracle/interp_error.py b/pypy/module/oracle/interp_error.py --- a/pypy/module/oracle/interp_error.py +++ b/pypy/module/oracle/interp_error.py @@ -72,7 +72,7 @@ get(space).w_InternalError, space.wrap("No Oracle error?")) - self.code = codeptr[0] + self.code = rffi.cast(lltype.Signed, codeptr[0]) self.w_message = config.w_string(space, textbuf) finally: lltype.free(codeptr, flavor='raw') diff --git a/pypy/module/oracle/interp_variable.py b/pypy/module/oracle/interp_variable.py --- a/pypy/module/oracle/interp_variable.py +++ b/pypy/module/oracle/interp_variable.py @@ -359,14 +359,14 @@ # Verifies that truncation or other problems did not take place on # retrieve. 
if self.isVariableLength: - if rffi.cast(lltype.Signed, self.returnCode[pos]) != 0: + error_code = rffi.cast(lltype.Signed, self.returnCode[pos]) + if error_code != 0: error = W_Error(space, self.environment, "Variable_VerifyFetch()", 0) - error.code = self.returnCode[pos] + error.code = error_code error.message = space.wrap( "column at array pos %d fetched with error: %d" % - (pos, - rffi.cast(lltype.Signed, self.returnCode[pos]))) + (pos, error_code)) w_error = get(space).w_DatabaseError raise OperationError(get(space).w_DatabaseError, diff --git a/pypy/objspace/fake/objspace.py b/pypy/objspace/fake/objspace.py --- a/pypy/objspace/fake/objspace.py +++ b/pypy/objspace/fake/objspace.py @@ -326,4 +326,5 @@ return w_some_obj() FakeObjSpace.sys = FakeModule() FakeObjSpace.sys.filesystemencoding = 'foobar' +FakeObjSpace.sys.defaultencoding = 'ascii' FakeObjSpace.builtin = FakeModule() diff --git a/pypy/objspace/flow/flowcontext.py b/pypy/objspace/flow/flowcontext.py --- a/pypy/objspace/flow/flowcontext.py +++ b/pypy/objspace/flow/flowcontext.py @@ -410,7 +410,7 @@ w_new = Constant(newvalue) f = self.crnt_frame stack_items_w = f.locals_stack_w - for i in range(f.valuestackdepth-1, f.nlocals-1, -1): + for i in range(f.valuestackdepth-1, f.pycode.co_nlocals-1, -1): w_v = stack_items_w[i] if isinstance(w_v, Constant): if w_v.value is oldvalue: diff --git a/pypy/objspace/flow/test/test_framestate.py b/pypy/objspace/flow/test/test_framestate.py --- a/pypy/objspace/flow/test/test_framestate.py +++ b/pypy/objspace/flow/test/test_framestate.py @@ -25,7 +25,7 @@ dummy = Constant(None) #dummy.dummy = True arg_list = ([Variable() for i in range(formalargcount)] + - [dummy] * (frame.nlocals - formalargcount)) + [dummy] * (frame.pycode.co_nlocals - formalargcount)) frame.setfastscope(arg_list) return frame @@ -42,7 +42,7 @@ def test_neq_hacked_framestate(self): frame = self.getframe(self.func_simple) fs1 = FrameState(frame) - frame.locals_stack_w[frame.nlocals-1] = Variable() + frame.locals_stack_w[frame.pycode.co_nlocals-1] = Variable() fs2 = FrameState(frame) assert fs1 != fs2 @@ -55,7 +55,7 @@ def test_union_on_hacked_framestates(self): frame = self.getframe(self.func_simple) fs1 = FrameState(frame) - frame.locals_stack_w[frame.nlocals-1] = Variable() + frame.locals_stack_w[frame.pycode.co_nlocals-1] = Variable() fs2 = FrameState(frame) assert fs1.union(fs2) == fs2 # fs2 is more general assert fs2.union(fs1) == fs2 # fs2 is more general @@ -63,7 +63,7 @@ def test_restore_frame(self): frame = self.getframe(self.func_simple) fs1 = FrameState(frame) - frame.locals_stack_w[frame.nlocals-1] = Variable() + frame.locals_stack_w[frame.pycode.co_nlocals-1] = Variable() fs1.restoreframe(frame) assert fs1 == FrameState(frame) @@ -82,7 +82,7 @@ def test_getoutputargs(self): frame = self.getframe(self.func_simple) fs1 = FrameState(frame) - frame.locals_stack_w[frame.nlocals-1] = Variable() + frame.locals_stack_w[frame.pycode.co_nlocals-1] = Variable() fs2 = FrameState(frame) outputargs = fs1.getoutputargs(fs2) # 'x' -> 'x' is a Variable @@ -92,16 +92,16 @@ def test_union_different_constants(self): frame = self.getframe(self.func_simple) fs1 = FrameState(frame) - frame.locals_stack_w[frame.nlocals-1] = Constant(42) + frame.locals_stack_w[frame.pycode.co_nlocals-1] = Constant(42) fs2 = FrameState(frame) fs3 = fs1.union(fs2) fs3.restoreframe(frame) - assert isinstance(frame.locals_stack_w[frame.nlocals-1], Variable) - # ^^^ generalized + assert isinstance(frame.locals_stack_w[frame.pycode.co_nlocals-1], + Variable) # 
generalized def test_union_spectag(self): frame = self.getframe(self.func_simple) fs1 = FrameState(frame) - frame.locals_stack_w[frame.nlocals-1] = Constant(SpecTag()) + frame.locals_stack_w[frame.pycode.co_nlocals-1] = Constant(SpecTag()) fs2 = FrameState(frame) assert fs1.union(fs2) is None # UnionError diff --git a/pypy/translator/c/gcc/trackgcroot.py b/pypy/translator/c/gcc/trackgcroot.py --- a/pypy/translator/c/gcc/trackgcroot.py +++ b/pypy/translator/c/gcc/trackgcroot.py @@ -471,8 +471,8 @@ return [] IGNORE_OPS_WITH_PREFIXES = dict.fromkeys([ - 'cmp', 'test', 'set', 'sahf', 'lahf', 'cltd', 'cld', 'std', - 'rep', 'movs', 'lods', 'stos', 'scas', 'cwtl', 'cwde', 'prefetch', + 'cmp', 'test', 'set', 'sahf', 'lahf', 'cld', 'std', + 'rep', 'movs', 'lods', 'stos', 'scas', 'cwde', 'prefetch', # floating-point operations cannot produce GC pointers 'f', 'cvt', 'ucomi', 'comi', 'subs', 'subp' , 'adds', 'addp', 'xorp', @@ -485,6 +485,8 @@ 'bswap', 'bt', 'rdtsc', 'punpck', 'pshufd', 'pcmp', 'pand', 'psllw', 'pslld', 'psllq', 'paddq', 'pinsr', + # sign-extending moves should not produce GC pointers + 'cbtw', 'cwtl', 'cwtd', 'cltd', 'cltq', 'cqto', # zero-extending moves should not produce GC pointers 'movz', # locked operations should not move GC pointers, at least so far @@ -1695,6 +1697,8 @@ } """ elif self.format in ('elf64', 'darwin64'): + if self.format == 'elf64': # gentoo patch: hardened systems + print >> output, "\t.section .note.GNU-stack,\"\",%progbits" print >> output, "\t.text" print >> output, "\t.globl %s" % _globalname('pypy_asm_stackwalk') _variant(elf64='.type pypy_asm_stackwalk, @function', From noreply at buildbot.pypy.org Sat Feb 18 09:54:42 2012 From: noreply at buildbot.pypy.org (hakanardo) Date: Sat, 18 Feb 2012 09:54:42 +0100 (CET) Subject: [pypy-commit] pypy default: passing test Message-ID: <20120218085442.740E311B2E77@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: Changeset: r52599:eedaa25113ac Date: 2012-02-18 09:00 +0100 http://bitbucket.org/pypy/pypy/changeset/eedaa25113ac/ Log: passing test diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py @@ -7796,6 +7796,23 @@ """ self.optimize_loop(ops, expected) + def test_issue1048_ok(self): + ops = """ + [p1, i2, i3] + p16 = getfield_gc(p1, descr=nextdescr) + call(p16, descr=nonwritedescr) + guard_true(i2) [p16] + setfield_gc(p1, ConstPtr(myptr), descr=nextdescr) + jump(p1, i3, i2) + """ + expected = """ + [p1, i3] + call(ConstPtr(myptr), descr=nonwritedescr) + guard_true(i3) [] + jump(p1, 1) + """ + self.optimize_loop(ops, expected) + class TestLLtype(OptimizeOptTest, LLtypeMixin): pass From noreply at buildbot.pypy.org Sat Feb 18 09:54:43 2012 From: noreply at buildbot.pypy.org (hakanardo) Date: Sat, 18 Feb 2012 09:54:43 +0100 (CET) Subject: [pypy-commit] pypy default: use optimizer.getvalue instead of accessing values directly (should fix issue 1048) Message-ID: <20120218085443.A6DEA11B2E77@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: Changeset: r52600:a6294c0d0f47 Date: 2012-02-18 09:16 +0100 http://bitbucket.org/pypy/pypy/changeset/a6294c0d0f47/ Log: use optimizer.getvalue instead of accessing values directly (should fix issue 1048) diff --git a/pypy/jit/metainterp/optimizeopt/optimizer.py b/pypy/jit/metainterp/optimizeopt/optimizer.py --- a/pypy/jit/metainterp/optimizeopt/optimizer.py +++ 
b/pypy/jit/metainterp/optimizeopt/optimizer.py @@ -567,7 +567,7 @@ assert isinstance(descr, compile.ResumeGuardDescr) modifier = resume.ResumeDataVirtualAdder(descr, self.resumedata_memo) try: - newboxes = modifier.finish(self.values, self.pendingfields) + newboxes = modifier.finish(self, self.pendingfields) if len(newboxes) > self.metainterp_sd.options.failargs_limit: raise resume.TagOverflow except resume.TagOverflow: diff --git a/pypy/jit/metainterp/resume.py b/pypy/jit/metainterp/resume.py --- a/pypy/jit/metainterp/resume.py +++ b/pypy/jit/metainterp/resume.py @@ -182,23 +182,22 @@ # env numbering - def number(self, values, snapshot): + def number(self, optimizer, snapshot): if snapshot is None: return lltype.nullptr(NUMBERING), {}, 0 if snapshot in self.numberings: numb, liveboxes, v = self.numberings[snapshot] return numb, liveboxes.copy(), v - numb1, liveboxes, v = self.number(values, snapshot.prev) + numb1, liveboxes, v = self.number(optimizer, snapshot.prev) n = len(liveboxes)-v boxes = snapshot.boxes length = len(boxes) numb = lltype.malloc(NUMBERING, length) for i in range(length): box = boxes[i] - value = values.get(box, None) - if value is not None: - box = value.get_key_box() + value = optimizer.getvalue(box) + box = value.get_key_box() if isinstance(box, Const): tagged = self.getconst(box) @@ -318,14 +317,14 @@ _, tagbits = untag(tagged) return tagbits == TAGVIRTUAL - def finish(self, values, pending_setfields=[]): + def finish(self, optimizer, pending_setfields=[]): # compute the numbering storage = self.storage # make sure that nobody attached resume data to this guard yet assert not storage.rd_numb snapshot = storage.rd_snapshot assert snapshot is not None # is that true? - numb, liveboxes_from_env, v = self.memo.number(values, snapshot) + numb, liveboxes_from_env, v = self.memo.number(optimizer, snapshot) self.liveboxes_from_env = liveboxes_from_env self.liveboxes = {} storage.rd_numb = numb @@ -341,23 +340,23 @@ liveboxes[i] = box else: assert tagbits == TAGVIRTUAL - value = values[box] + value = optimizer.getvalue(box) value.get_args_for_fail(self) for _, box, fieldbox, _ in pending_setfields: self.register_box(box) self.register_box(fieldbox) - value = values[fieldbox] + value = optimizer.getvalue(fieldbox) value.get_args_for_fail(self) - self._number_virtuals(liveboxes, values, v) + self._number_virtuals(liveboxes, optimizer, v) self._add_pending_fields(pending_setfields) storage.rd_consts = self.memo.consts dump_storage(storage, liveboxes) return liveboxes[:] - def _number_virtuals(self, liveboxes, values, num_env_virtuals): + def _number_virtuals(self, liveboxes, optimizer, num_env_virtuals): # !! 'liveboxes' is a list that is extend()ed in-place !! 
memo = self.memo new_liveboxes = [None] * memo.num_cached_boxes() @@ -397,7 +396,7 @@ memo.nvholes += length - len(vfieldboxes) for virtualbox, fieldboxes in vfieldboxes.iteritems(): num, _ = untag(self.liveboxes[virtualbox]) - value = values[virtualbox] + value = optimizer.getvalue(virtualbox) fieldnums = [self._gettagged(box) for box in fieldboxes] vinfo = value.make_virtual_info(self, fieldnums) From noreply at buildbot.pypy.org Sat Feb 18 09:54:44 2012 From: noreply at buildbot.pypy.org (hakanardo) Date: Sat, 18 Feb 2012 09:54:44 +0100 (CET) Subject: [pypy-commit] pypy default: update tests to reflect api change introduced in a6294c0d0f47 Message-ID: <20120218085444.D67DF11B2E77@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: Changeset: r52601:1e95923a731f Date: 2012-02-18 09:53 +0100 http://bitbucket.org/pypy/pypy/changeset/1e95923a731f/ Log: update tests to reflect api change introduced in a6294c0d0f47 diff --git a/pypy/jit/metainterp/test/test_resume.py b/pypy/jit/metainterp/test/test_resume.py --- a/pypy/jit/metainterp/test/test_resume.py +++ b/pypy/jit/metainterp/test/test_resume.py @@ -18,6 +18,19 @@ rd_virtuals = None rd_pendingfields = None + +class FakeOptimizer(object): + def __init__(self, values): + self.values = values + + def getvalue(self, box): + try: + value = self.values[box] + except KeyError: + value = self.values[box] = OptValue(box) + return value + + def test_tag(): assert tag(3, 1) == rffi.r_short(3<<2|1) assert tag(-3, 2) == rffi.r_short(-3<<2|2) @@ -500,7 +513,7 @@ capture_resumedata(fs, None, [], storage) memo = ResumeDataLoopMemo(FakeMetaInterpStaticData()) modifier = ResumeDataVirtualAdder(storage, memo) - liveboxes = modifier.finish({}) + liveboxes = modifier.finish(FakeOptimizer({})) metainterp = MyMetaInterp() b1t, b2t, b3t = [BoxInt(), BoxPtr(), BoxInt()] @@ -524,7 +537,7 @@ capture_resumedata(fs, [b4], [], storage) memo = ResumeDataLoopMemo(FakeMetaInterpStaticData()) modifier = ResumeDataVirtualAdder(storage, memo) - liveboxes = modifier.finish({}) + liveboxes = modifier.finish(FakeOptimizer({})) metainterp = MyMetaInterp() b1t, b2t, b3t, b4t = [BoxInt(), BoxPtr(), BoxInt(), BoxPtr()] @@ -553,10 +566,10 @@ memo = ResumeDataLoopMemo(FakeMetaInterpStaticData()) modifier = ResumeDataVirtualAdder(storage, memo) - liveboxes = modifier.finish({}) + liveboxes = modifier.finish(FakeOptimizer({})) modifier = ResumeDataVirtualAdder(storage2, memo) - liveboxes2 = modifier.finish({}) + liveboxes2 = modifier.finish(FakeOptimizer({})) metainterp = MyMetaInterp() @@ -617,7 +630,7 @@ memo = ResumeDataLoopMemo(FakeMetaInterpStaticData()) values = {b2: virtual_value(b2, b5, c4)} modifier = ResumeDataVirtualAdder(storage, memo) - liveboxes = modifier.finish(values) + liveboxes = modifier.finish(FakeOptimizer(values)) assert len(storage.rd_virtuals) == 1 assert storage.rd_virtuals[0].fieldnums == [tag(-1, TAGBOX), tag(0, TAGCONST)] @@ -628,7 +641,7 @@ values = {b2: virtual_value(b2, b4, v6), b6: v6} memo.clear_box_virtual_numbers() modifier = ResumeDataVirtualAdder(storage2, memo) - liveboxes2 = modifier.finish(values) + liveboxes2 = modifier.finish(FakeOptimizer(values)) assert len(storage2.rd_virtuals) == 2 assert storage2.rd_virtuals[0].fieldnums == [tag(len(liveboxes2)-1, TAGBOX), tag(-1, TAGVIRTUAL)] @@ -674,7 +687,7 @@ memo = ResumeDataLoopMemo(FakeMetaInterpStaticData()) values = {b2: virtual_value(b2, b5, c4)} modifier = ResumeDataVirtualAdder(storage, memo) - liveboxes = modifier.finish(values) + liveboxes = modifier.finish(FakeOptimizer(values)) assert 
len(storage.rd_virtuals) == 1 assert storage.rd_virtuals[0].fieldnums == [tag(-1, TAGBOX), tag(0, TAGCONST)] @@ -684,7 +697,7 @@ capture_resumedata(fs, None, [], storage2) values[b4] = virtual_value(b4, b6, c4) modifier = ResumeDataVirtualAdder(storage2, memo) - liveboxes = modifier.finish(values) + liveboxes = modifier.finish(FakeOptimizer(values)) assert len(storage2.rd_virtuals) == 2 assert storage2.rd_virtuals[1].fieldnums == storage.rd_virtuals[0].fieldnums assert storage2.rd_virtuals[1] is storage.rd_virtuals[0] @@ -703,7 +716,7 @@ v1.setfield(LLtypeMixin.nextdescr, v2) values = {b1: v1, b2: v2} modifier = ResumeDataVirtualAdder(storage, memo) - liveboxes = modifier.finish(values) + liveboxes = modifier.finish(FakeOptimizer(values)) assert liveboxes == [b3] assert len(storage.rd_virtuals) == 2 assert storage.rd_virtuals[0].fieldnums == [tag(-1, TAGBOX), @@ -776,7 +789,7 @@ memo = ResumeDataLoopMemo(FakeMetaInterpStaticData()) - numb, liveboxes, v = memo.number({}, snap1) + numb, liveboxes, v = memo.number(FakeOptimizer({}), snap1) assert v == 0 assert liveboxes == {b1: tag(0, TAGBOX), b2: tag(1, TAGBOX), @@ -788,7 +801,7 @@ tag(0, TAGBOX), tag(2, TAGINT)] assert not numb.prev.prev - numb2, liveboxes2, v = memo.number({}, snap2) + numb2, liveboxes2, v = memo.number(FakeOptimizer({}), snap2) assert v == 0 assert liveboxes2 == {b1: tag(0, TAGBOX), b2: tag(1, TAGBOX), @@ -813,7 +826,8 @@ return self.virt # renamed - numb3, liveboxes3, v = memo.number({b3: FakeValue(False, c4)}, snap3) + numb3, liveboxes3, v = memo.number(FakeOptimizer({b3: FakeValue(False, c4)}), + snap3) assert v == 0 assert liveboxes3 == {b1: tag(0, TAGBOX), b2: tag(1, TAGBOX)} @@ -825,7 +839,8 @@ env4 = [c3, b4, b1, c3] snap4 = Snapshot(snap, env4) - numb4, liveboxes4, v = memo.number({b4: FakeValue(True, b4)}, snap4) + numb4, liveboxes4, v = memo.number(FakeOptimizer({b4: FakeValue(True, b4)}), + snap4) assert v == 1 assert liveboxes4 == {b1: tag(0, TAGBOX), b2: tag(1, TAGBOX), @@ -837,8 +852,9 @@ env5 = [b1, b4, b5] snap5 = Snapshot(snap4, env5) - numb5, liveboxes5, v = memo.number({b4: FakeValue(True, b4), - b5: FakeValue(True, b5)}, snap5) + numb5, liveboxes5, v = memo.number(FakeOptimizer({b4: FakeValue(True, b4), + b5: FakeValue(True, b5)}), + snap5) assert v == 2 assert liveboxes5 == {b1: tag(0, TAGBOX), b2: tag(1, TAGBOX), @@ -940,7 +956,7 @@ storage = make_storage(b1s, b2s, b3s) memo = ResumeDataLoopMemo(FakeMetaInterpStaticData()) modifier = ResumeDataVirtualAdder(storage, memo) - liveboxes = modifier.finish({}) + liveboxes = modifier.finish(FakeOptimizer({})) assert storage.rd_snapshot is None cpu = MyCPU([]) reader = ResumeDataDirectReader(MyMetaInterp(cpu), storage) @@ -954,14 +970,14 @@ storage = make_storage(b1s, b2s, b3s) memo = ResumeDataLoopMemo(FakeMetaInterpStaticData()) modifier = ResumeDataVirtualAdder(storage, memo) - modifier.finish({}) + modifier.finish(FakeOptimizer({})) assert len(memo.consts) == 2 assert storage.rd_consts is memo.consts b1s, b2s, b3s = [ConstInt(sys.maxint), ConstInt(2**17), ConstInt(-65)] storage2 = make_storage(b1s, b2s, b3s) modifier2 = ResumeDataVirtualAdder(storage2, memo) - modifier2.finish({}) + modifier2.finish(FakeOptimizer({})) assert len(memo.consts) == 3 assert storage2.rd_consts is memo.consts @@ -1022,7 +1038,7 @@ val = FakeValue() values = {b1s: val, b2s: val} - liveboxes = modifier.finish(values) + liveboxes = modifier.finish(FakeOptimizer(values)) assert storage.rd_snapshot is None b1t, b3t = [BoxInt(11), BoxInt(33)] newboxes = _resume_remap(liveboxes, 
[b1_2, b3s], b1t, b3t) @@ -1043,7 +1059,7 @@ storage = make_storage(b1s, b2s, b3s) memo = ResumeDataLoopMemo(FakeMetaInterpStaticData()) modifier = ResumeDataVirtualAdder(storage, memo) - liveboxes = modifier.finish({}) + liveboxes = modifier.finish(FakeOptimizer({})) b2t, b3t = [BoxPtr(demo55o), BoxInt(33)] newboxes = _resume_remap(liveboxes, [b2s, b3s], b2t, b3t) metainterp = MyMetaInterp() @@ -1086,7 +1102,7 @@ values = {b2s: v2, b4s: v4} liveboxes = [] - modifier._number_virtuals(liveboxes, values, 0) + modifier._number_virtuals(liveboxes, FakeOptimizer(values), 0) storage.rd_consts = memo.consts[:] storage.rd_numb = None # resume @@ -1156,7 +1172,7 @@ modifier.register_virtual_fields(b2s, [b4s, c1s]) liveboxes = [] values = {b2s: v2} - modifier._number_virtuals(liveboxes, values, 0) + modifier._number_virtuals(liveboxes, FakeOptimizer(values), 0) dump_storage(storage, liveboxes) storage.rd_consts = memo.consts[:] storage.rd_numb = None @@ -1203,7 +1219,7 @@ v2.setfield(LLtypeMixin.bdescr, OptValue(b4s)) modifier.register_virtual_fields(b2s, [c1s, b4s]) liveboxes = [] - modifier._number_virtuals(liveboxes, {b2s: v2}, 0) + modifier._number_virtuals(liveboxes, FakeOptimizer({b2s: v2}), 0) dump_storage(storage, liveboxes) storage.rd_consts = memo.consts[:] storage.rd_numb = None @@ -1249,7 +1265,7 @@ values = {b4s: v4, b2s: v2} liveboxes = [] - modifier._number_virtuals(liveboxes, values, 0) + modifier._number_virtuals(liveboxes, FakeOptimizer(values), 0) assert liveboxes == [b2s, b4s] or liveboxes == [b4s, b2s] modifier._add_pending_fields([(LLtypeMixin.nextdescr, b2s, b4s, -1)]) storage.rd_consts = memo.consts[:] From noreply at buildbot.pypy.org Sat Feb 18 10:15:11 2012 From: noreply at buildbot.pypy.org (arigo) Date: Sat, 18 Feb 2012 10:15:11 +0100 (CET) Subject: [pypy-commit] pypy stm-gc: Print some minimal information about the size of the collection. Message-ID: <20120218091511.AA6ED11B2E77@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-gc Changeset: r52602:e83210db5d76 Date: 2012-02-18 10:14 +0100 http://bitbucket.org/pypy/pypy/changeset/e83210db5d76/ Log: Print some minimal information about the size of the collection. diff --git a/pypy/rpython/memory/gc/stmgc.py b/pypy/rpython/memory/gc/stmgc.py --- a/pypy/rpython/memory/gc/stmgc.py +++ b/pypy/rpython/memory/gc/stmgc.py @@ -6,6 +6,7 @@ from pypy.rpython.annlowlevel import llhelper from pypy.rlib.rarithmetic import LONG_BIT from pypy.rlib.debug import ll_assert, debug_start, debug_stop, fatalerror +from pypy.rlib.debug import debug_print from pypy.module.thread import ll_thread @@ -462,6 +463,8 @@ # called a nursery). To simplify things, we use a global lock # around the whole mark-and-move. self.gc.acquire(self.gc.mutex_lock) + debug_print("local arena:", tls.nursery_free - tls.nursery_start, + "bytes") # # We are starting from the tldict's local objects as roots. 
At # this point, these objects have GCFLAG_WAS_COPIED, and the other From noreply at buildbot.pypy.org Sat Feb 18 11:18:15 2012 From: noreply at buildbot.pypy.org (fijal) Date: Sat, 18 Feb 2012 11:18:15 +0100 (CET) Subject: [pypy-commit] pypy backend-vector-ops: a failing test and a simplification Message-ID: <20120218101815.780A911B2E77@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: backend-vector-ops Changeset: r52603:bb1957c12123 Date: 2012-02-18 12:17 +0200 http://bitbucket.org/pypy/pypy/changeset/bb1957c12123/ Log: a failing test and a simplification diff --git a/pypy/jit/backend/test/runner_test.py b/pypy/jit/backend/test/runner_test.py --- a/pypy/jit/backend/test/runner_test.py +++ b/pypy/jit/backend/test/runner_test.py @@ -3164,7 +3164,34 @@ assert a[0] == 26 assert a[1] == 30 lltype.free(a, flavor='raw') + + def test_vector_ops_interiorfield(self): + if not self.cpu.supports_vector_ops: + py.test.skip("unsupported vector ops") + A = lltype.Array(lltype.Float, hints={'nolength': True, + 'memory_position_alignment': 16}) + fsize = rffi.sizeof(lltype.Float) + descr0 = self.cpu.interiorfielddescrof_dynamic(0, 1, fsize, False, True, + False) + looptoken = JitCellToken() + ops = parse(""" + [p0, p1] + vec0 = getarrayitem_vector_raw(p0, 0, descr=descr0) + vec1 = getarrayitem_vector_raw(p1, 0, descr=descr0) + vec2 = float_vector_add(vec0, vec1) + setarrayitem_vector_raw(p0, 0, vec2, descr=descr0) + finish() + """, namespace=locals()) + self.cpu.compile_loop(ops.inputargs, ops.operations, looptoken) + a = lltype.malloc(A, 10, flavor='raw') + a[0] = 13.0 + a[1] = 15.0 + self.cpu.execute_token(looptoken, a, a) + assert a[0] == 26 + assert a[1] == 30 + lltype.free(a, flavor='raw') + class OOtypeBackendTest(BaseBackendTest): diff --git a/pypy/jit/backend/x86/regalloc.py b/pypy/jit/backend/x86/regalloc.py --- a/pypy/jit/backend/x86/regalloc.py +++ b/pypy/jit/backend/x86/regalloc.py @@ -1006,7 +1006,7 @@ ofs = arraydescr.basesize size = arraydescr.itemsize sign = arraydescr.is_item_signed() - return size, ofs, sign + return imm(size), imm(ofs), sign def _unpack_fielddescr(self, fielddescr): assert isinstance(fielddescr, FieldDescr) @@ -1088,7 +1088,7 @@ itemsize, ofs, _ = self._unpack_arraydescr(op.getdescr()) args = op.getarglist() base_loc = self.rm.make_sure_var_in_reg(op.getarg(0), args) - if itemsize == 1: + if itemsize.value == 1: need_lower_byte = True else: need_lower_byte = False @@ -1097,7 +1097,7 @@ ofs_loc = self.rm.make_sure_var_in_reg(op.getarg(1), args) self.possibly_free_vars(args) self.PerformDiscard(op, [base_loc, ofs_loc, value_loc, - imm(itemsize), imm(ofs)]) + itemsize, ofs]) consider_setarrayitem_raw = consider_setarrayitem_gc consider_setarrayitem_vector_raw = consider_setarrayitem_gc @@ -1129,7 +1129,7 @@ sign_loc = imm1 else: sign_loc = imm0 - self.Perform(op, [base_loc, ofs_loc, imm(itemsize), imm(ofs), + self.Perform(op, [base_loc, ofs_loc, itemsize, ofs, sign_loc], result_loc) consider_getarrayitem_raw = consider_getarrayitem_gc From noreply at buildbot.pypy.org Sat Feb 18 11:18:27 2012 From: noreply at buildbot.pypy.org (fijal) Date: Sat, 18 Feb 2012 11:18:27 +0100 (CET) Subject: [pypy-commit] extradoc extradoc: work some more on slides Message-ID: <20120218101827.42D4C11B2E77@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: extradoc Changeset: r4089:22d1acba1fad Date: 2012-02-18 12:18 +0200 http://bitbucket.org/pypy/extradoc/changeset/22d1acba1fad/ Log: work some more on slides diff --git a/planning/micronumpy.txt 
b/planning/micronumpy.txt --- a/planning/micronumpy.txt +++ b/planning/micronumpy.txt @@ -42,3 +42,6 @@ - things like take/item/fancy indexing can use some knowledge about the density of data and either evaluate interesting points (without forcing) or do what they do now. + +- counting by element_size instead of by 1 and then multiply sounds + like a much faster option sometimes diff --git a/talk/sea2012/talk.rst b/talk/sea2012/talk.rst --- a/talk/sea2012/talk.rst +++ b/talk/sea2012/talk.rst @@ -5,8 +5,11 @@ ------------------------ * what is pypy and why + * numeric landscape in python + * what we achieved in pypy + * where we're going What is PyPy? @@ -52,7 +55,38 @@ Numerics in Python ------------------ -XXX numeric expressions, plots etc. +* ``numpy`` - for array operations + +* ``scipy``, ``scikits`` - various algorithms, also exposing C/fortran + libraries + +* ``matplotlib`` - pretty pictures + +* ``ipython`` + +There is an entire ecosystem! +----------------------------- + +* Which I don't even know very well + +* ``PyCUDA`` + +* ``pandas`` + +* ``mayavi`` + +What's important? +----------------- + +* There is an entire ecosystem built by people + +* It's available for free, no shady licensing + +* It's being expanded + +* It's growing + +* It'll keep up with hardware advancments Problems with numerics in python -------------------------------- @@ -92,12 +126,21 @@ * Assembler generation backend needs works -* No vectorization yet +* Vectorization in progress Status benchmarks ----------------- +* laplace solution + +* solutions: + + +---+ + | | + +---+ + This is just the beginning... ----------------------------- * PyPy is an easy platform to experiment with + From noreply at buildbot.pypy.org Sat Feb 18 11:25:26 2012 From: noreply at buildbot.pypy.org (arigo) Date: Sat, 18 Feb 2012 11:25:26 +0100 (CET) Subject: [pypy-commit] extradoc extradoc: Comment Message-ID: <20120218102526.D261C11B2E77@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: extradoc Changeset: r4090:8bf215d4add2 Date: 2012-02-18 11:25 +0100 http://bitbucket.org/pypy/extradoc/changeset/8bf215d4add2/ Log: Comment diff --git a/talk/sea2012/talk.rst b/talk/sea2012/talk.rst --- a/talk/sea2012/talk.rst +++ b/talk/sea2012/talk.rst @@ -113,6 +113,8 @@ Examples -------- +XXX say that the variables are e.g. 1-dim numpy arrays + * ``a + a`` would generate different code than ``a + b`` * ``a + b * c`` is as fast as a loop From noreply at buildbot.pypy.org Sat Feb 18 12:52:42 2012 From: noreply at buildbot.pypy.org (arigo) Date: Sat, 18 Feb 2012 12:52:42 +0100 (CET) Subject: [pypy-commit] pypy default: Add a test for a function in the _demo module. Message-ID: <20120218115242.81AF611B2E78@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r52604:4dcdc1ebfa2c Date: 2012-02-18 12:52 +0100 http://bitbucket.org/pypy/pypy/changeset/4dcdc1ebfa2c/ Log: Add a test for a function in the _demo module. Shows the structure of app tests. 
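A quick gloss on "the structure of app tests", with a minimal sketch of the pattern; the sketch is illustrative only — the w_limit attribute, the test name and the expected list are assumptions added here, not part of the changeset, whose actual test follows in the diff below:

    from pypy.conftest import gettestobjspace

    class AppTestSieve:
        def setup_class(cls):
            # interp-level setup: build a test object space with the
            # _demo mixed module enabled
            cls.space = gettestobjspace(usemodules=('_demo',))
            # attributes stored as cls.w_<name> are exposed to the
            # app-level test body as self.<name> (hypothetical value)
            cls.w_limit = cls.space.wrap(30)

        def test_sieve_small(self):
            # this body runs as application-level Python inside the
            # configured space, so it imports _demo like user code would
            import _demo
            assert _demo.sieve(self.limit) == [2, 3, 5, 7, 11, 13,
                                               17, 19, 23, 29]

The split is the point of the pattern: setup_class runs at interpreter level and can talk to the object space directly, while each test method is executed at app level inside that space.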
diff --git a/pypy/module/_demo/test/test_sieve.py b/pypy/module/_demo/test/test_sieve.py new file mode 100644 --- /dev/null +++ b/pypy/module/_demo/test/test_sieve.py @@ -0,0 +1,12 @@ +from pypy.conftest import gettestobjspace + + +class AppTestSieve: + def setup_class(cls): + cls.space = gettestobjspace(usemodules=('_demo',)) + + def test_sieve(self): + import _demo + lst = _demo.sieve(100) + assert lst == [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, + 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97] From noreply at buildbot.pypy.org Sat Feb 18 15:57:50 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Sat, 18 Feb 2012 15:57:50 +0100 (CET) Subject: [pypy-commit] pypy default: cpyext: Fix crash in PyDict_Next when the pointer for values is NULL. Message-ID: <20120218145750.0D56011B2E78@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: Changeset: r52605:568fc4237bf8 Date: 2012-02-18 15:56 +0100 http://bitbucket.org/pypy/pypy/changeset/568fc4237bf8/ Log: cpyext: Fix crash in PyDict_Next when the pointer for values is NULL. diff --git a/pypy/module/cpyext/dictobject.py b/pypy/module/cpyext/dictobject.py --- a/pypy/module/cpyext/dictobject.py +++ b/pypy/module/cpyext/dictobject.py @@ -184,8 +184,10 @@ w_item = space.call_method(w_iter, "next") w_key, w_value = space.fixedview(w_item, 2) state = space.fromcache(RefcountState) - pkey[0] = state.make_borrowed(w_dict, w_key) - pvalue[0] = state.make_borrowed(w_dict, w_value) + if pkey: + pkey[0] = state.make_borrowed(w_dict, w_key) + if pvalue: + pvalue[0] = state.make_borrowed(w_dict, w_value) ppos[0] += 1 except OperationError, e: if not e.match(space, space.w_StopIteration): diff --git a/pypy/module/cpyext/test/test_dictobject.py b/pypy/module/cpyext/test/test_dictobject.py --- a/pypy/module/cpyext/test/test_dictobject.py +++ b/pypy/module/cpyext/test/test_dictobject.py @@ -112,6 +112,37 @@ assert space.eq_w(space.len(w_copy), space.len(w_dict)) assert space.eq_w(w_copy, w_dict) + def test_iterkeys(self, space, api): + w_dict = space.sys.getdict(space) + py_dict = make_ref(space, w_dict) + + ppos = lltype.malloc(Py_ssize_tP.TO, 1, flavor='raw') + pkey = lltype.malloc(PyObjectP.TO, 1, flavor='raw') + pvalue = lltype.malloc(PyObjectP.TO, 1, flavor='raw') + + keys_w = [] + values_w = [] + try: + ppos[0] = 0 + while api.PyDict_Next(w_dict, ppos, pkey, None): + w_key = from_ref(space, pkey[0]) + keys_w.append(w_key) + ppos[0] = 0 + while api.PyDict_Next(w_dict, ppos, None, pvalue): + w_value = from_ref(space, pvalue[0]) + values_w.append(w_value) + finally: + lltype.free(ppos, flavor='raw') + lltype.free(pkey, flavor='raw') + lltype.free(pvalue, flavor='raw') + + api.Py_DecRef(py_dict) # release borrowed references + + assert space.eq_w(space.newlist(keys_w), + space.call_method(w_dict, "keys")) + assert space.eq_w(space.newlist(values_w), + space.call_method(w_dict, "values")) + def test_dictproxy(self, space, api): w_dict = space.sys.get('modules') w_proxy = api.PyDictProxy_New(w_dict) From noreply at buildbot.pypy.org Sat Feb 18 16:07:03 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Sat, 18 Feb 2012 16:07:03 +0100 (CET) Subject: [pypy-commit] pypy default: numpy: Added ufuncs for sinh, cosh, tanh Message-ID: <20120218150703.BE34311B2E78@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: Changeset: r52606:41b77bbdfddd Date: 2012-02-18 16:05 +0100 http://bitbucket.org/pypy/pypy/changeset/41b77bbdfddd/ Log: numpy: Added ufuncs for sinh, cosh, tanh diff --git a/pypy/module/micronumpy/__init__.py 
b/pypy/module/micronumpy/__init__.py --- a/pypy/module/micronumpy/__init__.py +++ b/pypy/module/micronumpy/__init__.py @@ -71,6 +71,7 @@ ("arctanh", "arctanh"), ("copysign", "copysign"), ("cos", "cos"), + ("cosh", "cosh"), ("divide", "divide"), ("true_divide", "true_divide"), ("equal", "equal"), @@ -90,9 +91,11 @@ ("reciprocal", "reciprocal"), ("sign", "sign"), ("sin", "sin"), + ("sinh", "sinh"), ("subtract", "subtract"), ('sqrt', 'sqrt'), ("tan", "tan"), + ("tanh", "tanh"), ('bitwise_and', 'bitwise_and'), ('bitwise_or', 'bitwise_or'), ('bitwise_xor', 'bitwise_xor'), diff --git a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py --- a/pypy/module/micronumpy/interp_ufuncs.py +++ b/pypy/module/micronumpy/interp_ufuncs.py @@ -435,6 +435,9 @@ ("arcsin", "arcsin", 1, {"promote_to_float": True}), ("arccos", "arccos", 1, {"promote_to_float": True}), ("arctan", "arctan", 1, {"promote_to_float": True}), + ("sinh", "sinh", 1, {"promote_to_float": True}), + ("cosh", "cosh", 1, {"promote_to_float": True}), + ("tanh", "tanh", 1, {"promote_to_float": True}), ("arcsinh", "arcsinh", 1, {"promote_to_float": True}), ("arctanh", "arctanh", 1, {"promote_to_float": True}), ]: diff --git a/pypy/module/micronumpy/test/test_ufuncs.py b/pypy/module/micronumpy/test/test_ufuncs.py --- a/pypy/module/micronumpy/test/test_ufuncs.py +++ b/pypy/module/micronumpy/test/test_ufuncs.py @@ -310,6 +310,33 @@ b = arctan(a) assert math.isnan(b[0]) + def test_sinh(self): + import math + from _numpypy import array, sinh + + a = array([-1, 0, 1, float('inf'), float('-inf')]) + b = sinh(a) + for i in range(len(a)): + assert b[i] == math.sinh(a[i]) + + def test_cosh(self): + import math + from _numpypy import array, cosh + + a = array([-1, 0, 1, float('inf'), float('-inf')]) + b = cosh(a) + for i in range(len(a)): + assert b[i] == math.cosh(a[i]) + + def test_tanh(self): + import math + from _numpypy import array, tanh + + a = array([-1, 0, 1, float('inf'), float('-inf')]) + b = tanh(a) + for i in range(len(a)): + assert b[i] == math.tanh(a[i]) + def test_arcsinh(self): import math from _numpypy import arcsinh diff --git a/pypy/module/micronumpy/types.py b/pypy/module/micronumpy/types.py --- a/pypy/module/micronumpy/types.py +++ b/pypy/module/micronumpy/types.py @@ -489,6 +489,18 @@ return math.atan(v) @simple_unary_op + def sinh(self, v): + return math.sinh(v) + + @simple_unary_op + def cosh(self, v): + return math.cosh(v) + + @simple_unary_op + def tanh(self, v): + return math.tanh(v) + + @simple_unary_op def arcsinh(self, v): return math.asinh(v) From noreply at buildbot.pypy.org Sat Feb 18 16:24:34 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Sat, 18 Feb 2012 16:24:34 +0100 (CET) Subject: [pypy-commit] pypy default: numpy: add ufunc for arccosh Message-ID: <20120218152434.6616A11B2E78@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: Changeset: r52607:0f03693b05ac Date: 2012-02-18 16:24 +0100 http://bitbucket.org/pypy/pypy/changeset/0f03693b05ac/ Log: numpy: add ufunc for arccosh diff --git a/pypy/module/micronumpy/__init__.py b/pypy/module/micronumpy/__init__.py --- a/pypy/module/micronumpy/__init__.py +++ b/pypy/module/micronumpy/__init__.py @@ -67,6 +67,7 @@ ("arccos", "arccos"), ("arcsin", "arcsin"), ("arctan", "arctan"), + ("arccosh", "arccosh"), ("arcsinh", "arcsinh"), ("arctanh", "arctanh"), ("copysign", "copysign"), diff --git a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py --- a/pypy/module/micronumpy/interp_ufuncs.py +++ 
b/pypy/module/micronumpy/interp_ufuncs.py @@ -439,6 +439,7 @@ ("cosh", "cosh", 1, {"promote_to_float": True}), ("tanh", "tanh", 1, {"promote_to_float": True}), ("arcsinh", "arcsinh", 1, {"promote_to_float": True}), + ("arccosh", "arccosh", 1, {"promote_to_float": True}), ("arctanh", "arctanh", 1, {"promote_to_float": True}), ]: self.add_ufunc(space, *ufunc_def) diff --git a/pypy/module/micronumpy/test/test_ufuncs.py b/pypy/module/micronumpy/test/test_ufuncs.py --- a/pypy/module/micronumpy/test/test_ufuncs.py +++ b/pypy/module/micronumpy/test/test_ufuncs.py @@ -345,6 +345,15 @@ assert math.asinh(v) == arcsinh(v) assert math.isnan(arcsinh(float("nan"))) + def test_arccosh(self): + import math + from _numpypy import arccosh + + for v in [1.0, 1.1, 2]: + assert math.acosh(v) == arccosh(v) + for v in [-1.0, 0, .99]: + assert math.isnan(arccosh(v)) + def test_arctanh(self): import math from _numpypy import arctanh diff --git a/pypy/module/micronumpy/types.py b/pypy/module/micronumpy/types.py --- a/pypy/module/micronumpy/types.py +++ b/pypy/module/micronumpy/types.py @@ -505,6 +505,12 @@ return math.asinh(v) @simple_unary_op + def arccosh(self, v): + if v < 1.0: + return rfloat.NAN + return math.acosh(v) + + @simple_unary_op def arctanh(self, v): if v == 1.0 or v == -1.0: return math.copysign(rfloat.INFINITY, v) From noreply at buildbot.pypy.org Sat Feb 18 17:06:07 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Sat, 18 Feb 2012 17:06:07 +0100 (CET) Subject: [pypy-commit] pypy sepcomp2: Finally found a way to add methods to controller classes Message-ID: <20120218160607.4CF1D11B2E78@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: sepcomp2 Changeset: r52608:64577a635477 Date: 2012-02-11 11:55 +0100 http://bitbucket.org/pypy/pypy/changeset/64577a635477/ Log: Finally found a way to add methods to controller classes diff --git a/pypy/rpython/controllerentry.py b/pypy/rpython/controllerentry.py --- a/pypy/rpython/controllerentry.py +++ b/pypy/rpython/controllerentry.py @@ -83,7 +83,18 @@ from pypy.rpython.rcontrollerentry import rtypedelegate return rtypedelegate(self.new, hop, revealargs=[], revealresult=True) + def bound_method_controller(self, attr): + class BoundMethod(object): pass + class BoundMethodController(Controller): + knowntype = BoundMethod + def call(_self, obj, *args): + return getattr(self, 'method_' + attr)(obj, *args) + return BoundMethodController() + bound_method_controller._annspecialcase_ = 'specialize:memo' + def getattr(self, obj, attr): + if hasattr(self, 'method_' + attr): + return self.bound_method_controller(attr).box(obj) return getattr(self, 'get_' + attr)(obj) getattr._annspecialcase_ = 'specialize:arg(0, 2)' diff --git a/pypy/rpython/test/test_controllerentry.py b/pypy/rpython/test/test_controllerentry.py --- a/pypy/rpython/test/test_controllerentry.py +++ b/pypy/rpython/test/test_controllerentry.py @@ -30,6 +30,9 @@ def set_foo(self, obj, value): value.append(obj) + def method_compute(self, obj, value): + return obj + value + def getitem(self, obj, key): return obj + key @@ -112,3 +115,16 @@ assert ''.join(res.item0.chars) == "4_bar" assert ''.join(res.item1.chars) == "4_foo" assert ''.join(res.item2.chars) == "4_baz" + +def fun4(a): + c = C(a) + return c.compute('bar') + +def test_boundmethods_annotate(): + a = RPythonAnnotator() + s = a.build_types(fun4, [a.bookkeeper.immutablevalue("5")]) + assert s.const == "5_bar" + +def test_boundmethods_specialize(): + res = interpret(fun4, ["5"]) + assert ''.join(res.chars) == "5_bar" From noreply 
at buildbot.pypy.org Sat Feb 18 17:06:09 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Sat, 18 Feb 2012 17:06:09 +0100 (CET) Subject: [pypy-commit] pypy sepcomp2: hg merge default Message-ID: <20120218160609.7C83E11B2E78@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: sepcomp2 Changeset: r52609:089ef654f9b8 Date: 2012-02-14 12:10 +0100 http://bitbucket.org/pypy/pypy/changeset/089ef654f9b8/ Log: hg merge default diff --git a/ctypes_configure/cbuild.py b/ctypes_configure/cbuild.py --- a/ctypes_configure/cbuild.py +++ b/ctypes_configure/cbuild.py @@ -206,8 +206,9 @@ cfiles += eci.separate_module_files include_dirs = list(eci.include_dirs) library_dirs = list(eci.library_dirs) - if sys.platform == 'darwin': # support Fink & Darwinports - for s in ('/sw/', '/opt/local/'): + if (sys.platform == 'darwin' or # support Fink & Darwinports + sys.platform.startswith('freebsd')): + for s in ('/sw/', '/opt/local/', '/usr/local/'): if s + 'include' not in include_dirs and \ os.path.exists(s + 'include'): include_dirs.append(s + 'include') @@ -380,9 +381,9 @@ self.link_extra += ['-pthread'] if sys.platform == 'win32': self.link_extra += ['/DEBUG'] # generate .pdb file - if sys.platform == 'darwin': - # support Fink & Darwinports - for s in ('/sw/', '/opt/local/'): + if (sys.platform == 'darwin' or # support Fink & Darwinports + sys.platform.startswith('freebsd')): + for s in ('/sw/', '/opt/local/', '/usr/local/'): if s + 'include' not in self.include_dirs and \ os.path.exists(s + 'include'): self.include_dirs.append(s + 'include') @@ -395,7 +396,6 @@ self.outputfilename = py.path.local(cfilenames[0]).new(ext=ext) else: self.outputfilename = py.path.local(outputfilename) - self.eci = eci def build(self, noerr=False): basename = self.outputfilename.new(ext='') @@ -436,7 +436,7 @@ old = cfile.dirpath().chdir() try: res = compiler.compile([cfile.basename], - include_dirs=self.eci.include_dirs, + include_dirs=self.include_dirs, extra_preargs=self.compile_extra) assert len(res) == 1 cobjfile = py.path.local(res[0]) @@ -445,9 +445,9 @@ finally: old.chdir() compiler.link_executable(objects, str(self.outputfilename), - libraries=self.eci.libraries, + libraries=self.libraries, extra_preargs=self.link_extra, - library_dirs=self.eci.library_dirs) + library_dirs=self.library_dirs) def build_executable(*args, **kwds): noerr = kwds.pop('noerr', False) diff --git a/lib-python/modified-2.7/UserDict.py b/lib-python/modified-2.7/UserDict.py --- a/lib-python/modified-2.7/UserDict.py +++ b/lib-python/modified-2.7/UserDict.py @@ -85,8 +85,12 @@ def __iter__(self): return iter(self.data) -import _abcoll -_abcoll.MutableMapping.register(IterableUserDict) +try: + import _abcoll +except ImportError: + pass # e.g. 
no '_weakref' module on this pypy +else: + _abcoll.MutableMapping.register(IterableUserDict) class DictMixin: diff --git a/lib_pypy/_subprocess.py b/lib_pypy/_subprocess.py --- a/lib_pypy/_subprocess.py +++ b/lib_pypy/_subprocess.py @@ -87,7 +87,7 @@ # Now the _subprocess module implementation -from ctypes import c_int as _c_int, byref as _byref +from ctypes import c_int as _c_int, byref as _byref, WinError as _WinError class _handle: def __init__(self, handle): @@ -116,7 +116,7 @@ res = _CreatePipe(_byref(read), _byref(write), None, size) if not res: - raise WindowsError("Error") + raise _WinError() return _handle(read.value), _handle(write.value) @@ -132,7 +132,7 @@ access, inherit, options) if not res: - raise WindowsError("Error") + raise _WinError() return _handle(target.value) DUPLICATE_SAME_ACCESS = 2 @@ -165,7 +165,7 @@ start_dir, _byref(si), _byref(pi)) if not res: - raise WindowsError("Error") + raise _WinError() return _handle(pi.hProcess), _handle(pi.hThread), pi.dwProcessID, pi.dwThreadID STARTF_USESHOWWINDOW = 0x001 @@ -178,7 +178,7 @@ res = _WaitForSingleObject(int(handle), milliseconds) if res < 0: - raise WindowsError("Error") + raise _WinError() return res INFINITE = 0xffffffff @@ -190,7 +190,7 @@ res = _GetExitCodeProcess(int(handle), _byref(code)) if not res: - raise WindowsError("Error") + raise _WinError() return code.value @@ -198,7 +198,7 @@ res = _TerminateProcess(int(handle), exitcode) if not res: - raise WindowsError("Error") + raise _WinError() def GetStdHandle(stdhandle): res = _GetStdHandle(stdhandle) diff --git a/lib_pypy/ctypes_config_cache/pyexpat.ctc.py b/lib_pypy/ctypes_config_cache/pyexpat.ctc.py deleted file mode 100644 --- a/lib_pypy/ctypes_config_cache/pyexpat.ctc.py +++ /dev/null @@ -1,45 +0,0 @@ -""" -'ctypes_configure' source for pyexpat.py. -Run this to rebuild _pyexpat_cache.py. 
-""" - -import ctypes -from ctypes import c_char_p, c_int, c_void_p, c_char -from ctypes_configure import configure -import dumpcache - - -class CConfigure: - _compilation_info_ = configure.ExternalCompilationInfo( - includes = ['expat.h'], - libraries = ['expat'], - pre_include_lines = [ - '#define XML_COMBINED_VERSION (10000*XML_MAJOR_VERSION+100*XML_MINOR_VERSION+XML_MICRO_VERSION)'], - ) - - XML_Char = configure.SimpleType('XML_Char', c_char) - XML_COMBINED_VERSION = configure.ConstantInteger('XML_COMBINED_VERSION') - for name in ['XML_PARAM_ENTITY_PARSING_NEVER', - 'XML_PARAM_ENTITY_PARSING_UNLESS_STANDALONE', - 'XML_PARAM_ENTITY_PARSING_ALWAYS']: - locals()[name] = configure.ConstantInteger(name) - - XML_Encoding = configure.Struct('XML_Encoding',[ - ('data', c_void_p), - ('convert', c_void_p), - ('release', c_void_p), - ('map', c_int * 256)]) - XML_Content = configure.Struct('XML_Content',[ - ('numchildren', c_int), - ('children', c_void_p), - ('name', c_char_p), - ('type', c_int), - ('quant', c_int), - ]) - # this is insanely stupid - XML_FALSE = configure.ConstantInteger('XML_FALSE') - XML_TRUE = configure.ConstantInteger('XML_TRUE') - -config = configure.configure(CConfigure) - -dumpcache.dumpcache2('pyexpat', config) diff --git a/lib_pypy/ctypes_config_cache/test/test_cache.py b/lib_pypy/ctypes_config_cache/test/test_cache.py --- a/lib_pypy/ctypes_config_cache/test/test_cache.py +++ b/lib_pypy/ctypes_config_cache/test/test_cache.py @@ -39,10 +39,6 @@ d = run('resource.ctc.py', '_resource_cache.py') assert 'RLIM_NLIMITS' in d -def test_pyexpat(): - d = run('pyexpat.ctc.py', '_pyexpat_cache.py') - assert 'XML_COMBINED_VERSION' in d - def test_locale(): d = run('locale.ctc.py', '_locale_cache.py') assert 'LC_ALL' in d diff --git a/lib_pypy/datetime.py b/lib_pypy/datetime.py --- a/lib_pypy/datetime.py +++ b/lib_pypy/datetime.py @@ -271,8 +271,9 @@ raise ValueError("%s()=%d, must be in -1439..1439" % (name, offset)) def _check_date_fields(year, month, day): - if not isinstance(year, (int, long)): - raise TypeError('int expected') + for value in [year, day]: + if not isinstance(value, (int, long)): + raise TypeError('int expected') if not MINYEAR <= year <= MAXYEAR: raise ValueError('year must be in %d..%d' % (MINYEAR, MAXYEAR), year) if not 1 <= month <= 12: @@ -282,8 +283,9 @@ raise ValueError('day must be in 1..%d' % dim, day) def _check_time_fields(hour, minute, second, microsecond): - if not isinstance(hour, (int, long)): - raise TypeError('int expected') + for value in [hour, minute, second, microsecond]: + if not isinstance(value, (int, long)): + raise TypeError('int expected') if not 0 <= hour <= 23: raise ValueError('hour must be in 0..23', hour) if not 0 <= minute <= 59: diff --git a/lib_pypy/pyexpat.py b/lib_pypy/pyexpat.py deleted file mode 100644 --- a/lib_pypy/pyexpat.py +++ /dev/null @@ -1,448 +0,0 @@ - -import ctypes -import ctypes.util -from ctypes import c_char_p, c_int, c_void_p, POINTER, c_char, c_wchar_p -import sys - -# load the platform-specific cache made by running pyexpat.ctc.py -from ctypes_config_cache._pyexpat_cache import * - -try: from __pypy__ import builtinify -except ImportError: builtinify = lambda f: f - - -lib = ctypes.CDLL(ctypes.util.find_library('expat')) - - -XML_Content.children = POINTER(XML_Content) -XML_Parser = ctypes.c_void_p # an opaque pointer -assert XML_Char is ctypes.c_char # this assumption is everywhere in -# cpython's expat, let's explode - -def declare_external(name, args, res): - func = getattr(lib, name) - func.args = args - 
func.restype = res - globals()[name] = func - -declare_external('XML_ParserCreate', [c_char_p], XML_Parser) -declare_external('XML_ParserCreateNS', [c_char_p, c_char], XML_Parser) -declare_external('XML_Parse', [XML_Parser, c_char_p, c_int, c_int], c_int) -currents = ['CurrentLineNumber', 'CurrentColumnNumber', - 'CurrentByteIndex'] -for name in currents: - func = getattr(lib, 'XML_Get' + name) - func.args = [XML_Parser] - func.restype = c_int - -declare_external('XML_SetReturnNSTriplet', [XML_Parser, c_int], None) -declare_external('XML_GetSpecifiedAttributeCount', [XML_Parser], c_int) -declare_external('XML_SetParamEntityParsing', [XML_Parser, c_int], None) -declare_external('XML_GetErrorCode', [XML_Parser], c_int) -declare_external('XML_StopParser', [XML_Parser, c_int], None) -declare_external('XML_ErrorString', [c_int], c_char_p) -declare_external('XML_SetBase', [XML_Parser, c_char_p], None) -if XML_COMBINED_VERSION >= 19505: - declare_external('XML_UseForeignDTD', [XML_Parser, c_int], None) - -declare_external('XML_SetUnknownEncodingHandler', [XML_Parser, c_void_p, - c_void_p], None) -declare_external('XML_FreeContentModel', [XML_Parser, POINTER(XML_Content)], - None) -declare_external('XML_ExternalEntityParserCreate', [XML_Parser,c_char_p, - c_char_p], - XML_Parser) - -handler_names = [ - 'StartElement', - 'EndElement', - 'ProcessingInstruction', - 'CharacterData', - 'UnparsedEntityDecl', - 'NotationDecl', - 'StartNamespaceDecl', - 'EndNamespaceDecl', - 'Comment', - 'StartCdataSection', - 'EndCdataSection', - 'Default', - 'DefaultHandlerExpand', - 'NotStandalone', - 'ExternalEntityRef', - 'StartDoctypeDecl', - 'EndDoctypeDecl', - 'EntityDecl', - 'XmlDecl', - 'ElementDecl', - 'AttlistDecl', - ] -if XML_COMBINED_VERSION >= 19504: - handler_names.append('SkippedEntity') -setters = {} - -for name in handler_names: - if name == 'DefaultHandlerExpand': - newname = 'XML_SetDefaultHandlerExpand' - else: - name += 'Handler' - newname = 'XML_Set' + name - cfunc = getattr(lib, newname) - cfunc.args = [XML_Parser, ctypes.c_void_p] - cfunc.result = ctypes.c_int - setters[name] = cfunc - -class ExpatError(Exception): - def __str__(self): - return self.s - -error = ExpatError - -class XMLParserType(object): - specified_attributes = 0 - ordered_attributes = 0 - returns_unicode = 1 - encoding = 'utf-8' - def __init__(self, encoding, namespace_separator, _hook_external_entity=False): - self.returns_unicode = 1 - if encoding: - self.encoding = encoding - if not _hook_external_entity: - if namespace_separator is None: - self.itself = XML_ParserCreate(encoding) - else: - self.itself = XML_ParserCreateNS(encoding, ord(namespace_separator)) - if not self.itself: - raise RuntimeError("Creating parser failed") - self._set_unknown_encoding_handler() - self.storage = {} - self.buffer = None - self.buffer_size = 8192 - self.character_data_handler = None - self.intern = {} - self.__exc_info = None - - def _flush_character_buffer(self): - if not self.buffer: - return - res = self._call_character_handler(''.join(self.buffer)) - self.buffer = [] - return res - - def _call_character_handler(self, buf): - if self.character_data_handler: - self.character_data_handler(buf) - - def _set_unknown_encoding_handler(self): - def UnknownEncoding(encodingData, name, info_p): - info = info_p.contents - s = ''.join([chr(i) for i in range(256)]) - u = s.decode(self.encoding, 'replace') - for i in range(len(u)): - if u[i] == u'\xfffd': - info.map[i] = -1 - else: - info.map[i] = ord(u[i]) - info.data = None - info.convert = None - 
info.release = None - return 1 - - CB = ctypes.CFUNCTYPE(c_int, c_void_p, c_char_p, POINTER(XML_Encoding)) - cb = CB(UnknownEncoding) - self._unknown_encoding_handler = (cb, UnknownEncoding) - XML_SetUnknownEncodingHandler(self.itself, cb, None) - - def _set_error(self, code): - e = ExpatError() - e.code = code - lineno = lib.XML_GetCurrentLineNumber(self.itself) - colno = lib.XML_GetCurrentColumnNumber(self.itself) - e.offset = colno - e.lineno = lineno - err = XML_ErrorString(code)[:200] - e.s = "%s: line: %d, column: %d" % (err, lineno, colno) - e.message = e.s - self._error = e - - def Parse(self, data, is_final=0): - res = XML_Parse(self.itself, data, len(data), is_final) - if res == 0: - self._set_error(XML_GetErrorCode(self.itself)) - if self.__exc_info: - exc_info = self.__exc_info - self.__exc_info = None - raise exc_info[0], exc_info[1], exc_info[2] - else: - raise self._error - self._flush_character_buffer() - return res - - def _sethandler(self, name, real_cb): - setter = setters[name] - try: - cb = self.storage[(name, real_cb)] - except KeyError: - cb = getattr(self, 'get_cb_for_%s' % name)(real_cb) - self.storage[(name, real_cb)] = cb - except TypeError: - # weellll... - cb = getattr(self, 'get_cb_for_%s' % name)(real_cb) - setter(self.itself, cb) - - def _wrap_cb(self, cb): - def f(*args): - try: - return cb(*args) - except: - self.__exc_info = sys.exc_info() - XML_StopParser(self.itself, XML_FALSE) - return f - - def get_cb_for_StartElementHandler(self, real_cb): - def StartElement(unused, name, attrs): - # unpack name and attrs - conv = self.conv - self._flush_character_buffer() - if self.specified_attributes: - max = XML_GetSpecifiedAttributeCount(self.itself) - else: - max = 0 - while attrs[max]: - max += 2 # copied - if self.ordered_attributes: - res = [attrs[i] for i in range(max)] - else: - res = {} - for i in range(0, max, 2): - res[conv(attrs[i])] = conv(attrs[i + 1]) - real_cb(conv(name), res) - StartElement = self._wrap_cb(StartElement) - CB = ctypes.CFUNCTYPE(None, c_void_p, c_char_p, POINTER(c_char_p)) - return CB(StartElement) - - def get_cb_for_ExternalEntityRefHandler(self, real_cb): - def ExternalEntity(unused, context, base, sysId, pubId): - self._flush_character_buffer() - conv = self.conv - res = real_cb(conv(context), conv(base), conv(sysId), - conv(pubId)) - if res is None: - return 0 - return res - ExternalEntity = self._wrap_cb(ExternalEntity) - CB = ctypes.CFUNCTYPE(c_int, c_void_p, *([c_char_p] * 4)) - return CB(ExternalEntity) - - def get_cb_for_CharacterDataHandler(self, real_cb): - def CharacterData(unused, s, lgt): - if self.buffer is None: - self._call_character_handler(self.conv(s[:lgt])) - else: - if len(self.buffer) + lgt > self.buffer_size: - self._flush_character_buffer() - if self.character_data_handler is None: - return - if lgt >= self.buffer_size: - self._call_character_handler(s[:lgt]) - self.buffer = [] - else: - self.buffer.append(s[:lgt]) - CharacterData = self._wrap_cb(CharacterData) - CB = ctypes.CFUNCTYPE(None, c_void_p, POINTER(c_char), c_int) - return CB(CharacterData) - - def get_cb_for_NotStandaloneHandler(self, real_cb): - def NotStandaloneHandler(unused): - return real_cb() - NotStandaloneHandler = self._wrap_cb(NotStandaloneHandler) - CB = ctypes.CFUNCTYPE(c_int, c_void_p) - return CB(NotStandaloneHandler) - - def get_cb_for_EntityDeclHandler(self, real_cb): - def EntityDecl(unused, ename, is_param, value, value_len, base, - system_id, pub_id, not_name): - self._flush_character_buffer() - if not value: - value = None - 
else: - value = value[:value_len] - args = [ename, is_param, value, base, system_id, - pub_id, not_name] - args = [self.conv(arg) for arg in args] - real_cb(*args) - EntityDecl = self._wrap_cb(EntityDecl) - CB = ctypes.CFUNCTYPE(None, c_void_p, c_char_p, c_int, c_char_p, - c_int, c_char_p, c_char_p, c_char_p, c_char_p) - return CB(EntityDecl) - - def _conv_content_model(self, model): - children = tuple([self._conv_content_model(model.children[i]) - for i in range(model.numchildren)]) - return (model.type, model.quant, self.conv(model.name), - children) - - def get_cb_for_ElementDeclHandler(self, real_cb): - def ElementDecl(unused, name, model): - self._flush_character_buffer() - modelobj = self._conv_content_model(model[0]) - real_cb(name, modelobj) - XML_FreeContentModel(self.itself, model) - - ElementDecl = self._wrap_cb(ElementDecl) - CB = ctypes.CFUNCTYPE(None, c_void_p, c_char_p, POINTER(XML_Content)) - return CB(ElementDecl) - - def _new_callback_for_string_len(name, sign): - def get_callback_for_(self, real_cb): - def func(unused, s, len): - self._flush_character_buffer() - arg = self.conv(s[:len]) - real_cb(arg) - func.func_name = name - func = self._wrap_cb(func) - CB = ctypes.CFUNCTYPE(*sign) - return CB(func) - get_callback_for_.func_name = 'get_cb_for_' + name - return get_callback_for_ - - for name in ['DefaultHandlerExpand', - 'DefaultHandler']: - sign = [None, c_void_p, POINTER(c_char), c_int] - name = 'get_cb_for_' + name - locals()[name] = _new_callback_for_string_len(name, sign) - - def _new_callback_for_starargs(name, sign): - def get_callback_for_(self, real_cb): - def func(unused, *args): - self._flush_character_buffer() - args = [self.conv(arg) for arg in args] - real_cb(*args) - func.func_name = name - func = self._wrap_cb(func) - CB = ctypes.CFUNCTYPE(*sign) - return CB(func) - get_callback_for_.func_name = 'get_cb_for_' + name - return get_callback_for_ - - for name, num_or_sign in [ - ('EndElementHandler', 1), - ('ProcessingInstructionHandler', 2), - ('UnparsedEntityDeclHandler', 5), - ('NotationDeclHandler', 4), - ('StartNamespaceDeclHandler', 2), - ('EndNamespaceDeclHandler', 1), - ('CommentHandler', 1), - ('StartCdataSectionHandler', 0), - ('EndCdataSectionHandler', 0), - ('StartDoctypeDeclHandler', [None, c_void_p] + [c_char_p] * 3 + [c_int]), - ('XmlDeclHandler', [None, c_void_p, c_char_p, c_char_p, c_int]), - ('AttlistDeclHandler', [None, c_void_p] + [c_char_p] * 4 + [c_int]), - ('EndDoctypeDeclHandler', 0), - ('SkippedEntityHandler', [None, c_void_p, c_char_p, c_int]), - ]: - if isinstance(num_or_sign, int): - sign = [None, c_void_p] + [c_char_p] * num_or_sign - else: - sign = num_or_sign - name = 'get_cb_for_' + name - locals()[name] = _new_callback_for_starargs(name, sign) - - def conv_unicode(self, s): - if s is None or isinstance(s, int): - return s - return s.decode(self.encoding, "strict") - - def __setattr__(self, name, value): - # forest of ifs... 
- if name in ['ordered_attributes', - 'returns_unicode', 'specified_attributes']: - if value: - if name == 'returns_unicode': - self.conv = self.conv_unicode - self.__dict__[name] = 1 - else: - if name == 'returns_unicode': - self.conv = lambda s: s - self.__dict__[name] = 0 - elif name == 'buffer_text': - if value: - self.buffer = [] - else: - self._flush_character_buffer() - self.buffer = None - elif name == 'buffer_size': - if not isinstance(value, int): - raise TypeError("Expected int") - if value <= 0: - raise ValueError("Expected positive int") - self.__dict__[name] = value - elif name == 'namespace_prefixes': - XML_SetReturnNSTriplet(self.itself, int(bool(value))) - elif name in setters: - if name == 'CharacterDataHandler': - # XXX we need to flush buffer here - self._flush_character_buffer() - self.character_data_handler = value - #print name - #print value - #print - self._sethandler(name, value) - else: - self.__dict__[name] = value - - def SetParamEntityParsing(self, arg): - XML_SetParamEntityParsing(self.itself, arg) - - if XML_COMBINED_VERSION >= 19505: - def UseForeignDTD(self, arg=True): - if arg: - flag = XML_TRUE - else: - flag = XML_FALSE - XML_UseForeignDTD(self.itself, flag) - - def __getattr__(self, name): - if name == 'buffer_text': - return self.buffer is not None - elif name in currents: - return getattr(lib, 'XML_Get' + name)(self.itself) - elif name == 'ErrorColumnNumber': - return lib.XML_GetCurrentColumnNumber(self.itself) - elif name == 'ErrorLineNumber': - return lib.XML_GetCurrentLineNumber(self.itself) - return self.__dict__[name] - - def ParseFile(self, file): - return self.Parse(file.read(), False) - - def SetBase(self, base): - XML_SetBase(self.itself, base) - - def ExternalEntityParserCreate(self, context, encoding=None): - """ExternalEntityParserCreate(context[, encoding]) - Create a parser for parsing an external entity based on the - information passed to the ExternalEntityRefHandler.""" - new_parser = XMLParserType(encoding, None, True) - new_parser.itself = XML_ExternalEntityParserCreate(self.itself, - context, encoding) - new_parser._set_unknown_encoding_handler() - return new_parser - - at builtinify -def ErrorString(errno): - return XML_ErrorString(errno)[:200] - - at builtinify -def ParserCreate(encoding=None, namespace_separator=None, intern=None): - if (not isinstance(encoding, str) and - not encoding is None): - raise TypeError("ParserCreate() argument 1 must be string or None, not %s" % encoding.__class__.__name__) - if (not isinstance(namespace_separator, str) and - not namespace_separator is None): - raise TypeError("ParserCreate() argument 2 must be string or None, not %s" % namespace_separator.__class__.__name__) - if namespace_separator is not None: - if len(namespace_separator) > 1: - raise ValueError('namespace_separator must be at most one character, omitted, or None') - if len(namespace_separator) == 0: - namespace_separator = None - return XMLParserType(encoding, namespace_separator) diff --git a/lib_pypy/pypy_test/test_pyexpat.py b/lib_pypy/pypy_test/test_pyexpat.py deleted file mode 100644 --- a/lib_pypy/pypy_test/test_pyexpat.py +++ /dev/null @@ -1,665 +0,0 @@ -# XXX TypeErrors on calling handlers, or on bad return values from a -# handler, are obscure and unhelpful. 
- -from __future__ import absolute_import -import StringIO, sys -import unittest, py - -from lib_pypy.ctypes_config_cache import rebuild -rebuild.rebuild_one('pyexpat.ctc.py') - -from lib_pypy import pyexpat -#from xml.parsers import expat -expat = pyexpat - -from test.test_support import sortdict, run_unittest - - -class TestSetAttribute: - def setup_method(self, meth): - self.parser = expat.ParserCreate(namespace_separator='!') - self.set_get_pairs = [ - [0, 0], - [1, 1], - [2, 1], - [0, 0], - ] - - def test_returns_unicode(self): - for x, y in self.set_get_pairs: - self.parser.returns_unicode = x - assert self.parser.returns_unicode == y - - def test_ordered_attributes(self): - for x, y in self.set_get_pairs: - self.parser.ordered_attributes = x - assert self.parser.ordered_attributes == y - - def test_specified_attributes(self): - for x, y in self.set_get_pairs: - self.parser.specified_attributes = x - assert self.parser.specified_attributes == y - - -data = '''\ - - - - - - - - - -%unparsed_entity; -]> - - - - Contents of subelements - - -&external_entity; -&skipped_entity; - -''' - - -# Produce UTF-8 output -class TestParse: - class Outputter: - def __init__(self): - self.out = [] - - def StartElementHandler(self, name, attrs): - self.out.append('Start element: ' + repr(name) + ' ' + - sortdict(attrs)) - - def EndElementHandler(self, name): - self.out.append('End element: ' + repr(name)) - - def CharacterDataHandler(self, data): - data = data.strip() - if data: - self.out.append('Character data: ' + repr(data)) - - def ProcessingInstructionHandler(self, target, data): - self.out.append('PI: ' + repr(target) + ' ' + repr(data)) - - def StartNamespaceDeclHandler(self, prefix, uri): - self.out.append('NS decl: ' + repr(prefix) + ' ' + repr(uri)) - - def EndNamespaceDeclHandler(self, prefix): - self.out.append('End of NS decl: ' + repr(prefix)) - - def StartCdataSectionHandler(self): - self.out.append('Start of CDATA section') - - def EndCdataSectionHandler(self): - self.out.append('End of CDATA section') - - def CommentHandler(self, text): - self.out.append('Comment: ' + repr(text)) - - def NotationDeclHandler(self, *args): - name, base, sysid, pubid = args - self.out.append('Notation declared: %s' %(args,)) - - def UnparsedEntityDeclHandler(self, *args): - entityName, base, systemId, publicId, notationName = args - self.out.append('Unparsed entity decl: %s' %(args,)) - - def NotStandaloneHandler(self): - self.out.append('Not standalone') - return 1 - - def ExternalEntityRefHandler(self, *args): - context, base, sysId, pubId = args - self.out.append('External entity ref: %s' %(args[1:],)) - return 1 - - def StartDoctypeDeclHandler(self, *args): - self.out.append(('Start doctype', args)) - return 1 - - def EndDoctypeDeclHandler(self): - self.out.append("End doctype") - return 1 - - def EntityDeclHandler(self, *args): - self.out.append(('Entity declaration', args)) - return 1 - - def XmlDeclHandler(self, *args): - self.out.append(('XML declaration', args)) - return 1 - - def ElementDeclHandler(self, *args): - self.out.append(('Element declaration', args)) - return 1 - - def AttlistDeclHandler(self, *args): - self.out.append(('Attribute list declaration', args)) - return 1 - - def SkippedEntityHandler(self, *args): - self.out.append(("Skipped entity", args)) - return 1 - - def DefaultHandler(self, userData): - pass - - def DefaultHandlerExpand(self, userData): - pass - - handler_names = [ - 'StartElementHandler', 'EndElementHandler', 'CharacterDataHandler', - 
'ProcessingInstructionHandler', 'UnparsedEntityDeclHandler', - 'NotationDeclHandler', 'StartNamespaceDeclHandler', - 'EndNamespaceDeclHandler', 'CommentHandler', - 'StartCdataSectionHandler', 'EndCdataSectionHandler', 'DefaultHandler', - 'DefaultHandlerExpand', 'NotStandaloneHandler', - 'ExternalEntityRefHandler', 'StartDoctypeDeclHandler', - 'EndDoctypeDeclHandler', 'EntityDeclHandler', 'XmlDeclHandler', - 'ElementDeclHandler', 'AttlistDeclHandler', 'SkippedEntityHandler', - ] - - def test_utf8(self): - - out = self.Outputter() - parser = expat.ParserCreate(namespace_separator='!') - for name in self.handler_names: - setattr(parser, name, getattr(out, name)) - parser.returns_unicode = 0 - parser.Parse(data, 1) - - # Verify output - operations = out.out - expected_operations = [ - ('XML declaration', (u'1.0', u'iso-8859-1', 0)), - 'PI: \'xml-stylesheet\' \'href="stylesheet.css"\'', - "Comment: ' comment data '", - "Not standalone", - ("Start doctype", ('quotations', 'quotations.dtd', None, 1)), - ('Element declaration', (u'root', (2, 0, None, ()))), - ('Attribute list declaration', ('root', 'attr1', 'CDATA', None, - 1)), - ('Attribute list declaration', ('root', 'attr2', 'CDATA', None, - 0)), - "Notation declared: ('notation', None, 'notation.jpeg', None)", - ('Entity declaration', ('acirc', 0, '\xc3\xa2', None, None, None, None)), - ('Entity declaration', ('external_entity', 0, None, None, - 'entity.file', None, None)), - "Unparsed entity decl: ('unparsed_entity', None, 'entity.file', None, 'notation')", - "Not standalone", - "End doctype", - "Start element: 'root' {'attr1': 'value1', 'attr2': 'value2\\xe1\\xbd\\x80'}", - "NS decl: 'myns' 'http://www.python.org/namespace'", - "Start element: 'http://www.python.org/namespace!subelement' {}", - "Character data: 'Contents of subelements'", - "End element: 'http://www.python.org/namespace!subelement'", - "End of NS decl: 'myns'", - "Start element: 'sub2' {}", - 'Start of CDATA section', - "Character data: 'contents of CDATA section'", - 'End of CDATA section', - "End element: 'sub2'", - "External entity ref: (None, 'entity.file', None)", - ('Skipped entity', ('skipped_entity', 0)), - "End element: 'root'", - ] - for operation, expected_operation in zip(operations, expected_operations): - assert operation == expected_operation - - def test_unicode(self): - # Try the parse again, this time producing Unicode output - out = self.Outputter() - parser = expat.ParserCreate(namespace_separator='!') - parser.returns_unicode = 1 - for name in self.handler_names: - setattr(parser, name, getattr(out, name)) - - parser.Parse(data, 1) - - operations = out.out - expected_operations = [ - ('XML declaration', (u'1.0', u'iso-8859-1', 0)), - 'PI: u\'xml-stylesheet\' u\'href="stylesheet.css"\'', - "Comment: u' comment data '", - "Not standalone", - ("Start doctype", ('quotations', 'quotations.dtd', None, 1)), - ('Element declaration', (u'root', (2, 0, None, ()))), - ('Attribute list declaration', ('root', 'attr1', 'CDATA', None, - 1)), - ('Attribute list declaration', ('root', 'attr2', 'CDATA', None, - 0)), - "Notation declared: (u'notation', None, u'notation.jpeg', None)", - ('Entity declaration', (u'acirc', 0, u'\xe2', None, None, None, - None)), - ('Entity declaration', (u'external_entity', 0, None, None, - u'entity.file', None, None)), - "Unparsed entity decl: (u'unparsed_entity', None, u'entity.file', None, u'notation')", - "Not standalone", - "End doctype", - "Start element: u'root' {u'attr1': u'value1', u'attr2': u'value2\\u1f40'}", - "NS decl: u'myns' 
u'http://www.python.org/namespace'", - "Start element: u'http://www.python.org/namespace!subelement' {}", - "Character data: u'Contents of subelements'", - "End element: u'http://www.python.org/namespace!subelement'", - "End of NS decl: u'myns'", - "Start element: u'sub2' {}", - 'Start of CDATA section', - "Character data: u'contents of CDATA section'", - 'End of CDATA section', - "End element: u'sub2'", - "External entity ref: (None, u'entity.file', None)", - ('Skipped entity', ('skipped_entity', 0)), - "End element: u'root'", - ] - for operation, expected_operation in zip(operations, expected_operations): - assert operation == expected_operation - - def test_parse_file(self): - # Try parsing a file - out = self.Outputter() - parser = expat.ParserCreate(namespace_separator='!') - parser.returns_unicode = 1 - for name in self.handler_names: - setattr(parser, name, getattr(out, name)) - file = StringIO.StringIO(data) - - parser.ParseFile(file) - - operations = out.out - expected_operations = [ - ('XML declaration', (u'1.0', u'iso-8859-1', 0)), - 'PI: u\'xml-stylesheet\' u\'href="stylesheet.css"\'', - "Comment: u' comment data '", - "Not standalone", - ("Start doctype", ('quotations', 'quotations.dtd', None, 1)), - ('Element declaration', (u'root', (2, 0, None, ()))), - ('Attribute list declaration', ('root', 'attr1', 'CDATA', None, - 1)), - ('Attribute list declaration', ('root', 'attr2', 'CDATA', None, - 0)), - "Notation declared: (u'notation', None, u'notation.jpeg', None)", - ('Entity declaration', ('acirc', 0, u'\xe2', None, None, None, None)), - ('Entity declaration', (u'external_entity', 0, None, None, u'entity.file', None, None)), - "Unparsed entity decl: (u'unparsed_entity', None, u'entity.file', None, u'notation')", - "Not standalone", - "End doctype", - "Start element: u'root' {u'attr1': u'value1', u'attr2': u'value2\\u1f40'}", - "NS decl: u'myns' u'http://www.python.org/namespace'", - "Start element: u'http://www.python.org/namespace!subelement' {}", - "Character data: u'Contents of subelements'", - "End element: u'http://www.python.org/namespace!subelement'", - "End of NS decl: u'myns'", - "Start element: u'sub2' {}", - 'Start of CDATA section', - "Character data: u'contents of CDATA section'", - 'End of CDATA section', - "End element: u'sub2'", - "External entity ref: (None, u'entity.file', None)", - ('Skipped entity', ('skipped_entity', 0)), - "End element: u'root'", - ] - for operation, expected_operation in zip(operations, expected_operations): - assert operation == expected_operation - - -class TestNamespaceSeparator: - def test_legal(self): - # Tests that make sure we get errors when the namespace_separator value - # is illegal, and that we don't for good values: - expat.ParserCreate() - expat.ParserCreate(namespace_separator=None) - expat.ParserCreate(namespace_separator=' ') - - def test_illegal(self): - try: - expat.ParserCreate(namespace_separator=42) - raise AssertionError - except TypeError, e: - assert str(e) == ( - 'ParserCreate() argument 2 must be string or None, not int') - - try: - expat.ParserCreate(namespace_separator='too long') - raise AssertionError - except ValueError, e: - assert str(e) == ( - 'namespace_separator must be at most one character, omitted, or None') - - def test_zero_length(self): - # ParserCreate() needs to accept a namespace_separator of zero length - # to satisfy the requirements of RDF applications that are required - # to simply glue together the namespace URI and the localname. 
Though - # considered a wart of the RDF specifications, it needs to be supported. - # - # See XML-SIG mailing list thread starting with - # http://mail.python.org/pipermail/xml-sig/2001-April/005202.html - # - expat.ParserCreate(namespace_separator='') # too short - - -class TestInterning: - def test(self): - py.test.skip("Not working") - # Test the interning machinery. - p = expat.ParserCreate() - L = [] - def collector(name, *args): - L.append(name) - p.StartElementHandler = collector - p.EndElementHandler = collector - p.Parse(" ", 1) - tag = L[0] - assert len(L) == 6 - for entry in L: - # L should have the same string repeated over and over. - assert tag is entry - - -class TestBufferText: - def setup_method(self, meth): - self.stuff = [] - self.parser = expat.ParserCreate() - self.parser.buffer_text = 1 - self.parser.CharacterDataHandler = self.CharacterDataHandler - - def check(self, expected, label): - assert self.stuff == expected, ( - "%s\nstuff = %r\nexpected = %r" - % (label, self.stuff, map(unicode, expected))) - - def CharacterDataHandler(self, text): - self.stuff.append(text) - - def StartElementHandler(self, name, attrs): - self.stuff.append("<%s>" % name) - bt = attrs.get("buffer-text") - if bt == "yes": - self.parser.buffer_text = 1 - elif bt == "no": - self.parser.buffer_text = 0 - - def EndElementHandler(self, name): - self.stuff.append("" % name) - - def CommentHandler(self, data): - self.stuff.append("" % data) - - def setHandlers(self, handlers=[]): - for name in handlers: - setattr(self.parser, name, getattr(self, name)) - - def test_default_to_disabled(self): - parser = expat.ParserCreate() - assert not parser.buffer_text - - def test_buffering_enabled(self): - # Make sure buffering is turned on - assert self.parser.buffer_text - self.parser.Parse("123", 1) - assert self.stuff == ['123'], ( - "buffered text not properly collapsed") - - def test1(self): - # XXX This test exposes more detail of Expat's text chunking than we - # XXX like, but it tests what we need to concisely. 
- self.setHandlers(["StartElementHandler"]) - self.parser.Parse("12\n34\n5", 1) - assert self.stuff == ( - ["", "1", "", "2", "\n", "3", "", "4\n5"]), ( - "buffering control not reacting as expected") - - def test2(self): - self.parser.Parse("1<2> \n 3", 1) - assert self.stuff == ["1<2> \n 3"], ( - "buffered text not properly collapsed") - - def test3(self): - self.setHandlers(["StartElementHandler"]) - self.parser.Parse("123", 1) - assert self.stuff == ["", "1", "", "2", "", "3"], ( - "buffered text not properly split") - - def test4(self): - self.setHandlers(["StartElementHandler", "EndElementHandler"]) - self.parser.CharacterDataHandler = None - self.parser.Parse("123", 1) - assert self.stuff == ( - ["", "", "", "", "", ""]) - - def test5(self): - self.setHandlers(["StartElementHandler", "EndElementHandler"]) - self.parser.Parse("123", 1) - assert self.stuff == ( - ["", "1", "", "", "2", "", "", "3", ""]) - - def test6(self): - self.setHandlers(["CommentHandler", "EndElementHandler", - "StartElementHandler"]) - self.parser.Parse("12345 ", 1) - assert self.stuff == ( - ["", "1", "", "", "2", "", "", "345", ""]), ( - "buffered text not properly split") - - def test7(self): - self.setHandlers(["CommentHandler", "EndElementHandler", - "StartElementHandler"]) - self.parser.Parse("12345 ", 1) - assert self.stuff == ( - ["", "1", "", "", "2", "", "", "3", - "", "4", "", "5", ""]), ( - "buffered text not properly split") - - -# Test handling of exception from callback: -class TestHandlerException: - def StartElementHandler(self, name, attrs): - raise RuntimeError(name) - - def test(self): - parser = expat.ParserCreate() - parser.StartElementHandler = self.StartElementHandler - try: - parser.Parse("", 1) - raise AssertionError - except RuntimeError, e: - assert e.args[0] == 'a', ( - "Expected RuntimeError for element 'a', but" + \ - " found %r" % e.args[0]) - - -# Test Current* members: -class TestPosition: - def StartElementHandler(self, name, attrs): - self.check_pos('s') - - def EndElementHandler(self, name): - self.check_pos('e') - - def check_pos(self, event): - pos = (event, - self.parser.CurrentByteIndex, - self.parser.CurrentLineNumber, - self.parser.CurrentColumnNumber) - assert self.upto < len(self.expected_list) - expected = self.expected_list[self.upto] - assert pos == expected, ( - 'Expected position %s, got position %s' %(pos, expected)) - self.upto += 1 - - def test(self): - self.parser = expat.ParserCreate() - self.parser.StartElementHandler = self.StartElementHandler - self.parser.EndElementHandler = self.EndElementHandler - self.upto = 0 - self.expected_list = [('s', 0, 1, 0), ('s', 5, 2, 1), ('s', 11, 3, 2), - ('e', 15, 3, 6), ('e', 17, 4, 1), ('e', 22, 5, 0)] - - xml = '\n \n \n \n' - self.parser.Parse(xml, 1) - - -class Testsf1296433: - def test_parse_only_xml_data(self): - # http://python.org/sf/1296433 - # - xml = "%s" % ('a' * 1025) - # this one doesn't crash - #xml = "%s" % ('a' * 10000) - - class SpecificException(Exception): - pass - - def handler(text): - raise SpecificException - - parser = expat.ParserCreate() - parser.CharacterDataHandler = handler - - py.test.raises(Exception, parser.Parse, xml) - -class TestChardataBuffer: - """ - test setting of chardata buffer size - """ - - def test_1025_bytes(self): - assert self.small_buffer_test(1025) == 2 - - def test_1000_bytes(self): - assert self.small_buffer_test(1000) == 1 - - def test_wrong_size(self): - parser = expat.ParserCreate() - parser.buffer_text = 1 - def f(size): - parser.buffer_size = size - - 
py.test.raises(TypeError, f, sys.maxint+1) - py.test.raises(ValueError, f, -1) - py.test.raises(ValueError, f, 0) - - def test_unchanged_size(self): - xml1 = ("%s" % ('a' * 512)) - xml2 = 'a'*512 + '' - parser = expat.ParserCreate() - parser.CharacterDataHandler = self.counting_handler - parser.buffer_size = 512 - parser.buffer_text = 1 - - # Feed 512 bytes of character data: the handler should be called - # once. - self.n = 0 - parser.Parse(xml1) - assert self.n == 1 - - # Reassign to buffer_size, but assign the same size. - parser.buffer_size = parser.buffer_size - assert self.n == 1 - - # Try parsing rest of the document - parser.Parse(xml2) - assert self.n == 2 - - - def test_disabling_buffer(self): - xml1 = "%s" % ('a' * 512) - xml2 = ('b' * 1024) - xml3 = "%s" % ('c' * 1024) - parser = expat.ParserCreate() - parser.CharacterDataHandler = self.counting_handler - parser.buffer_text = 1 - parser.buffer_size = 1024 - assert parser.buffer_size == 1024 - - # Parse one chunk of XML - self.n = 0 - parser.Parse(xml1, 0) - assert parser.buffer_size == 1024 - assert self.n == 1 - - # Turn off buffering and parse the next chunk. - parser.buffer_text = 0 - assert not parser.buffer_text - assert parser.buffer_size == 1024 - for i in range(10): - parser.Parse(xml2, 0) - assert self.n == 11 - - parser.buffer_text = 1 - assert parser.buffer_text - assert parser.buffer_size == 1024 - parser.Parse(xml3, 1) - assert self.n == 12 - - - - def make_document(self, bytes): - return ("" + bytes * 'a' + '') - - def counting_handler(self, text): - self.n += 1 - - def small_buffer_test(self, buffer_len): - xml = "%s" % ('a' * buffer_len) - parser = expat.ParserCreate() - parser.CharacterDataHandler = self.counting_handler - parser.buffer_size = 1024 - parser.buffer_text = 1 - - self.n = 0 - parser.Parse(xml) - return self.n - - def test_change_size_1(self): - xml1 = "%s" % ('a' * 1024) - xml2 = "aaa%s" % ('a' * 1025) - parser = expat.ParserCreate() - parser.CharacterDataHandler = self.counting_handler - parser.buffer_text = 1 - parser.buffer_size = 1024 - assert parser.buffer_size == 1024 - - self.n = 0 - parser.Parse(xml1, 0) - parser.buffer_size *= 2 - assert parser.buffer_size == 2048 - parser.Parse(xml2, 1) - assert self.n == 2 - - def test_change_size_2(self): - xml1 = "a%s" % ('a' * 1023) - xml2 = "aaa%s" % ('a' * 1025) - parser = expat.ParserCreate() - parser.CharacterDataHandler = self.counting_handler - parser.buffer_text = 1 - parser.buffer_size = 2048 - assert parser.buffer_size == 2048 - - self.n=0 - parser.Parse(xml1, 0) - parser.buffer_size /= 2 - assert parser.buffer_size == 1024 - parser.Parse(xml2, 1) - assert self.n == 4 - - def test_segfault(self): - py.test.raises(TypeError, expat.ParserCreate, 1234123123) - -def test_invalid_data(): - parser = expat.ParserCreate() - parser.Parse('invalid.xml', 0) - try: - parser.Parse("", 1) - except expat.ExpatError, e: - assert e.code == 2 # XXX is this reliable? - assert e.lineno == 1 - assert e.message.startswith('syntax error') - else: - py.test.fail("Did not raise") - diff --git a/pypy/doc/coding-guide.rst b/pypy/doc/coding-guide.rst --- a/pypy/doc/coding-guide.rst +++ b/pypy/doc/coding-guide.rst @@ -388,7 +388,9 @@ In a few cases (e.g. hash table manipulation), we need machine-sized unsigned arithmetic. For these cases there is the r_uint class, which is a pure Python implementation of word-sized unsigned integers that silently wrap - around. The purpose of this class (as opposed to helper functions as above) + around. 
("word-sized" and "machine-sized" are used equivalently and mean + the native size, which you get using "unsigned long" in C.) + The purpose of this class (as opposed to helper functions as above) is consistent typing: both Python and the annotator will propagate r_uint instances in the program and interpret all the operations between them as unsigned. Instances of r_uint are special-cased by the code generators to diff --git a/pypy/doc/config/objspace.usemodules.pyexpat.txt b/pypy/doc/config/objspace.usemodules.pyexpat.txt --- a/pypy/doc/config/objspace.usemodules.pyexpat.txt +++ b/pypy/doc/config/objspace.usemodules.pyexpat.txt @@ -1,2 +1,1 @@ -Use (experimental) pyexpat module written in RPython, instead of CTypes -version which is used by default. +Use the pyexpat module, written in RPython. diff --git a/pypy/module/cpyext/api.py b/pypy/module/cpyext/api.py --- a/pypy/module/cpyext/api.py +++ b/pypy/module/cpyext/api.py @@ -23,6 +23,7 @@ from pypy.interpreter.function import StaticMethod from pypy.objspace.std.sliceobject import W_SliceObject from pypy.module.__builtin__.descriptor import W_Property +from pypy.module.__builtin__.interp_classobj import W_ClassObject from pypy.module.__builtin__.interp_memoryview import W_MemoryView from pypy.rlib.entrypoint import entrypoint from pypy.rlib.unroll import unrolling_iterable @@ -397,6 +398,7 @@ 'Module': 'space.gettypeobject(Module.typedef)', 'Property': 'space.gettypeobject(W_Property.typedef)', 'Slice': 'space.gettypeobject(W_SliceObject.typedef)', + 'Class': 'space.gettypeobject(W_ClassObject.typedef)', 'StaticMethod': 'space.gettypeobject(StaticMethod.typedef)', 'CFunction': 'space.gettypeobject(cpyext.methodobject.W_PyCFunctionObject.typedef)', 'WrapperDescr': 'space.gettypeobject(cpyext.methodobject.W_PyCMethodObject.typedef)' diff --git a/pypy/module/cpyext/test/test_classobject.py b/pypy/module/cpyext/test/test_classobject.py --- a/pypy/module/cpyext/test/test_classobject.py +++ b/pypy/module/cpyext/test/test_classobject.py @@ -1,4 +1,5 @@ from pypy.module.cpyext.test.test_api import BaseApiTest +from pypy.module.cpyext.test.test_cpyext import AppTestCpythonExtensionBase from pypy.interpreter.function import Function, Method class TestClassObject(BaseApiTest): @@ -51,3 +52,14 @@ assert api.PyInstance_Check(w_instance) assert space.is_true(space.call_method(space.builtin, "isinstance", w_instance, w_class)) + +class AppTestStringObject(AppTestCpythonExtensionBase): + def test_class_type(self): + module = self.import_extension('foo', [ + ("get_classtype", "METH_NOARGS", + """ + Py_INCREF(&PyClass_Type); + return &PyClass_Type; + """)]) + class C: pass + assert module.get_classtype() is type(C) diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -4,17 +4,17 @@ from pypy.interpreter.typedef import TypeDef, GetSetProperty from pypy.module.micronumpy import (interp_ufuncs, interp_dtype, interp_boxes, signature, support, loop) +from pypy.module.micronumpy.appbridge import get_appbridge_cache +from pypy.module.micronumpy.dot import multidim_dot, match_dot_shapes +from pypy.module.micronumpy.interp_iter import (ArrayIterator, + SkipLastAxisIterator, Chunk, ViewIterator) from pypy.module.micronumpy.strides import (calculate_slice_strides, shape_agreement, find_shape_and_elems, get_shape_from_iterable, calc_new_strides, to_coords) -from dot import multidim_dot, match_dot_shapes from pypy.rlib import 
jit +from pypy.rlib.rstring import StringBuilder from pypy.rpython.lltypesystem import lltype, rffi from pypy.tool.sourcetools import func_with_new_name -from pypy.rlib.rstring import StringBuilder -from pypy.module.micronumpy.interp_iter import (ArrayIterator, - SkipLastAxisIterator, Chunk, ViewIterator) -from pypy.module.micronumpy.appbridge import get_appbridge_cache count_driver = jit.JitDriver( @@ -1297,6 +1297,7 @@ nbytes = GetSetProperty(BaseArray.descr_get_nbytes), T = GetSetProperty(BaseArray.descr_get_transpose), + transpose = interp2app(BaseArray.descr_get_transpose), flat = GetSetProperty(BaseArray.descr_get_flatiter), ravel = interp2app(BaseArray.descr_ravel), item = interp2app(BaseArray.descr_item), diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -1487,6 +1487,7 @@ a = array((range(10), range(20, 30))) b = a.T assert(b[:, 0] == a[0, :]).all() + assert (a.transpose() == b).all() def test_flatiter(self): from _numpypy import array, flatiter, arange diff --git a/pypy/module/pypyjit/__init__.py b/pypy/module/pypyjit/__init__.py --- a/pypy/module/pypyjit/__init__.py +++ b/pypy/module/pypyjit/__init__.py @@ -13,6 +13,7 @@ 'ResOperation': 'interp_resop.WrappedOp', 'DebugMergePoint': 'interp_resop.DebugMergePoint', 'Box': 'interp_resop.WrappedBox', + 'PARAMETER_DOCS': 'space.wrap(pypy.rlib.jit.PARAMETER_DOCS)', } def setup_after_space_initialization(self): diff --git a/pypy/module/pypyjit/test/test_jit_setup.py b/pypy/module/pypyjit/test/test_jit_setup.py --- a/pypy/module/pypyjit/test/test_jit_setup.py +++ b/pypy/module/pypyjit/test/test_jit_setup.py @@ -45,6 +45,12 @@ pypyjit.set_compile_hook(None) pypyjit.set_param('default') + def test_doc(self): + import pypyjit + d = pypyjit.PARAMETER_DOCS + assert type(d) is dict + assert 'threshold' in d + def test_interface_residual_call(): space = gettestobjspace(usemodules=['pypyjit']) diff --git a/pypy/module/test_lib_pypy/test_datetime.py b/pypy/module/test_lib_pypy/test_datetime.py --- a/pypy/module/test_lib_pypy/test_datetime.py +++ b/pypy/module/test_lib_pypy/test_datetime.py @@ -1,7 +1,10 @@ """Additional tests for datetime.""" +import py + import time import datetime +import copy import os def test_utcfromtimestamp(): @@ -26,3 +29,18 @@ def test_utcfromtimestamp_microsecond(): dt = datetime.datetime.utcfromtimestamp(0) assert isinstance(dt.microsecond, int) + + +def test_integer_args(): + with py.test.raises(TypeError): + datetime.datetime(10, 10, 10.) + with py.test.raises(TypeError): + datetime.datetime(10, 10, 10, 10, 10.) + with py.test.raises(TypeError): + datetime.datetime(10, 10, 10, 10, 10, 10.) 
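# Illustrative sketch only: the lib_pypy/datetime.py change that the new
# tests above exercise is not shown in this part of the diff, and the helper
# name _check_int_field below is hypothetical, not the actual implementation.
# The tests imply a check along these lines: plain integers pass through,
# floats raise TypeError instead of being silently truncated.

def _check_int_field(value):
    # accept plain Python 2 integers only
    if isinstance(value, (int, long)):
        return value
    raise TypeError('an integer is required')

_check_int_field(10)                  # fine
try:
    _check_int_field(10.)             # what datetime.datetime(10, 10, 10.) hits
except TypeError:
    pass                              # rejected, as test_integer_args expects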
+ +def test_utcnow_microsecond(): + dt = datetime.datetime.utcnow() + assert type(dt.microsecond) is int + + copy.copy(dt) \ No newline at end of file diff --git a/pypy/rlib/libffi.py b/pypy/rlib/libffi.py --- a/pypy/rlib/libffi.py +++ b/pypy/rlib/libffi.py @@ -238,7 +238,7 @@ self = jit.promote(self) if argchain.numargs != len(self.argtypes): raise TypeError, 'Wrong number of arguments: %d expected, got %d' %\ - (argchain.numargs, len(self.argtypes)) + (len(self.argtypes), argchain.numargs) ll_args = self._prepare() i = 0 arg = argchain.first diff --git a/pypy/tool/release/package.py b/pypy/tool/release/package.py --- a/pypy/tool/release/package.py +++ b/pypy/tool/release/package.py @@ -126,7 +126,7 @@ zf.close() else: archive = str(builddir.join(name + '.tar.bz2')) - if sys.platform == 'darwin': + if sys.platform == 'darwin' or sys.platform.startswith('freebsd'): e = os.system('tar --numeric-owner -cvjf ' + archive + " " + name) else: e = os.system('tar --owner=root --group=root --numeric-owner -cvjf ' + archive + " " + name) diff --git a/pypy/translator/c/database.py b/pypy/translator/c/database.py --- a/pypy/translator/c/database.py +++ b/pypy/translator/c/database.py @@ -28,11 +28,13 @@ gctransformer = None def __init__(self, translator=None, standalone=False, + cpython_extension=False, gcpolicyclass=None, thread_enabled=False, sandbox=False): self.translator = translator self.standalone = standalone + self.cpython_extension = cpython_extension self.sandbox = sandbox if gcpolicyclass is None: gcpolicyclass = gc.RefcountingGcPolicy diff --git a/pypy/translator/c/dlltool.py b/pypy/translator/c/dlltool.py --- a/pypy/translator/c/dlltool.py +++ b/pypy/translator/c/dlltool.py @@ -14,11 +14,14 @@ CBuilder.__init__(self, *args, **kwds) def getentrypointptr(self): + entrypoints = [] bk = self.translator.annotator.bookkeeper - graphs = [bk.getdesc(f).cachedgraph(None) for f, _ in self.functions] - return [getfunctionptr(graph) for graph in graphs] + for f, _ in self.functions: + graph = bk.getdesc(f).getuniquegraph() + entrypoints.append(getfunctionptr(graph)) + return entrypoints - def gen_makefile(self, targetdir): + def gen_makefile(self, targetdir, exe_name=None): pass # XXX finish def compile(self): diff --git a/pypy/translator/c/extfunc.py b/pypy/translator/c/extfunc.py --- a/pypy/translator/c/extfunc.py +++ b/pypy/translator/c/extfunc.py @@ -106,7 +106,7 @@ yield ('RPYTHON_EXCEPTION_MATCH', exceptiondata.fn_exception_match) yield ('RPYTHON_TYPE_OF_EXC_INST', exceptiondata.fn_type_of_exc_inst) yield ('RPYTHON_RAISE_OSERROR', exceptiondata.fn_raise_OSError) - if not db.standalone: + if db.cpython_extension: yield ('RPYTHON_PYEXCCLASS2EXC', exceptiondata.fn_pyexcclass2exc) yield ('RPyExceptionOccurred1', exctransformer.rpyexc_occured_ptr.value) diff --git a/pypy/translator/c/genc.py b/pypy/translator/c/genc.py --- a/pypy/translator/c/genc.py +++ b/pypy/translator/c/genc.py @@ -111,6 +111,7 @@ _compiled = False modulename = None split = False + cpython_extension = False def __init__(self, translator, entrypoint, config, gcpolicy=None, secondary_entrypoints=()): @@ -138,6 +139,7 @@ raise NotImplementedError("--gcrootfinder=asmgcc requires standalone") db = LowLevelDatabase(translator, standalone=self.standalone, + cpython_extension=self.cpython_extension, gcpolicyclass=gcpolicyclass, thread_enabled=self.config.translation.thread, sandbox=self.config.translation.sandbox) @@ -236,6 +238,8 @@ CBuilder.have___thread = self.translator.platform.check___thread() if not self.standalone: assert not 
self.config.translation.instrument + if self.cpython_extension: + defines['PYPY_CPYTHON_EXTENSION'] = 1 else: defines['PYPY_STANDALONE'] = db.get(pf) if self.config.translation.instrument: @@ -307,13 +311,18 @@ class CExtModuleBuilder(CBuilder): standalone = False + cpython_extension = True _module = None _wrapper = None def get_eci(self): from distutils import sysconfig python_inc = sysconfig.get_python_inc() - eci = ExternalCompilationInfo(include_dirs=[python_inc]) + eci = ExternalCompilationInfo( + include_dirs=[python_inc], + includes=["Python.h", + ], + ) return eci.merge(CBuilder.get_eci(self)) def getentrypointptr(self): # xxx diff --git a/pypy/translator/c/src/exception.h b/pypy/translator/c/src/exception.h --- a/pypy/translator/c/src/exception.h +++ b/pypy/translator/c/src/exception.h @@ -2,7 +2,7 @@ /************************************************************/ /*** C header subsection: exceptions ***/ -#if !defined(PYPY_STANDALONE) && !defined(PYPY_NOT_MAIN_FILE) +#if defined(PYPY_CPYTHON_EXTENSION) && !defined(PYPY_NOT_MAIN_FILE) PyObject *RPythonError; #endif @@ -74,7 +74,7 @@ RPyRaiseException(RPYTHON_TYPE_OF_EXC_INST(rexc), rexc); } -#ifndef PYPY_STANDALONE +#ifdef PYPY_CPYTHON_EXTENSION void RPyConvertExceptionFromCPython(void) { /* convert the CPython exception to an RPython one */ diff --git a/pypy/translator/c/src/g_include.h b/pypy/translator/c/src/g_include.h --- a/pypy/translator/c/src/g_include.h +++ b/pypy/translator/c/src/g_include.h @@ -2,7 +2,7 @@ /************************************************************/ /*** C header file for code produced by genc.py ***/ -#ifndef PYPY_STANDALONE +#ifdef PYPY_CPYTHON_EXTENSION # include "Python.h" # include "compile.h" # include "frameobject.h" diff --git a/pypy/translator/c/src/g_prerequisite.h b/pypy/translator/c/src/g_prerequisite.h --- a/pypy/translator/c/src/g_prerequisite.h +++ b/pypy/translator/c/src/g_prerequisite.h @@ -5,8 +5,6 @@ #ifdef PYPY_STANDALONE # include "src/commondefs.h" -#else -# include "Python.h" #endif #ifdef _WIN32 diff --git a/pypy/translator/c/src/pyobj.h b/pypy/translator/c/src/pyobj.h --- a/pypy/translator/c/src/pyobj.h +++ b/pypy/translator/c/src/pyobj.h @@ -2,7 +2,7 @@ /************************************************************/ /*** C header subsection: untyped operations ***/ /*** as OP_XXX() macros calling the CPython API ***/ - +#ifdef PYPY_CPYTHON_EXTENSION #define op_bool(r,what) { \ int _retval = what; \ @@ -261,3 +261,5 @@ } #endif + +#endif /* PYPY_CPYTHON_EXTENSION */ diff --git a/pypy/translator/c/src/support.h b/pypy/translator/c/src/support.h --- a/pypy/translator/c/src/support.h +++ b/pypy/translator/c/src/support.h @@ -104,7 +104,7 @@ # define RPyBareItem(array, index) ((array)[index]) #endif -#ifndef PYPY_STANDALONE +#ifdef PYPY_CPYTHON_EXTENSION /* prototypes */ diff --git a/pypy/translator/c/test/test_dlltool.py b/pypy/translator/c/test/test_dlltool.py --- a/pypy/translator/c/test/test_dlltool.py +++ b/pypy/translator/c/test/test_dlltool.py @@ -2,7 +2,6 @@ from pypy.translator.c.dlltool import DLLDef from ctypes import CDLL import py -py.test.skip("fix this if needed") class TestDLLTool(object): def test_basic(self): @@ -16,8 +15,8 @@ d = DLLDef('lib', [(f, [int]), (b, [int])]) so = d.compile() dll = CDLL(str(so)) - assert dll.f(3) == 3 - assert dll.b(10) == 12 + assert dll.pypy_g_f(3) == 3 + assert dll.pypy_g_b(10) == 12 def test_split_criteria(self): def f(x): @@ -28,4 +27,5 @@ d = DLLDef('lib', [(f, [int]), (b, [int])]) so = d.compile() - assert 
py.path.local(so).dirpath().join('implement.c').check() + dirpath = py.path.local(so).dirpath() + assert dirpath.join('translator_c_test_test_dlltool.c').check() diff --git a/pypy/translator/driver.py b/pypy/translator/driver.py --- a/pypy/translator/driver.py +++ b/pypy/translator/driver.py @@ -331,6 +331,7 @@ raise Exception("stand-alone program entry point must return an " "int (and not, e.g., None or always raise an " "exception).") + annotator.complete() annotator.simplify() return s diff --git a/pypy/translator/goal/app_main.py b/pypy/translator/goal/app_main.py --- a/pypy/translator/goal/app_main.py +++ b/pypy/translator/goal/app_main.py @@ -139,8 +139,14 @@ items = pypyjit.defaults.items() items.sort() for key, value in items: - print ' --jit %s=N %s%s (default %s)' % ( - key, ' '*(18-len(key)), pypyjit.PARAMETER_DOCS[key], value) + prefix = ' --jit %s=N %s' % (key, ' '*(18-len(key))) + doc = '%s (default %s)' % (pypyjit.PARAMETER_DOCS[key], value) + while len(doc) > 51: + i = doc[:51].rfind(' ') + print prefix + doc[:i] + doc = doc[i+1:] + prefix = ' '*len(prefix) + print prefix + doc print ' --jit off turn off the JIT' def print_version(*args): From noreply at buildbot.pypy.org Sat Feb 18 17:06:11 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Sat, 18 Feb 2012 17:06:11 +0100 (CET) Subject: [pypy-commit] pypy sepcomp2: hg merge default Message-ID: <20120218160611.4EE1911B2E78@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: sepcomp2 Changeset: r52610:cb3c4e52890f Date: 2012-02-18 17:05 +0100 http://bitbucket.org/pypy/pypy/changeset/cb3c4e52890f/ Log: hg merge default diff --git a/pypy/config/translationoption.py b/pypy/config/translationoption.py --- a/pypy/config/translationoption.py +++ b/pypy/config/translationoption.py @@ -105,7 +105,8 @@ BoolOption("sandbox", "Produce a fully-sandboxed executable", default=False, cmdline="--sandbox", requires=[("translation.thread", False)], - suggests=[("translation.gc", "generation")]), + suggests=[("translation.gc", "generation"), + ("translation.gcrootfinder", "shadowstack")]), BoolOption("rweakref", "The backend supports RPython-level weakrefs", default=True), diff --git a/pypy/interpreter/pyframe.py b/pypy/interpreter/pyframe.py --- a/pypy/interpreter/pyframe.py +++ b/pypy/interpreter/pyframe.py @@ -60,11 +60,10 @@ self.pycode = code eval.Frame.__init__(self, space, w_globals) self.locals_stack_w = [None] * (code.co_nlocals + code.co_stacksize) - self.nlocals = code.co_nlocals self.valuestackdepth = code.co_nlocals self.lastblock = None make_sure_not_resized(self.locals_stack_w) - check_nonneg(self.nlocals) + check_nonneg(self.valuestackdepth) # if space.config.objspace.honor__builtins__: self.builtin = space.builtin.pick_builtin(w_globals) @@ -144,8 +143,8 @@ def execute_frame(self, w_inputvalue=None, operr=None): """Execute this frame. Main entry point to the interpreter. The optional arguments are there to handle a generator's frame: - w_inputvalue is for generator.send()) and operr is for - generator.throw()). + w_inputvalue is for generator.send() and operr is for + generator.throw(). 
""" # the following 'assert' is an annotation hint: it hides from # the annotator all methods that are defined in PyFrame but @@ -195,7 +194,7 @@ def popvalue(self): depth = self.valuestackdepth - 1 - assert depth >= self.nlocals, "pop from empty value stack" + assert depth >= self.pycode.co_nlocals, "pop from empty value stack" w_object = self.locals_stack_w[depth] self.locals_stack_w[depth] = None self.valuestackdepth = depth @@ -223,7 +222,7 @@ def peekvalues(self, n): values_w = [None] * n base = self.valuestackdepth - n - assert base >= self.nlocals + assert base >= self.pycode.co_nlocals while True: n -= 1 if n < 0: @@ -235,7 +234,8 @@ def dropvalues(self, n): n = hint(n, promote=True) finaldepth = self.valuestackdepth - n - assert finaldepth >= self.nlocals, "stack underflow in dropvalues()" + assert finaldepth >= self.pycode.co_nlocals, ( + "stack underflow in dropvalues()") while True: n -= 1 if n < 0: @@ -267,13 +267,15 @@ # Contrast this with CPython where it's PEEK(-1). index_from_top = hint(index_from_top, promote=True) index = self.valuestackdepth + ~index_from_top - assert index >= self.nlocals, "peek past the bottom of the stack" + assert index >= self.pycode.co_nlocals, ( + "peek past the bottom of the stack") return self.locals_stack_w[index] def settopvalue(self, w_object, index_from_top=0): index_from_top = hint(index_from_top, promote=True) index = self.valuestackdepth + ~index_from_top - assert index >= self.nlocals, "settop past the bottom of the stack" + assert index >= self.pycode.co_nlocals, ( + "settop past the bottom of the stack") self.locals_stack_w[index] = w_object @jit.unroll_safe @@ -320,12 +322,13 @@ else: f_lineno = self.f_lineno - values_w = self.locals_stack_w[self.nlocals:self.valuestackdepth] + nlocals = self.pycode.co_nlocals + values_w = self.locals_stack_w[nlocals:self.valuestackdepth] w_valuestack = maker.slp_into_tuple_with_nulls(space, values_w) w_blockstack = nt([block._get_state_(space) for block in self.get_blocklist()]) w_fastlocals = maker.slp_into_tuple_with_nulls( - space, self.locals_stack_w[:self.nlocals]) + space, self.locals_stack_w[:nlocals]) if self.last_exception is None: w_exc_value = space.w_None w_tb = space.w_None @@ -442,7 +445,7 @@ """Initialize the fast locals from a list of values, where the order is according to self.pycode.signature().""" scope_len = len(scope_w) - if scope_len > self.nlocals: + if scope_len > self.pycode.co_nlocals: raise ValueError, "new fastscope is longer than the allocated area" # don't assign directly to 'locals_stack_w[:scope_len]' to be # virtualizable-friendly @@ -456,7 +459,7 @@ pass def getfastscopelength(self): - return self.nlocals + return self.pycode.co_nlocals def getclosure(self): return None diff --git a/pypy/jit/backend/test/runner_test.py b/pypy/jit/backend/test/runner_test.py --- a/pypy/jit/backend/test/runner_test.py +++ b/pypy/jit/backend/test/runner_test.py @@ -2221,6 +2221,35 @@ print 'step 4 ok' print '-'*79 + def test_guard_not_invalidated_and_label(self): + # test that the guard_not_invalidated reserves enough room before + # the label. 
If it doesn't, then in this example after we invalidate + # the guard, jumping to the label will hit the invalidation code too + cpu = self.cpu + i0 = BoxInt() + faildescr = BasicFailDescr(1) + labeldescr = TargetToken() + ops = [ + ResOperation(rop.GUARD_NOT_INVALIDATED, [], None, descr=faildescr), + ResOperation(rop.LABEL, [i0], None, descr=labeldescr), + ResOperation(rop.FINISH, [i0], None, descr=BasicFailDescr(3)), + ] + ops[0].setfailargs([]) + looptoken = JitCellToken() + self.cpu.compile_loop([i0], ops, looptoken) + # mark as failing + self.cpu.invalidate_loop(looptoken) + # attach a bridge + i2 = BoxInt() + ops = [ + ResOperation(rop.JUMP, [ConstInt(333)], None, descr=labeldescr), + ] + self.cpu.compile_bridge(faildescr, [], ops, looptoken) + # run: must not be caught in an infinite loop + fail = self.cpu.execute_token(looptoken, 16) + assert fail.identifier == 3 + assert self.cpu.get_latest_value_int(0) == 333 + # pure do_ / descr features def test_do_operations(self): diff --git a/pypy/jit/backend/x86/regalloc.py b/pypy/jit/backend/x86/regalloc.py --- a/pypy/jit/backend/x86/regalloc.py +++ b/pypy/jit/backend/x86/regalloc.py @@ -165,7 +165,6 @@ self.jump_target_descr = None self.close_stack_struct = 0 self.final_jump_op = None - self.min_bytes_before_label = 0 def _prepare(self, inputargs, operations, allgcrefs): self.fm = X86FrameManager() @@ -199,8 +198,13 @@ operations = self._prepare(inputargs, operations, allgcrefs) self._update_bindings(arglocs, inputargs) self.param_depth = prev_depths[1] + self.min_bytes_before_label = 0 return operations + def ensure_next_label_is_at_least_at_position(self, at_least_position): + self.min_bytes_before_label = max(self.min_bytes_before_label, + at_least_position) + def reserve_param(self, n): self.param_depth = max(self.param_depth, n) @@ -468,7 +472,11 @@ self.assembler.mc.mark_op(None) # end of the loop def flush_loop(self): - # rare case: if the loop is too short, pad with NOPs + # rare case: if the loop is too short, or if we are just after + # a GUARD_NOT_INVALIDATED, pad with NOPs. Important! This must + # be called to ensure that there are enough bytes produced, + # because GUARD_NOT_INVALIDATED or redirect_call_assembler() + # will maybe overwrite them. mc = self.assembler.mc while mc.get_relative_pos() < self.min_bytes_before_label: mc.NOP() @@ -558,7 +566,15 @@ def consider_guard_no_exception(self, op): self.perform_guard(op, [], None) - consider_guard_not_invalidated = consider_guard_no_exception + def consider_guard_not_invalidated(self, op): + mc = self.assembler.mc + n = mc.get_relative_pos() + self.perform_guard(op, [], None) + assert n == mc.get_relative_pos() + # ensure that the next label is at least 5 bytes farther than + # the current position. Otherwise, when invalidating the guard, + # we would overwrite randomly the next label's position. 
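# Note on the resume.py change around this point: number() and finish() now
# ask the optimizer for a value via optimizer.getvalue(box) instead of doing
# values.get(box, None), so a box whose value the optimizer already knows
# (in particular a constant, as in test_constant_failargs and test_issue1048
# earlier in this diff) is numbered from that value rather than from the raw
# box.  The FakeOptimizer class added to test_resume.py mirrors this
# interface for the unit tests.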
+ self.ensure_next_label_is_at_least_at_position(n + 5) def consider_guard_exception(self, op): loc = self.rm.make_sure_var_in_reg(op.getarg(0)) diff --git a/pypy/jit/metainterp/optimizeopt/optimizer.py b/pypy/jit/metainterp/optimizeopt/optimizer.py --- a/pypy/jit/metainterp/optimizeopt/optimizer.py +++ b/pypy/jit/metainterp/optimizeopt/optimizer.py @@ -567,7 +567,7 @@ assert isinstance(descr, compile.ResumeGuardDescr) modifier = resume.ResumeDataVirtualAdder(descr, self.resumedata_memo) try: - newboxes = modifier.finish(self.values, self.pendingfields) + newboxes = modifier.finish(self, self.pendingfields) if len(newboxes) > self.metainterp_sd.options.failargs_limit: raise resume.TagOverflow except resume.TagOverflow: diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py @@ -7760,6 +7760,59 @@ """ self.optimize_loop(ops, expected) + def test_constant_failargs(self): + ops = """ + [p1, i2, i3] + setfield_gc(p1, ConstPtr(myptr), descr=nextdescr) + p16 = getfield_gc(p1, descr=nextdescr) + guard_true(i2) [p16, i3] + jump(p1, i3, i2) + """ + preamble = """ + [p1, i2, i3] + setfield_gc(p1, ConstPtr(myptr), descr=nextdescr) + guard_true(i2) [i3] + jump(p1, i3) + """ + expected = """ + [p1, i3] + guard_true(i3) [] + jump(p1, 1) + """ + self.optimize_loop(ops, expected, preamble) + + def test_issue1048(self): + ops = """ + [p1, i2, i3] + p16 = getfield_gc(p1, descr=nextdescr) + guard_true(i2) [p16] + setfield_gc(p1, ConstPtr(myptr), descr=nextdescr) + jump(p1, i3, i2) + """ + expected = """ + [p1, i3] + guard_true(i3) [] + jump(p1, 1) + """ + self.optimize_loop(ops, expected) + + def test_issue1048_ok(self): + ops = """ + [p1, i2, i3] + p16 = getfield_gc(p1, descr=nextdescr) + call(p16, descr=nonwritedescr) + guard_true(i2) [p16] + setfield_gc(p1, ConstPtr(myptr), descr=nextdescr) + jump(p1, i3, i2) + """ + expected = """ + [p1, i3] + call(ConstPtr(myptr), descr=nonwritedescr) + guard_true(i3) [] + jump(p1, 1) + """ + self.optimize_loop(ops, expected) + class TestLLtype(OptimizeOptTest, LLtypeMixin): pass diff --git a/pypy/jit/metainterp/resume.py b/pypy/jit/metainterp/resume.py --- a/pypy/jit/metainterp/resume.py +++ b/pypy/jit/metainterp/resume.py @@ -182,23 +182,22 @@ # env numbering - def number(self, values, snapshot): + def number(self, optimizer, snapshot): if snapshot is None: return lltype.nullptr(NUMBERING), {}, 0 if snapshot in self.numberings: numb, liveboxes, v = self.numberings[snapshot] return numb, liveboxes.copy(), v - numb1, liveboxes, v = self.number(values, snapshot.prev) + numb1, liveboxes, v = self.number(optimizer, snapshot.prev) n = len(liveboxes)-v boxes = snapshot.boxes length = len(boxes) numb = lltype.malloc(NUMBERING, length) for i in range(length): box = boxes[i] - value = values.get(box, None) - if value is not None: - box = value.get_key_box() + value = optimizer.getvalue(box) + box = value.get_key_box() if isinstance(box, Const): tagged = self.getconst(box) @@ -318,14 +317,14 @@ _, tagbits = untag(tagged) return tagbits == TAGVIRTUAL - def finish(self, values, pending_setfields=[]): + def finish(self, optimizer, pending_setfields=[]): # compute the numbering storage = self.storage # make sure that nobody attached resume data to this guard yet assert not storage.rd_numb snapshot = storage.rd_snapshot assert snapshot is not None # is that true? 
- numb, liveboxes_from_env, v = self.memo.number(values, snapshot) + numb, liveboxes_from_env, v = self.memo.number(optimizer, snapshot) self.liveboxes_from_env = liveboxes_from_env self.liveboxes = {} storage.rd_numb = numb @@ -341,23 +340,23 @@ liveboxes[i] = box else: assert tagbits == TAGVIRTUAL - value = values[box] + value = optimizer.getvalue(box) value.get_args_for_fail(self) for _, box, fieldbox, _ in pending_setfields: self.register_box(box) self.register_box(fieldbox) - value = values[fieldbox] + value = optimizer.getvalue(fieldbox) value.get_args_for_fail(self) - self._number_virtuals(liveboxes, values, v) + self._number_virtuals(liveboxes, optimizer, v) self._add_pending_fields(pending_setfields) storage.rd_consts = self.memo.consts dump_storage(storage, liveboxes) return liveboxes[:] - def _number_virtuals(self, liveboxes, values, num_env_virtuals): + def _number_virtuals(self, liveboxes, optimizer, num_env_virtuals): # !! 'liveboxes' is a list that is extend()ed in-place !! memo = self.memo new_liveboxes = [None] * memo.num_cached_boxes() @@ -397,7 +396,7 @@ memo.nvholes += length - len(vfieldboxes) for virtualbox, fieldboxes in vfieldboxes.iteritems(): num, _ = untag(self.liveboxes[virtualbox]) - value = values[virtualbox] + value = optimizer.getvalue(virtualbox) fieldnums = [self._gettagged(box) for box in fieldboxes] vinfo = value.make_virtual_info(self, fieldnums) diff --git a/pypy/jit/metainterp/test/test_resume.py b/pypy/jit/metainterp/test/test_resume.py --- a/pypy/jit/metainterp/test/test_resume.py +++ b/pypy/jit/metainterp/test/test_resume.py @@ -18,6 +18,19 @@ rd_virtuals = None rd_pendingfields = None + +class FakeOptimizer(object): + def __init__(self, values): + self.values = values + + def getvalue(self, box): + try: + value = self.values[box] + except KeyError: + value = self.values[box] = OptValue(box) + return value + + def test_tag(): assert tag(3, 1) == rffi.r_short(3<<2|1) assert tag(-3, 2) == rffi.r_short(-3<<2|2) @@ -500,7 +513,7 @@ capture_resumedata(fs, None, [], storage) memo = ResumeDataLoopMemo(FakeMetaInterpStaticData()) modifier = ResumeDataVirtualAdder(storage, memo) - liveboxes = modifier.finish({}) + liveboxes = modifier.finish(FakeOptimizer({})) metainterp = MyMetaInterp() b1t, b2t, b3t = [BoxInt(), BoxPtr(), BoxInt()] @@ -524,7 +537,7 @@ capture_resumedata(fs, [b4], [], storage) memo = ResumeDataLoopMemo(FakeMetaInterpStaticData()) modifier = ResumeDataVirtualAdder(storage, memo) - liveboxes = modifier.finish({}) + liveboxes = modifier.finish(FakeOptimizer({})) metainterp = MyMetaInterp() b1t, b2t, b3t, b4t = [BoxInt(), BoxPtr(), BoxInt(), BoxPtr()] @@ -553,10 +566,10 @@ memo = ResumeDataLoopMemo(FakeMetaInterpStaticData()) modifier = ResumeDataVirtualAdder(storage, memo) - liveboxes = modifier.finish({}) + liveboxes = modifier.finish(FakeOptimizer({})) modifier = ResumeDataVirtualAdder(storage2, memo) - liveboxes2 = modifier.finish({}) + liveboxes2 = modifier.finish(FakeOptimizer({})) metainterp = MyMetaInterp() @@ -617,7 +630,7 @@ memo = ResumeDataLoopMemo(FakeMetaInterpStaticData()) values = {b2: virtual_value(b2, b5, c4)} modifier = ResumeDataVirtualAdder(storage, memo) - liveboxes = modifier.finish(values) + liveboxes = modifier.finish(FakeOptimizer(values)) assert len(storage.rd_virtuals) == 1 assert storage.rd_virtuals[0].fieldnums == [tag(-1, TAGBOX), tag(0, TAGCONST)] @@ -628,7 +641,7 @@ values = {b2: virtual_value(b2, b4, v6), b6: v6} memo.clear_box_virtual_numbers() modifier = ResumeDataVirtualAdder(storage2, memo) - liveboxes2 = 
modifier.finish(values) + liveboxes2 = modifier.finish(FakeOptimizer(values)) assert len(storage2.rd_virtuals) == 2 assert storage2.rd_virtuals[0].fieldnums == [tag(len(liveboxes2)-1, TAGBOX), tag(-1, TAGVIRTUAL)] @@ -674,7 +687,7 @@ memo = ResumeDataLoopMemo(FakeMetaInterpStaticData()) values = {b2: virtual_value(b2, b5, c4)} modifier = ResumeDataVirtualAdder(storage, memo) - liveboxes = modifier.finish(values) + liveboxes = modifier.finish(FakeOptimizer(values)) assert len(storage.rd_virtuals) == 1 assert storage.rd_virtuals[0].fieldnums == [tag(-1, TAGBOX), tag(0, TAGCONST)] @@ -684,7 +697,7 @@ capture_resumedata(fs, None, [], storage2) values[b4] = virtual_value(b4, b6, c4) modifier = ResumeDataVirtualAdder(storage2, memo) - liveboxes = modifier.finish(values) + liveboxes = modifier.finish(FakeOptimizer(values)) assert len(storage2.rd_virtuals) == 2 assert storage2.rd_virtuals[1].fieldnums == storage.rd_virtuals[0].fieldnums assert storage2.rd_virtuals[1] is storage.rd_virtuals[0] @@ -703,7 +716,7 @@ v1.setfield(LLtypeMixin.nextdescr, v2) values = {b1: v1, b2: v2} modifier = ResumeDataVirtualAdder(storage, memo) - liveboxes = modifier.finish(values) + liveboxes = modifier.finish(FakeOptimizer(values)) assert liveboxes == [b3] assert len(storage.rd_virtuals) == 2 assert storage.rd_virtuals[0].fieldnums == [tag(-1, TAGBOX), @@ -776,7 +789,7 @@ memo = ResumeDataLoopMemo(FakeMetaInterpStaticData()) - numb, liveboxes, v = memo.number({}, snap1) + numb, liveboxes, v = memo.number(FakeOptimizer({}), snap1) assert v == 0 assert liveboxes == {b1: tag(0, TAGBOX), b2: tag(1, TAGBOX), @@ -788,7 +801,7 @@ tag(0, TAGBOX), tag(2, TAGINT)] assert not numb.prev.prev - numb2, liveboxes2, v = memo.number({}, snap2) + numb2, liveboxes2, v = memo.number(FakeOptimizer({}), snap2) assert v == 0 assert liveboxes2 == {b1: tag(0, TAGBOX), b2: tag(1, TAGBOX), @@ -813,7 +826,8 @@ return self.virt # renamed - numb3, liveboxes3, v = memo.number({b3: FakeValue(False, c4)}, snap3) + numb3, liveboxes3, v = memo.number(FakeOptimizer({b3: FakeValue(False, c4)}), + snap3) assert v == 0 assert liveboxes3 == {b1: tag(0, TAGBOX), b2: tag(1, TAGBOX)} @@ -825,7 +839,8 @@ env4 = [c3, b4, b1, c3] snap4 = Snapshot(snap, env4) - numb4, liveboxes4, v = memo.number({b4: FakeValue(True, b4)}, snap4) + numb4, liveboxes4, v = memo.number(FakeOptimizer({b4: FakeValue(True, b4)}), + snap4) assert v == 1 assert liveboxes4 == {b1: tag(0, TAGBOX), b2: tag(1, TAGBOX), @@ -837,8 +852,9 @@ env5 = [b1, b4, b5] snap5 = Snapshot(snap4, env5) - numb5, liveboxes5, v = memo.number({b4: FakeValue(True, b4), - b5: FakeValue(True, b5)}, snap5) + numb5, liveboxes5, v = memo.number(FakeOptimizer({b4: FakeValue(True, b4), + b5: FakeValue(True, b5)}), + snap5) assert v == 2 assert liveboxes5 == {b1: tag(0, TAGBOX), b2: tag(1, TAGBOX), @@ -940,7 +956,7 @@ storage = make_storage(b1s, b2s, b3s) memo = ResumeDataLoopMemo(FakeMetaInterpStaticData()) modifier = ResumeDataVirtualAdder(storage, memo) - liveboxes = modifier.finish({}) + liveboxes = modifier.finish(FakeOptimizer({})) assert storage.rd_snapshot is None cpu = MyCPU([]) reader = ResumeDataDirectReader(MyMetaInterp(cpu), storage) @@ -954,14 +970,14 @@ storage = make_storage(b1s, b2s, b3s) memo = ResumeDataLoopMemo(FakeMetaInterpStaticData()) modifier = ResumeDataVirtualAdder(storage, memo) - modifier.finish({}) + modifier.finish(FakeOptimizer({})) assert len(memo.consts) == 2 assert storage.rd_consts is memo.consts b1s, b2s, b3s = [ConstInt(sys.maxint), ConstInt(2**17), ConstInt(-65)] storage2 = 
make_storage(b1s, b2s, b3s) modifier2 = ResumeDataVirtualAdder(storage2, memo) - modifier2.finish({}) + modifier2.finish(FakeOptimizer({})) assert len(memo.consts) == 3 assert storage2.rd_consts is memo.consts @@ -1022,7 +1038,7 @@ val = FakeValue() values = {b1s: val, b2s: val} - liveboxes = modifier.finish(values) + liveboxes = modifier.finish(FakeOptimizer(values)) assert storage.rd_snapshot is None b1t, b3t = [BoxInt(11), BoxInt(33)] newboxes = _resume_remap(liveboxes, [b1_2, b3s], b1t, b3t) @@ -1043,7 +1059,7 @@ storage = make_storage(b1s, b2s, b3s) memo = ResumeDataLoopMemo(FakeMetaInterpStaticData()) modifier = ResumeDataVirtualAdder(storage, memo) - liveboxes = modifier.finish({}) + liveboxes = modifier.finish(FakeOptimizer({})) b2t, b3t = [BoxPtr(demo55o), BoxInt(33)] newboxes = _resume_remap(liveboxes, [b2s, b3s], b2t, b3t) metainterp = MyMetaInterp() @@ -1086,7 +1102,7 @@ values = {b2s: v2, b4s: v4} liveboxes = [] - modifier._number_virtuals(liveboxes, values, 0) + modifier._number_virtuals(liveboxes, FakeOptimizer(values), 0) storage.rd_consts = memo.consts[:] storage.rd_numb = None # resume @@ -1156,7 +1172,7 @@ modifier.register_virtual_fields(b2s, [b4s, c1s]) liveboxes = [] values = {b2s: v2} - modifier._number_virtuals(liveboxes, values, 0) + modifier._number_virtuals(liveboxes, FakeOptimizer(values), 0) dump_storage(storage, liveboxes) storage.rd_consts = memo.consts[:] storage.rd_numb = None @@ -1203,7 +1219,7 @@ v2.setfield(LLtypeMixin.bdescr, OptValue(b4s)) modifier.register_virtual_fields(b2s, [c1s, b4s]) liveboxes = [] - modifier._number_virtuals(liveboxes, {b2s: v2}, 0) + modifier._number_virtuals(liveboxes, FakeOptimizer({b2s: v2}), 0) dump_storage(storage, liveboxes) storage.rd_consts = memo.consts[:] storage.rd_numb = None @@ -1249,7 +1265,7 @@ values = {b4s: v4, b2s: v2} liveboxes = [] - modifier._number_virtuals(liveboxes, values, 0) + modifier._number_virtuals(liveboxes, FakeOptimizer(values), 0) assert liveboxes == [b2s, b4s] or liveboxes == [b4s, b2s] modifier._add_pending_fields([(LLtypeMixin.nextdescr, b2s, b4s, -1)]) storage.rd_consts = memo.consts[:] diff --git a/pypy/module/_demo/test/test_sieve.py b/pypy/module/_demo/test/test_sieve.py new file mode 100644 --- /dev/null +++ b/pypy/module/_demo/test/test_sieve.py @@ -0,0 +1,12 @@ +from pypy.conftest import gettestobjspace + + +class AppTestSieve: + def setup_class(cls): + cls.space = gettestobjspace(usemodules=('_demo',)) + + def test_sieve(self): + import _demo + lst = _demo.sieve(100) + assert lst == [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, + 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97] diff --git a/pypy/module/cpyext/api.py b/pypy/module/cpyext/api.py --- a/pypy/module/cpyext/api.py +++ b/pypy/module/cpyext/api.py @@ -384,6 +384,7 @@ "Dict": "space.w_dict", "Tuple": "space.w_tuple", "List": "space.w_list", + "Set": "space.w_set", "Int": "space.w_int", "Bool": "space.w_bool", "Float": "space.w_float", @@ -434,16 +435,16 @@ ('buf', rffi.VOIDP), ('obj', PyObject), ('len', Py_ssize_t), - # ('itemsize', Py_ssize_t), + ('itemsize', Py_ssize_t), - # ('readonly', lltype.Signed), - # ('ndim', lltype.Signed), - # ('format', rffi.CCHARP), - # ('shape', Py_ssize_tP), - # ('strides', Py_ssize_tP), - # ('suboffets', Py_ssize_tP), - # ('smalltable', rffi.CFixedArray(Py_ssize_t, 2)), - # ('internal', rffi.VOIDP) + ('readonly', lltype.Signed), + ('ndim', lltype.Signed), + ('format', rffi.CCHARP), + ('shape', Py_ssize_tP), + ('strides', Py_ssize_tP), + ('suboffsets', Py_ssize_tP), + #('smalltable', 
rffi.CFixedArray(Py_ssize_t, 2)), + ('internal', rffi.VOIDP) )) @specialize.memo() diff --git a/pypy/module/cpyext/dictobject.py b/pypy/module/cpyext/dictobject.py --- a/pypy/module/cpyext/dictobject.py +++ b/pypy/module/cpyext/dictobject.py @@ -6,6 +6,7 @@ from pypy.module.cpyext.pyobject import RefcountState from pypy.module.cpyext.pyerrors import PyErr_BadInternalCall from pypy.interpreter.error import OperationError +from pypy.rlib.objectmodel import specialize @cpython_api([], PyObject) def PyDict_New(space): @@ -183,11 +184,34 @@ w_item = space.call_method(w_iter, "next") w_key, w_value = space.fixedview(w_item, 2) state = space.fromcache(RefcountState) - pkey[0] = state.make_borrowed(w_dict, w_key) - pvalue[0] = state.make_borrowed(w_dict, w_value) + if pkey: + pkey[0] = state.make_borrowed(w_dict, w_key) + if pvalue: + pvalue[0] = state.make_borrowed(w_dict, w_value) ppos[0] += 1 except OperationError, e: if not e.match(space, space.w_StopIteration): raise return 0 return 1 + + at specialize.memo() +def make_frozendict(space): + return space.appexec([], '''(): + import collections + class FrozenDict(collections.Mapping): + def __init__(self, *args, **kwargs): + self._d = dict(*args, **kwargs) + def __iter__(self): + return iter(self._d) + def __len__(self): + return len(self._d) + def __getitem__(self, key): + return self._d[key] + return FrozenDict''') + + at cpython_api([PyObject], PyObject) +def PyDictProxy_New(space, w_dict): + w_frozendict = make_frozendict(space) + return space.call_function(w_frozendict, w_dict) + diff --git a/pypy/module/cpyext/include/methodobject.h b/pypy/module/cpyext/include/methodobject.h --- a/pypy/module/cpyext/include/methodobject.h +++ b/pypy/module/cpyext/include/methodobject.h @@ -26,6 +26,7 @@ PyObject_HEAD PyMethodDef *m_ml; /* Description of the C function to call */ PyObject *m_self; /* Passed as 'self' arg to the C func, can be NULL */ + PyObject *m_module; /* The __module__ attribute, can be anything */ } PyCFunctionObject; /* Flag passed to newmethodobject */ diff --git a/pypy/module/cpyext/include/object.h b/pypy/module/cpyext/include/object.h --- a/pypy/module/cpyext/include/object.h +++ b/pypy/module/cpyext/include/object.h @@ -131,18 +131,18 @@ /* This is Py_ssize_t so it can be pointed to by strides in simple case.*/ - /* Py_ssize_t itemsize; */ - /* int readonly; */ - /* int ndim; */ - /* char *format; */ - /* Py_ssize_t *shape; */ - /* Py_ssize_t *strides; */ - /* Py_ssize_t *suboffsets; */ + Py_ssize_t itemsize; + int readonly; + int ndim; + char *format; + Py_ssize_t *shape; + Py_ssize_t *strides; + Py_ssize_t *suboffsets; /* static store for shape and strides of mono-dimensional buffers. 
*/ /* Py_ssize_t smalltable[2]; */ - /* void *internal; */ + void *internal; } Py_buffer; diff --git a/pypy/module/cpyext/include/pystate.h b/pypy/module/cpyext/include/pystate.h --- a/pypy/module/cpyext/include/pystate.h +++ b/pypy/module/cpyext/include/pystate.h @@ -10,6 +10,7 @@ typedef struct _ts { PyInterpreterState *interp; + PyObject *dict; /* Stores per-thread state */ } PyThreadState; #define Py_BEGIN_ALLOW_THREADS { \ @@ -24,4 +25,6 @@ enum {PyGILState_LOCKED, PyGILState_UNLOCKED} PyGILState_STATE; +#define PyThreadState_GET() PyThreadState_Get() + #endif /* !Py_PYSTATE_H */ diff --git a/pypy/module/cpyext/include/pythread.h b/pypy/module/cpyext/include/pythread.h --- a/pypy/module/cpyext/include/pythread.h +++ b/pypy/module/cpyext/include/pythread.h @@ -1,6 +1,8 @@ #ifndef Py_PYTHREAD_H #define Py_PYTHREAD_H +#define WITH_THREAD + typedef void *PyThread_type_lock; #define WAIT_LOCK 1 #define NOWAIT_LOCK 0 diff --git a/pypy/module/cpyext/include/structmember.h b/pypy/module/cpyext/include/structmember.h --- a/pypy/module/cpyext/include/structmember.h +++ b/pypy/module/cpyext/include/structmember.h @@ -20,7 +20,7 @@ } PyMemberDef; -/* Types */ +/* Types. These constants are also in structmemberdefs.py. */ #define T_SHORT 0 #define T_INT 1 #define T_LONG 2 @@ -42,9 +42,12 @@ #define T_LONGLONG 17 #define T_ULONGLONG 18 -/* Flags */ +/* Flags. These constants are also in structmemberdefs.py. */ #define READONLY 1 #define RO READONLY /* Shorthand */ +#define READ_RESTRICTED 2 +#define PY_WRITE_RESTRICTED 4 +#define RESTRICTED (READ_RESTRICTED | PY_WRITE_RESTRICTED) #ifdef __cplusplus diff --git a/pypy/module/cpyext/methodobject.py b/pypy/module/cpyext/methodobject.py --- a/pypy/module/cpyext/methodobject.py +++ b/pypy/module/cpyext/methodobject.py @@ -32,6 +32,7 @@ PyObjectFields + ( ('m_ml', lltype.Ptr(PyMethodDef)), ('m_self', PyObject), + ('m_module', PyObject), )) PyCFunctionObject = lltype.Ptr(PyCFunctionObjectStruct) @@ -47,11 +48,13 @@ assert isinstance(w_obj, W_PyCFunctionObject) py_func.c_m_ml = w_obj.ml py_func.c_m_self = make_ref(space, w_obj.w_self) + py_func.c_m_module = make_ref(space, w_obj.w_module) @cpython_api([PyObject], lltype.Void, external=False) def cfunction_dealloc(space, py_obj): py_func = rffi.cast(PyCFunctionObject, py_obj) Py_DecRef(space, py_func.c_m_self) + Py_DecRef(space, py_func.c_m_module) from pypy.module.cpyext.object import PyObject_dealloc PyObject_dealloc(space, py_obj) diff --git a/pypy/module/cpyext/object.py b/pypy/module/cpyext/object.py --- a/pypy/module/cpyext/object.py +++ b/pypy/module/cpyext/object.py @@ -381,6 +381,15 @@ This is the equivalent of the Python expression hash(o).""" return space.int_w(space.hash(w_obj)) + at cpython_api([PyObject], PyObject) +def PyObject_Dir(space, w_o): + """This is equivalent to the Python expression dir(o), returning a (possibly + empty) list of strings appropriate for the object argument, or NULL if there + was an error. 
If the argument is NULL, this is like the Python dir(), + returning the names of the current locals; in this case, if no execution frame + is active then NULL is returned but PyErr_Occurred() will return false.""" + return space.call_function(space.builtin.get('dir'), w_o) + @cpython_api([PyObject, rffi.CCHARPP, Py_ssize_tP], rffi.INT_real, error=-1) def PyObject_AsCharBuffer(space, obj, bufferp, sizep): """Returns a pointer to a read-only memory location usable as @@ -430,6 +439,8 @@ return 0 +PyBUF_WRITABLE = 0x0001 # Copied from object.h + @cpython_api([lltype.Ptr(Py_buffer), PyObject, rffi.VOIDP, Py_ssize_t, lltype.Signed, lltype.Signed], rffi.INT, error=CANNOT_FAIL) def PyBuffer_FillInfo(space, view, obj, buf, length, readonly, flags): @@ -445,6 +456,18 @@ view.c_len = length view.c_obj = obj Py_IncRef(space, obj) + view.c_itemsize = 1 + if flags & PyBUF_WRITABLE: + rffi.setintfield(view, 'c_readonly', 0) + else: + rffi.setintfield(view, 'c_readonly', 1) + rffi.setintfield(view, 'c_ndim', 0) + view.c_format = lltype.nullptr(rffi.CCHARP.TO) + view.c_shape = lltype.nullptr(Py_ssize_tP.TO) + view.c_strides = lltype.nullptr(Py_ssize_tP.TO) + view.c_suboffsets = lltype.nullptr(Py_ssize_tP.TO) + view.c_internal = lltype.nullptr(rffi.VOIDP.TO) + return 0 diff --git a/pypy/module/cpyext/pyfile.py b/pypy/module/cpyext/pyfile.py --- a/pypy/module/cpyext/pyfile.py +++ b/pypy/module/cpyext/pyfile.py @@ -1,7 +1,8 @@ from pypy.rpython.lltypesystem import rffi, lltype from pypy.module.cpyext.api import ( - cpython_api, CONST_STRING, FILEP, build_type_checkers) + cpython_api, CANNOT_FAIL, CONST_STRING, FILEP, build_type_checkers) from pypy.module.cpyext.pyobject import PyObject, borrow_from +from pypy.module.cpyext.object import Py_PRINT_RAW from pypy.interpreter.error import OperationError from pypy.module._file.interp_file import W_File @@ -61,11 +62,49 @@ def PyFile_WriteString(space, s, w_p): """Write string s to file object p. Return 0 on success or -1 on failure; the appropriate exception will be set.""" - w_s = space.wrap(rffi.charp2str(s)) - space.call_method(w_p, "write", w_s) + w_str = space.wrap(rffi.charp2str(s)) + space.call_method(w_p, "write", w_str) + return 0 + + at cpython_api([PyObject, PyObject, rffi.INT_real], rffi.INT_real, error=-1) +def PyFile_WriteObject(space, w_obj, w_p, flags): + """ + Write object obj to file object p. The only supported flag for flags is + Py_PRINT_RAW; if given, the str() of the object is written + instead of the repr(). Return 0 on success or -1 on failure; the + appropriate exception will be set.""" + if rffi.cast(lltype.Signed, flags) & Py_PRINT_RAW: + w_str = space.str(w_obj) + else: + w_str = space.repr(w_obj) + space.call_method(w_p, "write", w_str) return 0 @cpython_api([PyObject], PyObject) def PyFile_Name(space, w_p): """Return the name of the file specified by p as a string object.""" - return borrow_from(w_p, space.getattr(w_p, space.wrap("name"))) \ No newline at end of file + return borrow_from(w_p, space.getattr(w_p, space.wrap("name"))) + + at cpython_api([PyObject, rffi.INT_real], rffi.INT_real, error=CANNOT_FAIL) +def PyFile_SoftSpace(space, w_p, newflag): + """ + This function exists for internal use by the interpreter. Set the + softspace attribute of p to newflag and return the previous value. + p does not have to be a file object for this function to work + properly; any object is supported (thought its only interesting if + the softspace attribute can be set). 
This function clears any + errors, and will return 0 as the previous value if the attribute + either does not exist or if there were errors in retrieving it. + There is no way to detect errors from this function, but doing so + should not be needed.""" + try: + if rffi.cast(lltype.Signed, newflag): + w_newflag = space.w_True + else: + w_newflag = space.w_False + oldflag = space.int_w(space.getattr(w_p, space.wrap("softspace"))) + space.setattr(w_p, space.wrap("softspace"), w_newflag) + return oldflag + except OperationError, e: + return 0 + diff --git a/pypy/module/cpyext/pystate.py b/pypy/module/cpyext/pystate.py --- a/pypy/module/cpyext/pystate.py +++ b/pypy/module/cpyext/pystate.py @@ -1,12 +1,19 @@ from pypy.module.cpyext.api import ( cpython_api, generic_cpy_call, CANNOT_FAIL, CConfig, cpython_struct) +from pypy.module.cpyext.pyobject import PyObject, Py_DecRef, make_ref from pypy.rpython.lltypesystem import rffi, lltype PyInterpreterStateStruct = lltype.ForwardReference() PyInterpreterState = lltype.Ptr(PyInterpreterStateStruct) cpython_struct( - "PyInterpreterState", [('next', PyInterpreterState)], PyInterpreterStateStruct) -PyThreadState = lltype.Ptr(cpython_struct("PyThreadState", [('interp', PyInterpreterState)])) + "PyInterpreterState", + [('next', PyInterpreterState)], + PyInterpreterStateStruct) +PyThreadState = lltype.Ptr(cpython_struct( + "PyThreadState", + [('interp', PyInterpreterState), + ('dict', PyObject), + ])) @cpython_api([], PyThreadState, error=CANNOT_FAIL) def PyEval_SaveThread(space): @@ -38,41 +45,49 @@ return 1 # XXX: might be generally useful -def encapsulator(T, flavor='raw'): +def encapsulator(T, flavor='raw', dealloc=None): class MemoryCapsule(object): - def __init__(self, alloc=True): - if alloc: + def __init__(self, space): + self.space = space + if space is not None: self.memory = lltype.malloc(T, flavor=flavor) else: self.memory = lltype.nullptr(T) def __del__(self): if self.memory: + if dealloc and self.space: + dealloc(self.memory, self.space) lltype.free(self.memory, flavor=flavor) return MemoryCapsule -ThreadStateCapsule = encapsulator(PyThreadState.TO) +def ThreadState_dealloc(ts, space): + assert space is not None + Py_DecRef(space, ts.c_dict) +ThreadStateCapsule = encapsulator(PyThreadState.TO, + dealloc=ThreadState_dealloc) from pypy.interpreter.executioncontext import ExecutionContext -ExecutionContext.cpyext_threadstate = ThreadStateCapsule(alloc=False) +ExecutionContext.cpyext_threadstate = ThreadStateCapsule(None) class InterpreterState(object): def __init__(self, space): self.interpreter_state = lltype.malloc( PyInterpreterState.TO, flavor='raw', zero=True, immortal=True) - def new_thread_state(self): - capsule = ThreadStateCapsule() + def new_thread_state(self, space): + capsule = ThreadStateCapsule(space) ts = capsule.memory ts.c_interp = self.interpreter_state + ts.c_dict = make_ref(space, space.newdict()) return capsule def get_thread_state(self, space): ec = space.getexecutioncontext() - return self._get_thread_state(ec).memory + return self._get_thread_state(space, ec).memory - def _get_thread_state(self, ec): + def _get_thread_state(self, space, ec): if ec.cpyext_threadstate.memory == lltype.nullptr(PyThreadState.TO): - ec.cpyext_threadstate = self.new_thread_state() + ec.cpyext_threadstate = self.new_thread_state(space) return ec.cpyext_threadstate @@ -81,6 +96,11 @@ state = space.fromcache(InterpreterState) return state.get_thread_state(space) + at cpython_api([], PyObject, error=CANNOT_FAIL) +def PyThreadState_GetDict(space): + 
state = space.fromcache(InterpreterState) + return state.get_thread_state(space).c_dict + @cpython_api([PyThreadState], PyThreadState, error=CANNOT_FAIL) def PyThreadState_Swap(space, tstate): """Swap the current thread state with the thread state given by the argument diff --git a/pypy/module/cpyext/pythonrun.py b/pypy/module/cpyext/pythonrun.py --- a/pypy/module/cpyext/pythonrun.py +++ b/pypy/module/cpyext/pythonrun.py @@ -14,6 +14,20 @@ value.""" return space.fromcache(State).get_programname() + at cpython_api([], rffi.CCHARP) +def Py_GetVersion(space): + """Return the version of this Python interpreter. This is a + string that looks something like + + "1.5 (\#67, Dec 31 1997, 22:34:28) [GCC 2.7.2.2]" + + The first word (up to the first space character) is the current + Python version; the first three characters are the major and minor + version separated by a period. The returned string points into + static storage; the caller should not modify its value. The value + is available to Python code as sys.version.""" + return space.fromcache(State).get_version() + @cpython_api([lltype.Ptr(lltype.FuncType([], lltype.Void))], rffi.INT_real, error=-1) def Py_AtExit(space, func_ptr): """Register a cleanup function to be called by Py_Finalize(). The cleanup diff --git a/pypy/module/cpyext/setobject.py b/pypy/module/cpyext/setobject.py --- a/pypy/module/cpyext/setobject.py +++ b/pypy/module/cpyext/setobject.py @@ -54,6 +54,20 @@ return 0 + at cpython_api([PyObject], PyObject) +def PySet_Pop(space, w_set): + """Return a new reference to an arbitrary object in the set, and removes the + object from the set. Return NULL on failure. Raise KeyError if the + set is empty. Raise a SystemError if set is an not an instance of + set or its subtype.""" + return space.call_method(w_set, "pop") + + at cpython_api([PyObject], rffi.INT_real, error=-1) +def PySet_Clear(space, w_set): + """Empty an existing set of all elements.""" + space.call_method(w_set, 'clear') + return 0 + @cpython_api([PyObject], Py_ssize_t, error=CANNOT_FAIL) def PySet_GET_SIZE(space, w_s): """Macro form of PySet_Size() without error checking.""" diff --git a/pypy/module/cpyext/slotdefs.py b/pypy/module/cpyext/slotdefs.py --- a/pypy/module/cpyext/slotdefs.py +++ b/pypy/module/cpyext/slotdefs.py @@ -185,6 +185,15 @@ space.fromcache(State).check_and_raise_exception(always=True) return space.wrap(res) +def wrap_delitem(space, w_self, w_args, func): + func_target = rffi.cast(objobjargproc, func) + check_num_args(space, w_args, 1) + w_key, = space.fixedview(w_args) + res = generic_cpy_call(space, func_target, w_self, w_key, None) + if rffi.cast(lltype.Signed, res) == -1: + space.fromcache(State).check_and_raise_exception(always=True) + return space.w_None + def wrap_ssizessizeargfunc(space, w_self, w_args, func): func_target = rffi.cast(ssizessizeargfunc, func) check_num_args(space, w_args, 2) @@ -291,6 +300,14 @@ def slot_nb_int(space, w_self): return space.int(w_self) + at cpython_api([PyObject], PyObject, external=False) +def slot_tp_iter(space, w_self): + return space.iter(w_self) + + at cpython_api([PyObject], PyObject, external=False) +def slot_tp_iternext(space, w_self): + return space.next(w_self) + from pypy.rlib.nonconst import NonConstant SLOTS = {} @@ -632,6 +649,19 @@ TPSLOT("__buffer__", "tp_as_buffer.c_bf_getreadbuffer", None, "wrap_getreadbuffer", ""), ) +# partial sort to solve some slot conflicts: +# Number slots before Mapping slots before Sequence slots. 
+# These are the only conflicts between __name__ methods +def slotdef_sort_key(slotdef): + if slotdef.slot_name.startswith('tp_as_number'): + return 1 + if slotdef.slot_name.startswith('tp_as_mapping'): + return 2 + if slotdef.slot_name.startswith('tp_as_sequence'): + return 3 + return 0 +slotdefs = sorted(slotdefs, key=slotdef_sort_key) + slotdefs_for_tp_slots = unrolling_iterable( [(x.method_name, x.slot_name, x.slot_names, x.slot_func) for x in slotdefs]) diff --git a/pypy/module/cpyext/state.py b/pypy/module/cpyext/state.py --- a/pypy/module/cpyext/state.py +++ b/pypy/module/cpyext/state.py @@ -10,6 +10,7 @@ self.space = space self.reset() self.programname = lltype.nullptr(rffi.CCHARP.TO) + self.version = lltype.nullptr(rffi.CCHARP.TO) def reset(self): from pypy.module.cpyext.modsupport import PyMethodDef @@ -102,6 +103,15 @@ lltype.render_immortal(self.programname) return self.programname + def get_version(self): + if not self.version: + space = self.space + w_version = space.sys.get('version') + version = space.str_w(w_version) + self.version = rffi.str2charp(version) + lltype.render_immortal(self.version) + return self.version + def find_extension(self, name, path): from pypy.module.cpyext.modsupport import PyImport_AddModule from pypy.interpreter.module import Module diff --git a/pypy/module/cpyext/stringobject.py b/pypy/module/cpyext/stringobject.py --- a/pypy/module/cpyext/stringobject.py +++ b/pypy/module/cpyext/stringobject.py @@ -250,6 +250,26 @@ s = rffi.charp2str(string) return space.new_interned_str(s) + at cpython_api([PyObjectP], lltype.Void) +def PyString_InternInPlace(space, string): + """Intern the argument *string in place. The argument must be the + address of a pointer variable pointing to a Python string object. + If there is an existing interned string that is the same as + *string, it sets *string to it (decrementing the reference count + of the old string object and incrementing the reference count of + the interned string object), otherwise it leaves *string alone and + interns it (incrementing its reference count). (Clarification: + even though there is a lot of talk about reference counts, think + of this function as reference-count-neutral; you own the object + after the call if and only if you owned it before the call.) 
+ + This function is not available in 3.x and does not have a PyBytes + alias.""" + w_str = from_ref(space, string[0]) + w_str = space.new_interned_w_str(w_str) + Py_DecRef(space, string[0]) + string[0] = make_ref(space, w_str) + @cpython_api([PyObject, rffi.CCHARP, rffi.CCHARP], PyObject) def PyString_AsEncodedObject(space, w_str, encoding, errors): """Encode a string object using the codec registered for encoding and return diff --git a/pypy/module/cpyext/structmemberdefs.py b/pypy/module/cpyext/structmemberdefs.py --- a/pypy/module/cpyext/structmemberdefs.py +++ b/pypy/module/cpyext/structmemberdefs.py @@ -1,3 +1,5 @@ +# These constants are also in include/structmember.h + T_SHORT = 0 T_INT = 1 T_LONG = 2 @@ -18,3 +20,6 @@ T_ULONGLONG = 18 READONLY = RO = 1 +READ_RESTRICTED = 2 +WRITE_RESTRICTED = 4 +RESTRICTED = READ_RESTRICTED | WRITE_RESTRICTED diff --git a/pypy/module/cpyext/stubs.py b/pypy/module/cpyext/stubs.py --- a/pypy/module/cpyext/stubs.py +++ b/pypy/module/cpyext/stubs.py @@ -1,5 +1,5 @@ from pypy.module.cpyext.api import ( - cpython_api, PyObject, PyObjectP, CANNOT_FAIL, Py_buffer + cpython_api, PyObject, PyObjectP, CANNOT_FAIL ) from pypy.module.cpyext.complexobject import Py_complex_ptr as Py_complex from pypy.rpython.lltypesystem import rffi, lltype @@ -10,6 +10,7 @@ PyMethodDef = rffi.VOIDP PyGetSetDef = rffi.VOIDP PyMemberDef = rffi.VOIDP +Py_buffer = rffi.VOIDP va_list = rffi.VOIDP PyDateTime_Date = rffi.VOIDP PyDateTime_DateTime = rffi.VOIDP @@ -32,10 +33,6 @@ def _PyObject_Del(space, op): raise NotImplementedError - at cpython_api([PyObject], rffi.INT_real, error=CANNOT_FAIL) -def PyObject_CheckBuffer(space, obj): - raise NotImplementedError - @cpython_api([rffi.CCHARP], Py_ssize_t, error=CANNOT_FAIL) def PyBuffer_SizeFromFormat(space, format): """Return the implied ~Py_buffer.itemsize from the struct-stype @@ -684,28 +681,6 @@ """ raise NotImplementedError - at cpython_api([PyObject, rffi.INT_real], rffi.INT_real, error=CANNOT_FAIL) -def PyFile_SoftSpace(space, p, newflag): - """ - This function exists for internal use by the interpreter. Set the - softspace attribute of p to newflag and return the previous value. - p does not have to be a file object for this function to work properly; any - object is supported (thought its only interesting if the softspace - attribute can be set). This function clears any errors, and will return 0 - as the previous value if the attribute either does not exist or if there were - errors in retrieving it. There is no way to detect errors from this function, - but doing so should not be needed.""" - raise NotImplementedError - - at cpython_api([PyObject, PyObject, rffi.INT_real], rffi.INT_real, error=-1) -def PyFile_WriteObject(space, obj, p, flags): - """ - Write object obj to file object p. The only supported flag for flags is - Py_PRINT_RAW; if given, the str() of the object is written - instead of the repr(). Return 0 on success or -1 on failure; the - appropriate exception will be set.""" - raise NotImplementedError - @cpython_api([], PyObject) def PyFloat_GetInfo(space): """Return a structseq instance which contains information about the @@ -1097,19 +1072,6 @@ raise NotImplementedError @cpython_api([], rffi.CCHARP) -def Py_GetVersion(space): - """Return the version of this Python interpreter. 
This is a string that looks - something like - - "1.5 (\#67, Dec 31 1997, 22:34:28) [GCC 2.7.2.2]" - - The first word (up to the first space character) is the current Python version; - the first three characters are the major and minor version separated by a - period. The returned string points into static storage; the caller should not - modify its value. The value is available to Python code as sys.version.""" - raise NotImplementedError - - at cpython_api([], rffi.CCHARP) def Py_GetPlatform(space): """Return the platform identifier for the current platform. On Unix, this is formed from the"official" name of the operating system, converted to lower @@ -1685,15 +1647,6 @@ """ raise NotImplementedError - at cpython_api([PyObject], PyObject) -def PyObject_Dir(space, o): - """This is equivalent to the Python expression dir(o), returning a (possibly - empty) list of strings appropriate for the object argument, or NULL if there - was an error. If the argument is NULL, this is like the Python dir(), - returning the names of the current locals; in this case, if no execution frame - is active then NULL is returned but PyErr_Occurred() will return false.""" - raise NotImplementedError - @cpython_api([], PyFrameObject) def PyEval_GetFrame(space): """Return the current thread state's frame, which is NULL if no frame is @@ -1802,34 +1755,6 @@ building-up new frozensets with PySet_Add().""" raise NotImplementedError - at cpython_api([PyObject], PyObject) -def PySet_Pop(space, set): - """Return a new reference to an arbitrary object in the set, and removes the - object from the set. Return NULL on failure. Raise KeyError if the - set is empty. Raise a SystemError if set is an not an instance of - set or its subtype.""" - raise NotImplementedError - - at cpython_api([PyObject], rffi.INT_real, error=-1) -def PySet_Clear(space, set): - """Empty an existing set of all elements.""" - raise NotImplementedError - - at cpython_api([PyObjectP], lltype.Void) -def PyString_InternInPlace(space, string): - """Intern the argument *string in place. The argument must be the address of a - pointer variable pointing to a Python string object. If there is an existing - interned string that is the same as *string, it sets *string to it - (decrementing the reference count of the old string object and incrementing the - reference count of the interned string object), otherwise it leaves *string - alone and interns it (incrementing its reference count). (Clarification: even - though there is a lot of talk about reference counts, think of this function as - reference-count-neutral; you own the object after the call if and only if you - owned it before the call.) 
- - This function is not available in 3.x and does not have a PyBytes alias.""" - raise NotImplementedError - @cpython_api([rffi.CCHARP, Py_ssize_t, rffi.CCHARP, rffi.CCHARP], PyObject) def PyString_Decode(space, s, size, encoding, errors): """Create an object by decoding size bytes of the encoded buffer s using the diff --git a/pypy/module/cpyext/test/test_arraymodule.py b/pypy/module/cpyext/test/test_arraymodule.py --- a/pypy/module/cpyext/test/test_arraymodule.py +++ b/pypy/module/cpyext/test/test_arraymodule.py @@ -43,6 +43,15 @@ assert arr[:2].tolist() == [1,2] assert arr[1:3].tolist() == [2,3] + def test_slice_object(self): + module = self.import_module(name='array') + arr = module.array('i', [1,2,3,4]) + assert arr[slice(1,3)].tolist() == [2,3] + arr[slice(1,3)] = module.array('i', [21, 22, 23]) + assert arr.tolist() == [1, 21, 22, 23, 4] + del arr[slice(1, 3)] + assert arr.tolist() == [1, 23, 4] + def test_buffer(self): module = self.import_module(name='array') arr = module.array('i', [1,2,3,4]) diff --git a/pypy/module/cpyext/test/test_cpyext.py b/pypy/module/cpyext/test/test_cpyext.py --- a/pypy/module/cpyext/test/test_cpyext.py +++ b/pypy/module/cpyext/test/test_cpyext.py @@ -744,6 +744,22 @@ print p assert 'py' in p + def test_get_version(self): + mod = self.import_extension('foo', [ + ('get_version', 'METH_NOARGS', + ''' + char* name1 = Py_GetVersion(); + char* name2 = Py_GetVersion(); + if (name1 != name2) + Py_RETURN_FALSE; + return PyString_FromString(name1); + ''' + ), + ]) + p = mod.get_version() + print p + assert 'PyPy' in p + def test_no_double_imports(self): import sys, os try: diff --git a/pypy/module/cpyext/test/test_dictobject.py b/pypy/module/cpyext/test/test_dictobject.py --- a/pypy/module/cpyext/test/test_dictobject.py +++ b/pypy/module/cpyext/test/test_dictobject.py @@ -2,6 +2,7 @@ from pypy.module.cpyext.test.test_api import BaseApiTest from pypy.module.cpyext.api import Py_ssize_tP, PyObjectP from pypy.module.cpyext.pyobject import make_ref, from_ref +from pypy.interpreter.error import OperationError class TestDictObject(BaseApiTest): def test_dict(self, space, api): @@ -110,3 +111,44 @@ assert space.eq_w(space.len(w_copy), space.len(w_dict)) assert space.eq_w(w_copy, w_dict) + + def test_iterkeys(self, space, api): + w_dict = space.sys.getdict(space) + py_dict = make_ref(space, w_dict) + + ppos = lltype.malloc(Py_ssize_tP.TO, 1, flavor='raw') + pkey = lltype.malloc(PyObjectP.TO, 1, flavor='raw') + pvalue = lltype.malloc(PyObjectP.TO, 1, flavor='raw') + + keys_w = [] + values_w = [] + try: + ppos[0] = 0 + while api.PyDict_Next(w_dict, ppos, pkey, None): + w_key = from_ref(space, pkey[0]) + keys_w.append(w_key) + ppos[0] = 0 + while api.PyDict_Next(w_dict, ppos, None, pvalue): + w_value = from_ref(space, pvalue[0]) + values_w.append(w_value) + finally: + lltype.free(ppos, flavor='raw') + lltype.free(pkey, flavor='raw') + lltype.free(pvalue, flavor='raw') + + api.Py_DecRef(py_dict) # release borrowed references + + assert space.eq_w(space.newlist(keys_w), + space.call_method(w_dict, "keys")) + assert space.eq_w(space.newlist(values_w), + space.call_method(w_dict, "values")) + + def test_dictproxy(self, space, api): + w_dict = space.sys.get('modules') + w_proxy = api.PyDictProxy_New(w_dict) + assert space.is_true(space.contains(w_proxy, space.wrap('sys'))) + raises(OperationError, space.setitem, + w_proxy, space.wrap('sys'), space.w_None) + raises(OperationError, space.delitem, + w_proxy, space.wrap('sys')) + raises(OperationError, space.call_method, w_proxy, 
'clear') diff --git a/pypy/module/cpyext/test/test_methodobject.py b/pypy/module/cpyext/test/test_methodobject.py --- a/pypy/module/cpyext/test/test_methodobject.py +++ b/pypy/module/cpyext/test/test_methodobject.py @@ -9,7 +9,7 @@ class AppTestMethodObject(AppTestCpythonExtensionBase): def test_call_METH(self): - mod = self.import_extension('foo', [ + mod = self.import_extension('MyModule', [ ('getarg_O', 'METH_O', ''' Py_INCREF(args); @@ -51,11 +51,23 @@ } ''' ), + ('getModule', 'METH_O', + ''' + if(PyCFunction_Check(args)) { + PyCFunctionObject* func = (PyCFunctionObject*)args; + Py_INCREF(func->m_module); + return func->m_module; + } + else { + Py_RETURN_FALSE; + } + ''' + ), ('isSameFunction', 'METH_O', ''' PyCFunction ptr = PyCFunction_GetFunction(args); if (!ptr) return NULL; - if (ptr == foo_getarg_O) + if (ptr == MyModule_getarg_O) Py_RETURN_TRUE; else Py_RETURN_FALSE; @@ -76,6 +88,7 @@ assert mod.getarg_OLD(1, 2) == (1, 2) assert mod.isCFunction(mod.getarg_O) == "getarg_O" + assert mod.getModule(mod.getarg_O) == 'MyModule' assert mod.isSameFunction(mod.getarg_O) raises(TypeError, mod.isSameFunction, 1) diff --git a/pypy/module/cpyext/test/test_object.py b/pypy/module/cpyext/test/test_object.py --- a/pypy/module/cpyext/test/test_object.py +++ b/pypy/module/cpyext/test/test_object.py @@ -191,6 +191,11 @@ assert api.PyObject_Unicode(space.wrap("\xe9")) is None api.PyErr_Clear() + def test_dir(self, space, api): + w_dir = api.PyObject_Dir(space.sys) + assert space.isinstance_w(w_dir, space.w_list) + assert space.is_true(space.contains(w_dir, space.wrap('modules'))) + class AppTestObject(AppTestCpythonExtensionBase): def setup_class(cls): AppTestCpythonExtensionBase.setup_class.im_func(cls) diff --git a/pypy/module/cpyext/test/test_pyfile.py b/pypy/module/cpyext/test/test_pyfile.py --- a/pypy/module/cpyext/test/test_pyfile.py +++ b/pypy/module/cpyext/test/test_pyfile.py @@ -1,5 +1,6 @@ from pypy.module.cpyext.api import fopen, fclose, fwrite from pypy.module.cpyext.test.test_api import BaseApiTest +from pypy.module.cpyext.object import Py_PRINT_RAW from pypy.rpython.lltypesystem import rffi, lltype from pypy.tool.udir import udir import pytest @@ -77,3 +78,28 @@ out = out.replace('\r\n', '\n') assert out == "test\n" + def test_file_writeobject(self, space, api, capfd): + w_obj = space.wrap("test\n") + w_stdout = space.sys.get("stdout") + api.PyFile_WriteObject(w_obj, w_stdout, Py_PRINT_RAW) + api.PyFile_WriteObject(w_obj, w_stdout, 0) + space.call_method(w_stdout, "flush") + out, err = capfd.readouterr() + out = out.replace('\r\n', '\n') + assert out == "test\n'test\\n'" + + def test_file_softspace(self, space, api, capfd): + w_stdout = space.sys.get("stdout") + assert api.PyFile_SoftSpace(w_stdout, 1) == 0 + assert api.PyFile_SoftSpace(w_stdout, 0) == 1 + + api.PyFile_SoftSpace(w_stdout, 1) + w_ns = space.newdict() + space.exec_("print 1,", w_ns, w_ns) + space.exec_("print 2,", w_ns, w_ns) + api.PyFile_SoftSpace(w_stdout, 0) + space.exec_("print 3", w_ns, w_ns) + space.call_method(w_stdout, "flush") + out, err = capfd.readouterr() + out = out.replace('\r\n', '\n') + assert out == " 1 23\n" diff --git a/pypy/module/cpyext/test/test_pystate.py b/pypy/module/cpyext/test/test_pystate.py --- a/pypy/module/cpyext/test/test_pystate.py +++ b/pypy/module/cpyext/test/test_pystate.py @@ -2,6 +2,7 @@ from pypy.module.cpyext.test.test_api import BaseApiTest from pypy.rpython.lltypesystem.lltype import nullptr from pypy.module.cpyext.pystate import PyInterpreterState, PyThreadState +from 
pypy.module.cpyext.pyobject import from_ref class AppTestThreads(AppTestCpythonExtensionBase): def test_allow_threads(self): @@ -49,3 +50,10 @@ api.PyEval_AcquireThread(tstate) api.PyEval_ReleaseThread(tstate) + + def test_threadstate_dict(self, space, api): + ts = api.PyThreadState_Get() + ref = ts.c_dict + assert ref == api.PyThreadState_GetDict() + w_obj = from_ref(space, ref) + assert space.isinstance_w(w_obj, space.w_dict) diff --git a/pypy/module/cpyext/test/test_setobject.py b/pypy/module/cpyext/test/test_setobject.py --- a/pypy/module/cpyext/test/test_setobject.py +++ b/pypy/module/cpyext/test/test_setobject.py @@ -32,3 +32,13 @@ w_set = api.PySet_New(space.wrap([1,2,3,4])) assert api.PySet_Contains(w_set, space.wrap(1)) assert not api.PySet_Contains(w_set, space.wrap(0)) + + def test_set_pop_clear(self, space, api): + w_set = api.PySet_New(space.wrap([1,2,3,4])) + w_obj = api.PySet_Pop(w_set) + assert space.int_w(w_obj) in (1,2,3,4) + assert space.len_w(w_set) == 3 + api.PySet_Clear(w_set) + assert space.len_w(w_set) == 0 + + diff --git a/pypy/module/cpyext/test/test_stringobject.py b/pypy/module/cpyext/test/test_stringobject.py --- a/pypy/module/cpyext/test/test_stringobject.py +++ b/pypy/module/cpyext/test/test_stringobject.py @@ -166,6 +166,20 @@ res = module.test_string_format(1, "xyz") assert res == "bla 1 ble xyz\n" + def test_intern_inplace(self): + module = self.import_extension('foo', [ + ("test_intern_inplace", "METH_O", + ''' + PyObject *s = args; + Py_INCREF(s); + PyString_InternInPlace(&s); + return s; + ''' + ) + ]) + # This does not test much, but at least the refcounts are checked. + assert module.test_intern_inplace('s') == 's' + class TestString(BaseApiTest): def test_string_resize(self, space, api): py_str = new_empty_str(space, 10) diff --git a/pypy/module/cpyext/test/test_typeobject.py b/pypy/module/cpyext/test/test_typeobject.py --- a/pypy/module/cpyext/test/test_typeobject.py +++ b/pypy/module/cpyext/test/test_typeobject.py @@ -425,3 +425,32 @@ ''') obj = module.new_obj() raises(ZeroDivisionError, obj.__setitem__, 5, None) + + def test_tp_iter(self): + module = self.import_extension('foo', [ + ("tp_iter", "METH_O", + ''' + if (!args->ob_type->tp_iter) + { + PyErr_SetNone(PyExc_ValueError); + return NULL; + } + return args->ob_type->tp_iter(args); + ''' + ), + ("tp_iternext", "METH_O", + ''' + if (!args->ob_type->tp_iternext) + { + PyErr_SetNone(PyExc_ValueError); + return NULL; + } + return args->ob_type->tp_iternext(args); + ''' + ) + ]) + l = [1] + it = module.tp_iter(l) + assert type(it) is type(iter([])) + assert module.tp_iternext(it) == 1 + raises(StopIteration, module.tp_iternext, it) diff --git a/pypy/module/cpyext/test/test_unicodeobject.py b/pypy/module/cpyext/test/test_unicodeobject.py --- a/pypy/module/cpyext/test/test_unicodeobject.py +++ b/pypy/module/cpyext/test/test_unicodeobject.py @@ -420,3 +420,12 @@ w_seq = space.wrap([u'a', u'b']) w_joined = api.PyUnicode_Join(w_sep, w_seq) assert space.unwrap(w_joined) == u'ab' + + def test_fromordinal(self, space, api): + w_char = api.PyUnicode_FromOrdinal(65) + assert space.unwrap(w_char) == u'A' + w_char = api.PyUnicode_FromOrdinal(0) + assert space.unwrap(w_char) == u'\0' + w_char = api.PyUnicode_FromOrdinal(0xFFFF) + assert space.unwrap(w_char) == u'\uFFFF' + diff --git a/pypy/module/cpyext/unicodeobject.py b/pypy/module/cpyext/unicodeobject.py --- a/pypy/module/cpyext/unicodeobject.py +++ b/pypy/module/cpyext/unicodeobject.py @@ -395,6 +395,16 @@ w_str = space.wrap(rffi.charpsize2str(s, size)) 
return space.call_method(w_str, 'decode', space.wrap("utf-8")) + at cpython_api([rffi.INT_real], PyObject) +def PyUnicode_FromOrdinal(space, ordinal): + """Create a Unicode Object from the given Unicode code point ordinal. + + The ordinal must be in range(0x10000) on narrow Python builds + (UCS2), and range(0x110000) on wide builds (UCS4). A ValueError is + raised in case it is not.""" + w_ordinal = space.wrap(rffi.cast(lltype.Signed, ordinal)) + return space.call_function(space.builtin.get('unichr'), w_ordinal) + @cpython_api([PyObjectP, Py_ssize_t], rffi.INT_real, error=-1) def PyUnicode_Resize(space, ref, newsize): # XXX always create a new string so far diff --git a/pypy/module/micronumpy/__init__.py b/pypy/module/micronumpy/__init__.py --- a/pypy/module/micronumpy/__init__.py +++ b/pypy/module/micronumpy/__init__.py @@ -67,10 +67,12 @@ ("arccos", "arccos"), ("arcsin", "arcsin"), ("arctan", "arctan"), + ("arccosh", "arccosh"), ("arcsinh", "arcsinh"), ("arctanh", "arctanh"), ("copysign", "copysign"), ("cos", "cos"), + ("cosh", "cosh"), ("divide", "divide"), ("true_divide", "true_divide"), ("equal", "equal"), @@ -90,9 +92,11 @@ ("reciprocal", "reciprocal"), ("sign", "sign"), ("sin", "sin"), + ("sinh", "sinh"), ("subtract", "subtract"), ('sqrt', 'sqrt'), ("tan", "tan"), + ("tanh", "tanh"), ('bitwise_and', 'bitwise_and'), ('bitwise_or', 'bitwise_or'), ('bitwise_xor', 'bitwise_xor'), diff --git a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py --- a/pypy/module/micronumpy/interp_ufuncs.py +++ b/pypy/module/micronumpy/interp_ufuncs.py @@ -435,7 +435,11 @@ ("arcsin", "arcsin", 1, {"promote_to_float": True}), ("arccos", "arccos", 1, {"promote_to_float": True}), ("arctan", "arctan", 1, {"promote_to_float": True}), + ("sinh", "sinh", 1, {"promote_to_float": True}), + ("cosh", "cosh", 1, {"promote_to_float": True}), + ("tanh", "tanh", 1, {"promote_to_float": True}), ("arcsinh", "arcsinh", 1, {"promote_to_float": True}), + ("arccosh", "arccosh", 1, {"promote_to_float": True}), ("arctanh", "arctanh", 1, {"promote_to_float": True}), ]: self.add_ufunc(space, *ufunc_def) diff --git a/pypy/module/micronumpy/test/test_ufuncs.py b/pypy/module/micronumpy/test/test_ufuncs.py --- a/pypy/module/micronumpy/test/test_ufuncs.py +++ b/pypy/module/micronumpy/test/test_ufuncs.py @@ -310,6 +310,33 @@ b = arctan(a) assert math.isnan(b[0]) + def test_sinh(self): + import math + from _numpypy import array, sinh + + a = array([-1, 0, 1, float('inf'), float('-inf')]) + b = sinh(a) + for i in range(len(a)): + assert b[i] == math.sinh(a[i]) + + def test_cosh(self): + import math + from _numpypy import array, cosh + + a = array([-1, 0, 1, float('inf'), float('-inf')]) + b = cosh(a) + for i in range(len(a)): + assert b[i] == math.cosh(a[i]) + + def test_tanh(self): + import math + from _numpypy import array, tanh + + a = array([-1, 0, 1, float('inf'), float('-inf')]) + b = tanh(a) + for i in range(len(a)): + assert b[i] == math.tanh(a[i]) + def test_arcsinh(self): import math from _numpypy import arcsinh @@ -318,6 +345,15 @@ assert math.asinh(v) == arcsinh(v) assert math.isnan(arcsinh(float("nan"))) + def test_arccosh(self): + import math + from _numpypy import arccosh + + for v in [1.0, 1.1, 2]: + assert math.acosh(v) == arccosh(v) + for v in [-1.0, 0, .99]: + assert math.isnan(arccosh(v)) + def test_arctanh(self): import math from _numpypy import arctanh diff --git a/pypy/module/micronumpy/types.py b/pypy/module/micronumpy/types.py --- a/pypy/module/micronumpy/types.py +++ 
b/pypy/module/micronumpy/types.py @@ -489,10 +489,28 @@ return math.atan(v) @simple_unary_op + def sinh(self, v): + return math.sinh(v) + + @simple_unary_op + def cosh(self, v): + return math.cosh(v) + + @simple_unary_op + def tanh(self, v): + return math.tanh(v) + + @simple_unary_op def arcsinh(self, v): return math.asinh(v) @simple_unary_op + def arccosh(self, v): + if v < 1.0: + return rfloat.NAN + return math.acosh(v) + + @simple_unary_op def arctanh(self, v): if v == 1.0 or v == -1.0: return math.copysign(rfloat.INFINITY, v) diff --git a/pypy/module/oracle/interp_error.py b/pypy/module/oracle/interp_error.py --- a/pypy/module/oracle/interp_error.py +++ b/pypy/module/oracle/interp_error.py @@ -72,7 +72,7 @@ get(space).w_InternalError, space.wrap("No Oracle error?")) - self.code = codeptr[0] + self.code = rffi.cast(lltype.Signed, codeptr[0]) self.w_message = config.w_string(space, textbuf) finally: lltype.free(codeptr, flavor='raw') diff --git a/pypy/module/oracle/interp_variable.py b/pypy/module/oracle/interp_variable.py --- a/pypy/module/oracle/interp_variable.py +++ b/pypy/module/oracle/interp_variable.py @@ -359,14 +359,14 @@ # Verifies that truncation or other problems did not take place on # retrieve. if self.isVariableLength: - if rffi.cast(lltype.Signed, self.returnCode[pos]) != 0: + error_code = rffi.cast(lltype.Signed, self.returnCode[pos]) + if error_code != 0: error = W_Error(space, self.environment, "Variable_VerifyFetch()", 0) - error.code = self.returnCode[pos] + error.code = error_code error.message = space.wrap( "column at array pos %d fetched with error: %d" % - (pos, - rffi.cast(lltype.Signed, self.returnCode[pos]))) + (pos, error_code)) w_error = get(space).w_DatabaseError raise OperationError(get(space).w_DatabaseError, diff --git a/pypy/objspace/fake/objspace.py b/pypy/objspace/fake/objspace.py --- a/pypy/objspace/fake/objspace.py +++ b/pypy/objspace/fake/objspace.py @@ -326,4 +326,5 @@ return w_some_obj() FakeObjSpace.sys = FakeModule() FakeObjSpace.sys.filesystemencoding = 'foobar' +FakeObjSpace.sys.defaultencoding = 'ascii' FakeObjSpace.builtin = FakeModule() diff --git a/pypy/objspace/flow/flowcontext.py b/pypy/objspace/flow/flowcontext.py --- a/pypy/objspace/flow/flowcontext.py +++ b/pypy/objspace/flow/flowcontext.py @@ -410,7 +410,7 @@ w_new = Constant(newvalue) f = self.crnt_frame stack_items_w = f.locals_stack_w - for i in range(f.valuestackdepth-1, f.nlocals-1, -1): + for i in range(f.valuestackdepth-1, f.pycode.co_nlocals-1, -1): w_v = stack_items_w[i] if isinstance(w_v, Constant): if w_v.value is oldvalue: diff --git a/pypy/objspace/flow/test/test_framestate.py b/pypy/objspace/flow/test/test_framestate.py --- a/pypy/objspace/flow/test/test_framestate.py +++ b/pypy/objspace/flow/test/test_framestate.py @@ -25,7 +25,7 @@ dummy = Constant(None) #dummy.dummy = True arg_list = ([Variable() for i in range(formalargcount)] + - [dummy] * (frame.nlocals - formalargcount)) + [dummy] * (frame.pycode.co_nlocals - formalargcount)) frame.setfastscope(arg_list) return frame @@ -42,7 +42,7 @@ def test_neq_hacked_framestate(self): frame = self.getframe(self.func_simple) fs1 = FrameState(frame) - frame.locals_stack_w[frame.nlocals-1] = Variable() + frame.locals_stack_w[frame.pycode.co_nlocals-1] = Variable() fs2 = FrameState(frame) assert fs1 != fs2 @@ -55,7 +55,7 @@ def test_union_on_hacked_framestates(self): frame = self.getframe(self.func_simple) fs1 = FrameState(frame) - frame.locals_stack_w[frame.nlocals-1] = Variable() + 
frame.locals_stack_w[frame.pycode.co_nlocals-1] = Variable() fs2 = FrameState(frame) assert fs1.union(fs2) == fs2 # fs2 is more general assert fs2.union(fs1) == fs2 # fs2 is more general @@ -63,7 +63,7 @@ def test_restore_frame(self): frame = self.getframe(self.func_simple) fs1 = FrameState(frame) - frame.locals_stack_w[frame.nlocals-1] = Variable() + frame.locals_stack_w[frame.pycode.co_nlocals-1] = Variable() fs1.restoreframe(frame) assert fs1 == FrameState(frame) @@ -82,7 +82,7 @@ def test_getoutputargs(self): frame = self.getframe(self.func_simple) fs1 = FrameState(frame) - frame.locals_stack_w[frame.nlocals-1] = Variable() + frame.locals_stack_w[frame.pycode.co_nlocals-1] = Variable() fs2 = FrameState(frame) outputargs = fs1.getoutputargs(fs2) # 'x' -> 'x' is a Variable @@ -92,16 +92,16 @@ def test_union_different_constants(self): frame = self.getframe(self.func_simple) fs1 = FrameState(frame) - frame.locals_stack_w[frame.nlocals-1] = Constant(42) + frame.locals_stack_w[frame.pycode.co_nlocals-1] = Constant(42) fs2 = FrameState(frame) fs3 = fs1.union(fs2) fs3.restoreframe(frame) - assert isinstance(frame.locals_stack_w[frame.nlocals-1], Variable) - # ^^^ generalized + assert isinstance(frame.locals_stack_w[frame.pycode.co_nlocals-1], + Variable) # generalized def test_union_spectag(self): frame = self.getframe(self.func_simple) fs1 = FrameState(frame) - frame.locals_stack_w[frame.nlocals-1] = Constant(SpecTag()) + frame.locals_stack_w[frame.pycode.co_nlocals-1] = Constant(SpecTag()) fs2 = FrameState(frame) assert fs1.union(fs2) is None # UnionError diff --git a/pypy/translator/c/gcc/trackgcroot.py b/pypy/translator/c/gcc/trackgcroot.py --- a/pypy/translator/c/gcc/trackgcroot.py +++ b/pypy/translator/c/gcc/trackgcroot.py @@ -471,8 +471,8 @@ return [] IGNORE_OPS_WITH_PREFIXES = dict.fromkeys([ - 'cmp', 'test', 'set', 'sahf', 'lahf', 'cltd', 'cld', 'std', - 'rep', 'movs', 'lods', 'stos', 'scas', 'cwtl', 'cwde', 'prefetch', + 'cmp', 'test', 'set', 'sahf', 'lahf', 'cld', 'std', + 'rep', 'movs', 'lods', 'stos', 'scas', 'cwde', 'prefetch', # floating-point operations cannot produce GC pointers 'f', 'cvt', 'ucomi', 'comi', 'subs', 'subp' , 'adds', 'addp', 'xorp', @@ -485,6 +485,8 @@ 'bswap', 'bt', 'rdtsc', 'punpck', 'pshufd', 'pcmp', 'pand', 'psllw', 'pslld', 'psllq', 'paddq', 'pinsr', + # sign-extending moves should not produce GC pointers + 'cbtw', 'cwtl', 'cwtd', 'cltd', 'cltq', 'cqto', # zero-extending moves should not produce GC pointers 'movz', # locked operations should not move GC pointers, at least so far @@ -1695,6 +1697,8 @@ } """ elif self.format in ('elf64', 'darwin64'): + if self.format == 'elf64': # gentoo patch: hardened systems + print >> output, "\t.section .note.GNU-stack,\"\",%progbits" print >> output, "\t.text" print >> output, "\t.globl %s" % _globalname('pypy_asm_stackwalk') _variant(elf64='.type pypy_asm_stackwalk, @function', From noreply at buildbot.pypy.org Sat Feb 18 17:35:39 2012 From: noreply at buildbot.pypy.org (fijal) Date: Sat, 18 Feb 2012 17:35:39 +0100 (CET) Subject: [pypy-commit] pypy numpy-record-dtypes: boring. Add support for dtypes. A lot of refactoring, not too much of Message-ID: <20120218163539.1060B11B2E78@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-record-dtypes Changeset: r52611:2f1522449cee Date: 2012-02-18 18:35 +0200 http://bitbucket.org/pypy/pypy/changeset/2f1522449cee/ Log: boring. Add support for dtypes. A lot of refactoring, not too much of actual code. 
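What the refactoring below is for: with a record ("structured") dtype, indexing an array by field name hands back a view of just that field, sharing storage with the parent array; that is what the new RecordChunk path implements. The snippet is condensed from the tests added later in this changeset and assumes a PyPy build of this branch, since _numpypy is PyPy's own module:

    from _numpypy import array

    a = array([(1, 2), (3, 4)], dtype=[('x', int), ('y', float)])
    assert a['x'][1] == 3      # field indexing picks one field out of each record
    a['x'][0] = 15             # writes through to the parent array
    assert a['x'][0] == 15
    b = a['x'] + a['y']        # field views combine like ordinary arrays
    assert b.dtype == float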
diff --git a/pypy/module/micronumpy/interp_iter.py b/pypy/module/micronumpy/interp_iter.py --- a/pypy/module/micronumpy/interp_iter.py +++ b/pypy/module/micronumpy/interp_iter.py @@ -49,17 +49,58 @@ # structures to describe slicing -class Chunk(object): +class BaseChunk(object): + pass + +class RecordChunk(BaseChunk): + def __init__(self, name): + self.name = name + + def apply(self, arr): + from pypy.module.micronumpy.interp_numarray import W_NDimSlice + + arr = arr.get_concrete() + ofs, subdtype = arr.dtype.fields[self.name] + # strides backstrides are identical, ofs only changes start + return W_NDimSlice(arr.start + ofs, arr.strides[:], arr.backstrides[:], + arr.shape[:], arr, subdtype) + +class Chunks(BaseChunk): + def __init__(self, l): + self.l = l + + @jit.unroll_safe + def extend_shape(self, old_shape): + shape = [] + i = -1 + for i, c in enumerate(self.l): + if c.step != 0: + shape.append(c.lgt) + s = i + 1 + assert s >= 0 + return shape[:] + old_shape[s:] + + def apply(self, space, arr): + from pypy.module.micronumpy.interp_numarray import W_NDimSlice,\ + VirtualSlice, ConcreteArray + + shape = self.extend_shape(arr.shape) + if not isinstance(arr, ConcreteArray): + return VirtualSlice(arr, self, shape) + r = calculate_slice_strides(arr.shape, arr.start, arr.strides, + arr.backstrides, self.l) + _, start, strides, backstrides = r + return W_NDimSlice(start, strides[:], backstrides[:], + shape[:], arr) + + +class Chunk(BaseChunk): def __init__(self, start, stop, step, lgt): self.start = start self.stop = stop self.step = step self.lgt = lgt - def extend_shape(self, shape): - if self.step != 0: - shape.append(self.lgt) - def __repr__(self): return 'Chunk(%d, %d, %d, %d)' % (self.start, self.stop, self.step, self.lgt) diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -13,7 +13,7 @@ from pypy.tool.sourcetools import func_with_new_name from pypy.rlib.rstring import StringBuilder from pypy.module.micronumpy.interp_iter import (ArrayIterator, - SkipLastAxisIterator, Chunk, ViewIterator) + SkipLastAxisIterator, Chunks, Chunk, ViewIterator, RecordChunk) from pypy.module.micronumpy.appbridge import get_appbridge_cache @@ -328,11 +328,18 @@ @jit.unroll_safe def _prepare_slice_args(self, space, w_idx): + if space.isinstance_w(w_idx, space.w_str): + idx = space.str_w(w_idx) + dtype = self.find_dtype() + if not dtype.is_record_type() or idx not in dtype.fields: + raise OperationError(space.w_ValueError, space.wrap( + "field named %s not defined" % idx)) + return RecordChunk(idx) if (space.isinstance_w(w_idx, space.w_int) or space.isinstance_w(w_idx, space.w_slice)): - return [Chunk(*space.decode_index4(w_idx, self.shape[0]))] - return [Chunk(*space.decode_index4(w_item, self.shape[i])) for i, w_item in - enumerate(space.fixedview(w_idx))] + return Chunks([Chunk(*space.decode_index4(w_idx, self.shape[0]))]) + return Chunks([Chunk(*space.decode_index4(w_item, self.shape[i])) for i, w_item in + enumerate(space.fixedview(w_idx))]) def count_all_true(self): sig = self.find_sig() @@ -375,6 +382,17 @@ frame.next(shapelen) return res + def descr_getitem(self, space, w_idx): + if (isinstance(w_idx, BaseArray) and w_idx.shape == self.shape and + w_idx.find_dtype().is_bool_type()): + return self.getitem_filter(space, w_idx) + if self._single_item_result(space, w_idx): + concrete = self.get_concrete() + item = concrete._index_of_single_item(space, w_idx) + 
return concrete.getitem(item) + chunks = self._prepare_slice_args(space, w_idx) + return chunks.apply(self) + def setitem_filter(self, space, idx, val): size = idx.count_all_true() arr = SliceArray([size], self.dtype, self, val) @@ -392,17 +410,6 @@ frame.next_first(shapelen) idxi = idxi.next(shapelen) - def descr_getitem(self, space, w_idx): - if (isinstance(w_idx, BaseArray) and w_idx.shape == self.shape and - w_idx.find_dtype().is_bool_type()): - return self.getitem_filter(space, w_idx) - if self._single_item_result(space, w_idx): - concrete = self.get_concrete() - item = concrete._index_of_single_item(space, w_idx) - return concrete.getitem(item) - chunks = self._prepare_slice_args(space, w_idx) - return self.create_slice(chunks) - def descr_setitem(self, space, w_idx, w_value): self.invalidated() if (isinstance(w_idx, BaseArray) and w_idx.shape == self.shape and @@ -419,26 +426,9 @@ if not isinstance(w_value, BaseArray): w_value = convert_to_array(space, w_value) chunks = self._prepare_slice_args(space, w_idx) - view = self.create_slice(chunks).get_concrete() + view = chunks.apply(self).get_concrete() view.setslice(space, w_value) - @jit.unroll_safe - def create_slice(self, chunks): - shape = [] - i = -1 - for i, chunk in enumerate(chunks): - chunk.extend_shape(shape) - s = i + 1 - assert s >= 0 - shape += self.shape[s:] - if not isinstance(self, ConcreteArray): - return VirtualSlice(self, chunks, shape) - r = calculate_slice_strides(self.shape, self.start, self.strides, - self.backstrides, chunks) - _, start, strides, backstrides = r - return W_NDimSlice(start, strides[:], backstrides[:], - shape[:], self) - def descr_reshape(self, space, args_w): """reshape(...) a.reshape(shape) @@ -741,7 +731,7 @@ def force_if_needed(self): if self.forced_result is None: concr = self.child.get_concrete() - self.forced_result = concr.create_slice(self.chunks) + self.forced_result = self.chunks.apply(concr) def _del_sources(self): self.child = None @@ -1020,13 +1010,15 @@ class W_NDimSlice(ViewArray): - def __init__(self, start, strides, backstrides, shape, parent): + def __init__(self, start, strides, backstrides, shape, parent, dtype=None): assert isinstance(parent, ConcreteArray) if isinstance(parent, W_NDimSlice): parent = parent.parent self.strides = strides self.backstrides = backstrides - ViewArray.__init__(self, shape, parent.dtype, parent.order, parent) + if dtype is None: + dtype = parent.dtype + ViewArray.__init__(self, shape, dtype, parent.order, parent) self.start = start def create_iter(self, transforms=None): @@ -1231,7 +1223,7 @@ for arr in args_w: chunks[axis] = Chunk(axis_start, axis_start + arr.shape[axis], 1, arr.shape[axis]) - res.create_slice(chunks).setslice(space, arr) + chunks.apply(res).setslice(space, arr) axis_start += arr.shape[axis] return res diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -1815,7 +1815,18 @@ assert a[1]['y'] == 2 def test_views(self): - skip("xx") + from _numpypy import array + a = array([(1, 2), (3, 4)], dtype=[('x', int), ('y', float)]) + raises(ValueError, 'array([1])["x"]') + raises(ValueError, 'a["z"]') + assert a['x'][1] == 3 + assert a['y'][1] == 4 + a['x'][0] = 15 + assert a['x'][0] == 15 + b = a['x'] + a['y'] + print b, a + assert (b == [15+2, 3+4]).all() + assert b.dtype == float def test_creation(self): from _numpypy import array From noreply at buildbot.pypy.org Sat Feb 18 17:40:30 
2012 From: noreply at buildbot.pypy.org (fijal) Date: Sat, 18 Feb 2012 17:40:30 +0100 (CET) Subject: [pypy-commit] pypy numpy-record-dtypes: boring a passing test Message-ID: <20120218164030.6C72411B2E78@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-record-dtypes Changeset: r52612:847b64f81dd5 Date: 2012-02-18 18:40 +0200 http://bitbucket.org/pypy/pypy/changeset/847b64f81dd5/ Log: boring a passing test diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -1824,11 +1824,17 @@ a['x'][0] = 15 assert a['x'][0] == 15 b = a['x'] + a['y'] - print b, a assert (b == [15+2, 3+4]).all() assert b.dtype == float - def test_creation(self): + def test_assign_tuple(self): + from _numpypy import zeros + a = zeros((2, 3), dtype=[('x', int), ('y', float)]) + a[1, 2] = (1, 2) + assert a['x'][1, 2] == 1 + assert a['y'][1, 2] == 2 + + def test_creation_and_repr(self): from _numpypy import array a = array([(1, 2), (3, 4)], dtype=[('x', int), ('y', float)]) assert repr(a[0]) == '(1, 2.0)' From noreply at buildbot.pypy.org Sat Feb 18 17:48:37 2012 From: noreply at buildbot.pypy.org (fijal) Date: Sat, 18 Feb 2012 17:48:37 +0100 (CET) Subject: [pypy-commit] pypy numpy-record-dtypes: fix translation Message-ID: <20120218164837.35A9611B2E78@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-record-dtypes Changeset: r52613:6ee326b81f86 Date: 2012-02-18 18:45 +0200 http://bitbucket.org/pypy/pypy/changeset/6ee326b81f86/ Log: fix translation diff --git a/pypy/module/micronumpy/interp_dtype.py b/pypy/module/micronumpy/interp_dtype.py --- a/pypy/module/micronumpy/interp_dtype.py +++ b/pypy/module/micronumpy/interp_dtype.py @@ -144,13 +144,15 @@ fieldnames=fieldnames) def dtype_from_dict(space, w_dict): - xxx + raise OperationError(space.w_NotImplementedError, space.wrap( + "dtype from dict")) def variable_dtype(space, name): if name[0] in '<>': # ignore byte order, not sure if it's worth it for unicode only if name[0] != byteorder_prefix and name[1] == 'U': - xxx + raise OperationError(space.w_NotImplementedError, space.wrap( + "unimplemented non-native unicode")) name = name[1:] char = name[0] if len(name) == 1: @@ -169,7 +171,8 @@ num = 20 basename = 'void' w_box_type = space.gettypefor(interp_boxes.W_VoidBox) - xxx + raise OperationError(space.w_NotImplementedError, space.wrap( + "pure void dtype")) else: assert char == 'U' basename = 'unicode' @@ -180,6 +183,9 @@ basename + str(8 * itemtype.get_element_size()), char, w_box_type) +def dtype_from_spec(space, name): + raise OperationError(space.w_NotImplementedError, space.wrap( + "dtype from spec")) def descr__new__(space, w_subtype, w_dtype): cache = get_dtype_cache(space) From noreply at buildbot.pypy.org Sat Feb 18 17:48:38 2012 From: noreply at buildbot.pypy.org (fijal) Date: Sat, 18 Feb 2012 17:48:38 +0100 (CET) Subject: [pypy-commit] pypy numpy-record-dtypes: fix the merge point in flat_set_driver Message-ID: <20120218164838.672D011B2E78@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-record-dtypes Changeset: r52614:2644ecaa4fc2 Date: 2012-02-18 18:47 +0200 http://bitbucket.org/pypy/pypy/changeset/2644ecaa4fc2/ Log: fix the merge point in flat_set_driver diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py 
@@ -47,7 +47,7 @@ ) flat_set_driver = jit.JitDriver( greens=['shapelen', 'base'], - reds=['step', 'ai', 'lngth', 'arr', 'basei'], + reds=['step', 'ai', 'ri', 'lngth', 'arr', 'basei'], name='numpy_flatset', ) @@ -1370,8 +1370,7 @@ basei=basei, step=step, res=res, - ri=ri, - ) + ri=ri) w_val = base.getitem(basei.offset) res.setitem(ri.offset, w_val) basei = basei.next_skip_x(shapelen, step) @@ -1394,13 +1393,13 @@ basei = basei.next_skip_x(shapelen, start) while lngth > 0: flat_set_driver.jit_merge_point(shapelen=shapelen, - basei=basei, - base=base, - step=step, - arr=arr, - ai=ai, - lngth=lngth, - ) + basei=basei, + base=base, + step=step, + arr=arr, + ai=ai, + lngth=lngth, + ri=ri) v = arr.getitem(ri.offset).convert_to(base.dtype) base.setitem(basei.offset, v) # need to repeat input values until all assignments are done From noreply at buildbot.pypy.org Sat Feb 18 17:51:08 2012 From: noreply at buildbot.pypy.org (fijal) Date: Sat, 18 Feb 2012 17:51:08 +0100 (CET) Subject: [pypy-commit] pypy numpy-record-dtypes: how did that slip in? Message-ID: <20120218165108.4387E11B2E78@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-record-dtypes Changeset: r52615:34e79c0b19ff Date: 2012-02-18 18:50 +0200 http://bitbucket.org/pypy/pypy/changeset/34e79c0b19ff/ Log: how did that slip in? diff --git a/pypy/module/micronumpy/interp_iter.py b/pypy/module/micronumpy/interp_iter.py --- a/pypy/module/micronumpy/interp_iter.py +++ b/pypy/module/micronumpy/interp_iter.py @@ -80,7 +80,7 @@ assert s >= 0 return shape[:] + old_shape[s:] - def apply(self, space, arr): + def apply(self, arr): from pypy.module.micronumpy.interp_numarray import W_NDimSlice,\ VirtualSlice, ConcreteArray From noreply at buildbot.pypy.org Sat Feb 18 17:53:57 2012 From: noreply at buildbot.pypy.org (fijal) Date: Sat, 18 Feb 2012 17:53:57 +0100 (CET) Subject: [pypy-commit] pypy numpy-record-dtypes: fix those tests Message-ID: <20120218165357.83ACB11B2E78@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-record-dtypes Changeset: r52616:7f4453d0d3d4 Date: 2012-02-18 18:52 +0200 http://bitbucket.org/pypy/pypy/changeset/7f4453d0d3d4/ Log: fix those tests diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -5,7 +5,7 @@ from pypy.interpreter.error import OperationError from pypy.module.micronumpy import signature from pypy.module.micronumpy.appbridge import get_appbridge_cache -from pypy.module.micronumpy.interp_iter import Chunk +from pypy.module.micronumpy.interp_iter import Chunk, Chunks from pypy.module.micronumpy.interp_numarray import W_NDimArray, shape_agreement from pypy.module.micronumpy.test.test_base import BaseNumpyAppTest @@ -20,6 +20,9 @@ return 1 +def create_slice(a, chunks): + return Chunks(chunks).apply(a) + class TestNumArrayDirect(object): def newslice(self, *args): return self.space.newslice(*[self.space.wrap(arg) for arg in args]) @@ -45,54 +48,54 @@ def test_create_slice_f(self): a = W_NDimArray([10, 5, 3], MockDtype(), 'F') - s = a.create_slice([Chunk(3, 0, 0, 1)]) + s = create_slice(a, [Chunk(3, 0, 0, 1)]) assert s.start == 3 assert s.strides == [10, 50] assert s.backstrides == [40, 100] - s = a.create_slice([Chunk(1, 9, 2, 4)]) + s = create_slice(a, [Chunk(1, 9, 2, 4)]) assert s.start == 1 assert s.strides == [2, 10, 50] assert s.backstrides == [6, 40, 100] - s = a.create_slice([Chunk(1, 5, 3, 2), Chunk(1, 2, 1, 1), 
Chunk(1, 0, 0, 1)]) + s = create_slice(a, [Chunk(1, 5, 3, 2), Chunk(1, 2, 1, 1), Chunk(1, 0, 0, 1)]) assert s.shape == [2, 1] assert s.strides == [3, 10] assert s.backstrides == [3, 0] - s = a.create_slice([Chunk(0, 10, 1, 10), Chunk(2, 0, 0, 1)]) + s = create_slice(a, [Chunk(0, 10, 1, 10), Chunk(2, 0, 0, 1)]) assert s.start == 20 assert s.shape == [10, 3] def test_create_slice_c(self): a = W_NDimArray([10, 5, 3], MockDtype(), 'C') - s = a.create_slice([Chunk(3, 0, 0, 1)]) + s = create_slice(a, [Chunk(3, 0, 0, 1)]) assert s.start == 45 assert s.strides == [3, 1] assert s.backstrides == [12, 2] - s = a.create_slice([Chunk(1, 9, 2, 4)]) + s = create_slice(a, [Chunk(1, 9, 2, 4)]) assert s.start == 15 assert s.strides == [30, 3, 1] assert s.backstrides == [90, 12, 2] - s = a.create_slice([Chunk(1, 5, 3, 2), Chunk(1, 2, 1, 1), + s = create_slice(a, [Chunk(1, 5, 3, 2), Chunk(1, 2, 1, 1), Chunk(1, 0, 0, 1)]) assert s.start == 19 assert s.shape == [2, 1] assert s.strides == [45, 3] assert s.backstrides == [45, 0] - s = a.create_slice([Chunk(0, 10, 1, 10), Chunk(2, 0, 0, 1)]) + s = create_slice(a, [Chunk(0, 10, 1, 10), Chunk(2, 0, 0, 1)]) assert s.start == 6 assert s.shape == [10, 3] def test_slice_of_slice_f(self): a = W_NDimArray([10, 5, 3], MockDtype(), 'F') - s = a.create_slice([Chunk(5, 0, 0, 1)]) + s = create_slice(a, [Chunk(5, 0, 0, 1)]) assert s.start == 5 - s2 = s.create_slice([Chunk(3, 0, 0, 1)]) + s2 = create_slice(s, [Chunk(3, 0, 0, 1)]) assert s2.shape == [3] assert s2.strides == [50] assert s2.parent is a assert s2.backstrides == [100] assert s2.start == 35 - s = a.create_slice([Chunk(1, 5, 3, 2)]) - s2 = s.create_slice([Chunk(0, 2, 1, 2), Chunk(2, 0, 0, 1)]) + s = create_slice(a, [Chunk(1, 5, 3, 2)]) + s2 = create_slice(s, [Chunk(0, 2, 1, 2), Chunk(2, 0, 0, 1)]) assert s2.shape == [2, 3] assert s2.strides == [3, 50] assert s2.backstrides == [3, 100] @@ -100,16 +103,16 @@ def test_slice_of_slice_c(self): a = W_NDimArray([10, 5, 3], MockDtype(), order='C') - s = a.create_slice([Chunk(5, 0, 0, 1)]) + s = create_slice(a, [Chunk(5, 0, 0, 1)]) assert s.start == 15 * 5 - s2 = s.create_slice([Chunk(3, 0, 0, 1)]) + s2 = create_slice(s, [Chunk(3, 0, 0, 1)]) assert s2.shape == [3] assert s2.strides == [1] assert s2.parent is a assert s2.backstrides == [2] assert s2.start == 5 * 15 + 3 * 3 - s = a.create_slice([Chunk(1, 5, 3, 2)]) - s2 = s.create_slice([Chunk(0, 2, 1, 2), Chunk(2, 0, 0, 1)]) + s = create_slice(a, [Chunk(1, 5, 3, 2)]) + s2 = create_slice(s, [Chunk(0, 2, 1, 2), Chunk(2, 0, 0, 1)]) assert s2.shape == [2, 3] assert s2.strides == [45, 1] assert s2.backstrides == [45, 2] @@ -117,14 +120,14 @@ def test_negative_step_f(self): a = W_NDimArray([10, 5, 3], MockDtype(), 'F') - s = a.create_slice([Chunk(9, -1, -2, 5)]) + s = create_slice(a, [Chunk(9, -1, -2, 5)]) assert s.start == 9 assert s.strides == [-2, 10, 50] assert s.backstrides == [-8, 40, 100] def test_negative_step_c(self): a = W_NDimArray([10, 5, 3], MockDtype(), order='C') - s = a.create_slice([Chunk(9, -1, -2, 5)]) + s = create_slice(a, [Chunk(9, -1, -2, 5)]) assert s.start == 135 assert s.strides == [-30, 3, 1] assert s.backstrides == [-120, 12, 2] @@ -133,7 +136,7 @@ a = W_NDimArray([10, 5, 3], MockDtype(), 'F') r = a._index_of_single_item(self.space, self.newtuple(1, 2, 2)) assert r == 1 + 2 * 10 + 2 * 50 - s = a.create_slice([Chunk(0, 10, 1, 10), Chunk(2, 0, 0, 1)]) + s = create_slice(a, [Chunk(0, 10, 1, 10), Chunk(2, 0, 0, 1)]) r = s._index_of_single_item(self.space, self.newtuple(1, 0)) assert r == 
a._index_of_single_item(self.space, self.newtuple(1, 2, 0)) r = s._index_of_single_item(self.space, self.newtuple(1, 1)) @@ -143,7 +146,7 @@ a = W_NDimArray([10, 5, 3], MockDtype(), 'C') r = a._index_of_single_item(self.space, self.newtuple(1, 2, 2)) assert r == 1 * 3 * 5 + 2 * 3 + 2 - s = a.create_slice([Chunk(0, 10, 1, 10), Chunk(2, 0, 0, 1)]) + s = create_slice(a, [Chunk(0, 10, 1, 10), Chunk(2, 0, 0, 1)]) r = s._index_of_single_item(self.space, self.newtuple(1, 0)) assert r == a._index_of_single_item(self.space, self.newtuple(1, 2, 0)) r = s._index_of_single_item(self.space, self.newtuple(1, 1)) From noreply at buildbot.pypy.org Sat Feb 18 18:48:46 2012 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Sat, 18 Feb 2012 18:48:46 +0100 (CET) Subject: [pypy-commit] pypy numpypy-ctypes: fix translation. Message-ID: <20120218174846.F155A11B2E78@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: numpypy-ctypes Changeset: r52617:36d4deeac481 Date: 2012-02-18 12:48 -0500 http://bitbucket.org/pypy/pypy/changeset/36d4deeac481/ Log: fix translation. diff --git a/pypy/module/micronumpy/appbridge.py b/pypy/module/micronumpy/appbridge.py --- a/pypy/module/micronumpy/appbridge.py +++ b/pypy/module/micronumpy/appbridge.py @@ -1,6 +1,11 @@ from pypy.rlib.objectmodel import specialize + + at specialize.memo() +def get_module_attr(module): + return "w_" + module.replace(".", "_") + "_module" + class AppBridgeCache(object): w_numpypy_core__methods_module = None w__var = None @@ -22,7 +27,7 @@ @specialize.arg(2, 3) def call_method(self, space, module, name, *args): - module_attr = "w_" + module.replace(".", "_") + "_module" + module_attr = get_module_attr(module) meth_attr = "w_" + name w_meth = getattr(self, meth_attr) if w_meth is None: diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -258,7 +258,8 @@ @jit.unroll_safe def descr_get_strides(self, space): - return space.newtuple([space.wrap(i) for i in self.strides]) + concrete = self.get_concrete() + return space.newtuple([space.wrap(i) for i in concrete.strides]) def descr_get_size(self, space): return space.wrap(self.size) diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -1710,10 +1710,9 @@ raises(ValueError, "array(5).item(1)") def test_ctypes(self): - import gc from _numpypy import array - a = array([1, 2, 3, 4, 5]) + a = array([1, 2, 3, 4, 5.0]) assert a.ctypes._data == a.__array_interface__["data"][0] assert a is a.ctypes._arr @@ -1725,6 +1724,15 @@ assert len(strides) == 1 assert strides[0] == 1 + b = a[2:] + assert b.ctypes._data == a.ctypes._data + b.ctypes.get_strides() + b.ctypes.get_shape() + + c = b + b + c.ctypes.get_strides() + c.ctypes.get_shape() + a = array(2) raises(TypeError, lambda: a.ctypes) From noreply at buildbot.pypy.org Sat Feb 18 22:18:15 2012 From: noreply at buildbot.pypy.org (fijal) Date: Sat, 18 Feb 2012 22:18:15 +0100 (CET) Subject: [pypy-commit] pypy numpy-record-dtypes: one more fix Message-ID: <20120218211815.6D0F911B2E78@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-record-dtypes Changeset: r52618:b4afd2d244b8 Date: 2012-02-18 23:17 +0200 http://bitbucket.org/pypy/pypy/changeset/b4afd2d244b8/ Log: one more fix diff --git a/pypy/module/micronumpy/interp_numarray.py 
b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -1223,7 +1223,7 @@ for arr in args_w: chunks[axis] = Chunk(axis_start, axis_start + arr.shape[axis], 1, arr.shape[axis]) - chunks.apply(res).setslice(space, arr) + Chunks(chunks).apply(res).setslice(space, arr) axis_start += arr.shape[axis] return res From noreply at buildbot.pypy.org Sat Feb 18 22:24:15 2012 From: noreply at buildbot.pypy.org (fijal) Date: Sat, 18 Feb 2012 22:24:15 +0100 (CET) Subject: [pypy-commit] pypy numpy-record-dtypes: another fix Message-ID: <20120218212415.0DDFD11B2E78@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-record-dtypes Changeset: r52619:115f1100e193 Date: 2012-02-18 23:23 +0200 http://bitbucket.org/pypy/pypy/changeset/115f1100e193/ Log: another fix diff --git a/pypy/module/micronumpy/interp_iter.py b/pypy/module/micronumpy/interp_iter.py --- a/pypy/module/micronumpy/interp_iter.py +++ b/pypy/module/micronumpy/interp_iter.py @@ -195,7 +195,7 @@ elif isinstance(t, ViewTransform): r = calculate_slice_strides(self.res_shape, self.offset, self.strides, - self.backstrides, t.chunks) + self.backstrides, t.chunks.l) return ViewIterator(r[1], r[2], r[3], r[0]) @jit.unroll_safe From noreply at buildbot.pypy.org Sat Feb 18 22:27:53 2012 From: noreply at buildbot.pypy.org (fijal) Date: Sat, 18 Feb 2012 22:27:53 +0100 (CET) Subject: [pypy-commit] pypy numpy-record-dtypes: fix test_zjit Message-ID: <20120218212753.F18FA11B2E78@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-record-dtypes Changeset: r52620:2cb4d217dc3d Date: 2012-02-18 23:27 +0200 http://bitbucket.org/pypy/pypy/changeset/2cb4d217dc3d/ Log: fix test_zjit diff --git a/pypy/module/micronumpy/compile.py b/pypy/module/micronumpy/compile.py --- a/pypy/module/micronumpy/compile.py +++ b/pypy/module/micronumpy/compile.py @@ -126,6 +126,11 @@ return int(w_obj.floatval) raise NotImplementedError + def str_w(self, w_obj): + if isinstance(w_obj, StringObject): + return w_obj.v + raise NotImplementedError + def int(self, w_obj): if isinstance(w_obj, IntObject): return w_obj From noreply at buildbot.pypy.org Sat Feb 18 22:45:00 2012 From: noreply at buildbot.pypy.org (edelsohn) Date: Sat, 18 Feb 2012 22:45:00 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: Distinguish ENCODING_AREA from FORCE_INDEX_AREA. Message-ID: <20120218214500.CABFF11B2E78@wyvern.cs.uni-duesseldorf.de> Author: edelsohn Branch: ppc-jit-backend Changeset: r52621:650340cfe68d Date: 2012-02-18 16:43 -0500 http://bitbucket.org/pypy/pypy/changeset/650340cfe68d/ Log: Distinguish ENCODING_AREA from FORCE_INDEX_AREA. Use with scratch_reg. 
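Both PPC changesets below replace hand-written alloc_scratch_reg()/free_scratch_reg() pairs with a "with scratch_reg(self.mc):" block. The helper itself lives in pypy/jit/backend/ppc/codebuilder.py and is not shown in these diffs; purely as an assumption about its shape, a minimal sketch would be a context manager along these lines:

    from contextlib import contextmanager

    @contextmanager
    def scratch_reg(mc):
        # reserve the scratch register for the enclosed block ...
        mc.alloc_scratch_reg()
        try:
            yield
        finally:
            # ... and release it again even if the block raises
            mc.free_scratch_reg()

The point of the rewrite is that allocation and release of the scratch register are always paired. The first changeset also introduces FORCE_INDEX_AREA as a constant separate from ENCODING_AREA (both are currently len(r.MANAGED_REGS) * WORD) and switches the force-index stores over to it, presumably so that the two offsets can later diverge without touching every call site.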
diff --git a/pypy/jit/backend/ppc/ppc_assembler.py b/pypy/jit/backend/ppc/ppc_assembler.py --- a/pypy/jit/backend/ppc/ppc_assembler.py +++ b/pypy/jit/backend/ppc/ppc_assembler.py @@ -94,6 +94,7 @@ EMPTY_LOC = '\xFE' END_OF_LOCS = '\xFF' + FORCE_INDEX_AREA = len(r.MANAGED_REGS) * WORD ENCODING_AREA = len(r.MANAGED_REGS) * WORD OFFSET_SPP_TO_GPR_SAVE_AREA = (FORCE_INDEX + FLOAT_INT_CONVERSION + ENCODING_AREA) @@ -797,10 +798,9 @@ memaddr = self.gen_descr_encoding(descr, args, arglocs) # store addr in force index field - self.mc.alloc_scratch_reg() - self.mc.load_imm(r.SCRATCH, memaddr) - self.mc.store(r.SCRATCH.value, r.SPP.value, self.ENCODING_AREA) - self.mc.free_scratch_reg() + with scratch_reg(self.mc): + self.mc.load_imm(r.SCRATCH, memaddr) + self.mc.store(r.SCRATCH.value, r.SPP.value, self.FORCE_INDEX_AREA) if save_exc: path = self._leave_jitted_hook_save_exc @@ -1041,10 +1041,9 @@ return 0 def _write_fail_index(self, fail_index): - self.mc.alloc_scratch_reg() - self.mc.load_imm(r.SCRATCH, fail_index) - self.mc.store(r.SCRATCH.value, r.SPP.value, self.ENCODING_AREA) - self.mc.free_scratch_reg() + with scratch_reg(self.mc): + self.mc.load_imm(r.SCRATCH, fail_index) + self.mc.store(r.SCRATCH.value, r.SPP.value, self.FORCE_INDEX_AREA) def load(self, loc, value): assert loc.is_reg() and value.is_imm() From noreply at buildbot.pypy.org Sat Feb 18 22:45:02 2012 From: noreply at buildbot.pypy.org (edelsohn) Date: Sat, 18 Feb 2012 22:45:02 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: Use with scratch_reg. Message-ID: <20120218214502.08CF311B2E78@wyvern.cs.uni-duesseldorf.de> Author: edelsohn Branch: ppc-jit-backend Changeset: r52622:49d4fc740389 Date: 2012-02-18 16:44 -0500 http://bitbucket.org/pypy/pypy/changeset/49d4fc740389/ Log: Use with scratch_reg. 
diff --git a/pypy/jit/backend/ppc/opassembler.py b/pypy/jit/backend/ppc/opassembler.py --- a/pypy/jit/backend/ppc/opassembler.py +++ b/pypy/jit/backend/ppc/opassembler.py @@ -12,7 +12,7 @@ from pypy.jit.backend.ppc.helper.assembler import (count_reg_args, Saved_Volatiles) from pypy.jit.backend.ppc.jump import remap_frame_layout -from pypy.jit.backend.ppc.codebuilder import OverwritingBuilder +from pypy.jit.backend.ppc.codebuilder import OverwritingBuilder, scratch_reg from pypy.jit.backend.ppc.regalloc import TempPtr, TempInt from pypy.jit.backend.llsupport import symbolic from pypy.rpython.lltypesystem import rstr, rffi, lltype @@ -288,10 +288,9 @@ adr = self.fail_boxes_int.get_addr_for_num(i) else: assert 0 - self.mc.alloc_scratch_reg() - self.mc.load_imm(r.SCRATCH, adr) - self.mc.storex(loc.value, 0, r.SCRATCH.value) - self.mc.free_scratch_reg() + with scratch_reg(self.mc): + self.mc.load_imm(r.SCRATCH, adr) + self.mc.storex(loc.value, 0, r.SCRATCH.value) elif loc.is_vfp_reg(): assert box.type == FLOAT assert 0, "not implemented yet" @@ -305,13 +304,12 @@ adr = self.fail_boxes_int.get_addr_for_num(i) else: assert 0 - self.mc.alloc_scratch_reg() - self.mov_loc_loc(loc, r.SCRATCH) - # store content of r5 temporary in ENCODING AREA - self.mc.store(r.r5.value, r.SPP.value, 0) - self.mc.load_imm(r.r5, adr) - self.mc.store(r.SCRATCH.value, r.r5.value, 0) - self.mc.free_scratch_reg() + with scratch_reg(self.mc): + self.mov_loc_loc(loc, r.SCRATCH) + # store content of r5 temporary in ENCODING AREA + self.mc.store(r.r5.value, r.SPP.value, 0) + self.mc.load_imm(r.r5, adr) + self.mc.store(r.SCRATCH.value, r.r5.value, 0) # restore r5 self.mc.load(r.r5.value, r.SPP.value, 0) else: @@ -1103,10 +1101,9 @@ def emit_guard_call_may_force(self, op, guard_op, arglocs, regalloc): ENCODING_AREA = len(r.MANAGED_REGS) * WORD - self.mc.alloc_scratch_reg() - self.mc.load(r.SCRATCH.value, r.SPP.value, ENCODING_AREA) - self.mc.cmp_op(0, r.SCRATCH.value, 0, imm=True) - self.mc.free_scratch_reg() + with scratch_reg(self.mc): + self.mc.load(r.SCRATCH.value, r.SPP.value, ENCODING_AREA) + self.mc.cmp_op(0, r.SCRATCH.value, 0, imm=True) self._emit_guard(guard_op, arglocs, c.LT, save_exc=True) emit_guard_call_release_gil = emit_guard_call_may_force From noreply at buildbot.pypy.org Sun Feb 19 00:36:49 2012 From: noreply at buildbot.pypy.org (mattip) Date: Sun, 19 Feb 2012 00:36:49 +0100 (CET) Subject: [pypy-commit] pypy numpypy-out: add BroadcastUfunc iter, more tests pass Message-ID: <20120218233649.30C8511B2E78@wyvern.cs.uni-duesseldorf.de> Author: mattip Branch: numpypy-out Changeset: r52623:b3836fce3c20 Date: 2012-02-18 21:34 +0200 http://bitbucket.org/pypy/pypy/changeset/b3836fce3c20/ Log: add BroadcastUfunc iter, more tests pass diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -786,7 +786,11 @@ if self.forced_result is not None: return self.forced_result.create_sig() if self.shape != self.values.shape: - xxx + #This happens if out arg is used + return signature.BroadcastUfunc(self.ufunc, self.name, + self.calc_dtype, + self.values.create_sig(), + self.res.create_sig()) return signature.Call1(self.ufunc, self.name, self.calc_dtype, self.values.create_sig()) @@ -837,7 +841,8 @@ if res is None: res = W_NDimArray(size, shape, dtype, order) assert isinstance(res, BaseArray) - Call2.__init__(self, None, 'assign', shape, dtype, dtype, res, child) + concr = res.get_concrete() 
+ Call2.__init__(self, None, 'assign', shape, dtype, dtype, concr, child) def create_sig(self): sig = signature.ResultSignature(self.res_dtype, self.left.create_sig(), diff --git a/pypy/module/micronumpy/signature.py b/pypy/module/micronumpy/signature.py --- a/pypy/module/micronumpy/signature.py +++ b/pypy/module/micronumpy/signature.py @@ -216,13 +216,14 @@ return self.child.eval(frame, arr.child) class Call1(Signature): - _immutable_fields_ = ['unfunc', 'name', 'child', 'dtype'] + _immutable_fields_ = ['unfunc', 'name', 'child', 'res', 'dtype'] - def __init__(self, func, name, dtype, child): + def __init__(self, func, name, dtype, child, res=None): self.unfunc = func self.child = child self.name = name self.dtype = dtype + self.res = res def hash(self): return compute_hash(self.name) ^ intmask(self.child.hash() << 1) @@ -256,6 +257,29 @@ v = self.child.eval(frame, arr.values).convert_to(arr.calc_dtype) return self.unfunc(arr.calc_dtype, v) + +class BroadcastUfunc(Call1): + def _invent_numbering(self, cache, allnumbers): + self.res._invent_numbering(cache, allnumbers) + self.child._invent_numbering(new_cache(), allnumbers) + + def debug_repr(self): + return 'BroadcastUfunc(%s, %s)' % (self.name, self.child.debug_repr()) + + def _create_iter(self, iterlist, arraylist, arr, transforms): + from pypy.module.micronumpy.interp_numarray import Call1 + + assert isinstance(arr, Call1) + vtransforms = transforms + [BroadcastTransform(arr.values.shape)] + self.child._create_iter(iterlist, arraylist, arr.values, vtransforms) + self.res._create_iter(iterlist, arraylist, arr.res, transforms) + + def eval(self, frame, arr): + from pypy.module.micronumpy.interp_numarray import Call1 + assert isinstance(arr, Call1) + v = self.child.eval(frame, arr.values).convert_to(arr.calc_dtype) + return self.unfunc(arr.calc_dtype, v) + class Call2(Signature): _immutable_fields_ = ['binfunc', 'name', 'calc_dtype', 'left', 'right'] diff --git a/pypy/module/micronumpy/test/test_outarg.py b/pypy/module/micronumpy/test/test_outarg.py --- a/pypy/module/micronumpy/test/test_outarg.py +++ b/pypy/module/micronumpy/test/test_outarg.py @@ -33,6 +33,7 @@ def test_ufunc_out(self): from _numpypy import array, negative, zeros, sin + from math import sin as msin a = array([[1, 2], [3, 4]]) c = zeros((2,2,2)) b = negative(a + a, out=c[1]) @@ -48,7 +49,7 @@ assert b.shape == c.shape a = array([1, 2]) b = sin(a, out=c) - assert(c == [[-1, -2], [-1, -2]]).all() + assert(c == [[msin(1), msin(2)]] * 2).all() b = sin(a, out=c+c) assert (c == b).all() From noreply at buildbot.pypy.org Sun Feb 19 00:36:50 2012 From: noreply at buildbot.pypy.org (mattip) Date: Sun, 19 Feb 2012 00:36:50 +0100 (CET) Subject: [pypy-commit] pypy numpypy-out: shape_agreement needs a bit of help Message-ID: <20120218233650.61B5F11B2E79@wyvern.cs.uni-duesseldorf.de> Author: mattip Branch: numpypy-out Changeset: r52624:9c8325480862 Date: 2012-02-18 21:50 +0200 http://bitbucket.org/pypy/pypy/changeset/9c8325480862/ Log: shape_agreement needs a bit of help diff --git a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py --- a/pypy/module/micronumpy/interp_ufuncs.py +++ b/pypy/module/micronumpy/interp_ufuncs.py @@ -261,11 +261,12 @@ return space.wrap(out) if out: #Test shape compatability - if not shape_agreement(space, w_obj.shape, out.shape): + broadcast_shape = shape_agreement(space, w_obj.shape, out.shape) + if not broadcast_shape or broadcast_shape != out.shape: raise operationerrfmt(space.w_ValueError, - 'output parameter shape 
mismatch, expecting [%s]' + - ' , got [%s]', - ",".join([str(x) for x in shape]), + 'output parameter shape mismatch, could not broadcast [%s]' + + ' , to [%s]', + ",".join([str(x) for x in w_obj.shape]), ",".join([str(x) for x in out.shape]), ) w_res = Call1(self.func, self.name, out.shape, calc_dtype, diff --git a/pypy/module/micronumpy/test/test_outarg.py b/pypy/module/micronumpy/test/test_outarg.py --- a/pypy/module/micronumpy/test/test_outarg.py +++ b/pypy/module/micronumpy/test/test_outarg.py @@ -54,8 +54,10 @@ assert (c == b).all() #Test shape agreement - a=zeros((3,4)) - b=zeros((3,5)) + a = zeros((3,4)) + b = zeros((3,5)) + raises(ValueError, 'negative(a, out=b)') + b = zeros((1,4)) raises(ValueError, 'negative(a, out=b)') def test_ufunc_cast(self): From noreply at buildbot.pypy.org Sun Feb 19 00:36:51 2012 From: noreply at buildbot.pypy.org (mattip) Date: Sun, 19 Feb 2012 00:36:51 +0100 (CET) Subject: [pypy-commit] pypy numpypy-out: progress: move on to binfunc Message-ID: <20120218233651.8F5E011B2E78@wyvern.cs.uni-duesseldorf.de> Author: mattip Branch: numpypy-out Changeset: r52625:e2f97194b36f Date: 2012-02-18 22:27 +0200 http://bitbucket.org/pypy/pypy/changeset/e2f97194b36f/ Log: progress: move on to binfunc diff --git a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py --- a/pypy/module/micronumpy/interp_ufuncs.py +++ b/pypy/module/micronumpy/interp_ufuncs.py @@ -265,7 +265,7 @@ if not broadcast_shape or broadcast_shape != out.shape: raise operationerrfmt(space.w_ValueError, 'output parameter shape mismatch, could not broadcast [%s]' + - ' , to [%s]', + ' to [%s]', ",".join([str(x) for x in w_obj.shape]), ",".join([str(x) for x in out.shape]), ) @@ -327,11 +327,21 @@ )) new_shape = shape_agreement(space, w_lhs.shape, w_rhs.shape) # Test correctness of out.shape + if out and out.shape != shape_agreement(space, new_shape, out.shape): + raise operationerrfmt(space.w_ValueError, + 'output parameter shape mismatch, could not broadcast [%s]' + + ' to [%s]', + ",".join([str(x) for x in new_shape]), + ",".join([str(x) for x in out.shape]), + ) w_res = Call2(self.func, self.name, new_shape, calc_dtype, res_dtype, w_lhs, w_rhs, out) w_lhs.add_invalidates(w_res) w_rhs.add_invalidates(w_res) + if out: + #out.add_invalidates(w_res) #causes a recursion loop + w_res.get_concrete() return w_res diff --git a/pypy/module/micronumpy/test/test_outarg.py b/pypy/module/micronumpy/test/test_outarg.py --- a/pypy/module/micronumpy/test/test_outarg.py +++ b/pypy/module/micronumpy/test/test_outarg.py @@ -60,6 +60,16 @@ b = zeros((1,4)) raises(ValueError, 'negative(a, out=b)') + def test_binfunc_out(self): + from _numpypy import array, add + a = array([[1, 2], [3, 4]]) + out = array([[1, 2], [3, 4]]) + c = add(a, a, out=out) + assert (c == out).all() + assert c.shape == a.shape + assert c.dtype is a.dtype + raises(ValueError, 'c = add(a, a, out=out[1])') + def test_ufunc_cast(self): from _numpypy import array, negative cast_error = raises(TypeError, negative, array(16,dtype=float), From noreply at buildbot.pypy.org Sun Feb 19 00:36:52 2012 From: noreply at buildbot.pypy.org (mattip) Date: Sun, 19 Feb 2012 00:36:52 +0100 (CET) Subject: [pypy-commit] pypy numpypy-out: add yet another signature, fix ViewIterator.apply_transformations() optimization Message-ID: <20120218233652.C4B6B11B2E78@wyvern.cs.uni-duesseldorf.de> Author: mattip Branch: numpypy-out Changeset: r52626:f6e33237468c Date: 2012-02-19 01:34 +0200 http://bitbucket.org/pypy/pypy/changeset/f6e33237468c/ Log: add 
yet another signature, fix ViewIterator.apply_transformations() optimization diff --git a/pypy/module/micronumpy/interp_iter.py b/pypy/module/micronumpy/interp_iter.py --- a/pypy/module/micronumpy/interp_iter.py +++ b/pypy/module/micronumpy/interp_iter.py @@ -214,7 +214,7 @@ def apply_transformations(self, arr, transformations): v = BaseIterator.apply_transformations(self, arr, transformations) - if len(arr.shape) == 1: + if len(arr.shape) == 1 and len(v.res_shape) == 1: return OneDimIterator(self.offset, self.strides[0], self.res_shape[0]) return v diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -726,6 +726,11 @@ ra = ResultArray(self, self.size, self.shape, self.res_dtype, self.res) loop.compute(ra) + if self.res: + broadcast_dims = len(self.res.shape) - len(self.shape) + chunks = [Chunk(0,0,0,0)] * broadcast_dims + \ + [Chunk(0, i, 1, i) for i in self.shape] + return ra.left.create_slice(chunks) return ra.left def force_if_needed(self): @@ -840,13 +845,19 @@ def __init__(self, child, size, shape, dtype, res=None, order='C'): if res is None: res = W_NDimArray(size, shape, dtype, order) - assert isinstance(res, BaseArray) - concr = res.get_concrete() - Call2.__init__(self, None, 'assign', shape, dtype, dtype, concr, child) + else: + assert isinstance(res, BaseArray) + #Make sure it is not a virtual array i.e. out=a+a + res = res.get_concrete() + Call2.__init__(self, None, 'assign', shape, dtype, dtype, res, child) def create_sig(self): - sig = signature.ResultSignature(self.res_dtype, self.left.create_sig(), - self.right.create_sig()) + if self.left.shape != self.right.shape: + sig = signature.BroadcastResultSignature(self.res_dtype, + self.left.create_sig(), self.right.create_sig()) + else: + sig = signature.ResultSignature(self.res_dtype, + self.left.create_sig(), self.right.create_sig()) return sig def done_if_true(dtype, val): diff --git a/pypy/module/micronumpy/signature.py b/pypy/module/micronumpy/signature.py --- a/pypy/module/micronumpy/signature.py +++ b/pypy/module/micronumpy/signature.py @@ -340,7 +340,17 @@ assert isinstance(arr, ResultArray) offset = frame.get_final_iter().offset - arr.left.setitem(offset, self.right.eval(frame, arr.right)) + val = self.right.eval(frame, arr.right) + arr.left.setitem(offset, val) + +class BroadcastResultSignature(ResultSignature): + def _create_iter(self, iterlist, arraylist, arr, transforms): + from pypy.module.micronumpy.interp_numarray import ResultArray + + assert isinstance(arr, ResultArray) + rtransforms = transforms + [BroadcastTransform(arr.left.shape)] + self.left._create_iter(iterlist, arraylist, arr.left, transforms) + self.right._create_iter(iterlist, arraylist, arr.right, rtransforms) class BroadcastLeft(Call2): def _invent_numbering(self, cache, allnumbers): diff --git a/pypy/module/micronumpy/test/test_outarg.py b/pypy/module/micronumpy/test/test_outarg.py --- a/pypy/module/micronumpy/test/test_outarg.py +++ b/pypy/module/micronumpy/test/test_outarg.py @@ -68,7 +68,16 @@ assert (c == out).all() assert c.shape == a.shape assert c.dtype is a.dtype + c[0,0] = 100 + assert out[0, 0] == 100 raises(ValueError, 'c = add(a, a, out=out[1])') + c = add(a[0], a[1], out=out[1]) + assert (c == out[1]).all() + assert (c == [4, 6]).all() + c = add(a[0], a[1], out=out) + assert (c == out[1]).all() + assert (c == out[0]).all() + def test_ufunc_cast(self): from _numpypy import array, negative From 
noreply at buildbot.pypy.org Sun Feb 19 00:36:53 2012 From: noreply at buildbot.pypy.org (mattip) Date: Sun, 19 Feb 2012 00:36:53 +0100 (CET) Subject: [pypy-commit] pypy numpypy-out: clean up gratuitous 'print' in tests Message-ID: <20120218233653.F3E7511B2E78@wyvern.cs.uni-duesseldorf.de> Author: mattip Branch: numpypy-out Changeset: r52627:7b030dbc76bd Date: 2012-02-19 01:36 +0200 http://bitbucket.org/pypy/pypy/changeset/7b030dbc76bd/ Log: clean up gratuitous 'print' in tests diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -1375,8 +1375,6 @@ a = array([[1, 2], [3, 4], [5, 6], [7, 8], [9, 10], [11, 12], [13, 14]]) b = a[::2] - print a - print b assert (b == [[1, 2], [5, 6], [9, 10], [13, 14]]).all() c = b + b assert c[1][1] == 12 From notifications-noreply at bitbucket.org Sun Feb 19 09:32:18 2012 From: notifications-noreply at bitbucket.org (Bitbucket) Date: Sun, 19 Feb 2012 08:32:18 -0000 Subject: [pypy-commit] Notification: pypy-c4gc Message-ID: <20120219083218.4280.95125@bitbucket02.managed.contegix.com> You have received a notification from GregBowyer. Hi, I forked pypy. My fork is at https://bitbucket.org/GregBowyer/pypy-c4gc. -- Disable notifications at https://bitbucket.org/account/notifications/ From noreply at buildbot.pypy.org Sun Feb 19 10:22:12 2012 From: noreply at buildbot.pypy.org (mattip) Date: Sun, 19 Feb 2012 10:22:12 +0100 (CET) Subject: [pypy-commit] pypy numpypy-out: translation fix, revert incorrect box conversion Message-ID: <20120219092212.9546F820D1@wyvern.cs.uni-duesseldorf.de> Author: mattip Branch: numpypy-out Changeset: r52628:ac37eef6dfb6 Date: 2012-02-19 11:20 +0200 http://bitbucket.org/pypy/pypy/changeset/ac37eef6dfb6/ Log: translation fix, revert incorrect box conversion diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -1107,7 +1107,7 @@ """ def setitem(self, item, value): self.invalidated() - self.dtype.setitem(self.storage, item, value.convert_to(self.dtype)) + self.dtype.setitem(self.storage, item, value) def setshape(self, space, new_shape): self.shape = new_shape diff --git a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py --- a/pypy/module/micronumpy/interp_ufuncs.py +++ b/pypy/module/micronumpy/interp_ufuncs.py @@ -260,7 +260,7 @@ out = arr return space.wrap(out) if out: - #Test shape compatability + assert isinstance(out, BaseArray) # For translation broadcast_shape = shape_agreement(space, w_obj.shape, out.shape) if not broadcast_shape or broadcast_shape != out.shape: raise operationerrfmt(space.w_ValueError, diff --git a/pypy/module/micronumpy/test/test_outarg.py b/pypy/module/micronumpy/test/test_outarg.py --- a/pypy/module/micronumpy/test/test_outarg.py +++ b/pypy/module/micronumpy/test/test_outarg.py @@ -81,8 +81,19 @@ def test_ufunc_cast(self): from _numpypy import array, negative - cast_error = raises(TypeError, negative, array(16,dtype=float), - out=array(0, dtype=int)) - assert str(cast_error.value) == \ + a = array(16, dtype = int) + c = array(0, dtype = float) + b = negative(a, out=c) + assert b == c + try: + from _numpypy import version + v = version.version.split('.') + except: + v = ['1', '6', '0'] # numpypy is api compatable to what version? 
+ if v[0]<'2': + b = negative(c, out=a) + assert b == a + else: + cast_error = raises(TypeError, negative, c, a) + assert str(cast_error.value) == \ "Cannot cast ufunc negative output from dtype('float64') to dtype('int64') with casting rule 'same_kind'" - From noreply at buildbot.pypy.org Sun Feb 19 13:30:58 2012 From: noreply at buildbot.pypy.org (arigo) Date: Sun, 19 Feb 2012 13:30:58 +0100 (CET) Subject: [pypy-commit] pypy stm: A version of the cache that models a hardware L1 cache. Message-ID: <20120219123058.356EB820D1@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm Changeset: r52629:df6f7424b1d3 Date: 2012-02-19 12:35 +0100 http://bitbucket.org/pypy/pypy/changeset/df6f7424b1d3/ Log: A version of the cache that models a hardware L1 cache. (disabled by default because it's buggy in subtle cases) diff --git a/pypy/translator/stm/src_stm/associative.c b/pypy/translator/stm/src_stm/associative.c new file mode 100644 --- /dev/null +++ b/pypy/translator/stm/src_stm/associative.c @@ -0,0 +1,81 @@ + +/* parameters taken from my laptop's Core i5, tweaked according to + * http://en.wikipedia.org/wiki/Haswell_%28microarchitecture%29 + * + * this is not an optimized implementation at all. just for measuring + * the number of collisions that we get on some examples. + */ + + +#define HWEMULATOR_ASSOCIATIVITY_BITS 3 /* = 8 */ +#define HWEMULATOR_CACHE_LINE_BITS 7 /* = 128 */ +#define HWEMULATOR_NUMBER_OF_SETS_BITS 6 /* = 64 */ + +/* this defines a cache size of 64 KB == 1 << (sum of the three numbers) */ + + + +typedef struct { + char data[1 << HWEMULATOR_CACHE_LINE_BITS]; + long tag; + unsigned long creation; +} cacheline_t; + +typedef struct { + cacheline_t choices[1 << HWEMULATOR_ASSOCIATIVITY_BITS]; +} cacheset_t; + +static cacheset_t hwemulator_orecs[1 << HWEMULATOR_NUMBER_OF_SETS_BITS]; + + +static orec_t *get_orec(void* addr) +{ + /* find the set number */ + int setnum = (((long)addr) >> HWEMULATOR_CACHE_LINE_BITS) & + ((1 << HWEMULATOR_NUMBER_OF_SETS_BITS) - 1); + + /* grab it */ + cacheset_t *set = &hwemulator_orecs[setnum]; + + /* round down the addr to get the tag */ + long tag = ((long)addr) >> HWEMULATOR_CACHE_LINE_BITS; + + /* is the cacheline already in the set? */ + int i; + cacheline_t *line; + for (i=0; i < (1 << HWEMULATOR_ASSOCIATIVITY_BITS); i++) { + line = &set->choices[i]; + if (line->tag == tag) + goto found; + } + + /* find the oldest cacheline in the set */ + cacheline_t *oldest_line = 0; + unsigned long oldest = (unsigned long) -1; + unsigned long youngest = 0; + for (i=0; i < (1 << HWEMULATOR_ASSOCIATIVITY_BITS); i++) { + line = &set->choices[i]; + if (line->creation < oldest) { + oldest = line->creation; + oldest_line = line; + } + if (line->creation > youngest) + youngest = line->creation; + } + + /* replace it */ + line = oldest_line; + line->tag = tag; + line->creation = youngest + 1; + /* XXX this is wrong!! it can evict a line containing relevant data, + which can then be recreated at a different index! The issue is + that this operation may forget that we did some changes! 
But + it's probably good enough for now just to measure the effect */ + + found:; + /* return the orec_t that is in line->data at the correct offset */ + assert((((long)addr) & (sizeof(orec_t)-1)) == 0); + char *p = line->data + (((long)addr) & + ((1 << HWEMULATOR_CACHE_LINE_BITS) - 1)); + return (orec_t *)p; +} diff --git a/pypy/translator/stm/src_stm/et.c b/pypy/translator/stm/src_stm/et.c --- a/pypy/translator/stm/src_stm/et.c +++ b/pypy/translator/stm/src_stm/et.c @@ -36,6 +36,10 @@ typedef volatile owner_version_t orec_t; +/*#define USE_ASSOCIATIVE_CACHE*/ + +#ifndef USE_ASSOCIATIVE_CACHE /* simple version */ + /*** Specify the number of orecs in the global array. */ #define NUM_STRIPES 1048576 @@ -53,6 +57,10 @@ return (orec_t *)p; } +#else /* USE_ASSOCIATIVE_CACHE: follows more closely what hardware would do */ +# include "src_stm/associative.c" +#endif + #include "src_stm/lists.c" /************************************************************/ From noreply at buildbot.pypy.org Sun Feb 19 14:10:33 2012 From: noreply at buildbot.pypy.org (arigo) Date: Sun, 19 Feb 2012 14:10:33 +0100 (CET) Subject: [pypy-commit] pypy stm-gc: Fix test_rstm. Message-ID: <20120219131033.2E981820D1@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-gc Changeset: r52630:1e22b3703ea6 Date: 2012-02-19 14:08 +0100 http://bitbucket.org/pypy/pypy/changeset/1e22b3703ea6/ Log: Fix test_rstm. diff --git a/pypy/rlib/rstm.py b/pypy/rlib/rstm.py --- a/pypy/rlib/rstm.py +++ b/pypy/rlib/rstm.py @@ -1,4 +1,4 @@ -import thread +import threading from pypy.rlib.objectmodel import specialize, we_are_translated from pypy.rlib.objectmodel import keepalive_until_here from pypy.rlib.debug import ll_assert @@ -9,7 +9,7 @@ llhelper) from pypy.translator.stm.stmgcintf import StmOperations -_global_lock = thread.allocate_lock() +_global_lock = threading.RLock() @specialize.memo() def _get_stm_callback(func, argcls): @@ -32,8 +32,6 @@ def perform_transaction(func, argcls, arg): ll_assert(arg is None or isinstance(arg, argcls), "perform_transaction: wrong class") - ll_assert(argcls._alloc_nonmovable_, - "perform_transaction: XXX") # XXX kill me if we_are_translated(): llarg = cast_instance_to_base_ptr(arg) llarg = rffi.cast(rffi.VOIDP, llarg) @@ -63,7 +61,10 @@ if not we_are_translated(): _global_lock.release() def _debug_get_state(): - return StmOperations._debug_get_state() + if not we_are_translated(): _global_lock.acquire() + res = StmOperations._debug_get_state() + if not we_are_translated(): _global_lock.release() + return res def thread_id(): return StmOperations.thread_id() diff --git a/pypy/rlib/test/test_rstm.py b/pypy/rlib/test/test_rstm.py --- a/pypy/rlib/test/test_rstm.py +++ b/pypy/rlib/test/test_rstm.py @@ -1,38 +1,37 @@ import os, thread, time -from pypy.rlib.debug import debug_print +from pypy.rlib.debug import debug_print, ll_assert from pypy.rlib import rstm from pypy.translator.stm.test.support import CompiledSTMTests class Arg(object): - _alloc_nonmovable_ = True + pass +arg = Arg() def setx(arg, retry_counter): debug_print(arg.x) - assert rstm.debug_get_state() == 1 + assert rstm._debug_get_state() == 1 if arg.x == 303: # this will trigger stm_become_inevitable() os.write(1, "hello\n") - assert rstm.debug_get_state() == 2 + assert rstm._debug_get_state() == 2 arg.x = 42 - -def test_stm_perform_transaction(initial_x=202): - arg = Arg() +def stm_perform_transaction(initial_x=202): arg.x = initial_x - assert rstm.debug_get_state() == -1 + ll_assert(rstm._debug_get_state() == -2, "bad debug_get_state (1)") 
rstm.descriptor_init() - assert rstm.debug_get_state() == 0 + ll_assert(rstm._debug_get_state() == 0, "bad debug_get_state (2)") rstm.perform_transaction(setx, Arg, arg) - assert rstm.debug_get_state() == 0 + ll_assert(rstm._debug_get_state() == 0, "bad debug_get_state (3)") rstm.descriptor_done() - assert rstm.debug_get_state() == -1 - assert arg.x == 42 + ll_assert(rstm._debug_get_state() == -2, "bad debug_get_state (4)") + ll_assert(arg.x == 42, "bad arg.x") def test_stm_multiple_threads(): ok = [] def f(i): - test_stm_perform_transaction() + stm_perform_transaction() ok.append(i) for i in range(10): thread.start_new_thread(f, (i,)) @@ -58,20 +57,34 @@ assert dataout == '' assert '102' in dataerr.splitlines() - def test_perform_transaction(self): + def build_perform_transaction(self): + from pypy.module.thread import ll_thread + class Done: done = False + done = Done() + def g(): + stm_perform_transaction(done.initial_x) + done.done = True def f(argv): - test_stm_perform_transaction() + done.initial_x = int(argv[1]) + assert rstm._debug_get_state() == -1 # main thread + ll_thread.start_new_thread(g, ()) + for i in range(20): + if done.done: break + time.sleep(0.1) + else: + print "timeout!" + raise Exception return 0 t, cbuilder = self.compile(f) - dataout, dataerr = cbuilder.cmdexec('', err=True) + return cbuilder + + def test_perform_transaction(self): + cbuilder = self.build_perform_transaction() + # + dataout, dataerr = cbuilder.cmdexec('202', err=True) assert dataout == '' assert '202' in dataerr.splitlines() - - def test_perform_transaction_inevitable(self): - def f(argv): - test_stm_perform_transaction(303) - return 0 - t, cbuilder = self.compile(f) - dataout, dataerr = cbuilder.cmdexec('', err=True) + # + dataout, dataerr = cbuilder.cmdexec('303', err=True) assert 'hello' in dataout.splitlines() assert '303' in dataerr.splitlines() From noreply at buildbot.pypy.org Sun Feb 19 14:10:34 2012 From: noreply at buildbot.pypy.org (arigo) Date: Sun, 19 Feb 2012 14:10:34 +0100 (CET) Subject: [pypy-commit] pypy stm-gc: Kill _alloc_nonmovable_ here. Message-ID: <20120219131034.5FC17820D1@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-gc Changeset: r52631:c7d3b90302f2 Date: 2012-02-19 14:10 +0100 http://bitbucket.org/pypy/pypy/changeset/c7d3b90302f2/ Log: Kill _alloc_nonmovable_ here. diff --git a/pypy/module/transaction/interp_transaction.py b/pypy/module/transaction/interp_transaction.py --- a/pypy/module/transaction/interp_transaction.py +++ b/pypy/module/transaction/interp_transaction.py @@ -151,7 +151,6 @@ class AbstractPending(object): - _alloc_nonmovable_ = True def register(self): """Register this AbstractPending instance in the pending list diff --git a/pypy/translator/stm/test/targetdemo.py b/pypy/translator/stm/test/targetdemo.py --- a/pypy/translator/stm/test/targetdemo.py +++ b/pypy/translator/stm/test/targetdemo.py @@ -17,7 +17,7 @@ glob = Global() class Arg: - _alloc_nonmovable_ = True # XXX kill me + pass def add_at_end_of_chained_list(arg, retry_counter): From noreply at buildbot.pypy.org Sun Feb 19 14:46:35 2012 From: noreply at buildbot.pypy.org (arigo) Date: Sun, 19 Feb 2012 14:46:35 +0100 (CET) Subject: [pypy-commit] pypy stm-gc: A hint that crashes if a variable is not proven local. 
Use it to check Message-ID: <20120219134635.C7FCB820D1@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-gc Changeset: r52632:a0bf9b1b1049 Date: 2012-02-19 14:46 +0100 http://bitbucket.org/pypy/pypy/changeset/a0bf9b1b1049/ Log: A hint that crashes if a variable is not proven local. Use it to check that propagation of localness of the PyFrame works (which doesn't seem to be the case right now). diff --git a/pypy/interpreter/pyframe.py b/pypy/interpreter/pyframe.py --- a/pypy/interpreter/pyframe.py +++ b/pypy/interpreter/pyframe.py @@ -197,11 +197,13 @@ # stack manipulation helpers def pushvalue(self, w_object): + hint(self, stm_assert_local=True) depth = self.valuestackdepth self.locals_stack_w[depth] = w_object self.valuestackdepth = depth + 1 def popvalue(self): + hint(self, stm_assert_local=True) depth = self.valuestackdepth - 1 assert depth >= self.pycode.co_nlocals, "pop from empty value stack" w_object = self.locals_stack_w[depth] diff --git a/pypy/translator/stm/localtracker.py b/pypy/translator/stm/localtracker.py --- a/pypy/translator/stm/localtracker.py +++ b/pypy/translator/stm/localtracker.py @@ -27,6 +27,7 @@ # XXX we shouldn't get here, but we do translating the whole # pypy. We should investigate at some point. In the meantime # returning False is always safe. + self.reason = 'variable not in gsrc!' return False for src in srcs: if isinstance(src, SpaceOperation): @@ -34,14 +35,25 @@ continue if src.opname == 'hint' and 'stm_write' in src.args[1].value: continue + self.reason = src return False elif isinstance(src, Constant): if src.value: # a NULL pointer is still valid as local + self.reason = src return False elif src is None: + self.reason = 'found a None' return False elif src == 'instantiate': pass else: raise AssertionError(repr(src)) return True + + def assert_local(self, variable, graph='?'): + if self.is_local(variable): + return # fine + else: + raise AssertionError( + "assert_local() failed (%s, %s):\n%r" % (variable, graph, + self.reason)) diff --git a/pypy/translator/stm/test/test_localtracker.py b/pypy/translator/stm/test/test_localtracker.py --- a/pypy/translator/stm/test/test_localtracker.py +++ b/pypy/translator/stm/test/test_localtracker.py @@ -27,6 +27,7 @@ for name, v in self.translator._seen_locals.items(): if self.localtracker.is_local(v): got_local_names.add(name) + self.localtracker.assert_local(v, 'foo') assert got_local_names == set(expected_names) diff --git a/pypy/translator/stm/test/test_transform.py b/pypy/translator/stm/test/test_transform.py --- a/pypy/translator/stm/test/test_transform.py +++ b/pypy/translator/stm/test/test_transform.py @@ -41,7 +41,7 @@ pre_insert_stm_writebarrier(graph) if option.view: graph.show() - # weak test: check that there are exactly two stm_writebarrier inserted. + # weak test: check that there are exactly 3 stm_writebarrier inserted. # one should be for 'x.n = n', one should cover both field assignments # to the Z instance, and the 3rd one is in the block 'x.n *= 2'. 
sum = summary(graph) diff --git a/pypy/translator/stm/transform.py b/pypy/translator/stm/transform.py --- a/pypy/translator/stm/transform.py +++ b/pypy/translator/stm/transform.py @@ -79,8 +79,10 @@ self.current_block = None def transform_graph(self, graph): + self.graph = graph for block in graph.iterblocks(): self.transform_block(block) + del self.graph # ---------- @@ -170,6 +172,10 @@ op = SpaceOperation('stm_writebarrier', [op.args[0]], op.result) self.stt_stm_writebarrier(newoperations, op) return + if 'stm_assert_local' in op.args[1].value: + self.localtracker.assert_local(op.args[0], + getattr(self, 'graph', None)) + return newoperations.append(op) From noreply at buildbot.pypy.org Sun Feb 19 16:34:27 2012 From: noreply at buildbot.pypy.org (arigo) Date: Sun, 19 Feb 2012 16:34:27 +0100 (CET) Subject: [pypy-commit] pypy stm-gc: Fix. Message-ID: <20120219153427.21248820D1@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-gc Changeset: r52633:e6052f474569 Date: 2012-02-19 16:29 +0100 http://bitbucket.org/pypy/pypy/changeset/e6052f474569/ Log: Fix. diff --git a/pypy/translator/stm/gcsource.py b/pypy/translator/stm/gcsource.py --- a/pypy/translator/stm/gcsource.py +++ b/pypy/translator/stm/gcsource.py @@ -87,6 +87,12 @@ v1 = None resultlist.append((v1, v2)) # + # also add as a callee the graphs that are explicitly callees in the + # callgraph. Useful because some graphs may end up not being called + # any more, if they were inlined. + for _, graph in translator.callgraph.itervalues(): + was_a_callee.add(graph) + # for graph in translator.graphs: if graph not in was_a_callee: for v in graph.getargs(): From noreply at buildbot.pypy.org Sun Feb 19 16:34:28 2012 From: noreply at buildbot.pypy.org (arigo) Date: Sun, 19 Feb 2012 16:34:28 +0100 (CET) Subject: [pypy-commit] pypy default: This test_datetime doesn't test pypy's own implementation, but just Message-ID: <20120219153428.62C5A82AAB@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r52634:18b924f3d1b0 Date: 2012-02-19 16:33 +0100 http://bitbucket.org/pypy/pypy/changeset/18b924f3d1b0/ Log: This test_datetime doesn't test pypy's own implementation, but just CPython's. Fix. 
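The change below makes the test import PyPy's pure-Python datetime from
lib_pypy instead of whatever "import datetime" happens to resolve to.  A
hypothetical extra check (not part of this changeset; the test name is
invented for illustration) could assert that intent directly:

    from lib_pypy import datetime

    def test_uses_pure_python_datetime():
        # the module must come from lib_pypy/, not from the
        # interpreter's built-in implementation
        assert 'lib_pypy' in datetime.__file__.replace('\\', '/')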
diff --git a/pypy/module/test_lib_pypy/test_datetime.py b/pypy/module/test_lib_pypy/test_datetime.py --- a/pypy/module/test_lib_pypy/test_datetime.py +++ b/pypy/module/test_lib_pypy/test_datetime.py @@ -3,7 +3,7 @@ import py import time -import datetime +from lib_pypy import datetime import copy import os @@ -43,4 +43,4 @@ dt = datetime.datetime.utcnow() assert type(dt.microsecond) is int - copy.copy(dt) \ No newline at end of file + copy.copy(dt) From noreply at buildbot.pypy.org Sun Feb 19 16:56:35 2012 From: noreply at buildbot.pypy.org (hager) Date: Sun, 19 Feb 2012 16:56:35 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: do not copy lists Message-ID: <20120219155635.C0CF6820D1@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r52635:9b1bc846ce01 Date: 2012-02-15 08:41 -0800 http://bitbucket.org/pypy/pypy/changeset/9b1bc846ce01/ Log: do not copy lists diff --git a/pypy/jit/backend/ppc/helper/regalloc.py b/pypy/jit/backend/ppc/helper/regalloc.py --- a/pypy/jit/backend/ppc/helper/regalloc.py +++ b/pypy/jit/backend/ppc/helper/regalloc.py @@ -76,7 +76,7 @@ def prepare_binary_int_op(): def f(self, op): - boxes = list(op.getarglist()) + boxes = op.getarglist() b0, b1 = boxes reg1 = self._ensure_value_is_boxed(b0, forbidden_vars=boxes) diff --git a/pypy/jit/backend/ppc/regalloc.py b/pypy/jit/backend/ppc/regalloc.py --- a/pypy/jit/backend/ppc/regalloc.py +++ b/pypy/jit/backend/ppc/regalloc.py @@ -177,7 +177,7 @@ def prepare_loop(self, inputargs, operations): self._prepare(inputargs, operations) self._set_initial_bindings(inputargs) - self.possibly_free_vars(list(inputargs)) + self.possibly_free_vars(inputargs) def prepare_bridge(self, inputargs, arglocs, ops): self._prepare(inputargs, ops) @@ -425,7 +425,7 @@ prepare_guard_not_invalidated = prepare_guard_no_overflow def prepare_guard_exception(self, op): - boxes = list(op.getarglist()) + boxes = op.getarglist() arg0 = ConstInt(rffi.cast(lltype.Signed, op.getarg(0).getint())) loc = self._ensure_value_is_boxed(arg0) loc1 = self.get_scratch_reg(INT, boxes) @@ -447,7 +447,7 @@ return arglocs def prepare_guard_value(self, op): - boxes = list(op.getarglist()) + boxes = op.getarglist() a0, a1 = boxes l0 = self._ensure_value_is_boxed(a0, boxes) l1 = self._ensure_value_is_boxed(a1, boxes) @@ -459,7 +459,7 @@ def prepare_guard_class(self, op): assert isinstance(op.getarg(0), Box) - boxes = list(op.getarglist()) + boxes = op.getarglist() x = self._ensure_value_is_boxed(boxes[0], boxes) y = self.get_scratch_reg(REF, forbidden_vars=boxes) y_val = rffi.cast(lltype.Signed, op.getarg(1).getint()) @@ -559,7 +559,7 @@ return [] def prepare_setfield_gc(self, op): - boxes = list(op.getarglist()) + boxes = op.getarglist() a0, a1 = boxes ofs, size, sign = unpack_fielddescr(op.getdescr()) base_loc = self._ensure_value_is_boxed(a0, boxes) From noreply at buildbot.pypy.org Sun Feb 19 16:56:37 2012 From: noreply at buildbot.pypy.org (hager) Date: Sun, 19 Feb 2012 16:56:37 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: merge Message-ID: <20120219155637.1833D820D1@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r52636:62baa025967b Date: 2012-02-16 06:50 -0800 http://bitbucket.org/pypy/pypy/changeset/62baa025967b/ Log: merge diff --git a/pypy/jit/backend/ppc/opassembler.py b/pypy/jit/backend/ppc/opassembler.py --- a/pypy/jit/backend/ppc/opassembler.py +++ b/pypy/jit/backend/ppc/opassembler.py @@ -882,6 +882,7 @@ pass emit_jit_debug = emit_debug_merge_point + emit_keepalive = 
emit_debug_merge_point def emit_cond_call_gc_wb(self, op, arglocs, regalloc): # Write code equivalent to write_barrier() in the GC: it checks diff --git a/pypy/jit/backend/ppc/regalloc.py b/pypy/jit/backend/ppc/regalloc.py --- a/pypy/jit/backend/ppc/regalloc.py +++ b/pypy/jit/backend/ppc/regalloc.py @@ -811,6 +811,7 @@ prepare_debug_merge_point = void prepare_jit_debug = void + prepare_keepalive = void def prepare_cond_call_gc_wb(self, op): assert op.result is None diff --git a/pypy/jit/backend/test/runner_test.py b/pypy/jit/backend/test/runner_test.py --- a/pypy/jit/backend/test/runner_test.py +++ b/pypy/jit/backend/test/runner_test.py @@ -1677,6 +1677,7 @@ c_box = self.alloc_string("hi there").constbox() c_nest = ConstInt(0) self.execute_operation(rop.DEBUG_MERGE_POINT, [c_box, c_nest], 'void') + self.execute_operation(rop.KEEPALIVE, [c_box], 'void') self.execute_operation(rop.JIT_DEBUG, [c_box, c_nest, c_nest, c_nest, c_nest], 'void') From noreply at buildbot.pypy.org Sun Feb 19 16:56:38 2012 From: noreply at buildbot.pypy.org (hager) Date: Sun, 19 Feb 2012 16:56:38 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: merge Message-ID: <20120219155638.49A9B820D1@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r52637:879919510bc6 Date: 2012-02-19 06:06 -0800 http://bitbucket.org/pypy/pypy/changeset/879919510bc6/ Log: merge diff --git a/pypy/jit/backend/ppc/opassembler.py b/pypy/jit/backend/ppc/opassembler.py --- a/pypy/jit/backend/ppc/opassembler.py +++ b/pypy/jit/backend/ppc/opassembler.py @@ -12,7 +12,7 @@ from pypy.jit.backend.ppc.helper.assembler import (count_reg_args, Saved_Volatiles) from pypy.jit.backend.ppc.jump import remap_frame_layout -from pypy.jit.backend.ppc.codebuilder import OverwritingBuilder +from pypy.jit.backend.ppc.codebuilder import OverwritingBuilder, scratch_reg from pypy.jit.backend.ppc.regalloc import TempPtr, TempInt from pypy.jit.backend.llsupport import symbolic from pypy.rpython.lltypesystem import rstr, rffi, lltype @@ -288,10 +288,9 @@ adr = self.fail_boxes_int.get_addr_for_num(i) else: assert 0 - self.mc.alloc_scratch_reg() - self.mc.load_imm(r.SCRATCH, adr) - self.mc.storex(loc.value, 0, r.SCRATCH.value) - self.mc.free_scratch_reg() + with scratch_reg(self.mc): + self.mc.load_imm(r.SCRATCH, adr) + self.mc.storex(loc.value, 0, r.SCRATCH.value) elif loc.is_vfp_reg(): assert box.type == FLOAT assert 0, "not implemented yet" @@ -305,13 +304,12 @@ adr = self.fail_boxes_int.get_addr_for_num(i) else: assert 0 - self.mc.alloc_scratch_reg() - self.mov_loc_loc(loc, r.SCRATCH) - # store content of r5 temporary in ENCODING AREA - self.mc.store(r.r5.value, r.SPP.value, 0) - self.mc.load_imm(r.r5, adr) - self.mc.store(r.SCRATCH.value, r.r5.value, 0) - self.mc.free_scratch_reg() + with scratch_reg(self.mc): + self.mov_loc_loc(loc, r.SCRATCH) + # store content of r5 temporary in ENCODING AREA + self.mc.store(r.r5.value, r.SPP.value, 0) + self.mc.load_imm(r.r5, adr) + self.mc.store(r.SCRATCH.value, r.r5.value, 0) # restore r5 self.mc.load(r.r5.value, r.SPP.value, 0) else: @@ -1103,10 +1101,9 @@ def emit_guard_call_may_force(self, op, guard_op, arglocs, regalloc): ENCODING_AREA = len(r.MANAGED_REGS) * WORD - self.mc.alloc_scratch_reg() - self.mc.load(r.SCRATCH.value, r.SPP.value, ENCODING_AREA) - self.mc.cmp_op(0, r.SCRATCH.value, 0, imm=True) - self.mc.free_scratch_reg() + with scratch_reg(self.mc): + self.mc.load(r.SCRATCH.value, r.SPP.value, ENCODING_AREA) + self.mc.cmp_op(0, r.SCRATCH.value, 0, imm=True) self._emit_guard(guard_op, 
arglocs, c.LT, save_exc=True) emit_guard_call_release_gil = emit_guard_call_may_force diff --git a/pypy/jit/backend/ppc/ppc_assembler.py b/pypy/jit/backend/ppc/ppc_assembler.py --- a/pypy/jit/backend/ppc/ppc_assembler.py +++ b/pypy/jit/backend/ppc/ppc_assembler.py @@ -3,7 +3,7 @@ from pypy.jit.backend.ppc.ppc_form import PPCForm as Form from pypy.jit.backend.ppc.ppc_field import ppc_fields from pypy.jit.backend.ppc.regalloc import (TempInt, PPCFrameManager, - Regalloc) + Regalloc, PPCRegisterManager) from pypy.jit.backend.ppc.assembler import Assembler from pypy.jit.backend.ppc.opassembler import OpAssembler from pypy.jit.backend.ppc.symbol_lookup import lookup @@ -94,6 +94,7 @@ EMPTY_LOC = '\xFE' END_OF_LOCS = '\xFF' + FORCE_INDEX_AREA = len(r.MANAGED_REGS) * WORD ENCODING_AREA = len(r.MANAGED_REGS) * WORD OFFSET_SPP_TO_GPR_SAVE_AREA = (FORCE_INDEX + FLOAT_INT_CONVERSION + ENCODING_AREA) @@ -297,11 +298,25 @@ def _build_malloc_slowpath(self): mc = PPCBuilder() + if IS_PPC_64: + for _ in range(6): + mc.write32(0) + with Saved_Volatiles(mc): # Values to compute size stored in r3 and r4 mc.subf(r.r3.value, r.r3.value, r.r4.value) addr = self.cpu.gc_ll_descr.get_malloc_slowpath_addr() + for reg, ofs in PPCRegisterManager.REGLOC_TO_COPY_AREA_OFS.items(): + if IS_PPC_32: + mc.stw(reg.value, r.SPP.value, ofs) + else: + mc.std(reg.value, r.SPP.value, ofs) mc.call(addr) + for reg, ofs in PPCRegisterManager.REGLOC_TO_COPY_AREA_OFS.items(): + if IS_PPC_32: + mc.lwz(reg.value, r.SPP.value, ofs) + else: + mc.ld(reg.value, r.SPP.value, ofs) mc.cmp_op(0, r.r3.value, 0, imm=True) jmp_pos = mc.currpos() @@ -315,6 +330,8 @@ pmc.overwrite() mc.b_abs(self.propagate_exception_path) rawstart = mc.materialize(self.cpu.asmmemmgr, []) + if IS_PPC_64: + self.write_64_bit_func_descr(rawstart, rawstart+3*WORD) self.malloc_slowpath = rawstart def _build_propagate_exception_path(self): @@ -781,10 +798,9 @@ memaddr = self.gen_descr_encoding(descr, args, arglocs) # store addr in force index field - self.mc.alloc_scratch_reg() - self.mc.load_imm(r.SCRATCH, memaddr) - self.mc.store(r.SCRATCH.value, r.SPP.value, self.ENCODING_AREA) - self.mc.free_scratch_reg() + with scratch_reg(self.mc): + self.mc.load_imm(r.SCRATCH, memaddr) + self.mc.store(r.SCRATCH.value, r.SPP.value, self.FORCE_INDEX_AREA) if save_exc: path = self._leave_jitted_hook_save_exc @@ -1025,10 +1041,9 @@ return 0 def _write_fail_index(self, fail_index): - self.mc.alloc_scratch_reg() - self.mc.load_imm(r.SCRATCH, fail_index) - self.mc.store(r.SCRATCH.value, r.SPP.value, self.ENCODING_AREA) - self.mc.free_scratch_reg() + with scratch_reg(self.mc): + self.mc.load_imm(r.SCRATCH, fail_index) + self.mc.store(r.SCRATCH.value, r.SPP.value, self.FORCE_INDEX_AREA) def load(self, loc, value): assert loc.is_reg() and value.is_imm() diff --git a/pypy/jit/backend/ppc/regalloc.py b/pypy/jit/backend/ppc/regalloc.py --- a/pypy/jit/backend/ppc/regalloc.py +++ b/pypy/jit/backend/ppc/regalloc.py @@ -50,37 +50,33 @@ save_around_call_regs = r.VOLATILES REGLOC_TO_COPY_AREA_OFS = { - r.r0: MY_COPY_OF_REGS + 0 * WORD, - r.r2: MY_COPY_OF_REGS + 1 * WORD, - r.r3: MY_COPY_OF_REGS + 2 * WORD, - r.r4: MY_COPY_OF_REGS + 3 * WORD, - r.r5: MY_COPY_OF_REGS + 4 * WORD, - r.r6: MY_COPY_OF_REGS + 5 * WORD, - r.r7: MY_COPY_OF_REGS + 6 * WORD, - r.r8: MY_COPY_OF_REGS + 7 * WORD, - r.r9: MY_COPY_OF_REGS + 8 * WORD, - r.r10: MY_COPY_OF_REGS + 9 * WORD, - r.r11: MY_COPY_OF_REGS + 10 * WORD, - r.r12: MY_COPY_OF_REGS + 11 * WORD, - r.r13: MY_COPY_OF_REGS + 12 * WORD, - r.r14: MY_COPY_OF_REGS + 13 * 
WORD, - r.r15: MY_COPY_OF_REGS + 14 * WORD, - r.r16: MY_COPY_OF_REGS + 15 * WORD, - r.r17: MY_COPY_OF_REGS + 16 * WORD, - r.r18: MY_COPY_OF_REGS + 17 * WORD, - r.r19: MY_COPY_OF_REGS + 18 * WORD, - r.r20: MY_COPY_OF_REGS + 19 * WORD, - r.r21: MY_COPY_OF_REGS + 20 * WORD, - r.r22: MY_COPY_OF_REGS + 21 * WORD, - r.r23: MY_COPY_OF_REGS + 22 * WORD, - r.r24: MY_COPY_OF_REGS + 23 * WORD, - r.r25: MY_COPY_OF_REGS + 24 * WORD, - r.r26: MY_COPY_OF_REGS + 25 * WORD, - r.r27: MY_COPY_OF_REGS + 26 * WORD, - r.r28: MY_COPY_OF_REGS + 27 * WORD, - r.r29: MY_COPY_OF_REGS + 28 * WORD, - r.r30: MY_COPY_OF_REGS + 29 * WORD, - r.r31: MY_COPY_OF_REGS + 30 * WORD, + r.r3: MY_COPY_OF_REGS + 0 * WORD, + r.r4: MY_COPY_OF_REGS + 1 * WORD, + r.r5: MY_COPY_OF_REGS + 2 * WORD, + r.r6: MY_COPY_OF_REGS + 3 * WORD, + r.r7: MY_COPY_OF_REGS + 4 * WORD, + r.r8: MY_COPY_OF_REGS + 5 * WORD, + r.r9: MY_COPY_OF_REGS + 6 * WORD, + r.r10: MY_COPY_OF_REGS + 7 * WORD, + r.r11: MY_COPY_OF_REGS + 8 * WORD, + r.r12: MY_COPY_OF_REGS + 9 * WORD, + r.r14: MY_COPY_OF_REGS + 10 * WORD, + r.r15: MY_COPY_OF_REGS + 11 * WORD, + r.r16: MY_COPY_OF_REGS + 12 * WORD, + r.r17: MY_COPY_OF_REGS + 13 * WORD, + r.r18: MY_COPY_OF_REGS + 14 * WORD, + r.r19: MY_COPY_OF_REGS + 15 * WORD, + r.r20: MY_COPY_OF_REGS + 16 * WORD, + r.r21: MY_COPY_OF_REGS + 17 * WORD, + r.r22: MY_COPY_OF_REGS + 18 * WORD, + r.r23: MY_COPY_OF_REGS + 19 * WORD, + r.r24: MY_COPY_OF_REGS + 20 * WORD, + r.r25: MY_COPY_OF_REGS + 21 * WORD, + r.r26: MY_COPY_OF_REGS + 22 * WORD, + r.r27: MY_COPY_OF_REGS + 23 * WORD, + r.r28: MY_COPY_OF_REGS + 24 * WORD, + r.r29: MY_COPY_OF_REGS + 25 * WORD, + r.r30: MY_COPY_OF_REGS + 26 * WORD, } def __init__(self, longevity, frame_manager=None, assembler=None): From noreply at buildbot.pypy.org Sun Feb 19 16:56:40 2012 From: noreply at buildbot.pypy.org (hager) Date: Sun, 19 Feb 2012 16:56:40 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: replace all occurences of alloc_scratch_reg and free_scratch_reg with "with scratch_reg(mc):" Message-ID: <20120219155640.B1379820D1@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r52638:88498311be7e Date: 2012-02-19 07:55 -0800 http://bitbucket.org/pypy/pypy/changeset/88498311be7e/ Log: replace all occurences of alloc_scratch_reg and free_scratch_reg with "with scratch_reg(mc):" diff --git a/pypy/jit/backend/ppc/codebuilder.py b/pypy/jit/backend/ppc/codebuilder.py --- a/pypy/jit/backend/ppc/codebuilder.py +++ b/pypy/jit/backend/ppc/codebuilder.py @@ -999,13 +999,12 @@ self.ldx(rD.value, 0, rD.value) def store_reg(self, source_reg, addr): - self.alloc_scratch_reg() - self.load_imm(r.SCRATCH, addr) - if IS_PPC_32: - self.stwx(source_reg.value, 0, r.SCRATCH.value) - else: - self.stdx(source_reg.value, 0, r.SCRATCH.value) - self.free_scratch_reg() + with scratch_reg(self): + self.load_imm(r.SCRATCH, addr) + if IS_PPC_32: + self.stwx(source_reg.value, 0, r.SCRATCH.value) + else: + self.stdx(source_reg.value, 0, r.SCRATCH.value) def b_offset(self, target): curpos = self.currpos() @@ -1025,17 +1024,15 @@ BI = condition[0] BO = condition[1] - self.alloc_scratch_reg() - self.load_imm(r.SCRATCH, addr) - self.mtctr(r.SCRATCH.value) - self.free_scratch_reg() + with scratch_reg(self): + self.load_imm(r.SCRATCH, addr) + self.mtctr(r.SCRATCH.value) self.bcctr(BO, BI) def b_abs(self, address, trap=False): - self.alloc_scratch_reg() - self.load_imm(r.SCRATCH, address) - self.mtctr(r.SCRATCH.value) - self.free_scratch_reg() + with scratch_reg(self): + self.load_imm(r.SCRATCH, address) + 
self.mtctr(r.SCRATCH.value) if trap: self.trap() self.bctr() @@ -1043,17 +1040,16 @@ def call(self, address): """ do a call to an absolute address """ - self.alloc_scratch_reg() - if IS_PPC_32: - self.load_imm(r.SCRATCH, address) - else: - self.store(r.TOC.value, r.SP.value, 5 * WORD) - self.load_imm(r.r11, address) - self.load(r.SCRATCH.value, r.r11.value, 0) - self.load(r.r2.value, r.r11.value, WORD) - self.load(r.r11.value, r.r11.value, 2 * WORD) - self.mtctr(r.SCRATCH.value) - self.free_scratch_reg() + with scratch_reg(self): + if IS_PPC_32: + self.load_imm(r.SCRATCH, address) + else: + self.store(r.TOC.value, r.SP.value, 5 * WORD) + self.load_imm(r.r11, address) + self.load(r.SCRATCH.value, r.r11.value, 0) + self.load(r.r2.value, r.r11.value, WORD) + self.load(r.r11.value, r.r11.value, 2 * WORD) + self.mtctr(r.SCRATCH.value) self.bctrl() if IS_PPC_64: diff --git a/pypy/jit/backend/ppc/opassembler.py b/pypy/jit/backend/ppc/opassembler.py --- a/pypy/jit/backend/ppc/opassembler.py +++ b/pypy/jit/backend/ppc/opassembler.py @@ -210,12 +210,11 @@ # instead of XER could be more efficient def _emit_ovf_guard(self, op, arglocs, cond): # move content of XER to GPR - self.mc.alloc_scratch_reg() - self.mc.mfspr(r.SCRATCH.value, 1) - # shift and mask to get comparison result - self.mc.rlwinm(r.SCRATCH.value, r.SCRATCH.value, 1, 0, 0) - self.mc.cmp_op(0, r.SCRATCH.value, 0, imm=True) - self.mc.free_scratch_reg() + with scratch_reg(self.mc): + self.mc.mfspr(r.SCRATCH.value, 1) + # shift and mask to get comparison result + self.mc.rlwinm(r.SCRATCH.value, r.SCRATCH.value, 1, 0, 0) + self.mc.cmp_op(0, r.SCRATCH.value, 0, imm=True) self._emit_guard(op, arglocs, cond) def emit_guard_no_overflow(self, op, arglocs, regalloc): @@ -244,14 +243,13 @@ def _cmp_guard_class(self, op, locs, regalloc): offset = locs[2] if offset is not None: - self.mc.alloc_scratch_reg() - if offset.is_imm(): - self.mc.load(r.SCRATCH.value, locs[0].value, offset.value) - else: - assert offset.is_reg() - self.mc.loadx(r.SCRATCH.value, locs[0].value, offset.value) - self.mc.cmp_op(0, r.SCRATCH.value, locs[1].value) - self.mc.free_scratch_reg() + with scratch_reg(self.mc): + if offset.is_imm(): + self.mc.load(r.SCRATCH.value, locs[0].value, offset.value) + else: + assert offset.is_reg() + self.mc.loadx(r.SCRATCH.value, locs[0].value, offset.value) + self.mc.cmp_op(0, r.SCRATCH.value, locs[1].value) else: assert 0, "not implemented yet" self._emit_guard(op, locs[3:], c.NE) @@ -360,10 +358,9 @@ failargs = arglocs[5:] self.mc.load_imm(loc1, pos_exception.value) - self.mc.alloc_scratch_reg() - self.mc.load(r.SCRATCH.value, loc1.value, 0) - self.mc.cmp_op(0, r.SCRATCH.value, loc.value) - self.mc.free_scratch_reg() + with scratch_reg(self.mc): + self.mc.load(r.SCRATCH.value, loc1.value, 0) + self.mc.cmp_op(0, r.SCRATCH.value, loc.value) self._emit_guard(op, failargs, c.NE, save_exc=True) self.mc.load_imm(loc, pos_exc_value.value) @@ -371,11 +368,10 @@ if resloc: self.mc.load(resloc.value, loc.value, 0) - self.mc.alloc_scratch_reg() - self.mc.load_imm(r.SCRATCH, 0) - self.mc.store(r.SCRATCH.value, loc.value, 0) - self.mc.store(r.SCRATCH.value, loc1.value, 0) - self.mc.free_scratch_reg() + with scratch_reg(self.mc): + self.mc.load_imm(r.SCRATCH, 0) + self.mc.store(r.SCRATCH.value, loc.value, 0) + self.mc.store(r.SCRATCH.value, loc1.value, 0) def emit_call(self, op, args, regalloc, force_index=-1): adr = args[0].value @@ -424,13 +420,12 @@ param_offset = ((BACKCHAIN_SIZE + MAX_REG_PARAMS) * WORD) # space for first 8 parameters - 
self.mc.alloc_scratch_reg() - for i, arg in enumerate(stack_args): - offset = param_offset + i * WORD - if arg is not None: - self.regalloc_mov(regalloc.loc(arg), r.SCRATCH) - self.mc.store(r.SCRATCH.value, r.SP.value, offset) - self.mc.free_scratch_reg() + with scratch_reg(self.mc): + for i, arg in enumerate(stack_args): + offset = param_offset + i * WORD + if arg is not None: + self.regalloc_mov(regalloc.loc(arg), r.SCRATCH) + self.mc.store(r.SCRATCH.value, r.SP.value, offset) # collect variables that need to go in registers # and the registers they will be stored in @@ -540,26 +535,25 @@ def emit_getinteriorfield_gc(self, op, arglocs, regalloc): (base_loc, index_loc, res_loc, ofs_loc, ofs, itemsize, fieldsize) = arglocs - self.mc.alloc_scratch_reg() - self.mc.load_imm(r.SCRATCH, itemsize.value) - self.mc.mullw(r.SCRATCH.value, index_loc.value, r.SCRATCH.value) - if ofs.value > 0: - if ofs_loc.is_imm(): - self.mc.addic(r.SCRATCH.value, r.SCRATCH.value, ofs_loc.value) + with scratch_reg(self.mc): + self.mc.load_imm(r.SCRATCH, itemsize.value) + self.mc.mullw(r.SCRATCH.value, index_loc.value, r.SCRATCH.value) + if ofs.value > 0: + if ofs_loc.is_imm(): + self.mc.addic(r.SCRATCH.value, r.SCRATCH.value, ofs_loc.value) + else: + self.mc.add(r.SCRATCH.value, r.SCRATCH.value, ofs_loc.value) + + if fieldsize.value == 8: + self.mc.ldx(res_loc.value, base_loc.value, r.SCRATCH.value) + elif fieldsize.value == 4: + self.mc.lwzx(res_loc.value, base_loc.value, r.SCRATCH.value) + elif fieldsize.value == 2: + self.mc.lhzx(res_loc.value, base_loc.value, r.SCRATCH.value) + elif fieldsize.value == 1: + self.mc.lbzx(res_loc.value, base_loc.value, r.SCRATCH.value) else: - self.mc.add(r.SCRATCH.value, r.SCRATCH.value, ofs_loc.value) - - if fieldsize.value == 8: - self.mc.ldx(res_loc.value, base_loc.value, r.SCRATCH.value) - elif fieldsize.value == 4: - self.mc.lwzx(res_loc.value, base_loc.value, r.SCRATCH.value) - elif fieldsize.value == 2: - self.mc.lhzx(res_loc.value, base_loc.value, r.SCRATCH.value) - elif fieldsize.value == 1: - self.mc.lbzx(res_loc.value, base_loc.value, r.SCRATCH.value) - else: - assert 0 - self.mc.free_scratch_reg() + assert 0 #XXX Hack, Hack, Hack if not we_are_translated(): @@ -750,13 +744,12 @@ bytes_loc = regalloc.force_allocate_reg(bytes_box, forbidden_vars) scale = self._get_unicode_item_scale() assert length_loc.is_reg() - self.mc.alloc_scratch_reg() - self.mc.load_imm(r.SCRATCH, 1 << scale) - if IS_PPC_32: - self.mc.mullw(bytes_loc.value, r.SCRATCH.value, length_loc.value) - else: - self.mc.mulld(bytes_loc.value, r.SCRATCH.value, length_loc.value) - self.mc.free_scratch_reg() + with scratch_reg(self.mc): + self.mc.load_imm(r.SCRATCH, 1 << scale) + if IS_PPC_32: + self.mc.mullw(bytes_loc.value, r.SCRATCH.value, length_loc.value) + else: + self.mc.mulld(bytes_loc.value, r.SCRATCH.value, length_loc.value) length_box = bytes_box length_loc = bytes_loc # call memcpy() @@ -871,10 +864,9 @@ def set_vtable(self, box, vtable): if self.cpu.vtable_offset is not None: adr = rffi.cast(lltype.Signed, vtable) - self.mc.alloc_scratch_reg() - self.mc.load_imm(r.SCRATCH, adr) - self.mc.store(r.SCRATCH.value, r.RES.value, self.cpu.vtable_offset) - self.mc.free_scratch_reg() + with scratch_reg(self.mc): + self.mc.load_imm(r.SCRATCH, adr) + self.mc.store(r.SCRATCH.value, r.RES.value, self.cpu.vtable_offset) def emit_debug_merge_point(self, op, arglocs, regalloc): pass @@ -905,26 +897,25 @@ raise AssertionError(opnum) loc_base = arglocs[0] - self.mc.alloc_scratch_reg() - self.mc.load(r.SCRATCH.value, 
loc_base.value, 0) + with scratch_reg(self.mc): + self.mc.load(r.SCRATCH.value, loc_base.value, 0) - # get the position of the bit we want to test - bitpos = descr.jit_wb_if_flag_bitpos + # get the position of the bit we want to test + bitpos = descr.jit_wb_if_flag_bitpos - if IS_PPC_32: - # put this bit to the rightmost bitposition of r0 - if bitpos > 0: - self.mc.rlwinm(r.SCRATCH.value, r.SCRATCH.value, - 32 - bitpos, 31, 31) - # test whether this bit is set - self.mc.cmpwi(0, r.SCRATCH.value, 1) - else: - if bitpos > 0: - self.mc.rldicl(r.SCRATCH.value, r.SCRATCH.value, - 64 - bitpos, 63) - # test whether this bit is set - self.mc.cmpdi(0, r.SCRATCH.value, 1) - self.mc.free_scratch_reg() + if IS_PPC_32: + # put this bit to the rightmost bitposition of r0 + if bitpos > 0: + self.mc.rlwinm(r.SCRATCH.value, r.SCRATCH.value, + 32 - bitpos, 31, 31) + # test whether this bit is set + self.mc.cmpwi(0, r.SCRATCH.value, 1) + else: + if bitpos > 0: + self.mc.rldicl(r.SCRATCH.value, r.SCRATCH.value, + 64 - bitpos, 63) + # test whether this bit is set + self.mc.cmpdi(0, r.SCRATCH.value, 1) jz_location = self.mc.currpos() self.mc.nop() @@ -988,10 +979,9 @@ # check value resloc = regalloc.try_allocate_reg(resbox) assert resloc is r.RES - self.mc.alloc_scratch_reg() - self.mc.load_imm(r.SCRATCH, value) - self.mc.cmp_op(0, resloc.value, r.SCRATCH.value) - self.mc.free_scratch_reg() + with scratch_reg(self.mc): + self.mc.load_imm(r.SCRATCH, value) + self.mc.cmp_op(0, resloc.value, r.SCRATCH.value) regalloc.possibly_free_var(resbox) fast_jmp_pos = self.mc.currpos() @@ -1034,11 +1024,10 @@ assert isinstance(fielddescr, FieldDescr) ofs = fielddescr.offset resloc = regalloc.force_allocate_reg(resbox) - self.mc.alloc_scratch_reg() - self.mov_loc_loc(arglocs[1], r.SCRATCH) - self.mc.li(resloc.value, 0) - self.mc.storex(resloc.value, 0, r.SCRATCH.value) - self.mc.free_scratch_reg() + with scratch_reg(self.mc): + self.mov_loc_loc(arglocs[1], r.SCRATCH) + self.mc.li(resloc.value, 0) + self.mc.storex(resloc.value, 0, r.SCRATCH.value) regalloc.possibly_free_var(resbox) if op.result is not None: @@ -1054,13 +1043,12 @@ raise AssertionError(kind) resloc = regalloc.force_allocate_reg(op.result) regalloc.possibly_free_var(resbox) - self.mc.alloc_scratch_reg() - self.mc.load_imm(r.SCRATCH, adr) - if op.result.type == FLOAT: - assert 0, "not implemented yet" - else: - self.mc.loadx(resloc.value, 0, r.SCRATCH.value) - self.mc.free_scratch_reg() + with scratch_reg(self.mc): + self.mc.load_imm(r.SCRATCH, adr) + if op.result.type == FLOAT: + assert 0, "not implemented yet" + else: + self.mc.loadx(resloc.value, 0, r.SCRATCH.value) # merge point offset = self.mc.currpos() - jmp_pos @@ -1069,10 +1057,9 @@ pmc.b(offset) pmc.overwrite() - self.mc.alloc_scratch_reg() - self.mc.load(r.SCRATCH.value, r.SPP.value, 0) - self.mc.cmp_op(0, r.SCRATCH.value, 0, imm=True) - self.mc.free_scratch_reg() + with scratch_reg(self.mc): + self.mc.load(r.SCRATCH.value, r.SPP.value, 0) + self.mc.cmp_op(0, r.SCRATCH.value, 0, imm=True) self._emit_guard(guard_op, regalloc._prepare_guard(guard_op), c.LT) diff --git a/pypy/jit/backend/ppc/ppc_assembler.py b/pypy/jit/backend/ppc/ppc_assembler.py --- a/pypy/jit/backend/ppc/ppc_assembler.py +++ b/pypy/jit/backend/ppc/ppc_assembler.py @@ -859,11 +859,10 @@ return # move immediate value to memory elif loc.is_stack(): - self.mc.alloc_scratch_reg() - offset = loc.value - self.mc.load_imm(r.SCRATCH, value) - self.mc.store(r.SCRATCH.value, r.SPP.value, offset) - self.mc.free_scratch_reg() + with 
scratch_reg(self.mc): + offset = loc.value + self.mc.load_imm(r.SCRATCH, value) + self.mc.store(r.SCRATCH.value, r.SPP.value, offset) return assert 0, "not supported location" elif prev_loc.is_stack(): @@ -876,10 +875,9 @@ # move in memory elif loc.is_stack(): target_offset = loc.value - self.mc.alloc_scratch_reg() - self.mc.load(r.SCRATCH.value, r.SPP.value, offset) - self.mc.store(r.SCRATCH.value, r.SPP.value, target_offset) - self.mc.free_scratch_reg() + with scratch_reg(self.mc): + self.mc.load(r.SCRATCH.value, r.SPP.value, offset) + self.mc.store(r.SCRATCH.value, r.SPP.value, target_offset) return assert 0, "not supported location" elif prev_loc.is_reg(): From noreply at buildbot.pypy.org Sun Feb 19 17:25:13 2012 From: noreply at buildbot.pypy.org (hager) Date: Sun, 19 Feb 2012 17:25:13 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: use load instead of lwz/ld and store instead of stw/std Message-ID: <20120219162513.87AC4820D1@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r52639:8c66e6606eb0 Date: 2012-02-19 08:24 -0800 http://bitbucket.org/pypy/pypy/changeset/8c66e6606eb0/ Log: use load instead of lwz/ld and store instead of stw/std diff --git a/pypy/jit/backend/ppc/ppc_assembler.py b/pypy/jit/backend/ppc/ppc_assembler.py --- a/pypy/jit/backend/ppc/ppc_assembler.py +++ b/pypy/jit/backend/ppc/ppc_assembler.py @@ -307,16 +307,10 @@ mc.subf(r.r3.value, r.r3.value, r.r4.value) addr = self.cpu.gc_ll_descr.get_malloc_slowpath_addr() for reg, ofs in PPCRegisterManager.REGLOC_TO_COPY_AREA_OFS.items(): - if IS_PPC_32: - mc.stw(reg.value, r.SPP.value, ofs) - else: - mc.std(reg.value, r.SPP.value, ofs) + mc.store(reg.value, r.SPP.value, ofs) mc.call(addr) for reg, ofs in PPCRegisterManager.REGLOC_TO_COPY_AREA_OFS.items(): - if IS_PPC_32: - mc.lwz(reg.value, r.SPP.value, ofs) - else: - mc.ld(reg.value, r.SPP.value, ofs) + mc.load(reg.value, r.SPP.value, ofs) mc.cmp_op(0, r.r3.value, 0, imm=True) jmp_pos = mc.currpos() @@ -912,10 +906,7 @@ elif loc.is_reg(): self.mc.addi(r.SP.value, r.SP.value, -WORD) # decrease stack pointer # push value - if IS_PPC_32: - self.mc.stw(loc.value, r.SP.value, 0) - else: - self.mc.std(loc.value, r.SP.value, 0) + self.mc.store(loc.value, r.SP.value, 0) elif loc.is_imm(): assert 0, "not implemented yet" elif loc.is_imm_float(): From noreply at buildbot.pypy.org Sun Feb 19 18:53:25 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Sun, 19 Feb 2012 18:53:25 +0100 (CET) Subject: [pypy-commit] pypy sepcomp2: Use modern syntax for specialization Message-ID: <20120219175325.D3DD2820D1@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: sepcomp2 Changeset: r52640:39b6ad3d0001 Date: 2012-02-18 17:16 +0100 http://bitbucket.org/pypy/pypy/changeset/39b6ad3d0001/ Log: Use modern syntax for specialization diff --git a/pypy/rpython/controllerentry.py b/pypy/rpython/controllerentry.py --- a/pypy/rpython/controllerentry.py +++ b/pypy/rpython/controllerentry.py @@ -5,6 +5,7 @@ from pypy.rpython.extregistry import ExtRegistryEntry from pypy.rpython.annlowlevel import cachedtype from pypy.rpython.error import TyperError +from pypy.rlib.objectmodel import specialize class ControllerEntry(ExtRegistryEntry): @@ -54,17 +55,17 @@ def _freeze_(self): return True + @specialize.arg(0) def box(self, obj): return controlled_instance_box(self, obj) - box._annspecialcase_ = 'specialize:arg(0)' + @specialize.arg(0) def unbox(self, obj): return controlled_instance_unbox(self, obj) - unbox._annspecialcase_ = 'specialize:arg(0)' + 
@specialize.arg(0) def is_box(self, obj): return controlled_instance_is_box(self, obj) - is_box._annspecialcase_ = 'specialize:arg(0)' def ctrl_new(self, *args_s, **kwds_s): if kwds_s: @@ -83,6 +84,7 @@ from pypy.rpython.rcontrollerentry import rtypedelegate return rtypedelegate(self.new, hop, revealargs=[], revealresult=True) + @specialize.memo() def bound_method_controller(self, attr): class BoundMethod(object): pass class BoundMethodController(Controller): @@ -90,13 +92,12 @@ def call(_self, obj, *args): return getattr(self, 'method_' + attr)(obj, *args) return BoundMethodController() - bound_method_controller._annspecialcase_ = 'specialize:memo' + @specialize.arg(0, 2) def getattr(self, obj, attr): if hasattr(self, 'method_' + attr): return self.bound_method_controller(attr).box(obj) return getattr(self, 'get_' + attr)(obj) - getattr._annspecialcase_ = 'specialize:arg(0, 2)' def ctrl_getattr(self, s_obj, s_attr): return delegate(self.getattr, s_obj, s_attr) @@ -105,9 +106,9 @@ from pypy.rpython.rcontrollerentry import rtypedelegate return rtypedelegate(self.getattr, hop) + @specialize.arg(0, 2) def setattr(self, obj, attr, value): return getattr(self, 'set_' + attr)(obj, value) - setattr._annspecialcase_ = 'specialize:arg(0, 2)' def ctrl_setattr(self, s_obj, s_attr, s_value): return delegate(self.setattr, s_obj, s_attr, s_value) From noreply at buildbot.pypy.org Sun Feb 19 18:53:27 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Sun, 19 Feb 2012 18:53:27 +0100 (CET) Subject: [pypy-commit] pypy default: cpyext: add PyUnicode_Replace Message-ID: <20120219175327.1542B82AAB@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: Changeset: r52641:0f8cad650dd3 Date: 2012-02-19 18:24 +0100 http://bitbucket.org/pypy/pypy/changeset/0f8cad650dd3/ Log: cpyext: add PyUnicode_Replace diff --git a/pypy/module/cpyext/stubs.py b/pypy/module/cpyext/stubs.py --- a/pypy/module/cpyext/stubs.py +++ b/pypy/module/cpyext/stubs.py @@ -2373,16 +2373,6 @@ properly supporting 64-bit systems.""" raise NotImplementedError - at cpython_api([PyObject, PyObject, PyObject, Py_ssize_t], PyObject) -def PyUnicode_Replace(space, str, substr, replstr, maxcount): - """Replace at most maxcount occurrences of substr in str with replstr and - return the resulting Unicode object. maxcount == -1 means replace all - occurrences. - - This function used an int type for maxcount. 
This might - require changes in your code for properly supporting 64-bit systems.""" - raise NotImplementedError - @cpython_api([PyObject, PyObject, rffi.INT_real], PyObject) def PyUnicode_RichCompare(space, left, right, op): """Rich compare two unicode strings and return one of the following: diff --git a/pypy/module/cpyext/test/test_unicodeobject.py b/pypy/module/cpyext/test/test_unicodeobject.py --- a/pypy/module/cpyext/test/test_unicodeobject.py +++ b/pypy/module/cpyext/test/test_unicodeobject.py @@ -429,3 +429,11 @@ w_char = api.PyUnicode_FromOrdinal(0xFFFF) assert space.unwrap(w_char) == u'\uFFFF' + def test_replace(self, space, api): + w_str = space.wrap(u"abababab") + w_substr = space.wrap(u"a") + w_replstr = space.wrap(u"z") + assert u"zbzbabab" == space.unwrap( + api.PyUnicode_Replace(w_str, w_substr, w_replstr, 2)) + assert u"zbzbzbzb" == space.unwrap( + api.PyUnicode_Replace(w_str, w_substr, w_replstr, -1)) diff --git a/pypy/module/cpyext/unicodeobject.py b/pypy/module/cpyext/unicodeobject.py --- a/pypy/module/cpyext/unicodeobject.py +++ b/pypy/module/cpyext/unicodeobject.py @@ -548,6 +548,15 @@ @cpython_api([PyObject, PyObject], PyObject) def PyUnicode_Join(space, w_sep, w_seq): - """Join a sequence of strings using the given separator and return the resulting - Unicode string.""" + """Join a sequence of strings using the given separator and return + the resulting Unicode string.""" return space.call_method(w_sep, 'join', w_seq) + + at cpython_api([PyObject, PyObject, PyObject, Py_ssize_t], PyObject) +def PyUnicode_Replace(space, w_str, w_substr, w_replstr, maxcount): + """Replace at most maxcount occurrences of substr in str with replstr and + return the resulting Unicode object. maxcount == -1 means replace all + occurrences.""" + return space.call_method(w_str, "replace", w_substr, w_replstr, + space.wrap(maxcount)) + From noreply at buildbot.pypy.org Sun Feb 19 18:53:28 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Sun, 19 Feb 2012 18:53:28 +0100 (CET) Subject: [pypy-commit] pypy default: merge win32-cleanup_2: copy and zip python27.lib, Message-ID: <20120219175328.55A1D820D1@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: Changeset: r52642:ff2a7382fc20 Date: 2012-02-19 18:52 +0100 http://bitbucket.org/pypy/pypy/changeset/ff2a7382fc20/ Log: merge win32-cleanup_2: copy and zip python27.lib, the "import library" needed to build extensions on windows. 
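A note on the package.py hunk below: as it appears in this archive, the shutil.copyfile() call carries an extra closing parenthesis after its first argument. The intended logic is presumably the following sketch; the helper name copy_import_library is invented here, and pypy_c and pypydir stand for the py.path.local objects already used in package.py.

    import sys
    import shutil

    def copy_import_library(pypy_c, pypydir):
        # On Windows, ship the DLL's import library next to the headers so
        # that distutils can link C extension modules against the pypy DLL.
        # 'pypy_c' is the path of the built executable and 'pypydir' is the
        # root of the package being assembled (both py.path.local objects).
        if sys.platform == 'win32':
            shutil.copyfile(str(pypy_c.dirpath().join("libpypy-c.lib")),
                            str(pypydir.join("include/python27.lib")))
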
diff --git a/pypy/tool/release/package.py b/pypy/tool/release/package.py --- a/pypy/tool/release/package.py +++ b/pypy/tool/release/package.py @@ -82,6 +82,9 @@ for file in ['LICENSE', 'README']: shutil.copy(str(basedir.join(file)), str(pypydir)) pypydir.ensure('include', dir=True) + if sys.platform == 'win32': + shutil.copyfile(str(pypy_c.dirpath().join("libpypy-c.lib"))), + str(pypydir.join('include/python27.lib'))) # we want to put there all *.h and *.inl from trunk/include # and from pypy/_interfaces includedir = basedir.join('include') diff --git a/pypy/translator/driver.py b/pypy/translator/driver.py --- a/pypy/translator/driver.py +++ b/pypy/translator/driver.py @@ -559,6 +559,9 @@ newsoname = newexename.new(basename=soname.basename) shutil.copy(str(soname), str(newsoname)) self.log.info("copied: %s" % (newsoname,)) + if sys.platform == 'win32': + shutil.copyfile(str(soname.new(ext='lib')), + str(newsoname.new(ext='lib'))) self.c_entryp = newexename self.log.info('usession directory: %s' % (udir,)) self.log.info("created: %s" % (self.c_entryp,)) From noreply at buildbot.pypy.org Sun Feb 19 18:53:29 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Sun, 19 Feb 2012 18:53:29 +0100 (CET) Subject: [pypy-commit] pypy default: Merge heads Message-ID: <20120219175329.82ADF820D1@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: Changeset: r52643:950eab16acf3 Date: 2012-02-19 18:52 +0100 http://bitbucket.org/pypy/pypy/changeset/950eab16acf3/ Log: Merge heads diff --git a/pypy/module/test_lib_pypy/test_datetime.py b/pypy/module/test_lib_pypy/test_datetime.py --- a/pypy/module/test_lib_pypy/test_datetime.py +++ b/pypy/module/test_lib_pypy/test_datetime.py @@ -3,7 +3,7 @@ import py import time -import datetime +from lib_pypy import datetime import copy import os @@ -43,4 +43,4 @@ dt = datetime.datetime.utcnow() assert type(dt.microsecond) is int - copy.copy(dt) \ No newline at end of file + copy.copy(dt) From noreply at buildbot.pypy.org Sun Feb 19 20:11:01 2012 From: noreply at buildbot.pypy.org (arigo) Date: Sun, 19 Feb 2012 20:11:01 +0100 (CET) Subject: [pypy-commit] pypy stm-gc: Fixes, maybe. Message-ID: <20120219191101.444C4820D1@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-gc Changeset: r52644:f37b48ea6529 Date: 2012-02-19 17:00 +0100 http://bitbucket.org/pypy/pypy/changeset/f37b48ea6529/ Log: Fixes, maybe. diff --git a/pypy/translator/stm/gcsource.py b/pypy/translator/stm/gcsource.py --- a/pypy/translator/stm/gcsource.py +++ b/pypy/translator/stm/gcsource.py @@ -16,8 +16,8 @@ """Enumerate pairs (var-or-const-or-op, var) that together describe the whole control flow of GC pointers in the program. If the source is a SpaceOperation, it means 'produced by this operation but we can't - follow what this operation does'. If the source is None, it means - 'coming from somewhere, unsure where'. + follow what this operation does'. The source is a string to describe + special cases. """ # Tracking dependencies of only GC pointers simplifies the logic here. # We don't have to worry about external calls and callbacks. @@ -84,27 +84,32 @@ if _is_gc(v2): assert _is_gc(v1) if v1 is link.last_exc_value: - v1 = None + v1 = 'last_exc_value' resultlist.append((v1, v2)) # # also add as a callee the graphs that are explicitly callees in the # callgraph. Useful because some graphs may end up not being called # any more, if they were inlined. 
+ was_originally_a_callee = set() for _, graph in translator.callgraph.itervalues(): - was_a_callee.add(graph) + was_originally_a_callee.add(graph) # for graph in translator.graphs: if graph not in was_a_callee: + if graph in was_originally_a_callee: + src = 'originally_a_callee' + else: + src = 'unknown' for v in graph.getargs(): if _is_gc(v): - resultlist.append((None, v)) + resultlist.append((src, v)) return resultlist class GcSource(object): """Works like a dict {gcptr-var: set-of-sources}. A source is a - Constant, or a SpaceOperation that creates the value, or None which - means 'no clue'.""" + Constant, or a SpaceOperation that creates the value, or a string + which describes a special case.""" def __init__(self, translator): self.translator = translator diff --git a/pypy/translator/stm/localtracker.py b/pypy/translator/stm/localtracker.py --- a/pypy/translator/stm/localtracker.py +++ b/pypy/translator/stm/localtracker.py @@ -41,11 +41,13 @@ if src.value: # a NULL pointer is still valid as local self.reason = src return False - elif src is None: - self.reason = 'found a None' - return False elif src == 'instantiate': pass + elif src == 'originally_a_callee': + pass + elif isinstance(src, str): + self.reason = src + return False else: raise AssertionError(repr(src)) return True diff --git a/pypy/translator/stm/test/test_gcsource.py b/pypy/translator/stm/test/test_gcsource.py --- a/pypy/translator/stm/test/test_gcsource.py +++ b/pypy/translator/stm/test/test_gcsource.py @@ -114,7 +114,7 @@ gsrc = gcsource(main, [lltype.Ptr(lltype.GcStruct('S'))]) v_result = gsrc.translator.graphs[0].getreturnvar() s = gsrc[v_result] - assert list(s) == [None] + assert list(s) == ['unknown'] def test_exception(): class FooError(Exception): @@ -129,4 +129,4 @@ gsrc = gcsource(main, [int]) v_result = gsrc.translator.graphs[0].getreturnvar() s = gsrc[v_result] - assert list(s) == [None] + assert list(s) == ['last_exc_value'] From noreply at buildbot.pypy.org Sun Feb 19 20:11:02 2012 From: noreply at buildbot.pypy.org (arigo) Date: Sun, 19 Feb 2012 20:11:02 +0100 (CET) Subject: [pypy-commit] pypy stm-gc: Add missing declarations. Message-ID: <20120219191102.7842D82AAB@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-gc Changeset: r52645:77069d76a77c Date: 2012-02-19 18:26 +0100 http://bitbucket.org/pypy/pypy/changeset/77069d76a77c/ Log: Add missing declarations. diff --git a/pypy/translator/stm/src_stm/et.h b/pypy/translator/stm/src_stm/et.h --- a/pypy/translator/stm/src_stm/et.h +++ b/pypy/translator/stm/src_stm/et.h @@ -26,6 +26,8 @@ short stm_read_int2(void *, long); int stm_read_int4(void *, long); long long stm_read_int8(void *, long); +double stm_read_int8f(void *, long); +float stm_read_int4f(void *, long); #ifdef RPY_STM_ASSERT From noreply at buildbot.pypy.org Sun Feb 19 20:11:03 2012 From: noreply at buildbot.pypy.org (arigo) Date: Sun, 19 Feb 2012 20:11:03 +0100 (CET) Subject: [pypy-commit] pypy stm-gc: Kill an outdated test. Message-ID: <20120219191103.A6999820D1@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-gc Changeset: r52646:15dfea589fd3 Date: 2012-02-19 18:50 +0100 http://bitbucket.org/pypy/pypy/changeset/15dfea589fd3/ Log: Kill an outdated test. diff --git a/pypy/translator/stm/test/test_ztranslated.py b/pypy/translator/stm/test/test_ztranslated.py --- a/pypy/translator/stm/test/test_ztranslated.py +++ b/pypy/translator/stm/test/test_ztranslated.py @@ -10,15 +10,3 @@ data = cbuilder.cmdexec('4 5000') assert 'done sleeping.' in data assert 'check ok!' 
in data - - -class TestSTMFramework(CompiledSTMTests): - gc = "minimark" - - def test_hello_world(self): - py.test.skip("in-progress") - t, cbuilder = self.compile(targetdemo.entry_point) - data = cbuilder.cmdexec('4 5000 1') - # ^^^ should check that it doesn't take 1G of RAM - assert 'done sleeping.' in data - assert 'check ok!' in data From noreply at buildbot.pypy.org Sun Feb 19 20:11:04 2012 From: noreply at buildbot.pypy.org (arigo) Date: Sun, 19 Feb 2012 20:11:04 +0100 (CET) Subject: [pypy-commit] pypy stm-gc: Improve the error message by displaying the "traceback" Message-ID: <20120219191104.D676A820D1@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-gc Changeset: r52647:561cb54a9769 Date: 2012-02-19 18:53 +0100 http://bitbucket.org/pypy/pypy/changeset/561cb54a9769/ Log: Improve the error message by displaying the "traceback" that leads to the malloc(gc) call. diff --git a/pypy/rpython/memory/gctransform/framework.py b/pypy/rpython/memory/gctransform/framework.py --- a/pypy/rpython/memory/gctransform/framework.py +++ b/pypy/rpython/memory/gctransform/framework.py @@ -643,8 +643,20 @@ func = getattr(graph, 'func', None) if func and getattr(func, '_gc_no_collect_', False): if self.collect_analyzer.analyze_direct_call(graph): + # 'no_collect' function can trigger collection + import cStringIO + err = cStringIO.StringIO() + prev = sys.stdout + try: + sys.stdout = err + ca = CollectAnalyzer(self.translator) + ca.verbose = True + ca.analyze_direct_call(graph) # print the "traceback" here + sys.stdout = prev + except: + sys.stdout = prev raise Exception("'no_collect' function can trigger collection:" - " %s" % func) + " %s\n%s" % (func, err.getvalue())) if self.write_barrier_ptr: self.clean_sets = ( From noreply at buildbot.pypy.org Sun Feb 19 20:11:06 2012 From: noreply at buildbot.pypy.org (arigo) Date: Sun, 19 Feb 2012 20:11:06 +0100 (CET) Subject: [pypy-commit] pypy stm-gc: Ensure that no collect at all occur in the main thread when Message-ID: <20120219191106.11061820D1@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-gc Changeset: r52648:d13a7a65bcc8 Date: 2012-02-19 18:55 +0100 http://bitbucket.org/pypy/pypy/changeset/d13a7a65bcc8/ Log: Ensure that no collect at all occur in the main thread when the other threads may be running. diff --git a/pypy/module/transaction/interp_transaction.py b/pypy/module/transaction/interp_transaction.py --- a/pypy/module/transaction/interp_transaction.py +++ b/pypy/module/transaction/interp_transaction.py @@ -2,7 +2,7 @@ from pypy.interpreter.gateway import unwrap_spec from pypy.module.transaction import threadintf from pypy.module.transaction.fifo import Fifo -from pypy.rlib import rstm +from pypy.rlib import rstm, rgc from pypy.rlib.debug import ll_assert @@ -257,6 +257,16 @@ state.unlock() + at rgc.no_collect +def _run(): + # --- start the threads --- don't use the GC here any more! --- + for i in range(state.num_threads): + threadintf.start_new_thread(_run_thread, ()) + # + state.lock_unfinished() # wait for all threads to finish + # --- done, we can use the GC again --- + + def run(space): if state.running: raise OperationError( @@ -279,12 +289,8 @@ state.running = True state.init_exceptions() # - # --- start the threads --- don't use the GC here any more! 
--- - for i in range(state.num_threads): - threadintf.start_new_thread(_run_thread, ()) - # - state.lock_unfinished() # wait for all threads to finish - # --- done, we can use the GC again --- + # start the threads and wait for all of them to finish + _run() # assert state.num_waiting_threads == 0 assert state.pending.is_empty() From noreply at buildbot.pypy.org Sun Feb 19 20:11:07 2012 From: noreply at buildbot.pypy.org (arigo) Date: Sun, 19 Feb 2012 20:11:07 +0100 (CET) Subject: [pypy-commit] pypy stm-gc: The part of the code in the main thread that starts new threads Message-ID: <20120219191107.43806820D1@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-gc Changeset: r52649:cb64c1fa7f40 Date: 2012-02-19 19:12 +0100 http://bitbucket.org/pypy/pypy/changeset/cb64c1fa7f40/ Log: The part of the code in the main thread that starts new threads must not use the GC itself. diff --git a/pypy/module/thread/ll_thread.py b/pypy/module/thread/ll_thread.py --- a/pypy/module/thread/ll_thread.py +++ b/pypy/module/thread/ll_thread.py @@ -45,6 +45,10 @@ threadsafe=True) # release the GIL, but most # importantly, reacquire it # around the callback +c_thread_start_NOGIL = llexternal('RPyThreadStart', [CALLBACK], rffi.LONG, + _callable=_emulated_start_new_thread, + _nowrapper=True, # just call directly + random_effects_on_gcobjs=False) c_thread_get_ident = llexternal('RPyThreadGetIdent', [], rffi.LONG, _nowrapper=True) # always call directly diff --git a/pypy/translator/stm/test/targetdemo.py b/pypy/translator/stm/test/targetdemo.py --- a/pypy/translator/stm/test/targetdemo.py +++ b/pypy/translator/stm/test/targetdemo.py @@ -1,7 +1,8 @@ -import time +from pypy.rpython.lltypesystem import lltype, rffi from pypy.module.thread import ll_thread -from pypy.rlib import rstm +from pypy.rlib import rstm, rgc from pypy.rlib.debug import debug_print +from pypy.rpython.annlowlevel import llhelper class Node: @@ -81,6 +82,27 @@ rstm.descriptor_done() + at rgc.no_collect # don't use the gc as long as other threads are running +def _run(): + i = 0 + while i < glob.NUM_THREADS: + glob._arg = glob._arglist[i] + ll_run_me = llhelper(ll_thread.CALLBACK, run_me) + ll_thread.c_thread_start_NOGIL(ll_run_me) + ll_thread.acquire_NOAUTO(glob.lock, True) + i += 1 + debug_print("sleeping...") + while glob.done < glob.NUM_THREADS: # poor man's lock + _sleep(rffi.cast(rffi.ULONG, 1)) + debug_print("done sleeping.") + + +# Posix only +_sleep = rffi.llexternal('sleep', [rffi.ULONG], rffi.ULONG, + _nowrapper=True, + random_effects_on_gcobjs=False) + + # __________ Entry point __________ def entry_point(argv): @@ -94,14 +116,8 @@ glob.done = 0 glob.lock = ll_thread.allocate_ll_lock() ll_thread.acquire_NOAUTO(glob.lock, True) - for i in range(glob.NUM_THREADS): - glob._arg = Arg() - ll_thread.start_new_thread(run_me, ()) - ll_thread.acquire_NOAUTO(glob.lock, True) - print "sleeping..." - while glob.done < glob.NUM_THREADS: # poor man's lock - time.sleep(1) - print "done sleeping." + glob._arglist = [Arg() for i in range(glob.NUM_THREADS)] + _run() check_chained_list(glob.anchor.next) return 0 diff --git a/pypy/translator/stm/test/test_ztranslated.py b/pypy/translator/stm/test/test_ztranslated.py --- a/pypy/translator/stm/test/test_ztranslated.py +++ b/pypy/translator/stm/test/test_ztranslated.py @@ -7,6 +7,6 @@ def test_hello_world(self): t, cbuilder = self.compile(targetdemo.entry_point) - data = cbuilder.cmdexec('4 5000') - assert 'done sleeping.' 
in data + data, dataerr = cbuilder.cmdexec('4 5000', err=True) + assert 'done sleeping.' in dataerr assert 'check ok!' in data From noreply at buildbot.pypy.org Sun Feb 19 20:11:08 2012 From: noreply at buildbot.pypy.org (arigo) Date: Sun, 19 Feb 2012 20:11:08 +0100 (CET) Subject: [pypy-commit] pypy stm-gc: Fix the 'transaction' module too. Message-ID: <20120219191108.74C70820D1@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-gc Changeset: r52650:18482c0cbdd6 Date: 2012-02-19 19:12 +0100 http://bitbucket.org/pypy/pypy/changeset/18482c0cbdd6/ Log: Fix the 'transaction' module too. diff --git a/pypy/module/transaction/interp_transaction.py b/pypy/module/transaction/interp_transaction.py --- a/pypy/module/transaction/interp_transaction.py +++ b/pypy/module/transaction/interp_transaction.py @@ -261,7 +261,7 @@ def _run(): # --- start the threads --- don't use the GC here any more! --- for i in range(state.num_threads): - threadintf.start_new_thread(_run_thread, ()) + threadintf.start_new_thread(_run_thread) # state.lock_unfinished() # wait for all threads to finish # --- done, we can use the GC again --- diff --git a/pypy/module/transaction/threadintf.py b/pypy/module/transaction/threadintf.py --- a/pypy/module/transaction/threadintf.py +++ b/pypy/module/transaction/threadintf.py @@ -1,6 +1,8 @@ import thread from pypy.module.thread import ll_thread from pypy.rlib.objectmodel import we_are_translated +from pypy.rpython.annlowlevel import llhelper +from pypy.rlib.debug import fatalerror null_ll_lock = ll_thread.null_ll_lock @@ -23,9 +25,11 @@ else: lock.release() -def start_new_thread(callback, args): - assert args == () +def start_new_thread(callback): if we_are_translated(): - ll_thread.start_new_thread(callback, args) + llcallback = llhelper(ll_thread.CALLBACK, callback) + ident = ll_thread.c_thread_start_NOGIL(llcallback) + if ident == -1: + fatalerror("cannot start thread") else: - thread.start_new_thread(callback, args) + thread.start_new_thread(callback, ()) From noreply at buildbot.pypy.org Sun Feb 19 20:11:09 2012 From: noreply at buildbot.pypy.org (arigo) Date: Sun, 19 Feb 2012 20:11:09 +0100 (CET) Subject: [pypy-commit] pypy stm-gc: Fix Message-ID: <20120219191109.A4862820D1@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-gc Changeset: r52651:f20611c076c0 Date: 2012-02-19 20:08 +0100 http://bitbucket.org/pypy/pypy/changeset/f20611c076c0/ Log: Fix diff --git a/pypy/translator/stm/src_stm/atomic_ops.h b/pypy/translator/stm/src_stm/atomic_ops.h --- a/pypy/translator/stm/src_stm/atomic_ops.h +++ b/pypy/translator/stm/src_stm/atomic_ops.h @@ -41,5 +41,7 @@ static inline void spinloop(void) { - asm volatile ("pause"); + /* use "memory" here to make sure that gcc will reload the + relevant data from memory after the spinloop */ + asm volatile ("pause":::"memory"); } From noreply at buildbot.pypy.org Sun Feb 19 20:23:19 2012 From: noreply at buildbot.pypy.org (arigo) Date: Sun, 19 Feb 2012 20:23:19 +0100 (CET) Subject: [pypy-commit] pypy stm-gc: Go through "hint" operations that are not related to stm. Message-ID: <20120219192319.95B48820D1@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-gc Changeset: r52652:a5a9bfcc9aeb Date: 2012-02-19 20:22 +0100 http://bitbucket.org/pypy/pypy/changeset/a5a9bfcc9aeb/ Log: Go through "hint" operations that are not related to stm. 
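For context, a hedged sketch of what this means at the RPython level, mirroring the two tests added further down in this changeset (the function names below are only for illustration): a hint() that carries unrelated keywords is looked through by the GC-source tracker, while a hint() carrying stm_write is still treated as itself producing the pointer.

    from pypy.rlib.jit import hint

    class X:
        def __init__(self, n):
            self.n = n

    def looked_through(n):
        # unrelated hint keywords: the tracker follows the value through,
        # so the source of the returned pointer is still the malloc of X(n)
        return hint(X(n), xyz=True)

    def kept_as_source(n):
        # an 'stm_write' hint is not followed: the hint operation itself
        # is recorded as the source of the returned pointer
        return hint(X(n), stm_write=True)
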
diff --git a/pypy/translator/stm/gcsource.py b/pypy/translator/stm/gcsource.py --- a/pypy/translator/stm/gcsource.py +++ b/pypy/translator/stm/gcsource.py @@ -45,7 +45,9 @@ for block in graph.iterblocks(): for op in block.operations: # - if op.opname in COPIES_POINTER: + if (op.opname in COPIES_POINTER or + (op.opname == 'hint' and + 'stm_write' not in op.args[1].value)): if _is_gc(op.result) and _is_gc(op.args[0]): resultlist.append((op.args[0], op.result)) continue diff --git a/pypy/translator/stm/test/test_gcsource.py b/pypy/translator/stm/test/test_gcsource.py --- a/pypy/translator/stm/test/test_gcsource.py +++ b/pypy/translator/stm/test/test_gcsource.py @@ -2,6 +2,7 @@ from pypy.translator.stm.gcsource import GcSource from pypy.objspace.flow.model import SpaceOperation, Constant from pypy.rpython.lltypesystem import lltype +from pypy.rlib.jit import hint class X: @@ -130,3 +131,21 @@ v_result = gsrc.translator.graphs[0].getreturnvar() s = gsrc[v_result] assert list(s) == ['last_exc_value'] + +def test_hint_xyz(): + def main(n): + return hint(X(n), xyz=True) + gsrc = gcsource(main, [int]) + v_result = gsrc.translator.graphs[0].getreturnvar() + s = gsrc[v_result] + assert len(s) == 1 + assert list(s)[0].opname == 'malloc' + +def test_hint_stm_write(): + def main(n): + return hint(X(n), stm_write=True) + gsrc = gcsource(main, [int]) + v_result = gsrc.translator.graphs[0].getreturnvar() + s = gsrc[v_result] + assert len(s) == 1 + assert list(s)[0].opname == 'hint' From noreply at buildbot.pypy.org Sun Feb 19 20:48:19 2012 From: noreply at buildbot.pypy.org (arigo) Date: Sun, 19 Feb 2012 20:48:19 +0100 (CET) Subject: [pypy-commit] pypy stm-gc: Kill this specialization. It's mostly pointless and it gives Message-ID: <20120219194819.BD7AF820D1@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-gc Changeset: r52653:c1db98c91413 Date: 2012-02-19 20:39 +0100 http://bitbucket.org/pypy/pypy/changeset/c1db98c91413/ Log: Kill this specialization. It's mostly pointless and it gives occasionally headaches because fatalerror() is called from several levels. diff --git a/pypy/jit/backend/llsupport/gc.py b/pypy/jit/backend/llsupport/gc.py --- a/pypy/jit/backend/llsupport/gc.py +++ b/pypy/jit/backend/llsupport/gc.py @@ -1,7 +1,6 @@ import os from pypy.rlib import rgc from pypy.rlib.objectmodel import we_are_translated, specialize -from pypy.rlib.debug import fatalerror from pypy.rlib.rarithmetic import ovfcheck from pypy.rpython.lltypesystem import lltype, llmemory, rffi, rclass, rstr from pypy.rpython.lltypesystem import llgroup diff --git a/pypy/jit/metainterp/warmspot.py b/pypy/jit/metainterp/warmspot.py --- a/pypy/jit/metainterp/warmspot.py +++ b/pypy/jit/metainterp/warmspot.py @@ -453,7 +453,7 @@ if sys.stdout == sys.__stdout__: import pdb; pdb.post_mortem(tb) raise e.__class__, e, tb - fatalerror('~~~ Crash in JIT! %s' % (e,), traceback=True) + fatalerror('~~~ Crash in JIT! 
%s' % (e,)) crash_in_jit._dont_inline_ = True if self.translator.rtyper.type_system.name == 'lltypesystem': diff --git a/pypy/rlib/debug.py b/pypy/rlib/debug.py --- a/pypy/rlib/debug.py +++ b/pypy/rlib/debug.py @@ -19,14 +19,19 @@ hop.exception_cannot_occur() hop.genop('debug_assert', vlist) -def fatalerror(msg, traceback=False): +def fatalerror(msg): from pypy.rpython.lltypesystem import lltype from pypy.rpython.lltypesystem.lloperation import llop - if traceback: - llop.debug_print_traceback(lltype.Void) + llop.debug_print_traceback(lltype.Void) llop.debug_fatalerror(lltype.Void, msg) fatalerror._dont_inline_ = True -fatalerror._annspecialcase_ = 'specialize:arg(1)' + +def fatalerror_notb(msg): + # a variant of fatalerror() that doesn't print the RPython traceback + from pypy.rpython.lltypesystem import lltype + from pypy.rpython.lltypesystem.lloperation import llop + llop.debug_fatalerror(lltype.Void, msg) +fatalerror_notb._dont_inline_ = True class DebugLog(list): From noreply at buildbot.pypy.org Sun Feb 19 21:03:45 2012 From: noreply at buildbot.pypy.org (arigo) Date: Sun, 19 Feb 2012 21:03:45 +0100 (CET) Subject: [pypy-commit] pypy default: Clarify or fix these comments. Message-ID: <20120219200345.E34BC820D1@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r52654:506419ff7de5 Date: 2012-02-19 21:01 +0100 http://bitbucket.org/pypy/pypy/changeset/506419ff7de5/ Log: Clarify or fix these comments. diff --git a/pypy/rlib/objectmodel.py b/pypy/rlib/objectmodel.py --- a/pypy/rlib/objectmodel.py +++ b/pypy/rlib/objectmodel.py @@ -23,9 +23,11 @@ class _Specialize(object): def memo(self): - """ Specialize functions based on argument values. All arguments has - to be constant at the compile time. The whole function call is replaced - by a call result then. + """ Specialize the function based on argument values. All arguments + have to be either constants or PBCs (i.e. instances of classes with a + _freeze_ method returning True). The function call is replaced by + just its result, or in case several PBCs are used, by some fast + look-up of the result. """ def decorated_func(func): func._annspecialcase_ = 'specialize:memo' @@ -33,8 +35,8 @@ return decorated_func def arg(self, *args): - """ Specialize function based on values of given positions of arguments. - They must be compile-time constants in order to work. + """ Specialize the function based on the values of given positions + of arguments. They must be compile-time constants in order to work. There will be a copy of provided function for each combination of given arguments on positions in args (that can lead to @@ -82,8 +84,7 @@ return decorated_func def ll_and_arg(self, *args): - """ This is like ll(), but instead of specializing on all arguments, - specializes on only the arguments at the given positions + """ This is like ll(), and additionally like arg(...). 
""" def decorated_func(func): func._annspecialcase_ = 'specialize:ll_and_arg' + self._wrap(args) From noreply at buildbot.pypy.org Sun Feb 19 21:04:18 2012 From: noreply at buildbot.pypy.org (arigo) Date: Sun, 19 Feb 2012 21:04:18 +0100 (CET) Subject: [pypy-commit] pypy stm-gc: hg merge default Message-ID: <20120219200418.14B6B820D1@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-gc Changeset: r52655:f43e80c81df4 Date: 2012-02-19 20:50 +0100 http://bitbucket.org/pypy/pypy/changeset/f43e80c81df4/ Log: hg merge default diff --git a/pypy/jit/metainterp/optimizeopt/optimizer.py b/pypy/jit/metainterp/optimizeopt/optimizer.py --- a/pypy/jit/metainterp/optimizeopt/optimizer.py +++ b/pypy/jit/metainterp/optimizeopt/optimizer.py @@ -567,7 +567,7 @@ assert isinstance(descr, compile.ResumeGuardDescr) modifier = resume.ResumeDataVirtualAdder(descr, self.resumedata_memo) try: - newboxes = modifier.finish(self.values, self.pendingfields) + newboxes = modifier.finish(self, self.pendingfields) if len(newboxes) > self.metainterp_sd.options.failargs_limit: raise resume.TagOverflow except resume.TagOverflow: diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py @@ -7760,6 +7760,59 @@ """ self.optimize_loop(ops, expected) + def test_constant_failargs(self): + ops = """ + [p1, i2, i3] + setfield_gc(p1, ConstPtr(myptr), descr=nextdescr) + p16 = getfield_gc(p1, descr=nextdescr) + guard_true(i2) [p16, i3] + jump(p1, i3, i2) + """ + preamble = """ + [p1, i2, i3] + setfield_gc(p1, ConstPtr(myptr), descr=nextdescr) + guard_true(i2) [i3] + jump(p1, i3) + """ + expected = """ + [p1, i3] + guard_true(i3) [] + jump(p1, 1) + """ + self.optimize_loop(ops, expected, preamble) + + def test_issue1048(self): + ops = """ + [p1, i2, i3] + p16 = getfield_gc(p1, descr=nextdescr) + guard_true(i2) [p16] + setfield_gc(p1, ConstPtr(myptr), descr=nextdescr) + jump(p1, i3, i2) + """ + expected = """ + [p1, i3] + guard_true(i3) [] + jump(p1, 1) + """ + self.optimize_loop(ops, expected) + + def test_issue1048_ok(self): + ops = """ + [p1, i2, i3] + p16 = getfield_gc(p1, descr=nextdescr) + call(p16, descr=nonwritedescr) + guard_true(i2) [p16] + setfield_gc(p1, ConstPtr(myptr), descr=nextdescr) + jump(p1, i3, i2) + """ + expected = """ + [p1, i3] + call(ConstPtr(myptr), descr=nonwritedescr) + guard_true(i3) [] + jump(p1, 1) + """ + self.optimize_loop(ops, expected) + class TestLLtype(OptimizeOptTest, LLtypeMixin): pass diff --git a/pypy/jit/metainterp/resume.py b/pypy/jit/metainterp/resume.py --- a/pypy/jit/metainterp/resume.py +++ b/pypy/jit/metainterp/resume.py @@ -182,23 +182,22 @@ # env numbering - def number(self, values, snapshot): + def number(self, optimizer, snapshot): if snapshot is None: return lltype.nullptr(NUMBERING), {}, 0 if snapshot in self.numberings: numb, liveboxes, v = self.numberings[snapshot] return numb, liveboxes.copy(), v - numb1, liveboxes, v = self.number(values, snapshot.prev) + numb1, liveboxes, v = self.number(optimizer, snapshot.prev) n = len(liveboxes)-v boxes = snapshot.boxes length = len(boxes) numb = lltype.malloc(NUMBERING, length) for i in range(length): box = boxes[i] - value = values.get(box, None) - if value is not None: - box = value.get_key_box() + value = optimizer.getvalue(box) + box = value.get_key_box() if isinstance(box, Const): tagged = self.getconst(box) @@ -318,14 +317,14 @@ _, 
tagbits = untag(tagged) return tagbits == TAGVIRTUAL - def finish(self, values, pending_setfields=[]): + def finish(self, optimizer, pending_setfields=[]): # compute the numbering storage = self.storage # make sure that nobody attached resume data to this guard yet assert not storage.rd_numb snapshot = storage.rd_snapshot assert snapshot is not None # is that true? - numb, liveboxes_from_env, v = self.memo.number(values, snapshot) + numb, liveboxes_from_env, v = self.memo.number(optimizer, snapshot) self.liveboxes_from_env = liveboxes_from_env self.liveboxes = {} storage.rd_numb = numb @@ -341,23 +340,23 @@ liveboxes[i] = box else: assert tagbits == TAGVIRTUAL - value = values[box] + value = optimizer.getvalue(box) value.get_args_for_fail(self) for _, box, fieldbox, _ in pending_setfields: self.register_box(box) self.register_box(fieldbox) - value = values[fieldbox] + value = optimizer.getvalue(fieldbox) value.get_args_for_fail(self) - self._number_virtuals(liveboxes, values, v) + self._number_virtuals(liveboxes, optimizer, v) self._add_pending_fields(pending_setfields) storage.rd_consts = self.memo.consts dump_storage(storage, liveboxes) return liveboxes[:] - def _number_virtuals(self, liveboxes, values, num_env_virtuals): + def _number_virtuals(self, liveboxes, optimizer, num_env_virtuals): # !! 'liveboxes' is a list that is extend()ed in-place !! memo = self.memo new_liveboxes = [None] * memo.num_cached_boxes() @@ -397,7 +396,7 @@ memo.nvholes += length - len(vfieldboxes) for virtualbox, fieldboxes in vfieldboxes.iteritems(): num, _ = untag(self.liveboxes[virtualbox]) - value = values[virtualbox] + value = optimizer.getvalue(virtualbox) fieldnums = [self._gettagged(box) for box in fieldboxes] vinfo = value.make_virtual_info(self, fieldnums) diff --git a/pypy/jit/metainterp/test/test_resume.py b/pypy/jit/metainterp/test/test_resume.py --- a/pypy/jit/metainterp/test/test_resume.py +++ b/pypy/jit/metainterp/test/test_resume.py @@ -18,6 +18,19 @@ rd_virtuals = None rd_pendingfields = None + +class FakeOptimizer(object): + def __init__(self, values): + self.values = values + + def getvalue(self, box): + try: + value = self.values[box] + except KeyError: + value = self.values[box] = OptValue(box) + return value + + def test_tag(): assert tag(3, 1) == rffi.r_short(3<<2|1) assert tag(-3, 2) == rffi.r_short(-3<<2|2) @@ -500,7 +513,7 @@ capture_resumedata(fs, None, [], storage) memo = ResumeDataLoopMemo(FakeMetaInterpStaticData()) modifier = ResumeDataVirtualAdder(storage, memo) - liveboxes = modifier.finish({}) + liveboxes = modifier.finish(FakeOptimizer({})) metainterp = MyMetaInterp() b1t, b2t, b3t = [BoxInt(), BoxPtr(), BoxInt()] @@ -524,7 +537,7 @@ capture_resumedata(fs, [b4], [], storage) memo = ResumeDataLoopMemo(FakeMetaInterpStaticData()) modifier = ResumeDataVirtualAdder(storage, memo) - liveboxes = modifier.finish({}) + liveboxes = modifier.finish(FakeOptimizer({})) metainterp = MyMetaInterp() b1t, b2t, b3t, b4t = [BoxInt(), BoxPtr(), BoxInt(), BoxPtr()] @@ -553,10 +566,10 @@ memo = ResumeDataLoopMemo(FakeMetaInterpStaticData()) modifier = ResumeDataVirtualAdder(storage, memo) - liveboxes = modifier.finish({}) + liveboxes = modifier.finish(FakeOptimizer({})) modifier = ResumeDataVirtualAdder(storage2, memo) - liveboxes2 = modifier.finish({}) + liveboxes2 = modifier.finish(FakeOptimizer({})) metainterp = MyMetaInterp() @@ -617,7 +630,7 @@ memo = ResumeDataLoopMemo(FakeMetaInterpStaticData()) values = {b2: virtual_value(b2, b5, c4)} modifier = ResumeDataVirtualAdder(storage, memo) - 
liveboxes = modifier.finish(values) + liveboxes = modifier.finish(FakeOptimizer(values)) assert len(storage.rd_virtuals) == 1 assert storage.rd_virtuals[0].fieldnums == [tag(-1, TAGBOX), tag(0, TAGCONST)] @@ -628,7 +641,7 @@ values = {b2: virtual_value(b2, b4, v6), b6: v6} memo.clear_box_virtual_numbers() modifier = ResumeDataVirtualAdder(storage2, memo) - liveboxes2 = modifier.finish(values) + liveboxes2 = modifier.finish(FakeOptimizer(values)) assert len(storage2.rd_virtuals) == 2 assert storage2.rd_virtuals[0].fieldnums == [tag(len(liveboxes2)-1, TAGBOX), tag(-1, TAGVIRTUAL)] @@ -674,7 +687,7 @@ memo = ResumeDataLoopMemo(FakeMetaInterpStaticData()) values = {b2: virtual_value(b2, b5, c4)} modifier = ResumeDataVirtualAdder(storage, memo) - liveboxes = modifier.finish(values) + liveboxes = modifier.finish(FakeOptimizer(values)) assert len(storage.rd_virtuals) == 1 assert storage.rd_virtuals[0].fieldnums == [tag(-1, TAGBOX), tag(0, TAGCONST)] @@ -684,7 +697,7 @@ capture_resumedata(fs, None, [], storage2) values[b4] = virtual_value(b4, b6, c4) modifier = ResumeDataVirtualAdder(storage2, memo) - liveboxes = modifier.finish(values) + liveboxes = modifier.finish(FakeOptimizer(values)) assert len(storage2.rd_virtuals) == 2 assert storage2.rd_virtuals[1].fieldnums == storage.rd_virtuals[0].fieldnums assert storage2.rd_virtuals[1] is storage.rd_virtuals[0] @@ -703,7 +716,7 @@ v1.setfield(LLtypeMixin.nextdescr, v2) values = {b1: v1, b2: v2} modifier = ResumeDataVirtualAdder(storage, memo) - liveboxes = modifier.finish(values) + liveboxes = modifier.finish(FakeOptimizer(values)) assert liveboxes == [b3] assert len(storage.rd_virtuals) == 2 assert storage.rd_virtuals[0].fieldnums == [tag(-1, TAGBOX), @@ -776,7 +789,7 @@ memo = ResumeDataLoopMemo(FakeMetaInterpStaticData()) - numb, liveboxes, v = memo.number({}, snap1) + numb, liveboxes, v = memo.number(FakeOptimizer({}), snap1) assert v == 0 assert liveboxes == {b1: tag(0, TAGBOX), b2: tag(1, TAGBOX), @@ -788,7 +801,7 @@ tag(0, TAGBOX), tag(2, TAGINT)] assert not numb.prev.prev - numb2, liveboxes2, v = memo.number({}, snap2) + numb2, liveboxes2, v = memo.number(FakeOptimizer({}), snap2) assert v == 0 assert liveboxes2 == {b1: tag(0, TAGBOX), b2: tag(1, TAGBOX), @@ -813,7 +826,8 @@ return self.virt # renamed - numb3, liveboxes3, v = memo.number({b3: FakeValue(False, c4)}, snap3) + numb3, liveboxes3, v = memo.number(FakeOptimizer({b3: FakeValue(False, c4)}), + snap3) assert v == 0 assert liveboxes3 == {b1: tag(0, TAGBOX), b2: tag(1, TAGBOX)} @@ -825,7 +839,8 @@ env4 = [c3, b4, b1, c3] snap4 = Snapshot(snap, env4) - numb4, liveboxes4, v = memo.number({b4: FakeValue(True, b4)}, snap4) + numb4, liveboxes4, v = memo.number(FakeOptimizer({b4: FakeValue(True, b4)}), + snap4) assert v == 1 assert liveboxes4 == {b1: tag(0, TAGBOX), b2: tag(1, TAGBOX), @@ -837,8 +852,9 @@ env5 = [b1, b4, b5] snap5 = Snapshot(snap4, env5) - numb5, liveboxes5, v = memo.number({b4: FakeValue(True, b4), - b5: FakeValue(True, b5)}, snap5) + numb5, liveboxes5, v = memo.number(FakeOptimizer({b4: FakeValue(True, b4), + b5: FakeValue(True, b5)}), + snap5) assert v == 2 assert liveboxes5 == {b1: tag(0, TAGBOX), b2: tag(1, TAGBOX), @@ -940,7 +956,7 @@ storage = make_storage(b1s, b2s, b3s) memo = ResumeDataLoopMemo(FakeMetaInterpStaticData()) modifier = ResumeDataVirtualAdder(storage, memo) - liveboxes = modifier.finish({}) + liveboxes = modifier.finish(FakeOptimizer({})) assert storage.rd_snapshot is None cpu = MyCPU([]) reader = ResumeDataDirectReader(MyMetaInterp(cpu), storage) @@ 
-954,14 +970,14 @@ storage = make_storage(b1s, b2s, b3s) memo = ResumeDataLoopMemo(FakeMetaInterpStaticData()) modifier = ResumeDataVirtualAdder(storage, memo) - modifier.finish({}) + modifier.finish(FakeOptimizer({})) assert len(memo.consts) == 2 assert storage.rd_consts is memo.consts b1s, b2s, b3s = [ConstInt(sys.maxint), ConstInt(2**17), ConstInt(-65)] storage2 = make_storage(b1s, b2s, b3s) modifier2 = ResumeDataVirtualAdder(storage2, memo) - modifier2.finish({}) + modifier2.finish(FakeOptimizer({})) assert len(memo.consts) == 3 assert storage2.rd_consts is memo.consts @@ -1022,7 +1038,7 @@ val = FakeValue() values = {b1s: val, b2s: val} - liveboxes = modifier.finish(values) + liveboxes = modifier.finish(FakeOptimizer(values)) assert storage.rd_snapshot is None b1t, b3t = [BoxInt(11), BoxInt(33)] newboxes = _resume_remap(liveboxes, [b1_2, b3s], b1t, b3t) @@ -1043,7 +1059,7 @@ storage = make_storage(b1s, b2s, b3s) memo = ResumeDataLoopMemo(FakeMetaInterpStaticData()) modifier = ResumeDataVirtualAdder(storage, memo) - liveboxes = modifier.finish({}) + liveboxes = modifier.finish(FakeOptimizer({})) b2t, b3t = [BoxPtr(demo55o), BoxInt(33)] newboxes = _resume_remap(liveboxes, [b2s, b3s], b2t, b3t) metainterp = MyMetaInterp() @@ -1086,7 +1102,7 @@ values = {b2s: v2, b4s: v4} liveboxes = [] - modifier._number_virtuals(liveboxes, values, 0) + modifier._number_virtuals(liveboxes, FakeOptimizer(values), 0) storage.rd_consts = memo.consts[:] storage.rd_numb = None # resume @@ -1156,7 +1172,7 @@ modifier.register_virtual_fields(b2s, [b4s, c1s]) liveboxes = [] values = {b2s: v2} - modifier._number_virtuals(liveboxes, values, 0) + modifier._number_virtuals(liveboxes, FakeOptimizer(values), 0) dump_storage(storage, liveboxes) storage.rd_consts = memo.consts[:] storage.rd_numb = None @@ -1203,7 +1219,7 @@ v2.setfield(LLtypeMixin.bdescr, OptValue(b4s)) modifier.register_virtual_fields(b2s, [c1s, b4s]) liveboxes = [] - modifier._number_virtuals(liveboxes, {b2s: v2}, 0) + modifier._number_virtuals(liveboxes, FakeOptimizer({b2s: v2}), 0) dump_storage(storage, liveboxes) storage.rd_consts = memo.consts[:] storage.rd_numb = None @@ -1249,7 +1265,7 @@ values = {b4s: v4, b2s: v2} liveboxes = [] - modifier._number_virtuals(liveboxes, values, 0) + modifier._number_virtuals(liveboxes, FakeOptimizer(values), 0) assert liveboxes == [b2s, b4s] or liveboxes == [b4s, b2s] modifier._add_pending_fields([(LLtypeMixin.nextdescr, b2s, b4s, -1)]) storage.rd_consts = memo.consts[:] diff --git a/pypy/module/_demo/test/test_sieve.py b/pypy/module/_demo/test/test_sieve.py new file mode 100644 --- /dev/null +++ b/pypy/module/_demo/test/test_sieve.py @@ -0,0 +1,12 @@ +from pypy.conftest import gettestobjspace + + +class AppTestSieve: + def setup_class(cls): + cls.space = gettestobjspace(usemodules=('_demo',)) + + def test_sieve(self): + import _demo + lst = _demo.sieve(100) + assert lst == [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, + 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97] diff --git a/pypy/module/cpyext/dictobject.py b/pypy/module/cpyext/dictobject.py --- a/pypy/module/cpyext/dictobject.py +++ b/pypy/module/cpyext/dictobject.py @@ -184,8 +184,10 @@ w_item = space.call_method(w_iter, "next") w_key, w_value = space.fixedview(w_item, 2) state = space.fromcache(RefcountState) - pkey[0] = state.make_borrowed(w_dict, w_key) - pvalue[0] = state.make_borrowed(w_dict, w_value) + if pkey: + pkey[0] = state.make_borrowed(w_dict, w_key) + if pvalue: + pvalue[0] = state.make_borrowed(w_dict, w_value) ppos[0] += 1 except 
OperationError, e: if not e.match(space, space.w_StopIteration): diff --git a/pypy/module/cpyext/slotdefs.py b/pypy/module/cpyext/slotdefs.py --- a/pypy/module/cpyext/slotdefs.py +++ b/pypy/module/cpyext/slotdefs.py @@ -185,6 +185,15 @@ space.fromcache(State).check_and_raise_exception(always=True) return space.wrap(res) +def wrap_delitem(space, w_self, w_args, func): + func_target = rffi.cast(objobjargproc, func) + check_num_args(space, w_args, 1) + w_key, = space.fixedview(w_args) + res = generic_cpy_call(space, func_target, w_self, w_key, None) + if rffi.cast(lltype.Signed, res) == -1: + space.fromcache(State).check_and_raise_exception(always=True) + return space.w_None + def wrap_ssizessizeargfunc(space, w_self, w_args, func): func_target = rffi.cast(ssizessizeargfunc, func) check_num_args(space, w_args, 2) @@ -291,6 +300,14 @@ def slot_nb_int(space, w_self): return space.int(w_self) + at cpython_api([PyObject], PyObject, external=False) +def slot_tp_iter(space, w_self): + return space.iter(w_self) + + at cpython_api([PyObject], PyObject, external=False) +def slot_tp_iternext(space, w_self): + return space.next(w_self) + from pypy.rlib.nonconst import NonConstant SLOTS = {} @@ -632,6 +649,19 @@ TPSLOT("__buffer__", "tp_as_buffer.c_bf_getreadbuffer", None, "wrap_getreadbuffer", ""), ) +# partial sort to solve some slot conflicts: +# Number slots before Mapping slots before Sequence slots. +# These are the only conflicts between __name__ methods +def slotdef_sort_key(slotdef): + if slotdef.slot_name.startswith('tp_as_number'): + return 1 + if slotdef.slot_name.startswith('tp_as_mapping'): + return 2 + if slotdef.slot_name.startswith('tp_as_sequence'): + return 3 + return 0 +slotdefs = sorted(slotdefs, key=slotdef_sort_key) + slotdefs_for_tp_slots = unrolling_iterable( [(x.method_name, x.slot_name, x.slot_names, x.slot_func) for x in slotdefs]) diff --git a/pypy/module/cpyext/test/test_arraymodule.py b/pypy/module/cpyext/test/test_arraymodule.py --- a/pypy/module/cpyext/test/test_arraymodule.py +++ b/pypy/module/cpyext/test/test_arraymodule.py @@ -43,6 +43,15 @@ assert arr[:2].tolist() == [1,2] assert arr[1:3].tolist() == [2,3] + def test_slice_object(self): + module = self.import_module(name='array') + arr = module.array('i', [1,2,3,4]) + assert arr[slice(1,3)].tolist() == [2,3] + arr[slice(1,3)] = module.array('i', [21, 22, 23]) + assert arr.tolist() == [1, 21, 22, 23, 4] + del arr[slice(1, 3)] + assert arr.tolist() == [1, 23, 4] + def test_buffer(self): module = self.import_module(name='array') arr = module.array('i', [1,2,3,4]) diff --git a/pypy/module/cpyext/test/test_dictobject.py b/pypy/module/cpyext/test/test_dictobject.py --- a/pypy/module/cpyext/test/test_dictobject.py +++ b/pypy/module/cpyext/test/test_dictobject.py @@ -112,6 +112,37 @@ assert space.eq_w(space.len(w_copy), space.len(w_dict)) assert space.eq_w(w_copy, w_dict) + def test_iterkeys(self, space, api): + w_dict = space.sys.getdict(space) + py_dict = make_ref(space, w_dict) + + ppos = lltype.malloc(Py_ssize_tP.TO, 1, flavor='raw') + pkey = lltype.malloc(PyObjectP.TO, 1, flavor='raw') + pvalue = lltype.malloc(PyObjectP.TO, 1, flavor='raw') + + keys_w = [] + values_w = [] + try: + ppos[0] = 0 + while api.PyDict_Next(w_dict, ppos, pkey, None): + w_key = from_ref(space, pkey[0]) + keys_w.append(w_key) + ppos[0] = 0 + while api.PyDict_Next(w_dict, ppos, None, pvalue): + w_value = from_ref(space, pvalue[0]) + values_w.append(w_value) + finally: + lltype.free(ppos, flavor='raw') + lltype.free(pkey, flavor='raw') + 
lltype.free(pvalue, flavor='raw') + + api.Py_DecRef(py_dict) # release borrowed references + + assert space.eq_w(space.newlist(keys_w), + space.call_method(w_dict, "keys")) + assert space.eq_w(space.newlist(values_w), + space.call_method(w_dict, "values")) + def test_dictproxy(self, space, api): w_dict = space.sys.get('modules') w_proxy = api.PyDictProxy_New(w_dict) diff --git a/pypy/module/cpyext/test/test_typeobject.py b/pypy/module/cpyext/test/test_typeobject.py --- a/pypy/module/cpyext/test/test_typeobject.py +++ b/pypy/module/cpyext/test/test_typeobject.py @@ -425,3 +425,32 @@ ''') obj = module.new_obj() raises(ZeroDivisionError, obj.__setitem__, 5, None) + + def test_tp_iter(self): + module = self.import_extension('foo', [ + ("tp_iter", "METH_O", + ''' + if (!args->ob_type->tp_iter) + { + PyErr_SetNone(PyExc_ValueError); + return NULL; + } + return args->ob_type->tp_iter(args); + ''' + ), + ("tp_iternext", "METH_O", + ''' + if (!args->ob_type->tp_iternext) + { + PyErr_SetNone(PyExc_ValueError); + return NULL; + } + return args->ob_type->tp_iternext(args); + ''' + ) + ]) + l = [1] + it = module.tp_iter(l) + assert type(it) is type(iter([])) + assert module.tp_iternext(it) == 1 + raises(StopIteration, module.tp_iternext, it) diff --git a/pypy/module/micronumpy/__init__.py b/pypy/module/micronumpy/__init__.py --- a/pypy/module/micronumpy/__init__.py +++ b/pypy/module/micronumpy/__init__.py @@ -67,10 +67,12 @@ ("arccos", "arccos"), ("arcsin", "arcsin"), ("arctan", "arctan"), + ("arccosh", "arccosh"), ("arcsinh", "arcsinh"), ("arctanh", "arctanh"), ("copysign", "copysign"), ("cos", "cos"), + ("cosh", "cosh"), ("divide", "divide"), ("true_divide", "true_divide"), ("equal", "equal"), @@ -90,9 +92,11 @@ ("reciprocal", "reciprocal"), ("sign", "sign"), ("sin", "sin"), + ("sinh", "sinh"), ("subtract", "subtract"), ('sqrt', 'sqrt'), ("tan", "tan"), + ("tanh", "tanh"), ('bitwise_and', 'bitwise_and'), ('bitwise_or', 'bitwise_or'), ('bitwise_xor', 'bitwise_xor'), diff --git a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py --- a/pypy/module/micronumpy/interp_ufuncs.py +++ b/pypy/module/micronumpy/interp_ufuncs.py @@ -435,7 +435,11 @@ ("arcsin", "arcsin", 1, {"promote_to_float": True}), ("arccos", "arccos", 1, {"promote_to_float": True}), ("arctan", "arctan", 1, {"promote_to_float": True}), + ("sinh", "sinh", 1, {"promote_to_float": True}), + ("cosh", "cosh", 1, {"promote_to_float": True}), + ("tanh", "tanh", 1, {"promote_to_float": True}), ("arcsinh", "arcsinh", 1, {"promote_to_float": True}), + ("arccosh", "arccosh", 1, {"promote_to_float": True}), ("arctanh", "arctanh", 1, {"promote_to_float": True}), ]: self.add_ufunc(space, *ufunc_def) diff --git a/pypy/module/micronumpy/test/test_ufuncs.py b/pypy/module/micronumpy/test/test_ufuncs.py --- a/pypy/module/micronumpy/test/test_ufuncs.py +++ b/pypy/module/micronumpy/test/test_ufuncs.py @@ -310,6 +310,33 @@ b = arctan(a) assert math.isnan(b[0]) + def test_sinh(self): + import math + from _numpypy import array, sinh + + a = array([-1, 0, 1, float('inf'), float('-inf')]) + b = sinh(a) + for i in range(len(a)): + assert b[i] == math.sinh(a[i]) + + def test_cosh(self): + import math + from _numpypy import array, cosh + + a = array([-1, 0, 1, float('inf'), float('-inf')]) + b = cosh(a) + for i in range(len(a)): + assert b[i] == math.cosh(a[i]) + + def test_tanh(self): + import math + from _numpypy import array, tanh + + a = array([-1, 0, 1, float('inf'), float('-inf')]) + b = tanh(a) + for i in range(len(a)): + assert 
b[i] == math.tanh(a[i]) + def test_arcsinh(self): import math from _numpypy import arcsinh @@ -318,6 +345,15 @@ assert math.asinh(v) == arcsinh(v) assert math.isnan(arcsinh(float("nan"))) + def test_arccosh(self): + import math + from _numpypy import arccosh + + for v in [1.0, 1.1, 2]: + assert math.acosh(v) == arccosh(v) + for v in [-1.0, 0, .99]: + assert math.isnan(arccosh(v)) + def test_arctanh(self): import math from _numpypy import arctanh diff --git a/pypy/module/micronumpy/types.py b/pypy/module/micronumpy/types.py --- a/pypy/module/micronumpy/types.py +++ b/pypy/module/micronumpy/types.py @@ -489,10 +489,28 @@ return math.atan(v) @simple_unary_op + def sinh(self, v): + return math.sinh(v) + + @simple_unary_op + def cosh(self, v): + return math.cosh(v) + + @simple_unary_op + def tanh(self, v): + return math.tanh(v) + + @simple_unary_op def arcsinh(self, v): return math.asinh(v) @simple_unary_op + def arccosh(self, v): + if v < 1.0: + return rfloat.NAN + return math.acosh(v) + + @simple_unary_op def arctanh(self, v): if v == 1.0 or v == -1.0: return math.copysign(rfloat.INFINITY, v) diff --git a/pypy/module/test_lib_pypy/test_datetime.py b/pypy/module/test_lib_pypy/test_datetime.py --- a/pypy/module/test_lib_pypy/test_datetime.py +++ b/pypy/module/test_lib_pypy/test_datetime.py @@ -3,7 +3,7 @@ import py import time -import datetime +from lib_pypy import datetime import copy import os @@ -43,4 +43,4 @@ dt = datetime.datetime.utcnow() assert type(dt.microsecond) is int - copy.copy(dt) \ No newline at end of file + copy.copy(dt) diff --git a/pypy/translator/c/gcc/trackgcroot.py b/pypy/translator/c/gcc/trackgcroot.py --- a/pypy/translator/c/gcc/trackgcroot.py +++ b/pypy/translator/c/gcc/trackgcroot.py @@ -1697,6 +1697,8 @@ } """ elif self.format in ('elf64', 'darwin64'): + if self.format == 'elf64': # gentoo patch: hardened systems + print >> output, "\t.section .note.GNU-stack,\"\",%progbits" print >> output, "\t.text" print >> output, "\t.globl %s" % _globalname('pypy_asm_stackwalk') _variant(elf64='.type pypy_asm_stackwalk, @function', From noreply at buildbot.pypy.org Sun Feb 19 21:04:19 2012 From: noreply at buildbot.pypy.org (arigo) Date: Sun, 19 Feb 2012 21:04:19 +0100 (CET) Subject: [pypy-commit] pypy stm-gc: Bah, the problem was not about the 'traceback' argument. Message-ID: <20120219200419.49E09820D1@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-gc Changeset: r52656:8c8b4968177b Date: 2012-02-19 21:03 +0100 http://bitbucket.org/pypy/pypy/changeset/8c8b4968177b/ Log: Bah, the problem was not about the 'traceback' argument. Proper fix. 
diff --git a/pypy/rlib/debug.py b/pypy/rlib/debug.py --- a/pypy/rlib/debug.py +++ b/pypy/rlib/debug.py @@ -20,11 +20,13 @@ hop.genop('debug_assert', vlist) def fatalerror(msg): + # print the RPython traceback and abort with a fatal error from pypy.rpython.lltypesystem import lltype from pypy.rpython.lltypesystem.lloperation import llop llop.debug_print_traceback(lltype.Void) llop.debug_fatalerror(lltype.Void, msg) fatalerror._dont_inline_ = True +fatalerror._annenforceargs_ = [str] def fatalerror_notb(msg): # a variant of fatalerror() that doesn't print the RPython traceback @@ -32,6 +34,7 @@ from pypy.rpython.lltypesystem.lloperation import llop llop.debug_fatalerror(lltype.Void, msg) fatalerror_notb._dont_inline_ = True +fatalerror_notb._annenforceargs_ = [str] class DebugLog(list): From noreply at buildbot.pypy.org Sun Feb 19 21:22:49 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Sun, 19 Feb 2012 21:22:49 +0100 (CET) Subject: [pypy-commit] pypy default: Another cpyext stub: PyThread_start_new_thread Message-ID: <20120219202249.B51DA82AAB@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: Changeset: r52658:a8321d3e8e9c Date: 2012-02-19 21:21 +0100 http://bitbucket.org/pypy/pypy/changeset/a8321d3e8e9c/ Log: Another cpyext stub: PyThread_start_new_thread diff --git a/pypy/module/cpyext/stubsactive.py b/pypy/module/cpyext/stubsactive.py --- a/pypy/module/cpyext/stubsactive.py +++ b/pypy/module/cpyext/stubsactive.py @@ -62,3 +62,7 @@ """ return -1 +thread_func = lltype.Ptr(lltype.FuncType([rffi.VOIDP], lltype.Void)) + at cpython_api([thread_func, rffi.VOIDP], rffi.INT_real, error=-1) +def PyThread_start_new_thread(space, func, arg): + return -1 From noreply at buildbot.pypy.org Sun Feb 19 21:22:48 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Sun, 19 Feb 2012 21:22:48 +0100 (CET) Subject: [pypy-commit] pypy default: Add a stub implementation for Py_AddPendingCall. Message-ID: <20120219202248.80E41820D1@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: Changeset: r52657:fae75d81bc4f Date: 2012-02-19 21:13 +0100 http://bitbucket.org/pypy/pypy/changeset/fae75d81bc4f/ Log: Add a stub implementation for Py_AddPendingCall. It always returns an error for now... diff --git a/pypy/module/cpyext/stubs.py b/pypy/module/cpyext/stubs.py --- a/pypy/module/cpyext/stubs.py +++ b/pypy/module/cpyext/stubs.py @@ -1293,28 +1293,6 @@ that haven't been explicitly destroyed at that point.""" raise NotImplementedError - at cpython_api([rffi.VOIDP], lltype.Void) -def Py_AddPendingCall(space, func): - """Post a notification to the Python main thread. If successful, func will - be called with the argument arg at the earliest convenience. func will be - called having the global interpreter lock held and can thus use the full - Python API and can take any action such as setting object attributes to - signal IO completion. It must return 0 on success, or -1 signalling an - exception. The notification function won't be interrupted to perform another - asynchronous notification recursively, but it can still be interrupted to - switch threads if the global interpreter lock is released, for example, if it - calls back into Python code. - - This function returns 0 on success in which case the notification has been - scheduled. Otherwise, for example if the notification buffer is full, it - returns -1 without setting any exception. - - This function can be called on any thread, be it a Python thread or some - other system thread. 
If it is a Python thread, it doesn't matter if it holds - the global interpreter lock or not. - """ - raise NotImplementedError - @cpython_api([Py_tracefunc, PyObject], lltype.Void) def PyEval_SetProfile(space, func, obj): """Set the profiler function to func. The obj parameter is passed to the diff --git a/pypy/module/cpyext/stubsactive.py b/pypy/module/cpyext/stubsactive.py --- a/pypy/module/cpyext/stubsactive.py +++ b/pypy/module/cpyext/stubsactive.py @@ -38,3 +38,27 @@ def Py_MakePendingCalls(space): return 0 +pending_call = lltype.Ptr(lltype.FuncType([rffi.VOIDP], rffi.INT_real)) + at cpython_api([pending_call, rffi.VOIDP], rffi.INT_real, error=-1) +def Py_AddPendingCall(space, func, arg): + """Post a notification to the Python main thread. If successful, + func will be called with the argument arg at the earliest + convenience. func will be called having the global interpreter + lock held and can thus use the full Python API and can take any + action such as setting object attributes to signal IO completion. + It must return 0 on success, or -1 signalling an exception. The + notification function won't be interrupted to perform another + asynchronous notification recursively, but it can still be + interrupted to switch threads if the global interpreter lock is + released, for example, if it calls back into Python code. + + This function returns 0 on success in which case the notification + has been scheduled. Otherwise, for example if the notification + buffer is full, it returns -1 without setting any exception. + + This function can be called on any thread, be it a Python thread + or some other system thread. If it is a Python thread, it doesn't + matter if it holds the global interpreter lock or not. + """ + return -1 + From notifications-noreply at bitbucket.org Mon Feb 20 10:55:07 2012 From: notifications-noreply at bitbucket.org (Bitbucket) Date: Mon, 20 Feb 2012 09:55:07 -0000 Subject: [pypy-commit] Notification: pypy Message-ID: <20120220095507.8767.7539@bitbucket03.managed.contegix.com> You have received a notification from jpberdel. Hi, I forked pypy. My fork is at https://bitbucket.org/jpberdel/pypy. -- Disable notifications at https://bitbucket.org/account/notifications/ From noreply at buildbot.pypy.org Mon Feb 20 11:01:35 2012 From: noreply at buildbot.pypy.org (arigo) Date: Mon, 20 Feb 2012 11:01:35 +0100 (CET) Subject: [pypy-commit] pypy stm-gc: Weakref support in the GC. Message-ID: <20120220100135.775FA820D1@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-gc Changeset: r52659:dc4d7d7854d2 Date: 2012-02-20 10:59 +0100 http://bitbucket.org/pypy/pypy/changeset/dc4d7d7854d2/ Log: Weakref support in the GC. 
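The scheme in the diff below is the usual one for weakrefs in a copying collection: weakref objects that survive are chained aside (surviving_weakrefs), and once all strongly-reachable local objects have been copied, the chain is walked and each weak pointer is either redirected to the global copy of its target or cleared if the target did not survive. A minimal toy model of that fix-up step, with a plain dict standing in for the forwarding information kept in the object headers:

    class ToyWeakref(object):
        def __init__(self, target):
            self.target = target

    def fixup_weakrefs_toy(surviving_weakrefs, forwarded):
        # 'forwarded' maps surviving old objects to their new copies;
        # anything not in the map is dead, so the weakref is cleared.
        for wr in surviving_weakrefs:
            wr.target = forwarded.get(wr.target)

    old_a, old_b, new_a = object(), object(), object()
    forwarded = {old_a: new_a}             # old_a survives, old_b dies
    wr1, wr2 = ToyWeakref(old_a), ToyWeakref(old_b)
    fixup_weakrefs_toy([wr1, wr2], forwarded)
    assert wr1.target is new_a and wr2.target is None
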
diff --git a/pypy/config/translationoption.py b/pypy/config/translationoption.py --- a/pypy/config/translationoption.py +++ b/pypy/config/translationoption.py @@ -75,8 +75,7 @@ "markcompact": [("translation.gctransformer", "framework")], "minimark": [("translation.gctransformer", "framework")], "stmgc": [("translation.gctransformer", "framework"), - ("translation.gcrootfinder", "stm"), - ("translation.rweakref", False)], # XXX temp + ("translation.gcrootfinder", "stm")], }, cmdline="--gc"), ChoiceOption("gctransformer", "GC transformer that is used - internal", diff --git a/pypy/rpython/memory/gc/stmgc.py b/pypy/rpython/memory/gc/stmgc.py --- a/pypy/rpython/memory/gc/stmgc.py +++ b/pypy/rpython/memory/gc/stmgc.py @@ -19,6 +19,7 @@ GCFLAG_WAS_COPIED = first_gcflag << 1 # keep in sync with et.c GCFLAG_HAS_SHADOW = first_gcflag << 2 GCFLAG_FIXED_HASH = first_gcflag << 3 +GCFLAG_WEAKREF = first_gcflag << 4 def always_inline(fn): @@ -48,6 +49,7 @@ ('nursery_size', lltype.Signed), ('malloc_flags', lltype.Signed), ('pending_list', llmemory.Address), + ('surviving_weakrefs', llmemory.Address), ) TRANSLATION_PARAMS = { @@ -160,7 +162,6 @@ is_finalizer_light=False, contains_weakptr=False): #assert not needs_finalizer, "XXX" --- finalizer is just ignored - assert not contains_weakptr, "XXX" # # Check the mode: either in a transactional thread, or in # the main thread. For now we do the same thing in both @@ -178,6 +179,8 @@ # Build the object. llarena.arena_reserve(result, totalsize) obj = result + size_gc_header + if contains_weakptr: # check constant-folded + flags |= GCFLAG_WEAKREF self.init_gc_object(result, typeid, flags=flags) # return llmemory.cast_adr_to_ptr(obj, llmemory.GCREF) @@ -480,6 +483,9 @@ # self.gc.release(self.gc.mutex_lock) # + # Fix up the weakrefs that used to point to local objects + self.fixup_weakrefs(tls) + # # Now, all indirectly reachable local objects have been copied into # the global area, and all pointers have been fixed to point to the # global copies, including in the local copy of the roots. What @@ -490,6 +496,7 @@ def collect_roots_from_tldict(self, tls): tls.pending_list = NULL + tls.surviving_weakrefs = NULL # Enumerate the roots, which are the local copies of global objects. # For each root, trace it. CALLBACK = self.stm_operations.CALLBACK_ENUM @@ -602,11 +609,47 @@ # thread before the commit is really complete. globalhdr.version = tls.pending_list tls.pending_list = globalobj + # + if hdr.tid & GCFLAG_WEAKREF != 0: + # this was a weakref object that survives. + self.young_weakref_survives(tls, obj) # # Fix the original root.address[0] to point to the globalobj root.address[0] = globalobj + @dont_inline + def young_weakref_survives(self, tls, obj): + # Relink it in the tls.surviving_weakrefs chained list, + # via the weakpointer_offset in the local copy of the object. + # Do it only if the weakref points to a local object. 
+ offset = self.gc.weakpointer_offset(self.gc.get_type_id(obj)) + if self.is_in_nursery(tls, (obj + offset).address[0]): + (obj + offset).address[0] = tls.surviving_weakrefs + tls.surviving_weakrefs = obj + + def fixup_weakrefs(self, tls): + obj = tls.surviving_weakrefs + while obj: + offset = self.gc.weakpointer_offset(self.gc.get_type_id(obj)) + # + hdr = self.header(obj) + ll_assert(hdr.tid & GCFLAG_GLOBAL == 0, + "weakref: unexpectedly global") + globalobj = hdr.version + obj2 = (globalobj + offset).address[0] + hdr2 = self.header(obj2) + ll_assert(hdr2.tid & GCFLAG_GLOBAL == 0, + "weakref: points to a global") + if hdr2.tid & GCFLAG_WAS_COPIED: + obj2g = hdr2.version # obj2 survives, going there + else: + obj2g = llmemory.NULL # obj2 dies + (globalobj + offset).address[0] = obj2g + # + obj = (obj + offset).address[0] + + class _GlobalCollector(object): pass _global_collector = _GlobalCollector() diff --git a/pypy/rpython/memory/gc/test/test_stmgc.py b/pypy/rpython/memory/gc/test/test_stmgc.py --- a/pypy/rpython/memory/gc/test/test_stmgc.py +++ b/pypy/rpython/memory/gc/test/test_stmgc.py @@ -1,5 +1,5 @@ import py -from pypy.rpython.lltypesystem import lltype, llmemory, llarena, rffi +from pypy.rpython.lltypesystem import lltype, llmemory, llarena, llgroup, rffi from pypy.rpython.memory.gc.stmgc import StmGC, WORD from pypy.rpython.memory.gc.stmgc import GCFLAG_GLOBAL, GCFLAG_WAS_COPIED from pypy.rpython.memory.support import mangle_hash @@ -14,6 +14,9 @@ ('sr2', lltype.Ptr(SR)), ('sr3', lltype.Ptr(SR)))) +WR = lltype.GcStruct('WeakRef', ('wadr', llmemory.Address)) +SWR = lltype.GcStruct('SWR', ('wr', lltype.Ptr(WR))) + class FakeStmOperations: # The point of this class is to make sure about the distinction between @@ -115,6 +118,10 @@ ofslist = [llmemory.offsetof(SR, 's1'), llmemory.offsetof(SR, 'sr2'), llmemory.offsetof(SR, 'sr3')] + elif TYPE == WR: + ofslist = [] + elif TYPE == SWR: + ofslist = [llmemory.offsetof(SWR, 'wr')] else: assert 0 for ofs in ofslist: @@ -122,6 +129,9 @@ if addr.address[0]: callback(addr, arg) +def fake_weakpointer_offset(tid): + return llmemory.offsetof(WR, 'wadr') + class TestBasic: GCClass = StmGC @@ -135,6 +145,7 @@ self.gc.DEBUG = True self.gc.get_size = fake_get_size self.gc.trace = fake_trace + self.gc.weakpointer_offset = fake_weakpointer_offset self.gc.setup() def teardown_method(self, meth): @@ -147,9 +158,11 @@ # ---------- # test helpers - def malloc(self, STRUCT): + def malloc(self, STRUCT, weakref=False): size = llarena.round_up_for_allocation(llmemory.sizeof(STRUCT)) - gcref = self.gc.malloc_fixedsize_clear(123, size) + tid = lltype.cast_primitive(llgroup.HALFWORD, 123) + gcref = self.gc.malloc_fixedsize_clear(tid, size, + contains_weakptr=weakref) realobj = lltype.cast_opaque_ptr(lltype.Ptr(STRUCT), gcref) addr = llmemory.cast_ptr_to_adr(realobj) return realobj, addr @@ -485,3 +498,53 @@ s2 = tr1.s1 # tr1 is a root, so not copied yet assert s2 and s2 != t2 assert self.gc.identityhash(s2) == i + + def test_weakref_to_global(self): + swr1, swr1_adr = self.malloc(SWR) + s2, s2_adr = self.malloc(S) + self.select_thread(1) + wr1, wr1_adr = self.malloc(WR, weakref=True) + wr1.wadr = s2_adr + twr1_adr = self.gc.stm_writebarrier(swr1_adr) + twr1 = llmemory.cast_adr_to_ptr(twr1_adr, lltype.Ptr(SWR)) + twr1.wr = wr1 + self.gc.commit_transaction() + wr2 = twr1.wr # twr1 is a root, so not copied yet + assert wr2 and wr2 != wr1 + assert wr2.wadr == s2_adr # survives + + def test_weakref_to_local_dying(self): + swr1, swr1_adr = self.malloc(SWR) + 
self.select_thread(1) + t2, t2_adr = self.malloc(S) + wr1, wr1_adr = self.malloc(WR, weakref=True) + wr1.wadr = t2_adr + twr1_adr = self.gc.stm_writebarrier(swr1_adr) + twr1 = llmemory.cast_adr_to_ptr(twr1_adr, lltype.Ptr(SWR)) + twr1.wr = wr1 + self.gc.commit_transaction() + wr2 = twr1.wr # twr1 is a root, so not copied yet + assert wr2 and wr2 != wr1 + assert wr2.wadr == llmemory.NULL # dies + + def test_weakref_to_local_surviving(self): + sr1, sr1_adr = self.malloc(SR) + swr1, swr1_adr = self.malloc(SWR) + self.select_thread(1) + t2, t2_adr = self.malloc(S) + wr1, wr1_adr = self.malloc(WR, weakref=True) + wr1.wadr = t2_adr + twr1_adr = self.gc.stm_writebarrier(swr1_adr) + twr1 = llmemory.cast_adr_to_ptr(twr1_adr, lltype.Ptr(SWR)) + twr1.wr = wr1 + tr1_adr = self.gc.stm_writebarrier(sr1_adr) + tr1 = llmemory.cast_adr_to_ptr(tr1_adr, lltype.Ptr(SR)) + tr1.s1 = t2 + t2.a = 4242 + self.gc.commit_transaction() + wr2 = twr1.wr # twr1 is a root, so not copied yet + assert wr2 and wr2 != wr1 + assert wr2.wadr and wr2.wadr != t2_adr # survives + s2 = llmemory.cast_adr_to_ptr(wr2.wadr, lltype.Ptr(S)) + assert s2.a == 4242 + assert s2 == tr1.s1 # tr1 is a root, so not copied yet From noreply at buildbot.pypy.org Mon Feb 20 11:26:58 2012 From: noreply at buildbot.pypy.org (bivab) Date: Mon, 20 Feb 2012 11:26:58 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: Prepare the instructions in malloc_slowpath to actually emit them Message-ID: <20120220102658.7BF19820D1@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: ppc-jit-backend Changeset: r52660:19a3138c5db4 Date: 2012-02-19 05:48 -0800 http://bitbucket.org/pypy/pypy/changeset/19a3138c5db4/ Log: Prepare the instructions in malloc_slowpath to actually emit them diff --git a/pypy/jit/backend/ppc/ppc_assembler.py b/pypy/jit/backend/ppc/ppc_assembler.py --- a/pypy/jit/backend/ppc/ppc_assembler.py +++ b/pypy/jit/backend/ppc/ppc_assembler.py @@ -329,6 +329,7 @@ pmc.bc(4, 2, jmp_pos) # jump if the two values are equal pmc.overwrite() mc.b_abs(self.propagate_exception_path) + mc.prepare_insts_blocks() rawstart = mc.materialize(self.cpu.asmmemmgr, []) if IS_PPC_64: self.write_64_bit_func_descr(rawstart, rawstart+3*WORD) From noreply at buildbot.pypy.org Mon Feb 20 11:26:59 2012 From: noreply at buildbot.pypy.org (bivab) Date: Mon, 20 Feb 2012 11:26:59 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: create a minimal frame for malloc_slowpath to store the backchain and the return address Message-ID: <20120220102659.AD532820D1@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: ppc-jit-backend Changeset: r52661:e98bb91b9214 Date: 2012-02-19 06:30 -0800 http://bitbucket.org/pypy/pypy/changeset/e98bb91b9214/ Log: create a minimal frame for malloc_slowpath to store the backchain and the return address diff --git a/pypy/jit/backend/ppc/ppc_assembler.py b/pypy/jit/backend/ppc/ppc_assembler.py --- a/pypy/jit/backend/ppc/ppc_assembler.py +++ b/pypy/jit/backend/ppc/ppc_assembler.py @@ -301,7 +301,12 @@ if IS_PPC_64: for _ in range(6): mc.write32(0) - + mc.subi(r.SP.value, r.SP.value, BACKCHAIN_SIZE * WORD + 1*WORD) + mc.mflr(r.SCRATCH.value) + if IS_PPC_32: + mc.stw(r.SCRATCH.value, r.SP.value, 0) + else: + mc.std(r.SCRATCH.value, r.SP.value, 0) with Saved_Volatiles(mc): # Values to compute size stored in r3 and r4 mc.subf(r.r3.value, r.r3.value, r.r4.value) @@ -321,6 +326,12 @@ mc.cmp_op(0, r.r3.value, 0, imm=True) jmp_pos = mc.currpos() mc.nop() + + mc.load(r.SCRATCH.value, r.SP.value, 0) + mc.mtlr(r.SCRATCH.value) # restore LR 
+ mc.addi(r.SP.value, r.SP.value, BACKCHAIN_SIZE * WORD + 1*WORD) # restore old SP + mc.blr() + nursery_free_adr = self.cpu.gc_ll_descr.get_nursery_free_addr() mc.load_imm(r.r4, nursery_free_adr) mc.load(r.r4.value, r.r4.value, 0) @@ -329,6 +340,8 @@ pmc.bc(4, 2, jmp_pos) # jump if the two values are equal pmc.overwrite() mc.b_abs(self.propagate_exception_path) + + mc.prepare_insts_blocks() rawstart = mc.materialize(self.cpu.asmmemmgr, []) if IS_PPC_64: From noreply at buildbot.pypy.org Mon Feb 20 11:27:00 2012 From: noreply at buildbot.pypy.org (bivab) Date: Mon, 20 Feb 2012 11:27:00 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: structure the return and exception paths in malloc_slowpath Message-ID: <20120220102700.DC50D820D1@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: ppc-jit-backend Changeset: r52662:10f14d281b2e Date: 2012-02-19 06:32 -0800 http://bitbucket.org/pypy/pypy/changeset/10f14d281b2e/ Log: structure the return and exception paths in malloc_slowpath diff --git a/pypy/jit/backend/ppc/ppc_assembler.py b/pypy/jit/backend/ppc/ppc_assembler.py --- a/pypy/jit/backend/ppc/ppc_assembler.py +++ b/pypy/jit/backend/ppc/ppc_assembler.py @@ -326,18 +326,20 @@ mc.cmp_op(0, r.r3.value, 0, imm=True) jmp_pos = mc.currpos() mc.nop() + + nursery_free_adr = self.cpu.gc_ll_descr.get_nursery_free_addr() + mc.load_imm(r.r4, nursery_free_adr) + mc.load(r.r4.value, r.r4.value, 0) mc.load(r.SCRATCH.value, r.SP.value, 0) mc.mtlr(r.SCRATCH.value) # restore LR mc.addi(r.SP.value, r.SP.value, BACKCHAIN_SIZE * WORD + 1*WORD) # restore old SP mc.blr() - nursery_free_adr = self.cpu.gc_ll_descr.get_nursery_free_addr() - mc.load_imm(r.r4, nursery_free_adr) - mc.load(r.r4.value, r.r4.value, 0) - + # if r3 == 0 we skip the return above and jump to the exception path + offset = mc.currpos() - jmp_pos pmc = OverwritingBuilder(mc, jmp_pos, 1) - pmc.bc(4, 2, jmp_pos) # jump if the two values are equal + pmc.bc(4, 2, offset) pmc.overwrite() mc.b_abs(self.propagate_exception_path) From noreply at buildbot.pypy.org Mon Feb 20 11:27:02 2012 From: noreply at buildbot.pypy.org (bivab) Date: Mon, 20 Feb 2012 11:27:02 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: use FORCE_INDEX_AREA Message-ID: <20120220102702.1767D820D1@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: ppc-jit-backend Changeset: r52663:9edce8167d5b Date: 2012-02-19 06:32 -0800 http://bitbucket.org/pypy/pypy/changeset/9edce8167d5b/ Log: use FORCE_INDEX_AREA diff --git a/pypy/jit/backend/ppc/ppc_assembler.py b/pypy/jit/backend/ppc/ppc_assembler.py --- a/pypy/jit/backend/ppc/ppc_assembler.py +++ b/pypy/jit/backend/ppc/ppc_assembler.py @@ -394,8 +394,8 @@ addr = rffi.cast(lltype.Signed, decode_func_addr) # load parameters into parameter registers - mc.load(r.r3.value, r.SPP.value, self.ENCODING_AREA) # address of state encoding - mc.mr(r.r4.value, r.SPP.value) # load spilling pointer + mc.load(r.r3.value, r.SPP.value, self.FORCE_INDEX_AREA) # address of state encoding + mc.mr(r.r4.value, r.SPP.value) # load spilling pointer # # call decoding function mc.call(addr) From noreply at buildbot.pypy.org Mon Feb 20 11:27:03 2012 From: noreply at buildbot.pypy.org (bivab) Date: Mon, 20 Feb 2012 11:27:03 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: Actually patch the machine code Message-ID: <20120220102703.45852820D1@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: ppc-jit-backend Changeset: r52664:8f787b47866d Date: 2012-02-19 06:33 -0800 
http://bitbucket.org/pypy/pypy/changeset/8f787b47866d/ Log: Actually patch the machine code diff --git a/pypy/jit/backend/ppc/ppc_assembler.py b/pypy/jit/backend/ppc/ppc_assembler.py --- a/pypy/jit/backend/ppc/ppc_assembler.py +++ b/pypy/jit/backend/ppc/ppc_assembler.py @@ -1024,6 +1024,7 @@ offset = self.mc.currpos() - fast_jmp_pos pmc = OverwritingBuilder(self.mc, fast_jmp_pos, 1) pmc.bc(4, 1, offset) # jump if LE (not GT) + pmc.overwrite() with scratch_reg(self.mc): self.mc.load_imm(r.SCRATCH, nursery_free_adr) From noreply at buildbot.pypy.org Mon Feb 20 11:27:04 2012 From: noreply at buildbot.pypy.org (bivab) Date: Mon, 20 Feb 2012 11:27:04 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: merge heads Message-ID: <20120220102704.74A80820D1@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: ppc-jit-backend Changeset: r52665:a419d8d766c6 Date: 2012-02-20 02:25 -0800 http://bitbucket.org/pypy/pypy/changeset/a419d8d766c6/ Log: merge heads diff --git a/pypy/jit/backend/ppc/ppc_assembler.py b/pypy/jit/backend/ppc/ppc_assembler.py --- a/pypy/jit/backend/ppc/ppc_assembler.py +++ b/pypy/jit/backend/ppc/ppc_assembler.py @@ -301,7 +301,12 @@ if IS_PPC_64: for _ in range(6): mc.write32(0) - + mc.subi(r.SP.value, r.SP.value, BACKCHAIN_SIZE * WORD + 1*WORD) + mc.mflr(r.SCRATCH.value) + if IS_PPC_32: + mc.stw(r.SCRATCH.value, r.SP.value, 0) + else: + mc.std(r.SCRATCH.value, r.SP.value, 0) with Saved_Volatiles(mc): # Values to compute size stored in r3 and r4 mc.subf(r.r3.value, r.r3.value, r.r4.value) @@ -315,14 +320,25 @@ mc.cmp_op(0, r.r3.value, 0, imm=True) jmp_pos = mc.currpos() mc.nop() + nursery_free_adr = self.cpu.gc_ll_descr.get_nursery_free_addr() mc.load_imm(r.r4, nursery_free_adr) mc.load(r.r4.value, r.r4.value, 0) + + mc.load(r.SCRATCH.value, r.SP.value, 0) + mc.mtlr(r.SCRATCH.value) # restore LR + mc.addi(r.SP.value, r.SP.value, BACKCHAIN_SIZE * WORD + 1*WORD) # restore old SP + mc.blr() + # if r3 == 0 we skip the return above and jump to the exception path + offset = mc.currpos() - jmp_pos pmc = OverwritingBuilder(mc, jmp_pos, 1) - pmc.bc(4, 2, jmp_pos) # jump if the two values are equal + pmc.bc(4, 2, offset) pmc.overwrite() mc.b_abs(self.propagate_exception_path) + + + mc.prepare_insts_blocks() rawstart = mc.materialize(self.cpu.asmmemmgr, []) if IS_PPC_64: self.write_64_bit_func_descr(rawstart, rawstart+3*WORD) @@ -372,8 +388,8 @@ addr = rffi.cast(lltype.Signed, decode_func_addr) # load parameters into parameter registers - mc.load(r.r3.value, r.SPP.value, self.ENCODING_AREA) # address of state encoding - mc.mr(r.r4.value, r.SPP.value) # load spilling pointer + mc.load(r.r3.value, r.SPP.value, self.FORCE_INDEX_AREA) # address of state encoding + mc.mr(r.r4.value, r.SPP.value) # load spilling pointer # # call decoding function mc.call(addr) @@ -997,6 +1013,7 @@ offset = self.mc.currpos() - fast_jmp_pos pmc = OverwritingBuilder(self.mc, fast_jmp_pos, 1) pmc.bc(4, 1, offset) # jump if LE (not GT) + pmc.overwrite() with scratch_reg(self.mc): self.mc.load_imm(r.SCRATCH, nursery_free_adr) From noreply at buildbot.pypy.org Mon Feb 20 11:35:46 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Mon, 20 Feb 2012 11:35:46 +0100 (CET) Subject: [pypy-commit] pypy default: make sure that ctypes arrays are convertible to pointers, and that we can pass them as arguments in the fast path Message-ID: <20120220103546.88D00820D1@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: Changeset: r52666:6566e81c76a8 Date: 2012-02-20 11:31 +0100 
http://bitbucket.org/pypy/pypy/changeset/6566e81c76a8/ Log: make sure that ctypes arrays are convertible to pointers, and that we can pass them as arguments in the fast path diff --git a/lib_pypy/_ctypes/array.py b/lib_pypy/_ctypes/array.py --- a/lib_pypy/_ctypes/array.py +++ b/lib_pypy/_ctypes/array.py @@ -1,9 +1,9 @@ - +import _ffi import _rawffi from _ctypes.basics import _CData, cdata_from_address, _CDataMeta, sizeof from _ctypes.basics import keepalive_key, store_reference, ensure_objects -from _ctypes.basics import CArgObject +from _ctypes.basics import CArgObject, as_ffi_pointer class ArrayMeta(_CDataMeta): def __new__(self, name, cls, typedict): @@ -211,6 +211,9 @@ def _to_ffi_param(self): return self._get_buffer_value() + def _as_ffi_pointer_(self, ffitype): + return as_ffi_pointer(self, ffitype) + ARRAY_CACHE = {} def create_array_type(base, length): @@ -228,5 +231,6 @@ _type_ = base ) cls = ArrayMeta(name, (Array,), tpdict) + cls._ffiargtype = _ffi.types.Pointer(base.get_ffi_argtype()) ARRAY_CACHE[key] = cls return cls diff --git a/lib_pypy/_ctypes/basics.py b/lib_pypy/_ctypes/basics.py --- a/lib_pypy/_ctypes/basics.py +++ b/lib_pypy/_ctypes/basics.py @@ -230,5 +230,16 @@ } +# called from primitive.py, pointer.py, array.py +def as_ffi_pointer(value, ffitype): + my_ffitype = type(value).get_ffi_argtype() + # for now, we always allow types.pointer, else a lot of tests + # break. We need to rethink how pointers are represented, though + if my_ffitype is not ffitype and ffitype is not _ffi.types.void_p: + raise ArgumentError("expected %s instance, got %s" % (type(value), + ffitype)) + return value._get_buffer_value() + + # used by "byref" from _ctypes.pointer import pointer diff --git a/lib_pypy/_ctypes/pointer.py b/lib_pypy/_ctypes/pointer.py --- a/lib_pypy/_ctypes/pointer.py +++ b/lib_pypy/_ctypes/pointer.py @@ -3,7 +3,7 @@ import _ffi from _ctypes.basics import _CData, _CDataMeta, cdata_from_address, ArgumentError from _ctypes.basics import keepalive_key, store_reference, ensure_objects -from _ctypes.basics import sizeof, byref +from _ctypes.basics import sizeof, byref, as_ffi_pointer from _ctypes.array import Array, array_get_slice_params, array_slice_getitem,\ array_slice_setitem @@ -119,14 +119,6 @@ def _as_ffi_pointer_(self, ffitype): return as_ffi_pointer(self, ffitype) -def as_ffi_pointer(value, ffitype): - my_ffitype = type(value).get_ffi_argtype() - # for now, we always allow types.pointer, else a lot of tests - # break. 
We need to rethink how pointers are represented, though - if my_ffitype is not ffitype and ffitype is not _ffi.types.void_p: - raise ArgumentError("expected %s instance, got %s" % (type(value), - ffitype)) - return value._get_buffer_value() def _cast_addr(obj, _, tp): if not (isinstance(tp, _CDataMeta) and tp._is_pointer_like()): diff --git a/pypy/module/test_lib_pypy/ctypes_tests/test_fastpath.py b/pypy/module/test_lib_pypy/ctypes_tests/test_fastpath.py --- a/pypy/module/test_lib_pypy/ctypes_tests/test_fastpath.py +++ b/pypy/module/test_lib_pypy/ctypes_tests/test_fastpath.py @@ -97,6 +97,16 @@ tf_b.errcheck = errcheck assert tf_b(-126) == 'hello' + def test_array_to_ptr(self): + ARRAY = c_int * 8 + func = dll._testfunc_ai8 + func.restype = POINTER(c_int) + func.argtypes = [ARRAY] + array = ARRAY(1, 2, 3, 4, 5, 6, 7, 8) + ptr = func(array) + assert ptr[0] == 1 + assert ptr[7] == 8 + class TestFallbackToSlowpath(BaseCTypesTestChecker): diff --git a/pypy/module/test_lib_pypy/ctypes_tests/test_prototypes.py b/pypy/module/test_lib_pypy/ctypes_tests/test_prototypes.py --- a/pypy/module/test_lib_pypy/ctypes_tests/test_prototypes.py +++ b/pypy/module/test_lib_pypy/ctypes_tests/test_prototypes.py @@ -246,6 +246,14 @@ def func(): pass CFUNCTYPE(None, c_int * 3)(func) + def test_array_to_ptr_wrongtype(self): + ARRAY = c_byte * 8 + func = testdll._testfunc_ai8 + func.restype = POINTER(c_int) + func.argtypes = [c_int * 8] + array = ARRAY(1, 2, 3, 4, 5, 6, 7, 8) + py.test.raises(ArgumentError, "func(array)") + ################################################################ if __name__ == '__main__': From noreply at buildbot.pypy.org Mon Feb 20 11:44:33 2012 From: noreply at buildbot.pypy.org (arigo) Date: Mon, 20 Feb 2012 11:44:33 +0100 (CET) Subject: [pypy-commit] pypy stm-gc: Allow objectmodel.current_object_addr_as_int() to work without Message-ID: <20120220104433.25C95820D1@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-gc Changeset: r52667:5c6be7732717 Date: 2012-02-20 11:38 +0100 http://bitbucket.org/pypy/pypy/changeset/5c6be7732717/ Log: Allow objectmodel.current_object_addr_as_int() to work without forcing inevitable transactions. diff --git a/pypy/rlib/objectmodel.py b/pypy/rlib/objectmodel.py --- a/pypy/rlib/objectmodel.py +++ b/pypy/rlib/objectmodel.py @@ -452,7 +452,7 @@ if hop.rtyper.type_system.name == 'lltypesystem': from pypy.rpython.lltypesystem import lltype if isinstance(vobj.concretetype, lltype.Ptr): - return hop.genop('cast_ptr_to_int', [vobj], + return hop.genop('cast_current_ptr_to_int', [vobj], resulttype = lltype.Signed) elif hop.rtyper.type_system.name == 'ootypesystem': from pypy.rpython.ootypesystem import ootype diff --git a/pypy/rpython/lltypesystem/lloperation.py b/pypy/rpython/lltypesystem/lloperation.py --- a/pypy/rpython/lltypesystem/lloperation.py +++ b/pypy/rpython/lltypesystem/lloperation.py @@ -385,6 +385,7 @@ 'ptr_iszero': LLOp(canfold=True), 'cast_ptr_to_int': LLOp(sideeffects=False), 'cast_int_to_ptr': LLOp(sideeffects=False), + 'cast_current_ptr_to_int': LLOp(sideeffects=False), # gcptr->int, approx. 
'direct_fieldptr': LLOp(canfold=True), 'direct_arrayitems': LLOp(canfold=True), 'direct_ptradd': LLOp(canfold=True), diff --git a/pypy/translator/c/src/int.h b/pypy/translator/c/src/int.h --- a/pypy/translator/c/src/int.h +++ b/pypy/translator/c/src/int.h @@ -195,7 +195,8 @@ #define OP_CAST_INT_TO_LONGLONG(x,r) r = (long long)(x) #define OP_CAST_CHAR_TO_INT(x,r) r = (long)((unsigned char)(x)) #define OP_CAST_INT_TO_CHAR(x,r) r = (char)(x) -#define OP_CAST_PTR_TO_INT(x,r) r = (long)(x) /* XXX */ +#define OP_CAST_PTR_TO_INT(x,r) r = (long)(x) +#define OP_CAST_CURRENT_PTR_TO_INT(x,r) r = (long)(x) #define OP_TRUNCATE_LONGLONG_TO_INT(x,r) r = (long)(x) diff --git a/pypy/translator/stm/transform.py b/pypy/translator/stm/transform.py --- a/pypy/translator/stm/transform.py +++ b/pypy/translator/stm/transform.py @@ -11,6 +11,7 @@ 'direct_call', 'force_cast', 'keepalive', 'cast_ptr_to_adr', 'debug_print', 'debug_assert', 'cast_opaque_ptr', 'hint', 'indirect_call', 'stack_current', 'gc_stack_bottom', + 'cast_current_ptr_to_int', # this variant of 'cast_ptr_to_int' is ok ]) ALWAYS_ALLOW_OPERATIONS |= set(lloperation.enum_tryfold_ops()) From noreply at buildbot.pypy.org Mon Feb 20 11:50:18 2012 From: noreply at buildbot.pypy.org (mattip) Date: Mon, 20 Feb 2012 11:50:18 +0100 (CET) Subject: [pypy-commit] pypy default: whoops Message-ID: <20120220105018.99628820D1@wyvern.cs.uni-duesseldorf.de> Author: mattip Branch: Changeset: r52668:8abf18698af3 Date: 2012-02-20 12:49 +0200 http://bitbucket.org/pypy/pypy/changeset/8abf18698af3/ Log: whoops diff --git a/pypy/tool/release/package.py b/pypy/tool/release/package.py --- a/pypy/tool/release/package.py +++ b/pypy/tool/release/package.py @@ -83,7 +83,7 @@ shutil.copy(str(basedir.join(file)), str(pypydir)) pypydir.ensure('include', dir=True) if sys.platform == 'win32': - shutil.copyfile(str(pypy_c.dirpath().join("libpypy-c.lib"))), + shutil.copyfile(str(pypy_c.dirpath().join("libpypy-c.lib")), str(pypydir.join('include/python27.lib'))) # we want to put there all *.h and *.inl from trunk/include # and from pypy/_interfaces From noreply at buildbot.pypy.org Mon Feb 20 12:15:56 2012 From: noreply at buildbot.pypy.org (arigo) Date: Mon, 20 Feb 2012 12:15:56 +0100 (CET) Subject: [pypy-commit] pypy stm-gc: XXX temporarily disable the method cache Message-ID: <20120220111556.3DEA4820D1@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-gc Changeset: r52669:b9b5af7cb2b8 Date: 2012-02-20 12:15 +0100 http://bitbucket.org/pypy/pypy/changeset/b9b5af7cb2b8/ Log: XXX temporarily disable the method cache diff --git a/pypy/config/pypyoption.py b/pypy/config/pypyoption.py --- a/pypy/config/pypyoption.py +++ b/pypy/config/pypyoption.py @@ -359,6 +359,9 @@ type_system = config.translation.type_system backend = config.translation.backend + if config.translation.stm: # XXX --- for STM --- + config.objspace.std.withmethodcache = False + # all the good optimizations for PyPy should be listed here if level in ['2', '3', 'jit']: config.objspace.opcodes.suggest(CALL_METHOD=True) From noreply at buildbot.pypy.org Mon Feb 20 12:18:37 2012 From: noreply at buildbot.pypy.org (cfbolz) Date: Mon, 20 Feb 2012 12:18:37 +0100 (CET) Subject: [pypy-commit] pypy default: issue1059 testing Message-ID: <20120220111837.4B3BB820D1@wyvern.cs.uni-duesseldorf.de> Author: Carl Friedrich Bolz Branch: Changeset: r52670:4b254e123047 Date: 2012-02-20 12:17 +0100 http://bitbucket.org/pypy/pypy/changeset/4b254e123047/ Log: issue1059 testing make the .__dict__.clear method of builtin types raise an 
error. Fix popitem on dict proxies (builtin types raise an error, normal types work normally). diff --git a/pypy/objspace/std/dictmultiobject.py b/pypy/objspace/std/dictmultiobject.py --- a/pypy/objspace/std/dictmultiobject.py +++ b/pypy/objspace/std/dictmultiobject.py @@ -142,6 +142,13 @@ else: return result + def popitem(self, w_dict): + space = self.space + iterator = self.iter(w_dict) + w_key, w_value = iterator.next() + self.delitem(w_dict, w_key) + return (w_key, w_value) + def clear(self, w_dict): strategy = self.space.fromcache(EmptyDictStrategy) storage = strategy.get_empty_storage() diff --git a/pypy/objspace/std/dictproxyobject.py b/pypy/objspace/std/dictproxyobject.py --- a/pypy/objspace/std/dictproxyobject.py +++ b/pypy/objspace/std/dictproxyobject.py @@ -3,7 +3,7 @@ from pypy.objspace.std.dictmultiobject import W_DictMultiObject, IteratorImplementation from pypy.objspace.std.dictmultiobject import DictStrategy from pypy.objspace.std.typeobject import unwrap_cell -from pypy.interpreter.error import OperationError +from pypy.interpreter.error import OperationError, operationerrfmt from pypy.rlib import rerased @@ -44,7 +44,8 @@ raise if not w_type.is_cpytype(): raise - # xxx obscure workaround: allow cpyext to write to type->tp_dict. + # xxx obscure workaround: allow cpyext to write to type->tp_dict + # xxx even in the case of a builtin type. # xxx like CPython, we assume that this is only done early after # xxx the type is created, and we don't invalidate any cache. w_type.dict_w[key] = w_value @@ -86,8 +87,14 @@ for (key, w_value) in self.unerase(w_dict.dstorage).dict_w.iteritems()] def clear(self, w_dict): - self.unerase(w_dict.dstorage).dict_w.clear() - self.unerase(w_dict.dstorage).mutated(None) + space = self.space + w_type = self.unerase(w_dict.dstorage) + if (not space.config.objspace.std.mutable_builtintypes + and not w_type.is_heaptype()): + msg = "can't clear dictionary of type '%s'" + raise operationerrfmt(space.w_TypeError, msg, w_type.name) + w_type.dict_w.clear() + w_type.mutated(None) class DictProxyIteratorImplementation(IteratorImplementation): def __init__(self, space, strategy, dictimplementation): diff --git a/pypy/objspace/std/test/test_dictproxy.py b/pypy/objspace/std/test/test_dictproxy.py --- a/pypy/objspace/std/test/test_dictproxy.py +++ b/pypy/objspace/std/test/test_dictproxy.py @@ -22,6 +22,9 @@ assert NotEmpty.string == 1 raises(TypeError, 'NotEmpty.__dict__.setdefault(15, 1)') + key, value = NotEmpty.__dict__.popitem() + assert (key == 'a' and value == 1) or (key == 'b' and value == 4) + def test_dictproxyeq(self): class a(object): pass @@ -43,6 +46,11 @@ assert s1 == s2 assert s1.startswith('{') and s1.endswith('}') + def test_immutable_dict_on_builtin_type(self): + raises(TypeError, "int.__dict__['a'] = 1") + raises(TypeError, int.__dict__.popitem) + raises(TypeError, int.__dict__.clear) + class AppTestUserObjectMethodCache(AppTestUserObject): def setup_class(cls): cls.space = gettestobjspace( diff --git a/pypy/objspace/std/test/test_typeobject.py b/pypy/objspace/std/test/test_typeobject.py --- a/pypy/objspace/std/test/test_typeobject.py +++ b/pypy/objspace/std/test/test_typeobject.py @@ -993,7 +993,9 @@ raises(TypeError, setattr, list, 'append', 42) raises(TypeError, setattr, list, 'foobar', 42) raises(TypeError, delattr, dict, 'keys') - + raises(TypeError, 'int.__dict__["a"] = 1') + raises(TypeError, 'int.__dict__.clear()') + def test_nontype_in_mro(self): class OldStyle: pass From noreply at buildbot.pypy.org Mon Feb 20 12:18:38 2012 From: 
noreply at buildbot.pypy.org (cfbolz) Date: Mon, 20 Feb 2012 12:18:38 +0100 (CET) Subject: [pypy-commit] pypy default: merge Message-ID: <20120220111838.D8D9A820D1@wyvern.cs.uni-duesseldorf.de> Author: Carl Friedrich Bolz Branch: Changeset: r52671:6159d1be91c9 Date: 2012-02-20 12:18 +0100 http://bitbucket.org/pypy/pypy/changeset/6159d1be91c9/ Log: merge diff --git a/lib_pypy/_ctypes/array.py b/lib_pypy/_ctypes/array.py --- a/lib_pypy/_ctypes/array.py +++ b/lib_pypy/_ctypes/array.py @@ -1,9 +1,9 @@ - +import _ffi import _rawffi from _ctypes.basics import _CData, cdata_from_address, _CDataMeta, sizeof from _ctypes.basics import keepalive_key, store_reference, ensure_objects -from _ctypes.basics import CArgObject +from _ctypes.basics import CArgObject, as_ffi_pointer class ArrayMeta(_CDataMeta): def __new__(self, name, cls, typedict): @@ -211,6 +211,9 @@ def _to_ffi_param(self): return self._get_buffer_value() + def _as_ffi_pointer_(self, ffitype): + return as_ffi_pointer(self, ffitype) + ARRAY_CACHE = {} def create_array_type(base, length): @@ -228,5 +231,6 @@ _type_ = base ) cls = ArrayMeta(name, (Array,), tpdict) + cls._ffiargtype = _ffi.types.Pointer(base.get_ffi_argtype()) ARRAY_CACHE[key] = cls return cls diff --git a/lib_pypy/_ctypes/basics.py b/lib_pypy/_ctypes/basics.py --- a/lib_pypy/_ctypes/basics.py +++ b/lib_pypy/_ctypes/basics.py @@ -230,5 +230,16 @@ } +# called from primitive.py, pointer.py, array.py +def as_ffi_pointer(value, ffitype): + my_ffitype = type(value).get_ffi_argtype() + # for now, we always allow types.pointer, else a lot of tests + # break. We need to rethink how pointers are represented, though + if my_ffitype is not ffitype and ffitype is not _ffi.types.void_p: + raise ArgumentError("expected %s instance, got %s" % (type(value), + ffitype)) + return value._get_buffer_value() + + # used by "byref" from _ctypes.pointer import pointer diff --git a/lib_pypy/_ctypes/pointer.py b/lib_pypy/_ctypes/pointer.py --- a/lib_pypy/_ctypes/pointer.py +++ b/lib_pypy/_ctypes/pointer.py @@ -3,7 +3,7 @@ import _ffi from _ctypes.basics import _CData, _CDataMeta, cdata_from_address, ArgumentError from _ctypes.basics import keepalive_key, store_reference, ensure_objects -from _ctypes.basics import sizeof, byref +from _ctypes.basics import sizeof, byref, as_ffi_pointer from _ctypes.array import Array, array_get_slice_params, array_slice_getitem,\ array_slice_setitem @@ -119,14 +119,6 @@ def _as_ffi_pointer_(self, ffitype): return as_ffi_pointer(self, ffitype) -def as_ffi_pointer(value, ffitype): - my_ffitype = type(value).get_ffi_argtype() - # for now, we always allow types.pointer, else a lot of tests - # break. 
We need to rethink how pointers are represented, though - if my_ffitype is not ffitype and ffitype is not _ffi.types.void_p: - raise ArgumentError("expected %s instance, got %s" % (type(value), - ffitype)) - return value._get_buffer_value() def _cast_addr(obj, _, tp): if not (isinstance(tp, _CDataMeta) and tp._is_pointer_like()): diff --git a/pypy/module/test_lib_pypy/ctypes_tests/test_fastpath.py b/pypy/module/test_lib_pypy/ctypes_tests/test_fastpath.py --- a/pypy/module/test_lib_pypy/ctypes_tests/test_fastpath.py +++ b/pypy/module/test_lib_pypy/ctypes_tests/test_fastpath.py @@ -97,6 +97,16 @@ tf_b.errcheck = errcheck assert tf_b(-126) == 'hello' + def test_array_to_ptr(self): + ARRAY = c_int * 8 + func = dll._testfunc_ai8 + func.restype = POINTER(c_int) + func.argtypes = [ARRAY] + array = ARRAY(1, 2, 3, 4, 5, 6, 7, 8) + ptr = func(array) + assert ptr[0] == 1 + assert ptr[7] == 8 + class TestFallbackToSlowpath(BaseCTypesTestChecker): diff --git a/pypy/module/test_lib_pypy/ctypes_tests/test_prototypes.py b/pypy/module/test_lib_pypy/ctypes_tests/test_prototypes.py --- a/pypy/module/test_lib_pypy/ctypes_tests/test_prototypes.py +++ b/pypy/module/test_lib_pypy/ctypes_tests/test_prototypes.py @@ -246,6 +246,14 @@ def func(): pass CFUNCTYPE(None, c_int * 3)(func) + def test_array_to_ptr_wrongtype(self): + ARRAY = c_byte * 8 + func = testdll._testfunc_ai8 + func.restype = POINTER(c_int) + func.argtypes = [c_int * 8] + array = ARRAY(1, 2, 3, 4, 5, 6, 7, 8) + py.test.raises(ArgumentError, "func(array)") + ################################################################ if __name__ == '__main__': diff --git a/pypy/tool/release/package.py b/pypy/tool/release/package.py --- a/pypy/tool/release/package.py +++ b/pypy/tool/release/package.py @@ -83,7 +83,7 @@ shutil.copy(str(basedir.join(file)), str(pypydir)) pypydir.ensure('include', dir=True) if sys.platform == 'win32': - shutil.copyfile(str(pypy_c.dirpath().join("libpypy-c.lib"))), + shutil.copyfile(str(pypy_c.dirpath().join("libpypy-c.lib")), str(pypydir.join('include/python27.lib'))) # we want to put there all *.h and *.inl from trunk/include # and from pypy/_interfaces From noreply at buildbot.pypy.org Mon Feb 20 12:31:51 2012 From: noreply at buildbot.pypy.org (cfbolz) Date: Mon, 20 Feb 2012 12:31:51 +0100 (CET) Subject: [pypy-commit] pypy default: document this difference Message-ID: <20120220113151.811FA820D1@wyvern.cs.uni-duesseldorf.de> Author: Carl Friedrich Bolz Branch: Changeset: r52672:b319183b838d Date: 2012-02-20 12:31 +0100 http://bitbucket.org/pypy/pypy/changeset/b319183b838d/ Log: document this difference diff --git a/pypy/doc/cpython_differences.rst b/pypy/doc/cpython_differences.rst --- a/pypy/doc/cpython_differences.rst +++ b/pypy/doc/cpython_differences.rst @@ -313,5 +313,10 @@ implementation detail that shows up because of internal C-level slots that PyPy does not have. +* the ``__dict__`` attribute of new-style classes returns a normal dict, as + opposed to a dict proxy like in CPython. Mutating the dict will change the + type and vice versa. For builtin types, a dictionary will be returned that + cannot be changed (but still looks and behaves like a normal dictionary). + .. include:: _ref.txt From noreply at buildbot.pypy.org Mon Feb 20 13:55:10 2012 From: noreply at buildbot.pypy.org (arigo) Date: Mon, 20 Feb 2012 13:55:10 +0100 (CET) Subject: [pypy-commit] pypy stm-gc: Found out how to re-enable the methodcache with stm. Trying it out... 
Message-ID: <20120220125510.E3296820D1@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-gc Changeset: r52673:907d77791735 Date: 2012-02-20 13:53 +0100 http://bitbucket.org/pypy/pypy/changeset/907d77791735/ Log: Found out how to re-enable the methodcache with stm. Trying it out... diff --git a/pypy/config/pypyoption.py b/pypy/config/pypyoption.py --- a/pypy/config/pypyoption.py +++ b/pypy/config/pypyoption.py @@ -359,9 +359,6 @@ type_system = config.translation.type_system backend = config.translation.backend - if config.translation.stm: # XXX --- for STM --- - config.objspace.std.withmethodcache = False - # all the good optimizations for PyPy should be listed here if level in ['2', '3', 'jit']: config.objspace.opcodes.suggest(CALL_METHOD=True) diff --git a/pypy/interpreter/executioncontext.py b/pypy/interpreter/executioncontext.py --- a/pypy/interpreter/executioncontext.py +++ b/pypy/interpreter/executioncontext.py @@ -36,6 +36,11 @@ self.compiler = space.createcompiler() self.profilefunc = None # if not None, no JIT self.w_profilefuncarg = None + # + config = self.space.config + if config.translation.stm and config.objspace.std.withmethodcache: + from pypy.objspace.std.typeobject import MethodCache + self._methodcache = MethodCache(self.space) def gettopframe(self): return self.topframeref() diff --git a/pypy/objspace/std/typeobject.py b/pypy/objspace/std/typeobject.py --- a/pypy/objspace/std/typeobject.py +++ b/pypy/objspace/std/typeobject.py @@ -405,7 +405,12 @@ @elidable def _pure_lookup_where_with_method_cache(w_self, name, version_tag): space = w_self.space - cache = space.fromcache(MethodCache) + if space.config.translation.stm: + # with stm, it's important to use one method cache per thread; + # otherwise, we get all the time spurious transaction conflicts. + cache = space.getexecutioncontext()._methodcache + else: + cache = space.fromcache(MethodCache) SHIFT2 = r_uint.BITS - space.config.objspace.std.methodcachesizeexp SHIFT1 = SHIFT2 - 5 version_tag_as_int = current_object_addr_as_int(version_tag) From noreply at buildbot.pypy.org Mon Feb 20 14:18:51 2012 From: noreply at buildbot.pypy.org (arigo) Date: Mon, 20 Feb 2012 14:18:51 +0100 (CET) Subject: [pypy-commit] pypy stm-gc: Make this variable unsigned, because it is meant to overflow from time Message-ID: <20120220131851.EA55C820D1@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-gc Changeset: r52674:94cc96af7a39 Date: 2012-02-20 14:09 +0100 http://bitbucket.org/pypy/pypy/changeset/94cc96af7a39/ Log: Make this variable unsigned, because it is meant to overflow from time to time. 
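The point of switching nextindex to lltype.Unsigned in the diff below is that the counter is only ever used modulo the (power-of-two) table size, so letting it wrap around is harmless, whereas wrap-around of a signed integer is undefined behaviour at the C level. The masked-probing pattern used by _ll_getnextitem, shown here as toy pure-Python code rather than the RPython helper:

    def find_first_valid(entries, base):
        # 'entries' must have a power-of-two length; 'base' can be any
        # value, including one that has already wrapped around.
        mask = len(entries) - 1
        for counter in range(len(entries)):
            i = (base + counter) & mask
            if entries[i] is not None:
                return i
        raise KeyError("no valid entry")

    table = [None, None, "x", None]
    assert find_first_valid(table, 7) == 2   # probes 3, 0, 1, then hits 2
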
diff --git a/pypy/rpython/lltypesystem/rdict.py b/pypy/rpython/lltypesystem/rdict.py --- a/pypy/rpython/lltypesystem/rdict.py +++ b/pypy/rpython/lltypesystem/rdict.py @@ -850,29 +850,29 @@ i = ll_dict_lookup(d, key, d.keyhash(key)) return not i & HIGHEST_BIT -POPITEMINDEX = lltype.Struct('PopItemIndex', ('nextindex', lltype.Signed)) +POPITEMINDEX = lltype.Struct('PopItemIndex', ('nextindex', lltype.Unsigned)) global_popitem_index = lltype.malloc(POPITEMINDEX, zero=True, immortal=True) def _ll_getnextitem(dic): entries = dic.entries ENTRY = lltype.typeOf(entries).TO.OF - dmask = len(entries) - 1 + dmask = r_uint(len(entries) - 1) if hasattr(ENTRY, 'f_hash'): if entries.valid(0): return 0 - base = entries[0].f_hash + base = r_uint(entries[0].f_hash) else: base = global_popitem_index.nextindex - counter = 0 + counter = r_uint(0) while counter <= dmask: - i = (base + counter) & dmask - counter += 1 + i = intmask((base + counter) & dmask) + counter += r_uint(1) if entries.valid(i): break else: raise KeyError if hasattr(ENTRY, 'f_hash'): - entries[0].f_hash = base + counter + entries[0].f_hash = intmask(base + counter) else: global_popitem_index.nextindex = base + counter return i From noreply at buildbot.pypy.org Mon Feb 20 14:18:53 2012 From: noreply at buildbot.pypy.org (arigo) Date: Mon, 20 Feb 2012 14:18:53 +0100 (CET) Subject: [pypy-commit] pypy stm-gc: Only have EXCDATA be a thread-local if stm is enabled. Message-ID: <20120220131853.3034D820D1@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-gc Changeset: r52675:983cc177471a Date: 2012-02-20 14:10 +0100 http://bitbucket.org/pypy/pypy/changeset/983cc177471a/ Log: Only have EXCDATA be a thread-local if stm is enabled. diff --git a/pypy/translator/c/node.py b/pypy/translator/c/node.py --- a/pypy/translator/c/node.py +++ b/pypy/translator/c/node.py @@ -520,7 +520,9 @@ def is_thread_local(self): T = self.getTYPE() - return hasattr(T, "_hints") and T._hints.get('thread_local') + return hasattr(T, "_hints") and (T._hints.get('thread_local') or ( + T._hints.get('stm_thread_local') and + self.db.translator.config.translation.stm)) def compilation_info(self): return getattr(self.obj, self.eci_name, None) diff --git a/pypy/translator/exceptiontransform.py b/pypy/translator/exceptiontransform.py --- a/pypy/translator/exceptiontransform.py +++ b/pypy/translator/exceptiontransform.py @@ -472,7 +472,7 @@ EXCDATA = lltype.Struct('ExcData', ('exc_type', self.lltype_of_exception_type), ('exc_value', self.lltype_of_exception_value), - hints={'thread_local': True}) + hints={'stm_thread_local': True}) self.EXCDATA = EXCDATA exc_data = lltype.malloc(EXCDATA, immortal=True) From noreply at buildbot.pypy.org Mon Feb 20 14:18:54 2012 From: noreply at buildbot.pypy.org (arigo) Date: Mon, 20 Feb 2012 14:18:54 +0100 (CET) Subject: [pypy-commit] pypy stm-gc: Fix. Message-ID: <20120220131854.6118C820D1@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-gc Changeset: r52676:b58347084ebb Date: 2012-02-20 14:17 +0100 http://bitbucket.org/pypy/pypy/changeset/b58347084ebb/ Log: Fix. diff --git a/pypy/rpython/memory/gctypelayout.py b/pypy/rpython/memory/gctypelayout.py --- a/pypy/rpython/memory/gctypelayout.py +++ b/pypy/rpython/memory/gctypelayout.py @@ -428,7 +428,7 @@ appendto = self.addresses_of_static_ptrs else: return - elif hasattr(TYPE, "_hints") and TYPE._hints.get('thread_local'): + elif hasattr(TYPE, "_hints") and TYPE._hints.get('stm_thread_local'): # The exception data's value object is skipped: it's a thread- # local data structure. 
We assume that objects are stored # only temporarily there, so it is always cleared at the point From noreply at buildbot.pypy.org Mon Feb 20 14:18:55 2012 From: noreply at buildbot.pypy.org (arigo) Date: Mon, 20 Feb 2012 14:18:55 +0100 (CET) Subject: [pypy-commit] pypy stm-gc: Prevent popitem() from generating spurious conflicts. Message-ID: <20120220131855.958C7820D1@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-gc Changeset: r52677:b34d343bd69b Date: 2012-02-20 14:18 +0100 http://bitbucket.org/pypy/pypy/changeset/b34d343bd69b/ Log: Prevent popitem() from generating spurious conflicts. diff --git a/pypy/rpython/lltypesystem/rdict.py b/pypy/rpython/lltypesystem/rdict.py --- a/pypy/rpython/lltypesystem/rdict.py +++ b/pypy/rpython/lltypesystem/rdict.py @@ -850,7 +850,8 @@ i = ll_dict_lookup(d, key, d.keyhash(key)) return not i & HIGHEST_BIT -POPITEMINDEX = lltype.Struct('PopItemIndex', ('nextindex', lltype.Unsigned)) +POPITEMINDEX = lltype.Struct('PopItemIndex', ('nextindex', lltype.Unsigned), + hints={'stm_dont_track_raw_accesses': True}) global_popitem_index = lltype.malloc(POPITEMINDEX, zero=True, immortal=True) def _ll_getnextitem(dic): diff --git a/pypy/translator/stm/transform.py b/pypy/translator/stm/transform.py --- a/pypy/translator/stm/transform.py +++ b/pypy/translator/stm/transform.py @@ -91,8 +91,10 @@ if op.result.concretetype is lltype.Void: newoperations.append(op) return - if op.args[0].concretetype.TO._gckind == 'raw': - if not is_immutable(op): + S = op.args[0].concretetype.TO + if S._gckind == 'raw': + if not (is_immutable(op) or + S._hints.get('stm_dont_track_raw_accesses', False)): turn_inevitable(newoperations, op.opname + '-raw') newoperations.append(op) return @@ -113,8 +115,10 @@ if op.args[-1].concretetype is lltype.Void: newoperations.append(op) return - if op.args[0].concretetype.TO._gckind == 'raw': - if not is_immutable(op): + S = op.args[0].concretetype.TO + if S._gckind == 'raw': + if not (is_immutable(op) or + S._hints.get('stm_dont_track_raw_accesses', False)): turn_inevitable(newoperations, op.opname + '-raw') newoperations.append(op) return From noreply at buildbot.pypy.org Mon Feb 20 15:32:42 2012 From: noreply at buildbot.pypy.org (arigo) Date: Mon, 20 Feb 2012 15:32:42 +0100 (CET) Subject: [pypy-commit] pypy stm-gc: Remove the global lock during the commit_transaction() at the GC level. Message-ID: <20120220143242.72661820D1@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-gc Changeset: r52678:fd93fb6a06eb Date: 2012-02-20 15:32 +0100 http://bitbucket.org/pypy/pypy/changeset/fd93fb6a06eb/ Log: Remove the global lock during the commit_transaction() at the GC level. 
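The replacement for the global mutex in the change below is the usual thread-local allocation buffer: each thread carves global objects out of a private page with a simple bump pointer (global_free / global_stop) and only takes the shared lock when it needs a fresh page of tls_page_size bytes. A self-contained toy version of that pattern, not the GC code itself:

    import threading

    class ToyGlobalArea(object):
        # Central arena: touched only under the lock, one page at a time.
        def __init__(self):
            self.lock = threading.Lock()
            self.free = 0

        def grab_page(self, size):
            with self.lock:
                start = self.free
                self.free += size
                return start, start + size

    class ToyThreadLocalAllocator(object):
        # Per-thread bump pointer: most allocations never take the lock.
        def __init__(self, area, page_size=64 * 1024):
            self.area, self.page_size = area, page_size
            self.free = self.stop = 0

        def malloc(self, size):
            if self.stop - self.free < size:
                self.free, self.stop = self.area.grab_page(
                    max(size, self.page_size))
            result = self.free
            self.free += size
            return result

    area = ToyGlobalArea()
    alloc = ToyThreadLocalAllocator(area, page_size=128)
    addrs = [alloc.malloc(16) for _ in range(8)]   # one lock acquisition
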
diff --git a/pypy/rpython/memory/gc/stmgc.py b/pypy/rpython/memory/gc/stmgc.py --- a/pypy/rpython/memory/gc/stmgc.py +++ b/pypy/rpython/memory/gc/stmgc.py @@ -50,15 +50,18 @@ ('malloc_flags', lltype.Signed), ('pending_list', llmemory.Address), ('surviving_weakrefs', llmemory.Address), + ('global_free', llmemory.Address), + ('global_stop', llmemory.Address), ) TRANSLATION_PARAMS = { 'stm_operations': 'use_real_one', 'max_nursery_size': 400*1024*1024, # XXX 400MB + 'tls_page_size': 64*1024, # 64KB } def __init__(self, config, stm_operations='use_emulator', - max_nursery_size=1024, + max_nursery_size=1024, tls_page_size=64, **kwds): GCBase.__init__(self, config, **kwds) # @@ -72,6 +75,7 @@ self.stm_operations = stm_operations self.collector = Collector(self) self.max_nursery_size = max_nursery_size + self.tls_page_size = tls_page_size # def _get_size(obj): # indirection to hide 'self' return self.get_size(obj) @@ -101,7 +105,7 @@ def setup_thread(self, in_main_thread): """Setup a thread. Allocates the thread-local data structures. Must be called only once per OS-level thread.""" - tls = lltype.malloc(self.GCTLS, flavor='raw') + tls = lltype.malloc(self.GCTLS, zero=True, flavor='raw') self.stm_operations.set_tls(llmemory.cast_ptr_to_adr(tls), int(in_main_thread)) tls.nursery_start = self._alloc_nursery() @@ -217,6 +221,34 @@ return obj + def _malloc_global_raw(self, tls, size): + # For collection: allocates enough space for a global object from + # the main_tls. The argument 'tls' is the current (local) GCTLS. + # We try to do it by reserving "pages" of memory from the global + # area at once, and subdividing here. + size_gc_header = self.gcheaderbuilder.size_gc_header + totalsize = size_gc_header + size + freespace = tls.global_stop - tls.global_free + if freespace < llmemory.raw_malloc_usage(totalsize): + self._malloc_global_more(tls, llmemory.raw_malloc_usage(totalsize)) + result = tls.global_free + tls.global_free = result + totalsize + llarena.arena_reserve(result, totalsize) + obj = result + size_gc_header + return obj + + @dont_inline + def _malloc_global_more(self, tls, totalsize): + if totalsize < self.tls_page_size: + totalsize = self.tls_page_size + main_tls = self.main_thread_tls + self.acquire(self.mutex_lock) + result = self._allocate_bump_pointer(main_tls, totalsize) + self.release(self.mutex_lock) + tls.global_free = result + tls.global_stop = result + totalsize + + def collect(self, gen=0): raise NotImplementedError @@ -376,11 +408,9 @@ # We need to allocate a global object here. We only allocate # it for now; it is left completely uninitialized. size = self.get_size(obj) - self.acquire(self.mutex_lock) - main_tls = self.main_thread_tls - globalobj = self._malloc_local_raw(main_tls, size) + tls = self.collector.get_tls() + globalobj = self._malloc_global_raw(tls, size) self.header(globalobj).tid = GCFLAG_GLOBAL - self.release(self.mutex_lock) # # Update the header of the local 'obj' hdr.tid |= GCFLAG_HAS_SHADOW @@ -463,9 +493,7 @@ # # Do a mark-and-move minor collection out of the tls' nursery # into the main thread's global area (which is right now also - # called a nursery). To simplify things, we use a global lock - # around the whole mark-and-move. - self.gc.acquire(self.gc.mutex_lock) + # called a nursery). 
debug_print("local arena:", tls.nursery_free - tls.nursery_start, "bytes") # @@ -481,8 +509,6 @@ # local objects self.collect_from_pending_list(tls) # - self.gc.release(self.gc.mutex_lock) - # # Fix up the weakrefs that used to point to local objects self.fixup_weakrefs(tls) # @@ -572,8 +598,7 @@ # First visit to a local-only 'obj': allocate a corresponding # global object size = self.gc.get_size(obj) - main_tls = self.gc.main_thread_tls - globalobj = self.gc._malloc_local_raw(main_tls, size) + globalobj = self.gc._malloc_global_raw(tls, size) need_to_copy = True # else: diff --git a/pypy/rpython/memory/gc/test/test_stmgc.py b/pypy/rpython/memory/gc/test/test_stmgc.py --- a/pypy/rpython/memory/gc/test/test_stmgc.py +++ b/pypy/rpython/memory/gc/test/test_stmgc.py @@ -340,7 +340,9 @@ assert s.b == 12345 # not updated by the GC code assert t.b == 67890 # still valid - def test_commit_transaction_with_one_reference(self): + def _commit_transaction_with_one_reference(self, tls_page_size): + self.gc.tls_page_size = tls_page_size + # sr, sr_adr = self.malloc(SR) assert sr.s1 == lltype.nullptr(S) assert sr.sr2 == lltype.nullptr(SR) @@ -359,9 +361,23 @@ # self.gc.collector.commit_transaction() # - assert main_tls.nursery_free - old_value == self.gcsize(S) + consumed = main_tls.nursery_free - old_value + expected = self.gcsize(S) # round this value up to tls_page_size + if expected < tls_page_size: expected = tls_page_size + assert consumed == expected + + def test_commit_transaction_with_one_reference_1(self): + self._commit_transaction_with_one_reference(1) + + def test_commit_transaction_with_one_reference_N1(self): + N1 = self.gcsize(S)-1 + self._commit_transaction_with_one_reference(N1) + + def test_commit_transaction_with_one_reference_128(self): + self._commit_transaction_with_one_reference(128) def test_commit_transaction_with_graph(self): + self.gc.tls_page_size = 1 sr1, sr1_adr = self.malloc(SR) sr2, sr2_adr = self.malloc(SR) self.select_thread(1) From noreply at buildbot.pypy.org Mon Feb 20 17:05:25 2012 From: noreply at buildbot.pypy.org (bivab) Date: Mon, 20 Feb 2012 17:05:25 +0100 (CET) Subject: [pypy-commit] pypy arm-backend-2: typo Message-ID: <20120220160525.E1778820D1@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: arm-backend-2 Changeset: r52679:b05fa1f14382 Date: 2012-02-20 16:04 +0000 http://bitbucket.org/pypy/pypy/changeset/b05fa1f14382/ Log: typo diff --git a/pypy/jit/backend/arm/regalloc.py b/pypy/jit/backend/arm/regalloc.py --- a/pypy/jit/backend/arm/regalloc.py +++ b/pypy/jit/backend/arm/regalloc.py @@ -1008,7 +1008,7 @@ prepare_op_debug_merge_point = void prepare_op_jit_debug = void - prepare_keepalive = void + prepare_op_keepalive = void def prepare_op_cond_call_gc_wb(self, op, fcond): assert op.result is None From noreply at buildbot.pypy.org Mon Feb 20 18:19:58 2012 From: noreply at buildbot.pypy.org (arigo) Date: Mon, 20 Feb 2012 18:19:58 +0100 (CET) Subject: [pypy-commit] pypy stm-gc: A quick try to see the cost associated with locking this mutex: Message-ID: <20120220171958.EAD61820D1@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-gc Changeset: r52680:b104b62668bd Date: 2012-02-20 18:19 +0100 http://bitbucket.org/pypy/pypy/changeset/b104b62668bd/ Log: A quick try to see the cost associated with locking this mutex: add a fast-path if pending.run() adds exactly one new transaction. 
diff --git a/pypy/module/transaction/fifo.py b/pypy/module/transaction/fifo.py --- a/pypy/module/transaction/fifo.py +++ b/pypy/module/transaction/fifo.py @@ -16,6 +16,9 @@ assert (self.first is None) == (self.last is None) return self.first is None + def is_of_length_1(self): + return self.first is not None and self.first is self.last + def popleft(self): item = self.first self.first = item.next diff --git a/pypy/module/transaction/interp_transaction.py b/pypy/module/transaction/interp_transaction.py --- a/pypy/module/transaction/interp_transaction.py +++ b/pypy/module/transaction/interp_transaction.py @@ -247,7 +247,15 @@ if state.pending.is_empty(): state.lock_no_tasks_pending() state.unlock() - pending.run() + # + while True: + pending.run() + # for now, always break out of this loop, unless + # 'my_transactions_pending' contains precisely one item + if not my_transactions_pending.is_of_length_1(): + break + pending = my_transactions_pending.popleft() + # state.lock() _add_list(my_transactions_pending) # From noreply at buildbot.pypy.org Mon Feb 20 20:10:58 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Mon, 20 Feb 2012 20:10:58 +0100 (CET) Subject: [pypy-commit] pypy py3k: restore OperationError.clear() from 35c013f9b1a5, it's called from executioncontext.py Message-ID: <20120220191058.62C37820D1@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52681:26cabdd62e5b Date: 2012-02-20 20:10 +0100 http://bitbucket.org/pypy/pypy/changeset/26cabdd62e5b/ Log: restore OperationError.clear() from 35c013f9b1a5, it's called from executioncontext.py diff --git a/pypy/interpreter/error.py b/pypy/interpreter/error.py --- a/pypy/interpreter/error.py +++ b/pypy/interpreter/error.py @@ -35,6 +35,13 @@ if not we_are_translated(): self.debug_excs = [] + def clear(self, space): + self.w_type = space.w_None + self._w_value = space.w_None + self._application_traceback = None + if not we_are_translated(): + del self.debug_excs[:] + def match(self, space, w_check_class): "Check if this application-level exception matches 'w_check_class'." return space.exception_match(self.w_type, w_check_class) From noreply at buildbot.pypy.org Mon Feb 20 20:35:03 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Mon, 20 Feb 2012 20:35:03 +0100 (CET) Subject: [pypy-commit] pypy py3k: fix the syntax of these tests Message-ID: <20120220193503.EB1F1820D1@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52682:c154f7dc6ef0 Date: 2012-02-20 20:14 +0100 http://bitbucket.org/pypy/pypy/changeset/c154f7dc6ef0/ Log: fix the syntax of these tests diff --git a/pypy/interpreter/test/test_exceptcomp.py b/pypy/interpreter/test/test_exceptcomp.py --- a/pypy/interpreter/test/test_exceptcomp.py +++ b/pypy/interpreter/test/test_exceptcomp.py @@ -7,7 +7,7 @@ def test_exception(self): try: - raise TypeError, "nothing" + raise TypeError("nothing") except TypeError: pass except: @@ -15,7 +15,7 @@ def test_exceptionfail(self): try: - raise TypeError, "nothing" + raise TypeError("nothing") except KeyError: self.fail("Different exceptions match.") except TypeError: @@ -47,7 +47,7 @@ class UserExcept(Exception): pass try: - raise UserExcept, "nothing" + raise UserExcept("nothing") except UserExcept: pass except: From noreply at buildbot.pypy.org Mon Feb 20 20:35:05 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Mon, 20 Feb 2012 20:35:05 +0100 (CET) Subject: [pypy-commit] pypy py3k: we no longer have unbound methods in python2, so type(A.meth) returns the type of a function. 
Adapt to bound methods instead Message-ID: <20120220193505.7A313820D1@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52683:83cf494c988c Date: 2012-02-20 20:33 +0100 http://bitbucket.org/pypy/pypy/changeset/83cf494c988c/ Log: we no longer have unbound methods in python2, so type(A.meth) returns the type of a function. Adapt to bound methods instead diff --git a/pypy/interpreter/test/test_executioncontext.py b/pypy/interpreter/test/test_executioncontext.py --- a/pypy/interpreter/test/test_executioncontext.py +++ b/pypy/interpreter/test/test_executioncontext.py @@ -108,9 +108,10 @@ space.getexecutioncontext().setllprofile(None, None) assert l == ['call', 'return', 'call', 'c_call', 'c_return', 'return'] if isinstance(seen[0], Method): + w_class = space.type(seen[0].w_instance) found = 'method %s of %s' % ( seen[0].w_function.name, - seen[0].w_class.getname(space)) + w_class.getname(space)) else: assert isinstance(seen[0], Function) found = 'builtin %s' % seen[0].name @@ -196,8 +197,8 @@ self.value = value def meth(self): pass - MethodType = type(A.meth) - strangemeth = MethodType(A, 42, int) + MethodType = type(A(0).meth) + strangemeth = MethodType(A, 42) l = [] def profile(frame, event, arg): l.append(event) From noreply at buildbot.pypy.org Mon Feb 20 20:43:34 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Mon, 20 Feb 2012 20:43:34 +0100 (CET) Subject: [pypy-commit] pypy py3k: fix syntax Message-ID: <20120220194334.1EF0E820D1@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52684:414902b2bf37 Date: 2012-02-20 20:37 +0100 http://bitbucket.org/pypy/pypy/changeset/414902b2bf37/ Log: fix syntax diff --git a/pypy/interpreter/test/test_function.py b/pypy/interpreter/test/test_function.py --- a/pypy/interpreter/test/test_function.py +++ b/pypy/interpreter/test/test_function.py @@ -281,14 +281,14 @@ def test_call_error_message(self): try: len() - except TypeError, e: + except TypeError as e: assert "len() takes exactly 1 argument (0 given)" in e.message else: assert 0, "did not raise" try: len(1, 2) - except TypeError, e: + except TypeError as e: assert "len() takes exactly 1 argument (2 given)" in e.message else: assert 0, "did not raise" From noreply at buildbot.pypy.org Mon Feb 20 20:43:35 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Mon, 20 Feb 2012 20:43:35 +0100 (CET) Subject: [pypy-commit] pypy py3k: fix syntax Message-ID: <20120220194335.55107820D1@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52685:df10a2837781 Date: 2012-02-20 20:40 +0100 http://bitbucket.org/pypy/pypy/changeset/df10a2837781/ Log: fix syntax diff --git a/pypy/interpreter/test/test_gateway.py b/pypy/interpreter/test/test_gateway.py --- a/pypy/interpreter/test/test_gateway.py +++ b/pypy/interpreter/test/test_gateway.py @@ -690,11 +690,11 @@ class A(object): m = g # not a builtin function, so works as method d = {'A': A} - exec \"\"\" + exec(\"\"\" # own compiler a = A() y = a.m(33) -\"\"\" in d +\"\"\", d) return d['y'] == ('g', d['a'], 33) """) assert space.is_true(w_res) From noreply at buildbot.pypy.org Mon Feb 20 20:43:36 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Mon, 20 Feb 2012 20:43:36 +0100 (CET) Subject: [pypy-commit] pypy py3k: fix syntax Message-ID: <20120220194336.89EA6820D1@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52686:be8572d82a2f Date: 2012-02-20 20:42 +0100 http://bitbucket.org/pypy/pypy/changeset/be8572d82a2f/ Log: fix syntax diff --git 
a/pypy/interpreter/test/test_main.py b/pypy/interpreter/test/test_main.py --- a/pypy/interpreter/test/test_main.py +++ b/pypy/interpreter/test/test_main.py @@ -9,7 +9,7 @@ testcode = """\ def main(): aStr = 'hello world' - print len(aStr) + print(len(aStr)) main() """ @@ -20,7 +20,7 @@ import sys if __name__ == '__main__': aStr = sys.argv[1] - print len(aStr) + print(len(aStr)) """ testresultoutput = '11\n' From noreply at buildbot.pypy.org Mon Feb 20 23:29:33 2012 From: noreply at buildbot.pypy.org (fijal) Date: Mon, 20 Feb 2012 23:29:33 +0100 (CET) Subject: [pypy-commit] extradoc extradoc: work on slides Message-ID: <20120220222933.ECEF3820D1@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: extradoc Changeset: r4091:47da3b7e55c1 Date: 2012-02-19 20:08 +0200 http://bitbucket.org/pypy/extradoc/changeset/47da3b7e55c1/ Log: work on slides diff --git a/talk/sea2012/talk.rst b/talk/sea2012/talk.rst --- a/talk/sea2012/talk.rst +++ b/talk/sea2012/talk.rst @@ -4,13 +4,13 @@ What is this talk about? ------------------------ -* what is pypy and why +* What is PyPy and why? -* numeric landscape in python +* Numeric landscape in python -* what we achieved in pypy +* What we achieved in PyPy? -* where we're going +* Where we're going? What is PyPy? ------------- @@ -26,7 +26,7 @@ PyPy status right now --------------------- -* An efficient just in time compiler for the Python language +* An **efficient just in time compiler** for the Python language * Relatively "good" on numerics (compared to other dynamic languages) @@ -37,7 +37,7 @@ Why would you care? ------------------- -* "If I write this stuff in C it'll be faster anyway" +* "If I write this stuff in C/fortran/assembler it'll be faster anyway" * maybe, but ... @@ -50,7 +50,33 @@ * For novel algorithms, being clearly expressed in code makes them easier to evaluate (Python is cleaner than C often) -* Example - memcached server (?) XXX think about it +Why would you care even more +---------------------------- + +* Growing community + +* Everything is for free with reasonable licensing + +* There are many smart people out there addressing hard problems + +Example why would you care +-------------------------- + +* You spend a year writing optimized algorithms for a GPU + +* Next year a new generation of GPUs come along + +* Your algorithms are no longer optimize + +|pause| + +* Alternative - express your algorithms + +* Leave low-level details for people who have nothing better to do + +|pause| + +* Like me (I don't know enough physics to do the other part) Numerics in Python ------------------ @@ -113,7 +139,7 @@ Examples -------- -XXX say that the variables are e.g. 
1-dim numpy arrays +* ``a``, ``b``, ``c`` are single dimensional arrays * ``a + a`` would generate different code than ``a + b`` @@ -130,19 +156,55 @@ * Vectorization in progress -Status benchmarks ------------------ +Status benchmarks - trivial stuff +--------------------------------- + +XXX + +Status benchmarks - slightly more complex +----------------------------------------- * laplace solution * solutions: + XXX laplace numbers +---+ | | +---+ +Progress plan +------------- + +* Express operations in high-level languages + +* Let us deal with low level details + +|pause| + +* However, leave knobs and buttons for advanced users + +* Don't get penalized too much for not using them + +Few words about the future +-------------------------- + +* Predictions are hard + +|pause| + +* Especially when it comes to future + +* Take this with a grain of salt + This is just the beginning... ----------------------------- * PyPy is an easy platform to experiment with +* We did not spend a whole lot of time dealing with the low-level optimizations + +Extra - SSE preliminary results +------------------------------- + +XXX From noreply at buildbot.pypy.org Mon Feb 20 23:29:35 2012 From: noreply at buildbot.pypy.org (fijal) Date: Mon, 20 Feb 2012 23:29:35 +0100 (CET) Subject: [pypy-commit] extradoc extradoc: work on slides Message-ID: <20120220222935.B170C820D1@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: extradoc Changeset: r4092:dc5e0b3c78dc Date: 2012-02-20 23:28 +0100 http://bitbucket.org/pypy/extradoc/changeset/dc5e0b3c78dc/ Log: work on slides diff --git a/talk/sea2012/talk.rst b/talk/sea2012/talk.rst --- a/talk/sea2012/talk.rst +++ b/talk/sea2012/talk.rst @@ -1,3 +1,5 @@ +.. include:: beamerdefs.txt + Fast numeric in Python - NumPy and PyPy ======================================= @@ -19,7 +21,7 @@ * A framework for writing efficient dynamic language implementations -* An open source project with a lot of volunteer effort +* An open source project with a lot of volunteer effort, released under the BSD license * I'll talk today about the first part (mostly) @@ -28,16 +30,16 @@ * An **efficient just in time compiler** for the Python language -* Relatively "good" on numerics (compared to other dynamic languages) +* Relatively "good' on numerics (compared to other dynamic languages) * Example - real time video processing -* Some comparisons +* XXX some benchmarks Why would you care? ------------------- -* "If I write this stuff in C/fortran/assembler it'll be faster anyway" +* *If I write this stuff in C/fortran/assembler it'll be faster anyway* * maybe, but ... @@ -46,10 +48,14 @@ * Experimentation is important -* Implementing something faster, in human time, leaves more time for optimizations and improvements +* Implementing something faster, in **human time**, leaves more time for optimizations and improvements * For novel algorithms, being clearly expressed in code makes them easier to evaluate (Python is cleaner than C often) +|pause| + +* Sometimes makes it **possible** in the first place + Why would you care even more ---------------------------- @@ -66,17 +72,17 @@ * Next year a new generation of GPUs come along -* Your algorithms are no longer optimize +* Your algorithms are no longer optimized |pause| -* Alternative - express your algorithms +* Alternative - **express** your algorithms * Leave low-level details for people who have nothing better to do |pause| -* Like me (I don't know enough physics to do the other part) +* .. 
like me (I don't know enough physics to do the other part) Numerics in Python ------------------ @@ -119,19 +125,23 @@ * Stuff is reasonably fast, but... -* Only if you don't actually write much Python +|pause| + +* Only if you don't actually write much **Python** * Array operations are fine as long as they're vectorized * Not everything is expressable that way -* Numpy allocates intermediates for each operation, trashing caches +* Numpy allocates intermediates for each operation, suboptimal Our approach ------------ * Build a tree of operations +XXX a tree picture + * Compile assembler specialized for aliasing and operations * Execute the specialized assembler @@ -141,9 +151,14 @@ * ``a``, ``b``, ``c`` are single dimensional arrays -* ``a + a`` would generate different code than ``a + b`` +* ``a+a`` would generate different code than ``a+b`` -* ``a + b * c`` is as fast as a loop +* ``a+b*c`` is as fast as a loop + +Performance comparison +---------------------- + +XXX Status ------ @@ -204,7 +219,26 @@ * We did not spend a whole lot of time dealing with the low-level optimizations +* Automatic vectorization over multiple threads + +* SSE, GPU, dynamic offloading + +* Optimizations based on machine cache size + +* We're running a fundraiser, make your employer donate money + Extra - SSE preliminary results ------------------------------- XXX + +Q&A +--- + +* http://pypy.org/ + +* http://buildbot.pypy.org/numpy-status/latest.html + +* http://morepypy.blogspot.com/ + +* Any questions? From noreply at buildbot.pypy.org Mon Feb 20 23:30:04 2012 From: noreply at buildbot.pypy.org (fijal) Date: Mon, 20 Feb 2012 23:30:04 +0100 (CET) Subject: [pypy-commit] pypy numpy-record-dtypes: fix test_zjit Message-ID: <20120220223004.96C21820D1@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-record-dtypes Changeset: r52687:ace56d6a6494 Date: 2012-02-19 20:12 +0200 http://bitbucket.org/pypy/pypy/changeset/ace56d6a6494/ Log: fix test_zjit diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -47,7 +47,7 @@ ) flat_set_driver = jit.JitDriver( greens=['shapelen', 'base'], - reds=['step', 'ai', 'ri', 'lngth', 'arr', 'basei'], + reds=['step', 'lngth', 'ri', 'arr', 'basei'], name='numpy_flatset', ) @@ -1386,7 +1386,6 @@ start, stop, step, lngth = space.decode_index4(w_idx, base.get_size()) arr = convert_to_array(space, w_value) ri = arr.create_iter() - ai = 0 basei = ViewIterator(base.start, base.strides, base.backstrides, base.shape) shapelen = len(base.shape) @@ -1397,7 +1396,6 @@ base=base, step=step, arr=arr, - ai=ai, lngth=lngth, ri=ri) v = arr.getitem(ri.offset).convert_to(base.dtype) From noreply at buildbot.pypy.org Mon Feb 20 23:30:06 2012 From: noreply at buildbot.pypy.org (fijal) Date: Mon, 20 Feb 2012 23:30:06 +0100 (CET) Subject: [pypy-commit] pypy backend-vector-ops: dance to make interiorfields work Message-ID: <20120220223006.AD62E820D1@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: backend-vector-ops Changeset: r52688:f002fbc9d8ec Date: 2012-02-19 23:42 +0200 http://bitbucket.org/pypy/pypy/changeset/f002fbc9d8ec/ Log: dance to make interiorfields work diff --git a/pypy/jit/backend/llgraph/runner.py b/pypy/jit/backend/llgraph/runner.py --- a/pypy/jit/backend/llgraph/runner.py +++ b/pypy/jit/backend/llgraph/runner.py @@ -23,7 +23,8 @@ class Descr(history.AbstractDescr): def __init__(self, ofs, typeinfo, extrainfo=None, 
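Before the backend diffs below, it is worth unpacking the approach that the talk.rst diffs above (r4091/r4092) only state in bullet form: array expressions such as ``a + b * c`` are not evaluated eagerly; a tree of operations is recorded and a single loop, specialized for the tree shape and aliasing, is compiled for it, so no intermediate arrays are allocated. The following is a minimal plain-Python sketch of that lazy-evaluation idea only; the names (``LazyArray``, ``BinOp``, ``force``) are invented for this illustration and are not the classes used in the numpypy sources.

    # Sketch of "build a tree of operations, then run one loop" from the
    # slides above.  LazyArray/BinOp/force are hypothetical names.
    import operator

    class Node(object):
        def __add__(self, other):
            return BinOp(operator.add, self, other)
        def __mul__(self, other):
            return BinOp(operator.mul, self, other)

    class LazyArray(Node):
        def __init__(self, data):
            self.data = list(data)
        def eval(self, i):
            return self.data[i]
        def __len__(self):
            return len(self.data)

    class BinOp(Node):
        # A node of the deferred expression tree: nothing is computed
        # until force() walks the whole tree element by element.
        def __init__(self, op, left, right):
            self.op, self.left, self.right = op, left, right
        def eval(self, i):
            return self.op(self.left.eval(i), self.right.eval(i))
        def __len__(self):
            return len(self.left)

    def force(expr):
        # One loop over the whole expression: no temporary array is
        # allocated for b * c inside a + b * c.
        return [expr.eval(i) for i in range(len(expr))]

    a = LazyArray([1.0, 2.0, 3.0])
    b = LazyArray([4.0, 5.0, 6.0])
    c = LazyArray([7.0, 8.0, 9.0])
    assert force(a + b * c) == [29.0, 42.0, 57.0]

A tracing JIT can then compile one specialized loop per tree shape and aliasing pattern, which is why the slides note that ``a+a`` generates different code than ``a+b`` and that ``a+b*c`` runs as fast as a hand-written loop.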
name=None, - arg_types=None, count_fields_if_immut=-1, ffi_flags=0, width=-1): + arg_types=None, count_fields_if_immut=-1, ffi_flags=0, + width=-1, is_array=False, field_size=-1): self.ofs = ofs self.width = width @@ -33,6 +34,11 @@ self.arg_types = arg_types self.count_fields_if_immut = count_fields_if_immut self.ffi_flags = ffi_flags + self.is_array_descr = is_array + self.field_size = field_size + + def get_field_size(self): + return self.field_size def get_arg_types(self): return self.arg_types @@ -43,6 +49,9 @@ def get_extra_info(self): return self.extrainfo + def get_width(self): + return self.width + def sort_key(self): """Returns an integer that can be used as a key when sorting the field descrs of a single structure. The property that this @@ -121,14 +130,16 @@ return False def getdescr(self, ofs, typeinfo='?', extrainfo=None, name=None, - arg_types=None, count_fields_if_immut=-1, ffi_flags=0, width=-1): + arg_types=None, count_fields_if_immut=-1, ffi_flags=0, + width=-1, is_array=False, field_size=-1): key = (ofs, typeinfo, extrainfo, name, arg_types, count_fields_if_immut, ffi_flags, width) try: return self._descrs[key] except KeyError: descr = Descr(ofs, typeinfo, extrainfo, name, arg_types, - count_fields_if_immut, ffi_flags, width) + count_fields_if_immut, ffi_flags, width, is_array, + field_size) self._descrs[key] = descr return descr @@ -339,7 +350,8 @@ width = symbolic.get_size(A) ofs, size = symbolic.get_field_token(S, fieldname) token = history.getkind(getattr(S, fieldname)) - return self.getdescr(ofs, token[0], name=fieldname, width=width) + return self.getdescr(ofs, token[0], name=fieldname, width=width, + field_size=size) def interiorfielddescrof_dynamic(self, offset, width, fieldsize, is_pointer, is_float, is_signed): @@ -351,7 +363,7 @@ else: typeinfo = INT # we abuse the arg_types field to distinguish dynamic and static descrs - return Descr(offset, typeinfo, arg_types='dynamic', name='', width=width) + return Descr(offset, typeinfo, arg_types='dynamic', name='', width=width, field_size=fieldsize) def calldescrof(self, FUNC, ARGS, RESULT, extrainfo): arg_types = [] @@ -397,7 +409,7 @@ token = 's' else: token = '?' 
- return self.getdescr(size, token) + return self.getdescr(size, token, is_array=True) # ---------- the backend-dependent operations ---------- diff --git a/pypy/jit/backend/llsupport/descr.py b/pypy/jit/backend/llsupport/descr.py --- a/pypy/jit/backend/llsupport/descr.py +++ b/pypy/jit/backend/llsupport/descr.py @@ -156,6 +156,7 @@ itemsize = 0 lendescr = None flag = '\x00' + is_array_descr = True def __init__(self, basesize, itemsize, lendescr, flag): self.basesize = basesize @@ -178,6 +179,9 @@ def repr_of_descr(self): return '' % (self.flag, self.itemsize) + def get_item_size(self): + return self.itemsize + def get_array_descr(gccache, ARRAY_OR_STRUCT): cache = gccache._cache_array @@ -226,6 +230,12 @@ def repr_of_descr(self): return '' % self.fielddescr.repr_of_descr() + def get_item_size(self): + return self.fielddescr.field_size + + def get_width(self): + return self.arraydescr.itemsize + def get_interiorfield_descr(gc_ll_descr, ARRAY, name): cache = gc_ll_descr._cache_interiorfield try: diff --git a/pypy/jit/metainterp/executor.py b/pypy/jit/metainterp/executor.py --- a/pypy/jit/metainterp/executor.py +++ b/pypy/jit/metainterp/executor.py @@ -277,7 +277,8 @@ IGNORED = ['FLOAT_VECTOR_ADD', 'FLOAT_VECTOR_SUB', - 'GETARRAYITEM_VECTOR_RAW', + 'GETARRAYITEM_VECTOR_RAW', 'GETINTERIORFIELD_VECTOR_RAW', + 'SETINTERIORFIELD_VECTOR_RAW', 'SETARRAYITEM_VECTOR_RAW', 'ASSERT_ALIGNED'] def _make_execute_list(): diff --git a/pypy/jit/metainterp/history.py b/pypy/jit/metainterp/history.py --- a/pypy/jit/metainterp/history.py +++ b/pypy/jit/metainterp/history.py @@ -138,6 +138,7 @@ class AbstractDescr(AbstractValue): __slots__ = () + is_array_descr = False def repr_of_descr(self): return '%r' % (self,) diff --git a/pypy/jit/metainterp/optimizeopt/test/test_vectorize.py b/pypy/jit/metainterp/optimizeopt/test/test_vectorize.py --- a/pypy/jit/metainterp/optimizeopt/test/test_vectorize.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_vectorize.py @@ -10,6 +10,8 @@ cpu = LLtypeMixin.cpu FUNC = LLtypeMixin.FUNC arraydescr = cpu.arraydescrof(lltype.GcArray(lltype.Signed)) + interiordescr = cpu.interiorfielddescrof_dynamic(0, 1, 8, False, True, + False) def calldescr(cpu, FUNC, oopspecindex, extraeffect=None): if extraeffect == EffectInfo.EF_RANDOM_EFFECTS: @@ -297,7 +299,7 @@ f0 = getarrayitem_raw(p0, i0, descr=arraydescr) f1 = getarrayitem_raw(p1, i1, descr=arraydescr) f2 = float_add(f0, f1) - setarrayitem_raw(p2, i2, f2) + setarrayitem_raw(p2, i2, f2, descr=arraydescr) i3 = cast_float_to_int(f2) finish(p0, p1, p2, i0, i1, i3) """ @@ -306,8 +308,77 @@ f0 = getarrayitem_raw(p0, i0, descr=arraydescr) f1 = getarrayitem_raw(p1, i1, descr=arraydescr) f2 = float_add(f0, f1) - setarrayitem_raw(p2, i2, f2) + setarrayitem_raw(p2, i2, f2, descr=arraydescr) i3 = cast_float_to_int(f2) finish(p0, p1, p2, i0, i1, i3) """ self.optimize_loop(ops, expected) + + def test_getinteriorfield(self): + ops = """ + [p0, p1, p2, i0, i1, i2] + call(0, p0, i0, descr=assert_aligned) + call(0, p1, i1, descr=assert_aligned) + call(0, p1, i2, descr=assert_aligned) + f0 = getinteriorfield_raw(p0, i0, descr=interiordescr) + f1 = getinteriorfield_raw(p1, i1, descr=interiordescr) + f2 = float_add(f0, f1) + setinteriorfield_raw(p2, i2, f2, descr=interiordescr) + i0_1 = int_add(i0, 8) + i1_1 = int_add(8, i1) + i2_1 = int_add(i2, 8) + f0_1 = getinteriorfield_raw(p0, i0_1, descr=interiordescr) + f1_1 = getinteriorfield_raw(p1, i1_1, descr=interiordescr) + f2_1 = float_add(f0_1, f1_1) + setinteriorfield_raw(p2, i2_1, f2_1, 
descr=interiordescr) + finish(p0, p1, p2, i0_1, i1_1, i2_1) + """ + expected = """ + [p0, p1, p2, i0, i1, i2] + i0_1 = int_add(i0, 8) + i1_1 = int_add(8, i1) + i2_1 = int_add(i2, 8) + vec0 = getinteriorfield_vector_raw(p0, i0, descr=interiordescr) + vec1 = getinteriorfield_vector_raw(p1, i1, descr=interiordescr) + vec2 = float_vector_add(vec0, vec1) + setinteriorfield_vector_raw(p2, i2, vec2, descr=interiordescr) + finish(p0, p1, p2, i0_1, i1_1, i2_1) + """ + self.optimize_loop(ops, expected) + + def test_getinteriorfield_wrong(self): + ops = """ + [p0, p1, p2, i0, i1, i2] + call(0, p0, i0, descr=assert_aligned) + call(0, p1, i1, descr=assert_aligned) + call(0, p1, i2, descr=assert_aligned) + f0 = getinteriorfield_raw(p0, i0, descr=interiordescr) + f1 = getinteriorfield_raw(p1, i1, descr=interiordescr) + f2 = float_add(f0, f1) + setinteriorfield_raw(p2, i2, f2, descr=interiordescr) + i0_1 = int_add(i0, 1) + i1_1 = int_add(1, i1) + i2_1 = int_add(i2, 1) + f0_1 = getinteriorfield_raw(p0, i0_1, descr=interiordescr) + f1_1 = getinteriorfield_raw(p1, i1_1, descr=interiordescr) + f2_1 = float_add(f0_1, f1_1) + setinteriorfield_raw(p2, i2_1, f2_1, descr=interiordescr) + finish(p0, p1, p2, i0_1, i1_1, i2_1) + """ + expected = """ + [p0, p1, p2, i0, i1, i2] + i0_1 = int_add(i0, 1) + i1_1 = int_add(1, i1) + i2_1 = int_add(i2, 1) + f0 = getinteriorfield_raw(p0, i0, descr=interiordescr) + f1 = getinteriorfield_raw(p1, i1, descr=interiordescr) + f2 = float_add(f0, f1) + setinteriorfield_raw(p2, i2, f2, descr=interiordescr) + f0_1 = getinteriorfield_raw(p0, i0_1, descr=interiordescr) + f1_1 = getinteriorfield_raw(p1, i1_1, descr=interiordescr) + f2_1 = float_add(f0_1, f1_1) + setinteriorfield_raw(p2, i2_1, f2_1, descr=interiordescr) + finish(p0, p1, p2, i0_1, i1_1, i2_1) + """ + self.optimize_loop(ops, expected) + diff --git a/pypy/jit/metainterp/optimizeopt/vectorize.py b/pypy/jit/metainterp/optimizeopt/vectorize.py --- a/pypy/jit/metainterp/optimizeopt/vectorize.py +++ b/pypy/jit/metainterp/optimizeopt/vectorize.py @@ -7,7 +7,13 @@ VECTOR_SIZE = 2 VEC_MAP = {rop.FLOAT_ADD: rop.FLOAT_VECTOR_ADD, - rop.FLOAT_SUB: rop.FLOAT_VECTOR_SUB} + rop.FLOAT_SUB: rop.FLOAT_VECTOR_SUB, + rop.GETINTERIORFIELD_RAW: rop.GETINTERIORFIELD_VECTOR_RAW, + rop.SETINTERIORFIELD_RAW: rop.SETINTERIORFIELD_VECTOR_RAW, + rop.GETARRAYITEM_RAW: rop.GETARRAYITEM_VECTOR_RAW, + rop.SETARRAYITEM_RAW: rop.SETARRAYITEM_VECTOR_RAW, + } + class BaseTrack(object): pass @@ -25,7 +31,7 @@ def emit(self, optimizer): box = BoxVector() - op = ResOperation(rop.GETARRAYITEM_VECTOR_RAW, [self.arr.box, + op = ResOperation(VEC_MAP[self.op.getopnum()], [self.arr.box, self.index.val.box], box, descr=self.op.getdescr()) optimizer.emit_operation(op) @@ -41,11 +47,13 @@ def match(self, other, i): if not isinstance(other, Write): return False + descr = self.op.getdescr() + i = i * descr.get_field_size() / descr.get_width() return self.v.match(other.v, i) def emit(self, optimizer): arg = self.v.emit(optimizer) - op = ResOperation(rop.SETARRAYITEM_VECTOR_RAW, [self.arr.box, + op = ResOperation(VEC_MAP[self.op.getopnum()], [self.arr.box, self.index.box, arg], None, descr=self.op.getdescr()) optimizer.emit_operation(op) @@ -78,8 +86,17 @@ self.val = val self.index = index - def advance(self): - return TrackIndex(self.val, self.index + 1) + def advance(self, v): + return TrackIndex(self.val, self.index + v) + + def match_descr(self, descr): + if self.index == 0: + return True + if descr.is_array_descr: + return self.index == 1 + if descr.get_width() != 1: + 
return False # XXX this can probably be supported + return self.index == descr.get_field_size() class OptVectorize(Optimization): def __init__(self): @@ -112,6 +129,9 @@ track = self.tracked_indexes.get(index, None) if track is None: self.emit_operation(op) + elif not track.match_descr(op.getdescr()): + self.reset() + self.emit_operation(op) else: self.ops_so_far.append(op) self.track[self.getvalue(op.result)] = Read(arr, track, op) @@ -123,20 +143,22 @@ one = self.getvalue(op.getarg(0)) two = self.getvalue(op.getarg(1)) self.emit_operation(op) - if (one.is_constant() and one.box.getint() == 1 and - two in self.tracked_indexes): + if (one.is_constant() and two in self.tracked_indexes): index = two - elif (two.is_constant() and two.box.getint() == 1 and - one in self.tracked_indexes): + v = one.box.getint() + elif (two.is_constant() and one in self.tracked_indexes): index = one + v = two.box.getint() else: return - self.tracked_indexes[self.getvalue(op.result)] = self.tracked_indexes[index].advance() + self.tracked_indexes[self.getvalue(op.result)] = self.tracked_indexes[index].advance(v) def _optimize_binop(self, op): left = self.getvalue(op.getarg(0)) right = self.getvalue(op.getarg(1)) if left not in self.track or right not in self.track: + if left in self.track or right in self.track: + self.reset() self.emit_operation(op) else: self.ops_so_far.append(op) @@ -151,6 +173,9 @@ index = self.getvalue(op.getarg(1)) val = self.getvalue(op.getarg(2)) if index not in self.tracked_indexes or val not in self.track: + # We could detect cases here, but we're playing on the safe + # side and just resetting everything + self.reset() self.emit_operation(op) return self.ops_so_far.append(op) @@ -159,7 +184,9 @@ ti = self.tracked_indexes[index] if arr not in self.full: self.full[arr] = [None] * VECTOR_SIZE - self.full[arr][ti.index] = Write(arr, index, v, op) + i = (ti.index * op.getdescr().get_width() // + op.getdescr().get_field_size()) + self.full[arr][i] = Write(arr, index, v, op) optimize_SETINTERIORFIELD_RAW = optimize_SETARRAYITEM_RAW diff --git a/pypy/jit/metainterp/resoperation.py b/pypy/jit/metainterp/resoperation.py --- a/pypy/jit/metainterp/resoperation.py +++ b/pypy/jit/metainterp/resoperation.py @@ -473,6 +473,7 @@ 'GETARRAYITEM_VECTOR_RAW/2d', 'GETARRAYITEM_RAW/2d', 'GETINTERIORFIELD_GC/2d', + 'GETINTERIORFIELD_VECTOR_RAW/2d', 'GETINTERIORFIELD_RAW/2d', 'GETFIELD_GC/1d', 'GETFIELD_RAW/1d', @@ -493,6 +494,7 @@ 'SETARRAYITEM_RAW/3d', 'SETARRAYITEM_VECTOR_RAW/3d', 'SETINTERIORFIELD_GC/3d', + 'SETINTERIORFIELD_VECTOR_RAW/3d', 'SETINTERIORFIELD_RAW/3d', 'SETFIELD_GC/2d', 'SETFIELD_RAW/2d', From noreply at buildbot.pypy.org Mon Feb 20 23:30:09 2012 From: noreply at buildbot.pypy.org (fijal) Date: Mon, 20 Feb 2012 23:30:09 +0100 (CET) Subject: [pypy-commit] pypy backend-vector-ops: fix the test - it fails because stuff does not work when not compiled Message-ID: <20120220223009.32F04820D1@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: backend-vector-ops Changeset: r52689:31a15fa6b873 Date: 2012-02-20 10:58 +0100 http://bitbucket.org/pypy/pypy/changeset/31a15fa6b873/ Log: fix the test - it fails because stuff does not work when not compiled (why???) 
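The r52688 changeset above teaches the optimizer to pair two scalar float operations at adjacent, aligned offsets into single vector operations (the expected traces in test_vectorize.py replace two GETARRAYITEM_RAW/FLOAT_ADD/SETARRAYITEM_RAW iterations with one GETARRAYITEM_VECTOR_RAW, FLOAT_VECTOR_ADD and SETARRAYITEM_VECTOR_RAW), and the diff that follows adds the x86 and descriptor glue for the interiorfield variants (MOVDQA-based loads/stores and the regalloc entries). The toy sketch below only models the arithmetic the fused trace computes, processing two doubles per iteration as in VECTOR_SIZE = 2 from vectorize.py; it is not the OptVectorize pass itself, and the helper names are invented for the example.

    # Toy model: the fused trace moves two doubles at a time instead of
    # running the scalar body twice, assuming aligned, unit-stride data
    # (the tests assert alignment via jit.assert_aligned).
    VECTOR_SIZE = 2  # matches the constant in optimizeopt/vectorize.py

    def scalar_loop(a, b, c):
        # two scalar iterations: load a[i], load b[i], add, store c[i]
        for i in range(len(a)):
            c[i] = a[i] + b[i]

    def vector_loop(a, b, c):
        # one iteration per pair: one vector load per operand, one
        # vector add, one vector store
        for i in range(0, len(a), VECTOR_SIZE):
            c[i:i + VECTOR_SIZE] = [x + y for x, y in
                                    zip(a[i:i + VECTOR_SIZE],
                                        b[i:i + VECTOR_SIZE])]

    a = [1.0, 2.0, 3.0, 4.0]
    b = [10.0, 20.0, 30.0, 40.0]
    c1, c2 = [0.0] * 4, [0.0] * 4
    scalar_loop(a, b, c1)
    vector_loop(a, b, c2)
    assert c1 == c2 == [11.0, 22.0, 33.0, 44.0]

The "dance" in r52688 appears to be mostly about units: the interiorfield tests step the index in bytes (``int_add(i0, 8)`` for a double, ``i += elem_size`` in the x86 test), so ``TrackIndex.advance()`` now takes the increment value and the descriptors expose ``get_field_size()``/``get_width()`` so the optimizer can translate byte offsets into vector lanes before pairing the operations.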
diff --git a/pypy/jit/backend/llsupport/descr.py b/pypy/jit/backend/llsupport/descr.py --- a/pypy/jit/backend/llsupport/descr.py +++ b/pypy/jit/backend/llsupport/descr.py @@ -179,9 +179,11 @@ def repr_of_descr(self): return '' % (self.flag, self.itemsize) - def get_item_size(self): + def get_field_size(self): return self.itemsize + def get_width(self): + return self.itemsize def get_array_descr(gccache, ARRAY_OR_STRUCT): cache = gccache._cache_array @@ -230,7 +232,7 @@ def repr_of_descr(self): return '' % self.fielddescr.repr_of_descr() - def get_item_size(self): + def get_field_size(self): return self.fielddescr.field_size def get_width(self): diff --git a/pypy/jit/backend/x86/assembler.py b/pypy/jit/backend/x86/assembler.py --- a/pypy/jit/backend/x86/assembler.py +++ b/pypy/jit/backend/x86/assembler.py @@ -1469,6 +1469,13 @@ src_addr = addr_add(base_loc, ofs_loc, 0, scale) self.mc.MOVDQA(resloc, src_addr) + def genop_getinteriorfield_vector_raw(self, op, arglocs, resloc): + (base_loc, ofs_loc, itemsize_loc, fieldsize_loc, + index_loc, temp_loc, sign_loc) = arglocs + self.genop_getarrayitem_vector_raw(op, [base_loc, ofs_loc, + itemsize_loc, None, sign_loc], + resloc) + def _get_interiorfield_addr(self, temp_loc, index_loc, itemsize_loc, base_loc, ofs_loc): assert isinstance(itemsize_loc, ImmedLoc) @@ -1528,6 +1535,12 @@ dest_addr = AddressLoc(base_loc, ofs_loc, scale, 0) self.mc.MOVDQA(dest_addr, value_loc) + def genop_discard_setinteriorfield_vector_raw(self, op, arglocs): + (base_loc, ofs_loc, itemsize_loc, fieldsize_loc, + index_loc, temp_loc, value_loc) = arglocs + self.genop_discard_setarrayitem_vector_raw(op, [base_loc, ofs_loc, + value_loc, itemsize_loc, None]) + def genop_discard_strsetitem(self, op, arglocs): base_loc, ofs_loc, val_loc = arglocs basesize, itemsize, ofs_length = symbolic.get_array_token(rstr.STR, diff --git a/pypy/jit/backend/x86/regalloc.py b/pypy/jit/backend/x86/regalloc.py --- a/pypy/jit/backend/x86/regalloc.py +++ b/pypy/jit/backend/x86/regalloc.py @@ -1072,6 +1072,7 @@ index_loc, temp_loc, value_loc]) consider_setinteriorfield_raw = consider_setinteriorfield_gc + consider_setinteriorfield_vector_raw = consider_setinteriorfield_gc def consider_strsetitem(self, op): args = op.getarglist() @@ -1167,6 +1168,7 @@ index_loc, temp_loc, sign_loc], result_loc) consider_getinteriorfield_raw = consider_getinteriorfield_gc + consider_getinteriorfield_vector_raw = consider_getinteriorfield_gc def consider_int_is_true(self, op, guard_op): # doesn't need arg to be in a register diff --git a/pypy/jit/backend/x86/test/test_vectorize.py b/pypy/jit/backend/x86/test/test_vectorize.py --- a/pypy/jit/backend/x86/test/test_vectorize.py +++ b/pypy/jit/backend/x86/test/test_vectorize.py @@ -39,19 +39,22 @@ return r assert self.meta_interp(f, [20]) == f(20) + self.check_simple_loop(float_vector_add=1, getarrayitem_vector_raw=2, + setarrayitem_vector_raw=1) def test_vector_ops_libffi(self): - TP = rffi.CArray(lltype.Float) + TP = lltype.Array(lltype.Float, hints={'nolength': True, + 'memory_position_alignment': 16}) elem_size = rffi.sizeof(lltype.Float) ftype = clibffi.cast_type_to_ffitype(lltype.Float) driver = jit.JitDriver(greens = [], reds = ['a', 'i', 'b', 'size']) def read_item(arr, item): - return libffi.array_getitem(ftype, elem_size, arr, item, 0) + return libffi.array_getitem(ftype, 1, arr, item, 0) def store_item(arr, item, v): - libffi.array_setitem(ftype, elem_size, arr, item, 0, v) + libffi.array_setitem(ftype, 1, arr, item, 0, v) def initialize(arr, size): for i in 
range(size): @@ -69,19 +72,23 @@ initialize(a, size) initialize(b, size) i = 0 - while i < size: + while i < size * elem_size: driver.jit_merge_point(a=a, i=i, size=size, b=b) jit.assert_aligned(a, i) jit.assert_aligned(b, i) store_item(b, i, read_item(a, i) + read_item(a, i)) - i += 1 + i += elem_size store_item(b, i, read_item(a, i) + read_item(a, i)) - i += 1 + i += elem_size r = sum(b, size) lltype.free(a, flavor='raw') lltype.free(b, flavor='raw') return r res = f(20) - assert self.meta_interp(f, [20]) == res + res2 = self.meta_interp(f, [20]) + self.check_simple_loop(float_vector_add=1, + getinteriorfield_vector_raw=2, + setinteriorfield_vector_raw=1) + assert res2 == res diff --git a/pypy/jit/metainterp/optimizeopt/__init__.py b/pypy/jit/metainterp/optimizeopt/__init__.py --- a/pypy/jit/metainterp/optimizeopt/__init__.py +++ b/pypy/jit/metainterp/optimizeopt/__init__.py @@ -23,7 +23,7 @@ ('pure', OptPure), ('heap', OptHeap), ('ffi', None), - ('vectorize', OptVectorize), # XXX check if CPU supports that maybe + ('vectorize', OptVectorize), # XXX check if CPU supports that ('unroll', None), ] # no direct instantiation of unroll From noreply at buildbot.pypy.org Mon Feb 20 23:30:11 2012 From: noreply at buildbot.pypy.org (fijal) Date: Mon, 20 Feb 2012 23:30:11 +0100 (CET) Subject: [pypy-commit] pypy numpy-record-dtypes: various translation fixes Message-ID: <20120220223011.6E747820D1@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-record-dtypes Changeset: r52690:090669b997a9 Date: 2012-02-20 11:21 +0100 http://bitbucket.org/pypy/pypy/changeset/090669b997a9/ Log: various translation fixes diff --git a/pypy/module/micronumpy/interp_boxes.py b/pypy/module/micronumpy/interp_boxes.py --- a/pypy/module/micronumpy/interp_boxes.py +++ b/pypy/module/micronumpy/interp_boxes.py @@ -14,13 +14,14 @@ MIXIN_64 = (int_typedef,) if LONG_BIT == 64 else () def new_dtype_getter(name): - def get_dtype(space): + def get_dtype(self, space): from pypy.module.micronumpy.interp_dtype import get_dtype_cache return getattr(get_dtype_cache(space), "w_%sdtype" % name) def new(space, w_subtype, w_value): - dtype = get_dtype(space) + from pypy.module.micronumpy.interp_dtype import get_dtype_cache + dtype = getattr(get_dtype_cache(space), "w_%sdtype" % name) return dtype.itemtype.coerce_subtype(space, w_subtype, w_value) - return func_with_new_name(new, name + "_box_new"), staticmethod(get_dtype) + return func_with_new_name(new, name + "_box_new"), get_dtype class PrimitiveBox(object): _mixin_ = True @@ -47,12 +48,12 @@ return space.wrap(self.get_dtype(space).itemtype.str_format(self)) def descr_int(self, space): - box = self.convert_to(W_LongBox.get_dtype(space)) + box = self.convert_to(W_LongBox.get_dtype(self, space)) assert isinstance(box, W_LongBox) return space.wrap(box.value) def descr_float(self, space): - box = self.convert_to(W_Float64Box.get_dtype(space)) + box = self.convert_to(W_Float64Box.get_dtype(self, space)) assert isinstance(box, W_Float64Box) return space.wrap(box.value) @@ -342,11 +343,11 @@ __module__ = "numpypy", ) -W_StringBox.typedef = TypeDef("string_", (str_typedef, W_CharacterBox.typedef), +W_StringBox.typedef = TypeDef("string_", (W_CharacterBox.typedef, str_typedef), __module__ = "numpypy", ) -W_UnicodeBox.typedef = TypeDef("unicode_", (unicode_typedef, W_CharacterBox.typedef), +W_UnicodeBox.typedef = TypeDef("unicode_", (W_CharacterBox.typedef, unicode_typedef), __module__ = "numpypy", ) diff --git a/pypy/module/micronumpy/interp_dtype.py 
b/pypy/module/micronumpy/interp_dtype.py --- a/pypy/module/micronumpy/interp_dtype.py +++ b/pypy/module/micronumpy/interp_dtype.py @@ -134,6 +134,7 @@ fldname = space.str_w(w_fldname) if fldname in fields: raise OperationError(space.w_ValueError, space.wrap("two fields with the same name")) + assert isinstance(subdtype, W_Dtype) fields[fldname] = (offset, subdtype) ofs_and_items.append((offset, subdtype.itemtype)) offset += subdtype.itemtype.get_element_size() diff --git a/pypy/module/micronumpy/interp_support.py b/pypy/module/micronumpy/interp_support.py --- a/pypy/module/micronumpy/interp_support.py +++ b/pypy/module/micronumpy/interp_support.py @@ -63,6 +63,7 @@ from pypy.module.micronumpy.interp_numarray import W_NDimArray itemsize = dtype.itemtype.get_element_size() + assert itemsize >= 0 if count == -1: count = length / itemsize if length % itemsize != 0: @@ -76,7 +77,9 @@ a = W_NDimArray([count], dtype=dtype) ai = a.create_iter() for i in range(count): - val = dtype.itemtype.runpack_str(s[i*itemsize:i*itemsize + itemsize]) + start = i*itemsize + assert start >= 0 + val = dtype.itemtype.runpack_str(s[start:start + itemsize]) a.dtype.setitem(a, ai.offset, val) ai = ai.next(1) diff --git a/pypy/module/micronumpy/types.py b/pypy/module/micronumpy/types.py --- a/pypy/module/micronumpy/types.py +++ b/pypy/module/micronumpy/types.py @@ -649,10 +649,11 @@ return interp_boxes.W_VoidBox(arr, i) @jit.unroll_safe - def coerce(self, space, dtype, w_item): + def coerce(self, space, dtype, w_item): + from pypy.module.micronumpy.interp_numarray import W_NDimArray + if isinstance(w_item, interp_boxes.W_VoidBox): return w_item - from pypy.module.micronumpy.interp_numarray import W_NDimArray # we treat every sequence as sequence, no special support # for arrays if not space.issequence_w(w_item): @@ -675,11 +676,13 @@ @jit.unroll_safe def store(self, arr, _, i, ofs, box): + assert isinstance(box, interp_boxes.W_VoidBox) for k in range(self.get_element_size()): arr.storage[k + i] = box.arr.storage[k + box.ofs] @jit.unroll_safe def str_format(self, box): + assert isinstance(box, interp_boxes.W_VoidBox) pieces = ["("] first = True for ofs, tp in self.offsets_and_fields: From noreply at buildbot.pypy.org Mon Feb 20 23:30:16 2012 From: noreply at buildbot.pypy.org (fijal) Date: Mon, 20 Feb 2012 23:30:16 +0100 (CET) Subject: [pypy-commit] pypy numpy-record-dtypes: merge default Message-ID: <20120220223016.BBB18820D1@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-record-dtypes Changeset: r52691:0716fdebaeea Date: 2012-02-20 11:24 +0100 http://bitbucket.org/pypy/pypy/changeset/0716fdebaeea/ Log: merge default diff --git a/ctypes_configure/cbuild.py b/ctypes_configure/cbuild.py --- a/ctypes_configure/cbuild.py +++ b/ctypes_configure/cbuild.py @@ -206,8 +206,9 @@ cfiles += eci.separate_module_files include_dirs = list(eci.include_dirs) library_dirs = list(eci.library_dirs) - if sys.platform == 'darwin': # support Fink & Darwinports - for s in ('/sw/', '/opt/local/'): + if (sys.platform == 'darwin' or # support Fink & Darwinports + sys.platform.startswith('freebsd')): + for s in ('/sw/', '/opt/local/', '/usr/local/'): if s + 'include' not in include_dirs and \ os.path.exists(s + 'include'): include_dirs.append(s + 'include') @@ -380,9 +381,9 @@ self.link_extra += ['-pthread'] if sys.platform == 'win32': self.link_extra += ['/DEBUG'] # generate .pdb file - if sys.platform == 'darwin': - # support Fink & Darwinports - for s in ('/sw/', '/opt/local/'): + if (sys.platform == 'darwin' or # 
support Fink & Darwinports + sys.platform.startswith('freebsd')): + for s in ('/sw/', '/opt/local/', '/usr/local/'): if s + 'include' not in self.include_dirs and \ os.path.exists(s + 'include'): self.include_dirs.append(s + 'include') @@ -395,7 +396,6 @@ self.outputfilename = py.path.local(cfilenames[0]).new(ext=ext) else: self.outputfilename = py.path.local(outputfilename) - self.eci = eci def build(self, noerr=False): basename = self.outputfilename.new(ext='') @@ -436,7 +436,7 @@ old = cfile.dirpath().chdir() try: res = compiler.compile([cfile.basename], - include_dirs=self.eci.include_dirs, + include_dirs=self.include_dirs, extra_preargs=self.compile_extra) assert len(res) == 1 cobjfile = py.path.local(res[0]) @@ -445,9 +445,9 @@ finally: old.chdir() compiler.link_executable(objects, str(self.outputfilename), - libraries=self.eci.libraries, + libraries=self.libraries, extra_preargs=self.link_extra, - library_dirs=self.eci.library_dirs) + library_dirs=self.library_dirs) def build_executable(*args, **kwds): noerr = kwds.pop('noerr', False) diff --git a/lib-python/modified-2.7/UserDict.py b/lib-python/modified-2.7/UserDict.py --- a/lib-python/modified-2.7/UserDict.py +++ b/lib-python/modified-2.7/UserDict.py @@ -85,8 +85,12 @@ def __iter__(self): return iter(self.data) -import _abcoll -_abcoll.MutableMapping.register(IterableUserDict) +try: + import _abcoll +except ImportError: + pass # e.g. no '_weakref' module on this pypy +else: + _abcoll.MutableMapping.register(IterableUserDict) class DictMixin: diff --git a/lib_pypy/_subprocess.py b/lib_pypy/_subprocess.py --- a/lib_pypy/_subprocess.py +++ b/lib_pypy/_subprocess.py @@ -87,7 +87,7 @@ # Now the _subprocess module implementation -from ctypes import c_int as _c_int, byref as _byref +from ctypes import c_int as _c_int, byref as _byref, WinError as _WinError class _handle: def __init__(self, handle): @@ -116,7 +116,7 @@ res = _CreatePipe(_byref(read), _byref(write), None, size) if not res: - raise WindowsError("Error") + raise _WinError() return _handle(read.value), _handle(write.value) @@ -132,7 +132,7 @@ access, inherit, options) if not res: - raise WindowsError("Error") + raise _WinError() return _handle(target.value) DUPLICATE_SAME_ACCESS = 2 @@ -165,7 +165,7 @@ start_dir, _byref(si), _byref(pi)) if not res: - raise WindowsError("Error") + raise _WinError() return _handle(pi.hProcess), _handle(pi.hThread), pi.dwProcessID, pi.dwThreadID STARTF_USESHOWWINDOW = 0x001 @@ -178,7 +178,7 @@ res = _WaitForSingleObject(int(handle), milliseconds) if res < 0: - raise WindowsError("Error") + raise _WinError() return res INFINITE = 0xffffffff @@ -190,7 +190,7 @@ res = _GetExitCodeProcess(int(handle), _byref(code)) if not res: - raise WindowsError("Error") + raise _WinError() return code.value @@ -198,7 +198,7 @@ res = _TerminateProcess(int(handle), exitcode) if not res: - raise WindowsError("Error") + raise _WinError() def GetStdHandle(stdhandle): res = _GetStdHandle(stdhandle) diff --git a/lib_pypy/ctypes_config_cache/pyexpat.ctc.py b/lib_pypy/ctypes_config_cache/pyexpat.ctc.py deleted file mode 100644 --- a/lib_pypy/ctypes_config_cache/pyexpat.ctc.py +++ /dev/null @@ -1,45 +0,0 @@ -""" -'ctypes_configure' source for pyexpat.py. -Run this to rebuild _pyexpat_cache.py. 
-""" - -import ctypes -from ctypes import c_char_p, c_int, c_void_p, c_char -from ctypes_configure import configure -import dumpcache - - -class CConfigure: - _compilation_info_ = configure.ExternalCompilationInfo( - includes = ['expat.h'], - libraries = ['expat'], - pre_include_lines = [ - '#define XML_COMBINED_VERSION (10000*XML_MAJOR_VERSION+100*XML_MINOR_VERSION+XML_MICRO_VERSION)'], - ) - - XML_Char = configure.SimpleType('XML_Char', c_char) - XML_COMBINED_VERSION = configure.ConstantInteger('XML_COMBINED_VERSION') - for name in ['XML_PARAM_ENTITY_PARSING_NEVER', - 'XML_PARAM_ENTITY_PARSING_UNLESS_STANDALONE', - 'XML_PARAM_ENTITY_PARSING_ALWAYS']: - locals()[name] = configure.ConstantInteger(name) - - XML_Encoding = configure.Struct('XML_Encoding',[ - ('data', c_void_p), - ('convert', c_void_p), - ('release', c_void_p), - ('map', c_int * 256)]) - XML_Content = configure.Struct('XML_Content',[ - ('numchildren', c_int), - ('children', c_void_p), - ('name', c_char_p), - ('type', c_int), - ('quant', c_int), - ]) - # this is insanely stupid - XML_FALSE = configure.ConstantInteger('XML_FALSE') - XML_TRUE = configure.ConstantInteger('XML_TRUE') - -config = configure.configure(CConfigure) - -dumpcache.dumpcache2('pyexpat', config) diff --git a/lib_pypy/ctypes_config_cache/test/test_cache.py b/lib_pypy/ctypes_config_cache/test/test_cache.py --- a/lib_pypy/ctypes_config_cache/test/test_cache.py +++ b/lib_pypy/ctypes_config_cache/test/test_cache.py @@ -39,10 +39,6 @@ d = run('resource.ctc.py', '_resource_cache.py') assert 'RLIM_NLIMITS' in d -def test_pyexpat(): - d = run('pyexpat.ctc.py', '_pyexpat_cache.py') - assert 'XML_COMBINED_VERSION' in d - def test_locale(): d = run('locale.ctc.py', '_locale_cache.py') assert 'LC_ALL' in d diff --git a/lib_pypy/datetime.py b/lib_pypy/datetime.py --- a/lib_pypy/datetime.py +++ b/lib_pypy/datetime.py @@ -271,8 +271,9 @@ raise ValueError("%s()=%d, must be in -1439..1439" % (name, offset)) def _check_date_fields(year, month, day): - if not isinstance(year, (int, long)): - raise TypeError('int expected') + for value in [year, day]: + if not isinstance(value, (int, long)): + raise TypeError('int expected') if not MINYEAR <= year <= MAXYEAR: raise ValueError('year must be in %d..%d' % (MINYEAR, MAXYEAR), year) if not 1 <= month <= 12: @@ -282,8 +283,9 @@ raise ValueError('day must be in 1..%d' % dim, day) def _check_time_fields(hour, minute, second, microsecond): - if not isinstance(hour, (int, long)): - raise TypeError('int expected') + for value in [hour, minute, second, microsecond]: + if not isinstance(value, (int, long)): + raise TypeError('int expected') if not 0 <= hour <= 23: raise ValueError('hour must be in 0..23', hour) if not 0 <= minute <= 59: @@ -1520,7 +1522,7 @@ def utcfromtimestamp(cls, t): "Construct a UTC datetime from a POSIX timestamp (like time.time())." t, frac = divmod(t, 1.0) - us = round(frac * 1e6) + us = int(round(frac * 1e6)) # If timestamp is less than one microsecond smaller than a # full second, us can be rounded up to 1000000. In this case, diff --git a/lib_pypy/numpy.py b/lib_pypy/numpy.py new file mode 100644 --- /dev/null +++ b/lib_pypy/numpy.py @@ -0,0 +1,5 @@ +raise ImportError( + "The 'numpy' module of PyPy is in-development and not complete. 
" + "To try it out anyway, you can either import from 'numpypy', " + "or just write 'import numpypy' first in your program and then " + "import from 'numpy' as usual.") diff --git a/lib_pypy/numpypy/__init__.py b/lib_pypy/numpypy/__init__.py --- a/lib_pypy/numpypy/__init__.py +++ b/lib_pypy/numpypy/__init__.py @@ -1,2 +1,5 @@ from _numpypy import * from .core import * + +import sys +sys.modules.setdefault('numpy', sys.modules['numpypy']) diff --git a/lib_pypy/numpypy/core/numeric.py b/lib_pypy/numpypy/core/numeric.py --- a/lib_pypy/numpypy/core/numeric.py +++ b/lib_pypy/numpypy/core/numeric.py @@ -1,6 +1,7 @@ -from _numpypy import array, ndarray, int_, float_ #, complex_# , longlong +from _numpypy import array, ndarray, int_, float_, bool_ #, complex_# , longlong from _numpypy import concatenate +import math import sys import _numpypy as multiarray # ARGH from numpypy.core.arrayprint import array2string @@ -309,3 +310,13 @@ set_string_function(array_repr, 1) little_endian = (sys.byteorder == 'little') + +Inf = inf = infty = Infinity = PINF = float('inf') +NINF = float('-inf') +PZERO = 0.0 +NZERO = -0.0 +nan = NaN = NAN = float('nan') +False_ = bool_(False) +True_ = bool_(True) +e = math.e +pi = math.pi \ No newline at end of file diff --git a/lib_pypy/pyexpat.py b/lib_pypy/pyexpat.py deleted file mode 100644 --- a/lib_pypy/pyexpat.py +++ /dev/null @@ -1,448 +0,0 @@ - -import ctypes -import ctypes.util -from ctypes import c_char_p, c_int, c_void_p, POINTER, c_char, c_wchar_p -import sys - -# load the platform-specific cache made by running pyexpat.ctc.py -from ctypes_config_cache._pyexpat_cache import * - -try: from __pypy__ import builtinify -except ImportError: builtinify = lambda f: f - - -lib = ctypes.CDLL(ctypes.util.find_library('expat')) - - -XML_Content.children = POINTER(XML_Content) -XML_Parser = ctypes.c_void_p # an opaque pointer -assert XML_Char is ctypes.c_char # this assumption is everywhere in -# cpython's expat, let's explode - -def declare_external(name, args, res): - func = getattr(lib, name) - func.args = args - func.restype = res - globals()[name] = func - -declare_external('XML_ParserCreate', [c_char_p], XML_Parser) -declare_external('XML_ParserCreateNS', [c_char_p, c_char], XML_Parser) -declare_external('XML_Parse', [XML_Parser, c_char_p, c_int, c_int], c_int) -currents = ['CurrentLineNumber', 'CurrentColumnNumber', - 'CurrentByteIndex'] -for name in currents: - func = getattr(lib, 'XML_Get' + name) - func.args = [XML_Parser] - func.restype = c_int - -declare_external('XML_SetReturnNSTriplet', [XML_Parser, c_int], None) -declare_external('XML_GetSpecifiedAttributeCount', [XML_Parser], c_int) -declare_external('XML_SetParamEntityParsing', [XML_Parser, c_int], None) -declare_external('XML_GetErrorCode', [XML_Parser], c_int) -declare_external('XML_StopParser', [XML_Parser, c_int], None) -declare_external('XML_ErrorString', [c_int], c_char_p) -declare_external('XML_SetBase', [XML_Parser, c_char_p], None) -if XML_COMBINED_VERSION >= 19505: - declare_external('XML_UseForeignDTD', [XML_Parser, c_int], None) - -declare_external('XML_SetUnknownEncodingHandler', [XML_Parser, c_void_p, - c_void_p], None) -declare_external('XML_FreeContentModel', [XML_Parser, POINTER(XML_Content)], - None) -declare_external('XML_ExternalEntityParserCreate', [XML_Parser,c_char_p, - c_char_p], - XML_Parser) - -handler_names = [ - 'StartElement', - 'EndElement', - 'ProcessingInstruction', - 'CharacterData', - 'UnparsedEntityDecl', - 'NotationDecl', - 'StartNamespaceDecl', - 'EndNamespaceDecl', - 
'Comment', - 'StartCdataSection', - 'EndCdataSection', - 'Default', - 'DefaultHandlerExpand', - 'NotStandalone', - 'ExternalEntityRef', - 'StartDoctypeDecl', - 'EndDoctypeDecl', - 'EntityDecl', - 'XmlDecl', - 'ElementDecl', - 'AttlistDecl', - ] -if XML_COMBINED_VERSION >= 19504: - handler_names.append('SkippedEntity') -setters = {} - -for name in handler_names: - if name == 'DefaultHandlerExpand': - newname = 'XML_SetDefaultHandlerExpand' - else: - name += 'Handler' - newname = 'XML_Set' + name - cfunc = getattr(lib, newname) - cfunc.args = [XML_Parser, ctypes.c_void_p] - cfunc.result = ctypes.c_int - setters[name] = cfunc - -class ExpatError(Exception): - def __str__(self): - return self.s - -error = ExpatError - -class XMLParserType(object): - specified_attributes = 0 - ordered_attributes = 0 - returns_unicode = 1 - encoding = 'utf-8' - def __init__(self, encoding, namespace_separator, _hook_external_entity=False): - self.returns_unicode = 1 - if encoding: - self.encoding = encoding - if not _hook_external_entity: - if namespace_separator is None: - self.itself = XML_ParserCreate(encoding) - else: - self.itself = XML_ParserCreateNS(encoding, ord(namespace_separator)) - if not self.itself: - raise RuntimeError("Creating parser failed") - self._set_unknown_encoding_handler() - self.storage = {} - self.buffer = None - self.buffer_size = 8192 - self.character_data_handler = None - self.intern = {} - self.__exc_info = None - - def _flush_character_buffer(self): - if not self.buffer: - return - res = self._call_character_handler(''.join(self.buffer)) - self.buffer = [] - return res - - def _call_character_handler(self, buf): - if self.character_data_handler: - self.character_data_handler(buf) - - def _set_unknown_encoding_handler(self): - def UnknownEncoding(encodingData, name, info_p): - info = info_p.contents - s = ''.join([chr(i) for i in range(256)]) - u = s.decode(self.encoding, 'replace') - for i in range(len(u)): - if u[i] == u'\xfffd': - info.map[i] = -1 - else: - info.map[i] = ord(u[i]) - info.data = None - info.convert = None - info.release = None - return 1 - - CB = ctypes.CFUNCTYPE(c_int, c_void_p, c_char_p, POINTER(XML_Encoding)) - cb = CB(UnknownEncoding) - self._unknown_encoding_handler = (cb, UnknownEncoding) - XML_SetUnknownEncodingHandler(self.itself, cb, None) - - def _set_error(self, code): - e = ExpatError() - e.code = code - lineno = lib.XML_GetCurrentLineNumber(self.itself) - colno = lib.XML_GetCurrentColumnNumber(self.itself) - e.offset = colno - e.lineno = lineno - err = XML_ErrorString(code)[:200] - e.s = "%s: line: %d, column: %d" % (err, lineno, colno) - e.message = e.s - self._error = e - - def Parse(self, data, is_final=0): - res = XML_Parse(self.itself, data, len(data), is_final) - if res == 0: - self._set_error(XML_GetErrorCode(self.itself)) - if self.__exc_info: - exc_info = self.__exc_info - self.__exc_info = None - raise exc_info[0], exc_info[1], exc_info[2] - else: - raise self._error - self._flush_character_buffer() - return res - - def _sethandler(self, name, real_cb): - setter = setters[name] - try: - cb = self.storage[(name, real_cb)] - except KeyError: - cb = getattr(self, 'get_cb_for_%s' % name)(real_cb) - self.storage[(name, real_cb)] = cb - except TypeError: - # weellll... 
- cb = getattr(self, 'get_cb_for_%s' % name)(real_cb) - setter(self.itself, cb) - - def _wrap_cb(self, cb): - def f(*args): - try: - return cb(*args) - except: - self.__exc_info = sys.exc_info() - XML_StopParser(self.itself, XML_FALSE) - return f - - def get_cb_for_StartElementHandler(self, real_cb): - def StartElement(unused, name, attrs): - # unpack name and attrs - conv = self.conv - self._flush_character_buffer() - if self.specified_attributes: - max = XML_GetSpecifiedAttributeCount(self.itself) - else: - max = 0 - while attrs[max]: - max += 2 # copied - if self.ordered_attributes: - res = [attrs[i] for i in range(max)] - else: - res = {} - for i in range(0, max, 2): - res[conv(attrs[i])] = conv(attrs[i + 1]) - real_cb(conv(name), res) - StartElement = self._wrap_cb(StartElement) - CB = ctypes.CFUNCTYPE(None, c_void_p, c_char_p, POINTER(c_char_p)) - return CB(StartElement) - - def get_cb_for_ExternalEntityRefHandler(self, real_cb): - def ExternalEntity(unused, context, base, sysId, pubId): - self._flush_character_buffer() - conv = self.conv - res = real_cb(conv(context), conv(base), conv(sysId), - conv(pubId)) - if res is None: - return 0 - return res - ExternalEntity = self._wrap_cb(ExternalEntity) - CB = ctypes.CFUNCTYPE(c_int, c_void_p, *([c_char_p] * 4)) - return CB(ExternalEntity) - - def get_cb_for_CharacterDataHandler(self, real_cb): - def CharacterData(unused, s, lgt): - if self.buffer is None: - self._call_character_handler(self.conv(s[:lgt])) - else: - if len(self.buffer) + lgt > self.buffer_size: - self._flush_character_buffer() - if self.character_data_handler is None: - return - if lgt >= self.buffer_size: - self._call_character_handler(s[:lgt]) - self.buffer = [] - else: - self.buffer.append(s[:lgt]) - CharacterData = self._wrap_cb(CharacterData) - CB = ctypes.CFUNCTYPE(None, c_void_p, POINTER(c_char), c_int) - return CB(CharacterData) - - def get_cb_for_NotStandaloneHandler(self, real_cb): - def NotStandaloneHandler(unused): - return real_cb() - NotStandaloneHandler = self._wrap_cb(NotStandaloneHandler) - CB = ctypes.CFUNCTYPE(c_int, c_void_p) - return CB(NotStandaloneHandler) - - def get_cb_for_EntityDeclHandler(self, real_cb): - def EntityDecl(unused, ename, is_param, value, value_len, base, - system_id, pub_id, not_name): - self._flush_character_buffer() - if not value: - value = None - else: - value = value[:value_len] - args = [ename, is_param, value, base, system_id, - pub_id, not_name] - args = [self.conv(arg) for arg in args] - real_cb(*args) - EntityDecl = self._wrap_cb(EntityDecl) - CB = ctypes.CFUNCTYPE(None, c_void_p, c_char_p, c_int, c_char_p, - c_int, c_char_p, c_char_p, c_char_p, c_char_p) - return CB(EntityDecl) - - def _conv_content_model(self, model): - children = tuple([self._conv_content_model(model.children[i]) - for i in range(model.numchildren)]) - return (model.type, model.quant, self.conv(model.name), - children) - - def get_cb_for_ElementDeclHandler(self, real_cb): - def ElementDecl(unused, name, model): - self._flush_character_buffer() - modelobj = self._conv_content_model(model[0]) - real_cb(name, modelobj) - XML_FreeContentModel(self.itself, model) - - ElementDecl = self._wrap_cb(ElementDecl) - CB = ctypes.CFUNCTYPE(None, c_void_p, c_char_p, POINTER(XML_Content)) - return CB(ElementDecl) - - def _new_callback_for_string_len(name, sign): - def get_callback_for_(self, real_cb): - def func(unused, s, len): - self._flush_character_buffer() - arg = self.conv(s[:len]) - real_cb(arg) - func.func_name = name - func = self._wrap_cb(func) - CB = 
ctypes.CFUNCTYPE(*sign) - return CB(func) - get_callback_for_.func_name = 'get_cb_for_' + name - return get_callback_for_ - - for name in ['DefaultHandlerExpand', - 'DefaultHandler']: - sign = [None, c_void_p, POINTER(c_char), c_int] - name = 'get_cb_for_' + name - locals()[name] = _new_callback_for_string_len(name, sign) - - def _new_callback_for_starargs(name, sign): - def get_callback_for_(self, real_cb): - def func(unused, *args): - self._flush_character_buffer() - args = [self.conv(arg) for arg in args] - real_cb(*args) - func.func_name = name - func = self._wrap_cb(func) - CB = ctypes.CFUNCTYPE(*sign) - return CB(func) - get_callback_for_.func_name = 'get_cb_for_' + name - return get_callback_for_ - - for name, num_or_sign in [ - ('EndElementHandler', 1), - ('ProcessingInstructionHandler', 2), - ('UnparsedEntityDeclHandler', 5), - ('NotationDeclHandler', 4), - ('StartNamespaceDeclHandler', 2), - ('EndNamespaceDeclHandler', 1), - ('CommentHandler', 1), - ('StartCdataSectionHandler', 0), - ('EndCdataSectionHandler', 0), - ('StartDoctypeDeclHandler', [None, c_void_p] + [c_char_p] * 3 + [c_int]), - ('XmlDeclHandler', [None, c_void_p, c_char_p, c_char_p, c_int]), - ('AttlistDeclHandler', [None, c_void_p] + [c_char_p] * 4 + [c_int]), - ('EndDoctypeDeclHandler', 0), - ('SkippedEntityHandler', [None, c_void_p, c_char_p, c_int]), - ]: - if isinstance(num_or_sign, int): - sign = [None, c_void_p] + [c_char_p] * num_or_sign - else: - sign = num_or_sign - name = 'get_cb_for_' + name - locals()[name] = _new_callback_for_starargs(name, sign) - - def conv_unicode(self, s): - if s is None or isinstance(s, int): - return s - return s.decode(self.encoding, "strict") - - def __setattr__(self, name, value): - # forest of ifs... - if name in ['ordered_attributes', - 'returns_unicode', 'specified_attributes']: - if value: - if name == 'returns_unicode': - self.conv = self.conv_unicode - self.__dict__[name] = 1 - else: - if name == 'returns_unicode': - self.conv = lambda s: s - self.__dict__[name] = 0 - elif name == 'buffer_text': - if value: - self.buffer = [] - else: - self._flush_character_buffer() - self.buffer = None - elif name == 'buffer_size': - if not isinstance(value, int): - raise TypeError("Expected int") - if value <= 0: - raise ValueError("Expected positive int") - self.__dict__[name] = value - elif name == 'namespace_prefixes': - XML_SetReturnNSTriplet(self.itself, int(bool(value))) - elif name in setters: - if name == 'CharacterDataHandler': - # XXX we need to flush buffer here - self._flush_character_buffer() - self.character_data_handler = value - #print name - #print value - #print - self._sethandler(name, value) - else: - self.__dict__[name] = value - - def SetParamEntityParsing(self, arg): - XML_SetParamEntityParsing(self.itself, arg) - - if XML_COMBINED_VERSION >= 19505: - def UseForeignDTD(self, arg=True): - if arg: - flag = XML_TRUE - else: - flag = XML_FALSE - XML_UseForeignDTD(self.itself, flag) - - def __getattr__(self, name): - if name == 'buffer_text': - return self.buffer is not None - elif name in currents: - return getattr(lib, 'XML_Get' + name)(self.itself) - elif name == 'ErrorColumnNumber': - return lib.XML_GetCurrentColumnNumber(self.itself) - elif name == 'ErrorLineNumber': - return lib.XML_GetCurrentLineNumber(self.itself) - return self.__dict__[name] - - def ParseFile(self, file): - return self.Parse(file.read(), False) - - def SetBase(self, base): - XML_SetBase(self.itself, base) - - def ExternalEntityParserCreate(self, context, encoding=None): - 
"""ExternalEntityParserCreate(context[, encoding]) - Create a parser for parsing an external entity based on the - information passed to the ExternalEntityRefHandler.""" - new_parser = XMLParserType(encoding, None, True) - new_parser.itself = XML_ExternalEntityParserCreate(self.itself, - context, encoding) - new_parser._set_unknown_encoding_handler() - return new_parser - - at builtinify -def ErrorString(errno): - return XML_ErrorString(errno)[:200] - - at builtinify -def ParserCreate(encoding=None, namespace_separator=None, intern=None): - if (not isinstance(encoding, str) and - not encoding is None): - raise TypeError("ParserCreate() argument 1 must be string or None, not %s" % encoding.__class__.__name__) - if (not isinstance(namespace_separator, str) and - not namespace_separator is None): - raise TypeError("ParserCreate() argument 2 must be string or None, not %s" % namespace_separator.__class__.__name__) - if namespace_separator is not None: - if len(namespace_separator) > 1: - raise ValueError('namespace_separator must be at most one character, omitted, or None') - if len(namespace_separator) == 0: - namespace_separator = None - return XMLParserType(encoding, namespace_separator) diff --git a/lib_pypy/pypy_test/test_pyexpat.py b/lib_pypy/pypy_test/test_pyexpat.py deleted file mode 100644 --- a/lib_pypy/pypy_test/test_pyexpat.py +++ /dev/null @@ -1,665 +0,0 @@ -# XXX TypeErrors on calling handlers, or on bad return values from a -# handler, are obscure and unhelpful. - -from __future__ import absolute_import -import StringIO, sys -import unittest, py - -from lib_pypy.ctypes_config_cache import rebuild -rebuild.rebuild_one('pyexpat.ctc.py') - -from lib_pypy import pyexpat -#from xml.parsers import expat -expat = pyexpat - -from test.test_support import sortdict, run_unittest - - -class TestSetAttribute: - def setup_method(self, meth): - self.parser = expat.ParserCreate(namespace_separator='!') - self.set_get_pairs = [ - [0, 0], - [1, 1], - [2, 1], - [0, 0], - ] - - def test_returns_unicode(self): - for x, y in self.set_get_pairs: - self.parser.returns_unicode = x - assert self.parser.returns_unicode == y - - def test_ordered_attributes(self): - for x, y in self.set_get_pairs: - self.parser.ordered_attributes = x - assert self.parser.ordered_attributes == y - - def test_specified_attributes(self): - for x, y in self.set_get_pairs: - self.parser.specified_attributes = x - assert self.parser.specified_attributes == y - - -data = '''\ - - - - - - - - - -%unparsed_entity; -]> - - - - Contents of subelements - - -&external_entity; -&skipped_entity; - -''' - - -# Produce UTF-8 output -class TestParse: - class Outputter: - def __init__(self): - self.out = [] - - def StartElementHandler(self, name, attrs): - self.out.append('Start element: ' + repr(name) + ' ' + - sortdict(attrs)) - - def EndElementHandler(self, name): - self.out.append('End element: ' + repr(name)) - - def CharacterDataHandler(self, data): - data = data.strip() - if data: - self.out.append('Character data: ' + repr(data)) - - def ProcessingInstructionHandler(self, target, data): - self.out.append('PI: ' + repr(target) + ' ' + repr(data)) - - def StartNamespaceDeclHandler(self, prefix, uri): - self.out.append('NS decl: ' + repr(prefix) + ' ' + repr(uri)) - - def EndNamespaceDeclHandler(self, prefix): - self.out.append('End of NS decl: ' + repr(prefix)) - - def StartCdataSectionHandler(self): - self.out.append('Start of CDATA section') - - def EndCdataSectionHandler(self): - self.out.append('End of CDATA section') - - def 
CommentHandler(self, text): - self.out.append('Comment: ' + repr(text)) - - def NotationDeclHandler(self, *args): - name, base, sysid, pubid = args - self.out.append('Notation declared: %s' %(args,)) - - def UnparsedEntityDeclHandler(self, *args): - entityName, base, systemId, publicId, notationName = args - self.out.append('Unparsed entity decl: %s' %(args,)) - - def NotStandaloneHandler(self): - self.out.append('Not standalone') - return 1 - - def ExternalEntityRefHandler(self, *args): - context, base, sysId, pubId = args - self.out.append('External entity ref: %s' %(args[1:],)) - return 1 - - def StartDoctypeDeclHandler(self, *args): - self.out.append(('Start doctype', args)) - return 1 - - def EndDoctypeDeclHandler(self): - self.out.append("End doctype") - return 1 - - def EntityDeclHandler(self, *args): - self.out.append(('Entity declaration', args)) - return 1 - - def XmlDeclHandler(self, *args): - self.out.append(('XML declaration', args)) - return 1 - - def ElementDeclHandler(self, *args): - self.out.append(('Element declaration', args)) - return 1 - - def AttlistDeclHandler(self, *args): - self.out.append(('Attribute list declaration', args)) - return 1 - - def SkippedEntityHandler(self, *args): - self.out.append(("Skipped entity", args)) - return 1 - - def DefaultHandler(self, userData): - pass - - def DefaultHandlerExpand(self, userData): - pass - - handler_names = [ - 'StartElementHandler', 'EndElementHandler', 'CharacterDataHandler', - 'ProcessingInstructionHandler', 'UnparsedEntityDeclHandler', - 'NotationDeclHandler', 'StartNamespaceDeclHandler', - 'EndNamespaceDeclHandler', 'CommentHandler', - 'StartCdataSectionHandler', 'EndCdataSectionHandler', 'DefaultHandler', - 'DefaultHandlerExpand', 'NotStandaloneHandler', - 'ExternalEntityRefHandler', 'StartDoctypeDeclHandler', - 'EndDoctypeDeclHandler', 'EntityDeclHandler', 'XmlDeclHandler', - 'ElementDeclHandler', 'AttlistDeclHandler', 'SkippedEntityHandler', - ] - - def test_utf8(self): - - out = self.Outputter() - parser = expat.ParserCreate(namespace_separator='!') - for name in self.handler_names: - setattr(parser, name, getattr(out, name)) - parser.returns_unicode = 0 - parser.Parse(data, 1) - - # Verify output - operations = out.out - expected_operations = [ - ('XML declaration', (u'1.0', u'iso-8859-1', 0)), - 'PI: \'xml-stylesheet\' \'href="stylesheet.css"\'', - "Comment: ' comment data '", - "Not standalone", - ("Start doctype", ('quotations', 'quotations.dtd', None, 1)), - ('Element declaration', (u'root', (2, 0, None, ()))), - ('Attribute list declaration', ('root', 'attr1', 'CDATA', None, - 1)), - ('Attribute list declaration', ('root', 'attr2', 'CDATA', None, - 0)), - "Notation declared: ('notation', None, 'notation.jpeg', None)", - ('Entity declaration', ('acirc', 0, '\xc3\xa2', None, None, None, None)), - ('Entity declaration', ('external_entity', 0, None, None, - 'entity.file', None, None)), - "Unparsed entity decl: ('unparsed_entity', None, 'entity.file', None, 'notation')", - "Not standalone", - "End doctype", - "Start element: 'root' {'attr1': 'value1', 'attr2': 'value2\\xe1\\xbd\\x80'}", - "NS decl: 'myns' 'http://www.python.org/namespace'", - "Start element: 'http://www.python.org/namespace!subelement' {}", - "Character data: 'Contents of subelements'", - "End element: 'http://www.python.org/namespace!subelement'", - "End of NS decl: 'myns'", - "Start element: 'sub2' {}", - 'Start of CDATA section', - "Character data: 'contents of CDATA section'", - 'End of CDATA section', - "End element: 'sub2'", - "External 
entity ref: (None, 'entity.file', None)", - ('Skipped entity', ('skipped_entity', 0)), - "End element: 'root'", - ] - for operation, expected_operation in zip(operations, expected_operations): - assert operation == expected_operation - - def test_unicode(self): - # Try the parse again, this time producing Unicode output - out = self.Outputter() - parser = expat.ParserCreate(namespace_separator='!') - parser.returns_unicode = 1 - for name in self.handler_names: - setattr(parser, name, getattr(out, name)) - - parser.Parse(data, 1) - - operations = out.out - expected_operations = [ - ('XML declaration', (u'1.0', u'iso-8859-1', 0)), - 'PI: u\'xml-stylesheet\' u\'href="stylesheet.css"\'', - "Comment: u' comment data '", - "Not standalone", - ("Start doctype", ('quotations', 'quotations.dtd', None, 1)), - ('Element declaration', (u'root', (2, 0, None, ()))), - ('Attribute list declaration', ('root', 'attr1', 'CDATA', None, - 1)), - ('Attribute list declaration', ('root', 'attr2', 'CDATA', None, - 0)), - "Notation declared: (u'notation', None, u'notation.jpeg', None)", - ('Entity declaration', (u'acirc', 0, u'\xe2', None, None, None, - None)), - ('Entity declaration', (u'external_entity', 0, None, None, - u'entity.file', None, None)), - "Unparsed entity decl: (u'unparsed_entity', None, u'entity.file', None, u'notation')", - "Not standalone", - "End doctype", - "Start element: u'root' {u'attr1': u'value1', u'attr2': u'value2\\u1f40'}", - "NS decl: u'myns' u'http://www.python.org/namespace'", - "Start element: u'http://www.python.org/namespace!subelement' {}", - "Character data: u'Contents of subelements'", - "End element: u'http://www.python.org/namespace!subelement'", - "End of NS decl: u'myns'", - "Start element: u'sub2' {}", - 'Start of CDATA section', - "Character data: u'contents of CDATA section'", - 'End of CDATA section', - "End element: u'sub2'", - "External entity ref: (None, u'entity.file', None)", - ('Skipped entity', ('skipped_entity', 0)), - "End element: u'root'", - ] - for operation, expected_operation in zip(operations, expected_operations): - assert operation == expected_operation - - def test_parse_file(self): - # Try parsing a file - out = self.Outputter() - parser = expat.ParserCreate(namespace_separator='!') - parser.returns_unicode = 1 - for name in self.handler_names: - setattr(parser, name, getattr(out, name)) - file = StringIO.StringIO(data) - - parser.ParseFile(file) - - operations = out.out - expected_operations = [ - ('XML declaration', (u'1.0', u'iso-8859-1', 0)), - 'PI: u\'xml-stylesheet\' u\'href="stylesheet.css"\'', - "Comment: u' comment data '", - "Not standalone", - ("Start doctype", ('quotations', 'quotations.dtd', None, 1)), - ('Element declaration', (u'root', (2, 0, None, ()))), - ('Attribute list declaration', ('root', 'attr1', 'CDATA', None, - 1)), - ('Attribute list declaration', ('root', 'attr2', 'CDATA', None, - 0)), - "Notation declared: (u'notation', None, u'notation.jpeg', None)", - ('Entity declaration', ('acirc', 0, u'\xe2', None, None, None, None)), - ('Entity declaration', (u'external_entity', 0, None, None, u'entity.file', None, None)), - "Unparsed entity decl: (u'unparsed_entity', None, u'entity.file', None, u'notation')", - "Not standalone", - "End doctype", - "Start element: u'root' {u'attr1': u'value1', u'attr2': u'value2\\u1f40'}", - "NS decl: u'myns' u'http://www.python.org/namespace'", - "Start element: u'http://www.python.org/namespace!subelement' {}", - "Character data: u'Contents of subelements'", - "End element: 
u'http://www.python.org/namespace!subelement'", - "End of NS decl: u'myns'", - "Start element: u'sub2' {}", - 'Start of CDATA section', - "Character data: u'contents of CDATA section'", - 'End of CDATA section', - "End element: u'sub2'", - "External entity ref: (None, u'entity.file', None)", - ('Skipped entity', ('skipped_entity', 0)), - "End element: u'root'", - ] - for operation, expected_operation in zip(operations, expected_operations): - assert operation == expected_operation - - -class TestNamespaceSeparator: - def test_legal(self): - # Tests that make sure we get errors when the namespace_separator value - # is illegal, and that we don't for good values: - expat.ParserCreate() - expat.ParserCreate(namespace_separator=None) - expat.ParserCreate(namespace_separator=' ') - - def test_illegal(self): - try: - expat.ParserCreate(namespace_separator=42) - raise AssertionError - except TypeError, e: - assert str(e) == ( - 'ParserCreate() argument 2 must be string or None, not int') - - try: - expat.ParserCreate(namespace_separator='too long') - raise AssertionError - except ValueError, e: - assert str(e) == ( - 'namespace_separator must be at most one character, omitted, or None') - - def test_zero_length(self): - # ParserCreate() needs to accept a namespace_separator of zero length - # to satisfy the requirements of RDF applications that are required - # to simply glue together the namespace URI and the localname. Though - # considered a wart of the RDF specifications, it needs to be supported. - # - # See XML-SIG mailing list thread starting with - # http://mail.python.org/pipermail/xml-sig/2001-April/005202.html - # - expat.ParserCreate(namespace_separator='') # too short - - -class TestInterning: - def test(self): - py.test.skip("Not working") - # Test the interning machinery. - p = expat.ParserCreate() - L = [] - def collector(name, *args): - L.append(name) - p.StartElementHandler = collector - p.EndElementHandler = collector - p.Parse(" ", 1) - tag = L[0] - assert len(L) == 6 - for entry in L: - # L should have the same string repeated over and over. - assert tag is entry - - -class TestBufferText: - def setup_method(self, meth): - self.stuff = [] - self.parser = expat.ParserCreate() - self.parser.buffer_text = 1 - self.parser.CharacterDataHandler = self.CharacterDataHandler - - def check(self, expected, label): - assert self.stuff == expected, ( - "%s\nstuff = %r\nexpected = %r" - % (label, self.stuff, map(unicode, expected))) - - def CharacterDataHandler(self, text): - self.stuff.append(text) - - def StartElementHandler(self, name, attrs): - self.stuff.append("<%s>" % name) - bt = attrs.get("buffer-text") - if bt == "yes": - self.parser.buffer_text = 1 - elif bt == "no": - self.parser.buffer_text = 0 - - def EndElementHandler(self, name): - self.stuff.append("" % name) - - def CommentHandler(self, data): - self.stuff.append("" % data) - - def setHandlers(self, handlers=[]): - for name in handlers: - setattr(self.parser, name, getattr(self, name)) - - def test_default_to_disabled(self): - parser = expat.ParserCreate() - assert not parser.buffer_text - - def test_buffering_enabled(self): - # Make sure buffering is turned on - assert self.parser.buffer_text - self.parser.Parse("123", 1) - assert self.stuff == ['123'], ( - "buffered text not properly collapsed") - - def test1(self): - # XXX This test exposes more detail of Expat's text chunking than we - # XXX like, but it tests what we need to concisely. 
- self.setHandlers(["StartElementHandler"]) - self.parser.Parse("12\n34\n5", 1) - assert self.stuff == ( - ["", "1", "", "2", "\n", "3", "", "4\n5"]), ( - "buffering control not reacting as expected") - - def test2(self): - self.parser.Parse("1<2> \n 3", 1) - assert self.stuff == ["1<2> \n 3"], ( - "buffered text not properly collapsed") - - def test3(self): - self.setHandlers(["StartElementHandler"]) - self.parser.Parse("123", 1) - assert self.stuff == ["", "1", "", "2", "", "3"], ( - "buffered text not properly split") - - def test4(self): - self.setHandlers(["StartElementHandler", "EndElementHandler"]) - self.parser.CharacterDataHandler = None - self.parser.Parse("123", 1) - assert self.stuff == ( - ["", "", "", "", "", ""]) - - def test5(self): - self.setHandlers(["StartElementHandler", "EndElementHandler"]) - self.parser.Parse("123", 1) - assert self.stuff == ( - ["", "1", "", "", "2", "", "", "3", ""]) - - def test6(self): - self.setHandlers(["CommentHandler", "EndElementHandler", - "StartElementHandler"]) - self.parser.Parse("12345 ", 1) - assert self.stuff == ( - ["", "1", "", "", "2", "", "", "345", ""]), ( - "buffered text not properly split") - - def test7(self): - self.setHandlers(["CommentHandler", "EndElementHandler", - "StartElementHandler"]) - self.parser.Parse("12345 ", 1) - assert self.stuff == ( - ["", "1", "", "", "2", "", "", "3", - "", "4", "", "5", ""]), ( - "buffered text not properly split") - - -# Test handling of exception from callback: -class TestHandlerException: - def StartElementHandler(self, name, attrs): - raise RuntimeError(name) - - def test(self): - parser = expat.ParserCreate() - parser.StartElementHandler = self.StartElementHandler - try: - parser.Parse("", 1) - raise AssertionError - except RuntimeError, e: - assert e.args[0] == 'a', ( - "Expected RuntimeError for element 'a', but" + \ - " found %r" % e.args[0]) - - -# Test Current* members: -class TestPosition: - def StartElementHandler(self, name, attrs): - self.check_pos('s') - - def EndElementHandler(self, name): - self.check_pos('e') - - def check_pos(self, event): - pos = (event, - self.parser.CurrentByteIndex, - self.parser.CurrentLineNumber, - self.parser.CurrentColumnNumber) - assert self.upto < len(self.expected_list) - expected = self.expected_list[self.upto] - assert pos == expected, ( - 'Expected position %s, got position %s' %(pos, expected)) - self.upto += 1 - - def test(self): - self.parser = expat.ParserCreate() - self.parser.StartElementHandler = self.StartElementHandler - self.parser.EndElementHandler = self.EndElementHandler - self.upto = 0 - self.expected_list = [('s', 0, 1, 0), ('s', 5, 2, 1), ('s', 11, 3, 2), - ('e', 15, 3, 6), ('e', 17, 4, 1), ('e', 22, 5, 0)] - - xml = '\n \n \n \n' - self.parser.Parse(xml, 1) - - -class Testsf1296433: - def test_parse_only_xml_data(self): - # http://python.org/sf/1296433 - # - xml = "%s" % ('a' * 1025) - # this one doesn't crash - #xml = "%s" % ('a' * 10000) - - class SpecificException(Exception): - pass - - def handler(text): - raise SpecificException - - parser = expat.ParserCreate() - parser.CharacterDataHandler = handler - - py.test.raises(Exception, parser.Parse, xml) - -class TestChardataBuffer: - """ - test setting of chardata buffer size - """ - - def test_1025_bytes(self): - assert self.small_buffer_test(1025) == 2 - - def test_1000_bytes(self): - assert self.small_buffer_test(1000) == 1 - - def test_wrong_size(self): - parser = expat.ParserCreate() - parser.buffer_text = 1 - def f(size): - parser.buffer_size = size - - 
py.test.raises(TypeError, f, sys.maxint+1) - py.test.raises(ValueError, f, -1) - py.test.raises(ValueError, f, 0) - - def test_unchanged_size(self): - xml1 = ("%s" % ('a' * 512)) - xml2 = 'a'*512 + '' - parser = expat.ParserCreate() - parser.CharacterDataHandler = self.counting_handler - parser.buffer_size = 512 - parser.buffer_text = 1 - - # Feed 512 bytes of character data: the handler should be called - # once. - self.n = 0 - parser.Parse(xml1) - assert self.n == 1 - - # Reassign to buffer_size, but assign the same size. - parser.buffer_size = parser.buffer_size - assert self.n == 1 - - # Try parsing rest of the document - parser.Parse(xml2) - assert self.n == 2 - - - def test_disabling_buffer(self): - xml1 = "%s" % ('a' * 512) - xml2 = ('b' * 1024) - xml3 = "%s" % ('c' * 1024) - parser = expat.ParserCreate() - parser.CharacterDataHandler = self.counting_handler - parser.buffer_text = 1 - parser.buffer_size = 1024 - assert parser.buffer_size == 1024 - - # Parse one chunk of XML - self.n = 0 - parser.Parse(xml1, 0) - assert parser.buffer_size == 1024 - assert self.n == 1 - - # Turn off buffering and parse the next chunk. - parser.buffer_text = 0 - assert not parser.buffer_text - assert parser.buffer_size == 1024 - for i in range(10): - parser.Parse(xml2, 0) - assert self.n == 11 - - parser.buffer_text = 1 - assert parser.buffer_text - assert parser.buffer_size == 1024 - parser.Parse(xml3, 1) - assert self.n == 12 - - - - def make_document(self, bytes): - return ("" + bytes * 'a' + '') - - def counting_handler(self, text): - self.n += 1 - - def small_buffer_test(self, buffer_len): - xml = "%s" % ('a' * buffer_len) - parser = expat.ParserCreate() - parser.CharacterDataHandler = self.counting_handler - parser.buffer_size = 1024 - parser.buffer_text = 1 - - self.n = 0 - parser.Parse(xml) - return self.n - - def test_change_size_1(self): - xml1 = "%s" % ('a' * 1024) - xml2 = "aaa%s" % ('a' * 1025) - parser = expat.ParserCreate() - parser.CharacterDataHandler = self.counting_handler - parser.buffer_text = 1 - parser.buffer_size = 1024 - assert parser.buffer_size == 1024 - - self.n = 0 - parser.Parse(xml1, 0) - parser.buffer_size *= 2 - assert parser.buffer_size == 2048 - parser.Parse(xml2, 1) - assert self.n == 2 - - def test_change_size_2(self): - xml1 = "a%s" % ('a' * 1023) - xml2 = "aaa%s" % ('a' * 1025) - parser = expat.ParserCreate() - parser.CharacterDataHandler = self.counting_handler - parser.buffer_text = 1 - parser.buffer_size = 2048 - assert parser.buffer_size == 2048 - - self.n=0 - parser.Parse(xml1, 0) - parser.buffer_size /= 2 - assert parser.buffer_size == 1024 - parser.Parse(xml2, 1) - assert self.n == 4 - - def test_segfault(self): - py.test.raises(TypeError, expat.ParserCreate, 1234123123) - -def test_invalid_data(): - parser = expat.ParserCreate() - parser.Parse('invalid.xml', 0) - try: - parser.Parse("", 1) - except expat.ExpatError, e: - assert e.code == 2 # XXX is this reliable? 
- assert e.lineno == 1 - assert e.message.startswith('syntax error') - else: - py.test.fail("Did not raise") - diff --git a/py/_io/terminalwriter.py b/py/_io/terminalwriter.py --- a/py/_io/terminalwriter.py +++ b/py/_io/terminalwriter.py @@ -271,16 +271,24 @@ ('srWindow', SMALL_RECT), ('dwMaximumWindowSize', COORD)] + _GetStdHandle = ctypes.windll.kernel32.GetStdHandle + _GetStdHandle.argtypes = [wintypes.DWORD] + _GetStdHandle.restype = wintypes.HANDLE def GetStdHandle(kind): - return ctypes.windll.kernel32.GetStdHandle(kind) + return _GetStdHandle(kind) - SetConsoleTextAttribute = \ - ctypes.windll.kernel32.SetConsoleTextAttribute - + SetConsoleTextAttribute = ctypes.windll.kernel32.SetConsoleTextAttribute + SetConsoleTextAttribute.argtypes = [wintypes.HANDLE, wintypes.WORD] + SetConsoleTextAttribute.restype = wintypes.BOOL + + _GetConsoleScreenBufferInfo = \ + ctypes.windll.kernel32.GetConsoleScreenBufferInfo + _GetConsoleScreenBufferInfo.argtypes = [wintypes.HANDLE, + ctypes.POINTER(CONSOLE_SCREEN_BUFFER_INFO)] + _GetConsoleScreenBufferInfo.restype = wintypes.BOOL def GetConsoleInfo(handle): info = CONSOLE_SCREEN_BUFFER_INFO() - ctypes.windll.kernel32.GetConsoleScreenBufferInfo(\ - handle, ctypes.byref(info)) + _GetConsoleScreenBufferInfo(handle, ctypes.byref(info)) return info def _getdimensions(): diff --git a/pypy/doc/Makefile b/pypy/doc/Makefile --- a/pypy/doc/Makefile +++ b/pypy/doc/Makefile @@ -81,6 +81,7 @@ "run these through (pdf)latex." man: + python config/generate.py $(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man @echo @echo "Build finished. The manual pages are in $(BUILDDIR)/man" diff --git a/pypy/doc/coding-guide.rst b/pypy/doc/coding-guide.rst --- a/pypy/doc/coding-guide.rst +++ b/pypy/doc/coding-guide.rst @@ -388,7 +388,9 @@ In a few cases (e.g. hash table manipulation), we need machine-sized unsigned arithmetic. For these cases there is the r_uint class, which is a pure Python implementation of word-sized unsigned integers that silently wrap - around. The purpose of this class (as opposed to helper functions as above) + around. ("word-sized" and "machine-sized" are used equivalently and mean + the native size, which you get using "unsigned long" in C.) + The purpose of this class (as opposed to helper functions as above) is consistent typing: both Python and the annotator will propagate r_uint instances in the program and interpret all the operations between them as unsigned. Instances of r_uint are special-cased by the code generators to diff --git a/pypy/doc/commandline_ref.rst b/pypy/doc/commandline_ref.rst new file mode 100644 --- /dev/null +++ b/pypy/doc/commandline_ref.rst @@ -0,0 +1,10 @@ +Command line reference +====================== + +Manual pages +------------ + +.. toctree:: + :maxdepth: 1 + + man/pypy.1.rst diff --git a/pypy/doc/conf.py b/pypy/doc/conf.py --- a/pypy/doc/conf.py +++ b/pypy/doc/conf.py @@ -45,9 +45,9 @@ # built documents. # # The short X.Y version. -version = '1.7' +version = '1.8' # The full version, including alpha/beta/rc tags. -release = '1.7' +release = '1.8' # The language for content autogenerated by Sphinx. Refer to documentation # for a list of supported languages. diff --git a/pypy/doc/config/objspace.usemodules.pyexpat.txt b/pypy/doc/config/objspace.usemodules.pyexpat.txt --- a/pypy/doc/config/objspace.usemodules.pyexpat.txt +++ b/pypy/doc/config/objspace.usemodules.pyexpat.txt @@ -1,2 +1,1 @@ -Use (experimental) pyexpat module written in RPython, instead of CTypes -version which is used by default. 
+Use the pyexpat module, written in RPython. diff --git a/pypy/doc/config/translation.log.txt b/pypy/doc/config/translation.log.txt --- a/pypy/doc/config/translation.log.txt +++ b/pypy/doc/config/translation.log.txt @@ -2,4 +2,4 @@ These must be enabled by setting the PYPYLOG environment variable. The exact set of features supported by PYPYLOG is described in -pypy/translation/c/src/debug.h. +pypy/translation/c/src/debug_print.h. diff --git a/pypy/doc/garbage_collection.rst b/pypy/doc/garbage_collection.rst --- a/pypy/doc/garbage_collection.rst +++ b/pypy/doc/garbage_collection.rst @@ -142,10 +142,9 @@ So as a first approximation, when compared to the Hybrid GC, the Minimark GC saves one word of memory per old object. -There are a number of environment variables that can be tweaked to -influence the GC. (Their default value should be ok for most usages.) -You can read more about them at the start of -`pypy/rpython/memory/gc/minimark.py`_. +There are :ref:`a number of environment variables +` that can be tweaked to influence the +GC. (Their default value should be ok for most usages.) In more detail: @@ -211,5 +210,4 @@ are preserved. If the object dies then the pre-reserved location becomes free garbage, to be collected at the next major collection. - .. include:: _ref.txt diff --git a/pypy/doc/gc_info.rst b/pypy/doc/gc_info.rst new file mode 100644 --- /dev/null +++ b/pypy/doc/gc_info.rst @@ -0,0 +1,53 @@ +Garbage collector configuration +=============================== + +.. _minimark-environment-variables: + +Minimark +-------- + +PyPy's default ``minimark`` garbage collector is configurable through +several environment variables: + +``PYPY_GC_NURSERY`` + The nursery size. + Defaults to ``4MB``. + Small values (like 1 or 1KB) are useful for debugging. + +``PYPY_GC_MAJOR_COLLECT`` + Major collection memory factor. + Default is ``1.82``, which means trigger a major collection when the + memory consumed equals 1.82 times the memory really used at the end + of the previous major collection. + +``PYPY_GC_GROWTH`` + Major collection threshold's max growth rate. + Default is ``1.4``. + Useful to collect more often than normally on sudden memory growth, + e.g. when there is a temporary peak in memory usage. + +``PYPY_GC_MAX`` + The max heap size. + If coming near this limit, it will first collect more often, then + raise an RPython MemoryError, and if that is not enough, crash the + program with a fatal error. + Try values like ``1.6GB``. + +``PYPY_GC_MAX_DELTA`` + The major collection threshold will never be set to more than + ``PYPY_GC_MAX_DELTA`` the amount really used after a collection. + Defaults to 1/8th of the total RAM size (which is constrained to be + at most 2/3/4GB on 32-bit systems). + Try values like ``200MB``. + +``PYPY_GC_MIN`` + Don't collect while the memory size is below this limit. + Useful to avoid spending all the time in the GC in very small + programs. + Defaults to 8 times the nursery. + +``PYPY_GC_DEBUG`` + Enable extra checks around collections that are too slow for normal + use. + Values are ``0`` (off), ``1`` (on major collections) or ``2`` (also + on minor collections). diff --git a/pypy/doc/getting-started-python.rst b/pypy/doc/getting-started-python.rst --- a/pypy/doc/getting-started-python.rst +++ b/pypy/doc/getting-started-python.rst @@ -103,18 +103,22 @@ executable. 
The executable behaves mostly like a normal Python interpreter:: $ ./pypy-c - Python 2.7.0 (61ef2a11b56a, Mar 02 2011, 03:00:11) - [PyPy 1.6.0 with GCC 4.4.3] on linux2 + Python 2.7.2 (0e28b379d8b3, Feb 09 2012, 19:41:03) + [PyPy 1.8.0 with GCC 4.4.3] on linux2 Type "help", "copyright", "credits" or "license" for more information. And now for something completely different: ``this sentence is false'' >>>> 46 - 4 42 >>>> from test import pystone >>>> pystone.main() - Pystone(1.1) time for 50000 passes = 0.280017 - This machine benchmarks at 178561 pystones/second - >>>> + Pystone(1.1) time for 50000 passes = 0.220015 + This machine benchmarks at 227257 pystones/second + >>>> pystone.main() + Pystone(1.1) time for 50000 passes = 0.060004 + This machine benchmarks at 833278 pystones/second + >>>> +Note that pystone gets faster as the JIT kicks in. This executable can be moved around or copied on other machines; see Installation_ below. diff --git a/pypy/doc/getting-started.rst b/pypy/doc/getting-started.rst --- a/pypy/doc/getting-started.rst +++ b/pypy/doc/getting-started.rst @@ -53,14 +53,15 @@ PyPy is ready to be executed as soon as you unpack the tarball or the zip file, with no need to install it in any specific location:: - $ tar xf pypy-1.7-linux.tar.bz2 - - $ ./pypy-1.7/bin/pypy - Python 2.7.1 (?, Apr 27 2011, 12:44:21) - [PyPy 1.7.0 with GCC 4.4.3] on linux2 + $ tar xf pypy-1.8-linux.tar.bz2 + $ ./pypy-1.8/bin/pypy + Python 2.7.2 (0e28b379d8b3, Feb 09 2012, 19:41:03) + [PyPy 1.8.0 with GCC 4.4.3] on linux2 Type "help", "copyright", "credits" or "license" for more information. - And now for something completely different: ``implementing LOGO in LOGO: - "turtles all the way down"'' + And now for something completely different: ``it seems to me that once you + settle on an execution / object model and / or bytecode format, you've already + decided what languages (where the 's' seems superfluous) support is going to be + first class for'' >>>> If you want to make PyPy available system-wide, you can put a symlink to the @@ -75,14 +76,14 @@ $ curl -O https://raw.github.com/pypa/pip/master/contrib/get-pip.py - $ ./pypy-1.7/bin/pypy distribute_setup.py + $ ./pypy-1.8/bin/pypy distribute_setup.py - $ ./pypy-1.7/bin/pypy get-pip.py + $ ./pypy-1.8/bin/pypy get-pip.py - $ ./pypy-1.7/bin/pip install pygments # for example + $ ./pypy-1.8/bin/pip install pygments # for example -3rd party libraries will be installed in ``pypy-1.7/site-packages``, and -the scripts in ``pypy-1.7/bin``. +3rd party libraries will be installed in ``pypy-1.8/site-packages``, and +the scripts in ``pypy-1.8/bin``. Installing using virtualenv --------------------------- diff --git a/pypy/doc/index.rst b/pypy/doc/index.rst --- a/pypy/doc/index.rst +++ b/pypy/doc/index.rst @@ -15,7 +15,7 @@ * `FAQ`_: some frequently asked questions. -* `Release 1.7`_: the latest official release +* `Release 1.8`_: the latest official release * `PyPy Blog`_: news and status info about PyPy @@ -75,7 +75,7 @@ .. _`Getting Started`: getting-started.html .. _`Papers`: extradoc.html .. _`Videos`: video-index.html -.. _`Release 1.7`: http://pypy.org/download.html +.. _`Release 1.8`: http://pypy.org/download.html .. _`speed.pypy.org`: http://speed.pypy.org .. _`RPython toolchain`: translation.html .. _`potential project ideas`: project-ideas.html @@ -120,9 +120,9 @@ Windows, on top of .NET, and on top of Java. 
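The ``PYPY_GC_*`` variables documented in the new ``gc_info.rst`` above are ordinary environment variables, so a tuned collector can be selected per run. A minimal sketch, assuming a ``pypy`` binary on ``PATH`` (the chosen values are only examples, not recommendations)::

    import os
    import subprocess

    env = dict(os.environ)
    env["PYPY_GC_NURSERY"] = "8MB"   # nursery size; the default is 4MB
    env["PYPY_GC_MAX"] = "1.6GB"     # max heap size; the docs suggest trying values like 1.6GB

    # Launch any script or command under the tuned collector.
    subprocess.check_call(["pypy", "-c", "print 'tuned GC'"], env=env)
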
To dig into PyPy it is recommended to try out the current Mercurial default branch, which is always working or mostly working, -instead of the latest release, which is `1.7`__. +instead of the latest release, which is `1.8`__. -.. __: release-1.7.0.html +.. __: release-1.8.0.html PyPy is mainly developed on Linux and Mac OS X. Windows is supported, but platform-specific bugs tend to take longer before we notice and fix @@ -353,10 +353,12 @@ getting-started-dev.rst windows.rst faq.rst + commandline_ref.rst architecture.rst coding-guide.rst cpython_differences.rst garbage_collection.rst + gc_info.rst interpreter.rst objspace.rst __pypy__-module.rst diff --git a/pypy/doc/jit-hooks.rst b/pypy/doc/jit-hooks.rst new file mode 100644 --- /dev/null +++ b/pypy/doc/jit-hooks.rst @@ -0,0 +1,66 @@ +JIT hooks in PyPy +================= + +There are several hooks in the `pypyjit` module that may help you with +understanding what's pypy's JIT doing while running your program. There +are three functions related to that coming from the `pypyjit` module: + +* `set_optimize_hook`:: + + Set a compiling hook that will be called each time a loop is optimized, + but before assembler compilation. This allows to add additional + optimizations on Python level. + + The hook will be called with the following signature: + hook(jitdriver_name, loop_type, greenkey or guard_number, operations) + + jitdriver_name is the name of this particular jitdriver, 'pypyjit' is + the main interpreter loop + + loop_type can be either `loop` `entry_bridge` or `bridge` + in case loop is not `bridge`, greenkey will be a tuple of constants + or a string describing it. + + for the interpreter loop` it'll be a tuple + (code, offset, is_being_profiled) + + Note that jit hook is not reentrant. It means that if the code + inside the jit hook is itself jitted, it will get compiled, but the + jit hook won't be called for that. + + Result value will be the resulting list of operations, or None + +* `set_compile_hook`:: + + Set a compiling hook that will be called each time a loop is compiled. + The hook will be called with the following signature: + hook(jitdriver_name, loop_type, greenkey or guard_number, operations, + assembler_addr, assembler_length) + + jitdriver_name is the name of this particular jitdriver, 'pypyjit' is + the main interpreter loop + + loop_type can be either `loop` `entry_bridge` or `bridge` + in case loop is not `bridge`, greenkey will be a tuple of constants + or a string describing it. + + for the interpreter loop` it'll be a tuple + (code, offset, is_being_profiled) + + assembler_addr is an integer describing where assembler starts, + can be accessed via ctypes, assembler_lenght is the lenght of compiled + asm + + Note that jit hook is not reentrant. It means that if the code + inside the jit hook is itself jitted, it will get compiled, but the + jit hook won't be called for that. + +* `set_abort_hook`:: + + Set a hook (callable) that will be called each time there is tracing + aborted due to some reason. + + The hook will be called as in: hook(jitdriver_name, greenkey, reason) + + Where reason is the reason for abort, see documentation for set_compile_hook + for descriptions of other arguments. diff --git a/pypy/doc/jit/index.rst b/pypy/doc/jit/index.rst --- a/pypy/doc/jit/index.rst +++ b/pypy/doc/jit/index.rst @@ -21,6 +21,9 @@ - Notes_ about the current work in PyPy +- Hooks_ debugging facilities available to a python programmer + .. _Overview: overview.html .. _Notes: pyjitpl5.html +.. 
_Hooks: ../jit-hooks.html diff --git a/pypy/doc/man/pypy.1.rst b/pypy/doc/man/pypy.1.rst --- a/pypy/doc/man/pypy.1.rst +++ b/pypy/doc/man/pypy.1.rst @@ -24,6 +24,9 @@ -S Do not ``import site`` on initialization. +-s + Don't add the user site directory to `sys.path`. + -u Unbuffered binary ``stdout`` and ``stderr``. @@ -39,6 +42,9 @@ -E Ignore environment variables (such as ``PYTHONPATH``). +-B + Disable writing bytecode (``.pyc``) files. + --version Print the PyPy version. @@ -84,6 +90,64 @@ Optimizations to enabled or ``all``. Warning, this option is dangerous, and should be avoided. +ENVIRONMENT +=========== + +``PYTHONPATH`` + Add directories to pypy's module search path. + The format is the same as shell's ``PATH``. + +``PYTHONSTARTUP`` + A script referenced by this variable will be executed before the + first prompt is displayed, in interactive mode. + +``PYTHONDONTWRITEBYTECODE`` + If set to a non-empty value, equivalent to the ``-B`` option. + Disable writing ``.pyc`` files. + +``PYTHONINSPECT`` + If set to a non-empty value, equivalent to the ``-i`` option. + Inspect interactively after running the specified script. + +``PYTHONIOENCODING`` + If this is set, it overrides the encoding used for + *stdin*/*stdout*/*stderr*. + The syntax is *encodingname*:*errorhandler* + The *errorhandler* part is optional and has the same meaning as in + `str.encode`. + +``PYTHONNOUSERSITE`` + If set to a non-empty value, equivalent to the ``-s`` option. + Don't add the user site directory to `sys.path`. + +``PYTHONWARNINGS`` + If set, equivalent to the ``-W`` option (warning control). + The value should be a comma-separated list of ``-W`` parameters. + +``PYPYLOG`` + If set to a non-empty value, enable logging, the format is: + + *fname* + logging for profiling: includes all + ``debug_start``/``debug_stop`` but not any nested + ``debug_print``. + *fname* can be ``-`` to log to *stderr*. + + ``:``\ *fname* + Full logging, including ``debug_print``. + + *prefix*\ ``:``\ *fname* + Conditional logging. + Multiple prefixes can be specified, comma-separated. + Only sections whose name match the prefix will be logged. + + ``PYPYLOG``\ =\ ``jit-log-opt,jit-backend:``\ *logfile* will + generate a log suitable for *jitviewer*, a tool for debugging + performance issues under PyPy. + +.. include:: ../gc_info.rst + :start-line: 7 + SEE ALSO ======== diff --git a/pypy/doc/release-1.8.0.rst b/pypy/doc/release-1.8.0.rst --- a/pypy/doc/release-1.8.0.rst +++ b/pypy/doc/release-1.8.0.rst @@ -1,17 +1,22 @@ ============================ -PyPy 1.7 - business as usual +PyPy 1.8 - business as usual ============================ -We're pleased to announce the 1.8 release of PyPy. As became a habit, this -release brings a lot of bugfixes, performance and memory improvements over -the 1.7 release. The main highlight of the release is the introduction of -list strategies which makes homogenous lists more efficient both in terms -of performance and memory. Otherwise it's "business as usual" in the sense -that performance improved roughly 10% on average since the previous release. -You can download the PyPy 1.8 release here: +We're pleased to announce the 1.8 release of PyPy. As habitual this +release brings a lot of bugfixes, together with performance and memory +improvements over the 1.7 release. The main highlight of the release +is the introduction of `list strategies`_ which makes homogenous lists +more efficient both in terms of performance and memory. This release +also upgrades us from Python 2.7.1 compatibility to 2.7.2. 
Otherwise +it's "business as usual" in the sense that performance improved +roughly 10% on average since the previous release. + +you can download the PyPy 1.8 release here: http://pypy.org/download.html +.. _`list strategies`: http://morepypy.blogspot.com/2011/10/more-compact-lists-with-list-strategies.html + What is PyPy? ============= @@ -20,7 +25,8 @@ due to its integrated tracing JIT compiler. This release supports x86 machines running Linux 32/64, Mac OS X 32/64 or -Windows 32. Windows 64 work is ongoing, but not yet natively supported. +Windows 32. Windows 64 work has been stalled, we would welcome a volunteer +to handle that. .. _`pypy 1.8 and cpython 2.7.1`: http://speed.pypy.org @@ -33,20 +39,60 @@ the JIT performance in places that use such lists. There are also special strategies for unicode and string lists. -* As usual, numerous performance improvements. There are too many examples - which python constructs now should behave faster to list them. +* As usual, numerous performance improvements. There are many examples + of python constructs that now should be faster; too many to list them. * Bugfixes and compatibility fixes with CPython. * Windows fixes. -* NumPy effort progress, for the exact list of things that have been done, +* NumPy effort progress; for the exact list of things that have been done, consult the `numpy status page`_. A tentative list of things that has been done: - xxxx # list it, multidim arrays in particular + * multi dimensional arrays -* Fundraising XXX + * various sizes of dtypes -.. _`numpy status page`: xxx -.. _`numpy status update blog report`: xxx + * a lot of ufuncs + + * a lot of other minor changes + + Right now the `numpy` module is available under both `numpy` and `numpypy` + names. However, because it's incomplete, you have to `import numpypy` first + before doing any imports from `numpy`. + +* New JIT hooks that allow you to hook into the JIT process from your python + program. There is a `brief overview`_ of what they offer. + +* Standard library upgrade from 2.7.1 to 2.7.2. + +Ongoing work +============ + +As usual, there is quite a bit of ongoing work that either didn't make it to +the release or is not ready yet. Highlights include: + +* Non-x86 backends for the JIT: ARMv7 (almost ready) and PPC64 (in progress) + +* Specialized type instances - allocate instances as efficient as C structs, + including type specialization + +* More numpy work + +* Since the last release there was a significant breakthrough in PyPy's + fundraising. We now have enough funds to work on first stages of `numpypy`_ + and `py3k`_. We would like to thank again to everyone who donated. + +* It's also probably worth noting, we're considering donations for the + Software Transactional Memory project. You can read more about `our plans`_ + +Cheers, +The PyPy Team + +.. _`brief overview`: http://doc.pypy.org/en/latest/jit-hooks.html +.. _`numpy status page`: http://buildbot.pypy.org/numpy-status/latest.html +.. _`numpy status update blog report`: http://morepypy.blogspot.com/2012/01/numpypy-status-update.html +.. _`numpypy`: http://pypy.org/numpydonate.html +.. _`py3k`: http://pypy.org/py3donate.html +.. _`our plans`: http://morepypy.blogspot.com/2012/01/transactional-memory-ii.html diff --git a/pypy/interpreter/astcompiler/optimize.py b/pypy/interpreter/astcompiler/optimize.py --- a/pypy/interpreter/astcompiler/optimize.py +++ b/pypy/interpreter/astcompiler/optimize.py @@ -302,8 +302,7 @@ # narrow builds will return a surrogate. 
In both # the cases skip the optimization in order to # produce compatible pycs. - if (self.space.isinstance_w(w_obj, self.space.w_unicode) - and + if (self.space.isinstance_w(w_obj, self.space.w_unicode) and self.space.isinstance_w(w_const, self.space.w_unicode)): unistr = self.space.unicode_w(w_const) if len(unistr) == 1: @@ -311,7 +310,7 @@ else: ch = 0 if (ch > 0xFFFF or - (MAXUNICODE == 0xFFFF and 0xD800 <= ch <= 0xDFFFF)): + (MAXUNICODE == 0xFFFF and 0xD800 <= ch <= 0xDFFF)): return subs return ast.Const(w_const, subs.lineno, subs.col_offset) diff --git a/pypy/interpreter/astcompiler/test/test_compiler.py b/pypy/interpreter/astcompiler/test/test_compiler.py --- a/pypy/interpreter/astcompiler/test/test_compiler.py +++ b/pypy/interpreter/astcompiler/test/test_compiler.py @@ -838,7 +838,7 @@ # Just checking this doesn't crash out self.count_instructions(source) - def test_const_fold_unicode_subscr(self): + def test_const_fold_unicode_subscr(self, monkeypatch): source = """def f(): return u"abc"[0] """ @@ -853,6 +853,14 @@ assert counts == {ops.LOAD_CONST: 2, ops.BINARY_SUBSCR: 1, ops.RETURN_VALUE: 1} + monkeypatch.setattr(optimize, "MAXUNICODE", 0xFFFF) + source = """def f(): + return u"\uE01F"[0] + """ + counts = self.count_instructions(source) + assert counts == {ops.LOAD_CONST: 1, ops.RETURN_VALUE: 1} + monkeypatch.undo() + # getslice is not yet optimized. # Still, check a case which yields the empty string. source = """def f(): diff --git a/pypy/interpreter/baseobjspace.py b/pypy/interpreter/baseobjspace.py --- a/pypy/interpreter/baseobjspace.py +++ b/pypy/interpreter/baseobjspace.py @@ -1340,6 +1340,15 @@ def unicode_w(self, w_obj): return w_obj.unicode_w(self) + def unicode0_w(self, w_obj): + "Like unicode_w, but rejects strings with NUL bytes." + from pypy.rlib import rstring + result = w_obj.unicode_w(self) + if u'\x00' in result: + raise OperationError(self.w_TypeError, self.wrap( + 'argument must be a unicode string without NUL characters')) + return rstring.assert_str0(result) + def realunicode_w(self, w_obj): # Like unicode_w, but only works if w_obj is really of type # 'unicode'. @@ -1638,6 +1647,9 @@ 'UnicodeEncodeError', 'UnicodeDecodeError', ] + +if sys.platform.startswith("win"): + ObjSpace.ExceptionTable += ['WindowsError'] ## Irregular part of the interface: # diff --git a/pypy/interpreter/test/test_objspace.py b/pypy/interpreter/test/test_objspace.py --- a/pypy/interpreter/test/test_objspace.py +++ b/pypy/interpreter/test/test_objspace.py @@ -178,6 +178,14 @@ res = self.space.interp_w(Function, w(None), can_be_None=True) assert res is None + def test_str0_w(self): + space = self.space + w = space.wrap + assert space.str0_w(w("123")) == "123" + exc = space.raises_w(space.w_TypeError, space.str0_w, w("123\x004")) + assert space.unicode0_w(w(u"123")) == u"123" + exc = space.raises_w(space.w_TypeError, space.unicode0_w, w(u"123\x004")) + def test_getindex_w(self): w_instance1 = self.space.appexec([], """(): class X(object): diff --git a/pypy/jit/codewriter/flatten.py b/pypy/jit/codewriter/flatten.py --- a/pypy/jit/codewriter/flatten.py +++ b/pypy/jit/codewriter/flatten.py @@ -162,7 +162,9 @@ if len(block.exits) == 1: # A single link, fall-through link = block.exits[0] - assert link.exitcase is None + assert link.exitcase in (None, False, True) + # the cases False or True should not really occur, but can show + # up in the manually hacked graphs for generators... 
self.make_link(link) # elif block.exitswitch is c_last_exception: diff --git a/pypy/jit/codewriter/policy.py b/pypy/jit/codewriter/policy.py --- a/pypy/jit/codewriter/policy.py +++ b/pypy/jit/codewriter/policy.py @@ -48,7 +48,7 @@ mod = func.__module__ or '?' if mod.startswith('pypy.rpython.module.'): return True - if mod.startswith('pypy.translator.'): # XXX wtf? + if mod == 'pypy.translator.goal.nanos': # more helpers return True return False diff --git a/pypy/jit/metainterp/test/test_ajit.py b/pypy/jit/metainterp/test/test_ajit.py --- a/pypy/jit/metainterp/test/test_ajit.py +++ b/pypy/jit/metainterp/test/test_ajit.py @@ -3706,6 +3706,18 @@ # here it works again self.check_operations_history(guard_class=0, record_known_class=1) + def test_generator(self): + def g(n): + yield n+1 + yield n+2 + yield n+3 + def f(n): + gen = g(n) + return gen.next() * gen.next() * gen.next() + res = self.interp_operations(f, [10]) + assert res == 11 * 12 * 13 + self.check_operations_history(int_add=3, int_mul=2) + class TestLLtype(BaseLLtypeTests, LLJitMixin): def test_tagged(self): diff --git a/pypy/module/_ffi/test/test__ffi.py b/pypy/module/_ffi/test/test__ffi.py --- a/pypy/module/_ffi/test/test__ffi.py +++ b/pypy/module/_ffi/test/test__ffi.py @@ -190,6 +190,7 @@ def test_convert_strings_to_char_p(self): """ + DLLEXPORT long mystrlen(char* s) { long len = 0; @@ -215,6 +216,7 @@ def test_convert_unicode_to_unichar_p(self): """ #include + DLLEXPORT long mystrlen_u(wchar_t* s) { long len = 0; @@ -241,6 +243,7 @@ def test_keepalive_temp_buffer(self): """ + DLLEXPORT char* do_nothing(char* s) { return s; @@ -525,5 +528,7 @@ from _ffi import CDLL, types libfoo = CDLL(self.libfoo_name) raises(AttributeError, "libfoo.getfunc('I_do_not_exist', [], types.void)") + if self.iswin32: + skip("unix specific") libnone = CDLL(None) raises(AttributeError, "libnone.getfunc('I_do_not_exist', [], types.void)") diff --git a/pypy/module/_file/test/test_file.py b/pypy/module/_file/test/test_file.py --- a/pypy/module/_file/test/test_file.py +++ b/pypy/module/_file/test/test_file.py @@ -265,6 +265,13 @@ if option.runappdirect: py.test.skip("works with internals of _file impl on py.py") + import platform + if platform.system() == 'Windows': + # XXX This test crashes until someone implements something like + # XXX verify_fd from + # XXX http://hg.python.org/cpython/file/80ddbd822227/Modules/posixmodule.c#l434 + # XXX and adds it to fopen + assert False state = [0] def read(fd, n=None): diff --git a/pypy/module/_io/test/test_fileio.py b/pypy/module/_io/test/test_fileio.py --- a/pypy/module/_io/test/test_fileio.py +++ b/pypy/module/_io/test/test_fileio.py @@ -134,7 +134,10 @@ assert a == 'a\nbxxxxxxx' def test_nonblocking_read(self): - import os, fcntl + try: + import os, fcntl + except ImportError: + skip("need fcntl to set nonblocking mode") r_fd, w_fd = os.pipe() # set nonblocking fcntl.fcntl(r_fd, fcntl.F_SETFL, os.O_NONBLOCK) diff --git a/pypy/module/cpyext/api.py b/pypy/module/cpyext/api.py --- a/pypy/module/cpyext/api.py +++ b/pypy/module/cpyext/api.py @@ -23,6 +23,7 @@ from pypy.interpreter.function import StaticMethod from pypy.objspace.std.sliceobject import W_SliceObject from pypy.module.__builtin__.descriptor import W_Property +from pypy.module.__builtin__.interp_classobj import W_ClassObject from pypy.module.__builtin__.interp_memoryview import W_MemoryView from pypy.rlib.entrypoint import entrypoint from pypy.rlib.unroll import unrolling_iterable @@ -397,6 +398,7 @@ 'Module': 'space.gettypeobject(Module.typedef)', 
'Property': 'space.gettypeobject(W_Property.typedef)', 'Slice': 'space.gettypeobject(W_SliceObject.typedef)', + 'Class': 'space.gettypeobject(W_ClassObject.typedef)', 'StaticMethod': 'space.gettypeobject(StaticMethod.typedef)', 'CFunction': 'space.gettypeobject(cpyext.methodobject.W_PyCFunctionObject.typedef)', 'WrapperDescr': 'space.gettypeobject(cpyext.methodobject.W_PyCMethodObject.typedef)' diff --git a/pypy/module/cpyext/include/patchlevel.h b/pypy/module/cpyext/include/patchlevel.h --- a/pypy/module/cpyext/include/patchlevel.h +++ b/pypy/module/cpyext/include/patchlevel.h @@ -21,12 +21,12 @@ /* Version parsed out into numeric values */ #define PY_MAJOR_VERSION 2 #define PY_MINOR_VERSION 7 -#define PY_MICRO_VERSION 1 +#define PY_MICRO_VERSION 2 #define PY_RELEASE_LEVEL PY_RELEASE_LEVEL_FINAL #define PY_RELEASE_SERIAL 0 /* Version as a string */ -#define PY_VERSION "2.7.1" +#define PY_VERSION "2.7.2" /* PyPy version as a string */ #define PYPY_VERSION "1.8.1" diff --git a/pypy/module/cpyext/test/test_classobject.py b/pypy/module/cpyext/test/test_classobject.py --- a/pypy/module/cpyext/test/test_classobject.py +++ b/pypy/module/cpyext/test/test_classobject.py @@ -1,4 +1,5 @@ from pypy.module.cpyext.test.test_api import BaseApiTest +from pypy.module.cpyext.test.test_cpyext import AppTestCpythonExtensionBase from pypy.interpreter.function import Function, Method class TestClassObject(BaseApiTest): @@ -51,3 +52,14 @@ assert api.PyInstance_Check(w_instance) assert space.is_true(space.call_method(space.builtin, "isinstance", w_instance, w_class)) + +class AppTestStringObject(AppTestCpythonExtensionBase): + def test_class_type(self): + module = self.import_extension('foo', [ + ("get_classtype", "METH_NOARGS", + """ + Py_INCREF(&PyClass_Type); + return &PyClass_Type; + """)]) + class C: pass + assert module.get_classtype() is type(C) diff --git a/pypy/module/micronumpy/__init__.py b/pypy/module/micronumpy/__init__.py --- a/pypy/module/micronumpy/__init__.py +++ b/pypy/module/micronumpy/__init__.py @@ -31,7 +31,7 @@ 'concatenate': 'interp_numarray.concatenate', 'set_string_function': 'appbridge.set_string_function', - + 'count_reduce_items': 'interp_numarray.count_reduce_items', 'True_': 'types.Bool.True', @@ -113,6 +113,7 @@ ("tan", "tan"), ('bitwise_and', 'bitwise_and'), ('bitwise_or', 'bitwise_or'), + ('bitwise_xor', 'bitwise_xor'), ('bitwise_not', 'invert'), ('isnan', 'isnan'), ('isinf', 'isinf'), @@ -129,8 +130,5 @@ 'min': 'app_numpy.min', 'identity': 'app_numpy.identity', 'max': 'app_numpy.max', - 'inf': 'app_numpy.inf', - 'e': 'app_numpy.e', - 'pi': 'app_numpy.pi', 'arange': 'app_numpy.arange', } diff --git a/pypy/module/micronumpy/app_numpy.py b/pypy/module/micronumpy/app_numpy.py --- a/pypy/module/micronumpy/app_numpy.py +++ b/pypy/module/micronumpy/app_numpy.py @@ -3,11 +3,6 @@ import _numpypy -inf = float("inf") -e = math.e -pi = math.pi - - def average(a): # This implements a weighted average, for now we don't implement the # weighting, just the average part! @@ -59,7 +54,7 @@ if not hasattr(a, "max"): a = _numpypy.array(a) return a.max(axis) - + def arange(start, stop=None, step=1, dtype=None): '''arange([start], stop[, step], dtype=None) Generate values in the half-interval [start, stop). 
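The micronumpy changes above and below drop the module-level ``inf``/``e``/``pi`` constants in favour of the stdlib equivalents and fill in the missing arithmetic and bitwise operators (xor, shifts, divmod, ...). A small app-level sketch of what now works, assuming a PyPy build with the micronumpy module enabled (the asserted values are taken from the tests further down)::

    import math
    from _numpypy import arange, int_

    # inf/e/pi are no longer exported by the module; use the plain Python ones.
    inf, e, pi = float("inf"), math.e, math.pi

    a = arange(5)                                # values in the half-interval [0, 5)
    assert (a ^ 3 == [3, 2, 1, 0, 7]).all()      # new bitwise_xor ufunc
    assert (2 >> a == [2, 1, 0, 0, 0]).all()     # new right_shift, reflected form
    assert divmod(8, int_(3)) == (int_(2), int_(2))   # divmod on scalar boxes
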
diff --git a/pypy/module/micronumpy/interp_boxes.py b/pypy/module/micronumpy/interp_boxes.py --- a/pypy/module/micronumpy/interp_boxes.py +++ b/pypy/module/micronumpy/interp_boxes.py @@ -83,7 +83,15 @@ descr_sub = _binop_impl("subtract") descr_mul = _binop_impl("multiply") descr_div = _binop_impl("divide") + descr_truediv = _binop_impl("true_divide") + descr_mod = _binop_impl("mod") descr_pow = _binop_impl("power") + descr_lshift = _binop_impl("left_shift") + descr_rshift = _binop_impl("right_shift") + descr_and = _binop_impl("bitwise_and") + descr_or = _binop_impl("bitwise_or") + descr_xor = _binop_impl("bitwise_xor") + descr_eq = _binop_impl("equal") descr_ne = _binop_impl("not_equal") descr_lt = _binop_impl("less") @@ -94,9 +102,30 @@ descr_radd = _binop_right_impl("add") descr_rsub = _binop_right_impl("subtract") descr_rmul = _binop_right_impl("multiply") + descr_rdiv = _binop_right_impl("divide") + descr_rtruediv = _binop_right_impl("true_divide") + descr_rmod = _binop_right_impl("mod") + descr_rpow = _binop_right_impl("power") + descr_rlshift = _binop_right_impl("left_shift") + descr_rrshift = _binop_right_impl("right_shift") + descr_rand = _binop_right_impl("bitwise_and") + descr_ror = _binop_right_impl("bitwise_or") + descr_rxor = _binop_right_impl("bitwise_xor") + descr_pos = _unaryop_impl("positive") descr_neg = _unaryop_impl("negative") descr_abs = _unaryop_impl("absolute") + descr_invert = _unaryop_impl("invert") + + def descr_divmod(self, space, w_other): + w_quotient = self.descr_div(space, w_other) + w_remainder = self.descr_mod(space, w_other) + return space.newtuple([w_quotient, w_remainder]) + + def descr_rdivmod(self, space, w_other): + w_quotient = self.descr_rdiv(space, w_other) + w_remainder = self.descr_rmod(space, w_other) + return space.newtuple([w_quotient, w_remainder]) def item(self, space): return self.get_dtype(space).itemtype.to_builtin_type(space, self) @@ -221,11 +250,29 @@ __sub__ = interp2app(W_GenericBox.descr_sub), __mul__ = interp2app(W_GenericBox.descr_mul), __div__ = interp2app(W_GenericBox.descr_div), + __truediv__ = interp2app(W_GenericBox.descr_truediv), + __mod__ = interp2app(W_GenericBox.descr_mod), + __divmod__ = interp2app(W_GenericBox.descr_divmod), __pow__ = interp2app(W_GenericBox.descr_pow), + __lshift__ = interp2app(W_GenericBox.descr_lshift), + __rshift__ = interp2app(W_GenericBox.descr_rshift), + __and__ = interp2app(W_GenericBox.descr_and), + __or__ = interp2app(W_GenericBox.descr_or), + __xor__ = interp2app(W_GenericBox.descr_xor), __radd__ = interp2app(W_GenericBox.descr_radd), __rsub__ = interp2app(W_GenericBox.descr_rsub), __rmul__ = interp2app(W_GenericBox.descr_rmul), + __rdiv__ = interp2app(W_GenericBox.descr_rdiv), + __rtruediv__ = interp2app(W_GenericBox.descr_rtruediv), + __rmod__ = interp2app(W_GenericBox.descr_rmod), + __rdivmod__ = interp2app(W_GenericBox.descr_rdivmod), + __rpow__ = interp2app(W_GenericBox.descr_rpow), + __rlshift__ = interp2app(W_GenericBox.descr_rlshift), + __rrshift__ = interp2app(W_GenericBox.descr_rrshift), + __rand__ = interp2app(W_GenericBox.descr_rand), + __ror__ = interp2app(W_GenericBox.descr_ror), + __rxor__ = interp2app(W_GenericBox.descr_rxor), __eq__ = interp2app(W_GenericBox.descr_eq), __ne__ = interp2app(W_GenericBox.descr_ne), @@ -234,8 +281,10 @@ __gt__ = interp2app(W_GenericBox.descr_gt), __ge__ = interp2app(W_GenericBox.descr_ge), + __pos__ = interp2app(W_GenericBox.descr_pos), __neg__ = interp2app(W_GenericBox.descr_neg), __abs__ = interp2app(W_GenericBox.descr_abs), + __invert__ = 
interp2app(W_GenericBox.descr_invert), tolist = interp2app(W_GenericBox.item), ) diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -4,17 +4,16 @@ from pypy.interpreter.typedef import TypeDef, GetSetProperty from pypy.module.micronumpy import (interp_ufuncs, interp_dtype, interp_boxes, signature, support, loop) -from pypy.module.micronumpy.strides import (calculate_slice_strides, - shape_agreement, find_shape_and_elems, get_shape_from_iterable, - calc_new_strides, to_coords) -from dot import multidim_dot, match_dot_shapes +from pypy.module.micronumpy.appbridge import get_appbridge_cache +from pypy.module.micronumpy.dot import multidim_dot, match_dot_shapes +from pypy.module.micronumpy.interp_iter import (ArrayIterator, + SkipLastAxisIterator, Chunk, ViewIterator, Chunks, RecordChunk) +from pypy.module.micronumpy.strides import (shape_agreement, + find_shape_and_elems, get_shape_from_iterable, calc_new_strides, to_coords) from pypy.rlib import jit +from pypy.rlib.rstring import StringBuilder from pypy.rpython.lltypesystem import lltype, rffi from pypy.tool.sourcetools import func_with_new_name -from pypy.rlib.rstring import StringBuilder -from pypy.module.micronumpy.interp_iter import (ArrayIterator, - SkipLastAxisIterator, Chunks, Chunk, ViewIterator, RecordChunk) -from pypy.module.micronumpy.appbridge import get_appbridge_cache count_driver = jit.JitDriver( @@ -101,8 +100,14 @@ descr_sub = _binop_impl("subtract") descr_mul = _binop_impl("multiply") descr_div = _binop_impl("divide") + descr_truediv = _binop_impl("true_divide") + descr_mod = _binop_impl("mod") descr_pow = _binop_impl("power") - descr_mod = _binop_impl("mod") + descr_lshift = _binop_impl("left_shift") + descr_rshift = _binop_impl("right_shift") + descr_and = _binop_impl("bitwise_and") + descr_or = _binop_impl("bitwise_or") + descr_xor = _binop_impl("bitwise_xor") descr_eq = _binop_impl("equal") descr_ne = _binop_impl("not_equal") @@ -111,8 +116,10 @@ descr_gt = _binop_impl("greater") descr_ge = _binop_impl("greater_equal") - descr_and = _binop_impl("bitwise_and") - descr_or = _binop_impl("bitwise_or") + def descr_divmod(self, space, w_other): + w_quotient = self.descr_div(space, w_other) + w_remainder = self.descr_mod(space, w_other) + return space.newtuple([w_quotient, w_remainder]) def _binop_right_impl(ufunc_name): def impl(self, space, w_other): @@ -127,8 +134,19 @@ descr_rsub = _binop_right_impl("subtract") descr_rmul = _binop_right_impl("multiply") descr_rdiv = _binop_right_impl("divide") + descr_rtruediv = _binop_right_impl("true_divide") + descr_rmod = _binop_right_impl("mod") descr_rpow = _binop_right_impl("power") - descr_rmod = _binop_right_impl("mod") + descr_rlshift = _binop_right_impl("left_shift") + descr_rrshift = _binop_right_impl("right_shift") + descr_rand = _binop_right_impl("bitwise_and") + descr_ror = _binop_right_impl("bitwise_or") + descr_rxor = _binop_right_impl("bitwise_xor") + + def descr_rdivmod(self, space, w_other): + w_quotient = self.descr_rdiv(space, w_other) + w_remainder = self.descr_rmod(space, w_other) + return space.newtuple([w_quotient, w_remainder]) def _reduce_ufunc_impl(ufunc_name, promote_to_largest=False): def impl(self, space, w_axis=None): @@ -1239,21 +1257,36 @@ __pos__ = interp2app(BaseArray.descr_pos), __neg__ = interp2app(BaseArray.descr_neg), __abs__ = interp2app(BaseArray.descr_abs), + __invert__ = 
interp2app(BaseArray.descr_invert), __nonzero__ = interp2app(BaseArray.descr_nonzero), __add__ = interp2app(BaseArray.descr_add), __sub__ = interp2app(BaseArray.descr_sub), __mul__ = interp2app(BaseArray.descr_mul), __div__ = interp2app(BaseArray.descr_div), + __truediv__ = interp2app(BaseArray.descr_truediv), + __mod__ = interp2app(BaseArray.descr_mod), + __divmod__ = interp2app(BaseArray.descr_divmod), __pow__ = interp2app(BaseArray.descr_pow), - __mod__ = interp2app(BaseArray.descr_mod), + __lshift__ = interp2app(BaseArray.descr_lshift), + __rshift__ = interp2app(BaseArray.descr_rshift), + __and__ = interp2app(BaseArray.descr_and), + __or__ = interp2app(BaseArray.descr_or), + __xor__ = interp2app(BaseArray.descr_xor), __radd__ = interp2app(BaseArray.descr_radd), __rsub__ = interp2app(BaseArray.descr_rsub), __rmul__ = interp2app(BaseArray.descr_rmul), __rdiv__ = interp2app(BaseArray.descr_rdiv), + __rtruediv__ = interp2app(BaseArray.descr_rtruediv), + __rmod__ = interp2app(BaseArray.descr_rmod), + __rdivmod__ = interp2app(BaseArray.descr_rdivmod), __rpow__ = interp2app(BaseArray.descr_rpow), - __rmod__ = interp2app(BaseArray.descr_rmod), + __rlshift__ = interp2app(BaseArray.descr_rlshift), + __rrshift__ = interp2app(BaseArray.descr_rrshift), + __rand__ = interp2app(BaseArray.descr_rand), + __ror__ = interp2app(BaseArray.descr_ror), + __rxor__ = interp2app(BaseArray.descr_rxor), __eq__ = interp2app(BaseArray.descr_eq), __ne__ = interp2app(BaseArray.descr_ne), @@ -1262,10 +1295,6 @@ __gt__ = interp2app(BaseArray.descr_gt), __ge__ = interp2app(BaseArray.descr_ge), - __and__ = interp2app(BaseArray.descr_and), - __or__ = interp2app(BaseArray.descr_or), - __invert__ = interp2app(BaseArray.descr_invert), - __repr__ = interp2app(BaseArray.descr_repr), __str__ = interp2app(BaseArray.descr_str), __array_interface__ = GetSetProperty(BaseArray.descr_array_iface), @@ -1279,6 +1308,7 @@ nbytes = GetSetProperty(BaseArray.descr_get_nbytes), T = GetSetProperty(BaseArray.descr_get_transpose), + transpose = interp2app(BaseArray.descr_get_transpose), flat = GetSetProperty(BaseArray.descr_get_flatiter), ravel = interp2app(BaseArray.descr_ravel), item = interp2app(BaseArray.descr_item), diff --git a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py --- a/pypy/module/micronumpy/interp_ufuncs.py +++ b/pypy/module/micronumpy/interp_ufuncs.py @@ -383,14 +383,17 @@ ("subtract", "sub", 2), ("multiply", "mul", 2, {"identity": 1}), ("bitwise_and", "bitwise_and", 2, {"identity": 1, - 'int_only': True}), + "int_only": True}), ("bitwise_or", "bitwise_or", 2, {"identity": 0, - 'int_only': True}), + "int_only": True}), + ("bitwise_xor", "bitwise_xor", 2, {"int_only": True}), ("invert", "invert", 1, {"int_only": True}), ("divide", "div", 2, {"promote_bools": True}), ("true_divide", "div", 2, {"promote_to_float": True}), ("mod", "mod", 2, {"promote_bools": True}), ("power", "pow", 2, {"promote_bools": True}), + ("left_shift", "lshift", 2, {"int_only": True}), + ("right_shift", "rshift", 2, {"int_only": True}), ("equal", "eq", 2, {"comparison_func": True}), ("not_equal", "ne", 2, {"comparison_func": True}), diff --git a/pypy/module/micronumpy/test/test_dtypes.py b/pypy/module/micronumpy/test/test_dtypes.py --- a/pypy/module/micronumpy/test/test_dtypes.py +++ b/pypy/module/micronumpy/test/test_dtypes.py @@ -441,6 +441,31 @@ numpy.generic, object) assert numpy.bool_.__mro__ == (numpy.bool_, numpy.generic, object) + def test_operators(self): + from operator import truediv + from _numpypy import 
float64, int_, True_, False_ + assert 5 / int_(2) == int_(2) + assert truediv(int_(3), int_(2)) == float64(1.5) + assert truediv(3, int_(2)) == float64(1.5) + assert int_(8) % int_(3) == int_(2) + assert 8 % int_(3) == int_(2) + assert divmod(int_(8), int_(3)) == (int_(2), int_(2)) + assert divmod(8, int_(3)) == (int_(2), int_(2)) + assert 2 ** int_(3) == int_(8) + assert int_(3) << int_(2) == int_(12) + assert 3 << int_(2) == int_(12) + assert int_(8) >> int_(2) == int_(2) + assert 8 >> int_(2) == int_(2) + assert int_(3) & int_(1) == int_(1) + assert 2 & int_(3) == int_(2) + assert int_(2) | int_(1) == int_(3) + assert 2 | int_(1) == int_(3) + assert int_(3) ^ int_(5) == int_(6) + assert True_ ^ False_ is True_ + assert 5 ^ int_(3) == int_(6) + assert +int_(3) == int_(3) + assert ~int_(3) == int_(-4) + def test_alternate_constructs(self): from _numpypy import dtype assert dtype('i8') == dtype('> 2 == [0, 0, 0, 0, 1, 1, 1, 1, 2, 2]).all() + a = array([True, False]) + assert (a >> 1 == [0, 0]).all() + a = arange(3, dtype=float) + raises(TypeError, lambda: a >> 1) + + def test_rrshift(self): + from _numpypy import arange + + a = arange(5) + assert (2 >> a == [2, 1, 0, 0, 0]).all() + def test_pow(self): from _numpypy import array a = array(range(5), float) @@ -686,6 +739,30 @@ for i in range(5): assert b[i] == i % 2 + def test_rand(self): + from _numpypy import arange + + a = arange(5) + assert (3 & a == [0, 1, 2, 3, 0]).all() + + def test_ror(self): + from _numpypy import arange + + a = arange(5) + assert (3 | a == [3, 3, 3, 3, 7]).all() + + def test_xor(self): + from _numpypy import arange + + a = arange(5) + assert (a ^ 3 == [3, 2, 1, 0, 7]).all() + + def test_rxor(self): + from _numpypy import arange + + a = arange(5) + assert (3 ^ a == [3, 2, 1, 0, 7]).all() + def test_pos(self): from _numpypy import array a = array([1., -2., 3., -4., -5.]) @@ -1418,6 +1495,7 @@ a = array((range(10), range(20, 30))) b = a.T assert(b[:, 0] == a[0, :]).all() + assert (a.transpose() == b).all() def test_flatiter(self): from _numpypy import array, flatiter, arange diff --git a/pypy/module/micronumpy/test/test_ufuncs.py b/pypy/module/micronumpy/test/test_ufuncs.py --- a/pypy/module/micronumpy/test/test_ufuncs.py +++ b/pypy/module/micronumpy/test/test_ufuncs.py @@ -312,9 +312,9 @@ def test_arcsinh(self): import math - from _numpypy import arcsinh, inf + from _numpypy import arcsinh - for v in [inf, -inf, 1.0, math.e]: + for v in [float('inf'), float('-inf'), 1.0, math.e]: assert math.asinh(v) == arcsinh(v) assert math.isnan(arcsinh(float("nan"))) @@ -367,15 +367,15 @@ b = add.reduce(a, 0, keepdims=True) assert b.shape == (1, 4) assert (add.reduce(a, 0, keepdims=True) == [12, 15, 18, 21]).all() - def test_bitwise(self): - from _numpypy import bitwise_and, bitwise_or, arange, array + from _numpypy import bitwise_and, bitwise_or, bitwise_xor, arange, array a = arange(6).reshape(2, 3) assert (a & 1 == [[0, 1, 0], [1, 0, 1]]).all() assert (a & 1 == bitwise_and(a, 1)).all() assert (a | 1 == [[1, 1, 3], [3, 5, 5]]).all() assert (a | 1 == bitwise_or(a, 1)).all() + assert (a ^ 3 == bitwise_xor(a, 3)).all() raises(TypeError, 'array([1.0]) & 1') def test_unary_bitops(self): @@ -416,7 +416,7 @@ assert count_reduce_items(a) == 24 assert count_reduce_items(a, 1) == 3 assert count_reduce_items(a, (1, 2)) == 3 * 4 - + def test_true_divide(self): from _numpypy import arange, array, true_divide assert (true_divide(arange(3), array([2, 2, 2])) == array([0, 0.5, 1])).all() diff --git a/pypy/module/micronumpy/types.py 
b/pypy/module/micronumpy/types.py --- a/pypy/module/micronumpy/types.py +++ b/pypy/module/micronumpy/types.py @@ -64,10 +64,6 @@ class BaseType(object): def _unimplemented_ufunc(self, *args): raise NotImplementedError - # add = sub = mul = div = mod = pow = eq = ne = lt = le = gt = ge = max = \ - # min = copysign = pos = neg = abs = sign = reciprocal = fabs = floor = \ - # exp = sin = cos = tan = arcsin = arccos = arctan = arcsinh = \ - # arctanh = _unimplemented_ufunc def malloc(self, size): # XXX find out why test_zjit explodes with tracking of allocations @@ -286,6 +282,10 @@ def bitwise_or(self, v1, v2): return v1 | v2 + @simple_binary_op + def bitwise_xor(self, v1, v2): + return v1 ^ v2 + @simple_unary_op def invert(self, v): return ~v @@ -331,6 +331,14 @@ v1 *= v1 return res + @simple_binary_op + def lshift(self, v1, v2): + return v1 << v2 + + @simple_binary_op + def rshift(self, v1, v2): + return v1 >> v2 + @simple_unary_op def sign(self, v): if v > 0: @@ -349,6 +357,10 @@ def bitwise_or(self, v1, v2): return v1 | v2 + @simple_binary_op + def bitwise_xor(self, v1, v2): + return v1 ^ v2 + @simple_unary_op def invert(self, v): return ~v diff --git a/pypy/module/posix/interp_posix.py b/pypy/module/posix/interp_posix.py --- a/pypy/module/posix/interp_posix.py +++ b/pypy/module/posix/interp_posix.py @@ -48,7 +48,7 @@ return fsencode_w(self.space, self.w_obj) def as_unicode(self): - return self.space.unicode_w(self.w_obj) + return self.space.unicode0_w(self.w_obj) class FileDecoder(object): def __init__(self, space, w_obj): @@ -62,7 +62,7 @@ space = self.space w_unicode = space.call_method(self.w_obj, 'decode', getfilesystemencoding(space)) - return space.unicode_w(w_unicode) + return space.unicode0_w(w_unicode) @specialize.memo() def dispatch_filename(func, tag=0): @@ -543,10 +543,16 @@ dirname = FileEncoder(space, w_dirname) result = rposix.listdir(dirname) w_fs_encoding = getfilesystemencoding(space) - result_w = [ - space.call_method(space.wrap(s), "decode", w_fs_encoding) - for s in result - ] + len_result = len(result) + result_w = [None] * len_result + for i in range(len_result): + w_bytes = space.wrap(result[i]) + try: + result_w[i] = space.call_method(w_bytes, + "decode", w_fs_encoding) + except OperationError, e: + # fall back to the original byte string + result_w[i] = w_bytes else: dirname = space.str0_w(w_dirname) result = rposix.listdir(dirname) diff --git a/pypy/module/posix/test/test_posix2.py b/pypy/module/posix/test/test_posix2.py --- a/pypy/module/posix/test/test_posix2.py +++ b/pypy/module/posix/test/test_posix2.py @@ -29,6 +29,7 @@ mod.pdir = pdir unicode_dir = udir.ensure('fi\xc5\x9fier.txt', dir=True) unicode_dir.join('somefile').write('who cares?') + unicode_dir.join('caf\xe9').write('who knows?') mod.unicode_dir = unicode_dir # in applevel tests, os.stat uses the CPython os.stat. 
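An illustrative aside, not part of the changeset: the interp_posix.py hunk above makes os.listdir() on a unicode path decode each entry with the filesystem encoding and keep the raw byte string when decoding fails. A minimal app-level sketch of that intended behaviour, with a made-up helper name:

    import os, sys

    def listdir_mixed(dirname):
        # Decode every entry with the filesystem encoding, mirroring the
        # fallback added to interp_posix.py above.
        encoding = sys.getfilesystemencoding() or 'ascii'
        entries = []
        for name in os.listdir(dirname):
            try:
                entries.append(name.decode(encoding))
            except UnicodeDecodeError:
                entries.append(name)    # undecodable name: keep the bytes
        return entries

A file named 'caf\xe9' thus comes back as unicode when the encoding can decode it and as the original byte string otherwise, which is what the adjusted test below checks.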
@@ -308,14 +309,22 @@ 'file2'] def test_listdir_unicode(self): + import sys unicode_dir = self.unicode_dir if unicode_dir is None: skip("encoding not good enough") posix = self.posix result = posix.listdir(unicode_dir) - result.sort() - assert result == [u'somefile'] - assert type(result[0]) is unicode + typed_result = [(type(x), x) for x in result] + assert (unicode, u'somefile') in typed_result + try: + u = "caf\xe9".decode(sys.getfilesystemencoding()) + except UnicodeDecodeError: + # Could not decode, listdir returned the byte string + assert (str, "caf\xe9") in typed_result + else: + assert (unicode, u) in typed_result + def test_access(self): pdir = self.pdir + '/file1' diff --git a/pypy/module/posix/test/test_ztranslation.py b/pypy/module/posix/test/test_ztranslation.py new file mode 100644 --- /dev/null +++ b/pypy/module/posix/test/test_ztranslation.py @@ -0,0 +1,4 @@ +from pypy.objspace.fake.checkmodule import checkmodule + +def test_posix_translates(): + checkmodule('posix') \ No newline at end of file diff --git a/pypy/module/pypyjit/__init__.py b/pypy/module/pypyjit/__init__.py --- a/pypy/module/pypyjit/__init__.py +++ b/pypy/module/pypyjit/__init__.py @@ -13,6 +13,7 @@ 'ResOperation': 'interp_resop.WrappedOp', 'DebugMergePoint': 'interp_resop.DebugMergePoint', 'Box': 'interp_resop.WrappedBox', + 'PARAMETER_DOCS': 'space.wrap(pypy.rlib.jit.PARAMETER_DOCS)', } def setup_after_space_initialization(self): diff --git a/pypy/module/pypyjit/test/test_jit_setup.py b/pypy/module/pypyjit/test/test_jit_setup.py --- a/pypy/module/pypyjit/test/test_jit_setup.py +++ b/pypy/module/pypyjit/test/test_jit_setup.py @@ -45,6 +45,12 @@ pypyjit.set_compile_hook(None) pypyjit.set_param('default') + def test_doc(self): + import pypyjit + d = pypyjit.PARAMETER_DOCS + assert type(d) is dict + assert 'threshold' in d + def test_interface_residual_call(): space = gettestobjspace(usemodules=['pypyjit']) diff --git a/pypy/module/sys/version.py b/pypy/module/sys/version.py --- a/pypy/module/sys/version.py +++ b/pypy/module/sys/version.py @@ -7,7 +7,7 @@ from pypy.interpreter import gateway #XXX # the release serial 42 is not in range(16) -CPYTHON_VERSION = (2, 7, 1, "final", 42) #XXX # sync patchlevel.h +CPYTHON_VERSION = (2, 7, 2, "final", 42) #XXX # sync patchlevel.h CPYTHON_API_VERSION = 1013 #XXX # sync with include/modsupport.h PYPY_VERSION = (1, 8, 1, "dev", 0) #XXX # sync patchlevel.h diff --git a/pypy/module/test_lib_pypy/numpypy/test_numpy.py b/pypy/module/test_lib_pypy/numpypy/test_numpy.py new file mode 100644 --- /dev/null +++ b/pypy/module/test_lib_pypy/numpypy/test_numpy.py @@ -0,0 +1,13 @@ +from pypy.conftest import gettestobjspace + +class AppTestNumpy: + def setup_class(cls): + cls.space = gettestobjspace(usemodules=['micronumpy']) + + def test_imports(self): + try: + import numpy # fails if 'numpypy' was not imported so far + except ImportError: + pass + import numpypy + import numpy # works after 'numpypy' has been imported diff --git a/pypy/module/test_lib_pypy/test_datetime.py b/pypy/module/test_lib_pypy/test_datetime.py --- a/pypy/module/test_lib_pypy/test_datetime.py +++ b/pypy/module/test_lib_pypy/test_datetime.py @@ -1,7 +1,10 @@ """Additional tests for datetime.""" +import py + import time import datetime +import copy import os def test_utcfromtimestamp(): @@ -22,3 +25,22 @@ del os.environ["TZ"] else: os.environ["TZ"] = prev_tz + +def test_utcfromtimestamp_microsecond(): + dt = datetime.datetime.utcfromtimestamp(0) + assert isinstance(dt.microsecond, int) + + +def 
test_integer_args(): + with py.test.raises(TypeError): + datetime.datetime(10, 10, 10.) + with py.test.raises(TypeError): + datetime.datetime(10, 10, 10, 10, 10.) + with py.test.raises(TypeError): + datetime.datetime(10, 10, 10, 10, 10, 10.) + +def test_utcnow_microsecond(): + dt = datetime.datetime.utcnow() + assert type(dt.microsecond) is int + + copy.copy(dt) \ No newline at end of file diff --git a/pypy/module/zipimport/interp_zipimport.py b/pypy/module/zipimport/interp_zipimport.py --- a/pypy/module/zipimport/interp_zipimport.py +++ b/pypy/module/zipimport/interp_zipimport.py @@ -123,7 +123,9 @@ self.prefix = prefix def getprefix(self, space): - return space.wrap(self.prefix) + if ZIPSEP == os.path.sep: + return space.wrap(self.prefix) + return space.wrap(self.prefix.replace(ZIPSEP, os.path.sep)) def _find_relative_path(self, filename): if filename.startswith(self.filename): @@ -381,7 +383,7 @@ prefix = name[len(filename):] if prefix.startswith(os.path.sep) or prefix.startswith(ZIPSEP): prefix = prefix[1:] - if prefix and not prefix.endswith(ZIPSEP): + if prefix and not prefix.endswith(ZIPSEP) and not prefix.endswith(os.path.sep): prefix += ZIPSEP w_result = space.wrap(W_ZipImporter(space, name, filename, zip_file, prefix)) zip_cache.set(filename, w_result) diff --git a/pypy/module/zipimport/test/test_undocumented.py b/pypy/module/zipimport/test/test_undocumented.py --- a/pypy/module/zipimport/test/test_undocumented.py +++ b/pypy/module/zipimport/test/test_undocumented.py @@ -119,7 +119,7 @@ zip_importer = zipimport.zipimporter(path) assert isinstance(zip_importer, zipimport.zipimporter) assert zip_importer.archive == zip_path - assert zip_importer.prefix == prefix + assert zip_importer.prefix == prefix.replace('/', os.path.sep) assert zip_path in zipimport._zip_directory_cache finally: self.cleanup_zipfile(self.created_paths) diff --git a/pypy/module/zipimport/test/test_zipimport.py b/pypy/module/zipimport/test/test_zipimport.py --- a/pypy/module/zipimport/test/test_zipimport.py +++ b/pypy/module/zipimport/test/test_zipimport.py @@ -15,7 +15,7 @@ cpy's regression tests """ compression = ZIP_STORED - pathsep = '/' + pathsep = os.path.sep def make_pyc(cls, space, co, mtime): data = marshal.dumps(co) @@ -129,7 +129,7 @@ self.writefile('sub/__init__.py', '') self.writefile('sub/yy.py', '') from zipimport import _zip_directory_cache, zipimporter - sub_importer = zipimporter(self.zipfile + '/sub') + sub_importer = zipimporter(self.zipfile + os.path.sep + 'sub') main_importer = zipimporter(self.zipfile) assert main_importer is not sub_importer diff --git a/pypy/rlib/libffi.py b/pypy/rlib/libffi.py --- a/pypy/rlib/libffi.py +++ b/pypy/rlib/libffi.py @@ -238,7 +238,7 @@ self = jit.promote(self) if argchain.numargs != len(self.argtypes): raise TypeError, 'Wrong number of arguments: %d expected, got %d' %\ - (argchain.numargs, len(self.argtypes)) + (len(self.argtypes), argchain.numargs) ll_args = self._prepare() i = 0 arg = argchain.first diff --git a/pypy/rlib/ropenssl.py b/pypy/rlib/ropenssl.py --- a/pypy/rlib/ropenssl.py +++ b/pypy/rlib/ropenssl.py @@ -54,6 +54,7 @@ ASN1_STRING = lltype.Ptr(lltype.ForwardReference()) ASN1_ITEM = rffi.COpaquePtr('ASN1_ITEM') +ASN1_ITEM_EXP = lltype.Ptr(lltype.FuncType([], ASN1_ITEM)) X509_NAME = rffi.COpaquePtr('X509_NAME') class CConfig: @@ -101,12 +102,11 @@ X509_extension_st = rffi_platform.Struct( 'struct X509_extension_st', [('value', ASN1_STRING)]) - ASN1_ITEM_EXP = lltype.FuncType([], ASN1_ITEM) X509V3_EXT_D2I = lltype.FuncType([rffi.VOIDP, 
rffi.CCHARPP, rffi.LONG], rffi.VOIDP) v3_ext_method = rffi_platform.Struct( 'struct v3_ext_method', - [('it', lltype.Ptr(ASN1_ITEM_EXP)), + [('it', ASN1_ITEM_EXP), ('d2i', lltype.Ptr(X509V3_EXT_D2I))]) GENERAL_NAME_st = rffi_platform.Struct( 'struct GENERAL_NAME_st', @@ -118,6 +118,8 @@ ('block_size', rffi.INT)]) EVP_MD_SIZE = rffi_platform.SizeOf('EVP_MD') EVP_MD_CTX_SIZE = rffi_platform.SizeOf('EVP_MD_CTX') + OPENSSL_EXPORT_VAR_AS_FUNCTION = rffi_platform.Defined( + "OPENSSL_EXPORT_VAR_AS_FUNCTION") for k, v in rffi_platform.configure(CConfig).items(): @@ -224,7 +226,10 @@ ssl_external('i2a_ASN1_INTEGER', [BIO, ASN1_INTEGER], rffi.INT) ssl_external('ASN1_item_d2i', [rffi.VOIDP, rffi.CCHARPP, rffi.LONG, ASN1_ITEM], rffi.VOIDP) -ssl_external('ASN1_ITEM_ptr', [rffi.VOIDP], ASN1_ITEM, macro=True) +if OPENSSL_EXPORT_VAR_AS_FUNCTION: + ssl_external('ASN1_ITEM_ptr', [ASN1_ITEM_EXP], ASN1_ITEM, macro=True) +else: + ssl_external('ASN1_ITEM_ptr', [rffi.VOIDP], ASN1_ITEM, macro=True) ssl_external('sk_GENERAL_NAME_num', [GENERAL_NAMES], rffi.INT, macro=True) diff --git a/pypy/rpython/module/ll_os.py b/pypy/rpython/module/ll_os.py --- a/pypy/rpython/module/ll_os.py +++ b/pypy/rpython/module/ll_os.py @@ -43,7 +43,7 @@ arglist = ['arg%d' % (i,) for i in range(len(signature))] transformed_arglist = arglist[:] for i, arg in enumerate(signature): - if arg is unicode: + if arg in (unicode, unicode0): transformed_arglist[i] = transformed_arglist[i] + '.as_unicode()' args = ', '.join(arglist) @@ -67,7 +67,7 @@ exec source.compile() in miniglobals new_func = miniglobals[func_name] specialized_args = [i for i in range(len(signature)) - if signature[i] in (unicode, None)] + if signature[i] in (unicode, unicode0, None)] new_func = specialize.argtype(*specialized_args)(new_func) # Monkeypatch the function in pypy.rlib.rposix diff --git a/pypy/tool/release/package.py b/pypy/tool/release/package.py --- a/pypy/tool/release/package.py +++ b/pypy/tool/release/package.py @@ -60,7 +60,8 @@ if sys.platform == 'win32': # Can't rename a DLL: it is always called 'libpypy-c.dll' for extra in ['libpypy-c.dll', - 'libexpat.dll', 'sqlite3.dll', 'msvcr90.dll']: + 'libexpat.dll', 'sqlite3.dll', 'msvcr90.dll', + 'libeay32.dll', 'ssleay32.dll']: p = pypy_c.dirpath().join(extra) if not p.check(): p = py.path.local.sysfind(extra) @@ -125,7 +126,7 @@ zf.close() else: archive = str(builddir.join(name + '.tar.bz2')) - if sys.platform == 'darwin': + if sys.platform == 'darwin' or sys.platform.startswith('freebsd'): e = os.system('tar --numeric-owner -cvjf ' + archive + " " + name) else: e = os.system('tar --owner=root --group=root --numeric-owner -cvjf ' + archive + " " + name) diff --git a/pypy/translator/c/gc.py b/pypy/translator/c/gc.py --- a/pypy/translator/c/gc.py +++ b/pypy/translator/c/gc.py @@ -11,7 +11,6 @@ from pypy.translator.tool.cbuild import ExternalCompilationInfo class BasicGcPolicy(object): - stores_hash_at_the_end = False def __init__(self, db, thread_enabled=False): self.db = db @@ -47,8 +46,7 @@ return ExternalCompilationInfo( pre_include_bits=['/* using %s */' % (gct.__class__.__name__,), '#define MALLOC_ZERO_FILLED %d' % (gct.malloc_zero_filled,), - ], - post_include_bits=['typedef void *GC_hidden_pointer;'] + ] ) def get_prebuilt_hash(self, obj): @@ -308,7 +306,6 @@ class FrameworkGcPolicy(BasicGcPolicy): transformerclass = framework.FrameworkGCTransformer - stores_hash_at_the_end = True def struct_setup(self, structdefnode, rtti): if rtti is not None and hasattr(rtti._obj, 'destructor_funcptr'): diff --git 
a/pypy/translator/c/gcc/trackgcroot.py b/pypy/translator/c/gcc/trackgcroot.py --- a/pypy/translator/c/gcc/trackgcroot.py +++ b/pypy/translator/c/gcc/trackgcroot.py @@ -478,6 +478,7 @@ 'cvt', 'ucomi', 'comi', 'subs', 'subp' , 'adds', 'addp', 'xorp', 'movap', 'movd', 'movlp', 'sqrtsd', 'movhpd', 'mins', 'minp', 'maxs', 'maxp', 'unpck', 'pxor', 'por', # sse2 + 'shufps', 'shufpd', # arithmetic operations should not produce GC pointers 'inc', 'dec', 'not', 'neg', 'or', 'and', 'sbb', 'adc', 'shl', 'shr', 'sal', 'sar', 'rol', 'ror', 'mul', 'imul', 'div', 'idiv', diff --git a/pypy/translator/goal/app_main.py b/pypy/translator/goal/app_main.py --- a/pypy/translator/goal/app_main.py +++ b/pypy/translator/goal/app_main.py @@ -139,8 +139,14 @@ items = pypyjit.defaults.items() items.sort() for key, value in items: - print ' --jit %s=N %s%s (default %s)' % ( - key, ' '*(18-len(key)), pypyjit.PARAMETER_DOCS[key], value) + prefix = ' --jit %s=N %s' % (key, ' '*(18-len(key))) + doc = '%s (default %s)' % (pypyjit.PARAMETER_DOCS[key], value) + while len(doc) > 51: + i = doc[:51].rfind(' ') + print prefix + doc[:i] + doc = doc[i+1:] + prefix = ' '*len(prefix) + print prefix + doc print ' --jit off turn off the JIT' def print_version(*args): From noreply at buildbot.pypy.org Mon Feb 20 23:30:19 2012 From: noreply at buildbot.pypy.org (fijal) Date: Mon, 20 Feb 2012 23:30:19 +0100 (CET) Subject: [pypy-commit] pypy numpy-record-dtypes: fixes Message-ID: <20120220223019.33035820D1@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-record-dtypes Changeset: r52692:c8916d85c80a Date: 2012-02-20 11:30 +0100 http://bitbucket.org/pypy/pypy/changeset/c8916d85c80a/ Log: fixes diff --git a/pypy/module/micronumpy/interp_boxes.py b/pypy/module/micronumpy/interp_boxes.py --- a/pypy/module/micronumpy/interp_boxes.py +++ b/pypy/module/micronumpy/interp_boxes.py @@ -8,20 +8,18 @@ from pypy.objspace.std.inttype import int_typedef from pypy.rlib.rarithmetic import LONG_BIT from pypy.tool.sourcetools import func_with_new_name -from pypy.rpython.lltypesystem import lltype MIXIN_32 = (int_typedef,) if LONG_BIT == 32 else () MIXIN_64 = (int_typedef,) if LONG_BIT == 64 else () def new_dtype_getter(name): - def get_dtype(self, space): + def _get_dtype(space): from pypy.module.micronumpy.interp_dtype import get_dtype_cache return getattr(get_dtype_cache(space), "w_%sdtype" % name) def new(space, w_subtype, w_value): - from pypy.module.micronumpy.interp_dtype import get_dtype_cache - dtype = getattr(get_dtype_cache(space), "w_%sdtype" % name) + dtype = _get_dtype(space) return dtype.itemtype.coerce_subtype(space, w_subtype, w_value) - return func_with_new_name(new, name + "_box_new"), get_dtype + return func_with_new_name(new, name + "_box_new"), staticmethod(_get_dtype) class PrimitiveBox(object): _mixin_ = True @@ -32,7 +30,6 @@ def convert_to(self, dtype): return dtype.box(self.value) - class W_GenericBox(Wrappable): _attrs_ = () @@ -41,6 +38,9 @@ w_subtype.getname(space, '?') ) + def get_dtype(self, space): + return self._get_dtype(space) + def descr_str(self, space): return self.descr_repr(space) @@ -48,12 +48,12 @@ return space.wrap(self.get_dtype(space).itemtype.str_format(self)) def descr_int(self, space): - box = self.convert_to(W_LongBox.get_dtype(self, space)) + box = self.convert_to(W_LongBox._get_dtype(space)) assert isinstance(box, W_LongBox) return space.wrap(box.value) def descr_float(self, space): - box = self.convert_to(W_Float64Box.get_dtype(self, space)) + box = 
self.convert_to(W_Float64Box._get_dtype(space)) assert isinstance(box, W_Float64Box) return space.wrap(box.value) @@ -132,7 +132,7 @@ class W_BoolBox(W_GenericBox, PrimitiveBox): - descr__new__, get_dtype = new_dtype_getter("bool") + descr__new__, _get_dtype = new_dtype_getter("bool") class W_NumberBox(W_GenericBox): _attrs_ = () @@ -148,40 +148,40 @@ pass class W_Int8Box(W_SignedIntegerBox, PrimitiveBox): - descr__new__, get_dtype = new_dtype_getter("int8") + descr__new__, _get_dtype = new_dtype_getter("int8") class W_UInt8Box(W_UnsignedIntegerBox, PrimitiveBox): - descr__new__, get_dtype = new_dtype_getter("uint8") + descr__new__, _get_dtype = new_dtype_getter("uint8") class W_Int16Box(W_SignedIntegerBox, PrimitiveBox): - descr__new__, get_dtype = new_dtype_getter("int16") + descr__new__, _get_dtype = new_dtype_getter("int16") class W_UInt16Box(W_UnsignedIntegerBox, PrimitiveBox): - descr__new__, get_dtype = new_dtype_getter("uint16") + descr__new__, _get_dtype = new_dtype_getter("uint16") class W_Int32Box(W_SignedIntegerBox, PrimitiveBox): - descr__new__, get_dtype = new_dtype_getter("int32") + descr__new__, _get_dtype = new_dtype_getter("int32") class W_UInt32Box(W_UnsignedIntegerBox, PrimitiveBox): - descr__new__, get_dtype = new_dtype_getter("uint32") + descr__new__, _get_dtype = new_dtype_getter("uint32") class W_LongBox(W_SignedIntegerBox, PrimitiveBox): - descr__new__, get_dtype = new_dtype_getter("long") + descr__new__, _get_dtype = new_dtype_getter("long") class W_ULongBox(W_UnsignedIntegerBox, PrimitiveBox): - descr__new__, get_dtype = new_dtype_getter("ulong") + descr__new__, _get_dtype = new_dtype_getter("ulong") class W_Int64Box(W_SignedIntegerBox, PrimitiveBox): - descr__new__, get_dtype = new_dtype_getter("int64") + descr__new__, _get_dtype = new_dtype_getter("int64") class W_LongLongBox(W_SignedIntegerBox, PrimitiveBox): - descr__new__, get_dtype = new_dtype_getter('longlong') + descr__new__, _get_dtype = new_dtype_getter('longlong') class W_UInt64Box(W_UnsignedIntegerBox, PrimitiveBox): - descr__new__, get_dtype = new_dtype_getter("uint64") + descr__new__, _get_dtype = new_dtype_getter("uint64") class W_ULongLongBox(W_SignedIntegerBox, PrimitiveBox): - descr__new__, get_dtype = new_dtype_getter('ulonglong') + descr__new__, _get_dtype = new_dtype_getter('ulonglong') class W_InexactBox(W_NumberBox): _attrs_ = () @@ -190,10 +190,10 @@ _attrs_ = () class W_Float32Box(W_FloatingBox, PrimitiveBox): - descr__new__, get_dtype = new_dtype_getter("float32") + descr__new__, _get_dtype = new_dtype_getter("float32") class W_Float64Box(W_FloatingBox, PrimitiveBox): - descr__new__, get_dtype = new_dtype_getter("float64") + descr__new__, _get_dtype = new_dtype_getter("float64") class W_FlexibleBox(W_GenericBox): @@ -392,11 +392,11 @@ __module__ = "numpypy", ) -W_StringBox.typedef = TypeDef("string_", (W_CharacterBox.typedef, str_typedef), +W_StringBox.typedef = TypeDef("string_", (str_typedef, W_CharacterBox.typedef), __module__ = "numpypy", ) -W_UnicodeBox.typedef = TypeDef("unicode_", (W_CharacterBox.typedef, unicode_typedef), +W_UnicodeBox.typedef = TypeDef("unicode_", (unicode_typedef, W_CharacterBox.typedef), __module__ = "numpypy", ) diff --git a/pypy/module/micronumpy/test/test_dtypes.py b/pypy/module/micronumpy/test/test_dtypes.py --- a/pypy/module/micronumpy/test/test_dtypes.py +++ b/pypy/module/micronumpy/test/test_dtypes.py @@ -465,6 +465,7 @@ assert 5 ^ int_(3) == int_(6) assert +int_(3) == int_(3) assert ~int_(3) == int_(-4) + raises(TypeError, lambda: float64(3) & 1) def 
test_alternate_constructs(self): from _numpypy import dtype @@ -477,7 +478,6 @@ assert a[0] == 1 assert (a + a)[1] == 4 self.check_non_native(a, array([1, 2, 3], 'i2')) - raises(TypeError, lambda: float64(3) & 1) def test_alignment(self): from _numpypy import dtype From noreply at buildbot.pypy.org Mon Feb 20 23:30:22 2012 From: noreply at buildbot.pypy.org (fijal) Date: Mon, 20 Feb 2012 23:30:22 +0100 (CET) Subject: [pypy-commit] pypy backend-vector-ops: merge numpy-record-dtypes. Message-ID: <20120220223022.CEB13820D1@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: backend-vector-ops Changeset: r52693:069abc3ddd46 Date: 2012-02-20 13:18 +0100 http://bitbucket.org/pypy/pypy/changeset/069abc3ddd46/ Log: merge numpy-record-dtypes. diff --git a/ctypes_configure/cbuild.py b/ctypes_configure/cbuild.py --- a/ctypes_configure/cbuild.py +++ b/ctypes_configure/cbuild.py @@ -206,8 +206,9 @@ cfiles += eci.separate_module_files include_dirs = list(eci.include_dirs) library_dirs = list(eci.library_dirs) - if sys.platform == 'darwin': # support Fink & Darwinports - for s in ('/sw/', '/opt/local/'): + if (sys.platform == 'darwin' or # support Fink & Darwinports + sys.platform.startswith('freebsd')): + for s in ('/sw/', '/opt/local/', '/usr/local/'): if s + 'include' not in include_dirs and \ os.path.exists(s + 'include'): include_dirs.append(s + 'include') @@ -380,9 +381,9 @@ self.link_extra += ['-pthread'] if sys.platform == 'win32': self.link_extra += ['/DEBUG'] # generate .pdb file - if sys.platform == 'darwin': - # support Fink & Darwinports - for s in ('/sw/', '/opt/local/'): + if (sys.platform == 'darwin' or # support Fink & Darwinports + sys.platform.startswith('freebsd')): + for s in ('/sw/', '/opt/local/', '/usr/local/'): if s + 'include' not in self.include_dirs and \ os.path.exists(s + 'include'): self.include_dirs.append(s + 'include') @@ -395,7 +396,6 @@ self.outputfilename = py.path.local(cfilenames[0]).new(ext=ext) else: self.outputfilename = py.path.local(outputfilename) - self.eci = eci def build(self, noerr=False): basename = self.outputfilename.new(ext='') @@ -436,7 +436,7 @@ old = cfile.dirpath().chdir() try: res = compiler.compile([cfile.basename], - include_dirs=self.eci.include_dirs, + include_dirs=self.include_dirs, extra_preargs=self.compile_extra) assert len(res) == 1 cobjfile = py.path.local(res[0]) @@ -445,9 +445,9 @@ finally: old.chdir() compiler.link_executable(objects, str(self.outputfilename), - libraries=self.eci.libraries, + libraries=self.libraries, extra_preargs=self.link_extra, - library_dirs=self.eci.library_dirs) + library_dirs=self.library_dirs) def build_executable(*args, **kwds): noerr = kwds.pop('noerr', False) diff --git a/lib-python/modified-2.7/UserDict.py b/lib-python/modified-2.7/UserDict.py --- a/lib-python/modified-2.7/UserDict.py +++ b/lib-python/modified-2.7/UserDict.py @@ -85,8 +85,12 @@ def __iter__(self): return iter(self.data) -import _abcoll -_abcoll.MutableMapping.register(IterableUserDict) +try: + import _abcoll +except ImportError: + pass # e.g. 
no '_weakref' module on this pypy +else: + _abcoll.MutableMapping.register(IterableUserDict) class DictMixin: diff --git a/lib_pypy/_subprocess.py b/lib_pypy/_subprocess.py --- a/lib_pypy/_subprocess.py +++ b/lib_pypy/_subprocess.py @@ -87,7 +87,7 @@ # Now the _subprocess module implementation -from ctypes import c_int as _c_int, byref as _byref +from ctypes import c_int as _c_int, byref as _byref, WinError as _WinError class _handle: def __init__(self, handle): @@ -116,7 +116,7 @@ res = _CreatePipe(_byref(read), _byref(write), None, size) if not res: - raise WindowsError("Error") + raise _WinError() return _handle(read.value), _handle(write.value) @@ -132,7 +132,7 @@ access, inherit, options) if not res: - raise WindowsError("Error") + raise _WinError() return _handle(target.value) DUPLICATE_SAME_ACCESS = 2 @@ -165,7 +165,7 @@ start_dir, _byref(si), _byref(pi)) if not res: - raise WindowsError("Error") + raise _WinError() return _handle(pi.hProcess), _handle(pi.hThread), pi.dwProcessID, pi.dwThreadID STARTF_USESHOWWINDOW = 0x001 @@ -178,7 +178,7 @@ res = _WaitForSingleObject(int(handle), milliseconds) if res < 0: - raise WindowsError("Error") + raise _WinError() return res INFINITE = 0xffffffff @@ -190,7 +190,7 @@ res = _GetExitCodeProcess(int(handle), _byref(code)) if not res: - raise WindowsError("Error") + raise _WinError() return code.value @@ -198,7 +198,7 @@ res = _TerminateProcess(int(handle), exitcode) if not res: - raise WindowsError("Error") + raise _WinError() def GetStdHandle(stdhandle): res = _GetStdHandle(stdhandle) diff --git a/lib_pypy/ctypes_config_cache/pyexpat.ctc.py b/lib_pypy/ctypes_config_cache/pyexpat.ctc.py deleted file mode 100644 --- a/lib_pypy/ctypes_config_cache/pyexpat.ctc.py +++ /dev/null @@ -1,45 +0,0 @@ -""" -'ctypes_configure' source for pyexpat.py. -Run this to rebuild _pyexpat_cache.py. 
-""" - -import ctypes -from ctypes import c_char_p, c_int, c_void_p, c_char -from ctypes_configure import configure -import dumpcache - - -class CConfigure: - _compilation_info_ = configure.ExternalCompilationInfo( - includes = ['expat.h'], - libraries = ['expat'], - pre_include_lines = [ - '#define XML_COMBINED_VERSION (10000*XML_MAJOR_VERSION+100*XML_MINOR_VERSION+XML_MICRO_VERSION)'], - ) - - XML_Char = configure.SimpleType('XML_Char', c_char) - XML_COMBINED_VERSION = configure.ConstantInteger('XML_COMBINED_VERSION') - for name in ['XML_PARAM_ENTITY_PARSING_NEVER', - 'XML_PARAM_ENTITY_PARSING_UNLESS_STANDALONE', - 'XML_PARAM_ENTITY_PARSING_ALWAYS']: - locals()[name] = configure.ConstantInteger(name) - - XML_Encoding = configure.Struct('XML_Encoding',[ - ('data', c_void_p), - ('convert', c_void_p), - ('release', c_void_p), - ('map', c_int * 256)]) - XML_Content = configure.Struct('XML_Content',[ - ('numchildren', c_int), - ('children', c_void_p), - ('name', c_char_p), - ('type', c_int), - ('quant', c_int), - ]) - # this is insanely stupid - XML_FALSE = configure.ConstantInteger('XML_FALSE') - XML_TRUE = configure.ConstantInteger('XML_TRUE') - -config = configure.configure(CConfigure) - -dumpcache.dumpcache2('pyexpat', config) diff --git a/lib_pypy/ctypes_config_cache/test/test_cache.py b/lib_pypy/ctypes_config_cache/test/test_cache.py --- a/lib_pypy/ctypes_config_cache/test/test_cache.py +++ b/lib_pypy/ctypes_config_cache/test/test_cache.py @@ -39,10 +39,6 @@ d = run('resource.ctc.py', '_resource_cache.py') assert 'RLIM_NLIMITS' in d -def test_pyexpat(): - d = run('pyexpat.ctc.py', '_pyexpat_cache.py') - assert 'XML_COMBINED_VERSION' in d - def test_locale(): d = run('locale.ctc.py', '_locale_cache.py') assert 'LC_ALL' in d diff --git a/lib_pypy/datetime.py b/lib_pypy/datetime.py --- a/lib_pypy/datetime.py +++ b/lib_pypy/datetime.py @@ -271,8 +271,9 @@ raise ValueError("%s()=%d, must be in -1439..1439" % (name, offset)) def _check_date_fields(year, month, day): - if not isinstance(year, (int, long)): - raise TypeError('int expected') + for value in [year, day]: + if not isinstance(value, (int, long)): + raise TypeError('int expected') if not MINYEAR <= year <= MAXYEAR: raise ValueError('year must be in %d..%d' % (MINYEAR, MAXYEAR), year) if not 1 <= month <= 12: @@ -282,8 +283,9 @@ raise ValueError('day must be in 1..%d' % dim, day) def _check_time_fields(hour, minute, second, microsecond): - if not isinstance(hour, (int, long)): - raise TypeError('int expected') + for value in [hour, minute, second, microsecond]: + if not isinstance(value, (int, long)): + raise TypeError('int expected') if not 0 <= hour <= 23: raise ValueError('hour must be in 0..23', hour) if not 0 <= minute <= 59: diff --git a/lib_pypy/pyexpat.py b/lib_pypy/pyexpat.py deleted file mode 100644 --- a/lib_pypy/pyexpat.py +++ /dev/null @@ -1,448 +0,0 @@ - -import ctypes -import ctypes.util -from ctypes import c_char_p, c_int, c_void_p, POINTER, c_char, c_wchar_p -import sys - -# load the platform-specific cache made by running pyexpat.ctc.py -from ctypes_config_cache._pyexpat_cache import * - -try: from __pypy__ import builtinify -except ImportError: builtinify = lambda f: f - - -lib = ctypes.CDLL(ctypes.util.find_library('expat')) - - -XML_Content.children = POINTER(XML_Content) -XML_Parser = ctypes.c_void_p # an opaque pointer -assert XML_Char is ctypes.c_char # this assumption is everywhere in -# cpython's expat, let's explode - -def declare_external(name, args, res): - func = getattr(lib, name) - func.args = args - 
func.restype = res - globals()[name] = func - -declare_external('XML_ParserCreate', [c_char_p], XML_Parser) -declare_external('XML_ParserCreateNS', [c_char_p, c_char], XML_Parser) -declare_external('XML_Parse', [XML_Parser, c_char_p, c_int, c_int], c_int) -currents = ['CurrentLineNumber', 'CurrentColumnNumber', - 'CurrentByteIndex'] -for name in currents: - func = getattr(lib, 'XML_Get' + name) - func.args = [XML_Parser] - func.restype = c_int - -declare_external('XML_SetReturnNSTriplet', [XML_Parser, c_int], None) -declare_external('XML_GetSpecifiedAttributeCount', [XML_Parser], c_int) -declare_external('XML_SetParamEntityParsing', [XML_Parser, c_int], None) -declare_external('XML_GetErrorCode', [XML_Parser], c_int) -declare_external('XML_StopParser', [XML_Parser, c_int], None) -declare_external('XML_ErrorString', [c_int], c_char_p) -declare_external('XML_SetBase', [XML_Parser, c_char_p], None) -if XML_COMBINED_VERSION >= 19505: - declare_external('XML_UseForeignDTD', [XML_Parser, c_int], None) - -declare_external('XML_SetUnknownEncodingHandler', [XML_Parser, c_void_p, - c_void_p], None) -declare_external('XML_FreeContentModel', [XML_Parser, POINTER(XML_Content)], - None) -declare_external('XML_ExternalEntityParserCreate', [XML_Parser,c_char_p, - c_char_p], - XML_Parser) - -handler_names = [ - 'StartElement', - 'EndElement', - 'ProcessingInstruction', - 'CharacterData', - 'UnparsedEntityDecl', - 'NotationDecl', - 'StartNamespaceDecl', - 'EndNamespaceDecl', - 'Comment', - 'StartCdataSection', - 'EndCdataSection', - 'Default', - 'DefaultHandlerExpand', - 'NotStandalone', - 'ExternalEntityRef', - 'StartDoctypeDecl', - 'EndDoctypeDecl', - 'EntityDecl', - 'XmlDecl', - 'ElementDecl', - 'AttlistDecl', - ] -if XML_COMBINED_VERSION >= 19504: - handler_names.append('SkippedEntity') -setters = {} - -for name in handler_names: - if name == 'DefaultHandlerExpand': - newname = 'XML_SetDefaultHandlerExpand' - else: - name += 'Handler' - newname = 'XML_Set' + name - cfunc = getattr(lib, newname) - cfunc.args = [XML_Parser, ctypes.c_void_p] - cfunc.result = ctypes.c_int - setters[name] = cfunc - -class ExpatError(Exception): - def __str__(self): - return self.s - -error = ExpatError - -class XMLParserType(object): - specified_attributes = 0 - ordered_attributes = 0 - returns_unicode = 1 - encoding = 'utf-8' - def __init__(self, encoding, namespace_separator, _hook_external_entity=False): - self.returns_unicode = 1 - if encoding: - self.encoding = encoding - if not _hook_external_entity: - if namespace_separator is None: - self.itself = XML_ParserCreate(encoding) - else: - self.itself = XML_ParserCreateNS(encoding, ord(namespace_separator)) - if not self.itself: - raise RuntimeError("Creating parser failed") - self._set_unknown_encoding_handler() - self.storage = {} - self.buffer = None - self.buffer_size = 8192 - self.character_data_handler = None - self.intern = {} - self.__exc_info = None - - def _flush_character_buffer(self): - if not self.buffer: - return - res = self._call_character_handler(''.join(self.buffer)) - self.buffer = [] - return res - - def _call_character_handler(self, buf): - if self.character_data_handler: - self.character_data_handler(buf) - - def _set_unknown_encoding_handler(self): - def UnknownEncoding(encodingData, name, info_p): - info = info_p.contents - s = ''.join([chr(i) for i in range(256)]) - u = s.decode(self.encoding, 'replace') - for i in range(len(u)): - if u[i] == u'\xfffd': - info.map[i] = -1 - else: - info.map[i] = ord(u[i]) - info.data = None - info.convert = None - 
info.release = None - return 1 - - CB = ctypes.CFUNCTYPE(c_int, c_void_p, c_char_p, POINTER(XML_Encoding)) - cb = CB(UnknownEncoding) - self._unknown_encoding_handler = (cb, UnknownEncoding) - XML_SetUnknownEncodingHandler(self.itself, cb, None) - - def _set_error(self, code): - e = ExpatError() - e.code = code - lineno = lib.XML_GetCurrentLineNumber(self.itself) - colno = lib.XML_GetCurrentColumnNumber(self.itself) - e.offset = colno - e.lineno = lineno - err = XML_ErrorString(code)[:200] - e.s = "%s: line: %d, column: %d" % (err, lineno, colno) - e.message = e.s - self._error = e - - def Parse(self, data, is_final=0): - res = XML_Parse(self.itself, data, len(data), is_final) - if res == 0: - self._set_error(XML_GetErrorCode(self.itself)) - if self.__exc_info: - exc_info = self.__exc_info - self.__exc_info = None - raise exc_info[0], exc_info[1], exc_info[2] - else: - raise self._error - self._flush_character_buffer() - return res - - def _sethandler(self, name, real_cb): - setter = setters[name] - try: - cb = self.storage[(name, real_cb)] - except KeyError: - cb = getattr(self, 'get_cb_for_%s' % name)(real_cb) - self.storage[(name, real_cb)] = cb - except TypeError: - # weellll... - cb = getattr(self, 'get_cb_for_%s' % name)(real_cb) - setter(self.itself, cb) - - def _wrap_cb(self, cb): - def f(*args): - try: - return cb(*args) - except: - self.__exc_info = sys.exc_info() - XML_StopParser(self.itself, XML_FALSE) - return f - - def get_cb_for_StartElementHandler(self, real_cb): - def StartElement(unused, name, attrs): - # unpack name and attrs - conv = self.conv - self._flush_character_buffer() - if self.specified_attributes: - max = XML_GetSpecifiedAttributeCount(self.itself) - else: - max = 0 - while attrs[max]: - max += 2 # copied - if self.ordered_attributes: - res = [attrs[i] for i in range(max)] - else: - res = {} - for i in range(0, max, 2): - res[conv(attrs[i])] = conv(attrs[i + 1]) - real_cb(conv(name), res) - StartElement = self._wrap_cb(StartElement) - CB = ctypes.CFUNCTYPE(None, c_void_p, c_char_p, POINTER(c_char_p)) - return CB(StartElement) - - def get_cb_for_ExternalEntityRefHandler(self, real_cb): - def ExternalEntity(unused, context, base, sysId, pubId): - self._flush_character_buffer() - conv = self.conv - res = real_cb(conv(context), conv(base), conv(sysId), - conv(pubId)) - if res is None: - return 0 - return res - ExternalEntity = self._wrap_cb(ExternalEntity) - CB = ctypes.CFUNCTYPE(c_int, c_void_p, *([c_char_p] * 4)) - return CB(ExternalEntity) - - def get_cb_for_CharacterDataHandler(self, real_cb): - def CharacterData(unused, s, lgt): - if self.buffer is None: - self._call_character_handler(self.conv(s[:lgt])) - else: - if len(self.buffer) + lgt > self.buffer_size: - self._flush_character_buffer() - if self.character_data_handler is None: - return - if lgt >= self.buffer_size: - self._call_character_handler(s[:lgt]) - self.buffer = [] - else: - self.buffer.append(s[:lgt]) - CharacterData = self._wrap_cb(CharacterData) - CB = ctypes.CFUNCTYPE(None, c_void_p, POINTER(c_char), c_int) - return CB(CharacterData) - - def get_cb_for_NotStandaloneHandler(self, real_cb): - def NotStandaloneHandler(unused): - return real_cb() - NotStandaloneHandler = self._wrap_cb(NotStandaloneHandler) - CB = ctypes.CFUNCTYPE(c_int, c_void_p) - return CB(NotStandaloneHandler) - - def get_cb_for_EntityDeclHandler(self, real_cb): - def EntityDecl(unused, ename, is_param, value, value_len, base, - system_id, pub_id, not_name): - self._flush_character_buffer() - if not value: - value = None - 
else: - value = value[:value_len] - args = [ename, is_param, value, base, system_id, - pub_id, not_name] - args = [self.conv(arg) for arg in args] - real_cb(*args) - EntityDecl = self._wrap_cb(EntityDecl) - CB = ctypes.CFUNCTYPE(None, c_void_p, c_char_p, c_int, c_char_p, - c_int, c_char_p, c_char_p, c_char_p, c_char_p) - return CB(EntityDecl) - - def _conv_content_model(self, model): - children = tuple([self._conv_content_model(model.children[i]) - for i in range(model.numchildren)]) - return (model.type, model.quant, self.conv(model.name), - children) - - def get_cb_for_ElementDeclHandler(self, real_cb): - def ElementDecl(unused, name, model): - self._flush_character_buffer() - modelobj = self._conv_content_model(model[0]) - real_cb(name, modelobj) - XML_FreeContentModel(self.itself, model) - - ElementDecl = self._wrap_cb(ElementDecl) - CB = ctypes.CFUNCTYPE(None, c_void_p, c_char_p, POINTER(XML_Content)) - return CB(ElementDecl) - - def _new_callback_for_string_len(name, sign): - def get_callback_for_(self, real_cb): - def func(unused, s, len): - self._flush_character_buffer() - arg = self.conv(s[:len]) - real_cb(arg) - func.func_name = name - func = self._wrap_cb(func) - CB = ctypes.CFUNCTYPE(*sign) - return CB(func) - get_callback_for_.func_name = 'get_cb_for_' + name - return get_callback_for_ - - for name in ['DefaultHandlerExpand', - 'DefaultHandler']: - sign = [None, c_void_p, POINTER(c_char), c_int] - name = 'get_cb_for_' + name - locals()[name] = _new_callback_for_string_len(name, sign) - - def _new_callback_for_starargs(name, sign): - def get_callback_for_(self, real_cb): - def func(unused, *args): - self._flush_character_buffer() - args = [self.conv(arg) for arg in args] - real_cb(*args) - func.func_name = name - func = self._wrap_cb(func) - CB = ctypes.CFUNCTYPE(*sign) - return CB(func) - get_callback_for_.func_name = 'get_cb_for_' + name - return get_callback_for_ - - for name, num_or_sign in [ - ('EndElementHandler', 1), - ('ProcessingInstructionHandler', 2), - ('UnparsedEntityDeclHandler', 5), - ('NotationDeclHandler', 4), - ('StartNamespaceDeclHandler', 2), - ('EndNamespaceDeclHandler', 1), - ('CommentHandler', 1), - ('StartCdataSectionHandler', 0), - ('EndCdataSectionHandler', 0), - ('StartDoctypeDeclHandler', [None, c_void_p] + [c_char_p] * 3 + [c_int]), - ('XmlDeclHandler', [None, c_void_p, c_char_p, c_char_p, c_int]), - ('AttlistDeclHandler', [None, c_void_p] + [c_char_p] * 4 + [c_int]), - ('EndDoctypeDeclHandler', 0), - ('SkippedEntityHandler', [None, c_void_p, c_char_p, c_int]), - ]: - if isinstance(num_or_sign, int): - sign = [None, c_void_p] + [c_char_p] * num_or_sign - else: - sign = num_or_sign - name = 'get_cb_for_' + name - locals()[name] = _new_callback_for_starargs(name, sign) - - def conv_unicode(self, s): - if s is None or isinstance(s, int): - return s - return s.decode(self.encoding, "strict") - - def __setattr__(self, name, value): - # forest of ifs... 
- if name in ['ordered_attributes', - 'returns_unicode', 'specified_attributes']: - if value: - if name == 'returns_unicode': - self.conv = self.conv_unicode - self.__dict__[name] = 1 - else: - if name == 'returns_unicode': - self.conv = lambda s: s - self.__dict__[name] = 0 - elif name == 'buffer_text': - if value: - self.buffer = [] - else: - self._flush_character_buffer() - self.buffer = None - elif name == 'buffer_size': - if not isinstance(value, int): - raise TypeError("Expected int") - if value <= 0: - raise ValueError("Expected positive int") - self.__dict__[name] = value - elif name == 'namespace_prefixes': - XML_SetReturnNSTriplet(self.itself, int(bool(value))) - elif name in setters: - if name == 'CharacterDataHandler': - # XXX we need to flush buffer here - self._flush_character_buffer() - self.character_data_handler = value - #print name - #print value - #print - self._sethandler(name, value) - else: - self.__dict__[name] = value - - def SetParamEntityParsing(self, arg): - XML_SetParamEntityParsing(self.itself, arg) - - if XML_COMBINED_VERSION >= 19505: - def UseForeignDTD(self, arg=True): - if arg: - flag = XML_TRUE - else: - flag = XML_FALSE - XML_UseForeignDTD(self.itself, flag) - - def __getattr__(self, name): - if name == 'buffer_text': - return self.buffer is not None - elif name in currents: - return getattr(lib, 'XML_Get' + name)(self.itself) - elif name == 'ErrorColumnNumber': - return lib.XML_GetCurrentColumnNumber(self.itself) - elif name == 'ErrorLineNumber': - return lib.XML_GetCurrentLineNumber(self.itself) - return self.__dict__[name] - - def ParseFile(self, file): - return self.Parse(file.read(), False) - - def SetBase(self, base): - XML_SetBase(self.itself, base) - - def ExternalEntityParserCreate(self, context, encoding=None): - """ExternalEntityParserCreate(context[, encoding]) - Create a parser for parsing an external entity based on the - information passed to the ExternalEntityRefHandler.""" - new_parser = XMLParserType(encoding, None, True) - new_parser.itself = XML_ExternalEntityParserCreate(self.itself, - context, encoding) - new_parser._set_unknown_encoding_handler() - return new_parser - - at builtinify -def ErrorString(errno): - return XML_ErrorString(errno)[:200] - - at builtinify -def ParserCreate(encoding=None, namespace_separator=None, intern=None): - if (not isinstance(encoding, str) and - not encoding is None): - raise TypeError("ParserCreate() argument 1 must be string or None, not %s" % encoding.__class__.__name__) - if (not isinstance(namespace_separator, str) and - not namespace_separator is None): - raise TypeError("ParserCreate() argument 2 must be string or None, not %s" % namespace_separator.__class__.__name__) - if namespace_separator is not None: - if len(namespace_separator) > 1: - raise ValueError('namespace_separator must be at most one character, omitted, or None') - if len(namespace_separator) == 0: - namespace_separator = None - return XMLParserType(encoding, namespace_separator) diff --git a/lib_pypy/pypy_test/test_pyexpat.py b/lib_pypy/pypy_test/test_pyexpat.py deleted file mode 100644 --- a/lib_pypy/pypy_test/test_pyexpat.py +++ /dev/null @@ -1,665 +0,0 @@ -# XXX TypeErrors on calling handlers, or on bad return values from a -# handler, are obscure and unhelpful. 
- -from __future__ import absolute_import -import StringIO, sys -import unittest, py - -from lib_pypy.ctypes_config_cache import rebuild -rebuild.rebuild_one('pyexpat.ctc.py') - -from lib_pypy import pyexpat -#from xml.parsers import expat -expat = pyexpat - -from test.test_support import sortdict, run_unittest - - -class TestSetAttribute: - def setup_method(self, meth): - self.parser = expat.ParserCreate(namespace_separator='!') - self.set_get_pairs = [ - [0, 0], - [1, 1], - [2, 1], - [0, 0], - ] - - def test_returns_unicode(self): - for x, y in self.set_get_pairs: - self.parser.returns_unicode = x - assert self.parser.returns_unicode == y - - def test_ordered_attributes(self): - for x, y in self.set_get_pairs: - self.parser.ordered_attributes = x - assert self.parser.ordered_attributes == y - - def test_specified_attributes(self): - for x, y in self.set_get_pairs: - self.parser.specified_attributes = x - assert self.parser.specified_attributes == y - - -data = '''\ - - - - - - - - - -%unparsed_entity; -]> - - - - Contents of subelements - - -&external_entity; -&skipped_entity; - -''' - - -# Produce UTF-8 output -class TestParse: - class Outputter: - def __init__(self): - self.out = [] - - def StartElementHandler(self, name, attrs): - self.out.append('Start element: ' + repr(name) + ' ' + - sortdict(attrs)) - - def EndElementHandler(self, name): - self.out.append('End element: ' + repr(name)) - - def CharacterDataHandler(self, data): - data = data.strip() - if data: - self.out.append('Character data: ' + repr(data)) - - def ProcessingInstructionHandler(self, target, data): - self.out.append('PI: ' + repr(target) + ' ' + repr(data)) - - def StartNamespaceDeclHandler(self, prefix, uri): - self.out.append('NS decl: ' + repr(prefix) + ' ' + repr(uri)) - - def EndNamespaceDeclHandler(self, prefix): - self.out.append('End of NS decl: ' + repr(prefix)) - - def StartCdataSectionHandler(self): - self.out.append('Start of CDATA section') - - def EndCdataSectionHandler(self): - self.out.append('End of CDATA section') - - def CommentHandler(self, text): - self.out.append('Comment: ' + repr(text)) - - def NotationDeclHandler(self, *args): - name, base, sysid, pubid = args - self.out.append('Notation declared: %s' %(args,)) - - def UnparsedEntityDeclHandler(self, *args): - entityName, base, systemId, publicId, notationName = args - self.out.append('Unparsed entity decl: %s' %(args,)) - - def NotStandaloneHandler(self): - self.out.append('Not standalone') - return 1 - - def ExternalEntityRefHandler(self, *args): - context, base, sysId, pubId = args - self.out.append('External entity ref: %s' %(args[1:],)) - return 1 - - def StartDoctypeDeclHandler(self, *args): - self.out.append(('Start doctype', args)) - return 1 - - def EndDoctypeDeclHandler(self): - self.out.append("End doctype") - return 1 - - def EntityDeclHandler(self, *args): - self.out.append(('Entity declaration', args)) - return 1 - - def XmlDeclHandler(self, *args): - self.out.append(('XML declaration', args)) - return 1 - - def ElementDeclHandler(self, *args): - self.out.append(('Element declaration', args)) - return 1 - - def AttlistDeclHandler(self, *args): - self.out.append(('Attribute list declaration', args)) - return 1 - - def SkippedEntityHandler(self, *args): - self.out.append(("Skipped entity", args)) - return 1 - - def DefaultHandler(self, userData): - pass - - def DefaultHandlerExpand(self, userData): - pass - - handler_names = [ - 'StartElementHandler', 'EndElementHandler', 'CharacterDataHandler', - 
'ProcessingInstructionHandler', 'UnparsedEntityDeclHandler', - 'NotationDeclHandler', 'StartNamespaceDeclHandler', - 'EndNamespaceDeclHandler', 'CommentHandler', - 'StartCdataSectionHandler', 'EndCdataSectionHandler', 'DefaultHandler', - 'DefaultHandlerExpand', 'NotStandaloneHandler', - 'ExternalEntityRefHandler', 'StartDoctypeDeclHandler', - 'EndDoctypeDeclHandler', 'EntityDeclHandler', 'XmlDeclHandler', - 'ElementDeclHandler', 'AttlistDeclHandler', 'SkippedEntityHandler', - ] - - def test_utf8(self): - - out = self.Outputter() - parser = expat.ParserCreate(namespace_separator='!') - for name in self.handler_names: - setattr(parser, name, getattr(out, name)) - parser.returns_unicode = 0 - parser.Parse(data, 1) - - # Verify output - operations = out.out - expected_operations = [ - ('XML declaration', (u'1.0', u'iso-8859-1', 0)), - 'PI: \'xml-stylesheet\' \'href="stylesheet.css"\'', - "Comment: ' comment data '", - "Not standalone", - ("Start doctype", ('quotations', 'quotations.dtd', None, 1)), - ('Element declaration', (u'root', (2, 0, None, ()))), - ('Attribute list declaration', ('root', 'attr1', 'CDATA', None, - 1)), - ('Attribute list declaration', ('root', 'attr2', 'CDATA', None, - 0)), - "Notation declared: ('notation', None, 'notation.jpeg', None)", - ('Entity declaration', ('acirc', 0, '\xc3\xa2', None, None, None, None)), - ('Entity declaration', ('external_entity', 0, None, None, - 'entity.file', None, None)), - "Unparsed entity decl: ('unparsed_entity', None, 'entity.file', None, 'notation')", - "Not standalone", - "End doctype", - "Start element: 'root' {'attr1': 'value1', 'attr2': 'value2\\xe1\\xbd\\x80'}", - "NS decl: 'myns' 'http://www.python.org/namespace'", - "Start element: 'http://www.python.org/namespace!subelement' {}", - "Character data: 'Contents of subelements'", - "End element: 'http://www.python.org/namespace!subelement'", - "End of NS decl: 'myns'", - "Start element: 'sub2' {}", - 'Start of CDATA section', - "Character data: 'contents of CDATA section'", - 'End of CDATA section', - "End element: 'sub2'", - "External entity ref: (None, 'entity.file', None)", - ('Skipped entity', ('skipped_entity', 0)), - "End element: 'root'", - ] - for operation, expected_operation in zip(operations, expected_operations): - assert operation == expected_operation - - def test_unicode(self): - # Try the parse again, this time producing Unicode output - out = self.Outputter() - parser = expat.ParserCreate(namespace_separator='!') - parser.returns_unicode = 1 - for name in self.handler_names: - setattr(parser, name, getattr(out, name)) - - parser.Parse(data, 1) - - operations = out.out - expected_operations = [ - ('XML declaration', (u'1.0', u'iso-8859-1', 0)), - 'PI: u\'xml-stylesheet\' u\'href="stylesheet.css"\'', - "Comment: u' comment data '", - "Not standalone", - ("Start doctype", ('quotations', 'quotations.dtd', None, 1)), - ('Element declaration', (u'root', (2, 0, None, ()))), - ('Attribute list declaration', ('root', 'attr1', 'CDATA', None, - 1)), - ('Attribute list declaration', ('root', 'attr2', 'CDATA', None, - 0)), - "Notation declared: (u'notation', None, u'notation.jpeg', None)", - ('Entity declaration', (u'acirc', 0, u'\xe2', None, None, None, - None)), - ('Entity declaration', (u'external_entity', 0, None, None, - u'entity.file', None, None)), - "Unparsed entity decl: (u'unparsed_entity', None, u'entity.file', None, u'notation')", - "Not standalone", - "End doctype", - "Start element: u'root' {u'attr1': u'value1', u'attr2': u'value2\\u1f40'}", - "NS decl: u'myns' 
u'http://www.python.org/namespace'", - "Start element: u'http://www.python.org/namespace!subelement' {}", - "Character data: u'Contents of subelements'", - "End element: u'http://www.python.org/namespace!subelement'", - "End of NS decl: u'myns'", - "Start element: u'sub2' {}", - 'Start of CDATA section', - "Character data: u'contents of CDATA section'", - 'End of CDATA section', - "End element: u'sub2'", - "External entity ref: (None, u'entity.file', None)", - ('Skipped entity', ('skipped_entity', 0)), - "End element: u'root'", - ] - for operation, expected_operation in zip(operations, expected_operations): - assert operation == expected_operation - - def test_parse_file(self): - # Try parsing a file - out = self.Outputter() - parser = expat.ParserCreate(namespace_separator='!') - parser.returns_unicode = 1 - for name in self.handler_names: - setattr(parser, name, getattr(out, name)) - file = StringIO.StringIO(data) - - parser.ParseFile(file) - - operations = out.out - expected_operations = [ - ('XML declaration', (u'1.0', u'iso-8859-1', 0)), - 'PI: u\'xml-stylesheet\' u\'href="stylesheet.css"\'', - "Comment: u' comment data '", - "Not standalone", - ("Start doctype", ('quotations', 'quotations.dtd', None, 1)), - ('Element declaration', (u'root', (2, 0, None, ()))), - ('Attribute list declaration', ('root', 'attr1', 'CDATA', None, - 1)), - ('Attribute list declaration', ('root', 'attr2', 'CDATA', None, - 0)), - "Notation declared: (u'notation', None, u'notation.jpeg', None)", - ('Entity declaration', ('acirc', 0, u'\xe2', None, None, None, None)), - ('Entity declaration', (u'external_entity', 0, None, None, u'entity.file', None, None)), - "Unparsed entity decl: (u'unparsed_entity', None, u'entity.file', None, u'notation')", - "Not standalone", - "End doctype", - "Start element: u'root' {u'attr1': u'value1', u'attr2': u'value2\\u1f40'}", - "NS decl: u'myns' u'http://www.python.org/namespace'", - "Start element: u'http://www.python.org/namespace!subelement' {}", - "Character data: u'Contents of subelements'", - "End element: u'http://www.python.org/namespace!subelement'", - "End of NS decl: u'myns'", - "Start element: u'sub2' {}", - 'Start of CDATA section', - "Character data: u'contents of CDATA section'", - 'End of CDATA section', - "End element: u'sub2'", - "External entity ref: (None, u'entity.file', None)", - ('Skipped entity', ('skipped_entity', 0)), - "End element: u'root'", - ] - for operation, expected_operation in zip(operations, expected_operations): - assert operation == expected_operation - - -class TestNamespaceSeparator: - def test_legal(self): - # Tests that make sure we get errors when the namespace_separator value - # is illegal, and that we don't for good values: - expat.ParserCreate() - expat.ParserCreate(namespace_separator=None) - expat.ParserCreate(namespace_separator=' ') - - def test_illegal(self): - try: - expat.ParserCreate(namespace_separator=42) - raise AssertionError - except TypeError, e: - assert str(e) == ( - 'ParserCreate() argument 2 must be string or None, not int') - - try: - expat.ParserCreate(namespace_separator='too long') - raise AssertionError - except ValueError, e: - assert str(e) == ( - 'namespace_separator must be at most one character, omitted, or None') - - def test_zero_length(self): - # ParserCreate() needs to accept a namespace_separator of zero length - # to satisfy the requirements of RDF applications that are required - # to simply glue together the namespace URI and the localname. 
Though - # considered a wart of the RDF specifications, it needs to be supported. - # - # See XML-SIG mailing list thread starting with - # http://mail.python.org/pipermail/xml-sig/2001-April/005202.html - # - expat.ParserCreate(namespace_separator='') # too short - - -class TestInterning: - def test(self): - py.test.skip("Not working") - # Test the interning machinery. - p = expat.ParserCreate() - L = [] - def collector(name, *args): - L.append(name) - p.StartElementHandler = collector - p.EndElementHandler = collector - p.Parse(" ", 1) - tag = L[0] - assert len(L) == 6 - for entry in L: - # L should have the same string repeated over and over. - assert tag is entry - - -class TestBufferText: - def setup_method(self, meth): - self.stuff = [] - self.parser = expat.ParserCreate() - self.parser.buffer_text = 1 - self.parser.CharacterDataHandler = self.CharacterDataHandler - - def check(self, expected, label): - assert self.stuff == expected, ( - "%s\nstuff = %r\nexpected = %r" - % (label, self.stuff, map(unicode, expected))) - - def CharacterDataHandler(self, text): - self.stuff.append(text) - - def StartElementHandler(self, name, attrs): - self.stuff.append("<%s>" % name) - bt = attrs.get("buffer-text") - if bt == "yes": - self.parser.buffer_text = 1 - elif bt == "no": - self.parser.buffer_text = 0 - - def EndElementHandler(self, name): - self.stuff.append("" % name) - - def CommentHandler(self, data): - self.stuff.append("" % data) - - def setHandlers(self, handlers=[]): - for name in handlers: - setattr(self.parser, name, getattr(self, name)) - - def test_default_to_disabled(self): - parser = expat.ParserCreate() - assert not parser.buffer_text - - def test_buffering_enabled(self): - # Make sure buffering is turned on - assert self.parser.buffer_text - self.parser.Parse("123", 1) - assert self.stuff == ['123'], ( - "buffered text not properly collapsed") - - def test1(self): - # XXX This test exposes more detail of Expat's text chunking than we - # XXX like, but it tests what we need to concisely. 
- self.setHandlers(["StartElementHandler"]) - self.parser.Parse("12\n34\n5", 1) - assert self.stuff == ( - ["", "1", "", "2", "\n", "3", "", "4\n5"]), ( - "buffering control not reacting as expected") - - def test2(self): - self.parser.Parse("1<2> \n 3", 1) - assert self.stuff == ["1<2> \n 3"], ( - "buffered text not properly collapsed") - - def test3(self): - self.setHandlers(["StartElementHandler"]) - self.parser.Parse("123", 1) - assert self.stuff == ["", "1", "", "2", "", "3"], ( - "buffered text not properly split") - - def test4(self): - self.setHandlers(["StartElementHandler", "EndElementHandler"]) - self.parser.CharacterDataHandler = None - self.parser.Parse("123", 1) - assert self.stuff == ( - ["", "", "", "", "", ""]) - - def test5(self): - self.setHandlers(["StartElementHandler", "EndElementHandler"]) - self.parser.Parse("123", 1) - assert self.stuff == ( - ["", "1", "", "", "2", "", "", "3", ""]) - - def test6(self): - self.setHandlers(["CommentHandler", "EndElementHandler", - "StartElementHandler"]) - self.parser.Parse("12345 ", 1) - assert self.stuff == ( - ["", "1", "", "", "2", "", "", "345", ""]), ( - "buffered text not properly split") - - def test7(self): - self.setHandlers(["CommentHandler", "EndElementHandler", - "StartElementHandler"]) - self.parser.Parse("12345 ", 1) - assert self.stuff == ( - ["", "1", "", "", "2", "", "", "3", - "", "4", "", "5", ""]), ( - "buffered text not properly split") - - -# Test handling of exception from callback: -class TestHandlerException: - def StartElementHandler(self, name, attrs): - raise RuntimeError(name) - - def test(self): - parser = expat.ParserCreate() - parser.StartElementHandler = self.StartElementHandler - try: - parser.Parse("", 1) - raise AssertionError - except RuntimeError, e: - assert e.args[0] == 'a', ( - "Expected RuntimeError for element 'a', but" + \ - " found %r" % e.args[0]) - - -# Test Current* members: -class TestPosition: - def StartElementHandler(self, name, attrs): - self.check_pos('s') - - def EndElementHandler(self, name): - self.check_pos('e') - - def check_pos(self, event): - pos = (event, - self.parser.CurrentByteIndex, - self.parser.CurrentLineNumber, - self.parser.CurrentColumnNumber) - assert self.upto < len(self.expected_list) - expected = self.expected_list[self.upto] - assert pos == expected, ( - 'Expected position %s, got position %s' %(pos, expected)) - self.upto += 1 - - def test(self): - self.parser = expat.ParserCreate() - self.parser.StartElementHandler = self.StartElementHandler - self.parser.EndElementHandler = self.EndElementHandler - self.upto = 0 - self.expected_list = [('s', 0, 1, 0), ('s', 5, 2, 1), ('s', 11, 3, 2), - ('e', 15, 3, 6), ('e', 17, 4, 1), ('e', 22, 5, 0)] - - xml = '\n \n \n \n' - self.parser.Parse(xml, 1) - - -class Testsf1296433: - def test_parse_only_xml_data(self): - # http://python.org/sf/1296433 - # - xml = "%s" % ('a' * 1025) - # this one doesn't crash - #xml = "%s" % ('a' * 10000) - - class SpecificException(Exception): - pass - - def handler(text): - raise SpecificException - - parser = expat.ParserCreate() - parser.CharacterDataHandler = handler - - py.test.raises(Exception, parser.Parse, xml) - -class TestChardataBuffer: - """ - test setting of chardata buffer size - """ - - def test_1025_bytes(self): - assert self.small_buffer_test(1025) == 2 - - def test_1000_bytes(self): - assert self.small_buffer_test(1000) == 1 - - def test_wrong_size(self): - parser = expat.ParserCreate() - parser.buffer_text = 1 - def f(size): - parser.buffer_size = size - - 
py.test.raises(TypeError, f, sys.maxint+1) - py.test.raises(ValueError, f, -1) - py.test.raises(ValueError, f, 0) - - def test_unchanged_size(self): - xml1 = ("%s" % ('a' * 512)) - xml2 = 'a'*512 + '' - parser = expat.ParserCreate() - parser.CharacterDataHandler = self.counting_handler - parser.buffer_size = 512 - parser.buffer_text = 1 - - # Feed 512 bytes of character data: the handler should be called - # once. - self.n = 0 - parser.Parse(xml1) - assert self.n == 1 - - # Reassign to buffer_size, but assign the same size. - parser.buffer_size = parser.buffer_size - assert self.n == 1 - - # Try parsing rest of the document - parser.Parse(xml2) - assert self.n == 2 - - - def test_disabling_buffer(self): - xml1 = "%s" % ('a' * 512) - xml2 = ('b' * 1024) - xml3 = "%s" % ('c' * 1024) - parser = expat.ParserCreate() - parser.CharacterDataHandler = self.counting_handler - parser.buffer_text = 1 - parser.buffer_size = 1024 - assert parser.buffer_size == 1024 - - # Parse one chunk of XML - self.n = 0 - parser.Parse(xml1, 0) - assert parser.buffer_size == 1024 - assert self.n == 1 - - # Turn off buffering and parse the next chunk. - parser.buffer_text = 0 - assert not parser.buffer_text - assert parser.buffer_size == 1024 - for i in range(10): - parser.Parse(xml2, 0) - assert self.n == 11 - - parser.buffer_text = 1 - assert parser.buffer_text - assert parser.buffer_size == 1024 - parser.Parse(xml3, 1) - assert self.n == 12 - - - - def make_document(self, bytes): - return ("" + bytes * 'a' + '') - - def counting_handler(self, text): - self.n += 1 - - def small_buffer_test(self, buffer_len): - xml = "%s" % ('a' * buffer_len) - parser = expat.ParserCreate() - parser.CharacterDataHandler = self.counting_handler - parser.buffer_size = 1024 - parser.buffer_text = 1 - - self.n = 0 - parser.Parse(xml) - return self.n - - def test_change_size_1(self): - xml1 = "%s" % ('a' * 1024) - xml2 = "aaa%s" % ('a' * 1025) - parser = expat.ParserCreate() - parser.CharacterDataHandler = self.counting_handler - parser.buffer_text = 1 - parser.buffer_size = 1024 - assert parser.buffer_size == 1024 - - self.n = 0 - parser.Parse(xml1, 0) - parser.buffer_size *= 2 - assert parser.buffer_size == 2048 - parser.Parse(xml2, 1) - assert self.n == 2 - - def test_change_size_2(self): - xml1 = "a%s" % ('a' * 1023) - xml2 = "aaa%s" % ('a' * 1025) - parser = expat.ParserCreate() - parser.CharacterDataHandler = self.counting_handler - parser.buffer_text = 1 - parser.buffer_size = 2048 - assert parser.buffer_size == 2048 - - self.n=0 - parser.Parse(xml1, 0) - parser.buffer_size /= 2 - assert parser.buffer_size == 1024 - parser.Parse(xml2, 1) - assert self.n == 4 - - def test_segfault(self): - py.test.raises(TypeError, expat.ParserCreate, 1234123123) - -def test_invalid_data(): - parser = expat.ParserCreate() - parser.Parse('invalid.xml', 0) - try: - parser.Parse("", 1) - except expat.ExpatError, e: - assert e.code == 2 # XXX is this reliable? - assert e.lineno == 1 - assert e.message.startswith('syntax error') - else: - py.test.fail("Did not raise") - diff --git a/pypy/doc/coding-guide.rst b/pypy/doc/coding-guide.rst --- a/pypy/doc/coding-guide.rst +++ b/pypy/doc/coding-guide.rst @@ -388,7 +388,9 @@ In a few cases (e.g. hash table manipulation), we need machine-sized unsigned arithmetic. For these cases there is the r_uint class, which is a pure Python implementation of word-sized unsigned integers that silently wrap - around. The purpose of this class (as opposed to helper functions as above) + around. 
("word-sized" and "machine-sized" are used equivalently and mean + the native size, which you get using "unsigned long" in C.) + The purpose of this class (as opposed to helper functions as above) is consistent typing: both Python and the annotator will propagate r_uint instances in the program and interpret all the operations between them as unsigned. Instances of r_uint are special-cased by the code generators to diff --git a/pypy/doc/config/objspace.usemodules.pyexpat.txt b/pypy/doc/config/objspace.usemodules.pyexpat.txt --- a/pypy/doc/config/objspace.usemodules.pyexpat.txt +++ b/pypy/doc/config/objspace.usemodules.pyexpat.txt @@ -1,2 +1,1 @@ -Use (experimental) pyexpat module written in RPython, instead of CTypes -version which is used by default. +Use the pyexpat module, written in RPython. diff --git a/pypy/jit/metainterp/optimizeopt/vectorize.py b/pypy/jit/metainterp/optimizeopt/vectorize.py --- a/pypy/jit/metainterp/optimizeopt/vectorize.py +++ b/pypy/jit/metainterp/optimizeopt/vectorize.py @@ -4,6 +4,7 @@ from pypy.jit.metainterp.resoperation import rop, ResOperation from pypy.jit.metainterp.history import BoxVector from pypy.jit.codewriter.effectinfo import EffectInfo +from pypy.rlib.debug import debug_start, debug_print, debug_stop VECTOR_SIZE = 2 VEC_MAP = {rop.FLOAT_ADD: rop.FLOAT_VECTOR_ADD, @@ -99,12 +100,19 @@ return self.index == descr.get_field_size() class OptVectorize(Optimization): + track = None + full = None + def __init__(self): self.ops_so_far = [] self.reset() def reset(self): # deal with reset + if self.track or self.full: + debug_start("jit-optimizeopt-vectorize") + debug_print("aborting vectorizing") + debug_stop("jit-optimizeopt-vectorize") for op in self.ops_so_far: self.emit_operation(op) self.ops_so_far = [] @@ -120,6 +128,7 @@ if oopspec == EffectInfo.OS_ASSERT_ALIGNED: index = self.getvalue(op.getarg(2)) self.tracked_indexes[index] = TrackIndex(index, 0) + self.emit_operation(op) else: self.optimize_default(op) diff --git a/pypy/module/cpyext/api.py b/pypy/module/cpyext/api.py --- a/pypy/module/cpyext/api.py +++ b/pypy/module/cpyext/api.py @@ -23,6 +23,7 @@ from pypy.interpreter.function import StaticMethod from pypy.objspace.std.sliceobject import W_SliceObject from pypy.module.__builtin__.descriptor import W_Property +from pypy.module.__builtin__.interp_classobj import W_ClassObject from pypy.module.__builtin__.interp_memoryview import W_MemoryView from pypy.rlib.entrypoint import entrypoint from pypy.rlib.unroll import unrolling_iterable @@ -397,6 +398,7 @@ 'Module': 'space.gettypeobject(Module.typedef)', 'Property': 'space.gettypeobject(W_Property.typedef)', 'Slice': 'space.gettypeobject(W_SliceObject.typedef)', + 'Class': 'space.gettypeobject(W_ClassObject.typedef)', 'StaticMethod': 'space.gettypeobject(StaticMethod.typedef)', 'CFunction': 'space.gettypeobject(cpyext.methodobject.W_PyCFunctionObject.typedef)', 'WrapperDescr': 'space.gettypeobject(cpyext.methodobject.W_PyCMethodObject.typedef)' diff --git a/pypy/module/cpyext/test/test_classobject.py b/pypy/module/cpyext/test/test_classobject.py --- a/pypy/module/cpyext/test/test_classobject.py +++ b/pypy/module/cpyext/test/test_classobject.py @@ -1,4 +1,5 @@ from pypy.module.cpyext.test.test_api import BaseApiTest +from pypy.module.cpyext.test.test_cpyext import AppTestCpythonExtensionBase from pypy.interpreter.function import Function, Method class TestClassObject(BaseApiTest): @@ -51,3 +52,14 @@ assert api.PyInstance_Check(w_instance) assert space.is_true(space.call_method(space.builtin, 
"isinstance", w_instance, w_class)) + +class AppTestStringObject(AppTestCpythonExtensionBase): + def test_class_type(self): + module = self.import_extension('foo', [ + ("get_classtype", "METH_NOARGS", + """ + Py_INCREF(&PyClass_Type); + return &PyClass_Type; + """)]) + class C: pass + assert module.get_classtype() is type(C) diff --git a/pypy/module/micronumpy/__init__.py b/pypy/module/micronumpy/__init__.py --- a/pypy/module/micronumpy/__init__.py +++ b/pypy/module/micronumpy/__init__.py @@ -37,26 +37,44 @@ 'True_': 'types.Bool.True', 'False_': 'types.Bool.False', + 'typeinfo': 'interp_dtype.get_dtype_cache(space).w_typeinfo', + 'generic': 'interp_boxes.W_GenericBox', 'number': 'interp_boxes.W_NumberBox', 'integer': 'interp_boxes.W_IntegerBox', 'signedinteger': 'interp_boxes.W_SignedIntegerBox', 'unsignedinteger': 'interp_boxes.W_UnsignedIntegerBox', 'bool_': 'interp_boxes.W_BoolBox', + 'bool8': 'interp_boxes.W_BoolBox', 'int8': 'interp_boxes.W_Int8Box', + 'byte': 'interp_boxes.W_Int8Box', 'uint8': 'interp_boxes.W_UInt8Box', + 'ubyte': 'interp_boxes.W_UInt8Box', 'int16': 'interp_boxes.W_Int16Box', + 'short': 'interp_boxes.W_Int16Box', 'uint16': 'interp_boxes.W_UInt16Box', + 'ushort': 'interp_boxes.W_UInt16Box', 'int32': 'interp_boxes.W_Int32Box', + 'intc': 'interp_boxes.W_Int32Box', 'uint32': 'interp_boxes.W_UInt32Box', + 'uintc': 'interp_boxes.W_UInt32Box', 'int64': 'interp_boxes.W_Int64Box', 'uint64': 'interp_boxes.W_UInt64Box', + 'longlong': 'interp_boxes.W_LongLongBox', + 'ulonglong': 'interp_boxes.W_ULongLongBox', 'int_': 'interp_boxes.W_LongBox', 'inexact': 'interp_boxes.W_InexactBox', 'floating': 'interp_boxes.W_FloatingBox', 'float_': 'interp_boxes.W_Float64Box', 'float32': 'interp_boxes.W_Float32Box', 'float64': 'interp_boxes.W_Float64Box', + 'intp': 'types.IntP.BoxType', + 'uintp': 'types.UIntP.BoxType', + 'flexible': 'interp_boxes.W_FlexibleBox', + 'character': 'interp_boxes.W_CharacterBox', + 'str_': 'interp_boxes.W_StringBox', + 'unicode_': 'interp_boxes.W_UnicodeBox', + 'void': 'interp_boxes.W_VoidBox', } # ufuncs diff --git a/pypy/module/micronumpy/compile.py b/pypy/module/micronumpy/compile.py --- a/pypy/module/micronumpy/compile.py +++ b/pypy/module/micronumpy/compile.py @@ -33,7 +33,7 @@ pass SINGLE_ARG_FUNCTIONS = ["sum", "prod", "max", "min", "all", "any", - "unegative", "flat"] + "unegative", "flat", "tostring"] TWO_ARG_FUNCTIONS = ["dot", 'take'] class FakeSpace(object): @@ -51,6 +51,8 @@ w_long = "long" w_tuple = 'tuple' w_slice = "slice" + w_str = "str" + w_unicode = "unicode" def __init__(self): """NOT_RPYTHON""" @@ -91,8 +93,12 @@ return BoolObject(obj) elif isinstance(obj, int): return IntObject(obj) + elif isinstance(obj, long): + return LongObject(obj) elif isinstance(obj, W_Root): return obj + elif isinstance(obj, str): + return StringObject(obj) raise NotImplementedError def newlist(self, items): @@ -120,6 +126,11 @@ return int(w_obj.floatval) raise NotImplementedError + def str_w(self, w_obj): + if isinstance(w_obj, StringObject): + return w_obj.v + raise NotImplementedError + def int(self, w_obj): if isinstance(w_obj, IntObject): return w_obj @@ -151,7 +162,13 @@ return instantiate(klass) def newtuple(self, list_w): - raise ValueError + return ListObject(list_w) + + def newdict(self): + return {} + + def setitem(self, dict, item, value): + dict[item] = value def len_w(self, w_obj): if isinstance(w_obj, ListObject): @@ -178,6 +195,11 @@ def __init__(self, intval): self.intval = intval +class LongObject(W_Root): + tp = FakeSpace.w_long + def __init__(self, 
intval): + self.intval = intval + class ListObject(W_Root): tp = FakeSpace.w_list def __init__(self, items): @@ -190,6 +212,11 @@ self.stop = stop self.step = step +class StringObject(W_Root): + tp = FakeSpace.w_str + def __init__(self, v): + self.v = v + class InterpreterState(object): def __init__(self, code): self.code = code @@ -407,6 +434,9 @@ w_res = neg.call(interp.space, [arr]) elif self.name == "flat": w_res = arr.descr_get_flatiter(interp.space) + elif self.name == "tostring": + arr.descr_tostring(interp.space) + w_res = None else: assert False # unreachable code elif self.name in TWO_ARG_FUNCTIONS: diff --git a/pypy/module/micronumpy/interp_boxes.py b/pypy/module/micronumpy/interp_boxes.py --- a/pypy/module/micronumpy/interp_boxes.py +++ b/pypy/module/micronumpy/interp_boxes.py @@ -1,24 +1,25 @@ from pypy.interpreter.baseobjspace import Wrappable -from pypy.interpreter.error import operationerrfmt -from pypy.interpreter.gateway import interp2app +from pypy.interpreter.error import operationerrfmt, OperationError +from pypy.interpreter.gateway import interp2app, unwrap_spec from pypy.interpreter.typedef import TypeDef from pypy.objspace.std.floattype import float_typedef +from pypy.objspace.std.stringtype import str_typedef +from pypy.objspace.std.unicodetype import unicode_typedef from pypy.objspace.std.inttype import int_typedef from pypy.rlib.rarithmetic import LONG_BIT from pypy.tool.sourcetools import func_with_new_name - MIXIN_32 = (int_typedef,) if LONG_BIT == 32 else () MIXIN_64 = (int_typedef,) if LONG_BIT == 64 else () def new_dtype_getter(name): - def get_dtype(space): + def _get_dtype(space): from pypy.module.micronumpy.interp_dtype import get_dtype_cache return getattr(get_dtype_cache(space), "w_%sdtype" % name) def new(space, w_subtype, w_value): - dtype = get_dtype(space) + dtype = _get_dtype(space) return dtype.itemtype.coerce_subtype(space, w_subtype, w_value) - return func_with_new_name(new, name + "_box_new"), staticmethod(get_dtype) + return func_with_new_name(new, name + "_box_new"), staticmethod(_get_dtype) class PrimitiveBox(object): _mixin_ = True @@ -29,7 +30,6 @@ def convert_to(self, dtype): return dtype.box(self.value) - class W_GenericBox(Wrappable): _attrs_ = () @@ -38,6 +38,9 @@ w_subtype.getname(space, '?') ) + def get_dtype(self, space): + return self._get_dtype(space) + def descr_str(self, space): return self.descr_repr(space) @@ -45,12 +48,12 @@ return space.wrap(self.get_dtype(space).itemtype.str_format(self)) def descr_int(self, space): - box = self.convert_to(W_LongBox.get_dtype(space)) + box = self.convert_to(W_LongBox._get_dtype(space)) assert isinstance(box, W_LongBox) return space.wrap(box.value) def descr_float(self, space): - box = self.convert_to(W_Float64Box.get_dtype(space)) + box = self.convert_to(W_Float64Box._get_dtype(space)) assert isinstance(box, W_Float64Box) return space.wrap(box.value) @@ -129,7 +132,7 @@ class W_BoolBox(W_GenericBox, PrimitiveBox): - descr__new__, get_dtype = new_dtype_getter("bool") + descr__new__, _get_dtype = new_dtype_getter("bool") class W_NumberBox(W_GenericBox): _attrs_ = () @@ -145,34 +148,40 @@ pass class W_Int8Box(W_SignedIntegerBox, PrimitiveBox): - descr__new__, get_dtype = new_dtype_getter("int8") + descr__new__, _get_dtype = new_dtype_getter("int8") class W_UInt8Box(W_UnsignedIntegerBox, PrimitiveBox): - descr__new__, get_dtype = new_dtype_getter("uint8") + descr__new__, _get_dtype = new_dtype_getter("uint8") class W_Int16Box(W_SignedIntegerBox, PrimitiveBox): - descr__new__, get_dtype = 
new_dtype_getter("int16") + descr__new__, _get_dtype = new_dtype_getter("int16") class W_UInt16Box(W_UnsignedIntegerBox, PrimitiveBox): - descr__new__, get_dtype = new_dtype_getter("uint16") + descr__new__, _get_dtype = new_dtype_getter("uint16") class W_Int32Box(W_SignedIntegerBox, PrimitiveBox): - descr__new__, get_dtype = new_dtype_getter("int32") + descr__new__, _get_dtype = new_dtype_getter("int32") class W_UInt32Box(W_UnsignedIntegerBox, PrimitiveBox): - descr__new__, get_dtype = new_dtype_getter("uint32") + descr__new__, _get_dtype = new_dtype_getter("uint32") class W_LongBox(W_SignedIntegerBox, PrimitiveBox): - descr__new__, get_dtype = new_dtype_getter("long") + descr__new__, _get_dtype = new_dtype_getter("long") class W_ULongBox(W_UnsignedIntegerBox, PrimitiveBox): - descr__new__, get_dtype = new_dtype_getter("ulong") + descr__new__, _get_dtype = new_dtype_getter("ulong") class W_Int64Box(W_SignedIntegerBox, PrimitiveBox): - descr__new__, get_dtype = new_dtype_getter("int64") + descr__new__, _get_dtype = new_dtype_getter("int64") + +class W_LongLongBox(W_SignedIntegerBox, PrimitiveBox): + descr__new__, _get_dtype = new_dtype_getter('longlong') class W_UInt64Box(W_UnsignedIntegerBox, PrimitiveBox): - descr__new__, get_dtype = new_dtype_getter("uint64") + descr__new__, _get_dtype = new_dtype_getter("uint64") + +class W_ULongLongBox(W_SignedIntegerBox, PrimitiveBox): + descr__new__, _get_dtype = new_dtype_getter('ulonglong') class W_InexactBox(W_NumberBox): _attrs_ = () @@ -181,12 +190,50 @@ _attrs_ = () class W_Float32Box(W_FloatingBox, PrimitiveBox): - descr__new__, get_dtype = new_dtype_getter("float32") + descr__new__, _get_dtype = new_dtype_getter("float32") class W_Float64Box(W_FloatingBox, PrimitiveBox): - descr__new__, get_dtype = new_dtype_getter("float64") + descr__new__, _get_dtype = new_dtype_getter("float64") +class W_FlexibleBox(W_GenericBox): + pass + +class W_VoidBox(W_FlexibleBox): + def __init__(self, arr, ofs): + self.arr = arr # we have to keep array alive + self.ofs = ofs + + def get_dtype(self, space): + return self.arr.dtype + + @unwrap_spec(item=str) + def descr_getitem(self, space, item): + try: + ofs, dtype = self.arr.dtype.fields[item] + except KeyError: + raise OperationError(space.w_IndexError, + space.wrap("Field %s does not exist" % item)) + return dtype.itemtype.read(self.arr, 1, self.ofs, ofs) + + @unwrap_spec(item=str) + def descr_setitem(self, space, item, w_value): + try: + ofs, dtype = self.arr.dtype.fields[item] + except KeyError: + raise OperationError(space.w_IndexError, + space.wrap("Field %s does not exist" % item)) + dtype.itemtype.store(self.arr, 1, self.ofs, ofs, + dtype.coerce(space, w_value)) + +class W_CharacterBox(W_FlexibleBox): + pass + +class W_StringBox(W_CharacterBox): + pass + +class W_UnicodeBox(W_CharacterBox): + pass W_GenericBox.typedef = TypeDef("generic", __module__ = "numpypy", @@ -330,3 +377,26 @@ __new__ = interp2app(W_Float64Box.descr__new__.im_func), ) + +W_FlexibleBox.typedef = TypeDef("flexible", W_GenericBox.typedef, + __module__ = "numpypy", +) + +W_VoidBox.typedef = TypeDef("void", W_FlexibleBox.typedef, + __module__ = "numpypy", + __getitem__ = interp2app(W_VoidBox.descr_getitem), + __setitem__ = interp2app(W_VoidBox.descr_setitem), +) + +W_CharacterBox.typedef = TypeDef("character", W_FlexibleBox.typedef, + __module__ = "numpypy", +) + +W_StringBox.typedef = TypeDef("string_", (str_typedef, W_CharacterBox.typedef), + __module__ = "numpypy", +) + +W_UnicodeBox.typedef = TypeDef("unicode_", (unicode_typedef, 
W_CharacterBox.typedef), + __module__ = "numpypy", +) + diff --git a/pypy/module/micronumpy/interp_dtype.py b/pypy/module/micronumpy/interp_dtype.py --- a/pypy/module/micronumpy/interp_dtype.py +++ b/pypy/module/micronumpy/interp_dtype.py @@ -1,26 +1,28 @@ + +import sys from pypy.interpreter.baseobjspace import Wrappable from pypy.interpreter.error import OperationError -from pypy.interpreter.gateway import interp2app +from pypy.interpreter.gateway import interp2app, unwrap_spec from pypy.interpreter.typedef import (TypeDef, GetSetProperty, interp_attrproperty, interp_attrproperty_w) -from pypy.module.micronumpy import types, signature, interp_boxes +from pypy.module.micronumpy import types, interp_boxes from pypy.rlib.objectmodel import specialize -from pypy.rlib.rarithmetic import LONG_BIT -from pypy.rpython.lltypesystem import lltype, rffi +from pypy.rlib.rarithmetic import LONG_BIT, r_longlong, r_ulonglong UNSIGNEDLTR = "u" SIGNEDLTR = "i" BOOLLTR = "b" FLOATINGLTR = "f" - - -VOID_STORAGE = lltype.Array(lltype.Char, hints={'nolength': True, 'render_as_void': True}) +VOIDLTR = 'V' +STRINGLTR = 'S' +UNICODELTR = 'U' class W_Dtype(Wrappable): _immutable_fields_ = ["itemtype", "num", "kind"] - def __init__(self, itemtype, num, kind, name, char, w_box_type, alternate_constructors=[], aliases=[]): + def __init__(self, itemtype, num, kind, name, char, w_box_type, alternate_constructors=[], aliases=[], + fields=None, fieldnames=None): self.itemtype = itemtype self.num = num self.kind = kind @@ -29,53 +31,27 @@ self.w_box_type = w_box_type self.alternate_constructors = alternate_constructors self.aliases = aliases - - def malloc(self, length): - # XXX find out why test_zjit explodes with tracking of allocations - return lltype.malloc(VOID_STORAGE, self.itemtype.get_element_size() * length, - zero=True, flavor="raw", - track_allocation=False, add_memory_pressure=True - ) + self.fields = fields + self.fieldnames = fieldnames @specialize.argtype(1) def box(self, value): return self.itemtype.box(value) def coerce(self, space, w_item): - return self.itemtype.coerce(space, w_item) + return self.itemtype.coerce(space, self, w_item) - def getitem(self, storage, i): - return self.itemtype.read(storage, self.itemtype.get_element_size(), i, 0) + def getitem(self, arr, i): + return self.itemtype.read(arr, 1, i, 0) - def getitem_bool(self, storage, i): - isize = self.itemtype.get_element_size() - return self.itemtype.read_bool(storage, isize, i, 0) + def getitem_bool(self, arr, i): + return self.itemtype.read_bool(arr, 1, i, 0) - def setitem(self, storage, i, box): - self.itemtype.store(storage, self.itemtype.get_element_size(), i, 0, box) + def setitem(self, arr, i, box): + self.itemtype.store(arr, 1, i, 0, box) def fill(self, storage, box, start, stop): - self.itemtype.fill(storage, self.itemtype.get_element_size(), box, start, stop, 0) - - def descr__new__(space, w_subtype, w_dtype): - cache = get_dtype_cache(space) - - if space.is_w(w_dtype, space.w_None): - return cache.w_float64dtype - elif space.isinstance_w(w_dtype, w_subtype): - return w_dtype - elif space.isinstance_w(w_dtype, space.w_str): - name = space.str_w(w_dtype) - for dtype in cache.builtin_dtypes: - if dtype.name == name or dtype.char == name or name in dtype.aliases: - return dtype - else: - for dtype in cache.builtin_dtypes: - if w_dtype in dtype.alternate_constructors: - return dtype - if w_dtype is dtype.w_box_type: - return dtype - raise OperationError(space.w_TypeError, space.wrap("data type not understood")) + 
self.itemtype.fill(storage, self.get_size(), box, start, stop, 0) def descr_str(self, space): return space.wrap(self.name) @@ -86,6 +62,9 @@ def descr_get_itemsize(self, space): return space.wrap(self.itemtype.get_element_size()) + def descr_get_alignment(self, space): + return space.wrap(self.itemtype.alignment) + def descr_get_shape(self, space): return space.newtuple([]) @@ -99,31 +78,175 @@ def descr_ne(self, space, w_other): return space.wrap(not self.eq(space, w_other)) + def descr_get_fields(self, space): + if self.fields is None: + return space.w_None + w_d = space.newdict() + for name, (offset, subdtype) in self.fields.iteritems(): + space.setitem(w_d, space.wrap(name), space.newtuple([subdtype, + space.wrap(offset)])) + return w_d + + def descr_get_names(self, space): + if self.fieldnames is None: + return space.w_None + return space.newtuple([space.wrap(name) for name in self.fieldnames]) + + @unwrap_spec(item=str) + def descr_getitem(self, space, item): + if self.fields is None: + raise OperationError(space.w_KeyError, space.wrap("There are no keys in dtypes %s" % self.name)) + try: + return self.fields[item][1] + except KeyError: + raise OperationError(space.w_KeyError, space.wrap("Field named %s not found" % item)) + def is_int_type(self): return (self.kind == SIGNEDLTR or self.kind == UNSIGNEDLTR or self.kind == BOOLLTR) + def is_signed(self): + return self.kind == SIGNEDLTR + def is_bool_type(self): return self.kind == BOOLLTR + def is_record_type(self): + return self.fields is not None + + def __repr__(self): + if self.fields is not None: + return '' % self.fields + return '' % self.itemtype + + def get_size(self): + return self.itemtype.get_element_size() + +def dtype_from_list(space, w_lst): + lst_w = space.listview(w_lst) + fields = {} + offset = 0 + ofs_and_items = [] + fieldnames = [] + for w_elem in lst_w: + w_fldname, w_flddesc = space.fixedview(w_elem, 2) + subdtype = descr__new__(space, space.gettypefor(W_Dtype), w_flddesc) + fldname = space.str_w(w_fldname) + if fldname in fields: + raise OperationError(space.w_ValueError, space.wrap("two fields with the same name")) + assert isinstance(subdtype, W_Dtype) + fields[fldname] = (offset, subdtype) + ofs_and_items.append((offset, subdtype.itemtype)) + offset += subdtype.itemtype.get_element_size() + fieldnames.append(fldname) + itemtype = types.RecordType(ofs_and_items, offset) + return W_Dtype(itemtype, 20, VOIDLTR, "void" + str(8 * itemtype.get_element_size()), + "V", space.gettypefor(interp_boxes.W_VoidBox), fields=fields, + fieldnames=fieldnames) + +def dtype_from_dict(space, w_dict): + raise OperationError(space.w_NotImplementedError, space.wrap( + "dtype from dict")) + +def variable_dtype(space, name): + if name[0] in '<>': + # ignore byte order, not sure if it's worth it for unicode only + if name[0] != byteorder_prefix and name[1] == 'U': + raise OperationError(space.w_NotImplementedError, space.wrap( + "unimplemented non-native unicode")) + name = name[1:] + char = name[0] + if len(name) == 1: + size = 0 + else: + try: + size = int(name[1:]) + except ValueError: + raise OperationError(space.w_TypeError, space.wrap("data type not understood")) + if char == 'S': + itemtype = types.StringType(size) + basename = 'string' + num = 18 + w_box_type = space.gettypefor(interp_boxes.W_StringBox) + elif char == 'V': + num = 20 + basename = 'void' + w_box_type = space.gettypefor(interp_boxes.W_VoidBox) + raise OperationError(space.w_NotImplementedError, space.wrap( + "pure void dtype")) + else: + assert char == 'U' + 
basename = 'unicode' + itemtype = types.UnicodeType(size) + num = 19 + w_box_type = space.gettypefor(interp_boxes.W_UnicodeBox) + return W_Dtype(itemtype, num, char, + basename + str(8 * itemtype.get_element_size()), + char, w_box_type) + +def dtype_from_spec(space, name): + raise OperationError(space.w_NotImplementedError, space.wrap( + "dtype from spec")) + +def descr__new__(space, w_subtype, w_dtype): + cache = get_dtype_cache(space) + + if space.is_w(w_dtype, space.w_None): + return cache.w_float64dtype + elif space.isinstance_w(w_dtype, w_subtype): + return w_dtype + elif space.isinstance_w(w_dtype, space.w_str): + name = space.str_w(w_dtype) + if ',' in name: + return dtype_from_spec(space, name) + try: + return cache.dtypes_by_name[name] + except KeyError: + pass + if name[0] in 'VSU' or name[0] in '<>' and name[1] in 'VSU': + return variable_dtype(space, name) + elif space.isinstance_w(w_dtype, space.w_list): + return dtype_from_list(space, w_dtype) + elif space.isinstance_w(w_dtype, space.w_dict): + return dtype_from_dict(space, w_dtype) + else: + for dtype in cache.builtin_dtypes: + if w_dtype in dtype.alternate_constructors: + return dtype + if w_dtype is dtype.w_box_type: + return dtype + raise OperationError(space.w_TypeError, space.wrap("data type not understood")) + W_Dtype.typedef = TypeDef("dtype", __module__ = "numpypy", - __new__ = interp2app(W_Dtype.descr__new__.im_func), + __new__ = interp2app(descr__new__), __str__= interp2app(W_Dtype.descr_str), __repr__ = interp2app(W_Dtype.descr_repr), __eq__ = interp2app(W_Dtype.descr_eq), __ne__ = interp2app(W_Dtype.descr_ne), + __getitem__ = interp2app(W_Dtype.descr_getitem), num = interp_attrproperty("num", cls=W_Dtype), kind = interp_attrproperty("kind", cls=W_Dtype), + char = interp_attrproperty("char", cls=W_Dtype), type = interp_attrproperty_w("w_box_type", cls=W_Dtype), itemsize = GetSetProperty(W_Dtype.descr_get_itemsize), + alignment = GetSetProperty(W_Dtype.descr_get_alignment), shape = GetSetProperty(W_Dtype.descr_get_shape), name = interp_attrproperty('name', cls=W_Dtype), + fields = GetSetProperty(W_Dtype.descr_get_fields), + names = GetSetProperty(W_Dtype.descr_get_names), ) W_Dtype.typedef.acceptable_as_base_class = False +if sys.byteorder == 'little': + byteorder_prefix = '<' + nonnative_byteorder_prefix = '>' +else: + byteorder_prefix = '>' + nonnative_byteorder_prefix = '<' + class DtypeCache(object): def __init__(self, space): self.w_booldtype = W_Dtype( @@ -211,7 +334,6 @@ name="int64", char="q", w_box_type=space.gettypefor(interp_boxes.W_Int64Box), - alternate_constructors=[space.w_long], ) self.w_uint64dtype = W_Dtype( types.UInt64(), @@ -239,18 +361,147 @@ alternate_constructors=[space.w_float], aliases=["float"], ) - + self.w_longlongdtype = W_Dtype( + types.Int64(), + num=9, + kind=SIGNEDLTR, + name='int64', + char='q', + w_box_type = space.gettypefor(interp_boxes.W_LongLongBox), + alternate_constructors=[space.w_long], + ) + self.w_ulonglongdtype = W_Dtype( + types.UInt64(), + num=10, + kind=UNSIGNEDLTR, + name='uint64', + char='Q', + w_box_type = space.gettypefor(interp_boxes.W_ULongLongBox), + ) + self.w_stringdtype = W_Dtype( + types.StringType(0), + num=18, + kind=STRINGLTR, + name='string', + char='S', + w_box_type = space.gettypefor(interp_boxes.W_StringBox), + alternate_constructors=[space.w_str], + ) + self.w_unicodedtype = W_Dtype( + types.UnicodeType(0), + num=19, + kind=UNICODELTR, + name='unicode', + char='U', + w_box_type = space.gettypefor(interp_boxes.W_UnicodeBox), + 
alternate_constructors=[space.w_unicode], + ) + self.w_voiddtype = W_Dtype( + types.VoidType(0), + num=20, + kind=VOIDLTR, + name='void', + char='V', + w_box_type = space.gettypefor(interp_boxes.W_VoidBox), + #alternate_constructors=[space.w_buffer], + # XXX no buffer in space + ) self.builtin_dtypes = [ self.w_booldtype, self.w_int8dtype, self.w_uint8dtype, self.w_int16dtype, self.w_uint16dtype, self.w_int32dtype, self.w_uint32dtype, self.w_longdtype, self.w_ulongdtype, - self.w_int64dtype, self.w_uint64dtype, self.w_float32dtype, - self.w_float64dtype + self.w_longlongdtype, self.w_ulonglongdtype, + self.w_float32dtype, + self.w_float64dtype, self.w_stringdtype, self.w_unicodedtype, + self.w_voiddtype, ] self.dtypes_by_num_bytes = sorted( (dtype.itemtype.get_element_size(), dtype) for dtype in self.builtin_dtypes ) + self.dtypes_by_name = {} + for dtype in self.builtin_dtypes: + self.dtypes_by_name[dtype.name] = dtype + can_name = dtype.kind + str(dtype.itemtype.get_element_size()) + self.dtypes_by_name[can_name] = dtype + self.dtypes_by_name[byteorder_prefix + can_name] = dtype + new_name = nonnative_byteorder_prefix + can_name + itemtypename = dtype.itemtype.__class__.__name__ + itemtype = getattr(types, 'NonNative' + itemtypename)() + self.dtypes_by_name[new_name] = W_Dtype( + itemtype, + dtype.num, dtype.kind, new_name, dtype.char, dtype.w_box_type) + for alias in dtype.aliases: + self.dtypes_by_name[alias] = dtype + self.dtypes_by_name[dtype.char] = dtype + + typeinfo_full = { + 'LONGLONG': self.w_int64dtype, + 'SHORT': self.w_int16dtype, + 'VOID': self.w_voiddtype, + #'LONGDOUBLE':, + 'UBYTE': self.w_uint8dtype, + 'UINTP': self.w_ulongdtype, + 'ULONG': self.w_ulongdtype, + 'LONG': self.w_longdtype, + 'UNICODE': self.w_unicodedtype, + #'OBJECT', + 'ULONGLONG': self.w_ulonglongdtype, + 'STRING': self.w_stringdtype, + #'CDOUBLE', + #'DATETIME', + 'UINT': self.w_uint32dtype, + 'INTP': self.w_longdtype, + #'HALF', + 'BYTE': self.w_int8dtype, + #'CFLOAT': , + #'TIMEDELTA', + 'INT': self.w_int32dtype, + 'DOUBLE': self.w_float64dtype, + 'USHORT': self.w_uint16dtype, + 'FLOAT': self.w_float32dtype, + 'BOOL': self.w_booldtype, + #, 'CLONGDOUBLE'] + } + typeinfo_partial = { + 'Generic': interp_boxes.W_GenericBox, + 'Character': interp_boxes.W_CharacterBox, + 'Flexible': interp_boxes.W_FlexibleBox, + 'Inexact': interp_boxes.W_InexactBox, + 'Integer': interp_boxes.W_IntegerBox, + 'SignedInteger': interp_boxes.W_SignedIntegerBox, + 'UnsignedInteger': interp_boxes.W_UnsignedIntegerBox, + #'ComplexFloating', + 'Number': interp_boxes.W_NumberBox, + 'Floating': interp_boxes.W_FloatingBox + } + w_typeinfo = space.newdict() + for k, v in typeinfo_partial.iteritems(): + space.setitem(w_typeinfo, space.wrap(k), space.gettypefor(v)) + for k, dtype in typeinfo_full.iteritems(): + itemsize = dtype.itemtype.get_element_size() + items_w = [space.wrap(dtype.char), + space.wrap(dtype.num), + space.wrap(itemsize * 8), # in case of changing + # number of bits per byte in the future + space.wrap(itemsize or 1)] + if dtype.is_int_type(): + if dtype.kind == BOOLLTR: + w_maxobj = space.wrap(1) + w_minobj = space.wrap(0) + elif dtype.is_signed(): + w_maxobj = space.wrap(r_longlong((1 << (itemsize*8 - 1)) + - 1)) + w_minobj = space.wrap(r_longlong(-1) << (itemsize*8 - 1)) + else: + w_maxobj = space.wrap(r_ulonglong(1 << (itemsize*8)) - 1) + w_minobj = space.wrap(0) + items_w = items_w + [w_maxobj, w_minobj] + items_w = items_w + [dtype.w_box_type] + + w_tuple = space.newtuple(items_w) + space.setitem(w_typeinfo, 
space.wrap(k), w_tuple) + self.w_typeinfo = w_typeinfo def get_dtype_cache(space): return space.fromcache(DtypeCache) diff --git a/pypy/module/micronumpy/interp_iter.py b/pypy/module/micronumpy/interp_iter.py --- a/pypy/module/micronumpy/interp_iter.py +++ b/pypy/module/micronumpy/interp_iter.py @@ -42,24 +42,65 @@ we can go faster. All the calculations happen in next() -next_step_x() tries to do the iteration for a number of steps at once, +next_skip_x() tries to do the iteration for a number of steps at once, but then we cannot gaurentee that we only overflow one single shape dimension, perhaps we could overflow times in one big step. """ # structures to describe slicing -class Chunk(object): +class BaseChunk(object): + pass + +class RecordChunk(BaseChunk): + def __init__(self, name): + self.name = name + + def apply(self, arr): + from pypy.module.micronumpy.interp_numarray import W_NDimSlice + + arr = arr.get_concrete() + ofs, subdtype = arr.dtype.fields[self.name] + # strides backstrides are identical, ofs only changes start + return W_NDimSlice(arr.start + ofs, arr.strides[:], arr.backstrides[:], + arr.shape[:], arr, subdtype) + +class Chunks(BaseChunk): + def __init__(self, l): + self.l = l + + @jit.unroll_safe + def extend_shape(self, old_shape): + shape = [] + i = -1 + for i, c in enumerate(self.l): + if c.step != 0: + shape.append(c.lgt) + s = i + 1 + assert s >= 0 + return shape[:] + old_shape[s:] + + def apply(self, arr): + from pypy.module.micronumpy.interp_numarray import W_NDimSlice,\ + VirtualSlice, ConcreteArray + + shape = self.extend_shape(arr.shape) + if not isinstance(arr, ConcreteArray): + return VirtualSlice(arr, self, shape) + r = calculate_slice_strides(arr.shape, arr.start, arr.strides, + arr.backstrides, self.l) + _, start, strides, backstrides = r + return W_NDimSlice(start, strides[:], backstrides[:], + shape[:], arr) + + +class Chunk(BaseChunk): def __init__(self, start, stop, step, lgt): self.start = start self.stop = stop self.step = step self.lgt = lgt - def extend_shape(self, shape): - if self.step != 0: - shape.append(self.lgt) - def __repr__(self): return 'Chunk(%d, %d, %d, %d)' % (self.start, self.stop, self.step, self.lgt) @@ -95,17 +136,20 @@ raise NotImplementedError class ArrayIterator(BaseIterator): - def __init__(self, size): + def __init__(self, size, element_size): self.offset = 0 self.size = size + self.element_size = element_size def next(self, shapelen): return self.next_skip_x(1) - def next_skip_x(self, ofs): + def next_skip_x(self, x): arr = instantiate(ArrayIterator) arr.size = self.size - arr.offset = self.offset + ofs + element_size = jit.promote(self.element_size) + arr.offset = self.offset + x * element_size + arr.element_size = element_size return arr def next_no_increase(self, shapelen): @@ -152,7 +196,7 @@ elif isinstance(t, ViewTransform): r = calculate_slice_strides(self.res_shape, self.offset, self.strides, - self.backstrides, t.chunks) + self.backstrides, t.chunks.l) return ViewIterator(r[1], r[2], r[3], r[0]) @jit.unroll_safe diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -4,17 +4,16 @@ from pypy.interpreter.typedef import TypeDef, GetSetProperty from pypy.module.micronumpy import (interp_ufuncs, interp_dtype, interp_boxes, signature, support, loop) -from pypy.module.micronumpy.strides import (calculate_slice_strides, - shape_agreement, find_shape_and_elems, 
get_shape_from_iterable, - calc_new_strides, to_coords) -from dot import multidim_dot, match_dot_shapes +from pypy.module.micronumpy.appbridge import get_appbridge_cache +from pypy.module.micronumpy.dot import multidim_dot, match_dot_shapes +from pypy.module.micronumpy.interp_iter import (ArrayIterator, + SkipLastAxisIterator, Chunk, ViewIterator, Chunks, RecordChunk) +from pypy.module.micronumpy.strides import (shape_agreement, + find_shape_and_elems, get_shape_from_iterable, calc_new_strides, to_coords) from pypy.rlib import jit +from pypy.rlib.rstring import StringBuilder from pypy.rpython.lltypesystem import lltype, rffi from pypy.tool.sourcetools import func_with_new_name -from pypy.rlib.rstring import StringBuilder -from pypy.module.micronumpy.interp_iter import (ArrayIterator, - SkipLastAxisIterator, Chunk, ViewIterator) -from pypy.module.micronumpy.appbridge import get_appbridge_cache count_driver = jit.JitDriver( @@ -47,7 +46,7 @@ ) flat_set_driver = jit.JitDriver( greens=['shapelen', 'base'], - reds=['step', 'ai', 'lngth', 'arr', 'basei'], + reds=['step', 'lngth', 'ri', 'arr', 'basei'], name='numpy_flatset', ) @@ -79,8 +78,8 @@ dtype = space.interp_w(interp_dtype.W_Dtype, space.call_function(space.gettypefor(interp_dtype.W_Dtype), w_dtype) ) - size, shape = _find_size_and_shape(space, w_size) - return space.wrap(W_NDimArray(size, shape[:], dtype=dtype)) + shape = _find_shape(space, w_size) + return space.wrap(W_NDimArray(shape[:], dtype=dtype)) def _unaryop_impl(ufunc_name): def impl(self, space): @@ -223,8 +222,7 @@ return scalar_w(space, dtype, space.wrap(0)) # Do the dims match? out_shape, other_critical_dim = match_dot_shapes(space, self, other) - out_size = support.product(out_shape) - result = W_NDimArray(out_size, out_shape, dtype) + result = W_NDimArray(out_shape, dtype) # This is the place to add fpypy and blas return multidim_dot(space, self.get_concrete(), other.get_concrete(), result, dtype, @@ -243,7 +241,7 @@ return space.wrap(self.find_dtype().itemtype.get_element_size()) def descr_get_nbytes(self, space): - return space.wrap(self.size * self.find_dtype().itemtype.get_element_size()) + return space.wrap(self.size) @jit.unroll_safe def descr_get_shape(self, space): @@ -251,13 +249,16 @@ def descr_set_shape(self, space, w_iterable): new_shape = get_shape_from_iterable(space, - self.size, w_iterable) + support.product(self.shape), w_iterable) if isinstance(self, Scalar): return self.get_concrete().setshape(space, new_shape) def descr_get_size(self, space): - return space.wrap(self.size) + return space.wrap(self.get_size()) + + def get_size(self): + return self.size // self.find_dtype().get_size() def descr_copy(self, space): return self.copy(space) @@ -277,7 +278,7 @@ def empty_copy(self, space, dtype): shape = self.shape - return W_NDimArray(support.product(shape), shape[:], dtype, 'C') + return W_NDimArray(shape[:], dtype, 'C') def descr_len(self, space): if len(self.shape): @@ -318,6 +319,8 @@ """ The result of getitem/setitem is a single item if w_idx is a list of scalars that match the size of shape """ + if space.isinstance_w(w_idx, space.w_str): + return False shape_len = len(self.shape) if shape_len == 0: raise OperationError(space.w_IndexError, space.wrap( @@ -343,34 +346,41 @@ @jit.unroll_safe def _prepare_slice_args(self, space, w_idx): + if space.isinstance_w(w_idx, space.w_str): + idx = space.str_w(w_idx) + dtype = self.find_dtype() + if not dtype.is_record_type() or idx not in dtype.fields: + raise OperationError(space.w_ValueError, space.wrap( + "field 
named %s not defined" % idx)) + return RecordChunk(idx) if (space.isinstance_w(w_idx, space.w_int) or space.isinstance_w(w_idx, space.w_slice)): - return [Chunk(*space.decode_index4(w_idx, self.shape[0]))] - return [Chunk(*space.decode_index4(w_item, self.shape[i])) for i, w_item in - enumerate(space.fixedview(w_idx))] + return Chunks([Chunk(*space.decode_index4(w_idx, self.shape[0]))]) + return Chunks([Chunk(*space.decode_index4(w_item, self.shape[i])) for i, w_item in + enumerate(space.fixedview(w_idx))]) - def count_all_true(self, arr): - sig = arr.find_sig() - frame = sig.create_frame(arr) - shapelen = len(arr.shape) + def count_all_true(self): + sig = self.find_sig() + frame = sig.create_frame(self) + shapelen = len(self.shape) s = 0 iter = None while not frame.done(): - count_driver.jit_merge_point(arr=arr, frame=frame, iter=iter, s=s, + count_driver.jit_merge_point(arr=self, frame=frame, iter=iter, s=s, shapelen=shapelen) iter = frame.get_final_iter() - s += arr.dtype.getitem_bool(arr.storage, iter.offset) + s += self.dtype.getitem_bool(self, iter.offset) frame.next(shapelen) return s def getitem_filter(self, space, arr): concr = arr.get_concrete() - if concr.size > self.size: + if concr.get_size() > self.get_size(): raise OperationError(space.w_IndexError, space.wrap("index out of range for array")) - size = self.count_all_true(concr) - res = W_NDimArray(size, [size], self.find_dtype()) - ri = ArrayIterator(size) + size = concr.count_all_true() + res = W_NDimArray([size], self.find_dtype()) + ri = res.create_iter() shapelen = len(self.shape) argi = concr.create_iter() sig = self.find_sig() @@ -380,7 +390,7 @@ filter_driver.jit_merge_point(concr=concr, argi=argi, ri=ri, frame=frame, v=v, res=res, sig=sig, shapelen=shapelen, self=self) - if concr.dtype.getitem_bool(concr.storage, argi.offset): + if concr.dtype.getitem_bool(concr, argi.offset): v = sig.eval(frame, self) res.setitem(ri.offset, v) ri = ri.next(1) @@ -390,23 +400,6 @@ frame.next(shapelen) return res - def setitem_filter(self, space, idx, val): - size = self.count_all_true(idx) - arr = SliceArray([size], self.dtype, self, val) - sig = arr.find_sig() - shapelen = len(self.shape) - frame = sig.create_frame(arr) - idxi = idx.create_iter() - while not frame.done(): - filter_set_driver.jit_merge_point(idx=idx, idxi=idxi, sig=sig, - frame=frame, arr=arr, - shapelen=shapelen) - if idx.dtype.getitem_bool(idx.storage, idxi.offset): - sig.eval(frame, arr) - frame.next_from_second(1) - frame.next_first(shapelen) - idxi = idxi.next(shapelen) - def descr_getitem(self, space, w_idx): if (isinstance(w_idx, BaseArray) and w_idx.shape == self.shape and w_idx.find_dtype().is_bool_type()): @@ -416,7 +409,24 @@ item = concrete._index_of_single_item(space, w_idx) return concrete.getitem(item) chunks = self._prepare_slice_args(space, w_idx) - return self.create_slice(chunks) + return chunks.apply(self) + + def setitem_filter(self, space, idx, val): + size = idx.count_all_true() + arr = SliceArray([size], self.dtype, self, val) + sig = arr.find_sig() + shapelen = len(self.shape) + frame = sig.create_frame(arr) + idxi = idx.create_iter() + while not frame.done(): + filter_set_driver.jit_merge_point(idx=idx, idxi=idxi, sig=sig, + frame=frame, arr=arr, + shapelen=shapelen) + if idx.dtype.getitem_bool(idx, idxi.offset): + sig.eval(frame, arr) + frame.next_from_second(1) + frame.next_first(shapelen) + idxi = idxi.next(shapelen) def descr_setitem(self, space, w_idx, w_value): self.invalidated() @@ -434,26 +444,9 @@ if not isinstance(w_value, 
BaseArray): w_value = convert_to_array(space, w_value) chunks = self._prepare_slice_args(space, w_idx) - view = self.create_slice(chunks).get_concrete() + view = chunks.apply(self).get_concrete() view.setslice(space, w_value) - @jit.unroll_safe - def create_slice(self, chunks): - shape = [] - i = -1 - for i, chunk in enumerate(chunks): - chunk.extend_shape(shape) - s = i + 1 - assert s >= 0 - shape += self.shape[s:] - if not isinstance(self, ConcreteArray): - return VirtualSlice(self, chunks, shape) - r = calculate_slice_strides(self.shape, self.start, self.strides, - self.backstrides, chunks) - _, start, strides, backstrides = r - return W_NDimSlice(start, strides[:], backstrides[:], - shape[:], self) - def descr_reshape(self, space, args_w): """reshape(...) a.reshape(shape) @@ -470,12 +463,13 @@ w_shape = args_w[0] else: w_shape = space.newtuple(args_w) - new_shape = get_shape_from_iterable(space, self.size, w_shape) + new_shape = get_shape_from_iterable(space, support.product(self.shape), + w_shape) return self.reshape(space, new_shape) def reshape(self, space, new_shape): concrete = self.get_concrete() - # Since we got to here, prod(new_shape) == self.size + # Since we got to here, prod(new_shape) == self.get_size() new_strides = calc_new_strides(new_shape, concrete.shape, concrete.strides, concrete.order) if new_strides: @@ -506,7 +500,7 @@ def descr_mean(self, space, w_axis=None): if space.is_w(w_axis, space.w_None): w_axis = space.wrap(-1) - w_denom = space.wrap(self.size) + w_denom = space.wrap(support.product(self.shape)) else: dim = space.int_w(w_axis) w_denom = space.wrap(self.shape[dim]) @@ -525,7 +519,7 @@ concr.fill(space, w_value) def descr_nonzero(self, space): - if self.size > 1: + if self.get_size() > 1: raise OperationError(space.w_ValueError, space.wrap( "The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()")) concr = self.get_concrete_or_scalar() @@ -604,8 +598,7 @@ space.wrap("axis unsupported for take")) index_i = index.create_iter() res_shape = index.shape - size = support.product(res_shape) - res = W_NDimArray(size, res_shape[:], concr.dtype, concr.order) + res = W_NDimArray(res_shape[:], concr.dtype, concr.order) res_i = res.create_iter() shapelen = len(index.shape) sig = concr.find_sig() @@ -644,6 +637,11 @@ raise OperationError(space.w_NotImplementedError, space.wrap( "non-int arg not supported")) + def descr_tostring(self, space): + ra = ToStringArray(self) + loop.compute(ra) + return space.wrap(ra.s.build()) + def compute_first_step(self, sig, frame): pass @@ -665,8 +663,7 @@ """ Intermediate class representing a literal. 
""" - size = 1 - _attrs_ = ["dtype", "value", "shape"] + _attrs_ = ["dtype", "value", "shape", "size"] def __init__(self, dtype, value): self.shape = [] @@ -674,6 +671,7 @@ self.dtype = dtype assert isinstance(value, interp_boxes.W_GenericBox) self.value = value + self.size = dtype.get_size() def find_dtype(self): return self.dtype @@ -691,8 +689,7 @@ return self def reshape(self, space, new_shape): - size = support.product(new_shape) - res = W_NDimArray(size, new_shape, self.dtype, 'C') + res = W_NDimArray(new_shape, self.dtype, 'C') res.setitem(0, self.value) return res @@ -705,6 +702,7 @@ self.forced_result = None self.res_dtype = res_dtype self.name = name + self.size = support.product(self.shape) * res_dtype.get_size() def _del_sources(self): # Function for deleting references to source arrays, @@ -712,7 +710,7 @@ raise NotImplementedError def compute(self): - ra = ResultArray(self, self.size, self.shape, self.res_dtype) + ra = ResultArray(self, self.shape, self.res_dtype) loop.compute(ra) return ra.left @@ -740,7 +738,6 @@ def __init__(self, child, chunks, shape): self.child = child self.chunks = chunks - self.size = support.product(shape) VirtualArray.__init__(self, 'slice', shape, child.find_dtype()) def create_sig(self): @@ -752,7 +749,7 @@ def force_if_needed(self): if self.forced_result is None: concr = self.child.get_concrete() - self.forced_result = concr.create_slice(self.chunks) + self.forced_result = self.chunks.apply(concr) def _del_sources(self): self.child = None @@ -787,7 +784,6 @@ self.left = left self.right = right self.calc_dtype = calc_dtype - self.size = support.product(self.shape) def _del_sources(self): self.left = None @@ -815,15 +811,30 @@ self.left.create_sig(), self.right.create_sig()) class ResultArray(Call2): - def __init__(self, child, size, shape, dtype, res=None, order='C'): + def __init__(self, child, shape, dtype, res=None, order='C'): if res is None: - res = W_NDimArray(size, shape, dtype, order) + res = W_NDimArray(shape, dtype, order) Call2.__init__(self, None, 'assign', shape, dtype, dtype, res, child) def create_sig(self): return signature.ResultSignature(self.res_dtype, self.left.create_sig(), self.right.create_sig()) +class ToStringArray(Call1): + def __init__(self, child): + dtype = child.find_dtype() + self.itemsize = dtype.itemtype.get_element_size() + self.s = StringBuilder(child.size * self.itemsize) + Call1.__init__(self, None, 'tostring', child.shape, dtype, dtype, + child) + self.res = W_NDimArray([1], dtype, 'C') + self.res_casted = rffi.cast(rffi.CArrayPtr(lltype.Char), + self.res.storage) + + def create_sig(self): + return signature.ToStringSignature(self.calc_dtype, + self.values.create_sig()) + def done_if_true(dtype, val): return dtype.itemtype.bool(val) @@ -897,13 +908,13 @@ """ _immutable_fields_ = ['storage'] - def __init__(self, size, shape, dtype, order='C', parent=None): - self.size = size + def __init__(self, shape, dtype, order='C', parent=None): self.parent = parent + self.size = support.product(shape) * dtype.get_size() if parent is not None: self.storage = parent.storage else: - self.storage = dtype.malloc(size) + self.storage = dtype.itemtype.malloc(self.size) self.order = order self.dtype = dtype if self.strides is None: @@ -922,13 +933,14 @@ return self.dtype def getitem(self, item): - return self.dtype.getitem(self.storage, item) + return self.dtype.getitem(self, item) def setitem(self, item, value): self.invalidated() - self.dtype.setitem(self.storage, item, value) + self.dtype.setitem(self, item, value) def 
calc_strides(self, shape): + dtype = self.find_dtype() strides = [] backstrides = [] s = 1 @@ -936,8 +948,8 @@ if self.order == 'C': shape_rev.reverse() for sh in shape_rev: - strides.append(s) - backstrides.append(s * (sh - 1)) + strides.append(s * dtype.get_size()) + backstrides.append(s * (sh - 1) * dtype.get_size()) s *= sh if self.order == 'C': strides.reverse() @@ -985,9 +997,9 @@ shapelen = len(self.shape) if shapelen == 1: rffi.c_memcpy( - rffi.ptradd(self.storage, self.start * itemsize), - rffi.ptradd(w_value.storage, w_value.start * itemsize), - self.size * itemsize + rffi.ptradd(self.storage, self.start), + rffi.ptradd(w_value.storage, w_value.start), + self.size ) else: dest = SkipLastAxisIterator(self) @@ -1002,7 +1014,7 @@ dest.next() def copy(self, space): - array = W_NDimArray(self.size, self.shape[:], self.dtype, self.order) + array = W_NDimArray(self.shape[:], self.dtype, self.order) array.setslice(space, self) return array @@ -1016,14 +1028,15 @@ class W_NDimSlice(ViewArray): - def __init__(self, start, strides, backstrides, shape, parent): + def __init__(self, start, strides, backstrides, shape, parent, dtype=None): assert isinstance(parent, ConcreteArray) if isinstance(parent, W_NDimSlice): parent = parent.parent self.strides = strides self.backstrides = backstrides - ViewArray.__init__(self, support.product(shape), shape, parent.dtype, - parent.order, parent) + if dtype is None: + dtype = parent.dtype + ViewArray.__init__(self, shape, dtype, parent.order, parent) self.start = start def create_iter(self, transforms=None): @@ -1038,12 +1051,13 @@ # but then calc_strides would have to accept a stepping factor strides = [] backstrides = [] - s = self.strides[0] + dtype = self.find_dtype() + s = self.strides[0] // dtype.get_size() if self.order == 'C': new_shape.reverse() for sh in new_shape: - strides.append(s) - backstrides.append(s * (sh - 1)) + strides.append(s * dtype.get_size()) + backstrides.append(s * (sh - 1) * dtype.get_size()) s *= sh if self.order == 'C': strides.reverse() @@ -1071,14 +1085,16 @@ """ def setitem(self, item, value): self.invalidated() - self.dtype.setitem(self.storage, item, value) + self.dtype.setitem(self, item, value) def setshape(self, space, new_shape): self.shape = new_shape self.calc_strides(new_shape) def create_iter(self, transforms=None): - return ArrayIterator(self.size).apply_transformations(self, transforms) + esize = self.find_dtype().get_size() + return ArrayIterator(self.size, esize).apply_transformations(self, + transforms) def create_sig(self): return signature.ArraySignature(self.dtype) @@ -1086,18 +1102,13 @@ def __del__(self): lltype.free(self.storage, flavor='raw', track_allocation=False) -def _find_size_and_shape(space, w_size): +def _find_shape(space, w_size): if space.isinstance_w(w_size, space.w_int): - size = space.int_w(w_size) - shape = [size] - else: - size = 1 - shape = [] - for w_item in space.fixedview(w_size): - item = space.int_w(w_item) - size *= item - shape.append(item) - return size, shape + return [space.int_w(w_size)] + shape = [] + for w_item in space.fixedview(w_size): + shape.append(space.int_w(w_item)) + return shape @unwrap_spec(subok=bool, copy=bool, ownmaskna=bool) def array(space, w_item_or_iterable, w_dtype=None, w_order=None, @@ -1131,28 +1142,28 @@ if copy: return w_item_or_iterable.copy(space) return w_item_or_iterable - shape, elems_w = find_shape_and_elems(space, w_item_or_iterable) + if w_dtype is None or space.is_w(w_dtype, space.w_None): + dtype = None + else: + dtype = 
space.interp_w(interp_dtype.W_Dtype, + space.call_function(space.gettypefor(interp_dtype.W_Dtype), w_dtype)) + shape, elems_w = find_shape_and_elems(space, w_item_or_iterable, dtype) # they come back in C order - size = len(elems_w) - if w_dtype is None or space.is_w(w_dtype, space.w_None): - w_dtype = None + if dtype is None: for w_elem in elems_w: - w_dtype = interp_ufuncs.find_dtype_for_scalar(space, w_elem, - w_dtype) - if w_dtype is interp_dtype.get_dtype_cache(space).w_float64dtype: + dtype = interp_ufuncs.find_dtype_for_scalar(space, w_elem, + dtype) + if dtype is interp_dtype.get_dtype_cache(space).w_float64dtype: break - if w_dtype is None: - w_dtype = space.w_None - dtype = space.interp_w(interp_dtype.W_Dtype, - space.call_function(space.gettypefor(interp_dtype.W_Dtype), w_dtype) - ) - arr = W_NDimArray(size, shape[:], dtype=dtype, order=order) + if dtype is None: + dtype = interp_dtype.get_dtype_cache(space).w_float64dtype + arr = W_NDimArray(shape[:], dtype=dtype, order=order) shapelen = len(shape) - arr_iter = ArrayIterator(arr.size) + arr_iter = arr.create_iter() # XXX we might want to have a jitdriver here for i in range(len(elems_w)): w_elem = elems_w[i] - dtype.setitem(arr.storage, arr_iter.offset, + dtype.setitem(arr, arr_iter.offset, dtype.coerce(space, w_elem)) arr_iter = arr_iter.next(shapelen) return arr @@ -1161,22 +1172,22 @@ dtype = space.interp_w(interp_dtype.W_Dtype, space.call_function(space.gettypefor(interp_dtype.W_Dtype), w_dtype) ) - size, shape = _find_size_and_shape(space, w_size) + shape = _find_shape(space, w_size) if not shape: return scalar_w(space, dtype, space.wrap(0)) - return space.wrap(W_NDimArray(size, shape[:], dtype=dtype)) + return space.wrap(W_NDimArray(shape[:], dtype=dtype)) def ones(space, w_size, w_dtype=None): dtype = space.interp_w(interp_dtype.W_Dtype, space.call_function(space.gettypefor(interp_dtype.W_Dtype), w_dtype) ) - size, shape = _find_size_and_shape(space, w_size) + shape = _find_shape(space, w_size) if not shape: return scalar_w(space, dtype, space.wrap(1)) - arr = W_NDimArray(size, shape[:], dtype=dtype) + arr = W_NDimArray(shape[:], dtype=dtype) one = dtype.box(1) - arr.dtype.fill(arr.storage, one, 0, size) + arr.dtype.fill(arr.storage, one, 0, arr.size) return space.wrap(arr) @unwrap_spec(arr=BaseArray, skipna=bool, keepdims=bool) @@ -1224,13 +1235,13 @@ "array dimensions must agree except for axis being concatenated")) elif i == axis: shape[i] += axis_size - res = W_NDimArray(support.product(shape), shape, dtype, 'C') + res = W_NDimArray(shape, dtype, 'C') chunks = [Chunk(0, i, 1, i) for i in shape] axis_start = 0 for arr in args_w: chunks[axis] = Chunk(axis_start, axis_start + arr.shape[axis], 1, arr.shape[axis]) - res.create_slice(chunks).setslice(space, arr) + Chunks(chunks).apply(res).setslice(space, arr) axis_start += arr.shape[axis] return res @@ -1297,6 +1308,7 @@ nbytes = GetSetProperty(BaseArray.descr_get_nbytes), T = GetSetProperty(BaseArray.descr_get_transpose), + transpose = interp2app(BaseArray.descr_get_transpose), flat = GetSetProperty(BaseArray.descr_get_flatiter), ravel = interp2app(BaseArray.descr_ravel), item = interp2app(BaseArray.descr_item), @@ -1315,6 +1327,7 @@ std = interp2app(BaseArray.descr_std), fill = interp2app(BaseArray.descr_fill), + tostring = interp2app(BaseArray.descr_tostring), copy = interp2app(BaseArray.descr_copy), flatten = interp2app(BaseArray.descr_flatten), @@ -1337,7 +1350,7 @@ self.iter = sig.create_frame(arr).get_final_iter() self.base = arr self.index = 0 - 
ViewArray.__init__(self, arr.size, [arr.size], arr.dtype, arr.order, + ViewArray.__init__(self, [arr.get_size()], arr.dtype, arr.order, arr) def descr_next(self, space): @@ -1352,7 +1365,7 @@ return self def descr_len(self, space): - return space.wrap(self.size) + return space.wrap(self.get_size()) def descr_index(self, space): return space.wrap(self.index) @@ -1370,28 +1383,26 @@ raise OperationError(space.w_IndexError, space.wrap('unsupported iterator index')) base = self.base - start, stop, step, lngth = space.decode_index4(w_idx, base.size) + start, stop, step, lngth = space.decode_index4(w_idx, base.get_size()) # setslice would have been better, but flat[u:v] for arbitrary # shapes of array a cannot be represented as a[x1:x2, y1:y2] basei = ViewIterator(base.start, base.strides, - base.backstrides,base.shape) + base.backstrides, base.shape) shapelen = len(base.shape) basei = basei.next_skip_x(shapelen, start) if lngth <2: return base.getitem(basei.offset) - ri = ArrayIterator(lngth) - res = W_NDimArray(lngth, [lngth], base.dtype, - base.order) + res = W_NDimArray([lngth], base.dtype, base.order) + ri = res.create_iter() while not ri.done(): flat_get_driver.jit_merge_point(shapelen=shapelen, base=base, basei=basei, step=step, res=res, - ri=ri, - ) + ri=ri) w_val = base.getitem(basei.offset) - res.setitem(ri.offset,w_val) + res.setitem(ri.offset, w_val) basei = basei.next_skip_x(shapelen, step) ri = ri.next(shapelen) return res @@ -1402,27 +1413,28 @@ raise OperationError(space.w_IndexError, space.wrap('unsupported iterator index')) base = self.base - start, stop, step, lngth = space.decode_index4(w_idx, base.size) + start, stop, step, lngth = space.decode_index4(w_idx, base.get_size()) arr = convert_to_array(space, w_value) - ai = 0 + ri = arr.create_iter() basei = ViewIterator(base.start, base.strides, - base.backstrides,base.shape) + base.backstrides, base.shape) shapelen = len(base.shape) basei = basei.next_skip_x(shapelen, start) while lngth > 0: flat_set_driver.jit_merge_point(shapelen=shapelen, - basei=basei, - base=base, - step=step, - arr=arr, - ai=ai, - lngth=lngth, - ) - v = arr.getitem(ai).convert_to(base.dtype) + basei=basei, + base=base, + step=step, + arr=arr, + lngth=lngth, + ri=ri) + v = arr.getitem(ri.offset).convert_to(base.dtype) base.setitem(basei.offset, v) # need to repeat input values until all assignments are done - ai = (ai + 1) % arr.size basei = basei.next_skip_x(shapelen, step) + ri = ri.next(shapelen) + # WTF is numpy thinking? 
+ ri.offset %= arr.size lngth -= 1 def create_sig(self): @@ -1430,9 +1442,9 @@ def create_iter(self, transforms=None): return ViewIterator(self.base.start, self.base.strides, - self.base.backstrides, - self.base.shape).apply_transformations(self.base, - transforms) + self.base.backstrides, + self.base.shape).apply_transformations(self.base, + transforms) def descr_base(self, space): return space.wrap(self.base) diff --git a/pypy/module/micronumpy/interp_support.py b/pypy/module/micronumpy/interp_support.py --- a/pypy/module/micronumpy/interp_support.py +++ b/pypy/module/micronumpy/interp_support.py @@ -51,9 +51,11 @@ raise OperationError(space.w_ValueError, space.wrap( "string is smaller than requested size")) - a = W_NDimArray(num_items, [num_items], dtype=dtype) - for i, val in enumerate(items): - a.dtype.setitem(a.storage, i, val) + a = W_NDimArray([num_items], dtype=dtype) + ai = a.create_iter() + for val in items: + a.dtype.setitem(a, ai.offset, val) + ai = ai.next(1) return space.wrap(a) @@ -61,6 +63,7 @@ from pypy.module.micronumpy.interp_numarray import W_NDimArray itemsize = dtype.itemtype.get_element_size() + assert itemsize >= 0 if count == -1: count = length / itemsize if length % itemsize != 0: @@ -71,10 +74,14 @@ raise OperationError(space.w_ValueError, space.wrap( "string is smaller than requested size")) - a = W_NDimArray(count, [count], dtype=dtype) + a = W_NDimArray([count], dtype=dtype) + ai = a.create_iter() for i in range(count): - val = dtype.itemtype.runpack_str(s[i*itemsize:i*itemsize + itemsize]) - a.dtype.setitem(a.storage, i, val) + start = i*itemsize + assert start >= 0 + val = dtype.itemtype.runpack_str(s[start:start + itemsize]) + a.dtype.setitem(a, ai.offset, val) + ai = ai.next(1) return space.wrap(a) diff --git a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py --- a/pypy/module/micronumpy/interp_ufuncs.py +++ b/pypy/module/micronumpy/interp_ufuncs.py @@ -156,7 +156,7 @@ shape = obj.shape[:dim] + [1] + obj.shape[dim + 1:] else: shape = obj.shape[:dim] + obj.shape[dim + 1:] - result = W_NDimArray(support.product(shape), shape, dtype) + result = W_NDimArray(shape, dtype) arr = AxisReduce(self.func, self.name, self.identity, obj.shape, dtype, result, obj, dim) loop.compute(arr) diff --git a/pypy/module/micronumpy/signature.py b/pypy/module/micronumpy/signature.py --- a/pypy/module/micronumpy/signature.py +++ b/pypy/module/micronumpy/signature.py @@ -143,11 +143,10 @@ from pypy.module.micronumpy.interp_numarray import ConcreteArray concr = arr.get_concrete() assert isinstance(concr, ConcreteArray) - storage = concr.storage if self.iter_no >= len(iterlist): iterlist.append(concr.create_iter(transforms)) if self.array_no >= len(arraylist): - arraylist.append(storage) + arraylist.append(concr) def eval(self, frame, arr): iter = frame.iterators[self.iter_no] @@ -321,7 +320,23 @@ assert isinstance(arr, ResultArray) offset = frame.get_final_iter().offset - arr.left.setitem(offset, self.right.eval(frame, arr.right)) + res = arr.left + if frame.first_iteration: + jit.assert_aligned(res, offset) + res.setitem(offset, self.right.eval(frame, arr.right)) + +class ToStringSignature(Call1): + def __init__(self, dtype, child): + Call1.__init__(self, None, 'tostring', dtype, child) + + def eval(self, frame, arr): + from pypy.module.micronumpy.interp_numarray import ToStringArray + + assert isinstance(arr, ToStringArray) + arr.res.setitem(0, self.child.eval(frame, arr.values).convert_to( + self.dtype)) + for i in range(arr.itemsize): + 
arr.s.append(arr.res_casted[i]) class BroadcastLeft(Call2): def _invent_numbering(self, cache, allnumbers): @@ -389,7 +404,10 @@ assert isinstance(arr, Call2) ofs = frame.iterators[0].offset - arr.left.setitem(ofs, self.right.eval(frame, arr.right).convert_to( + res = arr.left + if frame.first_iteration: + jit.assert_aligned(res, ofs) + res.setitem(ofs, self.right.eval(frame, arr.right).convert_to( self.calc_dtype)) def debug_repr(self): diff --git a/pypy/module/micronumpy/strides.py b/pypy/module/micronumpy/strides.py --- a/pypy/module/micronumpy/strides.py +++ b/pypy/module/micronumpy/strides.py @@ -38,22 +38,31 @@ rbackstrides = [0] * (len(res_shape) - len(orig_shape)) + rbackstrides return rstrides, rbackstrides -def find_shape_and_elems(space, w_iterable): +def is_single_elem(space, w_elem, is_rec_type): + if (is_rec_type and space.isinstance_w(w_elem, space.w_tuple)): + return True + if space.issequence_w(w_elem): + return False + return True + +def find_shape_and_elems(space, w_iterable, dtype): shape = [space.len_w(w_iterable)] batch = space.listview(w_iterable) + is_rec_type = dtype is not None and dtype.is_record_type() while True: new_batch = [] if not batch: return shape, [] - if not space.issequence_w(batch[0]): - for elem in batch: - if space.issequence_w(elem): + if is_single_elem(space, batch[0], is_rec_type): + for w_elem in batch: + if not is_single_elem(space, w_elem, is_rec_type): raise OperationError(space.w_ValueError, space.wrap( "setting an array element with a sequence")) return shape, batch size = space.len_w(batch[0]) for w_elem in batch: - if not space.issequence_w(w_elem) or space.len_w(w_elem) != size: + if (is_single_elem(space, w_elem, is_rec_type) or + space.len_w(w_elem) != size): raise OperationError(space.w_ValueError, space.wrap( "setting an array element with a sequence")) new_batch += space.listview(w_elem) diff --git a/pypy/module/micronumpy/test/test_base.py b/pypy/module/micronumpy/test/test_base.py --- a/pypy/module/micronumpy/test/test_base.py +++ b/pypy/module/micronumpy/test/test_base.py @@ -21,8 +21,8 @@ float64_dtype = get_dtype_cache(space).w_float64dtype bool_dtype = get_dtype_cache(space).w_booldtype - ar = W_NDimArray(10, [10], dtype=float64_dtype) - ar2 = W_NDimArray(10, [10], dtype=float64_dtype) + ar = W_NDimArray([10], dtype=float64_dtype) + ar2 = W_NDimArray([10], dtype=float64_dtype) v1 = ar.descr_add(space, ar) v2 = ar.descr_add(space, Scalar(float64_dtype, W_Float64Box(2.0))) sig1 = v1.find_sig() @@ -40,7 +40,7 @@ v4 = ar.descr_add(space, ar) assert v1.find_sig() is v4.find_sig() - bool_ar = W_NDimArray(10, [10], dtype=bool_dtype) + bool_ar = W_NDimArray([10], dtype=bool_dtype) v5 = ar.descr_add(space, bool_ar) assert v5.find_sig() is not v1.find_sig() assert v5.find_sig() is not v2.find_sig() @@ -57,7 +57,7 @@ def test_slice_signature(self, space): float64_dtype = get_dtype_cache(space).w_float64dtype - ar = W_NDimArray(10, [10], dtype=float64_dtype) + ar = W_NDimArray([10], dtype=float64_dtype) v1 = ar.descr_getitem(space, space.wrap(slice(1, 3, 1))) v2 = ar.descr_getitem(space, space.wrap(slice(4, 6, 1))) assert v1.find_sig() is v2.find_sig() diff --git a/pypy/module/micronumpy/test/test_dtypes.py b/pypy/module/micronumpy/test/test_dtypes.py --- a/pypy/module/micronumpy/test/test_dtypes.py +++ b/pypy/module/micronumpy/test/test_dtypes.py @@ -1,5 +1,6 @@ from pypy.module.micronumpy.test.test_base import BaseNumpyAppTest - +from pypy.module.micronumpy.interp_dtype import nonnative_byteorder_prefix +from pypy.interpreter.gateway 
import interp2app class AppTestDtypes(BaseNumpyAppTest): def test_dtype(self): @@ -12,7 +13,10 @@ assert dtype(d) is d assert dtype(None) is dtype(float) assert dtype('int8').name == 'int8' + assert dtype(int).fields is None + assert dtype(int).names is None raises(TypeError, dtype, 1042) + raises(KeyError, 'dtype(int)["asdasd"]') def test_dtype_eq(self): from _numpypy import dtype @@ -53,13 +57,13 @@ assert a[i] is True_ def test_copy_array_with_dtype(self): - from _numpypy import array, False_, True_, int64 + from _numpypy import array, False_, longlong a = array([0, 1, 2, 3], dtype=long) # int on 64-bit, long in 32-bit - assert isinstance(a[0], int64) + assert isinstance(a[0], longlong) b = a.copy() - assert isinstance(b[0], int64) + assert isinstance(b[0], longlong) a = array([0, 1, 2, 3], dtype=bool) assert a[0] is False_ @@ -81,17 +85,17 @@ assert a[i] is True_ def test_zeros_long(self): - from _numpypy import zeros, int64 + from _numpypy import zeros, longlong a = zeros(10, dtype=long) for i in range(10): - assert isinstance(a[i], int64) + assert isinstance(a[i], longlong) assert a[1] == 0 def test_ones_long(self): - from _numpypy import ones, int64 + from _numpypy import ones, longlong a = ones(10, dtype=long) for i in range(10): - assert isinstance(a[i], int64) + assert isinstance(a[i], longlong) assert a[1] == 1 def test_overflow(self): @@ -182,16 +186,31 @@ class AppTestTypes(BaseNumpyAppTest): + def setup_class(cls): + BaseNumpyAppTest.setup_class.im_func(cls) + cls.w_non_native_prefix = cls.space.wrap(nonnative_byteorder_prefix) + def check_non_native(w_obj, w_obj2): + assert w_obj.storage[0] == w_obj2.storage[1] + assert w_obj.storage[1] == w_obj2.storage[0] + if w_obj.storage[0] == '\x00': + assert w_obj2.storage[1] == '\x00' + assert w_obj2.storage[0] == '\x01' + else: + assert w_obj2.storage[1] == '\x01' + assert w_obj2.storage[0] == '\x00' + cls.w_check_non_native = cls.space.wrap(interp2app(check_non_native)) + def test_abstract_types(self): import _numpypy as numpy raises(TypeError, numpy.generic, 0) raises(TypeError, numpy.number, 0) raises(TypeError, numpy.integer, 0) exc = raises(TypeError, numpy.signedinteger, 0) - assert str(exc.value) == "cannot create 'signedinteger' instances" + assert 'cannot create' in str(exc.value) + assert 'signedinteger' in str(exc.value) exc = raises(TypeError, numpy.unsignedinteger, 0) - assert str(exc.value) == "cannot create 'unsignedinteger' instances" - + assert 'cannot create' in str(exc.value) + assert 'unsignedinteger' in str(exc.value) raises(TypeError, numpy.floating, 0) raises(TypeError, numpy.inexact, 0) @@ -402,10 +421,29 @@ assert issubclass(int64, int) assert int_ is int64 + def test_various_types(self): + import _numpypy as numpy + import sys + + assert numpy.int16 is numpy.short + assert numpy.int8 is numpy.byte + assert numpy.bool_ is numpy.bool8 + if sys.maxint == (1 << 63) - 1: + assert numpy.intp is numpy.int64 + else: + assert numpy.intp is numpy.int32 + + def test_mro(self): + import _numpypy as numpy + + assert numpy.int16.__mro__ == (numpy.int16, numpy.signedinteger, + numpy.integer, numpy.number, + numpy.generic, object) + assert numpy.bool_.__mro__ == (numpy.bool_, numpy.generic, object) + def test_operators(self): from operator import truediv from _numpypy import float64, int_, True_, False_ - assert 5 / int_(2) == int_(2) assert truediv(int_(3), int_(2)) == float64(1.5) assert truediv(3, int_(2)) == float64(1.5) @@ -425,9 +463,86 @@ assert int_(3) ^ int_(5) == int_(6) assert True_ ^ False_ is True_ assert 5 ^ 
int_(3) == int_(6) - assert +int_(3) == int_(3) assert ~int_(3) == int_(-4) - raises(TypeError, lambda: float64(3) & 1) + def test_alternate_constructs(self): + from _numpypy import dtype + assert dtype('i8') == dtype('i2')[::2].tostring() == '\x00\x01\x00\x03' class AppTestRanges(BaseNumpyAppTest): def test_arange(self): @@ -1862,3 +1878,44 @@ cache = get_appbridge_cache(cls.space) cache.w_array_repr = cls.old_array_repr cache.w_array_str = cls.old_array_str + +class AppTestRecordDtype(BaseNumpyAppTest): + def test_zeros(self): + from _numpypy import zeros + a = zeros(2, dtype=[('x', int), ('y', float)]) + raises(IndexError, 'a[0]["xyz"]') + assert a[0]['x'] == 0 + assert a[0]['y'] == 0 + raises(ValueError, "a[0] = (1, 2, 3)") + a[0]['x'] = 13 + assert a[0]['x'] == 13 + a[1] = (1, 2) + assert a[1]['y'] == 2 + b = zeros(2, dtype=[('x', int), ('y', float)]) + b[1] = a[1] + assert a[1]['y'] == 2 + + def test_views(self): + from _numpypy import array + a = array([(1, 2), (3, 4)], dtype=[('x', int), ('y', float)]) + raises(ValueError, 'array([1])["x"]') + raises(ValueError, 'a["z"]') + assert a['x'][1] == 3 + assert a['y'][1] == 4 + a['x'][0] = 15 + assert a['x'][0] == 15 + b = a['x'] + a['y'] + assert (b == [15+2, 3+4]).all() + assert b.dtype == float + + def test_assign_tuple(self): + from _numpypy import zeros + a = zeros((2, 3), dtype=[('x', int), ('y', float)]) + a[1, 2] = (1, 2) + assert a['x'][1, 2] == 1 + assert a['y'][1, 2] == 2 + + def test_creation_and_repr(self): + from _numpypy import array + a = array([(1, 2), (3, 4)], dtype=[('x', int), ('y', float)]) + assert repr(a[0]) == '(1, 2.0)' diff --git a/pypy/module/micronumpy/test/test_zjit.py b/pypy/module/micronumpy/test/test_zjit.py --- a/pypy/module/micronumpy/test/test_zjit.py +++ b/pypy/module/micronumpy/test/test_zjit.py @@ -503,7 +503,7 @@ dtype = float64_dtype else: dtype = int32_dtype - ar = W_NDimArray(n, [n], dtype=dtype) + ar = W_NDimArray([n], dtype=dtype) i = 0 while i < n: ar.get_concrete().setitem(i, int32_dtype.box(7)) diff --git a/pypy/module/micronumpy/types.py b/pypy/module/micronumpy/types.py --- a/pypy/module/micronumpy/types.py +++ b/pypy/module/micronumpy/types.py @@ -1,15 +1,20 @@ import functools import math +import struct from pypy.interpreter.error import OperationError from pypy.module.micronumpy import interp_boxes from pypy.objspace.std.floatobject import float2string from pypy.rlib import rfloat, libffi, clibffi -from pypy.rlib.objectmodel import specialize -from pypy.rlib.rarithmetic import LONG_BIT, widen +from pypy.rlib.objectmodel import specialize, we_are_translated +from pypy.rlib.rarithmetic import widen, byteswap from pypy.rpython.lltypesystem import lltype, rffi from pypy.rlib.rstruct.runpack import runpack +from pypy.tool.sourcetools import func_with_new_name +from pypy.rlib import jit +VOID_STORAGE = lltype.Array(lltype.Char, hints={'nolength': True, + 'render_as_void': True}) def simple_unary_op(func): specialize.argtype(1)(func) @@ -60,6 +65,15 @@ def _unimplemented_ufunc(self, *args): raise NotImplementedError + def malloc(self, size): + # XXX find out why test_zjit explodes with tracking of allocations + return lltype.malloc(VOID_STORAGE, size, + zero=True, flavor="raw", + track_allocation=False, add_memory_pressure=True) + + def __repr__(self): + return self.__class__.__name__ + class Primitive(object): _mixin_ = True @@ -74,7 +88,7 @@ assert isinstance(box, self.BoxType) return box.value - def coerce(self, space, w_item): + def coerce(self, space, dtype, w_item): if isinstance(w_item, 
self.BoxType): return w_item return self.coerce_subtype(space, space.gettypefor(self.BoxType), w_item) @@ -95,32 +109,41 @@ def default_fromstring(self, space): raise NotImplementedError - def read(self, storage, width, i, offset): - return self.box(libffi.array_getitem(clibffi.cast_type_to_ffitype(self.T), - width, storage, i, offset - )) + def _read(self, storage, width, i, offset): + if we_are_translated(): + return libffi.array_getitem(clibffi.cast_type_to_ffitype(self.T), + width, storage, i, offset) + else: + return libffi.array_getitem_T(self.T, width, storage, i, offset) - def read_bool(self, storage, width, i, offset): - return bool(self.for_computation( - libffi.array_getitem(clibffi.cast_type_to_ffitype(self.T), - width, storage, i, offset))) + def read(self, arr, width, i, offset): + return self.box(self._read(arr.storage, width, i, offset)) - def store(self, storage, width, i, offset, box): - value = self.unbox(box) - libffi.array_setitem(clibffi.cast_type_to_ffitype(self.T), - width, storage, i, offset, value - ) + def read_bool(self, arr, width, i, offset): + return bool(self.for_computation(self._read(arr.storage, width, i, offset))) + + def _write(self, storage, width, i, offset, value): + if we_are_translated(): + libffi.array_setitem(clibffi.cast_type_to_ffitype(self.T), + width, storage, i, offset, value) + else: + libffi.array_setitem_T(self.T, width, storage, i, offset, value) + + + def store(self, arr, width, i, offset, box): + self._write(arr.storage, width, i, offset, self.unbox(box)) def fill(self, storage, width, box, start, stop, offset): value = self.unbox(box) - for i in xrange(start, stop): - libffi.array_setitem(clibffi.cast_type_to_ffitype(self.T), - width, storage, i, offset, value - ) + for i in xrange(start, stop, width): + self._write(storage, 1, i, offset, value) def runpack_str(self, s): return self.box(runpack(self.format_code, s)) + def pack_str(self, box): + return struct.pack(self.format_code, self.unbox(box)) + @simple_binary_op def add(self, v1, v2): return v1 + v2 @@ -204,6 +227,17 @@ def min(self, v1, v2): return min(v1, v2) +class NonNativePrimitive(Primitive): + _mixin_ = True + + def _read(self, storage, width, i, offset): + return byteswap(Primitive._read(self, storage, width, i, offset)) + + def _write(self, storage, width, i, offset, value): + Primitive._write(self, storage, width, i, offset, byteswap(value)) + + def pack_str(self, box): + return struct.pack(self.format_code, byteswap(self.unbox(box))) class Bool(BaseType, Primitive): T = lltype.Bool @@ -232,8 +266,7 @@ return space.wrap(self.unbox(w_item)) def str_format(self, box): - value = self.unbox(box) - return "True" if value else "False" + return "True" if self.unbox(box) else "False" def for_computation(self, v): return int(v) @@ -257,15 +290,18 @@ def invert(self, v): return ~v +NonNativeBool = Bool + class Integer(Primitive): _mixin_ = True + def _base_coerce(self, space, w_item): + return self.box(space.int_w(space.call_function(space.w_int, w_item))) def _coerce(self, space, w_item): - return self.box(space.int_w(space.call_function(space.w_int, w_item))) + return self._base_coerce(space, w_item) def str_format(self, box): - value = self.unbox(box) - return str(self.for_computation(value)) + return str(self.for_computation(self.unbox(box))) def for_computation(self, v): return widen(v) @@ -329,68 +365,117 @@ def invert(self, v): return ~v +class NonNativeInteger(NonNativePrimitive, Integer): + _mixin_ = True + class Int8(BaseType, Integer): T = rffi.SIGNEDCHAR BoxType = 
interp_boxes.W_Int8Box format_code = "b" +NonNativeInt8 = Int8 class UInt8(BaseType, Integer): T = rffi.UCHAR BoxType = interp_boxes.W_UInt8Box format_code = "B" +NonNativeUInt8 = UInt8 class Int16(BaseType, Integer): T = rffi.SHORT BoxType = interp_boxes.W_Int16Box format_code = "h" +class NonNativeInt16(BaseType, NonNativeInteger): + T = rffi.SHORT + BoxType = interp_boxes.W_Int16Box + format_code = "h" + class UInt16(BaseType, Integer): T = rffi.USHORT BoxType = interp_boxes.W_UInt16Box format_code = "H" +class NonNativeUInt16(BaseType, NonNativeInteger): + T = rffi.USHORT + BoxType = interp_boxes.W_UInt16Box + format_code = "H" + class Int32(BaseType, Integer): T = rffi.INT BoxType = interp_boxes.W_Int32Box format_code = "i" +class NonNativeInt32(BaseType, NonNativeInteger): + T = rffi.INT + BoxType = interp_boxes.W_Int32Box + format_code = "i" + class UInt32(BaseType, Integer): T = rffi.UINT BoxType = interp_boxes.W_UInt32Box format_code = "I" +class NonNativeUInt32(BaseType, NonNativeInteger): + T = rffi.UINT + BoxType = interp_boxes.W_UInt32Box + format_code = "I" + class Long(BaseType, Integer): T = rffi.LONG BoxType = interp_boxes.W_LongBox format_code = "l" +class NonNativeLong(BaseType, NonNativeInteger): + T = rffi.LONG + BoxType = interp_boxes.W_LongBox + format_code = "l" + class ULong(BaseType, Integer): T = rffi.ULONG BoxType = interp_boxes.W_ULongBox format_code = "L" +class NonNativeULong(BaseType, NonNativeInteger): + T = rffi.ULONG + BoxType = interp_boxes.W_ULongBox + format_code = "L" + class Int64(BaseType, Integer): T = rffi.LONGLONG BoxType = interp_boxes.W_Int64Box format_code = "q" +class NonNativeInt64(BaseType, NonNativeInteger): + T = rffi.LONGLONG + BoxType = interp_boxes.W_Int64Box + format_code = "q" + +def _uint64_coerce(self, space, w_item): + try: + return self._base_coerce(space, w_item) + except OperationError, e: + if not e.match(space, space.w_OverflowError): + raise + bigint = space.bigint_w(w_item) + try: + value = bigint.toulonglong() + except OverflowError: + raise OperationError(space.w_OverflowError, space.w_None) + return self.box(value) + class UInt64(BaseType, Integer): T = rffi.ULONGLONG BoxType = interp_boxes.W_UInt64Box format_code = "Q" - def _coerce(self, space, w_item): - try: - return Integer._coerce(self, space, w_item) - except OperationError, e: - if not e.match(space, space.w_OverflowError): - raise - bigint = space.bigint_w(w_item) - try: - value = bigint.toulonglong() - except OverflowError: - raise OperationError(space.w_OverflowError, space.w_None) - return self.box(value) + _coerce = func_with_new_name(_uint64_coerce, '_coerce') + +class NonNativeUInt64(BaseType, NonNativeInteger): + T = rffi.ULONGLONG + BoxType = interp_boxes.W_UInt64Box + format_code = "Q" + + _coerce = func_with_new_name(_uint64_coerce, '_coerce') class Float(Primitive): _mixin_ = True @@ -399,8 +484,8 @@ return self.box(space.float_w(space.call_function(space.w_float, w_item))) def str_format(self, box): - value = self.unbox(box) - return float2string(self.for_computation(value), "g", rfloat.DTSF_STR_PRECISION) + return float2string(self.for_computation(self.unbox(box)), "g", + rfloat.DTSF_STR_PRECISION) def for_computation(self, v): return float(v) @@ -515,13 +600,126 @@ def isinf(self, v): return rfloat.isinf(v) +class NonNativeFloat(NonNativePrimitive, Float): + _mixin_ = True class Float32(BaseType, Float): T = rffi.FLOAT BoxType = interp_boxes.W_Float32Box format_code = "f" +class NonNativeFloat32(BaseType, NonNativeFloat): + T = rffi.FLOAT + BoxType 
= interp_boxes.W_Float32Box + format_code = "f" + class Float64(BaseType, Float): T = rffi.DOUBLE BoxType = interp_boxes.W_Float64Box format_code = "d" + +class NonNativeFloat64(BaseType, NonNativeFloat): + T = rffi.DOUBLE + BoxType = interp_boxes.W_Float64Box + format_code = "d" + +class CompositeType(BaseType): + def __init__(self, offsets_and_fields, size): + self.offsets_and_fields = offsets_and_fields + self.size = size + + def get_element_size(self): + return self.size + +class BaseStringType(object): + _mixin_ = True + + def __init__(self, size=0): + self.size = size + + def get_element_size(self): + return self.size * rffi.sizeof(self.T) + +class StringType(BaseType, BaseStringType): + T = lltype.Char + +class VoidType(BaseType, BaseStringType): + T = lltype.Char + +NonNativeVoidType = VoidType +NonNativeStringType = StringType + +class UnicodeType(BaseType, BaseStringType): + T = lltype.UniChar + +NonNativeUnicodeType = UnicodeType + +class RecordType(CompositeType): + T = lltype.Char + + def read(self, arr, width, i, offset): + return interp_boxes.W_VoidBox(arr, i) + + @jit.unroll_safe + def coerce(self, space, dtype, w_item): + from pypy.module.micronumpy.interp_numarray import W_NDimArray + + if isinstance(w_item, interp_boxes.W_VoidBox): + return w_item + # we treat every sequence as sequence, no special support + # for arrays + if not space.issequence_w(w_item): + raise OperationError(space.w_TypeError, space.wrap( + "expected sequence")) + if len(self.offsets_and_fields) != space.int_w(space.len(w_item)): + raise OperationError(space.w_ValueError, space.wrap( + "wrong length")) + items_w = space.fixedview(w_item) + # XXX optimize it out one day, but for now we just allocate an + # array + arr = W_NDimArray([1], dtype) + for i in range(len(items_w)): + subdtype = dtype.fields[dtype.fieldnames[i]][1] + ofs, itemtype = self.offsets_and_fields[i] + w_item = items_w[i] + w_box = itemtype.coerce(space, subdtype, w_item) + itemtype.store(arr, 1, 0, ofs, w_box) + return interp_boxes.W_VoidBox(arr, 0) + + @jit.unroll_safe + def store(self, arr, _, i, ofs, box): + assert isinstance(box, interp_boxes.W_VoidBox) + for k in range(self.get_element_size()): + arr.storage[k + i] = box.arr.storage[k + box.ofs] + + @jit.unroll_safe + def str_format(self, box): + assert isinstance(box, interp_boxes.W_VoidBox) + pieces = ["("] + first = True + for ofs, tp in self.offsets_and_fields: + if first: + first = False + else: + pieces.append(", ") + pieces.append(tp.str_format(tp.read(box.arr, 1, box.ofs, ofs))) + pieces.append(")") + return "".join(pieces) + +for tp in [Int32, Int64]: + if tp.T == lltype.Signed: + IntP = tp + break +for tp in [UInt32, UInt64]: + if tp.T == lltype.Unsigned: + UIntP = tp + break +del tp + +def _setup(): + # compute alignment + for tp in globals().values(): + if isinstance(tp, type) and hasattr(tp, 'T'): + tp.alignment = clibffi.cast_type_to_ffitype(tp.T).c_alignment +_setup() +del _setup diff --git a/pypy/module/pypyjit/__init__.py b/pypy/module/pypyjit/__init__.py --- a/pypy/module/pypyjit/__init__.py +++ b/pypy/module/pypyjit/__init__.py @@ -13,6 +13,7 @@ 'ResOperation': 'interp_resop.WrappedOp', 'DebugMergePoint': 'interp_resop.DebugMergePoint', 'Box': 'interp_resop.WrappedBox', + 'PARAMETER_DOCS': 'space.wrap(pypy.rlib.jit.PARAMETER_DOCS)', } def setup_after_space_initialization(self): diff --git a/pypy/module/pypyjit/test/test_jit_setup.py b/pypy/module/pypyjit/test/test_jit_setup.py --- a/pypy/module/pypyjit/test/test_jit_setup.py +++ 
b/pypy/module/pypyjit/test/test_jit_setup.py @@ -45,6 +45,12 @@ pypyjit.set_compile_hook(None) pypyjit.set_param('default') + def test_doc(self): + import pypyjit + d = pypyjit.PARAMETER_DOCS + assert type(d) is dict + assert 'threshold' in d + def test_interface_residual_call(): space = gettestobjspace(usemodules=['pypyjit']) diff --git a/pypy/module/test_lib_pypy/test_datetime.py b/pypy/module/test_lib_pypy/test_datetime.py --- a/pypy/module/test_lib_pypy/test_datetime.py +++ b/pypy/module/test_lib_pypy/test_datetime.py @@ -1,7 +1,10 @@ """Additional tests for datetime.""" +import py + import time import datetime +import copy import os def test_utcfromtimestamp(): @@ -26,3 +29,18 @@ def test_utcfromtimestamp_microsecond(): dt = datetime.datetime.utcfromtimestamp(0) assert isinstance(dt.microsecond, int) + + +def test_integer_args(): + with py.test.raises(TypeError): + datetime.datetime(10, 10, 10.) + with py.test.raises(TypeError): + datetime.datetime(10, 10, 10, 10, 10.) + with py.test.raises(TypeError): + datetime.datetime(10, 10, 10, 10, 10, 10.) + +def test_utcnow_microsecond(): + dt = datetime.datetime.utcnow() + assert type(dt.microsecond) is int + + copy.copy(dt) \ No newline at end of file diff --git a/pypy/rlib/clibffi.py b/pypy/rlib/clibffi.py --- a/pypy/rlib/clibffi.py +++ b/pypy/rlib/clibffi.py @@ -228,6 +228,7 @@ (rffi.LONGLONG, _signed_type_for(rffi.LONGLONG)), (lltype.UniChar, _unsigned_type_for(lltype.UniChar)), (lltype.Bool, _unsigned_type_for(lltype.Bool)), + (lltype.Char, _signed_type_for(lltype.Char)), ] __float_type_map = [ diff --git a/pypy/rlib/libffi.py b/pypy/rlib/libffi.py --- a/pypy/rlib/libffi.py +++ b/pypy/rlib/libffi.py @@ -238,7 +238,7 @@ self = jit.promote(self) if argchain.numargs != len(self.argtypes): raise TypeError, 'Wrong number of arguments: %d expected, got %d' %\ - (argchain.numargs, len(self.argtypes)) + (len(self.argtypes), argchain.numargs) ll_args = self._prepare() i = 0 arg = argchain.first @@ -424,6 +424,11 @@ return rffi.cast(rffi.CArrayPtr(TYPE), addr)[0] assert False +def array_getitem_T(TYPE, width, addr, index, offset): + addr = rffi.ptradd(addr, index * width) + addr = rffi.ptradd(addr, offset) + return rffi.cast(rffi.CArrayPtr(TYPE), addr)[0] + @specialize.call_location() @jit.oopspec("libffi_array_setitem(ffitype, width, addr, index, offset, value)") def array_setitem(ffitype, width, addr, index, offset, value): @@ -434,3 +439,8 @@ rffi.cast(rffi.CArrayPtr(TYPE), addr)[0] = value return assert False + +def array_setitem_T(TYPE, width, addr, index, offset, value): + addr = rffi.ptradd(addr, index * width) + addr = rffi.ptradd(addr, offset) + rffi.cast(rffi.CArrayPtr(TYPE), addr)[0] = value diff --git a/pypy/rlib/rarithmetic.py b/pypy/rlib/rarithmetic.py --- a/pypy/rlib/rarithmetic.py +++ b/pypy/rlib/rarithmetic.py @@ -513,3 +513,31 @@ if not objectmodel.we_are_translated(): assert n <= p return llop.int_between(lltype.Bool, n, m, p) + + at objectmodel.specialize.ll() +def byteswap(arg): + """ Convert little->big endian and the opposite + """ + from pypy.rpython.lltypesystem import lltype, rffi + + T = lltype.typeOf(arg) + if T != rffi.LONGLONG and T != rffi.ULONGLONG and T != rffi.UINT: + arg = rffi.cast(lltype.Signed, arg) + # XXX we cannot do arithmetics on small ints + if rffi.sizeof(T) == 1: + res = arg + elif rffi.sizeof(T) == 2: + a, b = arg & 0xFF, arg & 0xFF00 + res = (a << 8) | (b >> 8) + elif rffi.sizeof(T) == 4: + a, b, c, d = arg & 0xFF, arg & 0xFF00, arg & 0xFF0000, arg & 0xFF000000 + res = (a << 24) | (b << 8) | (c >> 8) 
| (d >> 24) + elif rffi.sizeof(T) == 8: + a, b, c, d = arg & 0xFF, arg & 0xFF00, arg & 0xFF0000, arg & 0xFF000000 + e, f, g, h = (arg & (0xFF << 32), arg & (0xFF << 40), + arg & (0xFF << 48), arg & (r_uint(0xFF) << 56)) + res = ((a << 56) | (b << 40) | (c << 24) | (d << 8) | (e >> 8) | + (f >> 24) | (g >> 40) | (h >> 56)) + else: + assert False # unreachable code + return rffi.cast(T, res) diff --git a/pypy/rlib/rstruct/runpack.py b/pypy/rlib/rstruct/runpack.py --- a/pypy/rlib/rstruct/runpack.py +++ b/pypy/rlib/rstruct/runpack.py @@ -4,11 +4,10 @@ """ import py -from struct import pack, unpack +from struct import unpack from pypy.rlib.rstruct.formatiterator import FormatIterator from pypy.rlib.rstruct.error import StructError from pypy.rlib.rstruct.nativefmttable import native_is_bigendian -from pypy.rpython.extregistry import ExtRegistryEntry class MasterReader(object): def __init__(self, s): diff --git a/pypy/rlib/test/test_rarithmetic.py b/pypy/rlib/test/test_rarithmetic.py --- a/pypy/rlib/test/test_rarithmetic.py +++ b/pypy/rlib/test/test_rarithmetic.py @@ -374,3 +374,9 @@ assert not int_between(1, 2, 2) assert not int_between(1, 1, 1) +def test_byteswap(): + from pypy.rpython.lltypesystem import rffi + + assert byteswap(rffi.cast(rffi.USHORT, 0x0102)) == 0x0201 + assert byteswap(rffi.cast(rffi.INT, 0x01020304)) == 0x04030201 + assert byteswap(rffi.cast(rffi.ULONGLONG, 0x0102030405060708L)) == 0x0807060504030201L diff --git a/pypy/tool/release/package.py b/pypy/tool/release/package.py --- a/pypy/tool/release/package.py +++ b/pypy/tool/release/package.py @@ -126,7 +126,7 @@ zf.close() else: archive = str(builddir.join(name + '.tar.bz2')) - if sys.platform == 'darwin': + if sys.platform == 'darwin' or sys.platform.startswith('freebsd'): e = os.system('tar --numeric-owner -cvjf ' + archive + " " + name) else: e = os.system('tar --owner=root --group=root --numeric-owner -cvjf ' + archive + " " + name) diff --git a/pypy/translator/goal/app_main.py b/pypy/translator/goal/app_main.py --- a/pypy/translator/goal/app_main.py +++ b/pypy/translator/goal/app_main.py @@ -139,8 +139,14 @@ items = pypyjit.defaults.items() items.sort() for key, value in items: - print ' --jit %s=N %s%s (default %s)' % ( - key, ' '*(18-len(key)), pypyjit.PARAMETER_DOCS[key], value) + prefix = ' --jit %s=N %s' % (key, ' '*(18-len(key))) + doc = '%s (default %s)' % (pypyjit.PARAMETER_DOCS[key], value) + while len(doc) > 51: + i = doc[:51].rfind(' ') + print prefix + doc[:i] + doc = doc[i+1:] + prefix = ' '*len(prefix) + print prefix + doc print ' --jit off turn off the JIT' def print_version(*args): From noreply at buildbot.pypy.org Tue Feb 21 01:41:07 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Tue, 21 Feb 2012 01:41:07 +0100 (CET) Subject: [pypy-commit] pypy sepcomp2: A plan for separate compilation of modules. Message-ID: <20120221004107.88CE7820D1@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: sepcomp2 Changeset: r52694:135faaa76513 Date: 2012-02-21 01:40 +0100 http://bitbucket.org/pypy/pypy/changeset/135faaa76513/ Log: A plan for separate compilation of modules. diff --git a/pypy/README b/pypy/README new file mode 100644 --- /dev/null +++ b/pypy/README @@ -0,0 +1,81 @@ +Separate Compilation +==================== + +Goal +---- + +Translation of an extension module written in RPython. +The best form is probably the MixedModule. 
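For readers who have not met the term used just above: a MixedModule is the way PyPy builds a built-in module partly out of RPython code (interp-level) and partly out of ordinary Python code (app-level). A minimal skeleton looks roughly like the sketch below; the module and entry names are invented purely for illustration and are not part of the plan::

    # hypothetical MixedModule skeleton, i.e. the kind of module the plan
    # wants to translate separately from the main interpreter
    from pypy.interpreter.mixedmodule import MixedModule

    class Module(MixedModule):
        """A toy extension module."""
        # names implemented in RPython, resolved at translation time
        interpleveldefs = {
            'add': 'interp_mymod.add',
        }
        # names implemented in plain (application-level) Python
        appleveldefs = {
            'helper': 'app_mymod.helper',
        }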
+ +Strategy +-------- + +The main executable (bin/pypy-c or libpypy-c.dll) exports RPython +functions; this "first translation" also produces a pickled object +that describe these functions: signatures, exception info, etc. + +It will probably be necessary to list all exported functions and methods, +or mark them with some @exported decorator. + +The translation of an extension module (the "second translation") will +reuse the information from the pickled object; the content of the +MixedModule is annotated as usual, except that functions exported by +the main executable are now external calls. + +The extension module simply has to export a single function +"init_module()", which at runtime uses space operations to create and +install a module. + + +Roadmap +------- + +* First, a framework to test and measure progress; builds two + shared libraries (.so or .dll): + + - the first one is the "core module", which exports functions + - that can be called from the second module, which exports a single + entry point that we call call with ctypes. + +* Find a way to mark functions as "exported". We need to either + provide a signature, or be sure that the functions is somehow + annotated (because it is already used by the core interpreter) + +* Pass structures (as opaque pointers). At this step, only the core + module has access to the fields. + +* Implement access to struct fields: an idea is to use a Controller + object, and redirect attribute access to the ClassRepr computed by + the first translation. + +* Implement method calls, again with the help of the Controller which + can replace calls to bound methods with calls to exported functions. + +* Share the ExceptionTransformer between the modules: a RPython + exception raised on one side can be caught by the other side. + +* Support subclassing. Two issues here: + + - isinstance() is translated into a range check, but these minid and + maxid work because all classes are known at translation time. + Subclasses defined in the second module must use the same minid + and maxid as their parent; isinstance(X, SecondModuleClass) should + use an additional field. Be sure to not confuse classes + separately created in two extension modules. + + - virtual methods, that override methods defined in the first + module. + +* specialize.memo() needs to know all possible values of a + PreBuildConstant to compute the results during translation and build + some kind of lookup table. The most obvious case is the function + space.gettypeobject(typedef). Fortunately a PBC defined in a module + can only be used from the same module, so the list of prebuilt + results is probably local to the same module and this is not really + an issue. + +* Integration with GC. The GC functions should be exported from the + first module, and we need a way to register the static roots of the + second module. + +* Integration with the JIT. From noreply at buildbot.pypy.org Tue Feb 21 01:57:05 2012 From: noreply at buildbot.pypy.org (edelsohn) Date: Tue, 21 Feb 2012 01:57:05 +0100 (CET) Subject: [pypy-commit] extradoc extradoc: prettify Message-ID: <20120221005705.A33B6820D1@wyvern.cs.uni-duesseldorf.de> Author: edelsohn Branch: extradoc Changeset: r4093:c0113c9b8cd5 Date: 2012-02-20 19:56 -0500 http://bitbucket.org/pypy/extradoc/changeset/c0113c9b8cd5/ Log: prettify diff --git a/talk/sea2012/talk.rst b/talk/sea2012/talk.rst --- a/talk/sea2012/talk.rst +++ b/talk/sea2012/talk.rst @@ -8,9 +8,9 @@ * What is PyPy and why? 
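Coming back to the separate-compilation roadmap in the changeset above: the "@exported decorator" item could be realized along the following lines. This is only a sketch; the decorator name, the registry and the way signatures are spelled are all invented here, the plan deliberately leaves them open::

    # hypothetical marker for RPython functions that the first translation
    # should export; it records an explicit signature so that the pickled
    # description mentioned under "Strategy" could be generated from it
    EXPORTED_FUNCTIONS = {}

    def exported(argtypes, restype):
        def decorator(func):
            EXPORTED_FUNCTIONS[func.__name__] = (func, argtypes, restype)
            return func
        return decorator

    @exported([int, int], restype=int)
    def add(x, y):
        return x + y

At translation time such a registry would drive both the generation of the exported entry points and the pickled signature file that the second translation loads.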
-* Numeric landscape in python +* Numeric landscape in Python -* What we achieved in PyPy? +* What we achieved in PyPy * Where we're going? @@ -36,7 +36,7 @@ * XXX some benchmarks -Why would you care? +Why should you care? ------------------- * *If I write this stuff in C/fortran/assembler it'll be faster anyway* From noreply at buildbot.pypy.org Tue Feb 21 02:02:21 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Tue, 21 Feb 2012 02:02:21 +0100 (CET) Subject: [pypy-commit] pypy sepcomp2: Move file to extradoc repo Message-ID: <20120221010221.02919820D1@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: sepcomp2 Changeset: r52695:10fbdd68bcca Date: 2012-02-21 02:01 +0100 http://bitbucket.org/pypy/pypy/changeset/10fbdd68bcca/ Log: Move file to extradoc repo diff --git a/pypy/README b/pypy/README deleted file mode 100644 --- a/pypy/README +++ /dev/null @@ -1,81 +0,0 @@ -Separate Compilation -==================== - -Goal ----- - -Translation of an extension module written in RPython. -The best form is probably the MixedModule. - -Strategy --------- - -The main executable (bin/pypy-c or libpypy-c.dll) exports RPython -functions; this "first translation" also produces a pickled object -that describe these functions: signatures, exception info, etc. - -It will probably be necessary to list all exported functions and methods, -or mark them with some @exported decorator. - -The translation of an extension module (the "second translation") will -reuse the information from the pickled object; the content of the -MixedModule is annotated as usual, except that functions exported by -the main executable are now external calls. - -The extension module simply has to export a single function -"init_module()", which at runtime uses space operations to create and -install a module. - - -Roadmap -------- - -* First, a framework to test and measure progress; builds two - shared libraries (.so or .dll): - - - the first one is the "core module", which exports functions - - that can be called from the second module, which exports a single - entry point that we call call with ctypes. - -* Find a way to mark functions as "exported". We need to either - provide a signature, or be sure that the functions is somehow - annotated (because it is already used by the core interpreter) - -* Pass structures (as opaque pointers). At this step, only the core - module has access to the fields. - -* Implement access to struct fields: an idea is to use a Controller - object, and redirect attribute access to the ClassRepr computed by - the first translation. - -* Implement method calls, again with the help of the Controller which - can replace calls to bound methods with calls to exported functions. - -* Share the ExceptionTransformer between the modules: a RPython - exception raised on one side can be caught by the other side. - -* Support subclassing. Two issues here: - - - isinstance() is translated into a range check, but these minid and - maxid work because all classes are known at translation time. - Subclasses defined in the second module must use the same minid - and maxid as their parent; isinstance(X, SecondModuleClass) should - use an additional field. Be sure to not confuse classes - separately created in two extension modules. - - - virtual methods, that override methods defined in the first - module. - -* specialize.memo() needs to know all possible values of a - PreBuildConstant to compute the results during translation and build - some kind of lookup table. 
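The specialize.memo() item above refers to an RPython idiom: a memo function is only ever called with prebuilt constants, so the annotator can evaluate it once per constant and effectively replace the calls with a lookup table. A toy version of the pattern, with an invented cache name and not PyPy's actual implementation, would look like this::

    from pypy.rlib.objectmodel import specialize

    _TYPE_OBJECTS = {}   # hypothetical: one prebuilt wrapped type per TypeDef

    @specialize.memo()
    def gettypeobject(typedef):
        # 'typedef' must be a prebuilt constant; the annotator evaluates this
        # call for each such constant, effectively building a lookup table
        return _TYPE_OBJECTS[typedef]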
The most obvious case is the function - space.gettypeobject(typedef). Fortunately a PBC defined in a module - can only be used from the same module, so the list of prebuilt - results is probably local to the same module and this is not really - an issue. - -* Integration with GC. The GC functions should be exported from the - first module, and we need a way to register the static roots of the - second module. - -* Integration with the JIT. From noreply at buildbot.pypy.org Tue Feb 21 02:02:45 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Tue, 21 Feb 2012 02:02:45 +0100 (CET) Subject: [pypy-commit] extradoc extradoc: Add plan for separate compilation of extension modules Message-ID: <20120221010245.75043820D1@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: extradoc Changeset: r4094:fde9a6c6b314 Date: 2012-02-21 02:01 +0100 http://bitbucket.org/pypy/extradoc/changeset/fde9a6c6b314/ Log: Add plan for separate compilation of extension modules diff --git a/planning/separate-compilation.txt b/planning/separate-compilation.txt new file mode 100644 --- /dev/null +++ b/planning/separate-compilation.txt @@ -0,0 +1,81 @@ +Separate Compilation +==================== + +Goal +---- + +Translation extension modules written in RPython. +The best form is probably the MixedModule. + +Strategy +-------- + +The main executable (bin/pypy-c or libpypy-c.dll) exports RPython +functions; this "first translation" also produces a pickled object +that describe these functions: signatures, exception info, etc. + +It will probably be necessary to list all exported functions and methods, +or mark them with some @exported decorator. + +The translation of an extension module (the "second translation") will +reuse the information from the pickled object; the content of the +MixedModule is annotated as usual, except that functions exported by +the main executable are now external calls. + +The extension module simply has to export a single function +"init_module()", which at runtime uses space operations to create and +install a module. + + +Roadmap +------- + +* First, a framework to test and measure progress; builds two + shared libraries (.so or .dll): + + - the first one is the "core module", which exports functions + - that can be called from the second module, which exports a single + entry point that we call call with ctypes. + +* Find a way to mark functions as "exported". We need to either + provide a signature, or be sure that the functions is somehow + annotated (because it is already used by the core interpreter) + +* Pass structures (as opaque pointers). At this step, only the core + module has access to the fields. + +* Implement access to struct fields: an idea is to use a Controller + object, and redirect attribute access to the ClassRepr computed by + the first translation. + +* Implement method calls, again with the help of the Controller which + can replace calls to bound methods with calls to exported functions. + +* Share the ExceptionTransformer between the modules: a RPython + exception raised on one side can be caught by the other side. + +* Support subclassing. Two issues here: + + - isinstance() is translated into a range check, but these minid and + maxid work because all classes are known at translation time. + Subclasses defined in the second module must use the same minid + and maxid as their parent; isinstance(X, SecondModuleClass) should + use an additional field. Be sure to not confuse classes + separately created in two extension modules. 
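The range check mentioned in the item just above can be made concrete with a few lines of ordinary Python. The numbering below is a generic illustration of the minid/maxid idea, not the code PyPy actually uses: a depth-first walk gives every class a contiguous id range covering exactly its own subclasses::

    class ClassNode(object):
        def __init__(self, name, subclasses=()):
            self.name = name
            self.subclasses = list(subclasses)

    def assign_ids(cls, next_id=1):
        # depth-first numbering: a class and all of its subclasses end up in
        # one contiguous block, so isinstance(x, C) can be compiled down to
        # C.minid <= typeid_of_x <= C.maxid
        cls.minid = next_id
        next_id += 1
        for sub in cls.subclasses:
            next_id = assign_ids(sub, next_id)
        cls.maxid = next_id - 1
        return next_id

    root = ClassNode('W_Root', [ClassNode('W_A', [ClassNode('W_B')]),
                                ClassNode('W_C')])
    assign_ids(root)   # W_A gets the range [2, 3], covering W_A and W_B

A class added later by a separately translated module cannot be squeezed into its parent's already-fixed range, which is why the item above proposes reusing the parent's minid/maxid and adding an extra field for tests against the new class itself.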
+ + - virtual methods, that override methods defined in the first + module. + +* specialize.memo() needs to know all possible values of a + PreBuildConstant to compute the results during translation and build + some kind of lookup table. The most obvious case is the function + space.gettypeobject(typedef). Fortunately a PBC defined in a module + can only be used from the same module, so the list of prebuilt + results is probably local to the same module and this is not really + an issue. + +* Integration with GC. The GC functions should be exported from the + first module, and we need a way to register the static roots of the + second module. + +* Integration with the JIT. From noreply at buildbot.pypy.org Tue Feb 21 02:04:15 2012 From: noreply at buildbot.pypy.org (edelsohn) Date: Tue, 21 Feb 2012 02:04:15 +0100 (CET) Subject: [pypy-commit] extradoc extradoc: more prettify Message-ID: <20120221010415.30F62820D1@wyvern.cs.uni-duesseldorf.de> Author: edelsohn Branch: extradoc Changeset: r4095:1cc099a42883 Date: 2012-02-20 20:02 -0500 http://bitbucket.org/pypy/extradoc/changeset/1cc099a42883/ Log: more prettify diff --git a/talk/sea2012/talk.rst b/talk/sea2012/talk.rst --- a/talk/sea2012/talk.rst +++ b/talk/sea2012/talk.rst @@ -37,27 +37,27 @@ * XXX some benchmarks Why should you care? -------------------- +-------------------- * *If I write this stuff in C/fortran/assembler it'll be faster anyway* * maybe, but ... -Why would you care (2) ----------------------- +Why should you care? (2) +------------------------ * Experimentation is important * Implementing something faster, in **human time**, leaves more time for optimizations and improvements -* For novel algorithms, being clearly expressed in code makes them easier to evaluate (Python is cleaner than C often) +* For novel algorithms, clearer implementation makes them easier to evaluate (Python often is cleaner than C) |pause| * Sometimes makes it **possible** in the first place -Why would you care even more ----------------------------- +Why would you care even more? +----------------------------- * Growing community @@ -65,8 +65,8 @@ * There are many smart people out there addressing hard problems -Example why would you care --------------------------- +Example of why would you care +----------------------------- * You spend a year writing optimized algorithms for a GPU @@ -78,11 +78,11 @@ * Alternative - **express** your algorithms -* Leave low-level details for people who have nothing better to do +* Leave low-level details to people who have nothing better to do |pause| -* .. like me (I don't know enough physics to do the other part) +* ... 
like me (I don't know enough Physics to do the other part) Numerics in Python ------------------ @@ -197,7 +197,7 @@ |pause| -* However, leave knobs and buttons for advanced users +* However, retain knobs and buttons for advanced users * Don't get penalized too much for not using them From noreply at buildbot.pypy.org Tue Feb 21 02:04:16 2012 From: noreply at buildbot.pypy.org (edelsohn) Date: Tue, 21 Feb 2012 02:04:16 +0100 (CET) Subject: [pypy-commit] extradoc extradoc: merge Message-ID: <20120221010416.55D04820D1@wyvern.cs.uni-duesseldorf.de> Author: edelsohn Branch: extradoc Changeset: r4096:3b2a3ebfb225 Date: 2012-02-20 20:03 -0500 http://bitbucket.org/pypy/extradoc/changeset/3b2a3ebfb225/ Log: merge diff --git a/planning/separate-compilation.txt b/planning/separate-compilation.txt new file mode 100644 --- /dev/null +++ b/planning/separate-compilation.txt @@ -0,0 +1,81 @@ +Separate Compilation +==================== + +Goal +---- + +Translation extension modules written in RPython. +The best form is probably the MixedModule. + +Strategy +-------- + +The main executable (bin/pypy-c or libpypy-c.dll) exports RPython +functions; this "first translation" also produces a pickled object +that describe these functions: signatures, exception info, etc. + +It will probably be necessary to list all exported functions and methods, +or mark them with some @exported decorator. + +The translation of an extension module (the "second translation") will +reuse the information from the pickled object; the content of the +MixedModule is annotated as usual, except that functions exported by +the main executable are now external calls. + +The extension module simply has to export a single function +"init_module()", which at runtime uses space operations to create and +install a module. + + +Roadmap +------- + +* First, a framework to test and measure progress; builds two + shared libraries (.so or .dll): + + - the first one is the "core module", which exports functions + - that can be called from the second module, which exports a single + entry point that we call call with ctypes. + +* Find a way to mark functions as "exported". We need to either + provide a signature, or be sure that the functions is somehow + annotated (because it is already used by the core interpreter) + +* Pass structures (as opaque pointers). At this step, only the core + module has access to the fields. + +* Implement access to struct fields: an idea is to use a Controller + object, and redirect attribute access to the ClassRepr computed by + the first translation. + +* Implement method calls, again with the help of the Controller which + can replace calls to bound methods with calls to exported functions. + +* Share the ExceptionTransformer between the modules: a RPython + exception raised on one side can be caught by the other side. + +* Support subclassing. Two issues here: + + - isinstance() is translated into a range check, but these minid and + maxid work because all classes are known at translation time. + Subclasses defined in the second module must use the same minid + and maxid as their parent; isinstance(X, SecondModuleClass) should + use an additional field. Be sure to not confuse classes + separately created in two extension modules. + + - virtual methods, that override methods defined in the first + module. + +* specialize.memo() needs to know all possible values of a + PreBuildConstant to compute the results during translation and build + some kind of lookup table. 
The most obvious case is the function + space.gettypeobject(typedef). Fortunately a PBC defined in a module + can only be used from the same module, so the list of prebuilt + results is probably local to the same module and this is not really + an issue. + +* Integration with GC. The GC functions should be exported from the + first module, and we need a way to register the static roots of the + second module. + +* Integration with the JIT. From noreply at buildbot.pypy.org Tue Feb 21 02:04:42 2012 From: noreply at buildbot.pypy.org (fijal) Date: Tue, 21 Feb 2012 02:04:42 +0100 (CET) Subject: [pypy-commit] pypy backend-vector-ops: remove an obscure parameter Message-ID: <20120221010442.78941820D1@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: backend-vector-ops Changeset: r52696:45ad7393f0ac Date: 2012-02-20 18:04 -0700 http://bitbucket.org/pypy/pypy/changeset/45ad7393f0ac/ Log: remove an obscure parameter diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -823,8 +823,8 @@ class ToStringArray(Call1): def __init__(self, child): dtype = child.find_dtype() - self.itemsize = dtype.itemtype.get_element_size() - self.s = StringBuilder(child.size * self.itemsize) + itemsize = dtype.itemtype.get_element_size() + self.s = StringBuilder(child.size * itemsize) Call1.__init__(self, None, 'tostring', child.shape, dtype, dtype, child) self.res = W_NDimArray([1], dtype, 'C') diff --git a/pypy/module/micronumpy/signature.py b/pypy/module/micronumpy/signature.py --- a/pypy/module/micronumpy/signature.py +++ b/pypy/module/micronumpy/signature.py @@ -335,7 +335,7 @@ assert isinstance(arr, ToStringArray) arr.res.setitem(0, self.child.eval(frame, arr.values).convert_to( self.dtype)) - for i in range(arr.itemsize): + for i in range(self.dtype.get_size()): arr.s.append(arr.res_casted[i]) class BroadcastLeft(Call2): From noreply at buildbot.pypy.org Tue Feb 21 10:50:31 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Tue, 21 Feb 2012 10:50:31 +0100 (CET) Subject: [pypy-commit] pypy py3k: dict.keys() and range() no longer return lists in python3, adapt the tests Message-ID: <20120221095031.1B7DA8203C@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52697:48b1edafc52a Date: 2012-02-20 22:47 +0100 http://bitbucket.org/pypy/pypy/changeset/48b1edafc52a/ Log: dict.keys() and range() no longer return lists in python3, adapt the tests diff --git a/pypy/interpreter/test/test_nestedscope.py b/pypy/interpreter/test/test_nestedscope.py --- a/pypy/interpreter/test/test_nestedscope.py +++ b/pypy/interpreter/test/test_nestedscope.py @@ -53,12 +53,11 @@ return g() outer_locals, inner_locals = f() assert inner_locals == {'i':3, 'x':3} - keys = outer_locals.keys() - keys.sort() + keys = sorted(outer_locals.keys()) assert keys == ['h', 'x'] def test_lambda_in_genexpr(self): - assert eval('map(apply, (lambda: t for t in range(10)))') == range(10) + assert eval('map(apply, (lambda: t for t in range(10)))') == list(range(10)) def test_cell_contents(self): def f(x): From noreply at buildbot.pypy.org Tue Feb 21 10:50:32 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Tue, 21 Feb 2012 10:50:32 +0100 (CET) Subject: [pypy-commit] pypy py3k: s/__builtin__/builtins Message-ID: <20120221095032.568FA8203C@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52698:a92e756eead4 Date: 2012-02-20 22:48 +0100 
http://bitbucket.org/pypy/pypy/changeset/a92e756eead4/ Log: s/__builtin__/builtins diff --git a/pypy/interpreter/test/test_module.py b/pypy/interpreter/test/test_module.py --- a/pypy/interpreter/test/test_module.py +++ b/pypy/interpreter/test/test_module.py @@ -19,7 +19,7 @@ class AppTest_ModuleObject: def test_attr(self): - m = __import__('__builtin__') + m = __import__('builtins') m.x = 15 assert m.x == 15 assert getattr(m, 'x') == 15 From noreply at buildbot.pypy.org Tue Feb 21 10:50:33 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Tue, 21 Feb 2012 10:50:33 +0100 (CET) Subject: [pypy-commit] pypy py3k: s/__builtin__/builtins Message-ID: <20120221095033.8F4278203C@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52699:c29adf6b388b Date: 2012-02-20 22:50 +0100 http://bitbucket.org/pypy/pypy/changeset/c29adf6b388b/ Log: s/__builtin__/builtins diff --git a/pypy/interpreter/test/test_pyframe.py b/pypy/interpreter/test/test_pyframe.py --- a/pypy/interpreter/test/test_pyframe.py +++ b/pypy/interpreter/test/test_pyframe.py @@ -30,9 +30,9 @@ assert f.f_globals is globals() def test_f_builtins(self): - import sys, __builtin__ + import sys, builtins f = sys._getframe() - assert f.f_builtins is __builtin__.__dict__ + assert f.f_builtins is builtins.__dict__ def test_f_code(self): def g(): From noreply at buildbot.pypy.org Tue Feb 21 10:50:34 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Tue, 21 Feb 2012 10:50:34 +0100 (CET) Subject: [pypy-commit] pypy py3k: fix syntax Message-ID: <20120221095034.D2CED8203C@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52700:08546a562e23 Date: 2012-02-20 22:54 +0100 http://bitbucket.org/pypy/pypy/changeset/08546a562e23/ Log: fix syntax diff --git a/pypy/interpreter/test/test_pyframe.py b/pypy/interpreter/test/test_pyframe.py --- a/pypy/interpreter/test/test_pyframe.py +++ b/pypy/interpreter/test/test_pyframe.py @@ -75,6 +75,7 @@ # assert did not crash def test_f_lineno_set_firstline(self): + r""" seen = [] def tracer(f, event, *args): seen.append((event, f.f_lineno)) @@ -85,7 +86,7 @@ def g(): import sys sys.settrace(tracer) - exec "x=1\ny=x+1\nz=y+1\nt=z+1\ns=t+1\n" in {} + exec("x=1\ny=x+1\nz=y+1\nt=z+1\ns=t+1\n", {}) sys.settrace(None) g() @@ -99,6 +100,7 @@ ('line', 4), ('line', 5), ('return', 5)] + """ def test_f_back(self): import sys From noreply at buildbot.pypy.org Tue Feb 21 10:50:36 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Tue, 21 Feb 2012 10:50:36 +0100 (CET) Subject: [pypy-commit] pypy py3k: fix syntax Message-ID: <20120221095036.1E3A28203C@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52701:359ade7abb4c Date: 2012-02-20 22:54 +0100 http://bitbucket.org/pypy/pypy/changeset/359ade7abb4c/ Log: fix syntax diff --git a/pypy/interpreter/test/test_pyframe.py b/pypy/interpreter/test/test_pyframe.py --- a/pypy/interpreter/test/test_pyframe.py +++ b/pypy/interpreter/test/test_pyframe.py @@ -202,7 +202,7 @@ x = f(4) sys.settrace(None) assert x == 42 - print l + print(l) assert l == [(0, 'f', 'call', None), (1, 'f', 'line', None), (0, 'g', 'call', None), From noreply at buildbot.pypy.org Tue Feb 21 10:50:37 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Tue, 21 Feb 2012 10:50:37 +0100 (CET) Subject: [pypy-commit] pypy py3k: fix syntax, and use next() to get the next item of the generator Message-ID: <20120221095037.575468203C@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52702:26bb8ffd76f5 Date: 
2012-02-20 23:01 +0100 http://bitbucket.org/pypy/pypy/changeset/26bb8ffd76f5/ Log: fix syntax, and use next() to get the next item of the generator diff --git a/pypy/interpreter/test/test_pyframe.py b/pypy/interpreter/test/test_pyframe.py --- a/pypy/interpreter/test/test_pyframe.py +++ b/pypy/interpreter/test/test_pyframe.py @@ -386,6 +386,7 @@ def test_trace_generator_finalisation(self): + ''' import sys l = [] got_exc = [] @@ -396,7 +397,7 @@ return trace d = {} - exec """if 1: + exec("""if 1: def g(): try: yield True @@ -406,11 +407,11 @@ def f(): try: gen = g() - gen.next() + next(gen) gen.close() except: pass - """ in d + """, d) f = d['f'] sys.settrace(trace) @@ -432,6 +433,7 @@ (6, 'line'), (6, 'return'), (12, 'return')] + ''' def test_dont_trace_on_reraise(self): import sys From noreply at buildbot.pypy.org Tue Feb 21 10:50:38 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Tue, 21 Feb 2012 10:50:38 +0100 (CET) Subject: [pypy-commit] pypy py3k: rewrite this test by using sum() instead of print(). The problem with print is that we are also tracing a lot of functions inside encodings/utf_8.py, and this adds noise to the test Message-ID: <20120221095038.8F67B8203C@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52703:45dc6af4adc3 Date: 2012-02-21 00:01 +0100 http://bitbucket.org/pypy/pypy/changeset/45dc6af4adc3/ Log: rewrite this test by using sum() instead of print(). The problem with print is that we are also tracing a lot of functions inside encodings/utf_8.py, and this adds noise to the test diff --git a/pypy/interpreter/test/test_pyframe.py b/pypy/interpreter/test/test_pyframe.py --- a/pypy/interpreter/test/test_pyframe.py +++ b/pypy/interpreter/test/test_pyframe.py @@ -1,12 +1,9 @@ -from pypy.tool import udir from pypy.conftest import option class AppTestPyFrame: def setup_class(cls): - cls.w_udir = cls.space.wrap(str(udir.udir)) - cls.w_tempfile1 = cls.space.wrap(str(udir.udir.join('tempfile1'))) if not option.runappdirect: w_call_further = cls.space.appexec([], """(): def call_further(f): @@ -260,7 +257,7 @@ assert l[0][1] == 'call' assert res == 'hidden' # sanity - def test_trace_hidden_prints(self): + def test_trace_hidden_applevel_builtins(self): import sys l = [] @@ -268,19 +265,17 @@ l.append((a,b,c)) return trace - outputf = open(self.tempfile1, 'w') def f(): - print >> outputf, 1 - print >> outputf, 2 - print >> outputf, 3 + sum([]) + sum([]) + sum([]) return "that's the return value" sys.settrace(trace) f() sys.settrace(None) - outputf.close() # should get 1 "call", 3 "line" and 1 "return" events, and no call - # or return for the internal app-level implementation of 'print' + # or return for the internal app-level implementation of sum assert len(l) == 6 assert [what for (frame, what, arg) in l] == [ 'call', 'line', 'line', 'line', 'line', 'return'] From noreply at buildbot.pypy.org Tue Feb 21 10:50:39 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Tue, 21 Feb 2012 10:50:39 +0100 (CET) Subject: [pypy-commit] pypy py3k: kill this test, we no longer have the three-args raise form Message-ID: <20120221095039.CA1168203C@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52704:31dba1322ab7 Date: 2012-02-21 00:03 +0100 http://bitbucket.org/pypy/pypy/changeset/31dba1322ab7/ Log: kill this test, we no longer have the three-args raise form diff --git a/pypy/interpreter/test/test_pyframe.py b/pypy/interpreter/test/test_pyframe.py --- a/pypy/interpreter/test/test_pyframe.py +++ 
b/pypy/interpreter/test/test_pyframe.py @@ -350,34 +350,6 @@ assert len(l) == 2 assert issubclass(l[0][0], Exception) assert issubclass(l[1][0], Exception) - - def test_trace_raise_three_arg(self): - import sys - l = [] - def trace(frame, event, arg): - if event == 'exception': - l.append(arg) - return trace - - def g(): - try: - raise Exception - except Exception, e: - import sys - raise Exception, e, sys.exc_info()[2] - - def f(): - try: - g() - except: - pass - - sys.settrace(trace) - f() - sys.settrace(None) - assert len(l) == 2 - assert issubclass(l[0][0], Exception) - assert issubclass(l[1][0], Exception) def test_trace_generator_finalisation(self): From noreply at buildbot.pypy.org Tue Feb 21 10:50:41 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Tue, 21 Feb 2012 10:50:41 +0100 (CET) Subject: [pypy-commit] pypy py3k: s/xrange/range Message-ID: <20120221095041.1CA0D8203C@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52705:e2b89bb4df90 Date: 2012-02-21 00:03 +0100 http://bitbucket.org/pypy/pypy/changeset/e2b89bb4df90/ Log: s/xrange/range diff --git a/pypy/interpreter/test/test_pyframe.py b/pypy/interpreter/test/test_pyframe.py --- a/pypy/interpreter/test/test_pyframe.py +++ b/pypy/interpreter/test/test_pyframe.py @@ -317,7 +317,7 @@ def f(): return 1 - for i in xrange(sys.getrecursionlimit() + 1): + for i in range(sys.getrecursionlimit() + 1): sys.settrace(trace) try: f() From noreply at buildbot.pypy.org Tue Feb 21 10:50:42 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Tue, 21 Feb 2012 10:50:42 +0100 (CET) Subject: [pypy-commit] pypy py3k: s/__builtin__/builtins Message-ID: <20120221095042.570408203C@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52706:a04906e56772 Date: 2012-02-21 00:04 +0100 http://bitbucket.org/pypy/pypy/changeset/a04906e56772/ Log: s/__builtin__/builtins diff --git a/pypy/interpreter/test/test_objspace.py b/pypy/interpreter/test/test_objspace.py --- a/pypy/interpreter/test/test_objspace.py +++ b/pypy/interpreter/test/test_objspace.py @@ -287,7 +287,7 @@ space = self.space assert space.builtin w_name = space.wrap('__import__') - w_builtin = space.sys.getmodule('__builtin__') + w_builtin = space.sys.getmodule('builtins') w_import = self.space.getattr(w_builtin, w_name) assert space.is_true(w_import) From noreply at buildbot.pypy.org Tue Feb 21 10:50:43 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Tue, 21 Feb 2012 10:50:43 +0100 (CET) Subject: [pypy-commit] pypy py3k: we no longer have oldstyle classes, kill this part of the test Message-ID: <20120221095043.92DBF8203C@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52707:b12bfba17a60 Date: 2012-02-21 00:06 +0100 http://bitbucket.org/pypy/pypy/changeset/b12bfba17a60/ Log: we no longer have oldstyle classes, kill this part of the test diff --git a/pypy/interpreter/test/test_objspace.py b/pypy/interpreter/test/test_objspace.py --- a/pypy/interpreter/test/test_objspace.py +++ b/pypy/interpreter/test/test_objspace.py @@ -159,14 +159,6 @@ self.space.setattr(w_instance, self.space.wrap("__call__"), w_func) assert not is_callable(w_instance) - w_oldstyle = self.space.appexec([], """(): - class NoCall: - pass - return NoCall()""") - assert not is_callable(w_oldstyle) - self.space.setattr(w_oldstyle, self.space.wrap("__call__"), w_func) - assert is_callable(w_oldstyle) - def test_interp_w(self): w = self.space.wrap w_bltinfunction = self.space.builtin.get('len') From noreply at buildbot.pypy.org Tue Feb 21 
10:50:44 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Tue, 21 Feb 2012 10:50:44 +0100 (CET) Subject: [pypy-commit] pypy py3k: re-raise the operror if it's not of the expected type; s/next/__next__ Message-ID: <20120221095044.CE80C8203C@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52708:29787b8bfeab Date: 2012-02-21 00:09 +0100 http://bitbucket.org/pypy/pypy/changeset/29787b8bfeab/ Log: re-raise the operror if it's not of the expected type; s/next/__next__ diff --git a/pypy/interpreter/test/test_objspace.py b/pypy/interpreter/test/test_objspace.py --- a/pypy/interpreter/test/test_objspace.py +++ b/pypy/interpreter/test/test_objspace.py @@ -78,7 +78,7 @@ class A(object): def __iter__(self): return self - def next(self): + def __next__(self): raise StopIteration def __len__(self): 1/0 @@ -88,7 +88,7 @@ space.unpackiterable(w_a) except OperationError, o: if not o.match(space, space.w_ZeroDivisionError): - raise Exception("DID NOT RAISE") + raise else: raise Exception("DID NOT RAISE") From noreply at buildbot.pypy.org Tue Feb 21 10:50:46 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Tue, 21 Feb 2012 10:50:46 +0100 (CET) Subject: [pypy-commit] pypy py3k: fix syntax Message-ID: <20120221095046.160078203C@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52709:0807d354c622 Date: 2012-02-21 00:11 +0100 http://bitbucket.org/pypy/pypy/changeset/0807d354c622/ Log: fix syntax diff --git a/pypy/interpreter/test/test_typedef.py b/pypy/interpreter/test/test_typedef.py --- a/pypy/interpreter/test/test_typedef.py +++ b/pypy/interpreter/test/test_typedef.py @@ -13,7 +13,7 @@ # XXX why is this called newstring? import sys def f(): - raise TypeError, "hello" + raise TypeError("hello") def g(): f() @@ -23,7 +23,7 @@ except: typ,val,tb = sys.exc_info() else: - raise AssertionError, "should have raised" + raise AssertionError("should have raised") assert hasattr(tb, 'tb_frame') assert hasattr(tb, 'tb_lasti') assert hasattr(tb, 'tb_lineno') From noreply at buildbot.pypy.org Tue Feb 21 10:50:47 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Tue, 21 Feb 2012 10:50:47 +0100 (CET) Subject: [pypy-commit] pypy py3k: bah, this test did nothing because of a bad indentation. Fix it, and adapt to py3k because list.append is no longer an unboud method Message-ID: <20120221095047.5653C8203C@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52710:2963eaf808c2 Date: 2012-02-21 10:02 +0100 http://bitbucket.org/pypy/pypy/changeset/2963eaf808c2/ Log: bah, this test did nothing because of a bad indentation. Fix it, and adapt to py3k because list.append is no longer an unboud method diff --git a/pypy/interpreter/test/test_function.py b/pypy/interpreter/test/test_function.py --- a/pypy/interpreter/test/test_function.py +++ b/pypy/interpreter/test/test_function.py @@ -96,8 +96,8 @@ def test_write_code_builtin_forbidden(self): def f(*args): return 42 - raises(TypeError, "dir.func_code = f.func_code") - raises(TypeError, "list.append.im_func.func_code = f.func_code") + raises(TypeError, "dir.func_code = f.func_code") + raises(TypeError, "list().append.im_func.func_code = f.func_code") def test_set_module_to_name_eagerly(self): skip("fails on PyPy but works on CPython. 
Unsure we want to care") From noreply at buildbot.pypy.org Tue Feb 21 10:50:48 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Tue, 21 Feb 2012 10:50:48 +0100 (CET) Subject: [pypy-commit] pypy py3k: kill the attributes im_self and im_func from Method objects, in py3k they have been replaced by __self__ and __func__. I expect some tests to fail because of this, I'll let buildbot to find them :-) Message-ID: <20120221095048.96D788203C@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52711:b18b2fc21ef3 Date: 2012-02-21 10:06 +0100 http://bitbucket.org/pypy/pypy/changeset/b18b2fc21ef3/ Log: kill the attributes im_self and im_func from Method objects, in py3k they have been replaced by __self__ and __func__. I expect some tests to fail because of this, I'll let buildbot to find them :-) diff --git a/pypy/interpreter/test/test_function.py b/pypy/interpreter/test/test_function.py --- a/pypy/interpreter/test/test_function.py +++ b/pypy/interpreter/test/test_function.py @@ -97,7 +97,7 @@ def f(*args): return 42 raises(TypeError, "dir.func_code = f.func_code") - raises(TypeError, "list().append.im_func.func_code = f.func_code") + raises(TypeError, "list().append.__func__.func_code = f.func_code") def test_set_module_to_name_eagerly(self): skip("fails on PyPy but works on CPython. Unsure we want to care") diff --git a/pypy/interpreter/test/test_typedef.py b/pypy/interpreter/test/test_typedef.py --- a/pypy/interpreter/test/test_typedef.py +++ b/pypy/interpreter/test/test_typedef.py @@ -337,10 +337,10 @@ class B(A): pass - bm = B().m - assert bm.__func__ is bm.im_func - assert bm.__self__ is bm.im_self - assert bm.im_class is B + obj = B() + bm = obj.m + assert bm.__func__ is A.m + assert bm.__self__ is obj assert bm.__doc__ == "aaa" assert bm.x == 3 raises(AttributeError, setattr, bm, 'x', 15) diff --git a/pypy/interpreter/typedef.py b/pypy/interpreter/typedef.py --- a/pypy/interpreter/typedef.py +++ b/pypy/interpreter/typedef.py @@ -795,9 +795,7 @@ __new__ = interp2app(Method.descr_method__new__.im_func), __call__ = interp2app(Method.descr_method_call), __get__ = interp2app(Method.descr_method_get), - im_func = interp_attrproperty_w('w_function', cls=Method), __func__ = interp_attrproperty_w('w_function', cls=Method), - im_self = interp_attrproperty_w('w_instance', cls=Method), __self__ = interp_attrproperty_w('w_instance', cls=Method), __getattribute__ = interp2app(Method.descr_method_getattribute), __eq__ = interp2app(Method.descr_method_eq), From noreply at buildbot.pypy.org Tue Feb 21 10:50:49 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Tue, 21 Feb 2012 10:50:49 +0100 (CET) Subject: [pypy-commit] pypy py3k: make sure that test_function passes with -A: this is mostly about replacing access to func_* attributes with __*__ Message-ID: <20120221095049.D43858203C@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52712:3abb17f92ceb Date: 2012-02-21 10:36 +0100 http://bitbucket.org/pypy/pypy/changeset/3abb17f92ceb/ Log: make sure that test_function passes with -A: this is mostly about replacing access to func_* attributes with __*__ diff --git a/pypy/conftest.py b/pypy/conftest.py --- a/pypy/conftest.py +++ b/pypy/conftest.py @@ -204,6 +204,8 @@ def skip(message): print(message) raise SystemExit(0) + class ExceptionWrapper: + pass def raises(exc, func, *args, **kwargs): try: if isinstance(func, str): @@ -214,8 +216,10 @@ exec(func) else: func(*args, **kwargs) - except exc: - pass + except exc as e: + res = ExceptionWrapper() + 
res.value = e + return res else: raise AssertionError("DID NOT RAISE") """ diff --git a/pypy/interpreter/test/test_function.py b/pypy/interpreter/test/test_function.py --- a/pypy/interpreter/test/test_function.py +++ b/pypy/interpreter/test/test_function.py @@ -1,4 +1,3 @@ - import unittest from pypy.interpreter import eval from pypy.interpreter.function import Function, Method, descr_function_get @@ -10,31 +9,29 @@ def test_attributes(self): globals()['__name__'] = 'mymodulename' def f(): pass - assert hasattr(f, 'func_code') - assert f.func_defaults == None - f.func_defaults = None - assert f.func_defaults == None - assert f.func_dict == {} - assert type(f.func_globals) == dict - assert f.func_closure is None - assert f.func_doc == None - assert f.func_name == 'f' + assert hasattr(f, '__code__') + assert f.__defaults__ == None + f.__defaults__ = None + assert f.__defaults__ == None + assert f.__dict__ == {} + assert type(f.__globals__) == dict + assert f.__closure__ is None + assert f.__doc__ == None + assert f.__name__ == 'f' assert f.__module__ == 'mymodulename' def test_code_is_ok(self): def f(): pass - assert not hasattr(f.func_code, '__dict__') + assert not hasattr(f.__code__, '__dict__') def test_underunder_attributes(self): def f(): pass assert f.__name__ == 'f' assert f.__doc__ == None - assert f.__name__ == f.func_name - assert f.__doc__ == f.func_doc - assert f.__dict__ is f.func_dict - assert f.__code__ is f.func_code - assert f.__defaults__ is f.func_defaults - assert f.__globals__ is f.func_globals + assert f.__dict__ == {} + assert f.__code__.co_name == 'f' + assert f.__defaults__ is None + assert f.__globals__ is globals() assert hasattr(f, '__class__') def test_classmethod(self): @@ -43,7 +40,7 @@ assert classmethod(f).__func__ is f assert staticmethod(f).__func__ is f - def test_write_doc(self): + def test_write___doc__(self): def f(): "hello" assert f.__doc__ == 'hello' f.__doc__ = 'good bye' @@ -51,14 +48,6 @@ del f.__doc__ assert f.__doc__ == None - def test_write_func_doc(self): - def f(): "hello" - assert f.func_doc == 'hello' - f.func_doc = 'good bye' - assert f.func_doc == 'good bye' - del f.func_doc - assert f.func_doc == None - def test_write_module(self): def f(): "hello" f.__module__ = 'ab.c' @@ -69,7 +58,7 @@ def test_new(self): def f(): return 42 FuncType = type(f) - f2 = FuncType(f.func_code, f.func_globals, 'f2', None, None) + f2 = FuncType(f.__code__, f.__globals__, 'f2', None, None) assert f2() == 42 def g(x): @@ -77,7 +66,7 @@ return x return f f = g(42) - raises(TypeError, FuncType, f.func_code, f.func_globals, 'f2', None, None) + raises(TypeError, FuncType, f.__code__, f.__globals__, 'f2', None, None) def test_write_code(self): def f(): @@ -86,18 +75,23 @@ return 41 assert f() == 42 assert g() == 41 - raises(TypeError, "f.func_code = 1") - f.func_code = g.func_code + raises(TypeError, "f.__code__ = 1") + f.__code__ = g.__code__ assert f() == 41 - def h(): - return f() # a closure - raises(ValueError, "f.func_code = h.func_code") + def get_h(f=f): + def h(): + return f() # a closure + return h + h = get_h() + raises(ValueError, "f.__code__ = h.__code__") def test_write_code_builtin_forbidden(self): def f(*args): return 42 - raises(TypeError, "dir.func_code = f.func_code") - raises(TypeError, "list().append.__func__.func_code = f.func_code") + if hasattr('dir', '__code__'): + # only on PyPy, CPython does not expose these attrs + raises(TypeError, "dir.__code__ = f.__code__") + raises(TypeError, "list().append.__func__.__code__ = f.__code__") def 
test_set_module_to_name_eagerly(self): skip("fails on PyPy but works on CPython. Unsure we want to care") @@ -193,8 +187,8 @@ assert res[0] == 23 assert res[1] == {'a': 'a', 'b': 'b'} error = raises(TypeError, lambda: func(42, **[])) - assert error.value.message == ('argument after ** must be a mapping, ' - 'not list') + assert ('argument after ** must be a mapping, not list' in + str(error.value)) def test_default_arg(self): def func(arg1,arg2=42): @@ -282,14 +276,18 @@ try: len() except TypeError as e: - assert "len() takes exactly 1 argument (0 given)" in e.message + msg = str(e) + msg = msg.replace('one', '1') # CPython puts 'one', PyPy '1' + assert "len() takes exactly 1 argument (0 given)" in msg else: assert 0, "did not raise" try: len(1, 2) except TypeError as e: - assert "len() takes exactly 1 argument (2 given)" in e.message + msg = str(e) + msg = msg.replace('one', '1') # CPython puts 'one', PyPy '1' + assert "len() takes exactly 1 argument (2 given)" in msg else: assert 0, "did not raise" @@ -311,13 +309,15 @@ # But let's not test that. Just test that (lambda:42) does not # have 42 as docstring. f = lambda: 42 - assert f.func_doc is None + assert f.__doc__ is None def test_setstate_called_with_wrong_args(self): f = lambda: 42 # not sure what it should raise, since CPython doesn't have setstate # on function types - raises(ValueError, type(f).__setstate__, f, (1, 2, 3)) + FunctionType= type(f) + if hasattr(FunctionType, '__setstate__'): + raises(ValueError, FunctionType.__setstate__, f, (1, 2, 3)) class AppTestMethod: def test_simple_call(self): @@ -447,7 +447,7 @@ return x+y import types im = types.MethodType(A(), 3) - assert map(im, [4]) == [7] + assert list(map(im, [4])) == [7] def test_invalid_creation(self): import types @@ -478,7 +478,7 @@ def setup_method(self, method): def c(self, bar): return bar - code = PyCode._from_code(self.space, c.func_code) + code = PyCode._from_code(self.space, c.__code__) self.fn = Function(self.space, code, self.space.newdict()) def test_get(self): @@ -505,7 +505,7 @@ space = self.space # Create some function for this test only def m(self): return self - func = Function(space, PyCode._from_code(self.space, m.func_code), + func = Function(space, PyCode._from_code(self.space, m.__code__), space.newdict()) # Some shorthands obj1 = space.wrap(23) @@ -541,7 +541,7 @@ """ % (args, args) in d f = d['f'] res = f(*range(i)) - code = PyCode._from_code(self.space, f.func_code) + code = PyCode._from_code(self.space, f.__code__) fn = Function(self.space, code, self.space.newdict()) assert fn.code.fast_natural_arity == i|PyCode.FLATPYCALL @@ -563,7 +563,7 @@ def f(a): return a - code = PyCode._from_code(self.space, f.func_code) + code = PyCode._from_code(self.space, f.__code__) fn = Function(self.space, code, self.space.newdict()) assert fn.code.fast_natural_arity == 1|PyCode.FLATPYCALL @@ -590,7 +590,7 @@ def f(self, a): return a - code = PyCode._from_code(self.space, f.func_code) + code = PyCode._from_code(self.space, f.__code__) fn = Function(self.space, code, self.space.newdict()) assert fn.code.fast_natural_arity == 2|PyCode.FLATPYCALL @@ -618,7 +618,7 @@ def f(a, b): return a+b - code = PyCode._from_code(self.space, f.func_code) + code = PyCode._from_code(self.space, f.__code__) fn = Function(self.space, code, self.space.newdict(), defs_w=[space.newint(1)]) @@ -647,7 +647,7 @@ def f(self, a, b): return a+b - code = PyCode._from_code(self.space, f.func_code) + code = PyCode._from_code(self.space, f.__code__) fn = Function(self.space, code, 
self.space.newdict(), defs_w=[space.newint(1)]) From noreply at buildbot.pypy.org Tue Feb 21 10:50:51 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Tue, 21 Feb 2012 10:50:51 +0100 (CET) Subject: [pypy-commit] pypy py3k: s/func_code/__code__ in test_code.py Message-ID: <20120221095051.197B78203C@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52713:427eec028c45 Date: 2012-02-21 10:38 +0100 http://bitbucket.org/pypy/pypy/changeset/427eec028c45/ Log: s/func_code/__code__ in test_code.py diff --git a/pypy/interpreter/test/test_code.py b/pypy/interpreter/test/test_code.py --- a/pypy/interpreter/test/test_code.py +++ b/pypy/interpreter/test/test_code.py @@ -15,17 +15,17 @@ def test_attributes(self): def f(): pass def g(x, *y, **z): "docstring" - assert hasattr(f.func_code, 'co_code') - assert hasattr(g.func_code, 'co_code') + assert hasattr(f.__code__, 'co_code') + assert hasattr(g.__code__, 'co_code') testcases = [ - (f.func_code, {'co_name': 'f', + (f.__code__, {'co_name': 'f', 'co_names': (), 'co_varnames': (), 'co_argcount': 0, 'co_consts': (None,) }), - (g.func_code, {'co_name': 'g', + (g.__code__, {'co_name': 'g', 'co_names': (), 'co_varnames': ('x', 'y', 'z'), 'co_argcount': 1, @@ -36,13 +36,13 @@ import sys if hasattr(sys, 'pypy_objspaceclass'): testcases += [ - (abs.func_code, {'co_name': 'abs', + (abs.__code__, {'co_name': 'abs', 'co_varnames': ('val',), 'co_argcount': 1, 'co_flags': 0, 'co_consts': ("abs(number) -> number\n\nReturn the absolute value of the argument.",), }), - (object.__init__.func_code, + (object.__init__.__code__, {#'co_name': '__init__', XXX getting descr__init__ 'co_varnames': ('obj', 'args', 'keywords'), 'co_argcount': 1, @@ -72,7 +72,7 @@ d = {} exec(src, d) - assert list(sorted(d['f'].func_code.co_names)) == ['foo', 'g'] + assert list(sorted(d['f'].__code__.co_names)) == ['foo', 'g'] def test_code(self): import sys @@ -117,7 +117,7 @@ assert d['c'] == 3 def f(x): y = 1 - ccode = f.func_code + ccode = f.__code__ raises(ValueError, new.code, -ccode.co_argcount, ccode.co_nlocals, @@ -150,13 +150,13 @@ exec("def f(): pass", d1) d2 = {} exec("def f(): pass", d2) - assert d1['f'].func_code == d2['f'].func_code - assert hash(d1['f'].func_code) == hash(d2['f'].func_code) + assert d1['f'].__code__ == d2['f'].__code__ + assert hash(d1['f'].__code__) == hash(d2['f'].__code__) def test_repr(self): def f(): xxx - res = repr(f.func_code) + res = repr(f.__code__) expected = [" Author: Antonio Cuni Branch: py3k Changeset: r52714:365afd942db2 Date: 2012-02-21 10:49 +0100 http://bitbucket.org/pypy/pypy/changeset/365afd942db2/ Log: kill the func_* attributes, in python3 we only have the corresponding __*__. Also, replace them in all the tests in interpreter/. 
There are probably other tests which will fail, I'll let buildbot to find them diff --git a/pypy/interpreter/test/test_appinterp.py b/pypy/interpreter/test/test_appinterp.py --- a/pypy/interpreter/test/test_appinterp.py +++ b/pypy/interpreter/test/test_appinterp.py @@ -29,7 +29,7 @@ app = appdef("""app(x,y): return x + y """) - assert app.func_name == 'app' + assert app.__name__ == 'app' w_result = app(space, space.wrap(41), space.wrap(1)) assert space.eq_w(w_result, space.wrap(42)) @@ -37,7 +37,7 @@ app = appdef("""app(x,y=1): return x + y """) - assert app.func_name == 'app' + assert app.__name__ == 'app' w_result = app(space, space.wrap(41)) assert space.eq_w(w_result, space.wrap(42)) @@ -59,7 +59,7 @@ app = appdef("""app(): return 42 """) - assert app.func_name == 'app' + assert app.__name__ == 'app' w_result = app(space) assert space.eq_w(w_result, space.wrap(42)) diff --git a/pypy/interpreter/test/test_compiler.py b/pypy/interpreter/test/test_compiler.py --- a/pypy/interpreter/test/test_compiler.py +++ b/pypy/interpreter/test/test_compiler.py @@ -381,8 +381,8 @@ (4 and 5): def g(): "line 6" - fline = f.func_code.co_firstlineno - gline = g.func_code.co_firstlineno + fline = f.__code__.co_firstlineno + gline = g.__code__.co_firstlineno ''')) code = self.compiler.compile(snippet, '', 'exec', 0) space = self.space @@ -400,7 +400,7 @@ @foo # line 4 def f(): # line 5 pass # line 6 - fline = f.func_code.co_firstlineno + fline = f.__code__.co_firstlineno ''')) code = self.compiler.compile(snippet, '', 'exec', 0) space = self.space @@ -766,7 +766,7 @@ """ ns = {} exec(source, ns) - code = ns['f'].func_code + code = ns['f'].__code__ import dis, sys from io import StringIO s = StringIO() @@ -873,7 +873,7 @@ """ ns = {} exec(source, ns) - code = ns['_f'].func_code + code = ns['_f'].__code__ import sys, dis from io import StringIO @@ -893,7 +893,7 @@ """ ns = {} exec(source, ns) - code = ns['_f'].func_code + code = ns['_f'].__code__ import sys, dis from io import StringIO diff --git a/pypy/interpreter/test/test_descrtypecheck.py b/pypy/interpreter/test/test_descrtypecheck.py --- a/pypy/interpreter/test/test_descrtypecheck.py +++ b/pypy/interpreter/test/test_descrtypecheck.py @@ -5,11 +5,11 @@ def test_getsetprop_get(self): def f(): pass - getter = type(f).__dict__['func_code'].__get__ + getter = type(f).__dict__['__code__'].__get__ getter = getattr(getter, 'im_func', getter) # neutralizes pypy/cpython diff raises(TypeError, getter, 1, None) def test_func_code_get(self): def f(): pass - raises(TypeError, type(f).func_code.__get__,1) + raises(TypeError, type(f).__code__.__get__,1) diff --git a/pypy/interpreter/test/test_eval.py b/pypy/interpreter/test/test_eval.py --- a/pypy/interpreter/test/test_eval.py +++ b/pypy/interpreter/test/test_eval.py @@ -7,7 +7,7 @@ def setup_method(self, method): def c(x, y, *args): pass - code = PyCode._from_code(self.space, c.func_code) + code = PyCode._from_code(self.space, c.__code__) class ConcreteFastscopeFrame(Frame): diff --git a/pypy/interpreter/test/test_function.py b/pypy/interpreter/test/test_function.py --- a/pypy/interpreter/test/test_function.py +++ b/pypy/interpreter/test/test_function.py @@ -682,5 +682,5 @@ app_g = gateway.interp2app_temp(g) space = self.space w_g = space.wrap(app_g) - w_defs = space.getattr(w_g, space.wrap("func_defaults")) + w_defs = space.getattr(w_g, space.wrap("__defaults__")) assert space.is_w(w_defs, space.w_None) diff --git a/pypy/interpreter/test/test_gateway.py b/pypy/interpreter/test/test_gateway.py --- 
a/pypy/interpreter/test/test_gateway.py +++ b/pypy/interpreter/test/test_gateway.py @@ -715,7 +715,7 @@ class X(object): def __init__(self, **kw): pass - clash = type.__call__.func_code.co_varnames[0] + clash = type.__call__.__code__.co_varnames[0] X(**{clash: 33}) type.__call__(X, **{clash: 33}) @@ -724,28 +724,28 @@ class X(object): def __init__(self, **kw): pass - clash = object.__new__.func_code.co_varnames[0] + clash = object.__new__.__code__.co_varnames[0] X(**{clash: 33}) object.__new__(X, **{clash: 33}) def test_dict_new(self): - clash = dict.__new__.func_code.co_varnames[0] + clash = dict.__new__.__code__.co_varnames[0] dict(**{clash: 33}) dict.__new__(dict, **{clash: 33}) def test_dict_init(self): d = {} - clash = dict.__init__.func_code.co_varnames[0] + clash = dict.__init__.__code__.co_varnames[0] d.__init__(**{clash: 33}) dict.__init__(d, **{clash: 33}) def test_dict_update(self): d = {} - clash = dict.update.func_code.co_varnames[0] + clash = dict.update.__code__.co_varnames[0] d.update(**{clash: 33}) dict.update(d, **{clash: 33}) diff --git a/pypy/interpreter/test/test_generator.py b/pypy/interpreter/test/test_generator.py --- a/pypy/interpreter/test/test_generator.py +++ b/pypy/interpreter/test/test_generator.py @@ -17,7 +17,7 @@ yield 1 assert g.gi_running g = f() - assert g.gi_code is f.func_code + assert g.gi_code is f.__code__ assert g.__name__ == 'f' assert g.gi_frame is not None assert not g.gi_running @@ -26,7 +26,7 @@ raises(StopIteration, next, g) assert not g.gi_running assert g.gi_frame is None - assert g.gi_code is f.func_code + assert g.gi_code is f.__code__ assert g.__name__ == 'f' def test_generator3(self): diff --git a/pypy/interpreter/test/test_nestedscope.py b/pypy/interpreter/test/test_nestedscope.py --- a/pypy/interpreter/test/test_nestedscope.py +++ b/pypy/interpreter/test/test_nestedscope.py @@ -66,7 +66,7 @@ return f g = f(10) - assert g.func_closure[0].cell_contents == 10 + assert g.__closure__[0].cell_contents == 10 def test_empty_cell_contents(self): @@ -77,7 +77,7 @@ x = 1 g = f() - raises(ValueError, "g.func_closure[0].cell_contents") + raises(ValueError, "g.__closure__[0].cell_contents") def test_compare_cells(self): def f(n): @@ -87,8 +87,8 @@ return x + y return f - g0 = f(0).func_closure[0] - g1 = f(1).func_closure[0] + g0 = f(0).__closure__[0] + g1 = f(1).__closure__[0] assert cmp(g0, g1) == -1 def test_leaking_class_locals(self): diff --git a/pypy/interpreter/test/test_pyframe.py b/pypy/interpreter/test/test_pyframe.py --- a/pypy/interpreter/test/test_pyframe.py +++ b/pypy/interpreter/test/test_pyframe.py @@ -36,7 +36,7 @@ import sys f = sys._getframe() return f.f_code - assert g() is g.func_code + assert g() is g.__code__ def test_f_trace_del(self): import sys @@ -52,7 +52,7 @@ y = f.f_lineno z = f.f_lineno return [x, y, z] - origin = g.func_code.co_firstlineno + origin = g.__code__.co_firstlineno assert g() == [origin+3, origin+4, origin+5] def test_f_lineno_set(self): @@ -457,7 +457,7 @@ len(seen) # take one line del f.f_trace len(seen) # take one line - firstline = set_the_trace.func_code.co_firstlineno + firstline = set_the_trace.__code__.co_firstlineno assert seen == [(1, f, firstline + 6, 'line', None), (1, f, firstline + 7, 'line', None), (1, f, firstline + 8, 'line', None)] diff --git a/pypy/interpreter/test/test_zzpickle_and_slow.py b/pypy/interpreter/test/test_zzpickle_and_slow.py --- a/pypy/interpreter/test/test_zzpickle_and_slow.py +++ b/pypy/interpreter/test/test_zzpickle_and_slow.py @@ -22,7 +22,7 @@ if not hasattr(len, 
'func_code'): skip("Cannot run this test if builtins have no func_code") import inspect - args, varargs, varkw = inspect.getargs(len.func_code) + args, varargs, varkw = inspect.getargs(len.__code__) assert args == ['obj'] assert varargs is None assert varkw is None @@ -84,7 +84,7 @@ def f(): return 42 import pickle - code = f.func_code + code = f.__code__ pckl = pickle.dumps(code) result = pickle.loads(pckl) assert code == result @@ -131,13 +131,13 @@ import pickle pckl = pickle.dumps(func) result = pickle.loads(pckl) - assert func.func_name == result.func_name - assert func.func_closure == result.func_closure - assert func.func_code == result.func_code - assert func.func_defaults == result.func_defaults - assert func.func_dict == result.func_dict - assert func.func_doc == result.func_doc - assert func.func_globals == result.func_globals + assert func.__name__ == result.__name__ + assert func.__closure__ == result.__closure__ + assert func.__code__ == result.__code__ + assert func.__defaults__ == result.__defaults__ + assert func.__dict__ == result.__dict__ + assert func.__doc__ == result.__doc__ + assert func.__globals__ == result.__globals__ def test_pickle_cell(self): def g(): @@ -145,7 +145,7 @@ def f(): x[0] += 1 return x - return f.func_closure[0] + return f.__closure__[0] import pickle cell = g() pckl = pickle.dumps(cell) diff --git a/pypy/interpreter/typedef.py b/pypy/interpreter/typedef.py --- a/pypy/interpreter/typedef.py +++ b/pypy/interpreter/typedef.py @@ -772,19 +772,13 @@ __repr__ = interp2app(Function.descr_function_repr, descrmismatch='__repr__'), __reduce__ = interp2app(Function.descr_function__reduce__), __setstate__ = interp2app(Function.descr_function__setstate__), - func_code = getset_func_code, - func_doc = getset_func_doc, - func_name = getset_func_name, - func_dict = getset_func_dict, - func_defaults = getset_func_defaults, - func_globals = interp_attrproperty_w('w_func_globals', cls=Function), - func_closure = GetSetProperty( Function.fget_func_closure ), __code__ = getset_func_code, __doc__ = getset_func_doc, __name__ = getset_func_name, __dict__ = getset_func_dict, __defaults__ = getset_func_defaults, __globals__ = interp_attrproperty_w('w_func_globals', cls=Function), + __closure__ = GetSetProperty( Function.fget_func_closure ), __module__ = getset___module__, __weakref__ = make_weakref_descr(Function), ) From noreply at buildbot.pypy.org Tue Feb 21 11:06:08 2012 From: noreply at buildbot.pypy.org (bivab) Date: Tue, 21 Feb 2012 11:06:08 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: Allow to add an offset of 0 when using shadow stack, as long as the offset is aligned Message-ID: <20120221100608.E94058203C@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: ppc-jit-backend Changeset: r52715:cbc1e5945396 Date: 2012-02-21 01:38 -0800 http://bitbucket.org/pypy/pypy/changeset/cbc1e5945396/ Log: Allow to add an offset of 0 when using shadow stack, as long as the offset is aligned diff --git a/pypy/jit/backend/llsupport/gc.py b/pypy/jit/backend/llsupport/gc.py --- a/pypy/jit/backend/llsupport/gc.py +++ b/pypy/jit/backend/llsupport/gc.py @@ -523,7 +523,7 @@ return [] def add_frame_offset(self, shape, offset): - assert offset != 0 + assert offset & 3 == 0 shape.append(offset) def add_callee_save_reg(self, shape, register): From noreply at buildbot.pypy.org Tue Feb 21 11:06:10 2012 From: noreply at buildbot.pypy.org (bivab) Date: Tue, 21 Feb 2012 11:06:10 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: store the correct register here 
Message-ID: <20120221100610.346DC82366@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: ppc-jit-backend Changeset: r52716:6debfbeeb8b8 Date: 2012-02-21 01:39 -0800 http://bitbucket.org/pypy/pypy/changeset/6debfbeeb8b8/ Log: store the correct register here diff --git a/pypy/jit/backend/ppc/ppc_assembler.py b/pypy/jit/backend/ppc/ppc_assembler.py --- a/pypy/jit/backend/ppc/ppc_assembler.py +++ b/pypy/jit/backend/ppc/ppc_assembler.py @@ -1017,7 +1017,7 @@ with scratch_reg(self.mc): self.mc.load_imm(r.SCRATCH, nursery_free_adr) - self.mc.storex(r.r1.value, 0, r.SCRATCH.value) + self.mc.storex(r.r4.value, 0, r.SCRATCH.value) def mark_gc_roots(self, force_index, use_copy_area=False): if force_index < 0: From noreply at buildbot.pypy.org Tue Feb 21 11:06:11 2012 From: noreply at buildbot.pypy.org (bivab) Date: Tue, 21 Feb 2012 11:06:11 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: Do not save the volatile registers around the call malloc in malloc_slowpath Message-ID: <20120221100611.6BB2D82367@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: ppc-jit-backend Changeset: r52717:1f4a181255fa Date: 2012-02-21 02:04 -0800 http://bitbucket.org/pypy/pypy/changeset/1f4a181255fa/ Log: Do not save the volatile registers around the call malloc in malloc_slowpath Saving the registers for malloc on the stack overwrites the saved volatiles leading to random failures when the volatile registers are restored. The volatile registers managed by the register allocator are saved and restored anyway around the call to malloc. diff --git a/pypy/jit/backend/ppc/ppc_assembler.py b/pypy/jit/backend/ppc/ppc_assembler.py --- a/pypy/jit/backend/ppc/ppc_assembler.py +++ b/pypy/jit/backend/ppc/ppc_assembler.py @@ -307,15 +307,17 @@ mc.stw(r.SCRATCH.value, r.SP.value, 0) else: mc.std(r.SCRATCH.value, r.SP.value, 0) - with Saved_Volatiles(mc): - # Values to compute size stored in r3 and r4 - mc.subf(r.r3.value, r.r3.value, r.r4.value) - addr = self.cpu.gc_ll_descr.get_malloc_slowpath_addr() - for reg, ofs in PPCRegisterManager.REGLOC_TO_COPY_AREA_OFS.items(): - mc.store(reg.value, r.SPP.value, ofs) - mc.call(addr) - for reg, ofs in PPCRegisterManager.REGLOC_TO_COPY_AREA_OFS.items(): - mc.load(reg.value, r.SPP.value, ofs) + # managed volatiles are saved below + if self.cpu.supports_floats: + assert 0, "make sure to save floats here" + # Values to compute size stored in r3 and r4 + mc.subf(r.r3.value, r.r3.value, r.r4.value) + addr = self.cpu.gc_ll_descr.get_malloc_slowpath_addr() + for reg, ofs in PPCRegisterManager.REGLOC_TO_COPY_AREA_OFS.items(): + mc.store(reg.value, r.SPP.value, ofs) + mc.call(addr) + for reg, ofs in PPCRegisterManager.REGLOC_TO_COPY_AREA_OFS.items(): + mc.load(reg.value, r.SPP.value, ofs) mc.cmp_op(0, r.r3.value, 0, imm=True) jmp_pos = mc.currpos() From noreply at buildbot.pypy.org Tue Feb 21 11:15:00 2012 From: noreply at buildbot.pypy.org (bivab) Date: Tue, 21 Feb 2012 11:15:00 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: store frame size for malloc_slowpath in a variable Message-ID: <20120221101500.3AAD58203C@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: ppc-jit-backend Changeset: r52718:a194a3b4885f Date: 2012-02-21 02:11 -0800 http://bitbucket.org/pypy/pypy/changeset/a194a3b4885f/ Log: store frame size for malloc_slowpath in a variable diff --git a/pypy/jit/backend/ppc/ppc_assembler.py b/pypy/jit/backend/ppc/ppc_assembler.py --- a/pypy/jit/backend/ppc/ppc_assembler.py +++ b/pypy/jit/backend/ppc/ppc_assembler.py @@ -301,7 +301,10 @@ if 
IS_PPC_64: for _ in range(6): mc.write32(0) - mc.subi(r.SP.value, r.SP.value, BACKCHAIN_SIZE * WORD + 1*WORD) + frame_size = (# add space for floats later + + WORD # Link Register + + BACKCHAIN_SIZE * WORD) + mc.subi(r.SP.value, r.SP.value, frame_size) mc.mflr(r.SCRATCH.value) if IS_PPC_32: mc.stw(r.SCRATCH.value, r.SP.value, 0) @@ -329,7 +332,7 @@ mc.load(r.SCRATCH.value, r.SP.value, 0) mc.mtlr(r.SCRATCH.value) # restore LR - mc.addi(r.SP.value, r.SP.value, BACKCHAIN_SIZE * WORD + 1*WORD) # restore old SP + mc.addi(r.SP.value, r.SP.value, frame_size) # restore old SP mc.blr() # if r3 == 0 we skip the return above and jump to the exception path From noreply at buildbot.pypy.org Tue Feb 21 11:36:52 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Tue, 21 Feb 2012 11:36:52 +0100 (CET) Subject: [pypy-commit] pypy py3k: (antocuni, arigo): we no longer have a file to subclass in py3k. Change the test to use array.array, and add a comment explaining what we are actually testing Message-ID: <20120221103652.253328203C@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52719:94f6f187a675 Date: 2012-02-21 11:36 +0100 http://bitbucket.org/pypy/pypy/changeset/94f6f187a675/ Log: (antocuni, arigo): we no longer have a file to subclass in py3k. Change the test to use array.array, and add a comment explaining what we are actually testing diff --git a/pypy/interpreter/test/test_typedef.py b/pypy/interpreter/test/test_typedef.py --- a/pypy/interpreter/test/test_typedef.py +++ b/pypy/interpreter/test/test_typedef.py @@ -313,20 +313,22 @@ cls.w_path = cls.space.wrap(str(path)) def test_destructor(self): - import gc, os + import gc, array seen = [] - class MyFile(file): + class MyArray(array.array): def __del__(self): + # here we check that we can still access the array, i.e. 
that + # the interp-level __del__ has not been called yet seen.append(10) - seen.append(os.lseek(self.fileno(), 2, 0)) - f = MyFile(self.path, 'r') - fd = f.fileno() - seen.append(os.lseek(fd, 5, 0)) - del f + seen.append(self[0]) + a = MyArray('i') + a.append(42) + seen.append(a[0]) + del a gc.collect(); gc.collect(); gc.collect() lst = seen[:] - assert lst == [5, 10, 2] - raises(OSError, os.lseek, fd, 7, 0) + print(lst) + assert lst == [42, 10, 42] def test_method_attrs(self): import sys From noreply at buildbot.pypy.org Tue Feb 21 13:10:50 2012 From: noreply at buildbot.pypy.org (fijal) Date: Tue, 21 Feb 2012 13:10:50 +0100 (CET) Subject: [pypy-commit] pypy backend-vector-ops: disable this for the purpose of this branch Message-ID: <20120221121050.E5FE28203C@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: backend-vector-ops Changeset: r52720:dd71f098d947 Date: 2012-02-21 05:10 -0700 http://bitbucket.org/pypy/pypy/changeset/dd71f098d947/ Log: disable this for the purpose of this branch diff --git a/pypy/module/micronumpy/__init__.py b/pypy/module/micronumpy/__init__.py --- a/pypy/module/micronumpy/__init__.py +++ b/pypy/module/micronumpy/__init__.py @@ -71,9 +71,9 @@ 'intp': 'types.IntP.BoxType', 'uintp': 'types.UIntP.BoxType', 'flexible': 'interp_boxes.W_FlexibleBox', - 'character': 'interp_boxes.W_CharacterBox', - 'str_': 'interp_boxes.W_StringBox', - 'unicode_': 'interp_boxes.W_UnicodeBox', +# 'character': 'interp_boxes.W_CharacterBox', +# 'str_': 'interp_boxes.W_StringBox', +# 'unicode_': 'interp_boxes.W_UnicodeBox', 'void': 'interp_boxes.W_VoidBox', } diff --git a/pypy/module/micronumpy/interp_boxes.py b/pypy/module/micronumpy/interp_boxes.py --- a/pypy/module/micronumpy/interp_boxes.py +++ b/pypy/module/micronumpy/interp_boxes.py @@ -388,15 +388,15 @@ __setitem__ = interp2app(W_VoidBox.descr_setitem), ) -W_CharacterBox.typedef = TypeDef("character", W_FlexibleBox.typedef, - __module__ = "numpypy", -) +#W_CharacterBox.typedef = TypeDef("character", W_FlexibleBox.typedef, +# __module__ = "numpypy", +#) -W_StringBox.typedef = TypeDef("string_", (str_typedef, W_CharacterBox.typedef), - __module__ = "numpypy", -) +#W_StringBox.typedef = TypeDef("string_", (str_typedef, W_CharacterBox.typedef), +# __module__ = "numpypy", +#) -W_UnicodeBox.typedef = TypeDef("unicode_", (unicode_typedef, W_CharacterBox.typedef), - __module__ = "numpypy", -) +#W_UnicodeBox.typedef = TypeDef("unicode_", (unicode_typedef, W_CharacterBox.typedef), +# __module__ = "numpypy", +#) From noreply at buildbot.pypy.org Tue Feb 21 13:27:38 2012 From: noreply at buildbot.pypy.org (fijal) Date: Tue, 21 Feb 2012 13:27:38 +0100 (CET) Subject: [pypy-commit] pypy backend-vector-ops: disable it completely Message-ID: <20120221122738.8BE468203C@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: backend-vector-ops Changeset: r52721:a45b3cbb09c3 Date: 2012-02-21 05:27 -0700 http://bitbucket.org/pypy/pypy/changeset/a45b3cbb09c3/ Log: disable it completely diff --git a/pypy/module/micronumpy/interp_boxes.py b/pypy/module/micronumpy/interp_boxes.py --- a/pypy/module/micronumpy/interp_boxes.py +++ b/pypy/module/micronumpy/interp_boxes.py @@ -226,14 +226,14 @@ dtype.itemtype.store(self.arr, 1, self.ofs, ofs, dtype.coerce(space, w_value)) -class W_CharacterBox(W_FlexibleBox): - pass +#class W_CharacterBox(W_FlexibleBox): +# pass -class W_StringBox(W_CharacterBox): - pass +#class W_StringBox(W_CharacterBox): +# pass -class W_UnicodeBox(W_CharacterBox): - pass +#class W_UnicodeBox(W_CharacterBox): +# pass 
W_GenericBox.typedef = TypeDef("generic", __module__ = "numpypy", diff --git a/pypy/module/micronumpy/interp_dtype.py b/pypy/module/micronumpy/interp_dtype.py --- a/pypy/module/micronumpy/interp_dtype.py +++ b/pypy/module/micronumpy/interp_dtype.py @@ -465,7 +465,7 @@ } typeinfo_partial = { 'Generic': interp_boxes.W_GenericBox, - 'Character': interp_boxes.W_CharacterBox, + #'Character': interp_boxes.W_CharacterBox, 'Flexible': interp_boxes.W_FlexibleBox, 'Inexact': interp_boxes.W_InexactBox, 'Integer': interp_boxes.W_IntegerBox, From noreply at buildbot.pypy.org Tue Feb 21 13:35:50 2012 From: noreply at buildbot.pypy.org (fijal) Date: Tue, 21 Feb 2012 13:35:50 +0100 (CET) Subject: [pypy-commit] pypy backend-vector-ops: more disabling Message-ID: <20120221123550.403868203C@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: backend-vector-ops Changeset: r52722:a92fc8dd58c6 Date: 2012-02-21 05:35 -0700 http://bitbucket.org/pypy/pypy/changeset/a92fc8dd58c6/ Log: more disabling diff --git a/pypy/module/micronumpy/interp_dtype.py b/pypy/module/micronumpy/interp_dtype.py --- a/pypy/module/micronumpy/interp_dtype.py +++ b/pypy/module/micronumpy/interp_dtype.py @@ -164,6 +164,7 @@ except ValueError: raise OperationError(space.w_TypeError, space.wrap("data type not understood")) if char == 'S': + raise NotImplementedError itemtype = types.StringType(size) basename = 'string' num = 18 @@ -175,6 +176,7 @@ raise OperationError(space.w_NotImplementedError, space.wrap( "pure void dtype")) else: + raise NotImplementedError assert char == 'U' basename = 'unicode' itemtype = types.UnicodeType(size) @@ -384,7 +386,7 @@ kind=STRINGLTR, name='string', char='S', - w_box_type = space.gettypefor(interp_boxes.W_StringBox), + w_box_type = None,#space.gettypefor(interp_boxes.W_StringBox), alternate_constructors=[space.w_str], ) self.w_unicodedtype = W_Dtype( @@ -393,7 +395,7 @@ kind=UNICODELTR, name='unicode', char='U', - w_box_type = space.gettypefor(interp_boxes.W_UnicodeBox), + w_box_type = None,#space.gettypefor(interp_boxes.W_UnicodeBox), alternate_constructors=[space.w_unicode], ) self.w_voiddtype = W_Dtype( From noreply at buildbot.pypy.org Tue Feb 21 13:56:57 2012 From: noreply at buildbot.pypy.org (fijal) Date: Tue, 21 Feb 2012 13:56:57 +0100 (CET) Subject: [pypy-commit] pypy backend-vector-ops: be slightly more robust against random stuff like opaque types Message-ID: <20120221125657.392A18203C@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: backend-vector-ops Changeset: r52723:c73e7dfd34eb Date: 2012-02-21 05:56 -0700 http://bitbucket.org/pypy/pypy/changeset/c73e7dfd34eb/ Log: be slightly more robust against random stuff like opaque types diff --git a/pypy/rpython/rbuiltin.py b/pypy/rpython/rbuiltin.py --- a/pypy/rpython/rbuiltin.py +++ b/pypy/rpython/rbuiltin.py @@ -362,8 +362,8 @@ flags['track_allocation'] = v_track_allocation.value if i_add_memory_pressure is not None: flags['add_memory_pressure'] = v_add_memory_pressure.value - mpa = hop.r_result.lowleveltype.TO._hints.get('memory_position_alignment', - None) + T = hop.r_result.lowleveltype.TO + mpa = getattr(T, '_hints', {}).get('memory_position_alignment', None) if mpa is not None: flags['memory_position_alignment'] = mpa vlist.append(hop.inputconst(lltype.Void, flags)) From noreply at buildbot.pypy.org Tue Feb 21 14:50:50 2012 From: noreply at buildbot.pypy.org (fijal) Date: Tue, 21 Feb 2012 14:50:50 +0100 (CET) Subject: [pypy-commit] pypy backend-vector-ops: ARGH; Message-ID: 
<20120221135050.43ED28203C@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: backend-vector-ops Changeset: r52724:da24a5614f75 Date: 2012-02-21 06:50 -0700 http://bitbucket.org/pypy/pypy/changeset/da24a5614f75/ Log: ARGH; diff --git a/pypy/rpython/memory/gctransform/transform.py b/pypy/rpython/memory/gctransform/transform.py --- a/pypy/rpython/memory/gctransform/transform.py +++ b/pypy/rpython/memory/gctransform/transform.py @@ -627,10 +627,10 @@ fnptr = self.raw_malloc_varsize_no_length_zero_ptr else: fnptr = self.raw_malloc_varsize_no_length_ptr - v_raw = hop.genop("direct_call", - [fnptr, v_length, c_const_size, - c_item_size], - resulttype=llmemory.Address) + v_raw = hop.genop("direct_call", + [fnptr, v_length, c_const_size, + c_item_size], + resulttype=llmemory.Address) else: if flags.get('zero'): raise NotImplementedError("raw zero varsize malloc with length field") From noreply at buildbot.pypy.org Tue Feb 21 14:54:01 2012 From: noreply at buildbot.pypy.org (bivab) Date: Tue, 21 Feb 2012 14:54:01 +0100 (CET) Subject: [pypy-commit] pypy arm-backend-2: add a timeout of 600s to tests using pexpect. Timeouts were causing test failures on ARM Message-ID: <20120221135401.473FB8203C@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: arm-backend-2 Changeset: r52725:515842e726fc Date: 2012-02-21 13:51 +0000 http://bitbucket.org/pypy/pypy/changeset/515842e726fc/ Log: add a timeout of 600s to tests using pexpect. Timeouts were causing test failures on ARM diff --git a/pypy/conftest.py b/pypy/conftest.py --- a/pypy/conftest.py +++ b/pypy/conftest.py @@ -539,6 +539,7 @@ def _spawn(self, *args, **kwds): import pexpect + kwds.setdefault('timeout', 600) child = pexpect.spawn(*args, **kwds) child.logfile = sys.stdout return child From noreply at buildbot.pypy.org Tue Feb 21 14:54:02 2012 From: noreply at buildbot.pypy.org (bivab) Date: Tue, 21 Feb 2012 14:54:02 +0100 (CET) Subject: [pypy-commit] pypy arm-backend-2: Also add a timeout to tests explicitly using pexpect Message-ID: <20120221135402.72CF98203C@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: arm-backend-2 Changeset: r52726:af0e88e4cedc Date: 2012-02-21 13:52 +0000 http://bitbucket.org/pypy/pypy/changeset/af0e88e4cedc/ Log: Also add a timeout to tests explicitly using pexpect diff --git a/pypy/module/_minimal_curses/test/test_curses.py b/pypy/module/_minimal_curses/test/test_curses.py --- a/pypy/module/_minimal_curses/test/test_curses.py +++ b/pypy/module/_minimal_curses/test/test_curses.py @@ -18,6 +18,7 @@ """ def _spawn(self, *args, **kwds): import pexpect + kwds.setdefault('timeout', 600) print 'SPAWN:', args, kwds child = pexpect.spawn(*args, **kwds) child.logfile = sys.stdout From noreply at buildbot.pypy.org Tue Feb 21 16:07:28 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Tue, 21 Feb 2012 16:07:28 +0100 (CET) Subject: [pypy-commit] pypy py3k: fix the syntax here. The test still fails, no clue why Message-ID: <20120221150728.BCBB68203C@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52727:5a4f37c69744 Date: 2012-02-21 12:11 +0100 http://bitbucket.org/pypy/pypy/changeset/5a4f37c69744/ Log: fix the syntax here. 
The test still fails, no clue why diff --git a/pypy/interpreter/test/test_zpy.py b/pypy/interpreter/test/test_zpy.py --- a/pypy/interpreter/test/test_zpy.py +++ b/pypy/interpreter/test/test_zpy.py @@ -11,6 +11,12 @@ argslist = map(str, args) popen = subprocess.Popen(argslist, stdout=subprocess.PIPE) stdout, stderr = popen.communicate() + print '--- stdout ---' + print stdout + print + print '--- stderr ---' + print stderr + print return stdout @@ -18,19 +24,19 @@ """Ensures sys.executable points to the py.py script""" # TODO : watch out for spaces/special chars in pypypath output = run(sys.executable, pypypath, - "-c", "import sys;print sys.executable") + "-c", "import sys;print(sys.executable)") assert output.splitlines()[-1] == pypypath def test_special_names(): """Test the __name__ and __file__ special global names""" - cmd = "print __name__; print '__file__' in globals()" + cmd = "print(__name__); print('__file__' in globals())" output = run(sys.executable, pypypath, '-c', cmd) assert output.splitlines()[-2] == '__main__' assert output.splitlines()[-1] == 'False' tmpfilepath = str(udir.join("test_py_script_1.py")) tmpfile = file( tmpfilepath, "w" ) - tmpfile.write("print __name__; print __file__\n") + tmpfile.write("print(__name__); print(__file__)\n") tmpfile.close() output = run(sys.executable, pypypath, tmpfilepath) @@ -41,22 +47,22 @@ """Some tests on argv""" # test 1 : no arguments output = run(sys.executable, pypypath, - "-c", "import sys;print sys.argv") + "-c", "import sys;print(sys.argv)") assert output.splitlines()[-1] == str(['-c']) # test 2 : some arguments after output = run(sys.executable, pypypath, - "-c", "import sys;print sys.argv", "hello") + "-c", "import sys;print(sys.argv)", "hello") assert output.splitlines()[-1] == str(['-c','hello']) # test 3 : additionnal pypy parameters output = run(sys.executable, pypypath, - "-O", "-c", "import sys;print sys.argv", "hello") + "-O", "-c", "import sys;print(sys.argv)", "hello") assert output.splitlines()[-1] == str(['-c','hello']) SCRIPT_1 = """ import sys -print sys.argv +print(sys.argv) """ def test_scripts(): tmpfilepath = str(udir.join("test_py_script.py")) From noreply at buildbot.pypy.org Tue Feb 21 16:07:30 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Tue, 21 Feb 2012 16:07:30 +0100 (CET) Subject: [pypy-commit] pypy py3k: fix these two tests after we killed func_* and im_self Message-ID: <20120221150730.50FD98203C@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52728:97dcba2db8d1 Date: 2012-02-21 12:20 +0100 http://bitbucket.org/pypy/pypy/changeset/97dcba2db8d1/ Log: fix these two tests after we killed func_* and im_self diff --git a/pypy/interpreter/astcompiler/test/test_compiler.py b/pypy/interpreter/astcompiler/test/test_compiler.py --- a/pypy/interpreter/astcompiler/test/test_compiler.py +++ b/pypy/interpreter/astcompiler/test/test_compiler.py @@ -528,7 +528,7 @@ else: # line 5 if 1: pass # line 6 import dis - co = ireturn_example.func_code + co = ireturn_example.__code__ linestarts = list(dis.findlinestarts(co)) addrreturn = linestarts[-1][0] x = [addrreturn == (len(co.co_code) - 4)] diff --git a/pypy/module/__builtin__/test/test_descriptor.py b/pypy/module/__builtin__/test/test_descriptor.py --- a/pypy/module/__builtin__/test/test_descriptor.py +++ b/pypy/module/__builtin__/test/test_descriptor.py @@ -259,10 +259,10 @@ assert ff.__get__(0, int)(42) == (int, 42) assert ff.__get__(0)(42) == (int, 42) - assert C.goo.im_self is C - assert D.goo.im_self is D - assert 
super(D,D).goo.im_self is D - assert super(D,d).goo.im_self is D + assert C.goo.__self__ is C + assert D.goo.__self__ is D + assert super(D,D).goo.__self__ is D + assert super(D,d).goo.__self__ is D assert super(D,D).goo() == (D,) assert super(D,d).goo() == (D,) From noreply at buildbot.pypy.org Tue Feb 21 16:07:31 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Tue, 21 Feb 2012 16:07:31 +0100 (CET) Subject: [pypy-commit] pypy py3k: s/func_code/__code__ Message-ID: <20120221150731.8F9078203C@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52729:cad083e080fd Date: 2012-02-21 12:29 +0100 http://bitbucket.org/pypy/pypy/changeset/cad083e080fd/ Log: s/func_code/__code__ diff --git a/pypy/module/_collections/app_defaultdict.py b/pypy/module/_collections/app_defaultdict.py --- a/pypy/module/_collections/app_defaultdict.py +++ b/pypy/module/_collections/app_defaultdict.py @@ -25,7 +25,7 @@ def __missing__(self, key): pass # this method is written at interp-level - __missing__.func_code = _collections.__missing__.func_code + __missing__.__code__ = _collections.__missing__.__code__ def __repr__(self, recurse=set()): # XXX not thread-safe, but good enough From noreply at buildbot.pypy.org Tue Feb 21 16:07:32 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Tue, 21 Feb 2012 16:07:32 +0100 (CET) Subject: [pypy-commit] pypy py3k: s/xrange/range Message-ID: <20120221150732.CF3948203C@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52730:4dc1b4196c99 Date: 2012-02-21 12:29 +0100 http://bitbucket.org/pypy/pypy/changeset/4dc1b4196c99/ Log: s/xrange/range diff --git a/pypy/module/_collections/test/test_deque.py b/pypy/module/_collections/test/test_deque.py --- a/pypy/module/_collections/test/test_deque.py +++ b/pypy/module/_collections/test/test_deque.py @@ -7,20 +7,20 @@ def test_basics(self): from _collections import deque - d = deque(xrange(-5125, -5000)) - d.__init__(xrange(200)) - for i in xrange(200, 400): + d = deque(range(-5125, -5000)) + d.__init__(range(200)) + for i in range(200, 400): d.append(i) - for i in reversed(xrange(-200, 0)): + for i in reversed(range(-200, 0)): d.appendleft(i) assert list(d) == range(-200, 400) assert len(d) == 600 - left = [d.popleft() for i in xrange(250)] + left = [d.popleft() for i in range(250)] assert left == range(-200, 50) assert list(d) == range(50, 400) - right = [d.pop() for i in xrange(250)] + right = [d.pop() for i in range(250)] right.reverse() assert right == range(150, 400) assert list(d) == range(50, 150) @@ -139,9 +139,9 @@ def test_getitem(self): from _collections import deque n = 200 - l = xrange(1000, 1000 + n) + l = range(1000, 1000 + n) d = deque(l) - for j in xrange(-n, n): + for j in range(-n, n): assert d[j] == l[j] raises(IndexError, "d[-n-1]") raises(IndexError, "d[n]") @@ -149,12 +149,12 @@ def test_setitem(self): from _collections import deque n = 200 - d = deque(xrange(n)) - for i in xrange(n): + d = deque(range(n)) + for i in range(n): d[i] = 10 * i - assert list(d) == [10*i for i in xrange(n)] + assert list(d) == [10*i for i in range(n)] l = list(d) - for i in xrange(1-n, 0, -3): + for i in range(1-n, 0, -3): d[i] = 7*i l[i] = 7*i assert list(d) == l @@ -167,7 +167,7 @@ def test_reverse(self): from _collections import deque - d = deque(xrange(1000, 1200)) + d = deque(range(1000, 1200)) d.reverse() assert list(d) == list(reversed(range(1000, 1200))) # @@ -232,7 +232,7 @@ def test_repr(self): from _collections import deque - d = deque(xrange(20)) + d = deque(range(20)) 
e = eval(repr(d)) assert d == e d.append(d) @@ -244,7 +244,7 @@ def test_roundtrip_iter_init(self): from _collections import deque - d = deque(xrange(200)) + d = deque(range(200)) e = deque(d) assert d is not e assert d == e @@ -288,7 +288,7 @@ def test_reversed(self): from _collections import deque - for s in ('abcd', xrange(200)): + for s in ('abcd', range(200)): assert list(reversed(deque(s))) == list(reversed(s)) def test_free(self): From noreply at buildbot.pypy.org Tue Feb 21 16:07:34 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Tue, 21 Feb 2012 16:07:34 +0100 (CET) Subject: [pypy-commit] pypy py3k: s/func_code/__code__ Message-ID: <20120221150734.198E68203C@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52731:3c40a4aa219a Date: 2012-02-21 13:28 +0100 http://bitbucket.org/pypy/pypy/changeset/3c40a4aa219a/ Log: s/func_code/__code__ diff --git a/pypy/module/_continuation/interp_continuation.py b/pypy/module/_continuation/interp_continuation.py --- a/pypy/module/_continuation/interp_continuation.py +++ b/pypy/module/_continuation/interp_continuation.py @@ -172,7 +172,7 @@ raise TypeError( "can\'t send non-None value to a just-started continulet") return func(c, *args, **kwds) - return start.func_code + return start.__code__ ''') self.entrypoint_pycode = space.interp_w(PyCode, w_code) self.entrypoint_pycode.hidden_applevel = True From noreply at buildbot.pypy.org Tue Feb 21 16:07:35 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Tue, 21 Feb 2012 16:07:35 +0100 (CET) Subject: [pypy-commit] pypy py3k: s/func_code/__code__, and fix one import of StringIO Message-ID: <20120221150735.5FB378203C@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52732:fa465e45bc43 Date: 2012-02-21 13:34 +0100 http://bitbucket.org/pypy/pypy/changeset/fa465e45bc43/ Log: s/func_code/__code__, and fix one import of StringIO diff --git a/pypy/module/_lsprof/test/test_cprofile.py b/pypy/module/_lsprof/test/test_cprofile.py --- a/pypy/module/_lsprof/test/test_cprofile.py +++ b/pypy/module/_lsprof/test/test_cprofile.py @@ -107,8 +107,8 @@ entries = {} for entry in stats: entries[entry.code] = entry - efoo = entries[foo.func_code] - ebar = entries[bar.func_code] + efoo = entries[foo.__code__] + ebar = entries[bar.__code__] assert 0.9 < efoo.totaltime < 2.9 # --- cannot test .inlinetime, because it does not include # --- the time spent doing the call to time.time() diff --git a/pypy/module/pypyjit/test/test_jit_hook.py b/pypy/module/pypyjit/test/test_jit_hook.py --- a/pypy/module/pypyjit/test/test_jit_hook.py +++ b/pypy/module/pypyjit/test/test_jit_hook.py @@ -120,8 +120,8 @@ int_add = elem[3][0] dmp = elem[3][1] assert isinstance(dmp, pypyjit.DebugMergePoint) - assert dmp.pycode is self.f.func_code - assert dmp.greenkey == (self.f.func_code, 0, False) + assert dmp.pycode is self.f.__code__ + assert dmp.greenkey == (self.f.__code__, 0, False) assert dmp.call_depth == 0 assert int_add.name == 'int_add' assert int_add.num == self.int_add_num @@ -132,13 +132,14 @@ assert len(all) == 2 def test_on_compile_exception(self): - import pypyjit, sys, cStringIO + import pypyjit, sys + from io import StringIO def hook(*args): 1/0 pypyjit.set_compile_hook(hook) - s = cStringIO.StringIO() + s = StringIO() prev = sys.stderr sys.stderr = s try: From noreply at buildbot.pypy.org Tue Feb 21 16:07:36 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Tue, 21 Feb 2012 16:07:36 +0100 (CET) Subject: [pypy-commit] pypy py3k: s/func_code/__code__, and force the 
list out of range() Message-ID: <20120221150736.A015C8203C@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52733:ca3bebdbf7d8 Date: 2012-02-21 13:37 +0100 http://bitbucket.org/pypy/pypy/changeset/ca3bebdbf7d8/ Log: s/func_code/__code__, and force the list out of range() diff --git a/pypy/module/pypyjit/test/test_jit_hook.py b/pypy/module/pypyjit/test/test_jit_hook.py --- a/pypy/module/pypyjit/test/test_jit_hook.py +++ b/pypy/module/pypyjit/test/test_jit_hook.py @@ -225,9 +225,9 @@ def f(): pass - op = DebugMergePoint([Box(0)], 'repr', 'pypyjit', 2, (f.func_code, 0, 0)) + op = DebugMergePoint([Box(0)], 'repr', 'pypyjit', 2, (f.__code__, 0, 0)) assert op.bytecode_no == 0 - assert op.pycode is f.func_code + assert op.pycode is f.__code__ assert repr(op) == 'repr' assert op.jitdriver_name == 'pypyjit' assert op.num == self.dmp_num diff --git a/pypy/module/test_lib_pypy/test_greenlet.py b/pypy/module/test_lib_pypy/test_greenlet.py --- a/pypy/module/test_lib_pypy/test_greenlet.py +++ b/pypy/module/test_lib_pypy/test_greenlet.py @@ -18,7 +18,7 @@ lst.append(2) g.switch() lst.append(4) - assert lst == range(5) + assert lst == list(range(5)) def test_parent(self): from greenlet import greenlet From noreply at buildbot.pypy.org Tue Feb 21 16:07:37 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Tue, 21 Feb 2012 16:07:37 +0100 (CET) Subject: [pypy-commit] pypy py3k: s/func_code/__code__ Message-ID: <20120221150737.DEEC38203C@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52734:422010ccb862 Date: 2012-02-21 13:40 +0100 http://bitbucket.org/pypy/pypy/changeset/422010ccb862/ Log: s/func_code/__code__ diff --git a/pypy/objspace/std/test/test_mapdict.py b/pypy/objspace/std/test/test_mapdict.py --- a/pypy/objspace/std/test/test_mapdict.py +++ b/pypy/objspace/std/test/test_mapdict.py @@ -655,7 +655,7 @@ "objspace.opcodes.CALL_METHOD": True}) # def check(space, w_func, name): - w_code = space.getattr(w_func, space.wrap('func_code')) + w_code = space.getattr(w_func, space.wrap('__code__')) nameindex = map(space.str_w, w_code.co_names_w).index(name) entry = w_code._mapdict_caches[nameindex] entry.failure_counter = 0 From noreply at buildbot.pypy.org Tue Feb 21 16:07:39 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Tue, 21 Feb 2012 16:07:39 +0100 (CET) Subject: [pypy-commit] pypy py3k: s/func_code/__code__ Message-ID: <20120221150739.2826D8203C@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52735:e2538e50a9a4 Date: 2012-02-21 15:10 +0100 http://bitbucket.org/pypy/pypy/changeset/e2538e50a9a4/ Log: s/func_code/__code__ diff --git a/pypy/objspace/std/test/test_proxy_function.py b/pypy/objspace/std/test/test_proxy_function.py --- a/pypy/objspace/std/test/test_proxy_function.py +++ b/pypy/objspace/std/test/test_proxy_function.py @@ -68,7 +68,7 @@ pass fun = self.get_proxy(f) - assert fun.func_code is f.func_code + assert fun.__code__ is f.__code__ def test_funct_prop_setter_del(self): def f(): From noreply at buildbot.pypy.org Tue Feb 21 16:07:40 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Tue, 21 Feb 2012 16:07:40 +0100 (CET) Subject: [pypy-commit] pypy py3k: don't pop() the __doc__ attribute out of rawdict, else the corresponding Message-ID: <20120221150740.736D78203C@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52736:46769341e0eb Date: 2012-02-21 16:06 +0100 http://bitbucket.org/pypy/pypy/changeset/46769341e0eb/ Log: don't pop() the __doc__ attribute out of rawdict, 
else the corresponding GetSetProperty won't be seen by self.add_entries, and its name won't be set. This ultimately causes test_proxy_function.test_funct_propset_del to fail, because transparent proxies expect properties to have the right name. Until today it worked "by chance", because Function.typedef had the same property under two names ('func_doc' and '__doc__'), thus it got a name anyway even if '__doc__' was popped. diff --git a/pypy/interpreter/typedef.py b/pypy/interpreter/typedef.py --- a/pypy/interpreter/typedef.py +++ b/pypy/interpreter/typedef.py @@ -24,7 +24,7 @@ self.bases = bases self.hasdict = '__dict__' in rawdict self.weakrefable = '__weakref__' in rawdict - self.doc = rawdict.pop('__doc__', None) + self.doc = rawdict.get('__doc__', None) for base in bases: self.hasdict |= base.hasdict self.weakrefable |= base.weakrefable From noreply at buildbot.pypy.org Tue Feb 21 16:07:41 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Tue, 21 Feb 2012 16:07:41 +0100 (CET) Subject: [pypy-commit] pypy py3k: merge heads Message-ID: <20120221150741.CCF1B8203C@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52737:4224a1520aa4 Date: 2012-02-21 16:06 +0100 http://bitbucket.org/pypy/pypy/changeset/4224a1520aa4/ Log: merge heads diff --git a/pypy/interpreter/astcompiler/test/test_compiler.py b/pypy/interpreter/astcompiler/test/test_compiler.py --- a/pypy/interpreter/astcompiler/test/test_compiler.py +++ b/pypy/interpreter/astcompiler/test/test_compiler.py @@ -528,7 +528,7 @@ else: # line 5 if 1: pass # line 6 import dis - co = ireturn_example.func_code + co = ireturn_example.__code__ linestarts = list(dis.findlinestarts(co)) addrreturn = linestarts[-1][0] x = [addrreturn == (len(co.co_code) - 4)] diff --git a/pypy/interpreter/test/test_appinterp.py b/pypy/interpreter/test/test_appinterp.py --- a/pypy/interpreter/test/test_appinterp.py +++ b/pypy/interpreter/test/test_appinterp.py @@ -29,7 +29,7 @@ app = appdef("""app(x,y): return x + y """) - assert app.func_name == 'app' + assert app.__name__ == 'app' w_result = app(space, space.wrap(41), space.wrap(1)) assert space.eq_w(w_result, space.wrap(42)) @@ -37,7 +37,7 @@ app = appdef("""app(x,y=1): return x + y """) - assert app.func_name == 'app' + assert app.__name__ == 'app' w_result = app(space, space.wrap(41)) assert space.eq_w(w_result, space.wrap(42)) @@ -59,7 +59,7 @@ app = appdef("""app(): return 42 """) - assert app.func_name == 'app' + assert app.__name__ == 'app' w_result = app(space) assert space.eq_w(w_result, space.wrap(42)) diff --git a/pypy/interpreter/test/test_compiler.py b/pypy/interpreter/test/test_compiler.py --- a/pypy/interpreter/test/test_compiler.py +++ b/pypy/interpreter/test/test_compiler.py @@ -381,8 +381,8 @@ (4 and 5): def g(): "line 6" - fline = f.func_code.co_firstlineno - gline = g.func_code.co_firstlineno + fline = f.__code__.co_firstlineno + gline = g.__code__.co_firstlineno ''')) code = self.compiler.compile(snippet, '', 'exec', 0) space = self.space @@ -400,7 +400,7 @@ @foo # line 4 def f(): # line 5 pass # line 6 - fline = f.func_code.co_firstlineno + fline = f.__code__.co_firstlineno ''')) code = self.compiler.compile(snippet, '', 'exec', 0) space = self.space @@ -766,7 +766,7 @@ """ ns = {} exec(source, ns) - code = ns['f'].func_code + code = ns['f'].__code__ import dis, sys from io import StringIO s = StringIO() @@ -873,7 +873,7 @@ """ ns = {} exec(source, ns) - code = ns['_f'].func_code + code = ns['_f'].__code__ import sys, dis from io import StringIO @@ -893,7 +893,7 
@@ """ ns = {} exec(source, ns) - code = ns['_f'].func_code + code = ns['_f'].__code__ import sys, dis from io import StringIO diff --git a/pypy/interpreter/test/test_descrtypecheck.py b/pypy/interpreter/test/test_descrtypecheck.py --- a/pypy/interpreter/test/test_descrtypecheck.py +++ b/pypy/interpreter/test/test_descrtypecheck.py @@ -5,11 +5,11 @@ def test_getsetprop_get(self): def f(): pass - getter = type(f).__dict__['func_code'].__get__ + getter = type(f).__dict__['__code__'].__get__ getter = getattr(getter, 'im_func', getter) # neutralizes pypy/cpython diff raises(TypeError, getter, 1, None) def test_func_code_get(self): def f(): pass - raises(TypeError, type(f).func_code.__get__,1) + raises(TypeError, type(f).__code__.__get__,1) diff --git a/pypy/interpreter/test/test_eval.py b/pypy/interpreter/test/test_eval.py --- a/pypy/interpreter/test/test_eval.py +++ b/pypy/interpreter/test/test_eval.py @@ -7,7 +7,7 @@ def setup_method(self, method): def c(x, y, *args): pass - code = PyCode._from_code(self.space, c.func_code) + code = PyCode._from_code(self.space, c.__code__) class ConcreteFastscopeFrame(Frame): diff --git a/pypy/interpreter/test/test_function.py b/pypy/interpreter/test/test_function.py --- a/pypy/interpreter/test/test_function.py +++ b/pypy/interpreter/test/test_function.py @@ -682,5 +682,5 @@ app_g = gateway.interp2app_temp(g) space = self.space w_g = space.wrap(app_g) - w_defs = space.getattr(w_g, space.wrap("func_defaults")) + w_defs = space.getattr(w_g, space.wrap("__defaults__")) assert space.is_w(w_defs, space.w_None) diff --git a/pypy/interpreter/test/test_gateway.py b/pypy/interpreter/test/test_gateway.py --- a/pypy/interpreter/test/test_gateway.py +++ b/pypy/interpreter/test/test_gateway.py @@ -715,7 +715,7 @@ class X(object): def __init__(self, **kw): pass - clash = type.__call__.func_code.co_varnames[0] + clash = type.__call__.__code__.co_varnames[0] X(**{clash: 33}) type.__call__(X, **{clash: 33}) @@ -724,28 +724,28 @@ class X(object): def __init__(self, **kw): pass - clash = object.__new__.func_code.co_varnames[0] + clash = object.__new__.__code__.co_varnames[0] X(**{clash: 33}) object.__new__(X, **{clash: 33}) def test_dict_new(self): - clash = dict.__new__.func_code.co_varnames[0] + clash = dict.__new__.__code__.co_varnames[0] dict(**{clash: 33}) dict.__new__(dict, **{clash: 33}) def test_dict_init(self): d = {} - clash = dict.__init__.func_code.co_varnames[0] + clash = dict.__init__.__code__.co_varnames[0] d.__init__(**{clash: 33}) dict.__init__(d, **{clash: 33}) def test_dict_update(self): d = {} - clash = dict.update.func_code.co_varnames[0] + clash = dict.update.__code__.co_varnames[0] d.update(**{clash: 33}) dict.update(d, **{clash: 33}) diff --git a/pypy/interpreter/test/test_generator.py b/pypy/interpreter/test/test_generator.py --- a/pypy/interpreter/test/test_generator.py +++ b/pypy/interpreter/test/test_generator.py @@ -17,7 +17,7 @@ yield 1 assert g.gi_running g = f() - assert g.gi_code is f.func_code + assert g.gi_code is f.__code__ assert g.__name__ == 'f' assert g.gi_frame is not None assert not g.gi_running @@ -26,7 +26,7 @@ raises(StopIteration, next, g) assert not g.gi_running assert g.gi_frame is None - assert g.gi_code is f.func_code + assert g.gi_code is f.__code__ assert g.__name__ == 'f' def test_generator3(self): diff --git a/pypy/interpreter/test/test_nestedscope.py b/pypy/interpreter/test/test_nestedscope.py --- a/pypy/interpreter/test/test_nestedscope.py +++ b/pypy/interpreter/test/test_nestedscope.py @@ -66,7 +66,7 @@ return f g = 
f(10) - assert g.func_closure[0].cell_contents == 10 + assert g.__closure__[0].cell_contents == 10 def test_empty_cell_contents(self): @@ -77,7 +77,7 @@ x = 1 g = f() - raises(ValueError, "g.func_closure[0].cell_contents") + raises(ValueError, "g.__closure__[0].cell_contents") def test_compare_cells(self): def f(n): @@ -87,8 +87,8 @@ return x + y return f - g0 = f(0).func_closure[0] - g1 = f(1).func_closure[0] + g0 = f(0).__closure__[0] + g1 = f(1).__closure__[0] assert cmp(g0, g1) == -1 def test_leaking_class_locals(self): diff --git a/pypy/interpreter/test/test_pyframe.py b/pypy/interpreter/test/test_pyframe.py --- a/pypy/interpreter/test/test_pyframe.py +++ b/pypy/interpreter/test/test_pyframe.py @@ -36,7 +36,7 @@ import sys f = sys._getframe() return f.f_code - assert g() is g.func_code + assert g() is g.__code__ def test_f_trace_del(self): import sys @@ -52,7 +52,7 @@ y = f.f_lineno z = f.f_lineno return [x, y, z] - origin = g.func_code.co_firstlineno + origin = g.__code__.co_firstlineno assert g() == [origin+3, origin+4, origin+5] def test_f_lineno_set(self): @@ -457,7 +457,7 @@ len(seen) # take one line del f.f_trace len(seen) # take one line - firstline = set_the_trace.func_code.co_firstlineno + firstline = set_the_trace.__code__.co_firstlineno assert seen == [(1, f, firstline + 6, 'line', None), (1, f, firstline + 7, 'line', None), (1, f, firstline + 8, 'line', None)] diff --git a/pypy/interpreter/test/test_typedef.py b/pypy/interpreter/test/test_typedef.py --- a/pypy/interpreter/test/test_typedef.py +++ b/pypy/interpreter/test/test_typedef.py @@ -313,20 +313,22 @@ cls.w_path = cls.space.wrap(str(path)) def test_destructor(self): - import gc, os + import gc, array seen = [] - class MyFile(file): + class MyArray(array.array): def __del__(self): + # here we check that we can still access the array, i.e. 
that + # the interp-level __del__ has not been called yet seen.append(10) - seen.append(os.lseek(self.fileno(), 2, 0)) - f = MyFile(self.path, 'r') - fd = f.fileno() - seen.append(os.lseek(fd, 5, 0)) - del f + seen.append(self[0]) + a = MyArray('i') + a.append(42) + seen.append(a[0]) + del a gc.collect(); gc.collect(); gc.collect() lst = seen[:] - assert lst == [5, 10, 2] - raises(OSError, os.lseek, fd, 7, 0) + print(lst) + assert lst == [42, 10, 42] def test_method_attrs(self): import sys diff --git a/pypy/interpreter/test/test_zpy.py b/pypy/interpreter/test/test_zpy.py --- a/pypy/interpreter/test/test_zpy.py +++ b/pypy/interpreter/test/test_zpy.py @@ -11,6 +11,12 @@ argslist = map(str, args) popen = subprocess.Popen(argslist, stdout=subprocess.PIPE) stdout, stderr = popen.communicate() + print '--- stdout ---' + print stdout + print + print '--- stderr ---' + print stderr + print return stdout @@ -18,19 +24,19 @@ """Ensures sys.executable points to the py.py script""" # TODO : watch out for spaces/special chars in pypypath output = run(sys.executable, pypypath, - "-c", "import sys;print sys.executable") + "-c", "import sys;print(sys.executable)") assert output.splitlines()[-1] == pypypath def test_special_names(): """Test the __name__ and __file__ special global names""" - cmd = "print __name__; print '__file__' in globals()" + cmd = "print(__name__); print('__file__' in globals())" output = run(sys.executable, pypypath, '-c', cmd) assert output.splitlines()[-2] == '__main__' assert output.splitlines()[-1] == 'False' tmpfilepath = str(udir.join("test_py_script_1.py")) tmpfile = file( tmpfilepath, "w" ) - tmpfile.write("print __name__; print __file__\n") + tmpfile.write("print(__name__); print(__file__)\n") tmpfile.close() output = run(sys.executable, pypypath, tmpfilepath) @@ -41,22 +47,22 @@ """Some tests on argv""" # test 1 : no arguments output = run(sys.executable, pypypath, - "-c", "import sys;print sys.argv") + "-c", "import sys;print(sys.argv)") assert output.splitlines()[-1] == str(['-c']) # test 2 : some arguments after output = run(sys.executable, pypypath, - "-c", "import sys;print sys.argv", "hello") + "-c", "import sys;print(sys.argv)", "hello") assert output.splitlines()[-1] == str(['-c','hello']) # test 3 : additionnal pypy parameters output = run(sys.executable, pypypath, - "-O", "-c", "import sys;print sys.argv", "hello") + "-O", "-c", "import sys;print(sys.argv)", "hello") assert output.splitlines()[-1] == str(['-c','hello']) SCRIPT_1 = """ import sys -print sys.argv +print(sys.argv) """ def test_scripts(): tmpfilepath = str(udir.join("test_py_script.py")) diff --git a/pypy/interpreter/test/test_zzpickle_and_slow.py b/pypy/interpreter/test/test_zzpickle_and_slow.py --- a/pypy/interpreter/test/test_zzpickle_and_slow.py +++ b/pypy/interpreter/test/test_zzpickle_and_slow.py @@ -22,7 +22,7 @@ if not hasattr(len, 'func_code'): skip("Cannot run this test if builtins have no func_code") import inspect - args, varargs, varkw = inspect.getargs(len.func_code) + args, varargs, varkw = inspect.getargs(len.__code__) assert args == ['obj'] assert varargs is None assert varkw is None @@ -84,7 +84,7 @@ def f(): return 42 import pickle - code = f.func_code + code = f.__code__ pckl = pickle.dumps(code) result = pickle.loads(pckl) assert code == result @@ -131,13 +131,13 @@ import pickle pckl = pickle.dumps(func) result = pickle.loads(pckl) - assert func.func_name == result.func_name - assert func.func_closure == result.func_closure - assert func.func_code == result.func_code - assert 
func.func_defaults == result.func_defaults - assert func.func_dict == result.func_dict - assert func.func_doc == result.func_doc - assert func.func_globals == result.func_globals + assert func.__name__ == result.__name__ + assert func.__closure__ == result.__closure__ + assert func.__code__ == result.__code__ + assert func.__defaults__ == result.__defaults__ + assert func.__dict__ == result.__dict__ + assert func.__doc__ == result.__doc__ + assert func.__globals__ == result.__globals__ def test_pickle_cell(self): def g(): @@ -145,7 +145,7 @@ def f(): x[0] += 1 return x - return f.func_closure[0] + return f.__closure__[0] import pickle cell = g() pckl = pickle.dumps(cell) diff --git a/pypy/interpreter/typedef.py b/pypy/interpreter/typedef.py --- a/pypy/interpreter/typedef.py +++ b/pypy/interpreter/typedef.py @@ -772,19 +772,13 @@ __repr__ = interp2app(Function.descr_function_repr, descrmismatch='__repr__'), __reduce__ = interp2app(Function.descr_function__reduce__), __setstate__ = interp2app(Function.descr_function__setstate__), - func_code = getset_func_code, - func_doc = getset_func_doc, - func_name = getset_func_name, - func_dict = getset_func_dict, - func_defaults = getset_func_defaults, - func_globals = interp_attrproperty_w('w_func_globals', cls=Function), - func_closure = GetSetProperty( Function.fget_func_closure ), __code__ = getset_func_code, __doc__ = getset_func_doc, __name__ = getset_func_name, __dict__ = getset_func_dict, __defaults__ = getset_func_defaults, __globals__ = interp_attrproperty_w('w_func_globals', cls=Function), + __closure__ = GetSetProperty( Function.fget_func_closure ), __module__ = getset___module__, __weakref__ = make_weakref_descr(Function), ) diff --git a/pypy/module/__builtin__/test/test_descriptor.py b/pypy/module/__builtin__/test/test_descriptor.py --- a/pypy/module/__builtin__/test/test_descriptor.py +++ b/pypy/module/__builtin__/test/test_descriptor.py @@ -259,10 +259,10 @@ assert ff.__get__(0, int)(42) == (int, 42) assert ff.__get__(0)(42) == (int, 42) - assert C.goo.im_self is C - assert D.goo.im_self is D - assert super(D,D).goo.im_self is D - assert super(D,d).goo.im_self is D + assert C.goo.__self__ is C + assert D.goo.__self__ is D + assert super(D,D).goo.__self__ is D + assert super(D,d).goo.__self__ is D assert super(D,D).goo() == (D,) assert super(D,d).goo() == (D,) diff --git a/pypy/module/_collections/app_defaultdict.py b/pypy/module/_collections/app_defaultdict.py --- a/pypy/module/_collections/app_defaultdict.py +++ b/pypy/module/_collections/app_defaultdict.py @@ -25,7 +25,7 @@ def __missing__(self, key): pass # this method is written at interp-level - __missing__.func_code = _collections.__missing__.func_code + __missing__.__code__ = _collections.__missing__.__code__ def __repr__(self, recurse=set()): # XXX not thread-safe, but good enough diff --git a/pypy/module/_collections/test/test_deque.py b/pypy/module/_collections/test/test_deque.py --- a/pypy/module/_collections/test/test_deque.py +++ b/pypy/module/_collections/test/test_deque.py @@ -7,20 +7,20 @@ def test_basics(self): from _collections import deque - d = deque(xrange(-5125, -5000)) - d.__init__(xrange(200)) - for i in xrange(200, 400): + d = deque(range(-5125, -5000)) + d.__init__(range(200)) + for i in range(200, 400): d.append(i) - for i in reversed(xrange(-200, 0)): + for i in reversed(range(-200, 0)): d.appendleft(i) assert list(d) == range(-200, 400) assert len(d) == 600 - left = [d.popleft() for i in xrange(250)] + left = [d.popleft() for i in range(250)] assert left == 
range(-200, 50) assert list(d) == range(50, 400) - right = [d.pop() for i in xrange(250)] + right = [d.pop() for i in range(250)] right.reverse() assert right == range(150, 400) assert list(d) == range(50, 150) @@ -139,9 +139,9 @@ def test_getitem(self): from _collections import deque n = 200 - l = xrange(1000, 1000 + n) + l = range(1000, 1000 + n) d = deque(l) - for j in xrange(-n, n): + for j in range(-n, n): assert d[j] == l[j] raises(IndexError, "d[-n-1]") raises(IndexError, "d[n]") @@ -149,12 +149,12 @@ def test_setitem(self): from _collections import deque n = 200 - d = deque(xrange(n)) - for i in xrange(n): + d = deque(range(n)) + for i in range(n): d[i] = 10 * i - assert list(d) == [10*i for i in xrange(n)] + assert list(d) == [10*i for i in range(n)] l = list(d) - for i in xrange(1-n, 0, -3): + for i in range(1-n, 0, -3): d[i] = 7*i l[i] = 7*i assert list(d) == l @@ -167,7 +167,7 @@ def test_reverse(self): from _collections import deque - d = deque(xrange(1000, 1200)) + d = deque(range(1000, 1200)) d.reverse() assert list(d) == list(reversed(range(1000, 1200))) # @@ -232,7 +232,7 @@ def test_repr(self): from _collections import deque - d = deque(xrange(20)) + d = deque(range(20)) e = eval(repr(d)) assert d == e d.append(d) @@ -244,7 +244,7 @@ def test_roundtrip_iter_init(self): from _collections import deque - d = deque(xrange(200)) + d = deque(range(200)) e = deque(d) assert d is not e assert d == e @@ -288,7 +288,7 @@ def test_reversed(self): from _collections import deque - for s in ('abcd', xrange(200)): + for s in ('abcd', range(200)): assert list(reversed(deque(s))) == list(reversed(s)) def test_free(self): diff --git a/pypy/module/_continuation/interp_continuation.py b/pypy/module/_continuation/interp_continuation.py --- a/pypy/module/_continuation/interp_continuation.py +++ b/pypy/module/_continuation/interp_continuation.py @@ -172,7 +172,7 @@ raise TypeError( "can\'t send non-None value to a just-started continulet") return func(c, *args, **kwds) - return start.func_code + return start.__code__ ''') self.entrypoint_pycode = space.interp_w(PyCode, w_code) self.entrypoint_pycode.hidden_applevel = True diff --git a/pypy/module/_lsprof/test/test_cprofile.py b/pypy/module/_lsprof/test/test_cprofile.py --- a/pypy/module/_lsprof/test/test_cprofile.py +++ b/pypy/module/_lsprof/test/test_cprofile.py @@ -107,8 +107,8 @@ entries = {} for entry in stats: entries[entry.code] = entry - efoo = entries[foo.func_code] - ebar = entries[bar.func_code] + efoo = entries[foo.__code__] + ebar = entries[bar.__code__] assert 0.9 < efoo.totaltime < 2.9 # --- cannot test .inlinetime, because it does not include # --- the time spent doing the call to time.time() diff --git a/pypy/module/pypyjit/test/test_jit_hook.py b/pypy/module/pypyjit/test/test_jit_hook.py --- a/pypy/module/pypyjit/test/test_jit_hook.py +++ b/pypy/module/pypyjit/test/test_jit_hook.py @@ -120,8 +120,8 @@ int_add = elem[3][0] dmp = elem[3][1] assert isinstance(dmp, pypyjit.DebugMergePoint) - assert dmp.pycode is self.f.func_code - assert dmp.greenkey == (self.f.func_code, 0, False) + assert dmp.pycode is self.f.__code__ + assert dmp.greenkey == (self.f.__code__, 0, False) assert dmp.call_depth == 0 assert int_add.name == 'int_add' assert int_add.num == self.int_add_num @@ -132,13 +132,14 @@ assert len(all) == 2 def test_on_compile_exception(self): - import pypyjit, sys, cStringIO + import pypyjit, sys + from io import StringIO def hook(*args): 1/0 pypyjit.set_compile_hook(hook) - s = cStringIO.StringIO() + s = StringIO() prev = 
sys.stderr sys.stderr = s try: @@ -224,9 +225,9 @@ def f(): pass - op = DebugMergePoint([Box(0)], 'repr', 'pypyjit', 2, (f.func_code, 0, 0)) + op = DebugMergePoint([Box(0)], 'repr', 'pypyjit', 2, (f.__code__, 0, 0)) assert op.bytecode_no == 0 - assert op.pycode is f.func_code + assert op.pycode is f.__code__ assert repr(op) == 'repr' assert op.jitdriver_name == 'pypyjit' assert op.num == self.dmp_num diff --git a/pypy/module/test_lib_pypy/test_greenlet.py b/pypy/module/test_lib_pypy/test_greenlet.py --- a/pypy/module/test_lib_pypy/test_greenlet.py +++ b/pypy/module/test_lib_pypy/test_greenlet.py @@ -18,7 +18,7 @@ lst.append(2) g.switch() lst.append(4) - assert lst == range(5) + assert lst == list(range(5)) def test_parent(self): from greenlet import greenlet diff --git a/pypy/objspace/std/test/test_mapdict.py b/pypy/objspace/std/test/test_mapdict.py --- a/pypy/objspace/std/test/test_mapdict.py +++ b/pypy/objspace/std/test/test_mapdict.py @@ -655,7 +655,7 @@ "objspace.opcodes.CALL_METHOD": True}) # def check(space, w_func, name): - w_code = space.getattr(w_func, space.wrap('func_code')) + w_code = space.getattr(w_func, space.wrap('__code__')) nameindex = map(space.str_w, w_code.co_names_w).index(name) entry = w_code._mapdict_caches[nameindex] entry.failure_counter = 0 diff --git a/pypy/objspace/std/test/test_proxy_function.py b/pypy/objspace/std/test/test_proxy_function.py --- a/pypy/objspace/std/test/test_proxy_function.py +++ b/pypy/objspace/std/test/test_proxy_function.py @@ -68,7 +68,7 @@ pass fun = self.get_proxy(f) - assert fun.func_code is f.func_code + assert fun.__code__ is f.__code__ def test_funct_prop_setter_del(self): def f(): From noreply at buildbot.pypy.org Tue Feb 21 16:48:53 2012 From: noreply at buildbot.pypy.org (fijal) Date: Tue, 21 Feb 2012 16:48:53 +0100 (CET) Subject: [pypy-commit] pypy backend-vector-ops: bleh remove leftovers Message-ID: <20120221154853.CE39F8203C@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: backend-vector-ops Changeset: r52738:11d8122c7d7a Date: 2012-02-21 08:48 -0700 http://bitbucket.org/pypy/pypy/changeset/11d8122c7d7a/ Log: bleh remove leftovers diff --git a/pypy/jit/metainterp/optimizeopt/vectorize.py b/pypy/jit/metainterp/optimizeopt/vectorize.py --- a/pypy/jit/metainterp/optimizeopt/vectorize.py +++ b/pypy/jit/metainterp/optimizeopt/vectorize.py @@ -128,7 +128,6 @@ if oopspec == EffectInfo.OS_ASSERT_ALIGNED: index = self.getvalue(op.getarg(2)) self.tracked_indexes[index] = TrackIndex(index, 0) - self.emit_operation(op) else: self.optimize_default(op) From noreply at buildbot.pypy.org Tue Feb 21 16:56:15 2012 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Tue, 21 Feb 2012 16:56:15 +0100 (CET) Subject: [pypy-commit] pypy default: a failing test Message-ID: <20120221155615.247DA8203C@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: Changeset: r52739:4d956cf97a10 Date: 2012-02-21 10:55 -0500 http://bitbucket.org/pypy/pypy/changeset/4d956cf97a10/ Log: a failing test diff --git a/pypy/module/micronumpy/test/test_dtypes.py b/pypy/module/micronumpy/test/test_dtypes.py --- a/pypy/module/micronumpy/test/test_dtypes.py +++ b/pypy/module/micronumpy/test/test_dtypes.py @@ -371,6 +371,8 @@ assert type(a[1]) is numpy.float64 assert numpy.dtype(float).type is numpy.float64 + assert "{}".format(numpy.float64(3)) == "3.0" + assert numpy.float64(2.0) == 2.0 assert numpy.float64('23.4') == numpy.float64(23.4) raises(ValueError, numpy.float64, '23.2df') From noreply at buildbot.pypy.org Tue Feb 21 17:00:02 2012 From: noreply 
at buildbot.pypy.org (fijal) Date: Tue, 21 Feb 2012 17:00:02 +0100 (CET) Subject: [pypy-commit] pypy backend-vector-ops: OneDimIterator has a promoted step. skeptical Message-ID: <20120221160002.4CC658203C@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: backend-vector-ops Changeset: r52740:aaa9c4ef0367 Date: 2012-02-21 08:59 -0700 http://bitbucket.org/pypy/pypy/changeset/aaa9c4ef0367/ Log: OneDimIterator has a promoted step. skeptical diff --git a/pypy/module/micronumpy/interp_iter.py b/pypy/module/micronumpy/interp_iter.py --- a/pypy/module/micronumpy/interp_iter.py +++ b/pypy/module/micronumpy/interp_iter.py @@ -173,7 +173,7 @@ arr = instantiate(OneDimIterator) arr.size = self.size arr.step = self.step - arr.offset = self.offset + self.step + arr.offset = self.offset + jit.promote(self.step) return arr def done(self): From noreply at buildbot.pypy.org Tue Feb 21 18:01:04 2012 From: noreply at buildbot.pypy.org (arigo) Date: Tue, 21 Feb 2012 18:01:04 +0100 (CET) Subject: [pypy-commit] pypy default: Comment (derived from pypy-dev). Message-ID: <20120221170104.026568203C@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r52741:48ef6cd6e2df Date: 2012-02-21 18:00 +0100 http://bitbucket.org/pypy/pypy/changeset/48ef6cd6e2df/ Log: Comment (derived from pypy-dev). diff --git a/pypy/objspace/std/dictmultiobject.py b/pypy/objspace/std/dictmultiobject.py --- a/pypy/objspace/std/dictmultiobject.py +++ b/pypy/objspace/std/dictmultiobject.py @@ -143,6 +143,10 @@ return result def popitem(self, w_dict): + # this is a bad implementation: if we call popitem() repeatedly, + # it ends up taking n**2 time, because the next() calls below + # will take longer and longer. But all interesting strategies + # provide a better one. space = self.space iterator = self.iter(w_dict) w_key, w_value = iterator.next() From noreply at buildbot.pypy.org Tue Feb 21 18:03:13 2012 From: noreply at buildbot.pypy.org (bivab) Date: Tue, 21 Feb 2012 18:03:13 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: (edelsohn, bivab) remove r2 and r13 from the list of volatile registers. Message-ID: <20120221170313.D5E0F8203C@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: ppc-jit-backend Changeset: r52742:16239b2ad2c1 Date: 2012-02-21 08:37 -0800 http://bitbucket.org/pypy/pypy/changeset/16239b2ad2c1/ Log: (edelsohn, bivab) remove r2 and r13 from the list of volatile registers. 
r2 is persisted around calls anyway and r13 can be ignored diff --git a/pypy/jit/backend/ppc/register.py b/pypy/jit/backend/ppc/register.py --- a/pypy/jit/backend/ppc/register.py +++ b/pypy/jit/backend/ppc/register.py @@ -14,7 +14,8 @@ NONVOLATILES = [r14, r15, r16, r17, r18, r19, r20, r21, r22, r23, r24, r25, r26, r27, r28, r29, r30, r31] -VOLATILES = [r0, r2, r3, r4, r5, r6, r7, r8, r9, r10, r11, r12, r13] +VOLATILES = [r0, r3, r4, r5, r6, r7, r8, r9, r10, r11, r12] +# volatile r2 is persisted around calls and r13 can be ignored NONVOLATILES_FLOAT = [f14, f15, f16, f17, f18, f19, f20, f21, f22, f23, f24, f25, f26, f27, f28, f29, f30, f31] From noreply at buildbot.pypy.org Tue Feb 21 18:03:15 2012 From: noreply at buildbot.pypy.org (bivab) Date: Tue, 21 Feb 2012 18:03:15 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: create a minimal frame malloc_slowpath and save sp and lr to the corresponding slots Message-ID: <20120221170315.12F6D8203C@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: ppc-jit-backend Changeset: r52743:26cf1b59efdb Date: 2012-02-21 09:02 -0800 http://bitbucket.org/pypy/pypy/changeset/26cf1b59efdb/ Log: create a minimal frame malloc_slowpath and save sp and lr to the corresponding slots diff --git a/pypy/jit/backend/ppc/ppc_assembler.py b/pypy/jit/backend/ppc/ppc_assembler.py --- a/pypy/jit/backend/ppc/ppc_assembler.py +++ b/pypy/jit/backend/ppc/ppc_assembler.py @@ -302,14 +302,15 @@ for _ in range(6): mc.write32(0) frame_size = (# add space for floats later - + WORD # Link Register + BACKCHAIN_SIZE * WORD) - mc.subi(r.SP.value, r.SP.value, frame_size) - mc.mflr(r.SCRATCH.value) if IS_PPC_32: - mc.stw(r.SCRATCH.value, r.SP.value, 0) + mc.stwu(r.SP.value, r.SP.value, -frame_size) + mc.mflr(r.SCRATCH.value) + mc.stw(r.SCRATCH.value, r.SP.value, frame_size + WORD) else: - mc.std(r.SCRATCH.value, r.SP.value, 0) + mc.stdu(r.SP.value, r.SP.value, -frame_size) + mc.mflr(r.SCRATCH.value) + mc.std(r.SCRATCH.value, r.SP.value, frame_size + 2 * WORD) # managed volatiles are saved below if self.cpu.supports_floats: assert 0, "make sure to save floats here" @@ -330,9 +331,13 @@ mc.load_imm(r.r4, nursery_free_adr) mc.load(r.r4.value, r.r4.value, 0) - mc.load(r.SCRATCH.value, r.SP.value, 0) - mc.mtlr(r.SCRATCH.value) # restore LR - mc.addi(r.SP.value, r.SP.value, frame_size) # restore old SP + if IS_PPC_32: + ofs = WORD + else: + ofs = WORD * 2 + mc.load(r.SCRATCH.value, r.SP.value, frame_size + ofs) + mc.mtlr(r.SCRATCH.value) + mc.addi(r.SP.value, r.SP.value, frame_size) mc.blr() # if r3 == 0 we skip the return above and jump to the exception path From noreply at buildbot.pypy.org Tue Feb 21 18:47:59 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Tue, 21 Feb 2012 18:47:59 +0100 (CET) Subject: [pypy-commit] pypy py3k: s/func_code/__code__ Message-ID: <20120221174759.55D098203C@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52744:6b61991afc70 Date: 2012-02-21 16:11 +0100 http://bitbucket.org/pypy/pypy/changeset/6b61991afc70/ Log: s/func_code/__code__ diff --git a/pypy/translator/test/test_geninterp.py b/pypy/translator/test/test_geninterp.py --- a/pypy/translator/test/test_geninterp.py +++ b/pypy/translator/test/test_geninterp.py @@ -38,7 +38,7 @@ snippet_ad = """if 1: def import_func(): import copyreg - return copyreg._reconstructor.func_code.co_name + return copyreg._reconstructor.__code__.co_name def import_sys_func(): import sys From noreply at buildbot.pypy.org Tue Feb 21 19:49:02 2012 From: noreply at buildbot.pypy.org 
(fijal) Date: Tue, 21 Feb 2012 19:49:02 +0100 (CET) Subject: [pypy-commit] pypy backend-vector-ops: fix the assembler names Message-ID: <20120221184902.D7C8F8203C@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: backend-vector-ops Changeset: r52745:7358ce540e67 Date: 2012-02-21 11:48 -0700 http://bitbucket.org/pypy/pypy/changeset/7358ce540e67/ Log: fix the assembler names diff --git a/pypy/tool/jitlogparser/parser.py b/pypy/tool/jitlogparser/parser.py --- a/pypy/tool/jitlogparser/parser.py +++ b/pypy/tool/jitlogparser/parser.py @@ -387,7 +387,7 @@ m = re.search('guard \d+', comm) name = m.group(0) else: - name = comm[2:comm.find(':')-1] + name = " ".join(comm[2:].split(" ", 2)[:2]) if name in dumps: bname, start_ofs, dump = dumps[name] loop.force_asm = (lambda dump=dump, start_ofs=start_ofs, From noreply at buildbot.pypy.org Tue Feb 21 22:42:33 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Tue, 21 Feb 2012 22:42:33 +0100 (CET) Subject: [pypy-commit] pypy sepcomp2: Start framework for a separate compilation. Message-ID: <20120221214233.A2CEF8203C@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: sepcomp2 Changeset: r52746:b97ceb68c086 Date: 2012-02-21 22:41 +0100 http://bitbucket.org/pypy/pypy/changeset/b97ceb68c086/ Log: Start framework for a separate compilation. diff --git a/pypy/translator/c/exportinfo.py b/pypy/translator/c/exportinfo.py new file mode 100644 --- /dev/null +++ b/pypy/translator/c/exportinfo.py @@ -0,0 +1,134 @@ +from pypy.annotation import description +from pypy.rpython.typesystem import getfunctionptr +from pypy.translator.tool.cbuild import ExternalCompilationInfo +from pypy.rpython.lltypesystem import lltype, rffi +import py +import sys +import types + +class export(object): + """decorator to mark a function as exported by a shared module. + Can be used with a signature:: + @export(float, float) + def f(x, y): + return x + y + or without any argument at all:: + @export + def f(x, y): + return x + y + in which case the function must be used somewhere else, which will + trigger its annotation.""" + + argtypes = None + namespace = None + + def __new__(cls, *args, **kwds): + if len(args) == 1 and isinstance(args[0], types.FunctionType): + func = args[0] + decorated = export()(func) + del decorated.argtypes + return decorated + return object.__new__(cls) + + def __init__(self, *args, **kwds): + self.argtypes = args + self.namespace = kwds.pop('namespace', None) + if kwds: + raise TypeError("unexpected keyword arguments: %s" % kwds.keys()) + + def __call__(self, func): + func.exported = True + if self.argtypes is not None: + func.argtypes = self.argtypes + if self.namespace is not None: + func.namespace = self.namespace + return func + + +class ModuleExportInfo: + """Translates and builds a library, and returns an 'import Module' + which can be used in another translation. + + Using this object will generate external calls to the low-level + functions. 
+ """ + def __init__(self): + self.functions = {} + + def add_function(self, name, func): + """Adds a function to export.""" + self.functions[name] = func + + def annotate(self, annotator): + """Annotate all exported functions.""" + bk = annotator.bookkeeper + + # annotate functions with signatures + for funcname, func in self.functions.items(): + if hasattr(func, 'argtypes'): + annotator.build_types(func, func.argtypes, + complete_now=False) + annotator.complete() + + def get_lowlevel_functions(self, annotator): + """Builds a map of low_level objects.""" + bk = annotator.bookkeeper + + exported_funcptr = {} + for name, item in self.functions.items(): + desc = bk.getdesc(item) + if isinstance(desc, description.FunctionDesc): + graph = desc.getuniquegraph() + funcptr = getfunctionptr(graph) + else: + raise NotImplementedError + + exported_funcptr[name] = funcptr + return exported_funcptr + + def make_import_module(self, builder): + """Builds an object with all exported functions.""" + rtyper = builder.db.translator.rtyper + + exported_funcptr = self.get_lowlevel_functions( + builder.translator.annotator) + # Map exported functions to the names given by the translator. + node_names = dict( + (funcname, builder.db.get(funcptr)) + for funcname, funcptr in exported_funcptr.items()) + + # Declarations of functions defined in the first module. + forwards = [] + for node in builder.db.globalcontainers(): + if node.nodekind == 'func' and node.name in node_names.values(): + forwards.append('\n'.join(node.forward_declaration())) + + so_name = py.path.local(builder.so_name) + + if sys.platform == 'win32': + libraries = [so_name.purebasename] + else: + libraries = [so_name.purebasename[3:]] + + import_eci = ExternalCompilationInfo( + libraries=libraries, + library_dirs=[so_name.dirname], + post_include_bits=forwards, + ) + class Module(object): + __file__ = builder.so_name + mod = Module() + for funcname, funcptr in exported_funcptr.items(): + import_name = node_names[funcname] + func = make_llexternal_function(import_name, funcptr, import_eci) + setattr(mod, funcname, func) + return mod + +def make_llexternal_function(name, funcptr, eci): + functype = lltype.typeOf(funcptr) + imported_func = rffi.llexternal( + name, functype.TO.ARGS, functype.TO.RESULT, + compilation_info=eci, + ) + return imported_func + diff --git a/pypy/translator/c/test/test_export.py b/pypy/translator/c/test/test_export.py new file mode 100644 --- /dev/null +++ b/pypy/translator/c/test/test_export.py @@ -0,0 +1,53 @@ +from pypy.translator.translator import TranslationContext +from pypy.translator.c.exportinfo import export, ModuleExportInfo +from pypy.translator.c.dlltool import CLibraryBuilder +from pypy.translator.tool.cbuild import ExternalCompilationInfo +import sys + +class TestExportFunctions: + def setup_method(self, method): + self.additional_PATH = [] + + def compile_module(self, modulename, **exports): + export_info = ModuleExportInfo() + for name, obj in exports.items(): + export_info.add_function(name, obj) + + t = TranslationContext() + t.buildannotator() + export_info.annotate(t.annotator) + t.buildrtyper().specialize() + + functions = [(f, None) for f in export_info.functions.values()] + builder = CLibraryBuilder(t, None, config=t.config, + name='lib' + modulename, + functions=functions) + if sys.platform != 'win32' and self.additional_PATH: + builder.merge_eci(ExternalCompilationInfo( + link_extra=['-Wl,-rpath,%s' % path for path in + self.additional_PATH])) + builder.modulename = 'lib' + modulename + 
builder.generate_source() + builder.compile() + + mod = export_info.make_import_module(builder) + + filepath = builder.so_name.dirpath() + self.additional_PATH.append(filepath) + + return mod + + def test_simple_call(self): + # function exported from the 'first' module + @export(float) + def f(x): + return x + 42.3 + firstmodule = self.compile_module("first", f=f) + + # call it from a function compiled in another module + @export() + def g(): + return firstmodule.f(12.0) + secondmodule = self.compile_module("second", g=g) + + assert secondmodule.g() == 54.3 From noreply at buildbot.pypy.org Wed Feb 22 00:00:42 2012 From: noreply at buildbot.pypy.org (arigo) Date: Wed, 22 Feb 2012 00:00:42 +0100 (CET) Subject: [pypy-commit] pypy stm-gc: XXX temporarily disable the method cache, again. Even if it is Message-ID: <20120221230042.14C508203C@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-gc Changeset: r52747:a8e0e462beb9 Date: 2012-02-22 00:00 +0100 http://bitbucket.org/pypy/pypy/changeset/a8e0e462beb9/ Log: XXX temporarily disable the method cache, again. Even if it is thread-local, depending (randomly) on whether there is an update to the cache or not, we need to copy a lot of data or not. This is what makes the performance of even single-thread richards.py vary a lot (simple or double speed). diff --git a/pypy/config/pypyoption.py b/pypy/config/pypyoption.py --- a/pypy/config/pypyoption.py +++ b/pypy/config/pypyoption.py @@ -363,7 +363,8 @@ if level in ['2', '3', 'jit']: config.objspace.opcodes.suggest(CALL_METHOD=True) config.objspace.std.suggest(withrangelist=True) - config.objspace.std.suggest(withmethodcache=True) + if not config.translation.stm: # XXX temporary + config.objspace.std.suggest(withmethodcache=True) config.objspace.std.suggest(withprebuiltchar=True) config.objspace.std.suggest(builtinshortcut=True) config.objspace.std.suggest(optimized_list_getitem=True) From noreply at buildbot.pypy.org Wed Feb 22 00:34:13 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Wed, 22 Feb 2012 00:34:13 +0100 (CET) Subject: [pypy-commit] pypy sepcomp2: Functions can be @exported without specifying argument types, Message-ID: <20120221233413.88E728203C@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: sepcomp2 Changeset: r52748:ce2d7e8a1b42 Date: 2012-02-21 23:28 +0100 http://bitbucket.org/pypy/pypy/changeset/ce2d7e8a1b42/ Log: Functions can be @exported without specifying argument types, as long as they are annotated in some other way. 
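(As an aside for readers following the sepcomp2 branch: the two calling conventions of the export decorator, and what this changeset adds, are easier to see in a small sketch than in the diff below. The sketch is illustrative only, the function names are invented, and it assumes the exportinfo module introduced in r52746:

    from pypy.translator.c.exportinfo import export

    @export(float, float)    # explicit signature: annotated as (float, float)
    def add(x, y):
        return x + y

    @export                  # bare form: no signature given here, so ...
    def shift(x):
        return x + 1.5

    @export()                # explicit empty signature, takes no arguments
    def entry_point():
        return shift(add(1.0, 2.0))   # ... this call is what annotates shift()

The bare form leaves shift()'s argument types to be inferred from an annotated call site, which is what this changeset's re-annotation with non-constant arguments is meant to keep working instead of constant-folding the call away.)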
diff --git a/pypy/translator/c/exportinfo.py b/pypy/translator/c/exportinfo.py --- a/pypy/translator/c/exportinfo.py +++ b/pypy/translator/c/exportinfo.py @@ -1,4 +1,4 @@ -from pypy.annotation import description +from pypy.annotation import model, description from pypy.rpython.typesystem import getfunctionptr from pypy.translator.tool.cbuild import ExternalCompilationInfo from pypy.rpython.lltypesystem import lltype, rffi @@ -70,6 +70,20 @@ complete_now=False) annotator.complete() + # Ensure that functions without signature are not constant-folded + for funcname, func in self.functions.items(): + if not hasattr(func, 'argtypes'): + # build a list of arguments where constants are erased + newargs = [] + desc = bk.getdesc(func) + if isinstance(desc, description.FunctionDesc): + graph = desc.getuniquegraph() + for arg in graph.startblock.inputargs: + newarg = model.not_const(annotator.binding(arg)) + newargs.append(newarg) + # and reflow + annotator.build_types(func, newargs) + def get_lowlevel_functions(self, annotator): """Builds a map of low_level objects.""" bk = annotator.bookkeeper diff --git a/pypy/translator/c/test/test_export.py b/pypy/translator/c/test/test_export.py --- a/pypy/translator/c/test/test_export.py +++ b/pypy/translator/c/test/test_export.py @@ -7,8 +7,11 @@ class TestExportFunctions: def setup_method(self, method): self.additional_PATH = [] + # Uniquify: use the method name without the 'test' prefix. + self.module_suffix = method.__name__[4:] def compile_module(self, modulename, **exports): + modulename += self.module_suffix export_info = ModuleExportInfo() for name, obj in exports.items(): export_info.add_function(name, obj) @@ -51,3 +54,20 @@ secondmodule = self.compile_module("second", g=g) assert secondmodule.g() == 54.3 + + def test_implied_signature(self): + @export # No explicit signature here. + def f(x): + return x + 1.5 + @export() # This is an explicit signature, with no argument. + def f2(): + f(1.0) + firstmodule = self.compile_module("first", f=f, f2=f2) + + @export() + def g(): + return firstmodule.f(41) + secondmodule = self.compile_module("second", g=g) + + assert secondmodule.g() == 42.5 + From noreply at buildbot.pypy.org Wed Feb 22 00:34:14 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Wed, 22 Feb 2012 00:34:14 +0100 (CET) Subject: [pypy-commit] pypy sepcomp2: Add support for passing RPython instances between modules. Message-ID: <20120221233414.BB1738203C@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: sepcomp2 Changeset: r52749:162e6879b761 Date: 2012-02-22 00:33 +0100 http://bitbucket.org/pypy/pypy/changeset/162e6879b761/ Log: Add support for passing RPython instances between modules. The constructor is also exported. FIXME: I had to disable a check in the ExceptionTransformer, there are probably objects that we should clean up somehow. 
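(The intended usage of exported classes is exercised by the new test_pass_structure test in the diff below. Condensed, with hypothetical names (Point, distance_x, entry) and with compile_module standing in for the ModuleExportInfo/CLibraryBuilder sequence used in test_export.py, it looks roughly like this:

    from pypy.translator.c.exportinfo import export

    class Point:                      # hypothetical RPython class to share
        @export(float)
        def __init__(self, x):
            self.x = x

    @export(Point, Point)
    def distance_x(a, b):             # exported function taking instances
        return a.x - b.x

    firstmodule = compile_module("first", distance_x=distance_x, Point=Point)

    Point2 = firstmodule.Point        # low-level struct pointer type, as seen
                                      # by the second module

    @export()
    def entry():
        p = Point2(4.0)               # routed through the exported constructor
        q = Point2(1.5)
        return firstmodule.distance_x(p, q)

    secondmodule = compile_module("second", entry=entry)
    assert secondmodule.entry() == 2.5

Note the design choice visible in ClassExportInfo: the class itself is not exported, only a generated __new__Point constructor plus a Controller registered on the low-level struct type, so instantiating firstmodule.Point in the second module becomes a call into the first library.)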
diff --git a/pypy/translator/c/exportinfo.py b/pypy/translator/c/exportinfo.py --- a/pypy/translator/c/exportinfo.py +++ b/pypy/translator/c/exportinfo.py @@ -1,7 +1,13 @@ from pypy.annotation import model, description from pypy.rpython.typesystem import getfunctionptr +from pypy.rpython.lltypesystem import lltype, rffi +from pypy.rpython.controllerentry import ( + Controller, ControllerEntry, SomeControlledInstance) +from pypy.rpython.extregistry import ExtRegistryEntry +from pypy.rlib.objectmodel import instantiate +from pypy.rlib.unroll import unrolling_iterable +from pypy.tool.sourcetools import func_with_new_name from pypy.translator.tool.cbuild import ExternalCompilationInfo -from pypy.rpython.lltypesystem import lltype, rffi import py import sys import types @@ -45,6 +51,52 @@ return func +class ClassExportInfo: + def __init__(self, name, cls): + self.name = name + self.cls = cls + + def make_constructor(self): + self.constructor_name = "__new__%s" % (self.name,) + nbargs = len(self.cls.__init__.argtypes) + args = ', '.join(['arg%d' % d for d in range(nbargs)]) + source = py.code.Source(r""" + def %s(%s): + obj = instantiate(cls) + obj.__init__(%s) + return obj + """ % (self.constructor_name, args, args)) + miniglobals = {'cls': self.cls, 'instantiate': instantiate} + exec source.compile() in miniglobals + constructor = miniglobals[self.constructor_name] + constructor._annspecialcase_ = 'specialize:ll' + constructor._always_inline_ = True + constructor.argtypes = self.cls.__init__.argtypes + return constructor + + def make_repr(self, module, rtyper): + """Returns the class repr, but also installs a Controller that + will intercept all operations on the class.""" + bookkeeper = rtyper.annotator.bookkeeper + classdef = bookkeeper.getuniqueclassdef(self.cls) + classrepr = rtyper.getrepr(model.SomeInstance(classdef)).lowleveltype + STRUCTPTR = classrepr + + constructor = getattr(module, self.constructor_name) + + class ClassController(Controller): + knowntype = STRUCTPTR + + def new(self, *args): + return constructor(*args) + + class Entry(ControllerEntry): + _about_ = STRUCTPTR + _controller_ = ClassController + + return STRUCTPTR + + class ModuleExportInfo: """Translates and builds a library, and returns an 'import Module' which can be used in another translation. 
@@ -54,17 +106,27 @@ """ def __init__(self): self.functions = {} + self.classes = {} def add_function(self, name, func): """Adds a function to export.""" self.functions[name] = func + def add_class(self, name, cls): + """Adds a class to export.""" + self.classes[name] = ClassExportInfo(name, cls) + def annotate(self, annotator): """Annotate all exported functions.""" bk = annotator.bookkeeper + # annotate constructors of exported classes + for name, class_info in self.classes.items(): + constructor = class_info.make_constructor() + self.functions[constructor.__name__] = constructor + # annotate functions with signatures - for funcname, func in self.functions.items(): + for name, func in self.functions.items(): if hasattr(func, 'argtypes'): annotator.build_types(func, func.argtypes, complete_now=False) @@ -136,13 +198,54 @@ import_name = node_names[funcname] func = make_llexternal_function(import_name, funcptr, import_eci) setattr(mod, funcname, func) + for clsname, class_info in self.classes.items(): + structptr = class_info.make_repr(mod, rtyper) + setattr(mod, clsname, structptr) + return mod +def make_ll_import_arg_converter(TARGET): + from pypy.annotation import model + + def convert(x): + UNUSED + + class Entry(ExtRegistryEntry): + _about_ = convert + + def compute_result_annotation(self, s_arg): + if not (isinstance(s_arg, SomeControlledInstance) and + s_arg.s_real_obj.ll_ptrtype == TARGET): + raise TypeError("Expected a proxy for %s" % (TARGET,)) + return model.lltype_to_annotation(TARGET) + + def specialize_call(self, hop): + [v_instance] = hop.inputargs(*hop.args_r) + return hop.genop('force_cast', [v_instance], + resulttype=TARGET) + + return convert +make_ll_import_arg_converter._annspecialcase_ = 'specialize:memo' + def make_llexternal_function(name, funcptr, eci): functype = lltype.typeOf(funcptr) imported_func = rffi.llexternal( name, functype.TO.ARGS, functype.TO.RESULT, compilation_info=eci, ) - return imported_func + ARGS = functype.TO.ARGS + unrolling_ARGS = unrolling_iterable(enumerate(ARGS)) + def wrapper(*args): + real_args = () + for i, TARGET in unrolling_ARGS: + arg = args[i] + if isinstance(TARGET, lltype.Ptr): # XXX more precise check? 
+ arg = make_ll_import_arg_converter(TARGET)(arg) + real_args = real_args + (arg,) + res = imported_func(*real_args) + return res + wrapper._annspecialcase_ = 'specialize:ll' + wrapper._always_inline_ = True + return func_with_new_name(wrapper, name) + diff --git a/pypy/translator/c/src/g_include.h b/pypy/translator/c/src/g_include.h --- a/pypy/translator/c/src/g_include.h +++ b/pypy/translator/c/src/g_include.h @@ -50,8 +50,11 @@ # include "src/ll_strtod.h" #endif +#ifndef PYPY_CPYTHON_EXTENSION +# include "src/allocator.h" +#endif + #ifdef PYPY_STANDALONE -# include "src/allocator.h" # include "src/main.h" #endif diff --git a/pypy/translator/c/src/g_prerequisite.h b/pypy/translator/c/src/g_prerequisite.h --- a/pypy/translator/c/src/g_prerequisite.h +++ b/pypy/translator/c/src/g_prerequisite.h @@ -4,6 +4,7 @@ #ifdef PYPY_STANDALONE +//#ifndef PYPY_CPYTHON_EXTENSION # include "src/commondefs.h" #endif diff --git a/pypy/translator/c/test/test_export.py b/pypy/translator/c/test/test_export.py --- a/pypy/translator/c/test/test_export.py +++ b/pypy/translator/c/test/test_export.py @@ -2,7 +2,9 @@ from pypy.translator.c.exportinfo import export, ModuleExportInfo from pypy.translator.c.dlltool import CLibraryBuilder from pypy.translator.tool.cbuild import ExternalCompilationInfo +from pypy.translator.backendopt.all import backend_optimizations import sys +import types class TestExportFunctions: def setup_method(self, method): @@ -14,12 +16,16 @@ modulename += self.module_suffix export_info = ModuleExportInfo() for name, obj in exports.items(): - export_info.add_function(name, obj) + if isinstance(obj, (type, types.ClassType)): + export_info.add_class(name, obj) + else: + export_info.add_function(name, obj) t = TranslationContext() t.buildannotator() export_info.annotate(t.annotator) t.buildrtyper().specialize() + backend_optimizations(t) functions = [(f, None) for f in export_info.functions.values()] builder = CLibraryBuilder(t, None, config=t.config, @@ -71,3 +77,27 @@ assert secondmodule.g() == 42.5 + def test_pass_structure(self): + class Struct: + @export(float) + def __init__(self, x): + self.x = x + 27.4 + @export(Struct, Struct, int) + def f(s, t, v): + return s.x + t.x + v + firstmodule = self.compile_module("first", f=f, S=Struct) + + S = firstmodule.S + @export() + def g(): + s = S(3.0) + t = S(5.5) + return firstmodule.f(s, t, 7) + secondmodule = self.compile_module("second", g=g) + assert secondmodule.g() == 70.3 + + @export() + def g2(): + # Bad argument type, should not translate + return firstmodule.f(1, 2, 3) + raises(TypeError, self.compile_module, "third", g2=g2) diff --git a/pypy/translator/exceptiontransform.py b/pypy/translator/exceptiontransform.py --- a/pypy/translator/exceptiontransform.py +++ b/pypy/translator/exceptiontransform.py @@ -194,7 +194,8 @@ from the current graph with a special value (False/-1/-1.0/null). Because of the added exitswitch we need an additional block. """ - if hasattr(graph, 'exceptiontransformed'): + # FIXME: Why do we have a graph with an old ExceptionTransform info? 
+ if 0 and hasattr(graph, 'exceptiontransformed'): assert self.same_obj(self.exc_data_ptr, graph.exceptiontransformed) return else: From noreply at buildbot.pypy.org Wed Feb 22 02:16:03 2012 From: noreply at buildbot.pypy.org (fijal) Date: Wed, 22 Feb 2012 02:16:03 +0100 (CET) Subject: [pypy-commit] pypy default: fix the assembler names Message-ID: <20120222011603.CABC38203C@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: Changeset: r52750:b6874c42c2ff Date: 2012-02-21 11:48 -0700 http://bitbucket.org/pypy/pypy/changeset/b6874c42c2ff/ Log: fix the assembler names diff --git a/pypy/tool/jitlogparser/parser.py b/pypy/tool/jitlogparser/parser.py --- a/pypy/tool/jitlogparser/parser.py +++ b/pypy/tool/jitlogparser/parser.py @@ -387,7 +387,7 @@ m = re.search('guard \d+', comm) name = m.group(0) else: - name = comm[2:comm.find(':')-1] + name = " ".join(comm[2:].split(" ", 2)[:2]) if name in dumps: bname, start_ofs, dump = dumps[name] loop.force_asm = (lambda dump=dump, start_ofs=start_ofs, From noreply at buildbot.pypy.org Wed Feb 22 02:16:06 2012 From: noreply at buildbot.pypy.org (fijal) Date: Wed, 22 Feb 2012 02:16:06 +0100 (CET) Subject: [pypy-commit] pypy default: merge Message-ID: <20120222011606.BACA98203C@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: Changeset: r52751:8c669112ced4 Date: 2012-02-21 18:15 -0700 http://bitbucket.org/pypy/pypy/changeset/8c669112ced4/ Log: merge diff --git a/lib_pypy/_ctypes/array.py b/lib_pypy/_ctypes/array.py --- a/lib_pypy/_ctypes/array.py +++ b/lib_pypy/_ctypes/array.py @@ -1,9 +1,9 @@ - +import _ffi import _rawffi from _ctypes.basics import _CData, cdata_from_address, _CDataMeta, sizeof from _ctypes.basics import keepalive_key, store_reference, ensure_objects -from _ctypes.basics import CArgObject +from _ctypes.basics import CArgObject, as_ffi_pointer class ArrayMeta(_CDataMeta): def __new__(self, name, cls, typedict): @@ -211,6 +211,9 @@ def _to_ffi_param(self): return self._get_buffer_value() + def _as_ffi_pointer_(self, ffitype): + return as_ffi_pointer(self, ffitype) + ARRAY_CACHE = {} def create_array_type(base, length): @@ -228,5 +231,6 @@ _type_ = base ) cls = ArrayMeta(name, (Array,), tpdict) + cls._ffiargtype = _ffi.types.Pointer(base.get_ffi_argtype()) ARRAY_CACHE[key] = cls return cls diff --git a/lib_pypy/_ctypes/basics.py b/lib_pypy/_ctypes/basics.py --- a/lib_pypy/_ctypes/basics.py +++ b/lib_pypy/_ctypes/basics.py @@ -230,5 +230,16 @@ } +# called from primitive.py, pointer.py, array.py +def as_ffi_pointer(value, ffitype): + my_ffitype = type(value).get_ffi_argtype() + # for now, we always allow types.pointer, else a lot of tests + # break. 
We need to rethink how pointers are represented, though + if my_ffitype is not ffitype and ffitype is not _ffi.types.void_p: + raise ArgumentError("expected %s instance, got %s" % (type(value), + ffitype)) + return value._get_buffer_value() + + # used by "byref" from _ctypes.pointer import pointer diff --git a/lib_pypy/_ctypes/pointer.py b/lib_pypy/_ctypes/pointer.py --- a/lib_pypy/_ctypes/pointer.py +++ b/lib_pypy/_ctypes/pointer.py @@ -3,7 +3,7 @@ import _ffi from _ctypes.basics import _CData, _CDataMeta, cdata_from_address, ArgumentError from _ctypes.basics import keepalive_key, store_reference, ensure_objects -from _ctypes.basics import sizeof, byref +from _ctypes.basics import sizeof, byref, as_ffi_pointer from _ctypes.array import Array, array_get_slice_params, array_slice_getitem,\ array_slice_setitem @@ -119,14 +119,6 @@ def _as_ffi_pointer_(self, ffitype): return as_ffi_pointer(self, ffitype) -def as_ffi_pointer(value, ffitype): - my_ffitype = type(value).get_ffi_argtype() - # for now, we always allow types.pointer, else a lot of tests - # break. We need to rethink how pointers are represented, though - if my_ffitype is not ffitype and ffitype is not _ffi.types.void_p: - raise ArgumentError("expected %s instance, got %s" % (type(value), - ffitype)) - return value._get_buffer_value() def _cast_addr(obj, _, tp): if not (isinstance(tp, _CDataMeta) and tp._is_pointer_like()): diff --git a/pypy/config/translationoption.py b/pypy/config/translationoption.py --- a/pypy/config/translationoption.py +++ b/pypy/config/translationoption.py @@ -105,7 +105,8 @@ BoolOption("sandbox", "Produce a fully-sandboxed executable", default=False, cmdline="--sandbox", requires=[("translation.thread", False)], - suggests=[("translation.gc", "generation")]), + suggests=[("translation.gc", "generation"), + ("translation.gcrootfinder", "shadowstack")]), BoolOption("rweakref", "The backend supports RPython-level weakrefs", default=True), diff --git a/pypy/doc/cpython_differences.rst b/pypy/doc/cpython_differences.rst --- a/pypy/doc/cpython_differences.rst +++ b/pypy/doc/cpython_differences.rst @@ -313,5 +313,10 @@ implementation detail that shows up because of internal C-level slots that PyPy does not have. +* the ``__dict__`` attribute of new-style classes returns a normal dict, as + opposed to a dict proxy like in CPython. Mutating the dict will change the + type and vice versa. For builtin types, a dictionary will be returned that + cannot be changed (but still looks and behaves like a normal dictionary). + .. include:: _ref.txt diff --git a/pypy/interpreter/pyframe.py b/pypy/interpreter/pyframe.py --- a/pypy/interpreter/pyframe.py +++ b/pypy/interpreter/pyframe.py @@ -60,11 +60,10 @@ self.pycode = code eval.Frame.__init__(self, space, w_globals) self.locals_stack_w = [None] * (code.co_nlocals + code.co_stacksize) - self.nlocals = code.co_nlocals self.valuestackdepth = code.co_nlocals self.lastblock = None make_sure_not_resized(self.locals_stack_w) - check_nonneg(self.nlocals) + check_nonneg(self.valuestackdepth) # if space.config.objspace.honor__builtins__: self.builtin = space.builtin.pick_builtin(w_globals) @@ -144,8 +143,8 @@ def execute_frame(self, w_inputvalue=None, operr=None): """Execute this frame. Main entry point to the interpreter. The optional arguments are there to handle a generator's frame: - w_inputvalue is for generator.send()) and operr is for - generator.throw()). + w_inputvalue is for generator.send() and operr is for + generator.throw(). 
""" # the following 'assert' is an annotation hint: it hides from # the annotator all methods that are defined in PyFrame but @@ -195,7 +194,7 @@ def popvalue(self): depth = self.valuestackdepth - 1 - assert depth >= self.nlocals, "pop from empty value stack" + assert depth >= self.pycode.co_nlocals, "pop from empty value stack" w_object = self.locals_stack_w[depth] self.locals_stack_w[depth] = None self.valuestackdepth = depth @@ -223,7 +222,7 @@ def peekvalues(self, n): values_w = [None] * n base = self.valuestackdepth - n - assert base >= self.nlocals + assert base >= self.pycode.co_nlocals while True: n -= 1 if n < 0: @@ -235,7 +234,8 @@ def dropvalues(self, n): n = hint(n, promote=True) finaldepth = self.valuestackdepth - n - assert finaldepth >= self.nlocals, "stack underflow in dropvalues()" + assert finaldepth >= self.pycode.co_nlocals, ( + "stack underflow in dropvalues()") while True: n -= 1 if n < 0: @@ -267,13 +267,15 @@ # Contrast this with CPython where it's PEEK(-1). index_from_top = hint(index_from_top, promote=True) index = self.valuestackdepth + ~index_from_top - assert index >= self.nlocals, "peek past the bottom of the stack" + assert index >= self.pycode.co_nlocals, ( + "peek past the bottom of the stack") return self.locals_stack_w[index] def settopvalue(self, w_object, index_from_top=0): index_from_top = hint(index_from_top, promote=True) index = self.valuestackdepth + ~index_from_top - assert index >= self.nlocals, "settop past the bottom of the stack" + assert index >= self.pycode.co_nlocals, ( + "settop past the bottom of the stack") self.locals_stack_w[index] = w_object @jit.unroll_safe @@ -320,12 +322,13 @@ else: f_lineno = self.f_lineno - values_w = self.locals_stack_w[self.nlocals:self.valuestackdepth] + nlocals = self.pycode.co_nlocals + values_w = self.locals_stack_w[nlocals:self.valuestackdepth] w_valuestack = maker.slp_into_tuple_with_nulls(space, values_w) w_blockstack = nt([block._get_state_(space) for block in self.get_blocklist()]) w_fastlocals = maker.slp_into_tuple_with_nulls( - space, self.locals_stack_w[:self.nlocals]) + space, self.locals_stack_w[:nlocals]) if self.last_exception is None: w_exc_value = space.w_None w_tb = space.w_None @@ -442,7 +445,7 @@ """Initialize the fast locals from a list of values, where the order is according to self.pycode.signature().""" scope_len = len(scope_w) - if scope_len > self.nlocals: + if scope_len > self.pycode.co_nlocals: raise ValueError, "new fastscope is longer than the allocated area" # don't assign directly to 'locals_stack_w[:scope_len]' to be # virtualizable-friendly @@ -456,7 +459,7 @@ pass def getfastscopelength(self): - return self.nlocals + return self.pycode.co_nlocals def getclosure(self): return None diff --git a/pypy/jit/backend/test/runner_test.py b/pypy/jit/backend/test/runner_test.py --- a/pypy/jit/backend/test/runner_test.py +++ b/pypy/jit/backend/test/runner_test.py @@ -2221,6 +2221,35 @@ print 'step 4 ok' print '-'*79 + def test_guard_not_invalidated_and_label(self): + # test that the guard_not_invalidated reserves enough room before + # the label. 
If it doesn't, then in this example after we invalidate + # the guard, jumping to the label will hit the invalidation code too + cpu = self.cpu + i0 = BoxInt() + faildescr = BasicFailDescr(1) + labeldescr = TargetToken() + ops = [ + ResOperation(rop.GUARD_NOT_INVALIDATED, [], None, descr=faildescr), + ResOperation(rop.LABEL, [i0], None, descr=labeldescr), + ResOperation(rop.FINISH, [i0], None, descr=BasicFailDescr(3)), + ] + ops[0].setfailargs([]) + looptoken = JitCellToken() + self.cpu.compile_loop([i0], ops, looptoken) + # mark as failing + self.cpu.invalidate_loop(looptoken) + # attach a bridge + i2 = BoxInt() + ops = [ + ResOperation(rop.JUMP, [ConstInt(333)], None, descr=labeldescr), + ] + self.cpu.compile_bridge(faildescr, [], ops, looptoken) + # run: must not be caught in an infinite loop + fail = self.cpu.execute_token(looptoken, 16) + assert fail.identifier == 3 + assert self.cpu.get_latest_value_int(0) == 333 + # pure do_ / descr features def test_do_operations(self): diff --git a/pypy/jit/backend/x86/regalloc.py b/pypy/jit/backend/x86/regalloc.py --- a/pypy/jit/backend/x86/regalloc.py +++ b/pypy/jit/backend/x86/regalloc.py @@ -165,7 +165,6 @@ self.jump_target_descr = None self.close_stack_struct = 0 self.final_jump_op = None - self.min_bytes_before_label = 0 def _prepare(self, inputargs, operations, allgcrefs): self.fm = X86FrameManager() @@ -199,8 +198,13 @@ operations = self._prepare(inputargs, operations, allgcrefs) self._update_bindings(arglocs, inputargs) self.param_depth = prev_depths[1] + self.min_bytes_before_label = 0 return operations + def ensure_next_label_is_at_least_at_position(self, at_least_position): + self.min_bytes_before_label = max(self.min_bytes_before_label, + at_least_position) + def reserve_param(self, n): self.param_depth = max(self.param_depth, n) @@ -468,7 +472,11 @@ self.assembler.mc.mark_op(None) # end of the loop def flush_loop(self): - # rare case: if the loop is too short, pad with NOPs + # rare case: if the loop is too short, or if we are just after + # a GUARD_NOT_INVALIDATED, pad with NOPs. Important! This must + # be called to ensure that there are enough bytes produced, + # because GUARD_NOT_INVALIDATED or redirect_call_assembler() + # will maybe overwrite them. mc = self.assembler.mc while mc.get_relative_pos() < self.min_bytes_before_label: mc.NOP() @@ -558,7 +566,15 @@ def consider_guard_no_exception(self, op): self.perform_guard(op, [], None) - consider_guard_not_invalidated = consider_guard_no_exception + def consider_guard_not_invalidated(self, op): + mc = self.assembler.mc + n = mc.get_relative_pos() + self.perform_guard(op, [], None) + assert n == mc.get_relative_pos() + # ensure that the next label is at least 5 bytes farther than + # the current position. Otherwise, when invalidating the guard, + # we would overwrite randomly the next label's position. 
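(Illustration only, not part of the committed patch: the "n + 5" reservation below is presumably sized for the 5-byte near JMP that guard invalidation patches over the code at this position; flush_loop(), shown earlier in this diff, then pads any shortfall with NOPs, roughly

    while mc.get_relative_pos() < self.min_bytes_before_label:
        mc.NOP()

so a following LABEL can never begin inside the bytes that invalidation may later overwrite.)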
+ self.ensure_next_label_is_at_least_at_position(n + 5) def consider_guard_exception(self, op): loc = self.rm.make_sure_var_in_reg(op.getarg(0)) diff --git a/pypy/jit/metainterp/optimizeopt/optimizer.py b/pypy/jit/metainterp/optimizeopt/optimizer.py --- a/pypy/jit/metainterp/optimizeopt/optimizer.py +++ b/pypy/jit/metainterp/optimizeopt/optimizer.py @@ -567,7 +567,7 @@ assert isinstance(descr, compile.ResumeGuardDescr) modifier = resume.ResumeDataVirtualAdder(descr, self.resumedata_memo) try: - newboxes = modifier.finish(self.values, self.pendingfields) + newboxes = modifier.finish(self, self.pendingfields) if len(newboxes) > self.metainterp_sd.options.failargs_limit: raise resume.TagOverflow except resume.TagOverflow: diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py @@ -7760,6 +7760,59 @@ """ self.optimize_loop(ops, expected) + def test_constant_failargs(self): + ops = """ + [p1, i2, i3] + setfield_gc(p1, ConstPtr(myptr), descr=nextdescr) + p16 = getfield_gc(p1, descr=nextdescr) + guard_true(i2) [p16, i3] + jump(p1, i3, i2) + """ + preamble = """ + [p1, i2, i3] + setfield_gc(p1, ConstPtr(myptr), descr=nextdescr) + guard_true(i2) [i3] + jump(p1, i3) + """ + expected = """ + [p1, i3] + guard_true(i3) [] + jump(p1, 1) + """ + self.optimize_loop(ops, expected, preamble) + + def test_issue1048(self): + ops = """ + [p1, i2, i3] + p16 = getfield_gc(p1, descr=nextdescr) + guard_true(i2) [p16] + setfield_gc(p1, ConstPtr(myptr), descr=nextdescr) + jump(p1, i3, i2) + """ + expected = """ + [p1, i3] + guard_true(i3) [] + jump(p1, 1) + """ + self.optimize_loop(ops, expected) + + def test_issue1048_ok(self): + ops = """ + [p1, i2, i3] + p16 = getfield_gc(p1, descr=nextdescr) + call(p16, descr=nonwritedescr) + guard_true(i2) [p16] + setfield_gc(p1, ConstPtr(myptr), descr=nextdescr) + jump(p1, i3, i2) + """ + expected = """ + [p1, i3] + call(ConstPtr(myptr), descr=nonwritedescr) + guard_true(i3) [] + jump(p1, 1) + """ + self.optimize_loop(ops, expected) + class TestLLtype(OptimizeOptTest, LLtypeMixin): pass diff --git a/pypy/jit/metainterp/resume.py b/pypy/jit/metainterp/resume.py --- a/pypy/jit/metainterp/resume.py +++ b/pypy/jit/metainterp/resume.py @@ -182,23 +182,22 @@ # env numbering - def number(self, values, snapshot): + def number(self, optimizer, snapshot): if snapshot is None: return lltype.nullptr(NUMBERING), {}, 0 if snapshot in self.numberings: numb, liveboxes, v = self.numberings[snapshot] return numb, liveboxes.copy(), v - numb1, liveboxes, v = self.number(values, snapshot.prev) + numb1, liveboxes, v = self.number(optimizer, snapshot.prev) n = len(liveboxes)-v boxes = snapshot.boxes length = len(boxes) numb = lltype.malloc(NUMBERING, length) for i in range(length): box = boxes[i] - value = values.get(box, None) - if value is not None: - box = value.get_key_box() + value = optimizer.getvalue(box) + box = value.get_key_box() if isinstance(box, Const): tagged = self.getconst(box) @@ -318,14 +317,14 @@ _, tagbits = untag(tagged) return tagbits == TAGVIRTUAL - def finish(self, values, pending_setfields=[]): + def finish(self, optimizer, pending_setfields=[]): # compute the numbering storage = self.storage # make sure that nobody attached resume data to this guard yet assert not storage.rd_numb snapshot = storage.rd_snapshot assert snapshot is not None # is that true? 
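(Sketch for orientation, not part of the patch: the lines just below switch the resume-data numbering from a plain {box: OptValue} dict to the optimizer itself, so a box the optimizer never tracked still yields a value on demand rather than None. The test_resume.py changes further down emulate this with a FakeOptimizer along these lines, reusing the OptValue class that file already imports:

    class FakeOptimizer(object):
        def __init__(self, values):
            self.values = values

        def getvalue(self, box):
            try:
                value = self.values[box]
            except KeyError:
                # an untracked box gets a fresh OptValue instead of None
                value = self.values[box] = OptValue(box)
            return value

This is the behaviour exercised by the new test_constant_failargs and test_issue1048 tests earlier in this changeset, where a box replaced by a constant shows up only in guard failargs.)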
- numb, liveboxes_from_env, v = self.memo.number(values, snapshot) + numb, liveboxes_from_env, v = self.memo.number(optimizer, snapshot) self.liveboxes_from_env = liveboxes_from_env self.liveboxes = {} storage.rd_numb = numb @@ -341,23 +340,23 @@ liveboxes[i] = box else: assert tagbits == TAGVIRTUAL - value = values[box] + value = optimizer.getvalue(box) value.get_args_for_fail(self) for _, box, fieldbox, _ in pending_setfields: self.register_box(box) self.register_box(fieldbox) - value = values[fieldbox] + value = optimizer.getvalue(fieldbox) value.get_args_for_fail(self) - self._number_virtuals(liveboxes, values, v) + self._number_virtuals(liveboxes, optimizer, v) self._add_pending_fields(pending_setfields) storage.rd_consts = self.memo.consts dump_storage(storage, liveboxes) return liveboxes[:] - def _number_virtuals(self, liveboxes, values, num_env_virtuals): + def _number_virtuals(self, liveboxes, optimizer, num_env_virtuals): # !! 'liveboxes' is a list that is extend()ed in-place !! memo = self.memo new_liveboxes = [None] * memo.num_cached_boxes() @@ -397,7 +396,7 @@ memo.nvholes += length - len(vfieldboxes) for virtualbox, fieldboxes in vfieldboxes.iteritems(): num, _ = untag(self.liveboxes[virtualbox]) - value = values[virtualbox] + value = optimizer.getvalue(virtualbox) fieldnums = [self._gettagged(box) for box in fieldboxes] vinfo = value.make_virtual_info(self, fieldnums) diff --git a/pypy/jit/metainterp/test/test_resume.py b/pypy/jit/metainterp/test/test_resume.py --- a/pypy/jit/metainterp/test/test_resume.py +++ b/pypy/jit/metainterp/test/test_resume.py @@ -18,6 +18,19 @@ rd_virtuals = None rd_pendingfields = None + +class FakeOptimizer(object): + def __init__(self, values): + self.values = values + + def getvalue(self, box): + try: + value = self.values[box] + except KeyError: + value = self.values[box] = OptValue(box) + return value + + def test_tag(): assert tag(3, 1) == rffi.r_short(3<<2|1) assert tag(-3, 2) == rffi.r_short(-3<<2|2) @@ -500,7 +513,7 @@ capture_resumedata(fs, None, [], storage) memo = ResumeDataLoopMemo(FakeMetaInterpStaticData()) modifier = ResumeDataVirtualAdder(storage, memo) - liveboxes = modifier.finish({}) + liveboxes = modifier.finish(FakeOptimizer({})) metainterp = MyMetaInterp() b1t, b2t, b3t = [BoxInt(), BoxPtr(), BoxInt()] @@ -524,7 +537,7 @@ capture_resumedata(fs, [b4], [], storage) memo = ResumeDataLoopMemo(FakeMetaInterpStaticData()) modifier = ResumeDataVirtualAdder(storage, memo) - liveboxes = modifier.finish({}) + liveboxes = modifier.finish(FakeOptimizer({})) metainterp = MyMetaInterp() b1t, b2t, b3t, b4t = [BoxInt(), BoxPtr(), BoxInt(), BoxPtr()] @@ -553,10 +566,10 @@ memo = ResumeDataLoopMemo(FakeMetaInterpStaticData()) modifier = ResumeDataVirtualAdder(storage, memo) - liveboxes = modifier.finish({}) + liveboxes = modifier.finish(FakeOptimizer({})) modifier = ResumeDataVirtualAdder(storage2, memo) - liveboxes2 = modifier.finish({}) + liveboxes2 = modifier.finish(FakeOptimizer({})) metainterp = MyMetaInterp() @@ -617,7 +630,7 @@ memo = ResumeDataLoopMemo(FakeMetaInterpStaticData()) values = {b2: virtual_value(b2, b5, c4)} modifier = ResumeDataVirtualAdder(storage, memo) - liveboxes = modifier.finish(values) + liveboxes = modifier.finish(FakeOptimizer(values)) assert len(storage.rd_virtuals) == 1 assert storage.rd_virtuals[0].fieldnums == [tag(-1, TAGBOX), tag(0, TAGCONST)] @@ -628,7 +641,7 @@ values = {b2: virtual_value(b2, b4, v6), b6: v6} memo.clear_box_virtual_numbers() modifier = ResumeDataVirtualAdder(storage2, memo) - liveboxes2 = 
modifier.finish(values) + liveboxes2 = modifier.finish(FakeOptimizer(values)) assert len(storage2.rd_virtuals) == 2 assert storage2.rd_virtuals[0].fieldnums == [tag(len(liveboxes2)-1, TAGBOX), tag(-1, TAGVIRTUAL)] @@ -674,7 +687,7 @@ memo = ResumeDataLoopMemo(FakeMetaInterpStaticData()) values = {b2: virtual_value(b2, b5, c4)} modifier = ResumeDataVirtualAdder(storage, memo) - liveboxes = modifier.finish(values) + liveboxes = modifier.finish(FakeOptimizer(values)) assert len(storage.rd_virtuals) == 1 assert storage.rd_virtuals[0].fieldnums == [tag(-1, TAGBOX), tag(0, TAGCONST)] @@ -684,7 +697,7 @@ capture_resumedata(fs, None, [], storage2) values[b4] = virtual_value(b4, b6, c4) modifier = ResumeDataVirtualAdder(storage2, memo) - liveboxes = modifier.finish(values) + liveboxes = modifier.finish(FakeOptimizer(values)) assert len(storage2.rd_virtuals) == 2 assert storage2.rd_virtuals[1].fieldnums == storage.rd_virtuals[0].fieldnums assert storage2.rd_virtuals[1] is storage.rd_virtuals[0] @@ -703,7 +716,7 @@ v1.setfield(LLtypeMixin.nextdescr, v2) values = {b1: v1, b2: v2} modifier = ResumeDataVirtualAdder(storage, memo) - liveboxes = modifier.finish(values) + liveboxes = modifier.finish(FakeOptimizer(values)) assert liveboxes == [b3] assert len(storage.rd_virtuals) == 2 assert storage.rd_virtuals[0].fieldnums == [tag(-1, TAGBOX), @@ -776,7 +789,7 @@ memo = ResumeDataLoopMemo(FakeMetaInterpStaticData()) - numb, liveboxes, v = memo.number({}, snap1) + numb, liveboxes, v = memo.number(FakeOptimizer({}), snap1) assert v == 0 assert liveboxes == {b1: tag(0, TAGBOX), b2: tag(1, TAGBOX), @@ -788,7 +801,7 @@ tag(0, TAGBOX), tag(2, TAGINT)] assert not numb.prev.prev - numb2, liveboxes2, v = memo.number({}, snap2) + numb2, liveboxes2, v = memo.number(FakeOptimizer({}), snap2) assert v == 0 assert liveboxes2 == {b1: tag(0, TAGBOX), b2: tag(1, TAGBOX), @@ -813,7 +826,8 @@ return self.virt # renamed - numb3, liveboxes3, v = memo.number({b3: FakeValue(False, c4)}, snap3) + numb3, liveboxes3, v = memo.number(FakeOptimizer({b3: FakeValue(False, c4)}), + snap3) assert v == 0 assert liveboxes3 == {b1: tag(0, TAGBOX), b2: tag(1, TAGBOX)} @@ -825,7 +839,8 @@ env4 = [c3, b4, b1, c3] snap4 = Snapshot(snap, env4) - numb4, liveboxes4, v = memo.number({b4: FakeValue(True, b4)}, snap4) + numb4, liveboxes4, v = memo.number(FakeOptimizer({b4: FakeValue(True, b4)}), + snap4) assert v == 1 assert liveboxes4 == {b1: tag(0, TAGBOX), b2: tag(1, TAGBOX), @@ -837,8 +852,9 @@ env5 = [b1, b4, b5] snap5 = Snapshot(snap4, env5) - numb5, liveboxes5, v = memo.number({b4: FakeValue(True, b4), - b5: FakeValue(True, b5)}, snap5) + numb5, liveboxes5, v = memo.number(FakeOptimizer({b4: FakeValue(True, b4), + b5: FakeValue(True, b5)}), + snap5) assert v == 2 assert liveboxes5 == {b1: tag(0, TAGBOX), b2: tag(1, TAGBOX), @@ -940,7 +956,7 @@ storage = make_storage(b1s, b2s, b3s) memo = ResumeDataLoopMemo(FakeMetaInterpStaticData()) modifier = ResumeDataVirtualAdder(storage, memo) - liveboxes = modifier.finish({}) + liveboxes = modifier.finish(FakeOptimizer({})) assert storage.rd_snapshot is None cpu = MyCPU([]) reader = ResumeDataDirectReader(MyMetaInterp(cpu), storage) @@ -954,14 +970,14 @@ storage = make_storage(b1s, b2s, b3s) memo = ResumeDataLoopMemo(FakeMetaInterpStaticData()) modifier = ResumeDataVirtualAdder(storage, memo) - modifier.finish({}) + modifier.finish(FakeOptimizer({})) assert len(memo.consts) == 2 assert storage.rd_consts is memo.consts b1s, b2s, b3s = [ConstInt(sys.maxint), ConstInt(2**17), ConstInt(-65)] storage2 = 
make_storage(b1s, b2s, b3s) modifier2 = ResumeDataVirtualAdder(storage2, memo) - modifier2.finish({}) + modifier2.finish(FakeOptimizer({})) assert len(memo.consts) == 3 assert storage2.rd_consts is memo.consts @@ -1022,7 +1038,7 @@ val = FakeValue() values = {b1s: val, b2s: val} - liveboxes = modifier.finish(values) + liveboxes = modifier.finish(FakeOptimizer(values)) assert storage.rd_snapshot is None b1t, b3t = [BoxInt(11), BoxInt(33)] newboxes = _resume_remap(liveboxes, [b1_2, b3s], b1t, b3t) @@ -1043,7 +1059,7 @@ storage = make_storage(b1s, b2s, b3s) memo = ResumeDataLoopMemo(FakeMetaInterpStaticData()) modifier = ResumeDataVirtualAdder(storage, memo) - liveboxes = modifier.finish({}) + liveboxes = modifier.finish(FakeOptimizer({})) b2t, b3t = [BoxPtr(demo55o), BoxInt(33)] newboxes = _resume_remap(liveboxes, [b2s, b3s], b2t, b3t) metainterp = MyMetaInterp() @@ -1086,7 +1102,7 @@ values = {b2s: v2, b4s: v4} liveboxes = [] - modifier._number_virtuals(liveboxes, values, 0) + modifier._number_virtuals(liveboxes, FakeOptimizer(values), 0) storage.rd_consts = memo.consts[:] storage.rd_numb = None # resume @@ -1156,7 +1172,7 @@ modifier.register_virtual_fields(b2s, [b4s, c1s]) liveboxes = [] values = {b2s: v2} - modifier._number_virtuals(liveboxes, values, 0) + modifier._number_virtuals(liveboxes, FakeOptimizer(values), 0) dump_storage(storage, liveboxes) storage.rd_consts = memo.consts[:] storage.rd_numb = None @@ -1203,7 +1219,7 @@ v2.setfield(LLtypeMixin.bdescr, OptValue(b4s)) modifier.register_virtual_fields(b2s, [c1s, b4s]) liveboxes = [] - modifier._number_virtuals(liveboxes, {b2s: v2}, 0) + modifier._number_virtuals(liveboxes, FakeOptimizer({b2s: v2}), 0) dump_storage(storage, liveboxes) storage.rd_consts = memo.consts[:] storage.rd_numb = None @@ -1249,7 +1265,7 @@ values = {b4s: v4, b2s: v2} liveboxes = [] - modifier._number_virtuals(liveboxes, values, 0) + modifier._number_virtuals(liveboxes, FakeOptimizer(values), 0) assert liveboxes == [b2s, b4s] or liveboxes == [b4s, b2s] modifier._add_pending_fields([(LLtypeMixin.nextdescr, b2s, b4s, -1)]) storage.rd_consts = memo.consts[:] diff --git a/pypy/module/_demo/test/test_sieve.py b/pypy/module/_demo/test/test_sieve.py new file mode 100644 --- /dev/null +++ b/pypy/module/_demo/test/test_sieve.py @@ -0,0 +1,12 @@ +from pypy.conftest import gettestobjspace + + +class AppTestSieve: + def setup_class(cls): + cls.space = gettestobjspace(usemodules=('_demo',)) + + def test_sieve(self): + import _demo + lst = _demo.sieve(100) + assert lst == [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, + 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97] diff --git a/pypy/module/cpyext/api.py b/pypy/module/cpyext/api.py --- a/pypy/module/cpyext/api.py +++ b/pypy/module/cpyext/api.py @@ -384,6 +384,7 @@ "Dict": "space.w_dict", "Tuple": "space.w_tuple", "List": "space.w_list", + "Set": "space.w_set", "Int": "space.w_int", "Bool": "space.w_bool", "Float": "space.w_float", @@ -434,16 +435,16 @@ ('buf', rffi.VOIDP), ('obj', PyObject), ('len', Py_ssize_t), - # ('itemsize', Py_ssize_t), + ('itemsize', Py_ssize_t), - # ('readonly', lltype.Signed), - # ('ndim', lltype.Signed), - # ('format', rffi.CCHARP), - # ('shape', Py_ssize_tP), - # ('strides', Py_ssize_tP), - # ('suboffets', Py_ssize_tP), - # ('smalltable', rffi.CFixedArray(Py_ssize_t, 2)), - # ('internal', rffi.VOIDP) + ('readonly', lltype.Signed), + ('ndim', lltype.Signed), + ('format', rffi.CCHARP), + ('shape', Py_ssize_tP), + ('strides', Py_ssize_tP), + ('suboffsets', Py_ssize_tP), + #('smalltable', 
rffi.CFixedArray(Py_ssize_t, 2)), + ('internal', rffi.VOIDP) )) @specialize.memo() diff --git a/pypy/module/cpyext/dictobject.py b/pypy/module/cpyext/dictobject.py --- a/pypy/module/cpyext/dictobject.py +++ b/pypy/module/cpyext/dictobject.py @@ -6,6 +6,7 @@ from pypy.module.cpyext.pyobject import RefcountState from pypy.module.cpyext.pyerrors import PyErr_BadInternalCall from pypy.interpreter.error import OperationError +from pypy.rlib.objectmodel import specialize @cpython_api([], PyObject) def PyDict_New(space): @@ -183,11 +184,34 @@ w_item = space.call_method(w_iter, "next") w_key, w_value = space.fixedview(w_item, 2) state = space.fromcache(RefcountState) - pkey[0] = state.make_borrowed(w_dict, w_key) - pvalue[0] = state.make_borrowed(w_dict, w_value) + if pkey: + pkey[0] = state.make_borrowed(w_dict, w_key) + if pvalue: + pvalue[0] = state.make_borrowed(w_dict, w_value) ppos[0] += 1 except OperationError, e: if not e.match(space, space.w_StopIteration): raise return 0 return 1 + + at specialize.memo() +def make_frozendict(space): + return space.appexec([], '''(): + import collections + class FrozenDict(collections.Mapping): + def __init__(self, *args, **kwargs): + self._d = dict(*args, **kwargs) + def __iter__(self): + return iter(self._d) + def __len__(self): + return len(self._d) + def __getitem__(self, key): + return self._d[key] + return FrozenDict''') + + at cpython_api([PyObject], PyObject) +def PyDictProxy_New(space, w_dict): + w_frozendict = make_frozendict(space) + return space.call_function(w_frozendict, w_dict) + diff --git a/pypy/module/cpyext/include/methodobject.h b/pypy/module/cpyext/include/methodobject.h --- a/pypy/module/cpyext/include/methodobject.h +++ b/pypy/module/cpyext/include/methodobject.h @@ -26,6 +26,7 @@ PyObject_HEAD PyMethodDef *m_ml; /* Description of the C function to call */ PyObject *m_self; /* Passed as 'self' arg to the C func, can be NULL */ + PyObject *m_module; /* The __module__ attribute, can be anything */ } PyCFunctionObject; /* Flag passed to newmethodobject */ diff --git a/pypy/module/cpyext/include/object.h b/pypy/module/cpyext/include/object.h --- a/pypy/module/cpyext/include/object.h +++ b/pypy/module/cpyext/include/object.h @@ -131,18 +131,18 @@ /* This is Py_ssize_t so it can be pointed to by strides in simple case.*/ - /* Py_ssize_t itemsize; */ - /* int readonly; */ - /* int ndim; */ - /* char *format; */ - /* Py_ssize_t *shape; */ - /* Py_ssize_t *strides; */ - /* Py_ssize_t *suboffsets; */ + Py_ssize_t itemsize; + int readonly; + int ndim; + char *format; + Py_ssize_t *shape; + Py_ssize_t *strides; + Py_ssize_t *suboffsets; /* static store for shape and strides of mono-dimensional buffers. 
*/ /* Py_ssize_t smalltable[2]; */ - /* void *internal; */ + void *internal; } Py_buffer; diff --git a/pypy/module/cpyext/include/pystate.h b/pypy/module/cpyext/include/pystate.h --- a/pypy/module/cpyext/include/pystate.h +++ b/pypy/module/cpyext/include/pystate.h @@ -10,6 +10,7 @@ typedef struct _ts { PyInterpreterState *interp; + PyObject *dict; /* Stores per-thread state */ } PyThreadState; #define Py_BEGIN_ALLOW_THREADS { \ @@ -24,4 +25,6 @@ enum {PyGILState_LOCKED, PyGILState_UNLOCKED} PyGILState_STATE; +#define PyThreadState_GET() PyThreadState_Get() + #endif /* !Py_PYSTATE_H */ diff --git a/pypy/module/cpyext/include/pythread.h b/pypy/module/cpyext/include/pythread.h --- a/pypy/module/cpyext/include/pythread.h +++ b/pypy/module/cpyext/include/pythread.h @@ -1,6 +1,8 @@ #ifndef Py_PYTHREAD_H #define Py_PYTHREAD_H +#define WITH_THREAD + typedef void *PyThread_type_lock; #define WAIT_LOCK 1 #define NOWAIT_LOCK 0 diff --git a/pypy/module/cpyext/include/structmember.h b/pypy/module/cpyext/include/structmember.h --- a/pypy/module/cpyext/include/structmember.h +++ b/pypy/module/cpyext/include/structmember.h @@ -20,7 +20,7 @@ } PyMemberDef; -/* Types */ +/* Types. These constants are also in structmemberdefs.py. */ #define T_SHORT 0 #define T_INT 1 #define T_LONG 2 @@ -42,9 +42,12 @@ #define T_LONGLONG 17 #define T_ULONGLONG 18 -/* Flags */ +/* Flags. These constants are also in structmemberdefs.py. */ #define READONLY 1 #define RO READONLY /* Shorthand */ +#define READ_RESTRICTED 2 +#define PY_WRITE_RESTRICTED 4 +#define RESTRICTED (READ_RESTRICTED | PY_WRITE_RESTRICTED) #ifdef __cplusplus diff --git a/pypy/module/cpyext/methodobject.py b/pypy/module/cpyext/methodobject.py --- a/pypy/module/cpyext/methodobject.py +++ b/pypy/module/cpyext/methodobject.py @@ -32,6 +32,7 @@ PyObjectFields + ( ('m_ml', lltype.Ptr(PyMethodDef)), ('m_self', PyObject), + ('m_module', PyObject), )) PyCFunctionObject = lltype.Ptr(PyCFunctionObjectStruct) @@ -47,11 +48,13 @@ assert isinstance(w_obj, W_PyCFunctionObject) py_func.c_m_ml = w_obj.ml py_func.c_m_self = make_ref(space, w_obj.w_self) + py_func.c_m_module = make_ref(space, w_obj.w_module) @cpython_api([PyObject], lltype.Void, external=False) def cfunction_dealloc(space, py_obj): py_func = rffi.cast(PyCFunctionObject, py_obj) Py_DecRef(space, py_func.c_m_self) + Py_DecRef(space, py_func.c_m_module) from pypy.module.cpyext.object import PyObject_dealloc PyObject_dealloc(space, py_obj) diff --git a/pypy/module/cpyext/object.py b/pypy/module/cpyext/object.py --- a/pypy/module/cpyext/object.py +++ b/pypy/module/cpyext/object.py @@ -381,6 +381,15 @@ This is the equivalent of the Python expression hash(o).""" return space.int_w(space.hash(w_obj)) + at cpython_api([PyObject], PyObject) +def PyObject_Dir(space, w_o): + """This is equivalent to the Python expression dir(o), returning a (possibly + empty) list of strings appropriate for the object argument, or NULL if there + was an error. 
If the argument is NULL, this is like the Python dir(), + returning the names of the current locals; in this case, if no execution frame + is active then NULL is returned but PyErr_Occurred() will return false.""" + return space.call_function(space.builtin.get('dir'), w_o) + @cpython_api([PyObject, rffi.CCHARPP, Py_ssize_tP], rffi.INT_real, error=-1) def PyObject_AsCharBuffer(space, obj, bufferp, sizep): """Returns a pointer to a read-only memory location usable as @@ -430,6 +439,8 @@ return 0 +PyBUF_WRITABLE = 0x0001 # Copied from object.h + @cpython_api([lltype.Ptr(Py_buffer), PyObject, rffi.VOIDP, Py_ssize_t, lltype.Signed, lltype.Signed], rffi.INT, error=CANNOT_FAIL) def PyBuffer_FillInfo(space, view, obj, buf, length, readonly, flags): @@ -445,6 +456,18 @@ view.c_len = length view.c_obj = obj Py_IncRef(space, obj) + view.c_itemsize = 1 + if flags & PyBUF_WRITABLE: + rffi.setintfield(view, 'c_readonly', 0) + else: + rffi.setintfield(view, 'c_readonly', 1) + rffi.setintfield(view, 'c_ndim', 0) + view.c_format = lltype.nullptr(rffi.CCHARP.TO) + view.c_shape = lltype.nullptr(Py_ssize_tP.TO) + view.c_strides = lltype.nullptr(Py_ssize_tP.TO) + view.c_suboffsets = lltype.nullptr(Py_ssize_tP.TO) + view.c_internal = lltype.nullptr(rffi.VOIDP.TO) + return 0 diff --git a/pypy/module/cpyext/pyfile.py b/pypy/module/cpyext/pyfile.py --- a/pypy/module/cpyext/pyfile.py +++ b/pypy/module/cpyext/pyfile.py @@ -1,7 +1,8 @@ from pypy.rpython.lltypesystem import rffi, lltype from pypy.module.cpyext.api import ( - cpython_api, CONST_STRING, FILEP, build_type_checkers) + cpython_api, CANNOT_FAIL, CONST_STRING, FILEP, build_type_checkers) from pypy.module.cpyext.pyobject import PyObject, borrow_from +from pypy.module.cpyext.object import Py_PRINT_RAW from pypy.interpreter.error import OperationError from pypy.module._file.interp_file import W_File @@ -61,11 +62,49 @@ def PyFile_WriteString(space, s, w_p): """Write string s to file object p. Return 0 on success or -1 on failure; the appropriate exception will be set.""" - w_s = space.wrap(rffi.charp2str(s)) - space.call_method(w_p, "write", w_s) + w_str = space.wrap(rffi.charp2str(s)) + space.call_method(w_p, "write", w_str) + return 0 + + at cpython_api([PyObject, PyObject, rffi.INT_real], rffi.INT_real, error=-1) +def PyFile_WriteObject(space, w_obj, w_p, flags): + """ + Write object obj to file object p. The only supported flag for flags is + Py_PRINT_RAW; if given, the str() of the object is written + instead of the repr(). Return 0 on success or -1 on failure; the + appropriate exception will be set.""" + if rffi.cast(lltype.Signed, flags) & Py_PRINT_RAW: + w_str = space.str(w_obj) + else: + w_str = space.repr(w_obj) + space.call_method(w_p, "write", w_str) return 0 @cpython_api([PyObject], PyObject) def PyFile_Name(space, w_p): """Return the name of the file specified by p as a string object.""" - return borrow_from(w_p, space.getattr(w_p, space.wrap("name"))) \ No newline at end of file + return borrow_from(w_p, space.getattr(w_p, space.wrap("name"))) + + at cpython_api([PyObject, rffi.INT_real], rffi.INT_real, error=CANNOT_FAIL) +def PyFile_SoftSpace(space, w_p, newflag): + """ + This function exists for internal use by the interpreter. Set the + softspace attribute of p to newflag and return the previous value. + p does not have to be a file object for this function to work + properly; any object is supported (thought its only interesting if + the softspace attribute can be set). 
This function clears any + errors, and will return 0 as the previous value if the attribute + either does not exist or if there were errors in retrieving it. + There is no way to detect errors from this function, but doing so + should not be needed.""" + try: + if rffi.cast(lltype.Signed, newflag): + w_newflag = space.w_True + else: + w_newflag = space.w_False + oldflag = space.int_w(space.getattr(w_p, space.wrap("softspace"))) + space.setattr(w_p, space.wrap("softspace"), w_newflag) + return oldflag + except OperationError, e: + return 0 + diff --git a/pypy/module/cpyext/pystate.py b/pypy/module/cpyext/pystate.py --- a/pypy/module/cpyext/pystate.py +++ b/pypy/module/cpyext/pystate.py @@ -1,12 +1,19 @@ from pypy.module.cpyext.api import ( cpython_api, generic_cpy_call, CANNOT_FAIL, CConfig, cpython_struct) +from pypy.module.cpyext.pyobject import PyObject, Py_DecRef, make_ref from pypy.rpython.lltypesystem import rffi, lltype PyInterpreterStateStruct = lltype.ForwardReference() PyInterpreterState = lltype.Ptr(PyInterpreterStateStruct) cpython_struct( - "PyInterpreterState", [('next', PyInterpreterState)], PyInterpreterStateStruct) -PyThreadState = lltype.Ptr(cpython_struct("PyThreadState", [('interp', PyInterpreterState)])) + "PyInterpreterState", + [('next', PyInterpreterState)], + PyInterpreterStateStruct) +PyThreadState = lltype.Ptr(cpython_struct( + "PyThreadState", + [('interp', PyInterpreterState), + ('dict', PyObject), + ])) @cpython_api([], PyThreadState, error=CANNOT_FAIL) def PyEval_SaveThread(space): @@ -38,41 +45,49 @@ return 1 # XXX: might be generally useful -def encapsulator(T, flavor='raw'): +def encapsulator(T, flavor='raw', dealloc=None): class MemoryCapsule(object): - def __init__(self, alloc=True): - if alloc: + def __init__(self, space): + self.space = space + if space is not None: self.memory = lltype.malloc(T, flavor=flavor) else: self.memory = lltype.nullptr(T) def __del__(self): if self.memory: + if dealloc and self.space: + dealloc(self.memory, self.space) lltype.free(self.memory, flavor=flavor) return MemoryCapsule -ThreadStateCapsule = encapsulator(PyThreadState.TO) +def ThreadState_dealloc(ts, space): + assert space is not None + Py_DecRef(space, ts.c_dict) +ThreadStateCapsule = encapsulator(PyThreadState.TO, + dealloc=ThreadState_dealloc) from pypy.interpreter.executioncontext import ExecutionContext -ExecutionContext.cpyext_threadstate = ThreadStateCapsule(alloc=False) +ExecutionContext.cpyext_threadstate = ThreadStateCapsule(None) class InterpreterState(object): def __init__(self, space): self.interpreter_state = lltype.malloc( PyInterpreterState.TO, flavor='raw', zero=True, immortal=True) - def new_thread_state(self): - capsule = ThreadStateCapsule() + def new_thread_state(self, space): + capsule = ThreadStateCapsule(space) ts = capsule.memory ts.c_interp = self.interpreter_state + ts.c_dict = make_ref(space, space.newdict()) return capsule def get_thread_state(self, space): ec = space.getexecutioncontext() - return self._get_thread_state(ec).memory + return self._get_thread_state(space, ec).memory - def _get_thread_state(self, ec): + def _get_thread_state(self, space, ec): if ec.cpyext_threadstate.memory == lltype.nullptr(PyThreadState.TO): - ec.cpyext_threadstate = self.new_thread_state() + ec.cpyext_threadstate = self.new_thread_state(space) return ec.cpyext_threadstate @@ -81,6 +96,11 @@ state = space.fromcache(InterpreterState) return state.get_thread_state(space) + at cpython_api([], PyObject, error=CANNOT_FAIL) +def PyThreadState_GetDict(space): + 
state = space.fromcache(InterpreterState) + return state.get_thread_state(space).c_dict + @cpython_api([PyThreadState], PyThreadState, error=CANNOT_FAIL) def PyThreadState_Swap(space, tstate): """Swap the current thread state with the thread state given by the argument diff --git a/pypy/module/cpyext/pythonrun.py b/pypy/module/cpyext/pythonrun.py --- a/pypy/module/cpyext/pythonrun.py +++ b/pypy/module/cpyext/pythonrun.py @@ -14,6 +14,20 @@ value.""" return space.fromcache(State).get_programname() + at cpython_api([], rffi.CCHARP) +def Py_GetVersion(space): + """Return the version of this Python interpreter. This is a + string that looks something like + + "1.5 (\#67, Dec 31 1997, 22:34:28) [GCC 2.7.2.2]" + + The first word (up to the first space character) is the current + Python version; the first three characters are the major and minor + version separated by a period. The returned string points into + static storage; the caller should not modify its value. The value + is available to Python code as sys.version.""" + return space.fromcache(State).get_version() + @cpython_api([lltype.Ptr(lltype.FuncType([], lltype.Void))], rffi.INT_real, error=-1) def Py_AtExit(space, func_ptr): """Register a cleanup function to be called by Py_Finalize(). The cleanup diff --git a/pypy/module/cpyext/setobject.py b/pypy/module/cpyext/setobject.py --- a/pypy/module/cpyext/setobject.py +++ b/pypy/module/cpyext/setobject.py @@ -54,6 +54,20 @@ return 0 + at cpython_api([PyObject], PyObject) +def PySet_Pop(space, w_set): + """Return a new reference to an arbitrary object in the set, and removes the + object from the set. Return NULL on failure. Raise KeyError if the + set is empty. Raise a SystemError if set is an not an instance of + set or its subtype.""" + return space.call_method(w_set, "pop") + + at cpython_api([PyObject], rffi.INT_real, error=-1) +def PySet_Clear(space, w_set): + """Empty an existing set of all elements.""" + space.call_method(w_set, 'clear') + return 0 + @cpython_api([PyObject], Py_ssize_t, error=CANNOT_FAIL) def PySet_GET_SIZE(space, w_s): """Macro form of PySet_Size() without error checking.""" diff --git a/pypy/module/cpyext/slotdefs.py b/pypy/module/cpyext/slotdefs.py --- a/pypy/module/cpyext/slotdefs.py +++ b/pypy/module/cpyext/slotdefs.py @@ -185,6 +185,15 @@ space.fromcache(State).check_and_raise_exception(always=True) return space.wrap(res) +def wrap_delitem(space, w_self, w_args, func): + func_target = rffi.cast(objobjargproc, func) + check_num_args(space, w_args, 1) + w_key, = space.fixedview(w_args) + res = generic_cpy_call(space, func_target, w_self, w_key, None) + if rffi.cast(lltype.Signed, res) == -1: + space.fromcache(State).check_and_raise_exception(always=True) + return space.w_None + def wrap_ssizessizeargfunc(space, w_self, w_args, func): func_target = rffi.cast(ssizessizeargfunc, func) check_num_args(space, w_args, 2) @@ -291,6 +300,14 @@ def slot_nb_int(space, w_self): return space.int(w_self) + at cpython_api([PyObject], PyObject, external=False) +def slot_tp_iter(space, w_self): + return space.iter(w_self) + + at cpython_api([PyObject], PyObject, external=False) +def slot_tp_iternext(space, w_self): + return space.next(w_self) + from pypy.rlib.nonconst import NonConstant SLOTS = {} @@ -632,6 +649,19 @@ TPSLOT("__buffer__", "tp_as_buffer.c_bf_getreadbuffer", None, "wrap_getreadbuffer", ""), ) +# partial sort to solve some slot conflicts: +# Number slots before Mapping slots before Sequence slots. 
+# These are the only conflicts between __name__ methods +def slotdef_sort_key(slotdef): + if slotdef.slot_name.startswith('tp_as_number'): + return 1 + if slotdef.slot_name.startswith('tp_as_mapping'): + return 2 + if slotdef.slot_name.startswith('tp_as_sequence'): + return 3 + return 0 +slotdefs = sorted(slotdefs, key=slotdef_sort_key) + slotdefs_for_tp_slots = unrolling_iterable( [(x.method_name, x.slot_name, x.slot_names, x.slot_func) for x in slotdefs]) diff --git a/pypy/module/cpyext/state.py b/pypy/module/cpyext/state.py --- a/pypy/module/cpyext/state.py +++ b/pypy/module/cpyext/state.py @@ -10,6 +10,7 @@ self.space = space self.reset() self.programname = lltype.nullptr(rffi.CCHARP.TO) + self.version = lltype.nullptr(rffi.CCHARP.TO) def reset(self): from pypy.module.cpyext.modsupport import PyMethodDef @@ -102,6 +103,15 @@ lltype.render_immortal(self.programname) return self.programname + def get_version(self): + if not self.version: + space = self.space + w_version = space.sys.get('version') + version = space.str_w(w_version) + self.version = rffi.str2charp(version) + lltype.render_immortal(self.version) + return self.version + def find_extension(self, name, path): from pypy.module.cpyext.modsupport import PyImport_AddModule from pypy.interpreter.module import Module diff --git a/pypy/module/cpyext/stringobject.py b/pypy/module/cpyext/stringobject.py --- a/pypy/module/cpyext/stringobject.py +++ b/pypy/module/cpyext/stringobject.py @@ -250,6 +250,26 @@ s = rffi.charp2str(string) return space.new_interned_str(s) + at cpython_api([PyObjectP], lltype.Void) +def PyString_InternInPlace(space, string): + """Intern the argument *string in place. The argument must be the + address of a pointer variable pointing to a Python string object. + If there is an existing interned string that is the same as + *string, it sets *string to it (decrementing the reference count + of the old string object and incrementing the reference count of + the interned string object), otherwise it leaves *string alone and + interns it (incrementing its reference count). (Clarification: + even though there is a lot of talk about reference counts, think + of this function as reference-count-neutral; you own the object + after the call if and only if you owned it before the call.) 
+ + This function is not available in 3.x and does not have a PyBytes + alias.""" + w_str = from_ref(space, string[0]) + w_str = space.new_interned_w_str(w_str) + Py_DecRef(space, string[0]) + string[0] = make_ref(space, w_str) + @cpython_api([PyObject, rffi.CCHARP, rffi.CCHARP], PyObject) def PyString_AsEncodedObject(space, w_str, encoding, errors): """Encode a string object using the codec registered for encoding and return diff --git a/pypy/module/cpyext/structmemberdefs.py b/pypy/module/cpyext/structmemberdefs.py --- a/pypy/module/cpyext/structmemberdefs.py +++ b/pypy/module/cpyext/structmemberdefs.py @@ -1,3 +1,5 @@ +# These constants are also in include/structmember.h + T_SHORT = 0 T_INT = 1 T_LONG = 2 @@ -18,3 +20,6 @@ T_ULONGLONG = 18 READONLY = RO = 1 +READ_RESTRICTED = 2 +WRITE_RESTRICTED = 4 +RESTRICTED = READ_RESTRICTED | WRITE_RESTRICTED diff --git a/pypy/module/cpyext/stubs.py b/pypy/module/cpyext/stubs.py --- a/pypy/module/cpyext/stubs.py +++ b/pypy/module/cpyext/stubs.py @@ -1,5 +1,5 @@ from pypy.module.cpyext.api import ( - cpython_api, PyObject, PyObjectP, CANNOT_FAIL, Py_buffer + cpython_api, PyObject, PyObjectP, CANNOT_FAIL ) from pypy.module.cpyext.complexobject import Py_complex_ptr as Py_complex from pypy.rpython.lltypesystem import rffi, lltype @@ -10,6 +10,7 @@ PyMethodDef = rffi.VOIDP PyGetSetDef = rffi.VOIDP PyMemberDef = rffi.VOIDP +Py_buffer = rffi.VOIDP va_list = rffi.VOIDP PyDateTime_Date = rffi.VOIDP PyDateTime_DateTime = rffi.VOIDP @@ -32,10 +33,6 @@ def _PyObject_Del(space, op): raise NotImplementedError - at cpython_api([PyObject], rffi.INT_real, error=CANNOT_FAIL) -def PyObject_CheckBuffer(space, obj): - raise NotImplementedError - @cpython_api([rffi.CCHARP], Py_ssize_t, error=CANNOT_FAIL) def PyBuffer_SizeFromFormat(space, format): """Return the implied ~Py_buffer.itemsize from the struct-stype @@ -684,28 +681,6 @@ """ raise NotImplementedError - at cpython_api([PyObject, rffi.INT_real], rffi.INT_real, error=CANNOT_FAIL) -def PyFile_SoftSpace(space, p, newflag): - """ - This function exists for internal use by the interpreter. Set the - softspace attribute of p to newflag and return the previous value. - p does not have to be a file object for this function to work properly; any - object is supported (thought its only interesting if the softspace - attribute can be set). This function clears any errors, and will return 0 - as the previous value if the attribute either does not exist or if there were - errors in retrieving it. There is no way to detect errors from this function, - but doing so should not be needed.""" - raise NotImplementedError - - at cpython_api([PyObject, PyObject, rffi.INT_real], rffi.INT_real, error=-1) -def PyFile_WriteObject(space, obj, p, flags): - """ - Write object obj to file object p. The only supported flag for flags is - Py_PRINT_RAW; if given, the str() of the object is written - instead of the repr(). Return 0 on success or -1 on failure; the - appropriate exception will be set.""" - raise NotImplementedError - @cpython_api([], PyObject) def PyFloat_GetInfo(space): """Return a structseq instance which contains information about the @@ -1097,19 +1072,6 @@ raise NotImplementedError @cpython_api([], rffi.CCHARP) -def Py_GetVersion(space): - """Return the version of this Python interpreter. 
This is a string that looks - something like - - "1.5 (\#67, Dec 31 1997, 22:34:28) [GCC 2.7.2.2]" - - The first word (up to the first space character) is the current Python version; - the first three characters are the major and minor version separated by a - period. The returned string points into static storage; the caller should not - modify its value. The value is available to Python code as sys.version.""" - raise NotImplementedError - - at cpython_api([], rffi.CCHARP) def Py_GetPlatform(space): """Return the platform identifier for the current platform. On Unix, this is formed from the"official" name of the operating system, converted to lower @@ -1331,28 +1293,6 @@ that haven't been explicitly destroyed at that point.""" raise NotImplementedError - at cpython_api([rffi.VOIDP], lltype.Void) -def Py_AddPendingCall(space, func): - """Post a notification to the Python main thread. If successful, func will - be called with the argument arg at the earliest convenience. func will be - called having the global interpreter lock held and can thus use the full - Python API and can take any action such as setting object attributes to - signal IO completion. It must return 0 on success, or -1 signalling an - exception. The notification function won't be interrupted to perform another - asynchronous notification recursively, but it can still be interrupted to - switch threads if the global interpreter lock is released, for example, if it - calls back into Python code. - - This function returns 0 on success in which case the notification has been - scheduled. Otherwise, for example if the notification buffer is full, it - returns -1 without setting any exception. - - This function can be called on any thread, be it a Python thread or some - other system thread. If it is a Python thread, it doesn't matter if it holds - the global interpreter lock or not. - """ - raise NotImplementedError - @cpython_api([Py_tracefunc, PyObject], lltype.Void) def PyEval_SetProfile(space, func, obj): """Set the profiler function to func. The obj parameter is passed to the @@ -1685,15 +1625,6 @@ """ raise NotImplementedError - at cpython_api([PyObject], PyObject) -def PyObject_Dir(space, o): - """This is equivalent to the Python expression dir(o), returning a (possibly - empty) list of strings appropriate for the object argument, or NULL if there - was an error. If the argument is NULL, this is like the Python dir(), - returning the names of the current locals; in this case, if no execution frame - is active then NULL is returned but PyErr_Occurred() will return false.""" - raise NotImplementedError - @cpython_api([], PyFrameObject) def PyEval_GetFrame(space): """Return the current thread state's frame, which is NULL if no frame is @@ -1802,34 +1733,6 @@ building-up new frozensets with PySet_Add().""" raise NotImplementedError - at cpython_api([PyObject], PyObject) -def PySet_Pop(space, set): - """Return a new reference to an arbitrary object in the set, and removes the - object from the set. Return NULL on failure. Raise KeyError if the - set is empty. Raise a SystemError if set is an not an instance of - set or its subtype.""" - raise NotImplementedError - - at cpython_api([PyObject], rffi.INT_real, error=-1) -def PySet_Clear(space, set): - """Empty an existing set of all elements.""" - raise NotImplementedError - - at cpython_api([PyObjectP], lltype.Void) -def PyString_InternInPlace(space, string): - """Intern the argument *string in place. 
The argument must be the address of a - pointer variable pointing to a Python string object. If there is an existing - interned string that is the same as *string, it sets *string to it - (decrementing the reference count of the old string object and incrementing the - reference count of the interned string object), otherwise it leaves *string - alone and interns it (incrementing its reference count). (Clarification: even - though there is a lot of talk about reference counts, think of this function as - reference-count-neutral; you own the object after the call if and only if you - owned it before the call.) - - This function is not available in 3.x and does not have a PyBytes alias.""" - raise NotImplementedError - @cpython_api([rffi.CCHARP, Py_ssize_t, rffi.CCHARP, rffi.CCHARP], PyObject) def PyString_Decode(space, s, size, encoding, errors): """Create an object by decoding size bytes of the encoded buffer s using the @@ -2448,16 +2351,6 @@ properly supporting 64-bit systems.""" raise NotImplementedError - at cpython_api([PyObject, PyObject, PyObject, Py_ssize_t], PyObject) -def PyUnicode_Replace(space, str, substr, replstr, maxcount): - """Replace at most maxcount occurrences of substr in str with replstr and - return the resulting Unicode object. maxcount == -1 means replace all - occurrences. - - This function used an int type for maxcount. This might - require changes in your code for properly supporting 64-bit systems.""" - raise NotImplementedError - @cpython_api([PyObject, PyObject, rffi.INT_real], PyObject) def PyUnicode_RichCompare(space, left, right, op): """Rich compare two unicode strings and return one of the following: diff --git a/pypy/module/cpyext/stubsactive.py b/pypy/module/cpyext/stubsactive.py --- a/pypy/module/cpyext/stubsactive.py +++ b/pypy/module/cpyext/stubsactive.py @@ -38,3 +38,31 @@ def Py_MakePendingCalls(space): return 0 +pending_call = lltype.Ptr(lltype.FuncType([rffi.VOIDP], rffi.INT_real)) + at cpython_api([pending_call, rffi.VOIDP], rffi.INT_real, error=-1) +def Py_AddPendingCall(space, func, arg): + """Post a notification to the Python main thread. If successful, + func will be called with the argument arg at the earliest + convenience. func will be called having the global interpreter + lock held and can thus use the full Python API and can take any + action such as setting object attributes to signal IO completion. + It must return 0 on success, or -1 signalling an exception. The + notification function won't be interrupted to perform another + asynchronous notification recursively, but it can still be + interrupted to switch threads if the global interpreter lock is + released, for example, if it calls back into Python code. + + This function returns 0 on success in which case the notification + has been scheduled. Otherwise, for example if the notification + buffer is full, it returns -1 without setting any exception. + + This function can be called on any thread, be it a Python thread + or some other system thread. If it is a Python thread, it doesn't + matter if it holds the global interpreter lock or not. 
+ """ + return -1 + +thread_func = lltype.Ptr(lltype.FuncType([rffi.VOIDP], lltype.Void)) + at cpython_api([thread_func, rffi.VOIDP], rffi.INT_real, error=-1) +def PyThread_start_new_thread(space, func, arg): + return -1 diff --git a/pypy/module/cpyext/test/test_arraymodule.py b/pypy/module/cpyext/test/test_arraymodule.py --- a/pypy/module/cpyext/test/test_arraymodule.py +++ b/pypy/module/cpyext/test/test_arraymodule.py @@ -43,6 +43,15 @@ assert arr[:2].tolist() == [1,2] assert arr[1:3].tolist() == [2,3] + def test_slice_object(self): + module = self.import_module(name='array') + arr = module.array('i', [1,2,3,4]) + assert arr[slice(1,3)].tolist() == [2,3] + arr[slice(1,3)] = module.array('i', [21, 22, 23]) + assert arr.tolist() == [1, 21, 22, 23, 4] + del arr[slice(1, 3)] + assert arr.tolist() == [1, 23, 4] + def test_buffer(self): module = self.import_module(name='array') arr = module.array('i', [1,2,3,4]) diff --git a/pypy/module/cpyext/test/test_cpyext.py b/pypy/module/cpyext/test/test_cpyext.py --- a/pypy/module/cpyext/test/test_cpyext.py +++ b/pypy/module/cpyext/test/test_cpyext.py @@ -744,6 +744,22 @@ print p assert 'py' in p + def test_get_version(self): + mod = self.import_extension('foo', [ + ('get_version', 'METH_NOARGS', + ''' + char* name1 = Py_GetVersion(); + char* name2 = Py_GetVersion(); + if (name1 != name2) + Py_RETURN_FALSE; + return PyString_FromString(name1); + ''' + ), + ]) + p = mod.get_version() + print p + assert 'PyPy' in p + def test_no_double_imports(self): import sys, os try: diff --git a/pypy/module/cpyext/test/test_dictobject.py b/pypy/module/cpyext/test/test_dictobject.py --- a/pypy/module/cpyext/test/test_dictobject.py +++ b/pypy/module/cpyext/test/test_dictobject.py @@ -2,6 +2,7 @@ from pypy.module.cpyext.test.test_api import BaseApiTest from pypy.module.cpyext.api import Py_ssize_tP, PyObjectP from pypy.module.cpyext.pyobject import make_ref, from_ref +from pypy.interpreter.error import OperationError class TestDictObject(BaseApiTest): def test_dict(self, space, api): @@ -110,3 +111,44 @@ assert space.eq_w(space.len(w_copy), space.len(w_dict)) assert space.eq_w(w_copy, w_dict) + + def test_iterkeys(self, space, api): + w_dict = space.sys.getdict(space) + py_dict = make_ref(space, w_dict) + + ppos = lltype.malloc(Py_ssize_tP.TO, 1, flavor='raw') + pkey = lltype.malloc(PyObjectP.TO, 1, flavor='raw') + pvalue = lltype.malloc(PyObjectP.TO, 1, flavor='raw') + + keys_w = [] + values_w = [] + try: + ppos[0] = 0 + while api.PyDict_Next(w_dict, ppos, pkey, None): + w_key = from_ref(space, pkey[0]) + keys_w.append(w_key) + ppos[0] = 0 + while api.PyDict_Next(w_dict, ppos, None, pvalue): + w_value = from_ref(space, pvalue[0]) + values_w.append(w_value) + finally: + lltype.free(ppos, flavor='raw') + lltype.free(pkey, flavor='raw') + lltype.free(pvalue, flavor='raw') + + api.Py_DecRef(py_dict) # release borrowed references + + assert space.eq_w(space.newlist(keys_w), + space.call_method(w_dict, "keys")) + assert space.eq_w(space.newlist(values_w), + space.call_method(w_dict, "values")) + + def test_dictproxy(self, space, api): + w_dict = space.sys.get('modules') + w_proxy = api.PyDictProxy_New(w_dict) + assert space.is_true(space.contains(w_proxy, space.wrap('sys'))) + raises(OperationError, space.setitem, + w_proxy, space.wrap('sys'), space.w_None) + raises(OperationError, space.delitem, + w_proxy, space.wrap('sys')) + raises(OperationError, space.call_method, w_proxy, 'clear') diff --git a/pypy/module/cpyext/test/test_methodobject.py 
b/pypy/module/cpyext/test/test_methodobject.py --- a/pypy/module/cpyext/test/test_methodobject.py +++ b/pypy/module/cpyext/test/test_methodobject.py @@ -9,7 +9,7 @@ class AppTestMethodObject(AppTestCpythonExtensionBase): def test_call_METH(self): - mod = self.import_extension('foo', [ + mod = self.import_extension('MyModule', [ ('getarg_O', 'METH_O', ''' Py_INCREF(args); @@ -51,11 +51,23 @@ } ''' ), + ('getModule', 'METH_O', + ''' + if(PyCFunction_Check(args)) { + PyCFunctionObject* func = (PyCFunctionObject*)args; + Py_INCREF(func->m_module); + return func->m_module; + } + else { + Py_RETURN_FALSE; + } + ''' + ), ('isSameFunction', 'METH_O', ''' PyCFunction ptr = PyCFunction_GetFunction(args); if (!ptr) return NULL; - if (ptr == foo_getarg_O) + if (ptr == MyModule_getarg_O) Py_RETURN_TRUE; else Py_RETURN_FALSE; @@ -76,6 +88,7 @@ assert mod.getarg_OLD(1, 2) == (1, 2) assert mod.isCFunction(mod.getarg_O) == "getarg_O" + assert mod.getModule(mod.getarg_O) == 'MyModule' assert mod.isSameFunction(mod.getarg_O) raises(TypeError, mod.isSameFunction, 1) diff --git a/pypy/module/cpyext/test/test_object.py b/pypy/module/cpyext/test/test_object.py --- a/pypy/module/cpyext/test/test_object.py +++ b/pypy/module/cpyext/test/test_object.py @@ -191,6 +191,11 @@ assert api.PyObject_Unicode(space.wrap("\xe9")) is None api.PyErr_Clear() + def test_dir(self, space, api): + w_dir = api.PyObject_Dir(space.sys) + assert space.isinstance_w(w_dir, space.w_list) + assert space.is_true(space.contains(w_dir, space.wrap('modules'))) + class AppTestObject(AppTestCpythonExtensionBase): def setup_class(cls): AppTestCpythonExtensionBase.setup_class.im_func(cls) diff --git a/pypy/module/cpyext/test/test_pyfile.py b/pypy/module/cpyext/test/test_pyfile.py --- a/pypy/module/cpyext/test/test_pyfile.py +++ b/pypy/module/cpyext/test/test_pyfile.py @@ -1,5 +1,6 @@ from pypy.module.cpyext.api import fopen, fclose, fwrite from pypy.module.cpyext.test.test_api import BaseApiTest +from pypy.module.cpyext.object import Py_PRINT_RAW from pypy.rpython.lltypesystem import rffi, lltype from pypy.tool.udir import udir import pytest @@ -77,3 +78,28 @@ out = out.replace('\r\n', '\n') assert out == "test\n" + def test_file_writeobject(self, space, api, capfd): + w_obj = space.wrap("test\n") + w_stdout = space.sys.get("stdout") + api.PyFile_WriteObject(w_obj, w_stdout, Py_PRINT_RAW) + api.PyFile_WriteObject(w_obj, w_stdout, 0) + space.call_method(w_stdout, "flush") + out, err = capfd.readouterr() + out = out.replace('\r\n', '\n') + assert out == "test\n'test\\n'" + + def test_file_softspace(self, space, api, capfd): + w_stdout = space.sys.get("stdout") + assert api.PyFile_SoftSpace(w_stdout, 1) == 0 + assert api.PyFile_SoftSpace(w_stdout, 0) == 1 + + api.PyFile_SoftSpace(w_stdout, 1) + w_ns = space.newdict() + space.exec_("print 1,", w_ns, w_ns) + space.exec_("print 2,", w_ns, w_ns) + api.PyFile_SoftSpace(w_stdout, 0) + space.exec_("print 3", w_ns, w_ns) + space.call_method(w_stdout, "flush") + out, err = capfd.readouterr() + out = out.replace('\r\n', '\n') + assert out == " 1 23\n" diff --git a/pypy/module/cpyext/test/test_pystate.py b/pypy/module/cpyext/test/test_pystate.py --- a/pypy/module/cpyext/test/test_pystate.py +++ b/pypy/module/cpyext/test/test_pystate.py @@ -2,6 +2,7 @@ from pypy.module.cpyext.test.test_api import BaseApiTest from pypy.rpython.lltypesystem.lltype import nullptr from pypy.module.cpyext.pystate import PyInterpreterState, PyThreadState +from pypy.module.cpyext.pyobject import from_ref class 
AppTestThreads(AppTestCpythonExtensionBase): def test_allow_threads(self): @@ -49,3 +50,10 @@ api.PyEval_AcquireThread(tstate) api.PyEval_ReleaseThread(tstate) + + def test_threadstate_dict(self, space, api): + ts = api.PyThreadState_Get() + ref = ts.c_dict + assert ref == api.PyThreadState_GetDict() + w_obj = from_ref(space, ref) + assert space.isinstance_w(w_obj, space.w_dict) diff --git a/pypy/module/cpyext/test/test_setobject.py b/pypy/module/cpyext/test/test_setobject.py --- a/pypy/module/cpyext/test/test_setobject.py +++ b/pypy/module/cpyext/test/test_setobject.py @@ -32,3 +32,13 @@ w_set = api.PySet_New(space.wrap([1,2,3,4])) assert api.PySet_Contains(w_set, space.wrap(1)) assert not api.PySet_Contains(w_set, space.wrap(0)) + + def test_set_pop_clear(self, space, api): + w_set = api.PySet_New(space.wrap([1,2,3,4])) + w_obj = api.PySet_Pop(w_set) + assert space.int_w(w_obj) in (1,2,3,4) + assert space.len_w(w_set) == 3 + api.PySet_Clear(w_set) + assert space.len_w(w_set) == 0 + + diff --git a/pypy/module/cpyext/test/test_stringobject.py b/pypy/module/cpyext/test/test_stringobject.py --- a/pypy/module/cpyext/test/test_stringobject.py +++ b/pypy/module/cpyext/test/test_stringobject.py @@ -166,6 +166,20 @@ res = module.test_string_format(1, "xyz") assert res == "bla 1 ble xyz\n" + def test_intern_inplace(self): + module = self.import_extension('foo', [ + ("test_intern_inplace", "METH_O", + ''' + PyObject *s = args; + Py_INCREF(s); + PyString_InternInPlace(&s); + return s; + ''' + ) + ]) + # This does not test much, but at least the refcounts are checked. + assert module.test_intern_inplace('s') == 's' + class TestString(BaseApiTest): def test_string_resize(self, space, api): py_str = new_empty_str(space, 10) diff --git a/pypy/module/cpyext/test/test_typeobject.py b/pypy/module/cpyext/test/test_typeobject.py --- a/pypy/module/cpyext/test/test_typeobject.py +++ b/pypy/module/cpyext/test/test_typeobject.py @@ -425,3 +425,32 @@ ''') obj = module.new_obj() raises(ZeroDivisionError, obj.__setitem__, 5, None) + + def test_tp_iter(self): + module = self.import_extension('foo', [ + ("tp_iter", "METH_O", + ''' + if (!args->ob_type->tp_iter) + { + PyErr_SetNone(PyExc_ValueError); + return NULL; + } + return args->ob_type->tp_iter(args); + ''' + ), + ("tp_iternext", "METH_O", + ''' + if (!args->ob_type->tp_iternext) + { + PyErr_SetNone(PyExc_ValueError); + return NULL; + } + return args->ob_type->tp_iternext(args); + ''' + ) + ]) + l = [1] + it = module.tp_iter(l) + assert type(it) is type(iter([])) + assert module.tp_iternext(it) == 1 + raises(StopIteration, module.tp_iternext, it) diff --git a/pypy/module/cpyext/test/test_unicodeobject.py b/pypy/module/cpyext/test/test_unicodeobject.py --- a/pypy/module/cpyext/test/test_unicodeobject.py +++ b/pypy/module/cpyext/test/test_unicodeobject.py @@ -420,3 +420,20 @@ w_seq = space.wrap([u'a', u'b']) w_joined = api.PyUnicode_Join(w_sep, w_seq) assert space.unwrap(w_joined) == u'ab' + + def test_fromordinal(self, space, api): + w_char = api.PyUnicode_FromOrdinal(65) + assert space.unwrap(w_char) == u'A' + w_char = api.PyUnicode_FromOrdinal(0) + assert space.unwrap(w_char) == u'\0' + w_char = api.PyUnicode_FromOrdinal(0xFFFF) + assert space.unwrap(w_char) == u'\uFFFF' + + def test_replace(self, space, api): + w_str = space.wrap(u"abababab") + w_substr = space.wrap(u"a") + w_replstr = space.wrap(u"z") + assert u"zbzbabab" == space.unwrap( + api.PyUnicode_Replace(w_str, w_substr, w_replstr, 2)) + assert u"zbzbzbzb" == space.unwrap( + 
api.PyUnicode_Replace(w_str, w_substr, w_replstr, -1)) diff --git a/pypy/module/cpyext/unicodeobject.py b/pypy/module/cpyext/unicodeobject.py --- a/pypy/module/cpyext/unicodeobject.py +++ b/pypy/module/cpyext/unicodeobject.py @@ -395,6 +395,16 @@ w_str = space.wrap(rffi.charpsize2str(s, size)) return space.call_method(w_str, 'decode', space.wrap("utf-8")) + at cpython_api([rffi.INT_real], PyObject) +def PyUnicode_FromOrdinal(space, ordinal): + """Create a Unicode Object from the given Unicode code point ordinal. + + The ordinal must be in range(0x10000) on narrow Python builds + (UCS2), and range(0x110000) on wide builds (UCS4). A ValueError is + raised in case it is not.""" + w_ordinal = space.wrap(rffi.cast(lltype.Signed, ordinal)) + return space.call_function(space.builtin.get('unichr'), w_ordinal) + @cpython_api([PyObjectP, Py_ssize_t], rffi.INT_real, error=-1) def PyUnicode_Resize(space, ref, newsize): # XXX always create a new string so far @@ -538,6 +548,15 @@ @cpython_api([PyObject, PyObject], PyObject) def PyUnicode_Join(space, w_sep, w_seq): - """Join a sequence of strings using the given separator and return the resulting - Unicode string.""" + """Join a sequence of strings using the given separator and return + the resulting Unicode string.""" return space.call_method(w_sep, 'join', w_seq) + + at cpython_api([PyObject, PyObject, PyObject, Py_ssize_t], PyObject) +def PyUnicode_Replace(space, w_str, w_substr, w_replstr, maxcount): + """Replace at most maxcount occurrences of substr in str with replstr and + return the resulting Unicode object. maxcount == -1 means replace all + occurrences.""" + return space.call_method(w_str, "replace", w_substr, w_replstr, + space.wrap(maxcount)) + diff --git a/pypy/module/micronumpy/__init__.py b/pypy/module/micronumpy/__init__.py --- a/pypy/module/micronumpy/__init__.py +++ b/pypy/module/micronumpy/__init__.py @@ -67,10 +67,12 @@ ("arccos", "arccos"), ("arcsin", "arcsin"), ("arctan", "arctan"), + ("arccosh", "arccosh"), ("arcsinh", "arcsinh"), ("arctanh", "arctanh"), ("copysign", "copysign"), ("cos", "cos"), + ("cosh", "cosh"), ("divide", "divide"), ("true_divide", "true_divide"), ("equal", "equal"), @@ -90,9 +92,11 @@ ("reciprocal", "reciprocal"), ("sign", "sign"), ("sin", "sin"), + ("sinh", "sinh"), ("subtract", "subtract"), ('sqrt', 'sqrt'), ("tan", "tan"), + ("tanh", "tanh"), ('bitwise_and', 'bitwise_and'), ('bitwise_or', 'bitwise_or'), ('bitwise_xor', 'bitwise_xor'), diff --git a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py --- a/pypy/module/micronumpy/interp_ufuncs.py +++ b/pypy/module/micronumpy/interp_ufuncs.py @@ -435,7 +435,11 @@ ("arcsin", "arcsin", 1, {"promote_to_float": True}), ("arccos", "arccos", 1, {"promote_to_float": True}), ("arctan", "arctan", 1, {"promote_to_float": True}), + ("sinh", "sinh", 1, {"promote_to_float": True}), + ("cosh", "cosh", 1, {"promote_to_float": True}), + ("tanh", "tanh", 1, {"promote_to_float": True}), ("arcsinh", "arcsinh", 1, {"promote_to_float": True}), + ("arccosh", "arccosh", 1, {"promote_to_float": True}), ("arctanh", "arctanh", 1, {"promote_to_float": True}), ]: self.add_ufunc(space, *ufunc_def) diff --git a/pypy/module/micronumpy/test/test_dtypes.py b/pypy/module/micronumpy/test/test_dtypes.py --- a/pypy/module/micronumpy/test/test_dtypes.py +++ b/pypy/module/micronumpy/test/test_dtypes.py @@ -371,6 +371,8 @@ assert type(a[1]) is numpy.float64 assert numpy.dtype(float).type is numpy.float64 + assert "{}".format(numpy.float64(3)) == "3.0" + assert 
numpy.float64(2.0) == 2.0 assert numpy.float64('23.4') == numpy.float64(23.4) raises(ValueError, numpy.float64, '23.2df') diff --git a/pypy/module/micronumpy/test/test_ufuncs.py b/pypy/module/micronumpy/test/test_ufuncs.py --- a/pypy/module/micronumpy/test/test_ufuncs.py +++ b/pypy/module/micronumpy/test/test_ufuncs.py @@ -310,6 +310,33 @@ b = arctan(a) assert math.isnan(b[0]) + def test_sinh(self): + import math + from _numpypy import array, sinh + + a = array([-1, 0, 1, float('inf'), float('-inf')]) + b = sinh(a) + for i in range(len(a)): + assert b[i] == math.sinh(a[i]) + + def test_cosh(self): + import math + from _numpypy import array, cosh + + a = array([-1, 0, 1, float('inf'), float('-inf')]) + b = cosh(a) + for i in range(len(a)): + assert b[i] == math.cosh(a[i]) + + def test_tanh(self): + import math + from _numpypy import array, tanh + + a = array([-1, 0, 1, float('inf'), float('-inf')]) + b = tanh(a) + for i in range(len(a)): + assert b[i] == math.tanh(a[i]) + def test_arcsinh(self): import math from _numpypy import arcsinh @@ -318,6 +345,15 @@ assert math.asinh(v) == arcsinh(v) assert math.isnan(arcsinh(float("nan"))) + def test_arccosh(self): + import math + from _numpypy import arccosh + + for v in [1.0, 1.1, 2]: + assert math.acosh(v) == arccosh(v) + for v in [-1.0, 0, .99]: + assert math.isnan(arccosh(v)) + def test_arctanh(self): import math from _numpypy import arctanh diff --git a/pypy/module/micronumpy/types.py b/pypy/module/micronumpy/types.py --- a/pypy/module/micronumpy/types.py +++ b/pypy/module/micronumpy/types.py @@ -489,10 +489,28 @@ return math.atan(v) @simple_unary_op + def sinh(self, v): + return math.sinh(v) + + @simple_unary_op + def cosh(self, v): + return math.cosh(v) + + @simple_unary_op + def tanh(self, v): + return math.tanh(v) + + @simple_unary_op def arcsinh(self, v): return math.asinh(v) @simple_unary_op + def arccosh(self, v): + if v < 1.0: + return rfloat.NAN + return math.acosh(v) + + @simple_unary_op def arctanh(self, v): if v == 1.0 or v == -1.0: return math.copysign(rfloat.INFINITY, v) diff --git a/pypy/module/oracle/interp_error.py b/pypy/module/oracle/interp_error.py --- a/pypy/module/oracle/interp_error.py +++ b/pypy/module/oracle/interp_error.py @@ -72,7 +72,7 @@ get(space).w_InternalError, space.wrap("No Oracle error?")) - self.code = codeptr[0] + self.code = rffi.cast(lltype.Signed, codeptr[0]) self.w_message = config.w_string(space, textbuf) finally: lltype.free(codeptr, flavor='raw') diff --git a/pypy/module/oracle/interp_variable.py b/pypy/module/oracle/interp_variable.py --- a/pypy/module/oracle/interp_variable.py +++ b/pypy/module/oracle/interp_variable.py @@ -359,14 +359,14 @@ # Verifies that truncation or other problems did not take place on # retrieve. 
if self.isVariableLength: - if rffi.cast(lltype.Signed, self.returnCode[pos]) != 0: + error_code = rffi.cast(lltype.Signed, self.returnCode[pos]) + if error_code != 0: error = W_Error(space, self.environment, "Variable_VerifyFetch()", 0) - error.code = self.returnCode[pos] + error.code = error_code error.message = space.wrap( "column at array pos %d fetched with error: %d" % - (pos, - rffi.cast(lltype.Signed, self.returnCode[pos]))) + (pos, error_code)) w_error = get(space).w_DatabaseError raise OperationError(get(space).w_DatabaseError, diff --git a/pypy/module/test_lib_pypy/ctypes_tests/test_fastpath.py b/pypy/module/test_lib_pypy/ctypes_tests/test_fastpath.py --- a/pypy/module/test_lib_pypy/ctypes_tests/test_fastpath.py +++ b/pypy/module/test_lib_pypy/ctypes_tests/test_fastpath.py @@ -97,6 +97,16 @@ tf_b.errcheck = errcheck assert tf_b(-126) == 'hello' + def test_array_to_ptr(self): + ARRAY = c_int * 8 + func = dll._testfunc_ai8 + func.restype = POINTER(c_int) + func.argtypes = [ARRAY] + array = ARRAY(1, 2, 3, 4, 5, 6, 7, 8) + ptr = func(array) + assert ptr[0] == 1 + assert ptr[7] == 8 + class TestFallbackToSlowpath(BaseCTypesTestChecker): diff --git a/pypy/module/test_lib_pypy/ctypes_tests/test_prototypes.py b/pypy/module/test_lib_pypy/ctypes_tests/test_prototypes.py --- a/pypy/module/test_lib_pypy/ctypes_tests/test_prototypes.py +++ b/pypy/module/test_lib_pypy/ctypes_tests/test_prototypes.py @@ -246,6 +246,14 @@ def func(): pass CFUNCTYPE(None, c_int * 3)(func) + def test_array_to_ptr_wrongtype(self): + ARRAY = c_byte * 8 + func = testdll._testfunc_ai8 + func.restype = POINTER(c_int) + func.argtypes = [c_int * 8] + array = ARRAY(1, 2, 3, 4, 5, 6, 7, 8) + py.test.raises(ArgumentError, "func(array)") + ################################################################ if __name__ == '__main__': diff --git a/pypy/module/test_lib_pypy/test_datetime.py b/pypy/module/test_lib_pypy/test_datetime.py --- a/pypy/module/test_lib_pypy/test_datetime.py +++ b/pypy/module/test_lib_pypy/test_datetime.py @@ -3,7 +3,7 @@ import py import time -import datetime +from lib_pypy import datetime import copy import os @@ -43,4 +43,4 @@ dt = datetime.datetime.utcnow() assert type(dt.microsecond) is int - copy.copy(dt) \ No newline at end of file + copy.copy(dt) diff --git a/pypy/objspace/fake/objspace.py b/pypy/objspace/fake/objspace.py --- a/pypy/objspace/fake/objspace.py +++ b/pypy/objspace/fake/objspace.py @@ -326,4 +326,5 @@ return w_some_obj() FakeObjSpace.sys = FakeModule() FakeObjSpace.sys.filesystemencoding = 'foobar' +FakeObjSpace.sys.defaultencoding = 'ascii' FakeObjSpace.builtin = FakeModule() diff --git a/pypy/objspace/flow/flowcontext.py b/pypy/objspace/flow/flowcontext.py --- a/pypy/objspace/flow/flowcontext.py +++ b/pypy/objspace/flow/flowcontext.py @@ -410,7 +410,7 @@ w_new = Constant(newvalue) f = self.crnt_frame stack_items_w = f.locals_stack_w - for i in range(f.valuestackdepth-1, f.nlocals-1, -1): + for i in range(f.valuestackdepth-1, f.pycode.co_nlocals-1, -1): w_v = stack_items_w[i] if isinstance(w_v, Constant): if w_v.value is oldvalue: diff --git a/pypy/objspace/flow/test/test_framestate.py b/pypy/objspace/flow/test/test_framestate.py --- a/pypy/objspace/flow/test/test_framestate.py +++ b/pypy/objspace/flow/test/test_framestate.py @@ -25,7 +25,7 @@ dummy = Constant(None) #dummy.dummy = True arg_list = ([Variable() for i in range(formalargcount)] + - [dummy] * (frame.nlocals - formalargcount)) + [dummy] * (frame.pycode.co_nlocals - formalargcount)) frame.setfastscope(arg_list) return frame 
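[Editorial aside: the flow-space and test_framestate diffs above replace the removed frame.nlocals attribute with frame.pycode.co_nlocals. The underlying layout, visible in the pyframe.py hunks later in this message, is a single locals_stack_w list whose first co_nlocals slots hold the fast locals while the value stack grows above them, so valuestackdepth never drops below co_nlocals. A simplified, self-contained sketch of that layout — not PyPy's actual classes, names merely follow the diffs:]

    class FakePyCode(object):
        def __init__(self, co_nlocals, co_stacksize):
            self.co_nlocals = co_nlocals
            self.co_stacksize = co_stacksize

    class FakeFrame(object):
        """Locals and value stack share one list, split at co_nlocals."""
        def __init__(self, pycode):
            self.pycode = pycode
            self.locals_stack_w = [None] * (pycode.co_nlocals +
                                            pycode.co_stacksize)
            self.valuestackdepth = pycode.co_nlocals   # empty value stack

        def pushvalue(self, w_obj):
            self.locals_stack_w[self.valuestackdepth] = w_obj
            self.valuestackdepth += 1

        def popvalue(self):
            assert self.valuestackdepth > self.pycode.co_nlocals, "empty stack"
            self.valuestackdepth -= 1
            w_obj = self.locals_stack_w[self.valuestackdepth]
            self.locals_stack_w[self.valuestackdepth] = None
            return w_obj

    frame = FakeFrame(FakePyCode(co_nlocals=3, co_stacksize=2))
    frame.pushvalue("w_obj")
    assert frame.popvalue() == "w_obj"
    # slots [0:3] are the fast locals; [3:5] is room for the value stack

[This is why the tests above can poke frame.locals_stack_w[frame.pycode.co_nlocals-1] to overwrite the last local without touching the value stack.]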
@@ -42,7 +42,7 @@ def test_neq_hacked_framestate(self): frame = self.getframe(self.func_simple) fs1 = FrameState(frame) - frame.locals_stack_w[frame.nlocals-1] = Variable() + frame.locals_stack_w[frame.pycode.co_nlocals-1] = Variable() fs2 = FrameState(frame) assert fs1 != fs2 @@ -55,7 +55,7 @@ def test_union_on_hacked_framestates(self): frame = self.getframe(self.func_simple) fs1 = FrameState(frame) - frame.locals_stack_w[frame.nlocals-1] = Variable() + frame.locals_stack_w[frame.pycode.co_nlocals-1] = Variable() fs2 = FrameState(frame) assert fs1.union(fs2) == fs2 # fs2 is more general assert fs2.union(fs1) == fs2 # fs2 is more general @@ -63,7 +63,7 @@ def test_restore_frame(self): frame = self.getframe(self.func_simple) fs1 = FrameState(frame) - frame.locals_stack_w[frame.nlocals-1] = Variable() + frame.locals_stack_w[frame.pycode.co_nlocals-1] = Variable() fs1.restoreframe(frame) assert fs1 == FrameState(frame) @@ -82,7 +82,7 @@ def test_getoutputargs(self): frame = self.getframe(self.func_simple) fs1 = FrameState(frame) - frame.locals_stack_w[frame.nlocals-1] = Variable() + frame.locals_stack_w[frame.pycode.co_nlocals-1] = Variable() fs2 = FrameState(frame) outputargs = fs1.getoutputargs(fs2) # 'x' -> 'x' is a Variable @@ -92,16 +92,16 @@ def test_union_different_constants(self): frame = self.getframe(self.func_simple) fs1 = FrameState(frame) - frame.locals_stack_w[frame.nlocals-1] = Constant(42) + frame.locals_stack_w[frame.pycode.co_nlocals-1] = Constant(42) fs2 = FrameState(frame) fs3 = fs1.union(fs2) fs3.restoreframe(frame) - assert isinstance(frame.locals_stack_w[frame.nlocals-1], Variable) - # ^^^ generalized + assert isinstance(frame.locals_stack_w[frame.pycode.co_nlocals-1], + Variable) # generalized def test_union_spectag(self): frame = self.getframe(self.func_simple) fs1 = FrameState(frame) - frame.locals_stack_w[frame.nlocals-1] = Constant(SpecTag()) + frame.locals_stack_w[frame.pycode.co_nlocals-1] = Constant(SpecTag()) fs2 = FrameState(frame) assert fs1.union(fs2) is None # UnionError diff --git a/pypy/objspace/std/dictmultiobject.py b/pypy/objspace/std/dictmultiobject.py --- a/pypy/objspace/std/dictmultiobject.py +++ b/pypy/objspace/std/dictmultiobject.py @@ -142,6 +142,17 @@ else: return result + def popitem(self, w_dict): + # this is a bad implementation: if we call popitem() repeatedly, + # it ends up taking n**2 time, because the next() calls below + # will take longer and longer. But all interesting strategies + # provide a better one. + space = self.space + iterator = self.iter(w_dict) + w_key, w_value = iterator.next() + self.delitem(w_dict, w_key) + return (w_key, w_value) + def clear(self, w_dict): strategy = self.space.fromcache(EmptyDictStrategy) storage = strategy.get_empty_storage() diff --git a/pypy/objspace/std/dictproxyobject.py b/pypy/objspace/std/dictproxyobject.py --- a/pypy/objspace/std/dictproxyobject.py +++ b/pypy/objspace/std/dictproxyobject.py @@ -3,7 +3,7 @@ from pypy.objspace.std.dictmultiobject import W_DictMultiObject, IteratorImplementation from pypy.objspace.std.dictmultiobject import DictStrategy from pypy.objspace.std.typeobject import unwrap_cell -from pypy.interpreter.error import OperationError +from pypy.interpreter.error import OperationError, operationerrfmt from pypy.rlib import rerased @@ -44,7 +44,8 @@ raise if not w_type.is_cpytype(): raise - # xxx obscure workaround: allow cpyext to write to type->tp_dict. + # xxx obscure workaround: allow cpyext to write to type->tp_dict + # xxx even in the case of a builtin type. 
# xxx like CPython, we assume that this is only done early after # xxx the type is created, and we don't invalidate any cache. w_type.dict_w[key] = w_value @@ -86,8 +87,14 @@ for (key, w_value) in self.unerase(w_dict.dstorage).dict_w.iteritems()] def clear(self, w_dict): - self.unerase(w_dict.dstorage).dict_w.clear() - self.unerase(w_dict.dstorage).mutated(None) + space = self.space + w_type = self.unerase(w_dict.dstorage) + if (not space.config.objspace.std.mutable_builtintypes + and not w_type.is_heaptype()): + msg = "can't clear dictionary of type '%s'" + raise operationerrfmt(space.w_TypeError, msg, w_type.name) + w_type.dict_w.clear() + w_type.mutated(None) class DictProxyIteratorImplementation(IteratorImplementation): def __init__(self, space, strategy, dictimplementation): diff --git a/pypy/objspace/std/test/test_dictproxy.py b/pypy/objspace/std/test/test_dictproxy.py --- a/pypy/objspace/std/test/test_dictproxy.py +++ b/pypy/objspace/std/test/test_dictproxy.py @@ -22,6 +22,9 @@ assert NotEmpty.string == 1 raises(TypeError, 'NotEmpty.__dict__.setdefault(15, 1)') + key, value = NotEmpty.__dict__.popitem() + assert (key == 'a' and value == 1) or (key == 'b' and value == 4) + def test_dictproxyeq(self): class a(object): pass @@ -43,6 +46,11 @@ assert s1 == s2 assert s1.startswith('{') and s1.endswith('}') + def test_immutable_dict_on_builtin_type(self): + raises(TypeError, "int.__dict__['a'] = 1") + raises(TypeError, int.__dict__.popitem) + raises(TypeError, int.__dict__.clear) + class AppTestUserObjectMethodCache(AppTestUserObject): def setup_class(cls): cls.space = gettestobjspace( diff --git a/pypy/objspace/std/test/test_typeobject.py b/pypy/objspace/std/test/test_typeobject.py --- a/pypy/objspace/std/test/test_typeobject.py +++ b/pypy/objspace/std/test/test_typeobject.py @@ -993,7 +993,9 @@ raises(TypeError, setattr, list, 'append', 42) raises(TypeError, setattr, list, 'foobar', 42) raises(TypeError, delattr, dict, 'keys') - + raises(TypeError, 'int.__dict__["a"] = 1') + raises(TypeError, 'int.__dict__.clear()') + def test_nontype_in_mro(self): class OldStyle: pass diff --git a/pypy/rlib/objectmodel.py b/pypy/rlib/objectmodel.py --- a/pypy/rlib/objectmodel.py +++ b/pypy/rlib/objectmodel.py @@ -23,9 +23,11 @@ class _Specialize(object): def memo(self): - """ Specialize functions based on argument values. All arguments has - to be constant at the compile time. The whole function call is replaced - by a call result then. + """ Specialize the function based on argument values. All arguments + have to be either constants or PBCs (i.e. instances of classes with a + _freeze_ method returning True). The function call is replaced by + just its result, or in case several PBCs are used, by some fast + look-up of the result. """ def decorated_func(func): func._annspecialcase_ = 'specialize:memo' @@ -33,8 +35,8 @@ return decorated_func def arg(self, *args): - """ Specialize function based on values of given positions of arguments. - They must be compile-time constants in order to work. + """ Specialize the function based on the values of given positions + of arguments. They must be compile-time constants in order to work. There will be a copy of provided function for each combination of given arguments on positions in args (that can lead to @@ -82,8 +84,7 @@ return decorated_func def ll_and_arg(self, *args): - """ This is like ll(), but instead of specializing on all arguments, - specializes on only the arguments at the given positions + """ This is like ll(), and additionally like arg(...). 
""" def decorated_func(func): func._annspecialcase_ = 'specialize:ll_and_arg' + self._wrap(args) diff --git a/pypy/tool/release/package.py b/pypy/tool/release/package.py --- a/pypy/tool/release/package.py +++ b/pypy/tool/release/package.py @@ -82,6 +82,9 @@ for file in ['LICENSE', 'README']: shutil.copy(str(basedir.join(file)), str(pypydir)) pypydir.ensure('include', dir=True) + if sys.platform == 'win32': + shutil.copyfile(str(pypy_c.dirpath().join("libpypy-c.lib")), + str(pypydir.join('include/python27.lib'))) # we want to put there all *.h and *.inl from trunk/include # and from pypy/_interfaces includedir = basedir.join('include') diff --git a/pypy/translator/c/database.py b/pypy/translator/c/database.py --- a/pypy/translator/c/database.py +++ b/pypy/translator/c/database.py @@ -28,11 +28,13 @@ gctransformer = None def __init__(self, translator=None, standalone=False, + cpython_extension=False, gcpolicyclass=None, thread_enabled=False, sandbox=False): self.translator = translator self.standalone = standalone + self.cpython_extension = cpython_extension self.sandbox = sandbox if gcpolicyclass is None: gcpolicyclass = gc.RefcountingGcPolicy diff --git a/pypy/translator/c/dlltool.py b/pypy/translator/c/dlltool.py --- a/pypy/translator/c/dlltool.py +++ b/pypy/translator/c/dlltool.py @@ -14,11 +14,14 @@ CBuilder.__init__(self, *args, **kwds) def getentrypointptr(self): + entrypoints = [] bk = self.translator.annotator.bookkeeper - graphs = [bk.getdesc(f).cachedgraph(None) for f, _ in self.functions] - return [getfunctionptr(graph) for graph in graphs] + for f, _ in self.functions: + graph = bk.getdesc(f).getuniquegraph() + entrypoints.append(getfunctionptr(graph)) + return entrypoints - def gen_makefile(self, targetdir): + def gen_makefile(self, targetdir, exe_name=None): pass # XXX finish def compile(self): diff --git a/pypy/translator/c/extfunc.py b/pypy/translator/c/extfunc.py --- a/pypy/translator/c/extfunc.py +++ b/pypy/translator/c/extfunc.py @@ -106,7 +106,7 @@ yield ('RPYTHON_EXCEPTION_MATCH', exceptiondata.fn_exception_match) yield ('RPYTHON_TYPE_OF_EXC_INST', exceptiondata.fn_type_of_exc_inst) yield ('RPYTHON_RAISE_OSERROR', exceptiondata.fn_raise_OSError) - if not db.standalone: + if db.cpython_extension: yield ('RPYTHON_PYEXCCLASS2EXC', exceptiondata.fn_pyexcclass2exc) yield ('RPyExceptionOccurred1', exctransformer.rpyexc_occured_ptr.value) diff --git a/pypy/translator/c/gcc/trackgcroot.py b/pypy/translator/c/gcc/trackgcroot.py --- a/pypy/translator/c/gcc/trackgcroot.py +++ b/pypy/translator/c/gcc/trackgcroot.py @@ -471,8 +471,8 @@ return [] IGNORE_OPS_WITH_PREFIXES = dict.fromkeys([ - 'cmp', 'test', 'set', 'sahf', 'lahf', 'cltd', 'cld', 'std', - 'rep', 'movs', 'lods', 'stos', 'scas', 'cwtl', 'cwde', 'prefetch', + 'cmp', 'test', 'set', 'sahf', 'lahf', 'cld', 'std', + 'rep', 'movs', 'lods', 'stos', 'scas', 'cwde', 'prefetch', # floating-point operations cannot produce GC pointers 'f', 'cvt', 'ucomi', 'comi', 'subs', 'subp' , 'adds', 'addp', 'xorp', @@ -485,6 +485,8 @@ 'bswap', 'bt', 'rdtsc', 'punpck', 'pshufd', 'pcmp', 'pand', 'psllw', 'pslld', 'psllq', 'paddq', 'pinsr', + # sign-extending moves should not produce GC pointers + 'cbtw', 'cwtl', 'cwtd', 'cltd', 'cltq', 'cqto', # zero-extending moves should not produce GC pointers 'movz', # locked operations should not move GC pointers, at least so far @@ -1695,6 +1697,8 @@ } """ elif self.format in ('elf64', 'darwin64'): + if self.format == 'elf64': # gentoo patch: hardened systems + print >> output, "\t.section 
.note.GNU-stack,\"\",%progbits" print >> output, "\t.text" print >> output, "\t.globl %s" % _globalname('pypy_asm_stackwalk') _variant(elf64='.type pypy_asm_stackwalk, @function', diff --git a/pypy/translator/c/genc.py b/pypy/translator/c/genc.py --- a/pypy/translator/c/genc.py +++ b/pypy/translator/c/genc.py @@ -111,6 +111,7 @@ _compiled = False modulename = None split = False + cpython_extension = False def __init__(self, translator, entrypoint, config, gcpolicy=None, secondary_entrypoints=()): @@ -138,6 +139,7 @@ raise NotImplementedError("--gcrootfinder=asmgcc requires standalone") db = LowLevelDatabase(translator, standalone=self.standalone, + cpython_extension=self.cpython_extension, gcpolicyclass=gcpolicyclass, thread_enabled=self.config.translation.thread, sandbox=self.config.translation.sandbox) @@ -236,6 +238,8 @@ CBuilder.have___thread = self.translator.platform.check___thread() if not self.standalone: assert not self.config.translation.instrument + if self.cpython_extension: + defines['PYPY_CPYTHON_EXTENSION'] = 1 else: defines['PYPY_STANDALONE'] = db.get(pf) if self.config.translation.instrument: @@ -307,13 +311,18 @@ class CExtModuleBuilder(CBuilder): standalone = False + cpython_extension = True _module = None _wrapper = None def get_eci(self): from distutils import sysconfig python_inc = sysconfig.get_python_inc() - eci = ExternalCompilationInfo(include_dirs=[python_inc]) + eci = ExternalCompilationInfo( + include_dirs=[python_inc], + includes=["Python.h", + ], + ) return eci.merge(CBuilder.get_eci(self)) def getentrypointptr(self): # xxx diff --git a/pypy/translator/c/src/exception.h b/pypy/translator/c/src/exception.h --- a/pypy/translator/c/src/exception.h +++ b/pypy/translator/c/src/exception.h @@ -2,7 +2,7 @@ /************************************************************/ /*** C header subsection: exceptions ***/ -#if !defined(PYPY_STANDALONE) && !defined(PYPY_NOT_MAIN_FILE) +#if defined(PYPY_CPYTHON_EXTENSION) && !defined(PYPY_NOT_MAIN_FILE) PyObject *RPythonError; #endif @@ -74,7 +74,7 @@ RPyRaiseException(RPYTHON_TYPE_OF_EXC_INST(rexc), rexc); } -#ifndef PYPY_STANDALONE +#ifdef PYPY_CPYTHON_EXTENSION void RPyConvertExceptionFromCPython(void) { /* convert the CPython exception to an RPython one */ diff --git a/pypy/translator/c/src/g_include.h b/pypy/translator/c/src/g_include.h --- a/pypy/translator/c/src/g_include.h +++ b/pypy/translator/c/src/g_include.h @@ -2,7 +2,7 @@ /************************************************************/ /*** C header file for code produced by genc.py ***/ -#ifndef PYPY_STANDALONE +#ifdef PYPY_CPYTHON_EXTENSION # include "Python.h" # include "compile.h" # include "frameobject.h" diff --git a/pypy/translator/c/src/g_prerequisite.h b/pypy/translator/c/src/g_prerequisite.h --- a/pypy/translator/c/src/g_prerequisite.h +++ b/pypy/translator/c/src/g_prerequisite.h @@ -5,8 +5,6 @@ #ifdef PYPY_STANDALONE # include "src/commondefs.h" -#else -# include "Python.h" #endif #ifdef _WIN32 diff --git a/pypy/translator/c/src/pyobj.h b/pypy/translator/c/src/pyobj.h --- a/pypy/translator/c/src/pyobj.h +++ b/pypy/translator/c/src/pyobj.h @@ -2,7 +2,7 @@ /************************************************************/ /*** C header subsection: untyped operations ***/ /*** as OP_XXX() macros calling the CPython API ***/ - +#ifdef PYPY_CPYTHON_EXTENSION #define op_bool(r,what) { \ int _retval = what; \ @@ -261,3 +261,5 @@ } #endif + +#endif /* PYPY_CPYTHON_EXTENSION */ diff --git a/pypy/translator/c/src/support.h b/pypy/translator/c/src/support.h --- 
a/pypy/translator/c/src/support.h +++ b/pypy/translator/c/src/support.h @@ -104,7 +104,7 @@ # define RPyBareItem(array, index) ((array)[index]) #endif -#ifndef PYPY_STANDALONE +#ifdef PYPY_CPYTHON_EXTENSION /* prototypes */ diff --git a/pypy/translator/c/test/test_dlltool.py b/pypy/translator/c/test/test_dlltool.py --- a/pypy/translator/c/test/test_dlltool.py +++ b/pypy/translator/c/test/test_dlltool.py @@ -2,7 +2,6 @@ from pypy.translator.c.dlltool import DLLDef from ctypes import CDLL import py -py.test.skip("fix this if needed") class TestDLLTool(object): def test_basic(self): @@ -16,8 +15,8 @@ d = DLLDef('lib', [(f, [int]), (b, [int])]) so = d.compile() dll = CDLL(str(so)) - assert dll.f(3) == 3 - assert dll.b(10) == 12 + assert dll.pypy_g_f(3) == 3 + assert dll.pypy_g_b(10) == 12 def test_split_criteria(self): def f(x): @@ -28,4 +27,5 @@ d = DLLDef('lib', [(f, [int]), (b, [int])]) so = d.compile() - assert py.path.local(so).dirpath().join('implement.c').check() + dirpath = py.path.local(so).dirpath() + assert dirpath.join('translator_c_test_test_dlltool.c').check() diff --git a/pypy/translator/driver.py b/pypy/translator/driver.py --- a/pypy/translator/driver.py +++ b/pypy/translator/driver.py @@ -331,6 +331,7 @@ raise Exception("stand-alone program entry point must return an " "int (and not, e.g., None or always raise an " "exception).") + annotator.complete() annotator.simplify() return s @@ -558,6 +559,9 @@ newsoname = newexename.new(basename=soname.basename) shutil.copy(str(soname), str(newsoname)) self.log.info("copied: %s" % (newsoname,)) + if sys.platform == 'win32': + shutil.copyfile(str(soname.new(ext='lib')), + str(newsoname.new(ext='lib'))) self.c_entryp = newexename self.log.info('usession directory: %s' % (udir,)) self.log.info("created: %s" % (self.c_entryp,)) From noreply at buildbot.pypy.org Wed Feb 22 02:25:29 2012 From: noreply at buildbot.pypy.org (pjenvey) Date: Wed, 22 Feb 2012 02:25:29 +0100 (CET) Subject: [pypy-commit] extradoc extradoc: legal counseling Message-ID: <20120222012529.6D49D8203C@wyvern.cs.uni-duesseldorf.de> Author: Philip Jenvey Branch: extradoc Changeset: r4097:12a4ad40ab30 Date: 2012-02-21 17:24 -0800 http://bitbucket.org/pypy/extradoc/changeset/12a4ad40ab30/ Log: legal counseling diff --git a/talk/sea2012/talk.rst b/talk/sea2012/talk.rst --- a/talk/sea2012/talk.rst +++ b/talk/sea2012/talk.rst @@ -21,7 +21,7 @@ * A framework for writing efficient dynamic language implementations -* An open source project with a lot of volunteer effort, released under the BSD license +* An open source project with a lot of volunteer effort, released under the MIT license * I'll talk today about the first part (mostly) From noreply at buildbot.pypy.org Wed Feb 22 02:26:12 2012 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Wed, 22 Feb 2012 02:26:12 +0100 (CET) Subject: [pypy-commit] extradoc extradoc: more text Message-ID: <20120222012612.CF59E8203C@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: extradoc Changeset: r4098:3dca0a7b9146 Date: 2012-02-21 20:25 -0500 http://bitbucket.org/pypy/extradoc/changeset/3dca0a7b9146/ Log: more text diff --git a/talk/pycon2012/tutorial/emails/01_numpy.rst b/talk/pycon2012/tutorial/emails/01_numpy.rst --- a/talk/pycon2012/tutorial/emails/01_numpy.rst +++ b/talk/pycon2012/tutorial/emails/01_numpy.rst @@ -13,5 +13,10 @@ CPython's NumPy. If you're interested in hearing about this in our tutorial, please let us know. 
+We also plan on going through an open source application and demonstrating some +performance analysis and optimization on it. If you could give us a list of a +few open source application you'd like to see us take a look at, and we'll +choose the most popular one. + Thanks, Alex Gaynor, Maciej Fijalkoski, Armin Rigo From noreply at buildbot.pypy.org Wed Feb 22 02:26:14 2012 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Wed, 22 Feb 2012 02:26:14 +0100 (CET) Subject: [pypy-commit] extradoc extradoc: merged upstream Message-ID: <20120222012614.069438203C@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: extradoc Changeset: r4099:ba04f56772e5 Date: 2012-02-21 20:26 -0500 http://bitbucket.org/pypy/extradoc/changeset/ba04f56772e5/ Log: merged upstream diff --git a/planning/separate-compilation.txt b/planning/separate-compilation.txt new file mode 100644 --- /dev/null +++ b/planning/separate-compilation.txt @@ -0,0 +1,81 @@ +Separate Compilation +==================== + +Goal +---- + +Translation extension modules written in RPython. +The best form is probably the MixedModule. + +Strategy +-------- + +The main executable (bin/pypy-c or libpypy-c.dll) exports RPython +functions; this "first translation" also produces a pickled object +that describe these functions: signatures, exception info, etc. + +It will probably be necessary to list all exported functions and methods, +or mark them with some @exported decorator. + +The translation of an extension module (the "second translation") will +reuse the information from the pickled object; the content of the +MixedModule is annotated as usual, except that functions exported by +the main executable are now external calls. + +The extension module simply has to export a single function +"init_module()", which at runtime uses space operations to create and +install a module. + + +Roadmap +------- + +* First, a framework to test and measure progress; builds two + shared libraries (.so or .dll): + + - the first one is the "core module", which exports functions + - that can be called from the second module, which exports a single + entry point that we call call with ctypes. + +* Find a way to mark functions as "exported". We need to either + provide a signature, or be sure that the functions is somehow + annotated (because it is already used by the core interpreter) + +* Pass structures (as opaque pointers). At this step, only the core + module has access to the fields. + +* Implement access to struct fields: an idea is to use a Controller + object, and redirect attribute access to the ClassRepr computed by + the first translation. + +* Implement method calls, again with the help of the Controller which + can replace calls to bound methods with calls to exported functions. + +* Share the ExceptionTransformer between the modules: a RPython + exception raised on one side can be caught by the other side. + +* Support subclassing. Two issues here: + + - isinstance() is translated into a range check, but these minid and + maxid work because all classes are known at translation time. + Subclasses defined in the second module must use the same minid + and maxid as their parent; isinstance(X, SecondModuleClass) should + use an additional field. Be sure to not confuse classes + separately created in two extension modules. + + - virtual methods, that override methods defined in the first + module. 
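[Editorial aside: the Strategy section of planning/separate-compilation.txt above only says that exported RPython functions might be "marked with some @exported decorator". As a purely illustrative sketch — the name, the signature argument and the registry below are assumptions, not an existing PyPy API — such a marker could record enough information for the "first translation" to pickle:]

    # Hypothetical sketch only; nothing here is part of PyPy.
    EXPORTED = {}

    def exported(*argtypes):
        """Mark an RPython function for export from the first translation."""
        def decorate(func):
            # remember the function and its declared argument types
            EXPORTED[func.__name__] = (func, list(argtypes))
            return func
        return decorate

    @exported(int, int)
    def add(x, y):
        return x + y

    # The first translation could pickle EXPORTED's names and signatures;
    # a second translation would then turn calls to 'add' into external calls.

[The roadmap list from the diff continues below.]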
+ +* specialize.memo() needs to know all possible values of a + PreBuildConstant to compute the results during translation and build + some kind of lookup table. The most obvious case is the function + space.gettypeobject(typedef). Fortunately a PBC defined in a module + can only be used from the same module, so the list of prebuilt + results is probably local to the same module and this is not really + an issue. + +* Integration with GC. The GC functions should be exported from the + first module, and we need a way to register the static roots of the + second module. + +* Integration with the JIT. diff --git a/talk/sea2012/talk.rst b/talk/sea2012/talk.rst --- a/talk/sea2012/talk.rst +++ b/talk/sea2012/talk.rst @@ -8,9 +8,9 @@ * What is PyPy and why? -* Numeric landscape in python +* Numeric landscape in Python -* What we achieved in PyPy? +* What we achieved in PyPy * Where we're going? @@ -21,7 +21,7 @@ * A framework for writing efficient dynamic language implementations -* An open source project with a lot of volunteer effort, released under the BSD license +* An open source project with a lot of volunteer effort, released under the MIT license * I'll talk today about the first part (mostly) @@ -36,28 +36,28 @@ * XXX some benchmarks -Why would you care? -------------------- +Why should you care? +-------------------- * *If I write this stuff in C/fortran/assembler it'll be faster anyway* * maybe, but ... -Why would you care (2) ----------------------- +Why should you care? (2) +------------------------ * Experimentation is important * Implementing something faster, in **human time**, leaves more time for optimizations and improvements -* For novel algorithms, being clearly expressed in code makes them easier to evaluate (Python is cleaner than C often) +* For novel algorithms, clearer implementation makes them easier to evaluate (Python often is cleaner than C) |pause| * Sometimes makes it **possible** in the first place -Why would you care even more ----------------------------- +Why would you care even more? +----------------------------- * Growing community @@ -65,8 +65,8 @@ * There are many smart people out there addressing hard problems -Example why would you care --------------------------- +Example of why would you care +----------------------------- * You spend a year writing optimized algorithms for a GPU @@ -78,11 +78,11 @@ * Alternative - **express** your algorithms -* Leave low-level details for people who have nothing better to do +* Leave low-level details to people who have nothing better to do |pause| -* .. like me (I don't know enough physics to do the other part) +* ... 
like me (I don't know enough Physics to do the other part) Numerics in Python ------------------ @@ -197,7 +197,7 @@ |pause| -* However, leave knobs and buttons for advanced users +* However, retain knobs and buttons for advanced users * Don't get penalized too much for not using them From noreply at buildbot.pypy.org Wed Feb 22 02:35:26 2012 From: noreply at buildbot.pypy.org (fijal) Date: Wed, 22 Feb 2012 02:35:26 +0100 (CET) Subject: [pypy-commit] extradoc extradoc: missing letter and some more slides Message-ID: <20120222013526.9A8698203C@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: extradoc Changeset: r4100:8c327578c393 Date: 2012-02-21 18:35 -0700 http://bitbucket.org/pypy/extradoc/changeset/8c327578c393/ Log: missing letter and some more slides diff --git a/talk/pycon2012/tutorial/emails/01_numpy.rst b/talk/pycon2012/tutorial/emails/01_numpy.rst --- a/talk/pycon2012/tutorial/emails/01_numpy.rst +++ b/talk/pycon2012/tutorial/emails/01_numpy.rst @@ -19,4 +19,4 @@ choose the most popular one. Thanks, -Alex Gaynor, Maciej Fijalkoski, Armin Rigo +Alex Gaynor, Maciej Fijalkowski, Armin Rigo diff --git a/talk/sea2012/talk.rst b/talk/sea2012/talk.rst --- a/talk/sea2012/talk.rst +++ b/talk/sea2012/talk.rst @@ -23,6 +23,8 @@ * An open source project with a lot of volunteer effort, released under the MIT license +* Agile development, 13000 unit tests, continuous integration, sprints, distributed team + * I'll talk today about the first part (mostly) PyPy status right now @@ -34,7 +36,7 @@ * Example - real time video processing -* XXX some benchmarks +* 2-300x faster on Python code Why should you care? -------------------- @@ -140,8 +142,6 @@ * Build a tree of operations -XXX a tree picture - * Compile assembler specialized for aliasing and operations * Execute the specialized assembler @@ -158,7 +158,19 @@ Performance comparison ---------------------- -XXX ++---------------------+-------+------+-----+-----------+ +| | NumPy | PyPy | GCC | Pathscale | ++---------------------+-------+------+-----+-----------+ +| ``a+b`` | ++---------------------+-------+------+-----+-----------+ +| ``a+(b+c)`` | ++---------------------+-------+------+-----+-----------+ +| ``(a+b)+((c+d)+e)`` | ++---------------------+-------+------+-----+-----------+ + +|pause| + +* Pathscale is insane, but we'll get there ;-) Status ------ @@ -171,11 +183,6 @@ * Vectorization in progress -Status benchmarks - trivial stuff ---------------------------------- - -XXX - Status benchmarks - slightly more complex ----------------------------------------- @@ -183,10 +190,9 @@ * solutions: - XXX laplace numbers - +---+ - | | - +---+ + +-------+------+-----+-----------+ + | Numpy | PyPy | GCC | Pathscale | + +-------+------+-----+-----------+ Progress plan ------------- From noreply at buildbot.pypy.org Wed Feb 22 02:36:02 2012 From: noreply at buildbot.pypy.org (fijal) Date: Wed, 22 Feb 2012 02:36:02 +0100 (CET) Subject: [pypy-commit] pypy default: add a jitdriver here Message-ID: <20120222013602.40E508203C@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: Changeset: r52752:126806ef2466 Date: 2012-02-21 18:35 -0700 http://bitbucket.org/pypy/pypy/changeset/126806ef2466/ Log: add a jitdriver here diff --git a/pypy/module/micronumpy/interp_support.py b/pypy/module/micronumpy/interp_support.py --- a/pypy/module/micronumpy/interp_support.py +++ b/pypy/module/micronumpy/interp_support.py @@ -3,7 +3,7 @@ from pypy.rpython.lltypesystem import lltype, rffi from pypy.module.micronumpy import interp_dtype from 
pypy.objspace.std.strutil import strip_spaces - +from pypy.rlib import jit FLOAT_SIZE = rffi.sizeof(lltype.Float) @@ -72,11 +72,18 @@ "string is smaller than requested size")) a = W_NDimArray(count, [count], dtype=dtype) + fromstring_loop(a, count, dtype, itemsize, s) + return space.wrap(a) + +fromstring_driver = jit.JitDriver(greens=[], reds=['a', 'count', 'dtype', + 'itemsize', 's']) + +def fromstring_loop(a, count, dtype, itemsize, s): for i in range(count): + fromstring_driver.jit_merge_point(a=a, count=count, dtype=dtype, + itemsize=itemsize, s=s) val = dtype.itemtype.runpack_str(s[i*itemsize:i*itemsize + itemsize]) a.dtype.setitem(a.storage, i, val) - - return space.wrap(a) @unwrap_spec(s=str, count=int, sep=str) def fromstring(space, s, w_dtype=None, count=-1, sep=''): From noreply at buildbot.pypy.org Wed Feb 22 02:39:50 2012 From: noreply at buildbot.pypy.org (fijal) Date: Wed, 22 Feb 2012 02:39:50 +0100 (CET) Subject: [pypy-commit] pypy backend-vector-ops: merge default Message-ID: <20120222013950.7D0DC8203C@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: backend-vector-ops Changeset: r52753:76bf35c876fa Date: 2012-02-21 18:39 -0700 http://bitbucket.org/pypy/pypy/changeset/76bf35c876fa/ Log: merge default diff --git a/lib_pypy/_ctypes/array.py b/lib_pypy/_ctypes/array.py --- a/lib_pypy/_ctypes/array.py +++ b/lib_pypy/_ctypes/array.py @@ -1,9 +1,9 @@ - +import _ffi import _rawffi from _ctypes.basics import _CData, cdata_from_address, _CDataMeta, sizeof from _ctypes.basics import keepalive_key, store_reference, ensure_objects -from _ctypes.basics import CArgObject +from _ctypes.basics import CArgObject, as_ffi_pointer class ArrayMeta(_CDataMeta): def __new__(self, name, cls, typedict): @@ -211,6 +211,9 @@ def _to_ffi_param(self): return self._get_buffer_value() + def _as_ffi_pointer_(self, ffitype): + return as_ffi_pointer(self, ffitype) + ARRAY_CACHE = {} def create_array_type(base, length): @@ -228,5 +231,6 @@ _type_ = base ) cls = ArrayMeta(name, (Array,), tpdict) + cls._ffiargtype = _ffi.types.Pointer(base.get_ffi_argtype()) ARRAY_CACHE[key] = cls return cls diff --git a/lib_pypy/_ctypes/basics.py b/lib_pypy/_ctypes/basics.py --- a/lib_pypy/_ctypes/basics.py +++ b/lib_pypy/_ctypes/basics.py @@ -230,5 +230,16 @@ } +# called from primitive.py, pointer.py, array.py +def as_ffi_pointer(value, ffitype): + my_ffitype = type(value).get_ffi_argtype() + # for now, we always allow types.pointer, else a lot of tests + # break. 
We need to rethink how pointers are represented, though + if my_ffitype is not ffitype and ffitype is not _ffi.types.void_p: + raise ArgumentError("expected %s instance, got %s" % (type(value), + ffitype)) + return value._get_buffer_value() + + # used by "byref" from _ctypes.pointer import pointer diff --git a/lib_pypy/_ctypes/pointer.py b/lib_pypy/_ctypes/pointer.py --- a/lib_pypy/_ctypes/pointer.py +++ b/lib_pypy/_ctypes/pointer.py @@ -3,7 +3,7 @@ import _ffi from _ctypes.basics import _CData, _CDataMeta, cdata_from_address, ArgumentError from _ctypes.basics import keepalive_key, store_reference, ensure_objects -from _ctypes.basics import sizeof, byref +from _ctypes.basics import sizeof, byref, as_ffi_pointer from _ctypes.array import Array, array_get_slice_params, array_slice_getitem,\ array_slice_setitem @@ -119,14 +119,6 @@ def _as_ffi_pointer_(self, ffitype): return as_ffi_pointer(self, ffitype) -def as_ffi_pointer(value, ffitype): - my_ffitype = type(value).get_ffi_argtype() - # for now, we always allow types.pointer, else a lot of tests - # break. We need to rethink how pointers are represented, though - if my_ffitype is not ffitype and ffitype is not _ffi.types.void_p: - raise ArgumentError("expected %s instance, got %s" % (type(value), - ffitype)) - return value._get_buffer_value() def _cast_addr(obj, _, tp): if not (isinstance(tp, _CDataMeta) and tp._is_pointer_like()): diff --git a/pypy/config/translationoption.py b/pypy/config/translationoption.py --- a/pypy/config/translationoption.py +++ b/pypy/config/translationoption.py @@ -105,7 +105,8 @@ BoolOption("sandbox", "Produce a fully-sandboxed executable", default=False, cmdline="--sandbox", requires=[("translation.thread", False)], - suggests=[("translation.gc", "generation")]), + suggests=[("translation.gc", "generation"), + ("translation.gcrootfinder", "shadowstack")]), BoolOption("rweakref", "The backend supports RPython-level weakrefs", default=True), diff --git a/pypy/doc/cpython_differences.rst b/pypy/doc/cpython_differences.rst --- a/pypy/doc/cpython_differences.rst +++ b/pypy/doc/cpython_differences.rst @@ -313,5 +313,10 @@ implementation detail that shows up because of internal C-level slots that PyPy does not have. +* the ``__dict__`` attribute of new-style classes returns a normal dict, as + opposed to a dict proxy like in CPython. Mutating the dict will change the + type and vice versa. For builtin types, a dictionary will be returned that + cannot be changed (but still looks and behaves like a normal dictionary). + .. include:: _ref.txt diff --git a/pypy/interpreter/pyframe.py b/pypy/interpreter/pyframe.py --- a/pypy/interpreter/pyframe.py +++ b/pypy/interpreter/pyframe.py @@ -60,11 +60,10 @@ self.pycode = code eval.Frame.__init__(self, space, w_globals) self.locals_stack_w = [None] * (code.co_nlocals + code.co_stacksize) - self.nlocals = code.co_nlocals self.valuestackdepth = code.co_nlocals self.lastblock = None make_sure_not_resized(self.locals_stack_w) - check_nonneg(self.nlocals) + check_nonneg(self.valuestackdepth) # if space.config.objspace.honor__builtins__: self.builtin = space.builtin.pick_builtin(w_globals) @@ -144,8 +143,8 @@ def execute_frame(self, w_inputvalue=None, operr=None): """Execute this frame. Main entry point to the interpreter. The optional arguments are there to handle a generator's frame: - w_inputvalue is for generator.send()) and operr is for - generator.throw()). + w_inputvalue is for generator.send() and operr is for + generator.throw(). 
""" # the following 'assert' is an annotation hint: it hides from # the annotator all methods that are defined in PyFrame but @@ -195,7 +194,7 @@ def popvalue(self): depth = self.valuestackdepth - 1 - assert depth >= self.nlocals, "pop from empty value stack" + assert depth >= self.pycode.co_nlocals, "pop from empty value stack" w_object = self.locals_stack_w[depth] self.locals_stack_w[depth] = None self.valuestackdepth = depth @@ -223,7 +222,7 @@ def peekvalues(self, n): values_w = [None] * n base = self.valuestackdepth - n - assert base >= self.nlocals + assert base >= self.pycode.co_nlocals while True: n -= 1 if n < 0: @@ -235,7 +234,8 @@ def dropvalues(self, n): n = hint(n, promote=True) finaldepth = self.valuestackdepth - n - assert finaldepth >= self.nlocals, "stack underflow in dropvalues()" + assert finaldepth >= self.pycode.co_nlocals, ( + "stack underflow in dropvalues()") while True: n -= 1 if n < 0: @@ -267,13 +267,15 @@ # Contrast this with CPython where it's PEEK(-1). index_from_top = hint(index_from_top, promote=True) index = self.valuestackdepth + ~index_from_top - assert index >= self.nlocals, "peek past the bottom of the stack" + assert index >= self.pycode.co_nlocals, ( + "peek past the bottom of the stack") return self.locals_stack_w[index] def settopvalue(self, w_object, index_from_top=0): index_from_top = hint(index_from_top, promote=True) index = self.valuestackdepth + ~index_from_top - assert index >= self.nlocals, "settop past the bottom of the stack" + assert index >= self.pycode.co_nlocals, ( + "settop past the bottom of the stack") self.locals_stack_w[index] = w_object @jit.unroll_safe @@ -320,12 +322,13 @@ else: f_lineno = self.f_lineno - values_w = self.locals_stack_w[self.nlocals:self.valuestackdepth] + nlocals = self.pycode.co_nlocals + values_w = self.locals_stack_w[nlocals:self.valuestackdepth] w_valuestack = maker.slp_into_tuple_with_nulls(space, values_w) w_blockstack = nt([block._get_state_(space) for block in self.get_blocklist()]) w_fastlocals = maker.slp_into_tuple_with_nulls( - space, self.locals_stack_w[:self.nlocals]) + space, self.locals_stack_w[:nlocals]) if self.last_exception is None: w_exc_value = space.w_None w_tb = space.w_None @@ -442,7 +445,7 @@ """Initialize the fast locals from a list of values, where the order is according to self.pycode.signature().""" scope_len = len(scope_w) - if scope_len > self.nlocals: + if scope_len > self.pycode.co_nlocals: raise ValueError, "new fastscope is longer than the allocated area" # don't assign directly to 'locals_stack_w[:scope_len]' to be # virtualizable-friendly @@ -456,7 +459,7 @@ pass def getfastscopelength(self): - return self.nlocals + return self.pycode.co_nlocals def getclosure(self): return None diff --git a/pypy/jit/backend/test/runner_test.py b/pypy/jit/backend/test/runner_test.py --- a/pypy/jit/backend/test/runner_test.py +++ b/pypy/jit/backend/test/runner_test.py @@ -2221,6 +2221,35 @@ print 'step 4 ok' print '-'*79 + def test_guard_not_invalidated_and_label(self): + # test that the guard_not_invalidated reserves enough room before + # the label. 
If it doesn't, then in this example after we invalidate + # the guard, jumping to the label will hit the invalidation code too + cpu = self.cpu + i0 = BoxInt() + faildescr = BasicFailDescr(1) + labeldescr = TargetToken() + ops = [ + ResOperation(rop.GUARD_NOT_INVALIDATED, [], None, descr=faildescr), + ResOperation(rop.LABEL, [i0], None, descr=labeldescr), + ResOperation(rop.FINISH, [i0], None, descr=BasicFailDescr(3)), + ] + ops[0].setfailargs([]) + looptoken = JitCellToken() + self.cpu.compile_loop([i0], ops, looptoken) + # mark as failing + self.cpu.invalidate_loop(looptoken) + # attach a bridge + i2 = BoxInt() + ops = [ + ResOperation(rop.JUMP, [ConstInt(333)], None, descr=labeldescr), + ] + self.cpu.compile_bridge(faildescr, [], ops, looptoken) + # run: must not be caught in an infinite loop + fail = self.cpu.execute_token(looptoken, 16) + assert fail.identifier == 3 + assert self.cpu.get_latest_value_int(0) == 333 + # pure do_ / descr features def test_do_operations(self): diff --git a/pypy/jit/backend/x86/regalloc.py b/pypy/jit/backend/x86/regalloc.py --- a/pypy/jit/backend/x86/regalloc.py +++ b/pypy/jit/backend/x86/regalloc.py @@ -173,7 +173,6 @@ self.jump_target_descr = None self.close_stack_struct = 0 self.final_jump_op = None - self.min_bytes_before_label = 0 def _prepare(self, inputargs, operations, allgcrefs): self.fm = X86FrameManager() @@ -207,8 +206,13 @@ operations = self._prepare(inputargs, operations, allgcrefs) self._update_bindings(arglocs, inputargs) self.param_depth = prev_depths[1] + self.min_bytes_before_label = 0 return operations + def ensure_next_label_is_at_least_at_position(self, at_least_position): + self.min_bytes_before_label = max(self.min_bytes_before_label, + at_least_position) + def reserve_param(self, n): self.param_depth = max(self.param_depth, n) @@ -479,7 +483,11 @@ self.assembler.mc.mark_op(None) # end of the loop def flush_loop(self): - # rare case: if the loop is too short, pad with NOPs + # rare case: if the loop is too short, or if we are just after + # a GUARD_NOT_INVALIDATED, pad with NOPs. Important! This must + # be called to ensure that there are enough bytes produced, + # because GUARD_NOT_INVALIDATED or redirect_call_assembler() + # will maybe overwrite them. mc = self.assembler.mc while mc.get_relative_pos() < self.min_bytes_before_label: mc.NOP() @@ -569,7 +577,15 @@ def consider_guard_no_exception(self, op): self.perform_guard(op, [], None) - consider_guard_not_invalidated = consider_guard_no_exception + def consider_guard_not_invalidated(self, op): + mc = self.assembler.mc + n = mc.get_relative_pos() + self.perform_guard(op, [], None) + assert n == mc.get_relative_pos() + # ensure that the next label is at least 5 bytes farther than + # the current position. Otherwise, when invalidating the guard, + # we would overwrite randomly the next label's position. 
+ self.ensure_next_label_is_at_least_at_position(n + 5) def consider_guard_exception(self, op): loc = self.rm.make_sure_var_in_reg(op.getarg(0)) diff --git a/pypy/jit/metainterp/optimizeopt/optimizer.py b/pypy/jit/metainterp/optimizeopt/optimizer.py --- a/pypy/jit/metainterp/optimizeopt/optimizer.py +++ b/pypy/jit/metainterp/optimizeopt/optimizer.py @@ -570,7 +570,7 @@ assert isinstance(descr, compile.ResumeGuardDescr) modifier = resume.ResumeDataVirtualAdder(descr, self.resumedata_memo) try: - newboxes = modifier.finish(self.values, self.pendingfields) + newboxes = modifier.finish(self, self.pendingfields) if len(newboxes) > self.metainterp_sd.options.failargs_limit: raise resume.TagOverflow except resume.TagOverflow: diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py @@ -7760,6 +7760,59 @@ """ self.optimize_loop(ops, expected) + def test_constant_failargs(self): + ops = """ + [p1, i2, i3] + setfield_gc(p1, ConstPtr(myptr), descr=nextdescr) + p16 = getfield_gc(p1, descr=nextdescr) + guard_true(i2) [p16, i3] + jump(p1, i3, i2) + """ + preamble = """ + [p1, i2, i3] + setfield_gc(p1, ConstPtr(myptr), descr=nextdescr) + guard_true(i2) [i3] + jump(p1, i3) + """ + expected = """ + [p1, i3] + guard_true(i3) [] + jump(p1, 1) + """ + self.optimize_loop(ops, expected, preamble) + + def test_issue1048(self): + ops = """ + [p1, i2, i3] + p16 = getfield_gc(p1, descr=nextdescr) + guard_true(i2) [p16] + setfield_gc(p1, ConstPtr(myptr), descr=nextdescr) + jump(p1, i3, i2) + """ + expected = """ + [p1, i3] + guard_true(i3) [] + jump(p1, 1) + """ + self.optimize_loop(ops, expected) + + def test_issue1048_ok(self): + ops = """ + [p1, i2, i3] + p16 = getfield_gc(p1, descr=nextdescr) + call(p16, descr=nonwritedescr) + guard_true(i2) [p16] + setfield_gc(p1, ConstPtr(myptr), descr=nextdescr) + jump(p1, i3, i2) + """ + expected = """ + [p1, i3] + call(ConstPtr(myptr), descr=nonwritedescr) + guard_true(i3) [] + jump(p1, 1) + """ + self.optimize_loop(ops, expected) + class TestLLtype(OptimizeOptTest, LLtypeMixin): pass diff --git a/pypy/jit/metainterp/resume.py b/pypy/jit/metainterp/resume.py --- a/pypy/jit/metainterp/resume.py +++ b/pypy/jit/metainterp/resume.py @@ -182,23 +182,22 @@ # env numbering - def number(self, values, snapshot): + def number(self, optimizer, snapshot): if snapshot is None: return lltype.nullptr(NUMBERING), {}, 0 if snapshot in self.numberings: numb, liveboxes, v = self.numberings[snapshot] return numb, liveboxes.copy(), v - numb1, liveboxes, v = self.number(values, snapshot.prev) + numb1, liveboxes, v = self.number(optimizer, snapshot.prev) n = len(liveboxes)-v boxes = snapshot.boxes length = len(boxes) numb = lltype.malloc(NUMBERING, length) for i in range(length): box = boxes[i] - value = values.get(box, None) - if value is not None: - box = value.get_key_box() + value = optimizer.getvalue(box) + box = value.get_key_box() if isinstance(box, Const): tagged = self.getconst(box) @@ -318,14 +317,14 @@ _, tagbits = untag(tagged) return tagbits == TAGVIRTUAL - def finish(self, values, pending_setfields=[]): + def finish(self, optimizer, pending_setfields=[]): # compute the numbering storage = self.storage # make sure that nobody attached resume data to this guard yet assert not storage.rd_numb snapshot = storage.rd_snapshot assert snapshot is not None # is that true? 
- numb, liveboxes_from_env, v = self.memo.number(values, snapshot) + numb, liveboxes_from_env, v = self.memo.number(optimizer, snapshot) self.liveboxes_from_env = liveboxes_from_env self.liveboxes = {} storage.rd_numb = numb @@ -341,23 +340,23 @@ liveboxes[i] = box else: assert tagbits == TAGVIRTUAL - value = values[box] + value = optimizer.getvalue(box) value.get_args_for_fail(self) for _, box, fieldbox, _ in pending_setfields: self.register_box(box) self.register_box(fieldbox) - value = values[fieldbox] + value = optimizer.getvalue(fieldbox) value.get_args_for_fail(self) - self._number_virtuals(liveboxes, values, v) + self._number_virtuals(liveboxes, optimizer, v) self._add_pending_fields(pending_setfields) storage.rd_consts = self.memo.consts dump_storage(storage, liveboxes) return liveboxes[:] - def _number_virtuals(self, liveboxes, values, num_env_virtuals): + def _number_virtuals(self, liveboxes, optimizer, num_env_virtuals): # !! 'liveboxes' is a list that is extend()ed in-place !! memo = self.memo new_liveboxes = [None] * memo.num_cached_boxes() @@ -397,7 +396,7 @@ memo.nvholes += length - len(vfieldboxes) for virtualbox, fieldboxes in vfieldboxes.iteritems(): num, _ = untag(self.liveboxes[virtualbox]) - value = values[virtualbox] + value = optimizer.getvalue(virtualbox) fieldnums = [self._gettagged(box) for box in fieldboxes] vinfo = value.make_virtual_info(self, fieldnums) diff --git a/pypy/jit/metainterp/test/test_resume.py b/pypy/jit/metainterp/test/test_resume.py --- a/pypy/jit/metainterp/test/test_resume.py +++ b/pypy/jit/metainterp/test/test_resume.py @@ -18,6 +18,19 @@ rd_virtuals = None rd_pendingfields = None + +class FakeOptimizer(object): + def __init__(self, values): + self.values = values + + def getvalue(self, box): + try: + value = self.values[box] + except KeyError: + value = self.values[box] = OptValue(box) + return value + + def test_tag(): assert tag(3, 1) == rffi.r_short(3<<2|1) assert tag(-3, 2) == rffi.r_short(-3<<2|2) @@ -500,7 +513,7 @@ capture_resumedata(fs, None, [], storage) memo = ResumeDataLoopMemo(FakeMetaInterpStaticData()) modifier = ResumeDataVirtualAdder(storage, memo) - liveboxes = modifier.finish({}) + liveboxes = modifier.finish(FakeOptimizer({})) metainterp = MyMetaInterp() b1t, b2t, b3t = [BoxInt(), BoxPtr(), BoxInt()] @@ -524,7 +537,7 @@ capture_resumedata(fs, [b4], [], storage) memo = ResumeDataLoopMemo(FakeMetaInterpStaticData()) modifier = ResumeDataVirtualAdder(storage, memo) - liveboxes = modifier.finish({}) + liveboxes = modifier.finish(FakeOptimizer({})) metainterp = MyMetaInterp() b1t, b2t, b3t, b4t = [BoxInt(), BoxPtr(), BoxInt(), BoxPtr()] @@ -553,10 +566,10 @@ memo = ResumeDataLoopMemo(FakeMetaInterpStaticData()) modifier = ResumeDataVirtualAdder(storage, memo) - liveboxes = modifier.finish({}) + liveboxes = modifier.finish(FakeOptimizer({})) modifier = ResumeDataVirtualAdder(storage2, memo) - liveboxes2 = modifier.finish({}) + liveboxes2 = modifier.finish(FakeOptimizer({})) metainterp = MyMetaInterp() @@ -617,7 +630,7 @@ memo = ResumeDataLoopMemo(FakeMetaInterpStaticData()) values = {b2: virtual_value(b2, b5, c4)} modifier = ResumeDataVirtualAdder(storage, memo) - liveboxes = modifier.finish(values) + liveboxes = modifier.finish(FakeOptimizer(values)) assert len(storage.rd_virtuals) == 1 assert storage.rd_virtuals[0].fieldnums == [tag(-1, TAGBOX), tag(0, TAGCONST)] @@ -628,7 +641,7 @@ values = {b2: virtual_value(b2, b4, v6), b6: v6} memo.clear_box_virtual_numbers() modifier = ResumeDataVirtualAdder(storage2, memo) - liveboxes2 = 
modifier.finish(values) + liveboxes2 = modifier.finish(FakeOptimizer(values)) assert len(storage2.rd_virtuals) == 2 assert storage2.rd_virtuals[0].fieldnums == [tag(len(liveboxes2)-1, TAGBOX), tag(-1, TAGVIRTUAL)] @@ -674,7 +687,7 @@ memo = ResumeDataLoopMemo(FakeMetaInterpStaticData()) values = {b2: virtual_value(b2, b5, c4)} modifier = ResumeDataVirtualAdder(storage, memo) - liveboxes = modifier.finish(values) + liveboxes = modifier.finish(FakeOptimizer(values)) assert len(storage.rd_virtuals) == 1 assert storage.rd_virtuals[0].fieldnums == [tag(-1, TAGBOX), tag(0, TAGCONST)] @@ -684,7 +697,7 @@ capture_resumedata(fs, None, [], storage2) values[b4] = virtual_value(b4, b6, c4) modifier = ResumeDataVirtualAdder(storage2, memo) - liveboxes = modifier.finish(values) + liveboxes = modifier.finish(FakeOptimizer(values)) assert len(storage2.rd_virtuals) == 2 assert storage2.rd_virtuals[1].fieldnums == storage.rd_virtuals[0].fieldnums assert storage2.rd_virtuals[1] is storage.rd_virtuals[0] @@ -703,7 +716,7 @@ v1.setfield(LLtypeMixin.nextdescr, v2) values = {b1: v1, b2: v2} modifier = ResumeDataVirtualAdder(storage, memo) - liveboxes = modifier.finish(values) + liveboxes = modifier.finish(FakeOptimizer(values)) assert liveboxes == [b3] assert len(storage.rd_virtuals) == 2 assert storage.rd_virtuals[0].fieldnums == [tag(-1, TAGBOX), @@ -776,7 +789,7 @@ memo = ResumeDataLoopMemo(FakeMetaInterpStaticData()) - numb, liveboxes, v = memo.number({}, snap1) + numb, liveboxes, v = memo.number(FakeOptimizer({}), snap1) assert v == 0 assert liveboxes == {b1: tag(0, TAGBOX), b2: tag(1, TAGBOX), @@ -788,7 +801,7 @@ tag(0, TAGBOX), tag(2, TAGINT)] assert not numb.prev.prev - numb2, liveboxes2, v = memo.number({}, snap2) + numb2, liveboxes2, v = memo.number(FakeOptimizer({}), snap2) assert v == 0 assert liveboxes2 == {b1: tag(0, TAGBOX), b2: tag(1, TAGBOX), @@ -813,7 +826,8 @@ return self.virt # renamed - numb3, liveboxes3, v = memo.number({b3: FakeValue(False, c4)}, snap3) + numb3, liveboxes3, v = memo.number(FakeOptimizer({b3: FakeValue(False, c4)}), + snap3) assert v == 0 assert liveboxes3 == {b1: tag(0, TAGBOX), b2: tag(1, TAGBOX)} @@ -825,7 +839,8 @@ env4 = [c3, b4, b1, c3] snap4 = Snapshot(snap, env4) - numb4, liveboxes4, v = memo.number({b4: FakeValue(True, b4)}, snap4) + numb4, liveboxes4, v = memo.number(FakeOptimizer({b4: FakeValue(True, b4)}), + snap4) assert v == 1 assert liveboxes4 == {b1: tag(0, TAGBOX), b2: tag(1, TAGBOX), @@ -837,8 +852,9 @@ env5 = [b1, b4, b5] snap5 = Snapshot(snap4, env5) - numb5, liveboxes5, v = memo.number({b4: FakeValue(True, b4), - b5: FakeValue(True, b5)}, snap5) + numb5, liveboxes5, v = memo.number(FakeOptimizer({b4: FakeValue(True, b4), + b5: FakeValue(True, b5)}), + snap5) assert v == 2 assert liveboxes5 == {b1: tag(0, TAGBOX), b2: tag(1, TAGBOX), @@ -940,7 +956,7 @@ storage = make_storage(b1s, b2s, b3s) memo = ResumeDataLoopMemo(FakeMetaInterpStaticData()) modifier = ResumeDataVirtualAdder(storage, memo) - liveboxes = modifier.finish({}) + liveboxes = modifier.finish(FakeOptimizer({})) assert storage.rd_snapshot is None cpu = MyCPU([]) reader = ResumeDataDirectReader(MyMetaInterp(cpu), storage) @@ -954,14 +970,14 @@ storage = make_storage(b1s, b2s, b3s) memo = ResumeDataLoopMemo(FakeMetaInterpStaticData()) modifier = ResumeDataVirtualAdder(storage, memo) - modifier.finish({}) + modifier.finish(FakeOptimizer({})) assert len(memo.consts) == 2 assert storage.rd_consts is memo.consts b1s, b2s, b3s = [ConstInt(sys.maxint), ConstInt(2**17), ConstInt(-65)] storage2 = 
make_storage(b1s, b2s, b3s) modifier2 = ResumeDataVirtualAdder(storage2, memo) - modifier2.finish({}) + modifier2.finish(FakeOptimizer({})) assert len(memo.consts) == 3 assert storage2.rd_consts is memo.consts @@ -1022,7 +1038,7 @@ val = FakeValue() values = {b1s: val, b2s: val} - liveboxes = modifier.finish(values) + liveboxes = modifier.finish(FakeOptimizer(values)) assert storage.rd_snapshot is None b1t, b3t = [BoxInt(11), BoxInt(33)] newboxes = _resume_remap(liveboxes, [b1_2, b3s], b1t, b3t) @@ -1043,7 +1059,7 @@ storage = make_storage(b1s, b2s, b3s) memo = ResumeDataLoopMemo(FakeMetaInterpStaticData()) modifier = ResumeDataVirtualAdder(storage, memo) - liveboxes = modifier.finish({}) + liveboxes = modifier.finish(FakeOptimizer({})) b2t, b3t = [BoxPtr(demo55o), BoxInt(33)] newboxes = _resume_remap(liveboxes, [b2s, b3s], b2t, b3t) metainterp = MyMetaInterp() @@ -1086,7 +1102,7 @@ values = {b2s: v2, b4s: v4} liveboxes = [] - modifier._number_virtuals(liveboxes, values, 0) + modifier._number_virtuals(liveboxes, FakeOptimizer(values), 0) storage.rd_consts = memo.consts[:] storage.rd_numb = None # resume @@ -1156,7 +1172,7 @@ modifier.register_virtual_fields(b2s, [b4s, c1s]) liveboxes = [] values = {b2s: v2} - modifier._number_virtuals(liveboxes, values, 0) + modifier._number_virtuals(liveboxes, FakeOptimizer(values), 0) dump_storage(storage, liveboxes) storage.rd_consts = memo.consts[:] storage.rd_numb = None @@ -1203,7 +1219,7 @@ v2.setfield(LLtypeMixin.bdescr, OptValue(b4s)) modifier.register_virtual_fields(b2s, [c1s, b4s]) liveboxes = [] - modifier._number_virtuals(liveboxes, {b2s: v2}, 0) + modifier._number_virtuals(liveboxes, FakeOptimizer({b2s: v2}), 0) dump_storage(storage, liveboxes) storage.rd_consts = memo.consts[:] storage.rd_numb = None @@ -1249,7 +1265,7 @@ values = {b4s: v4, b2s: v2} liveboxes = [] - modifier._number_virtuals(liveboxes, values, 0) + modifier._number_virtuals(liveboxes, FakeOptimizer(values), 0) assert liveboxes == [b2s, b4s] or liveboxes == [b4s, b2s] modifier._add_pending_fields([(LLtypeMixin.nextdescr, b2s, b4s, -1)]) storage.rd_consts = memo.consts[:] diff --git a/pypy/module/_demo/test/test_sieve.py b/pypy/module/_demo/test/test_sieve.py new file mode 100644 --- /dev/null +++ b/pypy/module/_demo/test/test_sieve.py @@ -0,0 +1,12 @@ +from pypy.conftest import gettestobjspace + + +class AppTestSieve: + def setup_class(cls): + cls.space = gettestobjspace(usemodules=('_demo',)) + + def test_sieve(self): + import _demo + lst = _demo.sieve(100) + assert lst == [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, + 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97] diff --git a/pypy/module/cpyext/api.py b/pypy/module/cpyext/api.py --- a/pypy/module/cpyext/api.py +++ b/pypy/module/cpyext/api.py @@ -384,6 +384,7 @@ "Dict": "space.w_dict", "Tuple": "space.w_tuple", "List": "space.w_list", + "Set": "space.w_set", "Int": "space.w_int", "Bool": "space.w_bool", "Float": "space.w_float", @@ -434,16 +435,16 @@ ('buf', rffi.VOIDP), ('obj', PyObject), ('len', Py_ssize_t), - # ('itemsize', Py_ssize_t), + ('itemsize', Py_ssize_t), - # ('readonly', lltype.Signed), - # ('ndim', lltype.Signed), - # ('format', rffi.CCHARP), - # ('shape', Py_ssize_tP), - # ('strides', Py_ssize_tP), - # ('suboffets', Py_ssize_tP), - # ('smalltable', rffi.CFixedArray(Py_ssize_t, 2)), - # ('internal', rffi.VOIDP) + ('readonly', lltype.Signed), + ('ndim', lltype.Signed), + ('format', rffi.CCHARP), + ('shape', Py_ssize_tP), + ('strides', Py_ssize_tP), + ('suboffsets', Py_ssize_tP), + #('smalltable', 
rffi.CFixedArray(Py_ssize_t, 2)), + ('internal', rffi.VOIDP) )) @specialize.memo() diff --git a/pypy/module/cpyext/dictobject.py b/pypy/module/cpyext/dictobject.py --- a/pypy/module/cpyext/dictobject.py +++ b/pypy/module/cpyext/dictobject.py @@ -6,6 +6,7 @@ from pypy.module.cpyext.pyobject import RefcountState from pypy.module.cpyext.pyerrors import PyErr_BadInternalCall from pypy.interpreter.error import OperationError +from pypy.rlib.objectmodel import specialize @cpython_api([], PyObject) def PyDict_New(space): @@ -183,11 +184,34 @@ w_item = space.call_method(w_iter, "next") w_key, w_value = space.fixedview(w_item, 2) state = space.fromcache(RefcountState) - pkey[0] = state.make_borrowed(w_dict, w_key) - pvalue[0] = state.make_borrowed(w_dict, w_value) + if pkey: + pkey[0] = state.make_borrowed(w_dict, w_key) + if pvalue: + pvalue[0] = state.make_borrowed(w_dict, w_value) ppos[0] += 1 except OperationError, e: if not e.match(space, space.w_StopIteration): raise return 0 return 1 + + at specialize.memo() +def make_frozendict(space): + return space.appexec([], '''(): + import collections + class FrozenDict(collections.Mapping): + def __init__(self, *args, **kwargs): + self._d = dict(*args, **kwargs) + def __iter__(self): + return iter(self._d) + def __len__(self): + return len(self._d) + def __getitem__(self, key): + return self._d[key] + return FrozenDict''') + + at cpython_api([PyObject], PyObject) +def PyDictProxy_New(space, w_dict): + w_frozendict = make_frozendict(space) + return space.call_function(w_frozendict, w_dict) + diff --git a/pypy/module/cpyext/include/methodobject.h b/pypy/module/cpyext/include/methodobject.h --- a/pypy/module/cpyext/include/methodobject.h +++ b/pypy/module/cpyext/include/methodobject.h @@ -26,6 +26,7 @@ PyObject_HEAD PyMethodDef *m_ml; /* Description of the C function to call */ PyObject *m_self; /* Passed as 'self' arg to the C func, can be NULL */ + PyObject *m_module; /* The __module__ attribute, can be anything */ } PyCFunctionObject; /* Flag passed to newmethodobject */ diff --git a/pypy/module/cpyext/include/object.h b/pypy/module/cpyext/include/object.h --- a/pypy/module/cpyext/include/object.h +++ b/pypy/module/cpyext/include/object.h @@ -131,18 +131,18 @@ /* This is Py_ssize_t so it can be pointed to by strides in simple case.*/ - /* Py_ssize_t itemsize; */ - /* int readonly; */ - /* int ndim; */ - /* char *format; */ - /* Py_ssize_t *shape; */ - /* Py_ssize_t *strides; */ - /* Py_ssize_t *suboffsets; */ + Py_ssize_t itemsize; + int readonly; + int ndim; + char *format; + Py_ssize_t *shape; + Py_ssize_t *strides; + Py_ssize_t *suboffsets; /* static store for shape and strides of mono-dimensional buffers. 
*/ /* Py_ssize_t smalltable[2]; */ - /* void *internal; */ + void *internal; } Py_buffer; diff --git a/pypy/module/cpyext/include/pystate.h b/pypy/module/cpyext/include/pystate.h --- a/pypy/module/cpyext/include/pystate.h +++ b/pypy/module/cpyext/include/pystate.h @@ -10,6 +10,7 @@ typedef struct _ts { PyInterpreterState *interp; + PyObject *dict; /* Stores per-thread state */ } PyThreadState; #define Py_BEGIN_ALLOW_THREADS { \ @@ -24,4 +25,6 @@ enum {PyGILState_LOCKED, PyGILState_UNLOCKED} PyGILState_STATE; +#define PyThreadState_GET() PyThreadState_Get() + #endif /* !Py_PYSTATE_H */ diff --git a/pypy/module/cpyext/include/pythread.h b/pypy/module/cpyext/include/pythread.h --- a/pypy/module/cpyext/include/pythread.h +++ b/pypy/module/cpyext/include/pythread.h @@ -1,6 +1,8 @@ #ifndef Py_PYTHREAD_H #define Py_PYTHREAD_H +#define WITH_THREAD + typedef void *PyThread_type_lock; #define WAIT_LOCK 1 #define NOWAIT_LOCK 0 diff --git a/pypy/module/cpyext/include/structmember.h b/pypy/module/cpyext/include/structmember.h --- a/pypy/module/cpyext/include/structmember.h +++ b/pypy/module/cpyext/include/structmember.h @@ -20,7 +20,7 @@ } PyMemberDef; -/* Types */ +/* Types. These constants are also in structmemberdefs.py. */ #define T_SHORT 0 #define T_INT 1 #define T_LONG 2 @@ -42,9 +42,12 @@ #define T_LONGLONG 17 #define T_ULONGLONG 18 -/* Flags */ +/* Flags. These constants are also in structmemberdefs.py. */ #define READONLY 1 #define RO READONLY /* Shorthand */ +#define READ_RESTRICTED 2 +#define PY_WRITE_RESTRICTED 4 +#define RESTRICTED (READ_RESTRICTED | PY_WRITE_RESTRICTED) #ifdef __cplusplus diff --git a/pypy/module/cpyext/methodobject.py b/pypy/module/cpyext/methodobject.py --- a/pypy/module/cpyext/methodobject.py +++ b/pypy/module/cpyext/methodobject.py @@ -32,6 +32,7 @@ PyObjectFields + ( ('m_ml', lltype.Ptr(PyMethodDef)), ('m_self', PyObject), + ('m_module', PyObject), )) PyCFunctionObject = lltype.Ptr(PyCFunctionObjectStruct) @@ -47,11 +48,13 @@ assert isinstance(w_obj, W_PyCFunctionObject) py_func.c_m_ml = w_obj.ml py_func.c_m_self = make_ref(space, w_obj.w_self) + py_func.c_m_module = make_ref(space, w_obj.w_module) @cpython_api([PyObject], lltype.Void, external=False) def cfunction_dealloc(space, py_obj): py_func = rffi.cast(PyCFunctionObject, py_obj) Py_DecRef(space, py_func.c_m_self) + Py_DecRef(space, py_func.c_m_module) from pypy.module.cpyext.object import PyObject_dealloc PyObject_dealloc(space, py_obj) diff --git a/pypy/module/cpyext/object.py b/pypy/module/cpyext/object.py --- a/pypy/module/cpyext/object.py +++ b/pypy/module/cpyext/object.py @@ -381,6 +381,15 @@ This is the equivalent of the Python expression hash(o).""" return space.int_w(space.hash(w_obj)) + at cpython_api([PyObject], PyObject) +def PyObject_Dir(space, w_o): + """This is equivalent to the Python expression dir(o), returning a (possibly + empty) list of strings appropriate for the object argument, or NULL if there + was an error. 
If the argument is NULL, this is like the Python dir(), + returning the names of the current locals; in this case, if no execution frame + is active then NULL is returned but PyErr_Occurred() will return false.""" + return space.call_function(space.builtin.get('dir'), w_o) + @cpython_api([PyObject, rffi.CCHARPP, Py_ssize_tP], rffi.INT_real, error=-1) def PyObject_AsCharBuffer(space, obj, bufferp, sizep): """Returns a pointer to a read-only memory location usable as @@ -430,6 +439,8 @@ return 0 +PyBUF_WRITABLE = 0x0001 # Copied from object.h + @cpython_api([lltype.Ptr(Py_buffer), PyObject, rffi.VOIDP, Py_ssize_t, lltype.Signed, lltype.Signed], rffi.INT, error=CANNOT_FAIL) def PyBuffer_FillInfo(space, view, obj, buf, length, readonly, flags): @@ -445,6 +456,18 @@ view.c_len = length view.c_obj = obj Py_IncRef(space, obj) + view.c_itemsize = 1 + if flags & PyBUF_WRITABLE: + rffi.setintfield(view, 'c_readonly', 0) + else: + rffi.setintfield(view, 'c_readonly', 1) + rffi.setintfield(view, 'c_ndim', 0) + view.c_format = lltype.nullptr(rffi.CCHARP.TO) + view.c_shape = lltype.nullptr(Py_ssize_tP.TO) + view.c_strides = lltype.nullptr(Py_ssize_tP.TO) + view.c_suboffsets = lltype.nullptr(Py_ssize_tP.TO) + view.c_internal = lltype.nullptr(rffi.VOIDP.TO) + return 0 diff --git a/pypy/module/cpyext/pyfile.py b/pypy/module/cpyext/pyfile.py --- a/pypy/module/cpyext/pyfile.py +++ b/pypy/module/cpyext/pyfile.py @@ -1,7 +1,8 @@ from pypy.rpython.lltypesystem import rffi, lltype from pypy.module.cpyext.api import ( - cpython_api, CONST_STRING, FILEP, build_type_checkers) + cpython_api, CANNOT_FAIL, CONST_STRING, FILEP, build_type_checkers) from pypy.module.cpyext.pyobject import PyObject, borrow_from +from pypy.module.cpyext.object import Py_PRINT_RAW from pypy.interpreter.error import OperationError from pypy.module._file.interp_file import W_File @@ -61,11 +62,49 @@ def PyFile_WriteString(space, s, w_p): """Write string s to file object p. Return 0 on success or -1 on failure; the appropriate exception will be set.""" - w_s = space.wrap(rffi.charp2str(s)) - space.call_method(w_p, "write", w_s) + w_str = space.wrap(rffi.charp2str(s)) + space.call_method(w_p, "write", w_str) + return 0 + + at cpython_api([PyObject, PyObject, rffi.INT_real], rffi.INT_real, error=-1) +def PyFile_WriteObject(space, w_obj, w_p, flags): + """ + Write object obj to file object p. The only supported flag for flags is + Py_PRINT_RAW; if given, the str() of the object is written + instead of the repr(). Return 0 on success or -1 on failure; the + appropriate exception will be set.""" + if rffi.cast(lltype.Signed, flags) & Py_PRINT_RAW: + w_str = space.str(w_obj) + else: + w_str = space.repr(w_obj) + space.call_method(w_p, "write", w_str) return 0 @cpython_api([PyObject], PyObject) def PyFile_Name(space, w_p): """Return the name of the file specified by p as a string object.""" - return borrow_from(w_p, space.getattr(w_p, space.wrap("name"))) \ No newline at end of file + return borrow_from(w_p, space.getattr(w_p, space.wrap("name"))) + + at cpython_api([PyObject, rffi.INT_real], rffi.INT_real, error=CANNOT_FAIL) +def PyFile_SoftSpace(space, w_p, newflag): + """ + This function exists for internal use by the interpreter. Set the + softspace attribute of p to newflag and return the previous value. + p does not have to be a file object for this function to work + properly; any object is supported (thought its only interesting if + the softspace attribute can be set). 
This function clears any + errors, and will return 0 as the previous value if the attribute + either does not exist or if there were errors in retrieving it. + There is no way to detect errors from this function, but doing so + should not be needed.""" + try: + if rffi.cast(lltype.Signed, newflag): + w_newflag = space.w_True + else: + w_newflag = space.w_False + oldflag = space.int_w(space.getattr(w_p, space.wrap("softspace"))) + space.setattr(w_p, space.wrap("softspace"), w_newflag) + return oldflag + except OperationError, e: + return 0 + diff --git a/pypy/module/cpyext/pystate.py b/pypy/module/cpyext/pystate.py --- a/pypy/module/cpyext/pystate.py +++ b/pypy/module/cpyext/pystate.py @@ -1,12 +1,19 @@ from pypy.module.cpyext.api import ( cpython_api, generic_cpy_call, CANNOT_FAIL, CConfig, cpython_struct) +from pypy.module.cpyext.pyobject import PyObject, Py_DecRef, make_ref from pypy.rpython.lltypesystem import rffi, lltype PyInterpreterStateStruct = lltype.ForwardReference() PyInterpreterState = lltype.Ptr(PyInterpreterStateStruct) cpython_struct( - "PyInterpreterState", [('next', PyInterpreterState)], PyInterpreterStateStruct) -PyThreadState = lltype.Ptr(cpython_struct("PyThreadState", [('interp', PyInterpreterState)])) + "PyInterpreterState", + [('next', PyInterpreterState)], + PyInterpreterStateStruct) +PyThreadState = lltype.Ptr(cpython_struct( + "PyThreadState", + [('interp', PyInterpreterState), + ('dict', PyObject), + ])) @cpython_api([], PyThreadState, error=CANNOT_FAIL) def PyEval_SaveThread(space): @@ -38,41 +45,49 @@ return 1 # XXX: might be generally useful -def encapsulator(T, flavor='raw'): +def encapsulator(T, flavor='raw', dealloc=None): class MemoryCapsule(object): - def __init__(self, alloc=True): - if alloc: + def __init__(self, space): + self.space = space + if space is not None: self.memory = lltype.malloc(T, flavor=flavor) else: self.memory = lltype.nullptr(T) def __del__(self): if self.memory: + if dealloc and self.space: + dealloc(self.memory, self.space) lltype.free(self.memory, flavor=flavor) return MemoryCapsule -ThreadStateCapsule = encapsulator(PyThreadState.TO) +def ThreadState_dealloc(ts, space): + assert space is not None + Py_DecRef(space, ts.c_dict) +ThreadStateCapsule = encapsulator(PyThreadState.TO, + dealloc=ThreadState_dealloc) from pypy.interpreter.executioncontext import ExecutionContext -ExecutionContext.cpyext_threadstate = ThreadStateCapsule(alloc=False) +ExecutionContext.cpyext_threadstate = ThreadStateCapsule(None) class InterpreterState(object): def __init__(self, space): self.interpreter_state = lltype.malloc( PyInterpreterState.TO, flavor='raw', zero=True, immortal=True) - def new_thread_state(self): - capsule = ThreadStateCapsule() + def new_thread_state(self, space): + capsule = ThreadStateCapsule(space) ts = capsule.memory ts.c_interp = self.interpreter_state + ts.c_dict = make_ref(space, space.newdict()) return capsule def get_thread_state(self, space): ec = space.getexecutioncontext() - return self._get_thread_state(ec).memory + return self._get_thread_state(space, ec).memory - def _get_thread_state(self, ec): + def _get_thread_state(self, space, ec): if ec.cpyext_threadstate.memory == lltype.nullptr(PyThreadState.TO): - ec.cpyext_threadstate = self.new_thread_state() + ec.cpyext_threadstate = self.new_thread_state(space) return ec.cpyext_threadstate @@ -81,6 +96,11 @@ state = space.fromcache(InterpreterState) return state.get_thread_state(space) + at cpython_api([], PyObject, error=CANNOT_FAIL) +def PyThreadState_GetDict(space): + 
state = space.fromcache(InterpreterState) + return state.get_thread_state(space).c_dict + @cpython_api([PyThreadState], PyThreadState, error=CANNOT_FAIL) def PyThreadState_Swap(space, tstate): """Swap the current thread state with the thread state given by the argument diff --git a/pypy/module/cpyext/pythonrun.py b/pypy/module/cpyext/pythonrun.py --- a/pypy/module/cpyext/pythonrun.py +++ b/pypy/module/cpyext/pythonrun.py @@ -14,6 +14,20 @@ value.""" return space.fromcache(State).get_programname() + at cpython_api([], rffi.CCHARP) +def Py_GetVersion(space): + """Return the version of this Python interpreter. This is a + string that looks something like + + "1.5 (\#67, Dec 31 1997, 22:34:28) [GCC 2.7.2.2]" + + The first word (up to the first space character) is the current + Python version; the first three characters are the major and minor + version separated by a period. The returned string points into + static storage; the caller should not modify its value. The value + is available to Python code as sys.version.""" + return space.fromcache(State).get_version() + @cpython_api([lltype.Ptr(lltype.FuncType([], lltype.Void))], rffi.INT_real, error=-1) def Py_AtExit(space, func_ptr): """Register a cleanup function to be called by Py_Finalize(). The cleanup diff --git a/pypy/module/cpyext/setobject.py b/pypy/module/cpyext/setobject.py --- a/pypy/module/cpyext/setobject.py +++ b/pypy/module/cpyext/setobject.py @@ -54,6 +54,20 @@ return 0 + at cpython_api([PyObject], PyObject) +def PySet_Pop(space, w_set): + """Return a new reference to an arbitrary object in the set, and removes the + object from the set. Return NULL on failure. Raise KeyError if the + set is empty. Raise a SystemError if set is an not an instance of + set or its subtype.""" + return space.call_method(w_set, "pop") + + at cpython_api([PyObject], rffi.INT_real, error=-1) +def PySet_Clear(space, w_set): + """Empty an existing set of all elements.""" + space.call_method(w_set, 'clear') + return 0 + @cpython_api([PyObject], Py_ssize_t, error=CANNOT_FAIL) def PySet_GET_SIZE(space, w_s): """Macro form of PySet_Size() without error checking.""" diff --git a/pypy/module/cpyext/slotdefs.py b/pypy/module/cpyext/slotdefs.py --- a/pypy/module/cpyext/slotdefs.py +++ b/pypy/module/cpyext/slotdefs.py @@ -185,6 +185,15 @@ space.fromcache(State).check_and_raise_exception(always=True) return space.wrap(res) +def wrap_delitem(space, w_self, w_args, func): + func_target = rffi.cast(objobjargproc, func) + check_num_args(space, w_args, 1) + w_key, = space.fixedview(w_args) + res = generic_cpy_call(space, func_target, w_self, w_key, None) + if rffi.cast(lltype.Signed, res) == -1: + space.fromcache(State).check_and_raise_exception(always=True) + return space.w_None + def wrap_ssizessizeargfunc(space, w_self, w_args, func): func_target = rffi.cast(ssizessizeargfunc, func) check_num_args(space, w_args, 2) @@ -291,6 +300,14 @@ def slot_nb_int(space, w_self): return space.int(w_self) + at cpython_api([PyObject], PyObject, external=False) +def slot_tp_iter(space, w_self): + return space.iter(w_self) + + at cpython_api([PyObject], PyObject, external=False) +def slot_tp_iternext(space, w_self): + return space.next(w_self) + from pypy.rlib.nonconst import NonConstant SLOTS = {} @@ -632,6 +649,19 @@ TPSLOT("__buffer__", "tp_as_buffer.c_bf_getreadbuffer", None, "wrap_getreadbuffer", ""), ) +# partial sort to solve some slot conflicts: +# Number slots before Mapping slots before Sequence slots. 
+# These are the only conflicts between __name__ methods +def slotdef_sort_key(slotdef): + if slotdef.slot_name.startswith('tp_as_number'): + return 1 + if slotdef.slot_name.startswith('tp_as_mapping'): + return 2 + if slotdef.slot_name.startswith('tp_as_sequence'): + return 3 + return 0 +slotdefs = sorted(slotdefs, key=slotdef_sort_key) + slotdefs_for_tp_slots = unrolling_iterable( [(x.method_name, x.slot_name, x.slot_names, x.slot_func) for x in slotdefs]) diff --git a/pypy/module/cpyext/state.py b/pypy/module/cpyext/state.py --- a/pypy/module/cpyext/state.py +++ b/pypy/module/cpyext/state.py @@ -10,6 +10,7 @@ self.space = space self.reset() self.programname = lltype.nullptr(rffi.CCHARP.TO) + self.version = lltype.nullptr(rffi.CCHARP.TO) def reset(self): from pypy.module.cpyext.modsupport import PyMethodDef @@ -102,6 +103,15 @@ lltype.render_immortal(self.programname) return self.programname + def get_version(self): + if not self.version: + space = self.space + w_version = space.sys.get('version') + version = space.str_w(w_version) + self.version = rffi.str2charp(version) + lltype.render_immortal(self.version) + return self.version + def find_extension(self, name, path): from pypy.module.cpyext.modsupport import PyImport_AddModule from pypy.interpreter.module import Module diff --git a/pypy/module/cpyext/stringobject.py b/pypy/module/cpyext/stringobject.py --- a/pypy/module/cpyext/stringobject.py +++ b/pypy/module/cpyext/stringobject.py @@ -250,6 +250,26 @@ s = rffi.charp2str(string) return space.new_interned_str(s) + at cpython_api([PyObjectP], lltype.Void) +def PyString_InternInPlace(space, string): + """Intern the argument *string in place. The argument must be the + address of a pointer variable pointing to a Python string object. + If there is an existing interned string that is the same as + *string, it sets *string to it (decrementing the reference count + of the old string object and incrementing the reference count of + the interned string object), otherwise it leaves *string alone and + interns it (incrementing its reference count). (Clarification: + even though there is a lot of talk about reference counts, think + of this function as reference-count-neutral; you own the object + after the call if and only if you owned it before the call.) 
+ + This function is not available in 3.x and does not have a PyBytes + alias.""" + w_str = from_ref(space, string[0]) + w_str = space.new_interned_w_str(w_str) + Py_DecRef(space, string[0]) + string[0] = make_ref(space, w_str) + @cpython_api([PyObject, rffi.CCHARP, rffi.CCHARP], PyObject) def PyString_AsEncodedObject(space, w_str, encoding, errors): """Encode a string object using the codec registered for encoding and return diff --git a/pypy/module/cpyext/structmemberdefs.py b/pypy/module/cpyext/structmemberdefs.py --- a/pypy/module/cpyext/structmemberdefs.py +++ b/pypy/module/cpyext/structmemberdefs.py @@ -1,3 +1,5 @@ +# These constants are also in include/structmember.h + T_SHORT = 0 T_INT = 1 T_LONG = 2 @@ -18,3 +20,6 @@ T_ULONGLONG = 18 READONLY = RO = 1 +READ_RESTRICTED = 2 +WRITE_RESTRICTED = 4 +RESTRICTED = READ_RESTRICTED | WRITE_RESTRICTED diff --git a/pypy/module/cpyext/stubs.py b/pypy/module/cpyext/stubs.py --- a/pypy/module/cpyext/stubs.py +++ b/pypy/module/cpyext/stubs.py @@ -1,5 +1,5 @@ from pypy.module.cpyext.api import ( - cpython_api, PyObject, PyObjectP, CANNOT_FAIL, Py_buffer + cpython_api, PyObject, PyObjectP, CANNOT_FAIL ) from pypy.module.cpyext.complexobject import Py_complex_ptr as Py_complex from pypy.rpython.lltypesystem import rffi, lltype @@ -10,6 +10,7 @@ PyMethodDef = rffi.VOIDP PyGetSetDef = rffi.VOIDP PyMemberDef = rffi.VOIDP +Py_buffer = rffi.VOIDP va_list = rffi.VOIDP PyDateTime_Date = rffi.VOIDP PyDateTime_DateTime = rffi.VOIDP @@ -32,10 +33,6 @@ def _PyObject_Del(space, op): raise NotImplementedError - at cpython_api([PyObject], rffi.INT_real, error=CANNOT_FAIL) -def PyObject_CheckBuffer(space, obj): - raise NotImplementedError - @cpython_api([rffi.CCHARP], Py_ssize_t, error=CANNOT_FAIL) def PyBuffer_SizeFromFormat(space, format): """Return the implied ~Py_buffer.itemsize from the struct-stype @@ -684,28 +681,6 @@ """ raise NotImplementedError - at cpython_api([PyObject, rffi.INT_real], rffi.INT_real, error=CANNOT_FAIL) -def PyFile_SoftSpace(space, p, newflag): - """ - This function exists for internal use by the interpreter. Set the - softspace attribute of p to newflag and return the previous value. - p does not have to be a file object for this function to work properly; any - object is supported (thought its only interesting if the softspace - attribute can be set). This function clears any errors, and will return 0 - as the previous value if the attribute either does not exist or if there were - errors in retrieving it. There is no way to detect errors from this function, - but doing so should not be needed.""" - raise NotImplementedError - - at cpython_api([PyObject, PyObject, rffi.INT_real], rffi.INT_real, error=-1) -def PyFile_WriteObject(space, obj, p, flags): - """ - Write object obj to file object p. The only supported flag for flags is - Py_PRINT_RAW; if given, the str() of the object is written - instead of the repr(). Return 0 on success or -1 on failure; the - appropriate exception will be set.""" - raise NotImplementedError - @cpython_api([], PyObject) def PyFloat_GetInfo(space): """Return a structseq instance which contains information about the @@ -1097,19 +1072,6 @@ raise NotImplementedError @cpython_api([], rffi.CCHARP) -def Py_GetVersion(space): - """Return the version of this Python interpreter. 
This is a string that looks - something like - - "1.5 (\#67, Dec 31 1997, 22:34:28) [GCC 2.7.2.2]" - - The first word (up to the first space character) is the current Python version; - the first three characters are the major and minor version separated by a - period. The returned string points into static storage; the caller should not - modify its value. The value is available to Python code as sys.version.""" - raise NotImplementedError - - at cpython_api([], rffi.CCHARP) def Py_GetPlatform(space): """Return the platform identifier for the current platform. On Unix, this is formed from the"official" name of the operating system, converted to lower @@ -1331,28 +1293,6 @@ that haven't been explicitly destroyed at that point.""" raise NotImplementedError - at cpython_api([rffi.VOIDP], lltype.Void) -def Py_AddPendingCall(space, func): - """Post a notification to the Python main thread. If successful, func will - be called with the argument arg at the earliest convenience. func will be - called having the global interpreter lock held and can thus use the full - Python API and can take any action such as setting object attributes to - signal IO completion. It must return 0 on success, or -1 signalling an - exception. The notification function won't be interrupted to perform another - asynchronous notification recursively, but it can still be interrupted to - switch threads if the global interpreter lock is released, for example, if it - calls back into Python code. - - This function returns 0 on success in which case the notification has been - scheduled. Otherwise, for example if the notification buffer is full, it - returns -1 without setting any exception. - - This function can be called on any thread, be it a Python thread or some - other system thread. If it is a Python thread, it doesn't matter if it holds - the global interpreter lock or not. - """ - raise NotImplementedError - @cpython_api([Py_tracefunc, PyObject], lltype.Void) def PyEval_SetProfile(space, func, obj): """Set the profiler function to func. The obj parameter is passed to the @@ -1685,15 +1625,6 @@ """ raise NotImplementedError - at cpython_api([PyObject], PyObject) -def PyObject_Dir(space, o): - """This is equivalent to the Python expression dir(o), returning a (possibly - empty) list of strings appropriate for the object argument, or NULL if there - was an error. If the argument is NULL, this is like the Python dir(), - returning the names of the current locals; in this case, if no execution frame - is active then NULL is returned but PyErr_Occurred() will return false.""" - raise NotImplementedError - @cpython_api([], PyFrameObject) def PyEval_GetFrame(space): """Return the current thread state's frame, which is NULL if no frame is @@ -1802,34 +1733,6 @@ building-up new frozensets with PySet_Add().""" raise NotImplementedError - at cpython_api([PyObject], PyObject) -def PySet_Pop(space, set): - """Return a new reference to an arbitrary object in the set, and removes the - object from the set. Return NULL on failure. Raise KeyError if the - set is empty. Raise a SystemError if set is an not an instance of - set or its subtype.""" - raise NotImplementedError - - at cpython_api([PyObject], rffi.INT_real, error=-1) -def PySet_Clear(space, set): - """Empty an existing set of all elements.""" - raise NotImplementedError - - at cpython_api([PyObjectP], lltype.Void) -def PyString_InternInPlace(space, string): - """Intern the argument *string in place. 
The argument must be the address of a - pointer variable pointing to a Python string object. If there is an existing - interned string that is the same as *string, it sets *string to it - (decrementing the reference count of the old string object and incrementing the - reference count of the interned string object), otherwise it leaves *string - alone and interns it (incrementing its reference count). (Clarification: even - though there is a lot of talk about reference counts, think of this function as - reference-count-neutral; you own the object after the call if and only if you - owned it before the call.) - - This function is not available in 3.x and does not have a PyBytes alias.""" - raise NotImplementedError - @cpython_api([rffi.CCHARP, Py_ssize_t, rffi.CCHARP, rffi.CCHARP], PyObject) def PyString_Decode(space, s, size, encoding, errors): """Create an object by decoding size bytes of the encoded buffer s using the @@ -2448,16 +2351,6 @@ properly supporting 64-bit systems.""" raise NotImplementedError - at cpython_api([PyObject, PyObject, PyObject, Py_ssize_t], PyObject) -def PyUnicode_Replace(space, str, substr, replstr, maxcount): - """Replace at most maxcount occurrences of substr in str with replstr and - return the resulting Unicode object. maxcount == -1 means replace all - occurrences. - - This function used an int type for maxcount. This might - require changes in your code for properly supporting 64-bit systems.""" - raise NotImplementedError - @cpython_api([PyObject, PyObject, rffi.INT_real], PyObject) def PyUnicode_RichCompare(space, left, right, op): """Rich compare two unicode strings and return one of the following: diff --git a/pypy/module/cpyext/stubsactive.py b/pypy/module/cpyext/stubsactive.py --- a/pypy/module/cpyext/stubsactive.py +++ b/pypy/module/cpyext/stubsactive.py @@ -38,3 +38,31 @@ def Py_MakePendingCalls(space): return 0 +pending_call = lltype.Ptr(lltype.FuncType([rffi.VOIDP], rffi.INT_real)) + at cpython_api([pending_call, rffi.VOIDP], rffi.INT_real, error=-1) +def Py_AddPendingCall(space, func, arg): + """Post a notification to the Python main thread. If successful, + func will be called with the argument arg at the earliest + convenience. func will be called having the global interpreter + lock held and can thus use the full Python API and can take any + action such as setting object attributes to signal IO completion. + It must return 0 on success, or -1 signalling an exception. The + notification function won't be interrupted to perform another + asynchronous notification recursively, but it can still be + interrupted to switch threads if the global interpreter lock is + released, for example, if it calls back into Python code. + + This function returns 0 on success in which case the notification + has been scheduled. Otherwise, for example if the notification + buffer is full, it returns -1 without setting any exception. + + This function can be called on any thread, be it a Python thread + or some other system thread. If it is a Python thread, it doesn't + matter if it holds the global interpreter lock or not. 
+ """ + return -1 + +thread_func = lltype.Ptr(lltype.FuncType([rffi.VOIDP], lltype.Void)) + at cpython_api([thread_func, rffi.VOIDP], rffi.INT_real, error=-1) +def PyThread_start_new_thread(space, func, arg): + return -1 diff --git a/pypy/module/cpyext/test/test_arraymodule.py b/pypy/module/cpyext/test/test_arraymodule.py --- a/pypy/module/cpyext/test/test_arraymodule.py +++ b/pypy/module/cpyext/test/test_arraymodule.py @@ -43,6 +43,15 @@ assert arr[:2].tolist() == [1,2] assert arr[1:3].tolist() == [2,3] + def test_slice_object(self): + module = self.import_module(name='array') + arr = module.array('i', [1,2,3,4]) + assert arr[slice(1,3)].tolist() == [2,3] + arr[slice(1,3)] = module.array('i', [21, 22, 23]) + assert arr.tolist() == [1, 21, 22, 23, 4] + del arr[slice(1, 3)] + assert arr.tolist() == [1, 23, 4] + def test_buffer(self): module = self.import_module(name='array') arr = module.array('i', [1,2,3,4]) diff --git a/pypy/module/cpyext/test/test_cpyext.py b/pypy/module/cpyext/test/test_cpyext.py --- a/pypy/module/cpyext/test/test_cpyext.py +++ b/pypy/module/cpyext/test/test_cpyext.py @@ -744,6 +744,22 @@ print p assert 'py' in p + def test_get_version(self): + mod = self.import_extension('foo', [ + ('get_version', 'METH_NOARGS', + ''' + char* name1 = Py_GetVersion(); + char* name2 = Py_GetVersion(); + if (name1 != name2) + Py_RETURN_FALSE; + return PyString_FromString(name1); + ''' + ), + ]) + p = mod.get_version() + print p + assert 'PyPy' in p + def test_no_double_imports(self): import sys, os try: diff --git a/pypy/module/cpyext/test/test_dictobject.py b/pypy/module/cpyext/test/test_dictobject.py --- a/pypy/module/cpyext/test/test_dictobject.py +++ b/pypy/module/cpyext/test/test_dictobject.py @@ -2,6 +2,7 @@ from pypy.module.cpyext.test.test_api import BaseApiTest from pypy.module.cpyext.api import Py_ssize_tP, PyObjectP from pypy.module.cpyext.pyobject import make_ref, from_ref +from pypy.interpreter.error import OperationError class TestDictObject(BaseApiTest): def test_dict(self, space, api): @@ -110,3 +111,44 @@ assert space.eq_w(space.len(w_copy), space.len(w_dict)) assert space.eq_w(w_copy, w_dict) + + def test_iterkeys(self, space, api): + w_dict = space.sys.getdict(space) + py_dict = make_ref(space, w_dict) + + ppos = lltype.malloc(Py_ssize_tP.TO, 1, flavor='raw') + pkey = lltype.malloc(PyObjectP.TO, 1, flavor='raw') + pvalue = lltype.malloc(PyObjectP.TO, 1, flavor='raw') + + keys_w = [] + values_w = [] + try: + ppos[0] = 0 + while api.PyDict_Next(w_dict, ppos, pkey, None): + w_key = from_ref(space, pkey[0]) + keys_w.append(w_key) + ppos[0] = 0 + while api.PyDict_Next(w_dict, ppos, None, pvalue): + w_value = from_ref(space, pvalue[0]) + values_w.append(w_value) + finally: + lltype.free(ppos, flavor='raw') + lltype.free(pkey, flavor='raw') + lltype.free(pvalue, flavor='raw') + + api.Py_DecRef(py_dict) # release borrowed references + + assert space.eq_w(space.newlist(keys_w), + space.call_method(w_dict, "keys")) + assert space.eq_w(space.newlist(values_w), + space.call_method(w_dict, "values")) + + def test_dictproxy(self, space, api): + w_dict = space.sys.get('modules') + w_proxy = api.PyDictProxy_New(w_dict) + assert space.is_true(space.contains(w_proxy, space.wrap('sys'))) + raises(OperationError, space.setitem, + w_proxy, space.wrap('sys'), space.w_None) + raises(OperationError, space.delitem, + w_proxy, space.wrap('sys')) + raises(OperationError, space.call_method, w_proxy, 'clear') diff --git a/pypy/module/cpyext/test/test_methodobject.py 
b/pypy/module/cpyext/test/test_methodobject.py --- a/pypy/module/cpyext/test/test_methodobject.py +++ b/pypy/module/cpyext/test/test_methodobject.py @@ -9,7 +9,7 @@ class AppTestMethodObject(AppTestCpythonExtensionBase): def test_call_METH(self): - mod = self.import_extension('foo', [ + mod = self.import_extension('MyModule', [ ('getarg_O', 'METH_O', ''' Py_INCREF(args); @@ -51,11 +51,23 @@ } ''' ), + ('getModule', 'METH_O', + ''' + if(PyCFunction_Check(args)) { + PyCFunctionObject* func = (PyCFunctionObject*)args; + Py_INCREF(func->m_module); + return func->m_module; + } + else { + Py_RETURN_FALSE; + } + ''' + ), ('isSameFunction', 'METH_O', ''' PyCFunction ptr = PyCFunction_GetFunction(args); if (!ptr) return NULL; - if (ptr == foo_getarg_O) + if (ptr == MyModule_getarg_O) Py_RETURN_TRUE; else Py_RETURN_FALSE; @@ -76,6 +88,7 @@ assert mod.getarg_OLD(1, 2) == (1, 2) assert mod.isCFunction(mod.getarg_O) == "getarg_O" + assert mod.getModule(mod.getarg_O) == 'MyModule' assert mod.isSameFunction(mod.getarg_O) raises(TypeError, mod.isSameFunction, 1) diff --git a/pypy/module/cpyext/test/test_object.py b/pypy/module/cpyext/test/test_object.py --- a/pypy/module/cpyext/test/test_object.py +++ b/pypy/module/cpyext/test/test_object.py @@ -191,6 +191,11 @@ assert api.PyObject_Unicode(space.wrap("\xe9")) is None api.PyErr_Clear() + def test_dir(self, space, api): + w_dir = api.PyObject_Dir(space.sys) + assert space.isinstance_w(w_dir, space.w_list) + assert space.is_true(space.contains(w_dir, space.wrap('modules'))) + class AppTestObject(AppTestCpythonExtensionBase): def setup_class(cls): AppTestCpythonExtensionBase.setup_class.im_func(cls) diff --git a/pypy/module/cpyext/test/test_pyfile.py b/pypy/module/cpyext/test/test_pyfile.py --- a/pypy/module/cpyext/test/test_pyfile.py +++ b/pypy/module/cpyext/test/test_pyfile.py @@ -1,5 +1,6 @@ from pypy.module.cpyext.api import fopen, fclose, fwrite from pypy.module.cpyext.test.test_api import BaseApiTest +from pypy.module.cpyext.object import Py_PRINT_RAW from pypy.rpython.lltypesystem import rffi, lltype from pypy.tool.udir import udir import pytest @@ -77,3 +78,28 @@ out = out.replace('\r\n', '\n') assert out == "test\n" + def test_file_writeobject(self, space, api, capfd): + w_obj = space.wrap("test\n") + w_stdout = space.sys.get("stdout") + api.PyFile_WriteObject(w_obj, w_stdout, Py_PRINT_RAW) + api.PyFile_WriteObject(w_obj, w_stdout, 0) + space.call_method(w_stdout, "flush") + out, err = capfd.readouterr() + out = out.replace('\r\n', '\n') + assert out == "test\n'test\\n'" + + def test_file_softspace(self, space, api, capfd): + w_stdout = space.sys.get("stdout") + assert api.PyFile_SoftSpace(w_stdout, 1) == 0 + assert api.PyFile_SoftSpace(w_stdout, 0) == 1 + + api.PyFile_SoftSpace(w_stdout, 1) + w_ns = space.newdict() + space.exec_("print 1,", w_ns, w_ns) + space.exec_("print 2,", w_ns, w_ns) + api.PyFile_SoftSpace(w_stdout, 0) + space.exec_("print 3", w_ns, w_ns) + space.call_method(w_stdout, "flush") + out, err = capfd.readouterr() + out = out.replace('\r\n', '\n') + assert out == " 1 23\n" diff --git a/pypy/module/cpyext/test/test_pystate.py b/pypy/module/cpyext/test/test_pystate.py --- a/pypy/module/cpyext/test/test_pystate.py +++ b/pypy/module/cpyext/test/test_pystate.py @@ -2,6 +2,7 @@ from pypy.module.cpyext.test.test_api import BaseApiTest from pypy.rpython.lltypesystem.lltype import nullptr from pypy.module.cpyext.pystate import PyInterpreterState, PyThreadState +from pypy.module.cpyext.pyobject import from_ref class 
AppTestThreads(AppTestCpythonExtensionBase): def test_allow_threads(self): @@ -49,3 +50,10 @@ api.PyEval_AcquireThread(tstate) api.PyEval_ReleaseThread(tstate) + + def test_threadstate_dict(self, space, api): + ts = api.PyThreadState_Get() + ref = ts.c_dict + assert ref == api.PyThreadState_GetDict() + w_obj = from_ref(space, ref) + assert space.isinstance_w(w_obj, space.w_dict) diff --git a/pypy/module/cpyext/test/test_setobject.py b/pypy/module/cpyext/test/test_setobject.py --- a/pypy/module/cpyext/test/test_setobject.py +++ b/pypy/module/cpyext/test/test_setobject.py @@ -32,3 +32,13 @@ w_set = api.PySet_New(space.wrap([1,2,3,4])) assert api.PySet_Contains(w_set, space.wrap(1)) assert not api.PySet_Contains(w_set, space.wrap(0)) + + def test_set_pop_clear(self, space, api): + w_set = api.PySet_New(space.wrap([1,2,3,4])) + w_obj = api.PySet_Pop(w_set) + assert space.int_w(w_obj) in (1,2,3,4) + assert space.len_w(w_set) == 3 + api.PySet_Clear(w_set) + assert space.len_w(w_set) == 0 + + diff --git a/pypy/module/cpyext/test/test_stringobject.py b/pypy/module/cpyext/test/test_stringobject.py --- a/pypy/module/cpyext/test/test_stringobject.py +++ b/pypy/module/cpyext/test/test_stringobject.py @@ -166,6 +166,20 @@ res = module.test_string_format(1, "xyz") assert res == "bla 1 ble xyz\n" + def test_intern_inplace(self): + module = self.import_extension('foo', [ + ("test_intern_inplace", "METH_O", + ''' + PyObject *s = args; + Py_INCREF(s); + PyString_InternInPlace(&s); + return s; + ''' + ) + ]) + # This does not test much, but at least the refcounts are checked. + assert module.test_intern_inplace('s') == 's' + class TestString(BaseApiTest): def test_string_resize(self, space, api): py_str = new_empty_str(space, 10) diff --git a/pypy/module/cpyext/test/test_typeobject.py b/pypy/module/cpyext/test/test_typeobject.py --- a/pypy/module/cpyext/test/test_typeobject.py +++ b/pypy/module/cpyext/test/test_typeobject.py @@ -425,3 +425,32 @@ ''') obj = module.new_obj() raises(ZeroDivisionError, obj.__setitem__, 5, None) + + def test_tp_iter(self): + module = self.import_extension('foo', [ + ("tp_iter", "METH_O", + ''' + if (!args->ob_type->tp_iter) + { + PyErr_SetNone(PyExc_ValueError); + return NULL; + } + return args->ob_type->tp_iter(args); + ''' + ), + ("tp_iternext", "METH_O", + ''' + if (!args->ob_type->tp_iternext) + { + PyErr_SetNone(PyExc_ValueError); + return NULL; + } + return args->ob_type->tp_iternext(args); + ''' + ) + ]) + l = [1] + it = module.tp_iter(l) + assert type(it) is type(iter([])) + assert module.tp_iternext(it) == 1 + raises(StopIteration, module.tp_iternext, it) diff --git a/pypy/module/cpyext/test/test_unicodeobject.py b/pypy/module/cpyext/test/test_unicodeobject.py --- a/pypy/module/cpyext/test/test_unicodeobject.py +++ b/pypy/module/cpyext/test/test_unicodeobject.py @@ -420,3 +420,20 @@ w_seq = space.wrap([u'a', u'b']) w_joined = api.PyUnicode_Join(w_sep, w_seq) assert space.unwrap(w_joined) == u'ab' + + def test_fromordinal(self, space, api): + w_char = api.PyUnicode_FromOrdinal(65) + assert space.unwrap(w_char) == u'A' + w_char = api.PyUnicode_FromOrdinal(0) + assert space.unwrap(w_char) == u'\0' + w_char = api.PyUnicode_FromOrdinal(0xFFFF) + assert space.unwrap(w_char) == u'\uFFFF' + + def test_replace(self, space, api): + w_str = space.wrap(u"abababab") + w_substr = space.wrap(u"a") + w_replstr = space.wrap(u"z") + assert u"zbzbabab" == space.unwrap( + api.PyUnicode_Replace(w_str, w_substr, w_replstr, 2)) + assert u"zbzbzbzb" == space.unwrap( + 
api.PyUnicode_Replace(w_str, w_substr, w_replstr, -1)) diff --git a/pypy/module/cpyext/unicodeobject.py b/pypy/module/cpyext/unicodeobject.py --- a/pypy/module/cpyext/unicodeobject.py +++ b/pypy/module/cpyext/unicodeobject.py @@ -395,6 +395,16 @@ w_str = space.wrap(rffi.charpsize2str(s, size)) return space.call_method(w_str, 'decode', space.wrap("utf-8")) + at cpython_api([rffi.INT_real], PyObject) +def PyUnicode_FromOrdinal(space, ordinal): + """Create a Unicode Object from the given Unicode code point ordinal. + + The ordinal must be in range(0x10000) on narrow Python builds + (UCS2), and range(0x110000) on wide builds (UCS4). A ValueError is + raised in case it is not.""" + w_ordinal = space.wrap(rffi.cast(lltype.Signed, ordinal)) + return space.call_function(space.builtin.get('unichr'), w_ordinal) + @cpython_api([PyObjectP, Py_ssize_t], rffi.INT_real, error=-1) def PyUnicode_Resize(space, ref, newsize): # XXX always create a new string so far @@ -538,6 +548,15 @@ @cpython_api([PyObject, PyObject], PyObject) def PyUnicode_Join(space, w_sep, w_seq): - """Join a sequence of strings using the given separator and return the resulting - Unicode string.""" + """Join a sequence of strings using the given separator and return + the resulting Unicode string.""" return space.call_method(w_sep, 'join', w_seq) + + at cpython_api([PyObject, PyObject, PyObject, Py_ssize_t], PyObject) +def PyUnicode_Replace(space, w_str, w_substr, w_replstr, maxcount): + """Replace at most maxcount occurrences of substr in str with replstr and + return the resulting Unicode object. maxcount == -1 means replace all + occurrences.""" + return space.call_method(w_str, "replace", w_substr, w_replstr, + space.wrap(maxcount)) + diff --git a/pypy/module/micronumpy/__init__.py b/pypy/module/micronumpy/__init__.py --- a/pypy/module/micronumpy/__init__.py +++ b/pypy/module/micronumpy/__init__.py @@ -85,10 +85,12 @@ ("arccos", "arccos"), ("arcsin", "arcsin"), ("arctan", "arctan"), + ("arccosh", "arccosh"), ("arcsinh", "arcsinh"), ("arctanh", "arctanh"), ("copysign", "copysign"), ("cos", "cos"), + ("cosh", "cosh"), ("divide", "divide"), ("true_divide", "true_divide"), ("equal", "equal"), @@ -108,9 +110,11 @@ ("reciprocal", "reciprocal"), ("sign", "sign"), ("sin", "sin"), + ("sinh", "sinh"), ("subtract", "subtract"), ('sqrt', 'sqrt'), ("tan", "tan"), + ("tanh", "tanh"), ('bitwise_and', 'bitwise_and'), ('bitwise_or', 'bitwise_or'), ('bitwise_xor', 'bitwise_xor'), diff --git a/pypy/module/micronumpy/interp_support.py b/pypy/module/micronumpy/interp_support.py --- a/pypy/module/micronumpy/interp_support.py +++ b/pypy/module/micronumpy/interp_support.py @@ -3,7 +3,7 @@ from pypy.rpython.lltypesystem import lltype, rffi from pypy.module.micronumpy import interp_dtype from pypy.objspace.std.strutil import strip_spaces - +from pypy.rlib import jit FLOAT_SIZE = rffi.sizeof(lltype.Float) @@ -74,16 +74,25 @@ raise OperationError(space.w_ValueError, space.wrap( "string is smaller than requested size")) - a = W_NDimArray([count], dtype=dtype) + a = W_NDimArray(count, [count], dtype=dtype) + fromstring_loop(a, count, dtype, itemsize, s) + return space.wrap(a) + +fromstring_driver = jit.JitDriver(greens=[], reds=['count', 'itemsize', 'dtype', + 'ai', 'a', 's']) + +def fromstring_loop(a, count, dtype, itemsize, s): ai = a.create_iter() - for i in range(count): + i = 0 + while i < count: + fromstring_driver.jit_merge_point(a=a, count=count, dtype=dtype, + itemsize=itemsize, s=s, ai=ai) start = i*itemsize assert start >= 0 val = 
dtype.itemtype.runpack_str(s[start:start + itemsize]) a.dtype.setitem(a, ai.offset, val) ai = ai.next(1) - - return space.wrap(a) + i += 1 @unwrap_spec(s=str, count=int, sep=str) def fromstring(space, s, w_dtype=None, count=-1, sep=''): diff --git a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py --- a/pypy/module/micronumpy/interp_ufuncs.py +++ b/pypy/module/micronumpy/interp_ufuncs.py @@ -435,7 +435,11 @@ ("arcsin", "arcsin", 1, {"promote_to_float": True}), ("arccos", "arccos", 1, {"promote_to_float": True}), ("arctan", "arctan", 1, {"promote_to_float": True}), + ("sinh", "sinh", 1, {"promote_to_float": True}), + ("cosh", "cosh", 1, {"promote_to_float": True}), + ("tanh", "tanh", 1, {"promote_to_float": True}), ("arcsinh", "arcsinh", 1, {"promote_to_float": True}), + ("arccosh", "arccosh", 1, {"promote_to_float": True}), ("arctanh", "arctanh", 1, {"promote_to_float": True}), ]: self.add_ufunc(space, *ufunc_def) diff --git a/pypy/module/micronumpy/test/test_dtypes.py b/pypy/module/micronumpy/test/test_dtypes.py --- a/pypy/module/micronumpy/test/test_dtypes.py +++ b/pypy/module/micronumpy/test/test_dtypes.py @@ -390,6 +390,8 @@ assert type(a[1]) is numpy.float64 assert numpy.dtype(float).type is numpy.float64 + assert "{}".format(numpy.float64(3)) == "3.0" + assert numpy.float64(2.0) == 2.0 assert numpy.float64('23.4') == numpy.float64(23.4) raises(ValueError, numpy.float64, '23.2df') diff --git a/pypy/module/micronumpy/test/test_ufuncs.py b/pypy/module/micronumpy/test/test_ufuncs.py --- a/pypy/module/micronumpy/test/test_ufuncs.py +++ b/pypy/module/micronumpy/test/test_ufuncs.py @@ -310,6 +310,33 @@ b = arctan(a) assert math.isnan(b[0]) + def test_sinh(self): + import math + from _numpypy import array, sinh + + a = array([-1, 0, 1, float('inf'), float('-inf')]) + b = sinh(a) + for i in range(len(a)): + assert b[i] == math.sinh(a[i]) + + def test_cosh(self): + import math + from _numpypy import array, cosh + + a = array([-1, 0, 1, float('inf'), float('-inf')]) + b = cosh(a) + for i in range(len(a)): + assert b[i] == math.cosh(a[i]) + + def test_tanh(self): + import math + from _numpypy import array, tanh + + a = array([-1, 0, 1, float('inf'), float('-inf')]) + b = tanh(a) + for i in range(len(a)): + assert b[i] == math.tanh(a[i]) + def test_arcsinh(self): import math from _numpypy import arcsinh @@ -318,6 +345,15 @@ assert math.asinh(v) == arcsinh(v) assert math.isnan(arcsinh(float("nan"))) + def test_arccosh(self): + import math + from _numpypy import arccosh + + for v in [1.0, 1.1, 2]: + assert math.acosh(v) == arccosh(v) + for v in [-1.0, 0, .99]: + assert math.isnan(arccosh(v)) + def test_arctanh(self): import math from _numpypy import arctanh diff --git a/pypy/module/micronumpy/types.py b/pypy/module/micronumpy/types.py --- a/pypy/module/micronumpy/types.py +++ b/pypy/module/micronumpy/types.py @@ -574,10 +574,28 @@ return math.atan(v) @simple_unary_op + def sinh(self, v): + return math.sinh(v) + + @simple_unary_op + def cosh(self, v): + return math.cosh(v) + + @simple_unary_op + def tanh(self, v): + return math.tanh(v) + + @simple_unary_op def arcsinh(self, v): return math.asinh(v) @simple_unary_op + def arccosh(self, v): + if v < 1.0: + return rfloat.NAN + return math.acosh(v) + + @simple_unary_op def arctanh(self, v): if v == 1.0 or v == -1.0: return math.copysign(rfloat.INFINITY, v) diff --git a/pypy/module/oracle/interp_error.py b/pypy/module/oracle/interp_error.py --- a/pypy/module/oracle/interp_error.py +++ b/pypy/module/oracle/interp_error.py 
@@ -72,7 +72,7 @@ get(space).w_InternalError, space.wrap("No Oracle error?")) - self.code = codeptr[0] + self.code = rffi.cast(lltype.Signed, codeptr[0]) self.w_message = config.w_string(space, textbuf) finally: lltype.free(codeptr, flavor='raw') diff --git a/pypy/module/oracle/interp_variable.py b/pypy/module/oracle/interp_variable.py --- a/pypy/module/oracle/interp_variable.py +++ b/pypy/module/oracle/interp_variable.py @@ -359,14 +359,14 @@ # Verifies that truncation or other problems did not take place on # retrieve. if self.isVariableLength: - if rffi.cast(lltype.Signed, self.returnCode[pos]) != 0: + error_code = rffi.cast(lltype.Signed, self.returnCode[pos]) + if error_code != 0: error = W_Error(space, self.environment, "Variable_VerifyFetch()", 0) - error.code = self.returnCode[pos] + error.code = error_code error.message = space.wrap( "column at array pos %d fetched with error: %d" % - (pos, - rffi.cast(lltype.Signed, self.returnCode[pos]))) + (pos, error_code)) w_error = get(space).w_DatabaseError raise OperationError(get(space).w_DatabaseError, diff --git a/pypy/module/test_lib_pypy/ctypes_tests/test_fastpath.py b/pypy/module/test_lib_pypy/ctypes_tests/test_fastpath.py --- a/pypy/module/test_lib_pypy/ctypes_tests/test_fastpath.py +++ b/pypy/module/test_lib_pypy/ctypes_tests/test_fastpath.py @@ -97,6 +97,16 @@ tf_b.errcheck = errcheck assert tf_b(-126) == 'hello' + def test_array_to_ptr(self): + ARRAY = c_int * 8 + func = dll._testfunc_ai8 + func.restype = POINTER(c_int) + func.argtypes = [ARRAY] + array = ARRAY(1, 2, 3, 4, 5, 6, 7, 8) + ptr = func(array) + assert ptr[0] == 1 + assert ptr[7] == 8 + class TestFallbackToSlowpath(BaseCTypesTestChecker): diff --git a/pypy/module/test_lib_pypy/ctypes_tests/test_prototypes.py b/pypy/module/test_lib_pypy/ctypes_tests/test_prototypes.py --- a/pypy/module/test_lib_pypy/ctypes_tests/test_prototypes.py +++ b/pypy/module/test_lib_pypy/ctypes_tests/test_prototypes.py @@ -246,6 +246,14 @@ def func(): pass CFUNCTYPE(None, c_int * 3)(func) + def test_array_to_ptr_wrongtype(self): + ARRAY = c_byte * 8 + func = testdll._testfunc_ai8 + func.restype = POINTER(c_int) + func.argtypes = [c_int * 8] + array = ARRAY(1, 2, 3, 4, 5, 6, 7, 8) + py.test.raises(ArgumentError, "func(array)") + ################################################################ if __name__ == '__main__': diff --git a/pypy/module/test_lib_pypy/test_datetime.py b/pypy/module/test_lib_pypy/test_datetime.py --- a/pypy/module/test_lib_pypy/test_datetime.py +++ b/pypy/module/test_lib_pypy/test_datetime.py @@ -3,7 +3,7 @@ import py import time -import datetime +from lib_pypy import datetime import copy import os @@ -43,4 +43,4 @@ dt = datetime.datetime.utcnow() assert type(dt.microsecond) is int - copy.copy(dt) \ No newline at end of file + copy.copy(dt) diff --git a/pypy/objspace/fake/objspace.py b/pypy/objspace/fake/objspace.py --- a/pypy/objspace/fake/objspace.py +++ b/pypy/objspace/fake/objspace.py @@ -326,4 +326,5 @@ return w_some_obj() FakeObjSpace.sys = FakeModule() FakeObjSpace.sys.filesystemencoding = 'foobar' +FakeObjSpace.sys.defaultencoding = 'ascii' FakeObjSpace.builtin = FakeModule() diff --git a/pypy/objspace/flow/flowcontext.py b/pypy/objspace/flow/flowcontext.py --- a/pypy/objspace/flow/flowcontext.py +++ b/pypy/objspace/flow/flowcontext.py @@ -410,7 +410,7 @@ w_new = Constant(newvalue) f = self.crnt_frame stack_items_w = f.locals_stack_w - for i in range(f.valuestackdepth-1, f.nlocals-1, -1): + for i in range(f.valuestackdepth-1, f.pycode.co_nlocals-1, -1): w_v = 
stack_items_w[i] if isinstance(w_v, Constant): if w_v.value is oldvalue: diff --git a/pypy/objspace/flow/test/test_framestate.py b/pypy/objspace/flow/test/test_framestate.py --- a/pypy/objspace/flow/test/test_framestate.py +++ b/pypy/objspace/flow/test/test_framestate.py @@ -25,7 +25,7 @@ dummy = Constant(None) #dummy.dummy = True arg_list = ([Variable() for i in range(formalargcount)] + - [dummy] * (frame.nlocals - formalargcount)) + [dummy] * (frame.pycode.co_nlocals - formalargcount)) frame.setfastscope(arg_list) return frame @@ -42,7 +42,7 @@ def test_neq_hacked_framestate(self): frame = self.getframe(self.func_simple) fs1 = FrameState(frame) - frame.locals_stack_w[frame.nlocals-1] = Variable() + frame.locals_stack_w[frame.pycode.co_nlocals-1] = Variable() fs2 = FrameState(frame) assert fs1 != fs2 @@ -55,7 +55,7 @@ def test_union_on_hacked_framestates(self): frame = self.getframe(self.func_simple) fs1 = FrameState(frame) - frame.locals_stack_w[frame.nlocals-1] = Variable() + frame.locals_stack_w[frame.pycode.co_nlocals-1] = Variable() fs2 = FrameState(frame) assert fs1.union(fs2) == fs2 # fs2 is more general assert fs2.union(fs1) == fs2 # fs2 is more general @@ -63,7 +63,7 @@ def test_restore_frame(self): frame = self.getframe(self.func_simple) fs1 = FrameState(frame) - frame.locals_stack_w[frame.nlocals-1] = Variable() + frame.locals_stack_w[frame.pycode.co_nlocals-1] = Variable() fs1.restoreframe(frame) assert fs1 == FrameState(frame) @@ -82,7 +82,7 @@ def test_getoutputargs(self): frame = self.getframe(self.func_simple) fs1 = FrameState(frame) - frame.locals_stack_w[frame.nlocals-1] = Variable() + frame.locals_stack_w[frame.pycode.co_nlocals-1] = Variable() fs2 = FrameState(frame) outputargs = fs1.getoutputargs(fs2) # 'x' -> 'x' is a Variable @@ -92,16 +92,16 @@ def test_union_different_constants(self): frame = self.getframe(self.func_simple) fs1 = FrameState(frame) - frame.locals_stack_w[frame.nlocals-1] = Constant(42) + frame.locals_stack_w[frame.pycode.co_nlocals-1] = Constant(42) fs2 = FrameState(frame) fs3 = fs1.union(fs2) fs3.restoreframe(frame) - assert isinstance(frame.locals_stack_w[frame.nlocals-1], Variable) - # ^^^ generalized + assert isinstance(frame.locals_stack_w[frame.pycode.co_nlocals-1], + Variable) # generalized def test_union_spectag(self): frame = self.getframe(self.func_simple) fs1 = FrameState(frame) - frame.locals_stack_w[frame.nlocals-1] = Constant(SpecTag()) + frame.locals_stack_w[frame.pycode.co_nlocals-1] = Constant(SpecTag()) fs2 = FrameState(frame) assert fs1.union(fs2) is None # UnionError diff --git a/pypy/objspace/std/dictmultiobject.py b/pypy/objspace/std/dictmultiobject.py --- a/pypy/objspace/std/dictmultiobject.py +++ b/pypy/objspace/std/dictmultiobject.py @@ -142,6 +142,17 @@ else: return result + def popitem(self, w_dict): + # this is a bad implementation: if we call popitem() repeatedly, + # it ends up taking n**2 time, because the next() calls below + # will take longer and longer. But all interesting strategies + # provide a better one. 
+ space = self.space + iterator = self.iter(w_dict) + w_key, w_value = iterator.next() + self.delitem(w_dict, w_key) + return (w_key, w_value) + def clear(self, w_dict): strategy = self.space.fromcache(EmptyDictStrategy) storage = strategy.get_empty_storage() diff --git a/pypy/objspace/std/dictproxyobject.py b/pypy/objspace/std/dictproxyobject.py --- a/pypy/objspace/std/dictproxyobject.py +++ b/pypy/objspace/std/dictproxyobject.py @@ -3,7 +3,7 @@ from pypy.objspace.std.dictmultiobject import W_DictMultiObject, IteratorImplementation from pypy.objspace.std.dictmultiobject import DictStrategy from pypy.objspace.std.typeobject import unwrap_cell -from pypy.interpreter.error import OperationError +from pypy.interpreter.error import OperationError, operationerrfmt from pypy.rlib import rerased @@ -44,7 +44,8 @@ raise if not w_type.is_cpytype(): raise - # xxx obscure workaround: allow cpyext to write to type->tp_dict. + # xxx obscure workaround: allow cpyext to write to type->tp_dict + # xxx even in the case of a builtin type. # xxx like CPython, we assume that this is only done early after # xxx the type is created, and we don't invalidate any cache. w_type.dict_w[key] = w_value @@ -86,8 +87,14 @@ for (key, w_value) in self.unerase(w_dict.dstorage).dict_w.iteritems()] def clear(self, w_dict): - self.unerase(w_dict.dstorage).dict_w.clear() - self.unerase(w_dict.dstorage).mutated(None) + space = self.space + w_type = self.unerase(w_dict.dstorage) + if (not space.config.objspace.std.mutable_builtintypes + and not w_type.is_heaptype()): + msg = "can't clear dictionary of type '%s'" + raise operationerrfmt(space.w_TypeError, msg, w_type.name) + w_type.dict_w.clear() + w_type.mutated(None) class DictProxyIteratorImplementation(IteratorImplementation): def __init__(self, space, strategy, dictimplementation): diff --git a/pypy/objspace/std/test/test_dictproxy.py b/pypy/objspace/std/test/test_dictproxy.py --- a/pypy/objspace/std/test/test_dictproxy.py +++ b/pypy/objspace/std/test/test_dictproxy.py @@ -22,6 +22,9 @@ assert NotEmpty.string == 1 raises(TypeError, 'NotEmpty.__dict__.setdefault(15, 1)') + key, value = NotEmpty.__dict__.popitem() + assert (key == 'a' and value == 1) or (key == 'b' and value == 4) + def test_dictproxyeq(self): class a(object): pass @@ -43,6 +46,11 @@ assert s1 == s2 assert s1.startswith('{') and s1.endswith('}') + def test_immutable_dict_on_builtin_type(self): + raises(TypeError, "int.__dict__['a'] = 1") + raises(TypeError, int.__dict__.popitem) + raises(TypeError, int.__dict__.clear) + class AppTestUserObjectMethodCache(AppTestUserObject): def setup_class(cls): cls.space = gettestobjspace( diff --git a/pypy/objspace/std/test/test_typeobject.py b/pypy/objspace/std/test/test_typeobject.py --- a/pypy/objspace/std/test/test_typeobject.py +++ b/pypy/objspace/std/test/test_typeobject.py @@ -993,7 +993,9 @@ raises(TypeError, setattr, list, 'append', 42) raises(TypeError, setattr, list, 'foobar', 42) raises(TypeError, delattr, dict, 'keys') - + raises(TypeError, 'int.__dict__["a"] = 1') + raises(TypeError, 'int.__dict__.clear()') + def test_nontype_in_mro(self): class OldStyle: pass diff --git a/pypy/rlib/objectmodel.py b/pypy/rlib/objectmodel.py --- a/pypy/rlib/objectmodel.py +++ b/pypy/rlib/objectmodel.py @@ -23,9 +23,11 @@ class _Specialize(object): def memo(self): - """ Specialize functions based on argument values. All arguments has - to be constant at the compile time. The whole function call is replaced - by a call result then. 
+ """ Specialize the function based on argument values. All arguments + have to be either constants or PBCs (i.e. instances of classes with a + _freeze_ method returning True). The function call is replaced by + just its result, or in case several PBCs are used, by some fast + look-up of the result. """ def decorated_func(func): func._annspecialcase_ = 'specialize:memo' @@ -33,8 +35,8 @@ return decorated_func def arg(self, *args): - """ Specialize function based on values of given positions of arguments. - They must be compile-time constants in order to work. + """ Specialize the function based on the values of given positions + of arguments. They must be compile-time constants in order to work. There will be a copy of provided function for each combination of given arguments on positions in args (that can lead to @@ -82,8 +84,7 @@ return decorated_func def ll_and_arg(self, *args): - """ This is like ll(), but instead of specializing on all arguments, - specializes on only the arguments at the given positions + """ This is like ll(), and additionally like arg(...). """ def decorated_func(func): func._annspecialcase_ = 'specialize:ll_and_arg' + self._wrap(args) diff --git a/pypy/tool/release/package.py b/pypy/tool/release/package.py --- a/pypy/tool/release/package.py +++ b/pypy/tool/release/package.py @@ -82,6 +82,9 @@ for file in ['LICENSE', 'README']: shutil.copy(str(basedir.join(file)), str(pypydir)) pypydir.ensure('include', dir=True) + if sys.platform == 'win32': + shutil.copyfile(str(pypy_c.dirpath().join("libpypy-c.lib")), + str(pypydir.join('include/python27.lib'))) # we want to put there all *.h and *.inl from trunk/include # and from pypy/_interfaces includedir = basedir.join('include') diff --git a/pypy/translator/c/database.py b/pypy/translator/c/database.py --- a/pypy/translator/c/database.py +++ b/pypy/translator/c/database.py @@ -28,11 +28,13 @@ gctransformer = None def __init__(self, translator=None, standalone=False, + cpython_extension=False, gcpolicyclass=None, thread_enabled=False, sandbox=False): self.translator = translator self.standalone = standalone + self.cpython_extension = cpython_extension self.sandbox = sandbox if gcpolicyclass is None: gcpolicyclass = gc.RefcountingGcPolicy diff --git a/pypy/translator/c/dlltool.py b/pypy/translator/c/dlltool.py --- a/pypy/translator/c/dlltool.py +++ b/pypy/translator/c/dlltool.py @@ -14,11 +14,14 @@ CBuilder.__init__(self, *args, **kwds) def getentrypointptr(self): + entrypoints = [] bk = self.translator.annotator.bookkeeper - graphs = [bk.getdesc(f).cachedgraph(None) for f, _ in self.functions] - return [getfunctionptr(graph) for graph in graphs] + for f, _ in self.functions: + graph = bk.getdesc(f).getuniquegraph() + entrypoints.append(getfunctionptr(graph)) + return entrypoints - def gen_makefile(self, targetdir): + def gen_makefile(self, targetdir, exe_name=None): pass # XXX finish def compile(self): diff --git a/pypy/translator/c/extfunc.py b/pypy/translator/c/extfunc.py --- a/pypy/translator/c/extfunc.py +++ b/pypy/translator/c/extfunc.py @@ -106,7 +106,7 @@ yield ('RPYTHON_EXCEPTION_MATCH', exceptiondata.fn_exception_match) yield ('RPYTHON_TYPE_OF_EXC_INST', exceptiondata.fn_type_of_exc_inst) yield ('RPYTHON_RAISE_OSERROR', exceptiondata.fn_raise_OSError) - if not db.standalone: + if db.cpython_extension: yield ('RPYTHON_PYEXCCLASS2EXC', exceptiondata.fn_pyexcclass2exc) yield ('RPyExceptionOccurred1', exctransformer.rpyexc_occured_ptr.value) diff --git a/pypy/translator/c/gcc/trackgcroot.py 
b/pypy/translator/c/gcc/trackgcroot.py --- a/pypy/translator/c/gcc/trackgcroot.py +++ b/pypy/translator/c/gcc/trackgcroot.py @@ -471,8 +471,8 @@ return [] IGNORE_OPS_WITH_PREFIXES = dict.fromkeys([ - 'cmp', 'test', 'set', 'sahf', 'lahf', 'cltd', 'cld', 'std', - 'rep', 'movs', 'lods', 'stos', 'scas', 'cwtl', 'cwde', 'prefetch', + 'cmp', 'test', 'set', 'sahf', 'lahf', 'cld', 'std', + 'rep', 'movs', 'lods', 'stos', 'scas', 'cwde', 'prefetch', # floating-point operations cannot produce GC pointers 'f', 'cvt', 'ucomi', 'comi', 'subs', 'subp' , 'adds', 'addp', 'xorp', @@ -485,6 +485,8 @@ 'bswap', 'bt', 'rdtsc', 'punpck', 'pshufd', 'pcmp', 'pand', 'psllw', 'pslld', 'psllq', 'paddq', 'pinsr', + # sign-extending moves should not produce GC pointers + 'cbtw', 'cwtl', 'cwtd', 'cltd', 'cltq', 'cqto', # zero-extending moves should not produce GC pointers 'movz', # locked operations should not move GC pointers, at least so far @@ -1695,6 +1697,8 @@ } """ elif self.format in ('elf64', 'darwin64'): + if self.format == 'elf64': # gentoo patch: hardened systems + print >> output, "\t.section .note.GNU-stack,\"\",%progbits" print >> output, "\t.text" print >> output, "\t.globl %s" % _globalname('pypy_asm_stackwalk') _variant(elf64='.type pypy_asm_stackwalk, @function', diff --git a/pypy/translator/c/genc.py b/pypy/translator/c/genc.py --- a/pypy/translator/c/genc.py +++ b/pypy/translator/c/genc.py @@ -111,6 +111,7 @@ _compiled = False modulename = None split = False + cpython_extension = False def __init__(self, translator, entrypoint, config, gcpolicy=None, secondary_entrypoints=()): @@ -138,6 +139,7 @@ raise NotImplementedError("--gcrootfinder=asmgcc requires standalone") db = LowLevelDatabase(translator, standalone=self.standalone, + cpython_extension=self.cpython_extension, gcpolicyclass=gcpolicyclass, thread_enabled=self.config.translation.thread, sandbox=self.config.translation.sandbox) @@ -236,6 +238,8 @@ CBuilder.have___thread = self.translator.platform.check___thread() if not self.standalone: assert not self.config.translation.instrument + if self.cpython_extension: + defines['PYPY_CPYTHON_EXTENSION'] = 1 else: defines['PYPY_STANDALONE'] = db.get(pf) if self.config.translation.instrument: @@ -307,13 +311,18 @@ class CExtModuleBuilder(CBuilder): standalone = False + cpython_extension = True _module = None _wrapper = None def get_eci(self): from distutils import sysconfig python_inc = sysconfig.get_python_inc() - eci = ExternalCompilationInfo(include_dirs=[python_inc]) + eci = ExternalCompilationInfo( + include_dirs=[python_inc], + includes=["Python.h", + ], + ) return eci.merge(CBuilder.get_eci(self)) def getentrypointptr(self): # xxx diff --git a/pypy/translator/c/src/exception.h b/pypy/translator/c/src/exception.h --- a/pypy/translator/c/src/exception.h +++ b/pypy/translator/c/src/exception.h @@ -2,7 +2,7 @@ /************************************************************/ /*** C header subsection: exceptions ***/ -#if !defined(PYPY_STANDALONE) && !defined(PYPY_NOT_MAIN_FILE) +#if defined(PYPY_CPYTHON_EXTENSION) && !defined(PYPY_NOT_MAIN_FILE) PyObject *RPythonError; #endif @@ -74,7 +74,7 @@ RPyRaiseException(RPYTHON_TYPE_OF_EXC_INST(rexc), rexc); } -#ifndef PYPY_STANDALONE +#ifdef PYPY_CPYTHON_EXTENSION void RPyConvertExceptionFromCPython(void) { /* convert the CPython exception to an RPython one */ diff --git a/pypy/translator/c/src/g_include.h b/pypy/translator/c/src/g_include.h --- a/pypy/translator/c/src/g_include.h +++ b/pypy/translator/c/src/g_include.h @@ -2,7 +2,7 @@ 
/************************************************************/ /*** C header file for code produced by genc.py ***/ -#ifndef PYPY_STANDALONE +#ifdef PYPY_CPYTHON_EXTENSION # include "Python.h" # include "compile.h" # include "frameobject.h" diff --git a/pypy/translator/c/src/g_prerequisite.h b/pypy/translator/c/src/g_prerequisite.h --- a/pypy/translator/c/src/g_prerequisite.h +++ b/pypy/translator/c/src/g_prerequisite.h @@ -5,8 +5,6 @@ #ifdef PYPY_STANDALONE # include "src/commondefs.h" -#else -# include "Python.h" #endif #ifdef _WIN32 diff --git a/pypy/translator/c/src/pyobj.h b/pypy/translator/c/src/pyobj.h --- a/pypy/translator/c/src/pyobj.h +++ b/pypy/translator/c/src/pyobj.h @@ -2,7 +2,7 @@ /************************************************************/ /*** C header subsection: untyped operations ***/ /*** as OP_XXX() macros calling the CPython API ***/ - +#ifdef PYPY_CPYTHON_EXTENSION #define op_bool(r,what) { \ int _retval = what; \ @@ -261,3 +261,5 @@ } #endif + +#endif /* PYPY_CPYTHON_EXTENSION */ diff --git a/pypy/translator/c/src/support.h b/pypy/translator/c/src/support.h --- a/pypy/translator/c/src/support.h +++ b/pypy/translator/c/src/support.h @@ -104,7 +104,7 @@ # define RPyBareItem(array, index) ((array)[index]) #endif -#ifndef PYPY_STANDALONE +#ifdef PYPY_CPYTHON_EXTENSION /* prototypes */ diff --git a/pypy/translator/c/test/test_dlltool.py b/pypy/translator/c/test/test_dlltool.py --- a/pypy/translator/c/test/test_dlltool.py +++ b/pypy/translator/c/test/test_dlltool.py @@ -2,7 +2,6 @@ from pypy.translator.c.dlltool import DLLDef from ctypes import CDLL import py -py.test.skip("fix this if needed") class TestDLLTool(object): def test_basic(self): @@ -16,8 +15,8 @@ d = DLLDef('lib', [(f, [int]), (b, [int])]) so = d.compile() dll = CDLL(str(so)) - assert dll.f(3) == 3 - assert dll.b(10) == 12 + assert dll.pypy_g_f(3) == 3 + assert dll.pypy_g_b(10) == 12 def test_split_criteria(self): def f(x): @@ -28,4 +27,5 @@ d = DLLDef('lib', [(f, [int]), (b, [int])]) so = d.compile() - assert py.path.local(so).dirpath().join('implement.c').check() + dirpath = py.path.local(so).dirpath() + assert dirpath.join('translator_c_test_test_dlltool.c').check() diff --git a/pypy/translator/driver.py b/pypy/translator/driver.py --- a/pypy/translator/driver.py +++ b/pypy/translator/driver.py @@ -331,6 +331,7 @@ raise Exception("stand-alone program entry point must return an " "int (and not, e.g., None or always raise an " "exception).") + annotator.complete() annotator.simplify() return s @@ -558,6 +559,9 @@ newsoname = newexename.new(basename=soname.basename) shutil.copy(str(soname), str(newsoname)) self.log.info("copied: %s" % (newsoname,)) + if sys.platform == 'win32': + shutil.copyfile(str(soname.new(ext='lib')), + str(newsoname.new(ext='lib'))) self.c_entryp = newexename self.log.info('usession directory: %s' % (udir,)) self.log.info("created: %s" % (self.c_entryp,)) From noreply at buildbot.pypy.org Wed Feb 22 02:41:20 2012 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Wed, 22 Feb 2012 02:41:20 +0100 (CET) Subject: [pypy-commit] extradoc extradoc: return address Message-ID: <20120222014120.3EEED8203C@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: extradoc Changeset: r4101:3f98dbfa8ce3 Date: 2012-02-21 20:39 -0500 http://bitbucket.org/pypy/extradoc/changeset/3f98dbfa8ce3/ Log: return address diff --git a/talk/pycon2012/tutorial/emails/01_numpy.rst b/talk/pycon2012/tutorial/emails/01_numpy.rst --- a/talk/pycon2012/tutorial/emails/01_numpy.rst +++ 
b/talk/pycon2012/tutorial/emails/01_numpy.rst @@ -18,5 +18,7 @@ few open source application you'd like to see us take a look at, and we'll choose the most popular one. +Please write back to us at alex.gaynor at gmail.com and fijall at gmail.com . + Thanks, Alex Gaynor, Maciej Fijalkoski, Armin Rigo From noreply at buildbot.pypy.org Wed Feb 22 02:41:22 2012 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Wed, 22 Feb 2012 02:41:22 +0100 (CET) Subject: [pypy-commit] extradoc extradoc: merged upstream Message-ID: <20120222014122.7B6808203C@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: extradoc Changeset: r4102:81cb6c1cc6b2 Date: 2012-02-21 20:41 -0500 http://bitbucket.org/pypy/extradoc/changeset/81cb6c1cc6b2/ Log: merged upstream diff --git a/talk/pycon2012/tutorial/emails/01_numpy.rst b/talk/pycon2012/tutorial/emails/01_numpy.rst --- a/talk/pycon2012/tutorial/emails/01_numpy.rst +++ b/talk/pycon2012/tutorial/emails/01_numpy.rst @@ -21,4 +21,4 @@ Please write back to us at alex.gaynor at gmail.com and fijall at gmail.com . Thanks, -Alex Gaynor, Maciej Fijalkoski, Armin Rigo +Alex Gaynor, Maciej Fijalkowski, Armin Rigo diff --git a/talk/sea2012/talk.rst b/talk/sea2012/talk.rst --- a/talk/sea2012/talk.rst +++ b/talk/sea2012/talk.rst @@ -23,6 +23,8 @@ * An open source project with a lot of volunteer effort, released under the MIT license +* Agile development, 13000 unit tests, continuous integration, sprints, distributed team + * I'll talk today about the first part (mostly) PyPy status right now @@ -34,7 +36,7 @@ * Example - real time video processing -* XXX some benchmarks +* 2-300x faster on Python code Why should you care? -------------------- @@ -140,8 +142,6 @@ * Build a tree of operations -XXX a tree picture - * Compile assembler specialized for aliasing and operations * Execute the specialized assembler @@ -158,7 +158,19 @@ Performance comparison ---------------------- -XXX ++---------------------+-------+------+-----+-----------+ +| | NumPy | PyPy | GCC | Pathscale | ++---------------------+-------+------+-----+-----------+ +| ``a+b`` | ++---------------------+-------+------+-----+-----------+ +| ``a+(b+c)`` | ++---------------------+-------+------+-----+-----------+ +| ``(a+b)+((c+d)+e)`` | ++---------------------+-------+------+-----+-----------+ + +|pause| + +* Pathscale is insane, but we'll get there ;-) Status ------ @@ -171,11 +183,6 @@ * Vectorization in progress -Status benchmarks - trivial stuff ---------------------------------- - -XXX - Status benchmarks - slightly more complex ----------------------------------------- @@ -183,10 +190,9 @@ * solutions: - XXX laplace numbers - +---+ - | | - +---+ + +-------+------+-----+-----------+ + | Numpy | PyPy | GCC | Pathscale | + +-------+------+-----+-----------+ Progress plan ------------- From noreply at buildbot.pypy.org Wed Feb 22 02:49:47 2012 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Wed, 22 Feb 2012 02:49:47 +0100 (CET) Subject: [pypy-commit] pypy default: make numpy boxes work with str.format Message-ID: <20120222014947.EAB088203C@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: Changeset: r52754:dde74845c41f Date: 2012-02-21 20:48 -0500 http://bitbucket.org/pypy/pypy/changeset/dde74845c41f/ Log: make numpy boxes work with str.format diff --git a/pypy/module/micronumpy/interp_boxes.py b/pypy/module/micronumpy/interp_boxes.py --- a/pypy/module/micronumpy/interp_boxes.py +++ b/pypy/module/micronumpy/interp_boxes.py @@ -39,10 +39,10 @@ ) def descr_str(self, space): - return 
self.descr_repr(space) + return space.wrap(self.get_dtype(space).itemtype.str_format(self)) - def descr_repr(self, space): - return space.wrap(self.get_dtype(space).itemtype.str_format(self)) + def descr_format(self, space, w_spec): + return space.format(self.item(space), w_spec) def descr_int(self, space): box = self.convert_to(W_LongBox.get_dtype(space)) @@ -194,7 +194,8 @@ __new__ = interp2app(W_GenericBox.descr__new__.im_func), __str__ = interp2app(W_GenericBox.descr_str), - __repr__ = interp2app(W_GenericBox.descr_repr), + __repr__ = interp2app(W_GenericBox.descr_str), + __format__ = interp2app(W_GenericBox.descr_format), __int__ = interp2app(W_GenericBox.descr_int), __float__ = interp2app(W_GenericBox.descr_float), __nonzero__ = interp2app(W_GenericBox.descr_nonzero), diff --git a/pypy/module/micronumpy/test/test_dtypes.py b/pypy/module/micronumpy/test/test_dtypes.py --- a/pypy/module/micronumpy/test/test_dtypes.py +++ b/pypy/module/micronumpy/test/test_dtypes.py @@ -371,7 +371,7 @@ assert type(a[1]) is numpy.float64 assert numpy.dtype(float).type is numpy.float64 - assert "{}".format(numpy.float64(3)) == "3.0" + assert "{:3f}".format(numpy.float64(3)) == "3.000000" assert numpy.float64(2.0) == 2.0 assert numpy.float64('23.4') == numpy.float64(23.4) From noreply at buildbot.pypy.org Wed Feb 22 02:49:49 2012 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Wed, 22 Feb 2012 02:49:49 +0100 (CET) Subject: [pypy-commit] pypy default: merged upstream Message-ID: <20120222014949.29DB68203C@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: Changeset: r52755:6dd80b3ec992 Date: 2012-02-21 20:49 -0500 http://bitbucket.org/pypy/pypy/changeset/6dd80b3ec992/ Log: merged upstream diff --git a/pypy/module/micronumpy/interp_support.py b/pypy/module/micronumpy/interp_support.py --- a/pypy/module/micronumpy/interp_support.py +++ b/pypy/module/micronumpy/interp_support.py @@ -3,7 +3,7 @@ from pypy.rpython.lltypesystem import lltype, rffi from pypy.module.micronumpy import interp_dtype from pypy.objspace.std.strutil import strip_spaces - +from pypy.rlib import jit FLOAT_SIZE = rffi.sizeof(lltype.Float) @@ -72,11 +72,18 @@ "string is smaller than requested size")) a = W_NDimArray(count, [count], dtype=dtype) + fromstring_loop(a, count, dtype, itemsize, s) + return space.wrap(a) + +fromstring_driver = jit.JitDriver(greens=[], reds=['a', 'count', 'dtype', + 'itemsize', 's']) + +def fromstring_loop(a, count, dtype, itemsize, s): for i in range(count): + fromstring_driver.jit_merge_point(a=a, count=count, dtype=dtype, + itemsize=itemsize, s=s) val = dtype.itemtype.runpack_str(s[i*itemsize:i*itemsize + itemsize]) a.dtype.setitem(a.storage, i, val) - - return space.wrap(a) @unwrap_spec(s=str, count=int, sep=str) def fromstring(space, s, w_dtype=None, count=-1, sep=''): diff --git a/pypy/objspace/std/dictmultiobject.py b/pypy/objspace/std/dictmultiobject.py --- a/pypy/objspace/std/dictmultiobject.py +++ b/pypy/objspace/std/dictmultiobject.py @@ -143,6 +143,10 @@ return result def popitem(self, w_dict): + # this is a bad implementation: if we call popitem() repeatedly, + # it ends up taking n**2 time, because the next() calls below + # will take longer and longer. But all interesting strategies + # provide a better one. 
space = self.space iterator = self.iter(w_dict) w_key, w_value = iterator.next() diff --git a/pypy/tool/jitlogparser/parser.py b/pypy/tool/jitlogparser/parser.py --- a/pypy/tool/jitlogparser/parser.py +++ b/pypy/tool/jitlogparser/parser.py @@ -387,7 +387,7 @@ m = re.search('guard \d+', comm) name = m.group(0) else: - name = comm[2:comm.find(':')-1] + name = " ".join(comm[2:].split(" ", 2)[:2]) if name in dumps: bname, start_ofs, dump = dumps[name] loop.force_asm = (lambda dump=dump, start_ofs=start_ofs, From noreply at buildbot.pypy.org Wed Feb 22 04:27:11 2012 From: noreply at buildbot.pypy.org (fijal) Date: Wed, 22 Feb 2012 04:27:11 +0100 (CET) Subject: [pypy-commit] pypy backend-vector-ops: ugh, fix the merge Message-ID: <20120222032711.46B588203C@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: backend-vector-ops Changeset: r52756:939a94b35997 Date: 2012-02-21 19:46 -0700 http://bitbucket.org/pypy/pypy/changeset/939a94b35997/ Log: ugh, fix the merge diff --git a/pypy/module/micronumpy/interp_support.py b/pypy/module/micronumpy/interp_support.py --- a/pypy/module/micronumpy/interp_support.py +++ b/pypy/module/micronumpy/interp_support.py @@ -74,7 +74,7 @@ raise OperationError(space.w_ValueError, space.wrap( "string is smaller than requested size")) - a = W_NDimArray(count, [count], dtype=dtype) + a = W_NDimArray([count], dtype=dtype) fromstring_loop(a, count, dtype, itemsize, s) return space.wrap(a) From noreply at buildbot.pypy.org Wed Feb 22 04:57:52 2012 From: noreply at buildbot.pypy.org (fijal) Date: Wed, 22 Feb 2012 04:57:52 +0100 (CET) Subject: [pypy-commit] pypy backend-vector-ops: oops Message-ID: <20120222035752.A5D6D8203C@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: backend-vector-ops Changeset: r52757:c72d8c439aff Date: 2012-02-21 20:57 -0700 http://bitbucket.org/pypy/pypy/changeset/c72d8c439aff/ Log: oops diff --git a/pypy/module/micronumpy/interp_support.py b/pypy/module/micronumpy/interp_support.py --- a/pypy/module/micronumpy/interp_support.py +++ b/pypy/module/micronumpy/interp_support.py @@ -78,15 +78,16 @@ fromstring_loop(a, count, dtype, itemsize, s) return space.wrap(a) -fromstring_driver = jit.JitDriver(greens=[], reds=['count', 'itemsize', 'dtype', - 'ai', 'a', 's']) +fromstring_driver = jit.JitDriver(greens=[], reds=['count', 'i', 'itemsize', + 'dtype', 'ai', 'a', 's']) def fromstring_loop(a, count, dtype, itemsize, s): ai = a.create_iter() i = 0 while i < count: fromstring_driver.jit_merge_point(a=a, count=count, dtype=dtype, - itemsize=itemsize, s=s, ai=ai) + itemsize=itemsize, s=s, ai=ai, + i=i) start = i*itemsize assert start >= 0 val = dtype.itemtype.runpack_str(s[start:start + itemsize]) From noreply at buildbot.pypy.org Wed Feb 22 16:30:22 2012 From: noreply at buildbot.pypy.org (arigo) Date: Wed, 22 Feb 2012 16:30:22 +0100 (CET) Subject: [pypy-commit] pypy default: Kill this specialization. It's mostly pointless and it gives Message-ID: <20120222153022.EC6508203C@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r52758:e0b455a97502 Date: 2012-02-22 16:24 +0100 http://bitbucket.org/pypy/pypy/changeset/e0b455a97502/ Log: Kill this specialization. It's mostly pointless and it gives occasionally headaches because fatalerror() is called from several levels. (Manual transplant of c1db98c91413 and 8c8b4968177b.) 
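For reference, a minimal sketch of how the two helpers touched by the patch below differ in use; the checking functions here are made up for illustration, only the two debug helpers come from the patch:

    from pypy.rlib.debug import fatalerror, fatalerror_notb

    def check(x):
        if x < 0:
            # prints the RPython traceback, then aborts the process
            fatalerror("unexpected negative value")

    def check_quietly(x):
        if x < 0:
            # aborts directly, without dumping the RPython traceback
            fatalerror_notb("unexpected negative value")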
diff --git a/pypy/jit/backend/llsupport/gc.py b/pypy/jit/backend/llsupport/gc.py --- a/pypy/jit/backend/llsupport/gc.py +++ b/pypy/jit/backend/llsupport/gc.py @@ -1,7 +1,6 @@ import os from pypy.rlib import rgc from pypy.rlib.objectmodel import we_are_translated, specialize -from pypy.rlib.debug import fatalerror from pypy.rlib.rarithmetic import ovfcheck from pypy.rpython.lltypesystem import lltype, llmemory, rffi, rclass, rstr from pypy.rpython.lltypesystem import llgroup diff --git a/pypy/jit/metainterp/warmspot.py b/pypy/jit/metainterp/warmspot.py --- a/pypy/jit/metainterp/warmspot.py +++ b/pypy/jit/metainterp/warmspot.py @@ -453,7 +453,7 @@ if sys.stdout == sys.__stdout__: import pdb; pdb.post_mortem(tb) raise e.__class__, e, tb - fatalerror('~~~ Crash in JIT! %s' % (e,), traceback=True) + fatalerror('~~~ Crash in JIT! %s' % (e,)) crash_in_jit._dont_inline_ = True if self.translator.rtyper.type_system.name == 'lltypesystem': diff --git a/pypy/rlib/debug.py b/pypy/rlib/debug.py --- a/pypy/rlib/debug.py +++ b/pypy/rlib/debug.py @@ -19,14 +19,22 @@ hop.exception_cannot_occur() hop.genop('debug_assert', vlist) -def fatalerror(msg, traceback=False): +def fatalerror(msg): + # print the RPython traceback and abort with a fatal error from pypy.rpython.lltypesystem import lltype from pypy.rpython.lltypesystem.lloperation import llop - if traceback: - llop.debug_print_traceback(lltype.Void) + llop.debug_print_traceback(lltype.Void) llop.debug_fatalerror(lltype.Void, msg) fatalerror._dont_inline_ = True -fatalerror._annspecialcase_ = 'specialize:arg(1)' +fatalerror._annenforceargs_ = [str] + +def fatalerror_notb(msg): + # a variant of fatalerror() that doesn't print the RPython traceback + from pypy.rpython.lltypesystem import lltype + from pypy.rpython.lltypesystem.lloperation import llop + llop.debug_fatalerror(lltype.Void, msg) +fatalerror_notb._dont_inline_ = True +fatalerror_notb._annenforceargs_ = [str] class DebugLog(list): From noreply at buildbot.pypy.org Wed Feb 22 16:30:24 2012 From: noreply at buildbot.pypy.org (arigo) Date: Wed, 22 Feb 2012 16:30:24 +0100 (CET) Subject: [pypy-commit] pypy default: Issue1068: in a pypy translated for x86-32 with SSE2, detect at run-time Message-ID: <20120222153024.2E00282366@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r52759:4320ef8d1ab2 Date: 2012-02-22 16:29 +0100 http://bitbucket.org/pypy/pypy/changeset/4320ef8d1ab2/ Log: Issue1068: in a pypy translated for x86-32 with SSE2, detect at run- time if we really have SSE2, and if not, abort with a nice error message. 
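In outline, the run-time check added by the patch below executes CPUID with EAX=1 and tests bits 25 and 26 of EDX, which advertise SSE and SSE2. Schematically (the names follow the patch; this snippet is only a restatement, not the patch itself):

    features = fnptr()                     # runs MOV EAX,1 / CPUID / returns EDX
    has_sse = bool(features & (1 << 25))   # CPUID.01H:EDX bit 25 = SSE
    has_sse2 = bool(features & (1 << 26))  # CPUID.01H:EDX bit 26 = SSE2
    if not (has_sse and has_sse2):
        fatalerror_notb("This version of PyPy was compiled for a x86 CPU "
                        "supporting SSE2. ...")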
diff --git a/pypy/jit/backend/x86/assembler.py b/pypy/jit/backend/x86/assembler.py --- a/pypy/jit/backend/x86/assembler.py +++ b/pypy/jit/backend/x86/assembler.py @@ -33,7 +33,7 @@ from pypy.jit.backend.x86.support import values_array from pypy.jit.backend.x86 import support from pypy.rlib.debug import (debug_print, debug_start, debug_stop, - have_debug_prints) + have_debug_prints, fatalerror_notb) from pypy.rlib import rgc from pypy.rlib.clibffi import FFI_DEFAULT_ABI from pypy.jit.backend.x86.jump import remap_frame_layout @@ -104,6 +104,7 @@ self._debug = v def setup_once(self): + self._check_sse2() # the address of the function called by 'new' gc_ll_descr = self.cpu.gc_ll_descr gc_ll_descr.initialize() @@ -161,6 +162,28 @@ debug_print(prefix + ':' + str(struct.i)) debug_stop('jit-backend-counts') + _CHECK_SSE2_FUNC_PTR = lltype.Ptr(lltype.FuncType([], lltype.Signed)) + + def _check_sse2(self): + if WORD == 8: + return # all x86-64 CPUs support SSE2 + if not self.cpu.supports_floats: + return # the CPU doesn't support float, so we don't need SSE2 + # + from pypy.jit.backend.x86.detect_sse2 import INSNS + mc = codebuf.MachineCodeBlockWrapper() + for c in INSNS: + mc.writechar(c) + rawstart = mc.materialize(self.cpu.asmmemmgr, []) + fnptr = rffi.cast(self._CHECK_SSE2_FUNC_PTR, rawstart) + features = fnptr() + if bool(features & (1<<25)) and bool(features & (1<<26)): + return # CPU supports SSE2 + fatalerror_notb( + "This version of PyPy was compiled for a x86 CPU supporting SSE2.\n" + "Your CPU is too old. Please translate a PyPy with the option:\n" + "--jit-backend=x86-without-sse2") + def _build_float_constants(self): datablockwrapper = MachineDataBlockWrapper(self.cpu.asmmemmgr, []) float_constants = datablockwrapper.malloc_aligned(32, alignment=16) diff --git a/pypy/jit/backend/x86/detect_sse2.py b/pypy/jit/backend/x86/detect_sse2.py --- a/pypy/jit/backend/x86/detect_sse2.py +++ b/pypy/jit/backend/x86/detect_sse2.py @@ -1,17 +1,18 @@ import autopath -from pypy.rpython.lltypesystem import lltype, rffi -from pypy.rlib.rmmap import alloc, free +INSNS = ("\xB8\x01\x00\x00\x00" # MOV EAX, 1 + "\x53" # PUSH EBX + "\x0F\xA2" # CPUID + "\x5B" # POP EBX + "\x92" # XCHG EAX, EDX + "\xC3") # RET def detect_sse2(): + from pypy.rpython.lltypesystem import lltype, rffi + from pypy.rlib.rmmap import alloc, free data = alloc(4096) pos = 0 - for c in ("\xB8\x01\x00\x00\x00" # MOV EAX, 1 - "\x53" # PUSH EBX - "\x0F\xA2" # CPUID - "\x5B" # POP EBX - "\x92" # XCHG EAX, EDX - "\xC3"): # RET + for c in INSNS: data[pos] = c pos += 1 fnptr = rffi.cast(lltype.Ptr(lltype.FuncType([], lltype.Signed)), data) From noreply at buildbot.pypy.org Wed Feb 22 17:03:18 2012 From: noreply at buildbot.pypy.org (fijal) Date: Wed, 22 Feb 2012 17:03:18 +0100 (CET) Subject: [pypy-commit] pypy default: sorry sorry fix the translation Message-ID: <20120222160318.5FC178203C@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: Changeset: r52760:4c6c003b391e Date: 2012-02-22 09:02 -0700 http://bitbucket.org/pypy/pypy/changeset/4c6c003b391e/ Log: sorry sorry fix the translation diff --git a/pypy/module/micronumpy/interp_support.py b/pypy/module/micronumpy/interp_support.py --- a/pypy/module/micronumpy/interp_support.py +++ b/pypy/module/micronumpy/interp_support.py @@ -75,8 +75,8 @@ fromstring_loop(a, count, dtype, itemsize, s) return space.wrap(a) -fromstring_driver = jit.JitDriver(greens=[], reds=['a', 'count', 'dtype', - 'itemsize', 's']) +fromstring_driver = jit.JitDriver(greens=[], reds=['a', 'count', 'itemsize', + 
'dtype', 's']) def fromstring_loop(a, count, dtype, itemsize, s): for i in range(count): From noreply at buildbot.pypy.org Wed Feb 22 17:05:37 2012 From: noreply at buildbot.pypy.org (fijal) Date: Wed, 22 Feb 2012 17:05:37 +0100 (CET) Subject: [pypy-commit] pypy default: pff sorry Message-ID: <20120222160537.D31598203C@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: Changeset: r52761:0e27c93ef49e Date: 2012-02-22 09:05 -0700 http://bitbucket.org/pypy/pypy/changeset/0e27c93ef49e/ Log: pff sorry diff --git a/pypy/module/micronumpy/interp_support.py b/pypy/module/micronumpy/interp_support.py --- a/pypy/module/micronumpy/interp_support.py +++ b/pypy/module/micronumpy/interp_support.py @@ -75,13 +75,13 @@ fromstring_loop(a, count, dtype, itemsize, s) return space.wrap(a) -fromstring_driver = jit.JitDriver(greens=[], reds=['a', 'count', 'itemsize', - 'dtype', 's']) +fromstring_driver = jit.JitDriver(greens=[], reds=['a', 'count', 'i', + 'itemsize', 'dtype', 's']) def fromstring_loop(a, count, dtype, itemsize, s): for i in range(count): fromstring_driver.jit_merge_point(a=a, count=count, dtype=dtype, - itemsize=itemsize, s=s) + itemsize=itemsize, s=s, i=i) val = dtype.itemtype.runpack_str(s[i*itemsize:i*itemsize + itemsize]) a.dtype.setitem(a.storage, i, val) From noreply at buildbot.pypy.org Wed Feb 22 17:39:46 2012 From: noreply at buildbot.pypy.org (fijal) Date: Wed, 22 Feb 2012 17:39:46 +0100 (CET) Subject: [pypy-commit] pypy default: of course a is a pointer Message-ID: <20120222163946.083A88203C@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: Changeset: r52762:732abe5a499d Date: 2012-02-22 09:39 -0700 http://bitbucket.org/pypy/pypy/changeset/732abe5a499d/ Log: of course a is a pointer diff --git a/pypy/module/micronumpy/interp_support.py b/pypy/module/micronumpy/interp_support.py --- a/pypy/module/micronumpy/interp_support.py +++ b/pypy/module/micronumpy/interp_support.py @@ -75,8 +75,8 @@ fromstring_loop(a, count, dtype, itemsize, s) return space.wrap(a) -fromstring_driver = jit.JitDriver(greens=[], reds=['a', 'count', 'i', - 'itemsize', 'dtype', 's']) +fromstring_driver = jit.JitDriver(greens=[], reds=['count', 'i', 'itemsize', + 'dtype', 's', 'a']) def fromstring_loop(a, count, dtype, itemsize, s): for i in range(count): From noreply at buildbot.pypy.org Wed Feb 22 18:20:06 2012 From: noreply at buildbot.pypy.org (fijal) Date: Wed, 22 Feb 2012 18:20:06 +0100 (CET) Subject: [pypy-commit] extradoc extradoc: hopefully the final version Message-ID: <20120222172006.5892A8203C@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: extradoc Changeset: r4103:b5d022ba4121 Date: 2012-02-22 10:12 -0700 http://bitbucket.org/pypy/extradoc/changeset/b5d022ba4121/ Log: hopefully the final version diff --git a/talk/sea2012/talk.pdf b/talk/sea2012/talk.pdf new file mode 100644 index 0000000000000000000000000000000000000000..83bf2426853359a02307c3cd6e43a78a9c22980d GIT binary patch [cut] diff --git a/talk/sea2012/talk.rst b/talk/sea2012/talk.rst --- a/talk/sea2012/talk.rst +++ b/talk/sea2012/talk.rst @@ -158,19 +158,32 @@ Performance comparison ---------------------- -+---------------------+-------+------+-----+-----------+ -| | NumPy | PyPy | GCC | Pathscale | -+---------------------+-------+------+-----+-----------+ -| ``a+b`` | -+---------------------+-------+------+-----+-----------+ -| ``a+(b+c)`` | -+---------------------+-------+------+-----+-----------+ -| ``(a+b)+((c+d)+e)`` | -+---------------------+-------+------+-----+-----------+ 
++---------------+-------+------+--------------+ +| | NumPy | PyPy | GCC | ++---------------+-------+------+--------------+ +| ``a+b`` | 0.6s | 0.4s | 0.3s (0.25s) | ++---------------+-------+------+--------------+ +| ``a+b+c`` | 1.9s | 0.5s | 0.7s (0.32s) | ++---------------+-------+------+--------------+ +| ``5+`` | 3.2s | 0.8s | 1.7s (0.51s) | ++---------------+-------+------+--------------+ -|pause| +* Pathscale is actually slower -* Pathscale is insane, but we'll get there ;-) +Performance comparion SSE +------------------------- + +* Branch only so far! + ++---------------+----------+------+--------------+ +| | PyPy SSE | PyPy | GCC | ++---------------+----------+------+--------------+ +| ``a+b`` | 0.3s | 0.4s | 0.3s (0.25s) | ++---------------+----------+------+--------------+ +| ``a+b+c`` | 0.35s | 0.5s | 0.7s (0.32s) | ++---------------+----------+------+--------------+ +| ``5+`` | 0.36s | 0.8s | 1.7s (0.51s) | ++---------------+----------+------+--------------+ Status ------ @@ -189,10 +202,16 @@ * laplace solution * solutions: + + * NumPy: 4.3s - +-------+------+-----+-----------+ - | Numpy | PyPy | GCC | Pathscale | - +-------+------+-----+-----------+ + * looped: too long to run ~2100s + + * PyPy: 1.6s + + * looped: 2.5s + + * C: 0.9s Progress plan ------------- @@ -233,11 +252,6 @@ * We're running a fundraiser, make your employer donate money -Extra - SSE preliminary results -------------------------------- - -XXX - Q&A --- From noreply at buildbot.pypy.org Wed Feb 22 18:20:07 2012 From: noreply at buildbot.pypy.org (fijal) Date: Wed, 22 Feb 2012 18:20:07 +0100 (CET) Subject: [pypy-commit] extradoc extradoc: merge Message-ID: <20120222172007.8855182366@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: extradoc Changeset: r4104:603e6334bd6c Date: 2012-02-22 10:19 -0700 http://bitbucket.org/pypy/extradoc/changeset/603e6334bd6c/ Log: merge diff --git a/talk/pycon2012/tutorial/emails/01_numpy.rst b/talk/pycon2012/tutorial/emails/01_numpy.rst --- a/talk/pycon2012/tutorial/emails/01_numpy.rst +++ b/talk/pycon2012/tutorial/emails/01_numpy.rst @@ -18,5 +18,7 @@ few open source application you'd like to see us take a look at, and we'll choose the most popular one. +Please write back to us at alex.gaynor at gmail.com and fijall at gmail.com . + Thanks, Alex Gaynor, Maciej Fijalkowski, Armin Rigo From noreply at buildbot.pypy.org Wed Feb 22 18:32:17 2012 From: noreply at buildbot.pypy.org (arigo) Date: Wed, 22 Feb 2012 18:32:17 +0100 (CET) Subject: [pypy-commit] pypy default: Try heuristically to check when running on top of CPython Message-ID: <20120222173217.AB0788203C@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r52763:1c381a211cb1 Date: 2012-02-22 18:23 +0100 http://bitbucket.org/pypy/pypy/changeset/1c381a211cb1/ Log: Try heuristically to check when running on top of CPython that the arguments of jitdriver.jit_merge_point() have been specified in the correct order. Unsure that all cases are covered correctly, but should mostly work. diff --git a/pypy/rlib/jit.py b/pypy/rlib/jit.py --- a/pypy/rlib/jit.py +++ b/pypy/rlib/jit.py @@ -450,6 +450,7 @@ assert v in self.reds self._alllivevars = dict.fromkeys( [name for name in self.greens + self.reds if '.' 
not in name]) + self._heuristic_order = {} # check if 'reds' and 'greens' are ordered self._make_extregistryentries() self.get_jitcell_at = get_jitcell_at self.set_jitcell_at = set_jitcell_at @@ -461,13 +462,59 @@ def _freeze_(self): return True + def _check_arguments(self, livevars): + assert dict.fromkeys(livevars) == self._alllivevars + # check heuristically that 'reds' and 'greens' are ordered as + # the JIT will need them to be: first INTs, then REFs, then + # FLOATs. + if len(self._heuristic_order) < len(livevars): + from pypy.rlib.rarithmetic import (r_singlefloat, r_longlong, + r_ulonglong) + added = False + for var, value in livevars.items(): + if var not in self._heuristic_order: + if isinstance(value, (r_longlong, r_ulonglong)): + assert 0, ("should not pass a r_longlong argument for " + "now, because on 32-bit machines it would " + "need to be ordered as a FLOAT") + elif isinstance(value, (int, long, r_singlefloat)): + kind = '1:INT' + elif isinstance(value, float): + kind = '3:FLOAT' + elif isinstance(value, (str, unicode)) and len(value) != 1: + kind = '2:REF' + elif isinstance(value, (list, dict)): + kind = '2:REF' + elif (hasattr(value, '__class__') + and value.__class__.__module__ != '__builtin__'): + if hasattr(value, '_freeze_'): + continue # value._freeze_() is better not called + elif getattr(value, '_alloc_flavor_', 'gc') == 'gc': + kind = '2:REF' + else: + kind = '1:INT' + else: + continue + self._heuristic_order[var] = kind + added = True + if added: + for color in ('reds', 'greens'): + lst = getattr(self, color) + allkinds = [self._heuristic_order.get(name, '?') + for name in lst] + kinds = [k for k in allkinds if k != '?'] + assert kinds == sorted(kinds), ( + "bad order of %s variables in the jitdriver: " + "must be INTs, REFs, FLOATs; got %r" % + (color, allkinds)) + def jit_merge_point(_self, **livevars): # special-cased by ExtRegistryEntry - assert dict.fromkeys(livevars) == _self._alllivevars + _self._check_arguments(livevars) def can_enter_jit(_self, **livevars): # special-cased by ExtRegistryEntry - assert dict.fromkeys(livevars) == _self._alllivevars + _self._check_arguments(livevars) def loop_header(self): # special-cased by ExtRegistryEntry diff --git a/pypy/rlib/test/test_jit.py b/pypy/rlib/test/test_jit.py --- a/pypy/rlib/test/test_jit.py +++ b/pypy/rlib/test/test_jit.py @@ -146,6 +146,38 @@ res = self.interpret(f, [-234]) assert res == 1 + def test_argument_order_ok(self): + myjitdriver = JitDriver(greens=['i1', 'r1', 'f1'], reds=[]) + class A(object): + pass + myjitdriver.jit_merge_point(i1=42, r1=A(), f1=3.5) + # assert did not raise + + def test_argument_order_wrong(self): + myjitdriver = JitDriver(greens=['r1', 'i1', 'f1'], reds=[]) + class A(object): + pass + e = raises(AssertionError, + myjitdriver.jit_merge_point, i1=42, r1=A(), f1=3.5) + + def test_argument_order_more_precision_later(self): + myjitdriver = JitDriver(greens=['r1', 'i1', 'r2', 'f1'], reds=[]) + class A(object): + pass + myjitdriver.jit_merge_point(i1=42, r1=None, r2=None, f1=3.5) + e = raises(AssertionError, + myjitdriver.jit_merge_point, i1=42, r1=A(), r2=None, f1=3.5) + assert "got ['2:REF', '1:INT', '?', '3:FLOAT']" in repr(e.value) + + def test_argument_order_more_precision_later_2(self): + myjitdriver = JitDriver(greens=['r1', 'i1', 'r2', 'f1'], reds=[]) + class A(object): + pass + myjitdriver.jit_merge_point(i1=42, r1=None, r2=A(), f1=3.5) + e = raises(AssertionError, + myjitdriver.jit_merge_point, i1=42, r1=A(), r2=None, f1=3.5) + assert "got ['2:REF', '1:INT', '2:REF', 
'3:FLOAT']" in repr(e.value) + class TestJITLLtype(BaseTestJIT, LLRtypeMixin): pass From noreply at buildbot.pypy.org Wed Feb 22 18:36:03 2012 From: noreply at buildbot.pypy.org (fijal) Date: Wed, 22 Feb 2012 18:36:03 +0100 (CET) Subject: [pypy-commit] pypy default: one more to ignore Message-ID: <20120222173603.424168203C@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: Changeset: r52764:5dc82977af96 Date: 2012-02-22 10:35 -0700 http://bitbucket.org/pypy/pypy/changeset/5dc82977af96/ Log: one more to ignore diff --git a/pypy/translator/c/gcc/trackgcroot.py b/pypy/translator/c/gcc/trackgcroot.py --- a/pypy/translator/c/gcc/trackgcroot.py +++ b/pypy/translator/c/gcc/trackgcroot.py @@ -484,7 +484,7 @@ 'shl', 'shr', 'sal', 'sar', 'rol', 'ror', 'mul', 'imul', 'div', 'idiv', 'bswap', 'bt', 'rdtsc', 'punpck', 'pshufd', 'pcmp', 'pand', 'psllw', 'pslld', 'psllq', - 'paddq', 'pinsr', + 'paddq', 'pinsr', 'pmul', # sign-extending moves should not produce GC pointers 'cbtw', 'cwtl', 'cwtd', 'cltd', 'cltq', 'cqto', # zero-extending moves should not produce GC pointers From noreply at buildbot.pypy.org Wed Feb 22 18:36:04 2012 From: noreply at buildbot.pypy.org (fijal) Date: Wed, 22 Feb 2012 18:36:04 +0100 (CET) Subject: [pypy-commit] pypy default: merge Message-ID: <20120222173604.73DFD8203C@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: Changeset: r52765:bd01f89519db Date: 2012-02-22 10:35 -0700 http://bitbucket.org/pypy/pypy/changeset/bd01f89519db/ Log: merge diff --git a/pypy/rlib/jit.py b/pypy/rlib/jit.py --- a/pypy/rlib/jit.py +++ b/pypy/rlib/jit.py @@ -450,6 +450,7 @@ assert v in self.reds self._alllivevars = dict.fromkeys( [name for name in self.greens + self.reds if '.' not in name]) + self._heuristic_order = {} # check if 'reds' and 'greens' are ordered self._make_extregistryentries() self.get_jitcell_at = get_jitcell_at self.set_jitcell_at = set_jitcell_at @@ -461,13 +462,59 @@ def _freeze_(self): return True + def _check_arguments(self, livevars): + assert dict.fromkeys(livevars) == self._alllivevars + # check heuristically that 'reds' and 'greens' are ordered as + # the JIT will need them to be: first INTs, then REFs, then + # FLOATs. 
+ if len(self._heuristic_order) < len(livevars): + from pypy.rlib.rarithmetic import (r_singlefloat, r_longlong, + r_ulonglong) + added = False + for var, value in livevars.items(): + if var not in self._heuristic_order: + if isinstance(value, (r_longlong, r_ulonglong)): + assert 0, ("should not pass a r_longlong argument for " + "now, because on 32-bit machines it would " + "need to be ordered as a FLOAT") + elif isinstance(value, (int, long, r_singlefloat)): + kind = '1:INT' + elif isinstance(value, float): + kind = '3:FLOAT' + elif isinstance(value, (str, unicode)) and len(value) != 1: + kind = '2:REF' + elif isinstance(value, (list, dict)): + kind = '2:REF' + elif (hasattr(value, '__class__') + and value.__class__.__module__ != '__builtin__'): + if hasattr(value, '_freeze_'): + continue # value._freeze_() is better not called + elif getattr(value, '_alloc_flavor_', 'gc') == 'gc': + kind = '2:REF' + else: + kind = '1:INT' + else: + continue + self._heuristic_order[var] = kind + added = True + if added: + for color in ('reds', 'greens'): + lst = getattr(self, color) + allkinds = [self._heuristic_order.get(name, '?') + for name in lst] + kinds = [k for k in allkinds if k != '?'] + assert kinds == sorted(kinds), ( + "bad order of %s variables in the jitdriver: " + "must be INTs, REFs, FLOATs; got %r" % + (color, allkinds)) + def jit_merge_point(_self, **livevars): # special-cased by ExtRegistryEntry - assert dict.fromkeys(livevars) == _self._alllivevars + _self._check_arguments(livevars) def can_enter_jit(_self, **livevars): # special-cased by ExtRegistryEntry - assert dict.fromkeys(livevars) == _self._alllivevars + _self._check_arguments(livevars) def loop_header(self): # special-cased by ExtRegistryEntry diff --git a/pypy/rlib/test/test_jit.py b/pypy/rlib/test/test_jit.py --- a/pypy/rlib/test/test_jit.py +++ b/pypy/rlib/test/test_jit.py @@ -146,6 +146,38 @@ res = self.interpret(f, [-234]) assert res == 1 + def test_argument_order_ok(self): + myjitdriver = JitDriver(greens=['i1', 'r1', 'f1'], reds=[]) + class A(object): + pass + myjitdriver.jit_merge_point(i1=42, r1=A(), f1=3.5) + # assert did not raise + + def test_argument_order_wrong(self): + myjitdriver = JitDriver(greens=['r1', 'i1', 'f1'], reds=[]) + class A(object): + pass + e = raises(AssertionError, + myjitdriver.jit_merge_point, i1=42, r1=A(), f1=3.5) + + def test_argument_order_more_precision_later(self): + myjitdriver = JitDriver(greens=['r1', 'i1', 'r2', 'f1'], reds=[]) + class A(object): + pass + myjitdriver.jit_merge_point(i1=42, r1=None, r2=None, f1=3.5) + e = raises(AssertionError, + myjitdriver.jit_merge_point, i1=42, r1=A(), r2=None, f1=3.5) + assert "got ['2:REF', '1:INT', '?', '3:FLOAT']" in repr(e.value) + + def test_argument_order_more_precision_later_2(self): + myjitdriver = JitDriver(greens=['r1', 'i1', 'r2', 'f1'], reds=[]) + class A(object): + pass + myjitdriver.jit_merge_point(i1=42, r1=None, r2=A(), f1=3.5) + e = raises(AssertionError, + myjitdriver.jit_merge_point, i1=42, r1=A(), r2=None, f1=3.5) + assert "got ['2:REF', '1:INT', '2:REF', '3:FLOAT']" in repr(e.value) + class TestJITLLtype(BaseTestJIT, LLRtypeMixin): pass From noreply at buildbot.pypy.org Wed Feb 22 18:41:01 2012 From: noreply at buildbot.pypy.org (fijal) Date: Wed, 22 Feb 2012 18:41:01 +0100 (CET) Subject: [pypy-commit] pypy default: few more Message-ID: <20120222174101.52CB88203C@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: Changeset: r52766:6fbb1c3a6c85 Date: 2012-02-22 18:38 +0100 
http://bitbucket.org/pypy/pypy/changeset/6fbb1c3a6c85/ Log: few more diff --git a/pypy/translator/c/gcc/trackgcroot.py b/pypy/translator/c/gcc/trackgcroot.py --- a/pypy/translator/c/gcc/trackgcroot.py +++ b/pypy/translator/c/gcc/trackgcroot.py @@ -472,7 +472,7 @@ IGNORE_OPS_WITH_PREFIXES = dict.fromkeys([ 'cmp', 'test', 'set', 'sahf', 'lahf', 'cld', 'std', - 'rep', 'movs', 'lods', 'stos', 'scas', 'cwde', 'prefetch', + 'rep', 'movs', 'movhp', 'lods', 'stos', 'scas', 'cwde', 'prefetch', # floating-point operations cannot produce GC pointers 'f', 'cvt', 'ucomi', 'comi', 'subs', 'subp' , 'adds', 'addp', 'xorp', @@ -484,7 +484,7 @@ 'shl', 'shr', 'sal', 'sar', 'rol', 'ror', 'mul', 'imul', 'div', 'idiv', 'bswap', 'bt', 'rdtsc', 'punpck', 'pshufd', 'pcmp', 'pand', 'psllw', 'pslld', 'psllq', - 'paddq', 'pinsr', 'pmul', + 'paddq', 'pinsr', 'pmul', 'psrl', # sign-extending moves should not produce GC pointers 'cbtw', 'cwtl', 'cwtd', 'cltd', 'cltq', 'cqto', # zero-extending moves should not produce GC pointers From noreply at buildbot.pypy.org Wed Feb 22 18:47:21 2012 From: noreply at buildbot.pypy.org (arigo) Date: Wed, 22 Feb 2012 18:47:21 +0100 (CET) Subject: [pypy-commit] pypy default: Fix the old 'generation' GC to not use env.estimate_best_nursery_size() Message-ID: <20120222174721.5A75B8203C@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r52767:0d3e428250f8 Date: 2012-02-22 18:47 +0100 http://bitbucket.org/pypy/pypy/changeset/0d3e428250f8/ Log: Fix the old 'generation' GC to not use env.estimate_best_nursery_size() any more. diff --git a/pypy/rpython/memory/gc/generation.py b/pypy/rpython/memory/gc/generation.py --- a/pypy/rpython/memory/gc/generation.py +++ b/pypy/rpython/memory/gc/generation.py @@ -41,8 +41,8 @@ # the following values override the default arguments of __init__ when # translating to a real backend. - TRANSLATION_PARAMS = {'space_size': 8*1024*1024, # XXX adjust - 'nursery_size': 896*1024, + TRANSLATION_PARAMS = {'space_size': 8*1024*1024, # 8 MB + 'nursery_size': 3*1024*1024, # 3 MB 'min_nursery_size': 48*1024, 'auto_nursery_size': True} @@ -92,8 +92,9 @@ # the GC is fully setup now. The rest can make use of it. if self.auto_nursery_size: newsize = nursery_size_from_env() - if newsize <= 0: - newsize = env.estimate_best_nursery_size() + #if newsize <= 0: + # ---disabled--- just use the default value. + # newsize = env.estimate_best_nursery_size() if newsize > 0: self.set_nursery_size(newsize) From noreply at buildbot.pypy.org Wed Feb 22 19:18:24 2012 From: noreply at buildbot.pypy.org (arigo) Date: Wed, 22 Feb 2012 19:18:24 +0100 (CET) Subject: [pypy-commit] pypy default: Can't use a "for" loop around a jit_merge_point. Message-ID: <20120222181824.53AF38203C@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r52768:75870d03aa4f Date: 2012-02-22 18:51 +0100 http://bitbucket.org/pypy/pypy/changeset/75870d03aa4f/ Log: Can't use a "for" loop around a jit_merge_point. 
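For context on the patch below: jit_merge_point() is expected at the top of an explicit while loop, with every green and red variable passed as a keyword argument, which is why the for loop has to be rewritten. A generic sketch of the pattern (the driver, variable names and helper here are hypothetical):

    from pypy.rlib import jit

    driver = jit.JitDriver(greens=['code'], reds=['i', 'n', 'frame'])

    def run(code, n, frame):
        i = 0
        while i < n:                 # explicit while loop, not a for loop
            driver.jit_merge_point(code=code, i=i, n=n, frame=frame)
            execute_step(code, frame, i)   # hypothetical helper
            i += 1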
diff --git a/pypy/module/micronumpy/interp_support.py b/pypy/module/micronumpy/interp_support.py --- a/pypy/module/micronumpy/interp_support.py +++ b/pypy/module/micronumpy/interp_support.py @@ -79,11 +79,13 @@ 'dtype', 's', 'a']) def fromstring_loop(a, count, dtype, itemsize, s): - for i in range(count): + i = 0 + while i < count: fromstring_driver.jit_merge_point(a=a, count=count, dtype=dtype, itemsize=itemsize, s=s, i=i) val = dtype.itemtype.runpack_str(s[i*itemsize:i*itemsize + itemsize]) a.dtype.setitem(a.storage, i, val) + i += 1 @unwrap_spec(s=str, count=int, sep=str) def fromstring(space, s, w_dtype=None, count=-1, sep=''): From noreply at buildbot.pypy.org Wed Feb 22 20:05:04 2012 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Wed, 22 Feb 2012 20:05:04 +0100 (CET) Subject: [pypy-commit] pypy default: Fix indexing with numpy boxes. Also remove a long dead test. Message-ID: <20120222190504.EE9CF8203C@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: Changeset: r52769:37fb24cc3dde Date: 2012-02-22 14:04 -0500 http://bitbucket.org/pypy/pypy/changeset/37fb24cc3dde/ Log: Fix indexing with numpy boxes. Also remove a long dead test. diff --git a/pypy/module/micronumpy/interp_boxes.py b/pypy/module/micronumpy/interp_boxes.py --- a/pypy/module/micronumpy/interp_boxes.py +++ b/pypy/module/micronumpy/interp_boxes.py @@ -1,6 +1,6 @@ from pypy.interpreter.baseobjspace import Wrappable from pypy.interpreter.error import operationerrfmt -from pypy.interpreter.gateway import interp2app +from pypy.interpreter.gateway import interp2app, unwrap_spec from pypy.interpreter.typedef import TypeDef from pypy.objspace.std.floattype import float_typedef from pypy.objspace.std.inttype import int_typedef @@ -29,7 +29,6 @@ def convert_to(self, dtype): return dtype.box(self.value) - class W_GenericBox(Wrappable): _attrs_ = () @@ -187,6 +186,10 @@ descr__new__, get_dtype = new_dtype_getter("float64") + at unwrap_spec(self=W_GenericBox) +def descr_index(space, self): + return space.index(self.item(space)) + W_GenericBox.typedef = TypeDef("generic", __module__ = "numpypy", @@ -246,6 +249,8 @@ W_BoolBox.typedef = TypeDef("bool_", W_GenericBox.typedef, __module__ = "numpypy", __new__ = interp2app(W_BoolBox.descr__new__.im_func), + + __index__ = interp2app(descr_index), ) W_NumberBox.typedef = TypeDef("number", W_GenericBox.typedef, @@ -267,36 +272,43 @@ W_Int8Box.typedef = TypeDef("int8", W_SignedIntegerBox.typedef, __module__ = "numpypy", __new__ = interp2app(W_Int8Box.descr__new__.im_func), + __index__ = interp2app(descr_index), ) W_UInt8Box.typedef = TypeDef("uint8", W_UnsignedIntegerBox.typedef, __module__ = "numpypy", __new__ = interp2app(W_UInt8Box.descr__new__.im_func), + __index__ = interp2app(descr_index), ) W_Int16Box.typedef = TypeDef("int16", W_SignedIntegerBox.typedef, __module__ = "numpypy", __new__ = interp2app(W_Int16Box.descr__new__.im_func), + __index__ = interp2app(descr_index), ) W_UInt16Box.typedef = TypeDef("uint16", W_UnsignedIntegerBox.typedef, __module__ = "numpypy", __new__ = interp2app(W_UInt16Box.descr__new__.im_func), + __index__ = interp2app(descr_index), ) W_Int32Box.typedef = TypeDef("int32", (W_SignedIntegerBox.typedef,) + MIXIN_32, __module__ = "numpypy", __new__ = interp2app(W_Int32Box.descr__new__.im_func), + __index__ = interp2app(descr_index), ) W_UInt32Box.typedef = TypeDef("uint32", W_UnsignedIntegerBox.typedef, __module__ = "numpypy", __new__ = interp2app(W_UInt32Box.descr__new__.im_func), + __index__ = interp2app(descr_index), ) W_Int64Box.typedef = 
TypeDef("int64", (W_SignedIntegerBox.typedef,) + MIXIN_64, __module__ = "numpypy", __new__ = interp2app(W_Int64Box.descr__new__.im_func), + __index__ = interp2app(descr_index), ) if LONG_BIT == 32: @@ -309,6 +321,7 @@ W_UInt64Box.typedef = TypeDef("uint64", W_UnsignedIntegerBox.typedef, __module__ = "numpypy", __new__ = interp2app(W_UInt64Box.descr__new__.im_func), + __index__ = interp2app(descr_index), ) W_InexactBox.typedef = TypeDef("inexact", W_NumberBox.typedef, diff --git a/pypy/module/micronumpy/test/test_dtypes.py b/pypy/module/micronumpy/test/test_dtypes.py --- a/pypy/module/micronumpy/test/test_dtypes.py +++ b/pypy/module/micronumpy/test/test_dtypes.py @@ -389,9 +389,9 @@ assert b.m() == 12 def test_long_as_index(self): - skip("waiting for removal of multimethods of __index__") - from _numpypy import int_ + from _numpypy import int_, float64 assert (1, 2, 3)[int_(1)] == 2 + raises(TypeError, lambda: (1, 2, 3)[float64(1)]) def test_int(self): import sys diff --git a/pypy/module/micronumpy/test/test_zjit.py b/pypy/module/micronumpy/test/test_zjit.py --- a/pypy/module/micronumpy/test/test_zjit.py +++ b/pypy/module/micronumpy/test/test_zjit.py @@ -479,38 +479,3 @@ 'int_sub': 3, 'jump': 1, 'setinteriorfield_raw': 1}) - - -class TestNumpyOld(LLJitMixin): - def setup_class(cls): - py.test.skip("old") - from pypy.module.micronumpy.compile import FakeSpace - from pypy.module.micronumpy.interp_dtype import get_dtype_cache - - cls.space = FakeSpace() - cls.float64_dtype = get_dtype_cache(cls.space).w_float64dtype - - def test_int32_sum(self): - py.test.skip("pypy/jit/backend/llimpl.py needs to be changed to " - "deal correctly with int dtypes for this test to " - "work. skip for now until someone feels up to the task") - space = self.space - float64_dtype = self.float64_dtype - int32_dtype = self.int32_dtype - - def f(n): - if NonConstant(False): - dtype = float64_dtype - else: - dtype = int32_dtype - ar = W_NDimArray(n, [n], dtype=dtype) - i = 0 - while i < n: - ar.get_concrete().setitem(i, int32_dtype.box(7)) - i += 1 - v = ar.descr_add(space, ar).descr_sum(space) - assert isinstance(v, IntObject) - return v.intval - - result = self.meta_interp(f, [5], listops=True, backendopt=True) - assert result == f(5) From noreply at buildbot.pypy.org Wed Feb 22 21:19:38 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Wed, 22 Feb 2012 21:19:38 +0100 (CET) Subject: [pypy-commit] pypy sepcomp2: Extract information from the first rtyper, but don't use it Message-ID: <20120222201938.4CC868203C@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: sepcomp2 Changeset: r52770:410f58726ea8 Date: 2012-02-22 21:17 +0100 http://bitbucket.org/pypy/pypy/changeset/410f58726ea8/ Log: Extract information from the first rtyper, but don't use it to build the import module. 
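Schematically, the split made by the patch below (names as in the patch, call sites simplified): the class repr is captured while the first translation's rtyper is still available, and building the import module later only reads the saved value:

    # during the first translation, with a live rtyper:
    class_info.save_repr(rtyper)            # stores the instance lowleveltype

    # later, when building the import module, the rtyper is no longer needed:
    structptr = class_info.make_repr(mod)   # uses the saved self.classrepr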
diff --git a/pypy/translator/c/exportinfo.py b/pypy/translator/c/exportinfo.py --- a/pypy/translator/c/exportinfo.py +++ b/pypy/translator/c/exportinfo.py @@ -73,14 +73,17 @@ constructor._always_inline_ = True constructor.argtypes = self.cls.__init__.argtypes return constructor + + def save_repr(self, rtyper): + bookkeeper = rtyper.annotator.bookkeeper + classdef = bookkeeper.getuniqueclassdef(self.cls) + self.classrepr = rtyper.getrepr(model.SomeInstance(classdef) + ).lowleveltype - def make_repr(self, module, rtyper): + def make_repr(self, module): """Returns the class repr, but also installs a Controller that will intercept all operations on the class.""" - bookkeeper = rtyper.annotator.bookkeeper - classdef = bookkeeper.getuniqueclassdef(self.cls) - classrepr = rtyper.getrepr(model.SomeInstance(classdef)).lowleveltype - STRUCTPTR = classrepr + STRUCTPTR = self.classrepr constructor = getattr(module, self.constructor_name) @@ -166,6 +169,9 @@ """Builds an object with all exported functions.""" rtyper = builder.db.translator.rtyper + for clsname, class_info in self.classes.items(): + class_info.save_repr(rtyper) + exported_funcptr = self.get_lowlevel_functions( builder.translator.annotator) # Map exported functions to the names given by the translator. @@ -199,7 +205,7 @@ func = make_llexternal_function(import_name, funcptr, import_eci) setattr(mod, funcname, func) for clsname, class_info in self.classes.items(): - structptr = class_info.make_repr(mod, rtyper) + structptr = class_info.make_repr(mod) setattr(mod, clsname, structptr) return mod From noreply at buildbot.pypy.org Wed Feb 22 21:19:39 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Wed, 22 Feb 2012 21:19:39 +0100 (CET) Subject: [pypy-commit] pypy sepcomp2: Make better use of controller: no need to access the class Message-ID: <20120222201939.D024B8203C@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: sepcomp2 Changeset: r52771:d772c8bbd652 Date: 2012-02-22 21:17 +0100 http://bitbucket.org/pypy/pypy/changeset/d772c8bbd652/ Log: Make better use of controller: no need to access the class through the 'Module', the one accessible through RPython already redirects annotations. 
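In effect (a schematic reading of the patch below), the ControllerEntry is now registered about the exported class itself, so RPython code in a second module can instantiate the class directly instead of fetching a low-level repr from the module object:

    class Entry(ControllerEntry):
        _about_ = Struct              # the class, not its low-level repr
        _controller_ = ClassController

    # client code; both spellings end up in the same exported constructor:
    s = Struct(3.0)
    t = firstmodule.Struct(5.5)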
diff --git a/pypy/translator/c/exportinfo.py b/pypy/translator/c/exportinfo.py --- a/pypy/translator/c/exportinfo.py +++ b/pypy/translator/c/exportinfo.py @@ -80,7 +80,7 @@ self.classrepr = rtyper.getrepr(model.SomeInstance(classdef) ).lowleveltype - def make_repr(self, module): + def make_controller(self, module): """Returns the class repr, but also installs a Controller that will intercept all operations on the class.""" STRUCTPTR = self.classrepr @@ -94,10 +94,10 @@ return constructor(*args) class Entry(ControllerEntry): - _about_ = STRUCTPTR + _about_ = self.cls _controller_ = ClassController - return STRUCTPTR + return self.cls class ModuleExportInfo: @@ -205,7 +205,7 @@ func = make_llexternal_function(import_name, funcptr, import_eci) setattr(mod, funcname, func) for clsname, class_info in self.classes.items(): - structptr = class_info.make_repr(mod) + structptr = class_info.make_controller(mod) setattr(mod, clsname, structptr) return mod diff --git a/pypy/translator/c/test/test_export.py b/pypy/translator/c/test/test_export.py --- a/pypy/translator/c/test/test_export.py +++ b/pypy/translator/c/test/test_export.py @@ -85,13 +85,12 @@ @export(Struct, Struct, int) def f(s, t, v): return s.x + t.x + v - firstmodule = self.compile_module("first", f=f, S=Struct) + firstmodule = self.compile_module("first", f=f, Struct=Struct) - S = firstmodule.S @export() def g(): - s = S(3.0) - t = S(5.5) + s = Struct(3.0) + t = firstmodule.Struct(5.5) return firstmodule.f(s, t, 7) secondmodule = self.compile_module("second", g=g) assert secondmodule.g() == 70.3 From noreply at buildbot.pypy.org Wed Feb 22 21:19:41 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Wed, 22 Feb 2012 21:19:41 +0100 (CET) Subject: [pypy-commit] pypy sepcomp2: Refactor and create a FunctionExportInfo similar to the ClassExportInfo. Message-ID: <20120222201941.105308203C@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: sepcomp2 Changeset: r52772:1516c6c5964a Date: 2012-02-22 21:17 +0100 http://bitbucket.org/pypy/pypy/changeset/1516c6c5964a/ Log: Refactor and create a FunctionExportInfo similar to the ClassExportInfo. These classes have methods for the different stages: - save_repr() runs in the context of the first translation. - make_external_function() runs in the context of the second translation, and replaces the RPython function/class with something that calls into the first library. The idea is that after save_repr(), the object can be pickled and the second translation done in another process. 
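        A rough sketch of the intended two-stage flow, pieced together from this
        log and from test_export.py in this branch (export and compile_module are
        the helpers the tests use; the pickling step between the two translations
        is only planned here, not implemented by this changeset):

            # --- first translation: build the first library, record the reprs ---
            class Struct:
                @export(float)
                def __init__(self, x):
                    self.x = x + 23.4
            @export(Struct, Struct, int)
            def f(s, t, v):
                return s.x + t.x + v
            firstmodule = compile_module("first", f=f, Struct=Struct)
            # save_repr() has now captured the low-level types; the idea is
            # that the export info could be pickled at this point and the
            # second translation run in another process.

            # --- second translation: link against the first library ---
            @export()
            def g():
                s = Struct(3.0)
                t = Struct(5.5)
                return f(s, t, 7)     # compiled to an rffi.llexternal call
            secondmodule = compile_module("second", g=g)
            assert secondmodule.g() == 62.3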
diff --git a/pypy/translator/c/exportinfo.py b/pypy/translator/c/exportinfo.py --- a/pypy/translator/c/exportinfo.py +++ b/pypy/translator/c/exportinfo.py @@ -4,7 +4,7 @@ from pypy.rpython.controllerentry import ( Controller, ControllerEntry, SomeControlledInstance) from pypy.rpython.extregistry import ExtRegistryEntry -from pypy.rlib.objectmodel import instantiate +from pypy.rlib.objectmodel import instantiate, specialize from pypy.rlib.unroll import unrolling_iterable from pypy.tool.sourcetools import func_with_new_name from pypy.translator.tool.cbuild import ExternalCompilationInfo @@ -51,6 +51,68 @@ return func +class FunctionExportInfo: + def __init__(self, name, func): + self.name = name + self.func = func + + def save_repr(self, builder): + bk = builder.translator.annotator.bookkeeper + desc = bk.getdesc(self.func) + if isinstance(desc, description.FunctionDesc): + graph = desc.getuniquegraph() + funcptr = getfunctionptr(graph) + else: + raise NotImplementedError + self.external_name = builder.db.get(funcptr) + self.functype = lltype.typeOf(funcptr) + + def make_llexternal_function(self, eci): + functype = self.functype + imported_func = rffi.llexternal( + self.external_name, functype.TO.ARGS, functype.TO.RESULT, + compilation_info=eci, + ) + ARGS = functype.TO.ARGS + unrolling_ARGS = unrolling_iterable(enumerate(ARGS)) + def wrapper(*args): + real_args = () + for i, TARGET in unrolling_ARGS: + arg = args[i] + if isinstance(TARGET, lltype.Ptr): # XXX more precise check? + arg = self.make_ll_import_arg_converter(TARGET)(arg) + + real_args = real_args + (arg,) + res = imported_func(*real_args) + return res + wrapper._always_inline_ = True + return func_with_new_name(wrapper, self.external_name) + + @staticmethod + @specialize.memo() + def make_ll_import_arg_converter(TARGET): + from pypy.annotation import model + + def convert(x): + UNUSED + + class Entry(ExtRegistryEntry): + _about_ = convert + + def compute_result_annotation(self, s_arg): + if not (isinstance(s_arg, SomeControlledInstance) and + s_arg.s_real_obj.ll_ptrtype == TARGET): + raise TypeError("Expected a proxy for %s" % (TARGET,)) + return model.lltype_to_annotation(TARGET) + + def specialize_call(self, hop): + [v_instance] = hop.inputargs(*hop.args_r) + return hop.genop('force_cast', [v_instance], + resulttype=TARGET) + + return convert + + class ClassExportInfo: def __init__(self, name, cls): self.name = name @@ -69,12 +131,12 @@ miniglobals = {'cls': self.cls, 'instantiate': instantiate} exec source.compile() in miniglobals constructor = miniglobals[self.constructor_name] - constructor._annspecialcase_ = 'specialize:ll' constructor._always_inline_ = True constructor.argtypes = self.cls.__init__.argtypes return constructor - def save_repr(self, rtyper): + def save_repr(self, builder): + rtyper = builder.db.translator.rtyper bookkeeper = rtyper.annotator.bookkeeper classdef = bookkeeper.getuniqueclassdef(self.cls) self.classrepr = rtyper.getrepr(model.SomeInstance(classdef) @@ -113,7 +175,7 @@ def add_function(self, name, func): """Adds a function to export.""" - self.functions[name] = func + self.functions[name] = FunctionExportInfo(name, func) def add_class(self, name, cls): """Adds a class to export.""" @@ -126,17 +188,19 @@ # annotate constructors of exported classes for name, class_info in self.classes.items(): constructor = class_info.make_constructor() - self.functions[constructor.__name__] = constructor + self.add_function(constructor.__name__, constructor) # annotate functions with signatures - for name, func in 
self.functions.items(): + for name, func_info in self.functions.items(): + func = func_info.func if hasattr(func, 'argtypes'): annotator.build_types(func, func.argtypes, complete_now=False) annotator.complete() # Ensure that functions without signature are not constant-folded - for funcname, func in self.functions.items(): + for name, func_info in self.functions.items(): + func = func_info.func if not hasattr(func, 'argtypes'): # build a list of arguments where constants are erased newargs = [] @@ -149,40 +213,19 @@ # and reflow annotator.build_types(func, newargs) - def get_lowlevel_functions(self, annotator): - """Builds a map of low_level objects.""" - bk = annotator.bookkeeper - - exported_funcptr = {} - for name, item in self.functions.items(): - desc = bk.getdesc(item) - if isinstance(desc, description.FunctionDesc): - graph = desc.getuniquegraph() - funcptr = getfunctionptr(graph) - else: - raise NotImplementedError - - exported_funcptr[name] = funcptr - return exported_funcptr - def make_import_module(self, builder): """Builds an object with all exported functions.""" - rtyper = builder.db.translator.rtyper - - for clsname, class_info in self.classes.items(): - class_info.save_repr(rtyper) - - exported_funcptr = self.get_lowlevel_functions( - builder.translator.annotator) - # Map exported functions to the names given by the translator. - node_names = dict( - (funcname, builder.db.get(funcptr)) - for funcname, funcptr in exported_funcptr.items()) + for name, class_info in self.classes.items(): + class_info.save_repr(builder) + for name, func_info in self.functions.items(): + func_info.save_repr(builder) # Declarations of functions defined in the first module. forwards = [] + node_names = set(func_info.external_name + for func_info in self.functions.values()) for node in builder.db.globalcontainers(): - if node.nodekind == 'func' and node.name in node_names.values(): + if node.nodekind == 'func' and node.name in node_names: forwards.append('\n'.join(node.forward_declaration())) so_name = py.path.local(builder.so_name) @@ -200,58 +243,12 @@ class Module(object): __file__ = builder.so_name mod = Module() - for funcname, funcptr in exported_funcptr.items(): - import_name = node_names[funcname] - func = make_llexternal_function(import_name, funcptr, import_eci) - setattr(mod, funcname, func) - for clsname, class_info in self.classes.items(): + for name, func_info in self.functions.items(): + funcptr = func_info.make_llexternal_function(import_eci) + setattr(mod, name, funcptr) + for name, class_info in self.classes.items(): structptr = class_info.make_controller(mod) - setattr(mod, clsname, structptr) + setattr(mod, name, structptr) return mod -def make_ll_import_arg_converter(TARGET): - from pypy.annotation import model - - def convert(x): - UNUSED - - class Entry(ExtRegistryEntry): - _about_ = convert - - def compute_result_annotation(self, s_arg): - if not (isinstance(s_arg, SomeControlledInstance) and - s_arg.s_real_obj.ll_ptrtype == TARGET): - raise TypeError("Expected a proxy for %s" % (TARGET,)) - return model.lltype_to_annotation(TARGET) - - def specialize_call(self, hop): - [v_instance] = hop.inputargs(*hop.args_r) - return hop.genop('force_cast', [v_instance], - resulttype=TARGET) - - return convert -make_ll_import_arg_converter._annspecialcase_ = 'specialize:memo' - -def make_llexternal_function(name, funcptr, eci): - functype = lltype.typeOf(funcptr) - imported_func = rffi.llexternal( - name, functype.TO.ARGS, functype.TO.RESULT, - compilation_info=eci, - ) - ARGS = 
functype.TO.ARGS - unrolling_ARGS = unrolling_iterable(enumerate(ARGS)) - def wrapper(*args): - real_args = () - for i, TARGET in unrolling_ARGS: - arg = args[i] - if isinstance(TARGET, lltype.Ptr): # XXX more precise check? - arg = make_ll_import_arg_converter(TARGET)(arg) - - real_args = real_args + (arg,) - res = imported_func(*real_args) - return res - wrapper._annspecialcase_ = 'specialize:ll' - wrapper._always_inline_ = True - return func_with_new_name(wrapper, name) - diff --git a/pypy/translator/c/test/test_export.py b/pypy/translator/c/test/test_export.py --- a/pypy/translator/c/test/test_export.py +++ b/pypy/translator/c/test/test_export.py @@ -27,7 +27,8 @@ t.buildrtyper().specialize() backend_optimizations(t) - functions = [(f, None) for f in export_info.functions.values()] + functions = [(info.func, None) + for info in export_info.functions.values()] builder = CLibraryBuilder(t, None, config=t.config, name='lib' + modulename, functions=functions) From noreply at buildbot.pypy.org Wed Feb 22 21:19:42 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Wed, 22 Feb 2012 21:19:42 +0100 (CET) Subject: [pypy-commit] pypy sepcomp2: No need to call functions and classes through the "import module", Message-ID: <20120222201942.413AA8203C@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: sepcomp2 Changeset: r52773:8c0e4f28c231 Date: 2012-02-22 21:17 +0100 http://bitbucket.org/pypy/pypy/changeset/8c0e4f28c231/ Log: No need to call functions and classes through the "import module", the original RPython objects are correctly annotated to redirect calls. diff --git a/pypy/translator/c/exportinfo.py b/pypy/translator/c/exportinfo.py --- a/pypy/translator/c/exportinfo.py +++ b/pypy/translator/c/exportinfo.py @@ -1,9 +1,11 @@ from pypy.annotation import model, description +from pypy.annotation.signature import annotation from pypy.rpython.typesystem import getfunctionptr from pypy.rpython.lltypesystem import lltype, rffi from pypy.rpython.controllerentry import ( Controller, ControllerEntry, SomeControlledInstance) from pypy.rpython.extregistry import ExtRegistryEntry +from pypy.rpython.extfunc import ExtFuncEntry from pypy.rlib.objectmodel import instantiate, specialize from pypy.rlib.unroll import unrolling_iterable from pypy.tool.sourcetools import func_with_new_name @@ -67,6 +69,19 @@ self.external_name = builder.db.get(funcptr) self.functype = lltype.typeOf(funcptr) + def register_external(self, eci): + llimpl = self.make_llexternal_function(eci) + functype = self.functype + class FuncEntry(ExtFuncEntry): + _about_ = self.func + name = self.name + def normalize_args(self, *args_s): + return args_s # accept any argument unmodified + signature_result = annotation(functype.TO.RESULT) + lltypeimpl = staticmethod(llimpl) + + return llimpl + def make_llexternal_function(self, eci): functype = self.functype imported_func = rffi.llexternal( @@ -244,7 +259,7 @@ __file__ = builder.so_name mod = Module() for name, func_info in self.functions.items(): - funcptr = func_info.make_llexternal_function(import_eci) + funcptr = func_info.register_external(import_eci) setattr(mod, name, funcptr) for name, class_info in self.classes.items(): structptr = class_info.make_controller(mod) diff --git a/pypy/translator/c/test/test_export.py b/pypy/translator/c/test/test_export.py --- a/pypy/translator/c/test/test_export.py +++ b/pypy/translator/c/test/test_export.py @@ -3,6 +3,8 @@ from pypy.translator.c.dlltool import CLibraryBuilder from pypy.translator.tool.cbuild import 
ExternalCompilationInfo from pypy.translator.backendopt.all import backend_optimizations +from pypy.rlib.objectmodel import we_are_translated +from pypy.rpython.test.test_llinterp import interpret import sys import types @@ -101,3 +103,25 @@ # Bad argument type, should not translate return firstmodule.f(1, 2, 3) raises(TypeError, self.compile_module, "third", g2=g2) + + def test_without_module_container(self): + # It's not necessary to fetch the functions from some + # container, the RPython calls are automatically redirected. + class Struct: + @export(float) + def __init__(self, x): + self.x = x + 23.4 + @export(Struct, Struct, int) + def f(s, t, v): + assert we_are_translated() + return s.x + t.x + v + self.compile_module("first", f=f, Struct=Struct) + + @export() + def g(): + s = Struct(3.0) + t = Struct(5.5) + return f(s, t, 7) + mod = self.compile_module("second", g=g) + assert mod.g() == 62.3 + # XXX How can we check that the code of f() was not translated again? From noreply at buildbot.pypy.org Wed Feb 22 23:24:31 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Wed, 22 Feb 2012 23:24:31 +0100 (CET) Subject: [pypy-commit] pypy sepcomp2: Implement attribute access Message-ID: <20120222222431.36F9C8203C@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: sepcomp2 Changeset: r52774:094c275fd0b4 Date: 2012-02-22 21:44 +0100 http://bitbucket.org/pypy/pypy/changeset/094c275fd0b4/ Log: Implement attribute access diff --git a/pypy/translator/c/exportinfo.py b/pypy/translator/c/exportinfo.py --- a/pypy/translator/c/exportinfo.py +++ b/pypy/translator/c/exportinfo.py @@ -153,8 +153,8 @@ def save_repr(self, builder): rtyper = builder.db.translator.rtyper bookkeeper = rtyper.annotator.bookkeeper - classdef = bookkeeper.getuniqueclassdef(self.cls) - self.classrepr = rtyper.getrepr(model.SomeInstance(classdef) + self.classdef = bookkeeper.getuniqueclassdef(self.cls) + self.classrepr = rtyper.getrepr(model.SomeInstance(self.classdef) ).lowleveltype def make_controller(self, module): @@ -170,6 +170,16 @@ def new(self, *args): return constructor(*args) + def install_attribute(name): + def getter(self, obj): + return getattr(obj, 'inst_' + name) + setattr(ClassController, 'get_' + name, getter) + def setter(self, obj, value): + return getattr(obj, 'inst_' + name, value) + setattr(ClassController, 'set_' + name, getter) + for name, attrdef in self.classdef.attrs.items(): + install_attribute(name) + class Entry(ControllerEntry): _about_ = self.cls _controller_ = ClassController diff --git a/pypy/translator/c/test/test_export.py b/pypy/translator/c/test/test_export.py --- a/pypy/translator/c/test/test_export.py +++ b/pypy/translator/c/test/test_export.py @@ -125,3 +125,21 @@ mod = self.compile_module("second", g=g) assert mod.g() == 62.3 # XXX How can we check that the code of f() was not translated again? 
+ + def test_attribute_access(self): + class Struct: + @export(float) + def __init__(self, x): + self.x = x + @export(Struct) + def increment(s): + s.x += 33.2 + self.compile_module("first", Struct=Struct, increment=increment) + + @export() + def g(): + s = Struct(3.0) + increment(s) + return s.x + mod = self.compile_module("second", g=g) + assert mod.g() == 36.2 From noreply at buildbot.pypy.org Wed Feb 22 23:24:32 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Wed, 22 Feb 2012 23:24:32 +0100 (CET) Subject: [pypy-commit] pypy sepcomp2: Implement method calls across dll boundaries Message-ID: <20120222222432.6996682366@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: sepcomp2 Changeset: r52775:cdad5637ce91 Date: 2012-02-22 23:21 +0100 http://bitbucket.org/pypy/pypy/changeset/cdad5637ce91/ Log: Implement method calls across dll boundaries diff --git a/pypy/translator/c/exportinfo.py b/pypy/translator/c/exportinfo.py --- a/pypy/translator/c/exportinfo.py +++ b/pypy/translator/c/exportinfo.py @@ -115,8 +115,10 @@ _about_ = convert def compute_result_annotation(self, s_arg): - if not (isinstance(s_arg, SomeControlledInstance) and - s_arg.s_real_obj.ll_ptrtype == TARGET): + if not ((isinstance(s_arg, SomeControlledInstance) and + s_arg.s_real_obj.ll_ptrtype == TARGET) or + (isinstance(s_arg, model.SomePtr) and + s_arg.ll_ptrtype == TARGET)): raise TypeError("Expected a proxy for %s" % (TARGET,)) return model.lltype_to_annotation(TARGET) @@ -134,7 +136,7 @@ self.cls = cls def make_constructor(self): - self.constructor_name = "__new__%s" % (self.name,) + function_name = "__new__%s" % (self.name,) nbargs = len(self.cls.__init__.argtypes) args = ', '.join(['arg%d' % d for d in range(nbargs)]) source = py.code.Source(r""" @@ -142,14 +144,25 @@ obj = instantiate(cls) obj.__init__(%s) return obj - """ % (self.constructor_name, args, args)) + """ % (function_name, args, args)) miniglobals = {'cls': self.cls, 'instantiate': instantiate} exec source.compile() in miniglobals - constructor = miniglobals[self.constructor_name] + constructor = miniglobals[function_name] constructor._always_inline_ = True constructor.argtypes = self.cls.__init__.argtypes return constructor + def make_exported_methods(self): + self.methods = {} + for meth_name, method in self.cls.__dict__.items(): + if not getattr(method, 'exported', None): + continue + if meth_name == '__init__': + method = self.make_constructor() + else: + method.argtypes = (self.cls,) + method.argtypes + self.methods[meth_name] = method + def save_repr(self, builder): rtyper = builder.db.translator.rtyper bookkeeper = rtyper.annotator.bookkeeper @@ -162,14 +175,16 @@ will intercept all operations on the class.""" STRUCTPTR = self.classrepr - constructor = getattr(module, self.constructor_name) - class ClassController(Controller): knowntype = STRUCTPTR + def install_constructor(constructor): def new(self, *args): return constructor(*args) - + ClassController.new = new + if '__init__' in self.methods: + install_constructor(self.methods['__init__']) + def install_attribute(name): def getter(self, obj): return getattr(obj, 'inst_' + name) @@ -180,6 +195,13 @@ for name, attrdef in self.classdef.attrs.items(): install_attribute(name) + def install_method(name, method): + def method_call(self, obj, *args): + method(obj, *args) + setattr(ClassController, 'method_' + name, method_call) + for name, method in self.methods.items(): + install_method(name, method) + class Entry(ControllerEntry): _about_ = self.cls _controller_ = 
ClassController @@ -210,10 +232,12 @@ """Annotate all exported functions.""" bk = annotator.bookkeeper - # annotate constructors of exported classes + # annotate methods of exported classes for name, class_info in self.classes.items(): - constructor = class_info.make_constructor() - self.add_function(constructor.__name__, constructor) + class_info.make_exported_methods() + for meth_name, method in class_info.methods.items(): + self.add_function(meth_name, method) + # annotate functions with signatures for name, func_info in self.functions.items(): diff --git a/pypy/translator/c/test/test_export.py b/pypy/translator/c/test/test_export.py --- a/pypy/translator/c/test/test_export.py +++ b/pypy/translator/c/test/test_export.py @@ -143,3 +143,22 @@ return s.x mod = self.compile_module("second", g=g) assert mod.g() == 36.2 + + def test_method_call(self): + class Struct: + @export(float) + def __init__(self, x): + self.x = x + @export() + def increment(self): + self.x += 33.2 + self.compile_module("first", Struct=Struct) + + @export() + def g(): + s = Struct(3.0) + s.increment() + return s.x + mod = self.compile_module("second", g=g) + assert mod.g() == 36.2 + From noreply at buildbot.pypy.org Wed Feb 22 23:24:33 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Wed, 22 Feb 2012 23:24:33 +0100 (CET) Subject: [pypy-commit] pypy sepcomp2: clean whitespace Message-ID: <20120222222433.9EE1A8203C@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: sepcomp2 Changeset: r52776:8f7ccec230cd Date: 2012-02-22 23:23 +0100 http://bitbucket.org/pypy/pypy/changeset/8f7ccec230cd/ Log: clean whitespace diff --git a/pypy/translator/c/exportinfo.py b/pypy/translator/c/exportinfo.py --- a/pypy/translator/c/exportinfo.py +++ b/pypy/translator/c/exportinfo.py @@ -79,7 +79,7 @@ return args_s # accept any argument unmodified signature_result = annotation(functype.TO.RESULT) lltypeimpl = staticmethod(llimpl) - + return llimpl def make_llexternal_function(self, eci): @@ -169,7 +169,7 @@ self.classdef = bookkeeper.getuniqueclassdef(self.cls) self.classrepr = rtyper.getrepr(model.SomeInstance(self.classdef) ).lowleveltype - + def make_controller(self, module): """Returns the class repr, but also installs a Controller that will intercept all operations on the class.""" @@ -184,7 +184,7 @@ ClassController.new = new if '__init__' in self.methods: install_constructor(self.methods['__init__']) - + def install_attribute(name): def getter(self, obj): return getattr(obj, 'inst_' + name) @@ -237,7 +237,7 @@ class_info.make_exported_methods() for meth_name, method in class_info.methods.items(): self.add_function(meth_name, method) - + # annotate functions with signatures for name, func_info in self.functions.items(): @@ -298,6 +298,5 @@ for name, class_info in self.classes.items(): structptr = class_info.make_controller(mod) setattr(mod, name, structptr) - + return mod - diff --git a/pypy/translator/c/test/test_export.py b/pypy/translator/c/test/test_export.py --- a/pypy/translator/c/test/test_export.py +++ b/pypy/translator/c/test/test_export.py @@ -55,7 +55,7 @@ def f(x): return x + 42.3 firstmodule = self.compile_module("first", f=f) - + # call it from a function compiled in another module @export() def g(): @@ -72,7 +72,7 @@ def f2(): f(1.0) firstmodule = self.compile_module("first", f=f, f2=f2) - + @export() def g(): return firstmodule.f(41) @@ -89,7 +89,7 @@ def f(s, t, v): return s.x + t.x + v firstmodule = self.compile_module("first", f=f, Struct=Struct) - + @export() def g(): s = Struct(3.0) @@ -116,7 
+116,7 @@ assert we_are_translated() return s.x + t.x + v self.compile_module("first", f=f, Struct=Struct) - + @export() def g(): s = Struct(3.0) @@ -161,4 +161,3 @@ return s.x mod = self.compile_module("second", g=g) assert mod.g() == 36.2 - From noreply at buildbot.pypy.org Wed Feb 22 23:24:35 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Wed, 22 Feb 2012 23:24:35 +0100 (CET) Subject: [pypy-commit] pypy sepcomp2: hg merge default Message-ID: <20120222222435.71C758203C@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: sepcomp2 Changeset: r52777:3ac122eefc54 Date: 2012-02-22 23:23 +0100 http://bitbucket.org/pypy/pypy/changeset/3ac122eefc54/ Log: hg merge default diff --git a/lib_pypy/_ctypes/array.py b/lib_pypy/_ctypes/array.py --- a/lib_pypy/_ctypes/array.py +++ b/lib_pypy/_ctypes/array.py @@ -1,9 +1,9 @@ - +import _ffi import _rawffi from _ctypes.basics import _CData, cdata_from_address, _CDataMeta, sizeof from _ctypes.basics import keepalive_key, store_reference, ensure_objects -from _ctypes.basics import CArgObject +from _ctypes.basics import CArgObject, as_ffi_pointer class ArrayMeta(_CDataMeta): def __new__(self, name, cls, typedict): @@ -211,6 +211,9 @@ def _to_ffi_param(self): return self._get_buffer_value() + def _as_ffi_pointer_(self, ffitype): + return as_ffi_pointer(self, ffitype) + ARRAY_CACHE = {} def create_array_type(base, length): @@ -228,5 +231,6 @@ _type_ = base ) cls = ArrayMeta(name, (Array,), tpdict) + cls._ffiargtype = _ffi.types.Pointer(base.get_ffi_argtype()) ARRAY_CACHE[key] = cls return cls diff --git a/lib_pypy/_ctypes/basics.py b/lib_pypy/_ctypes/basics.py --- a/lib_pypy/_ctypes/basics.py +++ b/lib_pypy/_ctypes/basics.py @@ -230,5 +230,16 @@ } +# called from primitive.py, pointer.py, array.py +def as_ffi_pointer(value, ffitype): + my_ffitype = type(value).get_ffi_argtype() + # for now, we always allow types.pointer, else a lot of tests + # break. We need to rethink how pointers are represented, though + if my_ffitype is not ffitype and ffitype is not _ffi.types.void_p: + raise ArgumentError("expected %s instance, got %s" % (type(value), + ffitype)) + return value._get_buffer_value() + + # used by "byref" from _ctypes.pointer import pointer diff --git a/lib_pypy/_ctypes/pointer.py b/lib_pypy/_ctypes/pointer.py --- a/lib_pypy/_ctypes/pointer.py +++ b/lib_pypy/_ctypes/pointer.py @@ -3,7 +3,7 @@ import _ffi from _ctypes.basics import _CData, _CDataMeta, cdata_from_address, ArgumentError from _ctypes.basics import keepalive_key, store_reference, ensure_objects -from _ctypes.basics import sizeof, byref +from _ctypes.basics import sizeof, byref, as_ffi_pointer from _ctypes.array import Array, array_get_slice_params, array_slice_getitem,\ array_slice_setitem @@ -119,14 +119,6 @@ def _as_ffi_pointer_(self, ffitype): return as_ffi_pointer(self, ffitype) -def as_ffi_pointer(value, ffitype): - my_ffitype = type(value).get_ffi_argtype() - # for now, we always allow types.pointer, else a lot of tests - # break. 
We need to rethink how pointers are represented, though - if my_ffitype is not ffitype and ffitype is not _ffi.types.void_p: - raise ArgumentError("expected %s instance, got %s" % (type(value), - ffitype)) - return value._get_buffer_value() def _cast_addr(obj, _, tp): if not (isinstance(tp, _CDataMeta) and tp._is_pointer_like()): diff --git a/pypy/doc/cpython_differences.rst b/pypy/doc/cpython_differences.rst --- a/pypy/doc/cpython_differences.rst +++ b/pypy/doc/cpython_differences.rst @@ -313,5 +313,10 @@ implementation detail that shows up because of internal C-level slots that PyPy does not have. +* the ``__dict__`` attribute of new-style classes returns a normal dict, as + opposed to a dict proxy like in CPython. Mutating the dict will change the + type and vice versa. For builtin types, a dictionary will be returned that + cannot be changed (but still looks and behaves like a normal dictionary). + .. include:: _ref.txt diff --git a/pypy/jit/backend/llsupport/gc.py b/pypy/jit/backend/llsupport/gc.py --- a/pypy/jit/backend/llsupport/gc.py +++ b/pypy/jit/backend/llsupport/gc.py @@ -1,7 +1,6 @@ import os from pypy.rlib import rgc from pypy.rlib.objectmodel import we_are_translated, specialize -from pypy.rlib.debug import fatalerror from pypy.rlib.rarithmetic import ovfcheck from pypy.rpython.lltypesystem import lltype, llmemory, rffi, rclass, rstr from pypy.rpython.lltypesystem import llgroup diff --git a/pypy/jit/backend/x86/assembler.py b/pypy/jit/backend/x86/assembler.py --- a/pypy/jit/backend/x86/assembler.py +++ b/pypy/jit/backend/x86/assembler.py @@ -33,7 +33,7 @@ from pypy.jit.backend.x86.support import values_array from pypy.jit.backend.x86 import support from pypy.rlib.debug import (debug_print, debug_start, debug_stop, - have_debug_prints) + have_debug_prints, fatalerror_notb) from pypy.rlib import rgc from pypy.rlib.clibffi import FFI_DEFAULT_ABI from pypy.jit.backend.x86.jump import remap_frame_layout @@ -104,6 +104,7 @@ self._debug = v def setup_once(self): + self._check_sse2() # the address of the function called by 'new' gc_ll_descr = self.cpu.gc_ll_descr gc_ll_descr.initialize() @@ -161,6 +162,28 @@ debug_print(prefix + ':' + str(struct.i)) debug_stop('jit-backend-counts') + _CHECK_SSE2_FUNC_PTR = lltype.Ptr(lltype.FuncType([], lltype.Signed)) + + def _check_sse2(self): + if WORD == 8: + return # all x86-64 CPUs support SSE2 + if not self.cpu.supports_floats: + return # the CPU doesn't support float, so we don't need SSE2 + # + from pypy.jit.backend.x86.detect_sse2 import INSNS + mc = codebuf.MachineCodeBlockWrapper() + for c in INSNS: + mc.writechar(c) + rawstart = mc.materialize(self.cpu.asmmemmgr, []) + fnptr = rffi.cast(self._CHECK_SSE2_FUNC_PTR, rawstart) + features = fnptr() + if bool(features & (1<<25)) and bool(features & (1<<26)): + return # CPU supports SSE2 + fatalerror_notb( + "This version of PyPy was compiled for a x86 CPU supporting SSE2.\n" + "Your CPU is too old. 
Please translate a PyPy with the option:\n" + "--jit-backend=x86-without-sse2") + def _build_float_constants(self): datablockwrapper = MachineDataBlockWrapper(self.cpu.asmmemmgr, []) float_constants = datablockwrapper.malloc_aligned(32, alignment=16) diff --git a/pypy/jit/backend/x86/detect_sse2.py b/pypy/jit/backend/x86/detect_sse2.py --- a/pypy/jit/backend/x86/detect_sse2.py +++ b/pypy/jit/backend/x86/detect_sse2.py @@ -1,17 +1,18 @@ import autopath -from pypy.rpython.lltypesystem import lltype, rffi -from pypy.rlib.rmmap import alloc, free +INSNS = ("\xB8\x01\x00\x00\x00" # MOV EAX, 1 + "\x53" # PUSH EBX + "\x0F\xA2" # CPUID + "\x5B" # POP EBX + "\x92" # XCHG EAX, EDX + "\xC3") # RET def detect_sse2(): + from pypy.rpython.lltypesystem import lltype, rffi + from pypy.rlib.rmmap import alloc, free data = alloc(4096) pos = 0 - for c in ("\xB8\x01\x00\x00\x00" # MOV EAX, 1 - "\x53" # PUSH EBX - "\x0F\xA2" # CPUID - "\x5B" # POP EBX - "\x92" # XCHG EAX, EDX - "\xC3"): # RET + for c in INSNS: data[pos] = c pos += 1 fnptr = rffi.cast(lltype.Ptr(lltype.FuncType([], lltype.Signed)), data) diff --git a/pypy/jit/metainterp/warmspot.py b/pypy/jit/metainterp/warmspot.py --- a/pypy/jit/metainterp/warmspot.py +++ b/pypy/jit/metainterp/warmspot.py @@ -453,7 +453,7 @@ if sys.stdout == sys.__stdout__: import pdb; pdb.post_mortem(tb) raise e.__class__, e, tb - fatalerror('~~~ Crash in JIT! %s' % (e,), traceback=True) + fatalerror('~~~ Crash in JIT! %s' % (e,)) crash_in_jit._dont_inline_ = True if self.translator.rtyper.type_system.name == 'lltypesystem': diff --git a/pypy/module/cpyext/stubs.py b/pypy/module/cpyext/stubs.py --- a/pypy/module/cpyext/stubs.py +++ b/pypy/module/cpyext/stubs.py @@ -1293,28 +1293,6 @@ that haven't been explicitly destroyed at that point.""" raise NotImplementedError - at cpython_api([rffi.VOIDP], lltype.Void) -def Py_AddPendingCall(space, func): - """Post a notification to the Python main thread. If successful, func will - be called with the argument arg at the earliest convenience. func will be - called having the global interpreter lock held and can thus use the full - Python API and can take any action such as setting object attributes to - signal IO completion. It must return 0 on success, or -1 signalling an - exception. The notification function won't be interrupted to perform another - asynchronous notification recursively, but it can still be interrupted to - switch threads if the global interpreter lock is released, for example, if it - calls back into Python code. - - This function returns 0 on success in which case the notification has been - scheduled. Otherwise, for example if the notification buffer is full, it - returns -1 without setting any exception. - - This function can be called on any thread, be it a Python thread or some - other system thread. If it is a Python thread, it doesn't matter if it holds - the global interpreter lock or not. - """ - raise NotImplementedError - @cpython_api([Py_tracefunc, PyObject], lltype.Void) def PyEval_SetProfile(space, func, obj): """Set the profiler function to func. The obj parameter is passed to the @@ -2373,16 +2351,6 @@ properly supporting 64-bit systems.""" raise NotImplementedError - at cpython_api([PyObject, PyObject, PyObject, Py_ssize_t], PyObject) -def PyUnicode_Replace(space, str, substr, replstr, maxcount): - """Replace at most maxcount occurrences of substr in str with replstr and - return the resulting Unicode object. maxcount == -1 means replace all - occurrences. 
- - This function used an int type for maxcount. This might - require changes in your code for properly supporting 64-bit systems.""" - raise NotImplementedError - @cpython_api([PyObject, PyObject, rffi.INT_real], PyObject) def PyUnicode_RichCompare(space, left, right, op): """Rich compare two unicode strings and return one of the following: diff --git a/pypy/module/cpyext/stubsactive.py b/pypy/module/cpyext/stubsactive.py --- a/pypy/module/cpyext/stubsactive.py +++ b/pypy/module/cpyext/stubsactive.py @@ -38,3 +38,31 @@ def Py_MakePendingCalls(space): return 0 +pending_call = lltype.Ptr(lltype.FuncType([rffi.VOIDP], rffi.INT_real)) + at cpython_api([pending_call, rffi.VOIDP], rffi.INT_real, error=-1) +def Py_AddPendingCall(space, func, arg): + """Post a notification to the Python main thread. If successful, + func will be called with the argument arg at the earliest + convenience. func will be called having the global interpreter + lock held and can thus use the full Python API and can take any + action such as setting object attributes to signal IO completion. + It must return 0 on success, or -1 signalling an exception. The + notification function won't be interrupted to perform another + asynchronous notification recursively, but it can still be + interrupted to switch threads if the global interpreter lock is + released, for example, if it calls back into Python code. + + This function returns 0 on success in which case the notification + has been scheduled. Otherwise, for example if the notification + buffer is full, it returns -1 without setting any exception. + + This function can be called on any thread, be it a Python thread + or some other system thread. If it is a Python thread, it doesn't + matter if it holds the global interpreter lock or not. + """ + return -1 + +thread_func = lltype.Ptr(lltype.FuncType([rffi.VOIDP], lltype.Void)) + at cpython_api([thread_func, rffi.VOIDP], rffi.INT_real, error=-1) +def PyThread_start_new_thread(space, func, arg): + return -1 diff --git a/pypy/module/cpyext/test/test_unicodeobject.py b/pypy/module/cpyext/test/test_unicodeobject.py --- a/pypy/module/cpyext/test/test_unicodeobject.py +++ b/pypy/module/cpyext/test/test_unicodeobject.py @@ -429,3 +429,11 @@ w_char = api.PyUnicode_FromOrdinal(0xFFFF) assert space.unwrap(w_char) == u'\uFFFF' + def test_replace(self, space, api): + w_str = space.wrap(u"abababab") + w_substr = space.wrap(u"a") + w_replstr = space.wrap(u"z") + assert u"zbzbabab" == space.unwrap( + api.PyUnicode_Replace(w_str, w_substr, w_replstr, 2)) + assert u"zbzbzbzb" == space.unwrap( + api.PyUnicode_Replace(w_str, w_substr, w_replstr, -1)) diff --git a/pypy/module/cpyext/unicodeobject.py b/pypy/module/cpyext/unicodeobject.py --- a/pypy/module/cpyext/unicodeobject.py +++ b/pypy/module/cpyext/unicodeobject.py @@ -548,6 +548,15 @@ @cpython_api([PyObject, PyObject], PyObject) def PyUnicode_Join(space, w_sep, w_seq): - """Join a sequence of strings using the given separator and return the resulting - Unicode string.""" + """Join a sequence of strings using the given separator and return + the resulting Unicode string.""" return space.call_method(w_sep, 'join', w_seq) + + at cpython_api([PyObject, PyObject, PyObject, Py_ssize_t], PyObject) +def PyUnicode_Replace(space, w_str, w_substr, w_replstr, maxcount): + """Replace at most maxcount occurrences of substr in str with replstr and + return the resulting Unicode object. 
maxcount == -1 means replace all + occurrences.""" + return space.call_method(w_str, "replace", w_substr, w_replstr, + space.wrap(maxcount)) + diff --git a/pypy/module/micronumpy/interp_boxes.py b/pypy/module/micronumpy/interp_boxes.py --- a/pypy/module/micronumpy/interp_boxes.py +++ b/pypy/module/micronumpy/interp_boxes.py @@ -1,6 +1,6 @@ from pypy.interpreter.baseobjspace import Wrappable from pypy.interpreter.error import operationerrfmt -from pypy.interpreter.gateway import interp2app +from pypy.interpreter.gateway import interp2app, unwrap_spec from pypy.interpreter.typedef import TypeDef from pypy.objspace.std.floattype import float_typedef from pypy.objspace.std.inttype import int_typedef @@ -29,7 +29,6 @@ def convert_to(self, dtype): return dtype.box(self.value) - class W_GenericBox(Wrappable): _attrs_ = () @@ -39,10 +38,10 @@ ) def descr_str(self, space): - return self.descr_repr(space) + return space.wrap(self.get_dtype(space).itemtype.str_format(self)) - def descr_repr(self, space): - return space.wrap(self.get_dtype(space).itemtype.str_format(self)) + def descr_format(self, space, w_spec): + return space.format(self.item(space), w_spec) def descr_int(self, space): box = self.convert_to(W_LongBox.get_dtype(space)) @@ -187,6 +186,10 @@ descr__new__, get_dtype = new_dtype_getter("float64") + at unwrap_spec(self=W_GenericBox) +def descr_index(space, self): + return space.index(self.item(space)) + W_GenericBox.typedef = TypeDef("generic", __module__ = "numpypy", @@ -194,7 +197,8 @@ __new__ = interp2app(W_GenericBox.descr__new__.im_func), __str__ = interp2app(W_GenericBox.descr_str), - __repr__ = interp2app(W_GenericBox.descr_repr), + __repr__ = interp2app(W_GenericBox.descr_str), + __format__ = interp2app(W_GenericBox.descr_format), __int__ = interp2app(W_GenericBox.descr_int), __float__ = interp2app(W_GenericBox.descr_float), __nonzero__ = interp2app(W_GenericBox.descr_nonzero), @@ -245,6 +249,8 @@ W_BoolBox.typedef = TypeDef("bool_", W_GenericBox.typedef, __module__ = "numpypy", __new__ = interp2app(W_BoolBox.descr__new__.im_func), + + __index__ = interp2app(descr_index), ) W_NumberBox.typedef = TypeDef("number", W_GenericBox.typedef, @@ -266,36 +272,43 @@ W_Int8Box.typedef = TypeDef("int8", W_SignedIntegerBox.typedef, __module__ = "numpypy", __new__ = interp2app(W_Int8Box.descr__new__.im_func), + __index__ = interp2app(descr_index), ) W_UInt8Box.typedef = TypeDef("uint8", W_UnsignedIntegerBox.typedef, __module__ = "numpypy", __new__ = interp2app(W_UInt8Box.descr__new__.im_func), + __index__ = interp2app(descr_index), ) W_Int16Box.typedef = TypeDef("int16", W_SignedIntegerBox.typedef, __module__ = "numpypy", __new__ = interp2app(W_Int16Box.descr__new__.im_func), + __index__ = interp2app(descr_index), ) W_UInt16Box.typedef = TypeDef("uint16", W_UnsignedIntegerBox.typedef, __module__ = "numpypy", __new__ = interp2app(W_UInt16Box.descr__new__.im_func), + __index__ = interp2app(descr_index), ) W_Int32Box.typedef = TypeDef("int32", (W_SignedIntegerBox.typedef,) + MIXIN_32, __module__ = "numpypy", __new__ = interp2app(W_Int32Box.descr__new__.im_func), + __index__ = interp2app(descr_index), ) W_UInt32Box.typedef = TypeDef("uint32", W_UnsignedIntegerBox.typedef, __module__ = "numpypy", __new__ = interp2app(W_UInt32Box.descr__new__.im_func), + __index__ = interp2app(descr_index), ) W_Int64Box.typedef = TypeDef("int64", (W_SignedIntegerBox.typedef,) + MIXIN_64, __module__ = "numpypy", __new__ = interp2app(W_Int64Box.descr__new__.im_func), + __index__ = interp2app(descr_index), ) if LONG_BIT 
== 32: @@ -308,6 +321,7 @@ W_UInt64Box.typedef = TypeDef("uint64", W_UnsignedIntegerBox.typedef, __module__ = "numpypy", __new__ = interp2app(W_UInt64Box.descr__new__.im_func), + __index__ = interp2app(descr_index), ) W_InexactBox.typedef = TypeDef("inexact", W_NumberBox.typedef, diff --git a/pypy/module/micronumpy/interp_support.py b/pypy/module/micronumpy/interp_support.py --- a/pypy/module/micronumpy/interp_support.py +++ b/pypy/module/micronumpy/interp_support.py @@ -3,7 +3,7 @@ from pypy.rpython.lltypesystem import lltype, rffi from pypy.module.micronumpy import interp_dtype from pypy.objspace.std.strutil import strip_spaces - +from pypy.rlib import jit FLOAT_SIZE = rffi.sizeof(lltype.Float) @@ -72,11 +72,20 @@ "string is smaller than requested size")) a = W_NDimArray(count, [count], dtype=dtype) - for i in range(count): + fromstring_loop(a, count, dtype, itemsize, s) + return space.wrap(a) + +fromstring_driver = jit.JitDriver(greens=[], reds=['count', 'i', 'itemsize', + 'dtype', 's', 'a']) + +def fromstring_loop(a, count, dtype, itemsize, s): + i = 0 + while i < count: + fromstring_driver.jit_merge_point(a=a, count=count, dtype=dtype, + itemsize=itemsize, s=s, i=i) val = dtype.itemtype.runpack_str(s[i*itemsize:i*itemsize + itemsize]) a.dtype.setitem(a.storage, i, val) - - return space.wrap(a) + i += 1 @unwrap_spec(s=str, count=int, sep=str) def fromstring(space, s, w_dtype=None, count=-1, sep=''): diff --git a/pypy/module/micronumpy/test/test_dtypes.py b/pypy/module/micronumpy/test/test_dtypes.py --- a/pypy/module/micronumpy/test/test_dtypes.py +++ b/pypy/module/micronumpy/test/test_dtypes.py @@ -371,6 +371,8 @@ assert type(a[1]) is numpy.float64 assert numpy.dtype(float).type is numpy.float64 + assert "{:3f}".format(numpy.float64(3)) == "3.000000" + assert numpy.float64(2.0) == 2.0 assert numpy.float64('23.4') == numpy.float64(23.4) raises(ValueError, numpy.float64, '23.2df') @@ -387,9 +389,9 @@ assert b.m() == 12 def test_long_as_index(self): - skip("waiting for removal of multimethods of __index__") - from _numpypy import int_ + from _numpypy import int_, float64 assert (1, 2, 3)[int_(1)] == 2 + raises(TypeError, lambda: (1, 2, 3)[float64(1)]) def test_int(self): import sys diff --git a/pypy/module/micronumpy/test/test_zjit.py b/pypy/module/micronumpy/test/test_zjit.py --- a/pypy/module/micronumpy/test/test_zjit.py +++ b/pypy/module/micronumpy/test/test_zjit.py @@ -479,38 +479,3 @@ 'int_sub': 3, 'jump': 1, 'setinteriorfield_raw': 1}) - - -class TestNumpyOld(LLJitMixin): - def setup_class(cls): - py.test.skip("old") - from pypy.module.micronumpy.compile import FakeSpace - from pypy.module.micronumpy.interp_dtype import get_dtype_cache - - cls.space = FakeSpace() - cls.float64_dtype = get_dtype_cache(cls.space).w_float64dtype - - def test_int32_sum(self): - py.test.skip("pypy/jit/backend/llimpl.py needs to be changed to " - "deal correctly with int dtypes for this test to " - "work. 
skip for now until someone feels up to the task") - space = self.space - float64_dtype = self.float64_dtype - int32_dtype = self.int32_dtype - - def f(n): - if NonConstant(False): - dtype = float64_dtype - else: - dtype = int32_dtype - ar = W_NDimArray(n, [n], dtype=dtype) - i = 0 - while i < n: - ar.get_concrete().setitem(i, int32_dtype.box(7)) - i += 1 - v = ar.descr_add(space, ar).descr_sum(space) - assert isinstance(v, IntObject) - return v.intval - - result = self.meta_interp(f, [5], listops=True, backendopt=True) - assert result == f(5) diff --git a/pypy/module/test_lib_pypy/ctypes_tests/test_fastpath.py b/pypy/module/test_lib_pypy/ctypes_tests/test_fastpath.py --- a/pypy/module/test_lib_pypy/ctypes_tests/test_fastpath.py +++ b/pypy/module/test_lib_pypy/ctypes_tests/test_fastpath.py @@ -97,6 +97,16 @@ tf_b.errcheck = errcheck assert tf_b(-126) == 'hello' + def test_array_to_ptr(self): + ARRAY = c_int * 8 + func = dll._testfunc_ai8 + func.restype = POINTER(c_int) + func.argtypes = [ARRAY] + array = ARRAY(1, 2, 3, 4, 5, 6, 7, 8) + ptr = func(array) + assert ptr[0] == 1 + assert ptr[7] == 8 + class TestFallbackToSlowpath(BaseCTypesTestChecker): diff --git a/pypy/module/test_lib_pypy/ctypes_tests/test_prototypes.py b/pypy/module/test_lib_pypy/ctypes_tests/test_prototypes.py --- a/pypy/module/test_lib_pypy/ctypes_tests/test_prototypes.py +++ b/pypy/module/test_lib_pypy/ctypes_tests/test_prototypes.py @@ -246,6 +246,14 @@ def func(): pass CFUNCTYPE(None, c_int * 3)(func) + def test_array_to_ptr_wrongtype(self): + ARRAY = c_byte * 8 + func = testdll._testfunc_ai8 + func.restype = POINTER(c_int) + func.argtypes = [c_int * 8] + array = ARRAY(1, 2, 3, 4, 5, 6, 7, 8) + py.test.raises(ArgumentError, "func(array)") + ################################################################ if __name__ == '__main__': diff --git a/pypy/module/test_lib_pypy/test_datetime.py b/pypy/module/test_lib_pypy/test_datetime.py --- a/pypy/module/test_lib_pypy/test_datetime.py +++ b/pypy/module/test_lib_pypy/test_datetime.py @@ -3,7 +3,7 @@ import py import time -import datetime +from lib_pypy import datetime import copy import os @@ -43,4 +43,4 @@ dt = datetime.datetime.utcnow() assert type(dt.microsecond) is int - copy.copy(dt) \ No newline at end of file + copy.copy(dt) diff --git a/pypy/objspace/std/dictmultiobject.py b/pypy/objspace/std/dictmultiobject.py --- a/pypy/objspace/std/dictmultiobject.py +++ b/pypy/objspace/std/dictmultiobject.py @@ -142,6 +142,17 @@ else: return result + def popitem(self, w_dict): + # this is a bad implementation: if we call popitem() repeatedly, + # it ends up taking n**2 time, because the next() calls below + # will take longer and longer. But all interesting strategies + # provide a better one. 
+ space = self.space + iterator = self.iter(w_dict) + w_key, w_value = iterator.next() + self.delitem(w_dict, w_key) + return (w_key, w_value) + def clear(self, w_dict): strategy = self.space.fromcache(EmptyDictStrategy) storage = strategy.get_empty_storage() diff --git a/pypy/objspace/std/dictproxyobject.py b/pypy/objspace/std/dictproxyobject.py --- a/pypy/objspace/std/dictproxyobject.py +++ b/pypy/objspace/std/dictproxyobject.py @@ -3,7 +3,7 @@ from pypy.objspace.std.dictmultiobject import W_DictMultiObject, IteratorImplementation from pypy.objspace.std.dictmultiobject import DictStrategy from pypy.objspace.std.typeobject import unwrap_cell -from pypy.interpreter.error import OperationError +from pypy.interpreter.error import OperationError, operationerrfmt from pypy.rlib import rerased @@ -44,7 +44,8 @@ raise if not w_type.is_cpytype(): raise - # xxx obscure workaround: allow cpyext to write to type->tp_dict. + # xxx obscure workaround: allow cpyext to write to type->tp_dict + # xxx even in the case of a builtin type. # xxx like CPython, we assume that this is only done early after # xxx the type is created, and we don't invalidate any cache. w_type.dict_w[key] = w_value @@ -86,8 +87,14 @@ for (key, w_value) in self.unerase(w_dict.dstorage).dict_w.iteritems()] def clear(self, w_dict): - self.unerase(w_dict.dstorage).dict_w.clear() - self.unerase(w_dict.dstorage).mutated(None) + space = self.space + w_type = self.unerase(w_dict.dstorage) + if (not space.config.objspace.std.mutable_builtintypes + and not w_type.is_heaptype()): + msg = "can't clear dictionary of type '%s'" + raise operationerrfmt(space.w_TypeError, msg, w_type.name) + w_type.dict_w.clear() + w_type.mutated(None) class DictProxyIteratorImplementation(IteratorImplementation): def __init__(self, space, strategy, dictimplementation): diff --git a/pypy/objspace/std/test/test_dictproxy.py b/pypy/objspace/std/test/test_dictproxy.py --- a/pypy/objspace/std/test/test_dictproxy.py +++ b/pypy/objspace/std/test/test_dictproxy.py @@ -22,6 +22,9 @@ assert NotEmpty.string == 1 raises(TypeError, 'NotEmpty.__dict__.setdefault(15, 1)') + key, value = NotEmpty.__dict__.popitem() + assert (key == 'a' and value == 1) or (key == 'b' and value == 4) + def test_dictproxyeq(self): class a(object): pass @@ -43,6 +46,11 @@ assert s1 == s2 assert s1.startswith('{') and s1.endswith('}') + def test_immutable_dict_on_builtin_type(self): + raises(TypeError, "int.__dict__['a'] = 1") + raises(TypeError, int.__dict__.popitem) + raises(TypeError, int.__dict__.clear) + class AppTestUserObjectMethodCache(AppTestUserObject): def setup_class(cls): cls.space = gettestobjspace( diff --git a/pypy/objspace/std/test/test_typeobject.py b/pypy/objspace/std/test/test_typeobject.py --- a/pypy/objspace/std/test/test_typeobject.py +++ b/pypy/objspace/std/test/test_typeobject.py @@ -993,7 +993,9 @@ raises(TypeError, setattr, list, 'append', 42) raises(TypeError, setattr, list, 'foobar', 42) raises(TypeError, delattr, dict, 'keys') - + raises(TypeError, 'int.__dict__["a"] = 1') + raises(TypeError, 'int.__dict__.clear()') + def test_nontype_in_mro(self): class OldStyle: pass diff --git a/pypy/rlib/debug.py b/pypy/rlib/debug.py --- a/pypy/rlib/debug.py +++ b/pypy/rlib/debug.py @@ -19,14 +19,22 @@ hop.exception_cannot_occur() hop.genop('debug_assert', vlist) -def fatalerror(msg, traceback=False): +def fatalerror(msg): + # print the RPython traceback and abort with a fatal error from pypy.rpython.lltypesystem import lltype from pypy.rpython.lltypesystem.lloperation import llop - 
if traceback: - llop.debug_print_traceback(lltype.Void) + llop.debug_print_traceback(lltype.Void) llop.debug_fatalerror(lltype.Void, msg) fatalerror._dont_inline_ = True -fatalerror._annspecialcase_ = 'specialize:arg(1)' +fatalerror._annenforceargs_ = [str] + +def fatalerror_notb(msg): + # a variant of fatalerror() that doesn't print the RPython traceback + from pypy.rpython.lltypesystem import lltype + from pypy.rpython.lltypesystem.lloperation import llop + llop.debug_fatalerror(lltype.Void, msg) +fatalerror_notb._dont_inline_ = True +fatalerror_notb._annenforceargs_ = [str] class DebugLog(list): diff --git a/pypy/rlib/jit.py b/pypy/rlib/jit.py --- a/pypy/rlib/jit.py +++ b/pypy/rlib/jit.py @@ -450,6 +450,7 @@ assert v in self.reds self._alllivevars = dict.fromkeys( [name for name in self.greens + self.reds if '.' not in name]) + self._heuristic_order = {} # check if 'reds' and 'greens' are ordered self._make_extregistryentries() self.get_jitcell_at = get_jitcell_at self.set_jitcell_at = set_jitcell_at @@ -461,13 +462,59 @@ def _freeze_(self): return True + def _check_arguments(self, livevars): + assert dict.fromkeys(livevars) == self._alllivevars + # check heuristically that 'reds' and 'greens' are ordered as + # the JIT will need them to be: first INTs, then REFs, then + # FLOATs. + if len(self._heuristic_order) < len(livevars): + from pypy.rlib.rarithmetic import (r_singlefloat, r_longlong, + r_ulonglong) + added = False + for var, value in livevars.items(): + if var not in self._heuristic_order: + if isinstance(value, (r_longlong, r_ulonglong)): + assert 0, ("should not pass a r_longlong argument for " + "now, because on 32-bit machines it would " + "need to be ordered as a FLOAT") + elif isinstance(value, (int, long, r_singlefloat)): + kind = '1:INT' + elif isinstance(value, float): + kind = '3:FLOAT' + elif isinstance(value, (str, unicode)) and len(value) != 1: + kind = '2:REF' + elif isinstance(value, (list, dict)): + kind = '2:REF' + elif (hasattr(value, '__class__') + and value.__class__.__module__ != '__builtin__'): + if hasattr(value, '_freeze_'): + continue # value._freeze_() is better not called + elif getattr(value, '_alloc_flavor_', 'gc') == 'gc': + kind = '2:REF' + else: + kind = '1:INT' + else: + continue + self._heuristic_order[var] = kind + added = True + if added: + for color in ('reds', 'greens'): + lst = getattr(self, color) + allkinds = [self._heuristic_order.get(name, '?') + for name in lst] + kinds = [k for k in allkinds if k != '?'] + assert kinds == sorted(kinds), ( + "bad order of %s variables in the jitdriver: " + "must be INTs, REFs, FLOATs; got %r" % + (color, allkinds)) + def jit_merge_point(_self, **livevars): # special-cased by ExtRegistryEntry - assert dict.fromkeys(livevars) == _self._alllivevars + _self._check_arguments(livevars) def can_enter_jit(_self, **livevars): # special-cased by ExtRegistryEntry - assert dict.fromkeys(livevars) == _self._alllivevars + _self._check_arguments(livevars) def loop_header(self): # special-cased by ExtRegistryEntry diff --git a/pypy/rlib/objectmodel.py b/pypy/rlib/objectmodel.py --- a/pypy/rlib/objectmodel.py +++ b/pypy/rlib/objectmodel.py @@ -23,9 +23,11 @@ class _Specialize(object): def memo(self): - """ Specialize functions based on argument values. All arguments has - to be constant at the compile time. The whole function call is replaced - by a call result then. + """ Specialize the function based on argument values. All arguments + have to be either constants or PBCs (i.e. 
instances of classes with a + _freeze_ method returning True). The function call is replaced by + just its result, or in case several PBCs are used, by some fast + look-up of the result. """ def decorated_func(func): func._annspecialcase_ = 'specialize:memo' @@ -33,8 +35,8 @@ return decorated_func def arg(self, *args): - """ Specialize function based on values of given positions of arguments. - They must be compile-time constants in order to work. + """ Specialize the function based on the values of given positions + of arguments. They must be compile-time constants in order to work. There will be a copy of provided function for each combination of given arguments on positions in args (that can lead to @@ -82,8 +84,7 @@ return decorated_func def ll_and_arg(self, *args): - """ This is like ll(), but instead of specializing on all arguments, - specializes on only the arguments at the given positions + """ This is like ll(), and additionally like arg(...). """ def decorated_func(func): func._annspecialcase_ = 'specialize:ll_and_arg' + self._wrap(args) diff --git a/pypy/rlib/test/test_jit.py b/pypy/rlib/test/test_jit.py --- a/pypy/rlib/test/test_jit.py +++ b/pypy/rlib/test/test_jit.py @@ -146,6 +146,38 @@ res = self.interpret(f, [-234]) assert res == 1 + def test_argument_order_ok(self): + myjitdriver = JitDriver(greens=['i1', 'r1', 'f1'], reds=[]) + class A(object): + pass + myjitdriver.jit_merge_point(i1=42, r1=A(), f1=3.5) + # assert did not raise + + def test_argument_order_wrong(self): + myjitdriver = JitDriver(greens=['r1', 'i1', 'f1'], reds=[]) + class A(object): + pass + e = raises(AssertionError, + myjitdriver.jit_merge_point, i1=42, r1=A(), f1=3.5) + + def test_argument_order_more_precision_later(self): + myjitdriver = JitDriver(greens=['r1', 'i1', 'r2', 'f1'], reds=[]) + class A(object): + pass + myjitdriver.jit_merge_point(i1=42, r1=None, r2=None, f1=3.5) + e = raises(AssertionError, + myjitdriver.jit_merge_point, i1=42, r1=A(), r2=None, f1=3.5) + assert "got ['2:REF', '1:INT', '?', '3:FLOAT']" in repr(e.value) + + def test_argument_order_more_precision_later_2(self): + myjitdriver = JitDriver(greens=['r1', 'i1', 'r2', 'f1'], reds=[]) + class A(object): + pass + myjitdriver.jit_merge_point(i1=42, r1=None, r2=A(), f1=3.5) + e = raises(AssertionError, + myjitdriver.jit_merge_point, i1=42, r1=A(), r2=None, f1=3.5) + assert "got ['2:REF', '1:INT', '2:REF', '3:FLOAT']" in repr(e.value) + class TestJITLLtype(BaseTestJIT, LLRtypeMixin): pass diff --git a/pypy/rpython/memory/gc/generation.py b/pypy/rpython/memory/gc/generation.py --- a/pypy/rpython/memory/gc/generation.py +++ b/pypy/rpython/memory/gc/generation.py @@ -41,8 +41,8 @@ # the following values override the default arguments of __init__ when # translating to a real backend. - TRANSLATION_PARAMS = {'space_size': 8*1024*1024, # XXX adjust - 'nursery_size': 896*1024, + TRANSLATION_PARAMS = {'space_size': 8*1024*1024, # 8 MB + 'nursery_size': 3*1024*1024, # 3 MB 'min_nursery_size': 48*1024, 'auto_nursery_size': True} @@ -92,8 +92,9 @@ # the GC is fully setup now. The rest can make use of it. if self.auto_nursery_size: newsize = nursery_size_from_env() - if newsize <= 0: - newsize = env.estimate_best_nursery_size() + #if newsize <= 0: + # ---disabled--- just use the default value. 
+ # newsize = env.estimate_best_nursery_size() if newsize > 0: self.set_nursery_size(newsize) diff --git a/pypy/tool/jitlogparser/parser.py b/pypy/tool/jitlogparser/parser.py --- a/pypy/tool/jitlogparser/parser.py +++ b/pypy/tool/jitlogparser/parser.py @@ -387,7 +387,7 @@ m = re.search('guard \d+', comm) name = m.group(0) else: - name = comm[2:comm.find(':')-1] + name = " ".join(comm[2:].split(" ", 2)[:2]) if name in dumps: bname, start_ofs, dump = dumps[name] loop.force_asm = (lambda dump=dump, start_ofs=start_ofs, diff --git a/pypy/tool/release/package.py b/pypy/tool/release/package.py --- a/pypy/tool/release/package.py +++ b/pypy/tool/release/package.py @@ -82,6 +82,9 @@ for file in ['LICENSE', 'README']: shutil.copy(str(basedir.join(file)), str(pypydir)) pypydir.ensure('include', dir=True) + if sys.platform == 'win32': + shutil.copyfile(str(pypy_c.dirpath().join("libpypy-c.lib")), + str(pypydir.join('include/python27.lib'))) # we want to put there all *.h and *.inl from trunk/include # and from pypy/_interfaces includedir = basedir.join('include') diff --git a/pypy/translator/c/gcc/trackgcroot.py b/pypy/translator/c/gcc/trackgcroot.py --- a/pypy/translator/c/gcc/trackgcroot.py +++ b/pypy/translator/c/gcc/trackgcroot.py @@ -472,7 +472,7 @@ IGNORE_OPS_WITH_PREFIXES = dict.fromkeys([ 'cmp', 'test', 'set', 'sahf', 'lahf', 'cld', 'std', - 'rep', 'movs', 'lods', 'stos', 'scas', 'cwde', 'prefetch', + 'rep', 'movs', 'movhp', 'lods', 'stos', 'scas', 'cwde', 'prefetch', # floating-point operations cannot produce GC pointers 'f', 'cvt', 'ucomi', 'comi', 'subs', 'subp' , 'adds', 'addp', 'xorp', @@ -484,7 +484,7 @@ 'shl', 'shr', 'sal', 'sar', 'rol', 'ror', 'mul', 'imul', 'div', 'idiv', 'bswap', 'bt', 'rdtsc', 'punpck', 'pshufd', 'pcmp', 'pand', 'psllw', 'pslld', 'psllq', - 'paddq', 'pinsr', + 'paddq', 'pinsr', 'pmul', 'psrl', # sign-extending moves should not produce GC pointers 'cbtw', 'cwtl', 'cwtd', 'cltd', 'cltq', 'cqto', # zero-extending moves should not produce GC pointers diff --git a/pypy/translator/driver.py b/pypy/translator/driver.py --- a/pypy/translator/driver.py +++ b/pypy/translator/driver.py @@ -559,6 +559,9 @@ newsoname = newexename.new(basename=soname.basename) shutil.copy(str(soname), str(newsoname)) self.log.info("copied: %s" % (newsoname,)) + if sys.platform == 'win32': + shutil.copyfile(str(soname.new(ext='lib')), + str(newsoname.new(ext='lib'))) self.c_entryp = newexename self.log.info('usession directory: %s' % (udir,)) self.log.info("created: %s" % (self.c_entryp,)) From noreply at buildbot.pypy.org Wed Feb 22 23:26:10 2012 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Wed, 22 Feb 2012 23:26:10 +0100 (CET) Subject: [pypy-commit] extradoc extradoc: Rename this to .rst Message-ID: <20120222222610.27CCE8203C@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: extradoc Changeset: r4105:ed3ea5c4cb3f Date: 2012-02-22 17:25 -0500 http://bitbucket.org/pypy/extradoc/changeset/ed3ea5c4cb3f/ Log: Rename this to .rst diff --git a/planning/separate-compilation.txt b/planning/separate-compilation.rst rename from planning/separate-compilation.txt rename to planning/separate-compilation.rst From noreply at buildbot.pypy.org Thu Feb 23 01:41:38 2012 From: noreply at buildbot.pypy.org (wlav) Date: Thu, 23 Feb 2012 01:41:38 +0100 (CET) Subject: [pypy-commit] pypy reflex-support: rules for CINT dictionary generation and easier use Message-ID: <20120223004138.7DCA48203C@wyvern.cs.uni-duesseldorf.de> Author: Wim Lavrijsen Branch: reflex-support Changeset: r52778:07c7b2f08065 Date: 
2012-02-22 11:00 -0800 http://bitbucket.org/pypy/pypy/changeset/07c7b2f08065/ Log: rules for CINT dictionary generation and easier use diff --git a/pypy/module/cppyy/test/Makefile b/pypy/module/cppyy/test/Makefile --- a/pypy/module/cppyy/test/Makefile +++ b/pypy/module/cppyy/test/Makefile @@ -1,4 +1,5 @@ -dicts = example01Dict.so datatypesDict.so advancedcppDict.so overloadsDict.so stltypesDict.so operatorsDict.so fragileDict.so std_streamsDict.so +dicts = example01Dict.so datatypesDict.so advancedcppDict.so overloadsDict.so \ +stltypesDict.so operatorsDict.so fragileDict.so std_streamsDict.so all : $(dicts) ROOTSYS := ${ROOTSYS} @@ -16,38 +17,33 @@ cppflags+=-dynamiclib -single_module -arch x86_64 endif -ifeq ($(shell $(genreflex) --help | grep -- --with-methptrgetter),) - genreflexflags= - cppflags2=-O3 -fPIC +ifeq ($(CINT),) + ifeq ($(shell $(genreflex) --help | grep -- --with-methptrgetter),) + genreflexflags= + cppflags2=-O3 -fPIC + else + genreflexflags=--with-methptrgetter + cppflags2=-Wno-pmf-conversions -O3 -fPIC + endif else - genreflexflags=--with-methptrgetter - cppflags2=-Wno-pmf-conversions -O3 -fPIC + cppflags2=-O3 -fPIC -rdynamic endif +ifeq ($(CINT),) %Dict.so: %_rflx.cpp %.cxx g++ -o $@ $^ -shared -lReflex $(cppflags) $(cppflags2) %_rflx.cpp: %.h %.xml $(genreflex) $< $(genreflexflags) --selection=$*.xml +else +%Dict.so: %_cint.cxx %.cxx + g++ -o $@ $^ -shared $(cppflags) $(cppflags2) -# rootcint -f example01_cint.cxx -c example01.h example01_LinkDef.h -# g++ -I$ROOTSYS/include example01_cint.cxx example01.cxx -shared -o example01Dict.so -rdynamic -# -# rootcint -f operators_cint.cxx -c operators.h operators_LinkDef.h -# g++ -I$ROOTSYS/include operators_cint.cxx operators.cxx -shared -o operatorsDict.so -rdynamic -# -# rootcint -f datatypes_cint.cxx -c datatypes.h datatypes_LinkDef.h -# g++ -I$ROOTSYS/include datatypes_cint.cxx datatypes.cxx -shared -o datatypesDict.so -rdynamic -# -# rootcint -f advancedcpp_cint.cxx -c advancedcpp.h advancedcpp_LinkDef.h -# g++ -I$ROOTSYS/include advancedcpp_cint.cxx advancedcpp.cxx -shared -o advancedcppDict.so -rdynamic -# -# rootcint -f fragile_cint.cxx -c fragile.h fragile_LinkDef.h -# g++ -I$ROOTSYS/include fragile_cint.cxx fragile.cxx -shared -o fragileDict.so -rdynamic -# -# rootcint -f stltypes_cint.cxx -c stltypes.h stltypes_LinkDef.h -# g++ -I$ROOTSYS/include stltypes_cint.cxx stltypes.cxx -shared -o stltypesDict.so -rdynamic +%_cint.cxx: %.h %_LinkDef.h + rootcint -f $@ -c $*.h $*_LinkDef.h +endif +ifeq ($(CINT),) # TODO: methptrgetter causes these tests to crash, so don't use it for now stltypesDict.so: stltypes.cxx stltypes.h stltypes.xml $(genreflex) stltypes.h --selection=stltypes.xml @@ -60,6 +56,7 @@ operatorsDict.so: operators.cxx operators.h operators.xml $(genreflex) operators.h --selection=operators.xml g++ -o $@ operators_rflx.cpp operators.cxx -shared -lReflex $(cppflags) $(cppflags2) +endif .PHONY: clean clean: From noreply at buildbot.pypy.org Thu Feb 23 01:41:40 2012 From: noreply at buildbot.pypy.org (wlav) Date: Thu, 23 Feb 2012 01:41:40 +0100 (CET) Subject: [pypy-commit] pypy reflex-support: resolve a conflict with _multiprocessing (both used the name handle that ended up on W_Root) Message-ID: <20120223004140.2CF5982366@wyvern.cs.uni-duesseldorf.de> Author: Wim Lavrijsen Branch: reflex-support Changeset: r52779:50add258af6d Date: 2012-02-22 13:35 -0800 http://bitbucket.org/pypy/pypy/changeset/50add258af6d/ Log: resolve a conflict with _multiprocessing (both used the name handle that ended up on W_Root) diff 
--git a/pypy/module/cppyy/interp_cppyy.py b/pypy/module/cppyy/interp_cppyy.py --- a/pypy/module/cppyy/interp_cppyy.py +++ b/pypy/module/cppyy/interp_cppyy.py @@ -40,6 +40,7 @@ pass handle = capi.c_get_typehandle(name) + assert lltype.typeOf(handle) == capi.C_TYPEHANDLE if handle: final_name = capi.charp2str_free(capi.c_final_name(handle)) if capi.c_is_namespace(handle): @@ -64,6 +65,7 @@ pass handle = capi.c_get_templatehandle(name) + assert lltype.typeOf(handle) == capi.C_TYPEHANDLE if handle: template = W_CPPTemplateType(space, name, handle) state.cpptype_cache[name] = template @@ -239,6 +241,7 @@ cppinstance = self.space.interp_w(W_CPPInstance, w_cppinstance, can_be_None=True) if cppinstance is not None: cppinstance._nullcheck() + assert isinstance(cppinstance.cppclass, W_CPPType) cppthis = cppinstance.cppclass.get_cppthis(cppinstance, self.scope_handle) else: cppthis = capi.C_NULL_OBJECT @@ -287,6 +290,7 @@ @jit.elidable_promote() def _get_offset(self, cppinstance): if cppinstance: + assert lltype.typeOf(cppinstance.cppclass.handle) == lltype.typeOf(self.scope_handle) offset = self.offset + capi.c_base_offset( cppinstance.cppclass.handle, self.scope_handle, cppinstance.rawobject) else: @@ -474,8 +478,8 @@ data_member = W_CPPDataMember(self.space, self.handle, type_name, offset, is_static) self.data_members[data_member_name] = data_member - @jit.elidable_promote() def get_cppthis(self, cppinstance, scope_handle): + assert self.handle == cppinstance.cppclass.handle return cppinstance.rawobject def is_namespace(self): @@ -505,10 +509,9 @@ class W_ComplexCPPType(W_CPPType): _immutable_ = True - @jit.elidable_promote() def get_cppthis(self, cppinstance, scope_handle): - offset = capi.c_base_offset( - cppinstance.cppclass.handle, scope_handle, cppinstance.rawobject) + assert self.handle == cppinstance.cppclass.handle + offset = capi.c_base_offset(self.handle, scope_handle, cppinstance.rawobject) return capi.direct_ptradd(cppinstance.rawobject, offset) W_ComplexCPPType.typedef = TypeDef( @@ -550,6 +553,7 @@ def __init__(self, space, cppclass, rawobject, python_owns): self.space = space + assert isinstance(cppclass, W_CPPType) self.cppclass = cppclass assert lltype.typeOf(rawobject) == capi.C_OBJECT self.rawobject = rawobject From noreply at buildbot.pypy.org Thu Feb 23 01:41:41 2012 From: noreply at buildbot.pypy.org (wlav) Date: Thu, 23 Feb 2012 01:41:41 +0100 (CET) Subject: [pypy-commit] pypy reflex-support: do not test for fast path if CINT is the back-end as it does not support ffi calls Message-ID: <20120223004141.6B0758203C@wyvern.cs.uni-duesseldorf.de> Author: Wim Lavrijsen Branch: reflex-support Changeset: r52780:d5c9c43ebf5a Date: 2012-02-22 13:36 -0800 http://bitbucket.org/pypy/pypy/changeset/d5c9c43ebf5a/ Log: do not test for fast path if CINT is the back-end as it does not support ffi calls diff --git a/pypy/module/cppyy/test/test_zjit.py b/pypy/module/cppyy/test/test_zjit.py --- a/pypy/module/cppyy/test/test_zjit.py +++ b/pypy/module/cppyy/test/test_zjit.py @@ -133,6 +133,11 @@ class TestFastPathJIT(LLJitMixin): def test_simple(self): + """Test fast path being taken for methods""" + + if capi.identify() == 'CINT': # CINT does not support fast path + return + space = FakeSpace() drv = jit.JitDriver(greens=[], reds=["i", "inst", "addDataToInt"]) def f(): @@ -153,6 +158,11 @@ self.check_jitcell_token_count(1) def test_overload(self): + """Test fast path being taken for overloaded methods""" + + if capi.identify() == 'CINT': # CINT does not support fast path + return + space = 
FakeSpace() drv = jit.JitDriver(greens=[], reds=["i", "inst", "addDataToInt"]) def f(): From noreply at buildbot.pypy.org Thu Feb 23 01:41:42 2012 From: noreply at buildbot.pypy.org (wlav) Date: Thu, 23 Feb 2012 01:41:42 +0100 (CET) Subject: [pypy-commit] pypy reflex-support: required link definitions for CINT Message-ID: <20120223004142.A3BF88203C@wyvern.cs.uni-duesseldorf.de> Author: Wim Lavrijsen Branch: reflex-support Changeset: r52781:3061664dc973 Date: 2012-02-22 13:37 -0800 http://bitbucket.org/pypy/pypy/changeset/3061664dc973/ Log: required link definitions for CINT diff --git a/pypy/module/cppyy/test/overloads_LinkDef.h b/pypy/module/cppyy/test/overloads_LinkDef.h new file mode 100644 --- /dev/null +++ b/pypy/module/cppyy/test/overloads_LinkDef.h @@ -0,0 +1,25 @@ +#ifdef __CINT__ + +#pragma link off all globals; +#pragma link off all classes; +#pragma link off all functions; + +#pragma link C++ class a_overload; +#pragma link C++ class b_overload; +#pragma link C++ class c_overload; +#pragma link C++ class d_overload; + +#pragma link C++ namespace ns_a_overload; +#pragma link C++ class ns_a_overload::a_overload; +#pragma link C++ class ns_a_overload::b_overload; + +#pragma link C++ class ns_b_overload; +#pragma link C++ class ns_b_overload::a_overload; + +#pragma link C++ class aa_ol; +#pragma link C++ class cc_ol; + +#pragma link C++ class more_overloads; +#pragma link C++ class more_overloads2; + +#endif From noreply at buildbot.pypy.org Thu Feb 23 01:41:43 2012 From: noreply at buildbot.pypy.org (wlav) Date: Thu, 23 Feb 2012 01:41:43 +0100 (CET) Subject: [pypy-commit] pypy reflex-support: implement calling of global functions Message-ID: <20120223004143.DCE418203C@wyvern.cs.uni-duesseldorf.de> Author: Wim Lavrijsen Branch: reflex-support Changeset: r52782:7baa0df07b9a Date: 2012-02-22 13:38 -0800 http://bitbucket.org/pypy/pypy/changeset/7baa0df07b9a/ Log: implement calling of global functions diff --git a/pypy/module/cppyy/src/cintcwrapper.cxx b/pypy/module/cppyy/src/cintcwrapper.cxx --- a/pypy/module/cppyy/src/cintcwrapper.cxx +++ b/pypy/module/cppyy/src/cintcwrapper.cxx @@ -31,6 +31,8 @@ /* data for life time management ------------------------------------------ */ +#define GLOBAL_HANDLE 1l + typedef std::vector ClassRefs_t; static ClassRefs_t g_classrefs(1); @@ -40,8 +42,8 @@ class ClassRefsInit { public: ClassRefsInit() { // setup dummy holder for global namespace - ClassRefs_t::size_type sz = g_classrefs.size(); - g_classref_indices[""] = sz; + assert(g_classrefs.size() == (ClassRefs_t::size_type)GLOBAL_HANDLE); + g_classref_indices[""] = (ClassRefs_t::size_type)GLOBAL_HANDLE; g_classrefs.push_back(TClassRef("")); } }; @@ -177,24 +179,42 @@ static inline G__value cppyy_call_T(cppyy_typehandle_t handle, int method_index, cppyy_object_t self, int numargs, void* args) { - TClassRef cr = type_from_handle(handle); - TMethod* m = (TMethod*)cr->GetListOfMethods()->At(method_index); + + if ((long)handle != GLOBAL_HANDLE) { + TClassRef cr = type_from_handle(handle); + assert(method_index < cr->GetListOfMethods()->GetSize()); + TMethod* m = (TMethod*)cr->GetListOfMethods()->At(method_index); - G__InterfaceMethod meth = (G__InterfaceMethod)m->InterfaceMethod(); + G__InterfaceMethod meth = (G__InterfaceMethod)m->InterfaceMethod(); + G__param* libp = (G__param*)((char*)args - offsetof(G__param, para)); + assert(libp->paran == numargs); + fixup_args(libp); + + // TODO: access to store_struct_offset won't work on Windows + G__setgvp((long)self); + long store_struct_offset = 
G__store_struct_offset; + G__store_struct_offset = (long)self; + + G__value result; + G__setnull(&result); + meth(&result, 0, libp, 0); + + G__store_struct_offset = store_struct_offset; + return result; + } + + // global function + assert(method_index < (int)g_globalfuncs.size()); + TFunction* f = g_globalfuncs[method_index]; + + G__InterfaceMethod func = (G__InterfaceMethod)f->InterfaceMethod(); G__param* libp = (G__param*)((char*)args - offsetof(G__param, para)); assert(libp->paran == numargs); fixup_args(libp); - // TODO: access to store_struct_offset won't work on Windows - G__setgvp((long)self); - long store_struct_offset = G__store_struct_offset; - G__store_struct_offset = (long)self; - G__value result; G__setnull(&result); - meth(&result, 0, libp, 0); - - G__store_struct_offset = store_struct_offset; + func(&result, 0, libp, 0); return result; } @@ -360,19 +380,21 @@ /* method/function reflection information --------------------------------- */ int cppyy_num_methods(cppyy_typehandle_t handle) { TClassRef cr = type_from_handle(handle); - if (cr.GetClass() && cr->GetListOfMethods()) + if (cr.GetClass() && cr->GetListOfMethods()) return cr->GetListOfMethods()->GetSize(); else if (strcmp(cr.GetClassName(), "") == 0) { + // NOTE: the updated list of global funcs grows with 5 "G__ateval"'s just + // because it is being updated => infinite loop :( TCollection* funcs = gROOT->GetListOfGlobalFunctions(kTRUE); if (g_globalfuncs.size() != (GlobalFuncs_t::size_type)funcs->GetSize()) { - /*g_globalfuncs.clear(); + g_globalfuncs.clear(); g_globalfuncs.reserve(funcs->GetSize()); TIter ifunc(funcs); TFunction* func = 0; while ((func = (TFunction*)ifunc.Next())) - g_globalfuncs.push_back(func);*/ + g_globalfuncs.push_back(func); } return (int)g_globalfuncs.size(); } From noreply at buildbot.pypy.org Thu Feb 23 01:41:45 2012 From: noreply at buildbot.pypy.org (wlav) Date: Thu, 23 Feb 2012 01:41:45 +0100 (CET) Subject: [pypy-commit] pypy reflex-support: cleanup Message-ID: <20120223004145.26E528203C@wyvern.cs.uni-duesseldorf.de> Author: Wim Lavrijsen Branch: reflex-support Changeset: r52783:6884cdf7f445 Date: 2012-02-22 13:38 -0800 http://bitbucket.org/pypy/pypy/changeset/6884cdf7f445/ Log: cleanup diff --git a/pypy/module/cppyy/converter.py b/pypy/module/cppyy/converter.py --- a/pypy/module/cppyy/converter.py +++ b/pypy/module/cppyy/converter.py @@ -495,14 +495,13 @@ _immutable_ = True def __init__(self, space, cpptype, name): + from pypy.module.cppyy.interp_cppyy import W_CPPType + assert isinstance(cpptype, W_CPPType) self.cpptype = cpptype self.name = name def _unwrap_object(self, space, w_obj): from pypy.module.cppyy.interp_cppyy import W_CPPInstance - w_cppinstance = space.findattr(w_obj, space.wrap("_cppinstance")) - if w_cppinstance: - w_obj = w_cppinstance obj = space.interpclass_w(w_obj) if isinstance(obj, W_CPPInstance): if capi.c_is_subtype(obj.cppclass.handle, self.cpptype.handle): From noreply at buildbot.pypy.org Thu Feb 23 01:41:46 2012 From: noreply at buildbot.pypy.org (wlav) Date: Thu, 23 Feb 2012 01:41:46 +0100 (CET) Subject: [pypy-commit] pypy reflex-support: function to identify the back-end (to be used for testing only) Message-ID: <20120223004146.622B48203C@wyvern.cs.uni-duesseldorf.de> Author: Wim Lavrijsen Branch: reflex-support Changeset: r52784:bffd3860d4ee Date: 2012-02-22 13:39 -0800 http://bitbucket.org/pypy/pypy/changeset/bffd3860d4ee/ Log: function to identify the back-end (to be used for testing only) diff --git a/pypy/module/cppyy/capi/__init__.py 
b/pypy/module/cppyy/capi/__init__.py --- a/pypy/module/cppyy/capi/__init__.py +++ b/pypy/module/cppyy/capi/__init__.py @@ -4,6 +4,8 @@ import reflex_capi as backend #import cint_capi as backend +identify = backend.identify + _C_OPAQUE_PTR = rffi.VOIDP _C_OPAQUE_NULL = lltype.nullptr(_C_OPAQUE_PTR.TO) diff --git a/pypy/module/cppyy/capi/cint_capi.py b/pypy/module/cppyy/capi/cint_capi.py --- a/pypy/module/cppyy/capi/cint_capi.py +++ b/pypy/module/cppyy/capi/cint_capi.py @@ -4,7 +4,7 @@ from pypy.rpython.lltypesystem import rffi from pypy.rlib import libffi, rdynload -__all__ = ['eci', 'c_load_dictionary'] +__all__ = ['identify', 'eci', 'c_load_dictionary'] pkgpath = py.path.local(__file__).dirpath().join(os.pardir) srcpath = pkgpath.join("src") @@ -17,6 +17,9 @@ rootincpath = [] rootlibpath = [] +def identify(): + return 'CINT' + # force loading in global mode of core libraries, rather than linking with # them as PyPy uses various version of dlopen in various places; note that # this isn't going to fly on Windows (note that locking them in objects and diff --git a/pypy/module/cppyy/capi/reflex_capi.py b/pypy/module/cppyy/capi/reflex_capi.py --- a/pypy/module/cppyy/capi/reflex_capi.py +++ b/pypy/module/cppyy/capi/reflex_capi.py @@ -3,12 +3,15 @@ from pypy.rlib import libffi from pypy.translator.tool.cbuild import ExternalCompilationInfo -__all__ = ['eci', 'c_load_dictionary'] +__all__ = ['identify', 'eci', 'c_load_dictionary'] pkgpath = py.path.local(__file__).dirpath().join(os.pardir) srcpath = pkgpath.join("src") incpath = pkgpath.join("include") +def identify(): + return 'Reflex' + if os.environ.get("ROOTSYS"): rootincpath = [os.path.join(os.environ["ROOTSYS"], "include")] rootlibpath = [os.path.join(os.environ["ROOTSYS"], "lib")] From noreply at buildbot.pypy.org Thu Feb 23 01:41:47 2012 From: noreply at buildbot.pypy.org (wlav) Date: Thu, 23 Feb 2012 01:41:47 +0100 (CET) Subject: [pypy-commit] pypy reflex-support: required link definitions for CINT backend Message-ID: <20120223004147.9F5AA8203C@wyvern.cs.uni-duesseldorf.de> Author: Wim Lavrijsen Branch: reflex-support Changeset: r52785:ac8bc1fdaec0 Date: 2012-02-22 13:40 -0800 http://bitbucket.org/pypy/pypy/changeset/ac8bc1fdaec0/ Log: required link definitions for CINT backend diff --git a/pypy/module/cppyy/test/stltypes_LinkDef.h b/pypy/module/cppyy/test/stltypes_LinkDef.h new file mode 100644 --- /dev/null +++ b/pypy/module/cppyy/test/stltypes_LinkDef.h @@ -0,0 +1,13 @@ +#ifdef __CINT__ + +#pragma link off all globals; +#pragma link off all classes; +#pragma link off all functions; + +#pragma link C++ class std::vector; +#pragma link C++ class std::vector; + +#pragma link C++ class just_a_class; +#pragma link C++ class stringy_class; + +#endif From noreply at buildbot.pypy.org Thu Feb 23 01:41:48 2012 From: noreply at buildbot.pypy.org (wlav) Date: Thu, 23 Feb 2012 01:41:48 +0100 (CET) Subject: [pypy-commit] pypy reflex-support: lazier does it Message-ID: <20120223004148.D8B6B8203C@wyvern.cs.uni-duesseldorf.de> Author: Wim Lavrijsen Branch: reflex-support Changeset: r52786:4461e932bcc6 Date: 2012-02-22 13:48 -0800 http://bitbucket.org/pypy/pypy/changeset/4461e932bcc6/ Log: lazier does it diff --git a/pypy/module/cppyy/pythonify.py b/pypy/module/cppyy/pythonify.py --- a/pypy/module/cppyy/pythonify.py +++ b/pypy/module/cppyy/pythonify.py @@ -90,30 +90,38 @@ setter = cppdm.set return property(binder, setter) -def make_cppnamespace(namespace_name, cppns): - d = {"_cpp_proxy" : cppns} + +def update_cppnamespace(nsdct, metans): + cppns = 
nsdct["_cpp_proxy"] # insert static methods into the "namespace" dictionary for func_name in cppns.get_method_names(): cppol = cppns.get_overload(func_name) - d[func_name] = make_static_function(cppns, func_name, cppol) - - # create a meta class to allow properties (for static data write access) - metans = type(CppyyNamespaceMeta)(namespace_name+'_meta', (CppyyNamespaceMeta,), {}) + nsdct[func_name] = make_static_function(cppns, func_name, cppol) # add all data members to the dictionary of the class to be created, and # static ones also to the meta class (needed for property setters) for dm in cppns.get_data_member_names(): cppdm = cppns.get_data_member(dm) pydm = make_datamember(cppdm) - d[dm] = pydm + nsdct[dm] = pydm setattr(metans, dm, pydm) +def make_cppnamespace(namespace_name, cppns, update=True): + nsdct = {"_cpp_proxy" : cppns } + + # create a meta class to allow properties (for static data write access) + metans = type(CppyyNamespaceMeta)(namespace_name+'_meta', (CppyyNamespaceMeta,), {}) + + if update: + update_cppnamespace(nsdct, metans) + # create the python-side C++ namespace representation - pycppns = metans(namespace_name, (object,), d) + pycppns = metans(namespace_name, (object,), nsdct) # cache result and return _existing_cppitems[namespace_name] = pycppns + return pycppns @@ -217,11 +225,14 @@ scope.__dict__[name] = pycppitem if not cppitem and isinstance(scope, CppyyNamespaceMeta): - scope._cpp_proxy.update() # TODO: this is currently quadratic + global _loaded_dictionaries_isdirty + if _loaded_dictionaries_isdirty: + scope._cpp_proxy.update() # TODO: this is currently quadratic cppitem = scope._cpp_proxy.get_overload(name) pycppitem = make_static_function(scope._cpp_proxy, name, cppitem) setattr(scope.__class__, name, pycppitem) pycppitem = getattr(scope, name) + _loaded_dictionaries_isdirty = False if pycppitem: _existing_cppitems[fullname] = pycppitem @@ -251,7 +262,7 @@ pyclass.__iter__ = __iter__ # string comparisons - if pyclass.__name__ == 'std::string': + if pyclass.__name__ == 'std::string' or pyclass.__name__ == 'string': def eq(self, other): if type(other) == pyclass: return self.c_str() == other.c_str() @@ -261,14 +272,20 @@ _loaded_dictionaries = {} +_loaded_dictionaries_isdirty = False def load_reflection_info(name): try: return _loaded_dictionaries[name] except KeyError: dct = cppyy._load_dictionary(name) _loaded_dictionaries[name] = dct + global _loaded_dictionaries_isdirty + _loaded_dictionaries_isdirty = True return dct -# user interface objects -gbl = make_cppnamespace("::", cppyy._type_byname("")) # global C++ namespace +# user interface objects (note the two-step: creation of global functions may +# cause the creation of classes in the global namespace, so gbl must exist at +# that point to cache them) +gbl = make_cppnamespace("::", cppyy._type_byname(""), False) # global C++ namespace +update_cppnamespace(gbl.__dict__, type(gbl)) From noreply at buildbot.pypy.org Thu Feb 23 01:41:50 2012 From: noreply at buildbot.pypy.org (wlav) Date: Thu, 23 Feb 2012 01:41:50 +0100 (CET) Subject: [pypy-commit] pypy reflex-support: required linkdef for CINT Message-ID: <20120223004150.20E2A8203C@wyvern.cs.uni-duesseldorf.de> Author: Wim Lavrijsen Branch: reflex-support Changeset: r52787:75af268bf2bc Date: 2012-02-22 16:30 -0800 http://bitbucket.org/pypy/pypy/changeset/75af268bf2bc/ Log: required linkdef for CINT diff --git a/pypy/module/cppyy/test/std_streams_LinkDef.h b/pypy/module/cppyy/test/std_streams_LinkDef.h new file mode 100644 --- /dev/null +++ 
b/pypy/module/cppyy/test/std_streams_LinkDef.h @@ -0,0 +1,9 @@ +#ifdef __CINT__ + +#pragma link off all globals; +#pragma link off all classes; +#pragma link off all functions; + +#pragma link C++ class std::ostream; + +#endif From noreply at buildbot.pypy.org Thu Feb 23 01:41:51 2012 From: noreply at buildbot.pypy.org (wlav) Date: Thu, 23 Feb 2012 01:41:51 +0100 (CET) Subject: [pypy-commit] pypy reflex-support: bring CINT backend to the level of the Reflex backend Message-ID: <20120223004151.65D9F8203C@wyvern.cs.uni-duesseldorf.de> Author: Wim Lavrijsen Branch: reflex-support Changeset: r52788:2d378037bef0 Date: 2012-02-22 16:40 -0800 http://bitbucket.org/pypy/pypy/changeset/2d378037bef0/ Log: bring CINT backend to the level of the Reflex backend diff --git a/pypy/module/cppyy/capi/__init__.py b/pypy/module/cppyy/capi/__init__.py --- a/pypy/module/cppyy/capi/__init__.py +++ b/pypy/module/cppyy/capi/__init__.py @@ -132,6 +132,11 @@ [C_TYPEHANDLE, rffi.INT, C_OBJECT, rffi.INT, rffi.VOIDP], rffi.DOUBLE, compilation_info=backend.eci) +c_call_r = rffi.llexternal( + "cppyy_call_r", + [C_TYPEHANDLE, rffi.INT, C_OBJECT, rffi.INT, rffi.VOIDP], rffi.VOIDP, + compilation_info=backend.eci) + c_call_s = rffi.llexternal( "cppyy_call_s", [C_TYPEHANDLE, rffi.INT, C_OBJECT, rffi.INT, rffi.VOIDP], rffi.CCHARP, diff --git a/pypy/module/cppyy/converter.py b/pypy/module/cppyy/converter.py --- a/pypy/module/cppyy/converter.py +++ b/pypy/module/cppyy/converter.py @@ -362,7 +362,7 @@ x[0] = rffi.cast(rffi.LONG, rffi.str2charp(arg)) typecode = rffi.cast(rffi.CCHARP, capi.direct_ptradd(address, capi.c_function_arg_typeoffset())) - typecode[0] = 'a' + typecode[0] = 'o' def from_memory(self, space, w_obj, w_type, offset): address = self._get_raw_address(space, w_obj, offset) @@ -647,11 +647,17 @@ # special cases _converters["std::string"] = StdStringConverter +_converters["string"] = _converters["std::string"] _converters["std::basic_string"] = StdStringConverter +_converters["basic_string"] = _converters["std::basic_string"] _converters["const std::string&"] = StdStringConverter # TODO: shouldn't copy +_converters["const string&"] = _converters["const std::string&"] _converters["const std::basic_string&"] = StdStringConverter +_converters["const basic_string&"] = _converters["const std::basic_string&"] _converters["std::string&"] = StdStringRefConverter +_converters["string&"] = _converters["std::string&"] _converters["std::basic_string&"] = StdStringRefConverter +_converters["basic_string&"] = _converters["std::basic_string&"] # it should be possible to generate these: _a_converters["short int*"] = ShortPtrConverter diff --git a/pypy/module/cppyy/executor.py b/pypy/module/cppyy/executor.py --- a/pypy/module/cppyy/executor.py +++ b/pypy/module/cppyy/executor.py @@ -145,7 +145,7 @@ result = libffifunc.call(argchain, rffi.ULONG) return space.wrap(result) -class ConstIntRefExecutor(LongExecutor): +class ConstIntRefExecutor(FunctionExecutor): _immutable_ = True libffitype = libffi.types.pointer @@ -153,11 +153,15 @@ intptr = rffi.cast(rffi.INTP, result) return space.wrap(intptr[0]) + def execute(self, space, w_returntype, func, cppthis, num_args, args): + result = capi.c_call_r(func.cpptype.handle, func.method_index, cppthis, num_args, args) + return self._wrap_result(space, result) + def execute_libffi(self, space, w_returntype, libffifunc, argchain): result = libffifunc.call(argchain, rffi.INTP) return space.wrap(result[0]) -class ConstLongRefExecutor(LongExecutor): +class ConstLongRefExecutor(ConstIntRefExecutor): 
_immutable_ = True libffitype = libffi.types.pointer diff --git a/pypy/module/cppyy/include/capi.h b/pypy/module/cppyy/include/capi.h --- a/pypy/module/cppyy/include/capi.h +++ b/pypy/module/cppyy/include/capi.h @@ -30,6 +30,7 @@ double cppyy_call_f(cppyy_typehandle_t handle, int method_index, cppyy_object_t self, int numargs, void* args); double cppyy_call_d(cppyy_typehandle_t handle, int method_index, cppyy_object_t self, int numargs, void* args); + void* cppyy_call_r(cppyy_typehandle_t handle, int method_index, cppyy_object_t self, int numargs, void* args); char* cppyy_call_s(cppyy_typehandle_t handle, int method_index, cppyy_object_t self, int numargs, void* args); cppyy_methptrgetter_t cppyy_get_methptr_getter(cppyy_typehandle_t handle, int method_index); diff --git a/pypy/module/cppyy/pythonify.py b/pypy/module/cppyy/pythonify.py --- a/pypy/module/cppyy/pythonify.py +++ b/pypy/module/cppyy/pythonify.py @@ -226,7 +226,7 @@ if not cppitem and isinstance(scope, CppyyNamespaceMeta): global _loaded_dictionaries_isdirty - if _loaded_dictionaries_isdirty: + if _loaded_dictionaries_isdirty: # TODO: this should've been per namespace scope._cpp_proxy.update() # TODO: this is currently quadratic cppitem = scope._cpp_proxy.get_overload(name) pycppitem = make_static_function(scope._cpp_proxy, name, cppitem) @@ -254,6 +254,7 @@ if hasattr(pyclass, 'begin') and hasattr(pyclass, 'end'): def __iter__(self): iter = self.begin() + # TODO: make gnu-independent while gbl.__gnu_cxx.__ne__(iter, self.end()): yield iter.__deref__() iter.__preinc__() @@ -268,11 +269,16 @@ return self.c_str() == other.c_str() else: return self.c_str() == other - pyclass.__eq__ = eq + pyclass.__eq__ = eq + + # TODO: clean this up + # fixup lack of __getitem__ if no const return + if hasattr(pyclass, '__setitem__') and not hasattr(pyclass, '__getitem___'): + pyclass.__getitem__ = pyclass.__setitem__ _loaded_dictionaries = {} -_loaded_dictionaries_isdirty = False +_loaded_dictionaries_isdirty = False # should be per namespace def load_reflection_info(name): try: return _loaded_dictionaries[name] @@ -289,3 +295,6 @@ # that point to cache them) gbl = make_cppnamespace("::", cppyy._type_byname(""), False) # global C++ namespace update_cppnamespace(gbl.__dict__, type(gbl)) + +# mostly for the benefit of CINT, which treats std as special +gbl.std = make_cppnamespace("std", cppyy._type_byname("std"), False) diff --git a/pypy/module/cppyy/src/cintcwrapper.cxx b/pypy/module/cppyy/src/cintcwrapper.cxx --- a/pypy/module/cppyy/src/cintcwrapper.cxx +++ b/pypy/module/cppyy/src/cintcwrapper.cxx @@ -151,32 +151,6 @@ /* method/function dispatching -------------------------------------------- */ -long cppyy_call_o(cppyy_typehandle_t handle, int method_index, - cppyy_object_t self, int numargs, void* args, - cppyy_typehandle_t rettype) { - TClassRef cr = type_from_handle(handle); - TMethod* m = (TMethod*)cr->GetListOfMethods()->At(method_index); - - G__InterfaceMethod meth = (G__InterfaceMethod)m->InterfaceMethod(); - G__param* libp = (G__param*)((char*)args - offsetof(G__param, para)); - assert(libp->paran == numargs); - fixup_args(libp); - - // TODO: access to store_struct_offset won't work on Windows - G__setgvp((long)self); - long store_struct_offset = G__store_struct_offset; - G__store_struct_offset = (long)self; - - G__value result; - G__setnull(&result); - meth(&result, 0, libp, 0); - - G__store_struct_offset = store_struct_offset; - - G__pop_tempobject_nodel(); - return G__int(result); -} - static inline G__value 
cppyy_call_T(cppyy_typehandle_t handle, int method_index, cppyy_object_t self, int numargs, void* args) { @@ -219,6 +193,14 @@ return result; } +long cppyy_call_o(cppyy_typehandle_t handle, int method_index, + cppyy_object_t self, int numargs, void* args, + cppyy_typehandle_t) { + G__value result = cppyy_call_T(handle, method_index, self, numargs, args); + G__pop_tempobject_nodel(); + return G__int(result); +} + void cppyy_call_v(cppyy_typehandle_t handle, int method_index, cppyy_object_t self, int numargs, void* args) { cppyy_call_T(handle, method_index, self, numargs, args); @@ -264,8 +246,25 @@ cppyy_object_t self, int numargs, void* args) { G__value result = cppyy_call_T(handle, method_index, self, numargs, args); return G__double(result); -} +} +void* cppyy_call_r(cppyy_typehandle_t handle, int method_index, + cppyy_object_t self, int numargs, void* args) { + G__value result = cppyy_call_T(handle, method_index, self, numargs, args); + return (void*)result.ref; +} + +char* cppyy_call_s(cppyy_typehandle_t handle, int method_index, + cppyy_object_t self, int numargs, void* args) { + G__value result = cppyy_call_T(handle, method_index, self, numargs, args); + G__pop_tempobject_nodel(); + if (result.ref && *(long*)result.ref) { + char* charp = cppstring_to_cstring(*(std::string*)result.ref); + delete (std::string*)result.ref; + return charp; + } + return cppstring_to_cstring(""); +} cppyy_methptrgetter_t cppyy_get_methptr_getter(cppyy_typehandle_t /*handle*/, int /*method_index*/) { return (cppyy_methptrgetter_t)NULL; @@ -507,6 +506,18 @@ free(ptr); } +void* cppyy_charp2stdstring(const char* str) { + return new std::string(str); +} + +void* cppyy_stdstring2stdstring(void* ptr) { + return new std::string(*(std::string*)ptr); +} + +void cppyy_free_stdstring(void* ptr) { + delete (std::string*)ptr; +} + void* cppyy_load_dictionary(const char* lib_name) { if (0 <= gSystem->Load(lib_name)) return (void*)1; diff --git a/pypy/module/cppyy/src/reflexcwrapper.cxx b/pypy/module/cppyy/src/reflexcwrapper.cxx --- a/pypy/module/cppyy/src/reflexcwrapper.cxx +++ b/pypy/module/cppyy/src/reflexcwrapper.cxx @@ -155,6 +155,11 @@ return cppyy_call_T(handle, method_index, self, numargs, args); } +void* cppyy_call_r(cppyy_typehandle_t handle, int method_index, + cppyy_object_t self, int numargs, void* args) { + return (void*)cppyy_call_T(handle, method_index, self, numargs, args); +} + char* cppyy_call_s(cppyy_typehandle_t handle, int method_index, cppyy_object_t self, int numargs, void* args) { std::string result(""); @@ -420,4 +425,3 @@ void cppyy_free_stdstring(void* ptr) { delete (std::string*)ptr; } - diff --git a/pypy/module/cppyy/test/example01_LinkDef.h b/pypy/module/cppyy/test/example01_LinkDef.h --- a/pypy/module/cppyy/test/example01_LinkDef.h +++ b/pypy/module/cppyy/test/example01_LinkDef.h @@ -6,6 +6,12 @@ #pragma link C++ class example01; #pragma link C++ class payload; +#pragma link C++ class ArgPasser; #pragma link C++ class z_; +#pragma link C++ function globalAddOneToInt(int); + +#pragma link C++ namespace ns_example01; +#pragma link C++ function ns_example01::globalAddOneToInt(int); + #endif diff --git a/pypy/module/cppyy/test/std_streams.h b/pypy/module/cppyy/test/std_streams.h --- a/pypy/module/cppyy/test/std_streams.h +++ b/pypy/module/cppyy/test/std_streams.h @@ -1,9 +1,13 @@ #ifndef STD_STREAMS_H #define STD_STREAMS_H 1 +#ifndef __CINT__ #include +#endif #include +#ifndef __CINT__ extern template class std::basic_ios >; +#endif #endif // STD_STREAMS_H diff --git 
a/pypy/module/cppyy/test/stltypes.h b/pypy/module/cppyy/test/stltypes.h --- a/pypy/module/cppyy/test/stltypes.h +++ b/pypy/module/cppyy/test/stltypes.h @@ -22,9 +22,11 @@ }; +#ifndef __CINT__ //- explicit instantiations of used types STLTYPES_EXPLICIT_INSTANTIATION_DECL(vector, int) STLTYPES_EXPLICIT_INSTANTIATION_DECL(vector, just_a_class) +#endif //- class with lots of std::string handling diff --git a/pypy/module/cppyy/test/stltypes_LinkDef.h b/pypy/module/cppyy/test/stltypes_LinkDef.h --- a/pypy/module/cppyy/test/stltypes_LinkDef.h +++ b/pypy/module/cppyy/test/stltypes_LinkDef.h @@ -5,7 +5,11 @@ #pragma link off all functions; #pragma link C++ class std::vector; +#pragma link C++ class std::vector::iterator; +#pragma link C++ class std::vector::const_iterator; #pragma link C++ class std::vector; +#pragma link C++ class std::vector::iterator; +#pragma link C++ class std::vector::const_iterator; #pragma link C++ class just_a_class; #pragma link C++ class stringy_class; diff --git a/pypy/module/cppyy/test/test_stltypes.py b/pypy/module/cppyy/test/test_stltypes.py --- a/pypy/module/cppyy/test/test_stltypes.py +++ b/pypy/module/cppyy/test/test_stltypes.py @@ -42,6 +42,7 @@ #----- v = tv1(self.N) for i in range(self.N): + # TODO: # v[i] = i # assert v[i] == i # assert v.at(i) == i From noreply at buildbot.pypy.org Thu Feb 23 04:56:36 2012 From: noreply at buildbot.pypy.org (fijal) Date: Thu, 23 Feb 2012 04:56:36 +0100 (CET) Subject: [pypy-commit] pypy speedup-list-comprehension: look how list comprehension can be sped up Message-ID: <20120223035636.1773C8203C@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: speedup-list-comprehension Changeset: r52789:25b085bcabfe Date: 2012-02-22 20:56 -0700 http://bitbucket.org/pypy/pypy/changeset/25b085bcabfe/ Log: look how list comprehension can be sped up diff --git a/lib-python/modified-2.7/opcode.py b/lib-python/modified-2.7/opcode.py --- a/lib-python/modified-2.7/opcode.py +++ b/lib-python/modified-2.7/opcode.py @@ -192,5 +192,6 @@ def_op('LOOKUP_METHOD', 201) # Index in name list hasname.append(201) def_op('CALL_METHOD', 202) # #args not including 'self' +def_op('BUILD_LIST_FROM_ARG', 203) del def_op, name_op, jrel_op, jabs_op diff --git a/pypy/interpreter/astcompiler/assemble.py b/pypy/interpreter/astcompiler/assemble.py --- a/pypy/interpreter/astcompiler/assemble.py +++ b/pypy/interpreter/astcompiler/assemble.py @@ -610,6 +610,8 @@ ops.JUMP_IF_FALSE_OR_POP : 0, ops.POP_JUMP_IF_TRUE : -1, ops.POP_JUMP_IF_FALSE : -1, + + ops.BUILD_LIST_FROM_ARG: 1, } diff --git a/pypy/interpreter/astcompiler/codegen.py b/pypy/interpreter/astcompiler/codegen.py --- a/pypy/interpreter/astcompiler/codegen.py +++ b/pypy/interpreter/astcompiler/codegen.py @@ -973,6 +973,7 @@ gen = gens[gen_index] assert isinstance(gen, ast.comprehension) gen.iter.walkabout(self) + self.emit_op(ops.BUILD_LIST_FROM_ARG) self.emit_op(ops.GET_ITER) self.use_next_block(start) self.emit_jump(ops.FOR_ITER, anchor) @@ -998,7 +999,6 @@ def visit_ListComp(self, lc): self.update_position(lc.lineno) - self.emit_op_arg(ops.BUILD_LIST, 0) self._listcomp_generator(lc.generators, 0, lc.elt) def _comp_generator(self, node, generators, gen_index): diff --git a/pypy/interpreter/pyopcode.py b/pypy/interpreter/pyopcode.py --- a/pypy/interpreter/pyopcode.py +++ b/pypy/interpreter/pyopcode.py @@ -10,7 +10,7 @@ from pypy.interpreter import gateway, function, eval, pyframe, pytraceback from pypy.interpreter.pycode import PyCode from pypy.tool.sourcetools import func_with_new_name -from 
pypy.rlib.objectmodel import we_are_translated +from pypy.rlib.objectmodel import we_are_translated, newlist from pypy.rlib import jit, rstackovf from pypy.rlib.rarithmetic import r_uint, intmask from pypy.rlib.unroll import unrolling_iterable @@ -713,6 +713,17 @@ w_list = self.space.newlist(items) self.pushvalue(w_list) + def BUILD_LIST_FROM_ARG(self, _, next_instr): + # this is a little dance, because list has to be before the + # value + last_val = self.popvalue() + try: + lgt = self.space.int_w(self.space.len(last_val)) + except OperationError: + lgt = 0 # oh well + self.pushvalue(self.space.newlist(newlist(lgt))) + self.pushvalue(last_val) + def LOAD_ATTR(self, nameindex, next_instr): "obj.attributename" w_obj = self.popvalue() From noreply at buildbot.pypy.org Thu Feb 23 05:12:45 2012 From: noreply at buildbot.pypy.org (fijal) Date: Thu, 23 Feb 2012 05:12:45 +0100 (CET) Subject: [pypy-commit] pypy speedup-list-comprehension: disable this check for now Message-ID: <20120223041245.5851A8203C@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: speedup-list-comprehension Changeset: r52790:3846122a4dd5 Date: 2012-02-22 20:59 -0700 http://bitbucket.org/pypy/pypy/changeset/3846122a4dd5/ Log: disable this check for now diff --git a/pypy/rlib/jit.py b/pypy/rlib/jit.py --- a/pypy/rlib/jit.py +++ b/pypy/rlib/jit.py @@ -474,9 +474,10 @@ for var, value in livevars.items(): if var not in self._heuristic_order: if isinstance(value, (r_longlong, r_ulonglong)): - assert 0, ("should not pass a r_longlong argument for " - "now, because on 32-bit machines it would " - "need to be ordered as a FLOAT") + pass + #assert 0, ("should not pass a r_longlong argument for " + # "now, because on 32-bit machines it would " + # "need to be ordered as a FLOAT") elif isinstance(value, (int, long, r_singlefloat)): kind = '1:INT' elif isinstance(value, float): From noreply at buildbot.pypy.org Thu Feb 23 05:12:46 2012 From: noreply at buildbot.pypy.org (fijal) Date: Thu, 23 Feb 2012 05:12:46 +0100 (CET) Subject: [pypy-commit] pypy speedup-list-comprehension: make sure we create only one list Message-ID: <20120223041246.887538203C@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: speedup-list-comprehension Changeset: r52791:8b1f69191c51 Date: 2012-02-22 21:12 -0700 http://bitbucket.org/pypy/pypy/changeset/8b1f69191c51/ Log: make sure we create only one list diff --git a/pypy/interpreter/astcompiler/codegen.py b/pypy/interpreter/astcompiler/codegen.py --- a/pypy/interpreter/astcompiler/codegen.py +++ b/pypy/interpreter/astcompiler/codegen.py @@ -965,7 +965,7 @@ self.emit_op_arg(ops.CALL_METHOD, (kwarg_count << 8) | arg_count) return True - def _listcomp_generator(self, gens, gen_index, elt): + def _listcomp_generator(self, gens, gen_index, elt, outermost=False): start = self.new_block() skip = self.new_block() if_cleanup = self.new_block() @@ -973,7 +973,8 @@ gen = gens[gen_index] assert isinstance(gen, ast.comprehension) gen.iter.walkabout(self) - self.emit_op(ops.BUILD_LIST_FROM_ARG) + if outermost: + self.emit_op(ops.BUILD_LIST_FROM_ARG) self.emit_op(ops.GET_ITER) self.use_next_block(start) self.emit_jump(ops.FOR_ITER, anchor) @@ -999,7 +1000,7 @@ def visit_ListComp(self, lc): self.update_position(lc.lineno) - self._listcomp_generator(lc.generators, 0, lc.elt) + self._listcomp_generator(lc.generators, 0, lc.elt, outermost=True) def _comp_generator(self, node, generators, gen_index): start = self.new_block() From noreply at buildbot.pypy.org Thu Feb 23 05:39:09 2012 From: noreply at 
buildbot.pypy.org (fijal) Date: Thu, 23 Feb 2012 05:39:09 +0100 (CET) Subject: [pypy-commit] pypy speedup-list-comprehension: merge default Message-ID: <20120223043909.A6BBB8203C@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: speedup-list-comprehension Changeset: r52792:70d8c227d622 Date: 2012-02-22 21:38 -0700 http://bitbucket.org/pypy/pypy/changeset/70d8c227d622/ Log: merge default diff --git a/pypy/module/micronumpy/interp_boxes.py b/pypy/module/micronumpy/interp_boxes.py --- a/pypy/module/micronumpy/interp_boxes.py +++ b/pypy/module/micronumpy/interp_boxes.py @@ -1,6 +1,6 @@ from pypy.interpreter.baseobjspace import Wrappable from pypy.interpreter.error import operationerrfmt -from pypy.interpreter.gateway import interp2app +from pypy.interpreter.gateway import interp2app, unwrap_spec from pypy.interpreter.typedef import TypeDef from pypy.objspace.std.floattype import float_typedef from pypy.objspace.std.inttype import int_typedef @@ -29,7 +29,6 @@ def convert_to(self, dtype): return dtype.box(self.value) - class W_GenericBox(Wrappable): _attrs_ = () @@ -187,6 +186,10 @@ descr__new__, get_dtype = new_dtype_getter("float64") + at unwrap_spec(self=W_GenericBox) +def descr_index(space, self): + return space.index(self.item(space)) + W_GenericBox.typedef = TypeDef("generic", __module__ = "numpypy", @@ -246,6 +249,8 @@ W_BoolBox.typedef = TypeDef("bool_", W_GenericBox.typedef, __module__ = "numpypy", __new__ = interp2app(W_BoolBox.descr__new__.im_func), + + __index__ = interp2app(descr_index), ) W_NumberBox.typedef = TypeDef("number", W_GenericBox.typedef, @@ -267,36 +272,43 @@ W_Int8Box.typedef = TypeDef("int8", W_SignedIntegerBox.typedef, __module__ = "numpypy", __new__ = interp2app(W_Int8Box.descr__new__.im_func), + __index__ = interp2app(descr_index), ) W_UInt8Box.typedef = TypeDef("uint8", W_UnsignedIntegerBox.typedef, __module__ = "numpypy", __new__ = interp2app(W_UInt8Box.descr__new__.im_func), + __index__ = interp2app(descr_index), ) W_Int16Box.typedef = TypeDef("int16", W_SignedIntegerBox.typedef, __module__ = "numpypy", __new__ = interp2app(W_Int16Box.descr__new__.im_func), + __index__ = interp2app(descr_index), ) W_UInt16Box.typedef = TypeDef("uint16", W_UnsignedIntegerBox.typedef, __module__ = "numpypy", __new__ = interp2app(W_UInt16Box.descr__new__.im_func), + __index__ = interp2app(descr_index), ) W_Int32Box.typedef = TypeDef("int32", (W_SignedIntegerBox.typedef,) + MIXIN_32, __module__ = "numpypy", __new__ = interp2app(W_Int32Box.descr__new__.im_func), + __index__ = interp2app(descr_index), ) W_UInt32Box.typedef = TypeDef("uint32", W_UnsignedIntegerBox.typedef, __module__ = "numpypy", __new__ = interp2app(W_UInt32Box.descr__new__.im_func), + __index__ = interp2app(descr_index), ) W_Int64Box.typedef = TypeDef("int64", (W_SignedIntegerBox.typedef,) + MIXIN_64, __module__ = "numpypy", __new__ = interp2app(W_Int64Box.descr__new__.im_func), + __index__ = interp2app(descr_index), ) if LONG_BIT == 32: @@ -309,6 +321,7 @@ W_UInt64Box.typedef = TypeDef("uint64", W_UnsignedIntegerBox.typedef, __module__ = "numpypy", __new__ = interp2app(W_UInt64Box.descr__new__.im_func), + __index__ = interp2app(descr_index), ) W_InexactBox.typedef = TypeDef("inexact", W_NumberBox.typedef, diff --git a/pypy/module/micronumpy/interp_support.py b/pypy/module/micronumpy/interp_support.py --- a/pypy/module/micronumpy/interp_support.py +++ b/pypy/module/micronumpy/interp_support.py @@ -79,11 +79,13 @@ 'dtype', 's', 'a']) def fromstring_loop(a, count, dtype, itemsize, s): - for i in 
range(count): + i = 0 + while i < count: fromstring_driver.jit_merge_point(a=a, count=count, dtype=dtype, itemsize=itemsize, s=s, i=i) val = dtype.itemtype.runpack_str(s[i*itemsize:i*itemsize + itemsize]) a.dtype.setitem(a.storage, i, val) + i += 1 @unwrap_spec(s=str, count=int, sep=str) def fromstring(space, s, w_dtype=None, count=-1, sep=''): diff --git a/pypy/module/micronumpy/test/test_dtypes.py b/pypy/module/micronumpy/test/test_dtypes.py --- a/pypy/module/micronumpy/test/test_dtypes.py +++ b/pypy/module/micronumpy/test/test_dtypes.py @@ -389,9 +389,9 @@ assert b.m() == 12 def test_long_as_index(self): - skip("waiting for removal of multimethods of __index__") - from _numpypy import int_ + from _numpypy import int_, float64 assert (1, 2, 3)[int_(1)] == 2 + raises(TypeError, lambda: (1, 2, 3)[float64(1)]) def test_int(self): import sys diff --git a/pypy/module/micronumpy/test/test_zjit.py b/pypy/module/micronumpy/test/test_zjit.py --- a/pypy/module/micronumpy/test/test_zjit.py +++ b/pypy/module/micronumpy/test/test_zjit.py @@ -479,38 +479,3 @@ 'int_sub': 3, 'jump': 1, 'setinteriorfield_raw': 1}) - - -class TestNumpyOld(LLJitMixin): - def setup_class(cls): - py.test.skip("old") - from pypy.module.micronumpy.compile import FakeSpace - from pypy.module.micronumpy.interp_dtype import get_dtype_cache - - cls.space = FakeSpace() - cls.float64_dtype = get_dtype_cache(cls.space).w_float64dtype - - def test_int32_sum(self): - py.test.skip("pypy/jit/backend/llimpl.py needs to be changed to " - "deal correctly with int dtypes for this test to " - "work. skip for now until someone feels up to the task") - space = self.space - float64_dtype = self.float64_dtype - int32_dtype = self.int32_dtype - - def f(n): - if NonConstant(False): - dtype = float64_dtype - else: - dtype = int32_dtype - ar = W_NDimArray(n, [n], dtype=dtype) - i = 0 - while i < n: - ar.get_concrete().setitem(i, int32_dtype.box(7)) - i += 1 - v = ar.descr_add(space, ar).descr_sum(space) - assert isinstance(v, IntObject) - return v.intval - - result = self.meta_interp(f, [5], listops=True, backendopt=True) - assert result == f(5) diff --git a/pypy/rpython/memory/gc/generation.py b/pypy/rpython/memory/gc/generation.py --- a/pypy/rpython/memory/gc/generation.py +++ b/pypy/rpython/memory/gc/generation.py @@ -41,8 +41,8 @@ # the following values override the default arguments of __init__ when # translating to a real backend. - TRANSLATION_PARAMS = {'space_size': 8*1024*1024, # XXX adjust - 'nursery_size': 896*1024, + TRANSLATION_PARAMS = {'space_size': 8*1024*1024, # 8 MB + 'nursery_size': 3*1024*1024, # 3 MB 'min_nursery_size': 48*1024, 'auto_nursery_size': True} @@ -92,8 +92,9 @@ # the GC is fully setup now. The rest can make use of it. if self.auto_nursery_size: newsize = nursery_size_from_env() - if newsize <= 0: - newsize = env.estimate_best_nursery_size() + #if newsize <= 0: + # ---disabled--- just use the default value. 
+ # newsize = env.estimate_best_nursery_size() if newsize > 0: self.set_nursery_size(newsize) diff --git a/pypy/translator/c/gcc/trackgcroot.py b/pypy/translator/c/gcc/trackgcroot.py --- a/pypy/translator/c/gcc/trackgcroot.py +++ b/pypy/translator/c/gcc/trackgcroot.py @@ -472,7 +472,7 @@ IGNORE_OPS_WITH_PREFIXES = dict.fromkeys([ 'cmp', 'test', 'set', 'sahf', 'lahf', 'cld', 'std', - 'rep', 'movs', 'lods', 'stos', 'scas', 'cwde', 'prefetch', + 'rep', 'movs', 'movhp', 'lods', 'stos', 'scas', 'cwde', 'prefetch', # floating-point operations cannot produce GC pointers 'f', 'cvt', 'ucomi', 'comi', 'subs', 'subp' , 'adds', 'addp', 'xorp', @@ -484,7 +484,7 @@ 'shl', 'shr', 'sal', 'sar', 'rol', 'ror', 'mul', 'imul', 'div', 'idiv', 'bswap', 'bt', 'rdtsc', 'punpck', 'pshufd', 'pcmp', 'pand', 'psllw', 'pslld', 'psllq', - 'paddq', 'pinsr', 'pmul', + 'paddq', 'pinsr', 'pmul', 'psrl', # sign-extending moves should not produce GC pointers 'cbtw', 'cwtl', 'cwtd', 'cltd', 'cltq', 'cqto', # zero-extending moves should not produce GC pointers From noreply at buildbot.pypy.org Thu Feb 23 10:46:55 2012 From: noreply at buildbot.pypy.org (bivab) Date: Thu, 23 Feb 2012 10:46:55 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: Add missing get/set interiorfield_raw operations Message-ID: <20120223094655.0E8418203C@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: ppc-jit-backend Changeset: r52793:d2db5350e2aa Date: 2012-02-23 01:44 -0800 http://bitbucket.org/pypy/pypy/changeset/d2db5350e2aa/ Log: Add missing get/set interiorfield_raw operations diff --git a/pypy/jit/backend/ppc/opassembler.py b/pypy/jit/backend/ppc/opassembler.py --- a/pypy/jit/backend/ppc/opassembler.py +++ b/pypy/jit/backend/ppc/opassembler.py @@ -559,6 +559,7 @@ if not we_are_translated(): signed = op.getdescr().fielddescr.is_field_signed() self._ensure_result_bit_extension(res_loc, fieldsize.value, signed) + emit_getinteriorfield_raw = emit_getinteriorfield_gc def emit_setinteriorfield_gc(self, op, arglocs, regalloc): (base_loc, index_loc, value_loc, @@ -580,7 +581,7 @@ self.mc.stbx(value_loc.value, base_loc.value, r.SCRATCH.value) else: assert 0 - + emit_setinteriorfield_raw = emit_setinteriorfield_gc class ArrayOpAssembler(object): diff --git a/pypy/jit/backend/ppc/regalloc.py b/pypy/jit/backend/ppc/regalloc.py --- a/pypy/jit/backend/ppc/regalloc.py +++ b/pypy/jit/backend/ppc/regalloc.py @@ -604,6 +604,7 @@ self.possibly_free_var(op.result) return [base_loc, index_loc, result_loc, ofs_loc, imm(ofs), imm(itemsize), imm(fieldsize)] + prepare_getinteriorfield_raw = prepare_getinteriorfield_gc def prepare_setinteriorfield_gc(self, op): t = unpack_interiorfielddescr(op.getdescr()) @@ -618,6 +619,7 @@ ofs_loc = self._ensure_value_is_boxed(ConstInt(ofs), args) return [base_loc, index_loc, value_loc, ofs_loc, imm(ofs), imm(itemsize), imm(fieldsize)] + prepare_setinteriorfield_raw = prepare_setinteriorfield_gc def prepare_arraylen_gc(self, op): arraydescr = op.getdescr() From noreply at buildbot.pypy.org Thu Feb 23 10:46:56 2012 From: noreply at buildbot.pypy.org (bivab) Date: Thu, 23 Feb 2012 10:46:56 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: (edelsohn, bivab) fix jump conditions in malloc_cond and cond_call_gc_wb to jump on equality Message-ID: <20120223094656.43B0A8204C@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: ppc-jit-backend Changeset: r52794:6af6f2607858 Date: 2012-02-23 01:45 -0800 http://bitbucket.org/pypy/pypy/changeset/6af6f2607858/ Log: (edelsohn, bivab) fix jump conditions in 
malloc_cond and cond_call_gc_wb to jump on equality diff --git a/pypy/jit/backend/ppc/opassembler.py b/pypy/jit/backend/ppc/opassembler.py --- a/pypy/jit/backend/ppc/opassembler.py +++ b/pypy/jit/backend/ppc/opassembler.py @@ -938,7 +938,7 @@ # patch the JZ above offset = self.mc.currpos() - jz_location pmc = OverwritingBuilder(self.mc, jz_location, 1) - pmc.bc(4, 2, offset) # jump if the two values are equal + pmc.bc(12, 2, offset) # jump if the two values are equal pmc.overwrite() emit_cond_call_gc_wb_array = emit_cond_call_gc_wb diff --git a/pypy/jit/backend/ppc/ppc_assembler.py b/pypy/jit/backend/ppc/ppc_assembler.py --- a/pypy/jit/backend/ppc/ppc_assembler.py +++ b/pypy/jit/backend/ppc/ppc_assembler.py @@ -343,8 +343,12 @@ # if r3 == 0 we skip the return above and jump to the exception path offset = mc.currpos() - jmp_pos pmc = OverwritingBuilder(mc, jmp_pos, 1) - pmc.bc(4, 2, offset) + pmc.bc(12, 2, offset) pmc.overwrite() + # restore the frame before leaving + mc.load(r.SCRATCH.value, r.SP.value, frame_size + ofs) + mc.mtlr(r.SCRATCH.value) + mc.addi(r.SP.value, r.SP.value, frame_size) mc.b_abs(self.propagate_exception_path) From noreply at buildbot.pypy.org Thu Feb 23 11:21:03 2012 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 23 Feb 2012 11:21:03 +0100 (CET) Subject: [pypy-commit] pypy stm-gc: Trying out a version of the RTyper that runs every block in its own Message-ID: <20120223102103.5B9AB8203C@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-gc Changeset: r52795:df24c4bac7bc Date: 2012-02-23 11:20 +0100 http://bitbucket.org/pypy/pypy/changeset/df24c4bac7bc/ Log: Trying out a version of the RTyper that runs every block in its own transaction. diff --git a/pypy/rpython/rtyper.py b/pypy/rpython/rtyper.py --- a/pypy/rpython/rtyper.py +++ b/pypy/rpython/rtyper.py @@ -246,9 +246,12 @@ else: tracking = lambda block: None - previous_percentage = 0 - # specialize all blocks in the 'pending' list - for block in pending: + try: + import transaction + except ImportError: + previous_percentage = 0 + # specialize all blocks in the 'pending' list + for block in pending: tracking(block) blockcount += 1 self.specialize_block(block) @@ -266,6 +269,16 @@ error_report = '' self.log.event('specializing: %d / %d blocks (%d%%)%s' % (n, total, percentage, error_report)) + else: + # try a version using the transaction module + for block in pending: + transaction.add(self.specialize_block, block) + self.log.event('specializing transactionally %d blocks' % + (len(pending),)) + transaction.run() + blockcount += len(pending) + self.already_seen.update(dict.fromkeys(pending, True)) + # make sure all reprs so far have had their setup() called self.call_all_setups() From noreply at buildbot.pypy.org Thu Feb 23 11:30:32 2012 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 23 Feb 2012 11:30:32 +0100 (CET) Subject: [pypy-commit] pypy stm-gc: Auto-enable the 'transaction' module if --stm is specified. Message-ID: <20120223103032.BE8D58203C@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-gc Changeset: r52796:5c8abeef1057 Date: 2012-02-23 11:30 +0100 http://bitbucket.org/pypy/pypy/changeset/5c8abeef1057/ Log: Auto-enable the 'transaction' module if --stm is specified. 
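As an aside, the rule that the targetpypystandalone.py hunk just below enforces can be pictured with a tiny standalone sketch. Nothing here is PyPy's real config machinery: resolve_transaction_module and its arguments are invented names, and the function only mimics the intended behaviour (--stm pulls the 'transaction' module in, asking for the module without --stm is rejected).

    # Standalone sketch (hypothetical names): mirrors the intended coupling
    # between the --stm translation option and the 'transaction' built-in
    # module, outside of PyPy's real config objects.

    def resolve_transaction_module(stm_enabled, transaction_requested):
        """Return True if the 'transaction' module should be built in.

        Raises ValueError when the module is requested without --stm,
        matching the guard added below.
        """
        if stm_enabled:
            return True              # --stm always enables the module
        if transaction_requested:
            # the module makes no sense without the STM runtime
            raise ValueError("use --stm, not --withmod-transaction alone")
        return False

    if __name__ == "__main__":
        assert resolve_transaction_module(True, False) is True
        assert resolve_transaction_module(True, True) is True
        assert resolve_transaction_module(False, False) is False
        try:
            resolve_transaction_module(False, True)
        except ValueError:
            pass
        else:
            raise AssertionError("expected a ValueError")

Running the sketch as a script simply checks the four combinations of options.
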
diff --git a/pypy/translator/goal/targetpypystandalone.py b/pypy/translator/goal/targetpypystandalone.py --- a/pypy/translator/goal/targetpypystandalone.py +++ b/pypy/translator/goal/targetpypystandalone.py @@ -185,6 +185,11 @@ # module if translation.continuation cannot be enabled config.objspace.usemodules._continuation = False + if config.translation.stm: + config.objspace.usemodules.transaction = True + elif config.objspace.usemodules.transaction: + raise Exception("use --stm, not --withmod-transaction alone") + if not config.translation.rweakref: config.objspace.usemodules._weakref = False From noreply at buildbot.pypy.org Thu Feb 23 13:58:02 2012 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 23 Feb 2012 13:58:02 +0100 (CET) Subject: [pypy-commit] pypy stm-gc: Add comment Message-ID: <20120223125802.5D1198203C@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-gc Changeset: r52797:8b5bdb3aeda9 Date: 2012-02-23 13:18 +0100 http://bitbucket.org/pypy/pypy/changeset/8b5bdb3aeda9/ Log: Add comment diff --git a/pypy/translator/stm/test/test_transform.py b/pypy/translator/stm/test/test_transform.py --- a/pypy/translator/stm/test/test_transform.py +++ b/pypy/translator/stm/test/test_transform.py @@ -44,6 +44,7 @@ # weak test: check that there are exactly 3 stm_writebarrier inserted. # one should be for 'x.n = n', one should cover both field assignments # to the Z instance, and the 3rd one is in the block 'x.n *= 2'. + # (the latter two should be killed by the later phases.) sum = summary(graph) assert sum['stm_writebarrier'] == 3 From noreply at buildbot.pypy.org Thu Feb 23 13:58:03 2012 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 23 Feb 2012 13:58:03 +0100 (CET) Subject: [pypy-commit] pypy stm-gc: Add a failing test Message-ID: <20120223125803.8ECED8203C@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-gc Changeset: r52798:64b23c237c68 Date: 2012-02-23 13:18 +0100 http://bitbucket.org/pypy/pypy/changeset/64b23c237c68/ Log: Add a failing test diff --git a/pypy/translator/stm/test/targetdemo.py b/pypy/translator/stm/test/targetdemo.py --- a/pypy/translator/stm/test/targetdemo.py +++ b/pypy/translator/stm/test/targetdemo.py @@ -63,6 +63,16 @@ print "thread done." glob.done += 1 +def _check_pointer(arg1): + arg1.foobar = 40 # now 'arg1' is local + return arg1 + +def check_pointer_equality(arg, retry_counter): + res = _check_pointer(arg) + if res is not arg: + debug_print("ERROR: bogus pointer equality") + raise AssertionError + def run_me(): rstm.descriptor_init() try: @@ -70,6 +80,7 @@ arg = glob._arg ll_thread.release_NOAUTO(glob.lock) arg.foobar = 41 + rstm.perform_transaction(check_pointer_equality, Arg, arg) i = 0 while i < glob.LENGTH: arg.anchor = glob.anchor From noreply at buildbot.pypy.org Thu Feb 23 13:58:04 2012 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 23 Feb 2012 13:58:04 +0100 (CET) Subject: [pypy-commit] pypy stm-gc: Fix pointer comparison between two non-NULL objects. Message-ID: <20120223125804.C28B48203C@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-gc Changeset: r52799:077ddb94d35b Date: 2012-02-23 13:57 +0100 http://bitbucket.org/pypy/pypy/changeset/077ddb94d35b/ Log: Fix pointer comparison between two non-NULL objects. 
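Before the fix itself, here is a minimal toy model of the problem that the check_pointer_equality test above exposes. None of this is the real GC: Obj, write_barrier and normalize_global are stand-ins invented for illustration. They only mimic the semantics involved: the write barrier may hand back a thread-local copy of a global object, so raw identity between the copy and the original fails, and comparisons first have to be normalized back to the global object, which is what the stm_normalize_global operation introduced in the patch below provides.

    # Toy model (not the real GC): shows why 'res is arg' can fail once the
    # STM write barrier hands out a thread-local copy, and how normalizing
    # both sides back to the global original restores the comparison.

    class Obj(object):
        def __init__(self, version=None):
            # 'version' plays the role of the header field that points from
            # a local copy back to the global object it was copied from
            self.version = version

    def write_barrier(obj, local_copies):
        # writing to a global object forces a local copy (simplified)
        copy = local_copies.get(obj)
        if copy is None:
            copy = Obj(version=obj)
            local_copies[obj] = copy
        return copy

    def normalize_global(obj):
        # map a local copy back to its global original; purely local and
        # global objects are returned unchanged
        if obj is not None and obj.version is not None:
            return obj.version
        return obj

    def ptr_eq(a, b):
        return normalize_global(a) is normalize_global(b)

    if __name__ == "__main__":
        local_copies = {}
        glob = Obj()
        loc = write_barrier(glob, local_copies)
        assert loc is not glob       # the naive identity check goes wrong
        assert ptr_eq(loc, glob)     # normalized comparison: same object
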
diff --git a/pypy/rpython/lltypesystem/lloperation.py b/pypy/rpython/lltypesystem/lloperation.py --- a/pypy/rpython/lltypesystem/lloperation.py +++ b/pypy/rpython/lltypesystem/lloperation.py @@ -403,6 +403,7 @@ 'stm_descriptor_init': LLOp(canrun=True), 'stm_descriptor_done': LLOp(canrun=True), 'stm_writebarrier': LLOp(sideeffects=False), + 'stm_normalize_global': LLOp(), 'stm_start_transaction': LLOp(canrun=True), 'stm_commit_transaction': LLOp(canrun=True), diff --git a/pypy/rpython/memory/gc/stmgc.py b/pypy/rpython/memory/gc/stmgc.py --- a/pypy/rpython/memory/gc/stmgc.py +++ b/pypy/rpython/memory/gc/stmgc.py @@ -320,6 +320,11 @@ # @always_inline def stm_writebarrier(obj): + """The write barrier must be called on any object that may be + a global. It looks for, and possibly makes, a local copy of + this object. The result of this call is the local copy --- + or 'obj' itself if it is already local. + """ if self.header(obj).tid & GCFLAG_GLOBAL != 0: obj = _stm_write_barrier_global(obj) return obj @@ -380,6 +385,23 @@ stm_operations.tldict_add(obj, localobj) # return localobj + # + def stm_normalize_global(obj): + """Normalize a pointer for the purpose of equality + comparison with another pointer. If 'obj' is the local + version of an existing global object, then returns the + global object. Don't use for e.g. hashing, because if 'obj' + is a purely local object, it just returns 'obj' --- which + will change at the next commit. + """ + if not obj: + return obj + tid = self.header(obj).tid + if tid & (GCFLAG_GLOBAL|GCFLAG_WAS_COPIED) != GCFLAG_WAS_COPIED: + return obj + # the only relevant case: it's the local copy of a global object + return self.header(obj).version + self.stm_normalize_global = stm_normalize_global # ---------- diff --git a/pypy/rpython/memory/gc/test/test_stmgc.py b/pypy/rpython/memory/gc/test/test_stmgc.py --- a/pypy/rpython/memory/gc/test/test_stmgc.py +++ b/pypy/rpython/memory/gc/test/test_stmgc.py @@ -564,3 +564,27 @@ s2 = llmemory.cast_adr_to_ptr(wr2.wadr, lltype.Ptr(S)) assert s2.a == 4242 assert s2 == tr1.s1 # tr1 is a root, so not copied yet + + def test_normalize_global_null(self): + a = self.gc.stm_normalize_global(llmemory.NULL) + assert a == llmemory.NULL + + def test_normalize_global_already_global(self): + sr1, sr1_adr = self.malloc(SR) + a = self.gc.stm_normalize_global(sr1_adr) + assert a == sr1_adr + + def test_normalize_global_purely_local(self): + self.select_thread(1) + sr1, sr1_adr = self.malloc(SR) + a = self.gc.stm_normalize_global(sr1_adr) + assert a == sr1_adr + + def test_normalize_global_local_copy(self): + sr1, sr1_adr = self.malloc(SR) + self.select_thread(1) + tr1_adr = self.gc.stm_writebarrier(sr1_adr) + a = self.gc.stm_normalize_global(sr1_adr) + assert a == sr1_adr + a = self.gc.stm_normalize_global(tr1_adr) + assert a == sr1_adr diff --git a/pypy/rpython/memory/gctransform/stmframework.py b/pypy/rpython/memory/gctransform/stmframework.py --- a/pypy/rpython/memory/gctransform/stmframework.py +++ b/pypy/rpython/memory/gctransform/stmframework.py @@ -18,6 +18,9 @@ self.stm_writebarrier_ptr = getfn( self.gcdata.gc.stm_writebarrier, [annmodel.SomeAddress()], annmodel.SomeAddress()) + self.stm_normalize_global_ptr = getfn( + self.gcdata.gc.stm_normalize_global, + [annmodel.SomeAddress()], annmodel.SomeAddress()) self.stm_start_ptr = getfn( self.gcdata.gc.start_transaction.im_func, [s_gc], annmodel.s_None) @@ -50,6 +53,15 @@ resulttype=llmemory.Address) hop.genop('cast_adr_to_ptr', [v_localadr], resultvar=op.result) + def 
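For reference, the effect of the pointer_comparison hook added to transform.py above can be pictured on plain tuples instead of real SpaceOperation objects. This is only a simplified sketch: it ignores the special-casing of constant arguments done by the real transform, and fresh_var is an invented helper. The point is just that a ptr_eq or ptr_ne whose operands are not all known to be local gets both operands passed through stm_normalize_global first.

    # Rough sketch of the rewrite performed by stt_ptr_eq/stt_ptr_ne, using
    # plain (opname, args, result) tuples; is_local() stands in for the
    # LocalTracker decision.

    def rewrite_pointer_comparison(op, is_local, fresh_var):
        """Return the list of operations replacing one ptr_eq/ptr_ne."""
        opname, args, result = op
        if all(is_local(a) for a in args):
            return [op]                  # both sides local: nothing to do
        newops = []
        nargs = []
        for a in args:
            v = fresh_var()
            newops.append(('stm_normalize_global', [a], v))
            nargs.append(v)
        newops.append((opname, nargs, result))
        return newops

    if __name__ == "__main__":
        counter = [0]
        def fresh_var():
            counter[0] += 1
            return 'n%d' % counter[0]
        ops = rewrite_pointer_comparison(('ptr_eq', ['v1', 'v2'], 'v3'),
                                         is_local=lambda v: False,
                                         fresh_var=fresh_var)
        assert ops == [('stm_normalize_global', ['v1'], 'n1'),
                       ('stm_normalize_global', ['v2'], 'n2'),
                       ('ptr_eq', ['n1', 'n2'], 'v3')]
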
gct_stm_normalize_global(self, hop): + op = hop.spaceop + v_adr = hop.genop('cast_ptr_to_adr', + [op.args[0]], resulttype=llmemory.Address) + v_globaladr = hop.genop("direct_call", + [self.stm_normalize_global_ptr, v_adr], + resulttype=llmemory.Address) + hop.genop('cast_adr_to_ptr', [v_globaladr], resultvar=op.result) + def gct_stm_start_transaction(self, hop): hop.genop("direct_call", [self.stm_start_ptr, self.c_const_gc]) diff --git a/pypy/translator/stm/localtracker.py b/pypy/translator/stm/localtracker.py --- a/pypy/translator/stm/localtracker.py +++ b/pypy/translator/stm/localtracker.py @@ -20,7 +20,11 @@ self.gsrc = GcSource(translator) def is_local(self, variable): - assert isinstance(variable, Variable) + if isinstance(variable, Constant): + if not variable.value: # the constant NULL can be considered local + return True + self.reason = 'constant' + return False try: srcs = self.gsrc[variable] except KeyError: diff --git a/pypy/translator/stm/transform.py b/pypy/translator/stm/transform.py --- a/pypy/translator/stm/transform.py +++ b/pypy/translator/stm/transform.py @@ -102,11 +102,10 @@ self.count_get_immutable += 1 newoperations.append(op) return - if isinstance(op.args[0], Variable): - if self.localtracker.is_local(op.args[0]): - self.count_get_local += 1 - newoperations.append(op) - return + if self.localtracker.is_local(op.args[0]): + self.count_get_local += 1 + newoperations.append(op) + return self.count_get_nonlocal += 1 op1 = SpaceOperation(stmopname, op.args, op.result) newoperations.append(op1) @@ -153,8 +152,7 @@ self.transform_set(newoperations, op) def stt_stm_writebarrier(self, newoperations, op): - if (isinstance(op.args[0], Variable) and - self.localtracker.is_local(op.args[0])): + if self.localtracker.is_local(op.args[0]): op = SpaceOperation('same_as', op.args, op.result) else: self.count_write_barrier += 1 @@ -183,6 +181,24 @@ return newoperations.append(op) + def pointer_comparison(self, newoperations, op): + if (self.localtracker.is_local(op.args[0]) and + self.localtracker.is_local(op.args[1])): + newoperations.append(op) + return + nargs = [] + for v1 in op.args: + if isinstance(v1, Variable): + v0 = v1 + v1 = copyvar(self.translator.annotator, v0) + newoperations.append( + SpaceOperation('stm_normalize_global', [v0], v1)) + nargs.append(v1) + newoperations.append(SpaceOperation(op.opname, nargs, op.result)) + + stt_ptr_eq = pointer_comparison + stt_ptr_ne = pointer_comparison + def transform_graph(graph): # for tests: only transforms one graph From noreply at buildbot.pypy.org Thu Feb 23 14:42:21 2012 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 23 Feb 2012 14:42:21 +0100 (CET) Subject: [pypy-commit] pypy stm-gc: Unifies the two detections of 'Constant' in this function. Message-ID: <20120223134221.9A6568203C@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-gc Changeset: r52800:f4469915accd Date: 2012-02-23 14:17 +0100 http://bitbucket.org/pypy/pypy/changeset/f4469915accd/ Log: Unifies the two detections of 'Constant' in this function. 
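The small cleanup that follows folds the special case for Constant pointers
into the KeyError branch of is_local(), so a constant that GcSource does not
track is judged by the same loop as every other source. A rough standalone
sketch of that shape of classifier (names invented, not the real
LocalTracker):

    RETURNS_LOCAL_POINTER = set(['malloc', 'malloc_varsize'])

    class Constant(object):
        def __init__(self, value):
            self.value = value

    def is_local(variable, sources):
        try:
            srcs = sources[variable]
        except KeyError:
            if isinstance(variable, Constant):
                srcs = [variable]          # let the loop below judge it
            else:
                return False               # unknown variable: assume global
        for src in srcs:
            if isinstance(src, Constant):
                if src.value:              # only the NULL constant is "local"
                    return False
            elif src not in RETURNS_LOCAL_POINTER:
                return False
        return True

    assert is_local('v1', {'v1': ['malloc']})
    assert not is_local('v2', {'v2': ['getfield']})
    assert is_local(Constant(None), {})          # constant NULL
    assert not is_local(Constant(object()), {})  # any other constant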
diff --git a/pypy/translator/stm/localtracker.py b/pypy/translator/stm/localtracker.py --- a/pypy/translator/stm/localtracker.py +++ b/pypy/translator/stm/localtracker.py @@ -20,19 +20,17 @@ self.gsrc = GcSource(translator) def is_local(self, variable): - if isinstance(variable, Constant): - if not variable.value: # the constant NULL can be considered local - return True - self.reason = 'constant' - return False try: srcs = self.gsrc[variable] except KeyError: - # XXX we shouldn't get here, but we do translating the whole - # pypy. We should investigate at some point. In the meantime - # returning False is always safe. - self.reason = 'variable not in gsrc!' - return False + if isinstance(variable, Constant): + srcs = [variable] + else: + # XXX we shouldn't get here, but we do translating the whole + # pypy. We should investigate at some point. In the meantime + # returning False is always safe. + self.reason = 'variable not in gsrc!' + return False for src in srcs: if isinstance(src, SpaceOperation): if src.opname in RETURNS_LOCAL_POINTER: From noreply at buildbot.pypy.org Thu Feb 23 14:42:22 2012 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 23 Feb 2012 14:42:22 +0100 (CET) Subject: [pypy-commit] pypy default: Test and fix: skip that test on 64-bit. Message-ID: <20120223134222.CC6498204C@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r52801:f3469e6103b2 Date: 2012-02-23 14:42 +0100 http://bitbucket.org/pypy/pypy/changeset/f3469e6103b2/ Log: Test and fix: skip that test on 64-bit. diff --git a/pypy/rlib/jit.py b/pypy/rlib/jit.py --- a/pypy/rlib/jit.py +++ b/pypy/rlib/jit.py @@ -469,14 +469,16 @@ # FLOATs. if len(self._heuristic_order) < len(livevars): from pypy.rlib.rarithmetic import (r_singlefloat, r_longlong, - r_ulonglong) + r_ulonglong, r_uint) added = False for var, value in livevars.items(): if var not in self._heuristic_order: - if isinstance(value, (r_longlong, r_ulonglong)): + if (r_ulonglong is not r_uint and + isinstance(value, (r_longlong, r_ulonglong))): assert 0, ("should not pass a r_longlong argument for " - "now, because on 32-bit machines it would " - "need to be ordered as a FLOAT") + "now, because on 32-bit machines it needs " + "to be ordered as a FLOAT but on 64-bit " + "machines as an INT") elif isinstance(value, (int, long, r_singlefloat)): kind = '1:INT' elif isinstance(value, float): diff --git a/pypy/rlib/test/test_jit.py b/pypy/rlib/test/test_jit.py --- a/pypy/rlib/test/test_jit.py +++ b/pypy/rlib/test/test_jit.py @@ -2,6 +2,7 @@ from pypy.conftest import option from pypy.rlib.jit import hint, we_are_jitted, JitDriver, elidable_promote from pypy.rlib.jit import JitHintError, oopspec, isconstant +from pypy.rlib.rarithmetic import r_uint from pypy.translator.translator import TranslationContext, graphof from pypy.rpython.test.tool import BaseRtypingTest, LLRtypeMixin, OORtypeMixin from pypy.rpython.lltypesystem import lltype @@ -178,6 +179,11 @@ myjitdriver.jit_merge_point, i1=42, r1=A(), r2=None, f1=3.5) assert "got ['2:REF', '1:INT', '2:REF', '3:FLOAT']" in repr(e.value) + def test_argument_order_accept_r_uint(self): + # this used to fail on 64-bit, because r_uint == r_ulonglong + myjitdriver = JitDriver(greens=['i1'], reds=[]) + myjitdriver.jit_merge_point(i1=r_uint(42)) + class TestJITLLtype(BaseTestJIT, LLRtypeMixin): pass From noreply at buildbot.pypy.org Thu Feb 23 15:02:07 2012 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 23 Feb 2012 15:02:07 +0100 (CET) Subject: [pypy-commit] pypy stm-gc: Ignore calls to collect() 
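The jit.py change that follows only rejects r_longlong/r_ulonglong values
when they are genuinely wider than a machine word: on a 64-bit host
r_ulonglong is the very same type as r_uint, so such values are ordinary
word-sized INTs and must be accepted. A quick stdlib-only check of the size
fact the new `r_ulonglong is not r_uint` guard relies on (this is not the
RPython code, just an illustration for common platforms):

    import ctypes, sys

    word_bits = 64 if sys.maxsize > 2 ** 32 else 32
    ulonglong_bits = 8 * ctypes.sizeof(ctypes.c_ulonglong)

    if word_bits == 64:
        # unsigned long long fits one machine word: order it as an INT
        assert ulonglong_bits == word_bits
    else:
        # on 32-bit it needs two words and cannot be ordered as an INT
        assert ulonglong_bits == 2 * word_bits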
for now Message-ID: <20120223140207.E1E268203C@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-gc Changeset: r52802:7326ccf9874d Date: 2012-02-23 14:00 +0100 http://bitbucket.org/pypy/pypy/changeset/7326ccf9874d/ Log: Ignore calls to collect() for now diff --git a/pypy/rpython/memory/gc/stmgc.py b/pypy/rpython/memory/gc/stmgc.py --- a/pypy/rpython/memory/gc/stmgc.py +++ b/pypy/rpython/memory/gc/stmgc.py @@ -250,7 +250,8 @@ def collect(self, gen=0): - raise NotImplementedError + #raise NotImplementedError + debug_print("XXX collect() ignored") def start_transaction(self): self.collector.start_transaction() From noreply at buildbot.pypy.org Thu Feb 23 15:02:09 2012 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 23 Feb 2012 15:02:09 +0100 (CET) Subject: [pypy-commit] pypy stm-gc: Change what is printed. Now even non-debug builds can have Message-ID: <20120223140209.7C06A8203C@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-gc Changeset: r52803:a70cb92dc2e7 Date: 2012-02-23 14:58 +0100 http://bitbucket.org/pypy/pypy/changeset/a70cb92dc2e7/ Log: Change what is printed. Now even non-debug builds can have logged the operation that caused the transaction to become inevitable. diff --git a/pypy/translator/stm/src_stm/et.c b/pypy/translator/stm/src_stm/et.c --- a/pypy/translator/stm/src_stm/et.c +++ b/pypy/translator/stm/src_stm/et.c @@ -690,25 +690,19 @@ if (d == NULL) return; + if (is_inevitable(d)) /* also when the transaction is inactive */ + { + return; /* I am already inevitable */ + } + #ifdef RPY_STM_DEBUG_PRINT PYPY_DEBUG_START("stm-inevitable"); -# ifdef RPY_STM_ASSERT if (PYPY_HAVE_DEBUG_PRINTS) { - fprintf(PYPY_DEBUG_FILE, "%s%s\n", why, - is_inevitable(d) ? "" : " <===="); + fprintf(PYPY_DEBUG_FILE, "%s\n", why); } -# endif #endif - if (is_inevitable(d)) /* also when the transaction is inactive */ - { -#ifdef RPY_STM_DEBUG_PRINT - PYPY_DEBUG_STOP("stm-inevitable"); -#endif - return; /* I am already inevitable */ - } - while (1) { unsigned long curtime = get_global_timestamp(d); diff --git a/pypy/translator/stm/src_stm/et.h b/pypy/translator/stm/src_stm/et.h --- a/pypy/translator/stm/src_stm/et.h +++ b/pypy/translator/stm/src_stm/et.h @@ -30,7 +30,7 @@ float stm_read_int4f(void *, long); -#ifdef RPY_STM_ASSERT +#if 1 /* #ifdef RPY_STM_ASSERT --- but it's always useful to have this info */ # define STM_CCHARP1(arg) char* arg # define STM_EXPLAIN1(info) info #else From noreply at buildbot.pypy.org Thu Feb 23 15:02:10 2012 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 23 Feb 2012 15:02:10 +0100 (CET) Subject: [pypy-commit] pypy stm-gc: merge heads Message-ID: <20120223140210.A92CE8203C@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-gc Changeset: r52804:5501a1e8eba4 Date: 2012-02-23 14:59 +0100 http://bitbucket.org/pypy/pypy/changeset/5501a1e8eba4/ Log: merge heads diff --git a/pypy/translator/stm/localtracker.py b/pypy/translator/stm/localtracker.py --- a/pypy/translator/stm/localtracker.py +++ b/pypy/translator/stm/localtracker.py @@ -20,19 +20,17 @@ self.gsrc = GcSource(translator) def is_local(self, variable): - if isinstance(variable, Constant): - if not variable.value: # the constant NULL can be considered local - return True - self.reason = 'constant' - return False try: srcs = self.gsrc[variable] except KeyError: - # XXX we shouldn't get here, but we do translating the whole - # pypy. We should investigate at some point. In the meantime - # returning False is always safe. - self.reason = 'variable not in gsrc!' 
- return False + if isinstance(variable, Constant): + srcs = [variable] + else: + # XXX we shouldn't get here, but we do translating the whole + # pypy. We should investigate at some point. In the meantime + # returning False is always safe. + self.reason = 'variable not in gsrc!' + return False for src in srcs: if isinstance(src, SpaceOperation): if src.opname in RETURNS_LOCAL_POINTER: From noreply at buildbot.pypy.org Thu Feb 23 15:02:11 2012 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 23 Feb 2012 15:02:11 +0100 (CET) Subject: [pypy-commit] pypy stm-gc: Disable the signal module with stm for now. Message-ID: <20120223140211.D61698203C@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-gc Changeset: r52805:5c85b1a8c846 Date: 2012-02-23 15:00 +0100 http://bitbucket.org/pypy/pypy/changeset/5c85b1a8c846/ Log: Disable the signal module with stm for now. diff --git a/pypy/translator/goal/targetpypystandalone.py b/pypy/translator/goal/targetpypystandalone.py --- a/pypy/translator/goal/targetpypystandalone.py +++ b/pypy/translator/goal/targetpypystandalone.py @@ -187,6 +187,7 @@ if config.translation.stm: config.objspace.usemodules.transaction = True + config.objspace.usemodules.signal = False # XXX! elif config.objspace.usemodules.transaction: raise Exception("use --stm, not --withmod-transaction alone") From noreply at buildbot.pypy.org Thu Feb 23 16:27:10 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Thu, 23 Feb 2012 16:27:10 +0100 (CET) Subject: [pypy-commit] pypy default: make sure to flush all _io streams when we exit the interpreter Message-ID: <20120223152710.653108203C@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: Changeset: r52806:d4dee87e47cc Date: 2012-02-23 12:06 +0100 http://bitbucket.org/pypy/pypy/changeset/d4dee87e47cc/ Log: make sure to flush all _io streams when we exit the interpreter diff --git a/pypy/module/_io/__init__.py b/pypy/module/_io/__init__.py --- a/pypy/module/_io/__init__.py +++ b/pypy/module/_io/__init__.py @@ -28,6 +28,7 @@ } def init(self, space): + MixedModule.init(self, space) w_UnsupportedOperation = space.call_function( space.w_type, space.wrap('UnsupportedOperation'), @@ -35,3 +36,8 @@ space.newdict()) space.setattr(self, space.wrap('UnsupportedOperation'), w_UnsupportedOperation) + + def shutdown(self, space): + # at shutdown, flush all open streams. Ignore I/O errors. 
+ from pypy.module._io.interp_iobase import flush_all_streams + flush_all_streams(space) diff --git a/pypy/module/_io/interp_iobase.py b/pypy/module/_io/interp_iobase.py --- a/pypy/module/_io/interp_iobase.py +++ b/pypy/module/_io/interp_iobase.py @@ -43,6 +43,7 @@ self.space = space self.w_dict = space.newdict() self.__IOBase_closed = False + register_flushable_stream(space, self) def getdict(self, space): return self.w_dict @@ -98,6 +99,8 @@ space.call_method(self, "flush") finally: self.__IOBase_closed = True + flushable_streams = get_flushable_streams(space) + del flushable_streams[self] def flush_w(self, space): if self._CLOSED(): @@ -303,3 +306,32 @@ read = interp2app(W_RawIOBase.read_w), readall = interp2app(W_RawIOBase.readall_w), ) + + +# ------------------------------------------------------------ +# functions to make sure that all streams are flushed on exit +# ------------------------------------------------------------ + +class IoState: + def __init__(self, space): + self.flushable_streams = {} + +def get_flushable_streams(space): + return space.fromcache(IoState).flushable_streams + +def register_flushable_stream(space, w_stream): + streams = get_flushable_streams(space) + streams[w_stream] = None + +def flush_all_streams(space): + flushable_streams = get_flushable_streams(space) + while flushable_streams: + for w_stream in flushable_streams.keys(): + assert isinstance(w_stream, W_IOBase) + try: + del flushable_streams[w_stream] + except KeyError: + pass # key was removed in the meantime + else: + space.call_method(w_stream, 'flush') # XXX: ignore IOErrors? + diff --git a/pypy/module/_io/test/test_fileio.py b/pypy/module/_io/test/test_fileio.py --- a/pypy/module/_io/test/test_fileio.py +++ b/pypy/module/_io/test/test_fileio.py @@ -160,3 +160,20 @@ f.close() assert repr(f) == "<_io.FileIO [closed]>" +def test_flush_at_exit(): + from pypy import conftest + from pypy.tool.option import make_config, make_objspace + from pypy.tool.udir import udir + + tmpfile = udir.join('test_flush_at_exit') + config = make_config(conftest.option) + space = make_objspace(config) + space.appexec([space.wrap(str(tmpfile))], """(tmpfile): + import io + f = io.open(tmpfile, 'w') + f.write('42') + # no flush() and no close() + import sys; sys._keepalivesomewhereobscure = f + """) + space.finish() + assert tmpfile.read() == '42' From noreply at buildbot.pypy.org Thu Feb 23 16:27:11 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Thu, 23 Feb 2012 16:27:11 +0100 (CET) Subject: [pypy-commit] pypy default: refactor the autoflush of streams: we cannot keep a set of w_iobase instances, else they would be never collected by the GC. Instead, we keep a set of 'holders', which have a weakref to the actual stream. When the stream is closed, the holder is removed from the set Message-ID: <20120223152711.92CAC8204C@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: Changeset: r52807:4867e38be6fb Date: 2012-02-23 16:11 +0100 http://bitbucket.org/pypy/pypy/changeset/4867e38be6fb/ Log: refactor the autoflush of streams: we cannot keep a set of w_iobase instances, else they would be never collected by the GC. Instead, we keep a set of 'holders', which have a weakref to the actual stream. When the stream is closed, the holder is removed from the set diff --git a/pypy/module/_io/__init__.py b/pypy/module/_io/__init__.py --- a/pypy/module/_io/__init__.py +++ b/pypy/module/_io/__init__.py @@ -39,5 +39,6 @@ def shutdown(self, space): # at shutdown, flush all open streams. Ignore I/O errors. 
- from pypy.module._io.interp_iobase import flush_all_streams - flush_all_streams(space) + from pypy.module._io.interp_iobase import get_autoflushher + get_autoflushher(space).flush_all(space) + diff --git a/pypy/module/_io/interp_iobase.py b/pypy/module/_io/interp_iobase.py --- a/pypy/module/_io/interp_iobase.py +++ b/pypy/module/_io/interp_iobase.py @@ -5,6 +5,8 @@ from pypy.interpreter.gateway import interp2app from pypy.interpreter.error import OperationError, operationerrfmt from pypy.rlib.rstring import StringBuilder +from pypy.rlib import rweakref + DEFAULT_BUFFER_SIZE = 8192 @@ -43,7 +45,8 @@ self.space = space self.w_dict = space.newdict() self.__IOBase_closed = False - register_flushable_stream(space, self) + self.streamholder = None # needed by AutoFlusher + get_autoflushher(space).add(self) def getdict(self, space): return self.w_dict @@ -99,8 +102,7 @@ space.call_method(self, "flush") finally: self.__IOBase_closed = True - flushable_streams = get_flushable_streams(space) - del flushable_streams[self] + get_autoflushher(space).remove(self) def flush_w(self, space): if self._CLOSED(): @@ -312,26 +314,46 @@ # functions to make sure that all streams are flushed on exit # ------------------------------------------------------------ -class IoState: +class StreamHolder(object): + + def __init__(self, w_iobase): + self.w_iobase_ref = rweakref.ref(w_iobase) + w_iobase.autoflusher = self + + def autoflush(self, space): + w_iobase = self.w_iobase_ref() + if w_iobase is not None: + space.call_method(w_iobase, 'flush') # XXX: ignore IOErrors? + + +class AutoFlusher(object): + def __init__(self, space): - self.flushable_streams = {} + self.streams = {} -def get_flushable_streams(space): - return space.fromcache(IoState).flushable_streams + def add(self, w_iobase): + assert w_iobase.streamholder is None + holder = StreamHolder(w_iobase) + w_iobase.streamholder = holder + self.streams[holder] = None -def register_flushable_stream(space, w_stream): - streams = get_flushable_streams(space) - streams[w_stream] = None + def remove(self, w_iobase): + holder = w_iobase.streamholder + if holder is not None: + del self.streams[holder] -def flush_all_streams(space): - flushable_streams = get_flushable_streams(space) - while flushable_streams: - for w_stream in flushable_streams.keys(): - assert isinstance(w_stream, W_IOBase) - try: - del flushable_streams[w_stream] - except KeyError: - pass # key was removed in the meantime - else: - space.call_method(w_stream, 'flush') # XXX: ignore IOErrors? 
- + def flush_all(self, space): + while self.streams: + for streamholder in self.streams.keys(): + try: + del self.streams[streamholder] + except KeyError: + pass # key was removed in the meantime + else: + streamholder.autoflush(space) + + +def get_autoflushher(space): + return space.fromcache(AutoFlusher) + + From noreply at buildbot.pypy.org Thu Feb 23 16:27:12 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Thu, 23 Feb 2012 16:27:12 +0100 (CET) Subject: [pypy-commit] pypy default: merge heads Message-ID: <20120223152712.C436F8203C@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: Changeset: r52808:599b70ca76ec Date: 2012-02-23 16:26 +0100 http://bitbucket.org/pypy/pypy/changeset/599b70ca76ec/ Log: merge heads diff --git a/pypy/module/_io/__init__.py b/pypy/module/_io/__init__.py --- a/pypy/module/_io/__init__.py +++ b/pypy/module/_io/__init__.py @@ -28,6 +28,7 @@ } def init(self, space): + MixedModule.init(self, space) w_UnsupportedOperation = space.call_function( space.w_type, space.wrap('UnsupportedOperation'), @@ -35,3 +36,9 @@ space.newdict()) space.setattr(self, space.wrap('UnsupportedOperation'), w_UnsupportedOperation) + + def shutdown(self, space): + # at shutdown, flush all open streams. Ignore I/O errors. + from pypy.module._io.interp_iobase import get_autoflushher + get_autoflushher(space).flush_all(space) + diff --git a/pypy/module/_io/interp_iobase.py b/pypy/module/_io/interp_iobase.py --- a/pypy/module/_io/interp_iobase.py +++ b/pypy/module/_io/interp_iobase.py @@ -5,6 +5,8 @@ from pypy.interpreter.gateway import interp2app from pypy.interpreter.error import OperationError, operationerrfmt from pypy.rlib.rstring import StringBuilder +from pypy.rlib import rweakref + DEFAULT_BUFFER_SIZE = 8192 @@ -43,6 +45,8 @@ self.space = space self.w_dict = space.newdict() self.__IOBase_closed = False + self.streamholder = None # needed by AutoFlusher + get_autoflushher(space).add(self) def getdict(self, space): return self.w_dict @@ -98,6 +102,7 @@ space.call_method(self, "flush") finally: self.__IOBase_closed = True + get_autoflushher(space).remove(self) def flush_w(self, space): if self._CLOSED(): @@ -303,3 +308,52 @@ read = interp2app(W_RawIOBase.read_w), readall = interp2app(W_RawIOBase.readall_w), ) + + +# ------------------------------------------------------------ +# functions to make sure that all streams are flushed on exit +# ------------------------------------------------------------ + +class StreamHolder(object): + + def __init__(self, w_iobase): + self.w_iobase_ref = rweakref.ref(w_iobase) + w_iobase.autoflusher = self + + def autoflush(self, space): + w_iobase = self.w_iobase_ref() + if w_iobase is not None: + space.call_method(w_iobase, 'flush') # XXX: ignore IOErrors? 
+ + +class AutoFlusher(object): + + def __init__(self, space): + self.streams = {} + + def add(self, w_iobase): + assert w_iobase.streamholder is None + holder = StreamHolder(w_iobase) + w_iobase.streamholder = holder + self.streams[holder] = None + + def remove(self, w_iobase): + holder = w_iobase.streamholder + if holder is not None: + del self.streams[holder] + + def flush_all(self, space): + while self.streams: + for streamholder in self.streams.keys(): + try: + del self.streams[streamholder] + except KeyError: + pass # key was removed in the meantime + else: + streamholder.autoflush(space) + + +def get_autoflushher(space): + return space.fromcache(AutoFlusher) + + diff --git a/pypy/module/_io/test/test_fileio.py b/pypy/module/_io/test/test_fileio.py --- a/pypy/module/_io/test/test_fileio.py +++ b/pypy/module/_io/test/test_fileio.py @@ -160,3 +160,20 @@ f.close() assert repr(f) == "<_io.FileIO [closed]>" +def test_flush_at_exit(): + from pypy import conftest + from pypy.tool.option import make_config, make_objspace + from pypy.tool.udir import udir + + tmpfile = udir.join('test_flush_at_exit') + config = make_config(conftest.option) + space = make_objspace(config) + space.appexec([space.wrap(str(tmpfile))], """(tmpfile): + import io + f = io.open(tmpfile, 'w') + f.write('42') + # no flush() and no close() + import sys; sys._keepalivesomewhereobscure = f + """) + space.finish() + assert tmpfile.read() == '42' From noreply at buildbot.pypy.org Thu Feb 23 16:37:49 2012 From: noreply at buildbot.pypy.org (fijal) Date: Thu, 23 Feb 2012 16:37:49 +0100 (CET) Subject: [pypy-commit] pypy speedup-list-comprehension: store sizehint on listobjects Message-ID: <20120223153749.58EED8203C@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: speedup-list-comprehension Changeset: r52809:a3b63091cb8c Date: 2012-02-23 08:37 -0700 http://bitbucket.org/pypy/pypy/changeset/a3b63091cb8c/ Log: store sizehint on listobjects diff --git a/pypy/interpreter/pyopcode.py b/pypy/interpreter/pyopcode.py --- a/pypy/interpreter/pyopcode.py +++ b/pypy/interpreter/pyopcode.py @@ -721,7 +721,7 @@ lgt = self.space.int_w(self.space.len(last_val)) except OperationError: lgt = 0 # oh well - self.pushvalue(self.space.newlist(newlist(lgt))) + self.pushvalue(self.space.newlist([], sizehint=lgt)) self.pushvalue(last_val) def LOAD_ATTR(self, nameindex, next_instr): diff --git a/pypy/objspace/std/listobject.py b/pypy/objspace/std/listobject.py --- a/pypy/objspace/std/listobject.py +++ b/pypy/objspace/std/listobject.py @@ -7,7 +7,7 @@ from pypy.objspace.std.sliceobject import W_SliceObject, normalize_simple_slice from pypy.objspace.std import slicetype from pypy.interpreter import gateway, baseobjspace -from pypy.rlib.objectmodel import instantiate, specialize +from pypy.rlib.objectmodel import instantiate, specialize, newlist from pypy.rlib.listsort import make_timsort_class from pypy.rlib import rerased, jit, debug from pypy.interpreter.argument import Signature @@ -75,13 +75,15 @@ class W_ListObject(W_AbstractListObject): from pypy.objspace.std.listtype import list_typedef as typedef - def __init__(w_self, space, wrappeditems): + def __init__(w_self, space, wrappeditems, sizehint=-1): assert isinstance(wrappeditems, list) w_self.space = space if space.config.objspace.std.withliststrategies: - w_self.strategy = get_strategy_from_list_objects(space, wrappeditems) + w_self.strategy = get_strategy_from_list_objects(space, + wrappeditems) else: w_self.strategy = space.fromcache(ObjectListStrategy) + w_self.sizehint = 
sizehint w_self.init_from_list_w(wrappeditems) @staticmethod @@ -397,7 +399,7 @@ else: strategy = self.space.fromcache(ObjectListStrategy) - storage = strategy.get_empty_storage() + storage = strategy.get_empty_storage(w_list.sizehint) w_list.strategy = strategy w_list.lstorage = storage @@ -660,8 +662,8 @@ l = [self.unwrap(w_item) for w_item in list_w] w_list.lstorage = self.erase(l) - def get_empty_storage(self): - return self.erase([]) + def get_empty_storage(self, sizehint): + return self.erase(newlist(sizehint)) def clone(self, w_list): l = self.unerase(w_list.lstorage) diff --git a/pypy/objspace/std/objspace.py b/pypy/objspace/std/objspace.py --- a/pypy/objspace/std/objspace.py +++ b/pypy/objspace/std/objspace.py @@ -300,8 +300,9 @@ make_sure_not_resized(list_w) return wraptuple(self, list_w) - def newlist(self, list_w): - return W_ListObject(self, list_w) + def newlist(self, list_w, sizehint=-1): + assert not list_w or sizehint == -1 + return W_ListObject(self, list_w, sizehint) def newlist_str(self, list_s): return W_ListObject.newlist_str(self, list_s) From noreply at buildbot.pypy.org Thu Feb 23 16:40:51 2012 From: noreply at buildbot.pypy.org (fijal) Date: Thu, 23 Feb 2012 16:40:51 +0100 (CET) Subject: [pypy-commit] pypy speedup-list-comprehension: oops, a missing case Message-ID: <20120223154051.AF4278203C@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: speedup-list-comprehension Changeset: r52810:de49996d2864 Date: 2012-02-23 08:40 -0700 http://bitbucket.org/pypy/pypy/changeset/de49996d2864/ Log: oops, a missing case diff --git a/pypy/objspace/std/listobject.py b/pypy/objspace/std/listobject.py --- a/pypy/objspace/std/listobject.py +++ b/pypy/objspace/std/listobject.py @@ -663,6 +663,8 @@ w_list.lstorage = self.erase(l) def get_empty_storage(self, sizehint): + if sizehint == -1: + return self.erase([]) return self.erase(newlist(sizehint)) def clone(self, w_list): From noreply at buildbot.pypy.org Thu Feb 23 18:28:45 2012 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 23 Feb 2012 18:28:45 +0100 (CET) Subject: [pypy-commit] pypy default: Fixes. Sorry. Message-ID: <20120223172845.449E88203C@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r52811:9ba62b41a086 Date: 2012-02-23 18:28 +0100 http://bitbucket.org/pypy/pypy/changeset/9ba62b41a086/ Log: Fixes. Sorry. 
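The test adjustments that follow ("Fixes. Sorry.") bring callers in line with
the jit_merge_point argument-ordering heuristic touched earlier in this
batch: live variables are grouped as INT, then REF, then FLOAT, and a value
that cannot be ordered (such as an unsigned long long on a 32-bit host) has
to be hidden behind a reference, which is why test_ulonglong_mod now stores
sa and i on an instance. A minimal standalone model of that ordering
(invented helper names, not pypy.rlib.jit):

    def kind_of(value):
        if isinstance(value, float):
            return '3:FLOAT'
        if isinstance(value, int):
            return '1:INT'
        return '2:REF'

    def heuristic_order(livevars):
        # ints first, then references, then floats; the kind strings match
        # the "got ['2:REF', '1:INT', '2:REF', '3:FLOAT']" message in test_jit
        return sorted(livevars, key=lambda name: kind_of(livevars[name]))

    livevars = {'i1': 42, 'r1': object(), 'f1': 3.5}
    ordered = heuristic_order(livevars)
    assert [kind_of(livevars[n]) for n in ordered] == \
        ['1:INT', '2:REF', '3:FLOAT']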
diff --git a/pypy/jit/backend/x86/test/test_ztranslation.py b/pypy/jit/backend/x86/test/test_ztranslation.py --- a/pypy/jit/backend/x86/test/test_ztranslation.py +++ b/pypy/jit/backend/x86/test/test_ztranslation.py @@ -52,6 +52,7 @@ set_param(jitdriver, "trace_eagerness", 2) total = 0 frame = Frame(i) + j = float(j) while frame.i > 3: jitdriver.can_enter_jit(frame=frame, total=total, j=j) jitdriver.jit_merge_point(frame=frame, total=total, j=j) diff --git a/pypy/jit/metainterp/test/test_ajit.py b/pypy/jit/metainterp/test/test_ajit.py --- a/pypy/jit/metainterp/test/test_ajit.py +++ b/pypy/jit/metainterp/test/test_ajit.py @@ -2943,11 +2943,18 @@ self.check_resops(arraylen_gc=3) def test_ulonglong_mod(self): - myjitdriver = JitDriver(greens = [], reds = ['n', 'sa', 'i']) + myjitdriver = JitDriver(greens = [], reds = ['n', 'a']) + class A: + pass def f(n): sa = i = rffi.cast(rffi.ULONGLONG, 1) + a = A() while i < rffi.cast(rffi.ULONGLONG, n): - myjitdriver.jit_merge_point(sa=sa, n=n, i=i) + a.sa = sa + a.i = i + myjitdriver.jit_merge_point(n=n, a=a) + sa = a.sa + i = a.i sa += sa % i i += 1 res = self.meta_interp(f, [32]) diff --git a/pypy/jit/tl/tinyframe/tinyframe.py b/pypy/jit/tl/tinyframe/tinyframe.py --- a/pypy/jit/tl/tinyframe/tinyframe.py +++ b/pypy/jit/tl/tinyframe/tinyframe.py @@ -210,7 +210,7 @@ def repr(self): return "" % (self.outer.repr(), self.inner.repr()) -driver = JitDriver(greens = ['code', 'i'], reds = ['self'], +driver = JitDriver(greens = ['i', 'code'], reds = ['self'], virtualizables = ['self']) class Frame(object): From noreply at buildbot.pypy.org Thu Feb 23 18:30:31 2012 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 23 Feb 2012 18:30:31 +0100 (CET) Subject: [pypy-commit] pypy default: Fix. Message-ID: <20120223173031.73F758203C@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r52812:b8aed975987f Date: 2012-02-23 18:30 +0100 http://bitbucket.org/pypy/pypy/changeset/b8aed975987f/ Log: Fix. 
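The one-line fix that follows tags the fatalerror helpers in
pypy/rlib/debug.py with `_jit_look_inside_ = False` in addition to
`_dont_inline_ = True`, so the tracer treats them as opaque calls instead of
tracing into code that aborts the process. The mechanism is just plain
function attributes that the framework reads later; a self-contained sketch
with an invented consumer (not the RPython machinery):

    def mark_opaque(func):
        # tag 'func'; a (hypothetical) tracer/inliner checks these later
        func._dont_inline_ = True
        func._jit_look_inside_ = False
        return func

    @mark_opaque
    def fatalerror(msg):
        raise SystemExit("fatal: " + msg)

    def tracer_may_look_inside(func):
        return getattr(func, '_jit_look_inside_', True)

    assert fatalerror._dont_inline_
    assert not tracer_may_look_inside(fatalerror)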
diff --git a/pypy/rlib/debug.py b/pypy/rlib/debug.py
--- a/pypy/rlib/debug.py
+++ b/pypy/rlib/debug.py
@@ -26,6 +26,7 @@
     llop.debug_print_traceback(lltype.Void)
     llop.debug_fatalerror(lltype.Void, msg)
 fatalerror._dont_inline_ = True
+fatalerror._jit_look_inside_ = False
 fatalerror._annenforceargs_ = [str]

 def fatalerror_notb(msg):
@@ -34,6 +35,7 @@
     from pypy.rpython.lltypesystem.lloperation import llop
     llop.debug_fatalerror(lltype.Void, msg)
 fatalerror_notb._dont_inline_ = True
+fatalerror_notb._jit_look_inside_ = False
 fatalerror_notb._annenforceargs_ = [str]

From noreply at buildbot.pypy.org  Thu Feb 23 18:32:25 2012
From: noreply at buildbot.pypy.org (arigo)
Date: Thu, 23 Feb 2012 18:32:25 +0100 (CET)
Subject: [pypy-commit] pypy default: Fix this test
Message-ID: <20120223173225.8F51A8203C@wyvern.cs.uni-duesseldorf.de>

Author: Armin Rigo
Branch: 
Changeset: r52813:55fd1d7090fb
Date: 2012-02-23 18:32 +0100
http://bitbucket.org/pypy/pypy/changeset/55fd1d7090fb/

Log: Fix this test

diff --git a/pypy/translator/sandbox/test/test_sandbox.py b/pypy/translator/sandbox/test/test_sandbox.py
--- a/pypy/translator/sandbox/test/test_sandbox.py
+++ b/pypy/translator/sandbox/test/test_sandbox.py
@@ -145,9 +145,9 @@
     g = pipe.stdin
     f = pipe.stdout
     expect(f, g, "ll_os.ll_os_getenv", ("PYPY_GENERATIONGC_NURSERY",), None)
-    if sys.platform.startswith('linux'): # on Mac, uses another (sandboxsafe) approach
-        expect(f, g, "ll_os.ll_os_open", ("/proc/cpuinfo", 0, 420),
-               OSError(5232, "xyz"))
+    #if sys.platform.startswith('linux'):
+    #    expect(f, g, "ll_os.ll_os_open", ("/proc/cpuinfo", 0, 420),
+    #           OSError(5232, "xyz"))
     expect(f, g, "ll_os.ll_os_getenv", ("PYPY_GC_DEBUG",), None)
     g.close()
     tail = f.read()

From noreply at buildbot.pypy.org  Thu Feb 23 21:19:46 2012
From: noreply at buildbot.pypy.org (amauryfa)
Date: Thu, 23 Feb 2012 21:19:46 +0100 (CET)
Subject: [pypy-commit] pypy default: cpyext: add pymath.h and a definition of Py_HUGE_VAL.
Message-ID: <20120223201946.6872F8203C@wyvern.cs.uni-duesseldorf.de>

Author: Amaury Forgeot d'Arc
Branch: 
Changeset: r52814:13ea80daa6d4
Date: 2012-02-23 00:07 +0100
http://bitbucket.org/pypy/pypy/changeset/13ea80daa6d4/

Log: cpyext: add pymath.h and a definition of Py_HUGE_VAL.

diff --git a/pypy/module/cpyext/include/Python.h b/pypy/module/cpyext/include/Python.h
--- a/pypy/module/cpyext/include/Python.h
+++ b/pypy/module/cpyext/include/Python.h
@@ -113,6 +113,7 @@
 #include "compile.h"
 #include "frameobject.h"
 #include "eval.h"
+#include "pymath.h"
 #include "pymem.h"
 #include "pycobject.h"
 #include "pycapsule.h"
diff --git a/pypy/module/cpyext/include/pymath.h b/pypy/module/cpyext/include/pymath.h
new file mode 100644
--- /dev/null
+++ b/pypy/module/cpyext/include/pymath.h
@@ -0,0 +1,20 @@
+#ifndef Py_PYMATH_H
+#define Py_PYMATH_H
+
+/**************************************************************************
+Symbols and macros to supply platform-independent interfaces to mathematical
+functions and constants
+**************************************************************************/
+
+/* HUGE_VAL is supposed to expand to a positive double infinity. Python
+ * uses Py_HUGE_VAL instead because some platforms are broken in this
+ * respect. We used to embed code in pyport.h to try to worm around that,
+ * but different platforms are broken in conflicting ways. If you're on
+ * a platform where HUGE_VAL is defined incorrectly, fiddle your Python
+ * config to #define Py_HUGE_VAL to something that works on your platform.
+ */ +#ifndef Py_HUGE_VAL +#define Py_HUGE_VAL HUGE_VAL +#endif + +#endif /* Py_PYMATH_H */ From noreply at buildbot.pypy.org Thu Feb 23 21:19:47 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Thu, 23 Feb 2012 21:19:47 +0100 (CET) Subject: [pypy-commit] pypy default: cpyext: implement PyUnicode_Tailmatch Message-ID: <20120223201947.A4A0D8203C@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: Changeset: r52815:db6f7731ed5f Date: 2012-02-23 00:19 +0100 http://bitbucket.org/pypy/pypy/changeset/db6f7731ed5f/ Log: cpyext: implement PyUnicode_Tailmatch diff --git a/pypy/module/cpyext/stubs.py b/pypy/module/cpyext/stubs.py --- a/pypy/module/cpyext/stubs.py +++ b/pypy/module/cpyext/stubs.py @@ -2317,17 +2317,6 @@ use the default error handling.""" raise NotImplementedError - at cpython_api([PyObject, PyObject, Py_ssize_t, Py_ssize_t, rffi.INT_real], rffi.INT_real, error=-1) -def PyUnicode_Tailmatch(space, str, substr, start, end, direction): - """Return 1 if substr matches str*[*start:end] at the given tail end - (direction == -1 means to do a prefix match, direction == 1 a suffix match), - 0 otherwise. Return -1 if an error occurred. - - This function used an int type for start and end. This - might require changes in your code for properly supporting 64-bit - systems.""" - raise NotImplementedError - @cpython_api([PyObject, PyObject, Py_ssize_t, Py_ssize_t, rffi.INT_real], Py_ssize_t, error=-2) def PyUnicode_Find(space, str, substr, start, end, direction): """Return the first position of substr in str*[*start:end] using the given diff --git a/pypy/module/cpyext/test/test_unicodeobject.py b/pypy/module/cpyext/test/test_unicodeobject.py --- a/pypy/module/cpyext/test/test_unicodeobject.py +++ b/pypy/module/cpyext/test/test_unicodeobject.py @@ -437,3 +437,10 @@ api.PyUnicode_Replace(w_str, w_substr, w_replstr, 2)) assert u"zbzbzbzb" == space.unwrap( api.PyUnicode_Replace(w_str, w_substr, w_replstr, -1)) + + def test_tailmatch(self, space, api): + w_str = space.wrap(u"abcdef") + assert api.PyUnicode_Tailmatch(w_str, space.wrap("cde"), 2, 10, 1) == 1 + assert api.PyUnicode_Tailmatch(w_str, space.wrap("cde"), 1, 5, -1) == 1 + self.raises(space, api, TypeError, + api.PyUnicode_Tailmatch, w_str, space.wrap(3), 2, 10, 1) diff --git a/pypy/module/cpyext/unicodeobject.py b/pypy/module/cpyext/unicodeobject.py --- a/pypy/module/cpyext/unicodeobject.py +++ b/pypy/module/cpyext/unicodeobject.py @@ -12,7 +12,7 @@ make_typedescr, get_typedescr) from pypy.module.cpyext.stringobject import PyString_Check from pypy.module.sys.interp_encoding import setdefaultencoding -from pypy.objspace.std import unicodeobject, unicodetype +from pypy.objspace.std import unicodeobject, unicodetype, stringtype from pypy.rlib import runicode from pypy.tool.sourcetools import func_renamer import sys @@ -560,3 +560,16 @@ return space.call_method(w_str, "replace", w_substr, w_replstr, space.wrap(maxcount)) + at cpython_api([PyObject, PyObject, Py_ssize_t, Py_ssize_t, rffi.INT_real], + rffi.INT_real, error=-1) +def PyUnicode_Tailmatch(space, w_str, w_substr, start, end, direction): + """Return 1 if substr matches str[start:end] at the given tail end + (direction == -1 means to do a prefix match, direction == 1 a + suffix match), 0 otherwise. 
Return -1 if an error occurred.""" + str = space.unicode_w(w_str) + substr = space.unicode_w(w_substr) + if rffi.cast(lltype.Signed, direction) >= 0: + return stringtype.stringstartswith(str, substr, start, end) + else: + return stringtype.stringendswith(str, substr, start, end) + From noreply at buildbot.pypy.org Thu Feb 23 21:19:48 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Thu, 23 Feb 2012 21:19:48 +0100 (CET) Subject: [pypy-commit] pypy default: cpyext: add PyFrozenSet_Type Message-ID: <20120223201948.DC3608203C@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: Changeset: r52816:7bf5725d49fd Date: 2012-02-23 00:20 +0100 http://bitbucket.org/pypy/pypy/changeset/7bf5725d49fd/ Log: cpyext: add PyFrozenSet_Type diff --git a/pypy/module/cpyext/api.py b/pypy/module/cpyext/api.py --- a/pypy/module/cpyext/api.py +++ b/pypy/module/cpyext/api.py @@ -385,6 +385,7 @@ "Tuple": "space.w_tuple", "List": "space.w_list", "Set": "space.w_set", + "FrozenSet": "space.w_frozenset", "Int": "space.w_int", "Bool": "space.w_bool", "Float": "space.w_float", From noreply at buildbot.pypy.org Thu Feb 23 21:19:50 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Thu, 23 Feb 2012 21:19:50 +0100 (CET) Subject: [pypy-commit] pypy default: cpyext: add PyUnicode_GetMax() Message-ID: <20120223201950.17A928203C@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: Changeset: r52817:16b015b252c3 Date: 2012-02-23 00:25 +0100 http://bitbucket.org/pypy/pypy/changeset/16b015b252c3/ Log: cpyext: add PyUnicode_GetMax() diff --git a/pypy/module/cpyext/unicodeobject.py b/pypy/module/cpyext/unicodeobject.py --- a/pypy/module/cpyext/unicodeobject.py +++ b/pypy/module/cpyext/unicodeobject.py @@ -155,6 +155,11 @@ except KeyError: return -1.0 + at cpython_api([], Py_UNICODE, error=CANNOT_FAIL) +def PyUnicode_GetMax(space): + """Get the maximum ordinal for a Unicode character.""" + return unichr(runicode.MAXUNICODE) + @cpython_api([PyObject], rffi.CCHARP, error=CANNOT_FAIL) def PyUnicode_AS_DATA(space, ref): """Return a pointer to the internal buffer of the object. 
o has to be a From noreply at buildbot.pypy.org Thu Feb 23 21:19:51 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Thu, 23 Feb 2012 21:19:51 +0100 (CET) Subject: [pypy-commit] pypy default: cpyext: implement remaining Py_UNICODE_IS* functions Message-ID: <20120223201951.54AA18203C@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: Changeset: r52818:4f01039bfe04 Date: 2012-02-23 00:38 +0100 http://bitbucket.org/pypy/pypy/changeset/4f01039bfe04/ Log: cpyext: implement remaining Py_UNICODE_IS* functions diff --git a/pypy/module/cpyext/stubs.py b/pypy/module/cpyext/stubs.py --- a/pypy/module/cpyext/stubs.py +++ b/pypy/module/cpyext/stubs.py @@ -1853,26 +1853,6 @@ """ raise NotImplementedError - at cpython_api([Py_UNICODE], rffi.INT_real, error=CANNOT_FAIL) -def Py_UNICODE_ISTITLE(space, ch): - """Return 1 or 0 depending on whether ch is a titlecase character.""" - raise NotImplementedError - - at cpython_api([Py_UNICODE], rffi.INT_real, error=CANNOT_FAIL) -def Py_UNICODE_ISDIGIT(space, ch): - """Return 1 or 0 depending on whether ch is a digit character.""" - raise NotImplementedError - - at cpython_api([Py_UNICODE], rffi.INT_real, error=CANNOT_FAIL) -def Py_UNICODE_ISNUMERIC(space, ch): - """Return 1 or 0 depending on whether ch is a numeric character.""" - raise NotImplementedError - - at cpython_api([Py_UNICODE], rffi.INT_real, error=CANNOT_FAIL) -def Py_UNICODE_ISALPHA(space, ch): - """Return 1 or 0 depending on whether ch is an alphabetic character.""" - raise NotImplementedError - @cpython_api([rffi.CCHARP], PyObject) def PyUnicode_FromFormat(space, format): """Take a C printf()-style format string and a variable number of diff --git a/pypy/module/cpyext/test/test_unicodeobject.py b/pypy/module/cpyext/test/test_unicodeobject.py --- a/pypy/module/cpyext/test/test_unicodeobject.py +++ b/pypy/module/cpyext/test/test_unicodeobject.py @@ -204,8 +204,18 @@ assert api.Py_UNICODE_ISSPACE(unichr(char)) assert not api.Py_UNICODE_ISSPACE(u'a') + assert api.Py_UNICODE_ISALPHA(u'a') + assert not api.Py_UNICODE_ISALPHA(u'0') + assert api.Py_UNICODE_ISALNUM(u'a') + assert api.Py_UNICODE_ISALNUM(u'0') + assert not api.Py_UNICODE_ISALNUM(u'+') + assert api.Py_UNICODE_ISDECIMAL(u'\u0660') assert not api.Py_UNICODE_ISDECIMAL(u'a') + assert api.Py_UNICODE_ISDIGIT(u'9') + assert not api.Py_UNICODE_ISDIGIT(u'@') + assert api.Py_UNICODE_ISNUMERIC(u'9') + assert not api.Py_UNICODE_ISNUMERIC(u'@') for char in [0x0a, 0x0d, 0x1c, 0x1d, 0x1e, 0x85, 0x2028, 0x2029]: assert api.Py_UNICODE_ISLINEBREAK(unichr(char)) @@ -216,6 +226,9 @@ assert not api.Py_UNICODE_ISUPPER(u'a') assert not api.Py_UNICODE_ISLOWER(u'�') assert api.Py_UNICODE_ISUPPER(u'�') + assert not api.Py_UNICODE_ISTITLE(u'A') + assert api.Py_UNICODE_ISTITLE( + u'\N{LATIN CAPITAL LETTER L WITH SMALL LETTER J}') def test_TOLOWER(self, space, api): assert api.Py_UNICODE_TOLOWER(u'�') == u'�' diff --git a/pypy/module/cpyext/unicodeobject.py b/pypy/module/cpyext/unicodeobject.py --- a/pypy/module/cpyext/unicodeobject.py +++ b/pypy/module/cpyext/unicodeobject.py @@ -89,6 +89,11 @@ return unicodedb.isspace(ord(ch)) @cpython_api([Py_UNICODE], rffi.INT_real, error=CANNOT_FAIL) +def Py_UNICODE_ISALPHA(space, ch): + """Return 1 or 0 depending on whether ch is an alphabetic character.""" + return unicodedb.isalpha(ord(ch)) + + at cpython_api([Py_UNICODE], rffi.INT_real, error=CANNOT_FAIL) def Py_UNICODE_ISALNUM(space, ch): """Return 1 or 0 depending on whether ch is an alphanumeric character.""" return unicodedb.isalnum(ord(ch)) @@ -104,6 
+109,16 @@ return unicodedb.isdecimal(ord(ch)) @cpython_api([Py_UNICODE], rffi.INT_real, error=CANNOT_FAIL) +def Py_UNICODE_ISDIGIT(space, ch): + """Return 1 or 0 depending on whether ch is a digit character.""" + return unicodedb.isdigit(ord(ch)) + + at cpython_api([Py_UNICODE], rffi.INT_real, error=CANNOT_FAIL) +def Py_UNICODE_ISNUMERIC(space, ch): + """Return 1 or 0 depending on whether ch is a numeric character.""" + return unicodedb.isnumeric(ord(ch)) + + at cpython_api([Py_UNICODE], rffi.INT_real, error=CANNOT_FAIL) def Py_UNICODE_ISLOWER(space, ch): """Return 1 or 0 depending on whether ch is a lowercase character.""" return unicodedb.islower(ord(ch)) @@ -113,6 +128,11 @@ """Return 1 or 0 depending on whether ch is an uppercase character.""" return unicodedb.isupper(ord(ch)) + at cpython_api([Py_UNICODE], rffi.INT_real, error=CANNOT_FAIL) +def Py_UNICODE_ISTITLE(space, ch): + """Return 1 or 0 depending on whether ch is a titlecase character.""" + return unicodedb.istitle(ord(ch)) + @cpython_api([Py_UNICODE], Py_UNICODE, error=CANNOT_FAIL) def Py_UNICODE_TOLOWER(space, ch): """Return the character ch converted to lower case.""" From noreply at buildbot.pypy.org Thu Feb 23 21:19:52 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Thu, 23 Feb 2012 21:19:52 +0100 (CET) Subject: [pypy-commit] pypy default: cpyext: add PyCode_Check(), PyCode_GetNumFree() Message-ID: <20120223201952.8ED458203C@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: Changeset: r52819:6cc0977ede36 Date: 2012-02-23 00:50 +0100 http://bitbucket.org/pypy/pypy/changeset/6cc0977ede36/ Log: cpyext: add PyCode_Check(), PyCode_GetNumFree() diff --git a/pypy/module/cpyext/funcobject.py b/pypy/module/cpyext/funcobject.py --- a/pypy/module/cpyext/funcobject.py +++ b/pypy/module/cpyext/funcobject.py @@ -1,6 +1,6 @@ from pypy.rpython.lltypesystem import rffi, lltype from pypy.module.cpyext.api import ( - PyObjectFields, generic_cpy_call, CONST_STRING, + PyObjectFields, generic_cpy_call, CONST_STRING, CANNOT_FAIL, cpython_api, bootstrap_function, cpython_struct, build_type_checkers) from pypy.module.cpyext.pyobject import ( PyObject, make_ref, from_ref, Py_DecRef, make_typedescr, borrow_from) @@ -48,6 +48,7 @@ PyFunction_Check, PyFunction_CheckExact = build_type_checkers("Function", Function) PyMethod_Check, PyMethod_CheckExact = build_type_checkers("Method", Method) +PyCode_Check, PyCode_CheckExact = build_type_checkers("Code", PyCode) def function_attach(space, py_obj, w_obj): py_func = rffi.cast(PyFunctionObject, py_obj) @@ -167,3 +168,9 @@ freevars=[], cellvars=[])) + at cpython_api([PyObject], rffi.INT_real, error=CANNOT_FAIL) +def PyCode_GetNumFree(space, w_co): + """Return the number of free variables in co.""" + co = space.interp_w(PyCode, w_co) + return len(co.co_freevars) + diff --git a/pypy/module/cpyext/stubs.py b/pypy/module/cpyext/stubs.py --- a/pypy/module/cpyext/stubs.py +++ b/pypy/module/cpyext/stubs.py @@ -182,16 +182,6 @@ used as the positional and keyword parameters to the object's constructor.""" raise NotImplementedError - at cpython_api([PyObject], rffi.INT_real, error=CANNOT_FAIL) -def PyCode_Check(space, co): - """Return true if co is a code object""" - raise NotImplementedError - - at cpython_api([PyObject], rffi.INT_real, error=CANNOT_FAIL) -def PyCode_GetNumFree(space, co): - """Return the number of free variables in co.""" - raise NotImplementedError - @cpython_api([PyObject], rffi.INT_real, error=-1) def PyCodec_Register(space, search_function): """Register a new codec 
search function. diff --git a/pypy/module/cpyext/test/test_funcobject.py b/pypy/module/cpyext/test/test_funcobject.py --- a/pypy/module/cpyext/test/test_funcobject.py +++ b/pypy/module/cpyext/test/test_funcobject.py @@ -81,6 +81,14 @@ rffi.free_charp(filename) rffi.free_charp(funcname) + def test_getnumfree(self, space, api): + w_function = space.appexec([], """(): + a = 5 + def method(x): return a, x + return method + """) + assert api.PyCode_GetNumFree(w_function.code) == 1 + def test_classmethod(self, space, api): w_function = space.appexec([], """(): def method(x): return x From noreply at buildbot.pypy.org Thu Feb 23 21:19:53 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Thu, 23 Feb 2012 21:19:53 +0100 (CET) Subject: [pypy-commit] pypy default: cpyext: add PyEval_EvalCode() Message-ID: <20120223201953.D1D1A8203C@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: Changeset: r52820:33f342e61049 Date: 2012-02-23 01:05 +0100 http://bitbucket.org/pypy/pypy/changeset/33f342e61049/ Log: cpyext: add PyEval_EvalCode() diff --git a/pypy/module/cpyext/eval.py b/pypy/module/cpyext/eval.py --- a/pypy/module/cpyext/eval.py +++ b/pypy/module/cpyext/eval.py @@ -5,6 +5,7 @@ cpython_struct) from pypy.module.cpyext.pyobject import PyObject, borrow_from from pypy.module.cpyext.pyerrors import PyErr_SetFromErrno +from pypy.module.cpyext.funcobject import PyCodeObject from pypy.module.__builtin__ import compiling PyCompilerFlags = cpython_struct( @@ -48,6 +49,17 @@ return None return borrow_from(None, caller.w_globals) + at cpython_api([PyCodeObject, PyObject, PyObject], PyObject) +def PyEval_EvalCode(space, w_code, w_globals, w_locals): + """This is a simplified interface to PyEval_EvalCodeEx(), with just + the code object, and the dictionaries of global and local variables. + The other arguments are set to NULL.""" + if w_globals is None: + w_globals = space.w_None + if w_locals is None: + w_locals = space.w_None + return compiling.eval(space, w_code, w_globals, w_locals) + @cpython_api([PyObject, PyObject], PyObject) def PyObject_CallObject(space, w_obj, w_arg): """ diff --git a/pypy/module/cpyext/stubs.py b/pypy/module/cpyext/stubs.py --- a/pypy/module/cpyext/stubs.py +++ b/pypy/module/cpyext/stubs.py @@ -2514,13 +2514,6 @@ returns.""" raise NotImplementedError - at cpython_api([PyCodeObject, PyObject, PyObject], PyObject) -def PyEval_EvalCode(space, co, globals, locals): - """This is a simplified interface to PyEval_EvalCodeEx(), with just - the code object, and the dictionaries of global and local variables. 
- The other arguments are set to NULL.""" - raise NotImplementedError - @cpython_api([PyCodeObject, PyObject, PyObject, PyObjectP, rffi.INT_real, PyObjectP, rffi.INT_real, PyObjectP, rffi.INT_real, PyObject], PyObject) def PyEval_EvalCodeEx(space, co, globals, locals, args, argcount, kws, kwcount, defs, defcount, closure): """Evaluate a precompiled code object, given a particular environment for its diff --git a/pypy/module/cpyext/test/test_eval.py b/pypy/module/cpyext/test/test_eval.py --- a/pypy/module/cpyext/test/test_eval.py +++ b/pypy/module/cpyext/test/test_eval.py @@ -63,6 +63,22 @@ assert space.int_w(w_res) == 10 + def test_evalcode(self, space, api): + w_f = space.appexec([], """(): + def f(*args): + assert isinstance(args, tuple) + return len(args) + 8 + return f + """) + + w_t = space.newtuple([space.wrap(1), space.wrap(2)]) + w_globals = space.newdict() + w_locals = space.newdict() + space.setitem(w_locals, space.wrap("args"), w_t) + w_res = api.PyEval_EvalCode(w_f.code, w_globals, w_locals) + + assert space.int_w(w_res) == 10 + def test_run_simple_string(self, space, api): def run(code): buf = rffi.str2charp(code) From noreply at buildbot.pypy.org Thu Feb 23 21:19:55 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Thu, 23 Feb 2012 21:19:55 +0100 (CET) Subject: [pypy-commit] pypy default: cpyext: implement PyRun_StringFlags() Message-ID: <20120223201955.19DAD8203C@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: Changeset: r52821:91960d426061 Date: 2012-02-23 01:37 +0100 http://bitbucket.org/pypy/pypy/changeset/91960d426061/ Log: cpyext: implement PyRun_StringFlags() diff --git a/pypy/module/cpyext/eval.py b/pypy/module/cpyext/eval.py --- a/pypy/module/cpyext/eval.py +++ b/pypy/module/cpyext/eval.py @@ -9,7 +9,7 @@ from pypy.module.__builtin__ import compiling PyCompilerFlags = cpython_struct( - "PyCompilerFlags", ()) + "PyCompilerFlags", (("cf_flags", rffi.INT),)) PyCompilerFlagsPtr = lltype.Ptr(PyCompilerFlags) @cpython_api([PyObject, PyObject, PyObject], PyObject) @@ -86,7 +86,7 @@ Py_file_input = 257 Py_eval_input = 258 -def compile_string(space, source, filename, start): +def compile_string(space, source, filename, start, flags=0): w_source = space.wrap(source) start = rffi.cast(lltype.Signed, start) if start == Py_file_input: @@ -98,7 +98,7 @@ else: raise OperationError(space.w_ValueError, space.wrap( "invalid mode parameter for compilation")) - return compiling.compile(space, w_source, filename, mode) + return compiling.compile(space, w_source, filename, mode, flags) def run_string(space, source, filename, start, w_globals, w_locals): w_code = compile_string(space, source, filename, start) @@ -121,6 +121,23 @@ filename = "" return run_string(space, source, filename, start, w_globals, w_locals) + at cpython_api([rffi.CCHARP, rffi.INT_real, PyObject, PyObject, + PyCompilerFlagsPtr], PyObject) +def PyRun_StringFlags(space, source, start, w_globals, w_locals, flagsptr): + """Execute Python source code from str in the context specified by the + dictionaries globals and locals with the compiler flags specified by + flags. The parameter start specifies the start token that should be used to + parse the source code. 
+ + Returns the result of executing the code as a Python object, or NULL if an + exception was raised.""" + if flagsptr: + flags = flagsptr.c_cf_flags + else: + flags = 0 + w_code = compile_string(space, source, "", start, flags) + return compiling.eval(space, w_code, w_globals, w_locals) + @cpython_api([FILEP, CONST_STRING, rffi.INT_real, PyObject, PyObject], PyObject) def PyRun_File(space, fp, filename, start, w_globals, w_locals): """This is a simplified interface to PyRun_FileExFlags() below, leaving @@ -162,7 +179,7 @@ @cpython_api([rffi.CCHARP, rffi.CCHARP, rffi.INT_real, PyCompilerFlagsPtr], PyObject) -def Py_CompileStringFlags(space, source, filename, start, flags): +def Py_CompileStringFlags(space, source, filename, start, flagsptr): """Parse and compile the Python source code in str, returning the resulting code object. The start token is given by start; this can be used to constrain the code which can be compiled and should @@ -172,7 +189,8 @@ returns NULL if the code cannot be parsed or compiled.""" source = rffi.charp2str(source) filename = rffi.charp2str(filename) - if flags: - raise OperationError(space.w_NotImplementedError, space.wrap( - "cpyext Py_CompileStringFlags does not accept flags")) - return compile_string(space, source, filename, start) + if flagsptr: + flags = flagsptr.c_cf_flags + else: + flags = 0 + return compile_string(space, source, filename, start, flags) diff --git a/pypy/module/cpyext/include/pythonrun.h b/pypy/module/cpyext/include/pythonrun.h --- a/pypy/module/cpyext/include/pythonrun.h +++ b/pypy/module/cpyext/include/pythonrun.h @@ -19,6 +19,8 @@ int cf_flags; /* bitmask of CO_xxx flags relevant to future */ } PyCompilerFlags; +#define PyCF_SOURCE_IS_UTF8 0x0100 + #define Py_CompileString(str, filename, start) Py_CompileStringFlags(str, filename, start, NULL) #ifdef __cplusplus diff --git a/pypy/module/cpyext/stubs.py b/pypy/module/cpyext/stubs.py --- a/pypy/module/cpyext/stubs.py +++ b/pypy/module/cpyext/stubs.py @@ -2483,17 +2483,6 @@ source code is read from fp instead of an in-memory string.""" raise NotImplementedError - at cpython_api([rffi.CCHARP, rffi.INT_real, PyObject, PyObject, PyCompilerFlags], PyObject) -def PyRun_StringFlags(space, str, start, globals, locals, flags): - """Execute Python source code from str in the context specified by the - dictionaries globals and locals with the compiler flags specified by - flags. The parameter start specifies the start token that should be used to - parse the source code. 
- - Returns the result of executing the code as a Python object, or NULL if an - exception was raised.""" - raise NotImplementedError - @cpython_api([FILE, rffi.CCHARP, rffi.INT_real, PyObject, PyObject, rffi.INT_real], PyObject) def PyRun_FileEx(space, fp, filename, start, globals, locals, closeit): """This is a simplified interface to PyRun_FileExFlags() below, leaving diff --git a/pypy/module/cpyext/test/test_eval.py b/pypy/module/cpyext/test/test_eval.py --- a/pypy/module/cpyext/test/test_eval.py +++ b/pypy/module/cpyext/test/test_eval.py @@ -2,9 +2,10 @@ from pypy.module.cpyext.test.test_cpyext import AppTestCpythonExtensionBase from pypy.module.cpyext.test.test_api import BaseApiTest from pypy.module.cpyext.eval import ( - Py_single_input, Py_file_input, Py_eval_input) + Py_single_input, Py_file_input, Py_eval_input, PyCompilerFlags) from pypy.module.cpyext.api import fopen, fclose, fileno, Py_ssize_tP from pypy.interpreter.gateway import interp2app +from pypy.interpreter.astcompiler import consts from pypy.tool.udir import udir import sys, os @@ -112,6 +113,16 @@ assert 42 * 43 == space.unwrap( api.PyObject_GetItem(w_globals, space.wrap("a"))) + def test_run_string_flags(self, space, api): + flags = lltype.malloc(PyCompilerFlags, flavor='raw') + flags.c_cf_flags = rffi.cast(rffi.INT, consts.PyCF_SOURCE_IS_UTF8) + w_globals = space.newdict() + api.PyRun_StringFlags("a = u'caf\xc3\xa9'", Py_single_input, + w_globals, w_globals, flags) + w_a = space.getitem(w_globals, space.wrap("a")) + assert space.unwrap(w_a) == u'caf\xe9' + lltype.free(flags, flavor='raw') + def test_run_file(self, space, api): filepath = udir / "cpyext_test_runfile.py" filepath.write("raise ZeroDivisionError") From noreply at buildbot.pypy.org Thu Feb 23 21:19:56 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Thu, 23 Feb 2012 21:19:56 +0100 (CET) Subject: [pypy-commit] pypy default: cpyext: Implement PyEval_MergeCompilerFlags() Message-ID: <20120223201956.561658203C@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: Changeset: r52822:116c15429aa4 Date: 2012-02-23 02:05 +0100 http://bitbucket.org/pypy/pypy/changeset/116c15429aa4/ Log: cpyext: Implement PyEval_MergeCompilerFlags() diff --git a/pypy/module/cpyext/eval.py b/pypy/module/cpyext/eval.py --- a/pypy/module/cpyext/eval.py +++ b/pypy/module/cpyext/eval.py @@ -1,4 +1,5 @@ from pypy.interpreter.error import OperationError +from pypy.interpreter.astcompiler import consts from pypy.rpython.lltypesystem import rffi, lltype from pypy.module.cpyext.api import ( cpython_api, CANNOT_FAIL, CONST_STRING, FILEP, fread, feof, Py_ssize_tP, @@ -12,6 +13,12 @@ "PyCompilerFlags", (("cf_flags", rffi.INT),)) PyCompilerFlagsPtr = lltype.Ptr(PyCompilerFlags) +PyCF_MASK = (consts.CO_FUTURE_DIVISION | + consts.CO_FUTURE_ABSOLUTE_IMPORT | + consts.CO_FUTURE_WITH_STATEMENT | + consts.CO_FUTURE_PRINT_FUNCTION | + consts.CO_FUTURE_UNICODE_LITERALS) + @cpython_api([PyObject, PyObject, PyObject], PyObject) def PyEval_CallObjectWithKeywords(space, w_obj, w_arg, w_kwds): return space.call(w_obj, w_arg, w_kwds) @@ -194,3 +201,23 @@ else: flags = 0 return compile_string(space, source, filename, start, flags) + + at cpython_api([PyCompilerFlagsPtr], rffi.INT_real, error=CANNOT_FAIL) +def PyEval_MergeCompilerFlags(space, cf): + """This function changes the flags of the current evaluation + frame, and returns true on success, false on failure.""" + result = cf.c_cf_flags != 0 + current_frame = space.getexecutioncontext().gettopframe_nohidden() + if current_frame: + 
codeflags = current_frame.pycode.co_flags + compilerflags = codeflags & PyCF_MASK + if compilerflags: + result = 1; + cf.c_cf_flags |= compilerflags + # No future keyword at the moment + # if codeflags & CO_GENERATOR_ALLOWED: + # result = 1 + # cf.c_cf_flags |= CO_GENERATOR_ALLOWED + return result + + diff --git a/pypy/module/cpyext/include/code.h b/pypy/module/cpyext/include/code.h --- a/pypy/module/cpyext/include/code.h +++ b/pypy/module/cpyext/include/code.h @@ -13,13 +13,19 @@ /* Masks for co_flags above */ /* These values are also in funcobject.py */ -#define CO_OPTIMIZED 0x0001 -#define CO_NEWLOCALS 0x0002 -#define CO_VARARGS 0x0004 -#define CO_VARKEYWORDS 0x0008 +#define CO_OPTIMIZED 0x0001 +#define CO_NEWLOCALS 0x0002 +#define CO_VARARGS 0x0004 +#define CO_VARKEYWORDS 0x0008 #define CO_NESTED 0x0010 #define CO_GENERATOR 0x0020 +#define CO_FUTURE_DIVISION 0x02000 +#define CO_FUTURE_ABSOLUTE_IMPORT 0x04000 +#define CO_FUTURE_WITH_STATEMENT 0x08000 +#define CO_FUTURE_PRINT_FUNCTION 0x10000 +#define CO_FUTURE_UNICODE_LITERALS 0x20000 + #ifdef __cplusplus } #endif diff --git a/pypy/module/cpyext/include/pythonrun.h b/pypy/module/cpyext/include/pythonrun.h --- a/pypy/module/cpyext/include/pythonrun.h +++ b/pypy/module/cpyext/include/pythonrun.h @@ -19,7 +19,13 @@ int cf_flags; /* bitmask of CO_xxx flags relevant to future */ } PyCompilerFlags; +#define PyCF_MASK (CO_FUTURE_DIVISION | CO_FUTURE_ABSOLUTE_IMPORT | \ + CO_FUTURE_WITH_STATEMENT | CO_FUTURE_PRINT_FUNCTION | \ + CO_FUTURE_UNICODE_LITERALS) +#define PyCF_MASK_OBSOLETE (CO_NESTED) #define PyCF_SOURCE_IS_UTF8 0x0100 +#define PyCF_DONT_IMPLY_DEDENT 0x0200 +#define PyCF_ONLY_AST 0x0400 #define Py_CompileString(str, filename, start) Py_CompileStringFlags(str, filename, start, NULL) diff --git a/pypy/module/cpyext/stubs.py b/pypy/module/cpyext/stubs.py --- a/pypy/module/cpyext/stubs.py +++ b/pypy/module/cpyext/stubs.py @@ -2527,12 +2527,6 @@ throw() methods of generator objects.""" raise NotImplementedError - at cpython_api([PyCompilerFlags], rffi.INT_real, error=CANNOT_FAIL) -def PyEval_MergeCompilerFlags(space, cf): - """This function changes the flags of the current evaluation frame, and returns - true on success, false on failure.""" - raise NotImplementedError - @cpython_api([PyObject], rffi.INT_real, error=CANNOT_FAIL) def PyWeakref_Check(space, ob): """Return true if ob is either a reference or proxy object. 
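The test diff that follows exercises PyEval_MergeCompilerFlags() from a C extension module. As a purely illustrative aside, not part of this changeset, the same bookkeeping can be sketched in plain Python: read the calling frame's co_flags, keep only the future-feature bits, and OR them into the caller-supplied flags. The CO_FUTURE_* constants are copied from the code.h hunk above; merge_compiler_flags() itself is a hypothetical stand-in for the C function.

    from __future__ import division    # sets CO_FUTURE_DIVISION on this module's code
    import sys

    # Future-feature bits, as defined in include/code.h in the hunk above.
    CO_FUTURE_DIVISION         = 0x02000
    CO_FUTURE_ABSOLUTE_IMPORT  = 0x04000
    CO_FUTURE_WITH_STATEMENT   = 0x08000
    CO_FUTURE_PRINT_FUNCTION   = 0x10000
    CO_FUTURE_UNICODE_LITERALS = 0x20000
    PyCF_MASK = (CO_FUTURE_DIVISION | CO_FUTURE_ABSOLUTE_IMPORT |
                 CO_FUTURE_WITH_STATEMENT | CO_FUTURE_PRINT_FUNCTION |
                 CO_FUTURE_UNICODE_LITERALS)

    def merge_compiler_flags(cf_flags):
        # Hypothetical Python-level counterpart of PyEval_MergeCompilerFlags():
        # pick up the future flags of the calling frame and OR them into
        # cf_flags.  Returns (result, merged_flags), standing in for the C
        # function's int result and the updated PyCompilerFlags.cf_flags.
        caller = sys._getframe(1)
        compilerflags = caller.f_code.co_flags & PyCF_MASK
        result = 1 if (cf_flags or compilerflags) else 0
        return result, cf_flags | compilerflags

    if __name__ == '__main__':
        # The __future__ import above marks this module's code object with
        # CO_FUTURE_DIVISION, so starting from flags == 0 the merge yields
        # (1, 0x2000) -- the same pair the new test below checks for.
        print merge_compiler_flags(0)    # -> (1, 8192)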
diff --git a/pypy/module/cpyext/test/test_eval.py b/pypy/module/cpyext/test/test_eval.py --- a/pypy/module/cpyext/test/test_eval.py +++ b/pypy/module/cpyext/test/test_eval.py @@ -283,3 +283,21 @@ print dir(mod) print mod.__dict__ assert mod.f(42) == 47 + + def test_merge_compiler_flags(self): + module = self.import_extension('foo', [ + ("get_flags", "METH_NOARGS", + """ + PyCompilerFlags flags; + flags.cf_flags = 0; + int result = PyEval_MergeCompilerFlags(&flags); + return Py_BuildValue("ii", result, flags.cf_flags); + """), + ]) + assert module.get_flags() == (0, 0) + + ns = {'module':module} + exec """from __future__ import division \nif 1: + def nested_flags(): + return module.get_flags()""" in ns + assert ns['nested_flags']() == (1, 0x2000) # CO_FUTURE_DIVISION From noreply at buildbot.pypy.org Thu Feb 23 21:19:57 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Thu, 23 Feb 2012 21:19:57 +0100 (CET) Subject: [pypy-commit] pypy default: Translation fixes Message-ID: <20120223201957.8B5AD8203C@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: Changeset: r52823:f50a42098ae3 Date: 2012-02-23 16:02 +0100 http://bitbucket.org/pypy/pypy/changeset/f50a42098ae3/ Log: Translation fixes diff --git a/pypy/module/cpyext/eval.py b/pypy/module/cpyext/eval.py --- a/pypy/module/cpyext/eval.py +++ b/pypy/module/cpyext/eval.py @@ -138,8 +138,9 @@ Returns the result of executing the code as a Python object, or NULL if an exception was raised.""" + source = rffi.charp2str(source) if flagsptr: - flags = flagsptr.c_cf_flags + flags = rffi.cast(lltype.Signed, flagsptr.c_cf_flags) else: flags = 0 w_code = compile_string(space, source, "", start, flags) @@ -197,7 +198,7 @@ source = rffi.charp2str(source) filename = rffi.charp2str(filename) if flagsptr: - flags = flagsptr.c_cf_flags + flags = rffi.cast(lltype.Signed, flagsptr.c_cf_flags) else: flags = 0 return compile_string(space, source, filename, start, flags) @@ -206,18 +207,20 @@ def PyEval_MergeCompilerFlags(space, cf): """This function changes the flags of the current evaluation frame, and returns true on success, false on failure.""" - result = cf.c_cf_flags != 0 + flags = rffi.cast(lltype.Signed, cf.c_cf_flags) + result = flags != 0 current_frame = space.getexecutioncontext().gettopframe_nohidden() if current_frame: codeflags = current_frame.pycode.co_flags compilerflags = codeflags & PyCF_MASK if compilerflags: - result = 1; - cf.c_cf_flags |= compilerflags + result = 1 + flags |= compilerflags # No future keyword at the moment # if codeflags & CO_GENERATOR_ALLOWED: # result = 1 - # cf.c_cf_flags |= CO_GENERATOR_ALLOWED + # flags |= CO_GENERATOR_ALLOWED + cf.c_cf_flags = rffi.cast(rffi.INT, flags) return result From noreply at buildbot.pypy.org Fri Feb 24 02:36:09 2012 From: noreply at buildbot.pypy.org (fijal) Date: Fri, 24 Feb 2012 02:36:09 +0100 (CET) Subject: [pypy-commit] pypy speedup-list-comprehension: bump the pycode number and the sad case of renaming opcodes Message-ID: <20120224013609.B3A5D8203C@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: speedup-list-comprehension Changeset: r52824:287a83309e74 Date: 2012-02-23 18:35 -0700 http://bitbucket.org/pypy/pypy/changeset/287a83309e74/ Log: bump the pycode number and the sad case of renaming opcodes diff --git a/lib-python/modified-2.7/opcode.py b/lib-python/modified-2.7/opcode.py --- a/lib-python/modified-2.7/opcode.py +++ b/lib-python/modified-2.7/opcode.py @@ -119,39 +119,40 @@ def_op('POP_BLOCK', 87) def_op('END_FINALLY', 88) def_op('BUILD_CLASS', 
89) +def_op('BUILD_LIST_FROM_ARG', 90) -HAVE_ARGUMENT = 90 # Opcodes from here have an argument: +HAVE_ARGUMENT = 91 # Opcodes from here have an argument: -name_op('STORE_NAME', 90) # Index in name list -name_op('DELETE_NAME', 91) # "" -def_op('UNPACK_SEQUENCE', 92) # Number of tuple items -jrel_op('FOR_ITER', 93) -def_op('LIST_APPEND', 94) -name_op('STORE_ATTR', 95) # Index in name list -name_op('DELETE_ATTR', 96) # "" -name_op('STORE_GLOBAL', 97) # "" -name_op('DELETE_GLOBAL', 98) # "" -def_op('DUP_TOPX', 99) # number of items to duplicate -def_op('LOAD_CONST', 100) # Index in const list -hasconst.append(100) -name_op('LOAD_NAME', 101) # Index in name list -def_op('BUILD_TUPLE', 102) # Number of tuple items -def_op('BUILD_LIST', 103) # Number of list items -def_op('BUILD_SET', 104) # Number of set items -def_op('BUILD_MAP', 105) # Number of dict entries (upto 255) -name_op('LOAD_ATTR', 106) # Index in name list -def_op('COMPARE_OP', 107) # Comparison operator -hascompare.append(107) -name_op('IMPORT_NAME', 108) # Index in name list -name_op('IMPORT_FROM', 109) # Index in name list -jrel_op('JUMP_FORWARD', 110) # Number of bytes to skip -jabs_op('JUMP_IF_FALSE_OR_POP', 111) # Target byte offset from beginning of code -jabs_op('JUMP_IF_TRUE_OR_POP', 112) # "" -jabs_op('JUMP_ABSOLUTE', 113) # "" -jabs_op('POP_JUMP_IF_FALSE', 114) # "" -jabs_op('POP_JUMP_IF_TRUE', 115) # "" +name_op('STORE_NAME', 91) # Index in name list +name_op('DELETE_NAME', 92) # "" +def_op('UNPACK_SEQUENCE', 93) # Number of tuple items +jrel_op('FOR_ITER', 94) +def_op('LIST_APPEND', 95) +name_op('STORE_ATTR', 96) # Index in name list +name_op('DELETE_ATTR', 97) # "" +name_op('STORE_GLOBAL', 98) # "" +name_op('DELETE_GLOBAL', 99) # "" +def_op('DUP_TOPX', 100) # number of items to duplicate +def_op('LOAD_CONST', 101) # Index in const list +hasconst.append(101) +name_op('LOAD_NAME', 102) # Index in name list +def_op('BUILD_TUPLE', 103) # Number of tuple items +def_op('BUILD_LIST', 104) # Number of list items +def_op('BUILD_SET', 105) # Number of set items +def_op('BUILD_MAP', 106) # Number of dict entries (upto 255) +name_op('LOAD_ATTR', 107) # Index in name list +def_op('COMPARE_OP', 108) # Comparison operator +hascompare.append(108) +name_op('IMPORT_NAME', 109) # Index in name list +name_op('IMPORT_FROM', 110) # Index in name list +jrel_op('JUMP_FORWARD', 111) # Number of bytes to skip +jabs_op('JUMP_IF_FALSE_OR_POP', 112) # Target byte offset from beginning of code +jabs_op('JUMP_IF_TRUE_OR_POP', 113) # "" +jabs_op('JUMP_ABSOLUTE', 114) # "" +jabs_op('POP_JUMP_IF_FALSE', 115) # "" +jabs_op('POP_JUMP_IF_TRUE', 116) # "" -name_op('LOAD_GLOBAL', 116) # Index in name list +name_op('LOAD_GLOBAL', 117) # Index in name list jabs_op('CONTINUE_LOOP', 119) # Target address jrel_op('SETUP_LOOP', 120) # Distance to target address @@ -192,6 +193,5 @@ def_op('LOOKUP_METHOD', 201) # Index in name list hasname.append(201) def_op('CALL_METHOD', 202) # #args not including 'self' -def_op('BUILD_LIST_FROM_ARG', 203) del def_op, name_op, jrel_op, jabs_op diff --git a/pypy/interpreter/pycode.py b/pypy/interpreter/pycode.py --- a/pypy/interpreter/pycode.py +++ b/pypy/interpreter/pycode.py @@ -28,7 +28,7 @@ # Magic numbers for the bytecode version in code objects. # See comments in pypy/module/imp/importing. 
cpython_magic, = struct.unpack(" Author: Maciej Fijalkowski Branch: speedup-list-comprehension Changeset: r52826:baf60af5868e Date: 2012-02-23 19:30 -0700 http://bitbucket.org/pypy/pypy/changeset/baf60af5868e/ Log: Backed out changeset b3406c3e63a4 diff --git a/pypy/interpreter/astcompiler/codegen.py b/pypy/interpreter/astcompiler/codegen.py --- a/pypy/interpreter/astcompiler/codegen.py +++ b/pypy/interpreter/astcompiler/codegen.py @@ -965,7 +965,7 @@ self.emit_op_arg(ops.CALL_METHOD, (kwarg_count << 8) | arg_count) return True - def _listcomp_generator(self, gens, gen_index, elt, emit_build=False): + def _listcomp_generator(self, gens, gen_index, elt, outermost=False): start = self.new_block() skip = self.new_block() if_cleanup = self.new_block() @@ -973,7 +973,7 @@ gen = gens[gen_index] assert isinstance(gen, ast.comprehension) gen.iter.walkabout(self) - if emit_build: + if outermost: self.emit_op(ops.BUILD_LIST_FROM_ARG) self.emit_op(ops.GET_ITER) self.use_next_block(start) @@ -1000,12 +1000,7 @@ def visit_ListComp(self, lc): self.update_position(lc.lineno) - if not lc.generators[0].ifs and len(lc.generators) == 1: - emit_build = True - else: - emit_build = False - self.emit_op_arg(ops.BUILD_LIST, 0) - self._listcomp_generator(lc.generators, 0, lc.elt, emit_build) + self._listcomp_generator(lc.generators, 0, lc.elt, outermost=True) def _comp_generator(self, node, generators, gen_index): start = self.new_block() diff --git a/pypy/interpreter/pycode.py b/pypy/interpreter/pycode.py --- a/pypy/interpreter/pycode.py +++ b/pypy/interpreter/pycode.py @@ -4,7 +4,7 @@ The bytecode interpreter itself is implemented by the PyFrame class. """ -import imp, struct, types, new, sys, dis +import dis, imp, struct, types, new, sys from pypy.interpreter import eval from pypy.interpreter.argument import Signature @@ -14,6 +14,7 @@ CO_OPTIMIZED, CO_NEWLOCALS, CO_VARARGS, CO_VARKEYWORDS, CO_NESTED, CO_GENERATOR, CO_CONTAINSGLOBALS) from pypy.rlib.rarithmetic import intmask +from pypy.rlib.debug import make_sure_not_resized from pypy.rlib import jit from pypy.rlib.objectmodel import compute_hash from pypy.tool.stdlib_opcode import opcodedesc, HAVE_ARGUMENT @@ -264,7 +265,6 @@ def dump(self): """A dis.dis() dump of the code object.""" - return co = self._to_code() dis.dis(co) From noreply at buildbot.pypy.org Fri Feb 24 03:35:02 2012 From: noreply at buildbot.pypy.org (fijal) Date: Fri, 24 Feb 2012 03:35:02 +0100 (CET) Subject: [pypy-commit] pypy speedup-list-comprehension: disable the code dump. A bit of progress when and how we emit the correct Message-ID: <20120224023502.6270A8203C@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: speedup-list-comprehension Changeset: r52825:b3406c3e63a4 Date: 2012-02-23 19:20 -0700 http://bitbucket.org/pypy/pypy/changeset/b3406c3e63a4/ Log: disable the code dump. A bit of progress when and how we emit the correct opcode. 
diff --git a/pypy/interpreter/astcompiler/codegen.py b/pypy/interpreter/astcompiler/codegen.py --- a/pypy/interpreter/astcompiler/codegen.py +++ b/pypy/interpreter/astcompiler/codegen.py @@ -965,7 +965,7 @@ self.emit_op_arg(ops.CALL_METHOD, (kwarg_count << 8) | arg_count) return True - def _listcomp_generator(self, gens, gen_index, elt, outermost=False): + def _listcomp_generator(self, gens, gen_index, elt, emit_build=False): start = self.new_block() skip = self.new_block() if_cleanup = self.new_block() @@ -973,7 +973,7 @@ gen = gens[gen_index] assert isinstance(gen, ast.comprehension) gen.iter.walkabout(self) - if outermost: + if emit_build: self.emit_op(ops.BUILD_LIST_FROM_ARG) self.emit_op(ops.GET_ITER) self.use_next_block(start) @@ -1000,7 +1000,12 @@ def visit_ListComp(self, lc): self.update_position(lc.lineno) - self._listcomp_generator(lc.generators, 0, lc.elt, outermost=True) + if not lc.generators[0].ifs and len(lc.generators) == 1: + emit_build = True + else: + emit_build = False + self.emit_op_arg(ops.BUILD_LIST, 0) + self._listcomp_generator(lc.generators, 0, lc.elt, emit_build) def _comp_generator(self, node, generators, gen_index): start = self.new_block() diff --git a/pypy/interpreter/pycode.py b/pypy/interpreter/pycode.py --- a/pypy/interpreter/pycode.py +++ b/pypy/interpreter/pycode.py @@ -4,7 +4,7 @@ The bytecode interpreter itself is implemented by the PyFrame class. """ -import dis, imp, struct, types, new, sys +import imp, struct, types, new, sys, dis from pypy.interpreter import eval from pypy.interpreter.argument import Signature @@ -14,7 +14,6 @@ CO_OPTIMIZED, CO_NEWLOCALS, CO_VARARGS, CO_VARKEYWORDS, CO_NESTED, CO_GENERATOR, CO_CONTAINSGLOBALS) from pypy.rlib.rarithmetic import intmask -from pypy.rlib.debug import make_sure_not_resized from pypy.rlib import jit from pypy.rlib.objectmodel import compute_hash from pypy.tool.stdlib_opcode import opcodedesc, HAVE_ARGUMENT @@ -265,6 +264,7 @@ def dump(self): """A dis.dis() dump of the code object.""" + return co = self._to_code() dis.dis(co) From noreply at buildbot.pypy.org Fri Feb 24 03:35:04 2012 From: noreply at buildbot.pypy.org (fijal) Date: Fri, 24 Feb 2012 03:35:04 +0100 (CET) Subject: [pypy-commit] pypy speedup-list-comprehension: backout. not worth having a bytecode without arg Message-ID: <20120224023504.C39AE8203C@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: speedup-list-comprehension Changeset: r52827:febd5ba50dab Date: 2012-02-23 19:31 -0700 http://bitbucket.org/pypy/pypy/changeset/febd5ba50dab/ Log: backout. 
not worth having a bytecode without arg diff --git a/lib-python/modified-2.7/opcode.py b/lib-python/modified-2.7/opcode.py --- a/lib-python/modified-2.7/opcode.py +++ b/lib-python/modified-2.7/opcode.py @@ -119,40 +119,39 @@ def_op('POP_BLOCK', 87) def_op('END_FINALLY', 88) def_op('BUILD_CLASS', 89) -def_op('BUILD_LIST_FROM_ARG', 90) -HAVE_ARGUMENT = 91 # Opcodes from here have an argument: +HAVE_ARGUMENT = 90 # Opcodes from here have an argument: -name_op('STORE_NAME', 91) # Index in name list -name_op('DELETE_NAME', 92) # "" -def_op('UNPACK_SEQUENCE', 93) # Number of tuple items -jrel_op('FOR_ITER', 94) -def_op('LIST_APPEND', 95) -name_op('STORE_ATTR', 96) # Index in name list -name_op('DELETE_ATTR', 97) # "" -name_op('STORE_GLOBAL', 98) # "" -name_op('DELETE_GLOBAL', 99) # "" -def_op('DUP_TOPX', 100) # number of items to duplicate -def_op('LOAD_CONST', 101) # Index in const list -hasconst.append(101) -name_op('LOAD_NAME', 102) # Index in name list -def_op('BUILD_TUPLE', 103) # Number of tuple items -def_op('BUILD_LIST', 104) # Number of list items -def_op('BUILD_SET', 105) # Number of set items -def_op('BUILD_MAP', 106) # Number of dict entries (upto 255) -name_op('LOAD_ATTR', 107) # Index in name list -def_op('COMPARE_OP', 108) # Comparison operator -hascompare.append(108) -name_op('IMPORT_NAME', 109) # Index in name list -name_op('IMPORT_FROM', 110) # Index in name list -jrel_op('JUMP_FORWARD', 111) # Number of bytes to skip -jabs_op('JUMP_IF_FALSE_OR_POP', 112) # Target byte offset from beginning of code -jabs_op('JUMP_IF_TRUE_OR_POP', 113) # "" -jabs_op('JUMP_ABSOLUTE', 114) # "" -jabs_op('POP_JUMP_IF_FALSE', 115) # "" -jabs_op('POP_JUMP_IF_TRUE', 116) # "" +name_op('STORE_NAME', 90) # Index in name list +name_op('DELETE_NAME', 91) # "" +def_op('UNPACK_SEQUENCE', 92) # Number of tuple items +jrel_op('FOR_ITER', 93) +def_op('LIST_APPEND', 94) +name_op('STORE_ATTR', 95) # Index in name list +name_op('DELETE_ATTR', 96) # "" +name_op('STORE_GLOBAL', 97) # "" +name_op('DELETE_GLOBAL', 98) # "" +def_op('DUP_TOPX', 99) # number of items to duplicate +def_op('LOAD_CONST', 100) # Index in const list +hasconst.append(100) +name_op('LOAD_NAME', 101) # Index in name list +def_op('BUILD_TUPLE', 102) # Number of tuple items +def_op('BUILD_LIST', 103) # Number of list items +def_op('BUILD_SET', 104) # Number of set items +def_op('BUILD_MAP', 105) # Number of dict entries (upto 255) +name_op('LOAD_ATTR', 106) # Index in name list +def_op('COMPARE_OP', 107) # Comparison operator +hascompare.append(107) +name_op('IMPORT_NAME', 108) # Index in name list +name_op('IMPORT_FROM', 109) # Index in name list +jrel_op('JUMP_FORWARD', 110) # Number of bytes to skip +jabs_op('JUMP_IF_FALSE_OR_POP', 111) # Target byte offset from beginning of code +jabs_op('JUMP_IF_TRUE_OR_POP', 112) # "" +jabs_op('JUMP_ABSOLUTE', 113) # "" +jabs_op('POP_JUMP_IF_FALSE', 114) # "" +jabs_op('POP_JUMP_IF_TRUE', 115) # "" -name_op('LOAD_GLOBAL', 117) # Index in name list +name_op('LOAD_GLOBAL', 116) # Index in name list jabs_op('CONTINUE_LOOP', 119) # Target address jrel_op('SETUP_LOOP', 120) # Distance to target address @@ -193,5 +192,6 @@ def_op('LOOKUP_METHOD', 201) # Index in name list hasname.append(201) def_op('CALL_METHOD', 202) # #args not including 'self' +def_op('BUILD_LIST_FROM_ARG', 203) del def_op, name_op, jrel_op, jabs_op diff --git a/pypy/interpreter/pycode.py b/pypy/interpreter/pycode.py --- a/pypy/interpreter/pycode.py +++ b/pypy/interpreter/pycode.py @@ -28,7 +28,7 @@ # Magic numbers for the bytecode version in 
code objects. # See comments in pypy/module/imp/importing. cpython_magic, = struct.unpack(" Author: Maciej Fijalkowski Branch: speedup-list-comprehension Changeset: r52828:45a68e50a262 Date: 2012-02-23 19:34 -0700 http://bitbucket.org/pypy/pypy/changeset/45a68e50a262/ Log: redo the reasonable part diff --git a/pypy/interpreter/astcompiler/codegen.py b/pypy/interpreter/astcompiler/codegen.py --- a/pypy/interpreter/astcompiler/codegen.py +++ b/pypy/interpreter/astcompiler/codegen.py @@ -965,7 +965,7 @@ self.emit_op_arg(ops.CALL_METHOD, (kwarg_count << 8) | arg_count) return True - def _listcomp_generator(self, gens, gen_index, elt, outermost=False): + def _listcomp_generator(self, gens, gen_index, elt, single=False): start = self.new_block() skip = self.new_block() if_cleanup = self.new_block() @@ -973,8 +973,8 @@ gen = gens[gen_index] assert isinstance(gen, ast.comprehension) gen.iter.walkabout(self) - if outermost: - self.emit_op(ops.BUILD_LIST_FROM_ARG) + if single: + self.emit_op_arg(ops.BUILD_LIST_FROM_ARG, 0) self.emit_op(ops.GET_ITER) self.use_next_block(start) self.emit_jump(ops.FOR_ITER, anchor) @@ -1000,7 +1000,12 @@ def visit_ListComp(self, lc): self.update_position(lc.lineno) - self._listcomp_generator(lc.generators, 0, lc.elt, outermost=True) + if len(lc.generators) != 1 or lc.generators[0].ifs: + single = False + self.emit_op_arg(ops.BUILD_LIST, 0) + else: + single = True + self._listcomp_generator(lc.generators, 0, lc.elt, single=single) def _comp_generator(self, node, generators, gen_index): start = self.new_block() From noreply at buildbot.pypy.org Fri Feb 24 04:05:34 2012 From: noreply at buildbot.pypy.org (fijal) Date: Fri, 24 Feb 2012 04:05:34 +0100 (CET) Subject: [pypy-commit] pypy speedup-list-comprehension: remove confusing oopspecs Message-ID: <20120224030534.2DE868203C@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: speedup-list-comprehension Changeset: r52829:19b8e4c5fe4f Date: 2012-02-23 20:05 -0700 http://bitbucket.org/pypy/pypy/changeset/19b8e4c5fe4f/ Log: remove confusing oopspecs diff --git a/pypy/rpython/lltypesystem/rlist.py b/pypy/rpython/lltypesystem/rlist.py --- a/pypy/rpython/lltypesystem/rlist.py +++ b/pypy/rpython/lltypesystem/rlist.py @@ -262,7 +262,6 @@ l.items = malloc(LIST.items.TO, length) return l ll_newlist = typeMethod(ll_newlist) -ll_newlist.oopspec = 'newlist(length)' def ll_newlist_hint(LIST, lengthhint): ll_assert(lengthhint >= 0, "negative list length") @@ -271,7 +270,6 @@ l.items = malloc(LIST.items.TO, lengthhint) return l ll_newlist_hint = typeMethod(ll_newlist_hint) -ll_newlist_hint.oopspec = 'newlist(lengthhint)' # should empty lists start with no allocated memory, or with a preallocated # minimal number of entries? 
XXX compare memory usage versus speed, and @@ -294,11 +292,9 @@ l.items = _ll_new_empty_item_array(LIST) return l ll_newemptylist = typeMethod(ll_newemptylist) -ll_newemptylist.oopspec = 'newlist(0)' def ll_length(l): return l.length -ll_length.oopspec = 'list.len(l)' def ll_items(l): return l.items @@ -306,12 +302,10 @@ def ll_getitem_fast(l, index): ll_assert(index < l.length, "getitem out of bounds") return l.ll_items()[index] -ll_getitem_fast.oopspec = 'list.getitem(l, index)' def ll_setitem_fast(l, index, item): ll_assert(index < l.length, "setitem out of bounds") l.ll_items()[index] = item -ll_setitem_fast.oopspec = 'list.setitem(l, index, item)' # fixed size versions @@ -320,15 +314,12 @@ l = malloc(LIST, length) return l ll_fixed_newlist = typeMethod(ll_fixed_newlist) -ll_fixed_newlist.oopspec = 'newlist(length)' def ll_fixed_newemptylist(LIST): return ll_fixed_newlist(LIST, 0) -ll_fixed_newemptylist = typeMethod(ll_fixed_newemptylist) def ll_fixed_length(l): return len(l) -ll_fixed_length.oopspec = 'list.len(l)' def ll_fixed_items(l): return l @@ -336,12 +327,10 @@ def ll_fixed_getitem_fast(l, index): ll_assert(index < len(l), "fixed getitem out of bounds") return l[index] -ll_fixed_getitem_fast.oopspec = 'list.getitem(l, index)' def ll_fixed_setitem_fast(l, index, item): ll_assert(index < len(l), "fixed setitem out of bounds") l[index] = item -ll_fixed_setitem_fast.oopspec = 'list.setitem(l, index, item)' def newlist(llops, r_list, items_v, v_sizehint=None): LIST = r_list.LIST From noreply at buildbot.pypy.org Fri Feb 24 04:21:57 2012 From: noreply at buildbot.pypy.org (fijal) Date: Fri, 24 Feb 2012 04:21:57 +0100 (CET) Subject: [pypy-commit] pypy speedup-list-comprehension: merge default Message-ID: <20120224032157.E861C8203C@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: speedup-list-comprehension Changeset: r52830:5b39d23e1c0e Date: 2012-02-23 20:10 -0700 http://bitbucket.org/pypy/pypy/changeset/5b39d23e1c0e/ Log: merge default diff --git a/pypy/module/_io/__init__.py b/pypy/module/_io/__init__.py --- a/pypy/module/_io/__init__.py +++ b/pypy/module/_io/__init__.py @@ -28,6 +28,7 @@ } def init(self, space): + MixedModule.init(self, space) w_UnsupportedOperation = space.call_function( space.w_type, space.wrap('UnsupportedOperation'), @@ -35,3 +36,9 @@ space.newdict()) space.setattr(self, space.wrap('UnsupportedOperation'), w_UnsupportedOperation) + + def shutdown(self, space): + # at shutdown, flush all open streams. Ignore I/O errors. 
+ from pypy.module._io.interp_iobase import get_autoflushher + get_autoflushher(space).flush_all(space) + diff --git a/pypy/module/_io/interp_iobase.py b/pypy/module/_io/interp_iobase.py --- a/pypy/module/_io/interp_iobase.py +++ b/pypy/module/_io/interp_iobase.py @@ -5,6 +5,8 @@ from pypy.interpreter.gateway import interp2app from pypy.interpreter.error import OperationError, operationerrfmt from pypy.rlib.rstring import StringBuilder +from pypy.rlib import rweakref + DEFAULT_BUFFER_SIZE = 8192 @@ -43,6 +45,8 @@ self.space = space self.w_dict = space.newdict() self.__IOBase_closed = False + self.streamholder = None # needed by AutoFlusher + get_autoflushher(space).add(self) def getdict(self, space): return self.w_dict @@ -98,6 +102,7 @@ space.call_method(self, "flush") finally: self.__IOBase_closed = True + get_autoflushher(space).remove(self) def flush_w(self, space): if self._CLOSED(): @@ -303,3 +308,52 @@ read = interp2app(W_RawIOBase.read_w), readall = interp2app(W_RawIOBase.readall_w), ) + + +# ------------------------------------------------------------ +# functions to make sure that all streams are flushed on exit +# ------------------------------------------------------------ + +class StreamHolder(object): + + def __init__(self, w_iobase): + self.w_iobase_ref = rweakref.ref(w_iobase) + w_iobase.autoflusher = self + + def autoflush(self, space): + w_iobase = self.w_iobase_ref() + if w_iobase is not None: + space.call_method(w_iobase, 'flush') # XXX: ignore IOErrors? + + +class AutoFlusher(object): + + def __init__(self, space): + self.streams = {} + + def add(self, w_iobase): + assert w_iobase.streamholder is None + holder = StreamHolder(w_iobase) + w_iobase.streamholder = holder + self.streams[holder] = None + + def remove(self, w_iobase): + holder = w_iobase.streamholder + if holder is not None: + del self.streams[holder] + + def flush_all(self, space): + while self.streams: + for streamholder in self.streams.keys(): + try: + del self.streams[streamholder] + except KeyError: + pass # key was removed in the meantime + else: + streamholder.autoflush(space) + + +def get_autoflushher(space): + return space.fromcache(AutoFlusher) + + diff --git a/pypy/module/_io/test/test_fileio.py b/pypy/module/_io/test/test_fileio.py --- a/pypy/module/_io/test/test_fileio.py +++ b/pypy/module/_io/test/test_fileio.py @@ -160,3 +160,20 @@ f.close() assert repr(f) == "<_io.FileIO [closed]>" +def test_flush_at_exit(): + from pypy import conftest + from pypy.tool.option import make_config, make_objspace + from pypy.tool.udir import udir + + tmpfile = udir.join('test_flush_at_exit') + config = make_config(conftest.option) + space = make_objspace(config) + space.appexec([space.wrap(str(tmpfile))], """(tmpfile): + import io + f = io.open(tmpfile, 'w') + f.write('42') + # no flush() and no close() + import sys; sys._keepalivesomewhereobscure = f + """) + space.finish() + assert tmpfile.read() == '42' diff --git a/pypy/rlib/jit.py b/pypy/rlib/jit.py --- a/pypy/rlib/jit.py +++ b/pypy/rlib/jit.py @@ -469,15 +469,16 @@ # FLOATs. 
if len(self._heuristic_order) < len(livevars): from pypy.rlib.rarithmetic import (r_singlefloat, r_longlong, - r_ulonglong) + r_ulonglong, r_uint) added = False for var, value in livevars.items(): if var not in self._heuristic_order: - if isinstance(value, (r_longlong, r_ulonglong)): - pass - #assert 0, ("should not pass a r_longlong argument for " - # "now, because on 32-bit machines it would " - # "need to be ordered as a FLOAT") + if (r_ulonglong is not r_uint and + isinstance(value, (r_longlong, r_ulonglong))): + assert 0, ("should not pass a r_longlong argument for " + "now, because on 32-bit machines it needs " + "to be ordered as a FLOAT but on 64-bit " + "machines as an INT") elif isinstance(value, (int, long, r_singlefloat)): kind = '1:INT' elif isinstance(value, float): diff --git a/pypy/rlib/test/test_jit.py b/pypy/rlib/test/test_jit.py --- a/pypy/rlib/test/test_jit.py +++ b/pypy/rlib/test/test_jit.py @@ -2,6 +2,7 @@ from pypy.conftest import option from pypy.rlib.jit import hint, we_are_jitted, JitDriver, elidable_promote from pypy.rlib.jit import JitHintError, oopspec, isconstant +from pypy.rlib.rarithmetic import r_uint from pypy.translator.translator import TranslationContext, graphof from pypy.rpython.test.tool import BaseRtypingTest, LLRtypeMixin, OORtypeMixin from pypy.rpython.lltypesystem import lltype @@ -178,6 +179,11 @@ myjitdriver.jit_merge_point, i1=42, r1=A(), r2=None, f1=3.5) assert "got ['2:REF', '1:INT', '2:REF', '3:FLOAT']" in repr(e.value) + def test_argument_order_accept_r_uint(self): + # this used to fail on 64-bit, because r_uint == r_ulonglong + myjitdriver = JitDriver(greens=['i1'], reds=[]) + myjitdriver.jit_merge_point(i1=r_uint(42)) + class TestJITLLtype(BaseTestJIT, LLRtypeMixin): pass From noreply at buildbot.pypy.org Fri Feb 24 04:21:59 2012 From: noreply at buildbot.pypy.org (fijal) Date: Fri, 24 Feb 2012 04:21:59 +0100 (CET) Subject: [pypy-commit] pypy speedup-list-comprehension: removing this was accidental Message-ID: <20120224032159.E9A418203C@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: speedup-list-comprehension Changeset: r52831:67935212c8c2 Date: 2012-02-23 20:21 -0700 http://bitbucket.org/pypy/pypy/changeset/67935212c8c2/ Log: removing this was accidental diff --git a/pypy/rpython/lltypesystem/rlist.py b/pypy/rpython/lltypesystem/rlist.py --- a/pypy/rpython/lltypesystem/rlist.py +++ b/pypy/rpython/lltypesystem/rlist.py @@ -309,12 +309,13 @@ # fixed size versions + at typeMethod def ll_fixed_newlist(LIST, length): ll_assert(length >= 0, "negative fixed list length") l = malloc(LIST, length) return l -ll_fixed_newlist = typeMethod(ll_fixed_newlist) + at typeMethod def ll_fixed_newemptylist(LIST): return ll_fixed_newlist(LIST, 0) From notifications-noreply at bitbucket.org Fri Feb 24 09:28:44 2012 From: notifications-noreply at bitbucket.org (Bitbucket) Date: Fri, 24 Feb 2012 08:28:44 -0000 Subject: [pypy-commit] Notification: pypy Message-ID: <20120224082844.8351.44808@bitbucket01.managed.contegix.com> You have received a notification from 300mg_integration. Hi, I forked pypy. My fork is at https://bitbucket.org/300mg_integration/pypy. 
-- Disable notifications at https://bitbucket.org/account/notifications/ From noreply at buildbot.pypy.org Fri Feb 24 09:59:40 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Fri, 24 Feb 2012 09:59:40 +0100 (CET) Subject: [pypy-commit] pypy py3k: hg merge default Message-ID: <20120224085940.3593382366@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52832:89576adc26f9 Date: 2012-02-23 16:39 +0100 http://bitbucket.org/pypy/pypy/changeset/89576adc26f9/ Log: hg merge default diff --git a/ctypes_configure/cbuild.py b/ctypes_configure/cbuild.py --- a/ctypes_configure/cbuild.py +++ b/ctypes_configure/cbuild.py @@ -206,8 +206,9 @@ cfiles += eci.separate_module_files include_dirs = list(eci.include_dirs) library_dirs = list(eci.library_dirs) - if sys.platform == 'darwin': # support Fink & Darwinports - for s in ('/sw/', '/opt/local/'): + if (sys.platform == 'darwin' or # support Fink & Darwinports + sys.platform.startswith('freebsd')): + for s in ('/sw/', '/opt/local/', '/usr/local/'): if s + 'include' not in include_dirs and \ os.path.exists(s + 'include'): include_dirs.append(s + 'include') @@ -380,9 +381,9 @@ self.link_extra += ['-pthread'] if sys.platform == 'win32': self.link_extra += ['/DEBUG'] # generate .pdb file - if sys.platform == 'darwin': - # support Fink & Darwinports - for s in ('/sw/', '/opt/local/'): + if (sys.platform == 'darwin' or # support Fink & Darwinports + sys.platform.startswith('freebsd')): + for s in ('/sw/', '/opt/local/', '/usr/local/'): if s + 'include' not in self.include_dirs and \ os.path.exists(s + 'include'): self.include_dirs.append(s + 'include') @@ -395,7 +396,6 @@ self.outputfilename = py.path.local(cfilenames[0]).new(ext=ext) else: self.outputfilename = py.path.local(outputfilename) - self.eci = eci def build(self, noerr=False): basename = self.outputfilename.new(ext='') @@ -436,7 +436,7 @@ old = cfile.dirpath().chdir() try: res = compiler.compile([cfile.basename], - include_dirs=self.eci.include_dirs, + include_dirs=self.include_dirs, extra_preargs=self.compile_extra) assert len(res) == 1 cobjfile = py.path.local(res[0]) @@ -445,9 +445,9 @@ finally: old.chdir() compiler.link_executable(objects, str(self.outputfilename), - libraries=self.eci.libraries, + libraries=self.libraries, extra_preargs=self.link_extra, - library_dirs=self.eci.library_dirs) + library_dirs=self.library_dirs) def build_executable(*args, **kwds): noerr = kwds.pop('noerr', False) diff --git a/lib-python/modified-2.7/UserDict.py b/lib-python/modified-2.7/UserDict.py --- a/lib-python/modified-2.7/UserDict.py +++ b/lib-python/modified-2.7/UserDict.py @@ -85,8 +85,12 @@ def __iter__(self): return iter(self.data) -import _abcoll -_abcoll.MutableMapping.register(IterableUserDict) +try: + import _abcoll +except ImportError: + pass # e.g. 
no '_weakref' module on this pypy +else: + _abcoll.MutableMapping.register(IterableUserDict) class DictMixin: diff --git a/lib_pypy/_ctypes/array.py b/lib_pypy/_ctypes/array.py --- a/lib_pypy/_ctypes/array.py +++ b/lib_pypy/_ctypes/array.py @@ -1,9 +1,9 @@ - +import _ffi import _rawffi from _ctypes.basics import _CData, cdata_from_address, _CDataMeta, sizeof from _ctypes.basics import keepalive_key, store_reference, ensure_objects -from _ctypes.basics import CArgObject +from _ctypes.basics import CArgObject, as_ffi_pointer class ArrayMeta(_CDataMeta): def __new__(self, name, cls, typedict): @@ -211,6 +211,9 @@ def _to_ffi_param(self): return self._get_buffer_value() + def _as_ffi_pointer_(self, ffitype): + return as_ffi_pointer(self, ffitype) + ARRAY_CACHE = {} def create_array_type(base, length): @@ -228,5 +231,6 @@ _type_ = base ) cls = ArrayMeta(name, (Array,), tpdict) + cls._ffiargtype = _ffi.types.Pointer(base.get_ffi_argtype()) ARRAY_CACHE[key] = cls return cls diff --git a/lib_pypy/_ctypes/basics.py b/lib_pypy/_ctypes/basics.py --- a/lib_pypy/_ctypes/basics.py +++ b/lib_pypy/_ctypes/basics.py @@ -230,5 +230,16 @@ } +# called from primitive.py, pointer.py, array.py +def as_ffi_pointer(value, ffitype): + my_ffitype = type(value).get_ffi_argtype() + # for now, we always allow types.pointer, else a lot of tests + # break. We need to rethink how pointers are represented, though + if my_ffitype is not ffitype and ffitype is not _ffi.types.void_p: + raise ArgumentError("expected %s instance, got %s" % (type(value), + ffitype)) + return value._get_buffer_value() + + # used by "byref" from _ctypes.pointer import pointer diff --git a/lib_pypy/_ctypes/pointer.py b/lib_pypy/_ctypes/pointer.py --- a/lib_pypy/_ctypes/pointer.py +++ b/lib_pypy/_ctypes/pointer.py @@ -3,7 +3,7 @@ import _ffi from _ctypes.basics import _CData, _CDataMeta, cdata_from_address, ArgumentError from _ctypes.basics import keepalive_key, store_reference, ensure_objects -from _ctypes.basics import sizeof, byref +from _ctypes.basics import sizeof, byref, as_ffi_pointer from _ctypes.array import Array, array_get_slice_params, array_slice_getitem,\ array_slice_setitem @@ -119,14 +119,6 @@ def _as_ffi_pointer_(self, ffitype): return as_ffi_pointer(self, ffitype) -def as_ffi_pointer(value, ffitype): - my_ffitype = type(value).get_ffi_argtype() - # for now, we always allow types.pointer, else a lot of tests - # break. 
We need to rethink how pointers are represented, though - if my_ffitype is not ffitype and ffitype is not _ffi.types.void_p: - raise ArgumentError("expected %s instance, got %s" % (type(value), - ffitype)) - return value._get_buffer_value() def _cast_addr(obj, _, tp): if not (isinstance(tp, _CDataMeta) and tp._is_pointer_like()): diff --git a/lib_pypy/_subprocess.py b/lib_pypy/_subprocess.py --- a/lib_pypy/_subprocess.py +++ b/lib_pypy/_subprocess.py @@ -87,7 +87,7 @@ # Now the _subprocess module implementation -from ctypes import c_int as _c_int, byref as _byref +from ctypes import c_int as _c_int, byref as _byref, WinError as _WinError class _handle: def __init__(self, handle): @@ -116,7 +116,7 @@ res = _CreatePipe(_byref(read), _byref(write), None, size) if not res: - raise WindowsError("Error") + raise _WinError() return _handle(read.value), _handle(write.value) @@ -132,7 +132,7 @@ access, inherit, options) if not res: - raise WindowsError("Error") + raise _WinError() return _handle(target.value) DUPLICATE_SAME_ACCESS = 2 @@ -165,7 +165,7 @@ start_dir, _byref(si), _byref(pi)) if not res: - raise WindowsError("Error") + raise _WinError() return _handle(pi.hProcess), _handle(pi.hThread), pi.dwProcessID, pi.dwThreadID STARTF_USESHOWWINDOW = 0x001 @@ -178,7 +178,7 @@ res = _WaitForSingleObject(int(handle), milliseconds) if res < 0: - raise WindowsError("Error") + raise _WinError() return res INFINITE = 0xffffffff @@ -190,7 +190,7 @@ res = _GetExitCodeProcess(int(handle), _byref(code)) if not res: - raise WindowsError("Error") + raise _WinError() return code.value @@ -198,7 +198,7 @@ res = _TerminateProcess(int(handle), exitcode) if not res: - raise WindowsError("Error") + raise _WinError() def GetStdHandle(stdhandle): res = _GetStdHandle(stdhandle) diff --git a/lib_pypy/ctypes_config_cache/pyexpat.ctc.py b/lib_pypy/ctypes_config_cache/pyexpat.ctc.py deleted file mode 100644 --- a/lib_pypy/ctypes_config_cache/pyexpat.ctc.py +++ /dev/null @@ -1,45 +0,0 @@ -""" -'ctypes_configure' source for pyexpat.py. -Run this to rebuild _pyexpat_cache.py. 
-""" - -import ctypes -from ctypes import c_char_p, c_int, c_void_p, c_char -from ctypes_configure import configure -import dumpcache - - -class CConfigure: - _compilation_info_ = configure.ExternalCompilationInfo( - includes = ['expat.h'], - libraries = ['expat'], - pre_include_lines = [ - '#define XML_COMBINED_VERSION (10000*XML_MAJOR_VERSION+100*XML_MINOR_VERSION+XML_MICRO_VERSION)'], - ) - - XML_Char = configure.SimpleType('XML_Char', c_char) - XML_COMBINED_VERSION = configure.ConstantInteger('XML_COMBINED_VERSION') - for name in ['XML_PARAM_ENTITY_PARSING_NEVER', - 'XML_PARAM_ENTITY_PARSING_UNLESS_STANDALONE', - 'XML_PARAM_ENTITY_PARSING_ALWAYS']: - locals()[name] = configure.ConstantInteger(name) - - XML_Encoding = configure.Struct('XML_Encoding',[ - ('data', c_void_p), - ('convert', c_void_p), - ('release', c_void_p), - ('map', c_int * 256)]) - XML_Content = configure.Struct('XML_Content',[ - ('numchildren', c_int), - ('children', c_void_p), - ('name', c_char_p), - ('type', c_int), - ('quant', c_int), - ]) - # this is insanely stupid - XML_FALSE = configure.ConstantInteger('XML_FALSE') - XML_TRUE = configure.ConstantInteger('XML_TRUE') - -config = configure.configure(CConfigure) - -dumpcache.dumpcache2('pyexpat', config) diff --git a/lib_pypy/ctypes_config_cache/test/test_cache.py b/lib_pypy/ctypes_config_cache/test/test_cache.py --- a/lib_pypy/ctypes_config_cache/test/test_cache.py +++ b/lib_pypy/ctypes_config_cache/test/test_cache.py @@ -39,10 +39,6 @@ d = run('resource.ctc.py', '_resource_cache.py') assert 'RLIM_NLIMITS' in d -def test_pyexpat(): - d = run('pyexpat.ctc.py', '_pyexpat_cache.py') - assert 'XML_COMBINED_VERSION' in d - def test_locale(): d = run('locale.ctc.py', '_locale_cache.py') assert 'LC_ALL' in d diff --git a/lib_pypy/numpy.py b/lib_pypy/numpy.py new file mode 100644 --- /dev/null +++ b/lib_pypy/numpy.py @@ -0,0 +1,5 @@ +raise ImportError( + "The 'numpy' module of PyPy is in-development and not complete. 
" + "To try it out anyway, you can either import from 'numpypy', " + "or just write 'import numpypy' first in your program and then " + "import from 'numpy' as usual.") diff --git a/lib_pypy/numpypy/__init__.py b/lib_pypy/numpypy/__init__.py --- a/lib_pypy/numpypy/__init__.py +++ b/lib_pypy/numpypy/__init__.py @@ -1,2 +1,5 @@ from _numpypy import * from .core import * + +import sys +sys.modules.setdefault('numpy', sys.modules['numpypy']) diff --git a/lib_pypy/numpypy/core/numeric.py b/lib_pypy/numpypy/core/numeric.py --- a/lib_pypy/numpypy/core/numeric.py +++ b/lib_pypy/numpypy/core/numeric.py @@ -1,6 +1,7 @@ -from _numpypy import array, ndarray, int_, float_ #, complex_# , longlong +from _numpypy import array, ndarray, int_, float_, bool_ #, complex_# , longlong from _numpypy import concatenate +import math import sys import _numpypy as multiarray # ARGH from numpypy.core.arrayprint import array2string @@ -309,3 +310,13 @@ set_string_function(array_repr, 1) little_endian = (sys.byteorder == 'little') + +Inf = inf = infty = Infinity = PINF = float('inf') +NINF = float('-inf') +PZERO = 0.0 +NZERO = -0.0 +nan = NaN = NAN = float('nan') +False_ = bool_(False) +True_ = bool_(True) +e = math.e +pi = math.pi \ No newline at end of file diff --git a/lib_pypy/pyexpat.py b/lib_pypy/pyexpat.py deleted file mode 100644 --- a/lib_pypy/pyexpat.py +++ /dev/null @@ -1,448 +0,0 @@ - -import ctypes -import ctypes.util -from ctypes import c_char_p, c_int, c_void_p, POINTER, c_char, c_wchar_p -import sys - -# load the platform-specific cache made by running pyexpat.ctc.py -from ctypes_config_cache._pyexpat_cache import * - -try: from __pypy__ import builtinify -except ImportError: builtinify = lambda f: f - - -lib = ctypes.CDLL(ctypes.util.find_library('expat')) - - -XML_Content.children = POINTER(XML_Content) -XML_Parser = ctypes.c_void_p # an opaque pointer -assert XML_Char is ctypes.c_char # this assumption is everywhere in -# cpython's expat, let's explode - -def declare_external(name, args, res): - func = getattr(lib, name) - func.args = args - func.restype = res - globals()[name] = func - -declare_external('XML_ParserCreate', [c_char_p], XML_Parser) -declare_external('XML_ParserCreateNS', [c_char_p, c_char], XML_Parser) -declare_external('XML_Parse', [XML_Parser, c_char_p, c_int, c_int], c_int) -currents = ['CurrentLineNumber', 'CurrentColumnNumber', - 'CurrentByteIndex'] -for name in currents: - func = getattr(lib, 'XML_Get' + name) - func.args = [XML_Parser] - func.restype = c_int - -declare_external('XML_SetReturnNSTriplet', [XML_Parser, c_int], None) -declare_external('XML_GetSpecifiedAttributeCount', [XML_Parser], c_int) -declare_external('XML_SetParamEntityParsing', [XML_Parser, c_int], None) -declare_external('XML_GetErrorCode', [XML_Parser], c_int) -declare_external('XML_StopParser', [XML_Parser, c_int], None) -declare_external('XML_ErrorString', [c_int], c_char_p) -declare_external('XML_SetBase', [XML_Parser, c_char_p], None) -if XML_COMBINED_VERSION >= 19505: - declare_external('XML_UseForeignDTD', [XML_Parser, c_int], None) - -declare_external('XML_SetUnknownEncodingHandler', [XML_Parser, c_void_p, - c_void_p], None) -declare_external('XML_FreeContentModel', [XML_Parser, POINTER(XML_Content)], - None) -declare_external('XML_ExternalEntityParserCreate', [XML_Parser,c_char_p, - c_char_p], - XML_Parser) - -handler_names = [ - 'StartElement', - 'EndElement', - 'ProcessingInstruction', - 'CharacterData', - 'UnparsedEntityDecl', - 'NotationDecl', - 'StartNamespaceDecl', - 'EndNamespaceDecl', - 
'Comment', - 'StartCdataSection', - 'EndCdataSection', - 'Default', - 'DefaultHandlerExpand', - 'NotStandalone', - 'ExternalEntityRef', - 'StartDoctypeDecl', - 'EndDoctypeDecl', - 'EntityDecl', - 'XmlDecl', - 'ElementDecl', - 'AttlistDecl', - ] -if XML_COMBINED_VERSION >= 19504: - handler_names.append('SkippedEntity') -setters = {} - -for name in handler_names: - if name == 'DefaultHandlerExpand': - newname = 'XML_SetDefaultHandlerExpand' - else: - name += 'Handler' - newname = 'XML_Set' + name - cfunc = getattr(lib, newname) - cfunc.args = [XML_Parser, ctypes.c_void_p] - cfunc.result = ctypes.c_int - setters[name] = cfunc - -class ExpatError(Exception): - def __str__(self): - return self.s - -error = ExpatError - -class XMLParserType(object): - specified_attributes = 0 - ordered_attributes = 0 - returns_unicode = 1 - encoding = 'utf-8' - def __init__(self, encoding, namespace_separator, _hook_external_entity=False): - self.returns_unicode = 1 - if encoding: - self.encoding = encoding - if not _hook_external_entity: - if namespace_separator is None: - self.itself = XML_ParserCreate(encoding) - else: - self.itself = XML_ParserCreateNS(encoding, ord(namespace_separator)) - if not self.itself: - raise RuntimeError("Creating parser failed") - self._set_unknown_encoding_handler() - self.storage = {} - self.buffer = None - self.buffer_size = 8192 - self.character_data_handler = None - self.intern = {} - self.__exc_info = None - - def _flush_character_buffer(self): - if not self.buffer: - return - res = self._call_character_handler(''.join(self.buffer)) - self.buffer = [] - return res - - def _call_character_handler(self, buf): - if self.character_data_handler: - self.character_data_handler(buf) - - def _set_unknown_encoding_handler(self): - def UnknownEncoding(encodingData, name, info_p): - info = info_p.contents - s = ''.join([chr(i) for i in range(256)]) - u = s.decode(self.encoding, 'replace') - for i in range(len(u)): - if u[i] == u'\xfffd': - info.map[i] = -1 - else: - info.map[i] = ord(u[i]) - info.data = None - info.convert = None - info.release = None - return 1 - - CB = ctypes.CFUNCTYPE(c_int, c_void_p, c_char_p, POINTER(XML_Encoding)) - cb = CB(UnknownEncoding) - self._unknown_encoding_handler = (cb, UnknownEncoding) - XML_SetUnknownEncodingHandler(self.itself, cb, None) - - def _set_error(self, code): - e = ExpatError() - e.code = code - lineno = lib.XML_GetCurrentLineNumber(self.itself) - colno = lib.XML_GetCurrentColumnNumber(self.itself) - e.offset = colno - e.lineno = lineno - err = XML_ErrorString(code)[:200] - e.s = "%s: line: %d, column: %d" % (err, lineno, colno) - e.message = e.s - self._error = e - - def Parse(self, data, is_final=0): - res = XML_Parse(self.itself, data, len(data), is_final) - if res == 0: - self._set_error(XML_GetErrorCode(self.itself)) - if self.__exc_info: - exc_info = self.__exc_info - self.__exc_info = None - raise exc_info[0], exc_info[1], exc_info[2] - else: - raise self._error - self._flush_character_buffer() - return res - - def _sethandler(self, name, real_cb): - setter = setters[name] - try: - cb = self.storage[(name, real_cb)] - except KeyError: - cb = getattr(self, 'get_cb_for_%s' % name)(real_cb) - self.storage[(name, real_cb)] = cb - except TypeError: - # weellll... 
- cb = getattr(self, 'get_cb_for_%s' % name)(real_cb) - setter(self.itself, cb) - - def _wrap_cb(self, cb): - def f(*args): - try: - return cb(*args) - except: - self.__exc_info = sys.exc_info() - XML_StopParser(self.itself, XML_FALSE) - return f - - def get_cb_for_StartElementHandler(self, real_cb): - def StartElement(unused, name, attrs): - # unpack name and attrs - conv = self.conv - self._flush_character_buffer() - if self.specified_attributes: - max = XML_GetSpecifiedAttributeCount(self.itself) - else: - max = 0 - while attrs[max]: - max += 2 # copied - if self.ordered_attributes: - res = [attrs[i] for i in range(max)] - else: - res = {} - for i in range(0, max, 2): - res[conv(attrs[i])] = conv(attrs[i + 1]) - real_cb(conv(name), res) - StartElement = self._wrap_cb(StartElement) - CB = ctypes.CFUNCTYPE(None, c_void_p, c_char_p, POINTER(c_char_p)) - return CB(StartElement) - - def get_cb_for_ExternalEntityRefHandler(self, real_cb): - def ExternalEntity(unused, context, base, sysId, pubId): - self._flush_character_buffer() - conv = self.conv - res = real_cb(conv(context), conv(base), conv(sysId), - conv(pubId)) - if res is None: - return 0 - return res - ExternalEntity = self._wrap_cb(ExternalEntity) - CB = ctypes.CFUNCTYPE(c_int, c_void_p, *([c_char_p] * 4)) - return CB(ExternalEntity) - - def get_cb_for_CharacterDataHandler(self, real_cb): - def CharacterData(unused, s, lgt): - if self.buffer is None: - self._call_character_handler(self.conv(s[:lgt])) - else: - if len(self.buffer) + lgt > self.buffer_size: - self._flush_character_buffer() - if self.character_data_handler is None: - return - if lgt >= self.buffer_size: - self._call_character_handler(s[:lgt]) - self.buffer = [] - else: - self.buffer.append(s[:lgt]) - CharacterData = self._wrap_cb(CharacterData) - CB = ctypes.CFUNCTYPE(None, c_void_p, POINTER(c_char), c_int) - return CB(CharacterData) - - def get_cb_for_NotStandaloneHandler(self, real_cb): - def NotStandaloneHandler(unused): - return real_cb() - NotStandaloneHandler = self._wrap_cb(NotStandaloneHandler) - CB = ctypes.CFUNCTYPE(c_int, c_void_p) - return CB(NotStandaloneHandler) - - def get_cb_for_EntityDeclHandler(self, real_cb): - def EntityDecl(unused, ename, is_param, value, value_len, base, - system_id, pub_id, not_name): - self._flush_character_buffer() - if not value: - value = None - else: - value = value[:value_len] - args = [ename, is_param, value, base, system_id, - pub_id, not_name] - args = [self.conv(arg) for arg in args] - real_cb(*args) - EntityDecl = self._wrap_cb(EntityDecl) - CB = ctypes.CFUNCTYPE(None, c_void_p, c_char_p, c_int, c_char_p, - c_int, c_char_p, c_char_p, c_char_p, c_char_p) - return CB(EntityDecl) - - def _conv_content_model(self, model): - children = tuple([self._conv_content_model(model.children[i]) - for i in range(model.numchildren)]) - return (model.type, model.quant, self.conv(model.name), - children) - - def get_cb_for_ElementDeclHandler(self, real_cb): - def ElementDecl(unused, name, model): - self._flush_character_buffer() - modelobj = self._conv_content_model(model[0]) - real_cb(name, modelobj) - XML_FreeContentModel(self.itself, model) - - ElementDecl = self._wrap_cb(ElementDecl) - CB = ctypes.CFUNCTYPE(None, c_void_p, c_char_p, POINTER(XML_Content)) - return CB(ElementDecl) - - def _new_callback_for_string_len(name, sign): - def get_callback_for_(self, real_cb): - def func(unused, s, len): - self._flush_character_buffer() - arg = self.conv(s[:len]) - real_cb(arg) - func.func_name = name - func = self._wrap_cb(func) - CB = 
ctypes.CFUNCTYPE(*sign) - return CB(func) - get_callback_for_.func_name = 'get_cb_for_' + name - return get_callback_for_ - - for name in ['DefaultHandlerExpand', - 'DefaultHandler']: - sign = [None, c_void_p, POINTER(c_char), c_int] - name = 'get_cb_for_' + name - locals()[name] = _new_callback_for_string_len(name, sign) - - def _new_callback_for_starargs(name, sign): - def get_callback_for_(self, real_cb): - def func(unused, *args): - self._flush_character_buffer() - args = [self.conv(arg) for arg in args] - real_cb(*args) - func.func_name = name - func = self._wrap_cb(func) - CB = ctypes.CFUNCTYPE(*sign) - return CB(func) - get_callback_for_.func_name = 'get_cb_for_' + name - return get_callback_for_ - - for name, num_or_sign in [ - ('EndElementHandler', 1), - ('ProcessingInstructionHandler', 2), - ('UnparsedEntityDeclHandler', 5), - ('NotationDeclHandler', 4), - ('StartNamespaceDeclHandler', 2), - ('EndNamespaceDeclHandler', 1), - ('CommentHandler', 1), - ('StartCdataSectionHandler', 0), - ('EndCdataSectionHandler', 0), - ('StartDoctypeDeclHandler', [None, c_void_p] + [c_char_p] * 3 + [c_int]), - ('XmlDeclHandler', [None, c_void_p, c_char_p, c_char_p, c_int]), - ('AttlistDeclHandler', [None, c_void_p] + [c_char_p] * 4 + [c_int]), - ('EndDoctypeDeclHandler', 0), - ('SkippedEntityHandler', [None, c_void_p, c_char_p, c_int]), - ]: - if isinstance(num_or_sign, int): - sign = [None, c_void_p] + [c_char_p] * num_or_sign - else: - sign = num_or_sign - name = 'get_cb_for_' + name - locals()[name] = _new_callback_for_starargs(name, sign) - - def conv_unicode(self, s): - if s is None or isinstance(s, int): - return s - return s.decode(self.encoding, "strict") - - def __setattr__(self, name, value): - # forest of ifs... - if name in ['ordered_attributes', - 'returns_unicode', 'specified_attributes']: - if value: - if name == 'returns_unicode': - self.conv = self.conv_unicode - self.__dict__[name] = 1 - else: - if name == 'returns_unicode': - self.conv = lambda s: s - self.__dict__[name] = 0 - elif name == 'buffer_text': - if value: - self.buffer = [] - else: - self._flush_character_buffer() - self.buffer = None - elif name == 'buffer_size': - if not isinstance(value, int): - raise TypeError("Expected int") - if value <= 0: - raise ValueError("Expected positive int") - self.__dict__[name] = value - elif name == 'namespace_prefixes': - XML_SetReturnNSTriplet(self.itself, int(bool(value))) - elif name in setters: - if name == 'CharacterDataHandler': - # XXX we need to flush buffer here - self._flush_character_buffer() - self.character_data_handler = value - #print name - #print value - #print - self._sethandler(name, value) - else: - self.__dict__[name] = value - - def SetParamEntityParsing(self, arg): - XML_SetParamEntityParsing(self.itself, arg) - - if XML_COMBINED_VERSION >= 19505: - def UseForeignDTD(self, arg=True): - if arg: - flag = XML_TRUE - else: - flag = XML_FALSE - XML_UseForeignDTD(self.itself, flag) - - def __getattr__(self, name): - if name == 'buffer_text': - return self.buffer is not None - elif name in currents: - return getattr(lib, 'XML_Get' + name)(self.itself) - elif name == 'ErrorColumnNumber': - return lib.XML_GetCurrentColumnNumber(self.itself) - elif name == 'ErrorLineNumber': - return lib.XML_GetCurrentLineNumber(self.itself) - return self.__dict__[name] - - def ParseFile(self, file): - return self.Parse(file.read(), False) - - def SetBase(self, base): - XML_SetBase(self.itself, base) - - def ExternalEntityParserCreate(self, context, encoding=None): - 
"""ExternalEntityParserCreate(context[, encoding]) - Create a parser for parsing an external entity based on the - information passed to the ExternalEntityRefHandler.""" - new_parser = XMLParserType(encoding, None, True) - new_parser.itself = XML_ExternalEntityParserCreate(self.itself, - context, encoding) - new_parser._set_unknown_encoding_handler() - return new_parser - - at builtinify -def ErrorString(errno): - return XML_ErrorString(errno)[:200] - - at builtinify -def ParserCreate(encoding=None, namespace_separator=None, intern=None): - if (not isinstance(encoding, str) and - not encoding is None): - raise TypeError("ParserCreate() argument 1 must be string or None, not %s" % encoding.__class__.__name__) - if (not isinstance(namespace_separator, str) and - not namespace_separator is None): - raise TypeError("ParserCreate() argument 2 must be string or None, not %s" % namespace_separator.__class__.__name__) - if namespace_separator is not None: - if len(namespace_separator) > 1: - raise ValueError('namespace_separator must be at most one character, omitted, or None') - if len(namespace_separator) == 0: - namespace_separator = None - return XMLParserType(encoding, namespace_separator) diff --git a/lib_pypy/pypy_test/test_pyexpat.py b/lib_pypy/pypy_test/test_pyexpat.py deleted file mode 100644 --- a/lib_pypy/pypy_test/test_pyexpat.py +++ /dev/null @@ -1,665 +0,0 @@ -# XXX TypeErrors on calling handlers, or on bad return values from a -# handler, are obscure and unhelpful. - -from __future__ import absolute_import -import StringIO, sys -import unittest, py - -from lib_pypy.ctypes_config_cache import rebuild -rebuild.rebuild_one('pyexpat.ctc.py') - -from lib_pypy import pyexpat -#from xml.parsers import expat -expat = pyexpat - -from test.test_support import sortdict, run_unittest - - -class TestSetAttribute: - def setup_method(self, meth): - self.parser = expat.ParserCreate(namespace_separator='!') - self.set_get_pairs = [ - [0, 0], - [1, 1], - [2, 1], - [0, 0], - ] - - def test_returns_unicode(self): - for x, y in self.set_get_pairs: - self.parser.returns_unicode = x - assert self.parser.returns_unicode == y - - def test_ordered_attributes(self): - for x, y in self.set_get_pairs: - self.parser.ordered_attributes = x - assert self.parser.ordered_attributes == y - - def test_specified_attributes(self): - for x, y in self.set_get_pairs: - self.parser.specified_attributes = x - assert self.parser.specified_attributes == y - - -data = '''\ - - - - - - - - - -%unparsed_entity; -]> - - - - Contents of subelements - - -&external_entity; -&skipped_entity; - -''' - - -# Produce UTF-8 output -class TestParse: - class Outputter: - def __init__(self): - self.out = [] - - def StartElementHandler(self, name, attrs): - self.out.append('Start element: ' + repr(name) + ' ' + - sortdict(attrs)) - - def EndElementHandler(self, name): - self.out.append('End element: ' + repr(name)) - - def CharacterDataHandler(self, data): - data = data.strip() - if data: - self.out.append('Character data: ' + repr(data)) - - def ProcessingInstructionHandler(self, target, data): - self.out.append('PI: ' + repr(target) + ' ' + repr(data)) - - def StartNamespaceDeclHandler(self, prefix, uri): - self.out.append('NS decl: ' + repr(prefix) + ' ' + repr(uri)) - - def EndNamespaceDeclHandler(self, prefix): - self.out.append('End of NS decl: ' + repr(prefix)) - - def StartCdataSectionHandler(self): - self.out.append('Start of CDATA section') - - def EndCdataSectionHandler(self): - self.out.append('End of CDATA section') - - def 
CommentHandler(self, text): - self.out.append('Comment: ' + repr(text)) - - def NotationDeclHandler(self, *args): - name, base, sysid, pubid = args - self.out.append('Notation declared: %s' %(args,)) - - def UnparsedEntityDeclHandler(self, *args): - entityName, base, systemId, publicId, notationName = args - self.out.append('Unparsed entity decl: %s' %(args,)) - - def NotStandaloneHandler(self): - self.out.append('Not standalone') - return 1 - - def ExternalEntityRefHandler(self, *args): - context, base, sysId, pubId = args - self.out.append('External entity ref: %s' %(args[1:],)) - return 1 - - def StartDoctypeDeclHandler(self, *args): - self.out.append(('Start doctype', args)) - return 1 - - def EndDoctypeDeclHandler(self): - self.out.append("End doctype") - return 1 - - def EntityDeclHandler(self, *args): - self.out.append(('Entity declaration', args)) - return 1 - - def XmlDeclHandler(self, *args): - self.out.append(('XML declaration', args)) - return 1 - - def ElementDeclHandler(self, *args): - self.out.append(('Element declaration', args)) - return 1 - - def AttlistDeclHandler(self, *args): - self.out.append(('Attribute list declaration', args)) - return 1 - - def SkippedEntityHandler(self, *args): - self.out.append(("Skipped entity", args)) - return 1 - - def DefaultHandler(self, userData): - pass - - def DefaultHandlerExpand(self, userData): - pass - - handler_names = [ - 'StartElementHandler', 'EndElementHandler', 'CharacterDataHandler', - 'ProcessingInstructionHandler', 'UnparsedEntityDeclHandler', - 'NotationDeclHandler', 'StartNamespaceDeclHandler', - 'EndNamespaceDeclHandler', 'CommentHandler', - 'StartCdataSectionHandler', 'EndCdataSectionHandler', 'DefaultHandler', - 'DefaultHandlerExpand', 'NotStandaloneHandler', - 'ExternalEntityRefHandler', 'StartDoctypeDeclHandler', - 'EndDoctypeDeclHandler', 'EntityDeclHandler', 'XmlDeclHandler', - 'ElementDeclHandler', 'AttlistDeclHandler', 'SkippedEntityHandler', - ] - - def test_utf8(self): - - out = self.Outputter() - parser = expat.ParserCreate(namespace_separator='!') - for name in self.handler_names: - setattr(parser, name, getattr(out, name)) - parser.returns_unicode = 0 - parser.Parse(data, 1) - - # Verify output - operations = out.out - expected_operations = [ - ('XML declaration', (u'1.0', u'iso-8859-1', 0)), - 'PI: \'xml-stylesheet\' \'href="stylesheet.css"\'', - "Comment: ' comment data '", - "Not standalone", - ("Start doctype", ('quotations', 'quotations.dtd', None, 1)), - ('Element declaration', (u'root', (2, 0, None, ()))), - ('Attribute list declaration', ('root', 'attr1', 'CDATA', None, - 1)), - ('Attribute list declaration', ('root', 'attr2', 'CDATA', None, - 0)), - "Notation declared: ('notation', None, 'notation.jpeg', None)", - ('Entity declaration', ('acirc', 0, '\xc3\xa2', None, None, None, None)), - ('Entity declaration', ('external_entity', 0, None, None, - 'entity.file', None, None)), - "Unparsed entity decl: ('unparsed_entity', None, 'entity.file', None, 'notation')", - "Not standalone", - "End doctype", - "Start element: 'root' {'attr1': 'value1', 'attr2': 'value2\\xe1\\xbd\\x80'}", - "NS decl: 'myns' 'http://www.python.org/namespace'", - "Start element: 'http://www.python.org/namespace!subelement' {}", - "Character data: 'Contents of subelements'", - "End element: 'http://www.python.org/namespace!subelement'", - "End of NS decl: 'myns'", - "Start element: 'sub2' {}", - 'Start of CDATA section', - "Character data: 'contents of CDATA section'", - 'End of CDATA section', - "End element: 'sub2'", - "External 
entity ref: (None, 'entity.file', None)", - ('Skipped entity', ('skipped_entity', 0)), - "End element: 'root'", - ] - for operation, expected_operation in zip(operations, expected_operations): - assert operation == expected_operation - - def test_unicode(self): - # Try the parse again, this time producing Unicode output - out = self.Outputter() - parser = expat.ParserCreate(namespace_separator='!') - parser.returns_unicode = 1 - for name in self.handler_names: - setattr(parser, name, getattr(out, name)) - - parser.Parse(data, 1) - - operations = out.out - expected_operations = [ - ('XML declaration', (u'1.0', u'iso-8859-1', 0)), - 'PI: u\'xml-stylesheet\' u\'href="stylesheet.css"\'', - "Comment: u' comment data '", - "Not standalone", - ("Start doctype", ('quotations', 'quotations.dtd', None, 1)), - ('Element declaration', (u'root', (2, 0, None, ()))), - ('Attribute list declaration', ('root', 'attr1', 'CDATA', None, - 1)), - ('Attribute list declaration', ('root', 'attr2', 'CDATA', None, - 0)), - "Notation declared: (u'notation', None, u'notation.jpeg', None)", - ('Entity declaration', (u'acirc', 0, u'\xe2', None, None, None, - None)), - ('Entity declaration', (u'external_entity', 0, None, None, - u'entity.file', None, None)), - "Unparsed entity decl: (u'unparsed_entity', None, u'entity.file', None, u'notation')", - "Not standalone", - "End doctype", - "Start element: u'root' {u'attr1': u'value1', u'attr2': u'value2\\u1f40'}", - "NS decl: u'myns' u'http://www.python.org/namespace'", - "Start element: u'http://www.python.org/namespace!subelement' {}", - "Character data: u'Contents of subelements'", - "End element: u'http://www.python.org/namespace!subelement'", - "End of NS decl: u'myns'", - "Start element: u'sub2' {}", - 'Start of CDATA section', - "Character data: u'contents of CDATA section'", - 'End of CDATA section', - "End element: u'sub2'", - "External entity ref: (None, u'entity.file', None)", - ('Skipped entity', ('skipped_entity', 0)), - "End element: u'root'", - ] - for operation, expected_operation in zip(operations, expected_operations): - assert operation == expected_operation - - def test_parse_file(self): - # Try parsing a file - out = self.Outputter() - parser = expat.ParserCreate(namespace_separator='!') - parser.returns_unicode = 1 - for name in self.handler_names: - setattr(parser, name, getattr(out, name)) - file = StringIO.StringIO(data) - - parser.ParseFile(file) - - operations = out.out - expected_operations = [ - ('XML declaration', (u'1.0', u'iso-8859-1', 0)), - 'PI: u\'xml-stylesheet\' u\'href="stylesheet.css"\'', - "Comment: u' comment data '", - "Not standalone", - ("Start doctype", ('quotations', 'quotations.dtd', None, 1)), - ('Element declaration', (u'root', (2, 0, None, ()))), - ('Attribute list declaration', ('root', 'attr1', 'CDATA', None, - 1)), - ('Attribute list declaration', ('root', 'attr2', 'CDATA', None, - 0)), - "Notation declared: (u'notation', None, u'notation.jpeg', None)", - ('Entity declaration', ('acirc', 0, u'\xe2', None, None, None, None)), - ('Entity declaration', (u'external_entity', 0, None, None, u'entity.file', None, None)), - "Unparsed entity decl: (u'unparsed_entity', None, u'entity.file', None, u'notation')", - "Not standalone", - "End doctype", - "Start element: u'root' {u'attr1': u'value1', u'attr2': u'value2\\u1f40'}", - "NS decl: u'myns' u'http://www.python.org/namespace'", - "Start element: u'http://www.python.org/namespace!subelement' {}", - "Character data: u'Contents of subelements'", - "End element: 
u'http://www.python.org/namespace!subelement'", - "End of NS decl: u'myns'", - "Start element: u'sub2' {}", - 'Start of CDATA section', - "Character data: u'contents of CDATA section'", - 'End of CDATA section', - "End element: u'sub2'", - "External entity ref: (None, u'entity.file', None)", - ('Skipped entity', ('skipped_entity', 0)), - "End element: u'root'", - ] - for operation, expected_operation in zip(operations, expected_operations): - assert operation == expected_operation - - -class TestNamespaceSeparator: - def test_legal(self): - # Tests that make sure we get errors when the namespace_separator value - # is illegal, and that we don't for good values: - expat.ParserCreate() - expat.ParserCreate(namespace_separator=None) - expat.ParserCreate(namespace_separator=' ') - - def test_illegal(self): - try: - expat.ParserCreate(namespace_separator=42) - raise AssertionError - except TypeError, e: - assert str(e) == ( - 'ParserCreate() argument 2 must be string or None, not int') - - try: - expat.ParserCreate(namespace_separator='too long') - raise AssertionError - except ValueError, e: - assert str(e) == ( - 'namespace_separator must be at most one character, omitted, or None') - - def test_zero_length(self): - # ParserCreate() needs to accept a namespace_separator of zero length - # to satisfy the requirements of RDF applications that are required - # to simply glue together the namespace URI and the localname. Though - # considered a wart of the RDF specifications, it needs to be supported. - # - # See XML-SIG mailing list thread starting with - # http://mail.python.org/pipermail/xml-sig/2001-April/005202.html - # - expat.ParserCreate(namespace_separator='') # too short - - -class TestInterning: - def test(self): - py.test.skip("Not working") - # Test the interning machinery. - p = expat.ParserCreate() - L = [] - def collector(name, *args): - L.append(name) - p.StartElementHandler = collector - p.EndElementHandler = collector - p.Parse(" ", 1) - tag = L[0] - assert len(L) == 6 - for entry in L: - # L should have the same string repeated over and over. - assert tag is entry - - -class TestBufferText: - def setup_method(self, meth): - self.stuff = [] - self.parser = expat.ParserCreate() - self.parser.buffer_text = 1 - self.parser.CharacterDataHandler = self.CharacterDataHandler - - def check(self, expected, label): - assert self.stuff == expected, ( - "%s\nstuff = %r\nexpected = %r" - % (label, self.stuff, map(unicode, expected))) - - def CharacterDataHandler(self, text): - self.stuff.append(text) - - def StartElementHandler(self, name, attrs): - self.stuff.append("<%s>" % name) - bt = attrs.get("buffer-text") - if bt == "yes": - self.parser.buffer_text = 1 - elif bt == "no": - self.parser.buffer_text = 0 - - def EndElementHandler(self, name): - self.stuff.append("" % name) - - def CommentHandler(self, data): - self.stuff.append("" % data) - - def setHandlers(self, handlers=[]): - for name in handlers: - setattr(self.parser, name, getattr(self, name)) - - def test_default_to_disabled(self): - parser = expat.ParserCreate() - assert not parser.buffer_text - - def test_buffering_enabled(self): - # Make sure buffering is turned on - assert self.parser.buffer_text - self.parser.Parse("123", 1) - assert self.stuff == ['123'], ( - "buffered text not properly collapsed") - - def test1(self): - # XXX This test exposes more detail of Expat's text chunking than we - # XXX like, but it tests what we need to concisely. 
- self.setHandlers(["StartElementHandler"]) - self.parser.Parse("12\n34\n5", 1) - assert self.stuff == ( - ["", "1", "", "2", "\n", "3", "", "4\n5"]), ( - "buffering control not reacting as expected") - - def test2(self): - self.parser.Parse("1<2> \n 3", 1) - assert self.stuff == ["1<2> \n 3"], ( - "buffered text not properly collapsed") - - def test3(self): - self.setHandlers(["StartElementHandler"]) - self.parser.Parse("123", 1) - assert self.stuff == ["", "1", "", "2", "", "3"], ( - "buffered text not properly split") - - def test4(self): - self.setHandlers(["StartElementHandler", "EndElementHandler"]) - self.parser.CharacterDataHandler = None - self.parser.Parse("123", 1) - assert self.stuff == ( - ["", "", "", "", "", ""]) - - def test5(self): - self.setHandlers(["StartElementHandler", "EndElementHandler"]) - self.parser.Parse("123", 1) - assert self.stuff == ( - ["", "1", "", "", "2", "", "", "3", ""]) - - def test6(self): - self.setHandlers(["CommentHandler", "EndElementHandler", - "StartElementHandler"]) - self.parser.Parse("12345 ", 1) - assert self.stuff == ( - ["", "1", "", "", "2", "", "", "345", ""]), ( - "buffered text not properly split") - - def test7(self): - self.setHandlers(["CommentHandler", "EndElementHandler", - "StartElementHandler"]) - self.parser.Parse("12345 ", 1) - assert self.stuff == ( - ["", "1", "", "", "2", "", "", "3", - "", "4", "", "5", ""]), ( - "buffered text not properly split") - - -# Test handling of exception from callback: -class TestHandlerException: - def StartElementHandler(self, name, attrs): - raise RuntimeError(name) - - def test(self): - parser = expat.ParserCreate() - parser.StartElementHandler = self.StartElementHandler - try: - parser.Parse("", 1) - raise AssertionError - except RuntimeError, e: - assert e.args[0] == 'a', ( - "Expected RuntimeError for element 'a', but" + \ - " found %r" % e.args[0]) - - -# Test Current* members: -class TestPosition: - def StartElementHandler(self, name, attrs): - self.check_pos('s') - - def EndElementHandler(self, name): - self.check_pos('e') - - def check_pos(self, event): - pos = (event, - self.parser.CurrentByteIndex, - self.parser.CurrentLineNumber, - self.parser.CurrentColumnNumber) - assert self.upto < len(self.expected_list) - expected = self.expected_list[self.upto] - assert pos == expected, ( - 'Expected position %s, got position %s' %(pos, expected)) - self.upto += 1 - - def test(self): - self.parser = expat.ParserCreate() - self.parser.StartElementHandler = self.StartElementHandler - self.parser.EndElementHandler = self.EndElementHandler - self.upto = 0 - self.expected_list = [('s', 0, 1, 0), ('s', 5, 2, 1), ('s', 11, 3, 2), - ('e', 15, 3, 6), ('e', 17, 4, 1), ('e', 22, 5, 0)] - - xml = '\n \n \n \n' - self.parser.Parse(xml, 1) - - -class Testsf1296433: - def test_parse_only_xml_data(self): - # http://python.org/sf/1296433 - # - xml = "%s" % ('a' * 1025) - # this one doesn't crash - #xml = "%s" % ('a' * 10000) - - class SpecificException(Exception): - pass - - def handler(text): - raise SpecificException - - parser = expat.ParserCreate() - parser.CharacterDataHandler = handler - - py.test.raises(Exception, parser.Parse, xml) - -class TestChardataBuffer: - """ - test setting of chardata buffer size - """ - - def test_1025_bytes(self): - assert self.small_buffer_test(1025) == 2 - - def test_1000_bytes(self): - assert self.small_buffer_test(1000) == 1 - - def test_wrong_size(self): - parser = expat.ParserCreate() - parser.buffer_text = 1 - def f(size): - parser.buffer_size = size - - 
py.test.raises(TypeError, f, sys.maxint+1) - py.test.raises(ValueError, f, -1) - py.test.raises(ValueError, f, 0) - - def test_unchanged_size(self): - xml1 = ("%s" % ('a' * 512)) - xml2 = 'a'*512 + '' - parser = expat.ParserCreate() - parser.CharacterDataHandler = self.counting_handler - parser.buffer_size = 512 - parser.buffer_text = 1 - - # Feed 512 bytes of character data: the handler should be called - # once. - self.n = 0 - parser.Parse(xml1) - assert self.n == 1 - - # Reassign to buffer_size, but assign the same size. - parser.buffer_size = parser.buffer_size - assert self.n == 1 - - # Try parsing rest of the document - parser.Parse(xml2) - assert self.n == 2 - - - def test_disabling_buffer(self): - xml1 = "%s" % ('a' * 512) - xml2 = ('b' * 1024) - xml3 = "%s" % ('c' * 1024) - parser = expat.ParserCreate() - parser.CharacterDataHandler = self.counting_handler - parser.buffer_text = 1 - parser.buffer_size = 1024 - assert parser.buffer_size == 1024 - - # Parse one chunk of XML - self.n = 0 - parser.Parse(xml1, 0) - assert parser.buffer_size == 1024 - assert self.n == 1 - - # Turn off buffering and parse the next chunk. - parser.buffer_text = 0 - assert not parser.buffer_text - assert parser.buffer_size == 1024 - for i in range(10): - parser.Parse(xml2, 0) - assert self.n == 11 - - parser.buffer_text = 1 - assert parser.buffer_text - assert parser.buffer_size == 1024 - parser.Parse(xml3, 1) - assert self.n == 12 - - - - def make_document(self, bytes): - return ("" + bytes * 'a' + '') - - def counting_handler(self, text): - self.n += 1 - - def small_buffer_test(self, buffer_len): - xml = "%s" % ('a' * buffer_len) - parser = expat.ParserCreate() - parser.CharacterDataHandler = self.counting_handler - parser.buffer_size = 1024 - parser.buffer_text = 1 - - self.n = 0 - parser.Parse(xml) - return self.n - - def test_change_size_1(self): - xml1 = "%s" % ('a' * 1024) - xml2 = "aaa%s" % ('a' * 1025) - parser = expat.ParserCreate() - parser.CharacterDataHandler = self.counting_handler - parser.buffer_text = 1 - parser.buffer_size = 1024 - assert parser.buffer_size == 1024 - - self.n = 0 - parser.Parse(xml1, 0) - parser.buffer_size *= 2 - assert parser.buffer_size == 2048 - parser.Parse(xml2, 1) - assert self.n == 2 - - def test_change_size_2(self): - xml1 = "a%s" % ('a' * 1023) - xml2 = "aaa%s" % ('a' * 1025) - parser = expat.ParserCreate() - parser.CharacterDataHandler = self.counting_handler - parser.buffer_text = 1 - parser.buffer_size = 2048 - assert parser.buffer_size == 2048 - - self.n=0 - parser.Parse(xml1, 0) - parser.buffer_size /= 2 - assert parser.buffer_size == 1024 - parser.Parse(xml2, 1) - assert self.n == 4 - - def test_segfault(self): - py.test.raises(TypeError, expat.ParserCreate, 1234123123) - -def test_invalid_data(): - parser = expat.ParserCreate() - parser.Parse('invalid.xml', 0) - try: - parser.Parse("", 1) - except expat.ExpatError, e: - assert e.code == 2 # XXX is this reliable? 
- assert e.lineno == 1 - assert e.message.startswith('syntax error') - else: - py.test.fail("Did not raise") - diff --git a/py/_io/terminalwriter.py b/py/_io/terminalwriter.py --- a/py/_io/terminalwriter.py +++ b/py/_io/terminalwriter.py @@ -271,16 +271,24 @@ ('srWindow', SMALL_RECT), ('dwMaximumWindowSize', COORD)] + _GetStdHandle = ctypes.windll.kernel32.GetStdHandle + _GetStdHandle.argtypes = [wintypes.DWORD] + _GetStdHandle.restype = wintypes.HANDLE def GetStdHandle(kind): - return ctypes.windll.kernel32.GetStdHandle(kind) + return _GetStdHandle(kind) - SetConsoleTextAttribute = \ - ctypes.windll.kernel32.SetConsoleTextAttribute - + SetConsoleTextAttribute = ctypes.windll.kernel32.SetConsoleTextAttribute + SetConsoleTextAttribute.argtypes = [wintypes.HANDLE, wintypes.WORD] + SetConsoleTextAttribute.restype = wintypes.BOOL + + _GetConsoleScreenBufferInfo = \ + ctypes.windll.kernel32.GetConsoleScreenBufferInfo + _GetConsoleScreenBufferInfo.argtypes = [wintypes.HANDLE, + ctypes.POINTER(CONSOLE_SCREEN_BUFFER_INFO)] + _GetConsoleScreenBufferInfo.restype = wintypes.BOOL def GetConsoleInfo(handle): info = CONSOLE_SCREEN_BUFFER_INFO() - ctypes.windll.kernel32.GetConsoleScreenBufferInfo(\ - handle, ctypes.byref(info)) + _GetConsoleScreenBufferInfo(handle, ctypes.byref(info)) return info def _getdimensions(): diff --git a/pypy/config/translationoption.py b/pypy/config/translationoption.py --- a/pypy/config/translationoption.py +++ b/pypy/config/translationoption.py @@ -105,7 +105,8 @@ BoolOption("sandbox", "Produce a fully-sandboxed executable", default=False, cmdline="--sandbox", requires=[("translation.thread", False)], - suggests=[("translation.gc", "generation")]), + suggests=[("translation.gc", "generation"), + ("translation.gcrootfinder", "shadowstack")]), BoolOption("rweakref", "The backend supports RPython-level weakrefs", default=True), diff --git a/pypy/doc/Makefile b/pypy/doc/Makefile --- a/pypy/doc/Makefile +++ b/pypy/doc/Makefile @@ -81,6 +81,7 @@ "run these through (pdf)latex." man: + python config/generate.py $(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man @echo @echo "Build finished. The manual pages are in $(BUILDDIR)/man" diff --git a/pypy/doc/coding-guide.rst b/pypy/doc/coding-guide.rst --- a/pypy/doc/coding-guide.rst +++ b/pypy/doc/coding-guide.rst @@ -388,7 +388,9 @@ In a few cases (e.g. hash table manipulation), we need machine-sized unsigned arithmetic. For these cases there is the r_uint class, which is a pure Python implementation of word-sized unsigned integers that silently wrap - around. The purpose of this class (as opposed to helper functions as above) + around. ("word-sized" and "machine-sized" are used equivalently and mean + the native size, which you get using "unsigned long" in C.) + The purpose of this class (as opposed to helper functions as above) is consistent typing: both Python and the annotator will propagate r_uint instances in the program and interpret all the operations between them as unsigned. Instances of r_uint are special-cased by the code generators to diff --git a/pypy/doc/commandline_ref.rst b/pypy/doc/commandline_ref.rst new file mode 100644 --- /dev/null +++ b/pypy/doc/commandline_ref.rst @@ -0,0 +1,10 @@ +Command line reference +====================== + +Manual pages +------------ + +.. toctree:: + :maxdepth: 1 + + man/pypy.1.rst diff --git a/pypy/doc/conf.py b/pypy/doc/conf.py --- a/pypy/doc/conf.py +++ b/pypy/doc/conf.py @@ -45,9 +45,9 @@ # built documents. # # The short X.Y version. 
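As a small illustration of the r_uint behaviour described in the coding-guide hunk above (a sketch only: it assumes a 64-bit host, so ``unsigned long`` is 64 bits wide, and that the PyPy source tree is importable; neither assumption comes from the diff itself)::

    from pypy.rlib.rarithmetic import r_uint

    x = r_uint(0) - r_uint(1)        # silently wraps instead of going negative
    assert x == r_uint(2 ** 64 - 1)  # on a 32-bit host this would be 2 ** 32 - 1
    assert x > r_uint(0)             # always interpreted as unsigned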
-version = '1.7' +version = '1.8' # The full version, including alpha/beta/rc tags. -release = '1.7' +release = '1.8' # The language for content autogenerated by Sphinx. Refer to documentation # for a list of supported languages. diff --git a/pypy/doc/config/objspace.usemodules.pyexpat.txt b/pypy/doc/config/objspace.usemodules.pyexpat.txt --- a/pypy/doc/config/objspace.usemodules.pyexpat.txt +++ b/pypy/doc/config/objspace.usemodules.pyexpat.txt @@ -1,2 +1,1 @@ -Use (experimental) pyexpat module written in RPython, instead of CTypes -version which is used by default. +Use the pyexpat module, written in RPython. diff --git a/pypy/doc/config/translation.check_str_without_nul.txt b/pypy/doc/config/translation.check_str_without_nul.txt new file mode 100644 --- /dev/null +++ b/pypy/doc/config/translation.check_str_without_nul.txt @@ -0,0 +1,5 @@ +If turned on, the annotator will keep track of which strings can +potentially contain NUL characters, and complain if one such string +is passed to some external functions --- e.g. if it is used as a +filename in os.open(). Defaults to False because it is usually more +pain than benefit, but turned on by targetpypystandalone. diff --git a/pypy/doc/config/translation.log.txt b/pypy/doc/config/translation.log.txt --- a/pypy/doc/config/translation.log.txt +++ b/pypy/doc/config/translation.log.txt @@ -2,4 +2,4 @@ These must be enabled by setting the PYPYLOG environment variable. The exact set of features supported by PYPYLOG is described in -pypy/translation/c/src/debug.h. +pypy/translation/c/src/debug_print.h. diff --git a/pypy/doc/cpython_differences.rst b/pypy/doc/cpython_differences.rst --- a/pypy/doc/cpython_differences.rst +++ b/pypy/doc/cpython_differences.rst @@ -313,5 +313,10 @@ implementation detail that shows up because of internal C-level slots that PyPy does not have. +* the ``__dict__`` attribute of new-style classes returns a normal dict, as + opposed to a dict proxy like in CPython. Mutating the dict will change the + type and vice versa. For builtin types, a dictionary will be returned that + cannot be changed (but still looks and behaves like a normal dictionary). + .. include:: _ref.txt diff --git a/pypy/doc/garbage_collection.rst b/pypy/doc/garbage_collection.rst --- a/pypy/doc/garbage_collection.rst +++ b/pypy/doc/garbage_collection.rst @@ -142,10 +142,9 @@ So as a first approximation, when compared to the Hybrid GC, the Minimark GC saves one word of memory per old object. -There are a number of environment variables that can be tweaked to -influence the GC. (Their default value should be ok for most usages.) -You can read more about them at the start of -`pypy/rpython/memory/gc/minimark.py`_. +There are :ref:`a number of environment variables +` that can be tweaked to influence the +GC. (Their default value should be ok for most usages.) In more detail: @@ -211,5 +210,4 @@ are preserved. If the object dies then the pre-reserved location becomes free garbage, to be collected at the next major collection. - .. include:: _ref.txt diff --git a/pypy/doc/gc_info.rst b/pypy/doc/gc_info.rst new file mode 100644 --- /dev/null +++ b/pypy/doc/gc_info.rst @@ -0,0 +1,53 @@ +Garbage collector configuration +=============================== + +.. _minimark-environment-variables: + +Minimark +-------- + +PyPy's default ``minimark`` garbage collector is configurable through +several environment variables: + +``PYPY_GC_NURSERY`` + The nursery size. + Defaults to ``4MB``. + Small values (like 1 or 1KB) are useful for debugging. 
+ +``PYPY_GC_MAJOR_COLLECT`` + Major collection memory factor. + Default is ``1.82``, which means trigger a major collection when the + memory consumed equals 1.82 times the memory really used at the end + of the previous major collection. + +``PYPY_GC_GROWTH`` + Major collection threshold's max growth rate. + Default is ``1.4``. + Useful to collect more often than normally on sudden memory growth, + e.g. when there is a temporary peak in memory usage. + +``PYPY_GC_MAX`` + The max heap size. + If coming near this limit, it will first collect more often, then + raise an RPython MemoryError, and if that is not enough, crash the + program with a fatal error. + Try values like ``1.6GB``. + +``PYPY_GC_MAX_DELTA`` + The major collection threshold will never be set to more than + ``PYPY_GC_MAX_DELTA`` the amount really used after a collection. + Defaults to 1/8th of the total RAM size (which is constrained to be + at most 2/3/4GB on 32-bit systems). + Try values like ``200MB``. + +``PYPY_GC_MIN`` + Don't collect while the memory size is below this limit. + Useful to avoid spending all the time in the GC in very small + programs. + Defaults to 8 times the nursery. + +``PYPY_GC_DEBUG`` + Enable extra checks around collections that are too slow for normal + use. + Values are ``0`` (off), ``1`` (on major collections) or ``2`` (also + on minor collections). diff --git a/pypy/doc/getting-started-python.rst b/pypy/doc/getting-started-python.rst --- a/pypy/doc/getting-started-python.rst +++ b/pypy/doc/getting-started-python.rst @@ -103,18 +103,22 @@ executable. The executable behaves mostly like a normal Python interpreter:: $ ./pypy-c - Python 2.7.0 (61ef2a11b56a, Mar 02 2011, 03:00:11) - [PyPy 1.6.0 with GCC 4.4.3] on linux2 + Python 2.7.2 (0e28b379d8b3, Feb 09 2012, 19:41:03) + [PyPy 1.8.0 with GCC 4.4.3] on linux2 Type "help", "copyright", "credits" or "license" for more information. And now for something completely different: ``this sentence is false'' >>>> 46 - 4 42 >>>> from test import pystone >>>> pystone.main() - Pystone(1.1) time for 50000 passes = 0.280017 - This machine benchmarks at 178561 pystones/second - >>>> + Pystone(1.1) time for 50000 passes = 0.220015 + This machine benchmarks at 227257 pystones/second + >>>> pystone.main() + Pystone(1.1) time for 50000 passes = 0.060004 + This machine benchmarks at 833278 pystones/second + >>>> +Note that pystone gets faster as the JIT kicks in. This executable can be moved around or copied on other machines; see Installation_ below. diff --git a/pypy/doc/getting-started.rst b/pypy/doc/getting-started.rst --- a/pypy/doc/getting-started.rst +++ b/pypy/doc/getting-started.rst @@ -53,14 +53,15 @@ PyPy is ready to be executed as soon as you unpack the tarball or the zip file, with no need to install it in any specific location:: - $ tar xf pypy-1.7-linux.tar.bz2 - - $ ./pypy-1.7/bin/pypy - Python 2.7.1 (?, Apr 27 2011, 12:44:21) - [PyPy 1.7.0 with GCC 4.4.3] on linux2 + $ tar xf pypy-1.8-linux.tar.bz2 + $ ./pypy-1.8/bin/pypy + Python 2.7.2 (0e28b379d8b3, Feb 09 2012, 19:41:03) + [PyPy 1.8.0 with GCC 4.4.3] on linux2 Type "help", "copyright", "credits" or "license" for more information. 
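One way to exercise the minimark variables documented in the gc_info.rst hunk above is to set them in the environment of a child pypy-c process. This is only a sketch: the interpreter path, the script name and the values are placeholders rather than recommendations::

    import os
    import subprocess

    env = dict(os.environ)
    env['PYPY_GC_NURSERY'] = '8MB'   # larger nursery than the 4MB default
    env['PYPY_GC_MAX'] = '1.6GB'     # hard ceiling on the heap size
    env['PYPY_GC_DEBUG'] = '0'       # extra consistency checks disabled
    # './pypy-1.8/bin/pypy' and 'myscript.py' are hypothetical names.
    subprocess.check_call(['./pypy-1.8/bin/pypy', 'myscript.py'], env=env)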
- And now for something completely different: ``implementing LOGO in LOGO: - "turtles all the way down"'' + And now for something completely different: ``it seems to me that once you + settle on an execution / object model and / or bytecode format, you've already + decided what languages (where the 's' seems superfluous) support is going to be + first class for'' >>>> If you want to make PyPy available system-wide, you can put a symlink to the @@ -75,14 +76,14 @@ $ curl -O https://raw.github.com/pypa/pip/master/contrib/get-pip.py - $ ./pypy-1.7/bin/pypy distribute_setup.py + $ ./pypy-1.8/bin/pypy distribute_setup.py - $ ./pypy-1.7/bin/pypy get-pip.py + $ ./pypy-1.8/bin/pypy get-pip.py - $ ./pypy-1.7/bin/pip install pygments # for example + $ ./pypy-1.8/bin/pip install pygments # for example -3rd party libraries will be installed in ``pypy-1.7/site-packages``, and -the scripts in ``pypy-1.7/bin``. +3rd party libraries will be installed in ``pypy-1.8/site-packages``, and +the scripts in ``pypy-1.8/bin``. Installing using virtualenv --------------------------- diff --git a/pypy/doc/index.rst b/pypy/doc/index.rst --- a/pypy/doc/index.rst +++ b/pypy/doc/index.rst @@ -15,7 +15,7 @@ * `FAQ`_: some frequently asked questions. -* `Release 1.7`_: the latest official release +* `Release 1.8`_: the latest official release * `PyPy Blog`_: news and status info about PyPy @@ -75,7 +75,7 @@ .. _`Getting Started`: getting-started.html .. _`Papers`: extradoc.html .. _`Videos`: video-index.html -.. _`Release 1.7`: http://pypy.org/download.html +.. _`Release 1.8`: http://pypy.org/download.html .. _`speed.pypy.org`: http://speed.pypy.org .. _`RPython toolchain`: translation.html .. _`potential project ideas`: project-ideas.html @@ -120,9 +120,9 @@ Windows, on top of .NET, and on top of Java. To dig into PyPy it is recommended to try out the current Mercurial default branch, which is always working or mostly working, -instead of the latest release, which is `1.7`__. +instead of the latest release, which is `1.8`__. -.. __: release-1.7.0.html +.. __: release-1.8.0.html PyPy is mainly developed on Linux and Mac OS X. Windows is supported, but platform-specific bugs tend to take longer before we notice and fix @@ -353,10 +353,12 @@ getting-started-dev.rst windows.rst faq.rst + commandline_ref.rst architecture.rst coding-guide.rst cpython_differences.rst garbage_collection.rst + gc_info.rst interpreter.rst objspace.rst __pypy__-module.rst diff --git a/pypy/doc/jit-hooks.rst b/pypy/doc/jit-hooks.rst new file mode 100644 --- /dev/null +++ b/pypy/doc/jit-hooks.rst @@ -0,0 +1,66 @@ +JIT hooks in PyPy +================= + +There are several hooks in the `pypyjit` module that may help you with +understanding what's pypy's JIT doing while running your program. There +are three functions related to that coming from the `pypyjit` module: + +* `set_optimize_hook`:: + + Set a compiling hook that will be called each time a loop is optimized, + but before assembler compilation. This allows to add additional + optimizations on Python level. + + The hook will be called with the following signature: + hook(jitdriver_name, loop_type, greenkey or guard_number, operations) + + jitdriver_name is the name of this particular jitdriver, 'pypyjit' is + the main interpreter loop + + loop_type can be either `loop` `entry_bridge` or `bridge` + in case loop is not `bridge`, greenkey will be a tuple of constants + or a string describing it. 
+ + for the interpreter loop it'll be a tuple + (code, offset, is_being_profiled) + + Note that the jit hook is not reentrant. It means that if the code + inside the jit hook is itself jitted, it will get compiled, but the + jit hook won't be called for that. + + The result value will be the resulting list of operations, or None + +* `set_compile_hook`:: + + Set a compiling hook that will be called each time a loop is compiled. + The hook will be called with the following signature: + hook(jitdriver_name, loop_type, greenkey or guard_number, operations, + assembler_addr, assembler_length) + + jitdriver_name is the name of this particular jitdriver, 'pypyjit' is + the main interpreter loop + + loop_type can be either `loop` `entry_bridge` or `bridge` + in case loop is not `bridge`, greenkey will be a tuple of constants + or a string describing it. + + for the interpreter loop it'll be a tuple + (code, offset, is_being_profiled) + + assembler_addr is an integer describing where the assembler starts and + can be accessed via ctypes; assembler_length is the length of the compiled + asm + + Note that the jit hook is not reentrant. It means that if the code + inside the jit hook is itself jitted, it will get compiled, but the + jit hook won't be called for that. + +* `set_abort_hook`:: + + Set a hook (callable) that will be called each time tracing + is aborted for some reason. + + The hook will be called as in: hook(jitdriver_name, greenkey, reason) + + Here reason is the reason for the abort; see the documentation for set_compile_hook + for descriptions of the other arguments. diff --git a/pypy/doc/jit/index.rst b/pypy/doc/jit/index.rst --- a/pypy/doc/jit/index.rst +++ b/pypy/doc/jit/index.rst @@ -21,6 +21,9 @@ - Notes_ about the current work in PyPy +- Hooks_ debugging facilities available to a Python programmer + .. _Overview: overview.html .. _Notes: pyjitpl5.html +.. _Hooks: ../jit-hooks.html diff --git a/pypy/doc/man/pypy.1.rst b/pypy/doc/man/pypy.1.rst --- a/pypy/doc/man/pypy.1.rst +++ b/pypy/doc/man/pypy.1.rst @@ -24,6 +24,9 @@ -S Do not ``import site`` on initialization. +-s + Don't add the user site directory to `sys.path`. + -u Unbuffered binary ``stdout`` and ``stderr``. @@ -39,6 +42,9 @@ -E Ignore environment variables (such as ``PYTHONPATH``). +-B + Disable writing bytecode (``.pyc``) files. + --version Print the PyPy version. @@ -84,6 +90,64 @@ Optimizations to enabled or ``all``. Warning, this option is dangerous, and should be avoided. +ENVIRONMENT +=========== + +``PYTHONPATH`` + Add directories to pypy's module search path. + The format is the same as shell's ``PATH``. + +``PYTHONSTARTUP`` + A script referenced by this variable will be executed before the + first prompt is displayed, in interactive mode. + +``PYTHONDONTWRITEBYTECODE`` + If set to a non-empty value, equivalent to the ``-B`` option. + Disable writing ``.pyc`` files. + +``PYTHONINSPECT`` + If set to a non-empty value, equivalent to the ``-i`` option. + Inspect interactively after running the specified script. + +``PYTHONIOENCODING`` + If this is set, it overrides the encoding used for + *stdin*/*stdout*/*stderr*. + The syntax is *encodingname*:*errorhandler* + The *errorhandler* part is optional and has the same meaning as in + `str.encode`. + +``PYTHONNOUSERSITE`` + If set to a non-empty value, equivalent to the ``-s`` option. + Don't add the user site directory to `sys.path`. + +``PYTHONWARNINGS`` + If set, equivalent to the ``-W`` option (warning control). + The value should be a comma-separated list of ``-W`` parameters.
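To make the jit-hooks document above a bit more concrete, here is a sketch of registering the compile hook it describes; it assumes a pypy-c that ships this `pypyjit` API, and the hook body and the `busy` function are purely illustrative::

    import pypyjit

    def compile_hook(jitdriver_name, loop_type, greenkey, operations,
                     assembler_addr, assembler_length):
        # for the main interpreter loop, greenkey is the
        # (code, offset, is_being_profiled) tuple mentioned above
        print 'compiled %s %s: %d operations, %d bytes of assembler' % (
            jitdriver_name, loop_type, len(operations), assembler_length)

    pypyjit.set_compile_hook(compile_hook)

    def busy(n):
        total = 0
        for i in xrange(n):
            total += i
        return total

    busy(200000)    # hot enough for the JIT to compile a loop and call the hook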
+ +``PYPYLOG`` + If set to a non-empty value, enable logging; the format is: + + *fname* + logging for profiling: includes all + ``debug_start``/``debug_stop`` but not any nested + ``debug_print``. + *fname* can be ``-`` to log to *stderr*. + + ``:``\ *fname* + Full logging, including ``debug_print``. + + *prefix*\ ``:``\ *fname* + Conditional logging. + Multiple prefixes can be specified, comma-separated. + Only sections whose names match the prefix will be logged. + + ``PYPYLOG``\ =\ ``jit-log-opt,jit-backend:``\ *logfile* will + generate a log suitable for *jitviewer*, a tool for debugging + performance issues under PyPy. + +.. include:: ../gc_info.rst + :start-line: 7 + SEE ALSO ======== diff --git a/pypy/doc/release-1.8.0.rst b/pypy/doc/release-1.8.0.rst new file mode 100644 --- /dev/null +++ b/pypy/doc/release-1.8.0.rst @@ -0,0 +1,98 @@ +============================ +PyPy 1.8 - business as usual +============================ + +We're pleased to announce the 1.8 release of PyPy. As usual, this +release brings a lot of bugfixes, together with performance and memory +improvements over the 1.7 release. The main highlight of the release +is the introduction of `list strategies`_ which makes homogeneous lists +more efficient both in terms of performance and memory. This release +also upgrades us from Python 2.7.1 compatibility to 2.7.2. Otherwise +it's "business as usual" in the sense that performance improved +roughly 10% on average since the previous release. + +You can download the PyPy 1.8 release here: + + http://pypy.org/download.html + +.. _`list strategies`: http://morepypy.blogspot.com/2011/10/more-compact-lists-with-list-strategies.html + +What is PyPy? +============= + +PyPy is a very compliant Python interpreter, almost a drop-in replacement for +CPython 2.7. It's fast (`pypy 1.8 and cpython 2.7.1`_ performance comparison) +due to its integrated tracing JIT compiler. + +This release supports x86 machines running Linux 32/64, Mac OS X 32/64 or +Windows 32. Windows 64 work has been stalled; we would welcome a volunteer +to handle that. + +.. _`pypy 1.8 and cpython 2.7.1`: http://speed.pypy.org + + +Highlights +========== + +* List strategies. Now lists that contain only ints or only floats should + be as efficient as storing them in a binary-packed array. It also improves + the JIT performance in places that use such lists. There are also special + strategies for unicode and string lists. + +* As usual, numerous performance improvements. There are many examples + of Python constructs that should now be faster; too many to list here. + +* Bugfixes and compatibility fixes with CPython. + +* Windows fixes. + +* NumPy effort progress; for the exact list of things that have been done, + consult the `numpy status page`_. A tentative list of things that have + been done: + + * multi dimensional arrays + + * various sizes of dtypes + + * a lot of ufuncs + + * a lot of other minor changes + + Right now the `numpy` module is available under both `numpy` and `numpypy` + names. However, because it's incomplete, you have to `import numpypy` first + before doing any imports from `numpy`. + +* New JIT hooks that allow you to hook into the JIT process from your Python + program. There is a `brief overview`_ of what they offer. + +* Standard library upgrade from 2.7.1 to 2.7.2. + +Ongoing work +============ + +As usual, there is quite a bit of ongoing work that either didn't make it to +the release or is not ready yet.
Highlights include: + +* Non-x86 backends for the JIT: ARMv7 (almost ready) and PPC64 (in progress) + +* Specialized type instances - allocate instances as efficient as C structs, + including type specialization + +* More numpy work + +* Since the last release there was a significant breakthrough in PyPy's + fundraising. We now have enough funds to work on first stages of `numpypy`_ + and `py3k`_. We would like to thank again to everyone who donated. + +* It's also probably worth noting, we're considering donations for the + Software Transactional Memory project. You can read more about `our plans`_ + +Cheers, +The PyPy Team + +.. _`brief overview`: http://doc.pypy.org/en/latest/jit-hooks.html +.. _`numpy status page`: http://buildbot.pypy.org/numpy-status/latest.html +.. _`numpy status update blog report`: http://morepypy.blogspot.com/2012/01/numpypy-status-update.html +.. _`numpypy`: http://pypy.org/numpydonate.html +.. _`py3k`: http://pypy.org/py3donate.html +.. _`our plans`: http://morepypy.blogspot.com/2012/01/transactional-memory-ii.html diff --git a/pypy/interpreter/astcompiler/optimize.py b/pypy/interpreter/astcompiler/optimize.py --- a/pypy/interpreter/astcompiler/optimize.py +++ b/pypy/interpreter/astcompiler/optimize.py @@ -296,8 +296,7 @@ # narrow builds will return a surrogate. In both # the cases skip the optimization in order to # produce compatible pycs. - if (self.space.isinstance_w(w_obj, self.space.w_unicode) - and + if (self.space.isinstance_w(w_obj, self.space.w_unicode) and self.space.isinstance_w(w_const, self.space.w_unicode)): unistr = self.space.unicode_w(w_const) if len(unistr) == 1: @@ -305,7 +304,7 @@ else: ch = 0 if (ch > 0xFFFF or - (MAXUNICODE == 0xFFFF and 0xD800 <= ch <= 0xDFFFF)): + (MAXUNICODE == 0xFFFF and 0xD800 <= ch <= 0xDFFF)): return subs return ast.Const(w_const, subs.lineno, subs.col_offset) diff --git a/pypy/interpreter/astcompiler/test/test_compiler.py b/pypy/interpreter/astcompiler/test/test_compiler.py --- a/pypy/interpreter/astcompiler/test/test_compiler.py +++ b/pypy/interpreter/astcompiler/test/test_compiler.py @@ -874,7 +874,7 @@ # Just checking this doesn't crash out self.count_instructions(source) - def test_const_fold_unicode_subscr(self): + def test_const_fold_unicode_subscr(self, monkeypatch): source = """def f(): return "abc"[0] """ @@ -889,6 +889,23 @@ assert counts == {ops.LOAD_CONST: 2, ops.BINARY_SUBSCR: 1, ops.RETURN_VALUE: 1} + monkeypatch.setattr(optimize, "MAXUNICODE", 0xFFFF) + source = """def f(): + return u"\uE01F"[0] + """ + counts = self.count_instructions(source) + assert counts == {ops.LOAD_CONST: 1, ops.RETURN_VALUE: 1} + monkeypatch.undo() + + # getslice is not yet optimized. + # Still, check a case which yields the empty string. + source = """def f(): + return u"abc"[:0] + """ + counts = self.count_instructions(source) + assert counts == {ops.LOAD_CONST: 2, ops.SLICE+2: 1, + ops.RETURN_VALUE: 1} + def test_remove_dead_code(self): source = """def f(x): return 5 diff --git a/pypy/interpreter/baseobjspace.py b/pypy/interpreter/baseobjspace.py --- a/pypy/interpreter/baseobjspace.py +++ b/pypy/interpreter/baseobjspace.py @@ -1334,6 +1334,15 @@ assert False, "unicode_w was called with a bytes string" return w_obj.unicode_w(self) + def unicode0_w(self, w_obj): + "Like unicode_w, but rejects strings with NUL bytes." 
+ from pypy.rlib import rstring + result = w_obj.unicode_w(self) + if u'\x00' in result: + raise OperationError(self.w_TypeError, self.wrap( + 'argument must be a unicode string without NUL characters')) + return rstring.assert_str0(result) + def realunicode_w(self, w_obj): # Like unicode_w, but only works if w_obj is really of type # 'unicode'. @@ -1627,6 +1636,9 @@ 'UnicodeEncodeError', 'UnicodeDecodeError', ] + +if sys.platform.startswith("win"): + ObjSpace.ExceptionTable += ['WindowsError'] ## Irregular part of the interface: # diff --git a/pypy/interpreter/pyframe.py b/pypy/interpreter/pyframe.py --- a/pypy/interpreter/pyframe.py +++ b/pypy/interpreter/pyframe.py @@ -60,11 +60,10 @@ self.pycode = code eval.Frame.__init__(self, space, w_globals) self.locals_stack_w = [None] * (code.co_nlocals + code.co_stacksize) - self.nlocals = code.co_nlocals self.valuestackdepth = code.co_nlocals self.lastblock = None make_sure_not_resized(self.locals_stack_w) - check_nonneg(self.nlocals) + check_nonneg(self.valuestackdepth) # if space.config.objspace.honor__builtins__: self.builtin = space.builtin.pick_builtin(w_globals) @@ -144,8 +143,8 @@ def execute_frame(self, w_inputvalue=None, operr=None): """Execute this frame. Main entry point to the interpreter. The optional arguments are there to handle a generator's frame: - w_inputvalue is for generator.send()) and operr is for - generator.throw()). + w_inputvalue is for generator.send() and operr is for + generator.throw(). """ # the following 'assert' is an annotation hint: it hides from # the annotator all methods that are defined in PyFrame but @@ -195,7 +194,7 @@ def popvalue(self): depth = self.valuestackdepth - 1 - assert depth >= self.nlocals, "pop from empty value stack" + assert depth >= self.pycode.co_nlocals, "pop from empty value stack" w_object = self.locals_stack_w[depth] self.locals_stack_w[depth] = None self.valuestackdepth = depth @@ -223,7 +222,7 @@ def peekvalues(self, n): values_w = [None] * n base = self.valuestackdepth - n - assert base >= self.nlocals + assert base >= self.pycode.co_nlocals while True: n -= 1 if n < 0: @@ -235,7 +234,8 @@ def dropvalues(self, n): n = hint(n, promote=True) finaldepth = self.valuestackdepth - n - assert finaldepth >= self.nlocals, "stack underflow in dropvalues()" + assert finaldepth >= self.pycode.co_nlocals, ( + "stack underflow in dropvalues()") while True: n -= 1 if n < 0: @@ -267,13 +267,15 @@ # Contrast this with CPython where it's PEEK(-1). 
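At application level, the str0_w/unicode0_w helpers added to baseobjspace.py above (together with the check_str_without_nul option documented earlier) surface roughly as follows. This is only a sketch based on the documented behaviour; the exact error message is not fixed by the diff::

    import os

    try:
        os.open('data\x00.txt', os.O_RDONLY)   # embedded NUL in the file name
    except TypeError, e:
        print 'rejected:', e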
index_from_top = hint(index_from_top, promote=True) index = self.valuestackdepth + ~index_from_top - assert index >= self.nlocals, "peek past the bottom of the stack" + assert index >= self.pycode.co_nlocals, ( + "peek past the bottom of the stack") return self.locals_stack_w[index] def settopvalue(self, w_object, index_from_top=0): index_from_top = hint(index_from_top, promote=True) index = self.valuestackdepth + ~index_from_top - assert index >= self.nlocals, "settop past the bottom of the stack" + assert index >= self.pycode.co_nlocals, ( + "settop past the bottom of the stack") self.locals_stack_w[index] = w_object @jit.unroll_safe @@ -320,12 +322,13 @@ else: f_lineno = self.f_lineno - values_w = self.locals_stack_w[self.nlocals:self.valuestackdepth] + nlocals = self.pycode.co_nlocals + values_w = self.locals_stack_w[nlocals:self.valuestackdepth] w_valuestack = maker.slp_into_tuple_with_nulls(space, values_w) w_blockstack = nt([block._get_state_(space) for block in self.get_blocklist()]) w_fastlocals = maker.slp_into_tuple_with_nulls( - space, self.locals_stack_w[:self.nlocals]) + space, self.locals_stack_w[:nlocals]) if self.last_exception is None: w_exc_value = space.w_None w_tb = space.w_None @@ -442,7 +445,7 @@ """Initialize the fast locals from a list of values, where the order is according to self.pycode.signature().""" scope_len = len(scope_w) - if scope_len > self.nlocals: + if scope_len > self.pycode.co_nlocals: raise ValueError, "new fastscope is longer than the allocated area" # don't assign directly to 'locals_stack_w[:scope_len]' to be # virtualizable-friendly @@ -456,7 +459,7 @@ pass def getfastscopelength(self): - return self.nlocals + return self.pycode.co_nlocals def getclosure(self): return None diff --git a/pypy/interpreter/test/test_objspace.py b/pypy/interpreter/test/test_objspace.py --- a/pypy/interpreter/test/test_objspace.py +++ b/pypy/interpreter/test/test_objspace.py @@ -170,6 +170,14 @@ res = self.space.interp_w(Function, w(None), can_be_None=True) assert res is None + def test_str0_w(self): + space = self.space + w = space.wrap + assert space.str0_w(w("123")) == "123" + exc = space.raises_w(space.w_TypeError, space.str0_w, w("123\x004")) + assert space.unicode0_w(w(u"123")) == u"123" + exc = space.raises_w(space.w_TypeError, space.unicode0_w, w(u"123\x004")) + def test_getindex_w(self): w_instance1 = self.space.appexec([], """(): class X(object): diff --git a/pypy/jit/backend/llsupport/gc.py b/pypy/jit/backend/llsupport/gc.py --- a/pypy/jit/backend/llsupport/gc.py +++ b/pypy/jit/backend/llsupport/gc.py @@ -1,7 +1,6 @@ import os from pypy.rlib import rgc from pypy.rlib.objectmodel import we_are_translated, specialize -from pypy.rlib.debug import fatalerror from pypy.rlib.rarithmetic import ovfcheck from pypy.rpython.lltypesystem import lltype, llmemory, rffi, rclass, rstr from pypy.rpython.lltypesystem import llgroup diff --git a/pypy/jit/backend/test/runner_test.py b/pypy/jit/backend/test/runner_test.py --- a/pypy/jit/backend/test/runner_test.py +++ b/pypy/jit/backend/test/runner_test.py @@ -2221,6 +2221,35 @@ print 'step 4 ok' print '-'*79 + def test_guard_not_invalidated_and_label(self): + # test that the guard_not_invalidated reserves enough room before + # the label. 
If it doesn't, then in this example after we invalidate + # the guard, jumping to the label will hit the invalidation code too + cpu = self.cpu + i0 = BoxInt() + faildescr = BasicFailDescr(1) + labeldescr = TargetToken() + ops = [ + ResOperation(rop.GUARD_NOT_INVALIDATED, [], None, descr=faildescr), + ResOperation(rop.LABEL, [i0], None, descr=labeldescr), + ResOperation(rop.FINISH, [i0], None, descr=BasicFailDescr(3)), + ] + ops[0].setfailargs([]) + looptoken = JitCellToken() + self.cpu.compile_loop([i0], ops, looptoken) + # mark as failing + self.cpu.invalidate_loop(looptoken) + # attach a bridge + i2 = BoxInt() + ops = [ + ResOperation(rop.JUMP, [ConstInt(333)], None, descr=labeldescr), + ] + self.cpu.compile_bridge(faildescr, [], ops, looptoken) + # run: must not be caught in an infinite loop + fail = self.cpu.execute_token(looptoken, 16) + assert fail.identifier == 3 + assert self.cpu.get_latest_value_int(0) == 333 + # pure do_ / descr features def test_do_operations(self): diff --git a/pypy/jit/backend/x86/assembler.py b/pypy/jit/backend/x86/assembler.py --- a/pypy/jit/backend/x86/assembler.py +++ b/pypy/jit/backend/x86/assembler.py @@ -33,7 +33,7 @@ from pypy.jit.backend.x86.support import values_array from pypy.jit.backend.x86 import support from pypy.rlib.debug import (debug_print, debug_start, debug_stop, - have_debug_prints) + have_debug_prints, fatalerror_notb) from pypy.rlib import rgc from pypy.rlib.clibffi import FFI_DEFAULT_ABI from pypy.jit.backend.x86.jump import remap_frame_layout @@ -104,6 +104,7 @@ self._debug = v def setup_once(self): + self._check_sse2() # the address of the function called by 'new' gc_ll_descr = self.cpu.gc_ll_descr gc_ll_descr.initialize() @@ -161,6 +162,28 @@ debug_print(prefix + ':' + str(struct.i)) debug_stop('jit-backend-counts') + _CHECK_SSE2_FUNC_PTR = lltype.Ptr(lltype.FuncType([], lltype.Signed)) + + def _check_sse2(self): + if WORD == 8: + return # all x86-64 CPUs support SSE2 + if not self.cpu.supports_floats: + return # the CPU doesn't support float, so we don't need SSE2 + # + from pypy.jit.backend.x86.detect_sse2 import INSNS + mc = codebuf.MachineCodeBlockWrapper() + for c in INSNS: + mc.writechar(c) + rawstart = mc.materialize(self.cpu.asmmemmgr, []) + fnptr = rffi.cast(self._CHECK_SSE2_FUNC_PTR, rawstart) + features = fnptr() + if bool(features & (1<<25)) and bool(features & (1<<26)): + return # CPU supports SSE2 + fatalerror_notb( + "This version of PyPy was compiled for a x86 CPU supporting SSE2.\n" + "Your CPU is too old. 
Please translate a PyPy with the option:\n" + "--jit-backend=x86-without-sse2") + def _build_float_constants(self): datablockwrapper = MachineDataBlockWrapper(self.cpu.asmmemmgr, []) float_constants = datablockwrapper.malloc_aligned(32, alignment=16) diff --git a/pypy/jit/backend/x86/detect_sse2.py b/pypy/jit/backend/x86/detect_sse2.py --- a/pypy/jit/backend/x86/detect_sse2.py +++ b/pypy/jit/backend/x86/detect_sse2.py @@ -1,17 +1,18 @@ import autopath -from pypy.rpython.lltypesystem import lltype, rffi -from pypy.rlib.rmmap import alloc, free +INSNS = ("\xB8\x01\x00\x00\x00" # MOV EAX, 1 + "\x53" # PUSH EBX + "\x0F\xA2" # CPUID + "\x5B" # POP EBX + "\x92" # XCHG EAX, EDX + "\xC3") # RET def detect_sse2(): + from pypy.rpython.lltypesystem import lltype, rffi + from pypy.rlib.rmmap import alloc, free data = alloc(4096) pos = 0 - for c in ("\xB8\x01\x00\x00\x00" # MOV EAX, 1 - "\x53" # PUSH EBX - "\x0F\xA2" # CPUID - "\x5B" # POP EBX - "\x92" # XCHG EAX, EDX - "\xC3"): # RET + for c in INSNS: data[pos] = c pos += 1 fnptr = rffi.cast(lltype.Ptr(lltype.FuncType([], lltype.Signed)), data) diff --git a/pypy/jit/backend/x86/regalloc.py b/pypy/jit/backend/x86/regalloc.py --- a/pypy/jit/backend/x86/regalloc.py +++ b/pypy/jit/backend/x86/regalloc.py @@ -165,7 +165,6 @@ self.jump_target_descr = None self.close_stack_struct = 0 self.final_jump_op = None - self.min_bytes_before_label = 0 def _prepare(self, inputargs, operations, allgcrefs): self.fm = X86FrameManager() @@ -199,8 +198,13 @@ operations = self._prepare(inputargs, operations, allgcrefs) self._update_bindings(arglocs, inputargs) self.param_depth = prev_depths[1] + self.min_bytes_before_label = 0 return operations + def ensure_next_label_is_at_least_at_position(self, at_least_position): + self.min_bytes_before_label = max(self.min_bytes_before_label, + at_least_position) + def reserve_param(self, n): self.param_depth = max(self.param_depth, n) @@ -468,7 +472,11 @@ self.assembler.mc.mark_op(None) # end of the loop def flush_loop(self): - # rare case: if the loop is too short, pad with NOPs + # rare case: if the loop is too short, or if we are just after + # a GUARD_NOT_INVALIDATED, pad with NOPs. Important! This must + # be called to ensure that there are enough bytes produced, + # because GUARD_NOT_INVALIDATED or redirect_call_assembler() + # will maybe overwrite them. mc = self.assembler.mc while mc.get_relative_pos() < self.min_bytes_before_label: mc.NOP() @@ -558,7 +566,15 @@ def consider_guard_no_exception(self, op): self.perform_guard(op, [], None) - consider_guard_not_invalidated = consider_guard_no_exception + def consider_guard_not_invalidated(self, op): + mc = self.assembler.mc + n = mc.get_relative_pos() + self.perform_guard(op, [], None) + assert n == mc.get_relative_pos() + # ensure that the next label is at least 5 bytes farther than + # the current position. Otherwise, when invalidating the guard, + # we would overwrite randomly the next label's position. 
+ self.ensure_next_label_is_at_least_at_position(n + 5) def consider_guard_exception(self, op): loc = self.rm.make_sure_var_in_reg(op.getarg(0)) diff --git a/pypy/jit/codewriter/flatten.py b/pypy/jit/codewriter/flatten.py --- a/pypy/jit/codewriter/flatten.py +++ b/pypy/jit/codewriter/flatten.py @@ -162,7 +162,9 @@ if len(block.exits) == 1: # A single link, fall-through link = block.exits[0] - assert link.exitcase is None + assert link.exitcase in (None, False, True) + # the cases False or True should not really occur, but can show + # up in the manually hacked graphs for generators... self.make_link(link) # elif block.exitswitch is c_last_exception: diff --git a/pypy/jit/codewriter/policy.py b/pypy/jit/codewriter/policy.py --- a/pypy/jit/codewriter/policy.py +++ b/pypy/jit/codewriter/policy.py @@ -48,7 +48,7 @@ mod = func.__module__ or '?' if mod.startswith('pypy.rpython.module.'): return True - if mod.startswith('pypy.translator.'): # XXX wtf? + if mod == 'pypy.translator.goal.nanos': # more helpers return True return False diff --git a/pypy/jit/metainterp/optimizeopt/optimizer.py b/pypy/jit/metainterp/optimizeopt/optimizer.py --- a/pypy/jit/metainterp/optimizeopt/optimizer.py +++ b/pypy/jit/metainterp/optimizeopt/optimizer.py @@ -567,7 +567,7 @@ assert isinstance(descr, compile.ResumeGuardDescr) modifier = resume.ResumeDataVirtualAdder(descr, self.resumedata_memo) try: - newboxes = modifier.finish(self.values, self.pendingfields) + newboxes = modifier.finish(self, self.pendingfields) if len(newboxes) > self.metainterp_sd.options.failargs_limit: raise resume.TagOverflow except resume.TagOverflow: diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py @@ -7760,6 +7760,59 @@ """ self.optimize_loop(ops, expected) + def test_constant_failargs(self): + ops = """ + [p1, i2, i3] + setfield_gc(p1, ConstPtr(myptr), descr=nextdescr) + p16 = getfield_gc(p1, descr=nextdescr) + guard_true(i2) [p16, i3] + jump(p1, i3, i2) + """ + preamble = """ + [p1, i2, i3] + setfield_gc(p1, ConstPtr(myptr), descr=nextdescr) + guard_true(i2) [i3] + jump(p1, i3) + """ + expected = """ + [p1, i3] + guard_true(i3) [] + jump(p1, 1) + """ + self.optimize_loop(ops, expected, preamble) + + def test_issue1048(self): + ops = """ + [p1, i2, i3] + p16 = getfield_gc(p1, descr=nextdescr) + guard_true(i2) [p16] + setfield_gc(p1, ConstPtr(myptr), descr=nextdescr) + jump(p1, i3, i2) + """ + expected = """ + [p1, i3] + guard_true(i3) [] + jump(p1, 1) + """ + self.optimize_loop(ops, expected) + + def test_issue1048_ok(self): + ops = """ + [p1, i2, i3] + p16 = getfield_gc(p1, descr=nextdescr) + call(p16, descr=nonwritedescr) + guard_true(i2) [p16] + setfield_gc(p1, ConstPtr(myptr), descr=nextdescr) + jump(p1, i3, i2) + """ + expected = """ + [p1, i3] + call(ConstPtr(myptr), descr=nonwritedescr) + guard_true(i3) [] + jump(p1, 1) + """ + self.optimize_loop(ops, expected) + class TestLLtype(OptimizeOptTest, LLtypeMixin): pass diff --git a/pypy/jit/metainterp/resume.py b/pypy/jit/metainterp/resume.py --- a/pypy/jit/metainterp/resume.py +++ b/pypy/jit/metainterp/resume.py @@ -182,23 +182,22 @@ # env numbering - def number(self, values, snapshot): + def number(self, optimizer, snapshot): if snapshot is None: return lltype.nullptr(NUMBERING), {}, 0 if snapshot in self.numberings: numb, liveboxes, v = self.numberings[snapshot] return numb, 
liveboxes.copy(), v - numb1, liveboxes, v = self.number(values, snapshot.prev) + numb1, liveboxes, v = self.number(optimizer, snapshot.prev) n = len(liveboxes)-v boxes = snapshot.boxes length = len(boxes) numb = lltype.malloc(NUMBERING, length) for i in range(length): box = boxes[i] - value = values.get(box, None) - if value is not None: - box = value.get_key_box() + value = optimizer.getvalue(box) + box = value.get_key_box() if isinstance(box, Const): tagged = self.getconst(box) @@ -318,14 +317,14 @@ _, tagbits = untag(tagged) return tagbits == TAGVIRTUAL - def finish(self, values, pending_setfields=[]): + def finish(self, optimizer, pending_setfields=[]): # compute the numbering storage = self.storage # make sure that nobody attached resume data to this guard yet assert not storage.rd_numb snapshot = storage.rd_snapshot assert snapshot is not None # is that true? - numb, liveboxes_from_env, v = self.memo.number(values, snapshot) + numb, liveboxes_from_env, v = self.memo.number(optimizer, snapshot) self.liveboxes_from_env = liveboxes_from_env self.liveboxes = {} storage.rd_numb = numb @@ -341,23 +340,23 @@ liveboxes[i] = box else: assert tagbits == TAGVIRTUAL - value = values[box] + value = optimizer.getvalue(box) value.get_args_for_fail(self) for _, box, fieldbox, _ in pending_setfields: self.register_box(box) self.register_box(fieldbox) - value = values[fieldbox] + value = optimizer.getvalue(fieldbox) value.get_args_for_fail(self) - self._number_virtuals(liveboxes, values, v) + self._number_virtuals(liveboxes, optimizer, v) self._add_pending_fields(pending_setfields) storage.rd_consts = self.memo.consts dump_storage(storage, liveboxes) return liveboxes[:] - def _number_virtuals(self, liveboxes, values, num_env_virtuals): + def _number_virtuals(self, liveboxes, optimizer, num_env_virtuals): # !! 'liveboxes' is a list that is extend()ed in-place !! 
memo = self.memo new_liveboxes = [None] * memo.num_cached_boxes() @@ -397,7 +396,7 @@ memo.nvholes += length - len(vfieldboxes) for virtualbox, fieldboxes in vfieldboxes.iteritems(): num, _ = untag(self.liveboxes[virtualbox]) - value = values[virtualbox] + value = optimizer.getvalue(virtualbox) fieldnums = [self._gettagged(box) for box in fieldboxes] vinfo = value.make_virtual_info(self, fieldnums) diff --git a/pypy/jit/metainterp/test/test_ajit.py b/pypy/jit/metainterp/test/test_ajit.py --- a/pypy/jit/metainterp/test/test_ajit.py +++ b/pypy/jit/metainterp/test/test_ajit.py @@ -3706,6 +3706,18 @@ # here it works again self.check_operations_history(guard_class=0, record_known_class=1) + def test_generator(self): + def g(n): + yield n+1 + yield n+2 + yield n+3 + def f(n): + gen = g(n) + return gen.next() * gen.next() * gen.next() + res = self.interp_operations(f, [10]) + assert res == 11 * 12 * 13 + self.check_operations_history(int_add=3, int_mul=2) + class TestLLtype(BaseLLtypeTests, LLJitMixin): def test_tagged(self): diff --git a/pypy/jit/metainterp/test/test_resume.py b/pypy/jit/metainterp/test/test_resume.py --- a/pypy/jit/metainterp/test/test_resume.py +++ b/pypy/jit/metainterp/test/test_resume.py @@ -18,6 +18,19 @@ rd_virtuals = None rd_pendingfields = None + +class FakeOptimizer(object): + def __init__(self, values): + self.values = values + + def getvalue(self, box): + try: + value = self.values[box] + except KeyError: + value = self.values[box] = OptValue(box) + return value + + def test_tag(): assert tag(3, 1) == rffi.r_short(3<<2|1) assert tag(-3, 2) == rffi.r_short(-3<<2|2) @@ -500,7 +513,7 @@ capture_resumedata(fs, None, [], storage) memo = ResumeDataLoopMemo(FakeMetaInterpStaticData()) modifier = ResumeDataVirtualAdder(storage, memo) - liveboxes = modifier.finish({}) + liveboxes = modifier.finish(FakeOptimizer({})) metainterp = MyMetaInterp() b1t, b2t, b3t = [BoxInt(), BoxPtr(), BoxInt()] @@ -524,7 +537,7 @@ capture_resumedata(fs, [b4], [], storage) memo = ResumeDataLoopMemo(FakeMetaInterpStaticData()) modifier = ResumeDataVirtualAdder(storage, memo) - liveboxes = modifier.finish({}) + liveboxes = modifier.finish(FakeOptimizer({})) metainterp = MyMetaInterp() b1t, b2t, b3t, b4t = [BoxInt(), BoxPtr(), BoxInt(), BoxPtr()] @@ -553,10 +566,10 @@ memo = ResumeDataLoopMemo(FakeMetaInterpStaticData()) modifier = ResumeDataVirtualAdder(storage, memo) - liveboxes = modifier.finish({}) + liveboxes = modifier.finish(FakeOptimizer({})) modifier = ResumeDataVirtualAdder(storage2, memo) - liveboxes2 = modifier.finish({}) + liveboxes2 = modifier.finish(FakeOptimizer({})) metainterp = MyMetaInterp() @@ -617,7 +630,7 @@ memo = ResumeDataLoopMemo(FakeMetaInterpStaticData()) values = {b2: virtual_value(b2, b5, c4)} modifier = ResumeDataVirtualAdder(storage, memo) - liveboxes = modifier.finish(values) + liveboxes = modifier.finish(FakeOptimizer(values)) assert len(storage.rd_virtuals) == 1 assert storage.rd_virtuals[0].fieldnums == [tag(-1, TAGBOX), tag(0, TAGCONST)] @@ -628,7 +641,7 @@ values = {b2: virtual_value(b2, b4, v6), b6: v6} memo.clear_box_virtual_numbers() modifier = ResumeDataVirtualAdder(storage2, memo) - liveboxes2 = modifier.finish(values) + liveboxes2 = modifier.finish(FakeOptimizer(values)) assert len(storage2.rd_virtuals) == 2 assert storage2.rd_virtuals[0].fieldnums == [tag(len(liveboxes2)-1, TAGBOX), tag(-1, TAGVIRTUAL)] @@ -674,7 +687,7 @@ memo = ResumeDataLoopMemo(FakeMetaInterpStaticData()) values = {b2: virtual_value(b2, b5, c4)} modifier = ResumeDataVirtualAdder(storage, 
memo) - liveboxes = modifier.finish(values) + liveboxes = modifier.finish(FakeOptimizer(values)) assert len(storage.rd_virtuals) == 1 assert storage.rd_virtuals[0].fieldnums == [tag(-1, TAGBOX), tag(0, TAGCONST)] @@ -684,7 +697,7 @@ capture_resumedata(fs, None, [], storage2) values[b4] = virtual_value(b4, b6, c4) modifier = ResumeDataVirtualAdder(storage2, memo) - liveboxes = modifier.finish(values) + liveboxes = modifier.finish(FakeOptimizer(values)) assert len(storage2.rd_virtuals) == 2 assert storage2.rd_virtuals[1].fieldnums == storage.rd_virtuals[0].fieldnums assert storage2.rd_virtuals[1] is storage.rd_virtuals[0] @@ -703,7 +716,7 @@ v1.setfield(LLtypeMixin.nextdescr, v2) values = {b1: v1, b2: v2} modifier = ResumeDataVirtualAdder(storage, memo) - liveboxes = modifier.finish(values) + liveboxes = modifier.finish(FakeOptimizer(values)) assert liveboxes == [b3] assert len(storage.rd_virtuals) == 2 assert storage.rd_virtuals[0].fieldnums == [tag(-1, TAGBOX), @@ -776,7 +789,7 @@ memo = ResumeDataLoopMemo(FakeMetaInterpStaticData()) - numb, liveboxes, v = memo.number({}, snap1) + numb, liveboxes, v = memo.number(FakeOptimizer({}), snap1) assert v == 0 assert liveboxes == {b1: tag(0, TAGBOX), b2: tag(1, TAGBOX), @@ -788,7 +801,7 @@ tag(0, TAGBOX), tag(2, TAGINT)] assert not numb.prev.prev - numb2, liveboxes2, v = memo.number({}, snap2) + numb2, liveboxes2, v = memo.number(FakeOptimizer({}), snap2) assert v == 0 assert liveboxes2 == {b1: tag(0, TAGBOX), b2: tag(1, TAGBOX), @@ -813,7 +826,8 @@ return self.virt # renamed - numb3, liveboxes3, v = memo.number({b3: FakeValue(False, c4)}, snap3) + numb3, liveboxes3, v = memo.number(FakeOptimizer({b3: FakeValue(False, c4)}), + snap3) assert v == 0 assert liveboxes3 == {b1: tag(0, TAGBOX), b2: tag(1, TAGBOX)} @@ -825,7 +839,8 @@ env4 = [c3, b4, b1, c3] snap4 = Snapshot(snap, env4) - numb4, liveboxes4, v = memo.number({b4: FakeValue(True, b4)}, snap4) + numb4, liveboxes4, v = memo.number(FakeOptimizer({b4: FakeValue(True, b4)}), + snap4) assert v == 1 assert liveboxes4 == {b1: tag(0, TAGBOX), b2: tag(1, TAGBOX), @@ -837,8 +852,9 @@ env5 = [b1, b4, b5] snap5 = Snapshot(snap4, env5) - numb5, liveboxes5, v = memo.number({b4: FakeValue(True, b4), - b5: FakeValue(True, b5)}, snap5) + numb5, liveboxes5, v = memo.number(FakeOptimizer({b4: FakeValue(True, b4), + b5: FakeValue(True, b5)}), + snap5) assert v == 2 assert liveboxes5 == {b1: tag(0, TAGBOX), b2: tag(1, TAGBOX), @@ -940,7 +956,7 @@ storage = make_storage(b1s, b2s, b3s) memo = ResumeDataLoopMemo(FakeMetaInterpStaticData()) modifier = ResumeDataVirtualAdder(storage, memo) - liveboxes = modifier.finish({}) + liveboxes = modifier.finish(FakeOptimizer({})) assert storage.rd_snapshot is None cpu = MyCPU([]) reader = ResumeDataDirectReader(MyMetaInterp(cpu), storage) @@ -954,14 +970,14 @@ storage = make_storage(b1s, b2s, b3s) memo = ResumeDataLoopMemo(FakeMetaInterpStaticData()) modifier = ResumeDataVirtualAdder(storage, memo) - modifier.finish({}) + modifier.finish(FakeOptimizer({})) assert len(memo.consts) == 2 assert storage.rd_consts is memo.consts b1s, b2s, b3s = [ConstInt(sys.maxint), ConstInt(2**17), ConstInt(-65)] storage2 = make_storage(b1s, b2s, b3s) modifier2 = ResumeDataVirtualAdder(storage2, memo) - modifier2.finish({}) + modifier2.finish(FakeOptimizer({})) assert len(memo.consts) == 3 assert storage2.rd_consts is memo.consts @@ -1022,7 +1038,7 @@ val = FakeValue() values = {b1s: val, b2s: val} - liveboxes = modifier.finish(values) + liveboxes = modifier.finish(FakeOptimizer(values)) assert 
storage.rd_snapshot is None b1t, b3t = [BoxInt(11), BoxInt(33)] newboxes = _resume_remap(liveboxes, [b1_2, b3s], b1t, b3t) @@ -1043,7 +1059,7 @@ storage = make_storage(b1s, b2s, b3s) memo = ResumeDataLoopMemo(FakeMetaInterpStaticData()) modifier = ResumeDataVirtualAdder(storage, memo) - liveboxes = modifier.finish({}) + liveboxes = modifier.finish(FakeOptimizer({})) b2t, b3t = [BoxPtr(demo55o), BoxInt(33)] newboxes = _resume_remap(liveboxes, [b2s, b3s], b2t, b3t) metainterp = MyMetaInterp() @@ -1086,7 +1102,7 @@ values = {b2s: v2, b4s: v4} liveboxes = [] - modifier._number_virtuals(liveboxes, values, 0) + modifier._number_virtuals(liveboxes, FakeOptimizer(values), 0) storage.rd_consts = memo.consts[:] storage.rd_numb = None # resume @@ -1156,7 +1172,7 @@ modifier.register_virtual_fields(b2s, [b4s, c1s]) liveboxes = [] values = {b2s: v2} - modifier._number_virtuals(liveboxes, values, 0) + modifier._number_virtuals(liveboxes, FakeOptimizer(values), 0) dump_storage(storage, liveboxes) storage.rd_consts = memo.consts[:] storage.rd_numb = None @@ -1203,7 +1219,7 @@ v2.setfield(LLtypeMixin.bdescr, OptValue(b4s)) modifier.register_virtual_fields(b2s, [c1s, b4s]) liveboxes = [] - modifier._number_virtuals(liveboxes, {b2s: v2}, 0) + modifier._number_virtuals(liveboxes, FakeOptimizer({b2s: v2}), 0) dump_storage(storage, liveboxes) storage.rd_consts = memo.consts[:] storage.rd_numb = None @@ -1249,7 +1265,7 @@ values = {b4s: v4, b2s: v2} liveboxes = [] - modifier._number_virtuals(liveboxes, values, 0) + modifier._number_virtuals(liveboxes, FakeOptimizer(values), 0) assert liveboxes == [b2s, b4s] or liveboxes == [b4s, b2s] modifier._add_pending_fields([(LLtypeMixin.nextdescr, b2s, b4s, -1)]) storage.rd_consts = memo.consts[:] diff --git a/pypy/jit/metainterp/warmspot.py b/pypy/jit/metainterp/warmspot.py --- a/pypy/jit/metainterp/warmspot.py +++ b/pypy/jit/metainterp/warmspot.py @@ -453,7 +453,7 @@ if sys.stdout == sys.__stdout__: import pdb; pdb.post_mortem(tb) raise e.__class__, e, tb - fatalerror('~~~ Crash in JIT! %s' % (e,), traceback=True) + fatalerror('~~~ Crash in JIT! 
%s' % (e,)) crash_in_jit._dont_inline_ = True if self.translator.rtyper.type_system.name == 'lltypesystem': diff --git a/pypy/module/_demo/test/test_sieve.py b/pypy/module/_demo/test/test_sieve.py new file mode 100644 --- /dev/null +++ b/pypy/module/_demo/test/test_sieve.py @@ -0,0 +1,12 @@ +from pypy.conftest import gettestobjspace + + +class AppTestSieve: + def setup_class(cls): + cls.space = gettestobjspace(usemodules=('_demo',)) + + def test_sieve(self): + import _demo + lst = _demo.sieve(100) + assert lst == [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, + 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97] diff --git a/pypy/module/_ffi/test/test__ffi.py b/pypy/module/_ffi/test/test__ffi.py --- a/pypy/module/_ffi/test/test__ffi.py +++ b/pypy/module/_ffi/test/test__ffi.py @@ -190,6 +190,7 @@ def test_convert_strings_to_char_p(self): """ + DLLEXPORT long mystrlen(char* s) { long len = 0; @@ -215,6 +216,7 @@ def test_convert_unicode_to_unichar_p(self): """ #include + DLLEXPORT long mystrlen_u(wchar_t* s) { long len = 0; @@ -241,6 +243,7 @@ def test_keepalive_temp_buffer(self): """ + DLLEXPORT char* do_nothing(char* s) { return s; @@ -525,5 +528,7 @@ from _ffi import CDLL, types libfoo = CDLL(self.libfoo_name) raises(AttributeError, "libfoo.getfunc('I_do_not_exist', [], types.void)") + if self.iswin32: + skip("unix specific") libnone = CDLL(None) raises(AttributeError, "libnone.getfunc('I_do_not_exist', [], types.void)") diff --git a/pypy/module/_file/test/test_file.py b/pypy/module/_file/test/test_file.py --- a/pypy/module/_file/test/test_file.py +++ b/pypy/module/_file/test/test_file.py @@ -265,6 +265,13 @@ if option.runappdirect: py.test.skip("works with internals of _file impl on py.py") + import platform + if platform.system() == 'Windows': + # XXX This test crashes until someone implements something like + # XXX verify_fd from + # XXX http://hg.python.org/cpython/file/80ddbd822227/Modules/posixmodule.c#l434 + # XXX and adds it to fopen + assert False state = [0] def read(fd, n=None): diff --git a/pypy/module/_io/__init__.py b/pypy/module/_io/__init__.py --- a/pypy/module/_io/__init__.py +++ b/pypy/module/_io/__init__.py @@ -28,6 +28,7 @@ } def init(self, space): + MixedModule.init(self, space) w_UnsupportedOperation = space.call_function( space.w_type, space.wrap('UnsupportedOperation'), @@ -35,3 +36,9 @@ space.newdict()) space.setattr(self, space.wrap('UnsupportedOperation'), w_UnsupportedOperation) + + def shutdown(self, space): + # at shutdown, flush all open streams. Ignore I/O errors. 
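In app-level terms, the flush-at-exit machinery introduced in this changeset amounts to the sketch below (a rough illustration with invented names, not the RPython code: the real implementation keeps weak references via rlib.rweakref so that registering a stream never keeps it alive):

import io, weakref

class FlushRegistry(object):
    def __init__(self):
        # weak references only: streams that get collected simply drop out
        self._streams = weakref.WeakSet()

    def add(self, stream):
        self._streams.add(stream)

    def flush_all(self):
        for stream in list(self._streams):
            try:
                stream.flush()
            except IOError:
                pass        # shutdown must not fail because one stream is broken

registry = FlushRegistry()
buf = io.StringIO()
registry.add(buf)
buf.write(u'42')
registry.flush_all()        # the real hook runs once, at interpreter shutdown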
+ from pypy.module._io.interp_iobase import get_autoflushher + get_autoflushher(space).flush_all(space) + diff --git a/pypy/module/_io/interp_iobase.py b/pypy/module/_io/interp_iobase.py --- a/pypy/module/_io/interp_iobase.py +++ b/pypy/module/_io/interp_iobase.py @@ -5,6 +5,8 @@ from pypy.interpreter.gateway import interp2app from pypy.interpreter.error import OperationError, operationerrfmt from pypy.rlib.rstring import StringBuilder +from pypy.rlib import rweakref + DEFAULT_BUFFER_SIZE = 8192 @@ -43,6 +45,8 @@ self.space = space self.w_dict = space.newdict() self.__IOBase_closed = False + self.streamholder = None # needed by AutoFlusher + get_autoflushher(space).add(self) def getdict(self, space): return self.w_dict @@ -101,6 +105,7 @@ space.call_method(self, "flush") finally: self.__IOBase_closed = True + get_autoflushher(space).remove(self) def flush_w(self, space): if self._CLOSED(): @@ -307,3 +312,52 @@ read = interp2app(W_RawIOBase.read_w), readall = interp2app(W_RawIOBase.readall_w), ) + + +# ------------------------------------------------------------ +# functions to make sure that all streams are flushed on exit +# ------------------------------------------------------------ + +class StreamHolder(object): + + def __init__(self, w_iobase): + self.w_iobase_ref = rweakref.ref(w_iobase) + w_iobase.autoflusher = self + + def autoflush(self, space): + w_iobase = self.w_iobase_ref() + if w_iobase is not None: + space.call_method(w_iobase, 'flush') # XXX: ignore IOErrors? + + +class AutoFlusher(object): + + def __init__(self, space): + self.streams = {} + + def add(self, w_iobase): + assert w_iobase.streamholder is None + holder = StreamHolder(w_iobase) + w_iobase.streamholder = holder + self.streams[holder] = None + + def remove(self, w_iobase): + holder = w_iobase.streamholder + if holder is not None: + del self.streams[holder] + + def flush_all(self, space): + while self.streams: + for streamholder in self.streams.keys(): + try: + del self.streams[streamholder] + except KeyError: + pass # key was removed in the meantime + else: + streamholder.autoflush(space) + + +def get_autoflushher(space): + return space.fromcache(AutoFlusher) + + diff --git a/pypy/module/_io/test/test_fileio.py b/pypy/module/_io/test/test_fileio.py --- a/pypy/module/_io/test/test_fileio.py +++ b/pypy/module/_io/test/test_fileio.py @@ -134,7 +134,10 @@ assert a == b'a\nbxxxxxxx' def test_nonblocking_read(self): - import os, fcntl + try: + import os, fcntl + except ImportError: + skip("need fcntl to set nonblocking mode") r_fd, w_fd = os.pipe() # set nonblocking fcntl.fcntl(r_fd, fcntl.F_SETFL, os.O_NONBLOCK) @@ -157,3 +160,20 @@ f.close() assert repr(f) == "<_io.FileIO [closed]>" +def test_flush_at_exit(): + from pypy import conftest + from pypy.tool.option import make_config, make_objspace + from pypy.tool.udir import udir + + tmpfile = udir.join('test_flush_at_exit') + config = make_config(conftest.option) + space = make_objspace(config) + space.appexec([space.wrap(str(tmpfile))], """(tmpfile): + import io + f = io.open(tmpfile, 'w') + f.write('42') + # no flush() and no close() + import sys; sys._keepalivesomewhereobscure = f + """) + space.finish() + assert tmpfile.read() == '42' diff --git a/pypy/module/cpyext/api.py b/pypy/module/cpyext/api.py --- a/pypy/module/cpyext/api.py +++ b/pypy/module/cpyext/api.py @@ -23,6 +23,7 @@ from pypy.interpreter.function import StaticMethod from pypy.objspace.std.sliceobject import W_SliceObject from pypy.module.__builtin__.descriptor import W_Property +from 
pypy.module.__builtin__.interp_classobj import W_ClassObject from pypy.module.__builtin__.interp_memoryview import W_MemoryView from pypy.rlib.entrypoint import entrypoint from pypy.rlib.unroll import unrolling_iterable @@ -382,6 +383,8 @@ "Dict": "space.w_dict", "Tuple": "space.w_tuple", "List": "space.w_list", + "Set": "space.w_set", + "Int": "space.w_int", "Bool": "space.w_bool", "Float": "space.w_float", "Long": "space.w_int", @@ -395,6 +398,7 @@ 'Module': 'space.gettypeobject(Module.typedef)', 'Property': 'space.gettypeobject(W_Property.typedef)', 'Slice': 'space.gettypeobject(W_SliceObject.typedef)', + 'Class': 'space.gettypeobject(W_ClassObject.typedef)', 'StaticMethod': 'space.gettypeobject(StaticMethod.typedef)', 'CFunction': 'space.gettypeobject(cpyext.methodobject.W_PyCFunctionObject.typedef)', 'WrapperDescr': 'space.gettypeobject(cpyext.methodobject.W_PyCMethodObject.typedef)' @@ -430,16 +434,16 @@ ('buf', rffi.VOIDP), ('obj', PyObject), ('len', Py_ssize_t), - # ('itemsize', Py_ssize_t), + ('itemsize', Py_ssize_t), - # ('readonly', lltype.Signed), - # ('ndim', lltype.Signed), - # ('format', rffi.CCHARP), - # ('shape', Py_ssize_tP), - # ('strides', Py_ssize_tP), - # ('suboffets', Py_ssize_tP), - # ('smalltable', rffi.CFixedArray(Py_ssize_t, 2)), - # ('internal', rffi.VOIDP) + ('readonly', lltype.Signed), + ('ndim', lltype.Signed), + ('format', rffi.CCHARP), + ('shape', Py_ssize_tP), + ('strides', Py_ssize_tP), + ('suboffsets', Py_ssize_tP), + #('smalltable', rffi.CFixedArray(Py_ssize_t, 2)), + ('internal', rffi.VOIDP) )) @specialize.memo() diff --git a/pypy/module/cpyext/dictobject.py b/pypy/module/cpyext/dictobject.py --- a/pypy/module/cpyext/dictobject.py +++ b/pypy/module/cpyext/dictobject.py @@ -6,6 +6,7 @@ from pypy.module.cpyext.pyobject import RefcountState from pypy.module.cpyext.pyerrors import PyErr_BadInternalCall from pypy.interpreter.error import OperationError +from pypy.rlib.objectmodel import specialize @cpython_api([], PyObject) def PyDict_New(space): @@ -183,11 +184,34 @@ w_item = space.next(w_iter) w_key, w_value = space.fixedview(w_item, 2) state = space.fromcache(RefcountState) - pkey[0] = state.make_borrowed(w_dict, w_key) - pvalue[0] = state.make_borrowed(w_dict, w_value) + if pkey: + pkey[0] = state.make_borrowed(w_dict, w_key) + if pvalue: + pvalue[0] = state.make_borrowed(w_dict, w_value) ppos[0] += 1 except OperationError, e: if not e.match(space, space.w_StopIteration): raise return 0 return 1 + + at specialize.memo() +def make_frozendict(space): + return space.appexec([], '''(): + import collections + class FrozenDict(collections.Mapping): + def __init__(self, *args, **kwargs): + self._d = dict(*args, **kwargs) + def __iter__(self): + return iter(self._d) + def __len__(self): + return len(self._d) + def __getitem__(self, key): + return self._d[key] + return FrozenDict''') + + at cpython_api([PyObject], PyObject) +def PyDictProxy_New(space, w_dict): + w_frozendict = make_frozendict(space) + return space.call_function(w_frozendict, w_dict) + diff --git a/pypy/module/cpyext/include/methodobject.h b/pypy/module/cpyext/include/methodobject.h --- a/pypy/module/cpyext/include/methodobject.h +++ b/pypy/module/cpyext/include/methodobject.h @@ -26,6 +26,7 @@ PyObject_HEAD PyMethodDef *m_ml; /* Description of the C function to call */ PyObject *m_self; /* Passed as 'self' arg to the C func, can be NULL */ + PyObject *m_module; /* The __module__ attribute, can be anything */ } PyCFunctionObject; /* Flag passed to newmethodobject */ diff --git 
a/pypy/module/cpyext/include/object.h b/pypy/module/cpyext/include/object.h --- a/pypy/module/cpyext/include/object.h +++ b/pypy/module/cpyext/include/object.h @@ -131,18 +131,18 @@ /* This is Py_ssize_t so it can be pointed to by strides in simple case.*/ - /* Py_ssize_t itemsize; */ - /* int readonly; */ - /* int ndim; */ - /* char *format; */ - /* Py_ssize_t *shape; */ - /* Py_ssize_t *strides; */ - /* Py_ssize_t *suboffsets; */ + Py_ssize_t itemsize; + int readonly; + int ndim; + char *format; + Py_ssize_t *shape; + Py_ssize_t *strides; + Py_ssize_t *suboffsets; /* static store for shape and strides of mono-dimensional buffers. */ /* Py_ssize_t smalltable[2]; */ - /* void *internal; */ + void *internal; } Py_buffer; diff --git a/pypy/module/cpyext/include/patchlevel.h b/pypy/module/cpyext/include/patchlevel.h --- a/pypy/module/cpyext/include/patchlevel.h +++ b/pypy/module/cpyext/include/patchlevel.h @@ -21,12 +21,12 @@ /* Version parsed out into numeric values */ #define PY_MAJOR_VERSION 2 #define PY_MINOR_VERSION 7 -#define PY_MICRO_VERSION 1 +#define PY_MICRO_VERSION 2 #define PY_RELEASE_LEVEL PY_RELEASE_LEVEL_FINAL #define PY_RELEASE_SERIAL 0 /* Version as a string */ -#define PY_VERSION "2.7.1" +#define PY_VERSION "2.7.2" /* PyPy version as a string */ #define PYPY_VERSION "1.8.1" diff --git a/pypy/module/cpyext/include/pystate.h b/pypy/module/cpyext/include/pystate.h --- a/pypy/module/cpyext/include/pystate.h +++ b/pypy/module/cpyext/include/pystate.h @@ -10,6 +10,7 @@ typedef struct _ts { PyInterpreterState *interp; + PyObject *dict; /* Stores per-thread state */ } PyThreadState; #define Py_BEGIN_ALLOW_THREADS { \ @@ -24,4 +25,6 @@ enum {PyGILState_LOCKED, PyGILState_UNLOCKED} PyGILState_STATE; +#define PyThreadState_GET() PyThreadState_Get() + #endif /* !Py_PYSTATE_H */ diff --git a/pypy/module/cpyext/include/pythread.h b/pypy/module/cpyext/include/pythread.h --- a/pypy/module/cpyext/include/pythread.h +++ b/pypy/module/cpyext/include/pythread.h @@ -1,6 +1,8 @@ #ifndef Py_PYTHREAD_H #define Py_PYTHREAD_H +#define WITH_THREAD + typedef void *PyThread_type_lock; #define WAIT_LOCK 1 #define NOWAIT_LOCK 0 diff --git a/pypy/module/cpyext/include/structmember.h b/pypy/module/cpyext/include/structmember.h --- a/pypy/module/cpyext/include/structmember.h +++ b/pypy/module/cpyext/include/structmember.h @@ -20,7 +20,7 @@ } PyMemberDef; -/* Types */ +/* Types. These constants are also in structmemberdefs.py. */ #define T_SHORT 0 #define T_INT 1 #define T_LONG 2 @@ -42,9 +42,12 @@ #define T_LONGLONG 17 #define T_ULONGLONG 18 -/* Flags */ +/* Flags. These constants are also in structmemberdefs.py. 
*/ #define READONLY 1 #define RO READONLY /* Shorthand */ +#define READ_RESTRICTED 2 +#define PY_WRITE_RESTRICTED 4 +#define RESTRICTED (READ_RESTRICTED | PY_WRITE_RESTRICTED) #ifdef __cplusplus diff --git a/pypy/module/cpyext/methodobject.py b/pypy/module/cpyext/methodobject.py --- a/pypy/module/cpyext/methodobject.py +++ b/pypy/module/cpyext/methodobject.py @@ -32,6 +32,7 @@ PyObjectFields + ( ('m_ml', lltype.Ptr(PyMethodDef)), ('m_self', PyObject), + ('m_module', PyObject), )) PyCFunctionObject = lltype.Ptr(PyCFunctionObjectStruct) @@ -47,11 +48,13 @@ assert isinstance(w_obj, W_PyCFunctionObject) py_func.c_m_ml = w_obj.ml py_func.c_m_self = make_ref(space, w_obj.w_self) + py_func.c_m_module = make_ref(space, w_obj.w_module) @cpython_api([PyObject], lltype.Void, external=False) def cfunction_dealloc(space, py_obj): py_func = rffi.cast(PyCFunctionObject, py_obj) Py_DecRef(space, py_func.c_m_self) + Py_DecRef(space, py_func.c_m_module) from pypy.module.cpyext.object import PyObject_dealloc PyObject_dealloc(space, py_obj) diff --git a/pypy/module/cpyext/object.py b/pypy/module/cpyext/object.py --- a/pypy/module/cpyext/object.py +++ b/pypy/module/cpyext/object.py @@ -390,6 +390,15 @@ This is the equivalent of the Python expression hash(o).""" return space.int_w(space.hash(w_obj)) + at cpython_api([PyObject], PyObject) +def PyObject_Dir(space, w_o): + """This is equivalent to the Python expression dir(o), returning a (possibly + empty) list of strings appropriate for the object argument, or NULL if there + was an error. If the argument is NULL, this is like the Python dir(), + returning the names of the current locals; in this case, if no execution frame + is active then NULL is returned but PyErr_Occurred() will return false.""" + return space.call_function(space.builtin.get('dir'), w_o) + @cpython_api([PyObject, rffi.CCHARPP, Py_ssize_tP], rffi.INT_real, error=-1) def PyObject_AsCharBuffer(space, obj, bufferp, sizep): """Returns a pointer to a read-only memory location usable as @@ -439,6 +448,8 @@ return 0 +PyBUF_WRITABLE = 0x0001 # Copied from object.h + @cpython_api([lltype.Ptr(Py_buffer), PyObject, rffi.VOIDP, Py_ssize_t, lltype.Signed, lltype.Signed], rffi.INT, error=CANNOT_FAIL) def PyBuffer_FillInfo(space, view, obj, buf, length, readonly, flags): @@ -454,6 +465,18 @@ view.c_len = length view.c_obj = obj Py_IncRef(space, obj) + view.c_itemsize = 1 + if flags & PyBUF_WRITABLE: + rffi.setintfield(view, 'c_readonly', 0) + else: + rffi.setintfield(view, 'c_readonly', 1) + rffi.setintfield(view, 'c_ndim', 0) + view.c_format = lltype.nullptr(rffi.CCHARP.TO) + view.c_shape = lltype.nullptr(Py_ssize_tP.TO) + view.c_strides = lltype.nullptr(Py_ssize_tP.TO) + view.c_suboffsets = lltype.nullptr(Py_ssize_tP.TO) + view.c_internal = lltype.nullptr(rffi.VOIDP.TO) + return 0 diff --git a/pypy/module/cpyext/pyfile.py b/pypy/module/cpyext/pyfile.py --- a/pypy/module/cpyext/pyfile.py +++ b/pypy/module/cpyext/pyfile.py @@ -1,7 +1,8 @@ from pypy.rpython.lltypesystem import rffi, lltype from pypy.module.cpyext.api import ( - cpython_api, CONST_STRING, FILEP, build_type_checkers) + cpython_api, CANNOT_FAIL, CONST_STRING, FILEP, build_type_checkers) from pypy.module.cpyext.pyobject import PyObject, borrow_from +from pypy.module.cpyext.object import Py_PRINT_RAW from pypy.interpreter.error import OperationError from pypy.module._file.interp_file import W_File @@ -61,11 +62,49 @@ def PyFile_WriteString(space, s, w_p): """Write string s to file object p. 
Return 0 on success or -1 on failure; the appropriate exception will be set.""" - w_s = space.wrap(rffi.charp2str(s)) - space.call_method(w_p, "write", w_s) + w_str = space.wrap(rffi.charp2str(s)) + space.call_method(w_p, "write", w_str) + return 0 + + at cpython_api([PyObject, PyObject, rffi.INT_real], rffi.INT_real, error=-1) +def PyFile_WriteObject(space, w_obj, w_p, flags): + """ + Write object obj to file object p. The only supported flag for flags is + Py_PRINT_RAW; if given, the str() of the object is written + instead of the repr(). Return 0 on success or -1 on failure; the + appropriate exception will be set.""" + if rffi.cast(lltype.Signed, flags) & Py_PRINT_RAW: + w_str = space.str(w_obj) + else: + w_str = space.repr(w_obj) + space.call_method(w_p, "write", w_str) return 0 @cpython_api([PyObject], PyObject) def PyFile_Name(space, w_p): """Return the name of the file specified by p as a string object.""" - return borrow_from(w_p, space.getattr(w_p, space.wrap("name"))) \ No newline at end of file + return borrow_from(w_p, space.getattr(w_p, space.wrap("name"))) + + at cpython_api([PyObject, rffi.INT_real], rffi.INT_real, error=CANNOT_FAIL) +def PyFile_SoftSpace(space, w_p, newflag): + """ + This function exists for internal use by the interpreter. Set the + softspace attribute of p to newflag and return the previous value. + p does not have to be a file object for this function to work + properly; any object is supported (thought its only interesting if + the softspace attribute can be set). This function clears any + errors, and will return 0 as the previous value if the attribute + either does not exist or if there were errors in retrieving it. + There is no way to detect errors from this function, but doing so + should not be needed.""" + try: + if rffi.cast(lltype.Signed, newflag): + w_newflag = space.w_True + else: + w_newflag = space.w_False + oldflag = space.int_w(space.getattr(w_p, space.wrap("softspace"))) + space.setattr(w_p, space.wrap("softspace"), w_newflag) + return oldflag + except OperationError, e: + return 0 + diff --git a/pypy/module/cpyext/pystate.py b/pypy/module/cpyext/pystate.py --- a/pypy/module/cpyext/pystate.py +++ b/pypy/module/cpyext/pystate.py @@ -1,12 +1,19 @@ from pypy.module.cpyext.api import ( cpython_api, generic_cpy_call, CANNOT_FAIL, CConfig, cpython_struct) +from pypy.module.cpyext.pyobject import PyObject, Py_DecRef, make_ref from pypy.rpython.lltypesystem import rffi, lltype PyInterpreterStateStruct = lltype.ForwardReference() PyInterpreterState = lltype.Ptr(PyInterpreterStateStruct) cpython_struct( - "PyInterpreterState", [('next', PyInterpreterState)], PyInterpreterStateStruct) -PyThreadState = lltype.Ptr(cpython_struct("PyThreadState", [('interp', PyInterpreterState)])) + "PyInterpreterState", + [('next', PyInterpreterState)], + PyInterpreterStateStruct) +PyThreadState = lltype.Ptr(cpython_struct( + "PyThreadState", + [('interp', PyInterpreterState), + ('dict', PyObject), + ])) @cpython_api([], PyThreadState, error=CANNOT_FAIL) def PyEval_SaveThread(space): @@ -38,41 +45,49 @@ return 1 # XXX: might be generally useful -def encapsulator(T, flavor='raw'): +def encapsulator(T, flavor='raw', dealloc=None): class MemoryCapsule(object): - def __init__(self, alloc=True): - if alloc: + def __init__(self, space): + self.space = space + if space is not None: self.memory = lltype.malloc(T, flavor=flavor) else: self.memory = lltype.nullptr(T) def __del__(self): if self.memory: + if dealloc and self.space: + dealloc(self.memory, self.space) 
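Roughly, and with all names invented, the MemoryCapsule change in this hunk follows the pattern sketched below: the capsule owns a raw allocation, runs an optional dealloc hook before freeing it, and allocates nothing at all when constructed with space=None.

def encapsulator(allocate, free, dealloc=None):
    class MemoryCapsule(object):
        def __init__(self, space):
            self.space = space
            if space is not None:
                self.memory = allocate()
            else:
                self.memory = None

        def __del__(self):
            if self.memory is not None:
                if dealloc is not None and self.space is not None:
                    dealloc(self.memory, self.space)   # e.g. drop the per-thread dict first
                free(self.memory)
    return MemoryCapsule

# Stand-in allocate/free/dealloc callables just record what happens and in what order:
events = []
Capsule = encapsulator(allocate=lambda: object(),
                       free=lambda mem: events.append('freed'),
                       dealloc=lambda mem, space: events.append('dealloc hook'))
c = Capsule(space=object())
del c    # once the capsule is collected, events == ['dealloc hook', 'freed']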
lltype.free(self.memory, flavor=flavor) return MemoryCapsule -ThreadStateCapsule = encapsulator(PyThreadState.TO) +def ThreadState_dealloc(ts, space): + assert space is not None + Py_DecRef(space, ts.c_dict) +ThreadStateCapsule = encapsulator(PyThreadState.TO, + dealloc=ThreadState_dealloc) from pypy.interpreter.executioncontext import ExecutionContext -ExecutionContext.cpyext_threadstate = ThreadStateCapsule(alloc=False) +ExecutionContext.cpyext_threadstate = ThreadStateCapsule(None) class InterpreterState(object): def __init__(self, space): self.interpreter_state = lltype.malloc( PyInterpreterState.TO, flavor='raw', zero=True, immortal=True) - def new_thread_state(self): - capsule = ThreadStateCapsule() + def new_thread_state(self, space): + capsule = ThreadStateCapsule(space) ts = capsule.memory ts.c_interp = self.interpreter_state + ts.c_dict = make_ref(space, space.newdict()) return capsule def get_thread_state(self, space): ec = space.getexecutioncontext() - return self._get_thread_state(ec).memory + return self._get_thread_state(space, ec).memory - def _get_thread_state(self, ec): + def _get_thread_state(self, space, ec): if ec.cpyext_threadstate.memory == lltype.nullptr(PyThreadState.TO): - ec.cpyext_threadstate = self.new_thread_state() + ec.cpyext_threadstate = self.new_thread_state(space) return ec.cpyext_threadstate @@ -81,6 +96,11 @@ state = space.fromcache(InterpreterState) return state.get_thread_state(space) + at cpython_api([], PyObject, error=CANNOT_FAIL) +def PyThreadState_GetDict(space): + state = space.fromcache(InterpreterState) + return state.get_thread_state(space).c_dict + @cpython_api([PyThreadState], PyThreadState, error=CANNOT_FAIL) def PyThreadState_Swap(space, tstate): """Swap the current thread state with the thread state given by the argument diff --git a/pypy/module/cpyext/pythonrun.py b/pypy/module/cpyext/pythonrun.py --- a/pypy/module/cpyext/pythonrun.py +++ b/pypy/module/cpyext/pythonrun.py @@ -14,6 +14,20 @@ value.""" return space.fromcache(State).get_programname() + at cpython_api([], rffi.CCHARP) +def Py_GetVersion(space): + """Return the version of this Python interpreter. This is a + string that looks something like + + "1.5 (\#67, Dec 31 1997, 22:34:28) [GCC 2.7.2.2]" + + The first word (up to the first space character) is the current + Python version; the first three characters are the major and minor + version separated by a period. The returned string points into + static storage; the caller should not modify its value. The value + is available to Python code as sys.version.""" + return space.fromcache(State).get_version() + @cpython_api([lltype.Ptr(lltype.FuncType([], lltype.Void))], rffi.INT_real, error=-1) def Py_AtExit(space, func_ptr): """Register a cleanup function to be called by Py_Finalize(). The cleanup diff --git a/pypy/module/cpyext/setobject.py b/pypy/module/cpyext/setobject.py --- a/pypy/module/cpyext/setobject.py +++ b/pypy/module/cpyext/setobject.py @@ -54,6 +54,20 @@ return 0 + at cpython_api([PyObject], PyObject) +def PySet_Pop(space, w_set): + """Return a new reference to an arbitrary object in the set, and removes the + object from the set. Return NULL on failure. Raise KeyError if the + set is empty. 
Raise a SystemError if set is an not an instance of + set or its subtype.""" + return space.call_method(w_set, "pop") + + at cpython_api([PyObject], rffi.INT_real, error=-1) +def PySet_Clear(space, w_set): + """Empty an existing set of all elements.""" + space.call_method(w_set, 'clear') + return 0 + @cpython_api([PyObject], Py_ssize_t, error=CANNOT_FAIL) def PySet_GET_SIZE(space, w_s): """Macro form of PySet_Size() without error checking.""" diff --git a/pypy/module/cpyext/slotdefs.py b/pypy/module/cpyext/slotdefs.py --- a/pypy/module/cpyext/slotdefs.py +++ b/pypy/module/cpyext/slotdefs.py @@ -185,6 +185,15 @@ space.fromcache(State).check_and_raise_exception(always=True) return space.wrap(res) +def wrap_delitem(space, w_self, w_args, func): + func_target = rffi.cast(objobjargproc, func) + check_num_args(space, w_args, 1) + w_key, = space.fixedview(w_args) + res = generic_cpy_call(space, func_target, w_self, w_key, None) + if rffi.cast(lltype.Signed, res) == -1: + space.fromcache(State).check_and_raise_exception(always=True) + return space.w_None + def wrap_ssizessizeargfunc(space, w_self, w_args, func): func_target = rffi.cast(ssizessizeargfunc, func) check_num_args(space, w_args, 2) @@ -291,6 +300,14 @@ def slot_nb_int(space, w_self): return space.int(w_self) + at cpython_api([PyObject], PyObject, external=False) +def slot_tp_iter(space, w_self): + return space.iter(w_self) + + at cpython_api([PyObject], PyObject, external=False) +def slot_tp_iternext(space, w_self): + return space.next(w_self) + from pypy.rlib.nonconst import NonConstant SLOTS = {} @@ -619,6 +636,19 @@ TPSLOT("__buffer__", "tp_as_buffer.c_bf_getreadbuffer", None, "wrap_getreadbuffer", ""), ) +# partial sort to solve some slot conflicts: +# Number slots before Mapping slots before Sequence slots. +# These are the only conflicts between __name__ methods +def slotdef_sort_key(slotdef): + if slotdef.slot_name.startswith('tp_as_number'): + return 1 + if slotdef.slot_name.startswith('tp_as_mapping'): + return 2 + if slotdef.slot_name.startswith('tp_as_sequence'): + return 3 + return 0 +slotdefs = sorted(slotdefs, key=slotdef_sort_key) + slotdefs_for_tp_slots = unrolling_iterable( [(x.method_name, x.slot_name, x.slot_names, x.slot_func) for x in slotdefs]) diff --git a/pypy/module/cpyext/state.py b/pypy/module/cpyext/state.py --- a/pypy/module/cpyext/state.py +++ b/pypy/module/cpyext/state.py @@ -10,6 +10,7 @@ self.space = space self.reset() self.programname = lltype.nullptr(rffi.CCHARP.TO) + self.version = lltype.nullptr(rffi.CCHARP.TO) def reset(self): from pypy.module.cpyext.modsupport import PyMethodDef @@ -102,6 +103,15 @@ lltype.render_immortal(self.programname) return self.programname + def get_version(self): + if not self.version: + space = self.space + w_version = space.sys.get('version') + version = space.str_w(w_version) + self.version = rffi.str2charp(version) + lltype.render_immortal(self.version) + return self.version + def find_extension(self, name, path): from pypy.module.cpyext.modsupport import PyImport_AddModule from pypy.interpreter.module import Module diff --git a/pypy/module/cpyext/stringobject.py b/pypy/module/cpyext/stringobject.py --- a/pypy/module/cpyext/stringobject.py +++ b/pypy/module/cpyext/stringobject.py @@ -244,6 +244,45 @@ s = rffi.charp2str(string) return space.new_interned_str(s) + at cpython_api([PyObjectP], lltype.Void) +def PyString_InternInPlace(space, string): + """Intern the argument *string in place. 
The argument must be the + address of a pointer variable pointing to a Python string object. + If there is an existing interned string that is the same as + *string, it sets *string to it (decrementing the reference count + of the old string object and incrementing the reference count of + the interned string object), otherwise it leaves *string alone and + interns it (incrementing its reference count). (Clarification: + even though there is a lot of talk about reference counts, think + of this function as reference-count-neutral; you own the object + after the call if and only if you owned it before the call.) + + This function is not available in 3.x and does not have a PyBytes + alias.""" + w_str = from_ref(space, string[0]) + w_str = space.new_interned_w_str(w_str) + Py_DecRef(space, string[0]) + string[0] = make_ref(space, w_str) + + at cpython_api([PyObject, rffi.CCHARP, rffi.CCHARP], PyObject) +def PyString_AsEncodedObject(space, w_str, encoding, errors): + """Encode a string object using the codec registered for encoding and return + the result as Python object. encoding and errors have the same meaning as + the parameters of the same name in the string encode() method. The codec to + be used is looked up using the Python codec registry. Return NULL if an + exception was raised by the codec. + + This function is not available in 3.x and does not have a PyBytes alias.""" + if not PyString_Check(space, w_str): + PyErr_BadArgument(space) + + w_encoding = w_errors = space.w_None + if encoding: + w_encoding = space.wrap(rffi.charp2str(encoding)) + if errors: + w_errors = space.wrap(rffi.charp2str(errors)) + return space.call_method(w_str, 'encode', w_encoding, w_errors) + @cpython_api([PyObject, PyObject], PyObject) def _PyString_Join(space, w_sep, w_seq): return space.call_method(w_sep, 'join', w_seq) diff --git a/pypy/module/cpyext/structmemberdefs.py b/pypy/module/cpyext/structmemberdefs.py --- a/pypy/module/cpyext/structmemberdefs.py +++ b/pypy/module/cpyext/structmemberdefs.py @@ -1,3 +1,5 @@ +# These constants are also in include/structmember.h + T_SHORT = 0 T_INT = 1 T_LONG = 2 @@ -18,3 +20,6 @@ T_ULONGLONG = 18 READONLY = RO = 1 +READ_RESTRICTED = 2 +WRITE_RESTRICTED = 4 +RESTRICTED = READ_RESTRICTED | WRITE_RESTRICTED diff --git a/pypy/module/cpyext/stubs.py b/pypy/module/cpyext/stubs.py --- a/pypy/module/cpyext/stubs.py +++ b/pypy/module/cpyext/stubs.py @@ -1,5 +1,5 @@ from pypy.module.cpyext.api import ( - cpython_api, PyObject, PyObjectP, CANNOT_FAIL, Py_buffer + cpython_api, PyObject, PyObjectP, CANNOT_FAIL ) from pypy.module.cpyext.complexobject import Py_complex_ptr as Py_complex from pypy.rpython.lltypesystem import rffi, lltype @@ -10,6 +10,7 @@ PyMethodDef = rffi.VOIDP PyGetSetDef = rffi.VOIDP PyMemberDef = rffi.VOIDP +Py_buffer = rffi.VOIDP va_list = rffi.VOIDP PyDateTime_Date = rffi.VOIDP PyDateTime_DateTime = rffi.VOIDP @@ -32,10 +33,6 @@ def _PyObject_Del(space, op): raise NotImplementedError - at cpython_api([PyObject], rffi.INT_real, error=CANNOT_FAIL) -def PyObject_CheckBuffer(space, obj): - raise NotImplementedError - @cpython_api([rffi.CCHARP], Py_ssize_t, error=CANNOT_FAIL) def PyBuffer_SizeFromFormat(space, format): """Return the implied ~Py_buffer.itemsize from the struct-stype @@ -684,28 +681,6 @@ """ raise NotImplementedError - at cpython_api([PyObject, rffi.INT_real], rffi.INT_real, error=CANNOT_FAIL) -def PyFile_SoftSpace(space, p, newflag): - """ - This function exists for internal use by the interpreter. 
Set the - softspace attribute of p to newflag and return the previous value. - p does not have to be a file object for this function to work properly; any - object is supported (thought its only interesting if the softspace - attribute can be set). This function clears any errors, and will return 0 - as the previous value if the attribute either does not exist or if there were - errors in retrieving it. There is no way to detect errors from this function, - but doing so should not be needed.""" - raise NotImplementedError - - at cpython_api([PyObject, PyObject, rffi.INT_real], rffi.INT_real, error=-1) -def PyFile_WriteObject(space, obj, p, flags): - """ - Write object obj to file object p. The only supported flag for flags is - Py_PRINT_RAW; if given, the str() of the object is written - instead of the repr(). Return 0 on success or -1 on failure; the - appropriate exception will be set.""" - raise NotImplementedError - @cpython_api([], PyObject) def PyFloat_GetInfo(space): """Return a structseq instance which contains information about the @@ -1097,19 +1072,6 @@ raise NotImplementedError @cpython_api([], rffi.CCHARP) -def Py_GetVersion(space): - """Return the version of this Python interpreter. This is a string that looks - something like - - "1.5 (\#67, Dec 31 1997, 22:34:28) [GCC 2.7.2.2]" - - The first word (up to the first space character) is the current Python version; - the first three characters are the major and minor version separated by a - period. The returned string points into static storage; the caller should not - modify its value. The value is available to Python code as sys.version.""" - raise NotImplementedError - - at cpython_api([], rffi.CCHARP) def Py_GetPlatform(space): """Return the platform identifier for the current platform. On Unix, this is formed from the"official" name of the operating system, converted to lower @@ -1331,28 +1293,6 @@ that haven't been explicitly destroyed at that point.""" raise NotImplementedError - at cpython_api([rffi.VOIDP], lltype.Void) -def Py_AddPendingCall(space, func): - """Post a notification to the Python main thread. If successful, func will - be called with the argument arg at the earliest convenience. func will be - called having the global interpreter lock held and can thus use the full - Python API and can take any action such as setting object attributes to - signal IO completion. It must return 0 on success, or -1 signalling an - exception. The notification function won't be interrupted to perform another - asynchronous notification recursively, but it can still be interrupted to - switch threads if the global interpreter lock is released, for example, if it - calls back into Python code. - - This function returns 0 on success in which case the notification has been - scheduled. Otherwise, for example if the notification buffer is full, it - returns -1 without setting any exception. - - This function can be called on any thread, be it a Python thread or some - other system thread. If it is a Python thread, it doesn't matter if it holds - the global interpreter lock or not. - """ - raise NotImplementedError - @cpython_api([Py_tracefunc, PyObject], lltype.Void) def PyEval_SetProfile(space, func, obj): """Set the profiler function to func. 
The obj parameter is passed to the @@ -1685,15 +1625,6 @@ """ raise NotImplementedError - at cpython_api([PyObject], PyObject) -def PyObject_Dir(space, o): - """This is equivalent to the Python expression dir(o), returning a (possibly - empty) list of strings appropriate for the object argument, or NULL if there - was an error. If the argument is NULL, this is like the Python dir(), - returning the names of the current locals; in this case, if no execution frame - is active then NULL is returned but PyErr_Occurred() will return false.""" - raise NotImplementedError - @cpython_api([], PyFrameObject) def PyEval_GetFrame(space): """Return the current thread state's frame, which is NULL if no frame is @@ -1802,34 +1733,6 @@ building-up new frozensets with PySet_Add().""" raise NotImplementedError - at cpython_api([PyObject], PyObject) -def PySet_Pop(space, set): - """Return a new reference to an arbitrary object in the set, and removes the - object from the set. Return NULL on failure. Raise KeyError if the - set is empty. Raise a SystemError if set is an not an instance of - set or its subtype.""" - raise NotImplementedError - - at cpython_api([PyObject], rffi.INT_real, error=-1) -def PySet_Clear(space, set): - """Empty an existing set of all elements.""" - raise NotImplementedError - - at cpython_api([PyObjectP], lltype.Void) -def PyString_InternInPlace(space, string): - """Intern the argument *string in place. The argument must be the address of a - pointer variable pointing to a Python string object. If there is an existing - interned string that is the same as *string, it sets *string to it - (decrementing the reference count of the old string object and incrementing the - reference count of the interned string object), otherwise it leaves *string - alone and interns it (incrementing its reference count). (Clarification: even - though there is a lot of talk about reference counts, think of this function as - reference-count-neutral; you own the object after the call if and only if you - owned it before the call.) - - This function is not available in 3.x and does not have a PyBytes alias.""" - raise NotImplementedError - @cpython_api([rffi.CCHARP, Py_ssize_t, rffi.CCHARP, rffi.CCHARP], PyObject) def PyString_Decode(space, s, size, encoding, errors): """Create an object by decoding size bytes of the encoded buffer s using the @@ -2448,16 +2351,6 @@ properly supporting 64-bit systems.""" raise NotImplementedError - at cpython_api([PyObject, PyObject, PyObject, Py_ssize_t], PyObject) -def PyUnicode_Replace(space, str, substr, replstr, maxcount): - """Replace at most maxcount occurrences of substr in str with replstr and - return the resulting Unicode object. maxcount == -1 means replace all - occurrences. - - This function used an int type for maxcount. 
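These stubs can go away because the same changeset implements the functions for real elsewhere (setobject.py, stringobject.py, object.py, unicodeobject.py); at app level the new implementations delegate to the obvious Python operations, roughly as in this illustrative sketch:

def pyunicode_replace(s, substr, replstr, maxcount):
    # PyUnicode_Replace: maxcount == -1 means "replace all occurrences",
    # which is exactly what str.replace() does with a negative count
    return s.replace(substr, replstr, maxcount)

def pyset_pop(s):
    # PySet_Pop: removes and returns an arbitrary element, KeyError if empty
    return s.pop()

def pyset_clear(s):
    s.clear()
    return 0

assert pyunicode_replace(u"abababab", u"a", u"z", 2) == u"zbzbabab"
assert pyunicode_replace(u"abababab", u"a", u"z", -1) == u"zbzbzbzb"
assert pyset_pop(set([42])) == 42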
This might - require changes in your code for properly supporting 64-bit systems.""" - raise NotImplementedError - @cpython_api([PyObject, PyObject, rffi.INT_real], PyObject) def PyUnicode_RichCompare(space, left, right, op): """Rich compare two unicode strings and return one of the following: diff --git a/pypy/module/cpyext/stubsactive.py b/pypy/module/cpyext/stubsactive.py --- a/pypy/module/cpyext/stubsactive.py +++ b/pypy/module/cpyext/stubsactive.py @@ -38,3 +38,31 @@ def Py_MakePendingCalls(space): return 0 +pending_call = lltype.Ptr(lltype.FuncType([rffi.VOIDP], rffi.INT_real)) + at cpython_api([pending_call, rffi.VOIDP], rffi.INT_real, error=-1) +def Py_AddPendingCall(space, func, arg): + """Post a notification to the Python main thread. If successful, + func will be called with the argument arg at the earliest + convenience. func will be called having the global interpreter + lock held and can thus use the full Python API and can take any + action such as setting object attributes to signal IO completion. + It must return 0 on success, or -1 signalling an exception. The + notification function won't be interrupted to perform another + asynchronous notification recursively, but it can still be + interrupted to switch threads if the global interpreter lock is + released, for example, if it calls back into Python code. + + This function returns 0 on success in which case the notification + has been scheduled. Otherwise, for example if the notification + buffer is full, it returns -1 without setting any exception. + + This function can be called on any thread, be it a Python thread + or some other system thread. If it is a Python thread, it doesn't + matter if it holds the global interpreter lock or not. + """ + return -1 + +thread_func = lltype.Ptr(lltype.FuncType([rffi.VOIDP], lltype.Void)) + at cpython_api([thread_func, rffi.VOIDP], rffi.INT_real, error=-1) +def PyThread_start_new_thread(space, func, arg): + return -1 diff --git a/pypy/module/cpyext/test/test_arraymodule.py b/pypy/module/cpyext/test/test_arraymodule.py --- a/pypy/module/cpyext/test/test_arraymodule.py +++ b/pypy/module/cpyext/test/test_arraymodule.py @@ -43,6 +43,15 @@ assert arr[:2].tolist() == [1,2] assert arr[1:3].tolist() == [2,3] + def test_slice_object(self): + module = self.import_module(name='array') + arr = module.array('i', [1,2,3,4]) + assert arr[slice(1,3)].tolist() == [2,3] + arr[slice(1,3)] = module.array('i', [21, 22, 23]) + assert arr.tolist() == [1, 21, 22, 23, 4] + del arr[slice(1, 3)] + assert arr.tolist() == [1, 23, 4] + def test_buffer(self): module = self.import_module(name='array') arr = module.array('i', [1,2,3,4]) diff --git a/pypy/module/cpyext/test/test_classobject.py b/pypy/module/cpyext/test/test_classobject.py --- a/pypy/module/cpyext/test/test_classobject.py +++ b/pypy/module/cpyext/test/test_classobject.py @@ -1,4 +1,5 @@ from pypy.module.cpyext.test.test_api import BaseApiTest +from pypy.module.cpyext.test.test_cpyext import AppTestCpythonExtensionBase from pypy.interpreter.function import Function, Method class TestClassObject(BaseApiTest): @@ -51,3 +52,14 @@ assert api.PyInstance_Check(w_instance) assert space.is_true(space.call_method(space.builtin, "isinstance", w_instance, w_class)) + +class AppTestStringObject(AppTestCpythonExtensionBase): + def test_class_type(self): + module = self.import_extension('foo', [ + ("get_classtype", "METH_NOARGS", + """ + Py_INCREF(&PyClass_Type); + return &PyClass_Type; + """)]) + class C: pass + assert module.get_classtype() is type(C) diff --git 
a/pypy/module/cpyext/test/test_cpyext.py b/pypy/module/cpyext/test/test_cpyext.py --- a/pypy/module/cpyext/test/test_cpyext.py +++ b/pypy/module/cpyext/test/test_cpyext.py @@ -748,6 +748,22 @@ print p assert 'py' in p + def test_get_version(self): + mod = self.import_extension('foo', [ + ('get_version', 'METH_NOARGS', + ''' + char* name1 = Py_GetVersion(); + char* name2 = Py_GetVersion(); + if (name1 != name2) + Py_RETURN_FALSE; + return PyString_FromString(name1); + ''' + ), + ]) + p = mod.get_version() + print p + assert 'PyPy' in p + def test_no_double_imports(self): import sys, os try: diff --git a/pypy/module/cpyext/test/test_dictobject.py b/pypy/module/cpyext/test/test_dictobject.py --- a/pypy/module/cpyext/test/test_dictobject.py +++ b/pypy/module/cpyext/test/test_dictobject.py @@ -2,6 +2,7 @@ from pypy.module.cpyext.test.test_api import BaseApiTest from pypy.module.cpyext.api import Py_ssize_tP, PyObjectP from pypy.module.cpyext.pyobject import make_ref, from_ref +from pypy.interpreter.error import OperationError class TestDictObject(BaseApiTest): def test_dict(self, space, api): @@ -110,3 +111,44 @@ assert space.eq_w(space.len(w_copy), space.len(w_dict)) assert space.eq_w(w_copy, w_dict) + + def test_iterkeys(self, space, api): + w_dict = space.sys.getdict(space) + py_dict = make_ref(space, w_dict) + + ppos = lltype.malloc(Py_ssize_tP.TO, 1, flavor='raw') + pkey = lltype.malloc(PyObjectP.TO, 1, flavor='raw') + pvalue = lltype.malloc(PyObjectP.TO, 1, flavor='raw') + + keys_w = [] + values_w = [] + try: + ppos[0] = 0 + while api.PyDict_Next(w_dict, ppos, pkey, None): + w_key = from_ref(space, pkey[0]) + keys_w.append(w_key) + ppos[0] = 0 + while api.PyDict_Next(w_dict, ppos, None, pvalue): + w_value = from_ref(space, pvalue[0]) + values_w.append(w_value) + finally: + lltype.free(ppos, flavor='raw') + lltype.free(pkey, flavor='raw') + lltype.free(pvalue, flavor='raw') + + api.Py_DecRef(py_dict) # release borrowed references + + assert space.eq_w(space.newlist(keys_w), + space.call_method(w_dict, "keys")) + assert space.eq_w(space.newlist(values_w), + space.call_method(w_dict, "values")) + + def test_dictproxy(self, space, api): + w_dict = space.sys.get('modules') + w_proxy = api.PyDictProxy_New(w_dict) + assert space.is_true(space.contains(w_proxy, space.wrap('sys'))) + raises(OperationError, space.setitem, + w_proxy, space.wrap('sys'), space.w_None) + raises(OperationError, space.delitem, + w_proxy, space.wrap('sys')) + raises(OperationError, space.call_method, w_proxy, 'clear') diff --git a/pypy/module/cpyext/test/test_methodobject.py b/pypy/module/cpyext/test/test_methodobject.py --- a/pypy/module/cpyext/test/test_methodobject.py +++ b/pypy/module/cpyext/test/test_methodobject.py @@ -9,7 +9,7 @@ class AppTestMethodObject(AppTestCpythonExtensionBase): def test_call_METH(self): - mod = self.import_extension('foo', [ + mod = self.import_extension('MyModule', [ ('getarg_O', 'METH_O', ''' Py_INCREF(args); @@ -51,11 +51,23 @@ } ''' ), + ('getModule', 'METH_O', + ''' + if(PyCFunction_Check(args)) { + PyCFunctionObject* func = (PyCFunctionObject*)args; + Py_INCREF(func->m_module); + return func->m_module; + } + else { + Py_RETURN_FALSE; + } + ''' + ), ('isSameFunction', 'METH_O', ''' PyCFunction ptr = PyCFunction_GetFunction(args); if (!ptr) return NULL; - if (ptr == foo_getarg_O) + if (ptr == MyModule_getarg_O) Py_RETURN_TRUE; else Py_RETURN_FALSE; @@ -76,6 +88,7 @@ assert mod.getarg_OLD(1, 2) == (1, 2) assert mod.isCFunction(mod.getarg_O) == "getarg_O" + assert 
mod.getModule(mod.getarg_O) == 'MyModule' assert mod.isSameFunction(mod.getarg_O) raises(TypeError, mod.isSameFunction, 1) diff --git a/pypy/module/cpyext/test/test_object.py b/pypy/module/cpyext/test/test_object.py --- a/pypy/module/cpyext/test/test_object.py +++ b/pypy/module/cpyext/test/test_object.py @@ -191,6 +191,11 @@ assert api.PyObject_Unicode(space.wrap("\xe9")) is None api.PyErr_Clear() + def test_dir(self, space, api): + w_dir = api.PyObject_Dir(space.sys) + assert space.isinstance_w(w_dir, space.w_list) + assert space.is_true(space.contains(w_dir, space.wrap('modules'))) + class AppTestObject(AppTestCpythonExtensionBase): def setup_class(cls): AppTestCpythonExtensionBase.setup_class.im_func(cls) diff --git a/pypy/module/cpyext/test/test_pyfile.py b/pypy/module/cpyext/test/test_pyfile.py --- a/pypy/module/cpyext/test/test_pyfile.py +++ b/pypy/module/cpyext/test/test_pyfile.py @@ -1,5 +1,6 @@ from pypy.module.cpyext.api import fopen, fclose, fwrite from pypy.module.cpyext.test.test_api import BaseApiTest +from pypy.module.cpyext.object import Py_PRINT_RAW from pypy.rpython.lltypesystem import rffi, lltype from pypy.tool.udir import udir import pytest @@ -77,3 +78,28 @@ out = out.replace('\r\n', '\n') assert out == "test\n" + def test_file_writeobject(self, space, api, capfd): + w_obj = space.wrap("test\n") + w_stdout = space.sys.get("stdout") + api.PyFile_WriteObject(w_obj, w_stdout, Py_PRINT_RAW) + api.PyFile_WriteObject(w_obj, w_stdout, 0) + space.call_method(w_stdout, "flush") + out, err = capfd.readouterr() + out = out.replace('\r\n', '\n') + assert out == "test\n'test\\n'" + + def test_file_softspace(self, space, api, capfd): + w_stdout = space.sys.get("stdout") + assert api.PyFile_SoftSpace(w_stdout, 1) == 0 + assert api.PyFile_SoftSpace(w_stdout, 0) == 1 + + api.PyFile_SoftSpace(w_stdout, 1) + w_ns = space.newdict() + space.exec_("print 1,", w_ns, w_ns) + space.exec_("print 2,", w_ns, w_ns) + api.PyFile_SoftSpace(w_stdout, 0) + space.exec_("print 3", w_ns, w_ns) + space.call_method(w_stdout, "flush") + out, err = capfd.readouterr() + out = out.replace('\r\n', '\n') + assert out == " 1 23\n" diff --git a/pypy/module/cpyext/test/test_pystate.py b/pypy/module/cpyext/test/test_pystate.py --- a/pypy/module/cpyext/test/test_pystate.py +++ b/pypy/module/cpyext/test/test_pystate.py @@ -2,6 +2,7 @@ from pypy.module.cpyext.test.test_api import BaseApiTest from pypy.rpython.lltypesystem.lltype import nullptr from pypy.module.cpyext.pystate import PyInterpreterState, PyThreadState +from pypy.module.cpyext.pyobject import from_ref class AppTestThreads(AppTestCpythonExtensionBase): def test_allow_threads(self): @@ -49,3 +50,10 @@ api.PyEval_AcquireThread(tstate) api.PyEval_ReleaseThread(tstate) + + def test_threadstate_dict(self, space, api): + ts = api.PyThreadState_Get() + ref = ts.c_dict + assert ref == api.PyThreadState_GetDict() + w_obj = from_ref(space, ref) + assert space.isinstance_w(w_obj, space.w_dict) diff --git a/pypy/module/cpyext/test/test_setobject.py b/pypy/module/cpyext/test/test_setobject.py --- a/pypy/module/cpyext/test/test_setobject.py +++ b/pypy/module/cpyext/test/test_setobject.py @@ -32,3 +32,13 @@ w_set = api.PySet_New(space.wrap([1,2,3,4])) assert api.PySet_Contains(w_set, space.wrap(1)) assert not api.PySet_Contains(w_set, space.wrap(0)) + + def test_set_pop_clear(self, space, api): + w_set = api.PySet_New(space.wrap([1,2,3,4])) + w_obj = api.PySet_Pop(w_set) + assert space.int_w(w_obj) in (1,2,3,4) + assert space.len_w(w_set) == 3 + api.PySet_Clear(w_set) + 
assert space.len_w(w_set) == 0 + + diff --git a/pypy/module/cpyext/test/test_stringobject.py b/pypy/module/cpyext/test/test_stringobject.py --- a/pypy/module/cpyext/test/test_stringobject.py +++ b/pypy/module/cpyext/test/test_stringobject.py @@ -130,6 +130,56 @@ ]) module.getstring() + def test_format_v(self): + module = self.import_extension('foo', [ + ("test_string_format_v", "METH_VARARGS", + ''' + return helper("bla %d ble %s\\n", + PyInt_AsLong(PyTuple_GetItem(args, 0)), + PyString_AsString(PyTuple_GetItem(args, 1))); + ''' + ) + ], prologue=''' + PyObject* helper(char* fmt, ...) + { + va_list va; + PyObject* res; + va_start(va, fmt); + res = PyString_FromFormatV(fmt, va); + va_end(va); + return res; + } + ''') + res = module.test_string_format_v(1, "xyz") + assert res == "bla 1 ble xyz\n" + + def test_format(self): + module = self.import_extension('foo', [ + ("test_string_format", "METH_VARARGS", + ''' + return PyString_FromFormat("bla %d ble %s\\n", + PyInt_AsLong(PyTuple_GetItem(args, 0)), + PyString_AsString(PyTuple_GetItem(args, 1))); + ''' + ) + ]) + res = module.test_string_format(1, "xyz") + assert res == "bla 1 ble xyz\n" + + def test_intern_inplace(self): + module = self.import_extension('foo', [ + ("test_intern_inplace", "METH_O", + ''' + PyObject *s = args; + Py_INCREF(s); + PyString_InternInPlace(&s); + return s; + ''' + ) + ]) + # This does not test much, but at least the refcounts are checked. + assert module.test_intern_inplace('s') == 's' + class TestString(BaseApiTest): def test_string_resize(self, space, api): py_str = new_empty_str(space, 10) diff --git a/pypy/module/cpyext/test/test_typeobject.py b/pypy/module/cpyext/test/test_typeobject.py --- a/pypy/module/cpyext/test/test_typeobject.py +++ b/pypy/module/cpyext/test/test_typeobject.py @@ -425,3 +425,32 @@ ''') obj = module.new_obj() raises(ZeroDivisionError, obj.__setitem__, 5, None) + + def test_tp_iter(self): + module = self.import_extension('foo', [ + ("tp_iter", "METH_O", + ''' + if (!args->ob_type->tp_iter) + { + PyErr_SetNone(PyExc_ValueError); + return NULL; + } + return args->ob_type->tp_iter(args); + ''' + ), + ("tp_iternext", "METH_O", + ''' + if (!args->ob_type->tp_iternext) + { + PyErr_SetNone(PyExc_ValueError); + return NULL; + } + return args->ob_type->tp_iternext(args); + ''' + ) + ]) + l = [1] + it = module.tp_iter(l) + assert type(it) is type(iter([])) + assert module.tp_iternext(it) == 1 + raises(StopIteration, module.tp_iternext, it) diff --git a/pypy/module/cpyext/test/test_unicodeobject.py b/pypy/module/cpyext/test/test_unicodeobject.py --- a/pypy/module/cpyext/test/test_unicodeobject.py +++ b/pypy/module/cpyext/test/test_unicodeobject.py @@ -455,3 +455,20 @@ w_seq = space.wrap([u'a', u'b']) w_joined = api.PyUnicode_Join(w_sep, w_seq) assert space.unwrap(w_joined) == u'ab' + + def test_fromordinal(self, space, api): + w_char = api.PyUnicode_FromOrdinal(65) + assert space.unwrap(w_char) == u'A' + w_char = api.PyUnicode_FromOrdinal(0) + assert space.unwrap(w_char) == u'\0' + w_char = api.PyUnicode_FromOrdinal(0xFFFF) + assert space.unwrap(w_char) == u'\uFFFF' + + def test_replace(self, space, api): + w_str = space.wrap(u"abababab") + w_substr = space.wrap(u"a") + w_replstr = space.wrap(u"z") + assert u"zbzbabab" == space.unwrap( + api.PyUnicode_Replace(w_str, w_substr, w_replstr, 2)) + assert u"zbzbzbzb" == space.unwrap( + api.PyUnicode_Replace(w_str, w_substr, w_replstr, -1)) diff --git a/pypy/module/cpyext/unicodeobject.py b/pypy/module/cpyext/unicodeobject.py --- 
a/pypy/module/cpyext/unicodeobject.py +++ b/pypy/module/cpyext/unicodeobject.py @@ -399,6 +399,16 @@ w_str = space.wrap(rffi.charpsize2str(s, size)) return space.call_method(w_str, 'decode', space.wrap("utf-8")) + at cpython_api([rffi.INT_real], PyObject) +def PyUnicode_FromOrdinal(space, ordinal): + """Create a Unicode Object from the given Unicode code point ordinal. + + The ordinal must be in range(0x10000) on narrow Python builds + (UCS2), and range(0x110000) on wide builds (UCS4). A ValueError is + raised in case it is not.""" + w_ordinal = space.wrap(rffi.cast(lltype.Signed, ordinal)) + return space.call_function(space.builtin.get('unichr'), w_ordinal) + @cpython_api([PyObjectP, Py_ssize_t], rffi.INT_real, error=-1) def PyUnicode_Resize(space, ref, newsize): # XXX always create a new string so far @@ -542,6 +552,15 @@ @cpython_api([PyObject, PyObject], PyObject) def PyUnicode_Join(space, w_sep, w_seq): - """Join a sequence of strings using the given separator and return the resulting - Unicode string.""" + """Join a sequence of strings using the given separator and return + the resulting Unicode string.""" return space.call_method(w_sep, 'join', w_seq) + + at cpython_api([PyObject, PyObject, PyObject, Py_ssize_t], PyObject) +def PyUnicode_Replace(space, w_str, w_substr, w_replstr, maxcount): + """Replace at most maxcount occurrences of substr in str with replstr and + return the resulting Unicode object. maxcount == -1 means replace all + occurrences.""" + return space.call_method(w_str, "replace", w_substr, w_replstr, + space.wrap(maxcount)) + diff --git a/pypy/module/micronumpy/__init__.py b/pypy/module/micronumpy/__init__.py --- a/pypy/module/micronumpy/__init__.py +++ b/pypy/module/micronumpy/__init__.py @@ -31,7 +31,7 @@ 'concatenate': 'interp_numarray.concatenate', 'set_string_function': 'appbridge.set_string_function', - + 'count_reduce_items': 'interp_numarray.count_reduce_items', 'True_': 'types.Bool.True', @@ -67,10 +67,12 @@ ("arccos", "arccos"), ("arcsin", "arcsin"), ("arctan", "arctan"), + ("arccosh", "arccosh"), ("arcsinh", "arcsinh"), ("arctanh", "arctanh"), ("copysign", "copysign"), ("cos", "cos"), + ("cosh", "cosh"), ("divide", "divide"), ("true_divide", "true_divide"), ("equal", "equal"), @@ -90,11 +92,14 @@ ("reciprocal", "reciprocal"), ("sign", "sign"), ("sin", "sin"), + ("sinh", "sinh"), ("subtract", "subtract"), ('sqrt', 'sqrt'), ("tan", "tan"), + ("tanh", "tanh"), ('bitwise_and', 'bitwise_and'), ('bitwise_or', 'bitwise_or'), + ('bitwise_xor', 'bitwise_xor'), ('bitwise_not', 'invert'), ('isnan', 'isnan'), ('isinf', 'isinf'), @@ -111,8 +116,5 @@ 'min': 'app_numpy.min', 'identity': 'app_numpy.identity', 'max': 'app_numpy.max', - 'inf': 'app_numpy.inf', - 'e': 'app_numpy.e', - 'pi': 'app_numpy.pi', 'arange': 'app_numpy.arange', } diff --git a/pypy/module/micronumpy/app_numpy.py b/pypy/module/micronumpy/app_numpy.py --- a/pypy/module/micronumpy/app_numpy.py +++ b/pypy/module/micronumpy/app_numpy.py @@ -3,11 +3,6 @@ import _numpypy -inf = float("inf") -e = math.e -pi = math.pi - - def average(a): # This implements a weighted average, for now we don't implement the # weighting, just the average part! @@ -59,7 +54,7 @@ if not hasattr(a, "max"): a = _numpypy.array(a) return a.max(axis) - + def arange(start, stop=None, step=1, dtype=None): '''arange([start], stop[, step], dtype=None) Generate values in the half-interval [start, stop). 
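Since the arange() docstring above advertises half-open interval semantics, here is a minimal pure-Python rendering of that contract (the real function builds a _numpypy array of the requested dtype instead of a list):

def arange(start, stop=None, step=1):
    if stop is None:
        start, stop = 0, start
    result = []
    i = start
    while i < stop:          # stop itself is never included
        result.append(i)
        i += step
    return result

assert arange(5) == [0, 1, 2, 3, 4]
assert arange(1, 2, 0.25) == [1, 1.25, 1.5, 1.75]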
diff --git a/pypy/module/micronumpy/interp_boxes.py b/pypy/module/micronumpy/interp_boxes.py --- a/pypy/module/micronumpy/interp_boxes.py +++ b/pypy/module/micronumpy/interp_boxes.py @@ -1,6 +1,6 @@ from pypy.interpreter.baseobjspace import Wrappable from pypy.interpreter.error import operationerrfmt -from pypy.interpreter.gateway import interp2app +from pypy.interpreter.gateway import interp2app, unwrap_spec from pypy.interpreter.typedef import TypeDef from pypy.objspace.std.floattype import float_typedef from pypy.objspace.std.inttype import int_typedef @@ -29,7 +29,6 @@ def convert_to(self, dtype): return dtype.box(self.value) - class W_GenericBox(Wrappable): _attrs_ = () @@ -39,10 +38,10 @@ ) def descr_str(self, space): - return self.descr_repr(space) + return space.wrap(self.get_dtype(space).itemtype.str_format(self)) - def descr_repr(self, space): - return space.wrap(self.get_dtype(space).itemtype.str_format(self)) + def descr_format(self, space, w_spec): + return space.format(self.item(space), w_spec) def descr_int(self, space): box = self.convert_to(W_LongBox.get_dtype(space)) @@ -80,7 +79,15 @@ descr_sub = _binop_impl("subtract") descr_mul = _binop_impl("multiply") descr_div = _binop_impl("divide") + descr_truediv = _binop_impl("true_divide") + descr_mod = _binop_impl("mod") descr_pow = _binop_impl("power") + descr_lshift = _binop_impl("left_shift") + descr_rshift = _binop_impl("right_shift") + descr_and = _binop_impl("bitwise_and") + descr_or = _binop_impl("bitwise_or") + descr_xor = _binop_impl("bitwise_xor") + descr_eq = _binop_impl("equal") descr_ne = _binop_impl("not_equal") descr_lt = _binop_impl("less") @@ -91,9 +98,30 @@ descr_radd = _binop_right_impl("add") descr_rsub = _binop_right_impl("subtract") descr_rmul = _binop_right_impl("multiply") + descr_rdiv = _binop_right_impl("divide") + descr_rtruediv = _binop_right_impl("true_divide") + descr_rmod = _binop_right_impl("mod") + descr_rpow = _binop_right_impl("power") + descr_rlshift = _binop_right_impl("left_shift") + descr_rrshift = _binop_right_impl("right_shift") + descr_rand = _binop_right_impl("bitwise_and") + descr_ror = _binop_right_impl("bitwise_or") + descr_rxor = _binop_right_impl("bitwise_xor") + descr_pos = _unaryop_impl("positive") descr_neg = _unaryop_impl("negative") descr_abs = _unaryop_impl("absolute") + descr_invert = _unaryop_impl("invert") + + def descr_divmod(self, space, w_other): + w_quotient = self.descr_div(space, w_other) + w_remainder = self.descr_mod(space, w_other) + return space.newtuple([w_quotient, w_remainder]) + + def descr_rdivmod(self, space, w_other): + w_quotient = self.descr_rdiv(space, w_other) + w_remainder = self.descr_rmod(space, w_other) + return space.newtuple([w_quotient, w_remainder]) def item(self, space): return self.get_dtype(space).itemtype.to_builtin_type(space, self) @@ -158,6 +186,10 @@ descr__new__, get_dtype = new_dtype_getter("float64") + at unwrap_spec(self=W_GenericBox) +def descr_index(space, self): + return space.index(self.item(space)) + W_GenericBox.typedef = TypeDef("generic", __module__ = "numpypy", @@ -165,7 +197,8 @@ __new__ = interp2app(W_GenericBox.descr__new__.im_func), __str__ = interp2app(W_GenericBox.descr_str), - __repr__ = interp2app(W_GenericBox.descr_repr), + __repr__ = interp2app(W_GenericBox.descr_str), + __format__ = interp2app(W_GenericBox.descr_format), __int__ = interp2app(W_GenericBox.descr_int), __float__ = interp2app(W_GenericBox.descr_float), __nonzero__ = interp2app(W_GenericBox.descr_nonzero), @@ -174,11 +207,29 @@ __sub__ = 
interp2app(W_GenericBox.descr_sub), __mul__ = interp2app(W_GenericBox.descr_mul), __div__ = interp2app(W_GenericBox.descr_div), + __truediv__ = interp2app(W_GenericBox.descr_truediv), + __mod__ = interp2app(W_GenericBox.descr_mod), + __divmod__ = interp2app(W_GenericBox.descr_divmod), __pow__ = interp2app(W_GenericBox.descr_pow), + __lshift__ = interp2app(W_GenericBox.descr_lshift), + __rshift__ = interp2app(W_GenericBox.descr_rshift), + __and__ = interp2app(W_GenericBox.descr_and), + __or__ = interp2app(W_GenericBox.descr_or), + __xor__ = interp2app(W_GenericBox.descr_xor), __radd__ = interp2app(W_GenericBox.descr_radd), __rsub__ = interp2app(W_GenericBox.descr_rsub), __rmul__ = interp2app(W_GenericBox.descr_rmul), + __rdiv__ = interp2app(W_GenericBox.descr_rdiv), + __rtruediv__ = interp2app(W_GenericBox.descr_rtruediv), + __rmod__ = interp2app(W_GenericBox.descr_rmod), + __rdivmod__ = interp2app(W_GenericBox.descr_rdivmod), + __rpow__ = interp2app(W_GenericBox.descr_rpow), + __rlshift__ = interp2app(W_GenericBox.descr_rlshift), + __rrshift__ = interp2app(W_GenericBox.descr_rrshift), + __rand__ = interp2app(W_GenericBox.descr_rand), + __ror__ = interp2app(W_GenericBox.descr_ror), + __rxor__ = interp2app(W_GenericBox.descr_rxor), __eq__ = interp2app(W_GenericBox.descr_eq), __ne__ = interp2app(W_GenericBox.descr_ne), @@ -187,8 +238,10 @@ __gt__ = interp2app(W_GenericBox.descr_gt), __ge__ = interp2app(W_GenericBox.descr_ge), + __pos__ = interp2app(W_GenericBox.descr_pos), __neg__ = interp2app(W_GenericBox.descr_neg), __abs__ = interp2app(W_GenericBox.descr_abs), + __invert__ = interp2app(W_GenericBox.descr_invert), tolist = interp2app(W_GenericBox.item), ) @@ -196,6 +249,8 @@ W_BoolBox.typedef = TypeDef("bool_", W_GenericBox.typedef, __module__ = "numpypy", __new__ = interp2app(W_BoolBox.descr__new__.im_func), + + __index__ = interp2app(descr_index), ) W_NumberBox.typedef = TypeDef("number", W_GenericBox.typedef, @@ -217,36 +272,43 @@ W_Int8Box.typedef = TypeDef("int8", W_SignedIntegerBox.typedef, __module__ = "numpypy", __new__ = interp2app(W_Int8Box.descr__new__.im_func), + __index__ = interp2app(descr_index), ) W_UInt8Box.typedef = TypeDef("uint8", W_UnsignedIntegerBox.typedef, __module__ = "numpypy", __new__ = interp2app(W_UInt8Box.descr__new__.im_func), + __index__ = interp2app(descr_index), ) W_Int16Box.typedef = TypeDef("int16", W_SignedIntegerBox.typedef, __module__ = "numpypy", __new__ = interp2app(W_Int16Box.descr__new__.im_func), + __index__ = interp2app(descr_index), ) W_UInt16Box.typedef = TypeDef("uint16", W_UnsignedIntegerBox.typedef, __module__ = "numpypy", __new__ = interp2app(W_UInt16Box.descr__new__.im_func), + __index__ = interp2app(descr_index), ) W_Int32Box.typedef = TypeDef("int32", (W_SignedIntegerBox.typedef,) + MIXIN_32, __module__ = "numpypy", __new__ = interp2app(W_Int32Box.descr__new__.im_func), + __index__ = interp2app(descr_index), ) W_UInt32Box.typedef = TypeDef("uint32", W_UnsignedIntegerBox.typedef, __module__ = "numpypy", __new__ = interp2app(W_UInt32Box.descr__new__.im_func), + __index__ = interp2app(descr_index), ) W_Int64Box.typedef = TypeDef("int64", (W_SignedIntegerBox.typedef,) + MIXIN_64, __module__ = "numpypy", __new__ = interp2app(W_Int64Box.descr__new__.im_func), + __index__ = interp2app(descr_index), ) if LONG_BIT == 32: @@ -259,6 +321,7 @@ W_UInt64Box.typedef = TypeDef("uint64", W_UnsignedIntegerBox.typedef, __module__ = "numpypy", __new__ = interp2app(W_UInt64Box.descr__new__.im_func), + __index__ = interp2app(descr_index), ) 
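
The descr_index/__index__ entries registered on the integer box types above let a _numpypy integer scalar be used directly as a sequence index, while float boxes stay rejected (compare test_long_as_index later in this changeset). An illustrative application-level sketch, assuming a PyPy build with the _numpypy module:

    from _numpypy import int_, float64

    seq = (10, 20, 30)
    assert seq[int_(1)] == 20        # integer boxes now provide __index__
    try:
        seq[float64(1)]              # float boxes still raise TypeError
    except TypeError:
        pass
    else:
        raise AssertionError("float64 must not be usable as an index")
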
W_InexactBox.typedef = TypeDef("inexact", W_NumberBox.typedef, diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -4,17 +4,17 @@ from pypy.interpreter.typedef import TypeDef, GetSetProperty from pypy.module.micronumpy import (interp_ufuncs, interp_dtype, interp_boxes, signature, support, loop) +from pypy.module.micronumpy.appbridge import get_appbridge_cache +from pypy.module.micronumpy.dot import multidim_dot, match_dot_shapes +from pypy.module.micronumpy.interp_iter import (ArrayIterator, + SkipLastAxisIterator, Chunk, ViewIterator) from pypy.module.micronumpy.strides import (calculate_slice_strides, shape_agreement, find_shape_and_elems, get_shape_from_iterable, calc_new_strides, to_coords) -from dot import multidim_dot, match_dot_shapes from pypy.rlib import jit +from pypy.rlib.rstring import StringBuilder from pypy.rpython.lltypesystem import lltype, rffi from pypy.tool.sourcetools import func_with_new_name -from pypy.rlib.rstring import StringBuilder -from pypy.module.micronumpy.interp_iter import (ArrayIterator, - SkipLastAxisIterator, Chunk, ViewIterator) -from pypy.module.micronumpy.appbridge import get_appbridge_cache count_driver = jit.JitDriver( @@ -101,8 +101,14 @@ descr_sub = _binop_impl("subtract") descr_mul = _binop_impl("multiply") descr_div = _binop_impl("divide") + descr_truediv = _binop_impl("true_divide") + descr_mod = _binop_impl("mod") descr_pow = _binop_impl("power") - descr_mod = _binop_impl("mod") + descr_lshift = _binop_impl("left_shift") + descr_rshift = _binop_impl("right_shift") + descr_and = _binop_impl("bitwise_and") + descr_or = _binop_impl("bitwise_or") + descr_xor = _binop_impl("bitwise_xor") descr_eq = _binop_impl("equal") descr_ne = _binop_impl("not_equal") @@ -111,8 +117,10 @@ descr_gt = _binop_impl("greater") descr_ge = _binop_impl("greater_equal") - descr_and = _binop_impl("bitwise_and") - descr_or = _binop_impl("bitwise_or") + def descr_divmod(self, space, w_other): + w_quotient = self.descr_div(space, w_other) + w_remainder = self.descr_mod(space, w_other) + return space.newtuple([w_quotient, w_remainder]) def _binop_right_impl(ufunc_name): def impl(self, space, w_other): @@ -127,8 +135,19 @@ descr_rsub = _binop_right_impl("subtract") descr_rmul = _binop_right_impl("multiply") descr_rdiv = _binop_right_impl("divide") + descr_rtruediv = _binop_right_impl("true_divide") + descr_rmod = _binop_right_impl("mod") descr_rpow = _binop_right_impl("power") - descr_rmod = _binop_right_impl("mod") + descr_rlshift = _binop_right_impl("left_shift") + descr_rrshift = _binop_right_impl("right_shift") + descr_rand = _binop_right_impl("bitwise_and") + descr_ror = _binop_right_impl("bitwise_or") + descr_rxor = _binop_right_impl("bitwise_xor") + + def descr_rdivmod(self, space, w_other): + w_quotient = self.descr_rdiv(space, w_other) + w_remainder = self.descr_rmod(space, w_other) + return space.newtuple([w_quotient, w_remainder]) def _reduce_ufunc_impl(ufunc_name, promote_to_largest=False): def impl(self, space, w_axis=None): @@ -1227,21 +1246,36 @@ __pos__ = interp2app(BaseArray.descr_pos), __neg__ = interp2app(BaseArray.descr_neg), __abs__ = interp2app(BaseArray.descr_abs), + __invert__ = interp2app(BaseArray.descr_invert), __nonzero__ = interp2app(BaseArray.descr_nonzero), __add__ = interp2app(BaseArray.descr_add), __sub__ = interp2app(BaseArray.descr_sub), __mul__ = interp2app(BaseArray.descr_mul), __div__ = 
interp2app(BaseArray.descr_div), + __truediv__ = interp2app(BaseArray.descr_truediv), + __mod__ = interp2app(BaseArray.descr_mod), + __divmod__ = interp2app(BaseArray.descr_divmod), __pow__ = interp2app(BaseArray.descr_pow), - __mod__ = interp2app(BaseArray.descr_mod), + __lshift__ = interp2app(BaseArray.descr_lshift), + __rshift__ = interp2app(BaseArray.descr_rshift), + __and__ = interp2app(BaseArray.descr_and), + __or__ = interp2app(BaseArray.descr_or), + __xor__ = interp2app(BaseArray.descr_xor), __radd__ = interp2app(BaseArray.descr_radd), __rsub__ = interp2app(BaseArray.descr_rsub), __rmul__ = interp2app(BaseArray.descr_rmul), __rdiv__ = interp2app(BaseArray.descr_rdiv), + __rtruediv__ = interp2app(BaseArray.descr_rtruediv), + __rmod__ = interp2app(BaseArray.descr_rmod), + __rdivmod__ = interp2app(BaseArray.descr_rdivmod), __rpow__ = interp2app(BaseArray.descr_rpow), - __rmod__ = interp2app(BaseArray.descr_rmod), + __rlshift__ = interp2app(BaseArray.descr_rlshift), + __rrshift__ = interp2app(BaseArray.descr_rrshift), + __rand__ = interp2app(BaseArray.descr_rand), + __ror__ = interp2app(BaseArray.descr_ror), + __rxor__ = interp2app(BaseArray.descr_rxor), __eq__ = interp2app(BaseArray.descr_eq), __ne__ = interp2app(BaseArray.descr_ne), @@ -1250,10 +1284,6 @@ __gt__ = interp2app(BaseArray.descr_gt), __ge__ = interp2app(BaseArray.descr_ge), - __and__ = interp2app(BaseArray.descr_and), - __or__ = interp2app(BaseArray.descr_or), - __invert__ = interp2app(BaseArray.descr_invert), - __repr__ = interp2app(BaseArray.descr_repr), __str__ = interp2app(BaseArray.descr_str), __array_interface__ = GetSetProperty(BaseArray.descr_array_iface), @@ -1267,6 +1297,7 @@ nbytes = GetSetProperty(BaseArray.descr_get_nbytes), T = GetSetProperty(BaseArray.descr_get_transpose), + transpose = interp2app(BaseArray.descr_get_transpose), flat = GetSetProperty(BaseArray.descr_get_flatiter), ravel = interp2app(BaseArray.descr_ravel), item = interp2app(BaseArray.descr_item), diff --git a/pypy/module/micronumpy/interp_support.py b/pypy/module/micronumpy/interp_support.py --- a/pypy/module/micronumpy/interp_support.py +++ b/pypy/module/micronumpy/interp_support.py @@ -3,7 +3,7 @@ from pypy.rpython.lltypesystem import lltype, rffi from pypy.module.micronumpy import interp_dtype from pypy.objspace.std.strutil import strip_spaces - +from pypy.rlib import jit FLOAT_SIZE = rffi.sizeof(lltype.Float) @@ -72,11 +72,20 @@ "string is smaller than requested size")) a = W_NDimArray(count, [count], dtype=dtype) - for i in range(count): + fromstring_loop(a, count, dtype, itemsize, s) + return space.wrap(a) + +fromstring_driver = jit.JitDriver(greens=[], reds=['count', 'i', 'itemsize', + 'dtype', 's', 'a']) + +def fromstring_loop(a, count, dtype, itemsize, s): + i = 0 + while i < count: + fromstring_driver.jit_merge_point(a=a, count=count, dtype=dtype, + itemsize=itemsize, s=s, i=i) val = dtype.itemtype.runpack_str(s[i*itemsize:i*itemsize + itemsize]) a.dtype.setitem(a.storage, i, val) - - return space.wrap(a) + i += 1 @unwrap_spec(s=str, count=int, sep=str) def fromstring(space, s, w_dtype=None, count=-1, sep=''): diff --git a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py --- a/pypy/module/micronumpy/interp_ufuncs.py +++ b/pypy/module/micronumpy/interp_ufuncs.py @@ -378,14 +378,17 @@ ("subtract", "sub", 2), ("multiply", "mul", 2, {"identity": 1}), ("bitwise_and", "bitwise_and", 2, {"identity": 1, - 'int_only': True}), + "int_only": True}), ("bitwise_or", "bitwise_or", 2, {"identity": 0, - 
'int_only': True}), + "int_only": True}), + ("bitwise_xor", "bitwise_xor", 2, {"int_only": True}), ("invert", "invert", 1, {"int_only": True}), ("divide", "div", 2, {"promote_bools": True}), ("true_divide", "div", 2, {"promote_to_float": True}), ("mod", "mod", 2, {"promote_bools": True}), ("power", "pow", 2, {"promote_bools": True}), + ("left_shift", "lshift", 2, {"int_only": True}), + ("right_shift", "rshift", 2, {"int_only": True}), ("equal", "eq", 2, {"comparison_func": True}), ("not_equal", "ne", 2, {"comparison_func": True}), @@ -427,7 +430,11 @@ ("arcsin", "arcsin", 1, {"promote_to_float": True}), ("arccos", "arccos", 1, {"promote_to_float": True}), ("arctan", "arctan", 1, {"promote_to_float": True}), + ("sinh", "sinh", 1, {"promote_to_float": True}), + ("cosh", "cosh", 1, {"promote_to_float": True}), + ("tanh", "tanh", 1, {"promote_to_float": True}), ("arcsinh", "arcsinh", 1, {"promote_to_float": True}), + ("arccosh", "arccosh", 1, {"promote_to_float": True}), ("arctanh", "arctanh", 1, {"promote_to_float": True}), ]: self.add_ufunc(space, *ufunc_def) diff --git a/pypy/module/micronumpy/test/test_dtypes.py b/pypy/module/micronumpy/test/test_dtypes.py --- a/pypy/module/micronumpy/test/test_dtypes.py +++ b/pypy/module/micronumpy/test/test_dtypes.py @@ -371,6 +371,8 @@ assert type(a[1]) is numpy.float64 assert numpy.dtype(float).type is numpy.float64 + assert "{:3f}".format(numpy.float64(3)) == "3.000000" + assert numpy.float64(2.0) == 2.0 assert numpy.float64('23.4') == numpy.float64(23.4) raises(ValueError, numpy.float64, '23.2df') @@ -387,9 +389,9 @@ assert b.m() == 12 def test_long_as_index(self): - skip("waiting for removal of multimethods of __index__") - from _numpypy import int_ + from _numpypy import int_, float64 assert (1, 2, 3)[int_(1)] == 2 + raises(TypeError, lambda: (1, 2, 3)[float64(1)]) def test_int(self): import sys @@ -401,3 +403,33 @@ else: assert issubclass(int64, int) assert int_ is int64 + + def test_operators(self): + from operator import truediv + from _numpypy import float64, int_, True_, False_ + + assert 5 / int_(2) == int_(2) + assert truediv(int_(3), int_(2)) == float64(1.5) + assert truediv(3, int_(2)) == float64(1.5) + assert int_(8) % int_(3) == int_(2) + assert 8 % int_(3) == int_(2) + assert divmod(int_(8), int_(3)) == (int_(2), int_(2)) + assert divmod(8, int_(3)) == (int_(2), int_(2)) + assert 2 ** int_(3) == int_(8) + assert int_(3) << int_(2) == int_(12) + assert 3 << int_(2) == int_(12) + assert int_(8) >> int_(2) == int_(2) + assert 8 >> int_(2) == int_(2) + assert int_(3) & int_(1) == int_(1) + assert 2 & int_(3) == int_(2) + assert int_(2) | int_(1) == int_(3) + assert 2 | int_(1) == int_(3) + assert int_(3) ^ int_(5) == int_(6) + assert True_ ^ False_ is True_ + assert 5 ^ int_(3) == int_(6) + + assert +int_(3) == int_(3) + assert ~int_(3) == int_(-4) + + raises(TypeError, lambda: float64(3) & 1) + diff --git a/pypy/module/micronumpy/test/test_module.py b/pypy/module/micronumpy/test/test_module.py --- a/pypy/module/micronumpy/test/test_module.py +++ b/pypy/module/micronumpy/test/test_module.py @@ -21,13 +21,3 @@ from _numpypy import array, max assert max(range(10)) == 9 assert max(array(range(10))) == 9 - - def test_constants(self): - import math - from _numpypy import inf, e, pi - assert type(inf) is float - assert inf == float("inf") - assert e == math.e - assert type(e) is float - assert pi == math.pi - assert type(pi) is float diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- 
a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -579,7 +579,7 @@ def test_div(self): from math import isnan - from _numpypy import array, dtype, inf + from _numpypy import array, dtype a = array(range(1, 6)) b = a / a @@ -600,15 +600,15 @@ a = array([-1.0, 0.0, 1.0]) b = array([0.0, 0.0, 0.0]) c = a / b - assert c[0] == -inf + assert c[0] == float('-inf') assert isnan(c[1]) - assert c[2] == inf + assert c[2] == float('inf') b = array([-0.0, -0.0, -0.0]) c = a / b - assert c[0] == inf + assert c[0] == float('inf') assert isnan(c[1]) - assert c[2] == -inf + assert c[2] == float('-inf') def test_div_other(self): from _numpypy import array @@ -625,6 +625,59 @@ for i in range(5): assert b[i] == i / 5.0 + def test_truediv(self): + from operator import truediv + from _numpypy import arange + + assert (truediv(arange(5), 2) == [0., .5, 1., 1.5, 2.]).all() + assert (truediv(2, arange(3)) == [float("inf"), 2., 1.]).all() + + def test_divmod(self): + from _numpypy import arange + + a, b = divmod(arange(10), 3) + assert (a == [0, 0, 0, 1, 1, 1, 2, 2, 2, 3]).all() + assert (b == [0, 1, 2, 0, 1, 2, 0, 1, 2, 0]).all() + + def test_rdivmod(self): + from _numpypy import arange + + a, b = divmod(3, arange(1, 5)) + assert (a == [3, 1, 1, 0]).all() + assert (b == [0, 1, 0, 3]).all() + + def test_lshift(self): + from _numpypy import array + + a = array([0, 1, 2, 3]) + assert (a << 2 == [0, 4, 8, 12]).all() + a = array([True, False]) + assert (a << 2 == [4, 0]).all() + a = array([1.0]) + raises(TypeError, lambda: a << 2) + + def test_rlshift(self): + from _numpypy import arange + + a = arange(3) + assert (2 << a == [2, 4, 8]).all() + + def test_rshift(self): + from _numpypy import arange, array + + a = arange(10) + assert (a >> 2 == [0, 0, 0, 0, 1, 1, 1, 1, 2, 2]).all() + a = array([True, False]) + assert (a >> 1 == [0, 0]).all() + a = arange(3, dtype=float) + raises(TypeError, lambda: a >> 1) + + def test_rrshift(self): + from _numpypy import arange + + a = arange(5) + assert (2 >> a == [2, 1, 0, 0, 0]).all() + def test_pow(self): from _numpypy import array a = array(range(5), float) @@ -678,6 +731,30 @@ for i in range(5): assert b[i] == i % 2 + def test_rand(self): + from _numpypy import arange + + a = arange(5) + assert (3 & a == [0, 1, 2, 3, 0]).all() + + def test_ror(self): + from _numpypy import arange + + a = arange(5) + assert (3 | a == [3, 3, 3, 3, 7]).all() + + def test_xor(self): + from _numpypy import arange + + a = arange(5) + assert (a ^ 3 == [3, 2, 1, 0, 7]).all() + + def test_rxor(self): + from _numpypy import arange + + a = arange(5) + assert (3 ^ a == [3, 2, 1, 0, 7]).all() + def test_pos(self): from _numpypy import array a = array([1., -2., 3., -4., -5.]) @@ -1410,6 +1487,7 @@ a = array((range(10), range(20, 30))) b = a.T assert(b[:, 0] == a[0, :]).all() + assert (a.transpose() == b).all() def test_flatiter(self): from _numpypy import array, flatiter, arange diff --git a/pypy/module/micronumpy/test/test_ufuncs.py b/pypy/module/micronumpy/test/test_ufuncs.py --- a/pypy/module/micronumpy/test/test_ufuncs.py +++ b/pypy/module/micronumpy/test/test_ufuncs.py @@ -310,14 +310,50 @@ b = arctan(a) assert math.isnan(b[0]) + def test_sinh(self): + import math + from _numpypy import array, sinh + + a = array([-1, 0, 1, float('inf'), float('-inf')]) + b = sinh(a) + for i in range(len(a)): + assert b[i] == math.sinh(a[i]) + + def test_cosh(self): + import math + from _numpypy import array, cosh + + a = array([-1, 0, 1, float('inf'), float('-inf')]) + b = 
cosh(a) + for i in range(len(a)): + assert b[i] == math.cosh(a[i]) + + def test_tanh(self): + import math + from _numpypy import array, tanh + + a = array([-1, 0, 1, float('inf'), float('-inf')]) + b = tanh(a) + for i in range(len(a)): + assert b[i] == math.tanh(a[i]) + def test_arcsinh(self): import math - from _numpypy import arcsinh, inf + from _numpypy import arcsinh - for v in [inf, -inf, 1.0, math.e]: + for v in [float('inf'), float('-inf'), 1.0, math.e]: assert math.asinh(v) == arcsinh(v) assert math.isnan(arcsinh(float("nan"))) + def test_arccosh(self): + import math + from _numpypy import arccosh + + for v in [1.0, 1.1, 2]: + assert math.acosh(v) == arccosh(v) + for v in [-1.0, 0, .99]: + assert math.isnan(arccosh(v)) + def test_arctanh(self): import math from _numpypy import arctanh @@ -367,15 +403,15 @@ b = add.reduce(a, 0, keepdims=True) assert b.shape == (1, 4) assert (add.reduce(a, 0, keepdims=True) == [12, 15, 18, 21]).all() - def test_bitwise(self): - from _numpypy import bitwise_and, bitwise_or, arange, array + from _numpypy import bitwise_and, bitwise_or, bitwise_xor, arange, array a = arange(6).reshape(2, 3) assert (a & 1 == [[0, 1, 0], [1, 0, 1]]).all() assert (a & 1 == bitwise_and(a, 1)).all() assert (a | 1 == [[1, 1, 3], [3, 5, 5]]).all() assert (a | 1 == bitwise_or(a, 1)).all() + assert (a ^ 3 == bitwise_xor(a, 3)).all() raises(TypeError, 'array([1.0]) & 1') def test_unary_bitops(self): @@ -416,7 +452,7 @@ assert count_reduce_items(a) == 24 assert count_reduce_items(a, 1) == 3 assert count_reduce_items(a, (1, 2)) == 3 * 4 - + def test_true_divide(self): from _numpypy import arange, array, true_divide assert (true_divide(arange(3), array([2, 2, 2])) == array([0, 0.5, 1])).all() diff --git a/pypy/module/micronumpy/test/test_zjit.py b/pypy/module/micronumpy/test/test_zjit.py --- a/pypy/module/micronumpy/test/test_zjit.py +++ b/pypy/module/micronumpy/test/test_zjit.py @@ -479,38 +479,3 @@ 'int_sub': 3, 'jump': 1, 'setinteriorfield_raw': 1}) - - -class TestNumpyOld(LLJitMixin): - def setup_class(cls): - py.test.skip("old") - from pypy.module.micronumpy.compile import FakeSpace - from pypy.module.micronumpy.interp_dtype import get_dtype_cache - - cls.space = FakeSpace() - cls.float64_dtype = get_dtype_cache(cls.space).w_float64dtype - - def test_int32_sum(self): - py.test.skip("pypy/jit/backend/llimpl.py needs to be changed to " - "deal correctly with int dtypes for this test to " - "work. 
skip for now until someone feels up to the task") - space = self.space - float64_dtype = self.float64_dtype - int32_dtype = self.int32_dtype - - def f(n): - if NonConstant(False): - dtype = float64_dtype - else: - dtype = int32_dtype - ar = W_NDimArray(n, [n], dtype=dtype) - i = 0 - while i < n: - ar.get_concrete().setitem(i, int32_dtype.box(7)) - i += 1 - v = ar.descr_add(space, ar).descr_sum(space) - assert isinstance(v, IntObject) - return v.intval - - result = self.meta_interp(f, [5], listops=True, backendopt=True) - assert result == f(5) diff --git a/pypy/module/micronumpy/types.py b/pypy/module/micronumpy/types.py --- a/pypy/module/micronumpy/types.py +++ b/pypy/module/micronumpy/types.py @@ -59,10 +59,6 @@ class BaseType(object): def _unimplemented_ufunc(self, *args): raise NotImplementedError - # add = sub = mul = div = mod = pow = eq = ne = lt = le = gt = ge = max = \ - # min = copysign = pos = neg = abs = sign = reciprocal = fabs = floor = \ - # exp = sin = cos = tan = arcsin = arccos = arctan = arcsinh = \ - # arctanh = _unimplemented_ufunc class Primitive(object): _mixin_ = True @@ -253,6 +249,10 @@ def bitwise_or(self, v1, v2): return v1 | v2 + @simple_binary_op + def bitwise_xor(self, v1, v2): + return v1 ^ v2 + @simple_unary_op def invert(self, v): return ~v @@ -295,6 +295,14 @@ v1 *= v1 return res + @simple_binary_op + def lshift(self, v1, v2): + return v1 << v2 + + @simple_binary_op + def rshift(self, v1, v2): + return v1 >> v2 + @simple_unary_op def sign(self, v): if v > 0: @@ -313,6 +321,10 @@ def bitwise_or(self, v1, v2): return v1 | v2 + @simple_binary_op + def bitwise_xor(self, v1, v2): + return v1 ^ v2 + @simple_unary_op def invert(self, v): return ~v @@ -477,10 +489,28 @@ return math.atan(v) @simple_unary_op + def sinh(self, v): + return math.sinh(v) + + @simple_unary_op + def cosh(self, v): + return math.cosh(v) + + @simple_unary_op + def tanh(self, v): + return math.tanh(v) + + @simple_unary_op def arcsinh(self, v): return math.asinh(v) @simple_unary_op + def arccosh(self, v): + if v < 1.0: + return rfloat.NAN + return math.acosh(v) + + @simple_unary_op def arctanh(self, v): if v == 1.0 or v == -1.0: return math.copysign(rfloat.INFINITY, v) diff --git a/pypy/module/oracle/interp_error.py b/pypy/module/oracle/interp_error.py --- a/pypy/module/oracle/interp_error.py +++ b/pypy/module/oracle/interp_error.py @@ -72,7 +72,7 @@ get(space).w_InternalError, space.wrap("No Oracle error?")) - self.code = codeptr[0] + self.code = rffi.cast(lltype.Signed, codeptr[0]) self.w_message = config.w_string(space, textbuf) finally: lltype.free(codeptr, flavor='raw') diff --git a/pypy/module/oracle/interp_variable.py b/pypy/module/oracle/interp_variable.py --- a/pypy/module/oracle/interp_variable.py +++ b/pypy/module/oracle/interp_variable.py @@ -359,14 +359,14 @@ # Verifies that truncation or other problems did not take place on # retrieve. 
if self.isVariableLength: - if rffi.cast(lltype.Signed, self.returnCode[pos]) != 0: + error_code = rffi.cast(lltype.Signed, self.returnCode[pos]) + if error_code != 0: error = W_Error(space, self.environment, "Variable_VerifyFetch()", 0) - error.code = self.returnCode[pos] + error.code = error_code error.message = space.wrap( "column at array pos %d fetched with error: %d" % - (pos, - rffi.cast(lltype.Signed, self.returnCode[pos]))) + (pos, error_code)) w_error = get(space).w_DatabaseError raise OperationError(get(space).w_DatabaseError, diff --git a/pypy/module/posix/interp_posix.py b/pypy/module/posix/interp_posix.py --- a/pypy/module/posix/interp_posix.py +++ b/pypy/module/posix/interp_posix.py @@ -48,7 +48,7 @@ return fsencode_w(self.space, self.w_obj) def as_unicode(self): - return self.space.unicode_w(self.w_obj) + return self.space.unicode0_w(self.w_obj) class FileDecoder(object): def __init__(self, space, w_obj): @@ -62,7 +62,7 @@ space = self.space w_unicode = space.call_method(self.w_obj, 'decode', getfilesystemencoding(space)) - return space.unicode_w(w_unicode) + return space.unicode0_w(w_unicode) @specialize.memo() def dispatch_filename(func, tag=0): @@ -541,10 +541,16 @@ dirname = FileEncoder(space, w_dirname) result = rposix.listdir(dirname) w_fs_encoding = getfilesystemencoding(space) - result_w = [ - space.call_method(space.wrapbytes(s), "decode", w_fs_encoding) - for s in result - ] + len_result = len(result) + result_w = [None] * len_result + for i in range(len_result): + w_bytes = space.wrapbytes(result[i]) + try: + result_w[i] = space.call_method(w_bytes, + "decode", w_fs_encoding) + except OperationError, e: + # fall back to the original byte string + result_w[i] = w_bytes else: dirname = space.str0_w(w_dirname) result = rposix.listdir(dirname) diff --git a/pypy/module/posix/test/test_posix2.py b/pypy/module/posix/test/test_posix2.py --- a/pypy/module/posix/test/test_posix2.py +++ b/pypy/module/posix/test/test_posix2.py @@ -29,6 +29,7 @@ mod.pdir = pdir unicode_dir = udir.ensure('fi\xc5\x9fier.txt', dir=True) unicode_dir.join('somefile').write('who cares?') + unicode_dir.join('caf\xe9').write('who knows?') mod.unicode_dir = unicode_dir # in applevel tests, os.stat uses the CPython os.stat. 
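
The listdir() change above decodes each directory entry with the filesystem encoding and keeps the raw byte string when decoding fails; the test_listdir_unicode hunk that follows exercises exactly this fallback. A sketch of the pattern in plain Python (decode_entries is an illustrative name, not part of the module):

    import sys

    def decode_entries(entries, encoding=None):
        encoding = encoding or sys.getfilesystemencoding()
        result = []
        for raw in entries:
            try:
                result.append(raw.decode(encoding))
            except UnicodeDecodeError:
                result.append(raw)   # fall back to the original bytes
        return result

    # b"caf\xe9" decodes under latin-1 but not under utf-8
    print(decode_entries([b"somefile", b"caf\xe9"]))
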
@@ -277,13 +278,21 @@ 'file2'] def test_listdir_unicode(self): + import sys unicode_dir = self.unicode_dir if unicode_dir is None: skip("encoding not good enough") posix = self.posix result = posix.listdir(unicode_dir) - result.sort() - assert result == ['somefile'] + typed_result = [(type(x), x) for x in result] + assert (str, 'somefile') in typed_result + try: + u = b"caf\xe9".decode(sys.getfilesystemencoding()) + except UnicodeDecodeError: + # Could not decode, listdir returned the byte string + assert (bytes, b"caf\xe9") in typed_result + else: + assert (str, u) in typed_result def test_access(self): pdir = self.pdir + '/file1' diff --git a/pypy/module/posix/test/test_ztranslation.py b/pypy/module/posix/test/test_ztranslation.py new file mode 100644 --- /dev/null +++ b/pypy/module/posix/test/test_ztranslation.py @@ -0,0 +1,4 @@ +from pypy.objspace.fake.checkmodule import checkmodule + +def test_posix_translates(): + checkmodule('posix') \ No newline at end of file diff --git a/pypy/module/pypyjit/__init__.py b/pypy/module/pypyjit/__init__.py --- a/pypy/module/pypyjit/__init__.py +++ b/pypy/module/pypyjit/__init__.py @@ -13,6 +13,7 @@ 'ResOperation': 'interp_resop.WrappedOp', 'DebugMergePoint': 'interp_resop.DebugMergePoint', 'Box': 'interp_resop.WrappedBox', + 'PARAMETER_DOCS': 'space.wrap(pypy.rlib.jit.PARAMETER_DOCS)', } def setup_after_space_initialization(self): diff --git a/pypy/module/pypyjit/test/test_jit_setup.py b/pypy/module/pypyjit/test/test_jit_setup.py --- a/pypy/module/pypyjit/test/test_jit_setup.py +++ b/pypy/module/pypyjit/test/test_jit_setup.py @@ -45,6 +45,12 @@ pypyjit.set_compile_hook(None) pypyjit.set_param('default') + def test_doc(self): + import pypyjit + d = pypyjit.PARAMETER_DOCS + assert type(d) is dict + assert 'threshold' in d + def test_interface_residual_call(): space = gettestobjspace(usemodules=['pypyjit']) diff --git a/pypy/module/pypyjit/test_pypy_c/test_call.py b/pypy/module/pypyjit/test_pypy_c/test_call.py --- a/pypy/module/pypyjit/test_pypy_c/test_call.py +++ b/pypy/module/pypyjit/test_pypy_c/test_call.py @@ -27,6 +27,7 @@ ... p53 = call_assembler(..., descr=...) guard_not_forced(descr=...) + keepalive(...) guard_no_exception(descr=...) ... 
""") diff --git a/pypy/module/test_lib_pypy/ctypes_tests/test_fastpath.py b/pypy/module/test_lib_pypy/ctypes_tests/test_fastpath.py --- a/pypy/module/test_lib_pypy/ctypes_tests/test_fastpath.py +++ b/pypy/module/test_lib_pypy/ctypes_tests/test_fastpath.py @@ -97,6 +97,16 @@ tf_b.errcheck = errcheck assert tf_b(-126) == 'hello' + def test_array_to_ptr(self): + ARRAY = c_int * 8 + func = dll._testfunc_ai8 + func.restype = POINTER(c_int) + func.argtypes = [ARRAY] + array = ARRAY(1, 2, 3, 4, 5, 6, 7, 8) + ptr = func(array) + assert ptr[0] == 1 + assert ptr[7] == 8 + class TestFallbackToSlowpath(BaseCTypesTestChecker): diff --git a/pypy/module/test_lib_pypy/ctypes_tests/test_prototypes.py b/pypy/module/test_lib_pypy/ctypes_tests/test_prototypes.py --- a/pypy/module/test_lib_pypy/ctypes_tests/test_prototypes.py +++ b/pypy/module/test_lib_pypy/ctypes_tests/test_prototypes.py @@ -246,6 +246,14 @@ def func(): pass CFUNCTYPE(None, c_int * 3)(func) + def test_array_to_ptr_wrongtype(self): + ARRAY = c_byte * 8 + func = testdll._testfunc_ai8 + func.restype = POINTER(c_int) + func.argtypes = [c_int * 8] + array = ARRAY(1, 2, 3, 4, 5, 6, 7, 8) + py.test.raises(ArgumentError, "func(array)") + ################################################################ if __name__ == '__main__': diff --git a/pypy/module/test_lib_pypy/numpypy/test_numpy.py b/pypy/module/test_lib_pypy/numpypy/test_numpy.py new file mode 100644 --- /dev/null +++ b/pypy/module/test_lib_pypy/numpypy/test_numpy.py @@ -0,0 +1,13 @@ +from pypy.conftest import gettestobjspace + +class AppTestNumpy: + def setup_class(cls): + cls.space = gettestobjspace(usemodules=['micronumpy']) + + def test_imports(self): + try: + import numpy # fails if 'numpypy' was not imported so far + except ImportError: + pass + import numpypy + import numpy # works after 'numpypy' has been imported diff --git a/pypy/module/test_lib_pypy/test_datetime.py b/pypy/module/test_lib_pypy/test_datetime.py --- a/pypy/module/test_lib_pypy/test_datetime.py +++ b/pypy/module/test_lib_pypy/test_datetime.py @@ -1,7 +1,10 @@ """Additional tests for datetime.""" +import py + import time -import datetime +from lib_pypy import datetime +import copy import os def test_utcfromtimestamp(): @@ -22,3 +25,22 @@ del os.environ["TZ"] else: os.environ["TZ"] = prev_tz + +def test_utcfromtimestamp_microsecond(): + dt = datetime.datetime.utcfromtimestamp(0) + assert isinstance(dt.microsecond, int) + + +def test_integer_args(): + with py.test.raises(TypeError): + datetime.datetime(10, 10, 10.) + with py.test.raises(TypeError): + datetime.datetime(10, 10, 10, 10, 10.) + with py.test.raises(TypeError): + datetime.datetime(10, 10, 10, 10, 10, 10.) 
+ +def test_utcnow_microsecond(): + dt = datetime.datetime.utcnow() + assert type(dt.microsecond) is int + + copy.copy(dt) diff --git a/pypy/module/zipimport/interp_zipimport.py b/pypy/module/zipimport/interp_zipimport.py --- a/pypy/module/zipimport/interp_zipimport.py +++ b/pypy/module/zipimport/interp_zipimport.py @@ -123,7 +123,9 @@ self.prefix = prefix def getprefix(self, space): - return space.wrap(self.prefix) + if ZIPSEP == os.path.sep: + return space.wrap(self.prefix) + return space.wrap(self.prefix.replace(ZIPSEP, os.path.sep)) def _find_relative_path(self, filename): if filename.startswith(self.filename): @@ -381,7 +383,7 @@ prefix = name[len(filename):] if prefix.startswith(os.path.sep) or prefix.startswith(ZIPSEP): prefix = prefix[1:] - if prefix and not prefix.endswith(ZIPSEP): + if prefix and not prefix.endswith(ZIPSEP) and not prefix.endswith(os.path.sep): prefix += ZIPSEP w_result = space.wrap(W_ZipImporter(space, name, filename, zip_file, prefix)) zip_cache.set(filename, w_result) diff --git a/pypy/module/zipimport/test/test_undocumented.py b/pypy/module/zipimport/test/test_undocumented.py --- a/pypy/module/zipimport/test/test_undocumented.py +++ b/pypy/module/zipimport/test/test_undocumented.py @@ -119,7 +119,7 @@ zip_importer = zipimport.zipimporter(path) assert isinstance(zip_importer, zipimport.zipimporter) assert zip_importer.archive == zip_path - assert zip_importer.prefix == prefix + assert zip_importer.prefix == prefix.replace('/', os.path.sep) assert zip_path in zipimport._zip_directory_cache finally: self.cleanup_zipfile(self.created_paths) diff --git a/pypy/module/zipimport/test/test_zipimport.py b/pypy/module/zipimport/test/test_zipimport.py --- a/pypy/module/zipimport/test/test_zipimport.py +++ b/pypy/module/zipimport/test/test_zipimport.py @@ -15,7 +15,7 @@ cpy's regression tests """ compression = ZIP_STORED - pathsep = '/' + pathsep = os.path.sep def make_pyc(cls, space, co, mtime): data = marshal.dumps(co) @@ -129,7 +129,7 @@ self.writefile('sub/__init__.py', '') self.writefile('sub/yy.py', '') from zipimport import _zip_directory_cache, zipimporter - sub_importer = zipimporter(self.zipfile + '/sub') + sub_importer = zipimporter(self.zipfile + os.path.sep + 'sub') main_importer = zipimporter(self.zipfile) assert main_importer is not sub_importer diff --git a/pypy/objspace/fake/objspace.py b/pypy/objspace/fake/objspace.py --- a/pypy/objspace/fake/objspace.py +++ b/pypy/objspace/fake/objspace.py @@ -330,4 +330,5 @@ return w_some_obj() FakeObjSpace.sys = FakeModule() FakeObjSpace.sys.filesystemencoding = 'foobar' +FakeObjSpace.sys.defaultencoding = 'ascii' FakeObjSpace.builtin = FakeModule() diff --git a/pypy/objspace/flow/flowcontext.py b/pypy/objspace/flow/flowcontext.py --- a/pypy/objspace/flow/flowcontext.py +++ b/pypy/objspace/flow/flowcontext.py @@ -410,7 +410,7 @@ w_new = Constant(newvalue) f = self.crnt_frame stack_items_w = f.locals_stack_w - for i in range(f.valuestackdepth-1, f.nlocals-1, -1): + for i in range(f.valuestackdepth-1, f.pycode.co_nlocals-1, -1): w_v = stack_items_w[i] if isinstance(w_v, Constant): if w_v.value is oldvalue: diff --git a/pypy/objspace/flow/test/test_framestate.py b/pypy/objspace/flow/test/test_framestate.py --- a/pypy/objspace/flow/test/test_framestate.py +++ b/pypy/objspace/flow/test/test_framestate.py @@ -25,7 +25,7 @@ dummy = Constant(None) #dummy.dummy = True arg_list = ([Variable() for i in range(formalargcount)] + - [dummy] * (frame.nlocals - formalargcount)) + [dummy] * (frame.pycode.co_nlocals - formalargcount)) 
frame.setfastscope(arg_list) return frame @@ -42,7 +42,7 @@ def test_neq_hacked_framestate(self): frame = self.getframe(self.func_simple) fs1 = FrameState(frame) - frame.locals_stack_w[frame.nlocals-1] = Variable() + frame.locals_stack_w[frame.pycode.co_nlocals-1] = Variable() fs2 = FrameState(frame) assert fs1 != fs2 @@ -55,7 +55,7 @@ def test_union_on_hacked_framestates(self): frame = self.getframe(self.func_simple) fs1 = FrameState(frame) - frame.locals_stack_w[frame.nlocals-1] = Variable() + frame.locals_stack_w[frame.pycode.co_nlocals-1] = Variable() fs2 = FrameState(frame) assert fs1.union(fs2) == fs2 # fs2 is more general assert fs2.union(fs1) == fs2 # fs2 is more general @@ -63,7 +63,7 @@ def test_restore_frame(self): frame = self.getframe(self.func_simple) fs1 = FrameState(frame) - frame.locals_stack_w[frame.nlocals-1] = Variable() + frame.locals_stack_w[frame.pycode.co_nlocals-1] = Variable() fs1.restoreframe(frame) assert fs1 == FrameState(frame) @@ -82,7 +82,7 @@ def test_getoutputargs(self): frame = self.getframe(self.func_simple) fs1 = FrameState(frame) - frame.locals_stack_w[frame.nlocals-1] = Variable() + frame.locals_stack_w[frame.pycode.co_nlocals-1] = Variable() fs2 = FrameState(frame) outputargs = fs1.getoutputargs(fs2) # 'x' -> 'x' is a Variable @@ -92,16 +92,16 @@ def test_union_different_constants(self): frame = self.getframe(self.func_simple) fs1 = FrameState(frame) - frame.locals_stack_w[frame.nlocals-1] = Constant(42) + frame.locals_stack_w[frame.pycode.co_nlocals-1] = Constant(42) fs2 = FrameState(frame) fs3 = fs1.union(fs2) fs3.restoreframe(frame) - assert isinstance(frame.locals_stack_w[frame.nlocals-1], Variable) - # ^^^ generalized + assert isinstance(frame.locals_stack_w[frame.pycode.co_nlocals-1], + Variable) # generalized def test_union_spectag(self): frame = self.getframe(self.func_simple) fs1 = FrameState(frame) - frame.locals_stack_w[frame.nlocals-1] = Constant(SpecTag()) + frame.locals_stack_w[frame.pycode.co_nlocals-1] = Constant(SpecTag()) fs2 = FrameState(frame) assert fs1.union(fs2) is None # UnionError diff --git a/pypy/objspace/std/dictmultiobject.py b/pypy/objspace/std/dictmultiobject.py --- a/pypy/objspace/std/dictmultiobject.py +++ b/pypy/objspace/std/dictmultiobject.py @@ -142,6 +142,17 @@ else: return result + def popitem(self, w_dict): + # this is a bad implementation: if we call popitem() repeatedly, + # it ends up taking n**2 time, because the next() calls below + # will take longer and longer. But all interesting strategies + # provide a better one. + space = self.space + iterator = self.iter(w_dict) + w_key, w_value = iterator.next() + self.delitem(w_dict, w_key) + return (w_key, w_value) + def clear(self, w_dict): strategy = self.space.fromcache(EmptyDictStrategy) storage = strategy.get_empty_storage() diff --git a/pypy/objspace/std/dictproxyobject.py b/pypy/objspace/std/dictproxyobject.py --- a/pypy/objspace/std/dictproxyobject.py +++ b/pypy/objspace/std/dictproxyobject.py @@ -3,7 +3,7 @@ from pypy.objspace.std.dictmultiobject import W_DictMultiObject, IteratorImplementation from pypy.objspace.std.dictmultiobject import DictStrategy from pypy.objspace.std.typeobject import unwrap_cell -from pypy.interpreter.error import OperationError +from pypy.interpreter.error import OperationError, operationerrfmt from pypy.rlib import rerased @@ -44,7 +44,8 @@ raise if not w_type.is_cpytype(): raise - # xxx obscure workaround: allow cpyext to write to type->tp_dict. 
+ # xxx obscure workaround: allow cpyext to write to type->tp_dict + # xxx even in the case of a builtin type. # xxx like CPython, we assume that this is only done early after # xxx the type is created, and we don't invalidate any cache. w_type.dict_w[key] = w_value @@ -86,8 +87,14 @@ for (key, w_value) in self.unerase(w_dict.dstorage).dict_w.iteritems()] def clear(self, w_dict): - self.unerase(w_dict.dstorage).dict_w.clear() - self.unerase(w_dict.dstorage).mutated(None) + space = self.space + w_type = self.unerase(w_dict.dstorage) + if (not space.config.objspace.std.mutable_builtintypes + and not w_type.is_heaptype()): + msg = "can't clear dictionary of type '%s'" + raise operationerrfmt(space.w_TypeError, msg, w_type.name) + w_type.dict_w.clear() + w_type.mutated(None) class DictProxyIteratorImplementation(IteratorImplementation): def __init__(self, space, strategy, dictimplementation): diff --git a/pypy/objspace/std/test/test_dictproxy.py b/pypy/objspace/std/test/test_dictproxy.py --- a/pypy/objspace/std/test/test_dictproxy.py +++ b/pypy/objspace/std/test/test_dictproxy.py @@ -22,6 +22,9 @@ assert NotEmpty.string == 1 raises(TypeError, 'NotEmpty.__dict__.setdefault(15, 1)') + key, value = NotEmpty.__dict__.popitem() + assert (key == 'a' and value == 1) or (key == 'b' and value == 4) + def test_dictproxyeq(self): class a(object): pass @@ -43,6 +46,11 @@ assert s1 == s2 assert s1.startswith('{') and s1.endswith('}') + def test_immutable_dict_on_builtin_type(self): + raises(TypeError, "int.__dict__['a'] = 1") + raises(TypeError, int.__dict__.popitem) + raises(TypeError, int.__dict__.clear) + class AppTestUserObjectMethodCache(AppTestUserObject): def setup_class(cls): cls.space = gettestobjspace( diff --git a/pypy/objspace/std/test/test_typeobject.py b/pypy/objspace/std/test/test_typeobject.py --- a/pypy/objspace/std/test/test_typeobject.py +++ b/pypy/objspace/std/test/test_typeobject.py @@ -993,7 +993,9 @@ raises(TypeError, setattr, list, 'append', 42) raises(TypeError, setattr, list, 'foobar', 42) raises(TypeError, delattr, dict, 'keys') - + raises(TypeError, 'int.__dict__["a"] = 1') + raises(TypeError, 'int.__dict__.clear()') + def test_nontype_in_mro(self): class OldStyle: pass diff --git a/pypy/rlib/debug.py b/pypy/rlib/debug.py --- a/pypy/rlib/debug.py +++ b/pypy/rlib/debug.py @@ -19,14 +19,22 @@ hop.exception_cannot_occur() hop.genop('debug_assert', vlist) -def fatalerror(msg, traceback=False): +def fatalerror(msg): + # print the RPython traceback and abort with a fatal error from pypy.rpython.lltypesystem import lltype from pypy.rpython.lltypesystem.lloperation import llop - if traceback: - llop.debug_print_traceback(lltype.Void) + llop.debug_print_traceback(lltype.Void) llop.debug_fatalerror(lltype.Void, msg) fatalerror._dont_inline_ = True -fatalerror._annspecialcase_ = 'specialize:arg(1)' +fatalerror._annenforceargs_ = [str] + +def fatalerror_notb(msg): + # a variant of fatalerror() that doesn't print the RPython traceback + from pypy.rpython.lltypesystem import lltype + from pypy.rpython.lltypesystem.lloperation import llop + llop.debug_fatalerror(lltype.Void, msg) +fatalerror_notb._dont_inline_ = True +fatalerror_notb._annenforceargs_ = [str] class DebugLog(list): diff --git a/pypy/rlib/jit.py b/pypy/rlib/jit.py --- a/pypy/rlib/jit.py +++ b/pypy/rlib/jit.py @@ -450,6 +450,7 @@ assert v in self.reds self._alllivevars = dict.fromkeys( [name for name in self.greens + self.reds if '.' 
not in name]) + self._heuristic_order = {} # check if 'reds' and 'greens' are ordered self._make_extregistryentries() self.get_jitcell_at = get_jitcell_at self.set_jitcell_at = set_jitcell_at @@ -461,13 +462,61 @@ def _freeze_(self): return True + def _check_arguments(self, livevars): + assert dict.fromkeys(livevars) == self._alllivevars + # check heuristically that 'reds' and 'greens' are ordered as + # the JIT will need them to be: first INTs, then REFs, then + # FLOATs. + if len(self._heuristic_order) < len(livevars): + from pypy.rlib.rarithmetic import (r_singlefloat, r_longlong, + r_ulonglong, r_uint) + added = False + for var, value in livevars.items(): + if var not in self._heuristic_order: + if (r_ulonglong is not r_uint and + isinstance(value, (r_longlong, r_ulonglong))): + assert 0, ("should not pass a r_longlong argument for " + "now, because on 32-bit machines it needs " + "to be ordered as a FLOAT but on 64-bit " + "machines as an INT") + elif isinstance(value, (int, long, r_singlefloat)): + kind = '1:INT' + elif isinstance(value, float): + kind = '3:FLOAT' + elif isinstance(value, (str, unicode)) and len(value) != 1: + kind = '2:REF' + elif isinstance(value, (list, dict)): + kind = '2:REF' + elif (hasattr(value, '__class__') + and value.__class__.__module__ != '__builtin__'): + if hasattr(value, '_freeze_'): + continue # value._freeze_() is better not called + elif getattr(value, '_alloc_flavor_', 'gc') == 'gc': + kind = '2:REF' + else: + kind = '1:INT' + else: + continue + self._heuristic_order[var] = kind + added = True + if added: + for color in ('reds', 'greens'): + lst = getattr(self, color) + allkinds = [self._heuristic_order.get(name, '?') + for name in lst] + kinds = [k for k in allkinds if k != '?'] + assert kinds == sorted(kinds), ( + "bad order of %s variables in the jitdriver: " + "must be INTs, REFs, FLOATs; got %r" % + (color, allkinds)) + def jit_merge_point(_self, **livevars): # special-cased by ExtRegistryEntry - assert dict.fromkeys(livevars) == _self._alllivevars + _self._check_arguments(livevars) def can_enter_jit(_self, **livevars): # special-cased by ExtRegistryEntry - assert dict.fromkeys(livevars) == _self._alllivevars + _self._check_arguments(livevars) def loop_header(self): # special-cased by ExtRegistryEntry diff --git a/pypy/rlib/libffi.py b/pypy/rlib/libffi.py --- a/pypy/rlib/libffi.py +++ b/pypy/rlib/libffi.py @@ -238,7 +238,7 @@ self = jit.promote(self) if argchain.numargs != len(self.argtypes): raise TypeError, 'Wrong number of arguments: %d expected, got %d' %\ - (argchain.numargs, len(self.argtypes)) + (len(self.argtypes), argchain.numargs) ll_args = self._prepare() i = 0 arg = argchain.first diff --git a/pypy/rlib/objectmodel.py b/pypy/rlib/objectmodel.py --- a/pypy/rlib/objectmodel.py +++ b/pypy/rlib/objectmodel.py @@ -23,9 +23,11 @@ class _Specialize(object): def memo(self): - """ Specialize functions based on argument values. All arguments has - to be constant at the compile time. The whole function call is replaced - by a call result then. + """ Specialize the function based on argument values. All arguments + have to be either constants or PBCs (i.e. instances of classes with a + _freeze_ method returning True). The function call is replaced by + just its result, or in case several PBCs are used, by some fast + look-up of the result. 
""" def decorated_func(func): func._annspecialcase_ = 'specialize:memo' @@ -33,8 +35,8 @@ return decorated_func def arg(self, *args): - """ Specialize function based on values of given positions of arguments. - They must be compile-time constants in order to work. + """ Specialize the function based on the values of given positions + of arguments. They must be compile-time constants in order to work. There will be a copy of provided function for each combination of given arguments on positions in args (that can lead to @@ -82,8 +84,7 @@ return decorated_func def ll_and_arg(self, *args): - """ This is like ll(), but instead of specializing on all arguments, - specializes on only the arguments at the given positions + """ This is like ll(), and additionally like arg(...). """ def decorated_func(func): func._annspecialcase_ = 'specialize:ll_and_arg' + self._wrap(args) diff --git a/pypy/rlib/ropenssl.py b/pypy/rlib/ropenssl.py --- a/pypy/rlib/ropenssl.py +++ b/pypy/rlib/ropenssl.py @@ -55,6 +55,7 @@ ASN1_STRING = lltype.Ptr(lltype.ForwardReference()) ASN1_ITEM = rffi.COpaquePtr('ASN1_ITEM') +ASN1_ITEM_EXP = lltype.Ptr(lltype.FuncType([], ASN1_ITEM)) X509_NAME = rffi.COpaquePtr('X509_NAME') class CConfig: @@ -109,12 +110,11 @@ X509_extension_st = rffi_platform.Struct( 'struct X509_extension_st', [('value', ASN1_STRING)]) - ASN1_ITEM_EXP = lltype.FuncType([], ASN1_ITEM) X509V3_EXT_D2I = lltype.FuncType([rffi.VOIDP, rffi.CCHARPP, rffi.LONG], rffi.VOIDP) v3_ext_method = rffi_platform.Struct( 'struct v3_ext_method', - [('it', lltype.Ptr(ASN1_ITEM_EXP)), + [('it', ASN1_ITEM_EXP), ('d2i', lltype.Ptr(X509V3_EXT_D2I))]) GENERAL_NAME_st = rffi_platform.Struct( 'struct GENERAL_NAME_st', @@ -126,6 +126,8 @@ ('block_size', rffi.INT)]) EVP_MD_SIZE = rffi_platform.SizeOf('EVP_MD') EVP_MD_CTX_SIZE = rffi_platform.SizeOf('EVP_MD_CTX') + OPENSSL_EXPORT_VAR_AS_FUNCTION = rffi_platform.Defined( + "OPENSSL_EXPORT_VAR_AS_FUNCTION") OBJ_NAME_st = rffi_platform.Struct( 'OBJ_NAME', @@ -250,7 +252,10 @@ ssl_external('i2a_ASN1_INTEGER', [BIO, ASN1_INTEGER], rffi.INT) ssl_external('ASN1_item_d2i', [rffi.VOIDP, rffi.CCHARPP, rffi.LONG, ASN1_ITEM], rffi.VOIDP) -ssl_external('ASN1_ITEM_ptr', [rffi.VOIDP], ASN1_ITEM, macro=True) +if OPENSSL_EXPORT_VAR_AS_FUNCTION: + ssl_external('ASN1_ITEM_ptr', [ASN1_ITEM_EXP], ASN1_ITEM, macro=True) +else: + ssl_external('ASN1_ITEM_ptr', [rffi.VOIDP], ASN1_ITEM, macro=True) ssl_external('sk_GENERAL_NAME_num', [GENERAL_NAMES], rffi.INT, macro=True) diff --git a/pypy/rlib/test/test_jit.py b/pypy/rlib/test/test_jit.py --- a/pypy/rlib/test/test_jit.py +++ b/pypy/rlib/test/test_jit.py @@ -2,6 +2,7 @@ from pypy.conftest import option from pypy.rlib.jit import hint, we_are_jitted, JitDriver, elidable_promote from pypy.rlib.jit import JitHintError, oopspec, isconstant +from pypy.rlib.rarithmetic import r_uint from pypy.translator.translator import TranslationContext, graphof from pypy.rpython.test.tool import BaseRtypingTest, LLRtypeMixin, OORtypeMixin from pypy.rpython.lltypesystem import lltype @@ -146,6 +147,43 @@ res = self.interpret(f, [-234]) assert res == 1 + def test_argument_order_ok(self): + myjitdriver = JitDriver(greens=['i1', 'r1', 'f1'], reds=[]) + class A(object): + pass + myjitdriver.jit_merge_point(i1=42, r1=A(), f1=3.5) + # assert did not raise + + def test_argument_order_wrong(self): + myjitdriver = JitDriver(greens=['r1', 'i1', 'f1'], reds=[]) + class A(object): + pass + e = raises(AssertionError, + myjitdriver.jit_merge_point, i1=42, r1=A(), f1=3.5) + + def 
test_argument_order_more_precision_later(self): + myjitdriver = JitDriver(greens=['r1', 'i1', 'r2', 'f1'], reds=[]) + class A(object): + pass + myjitdriver.jit_merge_point(i1=42, r1=None, r2=None, f1=3.5) + e = raises(AssertionError, + myjitdriver.jit_merge_point, i1=42, r1=A(), r2=None, f1=3.5) + assert "got ['2:REF', '1:INT', '?', '3:FLOAT']" in repr(e.value) + + def test_argument_order_more_precision_later_2(self): + myjitdriver = JitDriver(greens=['r1', 'i1', 'r2', 'f1'], reds=[]) + class A(object): + pass + myjitdriver.jit_merge_point(i1=42, r1=None, r2=A(), f1=3.5) + e = raises(AssertionError, + myjitdriver.jit_merge_point, i1=42, r1=A(), r2=None, f1=3.5) + assert "got ['2:REF', '1:INT', '2:REF', '3:FLOAT']" in repr(e.value) + + def test_argument_order_accept_r_uint(self): + # this used to fail on 64-bit, because r_uint == r_ulonglong + myjitdriver = JitDriver(greens=['i1'], reds=[]) + myjitdriver.jit_merge_point(i1=r_uint(42)) + class TestJITLLtype(BaseTestJIT, LLRtypeMixin): pass diff --git a/pypy/rpython/memory/gc/generation.py b/pypy/rpython/memory/gc/generation.py --- a/pypy/rpython/memory/gc/generation.py +++ b/pypy/rpython/memory/gc/generation.py @@ -41,8 +41,8 @@ # the following values override the default arguments of __init__ when # translating to a real backend. - TRANSLATION_PARAMS = {'space_size': 8*1024*1024, # XXX adjust - 'nursery_size': 896*1024, + TRANSLATION_PARAMS = {'space_size': 8*1024*1024, # 8 MB + 'nursery_size': 3*1024*1024, # 3 MB 'min_nursery_size': 48*1024, 'auto_nursery_size': True} @@ -92,8 +92,9 @@ # the GC is fully setup now. The rest can make use of it. if self.auto_nursery_size: newsize = nursery_size_from_env() - if newsize <= 0: - newsize = env.estimate_best_nursery_size() + #if newsize <= 0: + # ---disabled--- just use the default value. 
+ # newsize = env.estimate_best_nursery_size() if newsize > 0: self.set_nursery_size(newsize) diff --git a/pypy/rpython/module/ll_os.py b/pypy/rpython/module/ll_os.py --- a/pypy/rpython/module/ll_os.py +++ b/pypy/rpython/module/ll_os.py @@ -43,12 +43,15 @@ arglist = ['arg%d' % (i,) for i in range(len(signature))] transformed_arglist = arglist[:] for i, arg in enumerate(signature): - if arg is unicode: + if arg in (unicode, unicode0): transformed_arglist[i] = transformed_arglist[i] + '.as_unicode()' args = ', '.join(arglist) transformed_args = ', '.join(transformed_arglist) - main_arg = 'arg%d' % (signature.index(unicode),) + try: + main_arg = 'arg%d' % (signature.index(unicode0),) + except ValueError: + main_arg = 'arg%d' % (signature.index(unicode),) source = py.code.Source(""" def %(func_name)s(%(args)s): @@ -64,7 +67,7 @@ exec source.compile() in miniglobals new_func = miniglobals[func_name] specialized_args = [i for i in range(len(signature)) - if signature[i] in (unicode, None)] + if signature[i] in (unicode, unicode0, None)] new_func = specialize.argtype(*specialized_args)(new_func) # Monkeypatch the function in pypy.rlib.rposix @@ -823,7 +826,7 @@ def os_open_oofakeimpl(path, flags, mode): return os.open(OOSupport.from_rstr(path), flags, mode) - return extdef([str0, int, int], int, traits.ll_os_name('open'), + return extdef([traits.str0, int, int], int, traits.ll_os_name('open'), llimpl=os_open_llimpl, oofakeimpl=os_open_oofakeimpl) @registering_if(os, 'getloadavg') diff --git a/pypy/tool/jitlogparser/parser.py b/pypy/tool/jitlogparser/parser.py --- a/pypy/tool/jitlogparser/parser.py +++ b/pypy/tool/jitlogparser/parser.py @@ -387,7 +387,7 @@ m = re.search('guard \d+', comm) name = m.group(0) else: - name = comm[2:comm.find(':')-1] + name = " ".join(comm[2:].split(" ", 2)[:2]) if name in dumps: bname, start_ofs, dump = dumps[name] loop.force_asm = (lambda dump=dump, start_ofs=start_ofs, diff --git a/pypy/tool/release/package.py b/pypy/tool/release/package.py --- a/pypy/tool/release/package.py +++ b/pypy/tool/release/package.py @@ -60,7 +60,8 @@ if sys.platform == 'win32': # Can't rename a DLL: it is always called 'libpypy-c.dll' for extra in ['libpypy-c.dll', - 'libexpat.dll', 'sqlite3.dll', 'msvcr90.dll']: + 'libexpat.dll', 'sqlite3.dll', 'msvcr90.dll', + 'libeay32.dll', 'ssleay32.dll']: p = pypy_c.dirpath().join(extra) if not p.check(): p = py.path.local.sysfind(extra) @@ -81,6 +82,9 @@ for file in ['LICENSE', 'README']: shutil.copy(str(basedir.join(file)), str(pypydir)) pypydir.ensure('include', dir=True) + if sys.platform == 'win32': + shutil.copyfile(str(pypy_c.dirpath().join("libpypy-c.lib")), + str(pypydir.join('include/python27.lib'))) # we want to put there all *.h and *.inl from trunk/include # and from pypy/_interfaces includedir = basedir.join('include') @@ -125,7 +129,7 @@ zf.close() else: archive = str(builddir.join(name + '.tar.bz2')) - if sys.platform == 'darwin': + if sys.platform == 'darwin' or sys.platform.startswith('freebsd'): e = os.system('tar --numeric-owner -cvjf ' + archive + " " + name) else: e = os.system('tar --owner=root --group=root --numeric-owner -cvjf ' + archive + " " + name) diff --git a/pypy/translator/c/database.py b/pypy/translator/c/database.py --- a/pypy/translator/c/database.py +++ b/pypy/translator/c/database.py @@ -28,11 +28,13 @@ gctransformer = None def __init__(self, translator=None, standalone=False, + cpython_extension=False, gcpolicyclass=None, thread_enabled=False, sandbox=False): self.translator = translator self.standalone = 
standalone + self.cpython_extension = cpython_extension self.sandbox = sandbox if gcpolicyclass is None: gcpolicyclass = gc.RefcountingGcPolicy diff --git a/pypy/translator/c/dlltool.py b/pypy/translator/c/dlltool.py --- a/pypy/translator/c/dlltool.py +++ b/pypy/translator/c/dlltool.py @@ -14,11 +14,14 @@ CBuilder.__init__(self, *args, **kwds) def getentrypointptr(self): + entrypoints = [] bk = self.translator.annotator.bookkeeper - graphs = [bk.getdesc(f).cachedgraph(None) for f, _ in self.functions] - return [getfunctionptr(graph) for graph in graphs] + for f, _ in self.functions: + graph = bk.getdesc(f).getuniquegraph() + entrypoints.append(getfunctionptr(graph)) + return entrypoints - def gen_makefile(self, targetdir): + def gen_makefile(self, targetdir, exe_name=None): pass # XXX finish def compile(self): diff --git a/pypy/translator/c/extfunc.py b/pypy/translator/c/extfunc.py --- a/pypy/translator/c/extfunc.py +++ b/pypy/translator/c/extfunc.py @@ -106,7 +106,7 @@ yield ('RPYTHON_EXCEPTION_MATCH', exceptiondata.fn_exception_match) yield ('RPYTHON_TYPE_OF_EXC_INST', exceptiondata.fn_type_of_exc_inst) yield ('RPYTHON_RAISE_OSERROR', exceptiondata.fn_raise_OSError) - if not db.standalone: + if db.cpython_extension: yield ('RPYTHON_PYEXCCLASS2EXC', exceptiondata.fn_pyexcclass2exc) yield ('RPyExceptionOccurred1', exctransformer.rpyexc_occured_ptr.value) diff --git a/pypy/translator/c/gc.py b/pypy/translator/c/gc.py --- a/pypy/translator/c/gc.py +++ b/pypy/translator/c/gc.py @@ -11,7 +11,6 @@ from pypy.translator.tool.cbuild import ExternalCompilationInfo class BasicGcPolicy(object): - stores_hash_at_the_end = False def __init__(self, db, thread_enabled=False): self.db = db @@ -47,8 +46,7 @@ return ExternalCompilationInfo( pre_include_bits=['/* using %s */' % (gct.__class__.__name__,), '#define MALLOC_ZERO_FILLED %d' % (gct.malloc_zero_filled,), - ], - post_include_bits=['typedef void *GC_hidden_pointer;'] + ] ) def get_prebuilt_hash(self, obj): @@ -308,7 +306,6 @@ class FrameworkGcPolicy(BasicGcPolicy): transformerclass = framework.FrameworkGCTransformer - stores_hash_at_the_end = True def struct_setup(self, structdefnode, rtti): if rtti is not None and hasattr(rtti._obj, 'destructor_funcptr'): diff --git a/pypy/translator/c/gcc/trackgcroot.py b/pypy/translator/c/gcc/trackgcroot.py --- a/pypy/translator/c/gcc/trackgcroot.py +++ b/pypy/translator/c/gcc/trackgcroot.py @@ -471,19 +471,22 @@ return [] IGNORE_OPS_WITH_PREFIXES = dict.fromkeys([ - 'cmp', 'test', 'set', 'sahf', 'lahf', 'cltd', 'cld', 'std', - 'rep', 'movs', 'lods', 'stos', 'scas', 'cwtl', 'cwde', 'prefetch', + 'cmp', 'test', 'set', 'sahf', 'lahf', 'cld', 'std', + 'rep', 'movs', 'movhp', 'lods', 'stos', 'scas', 'cwde', 'prefetch', # floating-point operations cannot produce GC pointers 'f', 'cvt', 'ucomi', 'comi', 'subs', 'subp' , 'adds', 'addp', 'xorp', 'movap', 'movd', 'movlp', 'sqrtsd', 'movhpd', 'mins', 'minp', 'maxs', 'maxp', 'unpck', 'pxor', 'por', # sse2 + 'shufps', 'shufpd', # arithmetic operations should not produce GC pointers 'inc', 'dec', 'not', 'neg', 'or', 'and', 'sbb', 'adc', 'shl', 'shr', 'sal', 'sar', 'rol', 'ror', 'mul', 'imul', 'div', 'idiv', 'bswap', 'bt', 'rdtsc', 'punpck', 'pshufd', 'pcmp', 'pand', 'psllw', 'pslld', 'psllq', - 'paddq', 'pinsr', + 'paddq', 'pinsr', 'pmul', 'psrl', + # sign-extending moves should not produce GC pointers + 'cbtw', 'cwtl', 'cwtd', 'cltd', 'cltq', 'cqto', # zero-extending moves should not produce GC pointers 'movz', # locked operations should not move GC pointers, at least so far 
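
The mnemonics added to IGNORE_OPS_WITH_PREFIXES above (sign-extending moves, SSE shuffles, and so on) are entries matched by prefix against instruction names. An illustrative sketch of that idea only, not the real lookup in trackgcroot.py (is_ignored and IGNORE_PREFIXES are made-up names):

    IGNORE_PREFIXES = {'movz': None, 'cvt': None, 'shufps': None, 'cltq': None}

    def is_ignored(opname):
        # strip characters from the end until a known prefix remains
        prefix = opname
        while prefix and prefix not in IGNORE_PREFIXES:
            prefix = prefix[:-1]
        return bool(prefix)

    assert is_ignored('movzbl')      # zero-extending move: no GC pointer produced
    assert is_ignored('shufps')      # SSE shuffle added by this change
    assert not is_ignored('call')    # calls need real GC-root tracking
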
@@ -1694,6 +1697,8 @@ } """ elif self.format in ('elf64', 'darwin64'): + if self.format == 'elf64': # gentoo patch: hardened systems + print >> output, "\t.section .note.GNU-stack,\"\",%progbits" print >> output, "\t.text" print >> output, "\t.globl %s" % _globalname('pypy_asm_stackwalk') _variant(elf64='.type pypy_asm_stackwalk, @function', diff --git a/pypy/translator/c/genc.py b/pypy/translator/c/genc.py --- a/pypy/translator/c/genc.py +++ b/pypy/translator/c/genc.py @@ -111,6 +111,7 @@ _compiled = False modulename = None split = False + cpython_extension = False def __init__(self, translator, entrypoint, config, gcpolicy=None, secondary_entrypoints=()): @@ -138,6 +139,7 @@ raise NotImplementedError("--gcrootfinder=asmgcc requires standalone") db = LowLevelDatabase(translator, standalone=self.standalone, + cpython_extension=self.cpython_extension, gcpolicyclass=gcpolicyclass, thread_enabled=self.config.translation.thread, sandbox=self.config.translation.sandbox) @@ -236,6 +238,8 @@ CBuilder.have___thread = self.translator.platform.check___thread() if not self.standalone: assert not self.config.translation.instrument + if self.cpython_extension: + defines['PYPY_CPYTHON_EXTENSION'] = 1 else: defines['PYPY_STANDALONE'] = db.get(pf) if self.config.translation.instrument: @@ -307,13 +311,18 @@ class CExtModuleBuilder(CBuilder): standalone = False + cpython_extension = True _module = None _wrapper = None def get_eci(self): from distutils import sysconfig python_inc = sysconfig.get_python_inc() - eci = ExternalCompilationInfo(include_dirs=[python_inc]) + eci = ExternalCompilationInfo( + include_dirs=[python_inc], + includes=["Python.h", + ], + ) return eci.merge(CBuilder.get_eci(self)) def getentrypointptr(self): # xxx diff --git a/pypy/translator/c/src/exception.h b/pypy/translator/c/src/exception.h --- a/pypy/translator/c/src/exception.h +++ b/pypy/translator/c/src/exception.h @@ -2,7 +2,7 @@ /************************************************************/ /*** C header subsection: exceptions ***/ -#if !defined(PYPY_STANDALONE) && !defined(PYPY_NOT_MAIN_FILE) +#if defined(PYPY_CPYTHON_EXTENSION) && !defined(PYPY_NOT_MAIN_FILE) PyObject *RPythonError; #endif @@ -74,7 +74,7 @@ RPyRaiseException(RPYTHON_TYPE_OF_EXC_INST(rexc), rexc); } -#ifndef PYPY_STANDALONE +#ifdef PYPY_CPYTHON_EXTENSION void RPyConvertExceptionFromCPython(void) { /* convert the CPython exception to an RPython one */ diff --git a/pypy/translator/c/src/g_include.h b/pypy/translator/c/src/g_include.h --- a/pypy/translator/c/src/g_include.h +++ b/pypy/translator/c/src/g_include.h @@ -2,7 +2,7 @@ /************************************************************/ /*** C header file for code produced by genc.py ***/ -#ifndef PYPY_STANDALONE +#ifdef PYPY_CPYTHON_EXTENSION # include "Python.h" # include "compile.h" # include "frameobject.h" diff --git a/pypy/translator/c/src/g_prerequisite.h b/pypy/translator/c/src/g_prerequisite.h --- a/pypy/translator/c/src/g_prerequisite.h +++ b/pypy/translator/c/src/g_prerequisite.h @@ -5,8 +5,6 @@ #ifdef PYPY_STANDALONE # include "src/commondefs.h" -#else -# include "Python.h" #endif #ifdef _WIN32 diff --git a/pypy/translator/c/src/pyobj.h b/pypy/translator/c/src/pyobj.h --- a/pypy/translator/c/src/pyobj.h +++ b/pypy/translator/c/src/pyobj.h @@ -2,7 +2,7 @@ /************************************************************/ /*** C header subsection: untyped operations ***/ /*** as OP_XXX() macros calling the CPython API ***/ - +#ifdef PYPY_CPYTHON_EXTENSION #define op_bool(r,what) { \ int _retval = what; 
\ @@ -261,3 +261,5 @@ } #endif + +#endif /* PYPY_CPYTHON_EXTENSION */ diff --git a/pypy/translator/c/src/support.h b/pypy/translator/c/src/support.h --- a/pypy/translator/c/src/support.h +++ b/pypy/translator/c/src/support.h @@ -104,7 +104,7 @@ # define RPyBareItem(array, index) ((array)[index]) #endif -#ifndef PYPY_STANDALONE +#ifdef PYPY_CPYTHON_EXTENSION /* prototypes */ diff --git a/pypy/translator/c/test/test_dlltool.py b/pypy/translator/c/test/test_dlltool.py --- a/pypy/translator/c/test/test_dlltool.py +++ b/pypy/translator/c/test/test_dlltool.py @@ -2,7 +2,6 @@ from pypy.translator.c.dlltool import DLLDef from ctypes import CDLL import py -py.test.skip("fix this if needed") class TestDLLTool(object): def test_basic(self): @@ -16,8 +15,8 @@ d = DLLDef('lib', [(f, [int]), (b, [int])]) so = d.compile() dll = CDLL(str(so)) - assert dll.f(3) == 3 - assert dll.b(10) == 12 + assert dll.pypy_g_f(3) == 3 + assert dll.pypy_g_b(10) == 12 def test_split_criteria(self): def f(x): @@ -28,4 +27,5 @@ d = DLLDef('lib', [(f, [int]), (b, [int])]) so = d.compile() - assert py.path.local(so).dirpath().join('implement.c').check() + dirpath = py.path.local(so).dirpath() + assert dirpath.join('translator_c_test_test_dlltool.c').check() diff --git a/pypy/translator/driver.py b/pypy/translator/driver.py --- a/pypy/translator/driver.py +++ b/pypy/translator/driver.py @@ -331,6 +331,7 @@ raise Exception("stand-alone program entry point must return an " "int (and not, e.g., None or always raise an " "exception).") + annotator.complete() annotator.simplify() return s @@ -558,6 +559,9 @@ newsoname = newexename.new(basename=soname.basename) shutil.copy(str(soname), str(newsoname)) self.log.info("copied: %s" % (newsoname,)) + if sys.platform == 'win32': + shutil.copyfile(str(soname.new(ext='lib')), + str(newsoname.new(ext='lib'))) self.c_entryp = newexename self.log.info('usession directory: %s' % (udir,)) self.log.info("created: %s" % (self.c_entryp,)) diff --git a/pypy/translator/goal/app_main.py b/pypy/translator/goal/app_main.py --- a/pypy/translator/goal/app_main.py +++ b/pypy/translator/goal/app_main.py @@ -141,8 +141,14 @@ items = list(pypyjit.defaults.items()) items.sort() for key, value in items: - print(' --jit %s=N %s%s (default %s)' % ( - key, ' '*(18-len(key)), pypyjit.PARAMETER_DOCS[key], value)) + prefix = ' --jit %s=N %s' % (key, ' '*(18-len(key))) + doc = '%s (default %s)' % (pypyjit.PARAMETER_DOCS[key], value) + while len(doc) > 51: + i = doc[:51].rfind(' ') + print(prefix + doc[:i]) + doc = doc[i+1:] + prefix = ' '*len(prefix) + print(prefix + doc) print(' --jit off turn off the JIT') def print_version(*args): From noreply at buildbot.pypy.org Fri Feb 24 09:59:41 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Fri, 24 Feb 2012 09:59:41 +0100 (CET) Subject: [pypy-commit] pypy default: don't init() the builtin modules at space.startup() if they have already been initialized before Message-ID: <20120224085941.7876482366@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: Changeset: r52833:a82f18539bbb Date: 2012-02-24 09:55 +0100 http://bitbucket.org/pypy/pypy/changeset/a82f18539bbb/ Log: don't init() the builtin modules at space.startup() if they have already been initialized before diff --git a/pypy/interpreter/baseobjspace.py b/pypy/interpreter/baseobjspace.py --- a/pypy/interpreter/baseobjspace.py +++ b/pypy/interpreter/baseobjspace.py @@ -328,7 +328,7 @@ raise modname = self.str_w(w_modname) mod = self.interpclass_w(w_mod) - if isinstance(mod, Module): + if isinstance(mod, 
Module) and not mod.startup_called: self.timer.start("startup " + modname) mod.init(self) self.timer.stop("startup " + modname) diff --git a/pypy/interpreter/test/test_objspace.py b/pypy/interpreter/test/test_objspace.py --- a/pypy/interpreter/test/test_objspace.py +++ b/pypy/interpreter/test/test_objspace.py @@ -322,3 +322,14 @@ space.ALL_BUILTIN_MODULES.pop() del space._builtinmodule_list mods = space.get_builtinmodule_to_install() + + def test_dont_reload_builtin_mods_on_startup(self): + from pypy.tool.option import make_config, make_objspace + config = make_config(None) + space = make_objspace(config) + w_executable = space.wrap('executable') + assert space.str_w(space.getattr(space.sys, w_executable)) == 'py.py' + space.setattr(space.sys, w_executable, space.wrap('foobar')) + assert space.str_w(space.getattr(space.sys, w_executable)) == 'foobar' + space.startup() + assert space.str_w(space.getattr(space.sys, w_executable)) == 'foobar' From noreply at buildbot.pypy.org Fri Feb 24 09:59:42 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Fri, 24 Feb 2012 09:59:42 +0100 (CET) Subject: [pypy-commit] pypy default: pass -S to all invocations of py.py in this test, it produces a big speedup Message-ID: <20120224085942.A58D282366@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: Changeset: r52834:bd096c33e5cb Date: 2012-02-24 09:58 +0100 http://bitbucket.org/pypy/pypy/changeset/bd096c33e5cb/ Log: pass -S to all invocations of py.py in this test, it produces a big speedup diff --git a/pypy/interpreter/test/test_zpy.py b/pypy/interpreter/test/test_zpy.py --- a/pypy/interpreter/test/test_zpy.py +++ b/pypy/interpreter/test/test_zpy.py @@ -17,14 +17,14 @@ def test_executable(): """Ensures sys.executable points to the py.py script""" # TODO : watch out for spaces/special chars in pypypath - output = run(sys.executable, pypypath, + output = run(sys.executable, pypypath, '-S', "-c", "import sys;print sys.executable") assert output.splitlines()[-1] == pypypath def test_special_names(): """Test the __name__ and __file__ special global names""" cmd = "print __name__; print '__file__' in globals()" - output = run(sys.executable, pypypath, '-c', cmd) + output = run(sys.executable, pypypath, '-S', '-c', cmd) assert output.splitlines()[-2] == '__main__' assert output.splitlines()[-1] == 'False' @@ -33,24 +33,24 @@ tmpfile.write("print __name__; print __file__\n") tmpfile.close() - output = run(sys.executable, pypypath, tmpfilepath) + output = run(sys.executable, pypypath, '-S', tmpfilepath) assert output.splitlines()[-2] == '__main__' assert output.splitlines()[-1] == str(tmpfilepath) def test_argv_command(): """Some tests on argv""" # test 1 : no arguments - output = run(sys.executable, pypypath, + output = run(sys.executable, pypypath, '-S', "-c", "import sys;print sys.argv") assert output.splitlines()[-1] == str(['-c']) # test 2 : some arguments after - output = run(sys.executable, pypypath, + output = run(sys.executable, pypypath, '-S', "-c", "import sys;print sys.argv", "hello") assert output.splitlines()[-1] == str(['-c','hello']) # test 3 : additionnal pypy parameters - output = run(sys.executable, pypypath, + output = run(sys.executable, pypypath, '-S', "-O", "-c", "import sys;print sys.argv", "hello") assert output.splitlines()[-1] == str(['-c','hello']) @@ -65,15 +65,15 @@ tmpfile.close() # test 1 : no arguments - output = run(sys.executable, pypypath, tmpfilepath) + output = run(sys.executable, pypypath, '-S', tmpfilepath) assert output.splitlines()[-1] == str([tmpfilepath]) # 
test 2 : some arguments after - output = run(sys.executable, pypypath, tmpfilepath, "hello") + output = run(sys.executable, pypypath, '-S', tmpfilepath, "hello") assert output.splitlines()[-1] == str([tmpfilepath,'hello']) # test 3 : additionnal pypy parameters - output = run(sys.executable, pypypath, "-O", tmpfilepath, "hello") + output = run(sys.executable, pypypath, '-S', "-O", tmpfilepath, "hello") assert output.splitlines()[-1] == str([tmpfilepath,'hello']) From noreply at buildbot.pypy.org Fri Feb 24 09:59:43 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Fri, 24 Feb 2012 09:59:43 +0100 (CET) Subject: [pypy-commit] pypy default: merge default Message-ID: <20120224085943.D7C5782366@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: Changeset: r52835:c6b3b1bc1eee Date: 2012-02-24 09:59 +0100 http://bitbucket.org/pypy/pypy/changeset/c6b3b1bc1eee/ Log: merge default diff --git a/pypy/interpreter/baseobjspace.py b/pypy/interpreter/baseobjspace.py --- a/pypy/interpreter/baseobjspace.py +++ b/pypy/interpreter/baseobjspace.py @@ -328,7 +328,7 @@ raise modname = self.str_w(w_modname) mod = self.interpclass_w(w_mod) - if isinstance(mod, Module): + if isinstance(mod, Module) and not mod.startup_called: self.timer.start("startup " + modname) mod.init(self) self.timer.stop("startup " + modname) diff --git a/pypy/interpreter/test/test_objspace.py b/pypy/interpreter/test/test_objspace.py --- a/pypy/interpreter/test/test_objspace.py +++ b/pypy/interpreter/test/test_objspace.py @@ -322,3 +322,14 @@ space.ALL_BUILTIN_MODULES.pop() del space._builtinmodule_list mods = space.get_builtinmodule_to_install() + + def test_dont_reload_builtin_mods_on_startup(self): + from pypy.tool.option import make_config, make_objspace + config = make_config(None) + space = make_objspace(config) + w_executable = space.wrap('executable') + assert space.str_w(space.getattr(space.sys, w_executable)) == 'py.py' + space.setattr(space.sys, w_executable, space.wrap('foobar')) + assert space.str_w(space.getattr(space.sys, w_executable)) == 'foobar' + space.startup() + assert space.str_w(space.getattr(space.sys, w_executable)) == 'foobar' diff --git a/pypy/interpreter/test/test_zpy.py b/pypy/interpreter/test/test_zpy.py --- a/pypy/interpreter/test/test_zpy.py +++ b/pypy/interpreter/test/test_zpy.py @@ -17,14 +17,14 @@ def test_executable(): """Ensures sys.executable points to the py.py script""" # TODO : watch out for spaces/special chars in pypypath - output = run(sys.executable, pypypath, + output = run(sys.executable, pypypath, '-S', "-c", "import sys;print sys.executable") assert output.splitlines()[-1] == pypypath def test_special_names(): """Test the __name__ and __file__ special global names""" cmd = "print __name__; print '__file__' in globals()" - output = run(sys.executable, pypypath, '-c', cmd) + output = run(sys.executable, pypypath, '-S', '-c', cmd) assert output.splitlines()[-2] == '__main__' assert output.splitlines()[-1] == 'False' @@ -33,24 +33,24 @@ tmpfile.write("print __name__; print __file__\n") tmpfile.close() - output = run(sys.executable, pypypath, tmpfilepath) + output = run(sys.executable, pypypath, '-S', tmpfilepath) assert output.splitlines()[-2] == '__main__' assert output.splitlines()[-1] == str(tmpfilepath) def test_argv_command(): """Some tests on argv""" # test 1 : no arguments - output = run(sys.executable, pypypath, + output = run(sys.executable, pypypath, '-S', "-c", "import sys;print sys.argv") assert output.splitlines()[-1] == str(['-c']) # test 2 : some arguments after - 
output = run(sys.executable, pypypath, + output = run(sys.executable, pypypath, '-S', "-c", "import sys;print sys.argv", "hello") assert output.splitlines()[-1] == str(['-c','hello']) # test 3 : additionnal pypy parameters - output = run(sys.executable, pypypath, + output = run(sys.executable, pypypath, '-S', "-O", "-c", "import sys;print sys.argv", "hello") assert output.splitlines()[-1] == str(['-c','hello']) @@ -65,15 +65,15 @@ tmpfile.close() # test 1 : no arguments - output = run(sys.executable, pypypath, tmpfilepath) + output = run(sys.executable, pypypath, '-S', tmpfilepath) assert output.splitlines()[-1] == str([tmpfilepath]) # test 2 : some arguments after - output = run(sys.executable, pypypath, tmpfilepath, "hello") + output = run(sys.executable, pypypath, '-S', tmpfilepath, "hello") assert output.splitlines()[-1] == str([tmpfilepath,'hello']) # test 3 : additionnal pypy parameters - output = run(sys.executable, pypypath, "-O", tmpfilepath, "hello") + output = run(sys.executable, pypypath, '-S', "-O", tmpfilepath, "hello") assert output.splitlines()[-1] == str([tmpfilepath,'hello']) From noreply at buildbot.pypy.org Fri Feb 24 10:14:24 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Fri, 24 Feb 2012 10:14:24 +0100 (CET) Subject: [pypy-commit] pypy default: ignore IOError()s when flushing the files at exit Message-ID: <20120224091424.04DEF82366@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: Changeset: r52836:9f0e8a37712b Date: 2012-02-24 10:09 +0100 http://bitbucket.org/pypy/pypy/changeset/9f0e8a37712b/ Log: ignore IOError()s when flushing the files at exit diff --git a/pypy/module/_io/interp_iobase.py b/pypy/module/_io/interp_iobase.py --- a/pypy/module/_io/interp_iobase.py +++ b/pypy/module/_io/interp_iobase.py @@ -323,7 +323,12 @@ def autoflush(self, space): w_iobase = self.w_iobase_ref() if w_iobase is not None: - space.call_method(w_iobase, 'flush') # XXX: ignore IOErrors? 
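# Note on the replacement lines that follow: OperationError is the interp-level
# wrapper around an application-level exception, and e.match(space,
# space.w_IOError) is the interp-level spelling of "except IOError" -- so a
# failing flush() during shutdown is silenced only when it raises IOError,
# while any other exception still propagates.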
+ try: + space.call_method(w_iobase, 'flush') + except OperationError, e: + # if it's an IOError, ignore it + if not e.match(space, space.w_IOError): + raise class AutoFlusher(object): diff --git a/pypy/module/_io/test/test_fileio.py b/pypy/module/_io/test/test_fileio.py --- a/pypy/module/_io/test/test_fileio.py +++ b/pypy/module/_io/test/test_fileio.py @@ -177,3 +177,20 @@ """) space.finish() assert tmpfile.read() == '42' + +def test_flush_at_exit_IOError(): + from pypy import conftest + from pypy.tool.option import make_config, make_objspace + + config = make_config(conftest.option) + space = make_objspace(config) + space.appexec([], """(): + import io + class MyStream(io.IOBase): + def flush(self): + raise IOError + + s = MyStream() + import sys; sys._keepalivesomewhereobscure = s + """) + space.finish() # the IOError has been ignored From noreply at buildbot.pypy.org Fri Feb 24 10:14:25 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Fri, 24 Feb 2012 10:14:25 +0100 (CET) Subject: [pypy-commit] pypy py3k: hg merge default Message-ID: <20120224091425.9CED682366@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52837:303252b916c9 Date: 2012-02-24 10:14 +0100 http://bitbucket.org/pypy/pypy/changeset/303252b916c9/ Log: hg merge default diff --git a/pypy/interpreter/baseobjspace.py b/pypy/interpreter/baseobjspace.py --- a/pypy/interpreter/baseobjspace.py +++ b/pypy/interpreter/baseobjspace.py @@ -329,7 +329,7 @@ raise modname = self.str_w(w_modname) mod = self.interpclass_w(w_mod) - if isinstance(mod, Module): + if isinstance(mod, Module) and not mod.startup_called: self.timer.start("startup " + modname) mod.init(self) self.timer.stop("startup " + modname) diff --git a/pypy/interpreter/test/test_objspace.py b/pypy/interpreter/test/test_objspace.py --- a/pypy/interpreter/test/test_objspace.py +++ b/pypy/interpreter/test/test_objspace.py @@ -314,3 +314,14 @@ space.ALL_BUILTIN_MODULES.pop() del space._builtinmodule_list mods = space.get_builtinmodule_to_install() + + def test_dont_reload_builtin_mods_on_startup(self): + from pypy.tool.option import make_config, make_objspace + config = make_config(None) + space = make_objspace(config) + w_executable = space.wrap('executable') + assert space.str_w(space.getattr(space.sys, w_executable)) == 'py.py' + space.setattr(space.sys, w_executable, space.wrap('foobar')) + assert space.str_w(space.getattr(space.sys, w_executable)) == 'foobar' + space.startup() + assert space.str_w(space.getattr(space.sys, w_executable)) == 'foobar' diff --git a/pypy/interpreter/test/test_zpy.py b/pypy/interpreter/test/test_zpy.py --- a/pypy/interpreter/test/test_zpy.py +++ b/pypy/interpreter/test/test_zpy.py @@ -23,14 +23,14 @@ def test_executable(): """Ensures sys.executable points to the py.py script""" # TODO : watch out for spaces/special chars in pypypath - output = run(sys.executable, pypypath, + output = run(sys.executable, pypypath, '-S', "-c", "import sys;print(sys.executable)") assert output.splitlines()[-1] == pypypath def test_special_names(): """Test the __name__ and __file__ special global names""" cmd = "print(__name__); print('__file__' in globals())" - output = run(sys.executable, pypypath, '-c', cmd) + output = run(sys.executable, pypypath, '-S', '-c', cmd) assert output.splitlines()[-2] == '__main__' assert output.splitlines()[-1] == 'False' @@ -39,24 +39,24 @@ tmpfile.write("print(__name__); print(__file__)\n") tmpfile.close() - output = run(sys.executable, pypypath, tmpfilepath) + output = run(sys.executable, pypypath, 
'-S', tmpfilepath) assert output.splitlines()[-2] == '__main__' assert output.splitlines()[-1] == str(tmpfilepath) def test_argv_command(): """Some tests on argv""" # test 1 : no arguments - output = run(sys.executable, pypypath, + output = run(sys.executable, pypypath, '-S', "-c", "import sys;print(sys.argv)") assert output.splitlines()[-1] == str(['-c']) # test 2 : some arguments after - output = run(sys.executable, pypypath, + output = run(sys.executable, pypypath, '-S', "-c", "import sys;print(sys.argv)", "hello") assert output.splitlines()[-1] == str(['-c','hello']) # test 3 : additionnal pypy parameters - output = run(sys.executable, pypypath, + output = run(sys.executable, pypypath, '-S', "-O", "-c", "import sys;print(sys.argv)", "hello") assert output.splitlines()[-1] == str(['-c','hello']) @@ -71,15 +71,15 @@ tmpfile.close() # test 1 : no arguments - output = run(sys.executable, pypypath, tmpfilepath) + output = run(sys.executable, pypypath, '-S', tmpfilepath) assert output.splitlines()[-1] == str([tmpfilepath]) # test 2 : some arguments after - output = run(sys.executable, pypypath, tmpfilepath, "hello") + output = run(sys.executable, pypypath, '-S', tmpfilepath, "hello") assert output.splitlines()[-1] == str([tmpfilepath,'hello']) # test 3 : additionnal pypy parameters - output = run(sys.executable, pypypath, "-O", tmpfilepath, "hello") + output = run(sys.executable, pypypath, '-S', "-O", tmpfilepath, "hello") assert output.splitlines()[-1] == str([tmpfilepath,'hello']) diff --git a/pypy/jit/backend/x86/test/test_ztranslation.py b/pypy/jit/backend/x86/test/test_ztranslation.py --- a/pypy/jit/backend/x86/test/test_ztranslation.py +++ b/pypy/jit/backend/x86/test/test_ztranslation.py @@ -52,6 +52,7 @@ set_param(jitdriver, "trace_eagerness", 2) total = 0 frame = Frame(i) + j = float(j) while frame.i > 3: jitdriver.can_enter_jit(frame=frame, total=total, j=j) jitdriver.jit_merge_point(frame=frame, total=total, j=j) diff --git a/pypy/jit/metainterp/test/test_ajit.py b/pypy/jit/metainterp/test/test_ajit.py --- a/pypy/jit/metainterp/test/test_ajit.py +++ b/pypy/jit/metainterp/test/test_ajit.py @@ -2943,11 +2943,18 @@ self.check_resops(arraylen_gc=3) def test_ulonglong_mod(self): - myjitdriver = JitDriver(greens = [], reds = ['n', 'sa', 'i']) + myjitdriver = JitDriver(greens = [], reds = ['n', 'a']) + class A: + pass def f(n): sa = i = rffi.cast(rffi.ULONGLONG, 1) + a = A() while i < rffi.cast(rffi.ULONGLONG, n): - myjitdriver.jit_merge_point(sa=sa, n=n, i=i) + a.sa = sa + a.i = i + myjitdriver.jit_merge_point(n=n, a=a) + sa = a.sa + i = a.i sa += sa % i i += 1 res = self.meta_interp(f, [32]) diff --git a/pypy/jit/tl/tinyframe/tinyframe.py b/pypy/jit/tl/tinyframe/tinyframe.py --- a/pypy/jit/tl/tinyframe/tinyframe.py +++ b/pypy/jit/tl/tinyframe/tinyframe.py @@ -210,7 +210,7 @@ def repr(self): return "" % (self.outer.repr(), self.inner.repr()) -driver = JitDriver(greens = ['code', 'i'], reds = ['self'], +driver = JitDriver(greens = ['i', 'code'], reds = ['self'], virtualizables = ['self']) class Frame(object): diff --git a/pypy/module/_io/interp_iobase.py b/pypy/module/_io/interp_iobase.py --- a/pypy/module/_io/interp_iobase.py +++ b/pypy/module/_io/interp_iobase.py @@ -327,7 +327,12 @@ def autoflush(self, space): w_iobase = self.w_iobase_ref() if w_iobase is not None: - space.call_method(w_iobase, 'flush') # XXX: ignore IOErrors? 
+ try: + space.call_method(w_iobase, 'flush') + except OperationError, e: + # if it's an IOError, ignore it + if not e.match(space, space.w_IOError): + raise class AutoFlusher(object): diff --git a/pypy/module/_io/test/test_fileio.py b/pypy/module/_io/test/test_fileio.py --- a/pypy/module/_io/test/test_fileio.py +++ b/pypy/module/_io/test/test_fileio.py @@ -177,3 +177,20 @@ """) space.finish() assert tmpfile.read() == '42' + +def test_flush_at_exit_IOError(): + from pypy import conftest + from pypy.tool.option import make_config, make_objspace + + config = make_config(conftest.option) + space = make_objspace(config) + space.appexec([], """(): + import io + class MyStream(io.IOBase): + def flush(self): + raise IOError + + s = MyStream() + import sys; sys._keepalivesomewhereobscure = s + """) + space.finish() # the IOError has been ignored diff --git a/pypy/module/cpyext/api.py b/pypy/module/cpyext/api.py --- a/pypy/module/cpyext/api.py +++ b/pypy/module/cpyext/api.py @@ -384,6 +384,7 @@ "Tuple": "space.w_tuple", "List": "space.w_list", "Set": "space.w_set", + "FrozenSet": "space.w_frozenset", "Int": "space.w_int", "Bool": "space.w_bool", "Float": "space.w_float", diff --git a/pypy/module/cpyext/eval.py b/pypy/module/cpyext/eval.py --- a/pypy/module/cpyext/eval.py +++ b/pypy/module/cpyext/eval.py @@ -1,16 +1,24 @@ from pypy.interpreter.error import OperationError +from pypy.interpreter.astcompiler import consts from pypy.rpython.lltypesystem import rffi, lltype from pypy.module.cpyext.api import ( cpython_api, CANNOT_FAIL, CONST_STRING, FILEP, fread, feof, Py_ssize_tP, cpython_struct) from pypy.module.cpyext.pyobject import PyObject, borrow_from from pypy.module.cpyext.pyerrors import PyErr_SetFromErrno +from pypy.module.cpyext.funcobject import PyCodeObject from pypy.module.__builtin__ import compiling PyCompilerFlags = cpython_struct( - "PyCompilerFlags", ()) + "PyCompilerFlags", (("cf_flags", rffi.INT),)) PyCompilerFlagsPtr = lltype.Ptr(PyCompilerFlags) +PyCF_MASK = (consts.CO_FUTURE_DIVISION | + consts.CO_FUTURE_ABSOLUTE_IMPORT | + consts.CO_FUTURE_WITH_STATEMENT | + consts.CO_FUTURE_PRINT_FUNCTION | + consts.CO_FUTURE_UNICODE_LITERALS) + @cpython_api([PyObject, PyObject, PyObject], PyObject) def PyEval_CallObjectWithKeywords(space, w_obj, w_arg, w_kwds): return space.call(w_obj, w_arg, w_kwds) @@ -48,6 +56,17 @@ return None return borrow_from(None, caller.w_globals) + at cpython_api([PyCodeObject, PyObject, PyObject], PyObject) +def PyEval_EvalCode(space, w_code, w_globals, w_locals): + """This is a simplified interface to PyEval_EvalCodeEx(), with just + the code object, and the dictionaries of global and local variables. 
+ The other arguments are set to NULL.""" + if w_globals is None: + w_globals = space.w_None + if w_locals is None: + w_locals = space.w_None + return compiling.eval(space, w_code, w_globals, w_locals) + @cpython_api([PyObject, PyObject], PyObject) def PyObject_CallObject(space, w_obj, w_arg): """ @@ -74,7 +93,7 @@ Py_file_input = 257 Py_eval_input = 258 -def compile_string(space, source, filename, start): +def compile_string(space, source, filename, start, flags=0): w_source = space.wrap(source) start = rffi.cast(lltype.Signed, start) if start == Py_file_input: @@ -86,7 +105,7 @@ else: raise OperationError(space.w_ValueError, space.wrap( "invalid mode parameter for compilation")) - return compiling.compile(space, w_source, filename, mode) + return compiling.compile(space, w_source, filename, mode, flags) def run_string(space, source, filename, start, w_globals, w_locals): w_code = compile_string(space, source, filename, start) @@ -109,6 +128,24 @@ filename = "" return run_string(space, source, filename, start, w_globals, w_locals) + at cpython_api([rffi.CCHARP, rffi.INT_real, PyObject, PyObject, + PyCompilerFlagsPtr], PyObject) +def PyRun_StringFlags(space, source, start, w_globals, w_locals, flagsptr): + """Execute Python source code from str in the context specified by the + dictionaries globals and locals with the compiler flags specified by + flags. The parameter start specifies the start token that should be used to + parse the source code. + + Returns the result of executing the code as a Python object, or NULL if an + exception was raised.""" + source = rffi.charp2str(source) + if flagsptr: + flags = rffi.cast(lltype.Signed, flagsptr.c_cf_flags) + else: + flags = 0 + w_code = compile_string(space, source, "", start, flags) + return compiling.eval(space, w_code, w_globals, w_locals) + @cpython_api([FILEP, CONST_STRING, rffi.INT_real, PyObject, PyObject], PyObject) def PyRun_File(space, fp, filename, start, w_globals, w_locals): """This is a simplified interface to PyRun_FileExFlags() below, leaving @@ -150,7 +187,7 @@ @cpython_api([rffi.CCHARP, rffi.CCHARP, rffi.INT_real, PyCompilerFlagsPtr], PyObject) -def Py_CompileStringFlags(space, source, filename, start, flags): +def Py_CompileStringFlags(space, source, filename, start, flagsptr): """Parse and compile the Python source code in str, returning the resulting code object. 
The start token is given by start; this can be used to constrain the code which can be compiled and should @@ -160,7 +197,30 @@ returns NULL if the code cannot be parsed or compiled.""" source = rffi.charp2str(source) filename = rffi.charp2str(filename) - if flags: - raise OperationError(space.w_NotImplementedError, space.wrap( - "cpyext Py_CompileStringFlags does not accept flags")) - return compile_string(space, source, filename, start) + if flagsptr: + flags = rffi.cast(lltype.Signed, flagsptr.c_cf_flags) + else: + flags = 0 + return compile_string(space, source, filename, start, flags) + + at cpython_api([PyCompilerFlagsPtr], rffi.INT_real, error=CANNOT_FAIL) +def PyEval_MergeCompilerFlags(space, cf): + """This function changes the flags of the current evaluation + frame, and returns true on success, false on failure.""" + flags = rffi.cast(lltype.Signed, cf.c_cf_flags) + result = flags != 0 + current_frame = space.getexecutioncontext().gettopframe_nohidden() + if current_frame: + codeflags = current_frame.pycode.co_flags + compilerflags = codeflags & PyCF_MASK + if compilerflags: + result = 1 + flags |= compilerflags + # No future keyword at the moment + # if codeflags & CO_GENERATOR_ALLOWED: + # result = 1 + # flags |= CO_GENERATOR_ALLOWED + cf.c_cf_flags = rffi.cast(rffi.INT, flags) + return result + + diff --git a/pypy/module/cpyext/funcobject.py b/pypy/module/cpyext/funcobject.py --- a/pypy/module/cpyext/funcobject.py +++ b/pypy/module/cpyext/funcobject.py @@ -1,6 +1,6 @@ from pypy.rpython.lltypesystem import rffi, lltype from pypy.module.cpyext.api import ( - PyObjectFields, generic_cpy_call, CONST_STRING, + PyObjectFields, generic_cpy_call, CONST_STRING, CANNOT_FAIL, cpython_api, bootstrap_function, cpython_struct, build_type_checkers) from pypy.module.cpyext.pyobject import ( PyObject, make_ref, from_ref, Py_DecRef, make_typedescr, borrow_from) @@ -48,6 +48,7 @@ PyFunction_Check, PyFunction_CheckExact = build_type_checkers("Function", Function) PyMethod_Check, PyMethod_CheckExact = build_type_checkers("Method", Method) +PyCode_Check, PyCode_CheckExact = build_type_checkers("Code", PyCode) def function_attach(space, py_obj, w_obj): py_func = rffi.cast(PyFunctionObject, py_obj) @@ -160,3 +161,9 @@ freevars=[], cellvars=[])) + at cpython_api([PyObject], rffi.INT_real, error=CANNOT_FAIL) +def PyCode_GetNumFree(space, w_co): + """Return the number of free variables in co.""" + co = space.interp_w(PyCode, w_co) + return len(co.co_freevars) + diff --git a/pypy/module/cpyext/include/Python.h b/pypy/module/cpyext/include/Python.h --- a/pypy/module/cpyext/include/Python.h +++ b/pypy/module/cpyext/include/Python.h @@ -113,6 +113,7 @@ #include "compile.h" #include "frameobject.h" #include "eval.h" +#include "pymath.h" #include "pymem.h" #include "pycobject.h" #include "pycapsule.h" diff --git a/pypy/module/cpyext/include/code.h b/pypy/module/cpyext/include/code.h --- a/pypy/module/cpyext/include/code.h +++ b/pypy/module/cpyext/include/code.h @@ -13,13 +13,19 @@ /* Masks for co_flags above */ /* These values are also in funcobject.py */ -#define CO_OPTIMIZED 0x0001 -#define CO_NEWLOCALS 0x0002 -#define CO_VARARGS 0x0004 -#define CO_VARKEYWORDS 0x0008 +#define CO_OPTIMIZED 0x0001 +#define CO_NEWLOCALS 0x0002 +#define CO_VARARGS 0x0004 +#define CO_VARKEYWORDS 0x0008 #define CO_NESTED 0x0010 #define CO_GENERATOR 0x0020 +#define CO_FUTURE_DIVISION 0x02000 +#define CO_FUTURE_ABSOLUTE_IMPORT 0x04000 +#define CO_FUTURE_WITH_STATEMENT 0x08000 +#define CO_FUTURE_PRINT_FUNCTION 0x10000 +#define 
CO_FUTURE_UNICODE_LITERALS 0x20000 + #ifdef __cplusplus } #endif diff --git a/pypy/module/cpyext/include/pymath.h b/pypy/module/cpyext/include/pymath.h new file mode 100644 --- /dev/null +++ b/pypy/module/cpyext/include/pymath.h @@ -0,0 +1,20 @@ +#ifndef Py_PYMATH_H +#define Py_PYMATH_H + +/************************************************************************** +Symbols and macros to supply platform-independent interfaces to mathematical +functions and constants +**************************************************************************/ + +/* HUGE_VAL is supposed to expand to a positive double infinity. Python + * uses Py_HUGE_VAL instead because some platforms are broken in this + * respect. We used to embed code in pyport.h to try to worm around that, + * but different platforms are broken in conflicting ways. If you're on + * a platform where HUGE_VAL is defined incorrectly, fiddle your Python + * config to #define Py_HUGE_VAL to something that works on your platform. + */ +#ifndef Py_HUGE_VAL +#define Py_HUGE_VAL HUGE_VAL +#endif + +#endif /* Py_PYMATH_H */ diff --git a/pypy/module/cpyext/include/pythonrun.h b/pypy/module/cpyext/include/pythonrun.h --- a/pypy/module/cpyext/include/pythonrun.h +++ b/pypy/module/cpyext/include/pythonrun.h @@ -19,6 +19,14 @@ int cf_flags; /* bitmask of CO_xxx flags relevant to future */ } PyCompilerFlags; +#define PyCF_MASK (CO_FUTURE_DIVISION | CO_FUTURE_ABSOLUTE_IMPORT | \ + CO_FUTURE_WITH_STATEMENT | CO_FUTURE_PRINT_FUNCTION | \ + CO_FUTURE_UNICODE_LITERALS) +#define PyCF_MASK_OBSOLETE (CO_NESTED) +#define PyCF_SOURCE_IS_UTF8 0x0100 +#define PyCF_DONT_IMPLY_DEDENT 0x0200 +#define PyCF_ONLY_AST 0x0400 + #define Py_CompileString(str, filename, start) Py_CompileStringFlags(str, filename, start, NULL) #ifdef __cplusplus diff --git a/pypy/module/cpyext/stubs.py b/pypy/module/cpyext/stubs.py --- a/pypy/module/cpyext/stubs.py +++ b/pypy/module/cpyext/stubs.py @@ -182,16 +182,6 @@ used as the positional and keyword parameters to the object's constructor.""" raise NotImplementedError - at cpython_api([PyObject], rffi.INT_real, error=CANNOT_FAIL) -def PyCode_Check(space, co): - """Return true if co is a code object""" - raise NotImplementedError - - at cpython_api([PyObject], rffi.INT_real, error=CANNOT_FAIL) -def PyCode_GetNumFree(space, co): - """Return the number of free variables in co.""" - raise NotImplementedError - @cpython_api([PyObject], rffi.INT_real, error=-1) def PyCodec_Register(space, search_function): """Register a new codec search function. 
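# An illustrative, self-contained example (not from the patch) of the flag
# value involved here: code compiled under "from __future__ import division"
# carries CO_FUTURE_DIVISION (0x2000, defined in code.h above) in co_flags,
# and PyEval_MergeCompilerFlags ORs those frame flags into the caller-supplied
# PyCompilerFlags, which is what test_merge_compiler_flags further down checks.
CO_FUTURE_DIVISION = 0x2000
ns = {}
exec "from __future__ import division\ndef f(): return 1 / 2" in ns
assert ns['f'].func_code.co_flags & CO_FUTURE_DIVISION
assert ns['f']() == 0.5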
@@ -1853,26 +1843,6 @@ """ raise NotImplementedError - at cpython_api([Py_UNICODE], rffi.INT_real, error=CANNOT_FAIL) -def Py_UNICODE_ISTITLE(space, ch): - """Return 1 or 0 depending on whether ch is a titlecase character.""" - raise NotImplementedError - - at cpython_api([Py_UNICODE], rffi.INT_real, error=CANNOT_FAIL) -def Py_UNICODE_ISDIGIT(space, ch): - """Return 1 or 0 depending on whether ch is a digit character.""" - raise NotImplementedError - - at cpython_api([Py_UNICODE], rffi.INT_real, error=CANNOT_FAIL) -def Py_UNICODE_ISNUMERIC(space, ch): - """Return 1 or 0 depending on whether ch is a numeric character.""" - raise NotImplementedError - - at cpython_api([Py_UNICODE], rffi.INT_real, error=CANNOT_FAIL) -def Py_UNICODE_ISALPHA(space, ch): - """Return 1 or 0 depending on whether ch is an alphabetic character.""" - raise NotImplementedError - @cpython_api([rffi.CCHARP], PyObject) def PyUnicode_FromFormat(space, format): """Take a C printf()-style format string and a variable number of @@ -2317,17 +2287,6 @@ use the default error handling.""" raise NotImplementedError - at cpython_api([PyObject, PyObject, Py_ssize_t, Py_ssize_t, rffi.INT_real], rffi.INT_real, error=-1) -def PyUnicode_Tailmatch(space, str, substr, start, end, direction): - """Return 1 if substr matches str*[*start:end] at the given tail end - (direction == -1 means to do a prefix match, direction == 1 a suffix match), - 0 otherwise. Return -1 if an error occurred. - - This function used an int type for start and end. This - might require changes in your code for properly supporting 64-bit - systems.""" - raise NotImplementedError - @cpython_api([PyObject, PyObject, Py_ssize_t, Py_ssize_t, rffi.INT_real], Py_ssize_t, error=-2) def PyUnicode_Find(space, str, substr, start, end, direction): """Return the first position of substr in str*[*start:end] using the given @@ -2524,17 +2483,6 @@ source code is read from fp instead of an in-memory string.""" raise NotImplementedError - at cpython_api([rffi.CCHARP, rffi.INT_real, PyObject, PyObject, PyCompilerFlags], PyObject) -def PyRun_StringFlags(space, str, start, globals, locals, flags): - """Execute Python source code from str in the context specified by the - dictionaries globals and locals with the compiler flags specified by - flags. The parameter start specifies the start token that should be used to - parse the source code. - - Returns the result of executing the code as a Python object, or NULL if an - exception was raised.""" - raise NotImplementedError - @cpython_api([FILE, rffi.CCHARP, rffi.INT_real, PyObject, PyObject, rffi.INT_real], PyObject) def PyRun_FileEx(space, fp, filename, start, globals, locals, closeit): """This is a simplified interface to PyRun_FileExFlags() below, leaving @@ -2555,13 +2503,6 @@ returns.""" raise NotImplementedError - at cpython_api([PyCodeObject, PyObject, PyObject], PyObject) -def PyEval_EvalCode(space, co, globals, locals): - """This is a simplified interface to PyEval_EvalCodeEx(), with just - the code object, and the dictionaries of global and local variables. 
- The other arguments are set to NULL.""" - raise NotImplementedError - @cpython_api([PyCodeObject, PyObject, PyObject, PyObjectP, rffi.INT_real, PyObjectP, rffi.INT_real, PyObjectP, rffi.INT_real, PyObject], PyObject) def PyEval_EvalCodeEx(space, co, globals, locals, args, argcount, kws, kwcount, defs, defcount, closure): """Evaluate a precompiled code object, given a particular environment for its @@ -2586,12 +2527,6 @@ throw() methods of generator objects.""" raise NotImplementedError - at cpython_api([PyCompilerFlags], rffi.INT_real, error=CANNOT_FAIL) -def PyEval_MergeCompilerFlags(space, cf): - """This function changes the flags of the current evaluation frame, and returns - true on success, false on failure.""" - raise NotImplementedError - @cpython_api([PyObject], rffi.INT_real, error=CANNOT_FAIL) def PyWeakref_Check(space, ob): """Return true if ob is either a reference or proxy object. diff --git a/pypy/module/cpyext/test/test_eval.py b/pypy/module/cpyext/test/test_eval.py --- a/pypy/module/cpyext/test/test_eval.py +++ b/pypy/module/cpyext/test/test_eval.py @@ -2,9 +2,10 @@ from pypy.module.cpyext.test.test_cpyext import AppTestCpythonExtensionBase from pypy.module.cpyext.test.test_api import BaseApiTest from pypy.module.cpyext.eval import ( - Py_single_input, Py_file_input, Py_eval_input) + Py_single_input, Py_file_input, Py_eval_input, PyCompilerFlags) from pypy.module.cpyext.api import fopen, fclose, fileno, Py_ssize_tP from pypy.interpreter.gateway import interp2app +from pypy.interpreter.astcompiler import consts from pypy.tool.udir import udir import sys, os @@ -63,6 +64,22 @@ assert space.int_w(w_res) == 10 + def test_evalcode(self, space, api): + w_f = space.appexec([], """(): + def f(*args): + assert isinstance(args, tuple) + return len(args) + 8 + return f + """) + + w_t = space.newtuple([space.wrap(1), space.wrap(2)]) + w_globals = space.newdict() + w_locals = space.newdict() + space.setitem(w_locals, space.wrap("args"), w_t) + w_res = api.PyEval_EvalCode(w_f.code, w_globals, w_locals) + + assert space.int_w(w_res) == 10 + def test_run_simple_string(self, space, api): def run(code): buf = rffi.str2charp(code) @@ -96,6 +113,16 @@ assert 42 * 43 == space.unwrap( api.PyObject_GetItem(w_globals, space.wrap("a"))) + def test_run_string_flags(self, space, api): + flags = lltype.malloc(PyCompilerFlags, flavor='raw') + flags.c_cf_flags = rffi.cast(rffi.INT, consts.PyCF_SOURCE_IS_UTF8) + w_globals = space.newdict() + api.PyRun_StringFlags("a = u'caf\xc3\xa9'", Py_single_input, + w_globals, w_globals, flags) + w_a = space.getitem(w_globals, space.wrap("a")) + assert space.unwrap(w_a) == u'caf\xe9' + lltype.free(flags, flavor='raw') + def test_run_file(self, space, api): filepath = udir / "cpyext_test_runfile.py" filepath.write("raise ZeroDivisionError") @@ -256,3 +283,21 @@ print dir(mod) print mod.__dict__ assert mod.f(42) == 47 + + def test_merge_compiler_flags(self): + module = self.import_extension('foo', [ + ("get_flags", "METH_NOARGS", + """ + PyCompilerFlags flags; + flags.cf_flags = 0; + int result = PyEval_MergeCompilerFlags(&flags); + return Py_BuildValue("ii", result, flags.cf_flags); + """), + ]) + assert module.get_flags() == (0, 0) + + ns = {'module':module} + exec """from __future__ import division \nif 1: + def nested_flags(): + return module.get_flags()""" in ns + assert ns['nested_flags']() == (1, 0x2000) # CO_FUTURE_DIVISION diff --git a/pypy/module/cpyext/test/test_funcobject.py b/pypy/module/cpyext/test/test_funcobject.py --- 
a/pypy/module/cpyext/test/test_funcobject.py +++ b/pypy/module/cpyext/test/test_funcobject.py @@ -78,6 +78,14 @@ rffi.free_charp(filename) rffi.free_charp(funcname) + def test_getnumfree(self, space, api): + w_function = space.appexec([], """(): + a = 5 + def method(x): return a, x + return method + """) + assert api.PyCode_GetNumFree(w_function.code) == 1 + def test_classmethod(self, space, api): w_function = space.appexec([], """(): def method(x): return x diff --git a/pypy/module/cpyext/test/test_unicodeobject.py b/pypy/module/cpyext/test/test_unicodeobject.py --- a/pypy/module/cpyext/test/test_unicodeobject.py +++ b/pypy/module/cpyext/test/test_unicodeobject.py @@ -239,8 +239,18 @@ assert api.Py_UNICODE_ISSPACE(unichr(char)) assert not api.Py_UNICODE_ISSPACE(u'a') + assert api.Py_UNICODE_ISALPHA(u'a') + assert not api.Py_UNICODE_ISALPHA(u'0') + assert api.Py_UNICODE_ISALNUM(u'a') + assert api.Py_UNICODE_ISALNUM(u'0') + assert not api.Py_UNICODE_ISALNUM(u'+') + assert api.Py_UNICODE_ISDECIMAL(u'\u0660') assert not api.Py_UNICODE_ISDECIMAL(u'a') + assert api.Py_UNICODE_ISDIGIT(u'9') + assert not api.Py_UNICODE_ISDIGIT(u'@') + assert api.Py_UNICODE_ISNUMERIC(u'9') + assert not api.Py_UNICODE_ISNUMERIC(u'@') for char in [0x0a, 0x0d, 0x1c, 0x1d, 0x1e, 0x85, 0x2028, 0x2029]: assert api.Py_UNICODE_ISLINEBREAK(unichr(char)) @@ -251,6 +261,9 @@ assert not api.Py_UNICODE_ISUPPER(u'a') assert not api.Py_UNICODE_ISLOWER(u'�') assert api.Py_UNICODE_ISUPPER(u'�') + assert not api.Py_UNICODE_ISTITLE(u'A') + assert api.Py_UNICODE_ISTITLE( + u'\N{LATIN CAPITAL LETTER L WITH SMALL LETTER J}') def test_TOLOWER(self, space, api): assert api.Py_UNICODE_TOLOWER(u'�') == u'�' @@ -472,3 +485,10 @@ api.PyUnicode_Replace(w_str, w_substr, w_replstr, 2)) assert u"zbzbzbzb" == space.unwrap( api.PyUnicode_Replace(w_str, w_substr, w_replstr, -1)) + + def test_tailmatch(self, space, api): + w_str = space.wrap(u"abcdef") + assert api.PyUnicode_Tailmatch(w_str, space.wrap("cde"), 2, 10, 1) == 1 + assert api.PyUnicode_Tailmatch(w_str, space.wrap("cde"), 1, 5, -1) == 1 + self.raises(space, api, TypeError, + api.PyUnicode_Tailmatch, w_str, space.wrap(3), 2, 10, 1) diff --git a/pypy/module/cpyext/unicodeobject.py b/pypy/module/cpyext/unicodeobject.py --- a/pypy/module/cpyext/unicodeobject.py +++ b/pypy/module/cpyext/unicodeobject.py @@ -11,7 +11,8 @@ PyObject, PyObjectP, Py_DecRef, make_ref, from_ref, track_reference, make_typedescr, get_typedescr) from pypy.module.cpyext.stringobject import PyString_Check -from pypy.objspace.std import unicodeobject, unicodetype +from pypy.module.sys.interp_encoding import setdefaultencoding +from pypy.objspace.std import unicodeobject, unicodetype, stringtype from pypy.rlib import runicode from pypy.tool.sourcetools import func_renamer import sys @@ -91,6 +92,11 @@ return unicodedb.isspace(ord(ch)) @cpython_api([Py_UNICODE], rffi.INT_real, error=CANNOT_FAIL) +def Py_UNICODE_ISALPHA(space, ch): + """Return 1 or 0 depending on whether ch is an alphabetic character.""" + return unicodedb.isalpha(ord(ch)) + + at cpython_api([Py_UNICODE], rffi.INT_real, error=CANNOT_FAIL) def Py_UNICODE_ISALNUM(space, ch): """Return 1 or 0 depending on whether ch is an alphanumeric character.""" return unicodedb.isalnum(ord(ch)) @@ -106,6 +112,16 @@ return unicodedb.isdecimal(ord(ch)) @cpython_api([Py_UNICODE], rffi.INT_real, error=CANNOT_FAIL) +def Py_UNICODE_ISDIGIT(space, ch): + """Return 1 or 0 depending on whether ch is a digit character.""" + return unicodedb.isdigit(ord(ch)) + + at 
cpython_api([Py_UNICODE], rffi.INT_real, error=CANNOT_FAIL) +def Py_UNICODE_ISNUMERIC(space, ch): + """Return 1 or 0 depending on whether ch is a numeric character.""" + return unicodedb.isnumeric(ord(ch)) + + at cpython_api([Py_UNICODE], rffi.INT_real, error=CANNOT_FAIL) def Py_UNICODE_ISLOWER(space, ch): """Return 1 or 0 depending on whether ch is a lowercase character.""" return unicodedb.islower(ord(ch)) @@ -115,6 +131,11 @@ """Return 1 or 0 depending on whether ch is an uppercase character.""" return unicodedb.isupper(ord(ch)) + at cpython_api([Py_UNICODE], rffi.INT_real, error=CANNOT_FAIL) +def Py_UNICODE_ISTITLE(space, ch): + """Return 1 or 0 depending on whether ch is a titlecase character.""" + return unicodedb.istitle(ord(ch)) + @cpython_api([Py_UNICODE], Py_UNICODE, error=CANNOT_FAIL) def Py_UNICODE_TOLOWER(space, ch): """Return the character ch converted to lower case.""" @@ -157,6 +178,11 @@ except KeyError: return -1.0 + at cpython_api([], Py_UNICODE, error=CANNOT_FAIL) +def PyUnicode_GetMax(space): + """Get the maximum ordinal for a Unicode character.""" + return unichr(runicode.MAXUNICODE) + @cpython_api([PyObject], rffi.CCHARP, error=CANNOT_FAIL) def PyUnicode_AS_DATA(space, ref): """Return a pointer to the internal buffer of the object. o has to be a @@ -564,3 +590,16 @@ return space.call_method(w_str, "replace", w_substr, w_replstr, space.wrap(maxcount)) + at cpython_api([PyObject, PyObject, Py_ssize_t, Py_ssize_t, rffi.INT_real], + rffi.INT_real, error=-1) +def PyUnicode_Tailmatch(space, w_str, w_substr, start, end, direction): + """Return 1 if substr matches str[start:end] at the given tail end + (direction == -1 means to do a prefix match, direction == 1 a + suffix match), 0 otherwise. Return -1 if an error occurred.""" + str = space.unicode_w(w_str) + substr = space.unicode_w(w_substr) + if rffi.cast(lltype.Signed, direction) >= 0: + return stringtype.stringstartswith(str, substr, start, end) + else: + return stringtype.stringendswith(str, substr, start, end) + diff --git a/pypy/rlib/debug.py b/pypy/rlib/debug.py --- a/pypy/rlib/debug.py +++ b/pypy/rlib/debug.py @@ -26,6 +26,7 @@ llop.debug_print_traceback(lltype.Void) llop.debug_fatalerror(lltype.Void, msg) fatalerror._dont_inline_ = True +fatalerror._jit_look_inside_ = False fatalerror._annenforceargs_ = [str] def fatalerror_notb(msg): @@ -34,6 +35,7 @@ from pypy.rpython.lltypesystem.lloperation import llop llop.debug_fatalerror(lltype.Void, msg) fatalerror_notb._dont_inline_ = True +fatalerror_notb._jit_look_inside_ = False fatalerror_notb._annenforceargs_ = [str] diff --git a/pypy/translator/sandbox/test/test_sandbox.py b/pypy/translator/sandbox/test/test_sandbox.py --- a/pypy/translator/sandbox/test/test_sandbox.py +++ b/pypy/translator/sandbox/test/test_sandbox.py @@ -145,9 +145,9 @@ g = pipe.stdin f = pipe.stdout expect(f, g, "ll_os.ll_os_getenv", ("PYPY_GENERATIONGC_NURSERY",), None) - if sys.platform.startswith('linux'): # on Mac, uses another (sandboxsafe) approach - expect(f, g, "ll_os.ll_os_open", ("/proc/cpuinfo", 0, 420), - OSError(5232, "xyz")) + #if sys.platform.startswith('linux'): + # expect(f, g, "ll_os.ll_os_open", ("/proc/cpuinfo", 0, 420), + # OSError(5232, "xyz")) expect(f, g, "ll_os.ll_os_getenv", ("PYPY_GC_DEBUG",), None) g.close() tail = f.read() From noreply at buildbot.pypy.org Fri Feb 24 10:30:53 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Fri, 24 Feb 2012 10:30:53 +0100 (CET) Subject: [pypy-commit] pypy py3k: be a bit less strict in what to expect in the output. 
E.g., I get some more [platform:execute] lines after the tb Message-ID: <20120224093053.3984C82366@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52838:7520b36b4010 Date: 2012-02-24 10:30 +0100 http://bitbucket.org/pypy/pypy/changeset/7520b36b4010/ Log: be a bit less strict in what to expect in the output. E.g., I get some more [platform:execute] lines after the tb diff --git a/pypy/interpreter/test/test_zpy.py b/pypy/interpreter/test/test_zpy.py --- a/pypy/interpreter/test/test_zpy.py +++ b/pypy/interpreter/test/test_zpy.py @@ -101,7 +101,7 @@ tmpfile.write(TB_NORMALIZATION_CHK) tmpfile.close() - popen = subprocess.Popen([sys.executable, str(pypypath), tmpfilepath], + popen = subprocess.Popen([sys.executable, str(pypypath), '-S', tmpfilepath], stderr=subprocess.PIPE) _, stderr = popen.communicate() - assert stderr.endswith('KeyError: \n') + assert 'KeyError: \n' in stderr From noreply at buildbot.pypy.org Fri Feb 24 10:30:54 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Fri, 24 Feb 2012 10:30:54 +0100 (CET) Subject: [pypy-commit] pypy default: add the -S also here Message-ID: <20120224093054.6B20C82366@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: Changeset: r52839:576ec23995a0 Date: 2012-02-24 10:30 +0100 http://bitbucket.org/pypy/pypy/changeset/576ec23995a0/ Log: add the -S also here diff --git a/pypy/interpreter/test/test_zpy.py b/pypy/interpreter/test/test_zpy.py --- a/pypy/interpreter/test/test_zpy.py +++ b/pypy/interpreter/test/test_zpy.py @@ -95,7 +95,7 @@ tmpfile.write(TB_NORMALIZATION_CHK) tmpfile.close() - popen = subprocess.Popen([sys.executable, str(pypypath), tmpfilepath], + popen = subprocess.Popen([sys.executable, str(pypypath), '-S', tmpfilepath], stderr=subprocess.PIPE) _, stderr = popen.communicate() assert stderr.endswith('KeyError: \n') From noreply at buildbot.pypy.org Fri Feb 24 11:19:05 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Fri, 24 Feb 2012 11:19:05 +0100 (CET) Subject: [pypy-commit] pypy py3k: because of 46769341e0eb, __doc__ is now stored as a normal property, and thus we wrap it twice. Check for equality, not identity Message-ID: <20120224101905.9BD2782366@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52840:c23a682f1a95 Date: 2012-02-24 11:08 +0100 http://bitbucket.org/pypy/pypy/changeset/c23a682f1a95/ Log: because of 46769341e0eb, __doc__ is now stored as a normal property, and thus we wrap it twice. 
Check for equality, not identity diff --git a/pypy/interpreter/test/test_objspace.py b/pypy/interpreter/test/test_objspace.py --- a/pypy/interpreter/test/test_objspace.py +++ b/pypy/interpreter/test/test_objspace.py @@ -128,7 +128,8 @@ w = self.space.wrap w_object_doc = self.space.getattr(self.space.w_object, w("__doc__")) w_instance = self.space.appexec([], "(): return object()") - assert self.space.lookup(w_instance, "__doc__") == w_object_doc + w_doc = self.space.lookup(w_instance, "__doc__") + assert self.space.str_w(w_doc) == self.space.str_w(w_object_doc) assert self.space.lookup(w_instance, "gobbledygook") is None w_instance = self.space.appexec([], """(): class Lookup(object): From noreply at buildbot.pypy.org Fri Feb 24 11:19:07 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Fri, 24 Feb 2012 11:19:07 +0100 (CET) Subject: [pypy-commit] pypy py3k: fix syntax Message-ID: <20120224101907.AB41782366@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52841:f69cdd01e917 Date: 2012-02-24 11:09 +0100 http://bitbucket.org/pypy/pypy/changeset/f69cdd01e917/ Log: fix syntax diff --git a/pypy/interpreter/test/test_typedef.py b/pypy/interpreter/test/test_typedef.py --- a/pypy/interpreter/test/test_typedef.py +++ b/pypy/interpreter/test/test_typedef.py @@ -119,7 +119,7 @@ x = X() import __pypy__ irepr = __pypy__.internal_repr(x) - print irepr + print(irepr) %s %s %s From noreply at buildbot.pypy.org Fri Feb 24 11:19:10 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Fri, 24 Feb 2012 11:19:10 +0100 (CET) Subject: [pypy-commit] pypy default: explicitly specify the encoding. It seems that at least on tannit it cannot find a default one Message-ID: <20120224101910.109CA82366@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: Changeset: r52842:69da974cc0af Date: 2012-02-24 11:18 +0100 http://bitbucket.org/pypy/pypy/changeset/69da974cc0af/ Log: explicitly specify the encoding. It seems that at least on tannit it cannot find a default one diff --git a/pypy/module/_io/test/test_fileio.py b/pypy/module/_io/test/test_fileio.py --- a/pypy/module/_io/test/test_fileio.py +++ b/pypy/module/_io/test/test_fileio.py @@ -170,7 +170,7 @@ space = make_objspace(config) space.appexec([space.wrap(str(tmpfile))], """(tmpfile): import io - f = io.open(tmpfile, 'w') + f = io.open(tmpfile, 'w', encoding='ascii') f.write('42') # no flush() and no close() import sys; sys._keepalivesomewhereobscure = f From noreply at buildbot.pypy.org Fri Feb 24 11:32:27 2012 From: noreply at buildbot.pypy.org (arigo) Date: Fri, 24 Feb 2012 11:32:27 +0100 (CET) Subject: [pypy-commit] pypy stm-gc: Add this. Message-ID: <20120224103227.32B6B82366@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-gc Changeset: r52843:3a8110f537ee Date: 2012-02-24 11:27 +0100 http://bitbucket.org/pypy/pypy/changeset/3a8110f537ee/ Log: Add this. 
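The diff below adds 'jit_force_virtual' to COPIES_POINTER, the set of operations whose result is merely another name for the pointer they receive. A minimal sketch of how such a table is typically consumed by a flow-graph analysis (the helper below is invented for illustration and is not the real gcsource.py code):

    COPIES_POINTER = set(['force_cast', 'cast_pointer', 'same_as',
                          'cast_opaque_ptr', 'jit_force_virtual'])

    def propagate_aliases(block, alias_of):
        # a pure pointer-copying operation makes its result denote the
        # same object as its first argument
        for op in block.operations:
            if op.opname in COPIES_POINTER:
                alias_of[op.result] = op.args[0]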
diff --git a/pypy/translator/stm/gcsource.py b/pypy/translator/stm/gcsource.py --- a/pypy/translator/stm/gcsource.py +++ b/pypy/translator/stm/gcsource.py @@ -5,6 +5,8 @@ COPIES_POINTER = set([ 'force_cast', 'cast_pointer', 'same_as', 'cast_opaque_ptr', + 'jit_force_virtual', + # as well as most 'hint' operations, but not all --- see below ]) From noreply at buildbot.pypy.org Fri Feb 24 11:32:29 2012 From: noreply at buildbot.pypy.org (arigo) Date: Fri, 24 Feb 2012 11:32:29 +0100 (CET) Subject: [pypy-commit] pypy default: Move the _check_sse2() call out of assembler.setup() and into the Message-ID: <20120224103229.4406682366@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r52844:e2ced9ddf804 Date: 2012-02-24 11:32 +0100 http://bitbucket.org/pypy/pypy/changeset/e2ced9ddf804/ Log: Move the _check_sse2() call out of assembler.setup() and into the very early phases of running the process. diff --git a/pypy/jit/backend/x86/assembler.py b/pypy/jit/backend/x86/assembler.py --- a/pypy/jit/backend/x86/assembler.py +++ b/pypy/jit/backend/x86/assembler.py @@ -104,7 +104,6 @@ self._debug = v def setup_once(self): - self._check_sse2() # the address of the function called by 'new' gc_ll_descr = self.cpu.gc_ll_descr gc_ll_descr.initialize() @@ -165,6 +164,9 @@ _CHECK_SSE2_FUNC_PTR = lltype.Ptr(lltype.FuncType([], lltype.Signed)) def _check_sse2(self): + """This function is called early in the execution of the program. + It checks if the CPU really supports SSE2. It is only invoked in + translated versions for now.""" if WORD == 8: return # all x86-64 CPUs support SSE2 if not self.cpu.supports_floats: diff --git a/pypy/jit/backend/x86/runner.py b/pypy/jit/backend/x86/runner.py --- a/pypy/jit/backend/x86/runner.py +++ b/pypy/jit/backend/x86/runner.py @@ -63,6 +63,9 @@ self.assembler.finish_once() self.profile_agent.shutdown() + def early_initialization(self): + self.assembler._check_sse2() + def dump_loop_token(self, looptoken): """ NOT_RPYTHON diff --git a/pypy/jit/metainterp/warmspot.py b/pypy/jit/metainterp/warmspot.py --- a/pypy/jit/metainterp/warmspot.py +++ b/pypy/jit/metainterp/warmspot.py @@ -13,6 +13,7 @@ from pypy.rlib.debug import fatalerror from pypy.rlib.rstackovf import StackOverflow from pypy.translator.simplify import get_functype +from pypy.translator.unsimplify import call_initial_function from pypy.translator.unsimplify import call_final_function from pypy.jit.metainterp import history, pyjitpl, gc, memmgr @@ -850,6 +851,9 @@ checkgraph(origportalgraph) def add_finish(self): + def start(): + self.cpu.early_initialization() + def finish(): if self.metainterp_sd.profiler.initialized: self.metainterp_sd.profiler.finish() @@ -858,6 +862,9 @@ if self.cpu.translate_support_code: call_final_function(self.translator, finish, annhelper = self.annhelper) + if hasattr(self.cpu, 'early_initialization'): + call_initial_function(self.translator, start, + annhelper = self.annhelper) def rewrite_set_param(self): from pypy.rpython.lltypesystem.rstr import STR From noreply at buildbot.pypy.org Fri Feb 24 11:37:08 2012 From: noreply at buildbot.pypy.org (bivab) Date: Fri, 24 Feb 2012 11:37:08 +0100 (CET) Subject: [pypy-commit] pypy arm-backend-2: handle force_spill operation in llgraph backend Message-ID: <20120224103708.066BC82366@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: arm-backend-2 Changeset: r52845:7a2349cfc51e Date: 2012-02-24 10:08 +0000 http://bitbucket.org/pypy/pypy/changeset/7a2349cfc51e/ Log: handle force_spill operation in llgraph backend diff --git 
a/pypy/jit/backend/llgraph/runner.py b/pypy/jit/backend/llgraph/runner.py --- a/pypy/jit/backend/llgraph/runner.py +++ b/pypy/jit/backend/llgraph/runner.py @@ -179,6 +179,8 @@ def _compile_operations(self, c, operations, var2index, clt): for op in operations: + if op.getopnum() == -124: # force_spill + continue llimpl.compile_add(c, op.getopnum()) descr = op.getdescr() if isinstance(descr, Descr): From noreply at buildbot.pypy.org Fri Feb 24 11:37:10 2012 From: noreply at buildbot.pypy.org (bivab) Date: Fri, 24 Feb 2012 11:37:10 +0100 (CET) Subject: [pypy-commit] pypy arm-backend-2: Support math_sqrt operation in llgraph Message-ID: <20120224103710.78ADE82366@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: arm-backend-2 Changeset: r52846:521049834e9b Date: 2012-02-24 10:35 +0000 http://bitbucket.org/pypy/pypy/changeset/521049834e9b/ Log: Support math_sqrt operation in llgraph diff --git a/pypy/jit/backend/llgraph/llimpl.py b/pypy/jit/backend/llgraph/llimpl.py --- a/pypy/jit/backend/llgraph/llimpl.py +++ b/pypy/jit/backend/llgraph/llimpl.py @@ -20,6 +20,7 @@ from pypy.jit.metainterp.resoperation import rop from pypy.jit.backend.llgraph import symbolic from pypy.jit.codewriter import longlong +from pypy.jit.codewriter.effectinfo import EffectInfo from pypy.rlib import libffi, clibffi from pypy.rlib.objectmodel import ComputedIntSymbolic, we_are_translated @@ -929,6 +930,11 @@ raise NotImplementedError def op_call(self, calldescr, func, *args): + effectinfo = calldescr.get_extra_info() + if effectinfo is not None: + oopspecindex = effectinfo.oopspecindex + if oopspecindex == EffectInfo.OS_MATH_SQRT: + return do_math_sqrt(args[0]) return self._do_call(calldescr, func, args, call_with_llptr=False) def op_call_release_gil(self, calldescr, func, *args): @@ -1626,6 +1632,12 @@ assert 0 <= dststart <= dststart + length <= len(dst.chars) rstr.copy_unicode_contents(src, dst, srcstart, dststart, length) +def do_math_sqrt(value): + import math + y = cast_from_floatstorage(lltype.Float, value) + x = math.sqrt(y) + return cast_to_floatstorage(x) + # ---------- call ---------- _call_args_i = [] From noreply at buildbot.pypy.org Fri Feb 24 11:57:02 2012 From: noreply at buildbot.pypy.org (arigo) Date: Fri, 24 Feb 2012 11:57:02 +0100 (CET) Subject: [pypy-commit] pypy stm-gc: Test and fix Message-ID: <20120224105702.BDD0882366@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-gc Changeset: r52847:329e75aa0c0e Date: 2012-02-24 11:56 +0100 http://bitbucket.org/pypy/pypy/changeset/329e75aa0c0e/ Log: Test and fix diff --git a/pypy/translator/stm/test/targetdemo.py b/pypy/translator/stm/test/targetdemo.py --- a/pypy/translator/stm/test/targetdemo.py +++ b/pypy/translator/stm/test/targetdemo.py @@ -72,6 +72,11 @@ if res is not arg: debug_print("ERROR: bogus pointer equality") raise AssertionError + raw1 = rffi.cast(rffi.CCHARP, retry_counter) + raw2 = rffi.cast(rffi.CCHARP, -1) + if raw1 == raw2: + debug_print("ERROR: retry_counter == -1") + raise AssertionError def run_me(): rstm.descriptor_init() diff --git a/pypy/translator/stm/transform.py b/pypy/translator/stm/transform.py --- a/pypy/translator/stm/transform.py +++ b/pypy/translator/stm/transform.py @@ -182,6 +182,10 @@ newoperations.append(op) def pointer_comparison(self, newoperations, op): + T = op.args[0].concretetype.TO + if T._gckind == 'raw': + newoperations.append(op) + return if (self.localtracker.is_local(op.args[0]) and self.localtracker.is_local(op.args[1])): newoperations.append(op) From noreply at buildbot.pypy.org Fri 
Feb 24 13:35:21 2012 From: noreply at buildbot.pypy.org (arigo) Date: Fri, 24 Feb 2012 13:35:21 +0100 (CET) Subject: [pypy-commit] pypy default: Backout 4320ef8d1ab2 and e2ced9ddf804. It's too late anyway if Message-ID: <20120224123521.CBBDA82366@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r52848:de8777d0b83c Date: 2012-02-24 13:06 +0100 http://bitbucket.org/pypy/pypy/changeset/de8777d0b83c/ Log: Backout 4320ef8d1ab2 and e2ced9ddf804. It's too late anyway if floats are needed as early as the initialization of the GC. diff --git a/pypy/jit/backend/x86/assembler.py b/pypy/jit/backend/x86/assembler.py --- a/pypy/jit/backend/x86/assembler.py +++ b/pypy/jit/backend/x86/assembler.py @@ -33,7 +33,7 @@ from pypy.jit.backend.x86.support import values_array from pypy.jit.backend.x86 import support from pypy.rlib.debug import (debug_print, debug_start, debug_stop, - have_debug_prints, fatalerror_notb) + have_debug_prints) from pypy.rlib import rgc from pypy.rlib.clibffi import FFI_DEFAULT_ABI from pypy.jit.backend.x86.jump import remap_frame_layout @@ -161,31 +161,6 @@ debug_print(prefix + ':' + str(struct.i)) debug_stop('jit-backend-counts') - _CHECK_SSE2_FUNC_PTR = lltype.Ptr(lltype.FuncType([], lltype.Signed)) - - def _check_sse2(self): - """This function is called early in the execution of the program. - It checks if the CPU really supports SSE2. It is only invoked in - translated versions for now.""" - if WORD == 8: - return # all x86-64 CPUs support SSE2 - if not self.cpu.supports_floats: - return # the CPU doesn't support float, so we don't need SSE2 - # - from pypy.jit.backend.x86.detect_sse2 import INSNS - mc = codebuf.MachineCodeBlockWrapper() - for c in INSNS: - mc.writechar(c) - rawstart = mc.materialize(self.cpu.asmmemmgr, []) - fnptr = rffi.cast(self._CHECK_SSE2_FUNC_PTR, rawstart) - features = fnptr() - if bool(features & (1<<25)) and bool(features & (1<<26)): - return # CPU supports SSE2 - fatalerror_notb( - "This version of PyPy was compiled for a x86 CPU supporting SSE2.\n" - "Your CPU is too old. 
Please translate a PyPy with the option:\n" - "--jit-backend=x86-without-sse2") - def _build_float_constants(self): datablockwrapper = MachineDataBlockWrapper(self.cpu.asmmemmgr, []) float_constants = datablockwrapper.malloc_aligned(32, alignment=16) diff --git a/pypy/jit/backend/x86/detect_sse2.py b/pypy/jit/backend/x86/detect_sse2.py --- a/pypy/jit/backend/x86/detect_sse2.py +++ b/pypy/jit/backend/x86/detect_sse2.py @@ -1,18 +1,17 @@ import autopath +from pypy.rpython.lltypesystem import lltype, rffi +from pypy.rlib.rmmap import alloc, free -INSNS = ("\xB8\x01\x00\x00\x00" # MOV EAX, 1 - "\x53" # PUSH EBX - "\x0F\xA2" # CPUID - "\x5B" # POP EBX - "\x92" # XCHG EAX, EDX - "\xC3") # RET def detect_sse2(): - from pypy.rpython.lltypesystem import lltype, rffi - from pypy.rlib.rmmap import alloc, free data = alloc(4096) pos = 0 - for c in INSNS: + for c in ("\xB8\x01\x00\x00\x00" # MOV EAX, 1 + "\x53" # PUSH EBX + "\x0F\xA2" # CPUID + "\x5B" # POP EBX + "\x92" # XCHG EAX, EDX + "\xC3"): # RET data[pos] = c pos += 1 fnptr = rffi.cast(lltype.Ptr(lltype.FuncType([], lltype.Signed)), data) diff --git a/pypy/jit/backend/x86/runner.py b/pypy/jit/backend/x86/runner.py --- a/pypy/jit/backend/x86/runner.py +++ b/pypy/jit/backend/x86/runner.py @@ -63,9 +63,6 @@ self.assembler.finish_once() self.profile_agent.shutdown() - def early_initialization(self): - self.assembler._check_sse2() - def dump_loop_token(self, looptoken): """ NOT_RPYTHON diff --git a/pypy/jit/metainterp/warmspot.py b/pypy/jit/metainterp/warmspot.py --- a/pypy/jit/metainterp/warmspot.py +++ b/pypy/jit/metainterp/warmspot.py @@ -13,7 +13,6 @@ from pypy.rlib.debug import fatalerror from pypy.rlib.rstackovf import StackOverflow from pypy.translator.simplify import get_functype -from pypy.translator.unsimplify import call_initial_function from pypy.translator.unsimplify import call_final_function from pypy.jit.metainterp import history, pyjitpl, gc, memmgr @@ -851,9 +850,6 @@ checkgraph(origportalgraph) def add_finish(self): - def start(): - self.cpu.early_initialization() - def finish(): if self.metainterp_sd.profiler.initialized: self.metainterp_sd.profiler.finish() @@ -862,9 +858,6 @@ if self.cpu.translate_support_code: call_final_function(self.translator, finish, annhelper = self.annhelper) - if hasattr(self.cpu, 'early_initialization'): - call_initial_function(self.translator, start, - annhelper = self.annhelper) def rewrite_set_param(self): from pypy.rpython.lltypesystem.rstr import STR From noreply at buildbot.pypy.org Fri Feb 24 13:35:23 2012 From: noreply at buildbot.pypy.org (arigo) Date: Fri, 24 Feb 2012 13:35:23 +0100 (CET) Subject: [pypy-commit] pypy default: Hack differently to have it written as C code running Message-ID: <20120224123523.2861682366@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r52849:8d6ee57401fc Date: 2012-02-24 13:35 +0100 http://bitbucket.org/pypy/pypy/changeset/8d6ee57401fc/ Log: Hack differently to have it written as C code running very early. Hopefully fixes issue1068 on gcc. The same could be added for MSVC. 
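[Editor's note, not part of the archived commit: the hunks below add the SSE2 test as GCC inline
assembly guarded by PYPY_X86_CHECK_SSE2 and call it from main.h at process start. As a rough
illustration of the technique only, here is a standalone sketch of the same CPUID check, assuming
a 32-bit x86 target and a GCC-compatible compiler; the function name cpu_has_sse2 is made up for
the example. CPUID leaf 1 reports SSE in bit 25 and SSE2 in bit 26 of EDX, which is what the patch
tests before aborting with a hint to retranslate with --jit-backend=x86-without-sse2.

    #include <stdio.h>
    #include <stdlib.h>

    /* Return nonzero if the CPU advertises both SSE and SSE2. */
    static int cpu_has_sse2(void)
    {
        unsigned int edx;
        /* CPUID leaf 1: feature flags come back in EDX.  EBX is saved and
           restored by hand because it can be the PIC register on x86-32. */
        __asm__ __volatile__(
            "pushl %%ebx\n\t"
            "movl $1, %%eax\n\t"
            "cpuid\n\t"
            "popl %%ebx\n\t"
            : "=d"(edx)
            :
            : "eax", "ecx");
        /* bit 25 = SSE, bit 26 = SSE2 */
        return (edx & (1u << 25)) && (edx & (1u << 26));
    }

    int main(void)
    {
        if (!cpu_has_sse2()) {
            fprintf(stderr, "No SSE2; a PyPy built for SSE2 cannot run here.\n");
            return 1;
        }
        printf("SSE2 present\n");
        return 0;
    }

Saving EBX by hand mirrors the machine-code helper in detect_sse2.py shown in the backed-out diff
above (PUSH EBX / CPUID / POP EBX), whereas the committed asm_gcc_x86.h version simply lists EBX
in the clobber list.]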
diff --git a/pypy/jit/backend/x86/support.py b/pypy/jit/backend/x86/support.py --- a/pypy/jit/backend/x86/support.py +++ b/pypy/jit/backend/x86/support.py @@ -1,6 +1,7 @@ import sys from pypy.rpython.lltypesystem import lltype, rffi, llmemory from pypy.translator.tool.cbuild import ExternalCompilationInfo +from pypy.jit.backend.x86.arch import WORD def values_array(TP, size): @@ -37,8 +38,13 @@ if sys.platform == 'win32': ensure_sse2_floats = lambda : None + # XXX check for SSE2 on win32 too else: + if WORD == 4: + extra = ['-DPYPY_X86_CHECK_SSE2'] + else: + extra = [] ensure_sse2_floats = rffi.llexternal_use_eci(ExternalCompilationInfo( compile_extra = ['-msse2', '-mfpmath=sse', - '-DPYPY_CPU_HAS_STANDARD_PRECISION'], + '-DPYPY_CPU_HAS_STANDARD_PRECISION'] + extra, )) diff --git a/pypy/translator/c/src/asm_gcc_x86.h b/pypy/translator/c/src/asm_gcc_x86.h --- a/pypy/translator/c/src/asm_gcc_x86.h +++ b/pypy/translator/c/src/asm_gcc_x86.h @@ -102,6 +102,12 @@ #endif /* !PYPY_CPU_HAS_STANDARD_PRECISION */ +#ifdef PYPY_X86_CHECK_SSE2 +#define PYPY_X86_CHECK_SSE2_DEFINED +extern void pypy_x86_check_sse2(void); +#endif + + /* implementations */ #ifndef PYPY_NOT_MAIN_FILE @@ -113,4 +119,25 @@ } # endif +# ifdef PYPY_X86_CHECK_SSE2 +void pypy_x86_check_sse2(void) +{ + //Read the CPU features. + int features; + asm("mov $1, %%eax\n" + "cpuid\n" + "mov %%edx, %0" + : "=g"(features) : : "eax", "ebx", "edx", "ecx"); + + //Check bits 25 and 26, this indicates SSE2 support + if (((features & (1 << 25)) == 0) || ((features & (1 << 26)) == 0)) + { + fprintf(stderr, "Old CPU with no SSE2 support, cannot continue.\n" + "You need to re-translate with " + "'--jit-backend=x86-without-sse2'\n"); + abort(); + } +} +# endif + #endif diff --git a/pypy/translator/c/src/debug_print.c b/pypy/translator/c/src/debug_print.c --- a/pypy/translator/c/src/debug_print.c +++ b/pypy/translator/c/src/debug_print.c @@ -1,3 +1,4 @@ +#define PYPY_NOT_MAIN_FILE #include #include diff --git a/pypy/translator/c/src/main.h b/pypy/translator/c/src/main.h --- a/pypy/translator/c/src/main.h +++ b/pypy/translator/c/src/main.h @@ -36,6 +36,9 @@ RPyListOfString *list; pypy_asm_stack_bottom(); +#ifdef PYPY_X86_CHECK_SSE2_DEFINED + pypy_x86_check_sse2(); +#endif instrument_setup(); if (sizeof(void*) != SIZEOF_LONG) { From noreply at buildbot.pypy.org Fri Feb 24 13:53:16 2012 From: noreply at buildbot.pypy.org (bivab) Date: Fri, 24 Feb 2012 13:53:16 +0100 (CET) Subject: [pypy-commit] pypy arm-backend-2: merge default Message-ID: <20120224125316.0A37882366@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: arm-backend-2 Changeset: r52850:e0cdb9fff2bf Date: 2012-02-16 19:14 +0100 http://bitbucket.org/pypy/pypy/changeset/e0cdb9fff2bf/ Log: merge default diff --git a/pypy/interpreter/pyframe.py b/pypy/interpreter/pyframe.py --- a/pypy/interpreter/pyframe.py +++ b/pypy/interpreter/pyframe.py @@ -60,11 +60,10 @@ self.pycode = code eval.Frame.__init__(self, space, w_globals) self.locals_stack_w = [None] * (code.co_nlocals + code.co_stacksize) - self.nlocals = code.co_nlocals self.valuestackdepth = code.co_nlocals self.lastblock = None make_sure_not_resized(self.locals_stack_w) - check_nonneg(self.nlocals) + check_nonneg(self.valuestackdepth) # if space.config.objspace.honor__builtins__: self.builtin = space.builtin.pick_builtin(w_globals) @@ -144,8 +143,8 @@ def execute_frame(self, w_inputvalue=None, operr=None): """Execute this frame. Main entry point to the interpreter. 
The optional arguments are there to handle a generator's frame: - w_inputvalue is for generator.send()) and operr is for - generator.throw()). + w_inputvalue is for generator.send() and operr is for + generator.throw(). """ # the following 'assert' is an annotation hint: it hides from # the annotator all methods that are defined in PyFrame but @@ -195,7 +194,7 @@ def popvalue(self): depth = self.valuestackdepth - 1 - assert depth >= self.nlocals, "pop from empty value stack" + assert depth >= self.pycode.co_nlocals, "pop from empty value stack" w_object = self.locals_stack_w[depth] self.locals_stack_w[depth] = None self.valuestackdepth = depth @@ -223,7 +222,7 @@ def peekvalues(self, n): values_w = [None] * n base = self.valuestackdepth - n - assert base >= self.nlocals + assert base >= self.pycode.co_nlocals while True: n -= 1 if n < 0: @@ -235,7 +234,8 @@ def dropvalues(self, n): n = hint(n, promote=True) finaldepth = self.valuestackdepth - n - assert finaldepth >= self.nlocals, "stack underflow in dropvalues()" + assert finaldepth >= self.pycode.co_nlocals, ( + "stack underflow in dropvalues()") while True: n -= 1 if n < 0: @@ -267,13 +267,15 @@ # Contrast this with CPython where it's PEEK(-1). index_from_top = hint(index_from_top, promote=True) index = self.valuestackdepth + ~index_from_top - assert index >= self.nlocals, "peek past the bottom of the stack" + assert index >= self.pycode.co_nlocals, ( + "peek past the bottom of the stack") return self.locals_stack_w[index] def settopvalue(self, w_object, index_from_top=0): index_from_top = hint(index_from_top, promote=True) index = self.valuestackdepth + ~index_from_top - assert index >= self.nlocals, "settop past the bottom of the stack" + assert index >= self.pycode.co_nlocals, ( + "settop past the bottom of the stack") self.locals_stack_w[index] = w_object @jit.unroll_safe @@ -320,12 +322,13 @@ else: f_lineno = self.f_lineno - values_w = self.locals_stack_w[self.nlocals:self.valuestackdepth] + nlocals = self.pycode.co_nlocals + values_w = self.locals_stack_w[nlocals:self.valuestackdepth] w_valuestack = maker.slp_into_tuple_with_nulls(space, values_w) w_blockstack = nt([block._get_state_(space) for block in self.get_blocklist()]) w_fastlocals = maker.slp_into_tuple_with_nulls( - space, self.locals_stack_w[:self.nlocals]) + space, self.locals_stack_w[:nlocals]) if self.last_exception is None: w_exc_value = space.w_None w_tb = space.w_None @@ -442,7 +445,7 @@ """Initialize the fast locals from a list of values, where the order is according to self.pycode.signature().""" scope_len = len(scope_w) - if scope_len > self.nlocals: + if scope_len > self.pycode.co_nlocals: raise ValueError, "new fastscope is longer than the allocated area" # don't assign directly to 'locals_stack_w[:scope_len]' to be # virtualizable-friendly @@ -456,7 +459,7 @@ pass def getfastscopelength(self): - return self.nlocals + return self.pycode.co_nlocals def getclosure(self): return None diff --git a/pypy/objspace/flow/flowcontext.py b/pypy/objspace/flow/flowcontext.py --- a/pypy/objspace/flow/flowcontext.py +++ b/pypy/objspace/flow/flowcontext.py @@ -410,7 +410,7 @@ w_new = Constant(newvalue) f = self.crnt_frame stack_items_w = f.locals_stack_w - for i in range(f.valuestackdepth-1, f.nlocals-1, -1): + for i in range(f.valuestackdepth-1, f.pycode.co_nlocals-1, -1): w_v = stack_items_w[i] if isinstance(w_v, Constant): if w_v.value is oldvalue: diff --git a/pypy/objspace/flow/test/test_framestate.py b/pypy/objspace/flow/test/test_framestate.py --- 
a/pypy/objspace/flow/test/test_framestate.py +++ b/pypy/objspace/flow/test/test_framestate.py @@ -25,7 +25,7 @@ dummy = Constant(None) #dummy.dummy = True arg_list = ([Variable() for i in range(formalargcount)] + - [dummy] * (frame.nlocals - formalargcount)) + [dummy] * (frame.pycode.co_nlocals - formalargcount)) frame.setfastscope(arg_list) return frame @@ -42,7 +42,7 @@ def test_neq_hacked_framestate(self): frame = self.getframe(self.func_simple) fs1 = FrameState(frame) - frame.locals_stack_w[frame.nlocals-1] = Variable() + frame.locals_stack_w[frame.pycode.co_nlocals-1] = Variable() fs2 = FrameState(frame) assert fs1 != fs2 @@ -55,7 +55,7 @@ def test_union_on_hacked_framestates(self): frame = self.getframe(self.func_simple) fs1 = FrameState(frame) - frame.locals_stack_w[frame.nlocals-1] = Variable() + frame.locals_stack_w[frame.pycode.co_nlocals-1] = Variable() fs2 = FrameState(frame) assert fs1.union(fs2) == fs2 # fs2 is more general assert fs2.union(fs1) == fs2 # fs2 is more general @@ -63,7 +63,7 @@ def test_restore_frame(self): frame = self.getframe(self.func_simple) fs1 = FrameState(frame) - frame.locals_stack_w[frame.nlocals-1] = Variable() + frame.locals_stack_w[frame.pycode.co_nlocals-1] = Variable() fs1.restoreframe(frame) assert fs1 == FrameState(frame) @@ -82,7 +82,7 @@ def test_getoutputargs(self): frame = self.getframe(self.func_simple) fs1 = FrameState(frame) - frame.locals_stack_w[frame.nlocals-1] = Variable() + frame.locals_stack_w[frame.pycode.co_nlocals-1] = Variable() fs2 = FrameState(frame) outputargs = fs1.getoutputargs(fs2) # 'x' -> 'x' is a Variable @@ -92,16 +92,16 @@ def test_union_different_constants(self): frame = self.getframe(self.func_simple) fs1 = FrameState(frame) - frame.locals_stack_w[frame.nlocals-1] = Constant(42) + frame.locals_stack_w[frame.pycode.co_nlocals-1] = Constant(42) fs2 = FrameState(frame) fs3 = fs1.union(fs2) fs3.restoreframe(frame) - assert isinstance(frame.locals_stack_w[frame.nlocals-1], Variable) - # ^^^ generalized + assert isinstance(frame.locals_stack_w[frame.pycode.co_nlocals-1], + Variable) # generalized def test_union_spectag(self): frame = self.getframe(self.func_simple) fs1 = FrameState(frame) - frame.locals_stack_w[frame.nlocals-1] = Constant(SpecTag()) + frame.locals_stack_w[frame.pycode.co_nlocals-1] = Constant(SpecTag()) fs2 = FrameState(frame) assert fs1.union(fs2) is None # UnionError diff --git a/pypy/translator/c/gcc/trackgcroot.py b/pypy/translator/c/gcc/trackgcroot.py --- a/pypy/translator/c/gcc/trackgcroot.py +++ b/pypy/translator/c/gcc/trackgcroot.py @@ -1697,6 +1697,8 @@ } """ elif self.format in ('elf64', 'darwin64'): + if self.format == 'elf64': # gentoo patch: hardened systems + print >> output, "\t.section .note.GNU-stack,\"\",%progbits" print >> output, "\t.text" print >> output, "\t.globl %s" % _globalname('pypy_asm_stackwalk') _variant(elf64='.type pypy_asm_stackwalk, @function', From noreply at buildbot.pypy.org Fri Feb 24 13:53:17 2012 From: noreply at buildbot.pypy.org (bivab) Date: Fri, 24 Feb 2012 13:53:17 +0100 (CET) Subject: [pypy-commit] pypy default: (arigo, bivab) Rename _Py_dg_* functions to __Py_dg_* to avoid name conflicts with python2.7 header files Message-ID: <20120224125317.423B682366@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: Changeset: r52851:ef6c79fe35b0 Date: 2012-02-24 13:50 +0100 http://bitbucket.org/pypy/pypy/changeset/ef6c79fe35b0/ Log: (arigo, bivab) Rename _Py_dg_* functions to __Py_dg_* to avoid name conflicts with python2.7 header files diff --git 
a/pypy/translator/c/src/dtoa.c b/pypy/translator/c/src/dtoa.c --- a/pypy/translator/c/src/dtoa.c +++ b/pypy/translator/c/src/dtoa.c @@ -46,13 +46,13 @@ * of return type *Bigint all return NULL to indicate a malloc failure. * Similarly, rv_alloc and nrv_alloc (return type char *) return NULL on * failure. bigcomp now has return type int (it used to be void) and - * returns -1 on failure and 0 otherwise. _Py_dg_dtoa returns NULL - * on failure. _Py_dg_strtod indicates failure due to malloc failure + * returns -1 on failure and 0 otherwise. __Py_dg_dtoa returns NULL + * on failure. __Py_dg_strtod indicates failure due to malloc failure * by returning -1.0, setting errno=ENOMEM and *se to s00. * * 4. The static variable dtoa_result has been removed. Callers of - * _Py_dg_dtoa are expected to call _Py_dg_freedtoa to free - * the memory allocated by _Py_dg_dtoa. + * __Py_dg_dtoa are expected to call __Py_dg_freedtoa to free + * the memory allocated by __Py_dg_dtoa. * * 5. The code has been reformatted to better fit with Python's * C style guide (PEP 7). @@ -61,7 +61,7 @@ * that hasn't been MALLOC'ed, private_mem should only be used when k <= * Kmax. * - * 7. _Py_dg_strtod has been modified so that it doesn't accept strings with + * 7. __Py_dg_strtod has been modified so that it doesn't accept strings with * leading whitespace. * ***************************************************************/ @@ -283,7 +283,7 @@ #define Big0 (Frac_mask1 | Exp_msk1*(DBL_MAX_EXP+Bias-1)) #define Big1 0xffffffff -/* struct BCinfo is used to pass information from _Py_dg_strtod to bigcomp */ +/* struct BCinfo is used to pass information from __Py_dg_strtod to bigcomp */ typedef struct BCinfo BCinfo; struct @@ -494,7 +494,7 @@ /* convert a string s containing nd decimal digits (possibly containing a decimal separator at position nd0, which is ignored) to a Bigint. This - function carries on where the parsing code in _Py_dg_strtod leaves off: on + function carries on where the parsing code in __Py_dg_strtod leaves off: on entry, y9 contains the result of converting the first 9 digits. Returns NULL on failure. */ @@ -1050,7 +1050,7 @@ } /* Convert a scaled double to a Bigint plus an exponent. Similar to d2b, - except that it accepts the scale parameter used in _Py_dg_strtod (which + except that it accepts the scale parameter used in __Py_dg_strtod (which should be either 0 or 2*P), and the normalization for the return value is different (see below). On input, d should be finite and nonnegative, and d / 2**scale should be exactly representable as an IEEE 754 double. @@ -1351,9 +1351,9 @@ /* The bigcomp function handles some hard cases for strtod, for inputs with more than STRTOD_DIGLIM digits. It's called once an initial estimate for the double corresponding to the input string has - already been obtained by the code in _Py_dg_strtod. + already been obtained by the code in __Py_dg_strtod. - The bigcomp function is only called after _Py_dg_strtod has found a + The bigcomp function is only called after __Py_dg_strtod has found a double value rv such that either rv or rv + 1ulp represents the correctly rounded value corresponding to the original string. It determines which of these two values is the correct one by @@ -1368,12 +1368,12 @@ s0 points to the first significant digit of the input string. rv is a (possibly scaled) estimate for the closest double value to the - value represented by the original input to _Py_dg_strtod. If + value represented by the original input to __Py_dg_strtod. 
If bc->scale is nonzero, then rv/2^(bc->scale) is the approximation to the input value. bc is a struct containing information gathered during the parsing and - estimation steps of _Py_dg_strtod. Description of fields follows: + estimation steps of __Py_dg_strtod. Description of fields follows: bc->e0 gives the exponent of the input value, such that dv = (integer given by the bd->nd digits of s0) * 10**e0 @@ -1505,7 +1505,7 @@ } static double -_Py_dg_strtod(const char *s00, char **se) +__Py_dg_strtod(const char *s00, char **se) { int bb2, bb5, bbe, bd2, bd5, bs2, c, dsign, e, e1, error; int esign, i, j, k, lz, nd, nd0, odd, sign; @@ -1849,7 +1849,7 @@ for(;;) { - /* This is the main correction loop for _Py_dg_strtod. + /* This is the main correction loop for __Py_dg_strtod. We've got a decimal value tdv, and a floating-point approximation srv=rv/2^bc.scale to tdv. The aim is to determine whether srv is @@ -2283,7 +2283,7 @@ */ static void -_Py_dg_freedtoa(char *s) +__Py_dg_freedtoa(char *s) { Bigint *b = (Bigint *)((int *)s - 1); b->maxwds = 1 << (b->k = *(int*)b); @@ -2325,11 +2325,11 @@ */ /* Additional notes (METD): (1) returns NULL on failure. (2) to avoid memory - leakage, a successful call to _Py_dg_dtoa should always be matched by a - call to _Py_dg_freedtoa. */ + leakage, a successful call to __Py_dg_dtoa should always be matched by a + call to __Py_dg_freedtoa. */ static char * -_Py_dg_dtoa(double dd, int mode, int ndigits, +__Py_dg_dtoa(double dd, int mode, int ndigits, int *decpt, int *sign, char **rve) { /* Arguments ndigits, decpt, sign are similar to those @@ -2926,7 +2926,7 @@ if (b) Bfree(b); if (s0) - _Py_dg_freedtoa(s0); + __Py_dg_freedtoa(s0); return NULL; } @@ -2947,7 +2947,7 @@ _PyPy_SET_53BIT_PRECISION_HEADER; _PyPy_SET_53BIT_PRECISION_START; - result = _Py_dg_strtod(s00, se); + result = __Py_dg_strtod(s00, se); _PyPy_SET_53BIT_PRECISION_END; return result; } @@ -2959,14 +2959,14 @@ _PyPy_SET_53BIT_PRECISION_HEADER; _PyPy_SET_53BIT_PRECISION_START; - result = _Py_dg_dtoa(dd, mode, ndigits, decpt, sign, rve); + result = __Py_dg_dtoa(dd, mode, ndigits, decpt, sign, rve); _PyPy_SET_53BIT_PRECISION_END; return result; } void _PyPy_dg_freedtoa(char *s) { - _Py_dg_freedtoa(s); + __Py_dg_freedtoa(s); } /* End PYPY hacks */ From noreply at buildbot.pypy.org Fri Feb 24 13:53:18 2012 From: noreply at buildbot.pypy.org (bivab) Date: Fri, 24 Feb 2012 13:53:18 +0100 (CET) Subject: [pypy-commit] pypy arm-backend-2: (arigo, bivab) Rename _Py_dg_* functions to __Py_dg_* to avoid name conflicts with python2.7 header files Message-ID: <20120224125318.7836E82366@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: arm-backend-2 Changeset: r52852:31fd148ff4be Date: 2012-02-24 13:50 +0100 http://bitbucket.org/pypy/pypy/changeset/31fd148ff4be/ Log: (arigo, bivab) Rename _Py_dg_* functions to __Py_dg_* to avoid name conflicts with python2.7 header files diff --git a/pypy/translator/c/src/dtoa.c b/pypy/translator/c/src/dtoa.c --- a/pypy/translator/c/src/dtoa.c +++ b/pypy/translator/c/src/dtoa.c @@ -46,13 +46,13 @@ * of return type *Bigint all return NULL to indicate a malloc failure. * Similarly, rv_alloc and nrv_alloc (return type char *) return NULL on * failure. bigcomp now has return type int (it used to be void) and - * returns -1 on failure and 0 otherwise. _Py_dg_dtoa returns NULL - * on failure. _Py_dg_strtod indicates failure due to malloc failure + * returns -1 on failure and 0 otherwise. __Py_dg_dtoa returns NULL + * on failure. 
__Py_dg_strtod indicates failure due to malloc failure * by returning -1.0, setting errno=ENOMEM and *se to s00. * * 4. The static variable dtoa_result has been removed. Callers of - * _Py_dg_dtoa are expected to call _Py_dg_freedtoa to free - * the memory allocated by _Py_dg_dtoa. + * __Py_dg_dtoa are expected to call __Py_dg_freedtoa to free + * the memory allocated by __Py_dg_dtoa. * * 5. The code has been reformatted to better fit with Python's * C style guide (PEP 7). @@ -61,7 +61,7 @@ * that hasn't been MALLOC'ed, private_mem should only be used when k <= * Kmax. * - * 7. _Py_dg_strtod has been modified so that it doesn't accept strings with + * 7. __Py_dg_strtod has been modified so that it doesn't accept strings with * leading whitespace. * ***************************************************************/ @@ -283,7 +283,7 @@ #define Big0 (Frac_mask1 | Exp_msk1*(DBL_MAX_EXP+Bias-1)) #define Big1 0xffffffff -/* struct BCinfo is used to pass information from _Py_dg_strtod to bigcomp */ +/* struct BCinfo is used to pass information from __Py_dg_strtod to bigcomp */ typedef struct BCinfo BCinfo; struct @@ -494,7 +494,7 @@ /* convert a string s containing nd decimal digits (possibly containing a decimal separator at position nd0, which is ignored) to a Bigint. This - function carries on where the parsing code in _Py_dg_strtod leaves off: on + function carries on where the parsing code in __Py_dg_strtod leaves off: on entry, y9 contains the result of converting the first 9 digits. Returns NULL on failure. */ @@ -1050,7 +1050,7 @@ } /* Convert a scaled double to a Bigint plus an exponent. Similar to d2b, - except that it accepts the scale parameter used in _Py_dg_strtod (which + except that it accepts the scale parameter used in __Py_dg_strtod (which should be either 0 or 2*P), and the normalization for the return value is different (see below). On input, d should be finite and nonnegative, and d / 2**scale should be exactly representable as an IEEE 754 double. @@ -1351,9 +1351,9 @@ /* The bigcomp function handles some hard cases for strtod, for inputs with more than STRTOD_DIGLIM digits. It's called once an initial estimate for the double corresponding to the input string has - already been obtained by the code in _Py_dg_strtod. + already been obtained by the code in __Py_dg_strtod. - The bigcomp function is only called after _Py_dg_strtod has found a + The bigcomp function is only called after __Py_dg_strtod has found a double value rv such that either rv or rv + 1ulp represents the correctly rounded value corresponding to the original string. It determines which of these two values is the correct one by @@ -1368,12 +1368,12 @@ s0 points to the first significant digit of the input string. rv is a (possibly scaled) estimate for the closest double value to the - value represented by the original input to _Py_dg_strtod. If + value represented by the original input to __Py_dg_strtod. If bc->scale is nonzero, then rv/2^(bc->scale) is the approximation to the input value. bc is a struct containing information gathered during the parsing and - estimation steps of _Py_dg_strtod. Description of fields follows: + estimation steps of __Py_dg_strtod. 
Description of fields follows: bc->e0 gives the exponent of the input value, such that dv = (integer given by the bd->nd digits of s0) * 10**e0 @@ -1505,7 +1505,7 @@ } static double -_Py_dg_strtod(const char *s00, char **se) +__Py_dg_strtod(const char *s00, char **se) { int bb2, bb5, bbe, bd2, bd5, bs2, c, dsign, e, e1, error; int esign, i, j, k, lz, nd, nd0, odd, sign; @@ -1849,7 +1849,7 @@ for(;;) { - /* This is the main correction loop for _Py_dg_strtod. + /* This is the main correction loop for __Py_dg_strtod. We've got a decimal value tdv, and a floating-point approximation srv=rv/2^bc.scale to tdv. The aim is to determine whether srv is @@ -2283,7 +2283,7 @@ */ static void -_Py_dg_freedtoa(char *s) +__Py_dg_freedtoa(char *s) { Bigint *b = (Bigint *)((int *)s - 1); b->maxwds = 1 << (b->k = *(int*)b); @@ -2325,11 +2325,11 @@ */ /* Additional notes (METD): (1) returns NULL on failure. (2) to avoid memory - leakage, a successful call to _Py_dg_dtoa should always be matched by a - call to _Py_dg_freedtoa. */ + leakage, a successful call to __Py_dg_dtoa should always be matched by a + call to __Py_dg_freedtoa. */ static char * -_Py_dg_dtoa(double dd, int mode, int ndigits, +__Py_dg_dtoa(double dd, int mode, int ndigits, int *decpt, int *sign, char **rve) { /* Arguments ndigits, decpt, sign are similar to those @@ -2926,7 +2926,7 @@ if (b) Bfree(b); if (s0) - _Py_dg_freedtoa(s0); + __Py_dg_freedtoa(s0); return NULL; } @@ -2947,7 +2947,7 @@ _PyPy_SET_53BIT_PRECISION_HEADER; _PyPy_SET_53BIT_PRECISION_START; - result = _Py_dg_strtod(s00, se); + result = __Py_dg_strtod(s00, se); _PyPy_SET_53BIT_PRECISION_END; return result; } @@ -2959,14 +2959,14 @@ _PyPy_SET_53BIT_PRECISION_HEADER; _PyPy_SET_53BIT_PRECISION_START; - result = _Py_dg_dtoa(dd, mode, ndigits, decpt, sign, rve); + result = __Py_dg_dtoa(dd, mode, ndigits, decpt, sign, rve); _PyPy_SET_53BIT_PRECISION_END; return result; } void _PyPy_dg_freedtoa(char *s) { - _Py_dg_freedtoa(s); + __Py_dg_freedtoa(s); } /* End PYPY hacks */ From noreply at buildbot.pypy.org Fri Feb 24 13:53:19 2012 From: noreply at buildbot.pypy.org (bivab) Date: Fri, 24 Feb 2012 13:53:19 +0100 (CET) Subject: [pypy-commit] pypy arm-backend-2: merge heads Message-ID: <20120224125319.B099782366@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: arm-backend-2 Changeset: r52853:e63805bdcc74 Date: 2012-02-24 13:52 +0100 http://bitbucket.org/pypy/pypy/changeset/e63805bdcc74/ Log: merge heads diff --git a/pypy/interpreter/pyframe.py b/pypy/interpreter/pyframe.py --- a/pypy/interpreter/pyframe.py +++ b/pypy/interpreter/pyframe.py @@ -60,11 +60,10 @@ self.pycode = code eval.Frame.__init__(self, space, w_globals) self.locals_stack_w = [None] * (code.co_nlocals + code.co_stacksize) - self.nlocals = code.co_nlocals self.valuestackdepth = code.co_nlocals self.lastblock = None make_sure_not_resized(self.locals_stack_w) - check_nonneg(self.nlocals) + check_nonneg(self.valuestackdepth) # if space.config.objspace.honor__builtins__: self.builtin = space.builtin.pick_builtin(w_globals) @@ -144,8 +143,8 @@ def execute_frame(self, w_inputvalue=None, operr=None): """Execute this frame. Main entry point to the interpreter. The optional arguments are there to handle a generator's frame: - w_inputvalue is for generator.send()) and operr is for - generator.throw()). + w_inputvalue is for generator.send() and operr is for + generator.throw(). 
""" # the following 'assert' is an annotation hint: it hides from # the annotator all methods that are defined in PyFrame but @@ -195,7 +194,7 @@ def popvalue(self): depth = self.valuestackdepth - 1 - assert depth >= self.nlocals, "pop from empty value stack" + assert depth >= self.pycode.co_nlocals, "pop from empty value stack" w_object = self.locals_stack_w[depth] self.locals_stack_w[depth] = None self.valuestackdepth = depth @@ -223,7 +222,7 @@ def peekvalues(self, n): values_w = [None] * n base = self.valuestackdepth - n - assert base >= self.nlocals + assert base >= self.pycode.co_nlocals while True: n -= 1 if n < 0: @@ -235,7 +234,8 @@ def dropvalues(self, n): n = hint(n, promote=True) finaldepth = self.valuestackdepth - n - assert finaldepth >= self.nlocals, "stack underflow in dropvalues()" + assert finaldepth >= self.pycode.co_nlocals, ( + "stack underflow in dropvalues()") while True: n -= 1 if n < 0: @@ -267,13 +267,15 @@ # Contrast this with CPython where it's PEEK(-1). index_from_top = hint(index_from_top, promote=True) index = self.valuestackdepth + ~index_from_top - assert index >= self.nlocals, "peek past the bottom of the stack" + assert index >= self.pycode.co_nlocals, ( + "peek past the bottom of the stack") return self.locals_stack_w[index] def settopvalue(self, w_object, index_from_top=0): index_from_top = hint(index_from_top, promote=True) index = self.valuestackdepth + ~index_from_top - assert index >= self.nlocals, "settop past the bottom of the stack" + assert index >= self.pycode.co_nlocals, ( + "settop past the bottom of the stack") self.locals_stack_w[index] = w_object @jit.unroll_safe @@ -320,12 +322,13 @@ else: f_lineno = self.f_lineno - values_w = self.locals_stack_w[self.nlocals:self.valuestackdepth] + nlocals = self.pycode.co_nlocals + values_w = self.locals_stack_w[nlocals:self.valuestackdepth] w_valuestack = maker.slp_into_tuple_with_nulls(space, values_w) w_blockstack = nt([block._get_state_(space) for block in self.get_blocklist()]) w_fastlocals = maker.slp_into_tuple_with_nulls( - space, self.locals_stack_w[:self.nlocals]) + space, self.locals_stack_w[:nlocals]) if self.last_exception is None: w_exc_value = space.w_None w_tb = space.w_None @@ -442,7 +445,7 @@ """Initialize the fast locals from a list of values, where the order is according to self.pycode.signature().""" scope_len = len(scope_w) - if scope_len > self.nlocals: + if scope_len > self.pycode.co_nlocals: raise ValueError, "new fastscope is longer than the allocated area" # don't assign directly to 'locals_stack_w[:scope_len]' to be # virtualizable-friendly @@ -456,7 +459,7 @@ pass def getfastscopelength(self): - return self.nlocals + return self.pycode.co_nlocals def getclosure(self): return None diff --git a/pypy/objspace/flow/flowcontext.py b/pypy/objspace/flow/flowcontext.py --- a/pypy/objspace/flow/flowcontext.py +++ b/pypy/objspace/flow/flowcontext.py @@ -410,7 +410,7 @@ w_new = Constant(newvalue) f = self.crnt_frame stack_items_w = f.locals_stack_w - for i in range(f.valuestackdepth-1, f.nlocals-1, -1): + for i in range(f.valuestackdepth-1, f.pycode.co_nlocals-1, -1): w_v = stack_items_w[i] if isinstance(w_v, Constant): if w_v.value is oldvalue: diff --git a/pypy/objspace/flow/test/test_framestate.py b/pypy/objspace/flow/test/test_framestate.py --- a/pypy/objspace/flow/test/test_framestate.py +++ b/pypy/objspace/flow/test/test_framestate.py @@ -25,7 +25,7 @@ dummy = Constant(None) #dummy.dummy = True arg_list = ([Variable() for i in range(formalargcount)] + - [dummy] * (frame.nlocals - 
formalargcount)) + [dummy] * (frame.pycode.co_nlocals - formalargcount)) frame.setfastscope(arg_list) return frame @@ -42,7 +42,7 @@ def test_neq_hacked_framestate(self): frame = self.getframe(self.func_simple) fs1 = FrameState(frame) - frame.locals_stack_w[frame.nlocals-1] = Variable() + frame.locals_stack_w[frame.pycode.co_nlocals-1] = Variable() fs2 = FrameState(frame) assert fs1 != fs2 @@ -55,7 +55,7 @@ def test_union_on_hacked_framestates(self): frame = self.getframe(self.func_simple) fs1 = FrameState(frame) - frame.locals_stack_w[frame.nlocals-1] = Variable() + frame.locals_stack_w[frame.pycode.co_nlocals-1] = Variable() fs2 = FrameState(frame) assert fs1.union(fs2) == fs2 # fs2 is more general assert fs2.union(fs1) == fs2 # fs2 is more general @@ -63,7 +63,7 @@ def test_restore_frame(self): frame = self.getframe(self.func_simple) fs1 = FrameState(frame) - frame.locals_stack_w[frame.nlocals-1] = Variable() + frame.locals_stack_w[frame.pycode.co_nlocals-1] = Variable() fs1.restoreframe(frame) assert fs1 == FrameState(frame) @@ -82,7 +82,7 @@ def test_getoutputargs(self): frame = self.getframe(self.func_simple) fs1 = FrameState(frame) - frame.locals_stack_w[frame.nlocals-1] = Variable() + frame.locals_stack_w[frame.pycode.co_nlocals-1] = Variable() fs2 = FrameState(frame) outputargs = fs1.getoutputargs(fs2) # 'x' -> 'x' is a Variable @@ -92,16 +92,16 @@ def test_union_different_constants(self): frame = self.getframe(self.func_simple) fs1 = FrameState(frame) - frame.locals_stack_w[frame.nlocals-1] = Constant(42) + frame.locals_stack_w[frame.pycode.co_nlocals-1] = Constant(42) fs2 = FrameState(frame) fs3 = fs1.union(fs2) fs3.restoreframe(frame) - assert isinstance(frame.locals_stack_w[frame.nlocals-1], Variable) - # ^^^ generalized + assert isinstance(frame.locals_stack_w[frame.pycode.co_nlocals-1], + Variable) # generalized def test_union_spectag(self): frame = self.getframe(self.func_simple) fs1 = FrameState(frame) - frame.locals_stack_w[frame.nlocals-1] = Constant(SpecTag()) + frame.locals_stack_w[frame.pycode.co_nlocals-1] = Constant(SpecTag()) fs2 = FrameState(frame) assert fs1.union(fs2) is None # UnionError diff --git a/pypy/translator/c/gcc/trackgcroot.py b/pypy/translator/c/gcc/trackgcroot.py --- a/pypy/translator/c/gcc/trackgcroot.py +++ b/pypy/translator/c/gcc/trackgcroot.py @@ -1697,6 +1697,8 @@ } """ elif self.format in ('elf64', 'darwin64'): + if self.format == 'elf64': # gentoo patch: hardened systems + print >> output, "\t.section .note.GNU-stack,\"\",%progbits" print >> output, "\t.text" print >> output, "\t.globl %s" % _globalname('pypy_asm_stackwalk') _variant(elf64='.type pypy_asm_stackwalk, @function', From noreply at buildbot.pypy.org Fri Feb 24 14:01:04 2012 From: noreply at buildbot.pypy.org (bivab) Date: Fri, 24 Feb 2012 14:01:04 +0100 (CET) Subject: [pypy-commit] pypy arm-backend-2: extend timeout in these tests too Message-ID: <20120224130104.58E6D82366@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: arm-backend-2 Changeset: r52854:0b6b1844ab77 Date: 2012-02-24 10:43 +0000 http://bitbucket.org/pypy/pypy/changeset/0b6b1844ab77/ Log: extend timeout in these tests too diff --git a/pypy/module/posix/test/test_posix2.py b/pypy/module/posix/test/test_posix2.py --- a/pypy/module/posix/test/test_posix2.py +++ b/pypy/module/posix/test/test_posix2.py @@ -1049,6 +1049,7 @@ def _spawn(self, *args, **kwds): import pexpect + kwds.setdefault('timeout', 600) print 'SPAWN:', args, kwds child = pexpect.spawn(*args, **kwds) child.logfile = sys.stdout From noreply at 
buildbot.pypy.org Fri Feb 24 14:01:05 2012 From: noreply at buildbot.pypy.org (bivab) Date: Fri, 24 Feb 2012 14:01:05 +0100 (CET) Subject: [pypy-commit] pypy arm-backend-2: merge upstream Message-ID: <20120224130105.9544082366@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: arm-backend-2 Changeset: r52855:6b901041dabf Date: 2012-02-24 12:55 +0000 http://bitbucket.org/pypy/pypy/changeset/6b901041dabf/ Log: merge upstream diff --git a/pypy/interpreter/pyframe.py b/pypy/interpreter/pyframe.py --- a/pypy/interpreter/pyframe.py +++ b/pypy/interpreter/pyframe.py @@ -60,11 +60,10 @@ self.pycode = code eval.Frame.__init__(self, space, w_globals) self.locals_stack_w = [None] * (code.co_nlocals + code.co_stacksize) - self.nlocals = code.co_nlocals self.valuestackdepth = code.co_nlocals self.lastblock = None make_sure_not_resized(self.locals_stack_w) - check_nonneg(self.nlocals) + check_nonneg(self.valuestackdepth) # if space.config.objspace.honor__builtins__: self.builtin = space.builtin.pick_builtin(w_globals) @@ -144,8 +143,8 @@ def execute_frame(self, w_inputvalue=None, operr=None): """Execute this frame. Main entry point to the interpreter. The optional arguments are there to handle a generator's frame: - w_inputvalue is for generator.send()) and operr is for - generator.throw()). + w_inputvalue is for generator.send() and operr is for + generator.throw(). """ # the following 'assert' is an annotation hint: it hides from # the annotator all methods that are defined in PyFrame but @@ -195,7 +194,7 @@ def popvalue(self): depth = self.valuestackdepth - 1 - assert depth >= self.nlocals, "pop from empty value stack" + assert depth >= self.pycode.co_nlocals, "pop from empty value stack" w_object = self.locals_stack_w[depth] self.locals_stack_w[depth] = None self.valuestackdepth = depth @@ -223,7 +222,7 @@ def peekvalues(self, n): values_w = [None] * n base = self.valuestackdepth - n - assert base >= self.nlocals + assert base >= self.pycode.co_nlocals while True: n -= 1 if n < 0: @@ -235,7 +234,8 @@ def dropvalues(self, n): n = hint(n, promote=True) finaldepth = self.valuestackdepth - n - assert finaldepth >= self.nlocals, "stack underflow in dropvalues()" + assert finaldepth >= self.pycode.co_nlocals, ( + "stack underflow in dropvalues()") while True: n -= 1 if n < 0: @@ -267,13 +267,15 @@ # Contrast this with CPython where it's PEEK(-1). 
index_from_top = hint(index_from_top, promote=True) index = self.valuestackdepth + ~index_from_top - assert index >= self.nlocals, "peek past the bottom of the stack" + assert index >= self.pycode.co_nlocals, ( + "peek past the bottom of the stack") return self.locals_stack_w[index] def settopvalue(self, w_object, index_from_top=0): index_from_top = hint(index_from_top, promote=True) index = self.valuestackdepth + ~index_from_top - assert index >= self.nlocals, "settop past the bottom of the stack" + assert index >= self.pycode.co_nlocals, ( + "settop past the bottom of the stack") self.locals_stack_w[index] = w_object @jit.unroll_safe @@ -320,12 +322,13 @@ else: f_lineno = self.f_lineno - values_w = self.locals_stack_w[self.nlocals:self.valuestackdepth] + nlocals = self.pycode.co_nlocals + values_w = self.locals_stack_w[nlocals:self.valuestackdepth] w_valuestack = maker.slp_into_tuple_with_nulls(space, values_w) w_blockstack = nt([block._get_state_(space) for block in self.get_blocklist()]) w_fastlocals = maker.slp_into_tuple_with_nulls( - space, self.locals_stack_w[:self.nlocals]) + space, self.locals_stack_w[:nlocals]) if self.last_exception is None: w_exc_value = space.w_None w_tb = space.w_None @@ -442,7 +445,7 @@ """Initialize the fast locals from a list of values, where the order is according to self.pycode.signature().""" scope_len = len(scope_w) - if scope_len > self.nlocals: + if scope_len > self.pycode.co_nlocals: raise ValueError, "new fastscope is longer than the allocated area" # don't assign directly to 'locals_stack_w[:scope_len]' to be # virtualizable-friendly @@ -456,7 +459,7 @@ pass def getfastscopelength(self): - return self.nlocals + return self.pycode.co_nlocals def getclosure(self): return None diff --git a/pypy/objspace/flow/flowcontext.py b/pypy/objspace/flow/flowcontext.py --- a/pypy/objspace/flow/flowcontext.py +++ b/pypy/objspace/flow/flowcontext.py @@ -410,7 +410,7 @@ w_new = Constant(newvalue) f = self.crnt_frame stack_items_w = f.locals_stack_w - for i in range(f.valuestackdepth-1, f.nlocals-1, -1): + for i in range(f.valuestackdepth-1, f.pycode.co_nlocals-1, -1): w_v = stack_items_w[i] if isinstance(w_v, Constant): if w_v.value is oldvalue: diff --git a/pypy/objspace/flow/test/test_framestate.py b/pypy/objspace/flow/test/test_framestate.py --- a/pypy/objspace/flow/test/test_framestate.py +++ b/pypy/objspace/flow/test/test_framestate.py @@ -25,7 +25,7 @@ dummy = Constant(None) #dummy.dummy = True arg_list = ([Variable() for i in range(formalargcount)] + - [dummy] * (frame.nlocals - formalargcount)) + [dummy] * (frame.pycode.co_nlocals - formalargcount)) frame.setfastscope(arg_list) return frame @@ -42,7 +42,7 @@ def test_neq_hacked_framestate(self): frame = self.getframe(self.func_simple) fs1 = FrameState(frame) - frame.locals_stack_w[frame.nlocals-1] = Variable() + frame.locals_stack_w[frame.pycode.co_nlocals-1] = Variable() fs2 = FrameState(frame) assert fs1 != fs2 @@ -55,7 +55,7 @@ def test_union_on_hacked_framestates(self): frame = self.getframe(self.func_simple) fs1 = FrameState(frame) - frame.locals_stack_w[frame.nlocals-1] = Variable() + frame.locals_stack_w[frame.pycode.co_nlocals-1] = Variable() fs2 = FrameState(frame) assert fs1.union(fs2) == fs2 # fs2 is more general assert fs2.union(fs1) == fs2 # fs2 is more general @@ -63,7 +63,7 @@ def test_restore_frame(self): frame = self.getframe(self.func_simple) fs1 = FrameState(frame) - frame.locals_stack_w[frame.nlocals-1] = Variable() + frame.locals_stack_w[frame.pycode.co_nlocals-1] = Variable() 
fs1.restoreframe(frame) assert fs1 == FrameState(frame) @@ -82,7 +82,7 @@ def test_getoutputargs(self): frame = self.getframe(self.func_simple) fs1 = FrameState(frame) - frame.locals_stack_w[frame.nlocals-1] = Variable() + frame.locals_stack_w[frame.pycode.co_nlocals-1] = Variable() fs2 = FrameState(frame) outputargs = fs1.getoutputargs(fs2) # 'x' -> 'x' is a Variable @@ -92,16 +92,16 @@ def test_union_different_constants(self): frame = self.getframe(self.func_simple) fs1 = FrameState(frame) - frame.locals_stack_w[frame.nlocals-1] = Constant(42) + frame.locals_stack_w[frame.pycode.co_nlocals-1] = Constant(42) fs2 = FrameState(frame) fs3 = fs1.union(fs2) fs3.restoreframe(frame) - assert isinstance(frame.locals_stack_w[frame.nlocals-1], Variable) - # ^^^ generalized + assert isinstance(frame.locals_stack_w[frame.pycode.co_nlocals-1], + Variable) # generalized def test_union_spectag(self): frame = self.getframe(self.func_simple) fs1 = FrameState(frame) - frame.locals_stack_w[frame.nlocals-1] = Constant(SpecTag()) + frame.locals_stack_w[frame.pycode.co_nlocals-1] = Constant(SpecTag()) fs2 = FrameState(frame) assert fs1.union(fs2) is None # UnionError diff --git a/pypy/translator/c/gcc/trackgcroot.py b/pypy/translator/c/gcc/trackgcroot.py --- a/pypy/translator/c/gcc/trackgcroot.py +++ b/pypy/translator/c/gcc/trackgcroot.py @@ -1697,6 +1697,8 @@ } """ elif self.format in ('elf64', 'darwin64'): + if self.format == 'elf64': # gentoo patch: hardened systems + print >> output, "\t.section .note.GNU-stack,\"\",%progbits" print >> output, "\t.text" print >> output, "\t.globl %s" % _globalname('pypy_asm_stackwalk') _variant(elf64='.type pypy_asm_stackwalk, @function', diff --git a/pypy/translator/c/src/dtoa.c b/pypy/translator/c/src/dtoa.c --- a/pypy/translator/c/src/dtoa.c +++ b/pypy/translator/c/src/dtoa.c @@ -46,13 +46,13 @@ * of return type *Bigint all return NULL to indicate a malloc failure. * Similarly, rv_alloc and nrv_alloc (return type char *) return NULL on * failure. bigcomp now has return type int (it used to be void) and - * returns -1 on failure and 0 otherwise. _Py_dg_dtoa returns NULL - * on failure. _Py_dg_strtod indicates failure due to malloc failure + * returns -1 on failure and 0 otherwise. __Py_dg_dtoa returns NULL + * on failure. __Py_dg_strtod indicates failure due to malloc failure * by returning -1.0, setting errno=ENOMEM and *se to s00. * * 4. The static variable dtoa_result has been removed. Callers of - * _Py_dg_dtoa are expected to call _Py_dg_freedtoa to free - * the memory allocated by _Py_dg_dtoa. + * __Py_dg_dtoa are expected to call __Py_dg_freedtoa to free + * the memory allocated by __Py_dg_dtoa. * * 5. The code has been reformatted to better fit with Python's * C style guide (PEP 7). @@ -61,7 +61,7 @@ * that hasn't been MALLOC'ed, private_mem should only be used when k <= * Kmax. * - * 7. _Py_dg_strtod has been modified so that it doesn't accept strings with + * 7. __Py_dg_strtod has been modified so that it doesn't accept strings with * leading whitespace. 
* ***************************************************************/ @@ -283,7 +283,7 @@ #define Big0 (Frac_mask1 | Exp_msk1*(DBL_MAX_EXP+Bias-1)) #define Big1 0xffffffff -/* struct BCinfo is used to pass information from _Py_dg_strtod to bigcomp */ +/* struct BCinfo is used to pass information from __Py_dg_strtod to bigcomp */ typedef struct BCinfo BCinfo; struct @@ -494,7 +494,7 @@ /* convert a string s containing nd decimal digits (possibly containing a decimal separator at position nd0, which is ignored) to a Bigint. This - function carries on where the parsing code in _Py_dg_strtod leaves off: on + function carries on where the parsing code in __Py_dg_strtod leaves off: on entry, y9 contains the result of converting the first 9 digits. Returns NULL on failure. */ @@ -1050,7 +1050,7 @@ } /* Convert a scaled double to a Bigint plus an exponent. Similar to d2b, - except that it accepts the scale parameter used in _Py_dg_strtod (which + except that it accepts the scale parameter used in __Py_dg_strtod (which should be either 0 or 2*P), and the normalization for the return value is different (see below). On input, d should be finite and nonnegative, and d / 2**scale should be exactly representable as an IEEE 754 double. @@ -1351,9 +1351,9 @@ /* The bigcomp function handles some hard cases for strtod, for inputs with more than STRTOD_DIGLIM digits. It's called once an initial estimate for the double corresponding to the input string has - already been obtained by the code in _Py_dg_strtod. + already been obtained by the code in __Py_dg_strtod. - The bigcomp function is only called after _Py_dg_strtod has found a + The bigcomp function is only called after __Py_dg_strtod has found a double value rv such that either rv or rv + 1ulp represents the correctly rounded value corresponding to the original string. It determines which of these two values is the correct one by @@ -1368,12 +1368,12 @@ s0 points to the first significant digit of the input string. rv is a (possibly scaled) estimate for the closest double value to the - value represented by the original input to _Py_dg_strtod. If + value represented by the original input to __Py_dg_strtod. If bc->scale is nonzero, then rv/2^(bc->scale) is the approximation to the input value. bc is a struct containing information gathered during the parsing and - estimation steps of _Py_dg_strtod. Description of fields follows: + estimation steps of __Py_dg_strtod. Description of fields follows: bc->e0 gives the exponent of the input value, such that dv = (integer given by the bd->nd digits of s0) * 10**e0 @@ -1505,7 +1505,7 @@ } static double -_Py_dg_strtod(const char *s00, char **se) +__Py_dg_strtod(const char *s00, char **se) { int bb2, bb5, bbe, bd2, bd5, bs2, c, dsign, e, e1, error; int esign, i, j, k, lz, nd, nd0, odd, sign; @@ -1849,7 +1849,7 @@ for(;;) { - /* This is the main correction loop for _Py_dg_strtod. + /* This is the main correction loop for __Py_dg_strtod. We've got a decimal value tdv, and a floating-point approximation srv=rv/2^bc.scale to tdv. The aim is to determine whether srv is @@ -2283,7 +2283,7 @@ */ static void -_Py_dg_freedtoa(char *s) +__Py_dg_freedtoa(char *s) { Bigint *b = (Bigint *)((int *)s - 1); b->maxwds = 1 << (b->k = *(int*)b); @@ -2325,11 +2325,11 @@ */ /* Additional notes (METD): (1) returns NULL on failure. (2) to avoid memory - leakage, a successful call to _Py_dg_dtoa should always be matched by a - call to _Py_dg_freedtoa. 
*/ + leakage, a successful call to __Py_dg_dtoa should always be matched by a + call to __Py_dg_freedtoa. */ static char * -_Py_dg_dtoa(double dd, int mode, int ndigits, +__Py_dg_dtoa(double dd, int mode, int ndigits, int *decpt, int *sign, char **rve) { /* Arguments ndigits, decpt, sign are similar to those @@ -2926,7 +2926,7 @@ if (b) Bfree(b); if (s0) - _Py_dg_freedtoa(s0); + __Py_dg_freedtoa(s0); return NULL; } @@ -2947,7 +2947,7 @@ _PyPy_SET_53BIT_PRECISION_HEADER; _PyPy_SET_53BIT_PRECISION_START; - result = _Py_dg_strtod(s00, se); + result = __Py_dg_strtod(s00, se); _PyPy_SET_53BIT_PRECISION_END; return result; } @@ -2959,14 +2959,14 @@ _PyPy_SET_53BIT_PRECISION_HEADER; _PyPy_SET_53BIT_PRECISION_START; - result = _Py_dg_dtoa(dd, mode, ndigits, decpt, sign, rve); + result = __Py_dg_dtoa(dd, mode, ndigits, decpt, sign, rve); _PyPy_SET_53BIT_PRECISION_END; return result; } void _PyPy_dg_freedtoa(char *s) { - _Py_dg_freedtoa(s); + __Py_dg_freedtoa(s); } /* End PYPY hacks */ From noreply at buildbot.pypy.org Fri Feb 24 15:12:00 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Fri, 24 Feb 2012 15:12:00 +0100 (CET) Subject: [pypy-commit] pypy py3k: fix Module.__reduce__, the name now is an unicode string Message-ID: <20120224141200.B86BC82366@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52856:99bb29709f3f Date: 2012-02-24 11:31 +0100 http://bitbucket.org/pypy/pypy/changeset/99bb29709f3f/ Log: fix Module.__reduce__, the name now is an unicode string diff --git a/pypy/interpreter/module.py b/pypy/interpreter/module.py --- a/pypy/interpreter/module.py +++ b/pypy/interpreter/module.py @@ -80,7 +80,7 @@ def descr__reduce__(self, space): w_name = space.finditem(self.w_dict, space.wrap('__name__')) if (w_name is None or - not space.is_true(space.isinstance(w_name, space.w_str))): + not space.is_true(space.isinstance(w_name, space.w_unicode))): # maybe raise exception here (XXX this path is untested) return space.w_None w_modules = space.sys.get('modules') From noreply at buildbot.pypy.org Fri Feb 24 15:12:02 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Fri, 24 Feb 2012 15:12:02 +0100 (CET) Subject: [pypy-commit] pypy py3k: split this test into two parts; fix the first, and skip the second (maybe we should just kill it?) Message-ID: <20120224141202.401D082366@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52857:64d3ae567ff2 Date: 2012-02-24 11:34 +0100 http://bitbucket.org/pypy/pypy/changeset/64d3ae567ff2/ Log: split this test into two parts; fix the first, and skip the second (maybe we should just kill it?) 
diff --git a/pypy/interpreter/test/test_zzpickle_and_slow.py b/pypy/interpreter/test/test_zzpickle_and_slow.py --- a/pypy/interpreter/test/test_zzpickle_and_slow.py +++ b/pypy/interpreter/test/test_zzpickle_and_slow.py @@ -459,8 +459,10 @@ meth1(1) meth2(2) assert a_list == [1, 1] - assert meth2.im_self == [1, 2] + assert meth2.__self__ == [1, 2] + def test_pickle_builtin_method_unbound(self): + skip('we no longer have unbound methods in py3k: is this test still valid?') unbound_meth = list.append unbound_meth2 = pickle.loads(pickle.dumps(unbound_meth)) l = [] From noreply at buildbot.pypy.org Fri Feb 24 15:12:03 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Fri, 24 Feb 2012 15:12:03 +0100 (CET) Subject: [pypy-commit] pypy py3k: replace new.module with types.ModuleType Message-ID: <20120224141203.787AE82366@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52858:10a54e1fe177 Date: 2012-02-24 11:40 +0100 http://bitbucket.org/pypy/pypy/changeset/10a54e1fe177/ Log: replace new.module with types.ModuleType diff --git a/pypy/interpreter/test/test_zzpickle_and_slow.py b/pypy/interpreter/test/test_zzpickle_and_slow.py --- a/pypy/interpreter/test/test_zzpickle_and_slow.py +++ b/pypy/interpreter/test/test_zzpickle_and_slow.py @@ -90,8 +90,8 @@ assert code == result def test_pickle_global_func(self): - import new - mod = new.module('mod') + import types + mod = types.ModuleType('mod') import sys sys.modules['mod'] = mod try: @@ -107,8 +107,8 @@ del sys.modules['mod'] def test_pickle_not_imported_module(self): - import new - mod = new.module('mod') + import types + mod = types.ModuleType('mod') mod.__dict__['a'] = 1 import pickle pckl = pickle.dumps(mod) @@ -286,10 +286,10 @@ return 42 def __reduce__(self): return (myclass, ()) - import pickle, sys, new + import pickle, sys, types myclass.__module__ = 'mod' myclass_inst = myclass() - mod = new.module('mod') + mod = types.ModuleType('mod') mod.myclass = myclass sys.modules['mod'] = mod try: @@ -317,9 +317,9 @@ def f(cls): return cls f = classmethod(f) - import pickle, sys, new + import pickle, sys, types myclass.__module__ = 'mod' - mod = new.module('mod') + mod = types.ModuleType('mod') mod.myclass = myclass sys.modules['mod'] = mod try: @@ -403,8 +403,8 @@ assert list(result) == [2,3,4] def test_pickle_generator(self): - import new - mod = new.module('mod') + import types + mod = types.ModuleType('mod') import sys sys.modules['mod'] = mod try: @@ -427,8 +427,8 @@ def test_pickle_generator_blk(self): # same as above but with the generator inside a block - import new - mod = new.module('mod') + import types + mod = types.ModuleType('mod') import sys sys.modules['mod'] = mod try: @@ -471,11 +471,11 @@ def test_pickle_submodule(self): import pickle - import sys, new + import sys, types - mod = new.module('pack.mod') + mod = types.ModuleType('pack.mod') sys.modules['pack.mod'] = mod - pack = new.module('pack') + pack = types.ModuleType('pack') pack.mod = mod sys.modules['pack'] = pack From noreply at buildbot.pypy.org Fri Feb 24 15:12:04 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Fri, 24 Feb 2012 15:12:04 +0100 (CET) Subject: [pypy-commit] pypy py3k: this test already fails, add a smaller failing case Message-ID: <20120224141204.B0E1A82366@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52859:07761ed93f55 Date: 2012-02-24 13:37 +0100 http://bitbucket.org/pypy/pypy/changeset/07761ed93f55/ Log: this test already fails, add a smaller failing case diff --git 
a/pypy/module/__builtin__/test/test_builtin.py b/pypy/module/__builtin__/test/test_builtin.py --- a/pypy/module/__builtin__/test/test_builtin.py +++ b/pypy/module/__builtin__/test/test_builtin.py @@ -68,6 +68,9 @@ # Issue #9804: surrogates should be joined even for printable # wide characters (UCS-2 builds). assert ascii('\U0001d121') == "'\\U0001d121'" + # another buggy case + x = ascii("\U00012fff") + assert x == r"'\U00012fff'" # All together s = "'\0\"\n\r\t abcd\x85é\U00012fff\uD800\U0001D121xxx." assert ascii(s) == \ From noreply at buildbot.pypy.org Fri Feb 24 15:12:05 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Fri, 24 Feb 2012 15:12:05 +0100 (CET) Subject: [pypy-commit] pypy py3k: fix bin() and its test Message-ID: <20120224141205.E88E882366@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52860:7ca5d4566fbf Date: 2012-02-24 14:59 +0100 http://bitbucket.org/pypy/pypy/changeset/7ca5d4566fbf/ Log: fix bin() and its test diff --git a/pypy/module/__builtin__/app_operation.py b/pypy/module/__builtin__/app_operation.py --- a/pypy/module/__builtin__/app_operation.py +++ b/pypy/module/__builtin__/app_operation.py @@ -1,4 +1,4 @@ def bin(x): - if not isinstance(x, (int, long)): - raise TypeError("must be int or long") + if not isinstance(x, int): + raise TypeError("%s object cannot be interpreted as an integer" % type(x)) return x.__format__("#b") diff --git a/pypy/module/__builtin__/test/test_builtin.py b/pypy/module/__builtin__/test/test_builtin.py --- a/pypy/module/__builtin__/test/test_builtin.py +++ b/pypy/module/__builtin__/test/test_builtin.py @@ -79,8 +79,8 @@ def test_bin(self): assert bin(0) == "0b0" assert bin(-1) == "-0b1" - assert bin(2L) == "0b10" - assert bin(-2L) == "-0b10" + assert bin(2) == "0b10" + assert bin(-2) == "-0b10" raises(TypeError, bin, 0.) def test_chr(self): From noreply at buildbot.pypy.org Fri Feb 24 15:12:07 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Fri, 24 Feb 2012 15:12:07 +0100 (CET) Subject: [pypy-commit] pypy py3k: fix syntax Message-ID: <20120224141207.2D54D82366@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52861:4b826a2943d6 Date: 2012-02-24 15:00 +0100 http://bitbucket.org/pypy/pypy/changeset/4b826a2943d6/ Log: fix syntax diff --git a/pypy/module/__builtin__/test/test_builtin.py b/pypy/module/__builtin__/test/test_builtin.py --- a/pypy/module/__builtin__/test/test_builtin.py +++ b/pypy/module/__builtin__/test/test_builtin.py @@ -412,8 +412,8 @@ assert cmp(9,9) == 0 assert cmp(0,9) < 0 assert cmp(9,0) > 0 + assert cmp(b"abc", 12) != 0 assert cmp("abc", 12) != 0 - assert cmp(u"abc", 12) != 0 def test_cmp_more(self): class C(object): From noreply at buildbot.pypy.org Fri Feb 24 15:12:08 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Fri, 24 Feb 2012 15:12:08 +0100 (CET) Subject: [pypy-commit] pypy py3k: completely remove support for coerce() and __coerce__. I hope I didn't break anything :-) Message-ID: <20120224141208.7A86C82366@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52862:67d5616698f2 Date: 2012-02-24 15:11 +0100 http://bitbucket.org/pypy/pypy/changeset/67d5616698f2/ Log: completely remove support for coerce() and __coerce__. 
I hope I didn't break anything :-) diff --git a/pypy/interpreter/baseobjspace.py b/pypy/interpreter/baseobjspace.py --- a/pypy/interpreter/baseobjspace.py +++ b/pypy/interpreter/baseobjspace.py @@ -1569,7 +1569,6 @@ ('gt', '>', 2, ['__gt__', '__lt__']), ('ge', '>=', 2, ['__ge__', '__le__']), ('cmp', 'cmp', 2, ['__cmp__']), # rich cmps preferred - ('coerce', 'coerce', 2, ['__coerce__', '__coerce__']), ('contains', 'contains', 2, ['__contains__']), ('iter', 'iter', 1, ['__iter__']), ('next', 'next', 1, ['__next__']), diff --git a/pypy/module/__builtin__/__init__.py b/pypy/module/__builtin__/__init__.py --- a/pypy/module/__builtin__/__init__.py +++ b/pypy/module/__builtin__/__init__.py @@ -58,7 +58,6 @@ 'hex' : 'operation.hex', 'round' : 'operation.round', 'cmp' : 'operation.cmp', - 'coerce' : 'operation.coerce', 'divmod' : 'operation.divmod', 'format' : 'operation.format', 'issubclass' : 'abstractinst.app_issubclass', diff --git a/pypy/module/__builtin__/operation.py b/pypy/module/__builtin__/operation.py --- a/pypy/module/__builtin__/operation.py +++ b/pypy/module/__builtin__/operation.py @@ -107,14 +107,6 @@ """return 0 when x == y, -1 when x < y and 1 when x > y """ return space.cmp(w_x, w_y) -def coerce(space, w_x, w_y): - """coerce(x, y) -> (x1, y1) - -Return a tuple consisting of the two numeric arguments converted to -a common type, using the same rules as used by arithmetic operations. -If coercion is not possible, raise TypeError.""" - return space.coerce(w_x, w_y) - def divmod(space, w_x, w_y): """Return the tuple ((x-x%y)/y, x%y). Invariant: div*y + mod == x.""" return space.divmod(w_x, w_y) diff --git a/pypy/module/__builtin__/test/test_builtin.py b/pypy/module/__builtin__/test/test_builtin.py --- a/pypy/module/__builtin__/test/test_builtin.py +++ b/pypy/module/__builtin__/test/test_builtin.py @@ -440,17 +440,6 @@ raises(RuntimeError, cmp, a, c) # okay, now break the cycles a.pop(); b.pop(); c.pop() - - def test_coerce(self): - assert coerce(1, 2) == (1, 2) - assert coerce(1L, 2L) == (1L, 2L) - assert coerce(1, 2L) == (1L, 2L) - assert coerce(1L, 2) == (1L, 2L) - assert coerce(1, 2.0) == (1.0, 2.0) - assert coerce(1.0, 2L) == (1.0, 2.0) - assert coerce(1L, 2.0) == (1.0, 2.0) - raises(TypeError,coerce, 1 , 'a') - raises(TypeError,coerce, u'a' , 'a') def test_return_None(self): class X(object): pass diff --git a/pypy/module/cpyext/slotdefs.py b/pypy/module/cpyext/slotdefs.py --- a/pypy/module/cpyext/slotdefs.py +++ b/pypy/module/cpyext/slotdefs.py @@ -513,8 +513,6 @@ RBINSLOT("__rxor__", nb_xor, slot_nb_xor, "^"), BINSLOT("__or__", nb_or, slot_nb_or, "|"), RBINSLOT("__ror__", nb_or, slot_nb_or, "|"), - NBSLOT("__coerce__", nb_coerce, slot_nb_coerce, wrap_coercefunc, - "x.__coerce__(y) <==> coerce(x, y)"), UNSLOT("__int__", nb_int, slot_nb_int, wrap_unaryfunc, "int(x)"), UNSLOT("__long__", nb_long, slot_nb_long, wrap_unaryfunc, diff --git a/pypy/module/cpyext/stubs.py b/pypy/module/cpyext/stubs.py --- a/pypy/module/cpyext/stubs.py +++ b/pypy/module/cpyext/stubs.py @@ -1571,24 +1571,6 @@ """ raise NotImplementedError - at cpython_api([PyObjectP, PyObjectP], rffi.INT_real, error=-1) -def PyNumber_Coerce(space, p1, p2): - """This function takes the addresses of two variables of type PyObject*. If - the objects pointed to by *p1 and *p2 have the same type, increment their - reference count and return 0 (success). If the objects can be converted to a - common numeric type, replace *p1 and *p2 by their converted value (with - 'new' reference counts), and return 0. 
If no conversion is possible, or if - some other error occurs, return -1 (failure) and don't increment the - reference counts. The call PyNumber_Coerce(&o1, &o2) is equivalent to the - Python statement o1, o2 = coerce(o1, o2).""" - raise NotImplementedError - - at cpython_api([PyObjectP, PyObjectP], rffi.INT_real, error=-1) -def PyNumber_CoerceEx(space, p1, p2): - """This function is similar to PyNumber_Coerce(), except that it returns - 1 when the conversion is not possible and when no error is raised. - Reference counts are still not increased in this case.""" - raise NotImplementedError @cpython_api([PyObject, rffi.INT_real], PyObject) def PyNumber_ToBase(space, n, base): diff --git a/pypy/module/cpyext/typeobjectdefs.py b/pypy/module/cpyext/typeobjectdefs.py --- a/pypy/module/cpyext/typeobjectdefs.py +++ b/pypy/module/cpyext/typeobjectdefs.py @@ -33,7 +33,6 @@ ternaryfunc = P(FT([PyO, PyO, PyO], PyO)) inquiry = P(FT([PyO], rffi.INT_real)) lenfunc = P(FT([PyO], Py_ssize_t)) -coercion = P(FT([PyOPtr, PyOPtr], rffi.INT_real)) intargfunc = P(FT([PyO, rffi.INT_real], PyO)) intintargfunc = P(FT([PyO, rffi.INT_real, rffi.INT], PyO)) ssizeargfunc = P(FT([PyO, Py_ssize_t], PyO)) @@ -89,7 +88,6 @@ ("nb_and", binaryfunc), ("nb_xor", binaryfunc), ("nb_or", binaryfunc), - ("nb_coerce", coercion), ("nb_int", unaryfunc), ("nb_long", unaryfunc), ("nb_float", unaryfunc), diff --git a/pypy/objspace/descroperation.py b/pypy/objspace/descroperation.py --- a/pypy/objspace/descroperation.py +++ b/pypy/objspace/descroperation.py @@ -459,36 +459,6 @@ return space.wrap(-1) return space.wrap(1) - def coerce(space, w_obj1, w_obj2): - w_typ1 = space.type(w_obj1) - w_typ2 = space.type(w_obj2) - w_left_src, w_left_impl = space.lookup_in_type_where(w_typ1, '__coerce__') - if space.is_w(w_typ1, w_typ2): - w_right_impl = None - else: - w_right_src, w_right_impl = space.lookup_in_type_where(w_typ2, '__coerce__') - if (w_left_src is not w_right_src - and space.is_true(space.issubtype(w_typ2, w_typ1))): - w_obj1, w_obj2 = w_obj2, w_obj1 - w_left_impl, w_right_impl = w_right_impl, w_left_impl - - w_res = _invoke_binop(space, w_left_impl, w_obj1, w_obj2) - if w_res is None or space.is_w(w_res, space.w_None): - w_res = _invoke_binop(space, w_right_impl, w_obj2, w_obj1) - if w_res is None or space.is_w(w_res, space.w_None): - raise OperationError(space.w_TypeError, - space.wrap("coercion failed")) - if (not space.is_true(space.isinstance(w_res, space.w_tuple)) or - space.len_w(w_res) != 2): - raise OperationError(space.w_TypeError, - space.wrap("coercion should return None or 2-tuple")) - w_res = space.newtuple([space.getitem(w_res, space.wrap(1)), space.getitem(w_res, space.wrap(0))]) - elif (not space.is_true(space.isinstance(w_res, space.w_tuple)) or - space.len_w(w_res) != 2): - raise OperationError(space.w_TypeError, - space.wrap("coercion should return None or 2-tuple")) - return w_res - def issubtype(space, w_sub, w_type): return space._type_issubtype(w_sub, w_type) diff --git a/pypy/objspace/flow/operation.py b/pypy/objspace/flow/operation.py --- a/pypy/objspace/flow/operation.py +++ b/pypy/objspace/flow/operation.py @@ -217,7 +217,6 @@ ('inplace_or', inplace_or), ('inplace_xor', inplace_xor), ('cmp', cmp), - ('coerce', coerce), ('iter', iter), ('next', next), ('get', get), @@ -302,10 +301,9 @@ for _name in 'getattr', 'delattr': _add_exceptions(_name, AttributeError) -for _name in 'iter', 'coerce': - _add_exceptions(_name, TypeError) del _name +_add_exceptions('iter', TypeError) _add_exceptions("""div mod divmod 
truediv floordiv pow inplace_div inplace_mod inplace_divmod inplace_truediv inplace_floordiv inplace_pow""", ZeroDivisionError) diff --git a/pypy/objspace/std/builtinshortcut.py b/pypy/objspace/std/builtinshortcut.py --- a/pypy/objspace/std/builtinshortcut.py +++ b/pypy/objspace/std/builtinshortcut.py @@ -37,7 +37,7 @@ 'delitem', 'trunc', # rare stuff? 'abs', 'hex', 'oct', # rare stuff? 'pos', 'divmod', 'cmp', # rare stuff? - 'float', 'long', 'coerce', # rare stuff? + 'float', 'long', # rare stuff? 'isinstance', 'issubtype', ] diff --git a/pypy/objspace/std/complexobject.py b/pypy/objspace/std/complexobject.py --- a/pypy/objspace/std/complexobject.py +++ b/pypy/objspace/std/complexobject.py @@ -247,9 +247,6 @@ return space.newbool((w_complex.realval != 0.0) or (w_complex.imagval != 0.0)) -def coerce__Complex_Complex(space, w_complex1, w_complex2): - return space.newtuple([w_complex1, w_complex2]) - def float__Complex(space, w_complex): raise OperationError(space.w_TypeError, space.wrap("can't convert complex to float; use abs(z)")) diff --git a/pypy/objspace/std/floatobject.py b/pypy/objspace/std/floatobject.py --- a/pypy/objspace/std/floatobject.py +++ b/pypy/objspace/std/floatobject.py @@ -339,10 +339,6 @@ return x -# coerce -def coerce__Float_Float(space, w_float1, w_float2): - return space.newtuple([w_float1, w_float2]) - def add__Float_Float(space, w_float1, w_float2): x = w_float1.floatval diff --git a/pypy/objspace/std/intobject.py b/pypy/objspace/std/intobject.py --- a/pypy/objspace/std/intobject.py +++ b/pypy/objspace/std/intobject.py @@ -102,10 +102,6 @@ # Make sure this is consistent with the hash of floats and longs. return get_integer(space, w_int1) -# coerce -def coerce__Int_Int(space, w_int1, w_int2): - return space.newtuple([w_int1, w_int2]) - def add__Int_Int(space, w_int1, w_int2): x = w_int1.intval diff --git a/pypy/objspace/std/longobject.py b/pypy/objspace/std/longobject.py --- a/pypy/objspace/std/longobject.py +++ b/pypy/objspace/std/longobject.py @@ -170,11 +170,6 @@ def hash__Long(space, w_value): return space.wrap(w_value.num.hash()) -# coerce -def coerce__Long_Long(space, w_long1, w_long2): - return space.newtuple([w_long1, w_long2]) - - def add__Long_Long(space, w_long1, w_long2): return W_LongObject(w_long1.num.add(w_long2.num)) diff --git a/pypy/objspace/std/test/test_complexobject.py b/pypy/objspace/std/test/test_complexobject.py --- a/pypy/objspace/std/test/test_complexobject.py +++ b/pypy/objspace/std/test/test_complexobject.py @@ -165,9 +165,6 @@ def test_floordiv(self): raises(TypeError, "3+0j // 0+0j") - def test_coerce(self): - raises(OverflowError, complex.__coerce__, 1+1j, 1L<<10000) - def test_richcompare(self): assert complex.__lt__(1+1j, None) is NotImplemented assert complex.__eq__(1+1j, 2+2j) is False diff --git a/pypy/objspace/test/test_descroperation.py b/pypy/objspace/test/test_descroperation.py --- a/pypy/objspace/test/test_descroperation.py +++ b/pypy/objspace/test/test_descroperation.py @@ -249,11 +249,8 @@ ('__and__', 'x & y', 'x &= y'), ('__or__', 'x | y', 'x |= y'), ('__xor__', 'x ^ y', 'x ^= y'), - ('__coerce__', 'coerce(x, y)', None)]: - if name == '__coerce__': - rname = name - else: - rname = '__r' + name[2:] + ]: + rname = '__r' + name[2:] A = metaclass('A', (), {name: specialmethod}) B = metaclass('B', (), {rname: specialmethod}) a = A() From noreply at buildbot.pypy.org Fri Feb 24 16:22:20 2012 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Fri, 24 Feb 2012 16:22:20 +0100 (CET) Subject: [pypy-commit] pypy 
numpy-record-type-pure-python: Accidentally combined a merge from numpy-record-dtypes, as well as make nonnative types work again. Message-ID: <20120224152220.0F9EE82366@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: numpy-record-type-pure-python Changeset: r52863:cd3f2d5351ae Date: 2012-02-24 10:22 -0500 http://bitbucket.org/pypy/pypy/changeset/cd3f2d5351ae/ Log: Accidentally combined a merge from numpy-record-dtypes, as well as make nonnative types work again. diff --git a/ctypes_configure/cbuild.py b/ctypes_configure/cbuild.py --- a/ctypes_configure/cbuild.py +++ b/ctypes_configure/cbuild.py @@ -206,8 +206,9 @@ cfiles += eci.separate_module_files include_dirs = list(eci.include_dirs) library_dirs = list(eci.library_dirs) - if sys.platform == 'darwin': # support Fink & Darwinports - for s in ('/sw/', '/opt/local/'): + if (sys.platform == 'darwin' or # support Fink & Darwinports + sys.platform.startswith('freebsd')): + for s in ('/sw/', '/opt/local/', '/usr/local/'): if s + 'include' not in include_dirs and \ os.path.exists(s + 'include'): include_dirs.append(s + 'include') @@ -380,9 +381,9 @@ self.link_extra += ['-pthread'] if sys.platform == 'win32': self.link_extra += ['/DEBUG'] # generate .pdb file - if sys.platform == 'darwin': - # support Fink & Darwinports - for s in ('/sw/', '/opt/local/'): + if (sys.platform == 'darwin' or # support Fink & Darwinports + sys.platform.startswith('freebsd')): + for s in ('/sw/', '/opt/local/', '/usr/local/'): if s + 'include' not in self.include_dirs and \ os.path.exists(s + 'include'): self.include_dirs.append(s + 'include') @@ -395,7 +396,6 @@ self.outputfilename = py.path.local(cfilenames[0]).new(ext=ext) else: self.outputfilename = py.path.local(outputfilename) - self.eci = eci def build(self, noerr=False): basename = self.outputfilename.new(ext='') @@ -436,7 +436,7 @@ old = cfile.dirpath().chdir() try: res = compiler.compile([cfile.basename], - include_dirs=self.eci.include_dirs, + include_dirs=self.include_dirs, extra_preargs=self.compile_extra) assert len(res) == 1 cobjfile = py.path.local(res[0]) @@ -445,9 +445,9 @@ finally: old.chdir() compiler.link_executable(objects, str(self.outputfilename), - libraries=self.eci.libraries, + libraries=self.libraries, extra_preargs=self.link_extra, - library_dirs=self.eci.library_dirs) + library_dirs=self.library_dirs) def build_executable(*args, **kwds): noerr = kwds.pop('noerr', False) diff --git a/lib-python/modified-2.7/UserDict.py b/lib-python/modified-2.7/UserDict.py --- a/lib-python/modified-2.7/UserDict.py +++ b/lib-python/modified-2.7/UserDict.py @@ -85,8 +85,12 @@ def __iter__(self): return iter(self.data) -import _abcoll -_abcoll.MutableMapping.register(IterableUserDict) +try: + import _abcoll +except ImportError: + pass # e.g. 
no '_weakref' module on this pypy +else: + _abcoll.MutableMapping.register(IterableUserDict) class DictMixin: diff --git a/lib_pypy/_subprocess.py b/lib_pypy/_subprocess.py --- a/lib_pypy/_subprocess.py +++ b/lib_pypy/_subprocess.py @@ -87,7 +87,7 @@ # Now the _subprocess module implementation -from ctypes import c_int as _c_int, byref as _byref +from ctypes import c_int as _c_int, byref as _byref, WinError as _WinError class _handle: def __init__(self, handle): @@ -116,7 +116,7 @@ res = _CreatePipe(_byref(read), _byref(write), None, size) if not res: - raise WindowsError("Error") + raise _WinError() return _handle(read.value), _handle(write.value) @@ -132,7 +132,7 @@ access, inherit, options) if not res: - raise WindowsError("Error") + raise _WinError() return _handle(target.value) DUPLICATE_SAME_ACCESS = 2 @@ -165,7 +165,7 @@ start_dir, _byref(si), _byref(pi)) if not res: - raise WindowsError("Error") + raise _WinError() return _handle(pi.hProcess), _handle(pi.hThread), pi.dwProcessID, pi.dwThreadID STARTF_USESHOWWINDOW = 0x001 @@ -178,7 +178,7 @@ res = _WaitForSingleObject(int(handle), milliseconds) if res < 0: - raise WindowsError("Error") + raise _WinError() return res INFINITE = 0xffffffff @@ -190,7 +190,7 @@ res = _GetExitCodeProcess(int(handle), _byref(code)) if not res: - raise WindowsError("Error") + raise _WinError() return code.value @@ -198,7 +198,7 @@ res = _TerminateProcess(int(handle), exitcode) if not res: - raise WindowsError("Error") + raise _WinError() def GetStdHandle(stdhandle): res = _GetStdHandle(stdhandle) diff --git a/lib_pypy/ctypes_config_cache/pyexpat.ctc.py b/lib_pypy/ctypes_config_cache/pyexpat.ctc.py deleted file mode 100644 --- a/lib_pypy/ctypes_config_cache/pyexpat.ctc.py +++ /dev/null @@ -1,45 +0,0 @@ -""" -'ctypes_configure' source for pyexpat.py. -Run this to rebuild _pyexpat_cache.py. 
-""" - -import ctypes -from ctypes import c_char_p, c_int, c_void_p, c_char -from ctypes_configure import configure -import dumpcache - - -class CConfigure: - _compilation_info_ = configure.ExternalCompilationInfo( - includes = ['expat.h'], - libraries = ['expat'], - pre_include_lines = [ - '#define XML_COMBINED_VERSION (10000*XML_MAJOR_VERSION+100*XML_MINOR_VERSION+XML_MICRO_VERSION)'], - ) - - XML_Char = configure.SimpleType('XML_Char', c_char) - XML_COMBINED_VERSION = configure.ConstantInteger('XML_COMBINED_VERSION') - for name in ['XML_PARAM_ENTITY_PARSING_NEVER', - 'XML_PARAM_ENTITY_PARSING_UNLESS_STANDALONE', - 'XML_PARAM_ENTITY_PARSING_ALWAYS']: - locals()[name] = configure.ConstantInteger(name) - - XML_Encoding = configure.Struct('XML_Encoding',[ - ('data', c_void_p), - ('convert', c_void_p), - ('release', c_void_p), - ('map', c_int * 256)]) - XML_Content = configure.Struct('XML_Content',[ - ('numchildren', c_int), - ('children', c_void_p), - ('name', c_char_p), - ('type', c_int), - ('quant', c_int), - ]) - # this is insanely stupid - XML_FALSE = configure.ConstantInteger('XML_FALSE') - XML_TRUE = configure.ConstantInteger('XML_TRUE') - -config = configure.configure(CConfigure) - -dumpcache.dumpcache2('pyexpat', config) diff --git a/lib_pypy/ctypes_config_cache/test/test_cache.py b/lib_pypy/ctypes_config_cache/test/test_cache.py --- a/lib_pypy/ctypes_config_cache/test/test_cache.py +++ b/lib_pypy/ctypes_config_cache/test/test_cache.py @@ -39,10 +39,6 @@ d = run('resource.ctc.py', '_resource_cache.py') assert 'RLIM_NLIMITS' in d -def test_pyexpat(): - d = run('pyexpat.ctc.py', '_pyexpat_cache.py') - assert 'XML_COMBINED_VERSION' in d - def test_locale(): d = run('locale.ctc.py', '_locale_cache.py') assert 'LC_ALL' in d diff --git a/lib_pypy/datetime.py b/lib_pypy/datetime.py --- a/lib_pypy/datetime.py +++ b/lib_pypy/datetime.py @@ -271,8 +271,9 @@ raise ValueError("%s()=%d, must be in -1439..1439" % (name, offset)) def _check_date_fields(year, month, day): - if not isinstance(year, (int, long)): - raise TypeError('int expected') + for value in [year, day]: + if not isinstance(value, (int, long)): + raise TypeError('int expected') if not MINYEAR <= year <= MAXYEAR: raise ValueError('year must be in %d..%d' % (MINYEAR, MAXYEAR), year) if not 1 <= month <= 12: @@ -282,8 +283,9 @@ raise ValueError('day must be in 1..%d' % dim, day) def _check_time_fields(hour, minute, second, microsecond): - if not isinstance(hour, (int, long)): - raise TypeError('int expected') + for value in [hour, minute, second, microsecond]: + if not isinstance(value, (int, long)): + raise TypeError('int expected') if not 0 <= hour <= 23: raise ValueError('hour must be in 0..23', hour) if not 0 <= minute <= 59: @@ -1520,7 +1522,7 @@ def utcfromtimestamp(cls, t): "Construct a UTC datetime from a POSIX timestamp (like time.time())." t, frac = divmod(t, 1.0) - us = round(frac * 1e6) + us = int(round(frac * 1e6)) # If timestamp is less than one microsecond smaller than a # full second, us can be rounded up to 1000000. In this case, diff --git a/lib_pypy/numpy.py b/lib_pypy/numpy.py new file mode 100644 --- /dev/null +++ b/lib_pypy/numpy.py @@ -0,0 +1,5 @@ +raise ImportError( + "The 'numpy' module of PyPy is in-development and not complete. 
" + "To try it out anyway, you can either import from 'numpypy', " + "or just write 'import numpypy' first in your program and then " + "import from 'numpy' as usual.") diff --git a/lib_pypy/numpypy/__init__.py b/lib_pypy/numpypy/__init__.py --- a/lib_pypy/numpypy/__init__.py +++ b/lib_pypy/numpypy/__init__.py @@ -1,2 +1,5 @@ from _numpypy import * from .core import * + +import sys +sys.modules.setdefault('numpy', sys.modules['numpypy']) diff --git a/lib_pypy/numpypy/core/numeric.py b/lib_pypy/numpypy/core/numeric.py --- a/lib_pypy/numpypy/core/numeric.py +++ b/lib_pypy/numpypy/core/numeric.py @@ -1,6 +1,7 @@ -from _numpypy import array, ndarray, int_, float_ #, complex_# , longlong +from _numpypy import array, ndarray, int_, float_, bool_ #, complex_# , longlong from _numpypy import concatenate +import math import sys import _numpypy as multiarray # ARGH from numpypy.core.arrayprint import array2string @@ -309,3 +310,13 @@ set_string_function(array_repr, 1) little_endian = (sys.byteorder == 'little') + +Inf = inf = infty = Infinity = PINF = float('inf') +NINF = float('-inf') +PZERO = 0.0 +NZERO = -0.0 +nan = NaN = NAN = float('nan') +False_ = bool_(False) +True_ = bool_(True) +e = math.e +pi = math.pi \ No newline at end of file diff --git a/lib_pypy/pyexpat.py b/lib_pypy/pyexpat.py deleted file mode 100644 --- a/lib_pypy/pyexpat.py +++ /dev/null @@ -1,448 +0,0 @@ - -import ctypes -import ctypes.util -from ctypes import c_char_p, c_int, c_void_p, POINTER, c_char, c_wchar_p -import sys - -# load the platform-specific cache made by running pyexpat.ctc.py -from ctypes_config_cache._pyexpat_cache import * - -try: from __pypy__ import builtinify -except ImportError: builtinify = lambda f: f - - -lib = ctypes.CDLL(ctypes.util.find_library('expat')) - - -XML_Content.children = POINTER(XML_Content) -XML_Parser = ctypes.c_void_p # an opaque pointer -assert XML_Char is ctypes.c_char # this assumption is everywhere in -# cpython's expat, let's explode - -def declare_external(name, args, res): - func = getattr(lib, name) - func.args = args - func.restype = res - globals()[name] = func - -declare_external('XML_ParserCreate', [c_char_p], XML_Parser) -declare_external('XML_ParserCreateNS', [c_char_p, c_char], XML_Parser) -declare_external('XML_Parse', [XML_Parser, c_char_p, c_int, c_int], c_int) -currents = ['CurrentLineNumber', 'CurrentColumnNumber', - 'CurrentByteIndex'] -for name in currents: - func = getattr(lib, 'XML_Get' + name) - func.args = [XML_Parser] - func.restype = c_int - -declare_external('XML_SetReturnNSTriplet', [XML_Parser, c_int], None) -declare_external('XML_GetSpecifiedAttributeCount', [XML_Parser], c_int) -declare_external('XML_SetParamEntityParsing', [XML_Parser, c_int], None) -declare_external('XML_GetErrorCode', [XML_Parser], c_int) -declare_external('XML_StopParser', [XML_Parser, c_int], None) -declare_external('XML_ErrorString', [c_int], c_char_p) -declare_external('XML_SetBase', [XML_Parser, c_char_p], None) -if XML_COMBINED_VERSION >= 19505: - declare_external('XML_UseForeignDTD', [XML_Parser, c_int], None) - -declare_external('XML_SetUnknownEncodingHandler', [XML_Parser, c_void_p, - c_void_p], None) -declare_external('XML_FreeContentModel', [XML_Parser, POINTER(XML_Content)], - None) -declare_external('XML_ExternalEntityParserCreate', [XML_Parser,c_char_p, - c_char_p], - XML_Parser) - -handler_names = [ - 'StartElement', - 'EndElement', - 'ProcessingInstruction', - 'CharacterData', - 'UnparsedEntityDecl', - 'NotationDecl', - 'StartNamespaceDecl', - 'EndNamespaceDecl', - 
'Comment', - 'StartCdataSection', - 'EndCdataSection', - 'Default', - 'DefaultHandlerExpand', - 'NotStandalone', - 'ExternalEntityRef', - 'StartDoctypeDecl', - 'EndDoctypeDecl', - 'EntityDecl', - 'XmlDecl', - 'ElementDecl', - 'AttlistDecl', - ] -if XML_COMBINED_VERSION >= 19504: - handler_names.append('SkippedEntity') -setters = {} - -for name in handler_names: - if name == 'DefaultHandlerExpand': - newname = 'XML_SetDefaultHandlerExpand' - else: - name += 'Handler' - newname = 'XML_Set' + name - cfunc = getattr(lib, newname) - cfunc.args = [XML_Parser, ctypes.c_void_p] - cfunc.result = ctypes.c_int - setters[name] = cfunc - -class ExpatError(Exception): - def __str__(self): - return self.s - -error = ExpatError - -class XMLParserType(object): - specified_attributes = 0 - ordered_attributes = 0 - returns_unicode = 1 - encoding = 'utf-8' - def __init__(self, encoding, namespace_separator, _hook_external_entity=False): - self.returns_unicode = 1 - if encoding: - self.encoding = encoding - if not _hook_external_entity: - if namespace_separator is None: - self.itself = XML_ParserCreate(encoding) - else: - self.itself = XML_ParserCreateNS(encoding, ord(namespace_separator)) - if not self.itself: - raise RuntimeError("Creating parser failed") - self._set_unknown_encoding_handler() - self.storage = {} - self.buffer = None - self.buffer_size = 8192 - self.character_data_handler = None - self.intern = {} - self.__exc_info = None - - def _flush_character_buffer(self): - if not self.buffer: - return - res = self._call_character_handler(''.join(self.buffer)) - self.buffer = [] - return res - - def _call_character_handler(self, buf): - if self.character_data_handler: - self.character_data_handler(buf) - - def _set_unknown_encoding_handler(self): - def UnknownEncoding(encodingData, name, info_p): - info = info_p.contents - s = ''.join([chr(i) for i in range(256)]) - u = s.decode(self.encoding, 'replace') - for i in range(len(u)): - if u[i] == u'\xfffd': - info.map[i] = -1 - else: - info.map[i] = ord(u[i]) - info.data = None - info.convert = None - info.release = None - return 1 - - CB = ctypes.CFUNCTYPE(c_int, c_void_p, c_char_p, POINTER(XML_Encoding)) - cb = CB(UnknownEncoding) - self._unknown_encoding_handler = (cb, UnknownEncoding) - XML_SetUnknownEncodingHandler(self.itself, cb, None) - - def _set_error(self, code): - e = ExpatError() - e.code = code - lineno = lib.XML_GetCurrentLineNumber(self.itself) - colno = lib.XML_GetCurrentColumnNumber(self.itself) - e.offset = colno - e.lineno = lineno - err = XML_ErrorString(code)[:200] - e.s = "%s: line: %d, column: %d" % (err, lineno, colno) - e.message = e.s - self._error = e - - def Parse(self, data, is_final=0): - res = XML_Parse(self.itself, data, len(data), is_final) - if res == 0: - self._set_error(XML_GetErrorCode(self.itself)) - if self.__exc_info: - exc_info = self.__exc_info - self.__exc_info = None - raise exc_info[0], exc_info[1], exc_info[2] - else: - raise self._error - self._flush_character_buffer() - return res - - def _sethandler(self, name, real_cb): - setter = setters[name] - try: - cb = self.storage[(name, real_cb)] - except KeyError: - cb = getattr(self, 'get_cb_for_%s' % name)(real_cb) - self.storage[(name, real_cb)] = cb - except TypeError: - # weellll... 
- cb = getattr(self, 'get_cb_for_%s' % name)(real_cb) - setter(self.itself, cb) - - def _wrap_cb(self, cb): - def f(*args): - try: - return cb(*args) - except: - self.__exc_info = sys.exc_info() - XML_StopParser(self.itself, XML_FALSE) - return f - - def get_cb_for_StartElementHandler(self, real_cb): - def StartElement(unused, name, attrs): - # unpack name and attrs - conv = self.conv - self._flush_character_buffer() - if self.specified_attributes: - max = XML_GetSpecifiedAttributeCount(self.itself) - else: - max = 0 - while attrs[max]: - max += 2 # copied - if self.ordered_attributes: - res = [attrs[i] for i in range(max)] - else: - res = {} - for i in range(0, max, 2): - res[conv(attrs[i])] = conv(attrs[i + 1]) - real_cb(conv(name), res) - StartElement = self._wrap_cb(StartElement) - CB = ctypes.CFUNCTYPE(None, c_void_p, c_char_p, POINTER(c_char_p)) - return CB(StartElement) - - def get_cb_for_ExternalEntityRefHandler(self, real_cb): - def ExternalEntity(unused, context, base, sysId, pubId): - self._flush_character_buffer() - conv = self.conv - res = real_cb(conv(context), conv(base), conv(sysId), - conv(pubId)) - if res is None: - return 0 - return res - ExternalEntity = self._wrap_cb(ExternalEntity) - CB = ctypes.CFUNCTYPE(c_int, c_void_p, *([c_char_p] * 4)) - return CB(ExternalEntity) - - def get_cb_for_CharacterDataHandler(self, real_cb): - def CharacterData(unused, s, lgt): - if self.buffer is None: - self._call_character_handler(self.conv(s[:lgt])) - else: - if len(self.buffer) + lgt > self.buffer_size: - self._flush_character_buffer() - if self.character_data_handler is None: - return - if lgt >= self.buffer_size: - self._call_character_handler(s[:lgt]) - self.buffer = [] - else: - self.buffer.append(s[:lgt]) - CharacterData = self._wrap_cb(CharacterData) - CB = ctypes.CFUNCTYPE(None, c_void_p, POINTER(c_char), c_int) - return CB(CharacterData) - - def get_cb_for_NotStandaloneHandler(self, real_cb): - def NotStandaloneHandler(unused): - return real_cb() - NotStandaloneHandler = self._wrap_cb(NotStandaloneHandler) - CB = ctypes.CFUNCTYPE(c_int, c_void_p) - return CB(NotStandaloneHandler) - - def get_cb_for_EntityDeclHandler(self, real_cb): - def EntityDecl(unused, ename, is_param, value, value_len, base, - system_id, pub_id, not_name): - self._flush_character_buffer() - if not value: - value = None - else: - value = value[:value_len] - args = [ename, is_param, value, base, system_id, - pub_id, not_name] - args = [self.conv(arg) for arg in args] - real_cb(*args) - EntityDecl = self._wrap_cb(EntityDecl) - CB = ctypes.CFUNCTYPE(None, c_void_p, c_char_p, c_int, c_char_p, - c_int, c_char_p, c_char_p, c_char_p, c_char_p) - return CB(EntityDecl) - - def _conv_content_model(self, model): - children = tuple([self._conv_content_model(model.children[i]) - for i in range(model.numchildren)]) - return (model.type, model.quant, self.conv(model.name), - children) - - def get_cb_for_ElementDeclHandler(self, real_cb): - def ElementDecl(unused, name, model): - self._flush_character_buffer() - modelobj = self._conv_content_model(model[0]) - real_cb(name, modelobj) - XML_FreeContentModel(self.itself, model) - - ElementDecl = self._wrap_cb(ElementDecl) - CB = ctypes.CFUNCTYPE(None, c_void_p, c_char_p, POINTER(XML_Content)) - return CB(ElementDecl) - - def _new_callback_for_string_len(name, sign): - def get_callback_for_(self, real_cb): - def func(unused, s, len): - self._flush_character_buffer() - arg = self.conv(s[:len]) - real_cb(arg) - func.func_name = name - func = self._wrap_cb(func) - CB = 
ctypes.CFUNCTYPE(*sign) - return CB(func) - get_callback_for_.func_name = 'get_cb_for_' + name - return get_callback_for_ - - for name in ['DefaultHandlerExpand', - 'DefaultHandler']: - sign = [None, c_void_p, POINTER(c_char), c_int] - name = 'get_cb_for_' + name - locals()[name] = _new_callback_for_string_len(name, sign) - - def _new_callback_for_starargs(name, sign): - def get_callback_for_(self, real_cb): - def func(unused, *args): - self._flush_character_buffer() - args = [self.conv(arg) for arg in args] - real_cb(*args) - func.func_name = name - func = self._wrap_cb(func) - CB = ctypes.CFUNCTYPE(*sign) - return CB(func) - get_callback_for_.func_name = 'get_cb_for_' + name - return get_callback_for_ - - for name, num_or_sign in [ - ('EndElementHandler', 1), - ('ProcessingInstructionHandler', 2), - ('UnparsedEntityDeclHandler', 5), - ('NotationDeclHandler', 4), - ('StartNamespaceDeclHandler', 2), - ('EndNamespaceDeclHandler', 1), - ('CommentHandler', 1), - ('StartCdataSectionHandler', 0), - ('EndCdataSectionHandler', 0), - ('StartDoctypeDeclHandler', [None, c_void_p] + [c_char_p] * 3 + [c_int]), - ('XmlDeclHandler', [None, c_void_p, c_char_p, c_char_p, c_int]), - ('AttlistDeclHandler', [None, c_void_p] + [c_char_p] * 4 + [c_int]), - ('EndDoctypeDeclHandler', 0), - ('SkippedEntityHandler', [None, c_void_p, c_char_p, c_int]), - ]: - if isinstance(num_or_sign, int): - sign = [None, c_void_p] + [c_char_p] * num_or_sign - else: - sign = num_or_sign - name = 'get_cb_for_' + name - locals()[name] = _new_callback_for_starargs(name, sign) - - def conv_unicode(self, s): - if s is None or isinstance(s, int): - return s - return s.decode(self.encoding, "strict") - - def __setattr__(self, name, value): - # forest of ifs... - if name in ['ordered_attributes', - 'returns_unicode', 'specified_attributes']: - if value: - if name == 'returns_unicode': - self.conv = self.conv_unicode - self.__dict__[name] = 1 - else: - if name == 'returns_unicode': - self.conv = lambda s: s - self.__dict__[name] = 0 - elif name == 'buffer_text': - if value: - self.buffer = [] - else: - self._flush_character_buffer() - self.buffer = None - elif name == 'buffer_size': - if not isinstance(value, int): - raise TypeError("Expected int") - if value <= 0: - raise ValueError("Expected positive int") - self.__dict__[name] = value - elif name == 'namespace_prefixes': - XML_SetReturnNSTriplet(self.itself, int(bool(value))) - elif name in setters: - if name == 'CharacterDataHandler': - # XXX we need to flush buffer here - self._flush_character_buffer() - self.character_data_handler = value - #print name - #print value - #print - self._sethandler(name, value) - else: - self.__dict__[name] = value - - def SetParamEntityParsing(self, arg): - XML_SetParamEntityParsing(self.itself, arg) - - if XML_COMBINED_VERSION >= 19505: - def UseForeignDTD(self, arg=True): - if arg: - flag = XML_TRUE - else: - flag = XML_FALSE - XML_UseForeignDTD(self.itself, flag) - - def __getattr__(self, name): - if name == 'buffer_text': - return self.buffer is not None - elif name in currents: - return getattr(lib, 'XML_Get' + name)(self.itself) - elif name == 'ErrorColumnNumber': - return lib.XML_GetCurrentColumnNumber(self.itself) - elif name == 'ErrorLineNumber': - return lib.XML_GetCurrentLineNumber(self.itself) - return self.__dict__[name] - - def ParseFile(self, file): - return self.Parse(file.read(), False) - - def SetBase(self, base): - XML_SetBase(self.itself, base) - - def ExternalEntityParserCreate(self, context, encoding=None): - 
"""ExternalEntityParserCreate(context[, encoding]) - Create a parser for parsing an external entity based on the - information passed to the ExternalEntityRefHandler.""" - new_parser = XMLParserType(encoding, None, True) - new_parser.itself = XML_ExternalEntityParserCreate(self.itself, - context, encoding) - new_parser._set_unknown_encoding_handler() - return new_parser - - at builtinify -def ErrorString(errno): - return XML_ErrorString(errno)[:200] - - at builtinify -def ParserCreate(encoding=None, namespace_separator=None, intern=None): - if (not isinstance(encoding, str) and - not encoding is None): - raise TypeError("ParserCreate() argument 1 must be string or None, not %s" % encoding.__class__.__name__) - if (not isinstance(namespace_separator, str) and - not namespace_separator is None): - raise TypeError("ParserCreate() argument 2 must be string or None, not %s" % namespace_separator.__class__.__name__) - if namespace_separator is not None: - if len(namespace_separator) > 1: - raise ValueError('namespace_separator must be at most one character, omitted, or None') - if len(namespace_separator) == 0: - namespace_separator = None - return XMLParserType(encoding, namespace_separator) diff --git a/lib_pypy/pypy_test/test_pyexpat.py b/lib_pypy/pypy_test/test_pyexpat.py deleted file mode 100644 --- a/lib_pypy/pypy_test/test_pyexpat.py +++ /dev/null @@ -1,665 +0,0 @@ -# XXX TypeErrors on calling handlers, or on bad return values from a -# handler, are obscure and unhelpful. - -from __future__ import absolute_import -import StringIO, sys -import unittest, py - -from lib_pypy.ctypes_config_cache import rebuild -rebuild.rebuild_one('pyexpat.ctc.py') - -from lib_pypy import pyexpat -#from xml.parsers import expat -expat = pyexpat - -from test.test_support import sortdict, run_unittest - - -class TestSetAttribute: - def setup_method(self, meth): - self.parser = expat.ParserCreate(namespace_separator='!') - self.set_get_pairs = [ - [0, 0], - [1, 1], - [2, 1], - [0, 0], - ] - - def test_returns_unicode(self): - for x, y in self.set_get_pairs: - self.parser.returns_unicode = x - assert self.parser.returns_unicode == y - - def test_ordered_attributes(self): - for x, y in self.set_get_pairs: - self.parser.ordered_attributes = x - assert self.parser.ordered_attributes == y - - def test_specified_attributes(self): - for x, y in self.set_get_pairs: - self.parser.specified_attributes = x - assert self.parser.specified_attributes == y - - -data = '''\ - - - - - - - - - -%unparsed_entity; -]> - - - - Contents of subelements - - -&external_entity; -&skipped_entity; - -''' - - -# Produce UTF-8 output -class TestParse: - class Outputter: - def __init__(self): - self.out = [] - - def StartElementHandler(self, name, attrs): - self.out.append('Start element: ' + repr(name) + ' ' + - sortdict(attrs)) - - def EndElementHandler(self, name): - self.out.append('End element: ' + repr(name)) - - def CharacterDataHandler(self, data): - data = data.strip() - if data: - self.out.append('Character data: ' + repr(data)) - - def ProcessingInstructionHandler(self, target, data): - self.out.append('PI: ' + repr(target) + ' ' + repr(data)) - - def StartNamespaceDeclHandler(self, prefix, uri): - self.out.append('NS decl: ' + repr(prefix) + ' ' + repr(uri)) - - def EndNamespaceDeclHandler(self, prefix): - self.out.append('End of NS decl: ' + repr(prefix)) - - def StartCdataSectionHandler(self): - self.out.append('Start of CDATA section') - - def EndCdataSectionHandler(self): - self.out.append('End of CDATA section') - - def 
CommentHandler(self, text): - self.out.append('Comment: ' + repr(text)) - - def NotationDeclHandler(self, *args): - name, base, sysid, pubid = args - self.out.append('Notation declared: %s' %(args,)) - - def UnparsedEntityDeclHandler(self, *args): - entityName, base, systemId, publicId, notationName = args - self.out.append('Unparsed entity decl: %s' %(args,)) - - def NotStandaloneHandler(self): - self.out.append('Not standalone') - return 1 - - def ExternalEntityRefHandler(self, *args): - context, base, sysId, pubId = args - self.out.append('External entity ref: %s' %(args[1:],)) - return 1 - - def StartDoctypeDeclHandler(self, *args): - self.out.append(('Start doctype', args)) - return 1 - - def EndDoctypeDeclHandler(self): - self.out.append("End doctype") - return 1 - - def EntityDeclHandler(self, *args): - self.out.append(('Entity declaration', args)) - return 1 - - def XmlDeclHandler(self, *args): - self.out.append(('XML declaration', args)) - return 1 - - def ElementDeclHandler(self, *args): - self.out.append(('Element declaration', args)) - return 1 - - def AttlistDeclHandler(self, *args): - self.out.append(('Attribute list declaration', args)) - return 1 - - def SkippedEntityHandler(self, *args): - self.out.append(("Skipped entity", args)) - return 1 - - def DefaultHandler(self, userData): - pass - - def DefaultHandlerExpand(self, userData): - pass - - handler_names = [ - 'StartElementHandler', 'EndElementHandler', 'CharacterDataHandler', - 'ProcessingInstructionHandler', 'UnparsedEntityDeclHandler', - 'NotationDeclHandler', 'StartNamespaceDeclHandler', - 'EndNamespaceDeclHandler', 'CommentHandler', - 'StartCdataSectionHandler', 'EndCdataSectionHandler', 'DefaultHandler', - 'DefaultHandlerExpand', 'NotStandaloneHandler', - 'ExternalEntityRefHandler', 'StartDoctypeDeclHandler', - 'EndDoctypeDeclHandler', 'EntityDeclHandler', 'XmlDeclHandler', - 'ElementDeclHandler', 'AttlistDeclHandler', 'SkippedEntityHandler', - ] - - def test_utf8(self): - - out = self.Outputter() - parser = expat.ParserCreate(namespace_separator='!') - for name in self.handler_names: - setattr(parser, name, getattr(out, name)) - parser.returns_unicode = 0 - parser.Parse(data, 1) - - # Verify output - operations = out.out - expected_operations = [ - ('XML declaration', (u'1.0', u'iso-8859-1', 0)), - 'PI: \'xml-stylesheet\' \'href="stylesheet.css"\'', - "Comment: ' comment data '", - "Not standalone", - ("Start doctype", ('quotations', 'quotations.dtd', None, 1)), - ('Element declaration', (u'root', (2, 0, None, ()))), - ('Attribute list declaration', ('root', 'attr1', 'CDATA', None, - 1)), - ('Attribute list declaration', ('root', 'attr2', 'CDATA', None, - 0)), - "Notation declared: ('notation', None, 'notation.jpeg', None)", - ('Entity declaration', ('acirc', 0, '\xc3\xa2', None, None, None, None)), - ('Entity declaration', ('external_entity', 0, None, None, - 'entity.file', None, None)), - "Unparsed entity decl: ('unparsed_entity', None, 'entity.file', None, 'notation')", - "Not standalone", - "End doctype", - "Start element: 'root' {'attr1': 'value1', 'attr2': 'value2\\xe1\\xbd\\x80'}", - "NS decl: 'myns' 'http://www.python.org/namespace'", - "Start element: 'http://www.python.org/namespace!subelement' {}", - "Character data: 'Contents of subelements'", - "End element: 'http://www.python.org/namespace!subelement'", - "End of NS decl: 'myns'", - "Start element: 'sub2' {}", - 'Start of CDATA section', - "Character data: 'contents of CDATA section'", - 'End of CDATA section', - "End element: 'sub2'", - "External 
entity ref: (None, 'entity.file', None)", - ('Skipped entity', ('skipped_entity', 0)), - "End element: 'root'", - ] - for operation, expected_operation in zip(operations, expected_operations): - assert operation == expected_operation - - def test_unicode(self): - # Try the parse again, this time producing Unicode output - out = self.Outputter() - parser = expat.ParserCreate(namespace_separator='!') - parser.returns_unicode = 1 - for name in self.handler_names: - setattr(parser, name, getattr(out, name)) - - parser.Parse(data, 1) - - operations = out.out - expected_operations = [ - ('XML declaration', (u'1.0', u'iso-8859-1', 0)), - 'PI: u\'xml-stylesheet\' u\'href="stylesheet.css"\'', - "Comment: u' comment data '", - "Not standalone", - ("Start doctype", ('quotations', 'quotations.dtd', None, 1)), - ('Element declaration', (u'root', (2, 0, None, ()))), - ('Attribute list declaration', ('root', 'attr1', 'CDATA', None, - 1)), - ('Attribute list declaration', ('root', 'attr2', 'CDATA', None, - 0)), - "Notation declared: (u'notation', None, u'notation.jpeg', None)", - ('Entity declaration', (u'acirc', 0, u'\xe2', None, None, None, - None)), - ('Entity declaration', (u'external_entity', 0, None, None, - u'entity.file', None, None)), - "Unparsed entity decl: (u'unparsed_entity', None, u'entity.file', None, u'notation')", - "Not standalone", - "End doctype", - "Start element: u'root' {u'attr1': u'value1', u'attr2': u'value2\\u1f40'}", - "NS decl: u'myns' u'http://www.python.org/namespace'", - "Start element: u'http://www.python.org/namespace!subelement' {}", - "Character data: u'Contents of subelements'", - "End element: u'http://www.python.org/namespace!subelement'", - "End of NS decl: u'myns'", - "Start element: u'sub2' {}", - 'Start of CDATA section', - "Character data: u'contents of CDATA section'", - 'End of CDATA section', - "End element: u'sub2'", - "External entity ref: (None, u'entity.file', None)", - ('Skipped entity', ('skipped_entity', 0)), - "End element: u'root'", - ] - for operation, expected_operation in zip(operations, expected_operations): - assert operation == expected_operation - - def test_parse_file(self): - # Try parsing a file - out = self.Outputter() - parser = expat.ParserCreate(namespace_separator='!') - parser.returns_unicode = 1 - for name in self.handler_names: - setattr(parser, name, getattr(out, name)) - file = StringIO.StringIO(data) - - parser.ParseFile(file) - - operations = out.out - expected_operations = [ - ('XML declaration', (u'1.0', u'iso-8859-1', 0)), - 'PI: u\'xml-stylesheet\' u\'href="stylesheet.css"\'', - "Comment: u' comment data '", - "Not standalone", - ("Start doctype", ('quotations', 'quotations.dtd', None, 1)), - ('Element declaration', (u'root', (2, 0, None, ()))), - ('Attribute list declaration', ('root', 'attr1', 'CDATA', None, - 1)), - ('Attribute list declaration', ('root', 'attr2', 'CDATA', None, - 0)), - "Notation declared: (u'notation', None, u'notation.jpeg', None)", - ('Entity declaration', ('acirc', 0, u'\xe2', None, None, None, None)), - ('Entity declaration', (u'external_entity', 0, None, None, u'entity.file', None, None)), - "Unparsed entity decl: (u'unparsed_entity', None, u'entity.file', None, u'notation')", - "Not standalone", - "End doctype", - "Start element: u'root' {u'attr1': u'value1', u'attr2': u'value2\\u1f40'}", - "NS decl: u'myns' u'http://www.python.org/namespace'", - "Start element: u'http://www.python.org/namespace!subelement' {}", - "Character data: u'Contents of subelements'", - "End element: 
u'http://www.python.org/namespace!subelement'", - "End of NS decl: u'myns'", - "Start element: u'sub2' {}", - 'Start of CDATA section', - "Character data: u'contents of CDATA section'", - 'End of CDATA section', - "End element: u'sub2'", - "External entity ref: (None, u'entity.file', None)", - ('Skipped entity', ('skipped_entity', 0)), - "End element: u'root'", - ] - for operation, expected_operation in zip(operations, expected_operations): - assert operation == expected_operation - - -class TestNamespaceSeparator: - def test_legal(self): - # Tests that make sure we get errors when the namespace_separator value - # is illegal, and that we don't for good values: - expat.ParserCreate() - expat.ParserCreate(namespace_separator=None) - expat.ParserCreate(namespace_separator=' ') - - def test_illegal(self): - try: - expat.ParserCreate(namespace_separator=42) - raise AssertionError - except TypeError, e: - assert str(e) == ( - 'ParserCreate() argument 2 must be string or None, not int') - - try: - expat.ParserCreate(namespace_separator='too long') - raise AssertionError - except ValueError, e: - assert str(e) == ( - 'namespace_separator must be at most one character, omitted, or None') - - def test_zero_length(self): - # ParserCreate() needs to accept a namespace_separator of zero length - # to satisfy the requirements of RDF applications that are required - # to simply glue together the namespace URI and the localname. Though - # considered a wart of the RDF specifications, it needs to be supported. - # - # See XML-SIG mailing list thread starting with - # http://mail.python.org/pipermail/xml-sig/2001-April/005202.html - # - expat.ParserCreate(namespace_separator='') # too short - - -class TestInterning: - def test(self): - py.test.skip("Not working") - # Test the interning machinery. - p = expat.ParserCreate() - L = [] - def collector(name, *args): - L.append(name) - p.StartElementHandler = collector - p.EndElementHandler = collector - p.Parse(" ", 1) - tag = L[0] - assert len(L) == 6 - for entry in L: - # L should have the same string repeated over and over. - assert tag is entry - - -class TestBufferText: - def setup_method(self, meth): - self.stuff = [] - self.parser = expat.ParserCreate() - self.parser.buffer_text = 1 - self.parser.CharacterDataHandler = self.CharacterDataHandler - - def check(self, expected, label): - assert self.stuff == expected, ( - "%s\nstuff = %r\nexpected = %r" - % (label, self.stuff, map(unicode, expected))) - - def CharacterDataHandler(self, text): - self.stuff.append(text) - - def StartElementHandler(self, name, attrs): - self.stuff.append("<%s>" % name) - bt = attrs.get("buffer-text") - if bt == "yes": - self.parser.buffer_text = 1 - elif bt == "no": - self.parser.buffer_text = 0 - - def EndElementHandler(self, name): - self.stuff.append("" % name) - - def CommentHandler(self, data): - self.stuff.append("" % data) - - def setHandlers(self, handlers=[]): - for name in handlers: - setattr(self.parser, name, getattr(self, name)) - - def test_default_to_disabled(self): - parser = expat.ParserCreate() - assert not parser.buffer_text - - def test_buffering_enabled(self): - # Make sure buffering is turned on - assert self.parser.buffer_text - self.parser.Parse("123", 1) - assert self.stuff == ['123'], ( - "buffered text not properly collapsed") - - def test1(self): - # XXX This test exposes more detail of Expat's text chunking than we - # XXX like, but it tests what we need to concisely. 
- self.setHandlers(["StartElementHandler"]) - self.parser.Parse("12\n34\n5", 1) - assert self.stuff == ( - ["", "1", "", "2", "\n", "3", "", "4\n5"]), ( - "buffering control not reacting as expected") - - def test2(self): - self.parser.Parse("1<2> \n 3", 1) - assert self.stuff == ["1<2> \n 3"], ( - "buffered text not properly collapsed") - - def test3(self): - self.setHandlers(["StartElementHandler"]) - self.parser.Parse("123", 1) - assert self.stuff == ["", "1", "", "2", "", "3"], ( - "buffered text not properly split") - - def test4(self): - self.setHandlers(["StartElementHandler", "EndElementHandler"]) - self.parser.CharacterDataHandler = None - self.parser.Parse("123", 1) - assert self.stuff == ( - ["", "", "", "", "", ""]) - - def test5(self): - self.setHandlers(["StartElementHandler", "EndElementHandler"]) - self.parser.Parse("123", 1) - assert self.stuff == ( - ["", "1", "", "", "2", "", "", "3", ""]) - - def test6(self): - self.setHandlers(["CommentHandler", "EndElementHandler", - "StartElementHandler"]) - self.parser.Parse("12345 ", 1) - assert self.stuff == ( - ["", "1", "", "", "2", "", "", "345", ""]), ( - "buffered text not properly split") - - def test7(self): - self.setHandlers(["CommentHandler", "EndElementHandler", - "StartElementHandler"]) - self.parser.Parse("12345 ", 1) - assert self.stuff == ( - ["", "1", "", "", "2", "", "", "3", - "", "4", "", "5", ""]), ( - "buffered text not properly split") - - -# Test handling of exception from callback: -class TestHandlerException: - def StartElementHandler(self, name, attrs): - raise RuntimeError(name) - - def test(self): - parser = expat.ParserCreate() - parser.StartElementHandler = self.StartElementHandler - try: - parser.Parse("", 1) - raise AssertionError - except RuntimeError, e: - assert e.args[0] == 'a', ( - "Expected RuntimeError for element 'a', but" + \ - " found %r" % e.args[0]) - - -# Test Current* members: -class TestPosition: - def StartElementHandler(self, name, attrs): - self.check_pos('s') - - def EndElementHandler(self, name): - self.check_pos('e') - - def check_pos(self, event): - pos = (event, - self.parser.CurrentByteIndex, - self.parser.CurrentLineNumber, - self.parser.CurrentColumnNumber) - assert self.upto < len(self.expected_list) - expected = self.expected_list[self.upto] - assert pos == expected, ( - 'Expected position %s, got position %s' %(pos, expected)) - self.upto += 1 - - def test(self): - self.parser = expat.ParserCreate() - self.parser.StartElementHandler = self.StartElementHandler - self.parser.EndElementHandler = self.EndElementHandler - self.upto = 0 - self.expected_list = [('s', 0, 1, 0), ('s', 5, 2, 1), ('s', 11, 3, 2), - ('e', 15, 3, 6), ('e', 17, 4, 1), ('e', 22, 5, 0)] - - xml = '\n \n \n \n' - self.parser.Parse(xml, 1) - - -class Testsf1296433: - def test_parse_only_xml_data(self): - # http://python.org/sf/1296433 - # - xml = "%s" % ('a' * 1025) - # this one doesn't crash - #xml = "%s" % ('a' * 10000) - - class SpecificException(Exception): - pass - - def handler(text): - raise SpecificException - - parser = expat.ParserCreate() - parser.CharacterDataHandler = handler - - py.test.raises(Exception, parser.Parse, xml) - -class TestChardataBuffer: - """ - test setting of chardata buffer size - """ - - def test_1025_bytes(self): - assert self.small_buffer_test(1025) == 2 - - def test_1000_bytes(self): - assert self.small_buffer_test(1000) == 1 - - def test_wrong_size(self): - parser = expat.ParserCreate() - parser.buffer_text = 1 - def f(size): - parser.buffer_size = size - - 
py.test.raises(TypeError, f, sys.maxint+1) - py.test.raises(ValueError, f, -1) - py.test.raises(ValueError, f, 0) - - def test_unchanged_size(self): - xml1 = ("%s" % ('a' * 512)) - xml2 = 'a'*512 + '' - parser = expat.ParserCreate() - parser.CharacterDataHandler = self.counting_handler - parser.buffer_size = 512 - parser.buffer_text = 1 - - # Feed 512 bytes of character data: the handler should be called - # once. - self.n = 0 - parser.Parse(xml1) - assert self.n == 1 - - # Reassign to buffer_size, but assign the same size. - parser.buffer_size = parser.buffer_size - assert self.n == 1 - - # Try parsing rest of the document - parser.Parse(xml2) - assert self.n == 2 - - - def test_disabling_buffer(self): - xml1 = "%s" % ('a' * 512) - xml2 = ('b' * 1024) - xml3 = "%s" % ('c' * 1024) - parser = expat.ParserCreate() - parser.CharacterDataHandler = self.counting_handler - parser.buffer_text = 1 - parser.buffer_size = 1024 - assert parser.buffer_size == 1024 - - # Parse one chunk of XML - self.n = 0 - parser.Parse(xml1, 0) - assert parser.buffer_size == 1024 - assert self.n == 1 - - # Turn off buffering and parse the next chunk. - parser.buffer_text = 0 - assert not parser.buffer_text - assert parser.buffer_size == 1024 - for i in range(10): - parser.Parse(xml2, 0) - assert self.n == 11 - - parser.buffer_text = 1 - assert parser.buffer_text - assert parser.buffer_size == 1024 - parser.Parse(xml3, 1) - assert self.n == 12 - - - - def make_document(self, bytes): - return ("" + bytes * 'a' + '') - - def counting_handler(self, text): - self.n += 1 - - def small_buffer_test(self, buffer_len): - xml = "%s" % ('a' * buffer_len) - parser = expat.ParserCreate() - parser.CharacterDataHandler = self.counting_handler - parser.buffer_size = 1024 - parser.buffer_text = 1 - - self.n = 0 - parser.Parse(xml) - return self.n - - def test_change_size_1(self): - xml1 = "%s" % ('a' * 1024) - xml2 = "aaa%s" % ('a' * 1025) - parser = expat.ParserCreate() - parser.CharacterDataHandler = self.counting_handler - parser.buffer_text = 1 - parser.buffer_size = 1024 - assert parser.buffer_size == 1024 - - self.n = 0 - parser.Parse(xml1, 0) - parser.buffer_size *= 2 - assert parser.buffer_size == 2048 - parser.Parse(xml2, 1) - assert self.n == 2 - - def test_change_size_2(self): - xml1 = "a%s" % ('a' * 1023) - xml2 = "aaa%s" % ('a' * 1025) - parser = expat.ParserCreate() - parser.CharacterDataHandler = self.counting_handler - parser.buffer_text = 1 - parser.buffer_size = 2048 - assert parser.buffer_size == 2048 - - self.n=0 - parser.Parse(xml1, 0) - parser.buffer_size /= 2 - assert parser.buffer_size == 1024 - parser.Parse(xml2, 1) - assert self.n == 4 - - def test_segfault(self): - py.test.raises(TypeError, expat.ParserCreate, 1234123123) - -def test_invalid_data(): - parser = expat.ParserCreate() - parser.Parse('invalid.xml', 0) - try: - parser.Parse("", 1) - except expat.ExpatError, e: - assert e.code == 2 # XXX is this reliable? 
- assert e.lineno == 1 - assert e.message.startswith('syntax error') - else: - py.test.fail("Did not raise") - diff --git a/py/_io/terminalwriter.py b/py/_io/terminalwriter.py --- a/py/_io/terminalwriter.py +++ b/py/_io/terminalwriter.py @@ -271,16 +271,24 @@ ('srWindow', SMALL_RECT), ('dwMaximumWindowSize', COORD)] + _GetStdHandle = ctypes.windll.kernel32.GetStdHandle + _GetStdHandle.argtypes = [wintypes.DWORD] + _GetStdHandle.restype = wintypes.HANDLE def GetStdHandle(kind): - return ctypes.windll.kernel32.GetStdHandle(kind) + return _GetStdHandle(kind) - SetConsoleTextAttribute = \ - ctypes.windll.kernel32.SetConsoleTextAttribute - + SetConsoleTextAttribute = ctypes.windll.kernel32.SetConsoleTextAttribute + SetConsoleTextAttribute.argtypes = [wintypes.HANDLE, wintypes.WORD] + SetConsoleTextAttribute.restype = wintypes.BOOL + + _GetConsoleScreenBufferInfo = \ + ctypes.windll.kernel32.GetConsoleScreenBufferInfo + _GetConsoleScreenBufferInfo.argtypes = [wintypes.HANDLE, + ctypes.POINTER(CONSOLE_SCREEN_BUFFER_INFO)] + _GetConsoleScreenBufferInfo.restype = wintypes.BOOL def GetConsoleInfo(handle): info = CONSOLE_SCREEN_BUFFER_INFO() - ctypes.windll.kernel32.GetConsoleScreenBufferInfo(\ - handle, ctypes.byref(info)) + _GetConsoleScreenBufferInfo(handle, ctypes.byref(info)) return info def _getdimensions(): diff --git a/pypy/doc/Makefile b/pypy/doc/Makefile --- a/pypy/doc/Makefile +++ b/pypy/doc/Makefile @@ -81,6 +81,7 @@ "run these through (pdf)latex." man: + python config/generate.py $(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man @echo @echo "Build finished. The manual pages are in $(BUILDDIR)/man" diff --git a/pypy/doc/coding-guide.rst b/pypy/doc/coding-guide.rst --- a/pypy/doc/coding-guide.rst +++ b/pypy/doc/coding-guide.rst @@ -388,7 +388,9 @@ In a few cases (e.g. hash table manipulation), we need machine-sized unsigned arithmetic. For these cases there is the r_uint class, which is a pure Python implementation of word-sized unsigned integers that silently wrap - around. The purpose of this class (as opposed to helper functions as above) + around. ("word-sized" and "machine-sized" are used equivalently and mean + the native size, which you get using "unsigned long" in C.) + The purpose of this class (as opposed to helper functions as above) is consistent typing: both Python and the annotator will propagate r_uint instances in the program and interpret all the operations between them as unsigned. Instances of r_uint are special-cased by the code generators to diff --git a/pypy/doc/commandline_ref.rst b/pypy/doc/commandline_ref.rst new file mode 100644 --- /dev/null +++ b/pypy/doc/commandline_ref.rst @@ -0,0 +1,10 @@ +Command line reference +====================== + +Manual pages +------------ + +.. toctree:: + :maxdepth: 1 + + man/pypy.1.rst diff --git a/pypy/doc/conf.py b/pypy/doc/conf.py --- a/pypy/doc/conf.py +++ b/pypy/doc/conf.py @@ -45,9 +45,9 @@ # built documents. # # The short X.Y version. -version = '1.7' +version = '1.8' # The full version, including alpha/beta/rc tags. -release = '1.7' +release = '1.8' # The language for content autogenerated by Sphinx. Refer to documentation # for a list of supported languages. diff --git a/pypy/doc/config/objspace.usemodules.pyexpat.txt b/pypy/doc/config/objspace.usemodules.pyexpat.txt --- a/pypy/doc/config/objspace.usemodules.pyexpat.txt +++ b/pypy/doc/config/objspace.usemodules.pyexpat.txt @@ -1,2 +1,1 @@ -Use (experimental) pyexpat module written in RPython, instead of CTypes -version which is used by default. 
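To make the wrap-around behaviour described in the coding-guide.rst hunk above concrete, a small sketch; it assumes the pre-split pypy.rlib layout of this era, and the effective width is whatever "unsigned long" is on the build machine:

    # r_uint silently wraps around at the native word size.
    from pypy.rlib.rarithmetic import r_uint

    x = r_uint(0)
    assert x - r_uint(1) == r_uint(-1)          # underflow wraps to the maximum value
    assert r_uint(-1) + r_uint(1) == r_uint(0)  # ...and overflow wraps back to zero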
+Use the pyexpat module, written in RPython. diff --git a/pypy/doc/config/translation.log.txt b/pypy/doc/config/translation.log.txt --- a/pypy/doc/config/translation.log.txt +++ b/pypy/doc/config/translation.log.txt @@ -2,4 +2,4 @@ These must be enabled by setting the PYPYLOG environment variable. The exact set of features supported by PYPYLOG is described in -pypy/translation/c/src/debug.h. +pypy/translation/c/src/debug_print.h. diff --git a/pypy/doc/garbage_collection.rst b/pypy/doc/garbage_collection.rst --- a/pypy/doc/garbage_collection.rst +++ b/pypy/doc/garbage_collection.rst @@ -142,10 +142,9 @@ So as a first approximation, when compared to the Hybrid GC, the Minimark GC saves one word of memory per old object. -There are a number of environment variables that can be tweaked to -influence the GC. (Their default value should be ok for most usages.) -You can read more about them at the start of -`pypy/rpython/memory/gc/minimark.py`_. +There are :ref:`a number of environment variables +` that can be tweaked to influence the +GC. (Their default value should be ok for most usages.) In more detail: @@ -211,5 +210,4 @@ are preserved. If the object dies then the pre-reserved location becomes free garbage, to be collected at the next major collection. - .. include:: _ref.txt diff --git a/pypy/doc/gc_info.rst b/pypy/doc/gc_info.rst new file mode 100644 --- /dev/null +++ b/pypy/doc/gc_info.rst @@ -0,0 +1,53 @@ +Garbage collector configuration +=============================== + +.. _minimark-environment-variables: + +Minimark +-------- + +PyPy's default ``minimark`` garbage collector is configurable through +several environment variables: + +``PYPY_GC_NURSERY`` + The nursery size. + Defaults to ``4MB``. + Small values (like 1 or 1KB) are useful for debugging. + +``PYPY_GC_MAJOR_COLLECT`` + Major collection memory factor. + Default is ``1.82``, which means trigger a major collection when the + memory consumed equals 1.82 times the memory really used at the end + of the previous major collection. + +``PYPY_GC_GROWTH`` + Major collection threshold's max growth rate. + Default is ``1.4``. + Useful to collect more often than normally on sudden memory growth, + e.g. when there is a temporary peak in memory usage. + +``PYPY_GC_MAX`` + The max heap size. + If coming near this limit, it will first collect more often, then + raise an RPython MemoryError, and if that is not enough, crash the + program with a fatal error. + Try values like ``1.6GB``. + +``PYPY_GC_MAX_DELTA`` + The major collection threshold will never be set to more than + ``PYPY_GC_MAX_DELTA`` the amount really used after a collection. + Defaults to 1/8th of the total RAM size (which is constrained to be + at most 2/3/4GB on 32-bit systems). + Try values like ``200MB``. + +``PYPY_GC_MIN`` + Don't collect while the memory size is below this limit. + Useful to avoid spending all the time in the GC in very small + programs. + Defaults to 8 times the nursery. + +``PYPY_GC_DEBUG`` + Enable extra checks around collections that are too slow for normal + use. + Values are ``0`` (off), ``1`` (on major collections) or ``2`` (also + on minor collections). diff --git a/pypy/doc/getting-started-python.rst b/pypy/doc/getting-started-python.rst --- a/pypy/doc/getting-started-python.rst +++ b/pypy/doc/getting-started-python.rst @@ -103,18 +103,22 @@ executable. 
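The gc_info.rst text above lists the minimark tuning variables; as a rough illustration only, the snippet below shows one way to pass them to a fresh pypy process. The PYPY_GC_* names and the suggested values come from that file, while './pypy-c' and 'myscript.py' are placeholders, not part of this commit:

    # Launch a pypy-c with a larger nursery and a hard cap on the heap.
    import os, subprocess

    env = dict(os.environ)
    env['PYPY_GC_NURSERY'] = '16MB'   # default is 4MB; tiny values help GC debugging
    env['PYPY_GC_MAX'] = '1.6GB'      # collect harder near the limit, then MemoryError
    subprocess.check_call(['./pypy-c', 'myscript.py'], env=env)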
The executable behaves mostly like a normal Python interpreter:: $ ./pypy-c - Python 2.7.0 (61ef2a11b56a, Mar 02 2011, 03:00:11) - [PyPy 1.6.0 with GCC 4.4.3] on linux2 + Python 2.7.2 (0e28b379d8b3, Feb 09 2012, 19:41:03) + [PyPy 1.8.0 with GCC 4.4.3] on linux2 Type "help", "copyright", "credits" or "license" for more information. And now for something completely different: ``this sentence is false'' >>>> 46 - 4 42 >>>> from test import pystone >>>> pystone.main() - Pystone(1.1) time for 50000 passes = 0.280017 - This machine benchmarks at 178561 pystones/second - >>>> + Pystone(1.1) time for 50000 passes = 0.220015 + This machine benchmarks at 227257 pystones/second + >>>> pystone.main() + Pystone(1.1) time for 50000 passes = 0.060004 + This machine benchmarks at 833278 pystones/second + >>>> +Note that pystone gets faster as the JIT kicks in. This executable can be moved around or copied on other machines; see Installation_ below. diff --git a/pypy/doc/getting-started.rst b/pypy/doc/getting-started.rst --- a/pypy/doc/getting-started.rst +++ b/pypy/doc/getting-started.rst @@ -53,14 +53,15 @@ PyPy is ready to be executed as soon as you unpack the tarball or the zip file, with no need to install it in any specific location:: - $ tar xf pypy-1.7-linux.tar.bz2 - - $ ./pypy-1.7/bin/pypy - Python 2.7.1 (?, Apr 27 2011, 12:44:21) - [PyPy 1.7.0 with GCC 4.4.3] on linux2 + $ tar xf pypy-1.8-linux.tar.bz2 + $ ./pypy-1.8/bin/pypy + Python 2.7.2 (0e28b379d8b3, Feb 09 2012, 19:41:03) + [PyPy 1.8.0 with GCC 4.4.3] on linux2 Type "help", "copyright", "credits" or "license" for more information. - And now for something completely different: ``implementing LOGO in LOGO: - "turtles all the way down"'' + And now for something completely different: ``it seems to me that once you + settle on an execution / object model and / or bytecode format, you've already + decided what languages (where the 's' seems superfluous) support is going to be + first class for'' >>>> If you want to make PyPy available system-wide, you can put a symlink to the @@ -75,14 +76,14 @@ $ curl -O https://raw.github.com/pypa/pip/master/contrib/get-pip.py - $ ./pypy-1.7/bin/pypy distribute_setup.py + $ ./pypy-1.8/bin/pypy distribute_setup.py - $ ./pypy-1.7/bin/pypy get-pip.py + $ ./pypy-1.8/bin/pypy get-pip.py - $ ./pypy-1.7/bin/pip install pygments # for example + $ ./pypy-1.8/bin/pip install pygments # for example -3rd party libraries will be installed in ``pypy-1.7/site-packages``, and -the scripts in ``pypy-1.7/bin``. +3rd party libraries will be installed in ``pypy-1.8/site-packages``, and +the scripts in ``pypy-1.8/bin``. Installing using virtualenv --------------------------- diff --git a/pypy/doc/index.rst b/pypy/doc/index.rst --- a/pypy/doc/index.rst +++ b/pypy/doc/index.rst @@ -15,7 +15,7 @@ * `FAQ`_: some frequently asked questions. -* `Release 1.7`_: the latest official release +* `Release 1.8`_: the latest official release * `PyPy Blog`_: news and status info about PyPy @@ -75,7 +75,7 @@ .. _`Getting Started`: getting-started.html .. _`Papers`: extradoc.html .. _`Videos`: video-index.html -.. _`Release 1.7`: http://pypy.org/download.html +.. _`Release 1.8`: http://pypy.org/download.html .. _`speed.pypy.org`: http://speed.pypy.org .. _`RPython toolchain`: translation.html .. _`potential project ideas`: project-ideas.html @@ -120,9 +120,9 @@ Windows, on top of .NET, and on top of Java. 
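
The warm-up effect noted above for pystone (later runs are much faster once the JIT has compiled the hot loop) can be observed with any small benchmark. The snippet below is only an illustrative sketch, not part of the changeset, and the timings will of course vary by machine::

    import time

    def busy():
        total = 0
        for i in xrange(10 ** 7):
            total += i
        return total

    for attempt in range(3):
        t0 = time.time()
        busy()                      # the first run includes tracing and compilation time
        print "run %d took %.3f seconds" % (attempt, time.time() - t0)
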
To dig into PyPy it is recommended to try out the current Mercurial default branch, which is always working or mostly working, -instead of the latest release, which is `1.7`__. +instead of the latest release, which is `1.8`__. -.. __: release-1.7.0.html +.. __: release-1.8.0.html PyPy is mainly developed on Linux and Mac OS X. Windows is supported, but platform-specific bugs tend to take longer before we notice and fix @@ -353,10 +353,12 @@ getting-started-dev.rst windows.rst faq.rst + commandline_ref.rst architecture.rst coding-guide.rst cpython_differences.rst garbage_collection.rst + gc_info.rst interpreter.rst objspace.rst __pypy__-module.rst diff --git a/pypy/doc/jit-hooks.rst b/pypy/doc/jit-hooks.rst new file mode 100644 --- /dev/null +++ b/pypy/doc/jit-hooks.rst @@ -0,0 +1,66 @@ +JIT hooks in PyPy +================= + +There are several hooks in the `pypyjit` module that may help you with +understanding what's pypy's JIT doing while running your program. There +are three functions related to that coming from the `pypyjit` module: + +* `set_optimize_hook`:: + + Set a compiling hook that will be called each time a loop is optimized, + but before assembler compilation. This allows to add additional + optimizations on Python level. + + The hook will be called with the following signature: + hook(jitdriver_name, loop_type, greenkey or guard_number, operations) + + jitdriver_name is the name of this particular jitdriver, 'pypyjit' is + the main interpreter loop + + loop_type can be either `loop` `entry_bridge` or `bridge` + in case loop is not `bridge`, greenkey will be a tuple of constants + or a string describing it. + + for the interpreter loop` it'll be a tuple + (code, offset, is_being_profiled) + + Note that jit hook is not reentrant. It means that if the code + inside the jit hook is itself jitted, it will get compiled, but the + jit hook won't be called for that. + + Result value will be the resulting list of operations, or None + +* `set_compile_hook`:: + + Set a compiling hook that will be called each time a loop is compiled. + The hook will be called with the following signature: + hook(jitdriver_name, loop_type, greenkey or guard_number, operations, + assembler_addr, assembler_length) + + jitdriver_name is the name of this particular jitdriver, 'pypyjit' is + the main interpreter loop + + loop_type can be either `loop` `entry_bridge` or `bridge` + in case loop is not `bridge`, greenkey will be a tuple of constants + or a string describing it. + + for the interpreter loop` it'll be a tuple + (code, offset, is_being_profiled) + + assembler_addr is an integer describing where assembler starts, + can be accessed via ctypes, assembler_lenght is the lenght of compiled + asm + + Note that jit hook is not reentrant. It means that if the code + inside the jit hook is itself jitted, it will get compiled, but the + jit hook won't be called for that. + +* `set_abort_hook`:: + + Set a hook (callable) that will be called each time there is tracing + aborted due to some reason. + + The hook will be called as in: hook(jitdriver_name, greenkey, reason) + + Where reason is the reason for abort, see documentation for set_compile_hook + for descriptions of other arguments. diff --git a/pypy/doc/jit/index.rst b/pypy/doc/jit/index.rst --- a/pypy/doc/jit/index.rst +++ b/pypy/doc/jit/index.rst @@ -21,6 +21,9 @@ - Notes_ about the current work in PyPy +- Hooks_ debugging facilities available to a python programmer + .. _Overview: overview.html .. _Notes: pyjitpl5.html +.. 
_Hooks: ../jit-hooks.html diff --git a/pypy/doc/man/pypy.1.rst b/pypy/doc/man/pypy.1.rst --- a/pypy/doc/man/pypy.1.rst +++ b/pypy/doc/man/pypy.1.rst @@ -24,6 +24,9 @@ -S Do not ``import site`` on initialization. +-s + Don't add the user site directory to `sys.path`. + -u Unbuffered binary ``stdout`` and ``stderr``. @@ -39,6 +42,9 @@ -E Ignore environment variables (such as ``PYTHONPATH``). +-B + Disable writing bytecode (``.pyc``) files. + --version Print the PyPy version. @@ -84,6 +90,64 @@ Optimizations to enabled or ``all``. Warning, this option is dangerous, and should be avoided. +ENVIRONMENT +=========== + +``PYTHONPATH`` + Add directories to pypy's module search path. + The format is the same as shell's ``PATH``. + +``PYTHONSTARTUP`` + A script referenced by this variable will be executed before the + first prompt is displayed, in interactive mode. + +``PYTHONDONTWRITEBYTECODE`` + If set to a non-empty value, equivalent to the ``-B`` option. + Disable writing ``.pyc`` files. + +``PYTHONINSPECT`` + If set to a non-empty value, equivalent to the ``-i`` option. + Inspect interactively after running the specified script. + +``PYTHONIOENCODING`` + If this is set, it overrides the encoding used for + *stdin*/*stdout*/*stderr*. + The syntax is *encodingname*:*errorhandler* + The *errorhandler* part is optional and has the same meaning as in + `str.encode`. + +``PYTHONNOUSERSITE`` + If set to a non-empty value, equivalent to the ``-s`` option. + Don't add the user site directory to `sys.path`. + +``PYTHONWARNINGS`` + If set, equivalent to the ``-W`` option (warning control). + The value should be a comma-separated list of ``-W`` parameters. + +``PYPYLOG`` + If set to a non-empty value, enable logging, the format is: + + *fname* + logging for profiling: includes all + ``debug_start``/``debug_stop`` but not any nested + ``debug_print``. + *fname* can be ``-`` to log to *stderr*. + + ``:``\ *fname* + Full logging, including ``debug_print``. + + *prefix*\ ``:``\ *fname* + Conditional logging. + Multiple prefixes can be specified, comma-separated. + Only sections whose name match the prefix will be logged. + + ``PYPYLOG``\ =\ ``jit-log-opt,jit-backend:``\ *logfile* will + generate a log suitable for *jitviewer*, a tool for debugging + performance issues under PyPy. + +.. include:: ../gc_info.rst + :start-line: 7 + SEE ALSO ======== diff --git a/pypy/doc/release-1.8.0.rst b/pypy/doc/release-1.8.0.rst --- a/pypy/doc/release-1.8.0.rst +++ b/pypy/doc/release-1.8.0.rst @@ -1,17 +1,22 @@ ============================ -PyPy 1.7 - business as usual +PyPy 1.8 - business as usual ============================ -We're pleased to announce the 1.8 release of PyPy. As became a habit, this -release brings a lot of bugfixes, performance and memory improvements over -the 1.7 release. The main highlight of the release is the introduction of -list strategies which makes homogenous lists more efficient both in terms -of performance and memory. Otherwise it's "business as usual" in the sense -that performance improved roughly 10% on average since the previous release. -You can download the PyPy 1.8 release here: +We're pleased to announce the 1.8 release of PyPy. As habitual this +release brings a lot of bugfixes, together with performance and memory +improvements over the 1.7 release. The main highlight of the release +is the introduction of `list strategies`_ which makes homogenous lists +more efficient both in terms of performance and memory. This release +also upgrades us from Python 2.7.1 compatibility to 2.7.2. 
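
To make the pypyjit hook signatures documented in jit-hooks.rst above a bit more concrete, a minimal registration could look like the sketch below. It assumes a PyPy interpreter with the pypyjit module; the hook bodies are placeholders that only echo the arguments described in that document::

    import pypyjit

    def compile_hook(jitdriver_name, loop_type, greenkey, operations,
                     assembler_addr, assembler_length):
        # for ordinary loops of the 'pypyjit' driver, greenkey is
        # (code, offset, is_being_profiled); for bridges it is a guard number
        print "compiled %s %s: %d resops, %d bytes of assembler" % (
            jitdriver_name, loop_type, len(operations), assembler_length)

    def abort_hook(jitdriver_name, greenkey, reason):
        print "tracing aborted in %s: %s" % (jitdriver_name, reason)

    pypyjit.set_compile_hook(compile_hook)
    pypyjit.set_abort_hook(abort_hook)
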
Otherwise +it's "business as usual" in the sense that performance improved +roughly 10% on average since the previous release. + +you can download the PyPy 1.8 release here: http://pypy.org/download.html +.. _`list strategies`: http://morepypy.blogspot.com/2011/10/more-compact-lists-with-list-strategies.html + What is PyPy? ============= @@ -20,7 +25,8 @@ due to its integrated tracing JIT compiler. This release supports x86 machines running Linux 32/64, Mac OS X 32/64 or -Windows 32. Windows 64 work is ongoing, but not yet natively supported. +Windows 32. Windows 64 work has been stalled, we would welcome a volunteer +to handle that. .. _`pypy 1.8 and cpython 2.7.1`: http://speed.pypy.org @@ -33,20 +39,60 @@ the JIT performance in places that use such lists. There are also special strategies for unicode and string lists. -* As usual, numerous performance improvements. There are too many examples - which python constructs now should behave faster to list them. +* As usual, numerous performance improvements. There are many examples + of python constructs that now should be faster; too many to list them. * Bugfixes and compatibility fixes with CPython. * Windows fixes. -* NumPy effort progress, for the exact list of things that have been done, +* NumPy effort progress; for the exact list of things that have been done, consult the `numpy status page`_. A tentative list of things that has been done: - xxxx # list it, multidim arrays in particular + * multi dimensional arrays -* Fundraising XXX + * various sizes of dtypes -.. _`numpy status page`: xxx -.. _`numpy status update blog report`: xxx + * a lot of ufuncs + + * a lot of other minor changes + + Right now the `numpy` module is available under both `numpy` and `numpypy` + names. However, because it's incomplete, you have to `import numpypy` first + before doing any imports from `numpy`. + +* New JIT hooks that allow you to hook into the JIT process from your python + program. There is a `brief overview`_ of what they offer. + +* Standard library upgrade from 2.7.1 to 2.7.2. + +Ongoing work +============ + +As usual, there is quite a bit of ongoing work that either didn't make it to +the release or is not ready yet. Highlights include: + +* Non-x86 backends for the JIT: ARMv7 (almost ready) and PPC64 (in progress) + +* Specialized type instances - allocate instances as efficient as C structs, + including type specialization + +* More numpy work + +* Since the last release there was a significant breakthrough in PyPy's + fundraising. We now have enough funds to work on first stages of `numpypy`_ + and `py3k`_. We would like to thank again to everyone who donated. + +* It's also probably worth noting, we're considering donations for the + Software Transactional Memory project. You can read more about `our plans`_ + +Cheers, +The PyPy Team + +.. _`brief overview`: http://doc.pypy.org/en/latest/jit-hooks.html +.. _`numpy status page`: http://buildbot.pypy.org/numpy-status/latest.html +.. _`numpy status update blog report`: http://morepypy.blogspot.com/2012/01/numpypy-status-update.html +.. _`numpypy`: http://pypy.org/numpydonate.html +.. _`py3k`: http://pypy.org/py3donate.html +.. _`our plans`: http://morepypy.blogspot.com/2012/01/transactional-memory-ii.html diff --git a/pypy/interpreter/astcompiler/optimize.py b/pypy/interpreter/astcompiler/optimize.py --- a/pypy/interpreter/astcompiler/optimize.py +++ b/pypy/interpreter/astcompiler/optimize.py @@ -302,8 +302,7 @@ # narrow builds will return a surrogate. 
In both # the cases skip the optimization in order to # produce compatible pycs. - if (self.space.isinstance_w(w_obj, self.space.w_unicode) - and + if (self.space.isinstance_w(w_obj, self.space.w_unicode) and self.space.isinstance_w(w_const, self.space.w_unicode)): unistr = self.space.unicode_w(w_const) if len(unistr) == 1: @@ -311,7 +310,7 @@ else: ch = 0 if (ch > 0xFFFF or - (MAXUNICODE == 0xFFFF and 0xD800 <= ch <= 0xDFFFF)): + (MAXUNICODE == 0xFFFF and 0xD800 <= ch <= 0xDFFF)): return subs return ast.Const(w_const, subs.lineno, subs.col_offset) diff --git a/pypy/interpreter/astcompiler/test/test_compiler.py b/pypy/interpreter/astcompiler/test/test_compiler.py --- a/pypy/interpreter/astcompiler/test/test_compiler.py +++ b/pypy/interpreter/astcompiler/test/test_compiler.py @@ -838,7 +838,7 @@ # Just checking this doesn't crash out self.count_instructions(source) - def test_const_fold_unicode_subscr(self): + def test_const_fold_unicode_subscr(self, monkeypatch): source = """def f(): return u"abc"[0] """ @@ -853,6 +853,14 @@ assert counts == {ops.LOAD_CONST: 2, ops.BINARY_SUBSCR: 1, ops.RETURN_VALUE: 1} + monkeypatch.setattr(optimize, "MAXUNICODE", 0xFFFF) + source = """def f(): + return u"\uE01F"[0] + """ + counts = self.count_instructions(source) + assert counts == {ops.LOAD_CONST: 1, ops.RETURN_VALUE: 1} + monkeypatch.undo() + # getslice is not yet optimized. # Still, check a case which yields the empty string. source = """def f(): diff --git a/pypy/interpreter/baseobjspace.py b/pypy/interpreter/baseobjspace.py --- a/pypy/interpreter/baseobjspace.py +++ b/pypy/interpreter/baseobjspace.py @@ -1340,6 +1340,15 @@ def unicode_w(self, w_obj): return w_obj.unicode_w(self) + def unicode0_w(self, w_obj): + "Like unicode_w, but rejects strings with NUL bytes." + from pypy.rlib import rstring + result = w_obj.unicode_w(self) + if u'\x00' in result: + raise OperationError(self.w_TypeError, self.wrap( + 'argument must be a unicode string without NUL characters')) + return rstring.assert_str0(result) + def realunicode_w(self, w_obj): # Like unicode_w, but only works if w_obj is really of type # 'unicode'. @@ -1638,6 +1647,9 @@ 'UnicodeEncodeError', 'UnicodeDecodeError', ] + +if sys.platform.startswith("win"): + ObjSpace.ExceptionTable += ['WindowsError'] ## Irregular part of the interface: # diff --git a/pypy/interpreter/test/test_objspace.py b/pypy/interpreter/test/test_objspace.py --- a/pypy/interpreter/test/test_objspace.py +++ b/pypy/interpreter/test/test_objspace.py @@ -178,6 +178,14 @@ res = self.space.interp_w(Function, w(None), can_be_None=True) assert res is None + def test_str0_w(self): + space = self.space + w = space.wrap + assert space.str0_w(w("123")) == "123" + exc = space.raises_w(space.w_TypeError, space.str0_w, w("123\x004")) + assert space.unicode0_w(w(u"123")) == u"123" + exc = space.raises_w(space.w_TypeError, space.unicode0_w, w(u"123\x004")) + def test_getindex_w(self): w_instance1 = self.space.appexec([], """(): class X(object): diff --git a/pypy/jit/codewriter/flatten.py b/pypy/jit/codewriter/flatten.py --- a/pypy/jit/codewriter/flatten.py +++ b/pypy/jit/codewriter/flatten.py @@ -162,7 +162,9 @@ if len(block.exits) == 1: # A single link, fall-through link = block.exits[0] - assert link.exitcase is None + assert link.exitcase in (None, False, True) + # the cases False or True should not really occur, but can show + # up in the manually hacked graphs for generators... 
self.make_link(link) # elif block.exitswitch is c_last_exception: diff --git a/pypy/jit/codewriter/policy.py b/pypy/jit/codewriter/policy.py --- a/pypy/jit/codewriter/policy.py +++ b/pypy/jit/codewriter/policy.py @@ -48,7 +48,7 @@ mod = func.__module__ or '?' if mod.startswith('pypy.rpython.module.'): return True - if mod.startswith('pypy.translator.'): # XXX wtf? + if mod == 'pypy.translator.goal.nanos': # more helpers return True return False diff --git a/pypy/jit/metainterp/test/test_ajit.py b/pypy/jit/metainterp/test/test_ajit.py --- a/pypy/jit/metainterp/test/test_ajit.py +++ b/pypy/jit/metainterp/test/test_ajit.py @@ -3706,6 +3706,18 @@ # here it works again self.check_operations_history(guard_class=0, record_known_class=1) + def test_generator(self): + def g(n): + yield n+1 + yield n+2 + yield n+3 + def f(n): + gen = g(n) + return gen.next() * gen.next() * gen.next() + res = self.interp_operations(f, [10]) + assert res == 11 * 12 * 13 + self.check_operations_history(int_add=3, int_mul=2) + class TestLLtype(BaseLLtypeTests, LLJitMixin): def test_tagged(self): diff --git a/pypy/module/_ffi/test/test__ffi.py b/pypy/module/_ffi/test/test__ffi.py --- a/pypy/module/_ffi/test/test__ffi.py +++ b/pypy/module/_ffi/test/test__ffi.py @@ -190,6 +190,7 @@ def test_convert_strings_to_char_p(self): """ + DLLEXPORT long mystrlen(char* s) { long len = 0; @@ -215,6 +216,7 @@ def test_convert_unicode_to_unichar_p(self): """ #include + DLLEXPORT long mystrlen_u(wchar_t* s) { long len = 0; @@ -241,6 +243,7 @@ def test_keepalive_temp_buffer(self): """ + DLLEXPORT char* do_nothing(char* s) { return s; @@ -525,5 +528,7 @@ from _ffi import CDLL, types libfoo = CDLL(self.libfoo_name) raises(AttributeError, "libfoo.getfunc('I_do_not_exist', [], types.void)") + if self.iswin32: + skip("unix specific") libnone = CDLL(None) raises(AttributeError, "libnone.getfunc('I_do_not_exist', [], types.void)") diff --git a/pypy/module/_file/test/test_file.py b/pypy/module/_file/test/test_file.py --- a/pypy/module/_file/test/test_file.py +++ b/pypy/module/_file/test/test_file.py @@ -265,6 +265,13 @@ if option.runappdirect: py.test.skip("works with internals of _file impl on py.py") + import platform + if platform.system() == 'Windows': + # XXX This test crashes until someone implements something like + # XXX verify_fd from + # XXX http://hg.python.org/cpython/file/80ddbd822227/Modules/posixmodule.c#l434 + # XXX and adds it to fopen + assert False state = [0] def read(fd, n=None): diff --git a/pypy/module/_io/test/test_fileio.py b/pypy/module/_io/test/test_fileio.py --- a/pypy/module/_io/test/test_fileio.py +++ b/pypy/module/_io/test/test_fileio.py @@ -134,7 +134,10 @@ assert a == 'a\nbxxxxxxx' def test_nonblocking_read(self): - import os, fcntl + try: + import os, fcntl + except ImportError: + skip("need fcntl to set nonblocking mode") r_fd, w_fd = os.pipe() # set nonblocking fcntl.fcntl(r_fd, fcntl.F_SETFL, os.O_NONBLOCK) diff --git a/pypy/module/cpyext/api.py b/pypy/module/cpyext/api.py --- a/pypy/module/cpyext/api.py +++ b/pypy/module/cpyext/api.py @@ -23,6 +23,7 @@ from pypy.interpreter.function import StaticMethod from pypy.objspace.std.sliceobject import W_SliceObject from pypy.module.__builtin__.descriptor import W_Property +from pypy.module.__builtin__.interp_classobj import W_ClassObject from pypy.module.__builtin__.interp_memoryview import W_MemoryView from pypy.rlib.entrypoint import entrypoint from pypy.rlib.unroll import unrolling_iterable @@ -397,6 +398,7 @@ 'Module': 'space.gettypeobject(Module.typedef)', 
'Property': 'space.gettypeobject(W_Property.typedef)', 'Slice': 'space.gettypeobject(W_SliceObject.typedef)', + 'Class': 'space.gettypeobject(W_ClassObject.typedef)', 'StaticMethod': 'space.gettypeobject(StaticMethod.typedef)', 'CFunction': 'space.gettypeobject(cpyext.methodobject.W_PyCFunctionObject.typedef)', 'WrapperDescr': 'space.gettypeobject(cpyext.methodobject.W_PyCMethodObject.typedef)' diff --git a/pypy/module/cpyext/include/patchlevel.h b/pypy/module/cpyext/include/patchlevel.h --- a/pypy/module/cpyext/include/patchlevel.h +++ b/pypy/module/cpyext/include/patchlevel.h @@ -21,12 +21,12 @@ /* Version parsed out into numeric values */ #define PY_MAJOR_VERSION 2 #define PY_MINOR_VERSION 7 -#define PY_MICRO_VERSION 1 +#define PY_MICRO_VERSION 2 #define PY_RELEASE_LEVEL PY_RELEASE_LEVEL_FINAL #define PY_RELEASE_SERIAL 0 /* Version as a string */ -#define PY_VERSION "2.7.1" +#define PY_VERSION "2.7.2" /* PyPy version as a string */ #define PYPY_VERSION "1.8.1" diff --git a/pypy/module/cpyext/test/test_classobject.py b/pypy/module/cpyext/test/test_classobject.py --- a/pypy/module/cpyext/test/test_classobject.py +++ b/pypy/module/cpyext/test/test_classobject.py @@ -1,4 +1,5 @@ from pypy.module.cpyext.test.test_api import BaseApiTest +from pypy.module.cpyext.test.test_cpyext import AppTestCpythonExtensionBase from pypy.interpreter.function import Function, Method class TestClassObject(BaseApiTest): @@ -51,3 +52,14 @@ assert api.PyInstance_Check(w_instance) assert space.is_true(space.call_method(space.builtin, "isinstance", w_instance, w_class)) + +class AppTestStringObject(AppTestCpythonExtensionBase): + def test_class_type(self): + module = self.import_extension('foo', [ + ("get_classtype", "METH_NOARGS", + """ + Py_INCREF(&PyClass_Type); + return &PyClass_Type; + """)]) + class C: pass + assert module.get_classtype() is type(C) diff --git a/pypy/module/micronumpy/__init__.py b/pypy/module/micronumpy/__init__.py --- a/pypy/module/micronumpy/__init__.py +++ b/pypy/module/micronumpy/__init__.py @@ -114,6 +114,7 @@ ("tan", "tan"), ('bitwise_and', 'bitwise_and'), ('bitwise_or', 'bitwise_or'), + ('bitwise_xor', 'bitwise_xor'), ('bitwise_not', 'invert'), ('isnan', 'isnan'), ('isinf', 'isinf'), @@ -130,8 +131,5 @@ 'min': 'app_numpy.min', 'identity': 'app_numpy.identity', 'max': 'app_numpy.max', - 'inf': 'app_numpy.inf', - 'e': 'app_numpy.e', - 'pi': 'app_numpy.pi', 'arange': 'app_numpy.arange', } diff --git a/pypy/module/micronumpy/app_numpy.py b/pypy/module/micronumpy/app_numpy.py --- a/pypy/module/micronumpy/app_numpy.py +++ b/pypy/module/micronumpy/app_numpy.py @@ -3,11 +3,6 @@ import _numpypy -inf = float("inf") -e = math.e -pi = math.pi - - def average(a): # This implements a weighted average, for now we don't implement the # weighting, just the average part! @@ -59,7 +54,7 @@ if not hasattr(a, "max"): a = _numpypy.array(a) return a.max(axis) - + def arange(start, stop=None, step=1, dtype=None): '''arange([start], stop[, step], dtype=None) Generate values in the half-interval [start, stop). 
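
A tiny usage sketch of the ufunc wiring touched above, in particular the newly listed bitwise_xor entry. This is illustrative only and assumes a PyPy build with the micronumpy module enabled, imported here under the numpypy name mentioned in the release notes::

    import numpypy as np

    a = np.array([12, 10])          # an integer dtype is inferred from the elements
    b = np.array([10, 6])
    print np.bitwise_xor(a, b)      # the newly exported ufunc
    print a ^ b                     # __xor__ on arrays routes to the same ufunc
    print np.arange(0, 10, 2)       # values in the half-open interval [0, 10)
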
diff --git a/pypy/module/micronumpy/compile.py b/pypy/module/micronumpy/compile.py --- a/pypy/module/micronumpy/compile.py +++ b/pypy/module/micronumpy/compile.py @@ -51,6 +51,8 @@ w_long = "long" w_tuple = 'tuple' w_slice = "slice" + w_str = "str" + w_unicode = "unicode" def __init__(self): """NOT_RPYTHON""" @@ -91,8 +93,12 @@ return BoolObject(obj) elif isinstance(obj, int): return IntObject(obj) + elif isinstance(obj, long): + return LongObject(obj) elif isinstance(obj, W_Root): return obj + elif isinstance(obj, str): + return StringObject(obj) raise NotImplementedError def newlist(self, items): @@ -120,6 +126,11 @@ return int(w_obj.floatval) raise NotImplementedError + def str_w(self, w_obj): + if isinstance(w_obj, StringObject): + return w_obj.v + raise NotImplementedError + def int(self, w_obj): if isinstance(w_obj, IntObject): return w_obj @@ -151,7 +162,13 @@ return instantiate(klass) def newtuple(self, list_w): - raise ValueError + return ListObject(list_w) + + def newdict(self): + return {} + + def setitem(self, dict, item, value): + dict[item] = value def len_w(self, w_obj): if isinstance(w_obj, ListObject): @@ -178,6 +195,11 @@ def __init__(self, intval): self.intval = intval +class LongObject(W_Root): + tp = FakeSpace.w_long + def __init__(self, intval): + self.intval = intval + class ListObject(W_Root): tp = FakeSpace.w_list def __init__(self, items): @@ -190,6 +212,11 @@ self.stop = stop self.step = step +class StringObject(W_Root): + tp = FakeSpace.w_str + def __init__(self, v): + self.v = v + class InterpreterState(object): def __init__(self, code): self.code = code diff --git a/pypy/module/micronumpy/interp_boxes.py b/pypy/module/micronumpy/interp_boxes.py --- a/pypy/module/micronumpy/interp_boxes.py +++ b/pypy/module/micronumpy/interp_boxes.py @@ -1,6 +1,6 @@ from pypy.interpreter.baseobjspace import Wrappable -from pypy.interpreter.error import operationerrfmt -from pypy.interpreter.gateway import interp2app +from pypy.interpreter.error import operationerrfmt, OperationError +from pypy.interpreter.gateway import interp2app, unwrap_spec from pypy.interpreter.typedef import TypeDef from pypy.objspace.std.floattype import float_typedef from pypy.objspace.std.stringtype import str_typedef @@ -13,13 +13,13 @@ MIXIN_64 = (int_typedef,) if LONG_BIT == 64 else () def new_dtype_getter(name): - def get_dtype(space): + def _get_dtype(space): from pypy.module.micronumpy.interp_dtype import get_dtype_cache return getattr(get_dtype_cache(space), "w_%sdtype" % name) def new(space, w_subtype, w_value): - dtype = get_dtype(space) + dtype = _get_dtype(space) return dtype.itemtype.coerce_subtype(space, w_subtype, w_value) - return func_with_new_name(new, name + "_box_new"), staticmethod(get_dtype) + return func_with_new_name(new, name + "_box_new"), staticmethod(_get_dtype) class PrimitiveBox(object): _mixin_ = True @@ -30,7 +30,6 @@ def convert_to(self, dtype): return dtype.box(self.value) - class W_GenericBox(Wrappable): _attrs_ = () @@ -39,6 +38,9 @@ w_subtype.getname(space, '?') ) + def get_dtype(self, space): + return self._get_dtype(space) + def descr_str(self, space): return self.descr_repr(space) @@ -46,12 +48,12 @@ return space.wrap(self.get_dtype(space).itemtype.str_format(self)) def descr_int(self, space): - box = self.convert_to(W_LongBox.get_dtype(space)) + box = self.convert_to(W_LongBox._get_dtype(space)) assert isinstance(box, W_LongBox) return space.wrap(box.value) def descr_float(self, space): - box = self.convert_to(W_Float64Box.get_dtype(space)) + box = 
self.convert_to(W_Float64Box._get_dtype(space)) assert isinstance(box, W_Float64Box) return space.wrap(box.value) @@ -81,7 +83,15 @@ descr_sub = _binop_impl("subtract") descr_mul = _binop_impl("multiply") descr_div = _binop_impl("divide") + descr_truediv = _binop_impl("true_divide") + descr_mod = _binop_impl("mod") descr_pow = _binop_impl("power") + descr_lshift = _binop_impl("left_shift") + descr_rshift = _binop_impl("right_shift") + descr_and = _binop_impl("bitwise_and") + descr_or = _binop_impl("bitwise_or") + descr_xor = _binop_impl("bitwise_xor") + descr_eq = _binop_impl("equal") descr_ne = _binop_impl("not_equal") descr_lt = _binop_impl("less") @@ -92,16 +102,37 @@ descr_radd = _binop_right_impl("add") descr_rsub = _binop_right_impl("subtract") descr_rmul = _binop_right_impl("multiply") + descr_rdiv = _binop_right_impl("divide") + descr_rtruediv = _binop_right_impl("true_divide") + descr_rmod = _binop_right_impl("mod") + descr_rpow = _binop_right_impl("power") + descr_rlshift = _binop_right_impl("left_shift") + descr_rrshift = _binop_right_impl("right_shift") + descr_rand = _binop_right_impl("bitwise_and") + descr_ror = _binop_right_impl("bitwise_or") + descr_rxor = _binop_right_impl("bitwise_xor") + descr_pos = _unaryop_impl("positive") descr_neg = _unaryop_impl("negative") descr_abs = _unaryop_impl("absolute") + descr_invert = _unaryop_impl("invert") + + def descr_divmod(self, space, w_other): + w_quotient = self.descr_div(space, w_other) + w_remainder = self.descr_mod(space, w_other) + return space.newtuple([w_quotient, w_remainder]) + + def descr_rdivmod(self, space, w_other): + w_quotient = self.descr_rdiv(space, w_other) + w_remainder = self.descr_rmod(space, w_other) + return space.newtuple([w_quotient, w_remainder]) def item(self, space): return self.get_dtype(space).itemtype.to_builtin_type(space, self) class W_BoolBox(W_GenericBox, PrimitiveBox): - descr__new__, get_dtype = new_dtype_getter("bool") + descr__new__, _get_dtype = new_dtype_getter("bool") class W_NumberBox(W_GenericBox): _attrs_ = () @@ -117,40 +148,40 @@ pass class W_Int8Box(W_SignedIntegerBox, PrimitiveBox): - descr__new__, get_dtype = new_dtype_getter("int8") + descr__new__, _get_dtype = new_dtype_getter("int8") class W_UInt8Box(W_UnsignedIntegerBox, PrimitiveBox): - descr__new__, get_dtype = new_dtype_getter("uint8") + descr__new__, _get_dtype = new_dtype_getter("uint8") class W_Int16Box(W_SignedIntegerBox, PrimitiveBox): - descr__new__, get_dtype = new_dtype_getter("int16") + descr__new__, _get_dtype = new_dtype_getter("int16") class W_UInt16Box(W_UnsignedIntegerBox, PrimitiveBox): - descr__new__, get_dtype = new_dtype_getter("uint16") + descr__new__, _get_dtype = new_dtype_getter("uint16") class W_Int32Box(W_SignedIntegerBox, PrimitiveBox): - descr__new__, get_dtype = new_dtype_getter("int32") + descr__new__, _get_dtype = new_dtype_getter("int32") class W_UInt32Box(W_UnsignedIntegerBox, PrimitiveBox): - descr__new__, get_dtype = new_dtype_getter("uint32") + descr__new__, _get_dtype = new_dtype_getter("uint32") class W_LongBox(W_SignedIntegerBox, PrimitiveBox): - descr__new__, get_dtype = new_dtype_getter("long") + descr__new__, _get_dtype = new_dtype_getter("long") class W_ULongBox(W_UnsignedIntegerBox, PrimitiveBox): - descr__new__, get_dtype = new_dtype_getter("ulong") + descr__new__, _get_dtype = new_dtype_getter("ulong") class W_Int64Box(W_SignedIntegerBox, PrimitiveBox): - descr__new__, get_dtype = new_dtype_getter("int64") + descr__new__, _get_dtype = new_dtype_getter("int64") class 
W_LongLongBox(W_SignedIntegerBox, PrimitiveBox): - descr__new__, get_dtype = new_dtype_getter('longlong') + descr__new__, _get_dtype = new_dtype_getter('longlong') class W_UInt64Box(W_UnsignedIntegerBox, PrimitiveBox): - descr__new__, get_dtype = new_dtype_getter("uint64") + descr__new__, _get_dtype = new_dtype_getter("uint64") class W_ULongLongBox(W_SignedIntegerBox, PrimitiveBox): - descr__new__, get_dtype = new_dtype_getter('ulonglong') + descr__new__, _get_dtype = new_dtype_getter('ulonglong') class W_InexactBox(W_NumberBox): _attrs_ = () @@ -159,17 +190,41 @@ _attrs_ = () class W_Float32Box(W_FloatingBox, PrimitiveBox): - descr__new__, get_dtype = new_dtype_getter("float32") + descr__new__, _get_dtype = new_dtype_getter("float32") class W_Float64Box(W_FloatingBox, PrimitiveBox): - descr__new__, get_dtype = new_dtype_getter("float64") + descr__new__, _get_dtype = new_dtype_getter("float64") class W_FlexibleBox(W_GenericBox): pass class W_VoidBox(W_FlexibleBox): - pass + def __init__(self, arr, ofs): + self.arr = arr # we have to keep array alive + self.ofs = ofs + + def get_dtype(self, space): + return self.arr.dtype + + @unwrap_spec(item=str) + def descr_getitem(self, space, item): + try: + ofs, dtype = self.arr.dtype.fields[item] + except KeyError: + raise OperationError(space.w_IndexError, + space.wrap("Field %s does not exist" % item)) + return dtype.itemtype.read(self.arr, 1, self.ofs, ofs) + + @unwrap_spec(item=str) + def descr_setitem(self, space, item, w_value): + try: + ofs, dtype = self.arr.dtype.fields[item] + except KeyError: + raise OperationError(space.w_IndexError, + space.wrap("Field %s does not exist" % item)) + dtype.itemtype.store(self.arr, 1, self.ofs, ofs, + dtype.coerce(space, w_value)) class W_CharacterBox(W_FlexibleBox): pass @@ -195,11 +250,29 @@ __sub__ = interp2app(W_GenericBox.descr_sub), __mul__ = interp2app(W_GenericBox.descr_mul), __div__ = interp2app(W_GenericBox.descr_div), + __truediv__ = interp2app(W_GenericBox.descr_truediv), + __mod__ = interp2app(W_GenericBox.descr_mod), + __divmod__ = interp2app(W_GenericBox.descr_divmod), __pow__ = interp2app(W_GenericBox.descr_pow), + __lshift__ = interp2app(W_GenericBox.descr_lshift), + __rshift__ = interp2app(W_GenericBox.descr_rshift), + __and__ = interp2app(W_GenericBox.descr_and), + __or__ = interp2app(W_GenericBox.descr_or), + __xor__ = interp2app(W_GenericBox.descr_xor), __radd__ = interp2app(W_GenericBox.descr_radd), __rsub__ = interp2app(W_GenericBox.descr_rsub), __rmul__ = interp2app(W_GenericBox.descr_rmul), + __rdiv__ = interp2app(W_GenericBox.descr_rdiv), + __rtruediv__ = interp2app(W_GenericBox.descr_rtruediv), + __rmod__ = interp2app(W_GenericBox.descr_rmod), + __rdivmod__ = interp2app(W_GenericBox.descr_rdivmod), + __rpow__ = interp2app(W_GenericBox.descr_rpow), + __rlshift__ = interp2app(W_GenericBox.descr_rlshift), + __rrshift__ = interp2app(W_GenericBox.descr_rrshift), + __rand__ = interp2app(W_GenericBox.descr_rand), + __ror__ = interp2app(W_GenericBox.descr_ror), + __rxor__ = interp2app(W_GenericBox.descr_rxor), __eq__ = interp2app(W_GenericBox.descr_eq), __ne__ = interp2app(W_GenericBox.descr_ne), @@ -208,8 +281,10 @@ __gt__ = interp2app(W_GenericBox.descr_gt), __ge__ = interp2app(W_GenericBox.descr_ge), + __pos__ = interp2app(W_GenericBox.descr_pos), __neg__ = interp2app(W_GenericBox.descr_neg), __abs__ = interp2app(W_GenericBox.descr_abs), + __invert__ = interp2app(W_GenericBox.descr_invert), tolist = interp2app(W_GenericBox.item), ) @@ -309,6 +384,8 @@ W_VoidBox.typedef = TypeDef("void", 
W_FlexibleBox.typedef, __module__ = "numpypy", + __getitem__ = interp2app(W_VoidBox.descr_getitem), + __setitem__ = interp2app(W_VoidBox.descr_setitem), ) W_CharacterBox.typedef = TypeDef("character", W_FlexibleBox.typedef, diff --git a/pypy/module/micronumpy/interp_dtype.py b/pypy/module/micronumpy/interp_dtype.py --- a/pypy/module/micronumpy/interp_dtype.py +++ b/pypy/module/micronumpy/interp_dtype.py @@ -9,7 +9,6 @@ from pypy.module.micronumpy import types, interp_boxes from pypy.rlib.objectmodel import specialize from pypy.rlib.rarithmetic import LONG_BIT, r_longlong, r_ulonglong -from pypy.rpython.lltypesystem import lltype UNSIGNEDLTR = "u" @@ -20,8 +19,6 @@ STRINGLTR = 'S' UNICODELTR = 'U' -VOID_STORAGE = lltype.Array(lltype.Char, hints={'nolength': True, 'render_as_void': True}) - class W_Dtype(Wrappable): _immutable_fields_ = ["itemtype", "num", "kind"] @@ -38,32 +35,24 @@ self.fields = fields self.fieldnames = fieldnames - def malloc(self, length): - # XXX find out why test_zjit explodes with tracking of allocations - return lltype.malloc(VOID_STORAGE, self.itemtype.get_element_size() * length, - zero=True, flavor="raw", - track_allocation=False, add_memory_pressure=True - ) - @specialize.argtype(1) def box(self, value): return self.itemtype.box(value) def coerce(self, space, w_item): - return self.itemtype.coerce(space, w_item) + return self.itemtype.coerce(space, self, w_item) - def getitem(self, storage, i): - return self.itemtype.read(storage, self.itemtype.get_element_size(), i, 0) + def getitem(self, arr, i): + return self.itemtype.read(arr, 1, i, 0) - def getitem_bool(self, storage, i): - isize = self.itemtype.get_element_size() - return self.itemtype.read_bool(storage, isize, i, 0) + def getitem_bool(self, arr, i): + return self.itemtype.read_bool(arr, 1, i, 0) - def setitem(self, storage, i, box): - self.itemtype.store(storage, self.itemtype.get_element_size(), i, 0, box) + def setitem(self, arr, i, box): + self.itemtype.store(arr, 1, i, 0, box) def fill(self, storage, box, start, stop): - self.itemtype.fill(storage, self.itemtype.get_element_size(), box, start, stop, 0) + self.itemtype.fill(storage, self.get_size(), box, start, stop, 0) def descr_str(self, space): return space.wrap(self.name) @@ -129,6 +118,17 @@ def is_bool_type(self): return self.kind == BOOLLTR + def is_record_type(self): + return self.fields is not None + + def __repr__(self): + if self.fields is not None: + return '' % self.fields + return '' % self.itemtype + + def get_size(self): + return self.itemtype.get_element_size() + def invalid_dtype(space, w_obj): if space.isinstance_w(w_obj, space.w_str): @@ -183,10 +183,11 @@ return dtype raise invalid_dtype(space, w_obj) - if (space.isinstance_w(w_obj, space.w_str) or + elif (space.isinstance_w(w_obj, space.w_str) or space.isinstance_w(w_obj, space.w_unicode)): typestr = space.str_w(w_obj) + w_base_dtype = None if not typestr: raise invalid_dtype(space, w_obj) @@ -199,13 +200,15 @@ if endian == "|": endian = "=" typestr = typestr[1:] + else: + endian = "=" if not typestr: raise invalid_dtype(space, w_obj) if len(typestr) == 1: try: - return cache.dtypes_by_name[typestr] + w_base_dtype = cache.dtypes_by_name[typestr] except KeyError: raise invalid_dtype(space, w_obj) else: @@ -226,18 +229,38 @@ for dtype in cache.builtin_dtypes: if (dtype.kind == kind and dtype.itemtype.get_element_size() == elsize): - return dtype - raise invalid_dtype(space, w_obj) + w_base_dtype = dtype + break + else: + raise invalid_dtype(space, w_obj) - if 
space.isinstance_w(w_obj, space.w_tuple): + if w_base_dtype is not None: + if elsize is not None: + if endian != "=" and endian != nonnative_byteorder_prefix: + endian = "=" + if (endian != "=" and w_base_dtype.byteorder != "|" and + w_base_dtype.byteorder != endian): + return W_Dtype( + cache.nonnative_dtypes[w_base_dtype], w_base_dtype.num, + w_base_dtype.kind, w_base_dtype.name, w_base_dtype.char, + w_base_dtype.w_box_type, byteorder=endian, + builtin_type=w_base_dtype.builtin_type + ) + else: + return w_base_dtype + + elif space.isinstance_w(w_obj, space.w_tuple): return dtype_from_tuple(space, space.listview(w_obj)) - if space.isinstance_w(w_obj, space.w_list): + elif space.isinstance_w(w_obj, space.w_list): return dtype_from_list(space, space.listview(w_obj)) - if space.isinstance_w(w_obj, space.w_dict): + elif space.isinstance_w(w_obj, space.w_dict): return dtype_from_dict(space, w_obj) + else: + raise invalid_dtype(space, w_obj) + w_type_dict = cache.w_type_dict w_result = None if w_type_dict is not None: @@ -248,13 +271,16 @@ raise if space.isinstance_w(w_obj, space.w_str): w_key = space.call_method(w_obj, "decode", space.wrap("ascii")) - w_result = space.getitem(w_type_dict, w_key) + try: + w_result = space.getitem(w_type_dict, w_key) + except OperationError, e: + if not e.match(space, space.w_KeyError): + raise if w_result is not None: return dtype_from_object(space, w_result) raise invalid_dtype(space, w_obj) - def descr__new__(space, w_subtype, w_dtype): w_dtype = dtype_from_object(space, w_dtype) return w_dtype @@ -472,6 +498,12 @@ for dtype in self.builtin_dtypes: self.dtypes_by_name[dtype.char] = dtype self.dtypes_by_name["p"] = self.w_longdtype + self.nonnative_dtypes = { + self.w_booldtype: types.NonNativeBool(), + self.w_int16dtype: types.NonNativeInt16(), + self.w_int32dtype: types.NonNativeInt32(), + self.w_longdtype: types.NonNativeLong(), + } typeinfo_full = { 'LONGLONG': self.w_int64dtype, diff --git a/pypy/module/micronumpy/interp_iter.py b/pypy/module/micronumpy/interp_iter.py --- a/pypy/module/micronumpy/interp_iter.py +++ b/pypy/module/micronumpy/interp_iter.py @@ -42,24 +42,65 @@ we can go faster. All the calculations happen in next() -next_step_x() tries to do the iteration for a number of steps at once, +next_skip_x() tries to do the iteration for a number of steps at once, but then we cannot gaurentee that we only overflow one single shape dimension, perhaps we could overflow times in one big step. 
""" # structures to describe slicing -class Chunk(object): +class BaseChunk(object): + pass + +class RecordChunk(BaseChunk): + def __init__(self, name): + self.name = name + + def apply(self, arr): + from pypy.module.micronumpy.interp_numarray import W_NDimSlice + + arr = arr.get_concrete() + ofs, subdtype = arr.dtype.fields[self.name] + # strides backstrides are identical, ofs only changes start + return W_NDimSlice(arr.start + ofs, arr.strides[:], arr.backstrides[:], + arr.shape[:], arr, subdtype) + +class Chunks(BaseChunk): + def __init__(self, l): + self.l = l + + @jit.unroll_safe + def extend_shape(self, old_shape): + shape = [] + i = -1 + for i, c in enumerate(self.l): + if c.step != 0: + shape.append(c.lgt) + s = i + 1 + assert s >= 0 + return shape[:] + old_shape[s:] + + def apply(self, arr): + from pypy.module.micronumpy.interp_numarray import W_NDimSlice,\ + VirtualSlice, ConcreteArray + + shape = self.extend_shape(arr.shape) + if not isinstance(arr, ConcreteArray): + return VirtualSlice(arr, self, shape) + r = calculate_slice_strides(arr.shape, arr.start, arr.strides, + arr.backstrides, self.l) + _, start, strides, backstrides = r + return W_NDimSlice(start, strides[:], backstrides[:], + shape[:], arr) + + +class Chunk(BaseChunk): def __init__(self, start, stop, step, lgt): self.start = start self.stop = stop self.step = step self.lgt = lgt - def extend_shape(self, shape): - if self.step != 0: - shape.append(self.lgt) - def __repr__(self): return 'Chunk(%d, %d, %d, %d)' % (self.start, self.stop, self.step, self.lgt) @@ -95,17 +136,19 @@ raise NotImplementedError class ArrayIterator(BaseIterator): - def __init__(self, size): + def __init__(self, size, element_size): self.offset = 0 self.size = size + self.element_size = element_size def next(self, shapelen): return self.next_skip_x(1) - def next_skip_x(self, ofs): + def next_skip_x(self, x): arr = instantiate(ArrayIterator) arr.size = self.size - arr.offset = self.offset + ofs + arr.offset = self.offset + x * self.element_size + arr.element_size = self.element_size return arr def next_no_increase(self, shapelen): @@ -152,7 +195,7 @@ elif isinstance(t, ViewTransform): r = calculate_slice_strides(self.res_shape, self.offset, self.strides, - self.backstrides, t.chunks) + self.backstrides, t.chunks.l) return ViewIterator(r[1], r[2], r[3], r[0]) @jit.unroll_safe diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -4,17 +4,16 @@ from pypy.interpreter.typedef import TypeDef, GetSetProperty from pypy.module.micronumpy import (interp_ufuncs, interp_dtype, interp_boxes, signature, support, loop) -from pypy.module.micronumpy.strides import (calculate_slice_strides, - shape_agreement, find_shape_and_elems, get_shape_from_iterable, - calc_new_strides, to_coords) -from dot import multidim_dot, match_dot_shapes +from pypy.module.micronumpy.appbridge import get_appbridge_cache +from pypy.module.micronumpy.dot import multidim_dot, match_dot_shapes +from pypy.module.micronumpy.interp_iter import (ArrayIterator, + SkipLastAxisIterator, Chunk, ViewIterator, Chunks, RecordChunk) +from pypy.module.micronumpy.strides import (shape_agreement, + find_shape_and_elems, get_shape_from_iterable, calc_new_strides, to_coords) from pypy.rlib import jit +from pypy.rlib.rstring import StringBuilder from pypy.rpython.lltypesystem import lltype, rffi from pypy.tool.sourcetools import func_with_new_name -from 
pypy.rlib.rstring import StringBuilder -from pypy.module.micronumpy.interp_iter import (ArrayIterator, - SkipLastAxisIterator, Chunk, ViewIterator) -from pypy.module.micronumpy.appbridge import get_appbridge_cache count_driver = jit.JitDriver( @@ -47,7 +46,7 @@ ) flat_set_driver = jit.JitDriver( greens=['shapelen', 'base'], - reds=['step', 'ai', 'lngth', 'arr', 'basei'], + reds=['step', 'lngth', 'ri', 'arr', 'basei'], name='numpy_flatset', ) @@ -79,8 +78,8 @@ dtype = space.interp_w(interp_dtype.W_Dtype, space.call_function(space.gettypefor(interp_dtype.W_Dtype), w_dtype) ) - size, shape = _find_size_and_shape(space, w_size) - return space.wrap(W_NDimArray(size, shape[:], dtype=dtype)) + shape = _find_shape(space, w_size) + return space.wrap(W_NDimArray(shape[:], dtype=dtype)) def _unaryop_impl(ufunc_name): def impl(self, space): @@ -101,8 +100,14 @@ descr_sub = _binop_impl("subtract") descr_mul = _binop_impl("multiply") descr_div = _binop_impl("divide") + descr_truediv = _binop_impl("true_divide") + descr_mod = _binop_impl("mod") descr_pow = _binop_impl("power") - descr_mod = _binop_impl("mod") + descr_lshift = _binop_impl("left_shift") + descr_rshift = _binop_impl("right_shift") + descr_and = _binop_impl("bitwise_and") + descr_or = _binop_impl("bitwise_or") + descr_xor = _binop_impl("bitwise_xor") descr_eq = _binop_impl("equal") descr_ne = _binop_impl("not_equal") @@ -111,8 +116,10 @@ descr_gt = _binop_impl("greater") descr_ge = _binop_impl("greater_equal") - descr_and = _binop_impl("bitwise_and") - descr_or = _binop_impl("bitwise_or") + def descr_divmod(self, space, w_other): + w_quotient = self.descr_div(space, w_other) + w_remainder = self.descr_mod(space, w_other) + return space.newtuple([w_quotient, w_remainder]) def _binop_right_impl(ufunc_name): def impl(self, space, w_other): @@ -127,8 +134,19 @@ descr_rsub = _binop_right_impl("subtract") descr_rmul = _binop_right_impl("multiply") descr_rdiv = _binop_right_impl("divide") + descr_rtruediv = _binop_right_impl("true_divide") + descr_rmod = _binop_right_impl("mod") descr_rpow = _binop_right_impl("power") - descr_rmod = _binop_right_impl("mod") + descr_rlshift = _binop_right_impl("left_shift") + descr_rrshift = _binop_right_impl("right_shift") + descr_rand = _binop_right_impl("bitwise_and") + descr_ror = _binop_right_impl("bitwise_or") + descr_rxor = _binop_right_impl("bitwise_xor") + + def descr_rdivmod(self, space, w_other): + w_quotient = self.descr_rdiv(space, w_other) + w_remainder = self.descr_rmod(space, w_other) + return space.newtuple([w_quotient, w_remainder]) def _reduce_ufunc_impl(ufunc_name, promote_to_largest=False): def impl(self, space, w_axis=None): @@ -204,8 +222,7 @@ return scalar_w(space, dtype, space.wrap(0)) # Do the dims match? 
out_shape, other_critical_dim = match_dot_shapes(space, self, other) - out_size = support.product(out_shape) - result = W_NDimArray(out_size, out_shape, dtype) + result = W_NDimArray(out_shape, dtype) # This is the place to add fpypy and blas return multidim_dot(space, self.get_concrete(), other.get_concrete(), result, dtype, @@ -224,7 +241,7 @@ return space.wrap(self.find_dtype().itemtype.get_element_size()) def descr_get_nbytes(self, space): - return space.wrap(self.size * self.find_dtype().itemtype.get_element_size()) + return space.wrap(self.size) @jit.unroll_safe def descr_get_shape(self, space): @@ -232,13 +249,16 @@ def descr_set_shape(self, space, w_iterable): new_shape = get_shape_from_iterable(space, - self.size, w_iterable) + support.product(self.shape), w_iterable) if isinstance(self, Scalar): return self.get_concrete().setshape(space, new_shape) def descr_get_size(self, space): - return space.wrap(self.size) + return space.wrap(self.get_size()) + + def get_size(self): + return self.size // self.find_dtype().get_size() def descr_copy(self, space): return self.copy(space) @@ -258,7 +278,7 @@ def empty_copy(self, space, dtype): shape = self.shape - return W_NDimArray(support.product(shape), shape[:], dtype, 'C') + return W_NDimArray(shape[:], dtype, 'C') def descr_len(self, space): if len(self.shape): @@ -299,6 +319,8 @@ """ The result of getitem/setitem is a single item if w_idx is a list of scalars that match the size of shape """ + if space.isinstance_w(w_idx, space.w_str): + return False shape_len = len(self.shape) if shape_len == 0: raise OperationError(space.w_IndexError, space.wrap( @@ -324,34 +346,41 @@ @jit.unroll_safe def _prepare_slice_args(self, space, w_idx): + if space.isinstance_w(w_idx, space.w_str): + idx = space.str_w(w_idx) + dtype = self.find_dtype() + if not dtype.is_record_type() or idx not in dtype.fields: + raise OperationError(space.w_ValueError, space.wrap( + "field named %s not defined" % idx)) + return RecordChunk(idx) if (space.isinstance_w(w_idx, space.w_int) or space.isinstance_w(w_idx, space.w_slice)): - return [Chunk(*space.decode_index4(w_idx, self.shape[0]))] - return [Chunk(*space.decode_index4(w_item, self.shape[i])) for i, w_item in - enumerate(space.fixedview(w_idx))] + return Chunks([Chunk(*space.decode_index4(w_idx, self.shape[0]))]) + return Chunks([Chunk(*space.decode_index4(w_item, self.shape[i])) for i, w_item in + enumerate(space.fixedview(w_idx))]) - def count_all_true(self, arr): - sig = arr.find_sig() - frame = sig.create_frame(arr) - shapelen = len(arr.shape) + def count_all_true(self): + sig = self.find_sig() + frame = sig.create_frame(self) + shapelen = len(self.shape) s = 0 iter = None while not frame.done(): - count_driver.jit_merge_point(arr=arr, frame=frame, iter=iter, s=s, + count_driver.jit_merge_point(arr=self, frame=frame, iter=iter, s=s, shapelen=shapelen) iter = frame.get_final_iter() - s += arr.dtype.getitem_bool(arr.storage, iter.offset) + s += self.dtype.getitem_bool(self, iter.offset) frame.next(shapelen) return s def getitem_filter(self, space, arr): concr = arr.get_concrete() - if concr.size > self.size: + if concr.get_size() > self.get_size(): raise OperationError(space.w_IndexError, space.wrap("index out of range for array")) - size = self.count_all_true(concr) - res = W_NDimArray(size, [size], self.find_dtype()) - ri = ArrayIterator(size) + size = concr.count_all_true() + res = W_NDimArray([size], self.find_dtype()) + ri = res.create_iter() shapelen = len(self.shape) argi = concr.create_iter() sig = 
self.find_sig() @@ -361,7 +390,7 @@ filter_driver.jit_merge_point(concr=concr, argi=argi, ri=ri, frame=frame, v=v, res=res, sig=sig, shapelen=shapelen, self=self) - if concr.dtype.getitem_bool(concr.storage, argi.offset): + if concr.dtype.getitem_bool(concr, argi.offset): v = sig.eval(frame, self) res.setitem(ri.offset, v) ri = ri.next(1) @@ -371,23 +400,6 @@ frame.next(shapelen) return res - def setitem_filter(self, space, idx, val): - size = self.count_all_true(idx) - arr = SliceArray([size], self.dtype, self, val) - sig = arr.find_sig() - shapelen = len(self.shape) - frame = sig.create_frame(arr) - idxi = idx.create_iter() - while not frame.done(): - filter_set_driver.jit_merge_point(idx=idx, idxi=idxi, sig=sig, - frame=frame, arr=arr, - shapelen=shapelen) - if idx.dtype.getitem_bool(idx.storage, idxi.offset): - sig.eval(frame, arr) - frame.next_from_second(1) - frame.next_first(shapelen) - idxi = idxi.next(shapelen) - def descr_getitem(self, space, w_idx): if (isinstance(w_idx, BaseArray) and w_idx.shape == self.shape and w_idx.find_dtype().is_bool_type()): @@ -397,7 +409,24 @@ item = concrete._index_of_single_item(space, w_idx) return concrete.getitem(item) chunks = self._prepare_slice_args(space, w_idx) - return self.create_slice(chunks) + return chunks.apply(self) + + def setitem_filter(self, space, idx, val): + size = idx.count_all_true() + arr = SliceArray([size], self.dtype, self, val) + sig = arr.find_sig() + shapelen = len(self.shape) + frame = sig.create_frame(arr) + idxi = idx.create_iter() + while not frame.done(): + filter_set_driver.jit_merge_point(idx=idx, idxi=idxi, sig=sig, + frame=frame, arr=arr, + shapelen=shapelen) + if idx.dtype.getitem_bool(idx, idxi.offset): + sig.eval(frame, arr) + frame.next_from_second(1) + frame.next_first(shapelen) + idxi = idxi.next(shapelen) def descr_setitem(self, space, w_idx, w_value): self.invalidated() @@ -415,26 +444,9 @@ if not isinstance(w_value, BaseArray): w_value = convert_to_array(space, w_value) chunks = self._prepare_slice_args(space, w_idx) - view = self.create_slice(chunks).get_concrete() + view = chunks.apply(self).get_concrete() view.setslice(space, w_value) - @jit.unroll_safe - def create_slice(self, chunks): - shape = [] - i = -1 - for i, chunk in enumerate(chunks): - chunk.extend_shape(shape) - s = i + 1 - assert s >= 0 - shape += self.shape[s:] - if not isinstance(self, ConcreteArray): - return VirtualSlice(self, chunks, shape) - r = calculate_slice_strides(self.shape, self.start, self.strides, - self.backstrides, chunks) - _, start, strides, backstrides = r - return W_NDimSlice(start, strides[:], backstrides[:], - shape[:], self) - def descr_reshape(self, space, args_w): """reshape(...) 
a.reshape(shape) @@ -451,12 +463,13 @@ w_shape = args_w[0] else: w_shape = space.newtuple(args_w) - new_shape = get_shape_from_iterable(space, self.size, w_shape) + new_shape = get_shape_from_iterable(space, support.product(self.shape), + w_shape) return self.reshape(space, new_shape) def reshape(self, space, new_shape): concrete = self.get_concrete() - # Since we got to here, prod(new_shape) == self.size + # Since we got to here, prod(new_shape) == self.get_size() new_strides = calc_new_strides(new_shape, concrete.shape, concrete.strides, concrete.order) if new_strides: @@ -487,7 +500,7 @@ def descr_mean(self, space, w_axis=None): if space.is_w(w_axis, space.w_None): w_axis = space.wrap(-1) - w_denom = space.wrap(self.size) + w_denom = space.wrap(support.product(self.shape)) else: dim = space.int_w(w_axis) w_denom = space.wrap(self.shape[dim]) @@ -506,7 +519,7 @@ concr.fill(space, w_value) def descr_nonzero(self, space): - if self.size > 1: + if self.get_size() > 1: raise OperationError(space.w_ValueError, space.wrap( "The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()")) concr = self.get_concrete_or_scalar() @@ -585,8 +598,7 @@ space.wrap("axis unsupported for take")) index_i = index.create_iter() res_shape = index.shape - size = support.product(res_shape) - res = W_NDimArray(size, res_shape[:], concr.dtype, concr.order) + res = W_NDimArray(res_shape[:], concr.dtype, concr.order) res_i = res.create_iter() shapelen = len(index.shape) sig = concr.find_sig() @@ -651,8 +663,7 @@ """ Intermediate class representing a literal. """ - size = 1 - _attrs_ = ["dtype", "value", "shape"] + _attrs_ = ["dtype", "value", "shape", "size"] def __init__(self, dtype, value): self.shape = [] @@ -660,6 +671,7 @@ self.dtype = dtype assert isinstance(value, interp_boxes.W_GenericBox) self.value = value + self.size = dtype.get_size() def find_dtype(self): return self.dtype @@ -677,8 +689,7 @@ return self def reshape(self, space, new_shape): - size = support.product(new_shape) - res = W_NDimArray(size, new_shape, self.dtype, 'C') + res = W_NDimArray(new_shape, self.dtype, 'C') res.setitem(0, self.value) return res @@ -691,6 +702,7 @@ self.forced_result = None self.res_dtype = res_dtype self.name = name + self.size = support.product(self.shape) * res_dtype.get_size() def _del_sources(self): # Function for deleting references to source arrays, @@ -698,7 +710,7 @@ raise NotImplementedError def compute(self): - ra = ResultArray(self, self.size, self.shape, self.res_dtype) + ra = ResultArray(self, self.shape, self.res_dtype) loop.compute(ra) return ra.left @@ -726,7 +738,6 @@ def __init__(self, child, chunks, shape): self.child = child self.chunks = chunks - self.size = support.product(shape) VirtualArray.__init__(self, 'slice', shape, child.find_dtype()) def create_sig(self): @@ -738,7 +749,7 @@ def force_if_needed(self): if self.forced_result is None: concr = self.child.get_concrete() - self.forced_result = concr.create_slice(self.chunks) + self.forced_result = self.chunks.apply(concr) def _del_sources(self): self.child = None @@ -773,7 +784,6 @@ self.left = left self.right = right self.calc_dtype = calc_dtype - self.size = support.product(self.shape) def _del_sources(self): self.left = None @@ -801,9 +811,9 @@ self.left.create_sig(), self.right.create_sig()) class ResultArray(Call2): - def __init__(self, child, size, shape, dtype, res=None, order='C'): + def __init__(self, child, shape, dtype, res=None, order='C'): if res is None: - res = W_NDimArray(size, shape, dtype, 
order) + res = W_NDimArray(shape, dtype, order) Call2.__init__(self, None, 'assign', shape, dtype, dtype, res, child) def create_sig(self): @@ -817,7 +827,7 @@ self.s = StringBuilder(child.size * self.itemsize) Call1.__init__(self, None, 'tostring', child.shape, dtype, dtype, child) - self.res = W_NDimArray(1, [1], dtype, 'C') + self.res = W_NDimArray([1], dtype, 'C') self.res_casted = rffi.cast(rffi.CArrayPtr(lltype.Char), self.res.storage) @@ -898,13 +908,13 @@ """ _immutable_fields_ = ['storage'] - def __init__(self, size, shape, dtype, order='C', parent=None): - self.size = size + def __init__(self, shape, dtype, order='C', parent=None): self.parent = parent + self.size = support.product(shape) * dtype.get_size() if parent is not None: self.storage = parent.storage else: - self.storage = dtype.malloc(size) + self.storage = dtype.itemtype.malloc(self.size) self.order = order self.dtype = dtype if self.strides is None: @@ -923,13 +933,14 @@ return self.dtype def getitem(self, item): - return self.dtype.getitem(self.storage, item) + return self.dtype.getitem(self, item) def setitem(self, item, value): self.invalidated() - self.dtype.setitem(self.storage, item, value) + self.dtype.setitem(self, item, value) def calc_strides(self, shape): + dtype = self.find_dtype() strides = [] backstrides = [] s = 1 @@ -937,8 +948,8 @@ if self.order == 'C': shape_rev.reverse() for sh in shape_rev: - strides.append(s) - backstrides.append(s * (sh - 1)) + strides.append(s * dtype.get_size()) + backstrides.append(s * (sh - 1) * dtype.get_size()) s *= sh if self.order == 'C': strides.reverse() @@ -986,9 +997,9 @@ shapelen = len(self.shape) if shapelen == 1: rffi.c_memcpy( - rffi.ptradd(self.storage, self.start * itemsize), - rffi.ptradd(w_value.storage, w_value.start * itemsize), - self.size * itemsize + rffi.ptradd(self.storage, self.start), + rffi.ptradd(w_value.storage, w_value.start), + self.size ) else: dest = SkipLastAxisIterator(self) @@ -1003,7 +1014,7 @@ dest.next() def copy(self, space): - array = W_NDimArray(self.size, self.shape[:], self.dtype, self.order) + array = W_NDimArray(self.shape[:], self.dtype, self.order) array.setslice(space, self) return array @@ -1017,14 +1028,15 @@ class W_NDimSlice(ViewArray): - def __init__(self, start, strides, backstrides, shape, parent): + def __init__(self, start, strides, backstrides, shape, parent, dtype=None): assert isinstance(parent, ConcreteArray) if isinstance(parent, W_NDimSlice): parent = parent.parent self.strides = strides self.backstrides = backstrides - ViewArray.__init__(self, support.product(shape), shape, parent.dtype, - parent.order, parent) + if dtype is None: + dtype = parent.dtype + ViewArray.__init__(self, shape, dtype, parent.order, parent) self.start = start def create_iter(self, transforms=None): @@ -1039,12 +1051,13 @@ # but then calc_strides would have to accept a stepping factor strides = [] backstrides = [] - s = self.strides[0] + dtype = self.find_dtype() + s = self.strides[0] // dtype.get_size() if self.order == 'C': new_shape.reverse() for sh in new_shape: - strides.append(s) - backstrides.append(s * (sh - 1)) + strides.append(s * dtype.get_size()) + backstrides.append(s * (sh - 1) * dtype.get_size()) s *= sh if self.order == 'C': strides.reverse() @@ -1072,14 +1085,16 @@ """ def setitem(self, item, value): self.invalidated() - self.dtype.setitem(self.storage, item, value) + self.dtype.setitem(self, item, value) def setshape(self, space, new_shape): self.shape = new_shape self.calc_strides(new_shape) def create_iter(self, 
transforms=None): - return ArrayIterator(self.size).apply_transformations(self, transforms) + esize = self.find_dtype().get_size() + return ArrayIterator(self.size, esize).apply_transformations(self, + transforms) def create_sig(self): return signature.ArraySignature(self.dtype) @@ -1087,18 +1102,13 @@ def __del__(self): lltype.free(self.storage, flavor='raw', track_allocation=False) -def _find_size_and_shape(space, w_size): +def _find_shape(space, w_size): if space.isinstance_w(w_size, space.w_int): - size = space.int_w(w_size) - shape = [size] - else: - size = 1 - shape = [] - for w_item in space.fixedview(w_size): - item = space.int_w(w_item) - size *= item - shape.append(item) - return size, shape + return [space.int_w(w_size)] + shape = [] + for w_item in space.fixedview(w_size): + shape.append(space.int_w(w_item)) + return shape @unwrap_spec(subok=bool, copy=bool, ownmaskna=bool) def array(space, w_item_or_iterable, w_dtype=None, w_order=None, @@ -1132,28 +1142,28 @@ if copy: return w_item_or_iterable.copy(space) return w_item_or_iterable - shape, elems_w = find_shape_and_elems(space, w_item_or_iterable) + if w_dtype is None or space.is_w(w_dtype, space.w_None): + dtype = None + else: + dtype = space.interp_w(interp_dtype.W_Dtype, + space.call_function(space.gettypefor(interp_dtype.W_Dtype), w_dtype)) + shape, elems_w = find_shape_and_elems(space, w_item_or_iterable, dtype) # they come back in C order - size = len(elems_w) - if w_dtype is None or space.is_w(w_dtype, space.w_None): - w_dtype = None + if dtype is None: for w_elem in elems_w: - w_dtype = interp_ufuncs.find_dtype_for_scalar(space, w_elem, - w_dtype) - if w_dtype is interp_dtype.get_dtype_cache(space).w_float64dtype: + dtype = interp_ufuncs.find_dtype_for_scalar(space, w_elem, + dtype) + if dtype is interp_dtype.get_dtype_cache(space).w_float64dtype: break - if w_dtype is None: - w_dtype = space.w_None - dtype = space.interp_w(interp_dtype.W_Dtype, - space.call_function(space.gettypefor(interp_dtype.W_Dtype), w_dtype) - ) - arr = W_NDimArray(size, shape[:], dtype=dtype, order=order) + if dtype is None: + dtype = interp_dtype.get_dtype_cache(space).w_float64dtype + arr = W_NDimArray(shape[:], dtype=dtype, order=order) shapelen = len(shape) - arr_iter = ArrayIterator(arr.size) + arr_iter = arr.create_iter() # XXX we might want to have a jitdriver here for i in range(len(elems_w)): w_elem = elems_w[i] - dtype.setitem(arr.storage, arr_iter.offset, + dtype.setitem(arr, arr_iter.offset, dtype.coerce(space, w_elem)) arr_iter = arr_iter.next(shapelen) return arr @@ -1162,22 +1172,22 @@ dtype = space.interp_w(interp_dtype.W_Dtype, space.call_function(space.gettypefor(interp_dtype.W_Dtype), w_dtype) ) - size, shape = _find_size_and_shape(space, w_size) + shape = _find_shape(space, w_size) if not shape: return scalar_w(space, dtype, space.wrap(0)) - return space.wrap(W_NDimArray(size, shape[:], dtype=dtype)) + return space.wrap(W_NDimArray(shape[:], dtype=dtype)) def ones(space, w_size, w_dtype=None): dtype = space.interp_w(interp_dtype.W_Dtype, space.call_function(space.gettypefor(interp_dtype.W_Dtype), w_dtype) ) - size, shape = _find_size_and_shape(space, w_size) + shape = _find_shape(space, w_size) if not shape: return scalar_w(space, dtype, space.wrap(1)) - arr = W_NDimArray(size, shape[:], dtype=dtype) + arr = W_NDimArray(shape[:], dtype=dtype) one = dtype.box(1) - arr.dtype.fill(arr.storage, one, 0, size) + arr.dtype.fill(arr.storage, one, 0, arr.size) return space.wrap(arr) @unwrap_spec(arr=BaseArray, skipna=bool, 
keepdims=bool) @@ -1225,13 +1235,13 @@ "array dimensions must agree except for axis being concatenated")) elif i == axis: shape[i] += axis_size - res = W_NDimArray(support.product(shape), shape, dtype, 'C') + res = W_NDimArray(shape, dtype, 'C') chunks = [Chunk(0, i, 1, i) for i in shape] axis_start = 0 for arr in args_w: chunks[axis] = Chunk(axis_start, axis_start + arr.shape[axis], 1, arr.shape[axis]) - res.create_slice(chunks).setslice(space, arr) + Chunks(chunks).apply(res).setslice(space, arr) axis_start += arr.shape[axis] return res @@ -1247,21 +1257,36 @@ __pos__ = interp2app(BaseArray.descr_pos), __neg__ = interp2app(BaseArray.descr_neg), __abs__ = interp2app(BaseArray.descr_abs), + __invert__ = interp2app(BaseArray.descr_invert), __nonzero__ = interp2app(BaseArray.descr_nonzero), __add__ = interp2app(BaseArray.descr_add), __sub__ = interp2app(BaseArray.descr_sub), __mul__ = interp2app(BaseArray.descr_mul), __div__ = interp2app(BaseArray.descr_div), + __truediv__ = interp2app(BaseArray.descr_truediv), + __mod__ = interp2app(BaseArray.descr_mod), + __divmod__ = interp2app(BaseArray.descr_divmod), __pow__ = interp2app(BaseArray.descr_pow), - __mod__ = interp2app(BaseArray.descr_mod), + __lshift__ = interp2app(BaseArray.descr_lshift), + __rshift__ = interp2app(BaseArray.descr_rshift), + __and__ = interp2app(BaseArray.descr_and), + __or__ = interp2app(BaseArray.descr_or), + __xor__ = interp2app(BaseArray.descr_xor), __radd__ = interp2app(BaseArray.descr_radd), __rsub__ = interp2app(BaseArray.descr_rsub), __rmul__ = interp2app(BaseArray.descr_rmul), __rdiv__ = interp2app(BaseArray.descr_rdiv), + __rtruediv__ = interp2app(BaseArray.descr_rtruediv), + __rmod__ = interp2app(BaseArray.descr_rmod), + __rdivmod__ = interp2app(BaseArray.descr_rdivmod), __rpow__ = interp2app(BaseArray.descr_rpow), - __rmod__ = interp2app(BaseArray.descr_rmod), + __rlshift__ = interp2app(BaseArray.descr_rlshift), + __rrshift__ = interp2app(BaseArray.descr_rrshift), + __rand__ = interp2app(BaseArray.descr_rand), + __ror__ = interp2app(BaseArray.descr_ror), + __rxor__ = interp2app(BaseArray.descr_rxor), __eq__ = interp2app(BaseArray.descr_eq), __ne__ = interp2app(BaseArray.descr_ne), @@ -1270,10 +1295,6 @@ __gt__ = interp2app(BaseArray.descr_gt), __ge__ = interp2app(BaseArray.descr_ge), - __and__ = interp2app(BaseArray.descr_and), - __or__ = interp2app(BaseArray.descr_or), - __invert__ = interp2app(BaseArray.descr_invert), - __repr__ = interp2app(BaseArray.descr_repr), __str__ = interp2app(BaseArray.descr_str), __array_interface__ = GetSetProperty(BaseArray.descr_array_iface), @@ -1287,6 +1308,7 @@ nbytes = GetSetProperty(BaseArray.descr_get_nbytes), T = GetSetProperty(BaseArray.descr_get_transpose), + transpose = interp2app(BaseArray.descr_get_transpose), flat = GetSetProperty(BaseArray.descr_get_flatiter), ravel = interp2app(BaseArray.descr_ravel), item = interp2app(BaseArray.descr_item), @@ -1328,7 +1350,7 @@ self.iter = sig.create_frame(arr).get_final_iter() self.base = arr self.index = 0 - ViewArray.__init__(self, arr.size, [arr.size], arr.dtype, arr.order, + ViewArray.__init__(self, [arr.get_size()], arr.dtype, arr.order, arr) def descr_next(self, space): @@ -1343,7 +1365,7 @@ return self def descr_len(self, space): - return space.wrap(self.size) + return space.wrap(self.get_size()) def descr_index(self, space): return space.wrap(self.index) @@ -1361,28 +1383,26 @@ raise OperationError(space.w_IndexError, space.wrap('unsupported iterator index')) base = self.base - start, stop, step, lngth = 
space.decode_index4(w_idx, base.size) + start, stop, step, lngth = space.decode_index4(w_idx, base.get_size()) # setslice would have been better, but flat[u:v] for arbitrary # shapes of array a cannot be represented as a[x1:x2, y1:y2] basei = ViewIterator(base.start, base.strides, - base.backstrides,base.shape) + base.backstrides, base.shape) shapelen = len(base.shape) basei = basei.next_skip_x(shapelen, start) if lngth <2: return base.getitem(basei.offset) - ri = ArrayIterator(lngth) - res = W_NDimArray(lngth, [lngth], base.dtype, - base.order) + res = W_NDimArray([lngth], base.dtype, base.order) + ri = res.create_iter() while not ri.done(): flat_get_driver.jit_merge_point(shapelen=shapelen, base=base, basei=basei, step=step, res=res, - ri=ri, - ) + ri=ri) w_val = base.getitem(basei.offset) - res.setitem(ri.offset,w_val) + res.setitem(ri.offset, w_val) basei = basei.next_skip_x(shapelen, step) ri = ri.next(shapelen) return res @@ -1393,27 +1413,28 @@ raise OperationError(space.w_IndexError, space.wrap('unsupported iterator index')) base = self.base - start, stop, step, lngth = space.decode_index4(w_idx, base.size) + start, stop, step, lngth = space.decode_index4(w_idx, base.get_size()) arr = convert_to_array(space, w_value) - ai = 0 + ri = arr.create_iter() basei = ViewIterator(base.start, base.strides, - base.backstrides,base.shape) + base.backstrides, base.shape) shapelen = len(base.shape) basei = basei.next_skip_x(shapelen, start) while lngth > 0: flat_set_driver.jit_merge_point(shapelen=shapelen, - basei=basei, - base=base, - step=step, - arr=arr, - ai=ai, - lngth=lngth, - ) - v = arr.getitem(ai).convert_to(base.dtype) + basei=basei, + base=base, + step=step, + arr=arr, + lngth=lngth, + ri=ri) + v = arr.getitem(ri.offset).convert_to(base.dtype) base.setitem(basei.offset, v) # need to repeat input values until all assignments are done - ai = (ai + 1) % arr.size basei = basei.next_skip_x(shapelen, step) + ri = ri.next(shapelen) + # WTF is numpy thinking? 
+ ri.offset %= arr.size lngth -= 1 def create_sig(self): @@ -1421,9 +1442,9 @@ def create_iter(self, transforms=None): return ViewIterator(self.base.start, self.base.strides, - self.base.backstrides, - self.base.shape).apply_transformations(self.base, - transforms) + self.base.backstrides, + self.base.shape).apply_transformations(self.base, + transforms) def descr_base(self, space): return space.wrap(self.base) diff --git a/pypy/module/micronumpy/interp_support.py b/pypy/module/micronumpy/interp_support.py --- a/pypy/module/micronumpy/interp_support.py +++ b/pypy/module/micronumpy/interp_support.py @@ -51,9 +51,11 @@ raise OperationError(space.w_ValueError, space.wrap( "string is smaller than requested size")) - a = W_NDimArray(num_items, [num_items], dtype=dtype) - for i, val in enumerate(items): - a.dtype.setitem(a.storage, i, val) + a = W_NDimArray([num_items], dtype=dtype) + ai = a.create_iter() + for val in items: + a.dtype.setitem(a, ai.offset, val) + ai = ai.next(1) return space.wrap(a) @@ -61,6 +63,7 @@ from pypy.module.micronumpy.interp_numarray import W_NDimArray itemsize = dtype.itemtype.get_element_size() + assert itemsize >= 0 if count == -1: count = length / itemsize if length % itemsize != 0: @@ -71,10 +74,14 @@ raise OperationError(space.w_ValueError, space.wrap( "string is smaller than requested size")) - a = W_NDimArray(count, [count], dtype=dtype) + a = W_NDimArray([count], dtype=dtype) + ai = a.create_iter() for i in range(count): - val = dtype.itemtype.runpack_str(s[i*itemsize:i*itemsize + itemsize]) - a.dtype.setitem(a.storage, i, val) + start = i*itemsize + assert start >= 0 + val = dtype.itemtype.runpack_str(s[start:start + itemsize]) + a.dtype.setitem(a, ai.offset, val) + ai = ai.next(1) return space.wrap(a) diff --git a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py --- a/pypy/module/micronumpy/interp_ufuncs.py +++ b/pypy/module/micronumpy/interp_ufuncs.py @@ -156,7 +156,7 @@ shape = obj.shape[:dim] + [1] + obj.shape[dim + 1:] else: shape = obj.shape[:dim] + obj.shape[dim + 1:] - result = W_NDimArray(support.product(shape), shape, dtype) + result = W_NDimArray(shape, dtype) arr = AxisReduce(self.func, self.name, self.identity, obj.shape, dtype, result, obj, dim) loop.compute(arr) @@ -383,14 +383,17 @@ ("subtract", "sub", 2), ("multiply", "mul", 2, {"identity": 1}), ("bitwise_and", "bitwise_and", 2, {"identity": 1, - 'int_only': True}), + "int_only": True}), ("bitwise_or", "bitwise_or", 2, {"identity": 0, - 'int_only': True}), + "int_only": True}), + ("bitwise_xor", "bitwise_xor", 2, {"int_only": True}), ("invert", "invert", 1, {"int_only": True}), ("divide", "div", 2, {"promote_bools": True}), ("true_divide", "div", 2, {"promote_to_float": True}), ("mod", "mod", 2, {"promote_bools": True}), ("power", "pow", 2, {"promote_bools": True}), + ("left_shift", "lshift", 2, {"int_only": True}), + ("right_shift", "rshift", 2, {"int_only": True}), ("equal", "eq", 2, {"comparison_func": True}), ("not_equal", "ne", 2, {"comparison_func": True}), diff --git a/pypy/module/micronumpy/signature.py b/pypy/module/micronumpy/signature.py --- a/pypy/module/micronumpy/signature.py +++ b/pypy/module/micronumpy/signature.py @@ -142,11 +142,10 @@ from pypy.module.micronumpy.interp_numarray import ConcreteArray concr = arr.get_concrete() assert isinstance(concr, ConcreteArray) - storage = concr.storage if self.iter_no >= len(iterlist): iterlist.append(concr.create_iter(transforms)) if self.array_no >= len(arraylist): - arraylist.append(storage) + 
arraylist.append(concr) def eval(self, frame, arr): iter = frame.iterators[self.iter_no] diff --git a/pypy/module/micronumpy/strides.py b/pypy/module/micronumpy/strides.py --- a/pypy/module/micronumpy/strides.py +++ b/pypy/module/micronumpy/strides.py @@ -38,22 +38,31 @@ rbackstrides = [0] * (len(res_shape) - len(orig_shape)) + rbackstrides return rstrides, rbackstrides -def find_shape_and_elems(space, w_iterable): +def is_single_elem(space, w_elem, is_rec_type): + if (is_rec_type and space.isinstance_w(w_elem, space.w_tuple)): + return True + if space.issequence_w(w_elem): + return False + return True + +def find_shape_and_elems(space, w_iterable, dtype): shape = [space.len_w(w_iterable)] batch = space.listview(w_iterable) + is_rec_type = dtype is not None and dtype.is_record_type() while True: new_batch = [] if not batch: return shape, [] - if not space.issequence_w(batch[0]): - for elem in batch: - if space.issequence_w(elem): + if is_single_elem(space, batch[0], is_rec_type): + for w_elem in batch: + if not is_single_elem(space, w_elem, is_rec_type): raise OperationError(space.w_ValueError, space.wrap( "setting an array element with a sequence")) return shape, batch size = space.len_w(batch[0]) for w_elem in batch: - if not space.issequence_w(w_elem) or space.len_w(w_elem) != size: + if (is_single_elem(space, w_elem, is_rec_type) or + space.len_w(w_elem) != size): raise OperationError(space.w_ValueError, space.wrap( "setting an array element with a sequence")) new_batch += space.listview(w_elem) diff --git a/pypy/module/micronumpy/test/test_base.py b/pypy/module/micronumpy/test/test_base.py --- a/pypy/module/micronumpy/test/test_base.py +++ b/pypy/module/micronumpy/test/test_base.py @@ -21,8 +21,8 @@ float64_dtype = get_dtype_cache(space).w_float64dtype bool_dtype = get_dtype_cache(space).w_booldtype - ar = W_NDimArray(10, [10], dtype=float64_dtype) - ar2 = W_NDimArray(10, [10], dtype=float64_dtype) + ar = W_NDimArray([10], dtype=float64_dtype) + ar2 = W_NDimArray([10], dtype=float64_dtype) v1 = ar.descr_add(space, ar) v2 = ar.descr_add(space, Scalar(float64_dtype, W_Float64Box(2.0))) sig1 = v1.find_sig() @@ -40,7 +40,7 @@ v4 = ar.descr_add(space, ar) assert v1.find_sig() is v4.find_sig() - bool_ar = W_NDimArray(10, [10], dtype=bool_dtype) + bool_ar = W_NDimArray([10], dtype=bool_dtype) v5 = ar.descr_add(space, bool_ar) assert v5.find_sig() is not v1.find_sig() assert v5.find_sig() is not v2.find_sig() @@ -57,7 +57,7 @@ def test_slice_signature(self, space): float64_dtype = get_dtype_cache(space).w_float64dtype - ar = W_NDimArray(10, [10], dtype=float64_dtype) + ar = W_NDimArray([10], dtype=float64_dtype) v1 = ar.descr_getitem(space, space.wrap(slice(1, 3, 1))) v2 = ar.descr_getitem(space, space.wrap(slice(4, 6, 1))) assert v1.find_sig() is v2.find_sig() diff --git a/pypy/module/micronumpy/test/test_dtypes.py b/pypy/module/micronumpy/test/test_dtypes.py --- a/pypy/module/micronumpy/test/test_dtypes.py +++ b/pypy/module/micronumpy/test/test_dtypes.py @@ -445,6 +445,32 @@ numpy.generic, object) assert numpy.bool_.__mro__ == (numpy.bool_, numpy.generic, object) + def test_operators(self): + from operator import truediv + from _numpypy import float64, int_, True_, False_ + assert 5 / int_(2) == int_(2) + assert truediv(int_(3), int_(2)) == float64(1.5) + assert truediv(3, int_(2)) == float64(1.5) + assert int_(8) % int_(3) == int_(2) + assert 8 % int_(3) == int_(2) + assert divmod(int_(8), int_(3)) == (int_(2), int_(2)) + assert divmod(8, int_(3)) == (int_(2), int_(2)) + assert 2 ** 
int_(3) == int_(8) + assert int_(3) << int_(2) == int_(12) + assert 3 << int_(2) == int_(12) + assert int_(8) >> int_(2) == int_(2) + assert 8 >> int_(2) == int_(2) + assert int_(3) & int_(1) == int_(1) + assert 2 & int_(3) == int_(2) + assert int_(2) | int_(1) == int_(3) + assert 2 | int_(1) == int_(3) + assert int_(3) ^ int_(5) == int_(6) + assert True_ ^ False_ is True_ + assert 5 ^ int_(3) == int_(6) + assert +int_(3) == int_(3) + assert ~int_(3) == int_(-4) + raises(TypeError, lambda: float64(3) & 1) + def test_alternate_constructs(self): from _numpypy import dtype assert dtype('i8') == dtype('> 2 == [0, 0, 0, 0, 1, 1, 1, 1, 2, 2]).all() + a = array([True, False]) + assert (a >> 1 == [0, 0]).all() + a = arange(3, dtype=float) + raises(TypeError, lambda: a >> 1) + + def test_rrshift(self): + from _numpypy import arange + + a = arange(5) + assert (2 >> a == [2, 1, 0, 0, 0]).all() + def test_pow(self): from _numpypy import array a = array(range(5), float) @@ -678,6 +739,30 @@ for i in range(5): assert b[i] == i % 2 + def test_rand(self): + from _numpypy import arange + + a = arange(5) + assert (3 & a == [0, 1, 2, 3, 0]).all() + + def test_ror(self): + from _numpypy import arange + + a = arange(5) + assert (3 | a == [3, 3, 3, 3, 7]).all() + + def test_xor(self): + from _numpypy import arange + + a = arange(5) + assert (a ^ 3 == [3, 2, 1, 0, 7]).all() + + def test_rxor(self): + from _numpypy import arange + + a = arange(5) + assert (3 ^ a == [3, 2, 1, 0, 7]).all() + def test_pos(self): from _numpypy import array a = array([1., -2., 3., -4., -5.]) @@ -1410,6 +1495,7 @@ a = array((range(10), range(20, 30))) b = a.T assert(b[:, 0] == a[0, :]).all() + assert (a.transpose() == b).all() def test_flatiter(self): from _numpypy import array, flatiter, arange @@ -1466,6 +1552,7 @@ a = arange(12).reshape(3,4) b = a.T.flat b[6::2] = [-1, -2] + print a == [[0, 1, -1, 3], [4, 5, 6, -1], [8, 9, -2, 11]] assert (a == [[0, 1, -1, 3], [4, 5, 6, -1], [8, 9, -2, 11]]).all() b[0:2] = [[[100]]] assert(a[0,0] == 100) @@ -1791,3 +1878,44 @@ cache = get_appbridge_cache(cls.space) cache.w_array_repr = cls.old_array_repr cache.w_array_str = cls.old_array_str + +class AppTestRecordDtype(BaseNumpyAppTest): + def test_zeros(self): + from _numpypy import zeros + a = zeros(2, dtype=[('x', int), ('y', float)]) + raises(IndexError, 'a[0]["xyz"]') + assert a[0]['x'] == 0 + assert a[0]['y'] == 0 + raises(ValueError, "a[0] = (1, 2, 3)") + a[0]['x'] = 13 + assert a[0]['x'] == 13 + a[1] = (1, 2) + assert a[1]['y'] == 2 + b = zeros(2, dtype=[('x', int), ('y', float)]) + b[1] = a[1] + assert a[1]['y'] == 2 + + def test_views(self): + from _numpypy import array + a = array([(1, 2), (3, 4)], dtype=[('x', int), ('y', float)]) + raises(ValueError, 'array([1])["x"]') + raises(ValueError, 'a["z"]') + assert a['x'][1] == 3 + assert a['y'][1] == 4 + a['x'][0] = 15 + assert a['x'][0] == 15 + b = a['x'] + a['y'] + assert (b == [15+2, 3+4]).all() + assert b.dtype == float + + def test_assign_tuple(self): + from _numpypy import zeros + a = zeros((2, 3), dtype=[('x', int), ('y', float)]) + a[1, 2] = (1, 2) + assert a['x'][1, 2] == 1 + assert a['y'][1, 2] == 2 + + def test_creation_and_repr(self): + from _numpypy import array + a = array([(1, 2), (3, 4)], dtype=[('x', int), ('y', float)]) + assert repr(a[0]) == '(1, 2.0)' diff --git a/pypy/module/micronumpy/test/test_ufuncs.py b/pypy/module/micronumpy/test/test_ufuncs.py --- a/pypy/module/micronumpy/test/test_ufuncs.py +++ b/pypy/module/micronumpy/test/test_ufuncs.py @@ -312,9 +312,9 @@ def 
test_arcsinh(self): import math - from _numpypy import arcsinh, inf + from _numpypy import arcsinh - for v in [inf, -inf, 1.0, math.e]: + for v in [float('inf'), float('-inf'), 1.0, math.e]: assert math.asinh(v) == arcsinh(v) assert math.isnan(arcsinh(float("nan"))) @@ -367,15 +367,15 @@ b = add.reduce(a, 0, keepdims=True) assert b.shape == (1, 4) assert (add.reduce(a, 0, keepdims=True) == [12, 15, 18, 21]).all() - def test_bitwise(self): - from _numpypy import bitwise_and, bitwise_or, arange, array + from _numpypy import bitwise_and, bitwise_or, bitwise_xor, arange, array a = arange(6).reshape(2, 3) assert (a & 1 == [[0, 1, 0], [1, 0, 1]]).all() assert (a & 1 == bitwise_and(a, 1)).all() assert (a | 1 == [[1, 1, 3], [3, 5, 5]]).all() assert (a | 1 == bitwise_or(a, 1)).all() + assert (a ^ 3 == bitwise_xor(a, 3)).all() raises(TypeError, 'array([1.0]) & 1') def test_unary_bitops(self): @@ -416,7 +416,7 @@ assert count_reduce_items(a) == 24 assert count_reduce_items(a, 1) == 3 assert count_reduce_items(a, (1, 2)) == 3 * 4 - + def test_true_divide(self): from _numpypy import arange, array, true_divide assert (true_divide(arange(3), array([2, 2, 2])) == array([0, 0.5, 1])).all() diff --git a/pypy/module/micronumpy/test/test_zjit.py b/pypy/module/micronumpy/test/test_zjit.py --- a/pypy/module/micronumpy/test/test_zjit.py +++ b/pypy/module/micronumpy/test/test_zjit.py @@ -503,7 +503,7 @@ dtype = float64_dtype else: dtype = int32_dtype - ar = W_NDimArray(n, [n], dtype=dtype) + ar = W_NDimArray([n], dtype=dtype) i = 0 while i < n: ar.get_concrete().setitem(i, int32_dtype.box(7)) diff --git a/pypy/module/micronumpy/types.py b/pypy/module/micronumpy/types.py --- a/pypy/module/micronumpy/types.py +++ b/pypy/module/micronumpy/types.py @@ -6,11 +6,15 @@ from pypy.module.micronumpy import interp_boxes from pypy.objspace.std.floatobject import float2string from pypy.rlib import rfloat, libffi, clibffi -from pypy.rlib.objectmodel import specialize +from pypy.rlib.objectmodel import specialize, we_are_translated from pypy.rlib.rarithmetic import widen, byteswap from pypy.rpython.lltypesystem import lltype, rffi from pypy.rlib.rstruct.runpack import runpack from pypy.tool.sourcetools import func_with_new_name +from pypy.rlib import jit + +VOID_STORAGE = lltype.Array(lltype.Char, hints={'nolength': True, + 'render_as_void': True}) def simple_unary_op(func): specialize.argtype(1)(func) @@ -60,10 +64,15 @@ class BaseType(object): def _unimplemented_ufunc(self, *args): raise NotImplementedError - # add = sub = mul = div = mod = pow = eq = ne = lt = le = gt = ge = max = \ - # min = copysign = pos = neg = abs = sign = reciprocal = fabs = floor = \ - # exp = sin = cos = tan = arcsin = arccos = arctan = arcsinh = \ - # arctanh = _unimplemented_ufunc + + def malloc(self, size): + # XXX find out why test_zjit explodes with tracking of allocations + return lltype.malloc(VOID_STORAGE, size, + zero=True, flavor="raw", + track_allocation=False, add_memory_pressure=True) + + def __repr__(self): + return self.__class__.__name__ class Primitive(object): _mixin_ = True @@ -79,7 +88,7 @@ assert isinstance(box, self.BoxType) return box.value - def coerce(self, space, w_item): + def coerce(self, space, dtype, w_item): if isinstance(w_item, self.BoxType): return w_item return self.coerce_subtype(space, space.gettypefor(self.BoxType), w_item) @@ -101,27 +110,33 @@ raise NotImplementedError def _read(self, storage, width, i, offset): - return libffi.array_getitem(clibffi.cast_type_to_ffitype(self.T), - width, storage, i, offset) + 
if we_are_translated(): + return libffi.array_getitem(clibffi.cast_type_to_ffitype(self.T), + width, storage, i, offset) + else: + return libffi.array_getitem_T(self.T, width, storage, i, offset) - def read(self, storage, width, i, offset): - return self.box(self._read(storage, width, i, offset)) + def read(self, arr, width, i, offset): + return self.box(self._read(arr.storage, width, i, offset)) - def read_bool(self, storage, width, i, offset): - return bool(self.for_computation(self._read(storage, width, i, offset))) + def read_bool(self, arr, width, i, offset): + return bool(self.for_computation(self._read(arr.storage, width, i, offset))) def _write(self, storage, width, i, offset, value): - libffi.array_setitem(clibffi.cast_type_to_ffitype(self.T), - width, storage, i, offset, value) + if we_are_translated(): + libffi.array_setitem(clibffi.cast_type_to_ffitype(self.T), + width, storage, i, offset, value) + else: + libffi.array_setitem_T(self.T, width, storage, i, offset, value) - def store(self, storage, width, i, offset, box): - self._write(storage, width, i, offset, self.unbox(box)) + def store(self, arr, width, i, offset, box): + self._write(arr.storage, width, i, offset, self.unbox(box)) def fill(self, storage, width, box, start, stop, offset): value = self.unbox(box) - for i in xrange(start, stop): - self._write(storage, width, i, offset, value) + for i in xrange(start, stop, width): + self._write(storage, 1, i, offset, value) def runpack_str(self, s): return self.box(runpack(self.format_code, s)) @@ -251,8 +266,7 @@ return space.wrap(self.unbox(w_item)) def str_format(self, box): - value = self.unbox(box) - return "True" if value else "False" + return "True" if self.unbox(box) else "False" def for_computation(self, v): return int(v) @@ -268,6 +282,10 @@ def bitwise_or(self, v1, v2): return v1 | v2 + @simple_binary_op + def bitwise_xor(self, v1, v2): + return v1 ^ v2 + @simple_unary_op def invert(self, v): return ~v @@ -283,8 +301,7 @@ return self._base_coerce(space, w_item) def str_format(self, box): - value = self.unbox(box) - return str(self.for_computation(value)) + return str(self.for_computation(self.unbox(box))) def for_computation(self, v): return widen(v) @@ -314,6 +331,14 @@ v1 *= v1 return res + @simple_binary_op + def lshift(self, v1, v2): + return v1 << v2 + + @simple_binary_op + def rshift(self, v1, v2): + return v1 >> v2 + @simple_unary_op def sign(self, v): if v > 0: @@ -332,6 +357,10 @@ def bitwise_or(self, v1, v2): return v1 | v2 + @simple_binary_op + def bitwise_xor(self, v1, v2): + return v1 ^ v2 + @simple_unary_op def invert(self, v): return ~v @@ -455,8 +484,8 @@ return self.box(space.float_w(space.call_function(space.w_float, w_item))) def str_format(self, box): - value = self.unbox(box) - return float2string(self.for_computation(value), "g", rfloat.DTSF_STR_PRECISION) + return float2string(self.for_computation(self.unbox(box)), "g", + rfloat.DTSF_STR_PRECISION) def for_computation(self, v): return float(v) @@ -595,10 +624,9 @@ format_code = "d" class CompositeType(BaseType): - def __init__(self, offsets_and_types): - self.offsets_and_types = offsets_and_types - last_item = offsets_and_types[-1] - self.size = last_item[0] + last_item[1].get_element_size() + def __init__(self, offsets_and_fields, size): + self.offsets_and_fields = offsets_and_fields + self.size = size def get_element_size(self): return self.size @@ -614,7 +642,10 @@ class StringType(BaseType, BaseStringType): T = lltype.Char -VoidType = StringType # why not? 
+ +class VoidType(BaseType, BaseStringType): + T = lltype.Char + NonNativeVoidType = VoidType NonNativeStringType = StringType @@ -624,7 +655,56 @@ NonNativeUnicodeType = UnicodeType class RecordType(CompositeType): - pass + T = lltype.Char + + def read(self, arr, width, i, offset): + return interp_boxes.W_VoidBox(arr, i) + + @jit.unroll_safe + def coerce(self, space, dtype, w_item): + from pypy.module.micronumpy.interp_numarray import W_NDimArray + + if isinstance(w_item, interp_boxes.W_VoidBox): + return w_item + # we treat every sequence as sequence, no special support + # for arrays + if not space.issequence_w(w_item): + raise OperationError(space.w_TypeError, space.wrap( + "expected sequence")) + if len(self.offsets_and_fields) != space.int_w(space.len(w_item)): + raise OperationError(space.w_ValueError, space.wrap( + "wrong length")) + items_w = space.fixedview(w_item) + # XXX optimize it out one day, but for now we just allocate an + # array + arr = W_NDimArray([1], dtype) + for i in range(len(items_w)): + subdtype = dtype.fields[dtype.fieldnames[i]][1] + ofs, itemtype = self.offsets_and_fields[i] + w_item = items_w[i] + w_box = itemtype.coerce(space, subdtype, w_item) + itemtype.store(arr, 1, 0, ofs, w_box) + return interp_boxes.W_VoidBox(arr, 0) + + @jit.unroll_safe + def store(self, arr, _, i, ofs, box): + assert isinstance(box, interp_boxes.W_VoidBox) + for k in range(self.get_element_size()): + arr.storage[k + i] = box.arr.storage[k + box.ofs] + + @jit.unroll_safe + def str_format(self, box): + assert isinstance(box, interp_boxes.W_VoidBox) + pieces = ["("] + first = True + for ofs, tp in self.offsets_and_fields: + if first: + first = False + else: + pieces.append(", ") + pieces.append(tp.str_format(tp.read(box.arr, 1, box.ofs, ofs))) + pieces.append(")") + return "".join(pieces) for tp in [Int32, Int64]: if tp.T == lltype.Signed: diff --git a/pypy/module/posix/interp_posix.py b/pypy/module/posix/interp_posix.py --- a/pypy/module/posix/interp_posix.py +++ b/pypy/module/posix/interp_posix.py @@ -48,7 +48,7 @@ return fsencode_w(self.space, self.w_obj) def as_unicode(self): - return self.space.unicode_w(self.w_obj) + return self.space.unicode0_w(self.w_obj) class FileDecoder(object): def __init__(self, space, w_obj): @@ -62,7 +62,7 @@ space = self.space w_unicode = space.call_method(self.w_obj, 'decode', getfilesystemencoding(space)) - return space.unicode_w(w_unicode) + return space.unicode0_w(w_unicode) @specialize.memo() def dispatch_filename(func, tag=0): @@ -543,10 +543,16 @@ dirname = FileEncoder(space, w_dirname) result = rposix.listdir(dirname) w_fs_encoding = getfilesystemencoding(space) - result_w = [ - space.call_method(space.wrap(s), "decode", w_fs_encoding) - for s in result - ] + len_result = len(result) + result_w = [None] * len_result + for i in range(len_result): + w_bytes = space.wrap(result[i]) + try: + result_w[i] = space.call_method(w_bytes, + "decode", w_fs_encoding) + except OperationError, e: + # fall back to the original byte string + result_w[i] = w_bytes else: dirname = space.str0_w(w_dirname) result = rposix.listdir(dirname) diff --git a/pypy/module/posix/test/test_posix2.py b/pypy/module/posix/test/test_posix2.py --- a/pypy/module/posix/test/test_posix2.py +++ b/pypy/module/posix/test/test_posix2.py @@ -29,6 +29,7 @@ mod.pdir = pdir unicode_dir = udir.ensure('fi\xc5\x9fier.txt', dir=True) unicode_dir.join('somefile').write('who cares?') + unicode_dir.join('caf\xe9').write('who knows?') mod.unicode_dir = unicode_dir # in applevel tests, os.stat uses the 
CPython os.stat. @@ -308,14 +309,22 @@ 'file2'] def test_listdir_unicode(self): + import sys unicode_dir = self.unicode_dir if unicode_dir is None: skip("encoding not good enough") posix = self.posix result = posix.listdir(unicode_dir) - result.sort() - assert result == [u'somefile'] - assert type(result[0]) is unicode + typed_result = [(type(x), x) for x in result] + assert (unicode, u'somefile') in typed_result + try: + u = "caf\xe9".decode(sys.getfilesystemencoding()) + except UnicodeDecodeError: + # Could not decode, listdir returned the byte string + assert (str, "caf\xe9") in typed_result + else: + assert (unicode, u) in typed_result + def test_access(self): pdir = self.pdir + '/file1' diff --git a/pypy/module/posix/test/test_ztranslation.py b/pypy/module/posix/test/test_ztranslation.py new file mode 100644 --- /dev/null +++ b/pypy/module/posix/test/test_ztranslation.py @@ -0,0 +1,4 @@ +from pypy.objspace.fake.checkmodule import checkmodule + +def test_posix_translates(): + checkmodule('posix') \ No newline at end of file diff --git a/pypy/module/pypyjit/__init__.py b/pypy/module/pypyjit/__init__.py --- a/pypy/module/pypyjit/__init__.py +++ b/pypy/module/pypyjit/__init__.py @@ -13,6 +13,7 @@ 'ResOperation': 'interp_resop.WrappedOp', 'DebugMergePoint': 'interp_resop.DebugMergePoint', 'Box': 'interp_resop.WrappedBox', + 'PARAMETER_DOCS': 'space.wrap(pypy.rlib.jit.PARAMETER_DOCS)', } def setup_after_space_initialization(self): diff --git a/pypy/module/pypyjit/test/test_jit_setup.py b/pypy/module/pypyjit/test/test_jit_setup.py --- a/pypy/module/pypyjit/test/test_jit_setup.py +++ b/pypy/module/pypyjit/test/test_jit_setup.py @@ -45,6 +45,12 @@ pypyjit.set_compile_hook(None) pypyjit.set_param('default') + def test_doc(self): + import pypyjit + d = pypyjit.PARAMETER_DOCS + assert type(d) is dict + assert 'threshold' in d + def test_interface_residual_call(): space = gettestobjspace(usemodules=['pypyjit']) diff --git a/pypy/module/sys/version.py b/pypy/module/sys/version.py --- a/pypy/module/sys/version.py +++ b/pypy/module/sys/version.py @@ -7,7 +7,7 @@ from pypy.interpreter import gateway #XXX # the release serial 42 is not in range(16) -CPYTHON_VERSION = (2, 7, 1, "final", 42) #XXX # sync patchlevel.h +CPYTHON_VERSION = (2, 7, 2, "final", 42) #XXX # sync patchlevel.h CPYTHON_API_VERSION = 1013 #XXX # sync with include/modsupport.h PYPY_VERSION = (1, 8, 1, "dev", 0) #XXX # sync patchlevel.h diff --git a/pypy/module/test_lib_pypy/numpypy/test_numpy.py b/pypy/module/test_lib_pypy/numpypy/test_numpy.py new file mode 100644 --- /dev/null +++ b/pypy/module/test_lib_pypy/numpypy/test_numpy.py @@ -0,0 +1,13 @@ +from pypy.conftest import gettestobjspace + +class AppTestNumpy: + def setup_class(cls): + cls.space = gettestobjspace(usemodules=['micronumpy']) + + def test_imports(self): + try: + import numpy # fails if 'numpypy' was not imported so far + except ImportError: + pass + import numpypy + import numpy # works after 'numpypy' has been imported diff --git a/pypy/module/test_lib_pypy/test_datetime.py b/pypy/module/test_lib_pypy/test_datetime.py --- a/pypy/module/test_lib_pypy/test_datetime.py +++ b/pypy/module/test_lib_pypy/test_datetime.py @@ -1,7 +1,10 @@ """Additional tests for datetime.""" +import py + import time import datetime +import copy import os def test_utcfromtimestamp(): @@ -22,3 +25,22 @@ del os.environ["TZ"] else: os.environ["TZ"] = prev_tz + +def test_utcfromtimestamp_microsecond(): + dt = datetime.datetime.utcfromtimestamp(0) + assert isinstance(dt.microsecond, int) + + +def 
test_integer_args(): + with py.test.raises(TypeError): + datetime.datetime(10, 10, 10.) + with py.test.raises(TypeError): + datetime.datetime(10, 10, 10, 10, 10.) + with py.test.raises(TypeError): + datetime.datetime(10, 10, 10, 10, 10, 10.) + +def test_utcnow_microsecond(): + dt = datetime.datetime.utcnow() + assert type(dt.microsecond) is int + + copy.copy(dt) \ No newline at end of file diff --git a/pypy/module/zipimport/interp_zipimport.py b/pypy/module/zipimport/interp_zipimport.py --- a/pypy/module/zipimport/interp_zipimport.py +++ b/pypy/module/zipimport/interp_zipimport.py @@ -123,7 +123,9 @@ self.prefix = prefix def getprefix(self, space): - return space.wrap(self.prefix) + if ZIPSEP == os.path.sep: + return space.wrap(self.prefix) + return space.wrap(self.prefix.replace(ZIPSEP, os.path.sep)) def _find_relative_path(self, filename): if filename.startswith(self.filename): @@ -381,7 +383,7 @@ prefix = name[len(filename):] if prefix.startswith(os.path.sep) or prefix.startswith(ZIPSEP): prefix = prefix[1:] - if prefix and not prefix.endswith(ZIPSEP): + if prefix and not prefix.endswith(ZIPSEP) and not prefix.endswith(os.path.sep): prefix += ZIPSEP w_result = space.wrap(W_ZipImporter(space, name, filename, zip_file, prefix)) zip_cache.set(filename, w_result) diff --git a/pypy/module/zipimport/test/test_undocumented.py b/pypy/module/zipimport/test/test_undocumented.py --- a/pypy/module/zipimport/test/test_undocumented.py +++ b/pypy/module/zipimport/test/test_undocumented.py @@ -119,7 +119,7 @@ zip_importer = zipimport.zipimporter(path) assert isinstance(zip_importer, zipimport.zipimporter) assert zip_importer.archive == zip_path - assert zip_importer.prefix == prefix + assert zip_importer.prefix == prefix.replace('/', os.path.sep) assert zip_path in zipimport._zip_directory_cache finally: self.cleanup_zipfile(self.created_paths) diff --git a/pypy/module/zipimport/test/test_zipimport.py b/pypy/module/zipimport/test/test_zipimport.py --- a/pypy/module/zipimport/test/test_zipimport.py +++ b/pypy/module/zipimport/test/test_zipimport.py @@ -15,7 +15,7 @@ cpy's regression tests """ compression = ZIP_STORED - pathsep = '/' + pathsep = os.path.sep def make_pyc(cls, space, co, mtime): data = marshal.dumps(co) @@ -129,7 +129,7 @@ self.writefile('sub/__init__.py', '') self.writefile('sub/yy.py', '') from zipimport import _zip_directory_cache, zipimporter - sub_importer = zipimporter(self.zipfile + '/sub') + sub_importer = zipimporter(self.zipfile + os.path.sep + 'sub') main_importer = zipimporter(self.zipfile) assert main_importer is not sub_importer diff --git a/pypy/rlib/libffi.py b/pypy/rlib/libffi.py --- a/pypy/rlib/libffi.py +++ b/pypy/rlib/libffi.py @@ -238,7 +238,7 @@ self = jit.promote(self) if argchain.numargs != len(self.argtypes): raise TypeError, 'Wrong number of arguments: %d expected, got %d' %\ - (argchain.numargs, len(self.argtypes)) + (len(self.argtypes), argchain.numargs) ll_args = self._prepare() i = 0 arg = argchain.first @@ -424,6 +424,11 @@ return rffi.cast(rffi.CArrayPtr(TYPE), addr)[0] assert False +def array_getitem_T(TYPE, width, addr, index, offset): + addr = rffi.ptradd(addr, index * width) + addr = rffi.ptradd(addr, offset) + return rffi.cast(rffi.CArrayPtr(TYPE), addr)[0] + @specialize.call_location() @jit.oopspec("libffi_array_setitem(ffitype, width, addr, index, offset, value)") def array_setitem(ffitype, width, addr, index, offset, value): @@ -434,3 +439,8 @@ rffi.cast(rffi.CArrayPtr(TYPE), addr)[0] = value return assert False + +def array_setitem_T(TYPE, width, 
addr, index, offset, value): + addr = rffi.ptradd(addr, index * width) + addr = rffi.ptradd(addr, offset) + rffi.cast(rffi.CArrayPtr(TYPE), addr)[0] = value diff --git a/pypy/rlib/ropenssl.py b/pypy/rlib/ropenssl.py --- a/pypy/rlib/ropenssl.py +++ b/pypy/rlib/ropenssl.py @@ -54,6 +54,7 @@ ASN1_STRING = lltype.Ptr(lltype.ForwardReference()) ASN1_ITEM = rffi.COpaquePtr('ASN1_ITEM') +ASN1_ITEM_EXP = lltype.Ptr(lltype.FuncType([], ASN1_ITEM)) X509_NAME = rffi.COpaquePtr('X509_NAME') class CConfig: @@ -101,12 +102,11 @@ X509_extension_st = rffi_platform.Struct( 'struct X509_extension_st', [('value', ASN1_STRING)]) - ASN1_ITEM_EXP = lltype.FuncType([], ASN1_ITEM) X509V3_EXT_D2I = lltype.FuncType([rffi.VOIDP, rffi.CCHARPP, rffi.LONG], rffi.VOIDP) v3_ext_method = rffi_platform.Struct( 'struct v3_ext_method', - [('it', lltype.Ptr(ASN1_ITEM_EXP)), + [('it', ASN1_ITEM_EXP), ('d2i', lltype.Ptr(X509V3_EXT_D2I))]) GENERAL_NAME_st = rffi_platform.Struct( 'struct GENERAL_NAME_st', @@ -118,6 +118,8 @@ ('block_size', rffi.INT)]) EVP_MD_SIZE = rffi_platform.SizeOf('EVP_MD') EVP_MD_CTX_SIZE = rffi_platform.SizeOf('EVP_MD_CTX') + OPENSSL_EXPORT_VAR_AS_FUNCTION = rffi_platform.Defined( + "OPENSSL_EXPORT_VAR_AS_FUNCTION") for k, v in rffi_platform.configure(CConfig).items(): @@ -224,7 +226,10 @@ ssl_external('i2a_ASN1_INTEGER', [BIO, ASN1_INTEGER], rffi.INT) ssl_external('ASN1_item_d2i', [rffi.VOIDP, rffi.CCHARPP, rffi.LONG, ASN1_ITEM], rffi.VOIDP) -ssl_external('ASN1_ITEM_ptr', [rffi.VOIDP], ASN1_ITEM, macro=True) +if OPENSSL_EXPORT_VAR_AS_FUNCTION: + ssl_external('ASN1_ITEM_ptr', [ASN1_ITEM_EXP], ASN1_ITEM, macro=True) +else: + ssl_external('ASN1_ITEM_ptr', [rffi.VOIDP], ASN1_ITEM, macro=True) ssl_external('sk_GENERAL_NAME_num', [GENERAL_NAMES], rffi.INT, macro=True) diff --git a/pypy/rpython/module/ll_os.py b/pypy/rpython/module/ll_os.py --- a/pypy/rpython/module/ll_os.py +++ b/pypy/rpython/module/ll_os.py @@ -43,7 +43,7 @@ arglist = ['arg%d' % (i,) for i in range(len(signature))] transformed_arglist = arglist[:] for i, arg in enumerate(signature): - if arg is unicode: + if arg in (unicode, unicode0): transformed_arglist[i] = transformed_arglist[i] + '.as_unicode()' args = ', '.join(arglist) @@ -67,7 +67,7 @@ exec source.compile() in miniglobals new_func = miniglobals[func_name] specialized_args = [i for i in range(len(signature)) - if signature[i] in (unicode, None)] + if signature[i] in (unicode, unicode0, None)] new_func = specialize.argtype(*specialized_args)(new_func) # Monkeypatch the function in pypy.rlib.rposix diff --git a/pypy/tool/release/package.py b/pypy/tool/release/package.py --- a/pypy/tool/release/package.py +++ b/pypy/tool/release/package.py @@ -60,7 +60,8 @@ if sys.platform == 'win32': # Can't rename a DLL: it is always called 'libpypy-c.dll' for extra in ['libpypy-c.dll', - 'libexpat.dll', 'sqlite3.dll', 'msvcr90.dll']: + 'libexpat.dll', 'sqlite3.dll', 'msvcr90.dll', + 'libeay32.dll', 'ssleay32.dll']: p = pypy_c.dirpath().join(extra) if not p.check(): p = py.path.local.sysfind(extra) @@ -125,7 +126,7 @@ zf.close() else: archive = str(builddir.join(name + '.tar.bz2')) - if sys.platform == 'darwin': + if sys.platform == 'darwin' or sys.platform.startswith('freebsd'): e = os.system('tar --numeric-owner -cvjf ' + archive + " " + name) else: e = os.system('tar --owner=root --group=root --numeric-owner -cvjf ' + archive + " " + name) diff --git a/pypy/translator/c/gc.py b/pypy/translator/c/gc.py --- a/pypy/translator/c/gc.py +++ b/pypy/translator/c/gc.py @@ -11,7 +11,6 @@ from 
pypy.translator.tool.cbuild import ExternalCompilationInfo class BasicGcPolicy(object): - stores_hash_at_the_end = False def __init__(self, db, thread_enabled=False): self.db = db @@ -47,8 +46,7 @@ return ExternalCompilationInfo( pre_include_bits=['/* using %s */' % (gct.__class__.__name__,), '#define MALLOC_ZERO_FILLED %d' % (gct.malloc_zero_filled,), - ], - post_include_bits=['typedef void *GC_hidden_pointer;'] + ] ) def get_prebuilt_hash(self, obj): @@ -308,7 +306,6 @@ class FrameworkGcPolicy(BasicGcPolicy): transformerclass = framework.FrameworkGCTransformer - stores_hash_at_the_end = True def struct_setup(self, structdefnode, rtti): if rtti is not None and hasattr(rtti._obj, 'destructor_funcptr'): diff --git a/pypy/translator/c/gcc/trackgcroot.py b/pypy/translator/c/gcc/trackgcroot.py --- a/pypy/translator/c/gcc/trackgcroot.py +++ b/pypy/translator/c/gcc/trackgcroot.py @@ -478,6 +478,7 @@ 'cvt', 'ucomi', 'comi', 'subs', 'subp' , 'adds', 'addp', 'xorp', 'movap', 'movd', 'movlp', 'sqrtsd', 'movhpd', 'mins', 'minp', 'maxs', 'maxp', 'unpck', 'pxor', 'por', # sse2 + 'shufps', 'shufpd', # arithmetic operations should not produce GC pointers 'inc', 'dec', 'not', 'neg', 'or', 'and', 'sbb', 'adc', 'shl', 'shr', 'sal', 'sar', 'rol', 'ror', 'mul', 'imul', 'div', 'idiv', diff --git a/pypy/translator/goal/app_main.py b/pypy/translator/goal/app_main.py --- a/pypy/translator/goal/app_main.py +++ b/pypy/translator/goal/app_main.py @@ -139,8 +139,14 @@ items = pypyjit.defaults.items() items.sort() for key, value in items: - print ' --jit %s=N %s%s (default %s)' % ( - key, ' '*(18-len(key)), pypyjit.PARAMETER_DOCS[key], value) + prefix = ' --jit %s=N %s' % (key, ' '*(18-len(key))) + doc = '%s (default %s)' % (pypyjit.PARAMETER_DOCS[key], value) + while len(doc) > 51: + i = doc[:51].rfind(' ') + print prefix + doc[:i] + doc = doc[i+1:] + prefix = ' '*len(prefix) + print prefix + doc print ' --jit off turn off the JIT' def print_version(*args): From noreply at buildbot.pypy.org Fri Feb 24 18:24:25 2012 From: noreply at buildbot.pypy.org (wlav) Date: Fri, 24 Feb 2012 18:24:25 +0100 (CET) Subject: [pypy-commit] pypy reflex-support: prevent access to reflection info during annotation/translation time (this happens only with the CINT backend, as the dict and normal library are one and the same) Message-ID: <20120224172425.73E9282366@wyvern.cs.uni-duesseldorf.de> Author: Wim Lavrijsen Branch: reflex-support Changeset: r52864:2d3f26b11661 Date: 2012-02-23 15:46 -0800 http://bitbucket.org/pypy/pypy/changeset/2d3f26b11661/ Log: prevent access to reflection info during annotation/translation time (this happens only with the CINT backend, as the dict and normal library are one and the same) diff --git a/pypy/module/cppyy/interp_cppyy.py b/pypy/module/cppyy/interp_cppyy.py --- a/pypy/module/cppyy/interp_cppyy.py +++ b/pypy/module/cppyy/interp_cppyy.py @@ -50,6 +50,10 @@ else: cpptype = W_CPPType(space, final_name, handle) state.cpptype_cache[name] = cpptype + + if space.config.translating and not objectmodel.we_are_translated(): + return cpptype + cpptype._find_methods() cpptype._find_data_members() return cpptype From noreply at buildbot.pypy.org Fri Feb 24 18:24:27 2012 From: noreply at buildbot.pypy.org (wlav) Date: Fri, 24 Feb 2012 18:24:27 +0100 (CET) Subject: [pypy-commit] pypy reflex-support: get the casts right to allow switching C_OBJECT type Message-ID: <20120224172427.0C2DA82367@wyvern.cs.uni-duesseldorf.de> Author: Wim Lavrijsen Branch: reflex-support Changeset: r52865:1f15486624b1 Date: 2012-02-23 
16:17 -0800 http://bitbucket.org/pypy/pypy/changeset/1f15486624b1/ Log: get the casts right to allow switching C_OBJECT type diff --git a/pypy/module/cppyy/capi/__init__.py b/pypy/module/cppyy/capi/__init__.py --- a/pypy/module/cppyy/capi/__init__.py +++ b/pypy/module/cppyy/capi/__init__.py @@ -254,15 +254,15 @@ c_charp2stdstring = rffi.llexternal( "cppyy_charp2stdstring", - [rffi.CCHARP], rffi.VOIDP, + [rffi.CCHARP], C_OBJECT, compilation_info=backend.eci) c_stdstring2stdstring = rffi.llexternal( "cppyy_stdstring2stdstring", - [rffi.VOIDP], rffi.VOIDP, + [C_OBJECT], C_OBJECT, compilation_info=backend.eci) c_free_stdstring = rffi.llexternal( "cppyy_free_stdstring", - [rffi.VOIDP], lltype.Void, + [C_OBJECT], lltype.Void, compilation_info=backend.eci) diff --git a/pypy/module/cppyy/converter.py b/pypy/module/cppyy/converter.py --- a/pypy/module/cppyy/converter.py +++ b/pypy/module/cppyy/converter.py @@ -552,7 +552,7 @@ return capi.c_stdstring2stdstring(arg) def free_argument(self, arg): - capi.c_free_stdstring(rffi.cast(rffi.VOIDPP, arg)[0]) + capi.c_free_stdstring(rffi.cast(capi.C_OBJECT, rffi.cast(rffi.VOIDPP, arg)[0])) class StdStringRefConverter(InstancePtrConverter): _immutable_ = True diff --git a/pypy/module/cppyy/include/capi.h b/pypy/module/cppyy/include/capi.h --- a/pypy/module/cppyy/include/capi.h +++ b/pypy/module/cppyy/include/capi.h @@ -6,6 +6,7 @@ #ifdef __cplusplus extern "C" { #endif // ifdef __cplusplus + typedef void* cppyy_typehandle_t; typedef void* cppyy_object_t; typedef void* (*cppyy_methptrgetter_t)(cppyy_object_t); @@ -80,9 +81,9 @@ long long cppyy_strtoll(const char* str); unsigned long long cppyy_strtuoll(const char* str); - void* cppyy_charp2stdstring(const char* str); - void* cppyy_stdstring2stdstring(void* ptr); - void cppyy_free_stdstring(void* ptr); + cppyy_object_t cppyy_charp2stdstring(const char* str); + cppyy_object_t cppyy_stdstring2stdstring(cppyy_object_t ptr); + void cppyy_free_stdstring(cppyy_object_t ptr); #ifdef __cplusplus } diff --git a/pypy/module/cppyy/src/cintcwrapper.cxx b/pypy/module/cppyy/src/cintcwrapper.cxx --- a/pypy/module/cppyy/src/cintcwrapper.cxx +++ b/pypy/module/cppyy/src/cintcwrapper.cxx @@ -506,15 +506,15 @@ free(ptr); } -void* cppyy_charp2stdstring(const char* str) { - return new std::string(str); +cppyy_object_t cppyy_charp2stdstring(const char* str) { + return (cppyy_object_t)new std::string(str); } -void* cppyy_stdstring2stdstring(void* ptr) { - return new std::string(*(std::string*)ptr); +cppyy_object_t cppyy_stdstring2stdstring(cppyy_object_t ptr) { + return (cppyy_object_t)new std::string(*(std::string*)ptr); } -void cppyy_free_stdstring(void* ptr) { +void cppyy_free_stdstring(cppyy_object_t ptr) { delete (std::string*)ptr; } diff --git a/pypy/module/cppyy/src/reflexcwrapper.cxx b/pypy/module/cppyy/src/reflexcwrapper.cxx --- a/pypy/module/cppyy/src/reflexcwrapper.cxx +++ b/pypy/module/cppyy/src/reflexcwrapper.cxx @@ -65,7 +65,7 @@ void cppyy_deallocate(cppyy_typehandle_t handle, cppyy_object_t instance) { Reflex::Type t = type_from_handle(handle); - t.Deallocate(instance); + t.Deallocate((void*)instance); } void cppyy_destruct(cppyy_typehandle_t handle, cppyy_object_t self) { @@ -81,7 +81,7 @@ Reflex::Scope s = scope_from_handle(handle); Reflex::Member m = s.FunctionMemberAt(method_index); if (self) { - Reflex::Object o((Reflex::Type)s, self); + Reflex::Object o((Reflex::Type)s, (void*)self); m.Invoke(o, 0, arguments); } else { m.Invoke(0, arguments); @@ -96,10 +96,10 @@ Reflex::Scope s = scope_from_handle(handle); 
Reflex::Member m = s.FunctionMemberAt(method_index); if (self) { - Reflex::Object o((Reflex::Type)s, self); + Reflex::Object o((Reflex::Type)s, (void*)self); m.Invoke(o, *((long*)result), arguments); } else { - m.Invoke(*((long*)result), arguments); + m.Invoke(*((long*)result), arguments); } return (long)result; } @@ -112,7 +112,7 @@ Reflex::Scope s = scope_from_handle(handle); Reflex::Member m = s.FunctionMemberAt(method_index); if (self) { - Reflex::Object o((Reflex::Type)s, self); + Reflex::Object o((Reflex::Type)s, (void*)self); m.Invoke(o, result, arguments); } else { m.Invoke(result, arguments); @@ -167,10 +167,10 @@ Reflex::Scope s = scope_from_handle(handle); Reflex::Member m = s.FunctionMemberAt(method_index); if (self) { - Reflex::Object o((Reflex::Type)s, self); + Reflex::Object o((Reflex::Type)s, (void*)self); m.Invoke(o, result, arguments); } else { - m.Invoke(result, arguments); + m.Invoke(result, arguments); } return cppstring_to_cstring(result); } @@ -284,7 +284,7 @@ for (Bases_t::iterator ibase = bases->begin(); ibase != bases->end(); ++ibase) { if (ibase->first.ToType() == tb) - return (size_t)ibase->first.Offset(address); + return (size_t)ibase->first.Offset((void*)address); } // contrary to typical invoke()s, the result of the internal getbases function @@ -306,9 +306,9 @@ Reflex::Member m = s.FunctionMemberAt(method_index); std::string name; if (m.IsConstructor()) - name = s.Name(Reflex::FINAL); // to get proper name for templates + name = s.Name(Reflex::FINAL); // to get proper name for templates else - name = m.Name(); + name = m.Name(); return cppstring_to_cstring(name); } @@ -414,14 +414,14 @@ free(ptr); } -void* cppyy_charp2stdstring(const char* str) { - return new std::string(str); +cppyy_object_t cppyy_charp2stdstring(const char* str) { + return (cppyy_object_t)new std::string(str); } -void* cppyy_stdstring2stdstring(void* ptr) { - return new std::string(*(std::string*)ptr); +cppyy_object_t cppyy_stdstring2stdstring(cppyy_object_t ptr) { + return (cppyy_object_t)new std::string(*(std::string*)ptr); } -void cppyy_free_stdstring(void* ptr) { +void cppyy_free_stdstring(cppyy_object_t ptr) { delete (std::string*)ptr; } From noreply at buildbot.pypy.org Fri Feb 24 18:24:28 2012 From: noreply at buildbot.pypy.org (wlav) Date: Fri, 24 Feb 2012 18:24:28 +0100 (CET) Subject: [pypy-commit] pypy reflex-support: still has to use longs for CINT backend, even with protection against using reflection info during translation ... Message-ID: <20120224172428.3696982368@wyvern.cs.uni-duesseldorf.de> Author: Wim Lavrijsen Branch: reflex-support Changeset: r52866:208d3205efc1 Date: 2012-02-24 00:20 -0800 http://bitbucket.org/pypy/pypy/changeset/208d3205efc1/ Log: still has to use longs for CINT backend, even with protection against using reflection info during translation ... 
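	The guard these cppyy commits refer to is the condition added to interp_cppyy.py (r52864 above, repeated in r52867 below): with the CINT backend the reflection dictionary and the normal library are one and the same, so reflection lookups have to be skipped while the interpreter is being annotated/translated and performed only at run time. A minimal sketch of that pattern follows, with the standalone helper and its name invented purely for illustration; the condition itself is taken verbatim from those patches.

	    from pypy.rlib import objectmodel

	    def update_type_info(space, cpptype):
	        # During translation this code still runs untranslated, so touching
	        # the CINT reflection info here would happen at the wrong time;
	        # skip the lookups and leave the type to be filled in at run time.
	        if space.config.translating and not objectmodel.we_are_translated():
	            return cpptype
	        cpptype._find_methods()
	        cpptype._find_data_members()
	        return cpptype

	In the translated interpreter we_are_translated() is True, and outside of a translation space.config.translating is False, so in both normal situations the lookups go through. The opaque type handles those lookups hand back are what the capi hunk right below switches from rffi.VOIDP to rffi.LONG.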
diff --git a/pypy/module/cppyy/capi/__init__.py b/pypy/module/cppyy/capi/__init__.py --- a/pypy/module/cppyy/capi/__init__.py +++ b/pypy/module/cppyy/capi/__init__.py @@ -6,8 +6,8 @@ identify = backend.identify -_C_OPAQUE_PTR = rffi.VOIDP -_C_OPAQUE_NULL = lltype.nullptr(_C_OPAQUE_PTR.TO) +_C_OPAQUE_PTR = rffi.LONG +_C_OPAQUE_NULL = lltype.nullptr(rffi.LONGP.TO)# ALT: _C_OPAQUE_PTR.TO C_TYPEHANDLE = _C_OPAQUE_PTR C_NULL_TYPEHANDLE = rffi.cast(C_TYPEHANDLE, _C_OPAQUE_NULL) From noreply at buildbot.pypy.org Fri Feb 24 18:24:29 2012 From: noreply at buildbot.pypy.org (wlav) Date: Fri, 24 Feb 2012 18:24:29 +0100 (CET) Subject: [pypy-commit] pypy reflex-support: further protection against using reflection info during translation with CINT backend and fixup of test_zjit to handle it Message-ID: <20120224172429.615A082369@wyvern.cs.uni-duesseldorf.de> Author: Wim Lavrijsen Branch: reflex-support Changeset: r52867:6d45bcabae5b Date: 2012-02-24 00:21 -0800 http://bitbucket.org/pypy/pypy/changeset/6d45bcabae5b/ Log: further protection against using reflection info during translation with CINT backend and fixup of test_zjit to handle it diff --git a/pypy/module/cppyy/interp_cppyy.py b/pypy/module/cppyy/interp_cppyy.py --- a/pypy/module/cppyy/interp_cppyy.py +++ b/pypy/module/cppyy/interp_cppyy.py @@ -428,6 +428,8 @@ self.data_members[data_member_name] = data_member def update(self): + if self.space.config.translating and not objectmodel.we_are_translated(): + return cpptype self._find_methods() self._find_data_members() diff --git a/pypy/module/cppyy/test/test_zjit.py b/pypy/module/cppyy/test/test_zjit.py --- a/pypy/module/cppyy/test/test_zjit.py +++ b/pypy/module/cppyy/test/test_zjit.py @@ -65,6 +65,9 @@ def __init__(self): self.fromcache = InternalSpaceCache(self).getorbuild self.user_del_action = FakeUserDelAction(self) + class dummy: pass + self.config = dummy() + self.config.translating = False def issequence_w(self, w_obj): return True From noreply at buildbot.pypy.org Fri Feb 24 18:24:30 2012 From: noreply at buildbot.pypy.org (wlav) Date: Fri, 24 Feb 2012 18:24:30 +0100 (CET) Subject: [pypy-commit] pypy reflex-support: merge default into branch Message-ID: <20120224172430.DC3858236A@wyvern.cs.uni-duesseldorf.de> Author: Wim Lavrijsen Branch: reflex-support Changeset: r52868:e18786b4cfc4 Date: 2012-02-24 00:22 -0800 http://bitbucket.org/pypy/pypy/changeset/e18786b4cfc4/ Log: merge default into branch diff --git a/lib_pypy/_ctypes/array.py b/lib_pypy/_ctypes/array.py --- a/lib_pypy/_ctypes/array.py +++ b/lib_pypy/_ctypes/array.py @@ -1,9 +1,9 @@ - +import _ffi import _rawffi from _ctypes.basics import _CData, cdata_from_address, _CDataMeta, sizeof from _ctypes.basics import keepalive_key, store_reference, ensure_objects -from _ctypes.basics import CArgObject +from _ctypes.basics import CArgObject, as_ffi_pointer class ArrayMeta(_CDataMeta): def __new__(self, name, cls, typedict): @@ -211,6 +211,9 @@ def _to_ffi_param(self): return self._get_buffer_value() + def _as_ffi_pointer_(self, ffitype): + return as_ffi_pointer(self, ffitype) + ARRAY_CACHE = {} def create_array_type(base, length): @@ -228,5 +231,6 @@ _type_ = base ) cls = ArrayMeta(name, (Array,), tpdict) + cls._ffiargtype = _ffi.types.Pointer(base.get_ffi_argtype()) ARRAY_CACHE[key] = cls return cls diff --git a/lib_pypy/_ctypes/basics.py b/lib_pypy/_ctypes/basics.py --- a/lib_pypy/_ctypes/basics.py +++ b/lib_pypy/_ctypes/basics.py @@ -230,5 +230,16 @@ } +# called from primitive.py, pointer.py, array.py +def as_ffi_pointer(value, 
ffitype): + my_ffitype = type(value).get_ffi_argtype() + # for now, we always allow types.pointer, else a lot of tests + # break. We need to rethink how pointers are represented, though + if my_ffitype is not ffitype and ffitype is not _ffi.types.void_p: + raise ArgumentError("expected %s instance, got %s" % (type(value), + ffitype)) + return value._get_buffer_value() + + # used by "byref" from _ctypes.pointer import pointer diff --git a/lib_pypy/_ctypes/pointer.py b/lib_pypy/_ctypes/pointer.py --- a/lib_pypy/_ctypes/pointer.py +++ b/lib_pypy/_ctypes/pointer.py @@ -3,7 +3,7 @@ import _ffi from _ctypes.basics import _CData, _CDataMeta, cdata_from_address, ArgumentError from _ctypes.basics import keepalive_key, store_reference, ensure_objects -from _ctypes.basics import sizeof, byref +from _ctypes.basics import sizeof, byref, as_ffi_pointer from _ctypes.array import Array, array_get_slice_params, array_slice_getitem,\ array_slice_setitem @@ -119,14 +119,6 @@ def _as_ffi_pointer_(self, ffitype): return as_ffi_pointer(self, ffitype) -def as_ffi_pointer(value, ffitype): - my_ffitype = type(value).get_ffi_argtype() - # for now, we always allow types.pointer, else a lot of tests - # break. We need to rethink how pointers are represented, though - if my_ffitype is not ffitype and ffitype is not _ffi.types.void_p: - raise ArgumentError("expected %s instance, got %s" % (type(value), - ffitype)) - return value._get_buffer_value() def _cast_addr(obj, _, tp): if not (isinstance(tp, _CDataMeta) and tp._is_pointer_like()): diff --git a/pypy/doc/cpython_differences.rst b/pypy/doc/cpython_differences.rst --- a/pypy/doc/cpython_differences.rst +++ b/pypy/doc/cpython_differences.rst @@ -313,5 +313,10 @@ implementation detail that shows up because of internal C-level slots that PyPy does not have. +* the ``__dict__`` attribute of new-style classes returns a normal dict, as + opposed to a dict proxy like in CPython. Mutating the dict will change the + type and vice versa. For builtin types, a dictionary will be returned that + cannot be changed (but still looks and behaves like a normal dictionary). + .. 
include:: _ref.txt diff --git a/pypy/jit/backend/llsupport/gc.py b/pypy/jit/backend/llsupport/gc.py --- a/pypy/jit/backend/llsupport/gc.py +++ b/pypy/jit/backend/llsupport/gc.py @@ -1,7 +1,6 @@ import os from pypy.rlib import rgc from pypy.rlib.objectmodel import we_are_translated, specialize -from pypy.rlib.debug import fatalerror from pypy.rlib.rarithmetic import ovfcheck from pypy.rpython.lltypesystem import lltype, llmemory, rffi, rclass, rstr from pypy.rpython.lltypesystem import llgroup diff --git a/pypy/jit/backend/x86/assembler.py b/pypy/jit/backend/x86/assembler.py --- a/pypy/jit/backend/x86/assembler.py +++ b/pypy/jit/backend/x86/assembler.py @@ -33,7 +33,7 @@ from pypy.jit.backend.x86.support import values_array from pypy.jit.backend.x86 import support from pypy.rlib.debug import (debug_print, debug_start, debug_stop, - have_debug_prints) + have_debug_prints, fatalerror_notb) from pypy.rlib import rgc from pypy.rlib.clibffi import FFI_DEFAULT_ABI from pypy.jit.backend.x86.jump import remap_frame_layout @@ -104,6 +104,7 @@ self._debug = v def setup_once(self): + self._check_sse2() # the address of the function called by 'new' gc_ll_descr = self.cpu.gc_ll_descr gc_ll_descr.initialize() @@ -161,6 +162,28 @@ debug_print(prefix + ':' + str(struct.i)) debug_stop('jit-backend-counts') + _CHECK_SSE2_FUNC_PTR = lltype.Ptr(lltype.FuncType([], lltype.Signed)) + + def _check_sse2(self): + if WORD == 8: + return # all x86-64 CPUs support SSE2 + if not self.cpu.supports_floats: + return # the CPU doesn't support float, so we don't need SSE2 + # + from pypy.jit.backend.x86.detect_sse2 import INSNS + mc = codebuf.MachineCodeBlockWrapper() + for c in INSNS: + mc.writechar(c) + rawstart = mc.materialize(self.cpu.asmmemmgr, []) + fnptr = rffi.cast(self._CHECK_SSE2_FUNC_PTR, rawstart) + features = fnptr() + if bool(features & (1<<25)) and bool(features & (1<<26)): + return # CPU supports SSE2 + fatalerror_notb( + "This version of PyPy was compiled for a x86 CPU supporting SSE2.\n" + "Your CPU is too old. 
Please translate a PyPy with the option:\n" + "--jit-backend=x86-without-sse2") + def _build_float_constants(self): datablockwrapper = MachineDataBlockWrapper(self.cpu.asmmemmgr, []) float_constants = datablockwrapper.malloc_aligned(32, alignment=16) diff --git a/pypy/jit/backend/x86/detect_sse2.py b/pypy/jit/backend/x86/detect_sse2.py --- a/pypy/jit/backend/x86/detect_sse2.py +++ b/pypy/jit/backend/x86/detect_sse2.py @@ -1,17 +1,18 @@ import autopath -from pypy.rpython.lltypesystem import lltype, rffi -from pypy.rlib.rmmap import alloc, free +INSNS = ("\xB8\x01\x00\x00\x00" # MOV EAX, 1 + "\x53" # PUSH EBX + "\x0F\xA2" # CPUID + "\x5B" # POP EBX + "\x92" # XCHG EAX, EDX + "\xC3") # RET def detect_sse2(): + from pypy.rpython.lltypesystem import lltype, rffi + from pypy.rlib.rmmap import alloc, free data = alloc(4096) pos = 0 - for c in ("\xB8\x01\x00\x00\x00" # MOV EAX, 1 - "\x53" # PUSH EBX - "\x0F\xA2" # CPUID - "\x5B" # POP EBX - "\x92" # XCHG EAX, EDX - "\xC3"): # RET + for c in INSNS: data[pos] = c pos += 1 fnptr = rffi.cast(lltype.Ptr(lltype.FuncType([], lltype.Signed)), data) diff --git a/pypy/jit/backend/x86/test/test_ztranslation.py b/pypy/jit/backend/x86/test/test_ztranslation.py --- a/pypy/jit/backend/x86/test/test_ztranslation.py +++ b/pypy/jit/backend/x86/test/test_ztranslation.py @@ -52,6 +52,7 @@ set_param(jitdriver, "trace_eagerness", 2) total = 0 frame = Frame(i) + j = float(j) while frame.i > 3: jitdriver.can_enter_jit(frame=frame, total=total, j=j) jitdriver.jit_merge_point(frame=frame, total=total, j=j) diff --git a/pypy/jit/metainterp/optimizeopt/optimizer.py b/pypy/jit/metainterp/optimizeopt/optimizer.py --- a/pypy/jit/metainterp/optimizeopt/optimizer.py +++ b/pypy/jit/metainterp/optimizeopt/optimizer.py @@ -567,7 +567,7 @@ assert isinstance(descr, compile.ResumeGuardDescr) modifier = resume.ResumeDataVirtualAdder(descr, self.resumedata_memo) try: - newboxes = modifier.finish(self.values, self.pendingfields) + newboxes = modifier.finish(self, self.pendingfields) if len(newboxes) > self.metainterp_sd.options.failargs_limit: raise resume.TagOverflow except resume.TagOverflow: diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py @@ -7796,6 +7796,23 @@ """ self.optimize_loop(ops, expected) + def test_issue1048_ok(self): + ops = """ + [p1, i2, i3] + p16 = getfield_gc(p1, descr=nextdescr) + call(p16, descr=nonwritedescr) + guard_true(i2) [p16] + setfield_gc(p1, ConstPtr(myptr), descr=nextdescr) + jump(p1, i3, i2) + """ + expected = """ + [p1, i3] + call(ConstPtr(myptr), descr=nonwritedescr) + guard_true(i3) [] + jump(p1, 1) + """ + self.optimize_loop(ops, expected) + class TestLLtype(OptimizeOptTest, LLtypeMixin): pass diff --git a/pypy/jit/metainterp/resume.py b/pypy/jit/metainterp/resume.py --- a/pypy/jit/metainterp/resume.py +++ b/pypy/jit/metainterp/resume.py @@ -182,23 +182,22 @@ # env numbering - def number(self, values, snapshot): + def number(self, optimizer, snapshot): if snapshot is None: return lltype.nullptr(NUMBERING), {}, 0 if snapshot in self.numberings: numb, liveboxes, v = self.numberings[snapshot] return numb, liveboxes.copy(), v - numb1, liveboxes, v = self.number(values, snapshot.prev) + numb1, liveboxes, v = self.number(optimizer, snapshot.prev) n = len(liveboxes)-v boxes = snapshot.boxes length = len(boxes) numb = lltype.malloc(NUMBERING, length) for i 
in range(length): box = boxes[i] - value = values.get(box, None) - if value is not None: - box = value.get_key_box() + value = optimizer.getvalue(box) + box = value.get_key_box() if isinstance(box, Const): tagged = self.getconst(box) @@ -318,14 +317,14 @@ _, tagbits = untag(tagged) return tagbits == TAGVIRTUAL - def finish(self, values, pending_setfields=[]): + def finish(self, optimizer, pending_setfields=[]): # compute the numbering storage = self.storage # make sure that nobody attached resume data to this guard yet assert not storage.rd_numb snapshot = storage.rd_snapshot assert snapshot is not None # is that true? - numb, liveboxes_from_env, v = self.memo.number(values, snapshot) + numb, liveboxes_from_env, v = self.memo.number(optimizer, snapshot) self.liveboxes_from_env = liveboxes_from_env self.liveboxes = {} storage.rd_numb = numb @@ -341,23 +340,23 @@ liveboxes[i] = box else: assert tagbits == TAGVIRTUAL - value = values[box] + value = optimizer.getvalue(box) value.get_args_for_fail(self) for _, box, fieldbox, _ in pending_setfields: self.register_box(box) self.register_box(fieldbox) - value = values[fieldbox] + value = optimizer.getvalue(fieldbox) value.get_args_for_fail(self) - self._number_virtuals(liveboxes, values, v) + self._number_virtuals(liveboxes, optimizer, v) self._add_pending_fields(pending_setfields) storage.rd_consts = self.memo.consts dump_storage(storage, liveboxes) return liveboxes[:] - def _number_virtuals(self, liveboxes, values, num_env_virtuals): + def _number_virtuals(self, liveboxes, optimizer, num_env_virtuals): # !! 'liveboxes' is a list that is extend()ed in-place !! memo = self.memo new_liveboxes = [None] * memo.num_cached_boxes() @@ -397,7 +396,7 @@ memo.nvholes += length - len(vfieldboxes) for virtualbox, fieldboxes in vfieldboxes.iteritems(): num, _ = untag(self.liveboxes[virtualbox]) - value = values[virtualbox] + value = optimizer.getvalue(virtualbox) fieldnums = [self._gettagged(box) for box in fieldboxes] vinfo = value.make_virtual_info(self, fieldnums) diff --git a/pypy/jit/metainterp/test/test_ajit.py b/pypy/jit/metainterp/test/test_ajit.py --- a/pypy/jit/metainterp/test/test_ajit.py +++ b/pypy/jit/metainterp/test/test_ajit.py @@ -2943,11 +2943,18 @@ self.check_resops(arraylen_gc=3) def test_ulonglong_mod(self): - myjitdriver = JitDriver(greens = [], reds = ['n', 'sa', 'i']) + myjitdriver = JitDriver(greens = [], reds = ['n', 'a']) + class A: + pass def f(n): sa = i = rffi.cast(rffi.ULONGLONG, 1) + a = A() while i < rffi.cast(rffi.ULONGLONG, n): - myjitdriver.jit_merge_point(sa=sa, n=n, i=i) + a.sa = sa + a.i = i + myjitdriver.jit_merge_point(n=n, a=a) + sa = a.sa + i = a.i sa += sa % i i += 1 res = self.meta_interp(f, [32]) diff --git a/pypy/jit/metainterp/test/test_resume.py b/pypy/jit/metainterp/test/test_resume.py --- a/pypy/jit/metainterp/test/test_resume.py +++ b/pypy/jit/metainterp/test/test_resume.py @@ -18,6 +18,19 @@ rd_virtuals = None rd_pendingfields = None + +class FakeOptimizer(object): + def __init__(self, values): + self.values = values + + def getvalue(self, box): + try: + value = self.values[box] + except KeyError: + value = self.values[box] = OptValue(box) + return value + + def test_tag(): assert tag(3, 1) == rffi.r_short(3<<2|1) assert tag(-3, 2) == rffi.r_short(-3<<2|2) @@ -500,7 +513,7 @@ capture_resumedata(fs, None, [], storage) memo = ResumeDataLoopMemo(FakeMetaInterpStaticData()) modifier = ResumeDataVirtualAdder(storage, memo) - liveboxes = modifier.finish({}) + liveboxes = modifier.finish(FakeOptimizer({})) 
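(Illustrative aside, not part of the changeset: the FakeOptimizer added above mirrors the interface change in resume.py, where callers stop passing a plain values dict, in which an untracked box simply looks up to None, and instead hand over an optimizer-like object whose getvalue() creates a default wrapper on first use. A minimal plain-Python sketch of that get-or-create pattern; the names Value and Lookup are invented for illustration only.)

class Value(object):
    def __init__(self, box):
        self.box = box
    def get_key_box(self):
        return self.box

class Lookup(object):
    """Get-or-create accessor, standing in for Optimizer.getvalue()."""
    def __init__(self):
        self.values = {}
    def getvalue(self, box):
        # an unseen box gets a default Value, so callers never need to
        # special-case "box not tracked yet"
        try:
            return self.values[box]
        except KeyError:
            value = self.values[box] = Value(box)
            return value

lookup = Lookup()
v = lookup.getvalue('b1')
assert lookup.getvalue('b1') is v                      # cached afterwards
assert lookup.getvalue('b2').get_key_box() == 'b2'     # created on demand

(End of aside; the archived test diff continues below.)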
metainterp = MyMetaInterp() b1t, b2t, b3t = [BoxInt(), BoxPtr(), BoxInt()] @@ -524,7 +537,7 @@ capture_resumedata(fs, [b4], [], storage) memo = ResumeDataLoopMemo(FakeMetaInterpStaticData()) modifier = ResumeDataVirtualAdder(storage, memo) - liveboxes = modifier.finish({}) + liveboxes = modifier.finish(FakeOptimizer({})) metainterp = MyMetaInterp() b1t, b2t, b3t, b4t = [BoxInt(), BoxPtr(), BoxInt(), BoxPtr()] @@ -553,10 +566,10 @@ memo = ResumeDataLoopMemo(FakeMetaInterpStaticData()) modifier = ResumeDataVirtualAdder(storage, memo) - liveboxes = modifier.finish({}) + liveboxes = modifier.finish(FakeOptimizer({})) modifier = ResumeDataVirtualAdder(storage2, memo) - liveboxes2 = modifier.finish({}) + liveboxes2 = modifier.finish(FakeOptimizer({})) metainterp = MyMetaInterp() @@ -617,7 +630,7 @@ memo = ResumeDataLoopMemo(FakeMetaInterpStaticData()) values = {b2: virtual_value(b2, b5, c4)} modifier = ResumeDataVirtualAdder(storage, memo) - liveboxes = modifier.finish(values) + liveboxes = modifier.finish(FakeOptimizer(values)) assert len(storage.rd_virtuals) == 1 assert storage.rd_virtuals[0].fieldnums == [tag(-1, TAGBOX), tag(0, TAGCONST)] @@ -628,7 +641,7 @@ values = {b2: virtual_value(b2, b4, v6), b6: v6} memo.clear_box_virtual_numbers() modifier = ResumeDataVirtualAdder(storage2, memo) - liveboxes2 = modifier.finish(values) + liveboxes2 = modifier.finish(FakeOptimizer(values)) assert len(storage2.rd_virtuals) == 2 assert storage2.rd_virtuals[0].fieldnums == [tag(len(liveboxes2)-1, TAGBOX), tag(-1, TAGVIRTUAL)] @@ -674,7 +687,7 @@ memo = ResumeDataLoopMemo(FakeMetaInterpStaticData()) values = {b2: virtual_value(b2, b5, c4)} modifier = ResumeDataVirtualAdder(storage, memo) - liveboxes = modifier.finish(values) + liveboxes = modifier.finish(FakeOptimizer(values)) assert len(storage.rd_virtuals) == 1 assert storage.rd_virtuals[0].fieldnums == [tag(-1, TAGBOX), tag(0, TAGCONST)] @@ -684,7 +697,7 @@ capture_resumedata(fs, None, [], storage2) values[b4] = virtual_value(b4, b6, c4) modifier = ResumeDataVirtualAdder(storage2, memo) - liveboxes = modifier.finish(values) + liveboxes = modifier.finish(FakeOptimizer(values)) assert len(storage2.rd_virtuals) == 2 assert storage2.rd_virtuals[1].fieldnums == storage.rd_virtuals[0].fieldnums assert storage2.rd_virtuals[1] is storage.rd_virtuals[0] @@ -703,7 +716,7 @@ v1.setfield(LLtypeMixin.nextdescr, v2) values = {b1: v1, b2: v2} modifier = ResumeDataVirtualAdder(storage, memo) - liveboxes = modifier.finish(values) + liveboxes = modifier.finish(FakeOptimizer(values)) assert liveboxes == [b3] assert len(storage.rd_virtuals) == 2 assert storage.rd_virtuals[0].fieldnums == [tag(-1, TAGBOX), @@ -776,7 +789,7 @@ memo = ResumeDataLoopMemo(FakeMetaInterpStaticData()) - numb, liveboxes, v = memo.number({}, snap1) + numb, liveboxes, v = memo.number(FakeOptimizer({}), snap1) assert v == 0 assert liveboxes == {b1: tag(0, TAGBOX), b2: tag(1, TAGBOX), @@ -788,7 +801,7 @@ tag(0, TAGBOX), tag(2, TAGINT)] assert not numb.prev.prev - numb2, liveboxes2, v = memo.number({}, snap2) + numb2, liveboxes2, v = memo.number(FakeOptimizer({}), snap2) assert v == 0 assert liveboxes2 == {b1: tag(0, TAGBOX), b2: tag(1, TAGBOX), @@ -813,7 +826,8 @@ return self.virt # renamed - numb3, liveboxes3, v = memo.number({b3: FakeValue(False, c4)}, snap3) + numb3, liveboxes3, v = memo.number(FakeOptimizer({b3: FakeValue(False, c4)}), + snap3) assert v == 0 assert liveboxes3 == {b1: tag(0, TAGBOX), b2: tag(1, TAGBOX)} @@ -825,7 +839,8 @@ env4 = [c3, b4, b1, c3] snap4 = Snapshot(snap, env4) - 
numb4, liveboxes4, v = memo.number({b4: FakeValue(True, b4)}, snap4) + numb4, liveboxes4, v = memo.number(FakeOptimizer({b4: FakeValue(True, b4)}), + snap4) assert v == 1 assert liveboxes4 == {b1: tag(0, TAGBOX), b2: tag(1, TAGBOX), @@ -837,8 +852,9 @@ env5 = [b1, b4, b5] snap5 = Snapshot(snap4, env5) - numb5, liveboxes5, v = memo.number({b4: FakeValue(True, b4), - b5: FakeValue(True, b5)}, snap5) + numb5, liveboxes5, v = memo.number(FakeOptimizer({b4: FakeValue(True, b4), + b5: FakeValue(True, b5)}), + snap5) assert v == 2 assert liveboxes5 == {b1: tag(0, TAGBOX), b2: tag(1, TAGBOX), @@ -940,7 +956,7 @@ storage = make_storage(b1s, b2s, b3s) memo = ResumeDataLoopMemo(FakeMetaInterpStaticData()) modifier = ResumeDataVirtualAdder(storage, memo) - liveboxes = modifier.finish({}) + liveboxes = modifier.finish(FakeOptimizer({})) assert storage.rd_snapshot is None cpu = MyCPU([]) reader = ResumeDataDirectReader(MyMetaInterp(cpu), storage) @@ -954,14 +970,14 @@ storage = make_storage(b1s, b2s, b3s) memo = ResumeDataLoopMemo(FakeMetaInterpStaticData()) modifier = ResumeDataVirtualAdder(storage, memo) - modifier.finish({}) + modifier.finish(FakeOptimizer({})) assert len(memo.consts) == 2 assert storage.rd_consts is memo.consts b1s, b2s, b3s = [ConstInt(sys.maxint), ConstInt(2**17), ConstInt(-65)] storage2 = make_storage(b1s, b2s, b3s) modifier2 = ResumeDataVirtualAdder(storage2, memo) - modifier2.finish({}) + modifier2.finish(FakeOptimizer({})) assert len(memo.consts) == 3 assert storage2.rd_consts is memo.consts @@ -1022,7 +1038,7 @@ val = FakeValue() values = {b1s: val, b2s: val} - liveboxes = modifier.finish(values) + liveboxes = modifier.finish(FakeOptimizer(values)) assert storage.rd_snapshot is None b1t, b3t = [BoxInt(11), BoxInt(33)] newboxes = _resume_remap(liveboxes, [b1_2, b3s], b1t, b3t) @@ -1043,7 +1059,7 @@ storage = make_storage(b1s, b2s, b3s) memo = ResumeDataLoopMemo(FakeMetaInterpStaticData()) modifier = ResumeDataVirtualAdder(storage, memo) - liveboxes = modifier.finish({}) + liveboxes = modifier.finish(FakeOptimizer({})) b2t, b3t = [BoxPtr(demo55o), BoxInt(33)] newboxes = _resume_remap(liveboxes, [b2s, b3s], b2t, b3t) metainterp = MyMetaInterp() @@ -1086,7 +1102,7 @@ values = {b2s: v2, b4s: v4} liveboxes = [] - modifier._number_virtuals(liveboxes, values, 0) + modifier._number_virtuals(liveboxes, FakeOptimizer(values), 0) storage.rd_consts = memo.consts[:] storage.rd_numb = None # resume @@ -1156,7 +1172,7 @@ modifier.register_virtual_fields(b2s, [b4s, c1s]) liveboxes = [] values = {b2s: v2} - modifier._number_virtuals(liveboxes, values, 0) + modifier._number_virtuals(liveboxes, FakeOptimizer(values), 0) dump_storage(storage, liveboxes) storage.rd_consts = memo.consts[:] storage.rd_numb = None @@ -1203,7 +1219,7 @@ v2.setfield(LLtypeMixin.bdescr, OptValue(b4s)) modifier.register_virtual_fields(b2s, [c1s, b4s]) liveboxes = [] - modifier._number_virtuals(liveboxes, {b2s: v2}, 0) + modifier._number_virtuals(liveboxes, FakeOptimizer({b2s: v2}), 0) dump_storage(storage, liveboxes) storage.rd_consts = memo.consts[:] storage.rd_numb = None @@ -1249,7 +1265,7 @@ values = {b4s: v4, b2s: v2} liveboxes = [] - modifier._number_virtuals(liveboxes, values, 0) + modifier._number_virtuals(liveboxes, FakeOptimizer(values), 0) assert liveboxes == [b2s, b4s] or liveboxes == [b4s, b2s] modifier._add_pending_fields([(LLtypeMixin.nextdescr, b2s, b4s, -1)]) storage.rd_consts = memo.consts[:] diff --git a/pypy/jit/metainterp/warmspot.py b/pypy/jit/metainterp/warmspot.py --- 
a/pypy/jit/metainterp/warmspot.py +++ b/pypy/jit/metainterp/warmspot.py @@ -453,7 +453,7 @@ if sys.stdout == sys.__stdout__: import pdb; pdb.post_mortem(tb) raise e.__class__, e, tb - fatalerror('~~~ Crash in JIT! %s' % (e,), traceback=True) + fatalerror('~~~ Crash in JIT! %s' % (e,)) crash_in_jit._dont_inline_ = True if self.translator.rtyper.type_system.name == 'lltypesystem': diff --git a/pypy/jit/tl/tinyframe/tinyframe.py b/pypy/jit/tl/tinyframe/tinyframe.py --- a/pypy/jit/tl/tinyframe/tinyframe.py +++ b/pypy/jit/tl/tinyframe/tinyframe.py @@ -210,7 +210,7 @@ def repr(self): return "" % (self.outer.repr(), self.inner.repr()) -driver = JitDriver(greens = ['code', 'i'], reds = ['self'], +driver = JitDriver(greens = ['i', 'code'], reds = ['self'], virtualizables = ['self']) class Frame(object): diff --git a/pypy/module/_demo/test/test_sieve.py b/pypy/module/_demo/test/test_sieve.py new file mode 100644 --- /dev/null +++ b/pypy/module/_demo/test/test_sieve.py @@ -0,0 +1,12 @@ +from pypy.conftest import gettestobjspace + + +class AppTestSieve: + def setup_class(cls): + cls.space = gettestobjspace(usemodules=('_demo',)) + + def test_sieve(self): + import _demo + lst = _demo.sieve(100) + assert lst == [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, + 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97] diff --git a/pypy/module/_io/__init__.py b/pypy/module/_io/__init__.py --- a/pypy/module/_io/__init__.py +++ b/pypy/module/_io/__init__.py @@ -28,6 +28,7 @@ } def init(self, space): + MixedModule.init(self, space) w_UnsupportedOperation = space.call_function( space.w_type, space.wrap('UnsupportedOperation'), @@ -35,3 +36,9 @@ space.newdict()) space.setattr(self, space.wrap('UnsupportedOperation'), w_UnsupportedOperation) + + def shutdown(self, space): + # at shutdown, flush all open streams. Ignore I/O errors. + from pypy.module._io.interp_iobase import get_autoflushher + get_autoflushher(space).flush_all(space) + diff --git a/pypy/module/_io/interp_iobase.py b/pypy/module/_io/interp_iobase.py --- a/pypy/module/_io/interp_iobase.py +++ b/pypy/module/_io/interp_iobase.py @@ -5,6 +5,8 @@ from pypy.interpreter.gateway import interp2app from pypy.interpreter.error import OperationError, operationerrfmt from pypy.rlib.rstring import StringBuilder +from pypy.rlib import rweakref + DEFAULT_BUFFER_SIZE = 8192 @@ -43,6 +45,8 @@ self.space = space self.w_dict = space.newdict() self.__IOBase_closed = False + self.streamholder = None # needed by AutoFlusher + get_autoflushher(space).add(self) def getdict(self, space): return self.w_dict @@ -98,6 +102,7 @@ space.call_method(self, "flush") finally: self.__IOBase_closed = True + get_autoflushher(space).remove(self) def flush_w(self, space): if self._CLOSED(): @@ -303,3 +308,52 @@ read = interp2app(W_RawIOBase.read_w), readall = interp2app(W_RawIOBase.readall_w), ) + + +# ------------------------------------------------------------ +# functions to make sure that all streams are flushed on exit +# ------------------------------------------------------------ + +class StreamHolder(object): + + def __init__(self, w_iobase): + self.w_iobase_ref = rweakref.ref(w_iobase) + w_iobase.autoflusher = self + + def autoflush(self, space): + w_iobase = self.w_iobase_ref() + if w_iobase is not None: + space.call_method(w_iobase, 'flush') # XXX: ignore IOErrors? 
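(Illustrative aside, not part of the changeset: the StreamHolder above, together with the AutoFlusher that follows, keeps only a weak reference to each open stream, so registering a stream for flush-at-exit does not keep it alive. A rough application-level sketch of the same weakref-registry idea in plain Python; Registry and FakeStream are invented names, not part of the _io module.)

import weakref

class FakeStream(object):
    def __init__(self):
        self.flushed = False
    def flush(self):
        self.flushed = True

class Registry(object):
    def __init__(self):
        self._refs = {}                    # weakref -> None, used as a set
    def add(self, stream):
        self._refs[weakref.ref(stream)] = None
    def flush_all(self):
        for ref in list(self._refs):
            del self._refs[ref]
            stream = ref()                 # None if already garbage-collected
            if stream is not None:
                stream.flush()

registry = Registry()
stream = FakeStream()
registry.add(stream)
registry.flush_all()
assert stream.flushed

(End of aside; the archived diff continues below.)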
+ + +class AutoFlusher(object): + + def __init__(self, space): + self.streams = {} + + def add(self, w_iobase): + assert w_iobase.streamholder is None + holder = StreamHolder(w_iobase) + w_iobase.streamholder = holder + self.streams[holder] = None + + def remove(self, w_iobase): + holder = w_iobase.streamholder + if holder is not None: + del self.streams[holder] + + def flush_all(self, space): + while self.streams: + for streamholder in self.streams.keys(): + try: + del self.streams[streamholder] + except KeyError: + pass # key was removed in the meantime + else: + streamholder.autoflush(space) + + +def get_autoflushher(space): + return space.fromcache(AutoFlusher) + + diff --git a/pypy/module/_io/test/test_fileio.py b/pypy/module/_io/test/test_fileio.py --- a/pypy/module/_io/test/test_fileio.py +++ b/pypy/module/_io/test/test_fileio.py @@ -160,3 +160,20 @@ f.close() assert repr(f) == "<_io.FileIO [closed]>" +def test_flush_at_exit(): + from pypy import conftest + from pypy.tool.option import make_config, make_objspace + from pypy.tool.udir import udir + + tmpfile = udir.join('test_flush_at_exit') + config = make_config(conftest.option) + space = make_objspace(config) + space.appexec([space.wrap(str(tmpfile))], """(tmpfile): + import io + f = io.open(tmpfile, 'w') + f.write('42') + # no flush() and no close() + import sys; sys._keepalivesomewhereobscure = f + """) + space.finish() + assert tmpfile.read() == '42' diff --git a/pypy/module/cpyext/api.py b/pypy/module/cpyext/api.py --- a/pypy/module/cpyext/api.py +++ b/pypy/module/cpyext/api.py @@ -385,6 +385,7 @@ "Tuple": "space.w_tuple", "List": "space.w_list", "Set": "space.w_set", + "FrozenSet": "space.w_frozenset", "Int": "space.w_int", "Bool": "space.w_bool", "Float": "space.w_float", diff --git a/pypy/module/cpyext/dictobject.py b/pypy/module/cpyext/dictobject.py --- a/pypy/module/cpyext/dictobject.py +++ b/pypy/module/cpyext/dictobject.py @@ -184,8 +184,10 @@ w_item = space.call_method(w_iter, "next") w_key, w_value = space.fixedview(w_item, 2) state = space.fromcache(RefcountState) - pkey[0] = state.make_borrowed(w_dict, w_key) - pvalue[0] = state.make_borrowed(w_dict, w_value) + if pkey: + pkey[0] = state.make_borrowed(w_dict, w_key) + if pvalue: + pvalue[0] = state.make_borrowed(w_dict, w_value) ppos[0] += 1 except OperationError, e: if not e.match(space, space.w_StopIteration): diff --git a/pypy/module/cpyext/eval.py b/pypy/module/cpyext/eval.py --- a/pypy/module/cpyext/eval.py +++ b/pypy/module/cpyext/eval.py @@ -1,16 +1,24 @@ from pypy.interpreter.error import OperationError +from pypy.interpreter.astcompiler import consts from pypy.rpython.lltypesystem import rffi, lltype from pypy.module.cpyext.api import ( cpython_api, CANNOT_FAIL, CONST_STRING, FILEP, fread, feof, Py_ssize_tP, cpython_struct) from pypy.module.cpyext.pyobject import PyObject, borrow_from from pypy.module.cpyext.pyerrors import PyErr_SetFromErrno +from pypy.module.cpyext.funcobject import PyCodeObject from pypy.module.__builtin__ import compiling PyCompilerFlags = cpython_struct( - "PyCompilerFlags", ()) + "PyCompilerFlags", (("cf_flags", rffi.INT),)) PyCompilerFlagsPtr = lltype.Ptr(PyCompilerFlags) +PyCF_MASK = (consts.CO_FUTURE_DIVISION | + consts.CO_FUTURE_ABSOLUTE_IMPORT | + consts.CO_FUTURE_WITH_STATEMENT | + consts.CO_FUTURE_PRINT_FUNCTION | + consts.CO_FUTURE_UNICODE_LITERALS) + @cpython_api([PyObject, PyObject, PyObject], PyObject) def PyEval_CallObjectWithKeywords(space, w_obj, w_arg, w_kwds): return space.call(w_obj, w_arg, w_kwds) @@ -48,6 +56,17 @@ 
return None return borrow_from(None, caller.w_globals) + at cpython_api([PyCodeObject, PyObject, PyObject], PyObject) +def PyEval_EvalCode(space, w_code, w_globals, w_locals): + """This is a simplified interface to PyEval_EvalCodeEx(), with just + the code object, and the dictionaries of global and local variables. + The other arguments are set to NULL.""" + if w_globals is None: + w_globals = space.w_None + if w_locals is None: + w_locals = space.w_None + return compiling.eval(space, w_code, w_globals, w_locals) + @cpython_api([PyObject, PyObject], PyObject) def PyObject_CallObject(space, w_obj, w_arg): """ @@ -74,7 +93,7 @@ Py_file_input = 257 Py_eval_input = 258 -def compile_string(space, source, filename, start): +def compile_string(space, source, filename, start, flags=0): w_source = space.wrap(source) start = rffi.cast(lltype.Signed, start) if start == Py_file_input: @@ -86,7 +105,7 @@ else: raise OperationError(space.w_ValueError, space.wrap( "invalid mode parameter for compilation")) - return compiling.compile(space, w_source, filename, mode) + return compiling.compile(space, w_source, filename, mode, flags) def run_string(space, source, filename, start, w_globals, w_locals): w_code = compile_string(space, source, filename, start) @@ -109,6 +128,24 @@ filename = "" return run_string(space, source, filename, start, w_globals, w_locals) + at cpython_api([rffi.CCHARP, rffi.INT_real, PyObject, PyObject, + PyCompilerFlagsPtr], PyObject) +def PyRun_StringFlags(space, source, start, w_globals, w_locals, flagsptr): + """Execute Python source code from str in the context specified by the + dictionaries globals and locals with the compiler flags specified by + flags. The parameter start specifies the start token that should be used to + parse the source code. + + Returns the result of executing the code as a Python object, or NULL if an + exception was raised.""" + source = rffi.charp2str(source) + if flagsptr: + flags = rffi.cast(lltype.Signed, flagsptr.c_cf_flags) + else: + flags = 0 + w_code = compile_string(space, source, "", start, flags) + return compiling.eval(space, w_code, w_globals, w_locals) + @cpython_api([FILEP, CONST_STRING, rffi.INT_real, PyObject, PyObject], PyObject) def PyRun_File(space, fp, filename, start, w_globals, w_locals): """This is a simplified interface to PyRun_FileExFlags() below, leaving @@ -150,7 +187,7 @@ @cpython_api([rffi.CCHARP, rffi.CCHARP, rffi.INT_real, PyCompilerFlagsPtr], PyObject) -def Py_CompileStringFlags(space, source, filename, start, flags): +def Py_CompileStringFlags(space, source, filename, start, flagsptr): """Parse and compile the Python source code in str, returning the resulting code object. 
The start token is given by start; this can be used to constrain the code which can be compiled and should @@ -160,7 +197,30 @@ returns NULL if the code cannot be parsed or compiled.""" source = rffi.charp2str(source) filename = rffi.charp2str(filename) - if flags: - raise OperationError(space.w_NotImplementedError, space.wrap( - "cpyext Py_CompileStringFlags does not accept flags")) - return compile_string(space, source, filename, start) + if flagsptr: + flags = rffi.cast(lltype.Signed, flagsptr.c_cf_flags) + else: + flags = 0 + return compile_string(space, source, filename, start, flags) + + at cpython_api([PyCompilerFlagsPtr], rffi.INT_real, error=CANNOT_FAIL) +def PyEval_MergeCompilerFlags(space, cf): + """This function changes the flags of the current evaluation + frame, and returns true on success, false on failure.""" + flags = rffi.cast(lltype.Signed, cf.c_cf_flags) + result = flags != 0 + current_frame = space.getexecutioncontext().gettopframe_nohidden() + if current_frame: + codeflags = current_frame.pycode.co_flags + compilerflags = codeflags & PyCF_MASK + if compilerflags: + result = 1 + flags |= compilerflags + # No future keyword at the moment + # if codeflags & CO_GENERATOR_ALLOWED: + # result = 1 + # flags |= CO_GENERATOR_ALLOWED + cf.c_cf_flags = rffi.cast(rffi.INT, flags) + return result + + diff --git a/pypy/module/cpyext/funcobject.py b/pypy/module/cpyext/funcobject.py --- a/pypy/module/cpyext/funcobject.py +++ b/pypy/module/cpyext/funcobject.py @@ -1,6 +1,6 @@ from pypy.rpython.lltypesystem import rffi, lltype from pypy.module.cpyext.api import ( - PyObjectFields, generic_cpy_call, CONST_STRING, + PyObjectFields, generic_cpy_call, CONST_STRING, CANNOT_FAIL, cpython_api, bootstrap_function, cpython_struct, build_type_checkers) from pypy.module.cpyext.pyobject import ( PyObject, make_ref, from_ref, Py_DecRef, make_typedescr, borrow_from) @@ -48,6 +48,7 @@ PyFunction_Check, PyFunction_CheckExact = build_type_checkers("Function", Function) PyMethod_Check, PyMethod_CheckExact = build_type_checkers("Method", Method) +PyCode_Check, PyCode_CheckExact = build_type_checkers("Code", PyCode) def function_attach(space, py_obj, w_obj): py_func = rffi.cast(PyFunctionObject, py_obj) @@ -167,3 +168,9 @@ freevars=[], cellvars=[])) + at cpython_api([PyObject], rffi.INT_real, error=CANNOT_FAIL) +def PyCode_GetNumFree(space, w_co): + """Return the number of free variables in co.""" + co = space.interp_w(PyCode, w_co) + return len(co.co_freevars) + diff --git a/pypy/module/cpyext/include/Python.h b/pypy/module/cpyext/include/Python.h --- a/pypy/module/cpyext/include/Python.h +++ b/pypy/module/cpyext/include/Python.h @@ -113,6 +113,7 @@ #include "compile.h" #include "frameobject.h" #include "eval.h" +#include "pymath.h" #include "pymem.h" #include "pycobject.h" #include "pycapsule.h" diff --git a/pypy/module/cpyext/include/code.h b/pypy/module/cpyext/include/code.h --- a/pypy/module/cpyext/include/code.h +++ b/pypy/module/cpyext/include/code.h @@ -13,13 +13,19 @@ /* Masks for co_flags above */ /* These values are also in funcobject.py */ -#define CO_OPTIMIZED 0x0001 -#define CO_NEWLOCALS 0x0002 -#define CO_VARARGS 0x0004 -#define CO_VARKEYWORDS 0x0008 +#define CO_OPTIMIZED 0x0001 +#define CO_NEWLOCALS 0x0002 +#define CO_VARARGS 0x0004 +#define CO_VARKEYWORDS 0x0008 #define CO_NESTED 0x0010 #define CO_GENERATOR 0x0020 +#define CO_FUTURE_DIVISION 0x02000 +#define CO_FUTURE_ABSOLUTE_IMPORT 0x04000 +#define CO_FUTURE_WITH_STATEMENT 0x08000 +#define CO_FUTURE_PRINT_FUNCTION 0x10000 +#define 
CO_FUTURE_UNICODE_LITERALS 0x20000 + #ifdef __cplusplus } #endif diff --git a/pypy/module/cpyext/include/pymath.h b/pypy/module/cpyext/include/pymath.h new file mode 100644 --- /dev/null +++ b/pypy/module/cpyext/include/pymath.h @@ -0,0 +1,20 @@ +#ifndef Py_PYMATH_H +#define Py_PYMATH_H + +/************************************************************************** +Symbols and macros to supply platform-independent interfaces to mathematical +functions and constants +**************************************************************************/ + +/* HUGE_VAL is supposed to expand to a positive double infinity. Python + * uses Py_HUGE_VAL instead because some platforms are broken in this + * respect. We used to embed code in pyport.h to try to worm around that, + * but different platforms are broken in conflicting ways. If you're on + * a platform where HUGE_VAL is defined incorrectly, fiddle your Python + * config to #define Py_HUGE_VAL to something that works on your platform. + */ +#ifndef Py_HUGE_VAL +#define Py_HUGE_VAL HUGE_VAL +#endif + +#endif /* Py_PYMATH_H */ diff --git a/pypy/module/cpyext/include/pythonrun.h b/pypy/module/cpyext/include/pythonrun.h --- a/pypy/module/cpyext/include/pythonrun.h +++ b/pypy/module/cpyext/include/pythonrun.h @@ -19,6 +19,14 @@ int cf_flags; /* bitmask of CO_xxx flags relevant to future */ } PyCompilerFlags; +#define PyCF_MASK (CO_FUTURE_DIVISION | CO_FUTURE_ABSOLUTE_IMPORT | \ + CO_FUTURE_WITH_STATEMENT | CO_FUTURE_PRINT_FUNCTION | \ + CO_FUTURE_UNICODE_LITERALS) +#define PyCF_MASK_OBSOLETE (CO_NESTED) +#define PyCF_SOURCE_IS_UTF8 0x0100 +#define PyCF_DONT_IMPLY_DEDENT 0x0200 +#define PyCF_ONLY_AST 0x0400 + #define Py_CompileString(str, filename, start) Py_CompileStringFlags(str, filename, start, NULL) #ifdef __cplusplus diff --git a/pypy/module/cpyext/stubs.py b/pypy/module/cpyext/stubs.py --- a/pypy/module/cpyext/stubs.py +++ b/pypy/module/cpyext/stubs.py @@ -182,16 +182,6 @@ used as the positional and keyword parameters to the object's constructor.""" raise NotImplementedError - at cpython_api([PyObject], rffi.INT_real, error=CANNOT_FAIL) -def PyCode_Check(space, co): - """Return true if co is a code object""" - raise NotImplementedError - - at cpython_api([PyObject], rffi.INT_real, error=CANNOT_FAIL) -def PyCode_GetNumFree(space, co): - """Return the number of free variables in co.""" - raise NotImplementedError - @cpython_api([PyObject], rffi.INT_real, error=-1) def PyCodec_Register(space, search_function): """Register a new codec search function. @@ -1293,28 +1283,6 @@ that haven't been explicitly destroyed at that point.""" raise NotImplementedError - at cpython_api([rffi.VOIDP], lltype.Void) -def Py_AddPendingCall(space, func): - """Post a notification to the Python main thread. If successful, func will - be called with the argument arg at the earliest convenience. func will be - called having the global interpreter lock held and can thus use the full - Python API and can take any action such as setting object attributes to - signal IO completion. It must return 0 on success, or -1 signalling an - exception. The notification function won't be interrupted to perform another - asynchronous notification recursively, but it can still be interrupted to - switch threads if the global interpreter lock is released, for example, if it - calls back into Python code. - - This function returns 0 on success in which case the notification has been - scheduled. 
Otherwise, for example if the notification buffer is full, it - returns -1 without setting any exception. - - This function can be called on any thread, be it a Python thread or some - other system thread. If it is a Python thread, it doesn't matter if it holds - the global interpreter lock or not. - """ - raise NotImplementedError - @cpython_api([Py_tracefunc, PyObject], lltype.Void) def PyEval_SetProfile(space, func, obj): """Set the profiler function to func. The obj parameter is passed to the @@ -1875,26 +1843,6 @@ """ raise NotImplementedError - at cpython_api([Py_UNICODE], rffi.INT_real, error=CANNOT_FAIL) -def Py_UNICODE_ISTITLE(space, ch): - """Return 1 or 0 depending on whether ch is a titlecase character.""" - raise NotImplementedError - - at cpython_api([Py_UNICODE], rffi.INT_real, error=CANNOT_FAIL) -def Py_UNICODE_ISDIGIT(space, ch): - """Return 1 or 0 depending on whether ch is a digit character.""" - raise NotImplementedError - - at cpython_api([Py_UNICODE], rffi.INT_real, error=CANNOT_FAIL) -def Py_UNICODE_ISNUMERIC(space, ch): - """Return 1 or 0 depending on whether ch is a numeric character.""" - raise NotImplementedError - - at cpython_api([Py_UNICODE], rffi.INT_real, error=CANNOT_FAIL) -def Py_UNICODE_ISALPHA(space, ch): - """Return 1 or 0 depending on whether ch is an alphabetic character.""" - raise NotImplementedError - @cpython_api([rffi.CCHARP], PyObject) def PyUnicode_FromFormat(space, format): """Take a C printf()-style format string and a variable number of @@ -2339,17 +2287,6 @@ use the default error handling.""" raise NotImplementedError - at cpython_api([PyObject, PyObject, Py_ssize_t, Py_ssize_t, rffi.INT_real], rffi.INT_real, error=-1) -def PyUnicode_Tailmatch(space, str, substr, start, end, direction): - """Return 1 if substr matches str*[*start:end] at the given tail end - (direction == -1 means to do a prefix match, direction == 1 a suffix match), - 0 otherwise. Return -1 if an error occurred. - - This function used an int type for start and end. This - might require changes in your code for properly supporting 64-bit - systems.""" - raise NotImplementedError - @cpython_api([PyObject, PyObject, Py_ssize_t, Py_ssize_t, rffi.INT_real], Py_ssize_t, error=-2) def PyUnicode_Find(space, str, substr, start, end, direction): """Return the first position of substr in str*[*start:end] using the given @@ -2373,16 +2310,6 @@ properly supporting 64-bit systems.""" raise NotImplementedError - at cpython_api([PyObject, PyObject, PyObject, Py_ssize_t], PyObject) -def PyUnicode_Replace(space, str, substr, replstr, maxcount): - """Replace at most maxcount occurrences of substr in str with replstr and - return the resulting Unicode object. maxcount == -1 means replace all - occurrences. - - This function used an int type for maxcount. This might - require changes in your code for properly supporting 64-bit systems.""" - raise NotImplementedError - @cpython_api([PyObject, PyObject, rffi.INT_real], PyObject) def PyUnicode_RichCompare(space, left, right, op): """Rich compare two unicode strings and return one of the following: @@ -2556,17 +2483,6 @@ source code is read from fp instead of an in-memory string.""" raise NotImplementedError - at cpython_api([rffi.CCHARP, rffi.INT_real, PyObject, PyObject, PyCompilerFlags], PyObject) -def PyRun_StringFlags(space, str, start, globals, locals, flags): - """Execute Python source code from str in the context specified by the - dictionaries globals and locals with the compiler flags specified by - flags. 
The parameter start specifies the start token that should be used to - parse the source code. - - Returns the result of executing the code as a Python object, or NULL if an - exception was raised.""" - raise NotImplementedError - @cpython_api([FILE, rffi.CCHARP, rffi.INT_real, PyObject, PyObject, rffi.INT_real], PyObject) def PyRun_FileEx(space, fp, filename, start, globals, locals, closeit): """This is a simplified interface to PyRun_FileExFlags() below, leaving @@ -2587,13 +2503,6 @@ returns.""" raise NotImplementedError - at cpython_api([PyCodeObject, PyObject, PyObject], PyObject) -def PyEval_EvalCode(space, co, globals, locals): - """This is a simplified interface to PyEval_EvalCodeEx(), with just - the code object, and the dictionaries of global and local variables. - The other arguments are set to NULL.""" - raise NotImplementedError - @cpython_api([PyCodeObject, PyObject, PyObject, PyObjectP, rffi.INT_real, PyObjectP, rffi.INT_real, PyObjectP, rffi.INT_real, PyObject], PyObject) def PyEval_EvalCodeEx(space, co, globals, locals, args, argcount, kws, kwcount, defs, defcount, closure): """Evaluate a precompiled code object, given a particular environment for its @@ -2618,12 +2527,6 @@ throw() methods of generator objects.""" raise NotImplementedError - at cpython_api([PyCompilerFlags], rffi.INT_real, error=CANNOT_FAIL) -def PyEval_MergeCompilerFlags(space, cf): - """This function changes the flags of the current evaluation frame, and returns - true on success, false on failure.""" - raise NotImplementedError - @cpython_api([PyObject], rffi.INT_real, error=CANNOT_FAIL) def PyWeakref_Check(space, ob): """Return true if ob is either a reference or proxy object. diff --git a/pypy/module/cpyext/stubsactive.py b/pypy/module/cpyext/stubsactive.py --- a/pypy/module/cpyext/stubsactive.py +++ b/pypy/module/cpyext/stubsactive.py @@ -38,3 +38,31 @@ def Py_MakePendingCalls(space): return 0 +pending_call = lltype.Ptr(lltype.FuncType([rffi.VOIDP], rffi.INT_real)) + at cpython_api([pending_call, rffi.VOIDP], rffi.INT_real, error=-1) +def Py_AddPendingCall(space, func, arg): + """Post a notification to the Python main thread. If successful, + func will be called with the argument arg at the earliest + convenience. func will be called having the global interpreter + lock held and can thus use the full Python API and can take any + action such as setting object attributes to signal IO completion. + It must return 0 on success, or -1 signalling an exception. The + notification function won't be interrupted to perform another + asynchronous notification recursively, but it can still be + interrupted to switch threads if the global interpreter lock is + released, for example, if it calls back into Python code. + + This function returns 0 on success in which case the notification + has been scheduled. Otherwise, for example if the notification + buffer is full, it returns -1 without setting any exception. + + This function can be called on any thread, be it a Python thread + or some other system thread. If it is a Python thread, it doesn't + matter if it holds the global interpreter lock or not. 
+ """ + return -1 + +thread_func = lltype.Ptr(lltype.FuncType([rffi.VOIDP], lltype.Void)) + at cpython_api([thread_func, rffi.VOIDP], rffi.INT_real, error=-1) +def PyThread_start_new_thread(space, func, arg): + return -1 diff --git a/pypy/module/cpyext/test/test_dictobject.py b/pypy/module/cpyext/test/test_dictobject.py --- a/pypy/module/cpyext/test/test_dictobject.py +++ b/pypy/module/cpyext/test/test_dictobject.py @@ -112,6 +112,37 @@ assert space.eq_w(space.len(w_copy), space.len(w_dict)) assert space.eq_w(w_copy, w_dict) + def test_iterkeys(self, space, api): + w_dict = space.sys.getdict(space) + py_dict = make_ref(space, w_dict) + + ppos = lltype.malloc(Py_ssize_tP.TO, 1, flavor='raw') + pkey = lltype.malloc(PyObjectP.TO, 1, flavor='raw') + pvalue = lltype.malloc(PyObjectP.TO, 1, flavor='raw') + + keys_w = [] + values_w = [] + try: + ppos[0] = 0 + while api.PyDict_Next(w_dict, ppos, pkey, None): + w_key = from_ref(space, pkey[0]) + keys_w.append(w_key) + ppos[0] = 0 + while api.PyDict_Next(w_dict, ppos, None, pvalue): + w_value = from_ref(space, pvalue[0]) + values_w.append(w_value) + finally: + lltype.free(ppos, flavor='raw') + lltype.free(pkey, flavor='raw') + lltype.free(pvalue, flavor='raw') + + api.Py_DecRef(py_dict) # release borrowed references + + assert space.eq_w(space.newlist(keys_w), + space.call_method(w_dict, "keys")) + assert space.eq_w(space.newlist(values_w), + space.call_method(w_dict, "values")) + def test_dictproxy(self, space, api): w_dict = space.sys.get('modules') w_proxy = api.PyDictProxy_New(w_dict) diff --git a/pypy/module/cpyext/test/test_eval.py b/pypy/module/cpyext/test/test_eval.py --- a/pypy/module/cpyext/test/test_eval.py +++ b/pypy/module/cpyext/test/test_eval.py @@ -2,9 +2,10 @@ from pypy.module.cpyext.test.test_cpyext import AppTestCpythonExtensionBase from pypy.module.cpyext.test.test_api import BaseApiTest from pypy.module.cpyext.eval import ( - Py_single_input, Py_file_input, Py_eval_input) + Py_single_input, Py_file_input, Py_eval_input, PyCompilerFlags) from pypy.module.cpyext.api import fopen, fclose, fileno, Py_ssize_tP from pypy.interpreter.gateway import interp2app +from pypy.interpreter.astcompiler import consts from pypy.tool.udir import udir import sys, os @@ -63,6 +64,22 @@ assert space.int_w(w_res) == 10 + def test_evalcode(self, space, api): + w_f = space.appexec([], """(): + def f(*args): + assert isinstance(args, tuple) + return len(args) + 8 + return f + """) + + w_t = space.newtuple([space.wrap(1), space.wrap(2)]) + w_globals = space.newdict() + w_locals = space.newdict() + space.setitem(w_locals, space.wrap("args"), w_t) + w_res = api.PyEval_EvalCode(w_f.code, w_globals, w_locals) + + assert space.int_w(w_res) == 10 + def test_run_simple_string(self, space, api): def run(code): buf = rffi.str2charp(code) @@ -96,6 +113,16 @@ assert 42 * 43 == space.unwrap( api.PyObject_GetItem(w_globals, space.wrap("a"))) + def test_run_string_flags(self, space, api): + flags = lltype.malloc(PyCompilerFlags, flavor='raw') + flags.c_cf_flags = rffi.cast(rffi.INT, consts.PyCF_SOURCE_IS_UTF8) + w_globals = space.newdict() + api.PyRun_StringFlags("a = u'caf\xc3\xa9'", Py_single_input, + w_globals, w_globals, flags) + w_a = space.getitem(w_globals, space.wrap("a")) + assert space.unwrap(w_a) == u'caf\xe9' + lltype.free(flags, flavor='raw') + def test_run_file(self, space, api): filepath = udir / "cpyext_test_runfile.py" filepath.write("raise ZeroDivisionError") @@ -256,3 +283,21 @@ print dir(mod) print mod.__dict__ assert mod.f(42) == 47 + + def 
test_merge_compiler_flags(self): + module = self.import_extension('foo', [ + ("get_flags", "METH_NOARGS", + """ + PyCompilerFlags flags; + flags.cf_flags = 0; + int result = PyEval_MergeCompilerFlags(&flags); + return Py_BuildValue("ii", result, flags.cf_flags); + """), + ]) + assert module.get_flags() == (0, 0) + + ns = {'module':module} + exec """from __future__ import division \nif 1: + def nested_flags(): + return module.get_flags()""" in ns + assert ns['nested_flags']() == (1, 0x2000) # CO_FUTURE_DIVISION diff --git a/pypy/module/cpyext/test/test_funcobject.py b/pypy/module/cpyext/test/test_funcobject.py --- a/pypy/module/cpyext/test/test_funcobject.py +++ b/pypy/module/cpyext/test/test_funcobject.py @@ -81,6 +81,14 @@ rffi.free_charp(filename) rffi.free_charp(funcname) + def test_getnumfree(self, space, api): + w_function = space.appexec([], """(): + a = 5 + def method(x): return a, x + return method + """) + assert api.PyCode_GetNumFree(w_function.code) == 1 + def test_classmethod(self, space, api): w_function = space.appexec([], """(): def method(x): return x diff --git a/pypy/module/cpyext/test/test_unicodeobject.py b/pypy/module/cpyext/test/test_unicodeobject.py --- a/pypy/module/cpyext/test/test_unicodeobject.py +++ b/pypy/module/cpyext/test/test_unicodeobject.py @@ -204,8 +204,18 @@ assert api.Py_UNICODE_ISSPACE(unichr(char)) assert not api.Py_UNICODE_ISSPACE(u'a') + assert api.Py_UNICODE_ISALPHA(u'a') + assert not api.Py_UNICODE_ISALPHA(u'0') + assert api.Py_UNICODE_ISALNUM(u'a') + assert api.Py_UNICODE_ISALNUM(u'0') + assert not api.Py_UNICODE_ISALNUM(u'+') + assert api.Py_UNICODE_ISDECIMAL(u'\u0660') assert not api.Py_UNICODE_ISDECIMAL(u'a') + assert api.Py_UNICODE_ISDIGIT(u'9') + assert not api.Py_UNICODE_ISDIGIT(u'@') + assert api.Py_UNICODE_ISNUMERIC(u'9') + assert not api.Py_UNICODE_ISNUMERIC(u'@') for char in [0x0a, 0x0d, 0x1c, 0x1d, 0x1e, 0x85, 0x2028, 0x2029]: assert api.Py_UNICODE_ISLINEBREAK(unichr(char)) @@ -216,6 +226,9 @@ assert not api.Py_UNICODE_ISUPPER(u'a') assert not api.Py_UNICODE_ISLOWER(u'�') assert api.Py_UNICODE_ISUPPER(u'�') + assert not api.Py_UNICODE_ISTITLE(u'A') + assert api.Py_UNICODE_ISTITLE( + u'\N{LATIN CAPITAL LETTER L WITH SMALL LETTER J}') def test_TOLOWER(self, space, api): assert api.Py_UNICODE_TOLOWER(u'�') == u'�' @@ -429,3 +442,18 @@ w_char = api.PyUnicode_FromOrdinal(0xFFFF) assert space.unwrap(w_char) == u'\uFFFF' + def test_replace(self, space, api): + w_str = space.wrap(u"abababab") + w_substr = space.wrap(u"a") + w_replstr = space.wrap(u"z") + assert u"zbzbabab" == space.unwrap( + api.PyUnicode_Replace(w_str, w_substr, w_replstr, 2)) + assert u"zbzbzbzb" == space.unwrap( + api.PyUnicode_Replace(w_str, w_substr, w_replstr, -1)) + + def test_tailmatch(self, space, api): + w_str = space.wrap(u"abcdef") + assert api.PyUnicode_Tailmatch(w_str, space.wrap("cde"), 2, 10, 1) == 1 + assert api.PyUnicode_Tailmatch(w_str, space.wrap("cde"), 1, 5, -1) == 1 + self.raises(space, api, TypeError, + api.PyUnicode_Tailmatch, w_str, space.wrap(3), 2, 10, 1) diff --git a/pypy/module/cpyext/unicodeobject.py b/pypy/module/cpyext/unicodeobject.py --- a/pypy/module/cpyext/unicodeobject.py +++ b/pypy/module/cpyext/unicodeobject.py @@ -12,7 +12,7 @@ make_typedescr, get_typedescr) from pypy.module.cpyext.stringobject import PyString_Check from pypy.module.sys.interp_encoding import setdefaultencoding -from pypy.objspace.std import unicodeobject, unicodetype +from pypy.objspace.std import unicodeobject, unicodetype, stringtype from pypy.rlib import runicode 
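(Illustrative aside, not part of the changeset: two quick plain-Python checks of the string behaviour that test_replace and test_tailmatch above, and the unicodeobject.py additions below, rely on. unicode.replace treats a negative count as "replace everything", which is how maxcount == -1 is honoured, and startswith/endswith accept a [start:end] window, which is what the Tailmatch helper is built on.)

s = u"abababab"
assert s.replace(u"a", u"z", 2) == u"zbzbabab"    # maxcount limits replacements
assert s.replace(u"a", u"z", -1) == u"zbzbzbzb"   # a negative count replaces all

t = u"abcdef"
assert t.startswith(u"cde", 2, 10)                # match tested within [start:end]
assert t.endswith(u"cde", 1, 5)

(End of aside; the archived diff continues below.)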
from pypy.tool.sourcetools import func_renamer import sys @@ -89,6 +89,11 @@ return unicodedb.isspace(ord(ch)) @cpython_api([Py_UNICODE], rffi.INT_real, error=CANNOT_FAIL) +def Py_UNICODE_ISALPHA(space, ch): + """Return 1 or 0 depending on whether ch is an alphabetic character.""" + return unicodedb.isalpha(ord(ch)) + + at cpython_api([Py_UNICODE], rffi.INT_real, error=CANNOT_FAIL) def Py_UNICODE_ISALNUM(space, ch): """Return 1 or 0 depending on whether ch is an alphanumeric character.""" return unicodedb.isalnum(ord(ch)) @@ -104,6 +109,16 @@ return unicodedb.isdecimal(ord(ch)) @cpython_api([Py_UNICODE], rffi.INT_real, error=CANNOT_FAIL) +def Py_UNICODE_ISDIGIT(space, ch): + """Return 1 or 0 depending on whether ch is a digit character.""" + return unicodedb.isdigit(ord(ch)) + + at cpython_api([Py_UNICODE], rffi.INT_real, error=CANNOT_FAIL) +def Py_UNICODE_ISNUMERIC(space, ch): + """Return 1 or 0 depending on whether ch is a numeric character.""" + return unicodedb.isnumeric(ord(ch)) + + at cpython_api([Py_UNICODE], rffi.INT_real, error=CANNOT_FAIL) def Py_UNICODE_ISLOWER(space, ch): """Return 1 or 0 depending on whether ch is a lowercase character.""" return unicodedb.islower(ord(ch)) @@ -113,6 +128,11 @@ """Return 1 or 0 depending on whether ch is an uppercase character.""" return unicodedb.isupper(ord(ch)) + at cpython_api([Py_UNICODE], rffi.INT_real, error=CANNOT_FAIL) +def Py_UNICODE_ISTITLE(space, ch): + """Return 1 or 0 depending on whether ch is a titlecase character.""" + return unicodedb.istitle(ord(ch)) + @cpython_api([Py_UNICODE], Py_UNICODE, error=CANNOT_FAIL) def Py_UNICODE_TOLOWER(space, ch): """Return the character ch converted to lower case.""" @@ -155,6 +175,11 @@ except KeyError: return -1.0 + at cpython_api([], Py_UNICODE, error=CANNOT_FAIL) +def PyUnicode_GetMax(space): + """Get the maximum ordinal for a Unicode character.""" + return unichr(runicode.MAXUNICODE) + @cpython_api([PyObject], rffi.CCHARP, error=CANNOT_FAIL) def PyUnicode_AS_DATA(space, ref): """Return a pointer to the internal buffer of the object. o has to be a @@ -548,6 +573,28 @@ @cpython_api([PyObject, PyObject], PyObject) def PyUnicode_Join(space, w_sep, w_seq): - """Join a sequence of strings using the given separator and return the resulting - Unicode string.""" + """Join a sequence of strings using the given separator and return + the resulting Unicode string.""" return space.call_method(w_sep, 'join', w_seq) + + at cpython_api([PyObject, PyObject, PyObject, Py_ssize_t], PyObject) +def PyUnicode_Replace(space, w_str, w_substr, w_replstr, maxcount): + """Replace at most maxcount occurrences of substr in str with replstr and + return the resulting Unicode object. maxcount == -1 means replace all + occurrences.""" + return space.call_method(w_str, "replace", w_substr, w_replstr, + space.wrap(maxcount)) + + at cpython_api([PyObject, PyObject, Py_ssize_t, Py_ssize_t, rffi.INT_real], + rffi.INT_real, error=-1) +def PyUnicode_Tailmatch(space, w_str, w_substr, start, end, direction): + """Return 1 if substr matches str[start:end] at the given tail end + (direction == -1 means to do a prefix match, direction == 1 a + suffix match), 0 otherwise. 
Return -1 if an error occurred.""" + str = space.unicode_w(w_str) + substr = space.unicode_w(w_substr) + if rffi.cast(lltype.Signed, direction) >= 0: + return stringtype.stringstartswith(str, substr, start, end) + else: + return stringtype.stringendswith(str, substr, start, end) + diff --git a/pypy/module/micronumpy/__init__.py b/pypy/module/micronumpy/__init__.py --- a/pypy/module/micronumpy/__init__.py +++ b/pypy/module/micronumpy/__init__.py @@ -67,10 +67,12 @@ ("arccos", "arccos"), ("arcsin", "arcsin"), ("arctan", "arctan"), + ("arccosh", "arccosh"), ("arcsinh", "arcsinh"), ("arctanh", "arctanh"), ("copysign", "copysign"), ("cos", "cos"), + ("cosh", "cosh"), ("divide", "divide"), ("true_divide", "true_divide"), ("equal", "equal"), @@ -90,9 +92,11 @@ ("reciprocal", "reciprocal"), ("sign", "sign"), ("sin", "sin"), + ("sinh", "sinh"), ("subtract", "subtract"), ('sqrt', 'sqrt'), ("tan", "tan"), + ("tanh", "tanh"), ('bitwise_and', 'bitwise_and'), ('bitwise_or', 'bitwise_or'), ('bitwise_xor', 'bitwise_xor'), diff --git a/pypy/module/micronumpy/interp_boxes.py b/pypy/module/micronumpy/interp_boxes.py --- a/pypy/module/micronumpy/interp_boxes.py +++ b/pypy/module/micronumpy/interp_boxes.py @@ -1,6 +1,6 @@ from pypy.interpreter.baseobjspace import Wrappable from pypy.interpreter.error import operationerrfmt -from pypy.interpreter.gateway import interp2app +from pypy.interpreter.gateway import interp2app, unwrap_spec from pypy.interpreter.typedef import TypeDef from pypy.objspace.std.floattype import float_typedef from pypy.objspace.std.inttype import int_typedef @@ -29,7 +29,6 @@ def convert_to(self, dtype): return dtype.box(self.value) - class W_GenericBox(Wrappable): _attrs_ = () @@ -39,10 +38,10 @@ ) def descr_str(self, space): - return self.descr_repr(space) + return space.wrap(self.get_dtype(space).itemtype.str_format(self)) - def descr_repr(self, space): - return space.wrap(self.get_dtype(space).itemtype.str_format(self)) + def descr_format(self, space, w_spec): + return space.format(self.item(space), w_spec) def descr_int(self, space): box = self.convert_to(W_LongBox.get_dtype(space)) @@ -187,6 +186,10 @@ descr__new__, get_dtype = new_dtype_getter("float64") + at unwrap_spec(self=W_GenericBox) +def descr_index(space, self): + return space.index(self.item(space)) + W_GenericBox.typedef = TypeDef("generic", __module__ = "numpypy", @@ -194,7 +197,8 @@ __new__ = interp2app(W_GenericBox.descr__new__.im_func), __str__ = interp2app(W_GenericBox.descr_str), - __repr__ = interp2app(W_GenericBox.descr_repr), + __repr__ = interp2app(W_GenericBox.descr_str), + __format__ = interp2app(W_GenericBox.descr_format), __int__ = interp2app(W_GenericBox.descr_int), __float__ = interp2app(W_GenericBox.descr_float), __nonzero__ = interp2app(W_GenericBox.descr_nonzero), @@ -245,6 +249,8 @@ W_BoolBox.typedef = TypeDef("bool_", W_GenericBox.typedef, __module__ = "numpypy", __new__ = interp2app(W_BoolBox.descr__new__.im_func), + + __index__ = interp2app(descr_index), ) W_NumberBox.typedef = TypeDef("number", W_GenericBox.typedef, @@ -266,36 +272,43 @@ W_Int8Box.typedef = TypeDef("int8", W_SignedIntegerBox.typedef, __module__ = "numpypy", __new__ = interp2app(W_Int8Box.descr__new__.im_func), + __index__ = interp2app(descr_index), ) W_UInt8Box.typedef = TypeDef("uint8", W_UnsignedIntegerBox.typedef, __module__ = "numpypy", __new__ = interp2app(W_UInt8Box.descr__new__.im_func), + __index__ = interp2app(descr_index), ) W_Int16Box.typedef = TypeDef("int16", W_SignedIntegerBox.typedef, __module__ = "numpypy", __new__ = 
interp2app(W_Int16Box.descr__new__.im_func), + __index__ = interp2app(descr_index), ) W_UInt16Box.typedef = TypeDef("uint16", W_UnsignedIntegerBox.typedef, __module__ = "numpypy", __new__ = interp2app(W_UInt16Box.descr__new__.im_func), + __index__ = interp2app(descr_index), ) W_Int32Box.typedef = TypeDef("int32", (W_SignedIntegerBox.typedef,) + MIXIN_32, __module__ = "numpypy", __new__ = interp2app(W_Int32Box.descr__new__.im_func), + __index__ = interp2app(descr_index), ) W_UInt32Box.typedef = TypeDef("uint32", W_UnsignedIntegerBox.typedef, __module__ = "numpypy", __new__ = interp2app(W_UInt32Box.descr__new__.im_func), + __index__ = interp2app(descr_index), ) W_Int64Box.typedef = TypeDef("int64", (W_SignedIntegerBox.typedef,) + MIXIN_64, __module__ = "numpypy", __new__ = interp2app(W_Int64Box.descr__new__.im_func), + __index__ = interp2app(descr_index), ) if LONG_BIT == 32: @@ -308,6 +321,7 @@ W_UInt64Box.typedef = TypeDef("uint64", W_UnsignedIntegerBox.typedef, __module__ = "numpypy", __new__ = interp2app(W_UInt64Box.descr__new__.im_func), + __index__ = interp2app(descr_index), ) W_InexactBox.typedef = TypeDef("inexact", W_NumberBox.typedef, diff --git a/pypy/module/micronumpy/interp_support.py b/pypy/module/micronumpy/interp_support.py --- a/pypy/module/micronumpy/interp_support.py +++ b/pypy/module/micronumpy/interp_support.py @@ -3,7 +3,7 @@ from pypy.rpython.lltypesystem import lltype, rffi from pypy.module.micronumpy import interp_dtype from pypy.objspace.std.strutil import strip_spaces - +from pypy.rlib import jit FLOAT_SIZE = rffi.sizeof(lltype.Float) @@ -72,11 +72,20 @@ "string is smaller than requested size")) a = W_NDimArray(count, [count], dtype=dtype) - for i in range(count): + fromstring_loop(a, count, dtype, itemsize, s) + return space.wrap(a) + +fromstring_driver = jit.JitDriver(greens=[], reds=['count', 'i', 'itemsize', + 'dtype', 's', 'a']) + +def fromstring_loop(a, count, dtype, itemsize, s): + i = 0 + while i < count: + fromstring_driver.jit_merge_point(a=a, count=count, dtype=dtype, + itemsize=itemsize, s=s, i=i) val = dtype.itemtype.runpack_str(s[i*itemsize:i*itemsize + itemsize]) a.dtype.setitem(a.storage, i, val) - - return space.wrap(a) + i += 1 @unwrap_spec(s=str, count=int, sep=str) def fromstring(space, s, w_dtype=None, count=-1, sep=''): diff --git a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py --- a/pypy/module/micronumpy/interp_ufuncs.py +++ b/pypy/module/micronumpy/interp_ufuncs.py @@ -435,7 +435,11 @@ ("arcsin", "arcsin", 1, {"promote_to_float": True}), ("arccos", "arccos", 1, {"promote_to_float": True}), ("arctan", "arctan", 1, {"promote_to_float": True}), + ("sinh", "sinh", 1, {"promote_to_float": True}), + ("cosh", "cosh", 1, {"promote_to_float": True}), + ("tanh", "tanh", 1, {"promote_to_float": True}), ("arcsinh", "arcsinh", 1, {"promote_to_float": True}), + ("arccosh", "arccosh", 1, {"promote_to_float": True}), ("arctanh", "arctanh", 1, {"promote_to_float": True}), ]: self.add_ufunc(space, *ufunc_def) diff --git a/pypy/module/micronumpy/test/test_dtypes.py b/pypy/module/micronumpy/test/test_dtypes.py --- a/pypy/module/micronumpy/test/test_dtypes.py +++ b/pypy/module/micronumpy/test/test_dtypes.py @@ -371,6 +371,8 @@ assert type(a[1]) is numpy.float64 assert numpy.dtype(float).type is numpy.float64 + assert "{:3f}".format(numpy.float64(3)) == "3.000000" + assert numpy.float64(2.0) == 2.0 assert numpy.float64('23.4') == numpy.float64(23.4) raises(ValueError, numpy.float64, '23.2df') @@ -387,9 +389,9 @@ assert b.m() 
== 12 def test_long_as_index(self): - skip("waiting for removal of multimethods of __index__") - from _numpypy import int_ + from _numpypy import int_, float64 assert (1, 2, 3)[int_(1)] == 2 + raises(TypeError, lambda: (1, 2, 3)[float64(1)]) def test_int(self): import sys diff --git a/pypy/module/micronumpy/test/test_ufuncs.py b/pypy/module/micronumpy/test/test_ufuncs.py --- a/pypy/module/micronumpy/test/test_ufuncs.py +++ b/pypy/module/micronumpy/test/test_ufuncs.py @@ -310,6 +310,33 @@ b = arctan(a) assert math.isnan(b[0]) + def test_sinh(self): + import math + from _numpypy import array, sinh + + a = array([-1, 0, 1, float('inf'), float('-inf')]) + b = sinh(a) + for i in range(len(a)): + assert b[i] == math.sinh(a[i]) + + def test_cosh(self): + import math + from _numpypy import array, cosh + + a = array([-1, 0, 1, float('inf'), float('-inf')]) + b = cosh(a) + for i in range(len(a)): + assert b[i] == math.cosh(a[i]) + + def test_tanh(self): + import math + from _numpypy import array, tanh + + a = array([-1, 0, 1, float('inf'), float('-inf')]) + b = tanh(a) + for i in range(len(a)): + assert b[i] == math.tanh(a[i]) + def test_arcsinh(self): import math from _numpypy import arcsinh @@ -318,6 +345,15 @@ assert math.asinh(v) == arcsinh(v) assert math.isnan(arcsinh(float("nan"))) + def test_arccosh(self): + import math + from _numpypy import arccosh + + for v in [1.0, 1.1, 2]: + assert math.acosh(v) == arccosh(v) + for v in [-1.0, 0, .99]: + assert math.isnan(arccosh(v)) + def test_arctanh(self): import math from _numpypy import arctanh diff --git a/pypy/module/micronumpy/test/test_zjit.py b/pypy/module/micronumpy/test/test_zjit.py --- a/pypy/module/micronumpy/test/test_zjit.py +++ b/pypy/module/micronumpy/test/test_zjit.py @@ -479,38 +479,3 @@ 'int_sub': 3, 'jump': 1, 'setinteriorfield_raw': 1}) - - -class TestNumpyOld(LLJitMixin): - def setup_class(cls): - py.test.skip("old") - from pypy.module.micronumpy.compile import FakeSpace - from pypy.module.micronumpy.interp_dtype import get_dtype_cache - - cls.space = FakeSpace() - cls.float64_dtype = get_dtype_cache(cls.space).w_float64dtype - - def test_int32_sum(self): - py.test.skip("pypy/jit/backend/llimpl.py needs to be changed to " - "deal correctly with int dtypes for this test to " - "work. 
skip for now until someone feels up to the task") - space = self.space - float64_dtype = self.float64_dtype - int32_dtype = self.int32_dtype - - def f(n): - if NonConstant(False): - dtype = float64_dtype - else: - dtype = int32_dtype - ar = W_NDimArray(n, [n], dtype=dtype) - i = 0 - while i < n: - ar.get_concrete().setitem(i, int32_dtype.box(7)) - i += 1 - v = ar.descr_add(space, ar).descr_sum(space) - assert isinstance(v, IntObject) - return v.intval - - result = self.meta_interp(f, [5], listops=True, backendopt=True) - assert result == f(5) diff --git a/pypy/module/micronumpy/types.py b/pypy/module/micronumpy/types.py --- a/pypy/module/micronumpy/types.py +++ b/pypy/module/micronumpy/types.py @@ -489,10 +489,28 @@ return math.atan(v) @simple_unary_op + def sinh(self, v): + return math.sinh(v) + + @simple_unary_op + def cosh(self, v): + return math.cosh(v) + + @simple_unary_op + def tanh(self, v): + return math.tanh(v) + + @simple_unary_op def arcsinh(self, v): return math.asinh(v) @simple_unary_op + def arccosh(self, v): + if v < 1.0: + return rfloat.NAN + return math.acosh(v) + + @simple_unary_op def arctanh(self, v): if v == 1.0 or v == -1.0: return math.copysign(rfloat.INFINITY, v) diff --git a/pypy/module/test_lib_pypy/ctypes_tests/test_fastpath.py b/pypy/module/test_lib_pypy/ctypes_tests/test_fastpath.py --- a/pypy/module/test_lib_pypy/ctypes_tests/test_fastpath.py +++ b/pypy/module/test_lib_pypy/ctypes_tests/test_fastpath.py @@ -97,6 +97,16 @@ tf_b.errcheck = errcheck assert tf_b(-126) == 'hello' + def test_array_to_ptr(self): + ARRAY = c_int * 8 + func = dll._testfunc_ai8 + func.restype = POINTER(c_int) + func.argtypes = [ARRAY] + array = ARRAY(1, 2, 3, 4, 5, 6, 7, 8) + ptr = func(array) + assert ptr[0] == 1 + assert ptr[7] == 8 + class TestFallbackToSlowpath(BaseCTypesTestChecker): diff --git a/pypy/module/test_lib_pypy/ctypes_tests/test_prototypes.py b/pypy/module/test_lib_pypy/ctypes_tests/test_prototypes.py --- a/pypy/module/test_lib_pypy/ctypes_tests/test_prototypes.py +++ b/pypy/module/test_lib_pypy/ctypes_tests/test_prototypes.py @@ -246,6 +246,14 @@ def func(): pass CFUNCTYPE(None, c_int * 3)(func) + def test_array_to_ptr_wrongtype(self): + ARRAY = c_byte * 8 + func = testdll._testfunc_ai8 + func.restype = POINTER(c_int) + func.argtypes = [c_int * 8] + array = ARRAY(1, 2, 3, 4, 5, 6, 7, 8) + py.test.raises(ArgumentError, "func(array)") + ################################################################ if __name__ == '__main__': diff --git a/pypy/module/test_lib_pypy/test_datetime.py b/pypy/module/test_lib_pypy/test_datetime.py --- a/pypy/module/test_lib_pypy/test_datetime.py +++ b/pypy/module/test_lib_pypy/test_datetime.py @@ -3,7 +3,7 @@ import py import time -import datetime +from lib_pypy import datetime import copy import os @@ -43,4 +43,4 @@ dt = datetime.datetime.utcnow() assert type(dt.microsecond) is int - copy.copy(dt) \ No newline at end of file + copy.copy(dt) diff --git a/pypy/objspace/std/dictmultiobject.py b/pypy/objspace/std/dictmultiobject.py --- a/pypy/objspace/std/dictmultiobject.py +++ b/pypy/objspace/std/dictmultiobject.py @@ -142,6 +142,17 @@ else: return result + def popitem(self, w_dict): + # this is a bad implementation: if we call popitem() repeatedly, + # it ends up taking n**2 time, because the next() calls below + # will take longer and longer. But all interesting strategies + # provide a better one. 
+ space = self.space + iterator = self.iter(w_dict) + w_key, w_value = iterator.next() + self.delitem(w_dict, w_key) + return (w_key, w_value) + def clear(self, w_dict): strategy = self.space.fromcache(EmptyDictStrategy) storage = strategy.get_empty_storage() diff --git a/pypy/objspace/std/dictproxyobject.py b/pypy/objspace/std/dictproxyobject.py --- a/pypy/objspace/std/dictproxyobject.py +++ b/pypy/objspace/std/dictproxyobject.py @@ -3,7 +3,7 @@ from pypy.objspace.std.dictmultiobject import W_DictMultiObject, IteratorImplementation from pypy.objspace.std.dictmultiobject import DictStrategy from pypy.objspace.std.typeobject import unwrap_cell -from pypy.interpreter.error import OperationError +from pypy.interpreter.error import OperationError, operationerrfmt from pypy.rlib import rerased @@ -44,7 +44,8 @@ raise if not w_type.is_cpytype(): raise - # xxx obscure workaround: allow cpyext to write to type->tp_dict. + # xxx obscure workaround: allow cpyext to write to type->tp_dict + # xxx even in the case of a builtin type. # xxx like CPython, we assume that this is only done early after # xxx the type is created, and we don't invalidate any cache. w_type.dict_w[key] = w_value @@ -86,8 +87,14 @@ for (key, w_value) in self.unerase(w_dict.dstorage).dict_w.iteritems()] def clear(self, w_dict): - self.unerase(w_dict.dstorage).dict_w.clear() - self.unerase(w_dict.dstorage).mutated(None) + space = self.space + w_type = self.unerase(w_dict.dstorage) + if (not space.config.objspace.std.mutable_builtintypes + and not w_type.is_heaptype()): + msg = "can't clear dictionary of type '%s'" + raise operationerrfmt(space.w_TypeError, msg, w_type.name) + w_type.dict_w.clear() + w_type.mutated(None) class DictProxyIteratorImplementation(IteratorImplementation): def __init__(self, space, strategy, dictimplementation): diff --git a/pypy/objspace/std/test/test_dictproxy.py b/pypy/objspace/std/test/test_dictproxy.py --- a/pypy/objspace/std/test/test_dictproxy.py +++ b/pypy/objspace/std/test/test_dictproxy.py @@ -22,6 +22,9 @@ assert NotEmpty.string == 1 raises(TypeError, 'NotEmpty.__dict__.setdefault(15, 1)') + key, value = NotEmpty.__dict__.popitem() + assert (key == 'a' and value == 1) or (key == 'b' and value == 4) + def test_dictproxyeq(self): class a(object): pass @@ -43,6 +46,11 @@ assert s1 == s2 assert s1.startswith('{') and s1.endswith('}') + def test_immutable_dict_on_builtin_type(self): + raises(TypeError, "int.__dict__['a'] = 1") + raises(TypeError, int.__dict__.popitem) + raises(TypeError, int.__dict__.clear) + class AppTestUserObjectMethodCache(AppTestUserObject): def setup_class(cls): cls.space = gettestobjspace( diff --git a/pypy/objspace/std/test/test_typeobject.py b/pypy/objspace/std/test/test_typeobject.py --- a/pypy/objspace/std/test/test_typeobject.py +++ b/pypy/objspace/std/test/test_typeobject.py @@ -993,7 +993,9 @@ raises(TypeError, setattr, list, 'append', 42) raises(TypeError, setattr, list, 'foobar', 42) raises(TypeError, delattr, dict, 'keys') - + raises(TypeError, 'int.__dict__["a"] = 1') + raises(TypeError, 'int.__dict__.clear()') + def test_nontype_in_mro(self): class OldStyle: pass diff --git a/pypy/rlib/debug.py b/pypy/rlib/debug.py --- a/pypy/rlib/debug.py +++ b/pypy/rlib/debug.py @@ -19,14 +19,24 @@ hop.exception_cannot_occur() hop.genop('debug_assert', vlist) -def fatalerror(msg, traceback=False): +def fatalerror(msg): + # print the RPython traceback and abort with a fatal error from pypy.rpython.lltypesystem import lltype from pypy.rpython.lltypesystem.lloperation import llop - 
if traceback: - llop.debug_print_traceback(lltype.Void) + llop.debug_print_traceback(lltype.Void) llop.debug_fatalerror(lltype.Void, msg) fatalerror._dont_inline_ = True -fatalerror._annspecialcase_ = 'specialize:arg(1)' +fatalerror._jit_look_inside_ = False +fatalerror._annenforceargs_ = [str] + +def fatalerror_notb(msg): + # a variant of fatalerror() that doesn't print the RPython traceback + from pypy.rpython.lltypesystem import lltype + from pypy.rpython.lltypesystem.lloperation import llop + llop.debug_fatalerror(lltype.Void, msg) +fatalerror_notb._dont_inline_ = True +fatalerror_notb._jit_look_inside_ = False +fatalerror_notb._annenforceargs_ = [str] class DebugLog(list): diff --git a/pypy/rlib/jit.py b/pypy/rlib/jit.py --- a/pypy/rlib/jit.py +++ b/pypy/rlib/jit.py @@ -450,6 +450,7 @@ assert v in self.reds self._alllivevars = dict.fromkeys( [name for name in self.greens + self.reds if '.' not in name]) + self._heuristic_order = {} # check if 'reds' and 'greens' are ordered self._make_extregistryentries() self.get_jitcell_at = get_jitcell_at self.set_jitcell_at = set_jitcell_at @@ -461,13 +462,61 @@ def _freeze_(self): return True + def _check_arguments(self, livevars): + assert dict.fromkeys(livevars) == self._alllivevars + # check heuristically that 'reds' and 'greens' are ordered as + # the JIT will need them to be: first INTs, then REFs, then + # FLOATs. + if len(self._heuristic_order) < len(livevars): + from pypy.rlib.rarithmetic import (r_singlefloat, r_longlong, + r_ulonglong, r_uint) + added = False + for var, value in livevars.items(): + if var not in self._heuristic_order: + if (r_ulonglong is not r_uint and + isinstance(value, (r_longlong, r_ulonglong))): + assert 0, ("should not pass a r_longlong argument for " + "now, because on 32-bit machines it needs " + "to be ordered as a FLOAT but on 64-bit " + "machines as an INT") + elif isinstance(value, (int, long, r_singlefloat)): + kind = '1:INT' + elif isinstance(value, float): + kind = '3:FLOAT' + elif isinstance(value, (str, unicode)) and len(value) != 1: + kind = '2:REF' + elif isinstance(value, (list, dict)): + kind = '2:REF' + elif (hasattr(value, '__class__') + and value.__class__.__module__ != '__builtin__'): + if hasattr(value, '_freeze_'): + continue # value._freeze_() is better not called + elif getattr(value, '_alloc_flavor_', 'gc') == 'gc': + kind = '2:REF' + else: + kind = '1:INT' + else: + continue + self._heuristic_order[var] = kind + added = True + if added: + for color in ('reds', 'greens'): + lst = getattr(self, color) + allkinds = [self._heuristic_order.get(name, '?') + for name in lst] + kinds = [k for k in allkinds if k != '?'] + assert kinds == sorted(kinds), ( + "bad order of %s variables in the jitdriver: " + "must be INTs, REFs, FLOATs; got %r" % + (color, allkinds)) + def jit_merge_point(_self, **livevars): # special-cased by ExtRegistryEntry - assert dict.fromkeys(livevars) == _self._alllivevars + _self._check_arguments(livevars) def can_enter_jit(_self, **livevars): # special-cased by ExtRegistryEntry - assert dict.fromkeys(livevars) == _self._alllivevars + _self._check_arguments(livevars) def loop_header(self): # special-cased by ExtRegistryEntry diff --git a/pypy/rlib/objectmodel.py b/pypy/rlib/objectmodel.py --- a/pypy/rlib/objectmodel.py +++ b/pypy/rlib/objectmodel.py @@ -23,9 +23,11 @@ class _Specialize(object): def memo(self): - """ Specialize functions based on argument values. All arguments has - to be constant at the compile time. 
The whole function call is replaced - by a call result then. + """ Specialize the function based on argument values. All arguments + have to be either constants or PBCs (i.e. instances of classes with a + _freeze_ method returning True). The function call is replaced by + just its result, or in case several PBCs are used, by some fast + look-up of the result. """ def decorated_func(func): func._annspecialcase_ = 'specialize:memo' @@ -33,8 +35,8 @@ return decorated_func def arg(self, *args): - """ Specialize function based on values of given positions of arguments. - They must be compile-time constants in order to work. + """ Specialize the function based on the values of given positions + of arguments. They must be compile-time constants in order to work. There will be a copy of provided function for each combination of given arguments on positions in args (that can lead to @@ -82,8 +84,7 @@ return decorated_func def ll_and_arg(self, *args): - """ This is like ll(), but instead of specializing on all arguments, - specializes on only the arguments at the given positions + """ This is like ll(), and additionally like arg(...). """ def decorated_func(func): func._annspecialcase_ = 'specialize:ll_and_arg' + self._wrap(args) diff --git a/pypy/rlib/test/test_jit.py b/pypy/rlib/test/test_jit.py --- a/pypy/rlib/test/test_jit.py +++ b/pypy/rlib/test/test_jit.py @@ -2,6 +2,7 @@ from pypy.conftest import option from pypy.rlib.jit import hint, we_are_jitted, JitDriver, elidable_promote from pypy.rlib.jit import JitHintError, oopspec, isconstant +from pypy.rlib.rarithmetic import r_uint from pypy.translator.translator import TranslationContext, graphof from pypy.rpython.test.tool import BaseRtypingTest, LLRtypeMixin, OORtypeMixin from pypy.rpython.lltypesystem import lltype @@ -146,6 +147,43 @@ res = self.interpret(f, [-234]) assert res == 1 + def test_argument_order_ok(self): + myjitdriver = JitDriver(greens=['i1', 'r1', 'f1'], reds=[]) + class A(object): + pass + myjitdriver.jit_merge_point(i1=42, r1=A(), f1=3.5) + # assert did not raise + + def test_argument_order_wrong(self): + myjitdriver = JitDriver(greens=['r1', 'i1', 'f1'], reds=[]) + class A(object): + pass + e = raises(AssertionError, + myjitdriver.jit_merge_point, i1=42, r1=A(), f1=3.5) + + def test_argument_order_more_precision_later(self): + myjitdriver = JitDriver(greens=['r1', 'i1', 'r2', 'f1'], reds=[]) + class A(object): + pass + myjitdriver.jit_merge_point(i1=42, r1=None, r2=None, f1=3.5) + e = raises(AssertionError, + myjitdriver.jit_merge_point, i1=42, r1=A(), r2=None, f1=3.5) + assert "got ['2:REF', '1:INT', '?', '3:FLOAT']" in repr(e.value) + + def test_argument_order_more_precision_later_2(self): + myjitdriver = JitDriver(greens=['r1', 'i1', 'r2', 'f1'], reds=[]) + class A(object): + pass + myjitdriver.jit_merge_point(i1=42, r1=None, r2=A(), f1=3.5) + e = raises(AssertionError, + myjitdriver.jit_merge_point, i1=42, r1=A(), r2=None, f1=3.5) + assert "got ['2:REF', '1:INT', '2:REF', '3:FLOAT']" in repr(e.value) + + def test_argument_order_accept_r_uint(self): + # this used to fail on 64-bit, because r_uint == r_ulonglong + myjitdriver = JitDriver(greens=['i1'], reds=[]) + myjitdriver.jit_merge_point(i1=r_uint(42)) + class TestJITLLtype(BaseTestJIT, LLRtypeMixin): pass diff --git a/pypy/rpython/memory/gc/generation.py b/pypy/rpython/memory/gc/generation.py --- a/pypy/rpython/memory/gc/generation.py +++ b/pypy/rpython/memory/gc/generation.py @@ -41,8 +41,8 @@ # the following values override the default arguments of __init__ when # 
translating to a real backend. - TRANSLATION_PARAMS = {'space_size': 8*1024*1024, # XXX adjust - 'nursery_size': 896*1024, + TRANSLATION_PARAMS = {'space_size': 8*1024*1024, # 8 MB + 'nursery_size': 3*1024*1024, # 3 MB 'min_nursery_size': 48*1024, 'auto_nursery_size': True} @@ -92,8 +92,9 @@ # the GC is fully setup now. The rest can make use of it. if self.auto_nursery_size: newsize = nursery_size_from_env() - if newsize <= 0: - newsize = env.estimate_best_nursery_size() + #if newsize <= 0: + # ---disabled--- just use the default value. + # newsize = env.estimate_best_nursery_size() if newsize > 0: self.set_nursery_size(newsize) diff --git a/pypy/tool/jitlogparser/parser.py b/pypy/tool/jitlogparser/parser.py --- a/pypy/tool/jitlogparser/parser.py +++ b/pypy/tool/jitlogparser/parser.py @@ -387,7 +387,7 @@ m = re.search('guard \d+', comm) name = m.group(0) else: - name = comm[2:comm.find(':')-1] + name = " ".join(comm[2:].split(" ", 2)[:2]) if name in dumps: bname, start_ofs, dump = dumps[name] loop.force_asm = (lambda dump=dump, start_ofs=start_ofs, diff --git a/pypy/tool/release/package.py b/pypy/tool/release/package.py --- a/pypy/tool/release/package.py +++ b/pypy/tool/release/package.py @@ -82,6 +82,9 @@ for file in ['LICENSE', 'README']: shutil.copy(str(basedir.join(file)), str(pypydir)) pypydir.ensure('include', dir=True) + if sys.platform == 'win32': + shutil.copyfile(str(pypy_c.dirpath().join("libpypy-c.lib")), + str(pypydir.join('include/python27.lib'))) # we want to put there all *.h and *.inl from trunk/include # and from pypy/_interfaces includedir = basedir.join('include') diff --git a/pypy/translator/c/gcc/trackgcroot.py b/pypy/translator/c/gcc/trackgcroot.py --- a/pypy/translator/c/gcc/trackgcroot.py +++ b/pypy/translator/c/gcc/trackgcroot.py @@ -472,7 +472,7 @@ IGNORE_OPS_WITH_PREFIXES = dict.fromkeys([ 'cmp', 'test', 'set', 'sahf', 'lahf', 'cld', 'std', - 'rep', 'movs', 'lods', 'stos', 'scas', 'cwde', 'prefetch', + 'rep', 'movs', 'movhp', 'lods', 'stos', 'scas', 'cwde', 'prefetch', # floating-point operations cannot produce GC pointers 'f', 'cvt', 'ucomi', 'comi', 'subs', 'subp' , 'adds', 'addp', 'xorp', @@ -484,7 +484,7 @@ 'shl', 'shr', 'sal', 'sar', 'rol', 'ror', 'mul', 'imul', 'div', 'idiv', 'bswap', 'bt', 'rdtsc', 'punpck', 'pshufd', 'pcmp', 'pand', 'psllw', 'pslld', 'psllq', - 'paddq', 'pinsr', + 'paddq', 'pinsr', 'pmul', 'psrl', # sign-extending moves should not produce GC pointers 'cbtw', 'cwtl', 'cwtd', 'cltd', 'cltq', 'cqto', # zero-extending moves should not produce GC pointers diff --git a/pypy/translator/driver.py b/pypy/translator/driver.py --- a/pypy/translator/driver.py +++ b/pypy/translator/driver.py @@ -559,6 +559,9 @@ newsoname = newexename.new(basename=soname.basename) shutil.copy(str(soname), str(newsoname)) self.log.info("copied: %s" % (newsoname,)) + if sys.platform == 'win32': + shutil.copyfile(str(soname.new(ext='lib')), + str(newsoname.new(ext='lib'))) self.c_entryp = newexename self.log.info('usession directory: %s' % (udir,)) self.log.info("created: %s" % (self.c_entryp,)) diff --git a/pypy/translator/sandbox/test/test_sandbox.py b/pypy/translator/sandbox/test/test_sandbox.py --- a/pypy/translator/sandbox/test/test_sandbox.py +++ b/pypy/translator/sandbox/test/test_sandbox.py @@ -145,9 +145,9 @@ g = pipe.stdin f = pipe.stdout expect(f, g, "ll_os.ll_os_getenv", ("PYPY_GENERATIONGC_NURSERY",), None) - if sys.platform.startswith('linux'): # on Mac, uses another (sandboxsafe) approach - expect(f, g, "ll_os.ll_os_open", ("/proc/cpuinfo", 0, 420), - 
OSError(5232, "xyz")) + #if sys.platform.startswith('linux'): + # expect(f, g, "ll_os.ll_os_open", ("/proc/cpuinfo", 0, 420), + # OSError(5232, "xyz")) expect(f, g, "ll_os.ll_os_getenv", ("PYPY_GC_DEBUG",), None) g.close() tail = f.read() From noreply at buildbot.pypy.org Fri Feb 24 19:41:44 2012 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Fri, 24 Feb 2012 19:41:44 +0100 (CET) Subject: [pypy-commit] pypy numpy-record-type-pure-python: handle string dtypes Message-ID: <20120224184144.C91D882366@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: numpy-record-type-pure-python Changeset: r52869:e716ec8de597 Date: 2012-02-24 13:41 -0500 http://bitbucket.org/pypy/pypy/changeset/e716ec8de597/ Log: handle string dtypes diff --git a/pypy/module/micronumpy/interp_dtype.py b/pypy/module/micronumpy/interp_dtype.py --- a/pypy/module/micronumpy/interp_dtype.py +++ b/pypy/module/micronumpy/interp_dtype.py @@ -138,6 +138,9 @@ else: raise OperationError(space.w_TypeError, space.wrap("data type not understood")) +def dtype_from_list(space, items_w): + pass + def is_byteorder(ch): return ch == ">" or ch == "<" or ch == "|" or ch == "=" @@ -188,6 +191,7 @@ typestr = space.str_w(w_obj) w_base_dtype = None + elsize = -1 if not typestr: raise invalid_dtype(space, w_obj) @@ -230,12 +234,21 @@ if (dtype.kind == kind and dtype.itemtype.get_element_size() == elsize): w_base_dtype = dtype + elsize = -1 break else: raise invalid_dtype(space, w_obj) if w_base_dtype is not None: - if elsize is not None: + if elsize != -1: + itemtype = w_base_dtype.itemtype.array(elsize) + w_base_dtype = W_Dtype( + itemtype, w_base_dtype.num, w_base_dtype.kind, + w_base_dtype.name + str(itemtype.get_element_size() * 8), + w_base_dtype.char, w_base_dtype.w_box_type, + byteorder=w_base_dtype.byteorder, + builtin_type=w_base_dtype.builtin_type + ) if endian != "=" and endian != nonnative_byteorder_prefix: endian = "=" if (endian != "=" and w_base_dtype.byteorder != "|" and diff --git a/pypy/module/micronumpy/types.py b/pypy/module/micronumpy/types.py --- a/pypy/module/micronumpy/types.py +++ b/pypy/module/micronumpy/types.py @@ -128,7 +128,7 @@ width, storage, i, offset, value) else: libffi.array_setitem_T(self.T, width, storage, i, offset, value) - + def store(self, arr, width, i, offset, box): self._write(arr.storage, width, i, offset, self.unbox(box)) @@ -229,7 +229,7 @@ class NonNativePrimitive(Primitive): _mixin_ = True - + def _read(self, storage, width, i, offset): return byteswap(Primitive._read(self, storage, width, i, offset)) @@ -448,7 +448,7 @@ class NonNativeInt64(BaseType, NonNativeInteger): T = rffi.LONGLONG BoxType = interp_boxes.W_Int64Box - format_code = "q" + format_code = "q" def _uint64_coerce(self, space, w_item): try: @@ -611,7 +611,7 @@ class NonNativeFloat32(BaseType, NonNativeFloat): T = rffi.FLOAT BoxType = interp_boxes.W_Float32Box - format_code = "f" + format_code = "f" class Float64(BaseType, Float): T = rffi.DOUBLE @@ -633,13 +633,16 @@ class BaseStringType(object): _mixin_ = True - + def __init__(self, size=0): self.size = size def get_element_size(self): return self.size * rffi.sizeof(self.T) + def array(self, size): + return type(self)(size) + class StringType(BaseType, BaseStringType): T = lltype.Char @@ -656,12 +659,12 @@ class RecordType(CompositeType): T = lltype.Char - + def read(self, arr, width, i, offset): return interp_boxes.W_VoidBox(arr, i) @jit.unroll_safe - def coerce(self, space, dtype, w_item): + def coerce(self, space, dtype, w_item): from 
pypy.module.micronumpy.interp_numarray import W_NDimArray if isinstance(w_item, interp_boxes.W_VoidBox): From noreply at buildbot.pypy.org Fri Feb 24 20:13:24 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Fri, 24 Feb 2012 20:13:24 +0100 (CET) Subject: [pypy-commit] pypy py3k: the semantics of this particular case changed in python3. Adapt the test, it passes out of the box Message-ID: <20120224191324.B570982366@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52870:460767db1dce Date: 2012-02-24 15:41 +0100 http://bitbucket.org/pypy/pypy/changeset/460767db1dce/ Log: the semantics of this particular case changed in python3. Adapt the test, it passes out of the box diff --git a/pypy/module/__builtin__/test/test_builtin.py b/pypy/module/__builtin__/test/test_builtin.py --- a/pypy/module/__builtin__/test/test_builtin.py +++ b/pypy/module/__builtin__/test/test_builtin.py @@ -131,8 +131,9 @@ def __getitem__(self, item): raise KeyError(item) def keys(self): - return 'a' # not a list! - raises(TypeError, eval, "dir()", {}, C()) + return 'abcd' # not a list! + names = eval("dir()", {}, C()) + assert names == ['a', 'b', 'c', 'd'] def test_dir_broken_module(self): import types From noreply at buildbot.pypy.org Fri Feb 24 20:13:25 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Fri, 24 Feb 2012 20:13:25 +0100 (CET) Subject: [pypy-commit] pypy py3k: fix syntax error Message-ID: <20120224191325.EA42482367@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52871:3cf4c4babf1b Date: 2012-02-24 15:47 +0100 http://bitbucket.org/pypy/pypy/changeset/3cf4c4babf1b/ Log: fix syntax error diff --git a/pypy/module/__builtin__/test/test_builtin.py b/pypy/module/__builtin__/test/test_builtin.py --- a/pypy/module/__builtin__/test/test_builtin.py +++ b/pypy/module/__builtin__/test/test_builtin.py @@ -183,7 +183,6 @@ assert format(10, "o") == "12" assert format(10, "#o") == "0o12" assert format("hi") == "hi" - assert isinstance(format(4, u""), str) def test_vars(self): def f(): From noreply at buildbot.pypy.org Fri Feb 24 20:13:27 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Fri, 24 Feb 2012 20:13:27 +0100 (CET) Subject: [pypy-commit] pypy py3k: adapt to py3k bytes/text. This test still fails right now Message-ID: <20120224191327.2C47C82366@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52872:93c52a2ed557 Date: 2012-02-24 16:15 +0100 http://bitbucket.org/pypy/pypy/changeset/93c52a2ed557/ Log: adapt to py3k bytes/text. 
This test still fails right now diff --git a/pypy/module/__builtin__/test/test_builtin.py b/pypy/module/__builtin__/test/test_builtin.py --- a/pypy/module/__builtin__/test/test_builtin.py +++ b/pypy/module/__builtin__/test/test_builtin.py @@ -198,9 +198,8 @@ assert getattr(a, 'i') == 5 raises(AttributeError, getattr, a, 'k') assert getattr(a, 'k', 42) == 42 - assert getattr(a, u'i') == 5 - raises(AttributeError, getattr, a, u'k') - assert getattr(a, u'k', 42) == 42 + raises(TypeError, getattr, a, b'i') + raises(TypeError, getattr, a, b'k', 42) def test_getattr_typecheck(self): class A(object): From noreply at buildbot.pypy.org Fri Feb 24 20:13:28 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Fri, 24 Feb 2012 20:13:28 +0100 (CET) Subject: [pypy-commit] pypy py3k: lib_pypy/datetime.py was removed by b36f48bf48f8, kill the corresponding test too Message-ID: <20120224191328.6074D82366@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52873:0d6bda9ad805 Date: 2012-02-24 16:21 +0100 http://bitbucket.org/pypy/pypy/changeset/0d6bda9ad805/ Log: lib_pypy/datetime.py was removed by b36f48bf48f8, kill the corresponding test too diff --git a/pypy/module/test_lib_pypy/test_datetime.py b/pypy/module/test_lib_pypy/test_datetime.py deleted file mode 100644 --- a/pypy/module/test_lib_pypy/test_datetime.py +++ /dev/null @@ -1,46 +0,0 @@ -"""Additional tests for datetime.""" - -import py - -import time -from lib_pypy import datetime -import copy -import os - -def test_utcfromtimestamp(): - """Confirm that utcfromtimestamp and fromtimestamp give consistent results. - - Based on danchr's test script in https://bugs.pypy.org/issue986 - """ - try: - prev_tz = os.environ.get("TZ") - os.environ["TZ"] = "GMT" - for unused in xrange(100): - now = time.time() - delta = (datetime.datetime.utcfromtimestamp(now) - - datetime.datetime.fromtimestamp(now)) - assert delta.days * 86400 + delta.seconds == 0 - finally: - if prev_tz is None: - del os.environ["TZ"] - else: - os.environ["TZ"] = prev_tz - -def test_utcfromtimestamp_microsecond(): - dt = datetime.datetime.utcfromtimestamp(0) - assert isinstance(dt.microsecond, int) - - -def test_integer_args(): - with py.test.raises(TypeError): - datetime.datetime(10, 10, 10.) - with py.test.raises(TypeError): - datetime.datetime(10, 10, 10, 10, 10.) - with py.test.raises(TypeError): - datetime.datetime(10, 10, 10, 10, 10, 10.) - -def test_utcnow_microsecond(): - dt = datetime.datetime.utcnow() - assert type(dt.microsecond) is int - - copy.copy(dt) From noreply at buildbot.pypy.org Fri Feb 24 20:13:29 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Fri, 24 Feb 2012 20:13:29 +0100 (CET) Subject: [pypy-commit] pypy py3k: fix this test to pass with -A. Still broken on pypy Message-ID: <20120224191329.936E782366@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52874:041810e6563a Date: 2012-02-24 16:23 +0100 http://bitbucket.org/pypy/pypy/changeset/041810e6563a/ Log: fix this test to pass with -A. 
Still broken on pypy diff --git a/pypy/module/__builtin__/test/test_builtin.py b/pypy/module/__builtin__/test/test_builtin.py --- a/pypy/module/__builtin__/test/test_builtin.py +++ b/pypy/module/__builtin__/test/test_builtin.py @@ -590,11 +590,11 @@ assert hasattr(x, '__class__') is True assert hasattr(x, 'foo') is True assert hasattr(x, 'bar') is False - assert hasattr(x, 'abc') is False # CPython compliance - assert hasattr(x, 'bac') is False # CPython compliance + raises(TypeError, "hasattr(x, 'abc')") + raises(TypeError, "hasattr(x, 'bac')") raises(TypeError, hasattr, x, None) raises(TypeError, hasattr, x, 42) - raises(UnicodeError, hasattr, x, u'\u5678') # cannot encode attr name + assert hasattr(x, '\u5678') is False def test_compile_leading_newlines(self): src = """ From noreply at buildbot.pypy.org Fri Feb 24 20:13:30 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Fri, 24 Feb 2012 20:13:30 +0100 (CET) Subject: [pypy-commit] pypy py3k: one more hasattr test which passes with -A but fails on pypy Message-ID: <20120224191330.C930182366@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52875:16ec2588021e Date: 2012-02-24 16:30 +0100 http://bitbucket.org/pypy/pypy/changeset/16ec2588021e/ Log: one more hasattr test which passes with -A but fails on pypy diff --git a/pypy/module/__builtin__/test/test_builtin.py b/pypy/module/__builtin__/test/test_builtin.py --- a/pypy/module/__builtin__/test/test_builtin.py +++ b/pypy/module/__builtin__/test/test_builtin.py @@ -596,6 +596,17 @@ raises(TypeError, hasattr, x, 42) assert hasattr(x, '\u5678') is False + def test_hasattr_exception(self): + class X(object): + def __getattr__(self, name): + if name == 'foo': + raise AttributeError + else: + raise KeyError + x = X() + assert hasattr(x, 'foo') is False + raises(KeyError, "hasattr(x, 'bar')") + def test_compile_leading_newlines(self): src = """ def fn(): pass From noreply at buildbot.pypy.org Fri Feb 24 20:13:32 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Fri, 24 Feb 2012 20:13:32 +0100 (CET) Subject: [pypy-commit] pypy py3k: fix the hasattr tests: now random exceptions won't be eaten by hasattr, only AttributeError is caught Message-ID: <20120224191332.19E7682366@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52876:7ac948ad8042 Date: 2012-02-24 18:06 +0100 http://bitbucket.org/pypy/pypy/changeset/7ac948ad8042/ Log: fix the hasattr tests: now random exceptions won't be eaten by hasattr, only AttributeError is caught diff --git a/pypy/module/__builtin__/operation.py b/pypy/module/__builtin__/operation.py --- a/pypy/module/__builtin__/operation.py +++ b/pypy/module/__builtin__/operation.py @@ -79,10 +79,14 @@ """Return whether the object has an attribute with the given name. (This is done by calling getattr(object, name) and catching exceptions.)""" w_name = checkattrname(space, w_name) - if space.findattr(w_object, w_name) is not None: + try: + space.getattr(w_object, w_name) + except OperationError, e: + if e.match(space, space.w_AttributeError): + return space.w_False + raise + else: return space.w_True - else: - return space.w_False def hash(space, w_object): """Return a hash value for the object. 
Two objects which compare as

From noreply at buildbot.pypy.org Fri Feb 24 20:13:33 2012
From: noreply at buildbot.pypy.org (antocuni)
Date: Fri, 24 Feb 2012 20:13:33 +0100 (CET)
Subject: [pypy-commit] pypy py3k: dicy.keys() no longer return a list
Message-ID: <20120224191333.4C56182366@wyvern.cs.uni-duesseldorf.de>

Author: Antonio Cuni
Branch: py3k
Changeset: r52877:82fe081fc013
Date: 2012-02-24 18:08 +0100
http://bitbucket.org/pypy/pypy/changeset/82fe081fc013/

Log: dicy.keys() no longer return a list

diff --git a/pypy/module/__builtin__/test/test_builtin.py b/pypy/module/__builtin__/test/test_builtin.py
--- a/pypy/module/__builtin__/test/test_builtin.py
+++ b/pypy/module/__builtin__/test/test_builtin.py
@@ -244,7 +244,7 @@
         # it tests 4 calls to __iter__() in one assert. It could
         # be modified if better granularity on the assert is required.
         mydict = {'a':1,'b':2,'c':3}
-        assert list(iter(mydict)) ==mydict.keys()
+        assert list(iter(mydict)) == list(mydict.keys())
     def test_iter_callable_sentinel(self):
         class count(object):

From noreply at buildbot.pypy.org Fri Feb 24 20:13:34 2012
From: noreply at buildbot.pypy.org (antocuni)
Date: Fri, 24 Feb 2012 20:13:34 +0100 (CET)
Subject: [pypy-commit] pypy py3k: fix syntax
Message-ID: <20120224191334.7EF5382366@wyvern.cs.uni-duesseldorf.de>

Author: Antonio Cuni
Branch: py3k
Changeset: r52878:84ac123d9774
Date: 2012-02-24 18:12 +0100
http://bitbucket.org/pypy/pypy/changeset/84ac123d9774/

Log: fix syntax

diff --git a/pypy/module/__builtin__/test/test_builtin.py b/pypy/module/__builtin__/test/test_builtin.py
--- a/pypy/module/__builtin__/test/test_builtin.py
+++ b/pypy/module/__builtin__/test/test_builtin.py
@@ -637,10 +637,11 @@
         pr("Hello,", "person!", file=out, sep="X")
         assert out.getvalue() == "Hello,Xperson!\n"
         out = io.StringIO()
-        pr(u"Hello,", u"person!", file=out)
+        pr(b"Hello,", b"person!", file=out)
         result = out.getvalue()
         assert isinstance(result, str)
-        assert result == u"Hello, person!\n"
+        print('XXXXXX', result)
+        assert result == "b'Hello,' b'person!'\n"
         pr("Hello", file=None) # This works.
out = io.StringIO() pr(None, file=out) From noreply at buildbot.pypy.org Fri Feb 24 20:13:35 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Fri, 24 Feb 2012 20:13:35 +0100 (CET) Subject: [pypy-commit] pypy py3k: modify the test so that it works on python3, still failing on pypy Message-ID: <20120224191335.B4FA982366@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52879:bbc579e5e993 Date: 2012-02-24 18:18 +0100 http://bitbucket.org/pypy/pypy/changeset/bbc579e5e993/ Log: modify the test so that it works on python3, still failing on pypy diff --git a/pypy/module/__builtin__/test/test_builtin.py b/pypy/module/__builtin__/test/test_builtin.py --- a/pypy/module/__builtin__/test/test_builtin.py +++ b/pypy/module/__builtin__/test/test_builtin.py @@ -361,7 +361,7 @@ assert x[-7] == 20 raises(IndexError, x.__getitem__, 17) raises(IndexError, x.__getitem__, -18) - raises(TypeError, x.__getitem__, slice(0,3,1)) + assert list(x.__getitem__(slice(0,3,1))) == [0, 2, 4] def test_range_bad_args(self): raises(TypeError, range, '1') From noreply at buildbot.pypy.org Fri Feb 24 20:13:36 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Fri, 24 Feb 2012 20:13:36 +0100 (CET) Subject: [pypy-commit] pypy py3k: fix test_range_indexing, when we use a too big negative index Message-ID: <20120224191336.EABE882366@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52880:4500a6d2fe49 Date: 2012-02-24 20:00 +0100 http://bitbucket.org/pypy/pypy/changeset/4500a6d2fe49/ Log: fix test_range_indexing, when we use a too big negative index diff --git a/pypy/module/__builtin__/functional.py b/pypy/module/__builtin__/functional.py --- a/pypy/module/__builtin__/functional.py +++ b/pypy/module/__builtin__/functional.py @@ -325,9 +325,11 @@ return space.add(self.w_start, space.mul(w_index, self.w_step)) def _compute_item(self, space, w_index): - if space.is_true(space.lt(w_index, space.newint(0))): + w_zero = space.newint(0) + if space.is_true(space.lt(w_index, w_zero)): w_index = space.add(w_index, self.w_length) - if space.is_true(space.ge(w_index, self.w_length)): + if (space.is_true(space.ge(w_index, self.w_length)) or + space.is_true(space.lt(w_index, w_zero))): raise OperationError(space.w_IndexError, space.wrap( "range object index out of range")) return self._compute_item0(space, w_index) From noreply at buildbot.pypy.org Fri Feb 24 20:13:38 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Fri, 24 Feb 2012 20:13:38 +0100 (CET) Subject: [pypy-commit] pypy py3k: typo Message-ID: <20120224191338.2C2C782366@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52881:fff27591cb0f Date: 2012-02-24 20:01 +0100 http://bitbucket.org/pypy/pypy/changeset/fff27591cb0f/ Log: typo diff --git a/pypy/module/__builtin__/test/test_builtin.py b/pypy/module/__builtin__/test/test_builtin.py --- a/pypy/module/__builtin__/test/test_builtin.py +++ b/pypy/module/__builtin__/test/test_builtin.py @@ -316,7 +316,7 @@ def test_range_repr(self): assert repr(range(1)) == 'range(1)' assert repr(range(1,2)) == 'range(1, 2)' - assert repr(range(1,2,3)) == 'range(1, 4, 3)' + assert repr(range(1,2,3)) == 'range(1, 2, 3)' def test_range_up(self): x = range(2) From noreply at buildbot.pypy.org Fri Feb 24 20:13:39 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Fri, 24 Feb 2012 20:13:39 +0100 (CET) Subject: [pypy-commit] pypy py3k: kill a test about compiling unicode strings: they are always unicode anyway; fix syntax in another test Message-ID: 
<20120224191339.5FB2382366@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52882:bbe53f6110bf Date: 2012-02-24 20:03 +0100 http://bitbucket.org/pypy/pypy/changeset/bbe53f6110bf/ Log: kill a test about compiling unicode strings: they are always unicode anyway; fix syntax in another test diff --git a/pypy/module/__builtin__/test/test_builtin.py b/pypy/module/__builtin__/test/test_builtin.py --- a/pypy/module/__builtin__/test/test_builtin.py +++ b/pypy/module/__builtin__/test/test_builtin.py @@ -512,14 +512,8 @@ raises(ValueError, compile, "\n", "", "exec", 0xff) raises(TypeError, compile, '1+2', 12, 34) - def test_unicode_compile(self): - try: - compile(u'-', '?', 'eval') - except SyntaxError, e: - assert e.lineno == 1 - def test_unicode_encoding_compile(self): - code = u"# -*- coding: utf-8 -*-\npass\n" + code = "# -*- coding: utf-8 -*-\npass\n" raises(SyntaxError, compile, code, "tmp", "exec") def test_bytes_compile(self): From noreply at buildbot.pypy.org Fri Feb 24 20:13:40 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Fri, 24 Feb 2012 20:13:40 +0100 (CET) Subject: [pypy-commit] pypy py3k: kill execfile() and its tests (sigh\!) Message-ID: <20120224191340.94EC282366@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52883:b8fbd7d49566 Date: 2012-02-24 20:05 +0100 http://bitbucket.org/pypy/pypy/changeset/b8fbd7d49566/ Log: kill execfile() and its tests (sigh\!) diff --git a/pypy/module/__builtin__/__init__.py b/pypy/module/__builtin__/__init__.py --- a/pypy/module/__builtin__/__init__.py +++ b/pypy/module/__builtin__/__init__.py @@ -12,7 +12,6 @@ expose__file__attribute = False appleveldefs = { - 'execfile' : 'app_io.execfile', 'input' : 'app_io.input', 'print' : 'app_io.print_', diff --git a/pypy/module/__builtin__/app_io.py b/pypy/module/__builtin__/app_io.py --- a/pypy/module/__builtin__/app_io.py +++ b/pypy/module/__builtin__/app_io.py @@ -5,28 +5,6 @@ import sys -def execfile(filename, glob=None, loc=None): - """execfile(filename[, globals[, locals]]) - -Read and execute a Python script from a file. -The globals and locals are dictionaries, defaulting to the current -globals and locals. 
If only globals is given, locals defaults to it.""" - if glob is None: - # Warning this is at hidden_applevel - glob = globals() - if loc is None: - loc = locals() - elif loc is None: - loc = glob - f = open(filename, 'rU') - try: - source = f.read() - finally: - f.close() - #Don't exec the source directly, as this loses the filename info - co = compile(source.rstrip()+"\n", filename, 'exec') - exec(co, glob, loc) - def _write_prompt(stdout, prompt): print(prompt, file=stdout, end='') try: diff --git a/pypy/module/__builtin__/test/test_builtin.py b/pypy/module/__builtin__/test/test_builtin.py --- a/pypy/module/__builtin__/test/test_builtin.py +++ b/pypy/module/__builtin__/test/test_builtin.py @@ -676,47 +676,3 @@ return {'a':2} __dict__ = property(fget=getDict) assert vars(C_get_vars()) == {'a':2} - - -class TestInternal: - def test_execfile(self, space): - from pypy.tool.udir import udir - fn = str(udir.join('test_execfile')) - f = open(fn, 'w') - print >>f, "i=42" - f.close() - - w_execfile = space.builtin.get("execfile") - w_dict = space.newdict() - space.call_function(w_execfile, - space.wrap(fn), w_dict, space.w_None) - w_value = space.getitem(w_dict, space.wrap('i')) - assert space.eq_w(w_value, space.wrap(42)) - - def test_execfile_different_lineendings(self, space): - from pypy.tool.udir import udir - d = udir.ensure('lineending', dir=1) - dos = d.join('dos.py') - f = dos.open('wb') - f.write("x=3\r\n\r\ny=4\r\n") - f.close() - space.appexec([space.wrap(str(dos))], """ - (filename): - d = {} - execfile(filename, d) - assert d['x'] == 3 - assert d['y'] == 4 - """) - - unix = d.join('unix.py') - f = unix.open('wb') - f.write("x=5\n\ny=6\n") - f.close() - - space.appexec([space.wrap(str(unix))], """ - (filename): - d = {} - execfile(filename, d) - assert d['x'] == 5 - assert d['y'] == 6 - """) From noreply at buildbot.pypy.org Fri Feb 24 20:13:41 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Fri, 24 Feb 2012 20:13:41 +0100 (CET) Subject: [pypy-commit] pypy py3k: fix syntax Message-ID: <20120224191341.CA1A682366@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52884:706cd6dcfd7f Date: 2012-02-24 20:06 +0100 http://bitbucket.org/pypy/pypy/changeset/706cd6dcfd7f/ Log: fix syntax diff --git a/pypy/module/__builtin__/test/test_descriptor.py b/pypy/module/__builtin__/test/test_descriptor.py --- a/pypy/module/__builtin__/test/test_descriptor.py +++ b/pypy/module/__builtin__/test/test_descriptor.py @@ -323,7 +323,7 @@ for attr in "__doc__", "fget", "fset", "fdel": try: setattr(raw, attr, 42) - except TypeError, msg: + except TypeError as msg: if str(msg).find('readonly') < 0: raise Exception("when setting readonly attr %r on a " "property, got unexpected TypeError " From noreply at buildbot.pypy.org Fri Feb 24 20:13:43 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Fri, 24 Feb 2012 20:13:43 +0100 (CET) Subject: [pypy-commit] pypy py3k: fix syntax Message-ID: <20120224191343.0794882366@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52885:a7d45791d56b Date: 2012-02-24 20:07 +0100 http://bitbucket.org/pypy/pypy/changeset/a7d45791d56b/ Log: fix syntax diff --git a/pypy/module/__builtin__/test/test_descriptor.py b/pypy/module/__builtin__/test/test_descriptor.py --- a/pypy/module/__builtin__/test/test_descriptor.py +++ b/pypy/module/__builtin__/test/test_descriptor.py @@ -124,7 +124,7 @@ def test_super_fail(self): try: super(list, 2) - except TypeError, e: + except TypeError as e: message = e.args[0] assert 
message.startswith('super(type, obj): obj must be an instance or subtype of type') From noreply at buildbot.pypy.org Fri Feb 24 20:13:44 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Fri, 24 Feb 2012 20:13:44 +0100 (CET) Subject: [pypy-commit] pypy py3k: we no longer have longs; tweak the test to check that range() works well with large integers Message-ID: <20120224191344.408DC82366@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52886:cd36cd993475 Date: 2012-02-24 20:10 +0100 http://bitbucket.org/pypy/pypy/changeset/cd36cd993475/ Log: we no longer have longs; tweak the test to check that range() works well with large integers diff --git a/pypy/module/__builtin__/test/test_functional.py b/pypy/module/__builtin__/test/test_functional.py --- a/pypy/module/__builtin__/test/test_functional.py +++ b/pypy/module/__builtin__/test/test_functional.py @@ -147,10 +147,10 @@ def test_range_long(self): import sys - a = long(10 * sys.maxint) - raises(OverflowError, range, a) - raises(OverflowError, range, 0, a) - raises(OverflowError, range, 0, 1, a) + a = 10 * sys.maxsize + assert range(a)[-1] == a-1 + assert range(0, a)[-1] == a-1 + assert range(0, 1, a)[-1] == 0 def test_range_reduce(self): x = range(2, 9, 3) From noreply at buildbot.pypy.org Fri Feb 24 20:13:45 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Fri, 24 Feb 2012 20:13:45 +0100 (CET) Subject: [pypy-commit] pypy py3k: bah, fix the test by actualling calling the code *inside* raises() Message-ID: <20120224191345.740F882366@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52887:55a73e1d6fcc Date: 2012-02-24 20:11 +0100 http://bitbucket.org/pypy/pypy/changeset/55a73e1d6fcc/ Log: bah, fix the test by actualling calling the code *inside* raises() diff --git a/pypy/module/__builtin__/test/test_functional.py b/pypy/module/__builtin__/test/test_functional.py --- a/pypy/module/__builtin__/test/test_functional.py +++ b/pypy/module/__builtin__/test/test_functional.py @@ -143,7 +143,7 @@ assert list(range(0, 10, A())) == [0, 5] def test_range_float(self): - raises(TypeError, range(0.1, 2.0, 1.1)) + raises(TypeError, "range(0.1, 2.0, 1.1)") def test_range_long(self): import sys From noreply at buildbot.pypy.org Fri Feb 24 21:44:10 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Fri, 24 Feb 2012 21:44:10 +0100 (CET) Subject: [pypy-commit] pypy default: Fix test Message-ID: <20120224204410.EEF3682366@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: Changeset: r52888:2ca20aeccdd8 Date: 2012-02-24 21:43 +0100 http://bitbucket.org/pypy/pypy/changeset/2ca20aeccdd8/ Log: Fix test diff --git a/pypy/module/cpyext/test/test_eval.py b/pypy/module/cpyext/test/test_eval.py --- a/pypy/module/cpyext/test/test_eval.py +++ b/pypy/module/cpyext/test/test_eval.py @@ -117,8 +117,12 @@ flags = lltype.malloc(PyCompilerFlags, flavor='raw') flags.c_cf_flags = rffi.cast(rffi.INT, consts.PyCF_SOURCE_IS_UTF8) w_globals = space.newdict() - api.PyRun_StringFlags("a = u'caf\xc3\xa9'", Py_single_input, - w_globals, w_globals, flags) + buf = rffi.str2charp("a = u'caf\xc3\xa9'") + try: + api.PyRun_StringFlags(buf, Py_single_input, + w_globals, w_globals, flags) + finally: + rffi.free_charp(buf) w_a = space.getitem(w_globals, space.wrap("a")) assert space.unwrap(w_a) == u'caf\xe9' lltype.free(flags, flavor='raw') From noreply at buildbot.pypy.org Fri Feb 24 21:44:12 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Fri, 24 Feb 2012 21:44:12 +0100 (CET) Subject: [pypy-commit] 
pypy default: cpyext: Finally found a way to allow subclasses of int! Message-ID: <20120224204412.2D1CB82367@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: Changeset: r52889:0eaf96f13694 Date: 2012-02-24 21:43 +0100 http://bitbucket.org/pypy/pypy/changeset/0eaf96f13694/ Log: cpyext: Finally found a way to allow subclasses of int! It's even possible to set ob_ival... at least until the object escapes the C function and is seen by pypy. diff --git a/pypy/module/cpyext/api.py b/pypy/module/cpyext/api.py --- a/pypy/module/cpyext/api.py +++ b/pypy/module/cpyext/api.py @@ -407,7 +407,7 @@ }.items(): GLOBALS['Py%s_Type#' % (cpyname, )] = ('PyTypeObject*', pypyexpr) - for cpyname in 'Method List Int Long Dict Tuple Class'.split(): + for cpyname in 'Method List Long Dict Tuple Class'.split(): FORWARD_DECLS.append('typedef struct { PyObject_HEAD } ' 'Py%sObject' % (cpyname, )) build_exported_objects() diff --git a/pypy/module/cpyext/include/intobject.h b/pypy/module/cpyext/include/intobject.h --- a/pypy/module/cpyext/include/intobject.h +++ b/pypy/module/cpyext/include/intobject.h @@ -7,6 +7,11 @@ extern "C" { #endif +typedef struct { + PyObject_HEAD + long ob_ival; +} PyIntObject; + #ifdef __cplusplus } #endif diff --git a/pypy/module/cpyext/intobject.py b/pypy/module/cpyext/intobject.py --- a/pypy/module/cpyext/intobject.py +++ b/pypy/module/cpyext/intobject.py @@ -2,11 +2,37 @@ from pypy.rpython.lltypesystem import rffi, lltype from pypy.interpreter.error import OperationError from pypy.module.cpyext.api import ( - cpython_api, build_type_checkers, PyObject, - CONST_STRING, CANNOT_FAIL, Py_ssize_t) + cpython_api, cpython_struct, build_type_checkers, bootstrap_function, + PyObject, PyObjectFields, CONST_STRING, CANNOT_FAIL, Py_ssize_t) +from pypy.module.cpyext.pyobject import ( + make_typedescr, track_reference, RefcountState, from_ref) from pypy.rlib.rarithmetic import r_uint, intmask, LONG_TEST +from pypy.objspace.std.intobject import W_IntObject import sys +PyIntObjectStruct = lltype.ForwardReference() +PyIntObject = lltype.Ptr(PyIntObjectStruct) +PyIntObjectFields = PyObjectFields + \ + (("ob_ival", rffi.LONG),) +cpython_struct("PyIntObject", PyIntObjectFields, PyIntObjectStruct) + + at bootstrap_function +def init_intobject(space): + "Type description of PyIntObject" + make_typedescr(space.w_int.instancetypedef, + basestruct=PyIntObject.TO, + realize=int_realize) + +def int_realize(space, obj): + intval = rffi.cast(lltype.Signed, rffi.cast(PyIntObject, obj).c_ob_ival) + w_type = from_ref(space, rffi.cast(PyObject, obj.c_ob_type)) + w_obj = space.allocate_instance(W_IntObject, w_type) + w_obj.__init__(intval) + track_reference(space, obj, w_obj) + state = space.fromcache(RefcountState) + state.set_lifeline(w_obj, obj) + return w_obj + PyInt_Check, PyInt_CheckExact = build_type_checkers("Int") @cpython_api([], lltype.Signed, error=CANNOT_FAIL) diff --git a/pypy/module/cpyext/object.py b/pypy/module/cpyext/object.py --- a/pypy/module/cpyext/object.py +++ b/pypy/module/cpyext/object.py @@ -193,7 +193,7 @@ if not obj: PyErr_NoMemory(space) obj.c_ob_type = type - _Py_NewReference(space, obj) + obj.c_ob_refcnt = 1 return obj @cpython_api([PyVarObject, PyTypeObjectPtr, Py_ssize_t], PyObject) diff --git a/pypy/module/cpyext/pyobject.py b/pypy/module/cpyext/pyobject.py --- a/pypy/module/cpyext/pyobject.py +++ b/pypy/module/cpyext/pyobject.py @@ -17,6 +17,7 @@ class BaseCpyTypedescr(object): basestruct = PyObject.TO + W_BaseObject = W_ObjectObject def get_dealloc(self, space): from 
pypy.module.cpyext.typeobject import subtype_dealloc @@ -51,10 +52,14 @@ def attach(self, space, pyobj, w_obj): pass - def realize(self, space, ref): - # For most types, a reference cannot exist without - # a real interpreter object - raise InvalidPointerException(str(ref)) + def realize(self, space, obj): + w_type = from_ref(space, rffi.cast(PyObject, obj.c_ob_type)) + w_obj = space.allocate_instance(self.W_BaseObject, w_type) + track_reference(space, obj, w_obj) + if w_type is not space.gettypefor(self.W_BaseObject): + state = space.fromcache(RefcountState) + state.set_lifeline(w_obj, obj) + return w_obj typedescr_cache = {} @@ -369,13 +374,7 @@ obj.c_ob_refcnt = 1 w_type = from_ref(space, rffi.cast(PyObject, obj.c_ob_type)) assert isinstance(w_type, W_TypeObject) - if w_type.is_cpytype(): - w_obj = space.allocate_instance(W_ObjectObject, w_type) - track_reference(space, obj, w_obj) - state = space.fromcache(RefcountState) - state.set_lifeline(w_obj, obj) - else: - assert False, "Please add more cases in _Py_NewReference()" + get_typedescr(w_type.instancetypedef).realize(space, obj) def _Py_Dealloc(space, obj): from pypy.module.cpyext.api import generic_cpy_call_dont_decref diff --git a/pypy/module/cpyext/test/test_intobject.py b/pypy/module/cpyext/test/test_intobject.py --- a/pypy/module/cpyext/test/test_intobject.py +++ b/pypy/module/cpyext/test/test_intobject.py @@ -65,4 +65,97 @@ values = module.values() types = [type(x) for x in values] assert types == [int, long, int, int] - + + def test_int_subtype(self): + module = self.import_extension( + 'foo', [ + ("newEnum", "METH_VARARGS", + """ + EnumObject *enumObj; + long intval; + PyObject *name; + + if (!PyArg_ParseTuple(args, "Oi", &name, &intval)) + return NULL; + + PyType_Ready(&Enum_Type); + enumObj = PyObject_New(EnumObject, &Enum_Type); + if (!enumObj) { + return NULL; + } + + enumObj->ob_ival = intval; + Py_INCREF(name); + enumObj->ob_name = name; + + return (PyObject *)enumObj; + """), + ], + prologue=""" + typedef struct + { + PyObject_HEAD + long ob_ival; + PyObject* ob_name; + } EnumObject; + + static void + enum_dealloc(EnumObject *op) + { + Py_DECREF(op->ob_name); + Py_TYPE(op)->tp_free((PyObject *)op); + } + + static PyMemberDef enum_members[] = { + {"name", T_OBJECT, offsetof(EnumObject, ob_name), 0, NULL}, + {NULL} /* Sentinel */ + }; + + PyTypeObject Enum_Type = { + PyObject_HEAD_INIT(0) + /*ob_size*/ 0, + /*tp_name*/ "Enum", + /*tp_basicsize*/ sizeof(EnumObject), + /*tp_itemsize*/ 0, + /*tp_dealloc*/ enum_dealloc, + /*tp_print*/ 0, + /*tp_getattr*/ 0, + /*tp_setattr*/ 0, + /*tp_compare*/ 0, + /*tp_repr*/ 0, + /*tp_as_number*/ 0, + /*tp_as_sequence*/ 0, + /*tp_as_mapping*/ 0, + /*tp_hash*/ 0, + /*tp_call*/ 0, + /*tp_str*/ 0, + /*tp_getattro*/ 0, + /*tp_setattro*/ 0, + /*tp_as_buffer*/ 0, + /*tp_flags*/ Py_TPFLAGS_DEFAULT|Py_TPFLAGS_BASETYPE, + /*tp_doc*/ 0, + /*tp_traverse*/ 0, + /*tp_clear*/ 0, + /*tp_richcompare*/ 0, + /*tp_weaklistoffset*/ 0, + /*tp_iter*/ 0, + /*tp_iternext*/ 0, + /*tp_methods*/ 0, + /*tp_members*/ enum_members, + /*tp_getset*/ 0, + /*tp_base*/ &PyInt_Type, + /*tp_dict*/ 0, + /*tp_descr_get*/ 0, + /*tp_descr_set*/ 0, + /*tp_dictoffset*/ 0, + /*tp_init*/ 0, + /*tp_alloc*/ 0, + /*tp_new*/ 0 + }; + """) + + a = module.newEnum("ULTIMATE_ANSWER", 42) + assert type(a).__name__ == "Enum" + assert isinstance(a, int) + assert a == int(a) == 42 + assert a.name == "ULTIMATE_ANSWER" From noreply at buildbot.pypy.org Sat Feb 25 00:23:08 2012 From: noreply at buildbot.pypy.org (fijal) Date: Sat, 25 Feb 2012 
00:23:08 +0100 (CET)
Subject: [pypy-commit] pypy speedup-list-comprehension: remove sizehint on the list, use instead special strategy
Message-ID: <20120224232308.D6A1A82366@wyvern.cs.uni-duesseldorf.de>

Author: Maciej Fijalkowski
Branch: speedup-list-comprehension
Changeset: r52890:2617e06218fe
Date: 2012-02-24 12:49 -0700
http://bitbucket.org/pypy/pypy/changeset/2617e06218fe/

Log: remove sizehint on the list, use instead special strategy

diff --git a/pypy/objspace/std/listobject.py b/pypy/objspace/std/listobject.py
--- a/pypy/objspace/std/listobject.py
+++ b/pypy/objspace/std/listobject.py
@@ -33,8 +33,10 @@
     return W_ListObject.from_storage_and_strategy(space, storage, strategy)
 @jit.look_inside_iff(lambda space, list_w: jit.isconstant(len(list_w)) and len(list_w) < UNROLL_CUTOFF)
-def get_strategy_from_list_objects(space, list_w):
+def get_strategy_from_list_objects(space, list_w, sizehint):
     if not list_w:
+        if sizehint != -1:
+            return SizeListStrategy(space, sizehint)
         return space.fromcache(EmptyListStrategy)
     # check for ints
@@ -80,10 +82,10 @@
         w_self.space = space
         if space.config.objspace.std.withliststrategies:
             w_self.strategy = get_strategy_from_list_objects(space,
-                                                             wrappeditems)
+                                                             wrappeditems,
+                                                             sizehint)
         else:
             w_self.strategy = space.fromcache(ObjectListStrategy)
-        w_self.sizehint = sizehint
         w_self.init_from_list_w(wrappeditems)
     @staticmethod
@@ -257,6 +259,7 @@
 class ListStrategy(object):
+    sizehint = 0
     def __init__(self, space):
         self.space = space
@@ -338,6 +341,7 @@
     def sort(self, w_list, reverse):
         raise NotImplementedError
+
 class EmptyListStrategy(ListStrategy):
     """EmptyListStrategy is used when a W_List withouth elements is created.
     The storage is None. When items are added to the W_List a new RPython list
@@ -399,7 +403,7 @@
         else:
             strategy = self.space.fromcache(ObjectListStrategy)
-        storage = strategy.get_empty_storage(w_list.sizehint)
+        storage = strategy.get_empty_storage(self.sizehint)
         w_list.strategy = strategy
         w_list.lstorage = storage
@@ -440,6 +444,13 @@
     def reverse(self, w_list):
         pass
+class SizeListStrategy(EmptyListStrategy):
+    """ Like empty, but when modified it'll preallocate the size to sizehint
+    """
+    def __init__(self, space, sizehint):
+        self.sizehint = sizehint
+        ListStrategy.__init__(self, space)
+
 class RangeListStrategy(ListStrategy):
     """RangeListStrategy is used when a list is created using the range method.
The storage is a tuple containing only three integers start, step and length diff --git a/pypy/objspace/std/test/test_listobject.py b/pypy/objspace/std/test/test_listobject.py --- a/pypy/objspace/std/test/test_listobject.py +++ b/pypy/objspace/std/test/test_listobject.py @@ -1,6 +1,7 @@ # coding: iso-8859-15 import random -from pypy.objspace.std.listobject import W_ListObject +from pypy.objspace.std.listobject import W_ListObject, SizeListStrategy,\ + IntegerListStrategy, ObjectListStrategy from pypy.interpreter.error import OperationError from pypy.conftest import gettestobjspace, option @@ -390,6 +391,16 @@ assert self.space.eq_w(self.space.le(w_list4, w_list3), self.space.w_True) + def test_sizehint(self): + space = self.space + w_l = space.newlist([], sizehint=10) + assert isinstance(w_l.strategy, SizeListStrategy) + space.call_method(w_l, 'append', space.wrap(3)) + assert isinstance(w_l.strategy, IntegerListStrategy) + w_l = space.newlist([], sizehint=10) + space.call_method(w_l, 'append', space.w_None) + assert isinstance(w_l.strategy, ObjectListStrategy) + class AppTestW_ListObject(object): def setup_class(cls): From noreply at buildbot.pypy.org Sat Feb 25 00:29:55 2012 From: noreply at buildbot.pypy.org (wlav) Date: Sat, 25 Feb 2012 00:29:55 +0100 (CET) Subject: [pypy-commit] pypy reflex-support: o) initialize the ROOT system when using the CINT backend (only use case, really) Message-ID: <20120224232955.E185482366@wyvern.cs.uni-duesseldorf.de> Author: Wim Lavrijsen Branch: reflex-support Changeset: r52891:be35f099e272 Date: 2012-02-24 14:43 -0800 http://bitbucket.org/pypy/pypy/changeset/be35f099e272/ Log: o) initialize the ROOT system when using the CINT backend (only use case, really) o) make operator name mapping behave better o) minor refactoring diff --git a/pypy/module/cppyy/capi/reflex_capi.py b/pypy/module/cppyy/capi/reflex_capi.py --- a/pypy/module/cppyy/capi/reflex_capi.py +++ b/pypy/module/cppyy/capi/reflex_capi.py @@ -9,9 +9,6 @@ srcpath = pkgpath.join("src") incpath = pkgpath.join("include") -def identify(): - return 'Reflex' - if os.environ.get("ROOTSYS"): rootincpath = [os.path.join(os.environ["ROOTSYS"], "include")] rootlibpath = [os.path.join(os.environ["ROOTSYS"], "lib")] @@ -19,6 +16,9 @@ rootincpath = [] rootlibpath = [] +def identify(): + return 'Reflex' + eci = ExternalCompilationInfo( separate_module_files=[srcpath.join("reflexcwrapper.cxx")], include_dirs=[incpath] + rootincpath, diff --git a/pypy/module/cppyy/helper.py b/pypy/module/cppyy/helper.py --- a/pypy/module/cppyy/helper.py +++ b/pypy/module/cppyy/helper.py @@ -64,11 +64,6 @@ if cppname[0:8] == "operator": op = cppname[8:].strip(' ') - # operator could be a conversion using a typedef - handle = capi.c_get_typehandle(op) - if handle: - op = capi.charp2str_free(capi.c_final_name(handle)) - # look for known mapping try: return _operator_mappings[op] @@ -100,7 +95,21 @@ if op == "--": # prefix v.s. postfix decrement (not python) return nargs and "__postdec__" or "__predec__"; - # might get here, as not all operator methods handled (new, delete,etc.) 
+ # operator could have been a conversion using a typedef (this lookup + # is put at the end only as it is unlikely and may trigger unwanted + # errors in class loaders in the backend, because a typical operator + # name is illegal as a class name) + handle = capi.c_get_typehandle(op) + if handle: + op = capi.charp2str_free(capi.c_final_name(handle)) + + try: + return _operator_mappings[op] + except KeyError: + pass + + # might get here, as not all operator methods handled (although some with + # no python equivalent, such as new, delete, etc., are simply retained) # TODO: perhaps absorb or "pythonify" these operators? return cppname @@ -120,6 +129,7 @@ _operator_mappings["|"] = "__or__" _operator_mappings["^"] = "__xor__" _operator_mappings["~"] = "__inv__" +_operator_mappings["!"] = "__nonzero__" _operator_mappings["+="] = "__iadd__" _operator_mappings["-="] = "__isub__" _operator_mappings["*="] = "__imul__" @@ -161,3 +171,11 @@ # the following are not python, but useful to expose _operator_mappings["->"] = "__follow__" _operator_mappings["="] = "__assign__" + +# a bundle of operators that have no equivalent and are left "as-is" for now: +_operator_mappings["&&"] = "&&" +_operator_mappings["||"] = "||" +_operator_mappings["new"] = "new" +_operator_mappings["delete"] = "delete" +_operator_mappings["new[]"] = "new[]" +_operator_mappings["delete[]"] = "delete[]" diff --git a/pypy/module/cppyy/pythonify.py b/pypy/module/cppyy/pythonify.py --- a/pypy/module/cppyy/pythonify.py +++ b/pypy/module/cppyy/pythonify.py @@ -157,7 +157,6 @@ metabases = [type(base) for base in bases] metacpp = type(CppyyClass)(class_name+'_meta', _drop_cycles(metabases), {}) - # create the python-side C++ class representation d = {"_cpp_proxy" : cpptype, "__new__" : make_new(class_name, cpptype), diff --git a/pypy/module/cppyy/src/cintcwrapper.cxx b/pypy/module/cppyy/src/cintcwrapper.cxx --- a/pypy/module/cppyy/src/cintcwrapper.cxx +++ b/pypy/module/cppyy/src/cintcwrapper.cxx @@ -8,6 +8,10 @@ #include "TList.h" #include "TSystem.h" +#include "TApplication.h" +#include "TInterpreter.h" +#include "Getline.h" + #include "TBaseClass.h" #include "TClass.h" #include "TClassEdit.h" @@ -53,6 +57,54 @@ static GlobalFuncs_t g_globalfuncs; +/* initialization of th ROOT system (debatable ... ) ---------------------- */ +namespace { + +class TCppyyApplication : public TApplication { +public: + TCppyyApplication(const char* acn, Int_t* argc, char** argv, Bool_t do_load = kTRUE) + : TApplication(acn, argc, argv) { + + if (do_load) { + // follow TRint to minimize differences with CINT + ProcessLine("#include ", kTRUE); + ProcessLine("#include <_string>", kTRUE); // for std::string iostream. 
+ ProcessLine("#include ", kTRUE); // needed because they're used within the + ProcessLine("#include ", kTRUE); // core ROOT dicts and CINT won't be able + // to properly unload these files + } + + // save current interpreter context + gInterpreter->SaveContext(); + gInterpreter->SaveGlobalsContext(); + + // prevent crashes on accessing history + Gl_histinit((char*)"-"); + + // prevent ROOT from exiting python + SetReturnFromRun(kTRUE); + + // enable auto-loader + gInterpreter->EnableAutoLoading(); + } +}; + +static const char* appname = "pypy-cppyy"; + +class ApplicationStarter { +public: + ApplicationStarter() { + if (!gApplication) { + int argc = 1; + char* argv[1]; argv[0] = (char*)appname; + gApplication = new TCppyyApplication(appname, &argc, argv, kTRUE); + } + } +} _applicationStarter; + +} // unnamed namespace + + /* local helpers ---------------------------------------------------------- */ static inline char* cppstring_to_cstring(const std::string& name) { char* name_char = (char*)malloc(name.size() + 1); @@ -102,7 +154,8 @@ if (icr != g_classref_indices.end()) return (cppyy_typehandle_t)icr->second; - TClassRef cr(class_name); + // use TClass directly, to enable auto-loading + TClassRef cr(TClass::GetClass(class_name, kTRUE, kTRUE)); if (!cr.GetClass()) return (cppyy_typehandle_t)NULL; From noreply at buildbot.pypy.org Sat Feb 25 00:29:58 2012 From: noreply at buildbot.pypy.org (wlav) Date: Sat, 25 Feb 2012 00:29:58 +0100 (CET) Subject: [pypy-commit] pypy reflex-support: merge default into branch Message-ID: <20120224232958.3B7ED82366@wyvern.cs.uni-duesseldorf.de> Author: Wim Lavrijsen Branch: reflex-support Changeset: r52892:dce568feb7ba Date: 2012-02-24 14:43 -0800 http://bitbucket.org/pypy/pypy/changeset/dce568feb7ba/ Log: merge default into branch diff --git a/pypy/interpreter/baseobjspace.py b/pypy/interpreter/baseobjspace.py --- a/pypy/interpreter/baseobjspace.py +++ b/pypy/interpreter/baseobjspace.py @@ -328,7 +328,7 @@ raise modname = self.str_w(w_modname) mod = self.interpclass_w(w_mod) - if isinstance(mod, Module): + if isinstance(mod, Module) and not mod.startup_called: self.timer.start("startup " + modname) mod.init(self) self.timer.stop("startup " + modname) diff --git a/pypy/interpreter/test/test_objspace.py b/pypy/interpreter/test/test_objspace.py --- a/pypy/interpreter/test/test_objspace.py +++ b/pypy/interpreter/test/test_objspace.py @@ -322,3 +322,14 @@ space.ALL_BUILTIN_MODULES.pop() del space._builtinmodule_list mods = space.get_builtinmodule_to_install() + + def test_dont_reload_builtin_mods_on_startup(self): + from pypy.tool.option import make_config, make_objspace + config = make_config(None) + space = make_objspace(config) + w_executable = space.wrap('executable') + assert space.str_w(space.getattr(space.sys, w_executable)) == 'py.py' + space.setattr(space.sys, w_executable, space.wrap('foobar')) + assert space.str_w(space.getattr(space.sys, w_executable)) == 'foobar' + space.startup() + assert space.str_w(space.getattr(space.sys, w_executable)) == 'foobar' diff --git a/pypy/interpreter/test/test_zpy.py b/pypy/interpreter/test/test_zpy.py --- a/pypy/interpreter/test/test_zpy.py +++ b/pypy/interpreter/test/test_zpy.py @@ -17,14 +17,14 @@ def test_executable(): """Ensures sys.executable points to the py.py script""" # TODO : watch out for spaces/special chars in pypypath - output = run(sys.executable, pypypath, + output = run(sys.executable, pypypath, '-S', "-c", "import sys;print sys.executable") assert output.splitlines()[-1] == pypypath def 
test_special_names(): """Test the __name__ and __file__ special global names""" cmd = "print __name__; print '__file__' in globals()" - output = run(sys.executable, pypypath, '-c', cmd) + output = run(sys.executable, pypypath, '-S', '-c', cmd) assert output.splitlines()[-2] == '__main__' assert output.splitlines()[-1] == 'False' @@ -33,24 +33,24 @@ tmpfile.write("print __name__; print __file__\n") tmpfile.close() - output = run(sys.executable, pypypath, tmpfilepath) + output = run(sys.executable, pypypath, '-S', tmpfilepath) assert output.splitlines()[-2] == '__main__' assert output.splitlines()[-1] == str(tmpfilepath) def test_argv_command(): """Some tests on argv""" # test 1 : no arguments - output = run(sys.executable, pypypath, + output = run(sys.executable, pypypath, '-S', "-c", "import sys;print sys.argv") assert output.splitlines()[-1] == str(['-c']) # test 2 : some arguments after - output = run(sys.executable, pypypath, + output = run(sys.executable, pypypath, '-S', "-c", "import sys;print sys.argv", "hello") assert output.splitlines()[-1] == str(['-c','hello']) # test 3 : additionnal pypy parameters - output = run(sys.executable, pypypath, + output = run(sys.executable, pypypath, '-S', "-O", "-c", "import sys;print sys.argv", "hello") assert output.splitlines()[-1] == str(['-c','hello']) @@ -65,15 +65,15 @@ tmpfile.close() # test 1 : no arguments - output = run(sys.executable, pypypath, tmpfilepath) + output = run(sys.executable, pypypath, '-S', tmpfilepath) assert output.splitlines()[-1] == str([tmpfilepath]) # test 2 : some arguments after - output = run(sys.executable, pypypath, tmpfilepath, "hello") + output = run(sys.executable, pypypath, '-S', tmpfilepath, "hello") assert output.splitlines()[-1] == str([tmpfilepath,'hello']) # test 3 : additionnal pypy parameters - output = run(sys.executable, pypypath, "-O", tmpfilepath, "hello") + output = run(sys.executable, pypypath, '-S', "-O", tmpfilepath, "hello") assert output.splitlines()[-1] == str([tmpfilepath,'hello']) @@ -95,7 +95,7 @@ tmpfile.write(TB_NORMALIZATION_CHK) tmpfile.close() - popen = subprocess.Popen([sys.executable, str(pypypath), tmpfilepath], + popen = subprocess.Popen([sys.executable, str(pypypath), '-S', tmpfilepath], stderr=subprocess.PIPE) _, stderr = popen.communicate() assert stderr.endswith('KeyError: \n') diff --git a/pypy/jit/backend/x86/assembler.py b/pypy/jit/backend/x86/assembler.py --- a/pypy/jit/backend/x86/assembler.py +++ b/pypy/jit/backend/x86/assembler.py @@ -33,7 +33,7 @@ from pypy.jit.backend.x86.support import values_array from pypy.jit.backend.x86 import support from pypy.rlib.debug import (debug_print, debug_start, debug_stop, - have_debug_prints, fatalerror_notb) + have_debug_prints) from pypy.rlib import rgc from pypy.rlib.clibffi import FFI_DEFAULT_ABI from pypy.jit.backend.x86.jump import remap_frame_layout @@ -104,7 +104,6 @@ self._debug = v def setup_once(self): - self._check_sse2() # the address of the function called by 'new' gc_ll_descr = self.cpu.gc_ll_descr gc_ll_descr.initialize() @@ -162,28 +161,6 @@ debug_print(prefix + ':' + str(struct.i)) debug_stop('jit-backend-counts') - _CHECK_SSE2_FUNC_PTR = lltype.Ptr(lltype.FuncType([], lltype.Signed)) - - def _check_sse2(self): - if WORD == 8: - return # all x86-64 CPUs support SSE2 - if not self.cpu.supports_floats: - return # the CPU doesn't support float, so we don't need SSE2 - # - from pypy.jit.backend.x86.detect_sse2 import INSNS - mc = codebuf.MachineCodeBlockWrapper() - for c in INSNS: - mc.writechar(c) - rawstart = 
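The _check_sse2 method removed just below tested CPUID feature bits 25 and 26 (SSE and SSE2); the same test reappears in C as pypy_x86_check_sse2() later in this merge. The bit test itself, distilled into a few lines of plain Python for illustration (the values passed in the asserts are hypothetical sample CPUID EDX results):

    def cpu_has_sse2(cpuid_edx):
        # Illustrative check mirroring the bits tested in the patch:
        # CPUID(EAX=1) reports SSE in EDX bit 25 and SSE2 in EDX bit 26,
        # and startup refuses to continue unless both are set.
        return bool(cpuid_edx & (1 << 25)) and bool(cpuid_edx & (1 << 26))

    assert cpu_has_sse2((1 << 25) | (1 << 26))   # SSE + SSE2 present
    assert not cpu_has_sse2(1 << 25)             # SSE only: abort at startup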
mc.materialize(self.cpu.asmmemmgr, []) - fnptr = rffi.cast(self._CHECK_SSE2_FUNC_PTR, rawstart) - features = fnptr() - if bool(features & (1<<25)) and bool(features & (1<<26)): - return # CPU supports SSE2 - fatalerror_notb( - "This version of PyPy was compiled for a x86 CPU supporting SSE2.\n" - "Your CPU is too old. Please translate a PyPy with the option:\n" - "--jit-backend=x86-without-sse2") - def _build_float_constants(self): datablockwrapper = MachineDataBlockWrapper(self.cpu.asmmemmgr, []) float_constants = datablockwrapper.malloc_aligned(32, alignment=16) diff --git a/pypy/jit/backend/x86/detect_sse2.py b/pypy/jit/backend/x86/detect_sse2.py --- a/pypy/jit/backend/x86/detect_sse2.py +++ b/pypy/jit/backend/x86/detect_sse2.py @@ -1,18 +1,17 @@ import autopath +from pypy.rpython.lltypesystem import lltype, rffi +from pypy.rlib.rmmap import alloc, free -INSNS = ("\xB8\x01\x00\x00\x00" # MOV EAX, 1 - "\x53" # PUSH EBX - "\x0F\xA2" # CPUID - "\x5B" # POP EBX - "\x92" # XCHG EAX, EDX - "\xC3") # RET def detect_sse2(): - from pypy.rpython.lltypesystem import lltype, rffi - from pypy.rlib.rmmap import alloc, free data = alloc(4096) pos = 0 - for c in INSNS: + for c in ("\xB8\x01\x00\x00\x00" # MOV EAX, 1 + "\x53" # PUSH EBX + "\x0F\xA2" # CPUID + "\x5B" # POP EBX + "\x92" # XCHG EAX, EDX + "\xC3"): # RET data[pos] = c pos += 1 fnptr = rffi.cast(lltype.Ptr(lltype.FuncType([], lltype.Signed)), data) diff --git a/pypy/jit/backend/x86/support.py b/pypy/jit/backend/x86/support.py --- a/pypy/jit/backend/x86/support.py +++ b/pypy/jit/backend/x86/support.py @@ -1,6 +1,7 @@ import sys from pypy.rpython.lltypesystem import lltype, rffi, llmemory from pypy.translator.tool.cbuild import ExternalCompilationInfo +from pypy.jit.backend.x86.arch import WORD def values_array(TP, size): @@ -37,8 +38,13 @@ if sys.platform == 'win32': ensure_sse2_floats = lambda : None + # XXX check for SSE2 on win32 too else: + if WORD == 4: + extra = ['-DPYPY_X86_CHECK_SSE2'] + else: + extra = [] ensure_sse2_floats = rffi.llexternal_use_eci(ExternalCompilationInfo( compile_extra = ['-msse2', '-mfpmath=sse', - '-DPYPY_CPU_HAS_STANDARD_PRECISION'], + '-DPYPY_CPU_HAS_STANDARD_PRECISION'] + extra, )) diff --git a/pypy/module/_io/interp_iobase.py b/pypy/module/_io/interp_iobase.py --- a/pypy/module/_io/interp_iobase.py +++ b/pypy/module/_io/interp_iobase.py @@ -323,7 +323,12 @@ def autoflush(self, space): w_iobase = self.w_iobase_ref() if w_iobase is not None: - space.call_method(w_iobase, 'flush') # XXX: ignore IOErrors? 
+ try: + space.call_method(w_iobase, 'flush') + except OperationError, e: + # if it's an IOError, ignore it + if not e.match(space, space.w_IOError): + raise class AutoFlusher(object): diff --git a/pypy/module/_io/test/test_fileio.py b/pypy/module/_io/test/test_fileio.py --- a/pypy/module/_io/test/test_fileio.py +++ b/pypy/module/_io/test/test_fileio.py @@ -170,10 +170,27 @@ space = make_objspace(config) space.appexec([space.wrap(str(tmpfile))], """(tmpfile): import io - f = io.open(tmpfile, 'w') + f = io.open(tmpfile, 'w', encoding='ascii') f.write('42') # no flush() and no close() import sys; sys._keepalivesomewhereobscure = f """) space.finish() assert tmpfile.read() == '42' + +def test_flush_at_exit_IOError(): + from pypy import conftest + from pypy.tool.option import make_config, make_objspace + + config = make_config(conftest.option) + space = make_objspace(config) + space.appexec([], """(): + import io + class MyStream(io.IOBase): + def flush(self): + raise IOError + + s = MyStream() + import sys; sys._keepalivesomewhereobscure = s + """) + space.finish() # the IOError has been ignored diff --git a/pypy/module/cpyext/api.py b/pypy/module/cpyext/api.py --- a/pypy/module/cpyext/api.py +++ b/pypy/module/cpyext/api.py @@ -407,7 +407,7 @@ }.items(): GLOBALS['Py%s_Type#' % (cpyname, )] = ('PyTypeObject*', pypyexpr) - for cpyname in 'Method List Int Long Dict Tuple Class'.split(): + for cpyname in 'Method List Long Dict Tuple Class'.split(): FORWARD_DECLS.append('typedef struct { PyObject_HEAD } ' 'Py%sObject' % (cpyname, )) build_exported_objects() diff --git a/pypy/module/cpyext/include/intobject.h b/pypy/module/cpyext/include/intobject.h --- a/pypy/module/cpyext/include/intobject.h +++ b/pypy/module/cpyext/include/intobject.h @@ -7,6 +7,11 @@ extern "C" { #endif +typedef struct { + PyObject_HEAD + long ob_ival; +} PyIntObject; + #ifdef __cplusplus } #endif diff --git a/pypy/module/cpyext/intobject.py b/pypy/module/cpyext/intobject.py --- a/pypy/module/cpyext/intobject.py +++ b/pypy/module/cpyext/intobject.py @@ -2,11 +2,37 @@ from pypy.rpython.lltypesystem import rffi, lltype from pypy.interpreter.error import OperationError from pypy.module.cpyext.api import ( - cpython_api, build_type_checkers, PyObject, - CONST_STRING, CANNOT_FAIL, Py_ssize_t) + cpython_api, cpython_struct, build_type_checkers, bootstrap_function, + PyObject, PyObjectFields, CONST_STRING, CANNOT_FAIL, Py_ssize_t) +from pypy.module.cpyext.pyobject import ( + make_typedescr, track_reference, RefcountState, from_ref) from pypy.rlib.rarithmetic import r_uint, intmask, LONG_TEST +from pypy.objspace.std.intobject import W_IntObject import sys +PyIntObjectStruct = lltype.ForwardReference() +PyIntObject = lltype.Ptr(PyIntObjectStruct) +PyIntObjectFields = PyObjectFields + \ + (("ob_ival", rffi.LONG),) +cpython_struct("PyIntObject", PyIntObjectFields, PyIntObjectStruct) + + at bootstrap_function +def init_intobject(space): + "Type description of PyIntObject" + make_typedescr(space.w_int.instancetypedef, + basestruct=PyIntObject.TO, + realize=int_realize) + +def int_realize(space, obj): + intval = rffi.cast(lltype.Signed, rffi.cast(PyIntObject, obj).c_ob_ival) + w_type = from_ref(space, rffi.cast(PyObject, obj.c_ob_type)) + w_obj = space.allocate_instance(W_IntObject, w_type) + w_obj.__init__(intval) + track_reference(space, obj, w_obj) + state = space.fromcache(RefcountState) + state.set_lifeline(w_obj, obj) + return w_obj + PyInt_Check, PyInt_CheckExact = build_type_checkers("Int") @cpython_api([], lltype.Signed, 
error=CANNOT_FAIL) diff --git a/pypy/module/cpyext/object.py b/pypy/module/cpyext/object.py --- a/pypy/module/cpyext/object.py +++ b/pypy/module/cpyext/object.py @@ -193,7 +193,7 @@ if not obj: PyErr_NoMemory(space) obj.c_ob_type = type - _Py_NewReference(space, obj) + obj.c_ob_refcnt = 1 return obj @cpython_api([PyVarObject, PyTypeObjectPtr, Py_ssize_t], PyObject) diff --git a/pypy/module/cpyext/pyobject.py b/pypy/module/cpyext/pyobject.py --- a/pypy/module/cpyext/pyobject.py +++ b/pypy/module/cpyext/pyobject.py @@ -17,6 +17,7 @@ class BaseCpyTypedescr(object): basestruct = PyObject.TO + W_BaseObject = W_ObjectObject def get_dealloc(self, space): from pypy.module.cpyext.typeobject import subtype_dealloc @@ -51,10 +52,14 @@ def attach(self, space, pyobj, w_obj): pass - def realize(self, space, ref): - # For most types, a reference cannot exist without - # a real interpreter object - raise InvalidPointerException(str(ref)) + def realize(self, space, obj): + w_type = from_ref(space, rffi.cast(PyObject, obj.c_ob_type)) + w_obj = space.allocate_instance(self.W_BaseObject, w_type) + track_reference(space, obj, w_obj) + if w_type is not space.gettypefor(self.W_BaseObject): + state = space.fromcache(RefcountState) + state.set_lifeline(w_obj, obj) + return w_obj typedescr_cache = {} @@ -369,13 +374,7 @@ obj.c_ob_refcnt = 1 w_type = from_ref(space, rffi.cast(PyObject, obj.c_ob_type)) assert isinstance(w_type, W_TypeObject) - if w_type.is_cpytype(): - w_obj = space.allocate_instance(W_ObjectObject, w_type) - track_reference(space, obj, w_obj) - state = space.fromcache(RefcountState) - state.set_lifeline(w_obj, obj) - else: - assert False, "Please add more cases in _Py_NewReference()" + get_typedescr(w_type.instancetypedef).realize(space, obj) def _Py_Dealloc(space, obj): from pypy.module.cpyext.api import generic_cpy_call_dont_decref diff --git a/pypy/module/cpyext/test/test_eval.py b/pypy/module/cpyext/test/test_eval.py --- a/pypy/module/cpyext/test/test_eval.py +++ b/pypy/module/cpyext/test/test_eval.py @@ -117,8 +117,12 @@ flags = lltype.malloc(PyCompilerFlags, flavor='raw') flags.c_cf_flags = rffi.cast(rffi.INT, consts.PyCF_SOURCE_IS_UTF8) w_globals = space.newdict() - api.PyRun_StringFlags("a = u'caf\xc3\xa9'", Py_single_input, - w_globals, w_globals, flags) + buf = rffi.str2charp("a = u'caf\xc3\xa9'") + try: + api.PyRun_StringFlags(buf, Py_single_input, + w_globals, w_globals, flags) + finally: + rffi.free_charp(buf) w_a = space.getitem(w_globals, space.wrap("a")) assert space.unwrap(w_a) == u'caf\xe9' lltype.free(flags, flavor='raw') diff --git a/pypy/module/cpyext/test/test_intobject.py b/pypy/module/cpyext/test/test_intobject.py --- a/pypy/module/cpyext/test/test_intobject.py +++ b/pypy/module/cpyext/test/test_intobject.py @@ -65,4 +65,97 @@ values = module.values() types = [type(x) for x in values] assert types == [int, long, int, int] - + + def test_int_subtype(self): + module = self.import_extension( + 'foo', [ + ("newEnum", "METH_VARARGS", + """ + EnumObject *enumObj; + long intval; + PyObject *name; + + if (!PyArg_ParseTuple(args, "Oi", &name, &intval)) + return NULL; + + PyType_Ready(&Enum_Type); + enumObj = PyObject_New(EnumObject, &Enum_Type); + if (!enumObj) { + return NULL; + } + + enumObj->ob_ival = intval; + Py_INCREF(name); + enumObj->ob_name = name; + + return (PyObject *)enumObj; + """), + ], + prologue=""" + typedef struct + { + PyObject_HEAD + long ob_ival; + PyObject* ob_name; + } EnumObject; + + static void + enum_dealloc(EnumObject *op) + { + Py_DECREF(op->ob_name); + 
Py_TYPE(op)->tp_free((PyObject *)op); + } + + static PyMemberDef enum_members[] = { + {"name", T_OBJECT, offsetof(EnumObject, ob_name), 0, NULL}, + {NULL} /* Sentinel */ + }; + + PyTypeObject Enum_Type = { + PyObject_HEAD_INIT(0) + /*ob_size*/ 0, + /*tp_name*/ "Enum", + /*tp_basicsize*/ sizeof(EnumObject), + /*tp_itemsize*/ 0, + /*tp_dealloc*/ enum_dealloc, + /*tp_print*/ 0, + /*tp_getattr*/ 0, + /*tp_setattr*/ 0, + /*tp_compare*/ 0, + /*tp_repr*/ 0, + /*tp_as_number*/ 0, + /*tp_as_sequence*/ 0, + /*tp_as_mapping*/ 0, + /*tp_hash*/ 0, + /*tp_call*/ 0, + /*tp_str*/ 0, + /*tp_getattro*/ 0, + /*tp_setattro*/ 0, + /*tp_as_buffer*/ 0, + /*tp_flags*/ Py_TPFLAGS_DEFAULT|Py_TPFLAGS_BASETYPE, + /*tp_doc*/ 0, + /*tp_traverse*/ 0, + /*tp_clear*/ 0, + /*tp_richcompare*/ 0, + /*tp_weaklistoffset*/ 0, + /*tp_iter*/ 0, + /*tp_iternext*/ 0, + /*tp_methods*/ 0, + /*tp_members*/ enum_members, + /*tp_getset*/ 0, + /*tp_base*/ &PyInt_Type, + /*tp_dict*/ 0, + /*tp_descr_get*/ 0, + /*tp_descr_set*/ 0, + /*tp_dictoffset*/ 0, + /*tp_init*/ 0, + /*tp_alloc*/ 0, + /*tp_new*/ 0 + }; + """) + + a = module.newEnum("ULTIMATE_ANSWER", 42) + assert type(a).__name__ == "Enum" + assert isinstance(a, int) + assert a == int(a) == 42 + assert a.name == "ULTIMATE_ANSWER" diff --git a/pypy/translator/c/src/asm_gcc_x86.h b/pypy/translator/c/src/asm_gcc_x86.h --- a/pypy/translator/c/src/asm_gcc_x86.h +++ b/pypy/translator/c/src/asm_gcc_x86.h @@ -102,6 +102,12 @@ #endif /* !PYPY_CPU_HAS_STANDARD_PRECISION */ +#ifdef PYPY_X86_CHECK_SSE2 +#define PYPY_X86_CHECK_SSE2_DEFINED +extern void pypy_x86_check_sse2(void); +#endif + + /* implementations */ #ifndef PYPY_NOT_MAIN_FILE @@ -113,4 +119,25 @@ } # endif +# ifdef PYPY_X86_CHECK_SSE2 +void pypy_x86_check_sse2(void) +{ + //Read the CPU features. + int features; + asm("mov $1, %%eax\n" + "cpuid\n" + "mov %%edx, %0" + : "=g"(features) : : "eax", "ebx", "edx", "ecx"); + + //Check bits 25 and 26, this indicates SSE2 support + if (((features & (1 << 25)) == 0) || ((features & (1 << 26)) == 0)) + { + fprintf(stderr, "Old CPU with no SSE2 support, cannot continue.\n" + "You need to re-translate with " + "'--jit-backend=x86-without-sse2'\n"); + abort(); + } +} +# endif + #endif diff --git a/pypy/translator/c/src/debug_print.c b/pypy/translator/c/src/debug_print.c --- a/pypy/translator/c/src/debug_print.c +++ b/pypy/translator/c/src/debug_print.c @@ -1,3 +1,4 @@ +#define PYPY_NOT_MAIN_FILE #include #include diff --git a/pypy/translator/c/src/dtoa.c b/pypy/translator/c/src/dtoa.c --- a/pypy/translator/c/src/dtoa.c +++ b/pypy/translator/c/src/dtoa.c @@ -46,13 +46,13 @@ * of return type *Bigint all return NULL to indicate a malloc failure. * Similarly, rv_alloc and nrv_alloc (return type char *) return NULL on * failure. bigcomp now has return type int (it used to be void) and - * returns -1 on failure and 0 otherwise. _Py_dg_dtoa returns NULL - * on failure. _Py_dg_strtod indicates failure due to malloc failure + * returns -1 on failure and 0 otherwise. __Py_dg_dtoa returns NULL + * on failure. __Py_dg_strtod indicates failure due to malloc failure * by returning -1.0, setting errno=ENOMEM and *se to s00. * * 4. The static variable dtoa_result has been removed. Callers of - * _Py_dg_dtoa are expected to call _Py_dg_freedtoa to free - * the memory allocated by _Py_dg_dtoa. + * __Py_dg_dtoa are expected to call __Py_dg_freedtoa to free + * the memory allocated by __Py_dg_dtoa. * * 5. The code has been reformatted to better fit with Python's * C style guide (PEP 7). 
@@ -61,7 +61,7 @@ * that hasn't been MALLOC'ed, private_mem should only be used when k <= * Kmax. * - * 7. _Py_dg_strtod has been modified so that it doesn't accept strings with + * 7. __Py_dg_strtod has been modified so that it doesn't accept strings with * leading whitespace. * ***************************************************************/ @@ -283,7 +283,7 @@ #define Big0 (Frac_mask1 | Exp_msk1*(DBL_MAX_EXP+Bias-1)) #define Big1 0xffffffff -/* struct BCinfo is used to pass information from _Py_dg_strtod to bigcomp */ +/* struct BCinfo is used to pass information from __Py_dg_strtod to bigcomp */ typedef struct BCinfo BCinfo; struct @@ -494,7 +494,7 @@ /* convert a string s containing nd decimal digits (possibly containing a decimal separator at position nd0, which is ignored) to a Bigint. This - function carries on where the parsing code in _Py_dg_strtod leaves off: on + function carries on where the parsing code in __Py_dg_strtod leaves off: on entry, y9 contains the result of converting the first 9 digits. Returns NULL on failure. */ @@ -1050,7 +1050,7 @@ } /* Convert a scaled double to a Bigint plus an exponent. Similar to d2b, - except that it accepts the scale parameter used in _Py_dg_strtod (which + except that it accepts the scale parameter used in __Py_dg_strtod (which should be either 0 or 2*P), and the normalization for the return value is different (see below). On input, d should be finite and nonnegative, and d / 2**scale should be exactly representable as an IEEE 754 double. @@ -1351,9 +1351,9 @@ /* The bigcomp function handles some hard cases for strtod, for inputs with more than STRTOD_DIGLIM digits. It's called once an initial estimate for the double corresponding to the input string has - already been obtained by the code in _Py_dg_strtod. + already been obtained by the code in __Py_dg_strtod. - The bigcomp function is only called after _Py_dg_strtod has found a + The bigcomp function is only called after __Py_dg_strtod has found a double value rv such that either rv or rv + 1ulp represents the correctly rounded value corresponding to the original string. It determines which of these two values is the correct one by @@ -1368,12 +1368,12 @@ s0 points to the first significant digit of the input string. rv is a (possibly scaled) estimate for the closest double value to the - value represented by the original input to _Py_dg_strtod. If + value represented by the original input to __Py_dg_strtod. If bc->scale is nonzero, then rv/2^(bc->scale) is the approximation to the input value. bc is a struct containing information gathered during the parsing and - estimation steps of _Py_dg_strtod. Description of fields follows: + estimation steps of __Py_dg_strtod. Description of fields follows: bc->e0 gives the exponent of the input value, such that dv = (integer given by the bd->nd digits of s0) * 10**e0 @@ -1505,7 +1505,7 @@ } static double -_Py_dg_strtod(const char *s00, char **se) +__Py_dg_strtod(const char *s00, char **se) { int bb2, bb5, bbe, bd2, bd5, bs2, c, dsign, e, e1, error; int esign, i, j, k, lz, nd, nd0, odd, sign; @@ -1849,7 +1849,7 @@ for(;;) { - /* This is the main correction loop for _Py_dg_strtod. + /* This is the main correction loop for __Py_dg_strtod. We've got a decimal value tdv, and a floating-point approximation srv=rv/2^bc.scale to tdv. 
The aim is to determine whether srv is @@ -2283,7 +2283,7 @@ */ static void -_Py_dg_freedtoa(char *s) +__Py_dg_freedtoa(char *s) { Bigint *b = (Bigint *)((int *)s - 1); b->maxwds = 1 << (b->k = *(int*)b); @@ -2325,11 +2325,11 @@ */ /* Additional notes (METD): (1) returns NULL on failure. (2) to avoid memory - leakage, a successful call to _Py_dg_dtoa should always be matched by a - call to _Py_dg_freedtoa. */ + leakage, a successful call to __Py_dg_dtoa should always be matched by a + call to __Py_dg_freedtoa. */ static char * -_Py_dg_dtoa(double dd, int mode, int ndigits, +__Py_dg_dtoa(double dd, int mode, int ndigits, int *decpt, int *sign, char **rve) { /* Arguments ndigits, decpt, sign are similar to those @@ -2926,7 +2926,7 @@ if (b) Bfree(b); if (s0) - _Py_dg_freedtoa(s0); + __Py_dg_freedtoa(s0); return NULL; } @@ -2947,7 +2947,7 @@ _PyPy_SET_53BIT_PRECISION_HEADER; _PyPy_SET_53BIT_PRECISION_START; - result = _Py_dg_strtod(s00, se); + result = __Py_dg_strtod(s00, se); _PyPy_SET_53BIT_PRECISION_END; return result; } @@ -2959,14 +2959,14 @@ _PyPy_SET_53BIT_PRECISION_HEADER; _PyPy_SET_53BIT_PRECISION_START; - result = _Py_dg_dtoa(dd, mode, ndigits, decpt, sign, rve); + result = __Py_dg_dtoa(dd, mode, ndigits, decpt, sign, rve); _PyPy_SET_53BIT_PRECISION_END; return result; } void _PyPy_dg_freedtoa(char *s) { - _Py_dg_freedtoa(s); + __Py_dg_freedtoa(s); } /* End PYPY hacks */ diff --git a/pypy/translator/c/src/main.h b/pypy/translator/c/src/main.h --- a/pypy/translator/c/src/main.h +++ b/pypy/translator/c/src/main.h @@ -36,6 +36,9 @@ RPyListOfString *list; pypy_asm_stack_bottom(); +#ifdef PYPY_X86_CHECK_SSE2_DEFINED + pypy_x86_check_sse2(); +#endif instrument_setup(); if (sizeof(void*) != SIZEOF_LONG) { From noreply at buildbot.pypy.org Sat Feb 25 02:38:52 2012 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Sat, 25 Feb 2012 02:38:52 +0100 (CET) Subject: [pypy-commit] pypy speedup-list-comprehension: merged default Message-ID: <20120225013852.9B9D682366@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: speedup-list-comprehension Changeset: r52893:4c92df471b92 Date: 2012-02-24 19:06 -0500 http://bitbucket.org/pypy/pypy/changeset/4c92df471b92/ Log: merged default diff --git a/pypy/interpreter/baseobjspace.py b/pypy/interpreter/baseobjspace.py --- a/pypy/interpreter/baseobjspace.py +++ b/pypy/interpreter/baseobjspace.py @@ -328,7 +328,7 @@ raise modname = self.str_w(w_modname) mod = self.interpclass_w(w_mod) - if isinstance(mod, Module): + if isinstance(mod, Module) and not mod.startup_called: self.timer.start("startup " + modname) mod.init(self) self.timer.stop("startup " + modname) diff --git a/pypy/interpreter/test/test_objspace.py b/pypy/interpreter/test/test_objspace.py --- a/pypy/interpreter/test/test_objspace.py +++ b/pypy/interpreter/test/test_objspace.py @@ -322,3 +322,14 @@ space.ALL_BUILTIN_MODULES.pop() del space._builtinmodule_list mods = space.get_builtinmodule_to_install() + + def test_dont_reload_builtin_mods_on_startup(self): + from pypy.tool.option import make_config, make_objspace + config = make_config(None) + space = make_objspace(config) + w_executable = space.wrap('executable') + assert space.str_w(space.getattr(space.sys, w_executable)) == 'py.py' + space.setattr(space.sys, w_executable, space.wrap('foobar')) + assert space.str_w(space.getattr(space.sys, w_executable)) == 'foobar' + space.startup() + assert space.str_w(space.getattr(space.sys, w_executable)) == 'foobar' diff --git a/pypy/interpreter/test/test_zpy.py b/pypy/interpreter/test/test_zpy.py 
--- a/pypy/interpreter/test/test_zpy.py +++ b/pypy/interpreter/test/test_zpy.py @@ -17,14 +17,14 @@ def test_executable(): """Ensures sys.executable points to the py.py script""" # TODO : watch out for spaces/special chars in pypypath - output = run(sys.executable, pypypath, + output = run(sys.executable, pypypath, '-S', "-c", "import sys;print sys.executable") assert output.splitlines()[-1] == pypypath def test_special_names(): """Test the __name__ and __file__ special global names""" cmd = "print __name__; print '__file__' in globals()" - output = run(sys.executable, pypypath, '-c', cmd) + output = run(sys.executable, pypypath, '-S', '-c', cmd) assert output.splitlines()[-2] == '__main__' assert output.splitlines()[-1] == 'False' @@ -33,24 +33,24 @@ tmpfile.write("print __name__; print __file__\n") tmpfile.close() - output = run(sys.executable, pypypath, tmpfilepath) + output = run(sys.executable, pypypath, '-S', tmpfilepath) assert output.splitlines()[-2] == '__main__' assert output.splitlines()[-1] == str(tmpfilepath) def test_argv_command(): """Some tests on argv""" # test 1 : no arguments - output = run(sys.executable, pypypath, + output = run(sys.executable, pypypath, '-S', "-c", "import sys;print sys.argv") assert output.splitlines()[-1] == str(['-c']) # test 2 : some arguments after - output = run(sys.executable, pypypath, + output = run(sys.executable, pypypath, '-S', "-c", "import sys;print sys.argv", "hello") assert output.splitlines()[-1] == str(['-c','hello']) # test 3 : additionnal pypy parameters - output = run(sys.executable, pypypath, + output = run(sys.executable, pypypath, '-S', "-O", "-c", "import sys;print sys.argv", "hello") assert output.splitlines()[-1] == str(['-c','hello']) @@ -65,15 +65,15 @@ tmpfile.close() # test 1 : no arguments - output = run(sys.executable, pypypath, tmpfilepath) + output = run(sys.executable, pypypath, '-S', tmpfilepath) assert output.splitlines()[-1] == str([tmpfilepath]) # test 2 : some arguments after - output = run(sys.executable, pypypath, tmpfilepath, "hello") + output = run(sys.executable, pypypath, '-S', tmpfilepath, "hello") assert output.splitlines()[-1] == str([tmpfilepath,'hello']) # test 3 : additionnal pypy parameters - output = run(sys.executable, pypypath, "-O", tmpfilepath, "hello") + output = run(sys.executable, pypypath, '-S', "-O", tmpfilepath, "hello") assert output.splitlines()[-1] == str([tmpfilepath,'hello']) @@ -95,7 +95,7 @@ tmpfile.write(TB_NORMALIZATION_CHK) tmpfile.close() - popen = subprocess.Popen([sys.executable, str(pypypath), tmpfilepath], + popen = subprocess.Popen([sys.executable, str(pypypath), '-S', tmpfilepath], stderr=subprocess.PIPE) _, stderr = popen.communicate() assert stderr.endswith('KeyError: \n') diff --git a/pypy/jit/backend/x86/assembler.py b/pypy/jit/backend/x86/assembler.py --- a/pypy/jit/backend/x86/assembler.py +++ b/pypy/jit/backend/x86/assembler.py @@ -33,7 +33,7 @@ from pypy.jit.backend.x86.support import values_array from pypy.jit.backend.x86 import support from pypy.rlib.debug import (debug_print, debug_start, debug_stop, - have_debug_prints, fatalerror_notb) + have_debug_prints) from pypy.rlib import rgc from pypy.rlib.clibffi import FFI_DEFAULT_ABI from pypy.jit.backend.x86.jump import remap_frame_layout @@ -104,7 +104,6 @@ self._debug = v def setup_once(self): - self._check_sse2() # the address of the function called by 'new' gc_ll_descr = self.cpu.gc_ll_descr gc_ll_descr.initialize() @@ -162,28 +161,6 @@ debug_print(prefix + ':' + str(struct.i)) debug_stop('jit-backend-counts') 
- _CHECK_SSE2_FUNC_PTR = lltype.Ptr(lltype.FuncType([], lltype.Signed)) - - def _check_sse2(self): - if WORD == 8: - return # all x86-64 CPUs support SSE2 - if not self.cpu.supports_floats: - return # the CPU doesn't support float, so we don't need SSE2 - # - from pypy.jit.backend.x86.detect_sse2 import INSNS - mc = codebuf.MachineCodeBlockWrapper() - for c in INSNS: - mc.writechar(c) - rawstart = mc.materialize(self.cpu.asmmemmgr, []) - fnptr = rffi.cast(self._CHECK_SSE2_FUNC_PTR, rawstart) - features = fnptr() - if bool(features & (1<<25)) and bool(features & (1<<26)): - return # CPU supports SSE2 - fatalerror_notb( - "This version of PyPy was compiled for a x86 CPU supporting SSE2.\n" - "Your CPU is too old. Please translate a PyPy with the option:\n" - "--jit-backend=x86-without-sse2") - def _build_float_constants(self): datablockwrapper = MachineDataBlockWrapper(self.cpu.asmmemmgr, []) float_constants = datablockwrapper.malloc_aligned(32, alignment=16) diff --git a/pypy/jit/backend/x86/detect_sse2.py b/pypy/jit/backend/x86/detect_sse2.py --- a/pypy/jit/backend/x86/detect_sse2.py +++ b/pypy/jit/backend/x86/detect_sse2.py @@ -1,18 +1,17 @@ import autopath +from pypy.rpython.lltypesystem import lltype, rffi +from pypy.rlib.rmmap import alloc, free -INSNS = ("\xB8\x01\x00\x00\x00" # MOV EAX, 1 - "\x53" # PUSH EBX - "\x0F\xA2" # CPUID - "\x5B" # POP EBX - "\x92" # XCHG EAX, EDX - "\xC3") # RET def detect_sse2(): - from pypy.rpython.lltypesystem import lltype, rffi - from pypy.rlib.rmmap import alloc, free data = alloc(4096) pos = 0 - for c in INSNS: + for c in ("\xB8\x01\x00\x00\x00" # MOV EAX, 1 + "\x53" # PUSH EBX + "\x0F\xA2" # CPUID + "\x5B" # POP EBX + "\x92" # XCHG EAX, EDX + "\xC3"): # RET data[pos] = c pos += 1 fnptr = rffi.cast(lltype.Ptr(lltype.FuncType([], lltype.Signed)), data) diff --git a/pypy/jit/backend/x86/support.py b/pypy/jit/backend/x86/support.py --- a/pypy/jit/backend/x86/support.py +++ b/pypy/jit/backend/x86/support.py @@ -1,6 +1,7 @@ import sys from pypy.rpython.lltypesystem import lltype, rffi, llmemory from pypy.translator.tool.cbuild import ExternalCompilationInfo +from pypy.jit.backend.x86.arch import WORD def values_array(TP, size): @@ -37,8 +38,13 @@ if sys.platform == 'win32': ensure_sse2_floats = lambda : None + # XXX check for SSE2 on win32 too else: + if WORD == 4: + extra = ['-DPYPY_X86_CHECK_SSE2'] + else: + extra = [] ensure_sse2_floats = rffi.llexternal_use_eci(ExternalCompilationInfo( compile_extra = ['-msse2', '-mfpmath=sse', - '-DPYPY_CPU_HAS_STANDARD_PRECISION'], + '-DPYPY_CPU_HAS_STANDARD_PRECISION'] + extra, )) diff --git a/pypy/jit/backend/x86/test/test_ztranslation.py b/pypy/jit/backend/x86/test/test_ztranslation.py --- a/pypy/jit/backend/x86/test/test_ztranslation.py +++ b/pypy/jit/backend/x86/test/test_ztranslation.py @@ -52,6 +52,7 @@ set_param(jitdriver, "trace_eagerness", 2) total = 0 frame = Frame(i) + j = float(j) while frame.i > 3: jitdriver.can_enter_jit(frame=frame, total=total, j=j) jitdriver.jit_merge_point(frame=frame, total=total, j=j) diff --git a/pypy/jit/metainterp/test/test_ajit.py b/pypy/jit/metainterp/test/test_ajit.py --- a/pypy/jit/metainterp/test/test_ajit.py +++ b/pypy/jit/metainterp/test/test_ajit.py @@ -2943,11 +2943,18 @@ self.check_resops(arraylen_gc=3) def test_ulonglong_mod(self): - myjitdriver = JitDriver(greens = [], reds = ['n', 'sa', 'i']) + myjitdriver = JitDriver(greens = [], reds = ['n', 'a']) + class A: + pass def f(n): sa = i = rffi.cast(rffi.ULONGLONG, 1) + a = A() while i < rffi.cast(rffi.ULONGLONG, n): 
- myjitdriver.jit_merge_point(sa=sa, n=n, i=i) + a.sa = sa + a.i = i + myjitdriver.jit_merge_point(n=n, a=a) + sa = a.sa + i = a.i sa += sa % i i += 1 res = self.meta_interp(f, [32]) diff --git a/pypy/jit/tl/tinyframe/tinyframe.py b/pypy/jit/tl/tinyframe/tinyframe.py --- a/pypy/jit/tl/tinyframe/tinyframe.py +++ b/pypy/jit/tl/tinyframe/tinyframe.py @@ -210,7 +210,7 @@ def repr(self): return "" % (self.outer.repr(), self.inner.repr()) -driver = JitDriver(greens = ['code', 'i'], reds = ['self'], +driver = JitDriver(greens = ['i', 'code'], reds = ['self'], virtualizables = ['self']) class Frame(object): diff --git a/pypy/module/_io/interp_iobase.py b/pypy/module/_io/interp_iobase.py --- a/pypy/module/_io/interp_iobase.py +++ b/pypy/module/_io/interp_iobase.py @@ -323,7 +323,12 @@ def autoflush(self, space): w_iobase = self.w_iobase_ref() if w_iobase is not None: - space.call_method(w_iobase, 'flush') # XXX: ignore IOErrors? + try: + space.call_method(w_iobase, 'flush') + except OperationError, e: + # if it's an IOError, ignore it + if not e.match(space, space.w_IOError): + raise class AutoFlusher(object): diff --git a/pypy/module/_io/test/test_fileio.py b/pypy/module/_io/test/test_fileio.py --- a/pypy/module/_io/test/test_fileio.py +++ b/pypy/module/_io/test/test_fileio.py @@ -170,10 +170,27 @@ space = make_objspace(config) space.appexec([space.wrap(str(tmpfile))], """(tmpfile): import io - f = io.open(tmpfile, 'w') + f = io.open(tmpfile, 'w', encoding='ascii') f.write('42') # no flush() and no close() import sys; sys._keepalivesomewhereobscure = f """) space.finish() assert tmpfile.read() == '42' + +def test_flush_at_exit_IOError(): + from pypy import conftest + from pypy.tool.option import make_config, make_objspace + + config = make_config(conftest.option) + space = make_objspace(config) + space.appexec([], """(): + import io + class MyStream(io.IOBase): + def flush(self): + raise IOError + + s = MyStream() + import sys; sys._keepalivesomewhereobscure = s + """) + space.finish() # the IOError has been ignored diff --git a/pypy/module/cpyext/api.py b/pypy/module/cpyext/api.py --- a/pypy/module/cpyext/api.py +++ b/pypy/module/cpyext/api.py @@ -385,6 +385,7 @@ "Tuple": "space.w_tuple", "List": "space.w_list", "Set": "space.w_set", + "FrozenSet": "space.w_frozenset", "Int": "space.w_int", "Bool": "space.w_bool", "Float": "space.w_float", @@ -406,7 +407,7 @@ }.items(): GLOBALS['Py%s_Type#' % (cpyname, )] = ('PyTypeObject*', pypyexpr) - for cpyname in 'Method List Int Long Dict Tuple Class'.split(): + for cpyname in 'Method List Long Dict Tuple Class'.split(): FORWARD_DECLS.append('typedef struct { PyObject_HEAD } ' 'Py%sObject' % (cpyname, )) build_exported_objects() diff --git a/pypy/module/cpyext/eval.py b/pypy/module/cpyext/eval.py --- a/pypy/module/cpyext/eval.py +++ b/pypy/module/cpyext/eval.py @@ -1,16 +1,24 @@ from pypy.interpreter.error import OperationError +from pypy.interpreter.astcompiler import consts from pypy.rpython.lltypesystem import rffi, lltype from pypy.module.cpyext.api import ( cpython_api, CANNOT_FAIL, CONST_STRING, FILEP, fread, feof, Py_ssize_tP, cpython_struct) from pypy.module.cpyext.pyobject import PyObject, borrow_from from pypy.module.cpyext.pyerrors import PyErr_SetFromErrno +from pypy.module.cpyext.funcobject import PyCodeObject from pypy.module.__builtin__ import compiling PyCompilerFlags = cpython_struct( - "PyCompilerFlags", ()) + "PyCompilerFlags", (("cf_flags", rffi.INT),)) PyCompilerFlagsPtr = lltype.Ptr(PyCompilerFlags) +PyCF_MASK = 
(consts.CO_FUTURE_DIVISION | + consts.CO_FUTURE_ABSOLUTE_IMPORT | + consts.CO_FUTURE_WITH_STATEMENT | + consts.CO_FUTURE_PRINT_FUNCTION | + consts.CO_FUTURE_UNICODE_LITERALS) + @cpython_api([PyObject, PyObject, PyObject], PyObject) def PyEval_CallObjectWithKeywords(space, w_obj, w_arg, w_kwds): return space.call(w_obj, w_arg, w_kwds) @@ -48,6 +56,17 @@ return None return borrow_from(None, caller.w_globals) + at cpython_api([PyCodeObject, PyObject, PyObject], PyObject) +def PyEval_EvalCode(space, w_code, w_globals, w_locals): + """This is a simplified interface to PyEval_EvalCodeEx(), with just + the code object, and the dictionaries of global and local variables. + The other arguments are set to NULL.""" + if w_globals is None: + w_globals = space.w_None + if w_locals is None: + w_locals = space.w_None + return compiling.eval(space, w_code, w_globals, w_locals) + @cpython_api([PyObject, PyObject], PyObject) def PyObject_CallObject(space, w_obj, w_arg): """ @@ -74,7 +93,7 @@ Py_file_input = 257 Py_eval_input = 258 -def compile_string(space, source, filename, start): +def compile_string(space, source, filename, start, flags=0): w_source = space.wrap(source) start = rffi.cast(lltype.Signed, start) if start == Py_file_input: @@ -86,7 +105,7 @@ else: raise OperationError(space.w_ValueError, space.wrap( "invalid mode parameter for compilation")) - return compiling.compile(space, w_source, filename, mode) + return compiling.compile(space, w_source, filename, mode, flags) def run_string(space, source, filename, start, w_globals, w_locals): w_code = compile_string(space, source, filename, start) @@ -109,6 +128,24 @@ filename = "" return run_string(space, source, filename, start, w_globals, w_locals) + at cpython_api([rffi.CCHARP, rffi.INT_real, PyObject, PyObject, + PyCompilerFlagsPtr], PyObject) +def PyRun_StringFlags(space, source, start, w_globals, w_locals, flagsptr): + """Execute Python source code from str in the context specified by the + dictionaries globals and locals with the compiler flags specified by + flags. The parameter start specifies the start token that should be used to + parse the source code. + + Returns the result of executing the code as a Python object, or NULL if an + exception was raised.""" + source = rffi.charp2str(source) + if flagsptr: + flags = rffi.cast(lltype.Signed, flagsptr.c_cf_flags) + else: + flags = 0 + w_code = compile_string(space, source, "", start, flags) + return compiling.eval(space, w_code, w_globals, w_locals) + @cpython_api([FILEP, CONST_STRING, rffi.INT_real, PyObject, PyObject], PyObject) def PyRun_File(space, fp, filename, start, w_globals, w_locals): """This is a simplified interface to PyRun_FileExFlags() below, leaving @@ -150,7 +187,7 @@ @cpython_api([rffi.CCHARP, rffi.CCHARP, rffi.INT_real, PyCompilerFlagsPtr], PyObject) -def Py_CompileStringFlags(space, source, filename, start, flags): +def Py_CompileStringFlags(space, source, filename, start, flagsptr): """Parse and compile the Python source code in str, returning the resulting code object. 
The start token is given by start; this can be used to constrain the code which can be compiled and should @@ -160,7 +197,30 @@ returns NULL if the code cannot be parsed or compiled.""" source = rffi.charp2str(source) filename = rffi.charp2str(filename) - if flags: - raise OperationError(space.w_NotImplementedError, space.wrap( - "cpyext Py_CompileStringFlags does not accept flags")) - return compile_string(space, source, filename, start) + if flagsptr: + flags = rffi.cast(lltype.Signed, flagsptr.c_cf_flags) + else: + flags = 0 + return compile_string(space, source, filename, start, flags) + + at cpython_api([PyCompilerFlagsPtr], rffi.INT_real, error=CANNOT_FAIL) +def PyEval_MergeCompilerFlags(space, cf): + """This function changes the flags of the current evaluation + frame, and returns true on success, false on failure.""" + flags = rffi.cast(lltype.Signed, cf.c_cf_flags) + result = flags != 0 + current_frame = space.getexecutioncontext().gettopframe_nohidden() + if current_frame: + codeflags = current_frame.pycode.co_flags + compilerflags = codeflags & PyCF_MASK + if compilerflags: + result = 1 + flags |= compilerflags + # No future keyword at the moment + # if codeflags & CO_GENERATOR_ALLOWED: + # result = 1 + # flags |= CO_GENERATOR_ALLOWED + cf.c_cf_flags = rffi.cast(rffi.INT, flags) + return result + + diff --git a/pypy/module/cpyext/funcobject.py b/pypy/module/cpyext/funcobject.py --- a/pypy/module/cpyext/funcobject.py +++ b/pypy/module/cpyext/funcobject.py @@ -1,6 +1,6 @@ from pypy.rpython.lltypesystem import rffi, lltype from pypy.module.cpyext.api import ( - PyObjectFields, generic_cpy_call, CONST_STRING, + PyObjectFields, generic_cpy_call, CONST_STRING, CANNOT_FAIL, cpython_api, bootstrap_function, cpython_struct, build_type_checkers) from pypy.module.cpyext.pyobject import ( PyObject, make_ref, from_ref, Py_DecRef, make_typedescr, borrow_from) @@ -48,6 +48,7 @@ PyFunction_Check, PyFunction_CheckExact = build_type_checkers("Function", Function) PyMethod_Check, PyMethod_CheckExact = build_type_checkers("Method", Method) +PyCode_Check, PyCode_CheckExact = build_type_checkers("Code", PyCode) def function_attach(space, py_obj, w_obj): py_func = rffi.cast(PyFunctionObject, py_obj) @@ -167,3 +168,9 @@ freevars=[], cellvars=[])) + at cpython_api([PyObject], rffi.INT_real, error=CANNOT_FAIL) +def PyCode_GetNumFree(space, w_co): + """Return the number of free variables in co.""" + co = space.interp_w(PyCode, w_co) + return len(co.co_freevars) + diff --git a/pypy/module/cpyext/include/Python.h b/pypy/module/cpyext/include/Python.h --- a/pypy/module/cpyext/include/Python.h +++ b/pypy/module/cpyext/include/Python.h @@ -113,6 +113,7 @@ #include "compile.h" #include "frameobject.h" #include "eval.h" +#include "pymath.h" #include "pymem.h" #include "pycobject.h" #include "pycapsule.h" diff --git a/pypy/module/cpyext/include/code.h b/pypy/module/cpyext/include/code.h --- a/pypy/module/cpyext/include/code.h +++ b/pypy/module/cpyext/include/code.h @@ -13,13 +13,19 @@ /* Masks for co_flags above */ /* These values are also in funcobject.py */ -#define CO_OPTIMIZED 0x0001 -#define CO_NEWLOCALS 0x0002 -#define CO_VARARGS 0x0004 -#define CO_VARKEYWORDS 0x0008 +#define CO_OPTIMIZED 0x0001 +#define CO_NEWLOCALS 0x0002 +#define CO_VARARGS 0x0004 +#define CO_VARKEYWORDS 0x0008 #define CO_NESTED 0x0010 #define CO_GENERATOR 0x0020 +#define CO_FUTURE_DIVISION 0x02000 +#define CO_FUTURE_ABSOLUTE_IMPORT 0x04000 +#define CO_FUTURE_WITH_STATEMENT 0x08000 +#define CO_FUTURE_PRINT_FUNCTION 0x10000 +#define 
CO_FUTURE_UNICODE_LITERALS 0x20000 + #ifdef __cplusplus } #endif diff --git a/pypy/module/cpyext/include/intobject.h b/pypy/module/cpyext/include/intobject.h --- a/pypy/module/cpyext/include/intobject.h +++ b/pypy/module/cpyext/include/intobject.h @@ -7,6 +7,11 @@ extern "C" { #endif +typedef struct { + PyObject_HEAD + long ob_ival; +} PyIntObject; + #ifdef __cplusplus } #endif diff --git a/pypy/module/cpyext/include/pymath.h b/pypy/module/cpyext/include/pymath.h new file mode 100644 --- /dev/null +++ b/pypy/module/cpyext/include/pymath.h @@ -0,0 +1,20 @@ +#ifndef Py_PYMATH_H +#define Py_PYMATH_H + +/************************************************************************** +Symbols and macros to supply platform-independent interfaces to mathematical +functions and constants +**************************************************************************/ + +/* HUGE_VAL is supposed to expand to a positive double infinity. Python + * uses Py_HUGE_VAL instead because some platforms are broken in this + * respect. We used to embed code in pyport.h to try to worm around that, + * but different platforms are broken in conflicting ways. If you're on + * a platform where HUGE_VAL is defined incorrectly, fiddle your Python + * config to #define Py_HUGE_VAL to something that works on your platform. + */ +#ifndef Py_HUGE_VAL +#define Py_HUGE_VAL HUGE_VAL +#endif + +#endif /* Py_PYMATH_H */ diff --git a/pypy/module/cpyext/include/pythonrun.h b/pypy/module/cpyext/include/pythonrun.h --- a/pypy/module/cpyext/include/pythonrun.h +++ b/pypy/module/cpyext/include/pythonrun.h @@ -19,6 +19,14 @@ int cf_flags; /* bitmask of CO_xxx flags relevant to future */ } PyCompilerFlags; +#define PyCF_MASK (CO_FUTURE_DIVISION | CO_FUTURE_ABSOLUTE_IMPORT | \ + CO_FUTURE_WITH_STATEMENT | CO_FUTURE_PRINT_FUNCTION | \ + CO_FUTURE_UNICODE_LITERALS) +#define PyCF_MASK_OBSOLETE (CO_NESTED) +#define PyCF_SOURCE_IS_UTF8 0x0100 +#define PyCF_DONT_IMPLY_DEDENT 0x0200 +#define PyCF_ONLY_AST 0x0400 + #define Py_CompileString(str, filename, start) Py_CompileStringFlags(str, filename, start, NULL) #ifdef __cplusplus diff --git a/pypy/module/cpyext/intobject.py b/pypy/module/cpyext/intobject.py --- a/pypy/module/cpyext/intobject.py +++ b/pypy/module/cpyext/intobject.py @@ -2,11 +2,37 @@ from pypy.rpython.lltypesystem import rffi, lltype from pypy.interpreter.error import OperationError from pypy.module.cpyext.api import ( - cpython_api, build_type_checkers, PyObject, - CONST_STRING, CANNOT_FAIL, Py_ssize_t) + cpython_api, cpython_struct, build_type_checkers, bootstrap_function, + PyObject, PyObjectFields, CONST_STRING, CANNOT_FAIL, Py_ssize_t) +from pypy.module.cpyext.pyobject import ( + make_typedescr, track_reference, RefcountState, from_ref) from pypy.rlib.rarithmetic import r_uint, intmask, LONG_TEST +from pypy.objspace.std.intobject import W_IntObject import sys +PyIntObjectStruct = lltype.ForwardReference() +PyIntObject = lltype.Ptr(PyIntObjectStruct) +PyIntObjectFields = PyObjectFields + \ + (("ob_ival", rffi.LONG),) +cpython_struct("PyIntObject", PyIntObjectFields, PyIntObjectStruct) + + at bootstrap_function +def init_intobject(space): + "Type description of PyIntObject" + make_typedescr(space.w_int.instancetypedef, + basestruct=PyIntObject.TO, + realize=int_realize) + +def int_realize(space, obj): + intval = rffi.cast(lltype.Signed, rffi.cast(PyIntObject, obj).c_ob_ival) + w_type = from_ref(space, rffi.cast(PyObject, obj.c_ob_type)) + w_obj = space.allocate_instance(W_IntObject, w_type) + w_obj.__init__(intval) + 
track_reference(space, obj, w_obj) + state = space.fromcache(RefcountState) + state.set_lifeline(w_obj, obj) + return w_obj + PyInt_Check, PyInt_CheckExact = build_type_checkers("Int") @cpython_api([], lltype.Signed, error=CANNOT_FAIL) diff --git a/pypy/module/cpyext/object.py b/pypy/module/cpyext/object.py --- a/pypy/module/cpyext/object.py +++ b/pypy/module/cpyext/object.py @@ -193,7 +193,7 @@ if not obj: PyErr_NoMemory(space) obj.c_ob_type = type - _Py_NewReference(space, obj) + obj.c_ob_refcnt = 1 return obj @cpython_api([PyVarObject, PyTypeObjectPtr, Py_ssize_t], PyObject) diff --git a/pypy/module/cpyext/pyobject.py b/pypy/module/cpyext/pyobject.py --- a/pypy/module/cpyext/pyobject.py +++ b/pypy/module/cpyext/pyobject.py @@ -17,6 +17,7 @@ class BaseCpyTypedescr(object): basestruct = PyObject.TO + W_BaseObject = W_ObjectObject def get_dealloc(self, space): from pypy.module.cpyext.typeobject import subtype_dealloc @@ -51,10 +52,14 @@ def attach(self, space, pyobj, w_obj): pass - def realize(self, space, ref): - # For most types, a reference cannot exist without - # a real interpreter object - raise InvalidPointerException(str(ref)) + def realize(self, space, obj): + w_type = from_ref(space, rffi.cast(PyObject, obj.c_ob_type)) + w_obj = space.allocate_instance(self.W_BaseObject, w_type) + track_reference(space, obj, w_obj) + if w_type is not space.gettypefor(self.W_BaseObject): + state = space.fromcache(RefcountState) + state.set_lifeline(w_obj, obj) + return w_obj typedescr_cache = {} @@ -369,13 +374,7 @@ obj.c_ob_refcnt = 1 w_type = from_ref(space, rffi.cast(PyObject, obj.c_ob_type)) assert isinstance(w_type, W_TypeObject) - if w_type.is_cpytype(): - w_obj = space.allocate_instance(W_ObjectObject, w_type) - track_reference(space, obj, w_obj) - state = space.fromcache(RefcountState) - state.set_lifeline(w_obj, obj) - else: - assert False, "Please add more cases in _Py_NewReference()" + get_typedescr(w_type.instancetypedef).realize(space, obj) def _Py_Dealloc(space, obj): from pypy.module.cpyext.api import generic_cpy_call_dont_decref diff --git a/pypy/module/cpyext/stubs.py b/pypy/module/cpyext/stubs.py --- a/pypy/module/cpyext/stubs.py +++ b/pypy/module/cpyext/stubs.py @@ -182,16 +182,6 @@ used as the positional and keyword parameters to the object's constructor.""" raise NotImplementedError - at cpython_api([PyObject], rffi.INT_real, error=CANNOT_FAIL) -def PyCode_Check(space, co): - """Return true if co is a code object""" - raise NotImplementedError - - at cpython_api([PyObject], rffi.INT_real, error=CANNOT_FAIL) -def PyCode_GetNumFree(space, co): - """Return the number of free variables in co.""" - raise NotImplementedError - @cpython_api([PyObject], rffi.INT_real, error=-1) def PyCodec_Register(space, search_function): """Register a new codec search function. 
@@ -1853,26 +1843,6 @@ """ raise NotImplementedError - at cpython_api([Py_UNICODE], rffi.INT_real, error=CANNOT_FAIL) -def Py_UNICODE_ISTITLE(space, ch): - """Return 1 or 0 depending on whether ch is a titlecase character.""" - raise NotImplementedError - - at cpython_api([Py_UNICODE], rffi.INT_real, error=CANNOT_FAIL) -def Py_UNICODE_ISDIGIT(space, ch): - """Return 1 or 0 depending on whether ch is a digit character.""" - raise NotImplementedError - - at cpython_api([Py_UNICODE], rffi.INT_real, error=CANNOT_FAIL) -def Py_UNICODE_ISNUMERIC(space, ch): - """Return 1 or 0 depending on whether ch is a numeric character.""" - raise NotImplementedError - - at cpython_api([Py_UNICODE], rffi.INT_real, error=CANNOT_FAIL) -def Py_UNICODE_ISALPHA(space, ch): - """Return 1 or 0 depending on whether ch is an alphabetic character.""" - raise NotImplementedError - @cpython_api([rffi.CCHARP], PyObject) def PyUnicode_FromFormat(space, format): """Take a C printf()-style format string and a variable number of @@ -2317,17 +2287,6 @@ use the default error handling.""" raise NotImplementedError - at cpython_api([PyObject, PyObject, Py_ssize_t, Py_ssize_t, rffi.INT_real], rffi.INT_real, error=-1) -def PyUnicode_Tailmatch(space, str, substr, start, end, direction): - """Return 1 if substr matches str*[*start:end] at the given tail end - (direction == -1 means to do a prefix match, direction == 1 a suffix match), - 0 otherwise. Return -1 if an error occurred. - - This function used an int type for start and end. This - might require changes in your code for properly supporting 64-bit - systems.""" - raise NotImplementedError - @cpython_api([PyObject, PyObject, Py_ssize_t, Py_ssize_t, rffi.INT_real], Py_ssize_t, error=-2) def PyUnicode_Find(space, str, substr, start, end, direction): """Return the first position of substr in str*[*start:end] using the given @@ -2524,17 +2483,6 @@ source code is read from fp instead of an in-memory string.""" raise NotImplementedError - at cpython_api([rffi.CCHARP, rffi.INT_real, PyObject, PyObject, PyCompilerFlags], PyObject) -def PyRun_StringFlags(space, str, start, globals, locals, flags): - """Execute Python source code from str in the context specified by the - dictionaries globals and locals with the compiler flags specified by - flags. The parameter start specifies the start token that should be used to - parse the source code. - - Returns the result of executing the code as a Python object, or NULL if an - exception was raised.""" - raise NotImplementedError - @cpython_api([FILE, rffi.CCHARP, rffi.INT_real, PyObject, PyObject, rffi.INT_real], PyObject) def PyRun_FileEx(space, fp, filename, start, globals, locals, closeit): """This is a simplified interface to PyRun_FileExFlags() below, leaving @@ -2555,13 +2503,6 @@ returns.""" raise NotImplementedError - at cpython_api([PyCodeObject, PyObject, PyObject], PyObject) -def PyEval_EvalCode(space, co, globals, locals): - """This is a simplified interface to PyEval_EvalCodeEx(), with just - the code object, and the dictionaries of global and local variables. 
- The other arguments are set to NULL.""" - raise NotImplementedError - @cpython_api([PyCodeObject, PyObject, PyObject, PyObjectP, rffi.INT_real, PyObjectP, rffi.INT_real, PyObjectP, rffi.INT_real, PyObject], PyObject) def PyEval_EvalCodeEx(space, co, globals, locals, args, argcount, kws, kwcount, defs, defcount, closure): """Evaluate a precompiled code object, given a particular environment for its @@ -2586,12 +2527,6 @@ throw() methods of generator objects.""" raise NotImplementedError - at cpython_api([PyCompilerFlags], rffi.INT_real, error=CANNOT_FAIL) -def PyEval_MergeCompilerFlags(space, cf): - """This function changes the flags of the current evaluation frame, and returns - true on success, false on failure.""" - raise NotImplementedError - @cpython_api([PyObject], rffi.INT_real, error=CANNOT_FAIL) def PyWeakref_Check(space, ob): """Return true if ob is either a reference or proxy object. diff --git a/pypy/module/cpyext/test/test_eval.py b/pypy/module/cpyext/test/test_eval.py --- a/pypy/module/cpyext/test/test_eval.py +++ b/pypy/module/cpyext/test/test_eval.py @@ -2,9 +2,10 @@ from pypy.module.cpyext.test.test_cpyext import AppTestCpythonExtensionBase from pypy.module.cpyext.test.test_api import BaseApiTest from pypy.module.cpyext.eval import ( - Py_single_input, Py_file_input, Py_eval_input) + Py_single_input, Py_file_input, Py_eval_input, PyCompilerFlags) from pypy.module.cpyext.api import fopen, fclose, fileno, Py_ssize_tP from pypy.interpreter.gateway import interp2app +from pypy.interpreter.astcompiler import consts from pypy.tool.udir import udir import sys, os @@ -63,6 +64,22 @@ assert space.int_w(w_res) == 10 + def test_evalcode(self, space, api): + w_f = space.appexec([], """(): + def f(*args): + assert isinstance(args, tuple) + return len(args) + 8 + return f + """) + + w_t = space.newtuple([space.wrap(1), space.wrap(2)]) + w_globals = space.newdict() + w_locals = space.newdict() + space.setitem(w_locals, space.wrap("args"), w_t) + w_res = api.PyEval_EvalCode(w_f.code, w_globals, w_locals) + + assert space.int_w(w_res) == 10 + def test_run_simple_string(self, space, api): def run(code): buf = rffi.str2charp(code) @@ -96,6 +113,20 @@ assert 42 * 43 == space.unwrap( api.PyObject_GetItem(w_globals, space.wrap("a"))) + def test_run_string_flags(self, space, api): + flags = lltype.malloc(PyCompilerFlags, flavor='raw') + flags.c_cf_flags = rffi.cast(rffi.INT, consts.PyCF_SOURCE_IS_UTF8) + w_globals = space.newdict() + buf = rffi.str2charp("a = u'caf\xc3\xa9'") + try: + api.PyRun_StringFlags(buf, Py_single_input, + w_globals, w_globals, flags) + finally: + rffi.free_charp(buf) + w_a = space.getitem(w_globals, space.wrap("a")) + assert space.unwrap(w_a) == u'caf\xe9' + lltype.free(flags, flavor='raw') + def test_run_file(self, space, api): filepath = udir / "cpyext_test_runfile.py" filepath.write("raise ZeroDivisionError") @@ -256,3 +287,21 @@ print dir(mod) print mod.__dict__ assert mod.f(42) == 47 + + def test_merge_compiler_flags(self): + module = self.import_extension('foo', [ + ("get_flags", "METH_NOARGS", + """ + PyCompilerFlags flags; + flags.cf_flags = 0; + int result = PyEval_MergeCompilerFlags(&flags); + return Py_BuildValue("ii", result, flags.cf_flags); + """), + ]) + assert module.get_flags() == (0, 0) + + ns = {'module':module} + exec """from __future__ import division \nif 1: + def nested_flags(): + return module.get_flags()""" in ns + assert ns['nested_flags']() == (1, 0x2000) # CO_FUTURE_DIVISION diff --git a/pypy/module/cpyext/test/test_funcobject.py 
b/pypy/module/cpyext/test/test_funcobject.py --- a/pypy/module/cpyext/test/test_funcobject.py +++ b/pypy/module/cpyext/test/test_funcobject.py @@ -81,6 +81,14 @@ rffi.free_charp(filename) rffi.free_charp(funcname) + def test_getnumfree(self, space, api): + w_function = space.appexec([], """(): + a = 5 + def method(x): return a, x + return method + """) + assert api.PyCode_GetNumFree(w_function.code) == 1 + def test_classmethod(self, space, api): w_function = space.appexec([], """(): def method(x): return x diff --git a/pypy/module/cpyext/test/test_intobject.py b/pypy/module/cpyext/test/test_intobject.py --- a/pypy/module/cpyext/test/test_intobject.py +++ b/pypy/module/cpyext/test/test_intobject.py @@ -65,4 +65,97 @@ values = module.values() types = [type(x) for x in values] assert types == [int, long, int, int] - + + def test_int_subtype(self): + module = self.import_extension( + 'foo', [ + ("newEnum", "METH_VARARGS", + """ + EnumObject *enumObj; + long intval; + PyObject *name; + + if (!PyArg_ParseTuple(args, "Oi", &name, &intval)) + return NULL; + + PyType_Ready(&Enum_Type); + enumObj = PyObject_New(EnumObject, &Enum_Type); + if (!enumObj) { + return NULL; + } + + enumObj->ob_ival = intval; + Py_INCREF(name); + enumObj->ob_name = name; + + return (PyObject *)enumObj; + """), + ], + prologue=""" + typedef struct + { + PyObject_HEAD + long ob_ival; + PyObject* ob_name; + } EnumObject; + + static void + enum_dealloc(EnumObject *op) + { + Py_DECREF(op->ob_name); + Py_TYPE(op)->tp_free((PyObject *)op); + } + + static PyMemberDef enum_members[] = { + {"name", T_OBJECT, offsetof(EnumObject, ob_name), 0, NULL}, + {NULL} /* Sentinel */ + }; + + PyTypeObject Enum_Type = { + PyObject_HEAD_INIT(0) + /*ob_size*/ 0, + /*tp_name*/ "Enum", + /*tp_basicsize*/ sizeof(EnumObject), + /*tp_itemsize*/ 0, + /*tp_dealloc*/ enum_dealloc, + /*tp_print*/ 0, + /*tp_getattr*/ 0, + /*tp_setattr*/ 0, + /*tp_compare*/ 0, + /*tp_repr*/ 0, + /*tp_as_number*/ 0, + /*tp_as_sequence*/ 0, + /*tp_as_mapping*/ 0, + /*tp_hash*/ 0, + /*tp_call*/ 0, + /*tp_str*/ 0, + /*tp_getattro*/ 0, + /*tp_setattro*/ 0, + /*tp_as_buffer*/ 0, + /*tp_flags*/ Py_TPFLAGS_DEFAULT|Py_TPFLAGS_BASETYPE, + /*tp_doc*/ 0, + /*tp_traverse*/ 0, + /*tp_clear*/ 0, + /*tp_richcompare*/ 0, + /*tp_weaklistoffset*/ 0, + /*tp_iter*/ 0, + /*tp_iternext*/ 0, + /*tp_methods*/ 0, + /*tp_members*/ enum_members, + /*tp_getset*/ 0, + /*tp_base*/ &PyInt_Type, + /*tp_dict*/ 0, + /*tp_descr_get*/ 0, + /*tp_descr_set*/ 0, + /*tp_dictoffset*/ 0, + /*tp_init*/ 0, + /*tp_alloc*/ 0, + /*tp_new*/ 0 + }; + """) + + a = module.newEnum("ULTIMATE_ANSWER", 42) + assert type(a).__name__ == "Enum" + assert isinstance(a, int) + assert a == int(a) == 42 + assert a.name == "ULTIMATE_ANSWER" diff --git a/pypy/module/cpyext/test/test_unicodeobject.py b/pypy/module/cpyext/test/test_unicodeobject.py --- a/pypy/module/cpyext/test/test_unicodeobject.py +++ b/pypy/module/cpyext/test/test_unicodeobject.py @@ -204,8 +204,18 @@ assert api.Py_UNICODE_ISSPACE(unichr(char)) assert not api.Py_UNICODE_ISSPACE(u'a') + assert api.Py_UNICODE_ISALPHA(u'a') + assert not api.Py_UNICODE_ISALPHA(u'0') + assert api.Py_UNICODE_ISALNUM(u'a') + assert api.Py_UNICODE_ISALNUM(u'0') + assert not api.Py_UNICODE_ISALNUM(u'+') + assert api.Py_UNICODE_ISDECIMAL(u'\u0660') assert not api.Py_UNICODE_ISDECIMAL(u'a') + assert api.Py_UNICODE_ISDIGIT(u'9') + assert not api.Py_UNICODE_ISDIGIT(u'@') + assert api.Py_UNICODE_ISNUMERIC(u'9') + assert not api.Py_UNICODE_ISNUMERIC(u'@') for char in [0x0a, 0x0d, 0x1c, 0x1d, 0x1e, 0x85, 
0x2028, 0x2029]: assert api.Py_UNICODE_ISLINEBREAK(unichr(char)) @@ -216,6 +226,9 @@ assert not api.Py_UNICODE_ISUPPER(u'a') assert not api.Py_UNICODE_ISLOWER(u'�') assert api.Py_UNICODE_ISUPPER(u'�') + assert not api.Py_UNICODE_ISTITLE(u'A') + assert api.Py_UNICODE_ISTITLE( + u'\N{LATIN CAPITAL LETTER L WITH SMALL LETTER J}') def test_TOLOWER(self, space, api): assert api.Py_UNICODE_TOLOWER(u'�') == u'�' @@ -437,3 +450,10 @@ api.PyUnicode_Replace(w_str, w_substr, w_replstr, 2)) assert u"zbzbzbzb" == space.unwrap( api.PyUnicode_Replace(w_str, w_substr, w_replstr, -1)) + + def test_tailmatch(self, space, api): + w_str = space.wrap(u"abcdef") + assert api.PyUnicode_Tailmatch(w_str, space.wrap("cde"), 2, 10, 1) == 1 + assert api.PyUnicode_Tailmatch(w_str, space.wrap("cde"), 1, 5, -1) == 1 + self.raises(space, api, TypeError, + api.PyUnicode_Tailmatch, w_str, space.wrap(3), 2, 10, 1) diff --git a/pypy/module/cpyext/unicodeobject.py b/pypy/module/cpyext/unicodeobject.py --- a/pypy/module/cpyext/unicodeobject.py +++ b/pypy/module/cpyext/unicodeobject.py @@ -12,7 +12,7 @@ make_typedescr, get_typedescr) from pypy.module.cpyext.stringobject import PyString_Check from pypy.module.sys.interp_encoding import setdefaultencoding -from pypy.objspace.std import unicodeobject, unicodetype +from pypy.objspace.std import unicodeobject, unicodetype, stringtype from pypy.rlib import runicode from pypy.tool.sourcetools import func_renamer import sys @@ -89,6 +89,11 @@ return unicodedb.isspace(ord(ch)) @cpython_api([Py_UNICODE], rffi.INT_real, error=CANNOT_FAIL) +def Py_UNICODE_ISALPHA(space, ch): + """Return 1 or 0 depending on whether ch is an alphabetic character.""" + return unicodedb.isalpha(ord(ch)) + + at cpython_api([Py_UNICODE], rffi.INT_real, error=CANNOT_FAIL) def Py_UNICODE_ISALNUM(space, ch): """Return 1 or 0 depending on whether ch is an alphanumeric character.""" return unicodedb.isalnum(ord(ch)) @@ -104,6 +109,16 @@ return unicodedb.isdecimal(ord(ch)) @cpython_api([Py_UNICODE], rffi.INT_real, error=CANNOT_FAIL) +def Py_UNICODE_ISDIGIT(space, ch): + """Return 1 or 0 depending on whether ch is a digit character.""" + return unicodedb.isdigit(ord(ch)) + + at cpython_api([Py_UNICODE], rffi.INT_real, error=CANNOT_FAIL) +def Py_UNICODE_ISNUMERIC(space, ch): + """Return 1 or 0 depending on whether ch is a numeric character.""" + return unicodedb.isnumeric(ord(ch)) + + at cpython_api([Py_UNICODE], rffi.INT_real, error=CANNOT_FAIL) def Py_UNICODE_ISLOWER(space, ch): """Return 1 or 0 depending on whether ch is a lowercase character.""" return unicodedb.islower(ord(ch)) @@ -113,6 +128,11 @@ """Return 1 or 0 depending on whether ch is an uppercase character.""" return unicodedb.isupper(ord(ch)) + at cpython_api([Py_UNICODE], rffi.INT_real, error=CANNOT_FAIL) +def Py_UNICODE_ISTITLE(space, ch): + """Return 1 or 0 depending on whether ch is a titlecase character.""" + return unicodedb.istitle(ord(ch)) + @cpython_api([Py_UNICODE], Py_UNICODE, error=CANNOT_FAIL) def Py_UNICODE_TOLOWER(space, ch): """Return the character ch converted to lower case.""" @@ -155,6 +175,11 @@ except KeyError: return -1.0 + at cpython_api([], Py_UNICODE, error=CANNOT_FAIL) +def PyUnicode_GetMax(space): + """Get the maximum ordinal for a Unicode character.""" + return unichr(runicode.MAXUNICODE) + @cpython_api([PyObject], rffi.CCHARP, error=CANNOT_FAIL) def PyUnicode_AS_DATA(space, ref): """Return a pointer to the internal buffer of the object. 
o has to be a @@ -560,3 +585,16 @@ return space.call_method(w_str, "replace", w_substr, w_replstr, space.wrap(maxcount)) + at cpython_api([PyObject, PyObject, Py_ssize_t, Py_ssize_t, rffi.INT_real], + rffi.INT_real, error=-1) +def PyUnicode_Tailmatch(space, w_str, w_substr, start, end, direction): + """Return 1 if substr matches str[start:end] at the given tail end + (direction == -1 means to do a prefix match, direction == 1 a + suffix match), 0 otherwise. Return -1 if an error occurred.""" + str = space.unicode_w(w_str) + substr = space.unicode_w(w_substr) + if rffi.cast(lltype.Signed, direction) >= 0: + return stringtype.stringstartswith(str, substr, start, end) + else: + return stringtype.stringendswith(str, substr, start, end) + diff --git a/pypy/rlib/debug.py b/pypy/rlib/debug.py --- a/pypy/rlib/debug.py +++ b/pypy/rlib/debug.py @@ -26,6 +26,7 @@ llop.debug_print_traceback(lltype.Void) llop.debug_fatalerror(lltype.Void, msg) fatalerror._dont_inline_ = True +fatalerror._jit_look_inside_ = False fatalerror._annenforceargs_ = [str] def fatalerror_notb(msg): @@ -34,6 +35,7 @@ from pypy.rpython.lltypesystem.lloperation import llop llop.debug_fatalerror(lltype.Void, msg) fatalerror_notb._dont_inline_ = True +fatalerror_notb._jit_look_inside_ = False fatalerror_notb._annenforceargs_ = [str] diff --git a/pypy/translator/c/src/asm_gcc_x86.h b/pypy/translator/c/src/asm_gcc_x86.h --- a/pypy/translator/c/src/asm_gcc_x86.h +++ b/pypy/translator/c/src/asm_gcc_x86.h @@ -102,6 +102,12 @@ #endif /* !PYPY_CPU_HAS_STANDARD_PRECISION */ +#ifdef PYPY_X86_CHECK_SSE2 +#define PYPY_X86_CHECK_SSE2_DEFINED +extern void pypy_x86_check_sse2(void); +#endif + + /* implementations */ #ifndef PYPY_NOT_MAIN_FILE @@ -113,4 +119,25 @@ } # endif +# ifdef PYPY_X86_CHECK_SSE2 +void pypy_x86_check_sse2(void) +{ + //Read the CPU features. + int features; + asm("mov $1, %%eax\n" + "cpuid\n" + "mov %%edx, %0" + : "=g"(features) : : "eax", "ebx", "edx", "ecx"); + + //Check bits 25 and 26, this indicates SSE2 support + if (((features & (1 << 25)) == 0) || ((features & (1 << 26)) == 0)) + { + fprintf(stderr, "Old CPU with no SSE2 support, cannot continue.\n" + "You need to re-translate with " + "'--jit-backend=x86-without-sse2'\n"); + abort(); + } +} +# endif + #endif diff --git a/pypy/translator/c/src/debug_print.c b/pypy/translator/c/src/debug_print.c --- a/pypy/translator/c/src/debug_print.c +++ b/pypy/translator/c/src/debug_print.c @@ -1,3 +1,4 @@ +#define PYPY_NOT_MAIN_FILE #include #include diff --git a/pypy/translator/c/src/dtoa.c b/pypy/translator/c/src/dtoa.c --- a/pypy/translator/c/src/dtoa.c +++ b/pypy/translator/c/src/dtoa.c @@ -46,13 +46,13 @@ * of return type *Bigint all return NULL to indicate a malloc failure. * Similarly, rv_alloc and nrv_alloc (return type char *) return NULL on * failure. bigcomp now has return type int (it used to be void) and - * returns -1 on failure and 0 otherwise. _Py_dg_dtoa returns NULL - * on failure. _Py_dg_strtod indicates failure due to malloc failure + * returns -1 on failure and 0 otherwise. __Py_dg_dtoa returns NULL + * on failure. __Py_dg_strtod indicates failure due to malloc failure * by returning -1.0, setting errno=ENOMEM and *se to s00. * * 4. The static variable dtoa_result has been removed. Callers of - * _Py_dg_dtoa are expected to call _Py_dg_freedtoa to free - * the memory allocated by _Py_dg_dtoa. + * __Py_dg_dtoa are expected to call __Py_dg_freedtoa to free + * the memory allocated by __Py_dg_dtoa. * * 5. 
The code has been reformatted to better fit with Python's * C style guide (PEP 7). @@ -61,7 +61,7 @@ * that hasn't been MALLOC'ed, private_mem should only be used when k <= * Kmax. * - * 7. _Py_dg_strtod has been modified so that it doesn't accept strings with + * 7. __Py_dg_strtod has been modified so that it doesn't accept strings with * leading whitespace. * ***************************************************************/ @@ -283,7 +283,7 @@ #define Big0 (Frac_mask1 | Exp_msk1*(DBL_MAX_EXP+Bias-1)) #define Big1 0xffffffff -/* struct BCinfo is used to pass information from _Py_dg_strtod to bigcomp */ +/* struct BCinfo is used to pass information from __Py_dg_strtod to bigcomp */ typedef struct BCinfo BCinfo; struct @@ -494,7 +494,7 @@ /* convert a string s containing nd decimal digits (possibly containing a decimal separator at position nd0, which is ignored) to a Bigint. This - function carries on where the parsing code in _Py_dg_strtod leaves off: on + function carries on where the parsing code in __Py_dg_strtod leaves off: on entry, y9 contains the result of converting the first 9 digits. Returns NULL on failure. */ @@ -1050,7 +1050,7 @@ } /* Convert a scaled double to a Bigint plus an exponent. Similar to d2b, - except that it accepts the scale parameter used in _Py_dg_strtod (which + except that it accepts the scale parameter used in __Py_dg_strtod (which should be either 0 or 2*P), and the normalization for the return value is different (see below). On input, d should be finite and nonnegative, and d / 2**scale should be exactly representable as an IEEE 754 double. @@ -1351,9 +1351,9 @@ /* The bigcomp function handles some hard cases for strtod, for inputs with more than STRTOD_DIGLIM digits. It's called once an initial estimate for the double corresponding to the input string has - already been obtained by the code in _Py_dg_strtod. + already been obtained by the code in __Py_dg_strtod. - The bigcomp function is only called after _Py_dg_strtod has found a + The bigcomp function is only called after __Py_dg_strtod has found a double value rv such that either rv or rv + 1ulp represents the correctly rounded value corresponding to the original string. It determines which of these two values is the correct one by @@ -1368,12 +1368,12 @@ s0 points to the first significant digit of the input string. rv is a (possibly scaled) estimate for the closest double value to the - value represented by the original input to _Py_dg_strtod. If + value represented by the original input to __Py_dg_strtod. If bc->scale is nonzero, then rv/2^(bc->scale) is the approximation to the input value. bc is a struct containing information gathered during the parsing and - estimation steps of _Py_dg_strtod. Description of fields follows: + estimation steps of __Py_dg_strtod. Description of fields follows: bc->e0 gives the exponent of the input value, such that dv = (integer given by the bd->nd digits of s0) * 10**e0 @@ -1505,7 +1505,7 @@ } static double -_Py_dg_strtod(const char *s00, char **se) +__Py_dg_strtod(const char *s00, char **se) { int bb2, bb5, bbe, bd2, bd5, bs2, c, dsign, e, e1, error; int esign, i, j, k, lz, nd, nd0, odd, sign; @@ -1849,7 +1849,7 @@ for(;;) { - /* This is the main correction loop for _Py_dg_strtod. + /* This is the main correction loop for __Py_dg_strtod. We've got a decimal value tdv, and a floating-point approximation srv=rv/2^bc.scale to tdv. 
The aim is to determine whether srv is @@ -2283,7 +2283,7 @@ */ static void -_Py_dg_freedtoa(char *s) +__Py_dg_freedtoa(char *s) { Bigint *b = (Bigint *)((int *)s - 1); b->maxwds = 1 << (b->k = *(int*)b); @@ -2325,11 +2325,11 @@ */ /* Additional notes (METD): (1) returns NULL on failure. (2) to avoid memory - leakage, a successful call to _Py_dg_dtoa should always be matched by a - call to _Py_dg_freedtoa. */ + leakage, a successful call to __Py_dg_dtoa should always be matched by a + call to __Py_dg_freedtoa. */ static char * -_Py_dg_dtoa(double dd, int mode, int ndigits, +__Py_dg_dtoa(double dd, int mode, int ndigits, int *decpt, int *sign, char **rve) { /* Arguments ndigits, decpt, sign are similar to those @@ -2926,7 +2926,7 @@ if (b) Bfree(b); if (s0) - _Py_dg_freedtoa(s0); + __Py_dg_freedtoa(s0); return NULL; } @@ -2947,7 +2947,7 @@ _PyPy_SET_53BIT_PRECISION_HEADER; _PyPy_SET_53BIT_PRECISION_START; - result = _Py_dg_strtod(s00, se); + result = __Py_dg_strtod(s00, se); _PyPy_SET_53BIT_PRECISION_END; return result; } @@ -2959,14 +2959,14 @@ _PyPy_SET_53BIT_PRECISION_HEADER; _PyPy_SET_53BIT_PRECISION_START; - result = _Py_dg_dtoa(dd, mode, ndigits, decpt, sign, rve); + result = __Py_dg_dtoa(dd, mode, ndigits, decpt, sign, rve); _PyPy_SET_53BIT_PRECISION_END; return result; } void _PyPy_dg_freedtoa(char *s) { - _Py_dg_freedtoa(s); + __Py_dg_freedtoa(s); } /* End PYPY hacks */ diff --git a/pypy/translator/c/src/main.h b/pypy/translator/c/src/main.h --- a/pypy/translator/c/src/main.h +++ b/pypy/translator/c/src/main.h @@ -36,6 +36,9 @@ RPyListOfString *list; pypy_asm_stack_bottom(); +#ifdef PYPY_X86_CHECK_SSE2_DEFINED + pypy_x86_check_sse2(); +#endif instrument_setup(); if (sizeof(void*) != SIZEOF_LONG) { diff --git a/pypy/translator/sandbox/test/test_sandbox.py b/pypy/translator/sandbox/test/test_sandbox.py --- a/pypy/translator/sandbox/test/test_sandbox.py +++ b/pypy/translator/sandbox/test/test_sandbox.py @@ -145,9 +145,9 @@ g = pipe.stdin f = pipe.stdout expect(f, g, "ll_os.ll_os_getenv", ("PYPY_GENERATIONGC_NURSERY",), None) - if sys.platform.startswith('linux'): # on Mac, uses another (sandboxsafe) approach - expect(f, g, "ll_os.ll_os_open", ("/proc/cpuinfo", 0, 420), - OSError(5232, "xyz")) + #if sys.platform.startswith('linux'): + # expect(f, g, "ll_os.ll_os_open", ("/proc/cpuinfo", 0, 420), + # OSError(5232, "xyz")) expect(f, g, "ll_os.ll_os_getenv", ("PYPY_GC_DEBUG",), None) g.close() tail = f.read() From noreply at buildbot.pypy.org Sat Feb 25 02:38:53 2012 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Sat, 25 Feb 2012 02:38:53 +0100 (CET) Subject: [pypy-commit] pypy speedup-list-comprehension: review notes Message-ID: <20120225013853.C620082366@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: speedup-list-comprehension Changeset: r52894:af87de586204 Date: 2012-02-24 20:38 -0500 http://bitbucket.org/pypy/pypy/changeset/af87de586204/ Log: review notes diff --git a/REVIEW.rst b/REVIEW.rst new file mode 100644 --- /dev/null +++ b/REVIEW.rst @@ -0,0 +1,10 @@ +Review notes +============ + + +* explicit tests for the generated bytecode +* BUILD_LIST_FROM_ARG shouldn't absort async exceptions +* dead code in jit/codewriter/support.py? +* Do we need __length_hint__? 
+* len_w instead of int_w(len()) +* share some code with _unpackiterable_unknown_length From noreply at buildbot.pypy.org Sat Feb 25 03:12:10 2012 From: noreply at buildbot.pypy.org (fijal) Date: Sat, 25 Feb 2012 03:12:10 +0100 (CET) Subject: [pypy-commit] pypy numpy-record-dtypes: start implementing string boxes Message-ID: <20120225021210.EC96782366@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-record-dtypes Changeset: r52895:36e4ff545206 Date: 2012-02-24 18:11 -0800 http://bitbucket.org/pypy/pypy/changeset/36e4ff545206/ Log: start implementing string boxes diff --git a/pypy/module/micronumpy/interp_boxes.py b/pypy/module/micronumpy/interp_boxes.py --- a/pypy/module/micronumpy/interp_boxes.py +++ b/pypy/module/micronumpy/interp_boxes.py @@ -197,9 +197,6 @@ class W_FlexibleBox(W_GenericBox): - pass - -class W_VoidBox(W_FlexibleBox): def __init__(self, arr, ofs): self.arr = arr # we have to keep array alive self.ofs = ofs @@ -207,6 +204,7 @@ def get_dtype(self, space): return self.arr.dtype +class W_VoidBox(W_FlexibleBox): @unwrap_spec(item=str) def descr_getitem(self, space, item): try: @@ -230,10 +228,26 @@ pass class W_StringBox(W_CharacterBox): - pass + def descr__new__(space, w_subtype, w_arg): + from pypy.module.micronumpy.interp_numarray import W_NDimArray + from pypy.module.micronumpy.interp_dtype import new_string_dtype + + arg = space.str_w(space.str(w_arg)) + arr = W_NDimArray([1], new_string_dtype(space, len(arg))) + for i in range(len(arg)): + arr.storage[i] = arg[i] + return W_StringBox(arr, 0) class W_UnicodeBox(W_CharacterBox): - pass + def descr__new__(space, w_subtype, w_arg): + from pypy.module.micronumpy.interp_numarray import W_NDimArray + from pypy.module.micronumpy.interp_dtype import new_unicode_dtype + + arg = space.unicode_w(space.unicode(w_arg)) + arr = W_NDimArray([1], new_unicode_dtype(space, len(arg))) + for i in range(len(arg)): + arr.setitem(i, arg[i]) + return W_UnicodeBox(arr, 0) W_GenericBox.typedef = TypeDef("generic", __module__ = "numpypy", @@ -394,9 +408,11 @@ W_StringBox.typedef = TypeDef("string_", (str_typedef, W_CharacterBox.typedef), __module__ = "numpypy", + __new__ = interp2app(W_StringBox.descr__new__.im_func), ) W_UnicodeBox.typedef = TypeDef("unicode_", (unicode_typedef, W_CharacterBox.typedef), __module__ = "numpypy", + __new__ = interp2app(W_UnicodeBox.descr__new__.im_func), ) diff --git a/pypy/module/micronumpy/interp_dtype.py b/pypy/module/micronumpy/interp_dtype.py --- a/pypy/module/micronumpy/interp_dtype.py +++ b/pypy/module/micronumpy/interp_dtype.py @@ -247,6 +247,27 @@ byteorder_prefix = '>' nonnative_byteorder_prefix = '<' +def new_string_dtype(space, size): + return W_Dtype( + types.StringType(size), + num=18, + kind=STRINGLTR, + name='string', + char='S' + str(size), + w_box_type = space.gettypefor(interp_boxes.W_StringBox), + ) + +def new_unicode_dtype(space, size): + return W_Dtype( + types.UnicodeType(size), + num=19, + kind=UNICODELTR, + name='unicode', + char='U' + str(size), + w_box_type = space.gettypefor(interp_boxes.W_UnicodeBox), + ) + + class DtypeCache(object): def __init__(self, space): self.w_booldtype = W_Dtype( @@ -379,7 +400,7 @@ w_box_type = space.gettypefor(interp_boxes.W_ULongLongBox), ) self.w_stringdtype = W_Dtype( - types.StringType(0), + types.StringType(1), num=18, kind=STRINGLTR, name='string', @@ -388,7 +409,7 @@ alternate_constructors=[space.w_str], ) self.w_unicodedtype = W_Dtype( - types.UnicodeType(0), + types.UnicodeType(1), num=19, kind=UNICODELTR, name='unicode', diff 
--git a/pypy/module/micronumpy/test/test_dtypes.py b/pypy/module/micronumpy/test/test_dtypes.py --- a/pypy/module/micronumpy/test/test_dtypes.py +++ b/pypy/module/micronumpy/test/test_dtypes.py @@ -521,6 +521,14 @@ assert d.name == "unicode256" assert d.num == 19 + def test_string_boxes(self): + from _numpypy import str_ + assert str_(3) == '3' + + def test_unicode_boxes(self): + from _numpypy import str_ + assert str_(3) == '3' + class AppTestRecordDtypes(BaseNumpyAppTest): def test_create(self): from _numpypy import dtype, void From noreply at buildbot.pypy.org Sat Feb 25 03:16:50 2012 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Sat, 25 Feb 2012 03:16:50 +0100 (CET) Subject: [pypy-commit] pypy speedup-list-comprehension: let async exceptions propogate Message-ID: <20120225021650.D5F3982366@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: speedup-list-comprehension Changeset: r52896:0838a21b4528 Date: 2012-02-24 21:16 -0500 http://bitbucket.org/pypy/pypy/changeset/0838a21b4528/ Log: let async exceptions propogate diff --git a/pypy/interpreter/pyopcode.py b/pypy/interpreter/pyopcode.py --- a/pypy/interpreter/pyopcode.py +++ b/pypy/interpreter/pyopcode.py @@ -719,7 +719,9 @@ last_val = self.popvalue() try: lgt = self.space.int_w(self.space.len(last_val)) - except OperationError: + except OperationError, e: + if e.async(space): + raise lgt = 0 # oh well self.pushvalue(self.space.newlist([], sizehint=lgt)) self.pushvalue(last_val) From noreply at buildbot.pypy.org Sat Feb 25 03:19:24 2012 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Sat, 25 Feb 2012 03:19:24 +0100 (CET) Subject: [pypy-commit] pypy speedup-list-comprehension: done Message-ID: <20120225021924.9F5D282366@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: speedup-list-comprehension Changeset: r52897:f0994a94c60a Date: 2012-02-24 21:19 -0500 http://bitbucket.org/pypy/pypy/changeset/f0994a94c60a/ Log: done diff --git a/REVIEW.rst b/REVIEW.rst --- a/REVIEW.rst +++ b/REVIEW.rst @@ -3,7 +3,6 @@ * explicit tests for the generated bytecode -* BUILD_LIST_FROM_ARG shouldn't absort async exceptions * dead code in jit/codewriter/support.py? * Do we need __length_hint__? * len_w instead of int_w(len()) From noreply at buildbot.pypy.org Sat Feb 25 06:13:07 2012 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Sat, 25 Feb 2012 06:13:07 +0100 (CET) Subject: [pypy-commit] pypy default: kill these immutable fields, they're mutated in _del_sources Message-ID: <20120225051307.882A982366@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: Changeset: r52898:ffe32a5a3f80 Date: 2012-02-25 00:12 -0500 http://bitbucket.org/pypy/pypy/changeset/ffe32a5a3f80/ Log: kill these immutable fields, they're mutated in _del_sources diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -779,8 +779,6 @@ """ Intermediate class for performing binary operations. 
""" - _immutable_fields_ = ['left', 'right'] - def __init__(self, ufunc, name, shape, calc_dtype, res_dtype, left, right): VirtualArray.__init__(self, name, shape, res_dtype) self.ufunc = ufunc @@ -856,8 +854,6 @@ self.right.create_sig(), done_func) class AxisReduce(Call2): - _immutable_fields_ = ['left', 'right'] - def __init__(self, ufunc, name, identity, shape, dtype, left, right, dim): Call2.__init__(self, ufunc, name, shape, dtype, dtype, left, right) From notifications-noreply at bitbucket.org Sat Feb 25 08:04:49 2012 From: notifications-noreply at bitbucket.org (Bitbucket) Date: Sat, 25 Feb 2012 07:04:49 -0000 Subject: [pypy-commit] Notification: pypy Message-ID: <20120225070449.11434.45979@bitbucket02.managed.contegix.com> You have received a notification from Ross Lagerwall. Hi, I forked pypy. My fork is at https://bitbucket.org/rosslagerwall/pypy. -- Disable notifications at https://bitbucket.org/account/notifications/ From noreply at buildbot.pypy.org Sat Feb 25 17:49:14 2012 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Sat, 25 Feb 2012 17:49:14 +0100 (CET) Subject: [pypy-commit] extradoc extradoc: new slides Message-ID: <20120225164914.365DB82366@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: extradoc Changeset: r4106:6617ba2b6f20 Date: 2012-02-25 11:48 -0500 http://bitbucket.org/pypy/extradoc/changeset/6617ba2b6f20/ Log: new slides diff --git a/talk/pycon2012/tutorial/slides.rst b/talk/pycon2012/tutorial/slides.rst --- a/talk/pycon2012/tutorial/slides.rst +++ b/talk/pycon2012/tutorial/slides.rst @@ -1,3 +1,44 @@ +First rule of optimization? +=========================== + +|pause| + +If it's not correct, it doesn't matter. + +Second rule of optimization? +============================ + +|pause| + +If it's not faster, you're wasting ime. + +Third rule of optimization? +=========================== + +|pause| + +Measure twice, cut once. + +(C)Python performance tricks +============================ + +|pause| + +* ``map()`` instead of list comprehensions + +* ``def f(int=int):``, make globals local + +* ``append = my_list.append``, grab bound methods outside loop + +* Avoiding function calls + +Forget these +============ + +* PyPy has totally different performance characterists + +* Which we're going to learn about now + Why PyPy? ========= @@ -41,7 +82,7 @@ * moving computations to C, example:: - map(operator.... ) # XXX some obscure example + map(operator.attrgetter("a"), my_list) PyPy's sweetpot =============== From noreply at buildbot.pypy.org Sat Feb 25 20:08:55 2012 From: noreply at buildbot.pypy.org (arigo) Date: Sat, 25 Feb 2012 20:08:55 +0100 (CET) Subject: [pypy-commit] extradoc extradoc: Update the status. Message-ID: <20120225190855.62B0382366@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: extradoc Changeset: r4107:c85ffab7807f Date: 2012-02-25 20:08 +0100 http://bitbucket.org/pypy/extradoc/changeset/c85ffab7807f/ Log: Update the status. diff --git a/planning/stm.txt b/planning/stm.txt --- a/planning/stm.txt +++ b/planning/stm.txt @@ -75,8 +75,8 @@ use 4-5 bits, where in addition we use some "thread hash" value if there is only one copy. -<< NOW: think of a minimal GC model with these properties. We probably -need GC_GLOBAL, a single bit of GC_WAS_COPIED, and the version number. >> +<< NOW: implemented a minimal GC model with these properties. We have +GC_GLOBAL, a single bit of GC_WAS_COPIED, and the version number. >> stm_read @@ -102,9 +102,8 @@ depending on cases). 
And if the read is accepted then we need to remember in a local list that we've read that object. -<< NOW: implement the thread's local dictionary in C, as say a search -tree. Should be easy enough if we don't try to be as efficient as -possible. The rest of the logic here is straightforward. >> +<< NOW: the thread's local dictionary is in C, as a search tree. +The rest of the logic here is straightforward. >> stm_write @@ -124,7 +123,7 @@ consistent copy (i.e. nobody changed the object in the middle of us reading it). If it is too recent, then we might have to abort. -<< NOW: straightforward >> +<< NOW: done, straightforward >> TODO: how do we handle MemoryErrors when making a local copy?? Maybe force the transaction to abort, and then re-raise MemoryError @@ -147,8 +146,7 @@ We need to check that each of these global objects' versions have not been modified in the meantime. -<< NOW: should be easy, but with unclear interactions between the C -code and the GC. >> +<< NOW: done, kind of easy >> Annotator support @@ -167,7 +165,10 @@ of a localobj are themselves localobjs. This would be useful for 'PyFrame.fastlocals_w': it should also be known to always be a localobj. -<< do later >> +<< NOW: done in the basic form by translator/stm/transform.py. +Runs late (just before C databasing). Should work well enough to +remove the maximum number of write barriers, but still missing +PyFrame.fastlocals_w. >> Local collections @@ -243,7 +244,9 @@ << at first the global area keeps growing unboundedly. The next step will be to add the LIL but run the global collection by keeping all -other threads blocked. >> +other threads blocked. NOW: think about, at least, doing "minor +collections" on the global area *before* we even start running +transactions. >> When not running transactively @@ -267,19 +270,10 @@ is called, we can try to do such a collection, but what about the pinned objects? -<< NOW: let this mode be rather slow. Two solutions are considered: - - 1. we would have only global objects, and have the stm_write barrier - of 'obj' return 'obj'. Do only global collections (once we have - them; at first, don't collect at all). Allocation would allocate - immediately a global object, without being able to benefit from - bump-pointer allocation. - - 2. allocate in a nursery, never collected for now; but just do an - end-of-transaction collection when transaction.run() is first - called. - ->> +<< NOW: the global area is just the "nursery" for the main thread. +stm_writebarrier of 'obj' return 'obj' in the main thread. All +allocations get us directly a global object, but allocated from +the "nursery" of the main thread, with bump-pointer allocation. >> Pointer equality @@ -296,8 +290,11 @@ dictionary if they map to each other. And we need to take care of the cases of NULL pointers. -<< NOW: straightforward, if we're careful not to forget cases >> - +<< NOW: done, without needing the local dictionary: +stm_normalize_global(obj) returns globalobj if obj is a local, +WAS_COPIED object. Then a pointer comparison 'x == y' becomes +stm_normalize_global(x) == stm_normalize_global(y). Moreover +the call to stm_normalize_global() can be omitted for constants. >> From noreply at buildbot.pypy.org Sat Feb 25 20:16:39 2012 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Sat, 25 Feb 2012 20:16:39 +0100 (CET) Subject: [pypy-commit] pypy default: Unroll a loop, which allows super() to go through method caches in the JIT. 
Message-ID: <20120225191639.29A1182366@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: Changeset: r52899:d7300e63b1f7 Date: 2012-02-25 20:15 +0100 http://bitbucket.org/pypy/pypy/changeset/d7300e63b1f7/ Log: Unroll a loop, which allows super() to go through method caches in the JIT. diff --git a/pypy/module/pypyjit/test_pypy_c/test_instance.py b/pypy/module/pypyjit/test_pypy_c/test_instance.py --- a/pypy/module/pypyjit/test_pypy_c/test_instance.py +++ b/pypy/module/pypyjit/test_pypy_c/test_instance.py @@ -201,3 +201,28 @@ loop, = log.loops_by_filename(self.filepath) assert loop.match_by_id("compare", "") # optimized away + def test_super(self): + def main(): + class A(object): + def m(self, x): + return x + 1 + class B(A): + def m(self, x): + return super(B, self).m(x) + i = 0 + while i < 300: + i = B().m(i) + return i + + log = self.run(main, []) + loop, = log.loops_by_filename(self.filepath) + assert loop.match(""" + i78 = int_lt(i72, 300) + guard_true(i78, descr=...) + guard_not_invalidated(descr=...) + i79 = force_token() + i80 = force_token() + i81 = int_add(i72, 1) + --TICK-- + jump(..., descr=...) + """) diff --git a/pypy/objspace/std/typeobject.py b/pypy/objspace/std/typeobject.py --- a/pypy/objspace/std/typeobject.py +++ b/pypy/objspace/std/typeobject.py @@ -345,9 +345,9 @@ return w_self._lookup_where(name) + @unroll_safe def lookup_starting_at(w_self, w_starttype, name): space = w_self.space - # XXX Optimize this with method cache look = False for w_class in w_self.mro_w: if w_class is w_starttype: From noreply at buildbot.pypy.org Sat Feb 25 21:15:31 2012 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Sat, 25 Feb 2012 21:15:31 +0100 (CET) Subject: [pypy-commit] pypy default: Mark type's mro_w as quassiimmut. Message-ID: <20120225201531.201AB82366@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: Changeset: r52900:706037d835be Date: 2012-02-25 21:14 +0100 http://bitbucket.org/pypy/pypy/changeset/706037d835be/ Log: Mark type's mro_w as quassiimmut. 
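A note on the hint used in the diff that follows: in RPython's '_immutable_fields_' declarations, a trailing '?' marks a field as quasi-immutable (the JIT compiles reads of it as constants and invalidates the machine code, via guard_not_invalidated, on the rare occasions the field is rebound), '[*]' promises that the items of the list stored in the field are never mutated, and '?[*]' combines both, which is what 'mro_w?[*]' below asks for. A minimal sketch of the notation with invented class and field names; the hint is an inert class attribute under plain Python and only takes effect when the code is translated by RPython::

    class ExampleCache(object):
        # 'version?'  : quasi-immutable scalar field; the JIT assumes it
        #               almost never changes and patches compiled code
        #               when it does.
        # 'values[*]' : the list stored here never has its items mutated
        #               after being set.
        # 'names?[*]' : both at once -- a rarely-rebound field holding an
        #               immutable list, the same shape as mro_w below.
        _immutable_fields_ = ['version?', 'values[*]', 'names?[*]']

        def __init__(self, version, values, names):
            self.version = version
            self.values = values
            self.names = names
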
diff --git a/pypy/objspace/std/typeobject.py b/pypy/objspace/std/typeobject.py --- a/pypy/objspace/std/typeobject.py +++ b/pypy/objspace/std/typeobject.py @@ -103,6 +103,7 @@ 'terminator', '_version_tag?', 'name?', + 'mro_w?[*]', ] # for config.objspace.std.getattributeshortcut From noreply at buildbot.pypy.org Sat Feb 25 21:38:22 2012 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Sat, 25 Feb 2012 21:38:22 +0100 (CET) Subject: [pypy-commit] pypy default: A failing test for quassiimmut arrays when used with an RPython forloop Message-ID: <20120225203822.7E07582366@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: Changeset: r52901:183a045d52d1 Date: 2012-02-25 15:38 -0500 http://bitbucket.org/pypy/pypy/changeset/183a045d52d1/ Log: A failing test for quassiimmut arrays when used with an RPython forloop diff --git a/pypy/jit/metainterp/test/test_quasiimmut.py b/pypy/jit/metainterp/test/test_quasiimmut.py --- a/pypy/jit/metainterp/test/test_quasiimmut.py +++ b/pypy/jit/metainterp/test/test_quasiimmut.py @@ -8,7 +8,7 @@ from pypy.jit.metainterp.quasiimmut import get_current_qmut_instance from pypy.jit.metainterp.test.support import LLJitMixin from pypy.jit.codewriter.policy import StopAtXPolicy -from pypy.rlib.jit import JitDriver, dont_look_inside +from pypy.rlib.jit import JitDriver, dont_look_inside, unroll_safe def test_get_current_qmut_instance(): @@ -480,6 +480,32 @@ assert res == 1 self.check_jitcell_token_count(2) + def test_for_loop_array(self): + myjitdriver = JitDriver(greens=[], reds=["n", "i"]) + class Foo(object): + _immutable_fields_ = ["x?[*]"] + def __init__(self, x): + self.x = x + f = Foo([1, 3, 5, 6]) + @unroll_safe + def g(v): + for x in f.x: + if x & 1 == 0: + v += 1 + return v + def main(n): + i = 0 + while i < n: + myjitdriver.jit_merge_point(n=n, i=i) + i = g(i) + return i + res = self.meta_interp(main, [10]) + assert res == 10 + self.check_resops({ + "int_add": 2, "int_lt": 2, "jump": 1, "guard_true": 2, + "guard_not_invalidated": 2 + }) + class TestLLtypeGreenFieldsTests(QuasiImmutTests, LLJitMixin): pass From noreply at buildbot.pypy.org Sat Feb 25 22:22:01 2012 From: noreply at buildbot.pypy.org (arigo) Date: Sat, 25 Feb 2012 22:22:01 +0100 (CET) Subject: [pypy-commit] pypy stm-gc: Tentative: re-enable root stack walking, just by using the ShadowStack Message-ID: <20120225212201.04E0482366@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-gc Changeset: r52902:ae34644cc94c Date: 2012-02-25 19:54 +0100 http://bitbucket.org/pypy/pypy/changeset/ae34644cc94c/ Log: Tentative: re-enable root stack walking, just by using the ShadowStack approach. 
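Background for the diff that follows: a shadow stack is a GC root-finding strategy in which, rather than scanning the native C stack, each function explicitly pushes its live GC references onto a separate stack around any operation that may trigger a collection, so the collector only needs to walk that structure. The toy sketch below is only meant to illustrate the shape of the push_roots/pop_roots/walk_stack_roots trio appearing in the diff; every name in it is invented and it is not PyPy's actual implementation::

    class ToyShadowStack(object):
        def __init__(self):
            self._frames = []

        def push_roots(self, objs):
            # called just before an operation that may collect
            self._frames.append(list(objs))

        def pop_roots(self):
            # called just after; reload the (possibly updated) roots
            return self._frames.pop()

        def walk_stack_roots(self, callback):
            # what the collector does: visit every currently saved root
            for frame_roots in self._frames:
                for obj in frame_roots:
                    callback(obj)
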
diff --git a/pypy/rpython/memory/gctransform/stmframework.py b/pypy/rpython/memory/gctransform/stmframework.py --- a/pypy/rpython/memory/gctransform/stmframework.py +++ b/pypy/rpython/memory/gctransform/stmframework.py @@ -1,5 +1,5 @@ from pypy.rpython.memory.gctransform.framework import FrameworkGCTransformer -from pypy.rpython.memory.gctransform.framework import BaseRootWalker +from pypy.rpython.memory.gctransform.shadowstack import ShadowStackRootWalker from pypy.rpython.lltypesystem import llmemory from pypy.annotation import model as annmodel @@ -28,12 +28,6 @@ self.gcdata.gc.commit_transaction.im_func, [s_gc], annmodel.s_None) - def push_roots(self, hop, keep_current_args=False): - pass - - def pop_roots(self, hop, livevars): - pass - def build_root_walker(self): return StmStackRootWalker(self) @@ -69,7 +63,5 @@ hop.genop("direct_call", [self.stm_commit_ptr, self.c_const_gc]) -class StmStackRootWalker(BaseRootWalker): - - def walk_stack_roots(self, collect_stack_root): - raise NotImplementedError +class StmStackRootWalker(ShadowStackRootWalker): + pass From noreply at buildbot.pypy.org Sat Feb 25 22:22:02 2012 From: noreply at buildbot.pypy.org (arigo) Date: Sat, 25 Feb 2012 22:22:02 +0100 (CET) Subject: [pypy-commit] pypy default: Fix for 183a045d52d1: iterate over non-mutated lists using a Message-ID: <20120225212202.3354B82366@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r52903:70a3702714bf Date: 2012-02-25 22:21 +0100 http://bitbucket.org/pypy/pypy/changeset/70a3702714bf/ Log: Fix for 183a045d52d1: iterate over non-mutated lists using a non- mutating getitem. diff --git a/pypy/rpython/lltypesystem/rlist.py b/pypy/rpython/lltypesystem/rlist.py --- a/pypy/rpython/lltypesystem/rlist.py +++ b/pypy/rpython/lltypesystem/rlist.py @@ -392,7 +392,11 @@ ('list', r_list.lowleveltype), ('index', Signed))) self.ll_listiter = ll_listiter - self.ll_listnext = ll_listnext + if (isinstance(r_list, FixedSizeListRepr) + and not r_list.listitem.mutated): + self.ll_listnext = ll_listnext_foldable + else: + self.ll_listnext = ll_listnext self.ll_getnextindex = ll_getnextindex def ll_listiter(ITERPTR, lst): @@ -409,5 +413,14 @@ iter.index = index + 1 # cannot overflow because index < l.length return l.ll_getitem_fast(index) +def ll_listnext_foldable(iter): + from pypy.rpython.rlist import ll_getitem_foldable_nonneg + l = iter.list + index = iter.index + if index >= l.ll_length(): + raise StopIteration + iter.index = index + 1 # cannot overflow because index < l.length + return ll_getitem_foldable_nonneg(l, index) + def ll_getnextindex(iter): return iter.index diff --git a/pypy/rpython/test/test_rlist.py b/pypy/rpython/test/test_rlist.py --- a/pypy/rpython/test/test_rlist.py +++ b/pypy/rpython/test/test_rlist.py @@ -8,6 +8,7 @@ from pypy.rpython.rlist import * from pypy.rpython.lltypesystem.rlist import ListRepr, FixedSizeListRepr, ll_newlist, ll_fixed_newlist from pypy.rpython.lltypesystem import rlist as ll_rlist +from pypy.rpython.llinterp import LLException from pypy.rpython.ootypesystem import rlist as oo_rlist from pypy.rpython.rint import signed_repr from pypy.objspace.flow.model import Constant, Variable @@ -1477,6 +1478,80 @@ assert func1.oopspec == 'list.getitem_foldable(l, index)' assert not hasattr(func2, 'oopspec') + def test_iterate_over_immutable_list(self): + from pypy.rpython import rlist + class MyException(Exception): + pass + lst = list('abcdef') + def dummyfn(): + total = 0 + for c in lst: + total += ord(c) + return total + # + prev = 
rlist.ll_getitem_foldable_nonneg + try: + def seen_ok(l, index): + if index == 5: + raise KeyError # expected case + return prev(l, index) + rlist.ll_getitem_foldable_nonneg = seen_ok + e = raises(LLException, self.interpret, dummyfn, []) + assert 'KeyError' in str(e.value) + finally: + rlist.ll_getitem_foldable_nonneg = prev + + def test_iterate_over_immutable_list_quasiimmut_attr(self): + from pypy.rpython import rlist + class MyException(Exception): + pass + class Foo: + _immutable_fields_ = ['lst?[*]'] + lst = list('abcdef') + foo = Foo() + def dummyfn(): + total = 0 + for c in foo.lst: + total += ord(c) + return total + # + prev = rlist.ll_getitem_foldable_nonneg + try: + def seen_ok(l, index): + if index == 5: + raise KeyError # expected case + return prev(l, index) + rlist.ll_getitem_foldable_nonneg = seen_ok + e = raises(LLException, self.interpret, dummyfn, []) + assert 'KeyError' in str(e.value) + finally: + rlist.ll_getitem_foldable_nonneg = prev + + def test_iterate_over_mutable_list(self): + from pypy.rpython import rlist + class MyException(Exception): + pass + lst = list('abcdef') + def dummyfn(): + total = 0 + for c in lst: + total += ord(c) + lst[0] = 'x' + return total + # + prev = rlist.ll_getitem_foldable_nonneg + try: + def seen_ok(l, index): + if index == 5: + raise KeyError # expected case + return prev(l, index) + rlist.ll_getitem_foldable_nonneg = seen_ok + res = self.interpret(dummyfn, []) + assert res == sum(map(ord, 'abcdef')) + finally: + rlist.ll_getitem_foldable_nonneg = prev + + class TestOOtype(BaseTestRlist, OORtypeMixin): rlist = oo_rlist type_system = 'ootype' From noreply at buildbot.pypy.org Sat Feb 25 22:25:09 2012 From: noreply at buildbot.pypy.org (mattip) Date: Sat, 25 Feb 2012 22:25:09 +0100 (CET) Subject: [pypy-commit] pypy numpypy-out: add more passing tests Message-ID: <20120225212509.EE0F682366@wyvern.cs.uni-duesseldorf.de> Author: mattip Branch: numpypy-out Changeset: r52904:0705b078c3c2 Date: 2012-02-25 23:15 +0200 http://bitbucket.org/pypy/pypy/changeset/0705b078c3c2/ Log: add more passing tests diff --git a/pypy/module/micronumpy/app_numpy.py b/pypy/module/micronumpy/app_numpy.py --- a/pypy/module/micronumpy/app_numpy.py +++ b/pypy/module/micronumpy/app_numpy.py @@ -16,7 +16,7 @@ a[i][i] = 1 return a -def sum(a,axis=None): +def sum(a,axis=None, out=None): '''sum(a, axis=None) Sum of array elements over a given axis. @@ -43,17 +43,17 @@ # TODO: add to doc (once it's implemented): cumsum : Cumulative sum of array elements. 
if not hasattr(a, "sum"): a = _numpypy.array(a) - return a.sum(axis) + return a.sum(axis=axis, out=out) -def min(a, axis=None): +def min(a, axis=None, out=None): if not hasattr(a, "min"): a = _numpypy.array(a) - return a.min(axis) + return a.min(axis=axis, out=out) -def max(a, axis=None): +def max(a, axis=None, out=None): if not hasattr(a, "max"): a = _numpypy.array(a) - return a.max(axis) + return a.max(axis=axis, out=out) def arange(start, stop=None, step=1, dtype=None): '''arange([start], stop[, step], dtype=None) diff --git a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py --- a/pypy/module/micronumpy/interp_ufuncs.py +++ b/pypy/module/micronumpy/interp_ufuncs.py @@ -321,10 +321,17 @@ else: res_dtype = calc_dtype if isinstance(w_lhs, Scalar) and isinstance(w_rhs, Scalar): - return space.wrap(self.func(calc_dtype, + arr = self.func(calc_dtype, w_lhs.value.convert_to(calc_dtype), w_rhs.value.convert_to(calc_dtype) - )) + ) + if isinstance(out,Scalar): + out.value=arr + elif isinstance(out, BaseArray): + out.fill(space, arr) + else: + out = arr + return space.wrap(out) new_shape = shape_agreement(space, w_lhs.shape, w_rhs.shape) # Test correctness of out.shape if out and out.shape != shape_agreement(space, new_shape, out.shape): diff --git a/pypy/module/micronumpy/test/test_outarg.py b/pypy/module/micronumpy/test/test_outarg.py --- a/pypy/module/micronumpy/test/test_outarg.py +++ b/pypy/module/micronumpy/test/test_outarg.py @@ -70,21 +70,44 @@ assert c.dtype is a.dtype c[0,0] = 100 assert out[0, 0] == 100 + out[:] = 100 raises(ValueError, 'c = add(a, a, out=out[1])') c = add(a[0], a[1], out=out[1]) assert (c == out[1]).all() assert (c == [4, 6]).all() + assert (out[0] == 100).all() c = add(a[0], a[1], out=out) assert (c == out[1]).all() assert (c == out[0]).all() + out = array(16, dtype=int) + b = add(10, 10, out=out) + assert b==out + assert b.dtype == out.dtype - + def test_applevel(self): + from _numpypy import array, sum, max, min + a = array([[1, 2], [3, 4]]) + out = array([[0, 0], [0, 0]]) + c = sum(a, axis=0, out=out[0]) + assert (c == [4, 6]).all() + assert (c == out[0]).all() + assert (c != out[1]).all() + c = max(a, axis=1, out=out[0]) + assert (c == [2, 4]).all() + assert (c == out[0]).all() + assert (c != out[1]).all() + def test_ufunc_cast(self): - from _numpypy import array, negative + from _numpypy import array, negative, add, sum a = array(16, dtype = int) c = array(0, dtype = float) b = negative(a, out=c) assert b == c + b = add(a, a, out=c) + assert b == c + d = array([16, 16], dtype=int) + b = sum(d, out=c) + assert b == c try: from _numpypy import version v = version.version.split('.') @@ -93,6 +116,10 @@ if v[0]<'2': b = negative(c, out=a) assert b == a + b = add(c, c, out=a) + assert b == a + b = sum(array([16, 16], dtype=float), out=a) + assert b == a else: cast_error = raises(TypeError, negative, c, a) assert str(cast_error.value) == \ From noreply at buildbot.pypy.org Sat Feb 25 22:25:16 2012 From: noreply at buildbot.pypy.org (mattip) Date: Sat, 25 Feb 2012 22:25:16 +0100 (CET) Subject: [pypy-commit] pypy numpypy-out: merge default in Message-ID: <20120225212516.0E33982366@wyvern.cs.uni-duesseldorf.de> Author: mattip Branch: numpypy-out Changeset: r52905:6e5f01f9a1b9 Date: 2012-02-25 23:19 +0200 http://bitbucket.org/pypy/pypy/changeset/6e5f01f9a1b9/ Log: merge default in diff --git a/lib_pypy/_ctypes/array.py b/lib_pypy/_ctypes/array.py --- a/lib_pypy/_ctypes/array.py +++ b/lib_pypy/_ctypes/array.py @@ -1,9 +1,9 @@ - +import _ffi 
import _rawffi from _ctypes.basics import _CData, cdata_from_address, _CDataMeta, sizeof from _ctypes.basics import keepalive_key, store_reference, ensure_objects -from _ctypes.basics import CArgObject +from _ctypes.basics import CArgObject, as_ffi_pointer class ArrayMeta(_CDataMeta): def __new__(self, name, cls, typedict): @@ -211,6 +211,9 @@ def _to_ffi_param(self): return self._get_buffer_value() + def _as_ffi_pointer_(self, ffitype): + return as_ffi_pointer(self, ffitype) + ARRAY_CACHE = {} def create_array_type(base, length): @@ -228,5 +231,6 @@ _type_ = base ) cls = ArrayMeta(name, (Array,), tpdict) + cls._ffiargtype = _ffi.types.Pointer(base.get_ffi_argtype()) ARRAY_CACHE[key] = cls return cls diff --git a/lib_pypy/_ctypes/basics.py b/lib_pypy/_ctypes/basics.py --- a/lib_pypy/_ctypes/basics.py +++ b/lib_pypy/_ctypes/basics.py @@ -230,5 +230,16 @@ } +# called from primitive.py, pointer.py, array.py +def as_ffi_pointer(value, ffitype): + my_ffitype = type(value).get_ffi_argtype() + # for now, we always allow types.pointer, else a lot of tests + # break. We need to rethink how pointers are represented, though + if my_ffitype is not ffitype and ffitype is not _ffi.types.void_p: + raise ArgumentError("expected %s instance, got %s" % (type(value), + ffitype)) + return value._get_buffer_value() + + # used by "byref" from _ctypes.pointer import pointer diff --git a/lib_pypy/_ctypes/pointer.py b/lib_pypy/_ctypes/pointer.py --- a/lib_pypy/_ctypes/pointer.py +++ b/lib_pypy/_ctypes/pointer.py @@ -3,7 +3,7 @@ import _ffi from _ctypes.basics import _CData, _CDataMeta, cdata_from_address, ArgumentError from _ctypes.basics import keepalive_key, store_reference, ensure_objects -from _ctypes.basics import sizeof, byref +from _ctypes.basics import sizeof, byref, as_ffi_pointer from _ctypes.array import Array, array_get_slice_params, array_slice_getitem,\ array_slice_setitem @@ -119,14 +119,6 @@ def _as_ffi_pointer_(self, ffitype): return as_ffi_pointer(self, ffitype) -def as_ffi_pointer(value, ffitype): - my_ffitype = type(value).get_ffi_argtype() - # for now, we always allow types.pointer, else a lot of tests - # break. We need to rethink how pointers are represented, though - if my_ffitype is not ffitype and ffitype is not _ffi.types.void_p: - raise ArgumentError("expected %s instance, got %s" % (type(value), - ffitype)) - return value._get_buffer_value() def _cast_addr(obj, _, tp): if not (isinstance(tp, _CDataMeta) and tp._is_pointer_like()): diff --git a/lib_pypy/ctypes_config_cache/pyexpat.ctc.py b/lib_pypy/ctypes_config_cache/pyexpat.ctc.py deleted file mode 100644 --- a/lib_pypy/ctypes_config_cache/pyexpat.ctc.py +++ /dev/null @@ -1,45 +0,0 @@ -""" -'ctypes_configure' source for pyexpat.py. -Run this to rebuild _pyexpat_cache.py. 
-""" - -import ctypes -from ctypes import c_char_p, c_int, c_void_p, c_char -from ctypes_configure import configure -import dumpcache - - -class CConfigure: - _compilation_info_ = configure.ExternalCompilationInfo( - includes = ['expat.h'], - libraries = ['expat'], - pre_include_lines = [ - '#define XML_COMBINED_VERSION (10000*XML_MAJOR_VERSION+100*XML_MINOR_VERSION+XML_MICRO_VERSION)'], - ) - - XML_Char = configure.SimpleType('XML_Char', c_char) - XML_COMBINED_VERSION = configure.ConstantInteger('XML_COMBINED_VERSION') - for name in ['XML_PARAM_ENTITY_PARSING_NEVER', - 'XML_PARAM_ENTITY_PARSING_UNLESS_STANDALONE', - 'XML_PARAM_ENTITY_PARSING_ALWAYS']: - locals()[name] = configure.ConstantInteger(name) - - XML_Encoding = configure.Struct('XML_Encoding',[ - ('data', c_void_p), - ('convert', c_void_p), - ('release', c_void_p), - ('map', c_int * 256)]) - XML_Content = configure.Struct('XML_Content',[ - ('numchildren', c_int), - ('children', c_void_p), - ('name', c_char_p), - ('type', c_int), - ('quant', c_int), - ]) - # this is insanely stupid - XML_FALSE = configure.ConstantInteger('XML_FALSE') - XML_TRUE = configure.ConstantInteger('XML_TRUE') - -config = configure.configure(CConfigure) - -dumpcache.dumpcache2('pyexpat', config) diff --git a/lib_pypy/ctypes_config_cache/test/test_cache.py b/lib_pypy/ctypes_config_cache/test/test_cache.py --- a/lib_pypy/ctypes_config_cache/test/test_cache.py +++ b/lib_pypy/ctypes_config_cache/test/test_cache.py @@ -39,10 +39,6 @@ d = run('resource.ctc.py', '_resource_cache.py') assert 'RLIM_NLIMITS' in d -def test_pyexpat(): - d = run('pyexpat.ctc.py', '_pyexpat_cache.py') - assert 'XML_COMBINED_VERSION' in d - def test_locale(): d = run('locale.ctc.py', '_locale_cache.py') assert 'LC_ALL' in d diff --git a/lib_pypy/datetime.py b/lib_pypy/datetime.py --- a/lib_pypy/datetime.py +++ b/lib_pypy/datetime.py @@ -271,8 +271,9 @@ raise ValueError("%s()=%d, must be in -1439..1439" % (name, offset)) def _check_date_fields(year, month, day): - if not isinstance(year, (int, long)): - raise TypeError('int expected') + for value in [year, day]: + if not isinstance(value, (int, long)): + raise TypeError('int expected') if not MINYEAR <= year <= MAXYEAR: raise ValueError('year must be in %d..%d' % (MINYEAR, MAXYEAR), year) if not 1 <= month <= 12: @@ -282,8 +283,9 @@ raise ValueError('day must be in 1..%d' % dim, day) def _check_time_fields(hour, minute, second, microsecond): - if not isinstance(hour, (int, long)): - raise TypeError('int expected') + for value in [hour, minute, second, microsecond]: + if not isinstance(value, (int, long)): + raise TypeError('int expected') if not 0 <= hour <= 23: raise ValueError('hour must be in 0..23', hour) if not 0 <= minute <= 59: diff --git a/lib_pypy/pyexpat.py b/lib_pypy/pyexpat.py deleted file mode 100644 --- a/lib_pypy/pyexpat.py +++ /dev/null @@ -1,448 +0,0 @@ - -import ctypes -import ctypes.util -from ctypes import c_char_p, c_int, c_void_p, POINTER, c_char, c_wchar_p -import sys - -# load the platform-specific cache made by running pyexpat.ctc.py -from ctypes_config_cache._pyexpat_cache import * - -try: from __pypy__ import builtinify -except ImportError: builtinify = lambda f: f - - -lib = ctypes.CDLL(ctypes.util.find_library('expat')) - - -XML_Content.children = POINTER(XML_Content) -XML_Parser = ctypes.c_void_p # an opaque pointer -assert XML_Char is ctypes.c_char # this assumption is everywhere in -# cpython's expat, let's explode - -def declare_external(name, args, res): - func = getattr(lib, name) - func.args = args - 
func.restype = res - globals()[name] = func - -declare_external('XML_ParserCreate', [c_char_p], XML_Parser) -declare_external('XML_ParserCreateNS', [c_char_p, c_char], XML_Parser) -declare_external('XML_Parse', [XML_Parser, c_char_p, c_int, c_int], c_int) -currents = ['CurrentLineNumber', 'CurrentColumnNumber', - 'CurrentByteIndex'] -for name in currents: - func = getattr(lib, 'XML_Get' + name) - func.args = [XML_Parser] - func.restype = c_int - -declare_external('XML_SetReturnNSTriplet', [XML_Parser, c_int], None) -declare_external('XML_GetSpecifiedAttributeCount', [XML_Parser], c_int) -declare_external('XML_SetParamEntityParsing', [XML_Parser, c_int], None) -declare_external('XML_GetErrorCode', [XML_Parser], c_int) -declare_external('XML_StopParser', [XML_Parser, c_int], None) -declare_external('XML_ErrorString', [c_int], c_char_p) -declare_external('XML_SetBase', [XML_Parser, c_char_p], None) -if XML_COMBINED_VERSION >= 19505: - declare_external('XML_UseForeignDTD', [XML_Parser, c_int], None) - -declare_external('XML_SetUnknownEncodingHandler', [XML_Parser, c_void_p, - c_void_p], None) -declare_external('XML_FreeContentModel', [XML_Parser, POINTER(XML_Content)], - None) -declare_external('XML_ExternalEntityParserCreate', [XML_Parser,c_char_p, - c_char_p], - XML_Parser) - -handler_names = [ - 'StartElement', - 'EndElement', - 'ProcessingInstruction', - 'CharacterData', - 'UnparsedEntityDecl', - 'NotationDecl', - 'StartNamespaceDecl', - 'EndNamespaceDecl', - 'Comment', - 'StartCdataSection', - 'EndCdataSection', - 'Default', - 'DefaultHandlerExpand', - 'NotStandalone', - 'ExternalEntityRef', - 'StartDoctypeDecl', - 'EndDoctypeDecl', - 'EntityDecl', - 'XmlDecl', - 'ElementDecl', - 'AttlistDecl', - ] -if XML_COMBINED_VERSION >= 19504: - handler_names.append('SkippedEntity') -setters = {} - -for name in handler_names: - if name == 'DefaultHandlerExpand': - newname = 'XML_SetDefaultHandlerExpand' - else: - name += 'Handler' - newname = 'XML_Set' + name - cfunc = getattr(lib, newname) - cfunc.args = [XML_Parser, ctypes.c_void_p] - cfunc.result = ctypes.c_int - setters[name] = cfunc - -class ExpatError(Exception): - def __str__(self): - return self.s - -error = ExpatError - -class XMLParserType(object): - specified_attributes = 0 - ordered_attributes = 0 - returns_unicode = 1 - encoding = 'utf-8' - def __init__(self, encoding, namespace_separator, _hook_external_entity=False): - self.returns_unicode = 1 - if encoding: - self.encoding = encoding - if not _hook_external_entity: - if namespace_separator is None: - self.itself = XML_ParserCreate(encoding) - else: - self.itself = XML_ParserCreateNS(encoding, ord(namespace_separator)) - if not self.itself: - raise RuntimeError("Creating parser failed") - self._set_unknown_encoding_handler() - self.storage = {} - self.buffer = None - self.buffer_size = 8192 - self.character_data_handler = None - self.intern = {} - self.__exc_info = None - - def _flush_character_buffer(self): - if not self.buffer: - return - res = self._call_character_handler(''.join(self.buffer)) - self.buffer = [] - return res - - def _call_character_handler(self, buf): - if self.character_data_handler: - self.character_data_handler(buf) - - def _set_unknown_encoding_handler(self): - def UnknownEncoding(encodingData, name, info_p): - info = info_p.contents - s = ''.join([chr(i) for i in range(256)]) - u = s.decode(self.encoding, 'replace') - for i in range(len(u)): - if u[i] == u'\xfffd': - info.map[i] = -1 - else: - info.map[i] = ord(u[i]) - info.data = None - info.convert = None - 
info.release = None - return 1 - - CB = ctypes.CFUNCTYPE(c_int, c_void_p, c_char_p, POINTER(XML_Encoding)) - cb = CB(UnknownEncoding) - self._unknown_encoding_handler = (cb, UnknownEncoding) - XML_SetUnknownEncodingHandler(self.itself, cb, None) - - def _set_error(self, code): - e = ExpatError() - e.code = code - lineno = lib.XML_GetCurrentLineNumber(self.itself) - colno = lib.XML_GetCurrentColumnNumber(self.itself) - e.offset = colno - e.lineno = lineno - err = XML_ErrorString(code)[:200] - e.s = "%s: line: %d, column: %d" % (err, lineno, colno) - e.message = e.s - self._error = e - - def Parse(self, data, is_final=0): - res = XML_Parse(self.itself, data, len(data), is_final) - if res == 0: - self._set_error(XML_GetErrorCode(self.itself)) - if self.__exc_info: - exc_info = self.__exc_info - self.__exc_info = None - raise exc_info[0], exc_info[1], exc_info[2] - else: - raise self._error - self._flush_character_buffer() - return res - - def _sethandler(self, name, real_cb): - setter = setters[name] - try: - cb = self.storage[(name, real_cb)] - except KeyError: - cb = getattr(self, 'get_cb_for_%s' % name)(real_cb) - self.storage[(name, real_cb)] = cb - except TypeError: - # weellll... - cb = getattr(self, 'get_cb_for_%s' % name)(real_cb) - setter(self.itself, cb) - - def _wrap_cb(self, cb): - def f(*args): - try: - return cb(*args) - except: - self.__exc_info = sys.exc_info() - XML_StopParser(self.itself, XML_FALSE) - return f - - def get_cb_for_StartElementHandler(self, real_cb): - def StartElement(unused, name, attrs): - # unpack name and attrs - conv = self.conv - self._flush_character_buffer() - if self.specified_attributes: - max = XML_GetSpecifiedAttributeCount(self.itself) - else: - max = 0 - while attrs[max]: - max += 2 # copied - if self.ordered_attributes: - res = [attrs[i] for i in range(max)] - else: - res = {} - for i in range(0, max, 2): - res[conv(attrs[i])] = conv(attrs[i + 1]) - real_cb(conv(name), res) - StartElement = self._wrap_cb(StartElement) - CB = ctypes.CFUNCTYPE(None, c_void_p, c_char_p, POINTER(c_char_p)) - return CB(StartElement) - - def get_cb_for_ExternalEntityRefHandler(self, real_cb): - def ExternalEntity(unused, context, base, sysId, pubId): - self._flush_character_buffer() - conv = self.conv - res = real_cb(conv(context), conv(base), conv(sysId), - conv(pubId)) - if res is None: - return 0 - return res - ExternalEntity = self._wrap_cb(ExternalEntity) - CB = ctypes.CFUNCTYPE(c_int, c_void_p, *([c_char_p] * 4)) - return CB(ExternalEntity) - - def get_cb_for_CharacterDataHandler(self, real_cb): - def CharacterData(unused, s, lgt): - if self.buffer is None: - self._call_character_handler(self.conv(s[:lgt])) - else: - if len(self.buffer) + lgt > self.buffer_size: - self._flush_character_buffer() - if self.character_data_handler is None: - return - if lgt >= self.buffer_size: - self._call_character_handler(s[:lgt]) - self.buffer = [] - else: - self.buffer.append(s[:lgt]) - CharacterData = self._wrap_cb(CharacterData) - CB = ctypes.CFUNCTYPE(None, c_void_p, POINTER(c_char), c_int) - return CB(CharacterData) - - def get_cb_for_NotStandaloneHandler(self, real_cb): - def NotStandaloneHandler(unused): - return real_cb() - NotStandaloneHandler = self._wrap_cb(NotStandaloneHandler) - CB = ctypes.CFUNCTYPE(c_int, c_void_p) - return CB(NotStandaloneHandler) - - def get_cb_for_EntityDeclHandler(self, real_cb): - def EntityDecl(unused, ename, is_param, value, value_len, base, - system_id, pub_id, not_name): - self._flush_character_buffer() - if not value: - value = None - 
else: - value = value[:value_len] - args = [ename, is_param, value, base, system_id, - pub_id, not_name] - args = [self.conv(arg) for arg in args] - real_cb(*args) - EntityDecl = self._wrap_cb(EntityDecl) - CB = ctypes.CFUNCTYPE(None, c_void_p, c_char_p, c_int, c_char_p, - c_int, c_char_p, c_char_p, c_char_p, c_char_p) - return CB(EntityDecl) - - def _conv_content_model(self, model): - children = tuple([self._conv_content_model(model.children[i]) - for i in range(model.numchildren)]) - return (model.type, model.quant, self.conv(model.name), - children) - - def get_cb_for_ElementDeclHandler(self, real_cb): - def ElementDecl(unused, name, model): - self._flush_character_buffer() - modelobj = self._conv_content_model(model[0]) - real_cb(name, modelobj) - XML_FreeContentModel(self.itself, model) - - ElementDecl = self._wrap_cb(ElementDecl) - CB = ctypes.CFUNCTYPE(None, c_void_p, c_char_p, POINTER(XML_Content)) - return CB(ElementDecl) - - def _new_callback_for_string_len(name, sign): - def get_callback_for_(self, real_cb): - def func(unused, s, len): - self._flush_character_buffer() - arg = self.conv(s[:len]) - real_cb(arg) - func.func_name = name - func = self._wrap_cb(func) - CB = ctypes.CFUNCTYPE(*sign) - return CB(func) - get_callback_for_.func_name = 'get_cb_for_' + name - return get_callback_for_ - - for name in ['DefaultHandlerExpand', - 'DefaultHandler']: - sign = [None, c_void_p, POINTER(c_char), c_int] - name = 'get_cb_for_' + name - locals()[name] = _new_callback_for_string_len(name, sign) - - def _new_callback_for_starargs(name, sign): - def get_callback_for_(self, real_cb): - def func(unused, *args): - self._flush_character_buffer() - args = [self.conv(arg) for arg in args] - real_cb(*args) - func.func_name = name - func = self._wrap_cb(func) - CB = ctypes.CFUNCTYPE(*sign) - return CB(func) - get_callback_for_.func_name = 'get_cb_for_' + name - return get_callback_for_ - - for name, num_or_sign in [ - ('EndElementHandler', 1), - ('ProcessingInstructionHandler', 2), - ('UnparsedEntityDeclHandler', 5), - ('NotationDeclHandler', 4), - ('StartNamespaceDeclHandler', 2), - ('EndNamespaceDeclHandler', 1), - ('CommentHandler', 1), - ('StartCdataSectionHandler', 0), - ('EndCdataSectionHandler', 0), - ('StartDoctypeDeclHandler', [None, c_void_p] + [c_char_p] * 3 + [c_int]), - ('XmlDeclHandler', [None, c_void_p, c_char_p, c_char_p, c_int]), - ('AttlistDeclHandler', [None, c_void_p] + [c_char_p] * 4 + [c_int]), - ('EndDoctypeDeclHandler', 0), - ('SkippedEntityHandler', [None, c_void_p, c_char_p, c_int]), - ]: - if isinstance(num_or_sign, int): - sign = [None, c_void_p] + [c_char_p] * num_or_sign - else: - sign = num_or_sign - name = 'get_cb_for_' + name - locals()[name] = _new_callback_for_starargs(name, sign) - - def conv_unicode(self, s): - if s is None or isinstance(s, int): - return s - return s.decode(self.encoding, "strict") - - def __setattr__(self, name, value): - # forest of ifs... 
- if name in ['ordered_attributes', - 'returns_unicode', 'specified_attributes']: - if value: - if name == 'returns_unicode': - self.conv = self.conv_unicode - self.__dict__[name] = 1 - else: - if name == 'returns_unicode': - self.conv = lambda s: s - self.__dict__[name] = 0 - elif name == 'buffer_text': - if value: - self.buffer = [] - else: - self._flush_character_buffer() - self.buffer = None - elif name == 'buffer_size': - if not isinstance(value, int): - raise TypeError("Expected int") - if value <= 0: - raise ValueError("Expected positive int") - self.__dict__[name] = value - elif name == 'namespace_prefixes': - XML_SetReturnNSTriplet(self.itself, int(bool(value))) - elif name in setters: - if name == 'CharacterDataHandler': - # XXX we need to flush buffer here - self._flush_character_buffer() - self.character_data_handler = value - #print name - #print value - #print - self._sethandler(name, value) - else: - self.__dict__[name] = value - - def SetParamEntityParsing(self, arg): - XML_SetParamEntityParsing(self.itself, arg) - - if XML_COMBINED_VERSION >= 19505: - def UseForeignDTD(self, arg=True): - if arg: - flag = XML_TRUE - else: - flag = XML_FALSE - XML_UseForeignDTD(self.itself, flag) - - def __getattr__(self, name): - if name == 'buffer_text': - return self.buffer is not None - elif name in currents: - return getattr(lib, 'XML_Get' + name)(self.itself) - elif name == 'ErrorColumnNumber': - return lib.XML_GetCurrentColumnNumber(self.itself) - elif name == 'ErrorLineNumber': - return lib.XML_GetCurrentLineNumber(self.itself) - return self.__dict__[name] - - def ParseFile(self, file): - return self.Parse(file.read(), False) - - def SetBase(self, base): - XML_SetBase(self.itself, base) - - def ExternalEntityParserCreate(self, context, encoding=None): - """ExternalEntityParserCreate(context[, encoding]) - Create a parser for parsing an external entity based on the - information passed to the ExternalEntityRefHandler.""" - new_parser = XMLParserType(encoding, None, True) - new_parser.itself = XML_ExternalEntityParserCreate(self.itself, - context, encoding) - new_parser._set_unknown_encoding_handler() - return new_parser - - at builtinify -def ErrorString(errno): - return XML_ErrorString(errno)[:200] - - at builtinify -def ParserCreate(encoding=None, namespace_separator=None, intern=None): - if (not isinstance(encoding, str) and - not encoding is None): - raise TypeError("ParserCreate() argument 1 must be string or None, not %s" % encoding.__class__.__name__) - if (not isinstance(namespace_separator, str) and - not namespace_separator is None): - raise TypeError("ParserCreate() argument 2 must be string or None, not %s" % namespace_separator.__class__.__name__) - if namespace_separator is not None: - if len(namespace_separator) > 1: - raise ValueError('namespace_separator must be at most one character, omitted, or None') - if len(namespace_separator) == 0: - namespace_separator = None - return XMLParserType(encoding, namespace_separator) diff --git a/lib_pypy/pypy_test/test_pyexpat.py b/lib_pypy/pypy_test/test_pyexpat.py deleted file mode 100644 --- a/lib_pypy/pypy_test/test_pyexpat.py +++ /dev/null @@ -1,665 +0,0 @@ -# XXX TypeErrors on calling handlers, or on bad return values from a -# handler, are obscure and unhelpful. 
- -from __future__ import absolute_import -import StringIO, sys -import unittest, py - -from lib_pypy.ctypes_config_cache import rebuild -rebuild.rebuild_one('pyexpat.ctc.py') - -from lib_pypy import pyexpat -#from xml.parsers import expat -expat = pyexpat - -from test.test_support import sortdict, run_unittest - - -class TestSetAttribute: - def setup_method(self, meth): - self.parser = expat.ParserCreate(namespace_separator='!') - self.set_get_pairs = [ - [0, 0], - [1, 1], - [2, 1], - [0, 0], - ] - - def test_returns_unicode(self): - for x, y in self.set_get_pairs: - self.parser.returns_unicode = x - assert self.parser.returns_unicode == y - - def test_ordered_attributes(self): - for x, y in self.set_get_pairs: - self.parser.ordered_attributes = x - assert self.parser.ordered_attributes == y - - def test_specified_attributes(self): - for x, y in self.set_get_pairs: - self.parser.specified_attributes = x - assert self.parser.specified_attributes == y - - -data = '''\ - - - - - - - - - -%unparsed_entity; -]> - - - - Contents of subelements - - -&external_entity; -&skipped_entity; - -''' - - -# Produce UTF-8 output -class TestParse: - class Outputter: - def __init__(self): - self.out = [] - - def StartElementHandler(self, name, attrs): - self.out.append('Start element: ' + repr(name) + ' ' + - sortdict(attrs)) - - def EndElementHandler(self, name): - self.out.append('End element: ' + repr(name)) - - def CharacterDataHandler(self, data): - data = data.strip() - if data: - self.out.append('Character data: ' + repr(data)) - - def ProcessingInstructionHandler(self, target, data): - self.out.append('PI: ' + repr(target) + ' ' + repr(data)) - - def StartNamespaceDeclHandler(self, prefix, uri): - self.out.append('NS decl: ' + repr(prefix) + ' ' + repr(uri)) - - def EndNamespaceDeclHandler(self, prefix): - self.out.append('End of NS decl: ' + repr(prefix)) - - def StartCdataSectionHandler(self): - self.out.append('Start of CDATA section') - - def EndCdataSectionHandler(self): - self.out.append('End of CDATA section') - - def CommentHandler(self, text): - self.out.append('Comment: ' + repr(text)) - - def NotationDeclHandler(self, *args): - name, base, sysid, pubid = args - self.out.append('Notation declared: %s' %(args,)) - - def UnparsedEntityDeclHandler(self, *args): - entityName, base, systemId, publicId, notationName = args - self.out.append('Unparsed entity decl: %s' %(args,)) - - def NotStandaloneHandler(self): - self.out.append('Not standalone') - return 1 - - def ExternalEntityRefHandler(self, *args): - context, base, sysId, pubId = args - self.out.append('External entity ref: %s' %(args[1:],)) - return 1 - - def StartDoctypeDeclHandler(self, *args): - self.out.append(('Start doctype', args)) - return 1 - - def EndDoctypeDeclHandler(self): - self.out.append("End doctype") - return 1 - - def EntityDeclHandler(self, *args): - self.out.append(('Entity declaration', args)) - return 1 - - def XmlDeclHandler(self, *args): - self.out.append(('XML declaration', args)) - return 1 - - def ElementDeclHandler(self, *args): - self.out.append(('Element declaration', args)) - return 1 - - def AttlistDeclHandler(self, *args): - self.out.append(('Attribute list declaration', args)) - return 1 - - def SkippedEntityHandler(self, *args): - self.out.append(("Skipped entity", args)) - return 1 - - def DefaultHandler(self, userData): - pass - - def DefaultHandlerExpand(self, userData): - pass - - handler_names = [ - 'StartElementHandler', 'EndElementHandler', 'CharacterDataHandler', - 
'ProcessingInstructionHandler', 'UnparsedEntityDeclHandler', - 'NotationDeclHandler', 'StartNamespaceDeclHandler', - 'EndNamespaceDeclHandler', 'CommentHandler', - 'StartCdataSectionHandler', 'EndCdataSectionHandler', 'DefaultHandler', - 'DefaultHandlerExpand', 'NotStandaloneHandler', - 'ExternalEntityRefHandler', 'StartDoctypeDeclHandler', - 'EndDoctypeDeclHandler', 'EntityDeclHandler', 'XmlDeclHandler', - 'ElementDeclHandler', 'AttlistDeclHandler', 'SkippedEntityHandler', - ] - - def test_utf8(self): - - out = self.Outputter() - parser = expat.ParserCreate(namespace_separator='!') - for name in self.handler_names: - setattr(parser, name, getattr(out, name)) - parser.returns_unicode = 0 - parser.Parse(data, 1) - - # Verify output - operations = out.out - expected_operations = [ - ('XML declaration', (u'1.0', u'iso-8859-1', 0)), - 'PI: \'xml-stylesheet\' \'href="stylesheet.css"\'', - "Comment: ' comment data '", - "Not standalone", - ("Start doctype", ('quotations', 'quotations.dtd', None, 1)), - ('Element declaration', (u'root', (2, 0, None, ()))), - ('Attribute list declaration', ('root', 'attr1', 'CDATA', None, - 1)), - ('Attribute list declaration', ('root', 'attr2', 'CDATA', None, - 0)), - "Notation declared: ('notation', None, 'notation.jpeg', None)", - ('Entity declaration', ('acirc', 0, '\xc3\xa2', None, None, None, None)), - ('Entity declaration', ('external_entity', 0, None, None, - 'entity.file', None, None)), - "Unparsed entity decl: ('unparsed_entity', None, 'entity.file', None, 'notation')", - "Not standalone", - "End doctype", - "Start element: 'root' {'attr1': 'value1', 'attr2': 'value2\\xe1\\xbd\\x80'}", - "NS decl: 'myns' 'http://www.python.org/namespace'", - "Start element: 'http://www.python.org/namespace!subelement' {}", - "Character data: 'Contents of subelements'", - "End element: 'http://www.python.org/namespace!subelement'", - "End of NS decl: 'myns'", - "Start element: 'sub2' {}", - 'Start of CDATA section', - "Character data: 'contents of CDATA section'", - 'End of CDATA section', - "End element: 'sub2'", - "External entity ref: (None, 'entity.file', None)", - ('Skipped entity', ('skipped_entity', 0)), - "End element: 'root'", - ] - for operation, expected_operation in zip(operations, expected_operations): - assert operation == expected_operation - - def test_unicode(self): - # Try the parse again, this time producing Unicode output - out = self.Outputter() - parser = expat.ParserCreate(namespace_separator='!') - parser.returns_unicode = 1 - for name in self.handler_names: - setattr(parser, name, getattr(out, name)) - - parser.Parse(data, 1) - - operations = out.out - expected_operations = [ - ('XML declaration', (u'1.0', u'iso-8859-1', 0)), - 'PI: u\'xml-stylesheet\' u\'href="stylesheet.css"\'', - "Comment: u' comment data '", - "Not standalone", - ("Start doctype", ('quotations', 'quotations.dtd', None, 1)), - ('Element declaration', (u'root', (2, 0, None, ()))), - ('Attribute list declaration', ('root', 'attr1', 'CDATA', None, - 1)), - ('Attribute list declaration', ('root', 'attr2', 'CDATA', None, - 0)), - "Notation declared: (u'notation', None, u'notation.jpeg', None)", - ('Entity declaration', (u'acirc', 0, u'\xe2', None, None, None, - None)), - ('Entity declaration', (u'external_entity', 0, None, None, - u'entity.file', None, None)), - "Unparsed entity decl: (u'unparsed_entity', None, u'entity.file', None, u'notation')", - "Not standalone", - "End doctype", - "Start element: u'root' {u'attr1': u'value1', u'attr2': u'value2\\u1f40'}", - "NS decl: u'myns' 
u'http://www.python.org/namespace'", - "Start element: u'http://www.python.org/namespace!subelement' {}", - "Character data: u'Contents of subelements'", - "End element: u'http://www.python.org/namespace!subelement'", - "End of NS decl: u'myns'", - "Start element: u'sub2' {}", - 'Start of CDATA section', - "Character data: u'contents of CDATA section'", - 'End of CDATA section', - "End element: u'sub2'", - "External entity ref: (None, u'entity.file', None)", - ('Skipped entity', ('skipped_entity', 0)), - "End element: u'root'", - ] - for operation, expected_operation in zip(operations, expected_operations): - assert operation == expected_operation - - def test_parse_file(self): - # Try parsing a file - out = self.Outputter() - parser = expat.ParserCreate(namespace_separator='!') - parser.returns_unicode = 1 - for name in self.handler_names: - setattr(parser, name, getattr(out, name)) - file = StringIO.StringIO(data) - - parser.ParseFile(file) - - operations = out.out - expected_operations = [ - ('XML declaration', (u'1.0', u'iso-8859-1', 0)), - 'PI: u\'xml-stylesheet\' u\'href="stylesheet.css"\'', - "Comment: u' comment data '", - "Not standalone", - ("Start doctype", ('quotations', 'quotations.dtd', None, 1)), - ('Element declaration', (u'root', (2, 0, None, ()))), - ('Attribute list declaration', ('root', 'attr1', 'CDATA', None, - 1)), - ('Attribute list declaration', ('root', 'attr2', 'CDATA', None, - 0)), - "Notation declared: (u'notation', None, u'notation.jpeg', None)", - ('Entity declaration', ('acirc', 0, u'\xe2', None, None, None, None)), - ('Entity declaration', (u'external_entity', 0, None, None, u'entity.file', None, None)), - "Unparsed entity decl: (u'unparsed_entity', None, u'entity.file', None, u'notation')", - "Not standalone", - "End doctype", - "Start element: u'root' {u'attr1': u'value1', u'attr2': u'value2\\u1f40'}", - "NS decl: u'myns' u'http://www.python.org/namespace'", - "Start element: u'http://www.python.org/namespace!subelement' {}", - "Character data: u'Contents of subelements'", - "End element: u'http://www.python.org/namespace!subelement'", - "End of NS decl: u'myns'", - "Start element: u'sub2' {}", - 'Start of CDATA section', - "Character data: u'contents of CDATA section'", - 'End of CDATA section', - "End element: u'sub2'", - "External entity ref: (None, u'entity.file', None)", - ('Skipped entity', ('skipped_entity', 0)), - "End element: u'root'", - ] - for operation, expected_operation in zip(operations, expected_operations): - assert operation == expected_operation - - -class TestNamespaceSeparator: - def test_legal(self): - # Tests that make sure we get errors when the namespace_separator value - # is illegal, and that we don't for good values: - expat.ParserCreate() - expat.ParserCreate(namespace_separator=None) - expat.ParserCreate(namespace_separator=' ') - - def test_illegal(self): - try: - expat.ParserCreate(namespace_separator=42) - raise AssertionError - except TypeError, e: - assert str(e) == ( - 'ParserCreate() argument 2 must be string or None, not int') - - try: - expat.ParserCreate(namespace_separator='too long') - raise AssertionError - except ValueError, e: - assert str(e) == ( - 'namespace_separator must be at most one character, omitted, or None') - - def test_zero_length(self): - # ParserCreate() needs to accept a namespace_separator of zero length - # to satisfy the requirements of RDF applications that are required - # to simply glue together the namespace URI and the localname. 
Though - # considered a wart of the RDF specifications, it needs to be supported. - # - # See XML-SIG mailing list thread starting with - # http://mail.python.org/pipermail/xml-sig/2001-April/005202.html - # - expat.ParserCreate(namespace_separator='') # too short - - -class TestInterning: - def test(self): - py.test.skip("Not working") - # Test the interning machinery. - p = expat.ParserCreate() - L = [] - def collector(name, *args): - L.append(name) - p.StartElementHandler = collector - p.EndElementHandler = collector - p.Parse(" ", 1) - tag = L[0] - assert len(L) == 6 - for entry in L: - # L should have the same string repeated over and over. - assert tag is entry - - -class TestBufferText: - def setup_method(self, meth): - self.stuff = [] - self.parser = expat.ParserCreate() - self.parser.buffer_text = 1 - self.parser.CharacterDataHandler = self.CharacterDataHandler - - def check(self, expected, label): - assert self.stuff == expected, ( - "%s\nstuff = %r\nexpected = %r" - % (label, self.stuff, map(unicode, expected))) - - def CharacterDataHandler(self, text): - self.stuff.append(text) - - def StartElementHandler(self, name, attrs): - self.stuff.append("<%s>" % name) - bt = attrs.get("buffer-text") - if bt == "yes": - self.parser.buffer_text = 1 - elif bt == "no": - self.parser.buffer_text = 0 - - def EndElementHandler(self, name): - self.stuff.append("" % name) - - def CommentHandler(self, data): - self.stuff.append("" % data) - - def setHandlers(self, handlers=[]): - for name in handlers: - setattr(self.parser, name, getattr(self, name)) - - def test_default_to_disabled(self): - parser = expat.ParserCreate() - assert not parser.buffer_text - - def test_buffering_enabled(self): - # Make sure buffering is turned on - assert self.parser.buffer_text - self.parser.Parse("123", 1) - assert self.stuff == ['123'], ( - "buffered text not properly collapsed") - - def test1(self): - # XXX This test exposes more detail of Expat's text chunking than we - # XXX like, but it tests what we need to concisely. 
- self.setHandlers(["StartElementHandler"]) - self.parser.Parse("12\n34\n5", 1) - assert self.stuff == ( - ["", "1", "", "2", "\n", "3", "", "4\n5"]), ( - "buffering control not reacting as expected") - - def test2(self): - self.parser.Parse("1<2> \n 3", 1) - assert self.stuff == ["1<2> \n 3"], ( - "buffered text not properly collapsed") - - def test3(self): - self.setHandlers(["StartElementHandler"]) - self.parser.Parse("123", 1) - assert self.stuff == ["", "1", "", "2", "", "3"], ( - "buffered text not properly split") - - def test4(self): - self.setHandlers(["StartElementHandler", "EndElementHandler"]) - self.parser.CharacterDataHandler = None - self.parser.Parse("123", 1) - assert self.stuff == ( - ["", "", "", "", "", ""]) - - def test5(self): - self.setHandlers(["StartElementHandler", "EndElementHandler"]) - self.parser.Parse("123", 1) - assert self.stuff == ( - ["", "1", "", "", "2", "", "", "3", ""]) - - def test6(self): - self.setHandlers(["CommentHandler", "EndElementHandler", - "StartElementHandler"]) - self.parser.Parse("12345 ", 1) - assert self.stuff == ( - ["", "1", "", "", "2", "", "", "345", ""]), ( - "buffered text not properly split") - - def test7(self): - self.setHandlers(["CommentHandler", "EndElementHandler", - "StartElementHandler"]) - self.parser.Parse("12345 ", 1) - assert self.stuff == ( - ["", "1", "", "", "2", "", "", "3", - "", "4", "", "5", ""]), ( - "buffered text not properly split") - - -# Test handling of exception from callback: -class TestHandlerException: - def StartElementHandler(self, name, attrs): - raise RuntimeError(name) - - def test(self): - parser = expat.ParserCreate() - parser.StartElementHandler = self.StartElementHandler - try: - parser.Parse("", 1) - raise AssertionError - except RuntimeError, e: - assert e.args[0] == 'a', ( - "Expected RuntimeError for element 'a', but" + \ - " found %r" % e.args[0]) - - -# Test Current* members: -class TestPosition: - def StartElementHandler(self, name, attrs): - self.check_pos('s') - - def EndElementHandler(self, name): - self.check_pos('e') - - def check_pos(self, event): - pos = (event, - self.parser.CurrentByteIndex, - self.parser.CurrentLineNumber, - self.parser.CurrentColumnNumber) - assert self.upto < len(self.expected_list) - expected = self.expected_list[self.upto] - assert pos == expected, ( - 'Expected position %s, got position %s' %(pos, expected)) - self.upto += 1 - - def test(self): - self.parser = expat.ParserCreate() - self.parser.StartElementHandler = self.StartElementHandler - self.parser.EndElementHandler = self.EndElementHandler - self.upto = 0 - self.expected_list = [('s', 0, 1, 0), ('s', 5, 2, 1), ('s', 11, 3, 2), - ('e', 15, 3, 6), ('e', 17, 4, 1), ('e', 22, 5, 0)] - - xml = '\n \n \n \n' - self.parser.Parse(xml, 1) - - -class Testsf1296433: - def test_parse_only_xml_data(self): - # http://python.org/sf/1296433 - # - xml = "%s" % ('a' * 1025) - # this one doesn't crash - #xml = "%s" % ('a' * 10000) - - class SpecificException(Exception): - pass - - def handler(text): - raise SpecificException - - parser = expat.ParserCreate() - parser.CharacterDataHandler = handler - - py.test.raises(Exception, parser.Parse, xml) - -class TestChardataBuffer: - """ - test setting of chardata buffer size - """ - - def test_1025_bytes(self): - assert self.small_buffer_test(1025) == 2 - - def test_1000_bytes(self): - assert self.small_buffer_test(1000) == 1 - - def test_wrong_size(self): - parser = expat.ParserCreate() - parser.buffer_text = 1 - def f(size): - parser.buffer_size = size - - 
py.test.raises(TypeError, f, sys.maxint+1) - py.test.raises(ValueError, f, -1) - py.test.raises(ValueError, f, 0) - - def test_unchanged_size(self): - xml1 = ("%s" % ('a' * 512)) - xml2 = 'a'*512 + '' - parser = expat.ParserCreate() - parser.CharacterDataHandler = self.counting_handler - parser.buffer_size = 512 - parser.buffer_text = 1 - - # Feed 512 bytes of character data: the handler should be called - # once. - self.n = 0 - parser.Parse(xml1) - assert self.n == 1 - - # Reassign to buffer_size, but assign the same size. - parser.buffer_size = parser.buffer_size - assert self.n == 1 - - # Try parsing rest of the document - parser.Parse(xml2) - assert self.n == 2 - - - def test_disabling_buffer(self): - xml1 = "%s" % ('a' * 512) - xml2 = ('b' * 1024) - xml3 = "%s" % ('c' * 1024) - parser = expat.ParserCreate() - parser.CharacterDataHandler = self.counting_handler - parser.buffer_text = 1 - parser.buffer_size = 1024 - assert parser.buffer_size == 1024 - - # Parse one chunk of XML - self.n = 0 - parser.Parse(xml1, 0) - assert parser.buffer_size == 1024 - assert self.n == 1 - - # Turn off buffering and parse the next chunk. - parser.buffer_text = 0 - assert not parser.buffer_text - assert parser.buffer_size == 1024 - for i in range(10): - parser.Parse(xml2, 0) - assert self.n == 11 - - parser.buffer_text = 1 - assert parser.buffer_text - assert parser.buffer_size == 1024 - parser.Parse(xml3, 1) - assert self.n == 12 - - - - def make_document(self, bytes): - return ("" + bytes * 'a' + '') - - def counting_handler(self, text): - self.n += 1 - - def small_buffer_test(self, buffer_len): - xml = "%s" % ('a' * buffer_len) - parser = expat.ParserCreate() - parser.CharacterDataHandler = self.counting_handler - parser.buffer_size = 1024 - parser.buffer_text = 1 - - self.n = 0 - parser.Parse(xml) - return self.n - - def test_change_size_1(self): - xml1 = "%s" % ('a' * 1024) - xml2 = "aaa%s" % ('a' * 1025) - parser = expat.ParserCreate() - parser.CharacterDataHandler = self.counting_handler - parser.buffer_text = 1 - parser.buffer_size = 1024 - assert parser.buffer_size == 1024 - - self.n = 0 - parser.Parse(xml1, 0) - parser.buffer_size *= 2 - assert parser.buffer_size == 2048 - parser.Parse(xml2, 1) - assert self.n == 2 - - def test_change_size_2(self): - xml1 = "a%s" % ('a' * 1023) - xml2 = "aaa%s" % ('a' * 1025) - parser = expat.ParserCreate() - parser.CharacterDataHandler = self.counting_handler - parser.buffer_text = 1 - parser.buffer_size = 2048 - assert parser.buffer_size == 2048 - - self.n=0 - parser.Parse(xml1, 0) - parser.buffer_size /= 2 - assert parser.buffer_size == 1024 - parser.Parse(xml2, 1) - assert self.n == 4 - - def test_segfault(self): - py.test.raises(TypeError, expat.ParserCreate, 1234123123) - -def test_invalid_data(): - parser = expat.ParserCreate() - parser.Parse('invalid.xml', 0) - try: - parser.Parse("", 1) - except expat.ExpatError, e: - assert e.code == 2 # XXX is this reliable? 
- assert e.lineno == 1 - assert e.message.startswith('syntax error') - else: - py.test.fail("Did not raise") - diff --git a/pypy/config/translationoption.py b/pypy/config/translationoption.py --- a/pypy/config/translationoption.py +++ b/pypy/config/translationoption.py @@ -105,7 +105,8 @@ BoolOption("sandbox", "Produce a fully-sandboxed executable", default=False, cmdline="--sandbox", requires=[("translation.thread", False)], - suggests=[("translation.gc", "generation")]), + suggests=[("translation.gc", "generation"), + ("translation.gcrootfinder", "shadowstack")]), BoolOption("rweakref", "The backend supports RPython-level weakrefs", default=True), diff --git a/pypy/doc/config/objspace.usemodules.pyexpat.txt b/pypy/doc/config/objspace.usemodules.pyexpat.txt --- a/pypy/doc/config/objspace.usemodules.pyexpat.txt +++ b/pypy/doc/config/objspace.usemodules.pyexpat.txt @@ -1,2 +1,1 @@ -Use (experimental) pyexpat module written in RPython, instead of CTypes -version which is used by default. +Use the pyexpat module, written in RPython. diff --git a/pypy/doc/cpython_differences.rst b/pypy/doc/cpython_differences.rst --- a/pypy/doc/cpython_differences.rst +++ b/pypy/doc/cpython_differences.rst @@ -313,5 +313,10 @@ implementation detail that shows up because of internal C-level slots that PyPy does not have. +* the ``__dict__`` attribute of new-style classes returns a normal dict, as + opposed to a dict proxy like in CPython. Mutating the dict will change the + type and vice versa. For builtin types, a dictionary will be returned that + cannot be changed (but still looks and behaves like a normal dictionary). + .. include:: _ref.txt diff --git a/pypy/interpreter/baseobjspace.py b/pypy/interpreter/baseobjspace.py --- a/pypy/interpreter/baseobjspace.py +++ b/pypy/interpreter/baseobjspace.py @@ -328,7 +328,7 @@ raise modname = self.str_w(w_modname) mod = self.interpclass_w(w_mod) - if isinstance(mod, Module): + if isinstance(mod, Module) and not mod.startup_called: self.timer.start("startup " + modname) mod.init(self) self.timer.stop("startup " + modname) diff --git a/pypy/interpreter/pyframe.py b/pypy/interpreter/pyframe.py --- a/pypy/interpreter/pyframe.py +++ b/pypy/interpreter/pyframe.py @@ -60,11 +60,10 @@ self.pycode = code eval.Frame.__init__(self, space, w_globals) self.locals_stack_w = [None] * (code.co_nlocals + code.co_stacksize) - self.nlocals = code.co_nlocals self.valuestackdepth = code.co_nlocals self.lastblock = None make_sure_not_resized(self.locals_stack_w) - check_nonneg(self.nlocals) + check_nonneg(self.valuestackdepth) # if space.config.objspace.honor__builtins__: self.builtin = space.builtin.pick_builtin(w_globals) @@ -144,8 +143,8 @@ def execute_frame(self, w_inputvalue=None, operr=None): """Execute this frame. Main entry point to the interpreter. The optional arguments are there to handle a generator's frame: - w_inputvalue is for generator.send()) and operr is for - generator.throw()). + w_inputvalue is for generator.send() and operr is for + generator.throw(). 
""" # the following 'assert' is an annotation hint: it hides from # the annotator all methods that are defined in PyFrame but @@ -195,7 +194,7 @@ def popvalue(self): depth = self.valuestackdepth - 1 - assert depth >= self.nlocals, "pop from empty value stack" + assert depth >= self.pycode.co_nlocals, "pop from empty value stack" w_object = self.locals_stack_w[depth] self.locals_stack_w[depth] = None self.valuestackdepth = depth @@ -223,7 +222,7 @@ def peekvalues(self, n): values_w = [None] * n base = self.valuestackdepth - n - assert base >= self.nlocals + assert base >= self.pycode.co_nlocals while True: n -= 1 if n < 0: @@ -235,7 +234,8 @@ def dropvalues(self, n): n = hint(n, promote=True) finaldepth = self.valuestackdepth - n - assert finaldepth >= self.nlocals, "stack underflow in dropvalues()" + assert finaldepth >= self.pycode.co_nlocals, ( + "stack underflow in dropvalues()") while True: n -= 1 if n < 0: @@ -267,13 +267,15 @@ # Contrast this with CPython where it's PEEK(-1). index_from_top = hint(index_from_top, promote=True) index = self.valuestackdepth + ~index_from_top - assert index >= self.nlocals, "peek past the bottom of the stack" + assert index >= self.pycode.co_nlocals, ( + "peek past the bottom of the stack") return self.locals_stack_w[index] def settopvalue(self, w_object, index_from_top=0): index_from_top = hint(index_from_top, promote=True) index = self.valuestackdepth + ~index_from_top - assert index >= self.nlocals, "settop past the bottom of the stack" + assert index >= self.pycode.co_nlocals, ( + "settop past the bottom of the stack") self.locals_stack_w[index] = w_object @jit.unroll_safe @@ -320,12 +322,13 @@ else: f_lineno = self.f_lineno - values_w = self.locals_stack_w[self.nlocals:self.valuestackdepth] + nlocals = self.pycode.co_nlocals + values_w = self.locals_stack_w[nlocals:self.valuestackdepth] w_valuestack = maker.slp_into_tuple_with_nulls(space, values_w) w_blockstack = nt([block._get_state_(space) for block in self.get_blocklist()]) w_fastlocals = maker.slp_into_tuple_with_nulls( - space, self.locals_stack_w[:self.nlocals]) + space, self.locals_stack_w[:nlocals]) if self.last_exception is None: w_exc_value = space.w_None w_tb = space.w_None @@ -442,7 +445,7 @@ """Initialize the fast locals from a list of values, where the order is according to self.pycode.signature().""" scope_len = len(scope_w) - if scope_len > self.nlocals: + if scope_len > self.pycode.co_nlocals: raise ValueError, "new fastscope is longer than the allocated area" # don't assign directly to 'locals_stack_w[:scope_len]' to be # virtualizable-friendly @@ -456,7 +459,7 @@ pass def getfastscopelength(self): - return self.nlocals + return self.pycode.co_nlocals def getclosure(self): return None diff --git a/pypy/interpreter/test/test_objspace.py b/pypy/interpreter/test/test_objspace.py --- a/pypy/interpreter/test/test_objspace.py +++ b/pypy/interpreter/test/test_objspace.py @@ -322,3 +322,14 @@ space.ALL_BUILTIN_MODULES.pop() del space._builtinmodule_list mods = space.get_builtinmodule_to_install() + + def test_dont_reload_builtin_mods_on_startup(self): + from pypy.tool.option import make_config, make_objspace + config = make_config(None) + space = make_objspace(config) + w_executable = space.wrap('executable') + assert space.str_w(space.getattr(space.sys, w_executable)) == 'py.py' + space.setattr(space.sys, w_executable, space.wrap('foobar')) + assert space.str_w(space.getattr(space.sys, w_executable)) == 'foobar' + space.startup() + assert space.str_w(space.getattr(space.sys, 
w_executable)) == 'foobar' diff --git a/pypy/interpreter/test/test_zpy.py b/pypy/interpreter/test/test_zpy.py --- a/pypy/interpreter/test/test_zpy.py +++ b/pypy/interpreter/test/test_zpy.py @@ -17,14 +17,14 @@ def test_executable(): """Ensures sys.executable points to the py.py script""" # TODO : watch out for spaces/special chars in pypypath - output = run(sys.executable, pypypath, + output = run(sys.executable, pypypath, '-S', "-c", "import sys;print sys.executable") assert output.splitlines()[-1] == pypypath def test_special_names(): """Test the __name__ and __file__ special global names""" cmd = "print __name__; print '__file__' in globals()" - output = run(sys.executable, pypypath, '-c', cmd) + output = run(sys.executable, pypypath, '-S', '-c', cmd) assert output.splitlines()[-2] == '__main__' assert output.splitlines()[-1] == 'False' @@ -33,24 +33,24 @@ tmpfile.write("print __name__; print __file__\n") tmpfile.close() - output = run(sys.executable, pypypath, tmpfilepath) + output = run(sys.executable, pypypath, '-S', tmpfilepath) assert output.splitlines()[-2] == '__main__' assert output.splitlines()[-1] == str(tmpfilepath) def test_argv_command(): """Some tests on argv""" # test 1 : no arguments - output = run(sys.executable, pypypath, + output = run(sys.executable, pypypath, '-S', "-c", "import sys;print sys.argv") assert output.splitlines()[-1] == str(['-c']) # test 2 : some arguments after - output = run(sys.executable, pypypath, + output = run(sys.executable, pypypath, '-S', "-c", "import sys;print sys.argv", "hello") assert output.splitlines()[-1] == str(['-c','hello']) # test 3 : additionnal pypy parameters - output = run(sys.executable, pypypath, + output = run(sys.executable, pypypath, '-S', "-O", "-c", "import sys;print sys.argv", "hello") assert output.splitlines()[-1] == str(['-c','hello']) @@ -65,15 +65,15 @@ tmpfile.close() # test 1 : no arguments - output = run(sys.executable, pypypath, tmpfilepath) + output = run(sys.executable, pypypath, '-S', tmpfilepath) assert output.splitlines()[-1] == str([tmpfilepath]) # test 2 : some arguments after - output = run(sys.executable, pypypath, tmpfilepath, "hello") + output = run(sys.executable, pypypath, '-S', tmpfilepath, "hello") assert output.splitlines()[-1] == str([tmpfilepath,'hello']) # test 3 : additionnal pypy parameters - output = run(sys.executable, pypypath, "-O", tmpfilepath, "hello") + output = run(sys.executable, pypypath, '-S', "-O", tmpfilepath, "hello") assert output.splitlines()[-1] == str([tmpfilepath,'hello']) @@ -95,7 +95,7 @@ tmpfile.write(TB_NORMALIZATION_CHK) tmpfile.close() - popen = subprocess.Popen([sys.executable, str(pypypath), tmpfilepath], + popen = subprocess.Popen([sys.executable, str(pypypath), '-S', tmpfilepath], stderr=subprocess.PIPE) _, stderr = popen.communicate() assert stderr.endswith('KeyError: \n') diff --git a/pypy/jit/backend/llsupport/gc.py b/pypy/jit/backend/llsupport/gc.py --- a/pypy/jit/backend/llsupport/gc.py +++ b/pypy/jit/backend/llsupport/gc.py @@ -1,7 +1,6 @@ import os from pypy.rlib import rgc from pypy.rlib.objectmodel import we_are_translated, specialize -from pypy.rlib.debug import fatalerror from pypy.rlib.rarithmetic import ovfcheck from pypy.rpython.lltypesystem import lltype, llmemory, rffi, rclass, rstr from pypy.rpython.lltypesystem import llgroup diff --git a/pypy/jit/backend/test/runner_test.py b/pypy/jit/backend/test/runner_test.py --- a/pypy/jit/backend/test/runner_test.py +++ b/pypy/jit/backend/test/runner_test.py @@ -2221,6 +2221,35 @@ print 'step 4 ok' print 
'-'*79 + def test_guard_not_invalidated_and_label(self): + # test that the guard_not_invalidated reserves enough room before + # the label. If it doesn't, then in this example after we invalidate + # the guard, jumping to the label will hit the invalidation code too + cpu = self.cpu + i0 = BoxInt() + faildescr = BasicFailDescr(1) + labeldescr = TargetToken() + ops = [ + ResOperation(rop.GUARD_NOT_INVALIDATED, [], None, descr=faildescr), + ResOperation(rop.LABEL, [i0], None, descr=labeldescr), + ResOperation(rop.FINISH, [i0], None, descr=BasicFailDescr(3)), + ] + ops[0].setfailargs([]) + looptoken = JitCellToken() + self.cpu.compile_loop([i0], ops, looptoken) + # mark as failing + self.cpu.invalidate_loop(looptoken) + # attach a bridge + i2 = BoxInt() + ops = [ + ResOperation(rop.JUMP, [ConstInt(333)], None, descr=labeldescr), + ] + self.cpu.compile_bridge(faildescr, [], ops, looptoken) + # run: must not be caught in an infinite loop + fail = self.cpu.execute_token(looptoken, 16) + assert fail.identifier == 3 + assert self.cpu.get_latest_value_int(0) == 333 + # pure do_ / descr features def test_do_operations(self): diff --git a/pypy/jit/backend/x86/regalloc.py b/pypy/jit/backend/x86/regalloc.py --- a/pypy/jit/backend/x86/regalloc.py +++ b/pypy/jit/backend/x86/regalloc.py @@ -165,7 +165,6 @@ self.jump_target_descr = None self.close_stack_struct = 0 self.final_jump_op = None - self.min_bytes_before_label = 0 def _prepare(self, inputargs, operations, allgcrefs): self.fm = X86FrameManager() @@ -199,8 +198,13 @@ operations = self._prepare(inputargs, operations, allgcrefs) self._update_bindings(arglocs, inputargs) self.param_depth = prev_depths[1] + self.min_bytes_before_label = 0 return operations + def ensure_next_label_is_at_least_at_position(self, at_least_position): + self.min_bytes_before_label = max(self.min_bytes_before_label, + at_least_position) + def reserve_param(self, n): self.param_depth = max(self.param_depth, n) @@ -468,7 +472,11 @@ self.assembler.mc.mark_op(None) # end of the loop def flush_loop(self): - # rare case: if the loop is too short, pad with NOPs + # rare case: if the loop is too short, or if we are just after + # a GUARD_NOT_INVALIDATED, pad with NOPs. Important! This must + # be called to ensure that there are enough bytes produced, + # because GUARD_NOT_INVALIDATED or redirect_call_assembler() + # will maybe overwrite them. mc = self.assembler.mc while mc.get_relative_pos() < self.min_bytes_before_label: mc.NOP() @@ -558,7 +566,15 @@ def consider_guard_no_exception(self, op): self.perform_guard(op, [], None) - consider_guard_not_invalidated = consider_guard_no_exception + def consider_guard_not_invalidated(self, op): + mc = self.assembler.mc + n = mc.get_relative_pos() + self.perform_guard(op, [], None) + assert n == mc.get_relative_pos() + # ensure that the next label is at least 5 bytes farther than + # the current position. Otherwise, when invalidating the guard, + # we would overwrite randomly the next label's position. 
+ self.ensure_next_label_is_at_least_at_position(n + 5) def consider_guard_exception(self, op): loc = self.rm.make_sure_var_in_reg(op.getarg(0)) diff --git a/pypy/jit/backend/x86/support.py b/pypy/jit/backend/x86/support.py --- a/pypy/jit/backend/x86/support.py +++ b/pypy/jit/backend/x86/support.py @@ -1,6 +1,7 @@ import sys from pypy.rpython.lltypesystem import lltype, rffi, llmemory from pypy.translator.tool.cbuild import ExternalCompilationInfo +from pypy.jit.backend.x86.arch import WORD def values_array(TP, size): @@ -37,8 +38,13 @@ if sys.platform == 'win32': ensure_sse2_floats = lambda : None + # XXX check for SSE2 on win32 too else: + if WORD == 4: + extra = ['-DPYPY_X86_CHECK_SSE2'] + else: + extra = [] ensure_sse2_floats = rffi.llexternal_use_eci(ExternalCompilationInfo( compile_extra = ['-msse2', '-mfpmath=sse', - '-DPYPY_CPU_HAS_STANDARD_PRECISION'], + '-DPYPY_CPU_HAS_STANDARD_PRECISION'] + extra, )) diff --git a/pypy/jit/backend/x86/test/test_ztranslation.py b/pypy/jit/backend/x86/test/test_ztranslation.py --- a/pypy/jit/backend/x86/test/test_ztranslation.py +++ b/pypy/jit/backend/x86/test/test_ztranslation.py @@ -52,6 +52,7 @@ set_param(jitdriver, "trace_eagerness", 2) total = 0 frame = Frame(i) + j = float(j) while frame.i > 3: jitdriver.can_enter_jit(frame=frame, total=total, j=j) jitdriver.jit_merge_point(frame=frame, total=total, j=j) diff --git a/pypy/jit/metainterp/optimizeopt/optimizer.py b/pypy/jit/metainterp/optimizeopt/optimizer.py --- a/pypy/jit/metainterp/optimizeopt/optimizer.py +++ b/pypy/jit/metainterp/optimizeopt/optimizer.py @@ -567,7 +567,7 @@ assert isinstance(descr, compile.ResumeGuardDescr) modifier = resume.ResumeDataVirtualAdder(descr, self.resumedata_memo) try: - newboxes = modifier.finish(self.values, self.pendingfields) + newboxes = modifier.finish(self, self.pendingfields) if len(newboxes) > self.metainterp_sd.options.failargs_limit: raise resume.TagOverflow except resume.TagOverflow: diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py @@ -7760,6 +7760,59 @@ """ self.optimize_loop(ops, expected) + def test_constant_failargs(self): + ops = """ + [p1, i2, i3] + setfield_gc(p1, ConstPtr(myptr), descr=nextdescr) + p16 = getfield_gc(p1, descr=nextdescr) + guard_true(i2) [p16, i3] + jump(p1, i3, i2) + """ + preamble = """ + [p1, i2, i3] + setfield_gc(p1, ConstPtr(myptr), descr=nextdescr) + guard_true(i2) [i3] + jump(p1, i3) + """ + expected = """ + [p1, i3] + guard_true(i3) [] + jump(p1, 1) + """ + self.optimize_loop(ops, expected, preamble) + + def test_issue1048(self): + ops = """ + [p1, i2, i3] + p16 = getfield_gc(p1, descr=nextdescr) + guard_true(i2) [p16] + setfield_gc(p1, ConstPtr(myptr), descr=nextdescr) + jump(p1, i3, i2) + """ + expected = """ + [p1, i3] + guard_true(i3) [] + jump(p1, 1) + """ + self.optimize_loop(ops, expected) + + def test_issue1048_ok(self): + ops = """ + [p1, i2, i3] + p16 = getfield_gc(p1, descr=nextdescr) + call(p16, descr=nonwritedescr) + guard_true(i2) [p16] + setfield_gc(p1, ConstPtr(myptr), descr=nextdescr) + jump(p1, i3, i2) + """ + expected = """ + [p1, i3] + call(ConstPtr(myptr), descr=nonwritedescr) + guard_true(i3) [] + jump(p1, 1) + """ + self.optimize_loop(ops, expected) + class TestLLtype(OptimizeOptTest, LLtypeMixin): pass diff --git a/pypy/jit/metainterp/resume.py b/pypy/jit/metainterp/resume.py --- 
a/pypy/jit/metainterp/resume.py +++ b/pypy/jit/metainterp/resume.py @@ -182,23 +182,22 @@ # env numbering - def number(self, values, snapshot): + def number(self, optimizer, snapshot): if snapshot is None: return lltype.nullptr(NUMBERING), {}, 0 if snapshot in self.numberings: numb, liveboxes, v = self.numberings[snapshot] return numb, liveboxes.copy(), v - numb1, liveboxes, v = self.number(values, snapshot.prev) + numb1, liveboxes, v = self.number(optimizer, snapshot.prev) n = len(liveboxes)-v boxes = snapshot.boxes length = len(boxes) numb = lltype.malloc(NUMBERING, length) for i in range(length): box = boxes[i] - value = values.get(box, None) - if value is not None: - box = value.get_key_box() + value = optimizer.getvalue(box) + box = value.get_key_box() if isinstance(box, Const): tagged = self.getconst(box) @@ -318,14 +317,14 @@ _, tagbits = untag(tagged) return tagbits == TAGVIRTUAL - def finish(self, values, pending_setfields=[]): + def finish(self, optimizer, pending_setfields=[]): # compute the numbering storage = self.storage # make sure that nobody attached resume data to this guard yet assert not storage.rd_numb snapshot = storage.rd_snapshot assert snapshot is not None # is that true? - numb, liveboxes_from_env, v = self.memo.number(values, snapshot) + numb, liveboxes_from_env, v = self.memo.number(optimizer, snapshot) self.liveboxes_from_env = liveboxes_from_env self.liveboxes = {} storage.rd_numb = numb @@ -341,23 +340,23 @@ liveboxes[i] = box else: assert tagbits == TAGVIRTUAL - value = values[box] + value = optimizer.getvalue(box) value.get_args_for_fail(self) for _, box, fieldbox, _ in pending_setfields: self.register_box(box) self.register_box(fieldbox) - value = values[fieldbox] + value = optimizer.getvalue(fieldbox) value.get_args_for_fail(self) - self._number_virtuals(liveboxes, values, v) + self._number_virtuals(liveboxes, optimizer, v) self._add_pending_fields(pending_setfields) storage.rd_consts = self.memo.consts dump_storage(storage, liveboxes) return liveboxes[:] - def _number_virtuals(self, liveboxes, values, num_env_virtuals): + def _number_virtuals(self, liveboxes, optimizer, num_env_virtuals): # !! 'liveboxes' is a list that is extend()ed in-place !! 
memo = self.memo new_liveboxes = [None] * memo.num_cached_boxes() @@ -397,7 +396,7 @@ memo.nvholes += length - len(vfieldboxes) for virtualbox, fieldboxes in vfieldboxes.iteritems(): num, _ = untag(self.liveboxes[virtualbox]) - value = values[virtualbox] + value = optimizer.getvalue(virtualbox) fieldnums = [self._gettagged(box) for box in fieldboxes] vinfo = value.make_virtual_info(self, fieldnums) diff --git a/pypy/jit/metainterp/test/test_ajit.py b/pypy/jit/metainterp/test/test_ajit.py --- a/pypy/jit/metainterp/test/test_ajit.py +++ b/pypy/jit/metainterp/test/test_ajit.py @@ -2943,11 +2943,18 @@ self.check_resops(arraylen_gc=3) def test_ulonglong_mod(self): - myjitdriver = JitDriver(greens = [], reds = ['n', 'sa', 'i']) + myjitdriver = JitDriver(greens = [], reds = ['n', 'a']) + class A: + pass def f(n): sa = i = rffi.cast(rffi.ULONGLONG, 1) + a = A() while i < rffi.cast(rffi.ULONGLONG, n): - myjitdriver.jit_merge_point(sa=sa, n=n, i=i) + a.sa = sa + a.i = i + myjitdriver.jit_merge_point(n=n, a=a) + sa = a.sa + i = a.i sa += sa % i i += 1 res = self.meta_interp(f, [32]) diff --git a/pypy/jit/metainterp/test/test_resume.py b/pypy/jit/metainterp/test/test_resume.py --- a/pypy/jit/metainterp/test/test_resume.py +++ b/pypy/jit/metainterp/test/test_resume.py @@ -18,6 +18,19 @@ rd_virtuals = None rd_pendingfields = None + +class FakeOptimizer(object): + def __init__(self, values): + self.values = values + + def getvalue(self, box): + try: + value = self.values[box] + except KeyError: + value = self.values[box] = OptValue(box) + return value + + def test_tag(): assert tag(3, 1) == rffi.r_short(3<<2|1) assert tag(-3, 2) == rffi.r_short(-3<<2|2) @@ -500,7 +513,7 @@ capture_resumedata(fs, None, [], storage) memo = ResumeDataLoopMemo(FakeMetaInterpStaticData()) modifier = ResumeDataVirtualAdder(storage, memo) - liveboxes = modifier.finish({}) + liveboxes = modifier.finish(FakeOptimizer({})) metainterp = MyMetaInterp() b1t, b2t, b3t = [BoxInt(), BoxPtr(), BoxInt()] @@ -524,7 +537,7 @@ capture_resumedata(fs, [b4], [], storage) memo = ResumeDataLoopMemo(FakeMetaInterpStaticData()) modifier = ResumeDataVirtualAdder(storage, memo) - liveboxes = modifier.finish({}) + liveboxes = modifier.finish(FakeOptimizer({})) metainterp = MyMetaInterp() b1t, b2t, b3t, b4t = [BoxInt(), BoxPtr(), BoxInt(), BoxPtr()] @@ -553,10 +566,10 @@ memo = ResumeDataLoopMemo(FakeMetaInterpStaticData()) modifier = ResumeDataVirtualAdder(storage, memo) - liveboxes = modifier.finish({}) + liveboxes = modifier.finish(FakeOptimizer({})) modifier = ResumeDataVirtualAdder(storage2, memo) - liveboxes2 = modifier.finish({}) + liveboxes2 = modifier.finish(FakeOptimizer({})) metainterp = MyMetaInterp() @@ -617,7 +630,7 @@ memo = ResumeDataLoopMemo(FakeMetaInterpStaticData()) values = {b2: virtual_value(b2, b5, c4)} modifier = ResumeDataVirtualAdder(storage, memo) - liveboxes = modifier.finish(values) + liveboxes = modifier.finish(FakeOptimizer(values)) assert len(storage.rd_virtuals) == 1 assert storage.rd_virtuals[0].fieldnums == [tag(-1, TAGBOX), tag(0, TAGCONST)] @@ -628,7 +641,7 @@ values = {b2: virtual_value(b2, b4, v6), b6: v6} memo.clear_box_virtual_numbers() modifier = ResumeDataVirtualAdder(storage2, memo) - liveboxes2 = modifier.finish(values) + liveboxes2 = modifier.finish(FakeOptimizer(values)) assert len(storage2.rd_virtuals) == 2 assert storage2.rd_virtuals[0].fieldnums == [tag(len(liveboxes2)-1, TAGBOX), tag(-1, TAGVIRTUAL)] @@ -674,7 +687,7 @@ memo = ResumeDataLoopMemo(FakeMetaInterpStaticData()) values = {b2: 
virtual_value(b2, b5, c4)} modifier = ResumeDataVirtualAdder(storage, memo) - liveboxes = modifier.finish(values) + liveboxes = modifier.finish(FakeOptimizer(values)) assert len(storage.rd_virtuals) == 1 assert storage.rd_virtuals[0].fieldnums == [tag(-1, TAGBOX), tag(0, TAGCONST)] @@ -684,7 +697,7 @@ capture_resumedata(fs, None, [], storage2) values[b4] = virtual_value(b4, b6, c4) modifier = ResumeDataVirtualAdder(storage2, memo) - liveboxes = modifier.finish(values) + liveboxes = modifier.finish(FakeOptimizer(values)) assert len(storage2.rd_virtuals) == 2 assert storage2.rd_virtuals[1].fieldnums == storage.rd_virtuals[0].fieldnums assert storage2.rd_virtuals[1] is storage.rd_virtuals[0] @@ -703,7 +716,7 @@ v1.setfield(LLtypeMixin.nextdescr, v2) values = {b1: v1, b2: v2} modifier = ResumeDataVirtualAdder(storage, memo) - liveboxes = modifier.finish(values) + liveboxes = modifier.finish(FakeOptimizer(values)) assert liveboxes == [b3] assert len(storage.rd_virtuals) == 2 assert storage.rd_virtuals[0].fieldnums == [tag(-1, TAGBOX), @@ -776,7 +789,7 @@ memo = ResumeDataLoopMemo(FakeMetaInterpStaticData()) - numb, liveboxes, v = memo.number({}, snap1) + numb, liveboxes, v = memo.number(FakeOptimizer({}), snap1) assert v == 0 assert liveboxes == {b1: tag(0, TAGBOX), b2: tag(1, TAGBOX), @@ -788,7 +801,7 @@ tag(0, TAGBOX), tag(2, TAGINT)] assert not numb.prev.prev - numb2, liveboxes2, v = memo.number({}, snap2) + numb2, liveboxes2, v = memo.number(FakeOptimizer({}), snap2) assert v == 0 assert liveboxes2 == {b1: tag(0, TAGBOX), b2: tag(1, TAGBOX), @@ -813,7 +826,8 @@ return self.virt # renamed - numb3, liveboxes3, v = memo.number({b3: FakeValue(False, c4)}, snap3) + numb3, liveboxes3, v = memo.number(FakeOptimizer({b3: FakeValue(False, c4)}), + snap3) assert v == 0 assert liveboxes3 == {b1: tag(0, TAGBOX), b2: tag(1, TAGBOX)} @@ -825,7 +839,8 @@ env4 = [c3, b4, b1, c3] snap4 = Snapshot(snap, env4) - numb4, liveboxes4, v = memo.number({b4: FakeValue(True, b4)}, snap4) + numb4, liveboxes4, v = memo.number(FakeOptimizer({b4: FakeValue(True, b4)}), + snap4) assert v == 1 assert liveboxes4 == {b1: tag(0, TAGBOX), b2: tag(1, TAGBOX), @@ -837,8 +852,9 @@ env5 = [b1, b4, b5] snap5 = Snapshot(snap4, env5) - numb5, liveboxes5, v = memo.number({b4: FakeValue(True, b4), - b5: FakeValue(True, b5)}, snap5) + numb5, liveboxes5, v = memo.number(FakeOptimizer({b4: FakeValue(True, b4), + b5: FakeValue(True, b5)}), + snap5) assert v == 2 assert liveboxes5 == {b1: tag(0, TAGBOX), b2: tag(1, TAGBOX), @@ -940,7 +956,7 @@ storage = make_storage(b1s, b2s, b3s) memo = ResumeDataLoopMemo(FakeMetaInterpStaticData()) modifier = ResumeDataVirtualAdder(storage, memo) - liveboxes = modifier.finish({}) + liveboxes = modifier.finish(FakeOptimizer({})) assert storage.rd_snapshot is None cpu = MyCPU([]) reader = ResumeDataDirectReader(MyMetaInterp(cpu), storage) @@ -954,14 +970,14 @@ storage = make_storage(b1s, b2s, b3s) memo = ResumeDataLoopMemo(FakeMetaInterpStaticData()) modifier = ResumeDataVirtualAdder(storage, memo) - modifier.finish({}) + modifier.finish(FakeOptimizer({})) assert len(memo.consts) == 2 assert storage.rd_consts is memo.consts b1s, b2s, b3s = [ConstInt(sys.maxint), ConstInt(2**17), ConstInt(-65)] storage2 = make_storage(b1s, b2s, b3s) modifier2 = ResumeDataVirtualAdder(storage2, memo) - modifier2.finish({}) + modifier2.finish(FakeOptimizer({})) assert len(memo.consts) == 3 assert storage2.rd_consts is memo.consts @@ -1022,7 +1038,7 @@ val = FakeValue() values = {b1s: val, b2s: val} - liveboxes = 
modifier.finish(values) + liveboxes = modifier.finish(FakeOptimizer(values)) assert storage.rd_snapshot is None b1t, b3t = [BoxInt(11), BoxInt(33)] newboxes = _resume_remap(liveboxes, [b1_2, b3s], b1t, b3t) @@ -1043,7 +1059,7 @@ storage = make_storage(b1s, b2s, b3s) memo = ResumeDataLoopMemo(FakeMetaInterpStaticData()) modifier = ResumeDataVirtualAdder(storage, memo) - liveboxes = modifier.finish({}) + liveboxes = modifier.finish(FakeOptimizer({})) b2t, b3t = [BoxPtr(demo55o), BoxInt(33)] newboxes = _resume_remap(liveboxes, [b2s, b3s], b2t, b3t) metainterp = MyMetaInterp() @@ -1086,7 +1102,7 @@ values = {b2s: v2, b4s: v4} liveboxes = [] - modifier._number_virtuals(liveboxes, values, 0) + modifier._number_virtuals(liveboxes, FakeOptimizer(values), 0) storage.rd_consts = memo.consts[:] storage.rd_numb = None # resume @@ -1156,7 +1172,7 @@ modifier.register_virtual_fields(b2s, [b4s, c1s]) liveboxes = [] values = {b2s: v2} - modifier._number_virtuals(liveboxes, values, 0) + modifier._number_virtuals(liveboxes, FakeOptimizer(values), 0) dump_storage(storage, liveboxes) storage.rd_consts = memo.consts[:] storage.rd_numb = None @@ -1203,7 +1219,7 @@ v2.setfield(LLtypeMixin.bdescr, OptValue(b4s)) modifier.register_virtual_fields(b2s, [c1s, b4s]) liveboxes = [] - modifier._number_virtuals(liveboxes, {b2s: v2}, 0) + modifier._number_virtuals(liveboxes, FakeOptimizer({b2s: v2}), 0) dump_storage(storage, liveboxes) storage.rd_consts = memo.consts[:] storage.rd_numb = None @@ -1249,7 +1265,7 @@ values = {b4s: v4, b2s: v2} liveboxes = [] - modifier._number_virtuals(liveboxes, values, 0) + modifier._number_virtuals(liveboxes, FakeOptimizer(values), 0) assert liveboxes == [b2s, b4s] or liveboxes == [b4s, b2s] modifier._add_pending_fields([(LLtypeMixin.nextdescr, b2s, b4s, -1)]) storage.rd_consts = memo.consts[:] diff --git a/pypy/jit/metainterp/warmspot.py b/pypy/jit/metainterp/warmspot.py --- a/pypy/jit/metainterp/warmspot.py +++ b/pypy/jit/metainterp/warmspot.py @@ -453,7 +453,7 @@ if sys.stdout == sys.__stdout__: import pdb; pdb.post_mortem(tb) raise e.__class__, e, tb - fatalerror('~~~ Crash in JIT! %s' % (e,), traceback=True) + fatalerror('~~~ Crash in JIT! 
%s' % (e,)) crash_in_jit._dont_inline_ = True if self.translator.rtyper.type_system.name == 'lltypesystem': diff --git a/pypy/jit/tl/tinyframe/tinyframe.py b/pypy/jit/tl/tinyframe/tinyframe.py --- a/pypy/jit/tl/tinyframe/tinyframe.py +++ b/pypy/jit/tl/tinyframe/tinyframe.py @@ -210,7 +210,7 @@ def repr(self): return "" % (self.outer.repr(), self.inner.repr()) -driver = JitDriver(greens = ['code', 'i'], reds = ['self'], +driver = JitDriver(greens = ['i', 'code'], reds = ['self'], virtualizables = ['self']) class Frame(object): diff --git a/pypy/module/_demo/test/test_sieve.py b/pypy/module/_demo/test/test_sieve.py new file mode 100644 --- /dev/null +++ b/pypy/module/_demo/test/test_sieve.py @@ -0,0 +1,12 @@ +from pypy.conftest import gettestobjspace + + +class AppTestSieve: + def setup_class(cls): + cls.space = gettestobjspace(usemodules=('_demo',)) + + def test_sieve(self): + import _demo + lst = _demo.sieve(100) + assert lst == [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, + 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97] diff --git a/pypy/module/_io/__init__.py b/pypy/module/_io/__init__.py --- a/pypy/module/_io/__init__.py +++ b/pypy/module/_io/__init__.py @@ -28,6 +28,7 @@ } def init(self, space): + MixedModule.init(self, space) w_UnsupportedOperation = space.call_function( space.w_type, space.wrap('UnsupportedOperation'), @@ -35,3 +36,9 @@ space.newdict()) space.setattr(self, space.wrap('UnsupportedOperation'), w_UnsupportedOperation) + + def shutdown(self, space): + # at shutdown, flush all open streams. Ignore I/O errors. + from pypy.module._io.interp_iobase import get_autoflushher + get_autoflushher(space).flush_all(space) + diff --git a/pypy/module/_io/interp_iobase.py b/pypy/module/_io/interp_iobase.py --- a/pypy/module/_io/interp_iobase.py +++ b/pypy/module/_io/interp_iobase.py @@ -5,6 +5,8 @@ from pypy.interpreter.gateway import interp2app from pypy.interpreter.error import OperationError, operationerrfmt from pypy.rlib.rstring import StringBuilder +from pypy.rlib import rweakref + DEFAULT_BUFFER_SIZE = 8192 @@ -43,6 +45,8 @@ self.space = space self.w_dict = space.newdict() self.__IOBase_closed = False + self.streamholder = None # needed by AutoFlusher + get_autoflushher(space).add(self) def getdict(self, space): return self.w_dict @@ -98,6 +102,7 @@ space.call_method(self, "flush") finally: self.__IOBase_closed = True + get_autoflushher(space).remove(self) def flush_w(self, space): if self._CLOSED(): @@ -303,3 +308,57 @@ read = interp2app(W_RawIOBase.read_w), readall = interp2app(W_RawIOBase.readall_w), ) + + +# ------------------------------------------------------------ +# functions to make sure that all streams are flushed on exit +# ------------------------------------------------------------ + +class StreamHolder(object): + + def __init__(self, w_iobase): + self.w_iobase_ref = rweakref.ref(w_iobase) + w_iobase.autoflusher = self + + def autoflush(self, space): + w_iobase = self.w_iobase_ref() + if w_iobase is not None: + try: + space.call_method(w_iobase, 'flush') + except OperationError, e: + # if it's an IOError, ignore it + if not e.match(space, space.w_IOError): + raise + + +class AutoFlusher(object): + + def __init__(self, space): + self.streams = {} + + def add(self, w_iobase): + assert w_iobase.streamholder is None + holder = StreamHolder(w_iobase) + w_iobase.streamholder = holder + self.streams[holder] = None + + def remove(self, w_iobase): + holder = w_iobase.streamholder + if holder is not None: + del self.streams[holder] + + def flush_all(self, space): + 
while self.streams: + for streamholder in self.streams.keys(): + try: + del self.streams[streamholder] + except KeyError: + pass # key was removed in the meantime + else: + streamholder.autoflush(space) + + +def get_autoflushher(space): + return space.fromcache(AutoFlusher) + + diff --git a/pypy/module/_io/test/test_fileio.py b/pypy/module/_io/test/test_fileio.py --- a/pypy/module/_io/test/test_fileio.py +++ b/pypy/module/_io/test/test_fileio.py @@ -160,3 +160,37 @@ f.close() assert repr(f) == "<_io.FileIO [closed]>" +def test_flush_at_exit(): + from pypy import conftest + from pypy.tool.option import make_config, make_objspace + from pypy.tool.udir import udir + + tmpfile = udir.join('test_flush_at_exit') + config = make_config(conftest.option) + space = make_objspace(config) + space.appexec([space.wrap(str(tmpfile))], """(tmpfile): + import io + f = io.open(tmpfile, 'w', encoding='ascii') + f.write('42') + # no flush() and no close() + import sys; sys._keepalivesomewhereobscure = f + """) + space.finish() + assert tmpfile.read() == '42' + +def test_flush_at_exit_IOError(): + from pypy import conftest + from pypy.tool.option import make_config, make_objspace + + config = make_config(conftest.option) + space = make_objspace(config) + space.appexec([], """(): + import io + class MyStream(io.IOBase): + def flush(self): + raise IOError + + s = MyStream() + import sys; sys._keepalivesomewhereobscure = s + """) + space.finish() # the IOError has been ignored diff --git a/pypy/module/cpyext/api.py b/pypy/module/cpyext/api.py --- a/pypy/module/cpyext/api.py +++ b/pypy/module/cpyext/api.py @@ -23,6 +23,7 @@ from pypy.interpreter.function import StaticMethod from pypy.objspace.std.sliceobject import W_SliceObject from pypy.module.__builtin__.descriptor import W_Property +from pypy.module.__builtin__.interp_classobj import W_ClassObject from pypy.module.__builtin__.interp_memoryview import W_MemoryView from pypy.rlib.entrypoint import entrypoint from pypy.rlib.unroll import unrolling_iterable @@ -383,6 +384,8 @@ "Dict": "space.w_dict", "Tuple": "space.w_tuple", "List": "space.w_list", + "Set": "space.w_set", + "FrozenSet": "space.w_frozenset", "Int": "space.w_int", "Bool": "space.w_bool", "Float": "space.w_float", @@ -397,13 +400,14 @@ 'Module': 'space.gettypeobject(Module.typedef)', 'Property': 'space.gettypeobject(W_Property.typedef)', 'Slice': 'space.gettypeobject(W_SliceObject.typedef)', + 'Class': 'space.gettypeobject(W_ClassObject.typedef)', 'StaticMethod': 'space.gettypeobject(StaticMethod.typedef)', 'CFunction': 'space.gettypeobject(cpyext.methodobject.W_PyCFunctionObject.typedef)', 'WrapperDescr': 'space.gettypeobject(cpyext.methodobject.W_PyCMethodObject.typedef)' }.items(): GLOBALS['Py%s_Type#' % (cpyname, )] = ('PyTypeObject*', pypyexpr) - for cpyname in 'Method List Int Long Dict Tuple Class'.split(): + for cpyname in 'Method List Long Dict Tuple Class'.split(): FORWARD_DECLS.append('typedef struct { PyObject_HEAD } ' 'Py%sObject' % (cpyname, )) build_exported_objects() @@ -432,16 +436,16 @@ ('buf', rffi.VOIDP), ('obj', PyObject), ('len', Py_ssize_t), - # ('itemsize', Py_ssize_t), + ('itemsize', Py_ssize_t), - # ('readonly', lltype.Signed), - # ('ndim', lltype.Signed), - # ('format', rffi.CCHARP), - # ('shape', Py_ssize_tP), - # ('strides', Py_ssize_tP), - # ('suboffets', Py_ssize_tP), - # ('smalltable', rffi.CFixedArray(Py_ssize_t, 2)), - # ('internal', rffi.VOIDP) + ('readonly', lltype.Signed), + ('ndim', lltype.Signed), + ('format', rffi.CCHARP), + ('shape', Py_ssize_tP), + 
('strides', Py_ssize_tP), + ('suboffsets', Py_ssize_tP), + #('smalltable', rffi.CFixedArray(Py_ssize_t, 2)), + ('internal', rffi.VOIDP) )) @specialize.memo() diff --git a/pypy/module/cpyext/dictobject.py b/pypy/module/cpyext/dictobject.py --- a/pypy/module/cpyext/dictobject.py +++ b/pypy/module/cpyext/dictobject.py @@ -6,6 +6,7 @@ from pypy.module.cpyext.pyobject import RefcountState from pypy.module.cpyext.pyerrors import PyErr_BadInternalCall from pypy.interpreter.error import OperationError +from pypy.rlib.objectmodel import specialize @cpython_api([], PyObject) def PyDict_New(space): @@ -183,11 +184,34 @@ w_item = space.call_method(w_iter, "next") w_key, w_value = space.fixedview(w_item, 2) state = space.fromcache(RefcountState) - pkey[0] = state.make_borrowed(w_dict, w_key) - pvalue[0] = state.make_borrowed(w_dict, w_value) + if pkey: + pkey[0] = state.make_borrowed(w_dict, w_key) + if pvalue: + pvalue[0] = state.make_borrowed(w_dict, w_value) ppos[0] += 1 except OperationError, e: if not e.match(space, space.w_StopIteration): raise return 0 return 1 + + at specialize.memo() +def make_frozendict(space): + return space.appexec([], '''(): + import collections + class FrozenDict(collections.Mapping): + def __init__(self, *args, **kwargs): + self._d = dict(*args, **kwargs) + def __iter__(self): + return iter(self._d) + def __len__(self): + return len(self._d) + def __getitem__(self, key): + return self._d[key] + return FrozenDict''') + + at cpython_api([PyObject], PyObject) +def PyDictProxy_New(space, w_dict): + w_frozendict = make_frozendict(space) + return space.call_function(w_frozendict, w_dict) + diff --git a/pypy/module/cpyext/eval.py b/pypy/module/cpyext/eval.py --- a/pypy/module/cpyext/eval.py +++ b/pypy/module/cpyext/eval.py @@ -1,16 +1,24 @@ from pypy.interpreter.error import OperationError +from pypy.interpreter.astcompiler import consts from pypy.rpython.lltypesystem import rffi, lltype from pypy.module.cpyext.api import ( cpython_api, CANNOT_FAIL, CONST_STRING, FILEP, fread, feof, Py_ssize_tP, cpython_struct) from pypy.module.cpyext.pyobject import PyObject, borrow_from from pypy.module.cpyext.pyerrors import PyErr_SetFromErrno +from pypy.module.cpyext.funcobject import PyCodeObject from pypy.module.__builtin__ import compiling PyCompilerFlags = cpython_struct( - "PyCompilerFlags", ()) + "PyCompilerFlags", (("cf_flags", rffi.INT),)) PyCompilerFlagsPtr = lltype.Ptr(PyCompilerFlags) +PyCF_MASK = (consts.CO_FUTURE_DIVISION | + consts.CO_FUTURE_ABSOLUTE_IMPORT | + consts.CO_FUTURE_WITH_STATEMENT | + consts.CO_FUTURE_PRINT_FUNCTION | + consts.CO_FUTURE_UNICODE_LITERALS) + @cpython_api([PyObject, PyObject, PyObject], PyObject) def PyEval_CallObjectWithKeywords(space, w_obj, w_arg, w_kwds): return space.call(w_obj, w_arg, w_kwds) @@ -48,6 +56,17 @@ return None return borrow_from(None, caller.w_globals) + at cpython_api([PyCodeObject, PyObject, PyObject], PyObject) +def PyEval_EvalCode(space, w_code, w_globals, w_locals): + """This is a simplified interface to PyEval_EvalCodeEx(), with just + the code object, and the dictionaries of global and local variables. 
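An aside, not part of the patch: the PyDict_Next change above now accepts NULL for either output pointer, which enables the standard keys-only iteration pattern. A minimal C sketch, where count_keys is a hypothetical helper:

    #include <Python.h>

    /* Count the keys of a dict, passing NULL for the value slot. */
    static Py_ssize_t
    count_keys(PyObject *dict)
    {
        PyObject *key;
        Py_ssize_t pos = 0, n = 0;

        while (PyDict_Next(dict, &pos, &key, NULL)) {
            /* 'key' is a borrowed reference; no Py_DECREF needed. */
            n++;
        }
        return n;
    }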
+ The other arguments are set to NULL.""" + if w_globals is None: + w_globals = space.w_None + if w_locals is None: + w_locals = space.w_None + return compiling.eval(space, w_code, w_globals, w_locals) + @cpython_api([PyObject, PyObject], PyObject) def PyObject_CallObject(space, w_obj, w_arg): """ @@ -74,7 +93,7 @@ Py_file_input = 257 Py_eval_input = 258 -def compile_string(space, source, filename, start): +def compile_string(space, source, filename, start, flags=0): w_source = space.wrap(source) start = rffi.cast(lltype.Signed, start) if start == Py_file_input: @@ -86,7 +105,7 @@ else: raise OperationError(space.w_ValueError, space.wrap( "invalid mode parameter for compilation")) - return compiling.compile(space, w_source, filename, mode) + return compiling.compile(space, w_source, filename, mode, flags) def run_string(space, source, filename, start, w_globals, w_locals): w_code = compile_string(space, source, filename, start) @@ -109,6 +128,24 @@ filename = "" return run_string(space, source, filename, start, w_globals, w_locals) + at cpython_api([rffi.CCHARP, rffi.INT_real, PyObject, PyObject, + PyCompilerFlagsPtr], PyObject) +def PyRun_StringFlags(space, source, start, w_globals, w_locals, flagsptr): + """Execute Python source code from str in the context specified by the + dictionaries globals and locals with the compiler flags specified by + flags. The parameter start specifies the start token that should be used to + parse the source code. + + Returns the result of executing the code as a Python object, or NULL if an + exception was raised.""" + source = rffi.charp2str(source) + if flagsptr: + flags = rffi.cast(lltype.Signed, flagsptr.c_cf_flags) + else: + flags = 0 + w_code = compile_string(space, source, "", start, flags) + return compiling.eval(space, w_code, w_globals, w_locals) + @cpython_api([FILEP, CONST_STRING, rffi.INT_real, PyObject, PyObject], PyObject) def PyRun_File(space, fp, filename, start, w_globals, w_locals): """This is a simplified interface to PyRun_FileExFlags() below, leaving @@ -150,7 +187,7 @@ @cpython_api([rffi.CCHARP, rffi.CCHARP, rffi.INT_real, PyCompilerFlagsPtr], PyObject) -def Py_CompileStringFlags(space, source, filename, start, flags): +def Py_CompileStringFlags(space, source, filename, start, flagsptr): """Parse and compile the Python source code in str, returning the resulting code object. 
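For the PyRun_StringFlags implementation above, a hedged usage sketch in C; the snippet, the use of one dict for both globals and locals, and the choice of CO_FUTURE_DIVISION are illustrative assumptions, not taken from the patch:

    #include <Python.h>

    /* Run a short snippet with __future__ division enabled. */
    static PyObject *
    run_with_future_division(PyObject *globals)
    {
        PyCompilerFlags cf;
        cf.cf_flags = CO_FUTURE_DIVISION;
        /* Returns a new reference (Py_None on success for Py_file_input). */
        return PyRun_StringFlags("result = 1 / 2", Py_file_input,
                                 globals, globals, &cf);
    }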
The start token is given by start; this can be used to constrain the code which can be compiled and should @@ -160,7 +197,30 @@ returns NULL if the code cannot be parsed or compiled.""" source = rffi.charp2str(source) filename = rffi.charp2str(filename) - if flags: - raise OperationError(space.w_NotImplementedError, space.wrap( - "cpyext Py_CompileStringFlags does not accept flags")) - return compile_string(space, source, filename, start) + if flagsptr: + flags = rffi.cast(lltype.Signed, flagsptr.c_cf_flags) + else: + flags = 0 + return compile_string(space, source, filename, start, flags) + + at cpython_api([PyCompilerFlagsPtr], rffi.INT_real, error=CANNOT_FAIL) +def PyEval_MergeCompilerFlags(space, cf): + """This function changes the flags of the current evaluation + frame, and returns true on success, false on failure.""" + flags = rffi.cast(lltype.Signed, cf.c_cf_flags) + result = flags != 0 + current_frame = space.getexecutioncontext().gettopframe_nohidden() + if current_frame: + codeflags = current_frame.pycode.co_flags + compilerflags = codeflags & PyCF_MASK + if compilerflags: + result = 1 + flags |= compilerflags + # No future keyword at the moment + # if codeflags & CO_GENERATOR_ALLOWED: + # result = 1 + # flags |= CO_GENERATOR_ALLOWED + cf.c_cf_flags = rffi.cast(rffi.INT, flags) + return result + + diff --git a/pypy/module/cpyext/funcobject.py b/pypy/module/cpyext/funcobject.py --- a/pypy/module/cpyext/funcobject.py +++ b/pypy/module/cpyext/funcobject.py @@ -1,6 +1,6 @@ from pypy.rpython.lltypesystem import rffi, lltype from pypy.module.cpyext.api import ( - PyObjectFields, generic_cpy_call, CONST_STRING, + PyObjectFields, generic_cpy_call, CONST_STRING, CANNOT_FAIL, cpython_api, bootstrap_function, cpython_struct, build_type_checkers) from pypy.module.cpyext.pyobject import ( PyObject, make_ref, from_ref, Py_DecRef, make_typedescr, borrow_from) @@ -48,6 +48,7 @@ PyFunction_Check, PyFunction_CheckExact = build_type_checkers("Function", Function) PyMethod_Check, PyMethod_CheckExact = build_type_checkers("Method", Method) +PyCode_Check, PyCode_CheckExact = build_type_checkers("Code", PyCode) def function_attach(space, py_obj, w_obj): py_func = rffi.cast(PyFunctionObject, py_obj) @@ -167,3 +168,9 @@ freevars=[], cellvars=[])) + at cpython_api([PyObject], rffi.INT_real, error=CANNOT_FAIL) +def PyCode_GetNumFree(space, w_co): + """Return the number of free variables in co.""" + co = space.interp_w(PyCode, w_co) + return len(co.co_freevars) + diff --git a/pypy/module/cpyext/include/Python.h b/pypy/module/cpyext/include/Python.h --- a/pypy/module/cpyext/include/Python.h +++ b/pypy/module/cpyext/include/Python.h @@ -113,6 +113,7 @@ #include "compile.h" #include "frameobject.h" #include "eval.h" +#include "pymath.h" #include "pymem.h" #include "pycobject.h" #include "pycapsule.h" diff --git a/pypy/module/cpyext/include/code.h b/pypy/module/cpyext/include/code.h --- a/pypy/module/cpyext/include/code.h +++ b/pypy/module/cpyext/include/code.h @@ -13,13 +13,19 @@ /* Masks for co_flags above */ /* These values are also in funcobject.py */ -#define CO_OPTIMIZED 0x0001 -#define CO_NEWLOCALS 0x0002 -#define CO_VARARGS 0x0004 -#define CO_VARKEYWORDS 0x0008 +#define CO_OPTIMIZED 0x0001 +#define CO_NEWLOCALS 0x0002 +#define CO_VARARGS 0x0004 +#define CO_VARKEYWORDS 0x0008 #define CO_NESTED 0x0010 #define CO_GENERATOR 0x0020 +#define CO_FUTURE_DIVISION 0x02000 +#define CO_FUTURE_ABSOLUTE_IMPORT 0x04000 +#define CO_FUTURE_WITH_STATEMENT 0x08000 +#define CO_FUTURE_PRINT_FUNCTION 0x10000 +#define 
CO_FUTURE_UNICODE_LITERALS 0x20000 + #ifdef __cplusplus } #endif diff --git a/pypy/module/cpyext/include/intobject.h b/pypy/module/cpyext/include/intobject.h --- a/pypy/module/cpyext/include/intobject.h +++ b/pypy/module/cpyext/include/intobject.h @@ -7,6 +7,11 @@ extern "C" { #endif +typedef struct { + PyObject_HEAD + long ob_ival; +} PyIntObject; + #ifdef __cplusplus } #endif diff --git a/pypy/module/cpyext/include/methodobject.h b/pypy/module/cpyext/include/methodobject.h --- a/pypy/module/cpyext/include/methodobject.h +++ b/pypy/module/cpyext/include/methodobject.h @@ -26,6 +26,7 @@ PyObject_HEAD PyMethodDef *m_ml; /* Description of the C function to call */ PyObject *m_self; /* Passed as 'self' arg to the C func, can be NULL */ + PyObject *m_module; /* The __module__ attribute, can be anything */ } PyCFunctionObject; /* Flag passed to newmethodobject */ diff --git a/pypy/module/cpyext/include/object.h b/pypy/module/cpyext/include/object.h --- a/pypy/module/cpyext/include/object.h +++ b/pypy/module/cpyext/include/object.h @@ -131,18 +131,18 @@ /* This is Py_ssize_t so it can be pointed to by strides in simple case.*/ - /* Py_ssize_t itemsize; */ - /* int readonly; */ - /* int ndim; */ - /* char *format; */ - /* Py_ssize_t *shape; */ - /* Py_ssize_t *strides; */ - /* Py_ssize_t *suboffsets; */ + Py_ssize_t itemsize; + int readonly; + int ndim; + char *format; + Py_ssize_t *shape; + Py_ssize_t *strides; + Py_ssize_t *suboffsets; /* static store for shape and strides of mono-dimensional buffers. */ /* Py_ssize_t smalltable[2]; */ - /* void *internal; */ + void *internal; } Py_buffer; diff --git a/pypy/module/cpyext/include/pymath.h b/pypy/module/cpyext/include/pymath.h new file mode 100644 --- /dev/null +++ b/pypy/module/cpyext/include/pymath.h @@ -0,0 +1,20 @@ +#ifndef Py_PYMATH_H +#define Py_PYMATH_H + +/************************************************************************** +Symbols and macros to supply platform-independent interfaces to mathematical +functions and constants +**************************************************************************/ + +/* HUGE_VAL is supposed to expand to a positive double infinity. Python + * uses Py_HUGE_VAL instead because some platforms are broken in this + * respect. We used to embed code in pyport.h to try to worm around that, + * but different platforms are broken in conflicting ways. If you're on + * a platform where HUGE_VAL is defined incorrectly, fiddle your Python + * config to #define Py_HUGE_VAL to something that works on your platform. 
+ */ +#ifndef Py_HUGE_VAL +#define Py_HUGE_VAL HUGE_VAL +#endif + +#endif /* Py_PYMATH_H */ diff --git a/pypy/module/cpyext/include/pystate.h b/pypy/module/cpyext/include/pystate.h --- a/pypy/module/cpyext/include/pystate.h +++ b/pypy/module/cpyext/include/pystate.h @@ -10,6 +10,7 @@ typedef struct _ts { PyInterpreterState *interp; + PyObject *dict; /* Stores per-thread state */ } PyThreadState; #define Py_BEGIN_ALLOW_THREADS { \ @@ -24,4 +25,6 @@ enum {PyGILState_LOCKED, PyGILState_UNLOCKED} PyGILState_STATE; +#define PyThreadState_GET() PyThreadState_Get() + #endif /* !Py_PYSTATE_H */ diff --git a/pypy/module/cpyext/include/pythonrun.h b/pypy/module/cpyext/include/pythonrun.h --- a/pypy/module/cpyext/include/pythonrun.h +++ b/pypy/module/cpyext/include/pythonrun.h @@ -19,6 +19,14 @@ int cf_flags; /* bitmask of CO_xxx flags relevant to future */ } PyCompilerFlags; +#define PyCF_MASK (CO_FUTURE_DIVISION | CO_FUTURE_ABSOLUTE_IMPORT | \ + CO_FUTURE_WITH_STATEMENT | CO_FUTURE_PRINT_FUNCTION | \ + CO_FUTURE_UNICODE_LITERALS) +#define PyCF_MASK_OBSOLETE (CO_NESTED) +#define PyCF_SOURCE_IS_UTF8 0x0100 +#define PyCF_DONT_IMPLY_DEDENT 0x0200 +#define PyCF_ONLY_AST 0x0400 + #define Py_CompileString(str, filename, start) Py_CompileStringFlags(str, filename, start, NULL) #ifdef __cplusplus diff --git a/pypy/module/cpyext/include/pythread.h b/pypy/module/cpyext/include/pythread.h --- a/pypy/module/cpyext/include/pythread.h +++ b/pypy/module/cpyext/include/pythread.h @@ -1,6 +1,8 @@ #ifndef Py_PYTHREAD_H #define Py_PYTHREAD_H +#define WITH_THREAD + typedef void *PyThread_type_lock; #define WAIT_LOCK 1 #define NOWAIT_LOCK 0 diff --git a/pypy/module/cpyext/include/structmember.h b/pypy/module/cpyext/include/structmember.h --- a/pypy/module/cpyext/include/structmember.h +++ b/pypy/module/cpyext/include/structmember.h @@ -20,7 +20,7 @@ } PyMemberDef; -/* Types */ +/* Types. These constants are also in structmemberdefs.py. */ #define T_SHORT 0 #define T_INT 1 #define T_LONG 2 @@ -42,9 +42,12 @@ #define T_LONGLONG 17 #define T_ULONGLONG 18 -/* Flags */ +/* Flags. These constants are also in structmemberdefs.py. 
*/ #define READONLY 1 #define RO READONLY /* Shorthand */ +#define READ_RESTRICTED 2 +#define PY_WRITE_RESTRICTED 4 +#define RESTRICTED (READ_RESTRICTED | PY_WRITE_RESTRICTED) #ifdef __cplusplus diff --git a/pypy/module/cpyext/intobject.py b/pypy/module/cpyext/intobject.py --- a/pypy/module/cpyext/intobject.py +++ b/pypy/module/cpyext/intobject.py @@ -2,11 +2,37 @@ from pypy.rpython.lltypesystem import rffi, lltype from pypy.interpreter.error import OperationError from pypy.module.cpyext.api import ( - cpython_api, build_type_checkers, PyObject, - CONST_STRING, CANNOT_FAIL, Py_ssize_t) + cpython_api, cpython_struct, build_type_checkers, bootstrap_function, + PyObject, PyObjectFields, CONST_STRING, CANNOT_FAIL, Py_ssize_t) +from pypy.module.cpyext.pyobject import ( + make_typedescr, track_reference, RefcountState, from_ref) from pypy.rlib.rarithmetic import r_uint, intmask, LONG_TEST +from pypy.objspace.std.intobject import W_IntObject import sys +PyIntObjectStruct = lltype.ForwardReference() +PyIntObject = lltype.Ptr(PyIntObjectStruct) +PyIntObjectFields = PyObjectFields + \ + (("ob_ival", rffi.LONG),) +cpython_struct("PyIntObject", PyIntObjectFields, PyIntObjectStruct) + + at bootstrap_function +def init_intobject(space): + "Type description of PyIntObject" + make_typedescr(space.w_int.instancetypedef, + basestruct=PyIntObject.TO, + realize=int_realize) + +def int_realize(space, obj): + intval = rffi.cast(lltype.Signed, rffi.cast(PyIntObject, obj).c_ob_ival) + w_type = from_ref(space, rffi.cast(PyObject, obj.c_ob_type)) + w_obj = space.allocate_instance(W_IntObject, w_type) + w_obj.__init__(intval) + track_reference(space, obj, w_obj) + state = space.fromcache(RefcountState) + state.set_lifeline(w_obj, obj) + return w_obj + PyInt_Check, PyInt_CheckExact = build_type_checkers("Int") @cpython_api([], lltype.Signed, error=CANNOT_FAIL) diff --git a/pypy/module/cpyext/methodobject.py b/pypy/module/cpyext/methodobject.py --- a/pypy/module/cpyext/methodobject.py +++ b/pypy/module/cpyext/methodobject.py @@ -32,6 +32,7 @@ PyObjectFields + ( ('m_ml', lltype.Ptr(PyMethodDef)), ('m_self', PyObject), + ('m_module', PyObject), )) PyCFunctionObject = lltype.Ptr(PyCFunctionObjectStruct) @@ -47,11 +48,13 @@ assert isinstance(w_obj, W_PyCFunctionObject) py_func.c_m_ml = w_obj.ml py_func.c_m_self = make_ref(space, w_obj.w_self) + py_func.c_m_module = make_ref(space, w_obj.w_module) @cpython_api([PyObject], lltype.Void, external=False) def cfunction_dealloc(space, py_obj): py_func = rffi.cast(PyCFunctionObject, py_obj) Py_DecRef(space, py_func.c_m_self) + Py_DecRef(space, py_func.c_m_module) from pypy.module.cpyext.object import PyObject_dealloc PyObject_dealloc(space, py_obj) diff --git a/pypy/module/cpyext/object.py b/pypy/module/cpyext/object.py --- a/pypy/module/cpyext/object.py +++ b/pypy/module/cpyext/object.py @@ -193,7 +193,7 @@ if not obj: PyErr_NoMemory(space) obj.c_ob_type = type - _Py_NewReference(space, obj) + obj.c_ob_refcnt = 1 return obj @cpython_api([PyVarObject, PyTypeObjectPtr, Py_ssize_t], PyObject) @@ -381,6 +381,15 @@ This is the equivalent of the Python expression hash(o).""" return space.int_w(space.hash(w_obj)) + at cpython_api([PyObject], PyObject) +def PyObject_Dir(space, w_o): + """This is equivalent to the Python expression dir(o), returning a (possibly + empty) list of strings appropriate for the object argument, or NULL if there + was an error. 
If the argument is NULL, this is like the Python dir(), + returning the names of the current locals; in this case, if no execution frame + is active then NULL is returned but PyErr_Occurred() will return false.""" + return space.call_function(space.builtin.get('dir'), w_o) + @cpython_api([PyObject, rffi.CCHARPP, Py_ssize_tP], rffi.INT_real, error=-1) def PyObject_AsCharBuffer(space, obj, bufferp, sizep): """Returns a pointer to a read-only memory location usable as @@ -430,6 +439,8 @@ return 0 +PyBUF_WRITABLE = 0x0001 # Copied from object.h + @cpython_api([lltype.Ptr(Py_buffer), PyObject, rffi.VOIDP, Py_ssize_t, lltype.Signed, lltype.Signed], rffi.INT, error=CANNOT_FAIL) def PyBuffer_FillInfo(space, view, obj, buf, length, readonly, flags): @@ -445,6 +456,18 @@ view.c_len = length view.c_obj = obj Py_IncRef(space, obj) + view.c_itemsize = 1 + if flags & PyBUF_WRITABLE: + rffi.setintfield(view, 'c_readonly', 0) + else: + rffi.setintfield(view, 'c_readonly', 1) + rffi.setintfield(view, 'c_ndim', 0) + view.c_format = lltype.nullptr(rffi.CCHARP.TO) + view.c_shape = lltype.nullptr(Py_ssize_tP.TO) + view.c_strides = lltype.nullptr(Py_ssize_tP.TO) + view.c_suboffsets = lltype.nullptr(Py_ssize_tP.TO) + view.c_internal = lltype.nullptr(rffi.VOIDP.TO) + return 0 diff --git a/pypy/module/cpyext/pyfile.py b/pypy/module/cpyext/pyfile.py --- a/pypy/module/cpyext/pyfile.py +++ b/pypy/module/cpyext/pyfile.py @@ -1,7 +1,8 @@ from pypy.rpython.lltypesystem import rffi, lltype from pypy.module.cpyext.api import ( - cpython_api, CONST_STRING, FILEP, build_type_checkers) + cpython_api, CANNOT_FAIL, CONST_STRING, FILEP, build_type_checkers) from pypy.module.cpyext.pyobject import PyObject, borrow_from +from pypy.module.cpyext.object import Py_PRINT_RAW from pypy.interpreter.error import OperationError from pypy.module._file.interp_file import W_File @@ -61,11 +62,49 @@ def PyFile_WriteString(space, s, w_p): """Write string s to file object p. Return 0 on success or -1 on failure; the appropriate exception will be set.""" - w_s = space.wrap(rffi.charp2str(s)) - space.call_method(w_p, "write", w_s) + w_str = space.wrap(rffi.charp2str(s)) + space.call_method(w_p, "write", w_str) + return 0 + + at cpython_api([PyObject, PyObject, rffi.INT_real], rffi.INT_real, error=-1) +def PyFile_WriteObject(space, w_obj, w_p, flags): + """ + Write object obj to file object p. The only supported flag for flags is + Py_PRINT_RAW; if given, the str() of the object is written + instead of the repr(). Return 0 on success or -1 on failure; the + appropriate exception will be set.""" + if rffi.cast(lltype.Signed, flags) & Py_PRINT_RAW: + w_str = space.str(w_obj) + else: + w_str = space.repr(w_obj) + space.call_method(w_p, "write", w_str) return 0 @cpython_api([PyObject], PyObject) def PyFile_Name(space, w_p): """Return the name of the file specified by p as a string object.""" - return borrow_from(w_p, space.getattr(w_p, space.wrap("name"))) \ No newline at end of file + return borrow_from(w_p, space.getattr(w_p, space.wrap("name"))) + + at cpython_api([PyObject, rffi.INT_real], rffi.INT_real, error=CANNOT_FAIL) +def PyFile_SoftSpace(space, w_p, newflag): + """ + This function exists for internal use by the interpreter. Set the + softspace attribute of p to newflag and return the previous value. + p does not have to be a file object for this function to work + properly; any object is supported (thought its only interesting if + the softspace attribute can be set). 
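Given the PyFile_WriteObject and PyFile_WriteString implementations above, a small hedged C sketch of printing an object both ways; print_twice and the reliance on sys.stdout are illustrative assumptions, not part of the patch:

    #include <Python.h>

    /* Write repr(obj), then str(obj), then a newline to sys.stdout. */
    static int
    print_twice(PyObject *obj)
    {
        PyObject *out = PySys_GetObject("stdout");   /* borrowed reference */
        if (out == NULL)
            return -1;                               /* no sys.stdout */
        if (PyFile_WriteObject(obj, out, 0) < 0)             /* repr() form */
            return -1;
        if (PyFile_WriteObject(obj, out, Py_PRINT_RAW) < 0)  /* str() form */
            return -1;
        return PyFile_WriteString("\n", out);
    }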
This function clears any + errors, and will return 0 as the previous value if the attribute + either does not exist or if there were errors in retrieving it. + There is no way to detect errors from this function, but doing so + should not be needed.""" + try: + if rffi.cast(lltype.Signed, newflag): + w_newflag = space.w_True + else: + w_newflag = space.w_False + oldflag = space.int_w(space.getattr(w_p, space.wrap("softspace"))) + space.setattr(w_p, space.wrap("softspace"), w_newflag) + return oldflag + except OperationError, e: + return 0 + diff --git a/pypy/module/cpyext/pyobject.py b/pypy/module/cpyext/pyobject.py --- a/pypy/module/cpyext/pyobject.py +++ b/pypy/module/cpyext/pyobject.py @@ -17,6 +17,7 @@ class BaseCpyTypedescr(object): basestruct = PyObject.TO + W_BaseObject = W_ObjectObject def get_dealloc(self, space): from pypy.module.cpyext.typeobject import subtype_dealloc @@ -51,10 +52,14 @@ def attach(self, space, pyobj, w_obj): pass - def realize(self, space, ref): - # For most types, a reference cannot exist without - # a real interpreter object - raise InvalidPointerException(str(ref)) + def realize(self, space, obj): + w_type = from_ref(space, rffi.cast(PyObject, obj.c_ob_type)) + w_obj = space.allocate_instance(self.W_BaseObject, w_type) + track_reference(space, obj, w_obj) + if w_type is not space.gettypefor(self.W_BaseObject): + state = space.fromcache(RefcountState) + state.set_lifeline(w_obj, obj) + return w_obj typedescr_cache = {} @@ -369,13 +374,7 @@ obj.c_ob_refcnt = 1 w_type = from_ref(space, rffi.cast(PyObject, obj.c_ob_type)) assert isinstance(w_type, W_TypeObject) - if w_type.is_cpytype(): - w_obj = space.allocate_instance(W_ObjectObject, w_type) - track_reference(space, obj, w_obj) - state = space.fromcache(RefcountState) - state.set_lifeline(w_obj, obj) - else: - assert False, "Please add more cases in _Py_NewReference()" + get_typedescr(w_type.instancetypedef).realize(space, obj) def _Py_Dealloc(space, obj): from pypy.module.cpyext.api import generic_cpy_call_dont_decref diff --git a/pypy/module/cpyext/pystate.py b/pypy/module/cpyext/pystate.py --- a/pypy/module/cpyext/pystate.py +++ b/pypy/module/cpyext/pystate.py @@ -1,12 +1,19 @@ from pypy.module.cpyext.api import ( cpython_api, generic_cpy_call, CANNOT_FAIL, CConfig, cpython_struct) +from pypy.module.cpyext.pyobject import PyObject, Py_DecRef, make_ref from pypy.rpython.lltypesystem import rffi, lltype PyInterpreterStateStruct = lltype.ForwardReference() PyInterpreterState = lltype.Ptr(PyInterpreterStateStruct) cpython_struct( - "PyInterpreterState", [('next', PyInterpreterState)], PyInterpreterStateStruct) -PyThreadState = lltype.Ptr(cpython_struct("PyThreadState", [('interp', PyInterpreterState)])) + "PyInterpreterState", + [('next', PyInterpreterState)], + PyInterpreterStateStruct) +PyThreadState = lltype.Ptr(cpython_struct( + "PyThreadState", + [('interp', PyInterpreterState), + ('dict', PyObject), + ])) @cpython_api([], PyThreadState, error=CANNOT_FAIL) def PyEval_SaveThread(space): @@ -38,41 +45,49 @@ return 1 # XXX: might be generally useful -def encapsulator(T, flavor='raw'): +def encapsulator(T, flavor='raw', dealloc=None): class MemoryCapsule(object): - def __init__(self, alloc=True): - if alloc: + def __init__(self, space): + self.space = space + if space is not None: self.memory = lltype.malloc(T, flavor=flavor) else: self.memory = lltype.nullptr(T) def __del__(self): if self.memory: + if dealloc and self.space: + dealloc(self.memory, self.space) lltype.free(self.memory, flavor=flavor) return 
MemoryCapsule -ThreadStateCapsule = encapsulator(PyThreadState.TO) +def ThreadState_dealloc(ts, space): + assert space is not None + Py_DecRef(space, ts.c_dict) +ThreadStateCapsule = encapsulator(PyThreadState.TO, + dealloc=ThreadState_dealloc) from pypy.interpreter.executioncontext import ExecutionContext -ExecutionContext.cpyext_threadstate = ThreadStateCapsule(alloc=False) +ExecutionContext.cpyext_threadstate = ThreadStateCapsule(None) class InterpreterState(object): def __init__(self, space): self.interpreter_state = lltype.malloc( PyInterpreterState.TO, flavor='raw', zero=True, immortal=True) - def new_thread_state(self): - capsule = ThreadStateCapsule() + def new_thread_state(self, space): + capsule = ThreadStateCapsule(space) ts = capsule.memory ts.c_interp = self.interpreter_state + ts.c_dict = make_ref(space, space.newdict()) return capsule def get_thread_state(self, space): ec = space.getexecutioncontext() - return self._get_thread_state(ec).memory + return self._get_thread_state(space, ec).memory - def _get_thread_state(self, ec): + def _get_thread_state(self, space, ec): if ec.cpyext_threadstate.memory == lltype.nullptr(PyThreadState.TO): - ec.cpyext_threadstate = self.new_thread_state() + ec.cpyext_threadstate = self.new_thread_state(space) return ec.cpyext_threadstate @@ -81,6 +96,11 @@ state = space.fromcache(InterpreterState) return state.get_thread_state(space) + at cpython_api([], PyObject, error=CANNOT_FAIL) +def PyThreadState_GetDict(space): + state = space.fromcache(InterpreterState) + return state.get_thread_state(space).c_dict + @cpython_api([PyThreadState], PyThreadState, error=CANNOT_FAIL) def PyThreadState_Swap(space, tstate): """Swap the current thread state with the thread state given by the argument diff --git a/pypy/module/cpyext/pythonrun.py b/pypy/module/cpyext/pythonrun.py --- a/pypy/module/cpyext/pythonrun.py +++ b/pypy/module/cpyext/pythonrun.py @@ -14,6 +14,20 @@ value.""" return space.fromcache(State).get_programname() + at cpython_api([], rffi.CCHARP) +def Py_GetVersion(space): + """Return the version of this Python interpreter. This is a + string that looks something like + + "1.5 (\#67, Dec 31 1997, 22:34:28) [GCC 2.7.2.2]" + + The first word (up to the first space character) is the current + Python version; the first three characters are the major and minor + version separated by a period. The returned string points into + static storage; the caller should not modify its value. The value + is available to Python code as sys.version.""" + return space.fromcache(State).get_version() + @cpython_api([lltype.Ptr(lltype.FuncType([], lltype.Void))], rffi.INT_real, error=-1) def Py_AtExit(space, func_ptr): """Register a cleanup function to be called by Py_Finalize(). The cleanup diff --git a/pypy/module/cpyext/setobject.py b/pypy/module/cpyext/setobject.py --- a/pypy/module/cpyext/setobject.py +++ b/pypy/module/cpyext/setobject.py @@ -54,6 +54,20 @@ return 0 + at cpython_api([PyObject], PyObject) +def PySet_Pop(space, w_set): + """Return a new reference to an arbitrary object in the set, and removes the + object from the set. Return NULL on failure. Raise KeyError if the + set is empty. 
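The PyThreadState_GetDict implementation above exposes the new per-thread dict; a hedged C sketch of the usual caching pattern on top of it follows. The key name and the remember_per_thread helper are assumptions for illustration only:

    #include <Python.h>

    /* Stash a value in the calling thread's state dictionary. */
    static int
    remember_per_thread(PyObject *value)
    {
        PyObject *d = PyThreadState_GetDict();  /* borrowed; NULL without thread state */
        if (d == NULL)
            return -1;
        return PyDict_SetItemString(d, "my_extension_cache", value);
    }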
Raise a SystemError if set is an not an instance of + set or its subtype.""" + return space.call_method(w_set, "pop") + + at cpython_api([PyObject], rffi.INT_real, error=-1) +def PySet_Clear(space, w_set): + """Empty an existing set of all elements.""" + space.call_method(w_set, 'clear') + return 0 + @cpython_api([PyObject], Py_ssize_t, error=CANNOT_FAIL) def PySet_GET_SIZE(space, w_s): """Macro form of PySet_Size() without error checking.""" diff --git a/pypy/module/cpyext/slotdefs.py b/pypy/module/cpyext/slotdefs.py --- a/pypy/module/cpyext/slotdefs.py +++ b/pypy/module/cpyext/slotdefs.py @@ -185,6 +185,15 @@ space.fromcache(State).check_and_raise_exception(always=True) return space.wrap(res) +def wrap_delitem(space, w_self, w_args, func): + func_target = rffi.cast(objobjargproc, func) + check_num_args(space, w_args, 1) + w_key, = space.fixedview(w_args) + res = generic_cpy_call(space, func_target, w_self, w_key, None) + if rffi.cast(lltype.Signed, res) == -1: + space.fromcache(State).check_and_raise_exception(always=True) + return space.w_None + def wrap_ssizessizeargfunc(space, w_self, w_args, func): func_target = rffi.cast(ssizessizeargfunc, func) check_num_args(space, w_args, 2) @@ -291,6 +300,14 @@ def slot_nb_int(space, w_self): return space.int(w_self) + at cpython_api([PyObject], PyObject, external=False) +def slot_tp_iter(space, w_self): + return space.iter(w_self) + + at cpython_api([PyObject], PyObject, external=False) +def slot_tp_iternext(space, w_self): + return space.next(w_self) + from pypy.rlib.nonconst import NonConstant SLOTS = {} @@ -632,6 +649,19 @@ TPSLOT("__buffer__", "tp_as_buffer.c_bf_getreadbuffer", None, "wrap_getreadbuffer", ""), ) +# partial sort to solve some slot conflicts: +# Number slots before Mapping slots before Sequence slots. +# These are the only conflicts between __name__ methods +def slotdef_sort_key(slotdef): + if slotdef.slot_name.startswith('tp_as_number'): + return 1 + if slotdef.slot_name.startswith('tp_as_mapping'): + return 2 + if slotdef.slot_name.startswith('tp_as_sequence'): + return 3 + return 0 +slotdefs = sorted(slotdefs, key=slotdef_sort_key) + slotdefs_for_tp_slots = unrolling_iterable( [(x.method_name, x.slot_name, x.slot_names, x.slot_func) for x in slotdefs]) diff --git a/pypy/module/cpyext/state.py b/pypy/module/cpyext/state.py --- a/pypy/module/cpyext/state.py +++ b/pypy/module/cpyext/state.py @@ -10,6 +10,7 @@ self.space = space self.reset() self.programname = lltype.nullptr(rffi.CCHARP.TO) + self.version = lltype.nullptr(rffi.CCHARP.TO) def reset(self): from pypy.module.cpyext.modsupport import PyMethodDef @@ -102,6 +103,15 @@ lltype.render_immortal(self.programname) return self.programname + def get_version(self): + if not self.version: + space = self.space + w_version = space.sys.get('version') + version = space.str_w(w_version) + self.version = rffi.str2charp(version) + lltype.render_immortal(self.version) + return self.version + def find_extension(self, name, path): from pypy.module.cpyext.modsupport import PyImport_AddModule from pypy.interpreter.module import Module diff --git a/pypy/module/cpyext/stringobject.py b/pypy/module/cpyext/stringobject.py --- a/pypy/module/cpyext/stringobject.py +++ b/pypy/module/cpyext/stringobject.py @@ -250,6 +250,26 @@ s = rffi.charp2str(string) return space.new_interned_str(s) + at cpython_api([PyObjectP], lltype.Void) +def PyString_InternInPlace(space, string): + """Intern the argument *string in place. 
The argument must be the + address of a pointer variable pointing to a Python string object. + If there is an existing interned string that is the same as + *string, it sets *string to it (decrementing the reference count + of the old string object and incrementing the reference count of + the interned string object), otherwise it leaves *string alone and + interns it (incrementing its reference count). (Clarification: + even though there is a lot of talk about reference counts, think + of this function as reference-count-neutral; you own the object + after the call if and only if you owned it before the call.) + + This function is not available in 3.x and does not have a PyBytes + alias.""" + w_str = from_ref(space, string[0]) + w_str = space.new_interned_w_str(w_str) + Py_DecRef(space, string[0]) + string[0] = make_ref(space, w_str) + @cpython_api([PyObject, rffi.CCHARP, rffi.CCHARP], PyObject) def PyString_AsEncodedObject(space, w_str, encoding, errors): """Encode a string object using the codec registered for encoding and return diff --git a/pypy/module/cpyext/structmemberdefs.py b/pypy/module/cpyext/structmemberdefs.py --- a/pypy/module/cpyext/structmemberdefs.py +++ b/pypy/module/cpyext/structmemberdefs.py @@ -1,3 +1,5 @@ +# These constants are also in include/structmember.h + T_SHORT = 0 T_INT = 1 T_LONG = 2 @@ -18,3 +20,6 @@ T_ULONGLONG = 18 READONLY = RO = 1 +READ_RESTRICTED = 2 +WRITE_RESTRICTED = 4 +RESTRICTED = READ_RESTRICTED | WRITE_RESTRICTED diff --git a/pypy/module/cpyext/stubs.py b/pypy/module/cpyext/stubs.py --- a/pypy/module/cpyext/stubs.py +++ b/pypy/module/cpyext/stubs.py @@ -1,5 +1,5 @@ from pypy.module.cpyext.api import ( - cpython_api, PyObject, PyObjectP, CANNOT_FAIL, Py_buffer + cpython_api, PyObject, PyObjectP, CANNOT_FAIL ) from pypy.module.cpyext.complexobject import Py_complex_ptr as Py_complex from pypy.rpython.lltypesystem import rffi, lltype @@ -10,6 +10,7 @@ PyMethodDef = rffi.VOIDP PyGetSetDef = rffi.VOIDP PyMemberDef = rffi.VOIDP +Py_buffer = rffi.VOIDP va_list = rffi.VOIDP PyDateTime_Date = rffi.VOIDP PyDateTime_DateTime = rffi.VOIDP @@ -32,10 +33,6 @@ def _PyObject_Del(space, op): raise NotImplementedError - at cpython_api([PyObject], rffi.INT_real, error=CANNOT_FAIL) -def PyObject_CheckBuffer(space, obj): - raise NotImplementedError - @cpython_api([rffi.CCHARP], Py_ssize_t, error=CANNOT_FAIL) def PyBuffer_SizeFromFormat(space, format): """Return the implied ~Py_buffer.itemsize from the struct-stype @@ -185,16 +182,6 @@ used as the positional and keyword parameters to the object's constructor.""" raise NotImplementedError - at cpython_api([PyObject], rffi.INT_real, error=CANNOT_FAIL) -def PyCode_Check(space, co): - """Return true if co is a code object""" - raise NotImplementedError - - at cpython_api([PyObject], rffi.INT_real, error=CANNOT_FAIL) -def PyCode_GetNumFree(space, co): - """Return the number of free variables in co.""" - raise NotImplementedError - @cpython_api([PyObject], rffi.INT_real, error=-1) def PyCodec_Register(space, search_function): """Register a new codec search function. @@ -684,28 +671,6 @@ """ raise NotImplementedError - at cpython_api([PyObject, rffi.INT_real], rffi.INT_real, error=CANNOT_FAIL) -def PyFile_SoftSpace(space, p, newflag): - """ - This function exists for internal use by the interpreter. Set the - softspace attribute of p to newflag and return the previous value. 
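To complement the PyString_InternInPlace implementation above, a hedged C sketch of the usual interning pattern, which is reference-count neutral as the docstring explains; make_interned is a hypothetical helper, not part of the patch:

    #include <Python.h>

    /* Build a string and swap it for its interned version. */
    static PyObject *
    make_interned(const char *text)
    {
        PyObject *s = PyString_FromString(text);
        if (s == NULL)
            return NULL;
        PyString_InternInPlace(&s);   /* may replace s with the interned object */
        return s;                     /* caller owns one reference either way */
    }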
- p does not have to be a file object for this function to work properly; any - object is supported (thought its only interesting if the softspace - attribute can be set). This function clears any errors, and will return 0 - as the previous value if the attribute either does not exist or if there were - errors in retrieving it. There is no way to detect errors from this function, - but doing so should not be needed.""" - raise NotImplementedError - - at cpython_api([PyObject, PyObject, rffi.INT_real], rffi.INT_real, error=-1) -def PyFile_WriteObject(space, obj, p, flags): - """ - Write object obj to file object p. The only supported flag for flags is - Py_PRINT_RAW; if given, the str() of the object is written - instead of the repr(). Return 0 on success or -1 on failure; the - appropriate exception will be set.""" - raise NotImplementedError - @cpython_api([], PyObject) def PyFloat_GetInfo(space): """Return a structseq instance which contains information about the @@ -1097,19 +1062,6 @@ raise NotImplementedError @cpython_api([], rffi.CCHARP) -def Py_GetVersion(space): - """Return the version of this Python interpreter. This is a string that looks - something like - - "1.5 (\#67, Dec 31 1997, 22:34:28) [GCC 2.7.2.2]" - - The first word (up to the first space character) is the current Python version; - the first three characters are the major and minor version separated by a - period. The returned string points into static storage; the caller should not - modify its value. The value is available to Python code as sys.version.""" - raise NotImplementedError - - at cpython_api([], rffi.CCHARP) def Py_GetPlatform(space): """Return the platform identifier for the current platform. On Unix, this is formed from the"official" name of the operating system, converted to lower @@ -1331,28 +1283,6 @@ that haven't been explicitly destroyed at that point.""" raise NotImplementedError - at cpython_api([rffi.VOIDP], lltype.Void) -def Py_AddPendingCall(space, func): - """Post a notification to the Python main thread. If successful, func will - be called with the argument arg at the earliest convenience. func will be - called having the global interpreter lock held and can thus use the full - Python API and can take any action such as setting object attributes to - signal IO completion. It must return 0 on success, or -1 signalling an - exception. The notification function won't be interrupted to perform another - asynchronous notification recursively, but it can still be interrupted to - switch threads if the global interpreter lock is released, for example, if it - calls back into Python code. - - This function returns 0 on success in which case the notification has been - scheduled. Otherwise, for example if the notification buffer is full, it - returns -1 without setting any exception. - - This function can be called on any thread, be it a Python thread or some - other system thread. If it is a Python thread, it doesn't matter if it holds - the global interpreter lock or not. - """ - raise NotImplementedError - @cpython_api([Py_tracefunc, PyObject], lltype.Void) def PyEval_SetProfile(space, func, obj): """Set the profiler function to func. The obj parameter is passed to the @@ -1685,15 +1615,6 @@ """ raise NotImplementedError - at cpython_api([PyObject], PyObject) -def PyObject_Dir(space, o): - """This is equivalent to the Python expression dir(o), returning a (possibly - empty) list of strings appropriate for the object argument, or NULL if there - was an error. 
If the argument is NULL, this is like the Python dir(), - returning the names of the current locals; in this case, if no execution frame - is active then NULL is returned but PyErr_Occurred() will return false.""" - raise NotImplementedError - @cpython_api([], PyFrameObject) def PyEval_GetFrame(space): """Return the current thread state's frame, which is NULL if no frame is @@ -1802,34 +1723,6 @@ building-up new frozensets with PySet_Add().""" raise NotImplementedError - at cpython_api([PyObject], PyObject) -def PySet_Pop(space, set): - """Return a new reference to an arbitrary object in the set, and removes the - object from the set. Return NULL on failure. Raise KeyError if the - set is empty. Raise a SystemError if set is an not an instance of - set or its subtype.""" - raise NotImplementedError - - at cpython_api([PyObject], rffi.INT_real, error=-1) -def PySet_Clear(space, set): - """Empty an existing set of all elements.""" - raise NotImplementedError - - at cpython_api([PyObjectP], lltype.Void) -def PyString_InternInPlace(space, string): - """Intern the argument *string in place. The argument must be the address of a - pointer variable pointing to a Python string object. If there is an existing - interned string that is the same as *string, it sets *string to it - (decrementing the reference count of the old string object and incrementing the - reference count of the interned string object), otherwise it leaves *string - alone and interns it (incrementing its reference count). (Clarification: even - though there is a lot of talk about reference counts, think of this function as - reference-count-neutral; you own the object after the call if and only if you - owned it before the call.) - - This function is not available in 3.x and does not have a PyBytes alias.""" - raise NotImplementedError - @cpython_api([rffi.CCHARP, Py_ssize_t, rffi.CCHARP, rffi.CCHARP], PyObject) def PyString_Decode(space, s, size, encoding, errors): """Create an object by decoding size bytes of the encoded buffer s using the @@ -1950,26 +1843,6 @@ """ raise NotImplementedError - at cpython_api([Py_UNICODE], rffi.INT_real, error=CANNOT_FAIL) -def Py_UNICODE_ISTITLE(space, ch): - """Return 1 or 0 depending on whether ch is a titlecase character.""" - raise NotImplementedError - - at cpython_api([Py_UNICODE], rffi.INT_real, error=CANNOT_FAIL) -def Py_UNICODE_ISDIGIT(space, ch): - """Return 1 or 0 depending on whether ch is a digit character.""" - raise NotImplementedError - - at cpython_api([Py_UNICODE], rffi.INT_real, error=CANNOT_FAIL) -def Py_UNICODE_ISNUMERIC(space, ch): - """Return 1 or 0 depending on whether ch is a numeric character.""" - raise NotImplementedError - - at cpython_api([Py_UNICODE], rffi.INT_real, error=CANNOT_FAIL) -def Py_UNICODE_ISALPHA(space, ch): - """Return 1 or 0 depending on whether ch is an alphabetic character.""" - raise NotImplementedError - @cpython_api([rffi.CCHARP], PyObject) def PyUnicode_FromFormat(space, format): """Take a C printf()-style format string and a variable number of @@ -2414,17 +2287,6 @@ use the default error handling.""" raise NotImplementedError - at cpython_api([PyObject, PyObject, Py_ssize_t, Py_ssize_t, rffi.INT_real], rffi.INT_real, error=-1) -def PyUnicode_Tailmatch(space, str, substr, start, end, direction): - """Return 1 if substr matches str*[*start:end] at the given tail end - (direction == -1 means to do a prefix match, direction == 1 a suffix match), - 0 otherwise. Return -1 if an error occurred. - - This function used an int type for start and end. 
This - might require changes in your code for properly supporting 64-bit - systems.""" - raise NotImplementedError - @cpython_api([PyObject, PyObject, Py_ssize_t, Py_ssize_t, rffi.INT_real], Py_ssize_t, error=-2) def PyUnicode_Find(space, str, substr, start, end, direction): """Return the first position of substr in str*[*start:end] using the given @@ -2448,16 +2310,6 @@ properly supporting 64-bit systems.""" raise NotImplementedError - at cpython_api([PyObject, PyObject, PyObject, Py_ssize_t], PyObject) -def PyUnicode_Replace(space, str, substr, replstr, maxcount): - """Replace at most maxcount occurrences of substr in str with replstr and - return the resulting Unicode object. maxcount == -1 means replace all - occurrences. - - This function used an int type for maxcount. This might - require changes in your code for properly supporting 64-bit systems.""" - raise NotImplementedError - @cpython_api([PyObject, PyObject, rffi.INT_real], PyObject) def PyUnicode_RichCompare(space, left, right, op): """Rich compare two unicode strings and return one of the following: @@ -2631,17 +2483,6 @@ source code is read from fp instead of an in-memory string.""" raise NotImplementedError - at cpython_api([rffi.CCHARP, rffi.INT_real, PyObject, PyObject, PyCompilerFlags], PyObject) -def PyRun_StringFlags(space, str, start, globals, locals, flags): - """Execute Python source code from str in the context specified by the - dictionaries globals and locals with the compiler flags specified by - flags. The parameter start specifies the start token that should be used to - parse the source code. - - Returns the result of executing the code as a Python object, or NULL if an - exception was raised.""" - raise NotImplementedError - @cpython_api([FILE, rffi.CCHARP, rffi.INT_real, PyObject, PyObject, rffi.INT_real], PyObject) def PyRun_FileEx(space, fp, filename, start, globals, locals, closeit): """This is a simplified interface to PyRun_FileExFlags() below, leaving @@ -2662,13 +2503,6 @@ returns.""" raise NotImplementedError - at cpython_api([PyCodeObject, PyObject, PyObject], PyObject) -def PyEval_EvalCode(space, co, globals, locals): - """This is a simplified interface to PyEval_EvalCodeEx(), with just - the code object, and the dictionaries of global and local variables. - The other arguments are set to NULL.""" - raise NotImplementedError - @cpython_api([PyCodeObject, PyObject, PyObject, PyObjectP, rffi.INT_real, PyObjectP, rffi.INT_real, PyObjectP, rffi.INT_real, PyObject], PyObject) def PyEval_EvalCodeEx(space, co, globals, locals, args, argcount, kws, kwcount, defs, defcount, closure): """Evaluate a precompiled code object, given a particular environment for its @@ -2693,12 +2527,6 @@ throw() methods of generator objects.""" raise NotImplementedError - at cpython_api([PyCompilerFlags], rffi.INT_real, error=CANNOT_FAIL) -def PyEval_MergeCompilerFlags(space, cf): - """This function changes the flags of the current evaluation frame, and returns - true on success, false on failure.""" - raise NotImplementedError - @cpython_api([PyObject], rffi.INT_real, error=CANNOT_FAIL) def PyWeakref_Check(space, ob): """Return true if ob is either a reference or proxy object. 
diff --git a/pypy/module/cpyext/stubsactive.py b/pypy/module/cpyext/stubsactive.py --- a/pypy/module/cpyext/stubsactive.py +++ b/pypy/module/cpyext/stubsactive.py @@ -38,3 +38,31 @@ def Py_MakePendingCalls(space): return 0 +pending_call = lltype.Ptr(lltype.FuncType([rffi.VOIDP], rffi.INT_real)) + at cpython_api([pending_call, rffi.VOIDP], rffi.INT_real, error=-1) +def Py_AddPendingCall(space, func, arg): + """Post a notification to the Python main thread. If successful, + func will be called with the argument arg at the earliest + convenience. func will be called having the global interpreter + lock held and can thus use the full Python API and can take any + action such as setting object attributes to signal IO completion. + It must return 0 on success, or -1 signalling an exception. The + notification function won't be interrupted to perform another + asynchronous notification recursively, but it can still be + interrupted to switch threads if the global interpreter lock is + released, for example, if it calls back into Python code. + + This function returns 0 on success in which case the notification + has been scheduled. Otherwise, for example if the notification + buffer is full, it returns -1 without setting any exception. + + This function can be called on any thread, be it a Python thread + or some other system thread. If it is a Python thread, it doesn't + matter if it holds the global interpreter lock or not. + """ + return -1 + +thread_func = lltype.Ptr(lltype.FuncType([rffi.VOIDP], lltype.Void)) + at cpython_api([thread_func, rffi.VOIDP], rffi.INT_real, error=-1) +def PyThread_start_new_thread(space, func, arg): + return -1 diff --git a/pypy/module/cpyext/test/test_arraymodule.py b/pypy/module/cpyext/test/test_arraymodule.py --- a/pypy/module/cpyext/test/test_arraymodule.py +++ b/pypy/module/cpyext/test/test_arraymodule.py @@ -43,6 +43,15 @@ assert arr[:2].tolist() == [1,2] assert arr[1:3].tolist() == [2,3] + def test_slice_object(self): + module = self.import_module(name='array') + arr = module.array('i', [1,2,3,4]) + assert arr[slice(1,3)].tolist() == [2,3] + arr[slice(1,3)] = module.array('i', [21, 22, 23]) + assert arr.tolist() == [1, 21, 22, 23, 4] + del arr[slice(1, 3)] + assert arr.tolist() == [1, 23, 4] + def test_buffer(self): module = self.import_module(name='array') arr = module.array('i', [1,2,3,4]) diff --git a/pypy/module/cpyext/test/test_classobject.py b/pypy/module/cpyext/test/test_classobject.py --- a/pypy/module/cpyext/test/test_classobject.py +++ b/pypy/module/cpyext/test/test_classobject.py @@ -1,4 +1,5 @@ from pypy.module.cpyext.test.test_api import BaseApiTest +from pypy.module.cpyext.test.test_cpyext import AppTestCpythonExtensionBase from pypy.interpreter.function import Function, Method class TestClassObject(BaseApiTest): @@ -51,3 +52,14 @@ assert api.PyInstance_Check(w_instance) assert space.is_true(space.call_method(space.builtin, "isinstance", w_instance, w_class)) + +class AppTestStringObject(AppTestCpythonExtensionBase): + def test_class_type(self): + module = self.import_extension('foo', [ + ("get_classtype", "METH_NOARGS", + """ + Py_INCREF(&PyClass_Type); + return &PyClass_Type; + """)]) + class C: pass + assert module.get_classtype() is type(C) diff --git a/pypy/module/cpyext/test/test_cpyext.py b/pypy/module/cpyext/test/test_cpyext.py --- a/pypy/module/cpyext/test/test_cpyext.py +++ b/pypy/module/cpyext/test/test_cpyext.py @@ -744,6 +744,22 @@ print p assert 'py' in p + def test_get_version(self): + mod = self.import_extension('foo', [ + 
('get_version', 'METH_NOARGS', + ''' + char* name1 = Py_GetVersion(); + char* name2 = Py_GetVersion(); + if (name1 != name2) + Py_RETURN_FALSE; + return PyString_FromString(name1); + ''' + ), + ]) + p = mod.get_version() + print p + assert 'PyPy' in p + def test_no_double_imports(self): import sys, os try: diff --git a/pypy/module/cpyext/test/test_dictobject.py b/pypy/module/cpyext/test/test_dictobject.py --- a/pypy/module/cpyext/test/test_dictobject.py +++ b/pypy/module/cpyext/test/test_dictobject.py @@ -2,6 +2,7 @@ from pypy.module.cpyext.test.test_api import BaseApiTest from pypy.module.cpyext.api import Py_ssize_tP, PyObjectP from pypy.module.cpyext.pyobject import make_ref, from_ref +from pypy.interpreter.error import OperationError class TestDictObject(BaseApiTest): def test_dict(self, space, api): @@ -110,3 +111,44 @@ assert space.eq_w(space.len(w_copy), space.len(w_dict)) assert space.eq_w(w_copy, w_dict) + + def test_iterkeys(self, space, api): + w_dict = space.sys.getdict(space) + py_dict = make_ref(space, w_dict) + + ppos = lltype.malloc(Py_ssize_tP.TO, 1, flavor='raw') + pkey = lltype.malloc(PyObjectP.TO, 1, flavor='raw') + pvalue = lltype.malloc(PyObjectP.TO, 1, flavor='raw') + + keys_w = [] + values_w = [] + try: + ppos[0] = 0 + while api.PyDict_Next(w_dict, ppos, pkey, None): + w_key = from_ref(space, pkey[0]) + keys_w.append(w_key) + ppos[0] = 0 + while api.PyDict_Next(w_dict, ppos, None, pvalue): + w_value = from_ref(space, pvalue[0]) + values_w.append(w_value) + finally: + lltype.free(ppos, flavor='raw') + lltype.free(pkey, flavor='raw') + lltype.free(pvalue, flavor='raw') + + api.Py_DecRef(py_dict) # release borrowed references + + assert space.eq_w(space.newlist(keys_w), + space.call_method(w_dict, "keys")) + assert space.eq_w(space.newlist(values_w), + space.call_method(w_dict, "values")) + + def test_dictproxy(self, space, api): + w_dict = space.sys.get('modules') + w_proxy = api.PyDictProxy_New(w_dict) + assert space.is_true(space.contains(w_proxy, space.wrap('sys'))) + raises(OperationError, space.setitem, + w_proxy, space.wrap('sys'), space.w_None) + raises(OperationError, space.delitem, + w_proxy, space.wrap('sys')) + raises(OperationError, space.call_method, w_proxy, 'clear') diff --git a/pypy/module/cpyext/test/test_eval.py b/pypy/module/cpyext/test/test_eval.py --- a/pypy/module/cpyext/test/test_eval.py +++ b/pypy/module/cpyext/test/test_eval.py @@ -2,9 +2,10 @@ from pypy.module.cpyext.test.test_cpyext import AppTestCpythonExtensionBase from pypy.module.cpyext.test.test_api import BaseApiTest from pypy.module.cpyext.eval import ( - Py_single_input, Py_file_input, Py_eval_input) + Py_single_input, Py_file_input, Py_eval_input, PyCompilerFlags) from pypy.module.cpyext.api import fopen, fclose, fileno, Py_ssize_tP from pypy.interpreter.gateway import interp2app +from pypy.interpreter.astcompiler import consts from pypy.tool.udir import udir import sys, os @@ -63,6 +64,22 @@ assert space.int_w(w_res) == 10 + def test_evalcode(self, space, api): + w_f = space.appexec([], """(): + def f(*args): + assert isinstance(args, tuple) + return len(args) + 8 + return f + """) + + w_t = space.newtuple([space.wrap(1), space.wrap(2)]) + w_globals = space.newdict() + w_locals = space.newdict() + space.setitem(w_locals, space.wrap("args"), w_t) + w_res = api.PyEval_EvalCode(w_f.code, w_globals, w_locals) + + assert space.int_w(w_res) == 10 + def test_run_simple_string(self, space, api): def run(code): buf = rffi.str2charp(code) @@ -96,6 +113,20 @@ assert 42 * 43 == space.unwrap( 
api.PyObject_GetItem(w_globals, space.wrap("a"))) + def test_run_string_flags(self, space, api): + flags = lltype.malloc(PyCompilerFlags, flavor='raw') + flags.c_cf_flags = rffi.cast(rffi.INT, consts.PyCF_SOURCE_IS_UTF8) + w_globals = space.newdict() + buf = rffi.str2charp("a = u'caf\xc3\xa9'") + try: + api.PyRun_StringFlags(buf, Py_single_input, + w_globals, w_globals, flags) + finally: + rffi.free_charp(buf) + w_a = space.getitem(w_globals, space.wrap("a")) + assert space.unwrap(w_a) == u'caf\xe9' + lltype.free(flags, flavor='raw') + def test_run_file(self, space, api): filepath = udir / "cpyext_test_runfile.py" filepath.write("raise ZeroDivisionError") @@ -256,3 +287,21 @@ print dir(mod) print mod.__dict__ assert mod.f(42) == 47 + + def test_merge_compiler_flags(self): + module = self.import_extension('foo', [ + ("get_flags", "METH_NOARGS", + """ + PyCompilerFlags flags; + flags.cf_flags = 0; + int result = PyEval_MergeCompilerFlags(&flags); + return Py_BuildValue("ii", result, flags.cf_flags); + """), + ]) + assert module.get_flags() == (0, 0) + + ns = {'module':module} + exec """from __future__ import division \nif 1: + def nested_flags(): + return module.get_flags()""" in ns + assert ns['nested_flags']() == (1, 0x2000) # CO_FUTURE_DIVISION diff --git a/pypy/module/cpyext/test/test_funcobject.py b/pypy/module/cpyext/test/test_funcobject.py --- a/pypy/module/cpyext/test/test_funcobject.py +++ b/pypy/module/cpyext/test/test_funcobject.py @@ -81,6 +81,14 @@ rffi.free_charp(filename) rffi.free_charp(funcname) + def test_getnumfree(self, space, api): + w_function = space.appexec([], """(): + a = 5 + def method(x): return a, x + return method + """) + assert api.PyCode_GetNumFree(w_function.code) == 1 + def test_classmethod(self, space, api): w_function = space.appexec([], """(): def method(x): return x diff --git a/pypy/module/cpyext/test/test_intobject.py b/pypy/module/cpyext/test/test_intobject.py --- a/pypy/module/cpyext/test/test_intobject.py +++ b/pypy/module/cpyext/test/test_intobject.py @@ -65,4 +65,97 @@ values = module.values() types = [type(x) for x in values] assert types == [int, long, int, int] - + + def test_int_subtype(self): + module = self.import_extension( + 'foo', [ + ("newEnum", "METH_VARARGS", + """ + EnumObject *enumObj; + long intval; + PyObject *name; + + if (!PyArg_ParseTuple(args, "Oi", &name, &intval)) + return NULL; + + PyType_Ready(&Enum_Type); + enumObj = PyObject_New(EnumObject, &Enum_Type); + if (!enumObj) { + return NULL; + } + + enumObj->ob_ival = intval; + Py_INCREF(name); + enumObj->ob_name = name; + + return (PyObject *)enumObj; + """), + ], + prologue=""" + typedef struct + { + PyObject_HEAD + long ob_ival; + PyObject* ob_name; + } EnumObject; + + static void + enum_dealloc(EnumObject *op) + { + Py_DECREF(op->ob_name); + Py_TYPE(op)->tp_free((PyObject *)op); + } + + static PyMemberDef enum_members[] = { + {"name", T_OBJECT, offsetof(EnumObject, ob_name), 0, NULL}, + {NULL} /* Sentinel */ + }; + + PyTypeObject Enum_Type = { + PyObject_HEAD_INIT(0) + /*ob_size*/ 0, + /*tp_name*/ "Enum", + /*tp_basicsize*/ sizeof(EnumObject), + /*tp_itemsize*/ 0, + /*tp_dealloc*/ enum_dealloc, + /*tp_print*/ 0, + /*tp_getattr*/ 0, + /*tp_setattr*/ 0, + /*tp_compare*/ 0, + /*tp_repr*/ 0, + /*tp_as_number*/ 0, + /*tp_as_sequence*/ 0, + /*tp_as_mapping*/ 0, + /*tp_hash*/ 0, + /*tp_call*/ 0, + /*tp_str*/ 0, + /*tp_getattro*/ 0, + /*tp_setattro*/ 0, + /*tp_as_buffer*/ 0, + /*tp_flags*/ Py_TPFLAGS_DEFAULT|Py_TPFLAGS_BASETYPE, + /*tp_doc*/ 0, + /*tp_traverse*/ 0, + /*tp_clear*/ 
0, + /*tp_richcompare*/ 0, + /*tp_weaklistoffset*/ 0, + /*tp_iter*/ 0, + /*tp_iternext*/ 0, + /*tp_methods*/ 0, + /*tp_members*/ enum_members, + /*tp_getset*/ 0, + /*tp_base*/ &PyInt_Type, + /*tp_dict*/ 0, + /*tp_descr_get*/ 0, + /*tp_descr_set*/ 0, + /*tp_dictoffset*/ 0, + /*tp_init*/ 0, + /*tp_alloc*/ 0, + /*tp_new*/ 0 + }; + """) + + a = module.newEnum("ULTIMATE_ANSWER", 42) + assert type(a).__name__ == "Enum" + assert isinstance(a, int) + assert a == int(a) == 42 + assert a.name == "ULTIMATE_ANSWER" diff --git a/pypy/module/cpyext/test/test_methodobject.py b/pypy/module/cpyext/test/test_methodobject.py --- a/pypy/module/cpyext/test/test_methodobject.py +++ b/pypy/module/cpyext/test/test_methodobject.py @@ -9,7 +9,7 @@ class AppTestMethodObject(AppTestCpythonExtensionBase): def test_call_METH(self): - mod = self.import_extension('foo', [ + mod = self.import_extension('MyModule', [ ('getarg_O', 'METH_O', ''' Py_INCREF(args); @@ -51,11 +51,23 @@ } ''' ), + ('getModule', 'METH_O', + ''' + if(PyCFunction_Check(args)) { + PyCFunctionObject* func = (PyCFunctionObject*)args; + Py_INCREF(func->m_module); + return func->m_module; + } + else { + Py_RETURN_FALSE; + } + ''' + ), ('isSameFunction', 'METH_O', ''' PyCFunction ptr = PyCFunction_GetFunction(args); if (!ptr) return NULL; - if (ptr == foo_getarg_O) + if (ptr == MyModule_getarg_O) Py_RETURN_TRUE; else Py_RETURN_FALSE; @@ -76,6 +88,7 @@ assert mod.getarg_OLD(1, 2) == (1, 2) assert mod.isCFunction(mod.getarg_O) == "getarg_O" + assert mod.getModule(mod.getarg_O) == 'MyModule' assert mod.isSameFunction(mod.getarg_O) raises(TypeError, mod.isSameFunction, 1) diff --git a/pypy/module/cpyext/test/test_object.py b/pypy/module/cpyext/test/test_object.py --- a/pypy/module/cpyext/test/test_object.py +++ b/pypy/module/cpyext/test/test_object.py @@ -191,6 +191,11 @@ assert api.PyObject_Unicode(space.wrap("\xe9")) is None api.PyErr_Clear() + def test_dir(self, space, api): + w_dir = api.PyObject_Dir(space.sys) + assert space.isinstance_w(w_dir, space.w_list) + assert space.is_true(space.contains(w_dir, space.wrap('modules'))) + class AppTestObject(AppTestCpythonExtensionBase): def setup_class(cls): AppTestCpythonExtensionBase.setup_class.im_func(cls) diff --git a/pypy/module/cpyext/test/test_pyfile.py b/pypy/module/cpyext/test/test_pyfile.py --- a/pypy/module/cpyext/test/test_pyfile.py +++ b/pypy/module/cpyext/test/test_pyfile.py @@ -1,5 +1,6 @@ from pypy.module.cpyext.api import fopen, fclose, fwrite from pypy.module.cpyext.test.test_api import BaseApiTest +from pypy.module.cpyext.object import Py_PRINT_RAW from pypy.rpython.lltypesystem import rffi, lltype from pypy.tool.udir import udir import pytest @@ -77,3 +78,28 @@ out = out.replace('\r\n', '\n') assert out == "test\n" + def test_file_writeobject(self, space, api, capfd): + w_obj = space.wrap("test\n") + w_stdout = space.sys.get("stdout") + api.PyFile_WriteObject(w_obj, w_stdout, Py_PRINT_RAW) + api.PyFile_WriteObject(w_obj, w_stdout, 0) + space.call_method(w_stdout, "flush") + out, err = capfd.readouterr() + out = out.replace('\r\n', '\n') + assert out == "test\n'test\\n'" + + def test_file_softspace(self, space, api, capfd): + w_stdout = space.sys.get("stdout") + assert api.PyFile_SoftSpace(w_stdout, 1) == 0 + assert api.PyFile_SoftSpace(w_stdout, 0) == 1 + + api.PyFile_SoftSpace(w_stdout, 1) + w_ns = space.newdict() + space.exec_("print 1,", w_ns, w_ns) + space.exec_("print 2,", w_ns, w_ns) + api.PyFile_SoftSpace(w_stdout, 0) + space.exec_("print 3", w_ns, w_ns) + space.call_method(w_stdout, 
"flush") + out, err = capfd.readouterr() + out = out.replace('\r\n', '\n') + assert out == " 1 23\n" diff --git a/pypy/module/cpyext/test/test_pystate.py b/pypy/module/cpyext/test/test_pystate.py --- a/pypy/module/cpyext/test/test_pystate.py +++ b/pypy/module/cpyext/test/test_pystate.py @@ -2,6 +2,7 @@ from pypy.module.cpyext.test.test_api import BaseApiTest from pypy.rpython.lltypesystem.lltype import nullptr from pypy.module.cpyext.pystate import PyInterpreterState, PyThreadState +from pypy.module.cpyext.pyobject import from_ref class AppTestThreads(AppTestCpythonExtensionBase): def test_allow_threads(self): @@ -49,3 +50,10 @@ api.PyEval_AcquireThread(tstate) api.PyEval_ReleaseThread(tstate) + + def test_threadstate_dict(self, space, api): + ts = api.PyThreadState_Get() + ref = ts.c_dict + assert ref == api.PyThreadState_GetDict() + w_obj = from_ref(space, ref) + assert space.isinstance_w(w_obj, space.w_dict) diff --git a/pypy/module/cpyext/test/test_setobject.py b/pypy/module/cpyext/test/test_setobject.py --- a/pypy/module/cpyext/test/test_setobject.py +++ b/pypy/module/cpyext/test/test_setobject.py @@ -32,3 +32,13 @@ w_set = api.PySet_New(space.wrap([1,2,3,4])) assert api.PySet_Contains(w_set, space.wrap(1)) assert not api.PySet_Contains(w_set, space.wrap(0)) + + def test_set_pop_clear(self, space, api): + w_set = api.PySet_New(space.wrap([1,2,3,4])) + w_obj = api.PySet_Pop(w_set) + assert space.int_w(w_obj) in (1,2,3,4) + assert space.len_w(w_set) == 3 + api.PySet_Clear(w_set) + assert space.len_w(w_set) == 0 + + diff --git a/pypy/module/cpyext/test/test_stringobject.py b/pypy/module/cpyext/test/test_stringobject.py --- a/pypy/module/cpyext/test/test_stringobject.py +++ b/pypy/module/cpyext/test/test_stringobject.py @@ -166,6 +166,20 @@ res = module.test_string_format(1, "xyz") assert res == "bla 1 ble xyz\n" + def test_intern_inplace(self): + module = self.import_extension('foo', [ + ("test_intern_inplace", "METH_O", + ''' + PyObject *s = args; + Py_INCREF(s); + PyString_InternInPlace(&s); + return s; + ''' + ) + ]) + # This does not test much, but at least the refcounts are checked. 
+ assert module.test_intern_inplace('s') == 's' + class TestString(BaseApiTest): def test_string_resize(self, space, api): py_str = new_empty_str(space, 10) diff --git a/pypy/module/cpyext/test/test_typeobject.py b/pypy/module/cpyext/test/test_typeobject.py --- a/pypy/module/cpyext/test/test_typeobject.py +++ b/pypy/module/cpyext/test/test_typeobject.py @@ -425,3 +425,32 @@ ''') obj = module.new_obj() raises(ZeroDivisionError, obj.__setitem__, 5, None) + + def test_tp_iter(self): + module = self.import_extension('foo', [ + ("tp_iter", "METH_O", + ''' + if (!args->ob_type->tp_iter) + { + PyErr_SetNone(PyExc_ValueError); + return NULL; + } + return args->ob_type->tp_iter(args); + ''' + ), + ("tp_iternext", "METH_O", + ''' + if (!args->ob_type->tp_iternext) + { + PyErr_SetNone(PyExc_ValueError); + return NULL; + } + return args->ob_type->tp_iternext(args); + ''' + ) + ]) + l = [1] + it = module.tp_iter(l) + assert type(it) is type(iter([])) + assert module.tp_iternext(it) == 1 + raises(StopIteration, module.tp_iternext, it) diff --git a/pypy/module/cpyext/test/test_unicodeobject.py b/pypy/module/cpyext/test/test_unicodeobject.py --- a/pypy/module/cpyext/test/test_unicodeobject.py +++ b/pypy/module/cpyext/test/test_unicodeobject.py @@ -204,8 +204,18 @@ assert api.Py_UNICODE_ISSPACE(unichr(char)) assert not api.Py_UNICODE_ISSPACE(u'a') + assert api.Py_UNICODE_ISALPHA(u'a') + assert not api.Py_UNICODE_ISALPHA(u'0') + assert api.Py_UNICODE_ISALNUM(u'a') + assert api.Py_UNICODE_ISALNUM(u'0') + assert not api.Py_UNICODE_ISALNUM(u'+') + assert api.Py_UNICODE_ISDECIMAL(u'\u0660') assert not api.Py_UNICODE_ISDECIMAL(u'a') + assert api.Py_UNICODE_ISDIGIT(u'9') + assert not api.Py_UNICODE_ISDIGIT(u'@') + assert api.Py_UNICODE_ISNUMERIC(u'9') + assert not api.Py_UNICODE_ISNUMERIC(u'@') for char in [0x0a, 0x0d, 0x1c, 0x1d, 0x1e, 0x85, 0x2028, 0x2029]: assert api.Py_UNICODE_ISLINEBREAK(unichr(char)) @@ -216,6 +226,9 @@ assert not api.Py_UNICODE_ISUPPER(u'a') assert not api.Py_UNICODE_ISLOWER(u'�') assert api.Py_UNICODE_ISUPPER(u'�') + assert not api.Py_UNICODE_ISTITLE(u'A') + assert api.Py_UNICODE_ISTITLE( + u'\N{LATIN CAPITAL LETTER L WITH SMALL LETTER J}') def test_TOLOWER(self, space, api): assert api.Py_UNICODE_TOLOWER(u'�') == u'�' @@ -420,3 +433,27 @@ w_seq = space.wrap([u'a', u'b']) w_joined = api.PyUnicode_Join(w_sep, w_seq) assert space.unwrap(w_joined) == u'ab' + + def test_fromordinal(self, space, api): + w_char = api.PyUnicode_FromOrdinal(65) + assert space.unwrap(w_char) == u'A' + w_char = api.PyUnicode_FromOrdinal(0) + assert space.unwrap(w_char) == u'\0' + w_char = api.PyUnicode_FromOrdinal(0xFFFF) + assert space.unwrap(w_char) == u'\uFFFF' + + def test_replace(self, space, api): + w_str = space.wrap(u"abababab") + w_substr = space.wrap(u"a") + w_replstr = space.wrap(u"z") + assert u"zbzbabab" == space.unwrap( + api.PyUnicode_Replace(w_str, w_substr, w_replstr, 2)) + assert u"zbzbzbzb" == space.unwrap( + api.PyUnicode_Replace(w_str, w_substr, w_replstr, -1)) + + def test_tailmatch(self, space, api): + w_str = space.wrap(u"abcdef") + assert api.PyUnicode_Tailmatch(w_str, space.wrap("cde"), 2, 10, 1) == 1 + assert api.PyUnicode_Tailmatch(w_str, space.wrap("cde"), 1, 5, -1) == 1 + self.raises(space, api, TypeError, + api.PyUnicode_Tailmatch, w_str, space.wrap(3), 2, 10, 1) diff --git a/pypy/module/cpyext/unicodeobject.py b/pypy/module/cpyext/unicodeobject.py --- a/pypy/module/cpyext/unicodeobject.py +++ b/pypy/module/cpyext/unicodeobject.py @@ -12,7 +12,7 @@ make_typedescr, get_typedescr) from 
pypy.module.cpyext.stringobject import PyString_Check from pypy.module.sys.interp_encoding import setdefaultencoding -from pypy.objspace.std import unicodeobject, unicodetype +from pypy.objspace.std import unicodeobject, unicodetype, stringtype from pypy.rlib import runicode from pypy.tool.sourcetools import func_renamer import sys @@ -89,6 +89,11 @@ return unicodedb.isspace(ord(ch)) @cpython_api([Py_UNICODE], rffi.INT_real, error=CANNOT_FAIL) +def Py_UNICODE_ISALPHA(space, ch): + """Return 1 or 0 depending on whether ch is an alphabetic character.""" + return unicodedb.isalpha(ord(ch)) + + at cpython_api([Py_UNICODE], rffi.INT_real, error=CANNOT_FAIL) def Py_UNICODE_ISALNUM(space, ch): """Return 1 or 0 depending on whether ch is an alphanumeric character.""" return unicodedb.isalnum(ord(ch)) @@ -104,6 +109,16 @@ return unicodedb.isdecimal(ord(ch)) @cpython_api([Py_UNICODE], rffi.INT_real, error=CANNOT_FAIL) +def Py_UNICODE_ISDIGIT(space, ch): + """Return 1 or 0 depending on whether ch is a digit character.""" + return unicodedb.isdigit(ord(ch)) + + at cpython_api([Py_UNICODE], rffi.INT_real, error=CANNOT_FAIL) +def Py_UNICODE_ISNUMERIC(space, ch): + """Return 1 or 0 depending on whether ch is a numeric character.""" + return unicodedb.isnumeric(ord(ch)) + + at cpython_api([Py_UNICODE], rffi.INT_real, error=CANNOT_FAIL) def Py_UNICODE_ISLOWER(space, ch): """Return 1 or 0 depending on whether ch is a lowercase character.""" return unicodedb.islower(ord(ch)) @@ -113,6 +128,11 @@ """Return 1 or 0 depending on whether ch is an uppercase character.""" return unicodedb.isupper(ord(ch)) + at cpython_api([Py_UNICODE], rffi.INT_real, error=CANNOT_FAIL) +def Py_UNICODE_ISTITLE(space, ch): + """Return 1 or 0 depending on whether ch is a titlecase character.""" + return unicodedb.istitle(ord(ch)) + @cpython_api([Py_UNICODE], Py_UNICODE, error=CANNOT_FAIL) def Py_UNICODE_TOLOWER(space, ch): """Return the character ch converted to lower case.""" @@ -155,6 +175,11 @@ except KeyError: return -1.0 + at cpython_api([], Py_UNICODE, error=CANNOT_FAIL) +def PyUnicode_GetMax(space): + """Get the maximum ordinal for a Unicode character.""" + return unichr(runicode.MAXUNICODE) + @cpython_api([PyObject], rffi.CCHARP, error=CANNOT_FAIL) def PyUnicode_AS_DATA(space, ref): """Return a pointer to the internal buffer of the object. o has to be a @@ -395,6 +420,16 @@ w_str = space.wrap(rffi.charpsize2str(s, size)) return space.call_method(w_str, 'decode', space.wrap("utf-8")) + at cpython_api([rffi.INT_real], PyObject) +def PyUnicode_FromOrdinal(space, ordinal): + """Create a Unicode Object from the given Unicode code point ordinal. + + The ordinal must be in range(0x10000) on narrow Python builds + (UCS2), and range(0x110000) on wide builds (UCS4). 
A ValueError is + raised in case it is not.""" + w_ordinal = space.wrap(rffi.cast(lltype.Signed, ordinal)) + return space.call_function(space.builtin.get('unichr'), w_ordinal) + @cpython_api([PyObjectP, Py_ssize_t], rffi.INT_real, error=-1) def PyUnicode_Resize(space, ref, newsize): # XXX always create a new string so far @@ -538,6 +573,28 @@ @cpython_api([PyObject, PyObject], PyObject) def PyUnicode_Join(space, w_sep, w_seq): - """Join a sequence of strings using the given separator and return the resulting - Unicode string.""" + """Join a sequence of strings using the given separator and return + the resulting Unicode string.""" return space.call_method(w_sep, 'join', w_seq) + + at cpython_api([PyObject, PyObject, PyObject, Py_ssize_t], PyObject) +def PyUnicode_Replace(space, w_str, w_substr, w_replstr, maxcount): + """Replace at most maxcount occurrences of substr in str with replstr and + return the resulting Unicode object. maxcount == -1 means replace all + occurrences.""" + return space.call_method(w_str, "replace", w_substr, w_replstr, + space.wrap(maxcount)) + + at cpython_api([PyObject, PyObject, Py_ssize_t, Py_ssize_t, rffi.INT_real], + rffi.INT_real, error=-1) +def PyUnicode_Tailmatch(space, w_str, w_substr, start, end, direction): + """Return 1 if substr matches str[start:end] at the given tail end + (direction == -1 means to do a prefix match, direction == 1 a + suffix match), 0 otherwise. Return -1 if an error occurred.""" + str = space.unicode_w(w_str) + substr = space.unicode_w(w_substr) + if rffi.cast(lltype.Signed, direction) >= 0: + return stringtype.stringstartswith(str, substr, start, end) + else: + return stringtype.stringendswith(str, substr, start, end) + diff --git a/pypy/module/micronumpy/__init__.py b/pypy/module/micronumpy/__init__.py --- a/pypy/module/micronumpy/__init__.py +++ b/pypy/module/micronumpy/__init__.py @@ -67,10 +67,12 @@ ("arccos", "arccos"), ("arcsin", "arcsin"), ("arctan", "arctan"), + ("arccosh", "arccosh"), ("arcsinh", "arcsinh"), ("arctanh", "arctanh"), ("copysign", "copysign"), ("cos", "cos"), + ("cosh", "cosh"), ("divide", "divide"), ("true_divide", "true_divide"), ("equal", "equal"), @@ -90,9 +92,11 @@ ("reciprocal", "reciprocal"), ("sign", "sign"), ("sin", "sin"), + ("sinh", "sinh"), ("subtract", "subtract"), ('sqrt', 'sqrt'), ("tan", "tan"), + ("tanh", "tanh"), ('bitwise_and', 'bitwise_and'), ('bitwise_or', 'bitwise_or'), ('bitwise_xor', 'bitwise_xor'), diff --git a/pypy/module/micronumpy/interp_boxes.py b/pypy/module/micronumpy/interp_boxes.py --- a/pypy/module/micronumpy/interp_boxes.py +++ b/pypy/module/micronumpy/interp_boxes.py @@ -1,6 +1,6 @@ from pypy.interpreter.baseobjspace import Wrappable from pypy.interpreter.error import operationerrfmt -from pypy.interpreter.gateway import interp2app +from pypy.interpreter.gateway import interp2app, unwrap_spec from pypy.interpreter.typedef import TypeDef from pypy.objspace.std.floattype import float_typedef from pypy.objspace.std.inttype import int_typedef @@ -29,7 +29,6 @@ def convert_to(self, dtype): return dtype.box(self.value) - class W_GenericBox(Wrappable): _attrs_ = () @@ -39,10 +38,10 @@ ) def descr_str(self, space): - return self.descr_repr(space) + return space.wrap(self.get_dtype(space).itemtype.str_format(self)) - def descr_repr(self, space): - return space.wrap(self.get_dtype(space).itemtype.str_format(self)) + def descr_format(self, space, w_spec): + return space.format(self.item(space), w_spec) def descr_int(self, space): box = self.convert_to(W_LongBox.get_dtype(space)) @@ 
-190,6 +189,10 @@ descr__new__, get_dtype = new_dtype_getter("float64") + at unwrap_spec(self=W_GenericBox) +def descr_index(space, self): + return space.index(self.item(space)) + W_GenericBox.typedef = TypeDef("generic", __module__ = "numpypy", @@ -197,7 +200,8 @@ __new__ = interp2app(W_GenericBox.descr__new__.im_func), __str__ = interp2app(W_GenericBox.descr_str), - __repr__ = interp2app(W_GenericBox.descr_repr), + __repr__ = interp2app(W_GenericBox.descr_str), + __format__ = interp2app(W_GenericBox.descr_format), __int__ = interp2app(W_GenericBox.descr_int), __float__ = interp2app(W_GenericBox.descr_float), __nonzero__ = interp2app(W_GenericBox.descr_nonzero), @@ -248,6 +252,8 @@ W_BoolBox.typedef = TypeDef("bool_", W_GenericBox.typedef, __module__ = "numpypy", __new__ = interp2app(W_BoolBox.descr__new__.im_func), + + __index__ = interp2app(descr_index), ) W_NumberBox.typedef = TypeDef("number", W_GenericBox.typedef, @@ -269,36 +275,43 @@ W_Int8Box.typedef = TypeDef("int8", W_SignedIntegerBox.typedef, __module__ = "numpypy", __new__ = interp2app(W_Int8Box.descr__new__.im_func), + __index__ = interp2app(descr_index), ) W_UInt8Box.typedef = TypeDef("uint8", W_UnsignedIntegerBox.typedef, __module__ = "numpypy", __new__ = interp2app(W_UInt8Box.descr__new__.im_func), + __index__ = interp2app(descr_index), ) W_Int16Box.typedef = TypeDef("int16", W_SignedIntegerBox.typedef, __module__ = "numpypy", __new__ = interp2app(W_Int16Box.descr__new__.im_func), + __index__ = interp2app(descr_index), ) W_UInt16Box.typedef = TypeDef("uint16", W_UnsignedIntegerBox.typedef, __module__ = "numpypy", __new__ = interp2app(W_UInt16Box.descr__new__.im_func), + __index__ = interp2app(descr_index), ) W_Int32Box.typedef = TypeDef("int32", (W_SignedIntegerBox.typedef,) + MIXIN_32, __module__ = "numpypy", __new__ = interp2app(W_Int32Box.descr__new__.im_func), + __index__ = interp2app(descr_index), ) W_UInt32Box.typedef = TypeDef("uint32", W_UnsignedIntegerBox.typedef, __module__ = "numpypy", __new__ = interp2app(W_UInt32Box.descr__new__.im_func), + __index__ = interp2app(descr_index), ) W_Int64Box.typedef = TypeDef("int64", (W_SignedIntegerBox.typedef,) + MIXIN_64, __module__ = "numpypy", __new__ = interp2app(W_Int64Box.descr__new__.im_func), + __index__ = interp2app(descr_index), ) if LONG_BIT == 32: @@ -311,6 +324,7 @@ W_UInt64Box.typedef = TypeDef("uint64", W_UnsignedIntegerBox.typedef, __module__ = "numpypy", __new__ = interp2app(W_UInt64Box.descr__new__.im_func), + __index__ = interp2app(descr_index), ) W_InexactBox.typedef = TypeDef("inexact", W_NumberBox.typedef, diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -1333,6 +1333,7 @@ nbytes = GetSetProperty(BaseArray.descr_get_nbytes), T = GetSetProperty(BaseArray.descr_get_transpose), + transpose = interp2app(BaseArray.descr_get_transpose), flat = GetSetProperty(BaseArray.descr_get_flatiter), ravel = interp2app(BaseArray.descr_ravel), item = interp2app(BaseArray.descr_item), diff --git a/pypy/module/micronumpy/interp_support.py b/pypy/module/micronumpy/interp_support.py --- a/pypy/module/micronumpy/interp_support.py +++ b/pypy/module/micronumpy/interp_support.py @@ -3,7 +3,7 @@ from pypy.rpython.lltypesystem import lltype, rffi from pypy.module.micronumpy import interp_dtype from pypy.objspace.std.strutil import strip_spaces - +from pypy.rlib import jit FLOAT_SIZE = rffi.sizeof(lltype.Float) @@ -72,11 +72,20 @@ 
"string is smaller than requested size")) a = W_NDimArray(count, [count], dtype=dtype) - for i in range(count): + fromstring_loop(a, count, dtype, itemsize, s) + return space.wrap(a) + +fromstring_driver = jit.JitDriver(greens=[], reds=['count', 'i', 'itemsize', + 'dtype', 's', 'a']) + +def fromstring_loop(a, count, dtype, itemsize, s): + i = 0 + while i < count: + fromstring_driver.jit_merge_point(a=a, count=count, dtype=dtype, + itemsize=itemsize, s=s, i=i) val = dtype.itemtype.runpack_str(s[i*itemsize:i*itemsize + itemsize]) a.dtype.setitem(a.storage, i, val) - - return space.wrap(a) + i += 1 @unwrap_spec(s=str, count=int, sep=str) def fromstring(space, s, w_dtype=None, count=-1, sep=''): diff --git a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py --- a/pypy/module/micronumpy/interp_ufuncs.py +++ b/pypy/module/micronumpy/interp_ufuncs.py @@ -546,7 +546,11 @@ ("arcsin", "arcsin", 1, {"promote_to_float": True}), ("arccos", "arccos", 1, {"promote_to_float": True}), ("arctan", "arctan", 1, {"promote_to_float": True}), + ("sinh", "sinh", 1, {"promote_to_float": True}), + ("cosh", "cosh", 1, {"promote_to_float": True}), + ("tanh", "tanh", 1, {"promote_to_float": True}), ("arcsinh", "arcsinh", 1, {"promote_to_float": True}), + ("arccosh", "arccosh", 1, {"promote_to_float": True}), ("arctanh", "arctanh", 1, {"promote_to_float": True}), ]: self.add_ufunc(space, *ufunc_def) diff --git a/pypy/module/micronumpy/test/test_dtypes.py b/pypy/module/micronumpy/test/test_dtypes.py --- a/pypy/module/micronumpy/test/test_dtypes.py +++ b/pypy/module/micronumpy/test/test_dtypes.py @@ -371,6 +371,8 @@ assert type(a[1]) is numpy.float64 assert numpy.dtype(float).type is numpy.float64 + assert "{:3f}".format(numpy.float64(3)) == "3.000000" + assert numpy.float64(2.0) == 2.0 assert numpy.float64('23.4') == numpy.float64(23.4) raises(ValueError, numpy.float64, '23.2df') @@ -387,9 +389,9 @@ assert b.m() == 12 def test_long_as_index(self): - skip("waiting for removal of multimethods of __index__") - from _numpypy import int_ + from _numpypy import int_, float64 assert (1, 2, 3)[int_(1)] == 2 + raises(TypeError, lambda: (1, 2, 3)[float64(1)]) def test_int(self): import sys diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -1489,6 +1489,7 @@ a = array((range(10), range(20, 30))) b = a.T assert(b[:, 0] == a[0, :]).all() + assert (a.transpose() == b).all() def test_flatiter(self): from _numpypy import array, flatiter, arange diff --git a/pypy/module/micronumpy/test/test_ufuncs.py b/pypy/module/micronumpy/test/test_ufuncs.py --- a/pypy/module/micronumpy/test/test_ufuncs.py +++ b/pypy/module/micronumpy/test/test_ufuncs.py @@ -310,6 +310,33 @@ b = arctan(a) assert math.isnan(b[0]) + def test_sinh(self): + import math + from _numpypy import array, sinh + + a = array([-1, 0, 1, float('inf'), float('-inf')]) + b = sinh(a) + for i in range(len(a)): + assert b[i] == math.sinh(a[i]) + + def test_cosh(self): + import math + from _numpypy import array, cosh + + a = array([-1, 0, 1, float('inf'), float('-inf')]) + b = cosh(a) + for i in range(len(a)): + assert b[i] == math.cosh(a[i]) + + def test_tanh(self): + import math + from _numpypy import array, tanh + + a = array([-1, 0, 1, float('inf'), float('-inf')]) + b = tanh(a) + for i in range(len(a)): + assert b[i] == math.tanh(a[i]) + def test_arcsinh(self): import math from _numpypy import 
arcsinh @@ -318,6 +345,15 @@ assert math.asinh(v) == arcsinh(v) assert math.isnan(arcsinh(float("nan"))) + def test_arccosh(self): + import math + from _numpypy import arccosh + + for v in [1.0, 1.1, 2]: + assert math.acosh(v) == arccosh(v) + for v in [-1.0, 0, .99]: + assert math.isnan(arccosh(v)) + def test_arctanh(self): import math from _numpypy import arctanh diff --git a/pypy/module/micronumpy/test/test_zjit.py b/pypy/module/micronumpy/test/test_zjit.py --- a/pypy/module/micronumpy/test/test_zjit.py +++ b/pypy/module/micronumpy/test/test_zjit.py @@ -479,38 +479,3 @@ 'int_sub': 3, 'jump': 1, 'setinteriorfield_raw': 1}) - - -class TestNumpyOld(LLJitMixin): - def setup_class(cls): - py.test.skip("old") - from pypy.module.micronumpy.compile import FakeSpace - from pypy.module.micronumpy.interp_dtype import get_dtype_cache - - cls.space = FakeSpace() - cls.float64_dtype = get_dtype_cache(cls.space).w_float64dtype - - def test_int32_sum(self): - py.test.skip("pypy/jit/backend/llimpl.py needs to be changed to " - "deal correctly with int dtypes for this test to " - "work. skip for now until someone feels up to the task") - space = self.space - float64_dtype = self.float64_dtype - int32_dtype = self.int32_dtype - - def f(n): - if NonConstant(False): - dtype = float64_dtype - else: - dtype = int32_dtype - ar = W_NDimArray(n, [n], dtype=dtype) - i = 0 - while i < n: - ar.get_concrete().setitem(i, int32_dtype.box(7)) - i += 1 - v = ar.descr_add(space, ar).descr_sum(space) - assert isinstance(v, IntObject) - return v.intval - - result = self.meta_interp(f, [5], listops=True, backendopt=True) - assert result == f(5) diff --git a/pypy/module/micronumpy/types.py b/pypy/module/micronumpy/types.py --- a/pypy/module/micronumpy/types.py +++ b/pypy/module/micronumpy/types.py @@ -489,10 +489,28 @@ return math.atan(v) @simple_unary_op + def sinh(self, v): + return math.sinh(v) + + @simple_unary_op + def cosh(self, v): + return math.cosh(v) + + @simple_unary_op + def tanh(self, v): + return math.tanh(v) + + @simple_unary_op def arcsinh(self, v): return math.asinh(v) @simple_unary_op + def arccosh(self, v): + if v < 1.0: + return rfloat.NAN + return math.acosh(v) + + @simple_unary_op def arctanh(self, v): if v == 1.0 or v == -1.0: return math.copysign(rfloat.INFINITY, v) diff --git a/pypy/module/oracle/interp_error.py b/pypy/module/oracle/interp_error.py --- a/pypy/module/oracle/interp_error.py +++ b/pypy/module/oracle/interp_error.py @@ -72,7 +72,7 @@ get(space).w_InternalError, space.wrap("No Oracle error?")) - self.code = codeptr[0] + self.code = rffi.cast(lltype.Signed, codeptr[0]) self.w_message = config.w_string(space, textbuf) finally: lltype.free(codeptr, flavor='raw') diff --git a/pypy/module/oracle/interp_variable.py b/pypy/module/oracle/interp_variable.py --- a/pypy/module/oracle/interp_variable.py +++ b/pypy/module/oracle/interp_variable.py @@ -359,14 +359,14 @@ # Verifies that truncation or other problems did not take place on # retrieve. 
if self.isVariableLength: - if rffi.cast(lltype.Signed, self.returnCode[pos]) != 0: + error_code = rffi.cast(lltype.Signed, self.returnCode[pos]) + if error_code != 0: error = W_Error(space, self.environment, "Variable_VerifyFetch()", 0) - error.code = self.returnCode[pos] + error.code = error_code error.message = space.wrap( "column at array pos %d fetched with error: %d" % - (pos, - rffi.cast(lltype.Signed, self.returnCode[pos]))) + (pos, error_code)) w_error = get(space).w_DatabaseError raise OperationError(get(space).w_DatabaseError, diff --git a/pypy/module/pypyjit/test_pypy_c/test_instance.py b/pypy/module/pypyjit/test_pypy_c/test_instance.py --- a/pypy/module/pypyjit/test_pypy_c/test_instance.py +++ b/pypy/module/pypyjit/test_pypy_c/test_instance.py @@ -201,3 +201,28 @@ loop, = log.loops_by_filename(self.filepath) assert loop.match_by_id("compare", "") # optimized away + def test_super(self): + def main(): + class A(object): + def m(self, x): + return x + 1 + class B(A): + def m(self, x): + return super(B, self).m(x) + i = 0 + while i < 300: + i = B().m(i) + return i + + log = self.run(main, []) + loop, = log.loops_by_filename(self.filepath) + assert loop.match(""" + i78 = int_lt(i72, 300) + guard_true(i78, descr=...) + guard_not_invalidated(descr=...) + i79 = force_token() + i80 = force_token() + i81 = int_add(i72, 1) + --TICK-- + jump(..., descr=...) + """) diff --git a/pypy/module/test_lib_pypy/ctypes_tests/test_fastpath.py b/pypy/module/test_lib_pypy/ctypes_tests/test_fastpath.py --- a/pypy/module/test_lib_pypy/ctypes_tests/test_fastpath.py +++ b/pypy/module/test_lib_pypy/ctypes_tests/test_fastpath.py @@ -97,6 +97,16 @@ tf_b.errcheck = errcheck assert tf_b(-126) == 'hello' + def test_array_to_ptr(self): + ARRAY = c_int * 8 + func = dll._testfunc_ai8 + func.restype = POINTER(c_int) + func.argtypes = [ARRAY] + array = ARRAY(1, 2, 3, 4, 5, 6, 7, 8) + ptr = func(array) + assert ptr[0] == 1 + assert ptr[7] == 8 + class TestFallbackToSlowpath(BaseCTypesTestChecker): diff --git a/pypy/module/test_lib_pypy/ctypes_tests/test_prototypes.py b/pypy/module/test_lib_pypy/ctypes_tests/test_prototypes.py --- a/pypy/module/test_lib_pypy/ctypes_tests/test_prototypes.py +++ b/pypy/module/test_lib_pypy/ctypes_tests/test_prototypes.py @@ -246,6 +246,14 @@ def func(): pass CFUNCTYPE(None, c_int * 3)(func) + def test_array_to_ptr_wrongtype(self): + ARRAY = c_byte * 8 + func = testdll._testfunc_ai8 + func.restype = POINTER(c_int) + func.argtypes = [c_int * 8] + array = ARRAY(1, 2, 3, 4, 5, 6, 7, 8) + py.test.raises(ArgumentError, "func(array)") + ################################################################ if __name__ == '__main__': diff --git a/pypy/module/test_lib_pypy/test_datetime.py b/pypy/module/test_lib_pypy/test_datetime.py --- a/pypy/module/test_lib_pypy/test_datetime.py +++ b/pypy/module/test_lib_pypy/test_datetime.py @@ -1,7 +1,10 @@ """Additional tests for datetime.""" +import py + import time -import datetime +from lib_pypy import datetime +import copy import os def test_utcfromtimestamp(): @@ -26,3 +29,18 @@ def test_utcfromtimestamp_microsecond(): dt = datetime.datetime.utcfromtimestamp(0) assert isinstance(dt.microsecond, int) + + +def test_integer_args(): + with py.test.raises(TypeError): + datetime.datetime(10, 10, 10.) + with py.test.raises(TypeError): + datetime.datetime(10, 10, 10, 10, 10.) + with py.test.raises(TypeError): + datetime.datetime(10, 10, 10, 10, 10, 10.) 
+ +def test_utcnow_microsecond(): + dt = datetime.datetime.utcnow() + assert type(dt.microsecond) is int + + copy.copy(dt) diff --git a/pypy/objspace/fake/objspace.py b/pypy/objspace/fake/objspace.py --- a/pypy/objspace/fake/objspace.py +++ b/pypy/objspace/fake/objspace.py @@ -326,4 +326,5 @@ return w_some_obj() FakeObjSpace.sys = FakeModule() FakeObjSpace.sys.filesystemencoding = 'foobar' +FakeObjSpace.sys.defaultencoding = 'ascii' FakeObjSpace.builtin = FakeModule() diff --git a/pypy/objspace/flow/flowcontext.py b/pypy/objspace/flow/flowcontext.py --- a/pypy/objspace/flow/flowcontext.py +++ b/pypy/objspace/flow/flowcontext.py @@ -410,7 +410,7 @@ w_new = Constant(newvalue) f = self.crnt_frame stack_items_w = f.locals_stack_w - for i in range(f.valuestackdepth-1, f.nlocals-1, -1): + for i in range(f.valuestackdepth-1, f.pycode.co_nlocals-1, -1): w_v = stack_items_w[i] if isinstance(w_v, Constant): if w_v.value is oldvalue: diff --git a/pypy/objspace/flow/test/test_framestate.py b/pypy/objspace/flow/test/test_framestate.py --- a/pypy/objspace/flow/test/test_framestate.py +++ b/pypy/objspace/flow/test/test_framestate.py @@ -25,7 +25,7 @@ dummy = Constant(None) #dummy.dummy = True arg_list = ([Variable() for i in range(formalargcount)] + - [dummy] * (frame.nlocals - formalargcount)) + [dummy] * (frame.pycode.co_nlocals - formalargcount)) frame.setfastscope(arg_list) return frame @@ -42,7 +42,7 @@ def test_neq_hacked_framestate(self): frame = self.getframe(self.func_simple) fs1 = FrameState(frame) - frame.locals_stack_w[frame.nlocals-1] = Variable() + frame.locals_stack_w[frame.pycode.co_nlocals-1] = Variable() fs2 = FrameState(frame) assert fs1 != fs2 @@ -55,7 +55,7 @@ def test_union_on_hacked_framestates(self): frame = self.getframe(self.func_simple) fs1 = FrameState(frame) - frame.locals_stack_w[frame.nlocals-1] = Variable() + frame.locals_stack_w[frame.pycode.co_nlocals-1] = Variable() fs2 = FrameState(frame) assert fs1.union(fs2) == fs2 # fs2 is more general assert fs2.union(fs1) == fs2 # fs2 is more general @@ -63,7 +63,7 @@ def test_restore_frame(self): frame = self.getframe(self.func_simple) fs1 = FrameState(frame) - frame.locals_stack_w[frame.nlocals-1] = Variable() + frame.locals_stack_w[frame.pycode.co_nlocals-1] = Variable() fs1.restoreframe(frame) assert fs1 == FrameState(frame) @@ -82,7 +82,7 @@ def test_getoutputargs(self): frame = self.getframe(self.func_simple) fs1 = FrameState(frame) - frame.locals_stack_w[frame.nlocals-1] = Variable() + frame.locals_stack_w[frame.pycode.co_nlocals-1] = Variable() fs2 = FrameState(frame) outputargs = fs1.getoutputargs(fs2) # 'x' -> 'x' is a Variable @@ -92,16 +92,16 @@ def test_union_different_constants(self): frame = self.getframe(self.func_simple) fs1 = FrameState(frame) - frame.locals_stack_w[frame.nlocals-1] = Constant(42) + frame.locals_stack_w[frame.pycode.co_nlocals-1] = Constant(42) fs2 = FrameState(frame) fs3 = fs1.union(fs2) fs3.restoreframe(frame) - assert isinstance(frame.locals_stack_w[frame.nlocals-1], Variable) - # ^^^ generalized + assert isinstance(frame.locals_stack_w[frame.pycode.co_nlocals-1], + Variable) # generalized def test_union_spectag(self): frame = self.getframe(self.func_simple) fs1 = FrameState(frame) - frame.locals_stack_w[frame.nlocals-1] = Constant(SpecTag()) + frame.locals_stack_w[frame.pycode.co_nlocals-1] = Constant(SpecTag()) fs2 = FrameState(frame) assert fs1.union(fs2) is None # UnionError diff --git a/pypy/objspace/std/dictmultiobject.py b/pypy/objspace/std/dictmultiobject.py --- 
a/pypy/objspace/std/dictmultiobject.py +++ b/pypy/objspace/std/dictmultiobject.py @@ -142,6 +142,17 @@ else: return result + def popitem(self, w_dict): + # this is a bad implementation: if we call popitem() repeatedly, + # it ends up taking n**2 time, because the next() calls below + # will take longer and longer. But all interesting strategies + # provide a better one. + space = self.space + iterator = self.iter(w_dict) + w_key, w_value = iterator.next() + self.delitem(w_dict, w_key) + return (w_key, w_value) + def clear(self, w_dict): strategy = self.space.fromcache(EmptyDictStrategy) storage = strategy.get_empty_storage() diff --git a/pypy/objspace/std/dictproxyobject.py b/pypy/objspace/std/dictproxyobject.py --- a/pypy/objspace/std/dictproxyobject.py +++ b/pypy/objspace/std/dictproxyobject.py @@ -3,7 +3,7 @@ from pypy.objspace.std.dictmultiobject import W_DictMultiObject, IteratorImplementation from pypy.objspace.std.dictmultiobject import DictStrategy from pypy.objspace.std.typeobject import unwrap_cell -from pypy.interpreter.error import OperationError +from pypy.interpreter.error import OperationError, operationerrfmt from pypy.rlib import rerased @@ -44,7 +44,8 @@ raise if not w_type.is_cpytype(): raise - # xxx obscure workaround: allow cpyext to write to type->tp_dict. + # xxx obscure workaround: allow cpyext to write to type->tp_dict + # xxx even in the case of a builtin type. # xxx like CPython, we assume that this is only done early after # xxx the type is created, and we don't invalidate any cache. w_type.dict_w[key] = w_value @@ -86,8 +87,14 @@ for (key, w_value) in self.unerase(w_dict.dstorage).dict_w.iteritems()] def clear(self, w_dict): - self.unerase(w_dict.dstorage).dict_w.clear() - self.unerase(w_dict.dstorage).mutated(None) + space = self.space + w_type = self.unerase(w_dict.dstorage) + if (not space.config.objspace.std.mutable_builtintypes + and not w_type.is_heaptype()): + msg = "can't clear dictionary of type '%s'" + raise operationerrfmt(space.w_TypeError, msg, w_type.name) + w_type.dict_w.clear() + w_type.mutated(None) class DictProxyIteratorImplementation(IteratorImplementation): def __init__(self, space, strategy, dictimplementation): diff --git a/pypy/objspace/std/test/test_dictproxy.py b/pypy/objspace/std/test/test_dictproxy.py --- a/pypy/objspace/std/test/test_dictproxy.py +++ b/pypy/objspace/std/test/test_dictproxy.py @@ -22,6 +22,9 @@ assert NotEmpty.string == 1 raises(TypeError, 'NotEmpty.__dict__.setdefault(15, 1)') + key, value = NotEmpty.__dict__.popitem() + assert (key == 'a' and value == 1) or (key == 'b' and value == 4) + def test_dictproxyeq(self): class a(object): pass @@ -43,6 +46,11 @@ assert s1 == s2 assert s1.startswith('{') and s1.endswith('}') + def test_immutable_dict_on_builtin_type(self): + raises(TypeError, "int.__dict__['a'] = 1") + raises(TypeError, int.__dict__.popitem) + raises(TypeError, int.__dict__.clear) + class AppTestUserObjectMethodCache(AppTestUserObject): def setup_class(cls): cls.space = gettestobjspace( diff --git a/pypy/objspace/std/test/test_typeobject.py b/pypy/objspace/std/test/test_typeobject.py --- a/pypy/objspace/std/test/test_typeobject.py +++ b/pypy/objspace/std/test/test_typeobject.py @@ -993,7 +993,9 @@ raises(TypeError, setattr, list, 'append', 42) raises(TypeError, setattr, list, 'foobar', 42) raises(TypeError, delattr, dict, 'keys') - + raises(TypeError, 'int.__dict__["a"] = 1') + raises(TypeError, 'int.__dict__.clear()') + def test_nontype_in_mro(self): class OldStyle: pass diff --git 
a/pypy/objspace/std/typeobject.py b/pypy/objspace/std/typeobject.py --- a/pypy/objspace/std/typeobject.py +++ b/pypy/objspace/std/typeobject.py @@ -345,9 +345,9 @@ return w_self._lookup_where(name) + @unroll_safe def lookup_starting_at(w_self, w_starttype, name): space = w_self.space - # XXX Optimize this with method cache look = False for w_class in w_self.mro_w: if w_class is w_starttype: diff --git a/pypy/rlib/debug.py b/pypy/rlib/debug.py --- a/pypy/rlib/debug.py +++ b/pypy/rlib/debug.py @@ -19,14 +19,24 @@ hop.exception_cannot_occur() hop.genop('debug_assert', vlist) -def fatalerror(msg, traceback=False): +def fatalerror(msg): + # print the RPython traceback and abort with a fatal error from pypy.rpython.lltypesystem import lltype from pypy.rpython.lltypesystem.lloperation import llop - if traceback: - llop.debug_print_traceback(lltype.Void) + llop.debug_print_traceback(lltype.Void) llop.debug_fatalerror(lltype.Void, msg) fatalerror._dont_inline_ = True -fatalerror._annspecialcase_ = 'specialize:arg(1)' +fatalerror._jit_look_inside_ = False +fatalerror._annenforceargs_ = [str] + +def fatalerror_notb(msg): + # a variant of fatalerror() that doesn't print the RPython traceback + from pypy.rpython.lltypesystem import lltype + from pypy.rpython.lltypesystem.lloperation import llop + llop.debug_fatalerror(lltype.Void, msg) +fatalerror_notb._dont_inline_ = True +fatalerror_notb._jit_look_inside_ = False +fatalerror_notb._annenforceargs_ = [str] class DebugLog(list): diff --git a/pypy/rlib/jit.py b/pypy/rlib/jit.py --- a/pypy/rlib/jit.py +++ b/pypy/rlib/jit.py @@ -450,6 +450,7 @@ assert v in self.reds self._alllivevars = dict.fromkeys( [name for name in self.greens + self.reds if '.' not in name]) + self._heuristic_order = {} # check if 'reds' and 'greens' are ordered self._make_extregistryentries() self.get_jitcell_at = get_jitcell_at self.set_jitcell_at = set_jitcell_at @@ -461,13 +462,61 @@ def _freeze_(self): return True + def _check_arguments(self, livevars): + assert dict.fromkeys(livevars) == self._alllivevars + # check heuristically that 'reds' and 'greens' are ordered as + # the JIT will need them to be: first INTs, then REFs, then + # FLOATs. 
+ if len(self._heuristic_order) < len(livevars): + from pypy.rlib.rarithmetic import (r_singlefloat, r_longlong, + r_ulonglong, r_uint) + added = False + for var, value in livevars.items(): + if var not in self._heuristic_order: + if (r_ulonglong is not r_uint and + isinstance(value, (r_longlong, r_ulonglong))): + assert 0, ("should not pass a r_longlong argument for " + "now, because on 32-bit machines it needs " + "to be ordered as a FLOAT but on 64-bit " + "machines as an INT") + elif isinstance(value, (int, long, r_singlefloat)): + kind = '1:INT' + elif isinstance(value, float): + kind = '3:FLOAT' + elif isinstance(value, (str, unicode)) and len(value) != 1: + kind = '2:REF' + elif isinstance(value, (list, dict)): + kind = '2:REF' + elif (hasattr(value, '__class__') + and value.__class__.__module__ != '__builtin__'): + if hasattr(value, '_freeze_'): + continue # value._freeze_() is better not called + elif getattr(value, '_alloc_flavor_', 'gc') == 'gc': + kind = '2:REF' + else: + kind = '1:INT' + else: + continue + self._heuristic_order[var] = kind + added = True + if added: + for color in ('reds', 'greens'): + lst = getattr(self, color) + allkinds = [self._heuristic_order.get(name, '?') + for name in lst] + kinds = [k for k in allkinds if k != '?'] + assert kinds == sorted(kinds), ( + "bad order of %s variables in the jitdriver: " + "must be INTs, REFs, FLOATs; got %r" % + (color, allkinds)) + def jit_merge_point(_self, **livevars): # special-cased by ExtRegistryEntry - assert dict.fromkeys(livevars) == _self._alllivevars + _self._check_arguments(livevars) def can_enter_jit(_self, **livevars): # special-cased by ExtRegistryEntry - assert dict.fromkeys(livevars) == _self._alllivevars + _self._check_arguments(livevars) def loop_header(self): # special-cased by ExtRegistryEntry diff --git a/pypy/rlib/libffi.py b/pypy/rlib/libffi.py --- a/pypy/rlib/libffi.py +++ b/pypy/rlib/libffi.py @@ -238,7 +238,7 @@ self = jit.promote(self) if argchain.numargs != len(self.argtypes): raise TypeError, 'Wrong number of arguments: %d expected, got %d' %\ - (argchain.numargs, len(self.argtypes)) + (len(self.argtypes), argchain.numargs) ll_args = self._prepare() i = 0 arg = argchain.first diff --git a/pypy/rlib/objectmodel.py b/pypy/rlib/objectmodel.py --- a/pypy/rlib/objectmodel.py +++ b/pypy/rlib/objectmodel.py @@ -23,9 +23,11 @@ class _Specialize(object): def memo(self): - """ Specialize functions based on argument values. All arguments has - to be constant at the compile time. The whole function call is replaced - by a call result then. + """ Specialize the function based on argument values. All arguments + have to be either constants or PBCs (i.e. instances of classes with a + _freeze_ method returning True). The function call is replaced by + just its result, or in case several PBCs are used, by some fast + look-up of the result. """ def decorated_func(func): func._annspecialcase_ = 'specialize:memo' @@ -33,8 +35,8 @@ return decorated_func def arg(self, *args): - """ Specialize function based on values of given positions of arguments. - They must be compile-time constants in order to work. + """ Specialize the function based on the values of given positions + of arguments. They must be compile-time constants in order to work. 
There will be a copy of provided function for each combination of given arguments on positions in args (that can lead to @@ -82,8 +84,7 @@ return decorated_func def ll_and_arg(self, *args): - """ This is like ll(), but instead of specializing on all arguments, - specializes on only the arguments at the given positions + """ This is like ll(), and additionally like arg(...). """ def decorated_func(func): func._annspecialcase_ = 'specialize:ll_and_arg' + self._wrap(args) diff --git a/pypy/rlib/test/test_jit.py b/pypy/rlib/test/test_jit.py --- a/pypy/rlib/test/test_jit.py +++ b/pypy/rlib/test/test_jit.py @@ -2,6 +2,7 @@ from pypy.conftest import option from pypy.rlib.jit import hint, we_are_jitted, JitDriver, elidable_promote from pypy.rlib.jit import JitHintError, oopspec, isconstant +from pypy.rlib.rarithmetic import r_uint from pypy.translator.translator import TranslationContext, graphof from pypy.rpython.test.tool import BaseRtypingTest, LLRtypeMixin, OORtypeMixin from pypy.rpython.lltypesystem import lltype @@ -146,6 +147,43 @@ res = self.interpret(f, [-234]) assert res == 1 + def test_argument_order_ok(self): + myjitdriver = JitDriver(greens=['i1', 'r1', 'f1'], reds=[]) + class A(object): + pass + myjitdriver.jit_merge_point(i1=42, r1=A(), f1=3.5) + # assert did not raise + + def test_argument_order_wrong(self): + myjitdriver = JitDriver(greens=['r1', 'i1', 'f1'], reds=[]) + class A(object): + pass + e = raises(AssertionError, + myjitdriver.jit_merge_point, i1=42, r1=A(), f1=3.5) + + def test_argument_order_more_precision_later(self): + myjitdriver = JitDriver(greens=['r1', 'i1', 'r2', 'f1'], reds=[]) + class A(object): + pass + myjitdriver.jit_merge_point(i1=42, r1=None, r2=None, f1=3.5) + e = raises(AssertionError, + myjitdriver.jit_merge_point, i1=42, r1=A(), r2=None, f1=3.5) + assert "got ['2:REF', '1:INT', '?', '3:FLOAT']" in repr(e.value) + + def test_argument_order_more_precision_later_2(self): + myjitdriver = JitDriver(greens=['r1', 'i1', 'r2', 'f1'], reds=[]) + class A(object): + pass + myjitdriver.jit_merge_point(i1=42, r1=None, r2=A(), f1=3.5) + e = raises(AssertionError, + myjitdriver.jit_merge_point, i1=42, r1=A(), r2=None, f1=3.5) + assert "got ['2:REF', '1:INT', '2:REF', '3:FLOAT']" in repr(e.value) + + def test_argument_order_accept_r_uint(self): + # this used to fail on 64-bit, because r_uint == r_ulonglong + myjitdriver = JitDriver(greens=['i1'], reds=[]) + myjitdriver.jit_merge_point(i1=r_uint(42)) + class TestJITLLtype(BaseTestJIT, LLRtypeMixin): pass diff --git a/pypy/rpython/memory/gc/generation.py b/pypy/rpython/memory/gc/generation.py --- a/pypy/rpython/memory/gc/generation.py +++ b/pypy/rpython/memory/gc/generation.py @@ -41,8 +41,8 @@ # the following values override the default arguments of __init__ when # translating to a real backend. - TRANSLATION_PARAMS = {'space_size': 8*1024*1024, # XXX adjust - 'nursery_size': 896*1024, + TRANSLATION_PARAMS = {'space_size': 8*1024*1024, # 8 MB + 'nursery_size': 3*1024*1024, # 3 MB 'min_nursery_size': 48*1024, 'auto_nursery_size': True} @@ -92,8 +92,9 @@ # the GC is fully setup now. The rest can make use of it. if self.auto_nursery_size: newsize = nursery_size_from_env() - if newsize <= 0: - newsize = env.estimate_best_nursery_size() + #if newsize <= 0: + # ---disabled--- just use the default value. 
+ # newsize = env.estimate_best_nursery_size() if newsize > 0: self.set_nursery_size(newsize) diff --git a/pypy/tool/jitlogparser/parser.py b/pypy/tool/jitlogparser/parser.py --- a/pypy/tool/jitlogparser/parser.py +++ b/pypy/tool/jitlogparser/parser.py @@ -387,7 +387,7 @@ m = re.search('guard \d+', comm) name = m.group(0) else: - name = comm[2:comm.find(':')-1] + name = " ".join(comm[2:].split(" ", 2)[:2]) if name in dumps: bname, start_ofs, dump = dumps[name] loop.force_asm = (lambda dump=dump, start_ofs=start_ofs, diff --git a/pypy/tool/release/package.py b/pypy/tool/release/package.py --- a/pypy/tool/release/package.py +++ b/pypy/tool/release/package.py @@ -82,6 +82,9 @@ for file in ['LICENSE', 'README']: shutil.copy(str(basedir.join(file)), str(pypydir)) pypydir.ensure('include', dir=True) + if sys.platform == 'win32': + shutil.copyfile(str(pypy_c.dirpath().join("libpypy-c.lib")), + str(pypydir.join('include/python27.lib'))) # we want to put there all *.h and *.inl from trunk/include # and from pypy/_interfaces includedir = basedir.join('include') diff --git a/pypy/translator/c/database.py b/pypy/translator/c/database.py --- a/pypy/translator/c/database.py +++ b/pypy/translator/c/database.py @@ -28,11 +28,13 @@ gctransformer = None def __init__(self, translator=None, standalone=False, + cpython_extension=False, gcpolicyclass=None, thread_enabled=False, sandbox=False): self.translator = translator self.standalone = standalone + self.cpython_extension = cpython_extension self.sandbox = sandbox if gcpolicyclass is None: gcpolicyclass = gc.RefcountingGcPolicy diff --git a/pypy/translator/c/dlltool.py b/pypy/translator/c/dlltool.py --- a/pypy/translator/c/dlltool.py +++ b/pypy/translator/c/dlltool.py @@ -14,11 +14,14 @@ CBuilder.__init__(self, *args, **kwds) def getentrypointptr(self): + entrypoints = [] bk = self.translator.annotator.bookkeeper - graphs = [bk.getdesc(f).cachedgraph(None) for f, _ in self.functions] - return [getfunctionptr(graph) for graph in graphs] + for f, _ in self.functions: + graph = bk.getdesc(f).getuniquegraph() + entrypoints.append(getfunctionptr(graph)) + return entrypoints - def gen_makefile(self, targetdir): + def gen_makefile(self, targetdir, exe_name=None): pass # XXX finish def compile(self): diff --git a/pypy/translator/c/extfunc.py b/pypy/translator/c/extfunc.py --- a/pypy/translator/c/extfunc.py +++ b/pypy/translator/c/extfunc.py @@ -106,7 +106,7 @@ yield ('RPYTHON_EXCEPTION_MATCH', exceptiondata.fn_exception_match) yield ('RPYTHON_TYPE_OF_EXC_INST', exceptiondata.fn_type_of_exc_inst) yield ('RPYTHON_RAISE_OSERROR', exceptiondata.fn_raise_OSError) - if not db.standalone: + if db.cpython_extension: yield ('RPYTHON_PYEXCCLASS2EXC', exceptiondata.fn_pyexcclass2exc) yield ('RPyExceptionOccurred1', exctransformer.rpyexc_occured_ptr.value) diff --git a/pypy/translator/c/gcc/trackgcroot.py b/pypy/translator/c/gcc/trackgcroot.py --- a/pypy/translator/c/gcc/trackgcroot.py +++ b/pypy/translator/c/gcc/trackgcroot.py @@ -471,8 +471,8 @@ return [] IGNORE_OPS_WITH_PREFIXES = dict.fromkeys([ - 'cmp', 'test', 'set', 'sahf', 'lahf', 'cltd', 'cld', 'std', - 'rep', 'movs', 'lods', 'stos', 'scas', 'cwtl', 'cwde', 'prefetch', + 'cmp', 'test', 'set', 'sahf', 'lahf', 'cld', 'std', + 'rep', 'movs', 'movhp', 'lods', 'stos', 'scas', 'cwde', 'prefetch', # floating-point operations cannot produce GC pointers 'f', 'cvt', 'ucomi', 'comi', 'subs', 'subp' , 'adds', 'addp', 'xorp', @@ -484,7 +484,9 @@ 'shl', 'shr', 'sal', 'sar', 'rol', 'ror', 'mul', 'imul', 'div', 'idiv', 'bswap', 'bt', 
'rdtsc', 'punpck', 'pshufd', 'pcmp', 'pand', 'psllw', 'pslld', 'psllq', - 'paddq', 'pinsr', + 'paddq', 'pinsr', 'pmul', 'psrl', + # sign-extending moves should not produce GC pointers + 'cbtw', 'cwtl', 'cwtd', 'cltd', 'cltq', 'cqto', # zero-extending moves should not produce GC pointers 'movz', # locked operations should not move GC pointers, at least so far @@ -1695,6 +1697,8 @@ } """ elif self.format in ('elf64', 'darwin64'): + if self.format == 'elf64': # gentoo patch: hardened systems + print >> output, "\t.section .note.GNU-stack,\"\",%progbits" print >> output, "\t.text" print >> output, "\t.globl %s" % _globalname('pypy_asm_stackwalk') _variant(elf64='.type pypy_asm_stackwalk, @function', diff --git a/pypy/translator/c/genc.py b/pypy/translator/c/genc.py --- a/pypy/translator/c/genc.py +++ b/pypy/translator/c/genc.py @@ -111,6 +111,7 @@ _compiled = False modulename = None split = False + cpython_extension = False def __init__(self, translator, entrypoint, config, gcpolicy=None, secondary_entrypoints=()): @@ -138,6 +139,7 @@ raise NotImplementedError("--gcrootfinder=asmgcc requires standalone") db = LowLevelDatabase(translator, standalone=self.standalone, + cpython_extension=self.cpython_extension, gcpolicyclass=gcpolicyclass, thread_enabled=self.config.translation.thread, sandbox=self.config.translation.sandbox) @@ -236,6 +238,8 @@ CBuilder.have___thread = self.translator.platform.check___thread() if not self.standalone: assert not self.config.translation.instrument + if self.cpython_extension: + defines['PYPY_CPYTHON_EXTENSION'] = 1 else: defines['PYPY_STANDALONE'] = db.get(pf) if self.config.translation.instrument: @@ -307,13 +311,18 @@ class CExtModuleBuilder(CBuilder): standalone = False + cpython_extension = True _module = None _wrapper = None def get_eci(self): from distutils import sysconfig python_inc = sysconfig.get_python_inc() - eci = ExternalCompilationInfo(include_dirs=[python_inc]) + eci = ExternalCompilationInfo( + include_dirs=[python_inc], + includes=["Python.h", + ], + ) return eci.merge(CBuilder.get_eci(self)) def getentrypointptr(self): # xxx diff --git a/pypy/translator/c/src/asm_gcc_x86.h b/pypy/translator/c/src/asm_gcc_x86.h --- a/pypy/translator/c/src/asm_gcc_x86.h +++ b/pypy/translator/c/src/asm_gcc_x86.h @@ -102,6 +102,12 @@ #endif /* !PYPY_CPU_HAS_STANDARD_PRECISION */ +#ifdef PYPY_X86_CHECK_SSE2 +#define PYPY_X86_CHECK_SSE2_DEFINED +extern void pypy_x86_check_sse2(void); +#endif + + /* implementations */ #ifndef PYPY_NOT_MAIN_FILE @@ -113,4 +119,25 @@ } # endif +# ifdef PYPY_X86_CHECK_SSE2 +void pypy_x86_check_sse2(void) +{ + //Read the CPU features. + int features; + asm("mov $1, %%eax\n" + "cpuid\n" + "mov %%edx, %0" + : "=g"(features) : : "eax", "ebx", "edx", "ecx"); + + //Check bits 25 and 26, this indicates SSE2 support + if (((features & (1 << 25)) == 0) || ((features & (1 << 26)) == 0)) + { + fprintf(stderr, "Old CPU with no SSE2 support, cannot continue.\n" + "You need to re-translate with " + "'--jit-backend=x86-without-sse2'\n"); + abort(); + } +} +# endif + #endif diff --git a/pypy/translator/c/src/debug_print.c b/pypy/translator/c/src/debug_print.c --- a/pypy/translator/c/src/debug_print.c +++ b/pypy/translator/c/src/debug_print.c @@ -1,3 +1,4 @@ +#define PYPY_NOT_MAIN_FILE #include #include diff --git a/pypy/translator/c/src/dtoa.c b/pypy/translator/c/src/dtoa.c --- a/pypy/translator/c/src/dtoa.c +++ b/pypy/translator/c/src/dtoa.c @@ -46,13 +46,13 @@ * of return type *Bigint all return NULL to indicate a malloc failure. 
* Similarly, rv_alloc and nrv_alloc (return type char *) return NULL on * failure. bigcomp now has return type int (it used to be void) and - * returns -1 on failure and 0 otherwise. _Py_dg_dtoa returns NULL - * on failure. _Py_dg_strtod indicates failure due to malloc failure + * returns -1 on failure and 0 otherwise. __Py_dg_dtoa returns NULL + * on failure. __Py_dg_strtod indicates failure due to malloc failure * by returning -1.0, setting errno=ENOMEM and *se to s00. * * 4. The static variable dtoa_result has been removed. Callers of - * _Py_dg_dtoa are expected to call _Py_dg_freedtoa to free - * the memory allocated by _Py_dg_dtoa. + * __Py_dg_dtoa are expected to call __Py_dg_freedtoa to free + * the memory allocated by __Py_dg_dtoa. * * 5. The code has been reformatted to better fit with Python's * C style guide (PEP 7). @@ -61,7 +61,7 @@ * that hasn't been MALLOC'ed, private_mem should only be used when k <= * Kmax. * - * 7. _Py_dg_strtod has been modified so that it doesn't accept strings with + * 7. __Py_dg_strtod has been modified so that it doesn't accept strings with * leading whitespace. * ***************************************************************/ @@ -283,7 +283,7 @@ #define Big0 (Frac_mask1 | Exp_msk1*(DBL_MAX_EXP+Bias-1)) #define Big1 0xffffffff -/* struct BCinfo is used to pass information from _Py_dg_strtod to bigcomp */ +/* struct BCinfo is used to pass information from __Py_dg_strtod to bigcomp */ typedef struct BCinfo BCinfo; struct @@ -494,7 +494,7 @@ /* convert a string s containing nd decimal digits (possibly containing a decimal separator at position nd0, which is ignored) to a Bigint. This - function carries on where the parsing code in _Py_dg_strtod leaves off: on + function carries on where the parsing code in __Py_dg_strtod leaves off: on entry, y9 contains the result of converting the first 9 digits. Returns NULL on failure. */ @@ -1050,7 +1050,7 @@ } /* Convert a scaled double to a Bigint plus an exponent. Similar to d2b, - except that it accepts the scale parameter used in _Py_dg_strtod (which + except that it accepts the scale parameter used in __Py_dg_strtod (which should be either 0 or 2*P), and the normalization for the return value is different (see below). On input, d should be finite and nonnegative, and d / 2**scale should be exactly representable as an IEEE 754 double. @@ -1351,9 +1351,9 @@ /* The bigcomp function handles some hard cases for strtod, for inputs with more than STRTOD_DIGLIM digits. It's called once an initial estimate for the double corresponding to the input string has - already been obtained by the code in _Py_dg_strtod. + already been obtained by the code in __Py_dg_strtod. - The bigcomp function is only called after _Py_dg_strtod has found a + The bigcomp function is only called after __Py_dg_strtod has found a double value rv such that either rv or rv + 1ulp represents the correctly rounded value corresponding to the original string. It determines which of these two values is the correct one by @@ -1368,12 +1368,12 @@ s0 points to the first significant digit of the input string. rv is a (possibly scaled) estimate for the closest double value to the - value represented by the original input to _Py_dg_strtod. If + value represented by the original input to __Py_dg_strtod. If bc->scale is nonzero, then rv/2^(bc->scale) is the approximation to the input value. bc is a struct containing information gathered during the parsing and - estimation steps of _Py_dg_strtod. 
Description of fields follows: + estimation steps of __Py_dg_strtod. Description of fields follows: bc->e0 gives the exponent of the input value, such that dv = (integer given by the bd->nd digits of s0) * 10**e0 @@ -1505,7 +1505,7 @@ } static double -_Py_dg_strtod(const char *s00, char **se) +__Py_dg_strtod(const char *s00, char **se) { int bb2, bb5, bbe, bd2, bd5, bs2, c, dsign, e, e1, error; int esign, i, j, k, lz, nd, nd0, odd, sign; @@ -1849,7 +1849,7 @@ for(;;) { - /* This is the main correction loop for _Py_dg_strtod. + /* This is the main correction loop for __Py_dg_strtod. We've got a decimal value tdv, and a floating-point approximation srv=rv/2^bc.scale to tdv. The aim is to determine whether srv is @@ -2283,7 +2283,7 @@ */ static void -_Py_dg_freedtoa(char *s) +__Py_dg_freedtoa(char *s) { Bigint *b = (Bigint *)((int *)s - 1); b->maxwds = 1 << (b->k = *(int*)b); @@ -2325,11 +2325,11 @@ */ /* Additional notes (METD): (1) returns NULL on failure. (2) to avoid memory - leakage, a successful call to _Py_dg_dtoa should always be matched by a - call to _Py_dg_freedtoa. */ + leakage, a successful call to __Py_dg_dtoa should always be matched by a + call to __Py_dg_freedtoa. */ static char * -_Py_dg_dtoa(double dd, int mode, int ndigits, +__Py_dg_dtoa(double dd, int mode, int ndigits, int *decpt, int *sign, char **rve) { /* Arguments ndigits, decpt, sign are similar to those @@ -2926,7 +2926,7 @@ if (b) Bfree(b); if (s0) - _Py_dg_freedtoa(s0); + __Py_dg_freedtoa(s0); return NULL; } @@ -2947,7 +2947,7 @@ _PyPy_SET_53BIT_PRECISION_HEADER; _PyPy_SET_53BIT_PRECISION_START; - result = _Py_dg_strtod(s00, se); + result = __Py_dg_strtod(s00, se); _PyPy_SET_53BIT_PRECISION_END; return result; } @@ -2959,14 +2959,14 @@ _PyPy_SET_53BIT_PRECISION_HEADER; _PyPy_SET_53BIT_PRECISION_START; - result = _Py_dg_dtoa(dd, mode, ndigits, decpt, sign, rve); + result = __Py_dg_dtoa(dd, mode, ndigits, decpt, sign, rve); _PyPy_SET_53BIT_PRECISION_END; return result; } void _PyPy_dg_freedtoa(char *s) { - _Py_dg_freedtoa(s); + __Py_dg_freedtoa(s); } /* End PYPY hacks */ diff --git a/pypy/translator/c/src/exception.h b/pypy/translator/c/src/exception.h --- a/pypy/translator/c/src/exception.h +++ b/pypy/translator/c/src/exception.h @@ -2,7 +2,7 @@ /************************************************************/ /*** C header subsection: exceptions ***/ -#if !defined(PYPY_STANDALONE) && !defined(PYPY_NOT_MAIN_FILE) +#if defined(PYPY_CPYTHON_EXTENSION) && !defined(PYPY_NOT_MAIN_FILE) PyObject *RPythonError; #endif @@ -74,7 +74,7 @@ RPyRaiseException(RPYTHON_TYPE_OF_EXC_INST(rexc), rexc); } -#ifndef PYPY_STANDALONE +#ifdef PYPY_CPYTHON_EXTENSION void RPyConvertExceptionFromCPython(void) { /* convert the CPython exception to an RPython one */ diff --git a/pypy/translator/c/src/g_include.h b/pypy/translator/c/src/g_include.h --- a/pypy/translator/c/src/g_include.h +++ b/pypy/translator/c/src/g_include.h @@ -2,7 +2,7 @@ /************************************************************/ /*** C header file for code produced by genc.py ***/ -#ifndef PYPY_STANDALONE +#ifdef PYPY_CPYTHON_EXTENSION # include "Python.h" # include "compile.h" # include "frameobject.h" diff --git a/pypy/translator/c/src/g_prerequisite.h b/pypy/translator/c/src/g_prerequisite.h --- a/pypy/translator/c/src/g_prerequisite.h +++ b/pypy/translator/c/src/g_prerequisite.h @@ -5,8 +5,6 @@ #ifdef PYPY_STANDALONE # include "src/commondefs.h" -#else -# include "Python.h" #endif #ifdef _WIN32 diff --git a/pypy/translator/c/src/main.h b/pypy/translator/c/src/main.h 
--- a/pypy/translator/c/src/main.h +++ b/pypy/translator/c/src/main.h @@ -36,6 +36,9 @@ RPyListOfString *list; pypy_asm_stack_bottom(); +#ifdef PYPY_X86_CHECK_SSE2_DEFINED + pypy_x86_check_sse2(); +#endif instrument_setup(); if (sizeof(void*) != SIZEOF_LONG) { diff --git a/pypy/translator/c/src/pyobj.h b/pypy/translator/c/src/pyobj.h --- a/pypy/translator/c/src/pyobj.h +++ b/pypy/translator/c/src/pyobj.h @@ -2,7 +2,7 @@ /************************************************************/ /*** C header subsection: untyped operations ***/ /*** as OP_XXX() macros calling the CPython API ***/ - +#ifdef PYPY_CPYTHON_EXTENSION #define op_bool(r,what) { \ int _retval = what; \ @@ -261,3 +261,5 @@ } #endif + +#endif /* PYPY_CPYTHON_EXTENSION */ diff --git a/pypy/translator/c/src/support.h b/pypy/translator/c/src/support.h --- a/pypy/translator/c/src/support.h +++ b/pypy/translator/c/src/support.h @@ -104,7 +104,7 @@ # define RPyBareItem(array, index) ((array)[index]) #endif -#ifndef PYPY_STANDALONE +#ifdef PYPY_CPYTHON_EXTENSION /* prototypes */ diff --git a/pypy/translator/c/test/test_dlltool.py b/pypy/translator/c/test/test_dlltool.py --- a/pypy/translator/c/test/test_dlltool.py +++ b/pypy/translator/c/test/test_dlltool.py @@ -2,7 +2,6 @@ from pypy.translator.c.dlltool import DLLDef from ctypes import CDLL import py -py.test.skip("fix this if needed") class TestDLLTool(object): def test_basic(self): @@ -16,8 +15,8 @@ d = DLLDef('lib', [(f, [int]), (b, [int])]) so = d.compile() dll = CDLL(str(so)) - assert dll.f(3) == 3 - assert dll.b(10) == 12 + assert dll.pypy_g_f(3) == 3 + assert dll.pypy_g_b(10) == 12 def test_split_criteria(self): def f(x): @@ -28,4 +27,5 @@ d = DLLDef('lib', [(f, [int]), (b, [int])]) so = d.compile() - assert py.path.local(so).dirpath().join('implement.c').check() + dirpath = py.path.local(so).dirpath() + assert dirpath.join('translator_c_test_test_dlltool.c').check() diff --git a/pypy/translator/driver.py b/pypy/translator/driver.py --- a/pypy/translator/driver.py +++ b/pypy/translator/driver.py @@ -331,6 +331,7 @@ raise Exception("stand-alone program entry point must return an " "int (and not, e.g., None or always raise an " "exception).") + annotator.complete() annotator.simplify() return s @@ -558,6 +559,9 @@ newsoname = newexename.new(basename=soname.basename) shutil.copy(str(soname), str(newsoname)) self.log.info("copied: %s" % (newsoname,)) + if sys.platform == 'win32': + shutil.copyfile(str(soname.new(ext='lib')), + str(newsoname.new(ext='lib'))) self.c_entryp = newexename self.log.info('usession directory: %s' % (udir,)) self.log.info("created: %s" % (self.c_entryp,)) diff --git a/pypy/translator/sandbox/test/test_sandbox.py b/pypy/translator/sandbox/test/test_sandbox.py --- a/pypy/translator/sandbox/test/test_sandbox.py +++ b/pypy/translator/sandbox/test/test_sandbox.py @@ -145,9 +145,9 @@ g = pipe.stdin f = pipe.stdout expect(f, g, "ll_os.ll_os_getenv", ("PYPY_GENERATIONGC_NURSERY",), None) - if sys.platform.startswith('linux'): # on Mac, uses another (sandboxsafe) approach - expect(f, g, "ll_os.ll_os_open", ("/proc/cpuinfo", 0, 420), - OSError(5232, "xyz")) + #if sys.platform.startswith('linux'): + # expect(f, g, "ll_os.ll_os_open", ("/proc/cpuinfo", 0, 420), + # OSError(5232, "xyz")) expect(f, g, "ll_os.ll_os_getenv", ("PYPY_GC_DEBUG",), None) g.close() tail = f.read() From noreply at buildbot.pypy.org Sun Feb 26 04:48:21 2012 From: noreply at buildbot.pypy.org (fijal) Date: Sun, 26 Feb 2012 04:48:21 +0100 (CET) Subject: [pypy-commit] pypy speedup-list-comprehension: 
write a test for BUILD_LIST_FROM_ARG Message-ID: <20120226034821.6DEB682366@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: speedup-list-comprehension Changeset: r52906:1de4281656cf Date: 2012-02-25 19:48 -0800 http://bitbucket.org/pypy/pypy/changeset/1de4281656cf/ Log: write a test for BUILD_LIST_FROM_ARG diff --git a/pypy/interpreter/astcompiler/test/test_compiler.py b/pypy/interpreter/astcompiler/test/test_compiler.py --- a/pypy/interpreter/astcompiler/test/test_compiler.py +++ b/pypy/interpreter/astcompiler/test/test_compiler.py @@ -908,3 +908,17 @@ return d['f'](5) """) assert 'generator' in space.str_w(space.repr(w_generator)) + + def test_list_comprehension(self): + source = "def f(): [i for i in l]" + source2 = "def f(): [i for i in l for j in l]" + source3 = "def f(): [i for i in l if i]" + counts = self.count_instructions(source) + assert ops.BUILD_LIST not in counts + assert counts[ops.BUILD_LIST_FROM_ARG] == 1 + counts = self.count_instructions(source2) + assert counts[ops.BUILD_LIST] == 1 + assert ops.BUILD_LIST_FROM_ARG not in counts + counts = self.count_instructions(source3) + assert counts[ops.BUILD_LIST] == 1 + assert ops.BUILD_LIST_FROM_ARG not in counts From noreply at buildbot.pypy.org Sun Feb 26 05:02:41 2012 From: noreply at buildbot.pypy.org (fijal) Date: Sun, 26 Feb 2012 05:02:41 +0100 (CET) Subject: [pypy-commit] pypy speedup-list-comprehension: fix test Message-ID: <20120226040241.80AA982366@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: speedup-list-comprehension Changeset: r52907:a73088e83e62 Date: 2012-02-25 20:00 -0800 http://bitbucket.org/pypy/pypy/changeset/a73088e83e62/ Log: fix test diff --git a/pypy/jit/metainterp/test/test_list.py b/pypy/jit/metainterp/test/test_list.py --- a/pypy/jit/metainterp/test/test_list.py +++ b/pypy/jit/metainterp/test/test_list.py @@ -258,4 +258,4 @@ assert res == f(37) # There is the one actual field on a, plus several fields on the list # itself - self.check_resops(getfield_gc=10) + self.check_simple_loop(getfield_gc=4) From noreply at buildbot.pypy.org Sun Feb 26 05:02:42 2012 From: noreply at buildbot.pypy.org (fijal) Date: Sun, 26 Feb 2012 05:02:42 +0100 (CET) Subject: [pypy-commit] pypy speedup-list-comprehension: Remove dead code Message-ID: <20120226040242.AE93F82366@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: speedup-list-comprehension Changeset: r52908:e127af981680 Date: 2012-02-25 20:00 -0800 http://bitbucket.org/pypy/pypy/changeset/e127af981680/ Log: Remove dead code diff --git a/REVIEW.rst b/REVIEW.rst --- a/REVIEW.rst +++ b/REVIEW.rst @@ -2,7 +2,6 @@ ============ -* explicit tests for the generated bytecode * dead code in jit/codewriter/support.py? * Do we need __length_hint__? * len_w instead of int_w(len()) diff --git a/pypy/jit/codewriter/support.py b/pypy/jit/codewriter/support.py --- a/pypy/jit/codewriter/support.py +++ b/pypy/jit/codewriter/support.py @@ -130,42 +130,8 @@ # ____________________________________________________________ # -# Manually map oopspec'ed operations back to their ll implementation -# coming from modules like pypy.rpython.rlist. The following -# functions are fished from the globals() by setup_extra_builtin(). 
- -def _ll_0_newlist(LIST): - return LIST.ll_newlist(0) -def _ll_1_newlist(LIST, count): - return LIST.ll_newlist(count) -def _ll_2_newlist(LIST, count, item): - return rlist.ll_alloc_and_set(LIST, count, item) -_ll_0_newlist.need_result_type = True -_ll_1_newlist.need_result_type = True -_ll_2_newlist.need_result_type = True - -def _ll_1_list_len(l): - return l.ll_length() -def _ll_2_list_getitem(l, index): - return rlist.ll_getitem(rlist.dum_checkidx, l, index) -def _ll_3_list_setitem(l, index, newitem): - rlist.ll_setitem(rlist.dum_checkidx, l, index, newitem) -def _ll_2_list_delitem(l, index): - rlist.ll_delitem(rlist.dum_checkidx, l, index) -def _ll_1_list_pop(l): - return rlist.ll_pop_default(rlist.dum_checkidx, l) -def _ll_2_list_pop(l, index): - return rlist.ll_pop(rlist.dum_checkidx, l, index) -_ll_2_list_append = rlist.ll_append -_ll_2_list_extend = rlist.ll_extend -_ll_3_list_insert = rlist.ll_insert_nonneg -_ll_4_list_setslice = rlist.ll_listsetslice -_ll_2_list_delslice_startonly = rlist.ll_listdelslice_startonly -_ll_3_list_delslice_startstop = rlist.ll_listdelslice_startstop -_ll_2_list_inplace_mul = rlist.ll_inplace_mul - -_ll_2_list_getitem_foldable = _ll_2_list_getitem -_ll_1_list_len_foldable = _ll_1_list_len +# The following functions are fished from the globals() by +# setup_extra_builtin(). _ll_5_list_ll_arraycopy = rgc.ll_arraycopy From noreply at buildbot.pypy.org Sun Feb 26 05:02:43 2012 From: noreply at buildbot.pypy.org (fijal) Date: Sun, 26 Feb 2012 05:02:43 +0100 (CET) Subject: [pypy-commit] pypy speedup-list-comprehension: improve the comment Message-ID: <20120226040243.D8CD882366@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: speedup-list-comprehension Changeset: r52909:9ac6f6b1f4eb Date: 2012-02-25 20:01 -0800 http://bitbucket.org/pypy/pypy/changeset/9ac6f6b1f4eb/ Log: improve the comment diff --git a/pypy/jit/metainterp/test/test_list.py b/pypy/jit/metainterp/test/test_list.py --- a/pypy/jit/metainterp/test/test_list.py +++ b/pypy/jit/metainterp/test/test_list.py @@ -256,6 +256,7 @@ return a * b res = self.meta_interp(f, [37]) assert res == f(37) - # There is the one actual field on a, plus several fields on the list - # itself + # by unrolling there should be no actual getfields on a in the + # loop. if caches were invalidated, there will be 2. 
+ # getfields are 2 reads of items and 2 of length self.check_simple_loop(getfield_gc=4) From noreply at buildbot.pypy.org Sun Feb 26 05:02:45 2012 From: noreply at buildbot.pypy.org (fijal) Date: Sun, 26 Feb 2012 05:02:45 +0100 (CET) Subject: [pypy-commit] pypy speedup-list-comprehension: use len_w Message-ID: <20120226040245.1369782366@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: speedup-list-comprehension Changeset: r52910:8d47eca2be46 Date: 2012-02-25 20:01 -0800 http://bitbucket.org/pypy/pypy/changeset/8d47eca2be46/ Log: use len_w diff --git a/pypy/interpreter/pyopcode.py b/pypy/interpreter/pyopcode.py --- a/pypy/interpreter/pyopcode.py +++ b/pypy/interpreter/pyopcode.py @@ -718,7 +718,7 @@ # value last_val = self.popvalue() try: - lgt = self.space.int_w(self.space.len(last_val)) + lgt = self.space.len_w(last_val) except OperationError, e: if e.async(space): raise From noreply at buildbot.pypy.org Sun Feb 26 05:02:46 2012 From: noreply at buildbot.pypy.org (fijal) Date: Sun, 26 Feb 2012 05:02:46 +0100 (CET) Subject: [pypy-commit] pypy speedup-list-comprehension: fix name error Message-ID: <20120226040246.43D6282366@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: speedup-list-comprehension Changeset: r52911:4269c3b7dcf0 Date: 2012-02-25 20:02 -0800 http://bitbucket.org/pypy/pypy/changeset/4269c3b7dcf0/ Log: fix name error diff --git a/pypy/interpreter/pyopcode.py b/pypy/interpreter/pyopcode.py --- a/pypy/interpreter/pyopcode.py +++ b/pypy/interpreter/pyopcode.py @@ -720,7 +720,7 @@ try: lgt = self.space.len_w(last_val) except OperationError, e: - if e.async(space): + if e.async(self.space): raise lgt = 0 # oh well self.pushvalue(self.space.newlist([], sizehint=lgt)) From noreply at buildbot.pypy.org Sun Feb 26 05:03:29 2012 From: noreply at buildbot.pypy.org (fijal) Date: Sun, 26 Feb 2012 05:03:29 +0100 (CET) Subject: [pypy-commit] pypy speedup-list-comprehension: remove review Message-ID: <20120226040329.D94CC82366@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: speedup-list-comprehension Changeset: r52912:8d61267703d0 Date: 2012-02-25 20:03 -0800 http://bitbucket.org/pypy/pypy/changeset/8d61267703d0/ Log: remove review diff --git a/REVIEW.rst b/REVIEW.rst deleted file mode 100644 --- a/REVIEW.rst +++ /dev/null @@ -1,8 +0,0 @@ -Review notes -============ - - -* dead code in jit/codewriter/support.py? -* Do we need __length_hint__? 
-* len_w instead of int_w(len()) -* share some code with _unpackiterable_unknown_length From noreply at buildbot.pypy.org Sun Feb 26 13:27:07 2012 From: noreply at buildbot.pypy.org (boemmels) Date: Sun, 26 Feb 2012 13:27:07 +0100 (CET) Subject: [pypy-commit] lang-scheme default: fix off by one in errormessage Message-ID: <20120226122707.96549820B1@wyvern.cs.uni-duesseldorf.de> Author: Juergen Boemmels Branch: Changeset: r41:ae3af9ef8b39 Date: 2012-02-18 11:39 +0100 http://bitbucket.org/pypy/lang-scheme/changeset/ae3af9ef8b39/ Log: fix off by one in errormessage diff --git a/scheme/procedure.py b/scheme/procedure.py --- a/scheme/procedure.py +++ b/scheme/procedure.py @@ -12,7 +12,7 @@ def procedure(self, ctx, lst): if len(lst) == 0: if self.default_result is None: - raise WrongArgsNumber(len(lst), ">1") + raise WrongArgsNumber(len(lst), ">=1") return self.default_result @@ -405,7 +405,7 @@ def procedure_tr(self, ctx, lst): if len(lst) < 2: - raise WrongArgsNumber(len(lst), ">2") + raise WrongArgsNumber(len(lst), ">=2") w_proc = lst[0] if not isinstance(w_proc, W_Procedure): From noreply at buildbot.pypy.org Sun Feb 26 13:27:08 2012 From: noreply at buildbot.pypy.org (boemmels) Date: Sun, 26 Feb 2012 13:27:08 +0100 (CET) Subject: [pypy-commit] lang-scheme default: minor testfix Message-ID: <20120226122708.AD5C78236C@wyvern.cs.uni-duesseldorf.de> Author: Juergen Boemmels Branch: Changeset: r42:62a632e86746 Date: 2012-02-26 13:26 +0100 http://bitbucket.org/pypy/lang-scheme/changeset/62a632e86746/ Log: minor testfix diff --git a/scheme/test/test_eval.py b/scheme/test/test_eval.py --- a/scheme/test/test_eval.py +++ b/scheme/test/test_eval.py @@ -821,7 +821,7 @@ w_result = eval_(ctx, "(apply list '((+ 2 3) (* 3 4)))") assert w_result.equal(parse_("((+ 2 3) (* 3 4))")) - py.test.raises(WrongArgsNumber, eval_, ctx, "(apply 1)") + py.test.raises(WrongArgsNumber, eval_, ctx, "(apply +)") py.test.raises(WrongArgType, eval_, ctx, "(apply 1 '(1))") py.test.raises(WrongArgType, eval_, ctx, "(apply + 42)") From noreply at buildbot.pypy.org Sun Feb 26 13:37:05 2012 From: noreply at buildbot.pypy.org (arigo) Date: Sun, 26 Feb 2012 13:37:05 +0100 (CET) Subject: [pypy-commit] pypy stm-gc: Backed out changeset ae34644cc94c Message-ID: <20120226123705.7FB21820B1@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-gc Changeset: r52913:306a16bdd3a2 Date: 2012-02-26 13:36 +0100 http://bitbucket.org/pypy/pypy/changeset/306a16bdd3a2/ Log: Backed out changeset ae34644cc94c diff --git a/pypy/rpython/memory/gctransform/stmframework.py b/pypy/rpython/memory/gctransform/stmframework.py --- a/pypy/rpython/memory/gctransform/stmframework.py +++ b/pypy/rpython/memory/gctransform/stmframework.py @@ -1,5 +1,5 @@ from pypy.rpython.memory.gctransform.framework import FrameworkGCTransformer -from pypy.rpython.memory.gctransform.shadowstack import ShadowStackRootWalker +from pypy.rpython.memory.gctransform.framework import BaseRootWalker from pypy.rpython.lltypesystem import llmemory from pypy.annotation import model as annmodel @@ -28,6 +28,12 @@ self.gcdata.gc.commit_transaction.im_func, [s_gc], annmodel.s_None) + def push_roots(self, hop, keep_current_args=False): + pass + + def pop_roots(self, hop, livevars): + pass + def build_root_walker(self): return StmStackRootWalker(self) @@ -63,5 +69,7 @@ hop.genop("direct_call", [self.stm_commit_ptr, self.c_const_gc]) -class StmStackRootWalker(ShadowStackRootWalker): - pass +class StmStackRootWalker(BaseRootWalker): + + def walk_stack_roots(self, collect_stack_root): + raise 
NotImplementedError From noreply at buildbot.pypy.org Sun Feb 26 18:48:00 2012 From: noreply at buildbot.pypy.org (fijal) Date: Sun, 26 Feb 2012 18:48:00 +0100 (CET) Subject: [pypy-commit] pypy speedup-list-comprehension: maybe fix translation Message-ID: <20120226174800.F3E81820B1@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: speedup-list-comprehension Changeset: r52914:7b5761103d00 Date: 2012-02-26 09:47 -0800 http://bitbucket.org/pypy/pypy/changeset/7b5761103d00/ Log: maybe fix translation diff --git a/pypy/objspace/std/listobject.py b/pypy/objspace/std/listobject.py --- a/pypy/objspace/std/listobject.py +++ b/pypy/objspace/std/listobject.py @@ -33,7 +33,7 @@ return W_ListObject.from_storage_and_strategy(space, storage, strategy) @jit.look_inside_iff(lambda space, list_w: jit.isconstant(len(list_w)) and len(list_w) < UNROLL_CUTOFF) -def get_strategy_from_list_objects(space, list_w, sizehint): +def get_strategy_from_list_objects(space, list_w, sizehint=-1): if not list_w: if sizehint != -1: return SizeListStrategy(space, sizehint) From noreply at buildbot.pypy.org Sun Feb 26 18:49:48 2012 From: noreply at buildbot.pypy.org (fijal) Date: Sun, 26 Feb 2012 18:49:48 +0100 (CET) Subject: [pypy-commit] pypy speedup-list-comprehension: maybe fix translation Message-ID: <20120226174948.A55B3820B1@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: speedup-list-comprehension Changeset: r52915:d38fdafbd462 Date: 2012-02-26 09:49 -0800 http://bitbucket.org/pypy/pypy/changeset/d38fdafbd462/ Log: maybe fix translation diff --git a/pypy/objspace/std/listobject.py b/pypy/objspace/std/listobject.py --- a/pypy/objspace/std/listobject.py +++ b/pypy/objspace/std/listobject.py @@ -32,8 +32,8 @@ storage = strategy.erase(None) return W_ListObject.from_storage_and_strategy(space, storage, strategy) - at jit.look_inside_iff(lambda space, list_w: jit.isconstant(len(list_w)) and len(list_w) < UNROLL_CUTOFF) -def get_strategy_from_list_objects(space, list_w, sizehint=-1): + at jit.look_inside_iff(lambda space, list_w, sizehint: jit.isconstant(len(list_w)) and len(list_w) < UNROLL_CUTOFF) +def get_strategy_from_list_objects(space, list_w, sizehint): if not list_w: if sizehint != -1: return SizeListStrategy(space, sizehint) From noreply at buildbot.pypy.org Sun Feb 26 19:30:46 2012 From: noreply at buildbot.pypy.org (fijal) Date: Sun, 26 Feb 2012 19:30:46 +0100 (CET) Subject: [pypy-commit] pypy speedup-list-comprehension: add few hints to make those functions pass Message-ID: <20120226183046.E2870820B1@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: speedup-list-comprehension Changeset: r52916:4c4c935ebd9b Date: 2012-02-26 10:30 -0800 http://bitbucket.org/pypy/pypy/changeset/4c4c935ebd9b/ Log: add few hints to make those functions pass diff --git a/pypy/rpython/lltypesystem/rlist.py b/pypy/rpython/lltypesystem/rlist.py --- a/pypy/rpython/lltypesystem/rlist.py +++ b/pypy/rpython/lltypesystem/rlist.py @@ -321,6 +321,7 @@ def ll_fixed_length(l): return len(l) +ll_fixed_length._always_inline_ = True def ll_fixed_items(l): return l @@ -328,10 +329,12 @@ def ll_fixed_getitem_fast(l, index): ll_assert(index < len(l), "fixed getitem out of bounds") return l[index] +ll_fixed_getitem_fast._always_inline_ = True def ll_fixed_setitem_fast(l, index, item): ll_assert(index < len(l), "fixed setitem out of bounds") l[index] = item +ll_fixed_setitem_fast._always_inline_ = True def newlist(llops, r_list, items_v, v_sizehint=None): LIST = r_list.LIST diff --git a/pypy/rpython/rlist.py 
b/pypy/rpython/rlist.py --- a/pypy/rpython/rlist.py +++ b/pypy/rpython/rlist.py @@ -559,7 +559,7 @@ def ll_len_foldable(l): return l.ll_length() -ll_len_foldable.oopspec = 'list.len_foldable(l)' +ll_len_foldable._always_inline_ = True def ll_list_is_true_foldable(l): return bool(l) and ll_len_foldable(l) != 0 @@ -618,7 +618,6 @@ res = l.ll_getitem_fast(index) ll_delitem_nonneg(dum_nocheck, l, index) return res -ll_pop_nonneg.oopspec = 'list.pop(l, index)' def ll_pop_default(func, l): length = l.ll_length() @@ -652,7 +651,6 @@ l.ll_setitem_fast(newlength, null) l._ll_resize_le(newlength) return res -ll_pop_zero.oopspec = 'list.pop(l, 0)' def ll_pop(func, l, index): length = l.ll_length() @@ -712,7 +710,7 @@ def ll_getitem_foldable_nonneg(l, index): ll_assert(index >= 0, "unexpectedly negative list getitem index") return l.ll_getitem_fast(index) -ll_getitem_foldable_nonneg.oopspec = 'list.getitem_foldable(l, index)' +ll_getitem_foldable_nonneg._always_inline_ = True def ll_getitem_foldable(l, index): if index < 0: From noreply at buildbot.pypy.org Mon Feb 27 10:50:25 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Mon, 27 Feb 2012 10:50:25 +0100 (CET) Subject: [pypy-commit] pypy default: c_longdouble is not supported by PyPy. Skip a related test Message-ID: <20120227095025.A583D820B1@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: Changeset: r52917:c80ff2d04fb3 Date: 2012-02-27 10:50 +0100 http://bitbucket.org/pypy/pypy/changeset/c80ff2d04fb3/ Log: c_longdouble is not supported by PyPy. Skip a related test diff --git a/lib-python/modified-2.7/ctypes/test/test_arrays.py b/lib-python/modified-2.7/ctypes/test/test_arrays.py --- a/lib-python/modified-2.7/ctypes/test/test_arrays.py +++ b/lib-python/modified-2.7/ctypes/test/test_arrays.py @@ -1,12 +1,23 @@ import unittest from ctypes import * +from test.test_support import impl_detail formats = "bBhHiIlLqQfd" +# c_longdouble commented out for PyPy, look at the commend in test_longdouble formats = c_byte, c_ubyte, c_short, c_ushort, c_int, c_uint, \ - c_long, c_ulonglong, c_float, c_double, c_longdouble + c_long, c_ulonglong, c_float, c_double #, c_longdouble class ArrayTestCase(unittest.TestCase): + + @impl_detail('long double not supported by PyPy', pypy=False) + def test_longdouble(self): + """ + This test is empty. It's just here to remind that we commented out + c_longdouble in "formats". If pypy will ever supports c_longdouble, we + should kill this test and uncomment c_longdouble inside formats. + """ + def test_simple(self): # create classes holding simple numeric types, and check # various properties. From noreply at buildbot.pypy.org Mon Feb 27 11:17:24 2012 From: noreply at buildbot.pypy.org (arigo) Date: Mon, 27 Feb 2012 11:17:24 +0100 (CET) Subject: [pypy-commit] pypy default: Hack to display again the log of the loop before sending it to the Message-ID: <20120227101724.A9197820B1@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r52918:8e55816f5dfe Date: 2012-02-27 11:15 +0100 http://bitbucket.org/pypy/pypy/changeset/8e55816f5dfe/ Log: Hack to display again the log of the loop before sending it to the backend, in addition to after sending it to the backend. That's necessary to make any sense of the error if the backend crashes. 
diff --git a/pypy/jit/metainterp/compile.py b/pypy/jit/metainterp/compile.py --- a/pypy/jit/metainterp/compile.py +++ b/pypy/jit/metainterp/compile.py @@ -289,8 +289,21 @@ assert isinstance(token, TargetToken) assert token.original_jitcell_token is None token.original_jitcell_token = trace.original_jitcell_token - - + + +def do_compile_loop(metainterp_sd, inputargs, operations, looptoken, + log=True, name=''): + metainterp_sd.logger_ops.log_loop(inputargs, operations, -2, + 'compiling', name=name) + return metainterp_sd.cpu.compile_loop(inputargs, operations, looptoken, + log=log, name=name) + +def do_compile_bridge(metainterp_sd, faildescr, inputargs, operations, + original_loop_token, log=True): + metainterp_sd.logger_ops.log_bridge(inputargs, operations, -2) + return metainterp_sd.cpu.compile_bridge(faildescr, inputargs, operations, + original_loop_token, log=log) + def send_loop_to_backend(greenkey, jitdriver_sd, metainterp_sd, loop, type): vinfo = jitdriver_sd.virtualizable_info if vinfo is not None: @@ -319,9 +332,9 @@ metainterp_sd.profiler.start_backend() debug_start("jit-backend") try: - asminfo = metainterp_sd.cpu.compile_loop(loop.inputargs, operations, - original_jitcell_token, - name=loopname) + asminfo = do_compile_loop(metainterp_sd, loop.inputargs, + operations, original_jitcell_token, + name=loopname) finally: debug_stop("jit-backend") metainterp_sd.profiler.end_backend() @@ -333,7 +346,6 @@ metainterp_sd.stats.compiled() metainterp_sd.log("compiled new " + type) # - loopname = jitdriver_sd.warmstate.get_location_str(greenkey) if asminfo is not None: ops_offset = asminfo.ops_offset else: @@ -365,9 +377,9 @@ metainterp_sd.profiler.start_backend() debug_start("jit-backend") try: - asminfo = metainterp_sd.cpu.compile_bridge(faildescr, inputargs, - operations, - original_loop_token) + asminfo = do_compile_bridge(metainterp_sd, faildescr, inputargs, + operations, + original_loop_token) finally: debug_stop("jit-backend") metainterp_sd.profiler.end_backend() diff --git a/pypy/jit/metainterp/logger.py b/pypy/jit/metainterp/logger.py --- a/pypy/jit/metainterp/logger.py +++ b/pypy/jit/metainterp/logger.py @@ -18,6 +18,10 @@ debug_start("jit-log-noopt-loop") logops = self._log_operations(inputargs, operations, ops_offset) debug_stop("jit-log-noopt-loop") + elif number == -2: + debug_start("jit-log-compiling-loop") + logops = self._log_operations(inputargs, operations, ops_offset) + debug_stop("jit-log-compiling-loop") else: debug_start("jit-log-opt-loop") debug_print("# Loop", number, '(%s)' % name , ":", type, @@ -31,6 +35,10 @@ debug_start("jit-log-noopt-bridge") logops = self._log_operations(inputargs, operations, ops_offset) debug_stop("jit-log-noopt-bridge") + elif number == -2: + debug_start("jit-log-compiling-bridge") + logops = self._log_operations(inputargs, operations, ops_offset) + debug_stop("jit-log-compiling-bridge") else: debug_start("jit-log-opt-bridge") debug_print("# bridge out of Guard", number, From noreply at buildbot.pypy.org Mon Feb 27 11:17:26 2012 From: noreply at buildbot.pypy.org (arigo) Date: Mon, 27 Feb 2012 11:17:26 +0100 (CET) Subject: [pypy-commit] pypy default: merge heads Message-ID: <20120227101726.2D00B820B1@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r52919:5a84240692b4 Date: 2012-02-27 11:15 +0100 http://bitbucket.org/pypy/pypy/changeset/5a84240692b4/ Log: merge heads diff --git a/lib-python/modified-2.7/ctypes/test/test_arrays.py b/lib-python/modified-2.7/ctypes/test/test_arrays.py --- 
a/lib-python/modified-2.7/ctypes/test/test_arrays.py +++ b/lib-python/modified-2.7/ctypes/test/test_arrays.py @@ -1,12 +1,23 @@ import unittest from ctypes import * +from test.test_support import impl_detail formats = "bBhHiIlLqQfd" +# c_longdouble commented out for PyPy, look at the commend in test_longdouble formats = c_byte, c_ubyte, c_short, c_ushort, c_int, c_uint, \ - c_long, c_ulonglong, c_float, c_double, c_longdouble + c_long, c_ulonglong, c_float, c_double #, c_longdouble class ArrayTestCase(unittest.TestCase): + + @impl_detail('long double not supported by PyPy', pypy=False) + def test_longdouble(self): + """ + This test is empty. It's just here to remind that we commented out + c_longdouble in "formats". If pypy will ever supports c_longdouble, we + should kill this test and uncomment c_longdouble inside formats. + """ + def test_simple(self): # create classes holding simple numeric types, and check # various properties. From noreply at buildbot.pypy.org Mon Feb 27 11:45:08 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Mon, 27 Feb 2012 11:45:08 +0100 (CET) Subject: [pypy-commit] pypy py3k: start to kill the references to the module/_file, which will be killed soon. Message-ID: <20120227104508.2B9E2820B1@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52920:319e160de1b9 Date: 2012-02-27 11:30 +0100 http://bitbucket.org/pypy/pypy/changeset/319e160de1b9/ Log: start to kill the references to the module/_file, which will be killed soon. module/marshal used to special-case W_File for performances: now the fast path is disabled, we should evenutally redo it for _io files. diff --git a/pypy/config/pypyoption.py b/pypy/config/pypyoption.py --- a/pypy/config/pypyoption.py +++ b/pypy/config/pypyoption.py @@ -13,14 +13,14 @@ and not p.basename.startswith('test')] essential_modules = dict.fromkeys( - ["exceptions", "_file", "sys", "builtins", "posix"] + ["exceptions", "_io", "sys", "builtins", "posix"] ) default_modules = essential_modules.copy() default_modules.update(dict.fromkeys( ["_codecs", "atexit", "gc", "_weakref", "marshal", "errno", "imp", "math", "cmath", "_sre", "_pickle_support", "operator", - "parser", "symbol", "token", "_ast", "_io", "_random", "__pypy__", + "parser", "symbol", "token", "_ast", "_random", "__pypy__", "_string", "_testing"])) diff --git a/pypy/module/array/interp_array.py b/pypy/module/array/interp_array.py --- a/pypy/module/array/interp_array.py +++ b/pypy/module/array/interp_array.py @@ -4,7 +4,6 @@ from pypy.interpreter.error import OperationError from pypy.interpreter.gateway import interp2app, unwrap_spec from pypy.interpreter.typedef import GetSetProperty, make_weakref_descr -from pypy.module._file.interp_file import W_File from pypy.objspace.std.model import W_Object from pypy.objspace.std.multimethod import FailedToImplement from pypy.objspace.std.stdtypedef import SMM, StdTypeDef diff --git a/pypy/module/marshal/interp_marshal.py b/pypy/module/marshal/interp_marshal.py --- a/pypy/module/marshal/interp_marshal.py +++ b/pypy/module/marshal/interp_marshal.py @@ -1,19 +1,16 @@ from pypy.interpreter.error import OperationError from pypy.rlib.rarithmetic import intmask from pypy.rlib import rstackovf -from pypy.module._file.interp_file import W_File Py_MARSHAL_VERSION = 2 def dump(space, w_data, w_f, w_version=Py_MARSHAL_VERSION): """Write the 'data' object into the open file 'f'.""" - # special case real files for performance - file = space.interpclass_w(w_f) - if isinstance(file, W_File): - writer = 
DirectStreamWriter(space, file) - else: - writer = FileWriter(space, w_f) + # XXX: before py3k, we special-cased W_File to use a more performant + # FileWriter class. Should we do the same for py3k? Look also at + # DirectStreamWriter + writer = FileWriter(space, w_f) try: # note: bound methods are currently not supported, # so we have to pass the instance in, instead. @@ -32,12 +29,10 @@ def load(space, w_f): """Read one value from the file 'f' and return it.""" - # special case real files for performance - file = space.interpclass_w(w_f) - if isinstance(file, W_File): - reader = DirectStreamReader(space, file) - else: - reader = FileReader(space, w_f) + # XXX: before py3k, we special-cased W_File to use a more performant + # FileWriter class. Should we do the same for py3k? Look also at + # DirectStreamReader + reader = FileReader(space, w_f) try: u = Unmarshaller(space, reader) return u.load_w_obj() @@ -120,10 +115,16 @@ self.file.unlock() class DirectStreamWriter(StreamReaderWriter): + """ + XXX: this class is unused right now. Look at the comment in dump() + """ def write(self, data): self.file.do_direct_write(data) class DirectStreamReader(StreamReaderWriter): + """ + XXX: this class is unused right now. Look at the comment in dump() + """ def read(self, n): data = self.file.direct_read(n) if len(data) < n: From noreply at buildbot.pypy.org Mon Feb 27 11:45:09 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Mon, 27 Feb 2012 11:45:09 +0100 (CET) Subject: [pypy-commit] pypy py3k: fix syntax. The test is still skipped, but this happens also on default Message-ID: <20120227104509.6180F820B1@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52921:ad992ca38af4 Date: 2012-02-27 11:37 +0100 http://bitbucket.org/pypy/pypy/changeset/ad992ca38af4/ Log: fix syntax. The test is still skipped, but this happens also on default diff --git a/pypy/module/imp/test/test_import.py b/pypy/module/imp/test/test_import.py --- a/pypy/module/imp/test/test_import.py +++ b/pypy/module/imp/test/test_import.py @@ -364,13 +364,13 @@ import sys, os p = os.path.join(sys.path[-1], 'readonly') try: - os.chmod(p, 0555) + os.chmod(p, 0o555) except: skip("cannot chmod() the test directory to read-only") try: import readonly.x # cannot write x.pyc, but should not crash finally: - os.chmod(p, 0775) + os.chmod(p, 0o775) def test__import__empty_string(self): raises(ValueError, __import__, "") From noreply at buildbot.pypy.org Mon Feb 27 11:45:10 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Mon, 27 Feb 2012 11:45:10 +0100 (CET) Subject: [pypy-commit] pypy default: 'typo'. The directory hierarchy created by the test is put in sys.path[0], not [-1]. This test has been silently skipped for ages probably Message-ID: <20120227104510.90A81820B1@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: Changeset: r52922:b081147f76f3 Date: 2012-02-27 11:41 +0100 http://bitbucket.org/pypy/pypy/changeset/b081147f76f3/ Log: 'typo'. The directory hierarchy created by the test is put in sys.path[0], not [-1]. 
This test has been silently skipped for ages probably diff --git a/pypy/module/imp/test/test_import.py b/pypy/module/imp/test/test_import.py --- a/pypy/module/imp/test/test_import.py +++ b/pypy/module/imp/test/test_import.py @@ -357,7 +357,7 @@ def test_cannot_write_pyc(self): import sys, os - p = os.path.join(sys.path[-1], 'readonly') + p = os.path.join(sys.path[0], 'readonly') try: os.chmod(p, 0555) except: From noreply at buildbot.pypy.org Mon Feb 27 11:45:12 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Mon, 27 Feb 2012 11:45:12 +0100 (CET) Subject: [pypy-commit] pypy py3k: hg merge default Message-ID: <20120227104512.93610820B1@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52923:9e1b5a6a8752 Date: 2012-02-27 11:43 +0100 http://bitbucket.org/pypy/pypy/changeset/9e1b5a6a8752/ Log: hg merge default diff --git a/lib-python/modified-2.7/ctypes/test/test_arrays.py b/lib-python/modified-2.7/ctypes/test/test_arrays.py --- a/lib-python/modified-2.7/ctypes/test/test_arrays.py +++ b/lib-python/modified-2.7/ctypes/test/test_arrays.py @@ -1,12 +1,23 @@ import unittest from ctypes import * +from test.test_support import impl_detail formats = "bBhHiIlLqQfd" +# c_longdouble commented out for PyPy, look at the commend in test_longdouble formats = c_byte, c_ubyte, c_short, c_ushort, c_int, c_uint, \ - c_long, c_ulonglong, c_float, c_double, c_longdouble + c_long, c_ulonglong, c_float, c_double #, c_longdouble class ArrayTestCase(unittest.TestCase): + + @impl_detail('long double not supported by PyPy', pypy=False) + def test_longdouble(self): + """ + This test is empty. It's just here to remind that we commented out + c_longdouble in "formats". If pypy will ever supports c_longdouble, we + should kill this test and uncomment c_longdouble inside formats. + """ + def test_simple(self): # create classes holding simple numeric types, and check # various properties. diff --git a/pypy/jit/backend/x86/assembler.py b/pypy/jit/backend/x86/assembler.py --- a/pypy/jit/backend/x86/assembler.py +++ b/pypy/jit/backend/x86/assembler.py @@ -33,7 +33,7 @@ from pypy.jit.backend.x86.support import values_array from pypy.jit.backend.x86 import support from pypy.rlib.debug import (debug_print, debug_start, debug_stop, - have_debug_prints, fatalerror_notb) + have_debug_prints) from pypy.rlib import rgc from pypy.rlib.clibffi import FFI_DEFAULT_ABI from pypy.jit.backend.x86.jump import remap_frame_layout @@ -104,7 +104,6 @@ self._debug = v def setup_once(self): - self._check_sse2() # the address of the function called by 'new' gc_ll_descr = self.cpu.gc_ll_descr gc_ll_descr.initialize() @@ -162,28 +161,6 @@ debug_print(prefix + ':' + str(struct.i)) debug_stop('jit-backend-counts') - _CHECK_SSE2_FUNC_PTR = lltype.Ptr(lltype.FuncType([], lltype.Signed)) - - def _check_sse2(self): - if WORD == 8: - return # all x86-64 CPUs support SSE2 - if not self.cpu.supports_floats: - return # the CPU doesn't support float, so we don't need SSE2 - # - from pypy.jit.backend.x86.detect_sse2 import INSNS - mc = codebuf.MachineCodeBlockWrapper() - for c in INSNS: - mc.writechar(c) - rawstart = mc.materialize(self.cpu.asmmemmgr, []) - fnptr = rffi.cast(self._CHECK_SSE2_FUNC_PTR, rawstart) - features = fnptr() - if bool(features & (1<<25)) and bool(features & (1<<26)): - return # CPU supports SSE2 - fatalerror_notb( - "This version of PyPy was compiled for a x86 CPU supporting SSE2.\n" - "Your CPU is too old. 
Please translate a PyPy with the option:\n" - "--jit-backend=x86-without-sse2") - def _build_float_constants(self): datablockwrapper = MachineDataBlockWrapper(self.cpu.asmmemmgr, []) float_constants = datablockwrapper.malloc_aligned(32, alignment=16) diff --git a/pypy/jit/backend/x86/detect_sse2.py b/pypy/jit/backend/x86/detect_sse2.py --- a/pypy/jit/backend/x86/detect_sse2.py +++ b/pypy/jit/backend/x86/detect_sse2.py @@ -1,18 +1,17 @@ import autopath +from pypy.rpython.lltypesystem import lltype, rffi +from pypy.rlib.rmmap import alloc, free -INSNS = ("\xB8\x01\x00\x00\x00" # MOV EAX, 1 - "\x53" # PUSH EBX - "\x0F\xA2" # CPUID - "\x5B" # POP EBX - "\x92" # XCHG EAX, EDX - "\xC3") # RET def detect_sse2(): - from pypy.rpython.lltypesystem import lltype, rffi - from pypy.rlib.rmmap import alloc, free data = alloc(4096) pos = 0 - for c in INSNS: + for c in ("\xB8\x01\x00\x00\x00" # MOV EAX, 1 + "\x53" # PUSH EBX + "\x0F\xA2" # CPUID + "\x5B" # POP EBX + "\x92" # XCHG EAX, EDX + "\xC3"): # RET data[pos] = c pos += 1 fnptr = rffi.cast(lltype.Ptr(lltype.FuncType([], lltype.Signed)), data) diff --git a/pypy/jit/backend/x86/support.py b/pypy/jit/backend/x86/support.py --- a/pypy/jit/backend/x86/support.py +++ b/pypy/jit/backend/x86/support.py @@ -1,6 +1,7 @@ import sys from pypy.rpython.lltypesystem import lltype, rffi, llmemory from pypy.translator.tool.cbuild import ExternalCompilationInfo +from pypy.jit.backend.x86.arch import WORD def values_array(TP, size): @@ -37,8 +38,13 @@ if sys.platform == 'win32': ensure_sse2_floats = lambda : None + # XXX check for SSE2 on win32 too else: + if WORD == 4: + extra = ['-DPYPY_X86_CHECK_SSE2'] + else: + extra = [] ensure_sse2_floats = rffi.llexternal_use_eci(ExternalCompilationInfo( compile_extra = ['-msse2', '-mfpmath=sse', - '-DPYPY_CPU_HAS_STANDARD_PRECISION'], + '-DPYPY_CPU_HAS_STANDARD_PRECISION'] + extra, )) diff --git a/pypy/jit/metainterp/test/test_quasiimmut.py b/pypy/jit/metainterp/test/test_quasiimmut.py --- a/pypy/jit/metainterp/test/test_quasiimmut.py +++ b/pypy/jit/metainterp/test/test_quasiimmut.py @@ -8,7 +8,7 @@ from pypy.jit.metainterp.quasiimmut import get_current_qmut_instance from pypy.jit.metainterp.test.support import LLJitMixin from pypy.jit.codewriter.policy import StopAtXPolicy -from pypy.rlib.jit import JitDriver, dont_look_inside +from pypy.rlib.jit import JitDriver, dont_look_inside, unroll_safe def test_get_current_qmut_instance(): @@ -480,6 +480,32 @@ assert res == 1 self.check_jitcell_token_count(2) + def test_for_loop_array(self): + myjitdriver = JitDriver(greens=[], reds=["n", "i"]) + class Foo(object): + _immutable_fields_ = ["x?[*]"] + def __init__(self, x): + self.x = x + f = Foo([1, 3, 5, 6]) + @unroll_safe + def g(v): + for x in f.x: + if x & 1 == 0: + v += 1 + return v + def main(n): + i = 0 + while i < n: + myjitdriver.jit_merge_point(n=n, i=i) + i = g(i) + return i + res = self.meta_interp(main, [10]) + assert res == 10 + self.check_resops({ + "int_add": 2, "int_lt": 2, "jump": 1, "guard_true": 2, + "guard_not_invalidated": 2 + }) + class TestLLtypeGreenFieldsTests(QuasiImmutTests, LLJitMixin): pass diff --git a/pypy/module/_io/test/test_fileio.py b/pypy/module/_io/test/test_fileio.py --- a/pypy/module/_io/test/test_fileio.py +++ b/pypy/module/_io/test/test_fileio.py @@ -170,7 +170,7 @@ space = make_objspace(config) space.appexec([space.wrap(str(tmpfile))], """(tmpfile): import io - f = io.open(tmpfile, 'w') + f = io.open(tmpfile, 'w', encoding='ascii') f.write('42') # no flush() and no close() import sys; 
sys._keepalivesomewhereobscure = f diff --git a/pypy/module/cpyext/api.py b/pypy/module/cpyext/api.py --- a/pypy/module/cpyext/api.py +++ b/pypy/module/cpyext/api.py @@ -406,7 +406,7 @@ }.items(): GLOBALS['Py%s_Type#' % (cpyname, )] = ('PyTypeObject*', pypyexpr) - for cpyname in 'Method List Int Long Dict Tuple'.split(): + for cpyname in 'Method List Long Dict Tuple'.split(): FORWARD_DECLS.append('typedef struct { PyObject_HEAD } ' 'Py%sObject' % (cpyname, )) build_exported_objects() diff --git a/pypy/module/cpyext/include/intobject.h b/pypy/module/cpyext/include/intobject.h --- a/pypy/module/cpyext/include/intobject.h +++ b/pypy/module/cpyext/include/intobject.h @@ -7,6 +7,11 @@ extern "C" { #endif +typedef struct { + PyObject_HEAD + long ob_ival; +} PyIntObject; + #ifdef __cplusplus } #endif diff --git a/pypy/module/cpyext/intobject.py b/pypy/module/cpyext/intobject.py --- a/pypy/module/cpyext/intobject.py +++ b/pypy/module/cpyext/intobject.py @@ -2,11 +2,37 @@ from pypy.rpython.lltypesystem import rffi, lltype from pypy.interpreter.error import OperationError from pypy.module.cpyext.api import ( - cpython_api, build_type_checkers, PyObject, - CONST_STRING, CANNOT_FAIL, Py_ssize_t) + cpython_api, cpython_struct, build_type_checkers, bootstrap_function, + PyObject, PyObjectFields, CONST_STRING, CANNOT_FAIL, Py_ssize_t) +from pypy.module.cpyext.pyobject import ( + make_typedescr, track_reference, RefcountState, from_ref) from pypy.rlib.rarithmetic import r_uint, intmask, LONG_TEST +from pypy.objspace.std.intobject import W_IntObject import sys +PyIntObjectStruct = lltype.ForwardReference() +PyIntObject = lltype.Ptr(PyIntObjectStruct) +PyIntObjectFields = PyObjectFields + \ + (("ob_ival", rffi.LONG),) +cpython_struct("PyIntObject", PyIntObjectFields, PyIntObjectStruct) + + at bootstrap_function +def init_intobject(space): + "Type description of PyIntObject" + make_typedescr(space.w_int.instancetypedef, + basestruct=PyIntObject.TO, + realize=int_realize) + +def int_realize(space, obj): + intval = rffi.cast(lltype.Signed, rffi.cast(PyIntObject, obj).c_ob_ival) + w_type = from_ref(space, rffi.cast(PyObject, obj.c_ob_type)) + w_obj = space.allocate_instance(W_IntObject, w_type) + w_obj.__init__(intval) + track_reference(space, obj, w_obj) + state = space.fromcache(RefcountState) + state.set_lifeline(w_obj, obj) + return w_obj + PyInt_Check, PyInt_CheckExact = build_type_checkers("Int") @cpython_api([], lltype.Signed, error=CANNOT_FAIL) diff --git a/pypy/module/cpyext/object.py b/pypy/module/cpyext/object.py --- a/pypy/module/cpyext/object.py +++ b/pypy/module/cpyext/object.py @@ -193,7 +193,7 @@ if not obj: PyErr_NoMemory(space) obj.c_ob_type = type - _Py_NewReference(space, obj) + obj.c_ob_refcnt = 1 return obj @cpython_api([PyVarObject, PyTypeObjectPtr, Py_ssize_t], PyObject) diff --git a/pypy/module/cpyext/pyobject.py b/pypy/module/cpyext/pyobject.py --- a/pypy/module/cpyext/pyobject.py +++ b/pypy/module/cpyext/pyobject.py @@ -17,6 +17,7 @@ class BaseCpyTypedescr(object): basestruct = PyObject.TO + W_BaseObject = W_ObjectObject def get_dealloc(self, space): from pypy.module.cpyext.typeobject import subtype_dealloc @@ -51,10 +52,14 @@ def attach(self, space, pyobj, w_obj): pass - def realize(self, space, ref): - # For most types, a reference cannot exist without - # a real interpreter object - raise InvalidPointerException(str(ref)) + def realize(self, space, obj): + w_type = from_ref(space, rffi.cast(PyObject, obj.c_ob_type)) + w_obj = space.allocate_instance(self.W_BaseObject, w_type) + 
track_reference(space, obj, w_obj) + if w_type is not space.gettypefor(self.W_BaseObject): + state = space.fromcache(RefcountState) + state.set_lifeline(w_obj, obj) + return w_obj typedescr_cache = {} @@ -369,13 +374,7 @@ obj.c_ob_refcnt = 1 w_type = from_ref(space, rffi.cast(PyObject, obj.c_ob_type)) assert isinstance(w_type, W_TypeObject) - if w_type.is_cpytype(): - w_obj = space.allocate_instance(W_ObjectObject, w_type) - track_reference(space, obj, w_obj) - state = space.fromcache(RefcountState) - state.set_lifeline(w_obj, obj) - else: - assert False, "Please add more cases in _Py_NewReference()" + get_typedescr(w_type.instancetypedef).realize(space, obj) def _Py_Dealloc(space, obj): from pypy.module.cpyext.api import generic_cpy_call_dont_decref diff --git a/pypy/module/cpyext/test/test_eval.py b/pypy/module/cpyext/test/test_eval.py --- a/pypy/module/cpyext/test/test_eval.py +++ b/pypy/module/cpyext/test/test_eval.py @@ -117,8 +117,12 @@ flags = lltype.malloc(PyCompilerFlags, flavor='raw') flags.c_cf_flags = rffi.cast(rffi.INT, consts.PyCF_SOURCE_IS_UTF8) w_globals = space.newdict() - api.PyRun_StringFlags("a = u'caf\xc3\xa9'", Py_single_input, - w_globals, w_globals, flags) + buf = rffi.str2charp("a = u'caf\xc3\xa9'") + try: + api.PyRun_StringFlags(buf, Py_single_input, + w_globals, w_globals, flags) + finally: + rffi.free_charp(buf) w_a = space.getitem(w_globals, space.wrap("a")) assert space.unwrap(w_a) == u'caf\xe9' lltype.free(flags, flavor='raw') diff --git a/pypy/module/cpyext/test/test_intobject.py b/pypy/module/cpyext/test/test_intobject.py --- a/pypy/module/cpyext/test/test_intobject.py +++ b/pypy/module/cpyext/test/test_intobject.py @@ -65,4 +65,97 @@ values = module.values() types = [type(x) for x in values] assert types == [int, long, int, int] - + + def test_int_subtype(self): + module = self.import_extension( + 'foo', [ + ("newEnum", "METH_VARARGS", + """ + EnumObject *enumObj; + long intval; + PyObject *name; + + if (!PyArg_ParseTuple(args, "Oi", &name, &intval)) + return NULL; + + PyType_Ready(&Enum_Type); + enumObj = PyObject_New(EnumObject, &Enum_Type); + if (!enumObj) { + return NULL; + } + + enumObj->ob_ival = intval; + Py_INCREF(name); + enumObj->ob_name = name; + + return (PyObject *)enumObj; + """), + ], + prologue=""" + typedef struct + { + PyObject_HEAD + long ob_ival; + PyObject* ob_name; + } EnumObject; + + static void + enum_dealloc(EnumObject *op) + { + Py_DECREF(op->ob_name); + Py_TYPE(op)->tp_free((PyObject *)op); + } + + static PyMemberDef enum_members[] = { + {"name", T_OBJECT, offsetof(EnumObject, ob_name), 0, NULL}, + {NULL} /* Sentinel */ + }; + + PyTypeObject Enum_Type = { + PyObject_HEAD_INIT(0) + /*ob_size*/ 0, + /*tp_name*/ "Enum", + /*tp_basicsize*/ sizeof(EnumObject), + /*tp_itemsize*/ 0, + /*tp_dealloc*/ enum_dealloc, + /*tp_print*/ 0, + /*tp_getattr*/ 0, + /*tp_setattr*/ 0, + /*tp_compare*/ 0, + /*tp_repr*/ 0, + /*tp_as_number*/ 0, + /*tp_as_sequence*/ 0, + /*tp_as_mapping*/ 0, + /*tp_hash*/ 0, + /*tp_call*/ 0, + /*tp_str*/ 0, + /*tp_getattro*/ 0, + /*tp_setattro*/ 0, + /*tp_as_buffer*/ 0, + /*tp_flags*/ Py_TPFLAGS_DEFAULT|Py_TPFLAGS_BASETYPE, + /*tp_doc*/ 0, + /*tp_traverse*/ 0, + /*tp_clear*/ 0, + /*tp_richcompare*/ 0, + /*tp_weaklistoffset*/ 0, + /*tp_iter*/ 0, + /*tp_iternext*/ 0, + /*tp_methods*/ 0, + /*tp_members*/ enum_members, + /*tp_getset*/ 0, + /*tp_base*/ &PyInt_Type, + /*tp_dict*/ 0, + /*tp_descr_get*/ 0, + /*tp_descr_set*/ 0, + /*tp_dictoffset*/ 0, + /*tp_init*/ 0, + /*tp_alloc*/ 0, + /*tp_new*/ 0 + }; + """) + + a = 
module.newEnum("ULTIMATE_ANSWER", 42) + assert type(a).__name__ == "Enum" + assert isinstance(a, int) + assert a == int(a) == 42 + assert a.name == "ULTIMATE_ANSWER" diff --git a/pypy/module/imp/test/test_import.py b/pypy/module/imp/test/test_import.py --- a/pypy/module/imp/test/test_import.py +++ b/pypy/module/imp/test/test_import.py @@ -362,7 +362,7 @@ def test_cannot_write_pyc(self): import sys, os - p = os.path.join(sys.path[-1], 'readonly') + p = os.path.join(sys.path[0], 'readonly') try: os.chmod(p, 0o555) except: diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -779,8 +779,6 @@ """ Intermediate class for performing binary operations. """ - _immutable_fields_ = ['left', 'right'] - def __init__(self, ufunc, name, shape, calc_dtype, res_dtype, left, right): VirtualArray.__init__(self, name, shape, res_dtype) self.ufunc = ufunc @@ -856,8 +854,6 @@ self.right.create_sig(), done_func) class AxisReduce(Call2): - _immutable_fields_ = ['left', 'right'] - def __init__(self, ufunc, name, identity, shape, dtype, left, right, dim): Call2.__init__(self, ufunc, name, shape, dtype, dtype, left, right) diff --git a/pypy/module/pypyjit/test_pypy_c/test_instance.py b/pypy/module/pypyjit/test_pypy_c/test_instance.py --- a/pypy/module/pypyjit/test_pypy_c/test_instance.py +++ b/pypy/module/pypyjit/test_pypy_c/test_instance.py @@ -201,3 +201,28 @@ loop, = log.loops_by_filename(self.filepath) assert loop.match_by_id("compare", "") # optimized away + def test_super(self): + def main(): + class A(object): + def m(self, x): + return x + 1 + class B(A): + def m(self, x): + return super(B, self).m(x) + i = 0 + while i < 300: + i = B().m(i) + return i + + log = self.run(main, []) + loop, = log.loops_by_filename(self.filepath) + assert loop.match(""" + i78 = int_lt(i72, 300) + guard_true(i78, descr=...) + guard_not_invalidated(descr=...) + i79 = force_token() + i80 = force_token() + i81 = int_add(i72, 1) + --TICK-- + jump(..., descr=...) 
+ """) diff --git a/pypy/objspace/std/typeobject.py b/pypy/objspace/std/typeobject.py --- a/pypy/objspace/std/typeobject.py +++ b/pypy/objspace/std/typeobject.py @@ -103,6 +103,7 @@ 'terminator', '_version_tag?', 'name?', + 'mro_w?[*]', ] # for config.objspace.std.getattributeshortcut @@ -345,9 +346,9 @@ return w_self._lookup_where(name) + @unroll_safe def lookup_starting_at(w_self, w_starttype, name): space = w_self.space - # XXX Optimize this with method cache look = False for w_class in w_self.mro_w: if w_class is w_starttype: diff --git a/pypy/rpython/lltypesystem/rlist.py b/pypy/rpython/lltypesystem/rlist.py --- a/pypy/rpython/lltypesystem/rlist.py +++ b/pypy/rpython/lltypesystem/rlist.py @@ -392,7 +392,11 @@ ('list', r_list.lowleveltype), ('index', Signed))) self.ll_listiter = ll_listiter - self.ll_listnext = ll_listnext + if (isinstance(r_list, FixedSizeListRepr) + and not r_list.listitem.mutated): + self.ll_listnext = ll_listnext_foldable + else: + self.ll_listnext = ll_listnext self.ll_getnextindex = ll_getnextindex def ll_listiter(ITERPTR, lst): @@ -409,5 +413,14 @@ iter.index = index + 1 # cannot overflow because index < l.length return l.ll_getitem_fast(index) +def ll_listnext_foldable(iter): + from pypy.rpython.rlist import ll_getitem_foldable_nonneg + l = iter.list + index = iter.index + if index >= l.ll_length(): + raise StopIteration + iter.index = index + 1 # cannot overflow because index < l.length + return ll_getitem_foldable_nonneg(l, index) + def ll_getnextindex(iter): return iter.index diff --git a/pypy/rpython/test/test_rlist.py b/pypy/rpython/test/test_rlist.py --- a/pypy/rpython/test/test_rlist.py +++ b/pypy/rpython/test/test_rlist.py @@ -8,6 +8,7 @@ from pypy.rpython.rlist import * from pypy.rpython.lltypesystem.rlist import ListRepr, FixedSizeListRepr, ll_newlist, ll_fixed_newlist from pypy.rpython.lltypesystem import rlist as ll_rlist +from pypy.rpython.llinterp import LLException from pypy.rpython.ootypesystem import rlist as oo_rlist from pypy.rpython.rint import signed_repr from pypy.objspace.flow.model import Constant, Variable @@ -1477,6 +1478,80 @@ assert func1.oopspec == 'list.getitem_foldable(l, index)' assert not hasattr(func2, 'oopspec') + def test_iterate_over_immutable_list(self): + from pypy.rpython import rlist + class MyException(Exception): + pass + lst = list('abcdef') + def dummyfn(): + total = 0 + for c in lst: + total += ord(c) + return total + # + prev = rlist.ll_getitem_foldable_nonneg + try: + def seen_ok(l, index): + if index == 5: + raise KeyError # expected case + return prev(l, index) + rlist.ll_getitem_foldable_nonneg = seen_ok + e = raises(LLException, self.interpret, dummyfn, []) + assert 'KeyError' in str(e.value) + finally: + rlist.ll_getitem_foldable_nonneg = prev + + def test_iterate_over_immutable_list_quasiimmut_attr(self): + from pypy.rpython import rlist + class MyException(Exception): + pass + class Foo: + _immutable_fields_ = ['lst?[*]'] + lst = list('abcdef') + foo = Foo() + def dummyfn(): + total = 0 + for c in foo.lst: + total += ord(c) + return total + # + prev = rlist.ll_getitem_foldable_nonneg + try: + def seen_ok(l, index): + if index == 5: + raise KeyError # expected case + return prev(l, index) + rlist.ll_getitem_foldable_nonneg = seen_ok + e = raises(LLException, self.interpret, dummyfn, []) + assert 'KeyError' in str(e.value) + finally: + rlist.ll_getitem_foldable_nonneg = prev + + def test_iterate_over_mutable_list(self): + from pypy.rpython import rlist + class MyException(Exception): + pass + lst = 
list('abcdef') + def dummyfn(): + total = 0 + for c in lst: + total += ord(c) + lst[0] = 'x' + return total + # + prev = rlist.ll_getitem_foldable_nonneg + try: + def seen_ok(l, index): + if index == 5: + raise KeyError # expected case + return prev(l, index) + rlist.ll_getitem_foldable_nonneg = seen_ok + res = self.interpret(dummyfn, []) + assert res == sum(map(ord, 'abcdef')) + finally: + rlist.ll_getitem_foldable_nonneg = prev + + class TestOOtype(BaseTestRlist, OORtypeMixin): rlist = oo_rlist type_system = 'ootype' diff --git a/pypy/translator/c/src/asm_gcc_x86.h b/pypy/translator/c/src/asm_gcc_x86.h --- a/pypy/translator/c/src/asm_gcc_x86.h +++ b/pypy/translator/c/src/asm_gcc_x86.h @@ -102,6 +102,12 @@ #endif /* !PYPY_CPU_HAS_STANDARD_PRECISION */ +#ifdef PYPY_X86_CHECK_SSE2 +#define PYPY_X86_CHECK_SSE2_DEFINED +extern void pypy_x86_check_sse2(void); +#endif + + /* implementations */ #ifndef PYPY_NOT_MAIN_FILE @@ -113,4 +119,25 @@ } # endif +# ifdef PYPY_X86_CHECK_SSE2 +void pypy_x86_check_sse2(void) +{ + //Read the CPU features. + int features; + asm("mov $1, %%eax\n" + "cpuid\n" + "mov %%edx, %0" + : "=g"(features) : : "eax", "ebx", "edx", "ecx"); + + //Check bits 25 and 26, this indicates SSE2 support + if (((features & (1 << 25)) == 0) || ((features & (1 << 26)) == 0)) + { + fprintf(stderr, "Old CPU with no SSE2 support, cannot continue.\n" + "You need to re-translate with " + "'--jit-backend=x86-without-sse2'\n"); + abort(); + } +} +# endif + #endif diff --git a/pypy/translator/c/src/debug_print.c b/pypy/translator/c/src/debug_print.c --- a/pypy/translator/c/src/debug_print.c +++ b/pypy/translator/c/src/debug_print.c @@ -1,3 +1,4 @@ +#define PYPY_NOT_MAIN_FILE #include #include diff --git a/pypy/translator/c/src/dtoa.c b/pypy/translator/c/src/dtoa.c --- a/pypy/translator/c/src/dtoa.c +++ b/pypy/translator/c/src/dtoa.c @@ -46,13 +46,13 @@ * of return type *Bigint all return NULL to indicate a malloc failure. * Similarly, rv_alloc and nrv_alloc (return type char *) return NULL on * failure. bigcomp now has return type int (it used to be void) and - * returns -1 on failure and 0 otherwise. _Py_dg_dtoa returns NULL - * on failure. _Py_dg_strtod indicates failure due to malloc failure + * returns -1 on failure and 0 otherwise. __Py_dg_dtoa returns NULL + * on failure. __Py_dg_strtod indicates failure due to malloc failure * by returning -1.0, setting errno=ENOMEM and *se to s00. * * 4. The static variable dtoa_result has been removed. Callers of - * _Py_dg_dtoa are expected to call _Py_dg_freedtoa to free - * the memory allocated by _Py_dg_dtoa. + * __Py_dg_dtoa are expected to call __Py_dg_freedtoa to free + * the memory allocated by __Py_dg_dtoa. * * 5. The code has been reformatted to better fit with Python's * C style guide (PEP 7). @@ -61,7 +61,7 @@ * that hasn't been MALLOC'ed, private_mem should only be used when k <= * Kmax. * - * 7. _Py_dg_strtod has been modified so that it doesn't accept strings with + * 7. __Py_dg_strtod has been modified so that it doesn't accept strings with * leading whitespace. 
* ***************************************************************/ @@ -283,7 +283,7 @@ #define Big0 (Frac_mask1 | Exp_msk1*(DBL_MAX_EXP+Bias-1)) #define Big1 0xffffffff -/* struct BCinfo is used to pass information from _Py_dg_strtod to bigcomp */ +/* struct BCinfo is used to pass information from __Py_dg_strtod to bigcomp */ typedef struct BCinfo BCinfo; struct @@ -494,7 +494,7 @@ /* convert a string s containing nd decimal digits (possibly containing a decimal separator at position nd0, which is ignored) to a Bigint. This - function carries on where the parsing code in _Py_dg_strtod leaves off: on + function carries on where the parsing code in __Py_dg_strtod leaves off: on entry, y9 contains the result of converting the first 9 digits. Returns NULL on failure. */ @@ -1050,7 +1050,7 @@ } /* Convert a scaled double to a Bigint plus an exponent. Similar to d2b, - except that it accepts the scale parameter used in _Py_dg_strtod (which + except that it accepts the scale parameter used in __Py_dg_strtod (which should be either 0 or 2*P), and the normalization for the return value is different (see below). On input, d should be finite and nonnegative, and d / 2**scale should be exactly representable as an IEEE 754 double. @@ -1351,9 +1351,9 @@ /* The bigcomp function handles some hard cases for strtod, for inputs with more than STRTOD_DIGLIM digits. It's called once an initial estimate for the double corresponding to the input string has - already been obtained by the code in _Py_dg_strtod. + already been obtained by the code in __Py_dg_strtod. - The bigcomp function is only called after _Py_dg_strtod has found a + The bigcomp function is only called after __Py_dg_strtod has found a double value rv such that either rv or rv + 1ulp represents the correctly rounded value corresponding to the original string. It determines which of these two values is the correct one by @@ -1368,12 +1368,12 @@ s0 points to the first significant digit of the input string. rv is a (possibly scaled) estimate for the closest double value to the - value represented by the original input to _Py_dg_strtod. If + value represented by the original input to __Py_dg_strtod. If bc->scale is nonzero, then rv/2^(bc->scale) is the approximation to the input value. bc is a struct containing information gathered during the parsing and - estimation steps of _Py_dg_strtod. Description of fields follows: + estimation steps of __Py_dg_strtod. Description of fields follows: bc->e0 gives the exponent of the input value, such that dv = (integer given by the bd->nd digits of s0) * 10**e0 @@ -1505,7 +1505,7 @@ } static double -_Py_dg_strtod(const char *s00, char **se) +__Py_dg_strtod(const char *s00, char **se) { int bb2, bb5, bbe, bd2, bd5, bs2, c, dsign, e, e1, error; int esign, i, j, k, lz, nd, nd0, odd, sign; @@ -1849,7 +1849,7 @@ for(;;) { - /* This is the main correction loop for _Py_dg_strtod. + /* This is the main correction loop for __Py_dg_strtod. We've got a decimal value tdv, and a floating-point approximation srv=rv/2^bc.scale to tdv. The aim is to determine whether srv is @@ -2283,7 +2283,7 @@ */ static void -_Py_dg_freedtoa(char *s) +__Py_dg_freedtoa(char *s) { Bigint *b = (Bigint *)((int *)s - 1); b->maxwds = 1 << (b->k = *(int*)b); @@ -2325,11 +2325,11 @@ */ /* Additional notes (METD): (1) returns NULL on failure. (2) to avoid memory - leakage, a successful call to _Py_dg_dtoa should always be matched by a - call to _Py_dg_freedtoa. 
*/ + leakage, a successful call to __Py_dg_dtoa should always be matched by a + call to __Py_dg_freedtoa. */ static char * -_Py_dg_dtoa(double dd, int mode, int ndigits, +__Py_dg_dtoa(double dd, int mode, int ndigits, int *decpt, int *sign, char **rve) { /* Arguments ndigits, decpt, sign are similar to those @@ -2926,7 +2926,7 @@ if (b) Bfree(b); if (s0) - _Py_dg_freedtoa(s0); + __Py_dg_freedtoa(s0); return NULL; } @@ -2947,7 +2947,7 @@ _PyPy_SET_53BIT_PRECISION_HEADER; _PyPy_SET_53BIT_PRECISION_START; - result = _Py_dg_strtod(s00, se); + result = __Py_dg_strtod(s00, se); _PyPy_SET_53BIT_PRECISION_END; return result; } @@ -2959,14 +2959,14 @@ _PyPy_SET_53BIT_PRECISION_HEADER; _PyPy_SET_53BIT_PRECISION_START; - result = _Py_dg_dtoa(dd, mode, ndigits, decpt, sign, rve); + result = __Py_dg_dtoa(dd, mode, ndigits, decpt, sign, rve); _PyPy_SET_53BIT_PRECISION_END; return result; } void _PyPy_dg_freedtoa(char *s) { - _Py_dg_freedtoa(s); + __Py_dg_freedtoa(s); } /* End PYPY hacks */ diff --git a/pypy/translator/c/src/main.h b/pypy/translator/c/src/main.h --- a/pypy/translator/c/src/main.h +++ b/pypy/translator/c/src/main.h @@ -36,6 +36,9 @@ RPyListOfString *list; pypy_asm_stack_bottom(); +#ifdef PYPY_X86_CHECK_SSE2_DEFINED + pypy_x86_check_sse2(); +#endif instrument_setup(); if (sizeof(void*) != SIZEOF_LONG) { From noreply at buildbot.pypy.org Mon Feb 27 11:45:13 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Mon, 27 Feb 2012 11:45:13 +0100 (CET) Subject: [pypy-commit] pypy default: merge heads Message-ID: <20120227104513.BEC6C820B1@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: Changeset: r52924:29f8443e00ec Date: 2012-02-27 11:44 +0100 http://bitbucket.org/pypy/pypy/changeset/29f8443e00ec/ Log: merge heads diff --git a/pypy/jit/metainterp/compile.py b/pypy/jit/metainterp/compile.py --- a/pypy/jit/metainterp/compile.py +++ b/pypy/jit/metainterp/compile.py @@ -289,8 +289,21 @@ assert isinstance(token, TargetToken) assert token.original_jitcell_token is None token.original_jitcell_token = trace.original_jitcell_token - - + + +def do_compile_loop(metainterp_sd, inputargs, operations, looptoken, + log=True, name=''): + metainterp_sd.logger_ops.log_loop(inputargs, operations, -2, + 'compiling', name=name) + return metainterp_sd.cpu.compile_loop(inputargs, operations, looptoken, + log=log, name=name) + +def do_compile_bridge(metainterp_sd, faildescr, inputargs, operations, + original_loop_token, log=True): + metainterp_sd.logger_ops.log_bridge(inputargs, operations, -2) + return metainterp_sd.cpu.compile_bridge(faildescr, inputargs, operations, + original_loop_token, log=log) + def send_loop_to_backend(greenkey, jitdriver_sd, metainterp_sd, loop, type): vinfo = jitdriver_sd.virtualizable_info if vinfo is not None: @@ -319,9 +332,9 @@ metainterp_sd.profiler.start_backend() debug_start("jit-backend") try: - asminfo = metainterp_sd.cpu.compile_loop(loop.inputargs, operations, - original_jitcell_token, - name=loopname) + asminfo = do_compile_loop(metainterp_sd, loop.inputargs, + operations, original_jitcell_token, + name=loopname) finally: debug_stop("jit-backend") metainterp_sd.profiler.end_backend() @@ -333,7 +346,6 @@ metainterp_sd.stats.compiled() metainterp_sd.log("compiled new " + type) # - loopname = jitdriver_sd.warmstate.get_location_str(greenkey) if asminfo is not None: ops_offset = asminfo.ops_offset else: @@ -365,9 +377,9 @@ metainterp_sd.profiler.start_backend() debug_start("jit-backend") try: - asminfo = metainterp_sd.cpu.compile_bridge(faildescr, inputargs, - 
operations, - original_loop_token) + asminfo = do_compile_bridge(metainterp_sd, faildescr, inputargs, + operations, + original_loop_token) finally: debug_stop("jit-backend") metainterp_sd.profiler.end_backend() diff --git a/pypy/jit/metainterp/logger.py b/pypy/jit/metainterp/logger.py --- a/pypy/jit/metainterp/logger.py +++ b/pypy/jit/metainterp/logger.py @@ -18,6 +18,10 @@ debug_start("jit-log-noopt-loop") logops = self._log_operations(inputargs, operations, ops_offset) debug_stop("jit-log-noopt-loop") + elif number == -2: + debug_start("jit-log-compiling-loop") + logops = self._log_operations(inputargs, operations, ops_offset) + debug_stop("jit-log-compiling-loop") else: debug_start("jit-log-opt-loop") debug_print("# Loop", number, '(%s)' % name , ":", type, @@ -31,6 +35,10 @@ debug_start("jit-log-noopt-bridge") logops = self._log_operations(inputargs, operations, ops_offset) debug_stop("jit-log-noopt-bridge") + elif number == -2: + debug_start("jit-log-compiling-bridge") + logops = self._log_operations(inputargs, operations, ops_offset) + debug_stop("jit-log-compiling-bridge") else: debug_start("jit-log-opt-bridge") debug_print("# bridge out of Guard", number, From noreply at buildbot.pypy.org Mon Feb 27 11:45:14 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Mon, 27 Feb 2012 11:45:14 +0100 (CET) Subject: [pypy-commit] pypy py3k: hg merge default again Message-ID: <20120227104514.F19D2820B1@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52925:de6a79306bfe Date: 2012-02-27 11:44 +0100 http://bitbucket.org/pypy/pypy/changeset/de6a79306bfe/ Log: hg merge default again diff --git a/pypy/jit/metainterp/compile.py b/pypy/jit/metainterp/compile.py --- a/pypy/jit/metainterp/compile.py +++ b/pypy/jit/metainterp/compile.py @@ -289,8 +289,21 @@ assert isinstance(token, TargetToken) assert token.original_jitcell_token is None token.original_jitcell_token = trace.original_jitcell_token - - + + +def do_compile_loop(metainterp_sd, inputargs, operations, looptoken, + log=True, name=''): + metainterp_sd.logger_ops.log_loop(inputargs, operations, -2, + 'compiling', name=name) + return metainterp_sd.cpu.compile_loop(inputargs, operations, looptoken, + log=log, name=name) + +def do_compile_bridge(metainterp_sd, faildescr, inputargs, operations, + original_loop_token, log=True): + metainterp_sd.logger_ops.log_bridge(inputargs, operations, -2) + return metainterp_sd.cpu.compile_bridge(faildescr, inputargs, operations, + original_loop_token, log=log) + def send_loop_to_backend(greenkey, jitdriver_sd, metainterp_sd, loop, type): vinfo = jitdriver_sd.virtualizable_info if vinfo is not None: @@ -319,9 +332,9 @@ metainterp_sd.profiler.start_backend() debug_start("jit-backend") try: - asminfo = metainterp_sd.cpu.compile_loop(loop.inputargs, operations, - original_jitcell_token, - name=loopname) + asminfo = do_compile_loop(metainterp_sd, loop.inputargs, + operations, original_jitcell_token, + name=loopname) finally: debug_stop("jit-backend") metainterp_sd.profiler.end_backend() @@ -333,7 +346,6 @@ metainterp_sd.stats.compiled() metainterp_sd.log("compiled new " + type) # - loopname = jitdriver_sd.warmstate.get_location_str(greenkey) if asminfo is not None: ops_offset = asminfo.ops_offset else: @@ -365,9 +377,9 @@ metainterp_sd.profiler.start_backend() debug_start("jit-backend") try: - asminfo = metainterp_sd.cpu.compile_bridge(faildescr, inputargs, - operations, - original_loop_token) + asminfo = do_compile_bridge(metainterp_sd, faildescr, inputargs, + operations, + 
original_loop_token) finally: debug_stop("jit-backend") metainterp_sd.profiler.end_backend() diff --git a/pypy/jit/metainterp/logger.py b/pypy/jit/metainterp/logger.py --- a/pypy/jit/metainterp/logger.py +++ b/pypy/jit/metainterp/logger.py @@ -18,6 +18,10 @@ debug_start("jit-log-noopt-loop") logops = self._log_operations(inputargs, operations, ops_offset) debug_stop("jit-log-noopt-loop") + elif number == -2: + debug_start("jit-log-compiling-loop") + logops = self._log_operations(inputargs, operations, ops_offset) + debug_stop("jit-log-compiling-loop") else: debug_start("jit-log-opt-loop") debug_print("# Loop", number, '(%s)' % name , ":", type, @@ -31,6 +35,10 @@ debug_start("jit-log-noopt-bridge") logops = self._log_operations(inputargs, operations, ops_offset) debug_stop("jit-log-noopt-bridge") + elif number == -2: + debug_start("jit-log-compiling-bridge") + logops = self._log_operations(inputargs, operations, ops_offset) + debug_stop("jit-log-compiling-bridge") else: debug_start("jit-log-opt-bridge") debug_print("# bridge out of Guard", number, From noreply at buildbot.pypy.org Mon Feb 27 14:41:59 2012 From: noreply at buildbot.pypy.org (arigo) Date: Mon, 27 Feb 2012 14:41:59 +0100 (CET) Subject: [pypy-commit] pypy default: Test and fix. Probably fixes . Message-ID: <20120227134159.8E604820B1@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r52926:e91c5cdac576 Date: 2012-02-27 14:41 +0100 http://bitbucket.org/pypy/pypy/changeset/e91c5cdac576/ Log: Test and fix. Probably fixes . diff --git a/pypy/jit/backend/llsupport/rewrite.py b/pypy/jit/backend/llsupport/rewrite.py --- a/pypy/jit/backend/llsupport/rewrite.py +++ b/pypy/jit/backend/llsupport/rewrite.py @@ -1,6 +1,6 @@ import sys from pypy.rlib.rarithmetic import ovfcheck -from pypy.jit.metainterp.history import ConstInt, BoxPtr, ConstPtr +from pypy.jit.metainterp.history import ConstInt, BoxPtr, ConstPtr, BoxInt from pypy.jit.metainterp.resoperation import ResOperation, rop from pypy.jit.codewriter import heaptracker from pypy.jit.backend.llsupport.symbolic import WORD @@ -96,8 +96,8 @@ def handle_new_fixedsize(self, descr, op): assert isinstance(descr, SizeDescr) size = descr.size - self.gen_malloc_nursery(size, op.result) - self.gen_initialize_tid(op.result, descr.tid) + in_nursery = self.gen_malloc_nursery(size, op.result) + self.gen_initialize_tid(op.result, descr.tid, in_nursery) def handle_new_array(self, arraydescr, op): v_length = op.getarg(0) @@ -113,8 +113,8 @@ elif arraydescr.itemsize == 0: total_size = arraydescr.basesize if 0 <= total_size <= 0xffffff: # up to 16MB, arbitrarily - self.gen_malloc_nursery(total_size, op.result) - self.gen_initialize_tid(op.result, arraydescr.tid) + in_nursery = self.gen_malloc_nursery(total_size, op.result) + self.gen_initialize_tid(op.result, arraydescr.tid, in_nursery) self.gen_initialize_len(op.result, v_length, arraydescr.lendescr) elif self.gc_ll_descr.kind == 'boehm': self.gen_boehm_malloc_array(arraydescr, v_length, op.result) @@ -212,7 +212,7 @@ size = self.round_up_for_allocation(size) if not self.gc_ll_descr.can_use_nursery_malloc(size): self.gen_malloc_fixedsize(size, v_result) - return + return False # op = None if self._op_malloc_nursery is not None: @@ -238,12 +238,26 @@ self._previous_size = size self._v_last_malloced_nursery = v_result self.recent_mallocs[v_result] = None + return True - def gen_initialize_tid(self, v_newgcobj, tid): + def gen_initialize_tid(self, v_newgcobj, tid, in_nursery): if self.gc_ll_descr.fielddescr_tid is not None: # produce a 
SETFIELD to initialize the GC header + v_tid = ConstInt(tid) + if not in_nursery: + # important: must preserve the gcflags! rare case. + v_tidbase = BoxInt() + v_tidcombined = BoxInt() + op = ResOperation(rop.GETFIELD_RAW, + [v_newgcobj], v_tidbase, + descr=self.gc_ll_descr.fielddescr_tid) + self.newops.append(op) + op = ResOperation(rop.INT_OR, + [v_tidbase, v_tid], v_tidcombined) + self.newops.append(op) + v_tid = v_tidcombined op = ResOperation(rop.SETFIELD_GC, - [v_newgcobj, ConstInt(tid)], None, + [v_newgcobj, v_tid], None, descr=self.gc_ll_descr.fielddescr_tid) self.newops.append(op) diff --git a/pypy/jit/backend/llsupport/test/test_rewrite.py b/pypy/jit/backend/llsupport/test/test_rewrite.py --- a/pypy/jit/backend/llsupport/test/test_rewrite.py +++ b/pypy/jit/backend/llsupport/test/test_rewrite.py @@ -371,7 +371,9 @@ p0 = call_malloc_gc(ConstClass(malloc_fixedsize), \ %(bdescr.basesize + 104)d, \ descr=malloc_fixedsize_descr) - setfield_gc(p0, 8765, descr=tiddescr) + i0 = getfield_raw(p0, descr=tiddescr) + i1 = int_or(i0, 8765) + setfield_gc(p0, i1, descr=tiddescr) setfield_gc(p0, 103, descr=blendescr) jump() """) @@ -437,7 +439,9 @@ [p1] p0 = call_malloc_gc(ConstClass(malloc_fixedsize), 104, \ descr=malloc_fixedsize_descr) - setfield_gc(p0, 9315, descr=tiddescr) + i0 = getfield_raw(p0, descr=tiddescr) + i1 = int_or(i0, 9315) + setfield_gc(p0, i1, descr=tiddescr) setfield_gc(p0, ConstClass(o_vtable), descr=vtable_descr) jump() """) From noreply at buildbot.pypy.org Mon Feb 27 15:56:09 2012 From: noreply at buildbot.pypy.org (justinpeel) Date: Mon, 27 Feb 2012 15:56:09 +0100 (CET) Subject: [pypy-commit] pypy faster-str-decode-escape: Try to speed up string's decode escape by using a string builder and appending unescaped text in slices Message-ID: <20120227145609.48272820B1@wyvern.cs.uni-duesseldorf.de> Author: Justin Peel Branch: faster-str-decode-escape Changeset: r52927:6f5ea64c8b8d Date: 2012-02-27 07:55 -0700 http://bitbucket.org/pypy/pypy/changeset/6f5ea64c8b8d/ Log: Try to speed up string's decode escape by using a string builder and appending unescaped text in slices diff --git a/pypy/interpreter/pyparser/parsestring.py b/pypy/interpreter/pyparser/parsestring.py --- a/pypy/interpreter/pyparser/parsestring.py +++ b/pypy/interpreter/pyparser/parsestring.py @@ -115,21 +115,24 @@ the string is UTF-8 encoded and should be re-encoded in the specified encoding. """ - lis = [] + from pypy.rlib.rstring import StringBuilder + builder = StringBuilder(len(s)) ps = 0 end = len(s) - while ps < end: - if s[ps] != '\\': - # note that the C code has a label here. - # the logic is the same. + while 1: + ps2 = ps + while ps < end and s[ps] != '\\': if recode_encoding and ord(s[ps]) & 0x80: w, ps = decode_utf8(space, s, ps, end, recode_encoding) - # Append bytes to output buffer. 
- lis.append(w) + builder.append(w) + ps2 = ps else: - lis.append(s[ps]) ps += 1 - continue + if ps > ps2: + builder.append_slice(s, ps2, ps) + if ps == end: + break + ps += 1 if ps == end: raise_app_valueerror(space, 'Trailing \\ in string') @@ -140,25 +143,25 @@ if ch == '\n': pass elif ch == '\\': - lis.append('\\') + builder.append('\\') elif ch == "'": - lis.append("'") + builder.append("'") elif ch == '"': - lis.append('"') + builder.append('"') elif ch == 'b': - lis.append("\010") + builder.append("\010") elif ch == 'f': - lis.append('\014') # FF + builder.append('\014') # FF elif ch == 't': - lis.append('\t') + builder.append('\t') elif ch == 'n': - lis.append('\n') + builder.append('\n') elif ch == 'r': - lis.append('\r') + builder.append('\r') elif ch == 'v': - lis.append('\013') # VT + builder.append('\013') # VT elif ch == 'a': - lis.append('\007') # BEL, not classic C + builder.append('\007') # BEL, not classic C elif ch in '01234567': # Look for up to two more octal digits span = ps @@ -168,13 +171,13 @@ # emulate a strange wrap-around behavior of CPython: # \400 is the same as \000 because 0400 == 256 num = int(octal, 8) & 0xFF - lis.append(chr(num)) + builder.append(chr(num)) ps = span elif ch == 'x': if ps+2 <= end and isxdigit(s[ps]) and isxdigit(s[ps + 1]): hexa = s[ps : ps + 2] num = int(hexa, 16) - lis.append(chr(num)) + builder.append(chr(num)) ps += 2 else: raise_app_valueerror(space, 'invalid \\x escape') @@ -184,13 +187,13 @@ # this was not an escape, so the backslash # has to be added, and we start over in # non-escape mode. - lis.append('\\') + builder.append('\\') ps -= 1 assert ps >= 0 continue # an arbitry number of unescaped UTF-8 bytes may follow. - buf = ''.join(lis) + buf = builder.build() return buf From noreply at buildbot.pypy.org Mon Feb 27 16:14:54 2012 From: noreply at buildbot.pypy.org (arigo) Date: Mon, 27 Feb 2012 16:14:54 +0100 (CET) Subject: [pypy-commit] pypy default: Fixes fixes fixes. Message-ID: <20120227151454.BBCCB820B1@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r52928:b9733690c4de Date: 2012-02-27 16:14 +0100 http://bitbucket.org/pypy/pypy/changeset/b9733690c4de/ Log: Fixes fixes fixes. diff --git a/pypy/jit/backend/llsupport/gc.py b/pypy/jit/backend/llsupport/gc.py --- a/pypy/jit/backend/llsupport/gc.py +++ b/pypy/jit/backend/llsupport/gc.py @@ -769,11 +769,21 @@ self.generate_function('malloc_unicode', malloc_unicode, [lltype.Signed]) - # Rarely called: allocate a fixed-size amount of bytes, but - # not in the nursery, because it is too big. Implemented like - # malloc_nursery_slowpath() above. - self.generate_function('malloc_fixedsize', malloc_nursery_slowpath, - [lltype.Signed]) + # Never called as far as I can tell, but there for completeness: + # allocate a fixed-size object, but not in the nursery, because + # it is too big. + def malloc_big_fixedsize(size, tid): + """Allocate 'size' null bytes out of the nursery. 
+ Note that the fast path is typically inlined by the backend.""" + if self.DEBUG: + self._random_usage_of_xmm_registers() + type_id = llop.extract_ushort(llgroup.HALFWORD, tid) + check_typeid(type_id) + return llop1.do_malloc_fixedsize_clear(llmemory.GCREF, + type_id, size, + False, False, False) + self.generate_function('malloc_big_fixedsize', malloc_big_fixedsize, + [lltype.Signed] * 2) def _bh_malloc(self, sizedescr): from pypy.rpython.memory.gctypelayout import check_typeid diff --git a/pypy/jit/backend/llsupport/rewrite.py b/pypy/jit/backend/llsupport/rewrite.py --- a/pypy/jit/backend/llsupport/rewrite.py +++ b/pypy/jit/backend/llsupport/rewrite.py @@ -96,8 +96,10 @@ def handle_new_fixedsize(self, descr, op): assert isinstance(descr, SizeDescr) size = descr.size - in_nursery = self.gen_malloc_nursery(size, op.result) - self.gen_initialize_tid(op.result, descr.tid, in_nursery) + if self.gen_malloc_nursery(size, op.result): + self.gen_initialize_tid(op.result, descr.tid) + else: + self.gen_malloc_fixedsize(size, descr.tid, op.result) def handle_new_array(self, arraydescr, op): v_length = op.getarg(0) @@ -112,9 +114,9 @@ pass # total_size is still -1 elif arraydescr.itemsize == 0: total_size = arraydescr.basesize - if 0 <= total_size <= 0xffffff: # up to 16MB, arbitrarily - in_nursery = self.gen_malloc_nursery(total_size, op.result) - self.gen_initialize_tid(op.result, arraydescr.tid, in_nursery) + if (0 <= total_size <= 0xffffff and # up to 16MB, arbitrarily + self.gen_malloc_nursery(total_size, op.result)): + self.gen_initialize_tid(op.result, arraydescr.tid) self.gen_initialize_len(op.result, v_length, arraydescr.lendescr) elif self.gc_ll_descr.kind == 'boehm': self.gen_boehm_malloc_array(arraydescr, v_length, op.result) @@ -147,13 +149,22 @@ # mark 'v_result' as freshly malloced self.recent_mallocs[v_result] = None - def gen_malloc_fixedsize(self, size, v_result): - """Generate a CALL_MALLOC_GC(malloc_fixedsize_fn, Const(size)). - Note that with the framework GC, this should be called very rarely. + def gen_malloc_fixedsize(self, size, typeid, v_result): + """Generate a CALL_MALLOC_GC(malloc_fixedsize_fn, ...). + Used on Boehm, and on the framework GC for large fixed-size + mallocs. (For all I know this latter case never occurs in + practice, but better safe than sorry.) """ - addr = self.gc_ll_descr.get_malloc_fn_addr('malloc_fixedsize') - self._gen_call_malloc_gc([ConstInt(addr), ConstInt(size)], v_result, - self.gc_ll_descr.malloc_fixedsize_descr) + if self.gc_ll_descr.fielddescr_tid is not None: # framework GC + assert (size & (WORD-1)) == 0, "size not aligned?" + addr = self.gc_ll_descr.get_malloc_fn_addr('malloc_big_fixedsize') + args = [ConstInt(addr), ConstInt(size), ConstInt(typeid)] + descr = self.gc_ll_descr.malloc_big_fixedsize_descr + else: # Boehm + addr = self.gc_ll_descr.get_malloc_fn_addr('malloc_fixedsize') + args = [ConstInt(addr), ConstInt(size)] + descr = self.gc_ll_descr.malloc_fixedsize_descr + self._gen_call_malloc_gc(args, v_result, descr) def gen_boehm_malloc_array(self, arraydescr, v_num_elem, v_result): """Generate a CALL_MALLOC_GC(malloc_array_fn, ...) 
for Boehm.""" @@ -211,7 +222,6 @@ """ size = self.round_up_for_allocation(size) if not self.gc_ll_descr.can_use_nursery_malloc(size): - self.gen_malloc_fixedsize(size, v_result) return False # op = None @@ -240,24 +250,11 @@ self.recent_mallocs[v_result] = None return True - def gen_initialize_tid(self, v_newgcobj, tid, in_nursery): + def gen_initialize_tid(self, v_newgcobj, tid): if self.gc_ll_descr.fielddescr_tid is not None: # produce a SETFIELD to initialize the GC header - v_tid = ConstInt(tid) - if not in_nursery: - # important: must preserve the gcflags! rare case. - v_tidbase = BoxInt() - v_tidcombined = BoxInt() - op = ResOperation(rop.GETFIELD_RAW, - [v_newgcobj], v_tidbase, - descr=self.gc_ll_descr.fielddescr_tid) - self.newops.append(op) - op = ResOperation(rop.INT_OR, - [v_tidbase, v_tid], v_tidcombined) - self.newops.append(op) - v_tid = v_tidcombined op = ResOperation(rop.SETFIELD_GC, - [v_newgcobj, v_tid], None, + [v_newgcobj, ConstInt(tid)], None, descr=self.gc_ll_descr.fielddescr_tid) self.newops.append(op) diff --git a/pypy/jit/backend/llsupport/test/test_rewrite.py b/pypy/jit/backend/llsupport/test/test_rewrite.py --- a/pypy/jit/backend/llsupport/test/test_rewrite.py +++ b/pypy/jit/backend/llsupport/test/test_rewrite.py @@ -119,12 +119,19 @@ jump() """, """ [] - p0 = call_malloc_gc(ConstClass(malloc_fixedsize), \ - %(adescr.basesize + 10 * adescr.itemsize)d, \ - descr=malloc_fixedsize_descr) - setfield_gc(p0, 10, descr=alendescr) + p0 = call_malloc_gc(ConstClass(malloc_array), \ + %(adescr.basesize)d, \ + 10, \ + %(adescr.itemsize)d, \ + %(adescr.lendescr.offset)d, \ + descr=malloc_array_descr) jump() """) +## should ideally be: +## p0 = call_malloc_gc(ConstClass(malloc_fixedsize), \ +## %(adescr.basesize + 10 * adescr.itemsize)d, \ +## descr=malloc_fixedsize_descr) +## setfield_gc(p0, 10, descr=alendescr) def test_new_array_variable(self): self.check_rewrite(""" @@ -178,13 +185,20 @@ jump() """, """ [i1] - p0 = call_malloc_gc(ConstClass(malloc_fixedsize), \ - %(unicodedescr.basesize + \ - 10 * unicodedescr.itemsize)d, \ - descr=malloc_fixedsize_descr) - setfield_gc(p0, 10, descr=unicodelendescr) + p0 = call_malloc_gc(ConstClass(malloc_array), \ + %(unicodedescr.basesize)d, \ + 10, \ + %(unicodedescr.itemsize)d, \ + %(unicodelendescr.offset)d, \ + descr=malloc_array_descr) jump() """) +## should ideally be: +## p0 = call_malloc_gc(ConstClass(malloc_fixedsize), \ +## %(unicodedescr.basesize + \ +## 10 * unicodedescr.itemsize)d, \ +## descr=malloc_fixedsize_descr) +## setfield_gc(p0, 10, descr=unicodelendescr) class TestFramework(RewriteTests): @@ -203,7 +217,7 @@ # class FakeCPU(object): def sizeof(self, STRUCT): - descr = SizeDescrWithVTable(102) + descr = SizeDescrWithVTable(104) descr.tid = 9315 return descr self.cpu = FakeCPU() @@ -368,13 +382,9 @@ jump() """, """ [] - p0 = call_malloc_gc(ConstClass(malloc_fixedsize), \ - %(bdescr.basesize + 104)d, \ - descr=malloc_fixedsize_descr) - i0 = getfield_raw(p0, descr=tiddescr) - i1 = int_or(i0, 8765) - setfield_gc(p0, i1, descr=tiddescr) - setfield_gc(p0, 103, descr=blendescr) + p0 = call_malloc_gc(ConstClass(malloc_array), 1, \ + %(bdescr.tid)d, 103, \ + descr=malloc_array_descr) jump() """) @@ -437,11 +447,8 @@ jump() """, """ [p1] - p0 = call_malloc_gc(ConstClass(malloc_fixedsize), 104, \ - descr=malloc_fixedsize_descr) - i0 = getfield_raw(p0, descr=tiddescr) - i1 = int_or(i0, 9315) - setfield_gc(p0, i1, descr=tiddescr) + p0 = call_malloc_gc(ConstClass(malloc_big_fixedsize), 104, 9315, \ + 
descr=malloc_big_fixedsize_descr) setfield_gc(p0, ConstClass(o_vtable), descr=vtable_descr) jump() """) diff --git a/pypy/rpython/memory/gc/minimark.py b/pypy/rpython/memory/gc/minimark.py --- a/pypy/rpython/memory/gc/minimark.py +++ b/pypy/rpython/memory/gc/minimark.py @@ -608,6 +608,10 @@ specified as 0 if the object is not varsized. The returned object is fully initialized and zero-filled.""" # + # Here we really need a valid 'typeid'. + ll_assert(rffi.cast(lltype.Signed, typeid) != 0, + "external_malloc: typeid == 0") + # # Compute the total size, carefully checking for overflows. size_gc_header = self.gcheaderbuilder.size_gc_header nonvarsize = size_gc_header + self.fixed_size(typeid) From noreply at buildbot.pypy.org Mon Feb 27 16:18:11 2012 From: noreply at buildbot.pypy.org (arigo) Date: Mon, 27 Feb 2012 16:18:11 +0100 (CET) Subject: [pypy-commit] pypy default: Fix and comment. Message-ID: <20120227151811.216D1820B1@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r52929:4991f822e3a7 Date: 2012-02-27 16:17 +0100 http://bitbucket.org/pypy/pypy/changeset/4991f822e3a7/ Log: Fix and comment. diff --git a/pypy/rpython/memory/gc/minimark.py b/pypy/rpython/memory/gc/minimark.py --- a/pypy/rpython/memory/gc/minimark.py +++ b/pypy/rpython/memory/gc/minimark.py @@ -608,8 +608,9 @@ specified as 0 if the object is not varsized. The returned object is fully initialized and zero-filled.""" # - # Here we really need a valid 'typeid'. - ll_assert(rffi.cast(lltype.Signed, typeid) != 0, + # Here we really need a valid 'typeid', not 0 (as the JIT might + # try to send us if there is still a bug). + ll_assert(bool(self.combine(typeid, 0)), "external_malloc: typeid == 0") # # Compute the total size, carefully checking for overflows. From noreply at buildbot.pypy.org Mon Feb 27 16:28:47 2012 From: noreply at buildbot.pypy.org (arigo) Date: Mon, 27 Feb 2012 16:28:47 +0100 (CET) Subject: [pypy-commit] pypy default: Kill wrong copy-pasted comment. Message-ID: <20120227152847.8B29D820B1@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r52930:096e5c2052c1 Date: 2012-02-27 16:28 +0100 http://bitbucket.org/pypy/pypy/changeset/096e5c2052c1/ Log: Kill wrong copy-pasted comment. diff --git a/pypy/jit/backend/llsupport/gc.py b/pypy/jit/backend/llsupport/gc.py --- a/pypy/jit/backend/llsupport/gc.py +++ b/pypy/jit/backend/llsupport/gc.py @@ -773,8 +773,6 @@ # allocate a fixed-size object, but not in the nursery, because # it is too big. def malloc_big_fixedsize(size, tid): - """Allocate 'size' null bytes out of the nursery. - Note that the fast path is typically inlined by the backend.""" if self.DEBUG: self._random_usage_of_xmm_registers() type_id = llop.extract_ushort(llgroup.HALFWORD, tid) From noreply at buildbot.pypy.org Mon Feb 27 16:30:30 2012 From: noreply at buildbot.pypy.org (arigo) Date: Mon, 27 Feb 2012 16:30:30 +0100 (CET) Subject: [pypy-commit] pypy default: Remove the 16MB boundary logic which is pointless now. Message-ID: <20120227153030.59959820B1@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r52931:c08753e77c67 Date: 2012-02-27 16:30 +0100 http://bitbucket.org/pypy/pypy/changeset/c08753e77c67/ Log: Remove the 16MB boundary logic which is pointless now. 
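(Aside, not part of the archived patches: the changesets r52926 through r52931 in this stretch all revolve around the same question -- how the JIT backend initializes the GC type id ("tid") of an object it allocates outside the nursery.  r52926 first tried to preserve the GC flags with a getfield_raw / int_or / setfield_gc sequence on the tid field; r52928 then replaced that with a new GC helper, malloc_big_fixedsize(size, tid), so that the GC writes the tid itself and the rewriter emits no tid SETFIELD at all for such objects.  The expected trace quoted from test_rewrite.py just above shows the final shape, roughly:

    p0 = call_malloc_gc(ConstClass(malloc_big_fixedsize), 104, 9315,
                        descr=malloc_big_fixedsize_descr)
    setfield_gc(p0, ConstClass(o_vtable), descr=vtable_descr)

where 104 and 9315 are simply the object size and tid used by that particular test, not fixed constants of the rewrite.  The r52931 diff below then drops the 16MB array-size cap that this scheme made unnecessary.)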
diff --git a/pypy/jit/backend/llsupport/rewrite.py b/pypy/jit/backend/llsupport/rewrite.py --- a/pypy/jit/backend/llsupport/rewrite.py +++ b/pypy/jit/backend/llsupport/rewrite.py @@ -114,7 +114,7 @@ pass # total_size is still -1 elif arraydescr.itemsize == 0: total_size = arraydescr.basesize - if (0 <= total_size <= 0xffffff and # up to 16MB, arbitrarily + if (total_size >= 0 and self.gen_malloc_nursery(total_size, op.result)): self.gen_initialize_tid(op.result, arraydescr.tid) self.gen_initialize_len(op.result, v_length, arraydescr.lendescr) From noreply at buildbot.pypy.org Mon Feb 27 16:34:27 2012 From: noreply at buildbot.pypy.org (arigo) Date: Mon, 27 Feb 2012 16:34:27 +0100 (CET) Subject: [pypy-commit] pypy default: Remove again unused import. Message-ID: <20120227153427.89230820B1@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r52932:27e3f649d735 Date: 2012-02-27 16:34 +0100 http://bitbucket.org/pypy/pypy/changeset/27e3f649d735/ Log: Remove again unused import. diff --git a/pypy/jit/backend/llsupport/rewrite.py b/pypy/jit/backend/llsupport/rewrite.py --- a/pypy/jit/backend/llsupport/rewrite.py +++ b/pypy/jit/backend/llsupport/rewrite.py @@ -1,6 +1,6 @@ import sys from pypy.rlib.rarithmetic import ovfcheck -from pypy.jit.metainterp.history import ConstInt, BoxPtr, ConstPtr, BoxInt +from pypy.jit.metainterp.history import ConstInt, BoxPtr, ConstPtr from pypy.jit.metainterp.resoperation import ResOperation, rop from pypy.jit.codewriter import heaptracker from pypy.jit.backend.llsupport.symbolic import WORD From noreply at buildbot.pypy.org Mon Feb 27 17:00:24 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Mon, 27 Feb 2012 17:00:24 +0100 (CET) Subject: [pypy-commit] pypy py3k: pff, yak shaving. Since we are now passing an explicit globals() for exec(), Message-ID: <20120227160024.C7DA2820B1@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52933:50dddd4ecf08 Date: 2012-02-27 14:16 +0100 http://bitbucket.org/pypy/pypy/changeset/50dddd4ecf08/ Log: pff, yak shaving. Since we are now passing an explicit globals() for exec(), the __name__ was not set. This caused the imp module to be confused, and the test to fail. It took 2 hours to track it down :-( diff --git a/pypy/module/imp/test/test_import.py b/pypy/module/imp/test/test_import.py --- a/pypy/module/imp/test/test_import.py +++ b/pypy/module/imp/test/test_import.py @@ -422,17 +422,18 @@ assert pkg.pkg1.__package__ == 'pkg.pkg1' def test_future_relative_import_error_when_in_non_package(self): - ns = {} + ns = {'__name__': __name__} exec("""def imp(): + print('__name__ =', __name__) from .string import inpackage - """.rstrip(), ns) + """, ns) raises(ValueError, ns['imp']) def test_future_relative_import_error_when_in_non_package2(self): - ns = {} + ns = {'__name__': __name__} exec("""def imp(): from .. 
import inpackage - """.rstrip(), ns) + """, ns) raises(ValueError, ns['imp']) def test_relative_import_with___name__(self): From noreply at buildbot.pypy.org Mon Feb 27 17:00:26 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Mon, 27 Feb 2012 17:00:26 +0100 (CET) Subject: [pypy-commit] pypy py3k: sys.setdefaultencoding is no longer there, use settrace for this test Message-ID: <20120227160026.127BF8236C@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52934:b7200ae0e860 Date: 2012-02-27 14:20 +0100 http://bitbucket.org/pypy/pypy/changeset/b7200ae0e860/ Log: sys.setdefaultencoding is no longer there, use settrace for this test diff --git a/pypy/module/imp/test/test_import.py b/pypy/module/imp/test/test_import.py --- a/pypy/module/imp/test/test_import.py +++ b/pypy/module/imp/test/test_import.py @@ -553,14 +553,14 @@ import sys oldpath = sys.path try: - del sys.setdefaultencoding + del sys.settrace except AttributeError: pass reload(sys) assert sys.path is oldpath - assert 'setdefaultencoding' in dir(sys) + assert 'settrace' in dir(sys) def test_reload_infinite(self): import infinite_reload From noreply at buildbot.pypy.org Mon Feb 27 17:00:27 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Mon, 27 Feb 2012 17:00:27 +0100 (CET) Subject: [pypy-commit] pypy py3k: these two tests were really meant to be run against itertools, because it's a Message-ID: <20120227160027.5029C820B1@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52935:aca4ff218220 Date: 2012-02-27 14:51 +0100 http://bitbucket.org/pypy/pypy/changeset/aca4ff218220/ Log: these two tests were really meant to be run against itertools, because it's a builtin module. It seems that a20df1bb1bb8 did a s/itertools/queue, but I can't see why. Revert it. diff --git a/pypy/module/imp/test/test_import.py b/pypy/module/imp/test/test_import.py --- a/pypy/module/imp/test/test_import.py +++ b/pypy/module/imp/test/test_import.py @@ -38,7 +38,7 @@ test_reload = "def test():\n raise ValueError\n", infinite_reload = "import infinite_reload; reload(infinite_reload)", del_sys_module = "import sys\ndel sys.modules['del_sys_module']\n", - queue = "hello_world = 42\n", + itertools = "hello_world = 42\n", gc = "should_never_be_seen = 42\n", ) root.ensure("notapackage", dir=1) # empty, no __init__.py @@ -151,7 +151,7 @@ class AppTestImport: def setup_class(cls): # interpreter-level - #cls.space = gettestobjspace(usemodules=['itertools']) + cls.space = gettestobjspace(usemodules=['itertools']) cls.w_runappdirect = cls.space.wrap(conftest.option.runappdirect) cls.saved_modules = _setup(cls.space) #XXX Compile class @@ -606,32 +606,32 @@ def test_shadow_extension_1(self): if self.runappdirect: skip("hard to test: module is already imported") - # 'import queue' is supposed to find queue.py if there is + # 'import itertools' is supposed to find itertools.py if there is # one in sys.path. 
import sys - assert 'queue' not in sys.modules - import queue - assert hasattr(queue, 'hello_world') - assert not hasattr(queue, 'count') - assert '(built-in)' not in repr(queue) - del sys.modules['queue'] + assert 'itertools' not in sys.modules + import itertools + assert hasattr(itertools, 'hello_world') + assert not hasattr(itertools, 'count') + assert '(built-in)' not in repr(itertools) + del sys.modules['itertools'] def test_shadow_extension_2(self): if self.runappdirect: skip("hard to test: module is already imported") - # 'import queue' is supposed to find the built-in module even + # 'import itertools' is supposed to find the built-in module even # if there is also one in sys.path as long as it is *after* the # special entry '.../lib_pypy/__extensions__'. import sys - assert 'queue' not in sys.modules + assert 'itertools' not in sys.modules sys.path.append(sys.path.pop(0)) try: - import queue - assert not hasattr(queue, 'hello_world') - assert hasattr(queue, 'izip') - assert '(built-in)' in repr(queue) + import itertools + assert not hasattr(itertools, 'hello_world') + assert hasattr(itertools, 'izip') + assert '(built-in)' in repr(itertools) finally: sys.path.insert(0, sys.path.pop()) - del sys.modules['queue'] + del sys.modules['itertools'] class TestAbi: From noreply at buildbot.pypy.org Mon Feb 27 17:00:28 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Mon, 27 Feb 2012 17:00:28 +0100 (CET) Subject: [pypy-commit] pypy py3k: urlparse and compiler.misc no longer exists. Replace also distutils.core with Message-ID: <20120227160028.90CC6820B1@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52936:e838b0057e5d Date: 2012-02-27 15:02 +0100 http://bitbucket.org/pypy/pypy/changeset/e838b0057e5d/ Log: urlparse and compiler.misc no longer exists. Replace also distutils.core with html.parser, because it's much faster to import diff --git a/pypy/module/imp/test/test_import.py b/pypy/module/imp/test/test_import.py --- a/pypy/module/imp/test/test_import.py +++ b/pypy/module/imp/test/test_import.py @@ -1135,7 +1135,7 @@ sys.path_hooks.append(ImpWrapper) sys.path_importer_cache.clear() try: - mnames = ("colorsys", "urlparse", "distutils.core", "compiler.misc") + mnames = ("colorsys", "html.parser") for mname in mnames: parent = mname.split(".")[0] for n in sys.modules.keys(): From noreply at buildbot.pypy.org Mon Feb 27 17:00:29 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Mon, 27 Feb 2012 17:00:29 +0100 (CET) Subject: [pypy-commit] pypy py3k: datetime is now imported from lib-python/3.2, which in turns does other two imports from _datetime. Use 'math' instead, which is builtin and so we are sure it doesn't do any import Message-ID: <20120227160029.D7243820B1@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52937:fb993f9644b1 Date: 2012-02-27 15:33 +0100 http://bitbucket.org/pypy/pypy/changeset/fb993f9644b1/ Log: datetime is now imported from lib-python/3.2, which in turns does other two imports from _datetime. 
Use 'math' instead, which is builtin and so we are sure it doesn't do any import diff --git a/pypy/module/imp/test/test_import.py b/pypy/module/imp/test/test_import.py --- a/pypy/module/imp/test/test_import.py +++ b/pypy/module/imp/test/test_import.py @@ -1012,24 +1012,24 @@ os.environ['LANG'] = oldlang class AppTestImportHooks(object): - def test_meta_path(self): + def test_meta_path_1(self): tried_imports = [] class Importer(object): def find_module(self, fullname, path=None): tried_imports.append((fullname, path)) - import sys, datetime - del sys.modules["datetime"] + import sys, math + del sys.modules["math"] sys.meta_path.append(Importer()) try: - import datetime + import math assert len(tried_imports) == 1 package_name = '.'.join(__name__.split('.')[:-1]) if package_name: - assert tried_imports[0][0] == package_name + ".datetime" + assert tried_imports[0][0] == package_name + ".math" else: - assert tried_imports[0][0] == "datetime" + assert tried_imports[0][0] == "math" finally: sys.meta_path.pop() From noreply at buildbot.pypy.org Mon Feb 27 17:00:31 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Mon, 27 Feb 2012 17:00:31 +0100 (CET) Subject: [pypy-commit] pypy py3k: bah, we can't marshal a host code object and then expect to unmarshal a PyCode with our own implementation. Instead, convert the host code object to PyCode, and marshal it with out own impl Message-ID: <20120227160031.232F4820B1@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52938:372725be34a9 Date: 2012-02-27 16:12 +0100 http://bitbucket.org/pypy/pypy/changeset/372725be34a9/ Log: bah, we can't marshal a host code object and then expect to unmarshal a PyCode with our own implementation. Instead, convert the host code object to PyCode, and marshal it with out own impl diff --git a/pypy/module/imp/test/test_import.py b/pypy/module/imp/test/test_import.py --- a/pypy/module/imp/test/test_import.py +++ b/pypy/module/imp/test/test_import.py @@ -2,7 +2,7 @@ from pypy.interpreter.module import Module from pypy.interpreter import gateway from pypy.interpreter.error import OperationError -import pypy.interpreter.pycode +from pypy.interpreter.pycode import PyCode from pypy.tool.udir import udir from pypy.rlib import streamio from pypy.conftest import gettestobjspace @@ -649,13 +649,18 @@ x = marshal.dumps(data) return x[-4:] -def _testfile(magic, mtime, co=None): +def _testfile(space, magic, mtime, co=None): cpathname = str(udir.join('test.pyc')) f = file(cpathname, "wb") f.write(_getlong(magic)) f.write(_getlong(mtime)) if co: - marshal.dump(co, f) + # marshal the code object with the PyPy marshal impl + pyco = PyCode._from_code(space, co) + w_marshal = space.getbuiltinmodule('marshal') + w_marshaled_code = space.call_method(w_marshal, 'dumps', pyco) + marshaled_code = space.bytes_w(w_marshaled_code) + f.write(marshaled_code) f.close() return cpathname @@ -712,7 +717,7 @@ space = self.space mtime = 12345 co = compile('x = 42', '?', 'exec') - cpathname = _testfile(importing.get_pyc_magic(space), mtime, co) + cpathname = _testfile(space, importing.get_pyc_magic(space), mtime, co) stream = streamio.open_file_as_stream(cpathname, "rb") try: stream.seek(8, 0) @@ -721,7 +726,7 @@ pycode = space.interpclass_w(w_code) finally: stream.close() - assert type(pycode) is pypy.interpreter.pycode.PyCode + assert type(pycode) is PyCode w_dic = space.newdict() pycode.exec_code(space, w_dic, w_dic) w_ret = space.getitem(w_dic, space.wrap('x')) @@ -764,7 +769,7 @@ finally: stream.close() pycode = 
space.interpclass_w(w_ret) - assert type(pycode) is pypy.interpreter.pycode.PyCode + assert type(pycode) is PyCode w_dic = space.newdict() pycode.exec_code(space, w_dic, w_dic) w_ret = space.getitem(w_dic, space.wrap('x')) @@ -908,7 +913,7 @@ finally: stream.close() pycode = space.interpclass_w(w_ret) - assert type(pycode) is pypy.interpreter.pycode.PyCode + assert type(pycode) is PyCode cpathname = str(udir.join('cpathname.pyc')) mode = 0777 From noreply at buildbot.pypy.org Mon Feb 27 17:00:32 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Mon, 27 Feb 2012 17:00:32 +0100 (CET) Subject: [pypy-commit] pypy py3k: forgot to fix the invocation of _testfile after I changed its signature Message-ID: <20120227160032.61F09820B1@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52939:3133f7b5d52b Date: 2012-02-27 16:37 +0100 http://bitbucket.org/pypy/pypy/changeset/3133f7b5d52b/ Log: forgot to fix the invocation of _testfile after I changed its signature diff --git a/pypy/module/imp/test/test_import.py b/pypy/module/imp/test/test_import.py --- a/pypy/module/imp/test/test_import.py +++ b/pypy/module/imp/test/test_import.py @@ -737,7 +737,7 @@ space = self.space mtime = 12345 co = compile('x = 42', '?', 'exec') - cpathname = _testfile(importing.get_pyc_magic(space), mtime, co) + cpathname = _testfile(space, importing.get_pyc_magic(space), mtime, co) w_modulename = space.wrap('somemodule') stream = streamio.open_file_as_stream(cpathname, "rb") try: From noreply at buildbot.pypy.org Mon Feb 27 17:00:33 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Mon, 27 Feb 2012 17:00:33 +0100 (CET) Subject: [pypy-commit] pypy py3k: StringIO is no longer there, use cmd instead; fix the syntax of an octal number Message-ID: <20120227160033.A1730820B1@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52940:58feb84d8e37 Date: 2012-02-27 16:46 +0100 http://bitbucket.org/pypy/pypy/changeset/58feb84d8e37/ Log: StringIO is no longer there, use cmd instead; fix the syntax of an octal number diff --git a/pypy/module/imp/test/test_app.py b/pypy/module/imp/test/test_app.py --- a/pypy/module/imp/test/test_app.py +++ b/pypy/module/imp/test/test_app.py @@ -25,7 +25,7 @@ def test_find_module(self): import os - file, pathname, description = self.imp.find_module('StringIO') + file, pathname, description = self.imp.find_module('cmd') assert file is not None file.close() assert os.path.exists(pathname) @@ -63,7 +63,7 @@ assert not self.imp.is_frozen('hello.world.this.is.never.a.frozen.module.name') - def test_load_module_py(self): + def test_load_module_pyx(self): fn = self._py_file() descr = ('.py', 'U', self.imp.PY_SOURCE) f = open(fn, 'U') @@ -164,7 +164,7 @@ with open(file_name, "wb") as f: f.write(code) compiled_name = file_name + ("c" if __debug__ else "o") - chmod(file_name, 0777) + chmod(file_name, 0o777) # Setup sys_path = path[:] From noreply at buildbot.pypy.org Mon Feb 27 17:00:34 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Mon, 27 Feb 2012 17:00:34 +0100 (CET) Subject: [pypy-commit] pypy py3k: fix typo Message-ID: <20120227160034.E1553820B1@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52941:96b007a12d00 Date: 2012-02-27 16:50 +0100 http://bitbucket.org/pypy/pypy/changeset/96b007a12d00/ Log: fix typo diff --git a/pypy/module/imp/test/test_app.py b/pypy/module/imp/test/test_app.py --- a/pypy/module/imp/test/test_app.py +++ b/pypy/module/imp/test/test_app.py @@ -63,7 +63,7 @@ assert not 
self.imp.is_frozen('hello.world.this.is.never.a.frozen.module.name') - def test_load_module_pyx(self): + def test_load_module_py(self): fn = self._py_file() descr = ('.py', 'U', self.imp.PY_SOURCE) f = open(fn, 'U') From noreply at buildbot.pypy.org Mon Feb 27 17:00:36 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Mon, 27 Feb 2012 17:00:36 +0100 (CET) Subject: [pypy-commit] pypy py3k: make sure that we write *bytes* when marshaling; test_load_module_pyc_1 makes a bit of more progress, now it fails because we don't know how to handle _io files Message-ID: <20120227160036.31F1E820B1@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52942:138699bd5038 Date: 2012-02-27 16:57 +0100 http://bitbucket.org/pypy/pypy/changeset/138699bd5038/ Log: make sure that we write *bytes* when marshaling; test_load_module_pyc_1 makes a bit of more progress, now it fails because we don't know how to handle _io files diff --git a/pypy/module/imp/test/test_app.py b/pypy/module/imp/test/test_app.py --- a/pypy/module/imp/test/test_app.py +++ b/pypy/module/imp/test/test_app.py @@ -18,7 +18,7 @@ co = compile("marker=42", "x.py", "exec") f = open('@TEST.pyc', 'wb') f.write(imp.get_magic()) - f.write('\x00\x00\x00\x00') + f.write(b'\x00\x00\x00\x00') marshal.dump(co, f) f.close() return '@TEST.pyc' diff --git a/pypy/module/marshal/interp_marshal.py b/pypy/module/marshal/interp_marshal.py --- a/pypy/module/marshal/interp_marshal.py +++ b/pypy/module/marshal/interp_marshal.py @@ -81,7 +81,7 @@ def write(self, data): space = self.space - space.call_function(self.func, space.wrap(data)) + space.call_function(self.func, space.wrapbytes(data)) class FileReader(AbstractReaderWriter): From noreply at buildbot.pypy.org Mon Feb 27 17:24:08 2012 From: noreply at buildbot.pypy.org (arigo) Date: Mon, 27 Feb 2012 17:24:08 +0100 (CET) Subject: [pypy-commit] pypy default: Test from issue1073. Message-ID: <20120227162408.6D3DF820B1@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r52943:092ee39048af Date: 2012-02-27 17:23 +0100 http://bitbucket.org/pypy/pypy/changeset/092ee39048af/ Log: Test from issue1073. 
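(Aside, not part of the archived patch: the first hunk of the diff that follows relies on the standard subprocess convention that Popen.returncode is negative -- specifically -N -- when the child process was terminated by signal N (POSIX only).  A minimal illustration of that convention, assuming a Unix system with a 'sleep' binary on the PATH:

    import signal
    import subprocess

    p = subprocess.Popen(['sleep', '60'])
    p.send_signal(signal.SIGKILL)   # simulate the child being killed
    p.wait()
    # returncode is -N when the child was terminated by signal N
    assert p.returncode == -signal.SIGKILL

With the check added below, a translated pypy-c child that dies from a signal during one of these tests is reported as an explicit IOError instead of surfacing only indirectly later.)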
diff --git a/pypy/module/pypyjit/test_pypy_c/test_00_model.py b/pypy/module/pypyjit/test_pypy_c/test_00_model.py --- a/pypy/module/pypyjit/test_pypy_c/test_00_model.py +++ b/pypy/module/pypyjit/test_pypy_c/test_00_model.py @@ -60,6 +60,9 @@ stdout=subprocess.PIPE, stderr=subprocess.PIPE) stdout, stderr = pipe.communicate() + if getattr(pipe, 'returncode', 0) < 0: + raise IOError("subprocess was killed by signal %d" % ( + pipe.returncode,)) if stderr.startswith('SKIP:'): py.test.skip(stderr) if stderr.startswith('debug_alloc.h:'): # lldebug builds diff --git a/pypy/module/pypyjit/test_pypy_c/test_alloc.py b/pypy/module/pypyjit/test_pypy_c/test_alloc.py new file mode 100644 --- /dev/null +++ b/pypy/module/pypyjit/test_pypy_c/test_alloc.py @@ -0,0 +1,26 @@ +import py, sys +from pypy.module.pypyjit.test_pypy_c.test_00_model import BaseTestPyPyC + +class TestAlloc(BaseTestPyPyC): + + SIZES = dict.fromkeys([2 ** n for n in range(26)] + # up to 32MB + [2 ** n - 1 for n in range(26)]) + + def test_newstr_constant_size(self): + for size in TestAlloc.SIZES: + yield self.newstr_constant_size, size + + def newstr_constant_size(self, size): + src = """if 1: + N = %(size)d + part_a = 'a' * N + part_b = 'b' * N + for i in xrange(20): + ao = '%%s%%s' %% (part_a, part_b) + def main(): + return 42 +""" % {'size': size} + log = self.run(src, [], threshold=10) + assert log.result == 42 + loop, = log.loops_by_filename(self.filepath) + # assert did not crash From noreply at buildbot.pypy.org Mon Feb 27 18:21:21 2012 From: noreply at buildbot.pypy.org (arigo) Date: Mon, 27 Feb 2012 18:21:21 +0100 (CET) Subject: [pypy-commit] pypy miniscan: Check-in intermediate progress for completeness, and close this Message-ID: <20120227172121.78321820B1@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: miniscan Changeset: r52944:698dc6400468 Date: 2012-02-27 18:20 +0100 http://bitbucket.org/pypy/pypy/changeset/698dc6400468/ Log: Check-in intermediate progress for completeness, and close this branch as abandoned. diff --git a/pypy/rpython/memory/gc/minimark.py b/pypy/rpython/memory/gc/minimark.py --- a/pypy/rpython/memory/gc/minimark.py +++ b/pypy/rpython/memory/gc/minimark.py @@ -1300,10 +1300,17 @@ # then the write_barrier must have ensured that the prebuilt # GcStruct is in the list self.old_objects_pointing_to_young. debug_start("gc-minor-walkroots") + if self.root_walker.conservative_stack_roots: + self.prepare_conservative_stack_roots_minor() + stack_roots = MiniMarkGC.conservative_stack_roots_minor + else: + stack_roots = MiniMarkGC._trace_drag_out1 self.root_walker.walk_roots( - MiniMarkGC._trace_drag_out1, # stack roots + stack_roots, # stack roots MiniMarkGC._trace_drag_out1, # static in prebuilt non-gc None) # static in prebuilt gc + if self.root_walker.conservative_stack_roots: + self.finish_conservative_stack_roots_minor() debug_stop("gc-minor-walkroots") def collect_cardrefs_to_nursery(self): @@ -1708,10 +1715,17 @@ self.objects_to_trace) # # Add the roots from the other sources. 
+ if self.root_walker.conservative_stack_roots: + self.prepare_conservative_stack_roots_major() + stack_roots = MiniMarkGC.conservative_stack_roots_major + else: + stack_roots = MiniMarkGC._collect_ref_stk self.root_walker.walk_roots( - MiniMarkGC._collect_ref_stk, # stack roots + stack_roots, # stack roots MiniMarkGC._collect_ref_stk, # static in prebuilt non-gc structures None) # we don't need the static in all prebuilt gc objects + if self.root_walker.conservative_stack_roots: + self.finish_conservative_stack_roots_major() # # If we are in an inner collection caused by a call to a finalizer, # the 'run_finalizers' objects also need to be kept alive. @@ -2022,6 +2036,42 @@ self.old_objects_with_weakrefs.delete() self.old_objects_with_weakrefs = new_with_weakref + # ---------- + # Conservative stack scanning, for --gcrootfinder=scan + + SMALL_VALUE_MAX = 4095 + + def prepare_conservative_stack_roots_minor(self): + ... + + def conservative_stack_roots_minor(self, start, stop): + """Called during a minor collection. Must conservatively find + addresses from the stack, between 'start' and 'stop', that + point to young objects. These objects must be pinned down. + """ + scan = start + while scan != stop: + addr = scan.address[0] # maybe an address + addrint = llmemory.cast_adr_to_int(addr) + scan += llmemory.sizeof(llmemory.Address) + # + # A first quick check for NULLs or small positive integers + if r_uint(addrint) <= r_uint(self.SMALL_VALUE_MAX): + continue + # + # If it's not aligned to a WORD, no chance + if addrint & (WORD-1) != 0: + continue + # + # Is it in the nursery? + if self.is_in_nursery(addr): + # + # ././. + + + def conservative_stack_roots_major(self, start, stop): + xxx + # ____________________________________________________________ diff --git a/pypy/rpython/memory/gctransform/framework.py b/pypy/rpython/memory/gctransform/framework.py --- a/pypy/rpython/memory/gctransform/framework.py +++ b/pypy/rpython/memory/gctransform/framework.py @@ -1341,6 +1341,7 @@ class BaseRootWalker(object): need_root_stack = False thread_setup = None + conservative_stack_roots = False def __init__(self, gctransformer): self.gcdata = gctransformer.gcdata diff --git a/pypy/rpython/memory/gctransform/scan.py b/pypy/rpython/memory/gctransform/scan.py --- a/pypy/rpython/memory/gctransform/scan.py +++ b/pypy/rpython/memory/gctransform/scan.py @@ -35,6 +35,7 @@ class ScanStackRootWalker(BaseRootWalker): + conservative_stack_roots = True def __init__(self, gctransformer): BaseRootWalker.__init__(self, gctransformer) @@ -44,25 +45,50 @@ self._asm_callback = _asm_callback #def need_stacklet_support(self, gctransformer, getfn): - # anything needed? 
+ # xxx #def need_thread_support(self, gctransformer, getfn): # xxx - def walk_stack_roots(self, collect_stack_root): + def walk_stack_roots(self, collect_stack_root_range): gcdata = self.gcdata - gcdata._gc_collect_stack_root = collect_stack_root + gcdata._gc_collect_stack_root_range = collect_stack_root_range pypy_asm_close_for_scanning( llhelper(ASM_CALLBACK_PTR, self._asm_callback)) def walk_stack_from(self): - raise NotImplementedError + bottom = pypy_get_asm_tmp_stack_bottom() # highest address + top = pypy_get_asm_stackptr() # lowest address + collect_stack_root_range = self.gcdata._gc_collect_stack_root_range + collect_stack_root_range(self.gc, top, bottom) eci = ExternalCompilationInfo( - post_include_bits = ["extern void pypy_asm_close_for_scanning(void*);\n"], + post_include_bits = [''' +extern void pypy_asm_close_for_scanning(void*); +extern void *pypy_asm_tmp_stack_bottom; +#define pypy_get_asm_tmp_stack_bottom() pypy_asm_tmp_stack_bottom + +#if defined(__amd64__) +# define _pypy_get_asm_stackptr(result) asm("movq %%rsp, %0" : "=g"(result)) +#else +# define _pypy_get_asm_stackptr(result) asm("movl %%esp, %0" : "=g"(result)) +#endif + +static void *pypy_get_asm_stackptr(void) +{ + /* might return a "esp" whose value is slightly smaller than necessary, + due to the extra function call. */ + void *result; + _pypy_get_asm_stackptr(result); + return result; +} + +'''], separate_module_sources = [''' +void *pypy_asm_tmp_stack_bottom = 0; /* temporary */ + void pypy_asm_close_for_scanning(void *fn) { /* We have to do the call by clobbering all registers. This is @@ -87,3 +113,13 @@ _nowrapper=True, random_effects_on_gcobjs=True, compilation_info=eci) +pypy_get_asm_tmp_stack_bottom =rffi.llexternal('pypy_get_asm_tmp_stack_bottom', + [], llmemory.Address, + sandboxsafe=True, + _nowrapper=True, + compilation_info=eci) +pypy_get_asm_stackptr = rffi.llexternal('pypy_get_asm_stackptr', + [], llmemory.Address, + sandboxsafe=True, + _nowrapper=True, + compilation_info=eci) diff --git a/pypy/translator/c/gc.py b/pypy/translator/c/gc.py --- a/pypy/translator/c/gc.py +++ b/pypy/translator/c/gc.py @@ -412,6 +412,11 @@ def GC_KEEPALIVE(self, funcgen, v): return 'pypy_asm_keepalive(%s);' % funcgen.expr(v) + def OP_GC_STACK_BOTTOM(self, funcgen, op): + # XXX temporary + return ('assert(!pypy_asm_tmp_stack_bottom); /* temporary */\n' + + '_pypy_get_asm_stackptr(pypy_asm_tmp_stack_bottom);' % asm) + def OP_GC_RELOAD_POSSIBLY_MOVED(self, funcgen, op): raise Exception("should not be produced with --gcrootfinder=scan") diff --git a/pypy/translator/c/test/test_scan.py b/pypy/translator/c/test/test_scan.py --- a/pypy/translator/c/test/test_scan.py +++ b/pypy/translator/c/test/test_scan.py @@ -1,5 +1,6 @@ from pypy.translator.c.test import test_newgc -class TestMiniMarkGC(test_newgc.TestMiniMarkGC): +class TestScanMiniMarkGC(test_newgc.TestMiniMarkGC): + gcpolicy = "minimark" gcrootfinder = "scan" From noreply at buildbot.pypy.org Tue Feb 28 01:50:33 2012 From: noreply at buildbot.pypy.org (fijal) Date: Tue, 28 Feb 2012 01:50:33 +0100 (CET) Subject: [pypy-commit] pypy dead-code-optimization: implement dead ops removal Message-ID: <20120228005033.F13DB820B1@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: dead-code-optimization Changeset: r52945:58bcd5dd0b05 Date: 2012-02-27 16:50 -0800 http://bitbucket.org/pypy/pypy/changeset/58bcd5dd0b05/ Log: implement dead ops removal diff --git a/pypy/jit/metainterp/optimizeopt/__init__.py b/pypy/jit/metainterp/optimizeopt/__init__.py --- 
a/pypy/jit/metainterp/optimizeopt/__init__.py +++ b/pypy/jit/metainterp/optimizeopt/__init__.py @@ -9,6 +9,7 @@ from pypy.jit.metainterp.optimizeopt.simplify import OptSimplify from pypy.jit.metainterp.optimizeopt.pure import OptPure from pypy.jit.metainterp.optimizeopt.earlyforce import OptEarlyForce +from pypy.jit.metainterp.optimizeopt.deadops import remove_dead_ops from pypy.rlib.jit import PARAMETERS from pypy.rlib.unroll import unrolling_iterable from pypy.rlib.debug import debug_start, debug_stop, debug_print @@ -65,6 +66,7 @@ else: optimizer = Optimizer(metainterp_sd, loop, optimizations) optimizer.propagate_all_forward() + remove_dead_ops(loop) finally: debug_stop("jit-optimize") diff --git a/pypy/jit/metainterp/optimizeopt/deadops.py b/pypy/jit/metainterp/optimizeopt/deadops.py new file mode 100644 --- /dev/null +++ b/pypy/jit/metainterp/optimizeopt/deadops.py @@ -0,0 +1,16 @@ + +def remove_dead_ops(loop): + newops = [] + seen = {} + for i in range(len(loop.operations) -1, -1, -1): + op = loop.operations[i] + if op.has_no_side_effect() and op.result not in seen: + continue + for arg in op.getarglist(): + seen[arg] = None + if op.getfailargs(): + for arg in op.getfailargs(): + seen[arg] = None + newops.append(op) + newops.reverse() + loop.operations[:] = newops diff --git a/pypy/jit/metainterp/optimizeopt/test/test_deadops.py b/pypy/jit/metainterp/optimizeopt/test/test_deadops.py new file mode 100644 --- /dev/null +++ b/pypy/jit/metainterp/optimizeopt/test/test_deadops.py @@ -0,0 +1,57 @@ + +from pypy.jit.metainterp.optimizeopt.test.test_util import LLtypeMixin +from pypy.jit.metainterp.optimizeopt.test.test_optimizebasic import BaseTestBasic + +class TestRemoveDeadOps(BaseTestBasic, LLtypeMixin): + def test_deadops(self): + ops = """ + [i0] + i1 = int_add(i0, 1) + jump() + """ + expected = """ + [i0] + jump() + """ + self.optimize_loop(ops, expected) + + def test_not_deadops(self): + ops = """ + [i0] + i1 = int_add(i0, 1) + jump(i1) + """ + expected = """ + [i0] + i1 = int_add(i0, 1) + jump(i1) + """ + self.optimize_loop(ops, expected) + + def test_not_deadops_1(self): + ops = """ + [i0] + i1 = int_add(i0, 1) + guard_true(i0) [i1] + jump() + """ + expected = """ + [i0] + i1 = int_add(i0, 1) + guard_true(i0) [i1] + jump() + """ + self.optimize_loop(ops, expected) + + def test_not_deadops_2(self): + ops = """ + [p0, i0] + setfield_gc(p0, i0) + jump() + """ + expected = """ + [p0, i0] + setfield_gc(p0, i0) + jump() + """ + self.optimize_loop(ops, expected) From noreply at buildbot.pypy.org Tue Feb 28 04:24:33 2012 From: noreply at buildbot.pypy.org (justinpeel) Date: Tue, 28 Feb 2012 04:24:33 +0100 (CET) Subject: [pypy-commit] pypy faster-str-decode-escape: merge in default Message-ID: <20120228032433.E4BB1820B1@wyvern.cs.uni-duesseldorf.de> Author: Justin Peel Branch: faster-str-decode-escape Changeset: r52946:c095ae1d0b83 Date: 2012-02-27 20:24 -0700 http://bitbucket.org/pypy/pypy/changeset/c095ae1d0b83/ Log: merge in default diff --git a/pypy/jit/backend/llsupport/gc.py b/pypy/jit/backend/llsupport/gc.py --- a/pypy/jit/backend/llsupport/gc.py +++ b/pypy/jit/backend/llsupport/gc.py @@ -769,11 +769,19 @@ self.generate_function('malloc_unicode', malloc_unicode, [lltype.Signed]) - # Rarely called: allocate a fixed-size amount of bytes, but - # not in the nursery, because it is too big. Implemented like - # malloc_nursery_slowpath() above. 
- self.generate_function('malloc_fixedsize', malloc_nursery_slowpath, - [lltype.Signed]) + # Never called as far as I can tell, but there for completeness: + # allocate a fixed-size object, but not in the nursery, because + # it is too big. + def malloc_big_fixedsize(size, tid): + if self.DEBUG: + self._random_usage_of_xmm_registers() + type_id = llop.extract_ushort(llgroup.HALFWORD, tid) + check_typeid(type_id) + return llop1.do_malloc_fixedsize_clear(llmemory.GCREF, + type_id, size, + False, False, False) + self.generate_function('malloc_big_fixedsize', malloc_big_fixedsize, + [lltype.Signed] * 2) def _bh_malloc(self, sizedescr): from pypy.rpython.memory.gctypelayout import check_typeid diff --git a/pypy/jit/backend/llsupport/rewrite.py b/pypy/jit/backend/llsupport/rewrite.py --- a/pypy/jit/backend/llsupport/rewrite.py +++ b/pypy/jit/backend/llsupport/rewrite.py @@ -1,6 +1,6 @@ import sys from pypy.rlib.rarithmetic import ovfcheck -from pypy.jit.metainterp.history import ConstInt, BoxPtr, ConstPtr, BoxInt +from pypy.jit.metainterp.history import ConstInt, BoxPtr, ConstPtr from pypy.jit.metainterp.resoperation import ResOperation, rop from pypy.jit.codewriter import heaptracker from pypy.jit.backend.llsupport.symbolic import WORD @@ -96,8 +96,10 @@ def handle_new_fixedsize(self, descr, op): assert isinstance(descr, SizeDescr) size = descr.size - in_nursery = self.gen_malloc_nursery(size, op.result) - self.gen_initialize_tid(op.result, descr.tid, in_nursery) + if self.gen_malloc_nursery(size, op.result): + self.gen_initialize_tid(op.result, descr.tid) + else: + self.gen_malloc_fixedsize(size, descr.tid, op.result) def handle_new_array(self, arraydescr, op): v_length = op.getarg(0) @@ -112,9 +114,9 @@ pass # total_size is still -1 elif arraydescr.itemsize == 0: total_size = arraydescr.basesize - if 0 <= total_size <= 0xffffff: # up to 16MB, arbitrarily - in_nursery = self.gen_malloc_nursery(total_size, op.result) - self.gen_initialize_tid(op.result, arraydescr.tid, in_nursery) + if (total_size >= 0 and + self.gen_malloc_nursery(total_size, op.result)): + self.gen_initialize_tid(op.result, arraydescr.tid) self.gen_initialize_len(op.result, v_length, arraydescr.lendescr) elif self.gc_ll_descr.kind == 'boehm': self.gen_boehm_malloc_array(arraydescr, v_length, op.result) @@ -147,13 +149,22 @@ # mark 'v_result' as freshly malloced self.recent_mallocs[v_result] = None - def gen_malloc_fixedsize(self, size, v_result): - """Generate a CALL_MALLOC_GC(malloc_fixedsize_fn, Const(size)). - Note that with the framework GC, this should be called very rarely. + def gen_malloc_fixedsize(self, size, typeid, v_result): + """Generate a CALL_MALLOC_GC(malloc_fixedsize_fn, ...). + Used on Boehm, and on the framework GC for large fixed-size + mallocs. (For all I know this latter case never occurs in + practice, but better safe than sorry.) """ - addr = self.gc_ll_descr.get_malloc_fn_addr('malloc_fixedsize') - self._gen_call_malloc_gc([ConstInt(addr), ConstInt(size)], v_result, - self.gc_ll_descr.malloc_fixedsize_descr) + if self.gc_ll_descr.fielddescr_tid is not None: # framework GC + assert (size & (WORD-1)) == 0, "size not aligned?" 
+ addr = self.gc_ll_descr.get_malloc_fn_addr('malloc_big_fixedsize') + args = [ConstInt(addr), ConstInt(size), ConstInt(typeid)] + descr = self.gc_ll_descr.malloc_big_fixedsize_descr + else: # Boehm + addr = self.gc_ll_descr.get_malloc_fn_addr('malloc_fixedsize') + args = [ConstInt(addr), ConstInt(size)] + descr = self.gc_ll_descr.malloc_fixedsize_descr + self._gen_call_malloc_gc(args, v_result, descr) def gen_boehm_malloc_array(self, arraydescr, v_num_elem, v_result): """Generate a CALL_MALLOC_GC(malloc_array_fn, ...) for Boehm.""" @@ -211,7 +222,6 @@ """ size = self.round_up_for_allocation(size) if not self.gc_ll_descr.can_use_nursery_malloc(size): - self.gen_malloc_fixedsize(size, v_result) return False # op = None @@ -240,24 +250,11 @@ self.recent_mallocs[v_result] = None return True - def gen_initialize_tid(self, v_newgcobj, tid, in_nursery): + def gen_initialize_tid(self, v_newgcobj, tid): if self.gc_ll_descr.fielddescr_tid is not None: # produce a SETFIELD to initialize the GC header - v_tid = ConstInt(tid) - if not in_nursery: - # important: must preserve the gcflags! rare case. - v_tidbase = BoxInt() - v_tidcombined = BoxInt() - op = ResOperation(rop.GETFIELD_RAW, - [v_newgcobj], v_tidbase, - descr=self.gc_ll_descr.fielddescr_tid) - self.newops.append(op) - op = ResOperation(rop.INT_OR, - [v_tidbase, v_tid], v_tidcombined) - self.newops.append(op) - v_tid = v_tidcombined op = ResOperation(rop.SETFIELD_GC, - [v_newgcobj, v_tid], None, + [v_newgcobj, ConstInt(tid)], None, descr=self.gc_ll_descr.fielddescr_tid) self.newops.append(op) diff --git a/pypy/jit/backend/llsupport/test/test_rewrite.py b/pypy/jit/backend/llsupport/test/test_rewrite.py --- a/pypy/jit/backend/llsupport/test/test_rewrite.py +++ b/pypy/jit/backend/llsupport/test/test_rewrite.py @@ -119,12 +119,19 @@ jump() """, """ [] - p0 = call_malloc_gc(ConstClass(malloc_fixedsize), \ - %(adescr.basesize + 10 * adescr.itemsize)d, \ - descr=malloc_fixedsize_descr) - setfield_gc(p0, 10, descr=alendescr) + p0 = call_malloc_gc(ConstClass(malloc_array), \ + %(adescr.basesize)d, \ + 10, \ + %(adescr.itemsize)d, \ + %(adescr.lendescr.offset)d, \ + descr=malloc_array_descr) jump() """) +## should ideally be: +## p0 = call_malloc_gc(ConstClass(malloc_fixedsize), \ +## %(adescr.basesize + 10 * adescr.itemsize)d, \ +## descr=malloc_fixedsize_descr) +## setfield_gc(p0, 10, descr=alendescr) def test_new_array_variable(self): self.check_rewrite(""" @@ -178,13 +185,20 @@ jump() """, """ [i1] - p0 = call_malloc_gc(ConstClass(malloc_fixedsize), \ - %(unicodedescr.basesize + \ - 10 * unicodedescr.itemsize)d, \ - descr=malloc_fixedsize_descr) - setfield_gc(p0, 10, descr=unicodelendescr) + p0 = call_malloc_gc(ConstClass(malloc_array), \ + %(unicodedescr.basesize)d, \ + 10, \ + %(unicodedescr.itemsize)d, \ + %(unicodelendescr.offset)d, \ + descr=malloc_array_descr) jump() """) +## should ideally be: +## p0 = call_malloc_gc(ConstClass(malloc_fixedsize), \ +## %(unicodedescr.basesize + \ +## 10 * unicodedescr.itemsize)d, \ +## descr=malloc_fixedsize_descr) +## setfield_gc(p0, 10, descr=unicodelendescr) class TestFramework(RewriteTests): @@ -203,7 +217,7 @@ # class FakeCPU(object): def sizeof(self, STRUCT): - descr = SizeDescrWithVTable(102) + descr = SizeDescrWithVTable(104) descr.tid = 9315 return descr self.cpu = FakeCPU() @@ -368,13 +382,9 @@ jump() """, """ [] - p0 = call_malloc_gc(ConstClass(malloc_fixedsize), \ - %(bdescr.basesize + 104)d, \ - descr=malloc_fixedsize_descr) - i0 = getfield_raw(p0, descr=tiddescr) - i1 = int_or(i0, 8765) - 
setfield_gc(p0, i1, descr=tiddescr) - setfield_gc(p0, 103, descr=blendescr) + p0 = call_malloc_gc(ConstClass(malloc_array), 1, \ + %(bdescr.tid)d, 103, \ + descr=malloc_array_descr) jump() """) @@ -437,11 +447,8 @@ jump() """, """ [p1] - p0 = call_malloc_gc(ConstClass(malloc_fixedsize), 104, \ - descr=malloc_fixedsize_descr) - i0 = getfield_raw(p0, descr=tiddescr) - i1 = int_or(i0, 9315) - setfield_gc(p0, i1, descr=tiddescr) + p0 = call_malloc_gc(ConstClass(malloc_big_fixedsize), 104, 9315, \ + descr=malloc_big_fixedsize_descr) setfield_gc(p0, ConstClass(o_vtable), descr=vtable_descr) jump() """) diff --git a/pypy/module/pypyjit/test_pypy_c/test_00_model.py b/pypy/module/pypyjit/test_pypy_c/test_00_model.py --- a/pypy/module/pypyjit/test_pypy_c/test_00_model.py +++ b/pypy/module/pypyjit/test_pypy_c/test_00_model.py @@ -60,6 +60,9 @@ stdout=subprocess.PIPE, stderr=subprocess.PIPE) stdout, stderr = pipe.communicate() + if getattr(pipe, 'returncode', 0) < 0: + raise IOError("subprocess was killed by signal %d" % ( + pipe.returncode,)) if stderr.startswith('SKIP:'): py.test.skip(stderr) if stderr.startswith('debug_alloc.h:'): # lldebug builds diff --git a/pypy/module/pypyjit/test_pypy_c/test_alloc.py b/pypy/module/pypyjit/test_pypy_c/test_alloc.py new file mode 100644 --- /dev/null +++ b/pypy/module/pypyjit/test_pypy_c/test_alloc.py @@ -0,0 +1,26 @@ +import py, sys +from pypy.module.pypyjit.test_pypy_c.test_00_model import BaseTestPyPyC + +class TestAlloc(BaseTestPyPyC): + + SIZES = dict.fromkeys([2 ** n for n in range(26)] + # up to 32MB + [2 ** n - 1 for n in range(26)]) + + def test_newstr_constant_size(self): + for size in TestAlloc.SIZES: + yield self.newstr_constant_size, size + + def newstr_constant_size(self, size): + src = """if 1: + N = %(size)d + part_a = 'a' * N + part_b = 'b' * N + for i in xrange(20): + ao = '%%s%%s' %% (part_a, part_b) + def main(): + return 42 +""" % {'size': size} + log = self.run(src, [], threshold=10) + assert log.result == 42 + loop, = log.loops_by_filename(self.filepath) + # assert did not crash diff --git a/pypy/rpython/memory/gc/minimark.py b/pypy/rpython/memory/gc/minimark.py --- a/pypy/rpython/memory/gc/minimark.py +++ b/pypy/rpython/memory/gc/minimark.py @@ -608,6 +608,11 @@ specified as 0 if the object is not varsized. The returned object is fully initialized and zero-filled.""" # + # Here we really need a valid 'typeid', not 0 (as the JIT might + # try to send us if there is still a bug). + ll_assert(bool(self.combine(typeid, 0)), + "external_malloc: typeid == 0") + # # Compute the total size, carefully checking for overflows. size_gc_header = self.gcheaderbuilder.size_gc_header nonvarsize = size_gc_header + self.fixed_size(typeid) From noreply at buildbot.pypy.org Tue Feb 28 04:25:28 2012 From: noreply at buildbot.pypy.org (justinpeel) Date: Tue, 28 Feb 2012 04:25:28 +0100 (CET) Subject: [pypy-commit] pypy default: merge in faster-str-decode-escape branch Message-ID: <20120228032528.1C231820B1@wyvern.cs.uni-duesseldorf.de> Author: Justin Peel Branch: Changeset: r52947:80e3d375b9ec Date: 2012-02-27 20:25 -0700 http://bitbucket.org/pypy/pypy/changeset/80e3d375b9ec/ Log: merge in faster-str-decode-escape branch diff --git a/pypy/interpreter/pyparser/parsestring.py b/pypy/interpreter/pyparser/parsestring.py --- a/pypy/interpreter/pyparser/parsestring.py +++ b/pypy/interpreter/pyparser/parsestring.py @@ -115,21 +115,24 @@ the string is UTF-8 encoded and should be re-encoded in the specified encoding. 
""" - lis = [] + from pypy.rlib.rstring import StringBuilder + builder = StringBuilder(len(s)) ps = 0 end = len(s) - while ps < end: - if s[ps] != '\\': - # note that the C code has a label here. - # the logic is the same. + while 1: + ps2 = ps + while ps < end and s[ps] != '\\': if recode_encoding and ord(s[ps]) & 0x80: w, ps = decode_utf8(space, s, ps, end, recode_encoding) - # Append bytes to output buffer. - lis.append(w) + builder.append(w) + ps2 = ps else: - lis.append(s[ps]) ps += 1 - continue + if ps > ps2: + builder.append_slice(s, ps2, ps) + if ps == end: + break + ps += 1 if ps == end: raise_app_valueerror(space, 'Trailing \\ in string') @@ -140,25 +143,25 @@ if ch == '\n': pass elif ch == '\\': - lis.append('\\') + builder.append('\\') elif ch == "'": - lis.append("'") + builder.append("'") elif ch == '"': - lis.append('"') + builder.append('"') elif ch == 'b': - lis.append("\010") + builder.append("\010") elif ch == 'f': - lis.append('\014') # FF + builder.append('\014') # FF elif ch == 't': - lis.append('\t') + builder.append('\t') elif ch == 'n': - lis.append('\n') + builder.append('\n') elif ch == 'r': - lis.append('\r') + builder.append('\r') elif ch == 'v': - lis.append('\013') # VT + builder.append('\013') # VT elif ch == 'a': - lis.append('\007') # BEL, not classic C + builder.append('\007') # BEL, not classic C elif ch in '01234567': # Look for up to two more octal digits span = ps @@ -168,13 +171,13 @@ # emulate a strange wrap-around behavior of CPython: # \400 is the same as \000 because 0400 == 256 num = int(octal, 8) & 0xFF - lis.append(chr(num)) + builder.append(chr(num)) ps = span elif ch == 'x': if ps+2 <= end and isxdigit(s[ps]) and isxdigit(s[ps + 1]): hexa = s[ps : ps + 2] num = int(hexa, 16) - lis.append(chr(num)) + builder.append(chr(num)) ps += 2 else: raise_app_valueerror(space, 'invalid \\x escape') @@ -184,13 +187,13 @@ # this was not an escape, so the backslash # has to be added, and we start over in # non-escape mode. - lis.append('\\') + builder.append('\\') ps -= 1 assert ps >= 0 continue # an arbitry number of unescaped UTF-8 bytes may follow. 
- buf = ''.join(lis) + buf = builder.build() return buf From noreply at buildbot.pypy.org Tue Feb 28 04:26:44 2012 From: noreply at buildbot.pypy.org (justinpeel) Date: Tue, 28 Feb 2012 04:26:44 +0100 (CET) Subject: [pypy-commit] pypy default: close branch Message-ID: <20120228032644.983C0820B1@wyvern.cs.uni-duesseldorf.de> Author: Justin Peel Branch: Changeset: r52948:034e2cd62f1c Date: 2012-02-27 20:26 -0700 http://bitbucket.org/pypy/pypy/changeset/034e2cd62f1c/ Log: close branch From noreply at buildbot.pypy.org Tue Feb 28 04:28:50 2012 From: noreply at buildbot.pypy.org (justinpeel) Date: Tue, 28 Feb 2012 04:28:50 +0100 (CET) Subject: [pypy-commit] pypy faster-str-decode-escape: close this branch Message-ID: <20120228032850.71E06820B1@wyvern.cs.uni-duesseldorf.de> Author: Justin Peel Branch: faster-str-decode-escape Changeset: r52949:6cd6773cb83a Date: 2012-02-27 20:28 -0700 http://bitbucket.org/pypy/pypy/changeset/6cd6773cb83a/ Log: close this branch From noreply at buildbot.pypy.org Tue Feb 28 04:37:55 2012 From: noreply at buildbot.pypy.org (justinpeel) Date: Tue, 28 Feb 2012 04:37:55 +0100 (CET) Subject: [pypy-commit] pypy default: move import outside of function Message-ID: <20120228033755.0D5BA820B1@wyvern.cs.uni-duesseldorf.de> Author: Justin Peel Branch: Changeset: r52950:1604c8f1d469 Date: 2012-02-27 20:37 -0700 http://bitbucket.org/pypy/pypy/changeset/1604c8f1d469/ Log: move import outside of function diff --git a/pypy/interpreter/pyparser/parsestring.py b/pypy/interpreter/pyparser/parsestring.py --- a/pypy/interpreter/pyparser/parsestring.py +++ b/pypy/interpreter/pyparser/parsestring.py @@ -109,13 +109,14 @@ result = "0" + result return result +from pypy.rlib.rstring import StringBuilder + def PyString_DecodeEscape(space, s, recode_encoding): """ Unescape a backslash-escaped string. If recode_encoding is non-zero, the string is UTF-8 encoded and should be re-encoded in the specified encoding. """ - from pypy.rlib.rstring import StringBuilder builder = StringBuilder(len(s)) ps = 0 end = len(s) From noreply at buildbot.pypy.org Tue Feb 28 05:09:39 2012 From: noreply at buildbot.pypy.org (justinpeel) Date: Tue, 28 Feb 2012 05:09:39 +0100 (CET) Subject: [pypy-commit] pypy default: move import to top... Message-ID: <20120228040939.BBBE1820B1@wyvern.cs.uni-duesseldorf.de> Author: Justin Peel Branch: Changeset: r52951:079551524b28 Date: 2012-02-27 21:09 -0700 http://bitbucket.org/pypy/pypy/changeset/079551524b28/ Log: move import to top... diff --git a/pypy/interpreter/pyparser/parsestring.py b/pypy/interpreter/pyparser/parsestring.py --- a/pypy/interpreter/pyparser/parsestring.py +++ b/pypy/interpreter/pyparser/parsestring.py @@ -1,5 +1,6 @@ from pypy.interpreter.error import OperationError from pypy.interpreter import unicodehelper +from pypy.rlib.rstring import StringBuilder def parsestr(space, encoding, s, unicode_literals=False): # compiler.transformer.Transformer.decode_literal depends on what @@ -109,8 +110,6 @@ result = "0" + result return result -from pypy.rlib.rstring import StringBuilder - def PyString_DecodeEscape(space, s, recode_encoding): """ Unescape a backslash-escaped string. 
If recode_encoding is non-zero, From noreply at buildbot.pypy.org Tue Feb 28 07:38:24 2012 From: noreply at buildbot.pypy.org (wlav) Date: Tue, 28 Feb 2012 07:38:24 +0100 (CET) Subject: [pypy-commit] pypy reflex-support: refactoring Message-ID: <20120228063824.A25638203C@wyvern.cs.uni-duesseldorf.de> Author: Wim Lavrijsen Branch: reflex-support Changeset: r52952:23a40fce87d8 Date: 2012-02-27 14:19 -0800 http://bitbucket.org/pypy/pypy/changeset/23a40fce87d8/ Log: refactoring diff --git a/pypy/module/cppyy/converter.py b/pypy/module/cppyy/converter.py --- a/pypy/module/cppyy/converter.py +++ b/pypy/module/cppyy/converter.py @@ -167,9 +167,8 @@ def convert_argument(self, space, w_obj, address): x = rffi.cast(self.rffiptype, address) x[0] = self._unwrap_object(space, w_obj) - typecode = rffi.cast(rffi.CCHARP, - capi.direct_ptradd(address, capi.c_function_arg_typeoffset())) - typecode[0] = self.typecode + ba = rffi.cast(rffi.CCHARP, address) + ba[capi.c_function_arg_typeoffset()] = self.typecode class VoidConverter(TypeConverter): @@ -360,9 +359,8 @@ x = rffi.cast(rffi.LONGP, address) arg = space.str_w(w_obj) x[0] = rffi.cast(rffi.LONG, rffi.str2charp(arg)) - typecode = rffi.cast(rffi.CCHARP, - capi.direct_ptradd(address, capi.c_function_arg_typeoffset())) - typecode[0] = 'o' + ba = rffi.cast(rffi.CCHARP, address) + ba[capi.c_function_arg_typeoffset()] = 'o' def from_memory(self, space, w_obj, w_type, offset): address = self._get_raw_address(space, w_obj, offset) @@ -379,9 +377,8 @@ def convert_argument(self, space, w_obj, address): x = rffi.cast(rffi.VOIDPP, address) x[0] = rffi.cast(rffi.VOIDP, get_rawobject(space, w_obj)) - typecode = rffi.cast(rffi.CCHARP, - capi.direct_ptradd(address, capi.c_function_arg_typeoffset())) - typecode[0] = 'a' + ba = rffi.cast(rffi.CCHARP, address) + ba[capi.c_function_arg_typeoffset()] = 'a' def convert_argument_libffi(self, space, w_obj, argchain): argchain.arg(get_rawobject(space, w_obj)) @@ -393,9 +390,8 @@ def convert_argument(self, space, w_obj, address): x = rffi.cast(rffi.VOIDPP, address) x[0] = rffi.cast(rffi.VOIDP, get_rawobject(space, w_obj)) - typecode = rffi.cast(rffi.CCHARP, - capi.direct_ptradd(address, capi.c_function_arg_typeoffset())) - typecode[0] = 'p' + ba = rffi.cast(rffi.CCHARP, address) + ba[capi.c_function_arg_typeoffset()] = 'p' class VoidPtrRefConverter(TypeConverter): @@ -404,9 +400,8 @@ def convert_argument(self, space, w_obj, address): x = rffi.cast(rffi.VOIDPP, address) x[0] = rffi.cast(rffi.VOIDP, get_rawobject(space, w_obj)) - typecode = rffi.cast(rffi.CCHARP, - capi.direct_ptradd(address, capi.c_function_arg_typeoffset())) - typecode[0] = 'r' + ba = rffi.cast(rffi.CCHARP, address) + ba[capi.c_function_arg_typeoffset()] = 'r' class ShortArrayConverter(ArrayTypeConverterMixin, TypeConverter): @@ -516,9 +511,8 @@ x = rffi.cast(rffi.VOIDPP, address) x[0] = rffi.cast(rffi.VOIDP, self._unwrap_object(space, w_obj)) address = rffi.cast(capi.C_OBJECT, address) - typecode = rffi.cast(rffi.CCHARP, - capi.direct_ptradd(address, capi.c_function_arg_typeoffset())) - typecode[0] = 'o' + ba = rffi.cast(rffi.CCHARP, address) + ba[capi.c_function_arg_typeoffset()] = 'o' def convert_argument_libffi(self, space, w_obj, argchain): argchain.arg(self._unwrap_object(space, w_obj)) From noreply at buildbot.pypy.org Tue Feb 28 07:38:26 2012 From: noreply at buildbot.pypy.org (wlav) Date: Tue, 28 Feb 2012 07:38:26 +0100 (CET) Subject: [pypy-commit] pypy reflex-support: probably no need for builtin vector with new TApplication enabled ... 
Message-ID: <20120228063826.39C758203C@wyvern.cs.uni-duesseldorf.de> Author: Wim Lavrijsen Branch: reflex-support Changeset: r52953:ebbf2f8ead62 Date: 2012-02-27 17:32 -0800 http://bitbucket.org/pypy/pypy/changeset/ebbf2f8ead62/ Log: probably no need for builtin vector with new TApplication enabled ... diff --git a/pypy/module/cppyy/test/stltypes_LinkDef.h b/pypy/module/cppyy/test/stltypes_LinkDef.h --- a/pypy/module/cppyy/test/stltypes_LinkDef.h +++ b/pypy/module/cppyy/test/stltypes_LinkDef.h @@ -4,9 +4,9 @@ #pragma link off all classes; #pragma link off all functions; -#pragma link C++ class std::vector; -#pragma link C++ class std::vector::iterator; -#pragma link C++ class std::vector::const_iterator; +//#pragma link C++ class std::vector; +//#pragma link C++ class std::vector::iterator; +//#pragma link C++ class std::vector::const_iterator; #pragma link C++ class std::vector; #pragma link C++ class std::vector::iterator; #pragma link C++ class std::vector::const_iterator; From noreply at buildbot.pypy.org Tue Feb 28 07:38:27 2012 From: noreply at buildbot.pypy.org (wlav) Date: Tue, 28 Feb 2012 07:38:27 +0100 (CET) Subject: [pypy-commit] pypy reflex-support: refactoring and cleanup (Reflex backend): the idea is to speed up the slow path by handing out the stub functions rather than method indices; it should also help with stability in the case of additional methods to a namespace (since the stubs don't move, whereas the indices could change) Message-ID: <20120228063827.775CA8203C@wyvern.cs.uni-duesseldorf.de> Author: Wim Lavrijsen Branch: reflex-support Changeset: r52954:e7c4a93f01f3 Date: 2012-02-27 17:34 -0800 http://bitbucket.org/pypy/pypy/changeset/e7c4a93f01f3/ Log: refactoring and cleanup (Reflex backend): the idea is to speed up the slow path by handing out the stub functions rather than method indices; it should also help with stability in the case of additional methods to a namespace (since the stubs don't move, whereas the indices could change) diff --git a/pypy/module/cppyy/capi/__init__.py b/pypy/module/cppyy/capi/__init__.py --- a/pypy/module/cppyy/capi/__init__.py +++ b/pypy/module/cppyy/capi/__init__.py @@ -9,11 +9,17 @@ _C_OPAQUE_PTR = rffi.LONG _C_OPAQUE_NULL = lltype.nullptr(rffi.LONGP.TO)# ALT: _C_OPAQUE_PTR.TO -C_TYPEHANDLE = _C_OPAQUE_PTR -C_NULL_TYPEHANDLE = rffi.cast(C_TYPEHANDLE, _C_OPAQUE_NULL) +C_SCOPE = _C_OPAQUE_PTR +C_NULL_SCOPE = rffi.cast(C_SCOPE, _C_OPAQUE_NULL) + +C_TYPE = C_SCOPE +C_NULL_TYPE = C_NULL_SCOPE + C_OBJECT = _C_OPAQUE_PTR C_NULL_OBJECT = rffi.cast(C_OBJECT, _C_OPAQUE_NULL) +C_METHOD = _C_OPAQUE_PTR + C_METHPTRGETTER = lltype.FuncType([C_OBJECT], rffi.VOIDP) C_METHPTRGETTER_PTR = lltype.Ptr(C_METHPTRGETTER) @@ -26,128 +32,85 @@ c_load_dictionary = backend.c_load_dictionary -c_get_typehandle = rffi.llexternal( - "cppyy_get_typehandle", - [rffi.CCHARP], C_OBJECT, +# name to opaque C++ scope representation ------------------------------------ +c_get_scope = rffi.llexternal( + "cppyy_get_scope", + [rffi.CCHARP], C_SCOPE, compilation_info=backend.eci) -c_get_templatehandle = rffi.llexternal( - "cppyy_get_templatehandle", - [rffi.CCHARP], C_TYPEHANDLE, +c_get_template = rffi.llexternal( + "cppyy_get_template", + [rffi.CCHARP], C_TYPE, compilation_info=backend.eci) +# memory management ---------------------------------------------------------- c_allocate = rffi.llexternal( "cppyy_allocate", - [C_TYPEHANDLE], C_OBJECT, + [C_TYPE], C_OBJECT, compilation_info=backend.eci) c_deallocate = rffi.llexternal( "cppyy_deallocate", - [C_TYPEHANDLE, C_OBJECT], 
lltype.Void, + [C_TYPE, C_OBJECT], lltype.Void, compilation_info=backend.eci) c_destruct = rffi.llexternal( "cppyy_destruct", - [C_TYPEHANDLE, C_OBJECT], lltype.Void, + [C_TYPE, C_OBJECT], lltype.Void, compilation_info=backend.eci) -c_is_namespace = rffi.llexternal( - "cppyy_is_namespace", - [C_TYPEHANDLE], rffi.INT, - compilation_info=backend.eci) -c_final_name = rffi.llexternal( - "cppyy_final_name", - [C_TYPEHANDLE], rffi.CCHARP, - compilation_info=backend.eci) - -c_has_complex_hierarchy = rffi.llexternal( - "cppyy_has_complex_hierarchy", - [C_TYPEHANDLE], rffi.INT, - compilation_info=backend.eci) -c_num_bases = rffi.llexternal( - "cppyy_num_bases", - [C_TYPEHANDLE], rffi.INT, - compilation_info=backend.eci) -c_base_name = rffi.llexternal( - "cppyy_base_name", - [C_TYPEHANDLE, rffi.INT], rffi.CCHARP, - compilation_info=backend.eci) - -_c_is_subtype = rffi.llexternal( - "cppyy_is_subtype", - [C_TYPEHANDLE, C_TYPEHANDLE], rffi.INT, - compilation_info=backend.eci, - elidable_function=True) - - at jit.elidable_promote() -def c_is_subtype(td, tb): - if td == tb: - return 1 - return _c_is_subtype(td, tb) - -_c_base_offset = rffi.llexternal( - "cppyy_base_offset", - [C_TYPEHANDLE, C_TYPEHANDLE, C_OBJECT], rffi.SIZE_T, - compilation_info=backend.eci, - elidable_function=True) - - at jit.elidable_promote() -def c_base_offset(td, tb, address): - if td == tb: - return 0 - return _c_base_offset(td, tb, address) - +# method/function dispatching ------------------------------------------------ c_call_v = rffi.llexternal( "cppyy_call_v", - [C_TYPEHANDLE, rffi.INT, C_OBJECT, rffi.INT, rffi.VOIDP], lltype.Void, - compilation_info=backend.eci) -c_call_o = rffi.llexternal( - "cppyy_call_o", - [C_TYPEHANDLE, rffi.INT, C_OBJECT, rffi.INT, rffi.VOIDP, C_TYPEHANDLE], rffi.LONG, + [C_METHOD, C_OBJECT, rffi.INT, rffi.VOIDP], lltype.Void, compilation_info=backend.eci) c_call_b = rffi.llexternal( "cppyy_call_b", - [C_TYPEHANDLE, rffi.INT, C_OBJECT, rffi.INT, rffi.VOIDP], rffi.INT, + [C_METHOD, C_OBJECT, rffi.INT, rffi.VOIDP], rffi.INT, compilation_info=backend.eci) c_call_c = rffi.llexternal( "cppyy_call_c", - [C_TYPEHANDLE, rffi.INT, C_OBJECT, rffi.INT, rffi.VOIDP], rffi.CHAR, + [C_METHOD, C_OBJECT, rffi.INT, rffi.VOIDP], rffi.CHAR, compilation_info=backend.eci) c_call_h = rffi.llexternal( "cppyy_call_h", - [C_TYPEHANDLE, rffi.INT, C_OBJECT, rffi.INT, rffi.VOIDP], rffi.SHORT, + [C_METHOD, C_OBJECT, rffi.INT, rffi.VOIDP], rffi.SHORT, compilation_info=backend.eci) c_call_i = rffi.llexternal( "cppyy_call_i", - [C_TYPEHANDLE, rffi.INT, C_OBJECT, rffi.INT, rffi.VOIDP], rffi.INT, + [C_METHOD, C_OBJECT, rffi.INT, rffi.VOIDP], rffi.INT, compilation_info=backend.eci) - c_call_l = rffi.llexternal( "cppyy_call_l", - [C_TYPEHANDLE, rffi.INT, C_OBJECT, rffi.INT, rffi.VOIDP], rffi.LONG, + [C_METHOD, C_OBJECT, rffi.INT, rffi.VOIDP], rffi.LONG, compilation_info=backend.eci) c_call_f = rffi.llexternal( "cppyy_call_f", - [C_TYPEHANDLE, rffi.INT, C_OBJECT, rffi.INT, rffi.VOIDP], rffi.DOUBLE, + [C_METHOD, C_OBJECT, rffi.INT, rffi.VOIDP], rffi.DOUBLE, compilation_info=backend.eci) c_call_d = rffi.llexternal( "cppyy_call_d", - [C_TYPEHANDLE, rffi.INT, C_OBJECT, rffi.INT, rffi.VOIDP], rffi.DOUBLE, + [C_METHOD, C_OBJECT, rffi.INT, rffi.VOIDP], rffi.DOUBLE, compilation_info=backend.eci) c_call_r = rffi.llexternal( "cppyy_call_r", - [C_TYPEHANDLE, rffi.INT, C_OBJECT, rffi.INT, rffi.VOIDP], rffi.VOIDP, + [C_METHOD, C_OBJECT, rffi.INT, rffi.VOIDP], rffi.VOIDP, + compilation_info=backend.eci) +c_call_s = rffi.llexternal( + "cppyy_call_s", + 
[C_METHOD, C_OBJECT, rffi.INT, rffi.VOIDP], rffi.CCHARP, compilation_info=backend.eci) -c_call_s = rffi.llexternal( - "cppyy_call_s", - [C_TYPEHANDLE, rffi.INT, C_OBJECT, rffi.INT, rffi.VOIDP], rffi.CCHARP, +c_call_o = rffi.llexternal( + "cppyy_call_o", + [C_METHOD, C_OBJECT, rffi.INT, rffi.VOIDP, C_TYPE], rffi.LONG, compilation_info=backend.eci) c_get_methptr_getter = rffi.llexternal( "cppyy_get_methptr_getter", - [C_TYPEHANDLE, rffi.INT], C_METHPTRGETTER_PTR, + [C_SCOPE, rffi.INT], C_METHPTRGETTER_PTR, compilation_info=backend.eci, elidable_function=True) +# handling of function argument buffer --------------------------------------- c_allocate_function_args = rffi.llexternal( "cppyy_allocate_function_args", [rffi.SIZE_T], rffi.VOIDP, @@ -167,80 +130,136 @@ compilation_info=backend.eci, elidable_function=True) +# scope reflection information ----------------------------------------------- +c_is_namespace = rffi.llexternal( + "cppyy_is_namespace", + [C_SCOPE], rffi.INT, + compilation_info=backend.eci) + +# type/class reflection information ------------------------------------------ +c_final_name = rffi.llexternal( + "cppyy_final_name", + [C_TYPE], rffi.CCHARP, + compilation_info=backend.eci) +c_has_complex_hierarchy = rffi.llexternal( + "cppyy_has_complex_hierarchy", + [C_TYPE], rffi.INT, + compilation_info=backend.eci) +c_num_bases = rffi.llexternal( + "cppyy_num_bases", + [C_TYPE], rffi.INT, + compilation_info=backend.eci) +c_base_name = rffi.llexternal( + "cppyy_base_name", + [C_TYPE, rffi.INT], rffi.CCHARP, + compilation_info=backend.eci) + +_c_is_subtype = rffi.llexternal( + "cppyy_is_subtype", + [C_TYPE, C_TYPE], rffi.INT, + compilation_info=backend.eci, + elidable_function=True) + + at jit.elidable_promote() +def c_is_subtype(derived, base): + if derived == base: + return 1 + return _c_is_subtype(derived, base) + +_c_base_offset = rffi.llexternal( + "cppyy_base_offset", + [C_TYPE, C_TYPE, C_OBJECT], rffi.SIZE_T, + compilation_info=backend.eci, + elidable_function=True) + + at jit.elidable_promote() +def c_base_offset(derived, base, address): + if derived == base: + return 0 + return _c_base_offset(derived, base, address) + +# method/function reflection information ------------------------------------- c_num_methods = rffi.llexternal( "cppyy_num_methods", - [C_TYPEHANDLE], rffi.INT, + [C_SCOPE], rffi.INT, compilation_info=backend.eci) c_method_name = rffi.llexternal( "cppyy_method_name", - [C_TYPEHANDLE, rffi.INT], rffi.CCHARP, + [C_SCOPE, rffi.INT], rffi.CCHARP, compilation_info=backend.eci) c_method_result_type = rffi.llexternal( "cppyy_method_result_type", - [C_TYPEHANDLE, rffi.INT], rffi.CCHARP, + [C_SCOPE, rffi.INT], rffi.CCHARP, compilation_info=backend.eci) c_method_num_args = rffi.llexternal( "cppyy_method_num_args", - [C_TYPEHANDLE, rffi.INT], rffi.INT, + [C_SCOPE, rffi.INT], rffi.INT, compilation_info=backend.eci) c_method_req_args = rffi.llexternal( "cppyy_method_req_args", - [C_TYPEHANDLE, rffi.INT], rffi.INT, + [C_SCOPE, rffi.INT], rffi.INT, compilation_info=backend.eci) c_method_arg_type = rffi.llexternal( "cppyy_method_arg_type", - [C_TYPEHANDLE, rffi.INT, rffi.INT], rffi.CCHARP, + [C_SCOPE, rffi.INT, rffi.INT], rffi.CCHARP, compilation_info=backend.eci) c_method_arg_default = rffi.llexternal( "cppyy_method_arg_default", - [C_TYPEHANDLE, rffi.INT, rffi.INT], rffi.CCHARP, + [C_SCOPE, rffi.INT, rffi.INT], rffi.CCHARP, compilation_info=backend.eci) +c_get_method = rffi.llexternal( + "cppyy_get_method", + [C_SCOPE, rffi.INT], C_METHOD, + compilation_info=backend.eci) + +# 
method properties ---------------------------------------------------------- c_is_constructor = rffi.llexternal( "cppyy_is_constructor", - [C_TYPEHANDLE, rffi.INT], rffi.INT, + [C_TYPE, rffi.INT], rffi.INT, compilation_info=backend.eci) c_is_staticmethod = rffi.llexternal( "cppyy_is_staticmethod", - [C_TYPEHANDLE, rffi.INT], rffi.INT, + [C_TYPE, rffi.INT], rffi.INT, compilation_info=backend.eci) +# data member reflection information ----------------------------------------- c_num_data_members = rffi.llexternal( "cppyy_num_data_members", - [C_TYPEHANDLE], rffi.INT, + [C_SCOPE], rffi.INT, compilation_info=backend.eci) c_data_member_name = rffi.llexternal( "cppyy_data_member_name", - [C_TYPEHANDLE, rffi.INT], rffi.CCHARP, + [C_SCOPE, rffi.INT], rffi.CCHARP, compilation_info=backend.eci) c_data_member_type = rffi.llexternal( "cppyy_data_member_type", - [C_TYPEHANDLE, rffi.INT], rffi.CCHARP, + [C_SCOPE, rffi.INT], rffi.CCHARP, compilation_info=backend.eci) c_data_member_offset = rffi.llexternal( "cppyy_data_member_offset", - [C_TYPEHANDLE, rffi.INT], rffi.SIZE_T, + [C_SCOPE, rffi.INT], rffi.SIZE_T, compilation_info=backend.eci) +# data member properties ----------------------------------------------------- c_is_publicdata = rffi.llexternal( "cppyy_is_publicdata", - [C_TYPEHANDLE, rffi.INT], rffi.INT, + [C_SCOPE, rffi.INT], rffi.INT, compilation_info=backend.eci) c_is_staticdata = rffi.llexternal( "cppyy_is_staticdata", - [C_TYPEHANDLE, rffi.INT], rffi.INT, + [C_SCOPE, rffi.INT], rffi.INT, compilation_info=backend.eci) +# misc helpers --------------------------------------------------------------- c_strtoll = rffi.llexternal( "cppyy_strtoll", [rffi.CCHARP], rffi.LONGLONG, compilation_info=backend.eci) - c_strtoull = rffi.llexternal( "cppyy_strtoull", [rffi.CCHARP], rffi.ULONGLONG, compilation_info=backend.eci) - c_free = rffi.llexternal( "cppyy_free", [rffi.VOIDP], lltype.Void, @@ -256,12 +275,10 @@ "cppyy_charp2stdstring", [rffi.CCHARP], C_OBJECT, compilation_info=backend.eci) - c_stdstring2stdstring = rffi.llexternal( "cppyy_stdstring2stdstring", [C_OBJECT], C_OBJECT, compilation_info=backend.eci) - c_free_stdstring = rffi.llexternal( "cppyy_free_stdstring", [C_OBJECT], lltype.Void, diff --git a/pypy/module/cppyy/executor.py b/pypy/module/cppyy/executor.py --- a/pypy/module/cppyy/executor.py +++ b/pypy/module/cppyy/executor.py @@ -18,9 +18,8 @@ def __init__(self, space, name, cpptype): self.name = name - def execute(self, space, w_returntype, func, cppthis, num_args, args): - rtype = capi.charp2str_free(capi.c_method_result_type(func.cpptype.handle, func.method_index)) - raise TypeError('return type not available or supported ("%s")' % rtype) + def execute(self, space, w_returntype, cppmethod, cppthis, num_args, args): + raise TypeError('return type not available or supported ("%s")' % self.name) def execute_libffi(self, space, w_returntype, libffifunc, argchain): from pypy.module.cppyy.interp_cppyy import FastCallNotPossible @@ -31,10 +30,10 @@ _immutable_ = True typecode = 'P' - def execute(self, space, w_returntype, func, cppthis, num_args, args): + def execute(self, space, w_returntype, cppmethod, cppthis, num_args, args): if hasattr(space, "fake"): raise NotImplementedError - lresult = capi.c_call_l(func.cpptype.handle, func.method_index, cppthis, num_args, args) + lresult = capi.c_call_l(cppmethod, cppthis, num_args, args) address = rffi.cast(rffi.ULONG, lresult) arr = space.interp_w(W_Array, unpack_simple_shape(space, space.wrap(self.typecode))) return arr.fromaddress(space, address, 
sys.maxint) @@ -44,8 +43,8 @@ _immutable_ = True libffitype = libffi.types.void - def execute(self, space, w_returntype, func, cppthis, num_args, args): - capi.c_call_v(func.cpptype.handle, func.method_index, cppthis, num_args, args) + def execute(self, space, w_returntype, cppmethod, cppthis, num_args, args): + capi.c_call_v(cppmethod, cppthis, num_args, args) return space.w_None def execute_libffi(self, space, w_returntype, libffifunc, argchain): @@ -57,8 +56,8 @@ _immutable_ = True libffitype = libffi.types.schar - def execute(self, space, w_returntype, func, cppthis, num_args, args): - result = capi.c_call_b(func.cpptype.handle, func.method_index, cppthis, num_args, args) + def execute(self, space, w_returntype, cppmethod, cppthis, num_args, args): + result = capi.c_call_b(cppmethod, cppthis, num_args, args) return space.wrap(result) def execute_libffi(self, space, w_returntype, libffifunc, argchain): @@ -69,8 +68,8 @@ _immutable_ = True libffitype = libffi.types.schar - def execute(self, space, w_returntype, func, cppthis, num_args, args): - result = capi.c_call_c(func.cpptype.handle, func.method_index, cppthis, num_args, args) + def execute(self, space, w_returntype, cppmethod, cppthis, num_args, args): + result = capi.c_call_c(cppmethod, cppthis, num_args, args) return space.wrap(result) def execute_libffi(self, space, w_returntype, libffifunc, argchain): @@ -81,8 +80,8 @@ _immutable_ = True libffitype = libffi.types.sshort - def execute(self, space, w_returntype, func, cppthis, num_args, args): - result = capi.c_call_h(func.cpptype.handle, func.method_index, cppthis, num_args, args) + def execute(self, space, w_returntype, cppmethod, cppthis, num_args, args): + result = capi.c_call_h(cppmethod, cppthis, num_args, args) return space.wrap(result) def execute_libffi(self, space, w_returntype, libffifunc, argchain): @@ -96,8 +95,8 @@ def _wrap_result(self, space, result): return space.wrap(result) - def execute(self, space, w_returntype, func, cppthis, num_args, args): - result = capi.c_call_i(func.cpptype.handle, func.method_index, cppthis, num_args, args) + def execute(self, space, w_returntype, cppmethod, cppthis, num_args, args): + result = capi.c_call_i(cppmethod, cppthis, num_args, args) return self._wrap_result(space, result) def execute_libffi(self, space, w_returntype, libffifunc, argchain): @@ -111,8 +110,8 @@ def _wrap_result(self, space, result): return space.wrap(rffi.cast(rffi.UINT, result)) - def execute(self, space, w_returntype, func, cppthis, num_args, args): - result = capi.c_call_l(func.cpptype.handle, func.method_index, cppthis, num_args, args) + def execute(self, space, w_returntype, cppmethod, cppthis, num_args, args): + result = capi.c_call_l(cppmethod, cppthis, num_args, args) return self._wrap_result(space, result) def execute_libffi(self, space, w_returntype, libffifunc, argchain): @@ -126,8 +125,8 @@ def _wrap_result(self, space, result): return space.wrap(result) - def execute(self, space, w_returntype, func, cppthis, num_args, args): - result = capi.c_call_l(func.cpptype.handle, func.method_index, cppthis, num_args, args) + def execute(self, space, w_returntype, cppmethod, cppthis, num_args, args): + result = capi.c_call_l(cppmethod, cppthis, num_args, args) return self._wrap_result(space, result) def execute_libffi(self, space, w_returntype, libffifunc, argchain): @@ -153,8 +152,8 @@ intptr = rffi.cast(rffi.INTP, result) return space.wrap(intptr[0]) - def execute(self, space, w_returntype, func, cppthis, num_args, args): - result = 
capi.c_call_r(func.cpptype.handle, func.method_index, cppthis, num_args, args) + def execute(self, space, w_returntype, cppmethod, cppthis, num_args, args): + result = capi.c_call_r(cppmethod, cppthis, num_args, args) return self._wrap_result(space, result) def execute_libffi(self, space, w_returntype, libffifunc, argchain): @@ -177,8 +176,8 @@ _immutable_ = True libffitype = libffi.types.float - def execute(self, space, w_returntype, func, cppthis, num_args, args): - result = capi.c_call_f(func.cpptype.handle, func.method_index, cppthis, num_args, args) + def execute(self, space, w_returntype, cppmethod, cppthis, num_args, args): + result = capi.c_call_f(cppmethod, cppthis, num_args, args) return space.wrap(float(result)) def execute_libffi(self, space, w_returntype, libffifunc, argchain): @@ -189,8 +188,8 @@ _immutable_ = True libffitype = libffi.types.double - def execute(self, space, w_returntype, func, cppthis, num_args, args): - result = capi.c_call_d(func.cpptype.handle, func.method_index, cppthis, num_args, args) + def execute(self, space, w_returntype, cppmethod, cppthis, num_args, args): + result = capi.c_call_d(cppmethod, cppthis, num_args, args) return space.wrap(result) def execute_libffi(self, space, w_returntype, libffifunc, argchain): @@ -201,8 +200,8 @@ class CStringExecutor(FunctionExecutor): _immutable_ = True - def execute(self, space, w_returntype, func, cppthis, num_args, args): - lresult = capi.c_call_l(func.cpptype.handle, func.method_index, cppthis, num_args, args) + def execute(self, space, w_returntype, cppmethod, cppthis, num_args, args): + lresult = capi.c_call_l(cppmethod, cppthis, num_args, args) ccpresult = rffi.cast(rffi.CCHARP, lresult) result = rffi.charp2str(ccpresult) # TODO: make it a choice to free return space.wrap(result) @@ -245,10 +244,9 @@ FunctionExecutor.__init__(self, space, name, cpptype) self.cpptype = cpptype - def execute(self, space, w_returntype, func, cppthis, num_args, args): + def execute(self, space, w_returntype, cppmethod, cppthis, num_args, args): from pypy.module.cppyy import interp_cppyy - long_result = capi.c_call_l( - func.cpptype.handle, func.method_index, cppthis, num_args, args) + long_result = capi.c_call_l(cppmethod, cppthis, num_args, args) ptr_result = rffi.cast(capi.C_OBJECT, long_result) return interp_cppyy.new_instance(space, w_returntype, self.cpptype, ptr_result, False) @@ -261,10 +259,9 @@ class InstanceExecutor(InstancePtrExecutor): _immutable_ = True - def execute(self, space, w_returntype, func, cppthis, num_args, args): + def execute(self, space, w_returntype, cppmethod, cppthis, num_args, args): from pypy.module.cppyy import interp_cppyy - long_result = capi.c_call_o( - func.cpptype.handle, func.method_index, cppthis, num_args, args, self.cpptype.handle) + long_result = capi.c_call_o(cppmethod, cppthis, num_args, args, self.cpptype.handle) ptr_result = rffi.cast(capi.C_OBJECT, long_result) return interp_cppyy.new_instance(space, w_returntype, self.cpptype, ptr_result, True) @@ -276,8 +273,8 @@ class StdStringExecutor(InstancePtrExecutor): _immutable_ = True - def execute(self, space, w_returntype, func, cppthis, num_args, args): - charp_result = capi.c_call_s(func.cpptype.handle, func.method_index, cppthis, num_args, args) + def execute(self, space, w_returntype, cppmethod, cppthis, num_args, args): + charp_result = capi.c_call_s(cppmethod, cppthis, num_args, args) return space.wrap(capi.charp2str_free(charp_result)) def execute_libffi(self, space, w_returntype, libffifunc, argchain): diff --git 
a/pypy/module/cppyy/helper.py b/pypy/module/cppyy/helper.py --- a/pypy/module/cppyy/helper.py +++ b/pypy/module/cppyy/helper.py @@ -99,7 +99,7 @@ # is put at the end only as it is unlikely and may trigger unwanted # errors in class loaders in the backend, because a typical operator # name is illegal as a class name) - handle = capi.c_get_typehandle(op) + handle = capi.c_get_scope(op) if handle: op = capi.charp2str_free(capi.c_final_name(handle)) diff --git a/pypy/module/cppyy/include/capi.h b/pypy/module/cppyy/include/capi.h --- a/pypy/module/cppyy/include/capi.h +++ b/pypy/module/cppyy/include/capi.h @@ -7,76 +7,81 @@ extern "C" { #endif // ifdef __cplusplus - typedef void* cppyy_typehandle_t; - typedef void* cppyy_object_t; + typedef long cppyy_scope_t; + typedef cppyy_scope_t cppyy_type_t; + typedef long cppyy_object_t; + typedef long cppyy_method_t; typedef void* (*cppyy_methptrgetter_t)(cppyy_object_t); - /* name to handle */ - cppyy_typehandle_t cppyy_get_typehandle(const char* class_name); - cppyy_typehandle_t cppyy_get_templatehandle(const char* template_name); + /* name to opaque C++ scope representation -------------------------------- */ + cppyy_scope_t cppyy_get_scope(const char* scope_name); + cppyy_type_t cppyy_get_template(const char* template_name); - /* memory management */ - void* cppyy_allocate(cppyy_typehandle_t handle); - void cppyy_deallocate(cppyy_typehandle_t handle, cppyy_object_t instance); - void cppyy_destruct(cppyy_typehandle_t handle, cppyy_object_t self); + /* memory management ------------------------------------------------------ */ + cppyy_object_t cppyy_allocate(cppyy_type_t type); + void cppyy_deallocate(cppyy_type_t type, cppyy_object_t self); + void cppyy_destruct(cppyy_type_t type, cppyy_object_t self); - /* method/function dispatching */ - void cppyy_call_v(cppyy_typehandle_t handle, int method_index, cppyy_object_t self, int numargs, void* args); - long cppyy_call_o(cppyy_typehandle_t handle, int method_index, cppyy_object_t self, int numargs, void* args, cppyy_typehandle_t rettype); - int cppyy_call_b(cppyy_typehandle_t handle, int method_index, cppyy_object_t self, int numargs, void* args); - char cppyy_call_c(cppyy_typehandle_t handle, int method_index, cppyy_object_t self, int numargs, void* args); - short cppyy_call_h(cppyy_typehandle_t handle, int method_index, cppyy_object_t self, int numargs, void* args); - int cppyy_call_i(cppyy_typehandle_t handle, int method_index, cppyy_object_t self, int numargs, void* args); - long cppyy_call_l(cppyy_typehandle_t handle, int method_index, cppyy_object_t self, int numargs, void* args); - double cppyy_call_f(cppyy_typehandle_t handle, int method_index, cppyy_object_t self, int numargs, void* args); - double cppyy_call_d(cppyy_typehandle_t handle, int method_index, cppyy_object_t self, int numargs, void* args); + /* method/function dispatching -------------------------------------------- */ + void cppyy_call_v(cppyy_method_t method, cppyy_object_t self, int nargs, void* args); + int cppyy_call_b(cppyy_method_t method, cppyy_object_t self, int nargs, void* args); + char cppyy_call_c(cppyy_method_t method, cppyy_object_t self, int nargs, void* args); + short cppyy_call_h(cppyy_method_t method, cppyy_object_t self, int nargs, void* args); + int cppyy_call_i(cppyy_method_t method, cppyy_object_t self, int nargs, void* args); + long cppyy_call_l(cppyy_method_t method, cppyy_object_t self, int nargs, void* args); + double cppyy_call_f(cppyy_method_t method, cppyy_object_t self, int nargs, void* args); + double 
cppyy_call_d(cppyy_method_t method, cppyy_object_t self, int nargs, void* args); - void* cppyy_call_r(cppyy_typehandle_t handle, int method_index, cppyy_object_t self, int numargs, void* args); - char* cppyy_call_s(cppyy_typehandle_t handle, int method_index, cppyy_object_t self, int numargs, void* args); + void* cppyy_call_r(cppyy_method_t method, cppyy_object_t self, int nargs, void* args); + char* cppyy_call_s(cppyy_method_t method, cppyy_object_t self, int nargs, void* args); - cppyy_methptrgetter_t cppyy_get_methptr_getter(cppyy_typehandle_t handle, int method_index); + cppyy_object_t cppyy_call_o(cppyy_method_t method, cppyy_object_t self, int nargs, void* args, cppyy_type_t result_type); - /* handling of function argument buffer */ + cppyy_methptrgetter_t cppyy_get_methptr_getter(cppyy_scope_t scope, int method_index); + + /* handling of function argument buffer ----------------------------------- */ void* cppyy_allocate_function_args(size_t nargs); void cppyy_deallocate_function_args(void* args); size_t cppyy_function_arg_sizeof(); size_t cppyy_function_arg_typeoffset(); /* scope reflection information ------------------------------------------- */ - int cppyy_is_namespace(cppyy_typehandle_t handle); + int cppyy_is_namespace(cppyy_scope_t scope); - /* type/class reflection information -------------------------------------- */ - char* cppyy_final_name(cppyy_typehandle_t handle); - int cppyy_has_complex_hierarchy(cppyy_typehandle_t handle); - int cppyy_num_bases(cppyy_typehandle_t handle); - char* cppyy_base_name(cppyy_typehandle_t handle, int base_index); - int cppyy_is_subtype(cppyy_typehandle_t dh, cppyy_typehandle_t bh); - size_t cppyy_base_offset(cppyy_typehandle_t dh, cppyy_typehandle_t bh, cppyy_object_t address); + /* class reflection information ------------------------------------------- */ + char* cppyy_final_name(cppyy_type_t type); + int cppyy_has_complex_hierarchy(cppyy_type_t type); + int cppyy_num_bases(cppyy_type_t type); + char* cppyy_base_name(cppyy_type_t type, int base_index); + int cppyy_is_subtype(cppyy_type_t derived, cppyy_type_t base); + size_t cppyy_base_offset(cppyy_type_t derived, cppyy_type_t base, cppyy_object_t address); - /* method/function reflection information */ - int cppyy_num_methods(cppyy_typehandle_t handle); - char* cppyy_method_name(cppyy_typehandle_t handle, int method_index); - char* cppyy_method_result_type(cppyy_typehandle_t handle, int method_index); - int cppyy_method_num_args(cppyy_typehandle_t handle, int method_index); - int cppyy_method_req_args(cppyy_typehandle_t handle, int method_index); - char* cppyy_method_arg_type(cppyy_typehandle_t handle, int method_index, int index); - char* cppyy_method_arg_default(cppyy_typehandle_t handle, int method_index, int index); + /* method/function reflection information --------------------------------- */ + int cppyy_num_methods(cppyy_scope_t scope); + char* cppyy_method_name(cppyy_scope_t scope, int method_index); + char* cppyy_method_result_type(cppyy_scope_t scope, int method_index); + int cppyy_method_num_args(cppyy_scope_t scope, int method_index); + int cppyy_method_req_args(cppyy_scope_t scope, int method_index); + char* cppyy_method_arg_type(cppyy_scope_t scope, int method_index, int index); + char* cppyy_method_arg_default(cppyy_scope_t scope, int method_index, int index); - /* method properties */ - int cppyy_is_constructor(cppyy_typehandle_t handle, int method_index); - int cppyy_is_staticmethod(cppyy_typehandle_t handle, int method_index); + cppyy_method_t 
cppyy_get_method(cppyy_scope_t scope, int method_index); - /* data member reflection information */ - int cppyy_num_data_members(cppyy_typehandle_t handle); - char* cppyy_data_member_name(cppyy_typehandle_t handle, int data_member_index); - char* cppyy_data_member_type(cppyy_typehandle_t handle, int data_member_index); - size_t cppyy_data_member_offset(cppyy_typehandle_t handle, int data_member_index); + /* method properties ----------------------------------------------------- */ + int cppyy_is_constructor(cppyy_type_t type, int method_index); + int cppyy_is_staticmethod(cppyy_type_t type, int method_index); - /* data member properties */ - int cppyy_is_publicdata(cppyy_typehandle_t handle, int data_member_index); - int cppyy_is_staticdata(cppyy_typehandle_t handle, int data_member_index); + /* data member reflection information ------------------------------------ */ + int cppyy_num_data_members(cppyy_scope_t scope); + char* cppyy_data_member_name(cppyy_scope_t scope, int data_member_index); + char* cppyy_data_member_type(cppyy_scope_t scope, int data_member_index); + size_t cppyy_data_member_offset(cppyy_scope_t scope, int data_member_index); - /* misc helpers */ + /* data member properties ------------------------------------------------ */ + int cppyy_is_publicdata(cppyy_type_t type, int data_member_index); + int cppyy_is_staticdata(cppyy_type_t type, int data_member_index); + + /* misc helpers ----------------------------------------------------------- */ void cppyy_free(void* ptr); long long cppyy_strtoll(const char* str); unsigned long long cppyy_strtuoll(const char* str); diff --git a/pypy/module/cppyy/interp_cppyy.py b/pypy/module/cppyy/interp_cppyy.py --- a/pypy/module/cppyy/interp_cppyy.py +++ b/pypy/module/cppyy/interp_cppyy.py @@ -27,36 +27,39 @@ class State(object): def __init__(self, space): - self.cpptype_cache = { - "void" : W_CPPType(space, "void", capi.C_NULL_TYPEHANDLE) } - self.cpptemplatetype_cache = {} + self.r_cppscope_cache = { + "void" : W_CPPType(space, "void", capi.C_NULL_TYPE) } + self.r_cpptemplate_cache = {} @unwrap_spec(name=str) def type_byname(space, name): state = space.fromcache(State) try: - return state.cpptype_cache[name] + return state.r_cppscope_cache[name] except KeyError: pass - handle = capi.c_get_typehandle(name) - assert lltype.typeOf(handle) == capi.C_TYPEHANDLE - if handle: - final_name = capi.charp2str_free(capi.c_final_name(handle)) - if capi.c_is_namespace(handle): - cpptype = W_CPPNamespace(space, final_name, handle) - elif capi.c_has_complex_hierarchy(handle): - cpptype = W_ComplexCPPType(space, final_name, handle) + cppscope = capi.c_get_scope(name) + assert lltype.typeOf(cppscope) == capi.C_SCOPE + if cppscope: + final_name = capi.charp2str_free(capi.c_final_name(cppscope)) + if capi.c_is_namespace(cppscope): + r_cppscope = W_CPPNamespace(space, final_name, cppscope) + elif capi.c_has_complex_hierarchy(cppscope): + r_cppscope = W_ComplexCPPType(space, final_name, cppscope) else: - cpptype = W_CPPType(space, final_name, handle) - state.cpptype_cache[name] = cpptype + r_cppscope = W_CPPType(space, final_name, cppscope) + state.r_cppscope_cache[name] = r_cppscope + # prevent getting reflection info that may be linked in through the + # back-end libs or that may be available through an auto-loader, during + # translation time (else it will get translated, too) if space.config.translating and not objectmodel.we_are_translated(): - return cpptype + return r_cppscope - cpptype._find_methods() - cpptype._find_data_members() - return cpptype + 
r_cppscope._find_methods() + r_cppscope._find_data_members() + return r_cppscope return None @@ -64,16 +67,16 @@ def template_byname(space, name): state = space.fromcache(State) try: - return state.cpptemplatetype_cache[name] + return state.r_cpptemplate_cache[name] except KeyError: pass - handle = capi.c_get_templatehandle(name) - assert lltype.typeOf(handle) == capi.C_TYPEHANDLE - if handle: - template = W_CPPTemplateType(space, name, handle) - state.cpptype_cache[name] = template - return template + cpptemplate = capi.c_get_template(name) + assert lltype.typeOf(cpptemplate) == capi.C_TYPE + if cpptemplate: + r_cpptemplate = W_CPPTemplateType(space, name, cpptemplate) + state.r_cpptemplate_cache[name] = r_cpptemplate + return r_cpptemplate return None @@ -96,9 +99,10 @@ _immutable_ = True def __init__(self, cpptype, method_index, result_type, arg_defs, args_required): + self.space = cpptype.space self.cpptype = cpptype - self.space = cpptype.space self.method_index = method_index + self.cppmethod = capi.c_get_method(self.cpptype.handle, method_index) self.arg_defs = arg_defs self.args_required = args_required self.executor = executor.get_executor(self.space, result_type) @@ -129,7 +133,7 @@ args = self.prepare_arguments(args_w) try: - return self.executor.execute(self.space, w_type, self, cppthis, len(args_w), args) + return self.executor.execute(self.space, w_type, self.cppmethod, cppthis, len(args_w), args) finally: self.free_arguments(args, len(args_w)) @@ -229,7 +233,7 @@ def __init__(self, space, scope_handle, func_name, functions): self.space = space - assert lltype.typeOf(scope_handle) == capi.C_TYPEHANDLE + assert lltype.typeOf(scope_handle) == capi.C_SCOPE self.scope_handle = scope_handle self.func_name = func_name self.functions = debug.make_sure_not_resized(functions) @@ -279,7 +283,7 @@ def __init__(self, space, scope_handle, type_name, offset, is_static): self.space = space - assert lltype.typeOf(scope_handle) == capi.C_TYPEHANDLE + assert lltype.typeOf(scope_handle) == capi.C_SCOPE self.scope_handle = scope_handle self.converter = converter.get_converter(self.space, type_name, '') self.offset = offset @@ -341,7 +345,7 @@ def __init__(self, space, name, handle): self.space = space self.name = name - assert lltype.typeOf(handle) == capi.C_TYPEHANDLE + assert lltype.typeOf(handle) == capi.C_SCOPE self.handle = handle self.methods = {} # Do not call "self._find_methods()" here, so that a distinction can @@ -539,7 +543,7 @@ def __init__(self, space, name, handle): self.space = space self.name = name - assert lltype.typeOf(handle) == capi.C_TYPEHANDLE + assert lltype.typeOf(handle) == capi.C_TYPE self.handle = handle def __call__(self, args_w): diff --git a/pypy/module/cppyy/src/reflexcwrapper.cxx b/pypy/module/cppyy/src/reflexcwrapper.cxx --- a/pypy/module/cppyy/src/reflexcwrapper.cxx +++ b/pypy/module/cppyy/src/reflexcwrapper.cxx @@ -1,6 +1,7 @@ #include "cppyy.h" #include "reflexcwrapper.h" +#include "Reflex/Kernel.h" #include "Reflex/Type.h" #include "Reflex/Base.h" #include "Reflex/Member.h" @@ -13,6 +14,7 @@ #include #include +#include #include @@ -23,18 +25,18 @@ return name_char; } -static inline Reflex::Scope scope_from_handle(cppyy_typehandle_t handle) { +static inline Reflex::Scope scope_from_handle(cppyy_type_t handle) { return Reflex::Scope((Reflex::ScopeName*)handle); } -static inline Reflex::Type type_from_handle(cppyy_typehandle_t handle) { +static inline Reflex::Type type_from_handle(cppyy_type_t handle) { return Reflex::Scope((Reflex::ScopeName*)handle); } 
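The point of the log message, handing out the stub function itself instead of a (scope, method index) pair, shows up on the PyPy side of this same changeset: the index-to-method lookup now happens once, when CPPMethod.__init__ calls capi.c_get_method(), and every later dispatch goes through a plain capi.c_call_* on the cached cppyy_method_t, with no per-call FunctionMemberAt()/Invoke() resolution left on the C++ side. A minimal sketch of that caching pattern, assuming only the capi bindings introduced in this diff (CachedMethod is a made-up name standing in for CPPMethod):

    from pypy.module.cppyy import capi

    class CachedMethod(object):
        def __init__(self, scope_handle, method_index):
            # resolved once: opaque stub pointer for (scope, index)
            self.cppmethod = capi.c_get_method(scope_handle, method_index)

        def call_double(self, cppthis, num_args, args):
            # hot path: dispatch straight through the cached stub,
            # 'args' being the prepared argument buffer as elsewhere
            return capi.c_call_d(self.cppmethod, cppthis, num_args, args)
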
-static inline std::vector build_args(int numargs, void* args) { +static inline std::vector build_args(int nargs, void* args) { std::vector arguments; - arguments.reserve(numargs); - for (int i=0; i < numargs; ++i) { + arguments.reserve(nargs); + for (int i=0; i < nargs; ++i) { char tc = ((CPPYY_G__value*)args)[i].type; if (tc != 'a' && tc != 'o') arguments.push_back(&((CPPYY_G__value*)args)[i]); @@ -45,136 +47,100 @@ } -/* name to handle --------------------------------------------------------- */ -cppyy_typehandle_t cppyy_get_typehandle(const char* class_name) { - Reflex::Scope s = Reflex::Scope::ByName(class_name); - return (cppyy_typehandle_t)s.Id(); +/* name to opaque C++ scope representation -------------------------------- */ +cppyy_scope_t cppyy_get_scope(const char* scope_name) { + Reflex::Scope s = Reflex::Scope::ByName(scope_name); + return (cppyy_type_t)s.Id(); } -cppyy_typehandle_t cppyy_get_templatehandle(const char* template_name) { +cppyy_type_t cppyy_get_template(const char* template_name) { Reflex::TypeTemplate tt = Reflex::TypeTemplate::ByName(template_name); - return (cppyy_typehandle_t)tt.Id(); + return (cppyy_type_t)tt.Id(); } /* memory management ------------------------------------------------------ */ -void* cppyy_allocate(cppyy_typehandle_t handle) { +cppyy_object_t cppyy_allocate(cppyy_type_t handle) { Reflex::Type t = type_from_handle(handle); - return t.Allocate(); + return (cppyy_object_t)t.Allocate(); } -void cppyy_deallocate(cppyy_typehandle_t handle, cppyy_object_t instance) { +void cppyy_deallocate(cppyy_type_t handle, cppyy_object_t instance) { Reflex::Type t = type_from_handle(handle); t.Deallocate((void*)instance); } -void cppyy_destruct(cppyy_typehandle_t handle, cppyy_object_t self) { +void cppyy_destruct(cppyy_type_t handle, cppyy_object_t self) { Reflex::Type t = type_from_handle(handle); t.Destruct((void*)self, true); } /* method/function dispatching -------------------------------------------- */ -void cppyy_call_v(cppyy_typehandle_t handle, int method_index, - cppyy_object_t self, int numargs, void* args) { - std::vector arguments = build_args(numargs, args); - Reflex::Scope s = scope_from_handle(handle); - Reflex::Member m = s.FunctionMemberAt(method_index); - if (self) { - Reflex::Object o((Reflex::Type)s, (void*)self); - m.Invoke(o, 0, arguments); - } else { - m.Invoke(0, arguments); - } -} - -long cppyy_call_o(cppyy_typehandle_t handle, int method_index, - cppyy_object_t self, int numargs, void* args, - cppyy_typehandle_t rettype) { - void* result = cppyy_allocate(rettype); - std::vector arguments = build_args(numargs, args); - Reflex::Scope s = scope_from_handle(handle); - Reflex::Member m = s.FunctionMemberAt(method_index); - if (self) { - Reflex::Object o((Reflex::Type)s, (void*)self); - m.Invoke(o, *((long*)result), arguments); - } else { - m.Invoke(*((long*)result), arguments); - } - return (long)result; +void cppyy_call_v(cppyy_method_t method, cppyy_object_t self, int nargs, void* args) { + std::vector arguments = build_args(nargs, args); + Reflex::StubFunction stub = (Reflex::StubFunction)method; + stub(NULL /* return address */, (void*)self, arguments, NULL /* stub context */); } template -static inline T cppyy_call_T(cppyy_typehandle_t handle, int method_index, - cppyy_object_t self, int numargs, void* args) { +static inline T cppyy_call_T(cppyy_method_t method, cppyy_object_t self, int nargs, void* args) { T result; - std::vector arguments = build_args(numargs, args); - Reflex::Scope s = scope_from_handle(handle); - Reflex::Member 
m = s.FunctionMemberAt(method_index); - if (self) { - Reflex::Object o((Reflex::Type)s, (void*)self); - m.Invoke(o, result, arguments); - } else { - m.Invoke(result, arguments); - } + std::vector arguments = build_args(nargs, args); + Reflex::StubFunction stub = (Reflex::StubFunction)method; + stub(&result, (void*)self, arguments, NULL /* stub context */); return result; } -int cppyy_call_b(cppyy_typehandle_t handle, int method_index, - cppyy_object_t self, int numargs, void* args) { - return (int)cppyy_call_T(handle, method_index, self, numargs, args); +int cppyy_call_b(cppyy_method_t method, cppyy_object_t self, int nargs, void* args) { + return (int)cppyy_call_T(method, self, nargs, args); } -char cppyy_call_c(cppyy_typehandle_t handle, int method_index, - cppyy_object_t self, int numargs, void* args) { - return cppyy_call_T(handle, method_index, self, numargs, args); +char cppyy_call_c(cppyy_method_t method, cppyy_object_t self, int nargs, void* args) { + return cppyy_call_T(method, self, nargs, args); } -short cppyy_call_h(cppyy_typehandle_t handle, int method_index, - cppyy_object_t self, int numargs, void* args) { - return cppyy_call_T(handle, method_index, self, numargs, args); +short cppyy_call_h(cppyy_method_t method, cppyy_object_t self, int nargs, void* args) { + return cppyy_call_T(method, self, nargs, args); } -int cppyy_call_i(cppyy_typehandle_t handle, int method_index, - cppyy_object_t self, int numargs, void* args) { - return cppyy_call_T(handle, method_index, self, numargs, args); +int cppyy_call_i(cppyy_method_t method, cppyy_object_t self, int nargs, void* args) { + return cppyy_call_T(method, self, nargs, args); } -long cppyy_call_l(cppyy_typehandle_t handle, int method_index, - cppyy_object_t self, int numargs, void* args) { - return cppyy_call_T(handle, method_index, self, numargs, args); +long cppyy_call_l(cppyy_method_t method, cppyy_object_t self, int nargs, void* args) { + return cppyy_call_T(method, self, nargs, args); } -double cppyy_call_f(cppyy_typehandle_t handle, int method_index, - cppyy_object_t self, int numargs, void* args) { - return cppyy_call_T(handle, method_index, self, numargs, args); +double cppyy_call_f(cppyy_method_t method, cppyy_object_t self, int nargs, void* args) { + return cppyy_call_T(method, self, nargs, args); } -double cppyy_call_d(cppyy_typehandle_t handle, int method_index, - cppyy_object_t self, int numargs, void* args) { - return cppyy_call_T(handle, method_index, self, numargs, args); +double cppyy_call_d(cppyy_method_t method, cppyy_object_t self, int nargs, void* args) { + return cppyy_call_T(method, self, nargs, args); } -void* cppyy_call_r(cppyy_typehandle_t handle, int method_index, - cppyy_object_t self, int numargs, void* args) { - return (void*)cppyy_call_T(handle, method_index, self, numargs, args); +void* cppyy_call_r(cppyy_method_t method, cppyy_object_t self, int nargs, void* args) { + return (void*)cppyy_call_T(method, self, nargs, args); } -char* cppyy_call_s(cppyy_typehandle_t handle, int method_index, - cppyy_object_t self, int numargs, void* args) { +char* cppyy_call_s(cppyy_method_t method, cppyy_object_t self, int nargs, void* args) { std::string result(""); - std::vector arguments = build_args(numargs, args); - Reflex::Scope s = scope_from_handle(handle); - Reflex::Member m = s.FunctionMemberAt(method_index); - if (self) { - Reflex::Object o((Reflex::Type)s, (void*)self); - m.Invoke(o, result, arguments); - } else { - m.Invoke(result, arguments); - } + std::vector arguments = build_args(nargs, args); + 
Reflex::StubFunction stub = (Reflex::StubFunction)method; + stub(&result, (void*)self, arguments, NULL /* stub context */); return cppstring_to_cstring(result); } +cppyy_object_t cppyy_call_o(cppyy_method_t method, cppyy_object_t self, int nargs, void* args, + cppyy_type_t result_type) { + void* result = (void*)cppyy_allocate(result_type); + std::vector arguments = build_args(nargs, args); + Reflex::StubFunction stub = (Reflex::StubFunction)method; + stub(result, (void*)self, arguments, NULL /* stub context */); + return (cppyy_object_t)result; +} + static cppyy_methptrgetter_t get_methptr_getter(Reflex::Member m) { Reflex::PropertyList plist = m.Properties(); if (plist.HasProperty("MethPtrGetter")) { @@ -184,7 +150,7 @@ return 0; } -cppyy_methptrgetter_t cppyy_get_methptr_getter(cppyy_typehandle_t handle, int method_index) { +cppyy_methptrgetter_t cppyy_get_methptr_getter(cppyy_type_t handle, int method_index) { Reflex::Scope s = scope_from_handle(handle); Reflex::Member m = s.FunctionMemberAt(method_index); return get_methptr_getter(m); @@ -213,14 +179,14 @@ /* scope reflection information ------------------------------------------- */ -int cppyy_is_namespace(cppyy_typehandle_t handle) { +int cppyy_is_namespace(cppyy_scope_t handle) { Reflex::Scope s = scope_from_handle(handle); return s.IsNamespace(); } -/* type/class reflection information -------------------------------------- */ -char* cppyy_final_name(cppyy_typehandle_t handle) { +/* class reflection information ------------------------------------------- */ +char* cppyy_final_name(cppyy_type_t handle) { Reflex::Scope s = scope_from_handle(handle); std::string name = s.Name(Reflex::FINAL); return cppstring_to_cstring(name); @@ -245,36 +211,36 @@ return is_complex; } -int cppyy_has_complex_hierarchy(cppyy_typehandle_t handle) { +int cppyy_has_complex_hierarchy(cppyy_type_t handle) { Reflex::Type t = type_from_handle(handle); return cppyy_has_complex_hierarchy(t); } -int cppyy_num_bases(cppyy_typehandle_t handle) { +int cppyy_num_bases(cppyy_type_t handle) { Reflex::Type t = type_from_handle(handle); return t.BaseSize(); } -char* cppyy_base_name(cppyy_typehandle_t handle, int base_index) { +char* cppyy_base_name(cppyy_type_t handle, int base_index) { Reflex::Type t = type_from_handle(handle); Reflex::Base b = t.BaseAt(base_index); std::string name = b.Name(Reflex::FINAL|Reflex::SCOPED); return cppstring_to_cstring(name); } -int cppyy_is_subtype(cppyy_typehandle_t dh, cppyy_typehandle_t bh) { - Reflex::Type td = type_from_handle(dh); - Reflex::Type tb = type_from_handle(bh); - return (int)td.HasBase(tb); +int cppyy_is_subtype(cppyy_type_t derived_handle, cppyy_type_t base_handle) { + Reflex::Type derived_type = type_from_handle(derived_handle); + Reflex::Type base_type = type_from_handle(base_handle); + return (int)derived_type.HasBase(base_type); } -size_t cppyy_base_offset(cppyy_typehandle_t dh, cppyy_typehandle_t bh, cppyy_object_t address) { - Reflex::Type td = type_from_handle(dh); - Reflex::Type tb = type_from_handle(bh); +size_t cppyy_base_offset(cppyy_type_t derived_handle, cppyy_type_t base_handle, cppyy_object_t address) { + Reflex::Type derived_type = type_from_handle(derived_handle); + Reflex::Type base_type = type_from_handle(base_handle); // when dealing with virtual inheritance the only (reasonably) well-defined info is // in a Reflex internal base table, that contains all offsets within the hierarchy - Reflex::Member getbases = td.FunctionMemberByName( + Reflex::Member getbases = derived_type.FunctionMemberByName( 
"__getBasesTable", Reflex::Type(), 0, Reflex::INHERITEDMEMBERS_NO, Reflex::DELAYEDLOAD_OFF); if (getbases) { typedef std::vector > Bases_t; @@ -283,7 +249,7 @@ getbases.Invoke(&bases_holder); for (Bases_t::iterator ibase = bases->begin(); ibase != bases->end(); ++ibase) { - if (ibase->first.ToType() == tb) + if (ibase->first.ToType() == base_type) return (size_t)ibase->first.Offset((void*)address); } @@ -296,12 +262,12 @@ /* method/function reflection information --------------------------------- */ -int cppyy_num_methods(cppyy_typehandle_t handle) { +int cppyy_num_methods(cppyy_scope_t handle) { Reflex::Scope s = scope_from_handle(handle); return s.FunctionMemberSize(); } -char* cppyy_method_name(cppyy_typehandle_t handle, int method_index) { +char* cppyy_method_name(cppyy_scope_t handle, int method_index) { Reflex::Scope s = scope_from_handle(handle); Reflex::Member m = s.FunctionMemberAt(method_index); std::string name; @@ -312,7 +278,7 @@ return cppstring_to_cstring(name); } -char* cppyy_method_result_type(cppyy_typehandle_t handle, int method_index) { +char* cppyy_method_result_type(cppyy_scope_t handle, int method_index) { Reflex::Scope s = scope_from_handle(handle); Reflex::Member m = s.FunctionMemberAt(method_index); Reflex::Type rt = m.TypeOf().ReturnType(); @@ -320,19 +286,19 @@ return cppstring_to_cstring(name); } -int cppyy_method_num_args(cppyy_typehandle_t handle, int method_index) { +int cppyy_method_num_args(cppyy_scope_t handle, int method_index) { Reflex::Scope s = scope_from_handle(handle); Reflex::Member m = s.FunctionMemberAt(method_index); return m.FunctionParameterSize(); } -int cppyy_method_req_args(cppyy_typehandle_t handle, int method_index) { +int cppyy_method_req_args(cppyy_scope_t handle, int method_index) { Reflex::Scope s = scope_from_handle(handle); Reflex::Member m = s.FunctionMemberAt(method_index); return m.FunctionParameterSize(true); } -char* cppyy_method_arg_type(cppyy_typehandle_t handle, int method_index, int arg_index) { +char* cppyy_method_arg_type(cppyy_scope_t handle, int method_index, int arg_index) { Reflex::Scope s = scope_from_handle(handle); Reflex::Member m = s.FunctionMemberAt(method_index); Reflex::Type at = m.TypeOf().FunctionParameterAt(arg_index); @@ -340,21 +306,29 @@ return cppstring_to_cstring(name); } -char* cppyy_method_arg_default(cppyy_typehandle_t handle, int method_index, int arg_index) { +char* cppyy_method_arg_default(cppyy_scope_t handle, int method_index, int arg_index) { Reflex::Scope s = scope_from_handle(handle); Reflex::Member m = s.FunctionMemberAt(method_index); std::string dflt = m.FunctionParameterDefaultAt(arg_index); return cppstring_to_cstring(dflt); } +cppyy_method_t cppyy_get_method(cppyy_scope_t handle, int method_index) { + Reflex::Scope s = scope_from_handle(handle); + Reflex::Member m = s.FunctionMemberAt(method_index); + assert(m.IsFunctionMember()); + return (cppyy_method_t)m.Stubfunction(); +} -int cppyy_is_constructor(cppyy_typehandle_t handle, int method_index) { + +/* method properties ----------------------------------------------------- */ +int cppyy_is_constructor(cppyy_type_t handle, int method_index) { Reflex::Scope s = scope_from_handle(handle); Reflex::Member m = s.FunctionMemberAt(method_index); return m.IsConstructor(); } -int cppyy_is_staticmethod(cppyy_typehandle_t handle, int method_index) { +int cppyy_is_staticmethod(cppyy_type_t handle, int method_index) { Reflex::Scope s = scope_from_handle(handle); Reflex::Member m = s.FunctionMemberAt(method_index); return m.IsStatic(); @@ -362,39 
+336,39 @@ /* data member reflection information ------------------------------------- */ -int cppyy_num_data_members(cppyy_typehandle_t handle) { +int cppyy_num_data_members(cppyy_scope_t handle) { Reflex::Scope s = scope_from_handle(handle); return s.DataMemberSize(); } -char* cppyy_data_member_name(cppyy_typehandle_t handle, int data_member_index) { +char* cppyy_data_member_name(cppyy_scope_t handle, int data_member_index) { Reflex::Scope s = scope_from_handle(handle); Reflex::Member m = s.DataMemberAt(data_member_index); std::string name = m.Name(); return cppstring_to_cstring(name); } -char* cppyy_data_member_type(cppyy_typehandle_t handle, int data_member_index) { +char* cppyy_data_member_type(cppyy_scope_t handle, int data_member_index) { Reflex::Scope s = scope_from_handle(handle); Reflex::Member m = s.DataMemberAt(data_member_index); std::string name = m.TypeOf().Name(Reflex::FINAL|Reflex::SCOPED|Reflex::QUALIFIED); return cppstring_to_cstring(name); } -size_t cppyy_data_member_offset(cppyy_typehandle_t handle, int data_member_index) { +size_t cppyy_data_member_offset(cppyy_scope_t handle, int data_member_index) { Reflex::Scope s = scope_from_handle(handle); Reflex::Member m = s.DataMemberAt(data_member_index); return m.Offset(); } -int cppyy_is_publicdata(cppyy_typehandle_t handle, int data_member_index) { +int cppyy_is_publicdata(cppyy_scope_t handle, int data_member_index) { Reflex::Scope s = scope_from_handle(handle); Reflex::Member m = s.DataMemberAt(data_member_index); return m.IsPublic(); } -int cppyy_is_staticdata(cppyy_typehandle_t handle, int data_member_index) { +int cppyy_is_staticdata(cppyy_scope_t handle, int data_member_index) { Reflex::Scope s = scope_from_handle(handle); Reflex::Member m = s.DataMemberAt(data_member_index); return m.IsStatic(); From noreply at buildbot.pypy.org Tue Feb 28 07:38:28 2012 From: noreply at buildbot.pypy.org (wlav) Date: Tue, 28 Feb 2012 07:38:28 +0100 (CET) Subject: [pypy-commit] pypy reflex-support: like for Reflex backend, now refactoring to get access to stubs for the CINT backend Message-ID: <20120228063828.AA3AD8203C@wyvern.cs.uni-duesseldorf.de> Author: Wim Lavrijsen Branch: reflex-support Changeset: r52955:21d4b7fc4514 Date: 2012-02-27 18:13 -0800 http://bitbucket.org/pypy/pypy/changeset/21d4b7fc4514/ Log: like for Reflex backend, now refactoring to get access to stubs for the CINT backend diff --git a/pypy/module/cppyy/src/cintcwrapper.cxx b/pypy/module/cppyy/src/cintcwrapper.cxx --- a/pypy/module/cppyy/src/cintcwrapper.cxx +++ b/pypy/module/cppyy/src/cintcwrapper.cxx @@ -118,11 +118,11 @@ return cppstring_to_cstring(true_name); } -static inline TClassRef type_from_handle(cppyy_typehandle_t handle) { +static inline TClassRef type_from_handle(cppyy_type_t handle) { return g_classrefs[(ClassRefs_t::size_type)handle]; } -static inline TFunction* type_get_method(cppyy_typehandle_t handle, int method_index) { +static inline TFunction* type_get_method(cppyy_type_t handle, int method_index) { TClassRef cr = type_from_handle(handle); if (cr.GetClass()) return (TFunction*)cr->GetListOfMethods()->At(method_index); @@ -148,168 +148,133 @@ } -/* name to handle --------------------------------------------------------- */ -cppyy_typehandle_t cppyy_get_typehandle(const char* class_name) { - ClassRefIndices_t::iterator icr = g_classref_indices.find(class_name); +/* name to opaque C++ scope representation -------------------------------- */ +cppyy_scope_t cppyy_get_scope(const char* scope_name) { + ClassRefIndices_t::iterator icr = 
g_classref_indices.find(scope_name); if (icr != g_classref_indices.end()) - return (cppyy_typehandle_t)icr->second; + return (cppyy_type_t)icr->second; // use TClass directly, to enable auto-loading - TClassRef cr(TClass::GetClass(class_name, kTRUE, kTRUE)); + TClassRef cr(TClass::GetClass(scope_name, kTRUE, kTRUE)); if (!cr.GetClass()) - return (cppyy_typehandle_t)NULL; + return (cppyy_type_t)NULL; if (!cr->GetClassInfo()) - return (cppyy_typehandle_t)NULL; + return (cppyy_type_t)NULL; - if (!G__TypeInfo(class_name).IsValid()) - return (cppyy_typehandle_t)NULL; + if (!G__TypeInfo(scope_name).IsValid()) + return (cppyy_type_t)NULL; ClassRefs_t::size_type sz = g_classrefs.size(); - g_classref_indices[class_name] = sz; - g_classrefs.push_back(TClassRef(class_name)); - return (cppyy_typehandle_t)sz; + g_classref_indices[scope_name] = sz; + g_classrefs.push_back(TClassRef(scope_name)); + return (cppyy_scope_t)sz; } -cppyy_typehandle_t cppyy_get_templatehandle(const char* template_name) { +cppyy_type_t cppyy_get_template(const char* template_name) { ClassRefIndices_t::iterator icr = g_classref_indices.find(template_name); if (icr != g_classref_indices.end()) - return (cppyy_typehandle_t)icr->second; + return (cppyy_type_t)icr->second; if (!G__defined_templateclass((char*)template_name)) - return (cppyy_typehandle_t)NULL; + return (cppyy_type_t)NULL; // the following yields a dummy TClassRef, but its name can be queried ClassRefs_t::size_type sz = g_classrefs.size(); g_classref_indices[template_name] = sz; g_classrefs.push_back(TClassRef(template_name)); - return (cppyy_typehandle_t)sz; + return (cppyy_type_t)sz; } /* memory management ------------------------------------------------------ */ -void* cppyy_allocate(cppyy_typehandle_t handle) { +cppyy_object_t cppyy_allocate(cppyy_type_t handle) { TClassRef cr = type_from_handle(handle); - return malloc(cr->Size()); + return (cppyy_object_t)malloc(cr->Size()); } -void cppyy_deallocate(cppyy_typehandle_t /*handle*/, cppyy_object_t instance) { +void cppyy_deallocate(cppyy_type_t /*handle*/, cppyy_object_t instance) { free((void*)instance); } -void cppyy_destruct(cppyy_typehandle_t handle, cppyy_object_t self) { +void cppyy_destruct(cppyy_type_t handle, cppyy_object_t self) { TClassRef cr = type_from_handle(handle); cr->Destructor((void*)self, true); } /* method/function dispatching -------------------------------------------- */ -static inline G__value cppyy_call_T(cppyy_typehandle_t handle, - int method_index, cppyy_object_t self, int numargs, void* args) { +static inline G__value cppyy_call_T(cppyy_method_t method, + cppyy_object_t self, int nargs, void* args) { - if ((long)handle != GLOBAL_HANDLE) { - TClassRef cr = type_from_handle(handle); - assert(method_index < cr->GetListOfMethods()->GetSize()); - TMethod* m = (TMethod*)cr->GetListOfMethods()->At(method_index); + G__InterfaceMethod meth = (G__InterfaceMethod)method; + G__param* libp = (G__param*)((char*)args - offsetof(G__param, para)); + assert(libp->paran == nargs); + fixup_args(libp); - G__InterfaceMethod meth = (G__InterfaceMethod)m->InterfaceMethod(); - G__param* libp = (G__param*)((char*)args - offsetof(G__param, para)); - assert(libp->paran == numargs); - fixup_args(libp); - - // TODO: access to store_struct_offset won't work on Windows + // TODO: access to store_struct_offset won't work on Windows + long store_struct_offset = G__store_struct_offset; + if (self) { G__setgvp((long)self); - long store_struct_offset = G__store_struct_offset; G__store_struct_offset = (long)self; - - 
G__value result; - G__setnull(&result); - meth(&result, 0, libp, 0); - - G__store_struct_offset = store_struct_offset; - return result; } - // global function - assert(method_index < (int)g_globalfuncs.size()); - TFunction* f = g_globalfuncs[method_index]; - - G__InterfaceMethod func = (G__InterfaceMethod)f->InterfaceMethod(); - G__param* libp = (G__param*)((char*)args - offsetof(G__param, para)); - assert(libp->paran == numargs); - fixup_args(libp); - G__value result; G__setnull(&result); - func(&result, 0, libp, 0); + meth(&result, 0, libp, 0); + + if (self) + G__store_struct_offset = store_struct_offset; return result; } -long cppyy_call_o(cppyy_typehandle_t handle, int method_index, - cppyy_object_t self, int numargs, void* args, - cppyy_typehandle_t) { - G__value result = cppyy_call_T(handle, method_index, self, numargs, args); - G__pop_tempobject_nodel(); +void cppyy_call_v(cppyy_method_t method, cppyy_object_t self, int nargs, void* args) { + cppyy_call_T(method, self, nargs, args); +} + +int cppyy_call_b(cppyy_method_t method, cppyy_object_t self, int nargs, void* args) { + G__value result = cppyy_call_T(method, self, nargs, args); + return (bool)G__int(result); +} + +char cppyy_call_c(cppyy_method_t method, cppyy_object_t self, int nargs, void* args) { + G__value result = cppyy_call_T(method, self, nargs, args); + return (char)G__int(result); +} + +short cppyy_call_h(cppyy_method_t method, cppyy_object_t self, int nargs, void* args) { + G__value result = cppyy_call_T(method, self, nargs, args); + return (short)G__int(result); +} + +int cppyy_call_i(cppyy_method_t method, cppyy_object_t self, int nargs, void* args) { + G__value result = cppyy_call_T(method, self, nargs, args); + return (int)G__int(result); +} + +long cppyy_call_l(cppyy_method_t method, cppyy_object_t self, int nargs, void* args) { + G__value result = cppyy_call_T(method, self, nargs, args); return G__int(result); } -void cppyy_call_v(cppyy_typehandle_t handle, int method_index, - cppyy_object_t self, int numargs, void* args) { - cppyy_call_T(handle, method_index, self, numargs, args); -} - -int cppyy_call_b(cppyy_typehandle_t handle, int method_index, - cppyy_object_t self, int numargs, void* args) { - G__value result = cppyy_call_T(handle, method_index, self, numargs, args); - return (bool)G__int(result); -} - -char cppyy_call_c(cppyy_typehandle_t handle, int method_index, - cppyy_object_t self, int numargs, void* args) { - G__value result = cppyy_call_T(handle, method_index, self, numargs, args); - return (char)G__int(result); -} - -short cppyy_call_h(cppyy_typehandle_t handle, int method_index, - cppyy_object_t self, int numargs, void* args) { - G__value result = cppyy_call_T(handle, method_index, self, numargs, args); - return (short)G__int(result); -} - -int cppyy_call_i(cppyy_typehandle_t handle, int method_index, - cppyy_object_t self, int numargs, void* args) { - G__value result = cppyy_call_T(handle, method_index, self, numargs, args); - return (int)G__int(result); -} - -long cppyy_call_l(cppyy_typehandle_t handle, int method_index, - cppyy_object_t self, int numargs, void* args) { - G__value result = cppyy_call_T(handle, method_index, self, numargs, args); - return G__int(result); -} - -double cppyy_call_f(cppyy_typehandle_t handle, int method_index, - cppyy_object_t self, int numargs, void* args) { - G__value result = cppyy_call_T(handle, method_index, self, numargs, args); +double cppyy_call_f(cppyy_method_t method, cppyy_object_t self, int nargs, void* args) { + G__value result = cppyy_call_T(method, 
self, nargs, args); return G__double(result); } -double cppyy_call_d(cppyy_typehandle_t handle, int method_index, - cppyy_object_t self, int numargs, void* args) { - G__value result = cppyy_call_T(handle, method_index, self, numargs, args); +double cppyy_call_d(cppyy_method_t method, cppyy_object_t self, int nargs, void* args) { + G__value result = cppyy_call_T(method, self, nargs, args); return G__double(result); } -void* cppyy_call_r(cppyy_typehandle_t handle, int method_index, - cppyy_object_t self, int numargs, void* args) { - G__value result = cppyy_call_T(handle, method_index, self, numargs, args); +void* cppyy_call_r(cppyy_method_t method, cppyy_object_t self, int nargs, void* args) { + G__value result = cppyy_call_T(method, self, nargs, args); return (void*)result.ref; } -char* cppyy_call_s(cppyy_typehandle_t handle, int method_index, - cppyy_object_t self, int numargs, void* args) { - G__value result = cppyy_call_T(handle, method_index, self, numargs, args); +char* cppyy_call_s(cppyy_method_t method, cppyy_object_t self, int nargs, void* args) { + G__value result = cppyy_call_T(method, self, nargs, args); G__pop_tempobject_nodel(); if (result.ref && *(long*)result.ref) { char* charp = cppstring_to_cstring(*(std::string*)result.ref); @@ -319,7 +284,14 @@ return cppstring_to_cstring(""); } -cppyy_methptrgetter_t cppyy_get_methptr_getter(cppyy_typehandle_t /*handle*/, int /*method_index*/) { +cppyy_object_t cppyy_call_o(cppyy_type_t method, cppyy_object_t self, int nargs, void* args, + cppyy_type_t /*result_type*/ ) { + G__value result = cppyy_call_T(method, self, nargs, args); + G__pop_tempobject_nodel(); + return G__int(result); +} + +cppyy_methptrgetter_t cppyy_get_methptr_getter(cppyy_type_t /*handle*/, int /*method_index*/) { return (cppyy_methptrgetter_t)NULL; } @@ -349,7 +321,7 @@ /* scope reflection information ------------------------------------------- */ -int cppyy_is_namespace(cppyy_typehandle_t handle) { +int cppyy_is_namespace(cppyy_scope_t handle) { TClassRef cr = type_from_handle(handle); if (cr.GetClass() && cr->GetClassInfo()) return cr->Property() & G__BIT_ISNAMESPACE; @@ -360,7 +332,7 @@ /* type/class reflection information -------------------------------------- */ -char* cppyy_final_name(cppyy_typehandle_t handle) { +char* cppyy_final_name(cppyy_type_t handle) { TClassRef cr = type_from_handle(handle); if (cr.GetClass() && cr->GetClassInfo()) { std::string true_name = G__TypeInfo(cr->GetName()).TrueName(); @@ -372,53 +344,53 @@ return cppstring_to_cstring(cr.GetClassName()); } -int cppyy_has_complex_hierarchy(cppyy_typehandle_t handle) { +int cppyy_has_complex_hierarchy(cppyy_type_t handle) { // as long as no fast path is supported for CINT, calculating offsets (which // are cached by the JIT) is not going to hurt return 1; } -int cppyy_num_bases(cppyy_typehandle_t handle) { +int cppyy_num_bases(cppyy_type_t handle) { TClassRef cr = type_from_handle(handle); if (cr.GetClass() && cr->GetListOfBases() != 0) return cr->GetListOfBases()->GetSize(); return 0; } -char* cppyy_base_name(cppyy_typehandle_t handle, int base_index) { +char* cppyy_base_name(cppyy_type_t handle, int base_index) { TClassRef cr = type_from_handle(handle); TBaseClass* b = (TBaseClass*)cr->GetListOfBases()->At(base_index); return type_cppstring_to_cstring(b->GetName()); } -int cppyy_is_subtype(cppyy_typehandle_t dh, cppyy_typehandle_t bh) { - TClassRef crd = type_from_handle(dh); - TClassRef crb = type_from_handle(bh); - return crd->GetBaseClass(crb) != 0; +int cppyy_is_subtype(cppyy_type_t 
derived_handle, cppyy_type_t base_handle) { + TClassRef derived_type = type_from_handle(derived_handle); + TClassRef base_type = type_from_handle(base_handle); + return derived_type->GetBaseClass(base_type) != 0; } -size_t cppyy_base_offset(cppyy_typehandle_t dh, cppyy_typehandle_t bh, cppyy_object_t address) { - TClassRef crd = type_from_handle(dh); - TClassRef crb = type_from_handle(bh); +size_t cppyy_base_offset(cppyy_type_t derived_handle, cppyy_type_t base_handle, cppyy_object_t address) { + TClassRef derived_type = type_from_handle(derived_handle); + TClassRef base_type = type_from_handle(base_handle); size_t offset = 0; - if (crd && crb) { - G__ClassInfo* bci = (G__ClassInfo*)crb->GetClassInfo(); - G__ClassInfo* dci = (G__ClassInfo*)crd->GetClassInfo(); + if (derived_type && base_type) { + G__ClassInfo* base_ci = (G__ClassInfo*)base_type->GetClassInfo(); + G__ClassInfo* derived_ci = (G__ClassInfo*)derived_type->GetClassInfo(); - if (bci && dci) { + if (base_ci && derived_ci) { #ifdef WIN32 // Windows cannot cast-to-derived for virtual inheritance // with CINT's (or Reflex's) interfaces. - long baseprop = dci->IsBase(*bci); + long baseprop = derived_ci->IsBase(*base_ci); if (!baseprop || (baseprop & G__BIT_ISVIRTUALBASE)) - offset = (size_t)crd->GetBaseClassOffset(crb); + offset = (size_t)derived_type->GetBaseClassOffset(base_type); else #endif - offset = G__isanybase(bci->Tagnum(), dci->Tagnum(), (long)address); + offset = G__isanybase(base_ci->Tagnum(), derived_ci->Tagnum(), (long)address); } else { - offset = (size_t)crd->GetBaseClassOffset(crb); + offset = (size_t)derived_type->GetBaseClassOffset(base_type); } } @@ -430,7 +402,7 @@ /* method/function reflection information --------------------------------- */ -int cppyy_num_methods(cppyy_typehandle_t handle) { +int cppyy_num_methods(cppyy_scope_t handle) { TClassRef cr = type_from_handle(handle); if (cr.GetClass() && cr->GetListOfMethods()) return cr->GetListOfMethods()->GetSize(); @@ -453,45 +425,51 @@ return 0; } -char* cppyy_method_name(cppyy_typehandle_t handle, int method_index) { +char* cppyy_method_name(cppyy_scope_t handle, int method_index) { TFunction* f = type_get_method(handle, method_index); return cppstring_to_cstring(f->GetName()); } -char* cppyy_method_result_type(cppyy_typehandle_t handle, int method_index) { +char* cppyy_method_result_type(cppyy_scope_t handle, int method_index) { TFunction* f = type_get_method(handle, method_index); return type_cppstring_to_cstring(f->GetReturnTypeName()); } -int cppyy_method_num_args(cppyy_typehandle_t handle, int method_index) { +int cppyy_method_num_args(cppyy_scope_t handle, int method_index) { TFunction* f = type_get_method(handle, method_index); return f->GetNargs(); } -int cppyy_method_req_args(cppyy_typehandle_t handle, int method_index) { +int cppyy_method_req_args(cppyy_scope_t handle, int method_index) { TFunction* f = type_get_method(handle, method_index); return f->GetNargs() - f->GetNargsOpt(); } -char* cppyy_method_arg_type(cppyy_typehandle_t handle, int method_index, int arg_index) { +char* cppyy_method_arg_type(cppyy_scope_t handle, int method_index, int arg_index) { TFunction* f = type_get_method(handle, method_index); TMethodArg* arg = (TMethodArg*)f->GetListOfMethodArgs()->At(arg_index); return type_cppstring_to_cstring(arg->GetFullTypeName()); } -char* cppyy_method_arg_default(cppyy_typehandle_t, int, int) { +char* cppyy_method_arg_default(cppyy_scope_t, int, int) { /* unused: libffi does not work with CINT back-end */ return cppstring_to_cstring(""); } 
+cppyy_method_t cppyy_get_method(cppyy_scope_t handle, int method_index) { + TFunction* f = type_get_method(handle, method_index); + return (cppyy_method_t)f->InterfaceMethod(); +} -int cppyy_is_constructor(cppyy_typehandle_t handle, int method_index) { + +/* method properties ----------------------------------------------------- */ +int cppyy_is_constructor(cppyy_type_t handle, int method_index) { TClassRef cr = type_from_handle(handle); TMethod* m = (TMethod*)cr->GetListOfMethods()->At(method_index); return strcmp(m->GetName(), cr->GetName()) == 0; } -int cppyy_is_staticmethod(cppyy_typehandle_t handle, int method_index) { +int cppyy_is_staticmethod(cppyy_type_t handle, int method_index) { TClassRef cr = type_from_handle(handle); TMethod* m = (TMethod*)cr->GetListOfMethods()->At(method_index); return m->Property() & G__BIT_ISSTATIC; @@ -499,20 +477,20 @@ /* data member reflection information ------------------------------------- */ -int cppyy_num_data_members(cppyy_typehandle_t handle) { +int cppyy_num_data_members(cppyy_scope_t handle) { TClassRef cr = type_from_handle(handle); if (cr.GetClass() && cr->GetListOfDataMembers()) return cr->GetListOfDataMembers()->GetSize(); return 0; } -char* cppyy_data_member_name(cppyy_typehandle_t handle, int data_member_index) { +char* cppyy_data_member_name(cppyy_scope_t handle, int data_member_index) { TClassRef cr = type_from_handle(handle); TDataMember* m = (TDataMember*)cr->GetListOfDataMembers()->At(data_member_index); return cppstring_to_cstring(m->GetName()); } -char* cppyy_data_member_type(cppyy_typehandle_t handle, int data_member_index) { +char* cppyy_data_member_type(cppyy_scope_t handle, int data_member_index) { TClassRef cr = type_from_handle(handle); TDataMember* m = (TDataMember*)cr->GetListOfDataMembers()->At(data_member_index); std::string fullType = m->GetFullTypeName(); @@ -526,20 +504,21 @@ return cppstring_to_cstring(fullType); } -size_t cppyy_data_member_offset(cppyy_typehandle_t handle, int data_member_index) { +size_t cppyy_data_member_offset(cppyy_scope_t handle, int data_member_index) { TClassRef cr = type_from_handle(handle); TDataMember* m = (TDataMember*)cr->GetListOfDataMembers()->At(data_member_index); return m->GetOffsetCint(); } -int cppyy_is_publicdata(cppyy_typehandle_t handle, int data_member_index) { +/* data member properties ------------------------------------------------ */ +int cppyy_is_publicdata(cppyy_scope_t handle, int data_member_index) { TClassRef cr = type_from_handle(handle); TDataMember* m = (TDataMember*)cr->GetListOfDataMembers()->At(data_member_index); return m->Property() & G__BIT_ISPUBLIC; } -int cppyy_is_staticdata(cppyy_typehandle_t handle, int data_member_index) { +int cppyy_is_staticdata(cppyy_scope_t handle, int data_member_index) { TClassRef cr = type_from_handle(handle); TDataMember* m = (TDataMember*)cr->GetListOfDataMembers()->At(data_member_index); return m->Property() & G__BIT_ISSTATIC; diff --git a/pypy/module/cppyy/src/reflexcwrapper.cxx b/pypy/module/cppyy/src/reflexcwrapper.cxx --- a/pypy/module/cppyy/src/reflexcwrapper.cxx +++ b/pypy/module/cppyy/src/reflexcwrapper.cxx @@ -362,6 +362,7 @@ } +/* data member properties ------------------------------------------------ */ int cppyy_is_publicdata(cppyy_scope_t handle, int data_member_index) { Reflex::Scope s = scope_from_handle(handle); Reflex::Member m = s.DataMemberAt(data_member_index); From noreply at buildbot.pypy.org Tue Feb 28 07:38:29 2012 From: noreply at buildbot.pypy.org (wlav) Date: Tue, 28 Feb 2012 07:38:29 +0100 (CET) 
Subject: [pypy-commit] pypy reflex-support: fix for name matching if a class lives in a namespace Message-ID: <20120228063829.E38A38203C@wyvern.cs.uni-duesseldorf.de> Author: Wim Lavrijsen Branch: reflex-support Changeset: r52956:1174528adf73 Date: 2012-02-27 21:52 -0800 http://bitbucket.org/pypy/pypy/changeset/1174528adf73/ Log: fix for name matching if a class lives in a namespace diff --git a/pypy/module/cppyy/src/cintcwrapper.cxx b/pypy/module/cppyy/src/cintcwrapper.cxx --- a/pypy/module/cppyy/src/cintcwrapper.cxx +++ b/pypy/module/cppyy/src/cintcwrapper.cxx @@ -206,7 +206,7 @@ /* method/function dispatching -------------------------------------------- */ static inline G__value cppyy_call_T(cppyy_method_t method, cppyy_object_t self, int nargs, void* args) { - + G__InterfaceMethod meth = (G__InterfaceMethod)method; G__param* libp = (G__param*)((char*)args - offsetof(G__param, para)); assert(libp->paran == nargs); @@ -466,7 +466,7 @@ int cppyy_is_constructor(cppyy_type_t handle, int method_index) { TClassRef cr = type_from_handle(handle); TMethod* m = (TMethod*)cr->GetListOfMethods()->At(method_index); - return strcmp(m->GetName(), cr->GetName()) == 0; + return strcmp(m->GetName(), ((G__ClassInfo*)cr->GetClassInfo())->Name()) == 0; } int cppyy_is_staticmethod(cppyy_type_t handle, int method_index) { From noreply at buildbot.pypy.org Tue Feb 28 08:56:20 2012 From: noreply at buildbot.pypy.org (hakanardo) Date: Tue, 28 Feb 2012 08:56:20 +0100 (CET) Subject: [pypy-commit] pypy default: Failing test where i73 is assigned twice in the optimized trace. Message-ID: <20120228075620.7DB648203C@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: Changeset: r52957:9a45d7852b5b Date: 2012-02-28 08:15 +0100 http://bitbucket.org/pypy/pypy/changeset/9a45d7852b5b/ Log: Failing test where i73 is assigned twice in the optimized trace. 
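    For orientation, the int_mod / int_rshift / int_and / int_add sequence in
    the trace below is the usual fixup that turns a C-style truncating
    remainder into Python's floor-style result for the positive divisor 2.
    A rough Python rendering of those residual ops, not part of the changeset
    (the name trace_mod_2 is invented here, and it assumes int_mod really does
    truncate toward zero, which is what the sign fixup in the trace suggests):

        def trace_mod_2(i55):
            # i73 = int_mod(i55, 2): C-style remainder, sign follows i55
            i73 = abs(i55) % 2 * (1 if i55 >= 0 else -1)
            # i75 = int_rshift(i73, 63): arithmetic shift, -1 if i73 < 0 else 0
            i75 = i73 >> 63
            # i76 = int_and(2, i75): 2 if the remainder was negative, else 0
            i76 = 2 & i75
            # i77 = int_add(i73, i76): now equal to Python's i55 % 2
            return i73 + i76

        assert all(trace_mod_2(n) == n % 2 for n in range(-7, 8))
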
diff --git a/pypy/jit/metainterp/optimizeopt/test/test_multilabel.py b/pypy/jit/metainterp/optimizeopt/test/test_multilabel.py --- a/pypy/jit/metainterp/optimizeopt/test/test_multilabel.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_multilabel.py @@ -398,6 +398,50 @@ with raises(InvalidLoop): self.optimize_loop(ops, ops) + def test_maybe_issue1045_related(self): + ops = """ + [p8] + p54 = getfield_gc(p8, descr=valuedescr) + mark_opaque_ptr(p54) + i55 = getfield_gc(p54, descr=nextdescr) + p57 = new_with_vtable(ConstClass(node_vtable)) + setfield_gc(p57, i55, descr=otherdescr) + p69 = new_with_vtable(ConstClass(node_vtable)) + setfield_gc(p69, i55, descr=otherdescr) + i71 = int_eq(i55, -9223372036854775808) + guard_false(i71) [] + i73 = int_mod(i55, 2) + i75 = int_rshift(i73, 63) + i76 = int_and(2, i75) + i77 = int_add(i73, i76) + p79 = new_with_vtable(ConstClass(node_vtable)) + setfield_gc(p79, i77, descr=otherdescr) + i81 = int_eq(i77, 1) + guard_false(i81) [] + i0 = int_ge(i55, 1) + guard_true(i0) [] + label(p57) + jump(p57) + """ + expected = """ + [p8] + p54 = getfield_gc(p8, descr=valuedescr) + i55 = getfield_gc(p54, descr=nextdescr) + i71 = int_eq(i55, -9223372036854775808) + guard_false(i71) [] + i73 = int_mod(i55, 2) + i75 = int_rshift(i73, 63) + i76 = int_and(2, i75) + i77 = int_add(i73, i76) + i81 = int_eq(i77, 1) + guard_false(i81) [] + i0 = int_ge(i55, 1) + guard_true(i0) [] + label(i55) + jump(i55) + """ + self.optimize_loop(ops, expected) + class OptRenameStrlen(Optimization): def propagate_forward(self, op): dispatch_opt(self, op) @@ -457,7 +501,6 @@ jump(p1, i11) """ self.optimize_loop(ops, expected) - class TestLLtype(OptimizeoptTestMultiLabel, LLtypeMixin): From noreply at buildbot.pypy.org Tue Feb 28 08:56:22 2012 From: noreply at buildbot.pypy.org (hakanardo) Date: Tue, 28 Feb 2012 08:56:22 +0100 (CET) Subject: [pypy-commit] pypy default: hg merge Message-ID: <20120228075622.1C1468203C@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: Changeset: r52958:402fe7ba5442 Date: 2012-02-28 08:53 +0100 http://bitbucket.org/pypy/pypy/changeset/402fe7ba5442/ Log: hg merge diff --git a/lib-python/modified-2.7/ctypes/test/test_arrays.py b/lib-python/modified-2.7/ctypes/test/test_arrays.py --- a/lib-python/modified-2.7/ctypes/test/test_arrays.py +++ b/lib-python/modified-2.7/ctypes/test/test_arrays.py @@ -1,12 +1,23 @@ import unittest from ctypes import * +from test.test_support import impl_detail formats = "bBhHiIlLqQfd" +# c_longdouble commented out for PyPy, look at the commend in test_longdouble formats = c_byte, c_ubyte, c_short, c_ushort, c_int, c_uint, \ - c_long, c_ulonglong, c_float, c_double, c_longdouble + c_long, c_ulonglong, c_float, c_double #, c_longdouble class ArrayTestCase(unittest.TestCase): + + @impl_detail('long double not supported by PyPy', pypy=False) + def test_longdouble(self): + """ + This test is empty. It's just here to remind that we commented out + c_longdouble in "formats". If pypy will ever supports c_longdouble, we + should kill this test and uncomment c_longdouble inside formats. + """ + def test_simple(self): # create classes holding simple numeric types, and check # various properties. 
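    The parsestring.py hunk above replaces the old list-of-characters plus
    ''.join() scheme with rlib's StringBuilder, copying whole runs of
    unescaped characters in one go via append_slice() instead of appending
    them one at a time.  A minimal sketch of that pattern (unescape_plain is
    a toy name invented here; it assumes a PyPy checkout on sys.path so that
    pypy.rlib.rstring is importable, and it is not the real
    decode_unicode_escape logic):

        from pypy.rlib.rstring import StringBuilder

        def unescape_plain(s):
            builder = StringBuilder(len(s))   # pre-size to the input length
            ps = 0
            end = len(s)
            while ps < end:
                start = ps
                # scan a run of ordinary characters, copy it as one slice
                while ps < end and s[ps] != '\\':
                    ps += 1
                if ps > start:
                    builder.append_slice(s, start, ps)
                if ps == end:
                    break
                ps += 1                       # skip the backslash
                if ps == end:
                    break                     # trailing backslash: drop it here
                builder.append(s[ps])         # keep the escaped character as-is
                ps += 1
            return builder.build()
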
diff --git a/pypy/interpreter/pyparser/parsestring.py b/pypy/interpreter/pyparser/parsestring.py --- a/pypy/interpreter/pyparser/parsestring.py +++ b/pypy/interpreter/pyparser/parsestring.py @@ -1,5 +1,6 @@ from pypy.interpreter.error import OperationError from pypy.interpreter import unicodehelper +from pypy.rlib.rstring import StringBuilder def parsestr(space, encoding, s, unicode_literals=False): # compiler.transformer.Transformer.decode_literal depends on what @@ -115,21 +116,23 @@ the string is UTF-8 encoded and should be re-encoded in the specified encoding. """ - lis = [] + builder = StringBuilder(len(s)) ps = 0 end = len(s) - while ps < end: - if s[ps] != '\\': - # note that the C code has a label here. - # the logic is the same. + while 1: + ps2 = ps + while ps < end and s[ps] != '\\': if recode_encoding and ord(s[ps]) & 0x80: w, ps = decode_utf8(space, s, ps, end, recode_encoding) - # Append bytes to output buffer. - lis.append(w) + builder.append(w) + ps2 = ps else: - lis.append(s[ps]) ps += 1 - continue + if ps > ps2: + builder.append_slice(s, ps2, ps) + if ps == end: + break + ps += 1 if ps == end: raise_app_valueerror(space, 'Trailing \\ in string') @@ -140,25 +143,25 @@ if ch == '\n': pass elif ch == '\\': - lis.append('\\') + builder.append('\\') elif ch == "'": - lis.append("'") + builder.append("'") elif ch == '"': - lis.append('"') + builder.append('"') elif ch == 'b': - lis.append("\010") + builder.append("\010") elif ch == 'f': - lis.append('\014') # FF + builder.append('\014') # FF elif ch == 't': - lis.append('\t') + builder.append('\t') elif ch == 'n': - lis.append('\n') + builder.append('\n') elif ch == 'r': - lis.append('\r') + builder.append('\r') elif ch == 'v': - lis.append('\013') # VT + builder.append('\013') # VT elif ch == 'a': - lis.append('\007') # BEL, not classic C + builder.append('\007') # BEL, not classic C elif ch in '01234567': # Look for up to two more octal digits span = ps @@ -168,13 +171,13 @@ # emulate a strange wrap-around behavior of CPython: # \400 is the same as \000 because 0400 == 256 num = int(octal, 8) & 0xFF - lis.append(chr(num)) + builder.append(chr(num)) ps = span elif ch == 'x': if ps+2 <= end and isxdigit(s[ps]) and isxdigit(s[ps + 1]): hexa = s[ps : ps + 2] num = int(hexa, 16) - lis.append(chr(num)) + builder.append(chr(num)) ps += 2 else: raise_app_valueerror(space, 'invalid \\x escape') @@ -184,13 +187,13 @@ # this was not an escape, so the backslash # has to be added, and we start over in # non-escape mode. - lis.append('\\') + builder.append('\\') ps -= 1 assert ps >= 0 continue # an arbitry number of unescaped UTF-8 bytes may follow. - buf = ''.join(lis) + buf = builder.build() return buf diff --git a/pypy/jit/backend/llsupport/gc.py b/pypy/jit/backend/llsupport/gc.py --- a/pypy/jit/backend/llsupport/gc.py +++ b/pypy/jit/backend/llsupport/gc.py @@ -769,11 +769,19 @@ self.generate_function('malloc_unicode', malloc_unicode, [lltype.Signed]) - # Rarely called: allocate a fixed-size amount of bytes, but - # not in the nursery, because it is too big. Implemented like - # malloc_nursery_slowpath() above. - self.generate_function('malloc_fixedsize', malloc_nursery_slowpath, - [lltype.Signed]) + # Never called as far as I can tell, but there for completeness: + # allocate a fixed-size object, but not in the nursery, because + # it is too big. 
+ def malloc_big_fixedsize(size, tid): + if self.DEBUG: + self._random_usage_of_xmm_registers() + type_id = llop.extract_ushort(llgroup.HALFWORD, tid) + check_typeid(type_id) + return llop1.do_malloc_fixedsize_clear(llmemory.GCREF, + type_id, size, + False, False, False) + self.generate_function('malloc_big_fixedsize', malloc_big_fixedsize, + [lltype.Signed] * 2) def _bh_malloc(self, sizedescr): from pypy.rpython.memory.gctypelayout import check_typeid diff --git a/pypy/jit/backend/llsupport/rewrite.py b/pypy/jit/backend/llsupport/rewrite.py --- a/pypy/jit/backend/llsupport/rewrite.py +++ b/pypy/jit/backend/llsupport/rewrite.py @@ -96,8 +96,10 @@ def handle_new_fixedsize(self, descr, op): assert isinstance(descr, SizeDescr) size = descr.size - self.gen_malloc_nursery(size, op.result) - self.gen_initialize_tid(op.result, descr.tid) + if self.gen_malloc_nursery(size, op.result): + self.gen_initialize_tid(op.result, descr.tid) + else: + self.gen_malloc_fixedsize(size, descr.tid, op.result) def handle_new_array(self, arraydescr, op): v_length = op.getarg(0) @@ -112,8 +114,8 @@ pass # total_size is still -1 elif arraydescr.itemsize == 0: total_size = arraydescr.basesize - if 0 <= total_size <= 0xffffff: # up to 16MB, arbitrarily - self.gen_malloc_nursery(total_size, op.result) + if (total_size >= 0 and + self.gen_malloc_nursery(total_size, op.result)): self.gen_initialize_tid(op.result, arraydescr.tid) self.gen_initialize_len(op.result, v_length, arraydescr.lendescr) elif self.gc_ll_descr.kind == 'boehm': @@ -147,13 +149,22 @@ # mark 'v_result' as freshly malloced self.recent_mallocs[v_result] = None - def gen_malloc_fixedsize(self, size, v_result): - """Generate a CALL_MALLOC_GC(malloc_fixedsize_fn, Const(size)). - Note that with the framework GC, this should be called very rarely. + def gen_malloc_fixedsize(self, size, typeid, v_result): + """Generate a CALL_MALLOC_GC(malloc_fixedsize_fn, ...). + Used on Boehm, and on the framework GC for large fixed-size + mallocs. (For all I know this latter case never occurs in + practice, but better safe than sorry.) """ - addr = self.gc_ll_descr.get_malloc_fn_addr('malloc_fixedsize') - self._gen_call_malloc_gc([ConstInt(addr), ConstInt(size)], v_result, - self.gc_ll_descr.malloc_fixedsize_descr) + if self.gc_ll_descr.fielddescr_tid is not None: # framework GC + assert (size & (WORD-1)) == 0, "size not aligned?" + addr = self.gc_ll_descr.get_malloc_fn_addr('malloc_big_fixedsize') + args = [ConstInt(addr), ConstInt(size), ConstInt(typeid)] + descr = self.gc_ll_descr.malloc_big_fixedsize_descr + else: # Boehm + addr = self.gc_ll_descr.get_malloc_fn_addr('malloc_fixedsize') + args = [ConstInt(addr), ConstInt(size)] + descr = self.gc_ll_descr.malloc_fixedsize_descr + self._gen_call_malloc_gc(args, v_result, descr) def gen_boehm_malloc_array(self, arraydescr, v_num_elem, v_result): """Generate a CALL_MALLOC_GC(malloc_array_fn, ...) 
for Boehm.""" @@ -211,8 +222,7 @@ """ size = self.round_up_for_allocation(size) if not self.gc_ll_descr.can_use_nursery_malloc(size): - self.gen_malloc_fixedsize(size, v_result) - return + return False # op = None if self._op_malloc_nursery is not None: @@ -238,6 +248,7 @@ self._previous_size = size self._v_last_malloced_nursery = v_result self.recent_mallocs[v_result] = None + return True def gen_initialize_tid(self, v_newgcobj, tid): if self.gc_ll_descr.fielddescr_tid is not None: diff --git a/pypy/jit/backend/llsupport/test/test_rewrite.py b/pypy/jit/backend/llsupport/test/test_rewrite.py --- a/pypy/jit/backend/llsupport/test/test_rewrite.py +++ b/pypy/jit/backend/llsupport/test/test_rewrite.py @@ -119,12 +119,19 @@ jump() """, """ [] - p0 = call_malloc_gc(ConstClass(malloc_fixedsize), \ - %(adescr.basesize + 10 * adescr.itemsize)d, \ - descr=malloc_fixedsize_descr) - setfield_gc(p0, 10, descr=alendescr) + p0 = call_malloc_gc(ConstClass(malloc_array), \ + %(adescr.basesize)d, \ + 10, \ + %(adescr.itemsize)d, \ + %(adescr.lendescr.offset)d, \ + descr=malloc_array_descr) jump() """) +## should ideally be: +## p0 = call_malloc_gc(ConstClass(malloc_fixedsize), \ +## %(adescr.basesize + 10 * adescr.itemsize)d, \ +## descr=malloc_fixedsize_descr) +## setfield_gc(p0, 10, descr=alendescr) def test_new_array_variable(self): self.check_rewrite(""" @@ -178,13 +185,20 @@ jump() """, """ [i1] - p0 = call_malloc_gc(ConstClass(malloc_fixedsize), \ - %(unicodedescr.basesize + \ - 10 * unicodedescr.itemsize)d, \ - descr=malloc_fixedsize_descr) - setfield_gc(p0, 10, descr=unicodelendescr) + p0 = call_malloc_gc(ConstClass(malloc_array), \ + %(unicodedescr.basesize)d, \ + 10, \ + %(unicodedescr.itemsize)d, \ + %(unicodelendescr.offset)d, \ + descr=malloc_array_descr) jump() """) +## should ideally be: +## p0 = call_malloc_gc(ConstClass(malloc_fixedsize), \ +## %(unicodedescr.basesize + \ +## 10 * unicodedescr.itemsize)d, \ +## descr=malloc_fixedsize_descr) +## setfield_gc(p0, 10, descr=unicodelendescr) class TestFramework(RewriteTests): @@ -203,7 +217,7 @@ # class FakeCPU(object): def sizeof(self, STRUCT): - descr = SizeDescrWithVTable(102) + descr = SizeDescrWithVTable(104) descr.tid = 9315 return descr self.cpu = FakeCPU() @@ -368,11 +382,9 @@ jump() """, """ [] - p0 = call_malloc_gc(ConstClass(malloc_fixedsize), \ - %(bdescr.basesize + 104)d, \ - descr=malloc_fixedsize_descr) - setfield_gc(p0, 8765, descr=tiddescr) - setfield_gc(p0, 103, descr=blendescr) + p0 = call_malloc_gc(ConstClass(malloc_array), 1, \ + %(bdescr.tid)d, 103, \ + descr=malloc_array_descr) jump() """) @@ -435,9 +447,8 @@ jump() """, """ [p1] - p0 = call_malloc_gc(ConstClass(malloc_fixedsize), 104, \ - descr=malloc_fixedsize_descr) - setfield_gc(p0, 9315, descr=tiddescr) + p0 = call_malloc_gc(ConstClass(malloc_big_fixedsize), 104, 9315, \ + descr=malloc_big_fixedsize_descr) setfield_gc(p0, ConstClass(o_vtable), descr=vtable_descr) jump() """) diff --git a/pypy/jit/metainterp/compile.py b/pypy/jit/metainterp/compile.py --- a/pypy/jit/metainterp/compile.py +++ b/pypy/jit/metainterp/compile.py @@ -289,8 +289,21 @@ assert isinstance(token, TargetToken) assert token.original_jitcell_token is None token.original_jitcell_token = trace.original_jitcell_token - - + + +def do_compile_loop(metainterp_sd, inputargs, operations, looptoken, + log=True, name=''): + metainterp_sd.logger_ops.log_loop(inputargs, operations, -2, + 'compiling', name=name) + return metainterp_sd.cpu.compile_loop(inputargs, operations, looptoken, + log=log, name=name) + 
+def do_compile_bridge(metainterp_sd, faildescr, inputargs, operations, + original_loop_token, log=True): + metainterp_sd.logger_ops.log_bridge(inputargs, operations, -2) + return metainterp_sd.cpu.compile_bridge(faildescr, inputargs, operations, + original_loop_token, log=log) + def send_loop_to_backend(greenkey, jitdriver_sd, metainterp_sd, loop, type): vinfo = jitdriver_sd.virtualizable_info if vinfo is not None: @@ -319,9 +332,9 @@ metainterp_sd.profiler.start_backend() debug_start("jit-backend") try: - asminfo = metainterp_sd.cpu.compile_loop(loop.inputargs, operations, - original_jitcell_token, - name=loopname) + asminfo = do_compile_loop(metainterp_sd, loop.inputargs, + operations, original_jitcell_token, + name=loopname) finally: debug_stop("jit-backend") metainterp_sd.profiler.end_backend() @@ -333,7 +346,6 @@ metainterp_sd.stats.compiled() metainterp_sd.log("compiled new " + type) # - loopname = jitdriver_sd.warmstate.get_location_str(greenkey) if asminfo is not None: ops_offset = asminfo.ops_offset else: @@ -365,9 +377,9 @@ metainterp_sd.profiler.start_backend() debug_start("jit-backend") try: - asminfo = metainterp_sd.cpu.compile_bridge(faildescr, inputargs, - operations, - original_loop_token) + asminfo = do_compile_bridge(metainterp_sd, faildescr, inputargs, + operations, + original_loop_token) finally: debug_stop("jit-backend") metainterp_sd.profiler.end_backend() diff --git a/pypy/jit/metainterp/logger.py b/pypy/jit/metainterp/logger.py --- a/pypy/jit/metainterp/logger.py +++ b/pypy/jit/metainterp/logger.py @@ -18,6 +18,10 @@ debug_start("jit-log-noopt-loop") logops = self._log_operations(inputargs, operations, ops_offset) debug_stop("jit-log-noopt-loop") + elif number == -2: + debug_start("jit-log-compiling-loop") + logops = self._log_operations(inputargs, operations, ops_offset) + debug_stop("jit-log-compiling-loop") else: debug_start("jit-log-opt-loop") debug_print("# Loop", number, '(%s)' % name , ":", type, @@ -31,6 +35,10 @@ debug_start("jit-log-noopt-bridge") logops = self._log_operations(inputargs, operations, ops_offset) debug_stop("jit-log-noopt-bridge") + elif number == -2: + debug_start("jit-log-compiling-bridge") + logops = self._log_operations(inputargs, operations, ops_offset) + debug_stop("jit-log-compiling-bridge") else: debug_start("jit-log-opt-bridge") debug_print("# bridge out of Guard", number, diff --git a/pypy/module/imp/test/test_import.py b/pypy/module/imp/test/test_import.py --- a/pypy/module/imp/test/test_import.py +++ b/pypy/module/imp/test/test_import.py @@ -357,7 +357,7 @@ def test_cannot_write_pyc(self): import sys, os - p = os.path.join(sys.path[-1], 'readonly') + p = os.path.join(sys.path[0], 'readonly') try: os.chmod(p, 0555) except: diff --git a/pypy/module/pypyjit/test_pypy_c/test_00_model.py b/pypy/module/pypyjit/test_pypy_c/test_00_model.py --- a/pypy/module/pypyjit/test_pypy_c/test_00_model.py +++ b/pypy/module/pypyjit/test_pypy_c/test_00_model.py @@ -60,6 +60,9 @@ stdout=subprocess.PIPE, stderr=subprocess.PIPE) stdout, stderr = pipe.communicate() + if getattr(pipe, 'returncode', 0) < 0: + raise IOError("subprocess was killed by signal %d" % ( + pipe.returncode,)) if stderr.startswith('SKIP:'): py.test.skip(stderr) if stderr.startswith('debug_alloc.h:'): # lldebug builds diff --git a/pypy/module/pypyjit/test_pypy_c/test_alloc.py b/pypy/module/pypyjit/test_pypy_c/test_alloc.py new file mode 100644 --- /dev/null +++ b/pypy/module/pypyjit/test_pypy_c/test_alloc.py @@ -0,0 +1,26 @@ +import py, sys +from 
pypy.module.pypyjit.test_pypy_c.test_00_model import BaseTestPyPyC + +class TestAlloc(BaseTestPyPyC): + + SIZES = dict.fromkeys([2 ** n for n in range(26)] + # up to 32MB + [2 ** n - 1 for n in range(26)]) + + def test_newstr_constant_size(self): + for size in TestAlloc.SIZES: + yield self.newstr_constant_size, size + + def newstr_constant_size(self, size): + src = """if 1: + N = %(size)d + part_a = 'a' * N + part_b = 'b' * N + for i in xrange(20): + ao = '%%s%%s' %% (part_a, part_b) + def main(): + return 42 +""" % {'size': size} + log = self.run(src, [], threshold=10) + assert log.result == 42 + loop, = log.loops_by_filename(self.filepath) + # assert did not crash diff --git a/pypy/rpython/memory/gc/minimark.py b/pypy/rpython/memory/gc/minimark.py --- a/pypy/rpython/memory/gc/minimark.py +++ b/pypy/rpython/memory/gc/minimark.py @@ -608,6 +608,11 @@ specified as 0 if the object is not varsized. The returned object is fully initialized and zero-filled.""" # + # Here we really need a valid 'typeid', not 0 (as the JIT might + # try to send us if there is still a bug). + ll_assert(bool(self.combine(typeid, 0)), + "external_malloc: typeid == 0") + # # Compute the total size, carefully checking for overflows. size_gc_header = self.gcheaderbuilder.size_gc_header nonvarsize = size_gc_header + self.fixed_size(typeid) From noreply at buildbot.pypy.org Tue Feb 28 10:06:08 2012 From: noreply at buildbot.pypy.org (hakanardo) Date: Tue, 28 Feb 2012 10:06:08 +0100 (CET) Subject: [pypy-commit] pypy dead-code-optimization: Failing test Message-ID: <20120228090608.322248203C@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: dead-code-optimization Changeset: r52959:b819a305b30f Date: 2012-02-28 10:05 +0100 http://bitbucket.org/pypy/pypy/changeset/b819a305b30f/ Log: Failing test diff --git a/pypy/jit/metainterp/optimizeopt/__init__.py b/pypy/jit/metainterp/optimizeopt/__init__.py --- a/pypy/jit/metainterp/optimizeopt/__init__.py +++ b/pypy/jit/metainterp/optimizeopt/__init__.py @@ -55,7 +55,6 @@ def optimize_trace(metainterp_sd, loop, enable_opts, inline_short_preamble=True): """Optimize loop.operations to remove internal overheadish operations. 
""" - debug_start("jit-optimize") try: loop.logops = metainterp_sd.logger_noopt.log_loop(loop.inputargs, diff --git a/pypy/jit/metainterp/optimizeopt/test/test_multilabel.py b/pypy/jit/metainterp/optimizeopt/test/test_multilabel.py --- a/pypy/jit/metainterp/optimizeopt/test/test_multilabel.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_multilabel.py @@ -398,6 +398,24 @@ with raises(InvalidLoop): self.optimize_loop(ops, ops) + def test_dont_kill_exported_ops(self): + ops = """ + [i0] + i1 = int_add(i0, 1) + label(i0) + i2 = int_add(i0, 1) + escape(i2) + jump(i0) + """ + expected = """ + [i0] + i1 = int_add(i0, 1) + label(i0, i1) + escape(i1) + jump(i0, i1) + """ + self.optimize_loop(ops, expected) + class OptRenameStrlen(Optimization): def propagate_forward(self, op): dispatch_opt(self, op) @@ -458,7 +476,6 @@ """ self.optimize_loop(ops, expected) - class TestLLtype(OptimizeoptTestMultiLabel, LLtypeMixin): pass From noreply at buildbot.pypy.org Tue Feb 28 12:05:59 2012 From: noreply at buildbot.pypy.org (bivab) Date: Tue, 28 Feb 2012 12:05:59 +0100 (CET) Subject: [pypy-commit] pypy arm-backend-2: make sure we are only checking one byte in the cond cond_call_* operations Message-ID: <20120228110559.394528203C@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: arm-backend-2 Changeset: r52960:d47de5e0115f Date: 2012-02-28 11:04 +0000 http://bitbucket.org/pypy/pypy/changeset/d47de5e0115f/ Log: make sure we are only checking one byte in the cond cond_call_* operations diff --git a/pypy/jit/backend/arm/opassembler.py b/pypy/jit/backend/arm/opassembler.py --- a/pypy/jit/backend/arm/opassembler.py +++ b/pypy/jit/backend/arm/opassembler.py @@ -16,6 +16,7 @@ gen_emit_unary_float_op, saved_registers, count_reg_args) +from pypy.jit.backend.arm.helper.regalloc import check_imm_arg from pypy.jit.backend.arm.codebuilder import ARMv7Builder, OverwritingBuilder from pypy.jit.backend.arm.jump import remap_frame_layout from pypy.jit.backend.arm.regalloc import TempInt, TempPtr @@ -534,16 +535,10 @@ else: raise AssertionError(opnum) loc_base = arglocs[0] - self.mc.LDR_ri(r.ip.value, loc_base.value) - # calculate the shift value to rotate the ofs according to the ARM - # shifted imm values - # (4 - 0) * 4 & 0xF = 0 - # (4 - 1) * 4 & 0xF = 12 - # (4 - 2) * 4 & 0xF = 8 - # (4 - 3) * 4 & 0xF = 4 - ofs = (((4 - descr.jit_wb_if_flag_byteofs) * 4) & 0xF) << 8 - ofs |= descr.jit_wb_if_flag_singlebyte - self.mc.TST_ri(r.ip.value, imm=ofs) + assert check_imm_arg(descr.jit_wb_if_flag_byteofs) + assert check_imm_arg(descr.jit_wb_if_flag_singlebyte) + self.mc.LDRB_ri(r.ip.value, loc_base.value, imm=descr.jit_wb_if_flag_byteofs) + self.mc.TST_ri(r.ip.value, imm=descr.jit_wb_if_flag_singlebyte) jz_location = self.mc.currpos() self.mc.BKPT() @@ -551,11 +546,10 @@ # for cond_call_gc_wb_array, also add another fast path: # if GCFLAG_CARDS_SET, then we can just set one bit and be done if card_marking: - # calculate the shift value to rotate the ofs according to the ARM - # shifted imm values - ofs = (((4 - descr.jit_wb_cards_set_byteofs) * 4) & 0xF) << 8 - ofs |= descr.jit_wb_cards_set_singlebyte - self.mc.TST_ri(r.ip.value, imm=ofs) + assert check_imm_arg(descr.jit_wb_cards_set_byteofs) + assert check_imm_arg(descr.jit_wb_cards_set_singlebyte) + self.mc.LDRB_ri(r.ip.value, loc_base.value, imm=descr.jit_wb_cards_set_byteofs) + self.mc.TST_ri(r.ip.value, imm=descr.jit_wb_cards_set_singlebyte) # jnz_location = self.mc.currpos() self.mc.BKPT() From noreply at buildbot.pypy.org Tue Feb 28 12:21:06 2012 From: noreply 
at buildbot.pypy.org (antocuni) Date: Tue, 28 Feb 2012 12:21:06 +0100 (CET) Subject: [pypy-commit] pypy py3k: start to kill support for W_File and add support for _io files instead. _io Message-ID: <20120228112106.123A38203C@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52961:1c05f5b2cb44 Date: 2012-02-28 10:54 +0100 http://bitbucket.org/pypy/pypy/changeset/1c05f5b2cb44/ Log: start to kill support for W_File and add support for _io files instead. _io files haven't got a .stream attribute, so we construct a new streamio object from the fd directly. This is similar to CPython, which internally uses FILE* which are constructed from the fd as well. diff --git a/pypy/module/imp/interp_imp.py b/pypy/module/imp/interp_imp.py --- a/pypy/module/imp/interp_imp.py +++ b/pypy/module/imp/interp_imp.py @@ -1,10 +1,10 @@ from pypy.module.imp import importing -from pypy.module._file.interp_file import W_File -from pypy.rlib import streamio from pypy.interpreter.error import OperationError, operationerrfmt from pypy.interpreter.module import Module from pypy.interpreter.gateway import unwrap_spec -from pypy.module._file.interp_stream import StreamErrors, wrap_streamerror +from pypy.rlib import streamio +from pypy.module._io.interp_iobase import W_IOBase +from pypy.module._file.interp_stream import wrap_streamerror def get_suffixes(space): @@ -35,13 +35,17 @@ if w_file is None or space.is_w(w_file, space.w_None): try: return streamio.open_file_as_stream(filename, filemode) - except StreamErrors, e: + except streamio.StreamErrors, e: # XXX this is not quite the correct place, but it will do for now. # XXX see the issue which I'm sure exists already but whose number # XXX I cannot find any more... raise wrap_streamerror(space, e) else: - return space.interp_w(W_File, w_file).stream + w_iobase = space.interp_w(W_IOBase, w_file) + # XXX: not all W_IOBase have a fileno method: in that case, we should + # probably raise a TypeError? + fd = space.int_w(space.call_method(w_iobase, 'fileno')) + return streamio.fdopen_as_stream(fd, filemode) def find_module(space, w_name, w_path=None): name = space.str0_w(w_name) From noreply at buildbot.pypy.org Tue Feb 28 12:21:07 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Tue, 28 Feb 2012 12:21:07 +0100 (CET) Subject: [pypy-commit] pypy py3k: add a way to open a _io file given the stream. The dependency on _file is almost killed now Message-ID: <20120228112107.4A5D28203C@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52962:31042aa82067 Date: 2012-02-28 11:11 +0100 http://bitbucket.org/pypy/pypy/changeset/31042aa82067/ Log: add a way to open a _io file given the stream. The dependency on _file is almost killed now diff --git a/pypy/module/imp/interp_imp.py b/pypy/module/imp/interp_imp.py --- a/pypy/module/imp/interp_imp.py +++ b/pypy/module/imp/interp_imp.py @@ -4,6 +4,7 @@ from pypy.interpreter.gateway import unwrap_spec from pypy.rlib import streamio from pypy.module._io.interp_iobase import W_IOBase +from pypy.module._io import interp_io from pypy.module._file.interp_stream import wrap_streamerror @@ -63,11 +64,12 @@ stream = find_info.stream if stream is not None: - fileobj = W_File(space) - fileobj.fdopenstream( - stream, stream.try_to_find_file_descriptor(), - find_info.filemode, w_filename) - w_fileobj = space.wrap(fileobj) + fd = stream.try_to_find_file_descriptor() + # in python2, both CPython and PyPy pass the filename to + # open(). 
However, CPython 3 just passes the fd, so the returned file + # object doesn't have a name attached. We do the same in PyPy, because + # there is no easy way to attach the filename -- too bad + w_fileobj = interp_io.open(space, space.wrap(fd), find_info.filemode) else: w_fileobj = space.w_None w_import_info = space.newtuple( From noreply at buildbot.pypy.org Tue Feb 28 12:21:09 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Tue, 28 Feb 2012 12:21:09 +0100 (CET) Subject: [pypy-commit] pypy py3k: py3k compatibility. Note that we also change the check done by one assert: the corresponding test in lib-python also changed this way Message-ID: <20120228112109.BE5A78203C@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52963:ec3859bf4f3e Date: 2012-02-28 11:24 +0100 http://bitbucket.org/pypy/pypy/changeset/ec3859bf4f3e/ Log: py3k compatibility. Note that we also change the check done by one assert: the corresponding test in lib-python also changed this way diff --git a/pypy/module/imp/test/test_app.py b/pypy/module/imp/test/test_app.py --- a/pypy/module/imp/test/test_app.py +++ b/pypy/module/imp/test/test_app.py @@ -148,14 +148,14 @@ from sys import modules, path from shutil import rmtree from tempfile import mkdtemp - code = """if 1: + code = b"""if 1: import sys code_filename = sys._getframe().f_code.co_filename module_filename = __file__ constant = 1 def func(): pass - func_filename = func.func_code.co_filename + func_filename = func.__code__.co_filename """ module_name = "unlikely_module_name" @@ -181,7 +181,7 @@ try: # Ensure proper results assert mod != orig_module - assert mod.module_filename == compiled_name + assert mod.module_filename == file_name assert mod.code_filename == file_name assert mod.func_filename == file_name finally: From noreply at buildbot.pypy.org Tue Feb 28 12:21:12 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Tue, 28 Feb 2012 12:21:12 +0100 (CET) Subject: [pypy-commit] pypy default: there is no need to import the full applevel warnings module to implement space.warn: warnings.warn is replaced by _warning.warn anyway, so just use that instead. As a consequence, we need to make _warnings an essential module, but I think this is fine since emitting warnings is required by some places in the core interpreter Message-ID: <20120228112112.3E4F48203C@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: Changeset: r52964:538b58a51320 Date: 2012-02-28 12:17 +0100 http://bitbucket.org/pypy/pypy/changeset/538b58a51320/ Log: there is no need to import the full applevel warnings module to implement space.warn: warnings.warn is replaced by _warning.warn anyway, so just use that instead. 
As a consequence, we need to make _warnings an essential module, but I think this is fine since emitting warnings is required by some places in the core interpreter diff --git a/pypy/config/pypyoption.py b/pypy/config/pypyoption.py --- a/pypy/config/pypyoption.py +++ b/pypy/config/pypyoption.py @@ -13,7 +13,7 @@ and not p.basename.startswith('test')] essential_modules = dict.fromkeys( - ["exceptions", "_file", "sys", "__builtin__", "posix"] + ["exceptions", "_file", "sys", "__builtin__", "posix", "_warnings"] ) default_modules = essential_modules.copy() diff --git a/pypy/interpreter/baseobjspace.py b/pypy/interpreter/baseobjspace.py --- a/pypy/interpreter/baseobjspace.py +++ b/pypy/interpreter/baseobjspace.py @@ -1471,8 +1471,8 @@ def warn(self, msg, w_warningcls): self.appexec([self.wrap(msg), w_warningcls], """(msg, warningcls): - import warnings - warnings.warn(msg, warningcls, stacklevel=2) + import _warnings + _warnings.warn(msg, warningcls, stacklevel=2) """) def resolve_target(self, w_obj): From noreply at buildbot.pypy.org Tue Feb 28 12:21:14 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Tue, 28 Feb 2012 12:21:14 +0100 (CET) Subject: [pypy-commit] pypy py3k: hg merge default Message-ID: <20120228112114.DECC08203C@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52965:6fe3c22c294c Date: 2012-02-28 12:19 +0100 http://bitbucket.org/pypy/pypy/changeset/6fe3c22c294c/ Log: hg merge default diff --git a/pypy/config/pypyoption.py b/pypy/config/pypyoption.py --- a/pypy/config/pypyoption.py +++ b/pypy/config/pypyoption.py @@ -13,7 +13,7 @@ and not p.basename.startswith('test')] essential_modules = dict.fromkeys( - ["exceptions", "_io", "sys", "builtins", "posix"] + ["exceptions", "_io", "sys", "builtins", "posix", "_warnings"] ) default_modules = essential_modules.copy() diff --git a/pypy/interpreter/baseobjspace.py b/pypy/interpreter/baseobjspace.py --- a/pypy/interpreter/baseobjspace.py +++ b/pypy/interpreter/baseobjspace.py @@ -1464,8 +1464,8 @@ def warn(self, msg, w_warningcls): self.appexec([self.wrap(msg), w_warningcls], """(msg, warningcls): - import warnings - warnings.warn(msg, warningcls, stacklevel=2) + import _warnings + _warnings.warn(msg, warningcls, stacklevel=2) """) def resolve_target(self, w_obj): diff --git a/pypy/interpreter/pyparser/parsestring.py b/pypy/interpreter/pyparser/parsestring.py --- a/pypy/interpreter/pyparser/parsestring.py +++ b/pypy/interpreter/pyparser/parsestring.py @@ -1,5 +1,6 @@ from pypy.interpreter.error import OperationError from pypy.interpreter import unicodehelper +from pypy.rlib.rstring import StringBuilder def parsestr(space, encoding, s): # compiler.transformer.Transformer.decode_literal depends on what @@ -111,21 +112,23 @@ the string is UTF-8 encoded and should be re-encoded in the specified encoding. """ - lis = [] + builder = StringBuilder(len(s)) ps = 0 end = len(s) - while ps < end: - if s[ps] != '\\': - # note that the C code has a label here. - # the logic is the same. + while 1: + ps2 = ps + while ps < end and s[ps] != '\\': if recode_encoding and ord(s[ps]) & 0x80: w, ps = decode_utf8(space, s, ps, end, recode_encoding) - # Append bytes to output buffer. 
- lis.append(w) + builder.append(w) + ps2 = ps else: - lis.append(s[ps]) ps += 1 - continue + if ps > ps2: + builder.append_slice(s, ps2, ps) + if ps == end: + break + ps += 1 if ps == end: raise_app_valueerror(space, 'Trailing \\ in string') @@ -136,25 +139,25 @@ if ch == '\n': pass elif ch == '\\': - lis.append('\\') + builder.append('\\') elif ch == "'": - lis.append("'") + builder.append("'") elif ch == '"': - lis.append('"') + builder.append('"') elif ch == 'b': - lis.append("\010") + builder.append("\010") elif ch == 'f': - lis.append('\014') # FF + builder.append('\014') # FF elif ch == 't': - lis.append('\t') + builder.append('\t') elif ch == 'n': - lis.append('\n') + builder.append('\n') elif ch == 'r': - lis.append('\r') + builder.append('\r') elif ch == 'v': - lis.append('\013') # VT + builder.append('\013') # VT elif ch == 'a': - lis.append('\007') # BEL, not classic C + builder.append('\007') # BEL, not classic C elif ch in '01234567': # Look for up to two more octal digits span = ps @@ -164,13 +167,13 @@ # emulate a strange wrap-around behavior of CPython: # \400 is the same as \000 because 0400 == 256 num = int(octal, 8) & 0xFF - lis.append(chr(num)) + builder.append(chr(num)) ps = span elif ch == 'x': if ps+2 <= end and isxdigit(s[ps]) and isxdigit(s[ps + 1]): hexa = s[ps : ps + 2] num = int(hexa, 16) - lis.append(chr(num)) + builder.append(chr(num)) ps += 2 else: raise_app_valueerror(space, 'invalid \\x escape') @@ -180,13 +183,13 @@ # this was not an escape, so the backslash # has to be added, and we start over in # non-escape mode. - lis.append('\\') + builder.append('\\') ps -= 1 assert ps >= 0 continue # an arbitry number of unescaped UTF-8 bytes may follow. - buf = ''.join(lis) + buf = builder.build() return buf diff --git a/pypy/jit/backend/llsupport/gc.py b/pypy/jit/backend/llsupport/gc.py --- a/pypy/jit/backend/llsupport/gc.py +++ b/pypy/jit/backend/llsupport/gc.py @@ -769,11 +769,19 @@ self.generate_function('malloc_unicode', malloc_unicode, [lltype.Signed]) - # Rarely called: allocate a fixed-size amount of bytes, but - # not in the nursery, because it is too big. Implemented like - # malloc_nursery_slowpath() above. - self.generate_function('malloc_fixedsize', malloc_nursery_slowpath, - [lltype.Signed]) + # Never called as far as I can tell, but there for completeness: + # allocate a fixed-size object, but not in the nursery, because + # it is too big. 
+ def malloc_big_fixedsize(size, tid): + if self.DEBUG: + self._random_usage_of_xmm_registers() + type_id = llop.extract_ushort(llgroup.HALFWORD, tid) + check_typeid(type_id) + return llop1.do_malloc_fixedsize_clear(llmemory.GCREF, + type_id, size, + False, False, False) + self.generate_function('malloc_big_fixedsize', malloc_big_fixedsize, + [lltype.Signed] * 2) def _bh_malloc(self, sizedescr): from pypy.rpython.memory.gctypelayout import check_typeid diff --git a/pypy/jit/backend/llsupport/rewrite.py b/pypy/jit/backend/llsupport/rewrite.py --- a/pypy/jit/backend/llsupport/rewrite.py +++ b/pypy/jit/backend/llsupport/rewrite.py @@ -96,8 +96,10 @@ def handle_new_fixedsize(self, descr, op): assert isinstance(descr, SizeDescr) size = descr.size - self.gen_malloc_nursery(size, op.result) - self.gen_initialize_tid(op.result, descr.tid) + if self.gen_malloc_nursery(size, op.result): + self.gen_initialize_tid(op.result, descr.tid) + else: + self.gen_malloc_fixedsize(size, descr.tid, op.result) def handle_new_array(self, arraydescr, op): v_length = op.getarg(0) @@ -112,8 +114,8 @@ pass # total_size is still -1 elif arraydescr.itemsize == 0: total_size = arraydescr.basesize - if 0 <= total_size <= 0xffffff: # up to 16MB, arbitrarily - self.gen_malloc_nursery(total_size, op.result) + if (total_size >= 0 and + self.gen_malloc_nursery(total_size, op.result)): self.gen_initialize_tid(op.result, arraydescr.tid) self.gen_initialize_len(op.result, v_length, arraydescr.lendescr) elif self.gc_ll_descr.kind == 'boehm': @@ -147,13 +149,22 @@ # mark 'v_result' as freshly malloced self.recent_mallocs[v_result] = None - def gen_malloc_fixedsize(self, size, v_result): - """Generate a CALL_MALLOC_GC(malloc_fixedsize_fn, Const(size)). - Note that with the framework GC, this should be called very rarely. + def gen_malloc_fixedsize(self, size, typeid, v_result): + """Generate a CALL_MALLOC_GC(malloc_fixedsize_fn, ...). + Used on Boehm, and on the framework GC for large fixed-size + mallocs. (For all I know this latter case never occurs in + practice, but better safe than sorry.) """ - addr = self.gc_ll_descr.get_malloc_fn_addr('malloc_fixedsize') - self._gen_call_malloc_gc([ConstInt(addr), ConstInt(size)], v_result, - self.gc_ll_descr.malloc_fixedsize_descr) + if self.gc_ll_descr.fielddescr_tid is not None: # framework GC + assert (size & (WORD-1)) == 0, "size not aligned?" + addr = self.gc_ll_descr.get_malloc_fn_addr('malloc_big_fixedsize') + args = [ConstInt(addr), ConstInt(size), ConstInt(typeid)] + descr = self.gc_ll_descr.malloc_big_fixedsize_descr + else: # Boehm + addr = self.gc_ll_descr.get_malloc_fn_addr('malloc_fixedsize') + args = [ConstInt(addr), ConstInt(size)] + descr = self.gc_ll_descr.malloc_fixedsize_descr + self._gen_call_malloc_gc(args, v_result, descr) def gen_boehm_malloc_array(self, arraydescr, v_num_elem, v_result): """Generate a CALL_MALLOC_GC(malloc_array_fn, ...) 
for Boehm.""" @@ -211,8 +222,7 @@ """ size = self.round_up_for_allocation(size) if not self.gc_ll_descr.can_use_nursery_malloc(size): - self.gen_malloc_fixedsize(size, v_result) - return + return False # op = None if self._op_malloc_nursery is not None: @@ -238,6 +248,7 @@ self._previous_size = size self._v_last_malloced_nursery = v_result self.recent_mallocs[v_result] = None + return True def gen_initialize_tid(self, v_newgcobj, tid): if self.gc_ll_descr.fielddescr_tid is not None: diff --git a/pypy/jit/backend/llsupport/test/test_rewrite.py b/pypy/jit/backend/llsupport/test/test_rewrite.py --- a/pypy/jit/backend/llsupport/test/test_rewrite.py +++ b/pypy/jit/backend/llsupport/test/test_rewrite.py @@ -119,12 +119,19 @@ jump() """, """ [] - p0 = call_malloc_gc(ConstClass(malloc_fixedsize), \ - %(adescr.basesize + 10 * adescr.itemsize)d, \ - descr=malloc_fixedsize_descr) - setfield_gc(p0, 10, descr=alendescr) + p0 = call_malloc_gc(ConstClass(malloc_array), \ + %(adescr.basesize)d, \ + 10, \ + %(adescr.itemsize)d, \ + %(adescr.lendescr.offset)d, \ + descr=malloc_array_descr) jump() """) +## should ideally be: +## p0 = call_malloc_gc(ConstClass(malloc_fixedsize), \ +## %(adescr.basesize + 10 * adescr.itemsize)d, \ +## descr=malloc_fixedsize_descr) +## setfield_gc(p0, 10, descr=alendescr) def test_new_array_variable(self): self.check_rewrite(""" @@ -178,13 +185,20 @@ jump() """, """ [i1] - p0 = call_malloc_gc(ConstClass(malloc_fixedsize), \ - %(unicodedescr.basesize + \ - 10 * unicodedescr.itemsize)d, \ - descr=malloc_fixedsize_descr) - setfield_gc(p0, 10, descr=unicodelendescr) + p0 = call_malloc_gc(ConstClass(malloc_array), \ + %(unicodedescr.basesize)d, \ + 10, \ + %(unicodedescr.itemsize)d, \ + %(unicodelendescr.offset)d, \ + descr=malloc_array_descr) jump() """) +## should ideally be: +## p0 = call_malloc_gc(ConstClass(malloc_fixedsize), \ +## %(unicodedescr.basesize + \ +## 10 * unicodedescr.itemsize)d, \ +## descr=malloc_fixedsize_descr) +## setfield_gc(p0, 10, descr=unicodelendescr) class TestFramework(RewriteTests): @@ -203,7 +217,7 @@ # class FakeCPU(object): def sizeof(self, STRUCT): - descr = SizeDescrWithVTable(102) + descr = SizeDescrWithVTable(104) descr.tid = 9315 return descr self.cpu = FakeCPU() @@ -368,11 +382,9 @@ jump() """, """ [] - p0 = call_malloc_gc(ConstClass(malloc_fixedsize), \ - %(bdescr.basesize + 104)d, \ - descr=malloc_fixedsize_descr) - setfield_gc(p0, 8765, descr=tiddescr) - setfield_gc(p0, 103, descr=blendescr) + p0 = call_malloc_gc(ConstClass(malloc_array), 1, \ + %(bdescr.tid)d, 103, \ + descr=malloc_array_descr) jump() """) @@ -435,9 +447,8 @@ jump() """, """ [p1] - p0 = call_malloc_gc(ConstClass(malloc_fixedsize), 104, \ - descr=malloc_fixedsize_descr) - setfield_gc(p0, 9315, descr=tiddescr) + p0 = call_malloc_gc(ConstClass(malloc_big_fixedsize), 104, 9315, \ + descr=malloc_big_fixedsize_descr) setfield_gc(p0, ConstClass(o_vtable), descr=vtable_descr) jump() """) diff --git a/pypy/jit/metainterp/optimizeopt/test/test_multilabel.py b/pypy/jit/metainterp/optimizeopt/test/test_multilabel.py --- a/pypy/jit/metainterp/optimizeopt/test/test_multilabel.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_multilabel.py @@ -398,6 +398,50 @@ with raises(InvalidLoop): self.optimize_loop(ops, ops) + def test_maybe_issue1045_related(self): + ops = """ + [p8] + p54 = getfield_gc(p8, descr=valuedescr) + mark_opaque_ptr(p54) + i55 = getfield_gc(p54, descr=nextdescr) + p57 = new_with_vtable(ConstClass(node_vtable)) + setfield_gc(p57, i55, descr=otherdescr) + p69 = 
new_with_vtable(ConstClass(node_vtable)) + setfield_gc(p69, i55, descr=otherdescr) + i71 = int_eq(i55, -9223372036854775808) + guard_false(i71) [] + i73 = int_mod(i55, 2) + i75 = int_rshift(i73, 63) + i76 = int_and(2, i75) + i77 = int_add(i73, i76) + p79 = new_with_vtable(ConstClass(node_vtable)) + setfield_gc(p79, i77, descr=otherdescr) + i81 = int_eq(i77, 1) + guard_false(i81) [] + i0 = int_ge(i55, 1) + guard_true(i0) [] + label(p57) + jump(p57) + """ + expected = """ + [p8] + p54 = getfield_gc(p8, descr=valuedescr) + i55 = getfield_gc(p54, descr=nextdescr) + i71 = int_eq(i55, -9223372036854775808) + guard_false(i71) [] + i73 = int_mod(i55, 2) + i75 = int_rshift(i73, 63) + i76 = int_and(2, i75) + i77 = int_add(i73, i76) + i81 = int_eq(i77, 1) + guard_false(i81) [] + i0 = int_ge(i55, 1) + guard_true(i0) [] + label(i55) + jump(i55) + """ + self.optimize_loop(ops, expected) + class OptRenameStrlen(Optimization): def propagate_forward(self, op): dispatch_opt(self, op) @@ -457,7 +501,6 @@ jump(p1, i11) """ self.optimize_loop(ops, expected) - class TestLLtype(OptimizeoptTestMultiLabel, LLtypeMixin): diff --git a/pypy/module/pypyjit/test_pypy_c/test_00_model.py b/pypy/module/pypyjit/test_pypy_c/test_00_model.py --- a/pypy/module/pypyjit/test_pypy_c/test_00_model.py +++ b/pypy/module/pypyjit/test_pypy_c/test_00_model.py @@ -60,6 +60,9 @@ stdout=subprocess.PIPE, stderr=subprocess.PIPE) stdout, stderr = pipe.communicate() + if getattr(pipe, 'returncode', 0) < 0: + raise IOError("subprocess was killed by signal %d" % ( + pipe.returncode,)) if stderr.startswith('SKIP:'): py.test.skip(stderr) if stderr.startswith('debug_alloc.h:'): # lldebug builds diff --git a/pypy/module/pypyjit/test_pypy_c/test_alloc.py b/pypy/module/pypyjit/test_pypy_c/test_alloc.py new file mode 100644 --- /dev/null +++ b/pypy/module/pypyjit/test_pypy_c/test_alloc.py @@ -0,0 +1,26 @@ +import py, sys +from pypy.module.pypyjit.test_pypy_c.test_00_model import BaseTestPyPyC + +class TestAlloc(BaseTestPyPyC): + + SIZES = dict.fromkeys([2 ** n for n in range(26)] + # up to 32MB + [2 ** n - 1 for n in range(26)]) + + def test_newstr_constant_size(self): + for size in TestAlloc.SIZES: + yield self.newstr_constant_size, size + + def newstr_constant_size(self, size): + src = """if 1: + N = %(size)d + part_a = 'a' * N + part_b = 'b' * N + for i in xrange(20): + ao = '%%s%%s' %% (part_a, part_b) + def main(): + return 42 +""" % {'size': size} + log = self.run(src, [], threshold=10) + assert log.result == 42 + loop, = log.loops_by_filename(self.filepath) + # assert did not crash diff --git a/pypy/rpython/memory/gc/minimark.py b/pypy/rpython/memory/gc/minimark.py --- a/pypy/rpython/memory/gc/minimark.py +++ b/pypy/rpython/memory/gc/minimark.py @@ -608,6 +608,11 @@ specified as 0 if the object is not varsized. The returned object is fully initialized and zero-filled.""" # + # Here we really need a valid 'typeid', not 0 (as the JIT might + # try to send us if there is still a bug). + ll_assert(bool(self.combine(typeid, 0)), + "external_malloc: typeid == 0") + # # Compute the total size, carefully checking for overflows. 
size_gc_header = self.gcheaderbuilder.size_gc_header nonvarsize = size_gc_header + self.fixed_size(typeid) From noreply at buildbot.pypy.org Tue Feb 28 12:21:17 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Tue, 28 Feb 2012 12:21:17 +0100 (CET) Subject: [pypy-commit] pypy py3k: rewrite this test by using directly _warnings instead of warnings: it's a considerable speedup, and import warnings did not ultimately work because the stdlib itertools module is shadowed by our testing package Message-ID: <20120228112117.4A85F8203C@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52966:282563d7352d Date: 2012-02-28 12:20 +0100 http://bitbucket.org/pypy/pypy/changeset/282563d7352d/ Log: rewrite this test by using directly _warnings instead of warnings: it's a considerable speedup, and import warnings did not ultimately work because the stdlib itertools module is shadowed by our testing package diff --git a/pypy/module/imp/test/test_import.py b/pypy/module/imp/test/test_import.py --- a/pypy/module/imp/test/test_import.py +++ b/pypy/module/imp/test/test_import.py @@ -176,13 +176,15 @@ def imp(): import notapackage - import warnings - - warnings.simplefilter('error', ImportWarning) + import _warnings + def simplefilter(action, category): + _warnings.filters.insert(0, (action, None, category, None, 0)) + + simplefilter('error', ImportWarning) try: raises(ImportWarning, imp) finally: - warnings.simplefilter('default', ImportWarning) + simplefilter('default', ImportWarning) def test_import_sys(self): import sys From noreply at buildbot.pypy.org Tue Feb 28 13:42:05 2012 From: noreply at buildbot.pypy.org (arigo) Date: Tue, 28 Feb 2012 13:42:05 +0100 (CET) Subject: [pypy-commit] pypy default: Add _Py_ForgetReference(). Message-ID: <20120228124205.C68278203C@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r52967:b741ab752493 Date: 2012-02-28 13:41 +0100 http://bitbucket.org/pypy/pypy/changeset/b741ab752493/ Log: Add _Py_ForgetReference(). http://mail.python.org/pipermail/pypy- dev/2012-February/009482.html diff --git a/pypy/module/cpyext/include/object.h b/pypy/module/cpyext/include/object.h --- a/pypy/module/cpyext/include/object.h +++ b/pypy/module/cpyext/include/object.h @@ -56,6 +56,8 @@ #define Py_TYPE(ob) (((PyObject*)(ob))->ob_type) #define Py_SIZE(ob) (((PyVarObject*)(ob))->ob_size) +#define _Py_ForgetReference(ob) /* nothing */ + #define Py_None (&_Py_NoneStruct) /* From noreply at buildbot.pypy.org Tue Feb 28 14:04:35 2012 From: noreply at buildbot.pypy.org (arigo) Date: Tue, 28 Feb 2012 14:04:35 +0100 (CET) Subject: [pypy-commit] pypy miniscan: Another attempt to have conservative stack scanning for the Message-ID: <20120228130435.C05368203C@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: miniscan Changeset: r52968:0eacd8585bac Date: 2012-02-28 14:02 +0100 http://bitbucket.org/pypy/pypy/changeset/0eacd8585bac/ Log: Another attempt to have conservative stack scanning for the minimark gc. From noreply at buildbot.pypy.org Tue Feb 28 14:04:36 2012 From: noreply at buildbot.pypy.org (arigo) Date: Tue, 28 Feb 2012 14:04:36 +0100 (CET) Subject: [pypy-commit] pypy miniscan: Start. Message-ID: <20120228130436.EEA678203C@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: miniscan Changeset: r52969:9a7ea1f2e494 Date: 2012-02-28 14:03 +0100 http://bitbucket.org/pypy/pypy/changeset/9a7ea1f2e494/ Log: Start. 
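A minimal sketch (not part of the changeset that follows) of how the GCFLAG_HIGH
pattern introduced in the diff below could support conservative stack scanning:
a machine word is only treated as a candidate object header if its high bits
carry the fixed 0x5555... pattern that init_gc_object() now ORs into every tid.
Only the names GCFLAG_HIGH / GCFLAG_HIGH_MASK and the 0x5555... pattern come
from the diff; the concrete widths here (16 low tid bits, 64-bit words) and the
helper function are made up for illustration.

WORD_BITS = 64
TID_MASK = (1 << 16) - 1        # low bits: typeid + ordinary GC flags (width assumed here)
GCFLAG_HIGH_MASK = ~TID_MASK & ((1 << WORD_BITS) - 1)
GCFLAG_HIGH = 0x5555555555555555 & GCFLAG_HIGH_MASK

def looks_like_gc_header(tid_word):
    # True only if the word carries the magic high-bit pattern that the
    # patched init_gc_object() writes into every header.
    return (tid_word & GCFLAG_HIGH_MASK) == GCFLAG_HIGH

assert looks_like_gc_header((0x1234 & TID_MASK) | GCFLAG_HIGH)
assert not looks_like_gc_header(0xdeadbeef)   # random word: rejected

With 48 pattern bits, a random word matches only with probability about
2**-(WORD_BITS - 16), so a conservative scan of the C stack would yield few
false candidates; the renaming of nursery_free/nursery_top to
nursery_next/nursery_frag_end in the same diff suggests the nursery is also
being prepared to be carved into fragments rather than one bump region.
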
diff --git a/pypy/rpython/memory/gc/minimark.py b/pypy/rpython/memory/gc/minimark.py --- a/pypy/rpython/memory/gc/minimark.py +++ b/pypy/rpython/memory/gc/minimark.py @@ -116,6 +116,10 @@ TID_MASK = (first_gcflag << 7) - 1 +GCFLAG_HIGH_MASK = intmask(~TID_MASK) +assert GCFLAG_HIGH_MASK < 0 and not (GCFLAG_HIGH_MASK & GCFLAG_CARDS_SET) +GCFLAG_HIGH = intmask(0x5555555555555555 & GCFLAG_HIGH_MASK) + FORWARDSTUB = lltype.GcStruct('forwarding_stub', ('forw', llmemory.Address)) @@ -240,9 +244,9 @@ # it gives a lower bound on the allowed size of the nursery. self.nonlarge_max = large_object - 1 # - self.nursery = NULL - self.nursery_free = NULL - self.nursery_top = NULL + self.nursery = NULL + self.nursery_next = NULL + self.nursery_frag_end = NULL self.debug_tiny_nursery = -1 self.debug_rotating_nurseries = None # @@ -386,9 +390,9 @@ debug_print("nursery size:", self.nursery_size) self.nursery = self._alloc_nursery() # the current position in the nursery: - self.nursery_free = self.nursery + self.nursery_next = self.nursery # the end of the nursery: - self.nursery_top = self.nursery + self.nursery_size + self.nursery_frag_end = self.nursery + self.nursery_size # initialize the threshold self.min_heap_size = max(self.min_heap_size, self.nursery_size * self.major_collection_threshold) @@ -490,11 +494,12 @@ totalsize = rawtotalsize = min_size # # Get the memory from the nursery. If there is not enough space - # there, do a collect first. - result = self.nursery_free - self.nursery_free = result + totalsize - if self.nursery_free > self.nursery_top: - result = self.collect_and_reserve(totalsize) + # there, we have run out of the current fragment; pick the next + # one or do a collection. + result = self.nursery_next + self.nursery_next = result + totalsize + if self.nursery_next > self.nursery_frag_end: + result = self.pick_next_fragment(totalsize) # # Build the object. llarena.arena_reserve(result, totalsize) @@ -813,7 +818,7 @@ # have been chosen to allow 'flags' to be zero in the common # case (hence the 'NO' in their name). 
hdr = llmemory.cast_adr_to_ptr(addr, lltype.Ptr(self.HDR)) - hdr.tid = self.combine(typeid16, flags) + hdr.tid = self.combine(typeid16, flags | GCFLAG_HIGH) def init_gc_object_immortal(self, addr, typeid16, flags=0): # For prebuilt GC objects, the flags must contain From noreply at buildbot.pypy.org Tue Feb 28 14:08:02 2012 From: noreply at buildbot.pypy.org (hager) Date: Tue, 28 Feb 2012 14:08:02 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: add method to branch and link to absolute address Message-ID: <20120228130802.95B838203C@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r52970:758b09fb2f1f Date: 2012-02-27 17:27 +0100 http://bitbucket.org/pypy/pypy/changeset/758b09fb2f1f/ Log: add method to branch and link to absolute address diff --git a/pypy/jit/backend/ppc/codebuilder.py b/pypy/jit/backend/ppc/codebuilder.py --- a/pypy/jit/backend/ppc/codebuilder.py +++ b/pypy/jit/backend/ppc/codebuilder.py @@ -1035,6 +1035,12 @@ self.trap() self.bctr() + def bl_abs(self, address): + with scratch_reg(self): + self.load_imm(r.SCRATCH, address) + self.mtctr(r.SCRATCH.value) + self.bctrl() + def call(self, address): """ do a call to an absolute address """ From noreply at buildbot.pypy.org Tue Feb 28 14:08:03 2012 From: noreply at buildbot.pypy.org (hager) Date: Tue, 28 Feb 2012 14:08:03 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: add *args to __exit__ method in class scratch_reg Message-ID: <20120228130803.C13688203C@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r52971:3e9c0b6b242a Date: 2012-02-27 17:28 +0100 http://bitbucket.org/pypy/pypy/changeset/3e9c0b6b242a/ Log: add *args to __exit__ method in class scratch_reg diff --git a/pypy/jit/backend/ppc/codebuilder.py b/pypy/jit/backend/ppc/codebuilder.py --- a/pypy/jit/backend/ppc/codebuilder.py +++ b/pypy/jit/backend/ppc/codebuilder.py @@ -1187,7 +1187,7 @@ def __enter__(self): self.mc.alloc_scratch_reg() - def __exit__(self): + def __exit__(self, *args): self.mc.free_scratch_reg() class BranchUpdater(PPCAssembler): From noreply at buildbot.pypy.org Tue Feb 28 14:08:05 2012 From: noreply at buildbot.pypy.org (hager) Date: Tue, 28 Feb 2012 14:08:05 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: (bivab, hager): use more space efficient guard state encoding like X86 and ARM backends Message-ID: <20120228130805.019138203C@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r52972:3e4381b13941 Date: 2012-02-27 17:40 +0100 http://bitbucket.org/pypy/pypy/changeset/3e4381b13941/ Log: (bivab, hager): use more space efficient guard state encoding like X86 and ARM backends diff --git a/pypy/jit/backend/ppc/ppc_assembler.py b/pypy/jit/backend/ppc/ppc_assembler.py --- a/pypy/jit/backend/ppc/ppc_assembler.py +++ b/pypy/jit/backend/ppc/ppc_assembler.py @@ -162,12 +162,13 @@ @rgc.no_collect def failure_recovery_func(mem_loc, spilling_pointer): """ - mem_loc is a structure in memory describing where the values for - the failargs are stored. + mem_loc is a pointer to the beginning of the encoding. spilling_pointer is the address of the FORCE_INDEX. 
""" - return self.decode_registers_and_descr(mem_loc, spilling_pointer) + regs = rffi.cast(rffi.LONGP, spilling_pointer) + return self.decode_registers_and_descr(mem_loc, + spilling_pointer, regs) self.failure_recovery_func = failure_recovery_func @@ -175,110 +176,125 @@ lltype.Signed], lltype.Signed)) @rgc.no_collect - def decode_registers_and_descr(self, mem_loc, spp_loc): - ''' - mem_loc : pointer to encoded state - spp_loc : pointer to begin of the spilling area - ''' - enc = rffi.cast(rffi.CCHARP, mem_loc) - managed_size = WORD * len(r.MANAGED_REGS) - regs = rffi.cast(rffi.CCHARP, spp_loc) - i = -1 - fail_index = -1 - while(True): - i += 1 - fail_index += 1 - res = enc[i] - if res == self.END_OF_LOCS: - break - if res == self.EMPTY_LOC: - continue - - group = res - i += 1 - res = enc[i] - if res == self.IMM_LOC: - # imm value - if group == self.INT_TYPE or group == self.REF_TYPE: - if IS_PPC_32: - value = decode32(enc, i+1) - i += 4 - else: - value = decode64(enc, i+1) - i += 8 - else: - assert 0, "not implemented yet" - elif res == self.STACK_LOC: - stack_location = decode32(enc, i+1) - i += 4 - if group == self.FLOAT_TYPE: + def decode_registers_and_descr(self, mem_loc, spp, registers): + """Decode locations encoded in memory at mem_loc and write the values + to the failboxes. Values for spilled vars and registers are stored on + stack at frame_loc """ + assert spp & 1 == 0 + self.fail_force_index = spp + bytecode = rffi.cast(rffi.UCHARP, mem_loc) + num = 0 + value = 0 + fvalue = 0 + code_inputarg = False + while True: + code = rffi.cast(lltype.Signed, bytecode[0]) + bytecode = rffi.ptradd(bytecode, 1) + if code >= self.CODE_FROMSTACK: + if code > 0x7F: + shift = 7 + code &= 0x7F + while True: + nextcode = rffi.cast(lltype.Signed, bytecode[0]) + bytecode = rffi.ptradd(bytecode, 1) + code |= (nextcode & 0x7F) << shift + shift += 7 + if nextcode <= 0x7F: + break + # load the value from the stack + kind = code & 3 + code = int((code - self.CODE_FROMSTACK) >> 2) + if code_inputarg: + code = ~code + code_inputarg = False + if kind == self.DESCR_FLOAT: assert 0, "not implemented yet" else: - start = spp_loc + get_spp_offset(stack_location) + start = spp + get_spp_offset(int(code)) value = rffi.cast(rffi.LONGP, start)[0] - else: # REG_LOC - reg = ord(enc[i]) - if group == self.FLOAT_TYPE: + else: + # 'code' identifies a register: load its value + kind = code & 3 + if kind == self.DESCR_SPECIAL: + if code == self.CODE_HOLE: + num += 1 + continue + if code == self.CODE_INPUTARG: + code_inputarg = True + continue + assert code == self.CODE_STOP + break + code >>= 2 + if kind == self.DESCR_FLOAT: assert 0, "not implemented yet" else: - regindex = r.get_managed_reg_index(reg) - if IS_PPC_32: - value = decode32(regs, regindex * WORD) - else: - value = decode64(regs, regindex * WORD) - - if group == self.INT_TYPE: - self.fail_boxes_int.setitem(fail_index, value) - elif group == self.REF_TYPE: - tgt = self.fail_boxes_ptr.get_addr_for_num(fail_index) + reg_index = r.get_managed_reg_index(code) + value = registers[reg_index] + # store the loaded value into fail_boxes_ + if kind == self.DESCR_FLOAT: + assert 0, "not implemented yet" + else: + if kind == self.DESCR_INT: + tgt = self.fail_boxes_int.get_addr_for_num(num) + elif kind == self.DESCR_REF: + assert (value & 3) == 0, "misaligned pointer" + tgt = self.fail_boxes_ptr.get_addr_for_num(num) + else: + assert 0, "bogus kind" rffi.cast(rffi.LONGP, tgt)[0] = value + num += 1 + self.fail_boxes_count = num + fail_index = rffi.cast(rffi.INTP, 
bytecode)[0] + fail_index = rffi.cast(lltype.Signed, fail_index) + return fail_index + + def decode_inputargs(self, code): + descr_to_box_type = [REF, INT, FLOAT] + bytecode = rffi.cast(rffi.UCHARP, code) + arglocs = [] + code_inputarg = False + while 1: + # decode the next instruction from the bytecode + code = rffi.cast(lltype.Signed, bytecode[0]) + bytecode = rffi.ptradd(bytecode, 1) + if code >= self.CODE_FROMSTACK: + # 'code' identifies a stack location + if code > 0x7F: + shift = 7 + code &= 0x7F + while True: + nextcode = rffi.cast(lltype.Signed, bytecode[0]) + bytecode = rffi.ptradd(bytecode, 1) + code |= (nextcode & 0x7F) << shift + shift += 7 + if nextcode <= 0x7F: + break + kind = code & 3 + code = (code - self.CODE_FROMSTACK) >> 2 + if code_inputarg: + code = ~code + code_inputarg = False + loc = PPCFrameManager.frame_pos(code, descr_to_box_type[kind]) + elif code == self.CODE_STOP: + break + elif code == self.CODE_HOLE: + continue + elif code == self.CODE_INPUTARG: + code_inputarg = True + continue else: - assert 0, 'unknown type' - - assert enc[i] == self.END_OF_LOCS - descr = decode32(enc, i+1) - self.fail_boxes_count = fail_index - self.fail_force_index = spp_loc - assert isinstance(descr, int) - return descr - - def decode_inputargs(self, enc): - locs = [] - j = 0 - while enc[j] != self.END_OF_LOCS: - res = enc[j] - if res == self.EMPTY_LOC: - j += 1 - continue - - assert res in [self.INT_TYPE, self.REF_TYPE],\ - 'location type is not supported' - res_type = res - j += 1 - res = enc[j] - if res == self.IMM_LOC: - # XXX decode imm if necessary - assert 0, 'Imm Locations are not supported' - elif res == self.STACK_LOC: - if res_type == self.FLOAT_TYPE: - t = FLOAT - elif res_type == self.INT_TYPE: - t = INT - else: - t = REF - assert t != FLOAT - stack_loc = decode32(enc, j+1) - loc = PPCFrameManager.frame_pos(stack_loc, t) - j += 4 - else: # REG_LOC - if res_type == self.FLOAT_TYPE: + # 'code' identifies a register + kind = code & 3 + code >>= 2 + if kind == self.DESCR_FLOAT: assert 0, "not implemented yet" else: - reg = ord(res) - loc = r.MANAGED_REGS[r.get_managed_reg_index(reg)] - j += 1 - locs.append(loc) - return locs + #loc = r.all_regs[code] + assert (r.ALL_REGS[code] is + r.MANAGED_REGS[r.get_managed_reg_index(code)]) + loc = r.ALL_REGS[code] + arglocs.append(loc) + return arglocs[:] def _build_malloc_slowpath(self): mc = PPCBuilder() @@ -505,10 +521,9 @@ def assemble_bridge(self, faildescr, inputargs, operations, looptoken, log): operations = self.setup(looptoken, operations) assert isinstance(faildescr, AbstractFailDescr) - code = faildescr._failure_recovery_code - enc = rffi.cast(rffi.CCHARP, code) + code = self._find_failure_recovery_bytecode(faildescr) frame_depth = faildescr._ppc_frame_depth - arglocs = self.decode_inputargs(enc) + arglocs = self.decode_inputargs(code) if not we_are_translated(): assert len(inputargs) == len(arglocs) regalloc = Regalloc(assembler=self, frame_manager=PPCFrameManager()) @@ -550,56 +565,65 @@ mc.prepare_insts_blocks() mc.copy_to_raw_memory(rawstart + sp_patch_location) - # For an explanation of the encoding, see - # backend/arm/assembler.py - def gen_descr_encoding(self, descr, args, arglocs): - minsize = (len(arglocs) - 1) * 6 + 5 - memsize = self.align(minsize) - memaddr = self.datablockwrapper.malloc_aligned(memsize, alignment=1) - mem = rffi.cast(rffi.CArrayPtr(lltype.Char), memaddr) - i = 0 - j = 0 - while i < len(args): - if arglocs[i+1]: - arg = args[i] - loc = arglocs[i+1] - if arg.type == INT: - mem[j] = self.INT_TYPE - 
j += 1 - elif arg.type == REF: - mem[j] = self.REF_TYPE - j += 1 + DESCR_REF = 0x00 + DESCR_INT = 0x01 + DESCR_FLOAT = 0x02 + DESCR_SPECIAL = 0x03 + CODE_FROMSTACK = 128 + CODE_STOP = 0 | DESCR_SPECIAL + CODE_HOLE = 4 | DESCR_SPECIAL + CODE_INPUTARG = 8 | DESCR_SPECIAL + + def gen_descr_encoding(self, descr, failargs, locs): + assert self.mc is not None + buf = [] + for i in range(len(failargs)): + arg = failargs[i] + if arg is not None: + if arg.type == REF: + kind = self.DESCR_REF + elif arg.type == INT: + kind = self.DESCR_INT elif arg.type == FLOAT: - assert 0, "not implemented yet" + assert 0, "not implemented" else: - assert 0, 'unknown type' + raise AssertionError("bogus kind") + loc = locs[i] + if loc.is_stack(): + pos = loc.position + if pos < 0: + buf.append(self.CODE_INPUTARG) + pos = ~pos + n = self.CODE_FROMSTACK // 4 + pos + else: + assert loc.is_reg() or loc.is_vfp_reg() + n = loc.value + n = kind + 4 * n + while n > 0x7F: + buf.append((n & 0x7F) | 0x80) + n >>= 7 + else: + n = self.CODE_HOLE + buf.append(n) + buf.append(self.CODE_STOP) - if loc.is_reg() or loc.is_vfp_reg(): - mem[j] = chr(loc.value) - j += 1 - elif loc.is_imm() or loc.is_imm_float(): - assert (arg.type == INT or arg.type == REF - or arg.type == FLOAT) - mem[j] = self.IMM_LOC - if IS_PPC_32: - encode32(mem, j+1, loc.getint()) - j += 5 - else: - encode64(mem, j+1, loc.getint()) - j += 9 - else: - mem[j] = self.STACK_LOC - encode32(mem, j+1, loc.position) - j += 5 - else: - mem[j] = self.EMPTY_LOC - j += 1 - i += 1 + fdescr = self.cpu.get_fail_descr_number(descr) - mem[j] = chr(0xFF) - n = self.cpu.get_fail_descr_number(descr) - encode32(mem, j+1, n) - return memaddr + buf.append((fdescr >> 24) & 0xFF) + buf.append((fdescr >> 16) & 0xFF) + buf.append((fdescr >> 8) & 0xFF) + buf.append( fdescr & 0xFF) + + lenbuf = len(buf) + # XXX fix memory leaks + enc_arr = lltype.malloc(rffi.CArray(rffi.CHAR), lenbuf, + flavor='raw', track_allocation=False) + enc_ptr = rffi.cast(lltype.Signed, enc_arr) + for i, byte in enumerate(buf): + enc_arr[i] = chr(byte) + # assert that the fail_boxes lists are big enough + assert len(failargs) <= self.fail_boxes_int.SIZE + return enc_ptr def align(self, size): while size % 8 != 0: @@ -700,6 +724,9 @@ return frame_depth + def _find_failure_recovery_bytecode(self, faildescr): + return faildescr._failure_recovery_code_adr + def fixup_target_tokens(self, rawstart): for targettoken in self.target_tokens_currently_compiling: targettoken._ppc_loop_code += rawstart @@ -726,29 +753,29 @@ pos = self.mc.currpos() tok.pos_recovery_stub = pos - memaddr = self.gen_exit_stub(descr, tok.failargs, + encoding_adr = self.gen_exit_stub(descr, tok.failargs, tok.faillocs, save_exc=tok.save_exc) + # store info on the descr descr._ppc_frame_depth = tok.faillocs[0].getint() - descr._failure_recovery_code = memaddr + descr._failure_recovery_code_adr = encoding_adr descr._ppc_guard_pos = pos def gen_exit_stub(self, descr, args, arglocs, save_exc=False): - memaddr = self.gen_descr_encoding(descr, args, arglocs) - - # store addr in force index field - self.mc.alloc_scratch_reg() - self.mc.load_imm(r.SCRATCH, memaddr) - self.mc.store(r.SCRATCH.value, r.SPP.value, self.ENCODING_AREA) - self.mc.free_scratch_reg() - if save_exc: path = self._leave_jitted_hook_save_exc else: path = self._leave_jitted_hook + + # write state encoding to memory and store the address of the beginning + # of the encoding in the FORCE INDEX slot + encoding_adr = self.gen_descr_encoding(descr, args, arglocs[1:]) + with scratch_reg(self.mc): + 
self.mc.load_imm(r.SCRATCH, encoding_adr) + self.mc.store(r.SCRATCH.value, r.SPP.value, self.ENCODING_AREA) self.mc.b_abs(path) - return memaddr + return encoding_adr def process_pending_guards(self, block_start): clt = self.current_clt @@ -775,6 +802,7 @@ mc.b_abs(bridge_addr) mc.prepare_insts_blocks() mc.copy_to_raw_memory(patch_addr) + faildescr._failure_recovery_code_ofs = 0 def get_asmmemmgr_blocks(self, looptoken): clt = looptoken.compiled_loop_token @@ -1023,3 +1051,6 @@ AssemblerPPC.operations = operations AssemblerPPC.operations_with_guard = operations_with_guard + +class BridgeAlreadyCompiled(Exception): + pass diff --git a/pypy/jit/backend/ppc/runner.py b/pypy/jit/backend/ppc/runner.py --- a/pypy/jit/backend/ppc/runner.py +++ b/pypy/jit/backend/ppc/runner.py @@ -91,6 +91,11 @@ adr = llmemory.cast_ptr_to_adr(x) return PPC_64_CPU.cast_adr_to_int(adr) + # XXX find out how big FP registers are on PPC32 + all_null_registers = lltype.malloc(rffi.LONGP.TO, + len(r.MANAGED_REGS), + flavor='raw', zero=True, immortal=True) + def force(self, spilling_pointer): TP = rffi.CArrayPtr(lltype.Signed) @@ -101,9 +106,13 @@ faildescr = self.get_fail_descr_from_number(fail_index) rffi.cast(TP, addr_of_force_index)[0] = ~fail_index + bytecode = self.asm._find_failure_recovery_bytecode(faildescr) + addr_all_null_registers = rffi.cast(rffi.LONG, self.all_null_registers) # start of "no gc operation!" block fail_index_2 = self.asm.decode_registers_and_descr( - faildescr._failure_recovery_code, spilling_pointer) + bytecode, + spilling_pointer, + self.all_null_registers) self.asm.leave_jitted_hook() # end of "no gc operation!" block assert fail_index == fail_index_2 From noreply at buildbot.pypy.org Tue Feb 28 14:08:06 2012 From: noreply at buildbot.pypy.org (hager) Date: Tue, 28 Feb 2012 14:08:06 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: merge Message-ID: <20120228130806.7DBA88203C@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r52973:52ece45399fc Date: 2012-02-27 20:00 +0100 http://bitbucket.org/pypy/pypy/changeset/52ece45399fc/ Log: merge diff --git a/pypy/jit/backend/llsupport/gc.py b/pypy/jit/backend/llsupport/gc.py --- a/pypy/jit/backend/llsupport/gc.py +++ b/pypy/jit/backend/llsupport/gc.py @@ -523,7 +523,7 @@ return [] def add_frame_offset(self, shape, offset): - assert offset != 0 + assert offset & 3 == 0 shape.append(offset) def add_callee_save_reg(self, shape, register): diff --git a/pypy/jit/backend/ppc/codebuilder.py b/pypy/jit/backend/ppc/codebuilder.py --- a/pypy/jit/backend/ppc/codebuilder.py +++ b/pypy/jit/backend/ppc/codebuilder.py @@ -962,6 +962,11 @@ PPCAssembler.__init__(self) self.init_block_builder() self.r0_in_use = r0_in_use + self.ops_offset = {} + + def mark_op(self, op): + pos = self.get_relative_pos() + self.ops_offset[op] = pos def check(self, desc, v, *args): desc.__get__(self)(*args) @@ -994,13 +999,12 @@ self.ldx(rD.value, 0, rD.value) def store_reg(self, source_reg, addr): - self.alloc_scratch_reg() - self.load_imm(r.SCRATCH, addr) - if IS_PPC_32: - self.stwx(source_reg.value, 0, r.SCRATCH.value) - else: - self.stdx(source_reg.value, 0, r.SCRATCH.value) - self.free_scratch_reg() + with scratch_reg(self): + self.load_imm(r.SCRATCH, addr) + if IS_PPC_32: + self.stwx(source_reg.value, 0, r.SCRATCH.value) + else: + self.stdx(source_reg.value, 0, r.SCRATCH.value) def b_offset(self, target): curpos = self.currpos() @@ -1020,17 +1024,15 @@ BI = condition[0] BO = condition[1] - self.alloc_scratch_reg() - self.load_imm(r.SCRATCH, addr) 
- self.mtctr(r.SCRATCH.value) - self.free_scratch_reg() + with scratch_reg(self): + self.load_imm(r.SCRATCH, addr) + self.mtctr(r.SCRATCH.value) self.bcctr(BO, BI) def b_abs(self, address, trap=False): - self.alloc_scratch_reg() - self.load_imm(r.SCRATCH, address) - self.mtctr(r.SCRATCH.value) - self.free_scratch_reg() + with scratch_reg(self): + self.load_imm(r.SCRATCH, address) + self.mtctr(r.SCRATCH.value) if trap: self.trap() self.bctr() @@ -1044,17 +1046,16 @@ def call(self, address): """ do a call to an absolute address """ - self.alloc_scratch_reg() - if IS_PPC_32: - self.load_imm(r.SCRATCH, address) - else: - self.store(r.TOC.value, r.SP.value, 5 * WORD) - self.load_imm(r.r11, address) - self.load(r.SCRATCH.value, r.r11.value, 0) - self.load(r.r2.value, r.r11.value, WORD) - self.load(r.r11.value, r.r11.value, 2 * WORD) - self.mtctr(r.SCRATCH.value) - self.free_scratch_reg() + with scratch_reg(self): + if IS_PPC_32: + self.load_imm(r.SCRATCH, address) + else: + self.store(r.TOC.value, r.SP.value, 5 * WORD) + self.load_imm(r.r11, address) + self.load(r.SCRATCH.value, r.r11.value, 0) + self.load(r.r2.value, r.r11.value, WORD) + self.load(r.r11.value, r.r11.value, 2 * WORD) + self.mtctr(r.SCRATCH.value) self.bctrl() if IS_PPC_64: diff --git a/pypy/jit/backend/ppc/helper/regalloc.py b/pypy/jit/backend/ppc/helper/regalloc.py --- a/pypy/jit/backend/ppc/helper/regalloc.py +++ b/pypy/jit/backend/ppc/helper/regalloc.py @@ -76,7 +76,7 @@ def prepare_binary_int_op(): def f(self, op): - boxes = list(op.getarglist()) + boxes = op.getarglist() b0, b1 = boxes reg1 = self._ensure_value_is_boxed(b0, forbidden_vars=boxes) diff --git a/pypy/jit/backend/ppc/opassembler.py b/pypy/jit/backend/ppc/opassembler.py --- a/pypy/jit/backend/ppc/opassembler.py +++ b/pypy/jit/backend/ppc/opassembler.py @@ -12,7 +12,8 @@ from pypy.jit.backend.ppc.helper.assembler import (count_reg_args, Saved_Volatiles) from pypy.jit.backend.ppc.jump import remap_frame_layout -from pypy.jit.backend.ppc.codebuilder import OverwritingBuilder +from pypy.jit.backend.ppc.codebuilder import (OverwritingBuilder, scratch_reg, + PPCBuilder) from pypy.jit.backend.ppc.regalloc import TempPtr, TempInt from pypy.jit.backend.llsupport import symbolic from pypy.rpython.lltypesystem import rstr, rffi, lltype @@ -210,12 +211,11 @@ # instead of XER could be more efficient def _emit_ovf_guard(self, op, arglocs, cond): # move content of XER to GPR - self.mc.alloc_scratch_reg() - self.mc.mfspr(r.SCRATCH.value, 1) - # shift and mask to get comparison result - self.mc.rlwinm(r.SCRATCH.value, r.SCRATCH.value, 1, 0, 0) - self.mc.cmp_op(0, r.SCRATCH.value, 0, imm=True) - self.mc.free_scratch_reg() + with scratch_reg(self.mc): + self.mc.mfspr(r.SCRATCH.value, 1) + # shift and mask to get comparison result + self.mc.rlwinm(r.SCRATCH.value, r.SCRATCH.value, 1, 0, 0) + self.mc.cmp_op(0, r.SCRATCH.value, 0, imm=True) self._emit_guard(op, arglocs, cond) def emit_guard_no_overflow(self, op, arglocs, regalloc): @@ -244,14 +244,13 @@ def _cmp_guard_class(self, op, locs, regalloc): offset = locs[2] if offset is not None: - self.mc.alloc_scratch_reg() - if offset.is_imm(): - self.mc.load(r.SCRATCH.value, locs[0].value, offset.value) - else: - assert offset.is_reg() - self.mc.loadx(r.SCRATCH.value, locs[0].value, offset.value) - self.mc.cmp_op(0, r.SCRATCH.value, locs[1].value) - self.mc.free_scratch_reg() + with scratch_reg(self.mc): + if offset.is_imm(): + self.mc.load(r.SCRATCH.value, locs[0].value, offset.value) + else: + assert offset.is_reg() + 
self.mc.loadx(r.SCRATCH.value, locs[0].value, offset.value) + self.mc.cmp_op(0, r.SCRATCH.value, locs[1].value) else: assert 0, "not implemented yet" self._emit_guard(op, locs[3:], c.NE) @@ -288,10 +287,9 @@ adr = self.fail_boxes_int.get_addr_for_num(i) else: assert 0 - self.mc.alloc_scratch_reg() - self.mc.load_imm(r.SCRATCH, adr) - self.mc.storex(loc.value, 0, r.SCRATCH.value) - self.mc.free_scratch_reg() + with scratch_reg(self.mc): + self.mc.load_imm(r.SCRATCH, adr) + self.mc.storex(loc.value, 0, r.SCRATCH.value) elif loc.is_vfp_reg(): assert box.type == FLOAT assert 0, "not implemented yet" @@ -305,13 +303,12 @@ adr = self.fail_boxes_int.get_addr_for_num(i) else: assert 0 - self.mc.alloc_scratch_reg() - self.mov_loc_loc(loc, r.SCRATCH) - # store content of r5 temporary in ENCODING AREA - self.mc.store(r.r5.value, r.SPP.value, 0) - self.mc.load_imm(r.r5, adr) - self.mc.store(r.SCRATCH.value, r.r5.value, 0) - self.mc.free_scratch_reg() + with scratch_reg(self.mc): + self.mov_loc_loc(loc, r.SCRATCH) + # store content of r5 temporary in ENCODING AREA + self.mc.store(r.r5.value, r.SPP.value, 0) + self.mc.load_imm(r.r5, adr) + self.mc.store(r.SCRATCH.value, r.r5.value, 0) # restore r5 self.mc.load(r.r5.value, r.SPP.value, 0) else: @@ -362,10 +359,9 @@ failargs = arglocs[5:] self.mc.load_imm(loc1, pos_exception.value) - self.mc.alloc_scratch_reg() - self.mc.load(r.SCRATCH.value, loc1.value, 0) - self.mc.cmp_op(0, r.SCRATCH.value, loc.value) - self.mc.free_scratch_reg() + with scratch_reg(self.mc): + self.mc.load(r.SCRATCH.value, loc1.value, 0) + self.mc.cmp_op(0, r.SCRATCH.value, loc.value) self._emit_guard(op, failargs, c.NE, save_exc=True) self.mc.load_imm(loc, pos_exc_value.value) @@ -373,11 +369,10 @@ if resloc: self.mc.load(resloc.value, loc.value, 0) - self.mc.alloc_scratch_reg() - self.mc.load_imm(r.SCRATCH, 0) - self.mc.store(r.SCRATCH.value, loc.value, 0) - self.mc.store(r.SCRATCH.value, loc1.value, 0) - self.mc.free_scratch_reg() + with scratch_reg(self.mc): + self.mc.load_imm(r.SCRATCH, 0) + self.mc.store(r.SCRATCH.value, loc.value, 0) + self.mc.store(r.SCRATCH.value, loc1.value, 0) def emit_call(self, op, args, regalloc, force_index=-1): adr = args[0].value @@ -426,13 +421,12 @@ param_offset = ((BACKCHAIN_SIZE + MAX_REG_PARAMS) * WORD) # space for first 8 parameters - self.mc.alloc_scratch_reg() - for i, arg in enumerate(stack_args): - offset = param_offset + i * WORD - if arg is not None: - self.regalloc_mov(regalloc.loc(arg), r.SCRATCH) - self.mc.store(r.SCRATCH.value, r.SP.value, offset) - self.mc.free_scratch_reg() + with scratch_reg(self.mc): + for i, arg in enumerate(stack_args): + offset = param_offset + i * WORD + if arg is not None: + self.regalloc_mov(regalloc.loc(arg), r.SCRATCH) + self.mc.store(r.SCRATCH.value, r.SP.value, offset) # collect variables that need to go in registers # and the registers they will be stored in @@ -542,31 +536,31 @@ def emit_getinteriorfield_gc(self, op, arglocs, regalloc): (base_loc, index_loc, res_loc, ofs_loc, ofs, itemsize, fieldsize) = arglocs - self.mc.alloc_scratch_reg() - self.mc.load_imm(r.SCRATCH, itemsize.value) - self.mc.mullw(r.SCRATCH.value, index_loc.value, r.SCRATCH.value) - if ofs.value > 0: - if ofs_loc.is_imm(): - self.mc.addic(r.SCRATCH.value, r.SCRATCH.value, ofs_loc.value) + with scratch_reg(self.mc): + self.mc.load_imm(r.SCRATCH, itemsize.value) + self.mc.mullw(r.SCRATCH.value, index_loc.value, r.SCRATCH.value) + if ofs.value > 0: + if ofs_loc.is_imm(): + self.mc.addic(r.SCRATCH.value, r.SCRATCH.value, ofs_loc.value) + 
else: + self.mc.add(r.SCRATCH.value, r.SCRATCH.value, ofs_loc.value) + + if fieldsize.value == 8: + self.mc.ldx(res_loc.value, base_loc.value, r.SCRATCH.value) + elif fieldsize.value == 4: + self.mc.lwzx(res_loc.value, base_loc.value, r.SCRATCH.value) + elif fieldsize.value == 2: + self.mc.lhzx(res_loc.value, base_loc.value, r.SCRATCH.value) + elif fieldsize.value == 1: + self.mc.lbzx(res_loc.value, base_loc.value, r.SCRATCH.value) else: - self.mc.add(r.SCRATCH.value, r.SCRATCH.value, ofs_loc.value) - - if fieldsize.value == 8: - self.mc.ldx(res_loc.value, base_loc.value, r.SCRATCH.value) - elif fieldsize.value == 4: - self.mc.lwzx(res_loc.value, base_loc.value, r.SCRATCH.value) - elif fieldsize.value == 2: - self.mc.lhzx(res_loc.value, base_loc.value, r.SCRATCH.value) - elif fieldsize.value == 1: - self.mc.lbzx(res_loc.value, base_loc.value, r.SCRATCH.value) - else: - assert 0 - self.mc.free_scratch_reg() + assert 0 #XXX Hack, Hack, Hack if not we_are_translated(): signed = op.getdescr().fielddescr.is_field_signed() self._ensure_result_bit_extension(res_loc, fieldsize.value, signed) + emit_getinteriorfield_raw = emit_getinteriorfield_gc def emit_setinteriorfield_gc(self, op, arglocs, regalloc): (base_loc, index_loc, value_loc, @@ -588,7 +582,7 @@ self.mc.stbx(value_loc.value, base_loc.value, r.SCRATCH.value) else: assert 0 - + emit_setinteriorfield_raw = emit_setinteriorfield_gc class ArrayOpAssembler(object): @@ -752,13 +746,12 @@ bytes_loc = regalloc.force_allocate_reg(bytes_box, forbidden_vars) scale = self._get_unicode_item_scale() assert length_loc.is_reg() - self.mc.alloc_scratch_reg() - self.mc.load_imm(r.SCRATCH, 1 << scale) - if IS_PPC_32: - self.mc.mullw(bytes_loc.value, r.SCRATCH.value, length_loc.value) - else: - self.mc.mulld(bytes_loc.value, r.SCRATCH.value, length_loc.value) - self.mc.free_scratch_reg() + with scratch_reg(self.mc): + self.mc.load_imm(r.SCRATCH, 1 << scale) + if IS_PPC_32: + self.mc.mullw(bytes_loc.value, r.SCRATCH.value, length_loc.value) + else: + self.mc.mulld(bytes_loc.value, r.SCRATCH.value, length_loc.value) length_box = bytes_box length_loc = bytes_loc # call memcpy() @@ -873,15 +866,15 @@ def set_vtable(self, box, vtable): if self.cpu.vtable_offset is not None: adr = rffi.cast(lltype.Signed, vtable) - self.mc.alloc_scratch_reg() - self.mc.load_imm(r.SCRATCH, adr) - self.mc.store(r.SCRATCH.value, r.RES.value, self.cpu.vtable_offset) - self.mc.free_scratch_reg() + with scratch_reg(self.mc): + self.mc.load_imm(r.SCRATCH, adr) + self.mc.store(r.SCRATCH.value, r.RES.value, self.cpu.vtable_offset) def emit_debug_merge_point(self, op, arglocs, regalloc): pass emit_jit_debug = emit_debug_merge_point + emit_keepalive = emit_debug_merge_point def emit_cond_call_gc_wb(self, op, arglocs, regalloc): # Write code equivalent to write_barrier() in the GC: it checks @@ -906,26 +899,25 @@ raise AssertionError(opnum) loc_base = arglocs[0] - self.mc.alloc_scratch_reg() - self.mc.load(r.SCRATCH.value, loc_base.value, 0) + with scratch_reg(self.mc): + self.mc.load(r.SCRATCH.value, loc_base.value, 0) - # get the position of the bit we want to test - bitpos = descr.jit_wb_if_flag_bitpos + # get the position of the bit we want to test + bitpos = descr.jit_wb_if_flag_bitpos - if IS_PPC_32: - # put this bit to the rightmost bitposition of r0 - if bitpos > 0: - self.mc.rlwinm(r.SCRATCH.value, r.SCRATCH.value, - 32 - bitpos, 31, 31) - # test whether this bit is set - self.mc.cmpwi(0, r.SCRATCH.value, 1) - else: - if bitpos > 0: - self.mc.rldicl(r.SCRATCH.value, r.SCRATCH.value, 
- 64 - bitpos, 63) - # test whether this bit is set - self.mc.cmpdi(0, r.SCRATCH.value, 1) - self.mc.free_scratch_reg() + if IS_PPC_32: + # put this bit to the rightmost bitposition of r0 + if bitpos > 0: + self.mc.rlwinm(r.SCRATCH.value, r.SCRATCH.value, + 32 - bitpos, 31, 31) + # test whether this bit is set + self.mc.cmpwi(0, r.SCRATCH.value, 1) + else: + if bitpos > 0: + self.mc.rldicl(r.SCRATCH.value, r.SCRATCH.value, + 64 - bitpos, 63) + # test whether this bit is set + self.mc.cmpdi(0, r.SCRATCH.value, 1) jz_location = self.mc.currpos() self.mc.nop() @@ -947,7 +939,7 @@ # patch the JZ above offset = self.mc.currpos() - jz_location pmc = OverwritingBuilder(self.mc, jz_location, 1) - pmc.bc(4, 2, offset) # jump if the two values are equal + pmc.bc(12, 2, offset) # jump if the two values are equal pmc.overwrite() emit_cond_call_gc_wb_array = emit_cond_call_gc_wb @@ -989,10 +981,9 @@ # check value resloc = regalloc.try_allocate_reg(resbox) assert resloc is r.RES - self.mc.alloc_scratch_reg() - self.mc.load_imm(r.SCRATCH, value) - self.mc.cmp_op(0, resloc.value, r.SCRATCH.value) - self.mc.free_scratch_reg() + with scratch_reg(self.mc): + self.mc.load_imm(r.SCRATCH, value) + self.mc.cmp_op(0, resloc.value, r.SCRATCH.value) regalloc.possibly_free_var(resbox) fast_jmp_pos = self.mc.currpos() @@ -1035,11 +1026,10 @@ assert isinstance(fielddescr, FieldDescr) ofs = fielddescr.offset resloc = regalloc.force_allocate_reg(resbox) - self.mc.alloc_scratch_reg() - self.mov_loc_loc(arglocs[1], r.SCRATCH) - self.mc.li(resloc.value, 0) - self.mc.storex(resloc.value, 0, r.SCRATCH.value) - self.mc.free_scratch_reg() + with scratch_reg(self.mc): + self.mov_loc_loc(arglocs[1], r.SCRATCH) + self.mc.li(resloc.value, 0) + self.mc.storex(resloc.value, 0, r.SCRATCH.value) regalloc.possibly_free_var(resbox) if op.result is not None: @@ -1055,13 +1045,12 @@ raise AssertionError(kind) resloc = regalloc.force_allocate_reg(op.result) regalloc.possibly_free_var(resbox) - self.mc.alloc_scratch_reg() - self.mc.load_imm(r.SCRATCH, adr) - if op.result.type == FLOAT: - assert 0, "not implemented yet" - else: - self.mc.loadx(resloc.value, 0, r.SCRATCH.value) - self.mc.free_scratch_reg() + with scratch_reg(self.mc): + self.mc.load_imm(r.SCRATCH, adr) + if op.result.type == FLOAT: + assert 0, "not implemented yet" + else: + self.mc.loadx(resloc.value, 0, r.SCRATCH.value) # merge point offset = self.mc.currpos() - jmp_pos @@ -1070,10 +1059,9 @@ pmc.b(offset) pmc.overwrite() - self.mc.alloc_scratch_reg() - self.mc.load(r.SCRATCH.value, r.SPP.value, 0) - self.mc.cmp_op(0, r.SCRATCH.value, 0, imm=True) - self.mc.free_scratch_reg() + with scratch_reg(self.mc): + self.mc.load(r.SCRATCH.value, r.SPP.value, 0) + self.mc.cmp_op(0, r.SCRATCH.value, 0, imm=True) self._emit_guard(guard_op, regalloc._prepare_guard(guard_op), c.LT) @@ -1102,10 +1090,9 @@ def emit_guard_call_may_force(self, op, guard_op, arglocs, regalloc): ENCODING_AREA = len(r.MANAGED_REGS) * WORD - self.mc.alloc_scratch_reg() - self.mc.load(r.SCRATCH.value, r.SPP.value, ENCODING_AREA) - self.mc.cmp_op(0, r.SCRATCH.value, 0, imm=True) - self.mc.free_scratch_reg() + with scratch_reg(self.mc): + self.mc.load(r.SCRATCH.value, r.SPP.value, ENCODING_AREA) + self.mc.cmp_op(0, r.SCRATCH.value, 0, imm=True) self._emit_guard(guard_op, arglocs, c.LT, save_exc=True) emit_guard_call_release_gil = emit_guard_call_may_force diff --git a/pypy/jit/backend/ppc/ppc_assembler.py b/pypy/jit/backend/ppc/ppc_assembler.py --- a/pypy/jit/backend/ppc/ppc_assembler.py +++ 
b/pypy/jit/backend/ppc/ppc_assembler.py @@ -3,7 +3,7 @@ from pypy.jit.backend.ppc.ppc_form import PPCForm as Form from pypy.jit.backend.ppc.ppc_field import ppc_fields from pypy.jit.backend.ppc.regalloc import (TempInt, PPCFrameManager, - Regalloc) + Regalloc, PPCRegisterManager) from pypy.jit.backend.ppc.assembler import Assembler from pypy.jit.backend.ppc.opassembler import OpAssembler from pypy.jit.backend.ppc.symbol_lookup import lookup @@ -37,15 +37,23 @@ from pypy.jit.metainterp.history import (BoxInt, ConstInt, ConstPtr, ConstFloat, Box, INT, REF, FLOAT) from pypy.jit.backend.x86.support import values_array +from pypy.rlib.debug import (debug_print, debug_start, debug_stop, + have_debug_prints) from pypy.rlib import rgc from pypy.rpython.annlowlevel import llhelper from pypy.rlib.objectmodel import we_are_translated from pypy.rpython.lltypesystem.lloperation import llop from pypy.jit.backend.ppc.locations import StackLocation, get_spp_offset +from pypy.rlib.jit import AsmInfo memcpy_fn = rffi.llexternal('memcpy', [llmemory.Address, llmemory.Address, rffi.SIZE_T], lltype.Void, sandboxsafe=True, _nowrapper=True) + +DEBUG_COUNTER = lltype.Struct('DEBUG_COUNTER', ('i', lltype.Signed), + ('type', lltype.Char), # 'b'ridge, 'l'abel or + # 'e'ntry point + ('number', lltype.Signed)) def hi(w): return w >> 16 @@ -85,6 +93,7 @@ EMPTY_LOC = '\xFE' END_OF_LOCS = '\xFF' + FORCE_INDEX_AREA = len(r.MANAGED_REGS) * WORD ENCODING_AREA = len(r.MANAGED_REGS) * WORD OFFSET_SPP_TO_GPR_SAVE_AREA = (FORCE_INDEX + FLOAT_INT_CONVERSION + ENCODING_AREA) @@ -108,6 +117,12 @@ self.max_stack_params = 0 self.propagate_exception_path = 0 self.setup_failure_recovery() + self._debug = False + self.loop_run_counters = [] + self.debug_counter_descr = cpu.fielddescrof(DEBUG_COUNTER, 'i') + + def set_debug(self, v): + self._debug = v def _save_nonvolatiles(self): """ save nonvolatile GPRs in GPR SAVE AREA @@ -298,24 +313,64 @@ def _build_malloc_slowpath(self): mc = PPCBuilder() - with Saved_Volatiles(mc): - # Values to compute size stored in r3 and r4 - mc.subf(r.r3.value, r.r3.value, r.r4.value) - addr = self.cpu.gc_ll_descr.get_malloc_slowpath_addr() - mc.call(addr) + if IS_PPC_64: + for _ in range(6): + mc.write32(0) + frame_size = (# add space for floats later + + BACKCHAIN_SIZE * WORD) + if IS_PPC_32: + mc.stwu(r.SP.value, r.SP.value, -frame_size) + mc.mflr(r.SCRATCH.value) + mc.stw(r.SCRATCH.value, r.SP.value, frame_size + WORD) + else: + mc.stdu(r.SP.value, r.SP.value, -frame_size) + mc.mflr(r.SCRATCH.value) + mc.std(r.SCRATCH.value, r.SP.value, frame_size + 2 * WORD) + # managed volatiles are saved below + if self.cpu.supports_floats: + assert 0, "make sure to save floats here" + # Values to compute size stored in r3 and r4 + mc.subf(r.r3.value, r.r3.value, r.r4.value) + addr = self.cpu.gc_ll_descr.get_malloc_slowpath_addr() + for reg, ofs in PPCRegisterManager.REGLOC_TO_COPY_AREA_OFS.items(): + mc.store(reg.value, r.SPP.value, ofs) + mc.call(addr) + for reg, ofs in PPCRegisterManager.REGLOC_TO_COPY_AREA_OFS.items(): + mc.load(reg.value, r.SPP.value, ofs) mc.cmp_op(0, r.r3.value, 0, imm=True) jmp_pos = mc.currpos() mc.nop() + nursery_free_adr = self.cpu.gc_ll_descr.get_nursery_free_addr() mc.load_imm(r.r4, nursery_free_adr) mc.load(r.r4.value, r.r4.value, 0) + + if IS_PPC_32: + ofs = WORD + else: + ofs = WORD * 2 + mc.load(r.SCRATCH.value, r.SP.value, frame_size + ofs) + mc.mtlr(r.SCRATCH.value) + mc.addi(r.SP.value, r.SP.value, frame_size) + mc.blr() + # if r3 == 0 we skip the return above and jump to the 
exception path + offset = mc.currpos() - jmp_pos pmc = OverwritingBuilder(mc, jmp_pos, 1) - pmc.bc(4, 2, jmp_pos) # jump if the two values are equal + pmc.bc(12, 2, offset) pmc.overwrite() + # restore the frame before leaving + mc.load(r.SCRATCH.value, r.SP.value, frame_size + ofs) + mc.mtlr(r.SCRATCH.value) + mc.addi(r.SP.value, r.SP.value, frame_size) mc.b_abs(self.propagate_exception_path) + + + mc.prepare_insts_blocks() rawstart = mc.materialize(self.cpu.asmmemmgr, []) + if IS_PPC_64: + self.write_64_bit_func_descr(rawstart, rawstart+3*WORD) self.malloc_slowpath = rawstart def _build_propagate_exception_path(self): @@ -362,8 +417,8 @@ addr = rffi.cast(lltype.Signed, decode_func_addr) # load parameters into parameter registers - mc.load(r.r3.value, r.SPP.value, self.ENCODING_AREA) # address of state encoding - mc.mr(r.r4.value, r.SPP.value) # load spilling pointer + mc.load(r.r3.value, r.SPP.value, self.FORCE_INDEX_AREA) # address of state encoding + mc.mr(r.r4.value, r.SPP.value) # load spilling pointer # # call decoding function mc.call(addr) @@ -430,6 +485,23 @@ self.exit_code_adr = self._gen_exit_path() self._leave_jitted_hook_save_exc = self._gen_leave_jitted_hook_code(True) self._leave_jitted_hook = self._gen_leave_jitted_hook_code(False) + debug_start('jit-backend-counts') + self.set_debug(have_debug_prints()) + debug_stop('jit-backend-counts') + + def finish_once(self): + if self._debug: + debug_start('jit-backend-counts') + for i in range(len(self.loop_run_counters)): + struct = self.loop_run_counters[i] + if struct.type == 'l': + prefix = 'TargetToken(%d)' % struct.number + elif struct.type == 'b': + prefix = 'bridge ' + str(struct.number) + else: + prefix = 'entry ' + str(struct.number) + debug_print(prefix + ':' + str(struct.i)) + debug_stop('jit-backend-counts') @staticmethod def _release_gil_shadowstack(): @@ -475,6 +547,7 @@ looptoken._ppc_loop_code = start_pos clt.frame_depth = clt.param_depth = -1 spilling_area, param_depth = self._assemble(operations, regalloc) + size_excluding_failure_stuff = self.mc.get_relative_pos() clt.frame_depth = spilling_area clt.param_depth = param_depth @@ -502,8 +575,12 @@ print 'Loop', inputargs, operations self.mc._dump_trace(loop_start, 'loop_%s.asm' % self.cpu.total_compiled_loops) print 'Done assembling loop with token %r' % looptoken + ops_offset = self.mc.ops_offset self._teardown() + # XXX 3rd arg may not be correct yet + return AsmInfo(ops_offset, real_start, size_excluding_failure_stuff) + def _assemble(self, operations, regalloc): regalloc.compute_hint_frame_locations(operations) self._walk_operations(operations, regalloc) @@ -531,7 +608,9 @@ sp_patch_location = self._prepare_sp_patch_position() + startpos = self.mc.get_relative_pos() spilling_area, param_depth = self._assemble(operations, regalloc) + codeendpos = self.mc.get_relative_pos() self.write_pending_failure_recoveries() @@ -553,8 +632,12 @@ print 'Loop', inputargs, operations self.mc._dump_trace(rawstart, 'bridge_%s.asm' % self.cpu.total_compiled_loops) print 'Done assembling bridge with token %r' % looptoken + + ops_offset = self.mc.ops_offset self._teardown() + return AsmInfo(ops_offset, startpos + rawstart, codeendpos - startpos) + def _patch_sp_offset(self, sp_patch_location, rawstart): mc = PPCBuilder() frame_depth = self.compute_frame_depth(self.current_clt.frame_depth, @@ -828,11 +911,10 @@ return # move immediate value to memory elif loc.is_stack(): - self.mc.alloc_scratch_reg() - offset = loc.value - self.mc.load_imm(r.SCRATCH, value) - 
self.mc.store(r.SCRATCH.value, r.SPP.value, offset) - self.mc.free_scratch_reg() + with scratch_reg(self.mc): + offset = loc.value + self.mc.load_imm(r.SCRATCH, value) + self.mc.store(r.SCRATCH.value, r.SPP.value, offset) return assert 0, "not supported location" elif prev_loc.is_stack(): @@ -845,10 +927,9 @@ # move in memory elif loc.is_stack(): target_offset = loc.value - self.mc.alloc_scratch_reg() - self.mc.load(r.SCRATCH.value, r.SPP.value, offset) - self.mc.store(r.SCRATCH.value, r.SPP.value, target_offset) - self.mc.free_scratch_reg() + with scratch_reg(self.mc): + self.mc.load(r.SCRATCH.value, r.SPP.value, offset) + self.mc.store(r.SCRATCH.value, r.SPP.value, target_offset) return assert 0, "not supported location" elif prev_loc.is_reg(): @@ -883,10 +964,7 @@ elif loc.is_reg(): self.mc.addi(r.SP.value, r.SP.value, -WORD) # decrease stack pointer # push value - if IS_PPC_32: - self.mc.stw(loc.value, r.SP.value, 0) - else: - self.mc.std(loc.value, r.SP.value, 0) + self.mc.store(loc.value, r.SP.value, 0) elif loc.is_imm(): assert 0, "not implemented yet" elif loc.is_imm_float(): @@ -946,17 +1024,17 @@ def malloc_cond(self, nursery_free_adr, nursery_top_adr, size): assert size & (WORD-1) == 0 # must be correctly aligned - self.mc.load_imm(r.RES.value, nursery_free_adr) + self.mc.load_imm(r.RES, nursery_free_adr) self.mc.load(r.RES.value, r.RES.value, 0) if _check_imm_arg(size): self.mc.addi(r.r4.value, r.RES.value, size) else: - self.mc.load_imm(r.r4.value, size) + self.mc.load_imm(r.r4, size) self.mc.add(r.r4.value, r.RES.value, r.r4.value) with scratch_reg(self.mc): - self.mc.gen_load_int(r.SCRATCH.value, nursery_top_adr) + self.mc.load_imm(r.SCRATCH, nursery_top_adr) self.mc.loadx(r.SCRATCH.value, 0, r.SCRATCH.value) self.mc.cmp_op(0, r.r4.value, r.SCRATCH.value, signed=False) @@ -977,10 +1055,11 @@ offset = self.mc.currpos() - fast_jmp_pos pmc = OverwritingBuilder(self.mc, fast_jmp_pos, 1) pmc.bc(4, 1, offset) # jump if LE (not GT) + pmc.overwrite() with scratch_reg(self.mc): - self.mc.load_imm(r.SCRATCH.value, nursery_free_adr) - self.mc.storex(r.r1.value, 0, r.SCRATCH.value) + self.mc.load_imm(r.SCRATCH, nursery_free_adr) + self.mc.storex(r.r4.value, 0, r.SCRATCH.value) def mark_gc_roots(self, force_index, use_copy_area=False): if force_index < 0: @@ -1010,10 +1089,9 @@ return 0 def _write_fail_index(self, fail_index): - self.mc.alloc_scratch_reg() - self.mc.load_imm(r.SCRATCH, fail_index) - self.mc.store(r.SCRATCH.value, r.SPP.value, self.ENCODING_AREA) - self.mc.free_scratch_reg() + with scratch_reg(self.mc): + self.mc.load_imm(r.SCRATCH, fail_index) + self.mc.store(r.SCRATCH.value, r.SPP.value, self.FORCE_INDEX_AREA) def load(self, loc, value): assert loc.is_reg() and value.is_imm() diff --git a/pypy/jit/backend/ppc/regalloc.py b/pypy/jit/backend/ppc/regalloc.py --- a/pypy/jit/backend/ppc/regalloc.py +++ b/pypy/jit/backend/ppc/regalloc.py @@ -50,37 +50,33 @@ save_around_call_regs = r.VOLATILES REGLOC_TO_COPY_AREA_OFS = { - r.r0: MY_COPY_OF_REGS + 0 * WORD, - r.r2: MY_COPY_OF_REGS + 1 * WORD, - r.r3: MY_COPY_OF_REGS + 2 * WORD, - r.r4: MY_COPY_OF_REGS + 3 * WORD, - r.r5: MY_COPY_OF_REGS + 4 * WORD, - r.r6: MY_COPY_OF_REGS + 5 * WORD, - r.r7: MY_COPY_OF_REGS + 6 * WORD, - r.r8: MY_COPY_OF_REGS + 7 * WORD, - r.r9: MY_COPY_OF_REGS + 8 * WORD, - r.r10: MY_COPY_OF_REGS + 9 * WORD, - r.r11: MY_COPY_OF_REGS + 10 * WORD, - r.r12: MY_COPY_OF_REGS + 11 * WORD, - r.r13: MY_COPY_OF_REGS + 12 * WORD, - r.r14: MY_COPY_OF_REGS + 13 * WORD, - r.r15: MY_COPY_OF_REGS + 14 * WORD, - r.r16: 
MY_COPY_OF_REGS + 15 * WORD, - r.r17: MY_COPY_OF_REGS + 16 * WORD, - r.r18: MY_COPY_OF_REGS + 17 * WORD, - r.r19: MY_COPY_OF_REGS + 18 * WORD, - r.r20: MY_COPY_OF_REGS + 19 * WORD, - r.r21: MY_COPY_OF_REGS + 20 * WORD, - r.r22: MY_COPY_OF_REGS + 21 * WORD, - r.r23: MY_COPY_OF_REGS + 22 * WORD, - r.r24: MY_COPY_OF_REGS + 23 * WORD, - r.r25: MY_COPY_OF_REGS + 24 * WORD, - r.r26: MY_COPY_OF_REGS + 25 * WORD, - r.r27: MY_COPY_OF_REGS + 26 * WORD, - r.r28: MY_COPY_OF_REGS + 27 * WORD, - r.r29: MY_COPY_OF_REGS + 28 * WORD, - r.r30: MY_COPY_OF_REGS + 29 * WORD, - r.r31: MY_COPY_OF_REGS + 30 * WORD, + r.r3: MY_COPY_OF_REGS + 0 * WORD, + r.r4: MY_COPY_OF_REGS + 1 * WORD, + r.r5: MY_COPY_OF_REGS + 2 * WORD, + r.r6: MY_COPY_OF_REGS + 3 * WORD, + r.r7: MY_COPY_OF_REGS + 4 * WORD, + r.r8: MY_COPY_OF_REGS + 5 * WORD, + r.r9: MY_COPY_OF_REGS + 6 * WORD, + r.r10: MY_COPY_OF_REGS + 7 * WORD, + r.r11: MY_COPY_OF_REGS + 8 * WORD, + r.r12: MY_COPY_OF_REGS + 9 * WORD, + r.r14: MY_COPY_OF_REGS + 10 * WORD, + r.r15: MY_COPY_OF_REGS + 11 * WORD, + r.r16: MY_COPY_OF_REGS + 12 * WORD, + r.r17: MY_COPY_OF_REGS + 13 * WORD, + r.r18: MY_COPY_OF_REGS + 14 * WORD, + r.r19: MY_COPY_OF_REGS + 15 * WORD, + r.r20: MY_COPY_OF_REGS + 16 * WORD, + r.r21: MY_COPY_OF_REGS + 17 * WORD, + r.r22: MY_COPY_OF_REGS + 18 * WORD, + r.r23: MY_COPY_OF_REGS + 19 * WORD, + r.r24: MY_COPY_OF_REGS + 20 * WORD, + r.r25: MY_COPY_OF_REGS + 21 * WORD, + r.r26: MY_COPY_OF_REGS + 22 * WORD, + r.r27: MY_COPY_OF_REGS + 23 * WORD, + r.r28: MY_COPY_OF_REGS + 24 * WORD, + r.r29: MY_COPY_OF_REGS + 25 * WORD, + r.r30: MY_COPY_OF_REGS + 26 * WORD, } def __init__(self, longevity, frame_manager=None, assembler=None): @@ -177,7 +173,7 @@ def prepare_loop(self, inputargs, operations): self._prepare(inputargs, operations) self._set_initial_bindings(inputargs) - self.possibly_free_vars(list(inputargs)) + self.possibly_free_vars(inputargs) def prepare_bridge(self, inputargs, arglocs, ops): self._prepare(inputargs, ops) @@ -425,7 +421,7 @@ prepare_guard_not_invalidated = prepare_guard_no_overflow def prepare_guard_exception(self, op): - boxes = list(op.getarglist()) + boxes = op.getarglist() arg0 = ConstInt(rffi.cast(lltype.Signed, op.getarg(0).getint())) loc = self._ensure_value_is_boxed(arg0) loc1 = self.get_scratch_reg(INT, boxes) @@ -447,7 +443,7 @@ return arglocs def prepare_guard_value(self, op): - boxes = list(op.getarglist()) + boxes = op.getarglist() a0, a1 = boxes l0 = self._ensure_value_is_boxed(a0, boxes) l1 = self._ensure_value_is_boxed(a1, boxes) @@ -459,7 +455,7 @@ def prepare_guard_class(self, op): assert isinstance(op.getarg(0), Box) - boxes = list(op.getarglist()) + boxes = op.getarglist() x = self._ensure_value_is_boxed(boxes[0], boxes) y = self.get_scratch_reg(REF, forbidden_vars=boxes) y_val = rffi.cast(lltype.Signed, op.getarg(1).getint()) @@ -559,7 +555,7 @@ return [] def prepare_setfield_gc(self, op): - boxes = list(op.getarglist()) + boxes = op.getarglist() a0, a1 = boxes ofs, size, sign = unpack_fielddescr(op.getdescr()) base_loc = self._ensure_value_is_boxed(a0, boxes) @@ -608,6 +604,7 @@ self.possibly_free_var(op.result) return [base_loc, index_loc, result_loc, ofs_loc, imm(ofs), imm(itemsize), imm(fieldsize)] + prepare_getinteriorfield_raw = prepare_getinteriorfield_gc def prepare_setinteriorfield_gc(self, op): t = unpack_interiorfielddescr(op.getdescr()) @@ -622,6 +619,7 @@ ofs_loc = self._ensure_value_is_boxed(ConstInt(ofs), args) return [base_loc, index_loc, value_loc, ofs_loc, imm(ofs), imm(itemsize), imm(fieldsize)] + 
prepare_setinteriorfield_raw = prepare_setinteriorfield_gc def prepare_arraylen_gc(self, op): arraydescr = op.getdescr() @@ -811,6 +809,7 @@ prepare_debug_merge_point = void prepare_jit_debug = void + prepare_keepalive = void def prepare_cond_call_gc_wb(self, op): assert op.result is None diff --git a/pypy/jit/backend/ppc/register.py b/pypy/jit/backend/ppc/register.py --- a/pypy/jit/backend/ppc/register.py +++ b/pypy/jit/backend/ppc/register.py @@ -14,7 +14,8 @@ NONVOLATILES = [r14, r15, r16, r17, r18, r19, r20, r21, r22, r23, r24, r25, r26, r27, r28, r29, r30, r31] -VOLATILES = [r0, r2, r3, r4, r5, r6, r7, r8, r9, r10, r11, r12, r13] +VOLATILES = [r0, r3, r4, r5, r6, r7, r8, r9, r10, r11, r12] +# volatile r2 is persisted around calls and r13 can be ignored NONVOLATILES_FLOAT = [f14, f15, f16, f17, f18, f19, f20, f21, f22, f23, f24, f25, f26, f27, f28, f29, f30, f31] diff --git a/pypy/jit/backend/ppc/runner.py b/pypy/jit/backend/ppc/runner.py --- a/pypy/jit/backend/ppc/runner.py +++ b/pypy/jit/backend/ppc/runner.py @@ -32,7 +32,7 @@ gcdescr.force_index_ofs = FORCE_INDEX_OFS # XXX for now the ppc backend does not support the gcremovetypeptr # translation option - assert gcdescr.config.translation.gcremovetypeptr is False + # assert gcdescr.config.translation.gcremovetypeptr is False AbstractLLCPU.__init__(self, rtyper, stats, opts, translate_support_code, gcdescr) diff --git a/pypy/jit/backend/ppc/test/test_ztranslation.py b/pypy/jit/backend/ppc/test/test_ztranslation.py --- a/pypy/jit/backend/ppc/test/test_ztranslation.py +++ b/pypy/jit/backend/ppc/test/test_ztranslation.py @@ -18,8 +18,9 @@ def _check_cbuilder(self, cbuilder): # We assume here that we have sse2. If not, the CPUClass # needs to be changed to CPU386_NO_SSE2, but well. - assert '-msse2' in cbuilder.eci.compile_extra - assert '-mfpmath=sse' in cbuilder.eci.compile_extra + #assert '-msse2' in cbuilder.eci.compile_extra + #assert '-mfpmath=sse' in cbuilder.eci.compile_extra + pass def test_stuff_translates(self): # this is a basic test that tries to hit a number of features and their @@ -176,7 +177,7 @@ def _get_TranslationContext(self): t = TranslationContext() t.config.translation.gc = DEFL_GC # 'hybrid' or 'minimark' - t.config.translation.gcrootfinder = 'asmgcc' + t.config.translation.gcrootfinder = 'shadowstack' t.config.translation.list_comprehension_operations = True t.config.translation.gcremovetypeptr = True return t diff --git a/pypy/jit/backend/test/runner_test.py b/pypy/jit/backend/test/runner_test.py --- a/pypy/jit/backend/test/runner_test.py +++ b/pypy/jit/backend/test/runner_test.py @@ -1677,6 +1677,7 @@ c_box = self.alloc_string("hi there").constbox() c_nest = ConstInt(0) self.execute_operation(rop.DEBUG_MERGE_POINT, [c_box, c_nest], 'void') + self.execute_operation(rop.KEEPALIVE, [c_box], 'void') self.execute_operation(rop.JIT_DEBUG, [c_box, c_nest, c_nest, c_nest, c_nest], 'void') From noreply at buildbot.pypy.org Tue Feb 28 14:08:07 2012 From: noreply at buildbot.pypy.org (hager) Date: Tue, 28 Feb 2012 14:08:07 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: use architecture independent cmp_op instead of cmpwi/cmpdi Message-ID: <20120228130807.A7EB78203C@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r52974:b2a3369d7362 Date: 2012-02-28 14:04 +0100 http://bitbucket.org/pypy/pypy/changeset/b2a3369d7362/ Log: use architecture independent cmp_op instead of cmpwi/cmpdi diff --git a/pypy/jit/backend/ppc/opassembler.py b/pypy/jit/backend/ppc/opassembler.py --- 
a/pypy/jit/backend/ppc/opassembler.py +++ b/pypy/jit/backend/ppc/opassembler.py @@ -910,14 +910,13 @@ if bitpos > 0: self.mc.rlwinm(r.SCRATCH.value, r.SCRATCH.value, 32 - bitpos, 31, 31) - # test whether this bit is set - self.mc.cmpwi(0, r.SCRATCH.value, 1) else: if bitpos > 0: self.mc.rldicl(r.SCRATCH.value, r.SCRATCH.value, 64 - bitpos, 63) - # test whether this bit is set - self.mc.cmpdi(0, r.SCRATCH.value, 1) + + # test whether this bit is set + self.mc.cmp_op(0, r.SCRATCH.value, 1, imm=True) jz_location = self.mc.currpos() self.mc.nop() From noreply at buildbot.pypy.org Tue Feb 28 14:08:08 2012 From: noreply at buildbot.pypy.org (hager) Date: Tue, 28 Feb 2012 14:08:08 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: (bivab, hager): the previous jump condition was correct, see comment in code Message-ID: <20120228130808.D0E0A8203C@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r52975:287553fbcb22 Date: 2012-02-28 14:06 +0100 http://bitbucket.org/pypy/pypy/changeset/287553fbcb22/ Log: (bivab, hager): the previous jump condition was correct, see comment in code diff --git a/pypy/jit/backend/ppc/opassembler.py b/pypy/jit/backend/ppc/opassembler.py --- a/pypy/jit/backend/ppc/opassembler.py +++ b/pypy/jit/backend/ppc/opassembler.py @@ -938,7 +938,12 @@ # patch the JZ above offset = self.mc.currpos() - jz_location pmc = OverwritingBuilder(self.mc, jz_location, 1) - pmc.bc(12, 2, offset) # jump if the two values are equal + # We want to jump if the compared bits are not equal. + # This corresponds to the x86 backend, which uses + # the TEST operation. Hence, on first sight, it might + # seem that we use the wrong condition here. This is + # because TEST results in a 1 if the operands are different. + pmc.bc(4, 2, offset) pmc.overwrite() emit_cond_call_gc_wb_array = emit_cond_call_gc_wb From noreply at buildbot.pypy.org Tue Feb 28 14:09:36 2012 From: noreply at buildbot.pypy.org (arigo) Date: Tue, 28 Feb 2012 14:09:36 +0100 (CET) Subject: [pypy-commit] pypy miniscan: Add the option. Message-ID: <20120228130936.6E8BD8203C@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: miniscan Changeset: r52976:1407952bc3f8 Date: 2012-02-28 14:04 +0100 http://bitbucket.org/pypy/pypy/changeset/1407952bc3f8/ Log: Add the option. diff --git a/pypy/config/translationoption.py b/pypy/config/translationoption.py --- a/pypy/config/translationoption.py +++ b/pypy/config/translationoption.py @@ -90,13 +90,14 @@ default=IS_64_BITS, cmdline="--gcremovetypeptr"), ChoiceOption("gcrootfinder", "Strategy for finding GC Roots (framework GCs only)", - ["n/a", "shadowstack", "asmgcc"], + ["n/a", "shadowstack", "asmgcc", "scan"], "shadowstack", cmdline="--gcrootfinder", requires={ "shadowstack": [("translation.gctransformer", "framework")], "asmgcc": [("translation.gctransformer", "framework"), ("translation.backend", "c")], + "scan": [("translation.gctransformer", "framework")], }), # other noticeable options From noreply at buildbot.pypy.org Tue Feb 28 14:09:37 2012 From: noreply at buildbot.pypy.org (arigo) Date: Tue, 28 Feb 2012 14:09:37 +0100 (CET) Subject: [pypy-commit] pypy miniscan: Comment. Message-ID: <20120228130937.99A298203C@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: miniscan Changeset: r52977:c5c715bfac2b Date: 2012-02-28 14:09 +0100 http://bitbucket.org/pypy/pypy/changeset/c5c715bfac2b/ Log: Comment. 
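A minimal sketch of the membership test that the comment added in the diff
below describes (GCFLAG_HIGH_MASK and GCFLAG_HIGH are the constants visible
in that diff; the helper function and the concrete bit layout here are
hypothetical, illustrative only, and not part of the changeset):

    # Toy, self-contained version of the "known pattern in the unused
    # high bits of tid" check used to filter random nursery addresses.
    TID_MASK = (1 << 16) - 1          # low bits: type id and GC flags (illustrative width)
    GCFLAG_HIGH_MASK = ~TID_MASK      # the remaining, otherwise unused bits
    GCFLAG_HIGH = 0x5555555555555555 & GCFLAG_HIGH_MASK

    def tid_looks_like_object(tid):
        # "definitely not" a valid object if the pattern is absent;
        # only "very likely" one if it is present, so callers must still
        # treat a positive answer with care.
        return (tid & GCFLAG_HIGH_MASK) == GCFLAG_HIGH
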
diff --git a/pypy/rpython/memory/gc/minimark.py b/pypy/rpython/memory/gc/minimark.py --- a/pypy/rpython/memory/gc/minimark.py +++ b/pypy/rpython/memory/gc/minimark.py @@ -116,6 +116,10 @@ TID_MASK = (first_gcflag << 7) - 1 +# The remaining unused bits (GCFLAG_HIGH_MASK) are set to a known pattern +# in all objects (GCFLAG_HIGH). From a random address in the nursery, it +# let us know if it points to a valid object: either "definitely not", or +# "very likely". We can't be sure, though, so be careful. GCFLAG_HIGH_MASK = intmask(~TID_MASK) assert GCFLAG_HIGH_MASK < 0 and not (GCFLAG_HIGH_MASK & GCFLAG_CARDS_SET) GCFLAG_HIGH = intmask(0x5555555555555555 & GCFLAG_HIGH_MASK) From noreply at buildbot.pypy.org Tue Feb 28 14:52:46 2012 From: noreply at buildbot.pypy.org (arigo) Date: Tue, 28 Feb 2012 14:52:46 +0100 (CET) Subject: [pypy-commit] pypy miniscan: In-progress: import existing tests for using them with 'gcrootfinder=scan'. Message-ID: <20120228135246.BE4158203C@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: miniscan Changeset: r52978:8ab8c0ea5f42 Date: 2012-02-28 14:51 +0100 http://bitbucket.org/pypy/pypy/changeset/8ab8c0ea5f42/ Log: In-progress: import existing tests for using them with 'gcrootfinder=scan'. diff --git a/pypy/rpython/memory/gc/minimark.py b/pypy/rpython/memory/gc/minimark.py --- a/pypy/rpython/memory/gc/minimark.py +++ b/pypy/rpython/memory/gc/minimark.py @@ -251,9 +251,12 @@ self.nursery = NULL self.nursery_next = NULL self.nursery_frag_end = NULL + self.nursery_top = NULL self.debug_tiny_nursery = -1 self.debug_rotating_nurseries = None # + self.gcrootfinder_scan = (config.gcrootfinder == "scan") + # # The ArenaCollection() handles the nonmovable objects allocation. if ArenaCollectionClass is None: ArenaCollectionClass = minimarkpage.ArenaCollection @@ -396,7 +399,8 @@ # the current position in the nursery: self.nursery_next = self.nursery # the end of the nursery: - self.nursery_frag_end = self.nursery + self.nursery_size + self.nursery_top = self.nursery + self.nursery_size + self.nursery_frag_end = self.nursery_top # initialize the threshold self.min_heap_size = max(self.min_heap_size, self.nursery_size * self.major_collection_threshold) @@ -559,10 +563,10 @@ # # Get the memory from the nursery. If there is not enough space # there, do a collect first. - result = self.nursery_free - self.nursery_free = result + totalsize - if self.nursery_free > self.nursery_top: - result = self.collect_and_reserve(totalsize) + result = self.nursery_next + self.nursery_next = result + totalsize + if self.nursery_next > self.nursery_frag_end: + result = self.pick_next_fragment(totalsize) # # Build the object. llarena.arena_reserve(result, totalsize) @@ -581,8 +585,17 @@ if gen > 0: self.major_collection() + def pick_next_fragment(self, totalsize): + """To call when nursery_next overflows nursery_frag_end. + Pick the next fragment of the nursery, or if there are none + big enough for 'totalsize', do a collection. + """ + # XXX + return self.collect_and_reserve(totalsize) + pick_next_fragment._dont_inline_ = True + def collect_and_reserve(self, totalsize): - """To call when nursery_free overflows nursery_top. + """To call when we have run out of nursery fragments. Do a minor collection, and possibly also a major collection, and finally reserve 'totalsize' bytes at the start of the now-empty nursery. 
@@ -607,7 +620,6 @@ self.nursery_free = self.nursery_top - self.debug_tiny_nursery # return result - collect_and_reserve._dont_inline_ = True def external_malloc(self, typeid, length, can_make_young=True): @@ -753,6 +765,7 @@ if self.next_major_collection_threshold < 0: # cannot trigger a full collection now, but we can ensure # that one will occur very soon + xxx self.nursery_free = self.nursery_top def can_malloc_nonmovable(self): @@ -822,7 +835,9 @@ # have been chosen to allow 'flags' to be zero in the common # case (hence the 'NO' in their name). hdr = llmemory.cast_adr_to_ptr(addr, lltype.Ptr(self.HDR)) - hdr.tid = self.combine(typeid16, flags | GCFLAG_HIGH) + if self.gcrootfinder_scan: # don't bother setting these high + flags |= GCFLAG_HIGH # bits if not "scan" + hdr.tid = self.combine(typeid16, flags) def init_gc_object_immortal(self, addr, typeid16, flags=0): # For prebuilt GC objects, the flags must contain @@ -876,8 +891,13 @@ if result: ll_assert(tid == -42, "bogus header for young obj") else: - ll_assert(bool(tid), "bogus header (1)") - ll_assert(tid & ~TID_MASK == 0, "bogus header (2)") + htid = llop.extract_ushort(llgroup.HALFWORD, tid) + ll_assert(bool(htid), "bogus header (1)") + if self.gcrootfinder_scan: + expected = GCFLAG_HIGH + else: + expected = 0 + ll_assert(tid & GCFLAG_HIGH_MASK == expected, "bogus header (2)") return result def get_forwarding_address(self, obj): @@ -1294,6 +1314,7 @@ # the whole nursery with zero and reset the current nursery pointer. llarena.arena_reset(self.nursery, self.nursery_size, 2) self.debug_rotate_nursery() + xxx self.nursery_free = self.nursery # debug_print("minor collect, total memory used:", diff --git a/pypy/rpython/memory/gcwrapper.py b/pypy/rpython/memory/gcwrapper.py --- a/pypy/rpython/memory/gcwrapper.py +++ b/pypy/rpython/memory/gcwrapper.py @@ -177,6 +177,8 @@ if self.gcheap.gc.points_to_valid_gc_object(addrofaddr): collect_static_in_prebuilt_nongc(gc, addrofaddr) if collect_stack_root: + translator = gcheap.llinterp.typer.annotator.translator + assert translator.config.translation.gcrootfinder != 'scan' for addrofaddr in gcheap.llinterp.find_roots(): if self.gcheap.gc.points_to_valid_gc_object(addrofaddr): collect_stack_root(gc, addrofaddr) diff --git a/pypy/rpython/memory/test/test_gc.py b/pypy/rpython/memory/test/test_gc.py --- a/pypy/rpython/memory/test/test_gc.py +++ b/pypy/rpython/memory/test/test_gc.py @@ -24,6 +24,7 @@ class GCTest(object): GC_PARAMS = {} + CONFIG_OPTS = {} GC_CAN_MOVE = False GC_CAN_MALLOC_NONMOVABLE = True GC_CAN_SHRINK_ARRAY = False @@ -40,6 +41,7 @@ py.log._setstate(cls._saved_logstate) def interpret(self, func, values, **kwds): + kwds.update(self.CONFIG_OPTS) interp, graph = get_interpreter(func, values, **kwds) gcwrapper.prepare_graphs_and_create_gc(interp, self.GCClass, self.GC_PARAMS) @@ -921,3 +923,6 @@ class TestMiniMarkGCCardMarking(TestMiniMarkGC): GC_PARAMS = {'card_page_indices': 4} + +class TestMiniMarkGCScan(TestMiniMarkGC): + CONFIG_OPTS = {'gcrootfinder': 'scan'} diff --git a/pypy/translator/c/gc.py b/pypy/translator/c/gc.py --- a/pypy/translator/c/gc.py +++ b/pypy/translator/c/gc.py @@ -6,7 +6,7 @@ typeOf, Ptr, ContainerType, RttiStruct, \ RuntimeTypeInfo, getRuntimeTypeInfo, top_container from pypy.rpython.memory.gctransform import \ - refcounting, boehm, framework, asmgcroot + refcounting, boehm, framework, asmgcroot, scan from pypy.rpython.lltypesystem import lltype, llmemory from pypy.translator.tool.cbuild import ExternalCompilationInfo @@ -403,6 +403,12 @@ def 
OP_GC_STACK_BOTTOM(self, funcgen, op): return 'pypy_asm_stack_bottom();' +class ScanFrameworkGcPolicy(FrameworkGcPolicy): + transformerclass = scan.ScanFrameworkGCTransformer + + def GC_KEEPALIVE(self, funcgen, v): + return 'pypy_asm_keepalive(%s);' % funcgen.expr(v) + name_to_gcpolicy = { 'boehm': BoehmGcPolicy, @@ -410,6 +416,5 @@ 'none': NoneGcPolicy, 'framework': FrameworkGcPolicy, 'framework+asmgcroot': AsmGcRootFrameworkGcPolicy, + 'framework+scan': ScanFrameworkGcPolicy, } - - diff --git a/pypy/translator/c/genc.py b/pypy/translator/c/genc.py --- a/pypy/translator/c/genc.py +++ b/pypy/translator/c/genc.py @@ -193,6 +193,8 @@ name = self.config.translation.gctransformer if self.config.translation.gcrootfinder == "asmgcc": name = "%s+asmgcroot" % (name,) + if self.config.translation.gcrootfinder == "scan": + name = "%s+scan" % (name,) return gc.name_to_gcpolicy[name] return self.gcpolicy diff --git a/pypy/translator/c/test/test_newgc.py b/pypy/translator/c/test/test_newgc.py --- a/pypy/translator/c/test/test_newgc.py +++ b/pypy/translator/c/test/test_newgc.py @@ -15,6 +15,7 @@ class TestUsingFramework(object): gcpolicy = "marksweep" + gcrootfinder = "shadowstack" should_be_moving = False removetypeptr = False taggedpointers = False @@ -41,7 +42,8 @@ t = Translation(main, standalone=True, gc=cls.gcpolicy, policy=annpolicy.StrictAnnotatorPolicy(), taggedpointers=cls.taggedpointers, - gcremovetypeptr=cls.removetypeptr) + gcremovetypeptr=cls.removetypeptr, + gcrootfinder=cls.gcrootfinder) t.disable(['backendopt']) t.set_backend_extra_options(c_debug_defines=True) t.rtype() @@ -1600,3 +1602,6 @@ class TestMiniMarkGCMostCompact(TaggedPointersTest, TestMiniMarkGC): removetypeptr = True + +class TestMiniMarkGCScan(TestMiniMarkGC): + gcrootfinder = "scan" From noreply at buildbot.pypy.org Tue Feb 28 16:06:51 2012 From: noreply at buildbot.pypy.org (hager) Date: Tue, 28 Feb 2012 16:06:51 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: add test_gc_integration from x86 backend Message-ID: <20120228150651.DAF828203C@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r52979:0c2f89975c18 Date: 2012-02-28 16:06 +0100 http://bitbucket.org/pypy/pypy/changeset/0c2f89975c18/ Log: add test_gc_integration from x86 backend diff --git a/pypy/jit/backend/ppc/test/test_gc_integration.py b/pypy/jit/backend/ppc/test/test_gc_integration.py new file mode 100644 --- /dev/null +++ b/pypy/jit/backend/ppc/test/test_gc_integration.py @@ -0,0 +1,259 @@ + +""" Tests for register allocation for common constructs +""" + +import py +from pypy.jit.metainterp.history import BoxInt, ConstInt,\ + BoxPtr, ConstPtr, TreeLoop, TargetToken +from pypy.jit.metainterp.resoperation import rop, ResOperation +from pypy.jit.codewriter import heaptracker +from pypy.jit.codewriter.effectinfo import EffectInfo +from pypy.jit.backend.llsupport.descr import GcCache, FieldDescr, FLAG_SIGNED +from pypy.jit.backend.llsupport.gc import GcLLDescription +from pypy.jit.backend.detect_cpu import getcpuclass +from pypy.jit.backend.x86.regalloc import RegAlloc +from pypy.jit.backend.x86.arch import WORD, FRAME_FIXED_SIZE +from pypy.jit.tool.oparser import parse +from pypy.rpython.lltypesystem import lltype, llmemory, rffi +from pypy.rpython.annlowlevel import llhelper +from pypy.rpython.lltypesystem import rclass, rstr +from pypy.jit.backend.llsupport.gc import GcLLDescr_framework + +from pypy.jit.backend.x86.test.test_regalloc import MockAssembler +from pypy.jit.backend.x86.test.test_regalloc import BaseTestRegalloc 
+from pypy.jit.backend.x86.regalloc import X86RegisterManager, X86FrameManager,\ + X86XMMRegisterManager + +CPU = getcpuclass() + +class MockGcRootMap(object): + is_shadow_stack = False + def get_basic_shape(self, is_64_bit): + return ['shape'] + def add_frame_offset(self, shape, offset): + shape.append(offset) + def add_callee_save_reg(self, shape, reg_index): + index_to_name = { 1: 'ebx', 2: 'esi', 3: 'edi' } + shape.append(index_to_name[reg_index]) + def compress_callshape(self, shape, datablockwrapper): + assert datablockwrapper == 'fakedatablockwrapper' + assert shape[0] == 'shape' + return ['compressed'] + shape[1:] + +class MockGcDescr(GcCache): + get_malloc_slowpath_addr = None + write_barrier_descr = None + moving_gc = True + gcrootmap = MockGcRootMap() + + def initialize(self): + pass + + _record_constptrs = GcLLDescr_framework._record_constptrs.im_func + rewrite_assembler = GcLLDescr_framework.rewrite_assembler.im_func + +class TestRegallocDirectGcIntegration(object): + + def test_mark_gc_roots(self): + cpu = CPU(None, None) + cpu.setup_once() + regalloc = RegAlloc(MockAssembler(cpu, MockGcDescr(False))) + regalloc.assembler.datablockwrapper = 'fakedatablockwrapper' + boxes = [BoxPtr() for i in range(len(X86RegisterManager.all_regs))] + longevity = {} + for box in boxes: + longevity[box] = (0, 1) + regalloc.fm = X86FrameManager() + regalloc.rm = X86RegisterManager(longevity, regalloc.fm, + assembler=regalloc.assembler) + regalloc.xrm = X86XMMRegisterManager(longevity, regalloc.fm, + assembler=regalloc.assembler) + cpu = regalloc.assembler.cpu + for box in boxes: + regalloc.rm.try_allocate_reg(box) + TP = lltype.FuncType([], lltype.Signed) + calldescr = cpu.calldescrof(TP, TP.ARGS, TP.RESULT, + EffectInfo.MOST_GENERAL) + regalloc.rm._check_invariants() + box = boxes[0] + regalloc.position = 0 + regalloc.consider_call(ResOperation(rop.CALL, [box], BoxInt(), + calldescr)) + assert len(regalloc.assembler.movs) == 3 + # + mark = regalloc.get_mark_gc_roots(cpu.gc_ll_descr.gcrootmap) + assert mark[0] == 'compressed' + base = -WORD * FRAME_FIXED_SIZE + expected = ['ebx', 'esi', 'edi', base, base-WORD, base-WORD*2] + assert dict.fromkeys(mark[1:]) == dict.fromkeys(expected) + +class TestRegallocGcIntegration(BaseTestRegalloc): + + cpu = CPU(None, None) + cpu.gc_ll_descr = MockGcDescr(False) + cpu.setup_once() + + S = lltype.GcForwardReference() + S.become(lltype.GcStruct('S', ('field', lltype.Ptr(S)), + ('int', lltype.Signed))) + + fielddescr = cpu.fielddescrof(S, 'field') + + struct_ptr = lltype.malloc(S) + struct_ref = lltype.cast_opaque_ptr(llmemory.GCREF, struct_ptr) + child_ptr = lltype.nullptr(S) + struct_ptr.field = child_ptr + + + descr0 = cpu.fielddescrof(S, 'int') + ptr0 = struct_ref + + targettoken = TargetToken() + + namespace = locals().copy() + + def test_basic(self): + ops = ''' + [p0] + p1 = getfield_gc(p0, descr=fielddescr) + finish(p1) + ''' + self.interpret(ops, [self.struct_ptr]) + assert not self.getptr(0, lltype.Ptr(self.S)) + + def test_rewrite_constptr(self): + ops = ''' + [] + p1 = getfield_gc(ConstPtr(struct_ref), descr=fielddescr) + finish(p1) + ''' + self.interpret(ops, []) + assert not self.getptr(0, lltype.Ptr(self.S)) + + def test_bug_0(self): + ops = ''' + [i0, i1, i2, i3, i4, i5, i6, i7, i8] + label(i0, i1, i2, i3, i4, i5, i6, i7, i8, descr=targettoken) + guard_value(i2, 1) [i2, i3, i4, i5, i6, i7, i0, i1, i8] + guard_class(i4, 138998336) [i4, i5, i6, i7, i0, i1, i8] + i11 = getfield_gc(i4, descr=descr0) + guard_nonnull(i11) [i4, i5, i6, i7, i0, i1, i11, 
i8] + i13 = getfield_gc(i11, descr=descr0) + guard_isnull(i13) [i4, i5, i6, i7, i0, i1, i11, i8] + i15 = getfield_gc(i4, descr=descr0) + i17 = int_lt(i15, 0) + guard_false(i17) [i4, i5, i6, i7, i0, i1, i11, i15, i8] + i18 = getfield_gc(i11, descr=descr0) + i19 = int_ge(i15, i18) + guard_false(i19) [i4, i5, i6, i7, i0, i1, i11, i15, i8] + i20 = int_lt(i15, 0) + guard_false(i20) [i4, i5, i6, i7, i0, i1, i11, i15, i8] + i21 = getfield_gc(i11, descr=descr0) + i22 = getfield_gc(i11, descr=descr0) + i23 = int_mul(i15, i22) + i24 = int_add(i21, i23) + i25 = getfield_gc(i4, descr=descr0) + i27 = int_add(i25, 1) + setfield_gc(i4, i27, descr=descr0) + i29 = getfield_raw(144839744, descr=descr0) + i31 = int_and(i29, -2141192192) + i32 = int_is_true(i31) + guard_false(i32) [i4, i6, i7, i0, i1, i24] + i33 = getfield_gc(i0, descr=descr0) + guard_value(i33, ConstPtr(ptr0)) [i4, i6, i7, i0, i1, i33, i24] + jump(i0, i1, 1, 17, i4, ConstPtr(ptr0), i6, i7, i24, descr=targettoken) + ''' + self.interpret(ops, [0, 0, 0, 0, 0, 0, 0, 0, 0], run=False) + +NOT_INITIALIZED = chr(0xdd) + +class GCDescrFastpathMalloc(GcLLDescription): + gcrootmap = None + write_barrier_descr = None + + def __init__(self): + GcLLDescription.__init__(self, None) + # create a nursery + NTP = rffi.CArray(lltype.Char) + self.nursery = lltype.malloc(NTP, 64, flavor='raw') + for i in range(64): + self.nursery[i] = NOT_INITIALIZED + self.addrs = lltype.malloc(rffi.CArray(lltype.Signed), 2, + flavor='raw') + self.addrs[0] = rffi.cast(lltype.Signed, self.nursery) + self.addrs[1] = self.addrs[0] + 64 + self.calls = [] + def malloc_slowpath(size): + self.calls.append(size) + # reset the nursery + nadr = rffi.cast(lltype.Signed, self.nursery) + self.addrs[0] = nadr + size + return nadr + self.generate_function('malloc_nursery', malloc_slowpath, + [lltype.Signed], lltype.Signed) + + def get_nursery_free_addr(self): + return rffi.cast(lltype.Signed, self.addrs) + + def get_nursery_top_addr(self): + return rffi.cast(lltype.Signed, self.addrs) + WORD + + def get_malloc_slowpath_addr(self): + return self.get_malloc_fn_addr('malloc_nursery') + + def check_nothing_in_nursery(self): + # CALL_MALLOC_NURSERY should not write anything in the nursery + for i in range(64): + assert self.nursery[i] == NOT_INITIALIZED + +class TestMallocFastpath(BaseTestRegalloc): + + def setup_method(self, method): + cpu = CPU(None, None) + cpu.gc_ll_descr = GCDescrFastpathMalloc() + cpu.setup_once() + self.cpu = cpu + + def test_malloc_fastpath(self): + ops = ''' + [] + p0 = call_malloc_nursery(16) + p1 = call_malloc_nursery(32) + p2 = call_malloc_nursery(16) + finish(p0, p1, p2) + ''' + self.interpret(ops, []) + # check the returned pointers + gc_ll_descr = self.cpu.gc_ll_descr + nurs_adr = rffi.cast(lltype.Signed, gc_ll_descr.nursery) + ref = self.cpu.get_latest_value_ref + assert rffi.cast(lltype.Signed, ref(0)) == nurs_adr + 0 + assert rffi.cast(lltype.Signed, ref(1)) == nurs_adr + 16 + assert rffi.cast(lltype.Signed, ref(2)) == nurs_adr + 48 + # check the nursery content and state + gc_ll_descr.check_nothing_in_nursery() + assert gc_ll_descr.addrs[0] == nurs_adr + 64 + # slowpath never called + assert gc_ll_descr.calls == [] + + def test_malloc_slowpath(self): + ops = ''' + [] + p0 = call_malloc_nursery(16) + p1 = call_malloc_nursery(32) + p2 = call_malloc_nursery(24) # overflow + finish(p0, p1, p2) + ''' + self.interpret(ops, []) + # check the returned pointers + gc_ll_descr = self.cpu.gc_ll_descr + nurs_adr = rffi.cast(lltype.Signed, gc_ll_descr.nursery) + ref = 
self.cpu.get_latest_value_ref + assert rffi.cast(lltype.Signed, ref(0)) == nurs_adr + 0 + assert rffi.cast(lltype.Signed, ref(1)) == nurs_adr + 16 + assert rffi.cast(lltype.Signed, ref(2)) == nurs_adr + 0 + # check the nursery content and state + gc_ll_descr.check_nothing_in_nursery() + assert gc_ll_descr.addrs[0] == nurs_adr + 24 + # this should call slow path once + assert gc_ll_descr.calls == [24] From noreply at buildbot.pypy.org Tue Feb 28 16:31:32 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Tue, 28 Feb 2012 16:31:32 +0100 (CET) Subject: [pypy-commit] pypy py3k: use 'struct' instead of 'string', because when we happen to import the stdlib module, it has fewer dependencies. It's much faster, and string won't import anyway because itertools is shadowed Message-ID: <20120228153132.3AB448203C@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52980:ee935eada069 Date: 2012-02-28 12:28 +0100 http://bitbucket.org/pypy/pypy/changeset/ee935eada069/ Log: use 'struct' instead of 'string', because when we happen to import the stdlib module, it has fewer dependencies. It's much faster, and string won't import anyway because itertools is shadowed diff --git a/pypy/module/imp/test/test_import.py b/pypy/module/imp/test/test_import.py --- a/pypy/module/imp/test/test_import.py +++ b/pypy/module/imp/test/test_import.py @@ -48,19 +48,19 @@ abs_b = "import b", abs_x_y = "import x.y", abs_sys = "import sys", - string = "inpackage = 1", + struct = "inpackage = 1", errno = "", - absolute = "from __future__ import absolute_import\nimport string", - relative_b = "from __future__ import absolute_import\nfrom . import string", - relative_c = "from __future__ import absolute_import\nfrom .string import inpackage", + absolute = "from __future__ import absolute_import\nimport struct", + relative_b = "from __future__ import absolute_import\nfrom . import struct", + relative_c = "from __future__ import absolute_import\nfrom .struct import inpackage", relative_f = "from .imp import get_magic", relative_g = "import imp; from .imp import get_magic", ) setuppkg("pkg.pkg1", __init__ = 'from . import a', a = '', - relative_d = "from __future__ import absolute_import\nfrom ..string import inpackage", - relative_e = "from __future__ import absolute_import\nfrom .. import string", + relative_d = "from __future__ import absolute_import\nfrom ..struct import inpackage", + relative_e = "from __future__ import absolute_import\nfrom .. import struct", relative_g = "from .. 
import pkg1\nfrom ..pkg1 import b", b = "insubpackage = 1", ) @@ -386,12 +386,12 @@ def test_future_absolute_import(self): def imp(): from pkg import absolute - absolute.string.inpackage - raises(AttributeError, imp) + assert hasattr(absolute.struct, 'pack') + imp() def test_future_relative_import_without_from_name(self): from pkg import relative_b - assert relative_b.string.inpackage == 1 + assert relative_b.struct.inpackage == 1 def test_no_relative_import(self): def imp(): @@ -415,7 +415,7 @@ def test_future_relative_import_level_2_without_from_name(self): from pkg.pkg1 import relative_e - assert relative_e.string.inpackage == 1 + assert relative_e.struct.inpackage == 1 def test_future_relative_import_level_3(self): from pkg.pkg1 import relative_g @@ -427,7 +427,7 @@ ns = {'__name__': __name__} exec("""def imp(): print('__name__ =', __name__) - from .string import inpackage + from .struct import inpackage """, ns) raises(ValueError, ns['imp']) From noreply at buildbot.pypy.org Tue Feb 28 16:31:34 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Tue, 28 Feb 2012 16:31:34 +0100 (CET) Subject: [pypy-commit] pypy py3k: fix the invocations of _testfile because the signature changed Message-ID: <20120228153134.A03FC8204C@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52982:cae066e3bc7d Date: 2012-02-28 12:42 +0100 http://bitbucket.org/pypy/pypy/changeset/cae066e3bc7d/ Log: fix the invocations of _testfile because the signature changed diff --git a/pypy/module/imp/test/test_import.py b/pypy/module/imp/test/test_import.py --- a/pypy/module/imp/test/test_import.py +++ b/pypy/module/imp/test/test_import.py @@ -679,7 +679,7 @@ def test_check_compiled_module(self): space = self.space mtime = 12345 - cpathname = _testfile(importing.get_pyc_magic(space), mtime) + cpathname = _testfile(space, importing.get_pyc_magic(space), mtime) ret = importing.check_compiled_module(space, cpathname, mtime) @@ -700,7 +700,7 @@ os.remove(cpathname) # check for wrong version - cpathname = _testfile(importing.get_pyc_magic(space)+1, mtime) + cpathname = _testfile(space, importing.get_pyc_magic(space)+1, mtime) ret = importing.check_compiled_module(space, cpathname, mtime) @@ -968,7 +968,7 @@ pathname = "whatever" mtime = 12345 co = compile('x = 42', '?', 'exec') - cpathname = _testfile(importing.get_pyc_magic(space1), + cpathname = _testfile(space1, importing.get_pyc_magic(space1), mtime, co) w_modulename = space2.wrap('somemodule') stream = streamio.open_file_as_stream(cpathname, "rb") From noreply at buildbot.pypy.org Tue Feb 28 16:31:35 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Tue, 28 Feb 2012 16:31:35 +0100 (CET) Subject: [pypy-commit] pypy py3k: make sure to always specify an explicit encoding. Else, _io.open will try to import locale to get the default one, triggering a recursive import and then BOOM Message-ID: <20120228153135.CF9ED8204C@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52983:df535a5e3932 Date: 2012-02-28 15:39 +0100 http://bitbucket.org/pypy/pypy/changeset/df535a5e3932/ Log: make sure to always specify an explicit encoding. 
Else, _io.open will try to import locale to get the default one, triggering a recursive import and then BOOM diff --git a/pypy/module/imp/interp_imp.py b/pypy/module/imp/interp_imp.py --- a/pypy/module/imp/interp_imp.py +++ b/pypy/module/imp/interp_imp.py @@ -2,6 +2,7 @@ from pypy.interpreter.error import OperationError, operationerrfmt from pypy.interpreter.module import Module from pypy.interpreter.gateway import unwrap_spec +from pypy.objspace.std import unicodetype from pypy.rlib import streamio from pypy.module._io.interp_iobase import W_IOBase from pypy.module._io import interp_io @@ -69,7 +70,9 @@ # open(). However, CPython 3 just passes the fd, so the returned file # object doesn't have a name attached. We do the same in PyPy, because # there is no easy way to attach the filename -- too bad - w_fileobj = interp_io.open(space, space.wrap(fd), find_info.filemode) + encoding = unicodetype.getdefaultencoding(space) + w_fileobj = interp_io.open(space, space.wrap(fd), find_info.filemode, + encoding=encoding) else: w_fileobj = space.w_None w_import_info = space.newtuple( From noreply at buildbot.pypy.org Tue Feb 28 16:31:33 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Tue, 28 Feb 2012 16:31:33 +0100 (CET) Subject: [pypy-commit] pypy py3k: it might happen that itertools is already imported because _io imports locale which imports the world. Delete it from sys.modules before starting the test Message-ID: <20120228153133.704E98203C@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52981:dd83b30764ff Date: 2012-02-28 12:39 +0100 http://bitbucket.org/pypy/pypy/changeset/dd83b30764ff/ Log: it might happen that itertools is already imported because _io imports locale which imports the world. Delete it from sys.modules before starting the test diff --git a/pypy/module/imp/test/test_import.py b/pypy/module/imp/test/test_import.py --- a/pypy/module/imp/test/test_import.py +++ b/pypy/module/imp/test/test_import.py @@ -611,7 +611,7 @@ # 'import itertools' is supposed to find itertools.py if there is # one in sys.path. import sys - assert 'itertools' not in sys.modules + sys.modules.pop('itertools', None) import itertools assert hasattr(itertools, 'hello_world') assert not hasattr(itertools, 'count') @@ -624,7 +624,7 @@ # if there is also one in sys.path as long as it is *after* the # special entry '.../lib_pypy/__extensions__'. 
import sys - assert 'itertools' not in sys.modules + sys.modules.pop('itertools', None) sys.path.append(sys.path.pop(0)) try: import itertools From noreply at buildbot.pypy.org Tue Feb 28 16:31:37 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Tue, 28 Feb 2012 16:31:37 +0100 (CET) Subject: [pypy-commit] pypy py3k: autodetect the encoding and use it to open the file when calling imp.find_module Message-ID: <20120228153137.0CC058203C@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52984:7f5530c7685c Date: 2012-02-28 16:03 +0100 http://bitbucket.org/pypy/pypy/changeset/7f5530c7685c/ Log: autodetect the encoding and use it to open the file when calling imp.find_module diff --git a/pypy/module/imp/interp_imp.py b/pypy/module/imp/interp_imp.py --- a/pypy/module/imp/interp_imp.py +++ b/pypy/module/imp/interp_imp.py @@ -2,6 +2,7 @@ from pypy.interpreter.error import OperationError, operationerrfmt from pypy.interpreter.module import Module from pypy.interpreter.gateway import unwrap_spec +from pypy.interpreter.pyparser import pytokenizer from pypy.objspace.std import unicodetype from pypy.rlib import streamio from pypy.module._io.interp_iobase import W_IOBase @@ -65,12 +66,20 @@ stream = find_info.stream if stream is not None: - fd = stream.try_to_find_file_descriptor() + # try to find the declared encoding + encoding = None + firstline = stream.readline() + stream.seek(0, 0) # reset position + if firstline.startswith('#'): + encoding = pytokenizer.match_encoding_declaration(firstline) + if encoding is None: + encoding = unicodetype.getdefaultencoding(space) + # # in python2, both CPython and PyPy pass the filename to # open(). However, CPython 3 just passes the fd, so the returned file # object doesn't have a name attached. 
We do the same in PyPy, because # there is no easy way to attach the filename -- too bad - encoding = unicodetype.getdefaultencoding(space) + fd = stream.try_to_find_file_descriptor() w_fileobj = interp_io.open(space, space.wrap(fd), find_info.filemode, encoding=encoding) else: diff --git a/pypy/module/imp/test/test_app.py b/pypy/module/imp/test/test_app.py --- a/pypy/module/imp/test/test_app.py +++ b/pypy/module/imp/test/test_app.py @@ -1,10 +1,16 @@ from __future__ import with_statement +from pypy.tool.udir import udir MARKER = 42 class AppTestImpModule: def setup_class(cls): cls.w_imp = cls.space.getbuiltinmodule('imp') cls.w_file_module = cls.space.wrap(__file__) + latin1 = udir.join('latin1.py') + latin1.write("# -*- coding: iso-8859-1 -*\n") + fake_latin1 = udir.join('fake_latin1.py') + fake_latin1.write("print('-*- coding: iso-8859-1 -*')") + cls.w_udir = cls.space.wrap(str(udir)) def w__py_file(self): fn = self.file_module @@ -33,6 +39,18 @@ assert pathname.endswith('.py') # even if .pyc is up-to-date assert description in self.imp.get_suffixes() + def test_find_module_with_encoding(self): + import sys + sys.path.insert(0, self.udir) + try: + file, pathname, description = self.imp.find_module('latin1') + assert file.encoding == 'iso-8859-1' + # + file, pathname, description = self.imp.find_module('fake_latin1') + assert file.encoding == 'utf-8' + finally: + del sys.path[0] + def test_load_dynamic(self): raises(ImportError, self.imp.load_dynamic, 'foo', 'bar') raises(ImportError, self.imp.load_dynamic, 'foo', 'bar', From noreply at buildbot.pypy.org Tue Feb 28 16:31:38 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Tue, 28 Feb 2012 16:31:38 +0100 (CET) Subject: [pypy-commit] pypy py3k: move the import in the setup_class, else it runs and fails on top of py.py (because the tests load the source of test_app.py itself) Message-ID: <20120228153138.3C9538203C@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52985:6d4f46082489 Date: 2012-02-28 16:15 +0100 http://bitbucket.org/pypy/pypy/changeset/6d4f46082489/ Log: move the import in the setup_class, else it runs and fails on top of py.py (because the tests load the source of test_app.py itself) diff --git a/pypy/module/imp/test/test_app.py b/pypy/module/imp/test/test_app.py --- a/pypy/module/imp/test/test_app.py +++ b/pypy/module/imp/test/test_app.py @@ -1,9 +1,9 @@ from __future__ import with_statement -from pypy.tool.udir import udir MARKER = 42 class AppTestImpModule: def setup_class(cls): + from pypy.tool.udir import udir cls.w_imp = cls.space.getbuiltinmodule('imp') cls.w_file_module = cls.space.wrap(__file__) latin1 = udir.join('latin1.py') From noreply at buildbot.pypy.org Tue Feb 28 16:31:39 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Tue, 28 Feb 2012 16:31:39 +0100 (CET) Subject: [pypy-commit] pypy default: move wrap_streamerror and wrap_oserror_as_ioerror in a separate file. This is usefult in the py3k branch because we are about to kill module/_file Message-ID: <20120228153139.7B93B8203C@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: Changeset: r52986:ca63e43565f0 Date: 2012-02-28 16:29 +0100 http://bitbucket.org/pypy/pypy/changeset/ca63e43565f0/ Log: move wrap_streamerror and wrap_oserror_as_ioerror in a separate file. 
This is usefult in the py3k branch because we are about to kill module/_file diff --git a/pypy/interpreter/streamutil.py b/pypy/interpreter/streamutil.py new file mode 100644 --- /dev/null +++ b/pypy/interpreter/streamutil.py @@ -0,0 +1,17 @@ +from pypy.rlib.streamio import StreamError +from pypy.interpreter.error import OperationError, wrap_oserror2 + +def wrap_streamerror(space, e, w_filename=None): + if isinstance(e, StreamError): + return OperationError(space.w_ValueError, + space.wrap(e.message)) + elif isinstance(e, OSError): + return wrap_oserror_as_ioerror(space, e, w_filename) + else: + # should not happen: wrap_streamerror() is only called when + # StreamErrors = (OSError, StreamError) are raised + return OperationError(space.w_IOError, space.w_None) + +def wrap_oserror_as_ioerror(space, e, w_filename=None): + return wrap_oserror2(space, e, w_filename, + w_exception_class=space.w_IOError) diff --git a/pypy/module/_file/interp_file.py b/pypy/module/_file/interp_file.py --- a/pypy/module/_file/interp_file.py +++ b/pypy/module/_file/interp_file.py @@ -5,14 +5,13 @@ from pypy.rlib import streamio from pypy.rlib.rarithmetic import r_longlong from pypy.rlib.rstring import StringBuilder -from pypy.module._file.interp_stream import (W_AbstractStream, StreamErrors, - wrap_streamerror, wrap_oserror_as_ioerror) +from pypy.module._file.interp_stream import W_AbstractStream, StreamErrors from pypy.module.posix.interp_posix import dispatch_filename from pypy.interpreter.error import OperationError, operationerrfmt from pypy.interpreter.typedef import (TypeDef, GetSetProperty, interp_attrproperty, make_weakref_descr, interp_attrproperty_w) from pypy.interpreter.gateway import interp2app, unwrap_spec - +from pypy.interpreter.streamutil import wrap_streamerror, wrap_oserror_as_ioerror class W_File(W_AbstractStream): """An interp-level file object. 
This implements the same interface than diff --git a/pypy/module/_file/interp_stream.py b/pypy/module/_file/interp_stream.py --- a/pypy/module/_file/interp_stream.py +++ b/pypy/module/_file/interp_stream.py @@ -2,27 +2,13 @@ from pypy.rlib import streamio from pypy.rlib.streamio import StreamErrors -from pypy.interpreter.error import OperationError, wrap_oserror2 +from pypy.interpreter.error import OperationError from pypy.interpreter.baseobjspace import ObjSpace, Wrappable from pypy.interpreter.typedef import TypeDef from pypy.interpreter.gateway import interp2app +from pypy.interpreter.streamutil import wrap_streamerror, wrap_oserror_as_ioerror -def wrap_streamerror(space, e, w_filename=None): - if isinstance(e, streamio.StreamError): - return OperationError(space.w_ValueError, - space.wrap(e.message)) - elif isinstance(e, OSError): - return wrap_oserror_as_ioerror(space, e, w_filename) - else: - # should not happen: wrap_streamerror() is only called when - # StreamErrors = (OSError, StreamError) are raised - return OperationError(space.w_IOError, space.w_None) - -def wrap_oserror_as_ioerror(space, e, w_filename=None): - return wrap_oserror2(space, e, w_filename, - w_exception_class=space.w_IOError) - class W_AbstractStream(Wrappable): """Base class for interp-level objects that expose streams to app-level""" slock = None diff --git a/pypy/module/imp/interp_imp.py b/pypy/module/imp/interp_imp.py --- a/pypy/module/imp/interp_imp.py +++ b/pypy/module/imp/interp_imp.py @@ -1,10 +1,11 @@ from pypy.module.imp import importing from pypy.module._file.interp_file import W_File from pypy.rlib import streamio +from pypy.rlib.streamio import StreamErrors from pypy.interpreter.error import OperationError, operationerrfmt from pypy.interpreter.module import Module from pypy.interpreter.gateway import unwrap_spec -from pypy.module._file.interp_stream import StreamErrors, wrap_streamerror +from pypy.interpreter.streamutil import wrap_streamerror def get_suffixes(space): From noreply at buildbot.pypy.org Tue Feb 28 16:31:40 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Tue, 28 Feb 2012 16:31:40 +0100 (CET) Subject: [pypy-commit] pypy py3k: hg merge default Message-ID: <20120228153140.B35568203C@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52987:0921c2a635db Date: 2012-02-28 16:31 +0100 http://bitbucket.org/pypy/pypy/changeset/0921c2a635db/ Log: hg merge default diff --git a/pypy/interpreter/streamutil.py b/pypy/interpreter/streamutil.py new file mode 100644 --- /dev/null +++ b/pypy/interpreter/streamutil.py @@ -0,0 +1,17 @@ +from pypy.rlib.streamio import StreamError +from pypy.interpreter.error import OperationError, wrap_oserror2 + +def wrap_streamerror(space, e, w_filename=None): + if isinstance(e, StreamError): + return OperationError(space.w_ValueError, + space.wrap(e.message)) + elif isinstance(e, OSError): + return wrap_oserror_as_ioerror(space, e, w_filename) + else: + # should not happen: wrap_streamerror() is only called when + # StreamErrors = (OSError, StreamError) are raised + return OperationError(space.w_IOError, space.w_None) + +def wrap_oserror_as_ioerror(space, e, w_filename=None): + return wrap_oserror2(space, e, w_filename, + w_exception_class=space.w_IOError) diff --git a/pypy/module/_file/interp_file.py b/pypy/module/_file/interp_file.py --- a/pypy/module/_file/interp_file.py +++ b/pypy/module/_file/interp_file.py @@ -5,14 +5,13 @@ from pypy.rlib import streamio from pypy.rlib.rarithmetic import r_longlong from pypy.rlib.rstring import 
StringBuilder -from pypy.module._file.interp_stream import (W_AbstractStream, StreamErrors, - wrap_streamerror, wrap_oserror_as_ioerror) +from pypy.module._file.interp_stream import W_AbstractStream, StreamErrors from pypy.module.posix.interp_posix import dispatch_filename from pypy.interpreter.error import OperationError, operationerrfmt from pypy.interpreter.typedef import (TypeDef, GetSetProperty, interp_attrproperty, make_weakref_descr, interp_attrproperty_w) from pypy.interpreter.gateway import interp2app, unwrap_spec - +from pypy.interpreter.streamutil import wrap_streamerror, wrap_oserror_as_ioerror class W_File(W_AbstractStream): """An interp-level file object. This implements the same interface than diff --git a/pypy/module/_file/interp_stream.py b/pypy/module/_file/interp_stream.py --- a/pypy/module/_file/interp_stream.py +++ b/pypy/module/_file/interp_stream.py @@ -2,27 +2,13 @@ from pypy.rlib import streamio from pypy.rlib.streamio import StreamErrors -from pypy.interpreter.error import OperationError, wrap_oserror2 +from pypy.interpreter.error import OperationError from pypy.interpreter.baseobjspace import ObjSpace, Wrappable from pypy.interpreter.typedef import TypeDef from pypy.interpreter.gateway import interp2app +from pypy.interpreter.streamutil import wrap_streamerror, wrap_oserror_as_ioerror -def wrap_streamerror(space, e, w_filename=None): - if isinstance(e, streamio.StreamError): - return OperationError(space.w_ValueError, - space.wrap(e.message)) - elif isinstance(e, OSError): - return wrap_oserror_as_ioerror(space, e, w_filename) - else: - # should not happen: wrap_streamerror() is only called when - # StreamErrors = (OSError, StreamError) are raised - return OperationError(space.w_IOError, space.w_None) - -def wrap_oserror_as_ioerror(space, e, w_filename=None): - return wrap_oserror2(space, e, w_filename, - w_exception_class=space.w_IOError) - class W_AbstractStream(Wrappable): """Base class for interp-level objects that expose streams to app-level""" slock = None diff --git a/pypy/module/cpyext/include/object.h b/pypy/module/cpyext/include/object.h --- a/pypy/module/cpyext/include/object.h +++ b/pypy/module/cpyext/include/object.h @@ -56,6 +56,8 @@ #define Py_TYPE(ob) (((PyObject*)(ob))->ob_type) #define Py_SIZE(ob) (((PyVarObject*)(ob))->ob_size) +#define _Py_ForgetReference(ob) /* nothing */ + #define Py_None (&_Py_NoneStruct) /* diff --git a/pypy/module/imp/interp_imp.py b/pypy/module/imp/interp_imp.py --- a/pypy/module/imp/interp_imp.py +++ b/pypy/module/imp/interp_imp.py @@ -1,4 +1,6 @@ from pypy.module.imp import importing +from pypy.rlib import streamio +from pypy.rlib.streamio import StreamErrors from pypy.interpreter.error import OperationError, operationerrfmt from pypy.interpreter.module import Module from pypy.interpreter.gateway import unwrap_spec @@ -7,7 +9,7 @@ from pypy.rlib import streamio from pypy.module._io.interp_iobase import W_IOBase from pypy.module._io import interp_io -from pypy.module._file.interp_stream import wrap_streamerror +from pypy.interpreter.streamutil import wrap_streamerror def get_suffixes(space): From noreply at buildbot.pypy.org Tue Feb 28 16:34:50 2012 From: noreply at buildbot.pypy.org (arigo) Date: Tue, 28 Feb 2012 16:34:50 +0100 (CET) Subject: [pypy-commit] pypy default: Add a passing test. 
Message-ID: <20120228153450.C41EA8203C@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r52988:caf81be4ceb9 Date: 2012-02-28 16:34 +0100 http://bitbucket.org/pypy/pypy/changeset/caf81be4ceb9/ Log: Add a passing test. diff --git a/pypy/jit/backend/x86/test/test_gc_integration.py b/pypy/jit/backend/x86/test/test_gc_integration.py --- a/pypy/jit/backend/x86/test/test_gc_integration.py +++ b/pypy/jit/backend/x86/test/test_gc_integration.py @@ -257,3 +257,66 @@ assert gc_ll_descr.addrs[0] == nurs_adr + 24 # this should call slow path once assert gc_ll_descr.calls == [24] + + def test_save_regs_around_malloc(self): + S1 = lltype.GcStruct('S1') + S2 = lltype.GcStruct('S2', ('s0', lltype.Ptr(S1)), + ('s1', lltype.Ptr(S1)), + ('s2', lltype.Ptr(S1)), + ('s3', lltype.Ptr(S1)), + ('s4', lltype.Ptr(S1)), + ('s5', lltype.Ptr(S1)), + ('s6', lltype.Ptr(S1)), + ('s7', lltype.Ptr(S1)), + ('s8', lltype.Ptr(S1)), + ('s9', lltype.Ptr(S1)), + ('s10', lltype.Ptr(S1)), + ('s11', lltype.Ptr(S1)), + ('s12', lltype.Ptr(S1)), + ('s13', lltype.Ptr(S1)), + ('s14', lltype.Ptr(S1)), + ('s15', lltype.Ptr(S1))) + cpu = self.cpu + self.namespace = self.namespace.copy() + for i in range(16): + self.namespace['ds%i' % i] = cpu.fielddescrof(S2, 's%d' % i) + ops = ''' + [p0] + p1 = getfield_gc(p0, descr=ds0) + p2 = getfield_gc(p0, descr=ds1) + p3 = getfield_gc(p0, descr=ds2) + p4 = getfield_gc(p0, descr=ds3) + p5 = getfield_gc(p0, descr=ds4) + p6 = getfield_gc(p0, descr=ds5) + p7 = getfield_gc(p0, descr=ds6) + p8 = getfield_gc(p0, descr=ds7) + p9 = getfield_gc(p0, descr=ds8) + p10 = getfield_gc(p0, descr=ds9) + p11 = getfield_gc(p0, descr=ds10) + p12 = getfield_gc(p0, descr=ds11) + p13 = getfield_gc(p0, descr=ds12) + p14 = getfield_gc(p0, descr=ds13) + p15 = getfield_gc(p0, descr=ds14) + p16 = getfield_gc(p0, descr=ds15) + # + # now all registers are in use + p17 = call_malloc_nursery(40) + p18 = call_malloc_nursery(40) # overflow + # + finish(p1, p2, p3, p4, p5, p6, p7, p8, \ + p9, p10, p11, p12, p13, p14, p15, p16) + ''' + s2 = lltype.malloc(S2) + for i in range(16): + setattr(s2, 's%d' % i, lltype.malloc(S1)) + s2ref = lltype.cast_opaque_ptr(llmemory.GCREF, s2) + # + self.interpret(ops, [s2ref]) + gc_ll_descr = cpu.gc_ll_descr + gc_ll_descr.check_nothing_in_nursery() + assert gc_ll_descr.calls == [40] + # check the returned pointers + for i in range(16): + s1ref = self.cpu.get_latest_value_ref(i) + s1 = lltype.cast_opaque_ptr(lltype.Ptr(S1), s1ref) + assert s1 == getattr(s2, 's%d' % i) From noreply at buildbot.pypy.org Tue Feb 28 16:44:05 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Tue, 28 Feb 2012 16:44:05 +0100 (CET) Subject: [pypy-commit] pypy py3k: kill module/_file. Files are handled by the _io module now Message-ID: <20120228154405.EA5808203C@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52989:96fc36348096 Date: 2012-02-28 16:43 +0100 http://bitbucket.org/pypy/pypy/changeset/96fc36348096/ Log: kill module/_file. 
Files are handled by the _io module now diff --git a/pypy/module/_file/__init__.py b/pypy/module/_file/__init__.py deleted file mode 100644 --- a/pypy/module/_file/__init__.py +++ /dev/null @@ -1,46 +0,0 @@ - -# Package initialisation -from pypy.interpreter.mixedmodule import MixedModule -import sys - -class Module(MixedModule): - appleveldefs = { - } - - interpleveldefs = { - "file": "interp_file.W_File", - "set_file_encoding": "interp_file.set_file_encoding", - } - - def __init__(self, space, *args): - "NOT_RPYTHON" - - # on windows with oo backends, remove file.truncate, - # because the implementation is based on rffi - if (sys.platform == 'win32' and - space.config.translation.type_system == 'ootype'): - from pypy.module._file.interp_file import W_File - del W_File.typedef.rawdict['truncate'] - - MixedModule.__init__(self, space, *args) - - def shutdown(self, space): - # at shutdown, flush all open streams. Ignore I/O errors. - from pypy.module._file.interp_file import getopenstreams, StreamErrors - openstreams = getopenstreams(space) - while openstreams: - for stream in openstreams.keys(): - try: - del openstreams[stream] - except KeyError: - pass # key was removed in the meantime - else: - try: - stream.flush() - except StreamErrors: - pass - - def setup_after_space_initialization(self): - from pypy.module._file.interp_file import W_File - from pypy.objspace.std.transparent import register_proxyable - register_proxyable(self.space, W_File) diff --git a/pypy/module/_file/interp_file.py b/pypy/module/_file/interp_file.py deleted file mode 100644 --- a/pypy/module/_file/interp_file.py +++ /dev/null @@ -1,569 +0,0 @@ -import py -import os -import stat -import errno -from pypy.rlib import streamio -from pypy.rlib.rarithmetic import r_longlong -from pypy.rlib.rstring import StringBuilder -from pypy.module._file.interp_stream import W_AbstractStream, StreamErrors -from pypy.module.posix.interp_posix import dispatch_filename -from pypy.interpreter.error import OperationError, operationerrfmt -from pypy.interpreter.typedef import (TypeDef, GetSetProperty, - interp_attrproperty, make_weakref_descr, interp_attrproperty_w) -from pypy.interpreter.gateway import interp2app, unwrap_spec -from pypy.interpreter.streamutil import wrap_streamerror, wrap_oserror_as_ioerror - -class W_File(W_AbstractStream): - """An interp-level file object. This implements the same interface than - the app-level files, with the following differences: - - * method names are prefixed with 'file_' - * the 'normal' app-level constructor is implemented by file___init__(). - * the methods with the 'direct_' prefix should be used if the caller - locks and unlocks the file itself, and takes care of StreamErrors. 
- """ - - # Default values until the file is successfully opened - stream = None - w_name = None - mode = "" - binary = False - softspace= 0 # Required according to file object docs - encoding = None - errors = None - fd = -1 - - newlines = 0 # Updated when the stream is closed - - def __init__(self, space): - self.space = space - - def __del__(self): - # assume that the file and stream objects are only visible in the - # thread that runs __del__, so no race condition should be possible - self.clear_all_weakrefs() - if self.stream is not None: - self.enqueue_for_destruction(self.space, W_File.destructor, - 'close() method of ') - - def destructor(self): - assert isinstance(self, W_File) - try: - self.direct_close() - except StreamErrors, e: - operr = wrap_streamerror(self.space, e, self.w_name) - raise operr - - def fdopenstream(self, stream, fd, mode, w_name=None): - self.fd = fd - self.mode = mode - self.binary = "b" in mode - if w_name is not None: - self.w_name = w_name - self.stream = stream - if stream.flushable(): - getopenstreams(self.space)[stream] = None - - def check_not_dir(self, fd): - try: - st = os.fstat(fd) - except OSError: - pass - else: - if (stat.S_ISDIR(st[0])): - ose = OSError(errno.EISDIR, '') - raise wrap_oserror_as_ioerror(self.space, ose, self.w_name) - - def check_mode_ok(self, mode): - if (not mode or mode[0] not in ['r', 'w', 'a', 'U'] or - ('U' in mode and ('w' in mode or 'a' in mode))): - space = self.space - raise operationerrfmt(space.w_ValueError, - "invalid mode: '%s'", mode) - - def check_closed(self): - if self.stream is None: - raise OperationError(self.space.w_ValueError, - self.space.wrap("I/O operation on closed file") - ) - - def getstream(self): - """Return self.stream or raise an app-level ValueError if missing - (i.e. if the file is closed).""" - self.check_closed() - return self.stream - - def _when_reading_first_flush(self, otherfile): - """Flush otherfile before reading from self.""" - self.stream = streamio.CallbackReadFilter(self.stream, - otherfile._try_to_flush) - - def _try_to_flush(self): - stream = self.stream - if stream is not None: - stream.flush() - - # ____________________________________________________________ - # - # The 'direct_' methods assume that the caller already acquired the - # file lock. They don't convert StreamErrors to OperationErrors, too. - - @unwrap_spec(mode=str, buffering=int) - def direct___init__(self, w_name, mode='r', buffering=-1): - self.direct_close() - self.w_name = w_name - self.check_mode_ok(mode) - stream = dispatch_filename(streamio.open_file_as_stream)( - self.space, w_name, mode, buffering) - fd = stream.try_to_find_file_descriptor() - self.check_not_dir(fd) - self.fdopenstream(stream, fd, mode) - - def direct___enter__(self): - self.check_closed() - return self - - def file__exit__(self, __args__): - """__exit__(*excinfo) -> None. 
Closes the file.""" - self.space.call_method(self, "close") - # can't return close() value - return None - - def direct_fdopen(self, fd, mode='r', buffering=-1): - self.direct_close() - self.w_name = self.space.wrap('') - self.check_mode_ok(mode) - stream = streamio.fdopen_as_stream(fd, mode, buffering) - self.fdopenstream(stream, fd, mode) - - def direct_close(self): - space = self.space - stream = self.stream - if stream is not None: - self.newlines = self.stream.getnewlines() - self.stream = None - self.fd = -1 - openstreams = getopenstreams(self.space) - try: - del openstreams[stream] - except KeyError: - pass - stream.close() - - def direct_fileno(self): - self.getstream() # check if the file is still open - return self.fd - - def direct_flush(self): - self.getstream().flush() - - def direct___next__(self): - line = self.getstream().readline() - if line == '': - raise OperationError(self.space.w_StopIteration, self.space.w_None) - return line - - @unwrap_spec(n=int) - def direct_read(self, n=-1): - stream = self.getstream() - if n < 0: - return stream.readall() - else: - result = StringBuilder(n) - while n > 0: - data = stream.read(n) - if not data: - break - n -= len(data) - result.append(data) - return result.build() - - @unwrap_spec(size=int) - def direct_readline(self, size=-1): - stream = self.getstream() - if size < 0: - return stream.readline() - else: - # very inefficient unless there is a peek() - result = [] - while size > 0: - # "peeks" on the underlying stream to see how many chars - # we can safely read without reading past an end-of-line - peeked = stream.peek() - pn = peeked.find("\n", 0, size) - if pn < 0: - pn = min(size-1, len(peeked)) - c = stream.read(pn + 1) - if not c: - break - result.append(c) - if c.endswith('\n'): - break - size -= len(c) - return ''.join(result) - - @unwrap_spec(size=int) - def direct_readlines(self, size=0): - stream = self.getstream() - # this is implemented as: .read().split('\n') - # except that it keeps the \n in the resulting strings - if size <= 0: - data = stream.readall() - else: - data = stream.read(size) - result = [] - splitfrom = 0 - for i in range(len(data)): - if data[i] == '\n': - result.append(data[splitfrom : i + 1]) - splitfrom = i + 1 - # - if splitfrom < len(data): - # there is a partial line at the end. If size > 0, it is likely - # to be because the 'read(size)' returned data up to the middle - # of a line. In that case, use 'readline()' to read until the - # end of the current line. - data = data[splitfrom:] - if size > 0: - data += stream.readline() - result.append(data) - return result - - @unwrap_spec(offset=r_longlong, whence=int) - def direct_seek(self, offset, whence=0): - self.getstream().seek(offset, whence) - - def direct_tell(self): - return self.getstream().tell() - - def direct_truncate(self, w_size=None): # note: a wrapped size! 
- stream = self.getstream() - space = self.space - if w_size is None or space.is_w(w_size, space.w_None): - size = stream.tell() - else: - size = space.r_longlong_w(w_size) - stream.truncate(size) - - def direct_write(self, w_data): - space = self.space - if not self.binary and space.isinstance_w(w_data, space.w_unicode): - w_data = space.call_method(w_data, "encode", space.wrap(self.encoding), space.wrap(self.errors)) - data = space.bufferstr_w(w_data) - self.do_direct_write(data) - - def do_direct_write(self, data): - self.softspace = 0 - self.getstream().write(data) - - def direct___iter__(self): - self.getstream() - return self - direct_xreadlines = direct___iter__ - - def direct_isatty(self): - self.getstream() # check if the file is still open - return os.isatty(self.fd) - - # ____________________________________________________________ - # - # The 'file_' methods are the one exposed to app-level. - - def file_fdopen(self, fd, mode="r", buffering=-1): - try: - self.direct_fdopen(fd, mode, buffering) - except StreamErrors, e: - raise wrap_streamerror(self.space, e, self.w_name) - - _exposed_method_names = [] - - def _decl(class_scope, name, docstring, - wrapresult="space.wrap(result)"): - # hack hack to build a wrapper around the direct_xxx methods. - # The wrapper adds lock/unlock calls and a space.wrap() on - # the result, conversion of stream errors to OperationErrors, - # and has the specified docstring and unwrap_spec. - direct_fn = class_scope['direct_' + name] - co = direct_fn.func_code - argnames = co.co_varnames[:co.co_argcount] - defaults = direct_fn.func_defaults or () - unwrap_spec = getattr(direct_fn, 'unwrap_spec', None) - - args = [] - for i, argname in enumerate(argnames): - try: - default = defaults[-len(argnames) + i] - except IndexError: - args.append(argname) - else: - args.append('%s=%r' % (argname, default)) - sig = ', '.join(args) - assert argnames[0] == 'self' - callsig = ', '.join(argnames[1:]) - - src = py.code.Source(""" - def file_%(name)s(%(sig)s): - %(docstring)r - space = self.space - self.lock() - try: - try: - result = self.direct_%(name)s(%(callsig)s) - except StreamErrors, e: - raise wrap_streamerror(space, e, self.w_name) - finally: - self.unlock() - return %(wrapresult)s - """ % locals()) - exec str(src) in globals(), class_scope - if unwrap_spec is not None: - class_scope['file_' + name].unwrap_spec = unwrap_spec - class_scope['_exposed_method_names'].append(name) - - - _decl(locals(), "__init__", """Opens a file.""") - - _decl(locals(), "__enter__", """__enter__() -> self.""") - - _decl(locals(), "close", - """close() -> None or (perhaps) an integer. Close the file. - -Sets data attribute .closed to True. A closed file cannot be used for -further I/O operations. close() may be called more than once without -error. Some kinds of file objects (for example, opened by popen()) -may return an exit status upon closing.""") - # NB. close() needs to use the stream lock to avoid double-closes or - # close-while-another-thread-uses-it. - - - _decl(locals(), "fileno", - '''fileno() -> integer "file descriptor". - -This is needed for lower-level file interfaces, such os.read().''') - - _decl(locals(), "flush", - """flush() -> None. Flush the internal I/O buffer.""") - - _decl(locals(), "isatty", - """isatty() -> true or false. 
True if the file is connected to a tty device.""") - - _decl(locals(), "__next__", - """__next__() -> the next line in the file, or raise StopIteration""") - - _decl(locals(), "read", - """read([size]) -> read at most size bytes, returned as a string. - -If the size argument is negative or omitted, read until EOF is reached. -Notice that when in non-blocking mode, less data than what was requested -may be returned, even if no size parameter was given.""") - - _decl(locals(), "readline", - """readline([size]) -> next line from the file, as a string. - -Retain newline. A non-negative size argument limits the maximum -number of bytes to return (an incomplete line may be returned then). -Return an empty string at EOF.""") - - _decl(locals(), "readlines", - """readlines([size]) -> list of strings, each a line from the file. - -Call readline() repeatedly and return a list of the lines so read. -The optional size argument, if given, is an approximate bound on the -total number of bytes in the lines returned.""", - wrapresult = "wrap_list_of_str(space, result)") - - _decl(locals(), "seek", - """seek(offset[, whence]) -> None. Move to new file position. - -Argument offset is a byte count. Optional argument whence defaults to -0 (offset from start of file, offset should be >= 0); other values are 1 -(move relative to current position, positive or negative), and 2 (move -relative to end of file, usually negative, although many platforms allow -seeking beyond the end of a file). If the file is opened in text mode, -only offsets returned by tell() are legal. Use of other offsets causes -undefined behavior. -Note that not all file objects are seekable.""") - - _decl(locals(), "tell", - "tell() -> current file position, an integer (may be a long integer).") - - _decl(locals(), "truncate", - """truncate([size]) -> None. Truncate the file to at most size bytes. - -Size defaults to the current file position, as returned by tell().""") - - _decl(locals(), "write", - """write(str) -> None. Write string str to file. - -Note that due to buffering, flush() or close() may be needed before -the file on disk reflects the data written.""") - - _decl(locals(), "__iter__", - """Iterating over files, as in 'for line in f:', returns each line of -the file one by one.""") - - _decl(locals(), "xreadlines", - """xreadlines() -> returns self. - -For backward compatibility. File objects now include the performance -optimizations previously implemented in the xreadlines module.""") - - def file__repr__(self): - if self.stream is None: - head = "closed" - else: - head = "open" - info = "%s file %s, mode '%s'" % ( - head, - self.getdisplayname(), - self.mode) - return self.getrepr(self.space, info) - - def getdisplayname(self): - w_name = self.w_name - if w_name is None: - return '?' - elif self.space.is_true(self.space.isinstance(w_name, - self.space.w_str)): - return "'%s'" % self.space.str_w(w_name) - else: - return self.space.str_w(self.space.repr(w_name)) - - def file_writelines(self, w_lines): - """writelines(sequence_of_strings) -> None. Write the strings to the file. - -Note that newlines are not added. The sequence can be any iterable object -producing strings. 
This is equivalent to calling write() for each string.""" - - space = self.space - self.check_closed() - w_iterator = space.iter(w_lines) - while True: - try: - w_line = space.next(w_iterator) - except OperationError, e: - if not e.match(space, space.w_StopIteration): - raise - break # done - self.file_write(w_line) - - def file_readinto(self, w_rwbuffer): - """readinto() -> Undocumented. Don't use this; it may go away.""" - # XXX not the most efficient solution as it doesn't avoid the copying - space = self.space - rwbuffer = space.rwbuffer_w(w_rwbuffer) - w_data = self.file_read(rwbuffer.getlength()) - data = space.str_w(w_data) - rwbuffer.setslice(0, data) - return space.wrap(len(data)) - - -# ____________________________________________________________ - - -def descr_file__new__(space, w_subtype, __args__): - file = space.allocate_instance(W_File, w_subtype) - W_File.__init__(file, space) - return space.wrap(file) - - at unwrap_spec(fd=int, mode=str, buffering=int) -def descr_file_fdopen(space, w_subtype, fd, mode='r', buffering=-1): - file = space.allocate_instance(W_File, w_subtype) - W_File.__init__(file, space) - file.file_fdopen(fd, mode, buffering) - return space.wrap(file) - -def descr_file_closed(space, file): - return space.wrap(file.stream is None) - -def descr_file_newlines(space, file): - if file.stream: - newlines = file.stream.getnewlines() - else: - newlines = file.newlines - if newlines == 0: - return space.w_None - elif newlines == 1: - return space.wrap("\r") - elif newlines == 2: - return space.wrap("\n") - elif newlines == 4: - return space.wrap("\r\n") - result = [] - if newlines & 1: - result.append(space.wrap('\r')) - if newlines & 2: - result.append(space.wrap('\n')) - if newlines & 4: - result.append(space.wrap('\r\n')) - return space.newtuple(result[:]) - -def descr_file_softspace(space, file): - return space.wrap(file.softspace) - -def descr_file_setsoftspace(space, file, w_newvalue): - file.softspace = space.int_w(w_newvalue) - -# ____________________________________________________________ - -W_File.typedef = TypeDef( - "file", - __doc__ = """file(name[, mode[, buffering]]) -> file object - -Open a file. The mode can be 'r', 'w' or 'a' for reading (default), -writing or appending. The file will be created if it doesn't exist -when opened for writing or appending; it will be truncated when -opened for writing. Add a 'b' to the mode for binary files. -Add a '+' to the mode to allow simultaneous reading and writing. -If the buffering argument is given, 0 means unbuffered, 1 means line -buffered, and larger numbers specify the buffer size. -Add a 'U' to mode to open the file for input with universal newline -support. Any line ending in the input file will be seen as a '\n' -in Python. Also, a file so opened gains the attribute 'newlines'; -the value for this attribute is one of None (no newline read yet), -'\r', '\n', '\r\n' or a tuple containing all the newline types seen. - -Note: open() is an alias for file(). 
-""", - __new__ = interp2app(descr_file__new__), - fdopen = interp2app(descr_file_fdopen, as_classmethod=True), - name = interp_attrproperty_w('w_name', cls=W_File, doc="file name"), - mode = interp_attrproperty('mode', cls=W_File, - doc = "file mode ('r', 'U', 'w', 'a', " - "possibly with 'b' or '+' added)"), - encoding = interp_attrproperty('encoding', cls=W_File), - errors = interp_attrproperty('errors', cls=W_File), - closed = GetSetProperty(descr_file_closed, cls=W_File, - doc="True if the file is closed"), - newlines = GetSetProperty(descr_file_newlines, cls=W_File, - doc="end-of-line convention used in this file"), - softspace= GetSetProperty(descr_file_softspace, - descr_file_setsoftspace, - cls=W_File, - doc="Support for 'print'."), - __repr__ = interp2app(W_File.file__repr__), - readinto = interp2app(W_File.file_readinto), - writelines = interp2app(W_File.file_writelines), - __exit__ = interp2app(W_File.file__exit__), - __weakref__ = make_weakref_descr(W_File), - **dict([(name, interp2app(getattr(W_File, 'file_' + name))) - for name in W_File._exposed_method_names]) - ) - -# ____________________________________________________________ - -def wrap_list_of_str(space, lst): - return space.newlist([space.wrap(s) for s in lst]) - -class FileState: - def __init__(self, space): - self.openstreams = {} - -def getopenstreams(space): - return space.fromcache(FileState).openstreams - - - at unwrap_spec(file=W_File, encoding="str_or_None", errors="str_or_None") -def set_file_encoding(space, file, encoding=None, errors=None): - file.encoding = encoding - file.errors = errors diff --git a/pypy/module/_file/interp_stream.py b/pypy/module/_file/interp_stream.py deleted file mode 100644 --- a/pypy/module/_file/interp_stream.py +++ /dev/null @@ -1,123 +0,0 @@ -import py -from pypy.rlib import streamio -from pypy.rlib.streamio import StreamErrors - -from pypy.interpreter.error import OperationError -from pypy.interpreter.baseobjspace import ObjSpace, Wrappable -from pypy.interpreter.typedef import TypeDef -from pypy.interpreter.gateway import interp2app -from pypy.interpreter.streamutil import wrap_streamerror, wrap_oserror_as_ioerror - - -class W_AbstractStream(Wrappable): - """Base class for interp-level objects that expose streams to app-level""" - slock = None - slockowner = None - # Locking issues: - # * Multiple threads can access the same W_AbstractStream in - # parallel, because many of the streamio calls eventually - # release the GIL in some external function call. - # * Parallel accesses have bad (and crashing) effects on the - # internal state of the buffering levels of the stream in - # particular. - # * We can't easily have a lock on each W_AbstractStream because we - # can't translate prebuilt lock objects. - # We are still protected by the GIL, so the easiest is to create - # the lock on-demand. 
- - def __init__(self, space, stream): - self.space = space - self.stream = stream - - def _try_acquire_lock(self): - # this function runs with the GIL acquired so there is no race - # condition in the creation of the lock - if self.slock is None: - self.slock = self.space.allocate_lock() - me = self.space.getexecutioncontext() # used as thread ident - if self.slockowner is me: - return False # already acquired by the current thread - self.slock.acquire(True) - assert self.slockowner is None - self.slockowner = me - return True - - def _release_lock(self): - self.slockowner = None - self.slock.release() - - def lock(self): - if not self._try_acquire_lock(): - raise OperationError(self.space.w_RuntimeError, - self.space.wrap("stream lock already held")) - - def unlock(self): - me = self.space.getexecutioncontext() # used as thread ident - if self.slockowner is not me: - raise OperationError(self.space.w_RuntimeError, - self.space.wrap("stream lock is not held")) - self._release_lock() - - def _freeze_(self): - # remove the lock object, which will be created again as needed at - # run-time. - self.slock = None - assert self.slockowner is None - return False - - def stream_read(self, n): - """ - An interface for direct interp-level usage of W_AbstractStream, - e.g. from interp_marshal.py. - NOTE: this assumes that the stream lock is already acquired. - Like os.read(), this can return less than n bytes. - """ - try: - return self.stream.read(n) - except StreamErrors, e: - raise wrap_streamerror(self.space, e) - - def do_write(self, data): - """ - An interface for direct interp-level usage of W_Stream, - e.g. from interp_marshal.py. - NOTE: this assumes that the stream lock is already acquired. - """ - try: - self.stream.write(data) - except StreamErrors, e: - raise wrap_streamerror(self.space, e) - -# ____________________________________________________________ - -class W_Stream(W_AbstractStream): - """A class that exposes the raw stream interface to app-level.""" - # this exists for historical reasons, and kept around in case we want - # to re-expose the raw stream interface to app-level. 
- -for name, argtypes in streamio.STREAM_METHODS.iteritems(): - numargs = len(argtypes) - args = ", ".join(["v%s" % i for i in range(numargs)]) - exec py.code.Source(""" - def %(name)s(self, space, %(args)s): - acquired = self.try_acquire_lock() - try: - try: - result = self.stream.%(name)s(%(args)s) - except streamio.StreamError, e: - raise OperationError(space.w_ValueError, - space.wrap(e.message)) - except OSError, e: - raise wrap_oserror_as_ioerror(space, e) - finally: - if acquired: - self.release_lock() - return space.wrap(result) - %(name)s.unwrap_spec = [W_Stream, ObjSpace] + argtypes - """ % locals()).compile() in globals() - -W_Stream.typedef = TypeDef("Stream", - lock = interp2app(W_Stream.lock), - unlock = interp2app(W_Stream.unlock), - **dict([(name, interp2app(globals()[name])) - for name, _ in streamio.STREAM_METHODS.iteritems()])) diff --git a/pypy/module/_file/test/__init__.py b/pypy/module/_file/test/__init__.py deleted file mode 100644 diff --git a/pypy/module/_file/test/test_file.py b/pypy/module/_file/test/test_file.py deleted file mode 100644 --- a/pypy/module/_file/test/test_file.py +++ /dev/null @@ -1,454 +0,0 @@ -from __future__ import with_statement -import py, os, errno - -from pypy.conftest import gettestobjspace, option - -def getfile(space): - return space.appexec([], """(): - try: - import _file - return _file.file - except ImportError: # when running with py.test -A - return file - """) - -class AppTestFile(object): - def setup_class(cls): - cls.space = gettestobjspace(usemodules=("_file", )) - cls.w_temppath = cls.space.wrap( - str(py.test.ensuretemp("fileimpl").join("foo.txt"))) - cls.w_file = getfile(cls.space) - - def test_simple(self): - f = self.file(self.temppath, "w") - f.write("foo") - f.close() - f = self.file(self.temppath, "r") - raises(TypeError, f.read, None) - try: - s = f.read() - assert s == "foo" - finally: - f.close() - - def test_readline(self): - f = self.file(self.temppath, "w") - try: - f.write("foo\nbar\n") - finally: - f.close() - f = self.file(self.temppath, "r") - raises(TypeError, f.readline, None) - try: - s = f.readline() - assert s == "foo\n" - s = f.readline() - assert s == "bar\n" - finally: - f.close() - - def test_readlines(self): - f = self.file(self.temppath, "w") - try: - f.write("foo\nbar\n") - finally: - f.close() - f = self.file(self.temppath, "r") - raises(TypeError, f.readlines, None) - try: - s = f.readlines() - assert s == ["foo\n", "bar\n"] - finally: - f.close() - - - def test_fdopen(self): - import os - f = self.file(self.temppath, "w") - try: - f.write("foo\nbaz\n") - finally: - f.close() - try: - fdopen = self.file.fdopen - except AttributeError: - fdopen = os.fdopen # when running with -A - fd = os.open(self.temppath, os.O_WRONLY | os.O_CREAT) - f2 = fdopen(fd, "a") - f2.seek(0, 2) - f2.write("bar\nboo") - f2.close() - # don't close fd, will get a whining __del__ - f = self.file(self.temppath, "r") - try: - s = f.read() - assert s == "foo\nbaz\nbar\nboo" - finally: - f.close() - - def test_badmode(self): - raises(ValueError, self.file, "foo", "bar") - - def test_wraposerror(self): - raises(IOError, self.file, "hopefully/not/existant.bar") - - def test_correct_file_mode(self): - import os - f = self.file(self.temppath, "w") - umask = os.umask(0) - os.umask(umask) - try: - f.write("foo") - finally: - f.close() - assert oct(os.stat(self.temppath).st_mode & 0777) == oct(0666 & ~umask) - - def test_newlines(self): - import os - f = self.file(self.temppath, "wb") - f.write("\r\n") - assert f.newlines is None - 
f.close() - assert f.newlines is None - f = self.file(self.temppath, "rU") - res = f.read() - assert res == "\n" - assert f.newlines == "\r\n" - f.close() - assert f.newlines == "\r\n" - - # now use readline() - f = self.file(self.temppath, "rU") - res = f.readline() - assert res == "\n" - # this tell() is necessary for CPython as well to update f.newlines - f.tell() - assert f.newlines == "\r\n" - res = f.readline() - assert res == "" - assert f.newlines == "\r\n" - f.close() - - def test_unicode(self): - import os - f = self.file(self.temppath, "w") - f.write(u"hello\n") - raises(UnicodeEncodeError, f.write, u'\xe9') - f.close() - f = self.file(self.temppath, "r") - res = f.read() - assert res == "hello\n" - assert type(res) is str - f.close() - - def test_unicode_filename(self): - import sys - try: - u'\xe9'.encode(sys.getfilesystemencoding()) - except UnicodeEncodeError: - skip("encoding not good enough") - f = self.file(self.temppath + u'\xe9', "w") - f.close() - - def test_oserror_has_filename(self): - try: - f = self.file("file that is clearly not there") - except IOError, e: - assert e.filename == 'file that is clearly not there' - else: - raise Exception("did not raise") - - def test_readline_mixed_with_read(self): - s = '''From MAILER-DAEMON Wed Jan 14 14:42:30 2009 -From: foo - -0 -From MAILER-DAEMON Wed Jan 14 14:42:44 2009 -Return-Path: -X-Original-To: gkj+person at localhost -Delivered-To: gkj+person at localhost -Received: from localhost (localhost [127.0.0.1]) - by andy.gregorykjohnson.com (Postfix) with ESMTP id 356ED9DD17 - for ; Wed, 13 Jul 2005 17:23:16 -0400 (EDT) -Delivered-To: gkj at sundance.gregorykjohnson.com''' - f = self.file(self.temppath, "w") - f.write(s) - f.close() - f = self.file(self.temppath, "r") - f.seek(0L) - f.readline() - pos = f.tell() - assert f.read(12L) == 'From: foo\n\n0' - f.seek(pos) - assert f.read(12L) == 'From: foo\n\n0' - f.close() - - def test_invalid_modes(self): - raises(ValueError, self.file, self.temppath, "aU") - raises(ValueError, self.file, self.temppath, "wU+") - raises(ValueError, self.file, self.temppath, "") - - def test_write_resets_softspace(self): - f = self.file(self.temppath, "w") - print >> f, '.', - f.write(',') - print >> f, '.', - f.close() - f = self.file(self.temppath, "r") - res = f.read() - assert res == ".,." 
- f.close() - - def test_open_dir(self): - import os - - exc = raises(IOError, self.file, os.curdir) - assert exc.value.filename == os.curdir - exc = raises(IOError, self.file, os.curdir, 'w') - assert exc.value.filename == os.curdir - - def test_encoding_errors(self): - import _file - - with self.file(self.temppath, "w") as f: - _file.set_file_encoding(f, "utf-8") - f.write(u'15\u20ac') - - assert f.encoding == "utf-8" - assert f.errors is None - - with self.file(self.temppath, "r") as f: - data = f.read() - assert data == '15\xe2\x82\xac' - - with self.file(self.temppath, "w") as f: - _file.set_file_encoding(f, "iso-8859-1", "ignore") - f.write(u'15\u20ac') - - assert f.encoding == "iso-8859-1" - assert f.errors == "ignore" - - with self.file(self.temppath, "r") as f: - data = f.read() - assert data == "15" - - def test_exception_from_close(self): - import os - f = self.file(self.temppath, 'w') - os.close(f.fileno()) - raises(IOError, f.close) # bad file descriptor - - def test_exception_from_del(self): - import os, gc, sys, cStringIO - f = self.file(self.temppath, 'w') - g = cStringIO.StringIO() - preverr = sys.stderr - try: - sys.stderr = g - os.close(f.fileno()) - del f - gc.collect() # bad file descriptor in f.__del__() - finally: - sys.stderr = preverr - import errno - assert os.strerror(errno.EBADF) in g.getvalue() - # the following is a "nice to have" feature that CPython doesn't have - if '__pypy__' in sys.builtin_module_names: - assert self.temppath in g.getvalue() - - -class AppTestNonblocking(object): - def setup_class(cls): - from pypy.module._file.interp_file import W_File - - cls.old_read = os.read - - if option.runappdirect: - py.test.skip("works with internals of _file impl on py.py") - import platform - if platform.system() == 'Windows': - # XXX This test crashes until someone implements something like - # XXX verify_fd from - # XXX http://hg.python.org/cpython/file/80ddbd822227/Modules/posixmodule.c#l434 - # XXX and adds it to fopen - assert False - - state = [0] - def read(fd, n=None): - if fd != 42: - return cls.old_read(fd, n) - if state[0] == 0: - state[0] += 1 - return "xyz" - if state[0] < 3: - state[0] += 1 - raise OSError(errno.EAGAIN, "xyz") - return '' - os.read = read - stdin = W_File(cls.space) - stdin.file_fdopen(42, "r", 1) - stdin.name = '' - cls.w_stream = stdin - - def teardown_class(cls): - os.read = cls.old_read - - def test_nonblocking_file(self): - res = self.stream.read() - assert res == 'xyz' - -class AppTestConcurrency(object): - # these tests only really make sense on top of a translated pypy-c, - # because on top of py.py the inner calls to os.write() don't - # release our object space's GIL. 
- def setup_class(cls): - if not option.runappdirect: - py.test.skip("likely to deadlock when interpreted by py.py") - cls.space = gettestobjspace(usemodules=("_file", "thread")) - cls.w_temppath = cls.space.wrap( - str(py.test.ensuretemp("fileimpl").join("concurrency.txt"))) - cls.w_file = getfile(cls.space) - - def test_concurrent_writes(self): - # check that f.write() is atomic - import thread, time - f = self.file(self.temppath, "w+b") - def writer(i): - for j in range(150): - f.write('%3d %3d\n' % (i, j)) - locks[i].release() - locks = [] - for i in range(10): - lock = thread.allocate_lock() - lock.acquire() - locks.append(lock) - for i in range(10): - thread.start_new_thread(writer, (i,)) - # wait until all threads are done - for i in range(10): - locks[i].acquire() - f.seek(0) - lines = f.readlines() - lines.sort() - assert lines == ['%3d %3d\n' % (i, j) for i in range(10) - for j in range(150)] - f.close() - - def test_parallel_writes_and_reads(self): - # Warning: a test like the one below deadlocks CPython - # http://bugs.python.org/issue1164 - # It also deadlocks on py.py because the space GIL is not - # released. - import thread, sys, os - try: - fdopen = self.file.fdopen - except AttributeError: - # when running with -A - skip("deadlocks on top of CPython") - read_fd, write_fd = os.pipe() - fread = fdopen(read_fd, 'rb', 200) - fwrite = fdopen(write_fd, 'wb', 200) - run = True - readers_done = [0] - - def writer(): - f = 0.1 - while run: - print >> fwrite, f, - f = 4*f - 3*f*f - print >> fwrite, "X" - fwrite.flush() - sys.stdout.write('writer ends\n') - - def reader(j): - while True: - data = fread.read(1) - #sys.stdout.write('%d%r ' % (j, data)) - if data == "X": - break - sys.stdout.write('reader ends\n') - readers_done[0] += 1 - - for j in range(3): - thread.start_new_thread(reader, (j,)) - thread.start_new_thread(writer, ()) - - import time - t = time.time() + 5 - print "start of test" - while time.time() < t: - time.sleep(1) - print "end of test" - - assert readers_done[0] == 0 - run = False # end the writers - for i in range(600): - time.sleep(0.4) - sys.stdout.flush() - x = readers_done[0] - if x == 3: - break - print 'readers_done == %d, still waiting...' % (x,) - else: - raise Exception("time out") - print 'Passed.' 
- - -class AppTestFile25: - def setup_class(cls): - cls.space = gettestobjspace(usemodules=("_file", )) - cls.w_temppath = cls.space.wrap( - str(py.test.ensuretemp("fileimpl").join("foo.txt"))) - cls.w_file = getfile(cls.space) - - def test___enter__(self): - f = self.file(self.temppath, 'w') - assert f.__enter__() is f - - def test___exit__(self): - f = self.file(self.temppath, 'w') - assert f.__exit__() is None - assert f.closed - - def test_file_and_with_statement(self): - with self.file(self.temppath, 'w') as f: - f.write('foo') - assert f.closed - - with self.file(self.temppath, 'r') as f: - s = f.readline() - - assert s == "foo" - assert f.closed - - def test_subclass_with(self): - file = self.file - class C(file): - def __init__(self, *args, **kwargs): - self.subclass_closed = False - file.__init__(self, *args, **kwargs) - - def close(self): - self.subclass_closed = True - file.close(self) - - with C(self.temppath, 'w') as f: - pass - assert f.subclass_closed - -def test_flush_at_exit(): - from pypy import conftest - from pypy.tool.option import make_config, make_objspace - from pypy.tool.udir import udir - - tmpfile = udir.join('test_flush_at_exit') - config = make_config(conftest.option) - space = make_objspace(config) - space.appexec([space.wrap(str(tmpfile))], """(tmpfile): - f = open(tmpfile, 'w') - f.write('42') - # no flush() and no close() - import sys; sys._keepalivesomewhereobscure = f - """) - space.finish() - assert tmpfile.read() == '42' diff --git a/pypy/module/_file/test/test_file_extra.py b/pypy/module/_file/test/test_file_extra.py deleted file mode 100644 --- a/pypy/module/_file/test/test_file_extra.py +++ /dev/null @@ -1,608 +0,0 @@ -import os, random, sys -import pypy.tool.udir -import py -from pypy.conftest import gettestobjspace - -udir = pypy.tool.udir.udir.ensure('test_file_extra', dir=1) - - -# XXX this file is a random test. It may only fail occasionally -# depending on the details of the random string SAMPLE. - -SAMPLE = ''.join([chr(random.randrange(0, 256)) for i in range(12487)]) -for extra in ['\r\r', '\r\n', '\n\r', '\n\n']: - for i in range(20): - j = random.randrange(0, len(SAMPLE)+1) - SAMPLE = SAMPLE[:j] + extra + SAMPLE[j:] - if random.random() < 0.1: - SAMPLE += extra # occasionally, also test strings ending in an EOL - - -def setup_module(mod): - udir.join('sample').write(SAMPLE, 'wb') - - -class BaseROTests: - sample = SAMPLE - - def get_expected_lines(self): - lines = self.sample.split('\n') - for i in range(len(lines)-1): - lines[i] += '\n' - # if self.sample ends exactly in '\n', the split() gives a - # spurious empty line at the end. 
Fix it: - if lines[-1] == '': - del lines[-1] - return lines - - def test_simple_tell(self): - assert self.file.tell() == 0 - - def test_plain_read(self): - data1 = self.file.read() - assert data1 == self.sample - - def test_readline(self): - lines = self.expected_lines - for sampleline in lines: - inputline = self.file.readline() - assert inputline == sampleline - for i in range(5): - inputline = self.file.readline() - assert inputline == "" - - def test_readline_max(self): - import random - i = 0 - stop = 0 - while stop < 5: - max = random.randrange(0, 100) - sampleline = self.sample[i:i+max] - nexteol = sampleline.find('\n') - if nexteol >= 0: - sampleline = sampleline[:nexteol+1] - inputline = self.file.readline(max) - assert inputline == sampleline - i += len(sampleline) - if i == len(self.sample): - stop += 1 - - def test_iter(self): - inputlines = list(self.file) - assert inputlines == self.expected_lines - - def test_isatty(self): - assert not self.file.isatty() - - def test_next(self): - lines = self.expected_lines - for sampleline in lines: - inputline = self.file.next() - assert inputline == sampleline - for i in range(5): - raises(StopIteration, self.file.next) - - def test_read(self): - import random - i = 0 - stop = 0 - while stop < 5: - max = random.randrange(0, 100) - samplebuf = self.sample[i:i+max] - inputbuf = self.file.read(max) - assert inputbuf == samplebuf - i += len(samplebuf) - if i == len(self.sample): - stop += 1 - - def test_readlines(self): - lines = self.file.readlines() - assert lines == self.expected_lines - - def test_readlines_max(self): - import random - i = 0 - stop = 0 - samplelines = self.expected_lines - while stop < 5: - morelines = self.file.readlines(random.randrange(1, 300)) - for inputline in morelines: - assert inputline == samplelines[0] - samplelines.pop(0) - if not samplelines: - stop += 1 - else: - assert len(morelines) >= 1 # otherwise, this test (and - # real programs) would be prone - # to endless loops - - def test_seek(self): - import random - for i in range(100): - position = random.randrange(0, len(self.sample)) - self.file.seek(position) - inputchar = self.file.read(1) - assert inputchar == self.sample[position] - for i in range(100): - position = random.randrange(0, len(self.sample)) - self.file.seek(position - len(self.sample), 2) - inputchar = self.file.read(1) - assert inputchar == self.sample[position] - prevpos = position + 1 - for i in range(100): - position = random.randrange(0, len(self.sample)) - self.file.seek(position - prevpos, 1) - inputchar = self.file.read(1) - assert inputchar == self.sample[position] - prevpos = position + 1 - - def test_tell(self): - import random - for i in range(100): - position = random.randrange(0, len(self.sample)+1) - self.file.seek(position) - told = self.file.tell() - assert told == position - for i in range(100): - position = random.randrange(0, len(self.sample)+1) - self.file.seek(position - len(self.sample), 2) - told = self.file.tell() - assert told == position - prevpos = position - for i in range(100): - position = random.randrange(0, len(self.sample)+1) - self.file.seek(position - prevpos, 1) - told = self.file.tell() - assert told == position - prevpos = position - - def test_tell_and_seek_back(self): - import random - i = 0 - stop = 0 - secondpass = [] - while stop < 5: - max = random.randrange(0, 100) - samplebuf = self.sample[i:i+max] - secondpass.append((self.file.tell(), i)) - inputbuf = self.file.read(max) - assert inputbuf == samplebuf - i += len(samplebuf) - if i == 
len(self.sample): - stop += 1 - for i in range(100): - saved_position, i = random.choice(secondpass) - max = random.randrange(0, 100) - samplebuf = self.sample[i:i+max] - self.file.seek(saved_position) - inputbuf = self.file.read(max) - assert inputbuf == samplebuf - - def test_xreadlines(self): - assert self.file.xreadlines() is self.file - - def test_attr(self): - f = self.file - if self.expected_filename is not None: - assert f.name == self.expected_filename - if self.expected_mode is not None: - assert f.mode == self.expected_mode - assert f.closed == False - assert not f.softspace - raises((TypeError, AttributeError), 'f.name = 42') - raises((TypeError, AttributeError), 'f.name = "stuff"') - raises((TypeError, AttributeError), 'f.mode = "r"') - raises((TypeError, AttributeError), 'f.closed = True') - f.softspace = True - assert f.softspace - f.softspace = False - assert not f.softspace - f.close() - assert f.closed == True - - def test_repr(self): - assert repr(self.file).startswith( - " 200 - assert somelines == lines[:len(somelines)] - - def test_nasty_writelines(self): - # The stream lock should be released between writes - fn = self.temptestfile - f = file(fn, 'w') - def nasty(): - for i in range(5): - if i == 3: - # should not raise because of acquired lock - f.close() - yield str(i) - exc = raises(ValueError, f.writelines, nasty()) - assert exc.value.message == "I/O operation on closed file" - f.close() - - def test_rw_bin(self): - import random - flags = 'w+b' - checkflags = 'rb' - eolstyles = [('', ''), ('\n', '\n'), - ('\r', '\r'), ('\r\n', '\r\n')] - fn = self.temptestfile - f = file(fn, flags) - expected = '' - pos = 0 - for i in range(5000): - x = random.random() - if x < 0.4: - l = int(x*100) - buf = f.read(l) - assert buf == expected[pos:pos+l] - pos += len(buf) - elif x < 0.75: - writeeol, expecteol = random.choice(eolstyles) - x = str(x) - buf1 = x+writeeol - buf2 = x+expecteol - f.write(buf1) - expected = expected[:pos] + buf2 + expected[pos+len(buf2):] - pos += len(buf2) - elif x < 0.80: - pos = random.randint(0, len(expected)) - f.seek(pos) - elif x < 0.85: - pos = random.randint(0, len(expected)) - f.seek(pos - len(expected), 2) - elif x < 0.90: - currentpos = pos - pos = random.randint(0, len(expected)) - f.seek(pos - currentpos, 1) - elif x < 0.95: - assert f.tell() == pos - else: - f.flush() - g = open(fn, checkflags) - buf = g.read() - g.close() - assert buf == expected - f.close() - g = open(fn, checkflags) - buf = g.read() - g.close() - assert buf == expected - - def test_rw(self): - fn = self.temptestfile - f = file(fn, 'w+') - f.write('hello\nworld\n') - f.seek(0) - assert f.read() == 'hello\nworld\n' - f.close() - - def test_r_universal(self): - fn = self.temptestfile - f = open(fn, 'wb') - f.write('hello\r\nworld\r\n') - f.close() - f = file(fn, 'rU') - assert f.read() == 'hello\nworld\n' - f.close() - - def test_flush(self): - import os - fn = self.temptestfile - f = file(fn, 'w', 0) - f.write('x') - assert os.stat(fn).st_size == 1 - f.close() - - f = file(fn, 'wb', 1) - f.write('x') - assert os.stat(fn).st_size == 0 - f.write('\n') - assert os.stat(fn).st_size == 2 - f.write('x') - assert os.stat(fn).st_size == 2 - f.flush() - assert os.stat(fn).st_size == 3 - f.close() - assert os.stat(fn).st_size == 3 - - f = file(fn, 'wb', 1000) - f.write('x') - assert os.stat(fn).st_size == 0 - f.write('\n') - assert os.stat(fn).st_size == 0 - f.write('x') - assert os.stat(fn).st_size == 0 - f.flush() - assert os.stat(fn).st_size == 3 - f.close() - assert 
os.stat(fn).st_size == 3 - - def test_isatty(self): - try: - f = file('/dev/tty') - except IOError: - pass - else: - assert f.isatty() - f.close() - - def test_truncate(self): - fn = self.temptestfile - f = open(fn, 'w+b') - f.write('hello world') - f.seek(7) - f.truncate() - f.seek(0) - data = f.read() - assert data == 'hello w' - f.seek(0, 2) - assert f.tell() == 7 - f.seek(0) - f.truncate(3) - data = f.read(123) - assert data == 'hel' - f.close() - - import errno, sys - f = open(fn) - exc = raises(EnvironmentError, f.truncate, 3) - if sys.platform == 'win32': - assert exc.value.winerror == 5 # ERROR_ACCESS_DENIED - else: - # CPython explicitely checks the file mode - # PyPy relies on the libc to raise the error - assert (exc.value.message == "File not open for writing" or - exc.value.errno == errno.EINVAL) - f.close() - - def test_readinto(self): - from array import array - a = array('c') - a.fromstring('0123456789') - fn = self.temptestfile - f = open(fn, 'w+b') - f.write('foobar') - f.seek(0) - n = f.readinto(a) - f.close() - assert n == 6 - assert len(a) == 10 - assert a.tostring() == 'foobar6789' - - def test_weakref(self): - """Files are weakrefable.""" - import weakref - fn = self.temptestfile - f = open(fn, 'wb') - ref = weakref.ref(f) - ref().write('hello') - assert f.tell() == 5 - f.close() - - def test_weakref_dies_before_file_closes(self): - # Hard-to-reproduce failure (which should now be fixed). - # I think that this is how lib-python/modified-2.5.2/test_file.py - # sometimes failed on a Boehm pypy-c. - import weakref, gc - fn = self.temptestfile - f = open(fn, 'wb') - f.close() - f = open(fn, 'rb') - ref = weakref.ref(f) - attempts = range(10) - del f - for i in attempts: - f1 = ref() - if f1 is None: - break # all gone - assert not f1.closed # if still reachable, should be still open - del f1 - gc.collect() - - def test_ValueError(self): - fn = self.temptestfile - f = open(fn, 'wb') - f.close() - raises(ValueError, f.fileno) - raises(ValueError, f.flush) - raises(ValueError, f.isatty) - raises(ValueError, f.next) - raises(ValueError, f.read) - raises(ValueError, f.readline) - raises(ValueError, f.readlines) - raises(ValueError, f.seek, 0) - raises(ValueError, f.tell) - raises(ValueError, f.truncate) - raises(ValueError, f.write, "") - raises(ValueError, f.writelines, []) - raises(ValueError, iter, f) - raises(ValueError, f.xreadlines) - raises(ValueError, f.__enter__) - f.close() # accepted as a no-op - - def test_docstrings(self): - assert file.closed.__doc__ == 'True if the file is closed' - - def test_repr_unicode_filename(self): - f = open(unicode(self.temptestfile), 'w') - assert repr(f).startswith(" sys.maxint: - f.seek(0) - raises(OverflowError, f.read, FAR) - raises(OverflowError, f.readline, FAR) - raises(OverflowError, f.readlines, FAR) - f.close() - test_large_sparse.need_sparse_files = True From noreply at buildbot.pypy.org Tue Feb 28 16:45:22 2012 From: noreply at buildbot.pypy.org (hager) Date: Tue, 28 Feb 2012 16:45:22 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: cast address Message-ID: <20120228154522.D0B458203C@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r52990:cd8f4bce191a Date: 2012-02-28 16:44 +0100 http://bitbucket.org/pypy/pypy/changeset/cd8f4bce191a/ Log: cast address diff --git a/pypy/jit/backend/ppc/ppc_assembler.py b/pypy/jit/backend/ppc/ppc_assembler.py --- a/pypy/jit/backend/ppc/ppc_assembler.py +++ b/pypy/jit/backend/ppc/ppc_assembler.py @@ -334,7 +334,7 @@ addr = 
self.cpu.gc_ll_descr.get_malloc_slowpath_addr() for reg, ofs in PPCRegisterManager.REGLOC_TO_COPY_AREA_OFS.items(): mc.store(reg.value, r.SPP.value, ofs) - mc.call(addr) + mc.call(rffi.cast(lltype.Signed, addr)) for reg, ofs in PPCRegisterManager.REGLOC_TO_COPY_AREA_OFS.items(): mc.load(reg.value, r.SPP.value, ofs) From noreply at buildbot.pypy.org Tue Feb 28 17:13:21 2012 From: noreply at buildbot.pypy.org (arigo) Date: Tue, 28 Feb 2012 17:13:21 +0100 (CET) Subject: [pypy-commit] pypy default: Phew. A passing test checking that gc pointers are correctly Message-ID: <20120228161321.440488203C@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r52991:e6f349cd4a97 Date: 2012-02-28 17:13 +0100 http://bitbucket.org/pypy/pypy/changeset/e6f349cd4a97/ Log: Phew. A passing test checking that gc pointers are correctly saved and correctly restored around CALL_MALLOC_NURSERY. diff --git a/pypy/jit/backend/x86/test/test_gc_integration.py b/pypy/jit/backend/x86/test/test_gc_integration.py --- a/pypy/jit/backend/x86/test/test_gc_integration.py +++ b/pypy/jit/backend/x86/test/test_gc_integration.py @@ -184,6 +184,8 @@ self.addrs[1] = self.addrs[0] + 64 self.calls = [] def malloc_slowpath(size): + if self.gcrootmap is not None: # hook + self.gcrootmap.hook_malloc_slowpath() self.calls.append(size) # reset the nursery nadr = rffi.cast(lltype.Signed, self.nursery) @@ -320,3 +322,155 @@ s1ref = self.cpu.get_latest_value_ref(i) s1 = lltype.cast_opaque_ptr(lltype.Ptr(S1), s1ref) assert s1 == getattr(s2, 's%d' % i) + + +class MockShadowStackRootMap(MockGcRootMap): + is_shadow_stack = True + MARKER_FRAME = 88 # this marker follows the frame addr + S1 = lltype.GcStruct('S1') + + def __init__(self): + self.addrs = lltype.malloc(rffi.CArray(lltype.Signed), 20, + flavor='raw') + # root_stack_top + self.addrs[0] = rffi.cast(lltype.Signed, self.addrs) + 3*WORD + # random stuff + self.addrs[1] = 123456 + self.addrs[2] = 654321 + self.check_initial_and_final_state() + self.callshapes = {} + self.should_see = [] + + def check_initial_and_final_state(self): + assert self.addrs[0] == rffi.cast(lltype.Signed, self.addrs) + 3*WORD + assert self.addrs[1] == 123456 + assert self.addrs[2] == 654321 + + def get_root_stack_top_addr(self): + return rffi.cast(lltype.Signed, self.addrs) + + def compress_callshape(self, shape, datablockwrapper): + assert shape[0] == 'shape' + return ['compressed'] + shape[1:] + + def write_callshape(self, mark, force_index): + assert mark[0] == 'compressed' + assert force_index not in self.callshapes + assert force_index == 42 + len(self.callshapes) + self.callshapes[force_index] = mark + + def hook_malloc_slowpath(self): + num_entries = self.addrs[0] - rffi.cast(lltype.Signed, self.addrs) + assert num_entries == 5*WORD # 3 initially, plus 2 by the asm frame + assert self.addrs[1] == 123456 # unchanged + assert self.addrs[2] == 654321 # unchanged + frame_addr = self.addrs[3] # pushed by the asm frame + assert self.addrs[4] == self.MARKER_FRAME # pushed by the asm frame + # + from pypy.jit.backend.x86.arch import FORCE_INDEX_OFS + addr = rffi.cast(rffi.CArrayPtr(lltype.Signed), + frame_addr + FORCE_INDEX_OFS) + force_index = addr[0] + assert force_index == 43 # in this test: the 2nd call_malloc_nursery + # + # The callshapes[43] saved above should list addresses both in the + # COPY_AREA and in the "normal" stack, where all the 16 values p1-p16 + # of test_save_regs_at_correct_place should have been stored. Here + # we replace them with new addresses, to emulate a moving GC. 
+ shape = self.callshapes[force_index] + assert len(shape[1:]) == len(self.should_see) + new_objects = [None] * len(self.should_see) + for ofs in shape[1:]: + assert isinstance(ofs, int) # not a register at all here + addr = rffi.cast(rffi.CArrayPtr(lltype.Signed), frame_addr + ofs) + contains = addr[0] + for j in range(len(self.should_see)): + obj = self.should_see[j] + if contains == rffi.cast(lltype.Signed, obj): + assert new_objects[j] is None # duplicate? + break + else: + assert 0 # the value read from the stack looks random? + new_objects[j] = lltype.malloc(self.S1) + addr[0] = rffi.cast(lltype.Signed, new_objects[j]) + self.should_see[:] = new_objects + + +class TestMallocShadowStack(BaseTestRegalloc): + + def setup_method(self, method): + cpu = CPU(None, None) + cpu.gc_ll_descr = GCDescrFastpathMalloc() + cpu.gc_ll_descr.gcrootmap = MockShadowStackRootMap() + cpu.setup_once() + for i in range(42): + cpu.reserve_some_free_fail_descr_number() + self.cpu = cpu + + def test_save_regs_at_correct_place(self): + cpu = self.cpu + gc_ll_descr = cpu.gc_ll_descr + S1 = gc_ll_descr.gcrootmap.S1 + S2 = lltype.GcStruct('S2', ('s0', lltype.Ptr(S1)), + ('s1', lltype.Ptr(S1)), + ('s2', lltype.Ptr(S1)), + ('s3', lltype.Ptr(S1)), + ('s4', lltype.Ptr(S1)), + ('s5', lltype.Ptr(S1)), + ('s6', lltype.Ptr(S1)), + ('s7', lltype.Ptr(S1)), + ('s8', lltype.Ptr(S1)), + ('s9', lltype.Ptr(S1)), + ('s10', lltype.Ptr(S1)), + ('s11', lltype.Ptr(S1)), + ('s12', lltype.Ptr(S1)), + ('s13', lltype.Ptr(S1)), + ('s14', lltype.Ptr(S1)), + ('s15', lltype.Ptr(S1))) + self.namespace = self.namespace.copy() + for i in range(16): + self.namespace['ds%i' % i] = cpu.fielddescrof(S2, 's%d' % i) + ops = ''' + [p0] + p1 = getfield_gc(p0, descr=ds0) + p2 = getfield_gc(p0, descr=ds1) + p3 = getfield_gc(p0, descr=ds2) + p4 = getfield_gc(p0, descr=ds3) + p5 = getfield_gc(p0, descr=ds4) + p6 = getfield_gc(p0, descr=ds5) + p7 = getfield_gc(p0, descr=ds6) + p8 = getfield_gc(p0, descr=ds7) + p9 = getfield_gc(p0, descr=ds8) + p10 = getfield_gc(p0, descr=ds9) + p11 = getfield_gc(p0, descr=ds10) + p12 = getfield_gc(p0, descr=ds11) + p13 = getfield_gc(p0, descr=ds12) + p14 = getfield_gc(p0, descr=ds13) + p15 = getfield_gc(p0, descr=ds14) + p16 = getfield_gc(p0, descr=ds15) + # + # now all registers are in use + p17 = call_malloc_nursery(40) + p18 = call_malloc_nursery(40) # overflow + # + finish(p1, p2, p3, p4, p5, p6, p7, p8, \ + p9, p10, p11, p12, p13, p14, p15, p16) + ''' + s2 = lltype.malloc(S2) + for i in range(16): + s1 = lltype.malloc(S1) + setattr(s2, 's%d' % i, s1) + gc_ll_descr.gcrootmap.should_see.append(s1) + s2ref = lltype.cast_opaque_ptr(llmemory.GCREF, s2) + # + self.interpret(ops, [s2ref]) + gc_ll_descr.check_nothing_in_nursery() + assert gc_ll_descr.calls == [40] + gc_ll_descr.gcrootmap.check_initial_and_final_state() + # check the returned pointers + for i in range(16): + s1ref = self.cpu.get_latest_value_ref(i) + s1 = lltype.cast_opaque_ptr(lltype.Ptr(S1), s1ref) + for j in range(16): + assert s1 != getattr(s2, 's%d' % j) + assert s1 == gc_ll_descr.gcrootmap.should_see[i] From noreply at buildbot.pypy.org Tue Feb 28 17:31:14 2012 From: noreply at buildbot.pypy.org (edelsohn) Date: Tue, 28 Feb 2012 17:31:14 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: Allocate normal stack frame in _build_malloc_slowpath. 
Message-ID: <20120228163114.854CD8203C@wyvern.cs.uni-duesseldorf.de> Author: edelsohn Branch: ppc-jit-backend Changeset: r52992:dd306ad1d898 Date: 2012-02-28 11:30 -0500 http://bitbucket.org/pypy/pypy/changeset/dd306ad1d898/ Log: Allocate normal stack frame in _build_malloc_slowpath. diff --git a/pypy/jit/backend/ppc/ppc_assembler.py b/pypy/jit/backend/ppc/ppc_assembler.py --- a/pypy/jit/backend/ppc/ppc_assembler.py +++ b/pypy/jit/backend/ppc/ppc_assembler.py @@ -317,7 +317,7 @@ for _ in range(6): mc.write32(0) frame_size = (# add space for floats later - + BACKCHAIN_SIZE * WORD) + + (BACKCHAIN_SIZE + MAX_REG_PARAMS) * WORD) if IS_PPC_32: mc.stwu(r.SP.value, r.SP.value, -frame_size) mc.mflr(r.SCRATCH.value) From noreply at buildbot.pypy.org Tue Feb 28 18:44:43 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Tue, 28 Feb 2012 18:44:43 +0100 (CET) Subject: [pypy-commit] pypy py3k: we need a BytesIO for marshal. Also, fix syntax about longs Message-ID: <20120228174443.35B2B8203C@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52993:31b0c375cc6e Date: 2012-02-28 18:11 +0100 http://bitbucket.org/pypy/pypy/changeset/31b0c375cc6e/ Log: we need a BytesIO for marshal. Also, fix syntax about longs diff --git a/pypy/module/marshal/test/test_marshal.py b/pypy/module/marshal/test/test_marshal.py --- a/pypy/module/marshal/test/test_marshal.py +++ b/pypy/module/marshal/test/test_marshal.py @@ -8,12 +8,12 @@ def w_marshal_check(self, case): import marshal - from io import StringIO + from io import BytesIO s = marshal.dumps(case) print(repr(s)) x = marshal.loads(s) assert x == case and type(x) is type(case) - f = StringIO() + f = BytesIO() marshal.dump(case, f) f.seek(0) x = marshal.load(f) @@ -70,8 +70,7 @@ self.marshal_check(case) def test_long(self): - self.marshal_check(42L) - case = -1234567890123456789012345678901234567890L + case = -1234567890123456789012345678901234567890 self.marshal_check(case) def test_hello_____not_interned(self): From noreply at buildbot.pypy.org Tue Feb 28 18:44:44 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Tue, 28 Feb 2012 18:44:44 +0100 (CET) Subject: [pypy-commit] pypy py3k: s/func_code/__code__ Message-ID: <20120228174444.69EEB8203C@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52994:970e29f1f3d3 Date: 2012-02-28 18:24 +0100 http://bitbucket.org/pypy/pypy/changeset/970e29f1f3d3/ Log: s/func_code/__code__ diff --git a/pypy/module/marshal/test/test_marshal.py b/pypy/module/marshal/test/test_marshal.py --- a/pypy/module/marshal/test/test_marshal.py +++ b/pypy/module/marshal/test/test_marshal.py @@ -110,14 +110,14 @@ def test_func_dot_func_code(self): def func(x): return lambda y: x+y - case = func.func_code + case = func.__code__ self.marshal_check(case) def test_scopefunc_dot_func_code(self): def func(x): return lambda y: x+y scopefunc = func(42) - case = scopefunc.func_code + case = scopefunc.__code__ self.marshal_check(case) def test_u_quote_hello_quote_(self): From noreply at buildbot.pypy.org Tue Feb 28 18:44:45 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Tue, 28 Feb 2012 18:44:45 +0100 (CET) Subject: [pypy-commit] pypy py3k: test bytes literals instead of unicode, and make sure to write bytes to a file opened with 'wb' Message-ID: <20120228174445.96B598203C@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52995:0132be21e554 Date: 2012-02-28 18:38 +0100 http://bitbucket.org/pypy/pypy/changeset/0132be21e554/ Log: test bytes literals instead of unicode, 
and make sure to write bytes to a file opened with 'wb' diff --git a/pypy/module/marshal/test/test_marshal.py b/pypy/module/marshal/test/test_marshal.py --- a/pypy/module/marshal/test/test_marshal.py +++ b/pypy/module/marshal/test/test_marshal.py @@ -120,8 +120,8 @@ case = scopefunc.__code__ self.marshal_check(case) - def test_u_quote_hello_quote_(self): - case = u'hello' + def test_b_quote_hello_quote_(self): + case = b'hello' self.marshal_check(case) def test_set_brace__ecarb_(self): @@ -149,7 +149,7 @@ f = open(self.tmpfile, 'wb') marshal.dump(obj1, f) marshal.dump(obj2, f) - f.write('END') + f.write(b'END') f.close() f = open(self.tmpfile, 'rb') obj1b = marshal.load(f) @@ -158,7 +158,7 @@ f.close() assert obj1b == obj1 assert obj2b == obj2 - assert tail == 'END' + assert tail == b'END' def test_unicode(self): import marshal, sys From noreply at buildbot.pypy.org Tue Feb 28 18:44:46 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Tue, 28 Feb 2012 18:44:46 +0100 (CET) Subject: [pypy-commit] pypy py3k: remove a u'', s/unichr/chr/, kill 'long' Message-ID: <20120228174446.C3FF68203C@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52996:f65e4286332e Date: 2012-02-28 18:43 +0100 http://bitbucket.org/pypy/pypy/changeset/f65e4286332e/ Log: remove a u'', s/unichr/chr/, kill 'long' diff --git a/pypy/module/marshal/test/test_marshal.py b/pypy/module/marshal/test/test_marshal.py --- a/pypy/module/marshal/test/test_marshal.py +++ b/pypy/module/marshal/test/test_marshal.py @@ -162,13 +162,13 @@ def test_unicode(self): import marshal, sys - self.marshal_check(u'\uFFFF') + self.marshal_check('\uFFFF') - self.marshal_check(unichr(sys.maxunicode)) + self.marshal_check(chr(sys.maxunicode)) def test_reject_subtypes(self): import marshal - types = (float, complex, int, long, tuple, list, dict, set, frozenset) + types = (float, complex, int, tuple, list, dict, set, frozenset) for cls in types: class subtype(cls): pass From noreply at buildbot.pypy.org Tue Feb 28 18:44:48 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Tue, 28 Feb 2012 18:44:48 +0100 (CET) Subject: [pypy-commit] pypy py3k: fix syntax Message-ID: <20120228174448.037368203C@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52997:1d15b6375c8f Date: 2012-02-28 18:44 +0100 http://bitbucket.org/pypy/pypy/changeset/1d15b6375c8f/ Log: fix syntax diff --git a/pypy/module/marshal/test/test_marshal.py b/pypy/module/marshal/test/test_marshal.py --- a/pypy/module/marshal/test/test_marshal.py +++ b/pypy/module/marshal/test/test_marshal.py @@ -189,7 +189,7 @@ def test_smalllong(self): import __pypy__ - x = -123456789012345L + x = -123456789012345 assert 'SmallLong' in __pypy__.internal_repr(x) y = self.marshal_check(x) assert y == x From noreply at buildbot.pypy.org Tue Feb 28 19:35:06 2012 From: noreply at buildbot.pypy.org (hakanardo) Date: Tue, 28 Feb 2012 19:35:06 +0100 (CET) Subject: [pypy-commit] pypy default: failing test Message-ID: <20120228183506.C79CE8203C@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: Changeset: r52998:53bddc9e650a Date: 2012-02-28 19:21 +0100 http://bitbucket.org/pypy/pypy/changeset/53bddc9e650a/ Log: failing test diff --git a/pypy/jit/metainterp/optimizeopt/test/test_multilabel.py b/pypy/jit/metainterp/optimizeopt/test/test_multilabel.py --- a/pypy/jit/metainterp/optimizeopt/test/test_multilabel.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_multilabel.py @@ -441,6 +441,59 @@ jump(i55) """ self.optimize_loop(ops, expected) + + def 
test_issue1045(self): + ops = """ + [p8] + p54 = getfield_gc(p8, descr=valuedescr) + mark_opaque_ptr(p54) + i55 = getfield_gc(p54, descr=nextdescr) + p57 = new_with_vtable(ConstClass(node_vtable)) + setfield_gc(p57, i55, descr=otherdescr) + p69 = new_with_vtable(ConstClass(node_vtable)) + setfield_gc(p69, i55, descr=otherdescr) + i71 = int_eq(i55, -9223372036854775808) + guard_false(i71) [] + i73 = int_mod(i55, 2) + i75 = int_rshift(i73, 63) + i76 = int_and(2, i75) + i77 = int_add(i73, i76) + p79 = new_with_vtable(ConstClass(node_vtable)) + setfield_gc(p79, i77, descr=otherdescr) + i81 = int_eq(i77, 1) + guard_false(i81) [] + i0 = int_ge(i55, 1) + guard_true(i0) [] + label(p57) + i3 = int_mod(i55, 2) + escape(i3) + i5 = int_rshift(i3, 63) + i6 = int_and(2, i5) + i7 = int_add(i3, i6) + i8 = int_eq(i7, 1) + escape(i8) + jump(p57) + """ + expected = """ + [p8] + p54 = getfield_gc(p8, descr=valuedescr) + i55 = getfield_gc(p54, descr=nextdescr) + i71 = int_eq(i55, -9223372036854775808) + guard_false(i71) [] + i73 = int_mod(i55, 2) + i75 = int_rshift(i73, 63) + i76 = int_and(2, i75) + i77 = int_add(i73, i76) + i81 = int_eq(i77, 1) + guard_false(i81) [] + i0 = int_ge(i55, 1) + guard_true(i0) [] + label(i55, i73) + escape(i73) + escape(i73) + jump(i55, i73) + """ + self.optimize_loop(ops, expected) class OptRenameStrlen(Optimization): def propagate_forward(self, op): From notifications-noreply at bitbucket.org Tue Feb 28 20:50:52 2012 From: notifications-noreply at bitbucket.org (Bitbucket) Date: Tue, 28 Feb 2012 19:50:52 -0000 Subject: [pypy-commit] Notification: pypy Message-ID: <20120228195052.6799.76222@bitbucket03.managed.contegix.com> You have received a notification from Micha? Bendowski. Hi, I forked pypy. My fork is at https://bitbucket.org/benol/pypy. -- Disable notifications at https://bitbucket.org/account/notifications/ From noreply at buildbot.pypy.org Tue Feb 28 22:55:37 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Tue, 28 Feb 2012 22:55:37 +0100 (CET) Subject: [pypy-commit] pypy py3k: kill the u prefix, and adapt che expected bytecode to the new py3k compiler Message-ID: <20120228215537.E51128203C@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r52999:6b7f258424f1 Date: 2012-02-28 20:35 +0100 http://bitbucket.org/pypy/pypy/changeset/6b7f258424f1/ Log: kill the u prefix, and adapt che expected bytecode to the new py3k compiler diff --git a/pypy/interpreter/astcompiler/test/test_compiler.py b/pypy/interpreter/astcompiler/test/test_compiler.py --- a/pypy/interpreter/astcompiler/test/test_compiler.py +++ b/pypy/interpreter/astcompiler/test/test_compiler.py @@ -891,7 +891,7 @@ monkeypatch.setattr(optimize, "MAXUNICODE", 0xFFFF) source = """def f(): - return u"\uE01F"[0] + return "\uE01F"[0] """ counts = self.count_instructions(source) assert counts == {ops.LOAD_CONST: 1, ops.RETURN_VALUE: 1} @@ -900,11 +900,11 @@ # getslice is not yet optimized. # Still, check a case which yields the empty string. 
source = """def f(): - return u"abc"[:0] + return "abc"[:0] """ counts = self.count_instructions(source) - assert counts == {ops.LOAD_CONST: 2, ops.SLICE+2: 1, - ops.RETURN_VALUE: 1} + assert counts == {ops.LOAD_CONST: 3, ops.BUILD_SLICE: 1, + ops.BINARY_SUBSCR: 1, ops.RETURN_VALUE: 1} def test_remove_dead_code(self): source = """def f(x): From noreply at buildbot.pypy.org Tue Feb 28 22:55:39 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Tue, 28 Feb 2012 22:55:39 +0100 (CET) Subject: [pypy-commit] pypy default: bah, the name of this class is clearly wrong (maybe a copy&paste)? Message-ID: <20120228215539.649658203C@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: Changeset: r53000:836fcc2fe8d8 Date: 2012-02-28 22:54 +0100 http://bitbucket.org/pypy/pypy/changeset/836fcc2fe8d8/ Log: bah, the name of this class is clearly wrong (maybe a copy&paste)? diff --git a/pypy/module/test_lib_pypy/test_collections.py b/pypy/module/test_lib_pypy/test_collections.py --- a/pypy/module/test_lib_pypy/test_collections.py +++ b/pypy/module/test_lib_pypy/test_collections.py @@ -6,7 +6,7 @@ from pypy.conftest import gettestobjspace -class AppTestcStringIO: +class AppTestCollections: def test_copy(self): import _collections def f(): From noreply at buildbot.pypy.org Tue Feb 28 23:43:45 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Tue, 28 Feb 2012 23:43:45 +0100 (CET) Subject: [pypy-commit] pypy py3k: this syntax is no longer valid, kill the test Message-ID: <20120228224345.E5F128204C@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r53002:28efb1ee061c Date: 2012-02-28 23:06 +0100 http://bitbucket.org/pypy/pypy/changeset/28efb1ee061c/ Log: this syntax is no longer valid, kill the test diff --git a/pypy/module/exceptions/test/test_exc.py b/pypy/module/exceptions/test/test_exc.py --- a/pypy/module/exceptions/test/test_exc.py +++ b/pypy/module/exceptions/test/test_exc.py @@ -43,15 +43,6 @@ x = X(x=8) assert x.x == 8 - def test_catch_with_unpack(self): - from exceptions import LookupError - - try: - raise LookupError(1, 2) - except LookupError, (one, two): - assert one == 1 - assert two == 2 - def test_exc(self): from exceptions import Exception, BaseException From noreply at buildbot.pypy.org Tue Feb 28 23:43:44 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Tue, 28 Feb 2012 23:43:44 +0100 (CET) Subject: [pypy-commit] pypy py3k: kill the cStringIO module and its tests Message-ID: <20120228224344.585FD8203C@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r53001:9470a3e579f1 Date: 2012-02-28 22:57 +0100 http://bitbucket.org/pypy/pypy/changeset/9470a3e579f1/ Log: kill the cStringIO module and its tests diff --git a/lib_pypy/cStringIO.py b/lib_pypy/cStringIO.py deleted file mode 100644 --- a/lib_pypy/cStringIO.py +++ /dev/null @@ -1,16 +0,0 @@ -# -# StringIO-based cStringIO implementation. -# - -# Note that PyPy contains also a built-in module 'cStringIO' which will hide -# this one if compiled in. 
- -from StringIO import * -from StringIO import __doc__ - -class StringIO(StringIO): - def reset(self): - """ - reset() -- Reset the file position to the beginning - """ - self.seek(0, 0) diff --git a/pypy/config/pypyoption.py b/pypy/config/pypyoption.py --- a/pypy/config/pypyoption.py +++ b/pypy/config/pypyoption.py @@ -46,7 +46,7 @@ translation_modules = default_modules.copy() translation_modules.update(dict.fromkeys( ["fcntl", "rctime", "select", "signal", "_rawffi", "zlib", - "struct", "_md5", "cStringIO", "array", "_ffi", + "struct", "_md5", "array", "_ffi", # the following are needed for pyrepl (and hence for the # interactive prompt/pdb) "termios", "_minimal_curses", @@ -54,7 +54,7 @@ working_oo_modules = default_modules.copy() working_oo_modules.update(dict.fromkeys( - ["_md5", "_sha", "cStringIO", "itertools"] + ["_md5", "_sha", "itertools"] )) # XXX this should move somewhere else, maybe to platform ("is this posixish" diff --git a/pypy/module/cStringIO/__init__.py b/pypy/module/cStringIO/__init__.py deleted file mode 100644 --- a/pypy/module/cStringIO/__init__.py +++ /dev/null @@ -1,13 +0,0 @@ - -# Package initialisation -from pypy.interpreter.mixedmodule import MixedModule - -class Module(MixedModule): - appleveldefs = { - } - - interpleveldefs = { - 'StringIO': 'interp_stringio.StringIO', - 'InputType': 'interp_stringio.W_InputType', - 'OutputType': 'interp_stringio.W_OutputType', - } diff --git a/pypy/module/cStringIO/interp_stringio.py b/pypy/module/cStringIO/interp_stringio.py deleted file mode 100644 --- a/pypy/module/cStringIO/interp_stringio.py +++ /dev/null @@ -1,254 +0,0 @@ -from pypy.interpreter.error import OperationError -from pypy.interpreter.baseobjspace import Wrappable -from pypy.interpreter.typedef import TypeDef, GetSetProperty -from pypy.interpreter.gateway import interp2app, unwrap_spec -from pypy.rlib.rStringIO import RStringIO - - -class W_InputOutputType(Wrappable): - softspace = 0 # part of the file object API - - def descr___iter__(self): - self.check_closed() - return self - - def descr_close(self): - self.close() - - def check_closed(self): - if self.is_closed(): - space = self.space - raise OperationError(space.w_ValueError, - space.wrap("I/O operation on closed file")) - - def descr_flush(self): - self.check_closed() - - def descr_getvalue(self): - self.check_closed() - return self.space.wrap(self.getvalue()) - - def descr_isatty(self): - self.check_closed() - return self.space.w_False - - def descr___next__(self): - space = self.space - self.check_closed() - line = self.readline() - if len(line) == 0: - raise OperationError(space.w_StopIteration, space.w_None) - return space.wrap(line) - - @unwrap_spec(n=int) - def descr_read(self, n=-1): - self.check_closed() - return self.space.wrap(self.read(n)) - - @unwrap_spec(size=int) - def descr_readline(self, size=-1): - self.check_closed() - return self.space.wrap(self.readline(size)) - - @unwrap_spec(size=int) - def descr_readlines(self, size=0): - self.check_closed() - lines_w = [] - while True: - line = self.readline() - if len(line) == 0: - break - lines_w.append(self.space.wrap(line)) - if size > 0: - size -= len(line) - if size <= 0: - break - return self.space.newlist(lines_w) - - def descr_reset(self): - self.check_closed() - self.seek(0) - - @unwrap_spec(position=int, mode=int) - def descr_seek(self, position, mode=0): - self.check_closed() - self.seek(position, mode) - - def descr_tell(self): - self.check_closed() - return self.space.wrap(self.tell()) - - # abstract methods - def close(self): 
assert False, "abstract" - def is_closed(self): assert False, "abstract" - def getvalue(self): assert False, "abstract" - def read(self, n=-1): assert False, "abstract" - def readline(self, size=-1): assert False, "abstract" - def seek(self, position, mode=0): assert False, "abstract" - def tell(self): assert False, "abstract" - -# ____________________________________________________________ - -class W_InputType(W_InputOutputType): - def __init__(self, space, string): - self.space = space - self.string = string - self.pos = 0 - - def close(self): - self.string = None - - def is_closed(self): - return self.string is None - - def getvalue(self): - return self.string - - def read(self, n=-1): - p = self.pos - count = len(self.string) - p - if n >= 0: - count = min(n, count) - if count <= 0: - return '' - self.pos = p + count - if count == len(self.string): - return self.string - else: - return self.string[p:p+count] - - def readline(self, size=-1): - p = self.pos - end = len(self.string) - if size >= 0 and size < end - p: - end = p + size - lastp = self.string.find('\n', p, end) - if lastp < 0: - endp = end - else: - endp = lastp + 1 - self.pos = endp - return self.string[p:endp] - - def seek(self, position, mode=0): - if mode == 1: - position += self.pos - elif mode == 2: - position += len(self.string) - if position < 0: - position = 0 - self.pos = position - - def tell(self): - return self.pos - -# ____________________________________________________________ - -class W_OutputType(RStringIO, W_InputOutputType): - def __init__(self, space): - RStringIO.__init__(self) - self.space = space - - def readline(self, size=-1): - p = self.tell() - bigbuffer = self.copy_into_bigbuffer() - end = len(bigbuffer) - if size >= 0 and size < end - p: - end = p + size - assert p >= 0 - i = p - while i < end: - finished = bigbuffer[i] == '\n' - i += 1 - if finished: - break - self.seek(i) - return ''.join(bigbuffer[p:i]) - - def descr_truncate(self, w_size=None): # note: a wrapped size! 
- self.check_closed() - space = self.space - if w_size is None or space.is_w(w_size, space.w_None): - size = self.tell() - else: - size = space.int_w(w_size) - if size < 0: - raise OperationError(space.w_IOError, space.wrap("negative size")) - self.truncate(size) - - @unwrap_spec(buffer='bufferstr') - def descr_write(self, buffer): - self.check_closed() - self.write(buffer) - - def descr_writelines(self, w_lines): - self.check_closed() - space = self.space - w_iterator = space.iter(w_lines) - while True: - try: - w_line = space.next(w_iterator) - except OperationError, e: - if not e.match(space, space.w_StopIteration): - raise - break # done - self.write(space.str_w(w_line)) - -# ____________________________________________________________ - -def descr_closed(self, space): - return space.wrap(self.is_closed()) - -def descr_softspace(self, space): - return space.wrap(self.softspace) - -def descr_setsoftspace(self, space, w_newvalue): - self.softspace = space.int_w(w_newvalue) - -common_descrs = { - '__iter__': interp2app(W_InputOutputType.descr___iter__), - '__next__': interp2app(W_InputOutputType.descr___next__), - 'close': interp2app(W_InputOutputType.descr_close), - 'flush': interp2app(W_InputOutputType.descr_flush), - 'getvalue': interp2app(W_InputOutputType.descr_getvalue), - 'isatty': interp2app(W_InputOutputType.descr_isatty), - 'read': interp2app(W_InputOutputType.descr_read), - 'readline': interp2app(W_InputOutputType.descr_readline), - 'readlines': interp2app(W_InputOutputType.descr_readlines), - 'reset': interp2app(W_InputOutputType.descr_reset), - 'seek': interp2app(W_InputOutputType.descr_seek), - 'tell': interp2app(W_InputOutputType.descr_tell), -} - -W_InputType.typedef = TypeDef( - "cStringIO.StringI", - __doc__ = "Simple type for treating strings as input file streams", - closed = GetSetProperty(descr_closed, cls=W_InputType), - softspace = GetSetProperty(descr_softspace, - descr_setsoftspace, - cls=W_InputType), - **common_descrs - # XXX CPython has the truncate() method here too, which is a bit strange - ) - -W_OutputType.typedef = TypeDef( - "cStringIO.StringO", - __doc__ = "Simple type for output to strings.", - truncate = interp2app(W_OutputType.descr_truncate), - write = interp2app(W_OutputType.descr_write), - writelines = interp2app(W_OutputType.descr_writelines), - closed = GetSetProperty(descr_closed, cls=W_OutputType), - softspace = GetSetProperty(descr_softspace, - descr_setsoftspace, - cls=W_OutputType), - **common_descrs - ) - -# ____________________________________________________________ - -def StringIO(space, w_string=None): - if space.is_w(w_string, space.w_None): - return space.wrap(W_OutputType(space)) - else: - string = space.bufferstr_w(w_string) - return space.wrap(W_InputType(space, string)) diff --git a/pypy/module/cStringIO/test/__init__.py b/pypy/module/cStringIO/test/__init__.py deleted file mode 100644 diff --git a/pypy/module/cStringIO/test/test_interp_stringio.py b/pypy/module/cStringIO/test/test_interp_stringio.py deleted file mode 100644 --- a/pypy/module/cStringIO/test/test_interp_stringio.py +++ /dev/null @@ -1,207 +0,0 @@ - -from pypy.conftest import gettestobjspace - -import os, sys, py - - -class AppTestcStringIO: - def setup_class(cls): - space = gettestobjspace(usemodules=('cStringIO',)) - cls.space = space - cls.w_write_many_expected_result = space.wrap(''.join( - [chr(i) for j in range(10) for i in range(253)])) - cls.w_StringIO = space.appexec([], """(): - import cStringIO - return cStringIO.StringIO - """) - - def 
test_simple(self): - f = self.StringIO() - f.write('hello') - f.write(' world') - assert f.getvalue() == 'hello world' - - def test_write_many(self): - f = self.StringIO() - for j in range(10): - for i in range(253): - f.write(chr(i)) - expected = ''.join([chr(i) for j in range(10) for i in range(253)]) - assert f.getvalue() == expected - - def test_seek(self): - f = self.StringIO() - f.write('0123') - f.write('456') - f.write('789') - f.seek(4) - f.write('AB') - assert f.getvalue() == '0123AB6789' - f.seek(-2, 2) - f.write('CDE') - assert f.getvalue() == '0123AB67CDE' - f.seek(2, 0) - f.seek(5, 1) - f.write('F') - assert f.getvalue() == '0123AB6FCDE' - - def test_write_beyond_end(self): - f = self.StringIO() - f.seek(20, 1) - assert f.tell() == 20 - f.write('X') - assert f.getvalue() == '\x00' * 20 + 'X' - - def test_tell(self): - f = self.StringIO() - f.write('0123') - f.write('456') - assert f.tell() == 7 - f.seek(2) - for i in range(3, 20): - f.write('X') - assert f.tell() == i - assert f.getvalue() == '01XXXXXXXXXXXXXXXXX' - - def test_read(self): - f = self.StringIO() - assert f.read() == '' - f.write('0123') - f.write('456') - assert f.read() == '' - assert f.read(5) == '' - assert f.tell() == 7 - f.seek(1) - assert f.read() == '123456' - assert f.tell() == 7 - f.seek(1) - assert f.read(12) == '123456' - assert f.tell() == 7 - f.seek(1) - assert f.read(2) == '12' - assert f.read(1) == '3' - assert f.tell() == 4 - f.seek(0) - assert f.read() == '0123456' - assert f.tell() == 7 - f.seek(0) - assert f.read(7) == '0123456' - assert f.tell() == 7 - f.seek(15) - assert f.read(2) == '' - assert f.tell() == 15 - - def test_reset(self): - f = self.StringIO() - f.write('foobar') - f.reset() - res = f.read() - assert res == 'foobar' - - def test_close(self): - f = self.StringIO() - assert not f.closed - f.close() - raises(ValueError, f.write, 'hello') - raises(ValueError, f.getvalue) - raises(ValueError, f.read, 0) - raises(ValueError, f.seek, 0) - assert f.closed - f.close() - assert f.closed - - def test_readline(self): - f = self.StringIO() - f.write('foo\nbar\nbaz') - f.seek(0) - assert f.readline() == 'foo\n' - assert f.readline(2) == 'ba' - assert f.readline() == 'r\n' - assert f.readline() == 'baz' - assert f.readline() == '' - f.seek(0) - assert iter(f) is f - assert list(f) == ['foo\n', 'bar\n', 'baz'] - f.write('\n') - f.seek(0) - assert iter(f) is f - assert list(f) == ['foo\n', 'bar\n', 'baz\n'] - f.seek(0) - assert f.readlines() == ['foo\n', 'bar\n', 'baz\n'] - f.seek(0) - assert f.readlines(2) == ['foo\n'] - - def test_misc(self): - f = self.StringIO() - f.flush() - assert f.isatty() is False - - def test_truncate(self): - f = self.StringIO() - f.truncate(20) - assert f.getvalue() == '' - assert f.tell() == 0 - f.write('\x00' * 20) - f.write('hello') - f.write(' world') - f.truncate(30) - assert f.getvalue() == '\x00' * 20 + 'hello worl' - f.truncate(25) - assert f.getvalue() == '\x00' * 20 + 'hello' - f.write('baz') - f.write('egg') - f.truncate(3) - assert f.tell() == 3 - assert f.getvalue() == '\x00' * 3 - raises(IOError, f.truncate, -1) - - def test_writelines(self): - f = self.StringIO() - f.writelines(['foo', 'bar', 'baz']) - assert f.getvalue() == 'foobarbaz' - - def test_stringi(self): - f = self.StringIO('hello world\nspam\n') - assert not hasattr(f, 'write') # it's a StringI - f.seek(3) - assert f.tell() == 3 - f.seek(50, 1) - assert f.tell() == 53 - f.seek(-3, 2) - assert f.tell() == 14 - assert f.read() == 'am\n' - f.seek(0) - assert f.readline() == 'hello world\n' - 
assert f.readline(4) == 'spam' - assert f.readline(400) == '\n' - f.reset() - assert f.readlines() == ['hello world\n', 'spam\n'] - f.seek(0, 0) - assert f.readlines(5) == ['hello world\n'] - f.seek(0) - assert list(f) == ['hello world\n', 'spam\n'] - - f.flush() - assert f.getvalue() == 'hello world\nspam\n' - assert f.isatty() is False - - assert not f.closed - f.close() - assert f.closed - raises(ValueError, f.flush) - raises(ValueError, f.getvalue) - raises(ValueError, f.isatty) - raises(ValueError, f.read) - raises(ValueError, f.readline) - raises(ValueError, f.readlines) - raises(ValueError, f.reset) - raises(ValueError, f.tell) - raises(ValueError, f.seek, 5) - assert f.closed - f.close() - assert f.closed - - def test_types(self): - import cStringIO - assert type(cStringIO.StringIO()) is cStringIO.OutputType - assert type(cStringIO.StringIO('')) is cStringIO.InputType diff --git a/pypy/module/cStringIO/test/test_ztranslation.py b/pypy/module/cStringIO/test/test_ztranslation.py deleted file mode 100644 --- a/pypy/module/cStringIO/test/test_ztranslation.py +++ /dev/null @@ -1,4 +0,0 @@ -from pypy.objspace.fake.checkmodule import checkmodule - -def test_checkmodule(): - checkmodule('cStringIO') diff --git a/pypy/module/test_lib_pypy/test_cstringio.py b/pypy/module/test_lib_pypy/test_cstringio.py deleted file mode 100644 --- a/pypy/module/test_lib_pypy/test_cstringio.py +++ /dev/null @@ -1,30 +0,0 @@ - -""" -Tests for the PyPy cStringIO implementation. -""" - -from pypy.conftest import gettestobjspace - -class AppTestcStringIO: - def setup_class(cls): - cls.space = gettestobjspace() - cls.w_io = cls.space.appexec([], "(): import cStringIO; return cStringIO") - cls.w_bytes = cls.space.wrap('some bytes') - - - def test_reset(self): - """ - Test that the reset method of cStringIO objects sets the position - marker to the beginning of the stream. 
- """ - io = self.io.StringIO() - io.write(self.bytes) - assert io.read() == '' - io.reset() - assert io.read() == self.bytes - - io = self.io.StringIO(self.bytes) - assert io.read() == self.bytes - assert io.read() == '' - io.reset() - assert io.read() == self.bytes From noreply at buildbot.pypy.org Tue Feb 28 23:43:47 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Tue, 28 Feb 2012 23:43:47 +0100 (CET) Subject: [pypy-commit] pypy py3k: kill the u'' Message-ID: <20120228224347.8A4D38203C@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r53003:ef6d5d78273e Date: 2012-02-28 23:08 +0100 http://bitbucket.org/pypy/pypy/changeset/ef6d5d78273e/ Log: kill the u'' diff --git a/pypy/module/exceptions/test/test_exc.py b/pypy/module/exceptions/test/test_exc.py --- a/pypy/module/exceptions/test/test_exc.py +++ b/pypy/module/exceptions/test/test_exc.py @@ -151,16 +151,16 @@ def test_unicode_encode_error(self): from exceptions import UnicodeEncodeError - ue = UnicodeEncodeError("x", u"y", 1, 5, "bah") + ue = UnicodeEncodeError("x", "y", 1, 5, "bah") assert ue.encoding == 'x' - assert ue.object == u'y' + assert ue.object == 'y' assert ue.start == 1 assert ue.end == 5 assert ue.reason == 'bah' - assert ue.args == ('x', u'y', 1, 5, 'bah') + assert ue.args == ('x', 'y', 1, 5, 'bah') assert ue.message == '' - ue.object = u'z9' - assert ue.object == u'z9' + ue.object = 'z9' + assert ue.object == 'z9' assert str(ue) == "'x' codec can't encode characters in position 1-4: bah" ue.end = 2 assert str(ue) == "'x' codec can't encode character u'\\x39' in position 1: bah" From noreply at buildbot.pypy.org Tue Feb 28 23:43:48 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Tue, 28 Feb 2012 23:43:48 +0100 (CET) Subject: [pypy-commit] pypy py3k: kill the u'' Message-ID: <20120228224348.B9A658203C@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r53004:390d9fc2b4bd Date: 2012-02-28 23:09 +0100 http://bitbucket.org/pypy/pypy/changeset/390d9fc2b4bd/ Log: kill the u'' diff --git a/pypy/module/exceptions/test/test_exc.py b/pypy/module/exceptions/test/test_exc.py --- a/pypy/module/exceptions/test/test_exc.py +++ b/pypy/module/exceptions/test/test_exc.py @@ -71,18 +71,18 @@ def test_unicode_translate_error(self): from exceptions import UnicodeTranslateError - ut = UnicodeTranslateError(u"x", 1, 5, "bah") - assert ut.object == u'x' + ut = UnicodeTranslateError("x", 1, 5, "bah") + assert ut.object == 'x' assert ut.start == 1 assert ut.end == 5 assert ut.reason == 'bah' - assert ut.args == (u'x', 1, 5, 'bah') + assert ut.args == ('x', 1, 5, 'bah') assert ut.message == '' - ut.object = u'y' - assert ut.object == u'y' + ut.object = 'y' + assert ut.object == 'y' assert str(ut) == "can't translate characters in position 1-4: bah" ut.start = 4 - ut.object = u'012345' + ut.object = '012345' assert str(ut) == "can't translate character u'\\x34' in position 4: bah" ut.object = [] assert ut.object == [] From noreply at buildbot.pypy.org Tue Feb 28 23:43:49 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Tue, 28 Feb 2012 23:43:49 +0100 (CET) Subject: [pypy-commit] pypy py3k: __hex__ and __oct__ has gone in py3k, and we can return only str from __str__ and __repr__ (instead of e.g. str and unicode in py2). 
Adapt this test, which is now very simple Message-ID: <20120228224349.EAB768203C@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r53005:49869aab11b9 Date: 2012-02-28 23:28 +0100 http://bitbucket.org/pypy/pypy/changeset/49869aab11b9/ Log: __hex__ and __oct__ has gone in py3k, and we can return only str from __str__ and __repr__ (instead of e.g. str and unicode in py2). Adapt this test, which is now very simple diff --git a/pypy/objspace/test/test_descroperation.py b/pypy/objspace/test/test_descroperation.py --- a/pypy/objspace/test/test_descroperation.py +++ b/pypy/objspace/test/test_descroperation.py @@ -280,17 +280,10 @@ return answer * 2 def __repr__(self): return answer * 3 - def __hex__(self): - return answer * 4 - def __oct__(self): - return answer * 5 - for operate, n in [(str, 2), (repr, 3), (hex, 4), (oct, 5)]: + for operate, n in [(str, 2), (repr, 3)]: answer = "hello" assert operate(A()) == "hello" * n - if operate not in (hex, oct): - answer = u"world" - assert operate(A()) == "world" * n assert type(operate(A())) is str answer = 42 raises(TypeError, operate, A()) From noreply at buildbot.pypy.org Tue Feb 28 23:43:51 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Tue, 28 Feb 2012 23:43:51 +0100 (CET) Subject: [pypy-commit] pypy py3k: objects of incompatible types are now unordeable in python3. Completely change the meaning of this test: instead of checking the ordering, we check that we always raise TypeError. Passes with -A, still fails on pypy Message-ID: <20120228224351.26AFC8203C@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r53006:62d3938e5e15 Date: 2012-02-28 23:34 +0100 http://bitbucket.org/pypy/pypy/changeset/62d3938e5e15/ Log: objects of incompatible types are now unordeable in python3. Completely change the meaning of this test: instead of checking the ordering, we check that we always raise TypeError. Passes with -A, still fails on pypy diff --git a/pypy/objspace/test/test_descroperation.py b/pypy/objspace/test/test_descroperation.py --- a/pypy/objspace/test/test_descroperation.py +++ b/pypy/objspace/test/test_descroperation.py @@ -300,37 +300,26 @@ x.__class__ = Y raises(AttributeError, getattr, x, 'a') - def test_silly_but_consistent_order(self): + def test_unordeable_types(self): # incomparable objects sort by type name :-/ class A(object): pass class zz(object): pass - assert A() < zz() - assert zz() > A() + raises(TypeError, "A() < zz()") + raises(TypeError, "zz() > A()") + raises(TypeError, "A() < A()") # if in doubt, CPython sorts numbers before non-numbers - assert 0 < () - assert 0L < () - assert 0.0 < () - assert 0j < () - assert 0 < [] - assert 0L < [] - assert 0.0 < [] - assert 0j < [] - assert 0 < A() - assert 0L < A() - assert 0.0 < A() - assert 0j < A() - assert 0 < zz() - assert 0L < zz() - assert 0.0 < zz() - assert 0j < zz() - # what if the type name is the same... 
whatever, but - # be consistent - a1 = A() - a2 = A() - class A(object): pass - a3 = A() - a4 = A() - assert (a1 < a3) == (a1 < a4) == (a2 < a3) == (a2 < a4) + raises(TypeError, "0 < ()") + raises(TypeError, "0.0 < ()") + raises(TypeError, "0j < ()") + raises(TypeError, "0 < []") + raises(TypeError, "0.0 < []") + raises(TypeError, "0j < []") + raises(TypeError, "0 < A()") + raises(TypeError, "0.0 < A()") + raises(TypeError, "0j < A()") + raises(TypeError, "0 < zz()") + raises(TypeError, "0.0 < zz()") + raises(TypeError, "0j < zz()") def test_setattrweakref(self): skip("fails, works in cpython") From noreply at buildbot.pypy.org Tue Feb 28 23:43:52 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Tue, 28 Feb 2012 23:43:52 +0100 (CET) Subject: [pypy-commit] pypy py3k: kill the case about longs, we no longer have them Message-ID: <20120228224352.5C3918203C@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r53007:f0188f4306f9 Date: 2012-02-28 23:35 +0100 http://bitbucket.org/pypy/pypy/changeset/f0188f4306f9/ Log: kill the case about longs, we no longer have them diff --git a/pypy/objspace/test/test_descroperation.py b/pypy/objspace/test/test_descroperation.py --- a/pypy/objspace/test/test_descroperation.py +++ b/pypy/objspace/test/test_descroperation.py @@ -586,10 +586,6 @@ def __len__(self): return -1 raises(ValueError, len, X()) - class Y(object): - def __len__(self): - return -1L - raises(ValueError, len, Y()) def test_len_custom__int__(self): class X(object): From noreply at buildbot.pypy.org Tue Feb 28 23:43:53 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Tue, 28 Feb 2012 23:43:53 +0100 (CET) Subject: [pypy-commit] pypy py3k: kill the cases about longs. Also, instances of classes which define __eq__ but not __hash__ are unhashable in py3k. Fix the test, which now passes with -A but fails on py.py Message-ID: <20120228224353.8B9638203C@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r53008:3a26b5491705 Date: 2012-02-28 23:39 +0100 http://bitbucket.org/pypy/pypy/changeset/3a26b5491705/ Log: kill the cases about longs. Also, instances of classes which define __eq__ but not __hash__ are unhashable in py3k. Fix the test, which now passes with -A but fails on py.py diff --git a/pypy/objspace/test/test_descriptor.py b/pypy/objspace/test/test_descriptor.py --- a/pypy/objspace/test/test_descriptor.py +++ b/pypy/objspace/test/test_descriptor.py @@ -110,7 +110,7 @@ # useless result). 
class B(object): def __eq__(self, other): pass - hash(B()) + raises(TypeError, "hash(B())") # because we define __eq__ but not __hash__ # same as above for __cmp__ class C(object): @@ -121,26 +121,12 @@ def __hash__(self): return "something" raises(TypeError, hash, E()) - class F: # can return long - def __hash__(self): - return long(2**33) - assert hash(F()) == hash(2**33) # 2.5 behavior class G: def __hash__(self): return 1 assert isinstance(hash(G()), int) - # __hash__ can return a subclass of long, but the fact that it's - # a subclass is ignored - class mylong(long): - def __hash__(self): - return 0 - class H(object): - def __hash__(self): - return mylong(42) - assert hash(H()) == hash(42L) - # don't return a subclass of int, either class myint(int): pass From noreply at buildbot.pypy.org Tue Feb 28 23:43:54 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Tue, 28 Feb 2012 23:43:54 +0100 (CET) Subject: [pypy-commit] pypy py3k: fix syntax Message-ID: <20120228224354.BC65E8203C@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r53009:ae3a4343a78d Date: 2012-02-28 23:43 +0100 http://bitbucket.org/pypy/pypy/changeset/ae3a4343a78d/ Log: fix syntax diff --git a/pypy/tool/pytest/test/test_pytestsupport.py b/pypy/tool/pytest/test/test_pytestsupport.py --- a/pypy/tool/pytest/test/test_pytestsupport.py +++ b/pypy/tool/pytest/test/test_pytestsupport.py @@ -49,18 +49,18 @@ except AssertionError: pass else: - raise AssertionError, "app level AssertionError mixup!" + raise AssertionError("app level AssertionError mixup!") def app_test_exception_with_message(): try: assert 0, "Failed" - except AssertionError, e: + except AssertionError as e: assert e.msg == "Failed" def app_test_comparison(): try: assert 3 > 4 - except AssertionError, e: + except AssertionError as e: assert "3 > 4" in e.msg @@ -162,7 +162,7 @@ def test_app_test_blow(testdir): conftestpath.copy(testdir.tmpdir) sorter = testdir.inline_runsource("""class AppTestBlow: - def test_one(self): exec 'blow' + def test_one(self): exec('blow') """) ev, = sorter.getreports("pytest_runtest_logreport") From noreply at buildbot.pypy.org Wed Feb 29 07:46:42 2012 From: noreply at buildbot.pypy.org (fijal) Date: Wed, 29 Feb 2012 07:46:42 +0100 (CET) Subject: [pypy-commit] pypy dead-code-optimization: work some on the tests, IN PROGRESS Message-ID: <20120229064642.804008204C@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: dead-code-optimization Changeset: r53010:543bfbe532c4 Date: 2012-02-28 22:46 -0800 http://bitbucket.org/pypy/pypy/changeset/543bfbe532c4/ Log: work some on the tests, IN PROGRESS diff --git a/pypy/jit/metainterp/optimizeopt/deadops.py b/pypy/jit/metainterp/optimizeopt/deadops.py --- a/pypy/jit/metainterp/optimizeopt/deadops.py +++ b/pypy/jit/metainterp/optimizeopt/deadops.py @@ -1,10 +1,16 @@ + +from pypy.jit.metainterp.history import rop def remove_dead_ops(loop): newops = [] seen = {} for i in range(len(loop.operations) -1, -1, -1): op = loop.operations[i] - if op.has_no_side_effect() and op.result not in seen: + # XXX SAME_AS is required for crazy stuff that unroll does, which + # makes dead ops sometime alive + if (op.opnum not in [rop.LABEL, rop.JUMP, rop.SAME_AS] + and op.has_no_side_effect() + and op.result not in seen): continue for arg in op.getarglist(): seen[arg] = None diff --git a/pypy/jit/metainterp/optimizeopt/rewrite.py b/pypy/jit/metainterp/optimizeopt/rewrite.py --- a/pypy/jit/metainterp/optimizeopt/rewrite.py +++ b/pypy/jit/metainterp/optimizeopt/rewrite.py @@ -465,7 
+465,6 @@ def optimize_INT_FLOORDIV(self, op): v1 = self.getvalue(op.getarg(0)) v2 = self.getvalue(op.getarg(1)) - if v2.is_constant() and v2.box.getint() == 1: self.make_equal_to(op.result, v1) return diff --git a/pypy/jit/metainterp/optimizeopt/test/test_multilabel.py b/pypy/jit/metainterp/optimizeopt/test/test_multilabel.py --- a/pypy/jit/metainterp/optimizeopt/test/test_multilabel.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_multilabel.py @@ -227,8 +227,9 @@ def test_two_intermediate_labels_basic_1(self): ops = """ - [p1, i1] + [p1, i1, i10] i2 = getfield_gc(p1, descr=valuedescr) + guard_true(i10) [p1, i1, i2] label(p1, i1) i3 = getfield_gc(p1, descr=valuedescr) i4 = int_add(i1, i3) @@ -237,8 +238,9 @@ jump(p1, i5) """ expected = """ - [p1, i1] + [p1, i1, i10] i2 = getfield_gc(p1, descr=valuedescr) + guard_true(i10) [p1, i1, i2] label(p1, i1, i2) i4 = int_add(i1, i2) label(p1, i4) @@ -409,8 +411,8 @@ """ expected = """ [i0] + label(i0) i1 = int_add(i0, 1) - label(i0, i1) escape(i1) jump(i0, i1) """ diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py @@ -2795,7 +2795,6 @@ """ preamble = """ [p0] - p1 = getfield_gc(p0, descr=nextdescr) jump(p0) """ expected = """ @@ -3441,7 +3440,6 @@ """ expected = """ [p1] - i0 = force_token() jump(p1) """ self.optimize_loop(ops, expected, expected) @@ -3796,16 +3794,16 @@ def test_bound_lt_noguard(self): ops = """ - [i0] + [i0, i3] i1 = int_lt(i0, 4) i2 = int_lt(i0, 5) - jump(i2) - """ - expected = """ - [i0] + jump(i2, i1) + """ + expected = """ + [i0, i3] i1 = int_lt(i0, 4) i2 = int_lt(i0, 5) - jump(i2) + jump(i2, i1) """ self.optimize_loop(ops, expected, expected) @@ -3890,7 +3888,6 @@ [i0] i1 = int_lt(i0, 4) guard_true(i1) [] - i2 = int_add(i0, 10) jump(i0) """ expected = """ @@ -3937,7 +3934,6 @@ [i0] i1 = int_lt(i0, 4) guard_true(i1) [] - i2 = int_add(i0, 10) jump(i0) """ expected = """ @@ -4014,7 +4010,6 @@ guard_true(i1) [] i1p = int_gt(i0, -4) guard_true(i1p) [] - i2 = int_sub(i0, 10) jump(i0) """ expected = """ @@ -4194,7 +4189,6 @@ expected = """ [i0, p0] p1 = new_array(i0, descr=arraydescr) - i1 = arraylen_gc(p1) setarrayitem_gc(p0, 0, p1) jump(i0, p0) """ @@ -4210,7 +4204,6 @@ """ preamble = """ [p0] - i0 = strlen(p0) jump(p0) """ expected = """ @@ -4336,7 +4329,6 @@ """ expected = """ [p0, i22] - i331 = force_token() jump(p0, i22) """ self.optimize_loop(ops, expected) @@ -4346,7 +4338,7 @@ [p4, p7, i30] p16 = getfield_gc(p4, descr=valuedescr) p17 = getarrayitem_gc(p4, 1, descr=arraydescr) - guard_value(p16, ConstPtr(myptr), descr=) [] + guard_value(p16, ConstPtr(myptr), descr=) [p17] i1 = getfield_raw(p7, descr=nextdescr) i2 = int_add(i1, i30) setfield_raw(p7, 7, descr=nextdescr) @@ -4375,6 +4367,16 @@ setarrayitem_raw(p7, 1, i2, descr=arraydescr) jump(p4, p7, i30) """ + preamble = """ + [p4, p7, i30] + p16 = getfield_gc(p4, descr=valuedescr) + guard_value(p16, ConstPtr(myptr), descr=) [] + i1 = getarrayitem_raw(p7, 1, descr=arraydescr) + i2 = int_add(i1, i30) + setarrayitem_raw(p7, 1, 7, descr=arraydescr) + setarrayitem_raw(p7, 1, i2, descr=arraydescr) + jump(p4, p7, i30) + """ expected = """ [p4, p7, i30] i1 = getarrayitem_raw(p7, 1, descr=arraydescr) @@ -4383,7 +4385,7 @@ setarrayitem_raw(p7, 1, i2, descr=arraydescr) jump(p4, p7, i30) """ - self.optimize_loop(ops, expected, ops) + self.optimize_loop(ops, expected, preamble) def 
test_pure(self): ops = """ @@ -4441,7 +4443,14 @@ setfield_gc(p3, p1, descr=nextdescr) jump() """ - self.optimize_loop(ops, ops) + expected = """ + [] + p1 = escape() + p3 = escape() + setfield_gc(p3, p1, descr=nextdescr) + jump() + """ + self.optimize_loop(ops, expected) def test_getfield_guard_const(self): ops = """ @@ -4627,7 +4636,6 @@ ix3 = int_xor(i1, i0) ix3t = int_ge(ix3, 0) guard_true(ix3t) [] - ix4 = int_xor(i1, i2) jump(i0, i1, i2) """ expected = """ @@ -4666,7 +4674,6 @@ ix3 = int_floordiv(i1, i0) ix3t = int_ge(ix3, 0) guard_true(ix3t) [] - ix4 = int_floordiv(i1, i2) jump(i0, i1, i2) """ expected = """ @@ -4694,14 +4701,12 @@ """ preamble = """ [i1, i2a, i2b, i2c] - i3 = int_is_zero(i1) i4 = int_gt(i2a, 7) guard_true(i4) [] i6 = int_le(i2b, -7) guard_true(i6) [] i8 = int_gt(i2c, -7) guard_true(i8) [] - i9 = int_is_zero(i2c) jump(i1, i2a, i2b, i2c) """ expected = """ @@ -4741,9 +4746,6 @@ i17 = int_eq(i15, -1) guard_false(i17) [] i18 = int_floordiv(i7, i6) - i19 = int_xor(i7, i6) - i22 = int_mod(i7, i6) - i23 = int_is_true(i22) jump(i7, i18, i8) """ expected = """ @@ -4754,48 +4756,45 @@ i17 = int_eq(i15, -1) guard_false(i17) [] i18 = int_floordiv(i7, i6) - i19 = int_xor(i7, i6) - i22 = int_mod(i7, i6) - i23 = int_is_true(i22) jump(i7, i18, i8) """ self.optimize_loop(ops, expected, preamble) def test_division_to_rshift(self): ops = """ - [i1, i2] + [i1, i2, i3, i4, i5, i6, i7, i8, i9, i10, i11, i12, i13] it = int_gt(i1, 0) guard_true(it)[] - i3 = int_floordiv(i1, i2) - i4 = int_floordiv(2, i2) - i5 = int_floordiv(i1, 2) - i6 = int_floordiv(3, i2) - i7 = int_floordiv(i1, 3) - i8 = int_floordiv(4, i2) - i9 = int_floordiv(i1, 4) - i10 = int_floordiv(i1, 0) - i11 = int_floordiv(i1, 1) - i12 = int_floordiv(i2, 2) - i13 = int_floordiv(i2, 3) - i14 = int_floordiv(i2, 4) - jump(i5, i14) - """ - expected = """ - [i1, i2] + i14 = int_floordiv(i1, i2) + i15 = int_floordiv(2, i3) + i16 = int_floordiv(i1, 2) + i17 = int_floordiv(3, i5) + i18 = int_floordiv(i6, 3) + i19 = int_floordiv(4, i7) + i20 = int_floordiv(i1, 4) + i21 = int_floordiv(i9, 0) + i22 = int_floordiv(i10, 1) + i23 = int_floordiv(i11, 2) + i24 = int_floordiv(i12, 3) + i25 = int_floordiv(i13, 4) + jump(i25, i14, i15, i16, i17, i18, i19, i20, i21, i22, i23, i24, i25) + """ + expected = """ + [i1, i2, i3, i4, i5, i6, i7, i8, i9, i11, i12, i13] it = int_gt(i1, 0) guard_true(it)[] - i3 = int_floordiv(i1, i2) - i4 = int_floordiv(2, i2) - i5 = int_rshift(i1, 1) - i6 = int_floordiv(3, i2) - i7 = int_floordiv(i1, 3) - i8 = int_floordiv(4, i2) - i9 = int_rshift(i1, 2) - i10 = int_floordiv(i1, 0) - i12 = int_floordiv(i2, 2) - i13 = int_floordiv(i2, 3) - i14 = int_floordiv(i2, 4) - jump(i5, i14) + i14 = int_floordiv(i1, i2) + i15 = int_floordiv(2, i3) + i16 = int_rshift(i1, 1) + i17 = int_floordiv(3, i5) + i18 = int_floordiv(i6, 3) + i19 = int_floordiv(4, i7) + i20 = int_rshift(i1, 2) + i21 = int_floordiv(i9, 0) + i23 = int_floordiv(i11, 2) + i24 = int_floordiv(i12, 3) + i25 = int_floordiv(i13, 4) + jump(i25, i14, i15, i16, i17, i18, i19, i20, i21, i23, i24) """ self.optimize_loop(ops, expected) diff --git a/pypy/jit/metainterp/optimizeopt/test/test_util.py b/pypy/jit/metainterp/optimizeopt/test/test_util.py --- a/pypy/jit/metainterp/optimizeopt/test/test_util.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_util.py @@ -435,12 +435,12 @@ token = JitCellToken() preamble.operations = [ResOperation(rop.LABEL, inputargs, None, descr=TargetToken(token))] + \ operations + \ - [ResOperation(rop.LABEL, jump_args, None, descr=token)] + 
[ResOperation(rop.LABEL, jump_args[:], None, descr=token)] self._do_optimize_loop(preamble, call_pure_results) assert preamble.operations[-1].getopnum() == rop.LABEL - inliner = Inliner(inputargs, jump_args) + inliner = Inliner(inputargs, jump_args[:]) loop.resume_at_jump_descr = preamble.resume_at_jump_descr loop.operations = [preamble.operations[-1]] + \ [inliner.inline_op(op, clone=False) for op in cloned_operations] + \ @@ -450,14 +450,15 @@ assert loop.operations[-1].getopnum() == rop.JUMP assert loop.operations[0].getopnum() == rop.LABEL loop.inputargs = loop.operations[0].getarglist() + self._do_optimize_loop(loop, call_pure_results) - self._do_optimize_loop(loop, call_pure_results) extra_same_as = [] while loop.operations[0].getopnum() != rop.LABEL: extra_same_as.append(loop.operations[0]) del loop.operations[0] # Hack to prevent random order of same_as ops + extra_same_as.sort(key=lambda op: str(preamble.operations).find(str(op.getarg(0)))) for op in extra_same_as: From noreply at buildbot.pypy.org Wed Feb 29 09:15:33 2012 From: noreply at buildbot.pypy.org (hakanardo) Date: Wed, 29 Feb 2012 09:15:33 +0100 (CET) Subject: [pypy-commit] pypy default: simplify test Message-ID: <20120229081533.258208204C@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: Changeset: r53011:65946bb70fd9 Date: 2012-02-29 08:00 +0100 http://bitbucket.org/pypy/pypy/changeset/65946bb70fd9/ Log: simplify test diff --git a/pypy/jit/metainterp/optimizeopt/test/test_multilabel.py b/pypy/jit/metainterp/optimizeopt/test/test_multilabel.py --- a/pypy/jit/metainterp/optimizeopt/test/test_multilabel.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_multilabel.py @@ -398,100 +398,37 @@ with raises(InvalidLoop): self.optimize_loop(ops, ops) - def test_maybe_issue1045_related(self): + def test_issue1045(self): ops = """ - [p8] - p54 = getfield_gc(p8, descr=valuedescr) - mark_opaque_ptr(p54) - i55 = getfield_gc(p54, descr=nextdescr) - p57 = new_with_vtable(ConstClass(node_vtable)) - setfield_gc(p57, i55, descr=otherdescr) - p69 = new_with_vtable(ConstClass(node_vtable)) - setfield_gc(p69, i55, descr=otherdescr) - i71 = int_eq(i55, -9223372036854775808) - guard_false(i71) [] - i73 = int_mod(i55, 2) - i75 = int_rshift(i73, 63) - i76 = int_and(2, i75) - i77 = int_add(i73, i76) - p79 = new_with_vtable(ConstClass(node_vtable)) - setfield_gc(p79, i77, descr=otherdescr) - i81 = int_eq(i77, 1) - guard_false(i81) [] - i0 = int_ge(i55, 1) - guard_true(i0) [] - label(p57) - jump(p57) - """ - expected = """ - [p8] - p54 = getfield_gc(p8, descr=valuedescr) - i55 = getfield_gc(p54, descr=nextdescr) - i71 = int_eq(i55, -9223372036854775808) - guard_false(i71) [] + [i55] i73 = int_mod(i55, 2) i75 = int_rshift(i73, 63) i76 = int_and(2, i75) i77 = int_add(i73, i76) i81 = int_eq(i77, 1) - guard_false(i81) [] i0 = int_ge(i55, 1) guard_true(i0) [] label(i55) - jump(i55) - """ - self.optimize_loop(ops, expected) - - def test_issue1045(self): - ops = """ - [p8] - p54 = getfield_gc(p8, descr=valuedescr) - mark_opaque_ptr(p54) - i55 = getfield_gc(p54, descr=nextdescr) - p57 = new_with_vtable(ConstClass(node_vtable)) - setfield_gc(p57, i55, descr=otherdescr) - p69 = new_with_vtable(ConstClass(node_vtable)) - setfield_gc(p69, i55, descr=otherdescr) - i71 = int_eq(i55, -9223372036854775808) - guard_false(i71) [] - i73 = int_mod(i55, 2) - i75 = int_rshift(i73, 63) - i76 = int_and(2, i75) - i77 = int_add(i73, i76) - p79 = new_with_vtable(ConstClass(node_vtable)) - setfield_gc(p79, i77, descr=otherdescr) - i81 = int_eq(i77, 1) - 
guard_false(i81) [] - i0 = int_ge(i55, 1) - guard_true(i0) [] - label(p57) i3 = int_mod(i55, 2) - escape(i3) i5 = int_rshift(i3, 63) i6 = int_and(2, i5) i7 = int_add(i3, i6) i8 = int_eq(i7, 1) escape(i8) - jump(p57) + jump(i55) """ expected = """ - [p8] - p54 = getfield_gc(p8, descr=valuedescr) - i55 = getfield_gc(p54, descr=nextdescr) - i71 = int_eq(i55, -9223372036854775808) - guard_false(i71) [] + [i55] i73 = int_mod(i55, 2) i75 = int_rshift(i73, 63) i76 = int_and(2, i75) i77 = int_add(i73, i76) i81 = int_eq(i77, 1) - guard_false(i81) [] i0 = int_ge(i55, 1) guard_true(i0) [] - label(i55, i73) - escape(i73) - escape(i73) - jump(i55, i73) + label(i55, i81) + escape(i81) + jump(i55, i81) """ self.optimize_loop(ops, expected) From noreply at buildbot.pypy.org Wed Feb 29 09:15:34 2012 From: noreply at buildbot.pypy.org (hakanardo) Date: Wed, 29 Feb 2012 09:15:34 +0100 (CET) Subject: [pypy-commit] pypy default: Dont import boxes proven constant while setting up the short_boxes and dont use the fallback to produce boxes with same_as if the box was already produced (should fix issue1045) Message-ID: <20120229081534.57AF48204C@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: Changeset: r53012:6233cafefc45 Date: 2012-02-29 09:14 +0100 http://bitbucket.org/pypy/pypy/changeset/6233cafefc45/ Log: Dont import boxes proven constant while setting up the short_boxes and dont use the fallback to produce boxes with same_as if the box was already produced (should fix issue1045) diff --git a/pypy/jit/metainterp/optimizeopt/test/test_multilabel.py b/pypy/jit/metainterp/optimizeopt/test/test_multilabel.py --- a/pypy/jit/metainterp/optimizeopt/test/test_multilabel.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_multilabel.py @@ -457,7 +457,7 @@ metainterp_sd = FakeMetaInterpStaticData(self.cpu) optimize_unroll(metainterp_sd, loop, [OptRenameStrlen(), OptPure()], True) - def test_optimizer_renaming_boxes(self): + def test_optimizer_renaming_boxes1(self): ops = """ [p1] i1 = strlen(p1) diff --git a/pypy/jit/metainterp/optimizeopt/unroll.py b/pypy/jit/metainterp/optimizeopt/unroll.py --- a/pypy/jit/metainterp/optimizeopt/unroll.py +++ b/pypy/jit/metainterp/optimizeopt/unroll.py @@ -260,7 +260,7 @@ if op and op.result: preamble_value = exported_state.exported_values[op.result] value = self.optimizer.getvalue(op.result) - if not value.is_virtual(): + if not value.is_virtual() and not value.is_constant(): imp = ValueImporter(self, preamble_value, op) self.optimizer.importable_values[value] = imp newvalue = self.optimizer.getvalue(op.result) @@ -268,7 +268,9 @@ # note that emitting here SAME_AS should not happen, but # in case it does, we would prefer to be suboptimal in asm # to a fatal RPython exception. 
- if newresult is not op.result and not newvalue.is_constant(): + if newresult is not op.result and \ + not self.short_boxes.has_producer(newresult) and \ + not newvalue.is_constant(): op = ResOperation(rop.SAME_AS, [op.result], newresult) self.optimizer._newoperations.append(op) if self.optimizer.loop.logops: From noreply at buildbot.pypy.org Wed Feb 29 09:15:35 2012 From: noreply at buildbot.pypy.org (hakanardo) Date: Wed, 29 Feb 2012 09:15:35 +0100 (CET) Subject: [pypy-commit] pypy default: hg merge Message-ID: <20120229081535.874768204C@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: Changeset: r53013:3291a609bfd7 Date: 2012-02-29 09:15 +0100 http://bitbucket.org/pypy/pypy/changeset/3291a609bfd7/ Log: hg merge diff --git a/pypy/module/test_lib_pypy/test_collections.py b/pypy/module/test_lib_pypy/test_collections.py --- a/pypy/module/test_lib_pypy/test_collections.py +++ b/pypy/module/test_lib_pypy/test_collections.py @@ -6,7 +6,7 @@ from pypy.conftest import gettestobjspace -class AppTestcStringIO: +class AppTestCollections: def test_copy(self): import _collections def f(): From noreply at buildbot.pypy.org Wed Feb 29 11:53:28 2012 From: noreply at buildbot.pypy.org (arigo) Date: Wed, 29 Feb 2012 11:53:28 +0100 (CET) Subject: [pypy-commit] pypy default: Slight changes of the interface, to make it clear that callers don't Message-ID: <20120229105328.CAD588204C@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r53014:62365cdbdeb6 Date: 2012-02-29 11:53 +0100 http://bitbucket.org/pypy/pypy/changeset/62365cdbdeb6/ Log: Slight changes of the interface, to make it clear that callers don't expect to do anything with the token --- just check if it's 0 or not. diff --git a/pypy/jit/metainterp/pyjitpl.py b/pypy/jit/metainterp/pyjitpl.py --- a/pypy/jit/metainterp/pyjitpl.py +++ b/pypy/jit/metainterp/pyjitpl.py @@ -2349,7 +2349,7 @@ # warmstate.py. virtualizable_box = self.virtualizable_boxes[-1] virtualizable = vinfo.unwrap_virtualizable_box(virtualizable_box) - assert not vinfo.gettoken(virtualizable) + assert not vinfo.is_token_nonnull_gcref(virtualizable) # fill the virtualizable with the local boxes self.synchronize_virtualizable() # diff --git a/pypy/jit/metainterp/resume.py b/pypy/jit/metainterp/resume.py --- a/pypy/jit/metainterp/resume.py +++ b/pypy/jit/metainterp/resume.py @@ -1101,14 +1101,14 @@ virtualizable = self.decode_ref(numb.nums[index]) if self.resume_after_guard_not_forced == 1: # in the middle of handle_async_forcing() - assert vinfo.gettoken(virtualizable) - vinfo.settoken(virtualizable, vinfo.TOKEN_NONE) + assert vinfo.is_token_nonnull_gcref(virtualizable) + vinfo.reset_token_gcref(virtualizable) else: # just jumped away from assembler (case 4 in the comment in # virtualizable.py) into tracing (case 2); check that vable_token # is and stays 0. Note the call to reset_vable_token() in # warmstate.py. 
-            assert not vinfo.gettoken(virtualizable)
+            assert not vinfo.is_token_nonnull_gcref(virtualizable)
         return vinfo.write_from_resume_data_partial(virtualizable, self, numb)
 
     def load_value_of_type(self, TYPE, tagged):
diff --git a/pypy/jit/metainterp/virtualizable.py b/pypy/jit/metainterp/virtualizable.py
--- a/pypy/jit/metainterp/virtualizable.py
+++ b/pypy/jit/metainterp/virtualizable.py
@@ -262,15 +262,15 @@
         force_now._dont_inline_ = True
         self.force_now = force_now
 
-        def gettoken(virtualizable):
+        def is_token_nonnull_gcref(virtualizable):
             virtualizable = cast_gcref_to_vtype(virtualizable)
-            return virtualizable.vable_token
-        self.gettoken = gettoken
+            return bool(virtualizable.vable_token)
+        self.is_token_nonnull_gcref = is_token_nonnull_gcref
 
-        def settoken(virtualizable, token):
+        def reset_token_gcref(virtualizable):
             virtualizable = cast_gcref_to_vtype(virtualizable)
-            virtualizable.vable_token = token
-        self.settoken = settoken
+            virtualizable.vable_token = VirtualizableInfo.TOKEN_NONE
+        self.reset_token_gcref = reset_token_gcref
 
     def _freeze_(self):
         return True

From noreply at buildbot.pypy.org Wed Feb 29 12:10:51 2012
From: noreply at buildbot.pypy.org (antocuni)
Date: Wed, 29 Feb 2012 12:10:51 +0100 (CET)
Subject: [pypy-commit] pypy py3k: kill the config docs for two old modules
Message-ID: <20120229111051.D97478204C@wyvern.cs.uni-duesseldorf.de>

Author: Antonio Cuni
Branch: py3k
Changeset: r53015:a8ec35d89d7a
Date: 2012-02-29 09:25 +0100
http://bitbucket.org/pypy/pypy/changeset/a8ec35d89d7a/

Log:	kill the config docs for two old modules

diff --git a/pypy/doc/config/objspace.usemodules._file.txt b/pypy/doc/config/objspace.usemodules._file.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.usemodules._file.txt
+++ /dev/null
@@ -1,4 +0,0 @@
-Use the '_file' module. It is an internal module that contains helper
-functionality for the builtin ``file`` type.
-
-.. internal
diff --git a/pypy/doc/config/objspace.usemodules.cStringIO.txt b/pypy/doc/config/objspace.usemodules.cStringIO.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.usemodules.cStringIO.txt
+++ /dev/null
@@ -1,4 +0,0 @@
-Use the built-in cStringIO module.
-
-If not enabled, importing cStringIO gives you the app-level
-implementation from the standard library StringIO module.
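The interface change in changeset 62365cdbdeb6 above boils down to replacing a raw token getter/setter pair with two intent-revealing operations: a boolean test and a reset. Below is a minimal, self-contained sketch of that idea in plain Python; the Virtualizable class and the token value 42 are hypothetical stand-ins for illustration only, not PyPy's real JIT structures:

    TOKEN_NONE = 0

    class Virtualizable(object):
        # Stand-in for the real virtualizable struct; only the token field matters here.
        def __init__(self):
            self.vable_token = TOKEN_NONE

    def is_token_nonnull(vable):
        # Callers only ever ask "is a token set?", so expose a bool
        # instead of handing out the token value itself.
        return bool(vable.vable_token)

    def reset_token(vable):
        # The only write callers need: clear the token back to TOKEN_NONE.
        vable.vable_token = TOKEN_NONE

    v = Virtualizable()
    assert not is_token_nonnull(v)
    v.vable_token = 42        # pretend the assembler stored a token
    assert is_token_nonnull(v)
    reset_token(v)
    assert not is_token_nonnull(v)

Call sites then read as assertions about state, as in the resume.py hunk above, rather than peeking at the token value directly.
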
From noreply at buildbot.pypy.org Wed Feb 29 12:10:53 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Wed, 29 Feb 2012 12:10:53 +0100 (CET) Subject: [pypy-commit] pypy py3k: make sure that bin() calls __index__ Message-ID: <20120229111053.7417F820D1@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r53016:9a0f43982dd5 Date: 2012-02-29 11:50 +0100 http://bitbucket.org/pypy/pypy/changeset/9a0f43982dd5/ Log: make sure that bin() calls __index__ diff --git a/pypy/module/__builtin__/app_operation.py b/pypy/module/__builtin__/app_operation.py --- a/pypy/module/__builtin__/app_operation.py +++ b/pypy/module/__builtin__/app_operation.py @@ -1,4 +1,5 @@ +import operator + def bin(x): - if not isinstance(x, int): - raise TypeError("%s object cannot be interpreted as an integer" % type(x)) + x = operator.index(x) return x.__format__("#b") diff --git a/pypy/module/__builtin__/test/test_builtin.py b/pypy/module/__builtin__/test/test_builtin.py --- a/pypy/module/__builtin__/test/test_builtin.py +++ b/pypy/module/__builtin__/test/test_builtin.py @@ -77,10 +77,14 @@ r"""'\'\x00"\n\r\t abcd\x85\xe9\U00012fff\ud800\U0001d121xxx.'""" def test_bin(self): + class Foo: + def __index__(self): + return 4 assert bin(0) == "0b0" assert bin(-1) == "-0b1" assert bin(2) == "0b10" assert bin(-2) == "-0b10" + assert bin(Foo()) == "0b100" raises(TypeError, bin, 0.) def test_chr(self): From noreply at buildbot.pypy.org Wed Feb 29 12:10:55 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Wed, 29 Feb 2012 12:10:55 +0100 (CET) Subject: [pypy-commit] pypy py3k: refactor hex() and oct(). Message-ID: <20120229111055.D7F518236C@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r53017:4d73687506dd Date: 2012-02-29 12:10 +0100 http://bitbucket.org/pypy/pypy/changeset/4d73687506dd/ Log: refactor hex() and oct(). In py2 they support the special methods __hex__ and __oct__, and thus are rendered as regular objspace methods (space.oct and space.hex). In py3k, __hex__ and __oct__ are gone, so they are just normal builtins, which are implemented on top of __format__ for consistency with bin() (the alternative would be to implement them on top of the rpython hex() and oct()). Killing support for hex and oct from the objspace is not trivial, e.g. we need to keep them in the method table because the flow objspace expects them. I hope not to have broken anything. 
Any review is appreciated :-) diff --git a/pypy/interpreter/baseobjspace.py b/pypy/interpreter/baseobjspace.py --- a/pypy/interpreter/baseobjspace.py +++ b/pypy/interpreter/baseobjspace.py @@ -1528,8 +1528,10 @@ ('neg', 'neg', 1, ['__neg__']), ('nonzero', 'truth', 1, ['__bool__']), ('abs' , 'abs', 1, ['__abs__']), - ('hex', 'hex', 1, ['__hex__']), - ('oct', 'oct', 1, ['__oct__']), + # hex and oct no longer calls special methods in py3k, but we need to keep + # them in this table for the flow object space + ('hex', 'hex', 1, []), + ('oct', 'oct', 1, []), ('ord', 'ord', 1, []), ('invert', '~', 1, ['__invert__']), ('add', '+', 2, ['__add__', '__radd__']), diff --git a/pypy/module/__builtin__/__init__.py b/pypy/module/__builtin__/__init__.py --- a/pypy/module/__builtin__/__init__.py +++ b/pypy/module/__builtin__/__init__.py @@ -28,7 +28,8 @@ 'dir' : 'app_inspect.dir', 'bin' : 'app_operation.bin', - + 'oct' : 'app_operation.oct', + 'hex' : 'app_operation.hex', } interpleveldefs = { @@ -53,8 +54,6 @@ 'pow' : 'operation.pow', 'repr' : 'operation.repr', 'hash' : 'operation.hash', - 'oct' : 'operation.oct', - 'hex' : 'operation.hex', 'round' : 'operation.round', 'cmp' : 'operation.cmp', 'divmod' : 'operation.divmod', diff --git a/pypy/module/__builtin__/app_operation.py b/pypy/module/__builtin__/app_operation.py --- a/pypy/module/__builtin__/app_operation.py +++ b/pypy/module/__builtin__/app_operation.py @@ -1,5 +1,16 @@ import operator def bin(x): + """Return the binary representation of an integer.""" x = operator.index(x) return x.__format__("#b") + +def oct(x): + """Return the octal representation of an integer.""" + x = operator.index(x) + return x.__format__("#o") + +def hex(x): + """Return the hexadecimal representation of an integer.""" + x = operator.index(x) + return x.__format__("#x") diff --git a/pypy/module/__builtin__/operation.py b/pypy/module/__builtin__/operation.py --- a/pypy/module/__builtin__/operation.py +++ b/pypy/module/__builtin__/operation.py @@ -94,15 +94,6 @@ two un-equal objects to have the same hash value.""" return space.hash(w_object) -def oct(space, w_val): - """Return the octal representation of an integer.""" - # XXX does this need to be a space operation? - return space.oct(w_val) - -def hex(space, w_val): - """Return the hexadecimal representation of an integer.""" - return space.hex(w_val) - def id(space, w_object): "Return the identity of an object: id(x) == id(y) if and only if x is y." return space.id(w_object) diff --git a/pypy/module/__builtin__/test/test_builtin.py b/pypy/module/__builtin__/test/test_builtin.py --- a/pypy/module/__builtin__/test/test_builtin.py +++ b/pypy/module/__builtin__/test/test_builtin.py @@ -87,6 +87,28 @@ assert bin(Foo()) == "0b100" raises(TypeError, bin, 0.) + def test_oct(self): + class Foo: + def __index__(self): + return 4 + assert oct(0) == "0o0" + assert oct(-1) == "-0o1" + assert oct(8) == "0o10" + assert oct(-8) == "-0o10" + assert oct(Foo()) == "0o4" + raises(TypeError, oct, 0.) + + def test_hex(self): + class Foo: + def __index__(self): + return 4 + assert hex(0) == "0x0" + assert hex(-1) == "-0x1" + assert hex(16) == "0x10" + assert hex(-16) == "-0x10" + assert hex(Foo()) == "0x4" + raises(TypeError, hex, 0.) 
+ def test_chr(self): import sys assert chr(65) == 'A' diff --git a/pypy/objspace/descroperation.py b/pypy/objspace/descroperation.py --- a/pypy/objspace/descroperation.py +++ b/pypy/objspace/descroperation.py @@ -718,9 +718,7 @@ for targetname, specialname in [ ('str', '__str__'), - ('repr', '__repr__'), - ('oct', '__oct__'), - ('hex', '__hex__')]: + ('repr', '__repr__')]: source = """if 1: def %(targetname)s(space, w_obj): diff --git a/pypy/objspace/std/builtinshortcut.py b/pypy/objspace/std/builtinshortcut.py --- a/pypy/objspace/std/builtinshortcut.py +++ b/pypy/objspace/std/builtinshortcut.py @@ -35,7 +35,7 @@ 'setattr', 'delattr', 'userdel', # mostly for non-builtins 'get', 'set', 'delete', # uncommon (except on functions) 'delitem', 'trunc', # rare stuff? - 'abs', 'hex', 'oct', # rare stuff? + 'abs', # rare stuff? 'pos', 'divmod', 'cmp', # rare stuff? 'float', 'long', # rare stuff? 'isinstance', 'issubtype', diff --git a/pypy/objspace/std/intobject.py b/pypy/objspace/std/intobject.py --- a/pypy/objspace/std/intobject.py +++ b/pypy/objspace/std/intobject.py @@ -330,12 +330,6 @@ x = float(a) return space.newfloat(x) -def oct__Int(space, w_int1): - return space.wrap(oct(w_int1.intval)) - -def hex__Int(space, w_int1): - return space.wrap(hex(w_int1.intval)) - def getnewargs__Int(space, w_int1): return space.newtuple([wrapint(space, w_int1.intval)]) diff --git a/pypy/objspace/std/longobject.py b/pypy/objspace/std/longobject.py --- a/pypy/objspace/std/longobject.py +++ b/pypy/objspace/std/longobject.py @@ -287,12 +287,6 @@ def or__Long_Long(space, w_long1, w_long2): return W_LongObject(w_long1.num.or_(w_long2.num)) -def oct__Long(space, w_long1): - return space.wrap(w_long1.num.oct()) - -def hex__Long(space, w_long1): - return space.wrap(w_long1.num.hex()) - def getnewargs__Long(space, w_long1): return space.newtuple([W_LongObject(w_long1.num)]) diff --git a/pypy/objspace/std/test/test_intobject.py b/pypy/objspace/std/test/test_intobject.py --- a/pypy/objspace/std/test/test_intobject.py +++ b/pypy/objspace/std/test/test_intobject.py @@ -271,18 +271,6 @@ result = iobj.int__Int(self.space, f1) assert result == f1 - def test_oct(self): - x = 012345 - f1 = iobj.W_IntObject(x) - result = iobj.oct__Int(self.space, f1) - assert self.space.unwrap(result) == oct(x) - - def test_hex(self): - x = 0x12345 - f1 = iobj.W_IntObject(x) - result = iobj.hex__Int(self.space, f1) - assert self.space.unwrap(result) == hex(x) - class AppTestInt: def test_conjugate(self): diff --git a/pypy/objspace/std/test/test_smallintobject.py b/pypy/objspace/std/test/test_smallintobject.py --- a/pypy/objspace/std/test/test_smallintobject.py +++ b/pypy/objspace/std/test/test_smallintobject.py @@ -213,18 +213,6 @@ result = self.space.int(f1) assert result == f1 - def test_oct(self): - x = 012345 - f1 = wrapint(self.space, x) - result = self.space.oct(f1) - assert self.space.unwrap(result) == oct(x) - - def test_hex(self): - x = 0x12345 - f1 = wrapint(self.space, x) - result = self.space.hex(f1) - assert self.space.unwrap(result) == hex(x) - class AppTestSmallInt(AppTestInt): diff --git a/pypy/objspace/test/test_descriptor.py b/pypy/objspace/test/test_descriptor.py --- a/pypy/objspace/test/test_descriptor.py +++ b/pypy/objspace/test/test_descriptor.py @@ -97,7 +97,7 @@ raises(TypeError, repr, inst) raises(TypeError, oct, inst) raises(TypeError, hex, inst) - assert A.seen == [1,2,3,4] + assert A.seen == [1,2] # __oct__ and __hex__ are no longer called def test_hash(self): class A(object): From noreply at buildbot.pypy.org Wed 
Feb 29 13:43:27 2012 From: noreply at buildbot.pypy.org (arigo) Date: Wed, 29 Feb 2012 13:43:27 +0100 (CET) Subject: [pypy-commit] pypy continulet-jit: Clarify. Message-ID: <20120229124327.3A1A08204C@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: continulet-jit Changeset: r53018:2953383cec07 Date: 2012-02-29 11:53 +0100 http://bitbucket.org/pypy/pypy/changeset/2953383cec07/ Log: Clarify. diff --git a/pypy/rlib/rstacklet.py b/pypy/rlib/rstacklet.py --- a/pypy/rlib/rstacklet.py +++ b/pypy/rlib/rstacklet.py @@ -72,6 +72,7 @@ def _freeze_(self): self.enabled = False + return False def enable(self): if not self.enabled: From noreply at buildbot.pypy.org Wed Feb 29 13:43:54 2012 From: noreply at buildbot.pypy.org (arigo) Date: Wed, 29 Feb 2012 13:43:54 +0100 (CET) Subject: [pypy-commit] pypy continulet-jit: hg merge default Message-ID: <20120229124354.E3FA98204C@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: continulet-jit Changeset: r53019:177cd8821288 Date: 2012-02-29 11:54 +0100 http://bitbucket.org/pypy/pypy/changeset/177cd8821288/ Log: hg merge default diff too long, truncating to 10000 out of 232489 lines diff --git a/.hgignore b/.hgignore --- a/.hgignore +++ b/.hgignore @@ -2,6 +2,9 @@ *.py[co] *~ .*.swp +.idea +.project +.pydevproject syntax: regexp ^testresult$ diff --git a/.hgtags b/.hgtags --- a/.hgtags +++ b/.hgtags @@ -1,3 +1,4 @@ b590cf6de4190623aad9aa698694c22e614d67b9 release-1.5 b48df0bf4e75b81d98f19ce89d4a7dc3e1dab5e5 benchmarked d8ac7d23d3ec5f9a0fa1264972f74a010dbfd07f release-1.6 +ff4af8f318821f7f5ca998613a60fca09aa137da release-1.7 diff --git a/LICENSE b/LICENSE --- a/LICENSE +++ b/LICENSE @@ -27,7 +27,7 @@ DEALINGS IN THE SOFTWARE. -PyPy Copyright holders 2003-2011 +PyPy Copyright holders 2003-2012 ----------------------------------- Except when otherwise stated (look for LICENSE files or information at @@ -37,43 +37,47 @@ Armin Rigo Maciej Fijalkowski Carl Friedrich Bolz + Amaury Forgeot d'Arc Antonio Cuni - Amaury Forgeot d'Arc Samuele Pedroni Michael Hudson Holger Krekel - Benjamin Peterson + Alex Gaynor Christian Tismer Hakan Ardo - Alex Gaynor + Benjamin Peterson + David Schneider Eric van Riet Paap Anders Chrigstrom - David Schneider Richard Emslie Dan Villiom Podlaski Christiansen Alexander Schremmer + Lukas Diekmann Aurelien Campeas Anders Lehmann Camillo Bruni Niklaus Haldimann + Sven Hager Leonardo Santagada Toon Verwaest Seo Sanghyeon + Justin Peel Lawrence Oluyede Bartosz Skowron Jakub Gustak Guido Wesdorp Daniel Roberts + Laura Creighton Adrien Di Mascio - Laura Creighton Ludovic Aubry Niko Matsakis + Wim Lavrijsen + Matti Picus Jason Creighton Jacob Hallen Alex Martelli Anders Hammarquist Jan de Mooij - Wim Lavrijsen Stephan Diehl Michael Foord Stefan Schwarzer @@ -84,34 +88,36 @@ Alexandre Fayolle Marius Gedminas Simon Burton - Justin Peel + David Edelsohn Jean-Paul Calderone John Witulski - Lukas Diekmann + Timo Paulssen holger krekel - Wim Lavrijsen Dario Bertini + Mark Pearse Andreas Stührk Jean-Philippe St. 
Pierre Guido van Rossum Pavel Vinogradov Valentino Volonghi Paul deGrandis + Ilya Osadchiy + Ronny Pfannschmidt Adrian Kuhn tav Georg Brandl + Philip Jenvey Gerald Klix Wanja Saatkamp - Ronny Pfannschmidt Boris Feigin Oscar Nierstrasz David Malcolm Eugene Oden Henry Mason - Sven Hager + Jeff Terrace Lukas Renggli - Ilya Osadchiy Guenter Jantzen + Ned Batchelder Bert Freudenberg Amit Regmi Ben Young @@ -142,7 +148,6 @@ Anders Qvist Beatrice During Alexander Sedov - Timo Paulssen Corbin Simpson Vincent Legoll Romain Guillebert @@ -165,9 +170,10 @@ Lucio Torre Lene Wagner Miguel de Val Borro + Artur Lisiecki + Bruno Gola Ignas Mikalajunas - Artur Lisiecki - Philip Jenvey + Stefano Rivera Joshua Gilbert Godefroid Chappelle Yusei Tahara @@ -179,17 +185,17 @@ Kristjan Valur Jonsson Bobby Impollonia Michael Hudson-Doyle + Laurence Tratt + Yasir Suhail Andrew Thompson Anders Sigfridsson Floris Bruynooghe Jacek Generowicz Dan Colish Zooko Wilcox-O Hearn - Dan Villiom Podlaski Christiansen - Anders Hammarquist + Dan Loewenherz Chris Lambacher Dinu Gherman - Dan Colish Brett Cannon Daniel Neuhäuser Michael Chermside diff --git a/ctypes_configure/cbuild.py b/ctypes_configure/cbuild.py --- a/ctypes_configure/cbuild.py +++ b/ctypes_configure/cbuild.py @@ -206,8 +206,9 @@ cfiles += eci.separate_module_files include_dirs = list(eci.include_dirs) library_dirs = list(eci.library_dirs) - if sys.platform == 'darwin': # support Fink & Darwinports - for s in ('/sw/', '/opt/local/'): + if (sys.platform == 'darwin' or # support Fink & Darwinports + sys.platform.startswith('freebsd')): + for s in ('/sw/', '/opt/local/', '/usr/local/'): if s + 'include' not in include_dirs and \ os.path.exists(s + 'include'): include_dirs.append(s + 'include') @@ -380,9 +381,9 @@ self.link_extra += ['-pthread'] if sys.platform == 'win32': self.link_extra += ['/DEBUG'] # generate .pdb file - if sys.platform == 'darwin': - # support Fink & Darwinports - for s in ('/sw/', '/opt/local/'): + if (sys.platform == 'darwin' or # support Fink & Darwinports + sys.platform.startswith('freebsd')): + for s in ('/sw/', '/opt/local/', '/usr/local/'): if s + 'include' not in self.include_dirs and \ os.path.exists(s + 'include'): self.include_dirs.append(s + 'include') @@ -395,7 +396,6 @@ self.outputfilename = py.path.local(cfilenames[0]).new(ext=ext) else: self.outputfilename = py.path.local(outputfilename) - self.eci = eci def build(self, noerr=False): basename = self.outputfilename.new(ext='') @@ -436,7 +436,7 @@ old = cfile.dirpath().chdir() try: res = compiler.compile([cfile.basename], - include_dirs=self.eci.include_dirs, + include_dirs=self.include_dirs, extra_preargs=self.compile_extra) assert len(res) == 1 cobjfile = py.path.local(res[0]) @@ -445,9 +445,9 @@ finally: old.chdir() compiler.link_executable(objects, str(self.outputfilename), - libraries=self.eci.libraries, + libraries=self.libraries, extra_preargs=self.link_extra, - library_dirs=self.eci.library_dirs) + library_dirs=self.library_dirs) def build_executable(*args, **kwds): noerr = kwds.pop('noerr', False) diff --git a/lib-python/2.7/BaseHTTPServer.py b/lib-python/2.7/BaseHTTPServer.py --- a/lib-python/2.7/BaseHTTPServer.py +++ b/lib-python/2.7/BaseHTTPServer.py @@ -310,7 +310,13 @@ """ try: - self.raw_requestline = self.rfile.readline() + self.raw_requestline = self.rfile.readline(65537) + if len(self.raw_requestline) > 65536: + self.requestline = '' + self.request_version = '' + self.command = '' + self.send_error(414) + return if not self.raw_requestline: self.close_connection 
= 1 return diff --git a/lib-python/2.7/ConfigParser.py b/lib-python/2.7/ConfigParser.py --- a/lib-python/2.7/ConfigParser.py +++ b/lib-python/2.7/ConfigParser.py @@ -545,6 +545,38 @@ if isinstance(val, list): options[name] = '\n'.join(val) +import UserDict as _UserDict + +class _Chainmap(_UserDict.DictMixin): + """Combine multiple mappings for successive lookups. + + For example, to emulate Python's normal lookup sequence: + + import __builtin__ + pylookup = _Chainmap(locals(), globals(), vars(__builtin__)) + """ + + def __init__(self, *maps): + self._maps = maps + + def __getitem__(self, key): + for mapping in self._maps: + try: + return mapping[key] + except KeyError: + pass + raise KeyError(key) + + def keys(self): + result = [] + seen = set() + for mapping in self_maps: + for key in mapping: + if key not in seen: + result.append(key) + seen.add(key) + return result + class ConfigParser(RawConfigParser): def get(self, section, option, raw=False, vars=None): @@ -559,16 +591,18 @@ The section DEFAULT is special. """ - d = self._defaults.copy() + sectiondict = {} try: - d.update(self._sections[section]) + sectiondict = self._sections[section] except KeyError: if section != DEFAULTSECT: raise NoSectionError(section) # Update with the entry specific variables + vardict = {} if vars: for key, value in vars.items(): - d[self.optionxform(key)] = value + vardict[self.optionxform(key)] = value + d = _Chainmap(vardict, sectiondict, self._defaults) option = self.optionxform(option) try: value = d[option] diff --git a/lib-python/2.7/Cookie.py b/lib-python/2.7/Cookie.py --- a/lib-python/2.7/Cookie.py +++ b/lib-python/2.7/Cookie.py @@ -258,6 +258,11 @@ '\033' : '\\033', '\034' : '\\034', '\035' : '\\035', '\036' : '\\036', '\037' : '\\037', + # Because of the way browsers really handle cookies (as opposed + # to what the RFC says) we also encode , and ; + + ',' : '\\054', ';' : '\\073', + '"' : '\\"', '\\' : '\\\\', '\177' : '\\177', '\200' : '\\200', '\201' : '\\201', diff --git a/lib-python/2.7/HTMLParser.py b/lib-python/2.7/HTMLParser.py --- a/lib-python/2.7/HTMLParser.py +++ b/lib-python/2.7/HTMLParser.py @@ -26,7 +26,7 @@ tagfind = re.compile('[a-zA-Z][-.a-zA-Z0-9:_]*') attrfind = re.compile( r'\s*([a-zA-Z_][-.:a-zA-Z_0-9]*)(\s*=\s*' - r'(\'[^\']*\'|"[^"]*"|[-a-zA-Z0-9./,:;+*%?!&$\(\)_#=~@]*))?') + r'(\'[^\']*\'|"[^"]*"|[^\s"\'=<>`]*))?') locatestarttagend = re.compile(r""" <[a-zA-Z][-.a-zA-Z0-9:_]* # tag name @@ -99,7 +99,7 @@ markupbase.ParserBase.reset(self) def feed(self, data): - """Feed data to the parser. + r"""Feed data to the parser. Call this as often as you want, with as little or as much text as you want (may include '\n'). 
@@ -367,13 +367,16 @@ return s def replaceEntities(s): s = s.groups()[0] - if s[0] == "#": - s = s[1:] - if s[0] in ['x','X']: - c = int(s[1:], 16) - else: - c = int(s) - return unichr(c) + try: + if s[0] == "#": + s = s[1:] + if s[0] in ['x','X']: + c = int(s[1:], 16) + else: + c = int(s) + return unichr(c) + except ValueError: + return '&#'+s+';' else: # Cannot use name2codepoint directly, because HTMLParser supports apos, # which is not part of HTML 4 diff --git a/lib-python/2.7/SimpleHTTPServer.py b/lib-python/2.7/SimpleHTTPServer.py --- a/lib-python/2.7/SimpleHTTPServer.py +++ b/lib-python/2.7/SimpleHTTPServer.py @@ -15,6 +15,7 @@ import BaseHTTPServer import urllib import cgi +import sys import shutil import mimetypes try: @@ -131,7 +132,8 @@ length = f.tell() f.seek(0) self.send_response(200) - self.send_header("Content-type", "text/html") + encoding = sys.getfilesystemencoding() + self.send_header("Content-type", "text/html; charset=%s" % encoding) self.send_header("Content-Length", str(length)) self.end_headers() return f diff --git a/lib-python/2.7/SimpleXMLRPCServer.py b/lib-python/2.7/SimpleXMLRPCServer.py --- a/lib-python/2.7/SimpleXMLRPCServer.py +++ b/lib-python/2.7/SimpleXMLRPCServer.py @@ -246,7 +246,7 @@ marshalled data. For backwards compatibility, a dispatch function can be provided as an argument (see comment in SimpleXMLRPCRequestHandler.do_POST) but overriding the - existing method through subclassing is the prefered means + existing method through subclassing is the preferred means of changing method dispatch behavior. """ diff --git a/lib-python/2.7/SocketServer.py b/lib-python/2.7/SocketServer.py --- a/lib-python/2.7/SocketServer.py +++ b/lib-python/2.7/SocketServer.py @@ -675,7 +675,7 @@ # A timeout to apply to the request socket, if not None. timeout = None - # Disable nagle algoritm for this socket, if True. + # Disable nagle algorithm for this socket, if True. # Use only when wbufsize != 0, to avoid small packets. disable_nagle_algorithm = False diff --git a/lib-python/2.7/StringIO.py b/lib-python/2.7/StringIO.py --- a/lib-python/2.7/StringIO.py +++ b/lib-python/2.7/StringIO.py @@ -266,6 +266,7 @@ 8th bit) will cause a UnicodeError to be raised when getvalue() is called. """ + _complain_ifclosed(self.closed) if self.buflist: self.buf += ''.join(self.buflist) self.buflist = [] diff --git a/lib-python/2.7/_abcoll.py b/lib-python/2.7/_abcoll.py --- a/lib-python/2.7/_abcoll.py +++ b/lib-python/2.7/_abcoll.py @@ -82,7 +82,7 @@ @classmethod def __subclasshook__(cls, C): if cls is Iterator: - if _hasattr(C, "next"): + if _hasattr(C, "next") and _hasattr(C, "__iter__"): return True return NotImplemented diff --git a/lib-python/2.7/_pyio.py b/lib-python/2.7/_pyio.py --- a/lib-python/2.7/_pyio.py +++ b/lib-python/2.7/_pyio.py @@ -16,6 +16,7 @@ import io from io import (__all__, SEEK_SET, SEEK_CUR, SEEK_END) +from errno import EINTR __metaclass__ = type @@ -559,7 +560,11 @@ if not data: break res += data - return bytes(res) + if res: + return bytes(res) + else: + # b'' or None + return data def readinto(self, b): """Read up to len(b) bytes into b. 
@@ -678,7 +683,7 @@ """ def __init__(self, raw): - self.raw = raw + self._raw = raw ### Positioning ### @@ -722,8 +727,8 @@ if self.raw is None: raise ValueError("raw stream already detached") self.flush() - raw = self.raw - self.raw = None + raw = self._raw + self._raw = None return raw ### Inquiries ### @@ -738,6 +743,10 @@ return self.raw.writable() @property + def raw(self): + return self._raw + + @property def closed(self): return self.raw.closed @@ -933,7 +942,12 @@ current_size = 0 while True: # Read until EOF or until read() would block. - chunk = self.raw.read() + try: + chunk = self.raw.read() + except IOError as e: + if e.errno != EINTR: + raise + continue if chunk in empty_values: nodata_val = chunk break @@ -952,7 +966,12 @@ chunks = [buf[pos:]] wanted = max(self.buffer_size, n) while avail < n: - chunk = self.raw.read(wanted) + try: + chunk = self.raw.read(wanted) + except IOError as e: + if e.errno != EINTR: + raise + continue if chunk in empty_values: nodata_val = chunk break @@ -981,7 +1000,14 @@ have = len(self._read_buf) - self._read_pos if have < want or have <= 0: to_read = self.buffer_size - have - current = self.raw.read(to_read) + while True: + try: + current = self.raw.read(to_read) + except IOError as e: + if e.errno != EINTR: + raise + continue + break if current: self._read_buf = self._read_buf[self._read_pos:] + current self._read_pos = 0 @@ -1088,7 +1114,12 @@ written = 0 try: while self._write_buf: - n = self.raw.write(self._write_buf) + try: + n = self.raw.write(self._write_buf) + except IOError as e: + if e.errno != EINTR: + raise + continue if n > len(self._write_buf) or n < 0: raise IOError("write() returned incorrect number of bytes") del self._write_buf[:n] @@ -1456,7 +1487,7 @@ if not isinstance(errors, basestring): raise ValueError("invalid errors: %r" % errors) - self.buffer = buffer + self._buffer = buffer self._line_buffering = line_buffering self._encoding = encoding self._errors = errors @@ -1511,6 +1542,10 @@ def line_buffering(self): return self._line_buffering + @property + def buffer(self): + return self._buffer + def seekable(self): return self._seekable @@ -1724,8 +1759,8 @@ if self.buffer is None: raise ValueError("buffer is already detached") self.flush() - buffer = self.buffer - self.buffer = None + buffer = self._buffer + self._buffer = None return buffer def seek(self, cookie, whence=0): diff --git a/lib-python/2.7/_weakrefset.py b/lib-python/2.7/_weakrefset.py --- a/lib-python/2.7/_weakrefset.py +++ b/lib-python/2.7/_weakrefset.py @@ -66,7 +66,11 @@ return sum(x() is not None for x in self.data) def __contains__(self, item): - return ref(item) in self.data + try: + wr = ref(item) + except TypeError: + return False + return wr in self.data def __reduce__(self): return (self.__class__, (list(self),), diff --git a/lib-python/2.7/anydbm.py b/lib-python/2.7/anydbm.py --- a/lib-python/2.7/anydbm.py +++ b/lib-python/2.7/anydbm.py @@ -29,17 +29,8 @@ list = d.keys() # return a list of all existing keys (slow!) Future versions may change the order in which implementations are -tested for existence, add interfaces to other dbm-like +tested for existence, and add interfaces to other dbm-like implementations. - -The open function has an optional second argument. This can be 'r', -for read-only access, 'w', for read-write access of an existing -database, 'c' for read-write access to a new or existing database, and -'n' for read-write access to a new database. The default is 'r'. 
- -Note: 'r' and 'w' fail if the database doesn't exist; 'c' creates it -only if it doesn't exist; and 'n' always creates a new database. - """ class error(Exception): @@ -63,7 +54,18 @@ error = tuple(_errors) -def open(file, flag = 'r', mode = 0666): +def open(file, flag='r', mode=0666): + """Open or create database at path given by *file*. + + Optional argument *flag* can be 'r' (default) for read-only access, 'w' + for read-write access of an existing database, 'c' for read-write access + to a new or existing database, and 'n' for read-write access to a new + database. + + Note: 'r' and 'w' fail if the database doesn't exist; 'c' creates it + only if it doesn't exist; and 'n' always creates a new database. + """ + # guess the type of an existing database from whichdb import whichdb result=whichdb(file) diff --git a/lib-python/2.7/argparse.py b/lib-python/2.7/argparse.py --- a/lib-python/2.7/argparse.py +++ b/lib-python/2.7/argparse.py @@ -82,6 +82,7 @@ ] +import collections as _collections import copy as _copy import os as _os import re as _re @@ -1037,7 +1038,7 @@ self._prog_prefix = prog self._parser_class = parser_class - self._name_parser_map = {} + self._name_parser_map = _collections.OrderedDict() self._choices_actions = [] super(_SubParsersAction, self).__init__( @@ -1080,7 +1081,7 @@ parser = self._name_parser_map[parser_name] except KeyError: tup = parser_name, ', '.join(self._name_parser_map) - msg = _('unknown parser %r (choices: %s)' % tup) + msg = _('unknown parser %r (choices: %s)') % tup raise ArgumentError(self, msg) # parse all the remaining options into the namespace @@ -1109,7 +1110,7 @@ the builtin open() function. """ - def __init__(self, mode='r', bufsize=None): + def __init__(self, mode='r', bufsize=-1): self._mode = mode self._bufsize = bufsize @@ -1121,18 +1122,19 @@ elif 'w' in self._mode: return _sys.stdout else: - msg = _('argument "-" with mode %r' % self._mode) + msg = _('argument "-" with mode %r') % self._mode raise ValueError(msg) # all other arguments are used as file names - if self._bufsize: + try: return open(string, self._mode, self._bufsize) - else: - return open(string, self._mode) + except IOError as e: + message = _("can't open '%s': %s") + raise ArgumentTypeError(message % (string, e)) def __repr__(self): - args = [self._mode, self._bufsize] - args_str = ', '.join([repr(arg) for arg in args if arg is not None]) + args = self._mode, self._bufsize + args_str = ', '.join(repr(arg) for arg in args if arg != -1) return '%s(%s)' % (type(self).__name__, args_str) # =========================== @@ -1275,13 +1277,20 @@ # create the action object, and add it to the parser action_class = self._pop_action_class(kwargs) if not _callable(action_class): - raise ValueError('unknown action "%s"' % action_class) + raise ValueError('unknown action "%s"' % (action_class,)) action = action_class(**kwargs) # raise an error if the action type is not callable type_func = self._registry_get('type', action.type, action.type) if not _callable(type_func): - raise ValueError('%r is not callable' % type_func) + raise ValueError('%r is not callable' % (type_func,)) + + # raise an error if the metavar does not match the type + if hasattr(self, "_get_formatter"): + try: + self._get_formatter()._format_args(action, None) + except TypeError: + raise ValueError("length of metavar tuple does not match nargs") return self._add_action(action) @@ -1481,6 +1490,7 @@ self._defaults = container._defaults self._has_negative_number_optionals = \ container._has_negative_number_optionals + 
self._mutually_exclusive_groups = container._mutually_exclusive_groups def _add_action(self, action): action = super(_ArgumentGroup, self)._add_action(action) diff --git a/lib-python/2.7/ast.py b/lib-python/2.7/ast.py --- a/lib-python/2.7/ast.py +++ b/lib-python/2.7/ast.py @@ -29,12 +29,12 @@ from _ast import __version__ -def parse(expr, filename='', mode='exec'): +def parse(source, filename='', mode='exec'): """ - Parse an expression into an AST node. - Equivalent to compile(expr, filename, mode, PyCF_ONLY_AST). + Parse the source into an AST node. + Equivalent to compile(source, filename, mode, PyCF_ONLY_AST). """ - return compile(expr, filename, mode, PyCF_ONLY_AST) + return compile(source, filename, mode, PyCF_ONLY_AST) def literal_eval(node_or_string): @@ -152,8 +152,6 @@ Increment the line number of each node in the tree starting at *node* by *n*. This is useful to "move code" to a different location in a file. """ - if 'lineno' in node._attributes: - node.lineno = getattr(node, 'lineno', 0) + n for child in walk(node): if 'lineno' in child._attributes: child.lineno = getattr(child, 'lineno', 0) + n @@ -204,9 +202,9 @@ def walk(node): """ - Recursively yield all child nodes of *node*, in no specified order. This is - useful if you only want to modify nodes in place and don't care about the - context. + Recursively yield all descendant nodes in the tree starting at *node* + (including *node* itself), in no specified order. This is useful if you + only want to modify nodes in place and don't care about the context. """ from collections import deque todo = deque([node]) diff --git a/lib-python/2.7/asyncore.py b/lib-python/2.7/asyncore.py --- a/lib-python/2.7/asyncore.py +++ b/lib-python/2.7/asyncore.py @@ -54,7 +54,11 @@ import os from errno import EALREADY, EINPROGRESS, EWOULDBLOCK, ECONNRESET, EINVAL, \ - ENOTCONN, ESHUTDOWN, EINTR, EISCONN, EBADF, ECONNABORTED, errorcode + ENOTCONN, ESHUTDOWN, EINTR, EISCONN, EBADF, ECONNABORTED, EPIPE, EAGAIN, \ + errorcode + +_DISCONNECTED = frozenset((ECONNRESET, ENOTCONN, ESHUTDOWN, ECONNABORTED, EPIPE, + EBADF)) try: socket_map @@ -109,7 +113,7 @@ if flags & (select.POLLHUP | select.POLLERR | select.POLLNVAL): obj.handle_close() except socket.error, e: - if e.args[0] not in (EBADF, ECONNRESET, ENOTCONN, ESHUTDOWN, ECONNABORTED): + if e.args[0] not in _DISCONNECTED: obj.handle_error() else: obj.handle_close() @@ -353,7 +357,7 @@ except TypeError: return None except socket.error as why: - if why.args[0] in (EWOULDBLOCK, ECONNABORTED): + if why.args[0] in (EWOULDBLOCK, ECONNABORTED, EAGAIN): return None else: raise @@ -367,7 +371,7 @@ except socket.error, why: if why.args[0] == EWOULDBLOCK: return 0 - elif why.args[0] in (ECONNRESET, ENOTCONN, ESHUTDOWN, ECONNABORTED): + elif why.args[0] in _DISCONNECTED: self.handle_close() return 0 else: @@ -385,7 +389,7 @@ return data except socket.error, why: # winsock sometimes throws ENOTCONN - if why.args[0] in [ECONNRESET, ENOTCONN, ESHUTDOWN, ECONNABORTED]: + if why.args[0] in _DISCONNECTED: self.handle_close() return '' else: diff --git a/lib-python/2.7/bdb.py b/lib-python/2.7/bdb.py --- a/lib-python/2.7/bdb.py +++ b/lib-python/2.7/bdb.py @@ -250,6 +250,12 @@ list.append(lineno) bp = Breakpoint(filename, lineno, temporary, cond, funcname) + def _prune_breaks(self, filename, lineno): + if (filename, lineno) not in Breakpoint.bplist: + self.breaks[filename].remove(lineno) + if not self.breaks[filename]: + del self.breaks[filename] + def clear_break(self, filename, lineno): filename = self.canonic(filename) 
if not filename in self.breaks: @@ -261,10 +267,7 @@ # pair, then remove the breaks entry for bp in Breakpoint.bplist[filename, lineno][:]: bp.deleteMe() - if (filename, lineno) not in Breakpoint.bplist: - self.breaks[filename].remove(lineno) - if not self.breaks[filename]: - del self.breaks[filename] + self._prune_breaks(filename, lineno) def clear_bpbynumber(self, arg): try: @@ -277,7 +280,8 @@ return 'Breakpoint number (%d) out of range' % number if not bp: return 'Breakpoint (%d) already deleted' % number - self.clear_break(bp.file, bp.line) + bp.deleteMe() + self._prune_breaks(bp.file, bp.line) def clear_all_file_breaks(self, filename): filename = self.canonic(filename) diff --git a/lib-python/2.7/collections.py b/lib-python/2.7/collections.py --- a/lib-python/2.7/collections.py +++ b/lib-python/2.7/collections.py @@ -6,59 +6,38 @@ __all__ += _abcoll.__all__ from _collections import deque, defaultdict -from operator import itemgetter as _itemgetter, eq as _eq +from operator import itemgetter as _itemgetter from keyword import iskeyword as _iskeyword import sys as _sys import heapq as _heapq -from itertools import repeat as _repeat, chain as _chain, starmap as _starmap, \ - ifilter as _ifilter, imap as _imap +from itertools import repeat as _repeat, chain as _chain, starmap as _starmap + try: - from thread import get_ident + from thread import get_ident as _get_ident except ImportError: - from dummy_thread import get_ident - -def _recursive_repr(user_function): - 'Decorator to make a repr function return "..." for a recursive call' - repr_running = set() - - def wrapper(self): - key = id(self), get_ident() - if key in repr_running: - return '...' - repr_running.add(key) - try: - result = user_function(self) - finally: - repr_running.discard(key) - return result - - # Can't use functools.wraps() here because of bootstrap issues - wrapper.__module__ = getattr(user_function, '__module__') - wrapper.__doc__ = getattr(user_function, '__doc__') - wrapper.__name__ = getattr(user_function, '__name__') - return wrapper + from dummy_thread import get_ident as _get_ident ################################################################################ ### OrderedDict ################################################################################ -class OrderedDict(dict, MutableMapping): +class OrderedDict(dict): 'Dictionary that remembers insertion order' # An inherited dict maps keys to values. # The inherited dict provides __getitem__, __len__, __contains__, and get. # The remaining methods are order-aware. - # Big-O running times for all methods are the same as for regular dictionaries. + # Big-O running times for all methods are the same as regular dictionaries. - # The internal self.__map dictionary maps keys to links in a doubly linked list. + # The internal self.__map dict maps keys to links in a doubly linked list. # The circular doubly linked list starts and ends with a sentinel element. # The sentinel element never gets deleted (this simplifies the algorithm). # Each link is stored as a list of length three: [PREV, NEXT, KEY]. def __init__(self, *args, **kwds): - '''Initialize an ordered dictionary. Signature is the same as for - regular dictionaries, but keyword arguments are not recommended - because their insertion order is arbitrary. + '''Initialize an ordered dictionary. The signature is the same as + regular dictionaries, but keyword arguments are not recommended because + their insertion order is arbitrary. 
''' if len(args) > 1: @@ -66,17 +45,15 @@ try: self.__root except AttributeError: - self.__root = root = [None, None, None] # sentinel node - PREV = 0 - NEXT = 1 - root[PREV] = root[NEXT] = root + self.__root = root = [] # sentinel node + root[:] = [root, root, None] self.__map = {} - self.update(*args, **kwds) + self.__update(*args, **kwds) def __setitem__(self, key, value, PREV=0, NEXT=1, dict_setitem=dict.__setitem__): 'od.__setitem__(i, y) <==> od[i]=y' - # Setting a new item creates a new link which goes at the end of the linked - # list, and the inherited dictionary is updated with the new key/value pair. + # Setting a new item creates a new link at the end of the linked list, + # and the inherited dictionary is updated with the new key/value pair. if key not in self: root = self.__root last = root[PREV] @@ -85,65 +62,160 @@ def __delitem__(self, key, PREV=0, NEXT=1, dict_delitem=dict.__delitem__): 'od.__delitem__(y) <==> del od[y]' - # Deleting an existing item uses self.__map to find the link which is - # then removed by updating the links in the predecessor and successor nodes. + # Deleting an existing item uses self.__map to find the link which gets + # removed by updating the links in the predecessor and successor nodes. dict_delitem(self, key) - link = self.__map.pop(key) - link_prev = link[PREV] - link_next = link[NEXT] + link_prev, link_next, key = self.__map.pop(key) link_prev[NEXT] = link_next link_next[PREV] = link_prev - def __iter__(self, NEXT=1, KEY=2): + def __iter__(self): 'od.__iter__() <==> iter(od)' # Traverse the linked list in order. + NEXT, KEY = 1, 2 root = self.__root curr = root[NEXT] while curr is not root: yield curr[KEY] curr = curr[NEXT] - def __reversed__(self, PREV=0, KEY=2): + def __reversed__(self): 'od.__reversed__() <==> reversed(od)' # Traverse the linked list in reverse order. + PREV, KEY = 0, 2 root = self.__root curr = root[PREV] while curr is not root: yield curr[KEY] curr = curr[PREV] + def clear(self): + 'od.clear() -> None. Remove all items from od.' + for node in self.__map.itervalues(): + del node[:] + root = self.__root + root[:] = [root, root, None] + self.__map.clear() + dict.clear(self) + + # -- the following methods do not depend on the internal structure -- + + def keys(self): + 'od.keys() -> list of keys in od' + return list(self) + + def values(self): + 'od.values() -> list of values in od' + return [self[key] for key in self] + + def items(self): + 'od.items() -> list of (key, value) pairs in od' + return [(key, self[key]) for key in self] + + def iterkeys(self): + 'od.iterkeys() -> an iterator over the keys in od' + return iter(self) + + def itervalues(self): + 'od.itervalues -> an iterator over the values in od' + for k in self: + yield self[k] + + def iteritems(self): + 'od.iteritems -> an iterator over the (key, value) pairs in od' + for k in self: + yield (k, self[k]) + + update = MutableMapping.update + + __update = update # let subclasses override update without breaking __init__ + + __marker = object() + + def pop(self, key, default=__marker): + '''od.pop(k[,d]) -> v, remove specified key and return the corresponding + value. If key is not found, d is returned if given, otherwise KeyError + is raised. 
+ + ''' + if key in self: + result = self[key] + del self[key] + return result + if default is self.__marker: + raise KeyError(key) + return default + + def setdefault(self, key, default=None): + 'od.setdefault(k[,d]) -> od.get(k,d), also set od[k]=d if k not in od' + if key in self: + return self[key] + self[key] = default + return default + + def popitem(self, last=True): + '''od.popitem() -> (k, v), return and remove a (key, value) pair. + Pairs are returned in LIFO order if last is true or FIFO order if false. + + ''' + if not self: + raise KeyError('dictionary is empty') + key = next(reversed(self) if last else iter(self)) + value = self.pop(key) + return key, value + + def __repr__(self, _repr_running={}): + 'od.__repr__() <==> repr(od)' + call_key = id(self), _get_ident() + if call_key in _repr_running: + return '...' + _repr_running[call_key] = 1 + try: + if not self: + return '%s()' % (self.__class__.__name__,) + return '%s(%r)' % (self.__class__.__name__, self.items()) + finally: + del _repr_running[call_key] + def __reduce__(self): 'Return state information for pickling' items = [[k, self[k]] for k in self] - tmp = self.__map, self.__root - del self.__map, self.__root inst_dict = vars(self).copy() - self.__map, self.__root = tmp + for k in vars(OrderedDict()): + inst_dict.pop(k, None) if inst_dict: return (self.__class__, (items,), inst_dict) return self.__class__, (items,) - def clear(self): - 'od.clear() -> None. Remove all items from od.' - try: - for node in self.__map.itervalues(): - del node[:] - self.__root[:] = [self.__root, self.__root, None] - self.__map.clear() - except AttributeError: - pass - dict.clear(self) + def copy(self): + 'od.copy() -> a shallow copy of od' + return self.__class__(self) - setdefault = MutableMapping.setdefault - update = MutableMapping.update - pop = MutableMapping.pop - keys = MutableMapping.keys - values = MutableMapping.values - items = MutableMapping.items - iterkeys = MutableMapping.iterkeys - itervalues = MutableMapping.itervalues - iteritems = MutableMapping.iteritems - __ne__ = MutableMapping.__ne__ + @classmethod + def fromkeys(cls, iterable, value=None): + '''OD.fromkeys(S[, v]) -> New ordered dictionary with keys from S. + If not specified, the value defaults to None. + + ''' + self = cls() + for key in iterable: + self[key] = value + return self + + def __eq__(self, other): + '''od.__eq__(y) <==> od==y. Comparison to another OD is order-sensitive + while comparison to a regular mapping is order-insensitive. + + ''' + if isinstance(other, OrderedDict): + return len(self)==len(other) and self.items() == other.items() + return dict.__eq__(self, other) + + def __ne__(self, other): + 'od.__ne__(y) <==> od!=y' + return not self == other + + # -- the following methods support python 3.x style dictionary views -- def viewkeys(self): "od.viewkeys() -> a set-like object providing a view on od's keys" @@ -157,49 +229,6 @@ "od.viewitems() -> a set-like object providing a view on od's items" return ItemsView(self) - def popitem(self, last=True): - '''od.popitem() -> (k, v), return and remove a (key, value) pair. - Pairs are returned in LIFO order if last is true or FIFO order if false. 
- - ''' - if not self: - raise KeyError('dictionary is empty') - key = next(reversed(self) if last else iter(self)) - value = self.pop(key) - return key, value - - @_recursive_repr - def __repr__(self): - 'od.__repr__() <==> repr(od)' - if not self: - return '%s()' % (self.__class__.__name__,) - return '%s(%r)' % (self.__class__.__name__, self.items()) - - def copy(self): - 'od.copy() -> a shallow copy of od' - return self.__class__(self) - - @classmethod - def fromkeys(cls, iterable, value=None): - '''OD.fromkeys(S[, v]) -> New ordered dictionary with keys from S - and values equal to v (which defaults to None). - - ''' - d = cls() - for key in iterable: - d[key] = value - return d - - def __eq__(self, other): - '''od.__eq__(y) <==> od==y. Comparison to another OD is order-sensitive - while comparison to a regular mapping is order-insensitive. - - ''' - if isinstance(other, OrderedDict): - return len(self)==len(other) and \ - all(_imap(_eq, self.iteritems(), other.iteritems())) - return dict.__eq__(self, other) - ################################################################################ ### namedtuple @@ -328,16 +357,16 @@ or multiset. Elements are stored as dictionary keys and their counts are stored as dictionary values. - >>> c = Counter('abracadabra') # count elements from a string + >>> c = Counter('abcdeabcdabcaba') # count elements from a string >>> c.most_common(3) # three most common elements - [('a', 5), ('r', 2), ('b', 2)] + [('a', 5), ('b', 4), ('c', 3)] >>> sorted(c) # list all unique elements - ['a', 'b', 'c', 'd', 'r'] + ['a', 'b', 'c', 'd', 'e'] >>> ''.join(sorted(c.elements())) # list elements with repetitions - 'aaaaabbcdrr' + 'aaaaabbbbcccdde' >>> sum(c.values()) # total of all counts - 11 + 15 >>> c['a'] # count of letter 'a' 5 @@ -345,8 +374,8 @@ ... c[elem] += 1 # by adding 1 to each element's count >>> c['a'] # now there are seven 'a' 7 - >>> del c['r'] # remove all 'r' - >>> c['r'] # now there are zero 'r' + >>> del c['b'] # remove all 'b' + >>> c['b'] # now there are zero 'b' 0 >>> d = Counter('simsalabim') # make another counter @@ -385,6 +414,7 @@ >>> c = Counter(a=4, b=2) # a new counter from keyword args ''' + super(Counter, self).__init__() self.update(iterable, **kwds) def __missing__(self, key): @@ -396,8 +426,8 @@ '''List the n most common elements and their counts from the most common to the least. If n is None, then list all element counts. - >>> Counter('abracadabra').most_common(3) - [('a', 5), ('r', 2), ('b', 2)] + >>> Counter('abcdeabcdabcaba').most_common(3) + [('a', 5), ('b', 4), ('c', 3)] ''' # Emulate Bag.sortedByCount from Smalltalk @@ -463,7 +493,7 @@ for elem, count in iterable.iteritems(): self[elem] = self_get(elem, 0) + count else: - dict.update(self, iterable) # fast path when counter is empty + super(Counter, self).update(iterable) # fast path when counter is empty else: self_get = self.get for elem in iterable: @@ -499,13 +529,16 @@ self.subtract(kwds) def copy(self): - 'Like dict.copy() but returns a Counter instance instead of a dict.' - return Counter(self) + 'Return a shallow copy.' + return self.__class__(self) + + def __reduce__(self): + return self.__class__, (dict(self),) def __delitem__(self, elem): 'Like dict.__delitem__() but does not raise KeyError for missing values.' 
if elem in self: - dict.__delitem__(self, elem) + super(Counter, self).__delitem__(elem) def __repr__(self): if not self: @@ -532,10 +565,13 @@ if not isinstance(other, Counter): return NotImplemented result = Counter() - for elem in set(self) | set(other): - newcount = self[elem] + other[elem] + for elem, count in self.items(): + newcount = count + other[elem] if newcount > 0: result[elem] = newcount + for elem, count in other.items(): + if elem not in self and count > 0: + result[elem] = count return result def __sub__(self, other): @@ -548,10 +584,13 @@ if not isinstance(other, Counter): return NotImplemented result = Counter() - for elem in set(self) | set(other): - newcount = self[elem] - other[elem] + for elem, count in self.items(): + newcount = count - other[elem] if newcount > 0: result[elem] = newcount + for elem, count in other.items(): + if elem not in self and count < 0: + result[elem] = 0 - count return result def __or__(self, other): @@ -564,11 +603,14 @@ if not isinstance(other, Counter): return NotImplemented result = Counter() - for elem in set(self) | set(other): - p, q = self[elem], other[elem] - newcount = q if p < q else p + for elem, count in self.items(): + other_count = other[elem] + newcount = other_count if count < other_count else count if newcount > 0: result[elem] = newcount + for elem, count in other.items(): + if elem not in self and count > 0: + result[elem] = count return result def __and__(self, other): @@ -581,11 +623,9 @@ if not isinstance(other, Counter): return NotImplemented result = Counter() - if len(self) < len(other): - self, other = other, self - for elem in _ifilter(self.__contains__, other): - p, q = self[elem], other[elem] - newcount = p if p < q else q + for elem, count in self.items(): + other_count = other[elem] + newcount = count if count < other_count else other_count if newcount > 0: result[elem] = newcount return result diff --git a/lib-python/2.7/compileall.py b/lib-python/2.7/compileall.py --- a/lib-python/2.7/compileall.py +++ b/lib-python/2.7/compileall.py @@ -9,7 +9,6 @@ packages -- for now, you'll have to deal with packages separately.) See module py_compile for details of the actual byte-compilation. - """ import os import sys @@ -31,7 +30,6 @@ directory name that will show up in error messages) force: if 1, force compilation, even if timestamps are up-to-date quiet: if 1, be quiet during compilation - """ if not quiet: print 'Listing', dir, '...' @@ -61,15 +59,16 @@ return success def compile_file(fullname, ddir=None, force=0, rx=None, quiet=0): - """Byte-compile file. - file: the file to byte-compile + """Byte-compile one file. + + Arguments (only fullname is required): + + fullname: the file to byte-compile ddir: if given, purported directory name (this is the directory name that will show up in error messages) force: if 1, force compilation, even if timestamps are up-to-date quiet: if 1, be quiet during compilation - """ - success = 1 name = os.path.basename(fullname) if ddir is not None: @@ -120,7 +119,6 @@ maxlevels: max recursion level (default 0) force: as for compile_dir() (default 0) quiet: as for compile_dir() (default 0) - """ success = 1 for dir in sys.path: diff --git a/lib-python/2.7/csv.py b/lib-python/2.7/csv.py --- a/lib-python/2.7/csv.py +++ b/lib-python/2.7/csv.py @@ -281,7 +281,7 @@ an all or nothing approach, so we allow for small variations in this number. 1) build a table of the frequency of each character on every line. 
- 2) build a table of freqencies of this frequency (meta-frequency?), + 2) build a table of frequencies of this frequency (meta-frequency?), e.g. 'x occurred 5 times in 10 rows, 6 times in 1000 rows, 7 times in 2 rows' 3) use the mode of the meta-frequency to determine the /expected/ diff --git a/lib-python/2.7/ctypes/test/test_arrays.py b/lib-python/2.7/ctypes/test/test_arrays.py --- a/lib-python/2.7/ctypes/test/test_arrays.py +++ b/lib-python/2.7/ctypes/test/test_arrays.py @@ -37,7 +37,7 @@ values = [ia[i] for i in range(len(init))] self.assertEqual(values, [0] * len(init)) - # Too many in itializers should be caught + # Too many initializers should be caught self.assertRaises(IndexError, int_array, *range(alen*2)) CharArray = ARRAY(c_char, 3) diff --git a/lib-python/2.7/ctypes/test/test_as_parameter.py b/lib-python/2.7/ctypes/test/test_as_parameter.py --- a/lib-python/2.7/ctypes/test/test_as_parameter.py +++ b/lib-python/2.7/ctypes/test/test_as_parameter.py @@ -187,6 +187,18 @@ self.assertEqual((s8i.a, s8i.b, s8i.c, s8i.d, s8i.e, s8i.f, s8i.g, s8i.h), (9*2, 8*3, 7*4, 6*5, 5*6, 4*7, 3*8, 2*9)) + def test_recursive_as_param(self): + from ctypes import c_int + + class A(object): + pass + + a = A() + a._as_parameter_ = a + with self.assertRaises(RuntimeError): + c_int.from_param(a) + + #~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ class AsParamWrapper(object): diff --git a/lib-python/2.7/ctypes/test/test_callbacks.py b/lib-python/2.7/ctypes/test/test_callbacks.py --- a/lib-python/2.7/ctypes/test/test_callbacks.py +++ b/lib-python/2.7/ctypes/test/test_callbacks.py @@ -206,6 +206,42 @@ windll.user32.EnumWindows(EnumWindowsCallbackFunc, 0) + def test_callback_register_int(self): + # Issue #8275: buggy handling of callback args under Win64 + # NOTE: should be run on release builds as well + dll = CDLL(_ctypes_test.__file__) + CALLBACK = CFUNCTYPE(c_int, c_int, c_int, c_int, c_int, c_int) + # All this function does is call the callback with its args squared + func = dll._testfunc_cbk_reg_int + func.argtypes = (c_int, c_int, c_int, c_int, c_int, CALLBACK) + func.restype = c_int + + def callback(a, b, c, d, e): + return a + b + c + d + e + + result = func(2, 3, 4, 5, 6, CALLBACK(callback)) + self.assertEqual(result, callback(2*2, 3*3, 4*4, 5*5, 6*6)) + + def test_callback_register_double(self): + # Issue #8275: buggy handling of callback args under Win64 + # NOTE: should be run on release builds as well + dll = CDLL(_ctypes_test.__file__) + CALLBACK = CFUNCTYPE(c_double, c_double, c_double, c_double, + c_double, c_double) + # All this function does is call the callback with its args squared + func = dll._testfunc_cbk_reg_double + func.argtypes = (c_double, c_double, c_double, + c_double, c_double, CALLBACK) + func.restype = c_double + + def callback(a, b, c, d, e): + return a + b + c + d + e + + result = func(1.1, 2.2, 3.3, 4.4, 5.5, CALLBACK(callback)) + self.assertEqual(result, + callback(1.1*1.1, 2.2*2.2, 3.3*3.3, 4.4*4.4, 5.5*5.5)) + + ################################################################ if __name__ == '__main__': diff --git a/lib-python/2.7/ctypes/test/test_functions.py b/lib-python/2.7/ctypes/test/test_functions.py --- a/lib-python/2.7/ctypes/test/test_functions.py +++ b/lib-python/2.7/ctypes/test/test_functions.py @@ -116,7 +116,7 @@ self.assertEqual(result, 21) self.assertEqual(type(result), int) - # You cannot assing character format codes as restype any longer + # You cannot assign character format codes as restype any longer self.assertRaises(TypeError, setattr, f, 
"restype", "i") def test_floatresult(self): diff --git a/lib-python/2.7/ctypes/test/test_init.py b/lib-python/2.7/ctypes/test/test_init.py --- a/lib-python/2.7/ctypes/test/test_init.py +++ b/lib-python/2.7/ctypes/test/test_init.py @@ -27,7 +27,7 @@ self.assertEqual((y.x.a, y.x.b), (0, 0)) self.assertEqual(y.x.new_was_called, False) - # But explicitely creating an X structure calls __new__ and __init__, of course. + # But explicitly creating an X structure calls __new__ and __init__, of course. x = X() self.assertEqual((x.a, x.b), (9, 12)) self.assertEqual(x.new_was_called, True) diff --git a/lib-python/2.7/ctypes/test/test_numbers.py b/lib-python/2.7/ctypes/test/test_numbers.py --- a/lib-python/2.7/ctypes/test/test_numbers.py +++ b/lib-python/2.7/ctypes/test/test_numbers.py @@ -157,7 +157,7 @@ def test_int_from_address(self): from array import array for t in signed_types + unsigned_types: - # the array module doesn't suppport all format codes + # the array module doesn't support all format codes # (no 'q' or 'Q') try: array(t._type_) diff --git a/lib-python/2.7/ctypes/test/test_win32.py b/lib-python/2.7/ctypes/test/test_win32.py --- a/lib-python/2.7/ctypes/test/test_win32.py +++ b/lib-python/2.7/ctypes/test/test_win32.py @@ -17,7 +17,7 @@ # ValueError: Procedure probably called with not enough arguments (4 bytes missing) self.assertRaises(ValueError, IsWindow) - # This one should succeeed... + # This one should succeed... self.assertEqual(0, IsWindow(0)) # ValueError: Procedure probably called with too many arguments (8 bytes in excess) diff --git a/lib-python/2.7/curses/wrapper.py b/lib-python/2.7/curses/wrapper.py --- a/lib-python/2.7/curses/wrapper.py +++ b/lib-python/2.7/curses/wrapper.py @@ -43,7 +43,8 @@ return func(stdscr, *args, **kwds) finally: # Set everything back to normal - stdscr.keypad(0) - curses.echo() - curses.nocbreak() - curses.endwin() + if 'stdscr' in locals(): + stdscr.keypad(0) + curses.echo() + curses.nocbreak() + curses.endwin() diff --git a/lib-python/2.7/decimal.py b/lib-python/2.7/decimal.py --- a/lib-python/2.7/decimal.py +++ b/lib-python/2.7/decimal.py @@ -1068,14 +1068,16 @@ if ans: return ans - if not self: - # -Decimal('0') is Decimal('0'), not Decimal('-0') + if context is None: + context = getcontext() + + if not self and context.rounding != ROUND_FLOOR: + # -Decimal('0') is Decimal('0'), not Decimal('-0'), except + # in ROUND_FLOOR rounding mode. ans = self.copy_abs() else: ans = self.copy_negate() - if context is None: - context = getcontext() return ans._fix(context) def __pos__(self, context=None): @@ -1088,14 +1090,15 @@ if ans: return ans - if not self: - # + (-0) = 0 + if context is None: + context = getcontext() + + if not self and context.rounding != ROUND_FLOOR: + # + (-0) = 0, except in ROUND_FLOOR rounding mode. 
ans = self.copy_abs() else: ans = Decimal(self) - if context is None: - context = getcontext() return ans._fix(context) def __abs__(self, round=True, context=None): @@ -1680,7 +1683,7 @@ self = _dec_from_triple(self._sign, '1', exp_min-1) digits = 0 rounding_method = self._pick_rounding_function[context.rounding] - changed = getattr(self, rounding_method)(digits) + changed = rounding_method(self, digits) coeff = self._int[:digits] or '0' if changed > 0: coeff = str(int(coeff)+1) @@ -1720,8 +1723,6 @@ # here self was representable to begin with; return unchanged return Decimal(self) - _pick_rounding_function = {} - # for each of the rounding functions below: # self is a finite, nonzero Decimal # prec is an integer satisfying 0 <= prec < len(self._int) @@ -1788,6 +1789,17 @@ else: return -self._round_down(prec) + _pick_rounding_function = dict( + ROUND_DOWN = _round_down, + ROUND_UP = _round_up, + ROUND_HALF_UP = _round_half_up, + ROUND_HALF_DOWN = _round_half_down, + ROUND_HALF_EVEN = _round_half_even, + ROUND_CEILING = _round_ceiling, + ROUND_FLOOR = _round_floor, + ROUND_05UP = _round_05up, + ) + def fma(self, other, third, context=None): """Fused multiply-add. @@ -2492,8 +2504,8 @@ if digits < 0: self = _dec_from_triple(self._sign, '1', exp-1) digits = 0 - this_function = getattr(self, self._pick_rounding_function[rounding]) - changed = this_function(digits) + this_function = self._pick_rounding_function[rounding] + changed = this_function(self, digits) coeff = self._int[:digits] or '0' if changed == 1: coeff = str(int(coeff)+1) @@ -3705,18 +3717,6 @@ ##### Context class ####################################################### - -# get rounding method function: -rounding_functions = [name for name in Decimal.__dict__.keys() - if name.startswith('_round_')] -for name in rounding_functions: - # name is like _round_half_even, goes to the global ROUND_HALF_EVEN value. - globalname = name[1:].upper() - val = globals()[globalname] - Decimal._pick_rounding_function[val] = name - -del name, val, globalname, rounding_functions - class _ContextManager(object): """Context manager class to support localcontext(). @@ -5990,7 +5990,7 @@ def _format_align(sign, body, spec): """Given an unpadded, non-aligned numeric string 'body' and sign - string 'sign', add padding and aligment conforming to the given + string 'sign', add padding and alignment conforming to the given format specifier dictionary 'spec' (as produced by parse_format_specifier). 
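The decimal.py hunks above replace a dispatch table of method-name strings (resolved with getattr() on every call) by a dict that maps each ROUND_* constant straight to its rounding function, called as rounding_method(self, digits). A standalone sketch of the same pattern, with a toy class and invented rounding rules in place of Decimal's real ones:

# Toy illustration of the dispatch-table style used in the decimal.py
# diff above; only the pattern matches, the class and maths are invented.
ROUND_DOWN = 'ROUND_DOWN'
ROUND_UP = 'ROUND_UP'

class Toy(object):
    def __init__(self, value):
        self.value = value

    def _round_down(self, prec):
        return self.value // prec * prec

    def _round_up(self, prec):
        return -((-self.value) // prec) * prec

    # constants map straight to the functions, so call sites become
    # self._pick_rounding_function[mode](self, prec) with no getattr()
    _pick_rounding_function = dict(
        ROUND_DOWN=_round_down,
        ROUND_UP=_round_up,
    )

t = Toy(17)
assert t._pick_rounding_function[ROUND_DOWN](t, 5) == 15
assert t._pick_rounding_function[ROUND_UP](t, 5) == 20
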
diff --git a/lib-python/2.7/difflib.py b/lib-python/2.7/difflib.py --- a/lib-python/2.7/difflib.py +++ b/lib-python/2.7/difflib.py @@ -1140,6 +1140,21 @@ return ch in ws +######################################################################## +### Unified Diff +######################################################################## + +def _format_range_unified(start, stop): + 'Convert range to the "ed" format' + # Per the diff spec at http://www.unix.org/single_unix_specification/ + beginning = start + 1 # lines start numbering with one + length = stop - start + if length == 1: + return '{}'.format(beginning) + if not length: + beginning -= 1 # empty ranges begin at line just before the range + return '{},{}'.format(beginning, length) + def unified_diff(a, b, fromfile='', tofile='', fromfiledate='', tofiledate='', n=3, lineterm='\n'): r""" @@ -1184,25 +1199,45 @@ started = False for group in SequenceMatcher(None,a,b).get_grouped_opcodes(n): if not started: - fromdate = '\t%s' % fromfiledate if fromfiledate else '' - todate = '\t%s' % tofiledate if tofiledate else '' - yield '--- %s%s%s' % (fromfile, fromdate, lineterm) - yield '+++ %s%s%s' % (tofile, todate, lineterm) started = True - i1, i2, j1, j2 = group[0][1], group[-1][2], group[0][3], group[-1][4] - yield "@@ -%d,%d +%d,%d @@%s" % (i1+1, i2-i1, j1+1, j2-j1, lineterm) + fromdate = '\t{}'.format(fromfiledate) if fromfiledate else '' + todate = '\t{}'.format(tofiledate) if tofiledate else '' + yield '--- {}{}{}'.format(fromfile, fromdate, lineterm) + yield '+++ {}{}{}'.format(tofile, todate, lineterm) + + first, last = group[0], group[-1] + file1_range = _format_range_unified(first[1], last[2]) + file2_range = _format_range_unified(first[3], last[4]) + yield '@@ -{} +{} @@{}'.format(file1_range, file2_range, lineterm) + for tag, i1, i2, j1, j2 in group: if tag == 'equal': for line in a[i1:i2]: yield ' ' + line continue - if tag == 'replace' or tag == 'delete': + if tag in ('replace', 'delete'): for line in a[i1:i2]: yield '-' + line - if tag == 'replace' or tag == 'insert': + if tag in ('replace', 'insert'): for line in b[j1:j2]: yield '+' + line + +######################################################################## +### Context Diff +######################################################################## + +def _format_range_context(start, stop): + 'Convert range to the "ed" format' + # Per the diff spec at http://www.unix.org/single_unix_specification/ + beginning = start + 1 # lines start numbering with one + length = stop - start + if not length: + beginning -= 1 # empty ranges begin at line just before the range + if length <= 1: + return '{}'.format(beginning) + return '{},{}'.format(beginning, beginning + length - 1) + # See http://www.unix.org/single_unix_specification/ def context_diff(a, b, fromfile='', tofile='', fromfiledate='', tofiledate='', n=3, lineterm='\n'): @@ -1247,38 +1282,36 @@ four """ + prefix = dict(insert='+ ', delete='- ', replace='! ', equal=' ') started = False - prefixmap = {'insert':'+ ', 'delete':'- ', 'replace':'! 
', 'equal':' '} for group in SequenceMatcher(None,a,b).get_grouped_opcodes(n): if not started: - fromdate = '\t%s' % fromfiledate if fromfiledate else '' - todate = '\t%s' % tofiledate if tofiledate else '' - yield '*** %s%s%s' % (fromfile, fromdate, lineterm) - yield '--- %s%s%s' % (tofile, todate, lineterm) started = True + fromdate = '\t{}'.format(fromfiledate) if fromfiledate else '' + todate = '\t{}'.format(tofiledate) if tofiledate else '' + yield '*** {}{}{}'.format(fromfile, fromdate, lineterm) + yield '--- {}{}{}'.format(tofile, todate, lineterm) - yield '***************%s' % (lineterm,) - if group[-1][2] - group[0][1] >= 2: - yield '*** %d,%d ****%s' % (group[0][1]+1, group[-1][2], lineterm) - else: - yield '*** %d ****%s' % (group[-1][2], lineterm) - visiblechanges = [e for e in group if e[0] in ('replace', 'delete')] - if visiblechanges: + first, last = group[0], group[-1] + yield '***************' + lineterm + + file1_range = _format_range_context(first[1], last[2]) + yield '*** {} ****{}'.format(file1_range, lineterm) + + if any(tag in ('replace', 'delete') for tag, _, _, _, _ in group): for tag, i1, i2, _, _ in group: if tag != 'insert': for line in a[i1:i2]: - yield prefixmap[tag] + line + yield prefix[tag] + line - if group[-1][4] - group[0][3] >= 2: - yield '--- %d,%d ----%s' % (group[0][3]+1, group[-1][4], lineterm) - else: - yield '--- %d ----%s' % (group[-1][4], lineterm) - visiblechanges = [e for e in group if e[0] in ('replace', 'insert')] - if visiblechanges: + file2_range = _format_range_context(first[3], last[4]) + yield '--- {} ----{}'.format(file2_range, lineterm) + + if any(tag in ('replace', 'insert') for tag, _, _, _, _ in group): for tag, _, _, j1, j2 in group: if tag != 'delete': for line in b[j1:j2]: - yield prefixmap[tag] + line + yield prefix[tag] + line def ndiff(a, b, linejunk=None, charjunk=IS_CHARACTER_JUNK): r""" @@ -1714,7 +1747,7 @@ line = line.replace(' ','\0') # expand tabs into spaces line = line.expandtabs(self._tabsize) - # relace spaces from expanded tabs back into tab characters + # replace spaces from expanded tabs back into tab characters # (we'll replace them with markup after we do differencing) line = line.replace(' ','\t') return line.replace('\0',' ').rstrip('\n') diff --git a/lib-python/2.7/distutils/__init__.py b/lib-python/2.7/distutils/__init__.py --- a/lib-python/2.7/distutils/__init__.py +++ b/lib-python/2.7/distutils/__init__.py @@ -15,5 +15,5 @@ # Updated automatically by the Python release process. # #--start constants-- -__version__ = "2.7.1" +__version__ = "2.7.2" #--end constants-- diff --git a/lib-python/2.7/distutils/archive_util.py b/lib-python/2.7/distutils/archive_util.py --- a/lib-python/2.7/distutils/archive_util.py +++ b/lib-python/2.7/distutils/archive_util.py @@ -121,7 +121,7 @@ def make_zipfile(base_name, base_dir, verbose=0, dry_run=0): """Create a zip file from all the files under 'base_dir'. - The output zip file will be named 'base_dir' + ".zip". Uses either the + The output zip file will be named 'base_name' + ".zip". Uses either the "zipfile" Python module (if available) or the InfoZIP "zip" utility (if installed and found on the default search path). If neither tool is available, raises DistutilsExecError. 
Returns the name of the output zip diff --git a/lib-python/2.7/distutils/cmd.py b/lib-python/2.7/distutils/cmd.py --- a/lib-python/2.7/distutils/cmd.py +++ b/lib-python/2.7/distutils/cmd.py @@ -377,7 +377,7 @@ dry_run=self.dry_run) def move_file (self, src, dst, level=1): - """Move a file respectin dry-run flag.""" + """Move a file respecting dry-run flag.""" return file_util.move_file(src, dst, dry_run = self.dry_run) def spawn (self, cmd, search_path=1, level=1): diff --git a/lib-python/2.7/distutils/command/build_ext.py b/lib-python/2.7/distutils/command/build_ext.py --- a/lib-python/2.7/distutils/command/build_ext.py +++ b/lib-python/2.7/distutils/command/build_ext.py @@ -207,7 +207,7 @@ elif MSVC_VERSION == 8: self.library_dirs.append(os.path.join(sys.exec_prefix, - 'PC', 'VS8.0', 'win32release')) + 'PC', 'VS8.0')) elif MSVC_VERSION == 7: self.library_dirs.append(os.path.join(sys.exec_prefix, 'PC', 'VS7.1')) diff --git a/lib-python/2.7/distutils/command/sdist.py b/lib-python/2.7/distutils/command/sdist.py --- a/lib-python/2.7/distutils/command/sdist.py +++ b/lib-python/2.7/distutils/command/sdist.py @@ -306,17 +306,20 @@ rstrip_ws=1, collapse_join=1) - while 1: - line = template.readline() - if line is None: # end of file - break + try: + while 1: + line = template.readline() + if line is None: # end of file + break - try: - self.filelist.process_template_line(line) - except DistutilsTemplateError, msg: - self.warn("%s, line %d: %s" % (template.filename, - template.current_line, - msg)) + try: + self.filelist.process_template_line(line) + except DistutilsTemplateError, msg: + self.warn("%s, line %d: %s" % (template.filename, + template.current_line, + msg)) + finally: + template.close() def prune_file_list(self): """Prune off branches that might slip into the file list as created diff --git a/lib-python/2.7/distutils/command/upload.py b/lib-python/2.7/distutils/command/upload.py --- a/lib-python/2.7/distutils/command/upload.py +++ b/lib-python/2.7/distutils/command/upload.py @@ -176,6 +176,9 @@ result = urlopen(request) status = result.getcode() reason = result.msg + if self.show_response: + msg = '\n'.join(('-' * 75, r.read(), '-' * 75)) + self.announce(msg, log.INFO) except socket.error, e: self.announce(str(e), log.ERROR) return @@ -189,6 +192,3 @@ else: self.announce('Upload failed (%s): %s' % (status, reason), log.ERROR) - if self.show_response: - msg = '\n'.join(('-' * 75, r.read(), '-' * 75)) - self.announce(msg, log.INFO) diff --git a/lib-python/2.7/distutils/sysconfig.py b/lib-python/2.7/distutils/sysconfig.py --- a/lib-python/2.7/distutils/sysconfig.py +++ b/lib-python/2.7/distutils/sysconfig.py @@ -389,7 +389,7 @@ cur_target = os.getenv('MACOSX_DEPLOYMENT_TARGET', '') if cur_target == '': cur_target = cfg_target - os.putenv('MACOSX_DEPLOYMENT_TARGET', cfg_target) + os.environ['MACOSX_DEPLOYMENT_TARGET'] = cfg_target elif map(int, cfg_target.split('.')) > map(int, cur_target.split('.')): my_msg = ('$MACOSX_DEPLOYMENT_TARGET mismatch: now "%s" but "%s" during configure' % (cur_target, cfg_target)) diff --git a/lib-python/2.7/distutils/tests/__init__.py b/lib-python/2.7/distutils/tests/__init__.py --- a/lib-python/2.7/distutils/tests/__init__.py +++ b/lib-python/2.7/distutils/tests/__init__.py @@ -15,9 +15,10 @@ import os import sys import unittest +from test.test_support import run_unittest -here = os.path.dirname(__file__) +here = os.path.dirname(__file__) or os.curdir def test_suite(): @@ -32,4 +33,4 @@ if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + 
run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_archive_util.py b/lib-python/2.7/distutils/tests/test_archive_util.py --- a/lib-python/2.7/distutils/tests/test_archive_util.py +++ b/lib-python/2.7/distutils/tests/test_archive_util.py @@ -12,7 +12,7 @@ ARCHIVE_FORMATS) from distutils.spawn import find_executable, spawn from distutils.tests import support -from test.test_support import check_warnings +from test.test_support import check_warnings, run_unittest try: import grp @@ -281,4 +281,4 @@ return unittest.makeSuite(ArchiveUtilTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_bdist_msi.py b/lib-python/2.7/distutils/tests/test_bdist_msi.py --- a/lib-python/2.7/distutils/tests/test_bdist_msi.py +++ b/lib-python/2.7/distutils/tests/test_bdist_msi.py @@ -11,7 +11,7 @@ support.LoggingSilencer, unittest.TestCase): - def test_minial(self): + def test_minimal(self): # minimal test XXX need more tests from distutils.command.bdist_msi import bdist_msi pkg_pth, dist = self.create_dist() diff --git a/lib-python/2.7/distutils/tests/test_build.py b/lib-python/2.7/distutils/tests/test_build.py --- a/lib-python/2.7/distutils/tests/test_build.py +++ b/lib-python/2.7/distutils/tests/test_build.py @@ -2,6 +2,7 @@ import unittest import os import sys +from test.test_support import run_unittest from distutils.command.build import build from distutils.tests import support @@ -51,4 +52,4 @@ return unittest.makeSuite(BuildTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_build_clib.py b/lib-python/2.7/distutils/tests/test_build_clib.py --- a/lib-python/2.7/distutils/tests/test_build_clib.py +++ b/lib-python/2.7/distutils/tests/test_build_clib.py @@ -3,6 +3,8 @@ import os import sys +from test.test_support import run_unittest + from distutils.command.build_clib import build_clib from distutils.errors import DistutilsSetupError from distutils.tests import support @@ -140,4 +142,4 @@ return unittest.makeSuite(BuildCLibTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_build_ext.py b/lib-python/2.7/distutils/tests/test_build_ext.py --- a/lib-python/2.7/distutils/tests/test_build_ext.py +++ b/lib-python/2.7/distutils/tests/test_build_ext.py @@ -3,12 +3,13 @@ import tempfile import shutil from StringIO import StringIO +import textwrap from distutils.core import Extension, Distribution from distutils.command.build_ext import build_ext from distutils import sysconfig from distutils.tests import support -from distutils.errors import DistutilsSetupError +from distutils.errors import DistutilsSetupError, CompileError import unittest from test import test_support @@ -430,6 +431,59 @@ wanted = os.path.join(cmd.build_lib, 'UpdateManager', 'fdsend' + ext) self.assertEqual(ext_path, wanted) + @unittest.skipUnless(sys.platform == 'darwin', 'test only relevant for MacOSX') + def test_deployment_target(self): + self._try_compile_deployment_target() + + orig_environ = os.environ + os.environ = orig_environ.copy() + self.addCleanup(setattr, os, 'environ', orig_environ) + + os.environ['MACOSX_DEPLOYMENT_TARGET']='10.1' + self._try_compile_deployment_target() + + + def _try_compile_deployment_target(self): + deptarget_c = os.path.join(self.tmp_dir, 'deptargetmodule.c') + + with 
open(deptarget_c, 'w') as fp: + fp.write(textwrap.dedent('''\ + #include + + int dummy; + + #if TARGET != MAC_OS_X_VERSION_MIN_REQUIRED + #error "Unexpected target" + #endif + + ''')) + + target = sysconfig.get_config_var('MACOSX_DEPLOYMENT_TARGET') + target = tuple(map(int, target.split('.'))) + target = '%02d%01d0' % target + + deptarget_ext = Extension( + 'deptarget', + [deptarget_c], + extra_compile_args=['-DTARGET=%s'%(target,)], + ) + dist = Distribution({ + 'name': 'deptarget', + 'ext_modules': [deptarget_ext] + }) + dist.package_dir = self.tmp_dir + cmd = build_ext(dist) + cmd.build_lib = self.tmp_dir + cmd.build_temp = self.tmp_dir + + try: + old_stdout = sys.stdout + cmd.ensure_finalized() + cmd.run() + + except CompileError: + self.fail("Wrong deployment target during compilation") + def test_suite(): return unittest.makeSuite(BuildExtTestCase) diff --git a/lib-python/2.7/distutils/tests/test_build_py.py b/lib-python/2.7/distutils/tests/test_build_py.py --- a/lib-python/2.7/distutils/tests/test_build_py.py +++ b/lib-python/2.7/distutils/tests/test_build_py.py @@ -10,13 +10,14 @@ from distutils.errors import DistutilsFileError from distutils.tests import support +from test.test_support import run_unittest class BuildPyTestCase(support.TempdirManager, support.LoggingSilencer, unittest.TestCase): - def _setup_package_data(self): + def test_package_data(self): sources = self.mkdtemp() f = open(os.path.join(sources, "__init__.py"), "w") try: @@ -56,20 +57,15 @@ self.assertEqual(len(cmd.get_outputs()), 3) pkgdest = os.path.join(destination, "pkg") files = os.listdir(pkgdest) - return files + self.assertIn("__init__.py", files) + self.assertIn("README.txt", files) + # XXX even with -O, distutils writes pyc, not pyo; bug? + if sys.dont_write_bytecode: + self.assertNotIn("__init__.pyc", files) + else: + self.assertIn("__init__.pyc", files) - def test_package_data(self): - files = self._setup_package_data() - self.assertTrue("__init__.py" in files) - self.assertTrue("README.txt" in files) - - @unittest.skipIf(sys.flags.optimize >= 2, - "pyc files are not written with -O2 and above") - def test_package_data_pyc(self): - files = self._setup_package_data() - self.assertTrue("__init__.pyc" in files) - - def test_empty_package_dir (self): + def test_empty_package_dir(self): # See SF 1668596/1720897. 
cwd = os.getcwd() @@ -117,10 +113,10 @@ finally: sys.dont_write_bytecode = old_dont_write_bytecode - self.assertTrue('byte-compiling is disabled' in self.logs[0][1]) + self.assertIn('byte-compiling is disabled', self.logs[0][1]) def test_suite(): return unittest.makeSuite(BuildPyTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_build_scripts.py b/lib-python/2.7/distutils/tests/test_build_scripts.py --- a/lib-python/2.7/distutils/tests/test_build_scripts.py +++ b/lib-python/2.7/distutils/tests/test_build_scripts.py @@ -8,6 +8,7 @@ import sysconfig from distutils.tests import support +from test.test_support import run_unittest class BuildScriptsTestCase(support.TempdirManager, @@ -108,4 +109,4 @@ return unittest.makeSuite(BuildScriptsTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_check.py b/lib-python/2.7/distutils/tests/test_check.py --- a/lib-python/2.7/distutils/tests/test_check.py +++ b/lib-python/2.7/distutils/tests/test_check.py @@ -1,5 +1,6 @@ """Tests for distutils.command.check.""" import unittest +from test.test_support import run_unittest from distutils.command.check import check, HAS_DOCUTILS from distutils.tests import support @@ -95,4 +96,4 @@ return unittest.makeSuite(CheckTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_clean.py b/lib-python/2.7/distutils/tests/test_clean.py --- a/lib-python/2.7/distutils/tests/test_clean.py +++ b/lib-python/2.7/distutils/tests/test_clean.py @@ -6,6 +6,7 @@ from distutils.command.clean import clean from distutils.tests import support +from test.test_support import run_unittest class cleanTestCase(support.TempdirManager, support.LoggingSilencer, @@ -38,7 +39,7 @@ self.assertTrue(not os.path.exists(path), '%s was not removed' % path) - # let's run the command again (should spit warnings but suceed) + # let's run the command again (should spit warnings but succeed) cmd.all = 1 cmd.ensure_finalized() cmd.run() @@ -47,4 +48,4 @@ return unittest.makeSuite(cleanTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_cmd.py b/lib-python/2.7/distutils/tests/test_cmd.py --- a/lib-python/2.7/distutils/tests/test_cmd.py +++ b/lib-python/2.7/distutils/tests/test_cmd.py @@ -99,7 +99,7 @@ def test_ensure_dirname(self): cmd = self.cmd - cmd.option1 = os.path.dirname(__file__) + cmd.option1 = os.path.dirname(__file__) or os.curdir cmd.ensure_dirname('option1') cmd.option2 = 'xxx' self.assertRaises(DistutilsOptionError, cmd.ensure_dirname, 'option2') diff --git a/lib-python/2.7/distutils/tests/test_config.py b/lib-python/2.7/distutils/tests/test_config.py --- a/lib-python/2.7/distutils/tests/test_config.py +++ b/lib-python/2.7/distutils/tests/test_config.py @@ -11,6 +11,7 @@ from distutils.log import WARN from distutils.tests import support +from test.test_support import run_unittest PYPIRC = """\ [distutils] @@ -119,4 +120,4 @@ return unittest.makeSuite(PyPIRCCommandTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_config_cmd.py b/lib-python/2.7/distutils/tests/test_config_cmd.py --- a/lib-python/2.7/distutils/tests/test_config_cmd.py 
+++ b/lib-python/2.7/distutils/tests/test_config_cmd.py @@ -2,6 +2,7 @@ import unittest import os import sys +from test.test_support import run_unittest from distutils.command.config import dump_file, config from distutils.tests import support @@ -86,4 +87,4 @@ return unittest.makeSuite(ConfigTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_core.py b/lib-python/2.7/distutils/tests/test_core.py --- a/lib-python/2.7/distutils/tests/test_core.py +++ b/lib-python/2.7/distutils/tests/test_core.py @@ -6,7 +6,7 @@ import shutil import sys import test.test_support -from test.test_support import captured_stdout +from test.test_support import captured_stdout, run_unittest import unittest from distutils.tests import support @@ -105,4 +105,4 @@ return unittest.makeSuite(CoreTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_dep_util.py b/lib-python/2.7/distutils/tests/test_dep_util.py --- a/lib-python/2.7/distutils/tests/test_dep_util.py +++ b/lib-python/2.7/distutils/tests/test_dep_util.py @@ -6,6 +6,7 @@ from distutils.dep_util import newer, newer_pairwise, newer_group from distutils.errors import DistutilsFileError from distutils.tests import support +from test.test_support import run_unittest class DepUtilTestCase(support.TempdirManager, unittest.TestCase): @@ -77,4 +78,4 @@ return unittest.makeSuite(DepUtilTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_dir_util.py b/lib-python/2.7/distutils/tests/test_dir_util.py --- a/lib-python/2.7/distutils/tests/test_dir_util.py +++ b/lib-python/2.7/distutils/tests/test_dir_util.py @@ -10,6 +10,7 @@ from distutils import log from distutils.tests import support +from test.test_support import run_unittest class DirUtilTestCase(support.TempdirManager, unittest.TestCase): @@ -112,4 +113,4 @@ return unittest.makeSuite(DirUtilTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_dist.py b/lib-python/2.7/distutils/tests/test_dist.py --- a/lib-python/2.7/distutils/tests/test_dist.py +++ b/lib-python/2.7/distutils/tests/test_dist.py @@ -11,7 +11,7 @@ from distutils.dist import Distribution, fix_help_options, DistributionMetadata from distutils.cmd import Command import distutils.dist -from test.test_support import TESTFN, captured_stdout +from test.test_support import TESTFN, captured_stdout, run_unittest from distutils.tests import support class test_dist(Command): @@ -433,4 +433,4 @@ return suite if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_file_util.py b/lib-python/2.7/distutils/tests/test_file_util.py --- a/lib-python/2.7/distutils/tests/test_file_util.py +++ b/lib-python/2.7/distutils/tests/test_file_util.py @@ -6,6 +6,7 @@ from distutils.file_util import move_file, write_file, copy_file from distutils import log from distutils.tests import support +from test.test_support import run_unittest class FileUtilTestCase(support.TempdirManager, unittest.TestCase): @@ -77,4 +78,4 @@ return unittest.makeSuite(FileUtilTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git 
a/lib-python/2.7/distutils/tests/test_filelist.py b/lib-python/2.7/distutils/tests/test_filelist.py --- a/lib-python/2.7/distutils/tests/test_filelist.py +++ b/lib-python/2.7/distutils/tests/test_filelist.py @@ -1,7 +1,7 @@ """Tests for distutils.filelist.""" from os.path import join import unittest -from test.test_support import captured_stdout +from test.test_support import captured_stdout, run_unittest from distutils.filelist import glob_to_re, FileList from distutils import debug @@ -82,4 +82,4 @@ return unittest.makeSuite(FileListTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_install.py b/lib-python/2.7/distutils/tests/test_install.py --- a/lib-python/2.7/distutils/tests/test_install.py +++ b/lib-python/2.7/distutils/tests/test_install.py @@ -3,6 +3,8 @@ import os import unittest +from test.test_support import run_unittest + from distutils.command.install import install from distutils.core import Distribution @@ -52,4 +54,4 @@ return unittest.makeSuite(InstallTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_install_data.py b/lib-python/2.7/distutils/tests/test_install_data.py --- a/lib-python/2.7/distutils/tests/test_install_data.py +++ b/lib-python/2.7/distutils/tests/test_install_data.py @@ -6,6 +6,7 @@ from distutils.command.install_data import install_data from distutils.tests import support +from test.test_support import run_unittest class InstallDataTestCase(support.TempdirManager, support.LoggingSilencer, @@ -73,4 +74,4 @@ return unittest.makeSuite(InstallDataTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_install_headers.py b/lib-python/2.7/distutils/tests/test_install_headers.py --- a/lib-python/2.7/distutils/tests/test_install_headers.py +++ b/lib-python/2.7/distutils/tests/test_install_headers.py @@ -6,6 +6,7 @@ from distutils.command.install_headers import install_headers from distutils.tests import support +from test.test_support import run_unittest class InstallHeadersTestCase(support.TempdirManager, support.LoggingSilencer, @@ -37,4 +38,4 @@ return unittest.makeSuite(InstallHeadersTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_install_lib.py b/lib-python/2.7/distutils/tests/test_install_lib.py --- a/lib-python/2.7/distutils/tests/test_install_lib.py +++ b/lib-python/2.7/distutils/tests/test_install_lib.py @@ -7,6 +7,7 @@ from distutils.extension import Extension from distutils.tests import support from distutils.errors import DistutilsOptionError +from test.test_support import run_unittest class InstallLibTestCase(support.TempdirManager, support.LoggingSilencer, @@ -103,4 +104,4 @@ return unittest.makeSuite(InstallLibTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_install_scripts.py b/lib-python/2.7/distutils/tests/test_install_scripts.py --- a/lib-python/2.7/distutils/tests/test_install_scripts.py +++ b/lib-python/2.7/distutils/tests/test_install_scripts.py @@ -7,6 +7,7 @@ from distutils.core import Distribution from distutils.tests import support +from test.test_support import run_unittest class 
InstallScriptsTestCase(support.TempdirManager, @@ -78,4 +79,4 @@ return unittest.makeSuite(InstallScriptsTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_msvc9compiler.py b/lib-python/2.7/distutils/tests/test_msvc9compiler.py --- a/lib-python/2.7/distutils/tests/test_msvc9compiler.py +++ b/lib-python/2.7/distutils/tests/test_msvc9compiler.py @@ -5,6 +5,7 @@ from distutils.errors import DistutilsPlatformError from distutils.tests import support +from test.test_support import run_unittest _MANIFEST = """\ @@ -137,4 +138,4 @@ return unittest.makeSuite(msvc9compilerTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_register.py b/lib-python/2.7/distutils/tests/test_register.py --- a/lib-python/2.7/distutils/tests/test_register.py +++ b/lib-python/2.7/distutils/tests/test_register.py @@ -7,7 +7,7 @@ import urllib2 import warnings -from test.test_support import check_warnings +from test.test_support import check_warnings, run_unittest from distutils.command import register as register_module from distutils.command.register import register @@ -138,7 +138,7 @@ # let's see what the server received : we should # have 2 similar requests - self.assertTrue(self.conn.reqs, 2) + self.assertEqual(len(self.conn.reqs), 2) req1 = dict(self.conn.reqs[0].headers) req2 = dict(self.conn.reqs[1].headers) self.assertEqual(req2['Content-length'], req1['Content-length']) @@ -168,7 +168,7 @@ del register_module.raw_input # we should have send a request - self.assertTrue(self.conn.reqs, 1) + self.assertEqual(len(self.conn.reqs), 1) req = self.conn.reqs[0] headers = dict(req.headers) self.assertEqual(headers['Content-length'], '608') @@ -186,7 +186,7 @@ del register_module.raw_input # we should have send a request - self.assertTrue(self.conn.reqs, 1) + self.assertEqual(len(self.conn.reqs), 1) req = self.conn.reqs[0] headers = dict(req.headers) self.assertEqual(headers['Content-length'], '290') @@ -258,4 +258,4 @@ return unittest.makeSuite(RegisterTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_sdist.py b/lib-python/2.7/distutils/tests/test_sdist.py --- a/lib-python/2.7/distutils/tests/test_sdist.py +++ b/lib-python/2.7/distutils/tests/test_sdist.py @@ -24,11 +24,9 @@ import tempfile import warnings -from test.test_support import check_warnings -from test.test_support import captured_stdout +from test.test_support import captured_stdout, check_warnings, run_unittest -from distutils.command.sdist import sdist -from distutils.command.sdist import show_formats +from distutils.command.sdist import sdist, show_formats from distutils.core import Distribution from distutils.tests.test_config import PyPIRCCommandTestCase from distutils.errors import DistutilsExecError, DistutilsOptionError @@ -372,7 +370,7 @@ # adding a file self.write_file((self.tmp_dir, 'somecode', 'doc2.txt'), '#') - # make sure build_py is reinitinialized, like a fresh run + # make sure build_py is reinitialized, like a fresh run build_py = dist.get_command_obj('build_py') build_py.finalized = False build_py.ensure_finalized() @@ -390,6 +388,7 @@ self.assertEqual(len(manifest2), 6) self.assertIn('doc2.txt', manifest2[-1]) + @unittest.skipUnless(zlib, "requires zlib") def test_manifest_marker(self): # check that autogenerated MANIFESTs have a 
marker dist, cmd = self.get_cmd() @@ -406,6 +405,7 @@ self.assertEqual(manifest[0], '# file GENERATED by distutils, do NOT edit') + @unittest.skipUnless(zlib, "requires zlib") def test_manual_manifest(self): # check that a MANIFEST without a marker is left alone dist, cmd = self.get_cmd() @@ -426,4 +426,4 @@ return unittest.makeSuite(SDistTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_spawn.py b/lib-python/2.7/distutils/tests/test_spawn.py --- a/lib-python/2.7/distutils/tests/test_spawn.py +++ b/lib-python/2.7/distutils/tests/test_spawn.py @@ -2,7 +2,7 @@ import unittest import os import time -from test.test_support import captured_stdout +from test.test_support import captured_stdout, run_unittest from distutils.spawn import _nt_quote_args from distutils.spawn import spawn, find_executable @@ -57,4 +57,4 @@ return unittest.makeSuite(SpawnTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_text_file.py b/lib-python/2.7/distutils/tests/test_text_file.py --- a/lib-python/2.7/distutils/tests/test_text_file.py +++ b/lib-python/2.7/distutils/tests/test_text_file.py @@ -3,6 +3,7 @@ import unittest from distutils.text_file import TextFile from distutils.tests import support +from test.test_support import run_unittest TEST_DATA = """# test file @@ -103,4 +104,4 @@ return unittest.makeSuite(TextFileTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_unixccompiler.py b/lib-python/2.7/distutils/tests/test_unixccompiler.py --- a/lib-python/2.7/distutils/tests/test_unixccompiler.py +++ b/lib-python/2.7/distutils/tests/test_unixccompiler.py @@ -1,6 +1,7 @@ """Tests for distutils.unixccompiler.""" import sys import unittest +from test.test_support import run_unittest from distutils import sysconfig from distutils.unixccompiler import UnixCCompiler @@ -126,4 +127,4 @@ return unittest.makeSuite(UnixCCompilerTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_upload.py b/lib-python/2.7/distutils/tests/test_upload.py --- a/lib-python/2.7/distutils/tests/test_upload.py +++ b/lib-python/2.7/distutils/tests/test_upload.py @@ -1,14 +1,13 @@ +# -*- encoding: utf8 -*- """Tests for distutils.command.upload.""" -# -*- encoding: utf8 -*- -import sys import os import unittest +from test.test_support import run_unittest from distutils.command import upload as upload_mod from distutils.command.upload import upload from distutils.core import Distribution -from distutils.tests import support from distutils.tests.test_config import PYPIRC, PyPIRCCommandTestCase PYPIRC_LONG_PASSWORD = """\ @@ -129,4 +128,4 @@ return unittest.makeSuite(uploadTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_util.py b/lib-python/2.7/distutils/tests/test_util.py --- a/lib-python/2.7/distutils/tests/test_util.py +++ b/lib-python/2.7/distutils/tests/test_util.py @@ -1,6 +1,7 @@ """Tests for distutils.util.""" import sys import unittest +from test.test_support import run_unittest from distutils.errors import DistutilsPlatformError, DistutilsByteCompileError from distutils.util import byte_compile @@ -21,4 +22,4 @@ return 
unittest.makeSuite(UtilTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_version.py b/lib-python/2.7/distutils/tests/test_version.py --- a/lib-python/2.7/distutils/tests/test_version.py +++ b/lib-python/2.7/distutils/tests/test_version.py @@ -2,6 +2,7 @@ import unittest from distutils.version import LooseVersion from distutils.version import StrictVersion +from test.test_support import run_unittest class VersionTestCase(unittest.TestCase): @@ -67,4 +68,4 @@ return unittest.makeSuite(VersionTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_versionpredicate.py b/lib-python/2.7/distutils/tests/test_versionpredicate.py --- a/lib-python/2.7/distutils/tests/test_versionpredicate.py +++ b/lib-python/2.7/distutils/tests/test_versionpredicate.py @@ -4,6 +4,10 @@ import distutils.versionpredicate import doctest +from test.test_support import run_unittest def test_suite(): return doctest.DocTestSuite(distutils.versionpredicate) + +if __name__ == '__main__': + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/util.py b/lib-python/2.7/distutils/util.py --- a/lib-python/2.7/distutils/util.py +++ b/lib-python/2.7/distutils/util.py @@ -97,9 +97,7 @@ from distutils.sysconfig import get_config_vars cfgvars = get_config_vars() - macver = os.environ.get('MACOSX_DEPLOYMENT_TARGET') - if not macver: - macver = cfgvars.get('MACOSX_DEPLOYMENT_TARGET') + macver = cfgvars.get('MACOSX_DEPLOYMENT_TARGET') if 1: # Always calculate the release of the running machine, diff --git a/lib-python/2.7/doctest.py b/lib-python/2.7/doctest.py --- a/lib-python/2.7/doctest.py +++ b/lib-python/2.7/doctest.py @@ -1217,7 +1217,7 @@ # Process each example. for examplenum, example in enumerate(test.examples): - # If REPORT_ONLY_FIRST_FAILURE is set, then supress + # If REPORT_ONLY_FIRST_FAILURE is set, then suppress # reporting after the first failure. quiet = (self.optionflags & REPORT_ONLY_FIRST_FAILURE and failures > 0) @@ -2186,7 +2186,7 @@ caller can catch the errors and initiate post-mortem debugging. The DocTestCase provides a debug method that raises - UnexpectedException errors if there is an unexepcted + UnexpectedException errors if there is an unexpected exception: >>> test = DocTestParser().get_doctest('>>> raise KeyError\n42', diff --git a/lib-python/2.7/email/charset.py b/lib-python/2.7/email/charset.py --- a/lib-python/2.7/email/charset.py +++ b/lib-python/2.7/email/charset.py @@ -209,7 +209,7 @@ input_charset = unicode(input_charset, 'ascii') except UnicodeError: raise errors.CharsetError(input_charset) - input_charset = input_charset.lower() + input_charset = input_charset.lower().encode('ascii') # Set the input charset after filtering through the aliases and/or codecs if not (input_charset in ALIASES or input_charset in CHARSETS): try: diff --git a/lib-python/2.7/email/generator.py b/lib-python/2.7/email/generator.py --- a/lib-python/2.7/email/generator.py +++ b/lib-python/2.7/email/generator.py @@ -202,18 +202,13 @@ g = self.clone(s) g.flatten(part, unixfrom=False) msgtexts.append(s.getvalue()) - # Now make sure the boundary we've selected doesn't appear in any of - # the message texts. - alltext = NL.join(msgtexts) # BAW: What about boundaries that are wrapped in double-quotes? 
- boundary = msg.get_boundary(failobj=_make_boundary(alltext)) - # If we had to calculate a new boundary because the body text - # contained that string, set the new boundary. We don't do it - # unconditionally because, while set_boundary() preserves order, it - # doesn't preserve newlines/continuations in headers. This is no big - # deal in practice, but turns out to be inconvenient for the unittest - # suite. - if msg.get_boundary() != boundary: + boundary = msg.get_boundary() + if not boundary: + # Create a boundary that doesn't appear in any of the + # message texts. + alltext = NL.join(msgtexts) + boundary = _make_boundary(alltext) msg.set_boundary(boundary) # If there's a preamble, write it out, with a trailing CRLF if msg.preamble is not None: @@ -292,7 +287,7 @@ _FMT = '[Non-text (%(type)s) part of message omitted, filename %(filename)s]' class DecodedGenerator(Generator): - """Generator a text representation of a message. + """Generates a text representation of a message. Like the Generator base class, except that non-text parts are substituted with a format string representing the part. diff --git a/lib-python/2.7/email/header.py b/lib-python/2.7/email/header.py --- a/lib-python/2.7/email/header.py +++ b/lib-python/2.7/email/header.py @@ -47,6 +47,10 @@ # For use with .match() fcre = re.compile(r'[\041-\176]+:$') +# Find a header embedded in a putative header value. Used to check for +# header injection attack. +_embeded_header = re.compile(r'\n[^ \t]+:') + # Helpers @@ -403,7 +407,11 @@ newchunks += self._split(s, charset, targetlen, splitchars) lastchunk, lastcharset = newchunks[-1] lastlen = lastcharset.encoded_header_len(lastchunk) - return self._encode_chunks(newchunks, maxlinelen) + value = self._encode_chunks(newchunks, maxlinelen) + if _embeded_header.search(value): + raise HeaderParseError("header value appears to contain " + "an embedded header: {!r}".format(value)) + return value diff --git a/lib-python/2.7/email/message.py b/lib-python/2.7/email/message.py --- a/lib-python/2.7/email/message.py +++ b/lib-python/2.7/email/message.py @@ -38,7 +38,9 @@ def _formatparam(param, value=None, quote=True): """Convenience function to format and return a key=value pair. - This will quote the value if needed or if quote is true. + This will quote the value if needed or if quote is true. If value is a + three tuple (charset, language, value), it will be encoded according + to RFC2231 rules. """ if value is not None and len(value) > 0: # A tuple is used for RFC 2231 encoded parameter values where items @@ -97,7 +99,7 @@ objects, otherwise it is a string. Message objects implement part of the `mapping' interface, which assumes - there is exactly one occurrance of the header per message. Some headers + there is exactly one occurrence of the header per message. Some headers do in fact appear multiple times (e.g. Received) and for those headers, you must use the explicit API to set or get all the headers. Not all of the mapping methods are implemented. @@ -286,7 +288,7 @@ Return None if the header is missing instead of raising an exception. Note that if the header appeared multiple times, exactly which - occurrance gets returned is undefined. Use get_all() to get all + occurrence gets returned is undefined. Use get_all() to get all the values matching a header field name. """ return self.get(name) @@ -389,7 +391,10 @@ name is the header field to add. keyword arguments can be used to set additional parameters for the header field, with underscores converted to dashes. 
Normally the parameter will be added as key="value" unless - value is None, in which case only the key will be added. + value is None, in which case only the key will be added. If a + parameter value contains non-ASCII characters it must be specified as a + three-tuple of (charset, language, value), in which case it will be + encoded according to RFC2231 rules. Example: diff --git a/lib-python/2.7/email/mime/application.py b/lib-python/2.7/email/mime/application.py --- a/lib-python/2.7/email/mime/application.py +++ b/lib-python/2.7/email/mime/application.py @@ -17,7 +17,7 @@ _encoder=encoders.encode_base64, **_params): """Create an application/* type MIME document. - _data is a string containing the raw applicatoin data. + _data is a string containing the raw application data. _subtype is the MIME content type subtype, defaulting to 'octet-stream'. diff --git a/lib-python/2.7/email/test/data/msg_26.txt b/lib-python/2.7/email/test/data/msg_26.txt --- a/lib-python/2.7/email/test/data/msg_26.txt +++ b/lib-python/2.7/email/test/data/msg_26.txt @@ -42,4 +42,4 @@ MzMAAAAACH97tzAAAAALu3c3gAAAAAAL+7tzDABAu7f7cAAAAAAACA+3MA7EQAv/sIAA AAAAAAAIAAAAAAAAAIAAAAAA ---1618492860--2051301190--113853680-- +--1618492860--2051301190--113853680-- \ No newline at end of file diff --git a/lib-python/2.7/email/test/test_email.py b/lib-python/2.7/email/test/test_email.py --- a/lib-python/2.7/email/test/test_email.py +++ b/lib-python/2.7/email/test/test_email.py @@ -179,6 +179,17 @@ self.assertRaises(Errors.HeaderParseError, msg.set_boundary, 'BOUNDARY') + def test_make_boundary(self): + msg = MIMEMultipart('form-data') + # Note that when the boundary gets created is an implementation + # detail and might change. + self.assertEqual(msg.items()[0][1], 'multipart/form-data') + # Trigger creation of boundary + msg.as_string() + self.assertEqual(msg.items()[0][1][:33], + 'multipart/form-data; boundary="==') + # XXX: there ought to be tests of the uniqueness of the boundary, too. + def test_message_rfc822_only(self): # Issue 7970: message/rfc822 not in multipart parsed by # HeaderParser caused an exception when flattened. @@ -542,6 +553,17 @@ msg.set_charset(u'us-ascii') self.assertEqual('us-ascii', msg.get_content_charset()) + # Issue 5871: reject an attempt to embed a header inside a header value + # (header injection attack). 
+ def test_embeded_header_via_Header_rejected(self): + msg = Message() + msg['Dummy'] = Header('dummy\nX-Injected-Header: test') + self.assertRaises(Errors.HeaderParseError, msg.as_string) + + def test_embeded_header_via_string_rejected(self): + msg = Message() + msg['Dummy'] = 'dummy\nX-Injected-Header: test' + self.assertRaises(Errors.HeaderParseError, msg.as_string) # Test the email.Encoders module @@ -3113,6 +3135,28 @@ s = 'Subject: =?EUC-KR?B?CSixpLDtKSC/7Liuvsax4iC6uLmwMcijIKHaILzSwd/H0SC8+LCjwLsgv7W/+Mj3I ?=' raises(Errors.HeaderParseError, decode_header, s) + # Issue 1078919 + def test_ascii_add_header(self): + msg = Message() + msg.add_header('Content-Disposition', 'attachment', + filename='bud.gif') + self.assertEqual('attachment; filename="bud.gif"', + msg['Content-Disposition']) + + def test_nonascii_add_header_via_triple(self): + msg = Message() + msg.add_header('Content-Disposition', 'attachment', + filename=('iso-8859-1', '', 'Fu\xdfballer.ppt')) + self.assertEqual( + 'attachment; filename*="iso-8859-1\'\'Fu%DFballer.ppt"', + msg['Content-Disposition']) + + def test_encode_unaliased_charset(self): + # Issue 1379416: when the charset has no output conversion, + # output was accidentally getting coerced to unicode. + res = Header('abc','iso-8859-2').encode() + self.assertEqual(res, '=?iso-8859-2?q?abc?=') + self.assertIsInstance(res, str) # Test RFC 2231 header parameters (en/de)coding diff --git a/lib-python/2.7/ftplib.py b/lib-python/2.7/ftplib.py --- a/lib-python/2.7/ftplib.py +++ b/lib-python/2.7/ftplib.py @@ -599,7 +599,7 @@ Usage example: >>> from ftplib import FTP_TLS >>> ftps = FTP_TLS('ftp.python.org') - >>> ftps.login() # login anonimously previously securing control channel + >>> ftps.login() # login anonymously previously securing control channel '230 Guest login ok, access restrictions apply.' 
>>> ftps.prot_p() # switch to secure data connection '200 Protection level set to P' diff --git a/lib-python/2.7/functools.py b/lib-python/2.7/functools.py --- a/lib-python/2.7/functools.py +++ b/lib-python/2.7/functools.py @@ -53,17 +53,17 @@ def total_ordering(cls): """Class decorator that fills in missing ordering methods""" convert = { - '__lt__': [('__gt__', lambda self, other: other < self), - ('__le__', lambda self, other: not other < self), + '__lt__': [('__gt__', lambda self, other: not (self < other or self == other)), + ('__le__', lambda self, other: self < other or self == other), ('__ge__', lambda self, other: not self < other)], - '__le__': [('__ge__', lambda self, other: other <= self), - ('__lt__', lambda self, other: not other <= self), + '__le__': [('__ge__', lambda self, other: not self <= other or self == other), + ('__lt__', lambda self, other: self <= other and not self == other), ('__gt__', lambda self, other: not self <= other)], - '__gt__': [('__lt__', lambda self, other: other > self), - ('__ge__', lambda self, other: not other > self), + '__gt__': [('__lt__', lambda self, other: not (self > other or self == other)), + ('__ge__', lambda self, other: self > other or self == other), ('__le__', lambda self, other: not self > other)], - '__ge__': [('__le__', lambda self, other: other >= self), - ('__gt__', lambda self, other: not other >= self), + '__ge__': [('__le__', lambda self, other: (not self >= other) or self == other), + ('__gt__', lambda self, other: self >= other and not self == other), ('__lt__', lambda self, other: not self >= other)] } roots = set(dir(cls)) & set(convert) @@ -80,6 +80,7 @@ def cmp_to_key(mycmp): """Convert a cmp= function into a key= function""" class K(object): + __slots__ = ['obj'] def __init__(self, obj, *args): self.obj = obj def __lt__(self, other): diff --git a/lib-python/2.7/getpass.py b/lib-python/2.7/getpass.py --- a/lib-python/2.7/getpass.py +++ b/lib-python/2.7/getpass.py @@ -62,7 +62,7 @@ try: old = termios.tcgetattr(fd) # a copy to save new = old[:] - new[3] &= ~(termios.ECHO|termios.ISIG) # 3 == 'lflags' + new[3] &= ~termios.ECHO # 3 == 'lflags' tcsetattr_flags = termios.TCSAFLUSH if hasattr(termios, 'TCSASOFT'): tcsetattr_flags |= termios.TCSASOFT diff --git a/lib-python/2.7/gettext.py b/lib-python/2.7/gettext.py --- a/lib-python/2.7/gettext.py +++ b/lib-python/2.7/gettext.py @@ -316,7 +316,7 @@ # Note: we unconditionally convert both msgids and msgstrs to # Unicode using the character encoding specified in the charset # parameter of the Content-Type header. The gettext documentation - # strongly encourages msgids to be us-ascii, but some appliations + # strongly encourages msgids to be us-ascii, but some applications # require alternative encodings (e.g. Zope's ZCML and ZPT). 
For # traditional gettext applications, the msgid conversion will # cause no problems since us-ascii should always be a subset of diff --git a/lib-python/2.7/hashlib.py b/lib-python/2.7/hashlib.py --- a/lib-python/2.7/hashlib.py +++ b/lib-python/2.7/hashlib.py @@ -64,26 +64,29 @@ def __get_builtin_constructor(name): - if name in ('SHA1', 'sha1'): - import _sha - return _sha.new - elif name in ('MD5', 'md5'): - import _md5 - return _md5.new - elif name in ('SHA256', 'sha256', 'SHA224', 'sha224'): - import _sha256 - bs = name[3:] - if bs == '256': - return _sha256.sha256 - elif bs == '224': - return _sha256.sha224 - elif name in ('SHA512', 'sha512', 'SHA384', 'sha384'): - import _sha512 - bs = name[3:] - if bs == '512': - return _sha512.sha512 - elif bs == '384': - return _sha512.sha384 + try: + if name in ('SHA1', 'sha1'): + import _sha + return _sha.new + elif name in ('MD5', 'md5'): + import _md5 + return _md5.new + elif name in ('SHA256', 'sha256', 'SHA224', 'sha224'): + import _sha256 + bs = name[3:] + if bs == '256': + return _sha256.sha256 + elif bs == '224': + return _sha256.sha224 + elif name in ('SHA512', 'sha512', 'SHA384', 'sha384'): + import _sha512 + bs = name[3:] + if bs == '512': + return _sha512.sha512 + elif bs == '384': + return _sha512.sha384 + except ImportError: + pass # no extension module, this hash is unsupported. raise ValueError('unsupported hash type %s' % name) diff --git a/lib-python/2.7/heapq.py b/lib-python/2.7/heapq.py --- a/lib-python/2.7/heapq.py +++ b/lib-python/2.7/heapq.py @@ -133,6 +133,11 @@ from operator import itemgetter import bisect +def cmp_lt(x, y): + # Use __lt__ if available; otherwise, try __le__. + # In Py3.x, only __lt__ will be called. + return (x < y) if hasattr(x, '__lt__') else (not y <= x) + def heappush(heap, item): """Push item onto heap, maintaining the heap invariant.""" heap.append(item) @@ -167,13 +172,13 @@ def heappushpop(heap, item): """Fast version of a heappush followed by a heappop.""" - if heap and heap[0] < item: + if heap and cmp_lt(heap[0], item): item, heap[0] = heap[0], item _siftup(heap, 0) return item def heapify(x): - """Transform list into a heap, in-place, in O(len(heap)) time.""" + """Transform list into a heap, in-place, in O(len(x)) time.""" n = len(x) # Transform bottom-up. The largest index there's any point to looking at # is the largest with a child index in-range, so must have 2*i + 1 < n, @@ -215,11 +220,10 @@ pop = result.pop los = result[-1] # los --> Largest of the nsmallest for elem in it: - if los <= elem: - continue - insort(result, elem) - pop() - los = result[-1] + if cmp_lt(elem, los): + insort(result, elem) + pop() + los = result[-1] return result # An alternative approach manifests the whole iterable in memory but # saves comparisons by heapifying all at once. Also, saves time @@ -240,7 +244,7 @@ while pos > startpos: parentpos = (pos - 1) >> 1 parent = heap[parentpos] - if newitem < parent: + if cmp_lt(newitem, parent): heap[pos] = parent pos = parentpos continue @@ -295,7 +299,7 @@ while childpos < endpos: # Set childpos to index of smaller child. rightpos = childpos + 1 - if rightpos < endpos and not heap[childpos] < heap[rightpos]: + if rightpos < endpos and not cmp_lt(heap[childpos], heap[rightpos]): childpos = rightpos # Move the smaller child up. 
heap[pos] = heap[childpos] @@ -364,7 +368,7 @@ return [min(chain(head, it))] return [min(chain(head, it), key=key)] - # When n>=size, it's faster to use sort() + # When n>=size, it's faster to use sorted() try: size = len(iterable) except (TypeError, AttributeError): @@ -402,7 +406,7 @@ return [max(chain(head, it))] return [max(chain(head, it), key=key)] - # When n>=size, it's faster to use sort() + # When n>=size, it's faster to use sorted() try: size = len(iterable) except (TypeError, AttributeError): diff --git a/lib-python/2.7/httplib.py b/lib-python/2.7/httplib.py --- a/lib-python/2.7/httplib.py +++ b/lib-python/2.7/httplib.py @@ -212,6 +212,9 @@ # maximal amount of data to read at one time in _safe_read MAXAMOUNT = 1048576 +# maximal line length when calling readline(). +_MAXLINE = 65536 + class HTTPMessage(mimetools.Message): def addheader(self, key, value): @@ -274,7 +277,9 @@ except IOError: startofline = tell = None self.seekable = 0 - line = self.fp.readline() + line = self.fp.readline(_MAXLINE + 1) + if len(line) > _MAXLINE: + raise LineTooLong("header line") if not line: self.status = 'EOF in headers' break @@ -404,7 +409,10 @@ break # skip the header from the 100 response while True: - skip = self.fp.readline().strip() + skip = self.fp.readline(_MAXLINE + 1) + if len(skip) > _MAXLINE: + raise LineTooLong("header line") + skip = skip.strip() if not skip: break if self.debuglevel > 0: @@ -563,7 +571,9 @@ value = [] while True: if chunk_left is None: - line = self.fp.readline() + line = self.fp.readline(_MAXLINE + 1) + if len(line) > _MAXLINE: + raise LineTooLong("chunk size") i = line.find(';') if i >= 0: line = line[:i] # strip chunk-extensions @@ -598,7 +608,9 @@ # read and discard trailer up to the CRLF terminator ### note: we shouldn't have any trailers! while True: - line = self.fp.readline() + line = self.fp.readline(_MAXLINE + 1) + if len(line) > _MAXLINE: + raise LineTooLong("trailer line") if not line: # a vanishingly small number of sites EOF without # sending the trailer @@ -730,7 +742,9 @@ raise socket.error("Tunnel connection failed: %d %s" % (code, message.strip())) while True: - line = response.fp.readline() + line = response.fp.readline(_MAXLINE + 1) + if len(line) > _MAXLINE: + raise LineTooLong("header line") if line == '\r\n': break @@ -790,7 +804,7 @@ del self._buffer[:] # If msg and message_body are sent in a single send() call, # it will avoid performance problems caused by the interaction - # between delayed ack and the Nagle algorithim. + # between delayed ack and the Nagle algorithm. 
if isinstance(message_body, str): msg += message_body message_body = None @@ -1233,6 +1247,11 @@ self.args = line, self.line = line +class LineTooLong(HTTPException): + def __init__(self, line_type): + HTTPException.__init__(self, "got more than %d bytes when reading %s" + % (_MAXLINE, line_type)) + # for backwards compatibility error = HTTPException diff --git a/lib-python/2.7/idlelib/Bindings.py b/lib-python/2.7/idlelib/Bindings.py --- a/lib-python/2.7/idlelib/Bindings.py +++ b/lib-python/2.7/idlelib/Bindings.py @@ -98,14 +98,6 @@ # menu del menudefs[-1][1][0:2] - menudefs.insert(0, - ('application', [ - ('About IDLE', '<>'), - None, - ('_Preferences....', '<>'), - ])) - - default_keydefs = idleConf.GetCurrentKeySet() del sys diff --git a/lib-python/2.7/idlelib/EditorWindow.py b/lib-python/2.7/idlelib/EditorWindow.py --- a/lib-python/2.7/idlelib/EditorWindow.py +++ b/lib-python/2.7/idlelib/EditorWindow.py @@ -48,6 +48,21 @@ path = module.__path__ except AttributeError: raise ImportError, 'No source for module ' + module.__name__ + if descr[2] != imp.PY_SOURCE: + # If all of the above fails and didn't raise an exception,fallback + # to a straight import which can find __init__.py in a package. + m = __import__(fullname) + try: + filename = m.__file__ + except AttributeError: + pass + else: + file = None + base, ext = os.path.splitext(filename) + if ext == '.pyc': + ext = '.py' + filename = base + ext + descr = filename, None, imp.PY_SOURCE return file, filename, descr class EditorWindow(object): @@ -102,8 +117,8 @@ self.top = top = WindowList.ListedToplevel(root, menu=self.menubar) if flist: self.tkinter_vars = flist.vars - #self.top.instance_dict makes flist.inversedict avalable to - #configDialog.py so it can access all EditorWindow instaces + #self.top.instance_dict makes flist.inversedict available to + #configDialog.py so it can access all EditorWindow instances self.top.instance_dict = flist.inversedict else: self.tkinter_vars = {} # keys: Tkinter event names @@ -136,6 +151,14 @@ if macosxSupport.runningAsOSXApp(): # Command-W on editorwindows doesn't work without this. text.bind('<>', self.close_event) + # Some OS X systems have only one mouse button, + # so use control-click for pulldown menus there. + # (Note, AquaTk defines <2> as the right button if + # present and the Tk Text widget already binds <2>.) + text.bind("",self.right_menu_event) + else: + # Elsewhere, use right-click for pulldown menus. + text.bind("<3>",self.right_menu_event) text.bind("<>", self.cut) text.bind("<>", self.copy) text.bind("<>", self.paste) @@ -154,7 +177,6 @@ text.bind("<>", self.find_selection_event) text.bind("<>", self.replace_event) text.bind("<>", self.goto_line_event) - text.bind("<3>", self.right_menu_event) text.bind("<>",self.smart_backspace_event) text.bind("<>",self.newline_and_indent_event) text.bind("<>",self.smart_indent_event) @@ -300,13 +322,13 @@ return "break" def home_callback(self, event): - if (event.state & 12) != 0 and event.keysym == "Home": - # state&1==shift, state&4==control, state&8==alt - return # ; fall back to class binding - + if (event.state & 4) != 0 and event.keysym == "Home": + # state&4==Control. If , use the Tk binding. 
+ return if self.text.index("iomark") and \ self.text.compare("iomark", "<=", "insert lineend") and \ self.text.compare("insert linestart", "<=", "iomark"): + # In Shell on input line, go to just after prompt insertpt = int(self.text.index("iomark").split(".")[1]) else: line = self.text.get("insert linestart", "insert lineend") @@ -315,30 +337,27 @@ break else: insertpt=len(line) - lineat = int(self.text.index("insert").split('.')[1]) - if insertpt == lineat: insertpt = 0 - dest = "insert linestart+"+str(insertpt)+"c" - if (event.state&1) == 0: - # shift not pressed + # shift was not pressed self.text.tag_remove("sel", "1.0", "end") else: if not self.text.index("sel.first"): - self.text.mark_set("anchor","insert") - + self.text.mark_set("my_anchor", "insert") # there was no previous selection + else: + if self.text.compare(self.text.index("sel.first"), "<", self.text.index("insert")): + self.text.mark_set("my_anchor", "sel.first") # extend back + else: + self.text.mark_set("my_anchor", "sel.last") # extend forward first = self.text.index(dest) - last = self.text.index("anchor") - + last = self.text.index("my_anchor") if self.text.compare(first,">",last): first,last = last,first - self.text.tag_remove("sel", "1.0", "end") self.text.tag_add("sel", first, last) - self.text.mark_set("insert", dest) self.text.see("insert") return "break" @@ -385,7 +404,7 @@ menudict[name] = menu = Menu(mbar, name=name) mbar.add_cascade(label=label, menu=menu, underline=underline) - if macosxSupport.runningAsOSXApp(): + if macosxSupport.isCarbonAquaTk(self.root): # Insert the application menu menudict['application'] = menu = Menu(mbar, name='apple') mbar.add_cascade(label='IDLE', menu=menu) @@ -445,7 +464,11 @@ def python_docs(self, event=None): if sys.platform[:3] == 'win': - os.startfile(self.help_url) + try: + os.startfile(self.help_url) + except WindowsError as why: + tkMessageBox.showerror(title='Document Start Failure', + message=str(why), parent=self.text) else: webbrowser.open(self.help_url) return "break" @@ -740,9 +763,13 @@ "Create a callback with the helpfile value frozen at definition time" def display_extra_help(helpfile=helpfile): if not helpfile.startswith(('www', 'http')): - url = os.path.normpath(helpfile) + helpfile = os.path.normpath(helpfile) if sys.platform[:3] == 'win': - os.startfile(helpfile) + try: + os.startfile(helpfile) + except WindowsError as why: + tkMessageBox.showerror(title='Document Start Failure', + message=str(why), parent=self.text) else: webbrowser.open(helpfile) return display_extra_help @@ -1526,7 +1553,12 @@ def get_accelerator(keydefs, eventname): keylist = keydefs.get(eventname) - if not keylist: + # issue10940: temporary workaround to prevent hang with OS X Cocoa Tk 8.5 + # if not keylist: + if (not keylist) or (macosxSupport.runningAsOSXApp() and eventname in { + "<>", + "<>", + "<>"}): return "" s = keylist[0] s = re.sub(r"-[a-z]\b", lambda m: m.group().upper(), s) diff --git a/lib-python/2.7/idlelib/FileList.py b/lib-python/2.7/idlelib/FileList.py --- a/lib-python/2.7/idlelib/FileList.py +++ b/lib-python/2.7/idlelib/FileList.py @@ -43,7 +43,7 @@ def new(self, filename=None): return self.EditorWindow(self, filename) - def close_all_callback(self, event): + def close_all_callback(self, *args, **kwds): for edit in self.inversedict.keys(): reply = edit.close() if reply == "cancel": diff --git a/lib-python/2.7/idlelib/FormatParagraph.py b/lib-python/2.7/idlelib/FormatParagraph.py --- a/lib-python/2.7/idlelib/FormatParagraph.py +++ 
b/lib-python/2.7/idlelib/FormatParagraph.py @@ -54,7 +54,7 @@ # If the block ends in a \n, we dont want the comment # prefix inserted after it. (Im not sure it makes sense to # reformat a comment block that isnt made of complete - # lines, but whatever!) Can't think of a clean soltution, + # lines, but whatever!) Can't think of a clean solution, # so we hack away block_suffix = "" if not newdata[-1]: diff --git a/lib-python/2.7/idlelib/HISTORY.txt b/lib-python/2.7/idlelib/HISTORY.txt --- a/lib-python/2.7/idlelib/HISTORY.txt +++ b/lib-python/2.7/idlelib/HISTORY.txt @@ -13,7 +13,7 @@ - New tarball released as a result of the 'revitalisation' of the IDLEfork project. -- This release requires python 2.1 or better. Compatability with earlier +- This release requires python 2.1 or better. Compatibility with earlier versions of python (especially ancient ones like 1.5x) is no longer a priority in IDLEfork development. diff --git a/lib-python/2.7/idlelib/IOBinding.py b/lib-python/2.7/idlelib/IOBinding.py --- a/lib-python/2.7/idlelib/IOBinding.py +++ b/lib-python/2.7/idlelib/IOBinding.py @@ -320,17 +320,20 @@ return "yes" message = "Do you want to save %s before closing?" % ( self.filename or "this untitled document") - m = tkMessageBox.Message( - title="Save On Close", - message=message, - icon=tkMessageBox.QUESTION, - type=tkMessageBox.YESNOCANCEL, - master=self.text) - reply = m.show() - if reply == "yes": + confirm = tkMessageBox.askyesnocancel( + title="Save On Close", + message=message, + default=tkMessageBox.YES, + master=self.text) + if confirm: + reply = "yes" self.save(None) if not self.get_saved(): reply = "cancel" + elif confirm is None: + reply = "cancel" + else: + reply = "no" self.text.focus_set() return reply @@ -339,7 +342,7 @@ self.save_as(event) else: if self.writefile(self.filename): - self.set_saved(1) + self.set_saved(True) try: self.editwin.store_file_breaks() except AttributeError: # may be a PyShell @@ -465,15 +468,12 @@ self.text.insert("end-1c", "\n") def print_window(self, event): - m = tkMessageBox.Message( - title="Print", - message="Print to Default Printer", - icon=tkMessageBox.QUESTION, - type=tkMessageBox.OKCANCEL, - default=tkMessageBox.OK, - master=self.text) - reply = m.show() - if reply != tkMessageBox.OK: + confirm = tkMessageBox.askokcancel( + title="Print", + message="Print to Default Printer", + default=tkMessageBox.OK, + master=self.text) + if not confirm: self.text.focus_set() return "break" tempfilename = None @@ -488,8 +488,8 @@ if not self.writefile(tempfilename): os.unlink(tempfilename) return "break" - platform=os.name - printPlatform=1 + platform = os.name + printPlatform = True if platform == 'posix': #posix platform command = idleConf.GetOption('main','General', 'print-command-posix') @@ -497,7 +497,7 @@ elif platform == 'nt': #win32 platform command = idleConf.GetOption('main','General','print-command-win') else: #no printing for this platform - printPlatform=0 + printPlatform = False if printPlatform: #we can try to print for this platform command = command % filename pipe = os.popen(command, "r") @@ -511,7 +511,7 @@ output = "Printing command: %s\n" % repr(command) + output tkMessageBox.showerror("Print status", output, master=self.text) else: #no printing for this platform - message="Printing is not enabled for this platform: %s" % platform + message = "Printing is not enabled for this platform: %s" % platform tkMessageBox.showinfo("Print status", message, master=self.text) if tempfilename: os.unlink(tempfilename) diff --git 
a/lib-python/2.7/idlelib/NEWS.txt b/lib-python/2.7/idlelib/NEWS.txt --- a/lib-python/2.7/idlelib/NEWS.txt +++ b/lib-python/2.7/idlelib/NEWS.txt @@ -1,3 +1,18 @@ +What's New in IDLE 2.7.2? +======================= + +*Release date: 29-May-2011* + +- Issue #6378: Further adjust idle.bat to start associated Python + +- Issue #11896: Save on Close failed despite selecting "Yes" in dialog. + +- toggle failing on Tk 8.5, causing IDLE exits and strange selection + behavior. Issue 4676. Improve selection extension behaviour. + +- toggle non-functional when NumLock set on Windows. Issue 3851. + + What's New in IDLE 2.7? ======================= @@ -21,7 +36,7 @@ - Tk 8.5 Text widget requires 'wordprocessor' tabstyle attr to handle mixed space/tab properly. Issue 5129, patch by Guilherme Polo. - + - Issue #3549: On MacOS the preferences menu was not present diff --git a/lib-python/2.7/idlelib/PyShell.py b/lib-python/2.7/idlelib/PyShell.py --- a/lib-python/2.7/idlelib/PyShell.py +++ b/lib-python/2.7/idlelib/PyShell.py @@ -1432,6 +1432,13 @@ shell.interp.prepend_syspath(script) shell.interp.execfile(script) + # Check for problematic OS X Tk versions and print a warning message + # in the IDLE shell window; this is less intrusive than always opening + # a separate window. + tkversionwarning = macosxSupport.tkVersionWarning(root) + if tkversionwarning: + shell.interp.runcommand(''.join(("print('", tkversionwarning, "')"))) + root.mainloop() root.destroy() diff --git a/lib-python/2.7/idlelib/ScriptBinding.py b/lib-python/2.7/idlelib/ScriptBinding.py --- a/lib-python/2.7/idlelib/ScriptBinding.py +++ b/lib-python/2.7/idlelib/ScriptBinding.py @@ -26,6 +26,7 @@ from idlelib import PyShell from idlelib.configHandler import idleConf +from idlelib import macosxSupport IDENTCHARS = string.ascii_letters + string.digits + "_" @@ -53,6 +54,9 @@ self.flist = self.editwin.flist self.root = self.editwin.root + if macosxSupport.runningAsOSXApp(): + self.editwin.text_frame.bind('<>', self._run_module_event) + def check_module_event(self, event): filename = self.getfilename() if not filename: @@ -166,6 +170,19 @@ interp.runcode(code) return 'break' + if macosxSupport.runningAsOSXApp(): + # Tk-Cocoa in MacOSX is broken until at least + # Tk 8.5.9, and without this rather + # crude workaround IDLE would hang when a user + # tries to run a module using the keyboard shortcut + # (the menu item works fine). + _run_module_event = run_module_event + + def run_module_event(self, event): + self.editwin.text_frame.after(200, + lambda: self.editwin.text_frame.event_generate('<>')) + return 'break' + def getfilename(self): """Get source filename. If not saved, offer to save (or create) file @@ -184,9 +201,9 @@ if autosave and filename: self.editwin.io.save(None) else: - reply = self.ask_save_dialog() + confirm = self.ask_save_dialog() self.editwin.text.focus_set() - if reply == "ok": + if confirm: self.editwin.io.save(None) filename = self.editwin.io.filename else: @@ -195,13 +212,11 @@ def ask_save_dialog(self): msg = "Source Must Be Saved\n" + 5*' ' + "OK to Save?" - mb = tkMessageBox.Message(title="Save Before Run or Check", - message=msg, - icon=tkMessageBox.QUESTION, - type=tkMessageBox.OKCANCEL, - default=tkMessageBox.OK, - master=self.editwin.text) - return mb.show() + confirm = tkMessageBox.askokcancel(title="Save Before Run or Check", + message=msg, + default=tkMessageBox.OK, + master=self.editwin.text) + return confirm def errorbox(self, title, message): # XXX This should really be a function of EditorWindow... 
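The dialog changes above swap tkMessageBox.Message(...).show(), which returned the strings "yes"/"no"/"cancel", for the askyesnocancel/askokcancel helpers, which return True, False or None. A short sketch of the mapping the new maybesave() code performs (the title and message text are placeholders):

    import tkMessageBox

    def reply_from_confirm(confirm):
        # askyesnocancel(): Yes -> True, No -> False, Cancel or closing the dialog -> None
        if confirm:
            return "yes"
        elif confirm is None:
            return "cancel"
        else:
            return "no"

    confirm = tkMessageBox.askyesnocancel(title="Save On Close",
                                          message="Save before closing?",
                                          default=tkMessageBox.YES)
    print reply_from_confirm(confirm)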
diff --git a/lib-python/2.7/idlelib/config-keys.def b/lib-python/2.7/idlelib/config-keys.def --- a/lib-python/2.7/idlelib/config-keys.def +++ b/lib-python/2.7/idlelib/config-keys.def @@ -176,7 +176,7 @@ redo = close-window = restart-shell = -save-window-as-file = +save-window-as-file = close-all-windows = view-restart = tabify-region = @@ -208,7 +208,7 @@ open-module = find-selection = python-context-help = -save-copy-of-window-as-file = +save-copy-of-window-as-file = open-window-from-file = python-docs = diff --git a/lib-python/2.7/idlelib/extend.txt b/lib-python/2.7/idlelib/extend.txt --- a/lib-python/2.7/idlelib/extend.txt +++ b/lib-python/2.7/idlelib/extend.txt @@ -18,7 +18,7 @@ An IDLE extension class is instantiated with a single argument, `editwin', an EditorWindow instance. The extension cannot assume much -about this argument, but it is guarateed to have the following instance +about this argument, but it is guaranteed to have the following instance variables: text a Text instance (a widget) diff --git a/lib-python/2.7/idlelib/idle.bat b/lib-python/2.7/idlelib/idle.bat --- a/lib-python/2.7/idlelib/idle.bat +++ b/lib-python/2.7/idlelib/idle.bat @@ -1,4 +1,4 @@ @echo off rem Start IDLE using the appropriate Python interpreter set CURRDIR=%~dp0 -start "%CURRDIR%..\..\pythonw.exe" "%CURRDIR%idle.pyw" %1 %2 %3 %4 %5 %6 %7 %8 %9 +start "IDLE" "%CURRDIR%..\..\pythonw.exe" "%CURRDIR%idle.pyw" %1 %2 %3 %4 %5 %6 %7 %8 %9 diff --git a/lib-python/2.7/idlelib/idlever.py b/lib-python/2.7/idlelib/idlever.py --- a/lib-python/2.7/idlelib/idlever.py +++ b/lib-python/2.7/idlelib/idlever.py @@ -1,1 +1,1 @@ -IDLE_VERSION = "2.7.1" +IDLE_VERSION = "2.7.2" diff --git a/lib-python/2.7/idlelib/macosxSupport.py b/lib-python/2.7/idlelib/macosxSupport.py --- a/lib-python/2.7/idlelib/macosxSupport.py +++ b/lib-python/2.7/idlelib/macosxSupport.py @@ -4,6 +4,7 @@ """ import sys import Tkinter +from os import path _appbundle = None @@ -19,10 +20,41 @@ _appbundle = (sys.platform == 'darwin' and '.app' in sys.executable) return _appbundle +_carbonaquatk = None + +def isCarbonAquaTk(root): + """ + Returns True if IDLE is using a Carbon Aqua Tk (instead of the + newer Cocoa Aqua Tk). + """ + global _carbonaquatk + if _carbonaquatk is None: + _carbonaquatk = (runningAsOSXApp() and + 'aqua' in root.tk.call('tk', 'windowingsystem') and + 'AppKit' not in root.tk.call('winfo', 'server', '.')) + return _carbonaquatk + +def tkVersionWarning(root): + """ + Returns a string warning message if the Tk version in use appears to + be one known to cause problems with IDLE. The Apple Cocoa-based Tk 8.5 + that was shipped with Mac OS X 10.6. + """ + + if (runningAsOSXApp() and + ('AppKit' in root.tk.call('winfo', 'server', '.')) and + (root.tk.call('info', 'patchlevel') == '8.5.7') ): + return (r"WARNING: The version of Tcl/Tk (8.5.7) in use may" + r" be unstable.\n" + r"Visit http://www.python.org/download/mac/tcltk/" + r" for current information.") + else: + return False + def addOpenEventSupport(root, flist): """ - This ensures that the application will respont to open AppleEvents, which - makes is feaseable to use IDLE as the default application for python files. + This ensures that the application will respond to open AppleEvents, which + makes is feasible to use IDLE as the default application for python files. 
""" def doOpenFile(*args): for fn in args: @@ -79,9 +111,6 @@ WindowList.add_windows_to_menu(menu) WindowList.register_callback(postwindowsmenu) - menudict['application'] = menu = Menu(menubar, name='apple') - menubar.add_cascade(label='IDLE', menu=menu) - def about_dialog(event=None): from idlelib import aboutDialog aboutDialog.AboutDialog(root, 'About IDLE') @@ -91,41 +120,45 @@ root.instance_dict = flist.inversedict configDialog.ConfigDialog(root, 'Settings') + def help_dialog(event=None): + from idlelib import textView + fn = path.join(path.abspath(path.dirname(__file__)), 'help.txt') + textView.view_file(root, 'Help', fn) root.bind('<>', about_dialog) root.bind('<>', config_dialog) + root.createcommand('::tk::mac::ShowPreferences', config_dialog) if flist: root.bind('<>', flist.close_all_callback) + # The binding above doesn't reliably work on all versions of Tk + # on MacOSX. Adding command definition below does seem to do the + # right thing for now. + root.createcommand('exit', flist.close_all_callback) - ###check if Tk version >= 8.4.14; if so, use hard-coded showprefs binding - tkversion = root.tk.eval('info patchlevel') - # Note: we cannot check if the string tkversion >= '8.4.14', because - # the string '8.4.7' is greater than the string '8.4.14'. - if tuple(map(int, tkversion.split('.'))) >= (8, 4, 14): - Bindings.menudefs[0] = ('application', [ + if isCarbonAquaTk(root): + # for Carbon AquaTk, replace the default Tk apple menu + menudict['application'] = menu = Menu(menubar, name='apple') + menubar.add_cascade(label='IDLE', menu=menu) + Bindings.menudefs.insert(0, + ('application', [ ('About IDLE', '<>'), - None, - ]) - root.createcommand('::tk::mac::ShowPreferences', config_dialog) + None, + ])) + tkversion = root.tk.eval('info patchlevel') + if tuple(map(int, tkversion.split('.'))) < (8, 4, 14): + # for earlier AquaTk versions, supply a Preferences menu item + Bindings.menudefs[0][1].append( + ('_Preferences....', '<>'), + ) else: - for mname, entrylist in Bindings.menudefs: - menu = menudict.get(mname) - if not menu: - continue - else: - for entry in entrylist: - if not entry: - menu.add_separator() - else: - label, eventname = entry - underline, label = prepstr(label) - accelerator = get_accelerator(Bindings.default_keydefs, - eventname) - def command(text=root, eventname=eventname): - text.event_generate(eventname) - menu.add_command(label=label, underline=underline, - command=command, accelerator=accelerator) + # assume Cocoa AquaTk + # replace default About dialog with About IDLE one + root.createcommand('tkAboutDialog', about_dialog) + # replace default "Help" item in Help menu + root.createcommand('::tk::mac::ShowHelp', help_dialog) + # remove redundant "IDLE Help" from menu + del Bindings.menudefs[-1][1][0] def setupApp(root, flist): """ diff --git a/lib-python/2.7/imaplib.py b/lib-python/2.7/imaplib.py --- a/lib-python/2.7/imaplib.py +++ b/lib-python/2.7/imaplib.py @@ -1158,28 +1158,17 @@ self.port = port self.sock = socket.create_connection((host, port)) self.sslobj = ssl.wrap_socket(self.sock, self.keyfile, self.certfile) + self.file = self.sslobj.makefile('rb') def read(self, size): """Read 'size' bytes from remote.""" - # sslobj.read() sometimes returns < size bytes - chunks = [] - read = 0 - while read < size: - data = self.sslobj.read(min(size-read, 16384)) - read += len(data) - chunks.append(data) - - return ''.join(chunks) + return self.file.read(size) def readline(self): """Read line from remote.""" - line = [] - while 1: - char = self.sslobj.read(1) - 
line.append(char) - if char in ("\n", ""): return ''.join(line) + return self.file.readline() def send(self, data): @@ -1195,6 +1184,7 @@ def shutdown(self): """Close I/O established in "open".""" + self.file.close() self.sock.close() @@ -1321,9 +1311,10 @@ 'Jul': 7, 'Aug': 8, 'Sep': 9, 'Oct': 10, 'Nov': 11, 'Dec': 12} def Internaldate2tuple(resp): - """Convert IMAP4 INTERNALDATE to UT. + """Parse an IMAP4 INTERNALDATE string. - Returns Python time module tuple. + Return corresponding local time. The return value is a + time.struct_time instance or None if the string has wrong format. """ mo = InternalDate.match(resp) @@ -1390,9 +1381,14 @@ def Time2Internaldate(date_time): - """Convert 'date_time' to IMAP4 INTERNALDATE representation. + """Convert date_time to IMAP4 INTERNALDATE representation. - Return string in form: '"DD-Mmm-YYYY HH:MM:SS +HHMM"' + Return string in form: '"DD-Mmm-YYYY HH:MM:SS +HHMM"'. The + date_time argument can be a number (int or float) representing + seconds since epoch (as returned by time.time()), a 9-tuple + representing local time (as returned by time.localtime()), or a + double-quoted string. In the last case, it is assumed to already + be in the correct format. """ if isinstance(date_time, (int, float)): diff --git a/lib-python/2.7/inspect.py b/lib-python/2.7/inspect.py --- a/lib-python/2.7/inspect.py +++ b/lib-python/2.7/inspect.py @@ -943,8 +943,14 @@ f_name, 'at most' if defaults else 'exactly', num_args, 'arguments' if num_args > 1 else 'argument', num_total)) elif num_args == 0 and num_total: - raise TypeError('%s() takes no arguments (%d given)' % - (f_name, num_total)) + if varkw: + if num_pos: + # XXX: We should use num_pos, but Python also uses num_total: + raise TypeError('%s() takes exactly 0 arguments ' + '(%d given)' % (f_name, num_total)) + else: + raise TypeError('%s() takes no arguments (%d given)' % + (f_name, num_total)) for arg in args: if isinstance(arg, str) and arg in named: if is_assigned(arg): diff --git a/lib-python/2.7/json/decoder.py b/lib-python/2.7/json/decoder.py --- a/lib-python/2.7/json/decoder.py +++ b/lib-python/2.7/json/decoder.py @@ -4,7 +4,7 @@ import sys import struct -from json.scanner import make_scanner +from json import scanner try: from _json import scanstring as c_scanstring except ImportError: @@ -161,6 +161,12 @@ nextchar = s[end:end + 1] # Trivial empty object if nextchar == '}': + if object_pairs_hook is not None: + result = object_pairs_hook(pairs) + return result, end + pairs = {} + if object_hook is not None: + pairs = object_hook(pairs) return pairs, end + 1 elif nextchar != '"': raise ValueError(errmsg("Expecting property name", s, end)) @@ -350,7 +356,7 @@ self.parse_object = JSONObject self.parse_array = JSONArray self.parse_string = scanstring - self.scan_once = make_scanner(self) + self.scan_once = scanner.make_scanner(self) def decode(self, s, _w=WHITESPACE.match): """Return the Python representation of ``s`` (a ``str`` or ``unicode`` diff --git a/lib-python/2.7/json/encoder.py b/lib-python/2.7/json/encoder.py --- a/lib-python/2.7/json/encoder.py +++ b/lib-python/2.7/json/encoder.py @@ -251,7 +251,7 @@ if (_one_shot and c_make_encoder is not None - and not self.indent and not self.sort_keys): + and self.indent is None and not self.sort_keys): _iterencode = c_make_encoder( markers, self.default, _encoder, self.indent, self.key_separator, self.item_separator, self.sort_keys, diff --git a/lib-python/2.7/json/tests/__init__.py b/lib-python/2.7/json/tests/__init__.py --- 
a/lib-python/2.7/json/tests/__init__.py +++ b/lib-python/2.7/json/tests/__init__.py @@ -1,7 +1,46 @@ import os import sys +import json +import doctest import unittest -import doctest + +from test import test_support + +# import json with and without accelerations +cjson = test_support.import_fresh_module('json', fresh=['_json']) +pyjson = test_support.import_fresh_module('json', blocked=['_json']) + +# create two base classes that will be used by the other tests +class PyTest(unittest.TestCase): + json = pyjson + loads = staticmethod(pyjson.loads) + dumps = staticmethod(pyjson.dumps) + + at unittest.skipUnless(cjson, 'requires _json') +class CTest(unittest.TestCase): + if cjson is not None: + json = cjson + loads = staticmethod(cjson.loads) + dumps = staticmethod(cjson.dumps) + +# test PyTest and CTest checking if the functions come from the right module +class TestPyTest(PyTest): + def test_pyjson(self): + self.assertEqual(self.json.scanner.make_scanner.__module__, + 'json.scanner') + self.assertEqual(self.json.decoder.scanstring.__module__, + 'json.decoder') + self.assertEqual(self.json.encoder.encode_basestring_ascii.__module__, + 'json.encoder') + +class TestCTest(CTest): + def test_cjson(self): + self.assertEqual(self.json.scanner.make_scanner.__module__, '_json') + self.assertEqual(self.json.decoder.scanstring.__module__, '_json') + self.assertEqual(self.json.encoder.c_make_encoder.__module__, '_json') + self.assertEqual(self.json.encoder.encode_basestring_ascii.__module__, + '_json') + here = os.path.dirname(__file__) @@ -17,12 +56,11 @@ return suite def additional_tests(): - import json - import json.encoder - import json.decoder suite = unittest.TestSuite() for mod in (json, json.encoder, json.decoder): suite.addTest(doctest.DocTestSuite(mod)) + suite.addTest(TestPyTest('test_pyjson')) + suite.addTest(TestCTest('test_cjson')) return suite def main(): diff --git a/lib-python/2.7/json/tests/test_check_circular.py b/lib-python/2.7/json/tests/test_check_circular.py --- a/lib-python/2.7/json/tests/test_check_circular.py +++ b/lib-python/2.7/json/tests/test_check_circular.py @@ -1,30 +1,34 @@ -from unittest import TestCase -import json +from json.tests import PyTest, CTest + def default_iterable(obj): return list(obj) -class TestCheckCircular(TestCase): +class TestCheckCircular(object): def test_circular_dict(self): dct = {} dct['a'] = dct - self.assertRaises(ValueError, json.dumps, dct) + self.assertRaises(ValueError, self.dumps, dct) def test_circular_list(self): lst = [] lst.append(lst) - self.assertRaises(ValueError, json.dumps, lst) + self.assertRaises(ValueError, self.dumps, lst) def test_circular_composite(self): dct2 = {} dct2['a'] = [] dct2['a'].append(dct2) - self.assertRaises(ValueError, json.dumps, dct2) + self.assertRaises(ValueError, self.dumps, dct2) def test_circular_default(self): - json.dumps([set()], default=default_iterable) - self.assertRaises(TypeError, json.dumps, [set()]) + self.dumps([set()], default=default_iterable) + self.assertRaises(TypeError, self.dumps, [set()]) def test_circular_off_default(self): - json.dumps([set()], default=default_iterable, check_circular=False) - self.assertRaises(TypeError, json.dumps, [set()], check_circular=False) + self.dumps([set()], default=default_iterable, check_circular=False) + self.assertRaises(TypeError, self.dumps, [set()], check_circular=False) + + +class TestPyCheckCircular(TestCheckCircular, PyTest): pass +class TestCCheckCircular(TestCheckCircular, CTest): pass diff --git a/lib-python/2.7/json/tests/test_decode.py 
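The json test scaffolding above hinges on test_support.import_fresh_module(), which imports json twice -- once with the _json accelerator forced fresh and once with it blocked -- so a single test body can run against both implementations through self.loads/self.dumps. A condensed sketch of the pattern; TestFloat below is a stand-in for any of the refactored test classes, not the real one:

    import unittest
    from test import test_support

    cjson = test_support.import_fresh_module('json', fresh=['_json'])     # C speedups
    pyjson = test_support.import_fresh_module('json', blocked=['_json'])  # pure Python

    class PyTest(unittest.TestCase):
        loads = staticmethod(pyjson.loads)
        dumps = staticmethod(pyjson.dumps)

    class CTest(unittest.TestCase):
        # cjson can be None when _json is unavailable; the real scaffolding
        # guards this class with @unittest.skipUnless(cjson, 'requires _json').
        loads = staticmethod(cjson.loads)
        dumps = staticmethod(cjson.dumps)

    class TestFloat(object):            # written once against self.loads/self.dumps
        def test_roundtrip(self):
            self.assertEqual(self.loads(self.dumps(1.5)), 1.5)

    class TestPyFloat(TestFloat, PyTest): pass   # exercises the pure-Python code
    class TestCFloat(TestFloat, CTest): pass     # exercises _json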
b/lib-python/2.7/json/tests/test_decode.py --- a/lib-python/2.7/json/tests/test_decode.py +++ b/lib-python/2.7/json/tests/test_decode.py @@ -1,18 +1,17 @@ import decimal -from unittest import TestCase from StringIO import StringIO +from collections import OrderedDict +from json.tests import PyTest, CTest -import json -from collections import OrderedDict -class TestDecode(TestCase): +class TestDecode(object): def test_decimal(self): - rval = json.loads('1.1', parse_float=decimal.Decimal) + rval = self.loads('1.1', parse_float=decimal.Decimal) self.assertTrue(isinstance(rval, decimal.Decimal)) self.assertEqual(rval, decimal.Decimal('1.1')) def test_float(self): - rval = json.loads('1', parse_int=float) + rval = self.loads('1', parse_int=float) self.assertTrue(isinstance(rval, float)) self.assertEqual(rval, 1.0) @@ -20,22 +19,32 @@ # Several optimizations were made that skip over calls to # the whitespace regex, so this test is designed to try and # exercise the uncommon cases. The array cases are already covered. - rval = json.loads('{ "key" : "value" , "k":"v" }') + rval = self.loads('{ "key" : "value" , "k":"v" }') self.assertEqual(rval, {"key":"value", "k":"v"}) + def test_empty_objects(self): + self.assertEqual(self.loads('{}'), {}) + self.assertEqual(self.loads('[]'), []) + self.assertEqual(self.loads('""'), u"") + self.assertIsInstance(self.loads('""'), unicode) + def test_object_pairs_hook(self): s = '{"xkd":1, "kcw":2, "art":3, "hxm":4, "qrt":5, "pad":6, "hoy":7}' p = [("xkd", 1), ("kcw", 2), ("art", 3), ("hxm", 4), ("qrt", 5), ("pad", 6), ("hoy", 7)] - self.assertEqual(json.loads(s), eval(s)) - self.assertEqual(json.loads(s, object_pairs_hook=lambda x: x), p) - self.assertEqual(json.load(StringIO(s), - object_pairs_hook=lambda x: x), p) - od = json.loads(s, object_pairs_hook=OrderedDict) + self.assertEqual(self.loads(s), eval(s)) + self.assertEqual(self.loads(s, object_pairs_hook=lambda x: x), p) + self.assertEqual(self.json.load(StringIO(s), + object_pairs_hook=lambda x: x), p) + od = self.loads(s, object_pairs_hook=OrderedDict) self.assertEqual(od, OrderedDict(p)) self.assertEqual(type(od), OrderedDict) # the object_pairs_hook takes priority over the object_hook - self.assertEqual(json.loads(s, + self.assertEqual(self.loads(s, object_pairs_hook=OrderedDict, object_hook=lambda x: None), OrderedDict(p)) + + +class TestPyDecode(TestDecode, PyTest): pass +class TestCDecode(TestDecode, CTest): pass diff --git a/lib-python/2.7/json/tests/test_default.py b/lib-python/2.7/json/tests/test_default.py --- a/lib-python/2.7/json/tests/test_default.py +++ b/lib-python/2.7/json/tests/test_default.py @@ -1,9 +1,12 @@ -from unittest import TestCase +from json.tests import PyTest, CTest -import json -class TestDefault(TestCase): +class TestDefault(object): def test_default(self): self.assertEqual( - json.dumps(type, default=repr), - json.dumps(repr(type))) + self.dumps(type, default=repr), + self.dumps(repr(type))) + + +class TestPyDefault(TestDefault, PyTest): pass +class TestCDefault(TestDefault, CTest): pass diff --git a/lib-python/2.7/json/tests/test_dump.py b/lib-python/2.7/json/tests/test_dump.py --- a/lib-python/2.7/json/tests/test_dump.py +++ b/lib-python/2.7/json/tests/test_dump.py @@ -1,21 +1,23 @@ -from unittest import TestCase from cStringIO import StringIO +from json.tests import PyTest, CTest -import json -class TestDump(TestCase): +class TestDump(object): def test_dump(self): sio = StringIO() - json.dump({}, sio) + self.json.dump({}, sio) self.assertEqual(sio.getvalue(), '{}') def 
test_dumps(self): - self.assertEqual(json.dumps({}), '{}') + self.assertEqual(self.dumps({}), '{}') def test_encode_truefalse(self): - self.assertEqual(json.dumps( + self.assertEqual(self.dumps( {True: False, False: True}, sort_keys=True), '{"false": true, "true": false}') - self.assertEqual(json.dumps( + self.assertEqual(self.dumps( {2: 3.0, 4.0: 5L, False: 1, 6L: True}, sort_keys=True), '{"false": 1, "2": 3.0, "4.0": 5, "6": true}') + +class TestPyDump(TestDump, PyTest): pass +class TestCDump(TestDump, CTest): pass diff --git a/lib-python/2.7/json/tests/test_encode_basestring_ascii.py b/lib-python/2.7/json/tests/test_encode_basestring_ascii.py --- a/lib-python/2.7/json/tests/test_encode_basestring_ascii.py +++ b/lib-python/2.7/json/tests/test_encode_basestring_ascii.py @@ -1,8 +1,6 @@ -from unittest import TestCase +from collections import OrderedDict +from json.tests import PyTest, CTest -import json.encoder -from json import dumps -from collections import OrderedDict CASES = [ (u'/\\"\ucafe\ubabe\uab98\ufcde\ubcda\uef4a\x08\x0c\n\r\t`1~!@#$%^&*()_+-=[]{}|;:\',./<>?', '"/\\\\\\"\\ucafe\\ubabe\\uab98\\ufcde\\ubcda\\uef4a\\b\\f\\n\\r\\t`1~!@#$%^&*()_+-=[]{}|;:\',./<>?"'), @@ -23,19 +21,11 @@ (u'\u0123\u4567\u89ab\ucdef\uabcd\uef4a', '"\\u0123\\u4567\\u89ab\\ucdef\\uabcd\\uef4a"'), ] -class TestEncodeBaseStringAscii(TestCase): - def test_py_encode_basestring_ascii(self): - self._test_encode_basestring_ascii(json.encoder.py_encode_basestring_ascii) - - def test_c_encode_basestring_ascii(self): - if not json.encoder.c_encode_basestring_ascii: - return - self._test_encode_basestring_ascii(json.encoder.c_encode_basestring_ascii) - - def _test_encode_basestring_ascii(self, encode_basestring_ascii): - fname = encode_basestring_ascii.__name__ +class TestEncodeBasestringAscii(object): + def test_encode_basestring_ascii(self): + fname = self.json.encoder.encode_basestring_ascii.__name__ for input_string, expect in CASES: - result = encode_basestring_ascii(input_string) + result = self.json.encoder.encode_basestring_ascii(input_string) self.assertEqual(result, expect, '{0!r} != {1!r} for {2}({3!r})'.format( result, expect, fname, input_string)) @@ -43,5 +33,9 @@ def test_ordered_dict(self): # See issue 6105 items = [('one', 1), ('two', 2), ('three', 3), ('four', 4), ('five', 5)] - s = json.dumps(OrderedDict(items)) + s = self.dumps(OrderedDict(items)) self.assertEqual(s, '{"one": 1, "two": 2, "three": 3, "four": 4, "five": 5}') + + +class TestPyEncodeBasestringAscii(TestEncodeBasestringAscii, PyTest): pass +class TestCEncodeBasestringAscii(TestEncodeBasestringAscii, CTest): pass diff --git a/lib-python/2.7/json/tests/test_fail.py b/lib-python/2.7/json/tests/test_fail.py --- a/lib-python/2.7/json/tests/test_fail.py +++ b/lib-python/2.7/json/tests/test_fail.py @@ -1,6 +1,4 @@ -from unittest import TestCase - -import json +from json.tests import PyTest, CTest # Fri Dec 30 18:57:26 2005 JSONDOCS = [ @@ -61,15 +59,15 @@ 18: "spec doesn't specify any nesting limitations", } -class TestFail(TestCase): +class TestFail(object): def test_failures(self): for idx, doc in enumerate(JSONDOCS): idx = idx + 1 if idx in SKIPS: - json.loads(doc) + self.loads(doc) continue try: - json.loads(doc) + self.loads(doc) except ValueError: pass else: @@ -79,7 +77,11 @@ data = {'a' : 1, (1, 2) : 2} #This is for c encoder - self.assertRaises(TypeError, json.dumps, data) + self.assertRaises(TypeError, self.dumps, data) #This is for python encoder - self.assertRaises(TypeError, json.dumps, data, indent=True) + 
self.assertRaises(TypeError, self.dumps, data, indent=True) + + +class TestPyFail(TestFail, PyTest): pass +class TestCFail(TestFail, CTest): pass diff --git a/lib-python/2.7/json/tests/test_float.py b/lib-python/2.7/json/tests/test_float.py --- a/lib-python/2.7/json/tests/test_float.py +++ b/lib-python/2.7/json/tests/test_float.py @@ -1,19 +1,22 @@ import math -from unittest import TestCase +from json.tests import PyTest, CTest -import json -class TestFloat(TestCase): +class TestFloat(object): def test_floats(self): for num in [1617161771.7650001, math.pi, math.pi**100, math.pi**-100, 3.1]: - self.assertEqual(float(json.dumps(num)), num) - self.assertEqual(json.loads(json.dumps(num)), num) - self.assertEqual(json.loads(unicode(json.dumps(num))), num) + self.assertEqual(float(self.dumps(num)), num) + self.assertEqual(self.loads(self.dumps(num)), num) + self.assertEqual(self.loads(unicode(self.dumps(num))), num) def test_ints(self): for num in [1, 1L, 1<<32, 1<<64]: - self.assertEqual(json.dumps(num), str(num)) - self.assertEqual(int(json.dumps(num)), num) - self.assertEqual(json.loads(json.dumps(num)), num) - self.assertEqual(json.loads(unicode(json.dumps(num))), num) + self.assertEqual(self.dumps(num), str(num)) + self.assertEqual(int(self.dumps(num)), num) + self.assertEqual(self.loads(self.dumps(num)), num) + self.assertEqual(self.loads(unicode(self.dumps(num))), num) + + +class TestPyFloat(TestFloat, PyTest): pass +class TestCFloat(TestFloat, CTest): pass diff --git a/lib-python/2.7/json/tests/test_indent.py b/lib-python/2.7/json/tests/test_indent.py --- a/lib-python/2.7/json/tests/test_indent.py +++ b/lib-python/2.7/json/tests/test_indent.py @@ -1,9 +1,9 @@ -from unittest import TestCase +import textwrap +from StringIO import StringIO +from json.tests import PyTest, CTest -import json -import textwrap -class TestIndent(TestCase): +class TestIndent(object): def test_indent(self): h = [['blorpie'], ['whoops'], [], 'd-shtaeou', 'd-nthiouh', 'i-vhbjkhnth', {'nifty': 87}, {'field': 'yes', 'morefield': False} ] @@ -30,12 +30,31 @@ ]""") - d1 = json.dumps(h) - d2 = json.dumps(h, indent=2, sort_keys=True, separators=(',', ': ')) + d1 = self.dumps(h) + d2 = self.dumps(h, indent=2, sort_keys=True, separators=(',', ': ')) - h1 = json.loads(d1) - h2 = json.loads(d2) + h1 = self.loads(d1) + h2 = self.loads(d2) self.assertEqual(h1, h) self.assertEqual(h2, h) self.assertEqual(d2, expect) + + def test_indent0(self): + h = {3: 1} + def check(indent, expected): + d1 = self.dumps(h, indent=indent) + self.assertEqual(d1, expected) + + sio = StringIO() + self.json.dump(h, sio, indent=indent) + self.assertEqual(sio.getvalue(), expected) + + # indent=0 should emit newlines + check(0, '{\n"3": 1\n}') + # indent=None is more compact + check(None, '{"3": 1}') + + +class TestPyIndent(TestIndent, PyTest): pass +class TestCIndent(TestIndent, CTest): pass diff --git a/lib-python/2.7/json/tests/test_pass1.py b/lib-python/2.7/json/tests/test_pass1.py --- a/lib-python/2.7/json/tests/test_pass1.py +++ b/lib-python/2.7/json/tests/test_pass1.py @@ -1,6 +1,5 @@ -from unittest import TestCase +from json.tests import PyTest, CTest -import json # from http://json.org/JSON_checker/test/pass1.json JSON = r''' @@ -62,15 +61,19 @@ ,"rosebud"] ''' -class TestPass1(TestCase): +class TestPass1(object): def test_parse(self): # test in/out equivalence and parsing - res = json.loads(JSON) - out = json.dumps(res) - self.assertEqual(res, json.loads(out)) + res = self.loads(JSON) + out = self.dumps(res) + self.assertEqual(res, 
self.loads(out)) try: - json.dumps(res, allow_nan=False) + self.dumps(res, allow_nan=False) except ValueError: pass else: self.fail("23456789012E666 should be out of range") + + +class TestPyPass1(TestPass1, PyTest): pass +class TestCPass1(TestPass1, CTest): pass diff --git a/lib-python/2.7/json/tests/test_pass2.py b/lib-python/2.7/json/tests/test_pass2.py --- a/lib-python/2.7/json/tests/test_pass2.py +++ b/lib-python/2.7/json/tests/test_pass2.py @@ -1,14 +1,18 @@ -from unittest import TestCase -import json +from json.tests import PyTest, CTest + # from http://json.org/JSON_checker/test/pass2.json JSON = r''' [[[[[[[[[[[[[[[[[[["Not too deep"]]]]]]]]]]]]]]]]]]] ''' -class TestPass2(TestCase): +class TestPass2(object): def test_parse(self): # test in/out equivalence and parsing - res = json.loads(JSON) - out = json.dumps(res) - self.assertEqual(res, json.loads(out)) + res = self.loads(JSON) + out = self.dumps(res) + self.assertEqual(res, self.loads(out)) + + +class TestPyPass2(TestPass2, PyTest): pass +class TestCPass2(TestPass2, CTest): pass diff --git a/lib-python/2.7/json/tests/test_pass3.py b/lib-python/2.7/json/tests/test_pass3.py --- a/lib-python/2.7/json/tests/test_pass3.py +++ b/lib-python/2.7/json/tests/test_pass3.py @@ -1,6 +1,5 @@ -from unittest import TestCase +from json.tests import PyTest, CTest -import json # from http://json.org/JSON_checker/test/pass3.json JSON = r''' @@ -12,9 +11,14 @@ } ''' -class TestPass3(TestCase): + +class TestPass3(object): def test_parse(self): # test in/out equivalence and parsing - res = json.loads(JSON) - out = json.dumps(res) - self.assertEqual(res, json.loads(out)) + res = self.loads(JSON) + out = self.dumps(res) + self.assertEqual(res, self.loads(out)) + + +class TestPyPass3(TestPass3, PyTest): pass +class TestCPass3(TestPass3, CTest): pass diff --git a/lib-python/2.7/json/tests/test_recursion.py b/lib-python/2.7/json/tests/test_recursion.py --- a/lib-python/2.7/json/tests/test_recursion.py +++ b/lib-python/2.7/json/tests/test_recursion.py @@ -1,28 +1,16 @@ -from unittest import TestCase +from json.tests import PyTest, CTest -import json class JSONTestObject: pass -class RecursiveJSONEncoder(json.JSONEncoder): - recurse = False - def default(self, o): - if o is JSONTestObject: - if self.recurse: - return [JSONTestObject] - else: - return 'JSONTestObject' - return json.JSONEncoder.default(o) - - -class TestRecursion(TestCase): +class TestRecursion(object): def test_listrecursion(self): x = [] x.append(x) try: - json.dumps(x) + self.dumps(x) except ValueError: pass else: @@ -31,7 +19,7 @@ y = [x] x.append(y) try: - json.dumps(x) + self.dumps(x) except ValueError: pass else: @@ -39,13 +27,13 @@ y = [] x = [y, y] # ensure that the marker is cleared - json.dumps(x) + self.dumps(x) def test_dictrecursion(self): x = {} x["test"] = x try: - json.dumps(x) + self.dumps(x) except ValueError: pass else: @@ -53,9 +41,19 @@ x = {} y = {"a": x, "b": x} # ensure that the marker is cleared - json.dumps(x) + self.dumps(x) def test_defaultrecursion(self): + class RecursiveJSONEncoder(self.json.JSONEncoder): + recurse = False + def default(self, o): + if o is JSONTestObject: + if self.recurse: + return [JSONTestObject] + else: + return 'JSONTestObject' + return pyjson.JSONEncoder.default(o) + enc = RecursiveJSONEncoder() self.assertEqual(enc.encode(JSONTestObject), '"JSONTestObject"') enc.recurse = True @@ -65,3 +63,46 @@ pass else: self.fail("didn't raise ValueError on default recursion") + + + def test_highly_nested_objects_decoding(self): + # test that loading 
highly-nested objects doesn't segfault when C + # accelerations are used. See #12017 + # str + with self.assertRaises(RuntimeError): + self.loads('{"a":' * 100000 + '1' + '}' * 100000) + with self.assertRaises(RuntimeError): + self.loads('{"a":' * 100000 + '[1]' + '}' * 100000) + with self.assertRaises(RuntimeError): + self.loads('[' * 100000 + '1' + ']' * 100000) + # unicode + with self.assertRaises(RuntimeError): + self.loads(u'{"a":' * 100000 + u'1' + u'}' * 100000) + with self.assertRaises(RuntimeError): + self.loads(u'{"a":' * 100000 + u'[1]' + u'}' * 100000) + with self.assertRaises(RuntimeError): + self.loads(u'[' * 100000 + u'1' + u']' * 100000) + + def test_highly_nested_objects_encoding(self): + # See #12051 + l, d = [], {} + for x in xrange(100000): + l, d = [l], {'k':d} + with self.assertRaises(RuntimeError): + self.dumps(l) + with self.assertRaises(RuntimeError): + self.dumps(d) + + def test_endless_recursion(self): + # See #12051 + class EndlessJSONEncoder(self.json.JSONEncoder): + def default(self, o): + """If check_circular is False, this will keep adding another list.""" + return [o] + + with self.assertRaises(RuntimeError): + EndlessJSONEncoder(check_circular=False).encode(5j) + + +class TestPyRecursion(TestRecursion, PyTest): pass +class TestCRecursion(TestRecursion, CTest): pass diff --git a/lib-python/2.7/json/tests/test_scanstring.py b/lib-python/2.7/json/tests/test_scanstring.py --- a/lib-python/2.7/json/tests/test_scanstring.py +++ b/lib-python/2.7/json/tests/test_scanstring.py @@ -1,18 +1,10 @@ import sys -import decimal -from unittest import TestCase +from json.tests import PyTest, CTest -import json -import json.decoder -class TestScanString(TestCase): - def test_py_scanstring(self): - self._test_scanstring(json.decoder.py_scanstring) - - def test_c_scanstring(self): - self._test_scanstring(json.decoder.c_scanstring) - - def _test_scanstring(self, scanstring): +class TestScanstring(object): + def test_scanstring(self): + scanstring = self.json.decoder.scanstring self.assertEqual( scanstring('"z\\ud834\\udd20x"', 1, None, True), (u'z\U0001d120x', 16)) @@ -103,10 +95,15 @@ (u'Bad value', 12)) def test_issue3623(self): - self.assertRaises(ValueError, json.decoder.scanstring, b"xxx", 1, + self.assertRaises(ValueError, self.json.decoder.scanstring, b"xxx", 1, "xxx") self.assertRaises(UnicodeDecodeError, - json.encoder.encode_basestring_ascii, b"xx\xff") + self.json.encoder.encode_basestring_ascii, b"xx\xff") def test_overflow(self): - self.assertRaises(OverflowError, json.decoder.scanstring, b"xxx", sys.maxsize+1) + with self.assertRaises(OverflowError): + self.json.decoder.scanstring(b"xxx", sys.maxsize+1) + + +class TestPyScanstring(TestScanstring, PyTest): pass +class TestCScanstring(TestScanstring, CTest): pass diff --git a/lib-python/2.7/json/tests/test_separators.py b/lib-python/2.7/json/tests/test_separators.py --- a/lib-python/2.7/json/tests/test_separators.py +++ b/lib-python/2.7/json/tests/test_separators.py @@ -1,10 +1,8 @@ import textwrap -from unittest import TestCase +from json.tests import PyTest, CTest -import json - -class TestSeparators(TestCase): +class TestSeparators(object): def test_separators(self): h = [['blorpie'], ['whoops'], [], 'd-shtaeou', 'd-nthiouh', 'i-vhbjkhnth', {'nifty': 87}, {'field': 'yes', 'morefield': False} ] @@ -31,12 +29,16 @@ ]""") - d1 = json.dumps(h) - d2 = json.dumps(h, indent=2, sort_keys=True, separators=(' ,', ' : ')) + d1 = self.dumps(h) + d2 = self.dumps(h, indent=2, sort_keys=True, separators=(' ,', ' : ')) - h1 = 
json.loads(d1) - h2 = json.loads(d2) + h1 = self.loads(d1) + h2 = self.loads(d2) self.assertEqual(h1, h) self.assertEqual(h2, h) self.assertEqual(d2, expect) + + +class TestPySeparators(TestSeparators, PyTest): pass +class TestCSeparators(TestSeparators, CTest): pass diff --git a/lib-python/2.7/json/tests/test_speedups.py b/lib-python/2.7/json/tests/test_speedups.py --- a/lib-python/2.7/json/tests/test_speedups.py +++ b/lib-python/2.7/json/tests/test_speedups.py @@ -1,24 +1,23 @@ -import decimal -from unittest import TestCase +from json.tests import CTest -from json import decoder, encoder, scanner -class TestSpeedups(TestCase): +class TestSpeedups(CTest): def test_scanstring(self): - self.assertEqual(decoder.scanstring.__module__, "_json") - self.assertTrue(decoder.scanstring is decoder.c_scanstring) + self.assertEqual(self.json.decoder.scanstring.__module__, "_json") + self.assertIs(self.json.decoder.scanstring, self.json.decoder.c_scanstring) def test_encode_basestring_ascii(self): - self.assertEqual(encoder.encode_basestring_ascii.__module__, "_json") - self.assertTrue(encoder.encode_basestring_ascii is - encoder.c_encode_basestring_ascii) + self.assertEqual(self.json.encoder.encode_basestring_ascii.__module__, + "_json") + self.assertIs(self.json.encoder.encode_basestring_ascii, + self.json.encoder.c_encode_basestring_ascii) -class TestDecode(TestCase): +class TestDecode(CTest): def test_make_scanner(self): - self.assertRaises(AttributeError, scanner.c_make_scanner, 1) + self.assertRaises(AttributeError, self.json.scanner.c_make_scanner, 1) def test_make_encoder(self): - self.assertRaises(TypeError, encoder.c_make_encoder, + self.assertRaises(TypeError, self.json.encoder.c_make_encoder, None, "\xCD\x7D\x3D\x4E\x12\x4C\xF9\x79\xD7\x52\xBA\x82\xF2\x27\x4A\x7D\xA0\xCA\x75", None) diff --git a/lib-python/2.7/json/tests/test_unicode.py b/lib-python/2.7/json/tests/test_unicode.py --- a/lib-python/2.7/json/tests/test_unicode.py +++ b/lib-python/2.7/json/tests/test_unicode.py @@ -1,11 +1,10 @@ -from unittest import TestCase +from collections import OrderedDict +from json.tests import PyTest, CTest -import json -from collections import OrderedDict -class TestUnicode(TestCase): +class TestUnicode(object): def test_encoding1(self): - encoder = json.JSONEncoder(encoding='utf-8') + encoder = self.json.JSONEncoder(encoding='utf-8') u = u'\N{GREEK SMALL LETTER ALPHA}\N{GREEK CAPITAL LETTER OMEGA}' s = u.encode('utf-8') ju = encoder.encode(u) @@ -15,68 +14,72 @@ def test_encoding2(self): u = u'\N{GREEK SMALL LETTER ALPHA}\N{GREEK CAPITAL LETTER OMEGA}' s = u.encode('utf-8') - ju = json.dumps(u, encoding='utf-8') - js = json.dumps(s, encoding='utf-8') + ju = self.dumps(u, encoding='utf-8') + js = self.dumps(s, encoding='utf-8') self.assertEqual(ju, js) def test_encoding3(self): u = u'\N{GREEK SMALL LETTER ALPHA}\N{GREEK CAPITAL LETTER OMEGA}' - j = json.dumps(u) + j = self.dumps(u) self.assertEqual(j, '"\\u03b1\\u03a9"') def test_encoding4(self): u = u'\N{GREEK SMALL LETTER ALPHA}\N{GREEK CAPITAL LETTER OMEGA}' - j = json.dumps([u]) + j = self.dumps([u]) self.assertEqual(j, '["\\u03b1\\u03a9"]') def test_encoding5(self): u = u'\N{GREEK SMALL LETTER ALPHA}\N{GREEK CAPITAL LETTER OMEGA}' - j = json.dumps(u, ensure_ascii=False) + j = self.dumps(u, ensure_ascii=False) self.assertEqual(j, u'"{0}"'.format(u)) def test_encoding6(self): u = u'\N{GREEK SMALL LETTER ALPHA}\N{GREEK CAPITAL LETTER OMEGA}' - j = json.dumps([u], ensure_ascii=False) + j = self.dumps([u], ensure_ascii=False) self.assertEqual(j, 
u'["{0}"]'.format(u)) def test_big_unicode_encode(self): u = u'\U0001d120' - self.assertEqual(json.dumps(u), '"\\ud834\\udd20"') - self.assertEqual(json.dumps(u, ensure_ascii=False), u'"\U0001d120"') + self.assertEqual(self.dumps(u), '"\\ud834\\udd20"') + self.assertEqual(self.dumps(u, ensure_ascii=False), u'"\U0001d120"') def test_big_unicode_decode(self): u = u'z\U0001d120x' - self.assertEqual(json.loads('"' + u + '"'), u) - self.assertEqual(json.loads('"z\\ud834\\udd20x"'), u) + self.assertEqual(self.loads('"' + u + '"'), u) + self.assertEqual(self.loads('"z\\ud834\\udd20x"'), u) def test_unicode_decode(self): for i in range(0, 0xd7ff): u = unichr(i) s = '"\\u{0:04x}"'.format(i) - self.assertEqual(json.loads(s), u) + self.assertEqual(self.loads(s), u) def test_object_pairs_hook_with_unicode(self): s = u'{"xkd":1, "kcw":2, "art":3, "hxm":4, "qrt":5, "pad":6, "hoy":7}' p = [(u"xkd", 1), (u"kcw", 2), (u"art", 3), (u"hxm", 4), (u"qrt", 5), (u"pad", 6), (u"hoy", 7)] - self.assertEqual(json.loads(s), eval(s)) - self.assertEqual(json.loads(s, object_pairs_hook = lambda x: x), p) - od = json.loads(s, object_pairs_hook = OrderedDict) + self.assertEqual(self.loads(s), eval(s)) + self.assertEqual(self.loads(s, object_pairs_hook = lambda x: x), p) + od = self.loads(s, object_pairs_hook = OrderedDict) self.assertEqual(od, OrderedDict(p)) self.assertEqual(type(od), OrderedDict) # the object_pairs_hook takes priority over the object_hook - self.assertEqual(json.loads(s, + self.assertEqual(self.loads(s, object_pairs_hook = OrderedDict, object_hook = lambda x: None), OrderedDict(p)) def test_default_encoding(self): - self.assertEqual(json.loads(u'{"a": "\xe9"}'.encode('utf-8')), + self.assertEqual(self.loads(u'{"a": "\xe9"}'.encode('utf-8')), {'a': u'\xe9'}) def test_unicode_preservation(self): - self.assertEqual(type(json.loads(u'""')), unicode) - self.assertEqual(type(json.loads(u'"a"')), unicode) - self.assertEqual(type(json.loads(u'["a"]')[0]), unicode) + self.assertEqual(type(self.loads(u'""')), unicode) + self.assertEqual(type(self.loads(u'"a"')), unicode) + self.assertEqual(type(self.loads(u'["a"]')[0]), unicode) # Issue 10038. - self.assertEqual(type(json.loads('"foo"')), unicode) + self.assertEqual(type(self.loads('"foo"')), unicode) + + +class TestPyUnicode(TestUnicode, PyTest): pass +class TestCUnicode(TestUnicode, CTest): pass diff --git a/lib-python/2.7/lib-tk/Tix.py b/lib-python/2.7/lib-tk/Tix.py --- a/lib-python/2.7/lib-tk/Tix.py +++ b/lib-python/2.7/lib-tk/Tix.py @@ -163,7 +163,7 @@ extensions) exist, then the image type is chosen according to the depth of the X display: xbm images are chosen on monochrome displays and color images are chosen on color displays. By using - tix_ getimage, you can advoid hard coding the pathnames of the + tix_ getimage, you can avoid hard coding the pathnames of the image files in your application. When successful, this command returns the name of the newly created image, which can be used to configure the -image option of the Tk and Tix widgets. @@ -171,7 +171,7 @@ return self.tk.call('tix', 'getimage', name) def tix_option_get(self, name): - """Gets the options manitained by the Tix + """Gets the options maintained by the Tix scheme mechanism. Available options include: active_bg active_fg bg @@ -576,7 +576,7 @@ class ComboBox(TixWidget): """ComboBox - an Entry field with a dropdown menu. 
The user can select a - choice by either typing in the entry subwdget or selecting from the + choice by either typing in the entry subwidget or selecting from the listbox subwidget. Subwidget Class @@ -869,7 +869,7 @@ """HList - Hierarchy display widget can be used to display any data that have a hierarchical structure, for example, file system directory trees. The list entries are indented and connected by branch lines - according to their places in the hierachy. + according to their places in the hierarchy. Subwidgets - None""" @@ -1520,7 +1520,7 @@ self.tk.call(self._w, 'selection', 'set', first, last) class Tree(TixWidget): - """Tree - The tixTree widget can be used to display hierachical + """Tree - The tixTree widget can be used to display hierarchical data in a tree form. The user can adjust the view of the tree by opening or closing parts of the tree.""" diff --git a/lib-python/2.7/lib-tk/Tkinter.py b/lib-python/2.7/lib-tk/Tkinter.py --- a/lib-python/2.7/lib-tk/Tkinter.py +++ b/lib-python/2.7/lib-tk/Tkinter.py @@ -1660,7 +1660,7 @@ class Tk(Misc, Wm): """Toplevel widget of Tk which represents mostly the main window - of an appliation. It has an associated Tcl interpreter.""" + of an application. It has an associated Tcl interpreter.""" _w = '.' def __init__(self, screenName=None, baseName=None, className='Tk', useTk=1, sync=0, use=None): diff --git a/lib-python/2.7/lib-tk/test/test_ttk/test_functions.py b/lib-python/2.7/lib-tk/test/test_ttk/test_functions.py --- a/lib-python/2.7/lib-tk/test/test_ttk/test_functions.py +++ b/lib-python/2.7/lib-tk/test/test_ttk/test_functions.py @@ -136,7 +136,7 @@ # minimum acceptable for image type self.assertEqual(ttk._format_elemcreate('image', False, 'test'), ("test ", ())) - # specifiyng a state spec + # specifying a state spec self.assertEqual(ttk._format_elemcreate('image', False, 'test', ('', 'a')), ("test {} a", ())) # state spec with multiple states diff --git a/lib-python/2.7/lib-tk/ttk.py b/lib-python/2.7/lib-tk/ttk.py --- a/lib-python/2.7/lib-tk/ttk.py +++ b/lib-python/2.7/lib-tk/ttk.py @@ -707,7 +707,7 @@ textvariable, values, width """ # The "values" option may need special formatting, so leave to - # _format_optdict the responsability to format it + # _format_optdict the responsibility to format it if "values" in kw: kw["values"] = _format_optdict({'v': kw["values"]})[1] @@ -993,7 +993,7 @@ pane is either an integer index or the name of a managed subwindow. If kw is not given, returns a dict of the pane option values. If option is specified then the value for that option is returned. - Otherwise, sets the options to the correspoding values.""" + Otherwise, sets the options to the corresponding values.""" if option is not None: kw[option] = None return _val_or_dict(kw, self.tk.call, self._w, "pane", pane) diff --git a/lib-python/2.7/lib-tk/turtle.py b/lib-python/2.7/lib-tk/turtle.py --- a/lib-python/2.7/lib-tk/turtle.py +++ b/lib-python/2.7/lib-tk/turtle.py @@ -1385,7 +1385,7 @@ Optional argument: picname -- a string, name of a gif-file or "nopic". - If picname is a filename, set the corresponing image as background. + If picname is a filename, set the corresponding image as background. If picname is "nopic", delete backgroundimage, if present. If picname is None, return the filename of the current backgroundimage. 
@@ -1409,7 +1409,7 @@ Optional arguments: canvwidth -- positive integer, new width of canvas in pixels canvheight -- positive integer, new height of canvas in pixels - bg -- colorstring or color-tupel, new backgroundcolor + bg -- colorstring or color-tuple, new backgroundcolor If no arguments are given, return current (canvaswidth, canvasheight) Do not alter the drawing window. To observe hidden parts of @@ -3079,9 +3079,9 @@ fill="", width=ps) # Turtle now at position old, self._position = old - ## if undo is done during crating a polygon, the last vertex - ## will be deleted. if the polygon is entirel deleted, - ## creatigPoly will be set to False. + ## if undo is done during creating a polygon, the last vertex + ## will be deleted. if the polygon is entirely deleted, + ## creatingPoly will be set to False. ## Polygons created before the last one will not be affected by undo() if self._creatingPoly: if len(self._poly) > 0: @@ -3221,7 +3221,7 @@ def dot(self, size=None, *color): """Draw a dot with diameter size, using color. - Optional argumentS: + Optional arguments: size -- an integer >= 1 (if given) color -- a colorstring or a numeric color tuple @@ -3691,7 +3691,7 @@ class Turtle(RawTurtle): - """RawTurtle auto-crating (scrolled) canvas. + """RawTurtle auto-creating (scrolled) canvas. When a Turtle object is created or a function derived from some Turtle method is called a TurtleScreen object is automatically created. @@ -3731,7 +3731,7 @@ filename -- a string, used as filename default value is turtle_docstringdict - Has to be called explicitely, (not used by the turtle-graphics classes) + Has to be called explicitly, (not used by the turtle-graphics classes) The docstring dictionary will be written to the Python script .py It is intended to serve as a template for translation of the docstrings into different languages. 
diff --git a/lib-python/2.7/lib2to3/__main__.py b/lib-python/2.7/lib2to3/__main__.py new file mode 100644 --- /dev/null +++ b/lib-python/2.7/lib2to3/__main__.py @@ -0,0 +1,4 @@ +import sys +from .main import main + +sys.exit(main("lib2to3.fixes")) diff --git a/lib-python/2.7/lib2to3/fixes/fix_itertools.py b/lib-python/2.7/lib2to3/fixes/fix_itertools.py --- a/lib-python/2.7/lib2to3/fixes/fix_itertools.py +++ b/lib-python/2.7/lib2to3/fixes/fix_itertools.py @@ -13,7 +13,7 @@ class FixItertools(fixer_base.BaseFix): BM_compatible = True - it_funcs = "('imap'|'ifilter'|'izip'|'ifilterfalse')" + it_funcs = "('imap'|'ifilter'|'izip'|'izip_longest'|'ifilterfalse')" PATTERN = """ power< it='itertools' trailer< @@ -28,7 +28,8 @@ def transform(self, node, results): prefix = None func = results['func'][0] - if 'it' in results and func.value != u'ifilterfalse': + if ('it' in results and + func.value not in (u'ifilterfalse', u'izip_longest')): dot, it = (results['dot'], results['it']) # Remove the 'itertools' prefix = it.prefix diff --git a/lib-python/2.7/lib2to3/fixes/fix_itertools_imports.py b/lib-python/2.7/lib2to3/fixes/fix_itertools_imports.py --- a/lib-python/2.7/lib2to3/fixes/fix_itertools_imports.py +++ b/lib-python/2.7/lib2to3/fixes/fix_itertools_imports.py @@ -31,9 +31,10 @@ if member_name in (u'imap', u'izip', u'ifilter'): child.value = None child.remove() - elif member_name == u'ifilterfalse': + elif member_name in (u'ifilterfalse', u'izip_longest'): node.changed() - name_node.value = u'filterfalse' + name_node.value = (u'filterfalse' if member_name[1] == u'f' + else u'zip_longest') # Make sure the import statement is still sane children = imports.children[:] or [imports] diff --git a/lib-python/2.7/lib2to3/fixes/fix_metaclass.py b/lib-python/2.7/lib2to3/fixes/fix_metaclass.py --- a/lib-python/2.7/lib2to3/fixes/fix_metaclass.py +++ b/lib-python/2.7/lib2to3/fixes/fix_metaclass.py @@ -48,7 +48,7 @@ """ for node in cls_node.children: if node.type == syms.suite: - # already in the prefered format, do nothing + # already in the preferred format, do nothing return # !%@#! 
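The fix_itertools/fix_itertools_imports changes above teach 2to3 to rewrite izip_longest as well. One way to see the effect, assuming a stock 2.7 lib2to3 (the expected result is shown as comments):

    from lib2to3.refactor import RefactoringTool

    fixers = ['lib2to3.fixes.fix_itertools', 'lib2to3.fixes.fix_itertools_imports']
    rt = RefactoringTool(fixers)
    src = u"from itertools import izip_longest\nizip_longest(a, b)\n"
    print rt.refactor_string(src, '<example>')
    # from itertools import zip_longest
    # zip_longest(a, b)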
oneliners have no suite node, we have to fake one up diff --git a/lib-python/2.7/lib2to3/fixes/fix_urllib.py b/lib-python/2.7/lib2to3/fixes/fix_urllib.py --- a/lib-python/2.7/lib2to3/fixes/fix_urllib.py +++ b/lib-python/2.7/lib2to3/fixes/fix_urllib.py @@ -12,7 +12,7 @@ MAPPING = {"urllib": [ ("urllib.request", - ["URLOpener", "FancyURLOpener", "urlretrieve", + ["URLopener", "FancyURLopener", "urlretrieve", "_urlopener", "urlopen", "urlcleanup", "pathname2url", "url2pathname"]), ("urllib.parse", diff --git a/lib-python/2.7/lib2to3/main.py b/lib-python/2.7/lib2to3/main.py --- a/lib-python/2.7/lib2to3/main.py +++ b/lib-python/2.7/lib2to3/main.py @@ -101,7 +101,7 @@ parser.add_option("-j", "--processes", action="store", default=1, type="int", help="Run 2to3 concurrently") parser.add_option("-x", "--nofix", action="append", default=[], - help="Prevent a fixer from being run.") + help="Prevent a transformation from being run") parser.add_option("-l", "--list-fixes", action="store_true", help="List available transformations") parser.add_option("-p", "--print-function", action="store_true", @@ -113,7 +113,7 @@ parser.add_option("-w", "--write", action="store_true", help="Write back modified files") parser.add_option("-n", "--nobackups", action="store_true", default=False, - help="Don't write backups for modified files.") + help="Don't write backups for modified files") # Parse command line arguments refactor_stdin = False diff --git a/lib-python/2.7/lib2to3/patcomp.py b/lib-python/2.7/lib2to3/patcomp.py --- a/lib-python/2.7/lib2to3/patcomp.py +++ b/lib-python/2.7/lib2to3/patcomp.py @@ -12,6 +12,7 @@ # Python imports import os +import StringIO # Fairly local imports from .pgen2 import driver, literals, token, tokenize, parse, grammar @@ -32,7 +33,7 @@ def tokenize_wrapper(input): """Tokenizes a string suppressing significant whitespace.""" skip = set((token.NEWLINE, token.INDENT, token.DEDENT)) - tokens = tokenize.generate_tokens(driver.generate_lines(input).next) + tokens = tokenize.generate_tokens(StringIO.StringIO(input).readline) for quintuple in tokens: type, value, start, end, line_text = quintuple if type not in skip: diff --git a/lib-python/2.7/lib2to3/pgen2/conv.py b/lib-python/2.7/lib2to3/pgen2/conv.py --- a/lib-python/2.7/lib2to3/pgen2/conv.py +++ b/lib-python/2.7/lib2to3/pgen2/conv.py @@ -51,7 +51,7 @@ self.finish_off() def parse_graminit_h(self, filename): - """Parse the .h file writen by pgen. (Internal) + """Parse the .h file written by pgen. (Internal) This file is a sequence of #define statements defining the nonterminals of the grammar as numbers. We build two tables @@ -82,7 +82,7 @@ return True def parse_graminit_c(self, filename): - """Parse the .c file writen by pgen. (Internal) + """Parse the .c file written by pgen. (Internal) The file looks as follows. 
The first two lines are always this: diff --git a/lib-python/2.7/lib2to3/pgen2/driver.py b/lib-python/2.7/lib2to3/pgen2/driver.py --- a/lib-python/2.7/lib2to3/pgen2/driver.py +++ b/lib-python/2.7/lib2to3/pgen2/driver.py @@ -19,6 +19,7 @@ import codecs import os import logging +import StringIO import sys # Pgen imports @@ -101,18 +102,10 @@ def parse_string(self, text, debug=False): """Parse a string and return the syntax tree.""" - tokens = tokenize.generate_tokens(generate_lines(text).next) + tokens = tokenize.generate_tokens(StringIO.StringIO(text).readline) return self.parse_tokens(tokens, debug) -def generate_lines(text): - """Generator that behaves like readline without using StringIO.""" - for line in text.splitlines(True): - yield line - while True: - yield "" - - def load_grammar(gt="Grammar.txt", gp=None, save=True, force=False, logger=None): """Load the grammar (maybe from a pickle).""" diff --git a/lib-python/2.7/lib2to3/pytree.py b/lib-python/2.7/lib2to3/pytree.py --- a/lib-python/2.7/lib2to3/pytree.py +++ b/lib-python/2.7/lib2to3/pytree.py @@ -658,8 +658,8 @@ content: optional sequence of subsequences of patterns; if absent, matches one node; if present, each subsequence is an alternative [*] - min: optinal minumum number of times to match, default 0 - max: optional maximum number of times tro match, default HUGE + min: optional minimum number of times to match, default 0 + max: optional maximum number of times to match, default HUGE name: optional name assigned to this match [*] Thus, if content is [[a, b, c], [d, e], [f, g, h]] this is @@ -743,9 +743,11 @@ else: # The reason for this is that hitting the recursion limit usually # results in some ugly messages about how RuntimeErrors are being - # ignored. - save_stderr = sys.stderr - sys.stderr = StringIO() + # ignored. We don't do this on non-CPython implementation because + # they don't have this problem. + if hasattr(sys, "getrefcount"): + save_stderr = sys.stderr + sys.stderr = StringIO() try: for count, r in self._recursive_matches(nodes, 0): if self.name: @@ -759,7 +761,8 @@ r[self.name] = nodes[:count] yield count, r finally: - sys.stderr = save_stderr + if hasattr(sys, "getrefcount"): + sys.stderr = save_stderr def _iterative_matches(self, nodes): """Helper to iteratively yield the matches.""" diff --git a/lib-python/2.7/lib2to3/refactor.py b/lib-python/2.7/lib2to3/refactor.py --- a/lib-python/2.7/lib2to3/refactor.py +++ b/lib-python/2.7/lib2to3/refactor.py @@ -302,13 +302,14 @@ Files and subdirectories starting with '.' are skipped. 
""" + py_ext = os.extsep + "py" for dirpath, dirnames, filenames in os.walk(dir_name): self.log_debug("Descending into %s", dirpath) dirnames.sort() filenames.sort() for name in filenames: - if not name.startswith(".") and \ - os.path.splitext(name)[1].endswith("py"): + if (not name.startswith(".") and + os.path.splitext(name)[1] == py_ext): fullname = os.path.join(dirpath, name) self.refactor_file(fullname, write, doctests_only) # Modify dirnames in-place to remove subdirs with leading dots diff --git a/lib-python/2.7/lib2to3/tests/data/py2_test_grammar.py b/lib-python/2.7/lib2to3/tests/data/py2_test_grammar.py --- a/lib-python/2.7/lib2to3/tests/data/py2_test_grammar.py +++ b/lib-python/2.7/lib2to3/tests/data/py2_test_grammar.py @@ -316,7 +316,7 @@ ### simple_stmt: small_stmt (';' small_stmt)* [';'] x = 1; pass; del x def foo(): - # verify statments that end with semi-colons + # verify statements that end with semi-colons x = 1; pass; del x; foo() diff --git a/lib-python/2.7/lib2to3/tests/data/py3_test_grammar.py b/lib-python/2.7/lib2to3/tests/data/py3_test_grammar.py --- a/lib-python/2.7/lib2to3/tests/data/py3_test_grammar.py +++ b/lib-python/2.7/lib2to3/tests/data/py3_test_grammar.py @@ -356,7 +356,7 @@ ### simple_stmt: small_stmt (';' small_stmt)* [';'] x = 1; pass; del x def foo(): - # verify statments that end with semi-colons + # verify statements that end with semi-colons x = 1; pass; del x; foo() diff --git a/lib-python/2.7/lib2to3/tests/test_fixers.py b/lib-python/2.7/lib2to3/tests/test_fixers.py --- a/lib-python/2.7/lib2to3/tests/test_fixers.py +++ b/lib-python/2.7/lib2to3/tests/test_fixers.py @@ -3623,16 +3623,24 @@ a = """%s(f, a)""" self.checkall(b, a) - def test_2(self): + def test_qualified(self): b = """itertools.ifilterfalse(a, b)""" a = """itertools.filterfalse(a, b)""" self.check(b, a) - def test_4(self): + b = """itertools.izip_longest(a, b)""" + a = """itertools.zip_longest(a, b)""" + self.check(b, a) + + def test_2(self): b = """ifilterfalse(a, b)""" a = """filterfalse(a, b)""" self.check(b, a) + b = """izip_longest(a, b)""" + a = """zip_longest(a, b)""" + self.check(b, a) + def test_space_1(self): b = """ %s(f, a)""" a = """ %s(f, a)""" @@ -3643,9 +3651,14 @@ a = """ itertools.filterfalse(a, b)""" self.check(b, a) + b = """ itertools.izip_longest(a, b)""" + a = """ itertools.zip_longest(a, b)""" + self.check(b, a) + def test_run_order(self): self.assert_runs_after('map', 'zip', 'filter') + class Test_itertools_imports(FixerTestCase): fixer = 'itertools_imports' @@ -3696,18 +3709,19 @@ s = "from itertools import bar as bang" self.unchanged(s) - def test_ifilter(self): - b = "from itertools import ifilterfalse" - a = "from itertools import filterfalse" - self.check(b, a) - - b = "from itertools import imap, ifilterfalse, foo" - a = "from itertools import filterfalse, foo" - self.check(b, a) - - b = "from itertools import bar, ifilterfalse, foo" - a = "from itertools import bar, filterfalse, foo" - self.check(b, a) + def test_ifilter_and_zip_longest(self): + for name in "filterfalse", "zip_longest": + b = "from itertools import i%s" % (name,) + a = "from itertools import %s" % (name,) + self.check(b, a) + + b = "from itertools import imap, i%s, foo" % (name,) + a = "from itertools import %s, foo" % (name,) + self.check(b, a) + + b = "from itertools import bar, i%s, foo" % (name,) + a = "from itertools import bar, %s, foo" % (name,) + self.check(b, a) def test_import_star(self): s = "from itertools import *" diff --git a/lib-python/2.7/lib2to3/tests/test_parser.py 
b/lib-python/2.7/lib2to3/tests/test_parser.py --- a/lib-python/2.7/lib2to3/tests/test_parser.py +++ b/lib-python/2.7/lib2to3/tests/test_parser.py @@ -19,6 +19,16 @@ # Local imports from lib2to3.pgen2 import tokenize from ..pgen2.parse import ParseError +from lib2to3.pygram import python_symbols as syms + + +class TestDriver(support.TestCase): + + def test_formfeed(self): + s = """print 1\n\x0Cprint 2\n""" + t = driver.parse_string(s) + self.assertEqual(t.children[0].children[0].type, syms.print_stmt) + self.assertEqual(t.children[1].children[0].type, syms.print_stmt) class GrammarTest(support.TestCase): diff --git a/lib-python/2.7/lib2to3/tests/test_refactor.py b/lib-python/2.7/lib2to3/tests/test_refactor.py --- a/lib-python/2.7/lib2to3/tests/test_refactor.py +++ b/lib-python/2.7/lib2to3/tests/test_refactor.py @@ -223,6 +223,7 @@ "hi.py", ".dumb", ".after.py", + "notpy.npy", "sappy"] expected = ["hi.py"] check(tree, expected) diff --git a/lib-python/2.7/lib2to3/tests/test_util.py b/lib-python/2.7/lib2to3/tests/test_util.py --- a/lib-python/2.7/lib2to3/tests/test_util.py +++ b/lib-python/2.7/lib2to3/tests/test_util.py @@ -568,8 +568,8 @@ def test_from_import(self): node = parse('bar()') - fixer_util.touch_import("cgi", "escape", node) - self.assertEqual(str(node), 'from cgi import escape\nbar()\n\n') + fixer_util.touch_import("html", "escape", node) + self.assertEqual(str(node), 'from html import escape\nbar()\n\n') def test_name_import(self): node = parse('bar()') diff --git a/lib-python/2.7/locale.py b/lib-python/2.7/locale.py --- a/lib-python/2.7/locale.py +++ b/lib-python/2.7/locale.py @@ -621,7 +621,7 @@ 'tactis': 'TACTIS', 'euc_jp': 'eucJP', 'euc_kr': 'eucKR', - 'utf_8': 'UTF8', + 'utf_8': 'UTF-8', 'koi8_r': 'KOI8-R', 'koi8_u': 'KOI8-U', # XXX This list is still incomplete. If you know more diff --git a/lib-python/2.7/logging/__init__.py b/lib-python/2.7/logging/__init__.py --- a/lib-python/2.7/logging/__init__.py +++ b/lib-python/2.7/logging/__init__.py @@ -1627,6 +1627,7 @@ h = wr() if h: try: + h.acquire() h.flush() h.close() except (IOError, ValueError): @@ -1635,6 +1636,8 @@ # references to them are still around at # application exit. pass + finally: + h.release() except: if raiseExceptions: raise diff --git a/lib-python/2.7/logging/config.py b/lib-python/2.7/logging/config.py --- a/lib-python/2.7/logging/config.py +++ b/lib-python/2.7/logging/config.py @@ -226,14 +226,14 @@ propagate = 1 logger = logging.getLogger(qn) if qn in existing: - i = existing.index(qn) + i = existing.index(qn) + 1 # start with the entry after qn prefixed = qn + "." 
pflen = len(prefixed) num_existing = len(existing) - i = i + 1 # look at the entry after qn - while (i < num_existing) and (existing[i][:pflen] == prefixed): - child_loggers.append(existing[i]) - i = i + 1 + while i < num_existing: + if existing[i][:pflen] == prefixed: + child_loggers.append(existing[i]) + i += 1 existing.remove(qn) if "level" in opts: level = cp.get(sectname, "level") diff --git a/lib-python/2.7/logging/handlers.py b/lib-python/2.7/logging/handlers.py --- a/lib-python/2.7/logging/handlers.py +++ b/lib-python/2.7/logging/handlers.py @@ -125,6 +125,7 @@ """ if self.stream: self.stream.close() + self.stream = None if self.backupCount > 0: for i in range(self.backupCount - 1, 0, -1): sfn = "%s.%d" % (self.baseFilename, i) @@ -324,6 +325,7 @@ """ if self.stream: self.stream.close() + self.stream = None # get the time that this sequence started at and make it a TimeTuple t = self.rolloverAt - self.interval if self.utc: diff --git a/lib-python/2.7/mailbox.py b/lib-python/2.7/mailbox.py --- a/lib-python/2.7/mailbox.py +++ b/lib-python/2.7/mailbox.py @@ -234,27 +234,35 @@ def __init__(self, dirname, factory=rfc822.Message, create=True): """Initialize a Maildir instance.""" Mailbox.__init__(self, dirname, factory, create) + self._paths = { + 'tmp': os.path.join(self._path, 'tmp'), + 'new': os.path.join(self._path, 'new'), + 'cur': os.path.join(self._path, 'cur'), + } if not os.path.exists(self._path): if create: os.mkdir(self._path, 0700) - os.mkdir(os.path.join(self._path, 'tmp'), 0700) - os.mkdir(os.path.join(self._path, 'new'), 0700) - os.mkdir(os.path.join(self._path, 'cur'), 0700) + for path in self._paths.values(): + os.mkdir(path, 0o700) else: raise NoSuchMailboxError(self._path) self._toc = {} - self._last_read = None # Records last time we read cur/new - # NOTE: we manually invalidate _last_read each time we do any - # modifications ourselves, otherwise we might get tripped up by - # bogus mtime behaviour on some systems (see issue #6896). 
+ self._toc_mtimes = {} + for subdir in ('cur', 'new'): + self._toc_mtimes[subdir] = os.path.getmtime(self._paths[subdir]) + self._last_read = time.time() # Records last time we read cur/new + self._skewfactor = 0.1 # Adjust if os/fs clocks are skewing def add(self, message): """Add message and return assigned key.""" tmp_file = self._create_tmp() try: self._dump_message(message, tmp_file) - finally: - _sync_close(tmp_file) + except BaseException: + tmp_file.close() + os.remove(tmp_file.name) + raise + _sync_close(tmp_file) if isinstance(message, MaildirMessage): subdir = message.get_subdir() suffix = self.colon + message.get_info() @@ -280,15 +288,11 @@ raise if isinstance(message, MaildirMessage): os.utime(dest, (os.path.getatime(dest), message.get_date())) - # Invalidate cached toc - self._last_read = None return uniq def remove(self, key): """Remove the keyed message; raise KeyError if it doesn't exist.""" os.remove(os.path.join(self._path, self._lookup(key))) - # Invalidate cached toc (only on success) - self._last_read = None def discard(self, key): """If the keyed message exists, remove it.""" @@ -323,8 +327,6 @@ if isinstance(message, MaildirMessage): os.utime(new_path, (os.path.getatime(new_path), message.get_date())) - # Invalidate cached toc - self._last_read = None def get_message(self, key): """Return a Message representation or raise a KeyError.""" @@ -380,8 +382,8 @@ def flush(self): """Write any pending changes to disk.""" # Maildir changes are always written immediately, so there's nothing - # to do except invalidate our cached toc. - self._last_read = None + # to do. + pass def lock(self): """Lock the mailbox.""" @@ -479,36 +481,39 @@ def _refresh(self): """Update table of contents mapping.""" - if self._last_read is not None: - for subdir in ('new', 'cur'): - mtime = os.path.getmtime(os.path.join(self._path, subdir)) - if mtime > self._last_read: - break - else: + # If it has been less than two seconds since the last _refresh() call, + # we have to unconditionally re-read the mailbox just in case it has + # been modified, because os.path.mtime() has a 2 sec resolution in the + # most common worst case (FAT) and a 1 sec resolution typically. This + # results in a few unnecessary re-reads when _refresh() is called + # multiple times in that interval, but once the clock ticks over, we + # will only re-read as needed. Because the filesystem might be being + # served by an independent system with its own clock, we record and + # compare with the mtimes from the filesystem. Because the other + # system's clock might be skewing relative to our clock, we add an + # extra delta to our wait. The default is one tenth second, but is an + # instance variable and so can be adjusted if dealing with a + # particularly skewed or irregular system. + if time.time() - self._last_read > 2 + self._skewfactor: + refresh = False + for subdir in self._toc_mtimes: + mtime = os.path.getmtime(self._paths[subdir]) + if mtime > self._toc_mtimes[subdir]: + refresh = True + self._toc_mtimes[subdir] = mtime + if not refresh: return - - # We record the current time - 1sec so that, if _refresh() is called - # again in the same second, we will always re-read the mailbox - # just in case it's been modified. (os.path.mtime() only has - # 1sec resolution.) This results in a few unnecessary re-reads - # when _refresh() is called multiple times in the same second, - # but once the clock ticks over, we will only re-read as needed. 
- now = time.time() - 1 - + # Refresh toc self._toc = {} - def update_dir (subdir): - path = os.path.join(self._path, subdir) + for subdir in self._toc_mtimes: + path = self._paths[subdir] for entry in os.listdir(path): p = os.path.join(path, entry) if os.path.isdir(p): continue uniq = entry.split(self.colon)[0] self._toc[uniq] = os.path.join(subdir, entry) - - update_dir('new') - update_dir('cur') - - self._last_read = now + self._last_read = time.time() def _lookup(self, key): """Use TOC to return subpath for given key, or raise a KeyError.""" @@ -551,7 +556,7 @@ f = open(self._path, 'wb+') else: raise NoSuchMailboxError(self._path) - elif e.errno == errno.EACCES: + elif e.errno in (errno.EACCES, errno.EROFS): f = open(self._path, 'rb') else: raise @@ -700,9 +705,14 @@ def _append_message(self, message): """Append message to mailbox and return (start, stop) offsets.""" self._file.seek(0, 2) - self._pre_message_hook(self._file) - offsets = self._install_message(message) - self._post_message_hook(self._file) + before = self._file.tell() + try: + self._pre_message_hook(self._file) + offsets = self._install_message(message) + self._post_message_hook(self._file) + except BaseException: + self._file.truncate(before) + raise self._file.flush() self._file_length = self._file.tell() # Record current length of mailbox return offsets @@ -868,18 +878,29 @@ new_key = max(keys) + 1 new_path = os.path.join(self._path, str(new_key)) f = _create_carefully(new_path) + closed = False try: if self._locked: _lock_file(f) try: - self._dump_message(message, f) + try: + self._dump_message(message, f) + except BaseException: + # Unlock and close so it can be deleted on Windows + if self._locked: + _unlock_file(f) + _sync_close(f) + closed = True + os.remove(new_path) + raise if isinstance(message, MHMessage): self._dump_sequences(message, new_key) finally: if self._locked: _unlock_file(f) finally: - _sync_close(f) + if not closed: + _sync_close(f) return new_key def remove(self, key): @@ -1886,7 +1907,7 @@ try: fcntl.lockf(f, fcntl.LOCK_EX | fcntl.LOCK_NB) except IOError, e: - if e.errno in (errno.EAGAIN, errno.EACCES): + if e.errno in (errno.EAGAIN, errno.EACCES, errno.EROFS): raise ExternalClashError('lockf: lock unavailable: %s' % f.name) else: @@ -1896,7 +1917,7 @@ pre_lock = _create_temporary(f.name + '.lock') pre_lock.close() except IOError, e: - if e.errno == errno.EACCES: + if e.errno in (errno.EACCES, errno.EROFS): return # Without write access, just skip dotlocking. 
else: raise diff --git a/lib-python/2.7/msilib/__init__.py b/lib-python/2.7/msilib/__init__.py --- a/lib-python/2.7/msilib/__init__.py +++ b/lib-python/2.7/msilib/__init__.py @@ -173,11 +173,10 @@ add_data(db, table, getattr(module, table)) def make_id(str): - #str = str.replace(".", "_") # colons are allowed - str = str.replace(" ", "_") - str = str.replace("-", "_") - if str[0] in string.digits: - str = "_"+str + identifier_chars = string.ascii_letters + string.digits + "._" + str = "".join([c if c in identifier_chars else "_" for c in str]) + if str[0] in (string.digits + "."): + str = "_" + str assert re.match("^[A-Za-z_][A-Za-z0-9_.]*$", str), "FILE"+str return str @@ -285,19 +284,28 @@ [(feature.id, component)]) def make_short(self, file): + oldfile = file + file = file.replace('+', '_') + file = ''.join(c for c in file if not c in ' "/\[]:;=,') parts = file.split(".") - if len(parts)>1: + if len(parts) > 1: + prefix = "".join(parts[:-1]).upper() suffix = parts[-1].upper() + if not prefix: + prefix = suffix + suffix = None else: + prefix = file.upper() suffix = None - prefix = parts[0].upper() - if len(prefix) <= 8 and (not suffix or len(suffix)<=3): + if len(parts) < 3 and len(prefix) <= 8 and file == oldfile and ( + not suffix or len(suffix) <= 3): if suffix: file = prefix+"."+suffix else: file = prefix - assert file not in self.short_names else: + file = None + if file is None or file in self.short_names: prefix = prefix[:6] if suffix: suffix = suffix[:3] diff --git a/lib-python/2.7/multiprocessing/__init__.py b/lib-python/2.7/multiprocessing/__init__.py --- a/lib-python/2.7/multiprocessing/__init__.py +++ b/lib-python/2.7/multiprocessing/__init__.py @@ -38,6 +38,7 @@ # HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT # LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY # OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __version__ = '0.70a1' @@ -115,8 +116,11 @@ except (ValueError, KeyError): num = 0 elif 'bsd' in sys.platform or sys.platform == 'darwin': + comm = '/sbin/sysctl -n hw.ncpu' + if sys.platform == 'darwin': + comm = '/usr' + comm try: - with os.popen('sysctl -n hw.ncpu') as p: + with os.popen(comm) as p: num = int(p.read()) except ValueError: num = 0 diff --git a/lib-python/2.7/multiprocessing/connection.py b/lib-python/2.7/multiprocessing/connection.py --- a/lib-python/2.7/multiprocessing/connection.py +++ b/lib-python/2.7/multiprocessing/connection.py @@ -3,7 +3,33 @@ # # multiprocessing/connection.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. 
+# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __all__ = [ 'Client', 'Listener', 'Pipe' ] diff --git a/lib-python/2.7/multiprocessing/dummy/__init__.py b/lib-python/2.7/multiprocessing/dummy/__init__.py --- a/lib-python/2.7/multiprocessing/dummy/__init__.py +++ b/lib-python/2.7/multiprocessing/dummy/__init__.py @@ -3,7 +3,33 @@ # # multiprocessing/dummy/__init__.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __all__ = [ diff --git a/lib-python/2.7/multiprocessing/dummy/connection.py b/lib-python/2.7/multiprocessing/dummy/connection.py --- a/lib-python/2.7/multiprocessing/dummy/connection.py +++ b/lib-python/2.7/multiprocessing/dummy/connection.py @@ -3,7 +3,33 @@ # # multiprocessing/dummy/connection.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. 
Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __all__ = [ 'Client', 'Listener', 'Pipe' ] diff --git a/lib-python/2.7/multiprocessing/forking.py b/lib-python/2.7/multiprocessing/forking.py --- a/lib-python/2.7/multiprocessing/forking.py +++ b/lib-python/2.7/multiprocessing/forking.py @@ -3,7 +3,33 @@ # # multiprocessing/forking.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # import os @@ -172,6 +198,7 @@ TERMINATE = 0x10000 WINEXE = (sys.platform == 'win32' and getattr(sys, 'frozen', False)) + WINSERVICE = sys.executable.lower().endswith("pythonservice.exe") exit = win32.ExitProcess close = win32.CloseHandle @@ -181,7 +208,7 @@ # People embedding Python want to modify it. 
# - if sys.executable.lower().endswith('pythonservice.exe'): + if WINSERVICE: _python_exe = os.path.join(sys.exec_prefix, 'python.exe') else: _python_exe = sys.executable @@ -371,7 +398,7 @@ if _logger is not None: d['log_level'] = _logger.getEffectiveLevel() - if not WINEXE: + if not WINEXE and not WINSERVICE: main_path = getattr(sys.modules['__main__'], '__file__', None) if not main_path and sys.argv[0] not in ('', '-c'): main_path = sys.argv[0] diff --git a/lib-python/2.7/multiprocessing/heap.py b/lib-python/2.7/multiprocessing/heap.py --- a/lib-python/2.7/multiprocessing/heap.py +++ b/lib-python/2.7/multiprocessing/heap.py @@ -3,7 +3,33 @@ # # multiprocessing/heap.py # -# Copyright (c) 2007-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # import bisect diff --git a/lib-python/2.7/multiprocessing/managers.py b/lib-python/2.7/multiprocessing/managers.py --- a/lib-python/2.7/multiprocessing/managers.py +++ b/lib-python/2.7/multiprocessing/managers.py @@ -4,7 +4,33 @@ # # multiprocessing/managers.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. 
+# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __all__ = [ 'BaseManager', 'SyncManager', 'BaseProxy', 'Token' ] diff --git a/lib-python/2.7/multiprocessing/pool.py b/lib-python/2.7/multiprocessing/pool.py --- a/lib-python/2.7/multiprocessing/pool.py +++ b/lib-python/2.7/multiprocessing/pool.py @@ -3,7 +3,33 @@ # # multiprocessing/pool.py # -# Copyright (c) 2007-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __all__ = ['Pool'] @@ -269,6 +295,8 @@ while pool._worker_handler._state == RUN and pool._state == RUN: pool._maintain_pool() time.sleep(0.1) + # send sentinel to stop workers + pool._taskqueue.put(None) debug('worker handler exiting') @staticmethod @@ -387,7 +415,6 @@ if self._state == RUN: self._state = CLOSE self._worker_handler._state = CLOSE - self._taskqueue.put(None) def terminate(self): debug('terminating pool') @@ -421,7 +448,6 @@ worker_handler._state = TERMINATE task_handler._state = TERMINATE - taskqueue.put(None) # sentinel debug('helping task handler/workers to finish') cls._help_stuff_finish(inqueue, task_handler, len(pool)) @@ -431,6 +457,11 @@ result_handler._state = TERMINATE outqueue.put(None) # sentinel + # We must wait for the worker handler to exit before terminating + # workers because we don't want workers to be restarted behind our back. 
+ debug('joining worker handler') + worker_handler.join() + # Terminate workers which haven't already finished. if pool and hasattr(pool[0], 'terminate'): debug('terminating workers') diff --git a/lib-python/2.7/multiprocessing/process.py b/lib-python/2.7/multiprocessing/process.py --- a/lib-python/2.7/multiprocessing/process.py +++ b/lib-python/2.7/multiprocessing/process.py @@ -3,7 +3,33 @@ # # multiprocessing/process.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __all__ = ['Process', 'current_process', 'active_children'] diff --git a/lib-python/2.7/multiprocessing/queues.py b/lib-python/2.7/multiprocessing/queues.py --- a/lib-python/2.7/multiprocessing/queues.py +++ b/lib-python/2.7/multiprocessing/queues.py @@ -3,7 +3,33 @@ # # multiprocessing/queues.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. 
IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __all__ = ['Queue', 'SimpleQueue', 'JoinableQueue'] diff --git a/lib-python/2.7/multiprocessing/reduction.py b/lib-python/2.7/multiprocessing/reduction.py --- a/lib-python/2.7/multiprocessing/reduction.py +++ b/lib-python/2.7/multiprocessing/reduction.py @@ -4,7 +4,33 @@ # # multiprocessing/reduction.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __all__ = [] diff --git a/lib-python/2.7/multiprocessing/sharedctypes.py b/lib-python/2.7/multiprocessing/sharedctypes.py --- a/lib-python/2.7/multiprocessing/sharedctypes.py +++ b/lib-python/2.7/multiprocessing/sharedctypes.py @@ -3,7 +3,33 @@ # # multiprocessing/sharedctypes.py # -# Copyright (c) 2007-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. 
+# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # import sys @@ -52,9 +78,11 @@ Returns a ctypes array allocated from shared memory ''' type_ = typecode_to_type.get(typecode_or_type, typecode_or_type) - if isinstance(size_or_initializer, int): + if isinstance(size_or_initializer, (int, long)): type_ = type_ * size_or_initializer - return _new_value(type_) + obj = _new_value(type_) + ctypes.memset(ctypes.addressof(obj), 0, ctypes.sizeof(obj)) + return obj else: type_ = type_ * len(size_or_initializer) result = _new_value(type_) diff --git a/lib-python/2.7/multiprocessing/synchronize.py b/lib-python/2.7/multiprocessing/synchronize.py --- a/lib-python/2.7/multiprocessing/synchronize.py +++ b/lib-python/2.7/multiprocessing/synchronize.py @@ -3,7 +3,33 @@ # # multiprocessing/synchronize.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __all__ = [ diff --git a/lib-python/2.7/multiprocessing/util.py b/lib-python/2.7/multiprocessing/util.py --- a/lib-python/2.7/multiprocessing/util.py +++ b/lib-python/2.7/multiprocessing/util.py @@ -3,7 +3,33 @@ # # multiprocessing/util.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. 
+# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # import itertools diff --git a/lib-python/2.7/netrc.py b/lib-python/2.7/netrc.py --- a/lib-python/2.7/netrc.py +++ b/lib-python/2.7/netrc.py @@ -34,11 +34,19 @@ def _parse(self, file, fp): lexer = shlex.shlex(fp) lexer.wordchars += r"""!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~""" + lexer.commenters = lexer.commenters.replace('#', '') while 1: # Look for a machine, default, or macdef top-level keyword toplevel = tt = lexer.get_token() if not tt: break + elif tt[0] == '#': + # seek to beginning of comment, in case reading the token put + # us on a new line, and then skip the rest of the line. + pos = len(tt) + 1 + lexer.instream.seek(-pos, 1) + lexer.instream.readline() + continue elif tt == 'machine': entryname = lexer.get_token() elif tt == 'default': @@ -64,8 +72,8 @@ self.hosts[entryname] = {} while 1: tt = lexer.get_token() - if (tt=='' or tt == 'machine' or - tt == 'default' or tt =='macdef'): + if (tt.startswith('#') or + tt in {'', 'machine', 'default', 'macdef'}): if password: self.hosts[entryname] = (login, account, password) lexer.push_token(tt) diff --git a/lib-python/2.7/nntplib.py b/lib-python/2.7/nntplib.py --- a/lib-python/2.7/nntplib.py +++ b/lib-python/2.7/nntplib.py @@ -103,7 +103,7 @@ readermode is sometimes necessary if you are connecting to an NNTP server on the local machine and intend to call - reader-specific comamnds, such as `group'. If you get + reader-specific commands, such as `group'. If you get unexpected NNTPPermanentErrors, you might need to set readermode. """ diff --git a/lib-python/2.7/ntpath.py b/lib-python/2.7/ntpath.py --- a/lib-python/2.7/ntpath.py +++ b/lib-python/2.7/ntpath.py @@ -310,7 +310,7 @@ # - $varname is accepted. # - %varname% is accepted. # - varnames can be made out of letters, digits and the characters '_-' -# (though is not verifed in the ${varname} and %varname% cases) +# (though is not verified in the ${varname} and %varname% cases) # XXX With COMMAND.COM you can use any characters in a variable name, # XXX except '^|<>='. 
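(Illustrative sketch, not part of the patch: the netrc change above removes '#' from the shlex commenter set and skips comment lines by hand, so a password containing '#' should now survive parsing while whole-line comments are still ignored. The host name, login and password below are made-up values, and the file is a throwaway temp file.)

    import netrc
    import os
    import tempfile

    # Write a tiny netrc file: one comment line, one machine entry whose
    # password contains '#'.
    fd, path = tempfile.mkstemp()
    os.write(fd, "# a comment line\n"
                 "machine example.com login alice password p#ss\n")
    os.close(fd)
    try:
        # authenticators() returns the (login, account, password) tuple.
        auth = netrc.netrc(path).authenticators("example.com")
        print auth[0], auth[2]   # expected with the fix: alice p#ss
    finally:
        os.remove(path)
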
diff --git a/lib-python/2.7/nturl2path.py b/lib-python/2.7/nturl2path.py --- a/lib-python/2.7/nturl2path.py +++ b/lib-python/2.7/nturl2path.py @@ -25,11 +25,14 @@ error = 'Bad URL: ' + url raise IOError, error drive = comp[0][-1].upper() + path = drive + ':' components = comp[1].split('/') - path = drive + ':' - for comp in components: + for comp in components: if comp: path = path + '\\' + urllib.unquote(comp) + # Issue #11474: url like '/C|/' should convert into 'C:\\' + if path.endswith(':') and url.endswith('/'): + path += '\\' return path def pathname2url(p): diff --git a/lib-python/2.7/numbers.py b/lib-python/2.7/numbers.py --- a/lib-python/2.7/numbers.py +++ b/lib-python/2.7/numbers.py @@ -63,7 +63,7 @@ @abstractproperty def imag(self): - """Retrieve the real component of this number. + """Retrieve the imaginary component of this number. This should subclass Real. """ diff --git a/lib-python/2.7/optparse.py b/lib-python/2.7/optparse.py --- a/lib-python/2.7/optparse.py +++ b/lib-python/2.7/optparse.py @@ -1131,6 +1131,11 @@ prog : string the name of the current program (to override os.path.basename(sys.argv[0])). + description : string + A paragraph of text giving a brief overview of your program. + optparse reformats this paragraph to fit the current terminal + width and prints it when the user requests help (after usage, + but before the list of options). epilog : string paragraph of help text to print after option help diff --git a/lib-python/2.7/pickletools.py b/lib-python/2.7/pickletools.py --- a/lib-python/2.7/pickletools.py +++ b/lib-python/2.7/pickletools.py @@ -1370,7 +1370,7 @@ proto=0, doc="""Read an object from the memo and push it on the stack. - The index of the memo object to push is given by the newline-teriminated + The index of the memo object to push is given by the newline-terminated decimal string following. BINGET and LONG_BINGET are space-optimized versions. """), diff --git a/lib-python/2.7/pkgutil.py b/lib-python/2.7/pkgutil.py --- a/lib-python/2.7/pkgutil.py +++ b/lib-python/2.7/pkgutil.py @@ -11,7 +11,7 @@ __all__ = [ 'get_importer', 'iter_importers', 'get_loader', 'find_loader', - 'walk_packages', 'iter_modules', + 'walk_packages', 'iter_modules', 'get_data', 'ImpImporter', 'ImpLoader', 'read_code', 'extend_path', ] diff --git a/lib-python/2.7/platform.py b/lib-python/2.7/platform.py --- a/lib-python/2.7/platform.py +++ b/lib-python/2.7/platform.py @@ -503,7 +503,7 @@ info = pipe.read() if pipe.close(): raise os.error,'command failed' - # XXX How can I supress shell errors from being written + # XXX How can I suppress shell errors from being written # to stderr ? except os.error,why: #print 'Command %s failed: %s' % (cmd,why) @@ -1448,9 +1448,10 @@ """ Returns a string identifying the Python implementation. Currently, the following implementations are identified: - 'CPython' (C implementation of Python), - 'IronPython' (.NET implementation of Python), - 'Jython' (Java implementation of Python). + 'CPython' (C implementation of Python), + 'IronPython' (.NET implementation of Python), + 'Jython' (Java implementation of Python), + 'PyPy' (Python implementation of Python). """ return _sys_version()[0] diff --git a/lib-python/2.7/pydoc.py b/lib-python/2.7/pydoc.py --- a/lib-python/2.7/pydoc.py +++ b/lib-python/2.7/pydoc.py @@ -156,7 +156,7 @@ no.append(x) return yes, no -def visiblename(name, all=None): +def visiblename(name, all=None, obj=None): """Decide whether to show documentation on a variable.""" # Certain special names are redundant. 
_hidden_names = ('__builtins__', '__doc__', '__file__', '__path__', @@ -164,6 +164,9 @@ if name in _hidden_names: return 0 # Private names are hidden, but special names are displayed. if name.startswith('__') and name.endswith('__'): return 1 + # Namedtuples have public fields and methods with a single leading underscore + if name.startswith('_') and hasattr(obj, '_fields'): + return 1 if all is not None: # only document that which the programmer exported in __all__ return name in all @@ -475,9 +478,9 @@ def multicolumn(self, list, format, cols=4): """Format a list of items into a multi-column list.""" result = '' - rows = (len(list)+cols-1)/cols + rows = (len(list)+cols-1)//cols for col in range(cols): - result = result + '' % (100/cols) + result = result + '' % (100//cols) for i in range(rows*col, rows*col+rows): if i < len(list): result = result + format(list[i]) + '
    \n' @@ -627,7 +630,7 @@ # if __all__ exists, believe it. Otherwise use old heuristic. if (all is not None or (inspect.getmodule(value) or object) is object): - if visiblename(key, all): + if visiblename(key, all, object): classes.append((key, value)) cdict[key] = cdict[value] = '#' + key for key, value in classes: @@ -643,13 +646,13 @@ # if __all__ exists, believe it. Otherwise use old heuristic. if (all is not None or inspect.isbuiltin(value) or inspect.getmodule(value) is object): - if visiblename(key, all): + if visiblename(key, all, object): funcs.append((key, value)) fdict[key] = '#-' + key if inspect.isfunction(value): fdict[value] = fdict[key] data = [] for key, value in inspect.getmembers(object, isdata): - if visiblename(key, all): + if visiblename(key, all, object): data.append((key, value)) doc = self.markup(getdoc(object), self.preformat, fdict, cdict) @@ -773,7 +776,7 @@ push('\n') return attrs - attrs = filter(lambda data: visiblename(data[0]), + attrs = filter(lambda data: visiblename(data[0], obj=object), classify_class_attrs(object)) mdict = {} for key, kind, homecls, value in attrs: @@ -1042,18 +1045,18 @@ # if __all__ exists, believe it. Otherwise use old heuristic. if (all is not None or (inspect.getmodule(value) or object) is object): - if visiblename(key, all): + if visiblename(key, all, object): classes.append((key, value)) funcs = [] for key, value in inspect.getmembers(object, inspect.isroutine): # if __all__ exists, believe it. Otherwise use old heuristic. if (all is not None or inspect.isbuiltin(value) or inspect.getmodule(value) is object): - if visiblename(key, all): + if visiblename(key, all, object): funcs.append((key, value)) data = [] for key, value in inspect.getmembers(object, isdata): - if visiblename(key, all): + if visiblename(key, all, object): data.append((key, value)) modpkgs = [] @@ -1113,7 +1116,7 @@ result = result + self.section('CREDITS', str(object.__credits__)) return result - def docclass(self, object, name=None, mod=None): + def docclass(self, object, name=None, mod=None, *ignored): """Produce text documentation for a given class object.""" realname = object.__name__ name = name or realname @@ -1186,7 +1189,7 @@ name, mod, maxlen=70, doc=doc) + '\n') return attrs - attrs = filter(lambda data: visiblename(data[0]), + attrs = filter(lambda data: visiblename(data[0], obj=object), classify_class_attrs(object)) while attrs: if mro: @@ -1718,8 +1721,9 @@ return '' return '' - def __call__(self, request=None): - if request is not None: + _GoInteractive = object() + def __call__(self, request=_GoInteractive): + if request is not self._GoInteractive: self.help(request) else: self.intro() diff --git a/lib-python/2.7/pydoc_data/topics.py b/lib-python/2.7/pydoc_data/topics.py --- a/lib-python/2.7/pydoc_data/topics.py +++ b/lib-python/2.7/pydoc_data/topics.py @@ -1,16 +1,16 @@ -# Autogenerated by Sphinx on Sat Jul 3 08:52:04 2010 +# Autogenerated by Sphinx on Sat Jun 11 09:49:30 2011 topics = {'assert': u'\nThe ``assert`` statement\n************************\n\nAssert statements are a convenient way to insert debugging assertions\ninto a program:\n\n assert_stmt ::= "assert" expression ["," expression]\n\nThe simple form, ``assert expression``, is equivalent to\n\n if __debug__:\n if not expression: raise AssertionError\n\nThe extended form, ``assert expression1, expression2``, is equivalent\nto\n\n if __debug__:\n if not expression1: raise AssertionError(expression2)\n\nThese equivalences assume that ``__debug__`` and ``AssertionError``\nrefer to 
the built-in variables with those names. In the current\nimplementation, the built-in variable ``__debug__`` is ``True`` under\nnormal circumstances, ``False`` when optimization is requested\n(command line option -O). The current code generator emits no code\nfor an assert statement when optimization is requested at compile\ntime. Note that it is unnecessary to include the source code for the\nexpression that failed in the error message; it will be displayed as\npart of the stack trace.\n\nAssignments to ``__debug__`` are illegal. The value for the built-in\nvariable is determined when the interpreter starts.\n', - 'assignment': u'\nAssignment statements\n*********************\n\nAssignment statements are used to (re)bind names to values and to\nmodify attributes or items of mutable objects:\n\n assignment_stmt ::= (target_list "=")+ (expression_list | yield_expression)\n target_list ::= target ("," target)* [","]\n target ::= identifier\n | "(" target_list ")"\n | "[" target_list "]"\n | attributeref\n | subscription\n | slicing\n\n(See section *Primaries* for the syntax definitions for the last three\nsymbols.)\n\nAn assignment statement evaluates the expression list (remember that\nthis can be a single expression or a comma-separated list, the latter\nyielding a tuple) and assigns the single resulting object to each of\nthe target lists, from left to right.\n\nAssignment is defined recursively depending on the form of the target\n(list). When a target is part of a mutable object (an attribute\nreference, subscription or slicing), the mutable object must\nultimately perform the assignment and decide about its validity, and\nmay raise an exception if the assignment is unacceptable. The rules\nobserved by various types and the exceptions raised are given with the\ndefinition of the object types (see section *The standard type\nhierarchy*).\n\nAssignment of an object to a target list is recursively defined as\nfollows.\n\n* If the target list is a single target: The object is assigned to\n that target.\n\n* If the target list is a comma-separated list of targets: The object\n must be an iterable with the same number of items as there are\n targets in the target list, and the items are assigned, from left to\n right, to the corresponding targets. (This rule is relaxed as of\n Python 1.5; in earlier versions, the object had to be a tuple.\n Since strings are sequences, an assignment like ``a, b = "xy"`` is\n now legal as long as the string has the right length.)\n\nAssignment of an object to a single target is recursively defined as\nfollows.\n\n* If the target is an identifier (name):\n\n * If the name does not occur in a ``global`` statement in the\n current code block: the name is bound to the object in the current\n local namespace.\n\n * Otherwise: the name is bound to the object in the current global\n namespace.\n\n The name is rebound if it was already bound. This may cause the\n reference count for the object previously bound to the name to reach\n zero, causing the object to be deallocated and its destructor (if it\n has one) to be called.\n\n* If the target is a target list enclosed in parentheses or in square\n brackets: The object must be an iterable with the same number of\n items as there are targets in the target list, and its items are\n assigned, from left to right, to the corresponding targets.\n\n* If the target is an attribute reference: The primary expression in\n the reference is evaluated. 
It should yield an object with\n assignable attributes; if this is not the case, ``TypeError`` is\n raised. That object is then asked to assign the assigned object to\n the given attribute; if it cannot perform the assignment, it raises\n an exception (usually but not necessarily ``AttributeError``).\n\n Note: If the object is a class instance and the attribute reference\n occurs on both sides of the assignment operator, the RHS expression,\n ``a.x`` can access either an instance attribute or (if no instance\n attribute exists) a class attribute. The LHS target ``a.x`` is\n always set as an instance attribute, creating it if necessary.\n Thus, the two occurrences of ``a.x`` do not necessarily refer to the\n same attribute: if the RHS expression refers to a class attribute,\n the LHS creates a new instance attribute as the target of the\n assignment:\n\n class Cls:\n x = 3 # class variable\n inst = Cls()\n inst.x = inst.x + 1 # writes inst.x as 4 leaving Cls.x as 3\n\n This description does not necessarily apply to descriptor\n attributes, such as properties created with ``property()``.\n\n* If the target is a subscription: The primary expression in the\n reference is evaluated. It should yield either a mutable sequence\n object (such as a list) or a mapping object (such as a dictionary).\n Next, the subscript expression is evaluated.\n\n If the primary is a mutable sequence object (such as a list), the\n subscript must yield a plain integer. If it is negative, the\n sequence\'s length is added to it. The resulting value must be a\n nonnegative integer less than the sequence\'s length, and the\n sequence is asked to assign the assigned object to its item with\n that index. If the index is out of range, ``IndexError`` is raised\n (assignment to a subscripted sequence cannot add new items to a\n list).\n\n If the primary is a mapping object (such as a dictionary), the\n subscript must have a type compatible with the mapping\'s key type,\n and the mapping is then asked to create a key/datum pair which maps\n the subscript to the assigned object. This can either replace an\n existing key/value pair with the same key value, or insert a new\n key/value pair (if no key with the same value existed).\n\n* If the target is a slicing: The primary expression in the reference\n is evaluated. It should yield a mutable sequence object (such as a\n list). The assigned object should be a sequence object of the same\n type. Next, the lower and upper bound expressions are evaluated,\n insofar they are present; defaults are zero and the sequence\'s\n length. The bounds should evaluate to (small) integers. If either\n bound is negative, the sequence\'s length is added to it. The\n resulting bounds are clipped to lie between zero and the sequence\'s\n length, inclusive. Finally, the sequence object is asked to replace\n the slice with the items of the assigned sequence. 
The length of\n the slice may be different from the length of the assigned sequence,\n thus changing the length of the target sequence, if the object\n allows it.\n\n**CPython implementation detail:** In the current implementation, the\nsyntax for targets is taken to be the same as for expressions, and\ninvalid syntax is rejected during the code generation phase, causing\nless detailed error messages.\n\nWARNING: Although the definition of assignment implies that overlaps\nbetween the left-hand side and the right-hand side are \'safe\' (for\nexample ``a, b = b, a`` swaps two variables), overlaps *within* the\ncollection of assigned-to variables are not safe! For instance, the\nfollowing program prints ``[0, 2]``:\n\n x = [0, 1]\n i = 0\n i, x[i] = 1, 2\n print x\n\n\nAugmented assignment statements\n===============================\n\nAugmented assignment is the combination, in a single statement, of a\nbinary operation and an assignment statement:\n\n augmented_assignment_stmt ::= augtarget augop (expression_list | yield_expression)\n augtarget ::= identifier | attributeref | subscription | slicing\n augop ::= "+=" | "-=" | "*=" | "/=" | "//=" | "%=" | "**="\n | ">>=" | "<<=" | "&=" | "^=" | "|="\n\n(See section *Primaries* for the syntax definitions for the last three\nsymbols.)\n\nAn augmented assignment evaluates the target (which, unlike normal\nassignment statements, cannot be an unpacking) and the expression\nlist, performs the binary operation specific to the type of assignment\non the two operands, and assigns the result to the original target.\nThe target is only evaluated once.\n\nAn augmented assignment expression like ``x += 1`` can be rewritten as\n``x = x + 1`` to achieve a similar, but not exactly equal effect. In\nthe augmented version, ``x`` is only evaluated once. Also, when\npossible, the actual operation is performed *in-place*, meaning that\nrather than creating a new object and assigning that to the target,\nthe old object is modified instead.\n\nWith the exception of assigning to tuples and multiple targets in a\nsingle statement, the assignment done by augmented assignment\nstatements is handled the same way as normal assignments. Similarly,\nwith the exception of the possible *in-place* behavior, the binary\noperation performed by augmented assignment is the same as the normal\nbinary operations.\n\nFor targets which are attribute references, the same *caveat about\nclass and instance attributes* applies as for regular assignments.\n', + 'assignment': u'\nAssignment statements\n*********************\n\nAssignment statements are used to (re)bind names to values and to\nmodify attributes or items of mutable objects:\n\n assignment_stmt ::= (target_list "=")+ (expression_list | yield_expression)\n target_list ::= target ("," target)* [","]\n target ::= identifier\n | "(" target_list ")"\n | "[" target_list "]"\n | attributeref\n | subscription\n | slicing\n\n(See section *Primaries* for the syntax definitions for the last three\nsymbols.)\n\nAn assignment statement evaluates the expression list (remember that\nthis can be a single expression or a comma-separated list, the latter\nyielding a tuple) and assigns the single resulting object to each of\nthe target lists, from left to right.\n\nAssignment is defined recursively depending on the form of the target\n(list). 
When a target is part of a mutable object (an attribute\nreference, subscription or slicing), the mutable object must\nultimately perform the assignment and decide about its validity, and\nmay raise an exception if the assignment is unacceptable. The rules\nobserved by various types and the exceptions raised are given with the\ndefinition of the object types (see section *The standard type\nhierarchy*).\n\nAssignment of an object to a target list is recursively defined as\nfollows.\n\n* If the target list is a single target: The object is assigned to\n that target.\n\n* If the target list is a comma-separated list of targets: The object\n must be an iterable with the same number of items as there are\n targets in the target list, and the items are assigned, from left to\n right, to the corresponding targets.\n\nAssignment of an object to a single target is recursively defined as\nfollows.\n\n* If the target is an identifier (name):\n\n * If the name does not occur in a ``global`` statement in the\n current code block: the name is bound to the object in the current\n local namespace.\n\n * Otherwise: the name is bound to the object in the current global\n namespace.\n\n The name is rebound if it was already bound. This may cause the\n reference count for the object previously bound to the name to reach\n zero, causing the object to be deallocated and its destructor (if it\n has one) to be called.\n\n* If the target is a target list enclosed in parentheses or in square\n brackets: The object must be an iterable with the same number of\n items as there are targets in the target list, and its items are\n assigned, from left to right, to the corresponding targets.\n\n* If the target is an attribute reference: The primary expression in\n the reference is evaluated. It should yield an object with\n assignable attributes; if this is not the case, ``TypeError`` is\n raised. That object is then asked to assign the assigned object to\n the given attribute; if it cannot perform the assignment, it raises\n an exception (usually but not necessarily ``AttributeError``).\n\n Note: If the object is a class instance and the attribute reference\n occurs on both sides of the assignment operator, the RHS expression,\n ``a.x`` can access either an instance attribute or (if no instance\n attribute exists) a class attribute. The LHS target ``a.x`` is\n always set as an instance attribute, creating it if necessary.\n Thus, the two occurrences of ``a.x`` do not necessarily refer to the\n same attribute: if the RHS expression refers to a class attribute,\n the LHS creates a new instance attribute as the target of the\n assignment:\n\n class Cls:\n x = 3 # class variable\n inst = Cls()\n inst.x = inst.x + 1 # writes inst.x as 4 leaving Cls.x as 3\n\n This description does not necessarily apply to descriptor\n attributes, such as properties created with ``property()``.\n\n* If the target is a subscription: The primary expression in the\n reference is evaluated. It should yield either a mutable sequence\n object (such as a list) or a mapping object (such as a dictionary).\n Next, the subscript expression is evaluated.\n\n If the primary is a mutable sequence object (such as a list), the\n subscript must yield a plain integer. If it is negative, the\n sequence\'s length is added to it. The resulting value must be a\n nonnegative integer less than the sequence\'s length, and the\n sequence is asked to assign the assigned object to its item with\n that index. 
If the index is out of range, ``IndexError`` is raised\n (assignment to a subscripted sequence cannot add new items to a\n list).\n\n If the primary is a mapping object (such as a dictionary), the\n subscript must have a type compatible with the mapping\'s key type,\n and the mapping is then asked to create a key/datum pair which maps\n the subscript to the assigned object. This can either replace an\n existing key/value pair with the same key value, or insert a new\n key/value pair (if no key with the same value existed).\n\n* If the target is a slicing: The primary expression in the reference\n is evaluated. It should yield a mutable sequence object (such as a\n list). The assigned object should be a sequence object of the same\n type. Next, the lower and upper bound expressions are evaluated,\n insofar they are present; defaults are zero and the sequence\'s\n length. The bounds should evaluate to (small) integers. If either\n bound is negative, the sequence\'s length is added to it. The\n resulting bounds are clipped to lie between zero and the sequence\'s\n length, inclusive. Finally, the sequence object is asked to replace\n the slice with the items of the assigned sequence. The length of\n the slice may be different from the length of the assigned sequence,\n thus changing the length of the target sequence, if the object\n allows it.\n\n**CPython implementation detail:** In the current implementation, the\nsyntax for targets is taken to be the same as for expressions, and\ninvalid syntax is rejected during the code generation phase, causing\nless detailed error messages.\n\nWARNING: Although the definition of assignment implies that overlaps\nbetween the left-hand side and the right-hand side are \'safe\' (for\nexample ``a, b = b, a`` swaps two variables), overlaps *within* the\ncollection of assigned-to variables are not safe! For instance, the\nfollowing program prints ``[0, 2]``:\n\n x = [0, 1]\n i = 0\n i, x[i] = 1, 2\n print x\n\n\nAugmented assignment statements\n===============================\n\nAugmented assignment is the combination, in a single statement, of a\nbinary operation and an assignment statement:\n\n augmented_assignment_stmt ::= augtarget augop (expression_list | yield_expression)\n augtarget ::= identifier | attributeref | subscription | slicing\n augop ::= "+=" | "-=" | "*=" | "/=" | "//=" | "%=" | "**="\n | ">>=" | "<<=" | "&=" | "^=" | "|="\n\n(See section *Primaries* for the syntax definitions for the last three\nsymbols.)\n\nAn augmented assignment evaluates the target (which, unlike normal\nassignment statements, cannot be an unpacking) and the expression\nlist, performs the binary operation specific to the type of assignment\non the two operands, and assigns the result to the original target.\nThe target is only evaluated once.\n\nAn augmented assignment expression like ``x += 1`` can be rewritten as\n``x = x + 1`` to achieve a similar, but not exactly equal effect. In\nthe augmented version, ``x`` is only evaluated once. Also, when\npossible, the actual operation is performed *in-place*, meaning that\nrather than creating a new object and assigning that to the target,\nthe old object is modified instead.\n\nWith the exception of assigning to tuples and multiple targets in a\nsingle statement, the assignment done by augmented assignment\nstatements is handled the same way as normal assignments. 
Similarly,\nwith the exception of the possible *in-place* behavior, the binary\noperation performed by augmented assignment is the same as the normal\nbinary operations.\n\nFor targets which are attribute references, the same *caveat about\nclass and instance attributes* applies as for regular assignments.\n', 'atom-identifiers': u'\nIdentifiers (Names)\n*******************\n\nAn identifier occurring as an atom is a name. See section\n*Identifiers and keywords* for lexical definition and section *Naming\nand binding* for documentation of naming and binding.\n\nWhen the name is bound to an object, evaluation of the atom yields\nthat object. When a name is not bound, an attempt to evaluate it\nraises a ``NameError`` exception.\n\n**Private name mangling:** When an identifier that textually occurs in\na class definition begins with two or more underscore characters and\ndoes not end in two or more underscores, it is considered a *private\nname* of that class. Private names are transformed to a longer form\nbefore code is generated for them. The transformation inserts the\nclass name in front of the name, with leading underscores removed, and\na single underscore inserted in front of the class name. For example,\nthe identifier ``__spam`` occurring in a class named ``Ham`` will be\ntransformed to ``_Ham__spam``. This transformation is independent of\nthe syntactical context in which the identifier is used. If the\ntransformed name is extremely long (longer than 255 characters),\nimplementation defined truncation may happen. If the class name\nconsists only of underscores, no transformation is done.\n', 'atom-literals': u"\nLiterals\n********\n\nPython supports string literals and various numeric literals:\n\n literal ::= stringliteral | integer | longinteger\n | floatnumber | imagnumber\n\nEvaluation of a literal yields an object of the given type (string,\ninteger, long integer, floating point number, complex number) with the\ngiven value. The value may be approximated in the case of floating\npoint and imaginary (complex) literals. See section *Literals* for\ndetails.\n\nAll literals correspond to immutable data types, and hence the\nobject's identity is less important than its value. Multiple\nevaluations of literals with the same value (either the same\noccurrence in the program text or a different occurrence) may obtain\nthe same object or a different object with the same value.\n", - 'attribute-access': u'\nCustomizing attribute access\n****************************\n\nThe following methods can be defined to customize the meaning of\nattribute access (use of, assignment to, or deletion of ``x.name``)\nfor class instances.\n\nobject.__getattr__(self, name)\n\n Called when an attribute lookup has not found the attribute in the\n usual places (i.e. it is not an instance attribute nor is it found\n in the class tree for ``self``). ``name`` is the attribute name.\n This method should return the (computed) attribute value or raise\n an ``AttributeError`` exception.\n\n Note that if the attribute is found through the normal mechanism,\n ``__getattr__()`` is not called. (This is an intentional asymmetry\n between ``__getattr__()`` and ``__setattr__()``.) This is done both\n for efficiency reasons and because otherwise ``__getattr__()``\n would have no way to access other attributes of the instance. Note\n that at least for instance variables, you can fake total control by\n not inserting any values in the instance attribute dictionary (but\n instead inserting them in another object). 
See the\n ``__getattribute__()`` method below for a way to actually get total\n control in new-style classes.\n\nobject.__setattr__(self, name, value)\n\n Called when an attribute assignment is attempted. This is called\n instead of the normal mechanism (i.e. store the value in the\n instance dictionary). *name* is the attribute name, *value* is the\n value to be assigned to it.\n\n If ``__setattr__()`` wants to assign to an instance attribute, it\n should not simply execute ``self.name = value`` --- this would\n cause a recursive call to itself. Instead, it should insert the\n value in the dictionary of instance attributes, e.g.,\n ``self.__dict__[name] = value``. For new-style classes, rather\n than accessing the instance dictionary, it should call the base\n class method with the same name, for example,\n ``object.__setattr__(self, name, value)``.\n\nobject.__delattr__(self, name)\n\n Like ``__setattr__()`` but for attribute deletion instead of\n assignment. This should only be implemented if ``del obj.name`` is\n meaningful for the object.\n\n\nMore attribute access for new-style classes\n===========================================\n\nThe following methods only apply to new-style classes.\n\nobject.__getattribute__(self, name)\n\n Called unconditionally to implement attribute accesses for\n instances of the class. If the class also defines\n ``__getattr__()``, the latter will not be called unless\n ``__getattribute__()`` either calls it explicitly or raises an\n ``AttributeError``. This method should return the (computed)\n attribute value or raise an ``AttributeError`` exception. In order\n to avoid infinite recursion in this method, its implementation\n should always call the base class method with the same name to\n access any attributes it needs, for example,\n ``object.__getattribute__(self, name)``.\n\n Note: This method may still be bypassed when looking up special methods\n as the result of implicit invocation via language syntax or\n built-in functions. See *Special method lookup for new-style\n classes*.\n\n\nImplementing Descriptors\n========================\n\nThe following methods only apply when an instance of the class\ncontaining the method (a so-called *descriptor* class) appears in the\nclass dictionary of another new-style class, known as the *owner*\nclass. In the examples below, "the attribute" refers to the attribute\nwhose name is the key of the property in the owner class\'\n``__dict__``. Descriptors can only be implemented as new-style\nclasses themselves.\n\nobject.__get__(self, instance, owner)\n\n Called to get the attribute of the owner class (class attribute\n access) or of an instance of that class (instance attribute\n access). *owner* is always the owner class, while *instance* is the\n instance that the attribute was accessed through, or ``None`` when\n the attribute is accessed through the *owner*. This method should\n return the (computed) attribute value or raise an\n ``AttributeError`` exception.\n\nobject.__set__(self, instance, value)\n\n Called to set the attribute on an instance *instance* of the owner\n class to a new value, *value*.\n\nobject.__delete__(self, instance)\n\n Called to delete the attribute on an instance *instance* of the\n owner class.\n\n\nInvoking Descriptors\n====================\n\nIn general, a descriptor is an object attribute with "binding\nbehavior", one whose attribute access has been overridden by methods\nin the descriptor protocol: ``__get__()``, ``__set__()``, and\n``__delete__()``. 
If any of those methods are defined for an object,\nit is said to be a descriptor.\n\nThe default behavior for attribute access is to get, set, or delete\nthe attribute from an object\'s dictionary. For instance, ``a.x`` has a\nlookup chain starting with ``a.__dict__[\'x\']``, then\n``type(a).__dict__[\'x\']``, and continuing through the base classes of\n``type(a)`` excluding metaclasses.\n\nHowever, if the looked-up value is an object defining one of the\ndescriptor methods, then Python may override the default behavior and\ninvoke the descriptor method instead. Where this occurs in the\nprecedence chain depends on which descriptor methods were defined and\nhow they were called. Note that descriptors are only invoked for new\nstyle objects or classes (ones that subclass ``object()`` or\n``type()``).\n\nThe starting point for descriptor invocation is a binding, ``a.x``.\nHow the arguments are assembled depends on ``a``:\n\nDirect Call\n The simplest and least common call is when user code directly\n invokes a descriptor method: ``x.__get__(a)``.\n\nInstance Binding\n If binding to a new-style object instance, ``a.x`` is transformed\n into the call: ``type(a).__dict__[\'x\'].__get__(a, type(a))``.\n\nClass Binding\n If binding to a new-style class, ``A.x`` is transformed into the\n call: ``A.__dict__[\'x\'].__get__(None, A)``.\n\nSuper Binding\n If ``a`` is an instance of ``super``, then the binding ``super(B,\n obj).m()`` searches ``obj.__class__.__mro__`` for the base class\n ``A`` immediately preceding ``B`` and then invokes the descriptor\n with the call: ``A.__dict__[\'m\'].__get__(obj, A)``.\n\nFor instance bindings, the precedence of descriptor invocation depends\non the which descriptor methods are defined. A descriptor can define\nany combination of ``__get__()``, ``__set__()`` and ``__delete__()``.\nIf it does not define ``__get__()``, then accessing the attribute will\nreturn the descriptor object itself unless there is a value in the\nobject\'s instance dictionary. If the descriptor defines ``__set__()``\nand/or ``__delete__()``, it is a data descriptor; if it defines\nneither, it is a non-data descriptor. Normally, data descriptors\ndefine both ``__get__()`` and ``__set__()``, while non-data\ndescriptors have just the ``__get__()`` method. Data descriptors with\n``__set__()`` and ``__get__()`` defined always override a redefinition\nin an instance dictionary. In contrast, non-data descriptors can be\noverridden by instances.\n\nPython methods (including ``staticmethod()`` and ``classmethod()``)\nare implemented as non-data descriptors. Accordingly, instances can\nredefine and override methods. This allows individual instances to\nacquire behaviors that differ from other instances of the same class.\n\nThe ``property()`` function is implemented as a data descriptor.\nAccordingly, instances cannot override the behavior of a property.\n\n\n__slots__\n=========\n\nBy default, instances of both old and new-style classes have a\ndictionary for attribute storage. This wastes space for objects\nhaving very few instance variables. The space consumption can become\nacute when creating large numbers of instances.\n\nThe default can be overridden by defining *__slots__* in a new-style\nclass definition. The *__slots__* declaration takes a sequence of\ninstance variables and reserves just enough space in each instance to\nhold a value for each variable. 
Space is saved because *__dict__* is\nnot created for each instance.\n\n__slots__\n\n This class variable can be assigned a string, iterable, or sequence\n of strings with variable names used by instances. If defined in a\n new-style class, *__slots__* reserves space for the declared\n variables and prevents the automatic creation of *__dict__* and\n *__weakref__* for each instance.\n\n New in version 2.2.\n\nNotes on using *__slots__*\n\n* When inheriting from a class without *__slots__*, the *__dict__*\n attribute of that class will always be accessible, so a *__slots__*\n definition in the subclass is meaningless.\n\n* Without a *__dict__* variable, instances cannot be assigned new\n variables not listed in the *__slots__* definition. Attempts to\n assign to an unlisted variable name raises ``AttributeError``. If\n dynamic assignment of new variables is desired, then add\n ``\'__dict__\'`` to the sequence of strings in the *__slots__*\n declaration.\n\n Changed in version 2.3: Previously, adding ``\'__dict__\'`` to the\n *__slots__* declaration would not enable the assignment of new\n attributes not specifically listed in the sequence of instance\n variable names.\n\n* Without a *__weakref__* variable for each instance, classes defining\n *__slots__* do not support weak references to its instances. If weak\n reference support is needed, then add ``\'__weakref__\'`` to the\n sequence of strings in the *__slots__* declaration.\n\n Changed in version 2.3: Previously, adding ``\'__weakref__\'`` to the\n *__slots__* declaration would not enable support for weak\n references.\n\n* *__slots__* are implemented at the class level by creating\n descriptors (*Implementing Descriptors*) for each variable name. As\n a result, class attributes cannot be used to set default values for\n instance variables defined by *__slots__*; otherwise, the class\n attribute would overwrite the descriptor assignment.\n\n* The action of a *__slots__* declaration is limited to the class\n where it is defined. As a result, subclasses will have a *__dict__*\n unless they also define *__slots__* (which must only contain names\n of any *additional* slots).\n\n* If a class defines a slot also defined in a base class, the instance\n variable defined by the base class slot is inaccessible (except by\n retrieving its descriptor directly from the base class). This\n renders the meaning of the program undefined. In the future, a\n check may be added to prevent this.\n\n* Nonempty *__slots__* does not work for classes derived from\n "variable-length" built-in types such as ``long``, ``str`` and\n ``tuple``.\n\n* Any non-string iterable may be assigned to *__slots__*. Mappings may\n also be used; however, in the future, special meaning may be\n assigned to the values corresponding to each key.\n\n* *__class__* assignment works only if both classes have the same\n *__slots__*.\n\n Changed in version 2.6: Previously, *__class__* assignment raised an\n error if either new or old class had *__slots__*.\n', + 'attribute-access': u'\nCustomizing attribute access\n****************************\n\nThe following methods can be defined to customize the meaning of\nattribute access (use of, assignment to, or deletion of ``x.name``)\nfor class instances.\n\nobject.__getattr__(self, name)\n\n Called when an attribute lookup has not found the attribute in the\n usual places (i.e. it is not an instance attribute nor is it found\n in the class tree for ``self``). 
``name`` is the attribute name.\n This method should return the (computed) attribute value or raise\n an ``AttributeError`` exception.\n\n Note that if the attribute is found through the normal mechanism,\n ``__getattr__()`` is not called. (This is an intentional asymmetry\n between ``__getattr__()`` and ``__setattr__()``.) This is done both\n for efficiency reasons and because otherwise ``__getattr__()``\n would have no way to access other attributes of the instance. Note\n that at least for instance variables, you can fake total control by\n not inserting any values in the instance attribute dictionary (but\n instead inserting them in another object). See the\n ``__getattribute__()`` method below for a way to actually get total\n control in new-style classes.\n\nobject.__setattr__(self, name, value)\n\n Called when an attribute assignment is attempted. This is called\n instead of the normal mechanism (i.e. store the value in the\n instance dictionary). *name* is the attribute name, *value* is the\n value to be assigned to it.\n\n If ``__setattr__()`` wants to assign to an instance attribute, it\n should not simply execute ``self.name = value`` --- this would\n cause a recursive call to itself. Instead, it should insert the\n value in the dictionary of instance attributes, e.g.,\n ``self.__dict__[name] = value``. For new-style classes, rather\n than accessing the instance dictionary, it should call the base\n class method with the same name, for example,\n ``object.__setattr__(self, name, value)``.\n\nobject.__delattr__(self, name)\n\n Like ``__setattr__()`` but for attribute deletion instead of\n assignment. This should only be implemented if ``del obj.name`` is\n meaningful for the object.\n\n\nMore attribute access for new-style classes\n===========================================\n\nThe following methods only apply to new-style classes.\n\nobject.__getattribute__(self, name)\n\n Called unconditionally to implement attribute accesses for\n instances of the class. If the class also defines\n ``__getattr__()``, the latter will not be called unless\n ``__getattribute__()`` either calls it explicitly or raises an\n ``AttributeError``. This method should return the (computed)\n attribute value or raise an ``AttributeError`` exception. In order\n to avoid infinite recursion in this method, its implementation\n should always call the base class method with the same name to\n access any attributes it needs, for example,\n ``object.__getattribute__(self, name)``.\n\n Note: This method may still be bypassed when looking up special methods\n as the result of implicit invocation via language syntax or\n built-in functions. See *Special method lookup for new-style\n classes*.\n\n\nImplementing Descriptors\n========================\n\nThe following methods only apply when an instance of the class\ncontaining the method (a so-called *descriptor* class) appears in an\n*owner* class (the descriptor must be in either the owner\'s class\ndictionary or in the class dictionary for one of its parents). In the\nexamples below, "the attribute" refers to the attribute whose name is\nthe key of the property in the owner class\' ``__dict__``.\n\nobject.__get__(self, instance, owner)\n\n Called to get the attribute of the owner class (class attribute\n access) or of an instance of that class (instance attribute\n access). *owner* is always the owner class, while *instance* is the\n instance that the attribute was accessed through, or ``None`` when\n the attribute is accessed through the *owner*. 
This method should\n return the (computed) attribute value or raise an\n ``AttributeError`` exception.\n\nobject.__set__(self, instance, value)\n\n Called to set the attribute on an instance *instance* of the owner\n class to a new value, *value*.\n\nobject.__delete__(self, instance)\n\n Called to delete the attribute on an instance *instance* of the\n owner class.\n\n\nInvoking Descriptors\n====================\n\nIn general, a descriptor is an object attribute with "binding\nbehavior", one whose attribute access has been overridden by methods\nin the descriptor protocol: ``__get__()``, ``__set__()``, and\n``__delete__()``. If any of those methods are defined for an object,\nit is said to be a descriptor.\n\nThe default behavior for attribute access is to get, set, or delete\nthe attribute from an object\'s dictionary. For instance, ``a.x`` has a\nlookup chain starting with ``a.__dict__[\'x\']``, then\n``type(a).__dict__[\'x\']``, and continuing through the base classes of\n``type(a)`` excluding metaclasses.\n\nHowever, if the looked-up value is an object defining one of the\ndescriptor methods, then Python may override the default behavior and\ninvoke the descriptor method instead. Where this occurs in the\nprecedence chain depends on which descriptor methods were defined and\nhow they were called. Note that descriptors are only invoked for new\nstyle objects or classes (ones that subclass ``object()`` or\n``type()``).\n\nThe starting point for descriptor invocation is a binding, ``a.x``.\nHow the arguments are assembled depends on ``a``:\n\nDirect Call\n The simplest and least common call is when user code directly\n invokes a descriptor method: ``x.__get__(a)``.\n\nInstance Binding\n If binding to a new-style object instance, ``a.x`` is transformed\n into the call: ``type(a).__dict__[\'x\'].__get__(a, type(a))``.\n\nClass Binding\n If binding to a new-style class, ``A.x`` is transformed into the\n call: ``A.__dict__[\'x\'].__get__(None, A)``.\n\nSuper Binding\n If ``a`` is an instance of ``super``, then the binding ``super(B,\n obj).m()`` searches ``obj.__class__.__mro__`` for the base class\n ``A`` immediately preceding ``B`` and then invokes the descriptor\n with the call: ``A.__dict__[\'m\'].__get__(obj, obj.__class__)``.\n\nFor instance bindings, the precedence of descriptor invocation depends\non the which descriptor methods are defined. A descriptor can define\nany combination of ``__get__()``, ``__set__()`` and ``__delete__()``.\nIf it does not define ``__get__()``, then accessing the attribute will\nreturn the descriptor object itself unless there is a value in the\nobject\'s instance dictionary. If the descriptor defines ``__set__()``\nand/or ``__delete__()``, it is a data descriptor; if it defines\nneither, it is a non-data descriptor. Normally, data descriptors\ndefine both ``__get__()`` and ``__set__()``, while non-data\ndescriptors have just the ``__get__()`` method. Data descriptors with\n``__set__()`` and ``__get__()`` defined always override a redefinition\nin an instance dictionary. In contrast, non-data descriptors can be\noverridden by instances.\n\nPython methods (including ``staticmethod()`` and ``classmethod()``)\nare implemented as non-data descriptors. Accordingly, instances can\nredefine and override methods. 
This allows individual instances to\nacquire behaviors that differ from other instances of the same class.\n\nThe ``property()`` function is implemented as a data descriptor.\nAccordingly, instances cannot override the behavior of a property.\n\n\n__slots__\n=========\n\nBy default, instances of both old and new-style classes have a\ndictionary for attribute storage. This wastes space for objects\nhaving very few instance variables. The space consumption can become\nacute when creating large numbers of instances.\n\nThe default can be overridden by defining *__slots__* in a new-style\nclass definition. The *__slots__* declaration takes a sequence of\ninstance variables and reserves just enough space in each instance to\nhold a value for each variable. Space is saved because *__dict__* is\nnot created for each instance.\n\n__slots__\n\n This class variable can be assigned a string, iterable, or sequence\n of strings with variable names used by instances. If defined in a\n new-style class, *__slots__* reserves space for the declared\n variables and prevents the automatic creation of *__dict__* and\n *__weakref__* for each instance.\n\n New in version 2.2.\n\nNotes on using *__slots__*\n\n* When inheriting from a class without *__slots__*, the *__dict__*\n attribute of that class will always be accessible, so a *__slots__*\n definition in the subclass is meaningless.\n\n* Without a *__dict__* variable, instances cannot be assigned new\n variables not listed in the *__slots__* definition. Attempts to\n assign to an unlisted variable name raises ``AttributeError``. If\n dynamic assignment of new variables is desired, then add\n ``\'__dict__\'`` to the sequence of strings in the *__slots__*\n declaration.\n\n Changed in version 2.3: Previously, adding ``\'__dict__\'`` to the\n *__slots__* declaration would not enable the assignment of new\n attributes not specifically listed in the sequence of instance\n variable names.\n\n* Without a *__weakref__* variable for each instance, classes defining\n *__slots__* do not support weak references to its instances. If weak\n reference support is needed, then add ``\'__weakref__\'`` to the\n sequence of strings in the *__slots__* declaration.\n\n Changed in version 2.3: Previously, adding ``\'__weakref__\'`` to the\n *__slots__* declaration would not enable support for weak\n references.\n\n* *__slots__* are implemented at the class level by creating\n descriptors (*Implementing Descriptors*) for each variable name. As\n a result, class attributes cannot be used to set default values for\n instance variables defined by *__slots__*; otherwise, the class\n attribute would overwrite the descriptor assignment.\n\n* The action of a *__slots__* declaration is limited to the class\n where it is defined. As a result, subclasses will have a *__dict__*\n unless they also define *__slots__* (which must only contain names\n of any *additional* slots).\n\n* If a class defines a slot also defined in a base class, the instance\n variable defined by the base class slot is inaccessible (except by\n retrieving its descriptor directly from the base class). This\n renders the meaning of the program undefined. In the future, a\n check may be added to prevent this.\n\n* Nonempty *__slots__* does not work for classes derived from\n "variable-length" built-in types such as ``long``, ``str`` and\n ``tuple``.\n\n* Any non-string iterable may be assigned to *__slots__*. 
Mappings may\n also be used; however, in the future, special meaning may be\n assigned to the values corresponding to each key.\n\n* *__class__* assignment works only if both classes have the same\n *__slots__*.\n\n Changed in version 2.6: Previously, *__class__* assignment raised an\n error if either new or old class had *__slots__*.\n', 'attribute-references': u'\nAttribute references\n********************\n\nAn attribute reference is a primary followed by a period and a name:\n\n attributeref ::= primary "." identifier\n\nThe primary must evaluate to an object of a type that supports\nattribute references, e.g., a module, list, or an instance. This\nobject is then asked to produce the attribute whose name is the\nidentifier. If this attribute is not available, the exception\n``AttributeError`` is raised. Otherwise, the type and value of the\nobject produced is determined by the object. Multiple evaluations of\nthe same attribute reference may yield different objects.\n', 'augassign': u'\nAugmented assignment statements\n*******************************\n\nAugmented assignment is the combination, in a single statement, of a\nbinary operation and an assignment statement:\n\n augmented_assignment_stmt ::= augtarget augop (expression_list | yield_expression)\n augtarget ::= identifier | attributeref | subscription | slicing\n augop ::= "+=" | "-=" | "*=" | "/=" | "//=" | "%=" | "**="\n | ">>=" | "<<=" | "&=" | "^=" | "|="\n\n(See section *Primaries* for the syntax definitions for the last three\nsymbols.)\n\nAn augmented assignment evaluates the target (which, unlike normal\nassignment statements, cannot be an unpacking) and the expression\nlist, performs the binary operation specific to the type of assignment\non the two operands, and assigns the result to the original target.\nThe target is only evaluated once.\n\nAn augmented assignment expression like ``x += 1`` can be rewritten as\n``x = x + 1`` to achieve a similar, but not exactly equal effect. In\nthe augmented version, ``x`` is only evaluated once. Also, when\npossible, the actual operation is performed *in-place*, meaning that\nrather than creating a new object and assigning that to the target,\nthe old object is modified instead.\n\nWith the exception of assigning to tuples and multiple targets in a\nsingle statement, the assignment done by augmented assignment\nstatements is handled the same way as normal assignments. Similarly,\nwith the exception of the possible *in-place* behavior, the binary\noperation performed by augmented assignment is the same as the normal\nbinary operations.\n\nFor targets which are attribute references, the same *caveat about\nclass and instance attributes* applies as for regular assignments.\n', 'binary': u'\nBinary arithmetic operations\n****************************\n\nThe binary arithmetic operations have the conventional priority\nlevels. Note that some of these operations also apply to certain non-\nnumeric types. Apart from the power operator, there are only two\nlevels, one for multiplicative operators and one for additive\noperators:\n\n m_expr ::= u_expr | m_expr "*" u_expr | m_expr "//" u_expr | m_expr "/" u_expr\n | m_expr "%" u_expr\n a_expr ::= m_expr | a_expr "+" m_expr | a_expr "-" m_expr\n\nThe ``*`` (multiplication) operator yields the product of its\narguments. The arguments must either both be numbers, or one argument\nmust be an integer (plain or long) and the other must be a sequence.\nIn the former case, the numbers are converted to a common type and\nthen multiplied together. 
In the latter case, sequence repetition is\nperformed; a negative repetition factor yields an empty sequence.\n\nThe ``/`` (division) and ``//`` (floor division) operators yield the\nquotient of their arguments. The numeric arguments are first\nconverted to a common type. Plain or long integer division yields an\ninteger of the same type; the result is that of mathematical division\nwith the \'floor\' function applied to the result. Division by zero\nraises the ``ZeroDivisionError`` exception.\n\nThe ``%`` (modulo) operator yields the remainder from the division of\nthe first argument by the second. The numeric arguments are first\nconverted to a common type. A zero right argument raises the\n``ZeroDivisionError`` exception. The arguments may be floating point\nnumbers, e.g., ``3.14%0.7`` equals ``0.34`` (since ``3.14`` equals\n``4*0.7 + 0.34``.) The modulo operator always yields a result with\nthe same sign as its second operand (or zero); the absolute value of\nthe result is strictly smaller than the absolute value of the second\noperand [2].\n\nThe integer division and modulo operators are connected by the\nfollowing identity: ``x == (x/y)*y + (x%y)``. Integer division and\nmodulo are also connected with the built-in function ``divmod()``:\n``divmod(x, y) == (x/y, x%y)``. These identities don\'t hold for\nfloating point numbers; there similar identities hold approximately\nwhere ``x/y`` is replaced by ``floor(x/y)`` or ``floor(x/y) - 1`` [3].\n\nIn addition to performing the modulo operation on numbers, the ``%``\noperator is also overloaded by string and unicode objects to perform\nstring formatting (also known as interpolation). The syntax for string\nformatting is described in the Python Library Reference, section\n*String Formatting Operations*.\n\nDeprecated since version 2.3: The floor division operator, the modulo\noperator, and the ``divmod()`` function are no longer defined for\ncomplex numbers. Instead, convert to a floating point number using\nthe ``abs()`` function if appropriate.\n\nThe ``+`` (addition) operator yields the sum of its arguments. The\narguments must either both be numbers or both sequences of the same\ntype. In the former case, the numbers are converted to a common type\nand then added together. In the latter case, the sequences are\nconcatenated.\n\nThe ``-`` (subtraction) operator yields the difference of its\narguments. The numeric arguments are first converted to a common\ntype.\n', 'bitwise': u'\nBinary bitwise operations\n*************************\n\nEach of the three bitwise operations has a different priority level:\n\n and_expr ::= shift_expr | and_expr "&" shift_expr\n xor_expr ::= and_expr | xor_expr "^" and_expr\n or_expr ::= xor_expr | or_expr "|" xor_expr\n\nThe ``&`` operator yields the bitwise AND of its arguments, which must\nbe plain or long integers. The arguments are converted to a common\ntype.\n\nThe ``^`` operator yields the bitwise XOR (exclusive OR) of its\narguments, which must be plain or long integers. The arguments are\nconverted to a common type.\n\nThe ``|`` operator yields the bitwise (inclusive) OR of its arguments,\nwhich must be plain or long integers. The arguments are converted to\na common type.\n', 'bltin-code-objects': u'\nCode Objects\n************\n\nCode objects are used by the implementation to represent "pseudo-\ncompiled" executable Python code such as a function body. They differ\nfrom function objects because they don\'t contain a reference to their\nglobal execution environment. 
Code objects are returned by the built-\nin ``compile()`` function and can be extracted from function objects\nthrough their ``func_code`` attribute. See also the ``code`` module.\n\nA code object can be executed or evaluated by passing it (instead of a\nsource string) to the ``exec`` statement or the built-in ``eval()``\nfunction.\n\nSee *The standard type hierarchy* for more information.\n', 'bltin-ellipsis-object': u'\nThe Ellipsis Object\n*******************\n\nThis object is used by extended slice notation (see *Slicings*). It\nsupports no special operations. There is exactly one ellipsis object,\nnamed ``Ellipsis`` (a built-in name).\n\nIt is written as ``Ellipsis``.\n', - 'bltin-file-objects': u'\nFile Objects\n************\n\nFile objects are implemented using C\'s ``stdio`` package and can be\ncreated with the built-in ``open()`` function. File objects are also\nreturned by some other built-in functions and methods, such as\n``os.popen()`` and ``os.fdopen()`` and the ``makefile()`` method of\nsocket objects. Temporary files can be created using the ``tempfile``\nmodule, and high-level file operations such as copying, moving, and\ndeleting files and directories can be achieved with the ``shutil``\nmodule.\n\nWhen a file operation fails for an I/O-related reason, the exception\n``IOError`` is raised. This includes situations where the operation\nis not defined for some reason, like ``seek()`` on a tty device or\nwriting a file opened for reading.\n\nFiles have the following methods:\n\nfile.close()\n\n Close the file. A closed file cannot be read or written any more.\n Any operation which requires that the file be open will raise a\n ``ValueError`` after the file has been closed. Calling ``close()``\n more than once is allowed.\n\n As of Python 2.5, you can avoid having to call this method\n explicitly if you use the ``with`` statement. For example, the\n following code will automatically close *f* when the ``with`` block\n is exited:\n\n from __future__ import with_statement # This isn\'t required in Python 2.6\n\n with open("hello.txt") as f:\n for line in f:\n print line\n\n In older versions of Python, you would have needed to do this to\n get the same effect:\n\n f = open("hello.txt")\n try:\n for line in f:\n print line\n finally:\n f.close()\n\n Note: Not all "file-like" types in Python support use as a context\n manager for the ``with`` statement. If your code is intended to\n work with any file-like object, you can use the function\n ``contextlib.closing()`` instead of using the object directly.\n\nfile.flush()\n\n Flush the internal buffer, like ``stdio``\'s ``fflush()``. 
This may\n be a no-op on some file-like objects.\n\n Note: ``flush()`` does not necessarily write the file\'s data to disk.\n Use ``flush()`` followed by ``os.fsync()`` to ensure this\n behavior.\n\nfile.fileno()\n\n Return the integer "file descriptor" that is used by the underlying\n implementation to request I/O operations from the operating system.\n This can be useful for other, lower level interfaces that use file\n descriptors, such as the ``fcntl`` module or ``os.read()`` and\n friends.\n\n Note: File-like objects which do not have a real file descriptor should\n *not* provide this method!\n\nfile.isatty()\n\n Return ``True`` if the file is connected to a tty(-like) device,\n else ``False``.\n\n Note: If a file-like object is not associated with a real file, this\n method should *not* be implemented.\n\nfile.next()\n\n A file object is its own iterator, for example ``iter(f)`` returns\n *f* (unless *f* is closed). When a file is used as an iterator,\n typically in a ``for`` loop (for example, ``for line in f: print\n line``), the ``next()`` method is called repeatedly. This method\n returns the next input line, or raises ``StopIteration`` when EOF\n is hit when the file is open for reading (behavior is undefined\n when the file is open for writing). In order to make a ``for``\n loop the most efficient way of looping over the lines of a file (a\n very common operation), the ``next()`` method uses a hidden read-\n ahead buffer. As a consequence of using a read-ahead buffer,\n combining ``next()`` with other file methods (like ``readline()``)\n does not work right. However, using ``seek()`` to reposition the\n file to an absolute position will flush the read-ahead buffer.\n\n New in version 2.3.\n\nfile.read([size])\n\n Read at most *size* bytes from the file (less if the read hits EOF\n before obtaining *size* bytes). If the *size* argument is negative\n or omitted, read all data until EOF is reached. The bytes are\n returned as a string object. An empty string is returned when EOF\n is encountered immediately. (For certain files, like ttys, it\n makes sense to continue reading after an EOF is hit.) Note that\n this method may call the underlying C function ``fread()`` more\n than once in an effort to acquire as close to *size* bytes as\n possible. Also note that when in non-blocking mode, less data than\n was requested may be returned, even if no *size* parameter was\n given.\n\n Note: This function is simply a wrapper for the underlying ``fread()``\n C function, and will behave the same in corner cases, such as\n whether the EOF value is cached.\n\nfile.readline([size])\n\n Read one entire line from the file. A trailing newline character\n is kept in the string (but may be absent when a file ends with an\n incomplete line). [5] If the *size* argument is present and non-\n negative, it is a maximum byte count (including the trailing\n newline) and an incomplete line may be returned. An empty string is\n returned *only* when EOF is encountered immediately.\n\n Note: Unlike ``stdio``\'s ``fgets()``, the returned string contains null\n characters (``\'\\0\'``) if they occurred in the input.\n\nfile.readlines([sizehint])\n\n Read until EOF using ``readline()`` and return a list containing\n the lines thus read. If the optional *sizehint* argument is\n present, instead of reading up to EOF, whole lines totalling\n approximately *sizehint* bytes (possibly after rounding up to an\n internal buffer size) are read. 
Objects implementing a file-like\n interface may choose to ignore *sizehint* if it cannot be\n implemented, or cannot be implemented efficiently.\n\nfile.xreadlines()\n\n This method returns the same thing as ``iter(f)``.\n\n New in version 2.1.\n\n Deprecated since version 2.3: Use ``for line in file`` instead.\n\nfile.seek(offset[, whence])\n\n Set the file\'s current position, like ``stdio``\'s ``fseek()``. The\n *whence* argument is optional and defaults to ``os.SEEK_SET`` or\n ``0`` (absolute file positioning); other values are ``os.SEEK_CUR``\n or ``1`` (seek relative to the current position) and\n ``os.SEEK_END`` or ``2`` (seek relative to the file\'s end). There\n is no return value.\n\n For example, ``f.seek(2, os.SEEK_CUR)`` advances the position by\n two and ``f.seek(-3, os.SEEK_END)`` sets the position to the third\n to last.\n\n Note that if the file is opened for appending (mode ``\'a\'`` or\n ``\'a+\'``), any ``seek()`` operations will be undone at the next\n write. If the file is only opened for writing in append mode (mode\n ``\'a\'``), this method is essentially a no-op, but it remains useful\n for files opened in append mode with reading enabled (mode\n ``\'a+\'``). If the file is opened in text mode (without ``\'b\'``),\n only offsets returned by ``tell()`` are legal. Use of other\n offsets causes undefined behavior.\n\n Note that not all file objects are seekable.\n\n Changed in version 2.6: Passing float values as offset has been\n deprecated.\n\nfile.tell()\n\n Return the file\'s current position, like ``stdio``\'s ``ftell()``.\n\n Note: On Windows, ``tell()`` can return illegal values (after an\n ``fgets()``) when reading files with Unix-style line-endings. Use\n binary mode (``\'rb\'``) to circumvent this problem.\n\nfile.truncate([size])\n\n Truncate the file\'s size. If the optional *size* argument is\n present, the file is truncated to (at most) that size. The size\n defaults to the current position. The current file position is not\n changed. Note that if a specified size exceeds the file\'s current\n size, the result is platform-dependent: possibilities include that\n the file may remain unchanged, increase to the specified size as if\n zero-filled, or increase to the specified size with undefined new\n content. Availability: Windows, many Unix variants.\n\nfile.write(str)\n\n Write a string to the file. There is no return value. Due to\n buffering, the string may not actually show up in the file until\n the ``flush()`` or ``close()`` method is called.\n\nfile.writelines(sequence)\n\n Write a sequence of strings to the file. The sequence can be any\n iterable object producing strings, typically a list of strings.\n There is no return value. (The name is intended to match\n ``readlines()``; ``writelines()`` does not add line separators.)\n\nFiles support the iterator protocol. Each iteration returns the same\nresult as ``file.readline()``, and iteration ends when the\n``readline()`` method returns an empty string.\n\nFile objects also offer a number of other interesting attributes.\nThese are not required for file-like objects, but should be\nimplemented if they make sense for the particular object.\n\nfile.closed\n\n bool indicating the current state of the file object. This is a\n read-only attribute; the ``close()`` method changes the value. It\n may not be available on all file-like objects.\n\nfile.encoding\n\n The encoding that this file uses. When Unicode strings are written\n to a file, they will be converted to byte strings using this\n encoding. 
In addition, when the file is connected to a terminal,\n the attribute gives the encoding that the terminal is likely to use\n (that information might be incorrect if the user has misconfigured\n the terminal). The attribute is read-only and may not be present\n on all file-like objects. It may also be ``None``, in which case\n the file uses the system default encoding for converting Unicode\n strings.\n\n New in version 2.3.\n\nfile.errors\n\n The Unicode error handler used along with the encoding.\n\n New in version 2.6.\n\nfile.mode\n\n The I/O mode for the file. If the file was created using the\n ``open()`` built-in function, this will be the value of the *mode*\n parameter. This is a read-only attribute and may not be present on\n all file-like objects.\n\nfile.name\n\n If the file object was created using ``open()``, the name of the\n file. Otherwise, some string that indicates the source of the file\n object, of the form ``<...>``. This is a read-only attribute and\n may not be present on all file-like objects.\n\nfile.newlines\n\n If Python was built with the *--with-universal-newlines* option to\n **configure** (the default) this read-only attribute exists, and\n for files opened in universal newline read mode it keeps track of\n the types of newlines encountered while reading the file. The\n values it can take are ``\'\\r\'``, ``\'\\n\'``, ``\'\\r\\n\'``, ``None``\n (unknown, no newlines read yet) or a tuple containing all the\n newline types seen, to indicate that multiple newline conventions\n were encountered. For files not opened in universal newline read\n mode the value of this attribute will be ``None``.\n\nfile.softspace\n\n Boolean that indicates whether a space character needs to be\n printed before another value when using the ``print`` statement.\n Classes that are trying to simulate a file object should also have\n a writable ``softspace`` attribute, which should be initialized to\n zero. This will be automatic for most classes implemented in\n Python (care may be needed for objects that override attribute\n access); types implemented in C will have to provide a writable\n ``softspace`` attribute.\n\n Note: This attribute is not used to control the ``print`` statement,\n but to allow the implementation of ``print`` to keep track of its\n internal state.\n', + 'bltin-file-objects': u'\nFile Objects\n************\n\nFile objects are implemented using C\'s ``stdio`` package and can be\ncreated with the built-in ``open()`` function. File objects are also\nreturned by some other built-in functions and methods, such as\n``os.popen()`` and ``os.fdopen()`` and the ``makefile()`` method of\nsocket objects. Temporary files can be created using the ``tempfile``\nmodule, and high-level file operations such as copying, moving, and\ndeleting files and directories can be achieved with the ``shutil``\nmodule.\n\nWhen a file operation fails for an I/O-related reason, the exception\n``IOError`` is raised. This includes situations where the operation\nis not defined for some reason, like ``seek()`` on a tty device or\nwriting a file opened for reading.\n\nFiles have the following methods:\n\nfile.close()\n\n Close the file. A closed file cannot be read or written any more.\n Any operation which requires that the file be open will raise a\n ``ValueError`` after the file has been closed. Calling ``close()``\n more than once is allowed.\n\n As of Python 2.5, you can avoid having to call this method\n explicitly if you use the ``with`` statement. 
For example, the\n following code will automatically close *f* when the ``with`` block\n is exited:\n\n from __future__ import with_statement # This isn\'t required in Python 2.6\n\n with open("hello.txt") as f:\n for line in f:\n print line\n\n In older versions of Python, you would have needed to do this to\n get the same effect:\n\n f = open("hello.txt")\n try:\n for line in f:\n print line\n finally:\n f.close()\n\n Note: Not all "file-like" types in Python support use as a context\n manager for the ``with`` statement. If your code is intended to\n work with any file-like object, you can use the function\n ``contextlib.closing()`` instead of using the object directly.\n\nfile.flush()\n\n Flush the internal buffer, like ``stdio``\'s ``fflush()``. This may\n be a no-op on some file-like objects.\n\n Note: ``flush()`` does not necessarily write the file\'s data to disk.\n Use ``flush()`` followed by ``os.fsync()`` to ensure this\n behavior.\n\nfile.fileno()\n\n Return the integer "file descriptor" that is used by the underlying\n implementation to request I/O operations from the operating system.\n This can be useful for other, lower level interfaces that use file\n descriptors, such as the ``fcntl`` module or ``os.read()`` and\n friends.\n\n Note: File-like objects which do not have a real file descriptor should\n *not* provide this method!\n\nfile.isatty()\n\n Return ``True`` if the file is connected to a tty(-like) device,\n else ``False``.\n\n Note: If a file-like object is not associated with a real file, this\n method should *not* be implemented.\n\nfile.next()\n\n A file object is its own iterator, for example ``iter(f)`` returns\n *f* (unless *f* is closed). When a file is used as an iterator,\n typically in a ``for`` loop (for example, ``for line in f: print\n line``), the ``next()`` method is called repeatedly. This method\n returns the next input line, or raises ``StopIteration`` when EOF\n is hit when the file is open for reading (behavior is undefined\n when the file is open for writing). In order to make a ``for``\n loop the most efficient way of looping over the lines of a file (a\n very common operation), the ``next()`` method uses a hidden read-\n ahead buffer. As a consequence of using a read-ahead buffer,\n combining ``next()`` with other file methods (like ``readline()``)\n does not work right. However, using ``seek()`` to reposition the\n file to an absolute position will flush the read-ahead buffer.\n\n New in version 2.3.\n\nfile.read([size])\n\n Read at most *size* bytes from the file (less if the read hits EOF\n before obtaining *size* bytes). If the *size* argument is negative\n or omitted, read all data until EOF is reached. The bytes are\n returned as a string object. An empty string is returned when EOF\n is encountered immediately. (For certain files, like ttys, it\n makes sense to continue reading after an EOF is hit.) Note that\n this method may call the underlying C function ``fread()`` more\n than once in an effort to acquire as close to *size* bytes as\n possible. Also note that when in non-blocking mode, less data than\n was requested may be returned, even if no *size* parameter was\n given.\n\n Note: This function is simply a wrapper for the underlying ``fread()``\n C function, and will behave the same in corner cases, such as\n whether the EOF value is cached.\n\nfile.readline([size])\n\n Read one entire line from the file. A trailing newline character\n is kept in the string (but may be absent when a file ends with an\n incomplete line). 
[5] If the *size* argument is present and non-\n negative, it is a maximum byte count (including the trailing\n newline) and an incomplete line may be returned. When *size* is not\n 0, an empty string is returned *only* when EOF is encountered\n immediately.\n\n Note: Unlike ``stdio``\'s ``fgets()``, the returned string contains null\n characters (``\'\\0\'``) if they occurred in the input.\n\nfile.readlines([sizehint])\n\n Read until EOF using ``readline()`` and return a list containing\n the lines thus read. If the optional *sizehint* argument is\n present, instead of reading up to EOF, whole lines totalling\n approximately *sizehint* bytes (possibly after rounding up to an\n internal buffer size) are read. Objects implementing a file-like\n interface may choose to ignore *sizehint* if it cannot be\n implemented, or cannot be implemented efficiently.\n\nfile.xreadlines()\n\n This method returns the same thing as ``iter(f)``.\n\n New in version 2.1.\n\n Deprecated since version 2.3: Use ``for line in file`` instead.\n\nfile.seek(offset[, whence])\n\n Set the file\'s current position, like ``stdio``\'s ``fseek()``. The\n *whence* argument is optional and defaults to ``os.SEEK_SET`` or\n ``0`` (absolute file positioning); other values are ``os.SEEK_CUR``\n or ``1`` (seek relative to the current position) and\n ``os.SEEK_END`` or ``2`` (seek relative to the file\'s end). There\n is no return value.\n\n For example, ``f.seek(2, os.SEEK_CUR)`` advances the position by\n two and ``f.seek(-3, os.SEEK_END)`` sets the position to the third\n to last.\n\n Note that if the file is opened for appending (mode ``\'a\'`` or\n ``\'a+\'``), any ``seek()`` operations will be undone at the next\n write. If the file is only opened for writing in append mode (mode\n ``\'a\'``), this method is essentially a no-op, but it remains useful\n for files opened in append mode with reading enabled (mode\n ``\'a+\'``). If the file is opened in text mode (without ``\'b\'``),\n only offsets returned by ``tell()`` are legal. Use of other\n offsets causes undefined behavior.\n\n Note that not all file objects are seekable.\n\n Changed in version 2.6: Passing float values as offset has been\n deprecated.\n\nfile.tell()\n\n Return the file\'s current position, like ``stdio``\'s ``ftell()``.\n\n Note: On Windows, ``tell()`` can return illegal values (after an\n ``fgets()``) when reading files with Unix-style line-endings. Use\n binary mode (``\'rb\'``) to circumvent this problem.\n\nfile.truncate([size])\n\n Truncate the file\'s size. If the optional *size* argument is\n present, the file is truncated to (at most) that size. The size\n defaults to the current position. The current file position is not\n changed. Note that if a specified size exceeds the file\'s current\n size, the result is platform-dependent: possibilities include that\n the file may remain unchanged, increase to the specified size as if\n zero-filled, or increase to the specified size with undefined new\n content. Availability: Windows, many Unix variants.\n\nfile.write(str)\n\n Write a string to the file. There is no return value. Due to\n buffering, the string may not actually show up in the file until\n the ``flush()`` or ``close()`` method is called.\n\nfile.writelines(sequence)\n\n Write a sequence of strings to the file. The sequence can be any\n iterable object producing strings, typically a list of strings.\n There is no return value. 
(The name is intended to match\n ``readlines()``; ``writelines()`` does not add line separators.)\n\nFiles support the iterator protocol. Each iteration returns the same\nresult as ``file.readline()``, and iteration ends when the\n``readline()`` method returns an empty string.\n\nFile objects also offer a number of other interesting attributes.\nThese are not required for file-like objects, but should be\nimplemented if they make sense for the particular object.\n\nfile.closed\n\n bool indicating the current state of the file object. This is a\n read-only attribute; the ``close()`` method changes the value. It\n may not be available on all file-like objects.\n\nfile.encoding\n\n The encoding that this file uses. When Unicode strings are written\n to a file, they will be converted to byte strings using this\n encoding. In addition, when the file is connected to a terminal,\n the attribute gives the encoding that the terminal is likely to use\n (that information might be incorrect if the user has misconfigured\n the terminal). The attribute is read-only and may not be present\n on all file-like objects. It may also be ``None``, in which case\n the file uses the system default encoding for converting Unicode\n strings.\n\n New in version 2.3.\n\nfile.errors\n\n The Unicode error handler used along with the encoding.\n\n New in version 2.6.\n\nfile.mode\n\n The I/O mode for the file. If the file was created using the\n ``open()`` built-in function, this will be the value of the *mode*\n parameter. This is a read-only attribute and may not be present on\n all file-like objects.\n\nfile.name\n\n If the file object was created using ``open()``, the name of the\n file. Otherwise, some string that indicates the source of the file\n object, of the form ``<...>``. This is a read-only attribute and\n may not be present on all file-like objects.\n\nfile.newlines\n\n If Python was built with universal newlines enabled (the default)\n this read-only attribute exists, and for files opened in universal\n newline read mode it keeps track of the types of newlines\n encountered while reading the file. The values it can take are\n ``\'\\r\'``, ``\'\\n\'``, ``\'\\r\\n\'``, ``None`` (unknown, no newlines read\n yet) or a tuple containing all the newline types seen, to indicate\n that multiple newline conventions were encountered. For files not\n opened in universal newline read mode the value of this attribute\n will be ``None``.\n\nfile.softspace\n\n Boolean that indicates whether a space character needs to be\n printed before another value when using the ``print`` statement.\n Classes that are trying to simulate a file object should also have\n a writable ``softspace`` attribute, which should be initialized to\n zero. This will be automatic for most classes implemented in\n Python (care may be needed for objects that override attribute\n access); types implemented in C will have to provide a writable\n ``softspace`` attribute.\n\n Note: This attribute is not used to control the ``print`` statement,\n but to allow the implementation of ``print`` to keep track of its\n internal state.\n', 'bltin-null-object': u"\nThe Null Object\n***************\n\nThis object is returned by functions that don't explicitly return a\nvalue. It supports no special operations. There is exactly one null\nobject, named ``None`` (a built-in name).\n\nIt is written as ``None``.\n", 'bltin-type-objects': u"\nType Objects\n************\n\nType objects represent the various object types. 
An object's type is\naccessed by the built-in function ``type()``. There are no special\noperations on types. The standard module ``types`` defines names for\nall standard built-in types.\n\nTypes are written like this: ````.\n", 'booleans': u'\nBoolean operations\n******************\n\n or_test ::= and_test | or_test "or" and_test\n and_test ::= not_test | and_test "and" not_test\n not_test ::= comparison | "not" not_test\n\nIn the context of Boolean operations, and also when expressions are\nused by control flow statements, the following values are interpreted\nas false: ``False``, ``None``, numeric zero of all types, and empty\nstrings and containers (including strings, tuples, lists,\ndictionaries, sets and frozensets). All other values are interpreted\nas true. (See the ``__nonzero__()`` special method for a way to\nchange this.)\n\nThe operator ``not`` yields ``True`` if its argument is false,\n``False`` otherwise.\n\nThe expression ``x and y`` first evaluates *x*; if *x* is false, its\nvalue is returned; otherwise, *y* is evaluated and the resulting value\nis returned.\n\nThe expression ``x or y`` first evaluates *x*; if *x* is true, its\nvalue is returned; otherwise, *y* is evaluated and the resulting value\nis returned.\n\n(Note that neither ``and`` nor ``or`` restrict the value and type they\nreturn to ``False`` and ``True``, but rather return the last evaluated\nargument. This is sometimes useful, e.g., if ``s`` is a string that\nshould be replaced by a default value if it is empty, the expression\n``s or \'foo\'`` yields the desired value. Because ``not`` has to\ninvent a value anyway, it does not bother to return a value of the\nsame type as its argument, so e.g., ``not \'foo\'`` yields ``False``,\nnot ``\'\'``.)\n', @@ -20,39 +20,39 @@ 'class': u'\nClass definitions\n*****************\n\nA class definition defines a class object (see section *The standard\ntype hierarchy*):\n\n classdef ::= "class" classname [inheritance] ":" suite\n inheritance ::= "(" [expression_list] ")"\n classname ::= identifier\n\nA class definition is an executable statement. It first evaluates the\ninheritance list, if present. Each item in the inheritance list\nshould evaluate to a class object or class type which allows\nsubclassing. The class\'s suite is then executed in a new execution\nframe (see section *Naming and binding*), using a newly created local\nnamespace and the original global namespace. (Usually, the suite\ncontains only function definitions.) When the class\'s suite finishes\nexecution, its execution frame is discarded but its local namespace is\nsaved. [4] A class object is then created using the inheritance list\nfor the base classes and the saved local namespace for the attribute\ndictionary. The class name is bound to this class object in the\noriginal local namespace.\n\n**Programmer\'s note:** Variables defined in the class definition are\nclass variables; they are shared by all instances. To create instance\nvariables, they can be set in a method with ``self.name = value``.\nBoth class and instance variables are accessible through the notation\n"``self.name``", and an instance variable hides a class variable with\nthe same name when accessed in this way. Class variables can be used\nas defaults for instance variables, but using mutable values there can\nlead to unexpected results. 
For *new-style class*es, descriptors can\nbe used to create instance variables with different implementation\ndetails.\n\nClass definitions, like function definitions, may be wrapped by one or\nmore *decorator* expressions. The evaluation rules for the decorator\nexpressions are the same as for functions. The result must be a class\nobject, which is then bound to the class name.\n\n-[ Footnotes ]-\n\n[1] The exception is propagated to the invocation stack only if there\n is no ``finally`` clause that negates the exception.\n\n[2] Currently, control "flows off the end" except in the case of an\n exception or the execution of a ``return``, ``continue``, or\n ``break`` statement.\n\n[3] A string literal appearing as the first statement in the function\n body is transformed into the function\'s ``__doc__`` attribute and\n therefore the function\'s *docstring*.\n\n[4] A string literal appearing as the first statement in the class\n body is transformed into the namespace\'s ``__doc__`` item and\n therefore the class\'s *docstring*.\n', 'coercion-rules': u"\nCoercion rules\n**************\n\nThis section used to document the rules for coercion. As the language\nhas evolved, the coercion rules have become hard to document\nprecisely; documenting what one version of one particular\nimplementation does is undesirable. Instead, here are some informal\nguidelines regarding coercion. In Python 3.0, coercion will not be\nsupported.\n\n* If the left operand of a % operator is a string or Unicode object,\n no coercion takes place and the string formatting operation is\n invoked instead.\n\n* It is no longer recommended to define a coercion operation. Mixed-\n mode operations on types that don't define coercion pass the\n original arguments to the operation.\n\n* New-style classes (those derived from ``object``) never invoke the\n ``__coerce__()`` method in response to a binary operator; the only\n time ``__coerce__()`` is invoked is when the built-in function\n ``coerce()`` is called.\n\n* For most intents and purposes, an operator that returns\n ``NotImplemented`` is treated the same as one that is not\n implemented at all.\n\n* Below, ``__op__()`` and ``__rop__()`` are used to signify the\n generic method names corresponding to an operator; ``__iop__()`` is\n used for the corresponding in-place operator. For example, for the\n operator '``+``', ``__add__()`` and ``__radd__()`` are used for the\n left and right variant of the binary operator, and ``__iadd__()``\n for the in-place variant.\n\n* For objects *x* and *y*, first ``x.__op__(y)`` is tried. If this is\n not implemented or returns ``NotImplemented``, ``y.__rop__(x)`` is\n tried. If this is also not implemented or returns\n ``NotImplemented``, a ``TypeError`` exception is raised. But see\n the following exception:\n\n* Exception to the previous item: if the left operand is an instance\n of a built-in type or a new-style class, and the right operand is an\n instance of a proper subclass of that type or class and overrides\n the base's ``__rop__()`` method, the right operand's ``__rop__()``\n method is tried *before* the left operand's ``__op__()`` method.\n\n This is done so that a subclass can completely override binary\n operators. 
Otherwise, the left operand's ``__op__()`` method would\n always accept the right operand: when an instance of a given class\n is expected, an instance of a subclass of that class is always\n acceptable.\n\n* When either operand type defines a coercion, this coercion is called\n before that type's ``__op__()`` or ``__rop__()`` method is called,\n but no sooner. If the coercion returns an object of a different\n type for the operand whose coercion is invoked, part of the process\n is redone using the new object.\n\n* When an in-place operator (like '``+=``') is used, if the left\n operand implements ``__iop__()``, it is invoked without any\n coercion. When the operation falls back to ``__op__()`` and/or\n ``__rop__()``, the normal coercion rules apply.\n\n* In ``x + y``, if *x* is a sequence that implements sequence\n concatenation, sequence concatenation is invoked.\n\n* In ``x * y``, if one operator is a sequence that implements sequence\n repetition, and the other is an integer (``int`` or ``long``),\n sequence repetition is invoked.\n\n* Rich comparisons (implemented by methods ``__eq__()`` and so on)\n never use coercion. Three-way comparison (implemented by\n ``__cmp__()``) does use coercion under the same conditions as other\n binary operations use it.\n\n* In the current implementation, the built-in numeric types ``int``,\n ``long``, ``float``, and ``complex`` do not use coercion. All these\n types implement a ``__coerce__()`` method, for use by the built-in\n ``coerce()`` function.\n\n Changed in version 2.7.\n", 'comparisons': u'\nComparisons\n***********\n\nUnlike C, all comparison operations in Python have the same priority,\nwhich is lower than that of any arithmetic, shifting or bitwise\noperation. Also unlike C, expressions like ``a < b < c`` have the\ninterpretation that is conventional in mathematics:\n\n comparison ::= or_expr ( comp_operator or_expr )*\n comp_operator ::= "<" | ">" | "==" | ">=" | "<=" | "<>" | "!="\n | "is" ["not"] | ["not"] "in"\n\nComparisons yield boolean values: ``True`` or ``False``.\n\nComparisons can be chained arbitrarily, e.g., ``x < y <= z`` is\nequivalent to ``x < y and y <= z``, except that ``y`` is evaluated\nonly once (but in both cases ``z`` is not evaluated at all when ``x <\ny`` is found to be false).\n\nFormally, if *a*, *b*, *c*, ..., *y*, *z* are expressions and *op1*,\n*op2*, ..., *opN* are comparison operators, then ``a op1 b op2 c ... y\nopN z`` is equivalent to ``a op1 b and b op2 c and ... y opN z``,\nexcept that each expression is evaluated at most once.\n\nNote that ``a op1 b op2 c`` doesn\'t imply any kind of comparison\nbetween *a* and *c*, so that, e.g., ``x < y > z`` is perfectly legal\n(though perhaps not pretty).\n\nThe forms ``<>`` and ``!=`` are equivalent; for consistency with C,\n``!=`` is preferred; where ``!=`` is mentioned below ``<>`` is also\naccepted. The ``<>`` spelling is considered obsolescent.\n\nThe operators ``<``, ``>``, ``==``, ``>=``, ``<=``, and ``!=`` compare\nthe values of two objects. The objects need not have the same type.\nIf both are numbers, they are converted to a common type. Otherwise,\nobjects of different types *always* compare unequal, and are ordered\nconsistently but arbitrarily. 
You can control comparison behavior of\nobjects of non-built-in types by defining a ``__cmp__`` method or rich\ncomparison methods like ``__gt__``, described in section *Special\nmethod names*.\n\n(This unusual definition of comparison was used to simplify the\ndefinition of operations like sorting and the ``in`` and ``not in``\noperators. In the future, the comparison rules for objects of\ndifferent types are likely to change.)\n\nComparison of objects of the same type depends on the type:\n\n* Numbers are compared arithmetically.\n\n* Strings are compared lexicographically using the numeric equivalents\n (the result of the built-in function ``ord()``) of their characters.\n Unicode and 8-bit strings are fully interoperable in this behavior.\n [4]\n\n* Tuples and lists are compared lexicographically using comparison of\n corresponding elements. This means that to compare equal, each\n element must compare equal and the two sequences must be of the same\n type and have the same length.\n\n If not equal, the sequences are ordered the same as their first\n differing elements. For example, ``cmp([1,2,x], [1,2,y])`` returns\n the same as ``cmp(x,y)``. If the corresponding element does not\n exist, the shorter sequence is ordered first (for example, ``[1,2] <\n [1,2,3]``).\n\n* Mappings (dictionaries) compare equal if and only if their sorted\n (key, value) lists compare equal. [5] Outcomes other than equality\n are resolved consistently, but are not otherwise defined. [6]\n\n* Most other objects of built-in types compare unequal unless they are\n the same object; the choice whether one object is considered smaller\n or larger than another one is made arbitrarily but consistently\n within one execution of a program.\n\nThe operators ``in`` and ``not in`` test for collection membership.\n``x in s`` evaluates to true if *x* is a member of the collection *s*,\nand false otherwise. ``x not in s`` returns the negation of ``x in\ns``. The collection membership test has traditionally been bound to\nsequences; an object is a member of a collection if the collection is\na sequence and contains an element equal to that object. However, it\nmake sense for many other object types to support membership tests\nwithout being a sequence. In particular, dictionaries (for keys) and\nsets support membership testing.\n\nFor the list and tuple types, ``x in y`` is true if and only if there\nexists an index *i* such that ``x == y[i]`` is true.\n\nFor the Unicode and string types, ``x in y`` is true if and only if\n*x* is a substring of *y*. An equivalent test is ``y.find(x) != -1``.\nNote, *x* and *y* need not be the same type; consequently, ``u\'ab\' in\n\'abc\'`` will return ``True``. Empty strings are always considered to\nbe a substring of any other string, so ``"" in "abc"`` will return\n``True``.\n\nChanged in version 2.3: Previously, *x* was required to be a string of\nlength ``1``.\n\nFor user-defined classes which define the ``__contains__()`` method,\n``x in y`` is true if and only if ``y.__contains__(x)`` is true.\n\nFor user-defined classes which do not define ``__contains__()`` but do\ndefine ``__iter__()``, ``x in y`` is true if some value ``z`` with ``x\n== z`` is produced while iterating over ``y``. 
If an exception is\nraised during the iteration, it is as if ``in`` raised that exception.\n\nLastly, the old-style iteration protocol is tried: if a class defines\n``__getitem__()``, ``x in y`` is true if and only if there is a non-\nnegative integer index *i* such that ``x == y[i]``, and all lower\ninteger indices do not raise ``IndexError`` exception. (If any other\nexception is raised, it is as if ``in`` raised that exception).\n\nThe operator ``not in`` is defined to have the inverse true value of\n``in``.\n\nThe operators ``is`` and ``is not`` test for object identity: ``x is\ny`` is true if and only if *x* and *y* are the same object. ``x is\nnot y`` yields the inverse truth value. [7]\n', - 'compound': u'\nCompound statements\n*******************\n\nCompound statements contain (groups of) other statements; they affect\nor control the execution of those other statements in some way. In\ngeneral, compound statements span multiple lines, although in simple\nincarnations a whole compound statement may be contained in one line.\n\nThe ``if``, ``while`` and ``for`` statements implement traditional\ncontrol flow constructs. ``try`` specifies exception handlers and/or\ncleanup code for a group of statements. Function and class\ndefinitions are also syntactically compound statements.\n\nCompound statements consist of one or more \'clauses.\' A clause\nconsists of a header and a \'suite.\' The clause headers of a\nparticular compound statement are all at the same indentation level.\nEach clause header begins with a uniquely identifying keyword and ends\nwith a colon. A suite is a group of statements controlled by a\nclause. A suite can be one or more semicolon-separated simple\nstatements on the same line as the header, following the header\'s\ncolon, or it can be one or more indented statements on subsequent\nlines. Only the latter form of suite can contain nested compound\nstatements; the following is illegal, mostly because it wouldn\'t be\nclear to which ``if`` clause a following ``else`` clause would belong:\n\n if test1: if test2: print x\n\nAlso note that the semicolon binds tighter than the colon in this\ncontext, so that in the following example, either all or none of the\n``print`` statements are executed:\n\n if x < y < z: print x; print y; print z\n\nSummarizing:\n\n compound_stmt ::= if_stmt\n | while_stmt\n | for_stmt\n | try_stmt\n | with_stmt\n | funcdef\n | classdef\n | decorated\n suite ::= stmt_list NEWLINE | NEWLINE INDENT statement+ DEDENT\n statement ::= stmt_list NEWLINE | compound_stmt\n stmt_list ::= simple_stmt (";" simple_stmt)* [";"]\n\nNote that statements always end in a ``NEWLINE`` possibly followed by\na ``DEDENT``. 
Also note that optional continuation clauses always\nbegin with a keyword that cannot start a statement, thus there are no\nambiguities (the \'dangling ``else``\' problem is solved in Python by\nrequiring nested ``if`` statements to be indented).\n\nThe formatting of the grammar rules in the following sections places\neach clause on a separate line for clarity.\n\n\nThe ``if`` statement\n====================\n\nThe ``if`` statement is used for conditional execution:\n\n if_stmt ::= "if" expression ":" suite\n ( "elif" expression ":" suite )*\n ["else" ":" suite]\n\nIt selects exactly one of the suites by evaluating the expressions one\nby one until one is found to be true (see section *Boolean operations*\nfor the definition of true and false); then that suite is executed\n(and no other part of the ``if`` statement is executed or evaluated).\nIf all expressions are false, the suite of the ``else`` clause, if\npresent, is executed.\n\n\nThe ``while`` statement\n=======================\n\nThe ``while`` statement is used for repeated execution as long as an\nexpression is true:\n\n while_stmt ::= "while" expression ":" suite\n ["else" ":" suite]\n\nThis repeatedly tests the expression and, if it is true, executes the\nfirst suite; if the expression is false (which may be the first time\nit is tested) the suite of the ``else`` clause, if present, is\nexecuted and the loop terminates.\n\nA ``break`` statement executed in the first suite terminates the loop\nwithout executing the ``else`` clause\'s suite. A ``continue``\nstatement executed in the first suite skips the rest of the suite and\ngoes back to testing the expression.\n\n\nThe ``for`` statement\n=====================\n\nThe ``for`` statement is used to iterate over the elements of a\nsequence (such as a string, tuple or list) or other iterable object:\n\n for_stmt ::= "for" target_list "in" expression_list ":" suite\n ["else" ":" suite]\n\nThe expression list is evaluated once; it should yield an iterable\nobject. An iterator is created for the result of the\n``expression_list``. The suite is then executed once for each item\nprovided by the iterator, in the order of ascending indices. Each\nitem in turn is assigned to the target list using the standard rules\nfor assignments, and then the suite is executed. When the items are\nexhausted (which is immediately when the sequence is empty), the suite\nin the ``else`` clause, if present, is executed, and the loop\nterminates.\n\nA ``break`` statement executed in the first suite terminates the loop\nwithout executing the ``else`` clause\'s suite. A ``continue``\nstatement executed in the first suite skips the rest of the suite and\ncontinues with the next item, or with the ``else`` clause if there was\nno next item.\n\nThe suite may assign to the variable(s) in the target list; this does\nnot affect the next item assigned to it.\n\nThe target list is not deleted when the loop is finished, but if the\nsequence is empty, it will not have been assigned to at all by the\nloop. Hint: the built-in function ``range()`` returns a sequence of\nintegers suitable to emulate the effect of Pascal\'s ``for i := a to b\ndo``; e.g., ``range(3)`` returns the list ``[0, 1, 2]``.\n\nNote: There is a subtlety when the sequence is being modified by the loop\n (this can only occur for mutable sequences, i.e. lists). An internal\n counter is used to keep track of which item is used next, and this\n is incremented on each iteration. When this counter has reached the\n length of the sequence the loop terminates. 
This means that if the\n suite deletes the current (or a previous) item from the sequence,\n the next item will be skipped (since it gets the index of the\n current item which has already been treated). Likewise, if the\n suite inserts an item in the sequence before the current item, the\n current item will be treated again the next time through the loop.\n This can lead to nasty bugs that can be avoided by making a\n temporary copy using a slice of the whole sequence, e.g.,\n\n for x in a[:]:\n if x < 0: a.remove(x)\n\n\nThe ``try`` statement\n=====================\n\nThe ``try`` statement specifies exception handlers and/or cleanup code\nfor a group of statements:\n\n try_stmt ::= try1_stmt | try2_stmt\n try1_stmt ::= "try" ":" suite\n ("except" [expression [("as" | ",") target]] ":" suite)+\n ["else" ":" suite]\n ["finally" ":" suite]\n try2_stmt ::= "try" ":" suite\n "finally" ":" suite\n\nChanged in version 2.5: In previous versions of Python,\n``try``...``except``...``finally`` did not work. ``try``...``except``\nhad to be nested in ``try``...``finally``.\n\nThe ``except`` clause(s) specify one or more exception handlers. When\nno exception occurs in the ``try`` clause, no exception handler is\nexecuted. When an exception occurs in the ``try`` suite, a search for\nan exception handler is started. This search inspects the except\nclauses in turn until one is found that matches the exception. An\nexpression-less except clause, if present, must be last; it matches\nany exception. For an except clause with an expression, that\nexpression is evaluated, and the clause matches the exception if the\nresulting object is "compatible" with the exception. An object is\ncompatible with an exception if it is the class or a base class of the\nexception object, a tuple containing an item compatible with the\nexception, or, in the (deprecated) case of string exceptions, is the\nraised string itself (note that the object identities must match, i.e.\nit must be the same string object, not just a string with the same\nvalue).\n\nIf no except clause matches the exception, the search for an exception\nhandler continues in the surrounding code and on the invocation stack.\n[1]\n\nIf the evaluation of an expression in the header of an except clause\nraises an exception, the original search for a handler is canceled and\na search starts for the new exception in the surrounding code and on\nthe call stack (it is treated as if the entire ``try`` statement\nraised the exception).\n\nWhen a matching except clause is found, the exception is assigned to\nthe target specified in that except clause, if present, and the except\nclause\'s suite is executed. All except clauses must have an\nexecutable block. When the end of this block is reached, execution\ncontinues normally after the entire try statement. (This means that\nif two nested handlers exist for the same exception, and the exception\noccurs in the try clause of the inner handler, the outer handler will\nnot handle the exception.)\n\nBefore an except clause\'s suite is executed, details about the\nexception are assigned to three variables in the ``sys`` module:\n``sys.exc_type`` receives the object identifying the exception;\n``sys.exc_value`` receives the exception\'s parameter;\n``sys.exc_traceback`` receives a traceback object (see section *The\nstandard type hierarchy*) identifying the point in the program where\nthe exception occurred. 
These details are also available through the\n``sys.exc_info()`` function, which returns a tuple ``(exc_type,\nexc_value, exc_traceback)``. Use of the corresponding variables is\ndeprecated in favor of this function, since their use is unsafe in a\nthreaded program. As of Python 1.5, the variables are restored to\ntheir previous values (before the call) when returning from a function\nthat handled an exception.\n\nThe optional ``else`` clause is executed if and when control flows off\nthe end of the ``try`` clause. [2] Exceptions in the ``else`` clause\nare not handled by the preceding ``except`` clauses.\n\nIf ``finally`` is present, it specifies a \'cleanup\' handler. The\n``try`` clause is executed, including any ``except`` and ``else``\nclauses. If an exception occurs in any of the clauses and is not\nhandled, the exception is temporarily saved. The ``finally`` clause is\nexecuted. If there is a saved exception, it is re-raised at the end\nof the ``finally`` clause. If the ``finally`` clause raises another\nexception or executes a ``return`` or ``break`` statement, the saved\nexception is lost. The exception information is not available to the\nprogram during execution of the ``finally`` clause.\n\nWhen a ``return``, ``break`` or ``continue`` statement is executed in\nthe ``try`` suite of a ``try``...``finally`` statement, the\n``finally`` clause is also executed \'on the way out.\' A ``continue``\nstatement is illegal in the ``finally`` clause. (The reason is a\nproblem with the current implementation --- this restriction may be\nlifted in the future).\n\nAdditional information on exceptions can be found in section\n*Exceptions*, and information on using the ``raise`` statement to\ngenerate exceptions may be found in section *The raise statement*.\n\n\nThe ``with`` statement\n======================\n\nNew in version 2.5.\n\nThe ``with`` statement is used to wrap the execution of a block with\nmethods defined by a context manager (see section *With Statement\nContext Managers*). This allows common\n``try``...``except``...``finally`` usage patterns to be encapsulated\nfor convenient reuse.\n\n with_stmt ::= "with" with_item ("," with_item)* ":" suite\n with_item ::= expression ["as" target]\n\nThe execution of the ``with`` statement with one "item" proceeds as\nfollows:\n\n1. The context expression is evaluated to obtain a context manager.\n\n2. The context manager\'s ``__exit__()`` is loaded for later use.\n\n3. The context manager\'s ``__enter__()`` method is invoked.\n\n4. If a target was included in the ``with`` statement, the return\n value from ``__enter__()`` is assigned to it.\n\n Note: The ``with`` statement guarantees that if the ``__enter__()``\n method returns without an error, then ``__exit__()`` will always\n be called. Thus, if an error occurs during the assignment to the\n target list, it will be treated the same as an error occurring\n within the suite would be. See step 6 below.\n\n5. The suite is executed.\n\n6. The context manager\'s ``__exit__()`` method is invoked. If an\n exception caused the suite to be exited, its type, value, and\n traceback are passed as arguments to ``__exit__()``. Otherwise,\n three ``None`` arguments are supplied.\n\n If the suite was exited due to an exception, and the return value\n from the ``__exit__()`` method was false, the exception is\n reraised. 
If the return value was true, the exception is\n suppressed, and execution continues with the statement following\n the ``with`` statement.\n\n If the suite was exited for any reason other than an exception, the\n return value from ``__exit__()`` is ignored, and execution proceeds\n at the normal location for the kind of exit that was taken.\n\nWith more than one item, the context managers are processed as if\nmultiple ``with`` statements were nested:\n\n with A() as a, B() as b:\n suite\n\nis equivalent to\n\n with A() as a:\n with B() as b:\n suite\n\nNote: In Python 2.5, the ``with`` statement is only allowed when the\n ``with_statement`` feature has been enabled. It is always enabled\n in Python 2.6.\n\nChanged in version 2.7: Support for multiple context expressions.\n\nSee also:\n\n **PEP 0343** - The "with" statement\n The specification, background, and examples for the Python\n ``with`` statement.\n\n\nFunction definitions\n====================\n\nA function definition defines a user-defined function object (see\nsection *The standard type hierarchy*):\n\n decorated ::= decorators (classdef | funcdef)\n decorators ::= decorator+\n decorator ::= "@" dotted_name ["(" [argument_list [","]] ")"] NEWLINE\n funcdef ::= "def" funcname "(" [parameter_list] ")" ":" suite\n dotted_name ::= identifier ("." identifier)*\n parameter_list ::= (defparameter ",")*\n ( "*" identifier [, "**" identifier]\n | "**" identifier\n | defparameter [","] )\n defparameter ::= parameter ["=" expression]\n sublist ::= parameter ("," parameter)* [","]\n parameter ::= identifier | "(" sublist ")"\n funcname ::= identifier\n\nA function definition is an executable statement. Its execution binds\nthe function name in the current local namespace to a function object\n(a wrapper around the executable code for the function). This\nfunction object contains a reference to the current global namespace\nas the global namespace to be used when the function is called.\n\nThe function definition does not execute the function body; this gets\nexecuted only when the function is called. [3]\n\nA function definition may be wrapped by one or more *decorator*\nexpressions. Decorator expressions are evaluated when the function is\ndefined, in the scope that contains the function definition. The\nresult must be a callable, which is invoked with the function object\nas the only argument. The returned value is bound to the function name\ninstead of the function object. Multiple decorators are applied in\nnested fashion. For example, the following code:\n\n @f1(arg)\n @f2\n def func(): pass\n\nis equivalent to:\n\n def func(): pass\n func = f1(arg)(f2(func))\n\nWhen one or more top-level parameters have the form *parameter* ``=``\n*expression*, the function is said to have "default parameter values."\nFor a parameter with a default value, the corresponding argument may\nbe omitted from a call, in which case the parameter\'s default value is\nsubstituted. If a parameter has a default value, all following\nparameters must also have a default value --- this is a syntactic\nrestriction that is not expressed by the grammar.\n\n**Default parameter values are evaluated when the function definition\nis executed.** This means that the expression is evaluated once, when\nthe function is defined, and that that same "pre-computed" value is\nused for each call. This is especially important to understand when a\ndefault parameter is a mutable object, such as a list or a dictionary:\nif the function modifies the object (e.g. 
by appending an item to a\nlist), the default value is in effect modified. This is generally not\nwhat was intended. A way around this is to use ``None`` as the\ndefault, and explicitly test for it in the body of the function, e.g.:\n\n def whats_on_the_telly(penguin=None):\n if penguin is None:\n penguin = []\n penguin.append("property of the zoo")\n return penguin\n\nFunction call semantics are described in more detail in section\n*Calls*. A function call always assigns values to all parameters\nmentioned in the parameter list, either from position arguments, from\nkeyword arguments, or from default values. If the form\n"``*identifier``" is present, it is initialized to a tuple receiving\nany excess positional parameters, defaulting to the empty tuple. If\nthe form "``**identifier``" is present, it is initialized to a new\ndictionary receiving any excess keyword arguments, defaulting to a new\nempty dictionary.\n\nIt is also possible to create anonymous functions (functions not bound\nto a name), for immediate use in expressions. This uses lambda forms,\ndescribed in section *Lambdas*. Note that the lambda form is merely a\nshorthand for a simplified function definition; a function defined in\na "``def``" statement can be passed around or assigned to another name\njust like a function defined by a lambda form. The "``def``" form is\nactually more powerful since it allows the execution of multiple\nstatements.\n\n**Programmer\'s note:** Functions are first-class objects. A "``def``"\nform executed inside a function definition defines a local function\nthat can be returned or passed around. Free variables used in the\nnested function can access the local variables of the function\ncontaining the def. See section *Naming and binding* for details.\n\n\nClass definitions\n=================\n\nA class definition defines a class object (see section *The standard\ntype hierarchy*):\n\n classdef ::= "class" classname [inheritance] ":" suite\n inheritance ::= "(" [expression_list] ")"\n classname ::= identifier\n\nA class definition is an executable statement. It first evaluates the\ninheritance list, if present. Each item in the inheritance list\nshould evaluate to a class object or class type which allows\nsubclassing. The class\'s suite is then executed in a new execution\nframe (see section *Naming and binding*), using a newly created local\nnamespace and the original global namespace. (Usually, the suite\ncontains only function definitions.) When the class\'s suite finishes\nexecution, its execution frame is discarded but its local namespace is\nsaved. [4] A class object is then created using the inheritance list\nfor the base classes and the saved local namespace for the attribute\ndictionary. The class name is bound to this class object in the\noriginal local namespace.\n\n**Programmer\'s note:** Variables defined in the class definition are\nclass variables; they are shared by all instances. To create instance\nvariables, they can be set in a method with ``self.name = value``.\nBoth class and instance variables are accessible through the notation\n"``self.name``", and an instance variable hides a class variable with\nthe same name when accessed in this way. Class variables can be used\nas defaults for instance variables, but using mutable values there can\nlead to unexpected results. 
For *new-style class*es, descriptors can\nbe used to create instance variables with different implementation\ndetails.\n\nClass definitions, like function definitions, may be wrapped by one or\nmore *decorator* expressions. The evaluation rules for the decorator\nexpressions are the same as for functions. The result must be a class\nobject, which is then bound to the class name.\n\n-[ Footnotes ]-\n\n[1] The exception is propagated to the invocation stack only if there\n is no ``finally`` clause that negates the exception.\n\n[2] Currently, control "flows off the end" except in the case of an\n exception or the execution of a ``return``, ``continue``, or\n ``break`` statement.\n\n[3] A string literal appearing as the first statement in the function\n body is transformed into the function\'s ``__doc__`` attribute and\n therefore the function\'s *docstring*.\n\n[4] A string literal appearing as the first statement in the class\n body is transformed into the namespace\'s ``__doc__`` item and\n therefore the class\'s *docstring*.\n', + 'compound': u'\nCompound statements\n*******************\n\nCompound statements contain (groups of) other statements; they affect\nor control the execution of those other statements in some way. In\ngeneral, compound statements span multiple lines, although in simple\nincarnations a whole compound statement may be contained in one line.\n\nThe ``if``, ``while`` and ``for`` statements implement traditional\ncontrol flow constructs. ``try`` specifies exception handlers and/or\ncleanup code for a group of statements. Function and class\ndefinitions are also syntactically compound statements.\n\nCompound statements consist of one or more \'clauses.\' A clause\nconsists of a header and a \'suite.\' The clause headers of a\nparticular compound statement are all at the same indentation level.\nEach clause header begins with a uniquely identifying keyword and ends\nwith a colon. A suite is a group of statements controlled by a\nclause. A suite can be one or more semicolon-separated simple\nstatements on the same line as the header, following the header\'s\ncolon, or it can be one or more indented statements on subsequent\nlines. Only the latter form of suite can contain nested compound\nstatements; the following is illegal, mostly because it wouldn\'t be\nclear to which ``if`` clause a following ``else`` clause would belong:\n\n if test1: if test2: print x\n\nAlso note that the semicolon binds tighter than the colon in this\ncontext, so that in the following example, either all or none of the\n``print`` statements are executed:\n\n if x < y < z: print x; print y; print z\n\nSummarizing:\n\n compound_stmt ::= if_stmt\n | while_stmt\n | for_stmt\n | try_stmt\n | with_stmt\n | funcdef\n | classdef\n | decorated\n suite ::= stmt_list NEWLINE | NEWLINE INDENT statement+ DEDENT\n statement ::= stmt_list NEWLINE | compound_stmt\n stmt_list ::= simple_stmt (";" simple_stmt)* [";"]\n\nNote that statements always end in a ``NEWLINE`` possibly followed by\na ``DEDENT``. 
Also note that optional continuation clauses always\nbegin with a keyword that cannot start a statement, thus there are no\nambiguities (the \'dangling ``else``\' problem is solved in Python by\nrequiring nested ``if`` statements to be indented).\n\nThe formatting of the grammar rules in the following sections places\neach clause on a separate line for clarity.\n\n\nThe ``if`` statement\n====================\n\nThe ``if`` statement is used for conditional execution:\n\n if_stmt ::= "if" expression ":" suite\n ( "elif" expression ":" suite )*\n ["else" ":" suite]\n\nIt selects exactly one of the suites by evaluating the expressions one\nby one until one is found to be true (see section *Boolean operations*\nfor the definition of true and false); then that suite is executed\n(and no other part of the ``if`` statement is executed or evaluated).\nIf all expressions are false, the suite of the ``else`` clause, if\npresent, is executed.\n\n\nThe ``while`` statement\n=======================\n\nThe ``while`` statement is used for repeated execution as long as an\nexpression is true:\n\n while_stmt ::= "while" expression ":" suite\n ["else" ":" suite]\n\nThis repeatedly tests the expression and, if it is true, executes the\nfirst suite; if the expression is false (which may be the first time\nit is tested) the suite of the ``else`` clause, if present, is\nexecuted and the loop terminates.\n\nA ``break`` statement executed in the first suite terminates the loop\nwithout executing the ``else`` clause\'s suite. A ``continue``\nstatement executed in the first suite skips the rest of the suite and\ngoes back to testing the expression.\n\n\nThe ``for`` statement\n=====================\n\nThe ``for`` statement is used to iterate over the elements of a\nsequence (such as a string, tuple or list) or other iterable object:\n\n for_stmt ::= "for" target_list "in" expression_list ":" suite\n ["else" ":" suite]\n\nThe expression list is evaluated once; it should yield an iterable\nobject. An iterator is created for the result of the\n``expression_list``. The suite is then executed once for each item\nprovided by the iterator, in the order of ascending indices. Each\nitem in turn is assigned to the target list using the standard rules\nfor assignments, and then the suite is executed. When the items are\nexhausted (which is immediately when the sequence is empty), the suite\nin the ``else`` clause, if present, is executed, and the loop\nterminates.\n\nA ``break`` statement executed in the first suite terminates the loop\nwithout executing the ``else`` clause\'s suite. A ``continue``\nstatement executed in the first suite skips the rest of the suite and\ncontinues with the next item, or with the ``else`` clause if there was\nno next item.\n\nThe suite may assign to the variable(s) in the target list; this does\nnot affect the next item assigned to it.\n\nThe target list is not deleted when the loop is finished, but if the\nsequence is empty, it will not have been assigned to at all by the\nloop. Hint: the built-in function ``range()`` returns a sequence of\nintegers suitable to emulate the effect of Pascal\'s ``for i := a to b\ndo``; e.g., ``range(3)`` returns the list ``[0, 1, 2]``.\n\nNote: There is a subtlety when the sequence is being modified by the loop\n (this can only occur for mutable sequences, i.e. lists). An internal\n counter is used to keep track of which item is used next, and this\n is incremented on each iteration. When this counter has reached the\n length of the sequence the loop terminates. 
This means that if the\n suite deletes the current (or a previous) item from the sequence,\n the next item will be skipped (since it gets the index of the\n current item which has already been treated). Likewise, if the\n suite inserts an item in the sequence before the current item, the\n current item will be treated again the next time through the loop.\n This can lead to nasty bugs that can be avoided by making a\n temporary copy using a slice of the whole sequence, e.g.,\n\n for x in a[:]:\n if x < 0: a.remove(x)\n\n\nThe ``try`` statement\n=====================\n\nThe ``try`` statement specifies exception handlers and/or cleanup code\nfor a group of statements:\n\n try_stmt ::= try1_stmt | try2_stmt\n try1_stmt ::= "try" ":" suite\n ("except" [expression [("as" | ",") target]] ":" suite)+\n ["else" ":" suite]\n ["finally" ":" suite]\n try2_stmt ::= "try" ":" suite\n "finally" ":" suite\n\nChanged in version 2.5: In previous versions of Python,\n``try``...``except``...``finally`` did not work. ``try``...``except``\nhad to be nested in ``try``...``finally``.\n\nThe ``except`` clause(s) specify one or more exception handlers. When\nno exception occurs in the ``try`` clause, no exception handler is\nexecuted. When an exception occurs in the ``try`` suite, a search for\nan exception handler is started. This search inspects the except\nclauses in turn until one is found that matches the exception. An\nexpression-less except clause, if present, must be last; it matches\nany exception. For an except clause with an expression, that\nexpression is evaluated, and the clause matches the exception if the\nresulting object is "compatible" with the exception. An object is\ncompatible with an exception if it is the class or a base class of the\nexception object, a tuple containing an item compatible with the\nexception, or, in the (deprecated) case of string exceptions, is the\nraised string itself (note that the object identities must match, i.e.\nit must be the same string object, not just a string with the same\nvalue).\n\nIf no except clause matches the exception, the search for an exception\nhandler continues in the surrounding code and on the invocation stack.\n[1]\n\nIf the evaluation of an expression in the header of an except clause\nraises an exception, the original search for a handler is canceled and\na search starts for the new exception in the surrounding code and on\nthe call stack (it is treated as if the entire ``try`` statement\nraised the exception).\n\nWhen a matching except clause is found, the exception is assigned to\nthe target specified in that except clause, if present, and the except\nclause\'s suite is executed. All except clauses must have an\nexecutable block. When the end of this block is reached, execution\ncontinues normally after the entire try statement. (This means that\nif two nested handlers exist for the same exception, and the exception\noccurs in the try clause of the inner handler, the outer handler will\nnot handle the exception.)\n\nBefore an except clause\'s suite is executed, details about the\nexception are assigned to three variables in the ``sys`` module:\n``sys.exc_type`` receives the object identifying the exception;\n``sys.exc_value`` receives the exception\'s parameter;\n``sys.exc_traceback`` receives a traceback object (see section *The\nstandard type hierarchy*) identifying the point in the program where\nthe exception occurred. 
These details are also available through the\n``sys.exc_info()`` function, which returns a tuple ``(exc_type,\nexc_value, exc_traceback)``. Use of the corresponding variables is\ndeprecated in favor of this function, since their use is unsafe in a\nthreaded program. As of Python 1.5, the variables are restored to\ntheir previous values (before the call) when returning from a function\nthat handled an exception.\n\nThe optional ``else`` clause is executed if and when control flows off\nthe end of the ``try`` clause. [2] Exceptions in the ``else`` clause\nare not handled by the preceding ``except`` clauses.\n\nIf ``finally`` is present, it specifies a \'cleanup\' handler. The\n``try`` clause is executed, including any ``except`` and ``else``\nclauses. If an exception occurs in any of the clauses and is not\nhandled, the exception is temporarily saved. The ``finally`` clause is\nexecuted. If there is a saved exception, it is re-raised at the end\nof the ``finally`` clause. If the ``finally`` clause raises another\nexception or executes a ``return`` or ``break`` statement, the saved\nexception is lost. The exception information is not available to the\nprogram during execution of the ``finally`` clause.\n\nWhen a ``return``, ``break`` or ``continue`` statement is executed in\nthe ``try`` suite of a ``try``...``finally`` statement, the\n``finally`` clause is also executed \'on the way out.\' A ``continue``\nstatement is illegal in the ``finally`` clause. (The reason is a\nproblem with the current implementation --- this restriction may be\nlifted in the future).\n\nAdditional information on exceptions can be found in section\n*Exceptions*, and information on using the ``raise`` statement to\ngenerate exceptions may be found in section *The raise statement*.\n\n\nThe ``with`` statement\n======================\n\nNew in version 2.5.\n\nThe ``with`` statement is used to wrap the execution of a block with\nmethods defined by a context manager (see section *With Statement\nContext Managers*). This allows common\n``try``...``except``...``finally`` usage patterns to be encapsulated\nfor convenient reuse.\n\n with_stmt ::= "with" with_item ("," with_item)* ":" suite\n with_item ::= expression ["as" target]\n\nThe execution of the ``with`` statement with one "item" proceeds as\nfollows:\n\n1. The context expression (the expression given in the **with_item**)\n is evaluated to obtain a context manager.\n\n2. The context manager\'s ``__exit__()`` is loaded for later use.\n\n3. The context manager\'s ``__enter__()`` method is invoked.\n\n4. If a target was included in the ``with`` statement, the return\n value from ``__enter__()`` is assigned to it.\n\n Note: The ``with`` statement guarantees that if the ``__enter__()``\n method returns without an error, then ``__exit__()`` will always\n be called. Thus, if an error occurs during the assignment to the\n target list, it will be treated the same as an error occurring\n within the suite would be. See step 6 below.\n\n5. The suite is executed.\n\n6. The context manager\'s ``__exit__()`` method is invoked. If an\n exception caused the suite to be exited, its type, value, and\n traceback are passed as arguments to ``__exit__()``. Otherwise,\n three ``None`` arguments are supplied.\n\n If the suite was exited due to an exception, and the return value\n from the ``__exit__()`` method was false, the exception is\n reraised. 
If the return value was true, the exception is\n suppressed, and execution continues with the statement following\n the ``with`` statement.\n\n If the suite was exited for any reason other than an exception, the\n return value from ``__exit__()`` is ignored, and execution proceeds\n at the normal location for the kind of exit that was taken.\n\nWith more than one item, the context managers are processed as if\nmultiple ``with`` statements were nested:\n\n with A() as a, B() as b:\n suite\n\nis equivalent to\n\n with A() as a:\n with B() as b:\n suite\n\nNote: In Python 2.5, the ``with`` statement is only allowed when the\n ``with_statement`` feature has been enabled. It is always enabled\n in Python 2.6.\n\nChanged in version 2.7: Support for multiple context expressions.\n\nSee also:\n\n **PEP 0343** - The "with" statement\n The specification, background, and examples for the Python\n ``with`` statement.\n\n\nFunction definitions\n====================\n\nA function definition defines a user-defined function object (see\nsection *The standard type hierarchy*):\n\n decorated ::= decorators (classdef | funcdef)\n decorators ::= decorator+\n decorator ::= "@" dotted_name ["(" [argument_list [","]] ")"] NEWLINE\n funcdef ::= "def" funcname "(" [parameter_list] ")" ":" suite\n dotted_name ::= identifier ("." identifier)*\n parameter_list ::= (defparameter ",")*\n ( "*" identifier [, "**" identifier]\n | "**" identifier\n | defparameter [","] )\n defparameter ::= parameter ["=" expression]\n sublist ::= parameter ("," parameter)* [","]\n parameter ::= identifier | "(" sublist ")"\n funcname ::= identifier\n\nA function definition is an executable statement. Its execution binds\nthe function name in the current local namespace to a function object\n(a wrapper around the executable code for the function). This\nfunction object contains a reference to the current global namespace\nas the global namespace to be used when the function is called.\n\nThe function definition does not execute the function body; this gets\nexecuted only when the function is called. [3]\n\nA function definition may be wrapped by one or more *decorator*\nexpressions. Decorator expressions are evaluated when the function is\ndefined, in the scope that contains the function definition. The\nresult must be a callable, which is invoked with the function object\nas the only argument. The returned value is bound to the function name\ninstead of the function object. Multiple decorators are applied in\nnested fashion. For example, the following code:\n\n @f1(arg)\n @f2\n def func(): pass\n\nis equivalent to:\n\n def func(): pass\n func = f1(arg)(f2(func))\n\nWhen one or more top-level parameters have the form *parameter* ``=``\n*expression*, the function is said to have "default parameter values."\nFor a parameter with a default value, the corresponding argument may\nbe omitted from a call, in which case the parameter\'s default value is\nsubstituted. If a parameter has a default value, all following\nparameters must also have a default value --- this is a syntactic\nrestriction that is not expressed by the grammar.\n\n**Default parameter values are evaluated when the function definition\nis executed.** This means that the expression is evaluated once, when\nthe function is defined, and that that same "pre-computed" value is\nused for each call. This is especially important to understand when a\ndefault parameter is a mutable object, such as a list or a dictionary:\nif the function modifies the object (e.g. 
by appending an item to a\nlist), the default value is in effect modified. This is generally not\nwhat was intended. A way around this is to use ``None`` as the\ndefault, and explicitly test for it in the body of the function, e.g.:\n\n def whats_on_the_telly(penguin=None):\n if penguin is None:\n penguin = []\n penguin.append("property of the zoo")\n return penguin\n\nFunction call semantics are described in more detail in section\n*Calls*. A function call always assigns values to all parameters\nmentioned in the parameter list, either from position arguments, from\nkeyword arguments, or from default values. If the form\n"``*identifier``" is present, it is initialized to a tuple receiving\nany excess positional parameters, defaulting to the empty tuple. If\nthe form "``**identifier``" is present, it is initialized to a new\ndictionary receiving any excess keyword arguments, defaulting to a new\nempty dictionary.\n\nIt is also possible to create anonymous functions (functions not bound\nto a name), for immediate use in expressions. This uses lambda forms,\ndescribed in section *Lambdas*. Note that the lambda form is merely a\nshorthand for a simplified function definition; a function defined in\na "``def``" statement can be passed around or assigned to another name\njust like a function defined by a lambda form. The "``def``" form is\nactually more powerful since it allows the execution of multiple\nstatements.\n\n**Programmer\'s note:** Functions are first-class objects. A "``def``"\nform executed inside a function definition defines a local function\nthat can be returned or passed around. Free variables used in the\nnested function can access the local variables of the function\ncontaining the def. See section *Naming and binding* for details.\n\n\nClass definitions\n=================\n\nA class definition defines a class object (see section *The standard\ntype hierarchy*):\n\n classdef ::= "class" classname [inheritance] ":" suite\n inheritance ::= "(" [expression_list] ")"\n classname ::= identifier\n\nA class definition is an executable statement. It first evaluates the\ninheritance list, if present. Each item in the inheritance list\nshould evaluate to a class object or class type which allows\nsubclassing. The class\'s suite is then executed in a new execution\nframe (see section *Naming and binding*), using a newly created local\nnamespace and the original global namespace. (Usually, the suite\ncontains only function definitions.) When the class\'s suite finishes\nexecution, its execution frame is discarded but its local namespace is\nsaved. [4] A class object is then created using the inheritance list\nfor the base classes and the saved local namespace for the attribute\ndictionary. The class name is bound to this class object in the\noriginal local namespace.\n\n**Programmer\'s note:** Variables defined in the class definition are\nclass variables; they are shared by all instances. To create instance\nvariables, they can be set in a method with ``self.name = value``.\nBoth class and instance variables are accessible through the notation\n"``self.name``", and an instance variable hides a class variable with\nthe same name when accessed in this way. Class variables can be used\nas defaults for instance variables, but using mutable values there can\nlead to unexpected results. 
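The note above about class variables is easiest to see in a few lines of code. A minimal sketch (Python 2.7; the ``Counter`` class and its attributes are invented for illustration, not taken from the text):

    class Counter(object):
        # Class variables: shared by all instances (see the note above).
        total = 0      # immutable default: rebinding via self creates an instance attribute
        history = []   # mutable default: this single list is shared by every instance

        def bump(self):
            self.total = self.total + 1      # creates/updates an *instance* attribute
            self.history.append(self.total)  # mutates the *class-level* list in place

    a, b = Counter(), Counter()
    a.bump()
    b.bump()
    print a.total, b.total        # 1 1   (each instance now hides Counter.total)
    print Counter.total           # 0     (the class variable itself is unchanged)
    print a.history is b.history  # True  (one shared list)
    print Counter.history         # [1, 1]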
For *new-style class*es, descriptors can\nbe used to create instance variables with different implementation\ndetails.\n\nClass definitions, like function definitions, may be wrapped by one or\nmore *decorator* expressions. The evaluation rules for the decorator\nexpressions are the same as for functions. The result must be a class\nobject, which is then bound to the class name.\n\n-[ Footnotes ]-\n\n[1] The exception is propagated to the invocation stack only if there\n is no ``finally`` clause that negates the exception.\n\n[2] Currently, control "flows off the end" except in the case of an\n exception or the execution of a ``return``, ``continue``, or\n ``break`` statement.\n\n[3] A string literal appearing as the first statement in the function\n body is transformed into the function\'s ``__doc__`` attribute and\n therefore the function\'s *docstring*.\n\n[4] A string literal appearing as the first statement in the class\n body is transformed into the namespace\'s ``__doc__`` item and\n therefore the class\'s *docstring*.\n', 'context-managers': u'\nWith Statement Context Managers\n*******************************\n\nNew in version 2.5.\n\nA *context manager* is an object that defines the runtime context to\nbe established when executing a ``with`` statement. The context\nmanager handles the entry into, and the exit from, the desired runtime\ncontext for the execution of the block of code. Context managers are\nnormally invoked using the ``with`` statement (described in section\n*The with statement*), but can also be used by directly invoking their\nmethods.\n\nTypical uses of context managers include saving and restoring various\nkinds of global state, locking and unlocking resources, closing opened\nfiles, etc.\n\nFor more information on context managers, see *Context Manager Types*.\n\nobject.__enter__(self)\n\n Enter the runtime context related to this object. The ``with``\n statement will bind this method\'s return value to the target(s)\n specified in the ``as`` clause of the statement, if any.\n\nobject.__exit__(self, exc_type, exc_value, traceback)\n\n Exit the runtime context related to this object. The parameters\n describe the exception that caused the context to be exited. If the\n context was exited without an exception, all three arguments will\n be ``None``.\n\n If an exception is supplied, and the method wishes to suppress the\n exception (i.e., prevent it from being propagated), it should\n return a true value. Otherwise, the exception will be processed\n normally upon exit from this method.\n\n Note that ``__exit__()`` methods should not reraise the passed-in\n exception; this is the caller\'s responsibility.\n\nSee also:\n\n **PEP 0343** - The "with" statement\n The specification, background, and examples for the Python\n ``with`` statement.\n', 'continue': u'\nThe ``continue`` statement\n**************************\n\n continue_stmt ::= "continue"\n\n``continue`` may only occur syntactically nested in a ``for`` or\n``while`` loop, but not nested in a function or class definition or\n``finally`` clause within that loop. 
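As a concrete illustration of the ``__enter__()``/``__exit__()`` protocol described above, here is a minimal hand-written context manager (Python 2.7; the ``ignoring`` class is a hypothetical example, not a standard library object):

    class ignoring(object):
        """Context manager that swallows the exception types given to it."""

        def __init__(self, *exc_types):
            self.exc_types = exc_types

        def __enter__(self):
            # Whatever is returned here is what an 'as' clause binds to.
            return self

        def __exit__(self, exc_type, exc_value, traceback):
            # Returning a true value tells the 'with' statement to suppress
            # the exception; returning None/False lets it propagate.
            return exc_type is not None and issubclass(exc_type, self.exc_types)

    with ignoring(ZeroDivisionError):
        1 / 0
    print "still running"   # reached because __exit__() returned True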
It continues with the next cycle\nof the nearest enclosing loop.\n\nWhen ``continue`` passes control out of a ``try`` statement with a\n``finally`` clause, that ``finally`` clause is executed before really\nstarting the next loop cycle.\n', 'conversions': u'\nArithmetic conversions\n**********************\n\nWhen a description of an arithmetic operator below uses the phrase\n"the numeric arguments are converted to a common type," the arguments\nare coerced using the coercion rules listed at *Coercion rules*. If\nboth arguments are standard numeric types, the following coercions are\napplied:\n\n* If either argument is a complex number, the other is converted to\n complex;\n\n* otherwise, if either argument is a floating point number, the other\n is converted to floating point;\n\n* otherwise, if either argument is a long integer, the other is\n converted to long integer;\n\n* otherwise, both must be plain integers and no conversion is\n necessary.\n\nSome additional rules apply for certain operators (e.g., a string left\nargument to the \'%\' operator). Extensions can define their own\ncoercions.\n', 'customization': u'\nBasic customization\n*******************\n\nobject.__new__(cls[, ...])\n\n Called to create a new instance of class *cls*. ``__new__()`` is a\n static method (special-cased so you need not declare it as such)\n that takes the class of which an instance was requested as its\n first argument. The remaining arguments are those passed to the\n object constructor expression (the call to the class). The return\n value of ``__new__()`` should be the new object instance (usually\n an instance of *cls*).\n\n Typical implementations create a new instance of the class by\n invoking the superclass\'s ``__new__()`` method using\n ``super(currentclass, cls).__new__(cls[, ...])`` with appropriate\n arguments and then modifying the newly-created instance as\n necessary before returning it.\n\n If ``__new__()`` returns an instance of *cls*, then the new\n instance\'s ``__init__()`` method will be invoked like\n ``__init__(self[, ...])``, where *self* is the new instance and the\n remaining arguments are the same as were passed to ``__new__()``.\n\n If ``__new__()`` does not return an instance of *cls*, then the new\n instance\'s ``__init__()`` method will not be invoked.\n\n ``__new__()`` is intended mainly to allow subclasses of immutable\n types (like int, str, or tuple) to customize instance creation. It\n is also commonly overridden in custom metaclasses in order to\n customize class creation.\n\nobject.__init__(self[, ...])\n\n Called when the instance is created. The arguments are those\n passed to the class constructor expression. If a base class has an\n ``__init__()`` method, the derived class\'s ``__init__()`` method,\n if any, must explicitly call it to ensure proper initialization of\n the base class part of the instance; for example:\n ``BaseClass.__init__(self, [args...])``. As a special constraint\n on constructors, no value may be returned; doing so will cause a\n ``TypeError`` to be raised at runtime.\n\nobject.__del__(self)\n\n Called when the instance is about to be destroyed. This is also\n called a destructor. If a base class has a ``__del__()`` method,\n the derived class\'s ``__del__()`` method, if any, must explicitly\n call it to ensure proper deletion of the base class part of the\n instance. Note that it is possible (though not recommended!) for\n the ``__del__()`` method to postpone destruction of the instance by\n creating a new reference to it. 
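To illustrate the ``__new__()``/``__init__()`` split described above, the sketch below subclasses an immutable built-in type, so the value has to be fixed in ``__new__()`` (Python 2.7; the ``Celsius`` class is invented for illustration):

    class Celsius(float):
        """float subclass; the value must be set in __new__, not __init__."""

        def __new__(cls, degrees):
            # __new__ actually creates the (immutable) instance.
            return super(Celsius, cls).__new__(cls, degrees)

        def __init__(self, degrees):
            # __init__ only decorates the already-created instance.
            self.unit = "C"

    t = Celsius(21.5)
    print t, t.unit             # 21.5 C
    print isinstance(t, float)  # True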
It may then be called at a later\n time when this new reference is deleted. It is not guaranteed that\n ``__del__()`` methods are called for objects that still exist when\n the interpreter exits.\n\n Note: ``del x`` doesn\'t directly call ``x.__del__()`` --- the former\n decrements the reference count for ``x`` by one, and the latter\n is only called when ``x``\'s reference count reaches zero. Some\n common situations that may prevent the reference count of an\n object from going to zero include: circular references between\n objects (e.g., a doubly-linked list or a tree data structure with\n parent and child pointers); a reference to the object on the\n stack frame of a function that caught an exception (the traceback\n stored in ``sys.exc_traceback`` keeps the stack frame alive); or\n a reference to the object on the stack frame that raised an\n unhandled exception in interactive mode (the traceback stored in\n ``sys.last_traceback`` keeps the stack frame alive). The first\n situation can only be remedied by explicitly breaking the cycles;\n the latter two situations can be resolved by storing ``None`` in\n ``sys.exc_traceback`` or ``sys.last_traceback``. Circular\n references which are garbage are detected when the option cycle\n detector is enabled (it\'s on by default), but can only be cleaned\n up if there are no Python-level ``__del__()`` methods involved.\n Refer to the documentation for the ``gc`` module for more\n information about how ``__del__()`` methods are handled by the\n cycle detector, particularly the description of the ``garbage``\n value.\n\n Warning: Due to the precarious circumstances under which ``__del__()``\n methods are invoked, exceptions that occur during their execution\n are ignored, and a warning is printed to ``sys.stderr`` instead.\n Also, when ``__del__()`` is invoked in response to a module being\n deleted (e.g., when execution of the program is done), other\n globals referenced by the ``__del__()`` method may already have\n been deleted or in the process of being torn down (e.g. the\n import machinery shutting down). For this reason, ``__del__()``\n methods should do the absolute minimum needed to maintain\n external invariants. Starting with version 1.5, Python\n guarantees that globals whose name begins with a single\n underscore are deleted from their module before other globals are\n deleted; if no other references to such globals exist, this may\n help in assuring that imported modules are still available at the\n time when the ``__del__()`` method is called.\n\nobject.__repr__(self)\n\n Called by the ``repr()`` built-in function and by string\n conversions (reverse quotes) to compute the "official" string\n representation of an object. If at all possible, this should look\n like a valid Python expression that could be used to recreate an\n object with the same value (given an appropriate environment). If\n this is not possible, a string of the form ``<...some useful\n description...>`` should be returned. The return value must be a\n string object. If a class defines ``__repr__()`` but not\n ``__str__()``, then ``__repr__()`` is also used when an "informal"\n string representation of instances of that class is required.\n\n This is typically used for debugging, so it is important that the\n representation is information-rich and unambiguous.\n\nobject.__str__(self)\n\n Called by the ``str()`` built-in function and by the ``print``\n statement to compute the "informal" string representation of an\n object. 
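A small invented example (Python 2.7) of the ``__repr__()``/``__str__()`` distinction described above: ``repr()`` aims at an unambiguous, ideally eval-able form, ``str()`` at a readable one.

    class Point(object):
        def __init__(self, x, y):
            self.x, self.y = x, y

        def __repr__(self):
            # "Official" representation: looks like the constructor call.
            return "Point(%r, %r)" % (self.x, self.y)

        def __str__(self):
            # "Informal" representation used by print and str().
            return "(%s, %s)" % (self.x, self.y)

    p = Point(2, 3)
    print repr(p)   # Point(2, 3)
    print p         # (2, 3)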
This differs from ``__repr__()`` in that it does not have\n to be a valid Python expression: a more convenient or concise\n representation may be used instead. The return value must be a\n string object.\n\nobject.__lt__(self, other)\nobject.__le__(self, other)\nobject.__eq__(self, other)\nobject.__ne__(self, other)\nobject.__gt__(self, other)\nobject.__ge__(self, other)\n\n   New in version 2.1.\n\n   These are the so-called "rich comparison" methods, and are called\n   for comparison operators in preference to ``__cmp__()`` below. The\n   correspondence between operator symbols and method names is as\n   follows: ``x<y`` calls ``x.__lt__(y)``, ``x<=y`` calls ``x.__le__(y)``,\n   ``x==y`` calls ``x.__eq__(y)``, ``x!=y`` and ``x<>y`` call\n   ``x.__ne__(y)``, ``x>y`` calls ``x.__gt__(y)``, and\n   ``x>=y`` calls ``x.__ge__(y)``.\n\n   A rich comparison method may return the singleton\n   ``NotImplemented`` if it does not implement the operation for a\n   given pair of arguments. By convention, ``False`` and ``True`` are\n   returned for a successful comparison. However, these methods can\n   return any value, so if the comparison operator is used in a\n   Boolean context (e.g., in the condition of an ``if`` statement),\n   Python will call ``bool()`` on the value to determine if the result\n   is true or false.\n\n   There are no implied relationships among the comparison operators.\n   The truth of ``x==y`` does not imply that ``x!=y`` is false.\n   Accordingly, when defining ``__eq__()``, one should also define\n   ``__ne__()`` so that the operators will behave as expected. See\n   the paragraph on ``__hash__()`` for some important notes on\n   creating *hashable* objects which support custom comparison\n   operations and are usable as dictionary keys.\n\n   There are no swapped-argument versions of these methods (to be used\n   when the left argument does not support the operation but the right\n   argument does); rather, ``__lt__()`` and ``__gt__()`` are each\n   other\'s reflection, ``__le__()`` and ``__ge__()`` are each other\'s\n   reflection, and ``__eq__()`` and ``__ne__()`` are their own\n   reflection.\n\n   Arguments to rich comparison methods are never coerced.\n\n   To automatically generate ordering operations from a single root\n   operation, see ``functools.total_ordering()``.\n\nobject.__cmp__(self, other)\n\n   Called by comparison operations if rich comparison (see above) is\n   not defined. Should return a negative integer if ``self < other``,\n   zero if ``self == other``, a positive integer if ``self > other``.\n   If no ``__cmp__()``, ``__eq__()`` or ``__ne__()`` operation is\n   defined, class instances are compared by object identity\n   ("address"). See also the description of ``__hash__()`` for some\n   important notes on creating *hashable* objects which support custom\n   comparison operations and are usable as dictionary keys. (Note: the\n   restriction that exceptions are not propagated by ``__cmp__()`` has\n   been removed since Python 1.5.)\n\nobject.__rcmp__(self, other)\n\n   Changed in version 2.1: No longer supported.\n\nobject.__hash__(self)\n\n   Called by built-in function ``hash()`` and for operations on\n   members of hashed collections including ``set``, ``frozenset``, and\n   ``dict``. ``__hash__()`` should return an integer. The only\n   required property is that objects which compare equal have the same\n   hash value; it is advised to somehow mix together (e.g. 
using\n exclusive or) the hash values for the components of the object that\n also play a part in comparison of objects.\n\n If a class does not define a ``__cmp__()`` or ``__eq__()`` method\n it should not define a ``__hash__()`` operation either; if it\n defines ``__cmp__()`` or ``__eq__()`` but not ``__hash__()``, its\n instances will not be usable in hashed collections. If a class\n defines mutable objects and implements a ``__cmp__()`` or\n ``__eq__()`` method, it should not implement ``__hash__()``, since\n hashable collection implementations require that a object\'s hash\n value is immutable (if the object\'s hash value changes, it will be\n in the wrong hash bucket).\n\n User-defined classes have ``__cmp__()`` and ``__hash__()`` methods\n by default; with them, all objects compare unequal (except with\n themselves) and ``x.__hash__()`` returns ``id(x)``.\n\n Classes which inherit a ``__hash__()`` method from a parent class\n but change the meaning of ``__cmp__()`` or ``__eq__()`` such that\n the hash value returned is no longer appropriate (e.g. by switching\n to a value-based concept of equality instead of the default\n identity based equality) can explicitly flag themselves as being\n unhashable by setting ``__hash__ = None`` in the class definition.\n Doing so means that not only will instances of the class raise an\n appropriate ``TypeError`` when a program attempts to retrieve their\n hash value, but they will also be correctly identified as\n unhashable when checking ``isinstance(obj, collections.Hashable)``\n (unlike classes which define their own ``__hash__()`` to explicitly\n raise ``TypeError``).\n\n Changed in version 2.5: ``__hash__()`` may now also return a long\n integer object; the 32-bit integer is then derived from the hash of\n that object.\n\n Changed in version 2.6: ``__hash__`` may now be set to ``None`` to\n explicitly flag instances of a class as unhashable.\n\nobject.__nonzero__(self)\n\n Called to implement truth value testing and the built-in operation\n ``bool()``; should return ``False`` or ``True``, or their integer\n equivalents ``0`` or ``1``. When this method is not defined,\n ``__len__()`` is called, if it is defined, and the object is\n considered true if its result is nonzero. If a class defines\n neither ``__len__()`` nor ``__nonzero__()``, all its instances are\n considered true.\n\nobject.__unicode__(self)\n\n Called to implement ``unicode()`` built-in; should return a Unicode\n object. When this method is not defined, string conversion is\n attempted, and the result of string conversion is converted to\n Unicode using the system default encoding.\n', - 'debugger': u'\n``pdb`` --- The Python Debugger\n*******************************\n\nThe module ``pdb`` defines an interactive source code debugger for\nPython programs. It supports setting (conditional) breakpoints and\nsingle stepping at the source line level, inspection of stack frames,\nsource code listing, and evaluation of arbitrary Python code in the\ncontext of any stack frame. It also supports post-mortem debugging\nand can be called under program control.\n\nThe debugger is extensible --- it is actually defined as the class\n``Pdb``. This is currently undocumented but easily understood by\nreading the source. The extension interface uses the modules ``bdb``\nand ``cmd``.\n\nThe debugger\'s prompt is ``(Pdb)``. 
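The ``__hash__()`` rules above reduce to: objects that compare equal must hash equal, and the hash should be computed from the same data that ``__eq__()`` compares. A minimal sketch (Python 2.7; the ``Account`` class is invented for illustration):

    class Account(object):
        def __init__(self, number):
            self.number = number

        def __eq__(self, other):
            return isinstance(other, Account) and self.number == other.number

        def __ne__(self, other):
            # Defined alongside __eq__, as advised above.
            return not self == other

        def __hash__(self):
            # Hash the same data that __eq__ compares, so that equal
            # objects always land in the same hash bucket.
            return hash(self.number)

    a, b = Account(42), Account(42)
    print a == b               # True
    print hash(a) == hash(b)   # True
    print len({a, b})          # 1 -- treated as one key in hashed collections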
Typical usage to run a program\nunder control of the debugger is:\n\n >>> import pdb\n >>> import mymodule\n >>> pdb.run(\'mymodule.test()\')\n > (0)?()\n (Pdb) continue\n > (1)?()\n (Pdb) continue\n NameError: \'spam\'\n > (1)?()\n (Pdb)\n\n``pdb.py`` can also be invoked as a script to debug other scripts.\nFor example:\n\n python -m pdb myscript.py\n\nWhen invoked as a script, pdb will automatically enter post-mortem\ndebugging if the program being debugged exits abnormally. After post-\nmortem debugging (or after normal exit of the program), pdb will\nrestart the program. Automatic restarting preserves pdb\'s state (such\nas breakpoints) and in most cases is more useful than quitting the\ndebugger upon program\'s exit.\n\nNew in version 2.4: Restarting post-mortem behavior added.\n\nThe typical usage to break into the debugger from a running program is\nto insert\n\n import pdb; pdb.set_trace()\n\nat the location you want to break into the debugger. You can then\nstep through the code following this statement, and continue running\nwithout the debugger using the ``c`` command.\n\nThe typical usage to inspect a crashed program is:\n\n >>> import pdb\n >>> import mymodule\n >>> mymodule.test()\n Traceback (most recent call last):\n File "", line 1, in ?\n File "./mymodule.py", line 4, in test\n test2()\n File "./mymodule.py", line 3, in test2\n print spam\n NameError: spam\n >>> pdb.pm()\n > ./mymodule.py(3)test2()\n -> print spam\n (Pdb)\n\nThe module defines the following functions; each enters the debugger\nin a slightly different way:\n\npdb.run(statement[, globals[, locals]])\n\n Execute the *statement* (given as a string) under debugger control.\n The debugger prompt appears before any code is executed; you can\n set breakpoints and type ``continue``, or you can step through the\n statement using ``step`` or ``next`` (all these commands are\n explained below). The optional *globals* and *locals* arguments\n specify the environment in which the code is executed; by default\n the dictionary of the module ``__main__`` is used. (See the\n explanation of the ``exec`` statement or the ``eval()`` built-in\n function.)\n\npdb.runeval(expression[, globals[, locals]])\n\n Evaluate the *expression* (given as a string) under debugger\n control. When ``runeval()`` returns, it returns the value of the\n expression. Otherwise this function is similar to ``run()``.\n\npdb.runcall(function[, argument, ...])\n\n Call the *function* (a function or method object, not a string)\n with the given arguments. When ``runcall()`` returns, it returns\n whatever the function call returned. The debugger prompt appears\n as soon as the function is entered.\n\npdb.set_trace()\n\n Enter the debugger at the calling stack frame. This is useful to\n hard-code a breakpoint at a given point in a program, even if the\n code is not otherwise being debugged (e.g. when an assertion\n fails).\n\npdb.post_mortem([traceback])\n\n Enter post-mortem debugging of the given *traceback* object. If no\n *traceback* is given, it uses the one of the exception that is\n currently being handled (an exception must be being handled if the\n default is to be used).\n\npdb.pm()\n\n Enter post-mortem debugging of the traceback found in\n ``sys.last_traceback``.\n\nThe ``run_*`` functions and ``set_trace()`` are aliases for\ninstantiating the ``Pdb`` class and calling the method of the same\nname. 
If you want to access further features, you have to do this\nyourself:\n\nclass class pdb.Pdb(completekey=\'tab\', stdin=None, stdout=None, skip=None)\n\n ``Pdb`` is the debugger class.\n\n The *completekey*, *stdin* and *stdout* arguments are passed to the\n underlying ``cmd.Cmd`` class; see the description there.\n\n The *skip* argument, if given, must be an iterable of glob-style\n module name patterns. The debugger will not step into frames that\n originate in a module that matches one of these patterns. [1]\n\n Example call to enable tracing with *skip*:\n\n import pdb; pdb.Pdb(skip=[\'django.*\']).set_trace()\n\n New in version 2.7: The *skip* argument.\n\n run(statement[, globals[, locals]])\n runeval(expression[, globals[, locals]])\n runcall(function[, argument, ...])\n set_trace()\n\n See the documentation for the functions explained above.\n', + 'debugger': u'\n``pdb`` --- The Python Debugger\n*******************************\n\nThe module ``pdb`` defines an interactive source code debugger for\nPython programs. It supports setting (conditional) breakpoints and\nsingle stepping at the source line level, inspection of stack frames,\nsource code listing, and evaluation of arbitrary Python code in the\ncontext of any stack frame. It also supports post-mortem debugging\nand can be called under program control.\n\nThe debugger is extensible --- it is actually defined as the class\n``Pdb``. This is currently undocumented but easily understood by\nreading the source. The extension interface uses the modules ``bdb``\nand ``cmd``.\n\nThe debugger\'s prompt is ``(Pdb)``. Typical usage to run a program\nunder control of the debugger is:\n\n >>> import pdb\n >>> import mymodule\n >>> pdb.run(\'mymodule.test()\')\n > (0)?()\n (Pdb) continue\n > (1)?()\n (Pdb) continue\n NameError: \'spam\'\n > (1)?()\n (Pdb)\n\n``pdb.py`` can also be invoked as a script to debug other scripts.\nFor example:\n\n python -m pdb myscript.py\n\nWhen invoked as a script, pdb will automatically enter post-mortem\ndebugging if the program being debugged exits abnormally. After post-\nmortem debugging (or after normal exit of the program), pdb will\nrestart the program. Automatic restarting preserves pdb\'s state (such\nas breakpoints) and in most cases is more useful than quitting the\ndebugger upon program\'s exit.\n\nNew in version 2.4: Restarting post-mortem behavior added.\n\nThe typical usage to break into the debugger from a running program is\nto insert\n\n import pdb; pdb.set_trace()\n\nat the location you want to break into the debugger. You can then\nstep through the code following this statement, and continue running\nwithout the debugger using the ``c`` command.\n\nThe typical usage to inspect a crashed program is:\n\n >>> import pdb\n >>> import mymodule\n >>> mymodule.test()\n Traceback (most recent call last):\n File "", line 1, in ?\n File "./mymodule.py", line 4, in test\n test2()\n File "./mymodule.py", line 3, in test2\n print spam\n NameError: spam\n >>> pdb.pm()\n > ./mymodule.py(3)test2()\n -> print spam\n (Pdb)\n\nThe module defines the following functions; each enters the debugger\nin a slightly different way:\n\npdb.run(statement[, globals[, locals]])\n\n Execute the *statement* (given as a string) under debugger control.\n The debugger prompt appears before any code is executed; you can\n set breakpoints and type ``continue``, or you can step through the\n statement using ``step`` or ``next`` (all these commands are\n explained below). 
The optional *globals* and *locals* arguments\n specify the environment in which the code is executed; by default\n the dictionary of the module ``__main__`` is used. (See the\n explanation of the ``exec`` statement or the ``eval()`` built-in\n function.)\n\npdb.runeval(expression[, globals[, locals]])\n\n Evaluate the *expression* (given as a string) under debugger\n control. When ``runeval()`` returns, it returns the value of the\n expression. Otherwise this function is similar to ``run()``.\n\npdb.runcall(function[, argument, ...])\n\n Call the *function* (a function or method object, not a string)\n with the given arguments. When ``runcall()`` returns, it returns\n whatever the function call returned. The debugger prompt appears\n as soon as the function is entered.\n\npdb.set_trace()\n\n Enter the debugger at the calling stack frame. This is useful to\n hard-code a breakpoint at a given point in a program, even if the\n code is not otherwise being debugged (e.g. when an assertion\n fails).\n\npdb.post_mortem([traceback])\n\n Enter post-mortem debugging of the given *traceback* object. If no\n *traceback* is given, it uses the one of the exception that is\n currently being handled (an exception must be being handled if the\n default is to be used).\n\npdb.pm()\n\n Enter post-mortem debugging of the traceback found in\n ``sys.last_traceback``.\n\nThe ``run*`` functions and ``set_trace()`` are aliases for\ninstantiating the ``Pdb`` class and calling the method of the same\nname. If you want to access further features, you have to do this\nyourself:\n\nclass class pdb.Pdb(completekey=\'tab\', stdin=None, stdout=None, skip=None)\n\n ``Pdb`` is the debugger class.\n\n The *completekey*, *stdin* and *stdout* arguments are passed to the\n underlying ``cmd.Cmd`` class; see the description there.\n\n The *skip* argument, if given, must be an iterable of glob-style\n module name patterns. The debugger will not step into frames that\n originate in a module that matches one of these patterns. [1]\n\n Example call to enable tracing with *skip*:\n\n import pdb; pdb.Pdb(skip=[\'django.*\']).set_trace()\n\n New in version 2.7: The *skip* argument.\n\n run(statement[, globals[, locals]])\n runeval(expression[, globals[, locals]])\n runcall(function[, argument, ...])\n set_trace()\n\n See the documentation for the functions explained above.\n', 'del': u'\nThe ``del`` statement\n*********************\n\n del_stmt ::= "del" target_list\n\nDeletion is recursively defined very similar to the way assignment is\ndefined. Rather that spelling it out in full details, here are some\nhints.\n\nDeletion of a target list recursively deletes each target, from left\nto right.\n\nDeletion of a name removes the binding of that name from the local or\nglobal namespace, depending on whether the name occurs in a ``global``\nstatement in the same code block. 
If the name is unbound, a\n``NameError`` exception will be raised.\n\nIt is illegal to delete a name from the local namespace if it occurs\nas a free variable in a nested block.\n\nDeletion of attribute references, subscriptions and slicings is passed\nto the primary object involved; deletion of a slicing is in general\nequivalent to assignment of an empty slice of the right type (but even\nthis is determined by the sliced object).\n', 'dict': u'\nDictionary displays\n*******************\n\nA dictionary display is a possibly empty series of key/datum pairs\nenclosed in curly braces:\n\n dict_display ::= "{" [key_datum_list | dict_comprehension] "}"\n key_datum_list ::= key_datum ("," key_datum)* [","]\n key_datum ::= expression ":" expression\n dict_comprehension ::= expression ":" expression comp_for\n\nA dictionary display yields a new dictionary object.\n\nIf a comma-separated sequence of key/datum pairs is given, they are\nevaluated from left to right to define the entries of the dictionary:\neach key object is used as a key into the dictionary to store the\ncorresponding datum. This means that you can specify the same key\nmultiple times in the key/datum list, and the final dictionary\'s value\nfor that key will be the last one given.\n\nA dict comprehension, in contrast to list and set comprehensions,\nneeds two expressions separated with a colon followed by the usual\n"for" and "if" clauses. When the comprehension is run, the resulting\nkey and value elements are inserted in the new dictionary in the order\nthey are produced.\n\nRestrictions on the types of the key values are listed earlier in\nsection *The standard type hierarchy*. (To summarize, the key type\nshould be *hashable*, which excludes all mutable objects.) Clashes\nbetween duplicate keys are not detected; the last datum (textually\nrightmost in the display) stored for a given key value prevails.\n', 'dynamic-features': u'\nInteraction with dynamic features\n*********************************\n\nThere are several cases where Python statements are illegal when used\nin conjunction with nested scopes that contain free variables.\n\nIf a variable is referenced in an enclosing scope, it is illegal to\ndelete the name. An error will be reported at compile time.\n\nIf the wild card form of import --- ``import *`` --- is used in a\nfunction and the function contains or is a nested block with free\nvariables, the compiler will raise a ``SyntaxError``.\n\nIf ``exec`` is used in a function and the function contains or is a\nnested block with free variables, the compiler will raise a\n``SyntaxError`` unless the exec explicitly specifies the local\nnamespace for the ``exec``. (In other words, ``exec obj`` would be\nillegal, but ``exec obj in ns`` would be legal.)\n\nThe ``eval()``, ``execfile()``, and ``input()`` functions and the\n``exec`` statement do not have access to the full environment for\nresolving names. Names may be resolved in the local and global\nnamespaces of the caller. Free variables are not resolved in the\nnearest enclosing namespace, but in the global namespace. 
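A few short lines (Python 2.7) illustrating the dictionary-display rules described above: the last datum for a duplicated key prevails, a dict comprehension pairs a key expression with a value expression, and unhashable keys are rejected.

    # Duplicate keys: the textually last (rightmost) datum wins.
    d = {'a': 1, 'b': 2, 'a': 3}
    print d['a']           # 3

    # Dict comprehension: key and value expressions separated by a colon.
    squares = {n: n * n for n in range(5)}
    print squares[4]       # 16

    # Keys must be hashable; mutable objects such as lists are rejected.
    try:
        {[1, 2]: 'oops'}
    except TypeError as e:
        print 'TypeError:', e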
[1] The\n``exec`` statement and the ``eval()`` and ``execfile()`` functions\nhave optional arguments to override the global and local namespace.\nIf only one namespace is specified, it is used for both.\n', 'else': u'\nThe ``if`` statement\n********************\n\nThe ``if`` statement is used for conditional execution:\n\n if_stmt ::= "if" expression ":" suite\n ( "elif" expression ":" suite )*\n ["else" ":" suite]\n\nIt selects exactly one of the suites by evaluating the expressions one\nby one until one is found to be true (see section *Boolean operations*\nfor the definition of true and false); then that suite is executed\n(and no other part of the ``if`` statement is executed or evaluated).\nIf all expressions are false, the suite of the ``else`` clause, if\npresent, is executed.\n', 'exceptions': u'\nExceptions\n**********\n\nExceptions are a means of breaking out of the normal flow of control\nof a code block in order to handle errors or other exceptional\nconditions. An exception is *raised* at the point where the error is\ndetected; it may be *handled* by the surrounding code block or by any\ncode block that directly or indirectly invoked the code block where\nthe error occurred.\n\nThe Python interpreter raises an exception when it detects a run-time\nerror (such as division by zero). A Python program can also\nexplicitly raise an exception with the ``raise`` statement. Exception\nhandlers are specified with the ``try`` ... ``except`` statement. The\n``finally`` clause of such a statement can be used to specify cleanup\ncode which does not handle the exception, but is executed whether an\nexception occurred or not in the preceding code.\n\nPython uses the "termination" model of error handling: an exception\nhandler can find out what happened and continue execution at an outer\nlevel, but it cannot repair the cause of the error and retry the\nfailing operation (except by re-entering the offending piece of code\nfrom the top).\n\nWhen an exception is not handled at all, the interpreter terminates\nexecution of the program, or returns to its interactive main loop. In\neither case, it prints a stack backtrace, except when the exception is\n``SystemExit``.\n\nExceptions are identified by class instances. The ``except`` clause\nis selected depending on the class of the instance: it must reference\nthe class of the instance or a base class thereof. The instance can\nbe received by the handler and can carry additional information about\nthe exceptional condition.\n\nExceptions can also be identified by strings, in which case the\n``except`` clause is selected by object identity. An arbitrary value\ncan be raised along with the identifying string which can be passed to\nthe handler.\n\nNote: Messages to exceptions are not part of the Python API. Their\n contents may change from one version of Python to the next without\n warning and should not be relied on by code which will run under\n multiple versions of the interpreter.\n\nSee also the description of the ``try`` statement in section *The try\nstatement* and ``raise`` statement in section *The raise statement*.\n\n-[ Footnotes ]-\n\n[1] This limitation occurs because the code that is executed by these\n operations is not available at the time the module is compiled.\n', 'exec': u'\nThe ``exec`` statement\n**********************\n\n exec_stmt ::= "exec" or_expr ["in" expression ["," expression]]\n\nThis statement supports dynamic execution of Python code. 
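The class-based ``except`` matching described above (a clause matches the raised instance's class or any of its base classes) can be seen in a short sketch (Python 2.7; the exception classes are invented for illustration):

    class AppError(Exception):
        pass

    class ConfigError(AppError):
        pass

    try:
        raise ConfigError("missing setting")
    except AppError as exc:      # matches: ConfigError is a subclass of AppError
        print "handled:", exc
    else:
        print "no exception"     # would run only if the try suite fell through
    finally:
        print "cleanup runs either way"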
The first\nexpression should evaluate to either a string, an open file object, or\na code object. If it is a string, the string is parsed as a suite of\nPython statements which is then executed (unless a syntax error\noccurs). [1] If it is an open file, the file is parsed until EOF and\nexecuted. If it is a code object, it is simply executed. In all\ncases, the code that\'s executed is expected to be valid as file input\n(see section *File input*). Be aware that the ``return`` and\n``yield`` statements may not be used outside of function definitions\neven within the context of code passed to the ``exec`` statement.\n\nIn all cases, if the optional parts are omitted, the code is executed\nin the current scope. If only the first expression after ``in`` is\nspecified, it should be a dictionary, which will be used for both the\nglobal and the local variables. If two expressions are given, they\nare used for the global and local variables, respectively. If\nprovided, *locals* can be any mapping object.\n\nChanged in version 2.4: Formerly, *locals* was required to be a\ndictionary.\n\nAs a side effect, an implementation may insert additional keys into\nthe dictionaries given besides those corresponding to variable names\nset by the executed code. For example, the current implementation may\nadd a reference to the dictionary of the built-in module\n``__builtin__`` under the key ``__builtins__`` (!).\n\n**Programmer\'s hints:** dynamic evaluation of expressions is supported\nby the built-in function ``eval()``. The built-in functions\n``globals()`` and ``locals()`` return the current global and local\ndictionary, respectively, which may be useful to pass around for use\nby ``exec``.\n\n-[ Footnotes ]-\n\n[1] Note that the parser only accepts the Unix-style end of line\n convention. If you are reading the code from a file, make sure to\n use universal newline mode to convert Windows or Mac-style\n newlines.\n', - 'execmodel': u'\nExecution model\n***************\n\n\nNaming and binding\n==================\n\n*Names* refer to objects. Names are introduced by name binding\noperations. Each occurrence of a name in the program text refers to\nthe *binding* of that name established in the innermost function block\ncontaining the use.\n\nA *block* is a piece of Python program text that is executed as a\nunit. The following are blocks: a module, a function body, and a class\ndefinition. Each command typed interactively is a block. A script\nfile (a file given as standard input to the interpreter or specified\non the interpreter command line the first argument) is a code block.\nA script command (a command specified on the interpreter command line\nwith the \'**-c**\' option) is a code block. The file read by the\nbuilt-in function ``execfile()`` is a code block. The string argument\npassed to the built-in function ``eval()`` and to the ``exec``\nstatement is a code block. The expression read and evaluated by the\nbuilt-in function ``input()`` is a code block.\n\nA code block is executed in an *execution frame*. A frame contains\nsome administrative information (used for debugging) and determines\nwhere and how execution continues after the code block\'s execution has\ncompleted.\n\nA *scope* defines the visibility of a name within a block. If a local\nvariable is defined in a block, its scope includes that block. If the\ndefinition occurs in a function block, the scope extends to any blocks\ncontained within the defining one, unless a contained block introduces\na different binding for the name. 
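A brief sketch (Python 2.7 only, since ``exec`` is a statement there) of the namespace forms described above; the dictionaries and names used are invented for illustration:

    ns = {}
    exec "x = 6 * 7" in ns          # one mapping: used as both globals and locals
    print ns['x']                   # 42

    g, l = {'base': 100}, {}
    exec "y = base + 1" in g, l     # separate global and local namespaces
    print l['y']                    # 101 ('base' was found in the globals g)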
The scope of names defined in a\nclass block is limited to the class block; it does not extend to the\ncode blocks of methods -- this includes generator expressions since\nthey are implemented using a function scope. This means that the\nfollowing will fail:\n\n class A:\n a = 42\n b = list(a + i for i in range(10))\n\nWhen a name is used in a code block, it is resolved using the nearest\nenclosing scope. The set of all such scopes visible to a code block\nis called the block\'s *environment*.\n\nIf a name is bound in a block, it is a local variable of that block.\nIf a name is bound at the module level, it is a global variable. (The\nvariables of the module code block are local and global.) If a\nvariable is used in a code block but not defined there, it is a *free\nvariable*.\n\nWhen a name is not found at all, a ``NameError`` exception is raised.\nIf the name refers to a local variable that has not been bound, a\n``UnboundLocalError`` exception is raised. ``UnboundLocalError`` is a\nsubclass of ``NameError``.\n\nThe following constructs bind names: formal parameters to functions,\n``import`` statements, class and function definitions (these bind the\nclass or function name in the defining block), and targets that are\nidentifiers if occurring in an assignment, ``for`` loop header, in the\nsecond position of an ``except`` clause header or after ``as`` in a\n``with`` statement. The ``import`` statement of the form ``from ...\nimport *`` binds all names defined in the imported module, except\nthose beginning with an underscore. This form may only be used at the\nmodule level.\n\nA target occurring in a ``del`` statement is also considered bound for\nthis purpose (though the actual semantics are to unbind the name). It\nis illegal to unbind a name that is referenced by an enclosing scope;\nthe compiler will report a ``SyntaxError``.\n\nEach assignment or import statement occurs within a block defined by a\nclass or function definition or at the module level (the top-level\ncode block).\n\nIf a name binding operation occurs anywhere within a code block, all\nuses of the name within the block are treated as references to the\ncurrent block. This can lead to errors when a name is used within a\nblock before it is bound. This rule is subtle. Python lacks\ndeclarations and allows name binding operations to occur anywhere\nwithin a code block. The local variables of a code block can be\ndetermined by scanning the entire text of the block for name binding\noperations.\n\nIf the global statement occurs within a block, all uses of the name\nspecified in the statement refer to the binding of that name in the\ntop-level namespace. Names are resolved in the top-level namespace by\nsearching the global namespace, i.e. the namespace of the module\ncontaining the code block, and the builtins namespace, the namespace\nof the module ``__builtin__``. The global namespace is searched\nfirst. If the name is not found there, the builtins namespace is\nsearched. The global statement must precede all uses of the name.\n\nThe builtins namespace associated with the execution of a code block\nis actually found by looking up the name ``__builtins__`` in its\nglobal namespace; this should be a dictionary or a module (in the\nlatter case the module\'s dictionary is used). By default, when in the\n``__main__`` module, ``__builtins__`` is the built-in module\n``__builtin__`` (note: no \'s\'); when in any other module,\n``__builtins__`` is an alias for the dictionary of the ``__builtin__``\nmodule itself. 
``__builtins__`` can be set to a user-created\ndictionary to create a weak form of restricted execution.\n\n**CPython implementation detail:** Users should not touch\n``__builtins__``; it is strictly an implementation detail. Users\nwanting to override values in the builtins namespace should ``import``\nthe ``__builtin__`` (no \'s\') module and modify its attributes\nappropriately.\n\nThe namespace for a module is automatically created the first time a\nmodule is imported. The main module for a script is always called\n``__main__``.\n\nThe global statement has the same scope as a name binding operation in\nthe same block. If the nearest enclosing scope for a free variable\ncontains a global statement, the free variable is treated as a global.\n\nA class definition is an executable statement that may use and define\nnames. These references follow the normal rules for name resolution.\nThe namespace of the class definition becomes the attribute dictionary\nof the class. Names defined at the class scope are not visible in\nmethods.\n\n\nInteraction with dynamic features\n---------------------------------\n\nThere are several cases where Python statements are illegal when used\nin conjunction with nested scopes that contain free variables.\n\nIf a variable is referenced in an enclosing scope, it is illegal to\ndelete the name. An error will be reported at compile time.\n\nIf the wild card form of import --- ``import *`` --- is used in a\nfunction and the function contains or is a nested block with free\nvariables, the compiler will raise a ``SyntaxError``.\n\nIf ``exec`` is used in a function and the function contains or is a\nnested block with free variables, the compiler will raise a\n``SyntaxError`` unless the exec explicitly specifies the local\nnamespace for the ``exec``. (In other words, ``exec obj`` would be\nillegal, but ``exec obj in ns`` would be legal.)\n\nThe ``eval()``, ``execfile()``, and ``input()`` functions and the\n``exec`` statement do not have access to the full environment for\nresolving names. Names may be resolved in the local and global\nnamespaces of the caller. Free variables are not resolved in the\nnearest enclosing namespace, but in the global namespace. [1] The\n``exec`` statement and the ``eval()`` and ``execfile()`` functions\nhave optional arguments to override the global and local namespace.\nIf only one namespace is specified, it is used for both.\n\n\nExceptions\n==========\n\nExceptions are a means of breaking out of the normal flow of control\nof a code block in order to handle errors or other exceptional\nconditions. An exception is *raised* at the point where the error is\ndetected; it may be *handled* by the surrounding code block or by any\ncode block that directly or indirectly invoked the code block where\nthe error occurred.\n\nThe Python interpreter raises an exception when it detects a run-time\nerror (such as division by zero). A Python program can also\nexplicitly raise an exception with the ``raise`` statement. Exception\nhandlers are specified with the ``try`` ... ``except`` statement. 
The\n``finally`` clause of such a statement can be used to specify cleanup\ncode which does not handle the exception, but is executed whether an\nexception occurred or not in the preceding code.\n\nPython uses the "termination" model of error handling: an exception\nhandler can find out what happened and continue execution at an outer\nlevel, but it cannot repair the cause of the error and retry the\nfailing operation (except by re-entering the offending piece of code\nfrom the top).\n\nWhen an exception is not handled at all, the interpreter terminates\nexecution of the program, or returns to its interactive main loop. In\neither case, it prints a stack backtrace, except when the exception is\n``SystemExit``.\n\nExceptions are identified by class instances. The ``except`` clause\nis selected depending on the class of the instance: it must reference\nthe class of the instance or a base class thereof. The instance can\nbe received by the handler and can carry additional information about\nthe exceptional condition.\n\nExceptions can also be identified by strings, in which case the\n``except`` clause is selected by object identity. An arbitrary value\ncan be raised along with the identifying string which can be passed to\nthe handler.\n\nNote: Messages to exceptions are not part of the Python API. Their\n contents may change from one version of Python to the next without\n warning and should not be relied on by code which will run under\n multiple versions of the interpreter.\n\nSee also the description of the ``try`` statement in section *The try\nstatement* and ``raise`` statement in section *The raise statement*.\n\n-[ Footnotes ]-\n\n[1] This limitation occurs because the code that is executed by these\n operations is not available at the time the module is compiled.\n', + 'execmodel': u'\nExecution model\n***************\n\n\nNaming and binding\n==================\n\n*Names* refer to objects. Names are introduced by name binding\noperations. Each occurrence of a name in the program text refers to\nthe *binding* of that name established in the innermost function block\ncontaining the use.\n\nA *block* is a piece of Python program text that is executed as a\nunit. The following are blocks: a module, a function body, and a class\ndefinition. Each command typed interactively is a block. A script\nfile (a file given as standard input to the interpreter or specified\non the interpreter command line the first argument) is a code block.\nA script command (a command specified on the interpreter command line\nwith the \'**-c**\' option) is a code block. The file read by the\nbuilt-in function ``execfile()`` is a code block. The string argument\npassed to the built-in function ``eval()`` and to the ``exec``\nstatement is a code block. The expression read and evaluated by the\nbuilt-in function ``input()`` is a code block.\n\nA code block is executed in an *execution frame*. A frame contains\nsome administrative information (used for debugging) and determines\nwhere and how execution continues after the code block\'s execution has\ncompleted.\n\nA *scope* defines the visibility of a name within a block. If a local\nvariable is defined in a block, its scope includes that block. If the\ndefinition occurs in a function block, the scope extends to any blocks\ncontained within the defining one, unless a contained block introduces\na different binding for the name. 
The scope of names defined in a\nclass block is limited to the class block; it does not extend to the\ncode blocks of methods -- this includes generator expressions since\nthey are implemented using a function scope. This means that the\nfollowing will fail:\n\n class A:\n a = 42\n b = list(a + i for i in range(10))\n\nWhen a name is used in a code block, it is resolved using the nearest\nenclosing scope. The set of all such scopes visible to a code block\nis called the block\'s *environment*.\n\nIf a name is bound in a block, it is a local variable of that block.\nIf a name is bound at the module level, it is a global variable. (The\nvariables of the module code block are local and global.) If a\nvariable is used in a code block but not defined there, it is a *free\nvariable*.\n\nWhen a name is not found at all, a ``NameError`` exception is raised.\nIf the name refers to a local variable that has not been bound, a\n``UnboundLocalError`` exception is raised. ``UnboundLocalError`` is a\nsubclass of ``NameError``.\n\nThe following constructs bind names: formal parameters to functions,\n``import`` statements, class and function definitions (these bind the\nclass or function name in the defining block), and targets that are\nidentifiers if occurring in an assignment, ``for`` loop header, in the\nsecond position of an ``except`` clause header or after ``as`` in a\n``with`` statement. The ``import`` statement of the form ``from ...\nimport *`` binds all names defined in the imported module, except\nthose beginning with an underscore. This form may only be used at the\nmodule level.\n\nA target occurring in a ``del`` statement is also considered bound for\nthis purpose (though the actual semantics are to unbind the name). It\nis illegal to unbind a name that is referenced by an enclosing scope;\nthe compiler will report a ``SyntaxError``.\n\nEach assignment or import statement occurs within a block defined by a\nclass or function definition or at the module level (the top-level\ncode block).\n\nIf a name binding operation occurs anywhere within a code block, all\nuses of the name within the block are treated as references to the\ncurrent block. This can lead to errors when a name is used within a\nblock before it is bound. This rule is subtle. Python lacks\ndeclarations and allows name binding operations to occur anywhere\nwithin a code block. The local variables of a code block can be\ndetermined by scanning the entire text of the block for name binding\noperations.\n\nIf the global statement occurs within a block, all uses of the name\nspecified in the statement refer to the binding of that name in the\ntop-level namespace. Names are resolved in the top-level namespace by\nsearching the global namespace, i.e. the namespace of the module\ncontaining the code block, and the builtins namespace, the namespace\nof the module ``__builtin__``. The global namespace is searched\nfirst. If the name is not found there, the builtins namespace is\nsearched. The global statement must precede all uses of the name.\n\nThe builtins namespace associated with the execution of a code block\nis actually found by looking up the name ``__builtins__`` in its\nglobal namespace; this should be a dictionary or a module (in the\nlatter case the module\'s dictionary is used). By default, when in the\n``__main__`` module, ``__builtins__`` is the built-in module\n``__builtin__`` (note: no \'s\'); when in any other module,\n``__builtins__`` is an alias for the dictionary of the ``__builtin__``\nmodule itself. 
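The binding rules above --- a name assigned anywhere in a block is local to the whole block unless a ``global`` statement says otherwise --- are the usual source of ``UnboundLocalError``. A minimal illustration (Python 2.7; the function names are invented):

    counter = 0

    def broken():
        print counter        # UnboundLocalError: the assignment below makes
        counter = 1          # 'counter' local to the entire function body

    def fixed():
        global counter       # all uses now refer to the module-level binding
        counter = counter + 1

    try:
        broken()
    except UnboundLocalError as e:
        print 'UnboundLocalError:', e

    fixed()
    print counter            # 1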
``__builtins__`` can be set to a user-created\ndictionary to create a weak form of restricted execution.\n\n**CPython implementation detail:** Users should not touch\n``__builtins__``; it is strictly an implementation detail. Users\nwanting to override values in the builtins namespace should ``import``\nthe ``__builtin__`` (no \'s\') module and modify its attributes\nappropriately.\n\nThe namespace for a module is automatically created the first time a\nmodule is imported. The main module for a script is always called\n``__main__``.\n\nThe ``global`` statement has the same scope as a name binding\noperation in the same block. If the nearest enclosing scope for a\nfree variable contains a global statement, the free variable is\ntreated as a global.\n\nA class definition is an executable statement that may use and define\nnames. These references follow the normal rules for name resolution.\nThe namespace of the class definition becomes the attribute dictionary\nof the class. Names defined at the class scope are not visible in\nmethods.\n\n\nInteraction with dynamic features\n---------------------------------\n\nThere are several cases where Python statements are illegal when used\nin conjunction with nested scopes that contain free variables.\n\nIf a variable is referenced in an enclosing scope, it is illegal to\ndelete the name. An error will be reported at compile time.\n\nIf the wild card form of import --- ``import *`` --- is used in a\nfunction and the function contains or is a nested block with free\nvariables, the compiler will raise a ``SyntaxError``.\n\nIf ``exec`` is used in a function and the function contains or is a\nnested block with free variables, the compiler will raise a\n``SyntaxError`` unless the exec explicitly specifies the local\nnamespace for the ``exec``. (In other words, ``exec obj`` would be\nillegal, but ``exec obj in ns`` would be legal.)\n\nThe ``eval()``, ``execfile()``, and ``input()`` functions and the\n``exec`` statement do not have access to the full environment for\nresolving names. Names may be resolved in the local and global\nnamespaces of the caller. Free variables are not resolved in the\nnearest enclosing namespace, but in the global namespace. [1] The\n``exec`` statement and the ``eval()`` and ``execfile()`` functions\nhave optional arguments to override the global and local namespace.\nIf only one namespace is specified, it is used for both.\n\n\nExceptions\n==========\n\nExceptions are a means of breaking out of the normal flow of control\nof a code block in order to handle errors or other exceptional\nconditions. An exception is *raised* at the point where the error is\ndetected; it may be *handled* by the surrounding code block or by any\ncode block that directly or indirectly invoked the code block where\nthe error occurred.\n\nThe Python interpreter raises an exception when it detects a run-time\nerror (such as division by zero). A Python program can also\nexplicitly raise an exception with the ``raise`` statement. Exception\nhandlers are specified with the ``try`` ... ``except`` statement. 
The\n``finally`` clause of such a statement can be used to specify cleanup\ncode which does not handle the exception, but is executed whether an\nexception occurred or not in the preceding code.\n\nPython uses the "termination" model of error handling: an exception\nhandler can find out what happened and continue execution at an outer\nlevel, but it cannot repair the cause of the error and retry the\nfailing operation (except by re-entering the offending piece of code\nfrom the top).\n\nWhen an exception is not handled at all, the interpreter terminates\nexecution of the program, or returns to its interactive main loop. In\neither case, it prints a stack backtrace, except when the exception is\n``SystemExit``.\n\nExceptions are identified by class instances. The ``except`` clause\nis selected depending on the class of the instance: it must reference\nthe class of the instance or a base class thereof. The instance can\nbe received by the handler and can carry additional information about\nthe exceptional condition.\n\nExceptions can also be identified by strings, in which case the\n``except`` clause is selected by object identity. An arbitrary value\ncan be raised along with the identifying string which can be passed to\nthe handler.\n\nNote: Messages to exceptions are not part of the Python API. Their\n contents may change from one version of Python to the next without\n warning and should not be relied on by code which will run under\n multiple versions of the interpreter.\n\nSee also the description of the ``try`` statement in section *The try\nstatement* and ``raise`` statement in section *The raise statement*.\n\n-[ Footnotes ]-\n\n[1] This limitation occurs because the code that is executed by these\n operations is not available at the time the module is compiled.\n', 'exprlists': u'\nExpression lists\n****************\n\n expression_list ::= expression ( "," expression )* [","]\n\nAn expression list containing at least one comma yields a tuple. The\nlength of the tuple is the number of expressions in the list. The\nexpressions are evaluated from left to right.\n\nThe trailing comma is required only to create a single tuple (a.k.a. a\n*singleton*); it is optional in all other cases. A single expression\nwithout a trailing comma doesn\'t create a tuple, but rather yields the\nvalue of that expression. (To create an empty tuple, use an empty pair\nof parentheses: ``()``.)\n', 'floating': u'\nFloating point literals\n***********************\n\nFloating point literals are described by the following lexical\ndefinitions:\n\n floatnumber ::= pointfloat | exponentfloat\n pointfloat ::= [intpart] fraction | intpart "."\n exponentfloat ::= (intpart | pointfloat) exponent\n intpart ::= digit+\n fraction ::= "." digit+\n exponent ::= ("e" | "E") ["+" | "-"] digit+\n\nNote that the integer and exponent parts of floating point numbers can\nlook like octal integers, but are interpreted using radix 10. For\nexample, ``077e010`` is legal, and denotes the same number as\n``77e10``. The allowed range of floating point literals is\nimplementation-dependent. Some examples of floating point literals:\n\n 3.14 10. 
.001 1e100 3.14e-10 0e0\n\nNote that numeric literals do not include a sign; a phrase like ``-1``\nis actually an expression composed of the unary operator ``-`` and the\nliteral ``1``.\n', 'for': u'\nThe ``for`` statement\n*********************\n\nThe ``for`` statement is used to iterate over the elements of a\nsequence (such as a string, tuple or list) or other iterable object:\n\n for_stmt ::= "for" target_list "in" expression_list ":" suite\n ["else" ":" suite]\n\nThe expression list is evaluated once; it should yield an iterable\nobject. An iterator is created for the result of the\n``expression_list``. The suite is then executed once for each item\nprovided by the iterator, in the order of ascending indices. Each\nitem in turn is assigned to the target list using the standard rules\nfor assignments, and then the suite is executed. When the items are\nexhausted (which is immediately when the sequence is empty), the suite\nin the ``else`` clause, if present, is executed, and the loop\nterminates.\n\nA ``break`` statement executed in the first suite terminates the loop\nwithout executing the ``else`` clause\'s suite. A ``continue``\nstatement executed in the first suite skips the rest of the suite and\ncontinues with the next item, or with the ``else`` clause if there was\nno next item.\n\nThe suite may assign to the variable(s) in the target list; this does\nnot affect the next item assigned to it.\n\nThe target list is not deleted when the loop is finished, but if the\nsequence is empty, it will not have been assigned to at all by the\nloop. Hint: the built-in function ``range()`` returns a sequence of\nintegers suitable to emulate the effect of Pascal\'s ``for i := a to b\ndo``; e.g., ``range(3)`` returns the list ``[0, 1, 2]``.\n\nNote: There is a subtlety when the sequence is being modified by the loop\n (this can only occur for mutable sequences, i.e. lists). An internal\n counter is used to keep track of which item is used next, and this\n is incremented on each iteration. When this counter has reached the\n length of the sequence the loop terminates. This means that if the\n suite deletes the current (or a previous) item from the sequence,\n the next item will be skipped (since it gets the index of the\n current item which has already been treated). Likewise, if the\n suite inserts an item in the sequence before the current item, the\n current item will be treated again the next time through the loop.\n This can lead to nasty bugs that can be avoided by making a\n temporary copy using a slice of the whole sequence, e.g.,\n\n for x in a[:]:\n if x < 0: a.remove(x)\n', - 'formatstrings': u'\nFormat String Syntax\n********************\n\nThe ``str.format()`` method and the ``Formatter`` class share the same\nsyntax for format strings (although in the case of ``Formatter``,\nsubclasses can define their own format string syntax).\n\nFormat strings contain "replacement fields" surrounded by curly braces\n``{}``. Anything that is not contained in braces is considered literal\ntext, which is copied unchanged to the output. If you need to include\na brace character in the literal text, it can be escaped by doubling:\n``{{`` and ``}}``.\n\nThe grammar for a replacement field is as follows:\n\n replacement_field ::= "{" [field_name] ["!" conversion] [":" format_spec] "}"\n field_name ::= arg_name ("." 
attribute_name | "[" element_index "]")*\n arg_name ::= [identifier | integer]\n attribute_name ::= identifier\n element_index ::= integer | index_string\n index_string ::= +\n conversion ::= "r" | "s"\n format_spec ::= \n\nIn less formal terms, the replacement field can start with a\n*field_name* that specifies the object whose value is to be formatted\nand inserted into the output instead of the replacement field. The\n*field_name* is optionally followed by a *conversion* field, which is\npreceded by an exclamation point ``\'!\'``, and a *format_spec*, which\nis preceded by a colon ``\':\'``. These specify a non-default format\nfor the replacement value.\n\nSee also the *Format Specification Mini-Language* section.\n\nThe *field_name* itself begins with an *arg_name* that is either\neither a number or a keyword. If it\'s a number, it refers to a\npositional argument, and if it\'s a keyword, it refers to a named\nkeyword argument. If the numerical arg_names in a format string are\n0, 1, 2, ... in sequence, they can all be omitted (not just some) and\nthe numbers 0, 1, 2, ... will be automatically inserted in that order.\nThe *arg_name* can be followed by any number of index or attribute\nexpressions. An expression of the form ``\'.name\'`` selects the named\nattribute using ``getattr()``, while an expression of the form\n``\'[index]\'`` does an index lookup using ``__getitem__()``.\n\nChanged in version 2.7: The positional argument specifiers can be\nomitted, so ``\'{} {}\'`` is equivalent to ``\'{0} {1}\'``.\n\nSome simple format string examples:\n\n "First, thou shalt count to {0}" # References first positional argument\n "Bring me a {}" # Implicitly references the first positional argument\n "From {} to {}" # Same as "From {0} to {1}"\n "My quest is {name}" # References keyword argument \'name\'\n "Weight in tons {0.weight}" # \'weight\' attribute of first positional arg\n "Units destroyed: {players[0]}" # First element of keyword argument \'players\'.\n\nThe *conversion* field causes a type coercion before formatting.\nNormally, the job of formatting a value is done by the\n``__format__()`` method of the value itself. However, in some cases\nit is desirable to force a type to be formatted as a string,\noverriding its own definition of formatting. By converting the value\nto a string before calling ``__format__()``, the normal formatting\nlogic is bypassed.\n\nTwo conversion flags are currently supported: ``\'!s\'`` which calls\n``str()`` on the value, and ``\'!r\'`` which calls ``repr()``.\n\nSome examples:\n\n "Harold\'s a clever {0!s}" # Calls str() on the argument first\n "Bring out the holy {name!r}" # Calls repr() on the argument first\n\nThe *format_spec* field contains a specification of how the value\nshould be presented, including such details as field width, alignment,\npadding, decimal precision and so on. Each value type can define its\nown "formatting mini-language" or interpretation of the *format_spec*.\n\nMost built-in types support a common formatting mini-language, which\nis described in the next section.\n\nA *format_spec* field can also include nested replacement fields\nwithin it. These nested replacement fields can contain only a field\nname; conversion flags and format specifications are not allowed. The\nreplacement fields within the format_spec are substituted before the\n*format_spec* string is interpreted. 
This allows the formatting of a\nvalue to be dynamically specified.\n\nSee the *Format examples* section for some examples.\n\n\nFormat Specification Mini-Language\n==================================\n\n"Format specifications" are used within replacement fields contained\nwithin a format string to define how individual values are presented\n(see *Format String Syntax*). They can also be passed directly to the\nbuilt-in ``format()`` function. Each formattable type may define how\nthe format specification is to be interpreted.\n\nMost built-in types implement the following options for format\nspecifications, although some of the formatting options are only\nsupported by the numeric types.\n\nA general convention is that an empty format string (``""``) produces\nthe same result as if you had called ``str()`` on the value. A non-\nempty format string typically modifies the result.\n\nThe general form of a *standard format specifier* is:\n\n format_spec ::= [[fill]align][sign][#][0][width][,][.precision][type]\n fill ::= \n align ::= "<" | ">" | "=" | "^"\n sign ::= "+" | "-" | " "\n width ::= integer\n precision ::= integer\n type ::= "b" | "c" | "d" | "e" | "E" | "f" | "F" | "g" | "G" | "n" | "o" | "s" | "x" | "X" | "%"\n\nThe *fill* character can be any character other than \'}\' (which\nsignifies the end of the field). The presence of a fill character is\nsignaled by the *next* character, which must be one of the alignment\noptions. If the second character of *format_spec* is not a valid\nalignment option, then it is assumed that both the fill character and\nthe alignment option are absent.\n\nThe meaning of the various alignment options is as follows:\n\n +-----------+------------------------------------------------------------+\n | Option | Meaning |\n +===========+============================================================+\n | ``\'<\'`` | Forces the field to be left-aligned within the available |\n | | space (this is the default). |\n +-----------+------------------------------------------------------------+\n | ``\'>\'`` | Forces the field to be right-aligned within the available |\n | | space. |\n +-----------+------------------------------------------------------------+\n | ``\'=\'`` | Forces the padding to be placed after the sign (if any) |\n | | but before the digits. This is used for printing fields |\n | | in the form \'+000000120\'. This alignment option is only |\n | | valid for numeric types. |\n +-----------+------------------------------------------------------------+\n | ``\'^\'`` | Forces the field to be centered within the available |\n | | space. |\n +-----------+------------------------------------------------------------+\n\nNote that unless a minimum field width is defined, the field width\nwill always be the same size as the data to fill it, so that the\nalignment option has no meaning in this case.\n\nThe *sign* option is only valid for number types, and can be one of\nthe following:\n\n +-----------+------------------------------------------------------------+\n | Option | Meaning |\n +===========+============================================================+\n | ``\'+\'`` | indicates that a sign should be used for both positive as |\n | | well as negative numbers. |\n +-----------+------------------------------------------------------------+\n | ``\'-\'`` | indicates that a sign should be used only for negative |\n | | numbers (this is the default behavior). 
|\n +-----------+------------------------------------------------------------+\n | space | indicates that a leading space should be used on positive |\n | | numbers, and a minus sign on negative numbers. |\n +-----------+------------------------------------------------------------+\n\nThe ``\'#\'`` option is only valid for integers, and only for binary,\noctal, or hexadecimal output. If present, it specifies that the\noutput will be prefixed by ``\'0b\'``, ``\'0o\'``, or ``\'0x\'``,\nrespectively.\n\nThe ``\',\'`` option signals the use of a comma for a thousands\nseparator. For a locale aware separator, use the ``\'n\'`` integer\npresentation type instead.\n\nChanged in version 2.7: Added the ``\',\'`` option (see also **PEP\n378**).\n\n*width* is a decimal integer defining the minimum field width. If not\nspecified, then the field width will be determined by the content.\n\nIf the *width* field is preceded by a zero (``\'0\'``) character, this\nenables zero-padding. This is equivalent to an *alignment* type of\n``\'=\'`` and a *fill* character of ``\'0\'``.\n\nThe *precision* is a decimal number indicating how many digits should\nbe displayed after the decimal point for a floating point value\nformatted with ``\'f\'`` and ``\'F\'``, or before and after the decimal\npoint for a floating point value formatted with ``\'g\'`` or ``\'G\'``.\nFor non-number types the field indicates the maximum field size - in\nother words, how many characters will be used from the field content.\nThe *precision* is not allowed for integer values.\n\nFinally, the *type* determines how the data should be presented.\n\nThe available string presentation types are:\n\n +-----------+------------------------------------------------------------+\n | Type | Meaning |\n +===========+============================================================+\n | ``\'s\'`` | String format. This is the default type for strings and |\n | | may be omitted. |\n +-----------+------------------------------------------------------------+\n | None | The same as ``\'s\'``. |\n +-----------+------------------------------------------------------------+\n\nThe available integer presentation types are:\n\n +-----------+------------------------------------------------------------+\n | Type | Meaning |\n +===========+============================================================+\n | ``\'b\'`` | Binary format. Outputs the number in base 2. |\n +-----------+------------------------------------------------------------+\n | ``\'c\'`` | Character. Converts the integer to the corresponding |\n | | unicode character before printing. |\n +-----------+------------------------------------------------------------+\n | ``\'d\'`` | Decimal Integer. Outputs the number in base 10. |\n +-----------+------------------------------------------------------------+\n | ``\'o\'`` | Octal format. Outputs the number in base 8. |\n +-----------+------------------------------------------------------------+\n | ``\'x\'`` | Hex format. Outputs the number in base 16, using lower- |\n | | case letters for the digits above 9. |\n +-----------+------------------------------------------------------------+\n | ``\'X\'`` | Hex format. Outputs the number in base 16, using upper- |\n | | case letters for the digits above 9. |\n +-----------+------------------------------------------------------------+\n | ``\'n\'`` | Number. This is the same as ``\'d\'``, except that it uses |\n | | the current locale setting to insert the appropriate |\n | | number separator characters. 
|\n +-----------+------------------------------------------------------------+\n | None | The same as ``\'d\'``. |\n +-----------+------------------------------------------------------------+\n\nIn addition to the above presentation types, integers can be formatted\nwith the floating point presentation types listed below (except\n``\'n\'`` and None). When doing so, ``float()`` is used to convert the\ninteger to a floating point number before formatting.\n\nThe available presentation types for floating point and decimal values\nare:\n\n +-----------+------------------------------------------------------------+\n | Type | Meaning |\n +===========+============================================================+\n | ``\'e\'`` | Exponent notation. Prints the number in scientific |\n | | notation using the letter \'e\' to indicate the exponent. |\n +-----------+------------------------------------------------------------+\n | ``\'E\'`` | Exponent notation. Same as ``\'e\'`` except it uses an upper |\n | | case \'E\' as the separator character. |\n +-----------+------------------------------------------------------------+\n | ``\'f\'`` | Fixed point. Displays the number as a fixed-point number. |\n +-----------+------------------------------------------------------------+\n | ``\'F\'`` | Fixed point. Same as ``\'f\'``. |\n +-----------+------------------------------------------------------------+\n | ``\'g\'`` | General format. For a given precision ``p >= 1``, this |\n | | rounds the number to ``p`` significant digits and then |\n | | formats the result in either fixed-point format or in |\n | | scientific notation, depending on its magnitude. The |\n | | precise rules are as follows: suppose that the result |\n | | formatted with presentation type ``\'e\'`` and precision |\n | | ``p-1`` would have exponent ``exp``. Then if ``-4 <= exp |\n | | < p``, the number is formatted with presentation type |\n | | ``\'f\'`` and precision ``p-1-exp``. Otherwise, the number |\n | | is formatted with presentation type ``\'e\'`` and precision |\n | | ``p-1``. In both cases insignificant trailing zeros are |\n | | removed from the significand, and the decimal point is |\n | | also removed if there are no remaining digits following |\n | | it. Postive and negative infinity, positive and negative |\n | | zero, and nans, are formatted as ``inf``, ``-inf``, ``0``, |\n | | ``-0`` and ``nan`` respectively, regardless of the |\n | | precision. A precision of ``0`` is treated as equivalent |\n | | to a precision of ``1``. |\n +-----------+------------------------------------------------------------+\n | ``\'G\'`` | General format. Same as ``\'g\'`` except switches to ``\'E\'`` |\n | | if the number gets too large. The representations of |\n | | infinity and NaN are uppercased, too. |\n +-----------+------------------------------------------------------------+\n | ``\'n\'`` | Number. This is the same as ``\'g\'``, except that it uses |\n | | the current locale setting to insert the appropriate |\n | | number separator characters. |\n +-----------+------------------------------------------------------------+\n | ``\'%\'`` | Percentage. Multiplies the number by 100 and displays in |\n | | fixed (``\'f\'``) format, followed by a percent sign. |\n +-----------+------------------------------------------------------------+\n | None | The same as ``\'g\'``. 
|\n +-----------+------------------------------------------------------------+\n\n\nFormat examples\n===============\n\nThis section contains examples of the new format syntax and comparison\nwith the old ``%``-formatting.\n\nIn most of the cases the syntax is similar to the old\n``%``-formatting, with the addition of the ``{}`` and with ``:`` used\ninstead of ``%``. For example, ``\'%03.2f\'`` can be translated to\n``\'{:03.2f}\'``.\n\nThe new format syntax also supports new and different options, shown\nin the follow examples.\n\nAccessing arguments by position:\n\n >>> \'{0}, {1}, {2}\'.format(\'a\', \'b\', \'c\')\n \'a, b, c\'\n >>> \'{}, {}, {}\'.format(\'a\', \'b\', \'c\') # 2.7+ only\n \'a, b, c\'\n >>> \'{2}, {1}, {0}\'.format(\'a\', \'b\', \'c\')\n \'c, b, a\'\n >>> \'{2}, {1}, {0}\'.format(*\'abc\') # unpacking argument sequence\n \'c, b, a\'\n >>> \'{0}{1}{0}\'.format(\'abra\', \'cad\') # arguments\' indices can be repeated\n \'abracadabra\'\n\nAccessing arguments by name:\n\n >>> \'Coordinates: {latitude}, {longitude}\'.format(latitude=\'37.24N\', longitude=\'-115.81W\')\n \'Coordinates: 37.24N, -115.81W\'\n >>> coord = {\'latitude\': \'37.24N\', \'longitude\': \'-115.81W\'}\n >>> \'Coordinates: {latitude}, {longitude}\'.format(**coord)\n \'Coordinates: 37.24N, -115.81W\'\n\nAccessing arguments\' attributes:\n\n >>> c = 3-5j\n >>> (\'The complex number {0} is formed from the real part {0.real} \'\n ... \'and the imaginary part {0.imag}.\').format(c)\n \'The complex number (3-5j) is formed from the real part 3.0 and the imaginary part -5.0.\'\n >>> class Point(object):\n ... def __init__(self, x, y):\n ... self.x, self.y = x, y\n ... def __str__(self):\n ... return \'Point({self.x}, {self.y})\'.format(self=self)\n ...\n >>> str(Point(4, 2))\n \'Point(4, 2)\'\n\nAccessing arguments\' items:\n\n >>> coord = (3, 5)\n >>> \'X: {0[0]}; Y: {0[1]}\'.format(coord)\n \'X: 3; Y: 5\'\n\nReplacing ``%s`` and ``%r``:\n\n >>> "repr() shows quotes: {!r}; str() doesn\'t: {!s}".format(\'test1\', \'test2\')\n "repr() shows quotes: \'test1\'; str() doesn\'t: test2"\n\nAligning the text and specifying a width:\n\n >>> \'{:<30}\'.format(\'left aligned\')\n \'left aligned \'\n >>> \'{:>30}\'.format(\'right aligned\')\n \' right aligned\'\n >>> \'{:^30}\'.format(\'centered\')\n \' centered \'\n >>> \'{:*^30}\'.format(\'centered\') # use \'*\' as a fill char\n \'***********centered***********\'\n\nReplacing ``%+f``, ``%-f``, and ``% f`` and specifying a sign:\n\n >>> \'{:+f}; {:+f}\'.format(3.14, -3.14) # show it always\n \'+3.140000; -3.140000\'\n >>> \'{: f}; {: f}\'.format(3.14, -3.14) # show a space for positive numbers\n \' 3.140000; -3.140000\'\n >>> \'{:-f}; {:-f}\'.format(3.14, -3.14) # show only the minus -- same as \'{:f}; {:f}\'\n \'3.140000; -3.140000\'\n\nReplacing ``%x`` and ``%o`` and converting the value to different\nbases:\n\n >>> # format also supports binary numbers\n >>> "int: {0:d}; hex: {0:x}; oct: {0:o}; bin: {0:b}".format(42)\n \'int: 42; hex: 2a; oct: 52; bin: 101010\'\n >>> # with 0x, 0o, or 0b as prefix:\n >>> "int: {0:d}; hex: {0:#x}; oct: {0:#o}; bin: {0:#b}".format(42)\n \'int: 42; hex: 0x2a; oct: 0o52; bin: 0b101010\'\n\nUsing the comma as a thousands separator:\n\n >>> \'{:,}\'.format(1234567890)\n \'1,234,567,890\'\n\nExpressing a percentage:\n\n >>> points = 19.5\n >>> total = 22\n >>> \'Correct answers: {:.2%}.\'.format(points/total)\n \'Correct answers: 88.64%\'\n\nUsing type-specific formatting:\n\n >>> import datetime\n >>> d = datetime.datetime(2010, 7, 4, 12, 
15, 58)\n >>> \'{:%Y-%m-%d %H:%M:%S}\'.format(d)\n \'2010-07-04 12:15:58\'\n\nNesting arguments and more complex examples:\n\n >>> for align, text in zip(\'<^>\', [\'left\', \'center\', \'right\']):\n ... \'{0:{align}{fill}16}\'.format(text, fill=align, align=align)\n ...\n \'left<<<<<<<<<<<<\'\n \'^^^^^center^^^^^\'\n \'>>>>>>>>>>>right\'\n >>>\n >>> octets = [192, 168, 0, 1]\n >>> \'{:02X}{:02X}{:02X}{:02X}\'.format(*octets)\n \'C0A80001\'\n >>> int(_, 16)\n 3232235521\n >>>\n >>> width = 5\n >>> for num in range(5,12):\n ... for base in \'dXob\':\n ... print \'{0:{width}{base}}\'.format(num, base=base, width=width),\n ... print\n ...\n 5 5 5 101\n 6 6 6 110\n 7 7 7 111\n 8 8 10 1000\n 9 9 11 1001\n 10 A 12 1010\n 11 B 13 1011\n', + 'formatstrings': u'\nFormat String Syntax\n********************\n\nThe ``str.format()`` method and the ``Formatter`` class share the same\nsyntax for format strings (although in the case of ``Formatter``,\nsubclasses can define their own format string syntax).\n\nFormat strings contain "replacement fields" surrounded by curly braces\n``{}``. Anything that is not contained in braces is considered literal\ntext, which is copied unchanged to the output. If you need to include\na brace character in the literal text, it can be escaped by doubling:\n``{{`` and ``}}``.\n\nThe grammar for a replacement field is as follows:\n\n replacement_field ::= "{" [field_name] ["!" conversion] [":" format_spec] "}"\n field_name ::= arg_name ("." attribute_name | "[" element_index "]")*\n arg_name ::= [identifier | integer]\n attribute_name ::= identifier\n element_index ::= integer | index_string\n index_string ::= +\n conversion ::= "r" | "s"\n format_spec ::= \n\nIn less formal terms, the replacement field can start with a\n*field_name* that specifies the object whose value is to be formatted\nand inserted into the output instead of the replacement field. The\n*field_name* is optionally followed by a *conversion* field, which is\npreceded by an exclamation point ``\'!\'``, and a *format_spec*, which\nis preceded by a colon ``\':\'``. These specify a non-default format\nfor the replacement value.\n\nSee also the *Format Specification Mini-Language* section.\n\nThe *field_name* itself begins with an *arg_name* that is either\neither a number or a keyword. If it\'s a number, it refers to a\npositional argument, and if it\'s a keyword, it refers to a named\nkeyword argument. If the numerical arg_names in a format string are\n0, 1, 2, ... in sequence, they can all be omitted (not just some) and\nthe numbers 0, 1, 2, ... will be automatically inserted in that order.\nThe *arg_name* can be followed by any number of index or attribute\nexpressions. 
An expression of the form ``\'.name\'`` selects the named\nattribute using ``getattr()``, while an expression of the form\n``\'[index]\'`` does an index lookup using ``__getitem__()``.\n\nChanged in version 2.7: The positional argument specifiers can be\nomitted, so ``\'{} {}\'`` is equivalent to ``\'{0} {1}\'``.\n\nSome simple format string examples:\n\n "First, thou shalt count to {0}" # References first positional argument\n "Bring me a {}" # Implicitly references the first positional argument\n "From {} to {}" # Same as "From {0} to {1}"\n "My quest is {name}" # References keyword argument \'name\'\n "Weight in tons {0.weight}" # \'weight\' attribute of first positional arg\n "Units destroyed: {players[0]}" # First element of keyword argument \'players\'.\n\nThe *conversion* field causes a type coercion before formatting.\nNormally, the job of formatting a value is done by the\n``__format__()`` method of the value itself. However, in some cases\nit is desirable to force a type to be formatted as a string,\noverriding its own definition of formatting. By converting the value\nto a string before calling ``__format__()``, the normal formatting\nlogic is bypassed.\n\nTwo conversion flags are currently supported: ``\'!s\'`` which calls\n``str()`` on the value, and ``\'!r\'`` which calls ``repr()``.\n\nSome examples:\n\n "Harold\'s a clever {0!s}" # Calls str() on the argument first\n "Bring out the holy {name!r}" # Calls repr() on the argument first\n\nThe *format_spec* field contains a specification of how the value\nshould be presented, including such details as field width, alignment,\npadding, decimal precision and so on. Each value type can define its\nown "formatting mini-language" or interpretation of the *format_spec*.\n\nMost built-in types support a common formatting mini-language, which\nis described in the next section.\n\nA *format_spec* field can also include nested replacement fields\nwithin it. These nested replacement fields can contain only a field\nname; conversion flags and format specifications are not allowed. The\nreplacement fields within the format_spec are substituted before the\n*format_spec* string is interpreted. This allows the formatting of a\nvalue to be dynamically specified.\n\nSee the *Format examples* section for some examples.\n\n\nFormat Specification Mini-Language\n==================================\n\n"Format specifications" are used within replacement fields contained\nwithin a format string to define how individual values are presented\n(see *Format String Syntax*). They can also be passed directly to the\nbuilt-in ``format()`` function. Each formattable type may define how\nthe format specification is to be interpreted.\n\nMost built-in types implement the following options for format\nspecifications, although some of the formatting options are only\nsupported by the numeric types.\n\nA general convention is that an empty format string (``""``) produces\nthe same result as if you had called ``str()`` on the value. A non-\nempty format string typically modifies the result.\n\nThe general form of a *standard format specifier* is:\n\n format_spec ::= [[fill]align][sign][#][0][width][,][.precision][type]\n fill ::= \n align ::= "<" | ">" | "=" | "^"\n sign ::= "+" | "-" | " "\n width ::= integer\n precision ::= integer\n type ::= "b" | "c" | "d" | "e" | "E" | "f" | "F" | "g" | "G" | "n" | "o" | "s" | "x" | "X" | "%"\n\nThe *fill* character can be any character other than \'{\' or \'}\'. 
The\npresence of a fill character is signaled by the character following\nit, which must be one of the alignment options. If the second\ncharacter of *format_spec* is not a valid alignment option, then it is\nassumed that both the fill character and the alignment option are\nabsent.\n\nThe meaning of the various alignment options is as follows:\n\n +-----------+------------------------------------------------------------+\n | Option | Meaning |\n +===========+============================================================+\n | ``\'<\'`` | Forces the field to be left-aligned within the available |\n | | space (this is the default for most objects). |\n +-----------+------------------------------------------------------------+\n | ``\'>\'`` | Forces the field to be right-aligned within the available |\n | | space (this is the default for numbers). |\n +-----------+------------------------------------------------------------+\n | ``\'=\'`` | Forces the padding to be placed after the sign (if any) |\n | | but before the digits. This is used for printing fields |\n | | in the form \'+000000120\'. This alignment option is only |\n | | valid for numeric types. |\n +-----------+------------------------------------------------------------+\n | ``\'^\'`` | Forces the field to be centered within the available |\n | | space. |\n +-----------+------------------------------------------------------------+\n\nNote that unless a minimum field width is defined, the field width\nwill always be the same size as the data to fill it, so that the\nalignment option has no meaning in this case.\n\nThe *sign* option is only valid for number types, and can be one of\nthe following:\n\n +-----------+------------------------------------------------------------+\n | Option | Meaning |\n +===========+============================================================+\n | ``\'+\'`` | indicates that a sign should be used for both positive as |\n | | well as negative numbers. |\n +-----------+------------------------------------------------------------+\n | ``\'-\'`` | indicates that a sign should be used only for negative |\n | | numbers (this is the default behavior). |\n +-----------+------------------------------------------------------------+\n | space | indicates that a leading space should be used on positive |\n | | numbers, and a minus sign on negative numbers. |\n +-----------+------------------------------------------------------------+\n\nThe ``\'#\'`` option is only valid for integers, and only for binary,\noctal, or hexadecimal output. If present, it specifies that the\noutput will be prefixed by ``\'0b\'``, ``\'0o\'``, or ``\'0x\'``,\nrespectively.\n\nThe ``\',\'`` option signals the use of a comma for a thousands\nseparator. For a locale aware separator, use the ``\'n\'`` integer\npresentation type instead.\n\nChanged in version 2.7: Added the ``\',\'`` option (see also **PEP\n378**).\n\n*width* is a decimal integer defining the minimum field width. If not\nspecified, then the field width will be determined by the content.\n\nIf the *width* field is preceded by a zero (``\'0\'``) character, this\nenables zero-padding. 
This is equivalent to an *alignment* type of\n``\'=\'`` and a *fill* character of ``\'0\'``.\n\nThe *precision* is a decimal number indicating how many digits should\nbe displayed after the decimal point for a floating point value\nformatted with ``\'f\'`` and ``\'F\'``, or before and after the decimal\npoint for a floating point value formatted with ``\'g\'`` or ``\'G\'``.\nFor non-number types the field indicates the maximum field size - in\nother words, how many characters will be used from the field content.\nThe *precision* is not allowed for integer values.\n\nFinally, the *type* determines how the data should be presented.\n\nThe available string presentation types are:\n\n +-----------+------------------------------------------------------------+\n | Type | Meaning |\n +===========+============================================================+\n | ``\'s\'`` | String format. This is the default type for strings and |\n | | may be omitted. |\n +-----------+------------------------------------------------------------+\n | None | The same as ``\'s\'``. |\n +-----------+------------------------------------------------------------+\n\nThe available integer presentation types are:\n\n +-----------+------------------------------------------------------------+\n | Type | Meaning |\n +===========+============================================================+\n | ``\'b\'`` | Binary format. Outputs the number in base 2. |\n +-----------+------------------------------------------------------------+\n | ``\'c\'`` | Character. Converts the integer to the corresponding |\n | | unicode character before printing. |\n +-----------+------------------------------------------------------------+\n | ``\'d\'`` | Decimal Integer. Outputs the number in base 10. |\n +-----------+------------------------------------------------------------+\n | ``\'o\'`` | Octal format. Outputs the number in base 8. |\n +-----------+------------------------------------------------------------+\n | ``\'x\'`` | Hex format. Outputs the number in base 16, using lower- |\n | | case letters for the digits above 9. |\n +-----------+------------------------------------------------------------+\n | ``\'X\'`` | Hex format. Outputs the number in base 16, using upper- |\n | | case letters for the digits above 9. |\n +-----------+------------------------------------------------------------+\n | ``\'n\'`` | Number. This is the same as ``\'d\'``, except that it uses |\n | | the current locale setting to insert the appropriate |\n | | number separator characters. |\n +-----------+------------------------------------------------------------+\n | None | The same as ``\'d\'``. |\n +-----------+------------------------------------------------------------+\n\nIn addition to the above presentation types, integers can be formatted\nwith the floating point presentation types listed below (except\n``\'n\'`` and None). When doing so, ``float()`` is used to convert the\ninteger to a floating point number before formatting.\n\nThe available presentation types for floating point and decimal values\nare:\n\n +-----------+------------------------------------------------------------+\n | Type | Meaning |\n +===========+============================================================+\n | ``\'e\'`` | Exponent notation. Prints the number in scientific |\n | | notation using the letter \'e\' to indicate the exponent. |\n +-----------+------------------------------------------------------------+\n | ``\'E\'`` | Exponent notation. 
Same as ``\'e\'`` except it uses an upper |\n | | case \'E\' as the separator character. |\n +-----------+------------------------------------------------------------+\n | ``\'f\'`` | Fixed point. Displays the number as a fixed-point number. |\n +-----------+------------------------------------------------------------+\n | ``\'F\'`` | Fixed point. Same as ``\'f\'``. |\n +-----------+------------------------------------------------------------+\n | ``\'g\'`` | General format. For a given precision ``p >= 1``, this |\n | | rounds the number to ``p`` significant digits and then |\n | | formats the result in either fixed-point format or in |\n | | scientific notation, depending on its magnitude. The |\n | | precise rules are as follows: suppose that the result |\n | | formatted with presentation type ``\'e\'`` and precision |\n | | ``p-1`` would have exponent ``exp``. Then if ``-4 <= exp |\n | | < p``, the number is formatted with presentation type |\n | | ``\'f\'`` and precision ``p-1-exp``. Otherwise, the number |\n | | is formatted with presentation type ``\'e\'`` and precision |\n | | ``p-1``. In both cases insignificant trailing zeros are |\n | | removed from the significand, and the decimal point is |\n | | also removed if there are no remaining digits following |\n | | it. Positive and negative infinity, positive and negative |\n | | zero, and nans, are formatted as ``inf``, ``-inf``, ``0``, |\n | | ``-0`` and ``nan`` respectively, regardless of the |\n | | precision. A precision of ``0`` is treated as equivalent |\n | | to a precision of ``1``. |\n +-----------+------------------------------------------------------------+\n | ``\'G\'`` | General format. Same as ``\'g\'`` except switches to ``\'E\'`` |\n | | if the number gets too large. The representations of |\n | | infinity and NaN are uppercased, too. |\n +-----------+------------------------------------------------------------+\n | ``\'n\'`` | Number. This is the same as ``\'g\'``, except that it uses |\n | | the current locale setting to insert the appropriate |\n | | number separator characters. |\n +-----------+------------------------------------------------------------+\n | ``\'%\'`` | Percentage. Multiplies the number by 100 and displays in |\n | | fixed (``\'f\'``) format, followed by a percent sign. |\n +-----------+------------------------------------------------------------+\n | None | The same as ``\'g\'``. |\n +-----------+------------------------------------------------------------+\n\n\nFormat examples\n===============\n\nThis section contains examples of the new format syntax and comparison\nwith the old ``%``-formatting.\n\nIn most of the cases the syntax is similar to the old\n``%``-formatting, with the addition of the ``{}`` and with ``:`` used\ninstead of ``%``. 
For example, ``\'%03.2f\'`` can be translated to\n``\'{:03.2f}\'``.\n\nThe new format syntax also supports new and different options, shown\nin the follow examples.\n\nAccessing arguments by position:\n\n >>> \'{0}, {1}, {2}\'.format(\'a\', \'b\', \'c\')\n \'a, b, c\'\n >>> \'{}, {}, {}\'.format(\'a\', \'b\', \'c\') # 2.7+ only\n \'a, b, c\'\n >>> \'{2}, {1}, {0}\'.format(\'a\', \'b\', \'c\')\n \'c, b, a\'\n >>> \'{2}, {1}, {0}\'.format(*\'abc\') # unpacking argument sequence\n \'c, b, a\'\n >>> \'{0}{1}{0}\'.format(\'abra\', \'cad\') # arguments\' indices can be repeated\n \'abracadabra\'\n\nAccessing arguments by name:\n\n >>> \'Coordinates: {latitude}, {longitude}\'.format(latitude=\'37.24N\', longitude=\'-115.81W\')\n \'Coordinates: 37.24N, -115.81W\'\n >>> coord = {\'latitude\': \'37.24N\', \'longitude\': \'-115.81W\'}\n >>> \'Coordinates: {latitude}, {longitude}\'.format(**coord)\n \'Coordinates: 37.24N, -115.81W\'\n\nAccessing arguments\' attributes:\n\n >>> c = 3-5j\n >>> (\'The complex number {0} is formed from the real part {0.real} \'\n ... \'and the imaginary part {0.imag}.\').format(c)\n \'The complex number (3-5j) is formed from the real part 3.0 and the imaginary part -5.0.\'\n >>> class Point(object):\n ... def __init__(self, x, y):\n ... self.x, self.y = x, y\n ... def __str__(self):\n ... return \'Point({self.x}, {self.y})\'.format(self=self)\n ...\n >>> str(Point(4, 2))\n \'Point(4, 2)\'\n\nAccessing arguments\' items:\n\n >>> coord = (3, 5)\n >>> \'X: {0[0]}; Y: {0[1]}\'.format(coord)\n \'X: 3; Y: 5\'\n\nReplacing ``%s`` and ``%r``:\n\n >>> "repr() shows quotes: {!r}; str() doesn\'t: {!s}".format(\'test1\', \'test2\')\n "repr() shows quotes: \'test1\'; str() doesn\'t: test2"\n\nAligning the text and specifying a width:\n\n >>> \'{:<30}\'.format(\'left aligned\')\n \'left aligned \'\n >>> \'{:>30}\'.format(\'right aligned\')\n \' right aligned\'\n >>> \'{:^30}\'.format(\'centered\')\n \' centered \'\n >>> \'{:*^30}\'.format(\'centered\') # use \'*\' as a fill char\n \'***********centered***********\'\n\nReplacing ``%+f``, ``%-f``, and ``% f`` and specifying a sign:\n\n >>> \'{:+f}; {:+f}\'.format(3.14, -3.14) # show it always\n \'+3.140000; -3.140000\'\n >>> \'{: f}; {: f}\'.format(3.14, -3.14) # show a space for positive numbers\n \' 3.140000; -3.140000\'\n >>> \'{:-f}; {:-f}\'.format(3.14, -3.14) # show only the minus -- same as \'{:f}; {:f}\'\n \'3.140000; -3.140000\'\n\nReplacing ``%x`` and ``%o`` and converting the value to different\nbases:\n\n >>> # format also supports binary numbers\n >>> "int: {0:d}; hex: {0:x}; oct: {0:o}; bin: {0:b}".format(42)\n \'int: 42; hex: 2a; oct: 52; bin: 101010\'\n >>> # with 0x, 0o, or 0b as prefix:\n >>> "int: {0:d}; hex: {0:#x}; oct: {0:#o}; bin: {0:#b}".format(42)\n \'int: 42; hex: 0x2a; oct: 0o52; bin: 0b101010\'\n\nUsing the comma as a thousands separator:\n\n >>> \'{:,}\'.format(1234567890)\n \'1,234,567,890\'\n\nExpressing a percentage:\n\n >>> points = 19.5\n >>> total = 22\n >>> \'Correct answers: {:.2%}.\'.format(points/total)\n \'Correct answers: 88.64%\'\n\nUsing type-specific formatting:\n\n >>> import datetime\n >>> d = datetime.datetime(2010, 7, 4, 12, 15, 58)\n >>> \'{:%Y-%m-%d %H:%M:%S}\'.format(d)\n \'2010-07-04 12:15:58\'\n\nNesting arguments and more complex examples:\n\n >>> for align, text in zip(\'<^>\', [\'left\', \'center\', \'right\']):\n ... 
\'{0:{fill}{align}16}\'.format(text, fill=align, align=align)\n ...\n \'left<<<<<<<<<<<<\'\n \'^^^^^center^^^^^\'\n \'>>>>>>>>>>>right\'\n >>>\n >>> octets = [192, 168, 0, 1]\n >>> \'{:02X}{:02X}{:02X}{:02X}\'.format(*octets)\n \'C0A80001\'\n >>> int(_, 16)\n 3232235521\n >>>\n >>> width = 5\n >>> for num in range(5,12):\n ... for base in \'dXob\':\n ... print \'{0:{width}{base}}\'.format(num, base=base, width=width),\n ... print\n ...\n 5 5 5 101\n 6 6 6 110\n 7 7 7 111\n 8 8 10 1000\n 9 9 11 1001\n 10 A 12 1010\n 11 B 13 1011\n', 'function': u'\nFunction definitions\n********************\n\nA function definition defines a user-defined function object (see\nsection *The standard type hierarchy*):\n\n decorated ::= decorators (classdef | funcdef)\n decorators ::= decorator+\n decorator ::= "@" dotted_name ["(" [argument_list [","]] ")"] NEWLINE\n funcdef ::= "def" funcname "(" [parameter_list] ")" ":" suite\n dotted_name ::= identifier ("." identifier)*\n parameter_list ::= (defparameter ",")*\n ( "*" identifier [, "**" identifier]\n | "**" identifier\n | defparameter [","] )\n defparameter ::= parameter ["=" expression]\n sublist ::= parameter ("," parameter)* [","]\n parameter ::= identifier | "(" sublist ")"\n funcname ::= identifier\n\nA function definition is an executable statement. Its execution binds\nthe function name in the current local namespace to a function object\n(a wrapper around the executable code for the function). This\nfunction object contains a reference to the current global namespace\nas the global namespace to be used when the function is called.\n\nThe function definition does not execute the function body; this gets\nexecuted only when the function is called. [3]\n\nA function definition may be wrapped by one or more *decorator*\nexpressions. Decorator expressions are evaluated when the function is\ndefined, in the scope that contains the function definition. The\nresult must be a callable, which is invoked with the function object\nas the only argument. The returned value is bound to the function name\ninstead of the function object. Multiple decorators are applied in\nnested fashion. For example, the following code:\n\n @f1(arg)\n @f2\n def func(): pass\n\nis equivalent to:\n\n def func(): pass\n func = f1(arg)(f2(func))\n\nWhen one or more top-level parameters have the form *parameter* ``=``\n*expression*, the function is said to have "default parameter values."\nFor a parameter with a default value, the corresponding argument may\nbe omitted from a call, in which case the parameter\'s default value is\nsubstituted. If a parameter has a default value, all following\nparameters must also have a default value --- this is a syntactic\nrestriction that is not expressed by the grammar.\n\n**Default parameter values are evaluated when the function definition\nis executed.** This means that the expression is evaluated once, when\nthe function is defined, and that that same "pre-computed" value is\nused for each call. This is especially important to understand when a\ndefault parameter is a mutable object, such as a list or a dictionary:\nif the function modifies the object (e.g. by appending an item to a\nlist), the default value is in effect modified. This is generally not\nwhat was intended. 
A way around this is to use ``None`` as the\ndefault, and explicitly test for it in the body of the function, e.g.:\n\n def whats_on_the_telly(penguin=None):\n if penguin is None:\n penguin = []\n penguin.append("property of the zoo")\n return penguin\n\nFunction call semantics are described in more detail in section\n*Calls*. A function call always assigns values to all parameters\nmentioned in the parameter list, either from position arguments, from\nkeyword arguments, or from default values. If the form\n"``*identifier``" is present, it is initialized to a tuple receiving\nany excess positional parameters, defaulting to the empty tuple. If\nthe form "``**identifier``" is present, it is initialized to a new\ndictionary receiving any excess keyword arguments, defaulting to a new\nempty dictionary.\n\nIt is also possible to create anonymous functions (functions not bound\nto a name), for immediate use in expressions. This uses lambda forms,\ndescribed in section *Lambdas*. Note that the lambda form is merely a\nshorthand for a simplified function definition; a function defined in\na "``def``" statement can be passed around or assigned to another name\njust like a function defined by a lambda form. The "``def``" form is\nactually more powerful since it allows the execution of multiple\nstatements.\n\n**Programmer\'s note:** Functions are first-class objects. A "``def``"\nform executed inside a function definition defines a local function\nthat can be returned or passed around. Free variables used in the\nnested function can access the local variables of the function\ncontaining the def. See section *Naming and binding* for details.\n', 'global': u'\nThe ``global`` statement\n************************\n\n global_stmt ::= "global" identifier ("," identifier)*\n\nThe ``global`` statement is a declaration which holds for the entire\ncurrent code block. It means that the listed identifiers are to be\ninterpreted as globals. It would be impossible to assign to a global\nvariable without ``global``, although free variables may refer to\nglobals without being declared global.\n\nNames listed in a ``global`` statement must not be used in the same\ncode block textually preceding that ``global`` statement.\n\nNames listed in a ``global`` statement must not be defined as formal\nparameters or in a ``for`` loop control target, ``class`` definition,\nfunction definition, or ``import`` statement.\n\n**CPython implementation detail:** The current implementation does not\nenforce the latter two restrictions, but programs should not abuse\nthis freedom, as future implementations may enforce them or silently\nchange the meaning of the program.\n\n**Programmer\'s note:** the ``global`` is a directive to the parser.\nIt applies only to code parsed at the same time as the ``global``\nstatement. In particular, a ``global`` statement contained in an\n``exec`` statement does not affect the code block *containing* the\n``exec`` statement, and code contained in an ``exec`` statement is\nunaffected by ``global`` statements in the code containing the\n``exec`` statement. The same applies to the ``eval()``,\n``execfile()`` and ``compile()`` functions.\n', - 'id-classes': u'\nReserved classes of identifiers\n*******************************\n\nCertain classes of identifiers (besides keywords) have special\nmeanings. These classes are identified by the patterns of leading and\ntrailing underscore characters:\n\n``_*``\n Not imported by ``from module import *``. 
The special identifier\n ``_`` is used in the interactive interpreter to store the result of\n the last evaluation; it is stored in the ``__builtin__`` module.\n When not in interactive mode, ``_`` has no special meaning and is\n not defined. See section *The import statement*.\n\n Note: The name ``_`` is often used in conjunction with\n internationalization; refer to the documentation for the\n ``gettext`` module for more information on this convention.\n\n``__*__``\n System-defined names. These names are defined by the interpreter\n and its implementation (including the standard library);\n applications should not expect to define additional names using\n this convention. The set of names of this class defined by Python\n may be extended in future versions. See section *Special method\n names*.\n\n``__*``\n Class-private names. Names in this category, when used within the\n context of a class definition, are re-written to use a mangled form\n to help avoid name clashes between "private" attributes of base and\n derived classes. See section *Identifiers (Names)*.\n', - 'identifiers': u'\nIdentifiers and keywords\n************************\n\nIdentifiers (also referred to as *names*) are described by the\nfollowing lexical definitions:\n\n identifier ::= (letter|"_") (letter | digit | "_")*\n letter ::= lowercase | uppercase\n lowercase ::= "a"..."z"\n uppercase ::= "A"..."Z"\n digit ::= "0"..."9"\n\nIdentifiers are unlimited in length. Case is significant.\n\n\nKeywords\n========\n\nThe following identifiers are used as reserved words, or *keywords* of\nthe language, and cannot be used as ordinary identifiers. They must\nbe spelled exactly as written here:\n\n and del from not while\n as elif global or with\n assert else if pass yield\n break except import print\n class exec in raise\n continue finally is return\n def for lambda try\n\nChanged in version 2.4: ``None`` became a constant and is now\nrecognized by the compiler as a name for the built-in object ``None``.\nAlthough it is not a keyword, you cannot assign a different object to\nit.\n\nChanged in version 2.5: Both ``as`` and ``with`` are only recognized\nwhen the ``with_statement`` future feature has been enabled. It will\nalways be enabled in Python 2.6. See section *The with statement* for\ndetails. Note that using ``as`` and ``with`` as identifiers will\nalways issue a warning, even when the ``with_statement`` future\ndirective is not in effect.\n\n\nReserved classes of identifiers\n===============================\n\nCertain classes of identifiers (besides keywords) have special\nmeanings. These classes are identified by the patterns of leading and\ntrailing underscore characters:\n\n``_*``\n Not imported by ``from module import *``. The special identifier\n ``_`` is used in the interactive interpreter to store the result of\n the last evaluation; it is stored in the ``__builtin__`` module.\n When not in interactive mode, ``_`` has no special meaning and is\n not defined. See section *The import statement*.\n\n Note: The name ``_`` is often used in conjunction with\n internationalization; refer to the documentation for the\n ``gettext`` module for more information on this convention.\n\n``__*__``\n System-defined names. These names are defined by the interpreter\n and its implementation (including the standard library);\n applications should not expect to define additional names using\n this convention. The set of names of this class defined by Python\n may be extended in future versions. 
See section *Special method\n names*.\n\n``__*``\n Class-private names. Names in this category, when used within the\n context of a class definition, are re-written to use a mangled form\n to help avoid name clashes between "private" attributes of base and\n derived classes. See section *Identifiers (Names)*.\n', + 'id-classes': u'\nReserved classes of identifiers\n*******************************\n\nCertain classes of identifiers (besides keywords) have special\nmeanings. These classes are identified by the patterns of leading and\ntrailing underscore characters:\n\n``_*``\n Not imported by ``from module import *``. The special identifier\n ``_`` is used in the interactive interpreter to store the result of\n the last evaluation; it is stored in the ``__builtin__`` module.\n When not in interactive mode, ``_`` has no special meaning and is\n not defined. See section *The import statement*.\n\n Note: The name ``_`` is often used in conjunction with\n internationalization; refer to the documentation for the\n ``gettext`` module for more information on this convention.\n\n``__*__``\n System-defined names. These names are defined by the interpreter\n and its implementation (including the standard library). Current\n system names are discussed in the *Special method names* section\n and elsewhere. More will likely be defined in future versions of\n Python. *Any* use of ``__*__`` names, in any context, that does\n not follow explicitly documented use, is subject to breakage\n without warning.\n\n``__*``\n Class-private names. Names in this category, when used within the\n context of a class definition, are re-written to use a mangled form\n to help avoid name clashes between "private" attributes of base and\n derived classes. See section *Identifiers (Names)*.\n', + 'identifiers': u'\nIdentifiers and keywords\n************************\n\nIdentifiers (also referred to as *names*) are described by the\nfollowing lexical definitions:\n\n identifier ::= (letter|"_") (letter | digit | "_")*\n letter ::= lowercase | uppercase\n lowercase ::= "a"..."z"\n uppercase ::= "A"..."Z"\n digit ::= "0"..."9"\n\nIdentifiers are unlimited in length. Case is significant.\n\n\nKeywords\n========\n\nThe following identifiers are used as reserved words, or *keywords* of\nthe language, and cannot be used as ordinary identifiers. They must\nbe spelled exactly as written here:\n\n and del from not while\n as elif global or with\n assert else if pass yield\n break except import print\n class exec in raise\n continue finally is return\n def for lambda try\n\nChanged in version 2.4: ``None`` became a constant and is now\nrecognized by the compiler as a name for the built-in object ``None``.\nAlthough it is not a keyword, you cannot assign a different object to\nit.\n\nChanged in version 2.5: Both ``as`` and ``with`` are only recognized\nwhen the ``with_statement`` future feature has been enabled. It will\nalways be enabled in Python 2.6. See section *The with statement* for\ndetails. Note that using ``as`` and ``with`` as identifiers will\nalways issue a warning, even when the ``with_statement`` future\ndirective is not in effect.\n\n\nReserved classes of identifiers\n===============================\n\nCertain classes of identifiers (besides keywords) have special\nmeanings. These classes are identified by the patterns of leading and\ntrailing underscore characters:\n\n``_*``\n Not imported by ``from module import *``. 
The special identifier\n ``_`` is used in the interactive interpreter to store the result of\n the last evaluation; it is stored in the ``__builtin__`` module.\n When not in interactive mode, ``_`` has no special meaning and is\n not defined. See section *The import statement*.\n\n Note: The name ``_`` is often used in conjunction with\n internationalization; refer to the documentation for the\n ``gettext`` module for more information on this convention.\n\n``__*__``\n System-defined names. These names are defined by the interpreter\n and its implementation (including the standard library). Current\n system names are discussed in the *Special method names* section\n and elsewhere. More will likely be defined in future versions of\n Python. *Any* use of ``__*__`` names, in any context, that does\n not follow explicitly documented use, is subject to breakage\n without warning.\n\n``__*``\n Class-private names. Names in this category, when used within the\n context of a class definition, are re-written to use a mangled form\n to help avoid name clashes between "private" attributes of base and\n derived classes. See section *Identifiers (Names)*.\n', 'if': u'\nThe ``if`` statement\n********************\n\nThe ``if`` statement is used for conditional execution:\n\n if_stmt ::= "if" expression ":" suite\n ( "elif" expression ":" suite )*\n ["else" ":" suite]\n\nIt selects exactly one of the suites by evaluating the expressions one\nby one until one is found to be true (see section *Boolean operations*\nfor the definition of true and false); then that suite is executed\n(and no other part of the ``if`` statement is executed or evaluated).\nIf all expressions are false, the suite of the ``else`` clause, if\npresent, is executed.\n', 'imaginary': u'\nImaginary literals\n******************\n\nImaginary literals are described by the following lexical definitions:\n\n imagnumber ::= (floatnumber | intpart) ("j" | "J")\n\nAn imaginary literal yields a complex number with a real part of 0.0.\nComplex numbers are represented as a pair of floating point numbers\nand have the same restrictions on their range. To create a complex\nnumber with a nonzero real part, add a floating point number to it,\ne.g., ``(3+4j)``. Some examples of imaginary literals:\n\n 3.14j 10.j 10j .001j 1e100j 3.14e-10j\n', - 'import': u'\nThe ``import`` statement\n************************\n\n import_stmt ::= "import" module ["as" name] ( "," module ["as" name] )*\n | "from" relative_module "import" identifier ["as" name]\n ( "," identifier ["as" name] )*\n | "from" relative_module "import" "(" identifier ["as" name]\n ( "," identifier ["as" name] )* [","] ")"\n | "from" module "import" "*"\n module ::= (identifier ".")* identifier\n relative_module ::= "."* module | "."+\n name ::= identifier\n\nImport statements are executed in two steps: (1) find a module, and\ninitialize it if necessary; (2) define a name or names in the local\nnamespace (of the scope where the ``import`` statement occurs). The\nstatement comes in two forms differing on whether it uses the ``from``\nkeyword. The first form (without ``from``) repeats these steps for\neach identifier in the list. The form with ``from`` performs step (1)\nonce, and then performs step (2) repeatedly.\n\nTo understand how step (1) occurs, one must first understand how\nPython handles hierarchical naming of modules. To help organize\nmodules and provide a hierarchy in naming, Python has a concept of\npackages. 
A package can contain other packages and modules while\nmodules cannot contain other modules or packages. From a file system\nperspective, packages are directories and modules are files. The\noriginal specification for packages is still available to read,\nalthough minor details have changed since the writing of that\ndocument.\n\nOnce the name of the module is known (unless otherwise specified, the\nterm "module" will refer to both packages and modules), searching for\nthe module or package can begin. The first place checked is\n``sys.modules``, the cache of all modules that have been imported\npreviously. If the module is found there then it is used in step (2)\nof import.\n\nIf the module is not found in the cache, then ``sys.meta_path`` is\nsearched (the specification for ``sys.meta_path`` can be found in\n**PEP 302**). The object is a list of *finder* objects which are\nqueried in order as to whether they know how to load the module by\ncalling their ``find_module()`` method with the name of the module. If\nthe module happens to be contained within a package (as denoted by the\nexistence of a dot in the name), then a second argument to\n``find_module()`` is given as the value of the ``__path__`` attribute\nfrom the parent package (everything up to the last dot in the name of\nthe module being imported). If a finder can find the module it returns\na *loader* (discussed later) or returns ``None``.\n\nIf none of the finders on ``sys.meta_path`` are able to find the\nmodule then some implicitly defined finders are queried.\nImplementations of Python vary in what implicit meta path finders are\ndefined. The one they all do define, though, is one that handles\n``sys.path_hooks``, ``sys.path_importer_cache``, and ``sys.path``.\n\nThe implicit finder searches for the requested module in the "paths"\nspecified in one of two places ("paths" do not have to be file system\npaths). If the module being imported is supposed to be contained\nwithin a package then the second argument passed to ``find_module()``,\n``__path__`` on the parent package, is used as the source of paths. If\nthe module is not contained in a package then ``sys.path`` is used as\nthe source of paths.\n\nOnce the source of paths is chosen it is iterated over to find a\nfinder that can handle that path. The dict at\n``sys.path_importer_cache`` caches finders for paths and is checked\nfor a finder. If the path does not have a finder cached then\n``sys.path_hooks`` is searched by calling each object in the list with\na single argument of the path, returning a finder or raises\n``ImportError``. If a finder is returned then it is cached in\n``sys.path_importer_cache`` and then used for that path entry. If no\nfinder can be found but the path exists then a value of ``None`` is\nstored in ``sys.path_importer_cache`` to signify that an implicit,\nfile-based finder that handles modules stored as individual files\nshould be used for that path. If the path does not exist then a finder\nwhich always returns ``None`` is placed in the cache for the path.\n\nIf no finder can find the module then ``ImportError`` is raised.\nOtherwise some finder returned a loader whose ``load_module()`` method\nis called with the name of the module to load (see **PEP 302** for the\noriginal definition of loaders). A loader has several responsibilities\nto perform on a module it loads. 
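The ``sys.modules`` cache described above is easy to observe; a small sketch using the standard ``math`` module purely as a convenient stand-in:

    import sys
    import math

    again = __import__('math')            # a repeated import hits the cache
    print(again is math)                   # True
    print(sys.modules['math'] is math)     # True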
First, if the module already exists\nin ``sys.modules`` (a possibility if the loader is called outside of\nthe import machinery) then it is to use that module for initialization\nand not a new module. But if the module does not exist in\n``sys.modules`` then it is to be added to that dict before\ninitialization begins. If an error occurs during loading of the module\nand it was added to ``sys.modules`` it is to be removed from the dict.\nIf an error occurs but the module was already in ``sys.modules`` it is\nleft in the dict.\n\nThe loader must set several attributes on the module. ``__name__`` is\nto be set to the name of the module. ``__file__`` is to be the "path"\nto the file unless the module is built-in (and thus listed in\n``sys.builtin_module_names``) in which case the attribute is not set.\nIf what is being imported is a package then ``__path__`` is to be set\nto a list of paths to be searched when looking for modules and\npackages contained within the package being imported. ``__package__``\nis optional but should be set to the name of package that contains the\nmodule or package (the empty string is used for module not contained\nin a package). ``__loader__`` is also optional but should be set to\nthe loader object that is loading the module.\n\nIf an error occurs during loading then the loader raises\n``ImportError`` if some other exception is not already being\npropagated. Otherwise the loader returns the module that was loaded\nand initialized.\n\nWhen step (1) finishes without raising an exception, step (2) can\nbegin.\n\nThe first form of ``import`` statement binds the module name in the\nlocal namespace to the module object, and then goes on to import the\nnext identifier, if any. If the module name is followed by ``as``,\nthe name following ``as`` is used as the local name for the module.\n\nThe ``from`` form does not bind the module name: it goes through the\nlist of identifiers, looks each one of them up in the module found in\nstep (1), and binds the name in the local namespace to the object thus\nfound. As with the first form of ``import``, an alternate local name\ncan be supplied by specifying "``as`` localname". If a name is not\nfound, ``ImportError`` is raised. If the list of identifiers is\nreplaced by a star (``\'*\'``), all public names defined in the module\nare bound in the local namespace of the ``import`` statement..\n\nThe *public names* defined by a module are determined by checking the\nmodule\'s namespace for a variable named ``__all__``; if defined, it\nmust be a sequence of strings which are names defined or imported by\nthat module. The names given in ``__all__`` are all considered public\nand are required to exist. If ``__all__`` is not defined, the set of\npublic names includes all names found in the module\'s namespace which\ndo not begin with an underscore character (``\'_\'``). ``__all__``\nshould contain the entire public API. It is intended to avoid\naccidentally exporting items that are not part of the API (such as\nlibrary modules which were imported and used within the module).\n\nThe ``from`` form with ``*`` may only occur in a module scope. If the\nwild card form of import --- ``import *`` --- is used in a function\nand the function contains or is a nested block with free variables,\nthe compiler will raise a ``SyntaxError``.\n\nWhen specifying what module to import you do not have to specify the\nabsolute name of the module. 
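A minimal sketch of the ``__all__`` behaviour of ``from module import *`` described above; the throw-away module ``demo_all`` and every name in it are invented for the example:

    import os
    import sys
    import tempfile

    tmpdir = tempfile.mkdtemp()
    with open(os.path.join(tmpdir, 'demo_all.py'), 'w') as f:
        f.write("__all__ = ['public_name']\n"
                "public_name = 1\n"
                "_private = 2\n"
                "unlisted = 3\n")
    sys.path.insert(0, tmpdir)

    ns = {}
    exec('from demo_all import *', ns)     # honours __all__
    print(sorted(n for n in ns if not n.startswith('__')))   # ['public_name']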
When a module or package is contained\nwithin another package it is possible to make a relative import within\nthe same top package without having to mention the package name. By\nusing leading dots in the specified module or package after ``from``\nyou can specify how high to traverse up the current package hierarchy\nwithout specifying exact names. One leading dot means the current\npackage where the module making the import exists. Two dots means up\none package level. Three dots is up two levels, etc. So if you execute\n``from . import mod`` from a module in the ``pkg`` package then you\nwill end up importing ``pkg.mod``. If you execute ``from ..subpkg2\nimprt mod`` from within ``pkg.subpkg1`` you will import\n``pkg.subpkg2.mod``. The specification for relative imports is\ncontained within **PEP 328**.\n\n``importlib.import_module()`` is provided to support applications that\ndetermine which modules need to be loaded dynamically.\n\n\nFuture statements\n=================\n\nA *future statement* is a directive to the compiler that a particular\nmodule should be compiled using syntax or semantics that will be\navailable in a specified future release of Python. The future\nstatement is intended to ease migration to future versions of Python\nthat introduce incompatible changes to the language. It allows use of\nthe new features on a per-module basis before the release in which the\nfeature becomes standard.\n\n future_statement ::= "from" "__future__" "import" feature ["as" name]\n ("," feature ["as" name])*\n | "from" "__future__" "import" "(" feature ["as" name]\n ("," feature ["as" name])* [","] ")"\n feature ::= identifier\n name ::= identifier\n\nA future statement must appear near the top of the module. The only\nlines that can appear before a future statement are:\n\n* the module docstring (if any),\n\n* comments,\n\n* blank lines, and\n\n* other future statements.\n\nThe features recognized by Python 2.6 are ``unicode_literals``,\n``print_function``, ``absolute_import``, ``division``, ``generators``,\n``nested_scopes`` and ``with_statement``. ``generators``,\n``with_statement``, ``nested_scopes`` are redundant in Python version\n2.6 and above because they are always enabled.\n\nA future statement is recognized and treated specially at compile\ntime: Changes to the semantics of core constructs are often\nimplemented by generating different code. It may even be the case\nthat a new feature introduces new incompatible syntax (such as a new\nreserved word), in which case the compiler may need to parse the\nmodule differently. 
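A minimal sketch of a future statement, using the ``division`` feature listed above (any of the recognized features would do):

    from __future__ import division   # must precede everything except the
                                      # docstring, comments and blank lines

    print(1 / 2)      # 0.5 under the 'division' feature, even for plain ints
    print(1 // 2)     # 0, floor division is unaffected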
Such decisions cannot be pushed off until\nruntime.\n\nFor any given release, the compiler knows which feature names have\nbeen defined, and raises a compile-time error if a future statement\ncontains a feature not known to it.\n\nThe direct runtime semantics are the same as for any import statement:\nthere is a standard module ``__future__``, described later, and it\nwill be imported in the usual way at the time the future statement is\nexecuted.\n\nThe interesting runtime semantics depend on the specific feature\nenabled by the future statement.\n\nNote that there is nothing special about the statement:\n\n import __future__ [as name]\n\nThat is not a future statement; it\'s an ordinary import statement with\nno special semantics or syntax restrictions.\n\nCode compiled by an ``exec`` statement or calls to the built-in\nfunctions ``compile()`` and ``execfile()`` that occur in a module\n``M`` containing a future statement will, by default, use the new\nsyntax or semantics associated with the future statement. This can,\nstarting with Python 2.2 be controlled by optional arguments to\n``compile()`` --- see the documentation of that function for details.\n\nA future statement typed at an interactive interpreter prompt will\ntake effect for the rest of the interpreter session. If an\ninterpreter is started with the *-i* option, is passed a script name\nto execute, and the script includes a future statement, it will be in\neffect in the interactive session started after the script is\nexecuted.\n\nSee also:\n\n **PEP 236** - Back to the __future__\n The original proposal for the __future__ mechanism.\n', + 'import': u'\nThe ``import`` statement\n************************\n\n import_stmt ::= "import" module ["as" name] ( "," module ["as" name] )*\n | "from" relative_module "import" identifier ["as" name]\n ( "," identifier ["as" name] )*\n | "from" relative_module "import" "(" identifier ["as" name]\n ( "," identifier ["as" name] )* [","] ")"\n | "from" module "import" "*"\n module ::= (identifier ".")* identifier\n relative_module ::= "."* module | "."+\n name ::= identifier\n\nImport statements are executed in two steps: (1) find a module, and\ninitialize it if necessary; (2) define a name or names in the local\nnamespace (of the scope where the ``import`` statement occurs). The\nstatement comes in two forms differing on whether it uses the ``from``\nkeyword. The first form (without ``from``) repeats these steps for\neach identifier in the list. The form with ``from`` performs step (1)\nonce, and then performs step (2) repeatedly.\n\nTo understand how step (1) occurs, one must first understand how\nPython handles hierarchical naming of modules. To help organize\nmodules and provide a hierarchy in naming, Python has a concept of\npackages. A package can contain other packages and modules while\nmodules cannot contain other modules or packages. From a file system\nperspective, packages are directories and modules are files. The\noriginal specification for packages is still available to read,\nalthough minor details have changed since the writing of that\ndocument.\n\nOnce the name of the module is known (unless otherwise specified, the\nterm "module" will refer to both packages and modules), searching for\nthe module or package can begin. The first place checked is\n``sys.modules``, the cache of all modules that have been imported\npreviously. 
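A small sketch of the ``compile()`` hook mentioned above, assuming only what the ``__future__`` module itself provides (a ``compiler_flag`` attribute on each feature):

    import __future__

    # Enable the 'division' feature only for this compiled string, without a
    # future statement in the enclosing module.
    code = compile('print(7 / 2)', '<demo>', 'exec',
                   __future__.division.compiler_flag)
    exec(code)        # prints 3.5 even if the surrounding module still uses
                      # classic division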
If the module is found there then it is used in step (2)\nof import.\n\nIf the module is not found in the cache, then ``sys.meta_path`` is\nsearched (the specification for ``sys.meta_path`` can be found in\n**PEP 302**). The object is a list of *finder* objects which are\nqueried in order as to whether they know how to load the module by\ncalling their ``find_module()`` method with the name of the module. If\nthe module happens to be contained within a package (as denoted by the\nexistence of a dot in the name), then a second argument to\n``find_module()`` is given as the value of the ``__path__`` attribute\nfrom the parent package (everything up to the last dot in the name of\nthe module being imported). If a finder can find the module it returns\na *loader* (discussed later) or returns ``None``.\n\nIf none of the finders on ``sys.meta_path`` are able to find the\nmodule then some implicitly defined finders are queried.\nImplementations of Python vary in what implicit meta path finders are\ndefined. The one they all do define, though, is one that handles\n``sys.path_hooks``, ``sys.path_importer_cache``, and ``sys.path``.\n\nThe implicit finder searches for the requested module in the "paths"\nspecified in one of two places ("paths" do not have to be file system\npaths). If the module being imported is supposed to be contained\nwithin a package then the second argument passed to ``find_module()``,\n``__path__`` on the parent package, is used as the source of paths. If\nthe module is not contained in a package then ``sys.path`` is used as\nthe source of paths.\n\nOnce the source of paths is chosen it is iterated over to find a\nfinder that can handle that path. The dict at\n``sys.path_importer_cache`` caches finders for paths and is checked\nfor a finder. If the path does not have a finder cached then\n``sys.path_hooks`` is searched by calling each object in the list with\na single argument of the path, returning a finder or raises\n``ImportError``. If a finder is returned then it is cached in\n``sys.path_importer_cache`` and then used for that path entry. If no\nfinder can be found but the path exists then a value of ``None`` is\nstored in ``sys.path_importer_cache`` to signify that an implicit,\nfile-based finder that handles modules stored as individual files\nshould be used for that path. If the path does not exist then a finder\nwhich always returns ``None`` is placed in the cache for the path.\n\nIf no finder can find the module then ``ImportError`` is raised.\nOtherwise some finder returned a loader whose ``load_module()`` method\nis called with the name of the module to load (see **PEP 302** for the\noriginal definition of loaders). A loader has several responsibilities\nto perform on a module it loads. First, if the module already exists\nin ``sys.modules`` (a possibility if the loader is called outside of\nthe import machinery) then it is to use that module for initialization\nand not a new module. But if the module does not exist in\n``sys.modules`` then it is to be added to that dict before\ninitialization begins. If an error occurs during loading of the module\nand it was added to ``sys.modules`` it is to be removed from the dict.\nIf an error occurs but the module was already in ``sys.modules`` it is\nleft in the dict.\n\nThe loader must set several attributes on the module. ``__name__`` is\nto be set to the name of the module. 
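A toy ``sys.meta_path`` entry exercising the ``find_module()``/``load_module()`` protocol described above. The class, the module name ``fake_demo_module`` and its ``answer`` attribute are all invented; this is the PEP 302 interface this topic documents, not the newer ``find_spec()`` API of recent Python 3 releases:

    import sys
    import types

    class ToyFinder(object):
        def find_module(self, fullname, path=None):
            # Claim only the made-up name used in this sketch.
            return self if fullname == 'fake_demo_module' else None

        def load_module(self, fullname):
            # Reuse a module already placed in sys.modules, otherwise add one
            # before initialisation begins, as the text above requires.
            mod = sys.modules.setdefault(fullname, types.ModuleType(fullname))
            mod.__file__ = '<virtual>'
            mod.__loader__ = self
            mod.answer = 42
            return mod

    sys.meta_path.insert(0, ToyFinder())
    import fake_demo_module
    print(fake_demo_module.answer)      # 42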
``__file__`` is to be the "path"\nto the file unless the module is built-in (and thus listed in\n``sys.builtin_module_names``) in which case the attribute is not set.\nIf what is being imported is a package then ``__path__`` is to be set\nto a list of paths to be searched when looking for modules and\npackages contained within the package being imported. ``__package__``\nis optional but should be set to the name of package that contains the\nmodule or package (the empty string is used for module not contained\nin a package). ``__loader__`` is also optional but should be set to\nthe loader object that is loading the module.\n\nIf an error occurs during loading then the loader raises\n``ImportError`` if some other exception is not already being\npropagated. Otherwise the loader returns the module that was loaded\nand initialized.\n\nWhen step (1) finishes without raising an exception, step (2) can\nbegin.\n\nThe first form of ``import`` statement binds the module name in the\nlocal namespace to the module object, and then goes on to import the\nnext identifier, if any. If the module name is followed by ``as``,\nthe name following ``as`` is used as the local name for the module.\n\nThe ``from`` form does not bind the module name: it goes through the\nlist of identifiers, looks each one of them up in the module found in\nstep (1), and binds the name in the local namespace to the object thus\nfound. As with the first form of ``import``, an alternate local name\ncan be supplied by specifying "``as`` localname". If a name is not\nfound, ``ImportError`` is raised. If the list of identifiers is\nreplaced by a star (``\'*\'``), all public names defined in the module\nare bound in the local namespace of the ``import`` statement..\n\nThe *public names* defined by a module are determined by checking the\nmodule\'s namespace for a variable named ``__all__``; if defined, it\nmust be a sequence of strings which are names defined or imported by\nthat module. The names given in ``__all__`` are all considered public\nand are required to exist. If ``__all__`` is not defined, the set of\npublic names includes all names found in the module\'s namespace which\ndo not begin with an underscore character (``\'_\'``). ``__all__``\nshould contain the entire public API. It is intended to avoid\naccidentally exporting items that are not part of the API (such as\nlibrary modules which were imported and used within the module).\n\nThe ``from`` form with ``*`` may only occur in a module scope. If the\nwild card form of import --- ``import *`` --- is used in a function\nand the function contains or is a nested block with free variables,\nthe compiler will raise a ``SyntaxError``.\n\nWhen specifying what module to import you do not have to specify the\nabsolute name of the module. When a module or package is contained\nwithin another package it is possible to make a relative import within\nthe same top package without having to mention the package name. By\nusing leading dots in the specified module or package after ``from``\nyou can specify how high to traverse up the current package hierarchy\nwithout specifying exact names. One leading dot means the current\npackage where the module making the import exists. Two dots means up\none package level. Three dots is up two levels, etc. So if you execute\n``from . import mod`` from a module in the ``pkg`` package then you\nwill end up importing ``pkg.mod``. If you execute ``from ..subpkg2\nimport mod`` from within ``pkg.subpkg1`` you will import\n``pkg.subpkg2.mod``. 
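A self-contained sketch of the relative-import rule above, building a throw-away ``pkg``/``subpkg1`` layout on disk; all file and attribute names are invented:

    import os
    import sys
    import tempfile

    base = tempfile.mkdtemp()
    sub = os.path.join(base, 'pkg', 'subpkg1')
    os.makedirs(sub)
    open(os.path.join(base, 'pkg', '__init__.py'), 'w').close()
    open(os.path.join(sub, '__init__.py'), 'w').close()
    with open(os.path.join(base, 'pkg', 'mod.py'), 'w') as f:
        f.write("value = 'pkg.mod'\n")
    with open(os.path.join(sub, 'use.py'), 'w') as f:
        f.write("from .. import mod\n")   # two dots: up one level, to pkg

    sys.path.insert(0, base)
    from pkg.subpkg1 import use
    print(use.mod.value)                   # pkg.mod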
The specification for relative imports is\ncontained within **PEP 328**.\n\n``importlib.import_module()`` is provided to support applications that\ndetermine which modules need to be loaded dynamically.\n\n\nFuture statements\n=================\n\nA *future statement* is a directive to the compiler that a particular\nmodule should be compiled using syntax or semantics that will be\navailable in a specified future release of Python. The future\nstatement is intended to ease migration to future versions of Python\nthat introduce incompatible changes to the language. It allows use of\nthe new features on a per-module basis before the release in which the\nfeature becomes standard.\n\n future_statement ::= "from" "__future__" "import" feature ["as" name]\n ("," feature ["as" name])*\n | "from" "__future__" "import" "(" feature ["as" name]\n ("," feature ["as" name])* [","] ")"\n feature ::= identifier\n name ::= identifier\n\nA future statement must appear near the top of the module. The only\nlines that can appear before a future statement are:\n\n* the module docstring (if any),\n\n* comments,\n\n* blank lines, and\n\n* other future statements.\n\nThe features recognized by Python 2.6 are ``unicode_literals``,\n``print_function``, ``absolute_import``, ``division``, ``generators``,\n``nested_scopes`` and ``with_statement``. ``generators``,\n``with_statement``, ``nested_scopes`` are redundant in Python version\n2.6 and above because they are always enabled.\n\nA future statement is recognized and treated specially at compile\ntime: Changes to the semantics of core constructs are often\nimplemented by generating different code. It may even be the case\nthat a new feature introduces new incompatible syntax (such as a new\nreserved word), in which case the compiler may need to parse the\nmodule differently. Such decisions cannot be pushed off until\nruntime.\n\nFor any given release, the compiler knows which feature names have\nbeen defined, and raises a compile-time error if a future statement\ncontains a feature not known to it.\n\nThe direct runtime semantics are the same as for any import statement:\nthere is a standard module ``__future__``, described later, and it\nwill be imported in the usual way at the time the future statement is\nexecuted.\n\nThe interesting runtime semantics depend on the specific feature\nenabled by the future statement.\n\nNote that there is nothing special about the statement:\n\n import __future__ [as name]\n\nThat is not a future statement; it\'s an ordinary import statement with\nno special semantics or syntax restrictions.\n\nCode compiled by an ``exec`` statement or calls to the built-in\nfunctions ``compile()`` and ``execfile()`` that occur in a module\n``M`` containing a future statement will, by default, use the new\nsyntax or semantics associated with the future statement. This can,\nstarting with Python 2.2 be controlled by optional arguments to\n``compile()`` --- see the documentation of that function for details.\n\nA future statement typed at an interactive interpreter prompt will\ntake effect for the rest of the interpreter session. 
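The ``importlib.import_module()`` helper mentioned above, with the standard ``json`` module standing in for a name chosen at run time:

    import importlib

    name = 'json'                         # imagine this comes from configuration
    mod = importlib.import_module(name)
    print(mod.dumps({'ok': True}))        # {"ok": true}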
If an\ninterpreter is started with the *-i* option, is passed a script name\nto execute, and the script includes a future statement, it will be in\neffect in the interactive session started after the script is\nexecuted.\n\nSee also:\n\n **PEP 236** - Back to the __future__\n The original proposal for the __future__ mechanism.\n', 'in': u'\nComparisons\n***********\n\nUnlike C, all comparison operations in Python have the same priority,\nwhich is lower than that of any arithmetic, shifting or bitwise\noperation. Also unlike C, expressions like ``a < b < c`` have the\ninterpretation that is conventional in mathematics:\n\n comparison ::= or_expr ( comp_operator or_expr )*\n comp_operator ::= "<" | ">" | "==" | ">=" | "<=" | "<>" | "!="\n | "is" ["not"] | ["not"] "in"\n\nComparisons yield boolean values: ``True`` or ``False``.\n\nComparisons can be chained arbitrarily, e.g., ``x < y <= z`` is\nequivalent to ``x < y and y <= z``, except that ``y`` is evaluated\nonly once (but in both cases ``z`` is not evaluated at all when ``x <\ny`` is found to be false).\n\nFormally, if *a*, *b*, *c*, ..., *y*, *z* are expressions and *op1*,\n*op2*, ..., *opN* are comparison operators, then ``a op1 b op2 c ... y\nopN z`` is equivalent to ``a op1 b and b op2 c and ... y opN z``,\nexcept that each expression is evaluated at most once.\n\nNote that ``a op1 b op2 c`` doesn\'t imply any kind of comparison\nbetween *a* and *c*, so that, e.g., ``x < y > z`` is perfectly legal\n(though perhaps not pretty).\n\nThe forms ``<>`` and ``!=`` are equivalent; for consistency with C,\n``!=`` is preferred; where ``!=`` is mentioned below ``<>`` is also\naccepted. The ``<>`` spelling is considered obsolescent.\n\nThe operators ``<``, ``>``, ``==``, ``>=``, ``<=``, and ``!=`` compare\nthe values of two objects. The objects need not have the same type.\nIf both are numbers, they are converted to a common type. Otherwise,\nobjects of different types *always* compare unequal, and are ordered\nconsistently but arbitrarily. You can control comparison behavior of\nobjects of non-built-in types by defining a ``__cmp__`` method or rich\ncomparison methods like ``__gt__``, described in section *Special\nmethod names*.\n\n(This unusual definition of comparison was used to simplify the\ndefinition of operations like sorting and the ``in`` and ``not in``\noperators. In the future, the comparison rules for objects of\ndifferent types are likely to change.)\n\nComparison of objects of the same type depends on the type:\n\n* Numbers are compared arithmetically.\n\n* Strings are compared lexicographically using the numeric equivalents\n (the result of the built-in function ``ord()``) of their characters.\n Unicode and 8-bit strings are fully interoperable in this behavior.\n [4]\n\n* Tuples and lists are compared lexicographically using comparison of\n corresponding elements. This means that to compare equal, each\n element must compare equal and the two sequences must be of the same\n type and have the same length.\n\n If not equal, the sequences are ordered the same as their first\n differing elements. For example, ``cmp([1,2,x], [1,2,y])`` returns\n the same as ``cmp(x,y)``. If the corresponding element does not\n exist, the shorter sequence is ordered first (for example, ``[1,2] <\n [1,2,3]``).\n\n* Mappings (dictionaries) compare equal if and only if their sorted\n (key, value) lists compare equal. [5] Outcomes other than equality\n are resolved consistently, but are not otherwise defined. 
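The chaining and single-evaluation rules for comparisons can be checked directly; ``middle()`` is an invented helper:

    def middle():
        print('evaluated once')
        return 5

    print(1 <= middle() < 10)     # prints 'evaluated once', then True
    print(1 < 10 > 5)             # True, although 1 and 5 are never compared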
[6]\n\n* Most other objects of built-in types compare unequal unless they are\n the same object; the choice whether one object is considered smaller\n or larger than another one is made arbitrarily but consistently\n within one execution of a program.\n\nThe operators ``in`` and ``not in`` test for collection membership.\n``x in s`` evaluates to true if *x* is a member of the collection *s*,\nand false otherwise. ``x not in s`` returns the negation of ``x in\ns``. The collection membership test has traditionally been bound to\nsequences; an object is a member of a collection if the collection is\na sequence and contains an element equal to that object. However, it\nmake sense for many other object types to support membership tests\nwithout being a sequence. In particular, dictionaries (for keys) and\nsets support membership testing.\n\nFor the list and tuple types, ``x in y`` is true if and only if there\nexists an index *i* such that ``x == y[i]`` is true.\n\nFor the Unicode and string types, ``x in y`` is true if and only if\n*x* is a substring of *y*. An equivalent test is ``y.find(x) != -1``.\nNote, *x* and *y* need not be the same type; consequently, ``u\'ab\' in\n\'abc\'`` will return ``True``. Empty strings are always considered to\nbe a substring of any other string, so ``"" in "abc"`` will return\n``True``.\n\nChanged in version 2.3: Previously, *x* was required to be a string of\nlength ``1``.\n\nFor user-defined classes which define the ``__contains__()`` method,\n``x in y`` is true if and only if ``y.__contains__(x)`` is true.\n\nFor user-defined classes which do not define ``__contains__()`` but do\ndefine ``__iter__()``, ``x in y`` is true if some value ``z`` with ``x\n== z`` is produced while iterating over ``y``. If an exception is\nraised during the iteration, it is as if ``in`` raised that exception.\n\nLastly, the old-style iteration protocol is tried: if a class defines\n``__getitem__()``, ``x in y`` is true if and only if there is a non-\nnegative integer index *i* such that ``x == y[i]``, and all lower\ninteger indices do not raise ``IndexError`` exception. (If any other\nexception is raised, it is as if ``in`` raised that exception).\n\nThe operator ``not in`` is defined to have the inverse true value of\n``in``.\n\nThe operators ``is`` and ``is not`` test for object identity: ``x is\ny`` is true if and only if *x* and *y* are the same object. ``x is\nnot y`` yields the inverse truth value. [7]\n', 'integers': u'\nInteger and long integer literals\n*********************************\n\nInteger and long integer literals are described by the following\nlexical definitions:\n\n longinteger ::= integer ("l" | "L")\n integer ::= decimalinteger | octinteger | hexinteger | bininteger\n decimalinteger ::= nonzerodigit digit* | "0"\n octinteger ::= "0" ("o" | "O") octdigit+ | "0" octdigit+\n hexinteger ::= "0" ("x" | "X") hexdigit+\n bininteger ::= "0" ("b" | "B") bindigit+\n nonzerodigit ::= "1"..."9"\n octdigit ::= "0"..."7"\n bindigit ::= "0" | "1"\n hexdigit ::= digit | "a"..."f" | "A"..."F"\n\nAlthough both lower case ``\'l\'`` and upper case ``\'L\'`` are allowed as\nsuffix for long integers, it is strongly recommended to always use\n``\'L\'``, since the letter ``\'l\'`` looks too much like the digit\n``\'1\'``.\n\nPlain integer literals that are above the largest representable plain\ninteger (e.g., 2147483647 when using 32-bit arithmetic) are accepted\nas if they were long integers instead. 
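A short sketch of the membership hooks described above; the ``Evens`` class is invented:

    class Evens(object):
        def __contains__(self, n):
            return n % 2 == 0

    print(4 in Evens())          # True, via __contains__()
    print(5 not in Evens())      # True
    print('ab' in 'abc')         # True, substring test for strings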
[1] There is no limit for long\ninteger literals apart from what can be stored in available memory.\n\nSome examples of plain integer literals (first row) and long integer\nliterals (second and third rows):\n\n 7 2147483647 0177\n 3L 79228162514264337593543950336L 0377L 0x100000000L\n 79228162514264337593543950336 0xdeadbeef\n', 'lambda': u'\nLambdas\n*******\n\n lambda_form ::= "lambda" [parameter_list]: expression\n old_lambda_form ::= "lambda" [parameter_list]: old_expression\n\nLambda forms (lambda expressions) have the same syntactic position as\nexpressions. They are a shorthand to create anonymous functions; the\nexpression ``lambda arguments: expression`` yields a function object.\nThe unnamed object behaves like a function object defined with\n\n def name(arguments):\n return expression\n\nSee section *Function definitions* for the syntax of parameter lists.\nNote that functions created with lambda forms cannot contain\nstatements.\n', 'lists': u'\nList displays\n*************\n\nA list display is a possibly empty series of expressions enclosed in\nsquare brackets:\n\n list_display ::= "[" [expression_list | list_comprehension] "]"\n list_comprehension ::= expression list_for\n list_for ::= "for" target_list "in" old_expression_list [list_iter]\n old_expression_list ::= old_expression [("," old_expression)+ [","]]\n old_expression ::= or_test | old_lambda_form\n list_iter ::= list_for | list_if\n list_if ::= "if" old_expression [list_iter]\n\nA list display yields a new list object. Its contents are specified\nby providing either a list of expressions or a list comprehension.\nWhen a comma-separated list of expressions is supplied, its elements\nare evaluated from left to right and placed into the list object in\nthat order. When a list comprehension is supplied, it consists of a\nsingle expression followed by at least one ``for`` clause and zero or\nmore ``for`` or ``if`` clauses. In this case, the elements of the new\nlist are those that would be produced by considering each of the\n``for`` or ``if`` clauses a block, nesting from left to right, and\nevaluating the expression to produce a list element each time the\ninnermost block is reached [1].\n', - 'naming': u"\nNaming and binding\n******************\n\n*Names* refer to objects. Names are introduced by name binding\noperations. Each occurrence of a name in the program text refers to\nthe *binding* of that name established in the innermost function block\ncontaining the use.\n\nA *block* is a piece of Python program text that is executed as a\nunit. The following are blocks: a module, a function body, and a class\ndefinition. Each command typed interactively is a block. A script\nfile (a file given as standard input to the interpreter or specified\non the interpreter command line the first argument) is a code block.\nA script command (a command specified on the interpreter command line\nwith the '**-c**' option) is a code block. The file read by the\nbuilt-in function ``execfile()`` is a code block. The string argument\npassed to the built-in function ``eval()`` and to the ``exec``\nstatement is a code block. The expression read and evaluated by the\nbuilt-in function ``input()`` is a code block.\n\nA code block is executed in an *execution frame*. A frame contains\nsome administrative information (used for debugging) and determines\nwhere and how execution continues after the code block's execution has\ncompleted.\n\nA *scope* defines the visibility of a name within a block. 
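Small examples of the lambda form and the list comprehension just described; the names are invented:

    add = lambda a, b: a + b          # behaves like: def add(a, b): return a + b
    print(add(2, 3))                  # 5

    squares = [x * x for x in range(6) if x % 2 == 0]
    print(squares)                    # [0, 4, 16]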
If a local\nvariable is defined in a block, its scope includes that block. If the\ndefinition occurs in a function block, the scope extends to any blocks\ncontained within the defining one, unless a contained block introduces\na different binding for the name. The scope of names defined in a\nclass block is limited to the class block; it does not extend to the\ncode blocks of methods -- this includes generator expressions since\nthey are implemented using a function scope. This means that the\nfollowing will fail:\n\n class A:\n a = 42\n b = list(a + i for i in range(10))\n\nWhen a name is used in a code block, it is resolved using the nearest\nenclosing scope. The set of all such scopes visible to a code block\nis called the block's *environment*.\n\nIf a name is bound in a block, it is a local variable of that block.\nIf a name is bound at the module level, it is a global variable. (The\nvariables of the module code block are local and global.) If a\nvariable is used in a code block but not defined there, it is a *free\nvariable*.\n\nWhen a name is not found at all, a ``NameError`` exception is raised.\nIf the name refers to a local variable that has not been bound, a\n``UnboundLocalError`` exception is raised. ``UnboundLocalError`` is a\nsubclass of ``NameError``.\n\nThe following constructs bind names: formal parameters to functions,\n``import`` statements, class and function definitions (these bind the\nclass or function name in the defining block), and targets that are\nidentifiers if occurring in an assignment, ``for`` loop header, in the\nsecond position of an ``except`` clause header or after ``as`` in a\n``with`` statement. The ``import`` statement of the form ``from ...\nimport *`` binds all names defined in the imported module, except\nthose beginning with an underscore. This form may only be used at the\nmodule level.\n\nA target occurring in a ``del`` statement is also considered bound for\nthis purpose (though the actual semantics are to unbind the name). It\nis illegal to unbind a name that is referenced by an enclosing scope;\nthe compiler will report a ``SyntaxError``.\n\nEach assignment or import statement occurs within a block defined by a\nclass or function definition or at the module level (the top-level\ncode block).\n\nIf a name binding operation occurs anywhere within a code block, all\nuses of the name within the block are treated as references to the\ncurrent block. This can lead to errors when a name is used within a\nblock before it is bound. This rule is subtle. Python lacks\ndeclarations and allows name binding operations to occur anywhere\nwithin a code block. The local variables of a code block can be\ndetermined by scanning the entire text of the block for name binding\noperations.\n\nIf the global statement occurs within a block, all uses of the name\nspecified in the statement refer to the binding of that name in the\ntop-level namespace. Names are resolved in the top-level namespace by\nsearching the global namespace, i.e. the namespace of the module\ncontaining the code block, and the builtins namespace, the namespace\nof the module ``__builtin__``. The global namespace is searched\nfirst. If the name is not found there, the builtins namespace is\nsearched. 
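The rule above (any binding operation in a block makes the name local throughout that block) is the usual source of ``UnboundLocalError``; a minimal sketch:

    x = 'global'

    def shadow():
        try:
            print(x)                   # 'x' is local in this whole block because
        except UnboundLocalError:      # of the assignment below, so this raises
            print('unbound local')
        x = 'local'

    shadow()                           # prints 'unbound local'
    print(x)                           # still 'global'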
The global statement must precede all uses of the name.\n\nThe builtins namespace associated with the execution of a code block\nis actually found by looking up the name ``__builtins__`` in its\nglobal namespace; this should be a dictionary or a module (in the\nlatter case the module's dictionary is used). By default, when in the\n``__main__`` module, ``__builtins__`` is the built-in module\n``__builtin__`` (note: no 's'); when in any other module,\n``__builtins__`` is an alias for the dictionary of the ``__builtin__``\nmodule itself. ``__builtins__`` can be set to a user-created\ndictionary to create a weak form of restricted execution.\n\n**CPython implementation detail:** Users should not touch\n``__builtins__``; it is strictly an implementation detail. Users\nwanting to override values in the builtins namespace should ``import``\nthe ``__builtin__`` (no 's') module and modify its attributes\nappropriately.\n\nThe namespace for a module is automatically created the first time a\nmodule is imported. The main module for a script is always called\n``__main__``.\n\nThe global statement has the same scope as a name binding operation in\nthe same block. If the nearest enclosing scope for a free variable\ncontains a global statement, the free variable is treated as a global.\n\nA class definition is an executable statement that may use and define\nnames. These references follow the normal rules for name resolution.\nThe namespace of the class definition becomes the attribute dictionary\nof the class. Names defined at the class scope are not visible in\nmethods.\n\n\nInteraction with dynamic features\n=================================\n\nThere are several cases where Python statements are illegal when used\nin conjunction with nested scopes that contain free variables.\n\nIf a variable is referenced in an enclosing scope, it is illegal to\ndelete the name. An error will be reported at compile time.\n\nIf the wild card form of import --- ``import *`` --- is used in a\nfunction and the function contains or is a nested block with free\nvariables, the compiler will raise a ``SyntaxError``.\n\nIf ``exec`` is used in a function and the function contains or is a\nnested block with free variables, the compiler will raise a\n``SyntaxError`` unless the exec explicitly specifies the local\nnamespace for the ``exec``. (In other words, ``exec obj`` would be\nillegal, but ``exec obj in ns`` would be legal.)\n\nThe ``eval()``, ``execfile()``, and ``input()`` functions and the\n``exec`` statement do not have access to the full environment for\nresolving names. Names may be resolved in the local and global\nnamespaces of the caller. Free variables are not resolved in the\nnearest enclosing namespace, but in the global namespace. [1] The\n``exec`` statement and the ``eval()`` and ``execfile()`` functions\nhave optional arguments to override the global and local namespace.\nIf only one namespace is specified, it is used for both.\n", + 'naming': u"\nNaming and binding\n******************\n\n*Names* refer to objects. Names are introduced by name binding\noperations. Each occurrence of a name in the program text refers to\nthe *binding* of that name established in the innermost function block\ncontaining the use.\n\nA *block* is a piece of Python program text that is executed as a\nunit. The following are blocks: a module, a function body, and a class\ndefinition. Each command typed interactively is a block. 
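A small sketch of giving ``exec`` an explicit namespace, as described above; the tuple form ``exec(code, ns)`` is equivalent to ``exec code in ns``:

    def run(snippet):
        ns = {}
        exec(snippet, ns)              # same as: exec snippet in ns
        return ns['result']

    print(run('result = 6 * 7'))       # 42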
A script\nfile (a file given as standard input to the interpreter or specified\non the interpreter command line the first argument) is a code block.\nA script command (a command specified on the interpreter command line\nwith the '**-c**' option) is a code block. The file read by the\nbuilt-in function ``execfile()`` is a code block. The string argument\npassed to the built-in function ``eval()`` and to the ``exec``\nstatement is a code block. The expression read and evaluated by the\nbuilt-in function ``input()`` is a code block.\n\nA code block is executed in an *execution frame*. A frame contains\nsome administrative information (used for debugging) and determines\nwhere and how execution continues after the code block's execution has\ncompleted.\n\nA *scope* defines the visibility of a name within a block. If a local\nvariable is defined in a block, its scope includes that block. If the\ndefinition occurs in a function block, the scope extends to any blocks\ncontained within the defining one, unless a contained block introduces\na different binding for the name. The scope of names defined in a\nclass block is limited to the class block; it does not extend to the\ncode blocks of methods -- this includes generator expressions since\nthey are implemented using a function scope. This means that the\nfollowing will fail:\n\n class A:\n a = 42\n b = list(a + i for i in range(10))\n\nWhen a name is used in a code block, it is resolved using the nearest\nenclosing scope. The set of all such scopes visible to a code block\nis called the block's *environment*.\n\nIf a name is bound in a block, it is a local variable of that block.\nIf a name is bound at the module level, it is a global variable. (The\nvariables of the module code block are local and global.) If a\nvariable is used in a code block but not defined there, it is a *free\nvariable*.\n\nWhen a name is not found at all, a ``NameError`` exception is raised.\nIf the name refers to a local variable that has not been bound, a\n``UnboundLocalError`` exception is raised. ``UnboundLocalError`` is a\nsubclass of ``NameError``.\n\nThe following constructs bind names: formal parameters to functions,\n``import`` statements, class and function definitions (these bind the\nclass or function name in the defining block), and targets that are\nidentifiers if occurring in an assignment, ``for`` loop header, in the\nsecond position of an ``except`` clause header or after ``as`` in a\n``with`` statement. The ``import`` statement of the form ``from ...\nimport *`` binds all names defined in the imported module, except\nthose beginning with an underscore. This form may only be used at the\nmodule level.\n\nA target occurring in a ``del`` statement is also considered bound for\nthis purpose (though the actual semantics are to unbind the name). It\nis illegal to unbind a name that is referenced by an enclosing scope;\nthe compiler will report a ``SyntaxError``.\n\nEach assignment or import statement occurs within a block defined by a\nclass or function definition or at the module level (the top-level\ncode block).\n\nIf a name binding operation occurs anywhere within a code block, all\nuses of the name within the block are treated as references to the\ncurrent block. This can lead to errors when a name is used within a\nblock before it is bound. This rule is subtle. Python lacks\ndeclarations and allows name binding operations to occur anywhere\nwithin a code block. 
The local variables of a code block can be\ndetermined by scanning the entire text of the block for name binding\noperations.\n\nIf the global statement occurs within a block, all uses of the name\nspecified in the statement refer to the binding of that name in the\ntop-level namespace. Names are resolved in the top-level namespace by\nsearching the global namespace, i.e. the namespace of the module\ncontaining the code block, and the builtins namespace, the namespace\nof the module ``__builtin__``. The global namespace is searched\nfirst. If the name is not found there, the builtins namespace is\nsearched. The global statement must precede all uses of the name.\n\nThe builtins namespace associated with the execution of a code block\nis actually found by looking up the name ``__builtins__`` in its\nglobal namespace; this should be a dictionary or a module (in the\nlatter case the module's dictionary is used). By default, when in the\n``__main__`` module, ``__builtins__`` is the built-in module\n``__builtin__`` (note: no 's'); when in any other module,\n``__builtins__`` is an alias for the dictionary of the ``__builtin__``\nmodule itself. ``__builtins__`` can be set to a user-created\ndictionary to create a weak form of restricted execution.\n\n**CPython implementation detail:** Users should not touch\n``__builtins__``; it is strictly an implementation detail. Users\nwanting to override values in the builtins namespace should ``import``\nthe ``__builtin__`` (no 's') module and modify its attributes\nappropriately.\n\nThe namespace for a module is automatically created the first time a\nmodule is imported. The main module for a script is always called\n``__main__``.\n\nThe ``global`` statement has the same scope as a name binding\noperation in the same block. If the nearest enclosing scope for a\nfree variable contains a global statement, the free variable is\ntreated as a global.\n\nA class definition is an executable statement that may use and define\nnames. These references follow the normal rules for name resolution.\nThe namespace of the class definition becomes the attribute dictionary\nof the class. Names defined at the class scope are not visible in\nmethods.\n\n\nInteraction with dynamic features\n=================================\n\nThere are several cases where Python statements are illegal when used\nin conjunction with nested scopes that contain free variables.\n\nIf a variable is referenced in an enclosing scope, it is illegal to\ndelete the name. An error will be reported at compile time.\n\nIf the wild card form of import --- ``import *`` --- is used in a\nfunction and the function contains or is a nested block with free\nvariables, the compiler will raise a ``SyntaxError``.\n\nIf ``exec`` is used in a function and the function contains or is a\nnested block with free variables, the compiler will raise a\n``SyntaxError`` unless the exec explicitly specifies the local\nnamespace for the ``exec``. (In other words, ``exec obj`` would be\nillegal, but ``exec obj in ns`` would be legal.)\n\nThe ``eval()``, ``execfile()``, and ``input()`` functions and the\n``exec`` statement do not have access to the full environment for\nresolving names. Names may be resolved in the local and global\nnamespaces of the caller. Free variables are not resolved in the\nnearest enclosing namespace, but in the global namespace. 
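The ``global`` statement behaviour described above, in a minimal sketch:

    counter = 0

    def bump():
        global counter                 # rebind the module-level name
        counter += 1

    bump()
    bump()
    print(counter)                     # 2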
[1] The\n``exec`` statement and the ``eval()`` and ``execfile()`` functions\nhave optional arguments to override the global and local namespace.\nIf only one namespace is specified, it is used for both.\n", 'numbers': u"\nNumeric literals\n****************\n\nThere are four types of numeric literals: plain integers, long\nintegers, floating point numbers, and imaginary numbers. There are no\ncomplex literals (complex numbers can be formed by adding a real\nnumber and an imaginary number).\n\nNote that numeric literals do not include a sign; a phrase like ``-1``\nis actually an expression composed of the unary operator '``-``' and\nthe literal ``1``.\n", 'numeric-types': u'\nEmulating numeric types\n***********************\n\nThe following methods can be defined to emulate numeric objects.\nMethods corresponding to operations that are not supported by the\nparticular kind of number implemented (e.g., bitwise operations for\nnon-integral numbers) should be left undefined.\n\nobject.__add__(self, other)\nobject.__sub__(self, other)\nobject.__mul__(self, other)\nobject.__floordiv__(self, other)\nobject.__mod__(self, other)\nobject.__divmod__(self, other)\nobject.__pow__(self, other[, modulo])\nobject.__lshift__(self, other)\nobject.__rshift__(self, other)\nobject.__and__(self, other)\nobject.__xor__(self, other)\nobject.__or__(self, other)\n\n These methods are called to implement the binary arithmetic\n operations (``+``, ``-``, ``*``, ``//``, ``%``, ``divmod()``,\n ``pow()``, ``**``, ``<<``, ``>>``, ``&``, ``^``, ``|``). For\n instance, to evaluate the expression ``x + y``, where *x* is an\n instance of a class that has an ``__add__()`` method,\n ``x.__add__(y)`` is called. The ``__divmod__()`` method should be\n the equivalent to using ``__floordiv__()`` and ``__mod__()``; it\n should not be related to ``__truediv__()`` (described below). Note\n that ``__pow__()`` should be defined to accept an optional third\n argument if the ternary version of the built-in ``pow()`` function\n is to be supported.\n\n If one of those methods does not support the operation with the\n supplied arguments, it should return ``NotImplemented``.\n\nobject.__div__(self, other)\nobject.__truediv__(self, other)\n\n The division operator (``/``) is implemented by these methods. The\n ``__truediv__()`` method is used when ``__future__.division`` is in\n effect, otherwise ``__div__()`` is used. If only one of these two\n methods is defined, the object will not support division in the\n alternate context; ``TypeError`` will be raised instead.\n\nobject.__radd__(self, other)\nobject.__rsub__(self, other)\nobject.__rmul__(self, other)\nobject.__rdiv__(self, other)\nobject.__rtruediv__(self, other)\nobject.__rfloordiv__(self, other)\nobject.__rmod__(self, other)\nobject.__rdivmod__(self, other)\nobject.__rpow__(self, other)\nobject.__rlshift__(self, other)\nobject.__rrshift__(self, other)\nobject.__rand__(self, other)\nobject.__rxor__(self, other)\nobject.__ror__(self, other)\n\n These methods are called to implement the binary arithmetic\n operations (``+``, ``-``, ``*``, ``/``, ``%``, ``divmod()``,\n ``pow()``, ``**``, ``<<``, ``>>``, ``&``, ``^``, ``|``) with\n reflected (swapped) operands. These functions are only called if\n the left operand does not support the corresponding operation and\n the operands are of different types. 
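A sketch of the binary/reflected method pattern described above, with an invented ``Meters`` class:

    class Meters(object):
        def __init__(self, value):
            self.value = value

        def __add__(self, other):
            if isinstance(other, Meters):
                return Meters(self.value + other.value)
            if isinstance(other, (int, float)):
                return Meters(self.value + other)
            return NotImplemented      # let the other operand have a try

        __radd__ = __add__             # reflected form, used for 1.5 + Meters(2)

        def __repr__(self):
            return 'Meters(%r)' % self.value

    print(Meters(2) + Meters(3))       # Meters(5)
    print(1.5 + Meters(2))             # Meters(3.5): float.__add__ returned
                                       # NotImplemented, so __radd__ was used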
[2] For instance, to evaluate\n the expression ``x - y``, where *y* is an instance of a class that\n has an ``__rsub__()`` method, ``y.__rsub__(x)`` is called if\n ``x.__sub__(y)`` returns *NotImplemented*.\n\n Note that ternary ``pow()`` will not try calling ``__rpow__()``\n (the coercion rules would become too complicated).\n\n Note: If the right operand\'s type is a subclass of the left operand\'s\n type and that subclass provides the reflected method for the\n operation, this method will be called before the left operand\'s\n non-reflected method. This behavior allows subclasses to\n override their ancestors\' operations.\n\nobject.__iadd__(self, other)\nobject.__isub__(self, other)\nobject.__imul__(self, other)\nobject.__idiv__(self, other)\nobject.__itruediv__(self, other)\nobject.__ifloordiv__(self, other)\nobject.__imod__(self, other)\nobject.__ipow__(self, other[, modulo])\nobject.__ilshift__(self, other)\nobject.__irshift__(self, other)\nobject.__iand__(self, other)\nobject.__ixor__(self, other)\nobject.__ior__(self, other)\n\n These methods are called to implement the augmented arithmetic\n assignments (``+=``, ``-=``, ``*=``, ``/=``, ``//=``, ``%=``,\n ``**=``, ``<<=``, ``>>=``, ``&=``, ``^=``, ``|=``). These methods\n should attempt to do the operation in-place (modifying *self*) and\n return the result (which could be, but does not have to be,\n *self*). If a specific method is not defined, the augmented\n assignment falls back to the normal methods. For instance, to\n execute the statement ``x += y``, where *x* is an instance of a\n class that has an ``__iadd__()`` method, ``x.__iadd__(y)`` is\n called. If *x* is an instance of a class that does not define a\n ``__iadd__()`` method, ``x.__add__(y)`` and ``y.__radd__(x)`` are\n considered, as with the evaluation of ``x + y``.\n\nobject.__neg__(self)\nobject.__pos__(self)\nobject.__abs__(self)\nobject.__invert__(self)\n\n Called to implement the unary arithmetic operations (``-``, ``+``,\n ``abs()`` and ``~``).\n\nobject.__complex__(self)\nobject.__int__(self)\nobject.__long__(self)\nobject.__float__(self)\n\n Called to implement the built-in functions ``complex()``,\n ``int()``, ``long()``, and ``float()``. Should return a value of\n the appropriate type.\n\nobject.__oct__(self)\nobject.__hex__(self)\n\n Called to implement the built-in functions ``oct()`` and ``hex()``.\n Should return a string value.\n\nobject.__index__(self)\n\n Called to implement ``operator.index()``. Also called whenever\n Python needs an integer object (such as in slicing). Must return\n an integer (int or long).\n\n New in version 2.5.\n\nobject.__coerce__(self, other)\n\n Called to implement "mixed-mode" numeric arithmetic. Should either\n return a 2-tuple containing *self* and *other* converted to a\n common numeric type, or ``None`` if conversion is impossible. When\n the common type would be the type of ``other``, it is sufficient to\n return ``None``, since the interpreter will also ask the other\n object to attempt a coercion (but sometimes, if the implementation\n of the other type cannot be changed, it is useful to do the\n conversion to the other type here). A return value of\n ``NotImplemented`` is equivalent to returning ``None``.\n', - 'objects': u'\nObjects, values and types\n*************************\n\n*Objects* are Python\'s abstraction for data. All data in a Python\nprogram is represented by objects or by relations between objects. 
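The augmented-assignment fallback described above, shown with and without ``__iadd__()``; the ``Tally`` class is invented:

    class Tally(object):
        def __init__(self):
            self.total = 0
        def __iadd__(self, other):
            self.total += other        # mutate in place and return self
            return self

    t = Tally()
    t += 5                             # calls t.__iadd__(5)
    print(t.total)                     # 5

    nums = (1, 2)
    nums += (3,)                       # tuples define no __iadd__, so this falls
    print(nums)                        # back to __add__ and rebinds: (1, 2, 3)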
(In\na sense, and in conformance to Von Neumann\'s model of a "stored\nprogram computer," code is also represented by objects.)\n\nEvery object has an identity, a type and a value. An object\'s\n*identity* never changes once it has been created; you may think of it\nas the object\'s address in memory. The \'``is``\' operator compares the\nidentity of two objects; the ``id()`` function returns an integer\nrepresenting its identity (currently implemented as its address). An\nobject\'s *type* is also unchangeable. [1] An object\'s type determines\nthe operations that the object supports (e.g., "does it have a\nlength?") and also defines the possible values for objects of that\ntype. The ``type()`` function returns an object\'s type (which is an\nobject itself). The *value* of some objects can change. Objects\nwhose value can change are said to be *mutable*; objects whose value\nis unchangeable once they are created are called *immutable*. (The\nvalue of an immutable container object that contains a reference to a\nmutable object can change when the latter\'s value is changed; however\nthe container is still considered immutable, because the collection of\nobjects it contains cannot be changed. So, immutability is not\nstrictly the same as having an unchangeable value, it is more subtle.)\nAn object\'s mutability is determined by its type; for instance,\nnumbers, strings and tuples are immutable, while dictionaries and\nlists are mutable.\n\nObjects are never explicitly destroyed; however, when they become\nunreachable they may be garbage-collected. An implementation is\nallowed to postpone garbage collection or omit it altogether --- it is\na matter of implementation quality how garbage collection is\nimplemented, as long as no objects are collected that are still\nreachable.\n\n**CPython implementation detail:** CPython currently uses a reference-\ncounting scheme with (optional) delayed detection of cyclically linked\ngarbage, which collects most objects as soon as they become\nunreachable, but is not guaranteed to collect garbage containing\ncircular references. See the documentation of the ``gc`` module for\ninformation on controlling the collection of cyclic garbage. Other\nimplementations act differently and CPython may change.\n\nNote that the use of the implementation\'s tracing or debugging\nfacilities may keep objects alive that would normally be collectable.\nAlso note that catching an exception with a \'``try``...``except``\'\nstatement may keep objects alive.\n\nSome objects contain references to "external" resources such as open\nfiles or windows. It is understood that these resources are freed\nwhen the object is garbage-collected, but since garbage collection is\nnot guaranteed to happen, such objects also provide an explicit way to\nrelease the external resource, usually a ``close()`` method. Programs\nare strongly recommended to explicitly close such objects. The\n\'``try``...``finally``\' statement provides a convenient way to do\nthis.\n\nSome objects contain references to other objects; these are called\n*containers*. Examples of containers are tuples, lists and\ndictionaries. The references are part of a container\'s value. In\nmost cases, when we talk about the value of a container, we imply the\nvalues, not the identities of the contained objects; however, when we\ntalk about the mutability of a container, only the identities of the\nimmediately contained objects are implied. 
So, if an immutable\ncontainer (like a tuple) contains a reference to a mutable object, its\nvalue changes if that mutable object is changed.\n\nTypes affect almost all aspects of object behavior. Even the\nimportance of object identity is affected in some sense: for immutable\ntypes, operations that compute new values may actually return a\nreference to any existing object with the same type and value, while\nfor mutable objects this is not allowed. E.g., after ``a = 1; b =\n1``, ``a`` and ``b`` may or may not refer to the same object with the\nvalue one, depending on the implementation, but after ``c = []; d =\n[]``, ``c`` and ``d`` are guaranteed to refer to two different,\nunique, newly created empty lists. (Note that ``c = d = []`` assigns\nthe same object to both ``c`` and ``d``.)\n', - 'operator-summary': u'\nSummary\n*******\n\nThe following table summarizes the operator precedences in Python,\nfrom lowest precedence (least binding) to highest precedence (most\nbinding). Operators in the same box have the same precedence. Unless\nthe syntax is explicitly given, operators are binary. Operators in\nthe same box group left to right (except for comparisons, including\ntests, which all have the same precedence and chain from left to right\n--- see section *Comparisons* --- and exponentiation, which groups\nfrom right to left).\n\n+-------------------------------------------------+---------------------------------------+\n| Operator | Description |\n+=================================================+=======================================+\n| ``lambda`` | Lambda expression |\n+-------------------------------------------------+---------------------------------------+\n| ``if`` -- ``else`` | Conditional expression |\n+-------------------------------------------------+---------------------------------------+\n| ``or`` | Boolean OR |\n+-------------------------------------------------+---------------------------------------+\n| ``and`` | Boolean AND |\n+-------------------------------------------------+---------------------------------------+\n| ``not`` *x* | Boolean NOT |\n+-------------------------------------------------+---------------------------------------+\n| ``in``, ``not`` ``in``, ``is``, ``is not``, | Comparisons, including membership |\n| ``<``, ``<=``, ``>``, ``>=``, ``<>``, ``!=``, | tests and identity tests, |\n| ``==`` | |\n+-------------------------------------------------+---------------------------------------+\n| ``|`` | Bitwise OR |\n+-------------------------------------------------+---------------------------------------+\n| ``^`` | Bitwise XOR |\n+-------------------------------------------------+---------------------------------------+\n| ``&`` | Bitwise AND |\n+-------------------------------------------------+---------------------------------------+\n| ``<<``, ``>>`` | Shifts |\n+-------------------------------------------------+---------------------------------------+\n| ``+``, ``-`` | Addition and subtraction |\n+-------------------------------------------------+---------------------------------------+\n| ``*``, ``/``, ``//``, ``%`` | Multiplication, division, remainder |\n+-------------------------------------------------+---------------------------------------+\n| ``+x``, ``-x``, ``~x`` | Positive, negative, bitwise NOT |\n+-------------------------------------------------+---------------------------------------+\n| ``**`` | Exponentiation [8] |\n+-------------------------------------------------+---------------------------------------+\n| ``x[index]``, 
``x[index:index]``, | Subscription, slicing, call, |\n| ``x(arguments...)``, ``x.attribute`` | attribute reference |\n+-------------------------------------------------+---------------------------------------+\n| ``(expressions...)``, ``[expressions...]``, | Binding or tuple display, list |\n| ``{key:datum...}``, ```expressions...``` | display, dictionary display, string |\n| | conversion |\n+-------------------------------------------------+---------------------------------------+\n\n-[ Footnotes ]-\n\n[1] In Python 2.3 and later releases, a list comprehension "leaks" the\n control variables of each ``for`` it contains into the containing\n scope. However, this behavior is deprecated, and relying on it\n will not work in Python 3.0\n\n[2] While ``abs(x%y) < abs(y)`` is true mathematically, for floats it\n may not be true numerically due to roundoff. For example, and\n assuming a platform on which a Python float is an IEEE 754 double-\n precision number, in order that ``-1e-100 % 1e100`` have the same\n sign as ``1e100``, the computed result is ``-1e-100 + 1e100``,\n which is numerically exactly equal to ``1e100``. Function\n ``fmod()`` in the ``math`` module returns a result whose sign\n matches the sign of the first argument instead, and so returns\n ``-1e-100`` in this case. Which approach is more appropriate\n depends on the application.\n\n[3] If x is very close to an exact integer multiple of y, it\'s\n possible for ``floor(x/y)`` to be one larger than ``(x-x%y)/y``\n due to rounding. In such cases, Python returns the latter result,\n in order to preserve that ``divmod(x,y)[0] * y + x % y`` be very\n close to ``x``.\n\n[4] While comparisons between unicode strings make sense at the byte\n level, they may be counter-intuitive to users. For example, the\n strings ``u"\\u00C7"`` and ``u"\\u0043\\u0327"`` compare differently,\n even though they both represent the same unicode character (LATIN\n CAPITAL LETTER C WITH CEDILLA). To compare strings in a human\n recognizable way, compare using ``unicodedata.normalize()``.\n\n[5] The implementation computes this efficiently, without constructing\n lists or sorting.\n\n[6] Earlier versions of Python used lexicographic comparison of the\n sorted (key, value) lists, but this was very expensive for the\n common case of comparing for equality. An even earlier version of\n Python compared dictionaries by identity only, but this caused\n surprises because people expected to be able to test a dictionary\n for emptiness by comparing it to ``{}``.\n\n[7] Due to automatic garbage-collection, free lists, and the dynamic\n nature of descriptors, you may notice seemingly unusual behaviour\n in certain uses of the ``is`` operator, like those involving\n comparisons between instance methods, or constants. Check their\n documentation for more info.\n\n[8] The power operator ``**`` binds less tightly than an arithmetic or\n bitwise unary operator on its right, that is, ``2**-1`` is\n ``0.5``.\n', + 'objects': u'\nObjects, values and types\n*************************\n\n*Objects* are Python\'s abstraction for data. All data in a Python\nprogram is represented by objects or by relations between objects. (In\na sense, and in conformance to Von Neumann\'s model of a "stored\nprogram computer," code is also represented by objects.)\n\nEvery object has an identity, a type and a value. An object\'s\n*identity* never changes once it has been created; you may think of it\nas the object\'s address in memory. 
The \'``is``\' operator compares the\nidentity of two objects; the ``id()`` function returns an integer\nrepresenting its identity (currently implemented as its address). An\nobject\'s *type* is also unchangeable. [1] An object\'s type determines\nthe operations that the object supports (e.g., "does it have a\nlength?") and also defines the possible values for objects of that\ntype. The ``type()`` function returns an object\'s type (which is an\nobject itself). The *value* of some objects can change. Objects\nwhose value can change are said to be *mutable*; objects whose value\nis unchangeable once they are created are called *immutable*. (The\nvalue of an immutable container object that contains a reference to a\nmutable object can change when the latter\'s value is changed; however\nthe container is still considered immutable, because the collection of\nobjects it contains cannot be changed. So, immutability is not\nstrictly the same as having an unchangeable value, it is more subtle.)\nAn object\'s mutability is determined by its type; for instance,\nnumbers, strings and tuples are immutable, while dictionaries and\nlists are mutable.\n\nObjects are never explicitly destroyed; however, when they become\nunreachable they may be garbage-collected. An implementation is\nallowed to postpone garbage collection or omit it altogether --- it is\na matter of implementation quality how garbage collection is\nimplemented, as long as no objects are collected that are still\nreachable.\n\n**CPython implementation detail:** CPython currently uses a reference-\ncounting scheme with (optional) delayed detection of cyclically linked\ngarbage, which collects most objects as soon as they become\nunreachable, but is not guaranteed to collect garbage containing\ncircular references. See the documentation of the ``gc`` module for\ninformation on controlling the collection of cyclic garbage. Other\nimplementations act differently and CPython may change. Do not depend\non immediate finalization of objects when they become unreachable (ex:\nalways close files).\n\nNote that the use of the implementation\'s tracing or debugging\nfacilities may keep objects alive that would normally be collectable.\nAlso note that catching an exception with a \'``try``...``except``\'\nstatement may keep objects alive.\n\nSome objects contain references to "external" resources such as open\nfiles or windows. It is understood that these resources are freed\nwhen the object is garbage-collected, but since garbage collection is\nnot guaranteed to happen, such objects also provide an explicit way to\nrelease the external resource, usually a ``close()`` method. Programs\nare strongly recommended to explicitly close such objects. The\n\'``try``...``finally``\' statement provides a convenient way to do\nthis.\n\nSome objects contain references to other objects; these are called\n*containers*. Examples of containers are tuples, lists and\ndictionaries. The references are part of a container\'s value. In\nmost cases, when we talk about the value of a container, we imply the\nvalues, not the identities of the contained objects; however, when we\ntalk about the mutability of a container, only the identities of the\nimmediately contained objects are implied. So, if an immutable\ncontainer (like a tuple) contains a reference to a mutable object, its\nvalue changes if that mutable object is changed.\n\nTypes affect almost all aspects of object behavior. 
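
The identity and mutability rules quoted above can be seen interactively; whether small integers are shared is implementation-dependent, so the first result is not guaranteed:

    a = 1; b = 1
    print a is b            # may be True: immutable values can be shared
    c = []; d = []
    print c is d            # False: two distinct, newly created empty lists
    t = ([],)               # an immutable tuple holding a mutable list
    t[0].append(42)
    print t                 # ([42],) -- the tuple's value changed, its identity did not
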
Even the\nimportance of object identity is affected in some sense: for immutable\ntypes, operations that compute new values may actually return a\nreference to any existing object with the same type and value, while\nfor mutable objects this is not allowed. E.g., after ``a = 1; b =\n1``, ``a`` and ``b`` may or may not refer to the same object with the\nvalue one, depending on the implementation, but after ``c = []; d =\n[]``, ``c`` and ``d`` are guaranteed to refer to two different,\nunique, newly created empty lists. (Note that ``c = d = []`` assigns\nthe same object to both ``c`` and ``d``.)\n', + 'operator-summary': u'\nSummary\n*******\n\nThe following table summarizes the operator precedences in Python,\nfrom lowest precedence (least binding) to highest precedence (most\nbinding). Operators in the same box have the same precedence. Unless\nthe syntax is explicitly given, operators are binary. Operators in\nthe same box group left to right (except for comparisons, including\ntests, which all have the same precedence and chain from left to right\n--- see section *Comparisons* --- and exponentiation, which groups\nfrom right to left).\n\n+-------------------------------------------------+---------------------------------------+\n| Operator | Description |\n+=================================================+=======================================+\n| ``lambda`` | Lambda expression |\n+-------------------------------------------------+---------------------------------------+\n| ``if`` -- ``else`` | Conditional expression |\n+-------------------------------------------------+---------------------------------------+\n| ``or`` | Boolean OR |\n+-------------------------------------------------+---------------------------------------+\n| ``and`` | Boolean AND |\n+-------------------------------------------------+---------------------------------------+\n| ``not`` *x* | Boolean NOT |\n+-------------------------------------------------+---------------------------------------+\n| ``in``, ``not`` ``in``, ``is``, ``is not``, | Comparisons, including membership |\n| ``<``, ``<=``, ``>``, ``>=``, ``<>``, ``!=``, | tests and identity tests, |\n| ``==`` | |\n+-------------------------------------------------+---------------------------------------+\n| ``|`` | Bitwise OR |\n+-------------------------------------------------+---------------------------------------+\n| ``^`` | Bitwise XOR |\n+-------------------------------------------------+---------------------------------------+\n| ``&`` | Bitwise AND |\n+-------------------------------------------------+---------------------------------------+\n| ``<<``, ``>>`` | Shifts |\n+-------------------------------------------------+---------------------------------------+\n| ``+``, ``-`` | Addition and subtraction |\n+-------------------------------------------------+---------------------------------------+\n| ``*``, ``/``, ``//``, ``%`` | Multiplication, division, remainder |\n| | [8] |\n+-------------------------------------------------+---------------------------------------+\n| ``+x``, ``-x``, ``~x`` | Positive, negative, bitwise NOT |\n+-------------------------------------------------+---------------------------------------+\n| ``**`` | Exponentiation [9] |\n+-------------------------------------------------+---------------------------------------+\n| ``x[index]``, ``x[index:index]``, | Subscription, slicing, call, |\n| ``x(arguments...)``, ``x.attribute`` | attribute reference 
|\n+-------------------------------------------------+---------------------------------------+\n| ``(expressions...)``, ``[expressions...]``, | Binding or tuple display, list |\n| ``{key:datum...}``, ```expressions...``` | display, dictionary display, string |\n| | conversion |\n+-------------------------------------------------+---------------------------------------+\n\n-[ Footnotes ]-\n\n[1] In Python 2.3 and later releases, a list comprehension "leaks" the\n control variables of each ``for`` it contains into the containing\n scope. However, this behavior is deprecated, and relying on it\n will not work in Python 3.0\n\n[2] While ``abs(x%y) < abs(y)`` is true mathematically, for floats it\n may not be true numerically due to roundoff. For example, and\n assuming a platform on which a Python float is an IEEE 754 double-\n precision number, in order that ``-1e-100 % 1e100`` have the same\n sign as ``1e100``, the computed result is ``-1e-100 + 1e100``,\n which is numerically exactly equal to ``1e100``. The function\n ``math.fmod()`` returns a result whose sign matches the sign of\n the first argument instead, and so returns ``-1e-100`` in this\n case. Which approach is more appropriate depends on the\n application.\n\n[3] If x is very close to an exact integer multiple of y, it\'s\n possible for ``floor(x/y)`` to be one larger than ``(x-x%y)/y``\n due to rounding. In such cases, Python returns the latter result,\n in order to preserve that ``divmod(x,y)[0] * y + x % y`` be very\n close to ``x``.\n\n[4] While comparisons between unicode strings make sense at the byte\n level, they may be counter-intuitive to users. For example, the\n strings ``u"\\u00C7"`` and ``u"\\u0043\\u0327"`` compare differently,\n even though they both represent the same unicode character (LATIN\n CAPITAL LETTER C WITH CEDILLA). To compare strings in a human\n recognizable way, compare using ``unicodedata.normalize()``.\n\n[5] The implementation computes this efficiently, without constructing\n lists or sorting.\n\n[6] Earlier versions of Python used lexicographic comparison of the\n sorted (key, value) lists, but this was very expensive for the\n common case of comparing for equality. An even earlier version of\n Python compared dictionaries by identity only, but this caused\n surprises because people expected to be able to test a dictionary\n for emptiness by comparing it to ``{}``.\n\n[7] Due to automatic garbage-collection, free lists, and the dynamic\n nature of descriptors, you may notice seemingly unusual behaviour\n in certain uses of the ``is`` operator, like those involving\n comparisons between instance methods, or constants. Check their\n documentation for more info.\n\n[8] The ``%`` operator is also used for string formatting; the same\n precedence applies.\n\n[9] The power operator ``**`` binds less tightly than an arithmetic or\n bitwise unary operator on its right, that is, ``2**-1`` is\n ``0.5``.\n', 'pass': u'\nThe ``pass`` statement\n**********************\n\n pass_stmt ::= "pass"\n\n``pass`` is a null operation --- when it is executed, nothing happens.\nIt is useful as a placeholder when a statement is required\nsyntactically, but no code needs to be executed, for example:\n\n def f(arg): pass # a function that does nothing (yet)\n\n class C: pass # a class with no methods (yet)\n', 'power': u'\nThe power operator\n******************\n\nThe power operator binds more tightly than unary operators on its\nleft; it binds less tightly than unary operators on its right. 
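
The precedence of ``**`` relative to the unary operators can be checked directly:

    print 2 ** -1           # 0.5: the unary minus on the right binds tighter than **
    print -1 ** 2           # -1: parsed as -(1 ** 2); ** binds tighter than the minus on its left
    print 10 ** -2          # 0.01: a negative integer exponent gives a float result
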
The\nsyntax is:\n\n power ::= primary ["**" u_expr]\n\nThus, in an unparenthesized sequence of power and unary operators, the\noperators are evaluated from right to left (this does not constrain\nthe evaluation order for the operands): ``-1**2`` results in ``-1``.\n\nThe power operator has the same semantics as the built-in ``pow()``\nfunction, when called with two arguments: it yields its left argument\nraised to the power of its right argument. The numeric arguments are\nfirst converted to a common type. The result type is that of the\narguments after coercion.\n\nWith mixed operand types, the coercion rules for binary arithmetic\noperators apply. For int and long int operands, the result has the\nsame type as the operands (after coercion) unless the second argument\nis negative; in that case, all arguments are converted to float and a\nfloat result is delivered. For example, ``10**2`` returns ``100``, but\n``10**-2`` returns ``0.01``. (This last feature was added in Python\n2.2. In Python 2.1 and before, if both arguments were of integer types\nand the second argument was negative, an exception was raised).\n\nRaising ``0.0`` to a negative power results in a\n``ZeroDivisionError``. Raising a negative number to a fractional power\nresults in a ``ValueError``.\n', 'print': u'\nThe ``print`` statement\n***********************\n\n print_stmt ::= "print" ([expression ("," expression)* [","]]\n | ">>" expression [("," expression)+ [","]])\n\n``print`` evaluates each expression in turn and writes the resulting\nobject to standard output (see below). If an object is not a string,\nit is first converted to a string using the rules for string\nconversions. The (resulting or original) string is then written. A\nspace is written before each object is (converted and) written, unless\nthe output system believes it is positioned at the beginning of a\nline. This is the case (1) when no characters have yet been written\nto standard output, (2) when the last character written to standard\noutput is a whitespace character except ``\' \'``, or (3) when the last\nwrite operation on standard output was not a ``print`` statement. (In\nsome cases it may be functional to write an empty string to standard\noutput for this reason.)\n\nNote: Objects which act like file objects but which are not the built-in\n file objects often do not properly emulate this aspect of the file\n object\'s behavior, so it is best not to rely on this.\n\nA ``\'\\n\'`` character is written at the end, unless the ``print``\nstatement ends with a comma. This is the only action if the statement\ncontains just the keyword ``print``.\n\nStandard output is defined as the file object named ``stdout`` in the\nbuilt-in module ``sys``. If no such object exists, or if it does not\nhave a ``write()`` method, a ``RuntimeError`` exception is raised.\n\n``print`` also has an extended form, defined by the second portion of\nthe syntax described above. This form is sometimes referred to as\n"``print`` chevron." In this form, the first expression after the\n``>>`` must evaluate to a "file-like" object, specifically an object\nthat has a ``write()`` method as described above. With this extended\nform, the subsequent expressions are printed to this file object. 
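
A small sketch of the extended ("chevron") form of ``print``; ``buf`` here is just any object with a ``write()`` method:

    import sys
    from StringIO import StringIO

    buf = StringIO()
    print >> buf, 'hello'              # written to buf, not to sys.stdout
    print >> None, 'world'             # None falls back to sys.stdout
    print >> sys.stderr, 'warning'     # e.g. diagnostics to standard error
    print buf.getvalue(),              # 'hello\n'
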
If\nthe first expression evaluates to ``None``, then ``sys.stdout`` is\nused as the file for output.\n', @@ -63,21 +63,21 @@ 'shifting': u'\nShifting operations\n*******************\n\nThe shifting operations have lower priority than the arithmetic\noperations:\n\n shift_expr ::= a_expr | shift_expr ( "<<" | ">>" ) a_expr\n\nThese operators accept plain or long integers as arguments. The\narguments are converted to a common type. They shift the first\nargument to the left or right by the number of bits given by the\nsecond argument.\n\nA right shift by *n* bits is defined as division by ``pow(2, n)``. A\nleft shift by *n* bits is defined as multiplication with ``pow(2,\nn)``. Negative shift counts raise a ``ValueError`` exception.\n\nNote: In the current implementation, the right-hand operand is required to\n be at most ``sys.maxsize``. If the right-hand operand is larger\n than ``sys.maxsize`` an ``OverflowError`` exception is raised.\n', 'slicings': u'\nSlicings\n********\n\nA slicing selects a range of items in a sequence object (e.g., a\nstring, tuple or list). Slicings may be used as expressions or as\ntargets in assignment or ``del`` statements. The syntax for a\nslicing:\n\n slicing ::= simple_slicing | extended_slicing\n simple_slicing ::= primary "[" short_slice "]"\n extended_slicing ::= primary "[" slice_list "]"\n slice_list ::= slice_item ("," slice_item)* [","]\n slice_item ::= expression | proper_slice | ellipsis\n proper_slice ::= short_slice | long_slice\n short_slice ::= [lower_bound] ":" [upper_bound]\n long_slice ::= short_slice ":" [stride]\n lower_bound ::= expression\n upper_bound ::= expression\n stride ::= expression\n ellipsis ::= "..."\n\nThere is ambiguity in the formal syntax here: anything that looks like\nan expression list also looks like a slice list, so any subscription\ncan be interpreted as a slicing. Rather than further complicating the\nsyntax, this is disambiguated by defining that in this case the\ninterpretation as a subscription takes priority over the\ninterpretation as a slicing (this is the case if the slice list\ncontains no proper slice nor ellipses). Similarly, when the slice\nlist has exactly one short slice and no trailing comma, the\ninterpretation as a simple slicing takes priority over that as an\nextended slicing.\n\nThe semantics for a simple slicing are as follows. The primary must\nevaluate to a sequence object. The lower and upper bound expressions,\nif present, must evaluate to plain integers; defaults are zero and the\n``sys.maxint``, respectively. If either bound is negative, the\nsequence\'s length is added to it. The slicing now selects all items\nwith index *k* such that ``i <= k < j`` where *i* and *j* are the\nspecified lower and upper bounds. This may be an empty sequence. It\nis not an error if *i* or *j* lie outside the range of valid indexes\n(such items don\'t exist so they aren\'t selected).\n\nThe semantics for an extended slicing are as follows. The primary\nmust evaluate to a mapping object, and it is indexed with a key that\nis constructed from the slice list, as follows. If the slice list\ncontains at least one comma, the key is a tuple containing the\nconversion of the slice items; otherwise, the conversion of the lone\nslice item is the key. The conversion of a slice item that is an\nexpression is that expression. The conversion of an ellipsis slice\nitem is the built-in ``Ellipsis`` object. 
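
The shifting and slicing rules above can be observed with a tiny class whose ``__getitem__()`` just returns the key it receives; ``Grid`` is an illustrative name:

    print 3 << 2                       # 12: a left shift by n multiplies by 2**n

    class Grid(object):
        def __getitem__(self, key):
            return key                 # show the key that the slicing produced

    g = Grid()
    print g[1:5]                       # slice(1, 5, None)
    print g[1:5, ..., ::2]             # (slice(1, 5, None), Ellipsis, slice(None, None, 2))
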
The conversion of a proper\nslice is a slice object (see section *The standard type hierarchy*)\nwhose ``start``, ``stop`` and ``step`` attributes are the values of\nthe expressions given as lower bound, upper bound and stride,\nrespectively, substituting ``None`` for missing expressions.\n', 'specialattrs': u"\nSpecial Attributes\n******************\n\nThe implementation adds a few special read-only attributes to several\nobject types, where they are relevant. Some of these are not reported\nby the ``dir()`` built-in function.\n\nobject.__dict__\n\n A dictionary or other mapping object used to store an object's\n (writable) attributes.\n\nobject.__methods__\n\n Deprecated since version 2.2: Use the built-in function ``dir()``\n to get a list of an object's attributes. This attribute is no\n longer available.\n\nobject.__members__\n\n Deprecated since version 2.2: Use the built-in function ``dir()``\n to get a list of an object's attributes. This attribute is no\n longer available.\n\ninstance.__class__\n\n The class to which a class instance belongs.\n\nclass.__bases__\n\n The tuple of base classes of a class object.\n\nclass.__name__\n\n The name of the class or type.\n\nThe following attributes are only supported by *new-style class*es.\n\nclass.__mro__\n\n This attribute is a tuple of classes that are considered when\n looking for base classes during method resolution.\n\nclass.mro()\n\n This method can be overridden by a metaclass to customize the\n method resolution order for its instances. It is called at class\n instantiation, and its result is stored in ``__mro__``.\n\nclass.__subclasses__()\n\n Each new-style class keeps a list of weak references to its\n immediate subclasses. This method returns a list of all those\n references still alive. Example:\n\n >>> int.__subclasses__()\n []\n\n-[ Footnotes ]-\n\n[1] Additional information on these special methods may be found in\n the Python Reference Manual (*Basic customization*).\n\n[2] As a consequence, the list ``[1, 2]`` is considered equal to\n ``[1.0, 2.0]``, and similarly for tuples.\n\n[3] They must have since the parser can't tell the type of the\n operands.\n\n[4] To format only a tuple you should therefore provide a singleton\n tuple whose only element is the tuple to be formatted.\n\n[5] The advantage of leaving the newline on is that returning an empty\n string is then an unambiguous EOF indication. It is also possible\n (in cases where it might matter, for example, if you want to make\n an exact copy of a file while scanning its lines) to tell whether\n the last line of a file ended in a newline or not (yes this\n happens!).\n", - 'specialnames': u'\nSpecial method names\n********************\n\nA class can implement certain operations that are invoked by special\nsyntax (such as arithmetic operations or subscripting and slicing) by\ndefining methods with special names. This is Python\'s approach to\n*operator overloading*, allowing classes to define their own behavior\nwith respect to language operators. For instance, if a class defines\na method named ``__getitem__()``, and ``x`` is an instance of this\nclass, then ``x[i]`` is roughly equivalent to ``x.__getitem__(i)`` for\nold-style classes and ``type(x).__getitem__(x, i)`` for new-style\nclasses. 
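
The special attributes listed above can be inspected on a throwaway class hierarchy (names are illustrative):

    class Base(object):
        pass

    class Derived(Base):
        pass

    d = Derived()
    d.x = 1
    print d.__class__                  # <class 'Derived'> (module-qualified)
    print d.__dict__                   # {'x': 1}
    print Derived.__bases__            # (<class 'Base'>,)
    print Derived.__mro__              # (Derived, Base, object)
    print Base.__subclasses__()        # [<class 'Derived'>]
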
Except where mentioned, attempts to execute an operation\nraise an exception when no appropriate method is defined (typically\n``AttributeError`` or ``TypeError``).\n\nWhen implementing a class that emulates any built-in type, it is\nimportant that the emulation only be implemented to the degree that it\nmakes sense for the object being modelled. For example, some\nsequences may work well with retrieval of individual elements, but\nextracting a slice may not make sense. (One example of this is the\n``NodeList`` interface in the W3C\'s Document Object Model.)\n\n\nBasic customization\n===================\n\nobject.__new__(cls[, ...])\n\n Called to create a new instance of class *cls*. ``__new__()`` is a\n static method (special-cased so you need not declare it as such)\n that takes the class of which an instance was requested as its\n first argument. The remaining arguments are those passed to the\n object constructor expression (the call to the class). The return\n value of ``__new__()`` should be the new object instance (usually\n an instance of *cls*).\n\n Typical implementations create a new instance of the class by\n invoking the superclass\'s ``__new__()`` method using\n ``super(currentclass, cls).__new__(cls[, ...])`` with appropriate\n arguments and then modifying the newly-created instance as\n necessary before returning it.\n\n If ``__new__()`` returns an instance of *cls*, then the new\n instance\'s ``__init__()`` method will be invoked like\n ``__init__(self[, ...])``, where *self* is the new instance and the\n remaining arguments are the same as were passed to ``__new__()``.\n\n If ``__new__()`` does not return an instance of *cls*, then the new\n instance\'s ``__init__()`` method will not be invoked.\n\n ``__new__()`` is intended mainly to allow subclasses of immutable\n types (like int, str, or tuple) to customize instance creation. It\n is also commonly overridden in custom metaclasses in order to\n customize class creation.\n\nobject.__init__(self[, ...])\n\n Called when the instance is created. The arguments are those\n passed to the class constructor expression. If a base class has an\n ``__init__()`` method, the derived class\'s ``__init__()`` method,\n if any, must explicitly call it to ensure proper initialization of\n the base class part of the instance; for example:\n ``BaseClass.__init__(self, [args...])``. As a special constraint\n on constructors, no value may be returned; doing so will cause a\n ``TypeError`` to be raised at runtime.\n\nobject.__del__(self)\n\n Called when the instance is about to be destroyed. This is also\n called a destructor. If a base class has a ``__del__()`` method,\n the derived class\'s ``__del__()`` method, if any, must explicitly\n call it to ensure proper deletion of the base class part of the\n instance. Note that it is possible (though not recommended!) for\n the ``__del__()`` method to postpone destruction of the instance by\n creating a new reference to it. It may then be called at a later\n time when this new reference is deleted. It is not guaranteed that\n ``__del__()`` methods are called for objects that still exist when\n the interpreter exits.\n\n Note: ``del x`` doesn\'t directly call ``x.__del__()`` --- the former\n decrements the reference count for ``x`` by one, and the latter\n is only called when ``x``\'s reference count reaches zero. 
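
Returning to ``__new__()`` above: a minimal sketch of customizing creation of an immutable subclass (the ``UpperStr`` class is made up):

    class UpperStr(str):
        def __new__(cls, value):
            # the immutable value must be chosen in __new__;
            # __init__ could not change it afterwards
            return super(UpperStr, cls).__new__(cls, value.upper())

    s = UpperStr('hello')
    print s                            # HELLO
    print isinstance(s, str)           # True
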
Some\n common situations that may prevent the reference count of an\n object from going to zero include: circular references between\n objects (e.g., a doubly-linked list or a tree data structure with\n parent and child pointers); a reference to the object on the\n stack frame of a function that caught an exception (the traceback\n stored in ``sys.exc_traceback`` keeps the stack frame alive); or\n a reference to the object on the stack frame that raised an\n unhandled exception in interactive mode (the traceback stored in\n ``sys.last_traceback`` keeps the stack frame alive). The first\n situation can only be remedied by explicitly breaking the cycles;\n the latter two situations can be resolved by storing ``None`` in\n ``sys.exc_traceback`` or ``sys.last_traceback``. Circular\n references which are garbage are detected when the option cycle\n detector is enabled (it\'s on by default), but can only be cleaned\n up if there are no Python-level ``__del__()`` methods involved.\n Refer to the documentation for the ``gc`` module for more\n information about how ``__del__()`` methods are handled by the\n cycle detector, particularly the description of the ``garbage``\n value.\n\n Warning: Due to the precarious circumstances under which ``__del__()``\n methods are invoked, exceptions that occur during their execution\n are ignored, and a warning is printed to ``sys.stderr`` instead.\n Also, when ``__del__()`` is invoked in response to a module being\n deleted (e.g., when execution of the program is done), other\n globals referenced by the ``__del__()`` method may already have\n been deleted or in the process of being torn down (e.g. the\n import machinery shutting down). For this reason, ``__del__()``\n methods should do the absolute minimum needed to maintain\n external invariants. Starting with version 1.5, Python\n guarantees that globals whose name begins with a single\n underscore are deleted from their module before other globals are\n deleted; if no other references to such globals exist, this may\n help in assuring that imported modules are still available at the\n time when the ``__del__()`` method is called.\n\nobject.__repr__(self)\n\n Called by the ``repr()`` built-in function and by string\n conversions (reverse quotes) to compute the "official" string\n representation of an object. If at all possible, this should look\n like a valid Python expression that could be used to recreate an\n object with the same value (given an appropriate environment). If\n this is not possible, a string of the form ``<...some useful\n description...>`` should be returned. The return value must be a\n string object. If a class defines ``__repr__()`` but not\n ``__str__()``, then ``__repr__()`` is also used when an "informal"\n string representation of instances of that class is required.\n\n This is typically used for debugging, so it is important that the\n representation is information-rich and unambiguous.\n\nobject.__str__(self)\n\n Called by the ``str()`` built-in function and by the ``print``\n statement to compute the "informal" string representation of an\n object. This differs from ``__repr__()`` in that it does not have\n to be a valid Python expression: a more convenient or concise\n representation may be used instead. 
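
A typical pairing of the two string hooks, with an illustrative ``Point`` class:

    class Point(object):
        def __init__(self, x, y):
            self.x, self.y = x, y
        def __repr__(self):
            # unambiguous; ideally a valid expression that recreates the object
            return 'Point(%r, %r)' % (self.x, self.y)
        def __str__(self):
            # informal, used by print and str()
            return '(%s, %s)' % (self.x, self.y)

    p = Point(1, 2)
    print p                            # (1, 2)
    print repr(p)                      # Point(1, 2)
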
The return value must be a\n string object.\n\nobject.__lt__(self, other)\nobject.__le__(self, other)\nobject.__eq__(self, other)\nobject.__ne__(self, other)\nobject.__gt__(self, other)\nobject.__ge__(self, other)\n\n New in version 2.1.\n\n These are the so-called "rich comparison" methods, and are called\n for comparison operators in preference to ``__cmp__()`` below. The\n correspondence between operator symbols and method names is as\n follows: ``x<y`` calls ``x.__lt__(y)``, ``x<=y`` calls\n ``x.__le__(y)``, ``x==y`` calls ``x.__eq__(y)``, ``x!=y`` and\n ``x<>y`` call ``x.__ne__(y)``, ``x>y`` calls ``x.__gt__(y)``, and\n ``x>=y`` calls ``x.__ge__(y)``.\n\n A rich comparison method may return the singleton\n ``NotImplemented`` if it does not implement the operation for a\n given pair of arguments. By convention, ``False`` and ``True`` are\n returned for a successful comparison. However, these methods can\n return any value, so if the comparison operator is used in a\n Boolean context (e.g., in the condition of an ``if`` statement),\n Python will call ``bool()`` on the value to determine if the result\n is true or false.\n\n There are no implied relationships among the comparison operators.\n The truth of ``x==y`` does not imply that ``x!=y`` is false.\n Accordingly, when defining ``__eq__()``, one should also define\n ``__ne__()`` so that the operators will behave as expected. See\n the paragraph on ``__hash__()`` for some important notes on\n creating *hashable* objects which support custom comparison\n operations and are usable as dictionary keys.\n\n There are no swapped-argument versions of these methods (to be used\n when the left argument does not support the operation but the right\n argument does); rather, ``__lt__()`` and ``__gt__()`` are each\n other\'s reflection, ``__le__()`` and ``__ge__()`` are each other\'s\n reflection, and ``__eq__()`` and ``__ne__()`` are their own\n reflection.\n\n Arguments to rich comparison methods are never coerced.\n\n To automatically generate ordering operations from a single root\n operation, see ``functools.total_ordering()``.\n\nobject.__cmp__(self, other)\n\n Called by comparison operations if rich comparison (see above) is\n not defined. Should return a negative integer if ``self < other``,\n zero if ``self == other``, a positive integer if ``self > other``.\n If no ``__cmp__()``, ``__eq__()`` or ``__ne__()`` operation is\n defined, class instances are compared by object identity\n ("address"). See also the description of ``__hash__()`` for some\n important notes on creating *hashable* objects which support custom\n comparison operations and are usable as dictionary keys. (Note: the\n restriction that exceptions are not propagated by ``__cmp__()`` has\n been removed since Python 1.5.)\n\nobject.__rcmp__(self, other)\n\n Changed in version 2.1: No longer supported.\n\nobject.__hash__(self)\n\n Called by built-in function ``hash()`` and for operations on\n members of hashed collections including ``set``, ``frozenset``, and\n ``dict``. ``__hash__()`` should return an integer. The only\n required property is that objects which compare equal have the same\n hash value; it is advised to somehow mix together (e.g. using\n exclusive or) the hash values for the components of the object that\n also play a part in comparison of objects.\n\n If a class does not define a ``__cmp__()`` or ``__eq__()`` method\n it should not define a ``__hash__()`` operation either; if it\n defines ``__cmp__()`` or ``__eq__()`` but not ``__hash__()``, its\n instances will not be usable in hashed collections. 
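
A sketch of a class that keeps ``__eq__()``, ``__ne__()`` and ``__hash__()`` consistent, as recommended above (the ``Account`` class is illustrative):

    class Account(object):
        def __init__(self, number):
            self.number = number
        def __eq__(self, other):
            if not isinstance(other, Account):
                return NotImplemented          # let the other operand have a say
            return self.number == other.number
        def __ne__(self, other):
            result = self.__eq__(other)
            return result if result is NotImplemented else not result
        def __hash__(self):
            return hash(self.number)           # equal objects must hash equally

    print Account(42) == Account(42)           # True
    print len({Account(42), Account(42)})      # 1: usable as dict/set keys
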
If a class\n defines mutable objects and implements a ``__cmp__()`` or\n ``__eq__()`` method, it should not implement ``__hash__()``, since\n hashable collection implementations require that a object\'s hash\n value is immutable (if the object\'s hash value changes, it will be\n in the wrong hash bucket).\n\n User-defined classes have ``__cmp__()`` and ``__hash__()`` methods\n by default; with them, all objects compare unequal (except with\n themselves) and ``x.__hash__()`` returns ``id(x)``.\n\n Classes which inherit a ``__hash__()`` method from a parent class\n but change the meaning of ``__cmp__()`` or ``__eq__()`` such that\n the hash value returned is no longer appropriate (e.g. by switching\n to a value-based concept of equality instead of the default\n identity based equality) can explicitly flag themselves as being\n unhashable by setting ``__hash__ = None`` in the class definition.\n Doing so means that not only will instances of the class raise an\n appropriate ``TypeError`` when a program attempts to retrieve their\n hash value, but they will also be correctly identified as\n unhashable when checking ``isinstance(obj, collections.Hashable)``\n (unlike classes which define their own ``__hash__()`` to explicitly\n raise ``TypeError``).\n\n Changed in version 2.5: ``__hash__()`` may now also return a long\n integer object; the 32-bit integer is then derived from the hash of\n that object.\n\n Changed in version 2.6: ``__hash__`` may now be set to ``None`` to\n explicitly flag instances of a class as unhashable.\n\nobject.__nonzero__(self)\n\n Called to implement truth value testing and the built-in operation\n ``bool()``; should return ``False`` or ``True``, or their integer\n equivalents ``0`` or ``1``. When this method is not defined,\n ``__len__()`` is called, if it is defined, and the object is\n considered true if its result is nonzero. If a class defines\n neither ``__len__()`` nor ``__nonzero__()``, all its instances are\n considered true.\n\nobject.__unicode__(self)\n\n Called to implement ``unicode()`` built-in; should return a Unicode\n object. When this method is not defined, string conversion is\n attempted, and the result of string conversion is converted to\n Unicode using the system default encoding.\n\n\nCustomizing attribute access\n============================\n\nThe following methods can be defined to customize the meaning of\nattribute access (use of, assignment to, or deletion of ``x.name``)\nfor class instances.\n\nobject.__getattr__(self, name)\n\n Called when an attribute lookup has not found the attribute in the\n usual places (i.e. it is not an instance attribute nor is it found\n in the class tree for ``self``). ``name`` is the attribute name.\n This method should return the (computed) attribute value or raise\n an ``AttributeError`` exception.\n\n Note that if the attribute is found through the normal mechanism,\n ``__getattr__()`` is not called. (This is an intentional asymmetry\n between ``__getattr__()`` and ``__setattr__()``.) This is done both\n for efficiency reasons and because otherwise ``__getattr__()``\n would have no way to access other attributes of the instance. Note\n that at least for instance variables, you can fake total control by\n not inserting any values in the instance attribute dictionary (but\n instead inserting them in another object). 
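
A minimal sketch of ``__getattr__()`` and ``__setattr__()`` routing attributes through an internal dictionary (purely illustrative):

    class Record(object):
        def __init__(self):
            # bypass our own __setattr__ while creating the internal storage
            object.__setattr__(self, '_data', {})
        def __getattr__(self, name):
            # only called when normal lookup has already failed
            try:
                return self._data[name]
            except KeyError:
                raise AttributeError(name)
        def __setattr__(self, name, value):
            # store in the internal dict instead of self.__dict__
            self._data[name] = value

    r = Record()
    r.color = 'red'
    print r.color                      # 'red'
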
See the\n ``__getattribute__()`` method below for a way to actually get total\n control in new-style classes.\n\nobject.__setattr__(self, name, value)\n\n Called when an attribute assignment is attempted. This is called\n instead of the normal mechanism (i.e. store the value in the\n instance dictionary). *name* is the attribute name, *value* is the\n value to be assigned to it.\n\n If ``__setattr__()`` wants to assign to an instance attribute, it\n should not simply execute ``self.name = value`` --- this would\n cause a recursive call to itself. Instead, it should insert the\n value in the dictionary of instance attributes, e.g.,\n ``self.__dict__[name] = value``. For new-style classes, rather\n than accessing the instance dictionary, it should call the base\n class method with the same name, for example,\n ``object.__setattr__(self, name, value)``.\n\nobject.__delattr__(self, name)\n\n Like ``__setattr__()`` but for attribute deletion instead of\n assignment. This should only be implemented if ``del obj.name`` is\n meaningful for the object.\n\n\nMore attribute access for new-style classes\n-------------------------------------------\n\nThe following methods only apply to new-style classes.\n\nobject.__getattribute__(self, name)\n\n Called unconditionally to implement attribute accesses for\n instances of the class. If the class also defines\n ``__getattr__()``, the latter will not be called unless\n ``__getattribute__()`` either calls it explicitly or raises an\n ``AttributeError``. This method should return the (computed)\n attribute value or raise an ``AttributeError`` exception. In order\n to avoid infinite recursion in this method, its implementation\n should always call the base class method with the same name to\n access any attributes it needs, for example,\n ``object.__getattribute__(self, name)``.\n\n Note: This method may still be bypassed when looking up special methods\n as the result of implicit invocation via language syntax or\n built-in functions. See *Special method lookup for new-style\n classes*.\n\n\nImplementing Descriptors\n------------------------\n\nThe following methods only apply when an instance of the class\ncontaining the method (a so-called *descriptor* class) appears in the\nclass dictionary of another new-style class, known as the *owner*\nclass. In the examples below, "the attribute" refers to the attribute\nwhose name is the key of the property in the owner class\'\n``__dict__``. Descriptors can only be implemented as new-style\nclasses themselves.\n\nobject.__get__(self, instance, owner)\n\n Called to get the attribute of the owner class (class attribute\n access) or of an instance of that class (instance attribute\n access). *owner* is always the owner class, while *instance* is the\n instance that the attribute was accessed through, or ``None`` when\n the attribute is accessed through the *owner*. This method should\n return the (computed) attribute value or raise an\n ``AttributeError`` exception.\n\nobject.__set__(self, instance, value)\n\n Called to set the attribute on an instance *instance* of the owner\n class to a new value, *value*.\n\nobject.__delete__(self, instance)\n\n Called to delete the attribute on an instance *instance* of the\n owner class.\n\n\nInvoking Descriptors\n--------------------\n\nIn general, a descriptor is an object attribute with "binding\nbehavior", one whose attribute access has been overridden by methods\nin the descriptor protocol: ``__get__()``, ``__set__()``, and\n``__delete__()``. 
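
A small data descriptor along these lines; ``Positive`` and ``Order`` are illustrative names:

    class Positive(object):
        # a data descriptor: it defines both __get__ and __set__
        def __init__(self, name):
            self.name = name                   # key used in the instance __dict__
        def __get__(self, instance, owner):
            if instance is None:
                return self                    # accessed on the owner class itself
            return instance.__dict__[self.name]
        def __set__(self, instance, value):
            if value <= 0:
                raise ValueError('must be positive')
            instance.__dict__[self.name] = value

    class Order(object):
        quantity = Positive('quantity')        # the descriptor lives in the owner class

    o = Order()
    o.quantity = 3                             # routed through Positive.__set__
    print o.quantity                           # 3, via Positive.__get__
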
If any of those methods are defined for an object,\nit is said to be a descriptor.\n\nThe default behavior for attribute access is to get, set, or delete\nthe attribute from an object\'s dictionary. For instance, ``a.x`` has a\nlookup chain starting with ``a.__dict__[\'x\']``, then\n``type(a).__dict__[\'x\']``, and continuing through the base classes of\n``type(a)`` excluding metaclasses.\n\nHowever, if the looked-up value is an object defining one of the\ndescriptor methods, then Python may override the default behavior and\ninvoke the descriptor method instead. Where this occurs in the\nprecedence chain depends on which descriptor methods were defined and\nhow they were called. Note that descriptors are only invoked for new\nstyle objects or classes (ones that subclass ``object()`` or\n``type()``).\n\nThe starting point for descriptor invocation is a binding, ``a.x``.\nHow the arguments are assembled depends on ``a``:\n\nDirect Call\n The simplest and least common call is when user code directly\n invokes a descriptor method: ``x.__get__(a)``.\n\nInstance Binding\n If binding to a new-style object instance, ``a.x`` is transformed\n into the call: ``type(a).__dict__[\'x\'].__get__(a, type(a))``.\n\nClass Binding\n If binding to a new-style class, ``A.x`` is transformed into the\n call: ``A.__dict__[\'x\'].__get__(None, A)``.\n\nSuper Binding\n If ``a`` is an instance of ``super``, then the binding ``super(B,\n obj).m()`` searches ``obj.__class__.__mro__`` for the base class\n ``A`` immediately preceding ``B`` and then invokes the descriptor\n with the call: ``A.__dict__[\'m\'].__get__(obj, A)``.\n\nFor instance bindings, the precedence of descriptor invocation depends\non the which descriptor methods are defined. A descriptor can define\nany combination of ``__get__()``, ``__set__()`` and ``__delete__()``.\nIf it does not define ``__get__()``, then accessing the attribute will\nreturn the descriptor object itself unless there is a value in the\nobject\'s instance dictionary. If the descriptor defines ``__set__()``\nand/or ``__delete__()``, it is a data descriptor; if it defines\nneither, it is a non-data descriptor. Normally, data descriptors\ndefine both ``__get__()`` and ``__set__()``, while non-data\ndescriptors have just the ``__get__()`` method. Data descriptors with\n``__set__()`` and ``__get__()`` defined always override a redefinition\nin an instance dictionary. In contrast, non-data descriptors can be\noverridden by instances.\n\nPython methods (including ``staticmethod()`` and ``classmethod()``)\nare implemented as non-data descriptors. Accordingly, instances can\nredefine and override methods. This allows individual instances to\nacquire behaviors that differ from other instances of the same class.\n\nThe ``property()`` function is implemented as a data descriptor.\nAccordingly, instances cannot override the behavior of a property.\n\n\n__slots__\n---------\n\nBy default, instances of both old and new-style classes have a\ndictionary for attribute storage. This wastes space for objects\nhaving very few instance variables. The space consumption can become\nacute when creating large numbers of instances.\n\nThe default can be overridden by defining *__slots__* in a new-style\nclass definition. The *__slots__* declaration takes a sequence of\ninstance variables and reserves just enough space in each instance to\nhold a value for each variable. 
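
A brief ``__slots__`` illustration:

    class Vec(object):
        __slots__ = ('x', 'y')         # no per-instance __dict__ is created

    v = Vec()
    v.x, v.y = 1, 2
    try:
        v.z = 3                        # not listed in __slots__
    except AttributeError, e:
        print 'rejected:', e
    print hasattr(v, '__dict__')       # False
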
Space is saved because *__dict__* is\nnot created for each instance.\n\n__slots__\n\n This class variable can be assigned a string, iterable, or sequence\n of strings with variable names used by instances. If defined in a\n new-style class, *__slots__* reserves space for the declared\n variables and prevents the automatic creation of *__dict__* and\n *__weakref__* for each instance.\n\n New in version 2.2.\n\nNotes on using *__slots__*\n\n* When inheriting from a class without *__slots__*, the *__dict__*\n attribute of that class will always be accessible, so a *__slots__*\n definition in the subclass is meaningless.\n\n* Without a *__dict__* variable, instances cannot be assigned new\n variables not listed in the *__slots__* definition. Attempts to\n assign to an unlisted variable name raises ``AttributeError``. If\n dynamic assignment of new variables is desired, then add\n ``\'__dict__\'`` to the sequence of strings in the *__slots__*\n declaration.\n\n Changed in version 2.3: Previously, adding ``\'__dict__\'`` to the\n *__slots__* declaration would not enable the assignment of new\n attributes not specifically listed in the sequence of instance\n variable names.\n\n* Without a *__weakref__* variable for each instance, classes defining\n *__slots__* do not support weak references to its instances. If weak\n reference support is needed, then add ``\'__weakref__\'`` to the\n sequence of strings in the *__slots__* declaration.\n\n Changed in version 2.3: Previously, adding ``\'__weakref__\'`` to the\n *__slots__* declaration would not enable support for weak\n references.\n\n* *__slots__* are implemented at the class level by creating\n descriptors (*Implementing Descriptors*) for each variable name. As\n a result, class attributes cannot be used to set default values for\n instance variables defined by *__slots__*; otherwise, the class\n attribute would overwrite the descriptor assignment.\n\n* The action of a *__slots__* declaration is limited to the class\n where it is defined. As a result, subclasses will have a *__dict__*\n unless they also define *__slots__* (which must only contain names\n of any *additional* slots).\n\n* If a class defines a slot also defined in a base class, the instance\n variable defined by the base class slot is inaccessible (except by\n retrieving its descriptor directly from the base class). This\n renders the meaning of the program undefined. In the future, a\n check may be added to prevent this.\n\n* Nonempty *__slots__* does not work for classes derived from\n "variable-length" built-in types such as ``long``, ``str`` and\n ``tuple``.\n\n* Any non-string iterable may be assigned to *__slots__*. Mappings may\n also be used; however, in the future, special meaning may be\n assigned to the values corresponding to each key.\n\n* *__class__* assignment works only if both classes have the same\n *__slots__*.\n\n Changed in version 2.6: Previously, *__class__* assignment raised an\n error if either new or old class had *__slots__*.\n\n\nCustomizing class creation\n==========================\n\nBy default, new-style classes are constructed using ``type()``. A\nclass definition is read into a separate namespace and the value of\nclass name is bound to the result of ``type(name, bases, dict)``.\n\nWhen the class definition is read, if *__metaclass__* is defined then\nthe callable assigned to it will be called instead of ``type()``. 
This\nallows classes or functions to be written which monitor or alter the\nclass creation process:\n\n* Modifying the class dictionary prior to the class being created.\n\n* Returning an instance of another class -- essentially performing the\n role of a factory function.\n\nThese steps will have to be performed in the metaclass\'s ``__new__()``\nmethod -- ``type.__new__()`` can then be called from this method to\ncreate a class with different properties. This example adds a new\nelement to the class dictionary before creating the class:\n\n class metacls(type):\n def __new__(mcs, name, bases, dict):\n dict[\'foo\'] = \'metacls was here\'\n return type.__new__(mcs, name, bases, dict)\n\nYou can of course also override other class methods (or add new\nmethods); for example defining a custom ``__call__()`` method in the\nmetaclass allows custom behavior when the class is called, e.g. not\nalways creating a new instance.\n\n__metaclass__\n\n This variable can be any callable accepting arguments for ``name``,\n ``bases``, and ``dict``. Upon class creation, the callable is used\n instead of the built-in ``type()``.\n\n New in version 2.2.\n\nThe appropriate metaclass is determined by the following precedence\nrules:\n\n* If ``dict[\'__metaclass__\']`` exists, it is used.\n\n* Otherwise, if there is at least one base class, its metaclass is\n used (this looks for a *__class__* attribute first and if not found,\n uses its type).\n\n* Otherwise, if a global variable named __metaclass__ exists, it is\n used.\n\n* Otherwise, the old-style, classic metaclass (types.ClassType) is\n used.\n\nThe potential uses for metaclasses are boundless. Some ideas that have\nbeen explored including logging, interface checking, automatic\ndelegation, automatic property creation, proxies, frameworks, and\nautomatic resource locking/synchronization.\n\n\nCustomizing instance and subclass checks\n========================================\n\nNew in version 2.6.\n\nThe following methods are used to override the default behavior of the\n``isinstance()`` and ``issubclass()`` built-in functions.\n\nIn particular, the metaclass ``abc.ABCMeta`` implements these methods\nin order to allow the addition of Abstract Base Classes (ABCs) as\n"virtual base classes" to any class or type (including built-in\ntypes), including other ABCs.\n\nclass.__instancecheck__(self, instance)\n\n Return true if *instance* should be considered a (direct or\n indirect) instance of *class*. If defined, called to implement\n ``isinstance(instance, class)``.\n\nclass.__subclasscheck__(self, subclass)\n\n Return true if *subclass* should be considered a (direct or\n indirect) subclass of *class*. If defined, called to implement\n ``issubclass(subclass, class)``.\n\nNote that these methods are looked up on the type (metaclass) of a\nclass. 
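
A sketch of ``__instancecheck__()`` defined on a metaclass; ``DuckMeta``, ``Duck`` and ``Robot`` are made-up names:

    class DuckMeta(type):
        def __instancecheck__(cls, instance):
            # anything with a quack() method counts as an instance
            return hasattr(instance, 'quack')

    class Duck(object):
        __metaclass__ = DuckMeta

    class Robot(object):
        def quack(self):
            return 'beep'

    print isinstance(Robot(), Duck)    # True, via DuckMeta.__instancecheck__
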
They cannot be defined as class methods in the actual class.\nThis is consistent with the lookup of special methods that are called\non instances, only in this case the instance is itself a class.\n\nSee also:\n\n **PEP 3119** - Introducing Abstract Base Classes\n Includes the specification for customizing ``isinstance()`` and\n ``issubclass()`` behavior through ``__instancecheck__()`` and\n ``__subclasscheck__()``, with motivation for this functionality\n in the context of adding Abstract Base Classes (see the ``abc``\n module) to the language.\n\n\nEmulating callable objects\n==========================\n\nobject.__call__(self[, args...])\n\n Called when the instance is "called" as a function; if this method\n is defined, ``x(arg1, arg2, ...)`` is a shorthand for\n ``x.__call__(arg1, arg2, ...)``.\n\n\nEmulating container types\n=========================\n\nThe following methods can be defined to implement container objects.\nContainers usually are sequences (such as lists or tuples) or mappings\n(like dictionaries), but can represent other containers as well. The\nfirst set of methods is used either to emulate a sequence or to\nemulate a mapping; the difference is that for a sequence, the\nallowable keys should be the integers *k* for which ``0 <= k < N``\nwhere *N* is the length of the sequence, or slice objects, which\ndefine a range of items. (For backwards compatibility, the method\n``__getslice__()`` (see below) can also be defined to handle simple,\nbut not extended slices.) It is also recommended that mappings provide\nthe methods ``keys()``, ``values()``, ``items()``, ``has_key()``,\n``get()``, ``clear()``, ``setdefault()``, ``iterkeys()``,\n``itervalues()``, ``iteritems()``, ``pop()``, ``popitem()``,\n``copy()``, and ``update()`` behaving similar to those for Python\'s\nstandard dictionary objects. The ``UserDict`` module provides a\n``DictMixin`` class to help create those methods from a base set of\n``__getitem__()``, ``__setitem__()``, ``__delitem__()``, and\n``keys()``. Mutable sequences should provide methods ``append()``,\n``count()``, ``index()``, ``extend()``, ``insert()``, ``pop()``,\n``remove()``, ``reverse()`` and ``sort()``, like Python standard list\nobjects. Finally, sequence types should implement addition (meaning\nconcatenation) and multiplication (meaning repetition) by defining the\nmethods ``__add__()``, ``__radd__()``, ``__iadd__()``, ``__mul__()``,\n``__rmul__()`` and ``__imul__()`` described below; they should not\ndefine ``__coerce__()`` or other numerical operators. It is\nrecommended that both mappings and sequences implement the\n``__contains__()`` method to allow efficient use of the ``in``\noperator; for mappings, ``in`` should be equivalent of ``has_key()``;\nfor sequences, it should search through the values. It is further\nrecommended that both mappings and sequences implement the\n``__iter__()`` method to allow efficient iteration through the\ncontainer; for mappings, ``__iter__()`` should be the same as\n``iterkeys()``; for sequences, it should iterate through the values.\n\nobject.__len__(self)\n\n Called to implement the built-in function ``len()``. Should return\n the length of the object, an integer ``>=`` 0. Also, an object\n that doesn\'t define a ``__nonzero__()`` method and whose\n ``__len__()`` method returns zero is considered to be false in a\n Boolean context.\n\nobject.__getitem__(self, key)\n\n Called to implement evaluation of ``self[key]``. 
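
A compact sequence-like container using the methods discussed in this section (the ``Deck`` class is illustrative):

    class Deck(object):
        def __init__(self, cards):
            self._cards = list(cards)
        def __len__(self):
            return len(self._cards)
        def __getitem__(self, index):
            return self._cards[index]  # also accepts slice objects
        def __iter__(self):
            return iter(self._cards)
        def __contains__(self, card):
            return card in self._cards

    d = Deck(['7h', 'Qs', 'Ad'])
    print len(d), d[0], d[-2:]         # 3 7h ['Qs', 'Ad']
    print 'Qs' in d                    # True
    print list(d)                      # iteration via __iter__
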
For sequence\n types, the accepted keys should be integers and slice objects.\n Note that the special interpretation of negative indexes (if the\n class wishes to emulate a sequence type) is up to the\n ``__getitem__()`` method. If *key* is of an inappropriate type,\n ``TypeError`` may be raised; if of a value outside the set of\n indexes for the sequence (after any special interpretation of\n negative values), ``IndexError`` should be raised. For mapping\n types, if *key* is missing (not in the container), ``KeyError``\n should be raised.\n\n Note: ``for`` loops expect that an ``IndexError`` will be raised for\n illegal indexes to allow proper detection of the end of the\n sequence.\n\nobject.__setitem__(self, key, value)\n\n Called to implement assignment to ``self[key]``. Same note as for\n ``__getitem__()``. This should only be implemented for mappings if\n the objects support changes to the values for keys, or if new keys\n can be added, or for sequences if elements can be replaced. The\n same exceptions should be raised for improper *key* values as for\n the ``__getitem__()`` method.\n\nobject.__delitem__(self, key)\n\n Called to implement deletion of ``self[key]``. Same note as for\n ``__getitem__()``. This should only be implemented for mappings if\n the objects support removal of keys, or for sequences if elements\n can be removed from the sequence. The same exceptions should be\n raised for improper *key* values as for the ``__getitem__()``\n method.\n\nobject.__iter__(self)\n\n This method is called when an iterator is required for a container.\n This method should return a new iterator object that can iterate\n over all the objects in the container. For mappings, it should\n iterate over the keys of the container, and should also be made\n available as the method ``iterkeys()``.\n\n Iterator objects also need to implement this method; they are\n required to return themselves. For more information on iterator\n objects, see *Iterator Types*.\n\nobject.__reversed__(self)\n\n Called (if present) by the ``reversed()`` built-in to implement\n reverse iteration. It should return a new iterator object that\n iterates over all the objects in the container in reverse order.\n\n If the ``__reversed__()`` method is not provided, the\n ``reversed()`` built-in will fall back to using the sequence\n protocol (``__len__()`` and ``__getitem__()``). Objects that\n support the sequence protocol should only provide\n ``__reversed__()`` if they can provide an implementation that is\n more efficient than the one provided by ``reversed()``.\n\n New in version 2.6.\n\nThe membership test operators (``in`` and ``not in``) are normally\nimplemented as an iteration through a sequence. However, container\nobjects can supply the following special method with a more efficient\nimplementation, which also does not require the object be a sequence.\n\nobject.__contains__(self, item)\n\n Called to implement membership test operators. Should return true\n if *item* is in *self*, false otherwise. 
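As a rough sketch of how these container methods fit together (the ``Deck`` class is invented for the example), a small read-only sequence can simply delegate to an internal list:

    class Deck(object):
        """A tiny read-only sequence wrapping a list of cards."""
        def __init__(self, cards):
            self._cards = list(cards)
        def __len__(self):
            return len(self._cards)
        def __getitem__(self, index):
            # Delegating to the list handles negative indexes and slice
            # objects, and raises IndexError for out-of-range indexes.
            return self._cards[index]
        def __iter__(self):
            return iter(self._cards)
        def __contains__(self, item):
            return item in self._cards

    d = Deck(['ace', 'king', 'queen'])
    # len(d) == 3;  d[0] == 'ace';  'king' in d is True
    # reversed(d) also works, via the __len__/__getitem__ fallback.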
For mapping objects, this\n should consider the keys of the mapping rather than the values or\n the key-item pairs.\n\n For objects that don\'t define ``__contains__()``, the membership\n test first tries iteration via ``__iter__()``, then the old\n sequence iteration protocol via ``__getitem__()``, see *this\n section in the language reference*.\n\n\nAdditional methods for emulation of sequence types\n==================================================\n\nThe following optional methods can be defined to further emulate\nsequence objects. Immutable sequences methods should at most only\ndefine ``__getslice__()``; mutable sequences might define all three\nmethods.\n\nobject.__getslice__(self, i, j)\n\n Deprecated since version 2.0: Support slice objects as parameters\n to the ``__getitem__()`` method. (However, built-in types in\n CPython currently still implement ``__getslice__()``. Therefore,\n you have to override it in derived classes when implementing\n slicing.)\n\n Called to implement evaluation of ``self[i:j]``. The returned\n object should be of the same type as *self*. Note that missing *i*\n or *j* in the slice expression are replaced by zero or\n ``sys.maxint``, respectively. If negative indexes are used in the\n slice, the length of the sequence is added to that index. If the\n instance does not implement the ``__len__()`` method, an\n ``AttributeError`` is raised. No guarantee is made that indexes\n adjusted this way are not still negative. Indexes which are\n greater than the length of the sequence are not modified. If no\n ``__getslice__()`` is found, a slice object is created instead, and\n passed to ``__getitem__()`` instead.\n\nobject.__setslice__(self, i, j, sequence)\n\n Called to implement assignment to ``self[i:j]``. Same notes for *i*\n and *j* as for ``__getslice__()``.\n\n This method is deprecated. If no ``__setslice__()`` is found, or\n for extended slicing of the form ``self[i:j:k]``, a slice object is\n created, and passed to ``__setitem__()``, instead of\n ``__setslice__()`` being called.\n\nobject.__delslice__(self, i, j)\n\n Called to implement deletion of ``self[i:j]``. Same notes for *i*\n and *j* as for ``__getslice__()``. This method is deprecated. If no\n ``__delslice__()`` is found, or for extended slicing of the form\n ``self[i:j:k]``, a slice object is created, and passed to\n ``__delitem__()``, instead of ``__delslice__()`` being called.\n\nNotice that these methods are only invoked when a single slice with a\nsingle colon is used, and the slice method is available. 
For slice\noperations involving extended slice notation, or in absence of the\nslice methods, ``__getitem__()``, ``__setitem__()`` or\n``__delitem__()`` is called with a slice object as argument.\n\nThe following example demonstrate how to make your program or module\ncompatible with earlier versions of Python (assuming that methods\n``__getitem__()``, ``__setitem__()`` and ``__delitem__()`` support\nslice objects as arguments):\n\n class MyClass:\n ...\n def __getitem__(self, index):\n ...\n def __setitem__(self, index, value):\n ...\n def __delitem__(self, index):\n ...\n\n if sys.version_info < (2, 0):\n # They won\'t be defined if version is at least 2.0 final\n\n def __getslice__(self, i, j):\n return self[max(0, i):max(0, j):]\n def __setslice__(self, i, j, seq):\n self[max(0, i):max(0, j):] = seq\n def __delslice__(self, i, j):\n del self[max(0, i):max(0, j):]\n ...\n\nNote the calls to ``max()``; these are necessary because of the\nhandling of negative indices before the ``__*slice__()`` methods are\ncalled. When negative indexes are used, the ``__*item__()`` methods\nreceive them as provided, but the ``__*slice__()`` methods get a\n"cooked" form of the index values. For each negative index value, the\nlength of the sequence is added to the index before calling the method\n(which may still result in a negative index); this is the customary\nhandling of negative indexes by the built-in sequence types, and the\n``__*item__()`` methods are expected to do this as well. However,\nsince they should already be doing that, negative indexes cannot be\npassed in; they must be constrained to the bounds of the sequence\nbefore being passed to the ``__*item__()`` methods. Calling ``max(0,\ni)`` conveniently returns the proper value.\n\n\nEmulating numeric types\n=======================\n\nThe following methods can be defined to emulate numeric objects.\nMethods corresponding to operations that are not supported by the\nparticular kind of number implemented (e.g., bitwise operations for\nnon-integral numbers) should be left undefined.\n\nobject.__add__(self, other)\nobject.__sub__(self, other)\nobject.__mul__(self, other)\nobject.__floordiv__(self, other)\nobject.__mod__(self, other)\nobject.__divmod__(self, other)\nobject.__pow__(self, other[, modulo])\nobject.__lshift__(self, other)\nobject.__rshift__(self, other)\nobject.__and__(self, other)\nobject.__xor__(self, other)\nobject.__or__(self, other)\n\n These methods are called to implement the binary arithmetic\n operations (``+``, ``-``, ``*``, ``//``, ``%``, ``divmod()``,\n ``pow()``, ``**``, ``<<``, ``>>``, ``&``, ``^``, ``|``). For\n instance, to evaluate the expression ``x + y``, where *x* is an\n instance of a class that has an ``__add__()`` method,\n ``x.__add__(y)`` is called. The ``__divmod__()`` method should be\n the equivalent to using ``__floordiv__()`` and ``__mod__()``; it\n should not be related to ``__truediv__()`` (described below). Note\n that ``__pow__()`` should be defined to accept an optional third\n argument if the ternary version of the built-in ``pow()`` function\n is to be supported.\n\n If one of those methods does not support the operation with the\n supplied arguments, it should return ``NotImplemented``.\n\nobject.__div__(self, other)\nobject.__truediv__(self, other)\n\n The division operator (``/``) is implemented by these methods. The\n ``__truediv__()`` method is used when ``__future__.division`` is in\n effect, otherwise ``__div__()`` is used. 
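A hedged sketch of a numeric type defining both flavours of division (the ``Ratio`` class is invented for the example; real code would normally reduce the fraction):

    class Ratio(object):
        """Minimal rational type defining both __div__ and __truediv__."""
        def __init__(self, num, den):
            self.num, self.den = num, den
        def __div__(self, other):           # classic division, no __future__ import
            if not isinstance(other, Ratio):
                return NotImplemented       # let the other operand try __rdiv__
            return Ratio(self.num * other.den, self.den * other.num)
        __truediv__ = __div__               # same behaviour under true division
        def __repr__(self):
            return 'Ratio(%d, %d)' % (self.num, self.den)

    # Ratio(1, 2) / Ratio(1, 4)  ->  Ratio(4, 2)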
If only one of these two\n methods is defined, the object will not support division in the\n alternate context; ``TypeError`` will be raised instead.\n\nobject.__radd__(self, other)\nobject.__rsub__(self, other)\nobject.__rmul__(self, other)\nobject.__rdiv__(self, other)\nobject.__rtruediv__(self, other)\nobject.__rfloordiv__(self, other)\nobject.__rmod__(self, other)\nobject.__rdivmod__(self, other)\nobject.__rpow__(self, other)\nobject.__rlshift__(self, other)\nobject.__rrshift__(self, other)\nobject.__rand__(self, other)\nobject.__rxor__(self, other)\nobject.__ror__(self, other)\n\n These methods are called to implement the binary arithmetic\n operations (``+``, ``-``, ``*``, ``/``, ``%``, ``divmod()``,\n ``pow()``, ``**``, ``<<``, ``>>``, ``&``, ``^``, ``|``) with\n reflected (swapped) operands. These functions are only called if\n the left operand does not support the corresponding operation and\n the operands are of different types. [2] For instance, to evaluate\n the expression ``x - y``, where *y* is an instance of a class that\n has an ``__rsub__()`` method, ``y.__rsub__(x)`` is called if\n ``x.__sub__(y)`` returns *NotImplemented*.\n\n Note that ternary ``pow()`` will not try calling ``__rpow__()``\n (the coercion rules would become too complicated).\n\n Note: If the right operand\'s type is a subclass of the left operand\'s\n type and that subclass provides the reflected method for the\n operation, this method will be called before the left operand\'s\n non-reflected method. This behavior allows subclasses to\n override their ancestors\' operations.\n\nobject.__iadd__(self, other)\nobject.__isub__(self, other)\nobject.__imul__(self, other)\nobject.__idiv__(self, other)\nobject.__itruediv__(self, other)\nobject.__ifloordiv__(self, other)\nobject.__imod__(self, other)\nobject.__ipow__(self, other[, modulo])\nobject.__ilshift__(self, other)\nobject.__irshift__(self, other)\nobject.__iand__(self, other)\nobject.__ixor__(self, other)\nobject.__ior__(self, other)\n\n These methods are called to implement the augmented arithmetic\n assignments (``+=``, ``-=``, ``*=``, ``/=``, ``//=``, ``%=``,\n ``**=``, ``<<=``, ``>>=``, ``&=``, ``^=``, ``|=``). These methods\n should attempt to do the operation in-place (modifying *self*) and\n return the result (which could be, but does not have to be,\n *self*). If a specific method is not defined, the augmented\n assignment falls back to the normal methods. For instance, to\n execute the statement ``x += y``, where *x* is an instance of a\n class that has an ``__iadd__()`` method, ``x.__iadd__(y)`` is\n called. If *x* is an instance of a class that does not define a\n ``__iadd__()`` method, ``x.__add__(y)`` and ``y.__radd__(x)`` are\n considered, as with the evaluation of ``x + y``.\n\nobject.__neg__(self)\nobject.__pos__(self)\nobject.__abs__(self)\nobject.__invert__(self)\n\n Called to implement the unary arithmetic operations (``-``, ``+``,\n ``abs()`` and ``~``).\n\nobject.__complex__(self)\nobject.__int__(self)\nobject.__long__(self)\nobject.__float__(self)\n\n Called to implement the built-in functions ``complex()``,\n ``int()``, ``long()``, and ``float()``. Should return a value of\n the appropriate type.\n\nobject.__oct__(self)\nobject.__hex__(self)\n\n Called to implement the built-in functions ``oct()`` and ``hex()``.\n Should return a string value.\n\nobject.__index__(self)\n\n Called to implement ``operator.index()``. Also called whenever\n Python needs an integer object (such as in slicing). 
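For example, a minimal sketch (the ``Offset`` class is invented here) of an integer-like wrapper that can be used wherever Python expects an index:

    import operator

    class Offset(object):
        """Wraps an int so the wrapper can be used as a sequence index."""
        def __init__(self, value):
            self.value = value
        def __index__(self):
            return self.value
        def __int__(self):
            return self.value

    # 'abcdef'[Offset(2):Offset(5)]  -> 'cde'
    # operator.index(Offset(2))      -> 2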
Must return\n an integer (int or long).\n\n New in version 2.5.\n\nobject.__coerce__(self, other)\n\n Called to implement "mixed-mode" numeric arithmetic. Should either\n return a 2-tuple containing *self* and *other* converted to a\n common numeric type, or ``None`` if conversion is impossible. When\n the common type would be the type of ``other``, it is sufficient to\n return ``None``, since the interpreter will also ask the other\n object to attempt a coercion (but sometimes, if the implementation\n of the other type cannot be changed, it is useful to do the\n conversion to the other type here). A return value of\n ``NotImplemented`` is equivalent to returning ``None``.\n\n\nCoercion rules\n==============\n\nThis section used to document the rules for coercion. As the language\nhas evolved, the coercion rules have become hard to document\nprecisely; documenting what one version of one particular\nimplementation does is undesirable. Instead, here are some informal\nguidelines regarding coercion. In Python 3.0, coercion will not be\nsupported.\n\n* If the left operand of a % operator is a string or Unicode object,\n no coercion takes place and the string formatting operation is\n invoked instead.\n\n* It is no longer recommended to define a coercion operation. Mixed-\n mode operations on types that don\'t define coercion pass the\n original arguments to the operation.\n\n* New-style classes (those derived from ``object``) never invoke the\n ``__coerce__()`` method in response to a binary operator; the only\n time ``__coerce__()`` is invoked is when the built-in function\n ``coerce()`` is called.\n\n* For most intents and purposes, an operator that returns\n ``NotImplemented`` is treated the same as one that is not\n implemented at all.\n\n* Below, ``__op__()`` and ``__rop__()`` are used to signify the\n generic method names corresponding to an operator; ``__iop__()`` is\n used for the corresponding in-place operator. For example, for the\n operator \'``+``\', ``__add__()`` and ``__radd__()`` are used for the\n left and right variant of the binary operator, and ``__iadd__()``\n for the in-place variant.\n\n* For objects *x* and *y*, first ``x.__op__(y)`` is tried. If this is\n not implemented or returns ``NotImplemented``, ``y.__rop__(x)`` is\n tried. If this is also not implemented or returns\n ``NotImplemented``, a ``TypeError`` exception is raised. But see\n the following exception:\n\n* Exception to the previous item: if the left operand is an instance\n of a built-in type or a new-style class, and the right operand is an\n instance of a proper subclass of that type or class and overrides\n the base\'s ``__rop__()`` method, the right operand\'s ``__rop__()``\n method is tried *before* the left operand\'s ``__op__()`` method.\n\n This is done so that a subclass can completely override binary\n operators. Otherwise, the left operand\'s ``__op__()`` method would\n always accept the right operand: when an instance of a given class\n is expected, an instance of a subclass of that class is always\n acceptable.\n\n* When either operand type defines a coercion, this coercion is called\n before that type\'s ``__op__()`` or ``__rop__()`` method is called,\n but no sooner. If the coercion returns an object of a different\n type for the operand whose coercion is invoked, part of the process\n is redone using the new object.\n\n* When an in-place operator (like \'``+=``\') is used, if the left\n operand implements ``__iop__()``, it is invoked without any\n coercion. 
When the operation falls back to ``__op__()`` and/or\n ``__rop__()``, the normal coercion rules apply.\n\n* In ``x + y``, if *x* is a sequence that implements sequence\n concatenation, sequence concatenation is invoked.\n\n* In ``x * y``, if one operator is a sequence that implements sequence\n repetition, and the other is an integer (``int`` or ``long``),\n sequence repetition is invoked.\n\n* Rich comparisons (implemented by methods ``__eq__()`` and so on)\n never use coercion. Three-way comparison (implemented by\n ``__cmp__()``) does use coercion under the same conditions as other\n binary operations use it.\n\n* In the current implementation, the built-in numeric types ``int``,\n ``long``, ``float``, and ``complex`` do not use coercion. All these\n types implement a ``__coerce__()`` method, for use by the built-in\n ``coerce()`` function.\n\n Changed in version 2.7.\n\n\nWith Statement Context Managers\n===============================\n\nNew in version 2.5.\n\nA *context manager* is an object that defines the runtime context to\nbe established when executing a ``with`` statement. The context\nmanager handles the entry into, and the exit from, the desired runtime\ncontext for the execution of the block of code. Context managers are\nnormally invoked using the ``with`` statement (described in section\n*The with statement*), but can also be used by directly invoking their\nmethods.\n\nTypical uses of context managers include saving and restoring various\nkinds of global state, locking and unlocking resources, closing opened\nfiles, etc.\n\nFor more information on context managers, see *Context Manager Types*.\n\nobject.__enter__(self)\n\n Enter the runtime context related to this object. The ``with``\n statement will bind this method\'s return value to the target(s)\n specified in the ``as`` clause of the statement, if any.\n\nobject.__exit__(self, exc_type, exc_value, traceback)\n\n Exit the runtime context related to this object. The parameters\n describe the exception that caused the context to be exited. If the\n context was exited without an exception, all three arguments will\n be ``None``.\n\n If an exception is supplied, and the method wishes to suppress the\n exception (i.e., prevent it from being propagated), it should\n return a true value. Otherwise, the exception will be processed\n normally upon exit from this method.\n\n Note that ``__exit__()`` methods should not reraise the passed-in\n exception; this is the caller\'s responsibility.\n\nSee also:\n\n **PEP 0343** - The "with" statement\n The specification, background, and examples for the Python\n ``with`` statement.\n\n\nSpecial method lookup for old-style classes\n===========================================\n\nFor old-style classes, special methods are always looked up in exactly\nthe same way as any other method or attribute. This is the case\nregardless of whether the method is being looked up explicitly as in\n``x.__getitem__(i)`` or implicitly as in ``x[i]``.\n\nThis behaviour means that special methods may exhibit different\nbehaviour for different instances of a single old-style class if the\nappropriate special attributes are set differently:\n\n >>> class C:\n ... 
pass\n ...\n >>> c1 = C()\n >>> c2 = C()\n >>> c1.__len__ = lambda: 5\n >>> c2.__len__ = lambda: 9\n >>> len(c1)\n 5\n >>> len(c2)\n 9\n\n\nSpecial method lookup for new-style classes\n===========================================\n\nFor new-style classes, implicit invocations of special methods are\nonly guaranteed to work correctly if defined on an object\'s type, not\nin the object\'s instance dictionary. That behaviour is the reason why\nthe following code raises an exception (unlike the equivalent example\nwith old-style classes):\n\n >>> class C(object):\n ... pass\n ...\n >>> c = C()\n >>> c.__len__ = lambda: 5\n >>> len(c)\n Traceback (most recent call last):\n File "", line 1, in \n TypeError: object of type \'C\' has no len()\n\nThe rationale behind this behaviour lies with a number of special\nmethods such as ``__hash__()`` and ``__repr__()`` that are implemented\nby all objects, including type objects. If the implicit lookup of\nthese methods used the conventional lookup process, they would fail\nwhen invoked on the type object itself:\n\n >>> 1 .__hash__() == hash(1)\n True\n >>> int.__hash__() == hash(int)\n Traceback (most recent call last):\n File "", line 1, in \n TypeError: descriptor \'__hash__\' of \'int\' object needs an argument\n\nIncorrectly attempting to invoke an unbound method of a class in this\nway is sometimes referred to as \'metaclass confusion\', and is avoided\nby bypassing the instance when looking up special methods:\n\n >>> type(1).__hash__(1) == hash(1)\n True\n >>> type(int).__hash__(int) == hash(int)\n True\n\nIn addition to bypassing any instance attributes in the interest of\ncorrectness, implicit special method lookup generally also bypasses\nthe ``__getattribute__()`` method even of the object\'s metaclass:\n\n >>> class Meta(type):\n ... def __getattribute__(*args):\n ... print "Metaclass getattribute invoked"\n ... return type.__getattribute__(*args)\n ...\n >>> class C(object):\n ... __metaclass__ = Meta\n ... def __len__(self):\n ... return 10\n ... def __getattribute__(*args):\n ... print "Class getattribute invoked"\n ... return object.__getattribute__(*args)\n ...\n >>> c = C()\n >>> c.__len__() # Explicit lookup via instance\n Class getattribute invoked\n 10\n >>> type(c).__len__(c) # Explicit lookup via type\n Metaclass getattribute invoked\n 10\n >>> len(c) # Implicit lookup\n 10\n\nBypassing the ``__getattribute__()`` machinery in this fashion\nprovides significant scope for speed optimisations within the\ninterpreter, at the cost of some flexibility in the handling of\nspecial methods (the special method *must* be set on the class object\nitself in order to be consistently invoked by the interpreter).\n\n-[ Footnotes ]-\n\n[1] It *is* possible in some cases to change an object\'s type, under\n certain controlled conditions. It generally isn\'t a good idea\n though, since it can lead to some very strange behaviour if it is\n handled incorrectly.\n\n[2] For operands of the same type, it is assumed that if the non-\n reflected method (such as ``__add__()``) fails the operation is\n not supported, which is why the reflected method is not called.\n', + 'specialnames': u'\nSpecial method names\n********************\n\nA class can implement certain operations that are invoked by special\nsyntax (such as arithmetic operations or subscripting and slicing) by\ndefining methods with special names. This is Python\'s approach to\n*operator overloading*, allowing classes to define their own behavior\nwith respect to language operators. 
For instance, if a class defines\na method named ``__getitem__()``, and ``x`` is an instance of this\nclass, then ``x[i]`` is roughly equivalent to ``x.__getitem__(i)`` for\nold-style classes and ``type(x).__getitem__(x, i)`` for new-style\nclasses. Except where mentioned, attempts to execute an operation\nraise an exception when no appropriate method is defined (typically\n``AttributeError`` or ``TypeError``).\n\nWhen implementing a class that emulates any built-in type, it is\nimportant that the emulation only be implemented to the degree that it\nmakes sense for the object being modelled. For example, some\nsequences may work well with retrieval of individual elements, but\nextracting a slice may not make sense. (One example of this is the\n``NodeList`` interface in the W3C\'s Document Object Model.)\n\n\nBasic customization\n===================\n\nobject.__new__(cls[, ...])\n\n Called to create a new instance of class *cls*. ``__new__()`` is a\n static method (special-cased so you need not declare it as such)\n that takes the class of which an instance was requested as its\n first argument. The remaining arguments are those passed to the\n object constructor expression (the call to the class). The return\n value of ``__new__()`` should be the new object instance (usually\n an instance of *cls*).\n\n Typical implementations create a new instance of the class by\n invoking the superclass\'s ``__new__()`` method using\n ``super(currentclass, cls).__new__(cls[, ...])`` with appropriate\n arguments and then modifying the newly-created instance as\n necessary before returning it.\n\n If ``__new__()`` returns an instance of *cls*, then the new\n instance\'s ``__init__()`` method will be invoked like\n ``__init__(self[, ...])``, where *self* is the new instance and the\n remaining arguments are the same as were passed to ``__new__()``.\n\n If ``__new__()`` does not return an instance of *cls*, then the new\n instance\'s ``__init__()`` method will not be invoked.\n\n ``__new__()`` is intended mainly to allow subclasses of immutable\n types (like int, str, or tuple) to customize instance creation. It\n is also commonly overridden in custom metaclasses in order to\n customize class creation.\n\nobject.__init__(self[, ...])\n\n Called when the instance is created. The arguments are those\n passed to the class constructor expression. If a base class has an\n ``__init__()`` method, the derived class\'s ``__init__()`` method,\n if any, must explicitly call it to ensure proper initialization of\n the base class part of the instance; for example:\n ``BaseClass.__init__(self, [args...])``. As a special constraint\n on constructors, no value may be returned; doing so will cause a\n ``TypeError`` to be raised at runtime.\n\nobject.__del__(self)\n\n Called when the instance is about to be destroyed. This is also\n called a destructor. If a base class has a ``__del__()`` method,\n the derived class\'s ``__del__()`` method, if any, must explicitly\n call it to ensure proper deletion of the base class part of the\n instance. Note that it is possible (though not recommended!) for\n the ``__del__()`` method to postpone destruction of the instance by\n creating a new reference to it. It may then be called at a later\n time when this new reference is deleted. 
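A small sketch of that (not recommended) postponement, with the names ``_graveyard`` and ``Phoenix`` invented for the example:

    _graveyard = []

    class Phoenix(object):
        def __del__(self):
            # Postpone destruction by creating a new reference to self;
            # shown only to illustrate the behaviour described above.
            _graveyard.append(self)

    p = Phoenix()
    del p    # __del__ runs, but the object survives inside _graveyard
    # The object is only truly destroyed once _graveyard drops it as well.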
It is not guaranteed that\n ``__del__()`` methods are called for objects that still exist when\n the interpreter exits.\n\n Note: ``del x`` doesn\'t directly call ``x.__del__()`` --- the former\n decrements the reference count for ``x`` by one, and the latter\n is only called when ``x``\'s reference count reaches zero. Some\n common situations that may prevent the reference count of an\n object from going to zero include: circular references between\n objects (e.g., a doubly-linked list or a tree data structure with\n parent and child pointers); a reference to the object on the\n stack frame of a function that caught an exception (the traceback\n stored in ``sys.exc_traceback`` keeps the stack frame alive); or\n a reference to the object on the stack frame that raised an\n unhandled exception in interactive mode (the traceback stored in\n ``sys.last_traceback`` keeps the stack frame alive). The first\n situation can only be remedied by explicitly breaking the cycles;\n the latter two situations can be resolved by storing ``None`` in\n ``sys.exc_traceback`` or ``sys.last_traceback``. Circular\n references which are garbage are detected when the option cycle\n detector is enabled (it\'s on by default), but can only be cleaned\n up if there are no Python-level ``__del__()`` methods involved.\n Refer to the documentation for the ``gc`` module for more\n information about how ``__del__()`` methods are handled by the\n cycle detector, particularly the description of the ``garbage``\n value.\n\n Warning: Due to the precarious circumstances under which ``__del__()``\n methods are invoked, exceptions that occur during their execution\n are ignored, and a warning is printed to ``sys.stderr`` instead.\n Also, when ``__del__()`` is invoked in response to a module being\n deleted (e.g., when execution of the program is done), other\n globals referenced by the ``__del__()`` method may already have\n been deleted or in the process of being torn down (e.g. the\n import machinery shutting down). For this reason, ``__del__()``\n methods should do the absolute minimum needed to maintain\n external invariants. Starting with version 1.5, Python\n guarantees that globals whose name begins with a single\n underscore are deleted from their module before other globals are\n deleted; if no other references to such globals exist, this may\n help in assuring that imported modules are still available at the\n time when the ``__del__()`` method is called.\n\nobject.__repr__(self)\n\n Called by the ``repr()`` built-in function and by string\n conversions (reverse quotes) to compute the "official" string\n representation of an object. If at all possible, this should look\n like a valid Python expression that could be used to recreate an\n object with the same value (given an appropriate environment). If\n this is not possible, a string of the form ``<...some useful\n description...>`` should be returned. The return value must be a\n string object. If a class defines ``__repr__()`` but not\n ``__str__()``, then ``__repr__()`` is also used when an "informal"\n string representation of instances of that class is required.\n\n This is typically used for debugging, so it is important that the\n representation is information-rich and unambiguous.\n\nobject.__str__(self)\n\n Called by the ``str()`` built-in function and by the ``print``\n statement to compute the "informal" string representation of an\n object. 
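As an illustrative sketch (the ``Point`` class is invented for the example) of providing both representations:

    class Point(object):
        def __init__(self, x, y):
            self.x, self.y = x, y
        def __repr__(self):
            # Unambiguous, and valid Python for recreating the object.
            return 'Point(%r, %r)' % (self.x, self.y)
        def __str__(self):
            # Friendlier form used by str() and the print statement.
            return '(%s, %s)' % (self.x, self.y)

    # repr(Point(1, 2)) == 'Point(1, 2)'
    # str(Point(1, 2))  == '(1, 2)'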
This differs from ``__repr__()`` in that it does not have\n to be a valid Python expression: a more convenient or concise\n representation may be used instead. The return value must be a\n string object.\n\nobject.__lt__(self, other)\nobject.__le__(self, other)\nobject.__eq__(self, other)\nobject.__ne__(self, other)\nobject.__gt__(self, other)\nobject.__ge__(self, other)\n\n New in version 2.1.\n\n These are the so-called "rich comparison" methods, and are called\n for comparison operators in preference to ``__cmp__()`` below. The\n correspondence between operator symbols and method names is as\n follows: ``xy`` call ``x.__ne__(y)``, ``x>y`` calls ``x.__gt__(y)``, and\n ``x>=y`` calls ``x.__ge__(y)``.\n\n A rich comparison method may return the singleton\n ``NotImplemented`` if it does not implement the operation for a\n given pair of arguments. By convention, ``False`` and ``True`` are\n returned for a successful comparison. However, these methods can\n return any value, so if the comparison operator is used in a\n Boolean context (e.g., in the condition of an ``if`` statement),\n Python will call ``bool()`` on the value to determine if the result\n is true or false.\n\n There are no implied relationships among the comparison operators.\n The truth of ``x==y`` does not imply that ``x!=y`` is false.\n Accordingly, when defining ``__eq__()``, one should also define\n ``__ne__()`` so that the operators will behave as expected. See\n the paragraph on ``__hash__()`` for some important notes on\n creating *hashable* objects which support custom comparison\n operations and are usable as dictionary keys.\n\n There are no swapped-argument versions of these methods (to be used\n when the left argument does not support the operation but the right\n argument does); rather, ``__lt__()`` and ``__gt__()`` are each\n other\'s reflection, ``__le__()`` and ``__ge__()`` are each other\'s\n reflection, and ``__eq__()`` and ``__ne__()`` are their own\n reflection.\n\n Arguments to rich comparison methods are never coerced.\n\n To automatically generate ordering operations from a single root\n operation, see ``functools.total_ordering()``.\n\nobject.__cmp__(self, other)\n\n Called by comparison operations if rich comparison (see above) is\n not defined. Should return a negative integer if ``self < other``,\n zero if ``self == other``, a positive integer if ``self > other``.\n If no ``__cmp__()``, ``__eq__()`` or ``__ne__()`` operation is\n defined, class instances are compared by object identity\n ("address"). See also the description of ``__hash__()`` for some\n important notes on creating *hashable* objects which support custom\n comparison operations and are usable as dictionary keys. (Note: the\n restriction that exceptions are not propagated by ``__cmp__()`` has\n been removed since Python 1.5.)\n\nobject.__rcmp__(self, other)\n\n Changed in version 2.1: No longer supported.\n\nobject.__hash__(self)\n\n Called by built-in function ``hash()`` and for operations on\n members of hashed collections including ``set``, ``frozenset``, and\n ``dict``. ``__hash__()`` should return an integer. The only\n required property is that objects which compare equal have the same\n hash value; it is advised to somehow mix together (e.g. 
using\n exclusive or) the hash values for the components of the object that\n also play a part in comparison of objects.\n\n If a class does not define a ``__cmp__()`` or ``__eq__()`` method\n it should not define a ``__hash__()`` operation either; if it\n defines ``__cmp__()`` or ``__eq__()`` but not ``__hash__()``, its\n instances will not be usable in hashed collections. If a class\n defines mutable objects and implements a ``__cmp__()`` or\n ``__eq__()`` method, it should not implement ``__hash__()``, since\n hashable collection implementations require that a object\'s hash\n value is immutable (if the object\'s hash value changes, it will be\n in the wrong hash bucket).\n\n User-defined classes have ``__cmp__()`` and ``__hash__()`` methods\n by default; with them, all objects compare unequal (except with\n themselves) and ``x.__hash__()`` returns ``id(x)``.\n\n Classes which inherit a ``__hash__()`` method from a parent class\n but change the meaning of ``__cmp__()`` or ``__eq__()`` such that\n the hash value returned is no longer appropriate (e.g. by switching\n to a value-based concept of equality instead of the default\n identity based equality) can explicitly flag themselves as being\n unhashable by setting ``__hash__ = None`` in the class definition.\n Doing so means that not only will instances of the class raise an\n appropriate ``TypeError`` when a program attempts to retrieve their\n hash value, but they will also be correctly identified as\n unhashable when checking ``isinstance(obj, collections.Hashable)``\n (unlike classes which define their own ``__hash__()`` to explicitly\n raise ``TypeError``).\n\n Changed in version 2.5: ``__hash__()`` may now also return a long\n integer object; the 32-bit integer is then derived from the hash of\n that object.\n\n Changed in version 2.6: ``__hash__`` may now be set to ``None`` to\n explicitly flag instances of a class as unhashable.\n\nobject.__nonzero__(self)\n\n Called to implement truth value testing and the built-in operation\n ``bool()``; should return ``False`` or ``True``, or their integer\n equivalents ``0`` or ``1``. When this method is not defined,\n ``__len__()`` is called, if it is defined, and the object is\n considered true if its result is nonzero. If a class defines\n neither ``__len__()`` nor ``__nonzero__()``, all its instances are\n considered true.\n\nobject.__unicode__(self)\n\n Called to implement ``unicode()`` built-in; should return a Unicode\n object. When this method is not defined, string conversion is\n attempted, and the result of string conversion is converted to\n Unicode using the system default encoding.\n\n\nCustomizing attribute access\n============================\n\nThe following methods can be defined to customize the meaning of\nattribute access (use of, assignment to, or deletion of ``x.name``)\nfor class instances.\n\nobject.__getattr__(self, name)\n\n Called when an attribute lookup has not found the attribute in the\n usual places (i.e. it is not an instance attribute nor is it found\n in the class tree for ``self``). ``name`` is the attribute name.\n This method should return the (computed) attribute value or raise\n an ``AttributeError`` exception.\n\n Note that if the attribute is found through the normal mechanism,\n ``__getattr__()`` is not called. (This is an intentional asymmetry\n between ``__getattr__()`` and ``__setattr__()``.) This is done both\n for efficiency reasons and because otherwise ``__getattr__()``\n would have no way to access other attributes of the instance. 
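A minimal delegation sketch (the ``Proxy`` class is invented for the example): attribute lookups that fail on the proxy fall through to a wrapped object, while attributes found normally never reach ``__getattr__()``:

    class Proxy(object):
        """Failed attribute lookups are forwarded to a wrapped target."""
        def __init__(self, target):
            self._target = target    # found normally, so __getattr__ is not involved
        def __getattr__(self, name):
            # Reached only when ``name`` is not found on the Proxy instance itself.
            return getattr(self._target, name)

    # Proxy([1, 2, 3]).append  -> the wrapped list's bound ``append`` method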
Note\n that at least for instance variables, you can fake total control by\n not inserting any values in the instance attribute dictionary (but\n instead inserting them in another object). See the\n ``__getattribute__()`` method below for a way to actually get total\n control in new-style classes.\n\nobject.__setattr__(self, name, value)\n\n Called when an attribute assignment is attempted. This is called\n instead of the normal mechanism (i.e. store the value in the\n instance dictionary). *name* is the attribute name, *value* is the\n value to be assigned to it.\n\n If ``__setattr__()`` wants to assign to an instance attribute, it\n should not simply execute ``self.name = value`` --- this would\n cause a recursive call to itself. Instead, it should insert the\n value in the dictionary of instance attributes, e.g.,\n ``self.__dict__[name] = value``. For new-style classes, rather\n than accessing the instance dictionary, it should call the base\n class method with the same name, for example,\n ``object.__setattr__(self, name, value)``.\n\nobject.__delattr__(self, name)\n\n Like ``__setattr__()`` but for attribute deletion instead of\n assignment. This should only be implemented if ``del obj.name`` is\n meaningful for the object.\n\n\nMore attribute access for new-style classes\n-------------------------------------------\n\nThe following methods only apply to new-style classes.\n\nobject.__getattribute__(self, name)\n\n Called unconditionally to implement attribute accesses for\n instances of the class. If the class also defines\n ``__getattr__()``, the latter will not be called unless\n ``__getattribute__()`` either calls it explicitly or raises an\n ``AttributeError``. This method should return the (computed)\n attribute value or raise an ``AttributeError`` exception. In order\n to avoid infinite recursion in this method, its implementation\n should always call the base class method with the same name to\n access any attributes it needs, for example,\n ``object.__getattribute__(self, name)``.\n\n Note: This method may still be bypassed when looking up special methods\n as the result of implicit invocation via language syntax or\n built-in functions. See *Special method lookup for new-style\n classes*.\n\n\nImplementing Descriptors\n------------------------\n\nThe following methods only apply when an instance of the class\ncontaining the method (a so-called *descriptor* class) appears in an\n*owner* class (the descriptor must be in either the owner\'s class\ndictionary or in the class dictionary for one of its parents). In the\nexamples below, "the attribute" refers to the attribute whose name is\nthe key of the property in the owner class\' ``__dict__``.\n\nobject.__get__(self, instance, owner)\n\n Called to get the attribute of the owner class (class attribute\n access) or of an instance of that class (instance attribute\n access). *owner* is always the owner class, while *instance* is the\n instance that the attribute was accessed through, or ``None`` when\n the attribute is accessed through the *owner*. 
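For example, a sketch of a simple non-data descriptor (the names ``Doubled`` and ``Box`` are invented) that computes its value from the instance and returns itself when accessed on the owner class:

    class Doubled(object):
        """Non-data descriptor: only __get__ is defined."""
        def __get__(self, instance, owner):
            if instance is None:
                return self            # accessed on the owner class itself
            return instance.value * 2

    class Box(object):
        twice = Doubled()
        def __init__(self, value):
            self.value = value

    # Box(21).twice  -> 42
    # Box.twice      -> the Doubled descriptor object (instance is None)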
This method should\n return the (computed) attribute value or raise an\n ``AttributeError`` exception.\n\nobject.__set__(self, instance, value)\n\n Called to set the attribute on an instance *instance* of the owner\n class to a new value, *value*.\n\nobject.__delete__(self, instance)\n\n Called to delete the attribute on an instance *instance* of the\n owner class.\n\n\nInvoking Descriptors\n--------------------\n\nIn general, a descriptor is an object attribute with "binding\nbehavior", one whose attribute access has been overridden by methods\nin the descriptor protocol: ``__get__()``, ``__set__()``, and\n``__delete__()``. If any of those methods are defined for an object,\nit is said to be a descriptor.\n\nThe default behavior for attribute access is to get, set, or delete\nthe attribute from an object\'s dictionary. For instance, ``a.x`` has a\nlookup chain starting with ``a.__dict__[\'x\']``, then\n``type(a).__dict__[\'x\']``, and continuing through the base classes of\n``type(a)`` excluding metaclasses.\n\nHowever, if the looked-up value is an object defining one of the\ndescriptor methods, then Python may override the default behavior and\ninvoke the descriptor method instead. Where this occurs in the\nprecedence chain depends on which descriptor methods were defined and\nhow they were called. Note that descriptors are only invoked for new\nstyle objects or classes (ones that subclass ``object()`` or\n``type()``).\n\nThe starting point for descriptor invocation is a binding, ``a.x``.\nHow the arguments are assembled depends on ``a``:\n\nDirect Call\n The simplest and least common call is when user code directly\n invokes a descriptor method: ``x.__get__(a)``.\n\nInstance Binding\n If binding to a new-style object instance, ``a.x`` is transformed\n into the call: ``type(a).__dict__[\'x\'].__get__(a, type(a))``.\n\nClass Binding\n If binding to a new-style class, ``A.x`` is transformed into the\n call: ``A.__dict__[\'x\'].__get__(None, A)``.\n\nSuper Binding\n If ``a`` is an instance of ``super``, then the binding ``super(B,\n obj).m()`` searches ``obj.__class__.__mro__`` for the base class\n ``A`` immediately preceding ``B`` and then invokes the descriptor\n with the call: ``A.__dict__[\'m\'].__get__(obj, obj.__class__)``.\n\nFor instance bindings, the precedence of descriptor invocation depends\non the which descriptor methods are defined. A descriptor can define\nany combination of ``__get__()``, ``__set__()`` and ``__delete__()``.\nIf it does not define ``__get__()``, then accessing the attribute will\nreturn the descriptor object itself unless there is a value in the\nobject\'s instance dictionary. If the descriptor defines ``__set__()``\nand/or ``__delete__()``, it is a data descriptor; if it defines\nneither, it is a non-data descriptor. Normally, data descriptors\ndefine both ``__get__()`` and ``__set__()``, while non-data\ndescriptors have just the ``__get__()`` method. Data descriptors with\n``__set__()`` and ``__get__()`` defined always override a redefinition\nin an instance dictionary. In contrast, non-data descriptors can be\noverridden by instances.\n\nPython methods (including ``staticmethod()`` and ``classmethod()``)\nare implemented as non-data descriptors. Accordingly, instances can\nredefine and override methods. 
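A short sketch of that per-instance override (the ``Greeter`` class is invented for the example); because plain functions are non-data descriptors, an entry in the instance dictionary takes precedence:

    class Greeter(object):
        def greet(self):
            return 'hello'

    g1 = Greeter()
    g2 = Greeter()
    g2.greet = lambda: 'goodbye'    # shadows the class-level method for g2 only

    # g1.greet() == 'hello';  g2.greet() == 'goodbye'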
This allows individual instances to\nacquire behaviors that differ from other instances of the same class.\n\nThe ``property()`` function is implemented as a data descriptor.\nAccordingly, instances cannot override the behavior of a property.\n\n\n__slots__\n---------\n\nBy default, instances of both old and new-style classes have a\ndictionary for attribute storage. This wastes space for objects\nhaving very few instance variables. The space consumption can become\nacute when creating large numbers of instances.\n\nThe default can be overridden by defining *__slots__* in a new-style\nclass definition. The *__slots__* declaration takes a sequence of\ninstance variables and reserves just enough space in each instance to\nhold a value for each variable. Space is saved because *__dict__* is\nnot created for each instance.\n\n__slots__\n\n This class variable can be assigned a string, iterable, or sequence\n of strings with variable names used by instances. If defined in a\n new-style class, *__slots__* reserves space for the declared\n variables and prevents the automatic creation of *__dict__* and\n *__weakref__* for each instance.\n\n New in version 2.2.\n\nNotes on using *__slots__*\n\n* When inheriting from a class without *__slots__*, the *__dict__*\n attribute of that class will always be accessible, so a *__slots__*\n definition in the subclass is meaningless.\n\n* Without a *__dict__* variable, instances cannot be assigned new\n variables not listed in the *__slots__* definition. Attempts to\n assign to an unlisted variable name raises ``AttributeError``. If\n dynamic assignment of new variables is desired, then add\n ``\'__dict__\'`` to the sequence of strings in the *__slots__*\n declaration.\n\n Changed in version 2.3: Previously, adding ``\'__dict__\'`` to the\n *__slots__* declaration would not enable the assignment of new\n attributes not specifically listed in the sequence of instance\n variable names.\n\n* Without a *__weakref__* variable for each instance, classes defining\n *__slots__* do not support weak references to its instances. If weak\n reference support is needed, then add ``\'__weakref__\'`` to the\n sequence of strings in the *__slots__* declaration.\n\n Changed in version 2.3: Previously, adding ``\'__weakref__\'`` to the\n *__slots__* declaration would not enable support for weak\n references.\n\n* *__slots__* are implemented at the class level by creating\n descriptors (*Implementing Descriptors*) for each variable name. As\n a result, class attributes cannot be used to set default values for\n instance variables defined by *__slots__*; otherwise, the class\n attribute would overwrite the descriptor assignment.\n\n* The action of a *__slots__* declaration is limited to the class\n where it is defined. As a result, subclasses will have a *__dict__*\n unless they also define *__slots__* (which must only contain names\n of any *additional* slots).\n\n* If a class defines a slot also defined in a base class, the instance\n variable defined by the base class slot is inaccessible (except by\n retrieving its descriptor directly from the base class). This\n renders the meaning of the program undefined. In the future, a\n check may be added to prevent this.\n\n* Nonempty *__slots__* does not work for classes derived from\n "variable-length" built-in types such as ``long``, ``str`` and\n ``tuple``.\n\n* Any non-string iterable may be assigned to *__slots__*. 
Mappings may\n also be used; however, in the future, special meaning may be\n assigned to the values corresponding to each key.\n\n* *__class__* assignment works only if both classes have the same\n *__slots__*.\n\n Changed in version 2.6: Previously, *__class__* assignment raised an\n error if either new or old class had *__slots__*.\n\n\nCustomizing class creation\n==========================\n\nBy default, new-style classes are constructed using ``type()``. A\nclass definition is read into a separate namespace and the value of\nclass name is bound to the result of ``type(name, bases, dict)``.\n\nWhen the class definition is read, if *__metaclass__* is defined then\nthe callable assigned to it will be called instead of ``type()``. This\nallows classes or functions to be written which monitor or alter the\nclass creation process:\n\n* Modifying the class dictionary prior to the class being created.\n\n* Returning an instance of another class -- essentially performing the\n role of a factory function.\n\nThese steps will have to be performed in the metaclass\'s ``__new__()``\nmethod -- ``type.__new__()`` can then be called from this method to\ncreate a class with different properties. This example adds a new\nelement to the class dictionary before creating the class:\n\n class metacls(type):\n def __new__(mcs, name, bases, dict):\n dict[\'foo\'] = \'metacls was here\'\n return type.__new__(mcs, name, bases, dict)\n\nYou can of course also override other class methods (or add new\nmethods); for example defining a custom ``__call__()`` method in the\nmetaclass allows custom behavior when the class is called, e.g. not\nalways creating a new instance.\n\n__metaclass__\n\n This variable can be any callable accepting arguments for ``name``,\n ``bases``, and ``dict``. Upon class creation, the callable is used\n instead of the built-in ``type()``.\n\n New in version 2.2.\n\nThe appropriate metaclass is determined by the following precedence\nrules:\n\n* If ``dict[\'__metaclass__\']`` exists, it is used.\n\n* Otherwise, if there is at least one base class, its metaclass is\n used (this looks for a *__class__* attribute first and if not found,\n uses its type).\n\n* Otherwise, if a global variable named __metaclass__ exists, it is\n used.\n\n* Otherwise, the old-style, classic metaclass (types.ClassType) is\n used.\n\nThe potential uses for metaclasses are boundless. Some ideas that have\nbeen explored including logging, interface checking, automatic\ndelegation, automatic property creation, proxies, frameworks, and\nautomatic resource locking/synchronization.\n\n\nCustomizing instance and subclass checks\n========================================\n\nNew in version 2.6.\n\nThe following methods are used to override the default behavior of the\n``isinstance()`` and ``issubclass()`` built-in functions.\n\nIn particular, the metaclass ``abc.ABCMeta`` implements these methods\nin order to allow the addition of Abstract Base Classes (ABCs) as\n"virtual base classes" to any class or type (including built-in\ntypes), including other ABCs.\n\nclass.__instancecheck__(self, instance)\n\n Return true if *instance* should be considered a (direct or\n indirect) instance of *class*. If defined, called to implement\n ``isinstance(instance, class)``.\n\nclass.__subclasscheck__(self, subclass)\n\n Return true if *subclass* should be considered a (direct or\n indirect) subclass of *class*. 
If defined, called to implement\n ``issubclass(subclass, class)``.\n\nNote that these methods are looked up on the type (metaclass) of a\nclass. They cannot be defined as class methods in the actual class.\nThis is consistent with the lookup of special methods that are called\non instances, only in this case the instance is itself a class.\n\nSee also:\n\n **PEP 3119** - Introducing Abstract Base Classes\n Includes the specification for customizing ``isinstance()`` and\n ``issubclass()`` behavior through ``__instancecheck__()`` and\n ``__subclasscheck__()``, with motivation for this functionality\n in the context of adding Abstract Base Classes (see the ``abc``\n module) to the language.\n\n\nEmulating callable objects\n==========================\n\nobject.__call__(self[, args...])\n\n Called when the instance is "called" as a function; if this method\n is defined, ``x(arg1, arg2, ...)`` is a shorthand for\n ``x.__call__(arg1, arg2, ...)``.\n\n\nEmulating container types\n=========================\n\nThe following methods can be defined to implement container objects.\nContainers usually are sequences (such as lists or tuples) or mappings\n(like dictionaries), but can represent other containers as well. The\nfirst set of methods is used either to emulate a sequence or to\nemulate a mapping; the difference is that for a sequence, the\nallowable keys should be the integers *k* for which ``0 <= k < N``\nwhere *N* is the length of the sequence, or slice objects, which\ndefine a range of items. (For backwards compatibility, the method\n``__getslice__()`` (see below) can also be defined to handle simple,\nbut not extended slices.) It is also recommended that mappings provide\nthe methods ``keys()``, ``values()``, ``items()``, ``has_key()``,\n``get()``, ``clear()``, ``setdefault()``, ``iterkeys()``,\n``itervalues()``, ``iteritems()``, ``pop()``, ``popitem()``,\n``copy()``, and ``update()`` behaving similar to those for Python\'s\nstandard dictionary objects. The ``UserDict`` module provides a\n``DictMixin`` class to help create those methods from a base set of\n``__getitem__()``, ``__setitem__()``, ``__delitem__()``, and\n``keys()``. Mutable sequences should provide methods ``append()``,\n``count()``, ``index()``, ``extend()``, ``insert()``, ``pop()``,\n``remove()``, ``reverse()`` and ``sort()``, like Python standard list\nobjects. Finally, sequence types should implement addition (meaning\nconcatenation) and multiplication (meaning repetition) by defining the\nmethods ``__add__()``, ``__radd__()``, ``__iadd__()``, ``__mul__()``,\n``__rmul__()`` and ``__imul__()`` described below; they should not\ndefine ``__coerce__()`` or other numerical operators. It is\nrecommended that both mappings and sequences implement the\n``__contains__()`` method to allow efficient use of the ``in``\noperator; for mappings, ``in`` should be equivalent of ``has_key()``;\nfor sequences, it should search through the values. It is further\nrecommended that both mappings and sequences implement the\n``__iter__()`` method to allow efficient iteration through the\ncontainer; for mappings, ``__iter__()`` should be the same as\n``iterkeys()``; for sequences, it should iterate through the values.\n\nobject.__len__(self)\n\n Called to implement the built-in function ``len()``. Should return\n the length of the object, an integer ``>=`` 0. 
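A brief sketch (the ``Bag`` class is invented for the example) of a container whose length also drives its truth value:

    class Bag(object):
        def __init__(self):
            self.items = []
        def __len__(self):
            return len(self.items)

    b = Bag()
    # len(b) == 0, and since Bag defines no __nonzero__(),
    # bool(b) is False until something is added to b.items.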
Also, an object\n that doesn\'t define a ``__nonzero__()`` method and whose\n ``__len__()`` method returns zero is considered to be false in a\n Boolean context.\n\nobject.__getitem__(self, key)\n\n Called to implement evaluation of ``self[key]``. For sequence\n types, the accepted keys should be integers and slice objects.\n Note that the special interpretation of negative indexes (if the\n class wishes to emulate a sequence type) is up to the\n ``__getitem__()`` method. If *key* is of an inappropriate type,\n ``TypeError`` may be raised; if of a value outside the set of\n indexes for the sequence (after any special interpretation of\n negative values), ``IndexError`` should be raised. For mapping\n types, if *key* is missing (not in the container), ``KeyError``\n should be raised.\n\n Note: ``for`` loops expect that an ``IndexError`` will be raised for\n illegal indexes to allow proper detection of the end of the\n sequence.\n\nobject.__setitem__(self, key, value)\n\n Called to implement assignment to ``self[key]``. Same note as for\n ``__getitem__()``. This should only be implemented for mappings if\n the objects support changes to the values for keys, or if new keys\n can be added, or for sequences if elements can be replaced. The\n same exceptions should be raised for improper *key* values as for\n the ``__getitem__()`` method.\n\nobject.__delitem__(self, key)\n\n Called to implement deletion of ``self[key]``. Same note as for\n ``__getitem__()``. This should only be implemented for mappings if\n the objects support removal of keys, or for sequences if elements\n can be removed from the sequence. The same exceptions should be\n raised for improper *key* values as for the ``__getitem__()``\n method.\n\nobject.__iter__(self)\n\n This method is called when an iterator is required for a container.\n This method should return a new iterator object that can iterate\n over all the objects in the container. For mappings, it should\n iterate over the keys of the container, and should also be made\n available as the method ``iterkeys()``.\n\n Iterator objects also need to implement this method; they are\n required to return themselves. For more information on iterator\n objects, see *Iterator Types*.\n\nobject.__reversed__(self)\n\n Called (if present) by the ``reversed()`` built-in to implement\n reverse iteration. It should return a new iterator object that\n iterates over all the objects in the container in reverse order.\n\n If the ``__reversed__()`` method is not provided, the\n ``reversed()`` built-in will fall back to using the sequence\n protocol (``__len__()`` and ``__getitem__()``). Objects that\n support the sequence protocol should only provide\n ``__reversed__()`` if they can provide an implementation that is\n more efficient than the one provided by ``reversed()``.\n\n New in version 2.6.\n\nThe membership test operators (``in`` and ``not in``) are normally\nimplemented as an iteration through a sequence. However, container\nobjects can supply the following special method with a more efficient\nimplementation, which also does not require the object be a sequence.\n\nobject.__contains__(self, item)\n\n Called to implement membership test operators. Should return true\n if *item* is in *self*, false otherwise. 
For mapping objects, this\n should consider the keys of the mapping rather than the values or\n the key-item pairs.\n\n For objects that don\'t define ``__contains__()``, the membership\n test first tries iteration via ``__iter__()``, then the old\n sequence iteration protocol via ``__getitem__()``, see *this\n section in the language reference*.\n\n\nAdditional methods for emulation of sequence types\n==================================================\n\nThe following optional methods can be defined to further emulate\nsequence objects. Immutable sequences methods should at most only\ndefine ``__getslice__()``; mutable sequences might define all three\nmethods.\n\nobject.__getslice__(self, i, j)\n\n Deprecated since version 2.0: Support slice objects as parameters\n to the ``__getitem__()`` method. (However, built-in types in\n CPython currently still implement ``__getslice__()``. Therefore,\n you have to override it in derived classes when implementing\n slicing.)\n\n Called to implement evaluation of ``self[i:j]``. The returned\n object should be of the same type as *self*. Note that missing *i*\n or *j* in the slice expression are replaced by zero or\n ``sys.maxint``, respectively. If negative indexes are used in the\n slice, the length of the sequence is added to that index. If the\n instance does not implement the ``__len__()`` method, an\n ``AttributeError`` is raised. No guarantee is made that indexes\n adjusted this way are not still negative. Indexes which are\n greater than the length of the sequence are not modified. If no\n ``__getslice__()`` is found, a slice object is created instead, and\n passed to ``__getitem__()`` instead.\n\nobject.__setslice__(self, i, j, sequence)\n\n Called to implement assignment to ``self[i:j]``. Same notes for *i*\n and *j* as for ``__getslice__()``.\n\n This method is deprecated. If no ``__setslice__()`` is found, or\n for extended slicing of the form ``self[i:j:k]``, a slice object is\n created, and passed to ``__setitem__()``, instead of\n ``__setslice__()`` being called.\n\nobject.__delslice__(self, i, j)\n\n Called to implement deletion of ``self[i:j]``. Same notes for *i*\n and *j* as for ``__getslice__()``. This method is deprecated. If no\n ``__delslice__()`` is found, or for extended slicing of the form\n ``self[i:j:k]``, a slice object is created, and passed to\n ``__delitem__()``, instead of ``__delslice__()`` being called.\n\nNotice that these methods are only invoked when a single slice with a\nsingle colon is used, and the slice method is available. 
For slice\noperations involving extended slice notation, or in absence of the\nslice methods, ``__getitem__()``, ``__setitem__()`` or\n``__delitem__()`` is called with a slice object as argument.\n\nThe following example demonstrate how to make your program or module\ncompatible with earlier versions of Python (assuming that methods\n``__getitem__()``, ``__setitem__()`` and ``__delitem__()`` support\nslice objects as arguments):\n\n class MyClass:\n ...\n def __getitem__(self, index):\n ...\n def __setitem__(self, index, value):\n ...\n def __delitem__(self, index):\n ...\n\n if sys.version_info < (2, 0):\n # They won\'t be defined if version is at least 2.0 final\n\n def __getslice__(self, i, j):\n return self[max(0, i):max(0, j):]\n def __setslice__(self, i, j, seq):\n self[max(0, i):max(0, j):] = seq\n def __delslice__(self, i, j):\n del self[max(0, i):max(0, j):]\n ...\n\nNote the calls to ``max()``; these are necessary because of the\nhandling of negative indices before the ``__*slice__()`` methods are\ncalled. When negative indexes are used, the ``__*item__()`` methods\nreceive them as provided, but the ``__*slice__()`` methods get a\n"cooked" form of the index values. For each negative index value, the\nlength of the sequence is added to the index before calling the method\n(which may still result in a negative index); this is the customary\nhandling of negative indexes by the built-in sequence types, and the\n``__*item__()`` methods are expected to do this as well. However,\nsince they should already be doing that, negative indexes cannot be\npassed in; they must be constrained to the bounds of the sequence\nbefore being passed to the ``__*item__()`` methods. Calling ``max(0,\ni)`` conveniently returns the proper value.\n\n\nEmulating numeric types\n=======================\n\nThe following methods can be defined to emulate numeric objects.\nMethods corresponding to operations that are not supported by the\nparticular kind of number implemented (e.g., bitwise operations for\nnon-integral numbers) should be left undefined.\n\nobject.__add__(self, other)\nobject.__sub__(self, other)\nobject.__mul__(self, other)\nobject.__floordiv__(self, other)\nobject.__mod__(self, other)\nobject.__divmod__(self, other)\nobject.__pow__(self, other[, modulo])\nobject.__lshift__(self, other)\nobject.__rshift__(self, other)\nobject.__and__(self, other)\nobject.__xor__(self, other)\nobject.__or__(self, other)\n\n These methods are called to implement the binary arithmetic\n operations (``+``, ``-``, ``*``, ``//``, ``%``, ``divmod()``,\n ``pow()``, ``**``, ``<<``, ``>>``, ``&``, ``^``, ``|``). For\n instance, to evaluate the expression ``x + y``, where *x* is an\n instance of a class that has an ``__add__()`` method,\n ``x.__add__(y)`` is called. The ``__divmod__()`` method should be\n the equivalent to using ``__floordiv__()`` and ``__mod__()``; it\n should not be related to ``__truediv__()`` (described below). Note\n that ``__pow__()`` should be defined to accept an optional third\n argument if the ternary version of the built-in ``pow()`` function\n is to be supported.\n\n If one of those methods does not support the operation with the\n supplied arguments, it should return ``NotImplemented``.\n\nobject.__div__(self, other)\nobject.__truediv__(self, other)\n\n The division operator (``/``) is implemented by these methods. The\n ``__truediv__()`` method is used when ``__future__.division`` is in\n effect, otherwise ``__div__()`` is used. 
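
A hedged sketch of this pattern, using an invented ``Money`` class: unsupported operand types return ``NotImplemented`` so that the other operand's reflected method can be tried, and both division methods are defined so the behaviour does not depend on whether ``__future__.division`` is in effect:

   class Money(object):
       def __init__(self, amount):
           self.amount = amount
       def __add__(self, other):
           if isinstance(other, Money):
               return Money(self.amount + other.amount)
           return NotImplemented        # lets other.__radd__(self) be tried
       def __radd__(self, other):
           if other == 0:               # so that sum() over Money objects works
               return self
           return NotImplemented
       def __div__(self, other):        # classic division (no __future__ import)
           return Money(self.amount / other)
       __truediv__ = __div__            # same behaviour under __future__.division
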
If only one of these two\n methods is defined, the object will not support division in the\n alternate context; ``TypeError`` will be raised instead.\n\nobject.__radd__(self, other)\nobject.__rsub__(self, other)\nobject.__rmul__(self, other)\nobject.__rdiv__(self, other)\nobject.__rtruediv__(self, other)\nobject.__rfloordiv__(self, other)\nobject.__rmod__(self, other)\nobject.__rdivmod__(self, other)\nobject.__rpow__(self, other)\nobject.__rlshift__(self, other)\nobject.__rrshift__(self, other)\nobject.__rand__(self, other)\nobject.__rxor__(self, other)\nobject.__ror__(self, other)\n\n These methods are called to implement the binary arithmetic\n operations (``+``, ``-``, ``*``, ``/``, ``%``, ``divmod()``,\n ``pow()``, ``**``, ``<<``, ``>>``, ``&``, ``^``, ``|``) with\n reflected (swapped) operands. These functions are only called if\n the left operand does not support the corresponding operation and\n the operands are of different types. [2] For instance, to evaluate\n the expression ``x - y``, where *y* is an instance of a class that\n has an ``__rsub__()`` method, ``y.__rsub__(x)`` is called if\n ``x.__sub__(y)`` returns *NotImplemented*.\n\n Note that ternary ``pow()`` will not try calling ``__rpow__()``\n (the coercion rules would become too complicated).\n\n Note: If the right operand\'s type is a subclass of the left operand\'s\n type and that subclass provides the reflected method for the\n operation, this method will be called before the left operand\'s\n non-reflected method. This behavior allows subclasses to\n override their ancestors\' operations.\n\nobject.__iadd__(self, other)\nobject.__isub__(self, other)\nobject.__imul__(self, other)\nobject.__idiv__(self, other)\nobject.__itruediv__(self, other)\nobject.__ifloordiv__(self, other)\nobject.__imod__(self, other)\nobject.__ipow__(self, other[, modulo])\nobject.__ilshift__(self, other)\nobject.__irshift__(self, other)\nobject.__iand__(self, other)\nobject.__ixor__(self, other)\nobject.__ior__(self, other)\n\n These methods are called to implement the augmented arithmetic\n assignments (``+=``, ``-=``, ``*=``, ``/=``, ``//=``, ``%=``,\n ``**=``, ``<<=``, ``>>=``, ``&=``, ``^=``, ``|=``). These methods\n should attempt to do the operation in-place (modifying *self*) and\n return the result (which could be, but does not have to be,\n *self*). If a specific method is not defined, the augmented\n assignment falls back to the normal methods. For instance, to\n execute the statement ``x += y``, where *x* is an instance of a\n class that has an ``__iadd__()`` method, ``x.__iadd__(y)`` is\n called. If *x* is an instance of a class that does not define a\n ``__iadd__()`` method, ``x.__add__(y)`` and ``y.__radd__(x)`` are\n considered, as with the evaluation of ``x + y``.\n\nobject.__neg__(self)\nobject.__pos__(self)\nobject.__abs__(self)\nobject.__invert__(self)\n\n Called to implement the unary arithmetic operations (``-``, ``+``,\n ``abs()`` and ``~``).\n\nobject.__complex__(self)\nobject.__int__(self)\nobject.__long__(self)\nobject.__float__(self)\n\n Called to implement the built-in functions ``complex()``,\n ``int()``, ``long()``, and ``float()``. Should return a value of\n the appropriate type.\n\nobject.__oct__(self)\nobject.__hex__(self)\n\n Called to implement the built-in functions ``oct()`` and ``hex()``.\n Should return a string value.\n\nobject.__index__(self)\n\n Called to implement ``operator.index()``. Also called whenever\n Python needs an integer object (such as in slicing). 
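
For illustration, an invented one-line ``Nth`` wrapper whose instances can be used wherever an index is expected (the slice bounds below are arbitrary):

   class Nth(object):
       def __init__(self, n):
           self.n = n
       def __index__(self):
           return self.n              # must be an int or long

   >>> 'abcdef'[Nth(2):Nth(4)]        # slicing calls __index__()
   'cd'
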
Must return\n an integer (int or long).\n\n New in version 2.5.\n\nobject.__coerce__(self, other)\n\n Called to implement "mixed-mode" numeric arithmetic. Should either\n return a 2-tuple containing *self* and *other* converted to a\n common numeric type, or ``None`` if conversion is impossible. When\n the common type would be the type of ``other``, it is sufficient to\n return ``None``, since the interpreter will also ask the other\n object to attempt a coercion (but sometimes, if the implementation\n of the other type cannot be changed, it is useful to do the\n conversion to the other type here). A return value of\n ``NotImplemented`` is equivalent to returning ``None``.\n\n\nCoercion rules\n==============\n\nThis section used to document the rules for coercion. As the language\nhas evolved, the coercion rules have become hard to document\nprecisely; documenting what one version of one particular\nimplementation does is undesirable. Instead, here are some informal\nguidelines regarding coercion. In Python 3.0, coercion will not be\nsupported.\n\n* If the left operand of a % operator is a string or Unicode object,\n no coercion takes place and the string formatting operation is\n invoked instead.\n\n* It is no longer recommended to define a coercion operation. Mixed-\n mode operations on types that don\'t define coercion pass the\n original arguments to the operation.\n\n* New-style classes (those derived from ``object``) never invoke the\n ``__coerce__()`` method in response to a binary operator; the only\n time ``__coerce__()`` is invoked is when the built-in function\n ``coerce()`` is called.\n\n* For most intents and purposes, an operator that returns\n ``NotImplemented`` is treated the same as one that is not\n implemented at all.\n\n* Below, ``__op__()`` and ``__rop__()`` are used to signify the\n generic method names corresponding to an operator; ``__iop__()`` is\n used for the corresponding in-place operator. For example, for the\n operator \'``+``\', ``__add__()`` and ``__radd__()`` are used for the\n left and right variant of the binary operator, and ``__iadd__()``\n for the in-place variant.\n\n* For objects *x* and *y*, first ``x.__op__(y)`` is tried. If this is\n not implemented or returns ``NotImplemented``, ``y.__rop__(x)`` is\n tried. If this is also not implemented or returns\n ``NotImplemented``, a ``TypeError`` exception is raised. But see\n the following exception:\n\n* Exception to the previous item: if the left operand is an instance\n of a built-in type or a new-style class, and the right operand is an\n instance of a proper subclass of that type or class and overrides\n the base\'s ``__rop__()`` method, the right operand\'s ``__rop__()``\n method is tried *before* the left operand\'s ``__op__()`` method.\n\n This is done so that a subclass can completely override binary\n operators. Otherwise, the left operand\'s ``__op__()`` method would\n always accept the right operand: when an instance of a given class\n is expected, an instance of a subclass of that class is always\n acceptable.\n\n* When either operand type defines a coercion, this coercion is called\n before that type\'s ``__op__()`` or ``__rop__()`` method is called,\n but no sooner. If the coercion returns an object of a different\n type for the operand whose coercion is invoked, part of the process\n is redone using the new object.\n\n* When an in-place operator (like \'``+=``\') is used, if the left\n operand implements ``__iop__()``, it is invoked without any\n coercion. 
When the operation falls back to ``__op__()`` and/or\n ``__rop__()``, the normal coercion rules apply.\n\n* In ``x + y``, if *x* is a sequence that implements sequence\n concatenation, sequence concatenation is invoked.\n\n* In ``x * y``, if one operator is a sequence that implements sequence\n repetition, and the other is an integer (``int`` or ``long``),\n sequence repetition is invoked.\n\n* Rich comparisons (implemented by methods ``__eq__()`` and so on)\n never use coercion. Three-way comparison (implemented by\n ``__cmp__()``) does use coercion under the same conditions as other\n binary operations use it.\n\n* In the current implementation, the built-in numeric types ``int``,\n ``long``, ``float``, and ``complex`` do not use coercion. All these\n types implement a ``__coerce__()`` method, for use by the built-in\n ``coerce()`` function.\n\n Changed in version 2.7.\n\n\nWith Statement Context Managers\n===============================\n\nNew in version 2.5.\n\nA *context manager* is an object that defines the runtime context to\nbe established when executing a ``with`` statement. The context\nmanager handles the entry into, and the exit from, the desired runtime\ncontext for the execution of the block of code. Context managers are\nnormally invoked using the ``with`` statement (described in section\n*The with statement*), but can also be used by directly invoking their\nmethods.\n\nTypical uses of context managers include saving and restoring various\nkinds of global state, locking and unlocking resources, closing opened\nfiles, etc.\n\nFor more information on context managers, see *Context Manager Types*.\n\nobject.__enter__(self)\n\n Enter the runtime context related to this object. The ``with``\n statement will bind this method\'s return value to the target(s)\n specified in the ``as`` clause of the statement, if any.\n\nobject.__exit__(self, exc_type, exc_value, traceback)\n\n Exit the runtime context related to this object. The parameters\n describe the exception that caused the context to be exited. If the\n context was exited without an exception, all three arguments will\n be ``None``.\n\n If an exception is supplied, and the method wishes to suppress the\n exception (i.e., prevent it from being propagated), it should\n return a true value. Otherwise, the exception will be processed\n normally upon exit from this method.\n\n Note that ``__exit__()`` methods should not reraise the passed-in\n exception; this is the caller\'s responsibility.\n\nSee also:\n\n **PEP 0343** - The "with" statement\n The specification, background, and examples for the Python\n ``with`` statement.\n\n\nSpecial method lookup for old-style classes\n===========================================\n\nFor old-style classes, special methods are always looked up in exactly\nthe same way as any other method or attribute. This is the case\nregardless of whether the method is being looked up explicitly as in\n``x.__getitem__(i)`` or implicitly as in ``x[i]``.\n\nThis behaviour means that special methods may exhibit different\nbehaviour for different instances of a single old-style class if the\nappropriate special attributes are set differently:\n\n >>> class C:\n ... 
pass\n ...\n >>> c1 = C()\n >>> c2 = C()\n >>> c1.__len__ = lambda: 5\n >>> c2.__len__ = lambda: 9\n >>> len(c1)\n 5\n >>> len(c2)\n 9\n\n\nSpecial method lookup for new-style classes\n===========================================\n\nFor new-style classes, implicit invocations of special methods are\nonly guaranteed to work correctly if defined on an object\'s type, not\nin the object\'s instance dictionary. That behaviour is the reason why\nthe following code raises an exception (unlike the equivalent example\nwith old-style classes):\n\n >>> class C(object):\n ... pass\n ...\n >>> c = C()\n >>> c.__len__ = lambda: 5\n >>> len(c)\n Traceback (most recent call last):\n File "", line 1, in \n TypeError: object of type \'C\' has no len()\n\nThe rationale behind this behaviour lies with a number of special\nmethods such as ``__hash__()`` and ``__repr__()`` that are implemented\nby all objects, including type objects. If the implicit lookup of\nthese methods used the conventional lookup process, they would fail\nwhen invoked on the type object itself:\n\n >>> 1 .__hash__() == hash(1)\n True\n >>> int.__hash__() == hash(int)\n Traceback (most recent call last):\n File "", line 1, in \n TypeError: descriptor \'__hash__\' of \'int\' object needs an argument\n\nIncorrectly attempting to invoke an unbound method of a class in this\nway is sometimes referred to as \'metaclass confusion\', and is avoided\nby bypassing the instance when looking up special methods:\n\n >>> type(1).__hash__(1) == hash(1)\n True\n >>> type(int).__hash__(int) == hash(int)\n True\n\nIn addition to bypassing any instance attributes in the interest of\ncorrectness, implicit special method lookup generally also bypasses\nthe ``__getattribute__()`` method even of the object\'s metaclass:\n\n >>> class Meta(type):\n ... def __getattribute__(*args):\n ... print "Metaclass getattribute invoked"\n ... return type.__getattribute__(*args)\n ...\n >>> class C(object):\n ... __metaclass__ = Meta\n ... def __len__(self):\n ... return 10\n ... def __getattribute__(*args):\n ... print "Class getattribute invoked"\n ... return object.__getattribute__(*args)\n ...\n >>> c = C()\n >>> c.__len__() # Explicit lookup via instance\n Class getattribute invoked\n 10\n >>> type(c).__len__(c) # Explicit lookup via type\n Metaclass getattribute invoked\n 10\n >>> len(c) # Implicit lookup\n 10\n\nBypassing the ``__getattribute__()`` machinery in this fashion\nprovides significant scope for speed optimisations within the\ninterpreter, at the cost of some flexibility in the handling of\nspecial methods (the special method *must* be set on the class object\nitself in order to be consistently invoked by the interpreter).\n\n-[ Footnotes ]-\n\n[1] It *is* possible in some cases to change an object\'s type, under\n certain controlled conditions. 
It generally isn\'t a good idea\n though, since it can lead to some very strange behaviour if it is\n handled incorrectly.\n\n[2] For operands of the same type, it is assumed that if the non-\n reflected method (such as ``__add__()``) fails the operation is\n not supported, which is why the reflected method is not called.\n', 'string-conversions': u'\nString conversions\n******************\n\nA string conversion is an expression list enclosed in reverse (a.k.a.\nbackward) quotes:\n\n string_conversion ::= "\'" expression_list "\'"\n\nA string conversion evaluates the contained expression list and\nconverts the resulting object into a string according to rules\nspecific to its type.\n\nIf the object is a string, a number, ``None``, or a tuple, list or\ndictionary containing only objects whose type is one of these, the\nresulting string is a valid Python expression which can be passed to\nthe built-in function ``eval()`` to yield an expression with the same\nvalue (or an approximation, if floating point numbers are involved).\n\n(In particular, converting a string adds quotes around it and converts\n"funny" characters to escape sequences that are safe to print.)\n\nRecursive objects (for example, lists or dictionaries that contain a\nreference to themselves, directly or indirectly) use ``...`` to\nindicate a recursive reference, and the result cannot be passed to\n``eval()`` to get an equal value (``SyntaxError`` will be raised\ninstead).\n\nThe built-in function ``repr()`` performs exactly the same conversion\nin its argument as enclosing it in parentheses and reverse quotes\ndoes. The built-in function ``str()`` performs a similar but more\nuser-friendly conversion.\n', - 'string-methods': u'\nString Methods\n**************\n\nBelow are listed the string methods which both 8-bit strings and\nUnicode objects support.\n\nIn addition, Python\'s strings support the sequence type methods\ndescribed in the *Sequence Types --- str, unicode, list, tuple,\nbuffer, xrange* section. To output formatted strings use template\nstrings or the ``%`` operator described in the *String Formatting\nOperations* section. Also, see the ``re`` module for string functions\nbased on regular expressions.\n\nstr.capitalize()\n\n Return a copy of the string with only its first character\n capitalized.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.center(width[, fillchar])\n\n Return centered in a string of length *width*. Padding is done\n using the specified *fillchar* (default is a space).\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.count(sub[, start[, end]])\n\n Return the number of non-overlapping occurrences of substring *sub*\n in the range [*start*, *end*]. Optional arguments *start* and\n *end* are interpreted as in slice notation.\n\nstr.decode([encoding[, errors]])\n\n Decodes the string using the codec registered for *encoding*.\n *encoding* defaults to the default string encoding. *errors* may\n be given to set a different error handling scheme. The default is\n ``\'strict\'``, meaning that encoding errors raise ``UnicodeError``.\n Other possible values are ``\'ignore\'``, ``\'replace\'`` and any other\n name registered via ``codecs.register_error()``, see section *Codec\n Base Classes*.\n\n New in version 2.2.\n\n Changed in version 2.3: Support for other error handling schemes\n added.\n\n Changed in version 2.7: Support for keyword arguments added.\n\nstr.encode([encoding[, errors]])\n\n Return an encoded version of the string. 
Default encoding is the\n current default string encoding. *errors* may be given to set a\n different error handling scheme. The default for *errors* is\n ``\'strict\'``, meaning that encoding errors raise a\n ``UnicodeError``. Other possible values are ``\'ignore\'``,\n ``\'replace\'``, ``\'xmlcharrefreplace\'``, ``\'backslashreplace\'`` and\n any other name registered via ``codecs.register_error()``, see\n section *Codec Base Classes*. For a list of possible encodings, see\n section *Standard Encodings*.\n\n New in version 2.0.\n\n Changed in version 2.3: Support for ``\'xmlcharrefreplace\'`` and\n ``\'backslashreplace\'`` and other error handling schemes added.\n\n Changed in version 2.7: Support for keyword arguments added.\n\nstr.endswith(suffix[, start[, end]])\n\n Return ``True`` if the string ends with the specified *suffix*,\n otherwise return ``False``. *suffix* can also be a tuple of\n suffixes to look for. With optional *start*, test beginning at\n that position. With optional *end*, stop comparing at that\n position.\n\n Changed in version 2.5: Accept tuples as *suffix*.\n\nstr.expandtabs([tabsize])\n\n Return a copy of the string where all tab characters are replaced\n by one or more spaces, depending on the current column and the\n given tab size. The column number is reset to zero after each\n newline occurring in the string. If *tabsize* is not given, a tab\n size of ``8`` characters is assumed. This doesn\'t understand other\n non-printing characters or escape sequences.\n\nstr.find(sub[, start[, end]])\n\n Return the lowest index in the string where substring *sub* is\n found, such that *sub* is contained in the slice ``s[start:end]``.\n Optional arguments *start* and *end* are interpreted as in slice\n notation. Return ``-1`` if *sub* is not found.\n\nstr.format(*args, **kwargs)\n\n Perform a string formatting operation. The string on which this\n method is called can contain literal text or replacement fields\n delimited by braces ``{}``. Each replacement field contains either\n the numeric index of a positional argument, or the name of a\n keyword argument. 
Returns a copy of the string where each\n replacement field is replaced with the string value of the\n corresponding argument.\n\n >>> "The sum of 1 + 2 is {0}".format(1+2)\n \'The sum of 1 + 2 is 3\'\n\n See *Format String Syntax* for a description of the various\n formatting options that can be specified in format strings.\n\n This method of string formatting is the new standard in Python 3.0,\n and should be preferred to the ``%`` formatting described in\n *String Formatting Operations* in new code.\n\n New in version 2.6.\n\nstr.index(sub[, start[, end]])\n\n Like ``find()``, but raise ``ValueError`` when the substring is not\n found.\n\nstr.isalnum()\n\n Return true if all characters in the string are alphanumeric and\n there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isalpha()\n\n Return true if all characters in the string are alphabetic and\n there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isdigit()\n\n Return true if all characters in the string are digits and there is\n at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.islower()\n\n Return true if all cased characters in the string are lowercase and\n there is at least one cased character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isspace()\n\n Return true if there are only whitespace characters in the string\n and there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.istitle()\n\n Return true if the string is a titlecased string and there is at\n least one character, for example uppercase characters may only\n follow uncased characters and lowercase characters only cased ones.\n Return false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isupper()\n\n Return true if all cased characters in the string are uppercase and\n there is at least one cased character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.join(iterable)\n\n Return a string which is the concatenation of the strings in the\n *iterable* *iterable*. The separator between elements is the\n string providing this method.\n\nstr.ljust(width[, fillchar])\n\n Return the string left justified in a string of length *width*.\n Padding is done using the specified *fillchar* (default is a\n space). The original string is returned if *width* is less than\n ``len(s)``.\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.lower()\n\n Return a copy of the string converted to lowercase.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.lstrip([chars])\n\n Return a copy of the string with leading characters removed. The\n *chars* argument is a string specifying the set of characters to be\n removed. If omitted or ``None``, the *chars* argument defaults to\n removing whitespace. The *chars* argument is not a prefix; rather,\n all combinations of its values are stripped:\n\n >>> \' spacious \'.lstrip()\n \'spacious \'\n >>> \'www.example.com\'.lstrip(\'cmowz.\')\n \'example.com\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.partition(sep)\n\n Split the string at the first occurrence of *sep*, and return a\n 3-tuple containing the part before the separator, the separator\n itself, and the part after the separator. 
If the separator is not\n found, return a 3-tuple containing the string itself, followed by\n two empty strings.\n\n New in version 2.5.\n\nstr.replace(old, new[, count])\n\n Return a copy of the string with all occurrences of substring *old*\n replaced by *new*. If the optional argument *count* is given, only\n the first *count* occurrences are replaced.\n\nstr.rfind(sub[, start[, end]])\n\n Return the highest index in the string where substring *sub* is\n found, such that *sub* is contained within ``s[start:end]``.\n Optional arguments *start* and *end* are interpreted as in slice\n notation. Return ``-1`` on failure.\n\nstr.rindex(sub[, start[, end]])\n\n Like ``rfind()`` but raises ``ValueError`` when the substring *sub*\n is not found.\n\nstr.rjust(width[, fillchar])\n\n Return the string right justified in a string of length *width*.\n Padding is done using the specified *fillchar* (default is a\n space). The original string is returned if *width* is less than\n ``len(s)``.\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.rpartition(sep)\n\n Split the string at the last occurrence of *sep*, and return a\n 3-tuple containing the part before the separator, the separator\n itself, and the part after the separator. If the separator is not\n found, return a 3-tuple containing two empty strings, followed by\n the string itself.\n\n New in version 2.5.\n\nstr.rsplit([sep[, maxsplit]])\n\n Return a list of the words in the string, using *sep* as the\n delimiter string. If *maxsplit* is given, at most *maxsplit* splits\n are done, the *rightmost* ones. If *sep* is not specified or\n ``None``, any whitespace string is a separator. Except for\n splitting from the right, ``rsplit()`` behaves like ``split()``\n which is described in detail below.\n\n New in version 2.4.\n\nstr.rstrip([chars])\n\n Return a copy of the string with trailing characters removed. The\n *chars* argument is a string specifying the set of characters to be\n removed. If omitted or ``None``, the *chars* argument defaults to\n removing whitespace. The *chars* argument is not a suffix; rather,\n all combinations of its values are stripped:\n\n >>> \' spacious \'.rstrip()\n \' spacious\'\n >>> \'mississippi\'.rstrip(\'ipz\')\n \'mississ\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.split([sep[, maxsplit]])\n\n Return a list of the words in the string, using *sep* as the\n delimiter string. If *maxsplit* is given, at most *maxsplit*\n splits are done (thus, the list will have at most ``maxsplit+1``\n elements). If *maxsplit* is not specified, then there is no limit\n on the number of splits (all possible splits are made).\n\n If *sep* is given, consecutive delimiters are not grouped together\n and are deemed to delimit empty strings (for example,\n ``\'1,,2\'.split(\',\')`` returns ``[\'1\', \'\', \'2\']``). The *sep*\n argument may consist of multiple characters (for example,\n ``\'1<>2<>3\'.split(\'<>\')`` returns ``[\'1\', \'2\', \'3\']``). Splitting\n an empty string with a specified separator returns ``[\'\']``.\n\n If *sep* is not specified or is ``None``, a different splitting\n algorithm is applied: runs of consecutive whitespace are regarded\n as a single separator, and the result will contain no empty strings\n at the start or end if the string has leading or trailing\n whitespace. 
Consequently, splitting an empty string or a string\n consisting of just whitespace with a ``None`` separator returns\n ``[]``.\n\n For example, ``\' 1 2 3 \'.split()`` returns ``[\'1\', \'2\', \'3\']``,\n and ``\' 1 2 3 \'.split(None, 1)`` returns ``[\'1\', \'2 3 \']``.\n\nstr.splitlines([keepends])\n\n Return a list of the lines in the string, breaking at line\n boundaries. Line breaks are not included in the resulting list\n unless *keepends* is given and true.\n\nstr.startswith(prefix[, start[, end]])\n\n Return ``True`` if string starts with the *prefix*, otherwise\n return ``False``. *prefix* can also be a tuple of prefixes to look\n for. With optional *start*, test string beginning at that\n position. With optional *end*, stop comparing string at that\n position.\n\n Changed in version 2.5: Accept tuples as *prefix*.\n\nstr.strip([chars])\n\n Return a copy of the string with the leading and trailing\n characters removed. The *chars* argument is a string specifying the\n set of characters to be removed. If omitted or ``None``, the\n *chars* argument defaults to removing whitespace. The *chars*\n argument is not a prefix or suffix; rather, all combinations of its\n values are stripped:\n\n >>> \' spacious \'.strip()\n \'spacious\'\n >>> \'www.example.com\'.strip(\'cmowz.\')\n \'example\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.swapcase()\n\n Return a copy of the string with uppercase characters converted to\n lowercase and vice versa.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.title()\n\n Return a titlecased version of the string where words start with an\n uppercase character and the remaining characters are lowercase.\n\n The algorithm uses a simple language-independent definition of a\n word as groups of consecutive letters. The definition works in\n many contexts but it means that apostrophes in contractions and\n possessives form word boundaries, which may not be the desired\n result:\n\n >>> "they\'re bill\'s friends from the UK".title()\n "They\'Re Bill\'S Friends From The Uk"\n\n A workaround for apostrophes can be constructed using regular\n expressions:\n\n >>> import re\n >>> def titlecase(s):\n return re.sub(r"[A-Za-z]+(\'[A-Za-z]+)?",\n lambda mo: mo.group(0)[0].upper() +\n mo.group(0)[1:].lower(),\n s)\n\n >>> titlecase("they\'re bill\'s friends.")\n "They\'re Bill\'s Friends."\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.translate(table[, deletechars])\n\n Return a copy of the string where all characters occurring in the\n optional argument *deletechars* are removed, and the remaining\n characters have been mapped through the given translation table,\n which must be a string of length 256.\n\n You can use the ``maketrans()`` helper function in the ``string``\n module to create a translation table. For string objects, set the\n *table* argument to ``None`` for translations that only delete\n characters:\n\n >>> \'read this short text\'.translate(None, \'aeiou\')\n \'rd ths shrt txt\'\n\n New in version 2.6: Support for a ``None`` *table* argument.\n\n For Unicode objects, the ``translate()`` method does not accept the\n optional *deletechars* argument. Instead, it returns a copy of the\n *s* where all characters have been mapped through the given\n translation table which must be a mapping of Unicode ordinals to\n Unicode ordinals, Unicode strings or ``None``. Unmapped characters\n are left untouched. 
Characters mapped to ``None`` are deleted.\n Note, a more flexible approach is to create a custom character\n mapping codec using the ``codecs`` module (see ``encodings.cp1251``\n for an example).\n\nstr.upper()\n\n Return a copy of the string converted to uppercase.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.zfill(width)\n\n Return the numeric string left filled with zeros in a string of\n length *width*. A sign prefix is handled correctly. The original\n string is returned if *width* is less than ``len(s)``.\n\n New in version 2.2.2.\n\nThe following methods are present only on unicode objects:\n\nunicode.isnumeric()\n\n Return ``True`` if there are only numeric characters in S,\n ``False`` otherwise. Numeric characters include digit characters,\n and all characters that have the Unicode numeric value property,\n e.g. U+2155, VULGAR FRACTION ONE FIFTH.\n\nunicode.isdecimal()\n\n Return ``True`` if there are only decimal characters in S,\n ``False`` otherwise. Decimal characters include digit characters,\n and all characters that that can be used to form decimal-radix\n numbers, e.g. U+0660, ARABIC-INDIC DIGIT ZERO.\n', - 'strings': u'\nString literals\n***************\n\nString literals are described by the following lexical definitions:\n\n stringliteral ::= [stringprefix](shortstring | longstring)\n stringprefix ::= "r" | "u" | "ur" | "R" | "U" | "UR" | "Ur" | "uR"\n shortstring ::= "\'" shortstringitem* "\'" | \'"\' shortstringitem* \'"\'\n longstring ::= "\'\'\'" longstringitem* "\'\'\'"\n | \'"""\' longstringitem* \'"""\'\n shortstringitem ::= shortstringchar | escapeseq\n longstringitem ::= longstringchar | escapeseq\n shortstringchar ::= \n longstringchar ::= \n escapeseq ::= "\\" \n\nOne syntactic restriction not indicated by these productions is that\nwhitespace is not allowed between the **stringprefix** and the rest of\nthe string literal. The source character set is defined by the\nencoding declaration; it is ASCII if no encoding declaration is given\nin the source file; see section *Encoding declarations*.\n\nIn plain English: String literals can be enclosed in matching single\nquotes (``\'``) or double quotes (``"``). They can also be enclosed in\nmatching groups of three single or double quotes (these are generally\nreferred to as *triple-quoted strings*). The backslash (``\\``)\ncharacter is used to escape characters that otherwise have a special\nmeaning, such as newline, backslash itself, or the quote character.\nString literals may optionally be prefixed with a letter ``\'r\'`` or\n``\'R\'``; such strings are called *raw strings* and use different rules\nfor interpreting backslash escape sequences. A prefix of ``\'u\'`` or\n``\'U\'`` makes the string a Unicode string. Unicode strings use the\nUnicode character set as defined by the Unicode Consortium and ISO\n10646. Some additional escape sequences, described below, are\navailable in Unicode strings. The two prefix characters may be\ncombined; in this case, ``\'u\'`` must appear before ``\'r\'``.\n\nIn triple-quoted strings, unescaped newlines and quotes are allowed\n(and are retained), except that three unescaped quotes in a row\nterminate the string. (A "quote" is the character used to open the\nstring, i.e. either ``\'`` or ``"``.)\n\nUnless an ``\'r\'`` or ``\'R\'`` prefix is present, escape sequences in\nstrings are interpreted according to rules similar to those used by\nStandard C. 
The recognized escape sequences are:\n\n+-------------------+-----------------------------------+---------+\n| Escape Sequence | Meaning | Notes |\n+===================+===================================+=========+\n| ``\\newline`` | Ignored | |\n+-------------------+-----------------------------------+---------+\n| ``\\\\`` | Backslash (``\\``) | |\n+-------------------+-----------------------------------+---------+\n| ``\\\'`` | Single quote (``\'``) | |\n+-------------------+-----------------------------------+---------+\n| ``\\"`` | Double quote (``"``) | |\n+-------------------+-----------------------------------+---------+\n| ``\\a`` | ASCII Bell (BEL) | |\n+-------------------+-----------------------------------+---------+\n| ``\\b`` | ASCII Backspace (BS) | |\n+-------------------+-----------------------------------+---------+\n| ``\\f`` | ASCII Formfeed (FF) | |\n+-------------------+-----------------------------------+---------+\n| ``\\n`` | ASCII Linefeed (LF) | |\n+-------------------+-----------------------------------+---------+\n| ``\\N{name}`` | Character named *name* in the | |\n| | Unicode database (Unicode only) | |\n+-------------------+-----------------------------------+---------+\n| ``\\r`` | ASCII Carriage Return (CR) | |\n+-------------------+-----------------------------------+---------+\n| ``\\t`` | ASCII Horizontal Tab (TAB) | |\n+-------------------+-----------------------------------+---------+\n| ``\\uxxxx`` | Character with 16-bit hex value | (1) |\n| | *xxxx* (Unicode only) | |\n+-------------------+-----------------------------------+---------+\n| ``\\Uxxxxxxxx`` | Character with 32-bit hex value | (2) |\n| | *xxxxxxxx* (Unicode only) | |\n+-------------------+-----------------------------------+---------+\n| ``\\v`` | ASCII Vertical Tab (VT) | |\n+-------------------+-----------------------------------+---------+\n| ``\\ooo`` | Character with octal value *ooo* | (3,5) |\n+-------------------+-----------------------------------+---------+\n| ``\\xhh`` | Character with hex value *hh* | (4,5) |\n+-------------------+-----------------------------------+---------+\n\nNotes:\n\n1. Individual code units which form parts of a surrogate pair can be\n encoded using this escape sequence.\n\n2. Any Unicode character can be encoded this way, but characters\n outside the Basic Multilingual Plane (BMP) will be encoded using a\n surrogate pair if Python is compiled to use 16-bit code units (the\n default). Individual code units which form parts of a surrogate\n pair can be encoded using this escape sequence.\n\n3. As in Standard C, up to three octal digits are accepted.\n\n4. Unlike in Standard C, exactly two hex digits are required.\n\n5. In a string literal, hexadecimal and octal escapes denote the byte\n with the given value; it is not necessary that the byte encodes a\n character in the source character set. In a Unicode literal, these\n escapes denote a Unicode character with the given value.\n\nUnlike Standard C, all unrecognized escape sequences are left in the\nstring unchanged, i.e., *the backslash is left in the string*. (This\nbehavior is useful when debugging: if an escape sequence is mistyped,\nthe resulting output is more easily recognized as broken.) 
It is also\nimportant to note that the escape sequences marked as "(Unicode only)"\nin the table above fall into the category of unrecognized escapes for\nnon-Unicode string literals.\n\nWhen an ``\'r\'`` or ``\'R\'`` prefix is present, a character following a\nbackslash is included in the string without change, and *all\nbackslashes are left in the string*. For example, the string literal\n``r"\\n"`` consists of two characters: a backslash and a lowercase\n``\'n\'``. String quotes can be escaped with a backslash, but the\nbackslash remains in the string; for example, ``r"\\""`` is a valid\nstring literal consisting of two characters: a backslash and a double\nquote; ``r"\\"`` is not a valid string literal (even a raw string\ncannot end in an odd number of backslashes). Specifically, *a raw\nstring cannot end in a single backslash* (since the backslash would\nescape the following quote character). Note also that a single\nbackslash followed by a newline is interpreted as those two characters\nas part of the string, *not* as a line continuation.\n\nWhen an ``\'r\'`` or ``\'R\'`` prefix is used in conjunction with a\n``\'u\'`` or ``\'U\'`` prefix, then the ``\\uXXXX`` and ``\\UXXXXXXXX``\nescape sequences are processed while *all other backslashes are left\nin the string*. For example, the string literal ``ur"\\u0062\\n"``\nconsists of three Unicode characters: \'LATIN SMALL LETTER B\', \'REVERSE\nSOLIDUS\', and \'LATIN SMALL LETTER N\'. Backslashes can be escaped with\na preceding backslash; however, both remain in the string. As a\nresult, ``\\uXXXX`` escape sequences are only recognized when there are\nan odd number of backslashes.\n', + 'string-methods': u'\nString Methods\n**************\n\nBelow are listed the string methods which both 8-bit strings and\nUnicode objects support. Some of them are also available on\n``bytearray`` objects.\n\nIn addition, Python\'s strings support the sequence type methods\ndescribed in the *Sequence Types --- str, unicode, list, tuple,\nbytearray, buffer, xrange* section. To output formatted strings use\ntemplate strings or the ``%`` operator described in the *String\nFormatting Operations* section. Also, see the ``re`` module for string\nfunctions based on regular expressions.\n\nstr.capitalize()\n\n Return a copy of the string with its first character capitalized\n and the rest lowercased.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.center(width[, fillchar])\n\n Return centered in a string of length *width*. Padding is done\n using the specified *fillchar* (default is a space).\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.count(sub[, start[, end]])\n\n Return the number of non-overlapping occurrences of substring *sub*\n in the range [*start*, *end*]. Optional arguments *start* and\n *end* are interpreted as in slice notation.\n\nstr.decode([encoding[, errors]])\n\n Decodes the string using the codec registered for *encoding*.\n *encoding* defaults to the default string encoding. *errors* may\n be given to set a different error handling scheme. 
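
By way of illustration (the sample values are invented; output as shown by a CPython 2 interactive session, using the standard ``'utf-8'`` and ``'ascii'`` codecs):

   >>> u'caf\xe9'.encode('utf-8')
   'caf\xc3\xa9'
   >>> 'caf\xc3\xa9'.decode('utf-8')
   u'caf\xe9'
   >>> 'caf\xc3\xa9'.decode('ascii', 'replace')
   u'caf\ufffd\ufffd'
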
The default is\n ``\'strict\'``, meaning that encoding errors raise ``UnicodeError``.\n Other possible values are ``\'ignore\'``, ``\'replace\'`` and any other\n name registered via ``codecs.register_error()``, see section *Codec\n Base Classes*.\n\n New in version 2.2.\n\n Changed in version 2.3: Support for other error handling schemes\n added.\n\n Changed in version 2.7: Support for keyword arguments added.\n\nstr.encode([encoding[, errors]])\n\n Return an encoded version of the string. Default encoding is the\n current default string encoding. *errors* may be given to set a\n different error handling scheme. The default for *errors* is\n ``\'strict\'``, meaning that encoding errors raise a\n ``UnicodeError``. Other possible values are ``\'ignore\'``,\n ``\'replace\'``, ``\'xmlcharrefreplace\'``, ``\'backslashreplace\'`` and\n any other name registered via ``codecs.register_error()``, see\n section *Codec Base Classes*. For a list of possible encodings, see\n section *Standard Encodings*.\n\n New in version 2.0.\n\n Changed in version 2.3: Support for ``\'xmlcharrefreplace\'`` and\n ``\'backslashreplace\'`` and other error handling schemes added.\n\n Changed in version 2.7: Support for keyword arguments added.\n\nstr.endswith(suffix[, start[, end]])\n\n Return ``True`` if the string ends with the specified *suffix*,\n otherwise return ``False``. *suffix* can also be a tuple of\n suffixes to look for. With optional *start*, test beginning at\n that position. With optional *end*, stop comparing at that\n position.\n\n Changed in version 2.5: Accept tuples as *suffix*.\n\nstr.expandtabs([tabsize])\n\n Return a copy of the string where all tab characters are replaced\n by one or more spaces, depending on the current column and the\n given tab size. The column number is reset to zero after each\n newline occurring in the string. If *tabsize* is not given, a tab\n size of ``8`` characters is assumed. This doesn\'t understand other\n non-printing characters or escape sequences.\n\nstr.find(sub[, start[, end]])\n\n Return the lowest index in the string where substring *sub* is\n found, such that *sub* is contained in the slice ``s[start:end]``.\n Optional arguments *start* and *end* are interpreted as in slice\n notation. Return ``-1`` if *sub* is not found.\n\n Note: The ``find()`` method should be used only if you need to know the\n position of *sub*. To check if *sub* is a substring or not, use\n the ``in`` operator:\n\n >>> \'Py\' in \'Python\'\n True\n\nstr.format(*args, **kwargs)\n\n Perform a string formatting operation. The string on which this\n method is called can contain literal text or replacement fields\n delimited by braces ``{}``. Each replacement field contains either\n the numeric index of a positional argument, or the name of a\n keyword argument. 
Returns a copy of the string where each\n replacement field is replaced with the string value of the\n corresponding argument.\n\n >>> "The sum of 1 + 2 is {0}".format(1+2)\n \'The sum of 1 + 2 is 3\'\n\n See *Format String Syntax* for a description of the various\n formatting options that can be specified in format strings.\n\n This method of string formatting is the new standard in Python 3.0,\n and should be preferred to the ``%`` formatting described in\n *String Formatting Operations* in new code.\n\n New in version 2.6.\n\nstr.index(sub[, start[, end]])\n\n Like ``find()``, but raise ``ValueError`` when the substring is not\n found.\n\nstr.isalnum()\n\n Return true if all characters in the string are alphanumeric and\n there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isalpha()\n\n Return true if all characters in the string are alphabetic and\n there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isdigit()\n\n Return true if all characters in the string are digits and there is\n at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.islower()\n\n Return true if all cased characters in the string are lowercase and\n there is at least one cased character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isspace()\n\n Return true if there are only whitespace characters in the string\n and there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.istitle()\n\n Return true if the string is a titlecased string and there is at\n least one character, for example uppercase characters may only\n follow uncased characters and lowercase characters only cased ones.\n Return false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isupper()\n\n Return true if all cased characters in the string are uppercase and\n there is at least one cased character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.join(iterable)\n\n Return a string which is the concatenation of the strings in the\n *iterable* *iterable*. The separator between elements is the\n string providing this method.\n\nstr.ljust(width[, fillchar])\n\n Return the string left justified in a string of length *width*.\n Padding is done using the specified *fillchar* (default is a\n space). The original string is returned if *width* is less than\n ``len(s)``.\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.lower()\n\n Return a copy of the string converted to lowercase.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.lstrip([chars])\n\n Return a copy of the string with leading characters removed. The\n *chars* argument is a string specifying the set of characters to be\n removed. If omitted or ``None``, the *chars* argument defaults to\n removing whitespace. The *chars* argument is not a prefix; rather,\n all combinations of its values are stripped:\n\n >>> \' spacious \'.lstrip()\n \'spacious \'\n >>> \'www.example.com\'.lstrip(\'cmowz.\')\n \'example.com\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.partition(sep)\n\n Split the string at the first occurrence of *sep*, and return a\n 3-tuple containing the part before the separator, the separator\n itself, and the part after the separator. 
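
For instance (the address and separator are invented for illustration):

   >>> 'user@example.com'.partition('@')
   ('user', '@', 'example.com')
   >>> 'no-separator-here'.partition('@')
   ('no-separator-here', '', '')
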
If the separator is not\n found, return a 3-tuple containing the string itself, followed by\n two empty strings.\n\n New in version 2.5.\n\nstr.replace(old, new[, count])\n\n Return a copy of the string with all occurrences of substring *old*\n replaced by *new*. If the optional argument *count* is given, only\n the first *count* occurrences are replaced.\n\nstr.rfind(sub[, start[, end]])\n\n Return the highest index in the string where substring *sub* is\n found, such that *sub* is contained within ``s[start:end]``.\n Optional arguments *start* and *end* are interpreted as in slice\n notation. Return ``-1`` on failure.\n\nstr.rindex(sub[, start[, end]])\n\n Like ``rfind()`` but raises ``ValueError`` when the substring *sub*\n is not found.\n\nstr.rjust(width[, fillchar])\n\n Return the string right justified in a string of length *width*.\n Padding is done using the specified *fillchar* (default is a\n space). The original string is returned if *width* is less than\n ``len(s)``.\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.rpartition(sep)\n\n Split the string at the last occurrence of *sep*, and return a\n 3-tuple containing the part before the separator, the separator\n itself, and the part after the separator. If the separator is not\n found, return a 3-tuple containing two empty strings, followed by\n the string itself.\n\n New in version 2.5.\n\nstr.rsplit([sep[, maxsplit]])\n\n Return a list of the words in the string, using *sep* as the\n delimiter string. If *maxsplit* is given, at most *maxsplit* splits\n are done, the *rightmost* ones. If *sep* is not specified or\n ``None``, any whitespace string is a separator. Except for\n splitting from the right, ``rsplit()`` behaves like ``split()``\n which is described in detail below.\n\n New in version 2.4.\n\nstr.rstrip([chars])\n\n Return a copy of the string with trailing characters removed. The\n *chars* argument is a string specifying the set of characters to be\n removed. If omitted or ``None``, the *chars* argument defaults to\n removing whitespace. The *chars* argument is not a suffix; rather,\n all combinations of its values are stripped:\n\n >>> \' spacious \'.rstrip()\n \' spacious\'\n >>> \'mississippi\'.rstrip(\'ipz\')\n \'mississ\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.split([sep[, maxsplit]])\n\n Return a list of the words in the string, using *sep* as the\n delimiter string. If *maxsplit* is given, at most *maxsplit*\n splits are done (thus, the list will have at most ``maxsplit+1``\n elements). If *maxsplit* is not specified, then there is no limit\n on the number of splits (all possible splits are made).\n\n If *sep* is given, consecutive delimiters are not grouped together\n and are deemed to delimit empty strings (for example,\n ``\'1,,2\'.split(\',\')`` returns ``[\'1\', \'\', \'2\']``). The *sep*\n argument may consist of multiple characters (for example,\n ``\'1<>2<>3\'.split(\'<>\')`` returns ``[\'1\', \'2\', \'3\']``). Splitting\n an empty string with a specified separator returns ``[\'\']``.\n\n If *sep* is not specified or is ``None``, a different splitting\n algorithm is applied: runs of consecutive whitespace are regarded\n as a single separator, and the result will contain no empty strings\n at the start or end if the string has leading or trailing\n whitespace. 
Consequently, splitting an empty string or a string\n consisting of just whitespace with a ``None`` separator returns\n ``[]``.\n\n For example, ``\' 1 2 3 \'.split()`` returns ``[\'1\', \'2\', \'3\']``,\n and ``\' 1 2 3 \'.split(None, 1)`` returns ``[\'1\', \'2 3 \']``.\n\nstr.splitlines([keepends])\n\n Return a list of the lines in the string, breaking at line\n boundaries. Line breaks are not included in the resulting list\n unless *keepends* is given and true.\n\nstr.startswith(prefix[, start[, end]])\n\n Return ``True`` if string starts with the *prefix*, otherwise\n return ``False``. *prefix* can also be a tuple of prefixes to look\n for. With optional *start*, test string beginning at that\n position. With optional *end*, stop comparing string at that\n position.\n\n Changed in version 2.5: Accept tuples as *prefix*.\n\nstr.strip([chars])\n\n Return a copy of the string with the leading and trailing\n characters removed. The *chars* argument is a string specifying the\n set of characters to be removed. If omitted or ``None``, the\n *chars* argument defaults to removing whitespace. The *chars*\n argument is not a prefix or suffix; rather, all combinations of its\n values are stripped:\n\n >>> \' spacious \'.strip()\n \'spacious\'\n >>> \'www.example.com\'.strip(\'cmowz.\')\n \'example\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.swapcase()\n\n Return a copy of the string with uppercase characters converted to\n lowercase and vice versa.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.title()\n\n Return a titlecased version of the string where words start with an\n uppercase character and the remaining characters are lowercase.\n\n The algorithm uses a simple language-independent definition of a\n word as groups of consecutive letters. The definition works in\n many contexts but it means that apostrophes in contractions and\n possessives form word boundaries, which may not be the desired\n result:\n\n >>> "they\'re bill\'s friends from the UK".title()\n "They\'Re Bill\'S Friends From The Uk"\n\n A workaround for apostrophes can be constructed using regular\n expressions:\n\n >>> import re\n >>> def titlecase(s):\n return re.sub(r"[A-Za-z]+(\'[A-Za-z]+)?",\n lambda mo: mo.group(0)[0].upper() +\n mo.group(0)[1:].lower(),\n s)\n\n >>> titlecase("they\'re bill\'s friends.")\n "They\'re Bill\'s Friends."\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.translate(table[, deletechars])\n\n Return a copy of the string where all characters occurring in the\n optional argument *deletechars* are removed, and the remaining\n characters have been mapped through the given translation table,\n which must be a string of length 256.\n\n You can use the ``maketrans()`` helper function in the ``string``\n module to create a translation table. For string objects, set the\n *table* argument to ``None`` for translations that only delete\n characters:\n\n >>> \'read this short text\'.translate(None, \'aeiou\')\n \'rd ths shrt txt\'\n\n New in version 2.6: Support for a ``None`` *table* argument.\n\n For Unicode objects, the ``translate()`` method does not accept the\n optional *deletechars* argument. Instead, it returns a copy of the\n *s* where all characters have been mapped through the given\n translation table which must be a mapping of Unicode ordinals to\n Unicode ordinals, Unicode strings or ``None``. Unmapped characters\n are left untouched. 
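
For instance, with an ad-hoc mapping invented for illustration (ordinals map to replacement characters or to ``None``):

   >>> u'read this'.translate({ord(u'a'): u'4', ord(u'i'): None})
   u're4d ths'
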
Characters mapped to ``None`` are deleted.\n Note, a more flexible approach is to create a custom character\n mapping codec using the ``codecs`` module (see ``encodings.cp1251``\n for an example).\n\nstr.upper()\n\n Return a copy of the string converted to uppercase.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.zfill(width)\n\n Return the numeric string left filled with zeros in a string of\n length *width*. A sign prefix is handled correctly. The original\n string is returned if *width* is less than ``len(s)``.\n\n New in version 2.2.2.\n\nThe following methods are present only on unicode objects:\n\nunicode.isnumeric()\n\n Return ``True`` if there are only numeric characters in S,\n ``False`` otherwise. Numeric characters include digit characters,\n and all characters that have the Unicode numeric value property,\n e.g. U+2155, VULGAR FRACTION ONE FIFTH.\n\nunicode.isdecimal()\n\n Return ``True`` if there are only decimal characters in S,\n ``False`` otherwise. Decimal characters include digit characters,\n and all characters that that can be used to form decimal-radix\n numbers, e.g. U+0660, ARABIC-INDIC DIGIT ZERO.\n', + 'strings': u'\nString literals\n***************\n\nString literals are described by the following lexical definitions:\n\n stringliteral ::= [stringprefix](shortstring | longstring)\n stringprefix ::= "r" | "u" | "ur" | "R" | "U" | "UR" | "Ur" | "uR"\n | "b" | "B" | "br" | "Br" | "bR" | "BR"\n shortstring ::= "\'" shortstringitem* "\'" | \'"\' shortstringitem* \'"\'\n longstring ::= "\'\'\'" longstringitem* "\'\'\'"\n | \'"""\' longstringitem* \'"""\'\n shortstringitem ::= shortstringchar | escapeseq\n longstringitem ::= longstringchar | escapeseq\n shortstringchar ::= \n longstringchar ::= \n escapeseq ::= "\\" \n\nOne syntactic restriction not indicated by these productions is that\nwhitespace is not allowed between the **stringprefix** and the rest of\nthe string literal. The source character set is defined by the\nencoding declaration; it is ASCII if no encoding declaration is given\nin the source file; see section *Encoding declarations*.\n\nIn plain English: String literals can be enclosed in matching single\nquotes (``\'``) or double quotes (``"``). They can also be enclosed in\nmatching groups of three single or double quotes (these are generally\nreferred to as *triple-quoted strings*). The backslash (``\\``)\ncharacter is used to escape characters that otherwise have a special\nmeaning, such as newline, backslash itself, or the quote character.\nString literals may optionally be prefixed with a letter ``\'r\'`` or\n``\'R\'``; such strings are called *raw strings* and use different rules\nfor interpreting backslash escape sequences. A prefix of ``\'u\'`` or\n``\'U\'`` makes the string a Unicode string. Unicode strings use the\nUnicode character set as defined by the Unicode Consortium and ISO\n10646. Some additional escape sequences, described below, are\navailable in Unicode strings. A prefix of ``\'b\'`` or ``\'B\'`` is\nignored in Python 2; it indicates that the literal should become a\nbytes literal in Python 3 (e.g. when code is automatically converted\nwith 2to3). A ``\'u\'`` or ``\'b\'`` prefix may be followed by an ``\'r\'``\nprefix.\n\nIn triple-quoted strings, unescaped newlines and quotes are allowed\n(and are retained), except that three unescaped quotes in a row\nterminate the string. (A "quote" is the character used to open the\nstring, i.e. 
either ``\'`` or ``"``.)\n\nUnless an ``\'r\'`` or ``\'R\'`` prefix is present, escape sequences in\nstrings are interpreted according to rules similar to those used by\nStandard C. The recognized escape sequences are:\n\n+-------------------+-----------------------------------+---------+\n| Escape Sequence | Meaning | Notes |\n+===================+===================================+=========+\n| ``\\newline`` | Ignored | |\n+-------------------+-----------------------------------+---------+\n| ``\\\\`` | Backslash (``\\``) | |\n+-------------------+-----------------------------------+---------+\n| ``\\\'`` | Single quote (``\'``) | |\n+-------------------+-----------------------------------+---------+\n| ``\\"`` | Double quote (``"``) | |\n+-------------------+-----------------------------------+---------+\n| ``\\a`` | ASCII Bell (BEL) | |\n+-------------------+-----------------------------------+---------+\n| ``\\b`` | ASCII Backspace (BS) | |\n+-------------------+-----------------------------------+---------+\n| ``\\f`` | ASCII Formfeed (FF) | |\n+-------------------+-----------------------------------+---------+\n| ``\\n`` | ASCII Linefeed (LF) | |\n+-------------------+-----------------------------------+---------+\n| ``\\N{name}`` | Character named *name* in the | |\n| | Unicode database (Unicode only) | |\n+-------------------+-----------------------------------+---------+\n| ``\\r`` | ASCII Carriage Return (CR) | |\n+-------------------+-----------------------------------+---------+\n| ``\\t`` | ASCII Horizontal Tab (TAB) | |\n+-------------------+-----------------------------------+---------+\n| ``\\uxxxx`` | Character with 16-bit hex value | (1) |\n| | *xxxx* (Unicode only) | |\n+-------------------+-----------------------------------+---------+\n| ``\\Uxxxxxxxx`` | Character with 32-bit hex value | (2) |\n| | *xxxxxxxx* (Unicode only) | |\n+-------------------+-----------------------------------+---------+\n| ``\\v`` | ASCII Vertical Tab (VT) | |\n+-------------------+-----------------------------------+---------+\n| ``\\ooo`` | Character with octal value *ooo* | (3,5) |\n+-------------------+-----------------------------------+---------+\n| ``\\xhh`` | Character with hex value *hh* | (4,5) |\n+-------------------+-----------------------------------+---------+\n\nNotes:\n\n1. Individual code units which form parts of a surrogate pair can be\n encoded using this escape sequence.\n\n2. Any Unicode character can be encoded this way, but characters\n outside the Basic Multilingual Plane (BMP) will be encoded using a\n surrogate pair if Python is compiled to use 16-bit code units (the\n default). Individual code units which form parts of a surrogate\n pair can be encoded using this escape sequence.\n\n3. As in Standard C, up to three octal digits are accepted.\n\n4. Unlike in Standard C, exactly two hex digits are required.\n\n5. In a string literal, hexadecimal and octal escapes denote the byte\n with the given value; it is not necessary that the byte encodes a\n character in the source character set. In a Unicode literal, these\n escapes denote a Unicode character with the given value.\n\nUnlike Standard C, all unrecognized escape sequences are left in the\nstring unchanged, i.e., *the backslash is left in the string*. (This\nbehavior is useful when debugging: if an escape sequence is mistyped,\nthe resulting output is more easily recognized as broken.) 
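(A short doctest-style sketch, added here for illustration and assuming CPython 2.x, of an unrecognized escape keeping its backslash, contrasted with a recognized escape and a raw string:)

   >>> '\d'                  # unrecognized escape: the backslash stays
   '\\d'
   >>> len('\n'), len(r'\n')
   (1, 2)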
It is also\nimportant to note that the escape sequences marked as "(Unicode only)"\nin the table above fall into the category of unrecognized escapes for\nnon-Unicode string literals.\n\nWhen an ``\'r\'`` or ``\'R\'`` prefix is present, a character following a\nbackslash is included in the string without change, and *all\nbackslashes are left in the string*. For example, the string literal\n``r"\\n"`` consists of two characters: a backslash and a lowercase\n``\'n\'``. String quotes can be escaped with a backslash, but the\nbackslash remains in the string; for example, ``r"\\""`` is a valid\nstring literal consisting of two characters: a backslash and a double\nquote; ``r"\\"`` is not a valid string literal (even a raw string\ncannot end in an odd number of backslashes). Specifically, *a raw\nstring cannot end in a single backslash* (since the backslash would\nescape the following quote character). Note also that a single\nbackslash followed by a newline is interpreted as those two characters\nas part of the string, *not* as a line continuation.\n\nWhen an ``\'r\'`` or ``\'R\'`` prefix is used in conjunction with a\n``\'u\'`` or ``\'U\'`` prefix, then the ``\\uXXXX`` and ``\\UXXXXXXXX``\nescape sequences are processed while *all other backslashes are left\nin the string*. For example, the string literal ``ur"\\u0062\\n"``\nconsists of three Unicode characters: \'LATIN SMALL LETTER B\', \'REVERSE\nSOLIDUS\', and \'LATIN SMALL LETTER N\'. Backslashes can be escaped with\na preceding backslash; however, both remain in the string. As a\nresult, ``\\uXXXX`` escape sequences are only recognized when there are\nan odd number of backslashes.\n', 'subscriptions': u'\nSubscriptions\n*************\n\nA subscription selects an item of a sequence (string, tuple or list)\nor mapping (dictionary) object:\n\n subscription ::= primary "[" expression_list "]"\n\nThe primary must evaluate to an object of a sequence or mapping type.\n\nIf the primary is a mapping, the expression list must evaluate to an\nobject whose value is one of the keys of the mapping, and the\nsubscription selects the value in the mapping that corresponds to that\nkey. (The expression list is a tuple except if it has exactly one\nitem.)\n\nIf the primary is a sequence, the expression (list) must evaluate to a\nplain integer. If this value is negative, the length of the sequence\nis added to it (so that, e.g., ``x[-1]`` selects the last item of\n``x``.) The resulting value must be a nonnegative integer less than\nthe number of items in the sequence, and the subscription selects the\nitem whose index is that value (counting from zero).\n\nA string\'s items are characters. A character is not a separate data\ntype but a string of exactly one character.\n', 'truth': u"\nTruth Value Testing\n*******************\n\nAny object can be tested for truth value, for use in an ``if`` or\n``while`` condition or as operand of the Boolean operations below. The\nfollowing values are considered false:\n\n* ``None``\n\n* ``False``\n\n* zero of any numeric type, for example, ``0``, ``0L``, ``0.0``,\n ``0j``.\n\n* any empty sequence, for example, ``''``, ``()``, ``[]``.\n\n* any empty mapping, for example, ``{}``.\n\n* instances of user-defined classes, if the class defines a\n ``__nonzero__()`` or ``__len__()`` method, when that method returns\n the integer zero or ``bool`` value ``False``. 
[1]\n\nAll other values are considered true --- so objects of many types are\nalways true.\n\nOperations and built-in functions that have a Boolean result always\nreturn ``0`` or ``False`` for false and ``1`` or ``True`` for true,\nunless otherwise stated. (Important exception: the Boolean operations\n``or`` and ``and`` always return one of their operands.)\n", 'try': u'\nThe ``try`` statement\n*********************\n\nThe ``try`` statement specifies exception handlers and/or cleanup code\nfor a group of statements:\n\n try_stmt ::= try1_stmt | try2_stmt\n try1_stmt ::= "try" ":" suite\n ("except" [expression [("as" | ",") target]] ":" suite)+\n ["else" ":" suite]\n ["finally" ":" suite]\n try2_stmt ::= "try" ":" suite\n "finally" ":" suite\n\nChanged in version 2.5: In previous versions of Python,\n``try``...``except``...``finally`` did not work. ``try``...``except``\nhad to be nested in ``try``...``finally``.\n\nThe ``except`` clause(s) specify one or more exception handlers. When\nno exception occurs in the ``try`` clause, no exception handler is\nexecuted. When an exception occurs in the ``try`` suite, a search for\nan exception handler is started. This search inspects the except\nclauses in turn until one is found that matches the exception. An\nexpression-less except clause, if present, must be last; it matches\nany exception. For an except clause with an expression, that\nexpression is evaluated, and the clause matches the exception if the\nresulting object is "compatible" with the exception. An object is\ncompatible with an exception if it is the class or a base class of the\nexception object, a tuple containing an item compatible with the\nexception, or, in the (deprecated) case of string exceptions, is the\nraised string itself (note that the object identities must match, i.e.\nit must be the same string object, not just a string with the same\nvalue).\n\nIf no except clause matches the exception, the search for an exception\nhandler continues in the surrounding code and on the invocation stack.\n[1]\n\nIf the evaluation of an expression in the header of an except clause\nraises an exception, the original search for a handler is canceled and\na search starts for the new exception in the surrounding code and on\nthe call stack (it is treated as if the entire ``try`` statement\nraised the exception).\n\nWhen a matching except clause is found, the exception is assigned to\nthe target specified in that except clause, if present, and the except\nclause\'s suite is executed. All except clauses must have an\nexecutable block. When the end of this block is reached, execution\ncontinues normally after the entire try statement. (This means that\nif two nested handlers exist for the same exception, and the exception\noccurs in the try clause of the inner handler, the outer handler will\nnot handle the exception.)\n\nBefore an except clause\'s suite is executed, details about the\nexception are assigned to three variables in the ``sys`` module:\n``sys.exc_type`` receives the object identifying the exception;\n``sys.exc_value`` receives the exception\'s parameter;\n``sys.exc_traceback`` receives a traceback object (see section *The\nstandard type hierarchy*) identifying the point in the program where\nthe exception occurred. These details are also available through the\n``sys.exc_info()`` function, which returns a tuple ``(exc_type,\nexc_value, exc_traceback)``. Use of the corresponding variables is\ndeprecated in favor of this function, since their use is unsafe in a\nthreaded program. 
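(A minimal sketch, not from the original text and assuming Python 2, of catching an exception and inspecting it through ``sys.exc_info()``; the variable names are invented for illustration:)

   >>> import sys
   >>> try:
   ...     1 / 0
   ... except ZeroDivisionError:
   ...     exc_type, exc_value, exc_traceback = sys.exc_info()
   ...     print exc_type.__name__
   ...
   ZeroDivisionError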
As of Python 1.5, the variables are restored to\ntheir previous values (before the call) when returning from a function\nthat handled an exception.\n\nThe optional ``else`` clause is executed if and when control flows off\nthe end of the ``try`` clause. [2] Exceptions in the ``else`` clause\nare not handled by the preceding ``except`` clauses.\n\nIf ``finally`` is present, it specifies a \'cleanup\' handler. The\n``try`` clause is executed, including any ``except`` and ``else``\nclauses. If an exception occurs in any of the clauses and is not\nhandled, the exception is temporarily saved. The ``finally`` clause is\nexecuted. If there is a saved exception, it is re-raised at the end\nof the ``finally`` clause. If the ``finally`` clause raises another\nexception or executes a ``return`` or ``break`` statement, the saved\nexception is lost. The exception information is not available to the\nprogram during execution of the ``finally`` clause.\n\nWhen a ``return``, ``break`` or ``continue`` statement is executed in\nthe ``try`` suite of a ``try``...``finally`` statement, the\n``finally`` clause is also executed \'on the way out.\' A ``continue``\nstatement is illegal in the ``finally`` clause. (The reason is a\nproblem with the current implementation --- this restriction may be\nlifted in the future).\n\nAdditional information on exceptions can be found in section\n*Exceptions*, and information on using the ``raise`` statement to\ngenerate exceptions may be found in section *The raise statement*.\n', - 'types': u'\nThe standard type hierarchy\n***************************\n\nBelow is a list of the types that are built into Python. Extension\nmodules (written in C, Java, or other languages, depending on the\nimplementation) can define additional types. Future versions of\nPython may add types to the type hierarchy (e.g., rational numbers,\nefficiently stored arrays of integers, etc.).\n\nSome of the type descriptions below contain a paragraph listing\n\'special attributes.\' These are attributes that provide access to the\nimplementation and are not intended for general use. Their definition\nmay change in the future.\n\nNone\n This type has a single value. There is a single object with this\n value. This object is accessed through the built-in name ``None``.\n It is used to signify the absence of a value in many situations,\n e.g., it is returned from functions that don\'t explicitly return\n anything. Its truth value is false.\n\nNotImplemented\n This type has a single value. There is a single object with this\n value. This object is accessed through the built-in name\n ``NotImplemented``. Numeric methods and rich comparison methods may\n return this value if they do not implement the operation for the\n operands provided. (The interpreter will then try the reflected\n operation, or some other fallback, depending on the operator.) Its\n truth value is true.\n\nEllipsis\n This type has a single value. There is a single object with this\n value. This object is accessed through the built-in name\n ``Ellipsis``. It is used to indicate the presence of the ``...``\n syntax in a slice. Its truth value is true.\n\n``numbers.Number``\n These are created by numeric literals and returned as results by\n arithmetic operators and arithmetic built-in functions. 
Numeric\n objects are immutable; once created their value never changes.\n Python numbers are of course strongly related to mathematical\n numbers, but subject to the limitations of numerical representation\n in computers.\n\n Python distinguishes between integers, floating point numbers, and\n complex numbers:\n\n ``numbers.Integral``\n These represent elements from the mathematical set of integers\n (positive and negative).\n\n There are three types of integers:\n\n Plain integers\n These represent numbers in the range -2147483648 through\n 2147483647. (The range may be larger on machines with a\n larger natural word size, but not smaller.) When the result\n of an operation would fall outside this range, the result is\n normally returned as a long integer (in some cases, the\n exception ``OverflowError`` is raised instead). For the\n purpose of shift and mask operations, integers are assumed to\n have a binary, 2\'s complement notation using 32 or more bits,\n and hiding no bits from the user (i.e., all 4294967296\n different bit patterns correspond to different values).\n\n Long integers\n These represent numbers in an unlimited range, subject to\n available (virtual) memory only. For the purpose of shift\n and mask operations, a binary representation is assumed, and\n negative numbers are represented in a variant of 2\'s\n complement which gives the illusion of an infinite string of\n sign bits extending to the left.\n\n Booleans\n These represent the truth values False and True. The two\n objects representing the values False and True are the only\n Boolean objects. The Boolean type is a subtype of plain\n integers, and Boolean values behave like the values 0 and 1,\n respectively, in almost all contexts, the exception being\n that when converted to a string, the strings ``"False"`` or\n ``"True"`` are returned, respectively.\n\n The rules for integer representation are intended to give the\n most meaningful interpretation of shift and mask operations\n involving negative integers and the least surprises when\n switching between the plain and long integer domains. Any\n operation, if it yields a result in the plain integer domain,\n will yield the same result in the long integer domain or when\n using mixed operands. The switch between domains is transparent\n to the programmer.\n\n ``numbers.Real`` (``float``)\n These represent machine-level double precision floating point\n numbers. You are at the mercy of the underlying machine\n architecture (and C or Java implementation) for the accepted\n range and handling of overflow. Python does not support single-\n precision floating point numbers; the savings in processor and\n memory usage that are usually the reason for using these is\n dwarfed by the overhead of using objects in Python, so there is\n no reason to complicate the language with two kinds of floating\n point numbers.\n\n ``numbers.Complex``\n These represent complex numbers as a pair of machine-level\n double precision floating point numbers. The same caveats apply\n as for floating point numbers. The real and imaginary parts of a\n complex number ``z`` can be retrieved through the read-only\n attributes ``z.real`` and ``z.imag``.\n\nSequences\n These represent finite ordered sets indexed by non-negative\n numbers. The built-in function ``len()`` returns the number of\n items of a sequence. When the length of a sequence is *n*, the\n index set contains the numbers 0, 1, ..., *n*-1. 
Item *i* of\n sequence *a* is selected by ``a[i]``.\n\n Sequences also support slicing: ``a[i:j]`` selects all items with\n index *k* such that *i* ``<=`` *k* ``<`` *j*. When used as an\n expression, a slice is a sequence of the same type. This implies\n that the index set is renumbered so that it starts at 0.\n\n Some sequences also support "extended slicing" with a third "step"\n parameter: ``a[i:j:k]`` selects all items of *a* with index *x*\n where ``x = i + n*k``, *n* ``>=`` ``0`` and *i* ``<=`` *x* ``<``\n *j*.\n\n Sequences are distinguished according to their mutability:\n\n Immutable sequences\n An object of an immutable sequence type cannot change once it is\n created. (If the object contains references to other objects,\n these other objects may be mutable and may be changed; however,\n the collection of objects directly referenced by an immutable\n object cannot change.)\n\n The following types are immutable sequences:\n\n Strings\n The items of a string are characters. There is no separate\n character type; a character is represented by a string of one\n item. Characters represent (at least) 8-bit bytes. The\n built-in functions ``chr()`` and ``ord()`` convert between\n characters and nonnegative integers representing the byte\n values. Bytes with the values 0-127 usually represent the\n corresponding ASCII values, but the interpretation of values\n is up to the program. The string data type is also used to\n represent arrays of bytes, e.g., to hold data read from a\n file.\n\n (On systems whose native character set is not ASCII, strings\n may use EBCDIC in their internal representation, provided the\n functions ``chr()`` and ``ord()`` implement a mapping between\n ASCII and EBCDIC, and string comparison preserves the ASCII\n order. Or perhaps someone can propose a better rule?)\n\n Unicode\n The items of a Unicode object are Unicode code units. A\n Unicode code unit is represented by a Unicode object of one\n item and can hold either a 16-bit or 32-bit value\n representing a Unicode ordinal (the maximum value for the\n ordinal is given in ``sys.maxunicode``, and depends on how\n Python is configured at compile time). Surrogate pairs may\n be present in the Unicode object, and will be reported as two\n separate items. The built-in functions ``unichr()`` and\n ``ord()`` convert between code units and nonnegative integers\n representing the Unicode ordinals as defined in the Unicode\n Standard 3.0. Conversion from and to other encodings are\n possible through the Unicode method ``encode()`` and the\n built-in function ``unicode()``.\n\n Tuples\n The items of a tuple are arbitrary Python objects. Tuples of\n two or more items are formed by comma-separated lists of\n expressions. A tuple of one item (a \'singleton\') can be\n formed by affixing a comma to an expression (an expression by\n itself does not create a tuple, since parentheses must be\n usable for grouping of expressions). An empty tuple can be\n formed by an empty pair of parentheses.\n\n Mutable sequences\n Mutable sequences can be changed after they are created. The\n subscription and slicing notations can be used as the target of\n assignment and ``del`` (delete) statements.\n\n There are currently two intrinsic mutable sequence types:\n\n Lists\n The items of a list are arbitrary Python objects. Lists are\n formed by placing a comma-separated list of expressions in\n square brackets. 
(Note that there are no special cases needed\n to form lists of length 0 or 1.)\n\n Byte Arrays\n A bytearray object is a mutable array. They are created by\n the built-in ``bytearray()`` constructor. Aside from being\n mutable (and hence unhashable), byte arrays otherwise provide\n the same interface and functionality as immutable bytes\n objects.\n\n The extension module ``array`` provides an additional example of\n a mutable sequence type.\n\nSet types\n These represent unordered, finite sets of unique, immutable\n objects. As such, they cannot be indexed by any subscript. However,\n they can be iterated over, and the built-in function ``len()``\n returns the number of items in a set. Common uses for sets are fast\n membership testing, removing duplicates from a sequence, and\n computing mathematical operations such as intersection, union,\n difference, and symmetric difference.\n\n For set elements, the same immutability rules apply as for\n dictionary keys. Note that numeric types obey the normal rules for\n numeric comparison: if two numbers compare equal (e.g., ``1`` and\n ``1.0``), only one of them can be contained in a set.\n\n There are currently two intrinsic set types:\n\n Sets\n These represent a mutable set. They are created by the built-in\n ``set()`` constructor and can be modified afterwards by several\n methods, such as ``add()``.\n\n Frozen sets\n These represent an immutable set. They are created by the\n built-in ``frozenset()`` constructor. As a frozenset is\n immutable and *hashable*, it can be used again as an element of\n another set, or as a dictionary key.\n\nMappings\n These represent finite sets of objects indexed by arbitrary index\n sets. The subscript notation ``a[k]`` selects the item indexed by\n ``k`` from the mapping ``a``; this can be used in expressions and\n as the target of assignments or ``del`` statements. The built-in\n function ``len()`` returns the number of items in a mapping.\n\n There is currently a single intrinsic mapping type:\n\n Dictionaries\n These represent finite sets of objects indexed by nearly\n arbitrary values. The only types of values not acceptable as\n keys are values containing lists or dictionaries or other\n mutable types that are compared by value rather than by object\n identity, the reason being that the efficient implementation of\n dictionaries requires a key\'s hash value to remain constant.\n Numeric types used for keys obey the normal rules for numeric\n comparison: if two numbers compare equal (e.g., ``1`` and\n ``1.0``) then they can be used interchangeably to index the same\n dictionary entry.\n\n Dictionaries are mutable; they can be created by the ``{...}``\n notation (see section *Dictionary displays*).\n\n The extension modules ``dbm``, ``gdbm``, and ``bsddb`` provide\n additional examples of mapping types.\n\nCallable types\n These are the types to which the function call operation (see\n section *Calls*) can be applied:\n\n User-defined functions\n A user-defined function object is created by a function\n definition (see section *Function definitions*). 
It should be\n called with an argument list containing the same number of items\n as the function\'s formal parameter list.\n\n Special attributes:\n\n +-------------------------+---------------------------------+-------------+\n | Attribute | Meaning | |\n +=========================+=================================+=============+\n | ``func_doc`` | The function\'s documentation | Writable |\n | | string, or ``None`` if | |\n | | unavailable | |\n +-------------------------+---------------------------------+-------------+\n | ``__doc__`` | Another way of spelling | Writable |\n | | ``func_doc`` | |\n +-------------------------+---------------------------------+-------------+\n | ``func_name`` | The function\'s name | Writable |\n +-------------------------+---------------------------------+-------------+\n | ``__name__`` | Another way of spelling | Writable |\n | | ``func_name`` | |\n +-------------------------+---------------------------------+-------------+\n | ``__module__`` | The name of the module the | Writable |\n | | function was defined in, or | |\n | | ``None`` if unavailable. | |\n +-------------------------+---------------------------------+-------------+\n | ``func_defaults`` | A tuple containing default | Writable |\n | | argument values for those | |\n | | arguments that have defaults, | |\n | | or ``None`` if no arguments | |\n | | have a default value | |\n +-------------------------+---------------------------------+-------------+\n | ``func_code`` | The code object representing | Writable |\n | | the compiled function body. | |\n +-------------------------+---------------------------------+-------------+\n | ``func_globals`` | A reference to the dictionary | Read-only |\n | | that holds the function\'s | |\n | | global variables --- the global | |\n | | namespace of the module in | |\n | | which the function was defined. | |\n +-------------------------+---------------------------------+-------------+\n | ``func_dict`` | The namespace supporting | Writable |\n | | arbitrary function attributes. | |\n +-------------------------+---------------------------------+-------------+\n | ``func_closure`` | ``None`` or a tuple of cells | Read-only |\n | | that contain bindings for the | |\n | | function\'s free variables. | |\n +-------------------------+---------------------------------+-------------+\n\n Most of the attributes labelled "Writable" check the type of the\n assigned value.\n\n Changed in version 2.4: ``func_name`` is now writable.\n\n Function objects also support getting and setting arbitrary\n attributes, which can be used, for example, to attach metadata\n to functions. Regular attribute dot-notation is used to get and\n set such attributes. *Note that the current implementation only\n supports function attributes on user-defined functions. 
Function\n attributes on built-in functions may be supported in the\n future.*\n\n Additional information about a function\'s definition can be\n retrieved from its code object; see the description of internal\n types below.\n\n User-defined methods\n A user-defined method object combines a class, a class instance\n (or ``None``) and any callable object (normally a user-defined\n function).\n\n Special read-only attributes: ``im_self`` is the class instance\n object, ``im_func`` is the function object; ``im_class`` is the\n class of ``im_self`` for bound methods or the class that asked\n for the method for unbound methods; ``__doc__`` is the method\'s\n documentation (same as ``im_func.__doc__``); ``__name__`` is the\n method name (same as ``im_func.__name__``); ``__module__`` is\n the name of the module the method was defined in, or ``None`` if\n unavailable.\n\n Changed in version 2.2: ``im_self`` used to refer to the class\n that defined the method.\n\n Changed in version 2.6: For 3.0 forward-compatibility,\n ``im_func`` is also available as ``__func__``, and ``im_self``\n as ``__self__``.\n\n Methods also support accessing (but not setting) the arbitrary\n function attributes on the underlying function object.\n\n User-defined method objects may be created when getting an\n attribute of a class (perhaps via an instance of that class), if\n that attribute is a user-defined function object, an unbound\n user-defined method object, or a class method object. When the\n attribute is a user-defined method object, a new method object\n is only created if the class from which it is being retrieved is\n the same as, or a derived class of, the class stored in the\n original method object; otherwise, the original method object is\n used as it is.\n\n When a user-defined method object is created by retrieving a\n user-defined function object from a class, its ``im_self``\n attribute is ``None`` and the method object is said to be\n unbound. When one is created by retrieving a user-defined\n function object from a class via one of its instances, its\n ``im_self`` attribute is the instance, and the method object is\n said to be bound. In either case, the new method\'s ``im_class``\n attribute is the class from which the retrieval takes place, and\n its ``im_func`` attribute is the original function object.\n\n When a user-defined method object is created by retrieving\n another method object from a class or instance, the behaviour is\n the same as for a function object, except that the ``im_func``\n attribute of the new instance is not the original method object\n but its ``im_func`` attribute.\n\n When a user-defined method object is created by retrieving a\n class method object from a class or instance, its ``im_self``\n attribute is the class itself (the same as the ``im_class``\n attribute), and its ``im_func`` attribute is the function object\n underlying the class method.\n\n When an unbound user-defined method object is called, the\n underlying function (``im_func``) is called, with the\n restriction that the first argument must be an instance of the\n proper class (``im_class``) or of a derived class thereof.\n\n When a bound user-defined method object is called, the\n underlying function (``im_func``) is called, inserting the class\n instance (``im_self``) in front of the argument list. 
For\n instance, when ``C`` is a class which contains a definition for\n a function ``f()``, and ``x`` is an instance of ``C``, calling\n ``x.f(1)`` is equivalent to calling ``C.f(x, 1)``.\n\n When a user-defined method object is derived from a class method\n object, the "class instance" stored in ``im_self`` will actually\n be the class itself, so that calling either ``x.f(1)`` or\n ``C.f(1)`` is equivalent to calling ``f(C,1)`` where ``f`` is\n the underlying function.\n\n Note that the transformation from function object to (unbound or\n bound) method object happens each time the attribute is\n retrieved from the class or instance. In some cases, a fruitful\n optimization is to assign the attribute to a local variable and\n call that local variable. Also notice that this transformation\n only happens for user-defined functions; other callable objects\n (and all non-callable objects) are retrieved without\n transformation. It is also important to note that user-defined\n functions which are attributes of a class instance are not\n converted to bound methods; this *only* happens when the\n function is an attribute of the class.\n\n Generator functions\n A function or method which uses the ``yield`` statement (see\n section *The yield statement*) is called a *generator function*.\n Such a function, when called, always returns an iterator object\n which can be used to execute the body of the function: calling\n the iterator\'s ``next()`` method will cause the function to\n execute until it provides a value using the ``yield`` statement.\n When the function executes a ``return`` statement or falls off\n the end, a ``StopIteration`` exception is raised and the\n iterator will have reached the end of the set of values to be\n returned.\n\n Built-in functions\n A built-in function object is a wrapper around a C function.\n Examples of built-in functions are ``len()`` and ``math.sin()``\n (``math`` is a standard built-in module). The number and type of\n the arguments are determined by the C function. Special read-\n only attributes: ``__doc__`` is the function\'s documentation\n string, or ``None`` if unavailable; ``__name__`` is the\n function\'s name; ``__self__`` is set to ``None`` (but see the\n next item); ``__module__`` is the name of the module the\n function was defined in or ``None`` if unavailable.\n\n Built-in methods\n This is really a different disguise of a built-in function, this\n time containing an object passed to the C function as an\n implicit extra argument. An example of a built-in method is\n ``alist.append()``, assuming *alist* is a list object. In this\n case, the special read-only attribute ``__self__`` is set to the\n object denoted by *list*.\n\n Class Types\n Class types, or "new-style classes," are callable. These\n objects normally act as factories for new instances of\n themselves, but variations are possible for class types that\n override ``__new__()``. The arguments of the call are passed to\n ``__new__()`` and, in the typical case, to ``__init__()`` to\n initialize the new instance.\n\n Classic Classes\n Class objects are described below. When a class object is\n called, a new class instance (also described below) is created\n and returned. This implies a call to the class\'s ``__init__()``\n method if it has one. Any arguments are passed on to the\n ``__init__()`` method. If there is no ``__init__()`` method,\n the class must be called without arguments.\n\n Class instances\n Class instances are described below. 
Class instances are\n callable only when the class has a ``__call__()`` method;\n ``x(arguments)`` is a shorthand for ``x.__call__(arguments)``.\n\nModules\n Modules are imported by the ``import`` statement (see section *The\n import statement*). A module object has a namespace implemented by\n a dictionary object (this is the dictionary referenced by the\n func_globals attribute of functions defined in the module).\n Attribute references are translated to lookups in this dictionary,\n e.g., ``m.x`` is equivalent to ``m.__dict__["x"]``. A module object\n does not contain the code object used to initialize the module\n (since it isn\'t needed once the initialization is done).\n\n Attribute assignment updates the module\'s namespace dictionary,\n e.g., ``m.x = 1`` is equivalent to ``m.__dict__["x"] = 1``.\n\n Special read-only attribute: ``__dict__`` is the module\'s namespace\n as a dictionary object.\n\n Predefined (writable) attributes: ``__name__`` is the module\'s\n name; ``__doc__`` is the module\'s documentation string, or ``None``\n if unavailable; ``__file__`` is the pathname of the file from which\n the module was loaded, if it was loaded from a file. The\n ``__file__`` attribute is not present for C modules that are\n statically linked into the interpreter; for extension modules\n loaded dynamically from a shared library, it is the pathname of the\n shared library file.\n\nClasses\n Both class types (new-style classes) and class objects (old-\n style/classic classes) are typically created by class definitions\n (see section *Class definitions*). A class has a namespace\n implemented by a dictionary object. Class attribute references are\n translated to lookups in this dictionary, e.g., ``C.x`` is\n translated to ``C.__dict__["x"]`` (although for new-style classes\n in particular there are a number of hooks which allow for other\n means of locating attributes). When the attribute name is not found\n there, the attribute search continues in the base classes. For\n old-style classes, the search is depth-first, left-to-right in the\n order of occurrence in the base class list. New-style classes use\n the more complex C3 method resolution order which behaves correctly\n even in the presence of \'diamond\' inheritance structures where\n there are multiple inheritance paths leading back to a common\n ancestor. Additional details on the C3 MRO used by new-style\n classes can be found in the documentation accompanying the 2.3\n release at http://www.python.org/download/releases/2.3/mro/.\n\n When a class attribute reference (for class ``C``, say) would yield\n a user-defined function object or an unbound user-defined method\n object whose associated class is either ``C`` or one of its base\n classes, it is transformed into an unbound user-defined method\n object whose ``im_class`` attribute is ``C``. When it would yield a\n class method object, it is transformed into a bound user-defined\n method object whose ``im_class`` and ``im_self`` attributes are\n both ``C``. 
When it would yield a static method object, it is\n transformed into the object wrapped by the static method object.\n See section *Implementing Descriptors* for another way in which\n attributes retrieved from a class may differ from those actually\n contained in its ``__dict__`` (note that only new-style classes\n support descriptors).\n\n Class attribute assignments update the class\'s dictionary, never\n the dictionary of a base class.\n\n A class object can be called (see above) to yield a class instance\n (see below).\n\n Special attributes: ``__name__`` is the class name; ``__module__``\n is the module name in which the class was defined; ``__dict__`` is\n the dictionary containing the class\'s namespace; ``__bases__`` is a\n tuple (possibly empty or a singleton) containing the base classes,\n in the order of their occurrence in the base class list;\n ``__doc__`` is the class\'s documentation string, or None if\n undefined.\n\nClass instances\n A class instance is created by calling a class object (see above).\n A class instance has a namespace implemented as a dictionary which\n is the first place in which attribute references are searched.\n When an attribute is not found there, and the instance\'s class has\n an attribute by that name, the search continues with the class\n attributes. If a class attribute is found that is a user-defined\n function object or an unbound user-defined method object whose\n associated class is the class (call it ``C``) of the instance for\n which the attribute reference was initiated or one of its bases, it\n is transformed into a bound user-defined method object whose\n ``im_class`` attribute is ``C`` and whose ``im_self`` attribute is\n the instance. Static method and class method objects are also\n transformed, as if they had been retrieved from class ``C``; see\n above under "Classes". See section *Implementing Descriptors* for\n another way in which attributes of a class retrieved via its\n instances may differ from the objects actually stored in the\n class\'s ``__dict__``. If no class attribute is found, and the\n object\'s class has a ``__getattr__()`` method, that is called to\n satisfy the lookup.\n\n Attribute assignments and deletions update the instance\'s\n dictionary, never a class\'s dictionary. If the class has a\n ``__setattr__()`` or ``__delattr__()`` method, this is called\n instead of updating the instance dictionary directly.\n\n Class instances can pretend to be numbers, sequences, or mappings\n if they have methods with certain special names. See section\n *Special method names*.\n\n Special attributes: ``__dict__`` is the attribute dictionary;\n ``__class__`` is the instance\'s class.\n\nFiles\n A file object represents an open file. File objects are created by\n the ``open()`` built-in function, and also by ``os.popen()``,\n ``os.fdopen()``, and the ``makefile()`` method of socket objects\n (and perhaps by other functions or methods provided by extension\n modules). The objects ``sys.stdin``, ``sys.stdout`` and\n ``sys.stderr`` are initialized to file objects corresponding to the\n interpreter\'s standard input, output and error streams. See *File\n Objects* for complete documentation of file objects.\n\nInternal types\n A few types used internally by the interpreter are exposed to the\n user. Their definitions may change with future versions of the\n interpreter, but they are mentioned here for completeness.\n\n Code objects\n Code objects represent *byte-compiled* executable Python code,\n or *bytecode*. 
The difference between a code object and a\n function object is that the function object contains an explicit\n reference to the function\'s globals (the module in which it was\n defined), while a code object contains no context; also the\n default argument values are stored in the function object, not\n in the code object (because they represent values calculated at\n run-time). Unlike function objects, code objects are immutable\n and contain no references (directly or indirectly) to mutable\n objects.\n\n Special read-only attributes: ``co_name`` gives the function\n name; ``co_argcount`` is the number of positional arguments\n (including arguments with default values); ``co_nlocals`` is the\n number of local variables used by the function (including\n arguments); ``co_varnames`` is a tuple containing the names of\n the local variables (starting with the argument names);\n ``co_cellvars`` is a tuple containing the names of local\n variables that are referenced by nested functions;\n ``co_freevars`` is a tuple containing the names of free\n variables; ``co_code`` is a string representing the sequence of\n bytecode instructions; ``co_consts`` is a tuple containing the\n literals used by the bytecode; ``co_names`` is a tuple\n containing the names used by the bytecode; ``co_filename`` is\n the filename from which the code was compiled;\n ``co_firstlineno`` is the first line number of the function;\n ``co_lnotab`` is a string encoding the mapping from bytecode\n offsets to line numbers (for details see the source code of the\n interpreter); ``co_stacksize`` is the required stack size\n (including local variables); ``co_flags`` is an integer encoding\n a number of flags for the interpreter.\n\n The following flag bits are defined for ``co_flags``: bit\n ``0x04`` is set if the function uses the ``*arguments`` syntax\n to accept an arbitrary number of positional arguments; bit\n ``0x08`` is set if the function uses the ``**keywords`` syntax\n to accept arbitrary keyword arguments; bit ``0x20`` is set if\n the function is a generator.\n\n Future feature declarations (``from __future__ import\n division``) also use bits in ``co_flags`` to indicate whether a\n code object was compiled with a particular feature enabled: bit\n ``0x2000`` is set if the function was compiled with future\n division enabled; bits ``0x10`` and ``0x1000`` were used in\n earlier versions of Python.\n\n Other bits in ``co_flags`` are reserved for internal use.\n\n If a code object represents a function, the first item in\n ``co_consts`` is the documentation string of the function, or\n ``None`` if undefined.\n\n Frame objects\n Frame objects represent execution frames. 
They may occur in\n traceback objects (see below).\n\n Special read-only attributes: ``f_back`` is to the previous\n stack frame (towards the caller), or ``None`` if this is the\n bottom stack frame; ``f_code`` is the code object being executed\n in this frame; ``f_locals`` is the dictionary used to look up\n local variables; ``f_globals`` is used for global variables;\n ``f_builtins`` is used for built-in (intrinsic) names;\n ``f_restricted`` is a flag indicating whether the function is\n executing in restricted execution mode; ``f_lasti`` gives the\n precise instruction (this is an index into the bytecode string\n of the code object).\n\n Special writable attributes: ``f_trace``, if not ``None``, is a\n function called at the start of each source code line (this is\n used by the debugger); ``f_exc_type``, ``f_exc_value``,\n ``f_exc_traceback`` represent the last exception raised in the\n parent frame provided another exception was ever raised in the\n current frame (in all other cases they are None); ``f_lineno``\n is the current line number of the frame --- writing to this from\n within a trace function jumps to the given line (only for the\n bottom-most frame). A debugger can implement a Jump command\n (aka Set Next Statement) by writing to f_lineno.\n\n Traceback objects\n Traceback objects represent a stack trace of an exception. A\n traceback object is created when an exception occurs. When the\n search for an exception handler unwinds the execution stack, at\n each unwound level a traceback object is inserted in front of\n the current traceback. When an exception handler is entered,\n the stack trace is made available to the program. (See section\n *The try statement*.) It is accessible as ``sys.exc_traceback``,\n and also as the third item of the tuple returned by\n ``sys.exc_info()``. The latter is the preferred interface,\n since it works correctly when the program is using multiple\n threads. When the program contains no suitable handler, the\n stack trace is written (nicely formatted) to the standard error\n stream; if the interpreter is interactive, it is also made\n available to the user as ``sys.last_traceback``.\n\n Special read-only attributes: ``tb_next`` is the next level in\n the stack trace (towards the frame where the exception\n occurred), or ``None`` if there is no next level; ``tb_frame``\n points to the execution frame of the current level;\n ``tb_lineno`` gives the line number where the exception\n occurred; ``tb_lasti`` indicates the precise instruction. The\n line number and last instruction in the traceback may differ\n from the line number of its frame object if the exception\n occurred in a ``try`` statement with no matching except clause\n or with a finally clause.\n\n Slice objects\n Slice objects are used to represent slices when *extended slice\n syntax* is used. This is a slice using two colons, or multiple\n slices or ellipses separated by commas, e.g., ``a[i:j:step]``,\n ``a[i:j, k:l]``, or ``a[..., i:j]``. They are also created by\n the built-in ``slice()`` function.\n\n Special read-only attributes: ``start`` is the lower bound;\n ``stop`` is the upper bound; ``step`` is the step value; each is\n ``None`` if omitted. These attributes can have any type.\n\n Slice objects support one method:\n\n slice.indices(self, length)\n\n This method takes a single integer argument *length* and\n computes information about the extended slice that the slice\n object would describe if applied to a sequence of *length*\n items. 
It returns a tuple of three integers; respectively\n these are the *start* and *stop* indices and the *step* or\n stride length of the slice. Missing or out-of-bounds indices\n are handled in a manner consistent with regular slices.\n\n New in version 2.3.\n\n Static method objects\n Static method objects provide a way of defeating the\n transformation of function objects to method objects described\n above. A static method object is a wrapper around any other\n object, usually a user-defined method object. When a static\n method object is retrieved from a class or a class instance, the\n object actually returned is the wrapped object, which is not\n subject to any further transformation. Static method objects are\n not themselves callable, although the objects they wrap usually\n are. Static method objects are created by the built-in\n ``staticmethod()`` constructor.\n\n Class method objects\n A class method object, like a static method object, is a wrapper\n around another object that alters the way in which that object\n is retrieved from classes and class instances. The behaviour of\n class method objects upon such retrieval is described above,\n under "User-defined methods". Class method objects are created\n by the built-in ``classmethod()`` constructor.\n', + 'types': u'\nThe standard type hierarchy\n***************************\n\nBelow is a list of the types that are built into Python. Extension\nmodules (written in C, Java, or other languages, depending on the\nimplementation) can define additional types. Future versions of\nPython may add types to the type hierarchy (e.g., rational numbers,\nefficiently stored arrays of integers, etc.).\n\nSome of the type descriptions below contain a paragraph listing\n\'special attributes.\' These are attributes that provide access to the\nimplementation and are not intended for general use. Their definition\nmay change in the future.\n\nNone\n This type has a single value. There is a single object with this\n value. This object is accessed through the built-in name ``None``.\n It is used to signify the absence of a value in many situations,\n e.g., it is returned from functions that don\'t explicitly return\n anything. Its truth value is false.\n\nNotImplemented\n This type has a single value. There is a single object with this\n value. This object is accessed through the built-in name\n ``NotImplemented``. Numeric methods and rich comparison methods may\n return this value if they do not implement the operation for the\n operands provided. (The interpreter will then try the reflected\n operation, or some other fallback, depending on the operator.) Its\n truth value is true.\n\nEllipsis\n This type has a single value. There is a single object with this\n value. This object is accessed through the built-in name\n ``Ellipsis``. It is used to indicate the presence of the ``...``\n syntax in a slice. Its truth value is true.\n\n``numbers.Number``\n These are created by numeric literals and returned as results by\n arithmetic operators and arithmetic built-in functions. 
Numeric\n objects are immutable; once created their value never changes.\n Python numbers are of course strongly related to mathematical\n numbers, but subject to the limitations of numerical representation\n in computers.\n\n Python distinguishes between integers, floating point numbers, and\n complex numbers:\n\n ``numbers.Integral``\n These represent elements from the mathematical set of integers\n (positive and negative).\n\n There are three types of integers:\n\n Plain integers\n These represent numbers in the range -2147483648 through\n 2147483647. (The range may be larger on machines with a\n larger natural word size, but not smaller.) When the result\n of an operation would fall outside this range, the result is\n normally returned as a long integer (in some cases, the\n exception ``OverflowError`` is raised instead). For the\n purpose of shift and mask operations, integers are assumed to\n have a binary, 2\'s complement notation using 32 or more bits,\n and hiding no bits from the user (i.e., all 4294967296\n different bit patterns correspond to different values).\n\n Long integers\n These represent numbers in an unlimited range, subject to\n available (virtual) memory only. For the purpose of shift\n and mask operations, a binary representation is assumed, and\n negative numbers are represented in a variant of 2\'s\n complement which gives the illusion of an infinite string of\n sign bits extending to the left.\n\n Booleans\n These represent the truth values False and True. The two\n objects representing the values False and True are the only\n Boolean objects. The Boolean type is a subtype of plain\n integers, and Boolean values behave like the values 0 and 1,\n respectively, in almost all contexts, the exception being\n that when converted to a string, the strings ``"False"`` or\n ``"True"`` are returned, respectively.\n\n The rules for integer representation are intended to give the\n most meaningful interpretation of shift and mask operations\n involving negative integers and the least surprises when\n switching between the plain and long integer domains. Any\n operation, if it yields a result in the plain integer domain,\n will yield the same result in the long integer domain or when\n using mixed operands. The switch between domains is transparent\n to the programmer.\n\n ``numbers.Real`` (``float``)\n These represent machine-level double precision floating point\n numbers. You are at the mercy of the underlying machine\n architecture (and C or Java implementation) for the accepted\n range and handling of overflow. Python does not support single-\n precision floating point numbers; the savings in processor and\n memory usage that are usually the reason for using these is\n dwarfed by the overhead of using objects in Python, so there is\n no reason to complicate the language with two kinds of floating\n point numbers.\n\n ``numbers.Complex``\n These represent complex numbers as a pair of machine-level\n double precision floating point numbers. The same caveats apply\n as for floating point numbers. The real and imaginary parts of a\n complex number ``z`` can be retrieved through the read-only\n attributes ``z.real`` and ``z.imag``.\n\nSequences\n These represent finite ordered sets indexed by non-negative\n numbers. The built-in function ``len()`` returns the number of\n items of a sequence. When the length of a sequence is *n*, the\n index set contains the numbers 0, 1, ..., *n*-1. 
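(A quick doctest-style sketch, added for illustration with an invented list ``a``, of sequence length and item selection as described in this section:)

   >>> a = ['p', 'q', 'r']
   >>> len(a)
   3
   >>> a[0], a[len(a) - 1]
   ('p', 'r')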
Item *i* of\n sequence *a* is selected by ``a[i]``.\n\n Sequences also support slicing: ``a[i:j]`` selects all items with\n index *k* such that *i* ``<=`` *k* ``<`` *j*. When used as an\n expression, a slice is a sequence of the same type. This implies\n that the index set is renumbered so that it starts at 0.\n\n Some sequences also support "extended slicing" with a third "step"\n parameter: ``a[i:j:k]`` selects all items of *a* with index *x*\n where ``x = i + n*k``, *n* ``>=`` ``0`` and *i* ``<=`` *x* ``<``\n *j*.\n\n Sequences are distinguished according to their mutability:\n\n Immutable sequences\n An object of an immutable sequence type cannot change once it is\n created. (If the object contains references to other objects,\n these other objects may be mutable and may be changed; however,\n the collection of objects directly referenced by an immutable\n object cannot change.)\n\n The following types are immutable sequences:\n\n Strings\n The items of a string are characters. There is no separate\n character type; a character is represented by a string of one\n item. Characters represent (at least) 8-bit bytes. The\n built-in functions ``chr()`` and ``ord()`` convert between\n characters and nonnegative integers representing the byte\n values. Bytes with the values 0-127 usually represent the\n corresponding ASCII values, but the interpretation of values\n is up to the program. The string data type is also used to\n represent arrays of bytes, e.g., to hold data read from a\n file.\n\n (On systems whose native character set is not ASCII, strings\n may use EBCDIC in their internal representation, provided the\n functions ``chr()`` and ``ord()`` implement a mapping between\n ASCII and EBCDIC, and string comparison preserves the ASCII\n order. Or perhaps someone can propose a better rule?)\n\n Unicode\n The items of a Unicode object are Unicode code units. A\n Unicode code unit is represented by a Unicode object of one\n item and can hold either a 16-bit or 32-bit value\n representing a Unicode ordinal (the maximum value for the\n ordinal is given in ``sys.maxunicode``, and depends on how\n Python is configured at compile time). Surrogate pairs may\n be present in the Unicode object, and will be reported as two\n separate items. The built-in functions ``unichr()`` and\n ``ord()`` convert between code units and nonnegative integers\n representing the Unicode ordinals as defined in the Unicode\n Standard 3.0. Conversion from and to other encodings are\n possible through the Unicode method ``encode()`` and the\n built-in function ``unicode()``.\n\n Tuples\n The items of a tuple are arbitrary Python objects. Tuples of\n two or more items are formed by comma-separated lists of\n expressions. A tuple of one item (a \'singleton\') can be\n formed by affixing a comma to an expression (an expression by\n itself does not create a tuple, since parentheses must be\n usable for grouping of expressions). An empty tuple can be\n formed by an empty pair of parentheses.\n\n Mutable sequences\n Mutable sequences can be changed after they are created. The\n subscription and slicing notations can be used as the target of\n assignment and ``del`` (delete) statements.\n\n There are currently two intrinsic mutable sequence types:\n\n Lists\n The items of a list are arbitrary Python objects. Lists are\n formed by placing a comma-separated list of expressions in\n square brackets. 
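(An illustrative doctest-style sketch, not in the original text, of the tuple and list displays just described:)

   >>> type(())
   <type 'tuple'>
   >>> (1,)         # a one-item tuple needs the trailing comma
   (1,)
   >>> (1)          # just a parenthesized expression
   1
   >>> [], [1]
   ([], [1])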
(Note that there are no special cases needed\n to form lists of length 0 or 1.)\n\n Byte Arrays\n A bytearray object is a mutable array. They are created by\n the built-in ``bytearray()`` constructor. Aside from being\n mutable (and hence unhashable), byte arrays otherwise provide\n the same interface and functionality as immutable bytes\n objects.\n\n The extension module ``array`` provides an additional example of\n a mutable sequence type.\n\nSet types\n These represent unordered, finite sets of unique, immutable\n objects. As such, they cannot be indexed by any subscript. However,\n they can be iterated over, and the built-in function ``len()``\n returns the number of items in a set. Common uses for sets are fast\n membership testing, removing duplicates from a sequence, and\n computing mathematical operations such as intersection, union,\n difference, and symmetric difference.\n\n For set elements, the same immutability rules apply as for\n dictionary keys. Note that numeric types obey the normal rules for\n numeric comparison: if two numbers compare equal (e.g., ``1`` and\n ``1.0``), only one of them can be contained in a set.\n\n There are currently two intrinsic set types:\n\n Sets\n These represent a mutable set. They are created by the built-in\n ``set()`` constructor and can be modified afterwards by several\n methods, such as ``add()``.\n\n Frozen sets\n These represent an immutable set. They are created by the\n built-in ``frozenset()`` constructor. As a frozenset is\n immutable and *hashable*, it can be used again as an element of\n another set, or as a dictionary key.\n\nMappings\n These represent finite sets of objects indexed by arbitrary index\n sets. The subscript notation ``a[k]`` selects the item indexed by\n ``k`` from the mapping ``a``; this can be used in expressions and\n as the target of assignments or ``del`` statements. The built-in\n function ``len()`` returns the number of items in a mapping.\n\n There is currently a single intrinsic mapping type:\n\n Dictionaries\n These represent finite sets of objects indexed by nearly\n arbitrary values. The only types of values not acceptable as\n keys are values containing lists or dictionaries or other\n mutable types that are compared by value rather than by object\n identity, the reason being that the efficient implementation of\n dictionaries requires a key\'s hash value to remain constant.\n Numeric types used for keys obey the normal rules for numeric\n comparison: if two numbers compare equal (e.g., ``1`` and\n ``1.0``) then they can be used interchangeably to index the same\n dictionary entry.\n\n Dictionaries are mutable; they can be created by the ``{...}``\n notation (see section *Dictionary displays*).\n\n The extension modules ``dbm``, ``gdbm``, and ``bsddb`` provide\n additional examples of mapping types.\n\nCallable types\n These are the types to which the function call operation (see\n section *Calls*) can be applied:\n\n User-defined functions\n A user-defined function object is created by a function\n definition (see section *Function definitions*). 
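(As a minimal sketch, added for illustration and assuming CPython 2.x, an invented user-defined function together with two of the special attributes listed below:)

   >>> def add(x, y):
   ...     return x + y
   ...
   >>> add(2, 3)
   5
   >>> add.func_name, add.__name__
   ('add', 'add')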
It should be\n called with an argument list containing the same number of items\n as the function\'s formal parameter list.\n\n Special attributes:\n\n +-------------------------+---------------------------------+-------------+\n | Attribute | Meaning | |\n +=========================+=================================+=============+\n | ``func_doc`` | The function\'s documentation | Writable |\n | | string, or ``None`` if | |\n | | unavailable | |\n +-------------------------+---------------------------------+-------------+\n | ``__doc__`` | Another way of spelling | Writable |\n | | ``func_doc`` | |\n +-------------------------+---------------------------------+-------------+\n | ``func_name`` | The function\'s name | Writable |\n +-------------------------+---------------------------------+-------------+\n | ``__name__`` | Another way of spelling | Writable |\n | | ``func_name`` | |\n +-------------------------+---------------------------------+-------------+\n | ``__module__`` | The name of the module the | Writable |\n | | function was defined in, or | |\n | | ``None`` if unavailable. | |\n +-------------------------+---------------------------------+-------------+\n | ``func_defaults`` | A tuple containing default | Writable |\n | | argument values for those | |\n | | arguments that have defaults, | |\n | | or ``None`` if no arguments | |\n | | have a default value | |\n +-------------------------+---------------------------------+-------------+\n | ``func_code`` | The code object representing | Writable |\n | | the compiled function body. | |\n +-------------------------+---------------------------------+-------------+\n | ``func_globals`` | A reference to the dictionary | Read-only |\n | | that holds the function\'s | |\n | | global variables --- the global | |\n | | namespace of the module in | |\n | | which the function was defined. | |\n +-------------------------+---------------------------------+-------------+\n | ``func_dict`` | The namespace supporting | Writable |\n | | arbitrary function attributes. | |\n +-------------------------+---------------------------------+-------------+\n | ``func_closure`` | ``None`` or a tuple of cells | Read-only |\n | | that contain bindings for the | |\n | | function\'s free variables. | |\n +-------------------------+---------------------------------+-------------+\n\n Most of the attributes labelled "Writable" check the type of the\n assigned value.\n\n Changed in version 2.4: ``func_name`` is now writable.\n\n Function objects also support getting and setting arbitrary\n attributes, which can be used, for example, to attach metadata\n to functions. Regular attribute dot-notation is used to get and\n set such attributes. *Note that the current implementation only\n supports function attributes on user-defined functions. 
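A small sketch of the writable function attributes listed in the table above (Python 2.x; ``greet`` and its extra attribute are invented for the example):

    >>> def greet(name, greeting='hello'):
    ...     """Return a greeting string."""
    ...     return '%s, %s' % (greeting, name)
    ...
    >>> greet.func_name, greet.__name__        # two spellings of the same attribute
    ('greet', 'greet')
    >>> greet.func_defaults                    # defaults for arguments that have them
    ('hello',)
    >>> greet.func_code.co_varnames            # local names, arguments first
    ('name', 'greeting')
    >>> greet.author = 'example'               # arbitrary attributes live in func_dict
    >>> greet.func_dict
    {'author': 'example'}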
Function\n attributes on built-in functions may be supported in the\n future.*\n\n Additional information about a function\'s definition can be\n retrieved from its code object; see the description of internal\n types below.\n\n User-defined methods\n A user-defined method object combines a class, a class instance\n (or ``None``) and any callable object (normally a user-defined\n function).\n\n Special read-only attributes: ``im_self`` is the class instance\n object, ``im_func`` is the function object; ``im_class`` is the\n class of ``im_self`` for bound methods or the class that asked\n for the method for unbound methods; ``__doc__`` is the method\'s\n documentation (same as ``im_func.__doc__``); ``__name__`` is the\n method name (same as ``im_func.__name__``); ``__module__`` is\n the name of the module the method was defined in, or ``None`` if\n unavailable.\n\n Changed in version 2.2: ``im_self`` used to refer to the class\n that defined the method.\n\n Changed in version 2.6: For 3.0 forward-compatibility,\n ``im_func`` is also available as ``__func__``, and ``im_self``\n as ``__self__``.\n\n Methods also support accessing (but not setting) the arbitrary\n function attributes on the underlying function object.\n\n User-defined method objects may be created when getting an\n attribute of a class (perhaps via an instance of that class), if\n that attribute is a user-defined function object, an unbound\n user-defined method object, or a class method object. When the\n attribute is a user-defined method object, a new method object\n is only created if the class from which it is being retrieved is\n the same as, or a derived class of, the class stored in the\n original method object; otherwise, the original method object is\n used as it is.\n\n When a user-defined method object is created by retrieving a\n user-defined function object from a class, its ``im_self``\n attribute is ``None`` and the method object is said to be\n unbound. When one is created by retrieving a user-defined\n function object from a class via one of its instances, its\n ``im_self`` attribute is the instance, and the method object is\n said to be bound. In either case, the new method\'s ``im_class``\n attribute is the class from which the retrieval takes place, and\n its ``im_func`` attribute is the original function object.\n\n When a user-defined method object is created by retrieving\n another method object from a class or instance, the behaviour is\n the same as for a function object, except that the ``im_func``\n attribute of the new instance is not the original method object\n but its ``im_func`` attribute.\n\n When a user-defined method object is created by retrieving a\n class method object from a class or instance, its ``im_self``\n attribute is the class itself (the same as the ``im_class``\n attribute), and its ``im_func`` attribute is the function object\n underlying the class method.\n\n When an unbound user-defined method object is called, the\n underlying function (``im_func``) is called, with the\n restriction that the first argument must be an instance of the\n proper class (``im_class``) or of a derived class thereof.\n\n When a bound user-defined method object is called, the\n underlying function (``im_func``) is called, inserting the class\n instance (``im_self``) in front of the argument list. 
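A minimal sketch of bound and unbound methods and the ``im_*`` attributes just described (Python 2.x; the class ``C`` mirrors the example used in the text):

    >>> class C(object):
    ...     def f(self, x):
    ...         return x * 2
    ...
    >>> x = C()
    >>> C.f.im_self is None            # unbound: no instance attached
    True
    >>> x.f.im_self is x               # bound: the instance is stored in im_self
    True
    >>> x.f.im_func is C.f.im_func     # both wrap the same function object
    True
    >>> x.f(1) == C.f(x, 1)            # the bound call inserts the instance up front
    True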
For\n instance, when ``C`` is a class which contains a definition for\n a function ``f()``, and ``x`` is an instance of ``C``, calling\n ``x.f(1)`` is equivalent to calling ``C.f(x, 1)``.\n\n When a user-defined method object is derived from a class method\n object, the "class instance" stored in ``im_self`` will actually\n be the class itself, so that calling either ``x.f(1)`` or\n ``C.f(1)`` is equivalent to calling ``f(C,1)`` where ``f`` is\n the underlying function.\n\n Note that the transformation from function object to (unbound or\n bound) method object happens each time the attribute is\n retrieved from the class or instance. In some cases, a fruitful\n optimization is to assign the attribute to a local variable and\n call that local variable. Also notice that this transformation\n only happens for user-defined functions; other callable objects\n (and all non-callable objects) are retrieved without\n transformation. It is also important to note that user-defined\n functions which are attributes of a class instance are not\n converted to bound methods; this *only* happens when the\n function is an attribute of the class.\n\n Generator functions\n A function or method which uses the ``yield`` statement (see\n section *The yield statement*) is called a *generator function*.\n Such a function, when called, always returns an iterator object\n which can be used to execute the body of the function: calling\n the iterator\'s ``next()`` method will cause the function to\n execute until it provides a value using the ``yield`` statement.\n When the function executes a ``return`` statement or falls off\n the end, a ``StopIteration`` exception is raised and the\n iterator will have reached the end of the set of values to be\n returned.\n\n Built-in functions\n A built-in function object is a wrapper around a C function.\n Examples of built-in functions are ``len()`` and ``math.sin()``\n (``math`` is a standard built-in module). The number and type of\n the arguments are determined by the C function. Special read-\n only attributes: ``__doc__`` is the function\'s documentation\n string, or ``None`` if unavailable; ``__name__`` is the\n function\'s name; ``__self__`` is set to ``None`` (but see the\n next item); ``__module__`` is the name of the module the\n function was defined in or ``None`` if unavailable.\n\n Built-in methods\n This is really a different disguise of a built-in function, this\n time containing an object passed to the C function as an\n implicit extra argument. An example of a built-in method is\n ``alist.append()``, assuming *alist* is a list object. In this\n case, the special read-only attribute ``__self__`` is set to the\n object denoted by *alist*.\n\n Class Types\n Class types, or "new-style classes," are callable. These\n objects normally act as factories for new instances of\n themselves, but variations are possible for class types that\n override ``__new__()``. The arguments of the call are passed to\n ``__new__()`` and, in the typical case, to ``__init__()`` to\n initialize the new instance.\n\n Classic Classes\n Class objects are described below. When a class object is\n called, a new class instance (also described below) is created\n and returned. This implies a call to the class\'s ``__init__()``\n method if it has one. Any arguments are passed on to the\n ``__init__()`` method. If there is no ``__init__()`` method,\n the class must be called without arguments.\n\n Class instances\n Class instances are described below. 
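A short sketch of a generator function as described above (Python 2.x; ``countdown`` is an invented example):

    >>> def countdown(n):
    ...     while n > 0:
    ...         yield n
    ...         n -= 1
    ...
    >>> it = countdown(2)          # calling it returns an iterator; no body runs yet
    >>> it.next()
    2
    >>> it.next()
    1
    >>> it.next()                  # falling off the end raises StopIteration
    Traceback (most recent call last):
        ...
    StopIteration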
Class instances are\n callable only when the class has a ``__call__()`` method;\n ``x(arguments)`` is a shorthand for ``x.__call__(arguments)``.\n\nModules\n Modules are imported by the ``import`` statement (see section *The\n import statement*). A module object has a namespace implemented by\n a dictionary object (this is the dictionary referenced by the\n func_globals attribute of functions defined in the module).\n Attribute references are translated to lookups in this dictionary,\n e.g., ``m.x`` is equivalent to ``m.__dict__["x"]``. A module object\n does not contain the code object used to initialize the module\n (since it isn\'t needed once the initialization is done).\n\n Attribute assignment updates the module\'s namespace dictionary,\n e.g., ``m.x = 1`` is equivalent to ``m.__dict__["x"] = 1``.\n\n Special read-only attribute: ``__dict__`` is the module\'s namespace\n as a dictionary object.\n\n **CPython implementation detail:** Because of the way CPython\n clears module dictionaries, the module dictionary will be cleared\n when the module falls out of scope even if the dictionary still has\n live references. To avoid this, copy the dictionary or keep the\n module around while using its dictionary directly.\n\n Predefined (writable) attributes: ``__name__`` is the module\'s\n name; ``__doc__`` is the module\'s documentation string, or ``None``\n if unavailable; ``__file__`` is the pathname of the file from which\n the module was loaded, if it was loaded from a file. The\n ``__file__`` attribute is not present for C modules that are\n statically linked into the interpreter; for extension modules\n loaded dynamically from a shared library, it is the pathname of the\n shared library file.\n\nClasses\n Both class types (new-style classes) and class objects (old-\n style/classic classes) are typically created by class definitions\n (see section *Class definitions*). A class has a namespace\n implemented by a dictionary object. Class attribute references are\n translated to lookups in this dictionary, e.g., ``C.x`` is\n translated to ``C.__dict__["x"]`` (although for new-style classes\n in particular there are a number of hooks which allow for other\n means of locating attributes). When the attribute name is not found\n there, the attribute search continues in the base classes. For\n old-style classes, the search is depth-first, left-to-right in the\n order of occurrence in the base class list. New-style classes use\n the more complex C3 method resolution order which behaves correctly\n even in the presence of \'diamond\' inheritance structures where\n there are multiple inheritance paths leading back to a common\n ancestor. Additional details on the C3 MRO used by new-style\n classes can be found in the documentation accompanying the 2.3\n release at http://www.python.org/download/releases/2.3/mro/.\n\n When a class attribute reference (for class ``C``, say) would yield\n a user-defined function object or an unbound user-defined method\n object whose associated class is either ``C`` or one of its base\n classes, it is transformed into an unbound user-defined method\n object whose ``im_class`` attribute is ``C``. When it would yield a\n class method object, it is transformed into a bound user-defined\n method object whose ``im_class`` and ``im_self`` attributes are\n both ``C``. 
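A hedged sketch of a callable instance via ``__call__()`` and of module attribute access going through ``__dict__``, as described above (Python 2.x; ``Adder`` is invented, and ``os`` is used only as a convenient real module):

    >>> class Adder(object):
    ...     def __init__(self, n):
    ...         self.n = n
    ...     def __call__(self, x):         # makes instances callable
    ...         return self.n + x
    ...
    >>> add3 = Adder(3)
    >>> add3(4)                            # shorthand for add3.__call__(4)
    7
    >>> import os
    >>> os.sep == os.__dict__['sep']       # m.x is a lookup in the module namespace dict
    True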
When it would yield a static method object, it is\n transformed into the object wrapped by the static method object.\n See section *Implementing Descriptors* for another way in which\n attributes retrieved from a class may differ from those actually\n contained in its ``__dict__`` (note that only new-style classes\n support descriptors).\n\n Class attribute assignments update the class\'s dictionary, never\n the dictionary of a base class.\n\n A class object can be called (see above) to yield a class instance\n (see below).\n\n Special attributes: ``__name__`` is the class name; ``__module__``\n is the module name in which the class was defined; ``__dict__`` is\n the dictionary containing the class\'s namespace; ``__bases__`` is a\n tuple (possibly empty or a singleton) containing the base classes,\n in the order of their occurrence in the base class list;\n ``__doc__`` is the class\'s documentation string, or None if\n undefined.\n\nClass instances\n A class instance is created by calling a class object (see above).\n A class instance has a namespace implemented as a dictionary which\n is the first place in which attribute references are searched.\n When an attribute is not found there, and the instance\'s class has\n an attribute by that name, the search continues with the class\n attributes. If a class attribute is found that is a user-defined\n function object or an unbound user-defined method object whose\n associated class is the class (call it ``C``) of the instance for\n which the attribute reference was initiated or one of its bases, it\n is transformed into a bound user-defined method object whose\n ``im_class`` attribute is ``C`` and whose ``im_self`` attribute is\n the instance. Static method and class method objects are also\n transformed, as if they had been retrieved from class ``C``; see\n above under "Classes". See section *Implementing Descriptors* for\n another way in which attributes of a class retrieved via its\n instances may differ from the objects actually stored in the\n class\'s ``__dict__``. If no class attribute is found, and the\n object\'s class has a ``__getattr__()`` method, that is called to\n satisfy the lookup.\n\n Attribute assignments and deletions update the instance\'s\n dictionary, never a class\'s dictionary. If the class has a\n ``__setattr__()`` or ``__delattr__()`` method, this is called\n instead of updating the instance dictionary directly.\n\n Class instances can pretend to be numbers, sequences, or mappings\n if they have methods with certain special names. See section\n *Special method names*.\n\n Special attributes: ``__dict__`` is the attribute dictionary;\n ``__class__`` is the instance\'s class.\n\nFiles\n A file object represents an open file. File objects are created by\n the ``open()`` built-in function, and also by ``os.popen()``,\n ``os.fdopen()``, and the ``makefile()`` method of socket objects\n (and perhaps by other functions or methods provided by extension\n modules). The objects ``sys.stdin``, ``sys.stdout`` and\n ``sys.stderr`` are initialized to file objects corresponding to the\n interpreter\'s standard input, output and error streams. See *File\n Objects* for complete documentation of file objects.\n\nInternal types\n A few types used internally by the interpreter are exposed to the\n user. Their definitions may change with future versions of the\n interpreter, but they are mentioned here for completeness.\n\n Code objects\n Code objects represent *byte-compiled* executable Python code,\n or *bytecode*. 
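A small sketch of class and instance attribute lookup, including the ``__getattr__()`` fallback mentioned above (Python 2.x; the class names are invented):

    >>> class Base(object):
    ...     greeting = 'hi'
    ...     def __getattr__(self, name):   # called only when normal lookup fails
    ...         return 'missing: ' + name
    ...
    >>> class Derived(Base):
    ...     pass
    ...
    >>> Derived.__name__
    'Derived'
    >>> Derived.__bases__ == (Base,)
    True
    >>> d = Derived()
    >>> d.greeting                         # found on the class, after the instance dict
    'hi'
    >>> d.color                            # not found anywhere: __getattr__ answers
    'missing: color'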
The difference between a code object and a\n function object is that the function object contains an explicit\n reference to the function\'s globals (the module in which it was\n defined), while a code object contains no context; also the\n default argument values are stored in the function object, not\n in the code object (because they represent values calculated at\n run-time). Unlike function objects, code objects are immutable\n and contain no references (directly or indirectly) to mutable\n objects.\n\n Special read-only attributes: ``co_name`` gives the function\n name; ``co_argcount`` is the number of positional arguments\n (including arguments with default values); ``co_nlocals`` is the\n number of local variables used by the function (including\n arguments); ``co_varnames`` is a tuple containing the names of\n the local variables (starting with the argument names);\n ``co_cellvars`` is a tuple containing the names of local\n variables that are referenced by nested functions;\n ``co_freevars`` is a tuple containing the names of free\n variables; ``co_code`` is a string representing the sequence of\n bytecode instructions; ``co_consts`` is a tuple containing the\n literals used by the bytecode; ``co_names`` is a tuple\n containing the names used by the bytecode; ``co_filename`` is\n the filename from which the code was compiled;\n ``co_firstlineno`` is the first line number of the function;\n ``co_lnotab`` is a string encoding the mapping from bytecode\n offsets to line numbers (for details see the source code of the\n interpreter); ``co_stacksize`` is the required stack size\n (including local variables); ``co_flags`` is an integer encoding\n a number of flags for the interpreter.\n\n The following flag bits are defined for ``co_flags``: bit\n ``0x04`` is set if the function uses the ``*arguments`` syntax\n to accept an arbitrary number of positional arguments; bit\n ``0x08`` is set if the function uses the ``**keywords`` syntax\n to accept arbitrary keyword arguments; bit ``0x20`` is set if\n the function is a generator.\n\n Future feature declarations (``from __future__ import\n division``) also use bits in ``co_flags`` to indicate whether a\n code object was compiled with a particular feature enabled: bit\n ``0x2000`` is set if the function was compiled with future\n division enabled; bits ``0x10`` and ``0x1000`` were used in\n earlier versions of Python.\n\n Other bits in ``co_flags`` are reserved for internal use.\n\n If a code object represents a function, the first item in\n ``co_consts`` is the documentation string of the function, or\n ``None`` if undefined.\n\n Frame objects\n Frame objects represent execution frames. 
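A brief sketch of the code object attributes and ``co_flags`` bits described above (Python 2.x; ``f`` is an invented function):

    >>> def f(a, b=1, *args, **kwargs):
    ...     return a + b
    ...
    >>> co = f.func_code
    >>> co.co_name, co.co_argcount
    ('f', 2)
    >>> co.co_varnames[:co.co_argcount]          # positional argument names come first
    ('a', 'b')
    >>> bool(co.co_flags & 0x04), bool(co.co_flags & 0x08)   # *args / **kwargs bits
    (True, True)
    >>> bool(co.co_flags & 0x20)                 # not a generator
    False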
They may occur in\n traceback objects (see below).\n\n Special read-only attributes: ``f_back`` is to the previous\n stack frame (towards the caller), or ``None`` if this is the\n bottom stack frame; ``f_code`` is the code object being executed\n in this frame; ``f_locals`` is the dictionary used to look up\n local variables; ``f_globals`` is used for global variables;\n ``f_builtins`` is used for built-in (intrinsic) names;\n ``f_restricted`` is a flag indicating whether the function is\n executing in restricted execution mode; ``f_lasti`` gives the\n precise instruction (this is an index into the bytecode string\n of the code object).\n\n Special writable attributes: ``f_trace``, if not ``None``, is a\n function called at the start of each source code line (this is\n used by the debugger); ``f_exc_type``, ``f_exc_value``,\n ``f_exc_traceback`` represent the last exception raised in the\n parent frame provided another exception was ever raised in the\n current frame (in all other cases they are None); ``f_lineno``\n is the current line number of the frame --- writing to this from\n within a trace function jumps to the given line (only for the\n bottom-most frame). A debugger can implement a Jump command\n (aka Set Next Statement) by writing to f_lineno.\n\n Traceback objects\n Traceback objects represent a stack trace of an exception. A\n traceback object is created when an exception occurs. When the\n search for an exception handler unwinds the execution stack, at\n each unwound level a traceback object is inserted in front of\n the current traceback. When an exception handler is entered,\n the stack trace is made available to the program. (See section\n *The try statement*.) It is accessible as ``sys.exc_traceback``,\n and also as the third item of the tuple returned by\n ``sys.exc_info()``. The latter is the preferred interface,\n since it works correctly when the program is using multiple\n threads. When the program contains no suitable handler, the\n stack trace is written (nicely formatted) to the standard error\n stream; if the interpreter is interactive, it is also made\n available to the user as ``sys.last_traceback``.\n\n Special read-only attributes: ``tb_next`` is the next level in\n the stack trace (towards the frame where the exception\n occurred), or ``None`` if there is no next level; ``tb_frame``\n points to the execution frame of the current level;\n ``tb_lineno`` gives the line number where the exception\n occurred; ``tb_lasti`` indicates the precise instruction. The\n line number and last instruction in the traceback may differ\n from the line number of its frame object if the exception\n occurred in a ``try`` statement with no matching except clause\n or with a finally clause.\n\n Slice objects\n Slice objects are used to represent slices when *extended slice\n syntax* is used. This is a slice using two colons, or multiple\n slices or ellipses separated by commas, e.g., ``a[i:j:step]``,\n ``a[i:j, k:l]``, or ``a[..., i:j]``. They are also created by\n the built-in ``slice()`` function.\n\n Special read-only attributes: ``start`` is the lower bound;\n ``stop`` is the upper bound; ``step`` is the step value; each is\n ``None`` if omitted. These attributes can have any type.\n\n Slice objects support one method:\n\n slice.indices(self, length)\n\n This method takes a single integer argument *length* and\n computes information about the extended slice that the slice\n object would describe if applied to a sequence of *length*\n items. 
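A hedged sketch of frame objects; it relies on ``sys._getframe()``, an implementation-dependent helper (present in CPython and PyPy) rather than part of the language definition, purely to get hold of a frame (Python 2.x; the function names are invented):

    >>> import sys
    >>> def where():
    ...     f = sys._getframe()            # the currently executing frame
    ...     return f.f_code.co_name, f.f_back.f_code.co_name
    ...
    >>> def caller():
    ...     return where()
    ...
    >>> caller()                           # f_back points towards the caller
    ('where', 'caller')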
It returns a tuple of three integers; respectively\n these are the *start* and *stop* indices and the *step* or\n stride length of the slice. Missing or out-of-bounds indices\n are handled in a manner consistent with regular slices.\n\n New in version 2.3.\n\n Static method objects\n Static method objects provide a way of defeating the\n transformation of function objects to method objects described\n above. A static method object is a wrapper around any other\n object, usually a user-defined method object. When a static\n method object is retrieved from a class or a class instance, the\n object actually returned is the wrapped object, which is not\n subject to any further transformation. Static method objects are\n not themselves callable, although the objects they wrap usually\n are. Static method objects are created by the built-in\n ``staticmethod()`` constructor.\n\n Class method objects\n A class method object, like a static method object, is a wrapper\n around another object that alters the way in which that object\n is retrieved from classes and class instances. The behaviour of\n class method objects upon such retrieval is described above,\n under "User-defined methods". Class method objects are created\n by the built-in ``classmethod()`` constructor.\n', 'typesfunctions': u'\nFunctions\n*********\n\nFunction objects are created by function definitions. The only\noperation on a function object is to call it: ``func(argument-list)``.\n\nThere are really two flavors of function objects: built-in functions\nand user-defined functions. Both support the same operation (to call\nthe function), but the implementation is different, hence the\ndifferent object types.\n\nSee *Function definitions* for more information.\n', - 'typesmapping': u'\nMapping Types --- ``dict``\n**************************\n\nA *mapping* object maps *hashable* values to arbitrary objects.\nMappings are mutable objects. There is currently only one standard\nmapping type, the *dictionary*. (For other containers see the built\nin ``list``, ``set``, and ``tuple`` classes, and the ``collections``\nmodule.)\n\nA dictionary\'s keys are *almost* arbitrary values. Values that are\nnot *hashable*, that is, values containing lists, dictionaries or\nother mutable types (that are compared by value rather than by object\nidentity) may not be used as keys. Numeric types used for keys obey\nthe normal rules for numeric comparison: if two numbers compare equal\n(such as ``1`` and ``1.0``) then they can be used interchangeably to\nindex the same dictionary entry. (Note however, that since computers\nstore floating-point numbers as approximations it is usually unwise to\nuse them as dictionary keys.)\n\nDictionaries can be created by placing a comma-separated list of\n``key: value`` pairs within braces, for example: ``{\'jack\': 4098,\n\'sjoerd\': 4127}`` or ``{4098: \'jack\', 4127: \'sjoerd\'}``, or by the\n``dict`` constructor.\n\nclass class dict([arg])\n\n Return a new dictionary initialized from an optional positional\n argument or from a set of keyword arguments. If no arguments are\n given, return a new empty dictionary. If the positional argument\n *arg* is a mapping object, return a dictionary mapping the same\n keys to the same values as does the mapping object. Otherwise the\n positional argument must be a sequence, a container that supports\n iteration, or an iterator object. The elements of the argument\n must each also be of one of those kinds, and each must in turn\n contain exactly two objects. 
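A short sketch of ``slice.indices()`` and of class/static method objects as described above (Python 2.x; ``Registry`` is invented for the example):

    >>> slice(2, 20, 3).indices(10)        # an out-of-bounds stop is clipped to the length
    (2, 10, 3)
    >>> slice(None, None, -2).indices(10)
    (9, -1, -2)
    >>> class Registry(object):
    ...     items = []
    ...     @classmethod
    ...     def add(cls, x):               # receives the class as the implicit argument
    ...         cls.items.append(x)
    ...     @staticmethod
    ...     def double(x):                 # plain function, no binding at all
    ...         return x * 2
    ...
    >>> Registry.add(1); Registry().add(2)
    >>> Registry.items
    [1, 2]
    >>> Registry.double(3)
    6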
The first is used as a key in the new\n dictionary, and the second as the key\'s value. If a given key is\n seen more than once, the last value associated with it is retained\n in the new dictionary.\n\n If keyword arguments are given, the keywords themselves with their\n associated values are added as items to the dictionary. If a key is\n specified both in the positional argument and as a keyword\n argument, the value associated with the keyword is retained in the\n dictionary. For example, these all return a dictionary equal to\n ``{"one": 2, "two": 3}``:\n\n * ``dict(one=2, two=3)``\n\n * ``dict({\'one\': 2, \'two\': 3})``\n\n * ``dict(zip((\'one\', \'two\'), (2, 3)))``\n\n * ``dict([[\'two\', 3], [\'one\', 2]])``\n\n The first example only works for keys that are valid Python\n identifiers; the others work with any valid keys.\n\n New in version 2.2.\n\n Changed in version 2.3: Support for building a dictionary from\n keyword arguments added.\n\n These are the operations that dictionaries support (and therefore,\n custom mapping types should support too):\n\n len(d)\n\n Return the number of items in the dictionary *d*.\n\n d[key]\n\n Return the item of *d* with key *key*. Raises a ``KeyError`` if\n *key* is not in the map.\n\n New in version 2.5: If a subclass of dict defines a method\n ``__missing__()``, if the key *key* is not present, the\n ``d[key]`` operation calls that method with the key *key* as\n argument. The ``d[key]`` operation then returns or raises\n whatever is returned or raised by the ``__missing__(key)`` call\n if the key is not present. No other operations or methods invoke\n ``__missing__()``. If ``__missing__()`` is not defined,\n ``KeyError`` is raised. ``__missing__()`` must be a method; it\n cannot be an instance variable. For an example, see\n ``collections.defaultdict``.\n\n d[key] = value\n\n Set ``d[key]`` to *value*.\n\n del d[key]\n\n Remove ``d[key]`` from *d*. Raises a ``KeyError`` if *key* is\n not in the map.\n\n key in d\n\n Return ``True`` if *d* has a key *key*, else ``False``.\n\n New in version 2.2.\n\n key not in d\n\n Equivalent to ``not key in d``.\n\n New in version 2.2.\n\n iter(d)\n\n Return an iterator over the keys of the dictionary. This is a\n shortcut for ``iterkeys()``.\n\n clear()\n\n Remove all items from the dictionary.\n\n copy()\n\n Return a shallow copy of the dictionary.\n\n fromkeys(seq[, value])\n\n Create a new dictionary with keys from *seq* and values set to\n *value*.\n\n ``fromkeys()`` is a class method that returns a new dictionary.\n *value* defaults to ``None``.\n\n New in version 2.3.\n\n get(key[, default])\n\n Return the value for *key* if *key* is in the dictionary, else\n *default*. If *default* is not given, it defaults to ``None``,\n so that this method never raises a ``KeyError``.\n\n has_key(key)\n\n Test for the presence of *key* in the dictionary. ``has_key()``\n is deprecated in favor of ``key in d``.\n\n items()\n\n Return a copy of the dictionary\'s list of ``(key, value)``\n pairs.\n\n **CPython implementation detail:** Keys and values are listed in\n an arbitrary order which is non-random, varies across Python\n implementations, and depends on the dictionary\'s history of\n insertions and deletions.\n\n If ``items()``, ``keys()``, ``values()``, ``iteritems()``,\n ``iterkeys()``, and ``itervalues()`` are called with no\n intervening modifications to the dictionary, the lists will\n directly correspond. 
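A minimal sketch of the ``__missing__()`` hook and of ``collections.defaultdict`` mentioned above (Python 2.x; the ``ZeroDict`` subclass is an invented toy):

    >>> class ZeroDict(dict):
    ...     def __missing__(self, key):    # consulted by d[key] for absent keys
    ...         return 0
    ...
    >>> z = ZeroDict()
    >>> z['spam']
    0
    >>> z['spam'] += 1                     # reads 0 via __missing__, then stores 1
    >>> z
    {'spam': 1}
    >>> import collections
    >>> dd = collections.defaultdict(list)
    >>> dd['x'].append(1)                  # a missing key is created with list()
    >>> dd['x']
    [1]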
This allows the creation of ``(value,\n key)`` pairs using ``zip()``: ``pairs = zip(d.values(),\n d.keys())``. The same relationship holds for the ``iterkeys()``\n and ``itervalues()`` methods: ``pairs = zip(d.itervalues(),\n d.iterkeys())`` provides the same value for ``pairs``. Another\n way to create the same list is ``pairs = [(v, k) for (k, v) in\n d.iteritems()]``.\n\n iteritems()\n\n Return an iterator over the dictionary\'s ``(key, value)`` pairs.\n See the note for ``dict.items()``.\n\n Using ``iteritems()`` while adding or deleting entries in the\n dictionary may raise a ``RuntimeError`` or fail to iterate over\n all entries.\n\n New in version 2.2.\n\n iterkeys()\n\n Return an iterator over the dictionary\'s keys. See the note for\n ``dict.items()``.\n\n Using ``iterkeys()`` while adding or deleting entries in the\n dictionary may raise a ``RuntimeError`` or fail to iterate over\n all entries.\n\n New in version 2.2.\n\n itervalues()\n\n Return an iterator over the dictionary\'s values. See the note\n for ``dict.items()``.\n\n Using ``itervalues()`` while adding or deleting entries in the\n dictionary may raise a ``RuntimeError`` or fail to iterate over\n all entries.\n\n New in version 2.2.\n\n keys()\n\n Return a copy of the dictionary\'s list of keys. See the note\n for ``dict.items()``.\n\n pop(key[, default])\n\n If *key* is in the dictionary, remove it and return its value,\n else return *default*. If *default* is not given and *key* is\n not in the dictionary, a ``KeyError`` is raised.\n\n New in version 2.3.\n\n popitem()\n\n Remove and return an arbitrary ``(key, value)`` pair from the\n dictionary.\n\n ``popitem()`` is useful to destructively iterate over a\n dictionary, as often used in set algorithms. If the dictionary\n is empty, calling ``popitem()`` raises a ``KeyError``.\n\n setdefault(key[, default])\n\n If *key* is in the dictionary, return its value. If not, insert\n *key* with a value of *default* and return *default*. *default*\n defaults to ``None``.\n\n update([other])\n\n Update the dictionary with the key/value pairs from *other*,\n overwriting existing keys. Return ``None``.\n\n ``update()`` accepts either another dictionary object or an\n iterable of key/value pairs (as a tuple or other iterable of\n length two). If keyword arguments are specified, the dictionary\n is then updated with those key/value pairs: ``d.update(red=1,\n blue=2)``.\n\n Changed in version 2.4: Allowed the argument to be an iterable\n of key/value pairs and allowed keyword arguments.\n\n values()\n\n Return a copy of the dictionary\'s list of values. See the note\n for ``dict.items()``.\n\n viewitems()\n\n Return a new view of the dictionary\'s items (``(key, value)``\n pairs). See below for documentation of view objects.\n\n New in version 2.7.\n\n viewkeys()\n\n Return a new view of the dictionary\'s keys. See below for\n documentation of view objects.\n\n New in version 2.7.\n\n viewvalues()\n\n Return a new view of the dictionary\'s values. See below for\n documentation of view objects.\n\n New in version 2.7.\n\n\nDictionary view objects\n=======================\n\nThe objects returned by ``dict.viewkeys()``, ``dict.viewvalues()`` and\n``dict.viewitems()`` are *view objects*. 
They provide a dynamic view\non the dictionary\'s entries, which means that when the dictionary\nchanges, the view reflects these changes.\n\nDictionary views can be iterated over to yield their respective data,\nand support membership tests:\n\nlen(dictview)\n\n Return the number of entries in the dictionary.\n\niter(dictview)\n\n Return an iterator over the keys, values or items (represented as\n tuples of ``(key, value)``) in the dictionary.\n\n Keys and values are iterated over in an arbitrary order which is\n non-random, varies across Python implementations, and depends on\n the dictionary\'s history of insertions and deletions. If keys,\n values and items views are iterated over with no intervening\n modifications to the dictionary, the order of items will directly\n correspond. This allows the creation of ``(value, key)`` pairs\n using ``zip()``: ``pairs = zip(d.values(), d.keys())``. Another\n way to create the same list is ``pairs = [(v, k) for (k, v) in\n d.items()]``.\n\n Iterating views while adding or deleting entries in the dictionary\n may raise a ``RuntimeError`` or fail to iterate over all entries.\n\nx in dictview\n\n Return ``True`` if *x* is in the underlying dictionary\'s keys,\n values or items (in the latter case, *x* should be a ``(key,\n value)`` tuple).\n\nKeys views are set-like since their entries are unique and hashable.\nIf all values are hashable, so that (key, value) pairs are unique and\nhashable, then the items view is also set-like. (Values views are not\ntreated as set-like since the entries are generally not unique.) Then\nthese set operations are available ("other" refers either to another\nview or a set):\n\ndictview & other\n\n Return the intersection of the dictview and the other object as a\n new set.\n\ndictview | other\n\n Return the union of the dictview and the other object as a new set.\n\ndictview - other\n\n Return the difference between the dictview and the other object\n (all elements in *dictview* that aren\'t in *other*) as a new set.\n\ndictview ^ other\n\n Return the symmetric difference (all elements either in *dictview*\n or *other*, but not in both) of the dictview and the other object\n as a new set.\n\nAn example of dictionary view usage:\n\n >>> dishes = {\'eggs\': 2, \'sausage\': 1, \'bacon\': 1, \'spam\': 500}\n >>> keys = dishes.viewkeys()\n >>> values = dishes.viewvalues()\n\n >>> # iteration\n >>> n = 0\n >>> for val in values:\n ... n += val\n >>> print(n)\n 504\n\n >>> # keys and values are iterated over in the same order\n >>> list(keys)\n [\'eggs\', \'bacon\', \'sausage\', \'spam\']\n >>> list(values)\n [2, 1, 1, 500]\n\n >>> # view objects are dynamic and reflect dict changes\n >>> del dishes[\'eggs\']\n >>> del dishes[\'sausage\']\n >>> list(keys)\n [\'spam\', \'bacon\']\n\n >>> # set operations\n >>> keys & {\'eggs\', \'bacon\', \'salad\'}\n {\'bacon\'}\n', + 'typesmapping': u'\nMapping Types --- ``dict``\n**************************\n\nA *mapping* object maps *hashable* values to arbitrary objects.\nMappings are mutable objects. There is currently only one standard\nmapping type, the *dictionary*. (For other containers see the built\nin ``list``, ``set``, and ``tuple`` classes, and the ``collections``\nmodule.)\n\nA dictionary\'s keys are *almost* arbitrary values. Values that are\nnot *hashable*, that is, values containing lists, dictionaries or\nother mutable types (that are compared by value rather than by object\nidentity) may not be used as keys. 
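A tiny sketch of the key rules just restated: unhashable values cannot be keys, and numbers that compare equal index the same entry (Python 2.x):

    >>> d = {1: 'int'}
    >>> d[1.0]                             # 1 and 1.0 compare (and hash) equal
    'int'
    >>> d[[1, 2]]                          # lists are mutable, hence not hashable
    Traceback (most recent call last):
        ...
    TypeError: unhashable type: 'list'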
Numeric types used for keys obey\nthe normal rules for numeric comparison: if two numbers compare equal\n(such as ``1`` and ``1.0``) then they can be used interchangeably to\nindex the same dictionary entry. (Note however, that since computers\nstore floating-point numbers as approximations it is usually unwise to\nuse them as dictionary keys.)\n\nDictionaries can be created by placing a comma-separated list of\n``key: value`` pairs within braces, for example: ``{\'jack\': 4098,\n\'sjoerd\': 4127}`` or ``{4098: \'jack\', 4127: \'sjoerd\'}``, or by the\n``dict`` constructor.\n\nclass class dict([arg])\n\n Return a new dictionary initialized from an optional positional\n argument or from a set of keyword arguments. If no arguments are\n given, return a new empty dictionary. If the positional argument\n *arg* is a mapping object, return a dictionary mapping the same\n keys to the same values as does the mapping object. Otherwise the\n positional argument must be a sequence, a container that supports\n iteration, or an iterator object. The elements of the argument\n must each also be of one of those kinds, and each must in turn\n contain exactly two objects. The first is used as a key in the new\n dictionary, and the second as the key\'s value. If a given key is\n seen more than once, the last value associated with it is retained\n in the new dictionary.\n\n If keyword arguments are given, the keywords themselves with their\n associated values are added as items to the dictionary. If a key is\n specified both in the positional argument and as a keyword\n argument, the value associated with the keyword is retained in the\n dictionary. For example, these all return a dictionary equal to\n ``{"one": 1, "two": 2}``:\n\n * ``dict(one=1, two=2)``\n\n * ``dict({\'one\': 1, \'two\': 2})``\n\n * ``dict(zip((\'one\', \'two\'), (1, 2)))``\n\n * ``dict([[\'two\', 2], [\'one\', 1]])``\n\n The first example only works for keys that are valid Python\n identifiers; the others work with any valid keys.\n\n New in version 2.2.\n\n Changed in version 2.3: Support for building a dictionary from\n keyword arguments added.\n\n These are the operations that dictionaries support (and therefore,\n custom mapping types should support too):\n\n len(d)\n\n Return the number of items in the dictionary *d*.\n\n d[key]\n\n Return the item of *d* with key *key*. Raises a ``KeyError`` if\n *key* is not in the map.\n\n New in version 2.5: If a subclass of dict defines a method\n ``__missing__()``, if the key *key* is not present, the\n ``d[key]`` operation calls that method with the key *key* as\n argument. The ``d[key]`` operation then returns or raises\n whatever is returned or raised by the ``__missing__(key)`` call\n if the key is not present. No other operations or methods invoke\n ``__missing__()``. If ``__missing__()`` is not defined,\n ``KeyError`` is raised. ``__missing__()`` must be a method; it\n cannot be an instance variable. For an example, see\n ``collections.defaultdict``.\n\n d[key] = value\n\n Set ``d[key]`` to *value*.\n\n del d[key]\n\n Remove ``d[key]`` from *d*. Raises a ``KeyError`` if *key* is\n not in the map.\n\n key in d\n\n Return ``True`` if *d* has a key *key*, else ``False``.\n\n New in version 2.2.\n\n key not in d\n\n Equivalent to ``not key in d``.\n\n New in version 2.2.\n\n iter(d)\n\n Return an iterator over the keys of the dictionary. 
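A short sketch of the equivalent ``dict()`` constructor forms and of the basic lookup operations listed above (Python 2.x):

    >>> dict(one=1, two=2) == dict(zip(('one', 'two'), (1, 2))) == {'one': 1, 'two': 2}
    True
    >>> d = {'one': 1, 'two': 2}
    >>> len(d), 'one' in d, 'three' not in d
    (2, True, True)
    >>> d['three']                         # absent key: KeyError (no __missing__ here)
    Traceback (most recent call last):
        ...
    KeyError: 'three'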
This is a\n shortcut for ``iterkeys()``.\n\n clear()\n\n Remove all items from the dictionary.\n\n copy()\n\n Return a shallow copy of the dictionary.\n\n fromkeys(seq[, value])\n\n Create a new dictionary with keys from *seq* and values set to\n *value*.\n\n ``fromkeys()`` is a class method that returns a new dictionary.\n *value* defaults to ``None``.\n\n New in version 2.3.\n\n get(key[, default])\n\n Return the value for *key* if *key* is in the dictionary, else\n *default*. If *default* is not given, it defaults to ``None``,\n so that this method never raises a ``KeyError``.\n\n has_key(key)\n\n Test for the presence of *key* in the dictionary. ``has_key()``\n is deprecated in favor of ``key in d``.\n\n items()\n\n Return a copy of the dictionary\'s list of ``(key, value)``\n pairs.\n\n **CPython implementation detail:** Keys and values are listed in\n an arbitrary order which is non-random, varies across Python\n implementations, and depends on the dictionary\'s history of\n insertions and deletions.\n\n If ``items()``, ``keys()``, ``values()``, ``iteritems()``,\n ``iterkeys()``, and ``itervalues()`` are called with no\n intervening modifications to the dictionary, the lists will\n directly correspond. This allows the creation of ``(value,\n key)`` pairs using ``zip()``: ``pairs = zip(d.values(),\n d.keys())``. The same relationship holds for the ``iterkeys()``\n and ``itervalues()`` methods: ``pairs = zip(d.itervalues(),\n d.iterkeys())`` provides the same value for ``pairs``. Another\n way to create the same list is ``pairs = [(v, k) for (k, v) in\n d.iteritems()]``.\n\n iteritems()\n\n Return an iterator over the dictionary\'s ``(key, value)`` pairs.\n See the note for ``dict.items()``.\n\n Using ``iteritems()`` while adding or deleting entries in the\n dictionary may raise a ``RuntimeError`` or fail to iterate over\n all entries.\n\n New in version 2.2.\n\n iterkeys()\n\n Return an iterator over the dictionary\'s keys. See the note for\n ``dict.items()``.\n\n Using ``iterkeys()`` while adding or deleting entries in the\n dictionary may raise a ``RuntimeError`` or fail to iterate over\n all entries.\n\n New in version 2.2.\n\n itervalues()\n\n Return an iterator over the dictionary\'s values. See the note\n for ``dict.items()``.\n\n Using ``itervalues()`` while adding or deleting entries in the\n dictionary may raise a ``RuntimeError`` or fail to iterate over\n all entries.\n\n New in version 2.2.\n\n keys()\n\n Return a copy of the dictionary\'s list of keys. See the note\n for ``dict.items()``.\n\n pop(key[, default])\n\n If *key* is in the dictionary, remove it and return its value,\n else return *default*. If *default* is not given and *key* is\n not in the dictionary, a ``KeyError`` is raised.\n\n New in version 2.3.\n\n popitem()\n\n Remove and return an arbitrary ``(key, value)`` pair from the\n dictionary.\n\n ``popitem()`` is useful to destructively iterate over a\n dictionary, as often used in set algorithms. If the dictionary\n is empty, calling ``popitem()`` raises a ``KeyError``.\n\n setdefault(key[, default])\n\n If *key* is in the dictionary, return its value. If not, insert\n *key* with a value of *default* and return *default*. *default*\n defaults to ``None``.\n\n update([other])\n\n Update the dictionary with the key/value pairs from *other*,\n overwriting existing keys. Return ``None``.\n\n ``update()`` accepts either another dictionary object or an\n iterable of key/value pairs (as tuples or other iterables of\n length two). 
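A hedged sketch of a few of the methods listed above: ``get()``, ``setdefault()``, ``pop()`` and ``update()`` (Python 2.x; the keys are arbitrary):

    >>> d = {'a': 1}
    >>> d.get('b'), d.get('b', 0)          # never raises KeyError
    (None, 0)
    >>> d.setdefault('b', 2)               # inserts the default and returns it
    2
    >>> d.pop('a')
    1
    >>> d.pop('a', 'gone')                 # a default avoids the KeyError
    'gone'
    >>> d.update({'c': 3}, d=4)            # a mapping plus keyword pairs
    >>> sorted(d.items())
    [('b', 2), ('c', 3), ('d', 4)]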
If keyword arguments are specified, the dictionary\n is then updated with those key/value pairs: ``d.update(red=1,\n blue=2)``.\n\n Changed in version 2.4: Allowed the argument to be an iterable\n of key/value pairs and allowed keyword arguments.\n\n values()\n\n Return a copy of the dictionary\'s list of values. See the note\n for ``dict.items()``.\n\n viewitems()\n\n Return a new view of the dictionary\'s items (``(key, value)``\n pairs). See below for documentation of view objects.\n\n New in version 2.7.\n\n viewkeys()\n\n Return a new view of the dictionary\'s keys. See below for\n documentation of view objects.\n\n New in version 2.7.\n\n viewvalues()\n\n Return a new view of the dictionary\'s values. See below for\n documentation of view objects.\n\n New in version 2.7.\n\n\nDictionary view objects\n=======================\n\nThe objects returned by ``dict.viewkeys()``, ``dict.viewvalues()`` and\n``dict.viewitems()`` are *view objects*. They provide a dynamic view\non the dictionary\'s entries, which means that when the dictionary\nchanges, the view reflects these changes.\n\nDictionary views can be iterated over to yield their respective data,\nand support membership tests:\n\nlen(dictview)\n\n Return the number of entries in the dictionary.\n\niter(dictview)\n\n Return an iterator over the keys, values or items (represented as\n tuples of ``(key, value)``) in the dictionary.\n\n Keys and values are iterated over in an arbitrary order which is\n non-random, varies across Python implementations, and depends on\n the dictionary\'s history of insertions and deletions. If keys,\n values and items views are iterated over with no intervening\n modifications to the dictionary, the order of items will directly\n correspond. This allows the creation of ``(value, key)`` pairs\n using ``zip()``: ``pairs = zip(d.values(), d.keys())``. Another\n way to create the same list is ``pairs = [(v, k) for (k, v) in\n d.items()]``.\n\n Iterating views while adding or deleting entries in the dictionary\n may raise a ``RuntimeError`` or fail to iterate over all entries.\n\nx in dictview\n\n Return ``True`` if *x* is in the underlying dictionary\'s keys,\n values or items (in the latter case, *x* should be a ``(key,\n value)`` tuple).\n\nKeys views are set-like since their entries are unique and hashable.\nIf all values are hashable, so that (key, value) pairs are unique and\nhashable, then the items view is also set-like. (Values views are not\ntreated as set-like since the entries are generally not unique.) Then\nthese set operations are available ("other" refers either to another\nview or a set):\n\ndictview & other\n\n Return the intersection of the dictview and the other object as a\n new set.\n\ndictview | other\n\n Return the union of the dictview and the other object as a new set.\n\ndictview - other\n\n Return the difference between the dictview and the other object\n (all elements in *dictview* that aren\'t in *other*) as a new set.\n\ndictview ^ other\n\n Return the symmetric difference (all elements either in *dictview*\n or *other*, but not in both) of the dictview and the other object\n as a new set.\n\nAn example of dictionary view usage:\n\n >>> dishes = {\'eggs\': 2, \'sausage\': 1, \'bacon\': 1, \'spam\': 500}\n >>> keys = dishes.viewkeys()\n >>> values = dishes.viewvalues()\n\n >>> # iteration\n >>> n = 0\n >>> for val in values:\n ... 
n += val\n >>> print(n)\n 504\n\n >>> # keys and values are iterated over in the same order\n >>> list(keys)\n [\'eggs\', \'bacon\', \'sausage\', \'spam\']\n >>> list(values)\n [2, 1, 1, 500]\n\n >>> # view objects are dynamic and reflect dict changes\n >>> del dishes[\'eggs\']\n >>> del dishes[\'sausage\']\n >>> list(keys)\n [\'spam\', \'bacon\']\n\n >>> # set operations\n >>> keys & {\'eggs\', \'bacon\', \'salad\'}\n {\'bacon\'}\n', 'typesmethods': u"\nMethods\n*******\n\nMethods are functions that are called using the attribute notation.\nThere are two flavors: built-in methods (such as ``append()`` on\nlists) and class instance methods. Built-in methods are described\nwith the types that support them.\n\nThe implementation adds two special read-only attributes to class\ninstance methods: ``m.im_self`` is the object on which the method\noperates, and ``m.im_func`` is the function implementing the method.\nCalling ``m(arg-1, arg-2, ..., arg-n)`` is completely equivalent to\ncalling ``m.im_func(m.im_self, arg-1, arg-2, ..., arg-n)``.\n\nClass instance methods are either *bound* or *unbound*, referring to\nwhether the method was accessed through an instance or a class,\nrespectively. When a method is unbound, its ``im_self`` attribute\nwill be ``None`` and if called, an explicit ``self`` object must be\npassed as the first argument. In this case, ``self`` must be an\ninstance of the unbound method's class (or a subclass of that class),\notherwise a ``TypeError`` is raised.\n\nLike function objects, methods objects support getting arbitrary\nattributes. However, since method attributes are actually stored on\nthe underlying function object (``meth.im_func``), setting method\nattributes on either bound or unbound methods is disallowed.\nAttempting to set a method attribute results in a ``TypeError`` being\nraised. In order to set a method attribute, you need to explicitly\nset it on the underlying function object:\n\n class C:\n def method(self):\n pass\n\n c = C()\n c.method.im_func.whoami = 'my name is c'\n\nSee *The standard type hierarchy* for more information.\n", 'typesmodules': u"\nModules\n*******\n\nThe only special operation on a module is attribute access:\n``m.name``, where *m* is a module and *name* accesses a name defined\nin *m*'s symbol table. Module attributes can be assigned to. (Note\nthat the ``import`` statement is not, strictly speaking, an operation\non a module object; ``import foo`` does not require a module object\nnamed *foo* to exist, rather it requires an (external) *definition*\nfor a module named *foo* somewhere.)\n\nA special member of every module is ``__dict__``. This is the\ndictionary containing the module's symbol table. Modifying this\ndictionary will actually change the module's symbol table, but direct\nassignment to the ``__dict__`` attribute is not possible (you can\nwrite ``m.__dict__['a'] = 1``, which defines ``m.a`` to be ``1``, but\nyou can't write ``m.__dict__ = {}``). Modifying ``__dict__`` directly\nis not recommended.\n\nModules built into the interpreter are written like this: ````. 
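A small sketch of the module namespace behaviour described above, using ``types.ModuleType`` only as a convenient way to obtain a fresh module object (Python 2.x; the module name and attributes are invented):

    >>> import types
    >>> m = types.ModuleType('example')
    >>> m.__dict__['answer'] = 42          # defines m.answer
    >>> m.answer
    42
    >>> m.x = 1                            # attribute assignment updates __dict__
    >>> m.__dict__['x']
    1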
If loaded from a file, they are written as\n````.\n", - 'typesseq': u'\nSequence Types --- ``str``, ``unicode``, ``list``, ``tuple``, ``buffer``, ``xrange``\n************************************************************************************\n\nThere are six sequence types: strings, Unicode strings, lists, tuples,\nbuffers, and xrange objects.\n\nFor other containers see the built in ``dict`` and ``set`` classes,\nand the ``collections`` module.\n\nString literals are written in single or double quotes: ``\'xyzzy\'``,\n``"frobozz"``. See *String literals* for more about string literals.\nUnicode strings are much like strings, but are specified in the syntax\nusing a preceding ``\'u\'`` character: ``u\'abc\'``, ``u"def"``. In\naddition to the functionality described here, there are also string-\nspecific methods described in the *String Methods* section. Lists are\nconstructed with square brackets, separating items with commas: ``[a,\nb, c]``. Tuples are constructed by the comma operator (not within\nsquare brackets), with or without enclosing parentheses, but an empty\ntuple must have the enclosing parentheses, such as ``a, b, c`` or\n``()``. A single item tuple must have a trailing comma, such as\n``(d,)``.\n\nBuffer objects are not directly supported by Python syntax, but can be\ncreated by calling the built-in function ``buffer()``. They don\'t\nsupport concatenation or repetition.\n\nObjects of type xrange are similar to buffers in that there is no\nspecific syntax to create them, but they are created using the\n``xrange()`` function. They don\'t support slicing, concatenation or\nrepetition, and using ``in``, ``not in``, ``min()`` or ``max()`` on\nthem is inefficient.\n\nMost sequence types support the following operations. The ``in`` and\n``not in`` operations have the same priorities as the comparison\noperations. The ``+`` and ``*`` operations have the same priority as\nthe corresponding numeric operations. [3] Additional methods are\nprovided for *Mutable Sequence Types*.\n\nThis table lists the sequence operations sorted in ascending priority\n(operations in the same box have the same priority). 
In the table,\n*s* and *t* are sequences of the same type; *n*, *i* and *j* are\nintegers:\n\n+--------------------+----------------------------------+------------+\n| Operation | Result | Notes |\n+====================+==================================+============+\n| ``x in s`` | ``True`` if an item of *s* is | (1) |\n| | equal to *x*, else ``False`` | |\n+--------------------+----------------------------------+------------+\n| ``x not in s`` | ``False`` if an item of *s* is | (1) |\n| | equal to *x*, else ``True`` | |\n+--------------------+----------------------------------+------------+\n| ``s + t`` | the concatenation of *s* and *t* | (6) |\n+--------------------+----------------------------------+------------+\n| ``s * n, n * s`` | *n* shallow copies of *s* | (2) |\n| | concatenated | |\n+--------------------+----------------------------------+------------+\n| ``s[i]`` | *i*\'th item of *s*, origin 0 | (3) |\n+--------------------+----------------------------------+------------+\n| ``s[i:j]`` | slice of *s* from *i* to *j* | (3)(4) |\n+--------------------+----------------------------------+------------+\n| ``s[i:j:k]`` | slice of *s* from *i* to *j* | (3)(5) |\n| | with step *k* | |\n+--------------------+----------------------------------+------------+\n| ``len(s)`` | length of *s* | |\n+--------------------+----------------------------------+------------+\n| ``min(s)`` | smallest item of *s* | |\n+--------------------+----------------------------------+------------+\n| ``max(s)`` | largest item of *s* | |\n+--------------------+----------------------------------+------------+\n\nSequence types also support comparisons. In particular, tuples and\nlists are compared lexicographically by comparing corresponding\nelements. This means that to compare equal, every element must compare\nequal and the two sequences must be of the same type and have the same\nlength. (For full details see *Comparisons* in the language\nreference.)\n\nNotes:\n\n1. When *s* is a string or Unicode string object the ``in`` and ``not\n in`` operations act like a substring test. In Python versions\n before 2.3, *x* had to be a string of length 1. In Python 2.3 and\n beyond, *x* may be a string of any length.\n\n2. Values of *n* less than ``0`` are treated as ``0`` (which yields an\n empty sequence of the same type as *s*). Note also that the copies\n are shallow; nested structures are not copied. This often haunts\n new Python programmers; consider:\n\n >>> lists = [[]] * 3\n >>> lists\n [[], [], []]\n >>> lists[0].append(3)\n >>> lists\n [[3], [3], [3]]\n\n What has happened is that ``[[]]`` is a one-element list containing\n an empty list, so all three elements of ``[[]] * 3`` are (pointers\n to) this single empty list. Modifying any of the elements of\n ``lists`` modifies this single list. You can create a list of\n different lists this way:\n\n >>> lists = [[] for i in range(3)]\n >>> lists[0].append(3)\n >>> lists[1].append(5)\n >>> lists[2].append(7)\n >>> lists\n [[3], [5], [7]]\n\n3. If *i* or *j* is negative, the index is relative to the end of the\n string: ``len(s) + i`` or ``len(s) + j`` is substituted. But note\n that ``-0`` is still ``0``.\n\n4. The slice of *s* from *i* to *j* is defined as the sequence of\n items with index *k* such that ``i <= k < j``. If *i* or *j* is\n greater than ``len(s)``, use ``len(s)``. If *i* is omitted or\n ``None``, use ``0``. If *j* is omitted or ``None``, use\n ``len(s)``. If *i* is greater than or equal to *j*, the slice is\n empty.\n\n5. 
The slice of *s* from *i* to *j* with step *k* is defined as the\n sequence of items with index ``x = i + n*k`` such that ``0 <= n <\n (j-i)/k``. In other words, the indices are ``i``, ``i+k``,\n ``i+2*k``, ``i+3*k`` and so on, stopping when *j* is reached (but\n never including *j*). If *i* or *j* is greater than ``len(s)``,\n use ``len(s)``. If *i* or *j* are omitted or ``None``, they become\n "end" values (which end depends on the sign of *k*). Note, *k*\n cannot be zero. If *k* is ``None``, it is treated like ``1``.\n\n6. **CPython implementation detail:** If *s* and *t* are both strings,\n some Python implementations such as CPython can usually perform an\n in-place optimization for assignments of the form ``s = s + t`` or\n ``s += t``. When applicable, this optimization makes quadratic\n run-time much less likely. This optimization is both version and\n implementation dependent. For performance sensitive code, it is\n preferable to use the ``str.join()`` method which assures\n consistent linear concatenation performance across versions and\n implementations.\n\n Changed in version 2.4: Formerly, string concatenation never\n occurred in-place.\n\n\nString Methods\n==============\n\nBelow are listed the string methods which both 8-bit strings and\nUnicode objects support.\n\nIn addition, Python\'s strings support the sequence type methods\ndescribed in the *Sequence Types --- str, unicode, list, tuple,\nbuffer, xrange* section. To output formatted strings use template\nstrings or the ``%`` operator described in the *String Formatting\nOperations* section. Also, see the ``re`` module for string functions\nbased on regular expressions.\n\nstr.capitalize()\n\n Return a copy of the string with only its first character\n capitalized.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.center(width[, fillchar])\n\n Return centered in a string of length *width*. Padding is done\n using the specified *fillchar* (default is a space).\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.count(sub[, start[, end]])\n\n Return the number of non-overlapping occurrences of substring *sub*\n in the range [*start*, *end*]. Optional arguments *start* and\n *end* are interpreted as in slice notation.\n\nstr.decode([encoding[, errors]])\n\n Decodes the string using the codec registered for *encoding*.\n *encoding* defaults to the default string encoding. *errors* may\n be given to set a different error handling scheme. The default is\n ``\'strict\'``, meaning that encoding errors raise ``UnicodeError``.\n Other possible values are ``\'ignore\'``, ``\'replace\'`` and any other\n name registered via ``codecs.register_error()``, see section *Codec\n Base Classes*.\n\n New in version 2.2.\n\n Changed in version 2.3: Support for other error handling schemes\n added.\n\n Changed in version 2.7: Support for keyword arguments added.\n\nstr.encode([encoding[, errors]])\n\n Return an encoded version of the string. Default encoding is the\n current default string encoding. *errors* may be given to set a\n different error handling scheme. The default for *errors* is\n ``\'strict\'``, meaning that encoding errors raise a\n ``UnicodeError``. Other possible values are ``\'ignore\'``,\n ``\'replace\'``, ``\'xmlcharrefreplace\'``, ``\'backslashreplace\'`` and\n any other name registered via ``codecs.register_error()``, see\n section *Codec Base Classes*. 
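A brief sketch of a few of the string methods above: ``encode()``/``decode()``, ``center()``, ``count()`` and ``find()`` (Python 2.x; the sample strings are arbitrary):

    >>> u = u'caf\xe9'
    >>> data = u.encode('utf-8')           # unicode -> byte string
    >>> data.decode('utf-8') == u
    True
    >>> 'hello'.center(11, '-')
    '---hello---'
    >>> 'abracadabra'.count('a', 1)        # start is interpreted as in slice notation
    4
    >>> 'abracadabra'.find('cad')
    4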
For a list of possible encodings, see\n section *Standard Encodings*.\n\n New in version 2.0.\n\n Changed in version 2.3: Support for ``\'xmlcharrefreplace\'`` and\n ``\'backslashreplace\'`` and other error handling schemes added.\n\n Changed in version 2.7: Support for keyword arguments added.\n\nstr.endswith(suffix[, start[, end]])\n\n Return ``True`` if the string ends with the specified *suffix*,\n otherwise return ``False``. *suffix* can also be a tuple of\n suffixes to look for. With optional *start*, test beginning at\n that position. With optional *end*, stop comparing at that\n position.\n\n Changed in version 2.5: Accept tuples as *suffix*.\n\nstr.expandtabs([tabsize])\n\n Return a copy of the string where all tab characters are replaced\n by one or more spaces, depending on the current column and the\n given tab size. The column number is reset to zero after each\n newline occurring in the string. If *tabsize* is not given, a tab\n size of ``8`` characters is assumed. This doesn\'t understand other\n non-printing characters or escape sequences.\n\nstr.find(sub[, start[, end]])\n\n Return the lowest index in the string where substring *sub* is\n found, such that *sub* is contained in the slice ``s[start:end]``.\n Optional arguments *start* and *end* are interpreted as in slice\n notation. Return ``-1`` if *sub* is not found.\n\nstr.format(*args, **kwargs)\n\n Perform a string formatting operation. The string on which this\n method is called can contain literal text or replacement fields\n delimited by braces ``{}``. Each replacement field contains either\n the numeric index of a positional argument, or the name of a\n keyword argument. Returns a copy of the string where each\n replacement field is replaced with the string value of the\n corresponding argument.\n\n >>> "The sum of 1 + 2 is {0}".format(1+2)\n \'The sum of 1 + 2 is 3\'\n\n See *Format String Syntax* for a description of the various\n formatting options that can be specified in format strings.\n\n This method of string formatting is the new standard in Python 3.0,\n and should be preferred to the ``%`` formatting described in\n *String Formatting Operations* in new code.\n\n New in version 2.6.\n\nstr.index(sub[, start[, end]])\n\n Like ``find()``, but raise ``ValueError`` when the substring is not\n found.\n\nstr.isalnum()\n\n Return true if all characters in the string are alphanumeric and\n there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isalpha()\n\n Return true if all characters in the string are alphabetic and\n there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isdigit()\n\n Return true if all characters in the string are digits and there is\n at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.islower()\n\n Return true if all cased characters in the string are lowercase and\n there is at least one cased character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isspace()\n\n Return true if there are only whitespace characters in the string\n and there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.istitle()\n\n Return true if the string is a titlecased string and there is at\n least one character, for example uppercase characters may only\n follow uncased characters and lowercase characters only cased ones.\n Return false otherwise.\n\n For 
8-bit strings, this method is locale-dependent.\n\nstr.isupper()\n\n Return true if all cased characters in the string are uppercase and\n there is at least one cased character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.join(iterable)\n\n Return a string which is the concatenation of the strings in the\n *iterable* *iterable*. The separator between elements is the\n string providing this method.\n\nstr.ljust(width[, fillchar])\n\n Return the string left justified in a string of length *width*.\n Padding is done using the specified *fillchar* (default is a\n space). The original string is returned if *width* is less than\n ``len(s)``.\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.lower()\n\n Return a copy of the string converted to lowercase.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.lstrip([chars])\n\n Return a copy of the string with leading characters removed. The\n *chars* argument is a string specifying the set of characters to be\n removed. If omitted or ``None``, the *chars* argument defaults to\n removing whitespace. The *chars* argument is not a prefix; rather,\n all combinations of its values are stripped:\n\n >>> \' spacious \'.lstrip()\n \'spacious \'\n >>> \'www.example.com\'.lstrip(\'cmowz.\')\n \'example.com\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.partition(sep)\n\n Split the string at the first occurrence of *sep*, and return a\n 3-tuple containing the part before the separator, the separator\n itself, and the part after the separator. If the separator is not\n found, return a 3-tuple containing the string itself, followed by\n two empty strings.\n\n New in version 2.5.\n\nstr.replace(old, new[, count])\n\n Return a copy of the string with all occurrences of substring *old*\n replaced by *new*. If the optional argument *count* is given, only\n the first *count* occurrences are replaced.\n\nstr.rfind(sub[, start[, end]])\n\n Return the highest index in the string where substring *sub* is\n found, such that *sub* is contained within ``s[start:end]``.\n Optional arguments *start* and *end* are interpreted as in slice\n notation. Return ``-1`` on failure.\n\nstr.rindex(sub[, start[, end]])\n\n Like ``rfind()`` but raises ``ValueError`` when the substring *sub*\n is not found.\n\nstr.rjust(width[, fillchar])\n\n Return the string right justified in a string of length *width*.\n Padding is done using the specified *fillchar* (default is a\n space). The original string is returned if *width* is less than\n ``len(s)``.\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.rpartition(sep)\n\n Split the string at the last occurrence of *sep*, and return a\n 3-tuple containing the part before the separator, the separator\n itself, and the part after the separator. If the separator is not\n found, return a 3-tuple containing two empty strings, followed by\n the string itself.\n\n New in version 2.5.\n\nstr.rsplit([sep[, maxsplit]])\n\n Return a list of the words in the string, using *sep* as the\n delimiter string. If *maxsplit* is given, at most *maxsplit* splits\n are done, the *rightmost* ones. If *sep* is not specified or\n ``None``, any whitespace string is a separator. Except for\n splitting from the right, ``rsplit()`` behaves like ``split()``\n which is described in detail below.\n\n New in version 2.4.\n\nstr.rstrip([chars])\n\n Return a copy of the string with trailing characters removed. 
The\n *chars* argument is a string specifying the set of characters to be\n removed. If omitted or ``None``, the *chars* argument defaults to\n removing whitespace. The *chars* argument is not a suffix; rather,\n all combinations of its values are stripped:\n\n >>> \' spacious \'.rstrip()\n \' spacious\'\n >>> \'mississippi\'.rstrip(\'ipz\')\n \'mississ\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.split([sep[, maxsplit]])\n\n Return a list of the words in the string, using *sep* as the\n delimiter string. If *maxsplit* is given, at most *maxsplit*\n splits are done (thus, the list will have at most ``maxsplit+1``\n elements). If *maxsplit* is not specified, then there is no limit\n on the number of splits (all possible splits are made).\n\n If *sep* is given, consecutive delimiters are not grouped together\n and are deemed to delimit empty strings (for example,\n ``\'1,,2\'.split(\',\')`` returns ``[\'1\', \'\', \'2\']``). The *sep*\n argument may consist of multiple characters (for example,\n ``\'1<>2<>3\'.split(\'<>\')`` returns ``[\'1\', \'2\', \'3\']``). Splitting\n an empty string with a specified separator returns ``[\'\']``.\n\n If *sep* is not specified or is ``None``, a different splitting\n algorithm is applied: runs of consecutive whitespace are regarded\n as a single separator, and the result will contain no empty strings\n at the start or end if the string has leading or trailing\n whitespace. Consequently, splitting an empty string or a string\n consisting of just whitespace with a ``None`` separator returns\n ``[]``.\n\n For example, ``\' 1 2 3 \'.split()`` returns ``[\'1\', \'2\', \'3\']``,\n and ``\' 1 2 3 \'.split(None, 1)`` returns ``[\'1\', \'2 3 \']``.\n\nstr.splitlines([keepends])\n\n Return a list of the lines in the string, breaking at line\n boundaries. Line breaks are not included in the resulting list\n unless *keepends* is given and true.\n\nstr.startswith(prefix[, start[, end]])\n\n Return ``True`` if string starts with the *prefix*, otherwise\n return ``False``. *prefix* can also be a tuple of prefixes to look\n for. With optional *start*, test string beginning at that\n position. With optional *end*, stop comparing string at that\n position.\n\n Changed in version 2.5: Accept tuples as *prefix*.\n\nstr.strip([chars])\n\n Return a copy of the string with the leading and trailing\n characters removed. The *chars* argument is a string specifying the\n set of characters to be removed. If omitted or ``None``, the\n *chars* argument defaults to removing whitespace. The *chars*\n argument is not a prefix or suffix; rather, all combinations of its\n values are stripped:\n\n >>> \' spacious \'.strip()\n \'spacious\'\n >>> \'www.example.com\'.strip(\'cmowz.\')\n \'example\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.swapcase()\n\n Return a copy of the string with uppercase characters converted to\n lowercase and vice versa.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.title()\n\n Return a titlecased version of the string where words start with an\n uppercase character and the remaining characters are lowercase.\n\n The algorithm uses a simple language-independent definition of a\n word as groups of consecutive letters. 
The definition works in\n many contexts but it means that apostrophes in contractions and\n possessives form word boundaries, which may not be the desired\n result:\n\n >>> "they\'re bill\'s friends from the UK".title()\n "They\'Re Bill\'S Friends From The Uk"\n\n A workaround for apostrophes can be constructed using regular\n expressions:\n\n >>> import re\n >>> def titlecase(s):\n return re.sub(r"[A-Za-z]+(\'[A-Za-z]+)?",\n lambda mo: mo.group(0)[0].upper() +\n mo.group(0)[1:].lower(),\n s)\n\n >>> titlecase("they\'re bill\'s friends.")\n "They\'re Bill\'s Friends."\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.translate(table[, deletechars])\n\n Return a copy of the string where all characters occurring in the\n optional argument *deletechars* are removed, and the remaining\n characters have been mapped through the given translation table,\n which must be a string of length 256.\n\n You can use the ``maketrans()`` helper function in the ``string``\n module to create a translation table. For string objects, set the\n *table* argument to ``None`` for translations that only delete\n characters:\n\n >>> \'read this short text\'.translate(None, \'aeiou\')\n \'rd ths shrt txt\'\n\n New in version 2.6: Support for a ``None`` *table* argument.\n\n For Unicode objects, the ``translate()`` method does not accept the\n optional *deletechars* argument. Instead, it returns a copy of the\n *s* where all characters have been mapped through the given\n translation table which must be a mapping of Unicode ordinals to\n Unicode ordinals, Unicode strings or ``None``. Unmapped characters\n are left untouched. Characters mapped to ``None`` are deleted.\n Note, a more flexible approach is to create a custom character\n mapping codec using the ``codecs`` module (see ``encodings.cp1251``\n for an example).\n\nstr.upper()\n\n Return a copy of the string converted to uppercase.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.zfill(width)\n\n Return the numeric string left filled with zeros in a string of\n length *width*. A sign prefix is handled correctly. The original\n string is returned if *width* is less than ``len(s)``.\n\n New in version 2.2.2.\n\nThe following methods are present only on unicode objects:\n\nunicode.isnumeric()\n\n Return ``True`` if there are only numeric characters in S,\n ``False`` otherwise. Numeric characters include digit characters,\n and all characters that have the Unicode numeric value property,\n e.g. U+2155, VULGAR FRACTION ONE FIFTH.\n\nunicode.isdecimal()\n\n Return ``True`` if there are only decimal characters in S,\n ``False`` otherwise. Decimal characters include digit characters,\n and all characters that that can be used to form decimal-radix\n numbers, e.g. U+0660, ARABIC-INDIC DIGIT ZERO.\n\n\nString Formatting Operations\n============================\n\nString and Unicode objects have one unique built-in operation: the\n``%`` operator (modulo). This is also known as the string\n*formatting* or *interpolation* operator. Given ``format % values``\n(where *format* is a string or Unicode object), ``%`` conversion\nspecifications in *format* are replaced with zero or more elements of\n*values*. The effect is similar to the using ``sprintf()`` in the C\nlanguage. If *format* is a Unicode object, or if any of the objects\nbeing converted using the ``%s`` conversion are Unicode objects, the\nresult will also be a Unicode object.\n\nIf *format* requires a single argument, *values* may be a single non-\ntuple object. 
[4] Otherwise, *values* must be a tuple with exactly\nthe number of items specified by the format string, or a single\nmapping object (for example, a dictionary).\n\nA conversion specifier contains two or more characters and has the\nfollowing components, which must occur in this order:\n\n1. The ``\'%\'`` character, which marks the start of the specifier.\n\n2. Mapping key (optional), consisting of a parenthesised sequence of\n characters (for example, ``(somename)``).\n\n3. Conversion flags (optional), which affect the result of some\n conversion types.\n\n4. Minimum field width (optional). If specified as an ``\'*\'``\n (asterisk), the actual width is read from the next element of the\n tuple in *values*, and the object to convert comes after the\n minimum field width and optional precision.\n\n5. Precision (optional), given as a ``\'.\'`` (dot) followed by the\n precision. If specified as ``\'*\'`` (an asterisk), the actual width\n is read from the next element of the tuple in *values*, and the\n value to convert comes after the precision.\n\n6. Length modifier (optional).\n\n7. Conversion type.\n\nWhen the right argument is a dictionary (or other mapping type), then\nthe formats in the string *must* include a parenthesised mapping key\ninto that dictionary inserted immediately after the ``\'%\'`` character.\nThe mapping key selects the value to be formatted from the mapping.\nFor example:\n\n>>> print \'%(language)s has %(#)03d quote types.\' % \\\n... {\'language\': "Python", "#": 2}\nPython has 002 quote types.\n\nIn this case no ``*`` specifiers may occur in a format (since they\nrequire a sequential parameter list).\n\nThe conversion flag characters are:\n\n+-----------+-----------------------------------------------------------------------+\n| Flag | Meaning |\n+===========+=======================================================================+\n| ``\'#\'`` | The value conversion will use the "alternate form" (where defined |\n| | below). |\n+-----------+-----------------------------------------------------------------------+\n| ``\'0\'`` | The conversion will be zero padded for numeric values. |\n+-----------+-----------------------------------------------------------------------+\n| ``\'-\'`` | The converted value is left adjusted (overrides the ``\'0\'`` |\n| | conversion if both are given). |\n+-----------+-----------------------------------------------------------------------+\n| ``\' \'`` | (a space) A blank should be left before a positive number (or empty |\n| | string) produced by a signed conversion. |\n+-----------+-----------------------------------------------------------------------+\n| ``\'+\'`` | A sign character (``\'+\'`` or ``\'-\'``) will precede the conversion |\n| | (overrides a "space" flag). |\n+-----------+-----------------------------------------------------------------------+\n\nA length modifier (``h``, ``l``, or ``L``) may be present, but is\nignored as it is not necessary for Python -- so e.g. ``%ld`` is\nidentical to ``%d``.\n\nThe conversion types are:\n\n+--------------+-------------------------------------------------------+---------+\n| Conversion | Meaning | Notes |\n+==============+=======================================================+=========+\n| ``\'d\'`` | Signed integer decimal. | |\n+--------------+-------------------------------------------------------+---------+\n| ``\'i\'`` | Signed integer decimal. | |\n+--------------+-------------------------------------------------------+---------+\n| ``\'o\'`` | Signed octal value. 
| (1) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'u\'`` | Obsolete type -- it is identical to ``\'d\'``. | (7) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'x\'`` | Signed hexadecimal (lowercase). | (2) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'X\'`` | Signed hexadecimal (uppercase). | (2) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'e\'`` | Floating point exponential format (lowercase). | (3) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'E\'`` | Floating point exponential format (uppercase). | (3) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'f\'`` | Floating point decimal format. | (3) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'F\'`` | Floating point decimal format. | (3) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'g\'`` | Floating point format. Uses lowercase exponential | (4) |\n| | format if exponent is less than -4 or not less than | |\n| | precision, decimal format otherwise. | |\n+--------------+-------------------------------------------------------+---------+\n| ``\'G\'`` | Floating point format. Uses uppercase exponential | (4) |\n| | format if exponent is less than -4 or not less than | |\n| | precision, decimal format otherwise. | |\n+--------------+-------------------------------------------------------+---------+\n| ``\'c\'`` | Single character (accepts integer or single character | |\n| | string). | |\n+--------------+-------------------------------------------------------+---------+\n| ``\'r\'`` | String (converts any Python object using ``repr()``). | (5) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'s\'`` | String (converts any Python object using ``str()``). | (6) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'%\'`` | No argument is converted, results in a ``\'%\'`` | |\n| | character in the result. | |\n+--------------+-------------------------------------------------------+---------+\n\nNotes:\n\n1. The alternate form causes a leading zero (``\'0\'``) to be inserted\n between left-hand padding and the formatting of the number if the\n leading character of the result is not already a zero.\n\n2. The alternate form causes a leading ``\'0x\'`` or ``\'0X\'`` (depending\n on whether the ``\'x\'`` or ``\'X\'`` format was used) to be inserted\n between left-hand padding and the formatting of the number if the\n leading character of the result is not already a zero.\n\n3. The alternate form causes the result to always contain a decimal\n point, even if no digits follow it.\n\n The precision determines the number of digits after the decimal\n point and defaults to 6.\n\n4. The alternate form causes the result to always contain a decimal\n point, and trailing zeroes are not removed as they would otherwise\n be.\n\n The precision determines the number of significant digits before\n and after the decimal point and defaults to 6.\n\n5. The ``%r`` conversion was added in Python 2.0.\n\n The precision determines the maximal number of characters used.\n\n6. 
If the object or format provided is a ``unicode`` string, the\n resulting string will also be ``unicode``.\n\n The precision determines the maximal number of characters used.\n\n7. See **PEP 237**.\n\nSince Python strings have an explicit length, ``%s`` conversions do\nnot assume that ``\'\\0\'`` is the end of the string.\n\nChanged in version 2.7: ``%f`` conversions for numbers whose absolute\nvalue is over 1e50 are no longer replaced by ``%g`` conversions.\n\nAdditional string operations are defined in standard modules\n``string`` and ``re``.\n\n\nXRange Type\n===========\n\nThe ``xrange`` type is an immutable sequence which is commonly used\nfor looping. The advantage of the ``xrange`` type is that an\n``xrange`` object will always take the same amount of memory, no\nmatter the size of the range it represents. There are no consistent\nperformance advantages.\n\nXRange objects have very little behavior: they only support indexing,\niteration, and the ``len()`` function.\n\n\nMutable Sequence Types\n======================\n\nList objects support additional operations that allow in-place\nmodification of the object. Other mutable sequence types (when added\nto the language) should also support these operations. Strings and\ntuples are immutable sequence types: such objects cannot be modified\nonce created. The following operations are defined on mutable sequence\ntypes (where *x* is an arbitrary object):\n\n+--------------------------------+----------------------------------+-----------------------+\n| Operation | Result | Notes |\n+================================+==================================+=======================+\n| ``s[i] = x`` | item *i* of *s* is replaced by | |\n| | *x* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s[i:j] = t`` | slice of *s* from *i* to *j* is | |\n| | replaced by the contents of the | |\n| | iterable *t* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``del s[i:j]`` | same as ``s[i:j] = []`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s[i:j:k] = t`` | the elements of ``s[i:j:k]`` are | (1) |\n| | replaced by those of *t* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``del s[i:j:k]`` | removes the elements of | |\n| | ``s[i:j:k]`` from the list | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.append(x)`` | same as ``s[len(s):len(s)] = | (2) |\n| | [x]`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.extend(x)`` | same as ``s[len(s):len(s)] = x`` | (3) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.count(x)`` | return number of *i*\'s for which | |\n| | ``s[i] == x`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.index(x[, i[, j]])`` | return smallest *k* such that | (4) |\n| | ``s[k] == x`` and ``i <= k < j`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.insert(i, x)`` | same as ``s[i:i] = [x]`` | (5) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.pop([i])`` | same as ``x = s[i]; del s[i]; | (6) |\n| | return x`` | 
|\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.remove(x)`` | same as ``del s[s.index(x)]`` | (4) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.reverse()`` | reverses the items of *s* in | (7) |\n| | place | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.sort([cmp[, key[, | sort the items of *s* in place | (7)(8)(9)(10) |\n| reverse]]])`` | | |\n+--------------------------------+----------------------------------+-----------------------+\n\nNotes:\n\n1. *t* must have the same length as the slice it is replacing.\n\n2. The C implementation of Python has historically accepted multiple\n parameters and implicitly joined them into a tuple; this no longer\n works in Python 2.0. Use of this misfeature has been deprecated\n since Python 1.4.\n\n3. *x* can be any iterable object.\n\n4. Raises ``ValueError`` when *x* is not found in *s*. When a negative\n index is passed as the second or third parameter to the ``index()``\n method, the list length is added, as for slice indices. If it is\n still negative, it is truncated to zero, as for slice indices.\n\n Changed in version 2.3: Previously, ``index()`` didn\'t have\n arguments for specifying start and stop positions.\n\n5. When a negative index is passed as the first parameter to the\n ``insert()`` method, the list length is added, as for slice\n indices. If it is still negative, it is truncated to zero, as for\n slice indices.\n\n Changed in version 2.3: Previously, all negative indices were\n truncated to zero.\n\n6. The ``pop()`` method is only supported by the list and array types.\n The optional argument *i* defaults to ``-1``, so that by default\n the last item is removed and returned.\n\n7. The ``sort()`` and ``reverse()`` methods modify the list in place\n for economy of space when sorting or reversing a large list. To\n remind you that they operate by side effect, they don\'t return the\n sorted or reversed list.\n\n8. The ``sort()`` method takes optional arguments for controlling the\n comparisons.\n\n *cmp* specifies a custom comparison function of two arguments (list\n items) which should return a negative, zero or positive number\n depending on whether the first argument is considered smaller than,\n equal to, or larger than the second argument: ``cmp=lambda x,y:\n cmp(x.lower(), y.lower())``. The default value is ``None``.\n\n *key* specifies a function of one argument that is used to extract\n a comparison key from each list element: ``key=str.lower``. The\n default value is ``None``.\n\n *reverse* is a boolean value. If set to ``True``, then the list\n elements are sorted as if each comparison were reversed.\n\n In general, the *key* and *reverse* conversion processes are much\n faster than specifying an equivalent *cmp* function. This is\n because *cmp* is called multiple times for each list element while\n *key* and *reverse* touch each element only once. Use\n ``functools.cmp_to_key()`` to convert an old-style *cmp* function\n to a *key* function.\n\n Changed in version 2.3: Support for ``None`` as an equivalent to\n omitting *cmp* was added.\n\n Changed in version 2.4: Support for *key* and *reverse* was added.\n\n9. Starting with Python 2.3, the ``sort()`` method is guaranteed to be\n stable. 
A sort is stable if it guarantees not to change the\n relative order of elements that compare equal --- this is helpful\n for sorting in multiple passes (for example, sort by department,\n then by salary grade).\n\n10. **CPython implementation detail:** While a list is being sorted,\n the effect of attempting to mutate, or even inspect, the list is\n undefined. The C implementation of Python 2.3 and newer makes the\n list appear empty for the duration, and raises ``ValueError`` if\n it can detect that the list has been mutated during a sort.\n', - 'typesseq-mutable': u"\nMutable Sequence Types\n**********************\n\nList objects support additional operations that allow in-place\nmodification of the object. Other mutable sequence types (when added\nto the language) should also support these operations. Strings and\ntuples are immutable sequence types: such objects cannot be modified\nonce created. The following operations are defined on mutable sequence\ntypes (where *x* is an arbitrary object):\n\n+--------------------------------+----------------------------------+-----------------------+\n| Operation | Result | Notes |\n+================================+==================================+=======================+\n| ``s[i] = x`` | item *i* of *s* is replaced by | |\n| | *x* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s[i:j] = t`` | slice of *s* from *i* to *j* is | |\n| | replaced by the contents of the | |\n| | iterable *t* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``del s[i:j]`` | same as ``s[i:j] = []`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s[i:j:k] = t`` | the elements of ``s[i:j:k]`` are | (1) |\n| | replaced by those of *t* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``del s[i:j:k]`` | removes the elements of | |\n| | ``s[i:j:k]`` from the list | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.append(x)`` | same as ``s[len(s):len(s)] = | (2) |\n| | [x]`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.extend(x)`` | same as ``s[len(s):len(s)] = x`` | (3) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.count(x)`` | return number of *i*'s for which | |\n| | ``s[i] == x`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.index(x[, i[, j]])`` | return smallest *k* such that | (4) |\n| | ``s[k] == x`` and ``i <= k < j`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.insert(i, x)`` | same as ``s[i:i] = [x]`` | (5) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.pop([i])`` | same as ``x = s[i]; del s[i]; | (6) |\n| | return x`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.remove(x)`` | same as ``del s[s.index(x)]`` | (4) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.reverse()`` | reverses the items of *s* in | (7) |\n| | place | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.sort([cmp[, key[, | sort the items of *s* in place | (7)(8)(9)(10) 
|\n| reverse]]])`` | | |\n+--------------------------------+----------------------------------+-----------------------+\n\nNotes:\n\n1. *t* must have the same length as the slice it is replacing.\n\n2. The C implementation of Python has historically accepted multiple\n parameters and implicitly joined them into a tuple; this no longer\n works in Python 2.0. Use of this misfeature has been deprecated\n since Python 1.4.\n\n3. *x* can be any iterable object.\n\n4. Raises ``ValueError`` when *x* is not found in *s*. When a negative\n index is passed as the second or third parameter to the ``index()``\n method, the list length is added, as for slice indices. If it is\n still negative, it is truncated to zero, as for slice indices.\n\n Changed in version 2.3: Previously, ``index()`` didn't have\n arguments for specifying start and stop positions.\n\n5. When a negative index is passed as the first parameter to the\n ``insert()`` method, the list length is added, as for slice\n indices. If it is still negative, it is truncated to zero, as for\n slice indices.\n\n Changed in version 2.3: Previously, all negative indices were\n truncated to zero.\n\n6. The ``pop()`` method is only supported by the list and array types.\n The optional argument *i* defaults to ``-1``, so that by default\n the last item is removed and returned.\n\n7. The ``sort()`` and ``reverse()`` methods modify the list in place\n for economy of space when sorting or reversing a large list. To\n remind you that they operate by side effect, they don't return the\n sorted or reversed list.\n\n8. The ``sort()`` method takes optional arguments for controlling the\n comparisons.\n\n *cmp* specifies a custom comparison function of two arguments (list\n items) which should return a negative, zero or positive number\n depending on whether the first argument is considered smaller than,\n equal to, or larger than the second argument: ``cmp=lambda x,y:\n cmp(x.lower(), y.lower())``. The default value is ``None``.\n\n *key* specifies a function of one argument that is used to extract\n a comparison key from each list element: ``key=str.lower``. The\n default value is ``None``.\n\n *reverse* is a boolean value. If set to ``True``, then the list\n elements are sorted as if each comparison were reversed.\n\n In general, the *key* and *reverse* conversion processes are much\n faster than specifying an equivalent *cmp* function. This is\n because *cmp* is called multiple times for each list element while\n *key* and *reverse* touch each element only once. Use\n ``functools.cmp_to_key()`` to convert an old-style *cmp* function\n to a *key* function.\n\n Changed in version 2.3: Support for ``None`` as an equivalent to\n omitting *cmp* was added.\n\n Changed in version 2.4: Support for *key* and *reverse* was added.\n\n9. Starting with Python 2.3, the ``sort()`` method is guaranteed to be\n stable. A sort is stable if it guarantees not to change the\n relative order of elements that compare equal --- this is helpful\n for sorting in multiple passes (for example, sort by department,\n then by salary grade).\n\n10. **CPython implementation detail:** While a list is being sorted,\n the effect of attempting to mutate, or even inspect, the list is\n undefined. 
The C implementation of Python 2.3 and newer makes the\n list appear empty for the duration, and raises ``ValueError`` if\n it can detect that the list has been mutated during a sort.\n", + 'typesseq': u'\nSequence Types --- ``str``, ``unicode``, ``list``, ``tuple``, ``bytearray``, ``buffer``, ``xrange``\n***************************************************************************************************\n\nThere are seven sequence types: strings, Unicode strings, lists,\ntuples, bytearrays, buffers, and xrange objects.\n\nFor other containers see the built in ``dict`` and ``set`` classes,\nand the ``collections`` module.\n\nString literals are written in single or double quotes: ``\'xyzzy\'``,\n``"frobozz"``. See *String literals* for more about string literals.\nUnicode strings are much like strings, but are specified in the syntax\nusing a preceding ``\'u\'`` character: ``u\'abc\'``, ``u"def"``. In\naddition to the functionality described here, there are also string-\nspecific methods described in the *String Methods* section. Lists are\nconstructed with square brackets, separating items with commas: ``[a,\nb, c]``. Tuples are constructed by the comma operator (not within\nsquare brackets), with or without enclosing parentheses, but an empty\ntuple must have the enclosing parentheses, such as ``a, b, c`` or\n``()``. A single item tuple must have a trailing comma, such as\n``(d,)``.\n\nBytearray objects are created with the built-in function\n``bytearray()``.\n\nBuffer objects are not directly supported by Python syntax, but can be\ncreated by calling the built-in function ``buffer()``. They don\'t\nsupport concatenation or repetition.\n\nObjects of type xrange are similar to buffers in that there is no\nspecific syntax to create them, but they are created using the\n``xrange()`` function. They don\'t support slicing, concatenation or\nrepetition, and using ``in``, ``not in``, ``min()`` or ``max()`` on\nthem is inefficient.\n\nMost sequence types support the following operations. The ``in`` and\n``not in`` operations have the same priorities as the comparison\noperations. The ``+`` and ``*`` operations have the same priority as\nthe corresponding numeric operations. [3] Additional methods are\nprovided for *Mutable Sequence Types*.\n\nThis table lists the sequence operations sorted in ascending priority\n(operations in the same box have the same priority). 
In the table,\n*s* and *t* are sequences of the same type; *n*, *i* and *j* are\nintegers:\n\n+--------------------+----------------------------------+------------+\n| Operation | Result | Notes |\n+====================+==================================+============+\n| ``x in s`` | ``True`` if an item of *s* is | (1) |\n| | equal to *x*, else ``False`` | |\n+--------------------+----------------------------------+------------+\n| ``x not in s`` | ``False`` if an item of *s* is | (1) |\n| | equal to *x*, else ``True`` | |\n+--------------------+----------------------------------+------------+\n| ``s + t`` | the concatenation of *s* and *t* | (6) |\n+--------------------+----------------------------------+------------+\n| ``s * n, n * s`` | *n* shallow copies of *s* | (2) |\n| | concatenated | |\n+--------------------+----------------------------------+------------+\n| ``s[i]`` | *i*\'th item of *s*, origin 0 | (3) |\n+--------------------+----------------------------------+------------+\n| ``s[i:j]`` | slice of *s* from *i* to *j* | (3)(4) |\n+--------------------+----------------------------------+------------+\n| ``s[i:j:k]`` | slice of *s* from *i* to *j* | (3)(5) |\n| | with step *k* | |\n+--------------------+----------------------------------+------------+\n| ``len(s)`` | length of *s* | |\n+--------------------+----------------------------------+------------+\n| ``min(s)`` | smallest item of *s* | |\n+--------------------+----------------------------------+------------+\n| ``max(s)`` | largest item of *s* | |\n+--------------------+----------------------------------+------------+\n| ``s.index(i)`` | index of the first occurence of | |\n| | *i* in *s* | |\n+--------------------+----------------------------------+------------+\n| ``s.count(i)`` | total number of occurences of | |\n| | *i* in *s* | |\n+--------------------+----------------------------------+------------+\n\nSequence types also support comparisons. In particular, tuples and\nlists are compared lexicographically by comparing corresponding\nelements. This means that to compare equal, every element must compare\nequal and the two sequences must be of the same type and have the same\nlength. (For full details see *Comparisons* in the language\nreference.)\n\nNotes:\n\n1. When *s* is a string or Unicode string object the ``in`` and ``not\n in`` operations act like a substring test. In Python versions\n before 2.3, *x* had to be a string of length 1. In Python 2.3 and\n beyond, *x* may be a string of any length.\n\n2. Values of *n* less than ``0`` are treated as ``0`` (which yields an\n empty sequence of the same type as *s*). Note also that the copies\n are shallow; nested structures are not copied. This often haunts\n new Python programmers; consider:\n\n >>> lists = [[]] * 3\n >>> lists\n [[], [], []]\n >>> lists[0].append(3)\n >>> lists\n [[3], [3], [3]]\n\n What has happened is that ``[[]]`` is a one-element list containing\n an empty list, so all three elements of ``[[]] * 3`` are (pointers\n to) this single empty list. Modifying any of the elements of\n ``lists`` modifies this single list. You can create a list of\n different lists this way:\n\n >>> lists = [[] for i in range(3)]\n >>> lists[0].append(3)\n >>> lists[1].append(5)\n >>> lists[2].append(7)\n >>> lists\n [[3], [5], [7]]\n\n3. If *i* or *j* is negative, the index is relative to the end of the\n string: ``len(s) + i`` or ``len(s) + j`` is substituted. But note\n that ``-0`` is still ``0``.\n\n4. 
The slice of *s* from *i* to *j* is defined as the sequence of\n items with index *k* such that ``i <= k < j``. If *i* or *j* is\n greater than ``len(s)``, use ``len(s)``. If *i* is omitted or\n ``None``, use ``0``. If *j* is omitted or ``None``, use\n ``len(s)``. If *i* is greater than or equal to *j*, the slice is\n empty.\n\n5. The slice of *s* from *i* to *j* with step *k* is defined as the\n sequence of items with index ``x = i + n*k`` such that ``0 <= n <\n (j-i)/k``. In other words, the indices are ``i``, ``i+k``,\n ``i+2*k``, ``i+3*k`` and so on, stopping when *j* is reached (but\n never including *j*). If *i* or *j* is greater than ``len(s)``,\n use ``len(s)``. If *i* or *j* are omitted or ``None``, they become\n "end" values (which end depends on the sign of *k*). Note, *k*\n cannot be zero. If *k* is ``None``, it is treated like ``1``.\n\n6. **CPython implementation detail:** If *s* and *t* are both strings,\n some Python implementations such as CPython can usually perform an\n in-place optimization for assignments of the form ``s = s + t`` or\n ``s += t``. When applicable, this optimization makes quadratic\n run-time much less likely. This optimization is both version and\n implementation dependent. For performance sensitive code, it is\n preferable to use the ``str.join()`` method which assures\n consistent linear concatenation performance across versions and\n implementations.\n\n Changed in version 2.4: Formerly, string concatenation never\n occurred in-place.\n\n\nString Methods\n==============\n\nBelow are listed the string methods which both 8-bit strings and\nUnicode objects support. Some of them are also available on\n``bytearray`` objects.\n\nIn addition, Python\'s strings support the sequence type methods\ndescribed in the *Sequence Types --- str, unicode, list, tuple,\nbytearray, buffer, xrange* section. To output formatted strings use\ntemplate strings or the ``%`` operator described in the *String\nFormatting Operations* section. Also, see the ``re`` module for string\nfunctions based on regular expressions.\n\nstr.capitalize()\n\n Return a copy of the string with its first character capitalized\n and the rest lowercased.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.center(width[, fillchar])\n\n Return centered in a string of length *width*. Padding is done\n using the specified *fillchar* (default is a space).\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.count(sub[, start[, end]])\n\n Return the number of non-overlapping occurrences of substring *sub*\n in the range [*start*, *end*]. Optional arguments *start* and\n *end* are interpreted as in slice notation.\n\nstr.decode([encoding[, errors]])\n\n Decodes the string using the codec registered for *encoding*.\n *encoding* defaults to the default string encoding. *errors* may\n be given to set a different error handling scheme. The default is\n ``\'strict\'``, meaning that encoding errors raise ``UnicodeError``.\n Other possible values are ``\'ignore\'``, ``\'replace\'`` and any other\n name registered via ``codecs.register_error()``, see section *Codec\n Base Classes*.\n\n New in version 2.2.\n\n Changed in version 2.3: Support for other error handling schemes\n added.\n\n Changed in version 2.7: Support for keyword arguments added.\n\nstr.encode([encoding[, errors]])\n\n Return an encoded version of the string. Default encoding is the\n current default string encoding. *errors* may be given to set a\n different error handling scheme. 
The default for *errors* is\n ``\'strict\'``, meaning that encoding errors raise a\n ``UnicodeError``. Other possible values are ``\'ignore\'``,\n ``\'replace\'``, ``\'xmlcharrefreplace\'``, ``\'backslashreplace\'`` and\n any other name registered via ``codecs.register_error()``, see\n section *Codec Base Classes*. For a list of possible encodings, see\n section *Standard Encodings*.\n\n New in version 2.0.\n\n Changed in version 2.3: Support for ``\'xmlcharrefreplace\'`` and\n ``\'backslashreplace\'`` and other error handling schemes added.\n\n Changed in version 2.7: Support for keyword arguments added.\n\nstr.endswith(suffix[, start[, end]])\n\n Return ``True`` if the string ends with the specified *suffix*,\n otherwise return ``False``. *suffix* can also be a tuple of\n suffixes to look for. With optional *start*, test beginning at\n that position. With optional *end*, stop comparing at that\n position.\n\n Changed in version 2.5: Accept tuples as *suffix*.\n\nstr.expandtabs([tabsize])\n\n Return a copy of the string where all tab characters are replaced\n by one or more spaces, depending on the current column and the\n given tab size. The column number is reset to zero after each\n newline occurring in the string. If *tabsize* is not given, a tab\n size of ``8`` characters is assumed. This doesn\'t understand other\n non-printing characters or escape sequences.\n\nstr.find(sub[, start[, end]])\n\n Return the lowest index in the string where substring *sub* is\n found, such that *sub* is contained in the slice ``s[start:end]``.\n Optional arguments *start* and *end* are interpreted as in slice\n notation. Return ``-1`` if *sub* is not found.\n\n Note: The ``find()`` method should be used only if you need to know the\n position of *sub*. To check if *sub* is a substring or not, use\n the ``in`` operator:\n\n >>> \'Py\' in \'Python\'\n True\n\nstr.format(*args, **kwargs)\n\n Perform a string formatting operation. The string on which this\n method is called can contain literal text or replacement fields\n delimited by braces ``{}``. Each replacement field contains either\n the numeric index of a positional argument, or the name of a\n keyword argument. 
Returns a copy of the string where each\n replacement field is replaced with the string value of the\n corresponding argument.\n\n >>> "The sum of 1 + 2 is {0}".format(1+2)\n \'The sum of 1 + 2 is 3\'\n\n See *Format String Syntax* for a description of the various\n formatting options that can be specified in format strings.\n\n This method of string formatting is the new standard in Python 3.0,\n and should be preferred to the ``%`` formatting described in\n *String Formatting Operations* in new code.\n\n New in version 2.6.\n\nstr.index(sub[, start[, end]])\n\n Like ``find()``, but raise ``ValueError`` when the substring is not\n found.\n\nstr.isalnum()\n\n Return true if all characters in the string are alphanumeric and\n there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isalpha()\n\n Return true if all characters in the string are alphabetic and\n there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isdigit()\n\n Return true if all characters in the string are digits and there is\n at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.islower()\n\n Return true if all cased characters in the string are lowercase and\n there is at least one cased character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isspace()\n\n Return true if there are only whitespace characters in the string\n and there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.istitle()\n\n Return true if the string is a titlecased string and there is at\n least one character, for example uppercase characters may only\n follow uncased characters and lowercase characters only cased ones.\n Return false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isupper()\n\n Return true if all cased characters in the string are uppercase and\n there is at least one cased character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.join(iterable)\n\n Return a string which is the concatenation of the strings in the\n *iterable* *iterable*. The separator between elements is the\n string providing this method.\n\nstr.ljust(width[, fillchar])\n\n Return the string left justified in a string of length *width*.\n Padding is done using the specified *fillchar* (default is a\n space). The original string is returned if *width* is less than\n ``len(s)``.\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.lower()\n\n Return a copy of the string converted to lowercase.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.lstrip([chars])\n\n Return a copy of the string with leading characters removed. The\n *chars* argument is a string specifying the set of characters to be\n removed. If omitted or ``None``, the *chars* argument defaults to\n removing whitespace. The *chars* argument is not a prefix; rather,\n all combinations of its values are stripped:\n\n >>> \' spacious \'.lstrip()\n \'spacious \'\n >>> \'www.example.com\'.lstrip(\'cmowz.\')\n \'example.com\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.partition(sep)\n\n Split the string at the first occurrence of *sep*, and return a\n 3-tuple containing the part before the separator, the separator\n itself, and the part after the separator. 
If the separator is not\n found, return a 3-tuple containing the string itself, followed by\n two empty strings.\n\n New in version 2.5.\n\nstr.replace(old, new[, count])\n\n Return a copy of the string with all occurrences of substring *old*\n replaced by *new*. If the optional argument *count* is given, only\n the first *count* occurrences are replaced.\n\nstr.rfind(sub[, start[, end]])\n\n Return the highest index in the string where substring *sub* is\n found, such that *sub* is contained within ``s[start:end]``.\n Optional arguments *start* and *end* are interpreted as in slice\n notation. Return ``-1`` on failure.\n\nstr.rindex(sub[, start[, end]])\n\n Like ``rfind()`` but raises ``ValueError`` when the substring *sub*\n is not found.\n\nstr.rjust(width[, fillchar])\n\n Return the string right justified in a string of length *width*.\n Padding is done using the specified *fillchar* (default is a\n space). The original string is returned if *width* is less than\n ``len(s)``.\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.rpartition(sep)\n\n Split the string at the last occurrence of *sep*, and return a\n 3-tuple containing the part before the separator, the separator\n itself, and the part after the separator. If the separator is not\n found, return a 3-tuple containing two empty strings, followed by\n the string itself.\n\n New in version 2.5.\n\nstr.rsplit([sep[, maxsplit]])\n\n Return a list of the words in the string, using *sep* as the\n delimiter string. If *maxsplit* is given, at most *maxsplit* splits\n are done, the *rightmost* ones. If *sep* is not specified or\n ``None``, any whitespace string is a separator. Except for\n splitting from the right, ``rsplit()`` behaves like ``split()``\n which is described in detail below.\n\n New in version 2.4.\n\nstr.rstrip([chars])\n\n Return a copy of the string with trailing characters removed. The\n *chars* argument is a string specifying the set of characters to be\n removed. If omitted or ``None``, the *chars* argument defaults to\n removing whitespace. The *chars* argument is not a suffix; rather,\n all combinations of its values are stripped:\n\n >>> \' spacious \'.rstrip()\n \' spacious\'\n >>> \'mississippi\'.rstrip(\'ipz\')\n \'mississ\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.split([sep[, maxsplit]])\n\n Return a list of the words in the string, using *sep* as the\n delimiter string. If *maxsplit* is given, at most *maxsplit*\n splits are done (thus, the list will have at most ``maxsplit+1``\n elements). If *maxsplit* is not specified, then there is no limit\n on the number of splits (all possible splits are made).\n\n If *sep* is given, consecutive delimiters are not grouped together\n and are deemed to delimit empty strings (for example,\n ``\'1,,2\'.split(\',\')`` returns ``[\'1\', \'\', \'2\']``). The *sep*\n argument may consist of multiple characters (for example,\n ``\'1<>2<>3\'.split(\'<>\')`` returns ``[\'1\', \'2\', \'3\']``). Splitting\n an empty string with a specified separator returns ``[\'\']``.\n\n If *sep* is not specified or is ``None``, a different splitting\n algorithm is applied: runs of consecutive whitespace are regarded\n as a single separator, and the result will contain no empty strings\n at the start or end if the string has leading or trailing\n whitespace. 
Consequently, splitting an empty string or a string\n consisting of just whitespace with a ``None`` separator returns\n ``[]``.\n\n For example, ``\' 1 2 3 \'.split()`` returns ``[\'1\', \'2\', \'3\']``,\n and ``\' 1 2 3 \'.split(None, 1)`` returns ``[\'1\', \'2 3 \']``.\n\nstr.splitlines([keepends])\n\n Return a list of the lines in the string, breaking at line\n boundaries. Line breaks are not included in the resulting list\n unless *keepends* is given and true.\n\nstr.startswith(prefix[, start[, end]])\n\n Return ``True`` if string starts with the *prefix*, otherwise\n return ``False``. *prefix* can also be a tuple of prefixes to look\n for. With optional *start*, test string beginning at that\n position. With optional *end*, stop comparing string at that\n position.\n\n Changed in version 2.5: Accept tuples as *prefix*.\n\nstr.strip([chars])\n\n Return a copy of the string with the leading and trailing\n characters removed. The *chars* argument is a string specifying the\n set of characters to be removed. If omitted or ``None``, the\n *chars* argument defaults to removing whitespace. The *chars*\n argument is not a prefix or suffix; rather, all combinations of its\n values are stripped:\n\n >>> \' spacious \'.strip()\n \'spacious\'\n >>> \'www.example.com\'.strip(\'cmowz.\')\n \'example\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.swapcase()\n\n Return a copy of the string with uppercase characters converted to\n lowercase and vice versa.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.title()\n\n Return a titlecased version of the string where words start with an\n uppercase character and the remaining characters are lowercase.\n\n The algorithm uses a simple language-independent definition of a\n word as groups of consecutive letters. The definition works in\n many contexts but it means that apostrophes in contractions and\n possessives form word boundaries, which may not be the desired\n result:\n\n >>> "they\'re bill\'s friends from the UK".title()\n "They\'Re Bill\'S Friends From The Uk"\n\n A workaround for apostrophes can be constructed using regular\n expressions:\n\n >>> import re\n >>> def titlecase(s):\n return re.sub(r"[A-Za-z]+(\'[A-Za-z]+)?",\n lambda mo: mo.group(0)[0].upper() +\n mo.group(0)[1:].lower(),\n s)\n\n >>> titlecase("they\'re bill\'s friends.")\n "They\'re Bill\'s Friends."\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.translate(table[, deletechars])\n\n Return a copy of the string where all characters occurring in the\n optional argument *deletechars* are removed, and the remaining\n characters have been mapped through the given translation table,\n which must be a string of length 256.\n\n You can use the ``maketrans()`` helper function in the ``string``\n module to create a translation table. For string objects, set the\n *table* argument to ``None`` for translations that only delete\n characters:\n\n >>> \'read this short text\'.translate(None, \'aeiou\')\n \'rd ths shrt txt\'\n\n New in version 2.6: Support for a ``None`` *table* argument.\n\n For Unicode objects, the ``translate()`` method does not accept the\n optional *deletechars* argument. Instead, it returns a copy of the\n *s* where all characters have been mapped through the given\n translation table which must be a mapping of Unicode ordinals to\n Unicode ordinals, Unicode strings or ``None``. Unmapped characters\n are left untouched. 
Characters mapped to ``None`` are deleted.\n Note, a more flexible approach is to create a custom character\n mapping codec using the ``codecs`` module (see ``encodings.cp1251``\n for an example).\n\nstr.upper()\n\n Return a copy of the string converted to uppercase.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.zfill(width)\n\n Return the numeric string left filled with zeros in a string of\n length *width*. A sign prefix is handled correctly. The original\n string is returned if *width* is less than ``len(s)``.\n\n New in version 2.2.2.\n\nThe following methods are present only on unicode objects:\n\nunicode.isnumeric()\n\n Return ``True`` if there are only numeric characters in S,\n ``False`` otherwise. Numeric characters include digit characters,\n and all characters that have the Unicode numeric value property,\n e.g. U+2155, VULGAR FRACTION ONE FIFTH.\n\nunicode.isdecimal()\n\n Return ``True`` if there are only decimal characters in S,\n ``False`` otherwise. Decimal characters include digit characters,\n and all characters that that can be used to form decimal-radix\n numbers, e.g. U+0660, ARABIC-INDIC DIGIT ZERO.\n\n\nString Formatting Operations\n============================\n\nString and Unicode objects have one unique built-in operation: the\n``%`` operator (modulo). This is also known as the string\n*formatting* or *interpolation* operator. Given ``format % values``\n(where *format* is a string or Unicode object), ``%`` conversion\nspecifications in *format* are replaced with zero or more elements of\n*values*. The effect is similar to the using ``sprintf()`` in the C\nlanguage. If *format* is a Unicode object, or if any of the objects\nbeing converted using the ``%s`` conversion are Unicode objects, the\nresult will also be a Unicode object.\n\nIf *format* requires a single argument, *values* may be a single non-\ntuple object. [4] Otherwise, *values* must be a tuple with exactly\nthe number of items specified by the format string, or a single\nmapping object (for example, a dictionary).\n\nA conversion specifier contains two or more characters and has the\nfollowing components, which must occur in this order:\n\n1. The ``\'%\'`` character, which marks the start of the specifier.\n\n2. Mapping key (optional), consisting of a parenthesised sequence of\n characters (for example, ``(somename)``).\n\n3. Conversion flags (optional), which affect the result of some\n conversion types.\n\n4. Minimum field width (optional). If specified as an ``\'*\'``\n (asterisk), the actual width is read from the next element of the\n tuple in *values*, and the object to convert comes after the\n minimum field width and optional precision.\n\n5. Precision (optional), given as a ``\'.\'`` (dot) followed by the\n precision. If specified as ``\'*\'`` (an asterisk), the actual width\n is read from the next element of the tuple in *values*, and the\n value to convert comes after the precision.\n\n6. Length modifier (optional).\n\n7. Conversion type.\n\nWhen the right argument is a dictionary (or other mapping type), then\nthe formats in the string *must* include a parenthesised mapping key\ninto that dictionary inserted immediately after the ``\'%\'`` character.\nThe mapping key selects the value to be formatted from the mapping.\nFor example:\n\n>>> print \'%(language)s has %(number)03d quote types.\' % \\\n... 
{"language": "Python", "number": 2}\nPython has 002 quote types.\n\nIn this case no ``*`` specifiers may occur in a format (since they\nrequire a sequential parameter list).\n\nThe conversion flag characters are:\n\n+-----------+-----------------------------------------------------------------------+\n| Flag | Meaning |\n+===========+=======================================================================+\n| ``\'#\'`` | The value conversion will use the "alternate form" (where defined |\n| | below). |\n+-----------+-----------------------------------------------------------------------+\n| ``\'0\'`` | The conversion will be zero padded for numeric values. |\n+-----------+-----------------------------------------------------------------------+\n| ``\'-\'`` | The converted value is left adjusted (overrides the ``\'0\'`` |\n| | conversion if both are given). |\n+-----------+-----------------------------------------------------------------------+\n| ``\' \'`` | (a space) A blank should be left before a positive number (or empty |\n| | string) produced by a signed conversion. |\n+-----------+-----------------------------------------------------------------------+\n| ``\'+\'`` | A sign character (``\'+\'`` or ``\'-\'``) will precede the conversion |\n| | (overrides a "space" flag). |\n+-----------+-----------------------------------------------------------------------+\n\nA length modifier (``h``, ``l``, or ``L``) may be present, but is\nignored as it is not necessary for Python -- so e.g. ``%ld`` is\nidentical to ``%d``.\n\nThe conversion types are:\n\n+--------------+-------------------------------------------------------+---------+\n| Conversion | Meaning | Notes |\n+==============+=======================================================+=========+\n| ``\'d\'`` | Signed integer decimal. | |\n+--------------+-------------------------------------------------------+---------+\n| ``\'i\'`` | Signed integer decimal. | |\n+--------------+-------------------------------------------------------+---------+\n| ``\'o\'`` | Signed octal value. | (1) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'u\'`` | Obsolete type -- it is identical to ``\'d\'``. | (7) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'x\'`` | Signed hexadecimal (lowercase). | (2) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'X\'`` | Signed hexadecimal (uppercase). | (2) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'e\'`` | Floating point exponential format (lowercase). | (3) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'E\'`` | Floating point exponential format (uppercase). | (3) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'f\'`` | Floating point decimal format. | (3) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'F\'`` | Floating point decimal format. | (3) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'g\'`` | Floating point format. Uses lowercase exponential | (4) |\n| | format if exponent is less than -4 or not less than | |\n| | precision, decimal format otherwise. | |\n+--------------+-------------------------------------------------------+---------+\n| ``\'G\'`` | Floating point format. 
Uses uppercase exponential | (4) |\n| | format if exponent is less than -4 or not less than | |\n| | precision, decimal format otherwise. | |\n+--------------+-------------------------------------------------------+---------+\n| ``\'c\'`` | Single character (accepts integer or single character | |\n| | string). | |\n+--------------+-------------------------------------------------------+---------+\n| ``\'r\'`` | String (converts any Python object using ``repr()``). | (5) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'s\'`` | String (converts any Python object using ``str()``). | (6) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'%\'`` | No argument is converted, results in a ``\'%\'`` | |\n| | character in the result. | |\n+--------------+-------------------------------------------------------+---------+\n\nNotes:\n\n1. The alternate form causes a leading zero (``\'0\'``) to be inserted\n between left-hand padding and the formatting of the number if the\n leading character of the result is not already a zero.\n\n2. The alternate form causes a leading ``\'0x\'`` or ``\'0X\'`` (depending\n on whether the ``\'x\'`` or ``\'X\'`` format was used) to be inserted\n between left-hand padding and the formatting of the number if the\n leading character of the result is not already a zero.\n\n3. The alternate form causes the result to always contain a decimal\n point, even if no digits follow it.\n\n The precision determines the number of digits after the decimal\n point and defaults to 6.\n\n4. The alternate form causes the result to always contain a decimal\n point, and trailing zeroes are not removed as they would otherwise\n be.\n\n The precision determines the number of significant digits before\n and after the decimal point and defaults to 6.\n\n5. The ``%r`` conversion was added in Python 2.0.\n\n The precision determines the maximal number of characters used.\n\n6. If the object or format provided is a ``unicode`` string, the\n resulting string will also be ``unicode``.\n\n The precision determines the maximal number of characters used.\n\n7. See **PEP 237**.\n\nSince Python strings have an explicit length, ``%s`` conversions do\nnot assume that ``\'\\0\'`` is the end of the string.\n\nChanged in version 2.7: ``%f`` conversions for numbers whose absolute\nvalue is over 1e50 are no longer replaced by ``%g`` conversions.\n\nAdditional string operations are defined in standard modules\n``string`` and ``re``.\n\n\nXRange Type\n===========\n\nThe ``xrange`` type is an immutable sequence which is commonly used\nfor looping. The advantage of the ``xrange`` type is that an\n``xrange`` object will always take the same amount of memory, no\nmatter the size of the range it represents. There are no consistent\nperformance advantages.\n\nXRange objects have very little behavior: they only support indexing,\niteration, and the ``len()`` function.\n\n\nMutable Sequence Types\n======================\n\nList and ``bytearray`` objects support additional operations that\nallow in-place modification of the object. Other mutable sequence\ntypes (when added to the language) should also support these\noperations. Strings and tuples are immutable sequence types: such\nobjects cannot be modified once created. 
The following operations are\ndefined on mutable sequence types (where *x* is an arbitrary object):\n\n+--------------------------------+----------------------------------+-----------------------+\n| Operation | Result | Notes |\n+================================+==================================+=======================+\n| ``s[i] = x`` | item *i* of *s* is replaced by | |\n| | *x* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s[i:j] = t`` | slice of *s* from *i* to *j* is | |\n| | replaced by the contents of the | |\n| | iterable *t* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``del s[i:j]`` | same as ``s[i:j] = []`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s[i:j:k] = t`` | the elements of ``s[i:j:k]`` are | (1) |\n| | replaced by those of *t* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``del s[i:j:k]`` | removes the elements of | |\n| | ``s[i:j:k]`` from the list | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.append(x)`` | same as ``s[len(s):len(s)] = | (2) |\n| | [x]`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.extend(x)`` | same as ``s[len(s):len(s)] = x`` | (3) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.count(x)`` | return number of *i*\'s for which | |\n| | ``s[i] == x`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.index(x[, i[, j]])`` | return smallest *k* such that | (4) |\n| | ``s[k] == x`` and ``i <= k < j`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.insert(i, x)`` | same as ``s[i:i] = [x]`` | (5) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.pop([i])`` | same as ``x = s[i]; del s[i]; | (6) |\n| | return x`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.remove(x)`` | same as ``del s[s.index(x)]`` | (4) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.reverse()`` | reverses the items of *s* in | (7) |\n| | place | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.sort([cmp[, key[, | sort the items of *s* in place | (7)(8)(9)(10) |\n| reverse]]])`` | | |\n+--------------------------------+----------------------------------+-----------------------+\n\nNotes:\n\n1. *t* must have the same length as the slice it is replacing.\n\n2. The C implementation of Python has historically accepted multiple\n parameters and implicitly joined them into a tuple; this no longer\n works in Python 2.0. Use of this misfeature has been deprecated\n since Python 1.4.\n\n3. *x* can be any iterable object.\n\n4. Raises ``ValueError`` when *x* is not found in *s*. When a negative\n index is passed as the second or third parameter to the ``index()``\n method, the list length is added, as for slice indices. If it is\n still negative, it is truncated to zero, as for slice indices.\n\n Changed in version 2.3: Previously, ``index()`` didn\'t have\n arguments for specifying start and stop positions.\n\n5. 
When a negative index is passed as the first parameter to the\n ``insert()`` method, the list length is added, as for slice\n indices. If it is still negative, it is truncated to zero, as for\n slice indices.\n\n Changed in version 2.3: Previously, all negative indices were\n truncated to zero.\n\n6. The ``pop()`` method is only supported by the list and array types.\n The optional argument *i* defaults to ``-1``, so that by default\n the last item is removed and returned.\n\n7. The ``sort()`` and ``reverse()`` methods modify the list in place\n for economy of space when sorting or reversing a large list. To\n remind you that they operate by side effect, they don\'t return the\n sorted or reversed list.\n\n8. The ``sort()`` method takes optional arguments for controlling the\n comparisons.\n\n *cmp* specifies a custom comparison function of two arguments (list\n items) which should return a negative, zero or positive number\n depending on whether the first argument is considered smaller than,\n equal to, or larger than the second argument: ``cmp=lambda x,y:\n cmp(x.lower(), y.lower())``. The default value is ``None``.\n\n *key* specifies a function of one argument that is used to extract\n a comparison key from each list element: ``key=str.lower``. The\n default value is ``None``.\n\n *reverse* is a boolean value. If set to ``True``, then the list\n elements are sorted as if each comparison were reversed.\n\n In general, the *key* and *reverse* conversion processes are much\n faster than specifying an equivalent *cmp* function. This is\n because *cmp* is called multiple times for each list element while\n *key* and *reverse* touch each element only once. Use\n ``functools.cmp_to_key()`` to convert an old-style *cmp* function\n to a *key* function.\n\n Changed in version 2.3: Support for ``None`` as an equivalent to\n omitting *cmp* was added.\n\n Changed in version 2.4: Support for *key* and *reverse* was added.\n\n9. Starting with Python 2.3, the ``sort()`` method is guaranteed to be\n stable. A sort is stable if it guarantees not to change the\n relative order of elements that compare equal --- this is helpful\n for sorting in multiple passes (for example, sort by department,\n then by salary grade).\n\n10. **CPython implementation detail:** While a list is being sorted,\n the effect of attempting to mutate, or even inspect, the list is\n undefined. The C implementation of Python 2.3 and newer makes the\n list appear empty for the duration, and raises ``ValueError`` if\n it can detect that the list has been mutated during a sort.\n', + 'typesseq-mutable': u"\nMutable Sequence Types\n**********************\n\nList and ``bytearray`` objects support additional operations that\nallow in-place modification of the object. Other mutable sequence\ntypes (when added to the language) should also support these\noperations. Strings and tuples are immutable sequence types: such\nobjects cannot be modified once created. 
The following operations are\ndefined on mutable sequence types (where *x* is an arbitrary object):\n\n+--------------------------------+----------------------------------+-----------------------+\n| Operation | Result | Notes |\n+================================+==================================+=======================+\n| ``s[i] = x`` | item *i* of *s* is replaced by | |\n| | *x* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s[i:j] = t`` | slice of *s* from *i* to *j* is | |\n| | replaced by the contents of the | |\n| | iterable *t* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``del s[i:j]`` | same as ``s[i:j] = []`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s[i:j:k] = t`` | the elements of ``s[i:j:k]`` are | (1) |\n| | replaced by those of *t* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``del s[i:j:k]`` | removes the elements of | |\n| | ``s[i:j:k]`` from the list | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.append(x)`` | same as ``s[len(s):len(s)] = | (2) |\n| | [x]`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.extend(x)`` | same as ``s[len(s):len(s)] = x`` | (3) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.count(x)`` | return number of *i*'s for which | |\n| | ``s[i] == x`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.index(x[, i[, j]])`` | return smallest *k* such that | (4) |\n| | ``s[k] == x`` and ``i <= k < j`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.insert(i, x)`` | same as ``s[i:i] = [x]`` | (5) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.pop([i])`` | same as ``x = s[i]; del s[i]; | (6) |\n| | return x`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.remove(x)`` | same as ``del s[s.index(x)]`` | (4) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.reverse()`` | reverses the items of *s* in | (7) |\n| | place | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.sort([cmp[, key[, | sort the items of *s* in place | (7)(8)(9)(10) |\n| reverse]]])`` | | |\n+--------------------------------+----------------------------------+-----------------------+\n\nNotes:\n\n1. *t* must have the same length as the slice it is replacing.\n\n2. The C implementation of Python has historically accepted multiple\n parameters and implicitly joined them into a tuple; this no longer\n works in Python 2.0. Use of this misfeature has been deprecated\n since Python 1.4.\n\n3. *x* can be any iterable object.\n\n4. Raises ``ValueError`` when *x* is not found in *s*. When a negative\n index is passed as the second or third parameter to the ``index()``\n method, the list length is added, as for slice indices. If it is\n still negative, it is truncated to zero, as for slice indices.\n\n Changed in version 2.3: Previously, ``index()`` didn't have\n arguments for specifying start and stop positions.\n\n5. 
When a negative index is passed as the first parameter to the\n ``insert()`` method, the list length is added, as for slice\n indices. If it is still negative, it is truncated to zero, as for\n slice indices.\n\n Changed in version 2.3: Previously, all negative indices were\n truncated to zero.\n\n6. The ``pop()`` method is only supported by the list and array types.\n The optional argument *i* defaults to ``-1``, so that by default\n the last item is removed and returned.\n\n7. The ``sort()`` and ``reverse()`` methods modify the list in place\n for economy of space when sorting or reversing a large list. To\n remind you that they operate by side effect, they don't return the\n sorted or reversed list.\n\n8. The ``sort()`` method takes optional arguments for controlling the\n comparisons.\n\n *cmp* specifies a custom comparison function of two arguments (list\n items) which should return a negative, zero or positive number\n depending on whether the first argument is considered smaller than,\n equal to, or larger than the second argument: ``cmp=lambda x,y:\n cmp(x.lower(), y.lower())``. The default value is ``None``.\n\n *key* specifies a function of one argument that is used to extract\n a comparison key from each list element: ``key=str.lower``. The\n default value is ``None``.\n\n *reverse* is a boolean value. If set to ``True``, then the list\n elements are sorted as if each comparison were reversed.\n\n In general, the *key* and *reverse* conversion processes are much\n faster than specifying an equivalent *cmp* function. This is\n because *cmp* is called multiple times for each list element while\n *key* and *reverse* touch each element only once. Use\n ``functools.cmp_to_key()`` to convert an old-style *cmp* function\n to a *key* function.\n\n Changed in version 2.3: Support for ``None`` as an equivalent to\n omitting *cmp* was added.\n\n Changed in version 2.4: Support for *key* and *reverse* was added.\n\n9. Starting with Python 2.3, the ``sort()`` method is guaranteed to be\n stable. A sort is stable if it guarantees not to change the\n relative order of elements that compare equal --- this is helpful\n for sorting in multiple passes (for example, sort by department,\n then by salary grade).\n\n10. **CPython implementation detail:** While a list is being sorted,\n the effect of attempting to mutate, or even inspect, the list is\n undefined. The C implementation of Python 2.3 and newer makes the\n list appear empty for the duration, and raises ``ValueError`` if\n it can detect that the list has been mutated during a sort.\n", 'unary': u'\nUnary arithmetic and bitwise operations\n***************************************\n\nAll unary arithmetic and bitwise operations have the same priority:\n\n u_expr ::= power | "-" u_expr | "+" u_expr | "~" u_expr\n\nThe unary ``-`` (minus) operator yields the negation of its numeric\nargument.\n\nThe unary ``+`` (plus) operator yields its numeric argument unchanged.\n\nThe unary ``~`` (invert) operator yields the bitwise inversion of its\nplain or long integer argument. The bitwise inversion of ``x`` is\ndefined as ``-(x+1)``. 
It only applies to integral numbers.\n\nIn all three cases, if the argument does not have the proper type, a\n``TypeError`` exception is raised.\n', 'while': u'\nThe ``while`` statement\n***********************\n\nThe ``while`` statement is used for repeated execution as long as an\nexpression is true:\n\n while_stmt ::= "while" expression ":" suite\n ["else" ":" suite]\n\nThis repeatedly tests the expression and, if it is true, executes the\nfirst suite; if the expression is false (which may be the first time\nit is tested) the suite of the ``else`` clause, if present, is\nexecuted and the loop terminates.\n\nA ``break`` statement executed in the first suite terminates the loop\nwithout executing the ``else`` clause\'s suite. A ``continue``\nstatement executed in the first suite skips the rest of the suite and\ngoes back to testing the expression.\n', - 'with': u'\nThe ``with`` statement\n**********************\n\nNew in version 2.5.\n\nThe ``with`` statement is used to wrap the execution of a block with\nmethods defined by a context manager (see section *With Statement\nContext Managers*). This allows common\n``try``...``except``...``finally`` usage patterns to be encapsulated\nfor convenient reuse.\n\n with_stmt ::= "with" with_item ("," with_item)* ":" suite\n with_item ::= expression ["as" target]\n\nThe execution of the ``with`` statement with one "item" proceeds as\nfollows:\n\n1. The context expression is evaluated to obtain a context manager.\n\n2. The context manager\'s ``__exit__()`` is loaded for later use.\n\n3. The context manager\'s ``__enter__()`` method is invoked.\n\n4. If a target was included in the ``with`` statement, the return\n value from ``__enter__()`` is assigned to it.\n\n Note: The ``with`` statement guarantees that if the ``__enter__()``\n method returns without an error, then ``__exit__()`` will always\n be called. Thus, if an error occurs during the assignment to the\n target list, it will be treated the same as an error occurring\n within the suite would be. See step 6 below.\n\n5. The suite is executed.\n\n6. The context manager\'s ``__exit__()`` method is invoked. If an\n exception caused the suite to be exited, its type, value, and\n traceback are passed as arguments to ``__exit__()``. Otherwise,\n three ``None`` arguments are supplied.\n\n If the suite was exited due to an exception, and the return value\n from the ``__exit__()`` method was false, the exception is\n reraised. If the return value was true, the exception is\n suppressed, and execution continues with the statement following\n the ``with`` statement.\n\n If the suite was exited for any reason other than an exception, the\n return value from ``__exit__()`` is ignored, and execution proceeds\n at the normal location for the kind of exit that was taken.\n\nWith more than one item, the context managers are processed as if\nmultiple ``with`` statements were nested:\n\n with A() as a, B() as b:\n suite\n\nis equivalent to\n\n with A() as a:\n with B() as b:\n suite\n\nNote: In Python 2.5, the ``with`` statement is only allowed when the\n ``with_statement`` feature has been enabled. 
It is always enabled\n in Python 2.6.\n\nChanged in version 2.7: Support for multiple context expressions.\n\nSee also:\n\n **PEP 0343** - The "with" statement\n The specification, background, and examples for the Python\n ``with`` statement.\n', + 'with': u'\nThe ``with`` statement\n**********************\n\nNew in version 2.5.\n\nThe ``with`` statement is used to wrap the execution of a block with\nmethods defined by a context manager (see section *With Statement\nContext Managers*). This allows common\n``try``...``except``...``finally`` usage patterns to be encapsulated\nfor convenient reuse.\n\n with_stmt ::= "with" with_item ("," with_item)* ":" suite\n with_item ::= expression ["as" target]\n\nThe execution of the ``with`` statement with one "item" proceeds as\nfollows:\n\n1. The context expression (the expression given in the **with_item**)\n is evaluated to obtain a context manager.\n\n2. The context manager\'s ``__exit__()`` is loaded for later use.\n\n3. The context manager\'s ``__enter__()`` method is invoked.\n\n4. If a target was included in the ``with`` statement, the return\n value from ``__enter__()`` is assigned to it.\n\n Note: The ``with`` statement guarantees that if the ``__enter__()``\n method returns without an error, then ``__exit__()`` will always\n be called. Thus, if an error occurs during the assignment to the\n target list, it will be treated the same as an error occurring\n within the suite would be. See step 6 below.\n\n5. The suite is executed.\n\n6. The context manager\'s ``__exit__()`` method is invoked. If an\n exception caused the suite to be exited, its type, value, and\n traceback are passed as arguments to ``__exit__()``. Otherwise,\n three ``None`` arguments are supplied.\n\n If the suite was exited due to an exception, and the return value\n from the ``__exit__()`` method was false, the exception is\n reraised. If the return value was true, the exception is\n suppressed, and execution continues with the statement following\n the ``with`` statement.\n\n If the suite was exited for any reason other than an exception, the\n return value from ``__exit__()`` is ignored, and execution proceeds\n at the normal location for the kind of exit that was taken.\n\nWith more than one item, the context managers are processed as if\nmultiple ``with`` statements were nested:\n\n with A() as a, B() as b:\n suite\n\nis equivalent to\n\n with A() as a:\n with B() as b:\n suite\n\nNote: In Python 2.5, the ``with`` statement is only allowed when the\n ``with_statement`` feature has been enabled. It is always enabled\n in Python 2.6.\n\nChanged in version 2.7: Support for multiple context expressions.\n\nSee also:\n\n **PEP 0343** - The "with" statement\n The specification, background, and examples for the Python\n ``with`` statement.\n', 'yield': u'\nThe ``yield`` statement\n***********************\n\n yield_stmt ::= yield_expression\n\nThe ``yield`` statement is only used when defining a generator\nfunction, and is only used in the body of the generator function.\nUsing a ``yield`` statement in a function definition is sufficient to\ncause that definition to create a generator function instead of a\nnormal function.\n\nWhen a generator function is called, it returns an iterator known as a\ngenerator iterator, or more commonly, a generator. 
The body of the\ngenerator function is executed by calling the generator\'s ``next()``\nmethod repeatedly until it raises an exception.\n\nWhen a ``yield`` statement is executed, the state of the generator is\nfrozen and the value of **expression_list** is returned to\n``next()``\'s caller. By "frozen" we mean that all local state is\nretained, including the current bindings of local variables, the\ninstruction pointer, and the internal evaluation stack: enough\ninformation is saved so that the next time ``next()`` is invoked, the\nfunction can proceed exactly as if the ``yield`` statement were just\nanother external call.\n\nAs of Python version 2.5, the ``yield`` statement is now allowed in\nthe ``try`` clause of a ``try`` ... ``finally`` construct. If the\ngenerator is not resumed before it is finalized (by reaching a zero\nreference count or by being garbage collected), the generator-\niterator\'s ``close()`` method will be called, allowing any pending\n``finally`` clauses to execute.\n\nNote: In Python 2.2, the ``yield`` statement was only allowed when the\n ``generators`` feature has been enabled. This ``__future__`` import\n statement was used to enable the feature:\n\n from __future__ import generators\n\nSee also:\n\n **PEP 0255** - Simple Generators\n The proposal for adding generators and the ``yield`` statement\n to Python.\n\n **PEP 0342** - Coroutines via Enhanced Generators\n The proposal that, among other generator enhancements, proposed\n allowing ``yield`` to appear inside a ``try`` ... ``finally``\n block.\n'} diff --git a/lib-python/2.7/random.py b/lib-python/2.7/random.py --- a/lib-python/2.7/random.py +++ b/lib-python/2.7/random.py @@ -317,7 +317,7 @@ n = len(population) if not 0 <= k <= n: - raise ValueError, "sample larger than population" + raise ValueError("sample larger than population") random = self.random _int = int result = [None] * k @@ -490,6 +490,12 @@ Conditions on the parameters are alpha > 0 and beta > 0. + The probability distribution function is: + + x ** (alpha - 1) * math.exp(-x / beta) + pdf(x) = -------------------------------------- + math.gamma(alpha) * beta ** alpha + """ # alpha > 0, beta > 0, mean is alpha*beta, variance is alpha*beta**2 @@ -592,7 +598,7 @@ ## -------------------- beta -------------------- ## See -## http://sourceforge.net/bugs/?func=detailbug&bug_id=130030&group_id=5470 +## http://mail.python.org/pipermail/python-bugs-list/2001-January/003752.html ## for Ivan Frohne's insightful analysis of why the original implementation: ## ## def betavariate(self, alpha, beta): diff --git a/lib-python/2.7/re.py b/lib-python/2.7/re.py --- a/lib-python/2.7/re.py +++ b/lib-python/2.7/re.py @@ -207,8 +207,7 @@ "Escape all non-alphanumeric characters in pattern." s = list(pattern) alphanum = _alphanum - for i in range(len(pattern)): - c = pattern[i] + for i, c in enumerate(pattern): if c not in alphanum: if c == "\000": s[i] = "\\000" diff --git a/lib-python/2.7/shutil.py b/lib-python/2.7/shutil.py --- a/lib-python/2.7/shutil.py +++ b/lib-python/2.7/shutil.py @@ -277,6 +277,12 @@ """ real_dst = dst if os.path.isdir(dst): + if _samefile(src, dst): + # We might be on a case insensitive filesystem, + # perform the rename anyway. + os.rename(src, dst) + return + real_dst = os.path.join(dst, _basename(src)) if os.path.exists(real_dst): raise Error, "Destination path '%s' already exists" % real_dst @@ -336,7 +342,7 @@ archive that is being built. If not provided, the current owner and group will be used. 
- The output tar file will be named 'base_dir' + ".tar", possibly plus + The output tar file will be named 'base_name' + ".tar", possibly plus the appropriate compression extension (".gz", or ".bz2"). Returns the output filename. @@ -406,7 +412,7 @@ def _make_zipfile(base_name, base_dir, verbose=0, dry_run=0, logger=None): """Create a zip file from all the files under 'base_dir'. - The output zip file will be named 'base_dir' + ".zip". Uses either the + The output zip file will be named 'base_name' + ".zip". Uses either the "zipfile" Python module (if available) or the InfoZIP "zip" utility (if installed and found on the default search path). If neither tool is available, raises ExecError. Returns the name of the output zip diff --git a/lib-python/2.7/site.py b/lib-python/2.7/site.py --- a/lib-python/2.7/site.py +++ b/lib-python/2.7/site.py @@ -61,6 +61,7 @@ import sys import os import __builtin__ +import traceback # Prefixes for site-packages; add additional prefixes like /usr/local here PREFIXES = [sys.prefix, sys.exec_prefix] @@ -155,17 +156,26 @@ except IOError: return with f: - for line in f: + for n, line in enumerate(f): if line.startswith("#"): continue - if line.startswith(("import ", "import\t")): - exec line - continue - line = line.rstrip() - dir, dircase = makepath(sitedir, line) - if not dircase in known_paths and os.path.exists(dir): - sys.path.append(dir) - known_paths.add(dircase) + try: + if line.startswith(("import ", "import\t")): + exec line + continue + line = line.rstrip() + dir, dircase = makepath(sitedir, line) + if not dircase in known_paths and os.path.exists(dir): + sys.path.append(dir) + known_paths.add(dircase) + except Exception as err: + print >>sys.stderr, "Error processing line {:d} of {}:\n".format( + n+1, fullname) + for record in traceback.format_exception(*sys.exc_info()): + for line in record.splitlines(): + print >>sys.stderr, ' '+line + print >>sys.stderr, "\nRemainder of file ignored" + break if reset: known_paths = None return known_paths diff --git a/lib-python/2.7/smtplib.py b/lib-python/2.7/smtplib.py --- a/lib-python/2.7/smtplib.py +++ b/lib-python/2.7/smtplib.py @@ -49,17 +49,18 @@ from email.base64mime import encode as encode_base64 from sys import stderr -__all__ = ["SMTPException","SMTPServerDisconnected","SMTPResponseException", - "SMTPSenderRefused","SMTPRecipientsRefused","SMTPDataError", - "SMTPConnectError","SMTPHeloError","SMTPAuthenticationError", - "quoteaddr","quotedata","SMTP"] +__all__ = ["SMTPException", "SMTPServerDisconnected", "SMTPResponseException", + "SMTPSenderRefused", "SMTPRecipientsRefused", "SMTPDataError", + "SMTPConnectError", "SMTPHeloError", "SMTPAuthenticationError", + "quoteaddr", "quotedata", "SMTP"] SMTP_PORT = 25 SMTP_SSL_PORT = 465 -CRLF="\r\n" +CRLF = "\r\n" OLDSTYLE_AUTH = re.compile(r"auth=(.*)", re.I) + # Exception classes used by this module. class SMTPException(Exception): """Base class for all exceptions raised by this module.""" @@ -109,7 +110,7 @@ def __init__(self, recipients): self.recipients = recipients - self.args = ( recipients,) + self.args = (recipients,) class SMTPDataError(SMTPResponseException): @@ -128,6 +129,7 @@ combination provided. """ + def quoteaddr(addr): """Quote a subset of the email addresses defined by RFC 821. @@ -138,7 +140,7 @@ m = email.utils.parseaddr(addr)[1] except AttributeError: pass - if m == (None, None): # Indicates parse failure or AttributeError + if m == (None, None): # Indicates parse failure or AttributeError # something weird here.. 
punt -ddm return "<%s>" % addr elif m is None: @@ -175,7 +177,8 @@ chr = None while chr != "\n": chr = self.sslobj.read(1) - if not chr: break + if not chr: + break str += chr return str @@ -219,6 +222,7 @@ ehlo_msg = "ehlo" ehlo_resp = None does_esmtp = 0 + default_port = SMTP_PORT def __init__(self, host='', port=0, local_hostname=None, timeout=socket._GLOBAL_DEFAULT_TIMEOUT): @@ -234,7 +238,6 @@ """ self.timeout = timeout self.esmtp_features = {} - self.default_port = SMTP_PORT if host: (code, msg) = self.connect(host, port) if code != 220: @@ -269,10 +272,11 @@ def _get_socket(self, port, host, timeout): # This makes it simpler for SMTP_SSL to use the SMTP connect code # and just alter the socket connection bit. - if self.debuglevel > 0: print>>stderr, 'connect:', (host, port) + if self.debuglevel > 0: + print>>stderr, 'connect:', (host, port) return socket.create_connection((port, host), timeout) - def connect(self, host='localhost', port = 0): + def connect(self, host='localhost', port=0): """Connect to a host on a given port. If the hostname ends with a colon (`:') followed by a number, and @@ -286,20 +290,25 @@ if not port and (host.find(':') == host.rfind(':')): i = host.rfind(':') if i >= 0: - host, port = host[:i], host[i+1:] - try: port = int(port) + host, port = host[:i], host[i + 1:] + try: + port = int(port) except ValueError: raise socket.error, "nonnumeric port" - if not port: port = self.default_port - if self.debuglevel > 0: print>>stderr, 'connect:', (host, port) + if not port: + port = self.default_port + if self.debuglevel > 0: + print>>stderr, 'connect:', (host, port) self.sock = self._get_socket(host, port, self.timeout) (code, msg) = self.getreply() - if self.debuglevel > 0: print>>stderr, "connect:", msg + if self.debuglevel > 0: + print>>stderr, "connect:", msg return (code, msg) def send(self, str): """Send `str' to the server.""" - if self.debuglevel > 0: print>>stderr, 'send:', repr(str) + if self.debuglevel > 0: + print>>stderr, 'send:', repr(str) if hasattr(self, 'sock') and self.sock: try: self.sock.sendall(str) @@ -330,7 +339,7 @@ Raises SMTPServerDisconnected if end-of-file is reached. """ - resp=[] + resp = [] if self.file is None: self.file = self.sock.makefile('rb') while 1: @@ -341,9 +350,10 @@ if line == '': self.close() raise SMTPServerDisconnected("Connection unexpectedly closed") - if self.debuglevel > 0: print>>stderr, 'reply:', repr(line) + if self.debuglevel > 0: + print>>stderr, 'reply:', repr(line) resp.append(line[4:].strip()) - code=line[:3] + code = line[:3] # Check that the error code is syntactically correct. # Don't attempt to read a continuation line if it is broken. try: @@ -352,17 +362,17 @@ errcode = -1 break # Check if multiline response. - if line[3:4]!="-": + if line[3:4] != "-": break errmsg = "\n".join(resp) if self.debuglevel > 0: - print>>stderr, 'reply: retcode (%s); Msg: %s' % (errcode,errmsg) + print>>stderr, 'reply: retcode (%s); Msg: %s' % (errcode, errmsg) return errcode, errmsg def docmd(self, cmd, args=""): """Send a command, and return its response code.""" - self.putcmd(cmd,args) + self.putcmd(cmd, args) return self.getreply() # std smtp commands @@ -372,9 +382,9 @@ host. """ self.putcmd("helo", name or self.local_hostname) - (code,msg)=self.getreply() - self.helo_resp=msg - return (code,msg) + (code, msg) = self.getreply() + self.helo_resp = msg + return (code, msg) def ehlo(self, name=''): """ SMTP 'ehlo' command. 
@@ -383,19 +393,19 @@ """ self.esmtp_features = {} self.putcmd(self.ehlo_msg, name or self.local_hostname) - (code,msg)=self.getreply() + (code, msg) = self.getreply() # According to RFC1869 some (badly written) # MTA's will disconnect on an ehlo. Toss an exception if # that happens -ddm if code == -1 and len(msg) == 0: self.close() raise SMTPServerDisconnected("Server not connected") - self.ehlo_resp=msg + self.ehlo_resp = msg if code != 250: - return (code,msg) - self.does_esmtp=1 + return (code, msg) + self.does_esmtp = 1 #parse the ehlo response -ddm - resp=self.ehlo_resp.split('\n') + resp = self.ehlo_resp.split('\n') del resp[0] for each in resp: # To be able to communicate with as many SMTP servers as possible, @@ -415,16 +425,16 @@ # It's actually stricter, in that only spaces are allowed between # parameters, but were not going to check for that here. Note # that the space isn't present if there are no parameters. - m=re.match(r'(?P[A-Za-z0-9][A-Za-z0-9\-]*) ?',each) + m = re.match(r'(?P[A-Za-z0-9][A-Za-z0-9\-]*) ?', each) if m: - feature=m.group("feature").lower() - params=m.string[m.end("feature"):].strip() + feature = m.group("feature").lower() + params = m.string[m.end("feature"):].strip() if feature == "auth": self.esmtp_features[feature] = self.esmtp_features.get(feature, "") \ + " " + params else: - self.esmtp_features[feature]=params - return (code,msg) + self.esmtp_features[feature] = params + return (code, msg) def has_extn(self, opt): """Does the server support a given SMTP service extension?""" @@ -444,23 +454,23 @@ """SMTP 'noop' command -- doesn't do anything :>""" return self.docmd("noop") - def mail(self,sender,options=[]): + def mail(self, sender, options=[]): """SMTP 'mail' command -- begins mail xfer session.""" optionlist = '' if options and self.does_esmtp: optionlist = ' ' + ' '.join(options) - self.putcmd("mail", "FROM:%s%s" % (quoteaddr(sender) ,optionlist)) + self.putcmd("mail", "FROM:%s%s" % (quoteaddr(sender), optionlist)) return self.getreply() - def rcpt(self,recip,options=[]): + def rcpt(self, recip, options=[]): """SMTP 'rcpt' command -- indicates 1 recipient for this mail.""" optionlist = '' if options and self.does_esmtp: optionlist = ' ' + ' '.join(options) - self.putcmd("rcpt","TO:%s%s" % (quoteaddr(recip),optionlist)) + self.putcmd("rcpt", "TO:%s%s" % (quoteaddr(recip), optionlist)) return self.getreply() - def data(self,msg): + def data(self, msg): """SMTP 'DATA' command -- sends message data to server. Automatically quotes lines beginning with a period per rfc821. @@ -469,26 +479,28 @@ response code received when the all data is sent. """ self.putcmd("data") - (code,repl)=self.getreply() - if self.debuglevel >0 : print>>stderr, "data:", (code,repl) + (code, repl) = self.getreply() + if self.debuglevel > 0: + print>>stderr, "data:", (code, repl) if code != 354: - raise SMTPDataError(code,repl) + raise SMTPDataError(code, repl) else: q = quotedata(msg) if q[-2:] != CRLF: q = q + CRLF q = q + "." + CRLF self.send(q) - (code,msg)=self.getreply() - if self.debuglevel >0 : print>>stderr, "data:", (code,msg) - return (code,msg) + (code, msg) = self.getreply() + if self.debuglevel > 0: + print>>stderr, "data:", (code, msg) + return (code, msg) def verify(self, address): """SMTP 'verify' command -- checks for address validity.""" self.putcmd("vrfy", quoteaddr(address)) return self.getreply() # a.k.a. 
- vrfy=verify + vrfy = verify def expn(self, address): """SMTP 'expn' command -- expands a mailing list.""" @@ -592,7 +604,7 @@ raise SMTPAuthenticationError(code, resp) return (code, resp) - def starttls(self, keyfile = None, certfile = None): + def starttls(self, keyfile=None, certfile=None): """Puts the connection to the SMTP server into TLS mode. If there has been no previous EHLO or HELO command this session, this @@ -695,22 +707,22 @@ for option in mail_options: esmtp_opts.append(option) - (code,resp) = self.mail(from_addr, esmtp_opts) + (code, resp) = self.mail(from_addr, esmtp_opts) if code != 250: self.rset() raise SMTPSenderRefused(code, resp, from_addr) - senderrs={} + senderrs = {} if isinstance(to_addrs, basestring): to_addrs = [to_addrs] for each in to_addrs: - (code,resp)=self.rcpt(each, rcpt_options) + (code, resp) = self.rcpt(each, rcpt_options) if (code != 250) and (code != 251): - senderrs[each]=(code,resp) - if len(senderrs)==len(to_addrs): + senderrs[each] = (code, resp) + if len(senderrs) == len(to_addrs): # the server refused all our recipients self.rset() raise SMTPRecipientsRefused(senderrs) - (code,resp) = self.data(msg) + (code, resp) = self.data(msg) if code != 250: self.rset() raise SMTPDataError(code, resp) @@ -744,16 +756,19 @@ are also optional - they can contain a PEM formatted private key and certificate chain file for the SSL connection. """ + + default_port = SMTP_SSL_PORT + def __init__(self, host='', port=0, local_hostname=None, keyfile=None, certfile=None, timeout=socket._GLOBAL_DEFAULT_TIMEOUT): self.keyfile = keyfile self.certfile = certfile SMTP.__init__(self, host, port, local_hostname, timeout) - self.default_port = SMTP_SSL_PORT def _get_socket(self, host, port, timeout): - if self.debuglevel > 0: print>>stderr, 'connect:', (host, port) + if self.debuglevel > 0: + print>>stderr, 'connect:', (host, port) new_socket = socket.create_connection((host, port), timeout) new_socket = ssl.wrap_socket(new_socket, self.keyfile, self.certfile) self.file = SSLFakeFile(new_socket) @@ -781,11 +796,11 @@ ehlo_msg = "lhlo" - def __init__(self, host = '', port = LMTP_PORT, local_hostname = None): + def __init__(self, host='', port=LMTP_PORT, local_hostname=None): """Initialize a new instance.""" SMTP.__init__(self, host, port, local_hostname) - def connect(self, host = 'localhost', port = 0): + def connect(self, host='localhost', port=0): """Connect to the LMTP daemon, on either a Unix or a TCP socket.""" if host[0] != '/': return SMTP.connect(self, host, port) @@ -795,13 +810,15 @@ self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) self.sock.connect(host) except socket.error, msg: - if self.debuglevel > 0: print>>stderr, 'connect fail:', host + if self.debuglevel > 0: + print>>stderr, 'connect fail:', host if self.sock: self.sock.close() self.sock = None raise socket.error, msg (code, msg) = self.getreply() - if self.debuglevel > 0: print>>stderr, "connect:", msg + if self.debuglevel > 0: + print>>stderr, "connect:", msg return (code, msg) @@ -815,7 +832,7 @@ return sys.stdin.readline().strip() fromaddr = prompt("From") - toaddrs = prompt("To").split(',') + toaddrs = prompt("To").split(',') print "Enter message, end with ^D:" msg = '' while 1: diff --git a/lib-python/2.7/ssl.py b/lib-python/2.7/ssl.py --- a/lib-python/2.7/ssl.py +++ b/lib-python/2.7/ssl.py @@ -121,9 +121,11 @@ if e.errno != errno.ENOTCONN: raise # no, no connection yet + self._connected = False self._sslobj = None else: # yes, create the SSL object + self._connected = True 
self._sslobj = _ssl.sslwrap(self._sock, server_side, keyfile, certfile, cert_reqs, ssl_version, ca_certs, @@ -293,21 +295,36 @@ self._sslobj.do_handshake() - def connect(self, addr): - - """Connects to remote ADDR, and then wraps the connection in - an SSL channel.""" - + def _real_connect(self, addr, return_errno): # Here we assume that the socket is client-side, and not # connected at the time of the call. We connect it, then wrap it. - if self._sslobj: + if self._connected: raise ValueError("attempt to connect already-connected SSLSocket!") - socket.connect(self, addr) self._sslobj = _ssl.sslwrap(self._sock, False, self.keyfile, self.certfile, self.cert_reqs, self.ssl_version, self.ca_certs, self.ciphers) - if self.do_handshake_on_connect: - self.do_handshake() + try: + socket.connect(self, addr) + if self.do_handshake_on_connect: + self.do_handshake() + except socket_error as e: + if return_errno: + return e.errno + else: + self._sslobj = None + raise e + self._connected = True + return 0 + + def connect(self, addr): + """Connects to remote ADDR, and then wraps the connection in + an SSL channel.""" + self._real_connect(addr, False) + + def connect_ex(self, addr): + """Connects to remote ADDR, and then wraps the connection in + an SSL channel.""" + return self._real_connect(addr, True) def accept(self): diff --git a/lib-python/2.7/subprocess.py b/lib-python/2.7/subprocess.py --- a/lib-python/2.7/subprocess.py +++ b/lib-python/2.7/subprocess.py @@ -396,6 +396,7 @@ import traceback import gc import signal +import errno # Exception classes used by this module. class CalledProcessError(Exception): @@ -427,7 +428,6 @@ else: import select _has_poll = hasattr(select, 'poll') - import errno import fcntl import pickle @@ -441,8 +441,15 @@ "check_output", "CalledProcessError"] if mswindows: - from _subprocess import CREATE_NEW_CONSOLE, CREATE_NEW_PROCESS_GROUP - __all__.extend(["CREATE_NEW_CONSOLE", "CREATE_NEW_PROCESS_GROUP"]) + from _subprocess import (CREATE_NEW_CONSOLE, CREATE_NEW_PROCESS_GROUP, + STD_INPUT_HANDLE, STD_OUTPUT_HANDLE, + STD_ERROR_HANDLE, SW_HIDE, + STARTF_USESTDHANDLES, STARTF_USESHOWWINDOW) + + __all__.extend(["CREATE_NEW_CONSOLE", "CREATE_NEW_PROCESS_GROUP", + "STD_INPUT_HANDLE", "STD_OUTPUT_HANDLE", + "STD_ERROR_HANDLE", "SW_HIDE", + "STARTF_USESTDHANDLES", "STARTF_USESHOWWINDOW"]) try: MAXFD = os.sysconf("SC_OPEN_MAX") except: @@ -726,7 +733,11 @@ stderr = None if self.stdin: if input: - self.stdin.write(input) + try: + self.stdin.write(input) + except IOError as e: + if e.errno != errno.EPIPE and e.errno != errno.EINVAL: + raise self.stdin.close() elif self.stdout: stdout = self.stdout.read() @@ -883,7 +894,7 @@ except pywintypes.error, e: # Translate pywintypes.error to WindowsError, which is # a subclass of OSError. FIXME: We should really - # translate errno using _sys_errlist (or simliar), but + # translate errno using _sys_errlist (or similar), but # how can this be done from Python? 
raise WindowsError(*e.args) finally: @@ -956,7 +967,11 @@ if self.stdin: if input is not None: - self.stdin.write(input) + try: + self.stdin.write(input) + except IOError as e: + if e.errno != errno.EPIPE: + raise self.stdin.close() if self.stdout: @@ -1051,14 +1066,17 @@ errread, errwrite) - def _set_cloexec_flag(self, fd): + def _set_cloexec_flag(self, fd, cloexec=True): try: cloexec_flag = fcntl.FD_CLOEXEC except AttributeError: cloexec_flag = 1 old = fcntl.fcntl(fd, fcntl.F_GETFD) - fcntl.fcntl(fd, fcntl.F_SETFD, old | cloexec_flag) + if cloexec: + fcntl.fcntl(fd, fcntl.F_SETFD, old | cloexec_flag) + else: + fcntl.fcntl(fd, fcntl.F_SETFD, old & ~cloexec_flag) def _close_fds(self, but): @@ -1128,21 +1146,25 @@ os.close(errpipe_read) # Dup fds for child - if p2cread is not None: - os.dup2(p2cread, 0) - if c2pwrite is not None: - os.dup2(c2pwrite, 1) - if errwrite is not None: - os.dup2(errwrite, 2) + def _dup2(a, b): + # dup2() removes the CLOEXEC flag but + # we must do it ourselves if dup2() + # would be a no-op (issue #10806). + if a == b: + self._set_cloexec_flag(a, False) + elif a is not None: + os.dup2(a, b) + _dup2(p2cread, 0) + _dup2(c2pwrite, 1) + _dup2(errwrite, 2) - # Close pipe fds. Make sure we don't close the same - # fd more than once, or standard fds. - if p2cread is not None and p2cread not in (0,): - os.close(p2cread) - if c2pwrite is not None and c2pwrite not in (p2cread, 1): - os.close(c2pwrite) - if errwrite is not None and errwrite not in (p2cread, c2pwrite, 2): - os.close(errwrite) + # Close pipe fds. Make sure we don't close the + # same fd more than once, or standard fds. + closed = { None } + for fd in [p2cread, c2pwrite, errwrite]: + if fd not in closed and fd > 2: + os.close(fd) + closed.add(fd) # Close all other fds, if asked for if close_fds: @@ -1194,7 +1216,11 @@ os.close(errpipe_read) if data != "": - _eintr_retry_call(os.waitpid, self.pid, 0) + try: + _eintr_retry_call(os.waitpid, self.pid, 0) + except OSError as e: + if e.errno != errno.ECHILD: + raise child_exception = pickle.loads(data) for fd in (p2cwrite, c2pread, errread): if fd is not None: @@ -1240,7 +1266,15 @@ """Wait for child process to terminate. Returns returncode attribute.""" if self.returncode is None: - pid, sts = _eintr_retry_call(os.waitpid, self.pid, 0) + try: + pid, sts = _eintr_retry_call(os.waitpid, self.pid, 0) + except OSError as e: + if e.errno != errno.ECHILD: + raise + # This happens if SIGCLD is set to be ignored or waiting + # for child processes has otherwise been disabled for our + # process. This child is dead, we can't get the status. 
+ sts = 0 self._handle_exitstatus(sts) return self.returncode @@ -1317,9 +1351,16 @@ for fd, mode in ready: if mode & select.POLLOUT: chunk = input[input_offset : input_offset + _PIPE_BUF] - input_offset += os.write(fd, chunk) - if input_offset >= len(input): - close_unregister_and_remove(fd) + try: + input_offset += os.write(fd, chunk) + except OSError as e: + if e.errno == errno.EPIPE: + close_unregister_and_remove(fd) + else: + raise + else: + if input_offset >= len(input): + close_unregister_and_remove(fd) elif mode & select_POLLIN_POLLPRI: data = os.read(fd, 4096) if not data: @@ -1358,11 +1399,19 @@ if self.stdin in wlist: chunk = input[input_offset : input_offset + _PIPE_BUF] - bytes_written = os.write(self.stdin.fileno(), chunk) - input_offset += bytes_written - if input_offset >= len(input): - self.stdin.close() - write_set.remove(self.stdin) + try: + bytes_written = os.write(self.stdin.fileno(), chunk) + except OSError as e: + if e.errno == errno.EPIPE: + self.stdin.close() + write_set.remove(self.stdin) + else: + raise + else: + input_offset += bytes_written + if input_offset >= len(input): + self.stdin.close() + write_set.remove(self.stdin) if self.stdout in rlist: data = os.read(self.stdout.fileno(), 1024) diff --git a/lib-python/2.7/symbol.py b/lib-python/2.7/symbol.py --- a/lib-python/2.7/symbol.py +++ b/lib-python/2.7/symbol.py @@ -82,20 +82,19 @@ sliceop = 325 exprlist = 326 testlist = 327 -dictmaker = 328 -dictorsetmaker = 329 -classdef = 330 -arglist = 331 -argument = 332 -list_iter = 333 -list_for = 334 -list_if = 335 -comp_iter = 336 -comp_for = 337 -comp_if = 338 -testlist1 = 339 -encoding_decl = 340 -yield_expr = 341 +dictorsetmaker = 328 +classdef = 329 +arglist = 330 +argument = 331 +list_iter = 332 +list_for = 333 +list_if = 334 +comp_iter = 335 +comp_for = 336 +comp_if = 337 +testlist1 = 338 +encoding_decl = 339 +yield_expr = 340 #--end constants-- sym_name = {} diff --git a/lib-python/2.7/sysconfig.py b/lib-python/2.7/sysconfig.py --- a/lib-python/2.7/sysconfig.py +++ b/lib-python/2.7/sysconfig.py @@ -271,7 +271,7 @@ def _get_makefile_filename(): if _PYTHON_BUILD: return os.path.join(_PROJECT_BASE, "Makefile") - return os.path.join(get_path('stdlib'), "config", "Makefile") + return os.path.join(get_path('platstdlib'), "config", "Makefile") def _init_posix(vars): @@ -297,21 +297,6 @@ msg = msg + " (%s)" % e.strerror raise IOError(msg) - # On MacOSX we need to check the setting of the environment variable - # MACOSX_DEPLOYMENT_TARGET: configure bases some choices on it so - # it needs to be compatible. - # If it isn't set we set it to the configure-time value - if sys.platform == 'darwin' and 'MACOSX_DEPLOYMENT_TARGET' in vars: - cfg_target = vars['MACOSX_DEPLOYMENT_TARGET'] - cur_target = os.getenv('MACOSX_DEPLOYMENT_TARGET', '') - if cur_target == '': - cur_target = cfg_target - os.putenv('MACOSX_DEPLOYMENT_TARGET', cfg_target) - elif map(int, cfg_target.split('.')) > map(int, cur_target.split('.')): - msg = ('$MACOSX_DEPLOYMENT_TARGET mismatch: now "%s" but "%s" ' - 'during configure' % (cur_target, cfg_target)) - raise IOError(msg) - # On AIX, there are wrong paths to the linker scripts in the Makefile # -- these paths are relative to the Python source, but when installed # the scripts are in another directory. @@ -616,9 +601,7 @@ # machine is going to compile and link as if it were # MACOSX_DEPLOYMENT_TARGET. 
cfgvars = get_config_vars() - macver = os.environ.get('MACOSX_DEPLOYMENT_TARGET') - if not macver: - macver = cfgvars.get('MACOSX_DEPLOYMENT_TARGET') + macver = cfgvars.get('MACOSX_DEPLOYMENT_TARGET') if 1: # Always calculate the release of the running machine, @@ -639,7 +622,6 @@ m = re.search( r'ProductUserVisibleVersion\s*' + r'(.*?)', f.read()) - f.close() if m is not None: macrelease = '.'.join(m.group(1).split('.')[:2]) # else: fall back to the default behaviour diff --git a/lib-python/2.7/tarfile.py b/lib-python/2.7/tarfile.py --- a/lib-python/2.7/tarfile.py +++ b/lib-python/2.7/tarfile.py @@ -2239,10 +2239,14 @@ if hasattr(os, "symlink") and hasattr(os, "link"): # For systems that support symbolic and hard links. if tarinfo.issym(): + if os.path.lexists(targetpath): + os.unlink(targetpath) os.symlink(tarinfo.linkname, targetpath) else: # See extract(). if os.path.exists(tarinfo._link_target): + if os.path.lexists(targetpath): + os.unlink(targetpath) os.link(tarinfo._link_target, targetpath) else: self._extract_member(self._find_link_target(tarinfo), targetpath) diff --git a/lib-python/2.7/telnetlib.py b/lib-python/2.7/telnetlib.py --- a/lib-python/2.7/telnetlib.py +++ b/lib-python/2.7/telnetlib.py @@ -236,7 +236,7 @@ """ if self.debuglevel > 0: - print 'Telnet(%s,%d):' % (self.host, self.port), + print 'Telnet(%s,%s):' % (self.host, self.port), if args: print msg % args else: diff --git a/lib-python/2.7/test/cjkencodings/big5-utf8.txt b/lib-python/2.7/test/cjkencodings/big5-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/big5-utf8.txt @@ -0,0 +1,9 @@ +如何在 Python 中使用既有的 C library? + 在資訊科技快速發展的今天, 開發及測試軟體的速度是不容忽視的 +課題. 為加快開發及測試的速度, 我們便常希望能利用一些已開發好的 +library, 並有一個 fast prototyping 的 programming language 可 +供使用. 目前有許許多多的 library 是以 C 寫成, 而 Python 是一個 +fast prototyping 的 programming language. 故我們希望能將既有的 +C library 拿到 Python 的環境中測試及整合. 其中最主要也是我們所 +要討論的問題就是: + diff --git a/lib-python/2.7/test/cjkencodings/big5.txt b/lib-python/2.7/test/cjkencodings/big5.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/big5.txt @@ -0,0 +1,9 @@ +�p��b Python ���ϥάJ���� C library? +�@�b��T��ާֳt�o�i������, �}�o�δ��ճn�骺�t�׬O���e������ +���D. ���[�ֶ}�o�δ��ժ��t��, �ڭ̫K�`�Ʊ��Q�Τ@�Ǥw�}�o�n�� +library, �æ��@�� fast prototyping �� programming language �i +�Ѩϥ�. �ثe���\�\�h�h�� library �O�H C �g��, �� Python �O�@�� +fast prototyping �� programming language. �G�ڭ̧Ʊ��N�J���� +C library ���� Python �����Ҥ����դξ�X. �䤤�̥D�n�]�O�ڭ̩� +�n�Q�ת����D�N�O: + diff --git a/lib-python/2.7/test/cjkencodings/big5hkscs-utf8.txt b/lib-python/2.7/test/cjkencodings/big5hkscs-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/big5hkscs-utf8.txt @@ -0,0 +1,2 @@ +𠄌Ě鵮罓洆 +ÊÊ̄ê êê̄ diff --git a/lib-python/2.7/test/cjkencodings/big5hkscs.txt b/lib-python/2.7/test/cjkencodings/big5hkscs.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/big5hkscs.txt @@ -0,0 +1,2 @@ +�E�\�s�ڍ� +�f�b�� ���� diff --git a/lib-python/2.7/test/cjkencodings/cp949-utf8.txt b/lib-python/2.7/test/cjkencodings/cp949-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/cp949-utf8.txt @@ -0,0 +1,9 @@ +똠방각하 펲시콜라 + +㉯㉯납!! 因九月패믤릔궈 ⓡⓖ훀¿¿¿ 긍뒙 ⓔ뎨 ㉯. . +亞영ⓔ능횹 . . . . 서울뤄 뎐학乙 家훀 ! ! !ㅠ.ㅠ +흐흐흐 ㄱㄱㄱ☆ㅠ_ㅠ 어릨 탸콰긐 뎌응 칑九들乙 ㉯드긐 +설릌 家훀 . . . . 굴애쉌 ⓔ궈 ⓡ릘㉱긐 因仁川女中까즼 +와쒀훀 ! ! 亞영ⓔ 家능궈 ☆上관 없능궈능 亞능뒈훀 글애듴 +ⓡ려듀九 싀풔숴훀 어릨 因仁川女中싁⑨들앜!! 
㉯㉯납♡ ⌒⌒* + diff --git a/lib-python/2.7/test/cjkencodings/cp949.txt b/lib-python/2.7/test/cjkencodings/cp949.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/cp949.txt @@ -0,0 +1,9 @@ +�c�氢�� �����ݶ� + +������!! �������В�p�� �ި��R������ ���� �ѵ� ��. . +䬿��Ѵ��� . . . . ����� ������ ʫ�R ! ! !��.�� +������ �������٤�_�� � ����O ���� �h������ ����O +���j ʫ�R . . . . ���֚f �ѱ� �ސt�ƒO ���������� +�;��R ! ! 䬿��� ʫ�ɱ� ��߾�� ���ɱŴ� 䬴ɵ��R �۾֊� +�޷����� ��Ǵ���R � ����������Ĩ���!! �������� �ҡ�* + diff --git a/lib-python/2.7/test/cjkencodings/euc_jisx0213-utf8.txt b/lib-python/2.7/test/cjkencodings/euc_jisx0213-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/euc_jisx0213-utf8.txt @@ -0,0 +1,8 @@ +Python の開発は、1990 年ごろから開始されています。 +開発者の Guido van Rossum は教育用のプログラミング言語「ABC」の開発に参加していましたが、ABC は実用上の目的にはあまり適していませんでした。 +このため、Guido はより実用的なプログラミング言語の開発を開始し、英国 BBS 放送のコメディ番組「モンティ パイソン」のファンである Guido はこの言語を「Python」と名づけました。 +このような背景から生まれた Python の言語設計は、「シンプル」で「習得が容易」という目標に重点が置かれています。 +多くのスクリプト系言語ではユーザの目先の利便性を優先して色々な機能を言語要素として取り入れる場合が多いのですが、Python ではそういった小細工が追加されることはあまりありません。 +言語自体の機能は最小限に押さえ、必要な機能は拡張モジュールとして追加する、というのが Python のポリシーです。 + +ノか゚ ト゚ トキ喝塀 𡚴𪎌 麀齁𩛰 diff --git a/lib-python/2.7/test/cjkencodings/euc_jisx0213.txt b/lib-python/2.7/test/cjkencodings/euc_jisx0213.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/euc_jisx0213.txt @@ -0,0 +1,8 @@ +Python �γ�ȯ�ϡ�1990 ǯ�����鳫�Ϥ���Ƥ��ޤ��� +��ȯ�Ԥ� Guido van Rossum �϶����ѤΥץ���ߥ󥰸����ABC�פγ�ȯ�˻��ä��Ƥ��ޤ�������ABC �ϼ��Ѿ����Ū�ˤϤ��ޤ�Ŭ���Ƥ��ޤ���Ǥ����� +���Τ��ᡢGuido �Ϥ�����Ū�ʥץ���ߥ󥰸���γ�ȯ�򳫻Ϥ����ѹ� BBS �����Υ���ǥ����ȡ֥��ƥ� �ѥ�����פΥե���Ǥ��� Guido �Ϥ��θ�����Python�פ�̾�Ť��ޤ����� +���Τ褦���طʤ������ޤ줿 Python �θ����߷פϡ��֥���ץ�פǡֽ������ưספȤ�����ɸ�˽������֤���Ƥ��ޤ��� +¿���Υ�����ץȷϸ���Ǥϥ桼�����������������ͥ�褷�ƿ����ʵ�ǽ��������ǤȤ��Ƽ��������礬¿���ΤǤ�����Python �ǤϤ������ä����ٹ����ɲä���뤳�ȤϤ��ޤꤢ��ޤ��� +���켫�Τε�ǽ�ϺǾ��¤˲�������ɬ�פʵ�ǽ�ϳ�ĥ�⥸�塼��Ȥ����ɲä��롢�Ȥ����Τ� Python �Υݥꥷ���Ǥ��� + +�Τ� �� �ȥ����� ���� ��ԏ���� diff --git a/lib-python/2.7/test/cjkencodings/euc_jp-utf8.txt b/lib-python/2.7/test/cjkencodings/euc_jp-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/euc_jp-utf8.txt @@ -0,0 +1,7 @@ +Python の開発は、1990 年ごろから開始されています。 +開発者の Guido van Rossum は教育用のプログラミング言語「ABC」の開発に参加していましたが、ABC は実用上の目的にはあまり適していませんでした。 +このため、Guido はより実用的なプログラミング言語の開発を開始し、英国 BBS 放送のコメディ番組「モンティ パイソン」のファンである Guido はこの言語を「Python」と名づけました。 +このような背景から生まれた Python の言語設計は、「シンプル」で「習得が容易」という目標に重点が置かれています。 +多くのスクリプト系言語ではユーザの目先の利便性を優先して色々な機能を言語要素として取り入れる場合が多いのですが、Python ではそういった小細工が追加されることはあまりありません。 +言語自体の機能は最小限に押さえ、必要な機能は拡張モジュールとして追加する、というのが Python のポリシーです。 + diff --git a/lib-python/2.7/test/cjkencodings/euc_jp.txt b/lib-python/2.7/test/cjkencodings/euc_jp.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/euc_jp.txt @@ -0,0 +1,7 @@ +Python �γ�ȯ�ϡ�1990 ǯ�����鳫�Ϥ���Ƥ��ޤ��� +��ȯ�Ԥ� Guido van Rossum �϶����ѤΥץ���ߥ󥰸����ABC�פγ�ȯ�˻��ä��Ƥ��ޤ�������ABC �ϼ��Ѿ����Ū�ˤϤ��ޤ�Ŭ���Ƥ��ޤ���Ǥ����� +���Τ��ᡢGuido �Ϥ�����Ū�ʥץ���ߥ󥰸���γ�ȯ�򳫻Ϥ����ѹ� BBS �����Υ���ǥ����ȡ֥��ƥ� �ѥ�����פΥե���Ǥ��� Guido �Ϥ��θ�����Python�פ�̾�Ť��ޤ����� +���Τ褦���طʤ������ޤ줿 Python �θ����߷פϡ��֥���ץ�פǡֽ������ưספȤ�����ɸ�˽������֤���Ƥ��ޤ��� +¿���Υ�����ץȷϸ���Ǥϥ桼�����������������ͥ�褷�ƿ����ʵ�ǽ��������ǤȤ��Ƽ��������礬¿���ΤǤ�����Python �ǤϤ������ä����ٹ����ɲä���뤳�ȤϤ��ޤꤢ��ޤ��� +���켫�Τε�ǽ�ϺǾ��¤˲�������ɬ�פʵ�ǽ�ϳ�ĥ�⥸�塼��Ȥ����ɲä��롢�Ȥ����Τ� Python �Υݥꥷ���Ǥ��� + diff --git a/lib-python/2.7/test/cjkencodings/euc_kr-utf8.txt 
b/lib-python/2.7/test/cjkencodings/euc_kr-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/euc_kr-utf8.txt @@ -0,0 +1,7 @@ +◎ 파이썬(Python)은 배우기 쉽고, 강력한 프로그래밍 언어입니다. 파이썬은 +효율적인 고수준 데이터 구조와 간단하지만 효율적인 객체지향프로그래밍을 +지원합니다. 파이썬의 우아(優雅)한 문법과 동적 타이핑, 그리고 인터프리팅 +환경은 파이썬을 스크립팅과 여러 분야에서와 대부분의 플랫폼에서의 빠른 +애플리케이션 개발을 할 수 있는 이상적인 언어로 만들어줍니다. + +☆첫가끝: 날아라 쓔쓔쓩~ 닁큼! 뜽금없이 전홥니다. 뷁. 그런거 읎다. diff --git a/lib-python/2.7/test/cjkencodings/euc_kr.txt b/lib-python/2.7/test/cjkencodings/euc_kr.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/euc_kr.txt @@ -0,0 +1,7 @@ +�� ���̽�(Python)�� ���� ����, ������ ���α׷��� ����Դϴ�. ���̽��� +ȿ������ ����� ������ ������ ���������� ȿ������ ��ü�������α׷����� +�����մϴ�. ���̽��� ���(���)�� ������ ���� Ÿ����, �׸��� ���������� +ȯ���� ���̽��� ��ũ���ð� ���� �о߿����� ��κ��� �÷��������� ���� +���ø����̼� ������ �� �� �ִ� �̻����� ���� ������ݴϴ�. + +��ù����: ���ƶ� �Ԥ��ФԤԤ��ФԾ�~ �Ԥ��Ҥ�ŭ! �Ԥ��Ѥ��ݾ��� ���Ԥ��Ȥ��ϴ�. �Ԥ��Τ�. �׷��� �Ԥ��Ѥ���. diff --git a/lib-python/2.7/test/cjkencodings/gb18030-utf8.txt b/lib-python/2.7/test/cjkencodings/gb18030-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/gb18030-utf8.txt @@ -0,0 +1,15 @@ +Python(派森)语言是一种功能强大而完善的通用型计算机程序设计语言, +已经具有十多年的发展历史,成熟且稳定。这种语言具有非常简捷而清晰 +的语法特点,适合完成各种高层任务,几乎可以在所有的操作系统中 +运行。这种语言简单而强大,适合各种人士学习使用。目前,基于这 +种语言的相关技术正在飞速的发展,用户数量急剧扩大,相关的资源非常多。 +如何在 Python 中使用既有的 C library? + 在資訊科技快速發展的今天, 開發及測試軟體的速度是不容忽視的 +課題. 為加快開發及測試的速度, 我們便常希望能利用一些已開發好的 +library, 並有一個 fast prototyping 的 programming language 可 +供使用. 目前有許許多多的 library 是以 C 寫成, 而 Python 是一個 +fast prototyping 的 programming language. 故我們希望能將既有的 +C library 拿到 Python 的環境中測試及整合. 其中最主要也是我們所 +要討論的問題就是: +파이썬은 강력한 기능을 지닌 범용 컴퓨터 프로그래밍 언어다. + diff --git a/lib-python/2.7/test/cjkencodings/gb18030.txt b/lib-python/2.7/test/cjkencodings/gb18030.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/gb18030.txt @@ -0,0 +1,15 @@ +Python����ɭ��������һ�ֹ���ǿ������Ƶ�ͨ���ͼ��������������ԣ� +�Ѿ�����ʮ����ķ�չ��ʷ���������ȶ����������Ծ��зdz���ݶ����� +���﷨�ص㣬�ʺ���ɸ��ָ߲����񣬼������������еIJ���ϵͳ�� +���С��������Լ򵥶�ǿ���ʺϸ�����ʿѧϰʹ�á�Ŀǰ�������� +�����Ե���ؼ������ڷ��ٵķ�չ���û���������������ص���Դ�dz��ࡣ +����� Python ��ʹ�ü��е� C library? +�����YӍ�Ƽ����ٰlչ�Ľ���, �_�l���yԇܛ�w���ٶ��Dz��ݺ�ҕ�� +�n�}. ��ӿ��_�l���yԇ���ٶ�, �҂��㳣ϣ��������һЩ���_�l�õ� +library, �K��һ�� fast prototyping �� programming language �� +��ʹ��. Ŀǰ���S�S���� library ���� C ����, �� Python ��һ�� +fast prototyping �� programming language. ���҂�ϣ���܌����е� +C library �õ� Python �ĭh���Мyԇ������. ��������ҪҲ���҂��� +ҪӑՓ�Ć��}����: +�5�1�3�3�2�1�3�1 �7�6�0�4�6�3 �8�5�8�6�3�5 �3�1�9�5 �0�9�3�0 �4�3�5�7�5�5 �5�5�0�9�8�9�9�3�0�4 �2�9�2�5�9�9. 
+ diff --git a/lib-python/2.7/test/cjkencodings/gb2312-utf8.txt b/lib-python/2.7/test/cjkencodings/gb2312-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/gb2312-utf8.txt @@ -0,0 +1,6 @@ +Python(派森)语言是一种功能强大而完善的通用型计算机程序设计语言, +已经具有十多年的发展历史,成熟且稳定。这种语言具有非常简捷而清晰 +的语法特点,适合完成各种高层任务,几乎可以在所有的操作系统中 +运行。这种语言简单而强大,适合各种人士学习使用。目前,基于这 +种语言的相关技术正在飞速的发展,用户数量急剧扩大,相关的资源非常多。 + diff --git a/lib-python/2.7/test/cjkencodings/gb2312.txt b/lib-python/2.7/test/cjkencodings/gb2312.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/gb2312.txt @@ -0,0 +1,6 @@ +Python����ɭ��������һ�ֹ���ǿ������Ƶ�ͨ���ͼ��������������ԣ� +�Ѿ�����ʮ����ķ�չ��ʷ���������ȶ����������Ծ��зdz���ݶ����� +���﷨�ص㣬�ʺ���ɸ��ָ߲����񣬼������������еIJ���ϵͳ�� +���С��������Լ򵥶�ǿ���ʺϸ�����ʿѧϰʹ�á�Ŀǰ�������� +�����Ե���ؼ������ڷ��ٵķ�չ���û���������������ص���Դ�dz��ࡣ + diff --git a/lib-python/2.7/test/cjkencodings/gbk-utf8.txt b/lib-python/2.7/test/cjkencodings/gbk-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/gbk-utf8.txt @@ -0,0 +1,14 @@ +Python(派森)语言是一种功能强大而完善的通用型计算机程序设计语言, +已经具有十多年的发展历史,成熟且稳定。这种语言具有非常简捷而清晰 +的语法特点,适合完成各种高层任务,几乎可以在所有的操作系统中 +运行。这种语言简单而强大,适合各种人士学习使用。目前,基于这 +种语言的相关技术正在飞速的发展,用户数量急剧扩大,相关的资源非常多。 +如何在 Python 中使用既有的 C library? + 在資訊科技快速發展的今天, 開發及測試軟體的速度是不容忽視的 +課題. 為加快開發及測試的速度, 我們便常希望能利用一些已開發好的 +library, 並有一個 fast prototyping 的 programming language 可 +供使用. 目前有許許多多的 library 是以 C 寫成, 而 Python 是一個 +fast prototyping 的 programming language. 故我們希望能將既有的 +C library 拿到 Python 的環境中測試及整合. 其中最主要也是我們所 +要討論的問題就是: + diff --git a/lib-python/2.7/test/cjkencodings/gbk.txt b/lib-python/2.7/test/cjkencodings/gbk.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/gbk.txt @@ -0,0 +1,14 @@ +Python����ɭ��������һ�ֹ���ǿ������Ƶ�ͨ���ͼ��������������ԣ� +�Ѿ�����ʮ����ķ�չ��ʷ���������ȶ����������Ծ��зdz���ݶ����� +���﷨�ص㣬�ʺ���ɸ��ָ߲����񣬼������������еIJ���ϵͳ�� +���С��������Լ򵥶�ǿ���ʺϸ�����ʿѧϰʹ�á�Ŀǰ�������� +�����Ե���ؼ������ڷ��ٵķ�չ���û���������������ص���Դ�dz��ࡣ +����� Python ��ʹ�ü��е� C library? +�����YӍ�Ƽ����ٰlչ�Ľ���, �_�l���yԇܛ�w���ٶ��Dz��ݺ�ҕ�� +�n�}. ��ӿ��_�l���yԇ���ٶ�, �҂��㳣ϣ��������һЩ���_�l�õ� +library, �K��һ�� fast prototyping �� programming language �� +��ʹ��. Ŀǰ���S�S���� library ���� C ����, �� Python ��һ�� +fast prototyping �� programming language. ���҂�ϣ���܌����е� +C library �õ� Python �ĭh���Мyԇ������. ��������ҪҲ���҂��� +ҪӑՓ�Ć��}����: + diff --git a/lib-python/2.7/test/cjkencodings/hz-utf8.txt b/lib-python/2.7/test/cjkencodings/hz-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/hz-utf8.txt @@ -0,0 +1,2 @@ +This sentence is in ASCII. +The next sentence is in GB.己所不欲,勿施於人。Bye. diff --git a/lib-python/2.7/test/cjkencodings/hz.txt b/lib-python/2.7/test/cjkencodings/hz.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/hz.txt @@ -0,0 +1,2 @@ +This sentence is in ASCII. +The next sentence is in GB.~{<:Ky2;S{#,NpJ)l6HK!#~}Bye. diff --git a/lib-python/2.7/test/cjkencodings/johab-utf8.txt b/lib-python/2.7/test/cjkencodings/johab-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/johab-utf8.txt @@ -0,0 +1,9 @@ +똠방각하 펲시콜라 + +㉯㉯납!! 因九月패믤릔궈 ⓡⓖ훀¿¿¿ 긍뒙 ⓔ뎨 ㉯. . +亞영ⓔ능횹 . . . . 서울뤄 뎐학乙 家훀 ! ! !ㅠ.ㅠ +흐흐흐 ㄱㄱㄱ☆ㅠ_ㅠ 어릨 탸콰긐 뎌응 칑九들乙 ㉯드긐 +설릌 家훀 . . . . 굴애쉌 ⓔ궈 ⓡ릘㉱긐 因仁川女中까즼 +와쒀훀 ! ! 亞영ⓔ 家능궈 ☆上관 없능궈능 亞능뒈훀 글애듴 +ⓡ려듀九 싀풔숴훀 어릨 因仁川女中싁⑨들앜!! 
㉯㉯납♡ ⌒⌒* + diff --git a/lib-python/2.7/test/cjkencodings/johab.txt b/lib-python/2.7/test/cjkencodings/johab.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/johab.txt @@ -0,0 +1,9 @@ +���w�b�a �\��ũ�a + +�����s!! �g��Ú������ �����zٯٯٯ �w�� �ѕ� ��. . +�<�w�ѓw�s . . . . �ᶉ�� �e�b�� �;�z ! ! !�A.�A +�a�a�a �A�A�A�i�A_�A �៚ ȡ���z �a�w ×✗i�� ���a�z +��z �;�z . . . . ������ �ъ� �ޟ��‹z �g�b�I����a�� +�����z ! ! �<�w�� �;�w�� �i꾉� ���w���w �<�w���z �i���z +�ޝa�A� ��Ρ���z �៚ �g�b�I���鯂��i�z!! �����sٽ �b�b* + diff --git a/lib-python/2.7/test/cjkencodings/shift_jis-utf8.txt b/lib-python/2.7/test/cjkencodings/shift_jis-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/shift_jis-utf8.txt @@ -0,0 +1,7 @@ +Python の開発は、1990 年ごろから開始されています。 +開発者の Guido van Rossum は教育用のプログラミング言語「ABC」の開発に参加していましたが、ABC は実用上の目的にはあまり適していませんでした。 +このため、Guido はより実用的なプログラミング言語の開発を開始し、英国 BBS 放送のコメディ番組「モンティ パイソン」のファンである Guido はこの言語を「Python」と名づけました。 +このような背景から生まれた Python の言語設計は、「シンプル」で「習得が容易」という目標に重点が置かれています。 +多くのスクリプト系言語ではユーザの目先の利便性を優先して色々な機能を言語要素として取り入れる場合が多いのですが、Python ではそういった小細工が追加されることはあまりありません。 +言語自体の機能は最小限に押さえ、必要な機能は拡張モジュールとして追加する、というのが Python のポリシーです。 + diff --git a/lib-python/2.7/test/cjkencodings/shift_jis.txt b/lib-python/2.7/test/cjkencodings/shift_jis.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/shift_jis.txt @@ -0,0 +1,7 @@ +Python �̊J���́A1990 �N���납��J�n����Ă��܂��B +�J���҂� Guido van Rossum �͋���p�̃v���O���~���O����uABC�v�̊J���ɎQ�����Ă��܂������AABC �͎��p��̖ړI�ɂ͂��܂�K���Ă��܂���ł����B +���̂��߁AGuido �͂����p�I�ȃv���O���~���O����̊J�����J�n���A�p�� BBS �����̃R���f�B�ԑg�u�����e�B �p�C�\���v�̃t�@���ł��� Guido �͂��̌�����uPython�v�Ɩ��Â��܂����B +���̂悤�Ȕw�i���琶�܂ꂽ Python �̌���݌v�́A�u�V���v���v�Łu�K�����e�Ձv�Ƃ����ڕW�ɏd�_���u����Ă��܂��B +�����̃X�N���v�g�n����ł̓��[�U�̖ڐ�̗��֐���D�悵�ĐF�X�ȋ@�\������v�f�Ƃ��Ď������ꍇ�������̂ł����APython �ł͂������������׍H���lj�����邱�Ƃ͂��܂肠��܂���B +���ꎩ�̂̋@�\�͍ŏ����ɉ������A�K�v�ȋ@�\�͊g�����W���[���Ƃ��Ēlj�����A�Ƃ����̂� Python �̃|���V�[�ł��B + diff --git a/lib-python/2.7/test/cjkencodings/shift_jisx0213-utf8.txt b/lib-python/2.7/test/cjkencodings/shift_jisx0213-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/shift_jisx0213-utf8.txt @@ -0,0 +1,8 @@ +Python の開発は、1990 年ごろから開始されています。 +開発者の Guido van Rossum は教育用のプログラミング言語「ABC」の開発に参加していましたが、ABC は実用上の目的にはあまり適していませんでした。 +このため、Guido はより実用的なプログラミング言語の開発を開始し、英国 BBS 放送のコメディ番組「モンティ パイソン」のファンである Guido はこの言語を「Python」と名づけました。 +このような背景から生まれた Python の言語設計は、「シンプル」で「習得が容易」という目標に重点が置かれています。 +多くのスクリプト系言語ではユーザの目先の利便性を優先して色々な機能を言語要素として取り入れる場合が多いのですが、Python ではそういった小細工が追加されることはあまりありません。 +言語自体の機能は最小限に押さえ、必要な機能は拡張モジュールとして追加する、というのが Python のポリシーです。 + +ノか゚ ト゚ トキ喝塀 𡚴𪎌 麀齁𩛰 diff --git a/lib-python/2.7/test/cjkencodings/shift_jisx0213.txt b/lib-python/2.7/test/cjkencodings/shift_jisx0213.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/shift_jisx0213.txt @@ -0,0 +1,8 @@ +Python �̊J���́A1990 �N���납��J�n����Ă��܂��B +�J���҂� Guido van Rossum �͋���p�̃v���O���~���O����uABC�v�̊J���ɎQ�����Ă��܂������AABC �͎��p��̖ړI�ɂ͂��܂�K���Ă��܂���ł����B +���̂��߁AGuido �͂����p�I�ȃv���O���~���O����̊J�����J�n���A�p�� BBS �����̃R���f�B�ԑg�u�����e�B �p�C�\���v�̃t�@���ł��� Guido �͂��̌�����uPython�v�Ɩ��Â��܂����B +���̂悤�Ȕw�i���琶�܂ꂽ Python �̌���݌v�́A�u�V���v���v�Łu�K�����e�Ձv�Ƃ����ڕW�ɏd�_���u����Ă��܂��B +�����̃X�N���v�g�n����ł̓��[�U�̖ڐ�̗��֐���D�悵�ĐF�X�ȋ@�\������v�f�Ƃ��Ď������ꍇ�������̂ł����APython �ł͂������������׍H���lj�����邱�Ƃ͂��܂肠��܂���B 
+���ꎩ�̂̋@�\�͍ŏ����ɉ������A�K�v�ȋ@�\�͊g�����W���[���Ƃ��Ēlj�����A�Ƃ����̂� Python �̃|���V�[�ł��B + +�m�� �� �g�L�K�y ���� ������ diff --git a/lib-python/2.7/test/cjkencodings_test.py b/lib-python/2.7/test/cjkencodings_test.py deleted file mode 100644 --- a/lib-python/2.7/test/cjkencodings_test.py +++ /dev/null @@ -1,1019 +0,0 @@ -teststring = { -'big5': ( -"\xa6\x70\xa6\xf3\xa6\x62\x20\x50\x79\x74\x68\x6f\x6e\x20\xa4\xa4" -"\xa8\xcf\xa5\xce\xac\x4a\xa6\xb3\xaa\xba\x20\x43\x20\x6c\x69\x62" -"\x72\x61\x72\x79\x3f\x0a\xa1\x40\xa6\x62\xb8\xea\xb0\x54\xac\xec" -"\xa7\xde\xa7\xd6\xb3\x74\xb5\x6f\xae\x69\xaa\xba\xa4\xb5\xa4\xd1" -"\x2c\x20\xb6\x7d\xb5\x6f\xa4\xce\xb4\xfa\xb8\xd5\xb3\x6e\xc5\xe9" -"\xaa\xba\xb3\x74\xab\xd7\xac\x4f\xa4\xa3\xae\x65\xa9\xbf\xb5\xf8" -"\xaa\xba\x0a\xbd\xd2\xc3\x44\x2e\x20\xac\xb0\xa5\x5b\xa7\xd6\xb6" -"\x7d\xb5\x6f\xa4\xce\xb4\xfa\xb8\xd5\xaa\xba\xb3\x74\xab\xd7\x2c" -"\x20\xa7\xda\xad\xcc\xab\x4b\xb1\x60\xa7\xc6\xb1\xe6\xaf\xe0\xa7" -"\x51\xa5\xce\xa4\x40\xa8\xc7\xa4\x77\xb6\x7d\xb5\x6f\xa6\x6e\xaa" -"\xba\x0a\x6c\x69\x62\x72\x61\x72\x79\x2c\x20\xa8\xc3\xa6\xb3\xa4" -"\x40\xad\xd3\x20\x66\x61\x73\x74\x20\x70\x72\x6f\x74\x6f\x74\x79" -"\x70\x69\x6e\x67\x20\xaa\xba\x20\x70\x72\x6f\x67\x72\x61\x6d\x6d" -"\x69\x6e\x67\x20\x6c\x61\x6e\x67\x75\x61\x67\x65\x20\xa5\x69\x0a" -"\xa8\xd1\xa8\xcf\xa5\xce\x2e\x20\xa5\xd8\xab\x65\xa6\xb3\xb3\x5c" -"\xb3\x5c\xa6\x68\xa6\x68\xaa\xba\x20\x6c\x69\x62\x72\x61\x72\x79" -"\x20\xac\x4f\xa5\x48\x20\x43\x20\xbc\x67\xa6\xa8\x2c\x20\xa6\xd3" -"\x20\x50\x79\x74\x68\x6f\x6e\x20\xac\x4f\xa4\x40\xad\xd3\x0a\x66" -"\x61\x73\x74\x20\x70\x72\x6f\x74\x6f\x74\x79\x70\x69\x6e\x67\x20" -"\xaa\xba\x20\x70\x72\x6f\x67\x72\x61\x6d\x6d\x69\x6e\x67\x20\x6c" -"\x61\x6e\x67\x75\x61\x67\x65\x2e\x20\xac\x47\xa7\xda\xad\xcc\xa7" -"\xc6\xb1\xe6\xaf\xe0\xb1\x4e\xac\x4a\xa6\xb3\xaa\xba\x0a\x43\x20" -"\x6c\x69\x62\x72\x61\x72\x79\x20\xae\xb3\xa8\xec\x20\x50\x79\x74" -"\x68\x6f\x6e\x20\xaa\xba\xc0\xf4\xb9\xd2\xa4\xa4\xb4\xfa\xb8\xd5" -"\xa4\xce\xbe\xe3\xa6\x58\x2e\x20\xa8\xe4\xa4\xa4\xb3\xcc\xa5\x44" -"\xad\x6e\xa4\x5d\xac\x4f\xa7\xda\xad\xcc\xa9\xd2\x0a\xad\x6e\xb0" -"\x51\xbd\xd7\xaa\xba\xb0\xdd\xc3\x44\xb4\x4e\xac\x4f\x3a\x0a\x0a", -"\xe5\xa6\x82\xe4\xbd\x95\xe5\x9c\xa8\x20\x50\x79\x74\x68\x6f\x6e" -"\x20\xe4\xb8\xad\xe4\xbd\xbf\xe7\x94\xa8\xe6\x97\xa2\xe6\x9c\x89" -"\xe7\x9a\x84\x20\x43\x20\x6c\x69\x62\x72\x61\x72\x79\x3f\x0a\xe3" -"\x80\x80\xe5\x9c\xa8\xe8\xb3\x87\xe8\xa8\x8a\xe7\xa7\x91\xe6\x8a" -"\x80\xe5\xbf\xab\xe9\x80\x9f\xe7\x99\xbc\xe5\xb1\x95\xe7\x9a\x84" -"\xe4\xbb\x8a\xe5\xa4\xa9\x2c\x20\xe9\x96\x8b\xe7\x99\xbc\xe5\x8f" -"\x8a\xe6\xb8\xac\xe8\xa9\xa6\xe8\xbb\x9f\xe9\xab\x94\xe7\x9a\x84" -"\xe9\x80\x9f\xe5\xba\xa6\xe6\x98\xaf\xe4\xb8\x8d\xe5\xae\xb9\xe5" -"\xbf\xbd\xe8\xa6\x96\xe7\x9a\x84\x0a\xe8\xaa\xb2\xe9\xa1\x8c\x2e" -"\x20\xe7\x82\xba\xe5\x8a\xa0\xe5\xbf\xab\xe9\x96\x8b\xe7\x99\xbc" -"\xe5\x8f\x8a\xe6\xb8\xac\xe8\xa9\xa6\xe7\x9a\x84\xe9\x80\x9f\xe5" -"\xba\xa6\x2c\x20\xe6\x88\x91\xe5\x80\x91\xe4\xbe\xbf\xe5\xb8\xb8" -"\xe5\xb8\x8c\xe6\x9c\x9b\xe8\x83\xbd\xe5\x88\xa9\xe7\x94\xa8\xe4" -"\xb8\x80\xe4\xba\x9b\xe5\xb7\xb2\xe9\x96\x8b\xe7\x99\xbc\xe5\xa5" -"\xbd\xe7\x9a\x84\x0a\x6c\x69\x62\x72\x61\x72\x79\x2c\x20\xe4\xb8" -"\xa6\xe6\x9c\x89\xe4\xb8\x80\xe5\x80\x8b\x20\x66\x61\x73\x74\x20" -"\x70\x72\x6f\x74\x6f\x74\x79\x70\x69\x6e\x67\x20\xe7\x9a\x84\x20" -"\x70\x72\x6f\x67\x72\x61\x6d\x6d\x69\x6e\x67\x20\x6c\x61\x6e\x67" -"\x75\x61\x67\x65\x20\xe5\x8f\xaf\x0a\xe4\xbe\x9b\xe4\xbd\xbf\xe7" -"\x94\xa8\x2e\x20\xe7\x9b\xae\xe5\x89\x8d\xe6\x9c\x89\xe8\xa8\xb1" 
-"\xe8\xa8\xb1\xe5\xa4\x9a\xe5\xa4\x9a\xe7\x9a\x84\x20\x6c\x69\x62" -"\x72\x61\x72\x79\x20\xe6\x98\xaf\xe4\xbb\xa5\x20\x43\x20\xe5\xaf" -"\xab\xe6\x88\x90\x2c\x20\xe8\x80\x8c\x20\x50\x79\x74\x68\x6f\x6e" -"\x20\xe6\x98\xaf\xe4\xb8\x80\xe5\x80\x8b\x0a\x66\x61\x73\x74\x20" -"\x70\x72\x6f\x74\x6f\x74\x79\x70\x69\x6e\x67\x20\xe7\x9a\x84\x20" -"\x70\x72\x6f\x67\x72\x61\x6d\x6d\x69\x6e\x67\x20\x6c\x61\x6e\x67" -"\x75\x61\x67\x65\x2e\x20\xe6\x95\x85\xe6\x88\x91\xe5\x80\x91\xe5" -"\xb8\x8c\xe6\x9c\x9b\xe8\x83\xbd\xe5\xb0\x87\xe6\x97\xa2\xe6\x9c" -"\x89\xe7\x9a\x84\x0a\x43\x20\x6c\x69\x62\x72\x61\x72\x79\x20\xe6" -"\x8b\xbf\xe5\x88\xb0\x20\x50\x79\x74\x68\x6f\x6e\x20\xe7\x9a\x84" -"\xe7\x92\xb0\xe5\xa2\x83\xe4\xb8\xad\xe6\xb8\xac\xe8\xa9\xa6\xe5" -"\x8f\x8a\xe6\x95\xb4\xe5\x90\x88\x2e\x20\xe5\x85\xb6\xe4\xb8\xad" -"\xe6\x9c\x80\xe4\xb8\xbb\xe8\xa6\x81\xe4\xb9\x9f\xe6\x98\xaf\xe6" -"\x88\x91\xe5\x80\x91\xe6\x89\x80\x0a\xe8\xa6\x81\xe8\xa8\x8e\xe8" -"\xab\x96\xe7\x9a\x84\xe5\x95\x8f\xe9\xa1\x8c\xe5\xb0\xb1\xe6\x98" -"\xaf\x3a\x0a\x0a"), -'big5hkscs': ( -"\x88\x45\x88\x5c\x8a\x73\x8b\xda\x8d\xd8\x0a\x88\x66\x88\x62\x88" -"\xa7\x20\x88\xa7\x88\xa3\x0a", -"\xf0\xa0\x84\x8c\xc4\x9a\xe9\xb5\xae\xe7\xbd\x93\xe6\xb4\x86\x0a" -"\xc3\x8a\xc3\x8a\xcc\x84\xc3\xaa\x20\xc3\xaa\xc3\xaa\xcc\x84\x0a"), -'cp949': ( -"\x8c\x63\xb9\xe6\xb0\xa2\xc7\xcf\x20\xbc\x84\xbd\xc3\xc4\xdd\xb6" -"\xf3\x0a\x0a\xa8\xc0\xa8\xc0\xb3\xb3\x21\x21\x20\xec\xd7\xce\xfa" -"\xea\xc5\xc6\xd0\x92\xe6\x90\x70\xb1\xc5\x20\xa8\xde\xa8\xd3\xc4" -"\x52\xa2\xaf\xa2\xaf\xa2\xaf\x20\xb1\xe0\x8a\x96\x20\xa8\xd1\xb5" -"\xb3\x20\xa8\xc0\x2e\x20\x2e\x0a\xe4\xac\xbf\xb5\xa8\xd1\xb4\xc9" -"\xc8\xc2\x20\x2e\x20\x2e\x20\x2e\x20\x2e\x20\xbc\xad\xbf\xef\xb7" -"\xef\x20\xb5\xaf\xc7\xd0\xeb\xe0\x20\xca\xab\xc4\x52\x20\x21\x20" -"\x21\x20\x21\xa4\xd0\x2e\xa4\xd0\x0a\xc8\xe5\xc8\xe5\xc8\xe5\x20" -"\xa4\xa1\xa4\xa1\xa4\xa1\xa1\xd9\xa4\xd0\x5f\xa4\xd0\x20\xbe\xee" -"\x90\x8a\x20\xc5\xcb\xc4\xe2\x83\x4f\x20\xb5\xae\xc0\xc0\x20\xaf" -"\x68\xce\xfa\xb5\xe9\xeb\xe0\x20\xa8\xc0\xb5\xe5\x83\x4f\x0a\xbc" -"\xb3\x90\x6a\x20\xca\xab\xc4\x52\x20\x2e\x20\x2e\x20\x2e\x20\x2e" -"\x20\xb1\xbc\xbe\xd6\x9a\x66\x20\xa8\xd1\xb1\xc5\x20\xa8\xde\x90" -"\x74\xa8\xc2\x83\x4f\x20\xec\xd7\xec\xd2\xf4\xb9\xe5\xfc\xf1\xe9" -"\xb1\xee\xa3\x8e\x0a\xbf\xcd\xbe\xac\xc4\x52\x20\x21\x20\x21\x20" -"\xe4\xac\xbf\xb5\xa8\xd1\x20\xca\xab\xb4\xc9\xb1\xc5\x20\xa1\xd9" -"\xdf\xbe\xb0\xfc\x20\xbe\xf8\xb4\xc9\xb1\xc5\xb4\xc9\x20\xe4\xac" -"\xb4\xc9\xb5\xd8\xc4\x52\x20\xb1\xdb\xbe\xd6\x8a\xdb\x0a\xa8\xde" -"\xb7\xc1\xb5\xe0\xce\xfa\x20\x9a\xc3\xc7\xb4\xbd\xa4\xc4\x52\x20" -"\xbe\xee\x90\x8a\x20\xec\xd7\xec\xd2\xf4\xb9\xe5\xfc\xf1\xe9\x9a" -"\xc4\xa8\xef\xb5\xe9\x9d\xda\x21\x21\x20\xa8\xc0\xa8\xc0\xb3\xb3" -"\xa2\xbd\x20\xa1\xd2\xa1\xd2\x2a\x0a\x0a", -"\xeb\x98\xa0\xeb\xb0\xa9\xea\xb0\x81\xed\x95\x98\x20\xed\x8e\xb2" -"\xec\x8b\x9c\xec\xbd\x9c\xeb\x9d\xbc\x0a\x0a\xe3\x89\xaf\xe3\x89" -"\xaf\xeb\x82\xa9\x21\x21\x20\xe5\x9b\xa0\xe4\xb9\x9d\xe6\x9c\x88" -"\xed\x8c\xa8\xeb\xaf\xa4\xeb\xa6\x94\xea\xb6\x88\x20\xe2\x93\xa1" -"\xe2\x93\x96\xed\x9b\x80\xc2\xbf\xc2\xbf\xc2\xbf\x20\xea\xb8\x8d" -"\xeb\x92\x99\x20\xe2\x93\x94\xeb\x8e\xa8\x20\xe3\x89\xaf\x2e\x20" -"\x2e\x0a\xe4\xba\x9e\xec\x98\x81\xe2\x93\x94\xeb\x8a\xa5\xed\x9a" -"\xb9\x20\x2e\x20\x2e\x20\x2e\x20\x2e\x20\xec\x84\x9c\xec\x9a\xb8" -"\xeb\xa4\x84\x20\xeb\x8e\x90\xed\x95\x99\xe4\xb9\x99\x20\xe5\xae" -"\xb6\xed\x9b\x80\x20\x21\x20\x21\x20\x21\xe3\x85\xa0\x2e\xe3\x85" -"\xa0\x0a\xed\x9d\x90\xed\x9d\x90\xed\x9d\x90\x20\xe3\x84\xb1\xe3" 
-"\x84\xb1\xe3\x84\xb1\xe2\x98\x86\xe3\x85\xa0\x5f\xe3\x85\xa0\x20" -"\xec\x96\xb4\xeb\xa6\xa8\x20\xed\x83\xb8\xec\xbd\xb0\xea\xb8\x90" -"\x20\xeb\x8e\x8c\xec\x9d\x91\x20\xec\xb9\x91\xe4\xb9\x9d\xeb\x93" -"\xa4\xe4\xb9\x99\x20\xe3\x89\xaf\xeb\x93\x9c\xea\xb8\x90\x0a\xec" -"\x84\xa4\xeb\xa6\x8c\x20\xe5\xae\xb6\xed\x9b\x80\x20\x2e\x20\x2e" -"\x20\x2e\x20\x2e\x20\xea\xb5\xb4\xec\x95\xa0\xec\x89\x8c\x20\xe2" -"\x93\x94\xea\xb6\x88\x20\xe2\x93\xa1\xeb\xa6\x98\xe3\x89\xb1\xea" -"\xb8\x90\x20\xe5\x9b\xa0\xe4\xbb\x81\xe5\xb7\x9d\xef\xa6\x81\xe4" -"\xb8\xad\xea\xb9\x8c\xec\xa6\xbc\x0a\xec\x99\x80\xec\x92\x80\xed" -"\x9b\x80\x20\x21\x20\x21\x20\xe4\xba\x9e\xec\x98\x81\xe2\x93\x94" -"\x20\xe5\xae\xb6\xeb\x8a\xa5\xea\xb6\x88\x20\xe2\x98\x86\xe4\xb8" -"\x8a\xea\xb4\x80\x20\xec\x97\x86\xeb\x8a\xa5\xea\xb6\x88\xeb\x8a" -"\xa5\x20\xe4\xba\x9e\xeb\x8a\xa5\xeb\x92\x88\xed\x9b\x80\x20\xea" -"\xb8\x80\xec\x95\xa0\xeb\x93\xb4\x0a\xe2\x93\xa1\xeb\xa0\xa4\xeb" -"\x93\x80\xe4\xb9\x9d\x20\xec\x8b\x80\xed\x92\x94\xec\x88\xb4\xed" -"\x9b\x80\x20\xec\x96\xb4\xeb\xa6\xa8\x20\xe5\x9b\xa0\xe4\xbb\x81" -"\xe5\xb7\x9d\xef\xa6\x81\xe4\xb8\xad\xec\x8b\x81\xe2\x91\xa8\xeb" -"\x93\xa4\xec\x95\x9c\x21\x21\x20\xe3\x89\xaf\xe3\x89\xaf\xeb\x82" -"\xa9\xe2\x99\xa1\x20\xe2\x8c\x92\xe2\x8c\x92\x2a\x0a\x0a"), -'euc_jisx0213': ( -"\x50\x79\x74\x68\x6f\x6e\x20\xa4\xce\xb3\xab\xc8\xaf\xa4\xcf\xa1" -"\xa2\x31\x39\x39\x30\x20\xc7\xaf\xa4\xb4\xa4\xed\xa4\xab\xa4\xe9" -"\xb3\xab\xbb\xcf\xa4\xb5\xa4\xec\xa4\xc6\xa4\xa4\xa4\xde\xa4\xb9" -"\xa1\xa3\x0a\xb3\xab\xc8\xaf\xbc\xd4\xa4\xce\x20\x47\x75\x69\x64" -"\x6f\x20\x76\x61\x6e\x20\x52\x6f\x73\x73\x75\x6d\x20\xa4\xcf\xb6" -"\xb5\xb0\xe9\xcd\xd1\xa4\xce\xa5\xd7\xa5\xed\xa5\xb0\xa5\xe9\xa5" -"\xdf\xa5\xf3\xa5\xb0\xb8\xc0\xb8\xec\xa1\xd6\x41\x42\x43\xa1\xd7" -"\xa4\xce\xb3\xab\xc8\xaf\xa4\xcb\xbb\xb2\xb2\xc3\xa4\xb7\xa4\xc6" -"\xa4\xa4\xa4\xde\xa4\xb7\xa4\xbf\xa4\xac\xa1\xa2\x41\x42\x43\x20" -"\xa4\xcf\xbc\xc2\xcd\xd1\xbe\xe5\xa4\xce\xcc\xdc\xc5\xaa\xa4\xcb" -"\xa4\xcf\xa4\xa2\xa4\xde\xa4\xea\xc5\xac\xa4\xb7\xa4\xc6\xa4\xa4" -"\xa4\xde\xa4\xbb\xa4\xf3\xa4\xc7\xa4\xb7\xa4\xbf\xa1\xa3\x0a\xa4" -"\xb3\xa4\xce\xa4\xbf\xa4\xe1\xa1\xa2\x47\x75\x69\x64\x6f\x20\xa4" -"\xcf\xa4\xe8\xa4\xea\xbc\xc2\xcd\xd1\xc5\xaa\xa4\xca\xa5\xd7\xa5" -"\xed\xa5\xb0\xa5\xe9\xa5\xdf\xa5\xf3\xa5\xb0\xb8\xc0\xb8\xec\xa4" -"\xce\xb3\xab\xc8\xaf\xa4\xf2\xb3\xab\xbb\xcf\xa4\xb7\xa1\xa2\xb1" -"\xd1\xb9\xf1\x20\x42\x42\x53\x20\xca\xfc\xc1\xf7\xa4\xce\xa5\xb3" -"\xa5\xe1\xa5\xc7\xa5\xa3\xc8\xd6\xc1\xc8\xa1\xd6\xa5\xe2\xa5\xf3" -"\xa5\xc6\xa5\xa3\x20\xa5\xd1\xa5\xa4\xa5\xbd\xa5\xf3\xa1\xd7\xa4" -"\xce\xa5\xd5\xa5\xa1\xa5\xf3\xa4\xc7\xa4\xa2\xa4\xeb\x20\x47\x75" -"\x69\x64\x6f\x20\xa4\xcf\xa4\xb3\xa4\xce\xb8\xc0\xb8\xec\xa4\xf2" -"\xa1\xd6\x50\x79\x74\x68\x6f\x6e\xa1\xd7\xa4\xc8\xcc\xbe\xa4\xc5" -"\xa4\xb1\xa4\xde\xa4\xb7\xa4\xbf\xa1\xa3\x0a\xa4\xb3\xa4\xce\xa4" -"\xe8\xa4\xa6\xa4\xca\xc7\xd8\xb7\xca\xa4\xab\xa4\xe9\xc0\xb8\xa4" -"\xde\xa4\xec\xa4\xbf\x20\x50\x79\x74\x68\x6f\x6e\x20\xa4\xce\xb8" -"\xc0\xb8\xec\xc0\xdf\xb7\xd7\xa4\xcf\xa1\xa2\xa1\xd6\xa5\xb7\xa5" -"\xf3\xa5\xd7\xa5\xeb\xa1\xd7\xa4\xc7\xa1\xd6\xbd\xac\xc6\xc0\xa4" -"\xac\xcd\xc6\xb0\xd7\xa1\xd7\xa4\xc8\xa4\xa4\xa4\xa6\xcc\xdc\xc9" -"\xb8\xa4\xcb\xbd\xc5\xc5\xc0\xa4\xac\xc3\xd6\xa4\xab\xa4\xec\xa4" -"\xc6\xa4\xa4\xa4\xde\xa4\xb9\xa1\xa3\x0a\xc2\xbf\xa4\xaf\xa4\xce" -"\xa5\xb9\xa5\xaf\xa5\xea\xa5\xd7\xa5\xc8\xb7\xcf\xb8\xc0\xb8\xec" -"\xa4\xc7\xa4\xcf\xa5\xe6\xa1\xbc\xa5\xb6\xa4\xce\xcc\xdc\xc0\xe8" -"\xa4\xce\xcd\xf8\xca\xd8\xc0\xad\xa4\xf2\xcd\xa5\xc0\xe8\xa4\xb7" 
-"\xa4\xc6\xbf\xa7\xa1\xb9\xa4\xca\xb5\xa1\xc7\xbd\xa4\xf2\xb8\xc0" -"\xb8\xec\xcd\xd7\xc1\xc7\xa4\xc8\xa4\xb7\xa4\xc6\xbc\xe8\xa4\xea" -"\xc6\xfe\xa4\xec\xa4\xeb\xbe\xec\xb9\xe7\xa4\xac\xc2\xbf\xa4\xa4" -"\xa4\xce\xa4\xc7\xa4\xb9\xa4\xac\xa1\xa2\x50\x79\x74\x68\x6f\x6e" -"\x20\xa4\xc7\xa4\xcf\xa4\xbd\xa4\xa6\xa4\xa4\xa4\xc3\xa4\xbf\xbe" -"\xae\xba\xd9\xb9\xa9\xa4\xac\xc4\xc9\xb2\xc3\xa4\xb5\xa4\xec\xa4" -"\xeb\xa4\xb3\xa4\xc8\xa4\xcf\xa4\xa2\xa4\xde\xa4\xea\xa4\xa2\xa4" -"\xea\xa4\xde\xa4\xbb\xa4\xf3\xa1\xa3\x0a\xb8\xc0\xb8\xec\xbc\xab" -"\xc2\xce\xa4\xce\xb5\xa1\xc7\xbd\xa4\xcf\xba\xc7\xbe\xae\xb8\xc2" -"\xa4\xcb\xb2\xa1\xa4\xb5\xa4\xa8\xa1\xa2\xc9\xac\xcd\xd7\xa4\xca" -"\xb5\xa1\xc7\xbd\xa4\xcf\xb3\xc8\xc4\xa5\xa5\xe2\xa5\xb8\xa5\xe5" -"\xa1\xbc\xa5\xeb\xa4\xc8\xa4\xb7\xa4\xc6\xc4\xc9\xb2\xc3\xa4\xb9" -"\xa4\xeb\xa1\xa2\xa4\xc8\xa4\xa4\xa4\xa6\xa4\xce\xa4\xac\x20\x50" -"\x79\x74\x68\x6f\x6e\x20\xa4\xce\xa5\xdd\xa5\xea\xa5\xb7\xa1\xbc" -"\xa4\xc7\xa4\xb9\xa1\xa3\x0a\x0a\xa5\xce\xa4\xf7\x20\xa5\xfe\x20" -"\xa5\xc8\xa5\xad\xaf\xac\xaf\xda\x20\xcf\xe3\x8f\xfe\xd8\x20\x8f" -"\xfe\xd4\x8f\xfe\xe8\x8f\xfc\xd6\x0a", -"\x50\x79\x74\x68\x6f\x6e\x20\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba" -"\xe3\x81\xaf\xe3\x80\x81\x31\x39\x39\x30\x20\xe5\xb9\xb4\xe3\x81" -"\x94\xe3\x82\x8d\xe3\x81\x8b\xe3\x82\x89\xe9\x96\x8b\xe5\xa7\x8b" -"\xe3\x81\x95\xe3\x82\x8c\xe3\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3" -"\x81\x99\xe3\x80\x82\x0a\xe9\x96\x8b\xe7\x99\xba\xe8\x80\x85\xe3" -"\x81\xae\x20\x47\x75\x69\x64\x6f\x20\x76\x61\x6e\x20\x52\x6f\x73" -"\x73\x75\x6d\x20\xe3\x81\xaf\xe6\x95\x99\xe8\x82\xb2\xe7\x94\xa8" -"\xe3\x81\xae\xe3\x83\x97\xe3\x83\xad\xe3\x82\xb0\xe3\x83\xa9\xe3" -"\x83\x9f\xe3\x83\xb3\xe3\x82\xb0\xe8\xa8\x80\xe8\xaa\x9e\xe3\x80" -"\x8c\x41\x42\x43\xe3\x80\x8d\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba" -"\xe3\x81\xab\xe5\x8f\x82\xe5\x8a\xa0\xe3\x81\x97\xe3\x81\xa6\xe3" -"\x81\x84\xe3\x81\xbe\xe3\x81\x97\xe3\x81\x9f\xe3\x81\x8c\xe3\x80" -"\x81\x41\x42\x43\x20\xe3\x81\xaf\xe5\xae\x9f\xe7\x94\xa8\xe4\xb8" -"\x8a\xe3\x81\xae\xe7\x9b\xae\xe7\x9a\x84\xe3\x81\xab\xe3\x81\xaf" -"\xe3\x81\x82\xe3\x81\xbe\xe3\x82\x8a\xe9\x81\xa9\xe3\x81\x97\xe3" -"\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3\x81\x9b\xe3\x82\x93\xe3\x81" -"\xa7\xe3\x81\x97\xe3\x81\x9f\xe3\x80\x82\x0a\xe3\x81\x93\xe3\x81" -"\xae\xe3\x81\x9f\xe3\x82\x81\xe3\x80\x81\x47\x75\x69\x64\x6f\x20" -"\xe3\x81\xaf\xe3\x82\x88\xe3\x82\x8a\xe5\xae\x9f\xe7\x94\xa8\xe7" -"\x9a\x84\xe3\x81\xaa\xe3\x83\x97\xe3\x83\xad\xe3\x82\xb0\xe3\x83" -"\xa9\xe3\x83\x9f\xe3\x83\xb3\xe3\x82\xb0\xe8\xa8\x80\xe8\xaa\x9e" -"\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba\xe3\x82\x92\xe9\x96\x8b\xe5" -"\xa7\x8b\xe3\x81\x97\xe3\x80\x81\xe8\x8b\xb1\xe5\x9b\xbd\x20\x42" -"\x42\x53\x20\xe6\x94\xbe\xe9\x80\x81\xe3\x81\xae\xe3\x82\xb3\xe3" -"\x83\xa1\xe3\x83\x87\xe3\x82\xa3\xe7\x95\xaa\xe7\xb5\x84\xe3\x80" -"\x8c\xe3\x83\xa2\xe3\x83\xb3\xe3\x83\x86\xe3\x82\xa3\x20\xe3\x83" -"\x91\xe3\x82\xa4\xe3\x82\xbd\xe3\x83\xb3\xe3\x80\x8d\xe3\x81\xae" -"\xe3\x83\x95\xe3\x82\xa1\xe3\x83\xb3\xe3\x81\xa7\xe3\x81\x82\xe3" -"\x82\x8b\x20\x47\x75\x69\x64\x6f\x20\xe3\x81\xaf\xe3\x81\x93\xe3" -"\x81\xae\xe8\xa8\x80\xe8\xaa\x9e\xe3\x82\x92\xe3\x80\x8c\x50\x79" -"\x74\x68\x6f\x6e\xe3\x80\x8d\xe3\x81\xa8\xe5\x90\x8d\xe3\x81\xa5" -"\xe3\x81\x91\xe3\x81\xbe\xe3\x81\x97\xe3\x81\x9f\xe3\x80\x82\x0a" -"\xe3\x81\x93\xe3\x81\xae\xe3\x82\x88\xe3\x81\x86\xe3\x81\xaa\xe8" -"\x83\x8c\xe6\x99\xaf\xe3\x81\x8b\xe3\x82\x89\xe7\x94\x9f\xe3\x81" -"\xbe\xe3\x82\x8c\xe3\x81\x9f\x20\x50\x79\x74\x68\x6f\x6e\x20\xe3" 
-"\x81\xae\xe8\xa8\x80\xe8\xaa\x9e\xe8\xa8\xad\xe8\xa8\x88\xe3\x81" -"\xaf\xe3\x80\x81\xe3\x80\x8c\xe3\x82\xb7\xe3\x83\xb3\xe3\x83\x97" -"\xe3\x83\xab\xe3\x80\x8d\xe3\x81\xa7\xe3\x80\x8c\xe7\xbf\x92\xe5" -"\xbe\x97\xe3\x81\x8c\xe5\xae\xb9\xe6\x98\x93\xe3\x80\x8d\xe3\x81" -"\xa8\xe3\x81\x84\xe3\x81\x86\xe7\x9b\xae\xe6\xa8\x99\xe3\x81\xab" -"\xe9\x87\x8d\xe7\x82\xb9\xe3\x81\x8c\xe7\xbd\xae\xe3\x81\x8b\xe3" -"\x82\x8c\xe3\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3\x81\x99\xe3\x80" -"\x82\x0a\xe5\xa4\x9a\xe3\x81\x8f\xe3\x81\xae\xe3\x82\xb9\xe3\x82" -"\xaf\xe3\x83\xaa\xe3\x83\x97\xe3\x83\x88\xe7\xb3\xbb\xe8\xa8\x80" -"\xe8\xaa\x9e\xe3\x81\xa7\xe3\x81\xaf\xe3\x83\xa6\xe3\x83\xbc\xe3" -"\x82\xb6\xe3\x81\xae\xe7\x9b\xae\xe5\x85\x88\xe3\x81\xae\xe5\x88" -"\xa9\xe4\xbe\xbf\xe6\x80\xa7\xe3\x82\x92\xe5\x84\xaa\xe5\x85\x88" -"\xe3\x81\x97\xe3\x81\xa6\xe8\x89\xb2\xe3\x80\x85\xe3\x81\xaa\xe6" -"\xa9\x9f\xe8\x83\xbd\xe3\x82\x92\xe8\xa8\x80\xe8\xaa\x9e\xe8\xa6" -"\x81\xe7\xb4\xa0\xe3\x81\xa8\xe3\x81\x97\xe3\x81\xa6\xe5\x8f\x96" -"\xe3\x82\x8a\xe5\x85\xa5\xe3\x82\x8c\xe3\x82\x8b\xe5\xa0\xb4\xe5" -"\x90\x88\xe3\x81\x8c\xe5\xa4\x9a\xe3\x81\x84\xe3\x81\xae\xe3\x81" -"\xa7\xe3\x81\x99\xe3\x81\x8c\xe3\x80\x81\x50\x79\x74\x68\x6f\x6e" -"\x20\xe3\x81\xa7\xe3\x81\xaf\xe3\x81\x9d\xe3\x81\x86\xe3\x81\x84" -"\xe3\x81\xa3\xe3\x81\x9f\xe5\xb0\x8f\xe7\xb4\xb0\xe5\xb7\xa5\xe3" -"\x81\x8c\xe8\xbf\xbd\xe5\x8a\xa0\xe3\x81\x95\xe3\x82\x8c\xe3\x82" -"\x8b\xe3\x81\x93\xe3\x81\xa8\xe3\x81\xaf\xe3\x81\x82\xe3\x81\xbe" -"\xe3\x82\x8a\xe3\x81\x82\xe3\x82\x8a\xe3\x81\xbe\xe3\x81\x9b\xe3" -"\x82\x93\xe3\x80\x82\x0a\xe8\xa8\x80\xe8\xaa\x9e\xe8\x87\xaa\xe4" -"\xbd\x93\xe3\x81\xae\xe6\xa9\x9f\xe8\x83\xbd\xe3\x81\xaf\xe6\x9c" -"\x80\xe5\xb0\x8f\xe9\x99\x90\xe3\x81\xab\xe6\x8a\xbc\xe3\x81\x95" -"\xe3\x81\x88\xe3\x80\x81\xe5\xbf\x85\xe8\xa6\x81\xe3\x81\xaa\xe6" -"\xa9\x9f\xe8\x83\xbd\xe3\x81\xaf\xe6\x8b\xa1\xe5\xbc\xb5\xe3\x83" -"\xa2\xe3\x82\xb8\xe3\x83\xa5\xe3\x83\xbc\xe3\x83\xab\xe3\x81\xa8" -"\xe3\x81\x97\xe3\x81\xa6\xe8\xbf\xbd\xe5\x8a\xa0\xe3\x81\x99\xe3" -"\x82\x8b\xe3\x80\x81\xe3\x81\xa8\xe3\x81\x84\xe3\x81\x86\xe3\x81" -"\xae\xe3\x81\x8c\x20\x50\x79\x74\x68\x6f\x6e\x20\xe3\x81\xae\xe3" -"\x83\x9d\xe3\x83\xaa\xe3\x82\xb7\xe3\x83\xbc\xe3\x81\xa7\xe3\x81" -"\x99\xe3\x80\x82\x0a\x0a\xe3\x83\x8e\xe3\x81\x8b\xe3\x82\x9a\x20" -"\xe3\x83\x88\xe3\x82\x9a\x20\xe3\x83\x88\xe3\x82\xad\xef\xa8\xb6" -"\xef\xa8\xb9\x20\xf0\xa1\x9a\xb4\xf0\xaa\x8e\x8c\x20\xe9\xba\x80" -"\xe9\xbd\x81\xf0\xa9\x9b\xb0\x0a"), -'euc_jp': ( -"\x50\x79\x74\x68\x6f\x6e\x20\xa4\xce\xb3\xab\xc8\xaf\xa4\xcf\xa1" -"\xa2\x31\x39\x39\x30\x20\xc7\xaf\xa4\xb4\xa4\xed\xa4\xab\xa4\xe9" -"\xb3\xab\xbb\xcf\xa4\xb5\xa4\xec\xa4\xc6\xa4\xa4\xa4\xde\xa4\xb9" -"\xa1\xa3\x0a\xb3\xab\xc8\xaf\xbc\xd4\xa4\xce\x20\x47\x75\x69\x64" -"\x6f\x20\x76\x61\x6e\x20\x52\x6f\x73\x73\x75\x6d\x20\xa4\xcf\xb6" -"\xb5\xb0\xe9\xcd\xd1\xa4\xce\xa5\xd7\xa5\xed\xa5\xb0\xa5\xe9\xa5" -"\xdf\xa5\xf3\xa5\xb0\xb8\xc0\xb8\xec\xa1\xd6\x41\x42\x43\xa1\xd7" -"\xa4\xce\xb3\xab\xc8\xaf\xa4\xcb\xbb\xb2\xb2\xc3\xa4\xb7\xa4\xc6" -"\xa4\xa4\xa4\xde\xa4\xb7\xa4\xbf\xa4\xac\xa1\xa2\x41\x42\x43\x20" -"\xa4\xcf\xbc\xc2\xcd\xd1\xbe\xe5\xa4\xce\xcc\xdc\xc5\xaa\xa4\xcb" -"\xa4\xcf\xa4\xa2\xa4\xde\xa4\xea\xc5\xac\xa4\xb7\xa4\xc6\xa4\xa4" -"\xa4\xde\xa4\xbb\xa4\xf3\xa4\xc7\xa4\xb7\xa4\xbf\xa1\xa3\x0a\xa4" -"\xb3\xa4\xce\xa4\xbf\xa4\xe1\xa1\xa2\x47\x75\x69\x64\x6f\x20\xa4" -"\xcf\xa4\xe8\xa4\xea\xbc\xc2\xcd\xd1\xc5\xaa\xa4\xca\xa5\xd7\xa5" -"\xed\xa5\xb0\xa5\xe9\xa5\xdf\xa5\xf3\xa5\xb0\xb8\xc0\xb8\xec\xa4" 
-"\xce\xb3\xab\xc8\xaf\xa4\xf2\xb3\xab\xbb\xcf\xa4\xb7\xa1\xa2\xb1" -"\xd1\xb9\xf1\x20\x42\x42\x53\x20\xca\xfc\xc1\xf7\xa4\xce\xa5\xb3" -"\xa5\xe1\xa5\xc7\xa5\xa3\xc8\xd6\xc1\xc8\xa1\xd6\xa5\xe2\xa5\xf3" -"\xa5\xc6\xa5\xa3\x20\xa5\xd1\xa5\xa4\xa5\xbd\xa5\xf3\xa1\xd7\xa4" -"\xce\xa5\xd5\xa5\xa1\xa5\xf3\xa4\xc7\xa4\xa2\xa4\xeb\x20\x47\x75" -"\x69\x64\x6f\x20\xa4\xcf\xa4\xb3\xa4\xce\xb8\xc0\xb8\xec\xa4\xf2" -"\xa1\xd6\x50\x79\x74\x68\x6f\x6e\xa1\xd7\xa4\xc8\xcc\xbe\xa4\xc5" -"\xa4\xb1\xa4\xde\xa4\xb7\xa4\xbf\xa1\xa3\x0a\xa4\xb3\xa4\xce\xa4" -"\xe8\xa4\xa6\xa4\xca\xc7\xd8\xb7\xca\xa4\xab\xa4\xe9\xc0\xb8\xa4" -"\xde\xa4\xec\xa4\xbf\x20\x50\x79\x74\x68\x6f\x6e\x20\xa4\xce\xb8" -"\xc0\xb8\xec\xc0\xdf\xb7\xd7\xa4\xcf\xa1\xa2\xa1\xd6\xa5\xb7\xa5" -"\xf3\xa5\xd7\xa5\xeb\xa1\xd7\xa4\xc7\xa1\xd6\xbd\xac\xc6\xc0\xa4" -"\xac\xcd\xc6\xb0\xd7\xa1\xd7\xa4\xc8\xa4\xa4\xa4\xa6\xcc\xdc\xc9" -"\xb8\xa4\xcb\xbd\xc5\xc5\xc0\xa4\xac\xc3\xd6\xa4\xab\xa4\xec\xa4" -"\xc6\xa4\xa4\xa4\xde\xa4\xb9\xa1\xa3\x0a\xc2\xbf\xa4\xaf\xa4\xce" -"\xa5\xb9\xa5\xaf\xa5\xea\xa5\xd7\xa5\xc8\xb7\xcf\xb8\xc0\xb8\xec" -"\xa4\xc7\xa4\xcf\xa5\xe6\xa1\xbc\xa5\xb6\xa4\xce\xcc\xdc\xc0\xe8" -"\xa4\xce\xcd\xf8\xca\xd8\xc0\xad\xa4\xf2\xcd\xa5\xc0\xe8\xa4\xb7" -"\xa4\xc6\xbf\xa7\xa1\xb9\xa4\xca\xb5\xa1\xc7\xbd\xa4\xf2\xb8\xc0" -"\xb8\xec\xcd\xd7\xc1\xc7\xa4\xc8\xa4\xb7\xa4\xc6\xbc\xe8\xa4\xea" -"\xc6\xfe\xa4\xec\xa4\xeb\xbe\xec\xb9\xe7\xa4\xac\xc2\xbf\xa4\xa4" -"\xa4\xce\xa4\xc7\xa4\xb9\xa4\xac\xa1\xa2\x50\x79\x74\x68\x6f\x6e" -"\x20\xa4\xc7\xa4\xcf\xa4\xbd\xa4\xa6\xa4\xa4\xa4\xc3\xa4\xbf\xbe" -"\xae\xba\xd9\xb9\xa9\xa4\xac\xc4\xc9\xb2\xc3\xa4\xb5\xa4\xec\xa4" -"\xeb\xa4\xb3\xa4\xc8\xa4\xcf\xa4\xa2\xa4\xde\xa4\xea\xa4\xa2\xa4" -"\xea\xa4\xde\xa4\xbb\xa4\xf3\xa1\xa3\x0a\xb8\xc0\xb8\xec\xbc\xab" -"\xc2\xce\xa4\xce\xb5\xa1\xc7\xbd\xa4\xcf\xba\xc7\xbe\xae\xb8\xc2" -"\xa4\xcb\xb2\xa1\xa4\xb5\xa4\xa8\xa1\xa2\xc9\xac\xcd\xd7\xa4\xca" -"\xb5\xa1\xc7\xbd\xa4\xcf\xb3\xc8\xc4\xa5\xa5\xe2\xa5\xb8\xa5\xe5" -"\xa1\xbc\xa5\xeb\xa4\xc8\xa4\xb7\xa4\xc6\xc4\xc9\xb2\xc3\xa4\xb9" -"\xa4\xeb\xa1\xa2\xa4\xc8\xa4\xa4\xa4\xa6\xa4\xce\xa4\xac\x20\x50" -"\x79\x74\x68\x6f\x6e\x20\xa4\xce\xa5\xdd\xa5\xea\xa5\xb7\xa1\xbc" -"\xa4\xc7\xa4\xb9\xa1\xa3\x0a\x0a", -"\x50\x79\x74\x68\x6f\x6e\x20\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba" -"\xe3\x81\xaf\xe3\x80\x81\x31\x39\x39\x30\x20\xe5\xb9\xb4\xe3\x81" -"\x94\xe3\x82\x8d\xe3\x81\x8b\xe3\x82\x89\xe9\x96\x8b\xe5\xa7\x8b" -"\xe3\x81\x95\xe3\x82\x8c\xe3\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3" -"\x81\x99\xe3\x80\x82\x0a\xe9\x96\x8b\xe7\x99\xba\xe8\x80\x85\xe3" -"\x81\xae\x20\x47\x75\x69\x64\x6f\x20\x76\x61\x6e\x20\x52\x6f\x73" -"\x73\x75\x6d\x20\xe3\x81\xaf\xe6\x95\x99\xe8\x82\xb2\xe7\x94\xa8" -"\xe3\x81\xae\xe3\x83\x97\xe3\x83\xad\xe3\x82\xb0\xe3\x83\xa9\xe3" -"\x83\x9f\xe3\x83\xb3\xe3\x82\xb0\xe8\xa8\x80\xe8\xaa\x9e\xe3\x80" -"\x8c\x41\x42\x43\xe3\x80\x8d\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba" -"\xe3\x81\xab\xe5\x8f\x82\xe5\x8a\xa0\xe3\x81\x97\xe3\x81\xa6\xe3" -"\x81\x84\xe3\x81\xbe\xe3\x81\x97\xe3\x81\x9f\xe3\x81\x8c\xe3\x80" -"\x81\x41\x42\x43\x20\xe3\x81\xaf\xe5\xae\x9f\xe7\x94\xa8\xe4\xb8" -"\x8a\xe3\x81\xae\xe7\x9b\xae\xe7\x9a\x84\xe3\x81\xab\xe3\x81\xaf" -"\xe3\x81\x82\xe3\x81\xbe\xe3\x82\x8a\xe9\x81\xa9\xe3\x81\x97\xe3" -"\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3\x81\x9b\xe3\x82\x93\xe3\x81" -"\xa7\xe3\x81\x97\xe3\x81\x9f\xe3\x80\x82\x0a\xe3\x81\x93\xe3\x81" -"\xae\xe3\x81\x9f\xe3\x82\x81\xe3\x80\x81\x47\x75\x69\x64\x6f\x20" -"\xe3\x81\xaf\xe3\x82\x88\xe3\x82\x8a\xe5\xae\x9f\xe7\x94\xa8\xe7" 
-"\x9a\x84\xe3\x81\xaa\xe3\x83\x97\xe3\x83\xad\xe3\x82\xb0\xe3\x83" -"\xa9\xe3\x83\x9f\xe3\x83\xb3\xe3\x82\xb0\xe8\xa8\x80\xe8\xaa\x9e" -"\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba\xe3\x82\x92\xe9\x96\x8b\xe5" -"\xa7\x8b\xe3\x81\x97\xe3\x80\x81\xe8\x8b\xb1\xe5\x9b\xbd\x20\x42" -"\x42\x53\x20\xe6\x94\xbe\xe9\x80\x81\xe3\x81\xae\xe3\x82\xb3\xe3" -"\x83\xa1\xe3\x83\x87\xe3\x82\xa3\xe7\x95\xaa\xe7\xb5\x84\xe3\x80" -"\x8c\xe3\x83\xa2\xe3\x83\xb3\xe3\x83\x86\xe3\x82\xa3\x20\xe3\x83" -"\x91\xe3\x82\xa4\xe3\x82\xbd\xe3\x83\xb3\xe3\x80\x8d\xe3\x81\xae" -"\xe3\x83\x95\xe3\x82\xa1\xe3\x83\xb3\xe3\x81\xa7\xe3\x81\x82\xe3" -"\x82\x8b\x20\x47\x75\x69\x64\x6f\x20\xe3\x81\xaf\xe3\x81\x93\xe3" -"\x81\xae\xe8\xa8\x80\xe8\xaa\x9e\xe3\x82\x92\xe3\x80\x8c\x50\x79" -"\x74\x68\x6f\x6e\xe3\x80\x8d\xe3\x81\xa8\xe5\x90\x8d\xe3\x81\xa5" -"\xe3\x81\x91\xe3\x81\xbe\xe3\x81\x97\xe3\x81\x9f\xe3\x80\x82\x0a" -"\xe3\x81\x93\xe3\x81\xae\xe3\x82\x88\xe3\x81\x86\xe3\x81\xaa\xe8" -"\x83\x8c\xe6\x99\xaf\xe3\x81\x8b\xe3\x82\x89\xe7\x94\x9f\xe3\x81" -"\xbe\xe3\x82\x8c\xe3\x81\x9f\x20\x50\x79\x74\x68\x6f\x6e\x20\xe3" -"\x81\xae\xe8\xa8\x80\xe8\xaa\x9e\xe8\xa8\xad\xe8\xa8\x88\xe3\x81" -"\xaf\xe3\x80\x81\xe3\x80\x8c\xe3\x82\xb7\xe3\x83\xb3\xe3\x83\x97" -"\xe3\x83\xab\xe3\x80\x8d\xe3\x81\xa7\xe3\x80\x8c\xe7\xbf\x92\xe5" -"\xbe\x97\xe3\x81\x8c\xe5\xae\xb9\xe6\x98\x93\xe3\x80\x8d\xe3\x81" -"\xa8\xe3\x81\x84\xe3\x81\x86\xe7\x9b\xae\xe6\xa8\x99\xe3\x81\xab" -"\xe9\x87\x8d\xe7\x82\xb9\xe3\x81\x8c\xe7\xbd\xae\xe3\x81\x8b\xe3" -"\x82\x8c\xe3\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3\x81\x99\xe3\x80" -"\x82\x0a\xe5\xa4\x9a\xe3\x81\x8f\xe3\x81\xae\xe3\x82\xb9\xe3\x82" -"\xaf\xe3\x83\xaa\xe3\x83\x97\xe3\x83\x88\xe7\xb3\xbb\xe8\xa8\x80" -"\xe8\xaa\x9e\xe3\x81\xa7\xe3\x81\xaf\xe3\x83\xa6\xe3\x83\xbc\xe3" -"\x82\xb6\xe3\x81\xae\xe7\x9b\xae\xe5\x85\x88\xe3\x81\xae\xe5\x88" -"\xa9\xe4\xbe\xbf\xe6\x80\xa7\xe3\x82\x92\xe5\x84\xaa\xe5\x85\x88" -"\xe3\x81\x97\xe3\x81\xa6\xe8\x89\xb2\xe3\x80\x85\xe3\x81\xaa\xe6" -"\xa9\x9f\xe8\x83\xbd\xe3\x82\x92\xe8\xa8\x80\xe8\xaa\x9e\xe8\xa6" -"\x81\xe7\xb4\xa0\xe3\x81\xa8\xe3\x81\x97\xe3\x81\xa6\xe5\x8f\x96" -"\xe3\x82\x8a\xe5\x85\xa5\xe3\x82\x8c\xe3\x82\x8b\xe5\xa0\xb4\xe5" -"\x90\x88\xe3\x81\x8c\xe5\xa4\x9a\xe3\x81\x84\xe3\x81\xae\xe3\x81" -"\xa7\xe3\x81\x99\xe3\x81\x8c\xe3\x80\x81\x50\x79\x74\x68\x6f\x6e" -"\x20\xe3\x81\xa7\xe3\x81\xaf\xe3\x81\x9d\xe3\x81\x86\xe3\x81\x84" -"\xe3\x81\xa3\xe3\x81\x9f\xe5\xb0\x8f\xe7\xb4\xb0\xe5\xb7\xa5\xe3" -"\x81\x8c\xe8\xbf\xbd\xe5\x8a\xa0\xe3\x81\x95\xe3\x82\x8c\xe3\x82" -"\x8b\xe3\x81\x93\xe3\x81\xa8\xe3\x81\xaf\xe3\x81\x82\xe3\x81\xbe" -"\xe3\x82\x8a\xe3\x81\x82\xe3\x82\x8a\xe3\x81\xbe\xe3\x81\x9b\xe3" -"\x82\x93\xe3\x80\x82\x0a\xe8\xa8\x80\xe8\xaa\x9e\xe8\x87\xaa\xe4" -"\xbd\x93\xe3\x81\xae\xe6\xa9\x9f\xe8\x83\xbd\xe3\x81\xaf\xe6\x9c" -"\x80\xe5\xb0\x8f\xe9\x99\x90\xe3\x81\xab\xe6\x8a\xbc\xe3\x81\x95" -"\xe3\x81\x88\xe3\x80\x81\xe5\xbf\x85\xe8\xa6\x81\xe3\x81\xaa\xe6" -"\xa9\x9f\xe8\x83\xbd\xe3\x81\xaf\xe6\x8b\xa1\xe5\xbc\xb5\xe3\x83" -"\xa2\xe3\x82\xb8\xe3\x83\xa5\xe3\x83\xbc\xe3\x83\xab\xe3\x81\xa8" -"\xe3\x81\x97\xe3\x81\xa6\xe8\xbf\xbd\xe5\x8a\xa0\xe3\x81\x99\xe3" -"\x82\x8b\xe3\x80\x81\xe3\x81\xa8\xe3\x81\x84\xe3\x81\x86\xe3\x81" -"\xae\xe3\x81\x8c\x20\x50\x79\x74\x68\x6f\x6e\x20\xe3\x81\xae\xe3" -"\x83\x9d\xe3\x83\xaa\xe3\x82\xb7\xe3\x83\xbc\xe3\x81\xa7\xe3\x81" -"\x99\xe3\x80\x82\x0a\x0a"), -'euc_kr': ( -"\xa1\xdd\x20\xc6\xc4\xc0\xcc\xbd\xe3\x28\x50\x79\x74\x68\x6f\x6e" -"\x29\xc0\xba\x20\xb9\xe8\xbf\xec\xb1\xe2\x20\xbd\xb1\xb0\xed\x2c" 
-"\x20\xb0\xad\xb7\xc2\xc7\xd1\x20\xc7\xc1\xb7\xce\xb1\xd7\xb7\xa1" -"\xb9\xd6\x20\xbe\xf0\xbe\xee\xc0\xd4\xb4\xcf\xb4\xd9\x2e\x20\xc6" -"\xc4\xc0\xcc\xbd\xe3\xc0\xba\x0a\xc8\xbf\xc0\xb2\xc0\xfb\xc0\xce" -"\x20\xb0\xed\xbc\xf6\xc1\xd8\x20\xb5\xa5\xc0\xcc\xc5\xcd\x20\xb1" -"\xb8\xc1\xb6\xbf\xcd\x20\xb0\xa3\xb4\xdc\xc7\xcf\xc1\xf6\xb8\xb8" -"\x20\xc8\xbf\xc0\xb2\xc0\xfb\xc0\xce\x20\xb0\xb4\xc3\xbc\xc1\xf6" -"\xc7\xe2\xc7\xc1\xb7\xce\xb1\xd7\xb7\xa1\xb9\xd6\xc0\xbb\x0a\xc1" -"\xf6\xbf\xf8\xc7\xd5\xb4\xcf\xb4\xd9\x2e\x20\xc6\xc4\xc0\xcc\xbd" -"\xe3\xc0\xc7\x20\xbf\xec\xbe\xc6\x28\xe9\xd0\xe4\xba\x29\xc7\xd1" -"\x20\xb9\xae\xb9\xfd\xb0\xfa\x20\xb5\xbf\xc0\xfb\x20\xc5\xb8\xc0" -"\xcc\xc7\xce\x2c\x20\xb1\xd7\xb8\xae\xb0\xed\x20\xc0\xce\xc5\xcd" -"\xc7\xc1\xb8\xae\xc6\xc3\x0a\xc8\xaf\xb0\xe6\xc0\xba\x20\xc6\xc4" -"\xc0\xcc\xbd\xe3\xc0\xbb\x20\xbd\xba\xc5\xa9\xb8\xb3\xc6\xc3\xb0" -"\xfa\x20\xbf\xa9\xb7\xaf\x20\xba\xd0\xbe\xdf\xbf\xa1\xbc\xad\xbf" -"\xcd\x20\xb4\xeb\xba\xce\xba\xd0\xc0\xc7\x20\xc7\xc3\xb7\xa7\xc6" -"\xfb\xbf\xa1\xbc\xad\xc0\xc7\x20\xba\xfc\xb8\xa5\x0a\xbe\xd6\xc7" -"\xc3\xb8\xae\xc4\xc9\xc0\xcc\xbc\xc7\x20\xb0\xb3\xb9\xdf\xc0\xbb" -"\x20\xc7\xd2\x20\xbc\xf6\x20\xc0\xd6\xb4\xc2\x20\xc0\xcc\xbb\xf3" -"\xc0\xfb\xc0\xce\x20\xbe\xf0\xbe\xee\xb7\xce\x20\xb8\xb8\xb5\xe9" -"\xbe\xee\xc1\xdd\xb4\xcf\xb4\xd9\x2e\x0a\x0a\xa1\xd9\xc3\xb9\xb0" -"\xa1\xb3\xa1\x3a\x20\xb3\xaf\xbe\xc6\xb6\xf3\x20\xa4\xd4\xa4\xb6" -"\xa4\xd0\xa4\xd4\xa4\xd4\xa4\xb6\xa4\xd0\xa4\xd4\xbe\xb1\x7e\x20" -"\xa4\xd4\xa4\xa4\xa4\xd2\xa4\xb7\xc5\xad\x21\x20\xa4\xd4\xa4\xa8" -"\xa4\xd1\xa4\xb7\xb1\xdd\xbe\xf8\xc0\xcc\x20\xc0\xfc\xa4\xd4\xa4" -"\xbe\xa4\xc8\xa4\xb2\xb4\xcf\xb4\xd9\x2e\x20\xa4\xd4\xa4\xb2\xa4" -"\xce\xa4\xaa\x2e\x20\xb1\xd7\xb7\xb1\xb0\xc5\x20\xa4\xd4\xa4\xb7" -"\xa4\xd1\xa4\xb4\xb4\xd9\x2e\x0a", -"\xe2\x97\x8e\x20\xed\x8c\x8c\xec\x9d\xb4\xec\x8d\xac\x28\x50\x79" -"\x74\x68\x6f\x6e\x29\xec\x9d\x80\x20\xeb\xb0\xb0\xec\x9a\xb0\xea" -"\xb8\xb0\x20\xec\x89\xbd\xea\xb3\xa0\x2c\x20\xea\xb0\x95\xeb\xa0" -"\xa5\xed\x95\x9c\x20\xed\x94\x84\xeb\xa1\x9c\xea\xb7\xb8\xeb\x9e" -"\x98\xeb\xb0\x8d\x20\xec\x96\xb8\xec\x96\xb4\xec\x9e\x85\xeb\x8b" -"\x88\xeb\x8b\xa4\x2e\x20\xed\x8c\x8c\xec\x9d\xb4\xec\x8d\xac\xec" -"\x9d\x80\x0a\xed\x9a\xa8\xec\x9c\xa8\xec\xa0\x81\xec\x9d\xb8\x20" -"\xea\xb3\xa0\xec\x88\x98\xec\xa4\x80\x20\xeb\x8d\xb0\xec\x9d\xb4" -"\xed\x84\xb0\x20\xea\xb5\xac\xec\xa1\xb0\xec\x99\x80\x20\xea\xb0" -"\x84\xeb\x8b\xa8\xed\x95\x98\xec\xa7\x80\xeb\xa7\x8c\x20\xed\x9a" -"\xa8\xec\x9c\xa8\xec\xa0\x81\xec\x9d\xb8\x20\xea\xb0\x9d\xec\xb2" -"\xb4\xec\xa7\x80\xed\x96\xa5\xed\x94\x84\xeb\xa1\x9c\xea\xb7\xb8" -"\xeb\x9e\x98\xeb\xb0\x8d\xec\x9d\x84\x0a\xec\xa7\x80\xec\x9b\x90" -"\xed\x95\xa9\xeb\x8b\x88\xeb\x8b\xa4\x2e\x20\xed\x8c\x8c\xec\x9d" -"\xb4\xec\x8d\xac\xec\x9d\x98\x20\xec\x9a\xb0\xec\x95\x84\x28\xe5" -"\x84\xaa\xe9\x9b\x85\x29\xed\x95\x9c\x20\xeb\xac\xb8\xeb\xb2\x95" -"\xea\xb3\xbc\x20\xeb\x8f\x99\xec\xa0\x81\x20\xed\x83\x80\xec\x9d" -"\xb4\xed\x95\x91\x2c\x20\xea\xb7\xb8\xeb\xa6\xac\xea\xb3\xa0\x20" -"\xec\x9d\xb8\xed\x84\xb0\xed\x94\x84\xeb\xa6\xac\xed\x8c\x85\x0a" -"\xed\x99\x98\xea\xb2\xbd\xec\x9d\x80\x20\xed\x8c\x8c\xec\x9d\xb4" -"\xec\x8d\xac\xec\x9d\x84\x20\xec\x8a\xa4\xed\x81\xac\xeb\xa6\xbd" -"\xed\x8c\x85\xea\xb3\xbc\x20\xec\x97\xac\xeb\x9f\xac\x20\xeb\xb6" -"\x84\xec\x95\xbc\xec\x97\x90\xec\x84\x9c\xec\x99\x80\x20\xeb\x8c" -"\x80\xeb\xb6\x80\xeb\xb6\x84\xec\x9d\x98\x20\xed\x94\x8c\xeb\x9e" -"\xab\xed\x8f\xbc\xec\x97\x90\xec\x84\x9c\xec\x9d\x98\x20\xeb\xb9" 
-"\xa0\xeb\xa5\xb8\x0a\xec\x95\xa0\xed\x94\x8c\xeb\xa6\xac\xec\xbc" -"\x80\xec\x9d\xb4\xec\x85\x98\x20\xea\xb0\x9c\xeb\xb0\x9c\xec\x9d" -"\x84\x20\xed\x95\xa0\x20\xec\x88\x98\x20\xec\x9e\x88\xeb\x8a\x94" -"\x20\xec\x9d\xb4\xec\x83\x81\xec\xa0\x81\xec\x9d\xb8\x20\xec\x96" -"\xb8\xec\x96\xb4\xeb\xa1\x9c\x20\xeb\xa7\x8c\xeb\x93\xa4\xec\x96" -"\xb4\xec\xa4\x8d\xeb\x8b\x88\xeb\x8b\xa4\x2e\x0a\x0a\xe2\x98\x86" -"\xec\xb2\xab\xea\xb0\x80\xeb\x81\x9d\x3a\x20\xeb\x82\xa0\xec\x95" -"\x84\xeb\x9d\xbc\x20\xec\x93\x94\xec\x93\x94\xec\x93\xa9\x7e\x20" -"\xeb\x8b\x81\xed\x81\xbc\x21\x20\xeb\x9c\xbd\xea\xb8\x88\xec\x97" -"\x86\xec\x9d\xb4\x20\xec\xa0\x84\xed\x99\xa5\xeb\x8b\x88\xeb\x8b" -"\xa4\x2e\x20\xeb\xb7\x81\x2e\x20\xea\xb7\xb8\xeb\x9f\xb0\xea\xb1" -"\xb0\x20\xec\x9d\x8e\xeb\x8b\xa4\x2e\x0a"), -'gb18030': ( -"\x50\x79\x74\x68\x6f\x6e\xa3\xa8\xc5\xc9\xc9\xad\xa3\xa9\xd3\xef" -"\xd1\xd4\xca\xc7\xd2\xbb\xd6\xd6\xb9\xa6\xc4\xdc\xc7\xbf\xb4\xf3" -"\xb6\xf8\xcd\xea\xc9\xc6\xb5\xc4\xcd\xa8\xd3\xc3\xd0\xcd\xbc\xc6" -"\xcb\xe3\xbb\xfa\xb3\xcc\xd0\xf2\xc9\xe8\xbc\xc6\xd3\xef\xd1\xd4" -"\xa3\xac\x0a\xd2\xd1\xbe\xad\xbe\xdf\xd3\xd0\xca\xae\xb6\xe0\xc4" -"\xea\xb5\xc4\xb7\xa2\xd5\xb9\xc0\xfa\xca\xb7\xa3\xac\xb3\xc9\xca" -"\xec\xc7\xd2\xce\xc8\xb6\xa8\xa1\xa3\xd5\xe2\xd6\xd6\xd3\xef\xd1" -"\xd4\xbe\xdf\xd3\xd0\xb7\xc7\xb3\xa3\xbc\xf2\xbd\xdd\xb6\xf8\xc7" -"\xe5\xce\xfa\x0a\xb5\xc4\xd3\xef\xb7\xa8\xcc\xd8\xb5\xe3\xa3\xac" -"\xca\xca\xba\xcf\xcd\xea\xb3\xc9\xb8\xf7\xd6\xd6\xb8\xdf\xb2\xe3" -"\xc8\xce\xce\xf1\xa3\xac\xbc\xb8\xba\xf5\xbf\xc9\xd2\xd4\xd4\xda" -"\xcb\xf9\xd3\xd0\xb5\xc4\xb2\xd9\xd7\xf7\xcf\xb5\xcd\xb3\xd6\xd0" -"\x0a\xd4\xcb\xd0\xd0\xa1\xa3\xd5\xe2\xd6\xd6\xd3\xef\xd1\xd4\xbc" -"\xf2\xb5\xa5\xb6\xf8\xc7\xbf\xb4\xf3\xa3\xac\xca\xca\xba\xcf\xb8" -"\xf7\xd6\xd6\xc8\xcb\xca\xbf\xd1\xa7\xcf\xb0\xca\xb9\xd3\xc3\xa1" -"\xa3\xc4\xbf\xc7\xb0\xa3\xac\xbb\xf9\xd3\xda\xd5\xe2\x0a\xd6\xd6" -"\xd3\xef\xd1\xd4\xb5\xc4\xcf\xe0\xb9\xd8\xbc\xbc\xca\xf5\xd5\xfd" -"\xd4\xda\xb7\xc9\xcb\xd9\xb5\xc4\xb7\xa2\xd5\xb9\xa3\xac\xd3\xc3" -"\xbb\xa7\xca\xfd\xc1\xbf\xbc\xb1\xbe\xe7\xc0\xa9\xb4\xf3\xa3\xac" -"\xcf\xe0\xb9\xd8\xb5\xc4\xd7\xca\xd4\xb4\xb7\xc7\xb3\xa3\xb6\xe0" -"\xa1\xa3\x0a\xc8\xe7\xba\xce\xd4\xda\x20\x50\x79\x74\x68\x6f\x6e" -"\x20\xd6\xd0\xca\xb9\xd3\xc3\xbc\xc8\xd3\xd0\xb5\xc4\x20\x43\x20" -"\x6c\x69\x62\x72\x61\x72\x79\x3f\x0a\xa1\xa1\xd4\xda\xd9\x59\xd3" -"\x8d\xbf\xc6\xbc\xbc\xbf\xec\xcb\xd9\xb0\x6c\xd5\xb9\xb5\xc4\xbd" -"\xf1\xcc\xec\x2c\x20\xe9\x5f\xb0\x6c\xbc\xb0\x9c\x79\xd4\x87\xdc" -"\x9b\xf3\x77\xb5\xc4\xcb\xd9\xb6\xc8\xca\xc7\xb2\xbb\xc8\xdd\xba" -"\xf6\xd2\x95\xb5\xc4\x0a\xd5\x6e\xee\x7d\x2e\x20\x9e\xe9\xbc\xd3" -"\xbf\xec\xe9\x5f\xb0\x6c\xbc\xb0\x9c\x79\xd4\x87\xb5\xc4\xcb\xd9" -"\xb6\xc8\x2c\x20\xce\xd2\x82\x83\xb1\xe3\xb3\xa3\xcf\xa3\xcd\xfb" -"\xc4\xdc\xc0\xfb\xd3\xc3\xd2\xbb\xd0\xa9\xd2\xd1\xe9\x5f\xb0\x6c" -"\xba\xc3\xb5\xc4\x0a\x6c\x69\x62\x72\x61\x72\x79\x2c\x20\x81\x4b" -"\xd3\xd0\xd2\xbb\x82\x80\x20\x66\x61\x73\x74\x20\x70\x72\x6f\x74" -"\x6f\x74\x79\x70\x69\x6e\x67\x20\xb5\xc4\x20\x70\x72\x6f\x67\x72" -"\x61\x6d\x6d\x69\x6e\x67\x20\x6c\x61\x6e\x67\x75\x61\x67\x65\x20" -"\xbf\xc9\x0a\xb9\xa9\xca\xb9\xd3\xc3\x2e\x20\xc4\xbf\xc7\xb0\xd3" -"\xd0\xd4\x53\xd4\x53\xb6\xe0\xb6\xe0\xb5\xc4\x20\x6c\x69\x62\x72" -"\x61\x72\x79\x20\xca\xc7\xd2\xd4\x20\x43\x20\x8c\x91\xb3\xc9\x2c" -"\x20\xb6\xf8\x20\x50\x79\x74\x68\x6f\x6e\x20\xca\xc7\xd2\xbb\x82" -"\x80\x0a\x66\x61\x73\x74\x20\x70\x72\x6f\x74\x6f\x74\x79\x70\x69" -"\x6e\x67\x20\xb5\xc4\x20\x70\x72\x6f\x67\x72\x61\x6d\x6d\x69\x6e" 
-"\x67\x20\x6c\x61\x6e\x67\x75\x61\x67\x65\x2e\x20\xb9\xca\xce\xd2" -"\x82\x83\xcf\xa3\xcd\xfb\xc4\xdc\x8c\xa2\xbc\xc8\xd3\xd0\xb5\xc4" -"\x0a\x43\x20\x6c\x69\x62\x72\x61\x72\x79\x20\xc4\xc3\xb5\xbd\x20" -"\x50\x79\x74\x68\x6f\x6e\x20\xb5\xc4\xad\x68\xbe\xb3\xd6\xd0\x9c" -"\x79\xd4\x87\xbc\xb0\xd5\xfb\xba\xcf\x2e\x20\xc6\xe4\xd6\xd0\xd7" -"\xee\xd6\xf7\xd2\xaa\xd2\xb2\xca\xc7\xce\xd2\x82\x83\xcb\xf9\x0a" -"\xd2\xaa\xd3\x91\xd5\x93\xb5\xc4\x86\x96\xee\x7d\xbe\xcd\xca\xc7" -"\x3a\x0a\x83\x35\xc7\x31\x83\x33\x9a\x33\x83\x32\xb1\x31\x83\x33" -"\x95\x31\x20\x82\x37\xd1\x36\x83\x30\x8c\x34\x83\x36\x84\x33\x20" -"\x82\x38\x89\x35\x82\x38\xfb\x36\x83\x33\x95\x35\x20\x83\x33\xd5" -"\x31\x82\x39\x81\x35\x20\x83\x30\xfd\x39\x83\x33\x86\x30\x20\x83" -"\x34\xdc\x33\x83\x35\xf6\x37\x83\x35\x97\x35\x20\x83\x35\xf9\x35" -"\x83\x30\x91\x39\x82\x38\x83\x39\x82\x39\xfc\x33\x83\x30\xf0\x34" -"\x20\x83\x32\xeb\x39\x83\x32\xeb\x35\x82\x39\x83\x39\x2e\x0a\x0a", -"\x50\x79\x74\x68\x6f\x6e\xef\xbc\x88\xe6\xb4\xbe\xe6\xa3\xae\xef" -"\xbc\x89\xe8\xaf\xad\xe8\xa8\x80\xe6\x98\xaf\xe4\xb8\x80\xe7\xa7" -"\x8d\xe5\x8a\x9f\xe8\x83\xbd\xe5\xbc\xba\xe5\xa4\xa7\xe8\x80\x8c" -"\xe5\xae\x8c\xe5\x96\x84\xe7\x9a\x84\xe9\x80\x9a\xe7\x94\xa8\xe5" -"\x9e\x8b\xe8\xae\xa1\xe7\xae\x97\xe6\x9c\xba\xe7\xa8\x8b\xe5\xba" -"\x8f\xe8\xae\xbe\xe8\xae\xa1\xe8\xaf\xad\xe8\xa8\x80\xef\xbc\x8c" -"\x0a\xe5\xb7\xb2\xe7\xbb\x8f\xe5\x85\xb7\xe6\x9c\x89\xe5\x8d\x81" -"\xe5\xa4\x9a\xe5\xb9\xb4\xe7\x9a\x84\xe5\x8f\x91\xe5\xb1\x95\xe5" -"\x8e\x86\xe5\x8f\xb2\xef\xbc\x8c\xe6\x88\x90\xe7\x86\x9f\xe4\xb8" -"\x94\xe7\xa8\xb3\xe5\xae\x9a\xe3\x80\x82\xe8\xbf\x99\xe7\xa7\x8d" -"\xe8\xaf\xad\xe8\xa8\x80\xe5\x85\xb7\xe6\x9c\x89\xe9\x9d\x9e\xe5" -"\xb8\xb8\xe7\xae\x80\xe6\x8d\xb7\xe8\x80\x8c\xe6\xb8\x85\xe6\x99" -"\xb0\x0a\xe7\x9a\x84\xe8\xaf\xad\xe6\xb3\x95\xe7\x89\xb9\xe7\x82" -"\xb9\xef\xbc\x8c\xe9\x80\x82\xe5\x90\x88\xe5\xae\x8c\xe6\x88\x90" -"\xe5\x90\x84\xe7\xa7\x8d\xe9\xab\x98\xe5\xb1\x82\xe4\xbb\xbb\xe5" -"\x8a\xa1\xef\xbc\x8c\xe5\x87\xa0\xe4\xb9\x8e\xe5\x8f\xaf\xe4\xbb" -"\xa5\xe5\x9c\xa8\xe6\x89\x80\xe6\x9c\x89\xe7\x9a\x84\xe6\x93\x8d" -"\xe4\xbd\x9c\xe7\xb3\xbb\xe7\xbb\x9f\xe4\xb8\xad\x0a\xe8\xbf\x90" -"\xe8\xa1\x8c\xe3\x80\x82\xe8\xbf\x99\xe7\xa7\x8d\xe8\xaf\xad\xe8" -"\xa8\x80\xe7\xae\x80\xe5\x8d\x95\xe8\x80\x8c\xe5\xbc\xba\xe5\xa4" -"\xa7\xef\xbc\x8c\xe9\x80\x82\xe5\x90\x88\xe5\x90\x84\xe7\xa7\x8d" -"\xe4\xba\xba\xe5\xa3\xab\xe5\xad\xa6\xe4\xb9\xa0\xe4\xbd\xbf\xe7" -"\x94\xa8\xe3\x80\x82\xe7\x9b\xae\xe5\x89\x8d\xef\xbc\x8c\xe5\x9f" -"\xba\xe4\xba\x8e\xe8\xbf\x99\x0a\xe7\xa7\x8d\xe8\xaf\xad\xe8\xa8" -"\x80\xe7\x9a\x84\xe7\x9b\xb8\xe5\x85\xb3\xe6\x8a\x80\xe6\x9c\xaf" -"\xe6\xad\xa3\xe5\x9c\xa8\xe9\xa3\x9e\xe9\x80\x9f\xe7\x9a\x84\xe5" -"\x8f\x91\xe5\xb1\x95\xef\xbc\x8c\xe7\x94\xa8\xe6\x88\xb7\xe6\x95" -"\xb0\xe9\x87\x8f\xe6\x80\xa5\xe5\x89\xa7\xe6\x89\xa9\xe5\xa4\xa7" -"\xef\xbc\x8c\xe7\x9b\xb8\xe5\x85\xb3\xe7\x9a\x84\xe8\xb5\x84\xe6" -"\xba\x90\xe9\x9d\x9e\xe5\xb8\xb8\xe5\xa4\x9a\xe3\x80\x82\x0a\xe5" -"\xa6\x82\xe4\xbd\x95\xe5\x9c\xa8\x20\x50\x79\x74\x68\x6f\x6e\x20" -"\xe4\xb8\xad\xe4\xbd\xbf\xe7\x94\xa8\xe6\x97\xa2\xe6\x9c\x89\xe7" -"\x9a\x84\x20\x43\x20\x6c\x69\x62\x72\x61\x72\x79\x3f\x0a\xe3\x80" -"\x80\xe5\x9c\xa8\xe8\xb3\x87\xe8\xa8\x8a\xe7\xa7\x91\xe6\x8a\x80" -"\xe5\xbf\xab\xe9\x80\x9f\xe7\x99\xbc\xe5\xb1\x95\xe7\x9a\x84\xe4" -"\xbb\x8a\xe5\xa4\xa9\x2c\x20\xe9\x96\x8b\xe7\x99\xbc\xe5\x8f\x8a" -"\xe6\xb8\xac\xe8\xa9\xa6\xe8\xbb\x9f\xe9\xab\x94\xe7\x9a\x84\xe9" -"\x80\x9f\xe5\xba\xa6\xe6\x98\xaf\xe4\xb8\x8d\xe5\xae\xb9\xe5\xbf" 
-"\xbd\xe8\xa6\x96\xe7\x9a\x84\x0a\xe8\xaa\xb2\xe9\xa1\x8c\x2e\x20" -"\xe7\x82\xba\xe5\x8a\xa0\xe5\xbf\xab\xe9\x96\x8b\xe7\x99\xbc\xe5" -"\x8f\x8a\xe6\xb8\xac\xe8\xa9\xa6\xe7\x9a\x84\xe9\x80\x9f\xe5\xba" -"\xa6\x2c\x20\xe6\x88\x91\xe5\x80\x91\xe4\xbe\xbf\xe5\xb8\xb8\xe5" -"\xb8\x8c\xe6\x9c\x9b\xe8\x83\xbd\xe5\x88\xa9\xe7\x94\xa8\xe4\xb8" -"\x80\xe4\xba\x9b\xe5\xb7\xb2\xe9\x96\x8b\xe7\x99\xbc\xe5\xa5\xbd" -"\xe7\x9a\x84\x0a\x6c\x69\x62\x72\x61\x72\x79\x2c\x20\xe4\xb8\xa6" -"\xe6\x9c\x89\xe4\xb8\x80\xe5\x80\x8b\x20\x66\x61\x73\x74\x20\x70" -"\x72\x6f\x74\x6f\x74\x79\x70\x69\x6e\x67\x20\xe7\x9a\x84\x20\x70" -"\x72\x6f\x67\x72\x61\x6d\x6d\x69\x6e\x67\x20\x6c\x61\x6e\x67\x75" -"\x61\x67\x65\x20\xe5\x8f\xaf\x0a\xe4\xbe\x9b\xe4\xbd\xbf\xe7\x94" -"\xa8\x2e\x20\xe7\x9b\xae\xe5\x89\x8d\xe6\x9c\x89\xe8\xa8\xb1\xe8" -"\xa8\xb1\xe5\xa4\x9a\xe5\xa4\x9a\xe7\x9a\x84\x20\x6c\x69\x62\x72" -"\x61\x72\x79\x20\xe6\x98\xaf\xe4\xbb\xa5\x20\x43\x20\xe5\xaf\xab" -"\xe6\x88\x90\x2c\x20\xe8\x80\x8c\x20\x50\x79\x74\x68\x6f\x6e\x20" -"\xe6\x98\xaf\xe4\xb8\x80\xe5\x80\x8b\x0a\x66\x61\x73\x74\x20\x70" -"\x72\x6f\x74\x6f\x74\x79\x70\x69\x6e\x67\x20\xe7\x9a\x84\x20\x70" -"\x72\x6f\x67\x72\x61\x6d\x6d\x69\x6e\x67\x20\x6c\x61\x6e\x67\x75" -"\x61\x67\x65\x2e\x20\xe6\x95\x85\xe6\x88\x91\xe5\x80\x91\xe5\xb8" -"\x8c\xe6\x9c\x9b\xe8\x83\xbd\xe5\xb0\x87\xe6\x97\xa2\xe6\x9c\x89" -"\xe7\x9a\x84\x0a\x43\x20\x6c\x69\x62\x72\x61\x72\x79\x20\xe6\x8b" -"\xbf\xe5\x88\xb0\x20\x50\x79\x74\x68\x6f\x6e\x20\xe7\x9a\x84\xe7" -"\x92\xb0\xe5\xa2\x83\xe4\xb8\xad\xe6\xb8\xac\xe8\xa9\xa6\xe5\x8f" -"\x8a\xe6\x95\xb4\xe5\x90\x88\x2e\x20\xe5\x85\xb6\xe4\xb8\xad\xe6" -"\x9c\x80\xe4\xb8\xbb\xe8\xa6\x81\xe4\xb9\x9f\xe6\x98\xaf\xe6\x88" -"\x91\xe5\x80\x91\xe6\x89\x80\x0a\xe8\xa6\x81\xe8\xa8\x8e\xe8\xab" -"\x96\xe7\x9a\x84\xe5\x95\x8f\xe9\xa1\x8c\xe5\xb0\xb1\xe6\x98\xaf" -"\x3a\x0a\xed\x8c\x8c\xec\x9d\xb4\xec\x8d\xac\xec\x9d\x80\x20\xea" -"\xb0\x95\xeb\xa0\xa5\xed\x95\x9c\x20\xea\xb8\xb0\xeb\x8a\xa5\xec" -"\x9d\x84\x20\xec\xa7\x80\xeb\x8b\x8c\x20\xeb\xb2\x94\xec\x9a\xa9" -"\x20\xec\xbb\xb4\xed\x93\xa8\xed\x84\xb0\x20\xed\x94\x84\xeb\xa1" -"\x9c\xea\xb7\xb8\xeb\x9e\x98\xeb\xb0\x8d\x20\xec\x96\xb8\xec\x96" -"\xb4\xeb\x8b\xa4\x2e\x0a\x0a"), -'gb2312': ( -"\x50\x79\x74\x68\x6f\x6e\xa3\xa8\xc5\xc9\xc9\xad\xa3\xa9\xd3\xef" -"\xd1\xd4\xca\xc7\xd2\xbb\xd6\xd6\xb9\xa6\xc4\xdc\xc7\xbf\xb4\xf3" -"\xb6\xf8\xcd\xea\xc9\xc6\xb5\xc4\xcd\xa8\xd3\xc3\xd0\xcd\xbc\xc6" -"\xcb\xe3\xbb\xfa\xb3\xcc\xd0\xf2\xc9\xe8\xbc\xc6\xd3\xef\xd1\xd4" -"\xa3\xac\x0a\xd2\xd1\xbe\xad\xbe\xdf\xd3\xd0\xca\xae\xb6\xe0\xc4" -"\xea\xb5\xc4\xb7\xa2\xd5\xb9\xc0\xfa\xca\xb7\xa3\xac\xb3\xc9\xca" -"\xec\xc7\xd2\xce\xc8\xb6\xa8\xa1\xa3\xd5\xe2\xd6\xd6\xd3\xef\xd1" -"\xd4\xbe\xdf\xd3\xd0\xb7\xc7\xb3\xa3\xbc\xf2\xbd\xdd\xb6\xf8\xc7" -"\xe5\xce\xfa\x0a\xb5\xc4\xd3\xef\xb7\xa8\xcc\xd8\xb5\xe3\xa3\xac" -"\xca\xca\xba\xcf\xcd\xea\xb3\xc9\xb8\xf7\xd6\xd6\xb8\xdf\xb2\xe3" -"\xc8\xce\xce\xf1\xa3\xac\xbc\xb8\xba\xf5\xbf\xc9\xd2\xd4\xd4\xda" -"\xcb\xf9\xd3\xd0\xb5\xc4\xb2\xd9\xd7\xf7\xcf\xb5\xcd\xb3\xd6\xd0" -"\x0a\xd4\xcb\xd0\xd0\xa1\xa3\xd5\xe2\xd6\xd6\xd3\xef\xd1\xd4\xbc" -"\xf2\xb5\xa5\xb6\xf8\xc7\xbf\xb4\xf3\xa3\xac\xca\xca\xba\xcf\xb8" -"\xf7\xd6\xd6\xc8\xcb\xca\xbf\xd1\xa7\xcf\xb0\xca\xb9\xd3\xc3\xa1" -"\xa3\xc4\xbf\xc7\xb0\xa3\xac\xbb\xf9\xd3\xda\xd5\xe2\x0a\xd6\xd6" -"\xd3\xef\xd1\xd4\xb5\xc4\xcf\xe0\xb9\xd8\xbc\xbc\xca\xf5\xd5\xfd" -"\xd4\xda\xb7\xc9\xcb\xd9\xb5\xc4\xb7\xa2\xd5\xb9\xa3\xac\xd3\xc3" -"\xbb\xa7\xca\xfd\xc1\xbf\xbc\xb1\xbe\xe7\xc0\xa9\xb4\xf3\xa3\xac" 
-"\xcf\xe0\xb9\xd8\xb5\xc4\xd7\xca\xd4\xb4\xb7\xc7\xb3\xa3\xb6\xe0" -"\xa1\xa3\x0a\x0a", -"\x50\x79\x74\x68\x6f\x6e\xef\xbc\x88\xe6\xb4\xbe\xe6\xa3\xae\xef" -"\xbc\x89\xe8\xaf\xad\xe8\xa8\x80\xe6\x98\xaf\xe4\xb8\x80\xe7\xa7" -"\x8d\xe5\x8a\x9f\xe8\x83\xbd\xe5\xbc\xba\xe5\xa4\xa7\xe8\x80\x8c" -"\xe5\xae\x8c\xe5\x96\x84\xe7\x9a\x84\xe9\x80\x9a\xe7\x94\xa8\xe5" -"\x9e\x8b\xe8\xae\xa1\xe7\xae\x97\xe6\x9c\xba\xe7\xa8\x8b\xe5\xba" -"\x8f\xe8\xae\xbe\xe8\xae\xa1\xe8\xaf\xad\xe8\xa8\x80\xef\xbc\x8c" -"\x0a\xe5\xb7\xb2\xe7\xbb\x8f\xe5\x85\xb7\xe6\x9c\x89\xe5\x8d\x81" -"\xe5\xa4\x9a\xe5\xb9\xb4\xe7\x9a\x84\xe5\x8f\x91\xe5\xb1\x95\xe5" -"\x8e\x86\xe5\x8f\xb2\xef\xbc\x8c\xe6\x88\x90\xe7\x86\x9f\xe4\xb8" -"\x94\xe7\xa8\xb3\xe5\xae\x9a\xe3\x80\x82\xe8\xbf\x99\xe7\xa7\x8d" -"\xe8\xaf\xad\xe8\xa8\x80\xe5\x85\xb7\xe6\x9c\x89\xe9\x9d\x9e\xe5" -"\xb8\xb8\xe7\xae\x80\xe6\x8d\xb7\xe8\x80\x8c\xe6\xb8\x85\xe6\x99" -"\xb0\x0a\xe7\x9a\x84\xe8\xaf\xad\xe6\xb3\x95\xe7\x89\xb9\xe7\x82" -"\xb9\xef\xbc\x8c\xe9\x80\x82\xe5\x90\x88\xe5\xae\x8c\xe6\x88\x90" -"\xe5\x90\x84\xe7\xa7\x8d\xe9\xab\x98\xe5\xb1\x82\xe4\xbb\xbb\xe5" -"\x8a\xa1\xef\xbc\x8c\xe5\x87\xa0\xe4\xb9\x8e\xe5\x8f\xaf\xe4\xbb" -"\xa5\xe5\x9c\xa8\xe6\x89\x80\xe6\x9c\x89\xe7\x9a\x84\xe6\x93\x8d" -"\xe4\xbd\x9c\xe7\xb3\xbb\xe7\xbb\x9f\xe4\xb8\xad\x0a\xe8\xbf\x90" -"\xe8\xa1\x8c\xe3\x80\x82\xe8\xbf\x99\xe7\xa7\x8d\xe8\xaf\xad\xe8" -"\xa8\x80\xe7\xae\x80\xe5\x8d\x95\xe8\x80\x8c\xe5\xbc\xba\xe5\xa4" -"\xa7\xef\xbc\x8c\xe9\x80\x82\xe5\x90\x88\xe5\x90\x84\xe7\xa7\x8d" -"\xe4\xba\xba\xe5\xa3\xab\xe5\xad\xa6\xe4\xb9\xa0\xe4\xbd\xbf\xe7" -"\x94\xa8\xe3\x80\x82\xe7\x9b\xae\xe5\x89\x8d\xef\xbc\x8c\xe5\x9f" -"\xba\xe4\xba\x8e\xe8\xbf\x99\x0a\xe7\xa7\x8d\xe8\xaf\xad\xe8\xa8" -"\x80\xe7\x9a\x84\xe7\x9b\xb8\xe5\x85\xb3\xe6\x8a\x80\xe6\x9c\xaf" -"\xe6\xad\xa3\xe5\x9c\xa8\xe9\xa3\x9e\xe9\x80\x9f\xe7\x9a\x84\xe5" -"\x8f\x91\xe5\xb1\x95\xef\xbc\x8c\xe7\x94\xa8\xe6\x88\xb7\xe6\x95" -"\xb0\xe9\x87\x8f\xe6\x80\xa5\xe5\x89\xa7\xe6\x89\xa9\xe5\xa4\xa7" -"\xef\xbc\x8c\xe7\x9b\xb8\xe5\x85\xb3\xe7\x9a\x84\xe8\xb5\x84\xe6" -"\xba\x90\xe9\x9d\x9e\xe5\xb8\xb8\xe5\xa4\x9a\xe3\x80\x82\x0a\x0a"), -'gbk': ( -"\x50\x79\x74\x68\x6f\x6e\xa3\xa8\xc5\xc9\xc9\xad\xa3\xa9\xd3\xef" -"\xd1\xd4\xca\xc7\xd2\xbb\xd6\xd6\xb9\xa6\xc4\xdc\xc7\xbf\xb4\xf3" -"\xb6\xf8\xcd\xea\xc9\xc6\xb5\xc4\xcd\xa8\xd3\xc3\xd0\xcd\xbc\xc6" -"\xcb\xe3\xbb\xfa\xb3\xcc\xd0\xf2\xc9\xe8\xbc\xc6\xd3\xef\xd1\xd4" -"\xa3\xac\x0a\xd2\xd1\xbe\xad\xbe\xdf\xd3\xd0\xca\xae\xb6\xe0\xc4" -"\xea\xb5\xc4\xb7\xa2\xd5\xb9\xc0\xfa\xca\xb7\xa3\xac\xb3\xc9\xca" -"\xec\xc7\xd2\xce\xc8\xb6\xa8\xa1\xa3\xd5\xe2\xd6\xd6\xd3\xef\xd1" -"\xd4\xbe\xdf\xd3\xd0\xb7\xc7\xb3\xa3\xbc\xf2\xbd\xdd\xb6\xf8\xc7" -"\xe5\xce\xfa\x0a\xb5\xc4\xd3\xef\xb7\xa8\xcc\xd8\xb5\xe3\xa3\xac" -"\xca\xca\xba\xcf\xcd\xea\xb3\xc9\xb8\xf7\xd6\xd6\xb8\xdf\xb2\xe3" -"\xc8\xce\xce\xf1\xa3\xac\xbc\xb8\xba\xf5\xbf\xc9\xd2\xd4\xd4\xda" -"\xcb\xf9\xd3\xd0\xb5\xc4\xb2\xd9\xd7\xf7\xcf\xb5\xcd\xb3\xd6\xd0" -"\x0a\xd4\xcb\xd0\xd0\xa1\xa3\xd5\xe2\xd6\xd6\xd3\xef\xd1\xd4\xbc" -"\xf2\xb5\xa5\xb6\xf8\xc7\xbf\xb4\xf3\xa3\xac\xca\xca\xba\xcf\xb8" -"\xf7\xd6\xd6\xc8\xcb\xca\xbf\xd1\xa7\xcf\xb0\xca\xb9\xd3\xc3\xa1" -"\xa3\xc4\xbf\xc7\xb0\xa3\xac\xbb\xf9\xd3\xda\xd5\xe2\x0a\xd6\xd6" -"\xd3\xef\xd1\xd4\xb5\xc4\xcf\xe0\xb9\xd8\xbc\xbc\xca\xf5\xd5\xfd" -"\xd4\xda\xb7\xc9\xcb\xd9\xb5\xc4\xb7\xa2\xd5\xb9\xa3\xac\xd3\xc3" -"\xbb\xa7\xca\xfd\xc1\xbf\xbc\xb1\xbe\xe7\xc0\xa9\xb4\xf3\xa3\xac" -"\xcf\xe0\xb9\xd8\xb5\xc4\xd7\xca\xd4\xb4\xb7\xc7\xb3\xa3\xb6\xe0" 
-"\xa1\xa3\x0a\xc8\xe7\xba\xce\xd4\xda\x20\x50\x79\x74\x68\x6f\x6e" -"\x20\xd6\xd0\xca\xb9\xd3\xc3\xbc\xc8\xd3\xd0\xb5\xc4\x20\x43\x20" -"\x6c\x69\x62\x72\x61\x72\x79\x3f\x0a\xa1\xa1\xd4\xda\xd9\x59\xd3" -"\x8d\xbf\xc6\xbc\xbc\xbf\xec\xcb\xd9\xb0\x6c\xd5\xb9\xb5\xc4\xbd" -"\xf1\xcc\xec\x2c\x20\xe9\x5f\xb0\x6c\xbc\xb0\x9c\x79\xd4\x87\xdc" -"\x9b\xf3\x77\xb5\xc4\xcb\xd9\xb6\xc8\xca\xc7\xb2\xbb\xc8\xdd\xba" -"\xf6\xd2\x95\xb5\xc4\x0a\xd5\x6e\xee\x7d\x2e\x20\x9e\xe9\xbc\xd3" -"\xbf\xec\xe9\x5f\xb0\x6c\xbc\xb0\x9c\x79\xd4\x87\xb5\xc4\xcb\xd9" -"\xb6\xc8\x2c\x20\xce\xd2\x82\x83\xb1\xe3\xb3\xa3\xcf\xa3\xcd\xfb" -"\xc4\xdc\xc0\xfb\xd3\xc3\xd2\xbb\xd0\xa9\xd2\xd1\xe9\x5f\xb0\x6c" -"\xba\xc3\xb5\xc4\x0a\x6c\x69\x62\x72\x61\x72\x79\x2c\x20\x81\x4b" -"\xd3\xd0\xd2\xbb\x82\x80\x20\x66\x61\x73\x74\x20\x70\x72\x6f\x74" -"\x6f\x74\x79\x70\x69\x6e\x67\x20\xb5\xc4\x20\x70\x72\x6f\x67\x72" -"\x61\x6d\x6d\x69\x6e\x67\x20\x6c\x61\x6e\x67\x75\x61\x67\x65\x20" -"\xbf\xc9\x0a\xb9\xa9\xca\xb9\xd3\xc3\x2e\x20\xc4\xbf\xc7\xb0\xd3" -"\xd0\xd4\x53\xd4\x53\xb6\xe0\xb6\xe0\xb5\xc4\x20\x6c\x69\x62\x72" -"\x61\x72\x79\x20\xca\xc7\xd2\xd4\x20\x43\x20\x8c\x91\xb3\xc9\x2c" -"\x20\xb6\xf8\x20\x50\x79\x74\x68\x6f\x6e\x20\xca\xc7\xd2\xbb\x82" -"\x80\x0a\x66\x61\x73\x74\x20\x70\x72\x6f\x74\x6f\x74\x79\x70\x69" -"\x6e\x67\x20\xb5\xc4\x20\x70\x72\x6f\x67\x72\x61\x6d\x6d\x69\x6e" -"\x67\x20\x6c\x61\x6e\x67\x75\x61\x67\x65\x2e\x20\xb9\xca\xce\xd2" -"\x82\x83\xcf\xa3\xcd\xfb\xc4\xdc\x8c\xa2\xbc\xc8\xd3\xd0\xb5\xc4" -"\x0a\x43\x20\x6c\x69\x62\x72\x61\x72\x79\x20\xc4\xc3\xb5\xbd\x20" -"\x50\x79\x74\x68\x6f\x6e\x20\xb5\xc4\xad\x68\xbe\xb3\xd6\xd0\x9c" -"\x79\xd4\x87\xbc\xb0\xd5\xfb\xba\xcf\x2e\x20\xc6\xe4\xd6\xd0\xd7" -"\xee\xd6\xf7\xd2\xaa\xd2\xb2\xca\xc7\xce\xd2\x82\x83\xcb\xf9\x0a" -"\xd2\xaa\xd3\x91\xd5\x93\xb5\xc4\x86\x96\xee\x7d\xbe\xcd\xca\xc7" -"\x3a\x0a\x0a", -"\x50\x79\x74\x68\x6f\x6e\xef\xbc\x88\xe6\xb4\xbe\xe6\xa3\xae\xef" -"\xbc\x89\xe8\xaf\xad\xe8\xa8\x80\xe6\x98\xaf\xe4\xb8\x80\xe7\xa7" -"\x8d\xe5\x8a\x9f\xe8\x83\xbd\xe5\xbc\xba\xe5\xa4\xa7\xe8\x80\x8c" -"\xe5\xae\x8c\xe5\x96\x84\xe7\x9a\x84\xe9\x80\x9a\xe7\x94\xa8\xe5" -"\x9e\x8b\xe8\xae\xa1\xe7\xae\x97\xe6\x9c\xba\xe7\xa8\x8b\xe5\xba" -"\x8f\xe8\xae\xbe\xe8\xae\xa1\xe8\xaf\xad\xe8\xa8\x80\xef\xbc\x8c" -"\x0a\xe5\xb7\xb2\xe7\xbb\x8f\xe5\x85\xb7\xe6\x9c\x89\xe5\x8d\x81" -"\xe5\xa4\x9a\xe5\xb9\xb4\xe7\x9a\x84\xe5\x8f\x91\xe5\xb1\x95\xe5" -"\x8e\x86\xe5\x8f\xb2\xef\xbc\x8c\xe6\x88\x90\xe7\x86\x9f\xe4\xb8" -"\x94\xe7\xa8\xb3\xe5\xae\x9a\xe3\x80\x82\xe8\xbf\x99\xe7\xa7\x8d" -"\xe8\xaf\xad\xe8\xa8\x80\xe5\x85\xb7\xe6\x9c\x89\xe9\x9d\x9e\xe5" -"\xb8\xb8\xe7\xae\x80\xe6\x8d\xb7\xe8\x80\x8c\xe6\xb8\x85\xe6\x99" -"\xb0\x0a\xe7\x9a\x84\xe8\xaf\xad\xe6\xb3\x95\xe7\x89\xb9\xe7\x82" -"\xb9\xef\xbc\x8c\xe9\x80\x82\xe5\x90\x88\xe5\xae\x8c\xe6\x88\x90" -"\xe5\x90\x84\xe7\xa7\x8d\xe9\xab\x98\xe5\xb1\x82\xe4\xbb\xbb\xe5" -"\x8a\xa1\xef\xbc\x8c\xe5\x87\xa0\xe4\xb9\x8e\xe5\x8f\xaf\xe4\xbb" -"\xa5\xe5\x9c\xa8\xe6\x89\x80\xe6\x9c\x89\xe7\x9a\x84\xe6\x93\x8d" -"\xe4\xbd\x9c\xe7\xb3\xbb\xe7\xbb\x9f\xe4\xb8\xad\x0a\xe8\xbf\x90" -"\xe8\xa1\x8c\xe3\x80\x82\xe8\xbf\x99\xe7\xa7\x8d\xe8\xaf\xad\xe8" -"\xa8\x80\xe7\xae\x80\xe5\x8d\x95\xe8\x80\x8c\xe5\xbc\xba\xe5\xa4" -"\xa7\xef\xbc\x8c\xe9\x80\x82\xe5\x90\x88\xe5\x90\x84\xe7\xa7\x8d" -"\xe4\xba\xba\xe5\xa3\xab\xe5\xad\xa6\xe4\xb9\xa0\xe4\xbd\xbf\xe7" -"\x94\xa8\xe3\x80\x82\xe7\x9b\xae\xe5\x89\x8d\xef\xbc\x8c\xe5\x9f" -"\xba\xe4\xba\x8e\xe8\xbf\x99\x0a\xe7\xa7\x8d\xe8\xaf\xad\xe8\xa8" -"\x80\xe7\x9a\x84\xe7\x9b\xb8\xe5\x85\xb3\xe6\x8a\x80\xe6\x9c\xaf" 
-"\xe6\xad\xa3\xe5\x9c\xa8\xe9\xa3\x9e\xe9\x80\x9f\xe7\x9a\x84\xe5" -"\x8f\x91\xe5\xb1\x95\xef\xbc\x8c\xe7\x94\xa8\xe6\x88\xb7\xe6\x95" -"\xb0\xe9\x87\x8f\xe6\x80\xa5\xe5\x89\xa7\xe6\x89\xa9\xe5\xa4\xa7" -"\xef\xbc\x8c\xe7\x9b\xb8\xe5\x85\xb3\xe7\x9a\x84\xe8\xb5\x84\xe6" -"\xba\x90\xe9\x9d\x9e\xe5\xb8\xb8\xe5\xa4\x9a\xe3\x80\x82\x0a\xe5" -"\xa6\x82\xe4\xbd\x95\xe5\x9c\xa8\x20\x50\x79\x74\x68\x6f\x6e\x20" -"\xe4\xb8\xad\xe4\xbd\xbf\xe7\x94\xa8\xe6\x97\xa2\xe6\x9c\x89\xe7" -"\x9a\x84\x20\x43\x20\x6c\x69\x62\x72\x61\x72\x79\x3f\x0a\xe3\x80" -"\x80\xe5\x9c\xa8\xe8\xb3\x87\xe8\xa8\x8a\xe7\xa7\x91\xe6\x8a\x80" -"\xe5\xbf\xab\xe9\x80\x9f\xe7\x99\xbc\xe5\xb1\x95\xe7\x9a\x84\xe4" -"\xbb\x8a\xe5\xa4\xa9\x2c\x20\xe9\x96\x8b\xe7\x99\xbc\xe5\x8f\x8a" -"\xe6\xb8\xac\xe8\xa9\xa6\xe8\xbb\x9f\xe9\xab\x94\xe7\x9a\x84\xe9" -"\x80\x9f\xe5\xba\xa6\xe6\x98\xaf\xe4\xb8\x8d\xe5\xae\xb9\xe5\xbf" -"\xbd\xe8\xa6\x96\xe7\x9a\x84\x0a\xe8\xaa\xb2\xe9\xa1\x8c\x2e\x20" -"\xe7\x82\xba\xe5\x8a\xa0\xe5\xbf\xab\xe9\x96\x8b\xe7\x99\xbc\xe5" -"\x8f\x8a\xe6\xb8\xac\xe8\xa9\xa6\xe7\x9a\x84\xe9\x80\x9f\xe5\xba" -"\xa6\x2c\x20\xe6\x88\x91\xe5\x80\x91\xe4\xbe\xbf\xe5\xb8\xb8\xe5" -"\xb8\x8c\xe6\x9c\x9b\xe8\x83\xbd\xe5\x88\xa9\xe7\x94\xa8\xe4\xb8" -"\x80\xe4\xba\x9b\xe5\xb7\xb2\xe9\x96\x8b\xe7\x99\xbc\xe5\xa5\xbd" -"\xe7\x9a\x84\x0a\x6c\x69\x62\x72\x61\x72\x79\x2c\x20\xe4\xb8\xa6" -"\xe6\x9c\x89\xe4\xb8\x80\xe5\x80\x8b\x20\x66\x61\x73\x74\x20\x70" -"\x72\x6f\x74\x6f\x74\x79\x70\x69\x6e\x67\x20\xe7\x9a\x84\x20\x70" -"\x72\x6f\x67\x72\x61\x6d\x6d\x69\x6e\x67\x20\x6c\x61\x6e\x67\x75" -"\x61\x67\x65\x20\xe5\x8f\xaf\x0a\xe4\xbe\x9b\xe4\xbd\xbf\xe7\x94" -"\xa8\x2e\x20\xe7\x9b\xae\xe5\x89\x8d\xe6\x9c\x89\xe8\xa8\xb1\xe8" -"\xa8\xb1\xe5\xa4\x9a\xe5\xa4\x9a\xe7\x9a\x84\x20\x6c\x69\x62\x72" -"\x61\x72\x79\x20\xe6\x98\xaf\xe4\xbb\xa5\x20\x43\x20\xe5\xaf\xab" -"\xe6\x88\x90\x2c\x20\xe8\x80\x8c\x20\x50\x79\x74\x68\x6f\x6e\x20" -"\xe6\x98\xaf\xe4\xb8\x80\xe5\x80\x8b\x0a\x66\x61\x73\x74\x20\x70" -"\x72\x6f\x74\x6f\x74\x79\x70\x69\x6e\x67\x20\xe7\x9a\x84\x20\x70" -"\x72\x6f\x67\x72\x61\x6d\x6d\x69\x6e\x67\x20\x6c\x61\x6e\x67\x75" -"\x61\x67\x65\x2e\x20\xe6\x95\x85\xe6\x88\x91\xe5\x80\x91\xe5\xb8" -"\x8c\xe6\x9c\x9b\xe8\x83\xbd\xe5\xb0\x87\xe6\x97\xa2\xe6\x9c\x89" -"\xe7\x9a\x84\x0a\x43\x20\x6c\x69\x62\x72\x61\x72\x79\x20\xe6\x8b" -"\xbf\xe5\x88\xb0\x20\x50\x79\x74\x68\x6f\x6e\x20\xe7\x9a\x84\xe7" -"\x92\xb0\xe5\xa2\x83\xe4\xb8\xad\xe6\xb8\xac\xe8\xa9\xa6\xe5\x8f" -"\x8a\xe6\x95\xb4\xe5\x90\x88\x2e\x20\xe5\x85\xb6\xe4\xb8\xad\xe6" -"\x9c\x80\xe4\xb8\xbb\xe8\xa6\x81\xe4\xb9\x9f\xe6\x98\xaf\xe6\x88" -"\x91\xe5\x80\x91\xe6\x89\x80\x0a\xe8\xa6\x81\xe8\xa8\x8e\xe8\xab" -"\x96\xe7\x9a\x84\xe5\x95\x8f\xe9\xa1\x8c\xe5\xb0\xb1\xe6\x98\xaf" -"\x3a\x0a\x0a"), -'johab': ( -"\x99\xb1\xa4\x77\x88\x62\xd0\x61\x20\xcd\x5c\xaf\xa1\xc5\xa9\x9c" -"\x61\x0a\x0a\xdc\xc0\xdc\xc0\x90\x73\x21\x21\x20\xf1\x67\xe2\x9c" -"\xf0\x55\xcc\x81\xa3\x89\x9f\x85\x8a\xa1\x20\xdc\xde\xdc\xd3\xd2" -"\x7a\xd9\xaf\xd9\xaf\xd9\xaf\x20\x8b\x77\x96\xd3\x20\xdc\xd1\x95" -"\x81\x20\xdc\xc0\x2e\x20\x2e\x0a\xed\x3c\xb5\x77\xdc\xd1\x93\x77" -"\xd2\x73\x20\x2e\x20\x2e\x20\x2e\x20\x2e\x20\xac\xe1\xb6\x89\x9e" -"\xa1\x20\x95\x65\xd0\x62\xf0\xe0\x20\xe0\x3b\xd2\x7a\x20\x21\x20" -"\x21\x20\x21\x87\x41\x2e\x87\x41\x0a\xd3\x61\xd3\x61\xd3\x61\x20" -"\x88\x41\x88\x41\x88\x41\xd9\x69\x87\x41\x5f\x87\x41\x20\xb4\xe1" -"\x9f\x9a\x20\xc8\xa1\xc5\xc1\x8b\x7a\x20\x95\x61\xb7\x77\x20\xc3" -"\x97\xe2\x9c\x97\x69\xf0\xe0\x20\xdc\xc0\x97\x61\x8b\x7a\x0a\xac" 
-"\xe9\x9f\x7a\x20\xe0\x3b\xd2\x7a\x20\x2e\x20\x2e\x20\x2e\x20\x2e" -"\x20\x8a\x89\xb4\x81\xae\xba\x20\xdc\xd1\x8a\xa1\x20\xdc\xde\x9f" -"\x89\xdc\xc2\x8b\x7a\x20\xf1\x67\xf1\x62\xf5\x49\xed\xfc\xf3\xe9" -"\x8c\x61\xbb\x9a\x0a\xb5\xc1\xb2\xa1\xd2\x7a\x20\x21\x20\x21\x20" -"\xed\x3c\xb5\x77\xdc\xd1\x20\xe0\x3b\x93\x77\x8a\xa1\x20\xd9\x69" -"\xea\xbe\x89\xc5\x20\xb4\xf4\x93\x77\x8a\xa1\x93\x77\x20\xed\x3c" -"\x93\x77\x96\xc1\xd2\x7a\x20\x8b\x69\xb4\x81\x97\x7a\x0a\xdc\xde" -"\x9d\x61\x97\x41\xe2\x9c\x20\xaf\x81\xce\xa1\xae\xa1\xd2\x7a\x20" -"\xb4\xe1\x9f\x9a\x20\xf1\x67\xf1\x62\xf5\x49\xed\xfc\xf3\xe9\xaf" -"\x82\xdc\xef\x97\x69\xb4\x7a\x21\x21\x20\xdc\xc0\xdc\xc0\x90\x73" -"\xd9\xbd\x20\xd9\x62\xd9\x62\x2a\x0a\x0a", -"\xeb\x98\xa0\xeb\xb0\xa9\xea\xb0\x81\xed\x95\x98\x20\xed\x8e\xb2" -"\xec\x8b\x9c\xec\xbd\x9c\xeb\x9d\xbc\x0a\x0a\xe3\x89\xaf\xe3\x89" -"\xaf\xeb\x82\xa9\x21\x21\x20\xe5\x9b\xa0\xe4\xb9\x9d\xe6\x9c\x88" -"\xed\x8c\xa8\xeb\xaf\xa4\xeb\xa6\x94\xea\xb6\x88\x20\xe2\x93\xa1" -"\xe2\x93\x96\xed\x9b\x80\xc2\xbf\xc2\xbf\xc2\xbf\x20\xea\xb8\x8d" -"\xeb\x92\x99\x20\xe2\x93\x94\xeb\x8e\xa8\x20\xe3\x89\xaf\x2e\x20" -"\x2e\x0a\xe4\xba\x9e\xec\x98\x81\xe2\x93\x94\xeb\x8a\xa5\xed\x9a" -"\xb9\x20\x2e\x20\x2e\x20\x2e\x20\x2e\x20\xec\x84\x9c\xec\x9a\xb8" -"\xeb\xa4\x84\x20\xeb\x8e\x90\xed\x95\x99\xe4\xb9\x99\x20\xe5\xae" -"\xb6\xed\x9b\x80\x20\x21\x20\x21\x20\x21\xe3\x85\xa0\x2e\xe3\x85" -"\xa0\x0a\xed\x9d\x90\xed\x9d\x90\xed\x9d\x90\x20\xe3\x84\xb1\xe3" -"\x84\xb1\xe3\x84\xb1\xe2\x98\x86\xe3\x85\xa0\x5f\xe3\x85\xa0\x20" -"\xec\x96\xb4\xeb\xa6\xa8\x20\xed\x83\xb8\xec\xbd\xb0\xea\xb8\x90" -"\x20\xeb\x8e\x8c\xec\x9d\x91\x20\xec\xb9\x91\xe4\xb9\x9d\xeb\x93" -"\xa4\xe4\xb9\x99\x20\xe3\x89\xaf\xeb\x93\x9c\xea\xb8\x90\x0a\xec" -"\x84\xa4\xeb\xa6\x8c\x20\xe5\xae\xb6\xed\x9b\x80\x20\x2e\x20\x2e" -"\x20\x2e\x20\x2e\x20\xea\xb5\xb4\xec\x95\xa0\xec\x89\x8c\x20\xe2" -"\x93\x94\xea\xb6\x88\x20\xe2\x93\xa1\xeb\xa6\x98\xe3\x89\xb1\xea" -"\xb8\x90\x20\xe5\x9b\xa0\xe4\xbb\x81\xe5\xb7\x9d\xef\xa6\x81\xe4" -"\xb8\xad\xea\xb9\x8c\xec\xa6\xbc\x0a\xec\x99\x80\xec\x92\x80\xed" -"\x9b\x80\x20\x21\x20\x21\x20\xe4\xba\x9e\xec\x98\x81\xe2\x93\x94" -"\x20\xe5\xae\xb6\xeb\x8a\xa5\xea\xb6\x88\x20\xe2\x98\x86\xe4\xb8" -"\x8a\xea\xb4\x80\x20\xec\x97\x86\xeb\x8a\xa5\xea\xb6\x88\xeb\x8a" -"\xa5\x20\xe4\xba\x9e\xeb\x8a\xa5\xeb\x92\x88\xed\x9b\x80\x20\xea" -"\xb8\x80\xec\x95\xa0\xeb\x93\xb4\x0a\xe2\x93\xa1\xeb\xa0\xa4\xeb" -"\x93\x80\xe4\xb9\x9d\x20\xec\x8b\x80\xed\x92\x94\xec\x88\xb4\xed" -"\x9b\x80\x20\xec\x96\xb4\xeb\xa6\xa8\x20\xe5\x9b\xa0\xe4\xbb\x81" -"\xe5\xb7\x9d\xef\xa6\x81\xe4\xb8\xad\xec\x8b\x81\xe2\x91\xa8\xeb" -"\x93\xa4\xec\x95\x9c\x21\x21\x20\xe3\x89\xaf\xe3\x89\xaf\xeb\x82" -"\xa9\xe2\x99\xa1\x20\xe2\x8c\x92\xe2\x8c\x92\x2a\x0a\x0a"), -'shift_jis': ( -"\x50\x79\x74\x68\x6f\x6e\x20\x82\xcc\x8a\x4a\x94\xad\x82\xcd\x81" -"\x41\x31\x39\x39\x30\x20\x94\x4e\x82\xb2\x82\xeb\x82\xa9\x82\xe7" -"\x8a\x4a\x8e\x6e\x82\xb3\x82\xea\x82\xc4\x82\xa2\x82\xdc\x82\xb7" -"\x81\x42\x0a\x8a\x4a\x94\xad\x8e\xd2\x82\xcc\x20\x47\x75\x69\x64" -"\x6f\x20\x76\x61\x6e\x20\x52\x6f\x73\x73\x75\x6d\x20\x82\xcd\x8b" -"\xb3\x88\xe7\x97\x70\x82\xcc\x83\x76\x83\x8d\x83\x4f\x83\x89\x83" -"\x7e\x83\x93\x83\x4f\x8c\xbe\x8c\xea\x81\x75\x41\x42\x43\x81\x76" -"\x82\xcc\x8a\x4a\x94\xad\x82\xc9\x8e\x51\x89\xc1\x82\xb5\x82\xc4" -"\x82\xa2\x82\xdc\x82\xb5\x82\xbd\x82\xaa\x81\x41\x41\x42\x43\x20" -"\x82\xcd\x8e\xc0\x97\x70\x8f\xe3\x82\xcc\x96\xda\x93\x49\x82\xc9" -"\x82\xcd\x82\xa0\x82\xdc\x82\xe8\x93\x4b\x82\xb5\x82\xc4\x82\xa2" 
-"\x82\xdc\x82\xb9\x82\xf1\x82\xc5\x82\xb5\x82\xbd\x81\x42\x0a\x82" -"\xb1\x82\xcc\x82\xbd\x82\xdf\x81\x41\x47\x75\x69\x64\x6f\x20\x82" -"\xcd\x82\xe6\x82\xe8\x8e\xc0\x97\x70\x93\x49\x82\xc8\x83\x76\x83" -"\x8d\x83\x4f\x83\x89\x83\x7e\x83\x93\x83\x4f\x8c\xbe\x8c\xea\x82" -"\xcc\x8a\x4a\x94\xad\x82\xf0\x8a\x4a\x8e\x6e\x82\xb5\x81\x41\x89" -"\x70\x8d\x91\x20\x42\x42\x53\x20\x95\xfa\x91\x97\x82\xcc\x83\x52" -"\x83\x81\x83\x66\x83\x42\x94\xd4\x91\x67\x81\x75\x83\x82\x83\x93" -"\x83\x65\x83\x42\x20\x83\x70\x83\x43\x83\x5c\x83\x93\x81\x76\x82" -"\xcc\x83\x74\x83\x40\x83\x93\x82\xc5\x82\xa0\x82\xe9\x20\x47\x75" -"\x69\x64\x6f\x20\x82\xcd\x82\xb1\x82\xcc\x8c\xbe\x8c\xea\x82\xf0" -"\x81\x75\x50\x79\x74\x68\x6f\x6e\x81\x76\x82\xc6\x96\xbc\x82\xc3" -"\x82\xaf\x82\xdc\x82\xb5\x82\xbd\x81\x42\x0a\x82\xb1\x82\xcc\x82" -"\xe6\x82\xa4\x82\xc8\x94\x77\x8c\x69\x82\xa9\x82\xe7\x90\xb6\x82" -"\xdc\x82\xea\x82\xbd\x20\x50\x79\x74\x68\x6f\x6e\x20\x82\xcc\x8c" -"\xbe\x8c\xea\x90\xdd\x8c\x76\x82\xcd\x81\x41\x81\x75\x83\x56\x83" -"\x93\x83\x76\x83\x8b\x81\x76\x82\xc5\x81\x75\x8f\x4b\x93\xbe\x82" -"\xaa\x97\x65\x88\xd5\x81\x76\x82\xc6\x82\xa2\x82\xa4\x96\xda\x95" -"\x57\x82\xc9\x8f\x64\x93\x5f\x82\xaa\x92\x75\x82\xa9\x82\xea\x82" -"\xc4\x82\xa2\x82\xdc\x82\xb7\x81\x42\x0a\x91\xbd\x82\xad\x82\xcc" -"\x83\x58\x83\x4e\x83\x8a\x83\x76\x83\x67\x8c\x6e\x8c\xbe\x8c\xea" -"\x82\xc5\x82\xcd\x83\x86\x81\x5b\x83\x55\x82\xcc\x96\xda\x90\xe6" -"\x82\xcc\x97\x98\x95\xd6\x90\xab\x82\xf0\x97\x44\x90\xe6\x82\xb5" -"\x82\xc4\x90\x46\x81\x58\x82\xc8\x8b\x40\x94\x5c\x82\xf0\x8c\xbe" -"\x8c\xea\x97\x76\x91\x66\x82\xc6\x82\xb5\x82\xc4\x8e\xe6\x82\xe8" -"\x93\xfc\x82\xea\x82\xe9\x8f\xea\x8d\x87\x82\xaa\x91\xbd\x82\xa2" -"\x82\xcc\x82\xc5\x82\xb7\x82\xaa\x81\x41\x50\x79\x74\x68\x6f\x6e" -"\x20\x82\xc5\x82\xcd\x82\xbb\x82\xa4\x82\xa2\x82\xc1\x82\xbd\x8f" -"\xac\x8d\xd7\x8d\x48\x82\xaa\x92\xc7\x89\xc1\x82\xb3\x82\xea\x82" -"\xe9\x82\xb1\x82\xc6\x82\xcd\x82\xa0\x82\xdc\x82\xe8\x82\xa0\x82" -"\xe8\x82\xdc\x82\xb9\x82\xf1\x81\x42\x0a\x8c\xbe\x8c\xea\x8e\xa9" -"\x91\xcc\x82\xcc\x8b\x40\x94\x5c\x82\xcd\x8d\xc5\x8f\xac\x8c\xc0" -"\x82\xc9\x89\x9f\x82\xb3\x82\xa6\x81\x41\x95\x4b\x97\x76\x82\xc8" -"\x8b\x40\x94\x5c\x82\xcd\x8a\x67\x92\xa3\x83\x82\x83\x57\x83\x85" -"\x81\x5b\x83\x8b\x82\xc6\x82\xb5\x82\xc4\x92\xc7\x89\xc1\x82\xb7" -"\x82\xe9\x81\x41\x82\xc6\x82\xa2\x82\xa4\x82\xcc\x82\xaa\x20\x50" -"\x79\x74\x68\x6f\x6e\x20\x82\xcc\x83\x7c\x83\x8a\x83\x56\x81\x5b" -"\x82\xc5\x82\xb7\x81\x42\x0a\x0a", -"\x50\x79\x74\x68\x6f\x6e\x20\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba" -"\xe3\x81\xaf\xe3\x80\x81\x31\x39\x39\x30\x20\xe5\xb9\xb4\xe3\x81" -"\x94\xe3\x82\x8d\xe3\x81\x8b\xe3\x82\x89\xe9\x96\x8b\xe5\xa7\x8b" -"\xe3\x81\x95\xe3\x82\x8c\xe3\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3" -"\x81\x99\xe3\x80\x82\x0a\xe9\x96\x8b\xe7\x99\xba\xe8\x80\x85\xe3" -"\x81\xae\x20\x47\x75\x69\x64\x6f\x20\x76\x61\x6e\x20\x52\x6f\x73" -"\x73\x75\x6d\x20\xe3\x81\xaf\xe6\x95\x99\xe8\x82\xb2\xe7\x94\xa8" -"\xe3\x81\xae\xe3\x83\x97\xe3\x83\xad\xe3\x82\xb0\xe3\x83\xa9\xe3" -"\x83\x9f\xe3\x83\xb3\xe3\x82\xb0\xe8\xa8\x80\xe8\xaa\x9e\xe3\x80" -"\x8c\x41\x42\x43\xe3\x80\x8d\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba" -"\xe3\x81\xab\xe5\x8f\x82\xe5\x8a\xa0\xe3\x81\x97\xe3\x81\xa6\xe3" -"\x81\x84\xe3\x81\xbe\xe3\x81\x97\xe3\x81\x9f\xe3\x81\x8c\xe3\x80" -"\x81\x41\x42\x43\x20\xe3\x81\xaf\xe5\xae\x9f\xe7\x94\xa8\xe4\xb8" -"\x8a\xe3\x81\xae\xe7\x9b\xae\xe7\x9a\x84\xe3\x81\xab\xe3\x81\xaf" -"\xe3\x81\x82\xe3\x81\xbe\xe3\x82\x8a\xe9\x81\xa9\xe3\x81\x97\xe3" 
-"\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3\x81\x9b\xe3\x82\x93\xe3\x81" -"\xa7\xe3\x81\x97\xe3\x81\x9f\xe3\x80\x82\x0a\xe3\x81\x93\xe3\x81" -"\xae\xe3\x81\x9f\xe3\x82\x81\xe3\x80\x81\x47\x75\x69\x64\x6f\x20" -"\xe3\x81\xaf\xe3\x82\x88\xe3\x82\x8a\xe5\xae\x9f\xe7\x94\xa8\xe7" -"\x9a\x84\xe3\x81\xaa\xe3\x83\x97\xe3\x83\xad\xe3\x82\xb0\xe3\x83" -"\xa9\xe3\x83\x9f\xe3\x83\xb3\xe3\x82\xb0\xe8\xa8\x80\xe8\xaa\x9e" -"\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba\xe3\x82\x92\xe9\x96\x8b\xe5" -"\xa7\x8b\xe3\x81\x97\xe3\x80\x81\xe8\x8b\xb1\xe5\x9b\xbd\x20\x42" -"\x42\x53\x20\xe6\x94\xbe\xe9\x80\x81\xe3\x81\xae\xe3\x82\xb3\xe3" -"\x83\xa1\xe3\x83\x87\xe3\x82\xa3\xe7\x95\xaa\xe7\xb5\x84\xe3\x80" -"\x8c\xe3\x83\xa2\xe3\x83\xb3\xe3\x83\x86\xe3\x82\xa3\x20\xe3\x83" -"\x91\xe3\x82\xa4\xe3\x82\xbd\xe3\x83\xb3\xe3\x80\x8d\xe3\x81\xae" -"\xe3\x83\x95\xe3\x82\xa1\xe3\x83\xb3\xe3\x81\xa7\xe3\x81\x82\xe3" -"\x82\x8b\x20\x47\x75\x69\x64\x6f\x20\xe3\x81\xaf\xe3\x81\x93\xe3" -"\x81\xae\xe8\xa8\x80\xe8\xaa\x9e\xe3\x82\x92\xe3\x80\x8c\x50\x79" -"\x74\x68\x6f\x6e\xe3\x80\x8d\xe3\x81\xa8\xe5\x90\x8d\xe3\x81\xa5" -"\xe3\x81\x91\xe3\x81\xbe\xe3\x81\x97\xe3\x81\x9f\xe3\x80\x82\x0a" -"\xe3\x81\x93\xe3\x81\xae\xe3\x82\x88\xe3\x81\x86\xe3\x81\xaa\xe8" -"\x83\x8c\xe6\x99\xaf\xe3\x81\x8b\xe3\x82\x89\xe7\x94\x9f\xe3\x81" -"\xbe\xe3\x82\x8c\xe3\x81\x9f\x20\x50\x79\x74\x68\x6f\x6e\x20\xe3" -"\x81\xae\xe8\xa8\x80\xe8\xaa\x9e\xe8\xa8\xad\xe8\xa8\x88\xe3\x81" -"\xaf\xe3\x80\x81\xe3\x80\x8c\xe3\x82\xb7\xe3\x83\xb3\xe3\x83\x97" -"\xe3\x83\xab\xe3\x80\x8d\xe3\x81\xa7\xe3\x80\x8c\xe7\xbf\x92\xe5" -"\xbe\x97\xe3\x81\x8c\xe5\xae\xb9\xe6\x98\x93\xe3\x80\x8d\xe3\x81" -"\xa8\xe3\x81\x84\xe3\x81\x86\xe7\x9b\xae\xe6\xa8\x99\xe3\x81\xab" -"\xe9\x87\x8d\xe7\x82\xb9\xe3\x81\x8c\xe7\xbd\xae\xe3\x81\x8b\xe3" -"\x82\x8c\xe3\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3\x81\x99\xe3\x80" -"\x82\x0a\xe5\xa4\x9a\xe3\x81\x8f\xe3\x81\xae\xe3\x82\xb9\xe3\x82" -"\xaf\xe3\x83\xaa\xe3\x83\x97\xe3\x83\x88\xe7\xb3\xbb\xe8\xa8\x80" -"\xe8\xaa\x9e\xe3\x81\xa7\xe3\x81\xaf\xe3\x83\xa6\xe3\x83\xbc\xe3" -"\x82\xb6\xe3\x81\xae\xe7\x9b\xae\xe5\x85\x88\xe3\x81\xae\xe5\x88" -"\xa9\xe4\xbe\xbf\xe6\x80\xa7\xe3\x82\x92\xe5\x84\xaa\xe5\x85\x88" -"\xe3\x81\x97\xe3\x81\xa6\xe8\x89\xb2\xe3\x80\x85\xe3\x81\xaa\xe6" -"\xa9\x9f\xe8\x83\xbd\xe3\x82\x92\xe8\xa8\x80\xe8\xaa\x9e\xe8\xa6" -"\x81\xe7\xb4\xa0\xe3\x81\xa8\xe3\x81\x97\xe3\x81\xa6\xe5\x8f\x96" -"\xe3\x82\x8a\xe5\x85\xa5\xe3\x82\x8c\xe3\x82\x8b\xe5\xa0\xb4\xe5" -"\x90\x88\xe3\x81\x8c\xe5\xa4\x9a\xe3\x81\x84\xe3\x81\xae\xe3\x81" -"\xa7\xe3\x81\x99\xe3\x81\x8c\xe3\x80\x81\x50\x79\x74\x68\x6f\x6e" -"\x20\xe3\x81\xa7\xe3\x81\xaf\xe3\x81\x9d\xe3\x81\x86\xe3\x81\x84" -"\xe3\x81\xa3\xe3\x81\x9f\xe5\xb0\x8f\xe7\xb4\xb0\xe5\xb7\xa5\xe3" -"\x81\x8c\xe8\xbf\xbd\xe5\x8a\xa0\xe3\x81\x95\xe3\x82\x8c\xe3\x82" -"\x8b\xe3\x81\x93\xe3\x81\xa8\xe3\x81\xaf\xe3\x81\x82\xe3\x81\xbe" -"\xe3\x82\x8a\xe3\x81\x82\xe3\x82\x8a\xe3\x81\xbe\xe3\x81\x9b\xe3" -"\x82\x93\xe3\x80\x82\x0a\xe8\xa8\x80\xe8\xaa\x9e\xe8\x87\xaa\xe4" -"\xbd\x93\xe3\x81\xae\xe6\xa9\x9f\xe8\x83\xbd\xe3\x81\xaf\xe6\x9c" -"\x80\xe5\xb0\x8f\xe9\x99\x90\xe3\x81\xab\xe6\x8a\xbc\xe3\x81\x95" -"\xe3\x81\x88\xe3\x80\x81\xe5\xbf\x85\xe8\xa6\x81\xe3\x81\xaa\xe6" -"\xa9\x9f\xe8\x83\xbd\xe3\x81\xaf\xe6\x8b\xa1\xe5\xbc\xb5\xe3\x83" -"\xa2\xe3\x82\xb8\xe3\x83\xa5\xe3\x83\xbc\xe3\x83\xab\xe3\x81\xa8" -"\xe3\x81\x97\xe3\x81\xa6\xe8\xbf\xbd\xe5\x8a\xa0\xe3\x81\x99\xe3" -"\x82\x8b\xe3\x80\x81\xe3\x81\xa8\xe3\x81\x84\xe3\x81\x86\xe3\x81" -"\xae\xe3\x81\x8c\x20\x50\x79\x74\x68\x6f\x6e\x20\xe3\x81\xae\xe3" 
-"\x83\x9d\xe3\x83\xaa\xe3\x82\xb7\xe3\x83\xbc\xe3\x81\xa7\xe3\x81" -"\x99\xe3\x80\x82\x0a\x0a"), -'shift_jisx0213': ( -"\x50\x79\x74\x68\x6f\x6e\x20\x82\xcc\x8a\x4a\x94\xad\x82\xcd\x81" -"\x41\x31\x39\x39\x30\x20\x94\x4e\x82\xb2\x82\xeb\x82\xa9\x82\xe7" -"\x8a\x4a\x8e\x6e\x82\xb3\x82\xea\x82\xc4\x82\xa2\x82\xdc\x82\xb7" -"\x81\x42\x0a\x8a\x4a\x94\xad\x8e\xd2\x82\xcc\x20\x47\x75\x69\x64" -"\x6f\x20\x76\x61\x6e\x20\x52\x6f\x73\x73\x75\x6d\x20\x82\xcd\x8b" -"\xb3\x88\xe7\x97\x70\x82\xcc\x83\x76\x83\x8d\x83\x4f\x83\x89\x83" -"\x7e\x83\x93\x83\x4f\x8c\xbe\x8c\xea\x81\x75\x41\x42\x43\x81\x76" -"\x82\xcc\x8a\x4a\x94\xad\x82\xc9\x8e\x51\x89\xc1\x82\xb5\x82\xc4" -"\x82\xa2\x82\xdc\x82\xb5\x82\xbd\x82\xaa\x81\x41\x41\x42\x43\x20" -"\x82\xcd\x8e\xc0\x97\x70\x8f\xe3\x82\xcc\x96\xda\x93\x49\x82\xc9" -"\x82\xcd\x82\xa0\x82\xdc\x82\xe8\x93\x4b\x82\xb5\x82\xc4\x82\xa2" -"\x82\xdc\x82\xb9\x82\xf1\x82\xc5\x82\xb5\x82\xbd\x81\x42\x0a\x82" -"\xb1\x82\xcc\x82\xbd\x82\xdf\x81\x41\x47\x75\x69\x64\x6f\x20\x82" -"\xcd\x82\xe6\x82\xe8\x8e\xc0\x97\x70\x93\x49\x82\xc8\x83\x76\x83" -"\x8d\x83\x4f\x83\x89\x83\x7e\x83\x93\x83\x4f\x8c\xbe\x8c\xea\x82" -"\xcc\x8a\x4a\x94\xad\x82\xf0\x8a\x4a\x8e\x6e\x82\xb5\x81\x41\x89" -"\x70\x8d\x91\x20\x42\x42\x53\x20\x95\xfa\x91\x97\x82\xcc\x83\x52" -"\x83\x81\x83\x66\x83\x42\x94\xd4\x91\x67\x81\x75\x83\x82\x83\x93" -"\x83\x65\x83\x42\x20\x83\x70\x83\x43\x83\x5c\x83\x93\x81\x76\x82" -"\xcc\x83\x74\x83\x40\x83\x93\x82\xc5\x82\xa0\x82\xe9\x20\x47\x75" -"\x69\x64\x6f\x20\x82\xcd\x82\xb1\x82\xcc\x8c\xbe\x8c\xea\x82\xf0" -"\x81\x75\x50\x79\x74\x68\x6f\x6e\x81\x76\x82\xc6\x96\xbc\x82\xc3" -"\x82\xaf\x82\xdc\x82\xb5\x82\xbd\x81\x42\x0a\x82\xb1\x82\xcc\x82" -"\xe6\x82\xa4\x82\xc8\x94\x77\x8c\x69\x82\xa9\x82\xe7\x90\xb6\x82" -"\xdc\x82\xea\x82\xbd\x20\x50\x79\x74\x68\x6f\x6e\x20\x82\xcc\x8c" -"\xbe\x8c\xea\x90\xdd\x8c\x76\x82\xcd\x81\x41\x81\x75\x83\x56\x83" -"\x93\x83\x76\x83\x8b\x81\x76\x82\xc5\x81\x75\x8f\x4b\x93\xbe\x82" -"\xaa\x97\x65\x88\xd5\x81\x76\x82\xc6\x82\xa2\x82\xa4\x96\xda\x95" -"\x57\x82\xc9\x8f\x64\x93\x5f\x82\xaa\x92\x75\x82\xa9\x82\xea\x82" -"\xc4\x82\xa2\x82\xdc\x82\xb7\x81\x42\x0a\x91\xbd\x82\xad\x82\xcc" -"\x83\x58\x83\x4e\x83\x8a\x83\x76\x83\x67\x8c\x6e\x8c\xbe\x8c\xea" -"\x82\xc5\x82\xcd\x83\x86\x81\x5b\x83\x55\x82\xcc\x96\xda\x90\xe6" -"\x82\xcc\x97\x98\x95\xd6\x90\xab\x82\xf0\x97\x44\x90\xe6\x82\xb5" -"\x82\xc4\x90\x46\x81\x58\x82\xc8\x8b\x40\x94\x5c\x82\xf0\x8c\xbe" -"\x8c\xea\x97\x76\x91\x66\x82\xc6\x82\xb5\x82\xc4\x8e\xe6\x82\xe8" -"\x93\xfc\x82\xea\x82\xe9\x8f\xea\x8d\x87\x82\xaa\x91\xbd\x82\xa2" -"\x82\xcc\x82\xc5\x82\xb7\x82\xaa\x81\x41\x50\x79\x74\x68\x6f\x6e" -"\x20\x82\xc5\x82\xcd\x82\xbb\x82\xa4\x82\xa2\x82\xc1\x82\xbd\x8f" -"\xac\x8d\xd7\x8d\x48\x82\xaa\x92\xc7\x89\xc1\x82\xb3\x82\xea\x82" -"\xe9\x82\xb1\x82\xc6\x82\xcd\x82\xa0\x82\xdc\x82\xe8\x82\xa0\x82" -"\xe8\x82\xdc\x82\xb9\x82\xf1\x81\x42\x0a\x8c\xbe\x8c\xea\x8e\xa9" -"\x91\xcc\x82\xcc\x8b\x40\x94\x5c\x82\xcd\x8d\xc5\x8f\xac\x8c\xc0" -"\x82\xc9\x89\x9f\x82\xb3\x82\xa6\x81\x41\x95\x4b\x97\x76\x82\xc8" -"\x8b\x40\x94\x5c\x82\xcd\x8a\x67\x92\xa3\x83\x82\x83\x57\x83\x85" -"\x81\x5b\x83\x8b\x82\xc6\x82\xb5\x82\xc4\x92\xc7\x89\xc1\x82\xb7" -"\x82\xe9\x81\x41\x82\xc6\x82\xa2\x82\xa4\x82\xcc\x82\xaa\x20\x50" -"\x79\x74\x68\x6f\x6e\x20\x82\xcc\x83\x7c\x83\x8a\x83\x56\x81\x5b" -"\x82\xc5\x82\xb7\x81\x42\x0a\x0a\x83\x6d\x82\xf5\x20\x83\x9e\x20" -"\x83\x67\x83\x4c\x88\x4b\x88\x79\x20\x98\x83\xfc\xd6\x20\xfc\xd2" -"\xfc\xe6\xfb\xd4\x0a", -"\x50\x79\x74\x68\x6f\x6e\x20\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba" 
-"\xe3\x81\xaf\xe3\x80\x81\x31\x39\x39\x30\x20\xe5\xb9\xb4\xe3\x81" -"\x94\xe3\x82\x8d\xe3\x81\x8b\xe3\x82\x89\xe9\x96\x8b\xe5\xa7\x8b" -"\xe3\x81\x95\xe3\x82\x8c\xe3\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3" -"\x81\x99\xe3\x80\x82\x0a\xe9\x96\x8b\xe7\x99\xba\xe8\x80\x85\xe3" -"\x81\xae\x20\x47\x75\x69\x64\x6f\x20\x76\x61\x6e\x20\x52\x6f\x73" -"\x73\x75\x6d\x20\xe3\x81\xaf\xe6\x95\x99\xe8\x82\xb2\xe7\x94\xa8" -"\xe3\x81\xae\xe3\x83\x97\xe3\x83\xad\xe3\x82\xb0\xe3\x83\xa9\xe3" -"\x83\x9f\xe3\x83\xb3\xe3\x82\xb0\xe8\xa8\x80\xe8\xaa\x9e\xe3\x80" -"\x8c\x41\x42\x43\xe3\x80\x8d\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba" -"\xe3\x81\xab\xe5\x8f\x82\xe5\x8a\xa0\xe3\x81\x97\xe3\x81\xa6\xe3" -"\x81\x84\xe3\x81\xbe\xe3\x81\x97\xe3\x81\x9f\xe3\x81\x8c\xe3\x80" -"\x81\x41\x42\x43\x20\xe3\x81\xaf\xe5\xae\x9f\xe7\x94\xa8\xe4\xb8" -"\x8a\xe3\x81\xae\xe7\x9b\xae\xe7\x9a\x84\xe3\x81\xab\xe3\x81\xaf" -"\xe3\x81\x82\xe3\x81\xbe\xe3\x82\x8a\xe9\x81\xa9\xe3\x81\x97\xe3" -"\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3\x81\x9b\xe3\x82\x93\xe3\x81" -"\xa7\xe3\x81\x97\xe3\x81\x9f\xe3\x80\x82\x0a\xe3\x81\x93\xe3\x81" -"\xae\xe3\x81\x9f\xe3\x82\x81\xe3\x80\x81\x47\x75\x69\x64\x6f\x20" -"\xe3\x81\xaf\xe3\x82\x88\xe3\x82\x8a\xe5\xae\x9f\xe7\x94\xa8\xe7" -"\x9a\x84\xe3\x81\xaa\xe3\x83\x97\xe3\x83\xad\xe3\x82\xb0\xe3\x83" -"\xa9\xe3\x83\x9f\xe3\x83\xb3\xe3\x82\xb0\xe8\xa8\x80\xe8\xaa\x9e" -"\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba\xe3\x82\x92\xe9\x96\x8b\xe5" -"\xa7\x8b\xe3\x81\x97\xe3\x80\x81\xe8\x8b\xb1\xe5\x9b\xbd\x20\x42" -"\x42\x53\x20\xe6\x94\xbe\xe9\x80\x81\xe3\x81\xae\xe3\x82\xb3\xe3" -"\x83\xa1\xe3\x83\x87\xe3\x82\xa3\xe7\x95\xaa\xe7\xb5\x84\xe3\x80" -"\x8c\xe3\x83\xa2\xe3\x83\xb3\xe3\x83\x86\xe3\x82\xa3\x20\xe3\x83" -"\x91\xe3\x82\xa4\xe3\x82\xbd\xe3\x83\xb3\xe3\x80\x8d\xe3\x81\xae" -"\xe3\x83\x95\xe3\x82\xa1\xe3\x83\xb3\xe3\x81\xa7\xe3\x81\x82\xe3" -"\x82\x8b\x20\x47\x75\x69\x64\x6f\x20\xe3\x81\xaf\xe3\x81\x93\xe3" -"\x81\xae\xe8\xa8\x80\xe8\xaa\x9e\xe3\x82\x92\xe3\x80\x8c\x50\x79" -"\x74\x68\x6f\x6e\xe3\x80\x8d\xe3\x81\xa8\xe5\x90\x8d\xe3\x81\xa5" -"\xe3\x81\x91\xe3\x81\xbe\xe3\x81\x97\xe3\x81\x9f\xe3\x80\x82\x0a" -"\xe3\x81\x93\xe3\x81\xae\xe3\x82\x88\xe3\x81\x86\xe3\x81\xaa\xe8" -"\x83\x8c\xe6\x99\xaf\xe3\x81\x8b\xe3\x82\x89\xe7\x94\x9f\xe3\x81" -"\xbe\xe3\x82\x8c\xe3\x81\x9f\x20\x50\x79\x74\x68\x6f\x6e\x20\xe3" -"\x81\xae\xe8\xa8\x80\xe8\xaa\x9e\xe8\xa8\xad\xe8\xa8\x88\xe3\x81" -"\xaf\xe3\x80\x81\xe3\x80\x8c\xe3\x82\xb7\xe3\x83\xb3\xe3\x83\x97" -"\xe3\x83\xab\xe3\x80\x8d\xe3\x81\xa7\xe3\x80\x8c\xe7\xbf\x92\xe5" -"\xbe\x97\xe3\x81\x8c\xe5\xae\xb9\xe6\x98\x93\xe3\x80\x8d\xe3\x81" -"\xa8\xe3\x81\x84\xe3\x81\x86\xe7\x9b\xae\xe6\xa8\x99\xe3\x81\xab" -"\xe9\x87\x8d\xe7\x82\xb9\xe3\x81\x8c\xe7\xbd\xae\xe3\x81\x8b\xe3" -"\x82\x8c\xe3\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3\x81\x99\xe3\x80" -"\x82\x0a\xe5\xa4\x9a\xe3\x81\x8f\xe3\x81\xae\xe3\x82\xb9\xe3\x82" -"\xaf\xe3\x83\xaa\xe3\x83\x97\xe3\x83\x88\xe7\xb3\xbb\xe8\xa8\x80" -"\xe8\xaa\x9e\xe3\x81\xa7\xe3\x81\xaf\xe3\x83\xa6\xe3\x83\xbc\xe3" -"\x82\xb6\xe3\x81\xae\xe7\x9b\xae\xe5\x85\x88\xe3\x81\xae\xe5\x88" -"\xa9\xe4\xbe\xbf\xe6\x80\xa7\xe3\x82\x92\xe5\x84\xaa\xe5\x85\x88" -"\xe3\x81\x97\xe3\x81\xa6\xe8\x89\xb2\xe3\x80\x85\xe3\x81\xaa\xe6" -"\xa9\x9f\xe8\x83\xbd\xe3\x82\x92\xe8\xa8\x80\xe8\xaa\x9e\xe8\xa6" -"\x81\xe7\xb4\xa0\xe3\x81\xa8\xe3\x81\x97\xe3\x81\xa6\xe5\x8f\x96" -"\xe3\x82\x8a\xe5\x85\xa5\xe3\x82\x8c\xe3\x82\x8b\xe5\xa0\xb4\xe5" -"\x90\x88\xe3\x81\x8c\xe5\xa4\x9a\xe3\x81\x84\xe3\x81\xae\xe3\x81" -"\xa7\xe3\x81\x99\xe3\x81\x8c\xe3\x80\x81\x50\x79\x74\x68\x6f\x6e" 
-"\x20\xe3\x81\xa7\xe3\x81\xaf\xe3\x81\x9d\xe3\x81\x86\xe3\x81\x84" -"\xe3\x81\xa3\xe3\x81\x9f\xe5\xb0\x8f\xe7\xb4\xb0\xe5\xb7\xa5\xe3" -"\x81\x8c\xe8\xbf\xbd\xe5\x8a\xa0\xe3\x81\x95\xe3\x82\x8c\xe3\x82" -"\x8b\xe3\x81\x93\xe3\x81\xa8\xe3\x81\xaf\xe3\x81\x82\xe3\x81\xbe" -"\xe3\x82\x8a\xe3\x81\x82\xe3\x82\x8a\xe3\x81\xbe\xe3\x81\x9b\xe3" -"\x82\x93\xe3\x80\x82\x0a\xe8\xa8\x80\xe8\xaa\x9e\xe8\x87\xaa\xe4" -"\xbd\x93\xe3\x81\xae\xe6\xa9\x9f\xe8\x83\xbd\xe3\x81\xaf\xe6\x9c" -"\x80\xe5\xb0\x8f\xe9\x99\x90\xe3\x81\xab\xe6\x8a\xbc\xe3\x81\x95" -"\xe3\x81\x88\xe3\x80\x81\xe5\xbf\x85\xe8\xa6\x81\xe3\x81\xaa\xe6" -"\xa9\x9f\xe8\x83\xbd\xe3\x81\xaf\xe6\x8b\xa1\xe5\xbc\xb5\xe3\x83" -"\xa2\xe3\x82\xb8\xe3\x83\xa5\xe3\x83\xbc\xe3\x83\xab\xe3\x81\xa8" -"\xe3\x81\x97\xe3\x81\xa6\xe8\xbf\xbd\xe5\x8a\xa0\xe3\x81\x99\xe3" -"\x82\x8b\xe3\x80\x81\xe3\x81\xa8\xe3\x81\x84\xe3\x81\x86\xe3\x81" -"\xae\xe3\x81\x8c\x20\x50\x79\x74\x68\x6f\x6e\x20\xe3\x81\xae\xe3" -"\x83\x9d\xe3\x83\xaa\xe3\x82\xb7\xe3\x83\xbc\xe3\x81\xa7\xe3\x81" -"\x99\xe3\x80\x82\x0a\x0a\xe3\x83\x8e\xe3\x81\x8b\xe3\x82\x9a\x20" -"\xe3\x83\x88\xe3\x82\x9a\x20\xe3\x83\x88\xe3\x82\xad\xef\xa8\xb6" -"\xef\xa8\xb9\x20\xf0\xa1\x9a\xb4\xf0\xaa\x8e\x8c\x20\xe9\xba\x80" -"\xe9\xbd\x81\xf0\xa9\x9b\xb0\x0a"), -} diff --git a/lib-python/2.7/test/crashers/README b/lib-python/2.7/test/crashers/README --- a/lib-python/2.7/test/crashers/README +++ b/lib-python/2.7/test/crashers/README @@ -1,20 +1,16 @@ -This directory only contains tests for outstanding bugs that cause -the interpreter to segfault. Ideally this directory should always -be empty. Sometimes it may not be easy to fix the underlying cause. +This directory only contains tests for outstanding bugs that cause the +interpreter to segfault. Ideally this directory should always be empty, but +sometimes it may not be easy to fix the underlying cause and the bug is deemed +too obscure to invest the effort. Each test should fail when run from the command line: ./python Lib/test/crashers/weakref_in_del.py -Each test should have a link to the bug report: +Put as much info into a docstring or comments to help determine the cause of the +failure, as well as a bugs.python.org issue number if it exists. Particularly +note if the cause is system or environment dependent and what the variables are. - # http://python.org/sf/BUG# - -Put as much info into a docstring or comments to help determine -the cause of the failure. Particularly note if the cause is -system or environment dependent and what the variables are. - -Once the crash is fixed, the test case should be moved into an appropriate -test (even if it was originally from the test suite). This ensures the -regression doesn't happen again. And if it does, it should be easier -to track down. +Once the crash is fixed, the test case should be moved into an appropriate test +(even if it was originally from the test suite). This ensures the regression +doesn't happen again. And if it does, it should be easier to track down. diff --git a/lib-python/2.7/test/crashers/recursion_limit_too_high.py b/lib-python/2.7/test/crashers/recursion_limit_too_high.py --- a/lib-python/2.7/test/crashers/recursion_limit_too_high.py +++ b/lib-python/2.7/test/crashers/recursion_limit_too_high.py @@ -5,7 +5,7 @@ # file handles. # The point of this example is to show that sys.setrecursionlimit() is a -# hack, and not a robust solution. This example simply exercices a path +# hack, and not a robust solution. 
This example simply exercises a path # where it takes many C-level recursions, consuming a lot of stack # space, for each Python-level recursion. So 1000 times this amount of # stack space may be too much for standard platforms already. diff --git a/lib-python/2.7/test/decimaltestdata/and.decTest b/lib-python/2.7/test/decimaltestdata/and.decTest --- a/lib-python/2.7/test/decimaltestdata/and.decTest +++ b/lib-python/2.7/test/decimaltestdata/and.decTest @@ -1,338 +1,338 @@ ------------------------------------------------------------------------- --- and.decTest -- digitwise logical AND -- --- Copyright (c) IBM Corporation, 1981, 2008. All rights reserved. -- ------------------------------------------------------------------------- --- Please see the document "General Decimal Arithmetic Testcases" -- --- at http://www2.hursley.ibm.com/decimal for the description of -- --- these testcases. -- --- -- --- These testcases are experimental ('beta' versions), and they -- --- may contain errors. They are offered on an as-is basis. In -- --- particular, achieving the same results as the tests here is not -- --- a guarantee that an implementation complies with any Standard -- --- or specification. The tests are not exhaustive. -- --- -- --- Please send comments, suggestions, and corrections to the author: -- --- Mike Cowlishaw, IBM Fellow -- --- IBM UK, PO Box 31, Birmingham Road, Warwick CV34 5JL, UK -- --- mfc at uk.ibm.com -- ------------------------------------------------------------------------- -version: 2.59 - -extended: 1 -precision: 9 -rounding: half_up -maxExponent: 999 -minExponent: -999 - --- Sanity check (truth table) -andx001 and 0 0 -> 0 -andx002 and 0 1 -> 0 -andx003 and 1 0 -> 0 -andx004 and 1 1 -> 1 -andx005 and 1100 1010 -> 1000 -andx006 and 1111 10 -> 10 -andx007 and 1111 1010 -> 1010 - --- and at msd and msd-1 -andx010 and 000000000 000000000 -> 0 -andx011 and 000000000 100000000 -> 0 -andx012 and 100000000 000000000 -> 0 -andx013 and 100000000 100000000 -> 100000000 -andx014 and 000000000 000000000 -> 0 -andx015 and 000000000 010000000 -> 0 -andx016 and 010000000 000000000 -> 0 -andx017 and 010000000 010000000 -> 10000000 - --- Various lengths --- 123456789 123456789 123456789 -andx021 and 111111111 111111111 -> 111111111 -andx022 and 111111111111 111111111 -> 111111111 -andx023 and 111111111111 11111111 -> 11111111 -andx024 and 111111111 11111111 -> 11111111 -andx025 and 111111111 1111111 -> 1111111 -andx026 and 111111111111 111111 -> 111111 -andx027 and 111111111111 11111 -> 11111 -andx028 and 111111111111 1111 -> 1111 -andx029 and 111111111111 111 -> 111 -andx031 and 111111111111 11 -> 11 -andx032 and 111111111111 1 -> 1 -andx033 and 111111111111 1111111111 -> 111111111 -andx034 and 11111111111 11111111111 -> 111111111 -andx035 and 1111111111 111111111111 -> 111111111 -andx036 and 111111111 1111111111111 -> 111111111 - -andx040 and 111111111 111111111111 -> 111111111 -andx041 and 11111111 111111111111 -> 11111111 -andx042 and 11111111 111111111 -> 11111111 -andx043 and 1111111 111111111 -> 1111111 -andx044 and 111111 111111111 -> 111111 -andx045 and 11111 111111111 -> 11111 -andx046 and 1111 111111111 -> 1111 -andx047 and 111 111111111 -> 111 -andx048 and 11 111111111 -> 11 -andx049 and 1 111111111 -> 1 - -andx050 and 1111111111 1 -> 1 -andx051 and 111111111 1 -> 1 -andx052 and 11111111 1 -> 1 -andx053 and 1111111 1 -> 1 -andx054 and 111111 1 -> 1 -andx055 and 11111 1 -> 1 -andx056 and 1111 1 -> 1 -andx057 and 111 1 -> 1 -andx058 and 11 1 -> 1 -andx059 and 1 1 -> 1 - -andx060 
and 1111111111 0 -> 0 -andx061 and 111111111 0 -> 0 -andx062 and 11111111 0 -> 0 -andx063 and 1111111 0 -> 0 -andx064 and 111111 0 -> 0 -andx065 and 11111 0 -> 0 -andx066 and 1111 0 -> 0 -andx067 and 111 0 -> 0 -andx068 and 11 0 -> 0 -andx069 and 1 0 -> 0 - -andx070 and 1 1111111111 -> 1 -andx071 and 1 111111111 -> 1 -andx072 and 1 11111111 -> 1 -andx073 and 1 1111111 -> 1 -andx074 and 1 111111 -> 1 -andx075 and 1 11111 -> 1 -andx076 and 1 1111 -> 1 -andx077 and 1 111 -> 1 -andx078 and 1 11 -> 1 -andx079 and 1 1 -> 1 - -andx080 and 0 1111111111 -> 0 -andx081 and 0 111111111 -> 0 -andx082 and 0 11111111 -> 0 -andx083 and 0 1111111 -> 0 -andx084 and 0 111111 -> 0 -andx085 and 0 11111 -> 0 -andx086 and 0 1111 -> 0 -andx087 and 0 111 -> 0 -andx088 and 0 11 -> 0 -andx089 and 0 1 -> 0 - -andx090 and 011111111 111111111 -> 11111111 -andx091 and 101111111 111111111 -> 101111111 -andx092 and 110111111 111111111 -> 110111111 -andx093 and 111011111 111111111 -> 111011111 -andx094 and 111101111 111111111 -> 111101111 -andx095 and 111110111 111111111 -> 111110111 -andx096 and 111111011 111111111 -> 111111011 -andx097 and 111111101 111111111 -> 111111101 -andx098 and 111111110 111111111 -> 111111110 - -andx100 and 111111111 011111111 -> 11111111 -andx101 and 111111111 101111111 -> 101111111 -andx102 and 111111111 110111111 -> 110111111 -andx103 and 111111111 111011111 -> 111011111 -andx104 and 111111111 111101111 -> 111101111 -andx105 and 111111111 111110111 -> 111110111 -andx106 and 111111111 111111011 -> 111111011 -andx107 and 111111111 111111101 -> 111111101 -andx108 and 111111111 111111110 -> 111111110 - --- non-0/1 should not be accepted, nor should signs -andx220 and 111111112 111111111 -> NaN Invalid_operation -andx221 and 333333333 333333333 -> NaN Invalid_operation -andx222 and 555555555 555555555 -> NaN Invalid_operation -andx223 and 777777777 777777777 -> NaN Invalid_operation -andx224 and 999999999 999999999 -> NaN Invalid_operation -andx225 and 222222222 999999999 -> NaN Invalid_operation -andx226 and 444444444 999999999 -> NaN Invalid_operation -andx227 and 666666666 999999999 -> NaN Invalid_operation -andx228 and 888888888 999999999 -> NaN Invalid_operation -andx229 and 999999999 222222222 -> NaN Invalid_operation -andx230 and 999999999 444444444 -> NaN Invalid_operation -andx231 and 999999999 666666666 -> NaN Invalid_operation -andx232 and 999999999 888888888 -> NaN Invalid_operation --- a few randoms -andx240 and 567468689 -934981942 -> NaN Invalid_operation -andx241 and 567367689 934981942 -> NaN Invalid_operation -andx242 and -631917772 -706014634 -> NaN Invalid_operation -andx243 and -756253257 138579234 -> NaN Invalid_operation -andx244 and 835590149 567435400 -> NaN Invalid_operation --- test MSD -andx250 and 200000000 100000000 -> NaN Invalid_operation -andx251 and 700000000 100000000 -> NaN Invalid_operation -andx252 and 800000000 100000000 -> NaN Invalid_operation -andx253 and 900000000 100000000 -> NaN Invalid_operation -andx254 and 200000000 000000000 -> NaN Invalid_operation -andx255 and 700000000 000000000 -> NaN Invalid_operation -andx256 and 800000000 000000000 -> NaN Invalid_operation -andx257 and 900000000 000000000 -> NaN Invalid_operation -andx258 and 100000000 200000000 -> NaN Invalid_operation -andx259 and 100000000 700000000 -> NaN Invalid_operation -andx260 and 100000000 800000000 -> NaN Invalid_operation -andx261 and 100000000 900000000 -> NaN Invalid_operation -andx262 and 000000000 200000000 -> NaN Invalid_operation -andx263 and 000000000 700000000 -> NaN 
Invalid_operation -andx264 and 000000000 800000000 -> NaN Invalid_operation -andx265 and 000000000 900000000 -> NaN Invalid_operation --- test MSD-1 -andx270 and 020000000 100000000 -> NaN Invalid_operation -andx271 and 070100000 100000000 -> NaN Invalid_operation -andx272 and 080010000 100000001 -> NaN Invalid_operation -andx273 and 090001000 100000010 -> NaN Invalid_operation -andx274 and 100000100 020010100 -> NaN Invalid_operation -andx275 and 100000000 070001000 -> NaN Invalid_operation -andx276 and 100000010 080010100 -> NaN Invalid_operation -andx277 and 100000000 090000010 -> NaN Invalid_operation --- test LSD -andx280 and 001000002 100000000 -> NaN Invalid_operation -andx281 and 000000007 100000000 -> NaN Invalid_operation -andx282 and 000000008 100000000 -> NaN Invalid_operation -andx283 and 000000009 100000000 -> NaN Invalid_operation -andx284 and 100000000 000100002 -> NaN Invalid_operation -andx285 and 100100000 001000007 -> NaN Invalid_operation -andx286 and 100010000 010000008 -> NaN Invalid_operation -andx287 and 100001000 100000009 -> NaN Invalid_operation --- test Middie -andx288 and 001020000 100000000 -> NaN Invalid_operation -andx289 and 000070001 100000000 -> NaN Invalid_operation -andx290 and 000080000 100010000 -> NaN Invalid_operation -andx291 and 000090000 100001000 -> NaN Invalid_operation -andx292 and 100000010 000020100 -> NaN Invalid_operation -andx293 and 100100000 000070010 -> NaN Invalid_operation -andx294 and 100010100 000080001 -> NaN Invalid_operation -andx295 and 100001000 000090000 -> NaN Invalid_operation --- signs -andx296 and -100001000 -000000000 -> NaN Invalid_operation -andx297 and -100001000 000010000 -> NaN Invalid_operation -andx298 and 100001000 -000000000 -> NaN Invalid_operation -andx299 and 100001000 000011000 -> 1000 - --- Nmax, Nmin, Ntiny -andx331 and 2 9.99999999E+999 -> NaN Invalid_operation -andx332 and 3 1E-999 -> NaN Invalid_operation -andx333 and 4 1.00000000E-999 -> NaN Invalid_operation -andx334 and 5 1E-1007 -> NaN Invalid_operation -andx335 and 6 -1E-1007 -> NaN Invalid_operation -andx336 and 7 -1.00000000E-999 -> NaN Invalid_operation -andx337 and 8 -1E-999 -> NaN Invalid_operation -andx338 and 9 -9.99999999E+999 -> NaN Invalid_operation -andx341 and 9.99999999E+999 -18 -> NaN Invalid_operation -andx342 and 1E-999 01 -> NaN Invalid_operation -andx343 and 1.00000000E-999 -18 -> NaN Invalid_operation -andx344 and 1E-1007 18 -> NaN Invalid_operation -andx345 and -1E-1007 -10 -> NaN Invalid_operation -andx346 and -1.00000000E-999 18 -> NaN Invalid_operation -andx347 and -1E-999 10 -> NaN Invalid_operation -andx348 and -9.99999999E+999 -18 -> NaN Invalid_operation - --- A few other non-integers -andx361 and 1.0 1 -> NaN Invalid_operation -andx362 and 1E+1 1 -> NaN Invalid_operation -andx363 and 0.0 1 -> NaN Invalid_operation -andx364 and 0E+1 1 -> NaN Invalid_operation -andx365 and 9.9 1 -> NaN Invalid_operation -andx366 and 9E+1 1 -> NaN Invalid_operation -andx371 and 0 1.0 -> NaN Invalid_operation -andx372 and 0 1E+1 -> NaN Invalid_operation -andx373 and 0 0.0 -> NaN Invalid_operation -andx374 and 0 0E+1 -> NaN Invalid_operation -andx375 and 0 9.9 -> NaN Invalid_operation -andx376 and 0 9E+1 -> NaN Invalid_operation - --- All Specials are in error -andx780 and -Inf -Inf -> NaN Invalid_operation -andx781 and -Inf -1000 -> NaN Invalid_operation -andx782 and -Inf -1 -> NaN Invalid_operation -andx783 and -Inf -0 -> NaN Invalid_operation -andx784 and -Inf 0 -> NaN Invalid_operation -andx785 and -Inf 1 -> NaN Invalid_operation 
-andx786 and -Inf 1000 -> NaN Invalid_operation -andx787 and -1000 -Inf -> NaN Invalid_operation -andx788 and -Inf -Inf -> NaN Invalid_operation -andx789 and -1 -Inf -> NaN Invalid_operation -andx790 and -0 -Inf -> NaN Invalid_operation -andx791 and 0 -Inf -> NaN Invalid_operation -andx792 and 1 -Inf -> NaN Invalid_operation -andx793 and 1000 -Inf -> NaN Invalid_operation -andx794 and Inf -Inf -> NaN Invalid_operation - -andx800 and Inf -Inf -> NaN Invalid_operation -andx801 and Inf -1000 -> NaN Invalid_operation -andx802 and Inf -1 -> NaN Invalid_operation -andx803 and Inf -0 -> NaN Invalid_operation -andx804 and Inf 0 -> NaN Invalid_operation -andx805 and Inf 1 -> NaN Invalid_operation -andx806 and Inf 1000 -> NaN Invalid_operation -andx807 and Inf Inf -> NaN Invalid_operation -andx808 and -1000 Inf -> NaN Invalid_operation -andx809 and -Inf Inf -> NaN Invalid_operation -andx810 and -1 Inf -> NaN Invalid_operation -andx811 and -0 Inf -> NaN Invalid_operation -andx812 and 0 Inf -> NaN Invalid_operation -andx813 and 1 Inf -> NaN Invalid_operation -andx814 and 1000 Inf -> NaN Invalid_operation -andx815 and Inf Inf -> NaN Invalid_operation - -andx821 and NaN -Inf -> NaN Invalid_operation -andx822 and NaN -1000 -> NaN Invalid_operation -andx823 and NaN -1 -> NaN Invalid_operation -andx824 and NaN -0 -> NaN Invalid_operation -andx825 and NaN 0 -> NaN Invalid_operation -andx826 and NaN 1 -> NaN Invalid_operation -andx827 and NaN 1000 -> NaN Invalid_operation -andx828 and NaN Inf -> NaN Invalid_operation -andx829 and NaN NaN -> NaN Invalid_operation -andx830 and -Inf NaN -> NaN Invalid_operation -andx831 and -1000 NaN -> NaN Invalid_operation -andx832 and -1 NaN -> NaN Invalid_operation -andx833 and -0 NaN -> NaN Invalid_operation -andx834 and 0 NaN -> NaN Invalid_operation -andx835 and 1 NaN -> NaN Invalid_operation -andx836 and 1000 NaN -> NaN Invalid_operation -andx837 and Inf NaN -> NaN Invalid_operation - -andx841 and sNaN -Inf -> NaN Invalid_operation -andx842 and sNaN -1000 -> NaN Invalid_operation -andx843 and sNaN -1 -> NaN Invalid_operation -andx844 and sNaN -0 -> NaN Invalid_operation -andx845 and sNaN 0 -> NaN Invalid_operation -andx846 and sNaN 1 -> NaN Invalid_operation -andx847 and sNaN 1000 -> NaN Invalid_operation -andx848 and sNaN NaN -> NaN Invalid_operation -andx849 and sNaN sNaN -> NaN Invalid_operation -andx850 and NaN sNaN -> NaN Invalid_operation -andx851 and -Inf sNaN -> NaN Invalid_operation -andx852 and -1000 sNaN -> NaN Invalid_operation -andx853 and -1 sNaN -> NaN Invalid_operation -andx854 and -0 sNaN -> NaN Invalid_operation -andx855 and 0 sNaN -> NaN Invalid_operation -andx856 and 1 sNaN -> NaN Invalid_operation -andx857 and 1000 sNaN -> NaN Invalid_operation -andx858 and Inf sNaN -> NaN Invalid_operation -andx859 and NaN sNaN -> NaN Invalid_operation - --- propagating NaNs -andx861 and NaN1 -Inf -> NaN Invalid_operation -andx862 and +NaN2 -1000 -> NaN Invalid_operation -andx863 and NaN3 1000 -> NaN Invalid_operation -andx864 and NaN4 Inf -> NaN Invalid_operation -andx865 and NaN5 +NaN6 -> NaN Invalid_operation -andx866 and -Inf NaN7 -> NaN Invalid_operation -andx867 and -1000 NaN8 -> NaN Invalid_operation -andx868 and 1000 NaN9 -> NaN Invalid_operation -andx869 and Inf +NaN10 -> NaN Invalid_operation -andx871 and sNaN11 -Inf -> NaN Invalid_operation -andx872 and sNaN12 -1000 -> NaN Invalid_operation -andx873 and sNaN13 1000 -> NaN Invalid_operation -andx874 and sNaN14 NaN17 -> NaN Invalid_operation -andx875 and sNaN15 sNaN18 -> NaN Invalid_operation -andx876 and 
NaN16 sNaN19 -> NaN Invalid_operation -andx877 and -Inf +sNaN20 -> NaN Invalid_operation -andx878 and -1000 sNaN21 -> NaN Invalid_operation -andx879 and 1000 sNaN22 -> NaN Invalid_operation -andx880 and Inf sNaN23 -> NaN Invalid_operation -andx881 and +NaN25 +sNaN24 -> NaN Invalid_operation -andx882 and -NaN26 NaN28 -> NaN Invalid_operation -andx883 and -sNaN27 sNaN29 -> NaN Invalid_operation -andx884 and 1000 -NaN30 -> NaN Invalid_operation -andx885 and 1000 -sNaN31 -> NaN Invalid_operation +------------------------------------------------------------------------ +-- and.decTest -- digitwise logical AND -- +-- Copyright (c) IBM Corporation, 1981, 2008. All rights reserved. -- +------------------------------------------------------------------------ +-- Please see the document "General Decimal Arithmetic Testcases" -- +-- at http://www2.hursley.ibm.com/decimal for the description of -- +-- these testcases. -- +-- -- +-- These testcases are experimental ('beta' versions), and they -- +-- may contain errors. They are offered on an as-is basis. In -- +-- particular, achieving the same results as the tests here is not -- +-- a guarantee that an implementation complies with any Standard -- +-- or specification. The tests are not exhaustive. -- +-- -- +-- Please send comments, suggestions, and corrections to the author: -- +-- Mike Cowlishaw, IBM Fellow -- +-- IBM UK, PO Box 31, Birmingham Road, Warwick CV34 5JL, UK -- +-- mfc at uk.ibm.com -- +------------------------------------------------------------------------ +version: 2.59 + +extended: 1 +precision: 9 +rounding: half_up +maxExponent: 999 +minExponent: -999 + +-- Sanity check (truth table) +andx001 and 0 0 -> 0 +andx002 and 0 1 -> 0 +andx003 and 1 0 -> 0 +andx004 and 1 1 -> 1 +andx005 and 1100 1010 -> 1000 +andx006 and 1111 10 -> 10 +andx007 and 1111 1010 -> 1010 + +-- and at msd and msd-1 +andx010 and 000000000 000000000 -> 0 +andx011 and 000000000 100000000 -> 0 +andx012 and 100000000 000000000 -> 0 +andx013 and 100000000 100000000 -> 100000000 +andx014 and 000000000 000000000 -> 0 +andx015 and 000000000 010000000 -> 0 +andx016 and 010000000 000000000 -> 0 +andx017 and 010000000 010000000 -> 10000000 + +-- Various lengths +-- 123456789 123456789 123456789 +andx021 and 111111111 111111111 -> 111111111 +andx022 and 111111111111 111111111 -> 111111111 +andx023 and 111111111111 11111111 -> 11111111 +andx024 and 111111111 11111111 -> 11111111 +andx025 and 111111111 1111111 -> 1111111 +andx026 and 111111111111 111111 -> 111111 +andx027 and 111111111111 11111 -> 11111 +andx028 and 111111111111 1111 -> 1111 +andx029 and 111111111111 111 -> 111 +andx031 and 111111111111 11 -> 11 +andx032 and 111111111111 1 -> 1 +andx033 and 111111111111 1111111111 -> 111111111 +andx034 and 11111111111 11111111111 -> 111111111 +andx035 and 1111111111 111111111111 -> 111111111 +andx036 and 111111111 1111111111111 -> 111111111 + +andx040 and 111111111 111111111111 -> 111111111 +andx041 and 11111111 111111111111 -> 11111111 +andx042 and 11111111 111111111 -> 11111111 +andx043 and 1111111 111111111 -> 1111111 +andx044 and 111111 111111111 -> 111111 +andx045 and 11111 111111111 -> 11111 +andx046 and 1111 111111111 -> 1111 +andx047 and 111 111111111 -> 111 +andx048 and 11 111111111 -> 11 +andx049 and 1 111111111 -> 1 + +andx050 and 1111111111 1 -> 1 +andx051 and 111111111 1 -> 1 +andx052 and 11111111 1 -> 1 +andx053 and 1111111 1 -> 1 +andx054 and 111111 1 -> 1 +andx055 and 11111 1 -> 1 +andx056 and 1111 1 -> 1 +andx057 and 111 1 -> 1 +andx058 and 11 1 -> 1 +andx059 
and 1 1 -> 1 + +andx060 and 1111111111 0 -> 0 +andx061 and 111111111 0 -> 0 +andx062 and 11111111 0 -> 0 +andx063 and 1111111 0 -> 0 +andx064 and 111111 0 -> 0 +andx065 and 11111 0 -> 0 +andx066 and 1111 0 -> 0 +andx067 and 111 0 -> 0 +andx068 and 11 0 -> 0 +andx069 and 1 0 -> 0 + +andx070 and 1 1111111111 -> 1 +andx071 and 1 111111111 -> 1 +andx072 and 1 11111111 -> 1 +andx073 and 1 1111111 -> 1 +andx074 and 1 111111 -> 1 +andx075 and 1 11111 -> 1 +andx076 and 1 1111 -> 1 +andx077 and 1 111 -> 1 +andx078 and 1 11 -> 1 +andx079 and 1 1 -> 1 + +andx080 and 0 1111111111 -> 0 +andx081 and 0 111111111 -> 0 +andx082 and 0 11111111 -> 0 +andx083 and 0 1111111 -> 0 +andx084 and 0 111111 -> 0 +andx085 and 0 11111 -> 0 +andx086 and 0 1111 -> 0 +andx087 and 0 111 -> 0 +andx088 and 0 11 -> 0 +andx089 and 0 1 -> 0 + +andx090 and 011111111 111111111 -> 11111111 +andx091 and 101111111 111111111 -> 101111111 +andx092 and 110111111 111111111 -> 110111111 +andx093 and 111011111 111111111 -> 111011111 +andx094 and 111101111 111111111 -> 111101111 +andx095 and 111110111 111111111 -> 111110111 +andx096 and 111111011 111111111 -> 111111011 +andx097 and 111111101 111111111 -> 111111101 +andx098 and 111111110 111111111 -> 111111110 + +andx100 and 111111111 011111111 -> 11111111 +andx101 and 111111111 101111111 -> 101111111 +andx102 and 111111111 110111111 -> 110111111 +andx103 and 111111111 111011111 -> 111011111 +andx104 and 111111111 111101111 -> 111101111 +andx105 and 111111111 111110111 -> 111110111 +andx106 and 111111111 111111011 -> 111111011 +andx107 and 111111111 111111101 -> 111111101 +andx108 and 111111111 111111110 -> 111111110 + +-- non-0/1 should not be accepted, nor should signs +andx220 and 111111112 111111111 -> NaN Invalid_operation +andx221 and 333333333 333333333 -> NaN Invalid_operation +andx222 and 555555555 555555555 -> NaN Invalid_operation +andx223 and 777777777 777777777 -> NaN Invalid_operation +andx224 and 999999999 999999999 -> NaN Invalid_operation +andx225 and 222222222 999999999 -> NaN Invalid_operation +andx226 and 444444444 999999999 -> NaN Invalid_operation +andx227 and 666666666 999999999 -> NaN Invalid_operation +andx228 and 888888888 999999999 -> NaN Invalid_operation +andx229 and 999999999 222222222 -> NaN Invalid_operation +andx230 and 999999999 444444444 -> NaN Invalid_operation +andx231 and 999999999 666666666 -> NaN Invalid_operation +andx232 and 999999999 888888888 -> NaN Invalid_operation +-- a few randoms +andx240 and 567468689 -934981942 -> NaN Invalid_operation +andx241 and 567367689 934981942 -> NaN Invalid_operation +andx242 and -631917772 -706014634 -> NaN Invalid_operation +andx243 and -756253257 138579234 -> NaN Invalid_operation +andx244 and 835590149 567435400 -> NaN Invalid_operation +-- test MSD +andx250 and 200000000 100000000 -> NaN Invalid_operation +andx251 and 700000000 100000000 -> NaN Invalid_operation +andx252 and 800000000 100000000 -> NaN Invalid_operation +andx253 and 900000000 100000000 -> NaN Invalid_operation +andx254 and 200000000 000000000 -> NaN Invalid_operation +andx255 and 700000000 000000000 -> NaN Invalid_operation +andx256 and 800000000 000000000 -> NaN Invalid_operation +andx257 and 900000000 000000000 -> NaN Invalid_operation +andx258 and 100000000 200000000 -> NaN Invalid_operation +andx259 and 100000000 700000000 -> NaN Invalid_operation +andx260 and 100000000 800000000 -> NaN Invalid_operation +andx261 and 100000000 900000000 -> NaN Invalid_operation +andx262 and 000000000 200000000 -> NaN Invalid_operation +andx263 and 000000000 
700000000 -> NaN Invalid_operation +andx264 and 000000000 800000000 -> NaN Invalid_operation +andx265 and 000000000 900000000 -> NaN Invalid_operation +-- test MSD-1 +andx270 and 020000000 100000000 -> NaN Invalid_operation +andx271 and 070100000 100000000 -> NaN Invalid_operation +andx272 and 080010000 100000001 -> NaN Invalid_operation +andx273 and 090001000 100000010 -> NaN Invalid_operation +andx274 and 100000100 020010100 -> NaN Invalid_operation +andx275 and 100000000 070001000 -> NaN Invalid_operation +andx276 and 100000010 080010100 -> NaN Invalid_operation +andx277 and 100000000 090000010 -> NaN Invalid_operation +-- test LSD +andx280 and 001000002 100000000 -> NaN Invalid_operation +andx281 and 000000007 100000000 -> NaN Invalid_operation +andx282 and 000000008 100000000 -> NaN Invalid_operation +andx283 and 000000009 100000000 -> NaN Invalid_operation +andx284 and 100000000 000100002 -> NaN Invalid_operation +andx285 and 100100000 001000007 -> NaN Invalid_operation +andx286 and 100010000 010000008 -> NaN Invalid_operation +andx287 and 100001000 100000009 -> NaN Invalid_operation +-- test Middie +andx288 and 001020000 100000000 -> NaN Invalid_operation +andx289 and 000070001 100000000 -> NaN Invalid_operation +andx290 and 000080000 100010000 -> NaN Invalid_operation +andx291 and 000090000 100001000 -> NaN Invalid_operation +andx292 and 100000010 000020100 -> NaN Invalid_operation +andx293 and 100100000 000070010 -> NaN Invalid_operation +andx294 and 100010100 000080001 -> NaN Invalid_operation +andx295 and 100001000 000090000 -> NaN Invalid_operation +-- signs +andx296 and -100001000 -000000000 -> NaN Invalid_operation +andx297 and -100001000 000010000 -> NaN Invalid_operation +andx298 and 100001000 -000000000 -> NaN Invalid_operation +andx299 and 100001000 000011000 -> 1000 + +-- Nmax, Nmin, Ntiny +andx331 and 2 9.99999999E+999 -> NaN Invalid_operation +andx332 and 3 1E-999 -> NaN Invalid_operation +andx333 and 4 1.00000000E-999 -> NaN Invalid_operation +andx334 and 5 1E-1007 -> NaN Invalid_operation +andx335 and 6 -1E-1007 -> NaN Invalid_operation +andx336 and 7 -1.00000000E-999 -> NaN Invalid_operation +andx337 and 8 -1E-999 -> NaN Invalid_operation +andx338 and 9 -9.99999999E+999 -> NaN Invalid_operation +andx341 and 9.99999999E+999 -18 -> NaN Invalid_operation +andx342 and 1E-999 01 -> NaN Invalid_operation +andx343 and 1.00000000E-999 -18 -> NaN Invalid_operation +andx344 and 1E-1007 18 -> NaN Invalid_operation +andx345 and -1E-1007 -10 -> NaN Invalid_operation +andx346 and -1.00000000E-999 18 -> NaN Invalid_operation +andx347 and -1E-999 10 -> NaN Invalid_operation +andx348 and -9.99999999E+999 -18 -> NaN Invalid_operation + +-- A few other non-integers +andx361 and 1.0 1 -> NaN Invalid_operation +andx362 and 1E+1 1 -> NaN Invalid_operation +andx363 and 0.0 1 -> NaN Invalid_operation +andx364 and 0E+1 1 -> NaN Invalid_operation +andx365 and 9.9 1 -> NaN Invalid_operation +andx366 and 9E+1 1 -> NaN Invalid_operation +andx371 and 0 1.0 -> NaN Invalid_operation +andx372 and 0 1E+1 -> NaN Invalid_operation +andx373 and 0 0.0 -> NaN Invalid_operation +andx374 and 0 0E+1 -> NaN Invalid_operation +andx375 and 0 9.9 -> NaN Invalid_operation +andx376 and 0 9E+1 -> NaN Invalid_operation + +-- All Specials are in error +andx780 and -Inf -Inf -> NaN Invalid_operation +andx781 and -Inf -1000 -> NaN Invalid_operation +andx782 and -Inf -1 -> NaN Invalid_operation +andx783 and -Inf -0 -> NaN Invalid_operation +andx784 and -Inf 0 -> NaN Invalid_operation +andx785 and -Inf 1 -> NaN 
Invalid_operation +andx786 and -Inf 1000 -> NaN Invalid_operation +andx787 and -1000 -Inf -> NaN Invalid_operation +andx788 and -Inf -Inf -> NaN Invalid_operation +andx789 and -1 -Inf -> NaN Invalid_operation +andx790 and -0 -Inf -> NaN Invalid_operation +andx791 and 0 -Inf -> NaN Invalid_operation +andx792 and 1 -Inf -> NaN Invalid_operation +andx793 and 1000 -Inf -> NaN Invalid_operation +andx794 and Inf -Inf -> NaN Invalid_operation + +andx800 and Inf -Inf -> NaN Invalid_operation +andx801 and Inf -1000 -> NaN Invalid_operation +andx802 and Inf -1 -> NaN Invalid_operation +andx803 and Inf -0 -> NaN Invalid_operation +andx804 and Inf 0 -> NaN Invalid_operation +andx805 and Inf 1 -> NaN Invalid_operation +andx806 and Inf 1000 -> NaN Invalid_operation +andx807 and Inf Inf -> NaN Invalid_operation +andx808 and -1000 Inf -> NaN Invalid_operation +andx809 and -Inf Inf -> NaN Invalid_operation +andx810 and -1 Inf -> NaN Invalid_operation +andx811 and -0 Inf -> NaN Invalid_operation +andx812 and 0 Inf -> NaN Invalid_operation +andx813 and 1 Inf -> NaN Invalid_operation +andx814 and 1000 Inf -> NaN Invalid_operation +andx815 and Inf Inf -> NaN Invalid_operation + +andx821 and NaN -Inf -> NaN Invalid_operation +andx822 and NaN -1000 -> NaN Invalid_operation +andx823 and NaN -1 -> NaN Invalid_operation +andx824 and NaN -0 -> NaN Invalid_operation +andx825 and NaN 0 -> NaN Invalid_operation +andx826 and NaN 1 -> NaN Invalid_operation +andx827 and NaN 1000 -> NaN Invalid_operation +andx828 and NaN Inf -> NaN Invalid_operation +andx829 and NaN NaN -> NaN Invalid_operation +andx830 and -Inf NaN -> NaN Invalid_operation +andx831 and -1000 NaN -> NaN Invalid_operation +andx832 and -1 NaN -> NaN Invalid_operation +andx833 and -0 NaN -> NaN Invalid_operation +andx834 and 0 NaN -> NaN Invalid_operation +andx835 and 1 NaN -> NaN Invalid_operation +andx836 and 1000 NaN -> NaN Invalid_operation +andx837 and Inf NaN -> NaN Invalid_operation + +andx841 and sNaN -Inf -> NaN Invalid_operation +andx842 and sNaN -1000 -> NaN Invalid_operation +andx843 and sNaN -1 -> NaN Invalid_operation +andx844 and sNaN -0 -> NaN Invalid_operation +andx845 and sNaN 0 -> NaN Invalid_operation +andx846 and sNaN 1 -> NaN Invalid_operation +andx847 and sNaN 1000 -> NaN Invalid_operation +andx848 and sNaN NaN -> NaN Invalid_operation +andx849 and sNaN sNaN -> NaN Invalid_operation +andx850 and NaN sNaN -> NaN Invalid_operation +andx851 and -Inf sNaN -> NaN Invalid_operation +andx852 and -1000 sNaN -> NaN Invalid_operation +andx853 and -1 sNaN -> NaN Invalid_operation +andx854 and -0 sNaN -> NaN Invalid_operation +andx855 and 0 sNaN -> NaN Invalid_operation +andx856 and 1 sNaN -> NaN Invalid_operation +andx857 and 1000 sNaN -> NaN Invalid_operation +andx858 and Inf sNaN -> NaN Invalid_operation +andx859 and NaN sNaN -> NaN Invalid_operation + +-- propagating NaNs +andx861 and NaN1 -Inf -> NaN Invalid_operation +andx862 and +NaN2 -1000 -> NaN Invalid_operation +andx863 and NaN3 1000 -> NaN Invalid_operation +andx864 and NaN4 Inf -> NaN Invalid_operation +andx865 and NaN5 +NaN6 -> NaN Invalid_operation +andx866 and -Inf NaN7 -> NaN Invalid_operation +andx867 and -1000 NaN8 -> NaN Invalid_operation +andx868 and 1000 NaN9 -> NaN Invalid_operation +andx869 and Inf +NaN10 -> NaN Invalid_operation +andx871 and sNaN11 -Inf -> NaN Invalid_operation +andx872 and sNaN12 -1000 -> NaN Invalid_operation +andx873 and sNaN13 1000 -> NaN Invalid_operation +andx874 and sNaN14 NaN17 -> NaN Invalid_operation +andx875 and sNaN15 sNaN18 -> NaN 
Invalid_operation +andx876 and NaN16 sNaN19 -> NaN Invalid_operation +andx877 and -Inf +sNaN20 -> NaN Invalid_operation +andx878 and -1000 sNaN21 -> NaN Invalid_operation +andx879 and 1000 sNaN22 -> NaN Invalid_operation +andx880 and Inf sNaN23 -> NaN Invalid_operation +andx881 and +NaN25 +sNaN24 -> NaN Invalid_operation +andx882 and -NaN26 NaN28 -> NaN Invalid_operation +andx883 and -sNaN27 sNaN29 -> NaN Invalid_operation +andx884 and 1000 -NaN30 -> NaN Invalid_operation +andx885 and 1000 -sNaN31 -> NaN Invalid_operation diff --git a/lib-python/2.7/test/decimaltestdata/class.decTest b/lib-python/2.7/test/decimaltestdata/class.decTest --- a/lib-python/2.7/test/decimaltestdata/class.decTest +++ b/lib-python/2.7/test/decimaltestdata/class.decTest @@ -1,131 +1,131 @@ ------------------------------------------------------------------------- --- class.decTest -- Class operations -- --- Copyright (c) IBM Corporation, 1981, 2008. All rights reserved. -- ------------------------------------------------------------------------- --- Please see the document "General Decimal Arithmetic Testcases" -- --- at http://www2.hursley.ibm.com/decimal for the description of -- --- these testcases. -- --- -- --- These testcases are experimental ('beta' versions), and they -- --- may contain errors. They are offered on an as-is basis. In -- --- particular, achieving the same results as the tests here is not -- --- a guarantee that an implementation complies with any Standard -- --- or specification. The tests are not exhaustive. -- --- -- --- Please send comments, suggestions, and corrections to the author: -- --- Mike Cowlishaw, IBM Fellow -- --- IBM UK, PO Box 31, Birmingham Road, Warwick CV34 5JL, UK -- --- mfc at uk.ibm.com -- ------------------------------------------------------------------------- -version: 2.59 - --- [New 2006.11.27] - -precision: 9 -maxExponent: 999 -minExponent: -999 -extended: 1 -clamp: 1 -rounding: half_even - -clasx001 class 0 -> +Zero -clasx002 class 0.00 -> +Zero -clasx003 class 0E+5 -> +Zero -clasx004 class 1E-1007 -> +Subnormal -clasx005 class 0.1E-999 -> +Subnormal -clasx006 class 0.99999999E-999 -> +Subnormal -clasx007 class 1.00000000E-999 -> +Normal -clasx008 class 1E-999 -> +Normal -clasx009 class 1E-100 -> +Normal -clasx010 class 1E-10 -> +Normal -clasx012 class 1E-1 -> +Normal -clasx013 class 1 -> +Normal -clasx014 class 2.50 -> +Normal -clasx015 class 100.100 -> +Normal -clasx016 class 1E+30 -> +Normal -clasx017 class 1E+999 -> +Normal -clasx018 class 9.99999999E+999 -> +Normal -clasx019 class Inf -> +Infinity - -clasx021 class -0 -> -Zero -clasx022 class -0.00 -> -Zero -clasx023 class -0E+5 -> -Zero -clasx024 class -1E-1007 -> -Subnormal -clasx025 class -0.1E-999 -> -Subnormal -clasx026 class -0.99999999E-999 -> -Subnormal -clasx027 class -1.00000000E-999 -> -Normal -clasx028 class -1E-999 -> -Normal -clasx029 class -1E-100 -> -Normal -clasx030 class -1E-10 -> -Normal -clasx032 class -1E-1 -> -Normal -clasx033 class -1 -> -Normal -clasx034 class -2.50 -> -Normal -clasx035 class -100.100 -> -Normal -clasx036 class -1E+30 -> -Normal -clasx037 class -1E+999 -> -Normal -clasx038 class -9.99999999E+999 -> -Normal -clasx039 class -Inf -> -Infinity - -clasx041 class NaN -> NaN -clasx042 class -NaN -> NaN -clasx043 class +NaN12345 -> NaN -clasx044 class sNaN -> sNaN -clasx045 class -sNaN -> sNaN -clasx046 class +sNaN12345 -> sNaN - - --- decimal64 bounds - -precision: 16 -maxExponent: 384 -minExponent: -383 -clamp: 1 -rounding: half_even - -clasx201 class 0 -> +Zero -clasx202 
class 0.00 -> +Zero -clasx203 class 0E+5 -> +Zero -clasx204 class 1E-396 -> +Subnormal -clasx205 class 0.1E-383 -> +Subnormal -clasx206 class 0.999999999999999E-383 -> +Subnormal -clasx207 class 1.000000000000000E-383 -> +Normal -clasx208 class 1E-383 -> +Normal -clasx209 class 1E-100 -> +Normal -clasx210 class 1E-10 -> +Normal -clasx212 class 1E-1 -> +Normal -clasx213 class 1 -> +Normal -clasx214 class 2.50 -> +Normal -clasx215 class 100.100 -> +Normal -clasx216 class 1E+30 -> +Normal -clasx217 class 1E+384 -> +Normal -clasx218 class 9.999999999999999E+384 -> +Normal -clasx219 class Inf -> +Infinity - -clasx221 class -0 -> -Zero -clasx222 class -0.00 -> -Zero -clasx223 class -0E+5 -> -Zero -clasx224 class -1E-396 -> -Subnormal -clasx225 class -0.1E-383 -> -Subnormal -clasx226 class -0.999999999999999E-383 -> -Subnormal -clasx227 class -1.000000000000000E-383 -> -Normal -clasx228 class -1E-383 -> -Normal -clasx229 class -1E-100 -> -Normal -clasx230 class -1E-10 -> -Normal -clasx232 class -1E-1 -> -Normal -clasx233 class -1 -> -Normal -clasx234 class -2.50 -> -Normal -clasx235 class -100.100 -> -Normal -clasx236 class -1E+30 -> -Normal -clasx237 class -1E+384 -> -Normal -clasx238 class -9.999999999999999E+384 -> -Normal -clasx239 class -Inf -> -Infinity - -clasx241 class NaN -> NaN -clasx242 class -NaN -> NaN -clasx243 class +NaN12345 -> NaN -clasx244 class sNaN -> sNaN -clasx245 class -sNaN -> sNaN -clasx246 class +sNaN12345 -> sNaN - - - +------------------------------------------------------------------------ +-- class.decTest -- Class operations -- +-- Copyright (c) IBM Corporation, 1981, 2008. All rights reserved. -- +------------------------------------------------------------------------ +-- Please see the document "General Decimal Arithmetic Testcases" -- +-- at http://www2.hursley.ibm.com/decimal for the description of -- +-- these testcases. -- +-- -- +-- These testcases are experimental ('beta' versions), and they -- +-- may contain errors. They are offered on an as-is basis. In -- +-- particular, achieving the same results as the tests here is not -- +-- a guarantee that an implementation complies with any Standard -- +-- or specification. The tests are not exhaustive. 
-- +-- -- +-- Please send comments, suggestions, and corrections to the author: -- +-- Mike Cowlishaw, IBM Fellow -- +-- IBM UK, PO Box 31, Birmingham Road, Warwick CV34 5JL, UK -- +-- mfc at uk.ibm.com -- +------------------------------------------------------------------------ +version: 2.59 + +-- [New 2006.11.27] + +precision: 9 +maxExponent: 999 +minExponent: -999 +extended: 1 +clamp: 1 +rounding: half_even + +clasx001 class 0 -> +Zero +clasx002 class 0.00 -> +Zero +clasx003 class 0E+5 -> +Zero +clasx004 class 1E-1007 -> +Subnormal +clasx005 class 0.1E-999 -> +Subnormal +clasx006 class 0.99999999E-999 -> +Subnormal +clasx007 class 1.00000000E-999 -> +Normal +clasx008 class 1E-999 -> +Normal +clasx009 class 1E-100 -> +Normal +clasx010 class 1E-10 -> +Normal +clasx012 class 1E-1 -> +Normal +clasx013 class 1 -> +Normal +clasx014 class 2.50 -> +Normal +clasx015 class 100.100 -> +Normal +clasx016 class 1E+30 -> +Normal +clasx017 class 1E+999 -> +Normal +clasx018 class 9.99999999E+999 -> +Normal +clasx019 class Inf -> +Infinity + +clasx021 class -0 -> -Zero +clasx022 class -0.00 -> -Zero +clasx023 class -0E+5 -> -Zero +clasx024 class -1E-1007 -> -Subnormal +clasx025 class -0.1E-999 -> -Subnormal +clasx026 class -0.99999999E-999 -> -Subnormal +clasx027 class -1.00000000E-999 -> -Normal +clasx028 class -1E-999 -> -Normal +clasx029 class -1E-100 -> -Normal +clasx030 class -1E-10 -> -Normal +clasx032 class -1E-1 -> -Normal +clasx033 class -1 -> -Normal +clasx034 class -2.50 -> -Normal +clasx035 class -100.100 -> -Normal +clasx036 class -1E+30 -> -Normal +clasx037 class -1E+999 -> -Normal +clasx038 class -9.99999999E+999 -> -Normal +clasx039 class -Inf -> -Infinity + +clasx041 class NaN -> NaN +clasx042 class -NaN -> NaN +clasx043 class +NaN12345 -> NaN +clasx044 class sNaN -> sNaN +clasx045 class -sNaN -> sNaN +clasx046 class +sNaN12345 -> sNaN + + +-- decimal64 bounds + +precision: 16 +maxExponent: 384 +minExponent: -383 +clamp: 1 +rounding: half_even + +clasx201 class 0 -> +Zero +clasx202 class 0.00 -> +Zero +clasx203 class 0E+5 -> +Zero +clasx204 class 1E-396 -> +Subnormal +clasx205 class 0.1E-383 -> +Subnormal +clasx206 class 0.999999999999999E-383 -> +Subnormal +clasx207 class 1.000000000000000E-383 -> +Normal +clasx208 class 1E-383 -> +Normal +clasx209 class 1E-100 -> +Normal +clasx210 class 1E-10 -> +Normal +clasx212 class 1E-1 -> +Normal +clasx213 class 1 -> +Normal +clasx214 class 2.50 -> +Normal +clasx215 class 100.100 -> +Normal +clasx216 class 1E+30 -> +Normal +clasx217 class 1E+384 -> +Normal +clasx218 class 9.999999999999999E+384 -> +Normal +clasx219 class Inf -> +Infinity + +clasx221 class -0 -> -Zero +clasx222 class -0.00 -> -Zero +clasx223 class -0E+5 -> -Zero +clasx224 class -1E-396 -> -Subnormal +clasx225 class -0.1E-383 -> -Subnormal +clasx226 class -0.999999999999999E-383 -> -Subnormal +clasx227 class -1.000000000000000E-383 -> -Normal +clasx228 class -1E-383 -> -Normal +clasx229 class -1E-100 -> -Normal +clasx230 class -1E-10 -> -Normal +clasx232 class -1E-1 -> -Normal +clasx233 class -1 -> -Normal +clasx234 class -2.50 -> -Normal +clasx235 class -100.100 -> -Normal +clasx236 class -1E+30 -> -Normal +clasx237 class -1E+384 -> -Normal +clasx238 class -9.999999999999999E+384 -> -Normal +clasx239 class -Inf -> -Infinity + +clasx241 class NaN -> NaN +clasx242 class -NaN -> NaN +clasx243 class +NaN12345 -> NaN +clasx244 class sNaN -> sNaN +clasx245 class -sNaN -> sNaN +clasx246 class +sNaN12345 -> sNaN + + + diff --git a/lib-python/2.7/test/decimaltestdata/comparetotal.decTest 
b/lib-python/2.7/test/decimaltestdata/comparetotal.decTest --- a/lib-python/2.7/test/decimaltestdata/comparetotal.decTest +++ b/lib-python/2.7/test/decimaltestdata/comparetotal.decTest @@ -1,798 +1,798 @@ ------------------------------------------------------------------------- --- comparetotal.decTest -- decimal comparison using total ordering -- --- Copyright (c) IBM Corporation, 1981, 2008. All rights reserved. -- ------------------------------------------------------------------------- --- Please see the document "General Decimal Arithmetic Testcases" -- --- at http://www2.hursley.ibm.com/decimal for the description of -- --- these testcases. -- --- -- --- These testcases are experimental ('beta' versions), and they -- --- may contain errors. They are offered on an as-is basis. In -- --- particular, achieving the same results as the tests here is not -- --- a guarantee that an implementation complies with any Standard -- --- or specification. The tests are not exhaustive. -- --- -- --- Please send comments, suggestions, and corrections to the author: -- --- Mike Cowlishaw, IBM Fellow -- --- IBM UK, PO Box 31, Birmingham Road, Warwick CV34 5JL, UK -- --- mfc at uk.ibm.com -- ------------------------------------------------------------------------- -version: 2.59 - --- Note that we cannot assume add/subtract tests cover paths adequately, From noreply at buildbot.pypy.org Wed Feb 29 13:43:56 2012 From: noreply at buildbot.pypy.org (arigo) Date: Wed, 29 Feb 2012 13:43:56 +0100 (CET) Subject: [pypy-commit] pypy continulet-jit: Change again the API: this (unimplemented) version looks like it can be Message-ID: <20120229124356.2C49C8204C@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: continulet-jit Changeset: r53020:e497e20b2937 Date: 2012-02-29 13:42 +0100 http://bitbucket.org/pypy/pypy/changeset/e497e20b2937/ Log: Change again the API: this (unimplemented) version looks like it can be used by the JIT. diff --git a/pypy/translator/c/src/stacklet/stacklet.h b/pypy/translator/c/src/stacklet/stacklet.h --- a/pypy/translator/c/src/stacklet/stacklet.h +++ b/pypy/translator/c/src/stacklet/stacklet.h @@ -59,17 +59,22 @@ */ char **_stacklet_translate_pointer(stacklet_handle context, char **ptr); -/* The "stacklet id" is a value that remain valid and unchanged if the - * stacklet is suspended and resumed. WARNING: DON'T USE unless you have - * no other choice, because it is not "composable" at all. +/* To use with the previous function: turn a 'char**' that points into + * the currently running stack into an opaque 'long'. The 'long' + * remains valid as long as the original stack location is valid. At + * any point in time we can ask '_stacklet_get_...()' to convert it back + * into a 'stacklet_handle, char**' pair. The 'char**' will always be + * the same, but the 'stacklet_handle' might change over time. + * Together, they are valid arguments for _stacklet_translate_pointer(). + * + * The returned 'long' is an odd value if currently running in a non- + * main stacklet, or directly '(long)stackptr' if currently running in + * the main stacklet. This guarantees that it is possible to use + * '_stacklet_get_...()' on a regular address taken before starting + * to use stacklets. */ -typedef struct stacklet_id_s *stacklet_id; -#define _stacklet_id_of_stacklet(stacklet) (*(stacklet_id*)(stacklet)) -#define _stacklet_id_current(thrd) (*(stacklet_id*)(thrd)) -/* Returns the current stacklet with the given id. - If 'id' == NULL, returns the main stacklet in the thread. 
- In both cases the return value is NULL if the id specifies the currently - running "stacklet". */ -stacklet_handle _stacklet_with_id(stacklet_thread_handle thrd, stacklet_id id); +long _stacklet_capture_stack_pointer(char **stackptr); +char **_stacklet_get_captured_pointer(long captured); +stacklet_handle _stacklet_get_captured_context(long captured); #endif /* _STACKLET_H_ */ From noreply at buildbot.pypy.org Wed Feb 29 15:43:59 2012 From: noreply at buildbot.pypy.org (arigo) Date: Wed, 29 Feb 2012 15:43:59 +0100 (CET) Subject: [pypy-commit] pypy continulet-jit: Adapt tests.c. Message-ID: <20120229144359.2AE268204C@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: continulet-jit Changeset: r53021:fcbe36c1dc9b Date: 2012-02-29 15:43 +0100 http://bitbucket.org/pypy/pypy/changeset/fcbe36c1dc9b/ Log: Adapt tests.c. diff --git a/pypy/translator/c/src/stacklet/tests.c b/pypy/translator/c/src/stacklet/tests.c --- a/pypy/translator/c/src/stacklet/tests.c +++ b/pypy/translator/c/src/stacklet/tests.c @@ -627,90 +627,139 @@ #endif /************************************************************/ -struct test_id_s { - stacklet_id idmain; - stacklet_id id1; - stacklet_id id2; +struct test_captr_s { + long c, c1, c2; + char **fooref; + char **fooref1; + char **fooref2; stacklet_handle hmain; stacklet_handle h1; stacklet_handle h2; } tid; -stacklet_handle stacklet_id_callback_1(stacklet_handle h, void *arg) +void cap_check_all(int depth); + +stacklet_handle stacklet_captr_callback_1(stacklet_handle h, void *arg) { - stacklet_id myid = _stacklet_id_current(thrd); - assert(_stacklet_with_id(thrd, myid) == NULL); tid.hmain = h; tid.h1 = NULL; + assert(status == 0); + status = 1; + + assert(_stacklet_get_captured_pointer(tid.c) == tid.fooref); + assert(_stacklet_get_captured_context(tid.c) == tid.hmain); + assert(*_stacklet_translate_pointer(tid.hmain, tid.fooref) == (char*)-42); + + char *ref1 = (char*)1111; + tid.c1 = _stacklet_capture_stack_pointer(&ref1); + tid.fooref1 = &ref1; + assert(_stacklet_get_captured_pointer(tid.c1) == &ref1); + assert(_stacklet_get_captured_context(tid.c1) == NULL); h = stacklet_switch(thrd, h); tid.hmain = h; tid.h1 = NULL; - tid.id1 = _stacklet_id_current(thrd); - assert(tid.id1 != tid.idmain); - assert(tid.id1 == myid); - assert(status == 0); - status = 1; + cap_check_all(20); + assert(status == 2); + status = 3; return stacklet_switch(thrd, h); } -stacklet_handle stacklet_id_callback_2(stacklet_handle h, void *arg) +stacklet_handle stacklet_captr_callback_2(stacklet_handle h, void *arg) { - stacklet_id myid = _stacklet_id_current(thrd); - assert(_stacklet_with_id(thrd, myid) == NULL); tid.hmain = h; tid.h2 = NULL; + assert(status == 1); + status = 2; + + char *ref2 = (char*)2222; + tid.c2 = _stacklet_capture_stack_pointer(&ref2); + tid.fooref2 = &ref2; + assert(_stacklet_get_captured_pointer(tid.c2) == &ref2); + assert(_stacklet_get_captured_context(tid.c2) == NULL); + + cap_check_all(20); h = stacklet_switch(thrd, h); tid.hmain = h; tid.h2 = NULL; - tid.id2 = _stacklet_id_current(thrd); - assert(tid.id2 != tid.idmain); - assert(tid.id2 != tid.id1); - assert(tid.id2 == myid); - assert(_stacklet_with_id(thrd, tid.idmain) == tid.hmain); - assert(_stacklet_with_id(thrd, tid.id1) == tid.h1); - assert(_stacklet_with_id(thrd, tid.id2) == tid.h2); + cap_check_all(20); - assert(status == 1); - status = 2; + assert(status == 5); + status = 6; return stacklet_switch(thrd, h); } -void test_stacklet_id(void) +void cap_check_all(int depth) { + 
assert(_stacklet_get_captured_pointer(tid.c) == tid.fooref); + assert(_stacklet_get_captured_context(tid.c) == tid.hmain); + assert(*_stacklet_translate_pointer(tid.hmain, tid.fooref) == (char*)-42); + + assert(_stacklet_get_captured_pointer(tid.c1) == tid.fooref1); + assert(_stacklet_get_captured_context(tid.c1) == tid.h1); + assert(*_stacklet_translate_pointer(tid.h1, tid.fooref1) == (char*)1111); + + assert(_stacklet_get_captured_pointer(tid.c2) == tid.fooref2); + assert(_stacklet_get_captured_context(tid.c2) == tid.h2); + assert(*_stacklet_translate_pointer(tid.h2, tid.fooref2) == (char*)2222); + + if (depth > 0) + cap_check_all(depth - 1); + + assert(_stacklet_get_captured_pointer(tid.c) == tid.fooref); + assert(_stacklet_get_captured_context(tid.c) == tid.hmain); + assert(*_stacklet_translate_pointer(tid.hmain, tid.fooref) == (char*)-42); + + assert(_stacklet_get_captured_pointer(tid.c1) == tid.fooref1); + assert(_stacklet_get_captured_context(tid.c1) == tid.h1); + assert(*_stacklet_translate_pointer(tid.h1, tid.fooref1) == (char*)1111); + + assert(_stacklet_get_captured_pointer(tid.c2) == tid.fooref2); + assert(_stacklet_get_captured_context(tid.c2) == tid.h2); + assert(*_stacklet_translate_pointer(tid.h2, tid.fooref2) == (char*)2222); +} + +void test_stacklet_capture(void) +{ + char *foo = (char*)-42; + tid.c = _stacklet_capture_stack_pointer(&foo); + tid.fooref = &foo; + assert(tid.c == (long)tid.fooref); + assert(_stacklet_get_captured_pointer(tid.c) == &foo); + assert(_stacklet_get_captured_context(tid.c) == NULL); + status = 0; - stacklet_handle h1 = stacklet_new(thrd, stacklet_id_callback_1, NULL); - stacklet_handle h2 = stacklet_new(thrd, stacklet_id_callback_2, NULL); + stacklet_handle h1 = stacklet_new(thrd, stacklet_captr_callback_1, NULL); + stacklet_handle h2 = stacklet_new(thrd, stacklet_captr_callback_2, NULL); tid.hmain = NULL; tid.h1 = h1; tid.h2 = h2; - tid.idmain = _stacklet_id_current(thrd); - assert(_stacklet_with_id(thrd, tid.idmain) == NULL); + cap_check_all(20); + assert(_stacklet_capture_stack_pointer(tid.fooref) == tid.c); - assert(status == 0); + assert(status == 2); + status = 3; h1 = stacklet_switch(thrd, h1); tid.hmain = NULL; tid.h1 = h1; - assert(status == 1); + cap_check_all(20); + + assert(status == 4); + status = 5; h2 = stacklet_switch(thrd, h2); tid.hmain = NULL; tid.h2 = h2; - assert(status == 2); - assert(_stacklet_id_of_stacklet(h1) == tid.id1); - assert(_stacklet_id_of_stacklet(h2) == tid.id2); - assert(_stacklet_id_current(thrd) == tid.idmain); - assert(_stacklet_with_id(thrd, tid.idmain) == NULL); - assert(_stacklet_with_id(thrd, tid.id1) == tid.h1); - assert(_stacklet_with_id(thrd, tid.id2) == tid.h2); + cap_check_all(20); + assert(status == 6); h1 = stacklet_switch(thrd, h1); assert(h1 == EMPTY_STACKLET_HANDLE); h2 = stacklet_switch(thrd, h2); @@ -741,7 +790,7 @@ TEST(test_double), TEST(test_random), #endif - TEST(test_stacklet_id), + TEST(test_stacklet_capture), { NULL, NULL } }; From noreply at buildbot.pypy.org Wed Feb 29 17:54:02 2012 From: noreply at buildbot.pypy.org (arigo) Date: Wed, 29 Feb 2012 17:54:02 +0100 (CET) Subject: [pypy-commit] pypy continulet-jit: Random progress. Message-ID: <20120229165402.4E3A68204C@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: continulet-jit Changeset: r53022:e8a39aa24c9b Date: 2012-02-29 17:53 +0100 http://bitbucket.org/pypy/pypy/changeset/e8a39aa24c9b/ Log: Random progress. 
diff --git a/pypy/translator/c/src/stacklet/stacklet.c b/pypy/translator/c/src/stacklet/stacklet.c --- a/pypy/translator/c/src/stacklet/stacklet.c +++ b/pypy/translator/c/src/stacklet/stacklet.c @@ -33,9 +33,13 @@ /************************************************************/ +struct cap_loc_s { + char **cl_original_pointer; + struct stacklet_s **cl_contained_in_stacklet; + struct cap_loc_s *cl_next; +} + struct stacklet_s { - stacklet_id id; /* first field */ - /* The portion of the real stack claimed by this paused tealet. */ char *stack_start; /* the "near" end of the stack */ char *stack_stop; /* the "far" end of the stack */ @@ -54,6 +58,9 @@ * main stack. */ struct stacklet_s *stack_prev; + + /* the captured stack locations */ + struct cap_loc_s *stack_cap_locs; }; void *(*_stacklet_switchstack)(void*(*)(void*, void*), @@ -61,20 +68,18 @@ void (*_stacklet_initialstub)(struct stacklet_thread_s *, stacklet_run_fn, void *) = NULL; -struct stacklet_id_s { - stacklet_handle stacklet; -}; - struct stacklet_thread_s { - stacklet_id g_current_id; /* first field */ struct stacklet_s *g_stack_chain_head; /* NULL <=> running main */ char *g_current_stack_stop; char *g_current_stack_marker; struct stacklet_s *g_source; struct stacklet_s *g_target; - struct stacklet_id_s g_main_id; + struct stacklet_thread_s *g_prev_thread, *g_next_thread; }; +/* circular doubly-linked list */ +static struct stacklet_thread_s *g_all_threads = NULL; + /***************************************************************/ static void g_save(struct stacklet_s* g, char* stop @@ -136,8 +141,6 @@ return -1; stacklet = thrd->g_source; - stacklet->id = thrd->g_current_id; - stacklet->id->stacklet = stacklet; stacklet->stack_start = old_stack_pointer; stacklet->stack_stop = thrd->g_current_stack_stop; stacklet->stack_saved = 0; @@ -237,8 +240,6 @@ memcpy(g->stack_start - stack_saved, g+1, stack_saved); #endif thrd->g_current_stack_stop = g->stack_stop; - thrd->g_current_id = g->id; - thrd->g_current_id->stacklet = NULL; free(g); return EMPTY_STACKLET_HANDLE; } @@ -247,12 +248,12 @@ stacklet_run_fn run, void *run_arg) { struct stacklet_s *result; - stacklet_id sid1 = thrd->g_current_id; + /*stacklet_id sid1 = thrd->g_current_id; stacklet_id sid = malloc(sizeof(struct stacklet_id_s)); if (sid == NULL) { thrd->g_source = NULL; return; - } + }*/ /* The following call returns twice! */ result = (struct stacklet_s *) _stacklet_switchstack(g_initial_save_state, @@ -261,18 +262,18 @@ if (result == NULL) { /* First time it returns. */ if (thrd->g_source == NULL) { /* out of memory */ - free(sid); + /*free(sid);*/ return; } /* Only g_initial_save_state() has run and has created 'g_source'. Call run(). */ - sid->stacklet = NULL; - thrd->g_current_id = sid; + /*sid->stacklet = NULL; + thrd->g_current_id = sid;*/ thrd->g_current_stack_stop = thrd->g_current_stack_marker; result = run(thrd->g_source, run_arg); /* Then switch to 'result'. */ - free(sid); + /*free(sid);*/ thrd->g_target = result; _stacklet_switchstack(g_destroy_state, g_restore_state, thrd); @@ -280,7 +281,7 @@ abort(); } /* The second time it returns. 
*/ - assert(thrd->g_current_id == sid1); + /*assert(thrd->g_current_id == sid1);*/ } /************************************************************/ @@ -299,13 +300,36 @@ thrd = malloc(sizeof(struct stacklet_thread_s)); if (thrd != NULL) { memset(thrd, 0, sizeof(struct stacklet_thread_s)); - thrd->g_current_id = &thrd->g_main_id; + if (g_all_threads == NULL) { + g_all_threads = thrd; + thrd->g_prev_thread = thrd; + thrd->g_next_thread = thrd; + } + else { + struct stacklet_thread_s *next = g_all_threads->g_next_thread; + thrd->g_prev_thread = g_all_threads; + thrd->g_next_thread = next; + g_all_threads->g_next_thread = thrd; + next->g_prev_thread = thrd; + } } return thrd; } void stacklet_deletethread(stacklet_thread_handle thrd) { + /* remove 'thrd' from the circular doubly-linked list */ + stacklet_thread_handle prev = thrd->g_prev_thread; + stacklet_thread_handle next = thrd->g_next_thread; + assert(next->g_prev_thread == thrd); + assert(prev->g_next_thread == thrd); + next->g_prev_thread = prev; + prev->g_next_thread = next; + assert(g_all_threads != NULL); + if (g_all_threads == thrd) { + g_all_threads = (next == thrd) ? NULL : next; + } + /* free it */ free(thrd); } @@ -343,9 +367,9 @@ *pp = target->stack_prev; break; } - assert(target->id->stacklet == target); + /*assert(target->id->stacklet == target); if (target->id != &thrd->g_main_id) - free(target->id); + free(target->id);*/ free(target); } @@ -371,3 +395,64 @@ } return ptr; } + +long _stacklet_capture_stack_pointer(stacklet_thread_handle thrd, + char **stackptr) +{ + if (thrd->g_stack_chain_head == NULL) { + /* running in 'main' */ + return (long)stackptr; + } + else { + fprintf(stderr, "1!\n"); + abort(); + } +} + +char **_stacklet_get_captured_pointer(long captured) +{ + if ((captured & 1) == 0) { + return (char**)captured; + } + else { + fprintf(stderr, "2!\n"); + abort(); + } +} + +stacklet_handle _stacklet_get_captured_context(long captured) +{ + if ((captured & 1) == 0) { + /* it is one of the 'main' stacklets. If it was moved away, + we need to figure out which one it was. */ + char *p = (char *)captured; + struct stacklet_thread_s *thrd = g_all_threads; + if (thrd == NULL) + return NULL; /* no stacklet_thread at all */ + + while (1) { + struct stacklet_s *stacklet = thrd->g_stack_chain_head; + if (stacklet != NULL) { + /* not running 'main'. Find the main stacklet */ + while (stacklet->stack_prev) + stacklet = stacklet->stack_prev; + + /* is 'captured' among the moved-away data? */ + if (stacklet->stack_start <= p && p < stacklet->stack_stop) { + /* yes. to optimize the next calls make g_all_threads + point directly to thrd. */ + g_all_threads = thrd; + return stacklet; + } + } + thrd = thrd->g_next_thread; + if (thrd == g_all_threads) + break; + } + return NULL; + } + else { + fprintf(stderr, "3!\n"); + abort(); + } +} diff --git a/pypy/translator/c/src/stacklet/stacklet.h b/pypy/translator/c/src/stacklet/stacklet.h --- a/pypy/translator/c/src/stacklet/stacklet.h +++ b/pypy/translator/c/src/stacklet/stacklet.h @@ -72,8 +72,14 @@ * the main stacklet. This guarantees that it is possible to use * '_stacklet_get_...()' on a regular address taken before starting * to use stacklets. + * + * XXX assumes a single stacklet_thread_handle per thread + * + * XXX _stacklet_capture_stack_pointer() invalidates all 'long' values + * previously returned for the same stacklet that were for addresses + * later in the stack (i.e. lower). 
*/ -long _stacklet_capture_stack_pointer(char **stackptr); +long _stacklet_capture_stack_pointer(stacklet_thread_handle, char **stackptr); char **_stacklet_get_captured_pointer(long captured); stacklet_handle _stacklet_get_captured_context(long captured); diff --git a/pypy/translator/c/src/stacklet/tests.c b/pypy/translator/c/src/stacklet/tests.c --- a/pypy/translator/c/src/stacklet/tests.c +++ b/pypy/translator/c/src/stacklet/tests.c @@ -628,8 +628,9 @@ /************************************************************/ struct test_captr_s { - long c, c1, c2; + long c, c0, c1, c2; char **fooref; + char **fooref0; char **fooref1; char **fooref2; stacklet_handle hmain; @@ -637,7 +638,7 @@ stacklet_handle h2; } tid; -void cap_check_all(int depth); +int cap_check_all(int depth); stacklet_handle stacklet_captr_callback_1(stacklet_handle h, void *arg) { @@ -647,11 +648,11 @@ status = 1; assert(_stacklet_get_captured_pointer(tid.c) == tid.fooref); - assert(_stacklet_get_captured_context(tid.c) == tid.hmain); + assert(_stacklet_get_captured_context(tid.c) == NULL); assert(*_stacklet_translate_pointer(tid.hmain, tid.fooref) == (char*)-42); char *ref1 = (char*)1111; - tid.c1 = _stacklet_capture_stack_pointer(&ref1); + tid.c1 = _stacklet_capture_stack_pointer(thrd, &ref1); tid.fooref1 = &ref1; assert(_stacklet_get_captured_pointer(tid.c1) == &ref1); assert(_stacklet_get_captured_context(tid.c1) == NULL); @@ -675,13 +676,11 @@ status = 2; char *ref2 = (char*)2222; - tid.c2 = _stacklet_capture_stack_pointer(&ref2); + tid.c2 = _stacklet_capture_stack_pointer(thrd, &ref2); tid.fooref2 = &ref2; assert(_stacklet_get_captured_pointer(tid.c2) == &ref2); assert(_stacklet_get_captured_context(tid.c2) == NULL); - cap_check_all(20); - h = stacklet_switch(thrd, h); tid.hmain = h; tid.h2 = NULL; @@ -694,11 +693,17 @@ return stacklet_switch(thrd, h); } -void cap_check_all(int depth) +int cap_check_all(int depth) { assert(_stacklet_get_captured_pointer(tid.c) == tid.fooref); + /* we always get NULL because it's before the portion of the stack + that is copied away and restored: */ + assert(_stacklet_get_captured_context(tid.c) == NULL); + assert(*_stacklet_translate_pointer(tid.hmain, tid.fooref) == (char*)-42); + + assert(_stacklet_get_captured_pointer(tid.c0) == tid.fooref0); assert(_stacklet_get_captured_context(tid.c) == tid.hmain); - assert(*_stacklet_translate_pointer(tid.hmain, tid.fooref) == (char*)-42); + assert(*_stacklet_translate_pointer(tid.hmain, tid.fooref) == (char*)6363); assert(_stacklet_get_captured_pointer(tid.c1) == tid.fooref1); assert(_stacklet_get_captured_context(tid.c1) == tid.h1); @@ -709,39 +714,26 @@ assert(*_stacklet_translate_pointer(tid.h2, tid.fooref2) == (char*)2222); if (depth > 0) - cap_check_all(depth - 1); - - assert(_stacklet_get_captured_pointer(tid.c) == tid.fooref); - assert(_stacklet_get_captured_context(tid.c) == tid.hmain); - assert(*_stacklet_translate_pointer(tid.hmain, tid.fooref) == (char*)-42); - - assert(_stacklet_get_captured_pointer(tid.c1) == tid.fooref1); - assert(_stacklet_get_captured_context(tid.c1) == tid.h1); - assert(*_stacklet_translate_pointer(tid.h1, tid.fooref1) == (char*)1111); - - assert(_stacklet_get_captured_pointer(tid.c2) == tid.fooref2); - assert(_stacklet_get_captured_context(tid.c2) == tid.h2); - assert(*_stacklet_translate_pointer(tid.h2, tid.fooref2) == (char*)2222); + return cap_check_all(depth - 1) * depth; + return 1; } -void test_stacklet_capture(void) +int cap_with_extra_stack(int depth) { - char *foo = (char*)-42; - tid.c = 
_stacklet_capture_stack_pointer(&foo); - tid.fooref = &foo; - assert(tid.c == (long)tid.fooref); - assert(_stacklet_get_captured_pointer(tid.c) == &foo); - assert(_stacklet_get_captured_context(tid.c) == NULL); + stacklet_handle h1, h2; + char *foo0 = (char*)6363; - status = 0; - stacklet_handle h1 = stacklet_new(thrd, stacklet_captr_callback_1, NULL); - stacklet_handle h2 = stacklet_new(thrd, stacklet_captr_callback_2, NULL); - tid.hmain = NULL; - tid.h1 = h1; - tid.h2 = h2; + if (depth > 0) + return cap_with_extra_stack(depth - 1) * depth; + + tid.c0 = _stacklet_capture_stack_pointer(thrd, &foo0); + tid.fooref0 = &foo0; + assert(tid.c0 == (long)tid.fooref0); + assert(_stacklet_get_captured_pointer(tid.c0) == &foo0); + assert(_stacklet_get_captured_context(tid.c0) == NULL); cap_check_all(20); - assert(_stacklet_capture_stack_pointer(tid.fooref) == tid.c); + assert(_stacklet_capture_stack_pointer(thrd, tid.fooref) == tid.c); assert(status == 2); status = 3; @@ -758,11 +750,31 @@ tid.h2 = h2; cap_check_all(20); + return 1; +} + +void test_stacklet_capture(void) +{ + char *foo = (char*)-42; + tid.c = _stacklet_capture_stack_pointer(thrd, &foo); + tid.fooref = &foo; + assert(tid.c == (long)tid.fooref); + assert(_stacklet_get_captured_pointer(tid.c) == &foo); + assert(_stacklet_get_captured_context(tid.c) == NULL); + + status = 0; + stacklet_handle h1 = stacklet_new(thrd, stacklet_captr_callback_1, NULL); + stacklet_handle h2 = stacklet_new(thrd, stacklet_captr_callback_2, NULL); + tid.hmain = NULL; + tid.h1 = h1; + tid.h2 = h2; + + cap_with_extra_stack(20); assert(status == 6); - h1 = stacklet_switch(thrd, h1); + h1 = stacklet_switch(thrd, tid.h1); assert(h1 == EMPTY_STACKLET_HANDLE); - h2 = stacklet_switch(thrd, h2); + h2 = stacklet_switch(thrd, tid.h2); assert(h2 == EMPTY_STACKLET_HANDLE); } From noreply at buildbot.pypy.org Wed Feb 29 18:20:23 2012 From: noreply at buildbot.pypy.org (arigo) Date: Wed, 29 Feb 2012 18:20:23 +0100 (CET) Subject: [pypy-commit] pypy continulet-jit-2: A different attempt: change jit/backend/x86. If stacklets are actually Message-ID: <20120229172023.0B53E8204C@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: continulet-jit-2 Changeset: r53023:137a6bd449b0 Date: 2012-02-29 18:19 +0100 http://bitbucket.org/pypy/pypy/changeset/137a6bd449b0/ Log: A different attempt: change jit/backend/x86. If stacklets are actually used (as detected by a quasi-immutable field read), we generate backend code that uses malloc()/realloc()/free() instead of storing stuff in the stack. This might solve most of the problems that we get so far by accessing 'vable_token' and 'virtual_token': they would become heap pointers, and so remain valid even if the stack is moved away. The performance impact should be null as long as we don't use stacklets, but when we do, we have to consider: * calls to malloc()/realloc()/free() whose result goes into 'ebp'; maybe later we can use a custom version of these functions, particularly for realloc() at the start of every bridge. * on the other hand, it makes the stack much smaller if everything is JITted, which is good for programs that heavily switch(). 
From noreply at buildbot.pypy.org Wed Feb 29 20:39:55 2012 From: noreply at buildbot.pypy.org (hakanardo) Date: Wed, 29 Feb 2012 20:39:55 +0100 (CET) Subject: [pypy-commit] pypy default: resurrect 3 lost overflow tests Message-ID: <20120229193955.B362B8204C@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: Changeset: r53024:e81b673c6344 Date: 2012-02-29 20:38 +0100 http://bitbucket.org/pypy/pypy/changeset/e81b673c6344/ Log: resurrect 3 lost overflow tests diff --git a/pypy/jit/metainterp/test/test_ajit.py b/pypy/jit/metainterp/test/test_ajit.py --- a/pypy/jit/metainterp/test/test_ajit.py +++ b/pypy/jit/metainterp/test/test_ajit.py @@ -144,7 +144,7 @@ 'int_mul': 1, 'guard_true': 2, 'int_sub': 2}) - def test_loop_invariant_mul_ovf(self): + def test_loop_invariant_mul_ovf1(self): myjitdriver = JitDriver(greens = [], reds = ['y', 'res', 'x']) def f(x, y): res = 0 @@ -235,6 +235,65 @@ 'guard_true': 4, 'int_sub': 4, 'jump': 3, 'int_mul': 3, 'int_add': 4}) + def test_loop_invariant_mul_ovf2(self): + myjitdriver = JitDriver(greens = [], reds = ['y', 'res', 'x']) + def f(x, y): + res = 0 + while y > 0: + myjitdriver.can_enter_jit(x=x, y=y, res=res) + myjitdriver.jit_merge_point(x=x, y=y, res=res) + b = y * 2 + try: + res += ovfcheck(x * x) + b + except OverflowError: + res += 1 + y -= 1 + return res + res = self.meta_interp(f, [sys.maxint, 7]) + assert res == f(sys.maxint, 7) + self.check_trace_count(1) + res = self.meta_interp(f, [6, 7]) + assert res == 308 + + def test_loop_invariant_mul_bridge_ovf1(self): + myjitdriver = JitDriver(greens = [], reds = ['y', 'res', 'x1', 'x2']) + def f(x1, x2, y): + res = 0 + while y > 0: + myjitdriver.can_enter_jit(x1=x1, x2=x2, y=y, res=res) + myjitdriver.jit_merge_point(x1=x1, x2=x2, y=y, res=res) + try: + res += ovfcheck(x1 * x1) + except OverflowError: + res += 1 + if y<32 and (y>>2)&1==0: + x1, x2 = x2, x1 + y -= 1 + return res + res = self.meta_interp(f, [6, sys.maxint, 48]) + assert res == f(6, sys.maxint, 48) + + def test_loop_invariant_mul_bridge_ovf2(self): + myjitdriver = JitDriver(greens = [], reds = ['y', 'res', 'x1', 'x2', 'n']) + def f(x1, x2, n, y): + res = 0 + while y > 0: + myjitdriver.can_enter_jit(x1=x1, x2=x2, y=y, res=res, n=n) + myjitdriver.jit_merge_point(x1=x1, x2=x2, y=y, res=res, n=n) + try: + res += ovfcheck(x1 * x1) + except OverflowError: + res += 1 + y -= 1 + if y&4 == 0: + x1, x2 = x2, x1 + return res + res = self.meta_interp(f, [sys.maxint, 6, 32, 48]) + assert res == f(sys.maxint, 6, 32, 48) + res = self.meta_interp(f, [6, sys.maxint, 32, 48]) + assert res == f(6, sys.maxint, 32, 48) + + def test_loop_invariant_intbox(self): myjitdriver = JitDriver(greens = [], reds = ['y', 'res', 'x']) class I: From noreply at buildbot.pypy.org Wed Feb 29 20:39:56 2012 From: noreply at buildbot.pypy.org (hakanardo) Date: Wed, 29 Feb 2012 20:39:56 +0100 (CET) Subject: [pypy-commit] pypy default: hg merge Message-ID: <20120229193956.F19B0820D1@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: Changeset: r53025:5c8ac2b9d3bd Date: 2012-02-29 20:39 +0100 http://bitbucket.org/pypy/pypy/changeset/5c8ac2b9d3bd/ Log: hg merge diff --git a/pypy/jit/metainterp/pyjitpl.py b/pypy/jit/metainterp/pyjitpl.py --- a/pypy/jit/metainterp/pyjitpl.py +++ b/pypy/jit/metainterp/pyjitpl.py @@ -2349,7 +2349,7 @@ # warmstate.py. 
virtualizable_box = self.virtualizable_boxes[-1] virtualizable = vinfo.unwrap_virtualizable_box(virtualizable_box) - assert not vinfo.gettoken(virtualizable) + assert not vinfo.is_token_nonnull_gcref(virtualizable) # fill the virtualizable with the local boxes self.synchronize_virtualizable() # diff --git a/pypy/jit/metainterp/resume.py b/pypy/jit/metainterp/resume.py --- a/pypy/jit/metainterp/resume.py +++ b/pypy/jit/metainterp/resume.py @@ -1101,14 +1101,14 @@ virtualizable = self.decode_ref(numb.nums[index]) if self.resume_after_guard_not_forced == 1: # in the middle of handle_async_forcing() - assert vinfo.gettoken(virtualizable) - vinfo.settoken(virtualizable, vinfo.TOKEN_NONE) + assert vinfo.is_token_nonnull_gcref(virtualizable) + vinfo.reset_token_gcref(virtualizable) else: # just jumped away from assembler (case 4 in the comment in # virtualizable.py) into tracing (case 2); check that vable_token # is and stays 0. Note the call to reset_vable_token() in # warmstate.py. - assert not vinfo.gettoken(virtualizable) + assert not vinfo.is_token_nonnull_gcref(virtualizable) return vinfo.write_from_resume_data_partial(virtualizable, self, numb) def load_value_of_type(self, TYPE, tagged): diff --git a/pypy/jit/metainterp/virtualizable.py b/pypy/jit/metainterp/virtualizable.py --- a/pypy/jit/metainterp/virtualizable.py +++ b/pypy/jit/metainterp/virtualizable.py @@ -262,15 +262,15 @@ force_now._dont_inline_ = True self.force_now = force_now - def gettoken(virtualizable): + def is_token_nonnull_gcref(virtualizable): virtualizable = cast_gcref_to_vtype(virtualizable) - return virtualizable.vable_token - self.gettoken = gettoken + return bool(virtualizable.vable_token) + self.is_token_nonnull_gcref = is_token_nonnull_gcref - def settoken(virtualizable, token): + def reset_token_gcref(virtualizable): virtualizable = cast_gcref_to_vtype(virtualizable) - virtualizable.vable_token = token - self.settoken = settoken + virtualizable.vable_token = VirtualizableInfo.TOKEN_NONE + self.reset_token_gcref = reset_token_gcref def _freeze_(self): return True From noreply at buildbot.pypy.org Wed Feb 29 21:29:14 2012 From: noreply at buildbot.pypy.org (fijal) Date: Wed, 29 Feb 2012 21:29:14 +0100 (CET) Subject: [pypy-commit] jitviewer default: Update the log with 1.8 Message-ID: <20120229202914.88EE48204C@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: Changeset: r188:00da9b3de09c Date: 2012-02-28 22:44 -0800 http://bitbucket.org/pypy/jitviewer/changeset/00da9b3de09c/ Log: Update the log with 1.8 diff --git a/log.pypylog b/log.pypylog --- a/log.pypylog +++ b/log.pypylog @@ -1,131 +1,131 @@ -[7e18c33e717a] {jit-backend-dump +[101d9320139e7] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf56000 +0 4157415641554154415341524151415057565554535251504889E341BBC0BAF20041FFD34889DF4883E4F041BBC0C9D20041FFD3488D65D8415F415E415D415C5B5DC3 -[7e18c33f9678] jit-backend-dump} -[7e18c33fb1c9] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc7447000 +0 4157415641554154415341524151415057565554535251504889E341BBD01BF30041FFD34889DF4883E4F041BB60C4D30041FFD3488D65D8415F415E415D415C5B5DC3 +[101d932027117] jit-backend-dump} +[101d932028a7b] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf56043 +0 4157415641554154415341524151415057565554535251504889E341BB70BAF20041FFD34889DF4883E4F041BBC0C9D20041FFD3488D65D8415F415E415D415C5B5DC3 -[7e18c33fcadd] jit-backend-dump} -[7e18c340052a] {jit-backend-dump +SYS_EXECUTABLE 
/home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc7447043 +0 4157415641554154415341524151415057565554535251504889E341BB801BF30041FFD34889DF4883E4F041BB60C4D30041FFD3488D65D8415F415E415D415C5B5DC3 +[101d93202a9db] jit-backend-dump} +[101d93202e2e7] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf56086 +0 4157415641554154415341524151415057565554535251504889E34881EC80000000F20F110424F20F114C2408F20F11542410F20F115C2418F20F11642420F20F116C2428F20F11742430F20F117C2438F2440F11442440F2440F114C2448F2440F11542450F2440F115C2458F2440F11642460F2440F116C2468F2440F11742470F2440F117C247841BBC0BAF20041FFD34889DF4883E4F041BBC0C9D20041FFD3488D65D8415F415E415D415C5B5DC3 -[7e18c340297b] jit-backend-dump} -[7e18c34037a3] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc7447086 +0 4157415641554154415341524151415057565554535251504889E34881EC80000000F20F110424F20F114C2408F20F11542410F20F115C2418F20F11642420F20F116C2428F20F11742430F20F117C2438F2440F11442440F2440F114C2448F2440F11542450F2440F115C2458F2440F11642460F2440F116C2468F2440F11742470F2440F117C247841BBD01BF30041FFD34889DF4883E4F041BB60C4D30041FFD3488D65D8415F415E415D415C5B5DC3 +[101d9320319ab] jit-backend-dump} +[101d932032bff] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf56137 +0 4157415641554154415341524151415057565554535251504889E34881EC80000000F20F110424F20F114C2408F20F11542410F20F115C2418F20F11642420F20F116C2428F20F11742430F20F117C2438F2440F11442440F2440F114C2448F2440F11542450F2440F115C2458F2440F11642460F2440F116C2468F2440F11742470F2440F117C247841BB70BAF20041FFD34889DF4883E4F041BBC0C9D20041FFD3488D65D8415F415E415D415C5B5DC3 -[7e18c3405738] jit-backend-dump} -[7e18c3408501] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc7447137 +0 4157415641554154415341524151415057565554535251504889E34881EC80000000F20F110424F20F114C2408F20F11542410F20F115C2418F20F11642420F20F116C2428F20F11742430F20F117C2438F2440F11442440F2440F114C2448F2440F11542450F2440F115C2458F2440F11642460F2440F116C2468F2440F11742470F2440F117C247841BB801BF30041FFD34889DF4883E4F041BB60C4D30041FFD3488D65D8415F415E415D415C5B5DC3 +[101d93203580f] jit-backend-dump} +[101d9320397f7] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf56210 +0 41BBD0B9F20041FFD3B803000000488D65D8415F415E415D415C5B5DC3 -[7e18c340953c] jit-backend-dump} -[7e18c340f7f4] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc7447210 +0 41BBE01AF30041FFD3B803000000488D65D8415F415E415D415C5B5DC3 +[101d93203ae3f] jit-backend-dump} +[101d932041e57] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf5622d +0 F20F11442410F20F114C2418F20F11542420F20F115C2428F20F11642430F20F116C2438F20F11742440F20F117C2448F2440F11442450F2440F114C2458F2440F11542460F2440F115C2468F2440F11642470F2440F116C2478F2440F11B42480000000F2440F11BC24880000004829C24C8945A048894D804C8955B0488975904C894DA848897D984889D741BBC080CE0041FFE3 -[7e18c34118c4] jit-backend-dump} -[7e18c341703c] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc744722d +0 F20F11442410F20F114C2418F20F11542420F20F115C2428F20F11642430F20F116C2438F20F11742440F20F117C2448F2440F11442450F2440F114C2458F2440F11542460F2440F115C2468F2440F11642470F2440F116C2478F2440F11B42480000000F2440F11BC24880000004829C24C8955B048894D80488975904C8945A04C894DA848897D984889D741BB1096CF0041FFE3 +[101d932044b27] 
jit-backend-dump} +[101d93204bad3] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf562c2 +0 4C8B45A0488B4D804C8B55B0488B75904C8B4DA8488B7D98F20F10442410F20F104C2418F20F10542420F20F105C2428F20F10642430F20F106C2438F20F10742440F20F107C2448F2440F10442450F2440F104C2458F2440F10542460F2440F105C2468F2440F10642470F2440F106C2478F2440F10B42480000000F2440F10BC24880000004885C07409488B1425B0685501C349BB1062F5CB747F000041FFE3 -[7e18c3419028] jit-backend-dump} -[7e18c341ba40] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc74472c2 +0 4C8B55B0488B4D80488B75904C8B45A04C8B4DA8488B7D98F20F10442410F20F104C2418F20F10542420F20F105C2428F20F10642430F20F106C2438F20F10742440F20F107C2448F2440F10442450F2440F104C2458F2440F10542460F2440F105C2468F2440F10642470F2440F106C2478F2440F10B42480000000F2440F10BC24880000004885C07409488B142530255601C349BB107244C70F7F000041FFE3 +[101d93204e53b] jit-backend-dump} +[101d932051243] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf56363 +0 57565251415041514883EC40F20F110424F20F114C2408F20F11542410F20F115C2418F20F11642420F20F116C2428F20F11742430F20F117C24384889E741BBF0F0A80041FFD3488B0425203B9D024885C0753CF20F107C2438F20F10742430F20F106C2428F20F10642420F20F105C2418F20F10542410F20F104C2408F20F1004244883C44041594158595A5E5FC341BB70BAF20041FFD3B8030000004883C478C3 -[7e18c341d8eb] jit-backend-dump} -[7e18c341e71f] {jit-backend-counts -[7e18c341f46c] jit-backend-counts} -[7e18c39d710c] {jit-backend -[7e18c3ee50f0] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc7447363 +0 57565251415041514883EC40F20F110424F20F114C2408F20F11542410F20F115C2418F20F11642420F20F116C2428F20F11742430F20F117C24384889E741BBD036A90041FFD3488B0425A046A0024885C0753CF20F107C2438F20F10742430F20F106C2428F20F10642420F20F105C2418F20F10542410F20F104C2408F20F1004244883C44041594158595A5E5FC341BB801BF30041FFD3B8030000004883C478C3 +[101d932053c4b] jit-backend-dump} +[101d93205af97] {jit-backend-counts +[101d93205bb73] jit-backend-counts} +[101d93272b48f] {jit-backend +[101d932e3d033] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf56406 +0 
488B0425C0399D024829E0483B0425E08C5001760D49BB6363F5CB747F000041FFD3554889E5534154415541564157488DA50000000049BBF0F082CE747F00004D8B3B4983C70149BBF0F082CE747F00004D893B4C8B7F504C8B77784C0FB6AF960000004C8B67604C8B97800000004C8B4F584C8B4768498B5810498B5018498B4020498B48284D8B40304889BD70FFFFFF4889B568FFFFFF4C89BD60FFFFFF4C89A558FFFFFF4C898D50FFFFFF48899548FFFFFF48898D40FFFFFF4C898538FFFFFF49BB08F182CE747F00004D8B034983C00149BB08F182CE747F00004D89034983FA030F85000000008138104D00000F85000000004C8B50104D85D20F84000000004C8B4008498B4A108139302303000F85000000004D8B5208498B4A08498B52104D8B52184983F8000F8C000000004D39D00F8D000000004D89C14C0FAFC24989CC4C01C14983C1014C8948084983FD000F85000000004883FB017206813BF81600000F850000000049BBE05C09CC747F00004D39DE0F85000000004C8B73084983C6010F8000000000488B1C25C8399D024883FB000F8C0000000048898D30FFFFFF49BB20F182CE747F0000498B0B4883C10149BB20F182CE747F000049890B4D39D10F8D000000004C89C94C0FAFCA4C89E34D01CC4883C101488948084D89F14983C6010F80000000004C8B0C25C8399D024983F9000F8C000000004C89A530FFFFFF4989C94989DCE993FFFFFF49BB0060F5CB747F000041FFD32944404838354C510C5400585C030400000049BB0060F5CB747F000041FFD34440004838354C0C54585C030500000049BB0060F5CB747F000041FFD3444000284838354C0C54585C030600000049BB0060F5CB747F000041FFD34440002104284838354C0C54585C030700000049BB0060F5CB747F000041FFD3444000212909054838354C0C54585C030800000049BB0060F5CB747F000041FFD34440002109054838354C0C54585C030900000049BB0060F5CB747F000041FFD335444048384C0C54005C05030A00000049BB0060F5CB747F000041FFD344400C48384C005C05030B00000049BB0060F5CB747F000041FFD3444038484C0C005C05030C00000049BB0060F5CB747F000041FFD344400C39484C0005030D00000049BB0060F5CB747F000041FFD34440484C003905030E00000049BB0060F5CB747F000041FFD34440484C003905030F00000049BB0060F5CB747F000041FFD3444000250931484C3961031000000049BB0060F5CB747F000041FFD3444039484C00312507031100000049BB0060F5CB747F000041FFD34440484C0039310707031200000049BB0060F5CB747F000041FFD34440484C00393107070313000000 -[7e18c3f0324c] jit-backend-dump} -[7e18c3f03d0e] {jit-backend-addr -Loop 0 ( #19 FOR_ITER) has address 7f74cbf5643c to 7f74cbf56619 (bootstrap 7f74cbf56406) -[7e18c3f0502b] jit-backend-addr} -[7e18c3f05d6c] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc7447406 +0 
488B04254045A0024829E0483B0425E03C5101760D49BB637344C70F7F000041FFD3554889E5534154415541564157488DA50000000049BBF0E021CA0F7F00004D8B3B4983C70149BBF0E021CA0F7F00004D893B4C8B7F504C8B77784C0FB6AF960000004C8B67604C8B97800000004C8B4F584C8B4768498B5810498B5018498B4020498B48284D8B40304889BD70FFFFFF4889B568FFFFFF4C89BD60FFFFFF4C89A558FFFFFF4C898D50FFFFFF48899548FFFFFF48898D40FFFFFF4C898538FFFFFF49BB08E121CA0F7F00004D8B034983C00149BB08E121CA0F7F00004D89034983FA030F85000000008138806300000F85000000004C8B50104D85D20F84000000004C8B4008498B4A108139582D03000F85000000004D8B5208498B4A08498B52104D8B52184983F8000F8C000000004D39D00F8D000000004D89C14C0FAFC24989CC4C01C14983C1014C8948084983FD000F85000000004883FB017206813BF82200000F850000000049BB206055C70F7F00004D39DE0F85000000004C8B73084983C6010F8000000000488B1C254845A0024883FB000F8C0000000048898D30FFFFFF49BB20E121CA0F7F0000498B0B4883C10149BB20E121CA0F7F000049890B4D39D10F8D000000004C89C94C0FAFCA4C89E34D01CC4883C101488948084D89F14983C6010F80000000004C8B0C254845A0024983F9000F8C000000004C89A530FFFFFF4989C94989DCE993FFFFFF49BB007044C70F7F000041FFD32944404838354C510C5400585C030400000049BB007044C70F7F000041FFD34440004838354C0C54585C030500000049BB007044C70F7F000041FFD3444000284838354C0C54585C030600000049BB007044C70F7F000041FFD34440002104284838354C0C54585C030700000049BB007044C70F7F000041FFD3444000212909054838354C0C54585C030800000049BB007044C70F7F000041FFD34440002109054838354C0C54585C030900000049BB007044C70F7F000041FFD335444048384C0C54005C05030A00000049BB007044C70F7F000041FFD344400C48384C005C05030B00000049BB007044C70F7F000041FFD3444038484C0C005C05030C00000049BB007044C70F7F000041FFD344400C39484C0005030D00000049BB007044C70F7F000041FFD34440484C003905030E00000049BB007044C70F7F000041FFD34440484C003905030F00000049BB007044C70F7F000041FFD3444000250931484C3961031000000049BB007044C70F7F000041FFD3444039484C00312507031100000049BB007044C70F7F000041FFD34440484C0039310707031200000049BB007044C70F7F000041FFD34440484C00393107070313000000 +[101d932e5dfe7] jit-backend-dump} +[101d932e5ec67] {jit-backend-addr +Loop 0 ( #19 FOR_ITER) has address 7f0fc744743c to 7f0fc7447619 (bootstrap 7f0fc7447406) +[101d932e60303] jit-backend-addr} +[101d932e61047] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf56438 +0 30FFFFFF -[7e18c3f06bf1] jit-backend-dump} -[7e18c3f079f8] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc7447438 +0 30FFFFFF +[101d932e62273] jit-backend-dump} +[101d932e62f17] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf564ed +0 28010000 -[7e18c3f08526] jit-backend-dump} -[7e18c3f08a60] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc74474ed +0 28010000 +[101d932e63cfb] jit-backend-dump} +[101d932e64423] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf564f9 +0 3B010000 -[7e18c3f09621] jit-backend-dump} -[7e18c3f09c00] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc74474f9 +0 3B010000 +[101d932e65267] jit-backend-dump} +[101d932e658ff] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf56506 +0 4B010000 -[7e18c3f0a6c5] jit-backend-dump} -[7e18c3f0ab39] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc7447506 +0 4B010000 +[101d932e666d3] jit-backend-dump} +[101d932e66c5f] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf5651a +0 55010000 -[7e18c3f0b517] 
jit-backend-dump} -[7e18c3f0b97c] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc744751a +0 55010000 +[101d932e678b7] jit-backend-dump} +[101d932e67e33] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf56534 +0 5B010000 -[7e18c3f0c32d] jit-backend-dump} -[7e18c3f0c7bc] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc7447534 +0 5B010000 +[101d932e68a53] jit-backend-dump} +[101d932e68fd3] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf5653d +0 73010000 -[7e18c3f0d287] jit-backend-dump} -[7e18c3f0d80f] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc744753d +0 73010000 +[101d932e69df7] jit-backend-dump} +[101d932e6a463] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf5655c +0 74010000 -[7e18c3f0e340] jit-backend-dump} -[7e18c3f0e7c3] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc744755c +0 74010000 +[101d932e6b25f] jit-backend-dump} +[101d932e6b7d3] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf5656e +0 7F010000 -[7e18c3f0f168] jit-backend-dump} -[7e18c3f0f5cd] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc744756e +0 7F010000 +[101d932e6c417] jit-backend-dump} +[101d932e6c96b] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf56581 +0 87010000 -[7e18c3f0ff6c] jit-backend-dump} -[7e18c3f103f2] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc7447581 +0 87010000 +[101d932e6d66f] jit-backend-dump} +[101d932e6dbc3] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf5658f +0 94010000 -[7e18c3f10d79] jit-backend-dump} -[7e18c3f115a1] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc744758f +0 94010000 +[101d932e6e80b] jit-backend-dump} +[101d932e6f127] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf565a1 +0 B5010000 -[7e18c3f1217d] jit-backend-dump} -[7e18c3f12714] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc74475a1 +0 B5010000 +[101d932e6ff33] jit-backend-dump} +[101d932e705bf] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf565cf +0 A0010000 -[7e18c3f131af] jit-backend-dump} -[7e18c3f13617] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc74475cf +0 A0010000 +[101d932e744c3] jit-backend-dump} +[101d932e74c73] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf565f1 +0 9A010000 -[7e18c3f13fcb] jit-backend-dump} -[7e18c3f1459e] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc74475f1 +0 9A010000 +[101d932e75abb] jit-backend-dump} +[101d932e761d3] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf56603 +0 BE010000 -[7e18c3f14f22] jit-backend-dump} -[7e18c3f15b01] jit-backend} -[7e18c3f17cc1] {jit-log-opt-loop +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc7447603 +0 BE010000 +[101d932e76f5b] jit-backend-dump} +[101d932e77c97] jit-backend} +[101d932e7b5d7] {jit-log-opt-loop # Loop 0 ( #19 FOR_ITER) : loop with 73 ops [p0, p1] +84: p2 = getfield_gc(p0, descr=) @@ -141,15 +141,15 @@ +131: p16 = 
getarrayitem_gc(p8, 3, descr=) +135: p18 = getarrayitem_gc(p8, 4, descr=) +139: p19 = getfield_gc(p0, descr=) -+139: label(p0, p1, p2, p3, i4, p5, i6, i7, p10, p12, p14, p16, p18, descr=TargetToken(140139616183984)) ++139: label(p0, p1, p2, p3, i4, p5, i6, i7, p10, p12, p14, p16, p18, descr=TargetToken(139705745523760)) debug_merge_point(0, ' #19 FOR_ITER') +225: guard_value(i6, 3, descr=) [i6, p1, p0, p2, p3, i4, p5, i7, p10, p12, p14, p16, p18] -+235: guard_class(p14, 38352528, descr=) [p1, p0, p14, p2, p3, i4, p5, p10, p12, p16, p18] ++235: guard_class(p14, 38562496, descr=) [p1, p0, p14, p2, p3, i4, p5, p10, p12, p16, p18] +247: p22 = getfield_gc(p14, descr=) +251: guard_nonnull(p22, descr=) [p1, p0, p14, p22, p2, p3, i4, p5, p10, p12, p16, p18] +260: i23 = getfield_gc(p14, descr=) +264: p24 = getfield_gc(p22, descr=) -+268: guard_class(p24, 38538416, descr=) [p1, p0, p14, i23, p24, p22, p2, p3, i4, p5, p10, p12, p16, p18] ++268: guard_class(p24, 38745240, descr=) [p1, p0, p14, i23, p24, p22, p2, p3, i4, p5, p10, p12, p16, p18] +280: p26 = getfield_gc(p22, descr=) +284: i27 = getfield_gc_pure(p26, descr=) +288: i28 = getfield_gc_pure(p26, descr=) @@ -175,11 +175,11 @@ debug_merge_point(0, ' #32 STORE_FAST') debug_merge_point(0, ' #35 JUMP_ABSOLUTE') +397: guard_not_invalidated(, descr=) [p1, p0, p2, p5, p14, i42, i34] -+397: i44 = getfield_raw(43858376, descr=) ++397: i44 = getfield_raw(44057928, descr=) +405: i46 = int_lt(i44, 0) guard_false(i46, descr=) [p1, p0, p2, p5, p14, i42, i34] debug_merge_point(0, ' #19 FOR_ITER') -+415: label(p0, p1, p2, p5, i42, i34, p14, i36, i29, i28, i27, descr=TargetToken(140139616184064)) ++415: label(p0, p1, p2, p5, i42, i34, p14, i36, i29, i28, i27, descr=TargetToken(139705745523840)) debug_merge_point(0, ' #19 FOR_ITER') +452: i47 = int_ge(i36, i29) guard_false(i47, descr=) [p1, p0, p14, i36, i28, i27, p2, p5, i42, i34] @@ -196,104 +196,104 @@ debug_merge_point(0, ' #32 STORE_FAST') debug_merge_point(0, ' #35 JUMP_ABSOLUTE') +495: guard_not_invalidated(, descr=) [p1, p0, p2, p5, p14, i51, i49, None, None] -+495: i53 = getfield_raw(43858376, descr=) ++495: i53 = getfield_raw(44057928, descr=) +503: i54 = int_lt(i53, 0) guard_false(i54, descr=) [p1, p0, p2, p5, p14, i51, i49, None, None] debug_merge_point(0, ' #19 FOR_ITER') -+513: jump(p0, p1, p2, p5, i51, i49, p14, i50, i29, i28, i27, descr=TargetToken(140139616184064)) ++513: jump(p0, p1, p2, p5, i51, i49, p14, i50, i29, i28, i27, descr=TargetToken(139705745523840)) +531: --end of the loop-- -[7e18c3fba5a9] jit-log-opt-loop} -[7e18c4351e29] {jit-backend -[7e18c43b55d5] {jit-backend-dump +[101d932f3c31b] jit-log-opt-loop} +[101d933493233] {jit-backend +[101d93351df77] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf567e0 +0 
488B0425C0399D024829E0483B0425E08C5001760D49BB6363F5CB747F000041FFD3554889E5534154415541564157488DA50000000049BBD8F082CE747F00004D8B3B4983C70149BBD8F082CE747F00004D893B4C8B7F504C8B77784C0FB6AF960000004C8B67604C8B97800000004C8B4F584C8B4768498B5810498B5018498B40204D8B40284889BD70FFFFFF4889B568FFFFFF4C89BD60FFFFFF4C89A558FFFFFF4C898D50FFFFFF48898548FFFFFF4C898540FFFFFF49BB38F182CE747F00004D8B034983C00149BB38F182CE747F00004D89034983FA020F85000000004883FA017206813AF81600000F85000000004983FD000F850000000049BB985D09CC747F00004D39DE0F85000000004C8B72084981FE102700000F8D0000000049BB00000000000000804D39DE0F84000000004C89F0B90200000048899538FFFFFF48898530FFFFFF489948F7F94889D048C1FA3F41BE020000004921D64C01F04883F8000F85000000004883FB017206813BF81600000F8500000000488B43084883C0010F8000000000488B9D30FFFFFF4883C3014C8B3425C8399D024983FE000F8C0000000049BB50F182CE747F00004D8B334983C60149BB50F182CE747F00004D89334881FB102700000F8D0000000049BB00000000000000804C39DB0F840000000048898528FFFFFF4889D8B90200000048898520FFFFFF489948F7F94889D048C1FA3FBB020000004821D34801D84883F8000F8500000000488B8528FFFFFF4883C0010F8000000000488B9D20FFFFFF4883C301488B1425C8399D024883FA000F8C00000000E958FFFFFF49BB0060F5CB747F000041FFD32944404838354C510C085458031400000049BB0060F5CB747F000041FFD34440084838354C0C5458031500000049BB0060F5CB747F000041FFD335444048384C0C0858031600000049BB0060F5CB747F000041FFD3444038484C0C0858031700000049BB0060F5CB747F000041FFD3444008484C0C031800000049BB0060F5CB747F000041FFD344400839484C0C031900000049BB0060F5CB747F000041FFD34440484C0C5C01031A00000049BB0060F5CB747F000041FFD344400C484C5C07031B00000049BB0060F5CB747F000041FFD344400C01484C5C07031C00000049BB0060F5CB747F000041FFD34440484C010D07031D00000049BB0060F5CB747F000041FFD34440484C010D07031E00000049BB0060F5CB747F000041FFD34440484C010D031F00000049BB0060F5CB747F000041FFD344400D484C0107032000000049BB0060F5CB747F000041FFD34440484C016569032100000049BB0060F5CB747F000041FFD3444001484C076569032200000049BB0060F5CB747F000041FFD34440484C0D01070707032300000049BB0060F5CB747F000041FFD34440484C0D010707070324000000 -[7e18c43be371] jit-backend-dump} -[7e18c43be9a7] {jit-backend-addr -Loop 1 ( #15 LOAD_FAST) has address 7f74cbf56816 to 7f74cbf56a30 (bootstrap 7f74cbf567e0) -[7e18c43bf74b] jit-backend-addr} -[7e18c43bfe05] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc74477e0 +0 
488B04254045A0024829E0483B0425E03C5101760D49BB637344C70F7F000041FFD3554889E5534154415541564157488DA50000000049BBD8E021CA0F7F00004D8B3B4983C70149BBD8E021CA0F7F00004D893B4C8B7F504C8B77784C0FB6AF960000004C8B67604C8B97800000004C8B4F584C8B4768498B5810498B5018498B40204D8B40284889BD70FFFFFF4889B568FFFFFF4C89BD60FFFFFF4C89A558FFFFFF4C898D50FFFFFF48898548FFFFFF4C898540FFFFFF49BB38E121CA0F7F00004D8B034983C00149BB38E121CA0F7F00004D89034983FA020F85000000004883FA017206813AF82200000F85000000004983FD000F850000000049BBD86055C70F7F00004D39DE0F85000000004C8B72084981FE102700000F8D0000000049BB00000000000000804D39DE0F84000000004C89F0B90200000048899538FFFFFF48898530FFFFFF489948F7F94889D048C1FA3F41BE020000004921D64C01F04883F8000F85000000004883FB017206813BF82200000F8500000000488B43084883C0010F8000000000488B9D30FFFFFF4883C3014C8B34254845A0024983FE000F8C0000000049BB50E121CA0F7F00004D8B334983C60149BB50E121CA0F7F00004D89334881FB102700000F8D0000000049BB00000000000000804C39DB0F840000000048898528FFFFFF4889D8B90200000048898520FFFFFF489948F7F94889D048C1FA3FBB020000004821D34801D84883F8000F8500000000488B8528FFFFFF4883C0010F8000000000488B9D20FFFFFF4883C301488B14254845A0024883FA000F8C00000000E958FFFFFF49BB007044C70F7F000041FFD32944404838354C510C085458031400000049BB007044C70F7F000041FFD34440084838354C0C5458031500000049BB007044C70F7F000041FFD335444048384C0C0858031600000049BB007044C70F7F000041FFD3444038484C0C0858031700000049BB007044C70F7F000041FFD3444008484C0C031800000049BB007044C70F7F000041FFD344400839484C0C031900000049BB007044C70F7F000041FFD34440484C0C5C01031A00000049BB007044C70F7F000041FFD344400C484C5C07031B00000049BB007044C70F7F000041FFD344400C01484C5C07031C00000049BB007044C70F7F000041FFD34440484C0D0107031D00000049BB007044C70F7F000041FFD34440484C0D0107031E00000049BB007044C70F7F000041FFD34440484C0D01031F00000049BB007044C70F7F000041FFD344400D484C0701032000000049BB007044C70F7F000041FFD34440484C016965032100000049BB007044C70F7F000041FFD3444001484C076965032200000049BB007044C70F7F000041FFD34440484C0D01070707032300000049BB007044C70F7F000041FFD34440484C0D010707070324000000 +[101d93353cbd3] jit-backend-dump} +[101d93353d813] {jit-backend-addr +Loop 1 ( #15 LOAD_FAST) has address 7f0fc7447816 to 7f0fc7447a30 (bootstrap 7f0fc74477e0) +[101d93353ee6f] jit-backend-addr} +[101d93353fb2f] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf56812 +0 20FFFFFF -[7e18c43c0ab9] jit-backend-dump} -[7e18c43c1193] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc7447812 +0 20FFFFFF +[101d933540ca7] jit-backend-dump} +[101d9335418d7] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf568bc +0 70010000 -[7e18c43c1cdd] jit-backend-dump} -[7e18c43c2219] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc74478bc +0 70010000 +[101d933542823] jit-backend-dump} +[101d933542e1b] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf568ce +0 7C010000 -[7e18c43ca71b] jit-backend-dump} -[7e18c43cae15] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc74478ce +0 7C010000 +[101d933543c4f] jit-backend-dump} +[101d93354423b] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf568d8 +0 8E010000 -[7e18c43cb8f3] jit-backend-dump} -[7e18c43cbe13] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc74478d8 +0 8E010000 +[101d933544f7b] jit-backend-dump} +[101d9335455b3] 
{jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf568eb +0 96010000 -[7e18c43cc7d3] jit-backend-dump} -[7e18c43cccf7] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc74478eb +0 96010000 +[101d9335461bf] jit-backend-dump} +[101d93354670b] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf568fc +0 9F010000 -[7e18c43cd745] jit-backend-dump} -[7e18c43cdc4f] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc74478fc +0 9F010000 +[101d93354732b] jit-backend-dump} +[101d9335479cf] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf5690f +0 A4010000 -[7e18c43ce623] jit-backend-dump} -[7e18c43cea43] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc744790f +0 A4010000 +[101d9335486b3] jit-backend-dump} +[101d933548d3b] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf56947 +0 85010000 -[7e18c43cf30d] jit-backend-dump} -[7e18c43cf73b] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc7447947 +0 85010000 +[101d933549a0f] jit-backend-dump} +[101d933549f5b] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf56959 +0 8C010000 -[7e18c43cfff9] jit-backend-dump} -[7e18c43d0425] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc7447959 +0 8C010000 +[101d93354ab6b] jit-backend-dump} +[101d93354b0bb] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf56967 +0 97010000 -[7e18c43d0ebd] jit-backend-dump} -[7e18c43d14e9] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc7447967 +0 97010000 +[101d93354bcf3] jit-backend-dump} +[101d93354c46f] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf56984 +0 AD010000 -[7e18c43d1f75] jit-backend-dump} -[7e18c43d24eb] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc7447984 +0 AD010000 +[101d93354d1ff] jit-backend-dump} +[101d93354d887] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf569af +0 9B010000 -[7e18c43d2da3] jit-backend-dump} -[7e18c43d31db] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc74479af +0 9B010000 +[101d93354e61b] jit-backend-dump} +[101d93354ec37] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf569c2 +0 A0010000 -[7e18c43d3aa7] jit-backend-dump} -[7e18c43d3eb7] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc74479c2 +0 A0010000 +[101d93354f8fb] jit-backend-dump} +[101d93354ff73] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf569f9 +0 82010000 -[7e18c43d4771] jit-backend-dump} -[7e18c43d4bd1] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc74479f9 +0 82010000 +[101d933550b97] jit-backend-dump} +[101d9335510ef] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf56a0a +0 8A010000 -[7e18c43d5615] jit-backend-dump} -[7e18c43d5bc5] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc7447a0a +0 8A010000 +[101d933551d13] jit-backend-dump} +[101d93355230b] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf56a27 +0 
A2010000 -[7e18c43d6569] jit-backend-dump} -[7e18c43d6f39] jit-backend} -[7e18c43d8261] {jit-log-opt-loop +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc7447a27 +0 A2010000 +[101d9335530cb] jit-backend-dump} +[101d933553d03] jit-backend} +[101d933556d57] {jit-log-opt-loop # Loop 1 ( #15 LOAD_FAST) : loop with 92 ops [p0, p1] +84: p2 = getfield_gc(p0, descr=) @@ -308,7 +308,7 @@ +127: p14 = getarrayitem_gc(p8, 2, descr=) +131: p16 = getarrayitem_gc(p8, 3, descr=) +135: p17 = getfield_gc(p0, descr=) -+135: label(p0, p1, p2, p3, i4, p5, i6, i7, p10, p12, p14, p16, descr=TargetToken(140139616188064)) ++135: label(p0, p1, p2, p3, i4, p5, i6, i7, p10, p12, p14, p16, descr=TargetToken(139705745528160)) debug_merge_point(0, ' #15 LOAD_FAST') +214: guard_value(i6, 2, descr=) [i6, p1, p0, p2, p3, i4, p5, i7, p10, p12, p14, p16] +224: guard_nonnull_class(p12, ConstClass(W_IntObject), descr=) [p1, p0, p12, p2, p3, i4, p5, p10, p14, p16] @@ -346,35 +346,35 @@ +395: i40 = int_add(i22, 1) debug_merge_point(0, ' #70 STORE_FAST') debug_merge_point(0, ' #73 JUMP_ABSOLUTE') -+406: guard_not_invalidated(, descr=) [p1, p0, p2, p5, i38, i40, None] -+406: i42 = getfield_raw(43858376, descr=) ++406: guard_not_invalidated(, descr=) [p1, p0, p2, p5, i40, i38, None] ++406: i42 = getfield_raw(44057928, descr=) +414: i44 = int_lt(i42, 0) -guard_false(i44, descr=) [p1, p0, p2, p5, i38, i40, None] +guard_false(i44, descr=) [p1, p0, p2, p5, i40, i38, None] debug_merge_point(0, ' #15 LOAD_FAST') -+424: label(p0, p1, p2, p5, i38, i40, descr=TargetToken(140139616188144)) ++424: label(p0, p1, p2, p5, i38, i40, descr=TargetToken(139705745528240)) debug_merge_point(0, ' #15 LOAD_FAST') debug_merge_point(0, ' #18 LOAD_CONST') debug_merge_point(0, ' #21 COMPARE_OP') +454: i45 = int_lt(i40, 10000) -guard_true(i45, descr=) [p1, p0, p2, p5, i38, i40] +guard_true(i45, descr=) [p1, p0, p2, p5, i40, i38] debug_merge_point(0, ' #24 POP_JUMP_IF_FALSE') debug_merge_point(0, ' #27 LOAD_FAST') debug_merge_point(0, ' #30 LOAD_CONST') debug_merge_point(0, ' #33 BINARY_MODULO') +467: i46 = int_eq(i40, -9223372036854775808) -guard_false(i46, descr=) [p1, p0, i40, p2, p5, i38, None] +guard_false(i46, descr=) [p1, p0, i40, p2, p5, None, i38] +486: i47 = int_mod(i40, 2) +513: i48 = int_rshift(i47, 63) +520: i49 = int_and(2, i48) +528: i50 = int_add(i47, i49) debug_merge_point(0, ' #34 POP_JUMP_IF_FALSE') +531: i51 = int_is_true(i50) -guard_false(i51, descr=) [p1, p0, p2, p5, i50, i38, i40] +guard_false(i51, descr=) [p1, p0, p2, p5, i50, i40, i38] debug_merge_point(0, ' #53 LOAD_FAST') debug_merge_point(0, ' #56 LOAD_CONST') debug_merge_point(0, ' #59 INPLACE_ADD') +541: i52 = int_add_ovf(i38, 1) -guard_no_overflow(, descr=) [p1, p0, i52, p2, p5, None, i38, i40] +guard_no_overflow(, descr=) [p1, p0, i52, p2, p5, None, i40, i38] debug_merge_point(0, ' #60 STORE_FAST') debug_merge_point(0, ' #63 LOAD_FAST') debug_merge_point(0, ' #66 LOAD_CONST') @@ -383,60 +383,60 @@ debug_merge_point(0, ' #70 STORE_FAST') debug_merge_point(0, ' #73 JUMP_ABSOLUTE') +569: guard_not_invalidated(, descr=) [p1, p0, p2, p5, i53, i52, None, None, None] -+569: i54 = getfield_raw(43858376, descr=) ++569: i54 = getfield_raw(44057928, descr=) +577: i55 = int_lt(i54, 0) guard_false(i55, descr=) [p1, p0, p2, p5, i53, i52, None, None, None] debug_merge_point(0, ' #15 LOAD_FAST') -+587: jump(p0, p1, p2, p5, i52, i53, descr=TargetToken(140139616188144)) ++587: jump(p0, p1, p2, p5, i52, i53, descr=TargetToken(139705745528240)) +592: --end of the 
loop-- -[7e18c441d911] jit-log-opt-loop} -[7e18c44e87e5] {jit-backend -[7e18c452918b] {jit-backend-dump +[101d9335bb7f7] jit-log-opt-loop} +[101d9336ee0c7] {jit-backend +[101d933752597] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf56bf5 +0 488DA50000000049BB68F182CE747F00004D8B234983C40149BB68F182CE747F00004D89234C8BA558FFFFFF498B54241048C740100000000041813C24288801000F85000000004D8B6424184983FC020F85000000004885D20F8500000000488B9570FFFFFF4C8B6268488B0425B0685501488D5020483B1425C8685501761A49BB2D62F5CB747F000041FFD349BBC262F5CB747F000041FFD348891425B068550148C700F8160000488B9570FFFFFF40C68295000000014C8B8D60FFFFFFF64204017417504151524889D74C89CE41BBB0E5C40041FFD35A4159584C894A50F6420401741D50524889D749BBE05C09CC747F00004C89DE41BBB0E5C40041FFD35A5849BBE05C09CC747F00004C895A7840C682960000000048C742600000000048C782800000000200000048C742582A00000041F644240401742641F6442404407518504C89E7BE000000004889C241BB10E3C40041FFD358EB0641804C24FF0149894424104889C24883C01048C700F81600004C8B8D30FFFFFF4C89480841F644240401742841F644240440751A52504C89E7BE010000004889C241BB10E3C40041FFD3585AEB0641804C24FF01498944241849C74424200000000049C74424280000000049C7442430000000004C89720848891425D084720141BBC0BAF20041FFD3B801000000488D65D8415F415E415D415C5B5DC349BB0060F5CB747F000041FFD344403048083961032500000049BB0060F5CB747F000041FFD344403148083961032600000049BB0060F5CB747F000041FFD34440084839610327000000 -[7e18c452f77d] jit-backend-dump} -[7e18c4530253] {jit-backend-addr -bridge out of Guard 16 has address 7f74cbf56bf5 to 7f74cbf56dee -[7e18c4530fcd] jit-backend-addr} -[7e18c453162d] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc7447bf5 +0 488DA50000000049BB68E121CA0F7F00004D8B234983C40149BB68E121CA0F7F00004D89234C8BA558FFFFFF498B54241048C740100000000041813C24388F01000F85000000004D8B6424184983FC020F85000000004885D20F8500000000488B9570FFFFFF4C8B6268488B042530255601488D5020483B142548255601761A49BB2D7244C70F7F000041FFD349BBC27244C70F7F000041FFD3488914253025560148C700F8220000488B9570FFFFFF40C68295000000014C8B8D60FFFFFFF64204017417504151524889D74C89CE41BBF0C4C50041FFD35A4159584C894A50F6420401741D50524889D749BB206055C70F7F00004C89DE41BBF0C4C50041FFD35A5849BB206055C70F7F00004C895A7840C682960000000048C742600000000048C782800000000200000048C742582A00000041F644240401742641F6442404407518504C89E7BE000000004889C241BB50C2C50041FFD358EB0641804C24FF0149894424104889C24883C01048C700F82200004C8B8D30FFFFFF4C89480841F644240401742841F644240440751A52504C89E7BE010000004889C241BB50C2C50041FFD3585AEB0641804C24FF01498944241849C74424200000000049C74424280000000049C7442430000000004C89720848891425B039720141BBD01BF30041FFD3B801000000488D65D8415F415E415D415C5B5DC349BB007044C70F7F000041FFD344403048086139032500000049BB007044C70F7F000041FFD344403148086139032600000049BB007044C70F7F000041FFD34440084861390327000000 +[101d93375b0db] jit-backend-dump} +[101d93375bdb3] {jit-backend-addr +bridge out of Guard 16 has address 7f0fc7447bf5 to 7f0fc7447dee +[101d93375ce13] jit-backend-addr} +[101d93375d5b3] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf56bf8 +0 A0FEFFFF -[7e18c4532319] jit-backend-dump} -[7e18c4532965] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc7447bf8 +0 A0FEFFFF +[101d93375e7df] jit-backend-dump} +[101d933766b53] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf56c38 +0 B2010000 -[7e18c45334f7] jit-backend-dump} -[7e18c4533a7b] {jit-backend-dump 
+SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc7447c38 +0 B2010000 +[101d933767ccb] jit-backend-dump} +[101d933768297] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf56c47 +0 BC010000 -[7e18c45344c3] jit-backend-dump} -[7e18c453490d] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc7447c47 +0 BC010000 +[101d93376905f] jit-backend-dump} +[101d9337695f3] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf56c50 +0 CC010000 -[7e18c45351f1] jit-backend-dump} -[7e18c4535a93] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc7447c50 +0 CC010000 +[101d93376a23b] jit-backend-dump} +[101d93376abf7] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf565cf +0 22060000 -[7e18c4536359] jit-backend-dump} -[7e18c4536ac9] jit-backend} -[7e18c453732b] {jit-log-opt-bridge +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc74475cf +0 22060000 +[101d93376b887] jit-backend-dump} +[101d93376c36f] jit-backend} +[101d93376d483] {jit-log-opt-bridge # bridge out of Guard 16 with 28 ops [p0, p1, p2, i3, i4, i5, p6, p7, i8, i9] debug_merge_point(0, ' #38 POP_BLOCK') +37: p10 = getfield_gc_pure(p7, descr=) +49: setfield_gc(p2, ConstPtr(ptr11), descr=) -+57: guard_class(p7, 38433192, descr=) [p0, p1, p7, p6, p10, i8, i9] ++57: guard_class(p7, 38639224, descr=) [p0, p1, p7, p6, p10, i9, i8] +71: i13 = getfield_gc_pure(p7, descr=) -+76: guard_value(i13, 2, descr=) [p0, p1, i13, p6, p10, i8, i9] ++76: guard_value(i13, 2, descr=) [p0, p1, i13, p6, p10, i9, i8] debug_merge_point(0, ' #39 LOAD_FAST') debug_merge_point(0, ' #42 RETURN_VALUE') -+86: guard_isnull(p10, descr=) [p0, p1, p10, p6, i8, i9] ++86: guard_isnull(p10, descr=) [p0, p1, p10, p6, i9, i8] +95: p15 = getfield_gc(p1, descr=) +106: p16 = getfield_gc(p1, descr=) p18 = new_with_vtable(ConstClass(W_IntObject)) @@ -455,155 +455,155 @@ +446: setarrayitem_gc(p15, 3, ConstPtr(ptr32), descr=) +455: setarrayitem_gc(p15, 4, ConstPtr(ptr32), descr=) +464: setfield_gc(p18, i8, descr=) -+468: finish(p18, descr=) ++468: finish(p18, descr=) +505: --end of the loop-- -[7e18c4558fdf] jit-log-opt-bridge} -[7e18c4a4b601] {jit-backend -[7e18c4e146bb] {jit-backend-dump +[101d9337940af] jit-log-opt-bridge} +[101d933e4fadb] {jit-backend +[101d9342684e3] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf56e87 +0 
488DA50000000049BB80F182CE747F0000498B034883C00149BB80F182CE747F0000498903488B8570FFFFFF4C8B780849BB908B07CC747F00004D39DF0F85000000004D8B771049BBA88B07CC747F00004D39DE0F850000000041BB30698D0041FFD34C8B78404C8B70504D85F60F85000000004C8B70284983FE000F85000000004C8B3425403E86014981FE207088010F85000000004C8B3425C8399D024983FE000F8C0000000048898518FFFFFF488B0425B0685501488D9048010000483B1425C8685501761A49BB2D62F5CB747F000041FFD349BBC262F5CB747F000041FFD348891425B068550148C700388001004889C24881C09800000048C7008800000048C74008050000004989C64883C03848C700F81600004989C54883C01048C700F81600004989C44883C01048C700104D00004989C24883C01848C700A82D00004989C14883C01848C7008800000048C74008000000004989C04883C01048C7004083010048896808488BBD18FFFFFFF6470401741E5741515241524150504889C641BBB0E5C40041FFD3584158415A5A41595F48894740488BB570FFFFFF48896E1848C742700200000048C7425813000000C78290000000150000004C897A3049BBE05C09CC747F00004C895A7848C782800000000300000049C74508010000004D896E104D89661849C742080100000049BBC0DA73CE747F00004D89590849C7411000F3A0014D894A104D8956204C8972684C89422849BBE0DA73CE747F00004C895A6049BB908B07CC747F00004C895A0848899510FFFFFF48898508FFFFFF48C78578FFFFFF280000004889FE4889D749BB0664F5CB747F000041FFD34883F80174154889C7488BB510FFFFFF41BB40E9940041FFD3EB23488B8510FFFFFF48C7401800000000488B0425D084720148C70425D0847201000000004883BD78FFFFFF000F8C0000000048833C25203B9D02000F8500000000488BBD18FFFFFF488B77504885F60F8500000000488B7728488B9510FFFFFF48C74250000000004883FE000F8500000000488B77404C8B42304C0FB6B294000000F647040174185641505257504C89C641BBB0E5C40041FFD3585F5A41585E4C8947404D85F60F85000000004C8BB508FFFFFF49C74608FDFFFFFF8138F81600000F85000000004C8B7008488BBD28FFFFFF4C01F70F8000000000488B8520FFFFFF4883C0010F80000000004C8B3425C8399D024983FE000F8C0000000049BB98F182CE747F00004D8B334983C60149BB98F182CE747F00004D89334881F8102700000F8D0000000049BB00000000000000804C39D80F8400000000B90200000048898500FFFFFF489948F7F94889D048C1FA3F41BE020000004921D64C01F04883F8000F85000000004889F84883C7010F8000000000488B8500FFFFFF4883C0014C8B3425C8399D024983FE000F8C000000004889C34889F849BB8869F5CB747F000041FFE349BB0060F5CB747F000041FFD344003C484C6569032900000049BB0060F5CB747F000041FFD34400383C484C6569032A00000049BB0060F5CB747F000041FFD344003C484C6569032B00000049BB0060F5CB747F000041FFD344400038484C3C156569032C00000049BB0060F5CB747F000041FFD3444000484C3C156569032D00000049BB0060F5CB747F000041FFD3444000484C3C156569032E00000049BB0060F5CB747F000041FFD344400038484C3C156569032F00000049BB0060F5CB747F000041FFD3444000484C3C156569033000000049BB4360F5CB747F000041FFD344406C700074484C6569032800000049BB4360F5CB747F000041FFD344406C700074484C6569033100000049BB0060F5CB747F000041FFD344401C00701874484C6569033200000049BB0060F5CB747F000041FFD3444000081C74484C6569033300000049BB0060F5CB747F000041FFD344400018081C74484C6569033400000049BB0060F5CB747F000041FFD3444000484C6569033500000049BB0060F5CB747F000041FFD34440001D484C6569033600000049BB0060F5CB747F000041FFD3444001484C1D0769033700000049BB0060F5CB747F000041FFD34440484C011D0707033800000049BB0060F5CB747F000041FFD34440484C011D0707033900000049BB0060F5CB747F000041FFD34440484C1D01033A00000049BB0060F5CB747F000041FFD3444001484C1D07033B00000049BB0060F5CB747F000041FFD34440484C011D79033C00000049BB0060F5CB747F000041FFD344401D484C070179033D00000049BB0060F5CB747F000041FFD34440484C1D01070707033E00000049BB0060F5CB747F000041FFD34440484C1D01070707033F000000 -[7e18c4e313b6] jit-backend-dump} -[7e18c4e31efc] {jit-backend-addr -bridge out of Guard 33 has address 7f74cbf56e87 to 7f74cbf572ab -[7e18c4e33270] 
jit-backend-addr} -[7e18c4e33e6d] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc7447e87 +0 488DA50000000049BB80E121CA0F7F0000498B034883C00149BB80E121CA0F7F0000498903488B8570FFFFFF4C8B780849BB302855C70F7F00004D39DF0F85000000004D8B771049BBF02855C70F7F00004D39DE0F850000000041BB201B8D0041FFD34C8B78404C8B70504D85F60F85000000004C8B70284983FE000F85000000004C8B342500D785014981FE201288010F85000000004C8B34254845A0024983FE000F8C0000000048898518FFFFFF488B042530255601488D9048010000483B142548255601761A49BB2D7244C70F7F000041FFD349BBC27244C70F7F000041FFD3488914253025560148C700488701004889C24881C09800000048C7008800000048C74008050000004989C64883C03848C700F82200004989C54883C01048C700F82200004989C44883C01048C700806300004989C24883C01848C700783600004989C14883C01848C7008800000048C74008000000004989C04883C01048C700508A010048896808488BBD18FFFFFFF6470401741E4151524152505741504889C641BBF0C4C50041FFD341585F58415A5A415948894740488BB570FFFFFF48896E184C897A3049C74508010000004D896E104D89661849C74110400FA10149BB80881ACA0F7F00004D8959084D894A1049C74208010000004D8956204C89726848C742700200000049BB302855C70F7F00004C895A0848C742581300000048C7828000000003000000C782900000001500000049BB206055C70F7F00004C895A7849BBA0881ACA0F7F00004C895A604C89422848899510FFFFFF48898508FFFFFF48C78578FFFFFF280000004889FE4889D749BB067444C70F7F000041FFD34883F80174154889C7488BB510FFFFFF41BB4091940041FFD3EB23488B8510FFFFFF48C7401800000000488B0425B039720148C70425B0397201000000004883BD78FFFFFF000F8C0000000048833C25A046A002000F8500000000488B9518FFFFFF488B7A504885FF0F8500000000488B7A28488BB510FFFFFF48C74650000000004883FF000F8500000000488B7A404C8B46304C0FB6B694000000F6420401741B5256504150574889D74C89C641BBF0C4C50041FFD35F4158585E5A4C8942404D85F60F85000000004C8BB508FFFFFF49C74608FDFFFFFF8138F82200000F85000000004C8B7008488B9528FFFFFF4C01F20F8000000000488B8520FFFFFF4883C0010F80000000004C8B34254845A0024983FE000F8C0000000049BB98E121CA0F7F00004D8B334983C60149BB98E121CA0F7F00004D89334881F8102700000F8D0000000049BB00000000000000804C39D80F8400000000B90200000048899500FFFFFF488985F8FEFFFF489948F7F94889D048C1FA3F41BE020000004921D64C01F04883F8000F8500000000488B8500FFFFFF4883C0010F80000000004C8BB5F8FEFFFF4983C601488B14254845A0024883FA000F8C000000004C89F349BB887944C70F7F000041FFE349BB007044C70F7F000041FFD344003C484C6569032900000049BB007044C70F7F000041FFD34400383C484C6569032A00000049BB007044C70F7F000041FFD344003C484C6569032B00000049BB007044C70F7F000041FFD344400038484C3C156569032C00000049BB007044C70F7F000041FFD3444000484C3C156569032D00000049BB007044C70F7F000041FFD3444000484C3C156569032E00000049BB007044C70F7F000041FFD344400038484C3C156569032F00000049BB007044C70F7F000041FFD3444000484C3C156569033000000049BB437044C70F7F000041FFD344406C700074484C6569032800000049BB437044C70F7F000041FFD344406C700074484C6569033100000049BB007044C70F7F000041FFD344400800701C74484C6569033200000049BB007044C70F7F000041FFD3444000180874484C6569033300000049BB007044C70F7F000041FFD34440001C180874484C6569033400000049BB007044C70F7F000041FFD3444000484C6569033500000049BB007044C70F7F000041FFD344400009484C6569033600000049BB007044C70F7F000041FFD3444001484C090769033700000049BB007044C70F7F000041FFD34440484C01090707033800000049BB007044C70F7F000041FFD34440484C01090707033900000049BB007044C70F7F000041FFD34440484C0109033A00000049BB007044C70F7F000041FFD3444001484C0709033B00000049BB007044C70F7F000041FFD34440484C017D79033C00000049BB007044C70F7F000041FFD3444001484C077D79033D00000049BB007044C70F7F000041FFD34440484C0139070707033E00000049BB007044C70F7F000041FFD34440484C013907070703
3F000000 +[101d9342870eb] jit-backend-dump} +[101d934287cc7] {jit-backend-addr +bridge out of Guard 33 has address 7f0fc7447e87 to 7f0fc74482b6 +[101d934289067] jit-backend-addr} +[101d934289bfb] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf56e8a +0 70FEFFFF -[7e18c4e34de2] jit-backend-dump} -[7e18c4e35a12] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc7447e8a +0 70FEFFFF +[101d93428af1b] jit-backend-dump} +[101d93428ba07] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf56ec6 +0 E1030000 -[7e18c4e3657c] jit-backend-dump} -[7e18c4e36a8c] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc7447ec6 +0 EC030000 +[101d93428c88b] jit-backend-dump} +[101d93428cfbf] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf56edd +0 E3030000 -[7e18c4e3749d] jit-backend-dump} -[7e18c4e37a88] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc7447edd +0 EE030000 +[101d93428dcd3] jit-backend-dump} +[101d93428e49b] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf56ef7 +0 FC030000 -[7e18c4e38442] jit-backend-dump} -[7e18c4e389eb] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc7447ef7 +0 07040000 +[101d93428f367] jit-backend-dump} +[101d93428f9f3] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf56f05 +0 0A040000 -[7e18c4e394da] jit-backend-dump} -[7e18c4e39ace] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc7447f05 +0 15040000 +[101d934290813] jit-backend-dump} +[101d934290f17] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf56f1a +0 2B040000 -[7e18c4e3a45e] jit-backend-dump} -[7e18c4e3a8e7] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc7447f1a +0 36040000 +[101d934291c43] jit-backend-dump} +[101d9342950ef] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf56f2c +0 35040000 -[7e18c4e3b259] jit-backend-dump} -[7e18c4e3b700] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc7447f2c +0 40040000 +[101d93429620f] jit-backend-dump} +[101d93429687b] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf57131 +0 4B020000 -[7e18c4e3c07b] jit-backend-dump} -[7e18c4e3c4ef] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc7448131 +0 56020000 +[101d9342976bb] jit-backend-dump} +[101d934297d93] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf57140 +0 58020000 -[7e18c4e3cfc9] jit-backend-dump} -[7e18c4e3d572] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc7448140 +0 63020000 +[101d934298bd7] jit-backend-dump} +[101d934299233] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf57154 +0 60020000 -[7e18c4e3e046] jit-backend-dump} -[7e18c4e3e4ea] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc7448154 +0 6B020000 +[101d934299ec3] jit-backend-dump} +[101d93429a4ff] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf57171 +0 60020000 -[7e18c4e3ee4d] jit-backend-dump} -[7e18c4e3f2eb] {jit-backend-dump +SYS_EXECUTABLE 
/home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc7448171 +0 6B020000 +[101d93429b213] jit-backend-dump} +[101d93429b7ab] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf571ac +0 41020000 -[7e18c4e3fc4e] jit-backend-dump} -[7e18c4e400da] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc74481af +0 49020000 +[101d93429c3bb] jit-backend-dump} +[101d93429c923] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf571c7 +0 43020000 -[7e18c4e40a55] jit-backend-dump} -[7e18c4e40ff8] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc74481ca +0 4B020000 +[101d93429d653] jit-backend-dump} +[101d93429dccb] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf571db +0 48020000 -[7e18c4e41b7a] jit-backend-dump} -[7e18c4e42111] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc74481de +0 50020000 +[101d93429e98b] jit-backend-dump} +[101d93429ef73] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf571ec +0 51020000 -[7e18c4e42beb] jit-backend-dump} -[7e18c4e434d6] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc74481ef +0 59020000 +[101d93429fc8b] jit-backend-dump} +[101d9342a0737] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf571fe +0 73020000 -[7e18c4e43e54] jit-backend-dump} -[7e18c4e442e3] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc7448201 +0 7B020000 +[101d9342a13ab] jit-backend-dump} +[101d9342a192f] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf57229 +0 62020000 -[7e18c4e44c58] jit-backend-dump} -[7e18c4e450d8] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc744822c +0 6A020000 +[101d9342a25a7] jit-backend-dump} +[101d9342a2c27] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf5723c +0 67020000 -[7e18c4e45e04] jit-backend-dump} -[7e18c4e4638c] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc744823f +0 6F020000 +[101d9342a3acb] jit-backend-dump} +[101d9342a410b] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf5726a +0 52020000 -[7e18c4e46eab] jit-backend-dump} -[7e18c4e47322] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc7448274 +0 53020000 +[101d9342a4e27] jit-backend-dump} +[101d9342a53af] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf57277 +0 5E020000 -[7e18c4e47d75] jit-backend-dump} -[7e18c4e4824f] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc7448285 +0 5B020000 +[101d9342a6027] jit-backend-dump} +[101d9342a65db] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf57294 +0 76020000 -[7e18c4e48bb2] jit-backend-dump} -[7e18c4e493c2] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc74482a2 +0 73020000 +[101d9342a7263] jit-backend-dump} +[101d9342a7acb] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf569f9 +0 8A040000 -[7e18c4e49d2b] jit-backend-dump} -[7e18c4e4a8fb] jit-backend} -[7e18c4e4b8e2] {jit-log-opt-bridge -# bridge out of Guard 33 with 137 ops +SYS_EXECUTABLE 
/home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc74479f9 +0 8A040000 +[101d9342a88cb] jit-backend-dump} +[101d9342a9513] jit-backend} +[101d9342aabeb] {jit-log-opt-bridge +# bridge out of Guard 33 with 138 ops [p0, p1, p2, p3, i4, i5, i6] debug_merge_point(0, ' #37 LOAD_FAST') debug_merge_point(0, ' #40 LOAD_GLOBAL') +37: p7 = getfield_gc(p1, descr=) -+48: guard_value(p7, ConstPtr(ptr8), descr=) [p0, p1, p7, p2, p3, i5, i6] ++48: guard_value(p7, ConstPtr(ptr8), descr=) [p0, p1, p7, p2, p3, i6, i5] +67: p9 = getfield_gc(p7, descr=) -+71: guard_value(p9, ConstPtr(ptr10), descr=) [p0, p1, p9, p7, p2, p3, i5, i6] -+90: guard_not_invalidated(, descr=) [p0, p1, p7, p2, p3, i5, i6] ++71: guard_value(p9, ConstPtr(ptr10), descr=) [p0, p1, p9, p7, p2, p3, i6, i5] ++90: guard_not_invalidated(, descr=) [p0, p1, p7, p2, p3, i6, i5] debug_merge_point(0, ' #43 CALL_FUNCTION') +90: p12 = call(ConstClass(getexecutioncontext), descr=) +99: p13 = getfield_gc(p12, descr=) +103: i14 = force_token() +103: p15 = getfield_gc(p12, descr=) -+107: guard_isnull(p15, descr=) [p0, p1, p12, p15, p2, p3, p13, i14, i5, i6] ++107: guard_isnull(p15, descr=) [p0, p1, p12, p15, p2, p3, p13, i14, i6, i5] +116: i16 = getfield_gc(p12, descr=) +120: i17 = int_is_zero(i16) -guard_true(i17, descr=) [p0, p1, p12, p2, p3, p13, i14, i5, i6] +guard_true(i17, descr=) [p0, p1, p12, p2, p3, p13, i14, i6, i5] debug_merge_point(1, ' #0 LOAD_CONST') debug_merge_point(1, ' #3 STORE_FAST') debug_merge_point(1, ' #6 SETUP_LOOP') debug_merge_point(1, ' #9 LOAD_GLOBAL') -+130: guard_not_invalidated(, descr=) [p0, p1, p12, p2, p3, p13, i14, i5, i6] ++130: guard_not_invalidated(, descr=) [p0, p1, p12, p2, p3, p13, i14, i6, i5] +130: p19 = getfield_gc(ConstPtr(ptr18), descr=) -+138: guard_value(p19, ConstPtr(ptr20), descr=) [p0, p1, p12, p19, p2, p3, p13, i14, i5, i6] ++138: guard_value(p19, ConstPtr(ptr20), descr=) [p0, p1, p12, p19, p2, p3, p13, i14, i6, i5] debug_merge_point(1, ' #12 LOAD_CONST') debug_merge_point(1, ' #15 CALL_FUNCTION') debug_merge_point(1, ' #18 GET_ITER') @@ -614,253 +614,254 @@ debug_merge_point(1, ' #31 INPLACE_ADD') debug_merge_point(1, ' #32 STORE_FAST') debug_merge_point(1, ' #35 JUMP_ABSOLUTE') -+151: i22 = getfield_raw(43858376, descr=) ++151: i22 = getfield_raw(44057928, descr=) +159: i24 = int_lt(i22, 0) -guard_false(i24, descr=) [p0, p1, p12, p2, p3, p13, i14, i5, i6] +guard_false(i24, descr=) [p0, p1, p12, p2, p3, p13, i14, i6, i5] debug_merge_point(1, ' #19 FOR_ITER') +169: i25 = force_token() -p27 = new_with_vtable(38431160) +p27 = new_with_vtable(38637192) p29 = new_array(5, descr=) p31 = new_with_vtable(ConstClass(W_IntObject)) p33 = new_with_vtable(ConstClass(W_IntObject)) -p35 = new_with_vtable(38352528) +p35 = new_with_vtable(38562496) p37 = new_with_vtable(ConstClass(W_ListObject)) p39 = new_array(0, descr=) -p41 = new_with_vtable(38431936) +p41 = new_with_vtable(38637968) +359: setfield_gc(p41, i14, descr=) setfield_gc(p12, p41, descr=) +410: setfield_gc(p1, i25, descr=) -+421: setfield_gc(p27, 2, descr=) -+429: setfield_gc(p27, 19, descr=) -+437: setfield_gc(p27, 21, descr=) -+447: setfield_gc(p27, p13, descr=) -+451: setfield_gc(p27, ConstPtr(ptr45), descr=) -+465: setfield_gc(p27, 3, descr=) -+476: setfield_gc(p31, 1, descr=) -+484: setarrayitem_gc(p29, 0, p31, descr=) -+488: setarrayitem_gc(p29, 1, p33, descr=) -+492: setfield_gc(p35, 1, descr=) -+500: setfield_gc(p37, ConstPtr(ptr51), descr=) -+514: setfield_gc(p37, ConstPtr(ptr52), descr=) -+522: setfield_gc(p35, p37, descr=) -+526: 
setarrayitem_gc(p29, 2, p35, descr=) -+530: setfield_gc(p27, p29, descr=) -+534: setfield_gc(p27, p39, descr=) -+538: setfield_gc(p27, ConstPtr(ptr54), descr=) -+552: setfield_gc(p27, ConstPtr(ptr8), descr=) ++421: setfield_gc(p27, p13, descr=) ++425: setfield_gc(p31, 1, descr=) ++433: setarrayitem_gc(p29, 0, p31, descr=) ++437: setarrayitem_gc(p29, 1, p33, descr=) ++441: setfield_gc(p37, ConstPtr(ptr45), descr=) ++449: setfield_gc(p37, ConstPtr(ptr46), descr=) ++463: setfield_gc(p35, p37, descr=) ++467: setfield_gc(p35, 1, descr=) ++475: setarrayitem_gc(p29, 2, p35, descr=) ++479: setfield_gc(p27, p29, descr=) ++483: setfield_gc(p27, 2, descr=) ++491: setfield_gc(p27, ConstPtr(ptr8), descr=) ++505: setfield_gc(p27, 19, descr=) ++513: setfield_gc(p27, 3, descr=) ++524: setfield_gc(p27, 21, descr=) ++534: setfield_gc(p27, ConstPtr(ptr53), descr=) ++548: setfield_gc(p27, ConstPtr(ptr54), descr=) ++562: setfield_gc(p27, p39, descr=) +566: p55 = call_assembler(p27, p12, descr=) -guard_not_forced(, descr=) [p0, p1, p12, p27, p55, p41, p2, p3, i5, i6] -+686: guard_no_exception(, descr=) [p0, p1, p12, p27, p55, p41, p2, p3, i5, i6] +guard_not_forced(, descr=) [p0, p1, p12, p27, p55, p41, p2, p3, i6, i5] ++686: keepalive(p27) ++686: guard_no_exception(, descr=) [p0, p1, p12, p27, p55, p41, p2, p3, i6, i5] +701: p56 = getfield_gc(p12, descr=) -+712: guard_isnull(p56, descr=) [p0, p1, p12, p55, p27, p56, p41, p2, p3, i5, i6] ++712: guard_isnull(p56, descr=) [p0, p1, p12, p55, p27, p56, p41, p2, p3, i6, i5] +721: i57 = getfield_gc(p12, descr=) +725: setfield_gc(p27, ConstPtr(ptr58), descr=) +740: i59 = int_is_true(i57) -guard_false(i59, descr=) [p0, p1, p55, p27, p12, p41, p2, p3, i5, i6] +guard_false(i59, descr=) [p0, p1, p55, p27, p12, p41, p2, p3, i6, i5] +750: p60 = getfield_gc(p12, descr=) +754: p61 = getfield_gc(p27, descr=) +758: i62 = getfield_gc(p27, descr=) setfield_gc(p12, p61, descr=) -+800: guard_false(i62, descr=) [p0, p1, p55, p60, p27, p12, p41, p2, p3, i5, i6] ++803: guard_false(i62, descr=) [p0, p1, p55, p60, p27, p12, p41, p2, p3, i6, i5] debug_merge_point(0, ' #46 INPLACE_ADD') -+809: setfield_gc(p41, -3, descr=) -+824: guard_class(p55, ConstClass(W_IntObject), descr=) [p0, p1, p55, p2, p3, i5, i6] -+836: i65 = getfield_gc_pure(p55, descr=) -+840: i66 = int_add_ovf(i5, i65) -guard_no_overflow(, descr=) [p0, p1, p55, i66, p2, p3, i5, i6] ++812: setfield_gc(p41, -3, descr=) ++827: guard_class(p55, ConstClass(W_IntObject), descr=) [p0, p1, p55, p2, p3, i6, i5] ++839: i65 = getfield_gc_pure(p55, descr=) ++843: i66 = int_add_ovf(i6, i65) +guard_no_overflow(, descr=) [p0, p1, p55, i66, p2, p3, i6, i5] debug_merge_point(0, ' #47 STORE_FAST') debug_merge_point(0, ' #50 JUMP_FORWARD') debug_merge_point(0, ' #63 LOAD_FAST') debug_merge_point(0, ' #66 LOAD_CONST') debug_merge_point(0, ' #69 INPLACE_ADD') -+856: i68 = int_add_ovf(i6, 1) -guard_no_overflow(, descr=) [p0, p1, i68, p2, p3, i66, None, i6] ++859: i68 = int_add_ovf(i5, 1) +guard_no_overflow(, descr=) [p0, p1, i68, p2, p3, i66, None, i5] debug_merge_point(0, ' #70 STORE_FAST') debug_merge_point(0, ' #73 JUMP_ABSOLUTE') -+873: guard_not_invalidated(, descr=) [p0, p1, p2, p3, i68, i66, None, None] -+873: i71 = getfield_raw(43858376, descr=) -+881: i73 = int_lt(i71, 0) ++876: guard_not_invalidated(, descr=) [p0, p1, p2, p3, i68, i66, None, None] ++876: i71 = getfield_raw(44057928, descr=) ++884: i73 = int_lt(i71, 0) guard_false(i73, descr=) [p0, p1, p2, p3, i68, i66, None, None] debug_merge_point(0, ' #15 LOAD_FAST') -+891: label(p1, 
p0, p2, p3, i66, i68, descr=TargetToken(140139616190144)) ++894: label(p1, p0, p2, p3, i66, i68, descr=TargetToken(139705745530240)) debug_merge_point(0, ' #18 LOAD_CONST') debug_merge_point(0, ' #21 COMPARE_OP') -+921: i75 = int_lt(i68, 10000) -guard_true(i75, descr=) [p0, p1, p2, p3, i66, i68] ++924: i75 = int_lt(i68, 10000) +guard_true(i75, descr=) [p0, p1, p2, p3, i68, i66] debug_merge_point(0, ' #24 POP_JUMP_IF_FALSE') debug_merge_point(0, ' #27 LOAD_FAST') debug_merge_point(0, ' #30 LOAD_CONST') debug_merge_point(0, ' #33 BINARY_MODULO') -+934: i77 = int_eq(i68, -9223372036854775808) -guard_false(i77, descr=) [p0, p1, i68, p2, p3, i66, None] -+953: i79 = int_mod(i68, 2) -+970: i81 = int_rshift(i79, 63) -+977: i82 = int_and(2, i81) -+986: i83 = int_add(i79, i82) ++937: i77 = int_eq(i68, -9223372036854775808) +guard_false(i77, descr=) [p0, p1, i68, p2, p3, None, i66] ++956: i79 = int_mod(i68, 2) ++980: i81 = int_rshift(i79, 63) ++987: i82 = int_and(2, i81) ++996: i83 = int_add(i79, i82) debug_merge_point(0, ' #34 POP_JUMP_IF_FALSE') -+989: i84 = int_is_true(i83) -guard_false(i84, descr=) [p0, p1, p2, p3, i83, i66, i68] ++999: i84 = int_is_true(i83) +guard_false(i84, descr=) [p0, p1, p2, p3, i83, i68, i66] debug_merge_point(0, ' #53 LOAD_FAST') debug_merge_point(0, ' #56 LOAD_CONST') debug_merge_point(0, ' #59 INPLACE_ADD') -+999: i86 = int_add_ovf(i66, 1) -guard_no_overflow(, descr=) [p0, p1, i86, p2, p3, None, i66, i68] ++1009: i86 = int_add_ovf(i66, 1) +guard_no_overflow(, descr=) [p0, p1, i86, p2, p3, None, i68, i66] debug_merge_point(0, ' #60 STORE_FAST') debug_merge_point(0, ' #63 LOAD_FAST') debug_merge_point(0, ' #66 LOAD_CONST') debug_merge_point(0, ' #69 INPLACE_ADD') -+1012: i88 = int_add(i68, 1) ++1026: i88 = int_add(i68, 1) debug_merge_point(0, ' #70 STORE_FAST') debug_merge_point(0, ' #73 JUMP_ABSOLUTE') -+1023: guard_not_invalidated(, descr=) [p0, p1, p2, p3, i86, i88, None, None, None] -+1023: i90 = getfield_raw(43858376, descr=) -+1031: i92 = int_lt(i90, 0) ++1037: guard_not_invalidated(, descr=) [p0, p1, p2, p3, i86, i88, None, None, None] ++1037: i90 = getfield_raw(44057928, descr=) ++1045: i92 = int_lt(i90, 0) guard_false(i92, descr=) [p0, p1, p2, p3, i86, i88, None, None, None] debug_merge_point(0, ' #15 LOAD_FAST') -+1041: jump(p1, p0, p2, p3, i86, i88, descr=TargetToken(140139616188144)) -+1060: --end of the loop-- -[7e18c4ec4dd6] jit-log-opt-bridge} -[7e18c4fe89a3] {jit-backend-dump ++1055: jump(p1, p0, p2, p3, i86, i88, descr=TargetToken(139705745528240)) ++1071: --end of the loop-- +[101d93433fbbf] jit-log-opt-bridge} +[101d934497aaf] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf56976 +0 E9A1010000 -[7e18c4fec25e] jit-backend-dump} -[7e18c4fec9b1] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc7447976 +0 E9A1010000 +[101d93449ad5f] jit-backend-dump} +[101d93449b473] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf56a19 +0 E994010000 -[7e18c4fed7af] jit-backend-dump} -[7e18c4fedda9] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc7447a19 +0 E994010000 +[101d93449c38f] jit-backend-dump} +[101d93449ca23] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf56ee1 +0 E9F8030000 -[7e18c4fee916] jit-backend-dump} -[7e18c4feee9b] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc7447ee1 +0 E903040000 +[101d93449d84f] jit-backend-dump} 
+[101d93449de23] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf56f09 +0 E920040000 -[7e18c4fef984] jit-backend-dump} -[7e18c4fefdf2] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc7447f09 +0 E92B040000 +[101d93449eb0f] jit-backend-dump} +[101d93449f06f] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf571f0 +0 E966020000 -[7e18c4ff07dc] jit-backend-dump} -[7e18c4ff0d67] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc74481f3 +0 E96E020000 +[101d93449fd23] jit-backend-dump} +[101d9344a036f] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf57286 +0 E968020000 -[7e18c4ff1751] jit-backend-dump} -[7e18c53eb6fc] {jit-backend -[7e18c54c6a6b] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc7448294 +0 E965020000 +[101d9344a1233] jit-backend-dump} +[101d93494cec3] {jit-backend +[101d934a39f17] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf5759f +0 488B0425C0399D024829E0483B0425E08C5001760D49BB6363F5CB747F000041FFD3554889E5534154415541564157488DA50000000049BBB0F182CE747F00004D8B3B4983C70149BBB0F182CE747F00004D893B4C8B7F504C8B77784C0FB6AF960000004C8B67604C8B97800000004C8B4F584C8B4768498B5810498B5018498B4020498B482848899570FFFFFF498B503048899568FFFFFF498B50384889BD60FFFFFF498B78404D8B40484889B558FFFFFF4C89BD50FFFFFF4C89A548FFFFFF4C898D40FFFFFF48899D38FFFFFF48898530FFFFFF48899528FFFFFF4889BD20FFFFFF4C898518FFFFFF49BBC8F182CE747F00004D8B034983C00149BBC8F182CE747F00004D89034983FA050F85000000004C8B9568FFFFFF41813A104D00000F85000000004D8B42104D85C00F8400000000498B7A08498B5010813A302303000F85000000004D8B4008498B5008498B40104D8B40184883FF000F8C000000004C39C70F8D000000004889FB480FAFF84989D14801FA4883C30149895A084983FD000F850000000049BB90C10BCC747F00004D39DE0F85000000004C8BB560FFFFFF4D8B6E0849BB908B07CC747F00004D39DD0F8500000000498B7D1049BBA88B07CC747F00004C39DF0F85000000004C8B2C25403E86014981FD207088010F850000000048899510FFFFFF48898508FFFFFF4C898500FFFFFF4C898DF8FEFFFF48898DF0FEFFFF4889D741BB2087EE0041FFD348833C25203B9D02000F8500000000488B8DF0FEFFFF4C8B491041813960CA01000F85000000004C8B49084D8B41084D89C24983C0014C898DE8FEFFFF488985E0FEFFFF4C8995D8FEFFFF4C89CF4C89C641BBC009790041FFD348833C25203B9D02000F8500000000488B8DE8FEFFFF4C8B5110488B85D8FEFFFF4C8B85E0FEFFFF41F6420401743541F642044075205041504152514C89D74889C64C89C241BB10E3C40041FFD359415A415858EB0E5048C1E8074883F0F8490FAB02584D8944C2104C8B0425C8399D024983F8000F8C0000000049BBE0F182CE747F00004D8B334983C60149BBE0F182CE747F00004D8933483B9D00FFFFFF0F8D000000004989DE480FAF9D08FFFFFF4C8B85F8FEFFFF4901D84983C601488B9D68FFFFFF4C89730848898DD0FEFFFF4C898510FFFFFF4C89C741BB2087EE0041FFD348833C25203B9D02000F85000000004C8B85D0FEFFFF498B48084989CA4883C1014C8995C8FEFFFF488985C0FEFFFF4C89C74889CE41BBC009790041FFD348833C25203B9D02000F8500000000488B85D0FEFFFF4C8B4010488B8DC8FEFFFF4C8B95C0FEFFFF41F6400401743541F640044075204152504150514C89C74889CE4C89D241BB10E3C40041FFD359415858415AEB0E5148C1E9074883F1F8490FAB08594D8954C8104C8B1425C8399D024983FA000F8C0000000048899D68FFFFFF4C89F34889C1E9CCFEFFFF49BB0060F5CB747F000041FFD3294C4850383554595C4060044464686C034000000049BB0060F5CB747F000041FFD34C4828503835545C40600464686C034100000049BB0060F5CB747F000041FFD34C482820503835545C40600464686C034200000049BB0060F5CB747F000041FFD34C48281D0820503835545C40600464686C034300000049BB0060F5CB747F000041FFD34C48281D210109503835545C4060046468
6C034400000049BB0060F5CB747F000041FFD34C48281D0109503835545C40600464686C034500000049BB0060F5CB747F000041FFD3354C485038545C40600428686C09034600000049BB0060F5CB747F000041FFD34C4838505440600428686C09034700000049BB0060F5CB747F000041FFD34C3834505440600428686C09034800000049BB0060F5CB747F000041FFD34C381C34505440600428686C09034900000049BB0060F5CB747F000041FFD34C3834505440600428686C09034A00000049BB0060F5CB747F000041FFD34C3834505440600428686C09034B00000049BB4360F5CB747F000041FFD34C3800505440608001446C71034C00000049BB0060F5CB747F000041FFD34C38240450544060446C0071034D00000049BB4360F5CB747F000041FFD34C388D0188018401505440608001446C0771034E00000049BB0060F5CB747F000041FFD34C38505440608001446C0771034F00000049BB0060F5CB747F000041FFD34C48440D757D5054406080016C71035000000049BB0060F5CB747F000041FFD34C485054406080010C6C2107035100000049BB4360F5CB747F000041FFD34C48005054406080010C6C7107035200000049BB4360F5CB747F000041FFD34C489501980190015054406080010C6C7107035300000049BB0060F5CB747F000041FFD34C485054406080010C6C71070354000000 -[7e18c54e5e5a] jit-backend-dump} -[7e18c54e6a2d] {jit-backend-addr -Loop 2 ( #13 FOR_ITER) has address 7f74cbf575d5 to 7f74cbf579be (bootstrap 7f74cbf5759f) -[7e18c54e7e94] jit-backend-addr} -[7e18c54e89fb] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc74485a8 +0 488B04254045A0024829E0483B0425E03C5101760D49BB637344C70F7F000041FFD3554889E5534154415541564157488DA50000000049BBB0E121CA0F7F00004D8B3B4983C70149BBB0E121CA0F7F00004D893B4C8B7F504C8B77784C0FB6AF960000004C8B67604C8B97800000004C8B4F584C8B4768498B5810498B5018498B4020498B48284889BD70FFFFFF498B78304C89BD68FFFFFF4D8B783848898D60FFFFFF498B48404D8B40484889B558FFFFFF4C89A550FFFFFF4C898D48FFFFFF48899D40FFFFFF48899538FFFFFF48898530FFFFFF4C89BD28FFFFFF48898D20FFFFFF4C898518FFFFFF49BBC8E121CA0F7F00004D8B034983C00149BBC8E121CA0F7F00004D89034983FA050F8500000000813F806300000F85000000004C8B57104D85D20F84000000004C8B4708498B4A108139582D03000F85000000004D8B5208498B4A084D8B7A104D8B52184983F8000F8C000000004D39D00F8D000000004C89C04D0FAFC74889CA4C01C14883C001488947084983FD000F850000000049BB28DC58C70F7F00004D39DE0F85000000004C8BB570FFFFFF4D8B6E0849BB302855C70F7F00004D39DD0F85000000004D8B451049BBF02855C70F7F00004D39D80F85000000004C8B2C2500D785014981FD201288010F850000000048899510FFFFFF48898D08FFFFFF48898500FFFFFF4889BDF8FEFFFF4C8995F0FEFFFF4889CF41BBA01FEF0041FFD348833C25A046A002000F85000000004C8B9560FFFFFF498B7A10813FF0CE01000F8500000000498B7A08488B4F084889CA4883C101488985E8FEFFFF4889BDE0FEFFFF488995D8FEFFFF4889CE41BB9029790041FFD348833C25A046A002000F8500000000488B8DE0FEFFFF488B5110488BBDD8FEFFFF4C8B95E8FEFFFFF64204017432F6420440751E52575141524889FE4889D74C89D241BB50C2C50041FFD3415A595F5AEB0E5748C1EF074883F7F8480FAB3A5F4C8954FA104C8B14254845A0024983FA000F8C0000000049BBE0E121CA0F7F00004D8B334983C60149BBE0E121CA0F7F00004D89334C8BB500FFFFFF4C3BB5F0FEFFFF0F8D000000004D0FAFF74C8B9510FFFFFF4D01F24C8BB500FFFFFF4983C601488BBDF8FEFFFF4C8977084C899508FFFFFF48898DD0FEFFFF4C89D741BBA01FEF0041FFD348833C25A046A002000F8500000000488B8DD0FEFFFF4C8B51084C89D74983C2014889BDC8FEFFFF488985C0FEFFFF4889CF4C89D641BB9029790041FFD348833C25A046A002000F85000000004C8B95D0FEFFFF498B4A10488B85C8FEFFFF488BBDC0FEFFFFF64104017432F6410440751E51415257504889C64889FA4889CF41BB50C2C50041FFD3585F415A59EB0E5048C1E8074883F0F8480FAB015848897CC110488B3C254845A0024883FF000F8C000000004C89B500FFFFFF4C89D1E9CCFEFFFF49BB007044C70F7F000041FFD3294C404438355055585C60481C64686C034000000049BB007044C70F7F000041FFD34C401C44383550585C604864686C034100000049BB007
044C70F7F000041FFD34C401C2844383550585C604864686C034200000049BB007044C70F7F000041FFD34C401C21042844383550585C604864686C034300000049BB007044C70F7F000041FFD34C401C21293D0544383550585C604864686C034400000049BB007044C70F7F000041FFD34C401C213D0544383550585C604864686C034500000049BB007044C70F7F000041FFD3354C40443850585C60481C686C05034600000049BB007044C70F7F000041FFD34C403844505C60481C686C05034700000049BB007044C70F7F000041FFD34C383444505C60481C686C05034800000049BB007044C70F7F000041FFD34C38203444505C60481C686C05034900000049BB007044C70F7F000041FFD34C383444505C60481C686C05034A00000049BB007044C70F7F000041FFD34C383444505C60481C686C05034B00000049BB437044C70F7F000041FFD34C380044505C60487C6C75034C00000049BB007044C70F7F000041FFD34C381C2844505C607C6C0075034D00000049BB437044C70F7F000041FFD34C388D018401880144505C60487C6C0775034E00000049BB007044C70F7F000041FFD34C3844505C60487C6C0775034F00000049BB007044C70F7F000041FFD34C407C393D7144505C60486C75035000000049BB007044C70F7F000041FFD34C4044505C60481C6C2907035100000049BB437044C70F7F000041FFD34C400044505C60487C6C7507035200000049BB437044C70F7F000041FFD34C4095019801900144505C60487C6C7507035300000049BB007044C70F7F000041FFD34C4044505C60487C6C75070354000000 +[101d934a5a737] jit-backend-dump} +[101d934a5b3bf] {jit-backend-addr +Loop 2 ( #13 FOR_ITER) has address 7f0fc74485de to 7f0fc74489b7 (bootstrap 7f0fc74485a8) +[101d934a5ce03] jit-backend-addr} +[101d934a5da27] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf575d1 +0 C0FEFFFF -[7e18c54e9883] jit-backend-dump} -[7e18c54ea5f4] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc74485da +0 C0FEFFFF +[101d934a5eba3] jit-backend-dump} +[101d934a5f88f] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf576ae +0 0C030000 -[7e18c54eb1d9] jit-backend-dump} -[7e18c54eb6d1] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc74486b7 +0 FC020000 +[101d934a605fb] jit-backend-dump} +[101d934a60bdf] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf576c2 +0 1A030000 -[7e18c54ec22f] jit-backend-dump} -[7e18c54ec817] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc74486c3 +0 12030000 +[101d934a6187b] jit-backend-dump} +[101d934a61e07] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf576cf +0 2D030000 -[7e18c54ed42c] jit-backend-dump} -[7e18c54ed8eb] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc74486d0 +0 25030000 +[101d934a62c73] jit-backend-dump} +[101d934a63327] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf576e3 +0 3A030000 -[7e18c54ee2f9] jit-backend-dump} -[7e18c54ee7ca] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc74486e4 +0 32030000 +[101d934a6407f] jit-backend-dump} +[101d934a6461f] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf576fd +0 43030000 -[7e18c54ef1b4] jit-backend-dump} -[7e18c54f23d3] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc74486fe +0 3B030000 +[101d934a65297] jit-backend-dump} +[101d934a65837] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf57706 +0 5E030000 -[7e18c54f310e] jit-backend-dump} -[7e18c54f36b7] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc7448707 +0 
56030000 +[101d934a664cf] jit-backend-dump} +[101d934a66a3f] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf57725 +0 62030000 -[7e18c54f41b2] jit-backend-dump} -[7e18c54f4737] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc7448726 +0 5A030000 +[101d934a6769b] jit-backend-dump} +[101d934a67d27] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf57738 +0 6F030000 -[7e18c54f50b2] jit-backend-dump} -[7e18c54f5574] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc7448739 +0 67030000 +[101d934a6d79b] jit-backend-dump} +[101d934a6df23] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf57756 +0 6F030000 -[7e18c54f5eef] jit-backend-dump} -[7e18c54f638a] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc7448757 +0 67030000 +[101d934a6ec23] jit-backend-dump} +[101d934a6f2a3] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf5776d +0 76030000 -[7e18c54f6d05] jit-backend-dump} -[7e18c54f73a7] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc744876e +0 6E030000 +[101d934a6ff77] jit-backend-dump} +[101d934a70607] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf57782 +0 9E030000 -[7e18c54f7e18] jit-backend-dump} -[7e18c54f83e2] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc7448783 +0 96030000 +[101d934a7120f] jit-backend-dump} +[101d934a717bb] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf577c0 +0 7E030000 -[7e18c54f8eb3] jit-backend-dump} -[7e18c54f9465] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc74487c1 +0 76030000 +[101d934a72643] jit-backend-dump} +[101d934a72ccb] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf577d8 +0 84030000 -[7e18c54f9de0] jit-backend-dump} -[7e18c54fa3b6] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc74487d8 +0 7C030000 +[101d934a73aa3] jit-backend-dump} +[101d934a74113] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf5781a +0 60030000 -[7e18c54fad34] jit-backend-dump} -[7e18c54fb1db] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc7448817 +0 5B030000 +[101d934a74dc3] jit-backend-dump} +[101d934a75477] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf57886 +0 18030000 -[7e18c54fbb5c] jit-backend-dump} -[7e18c54fc0ea] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc744887f +0 16030000 +[101d934a7612b] jit-backend-dump} +[101d934a766ab] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf578b1 +0 0B030000 -[7e18c54fcbe5] jit-backend-dump} -[7e18c54fd227] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc74488b1 +0 01030000 +[101d934a7726f] jit-backend-dump} +[101d934a77913] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf578fe +0 FC020000 -[7e18c54fdc74] jit-backend-dump} -[7e18c54fe136] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc74488fe +0 F0020000 +[101d934a7860f] jit-backend-dump} +[101d934a78c87] {jit-backend-dump BACKEND 
x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf5793c +0 DD020000 -[7e18c54feaab] jit-backend-dump} -[7e18c54fef52] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc744893c +0 D0020000 +[101d934a7998f] jit-backend-dump} +[101d934a7a003] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf579a8 +0 95020000 -[7e18c54ff8d0] jit-backend-dump} -[7e18c55004fa] jit-backend} -[7e18c5501bb3] {jit-log-opt-loop +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc74489a4 +0 8B020000 +[101d934a7aca7] jit-backend-dump} +[101d934a7ba63] jit-backend} +[101d934a7e39b] {jit-log-opt-loop # Loop 2 ( #13 FOR_ITER) : loop with 100 ops [p0, p1] +84: p2 = getfield_gc(p0, descr=) @@ -879,116 +880,116 @@ +157: p22 = getarrayitem_gc(p8, 6, descr=) +168: p24 = getarrayitem_gc(p8, 7, descr=) +172: p25 = getfield_gc(p0, descr=) -+172: label(p0, p1, p2, p3, i4, p5, i6, i7, p10, p12, p14, p16, p18, p20, p22, p24, descr=TargetToken(140139656776704)) ++172: label(p0, p1, p2, p3, i4, p5, i6, i7, p10, p12, p14, p16, p18, p20, p22, p24, descr=TargetToken(139705792105152)) debug_merge_point(0, ' #13 FOR_ITER') +265: guard_value(i6, 5, descr=) [i6, p1, p0, p2, p3, i4, p5, i7, p10, p12, p14, p16, p18, p20, p22, p24] -+275: guard_class(p18, 38352528, descr=) [p1, p0, p18, p2, p3, i4, p5, p10, p12, p14, p16, p20, p22, p24] -+295: p28 = getfield_gc(p18, descr=) -+299: guard_nonnull(p28, descr=) [p1, p0, p18, p28, p2, p3, i4, p5, p10, p12, p14, p16, p20, p22, p24] -+308: i29 = getfield_gc(p18, descr=) -+312: p30 = getfield_gc(p28, descr=) -+316: guard_class(p30, 38538416, descr=) [p1, p0, p18, i29, p30, p28, p2, p3, i4, p5, p10, p12, p14, p16, p20, p22, p24] -+328: p32 = getfield_gc(p28, descr=) -+332: i33 = getfield_gc_pure(p32, descr=) -+336: i34 = getfield_gc_pure(p32, descr=) -+340: i35 = getfield_gc_pure(p32, descr=) -+344: i37 = int_lt(i29, 0) ++275: guard_class(p18, 38562496, descr=) [p1, p0, p18, p2, p3, i4, p5, p10, p12, p14, p16, p20, p22, p24] ++287: p28 = getfield_gc(p18, descr=) ++291: guard_nonnull(p28, descr=) [p1, p0, p18, p28, p2, p3, i4, p5, p10, p12, p14, p16, p20, p22, p24] ++300: i29 = getfield_gc(p18, descr=) ++304: p30 = getfield_gc(p28, descr=) ++308: guard_class(p30, 38745240, descr=) [p1, p0, p18, i29, p30, p28, p2, p3, i4, p5, p10, p12, p14, p16, p20, p22, p24] ++320: p32 = getfield_gc(p28, descr=) ++324: i33 = getfield_gc_pure(p32, descr=) ++328: i34 = getfield_gc_pure(p32, descr=) ++332: i35 = getfield_gc_pure(p32, descr=) ++336: i37 = int_lt(i29, 0) guard_false(i37, descr=) [p1, p0, p18, i29, i35, i34, i33, p2, p3, i4, p5, p10, p12, p14, p16, p20, p22, p24] -+354: i38 = int_ge(i29, i35) ++346: i38 = int_ge(i29, i35) guard_false(i38, descr=) [p1, p0, p18, i29, i34, i33, p2, p3, i4, p5, p10, p12, p14, p16, p20, p22, p24] -+363: i39 = int_mul(i29, i34) -+370: i40 = int_add(i33, i39) -+376: i42 = int_add(i29, 1) -+380: setfield_gc(p18, i42, descr=) -+384: guard_value(i4, 0, descr=) [i4, p1, p0, p2, p3, p5, p10, p12, p14, p16, p18, p22, p24, i40] ++355: i39 = int_mul(i29, i34) ++362: i40 = int_add(i33, i39) ++368: i42 = int_add(i29, 1) ++372: setfield_gc(p18, i42, descr=) ++376: guard_value(i4, 0, descr=) [i4, p1, p0, p2, p3, p5, p10, p12, p14, p16, p18, p22, p24, i40] debug_merge_point(0, ' #16 STORE_FAST') debug_merge_point(0, ' #19 LOAD_GLOBAL') -+394: guard_value(p3, ConstPtr(ptr44), descr=) [p1, p0, p3, p2, p5, p12, p14, p16, p18, p22, p24, i40] -+413: p45 = getfield_gc(p0, descr=) -+424: 
guard_value(p45, ConstPtr(ptr46), descr=) [p1, p0, p45, p2, p5, p12, p14, p16, p18, p22, p24, i40] -+443: p47 = getfield_gc(p45, descr=) -+447: guard_value(p47, ConstPtr(ptr48), descr=) [p1, p0, p47, p45, p2, p5, p12, p14, p16, p18, p22, p24, i40] -+466: guard_not_invalidated(, descr=) [p1, p0, p45, p2, p5, p12, p14, p16, p18, p22, p24, i40] -+466: p50 = getfield_gc(ConstPtr(ptr49), descr=) -+474: guard_value(p50, ConstPtr(ptr51), descr=) [p1, p0, p50, p2, p5, p12, p14, p16, p18, p22, p24, i40] ++386: guard_value(p3, ConstPtr(ptr44), descr=) [p1, p0, p3, p2, p5, p12, p14, p16, p18, p22, p24, i40] ++405: p45 = getfield_gc(p0, descr=) ++416: guard_value(p45, ConstPtr(ptr46), descr=) [p1, p0, p45, p2, p5, p12, p14, p16, p18, p22, p24, i40] ++435: p47 = getfield_gc(p45, descr=) ++439: guard_value(p47, ConstPtr(ptr48), descr=) [p1, p0, p47, p45, p2, p5, p12, p14, p16, p18, p22, p24, i40] ++458: guard_not_invalidated(, descr=) [p1, p0, p45, p2, p5, p12, p14, p16, p18, p22, p24, i40] ++458: p50 = getfield_gc(ConstPtr(ptr49), descr=) ++466: guard_value(p50, ConstPtr(ptr51), descr=) [p1, p0, p50, p2, p5, p12, p14, p16, p18, p22, p24, i40] debug_merge_point(0, ' #22 LOAD_FAST') debug_merge_point(0, ' #25 CALL_FUNCTION') -+487: p53 = call(ConstClass(ll_int_str__IntegerR_SignedConst_Signed), i40, descr=) -+534: guard_no_exception(, descr=) [p1, p0, p53, p2, p5, p12, p14, p16, p18, p24, i40] ++479: p53 = call(ConstClass(ll_int_str__IntegerR_SignedConst_Signed), i40, descr=) ++526: guard_no_exception(, descr=) [p1, p0, p53, p2, p5, p12, p14, p16, p18, p24, i40] debug_merge_point(0, ' #28 LIST_APPEND') -+549: p54 = getfield_gc(p16, descr=) -+560: guard_class(p54, 38450144, descr=) [p1, p0, p54, p16, p2, p5, p12, p14, p18, p24, p53, i40] -+573: p56 = getfield_gc(p16, descr=) -+577: i57 = getfield_gc(p56, descr=) -+581: i59 = int_add(i57, 1) -+588: p60 = getfield_gc(p56, descr=) -+588: i61 = arraylen_gc(p60, descr=) -+588: call(ConstClass(_ll_list_resize_ge_trampoline__v539___simple_call__function__), p56, i59, descr=) -+624: guard_no_exception(, descr=) [p1, p0, i57, p53, p56, p2, p5, p12, p14, p16, p18, p24, None, i40] -+639: p64 = getfield_gc(p56, descr=) ++541: p54 = getfield_gc(p16, descr=) ++552: guard_class(p54, 38655536, descr=) [p1, p0, p54, p16, p2, p5, p12, p14, p18, p24, p53, i40] ++564: p56 = getfield_gc(p16, descr=) ++568: i57 = getfield_gc(p56, descr=) ++572: i59 = int_add(i57, 1) ++579: p60 = getfield_gc(p56, descr=) ++579: i61 = arraylen_gc(p60, descr=) ++579: call(ConstClass(_ll_list_resize_ge_trampoline__v575___simple_call__function__), p56, i59, descr=) ++612: guard_no_exception(, descr=) [p1, p0, i57, p53, p56, p2, p5, p12, p14, p16, p18, p24, None, i40] ++627: p64 = getfield_gc(p56, descr=) setarrayitem_gc(p64, i57, p53, descr=) debug_merge_point(0, ' #31 JUMP_ABSOLUTE') -+729: i66 = getfield_raw(43858376, descr=) -+737: i68 = int_lt(i66, 0) ++713: i66 = getfield_raw(44057928, descr=) ++721: i68 = int_lt(i66, 0) guard_false(i68, descr=) [p1, p0, p2, p5, p12, p14, p16, p18, p24, None, i40] debug_merge_point(0, ' #13 FOR_ITER') -+747: p69 = same_as(ConstPtr(ptr48)) -+747: label(p0, p1, p2, p5, i40, p12, p14, p16, p18, p24, i42, i35, i34, i33, p56, descr=TargetToken(140139656776784)) ++731: p69 = same_as(ConstPtr(ptr48)) ++731: label(p0, p1, p2, p5, i40, p12, p14, p16, p18, p24, i42, i35, i34, i33, p56, descr=TargetToken(139705792105232)) debug_merge_point(0, ' #13 FOR_ITER') -+777: i70 = int_ge(i42, i35) ++761: i70 = int_ge(i42, i35) guard_false(i70, descr=) [p1, p0, p18, i42, i34, i33, 
p2, p5, p12, p14, p16, p24, i40] -+790: i71 = int_mul(i42, i34) -+801: i72 = int_add(i33, i71) -+811: i73 = int_add(i42, 1) ++781: i71 = int_mul(i42, i34) ++785: i72 = int_add(i33, i71) ++795: i73 = int_add(i42, 1) debug_merge_point(0, ' #16 STORE_FAST') debug_merge_point(0, ' #19 LOAD_GLOBAL') -+815: setfield_gc(p18, i73, descr=) -+826: guard_not_invalidated(, descr=) [p1, p0, p2, p5, p12, p14, p16, p18, p24, i72, None] ++806: setfield_gc(p18, i73, descr=) ++817: guard_not_invalidated(, descr=) [p1, p0, p2, p5, p12, p14, p16, p18, p24, i72, None] debug_merge_point(0, ' #22 LOAD_FAST') debug_merge_point(0, ' #25 CALL_FUNCTION') -+826: p74 = call(ConstClass(ll_int_str__IntegerR_SignedConst_Signed), i72, descr=) -+852: guard_no_exception(, descr=) [p1, p0, p74, p2, p5, p12, p14, p16, p18, p24, i72, None] ++817: p74 = call(ConstClass(ll_int_str__IntegerR_SignedConst_Signed), i72, descr=) ++843: guard_no_exception(, descr=) [p1, p0, p74, p2, p5, p12, p14, p16, p18, p24, i72, None] debug_merge_point(0, ' #28 LIST_APPEND') -+867: i75 = getfield_gc(p56, descr=) -+878: i76 = int_add(i75, 1) -+885: p77 = getfield_gc(p56, descr=) -+885: i78 = arraylen_gc(p77, descr=) -+885: call(ConstClass(_ll_list_resize_ge_trampoline__v539___simple_call__function__), p56, i76, descr=) -+914: guard_no_exception(, descr=) [p1, p0, i75, p74, p56, p2, p5, p12, p14, p16, p18, p24, i72, None] -+929: p79 = getfield_gc(p56, descr=) ++858: i75 = getfield_gc(p56, descr=) ++869: i76 = int_add(i75, 1) ++876: p77 = getfield_gc(p56, descr=) ++876: i78 = arraylen_gc(p77, descr=) ++876: call(ConstClass(_ll_list_resize_ge_trampoline__v575___simple_call__function__), p56, i76, descr=) ++905: guard_no_exception(, descr=) [p1, p0, i75, p74, p56, p2, p5, p12, p14, p16, p18, p24, i72, None] ++920: p79 = getfield_gc(p56, descr=) setarrayitem_gc(p79, i75, p74, descr=) debug_merge_point(0, ' #31 JUMP_ABSOLUTE') -+1019: i80 = getfield_raw(43858376, descr=) -+1027: i81 = int_lt(i80, 0) ++1006: i80 = getfield_raw(44057928, descr=) ++1014: i81 = int_lt(i80, 0) guard_false(i81, descr=) [p1, p0, p2, p5, p12, p14, p16, p18, p24, i72, None] debug_merge_point(0, ' #13 FOR_ITER') -+1037: jump(p0, p1, p2, p5, i72, p12, p14, p16, p18, p24, i73, i35, i34, i33, p56, descr=TargetToken(140139656776784)) -+1055: --end of the loop-- -[7e18c5571e6d] jit-log-opt-loop} -[7e18c5a109b1] {jit-backend -[7e18c5a36207] {jit-backend-dump ++1024: jump(p0, p1, p2, p5, i72, p12, p14, p16, p18, p24, i73, i35, i34, i33, p56, descr=TargetToken(139705792105232)) ++1039: --end of the loop-- +[101d934aff5b7] jit-log-opt-loop} +[101d93520f017] {jit-backend +[101d935239d0b] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf57c5f +0 488B0425C0399D024829E0483B0425E08C5001760D49BB6363F5CB747F000041FFD3554889E5534154415541564157488DA50000000049BBF8F182CE747F00004D8B3B4983C70149BBF8F182CE747F00004D893B4C8B7E404D0FB67C3F184983FF330F85000000004989FF4883C70148897E1848C74620000000004C897E28B80100000048890425901A550141BBC0BAF20041FFD3B802000000488D65D8415F415E415D415C5B5DC349BB0060F5CB747F000041FFD31D180355000000 -[7e18c5a3a5c7] jit-backend-dump} -[7e18c5a3ab23] {jit-backend-addr -Loop 3 (StrLiteralSearch at 11/51 [17, 8, 3, 1, 1, 1, 1, 51, 0, 19, 51, 1]) has address 7f74cbf57c95 to 7f74cbf57d08 (bootstrap 7f74cbf57c5f) -[7e18c5a3bc4d] jit-backend-addr} -[7e18c5a3c4b7] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc7448c50 +0 
488B04254045A0024829E0483B0425E03C5101760D49BB637344C70F7F000041FFD3554889E5534154415541564157488DA50000000049BBF8E121CA0F7F00004D8B3B4983C70149BBF8E121CA0F7F00004D893B4C8B7E404D0FB67C3F184983FF330F85000000004989FF4883C70148897E1848C74620000000004C897E28B80100000048890425D0D1550141BBD01BF30041FFD3B802000000488D65D8415F415E415D415C5B5DC349BB007044C70F7F000041FFD31D180355000000 +[101d93523fa7b] jit-backend-dump} +[101d9352401a3] {jit-backend-addr +Loop 3 (re StrLiteralSearch at 11/51 [17, 8, 3, 1, 1, 1, 1, 51, 0, 19, 51, 1]) has address 7f0fc7448c86 to 7f0fc7448cf9 (bootstrap 7f0fc7448c50) +[101d93524156f] jit-backend-addr} +[101d93524215b] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf57c91 +0 70FFFFFF -[7e18c5a3d025] jit-backend-dump} -[7e18c5a3d8d5] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc7448c82 +0 70FFFFFF +[101d93524311f] jit-backend-dump} +[101d935243aef] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf57cc3 +0 41000000 -[7e18c5a3e36f] jit-backend-dump} -[7e18c5a3ed71] jit-backend} -[7e18c5a40383] {jit-log-opt-loop -# Loop 3 (StrLiteralSearch at 11/51 [17, 8, 3, 1, 1, 1, 1, 51, 0, 19, 51, 1]) : entry bridge with 10 ops +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc7448cb4 +0 41000000 +[101d935244943] jit-backend-dump} +[101d935245397] jit-backend} +[101d935247d5f] {jit-log-opt-loop +# Loop 3 (re StrLiteralSearch at 11/51 [17, 8, 3, 1, 1, 1, 1, 51, 0, 19, 51, 1]) : entry bridge with 10 ops [i0, p1] -debug_merge_point(0, 'StrLiteralSearch at 11/51 [17. 8. 3. 1. 1. 1. 1. 51. 0. 19. 51. 1]') +debug_merge_point(0, 're StrLiteralSearch at 11/51 [17. 8. 3. 1. 1. 1. 1. 51. 0. 19. 51. 1]') +84: p2 = getfield_gc(p1, descr=) +88: i3 = strgetitem(p2, i0) +94: i5 = int_eq(i3, 51) @@ -997,52 +998,52 @@ +111: setfield_gc(p1, i7, descr=) +115: setfield_gc(p1, ConstPtr(ptr8), descr=) +123: setfield_gc(p1, i0, descr=) -+127: finish(1, descr=) ++127: finish(1, descr=) +169: --end of the loop-- -[7e18c5a57e6f] jit-log-opt-loop} -[7e18c5ea97f7] {jit-backend -[7e18c5ec15f3] {jit-backend-dump +[101d935264e27] jit-log-opt-loop} +[101d93579cc0f] {jit-backend +[101d9357b906f] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf57d1c +0 488DA50000000049BB10F282CE747F00004D8B3B4983C70149BB10F282CE747F00004D893B4883C7014C8B7E084C39FF0F8D000000004C8B76404D0FB6743E184983FE330F84000000004883C7014C39FF0F8C00000000B80000000048890425901A550141BBC0BAF20041FFD3B802000000488D65D8415F415E415D415C5B5DC349BB0060F5CB747F000041FFD31D18035600000049BB0060F5CB747F000041FFD31D18035700000049BB0060F5CB747F000041FFD31D180358000000 -[7e18c5ec4a37] jit-backend-dump} -[7e18c5ec4fa5] {jit-backend-addr -bridge out of Guard 85 has address 7f74cbf57d1c to 7f74cbf57d9d -[7e18c5ec5bf7] jit-backend-addr} -[7e18c5ec6215] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc7448d0d +0 488DA50000000049BB10E221CA0F7F00004D8B3B4983C70149BB10E221CA0F7F00004D893B4883C7014C8B7E084C39FF0F8D000000004C8B76404D0FB6743E184983FE330F84000000004883C7014C39FF0F8C00000000B80000000048890425D0D1550141BBD01BF30041FFD3B802000000488D65D8415F415E415D415C5B5DC349BB007044C70F7F000041FFD31D18035600000049BB007044C70F7F000041FFD31D18035700000049BB007044C70F7F000041FFD31D180358000000 +[101d9357bda0b] jit-backend-dump} +[101d9357be137] {jit-backend-addr +bridge out of Guard 85 has address 7f0fc7448d0d to 7f0fc7448d8e +[101d9357bf077] jit-backend-addr} 
+[101d9357bf8af] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf57d1f +0 70FFFFFF -[7e18c5ec6e55] jit-backend-dump} -[7e18c5ec75ff] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc7448d10 +0 70FFFFFF +[101d9357c0a7b] jit-backend-dump} +[101d9357c12cf] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf57d4e +0 4B000000 -[7e18c5ec7fb7] jit-backend-dump} -[7e18c5ec841b] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc7448d3f +0 4B000000 +[101d9357c2137] jit-backend-dump} +[101d9357c282b] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf57d62 +0 4B000000 -[7e18c5ec8d3d] jit-backend-dump} -[7e18c5ec9175] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc7448d53 +0 4B000000 +[101d9357c3627] jit-backend-dump} +[101d9357c3c8f] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf57d6f +0 52000000 -[7e18c5ec9b15] jit-backend-dump} -[7e18c5eca38b] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc7448d60 +0 52000000 +[101d9357c491b] jit-backend-dump} +[101d9357c5153] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf57cc3 +0 55000000 -[7e18c5ecada7] jit-backend-dump} -[7e18c5ecb425] jit-backend} -[7e18c5ecbce1] {jit-log-opt-bridge +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc7448cb4 +0 55000000 +[101d9357c5e43] jit-backend-dump} +[101d9357c6727] jit-backend} +[101d9357c7423] {jit-log-opt-bridge # bridge out of Guard 85 with 13 ops [i0, p1] +37: i3 = int_add(i0, 1) +41: i4 = getfield_gc_pure(p1, descr=) +45: i5 = int_lt(i3, i4) guard_true(i5, descr=) [i3, p1] -debug_merge_point(0, 'StrLiteralSearch at 11/51 [17. 8. 3. 1. 1. 1. 1. 51. 0. 19. 51. 1]') +debug_merge_point(0, 're StrLiteralSearch at 11/51 [17. 8. 3. 1. 1. 1. 1. 51. 0. 19. 51. 
1]') +54: p6 = getfield_gc(p1, descr=) +58: i7 = strgetitem(p6, i3) +64: i9 = int_eq(i7, 51) @@ -1050,43 +1051,43 @@ +74: i11 = int_add(i3, 1) +78: i12 = int_lt(i11, i4) guard_false(i12, descr=) [i11, p1] -+87: finish(0, descr=) ++87: finish(0, descr=) +129: --end of the loop-- -[7e18c5ed742b] jit-log-opt-bridge} -[7e18c61e68c7] {jit-backend -[7e18c61fa9a7] {jit-backend-dump +[101d9357d6543] jit-log-opt-bridge} +[101d935beecd3] {jit-backend +[101d935c047bf] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf57dd9 +0 488DA50000000049BB28F282CE747F00004D8B3B4983C70149BB28F282CE747F00004D893B4C8B7E404D0FB67C3F184983FF330F84000000004883C7014C8B7E084C39FF0F8C00000000B80000000048890425901A550141BBC0BAF20041FFD3B802000000488D65D8415F415E415D415C5B5DC349BB0060F5CB747F000041FFD31D18035900000049BB0060F5CB747F000041FFD31D18035A000000 -[7e18c61fd977] jit-backend-dump} -[7e18c61fde5d] {jit-backend-addr -bridge out of Guard 88 has address 7f74cbf57dd9 to 7f74cbf57e4d -[7e18c61fe88d] jit-backend-addr} -[7e18c61feedb] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc7448dca +0 488DA50000000049BB28E221CA0F7F00004D8B3B4983C70149BB28E221CA0F7F00004D893B4C8B7E404D0FB67C3F184983FF330F84000000004883C7014C8B7E084C39FF0F8C00000000B80000000048890425D0D1550141BBD01BF30041FFD3B802000000488D65D8415F415E415D415C5B5DC349BB007044C70F7F000041FFD31D18035900000049BB007044C70F7F000041FFD31D18035A000000 +[101d935c088ab] jit-backend-dump} +[101d935c08f03] {jit-backend-addr +bridge out of Guard 88 has address 7f0fc7448dca to 7f0fc7448e3e +[101d935c09c47] jit-backend-addr} +[101d935c0a3ab] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf57ddc +0 70FFFFFF -[7e18c61ff995] jit-backend-dump} -[7e18c61fffad] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc7448dcd +0 70FFFFFF +[101d935c0b3ef] jit-backend-dump} +[101d935c0bbf7] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf57e0e +0 3B000000 -[7e18c6200a7b] jit-backend-dump} -[7e18c6200fd3] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc7448dff +0 3B000000 +[101d935c0c9e3] jit-backend-dump} +[101d935c0cf6b] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf57e1f +0 3E000000 -[7e18c620198b] jit-backend-dump} -[7e18c6201f69] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc7448e10 +0 3E000000 +[101d935c15c27] jit-backend-dump} +[101d935c1657b] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf57d6f +0 66000000 -[7e18c62027f7] jit-backend-dump} -[7e18c6202e61] jit-backend} -[7e18c62035ab] {jit-log-opt-bridge +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc7448d60 +0 66000000 +[101d935c17497] jit-backend-dump} +[101d935c17dff] jit-backend} +[101d935c18b1f] {jit-log-opt-bridge # bridge out of Guard 88 with 10 ops [i0, p1] -debug_merge_point(0, 'StrLiteralSearch at 11/51 [17. 8. 3. 1. 1. 1. 1. 51. 0. 19. 51. 1]') +debug_merge_point(0, 're StrLiteralSearch at 11/51 [17. 8. 3. 1. 1. 1. 1. 51. 0. 19. 51. 
1]') +37: p2 = getfield_gc(p1, descr=) +41: i3 = strgetitem(p2, i0) +47: i5 = int_eq(i3, 51) @@ -1095,426 +1096,426 @@ +61: i8 = getfield_gc_pure(p1, descr=) +65: i9 = int_lt(i7, i8) guard_false(i9, descr=) [i7, p1] -+74: finish(0, descr=) ++74: finish(0, descr=) +116: --end of the loop-- -[7e18c6213be9] jit-log-opt-bridge} -[7e18c6553871] {jit-backend -[7e18c655e4a9] {jit-backend-dump +[101d935c25193] jit-log-opt-bridge} +[101d93608999f] {jit-backend +[101d936096717] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf57e75 +0 488DA50000000049BB40F282CE747F0000498B334883C60149BB40F282CE747F0000498933B80000000048890425901A550141BBC0BAF20041FFD3B802000000488D65D8415F415E415D415C5B5DC3 -[7e18c6560b67] jit-backend-dump} -[7e18c6560fd1] {jit-backend-addr -bridge out of Guard 86 has address 7f74cbf57e75 to 7f74cbf57ec4 -[7e18c656191f] jit-backend-addr} -[7e18c6561f0b] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc7448e66 +0 488DA50000000049BB40E221CA0F7F0000498B334883C60149BB40E221CA0F7F0000498933B80000000048890425D0D1550141BBD01BF30041FFD3B802000000488D65D8415F415E415D415C5B5DC3 +[101d936099bf7] jit-backend-dump} +[101d93609a247] {jit-backend-addr +bridge out of Guard 86 has address 7f0fc7448e66 to 7f0fc7448eb5 +[101d93609ae3f] jit-backend-addr} +[101d93609b56f] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf57e78 +0 70FFFFFF -[7e18c6562ad1] jit-backend-dump} -[7e18c65631d3] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc7448e69 +0 70FFFFFF +[101d93609c48f] jit-backend-dump} +[101d93609cccb] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf57d4e +0 23010000 -[7e18c6563bdb] jit-backend-dump} -[7e18c6564221] jit-backend} -[7e18c65648f7] {jit-log-opt-bridge +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc7448d3f +0 23010000 +[101d93609da0b] jit-backend-dump} +[101d93609e263] jit-backend} +[101d93609edcb] {jit-log-opt-bridge # bridge out of Guard 86 with 1 ops [i0, p1] -+37: finish(0, descr=) ++37: finish(0, descr=) +79: --end of the loop-- -[7e18c6567069] jit-log-opt-bridge} -[7e18c75192d5] {jit-backend -[7e18c76bb2ab] {jit-backend-dump +[101d9360a2113] jit-log-opt-bridge} +[101d9374ca7d7] {jit-backend +[101d9377009b7] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf580a8 +0 
488B0425C0399D024829E0483B0425E08C5001760D49BB6363F5CB747F000041FFD3554889E5534154415541564157488DA50000000049BB58F282CE747F00004D8B3B4983C70149BB58F282CE747F00004D893B4C8B7F504C8B77784C0FB6AF960000004C8B67604C8B97800000004C8B4F584C8B4768498B5810498B5018498B4020498B48284889B570FFFFFF498B70304C89A568FFFFFF4D8B603848899560FFFFFF498B50404D8B40484C89BD58FFFFFF4C898D50FFFFFF48899D48FFFFFF48898540FFFFFF4889B538FFFFFF4C89A530FFFFFF48899528FFFFFF4C898520FFFFFF49BB70F282CE747F00004D8B034983C00149BB70F282CE747F00004D89034983FA040F85000000008139104D00000F85000000004C8B51104D85D20F84000000004C8B4108498B5210813A60CA01000F85000000004D8B5208498B52084939D00F83000000004D8B52104F8B54C2104D85D20F84000000004983C0014C8941084983FD000F850000000049BB90C10BCC747F00004D39DE0F85000000004C8B770849BB908B07CC747F00004D39DE0F85000000004D8B6E1049BBA88B07CC747F00004D39DD0F850000000049BBA0770BCC747F00004D8B3349BBA8770BCC747F00004D39DE0F85000000004C899518FFFFFF4889BD10FFFFFF48898D08FFFFFF41BB30698D0041FFD3488B4840488B78504885FF0F8500000000488B78284883FF000F850000000049BB18AA0CCC747F0000498B3B4883FF000F8F00000000488B3C25403E86014881FF207088010F850000000049BBD0770BCC747F0000498B3B813F98D901000F850000000049BBC8770BCC747F0000498B3B48898500FFFFFF488B0425B0685501488D5040483B1425C8685501761A49BB2D62F5CB747F000041FFD349BBC262F5CB747F000041FFD348891425B068550148C7008800000048C74008030000004889C24883C02848C70040830100488968084C8B9500FFFFFF41F6420401741B4152505251574C89D74889C641BBB0E5C40041FFD35F595A58415A498942404C8BB510FFFFFF49896E1848C74210E016840149BB20A10BCC747F00004C895A1849BBF0AD0BCC747F00004C895A204889BDF8FEFFFF48898DF0FEFFFF488995E8FEFFFF488985E0FEFFFF48C78578FFFFFF5B0000004889D741BB3091920041FFD34883BD78FFFFFF000F8C0000000048833C25203B9D02000F8500000000488985D8FEFFFF488B0425B0685501488D5010483B1425C8685501761A49BB2D62F5CB747F000041FFD349BBC262F5CB747F000041FFD348891425B068550148C70038200000488B9510FFFFFF48896A184C8BB5E8FEFFFF4C897008488985D0FEFFFF48C78578FFFFFF5C000000488BBDF8FEFFFF4889C6488B95D8FEFFFF41BB3012790041FFD34883BD78FFFFFF000F8C0000000048833C25203B9D02000F85000000004889C249BB00000000000000804C21D84883F8000F8500000000488B85F8FEFFFF488B4018486BD218488B5410184883FA017206813A30DF03000F85000000004881FA007C72010F8400000000488B8500FFFFFF4C8B70504D85F60F85000000004C8B70284983FE000F85000000004C8BB5E0FEFFFF49C74608FDFFFFFF4C8BB518FFFFFF4D8B561049BBFFFFFFFFFFFFFF7F4D39DA0F8D00000000488B4A10488B7A184C8B69104983FD110F85000000004C8B69204D89E84983E5014983FD000F84000000004C8B41384983F8010F8F000000004C8B41184983C0014E8B6CC1104983FD130F85000000004D89C54983C0014E8B44C1104983C5024983FA000F8E000000004983FD0B0F85000000004983F8330F850000000049BBA0EA75CE747F00004C39D90F8500000000488995C8FEFFFF488B0425B0685501488D5060483B1425C8685501761A49BB2D62F5CB747F000041FFD349BBC262F5CB747F000041FFD348891425B068550148C700189F00004889C24883C04848C7004083010048896808488B8D00FFFFFFF6410401741B4152515750524889CF4889C641BBB0E5C40041FFD35A585F59415A488941404C8B8510FFFFFF498968184C8952084C89724049BBA0EA75CE747F00004C895A3848897A10488995C0FEFFFF488985B8FEFFFF48C78578FFFFFF5D000000BF000000004889D649BB5F7CF5CB747F000041FFD34883F80274134889C7BE0000000041BBA083950041FFD3EB08488B0425901A55014883BD78FFFFFF000F8C0000000048833C25203B9D02000F85000000004885C00F8500000000488B8500FFFFFF4C8B40504D85C00F85000000004C8B40284983F8000F8500000000488B8DF0FEFFFFF64004017417514150504889C74889CE41BBB0E5C40041FFD35841585948894840488B95B8FEFFFF48C74208FDFFFFFF488B1425C8399D024883FA000F8C0000000049BB88F282CE747F0000498B134883C20149BB88F282CE747F0000498913488B9508FFFFFF4C8B72104D85F60
F8400000000488B7A084D8B561041813A60CA01000F85000000004D8B76084D8B56084C39D70F83000000004D8B76104D8B74FE104D85F60F84000000004883C7014C8B9510FFFFFF4D8B6A0848897A0849BB908B07CC747F00004D39DD0F8500000000498B7D1049BBA88B07CC747F00004C39DF0F850000000049BBA0770BCC747F00004D8B2B49BBA8770BCC747F00004D39DD0F85000000004983F8000F850000000049BB18AA0CCC747F00004D8B034983F8000F8F000000004C8B0425403E86014981F8207088010F850000000049BBD0770BCC747F00004D8B0341813898D901000F850000000049BBC8770BCC747F00004D8B03488985B0FEFFFF488B0425B0685501488D5040483B1425C8685501761A49BB2D62F5CB747F000041FFD349BBC262F5CB747F000041FFD348891425B068550148C7008800000048C74008030000004889C24883C02848C70040830100488968084C8BADB0FEFFFF41F6450401741D504150415252514C89EF4889C641BBB0E5C40041FFD3595A415A4158584989454049896A1848C74210E016840149BB20A10BCC747F00004C895A1849BBF0AD0BCC747F00004C895A2048898DA8FEFFFF4C89B518FFFFFF488995A0FEFFFF4C898598FEFFFF48898590FEFFFF48C78578FFFFFF5E0000004889D741BB3091920041FFD34883BD78FFFFFF000F8C0000000048833C25203B9D02000F850000000048898588FEFFFF488B0425B0685501488D5010483B1425C8685501761A49BB2D62F5CB747F000041FFD349BBC262F5CB747F000041FFD348891425B068550148C70038200000488B9510FFFFFF48896A184C8B85A0FEFFFF4C89400848898580FEFFFF48C78578FFFFFF5F000000488BBD98FEFFFF4889C6488B9588FEFFFF41BB3012790041FFD34883BD78FFFFFF000F8C0000000048833C25203B9D02000F85000000004889C249BB00000000000000804C21D84883F8000F8500000000488B8598FEFFFF488B4018486BD218488B5410184883FA017206813A30DF03000F85000000004881FA007C72010F8400000000488B85B0FEFFFF4C8B40504D85C00F85000000004C8B40284983F8000F85000000004C8B8590FEFFFF49C74008FDFFFFFF4C8B8518FFFFFF4D8B501049BBFFFFFFFFFFFFFF7F4D39DA0F8D000000004C8B7210488B4A184D8B6E104983FD110F85000000004D8B6E204C89EF4983E5014983FD000F8400000000498B7E384883FF010F8F00000000498B7E184883C7014D8B6CFE104983FD130F85000000004989FD4883C701498B7CFE104983C5024983FA000F8E000000004983FD0B0F85000000004883FF330F850000000049BBA0EA75CE747F00004D39DE0F850000000048899578FEFFFF488B0425B0685501488D5060483B1425C8685501761A49BB2D62F5CB747F000041FFD349BBC262F5CB747F000041FFD348891425B068550148C700189F00004889C24883C04848C70040830100488968084C8BB5B0FEFFFF41F6460401741D504152525141504C89F74889C641BBB0E5C40041FFD34158595A415A5849894640488BBD10FFFFFF48896F184C8952084C89424049BBA0EA75CE747F00004C895A3848894A1048899570FEFFFF48898568FEFFFF48C78578FFFFFF60000000BF000000004889D649BB5F7CF5CB747F000041FFD34883F80274134889C7BE0000000041BBA083950041FFD3EB08488B0425901A55014883BD78FFFFFF000F8C0000000048833C25203B9D02000F85000000004885C00F8500000000488B85B0FEFFFF488B78504885FF0F8500000000488B78284883FF000F85000000004C8BB5A8FEFFFFF6400401741357504889C74C89F641BBB0E5C40041FFD3585F4C897040488B9568FEFFFF48C74208FDFFFFFF488B1425C8399D024883FA000F8C000000004989F84C89F1E969FAFFFF49BB0060F5CB747F000041FFD329401C4C38354451544858045C606468036100000049BB0060F5CB747F000041FFD3401C044C3835445448585C606468036200000049BB0060F5CB747F000041FFD3401C04284C3835445448585C606468036300000049BB0060F5CB747F000041FFD3401C042108284C3835445448585C606468036400000049BB0060F5CB747F000041FFD3401C042109284C3835445448585C606468036500000049BB0060F5CB747F000041FFD3401C0421284C3835445448585C606468036600000049BB0060F5CB747F000041FFD335401C4C38445448580460646828036700000049BB0060F5CB747F000041FFD3401C384C4454480460646828036800000049BB0060F5CB747F000041FFD3401C384C4454480460646828036900000049BB0060F5CB747F000041FFD3401C34384C4454480460646828036A00000049BB0060F5CB747F000041FFD3401C384C4454480460646828036B00000049BB0060F5CB747F000041FFD3401C384C4454480460646828036C00000049BB0060F5
CB747F000041FFD34070001C4C4454487404156C036D00000049BB0060F5CB747F000041FFD34070004C4454487404156C036E00000049BB0060F5CB747F000041FFD34070004C4454487404156C036F00000049BB0060F5CB747F000041FFD34070001D4C4454487404156C037000000049BB0060F5CB747F000041FFD34070001C4C445448741504156C037100000049BB0060F5CB747F000041FFD34070001C4C445448741504156C037200000049BB4360F5CB747F000041FFD34070787C0188014C445448746C8001158401035B00000049BB4360F5CB747F000041FFD34070787C0188014C445448746C8001158401037300000049BB4360F5CB747F000041FFD34070789001017C88014C445448746C800115035C00000049BB4360F5CB747F000041FFD34070789001017C88014C445448746C800115037400000049BB0060F5CB747F000041FFD34070789001097C88014C445448746C800115037500000049BB0060F5CB747F000041FFD340707890010888014C445448746C800115037600000049BB0060F5CB747F000041FFD340707888014C445448749001086C800115037700000049BB0060F5CB747F000041FFD3407000083888014C445448749001076C800115037800000049BB0060F5CB747F000041FFD34070000888014C445448749001076C800115037900000049BB0060F5CB747F000041FFD34070004C4454487407086C800115037A00000049BB0060F5CB747F000041FFD340700008384C44544874070707800115037B00000049BB0060F5CB747F000041FFD3407000084C44544874291D04070738800115037C00000049BB0060F5CB747F000041FFD340700008214C44544874291D04070738800115037D00000049BB0060F5CB747F000041FFD3407000084C44544874291D04070738800115037E00000049BB0060F5CB747F000041FFD340700008214C44544874291D04070738800115037F00000049BB0060F5CB747F000041FFD34070000821354C44544874291D04070738800115038000000049BB0060F5CB747F000041FFD3407000082135044C44544874291D07070738800115038100000049BB0060F5CB747F000041FFD34070000821044C44544874291D07070738800115038200000049BB0060F5CB747F000041FFD340700008044C44544874291D07070738800115038300000049BB4360F5CB747F000041FFD340707898019401019C014C4454487480016C035D00000049BB4360F5CB747F000041FFD340707898019401019C014C4454487480016C038400000049BB0060F5CB747F000041FFD3407078980194019C014C4454487480016C038500000049BB0060F5CB747F000041FFD3407000209C014C4454487480016C038600000049BB0060F5CB747F000041FFD34070009C014C4454487480016C038700000049BB0060F5CB747F000041FFD340704C44544874076C038800000049BB0060F5CB747F000041FFD340704C44544874076C038900000049BB0060F5CB747F000041FFD3407008384C4454486C038A00000049BB0060F5CB747F000041FFD34070081D28384C4454486C038B00000049BB0060F5CB747F000041FFD34070081D29384C4454486C038C00000049BB0060F5CB747F000041FFD34070081D384C4454486C038D00000049BB0060F5CB747F000041FFD34028344C445448083807038E00000049BB0060F5CB747F000041FFD340281C344C445448083807038F00000049BB0060F5CB747F000041FFD34028344C445448083807039000000049BB0060F5CB747F000041FFD34028344C445448083807039100000049BB0060F5CB747F000041FFD34028004C4454480815043807039200000049BB0060F5CB747F000041FFD3402800214C4454480815043807039300000049BB0060F5CB747F000041FFD3402800204C445448081515043807039400000049BB0060F5CB747F000041FFD3402800204C445448081515043807039500000049BB4360F5CB747F000041FFD34070A001AC0101B0014C4454487415A8016CA401035E00000049BB4360F5CB747F000041FFD34070A001AC0101B0014C4454487415A8016CA401039600000049BB4360F5CB747F000041FFD34070A001B80101AC01B0014C44544874156CA401035F00000049BB4360F5CB747F000041FFD34070A001B80101AC01B0014C44544874156CA401039700000049BB0060F5CB747F000041FFD34070A001B80109AC01B0014C44544874156CA401039800000049BB0060F5CB747F000041FFD34070A001B80108B0014C44544874156CA401039900000049BB0060F5CB747F000041FFD34070A001B0014C44544874B80108156CA401039A00000049BB0060F5CB747F000041FFD34070000820B0014C44544874B80107156CA401039B00000049BB0060F5CB747F000041FFD340700008B0014C44544874B80107156CA401039C00000049BB0060F5C
B747F000041FFD34070004C445448740708156CA401039D00000049BB0060F5CB747F000041FFD340700008204C4454487407071507A401039E00000049BB0060F5CB747F000041FFD3407000084C4454487405293807071520A401039F00000049BB0060F5CB747F000041FFD3407000081D4C4454487405293807071520A40103A000000049BB0060F5CB747F000041FFD3407000084C4454487405293807071520A40103A100000049BB0060F5CB747F000041FFD3407000081D4C4454487405293807071520A40103A200000049BB0060F5CB747F000041FFD3407000081D354C4454487405293807071520A40103A300000049BB0060F5CB747F000041FFD3407000081D35384C4454487405290707071520A40103A400000049BB0060F5CB747F000041FFD3407000081D384C4454487405290707071520A40103A500000049BB0060F5CB747F000041FFD340700008384C4454487405290707071520A40103A600000049BB4360F5CB747F000041FFD34070A001C001BC0101C4014C44544874A4016C036000000049BB4360F5CB747F000041FFD34070A001C001BC0101C4014C44544874A4016C03A700000049BB0060F5CB747F000041FFD34070A001C001BC01C4014C44544874A4016C03A800000049BB0060F5CB747F000041FFD34070001CC4014C44544874A4016C03A900000049BB0060F5CB747F000041FFD3407000C4014C44544874A4016C03AA00000049BB0060F5CB747F000041FFD340704C44544874076C03AB00000049BB0060F5CB747F000041FFD340704C44544874076C03AC000000 -[7e18c76f3847] jit-backend-dump} -[7e18c76f45a3] {jit-backend-addr -Loop 4 ( #44 FOR_ITER) has address 7f74cbf580de to 7f74cbf58d04 (bootstrap 7f74cbf580a8) -[7e18c76f5965] jit-backend-addr} -[7e18c76f64a3] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc744909b +0 488B04254045A0024829E0483B0425E03C5101760D49BB637344C70F7F000041FFD3554889E5534154415541564157488DA50000000049BB58E221CA0F7F00004D8B3B4983C70149BB58E221CA0F7F00004D893B4C8B7F504C8B77784C0FB6AF960000004C8B67604C8B97800000004C8B4F584C8B4768498B5810498B5018498B4020498B48284C89BD70FFFFFF4D8B783048899D68FFFFFF498B58384889BD60FFFFFF498B78404D8B40484889B558FFFFFF4C89A550FFFFFF4C898D48FFFFFF48899540FFFFFF48898538FFFFFF4C89BD30FFFFFF48899D28FFFFFF4889BD20FFFFFF4C898518FFFFFF49BB70E221CA0F7F00004D8B034983C00149BB70E221CA0F7F00004D89034983FA040F85000000008139806300000F85000000004C8B51104D85D20F84000000004C8B4108498B7A10813FF0CE01000F85000000004D8B5208498B7A084939F80F83000000004D8B52104F8B54C2104D85D20F84000000004983C0014C8941084983FD000F850000000049BB28DC58C70F7F00004D39DE0F85000000004C8BB560FFFFFF4D8B6E0849BB302855C70F7F00004D39DD0F85000000004D8B451049BBF02855C70F7F00004D39D80F850000000049BB782955C70F7F00004D8B2B49BB802955C70F7F00004D39DD0F850000000048898D10FFFFFF4C899508FFFFFF41BB201B8D0041FFD34C8B5040488B48504885C90F8500000000488B48284883F9000F850000000049BBD03B55C70F7F0000498B0B4883F9000F8F00000000488B0C2500D785014881F9201288010F850000000049BBA82955C70F7F0000498B0B813910E001000F850000000049BBA02955C70F7F0000498B0B48898500FFFFFF488B042530255601488D5040483B142548255601761A49BB2D7244C70F7F000041FFD349BBC27244C70F7F000041FFD3488914253025560148C7008800000048C74008030000004889C24883C02848C700508A0100488968084C8BAD00FFFFFF41F6450401741952415251504C89EF4889C641BBF0C4C50041FFD35859415A5A4989454049896E1848C7421060CE830149BBB03C58C70F7F00004C895A1849BB10EC54C70F7F00004C895A20488985F8FEFFFF48898DF0FEFFFF4C8995E8FEFFFF488995E0FEFFFF48C78578FFFFFF5B0000004889D741BB3036920041FFD34883BD78FFFFFF000F8C0000000048833C25A046A002000F8500000000488985D8FEFFFF488B042530255601488D5010483B142548255601761A49BB2D7244C70F7F000041FFD349BBC27244C70F7F000041FFD3488914253025560148C700E0300000488B9560FFFFFF48896A184C8BB5E0FEFFFF4C897008488985D0FEFFFF48C78578FFFFFF5C000000488BBDF0FEFFFF4889C6488B95D8FEFFFF41BBA02E790041FFD34883BD78FFFFFF000F8C0000000048833C25A046A002000F
85000000004889C249BB00000000000000804C21D84883F8000F8500000000488B85F0FEFFFF488B4018486BD218488B5410184883FA017206813AB0EB03000F85000000004881FAC02C72010F8400000000488B8500FFFFFF4C8B70504D85F60F85000000004C8B70284983FE000F85000000004C8BB5F8FEFFFF49C74608FDFFFFFF4C8BB508FFFFFF4D8B6E1049BBFFFFFFFFFFFFFF7F4D39DD0F8D000000004C8B5210488B4A184D8B42104983F8110F85000000004D8B42204C89C74983E0014983F8000F8400000000498B7A384883FF010F8F00000000498B7A184883C7014D8B44FA104983F8130F85000000004989F84883C701498B7CFA104983C0024983FD000F8E000000004983F80B0F85000000004883FF330F850000000049BBC05E56C70F7F00004D39DA0F8500000000488995C8FEFFFF488B042530255601488D5060483B142548255601761A49BB2D7244C70F7F000041FFD349BBC27244C70F7F000041FFD3488914253025560148C700D00001004889C24883C04848C700508A0100488968084C8B9500FFFFFF41F6420401741951504152524C89D74889C641BBF0C4C50041FFD35A415A585949894240488BBD60FFFFFF48896F1849BBC05E56C70F7F00004C895A3848894A104C896A084C897240488995C0FEFFFF488985B8FEFFFF48C78578FFFFFF5D000000BF000000004889D649BB508C44C70F7F000041FFD34883F80274134889C7BE0000000041BB7053950041FFD3EB08488B0425D0D155014883BD78FFFFFF000F8C0000000048833C25A046A002000F85000000004885C00F8500000000488B8500FFFFFF488B78504885FF0F8500000000488B78284883FF000F85000000004C8B95E8FEFFFFF64004017417504152574889C74C89D641BBF0C4C50041FFD35F415A584C895040488B95B8FEFFFF48C74208FDFFFFFF488B14254845A0024883FA000F8C0000000049BB88E221CA0F7F0000498B134883C20149BB88E221CA0F7F0000498913488B9510FFFFFF4C8B72104D85F60F84000000004C8B6A08498B4E108139F0CE01000F85000000004D8B7608498B4E084939CD0F83000000004D8B76104F8B74EE104D85F60F84000000004983C501488B8D60FFFFFF4C8B41084C896A0849BB302855C70F7F00004D39D80F85000000004D8B681049BBF02855C70F7F00004D39DD0F850000000049BB782955C70F7F00004D8B0349BB802955C70F7F00004D39D80F85000000004883FF000F850000000049BBD03B55C70F7F0000498B3B4883FF000F8F00000000488B3C2500D785014881FF201288010F850000000049BBA82955C70F7F0000498B3B813F10E001000F850000000049BBA02955C70F7F0000498B3B488985B0FEFFFF488B042530255601488D5040483B142548255601761A49BB2D7244C70F7F000041FFD349BBC27244C70F7F000041FFD3488914253025560148C7008800000048C74008030000004889C24883C02848C700508A0100488968084C8B85B0FEFFFF41F6400401741F50524152415051574C89C74889C641BBF0C4C50041FFD35F594158415A5A58498940404889691848C7421060CE830149BBB03C58C70F7F00004C895A1849BB10EC54C70F7F00004C895A204889BDA8FEFFFF4C8995A0FEFFFF48899598FEFFFF48898590FEFFFF4C89B508FFFFFF48C78578FFFFFF5E0000004889D741BB3036920041FFD34883BD78FFFFFF000F8C0000000048833C25A046A002000F850000000048898588FEFFFF488B042530255601488D5010483B142548255601761A49BB2D7244C70F7F000041FFD349BBC27244C70F7F000041FFD3488914253025560148C700E0300000488B9560FFFFFF48896A184C8BB598FEFFFF4C89700848898580FEFFFF48C78578FFFFFF5F000000488BBDA8FEFFFF4889C6488B9588FEFFFF41BBA02E790041FFD34883BD78FFFFFF000F8C0000000048833C25A046A002000F85000000004889C249BB00000000000000804C21D84883F8000F8500000000488B85A8FEFFFF488B4018486BD218488B5410184883FA017206813AB0EB03000F85000000004881FAC02C72010F8400000000488B85B0FEFFFF4C8B70504D85F60F85000000004C8B70284983FE000F85000000004C8BB590FEFFFF49C74608FDFFFFFF4C8BB508FFFFFF4D8B561049BBFFFFFFFFFFFFFF7F4D39DA0F8D000000004C8B4210488B4A18498B78104883FF110F8500000000498B78204989FD4883E7014883FF000F84000000004D8B68384983FD010F8F000000004D8B68184983C5014B8B7CE8104883FF130F85000000004C89EF4983C5014F8B6CE8104883C7024983FA000F8E000000004883FF0B0F85000000004983FD330F850000000049BBC05E56C70F7F00004D39D80F850000000048899578FEFFFF488B042530255601488D5060483B142548255601761A49BB2D7244C70F7F000041FFD349BBC27244C70F7F000
041FFD3488914253025560148C700D00001004889C24883C04848C700508A0100488968084C8B85B0FEFFFF41F6400401741D525141504152504C89C74889C641BBF0C4C50041FFD358415A4158595A498940404C8BAD60FFFFFF49896D1849BBC05E56C70F7F00004C895A3848894A104C8952084C89724048898570FEFFFF48899568FEFFFF48C78578FFFFFF60000000BF000000004889D649BB508C44C70F7F000041FFD34883F80274134889C7BE0000000041BB7053950041FFD3EB08488B0425D0D155014883BD78FFFFFF000F8C0000000048833C25A046A002000F85000000004885C00F8500000000488B85B0FEFFFF4C8B68504D85ED0F85000000004C8B68284983FD000F85000000004C8BB5A0FEFFFFF64004017411504889C74C89F641BBF0C4C50041FFD3584C897040488B9570FEFFFF48C74208FDFFFFFF488B14254845A0024883FA000F8C000000004C89EF4D89F2E96BFAFFFF49BB007044C70F7F000041FFD3294C48403835505544585C046064686C036100000049BB007044C70F7F000041FFD34C48044038355044585C6064686C036200000049BB007044C70F7F000041FFD34C4804284038355044585C6064686C036300000049BB007044C70F7F000041FFD34C4804211C284038355044585C6064686C036400000049BB007044C70F7F000041FFD34C4804211D284038355044585C6064686C036500000049BB007044C70F7F000041FFD34C480421284038355044585C6064686C036600000049BB007044C70F7F000041FFD3354C4840385044585C0464686C28036700000049BB007044C70F7F000041FFD34C4838405044580464686C28036800000049BB007044C70F7F000041FFD34C3834405044580464686C28036900000049BB007044C70F7F000041FFD34C382034405044580464686C28036A00000049BB007044C70F7F000041FFD34C3834405044580464686C28036B00000049BB007044C70F7F000041FFD34C3834405044580464686C28036C00000049BB007044C70F7F000041FFD34C3800044050445870152874036D00000049BB007044C70F7F000041FFD34C38004050445870152874036E00000049BB007044C70F7F000041FFD34C38004050445870152874036F00000049BB007044C70F7F000041FFD34C3800054050445870152874037000000049BB007044C70F7F000041FFD34C380004405044587015152874037100000049BB007044C70F7F000041FFD34C380004405044587015152874037200000049BB437044C70F7F000041FFD34C48788001017C4050445870880184011574035B00000049BB437044C70F7F000041FFD34C48788001017C4050445870880184011574037300000049BB437044C70F7F000041FFD34C487890010180017C405044587084011574035C00000049BB437044C70F7F000041FFD34C487890010180017C405044587084011574037400000049BB007044C70F7F000041FFD34C487890010980017C405044587084011574037500000049BB007044C70F7F000041FFD34C48789001087C405044587084011574037600000049BB007044C70F7F000041FFD34C48787C405044587090010884011574037700000049BB007044C70F7F000041FFD34C480008387C405044587090010784011574037800000049BB007044C70F7F000041FFD34C4800087C405044587090010784011574037900000049BB007044C70F7F000041FFD34C48004050445870070884011574037A00000049BB007044C70F7F000041FFD34C480008384050445870070784011507037B00000049BB007044C70F7F000041FFD34C4800084050445870350528070784011538037C00000049BB007044C70F7F000041FFD34C4800081D4050445870350528070784011538037D00000049BB007044C70F7F000041FFD34C4800084050445870350528070784011538037E00000049BB007044C70F7F000041FFD34C4800081D4050445870350528070784011538037F00000049BB007044C70F7F000041FFD34C4800081D214050445870350528070784011538038000000049BB007044C70F7F000041FFD34C4800081D21284050445870350507070784011538038100000049BB007044C70F7F000041FFD34C4800081D284050445870350507070784011538038200000049BB007044C70F7F000041FFD34C480008284050445870350507070784011538038300000049BB437044C70F7F000041FFD34C487898019401019C014050445870840174035D00000049BB437044C70F7F000041FFD34C487898019401019C014050445870840174038400000049BB007044C70F7F000041FFD34C4878980194019C014050445870840174038500000049BB007044C70F7F000041FFD34C48001C9C014050445870840174038600000049BB007044C70F7F000041FFD34C48009C014050445870840174038700000049BB007044C70F7F00
0041FFD34C4840504458700774038800000049BB007044C70F7F000041FFD34C4840504458700774038900000049BB007044C70F7F000041FFD34C4808384050445874038A00000049BB007044C70F7F000041FFD34C48083504384050445874038B00000049BB007044C70F7F000041FFD34C48083505384050445874038C00000049BB007044C70F7F000041FFD34C480835384050445874038D00000049BB007044C70F7F000041FFD34C042040504458083807038E00000049BB007044C70F7F000041FFD34C04342040504458083807038F00000049BB007044C70F7F000041FFD34C042040504458083807039000000049BB007044C70F7F000041FFD34C042040504458083807039100000049BB007044C70F7F000041FFD34C0400405044580828153807039200000049BB007044C70F7F000041FFD34C04001D405044580828153807039300000049BB007044C70F7F000041FFD34C04001C40504458081528153807039400000049BB007044C70F7F000041FFD34C04001C40504458081528153807039500000049BB437044C70F7F000041FFD34C48A001A40101B0014050445870AC01A8011574035E00000049BB437044C70F7F000041FFD34C48A001A40101B0014050445870AC01A8011574039600000049BB437044C70F7F000041FFD34C48A001B80101A401B0014050445870A8011574035F00000049BB437044C70F7F000041FFD34C48A001B80101A401B0014050445870A8011574039700000049BB007044C70F7F000041FFD34C48A001B80109A401B0014050445870A8011574039800000049BB007044C70F7F000041FFD34C48A001B80108B0014050445870A8011574039900000049BB007044C70F7F000041FFD34C48A001B001405044587008B801A8011574039A00000049BB007044C70F7F000041FFD34C48000838B001405044587007B801A8011574039B00000049BB007044C70F7F000041FFD34C480008B001405044587007B801A8011574039C00000049BB007044C70F7F000041FFD34C480040504458700807A8011574039D00000049BB007044C70F7F000041FFD34C4800083840504458700707A8011507039E00000049BB007044C70F7F000041FFD34C48000840504458700520290707A8011538039F00000049BB007044C70F7F000041FFD34C4800083540504458700520290707A801153803A000000049BB007044C70F7F000041FFD34C48000840504458700520290707A801153803A100000049BB007044C70F7F000041FFD34C4800083540504458700520290707A801153803A200000049BB007044C70F7F000041FFD34C480008351D40504458700520290707A801153803A300000049BB007044C70F7F000041FFD34C480008351D2040504458700507290707A801153803A400000049BB007044C70F7F000041FFD34C480008352040504458700507290707A801153803A500000049BB007044C70F7F000041FFD34C4800082040504458700507290707A801153803A600000049BB437044C70F7F000041FFD34C48A001C401BC0101C001405044587074A801036000000049BB437044C70F7F000041FFD34C48A001C401BC0101C001405044587074A80103A700000049BB007044C70F7F000041FFD34C48A001C401BC01C001405044587074A80103A800000049BB007044C70F7F000041FFD34C480034C001405044587074A80103A900000049BB007044C70F7F000041FFD34C4800C001405044587074A80103AA00000049BB007044C70F7F000041FFD34C484050445870740703AB00000049BB007044C70F7F000041FFD34C484050445870740703AC000000 +[101d937749383] jit-backend-dump} +[101d93774a3eb] {jit-backend-addr +Loop 4 ( #44 FOR_ITER) has address 7f0fc74490d1 to 7f0fc7449cf2 (bootstrap 7f0fc744909b) +[101d93774be7f] jit-backend-addr} +[101d93774cd67] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf580da +0 E0FDFFFF -[7e18c76f717f] jit-backend-dump} -[7e18c76f7d6f] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc74490cd +0 E0FDFFFF +[101d93774e163] jit-backend-dump} +[101d93774ee7b] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf581b0 +0 500B0000 -[7e18c76f8873] jit-backend-dump} -[7e18c76f8de7] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc74491aa +0 440B0000 +[101d93774fc9f] jit-backend-dump} +[101d937750303] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP 
@7f74cbf581bc +0 660B0000 -[7e18c76f986d] jit-backend-dump} -[7e18c76f9e19] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc74491b6 +0 5A0B0000 +[101d937750fa3] jit-backend-dump} +[101d93775157b] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf581c9 +0 790B0000 -[7e18c76fa889] jit-backend-dump} -[7e18c76faca9] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc74491c3 +0 6D0B0000 +[101d93775220f] jit-backend-dump} +[101d9377527ab] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf581dd +0 860B0000 -[7e18c76fb57f] jit-backend-dump} -[7e18c76fb9e3] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc74491d7 +0 7A0B0000 +[101d9377534c7] jit-backend-dump} +[101d937753b57] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf581ee +0 980B0000 -[7e18c76fc293] jit-backend-dump} -[7e18c76fc6df] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc74491e8 +0 8C0B0000 +[101d9377548bf] jit-backend-dump} +[101d937754e73] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf58200 +0 A90B0000 -[7e18c76fcf77] jit-backend-dump} -[7e18c76fd4f3] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc74491fa +0 9D0B0000 +[101d937755aa3] jit-backend-dump} +[101d937756027] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf58212 +0 B90B0000 -[7e18c76fdefd] jit-backend-dump} -[7e18c76fe461] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc744920c +0 AD0B0000 +[101d937756c73] jit-backend-dump} +[101d9377571e7] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf58225 +0 C60B0000 -[7e18c76fedcf] jit-backend-dump} -[7e18c76ff20d] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc744921f +0 BA0B0000 +[101d937757e27] jit-backend-dump} +[101d9377584e7] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf5823c +0 CD0B0000 -[7e18c76ffaaf] jit-backend-dump} -[7e18c76fff13] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc744923d +0 BA0B0000 +[101d937759213] jit-backend-dump} +[101d9377598c3] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf58253 +0 D40B0000 -[7e18c77007b7] jit-backend-dump} -[7e18c7700e13] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc7449254 +0 C10B0000 +[101d93775a527] jit-backend-dump} +[101d937760017] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf58273 +0 F10B0000 -[7e18c77018db] jit-backend-dump} -[7e18c7701e37] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc7449274 +0 DE0B0000 +[101d9377610bb] jit-backend-dump} +[101d9377617a7] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf582a2 +0 E00B0000 -[7e18c7702861] jit-backend-dump} -[7e18c7702dad] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc744929c +0 D40B0000 +[101d9377624f7] jit-backend-dump} +[101d937762b83] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf582b0 +0 F00B0000 -[7e18c77036e7] jit-backend-dump} -[7e18c7703b95] 
{jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc74492aa +0 E40B0000 +[101d937763ad7] jit-backend-dump} +[101d937764243] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf582c7 +0 130C0000 -[7e18c7704437] jit-backend-dump} -[7e18c7704963] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc74492c1 +0 070C0000 +[101d937764ffb] jit-backend-dump} +[101d937765653] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf582dc +0 1C0C0000 -[7e18c7705205] jit-backend-dump} -[7e18c770570d] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc74492d6 +0 100C0000 +[101d937766347] jit-backend-dump} +[101d9377668cb] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf582f5 +0 220C0000 -[7e18c77061fb] jit-backend-dump} -[7e18c7706727] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc74492ef +0 160C0000 +[101d9377674cb] jit-backend-dump} +[101d937767a63] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf583ff +0 370B0000 -[7e18c77070cd] jit-backend-dump} -[7e18c77075ef] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc74493f0 +0 340B0000 +[101d93776864f] jit-backend-dump} +[101d937768d03] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf5840e +0 4C0B0000 -[7e18c7708011] jit-backend-dump} -[7e18c7708473] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc74493ff +0 490B0000 +[101d937769b47] jit-backend-dump} +[101d93776a217] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf584a4 +0 DA0A0000 -[7e18c7708d15] jit-backend-dump} -[7e18c7709117] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc7449495 +0 D70A0000 +[101d93776aeef] jit-backend-dump} +[101d93776b477] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf584b3 +0 EF0A0000 -[7e18c7709b1f] jit-backend-dump} -[7e18c770a061] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc74494a4 +0 EC0A0000 +[101d93776c07b] jit-backend-dump} +[101d93776c60f] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf584cd +0 F90A0000 -[7e18c770cdcf] jit-backend-dump} -[7e18c770d5c7] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc74494be +0 F60A0000 +[101d93776d1fb] jit-backend-dump} +[101d93776d78b] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf584f3 +0 F70A0000 -[7e18c770e0a5] jit-backend-dump} -[7e18c770e5eb] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc74494e4 +0 F40A0000 +[101d93776e5df] jit-backend-dump} +[101d93776ec87] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf58500 +0 0D0B0000 -[7e18c770efa1] jit-backend-dump} -[7e18c770f48b] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc74494f1 +0 090B0000 +[101d93776f943] jit-backend-dump} +[101d93776feeb] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf58514 +0 1C0B0000 -[7e18c770fdf1] jit-backend-dump} -[7e18c7710259] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy 
+CODE_DUMP @7f0fc7449505 +0 170B0000 +[101d937770aeb] jit-backend-dump} +[101d93777107b] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf58522 +0 330B0000 -[7e18c7710d43] jit-backend-dump} -[7e18c77113b3] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc7449513 +0 2D0B0000 +[101d937771c6b] jit-backend-dump} +[101d93777226b] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf5854f +0 4A0B0000 -[7e18c7711dc3] jit-backend-dump} -[7e18c7712201] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc7449540 +0 430B0000 +[101d937772e6b] jit-backend-dump} +[101d937773613] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf58565 +0 560B0000 -[7e18c7712aa3] jit-backend-dump} -[7e18c7712ea5] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc7449556 +0 4F0B0000 +[101d9377742df] jit-backend-dump} +[101d93777494f] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf5857a +0 650B0000 -[7e18c7713747] jit-backend-dump} -[7e18c7713b71] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc744956b +0 5E0B0000 +[101d93777564f] jit-backend-dump} +[101d937775bef] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf58588 +0 7C0B0000 -[7e18c7714619] jit-backend-dump} -[7e18c7714b7b] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc7449579 +0 750B0000 +[101d9377767db] jit-backend-dump} +[101d937776d63] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf5859f +0 890B0000 -[7e18c7715595] jit-backend-dump} -[7e18c77159fd] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc7449590 +0 820B0000 +[101d937777963] jit-backend-dump} +[101d937777eeb] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf585b9 +0 940B0000 -[7e18c771629d] jit-backend-dump} -[7e18c77166a1] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc74495aa +0 8D0B0000 +[101d937778cfb] jit-backend-dump} +[101d937779367] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf585c3 +0 B00B0000 -[7e18c7716f43] jit-backend-dump} -[7e18c771737d] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc74495b4 +0 A90B0000 +[101d93777a047] jit-backend-dump} +[101d93777a5ef] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf585cd +0 CD0B0000 -[7e18c7717c1f] jit-backend-dump} -[7e18c7718191] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc74495be +0 C60B0000 +[101d93777deb7] jit-backend-dump} +[101d93777e5b3] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf585e0 +0 E00B0000 -[7e18c7718c27] jit-backend-dump} -[7e18c77191b7] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc74495d1 +0 D90B0000 +[101d93777f2ab] jit-backend-dump} +[101d93777f8ef] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf586e6 +0 FF0A0000 -[7e18c7719bc5] jit-backend-dump} -[7e18c7719fe3] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc74496d6 +0 F90A0000 +[101d937780583] jit-backend-dump} +[101d937780bef] 
{jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf586f5 +0 140B0000 -[7e18c771a885] jit-backend-dump} -[7e18c771acc3] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc74496e5 +0 0E0B0000 +[101d9377818b7] jit-backend-dump} +[101d937781f2f] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf586fe +0 2F0B0000 -[7e18c771b565] jit-backend-dump} -[7e18c771b9a3] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc74496ee +0 290B0000 +[101d937782c37] jit-backend-dump} +[101d9377831cb] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf58712 +0 3E0B0000 -[7e18c771c3ab] jit-backend-dump} -[7e18c771c903] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc7449702 +0 380B0000 +[101d937783dcf] jit-backend-dump} +[101d937784337] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf58720 +0 500B0000 -[7e18c771d2f5] jit-backend-dump} -[7e18c771d855] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc7449710 +0 4A0B0000 +[101d937784f23] jit-backend-dump} +[101d93778551b] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf58769 +0 410B0000 -[7e18c771e0f9] jit-backend-dump} -[7e18c771e627] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc7449759 +0 3B0B0000 +[101d93778625f] jit-backend-dump} +[101d93778690b] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf5879b +0 2A0B0000 -[7e18c771eecb] jit-backend-dump} -[7e18c771f305] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc744978b +0 240B0000 +[101d9377876fb] jit-backend-dump} +[101d937787d7b] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf587b0 +0 300B0000 -[7e18c771fba9] jit-backend-dump} -[7e18c77200e9] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc744979f +0 2B0B0000 +[101d937788b13] jit-backend-dump} +[101d937789093] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf587c1 +0 3C0B0000 -[7e18c7720b01] jit-backend-dump} -[7e18c7720fe9] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc74497b0 +0 370B0000 +[101d937789ccf] jit-backend-dump} +[101d93778a263] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf587d3 +0 470B0000 -[7e18c7721a15] jit-backend-dump} -[7e18c7721e2f] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc74497c2 +0 420B0000 +[101d93778aec7] jit-backend-dump} +[101d93778b453] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf587f9 +0 3D0B0000 -[7e18c77226d3] jit-backend-dump} -[7e18c7722af9] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc74497e8 +0 380B0000 +[101d93778c223] jit-backend-dump} +[101d93778c8d7] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf58810 +0 420B0000 -[7e18c772338b] jit-backend-dump} -[7e18c772399d] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc74497ff +0 3D0B0000 +[101d93778d687] jit-backend-dump} +[101d93778de53] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf58830 +0 
5B0B0000 -[7e18c77243ab] jit-backend-dump} -[7e18c7724905] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc744981f +0 560B0000 +[101d93778ea7b] jit-backend-dump} +[101d93778f02b] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf5883a +0 6D0B0000 -[7e18c772539b] jit-backend-dump} -[7e18c77258fb] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc7449829 +0 680B0000 +[101d93778fc43] jit-backend-dump} +[101d9377901bf] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf58851 +0 740B0000 -[7e18c7726291] jit-backend-dump} -[7e18c77266f7] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc7449840 +0 6F0B0000 +[101d937790e3f] jit-backend-dump} +[101d9377914b7] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf58866 +0 7E0B0000 -[7e18c7728fe1] jit-backend-dump} -[7e18c772962d] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc7449855 +0 790B0000 +[101d937792197] jit-backend-dump} +[101d937792813] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf58880 +0 840B0000 -[7e18c772a0bb] jit-backend-dump} -[7e18c772a63d] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc744986e +0 800B0000 +[101d93779355b] jit-backend-dump} +[101d937793ad7] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf5898c +0 980A0000 -[7e18c772b0a7] jit-backend-dump} -[7e18c772b63b] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc744997c +0 920A0000 +[101d9377946e3] jit-backend-dump} +[101d937794c57] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf5899b +0 AF0A0000 -[7e18c772c01b] jit-backend-dump} -[7e18c772c479] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc744998b +0 A90A0000 +[101d93779585f] jit-backend-dump} +[101d937795dbf] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf58a31 +0 3F0A0000 -[7e18c772cd63] jit-backend-dump} -[7e18c772d1ab] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc7449a21 +0 390A0000 +[101d937796bd7] jit-backend-dump} +[101d937797237] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf58a40 +0 560A0000 -[7e18c772da2d] jit-backend-dump} -[7e18c772de45] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc7449a30 +0 500A0000 +[101d93779a9f3] jit-backend-dump} +[101d93779b1e3] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf58a5a +0 620A0000 -[7e18c772e7bf] jit-backend-dump} -[7e18c772ed23] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc7449a4a +0 5C0A0000 +[101d93779bfcf] jit-backend-dump} +[101d93779c647] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf58a80 +0 620A0000 -[7e18c772f7ab] jit-backend-dump} -[7e18c772fd09] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc7449a70 +0 5C0A0000 +[101d93779d2e3] jit-backend-dump} +[101d93779d947] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf58a8d +0 790A0000 -[7e18c773070d] jit-backend-dump} -[7e18c7730b63] {jit-backend-dump 
+SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc7449a7d +0 730A0000 +[101d93779e77f] jit-backend-dump} +[101d93779ee0b] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf58aa1 +0 890A0000 -[7e18c77313fd] jit-backend-dump} -[7e18c7731827] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc7449a91 +0 830A0000 +[101d93779fbff] jit-backend-dump} +[101d9377a024b] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf58aaf +0 A00A0000 -[7e18c77320bb] jit-backend-dump} -[7e18c773255d] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc7449a9f +0 9A0A0000 +[101d9377a0f77] jit-backend-dump} +[101d9377a1597] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf58adc +0 B70A0000 -[7e18c7732fa1] jit-backend-dump} -[7e18c77334d7] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc7449acc +0 B10A0000 +[101d9377a219b] jit-backend-dump} +[101d9377a2737] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf58af2 +0 C30A0000 -[7e18c7733ef9] jit-backend-dump} -[7e18c773441b] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc7449ae2 +0 BD0A0000 +[101d9377a336f] jit-backend-dump} +[101d9377a393b] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf58b07 +0 D20A0000 -[7e18c7734cc1] jit-backend-dump} -[7e18c77351e1] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc7449af7 +0 CC0A0000 +[101d9377a4787] jit-backend-dump} +[101d9377a4e33] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf58b15 +0 E90A0000 -[7e18c7735a87] jit-backend-dump} -[7e18c7735eed] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc7449b05 +0 E30A0000 +[101d9377a5bbf] jit-backend-dump} +[101d9377a6193] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf58b2c +0 F60A0000 -[7e18c7736781] jit-backend-dump} -[7e18c7736cdb] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc7449b1c +0 F00A0000 +[101d9377a6ddf] jit-backend-dump} +[101d9377a736b] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf58b46 +0 010B0000 -[7e18c77376f1] jit-backend-dump} -[7e18c7737c1d] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc7449b36 +0 FB0A0000 +[101d9377a7fab] jit-backend-dump} +[101d9377a852f] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf58b50 +0 1D0B0000 -[7e18c7738755] jit-backend-dump} -[7e18c7738b5b] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc7449b40 +0 170B0000 +[101d9377a918f] jit-backend-dump} +[101d9377a982f] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf58b5a +0 3A0B0000 -[7e18c7739403] jit-backend-dump} -[7e18c773982b] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc7449b4a +0 340B0000 +[101d9377aa64b] jit-backend-dump} +[101d9377aad1b] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf58b6d +0 4D0B0000 -[7e18c773a0d1] jit-backend-dump} -[7e18c773a50d] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc7449b5d 
+0 470B0000 +[101d9377abad7] jit-backend-dump} +[101d9377ac06b] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf58c76 +0 690A0000 -[7e18c773af55] jit-backend-dump} -[7e18c773b4b5] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc7449c66 +0 630A0000 +[101d9377acca7] jit-backend-dump} +[101d9377ad277] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf58c85 +0 7F0A0000 -[7e18c773bedd] jit-backend-dump} -[7e18c773c305] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc7449c75 +0 790A0000 +[101d9377aded7] jit-backend-dump} +[101d9377ae46b] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf58c8e +0 9B0A0000 -[7e18c773cbad] jit-backend-dump} -[7e18c773cfeb] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc7449c7e +0 950A0000 +[101d9377af257] jit-backend-dump} +[101d9377af98b] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf58ca2 +0 AB0A0000 -[7e18c773d891] jit-backend-dump} -[7e18c773dce1] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc7449c92 +0 A50A0000 +[101d9377b075f] jit-backend-dump} +[101d9377b0d1f] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf58cb0 +0 BD0A0000 -[7e18c773e587] jit-backend-dump} -[7e18c773eb95] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc7449ca0 +0 B70A0000 +[101d9377b194f] jit-backend-dump} +[101d9377b1f5b] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf58cf5 +0 B20A0000 -[7e18c773f639] jit-backend-dump} -[7e18c774029d] jit-backend} -[7e18c7741961] {jit-log-opt-loop +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc7449ce3 +0 AE0A0000 +[101d9377b2b8f] jit-backend-dump} +[101d9377b3a47] jit-backend} +[101d9377b6a83] {jit-log-opt-loop # Loop 4 ( #44 FOR_ITER) : loop with 351 ops [p0, p1] +84: p2 = getfield_gc(p0, descr=) @@ -1533,377 +1534,377 @@ +157: p22 = getarrayitem_gc(p8, 6, descr=) +168: p24 = getarrayitem_gc(p8, 7, descr=) +172: p25 = getfield_gc(p0, descr=) -+172: label(p0, p1, p2, p3, i4, p5, i6, i7, p10, p12, p14, p16, p18, p20, p22, p24, descr=TargetToken(140139656779344)) ++172: label(p0, p1, p2, p3, i4, p5, i6, i7, p10, p12, p14, p16, p18, p20, p22, p24, descr=TargetToken(139705792106192)) debug_merge_point(0, ' #44 FOR_ITER') -+258: guard_value(i6, 4, descr=) [i6, p1, p0, p2, p3, i4, p5, i7, p10, p12, p14, p16, p18, p20, p22, p24] -+268: guard_class(p16, 38352528, descr=) [p1, p0, p16, p2, p3, i4, p5, p10, p12, p14, p18, p20, p22, p24] -+280: p28 = getfield_gc(p16, descr=) -+284: guard_nonnull(p28, descr=) [p1, p0, p16, p28, p2, p3, i4, p5, p10, p12, p14, p18, p20, p22, p24] -+293: i29 = getfield_gc(p16, descr=) -+297: p30 = getfield_gc(p28, descr=) -+301: guard_class(p30, 38450144, descr=) [p1, p0, p16, i29, p30, p28, p2, p3, i4, p5, p10, p12, p14, p18, p20, p22, p24] -+313: p32 = getfield_gc(p28, descr=) -+317: i33 = getfield_gc(p32, descr=) -+321: i34 = uint_ge(i29, i33) ++265: guard_value(i6, 4, descr=) [i6, p1, p0, p2, p3, i4, p5, i7, p10, p12, p14, p16, p18, p20, p22, p24] ++275: guard_class(p16, 38562496, descr=) [p1, p0, p16, p2, p3, i4, p5, p10, p12, p14, p18, p20, p22, p24] ++287: p28 = getfield_gc(p16, descr=) ++291: guard_nonnull(p28, descr=) [p1, p0, p16, p28, p2, p3, i4, p5, p10, p12, p14, p18, p20, 
p22, p24] ++300: i29 = getfield_gc(p16, descr=) ++304: p30 = getfield_gc(p28, descr=) ++308: guard_class(p30, 38655536, descr=) [p1, p0, p16, i29, p30, p28, p2, p3, i4, p5, p10, p12, p14, p18, p20, p22, p24] ++320: p32 = getfield_gc(p28, descr=) ++324: i33 = getfield_gc(p32, descr=) ++328: i34 = uint_ge(i29, i33) guard_false(i34, descr=) [p1, p0, p16, i29, i33, p32, p2, p3, i4, p5, p10, p12, p14, p18, p20, p22, p24] -+330: p35 = getfield_gc(p32, descr=) -+334: p36 = getarrayitem_gc(p35, i29, descr=) -+339: guard_nonnull(p36, descr=) [p1, p0, p16, i29, p36, p2, p3, i4, p5, p10, p12, p14, p18, p20, p22, p24] -+348: i38 = int_add(i29, 1) -+352: setfield_gc(p16, i38, descr=) -+356: guard_value(i4, 0, descr=) [i4, p1, p0, p2, p3, p5, p10, p12, p14, p16, p20, p22, p24, p36] ++337: p35 = getfield_gc(p32, descr=) ++341: p36 = getarrayitem_gc(p35, i29, descr=) ++346: guard_nonnull(p36, descr=) [p1, p0, p16, i29, p36, p2, p3, i4, p5, p10, p12, p14, p18, p20, p22, p24] ++355: i38 = int_add(i29, 1) ++359: setfield_gc(p16, i38, descr=) ++363: guard_value(i4, 0, descr=) [i4, p1, p0, p2, p3, p5, p10, p12, p14, p16, p20, p22, p24, p36] debug_merge_point(0, ' #47 STORE_FAST') debug_merge_point(0, ' #50 LOAD_GLOBAL') -+366: guard_value(p3, ConstPtr(ptr40), descr=) [p1, p0, p3, p2, p5, p10, p12, p16, p20, p22, p24, p36] -+385: p41 = getfield_gc(p0, descr=) -+389: guard_value(p41, ConstPtr(ptr42), descr=) [p1, p0, p41, p2, p5, p10, p12, p16, p20, p22, p24, p36] -+408: p43 = getfield_gc(p41, descr=) -+412: guard_value(p43, ConstPtr(ptr44), descr=) [p1, p0, p43, p41, p2, p5, p10, p12, p16, p20, p22, p24, p36] -+431: guard_not_invalidated(, descr=) [p1, p0, p41, p2, p5, p10, p12, p16, p20, p22, p24, p36] ++373: guard_value(p3, ConstPtr(ptr40), descr=) [p1, p0, p3, p2, p5, p10, p12, p16, p20, p22, p24, p36] ++392: p41 = getfield_gc(p0, descr=) ++403: guard_value(p41, ConstPtr(ptr42), descr=) [p1, p0, p41, p2, p5, p10, p12, p16, p20, p22, p24, p36] ++422: p43 = getfield_gc(p41, descr=) ++426: guard_value(p43, ConstPtr(ptr44), descr=) [p1, p0, p43, p41, p2, p5, p10, p12, p16, p20, p22, p24, p36] ++445: guard_not_invalidated(, descr=) [p1, p0, p41, p2, p5, p10, p12, p16, p20, p22, p24, p36] debug_merge_point(0, ' #53 LOOKUP_METHOD') -+431: p46 = getfield_gc(ConstPtr(ptr45), descr=) -+444: guard_value(p46, ConstPtr(ptr47), descr=) [p1, p0, p46, p2, p5, p10, p12, p16, p20, p22, p24, p36] ++445: p46 = getfield_gc(ConstPtr(ptr45), descr=) ++458: guard_value(p46, ConstPtr(ptr47), descr=) [p1, p0, p46, p2, p5, p10, p12, p16, p20, p22, p24, p36] debug_merge_point(0, ' #56 LOAD_CONST') debug_merge_point(0, ' #59 LOAD_FAST') debug_merge_point(0, ' #62 CALL_METHOD') -+463: p49 = call(ConstClass(getexecutioncontext), descr=) -+493: p50 = getfield_gc(p49, descr=) -+497: i51 = force_token() -+497: p52 = getfield_gc(p49, descr=) -+501: guard_isnull(p52, descr=) [p1, p0, p49, p52, p2, p5, p10, p12, p16, p50, i51, p36] -+510: i53 = getfield_gc(p49, descr=) -+514: i54 = int_is_zero(i53) -guard_true(i54, descr=) [p1, p0, p49, p2, p5, p10, p12, p16, p50, i51, p36] -debug_merge_point(1, ' #0 LOAD_GLOBAL') -+524: guard_not_invalidated(, descr=) [p1, p0, p49, p2, p5, p10, p12, p16, p50, i51, p36] -debug_merge_point(1, ' #3 LOAD_FAST') -debug_merge_point(1, ' #6 LOAD_FAST') -debug_merge_point(1, ' #9 CALL_FUNCTION') -+524: i56 = getfield_gc(ConstPtr(ptr55), descr=) -+537: i58 = int_ge(0, i56) -guard_true(i58, descr=) [p1, p0, p49, i56, p2, p5, p10, p12, p16, p50, i51, p36] -+547: i59 = force_token() -debug_merge_point(2, ' #0 
LOAD_GLOBAL') -+547: p61 = getfield_gc(ConstPtr(ptr60), descr=) -+555: guard_value(p61, ConstPtr(ptr62), descr=) [p1, p0, p49, p61, p2, p5, p10, p12, p16, i59, p50, i51, p36] -debug_merge_point(2, ' #3 LOAD_FAST') -debug_merge_point(2, ' #6 LOAD_CONST') -debug_merge_point(2, ' #9 BINARY_SUBSCR') -debug_merge_point(2, ' #10 CALL_FUNCTION') -debug_merge_point(2, ' #13 BUILD_TUPLE') -debug_merge_point(2, ' #16 LOAD_FAST') -debug_merge_point(2, ' #19 BINARY_ADD') -debug_merge_point(2, ' #20 STORE_FAST') -debug_merge_point(2, ' #23 LOAD_GLOBAL') -debug_merge_point(2, ' #26 LOOKUP_METHOD') -debug_merge_point(2, ' #29 LOAD_FAST') -debug_merge_point(2, ' #32 CALL_METHOD') -+568: p64 = getfield_gc(ConstPtr(ptr63), descr=) -+581: guard_class(p64, ConstClass(ObjectDictStrategy), descr=) [p1, p0, p49, p64, p2, p5, p10, p12, p16, i59, p50, i51, p36] -+593: p66 = getfield_gc(ConstPtr(ptr63), descr=) -+606: i67 = force_token() ++477: p49 = call(ConstClass(getexecutioncontext), descr=) ++500: p50 = getfield_gc(p49, descr=) ++504: i51 = force_token() ++504: p52 = getfield_gc(p49, descr=) ++508: guard_isnull(p52, descr=) [p1, p0, p49, p52, p2, p5, p10, p12, p16, i51, p50, p36] ++517: i53 = getfield_gc(p49, descr=) ++521: i54 = int_is_zero(i53) +guard_true(i54, descr=) [p1, p0, p49, p2, p5, p10, p12, p16, i51, p50, p36] +debug_merge_point(1, ' #0 LOAD_GLOBAL') ++531: guard_not_invalidated(, descr=) [p1, p0, p49, p2, p5, p10, p12, p16, i51, p50, p36] +debug_merge_point(1, ' #3 LOAD_FAST') +debug_merge_point(1, ' #6 LOAD_FAST') +debug_merge_point(1, ' #9 CALL_FUNCTION') ++531: i56 = getfield_gc(ConstPtr(ptr55), descr=) ++544: i58 = int_ge(0, i56) +guard_true(i58, descr=) [p1, p0, p49, i56, p2, p5, p10, p12, p16, i51, p50, p36] ++554: i59 = force_token() +debug_merge_point(2, ' #0 LOAD_GLOBAL') ++554: p61 = getfield_gc(ConstPtr(ptr60), descr=) ++562: guard_value(p61, ConstPtr(ptr62), descr=) [p1, p0, p49, p61, p2, p5, p10, p12, p16, i59, i51, p50, p36] +debug_merge_point(2, ' #3 LOAD_FAST') +debug_merge_point(2, ' #6 LOAD_CONST') +debug_merge_point(2, ' #9 BINARY_SUBSCR') +debug_merge_point(2, ' #10 CALL_FUNCTION') +debug_merge_point(2, ' #13 BUILD_TUPLE') +debug_merge_point(2, ' #16 LOAD_FAST') +debug_merge_point(2, ' #19 BINARY_ADD') +debug_merge_point(2, ' #20 STORE_FAST') +debug_merge_point(2, ' #23 LOAD_GLOBAL') +debug_merge_point(2, ' #26 LOOKUP_METHOD') +debug_merge_point(2, ' #29 LOAD_FAST') +debug_merge_point(2, ' #32 CALL_METHOD') ++575: p64 = getfield_gc(ConstPtr(ptr63), descr=) ++588: guard_class(p64, ConstClass(ObjectDictStrategy), descr=) [p1, p0, p49, p64, p2, p5, p10, p12, p16, i59, i51, p50, p36] ++600: p66 = getfield_gc(ConstPtr(ptr63), descr=) ++613: i67 = force_token() p69 = new_array(3, descr=) -p71 = new_with_vtable(38431936) -+698: setfield_gc(p71, i59, descr=) +p71 = new_with_vtable(38637968) ++705: setfield_gc(p71, i59, descr=) setfield_gc(p49, p71, descr=) -+747: setfield_gc(p0, i67, descr=) -+758: setarrayitem_gc(p69, 0, ConstPtr(ptr73), descr=) -+766: setarrayitem_gc(p69, 1, ConstPtr(ptr75), descr=) -+780: setarrayitem_gc(p69, 2, ConstPtr(ptr77), descr=) -+794: i79 = call_may_force(ConstClass(hash_tuple), p69, descr=) -guard_not_forced(, descr=) [p1, p0, p49, p66, i79, p71, p2, p5, p10, p12, p16, p36, p50, i51, p69] -+859: guard_no_exception(, descr=) [p1, p0, p49, p66, i79, p71, p2, p5, p10, p12, p16, p36, p50, i51, p69] -+874: i80 = force_token() -p82 = new_with_vtable(38341048) -+944: setfield_gc(p0, i80, descr=) -+955: setfield_gc(p82, p69, descr=) -+966: i84 = 
call_may_force(ConstClass(ll_dict_lookup_trampoline__v978___simple_call__function_l), p66, p82, i79, descr=) -guard_not_forced(, descr=) [p1, p0, p49, p82, i84, p66, p71, p2, p5, p10, p12, p16, p36, p50, i51] -+1024: guard_no_exception(, descr=) [p1, p0, p49, p82, i84, p66, p71, p2, p5, p10, p12, p16, p36, p50, i51] -+1039: i86 = int_and(i84, -9223372036854775808) -+1055: i87 = int_is_true(i86) -guard_false(i87, descr=) [p1, p0, p49, p82, i84, p66, p71, p2, p5, p10, p12, p16, p36, p50, i51] -+1065: p88 = getfield_gc(p66, descr=) -+1076: p89 = getinteriorfield_gc(p88, i84, descr=>) -+1085: guard_nonnull_class(p89, 38586544, descr=) [p1, p0, p49, p82, p89, p71, p2, p5, p10, p12, p16, p36, p50, i51] -debug_merge_point(2, ' #35 STORE_FAST') -debug_merge_point(2, ' #38 LOAD_FAST') -debug_merge_point(2, ' #41 LOAD_CONST') -debug_merge_point(2, ' #44 COMPARE_OP') -+1103: i92 = instance_ptr_eq(ConstPtr(ptr91), p89) -guard_false(i92, descr=) [p1, p0, p49, p71, p2, p5, p10, p12, p16, p82, p89, p36, p50, i51] -debug_merge_point(2, ' #47 POP_JUMP_IF_FALSE') -debug_merge_point(2, ' #50 LOAD_FAST') -debug_merge_point(2, ' #53 RETURN_VALUE') -+1116: p93 = getfield_gc(p49, descr=) -+1127: guard_isnull(p93, descr=) [p1, p0, p49, p89, p93, p71, p2, p5, p10, p12, p16, p82, None, p36, p50, i51] -+1136: i95 = getfield_gc(p49, descr=) -+1140: i96 = int_is_true(i95) -guard_false(i96, descr=) [p1, p0, p49, p89, p71, p2, p5, p10, p12, p16, p82, None, p36, p50, i51] -+1150: p97 = getfield_gc(p49, descr=) -debug_merge_point(1, ' #12 LOOKUP_METHOD') -+1150: setfield_gc(p71, -3, descr=) -debug_merge_point(1, ' #15 LOAD_FAST') -debug_merge_point(1, ' #18 CALL_METHOD') -+1165: guard_not_invalidated(, descr=) [p1, p0, p49, p2, p5, p10, p12, p16, None, p89, p36, p50, i51] -+1165: i99 = strlen(p36) -+1176: i101 = int_gt(9223372036854775807, i99) -guard_true(i101, descr=) [p1, p0, p49, p89, p36, p2, p5, p10, p12, p16, None, None, None, p50, i51] -+1195: p102 = getfield_gc_pure(p89, descr=) -+1199: i103 = getfield_gc_pure(p89, descr=) -+1203: i105 = getarrayitem_gc_pure(p102, 0, descr=) -+1207: i107 = int_eq(i105, 17) -guard_true(i107, descr=) [p1, p0, p49, p89, p2, p5, p10, p12, p16, i99, i103, p102, None, None, p36, p50, i51] -+1217: i109 = getarrayitem_gc_pure(p102, 2, descr=) -+1221: i111 = int_and(i109, 1) -+1228: i112 = int_is_true(i111) -guard_true(i112, descr=) [p1, p0, p49, p89, i109, p2, p5, p10, p12, p16, i99, i103, p102, None, None, p36, p50, i51] -+1238: i114 = getarrayitem_gc_pure(p102, 5, descr=) -+1242: i116 = int_gt(i114, 1) -guard_false(i116, descr=) [p1, p0, p49, p89, p2, p5, p10, p12, p16, i99, i103, p102, None, None, p36, p50, i51] -+1252: i118 = getarrayitem_gc_pure(p102, 1, descr=) -+1256: i120 = int_add(i118, 1) -+1260: i121 = getarrayitem_gc_pure(p102, i120, descr=) -+1265: i123 = int_eq(i121, 19) -guard_true(i123, descr=) [p1, p0, p49, p89, i120, p2, p5, p10, p12, p16, i99, i103, p102, None, None, p36, p50, i51] -+1275: i125 = int_add(i120, 1) -+1282: i126 = getarrayitem_gc_pure(p102, i125, descr=) -+1287: i128 = int_add(i120, 2) -+1291: i130 = int_lt(0, i99) -guard_true(i130, descr=) [p1, p0, p49, p89, i126, i128, p2, p5, p10, p12, p16, i99, i103, p102, None, None, p36, p50, i51] -+1301: guard_value(i128, 11, descr=) [p1, p0, p49, p89, i126, i128, p102, p2, p5, p10, p12, p16, i99, i103, None, None, None, p36, p50, i51] -+1311: guard_value(i126, 51, descr=) [p1, p0, p49, p89, i126, p102, p2, p5, p10, p12, p16, i99, i103, None, None, None, p36, p50, i51] -+1321: guard_value(p102, ConstPtr(ptr133), 
descr=) [p1, p0, p49, p89, p102, p2, p5, p10, p12, p16, i99, i103, None, None, None, p36, p50, i51] -debug_merge_point(2, 'StrLiteralSearch at 11/51 [17. 8. 3. 1. 1. 1. 1. 51. 0. 19. 51. 1]') -+1340: i134 = force_token() -p136 = new_with_vtable(38373528) -p137 = new_with_vtable(38431936) -+1424: setfield_gc(p137, i51, descr=) ++752: setfield_gc(p0, i67, descr=) ++756: setarrayitem_gc(p69, 0, ConstPtr(ptr73), descr=) ++764: setarrayitem_gc(p69, 1, ConstPtr(ptr75), descr=) ++778: setarrayitem_gc(p69, 2, ConstPtr(ptr77), descr=) ++792: i79 = call_may_force(ConstClass(hash_tuple), p69, descr=) +guard_not_forced(, descr=) [p1, p0, p49, p66, i79, p71, p2, p5, p10, p12, p16, p69, p50, i51, p36] ++857: guard_no_exception(, descr=) [p1, p0, p49, p66, i79, p71, p2, p5, p10, p12, p16, p69, p50, i51, p36] ++872: i80 = force_token() +p82 = new_with_vtable(38549536) ++942: setfield_gc(p0, i80, descr=) ++953: setfield_gc(p82, p69, descr=) ++964: i84 = call_may_force(ConstClass(ll_dict_lookup_trampoline__v693___simple_call__function_l), p66, p82, i79, descr=) +guard_not_forced(, descr=) [p1, p0, p49, p82, i84, p66, p71, p2, p5, p10, p12, p16, p50, i51, p36] ++1022: guard_no_exception(, descr=) [p1, p0, p49, p82, i84, p66, p71, p2, p5, p10, p12, p16, p50, i51, p36] ++1037: i86 = int_and(i84, -9223372036854775808) ++1053: i87 = int_is_true(i86) +guard_false(i87, descr=) [p1, p0, p49, p82, i84, p66, p71, p2, p5, p10, p12, p16, p50, i51, p36] ++1063: p88 = getfield_gc(p66, descr=) ++1074: p89 = getinteriorfield_gc(p88, i84, descr=>) ++1083: guard_nonnull_class(p89, 38793968, descr=) [p1, p0, p49, p82, p89, p71, p2, p5, p10, p12, p16, p50, i51, p36] +debug_merge_point(2, ' #35 STORE_FAST') +debug_merge_point(2, ' #38 LOAD_FAST') +debug_merge_point(2, ' #41 LOAD_CONST') +debug_merge_point(2, ' #44 COMPARE_OP') ++1101: i92 = instance_ptr_eq(ConstPtr(ptr91), p89) +guard_false(i92, descr=) [p1, p0, p49, p71, p2, p5, p10, p12, p16, p82, p89, p50, i51, p36] +debug_merge_point(2, ' #47 POP_JUMP_IF_FALSE') +debug_merge_point(2, ' #50 LOAD_FAST') +debug_merge_point(2, ' #53 RETURN_VALUE') ++1114: p93 = getfield_gc(p49, descr=) ++1125: guard_isnull(p93, descr=) [p1, p0, p49, p89, p93, p71, p2, p5, p10, p12, p16, p82, None, p50, i51, p36] ++1134: i95 = getfield_gc(p49, descr=) ++1138: i96 = int_is_true(i95) +guard_false(i96, descr=) [p1, p0, p49, p89, p71, p2, p5, p10, p12, p16, p82, None, p50, i51, p36] ++1148: p97 = getfield_gc(p49, descr=) +debug_merge_point(1, ' #12 LOOKUP_METHOD') ++1148: setfield_gc(p71, -3, descr=) +debug_merge_point(1, ' #15 LOAD_FAST') +debug_merge_point(1, ' #18 CALL_METHOD') ++1163: guard_not_invalidated(, descr=) [p1, p0, p49, p2, p5, p10, p12, p16, None, p89, p50, i51, p36] ++1163: i99 = strlen(p36) ++1174: i101 = int_gt(9223372036854775807, i99) +guard_true(i101, descr=) [p1, p0, p49, p89, p36, p2, p5, p10, p12, p16, None, None, p50, i51, None] ++1193: p102 = getfield_gc_pure(p89, descr=) ++1197: i103 = getfield_gc_pure(p89, descr=) ++1201: i105 = getarrayitem_gc_pure(p102, 0, descr=) ++1205: i107 = int_eq(i105, 17) +guard_true(i107, descr=) [p1, p0, p49, p89, p2, p5, p10, p12, p16, i99, i103, p102, None, None, p50, i51, p36] ++1215: i109 = getarrayitem_gc_pure(p102, 2, descr=) ++1219: i111 = int_and(i109, 1) ++1226: i112 = int_is_true(i111) +guard_true(i112, descr=) [p1, p0, p49, p89, i109, p2, p5, p10, p12, p16, i99, i103, p102, None, None, p50, i51, p36] ++1236: i114 = getarrayitem_gc_pure(p102, 5, descr=) ++1240: i116 = int_gt(i114, 1) +guard_false(i116, descr=) [p1, p0, p49, p89, p2, 
p5, p10, p12, p16, i99, i103, p102, None, None, p50, i51, p36] ++1250: i118 = getarrayitem_gc_pure(p102, 1, descr=) ++1254: i120 = int_add(i118, 1) ++1258: i121 = getarrayitem_gc_pure(p102, i120, descr=) ++1263: i123 = int_eq(i121, 19) +guard_true(i123, descr=) [p1, p0, p49, p89, i120, p2, p5, p10, p12, p16, i99, i103, p102, None, None, p50, i51, p36] ++1273: i125 = int_add(i120, 1) ++1280: i126 = getarrayitem_gc_pure(p102, i125, descr=) ++1285: i128 = int_add(i120, 2) ++1289: i130 = int_lt(0, i99) +guard_true(i130, descr=) [p1, p0, p49, p89, i126, i128, p2, p5, p10, p12, p16, i99, i103, p102, None, None, p50, i51, p36] ++1299: guard_value(i128, 11, descr=) [p1, p0, p49, p89, i126, i128, p102, p2, p5, p10, p12, p16, i99, i103, None, None, None, p50, i51, p36] ++1309: guard_value(i126, 51, descr=) [p1, p0, p49, p89, i126, p102, p2, p5, p10, p12, p16, i99, i103, None, None, None, p50, i51, p36] ++1319: guard_value(p102, ConstPtr(ptr133), descr=) [p1, p0, p49, p89, p102, p2, p5, p10, p12, p16, i99, i103, None, None, None, p50, i51, p36] +debug_merge_point(2, 're StrLiteralSearch at 11/51 [17. 8. 3. 1. 1. 1. 1. 51. 0. 19. 51. 1]') ++1338: i134 = force_token() +p136 = new_with_vtable(38602768) +p137 = new_with_vtable(38637968) ++1422: setfield_gc(p137, i51, descr=) setfield_gc(p49, p137, descr=) -+1472: setfield_gc(p0, i134, descr=) -+1483: setfield_gc(p136, i99, descr=) -+1487: setfield_gc(p136, p36, descr=) -+1491: setfield_gc(p136, ConstPtr(ptr133), descr=) -+1505: setfield_gc(p136, i103, descr=) -+1509: i138 = call_assembler(0, p136, descr=) ++1469: setfield_gc(p0, i134, descr=) ++1480: setfield_gc(p136, ConstPtr(ptr133), descr=) ++1494: setfield_gc(p136, i103, descr=) ++1498: setfield_gc(p136, i99, descr=) ++1502: setfield_gc(p136, p36, descr=) ++1506: i138 = call_assembler(0, p136, descr=) guard_not_forced(, descr=) [p1, p0, p49, p136, p89, i138, p137, p2, p5, p10, p12, p16, p50, p36] -+1602: guard_no_exception(, descr=) [p1, p0, p49, p136, p89, i138, p137, p2, p5, p10, p12, p16, p50, p36] -+1617: guard_false(i138, descr=) [p1, p0, p49, p136, p89, p137, p2, p5, p10, p12, p16, p50, p36] -debug_merge_point(1, ' #21 RETURN_VALUE') -+1626: p139 = getfield_gc(p49, descr=) -+1637: guard_isnull(p139, descr=) [p1, p0, p49, p139, p137, p2, p5, p10, p12, p16, p50, p36] -+1646: i140 = getfield_gc(p49, descr=) -+1650: i141 = int_is_true(i140) ++1599: guard_no_exception(, descr=) [p1, p0, p49, p136, p89, i138, p137, p2, p5, p10, p12, p16, p50, p36] ++1614: guard_false(i138, descr=) [p1, p0, p49, p136, p89, p137, p2, p5, p10, p12, p16, p50, p36] +debug_merge_point(1, ' #21 RETURN_VALUE') ++1623: p139 = getfield_gc(p49, descr=) ++1634: guard_isnull(p139, descr=) [p1, p0, p49, p139, p137, p2, p5, p10, p12, p16, p50, p36] ++1643: i140 = getfield_gc(p49, descr=) ++1647: i141 = int_is_true(i140) guard_false(i141, descr=) [p1, p0, p49, p137, p2, p5, p10, p12, p16, p50, p36] -+1660: p142 = getfield_gc(p49, descr=) ++1657: p142 = getfield_gc(p49, descr=) debug_merge_point(0, ' #65 POP_TOP') debug_merge_point(0, ' #66 JUMP_ABSOLUTE') setfield_gc(p49, p50, descr=) -+1700: setfield_gc(p137, -3, descr=) -+1715: guard_not_invalidated(, descr=) [p1, p0, p2, p5, p10, p12, p16, None, p36] -+1715: i145 = getfield_raw(43858376, descr=) -+1723: i147 = int_lt(i145, 0) ++1697: setfield_gc(p137, -3, descr=) ++1712: guard_not_invalidated(, descr=) [p1, p0, p2, p5, p10, p12, p16, None, p36] ++1712: i145 = getfield_raw(44057928, descr=) ++1720: i147 = int_lt(i145, 0) guard_false(i147, descr=) [p1, p0, p2, p5, p10, p12, p16, 
None, p36] debug_merge_point(0, ' #44 FOR_ITER') -+1733: label(p0, p1, p2, p5, p10, p12, p36, p16, i140, p49, p50, descr=TargetToken(140139656777744)) ++1730: label(p0, p1, p2, p5, p10, p12, p36, p16, i140, p49, p50, descr=TargetToken(139705792106272)) debug_merge_point(0, ' #44 FOR_ITER') -+1763: p148 = getfield_gc(p16, descr=) -+1774: guard_nonnull(p148, descr=) [p1, p0, p16, p148, p2, p5, p10, p12, p36] -+1783: i149 = getfield_gc(p16, descr=) -+1787: p150 = getfield_gc(p148, descr=) -+1791: guard_class(p150, 38450144, descr=) [p1, p0, p16, i149, p150, p148, p2, p5, p10, p12, p36] -+1804: p151 = getfield_gc(p148, descr=) -+1808: i152 = getfield_gc(p151, descr=) -+1812: i153 = uint_ge(i149, i152) ++1760: p148 = getfield_gc(p16, descr=) ++1771: guard_nonnull(p148, descr=) [p1, p0, p16, p148, p2, p5, p10, p12, p36] ++1780: i149 = getfield_gc(p16, descr=) ++1784: p150 = getfield_gc(p148, descr=) ++1788: guard_class(p150, 38655536, descr=) [p1, p0, p16, i149, p150, p148, p2, p5, p10, p12, p36] ++1800: p151 = getfield_gc(p148, descr=) ++1804: i152 = getfield_gc(p151, descr=) ++1808: i153 = uint_ge(i149, i152) guard_false(i153, descr=) [p1, p0, p16, i149, i152, p151, p2, p5, p10, p12, p36] -+1821: p154 = getfield_gc(p151, descr=) -+1825: p155 = getarrayitem_gc(p154, i149, descr=) -+1830: guard_nonnull(p155, descr=) [p1, p0, p16, i149, p155, p2, p5, p10, p12, p36] -+1839: i156 = int_add(i149, 1) ++1817: p154 = getfield_gc(p151, descr=) ++1821: p155 = getarrayitem_gc(p154, i149, descr=) ++1826: guard_nonnull(p155, descr=) [p1, p0, p16, i149, p155, p2, p5, p10, p12, p36] ++1835: i156 = int_add(i149, 1) debug_merge_point(0, ' #47 STORE_FAST') debug_merge_point(0, ' #50 LOAD_GLOBAL') -+1843: p157 = getfield_gc(p0, descr=) -+1854: setfield_gc(p16, i156, descr=) -+1858: guard_value(p157, ConstPtr(ptr42), descr=) [p1, p0, p157, p2, p5, p10, p12, p16, p155, None] -+1877: p158 = getfield_gc(p157, descr=) -+1881: guard_value(p158, ConstPtr(ptr44), descr=) [p1, p0, p158, p157, p2, p5, p10, p12, p16, p155, None] -+1900: guard_not_invalidated(, descr=) [p1, p0, p157, p2, p5, p10, p12, p16, p155, None] ++1839: p157 = getfield_gc(p0, descr=) ++1850: setfield_gc(p16, i156, descr=) ++1854: guard_value(p157, ConstPtr(ptr42), descr=) [p1, p0, p157, p2, p5, p10, p12, p16, p155, None] ++1873: p158 = getfield_gc(p157, descr=) ++1877: guard_value(p158, ConstPtr(ptr44), descr=) [p1, p0, p158, p157, p2, p5, p10, p12, p16, p155, None] ++1896: guard_not_invalidated(, descr=) [p1, p0, p157, p2, p5, p10, p12, p16, p155, None] debug_merge_point(0, ' #53 LOOKUP_METHOD') -+1900: p159 = getfield_gc(ConstPtr(ptr45), descr=) -+1913: guard_value(p159, ConstPtr(ptr47), descr=) [p1, p0, p159, p2, p5, p10, p12, p16, p155, None] ++1896: p159 = getfield_gc(ConstPtr(ptr45), descr=) ++1909: guard_value(p159, ConstPtr(ptr47), descr=) [p1, p0, p159, p2, p5, p10, p12, p16, p155, None] debug_merge_point(0, ' #56 LOAD_CONST') debug_merge_point(0, ' #59 LOAD_FAST') debug_merge_point(0, ' #62 CALL_METHOD') -+1932: i160 = force_token() -+1932: i161 = int_is_zero(i140) -guard_true(i161, descr=) [p1, p0, p49, p2, p5, p10, p12, p16, i160, p50, p155, None] -debug_merge_point(1, ' #0 LOAD_GLOBAL') -debug_merge_point(1, ' #3 LOAD_FAST') -debug_merge_point(1, ' #6 LOAD_FAST') -debug_merge_point(1, ' #9 CALL_FUNCTION') -+1942: i162 = getfield_gc(ConstPtr(ptr55), descr=) -+1955: i163 = int_ge(0, i162) -guard_true(i163, descr=) [p1, p0, p49, i162, p2, p5, p10, p12, p16, i160, p50, p155, None] -+1965: i164 = force_token() -debug_merge_point(2, ' #0 
LOAD_GLOBAL') -+1965: p165 = getfield_gc(ConstPtr(ptr60), descr=) -+1973: guard_value(p165, ConstPtr(ptr62), descr=) [p1, p0, p49, p165, p2, p5, p10, p12, p16, i164, i160, p50, p155, None] -debug_merge_point(2, ' #3 LOAD_FAST') -debug_merge_point(2, ' #6 LOAD_CONST') -debug_merge_point(2, ' #9 BINARY_SUBSCR') -debug_merge_point(2, ' #10 CALL_FUNCTION') -debug_merge_point(2, ' #13 BUILD_TUPLE') -debug_merge_point(2, ' #16 LOAD_FAST') -debug_merge_point(2, ' #19 BINARY_ADD') -debug_merge_point(2, ' #20 STORE_FAST') -debug_merge_point(2, ' #23 LOAD_GLOBAL') -debug_merge_point(2, ' #26 LOOKUP_METHOD') -debug_merge_point(2, ' #29 LOAD_FAST') -debug_merge_point(2, ' #32 CALL_METHOD') -+1986: p166 = getfield_gc(ConstPtr(ptr63), descr=) -+1999: guard_class(p166, ConstClass(ObjectDictStrategy), descr=) [p1, p0, p49, p166, p2, p5, p10, p12, p16, i164, i160, p50, p155, None] -+2012: p167 = getfield_gc(ConstPtr(ptr63), descr=) -+2025: i168 = force_token() ++1928: i160 = force_token() ++1928: i161 = int_is_zero(i140) +guard_true(i161, descr=) [p1, p0, p49, p2, p5, p10, p12, p16, p50, i160, p155, None] +debug_merge_point(1, ' #0 LOAD_GLOBAL') +debug_merge_point(1, ' #3 LOAD_FAST') +debug_merge_point(1, ' #6 LOAD_FAST') +debug_merge_point(1, ' #9 CALL_FUNCTION') ++1938: i162 = getfield_gc(ConstPtr(ptr55), descr=) ++1951: i163 = int_ge(0, i162) +guard_true(i163, descr=) [p1, p0, p49, i162, p2, p5, p10, p12, p16, p50, i160, p155, None] ++1961: i164 = force_token() +debug_merge_point(2, ' #0 LOAD_GLOBAL') ++1961: p165 = getfield_gc(ConstPtr(ptr60), descr=) ++1969: guard_value(p165, ConstPtr(ptr62), descr=) [p1, p0, p49, p165, p2, p5, p10, p12, p16, i164, p50, i160, p155, None] +debug_merge_point(2, ' #3 LOAD_FAST') +debug_merge_point(2, ' #6 LOAD_CONST') +debug_merge_point(2, ' #9 BINARY_SUBSCR') +debug_merge_point(2, ' #10 CALL_FUNCTION') +debug_merge_point(2, ' #13 BUILD_TUPLE') +debug_merge_point(2, ' #16 LOAD_FAST') +debug_merge_point(2, ' #19 BINARY_ADD') +debug_merge_point(2, ' #20 STORE_FAST') +debug_merge_point(2, ' #23 LOAD_GLOBAL') +debug_merge_point(2, ' #26 LOOKUP_METHOD') +debug_merge_point(2, ' #29 LOAD_FAST') +debug_merge_point(2, ' #32 CALL_METHOD') ++1982: p166 = getfield_gc(ConstPtr(ptr63), descr=) ++1995: guard_class(p166, ConstClass(ObjectDictStrategy), descr=) [p1, p0, p49, p166, p2, p5, p10, p12, p16, i164, p50, i160, p155, None] ++2007: p167 = getfield_gc(ConstPtr(ptr63), descr=) ++2020: i168 = force_token() p169 = new_array(3, descr=) -p170 = new_with_vtable(38431936) -+2117: setfield_gc(p170, i164, descr=) +p170 = new_with_vtable(38637968) ++2112: setfield_gc(p170, i164, descr=) setfield_gc(p49, p170, descr=) -+2168: setfield_gc(p0, i168, descr=) -+2172: setarrayitem_gc(p169, 0, ConstPtr(ptr73), descr=) -+2180: setarrayitem_gc(p169, 1, ConstPtr(ptr75), descr=) -+2194: setarrayitem_gc(p169, 2, ConstPtr(ptr174), descr=) -+2208: i175 = call_may_force(ConstClass(hash_tuple), p169, descr=) -guard_not_forced(, descr=) [p1, p0, p49, p167, i175, p170, p2, p5, p10, p12, p16, i160, p169, p155, p50] -+2280: guard_no_exception(, descr=) [p1, p0, p49, p167, i175, p170, p2, p5, p10, p12, p16, i160, p169, p155, p50] -+2295: i176 = force_token() -p177 = new_with_vtable(38341048) -+2365: setfield_gc(p0, i176, descr=) -+2376: setfield_gc(p177, p169, descr=) -+2387: i178 = call_may_force(ConstClass(ll_dict_lookup_trampoline__v978___simple_call__function_l), p167, p177, i175, descr=) -guard_not_forced(, descr=) [p1, p0, p49, p177, i178, p167, p170, p2, p5, p10, p12, p16, i160, p155, p50] -+2445: 
guard_no_exception(, descr=) [p1, p0, p49, p177, i178, p167, p170, p2, p5, p10, p12, p16, i160, p155, p50] -+2460: i179 = int_and(i178, -9223372036854775808) -+2476: i180 = int_is_true(i179) -guard_false(i180, descr=) [p1, p0, p49, p177, i178, p167, p170, p2, p5, p10, p12, p16, i160, p155, p50] -+2486: p181 = getfield_gc(p167, descr=) -+2497: p182 = getinteriorfield_gc(p181, i178, descr=>) -+2506: guard_nonnull_class(p182, 38586544, descr=) [p1, p0, p49, p177, p182, p170, p2, p5, p10, p12, p16, i160, p155, p50] -debug_merge_point(2, ' #35 STORE_FAST') -debug_merge_point(2, ' #38 LOAD_FAST') -debug_merge_point(2, ' #41 LOAD_CONST') -debug_merge_point(2, ' #44 COMPARE_OP') -+2524: i183 = instance_ptr_eq(ConstPtr(ptr91), p182) -guard_false(i183, descr=) [p1, p0, p49, p170, p2, p5, p10, p12, p16, p177, p182, i160, p155, p50] -debug_merge_point(2, ' #47 POP_JUMP_IF_FALSE') -debug_merge_point(2, ' #50 LOAD_FAST') -debug_merge_point(2, ' #53 RETURN_VALUE') -+2537: p184 = getfield_gc(p49, descr=) -+2548: guard_isnull(p184, descr=) [p1, p0, p49, p182, p184, p170, p2, p5, p10, p12, p16, p177, None, i160, p155, p50] -+2557: i185 = getfield_gc(p49, descr=) -+2561: i186 = int_is_true(i185) -guard_false(i186, descr=) [p1, p0, p49, p182, p170, p2, p5, p10, p12, p16, p177, None, i160, p155, p50] -+2571: p187 = getfield_gc(p49, descr=) -debug_merge_point(1, ' #12 LOOKUP_METHOD') -+2571: setfield_gc(p170, -3, descr=) -debug_merge_point(1, ' #15 LOAD_FAST') -debug_merge_point(1, ' #18 CALL_METHOD') -+2586: guard_not_invalidated(, descr=) [p1, p0, p49, p2, p5, p10, p12, p16, None, p182, i160, p155, p50] -+2586: i189 = strlen(p155) -+2597: i191 = int_gt(9223372036854775807, i189) -guard_true(i191, descr=) [p1, p0, p49, p182, p155, p2, p5, p10, p12, p16, None, None, i160, None, p50] -+2616: p192 = getfield_gc_pure(p182, descr=) -+2620: i193 = getfield_gc_pure(p182, descr=) -+2624: i194 = getarrayitem_gc_pure(p192, 0, descr=) -+2628: i195 = int_eq(i194, 17) -guard_true(i195, descr=) [p1, p0, p49, p182, p2, p5, p10, p12, p16, i193, i189, p192, None, None, i160, p155, p50] -+2638: i196 = getarrayitem_gc_pure(p192, 2, descr=) -+2642: i197 = int_and(i196, 1) -+2649: i198 = int_is_true(i197) -guard_true(i198, descr=) [p1, p0, p49, p182, i196, p2, p5, p10, p12, p16, i193, i189, p192, None, None, i160, p155, p50] -+2659: i199 = getarrayitem_gc_pure(p192, 5, descr=) -+2663: i200 = int_gt(i199, 1) -guard_false(i200, descr=) [p1, p0, p49, p182, p2, p5, p10, p12, p16, i193, i189, p192, None, None, i160, p155, p50] -+2673: i201 = getarrayitem_gc_pure(p192, 1, descr=) -+2677: i202 = int_add(i201, 1) -+2681: i203 = getarrayitem_gc_pure(p192, i202, descr=) -+2686: i204 = int_eq(i203, 19) -guard_true(i204, descr=) [p1, p0, p49, p182, i202, p2, p5, p10, p12, p16, i193, i189, p192, None, None, i160, p155, p50] -+2696: i205 = int_add(i202, 1) -+2703: i206 = getarrayitem_gc_pure(p192, i205, descr=) -+2708: i207 = int_add(i202, 2) -+2712: i209 = int_lt(0, i189) -guard_true(i209, descr=) [p1, p0, p49, p182, i206, i207, p2, p5, p10, p12, p16, i193, i189, p192, None, None, i160, p155, p50] -+2722: guard_value(i207, 11, descr=) [p1, p0, p49, p182, i206, i207, p192, p2, p5, p10, p12, p16, i193, i189, None, None, None, i160, p155, p50] -+2732: guard_value(i206, 51, descr=) [p1, p0, p49, p182, i206, p192, p2, p5, p10, p12, p16, i193, i189, None, None, None, i160, p155, p50] -+2742: guard_value(p192, ConstPtr(ptr133), descr=) [p1, p0, p49, p182, p192, p2, p5, p10, p12, p16, i193, i189, None, None, None, i160, p155, p50] 
-debug_merge_point(2, 'StrLiteralSearch at 11/51 [17. 8. 3. 1. 1. 1. 1. 51. 0. 19. 51. 1]') -+2761: i210 = force_token() -p211 = new_with_vtable(38373528) -p212 = new_with_vtable(38431936) -+2845: setfield_gc(p212, i160, descr=) ++2165: setfield_gc(p0, i168, descr=) ++2169: setarrayitem_gc(p169, 0, ConstPtr(ptr73), descr=) ++2177: setarrayitem_gc(p169, 1, ConstPtr(ptr75), descr=) ++2191: setarrayitem_gc(p169, 2, ConstPtr(ptr174), descr=) ++2205: i175 = call_may_force(ConstClass(hash_tuple), p169, descr=) +guard_not_forced(, descr=) [p1, p0, p49, p167, i175, p170, p2, p5, p10, p12, p16, p169, p50, i160, p155] ++2277: guard_no_exception(, descr=) [p1, p0, p49, p167, i175, p170, p2, p5, p10, p12, p16, p169, p50, i160, p155] ++2292: i176 = force_token() +p177 = new_with_vtable(38549536) ++2362: setfield_gc(p0, i176, descr=) ++2373: setfield_gc(p177, p169, descr=) ++2384: i178 = call_may_force(ConstClass(ll_dict_lookup_trampoline__v693___simple_call__function_l), p167, p177, i175, descr=) +guard_not_forced(, descr=) [p1, p0, p49, p177, i178, p167, p170, p2, p5, p10, p12, p16, p50, i160, p155] ++2442: guard_no_exception(, descr=) [p1, p0, p49, p177, i178, p167, p170, p2, p5, p10, p12, p16, p50, i160, p155] ++2457: i179 = int_and(i178, -9223372036854775808) ++2473: i180 = int_is_true(i179) +guard_false(i180, descr=) [p1, p0, p49, p177, i178, p167, p170, p2, p5, p10, p12, p16, p50, i160, p155] ++2483: p181 = getfield_gc(p167, descr=) ++2494: p182 = getinteriorfield_gc(p181, i178, descr=>) ++2503: guard_nonnull_class(p182, 38793968, descr=) [p1, p0, p49, p177, p182, p170, p2, p5, p10, p12, p16, p50, i160, p155] +debug_merge_point(2, ' #35 STORE_FAST') +debug_merge_point(2, ' #38 LOAD_FAST') +debug_merge_point(2, ' #41 LOAD_CONST') +debug_merge_point(2, ' #44 COMPARE_OP') ++2521: i183 = instance_ptr_eq(ConstPtr(ptr91), p182) +guard_false(i183, descr=) [p1, p0, p49, p170, p2, p5, p10, p12, p16, p182, p177, p50, i160, p155] +debug_merge_point(2, ' #47 POP_JUMP_IF_FALSE') +debug_merge_point(2, ' #50 LOAD_FAST') +debug_merge_point(2, ' #53 RETURN_VALUE') ++2534: p184 = getfield_gc(p49, descr=) ++2545: guard_isnull(p184, descr=) [p1, p0, p49, p182, p184, p170, p2, p5, p10, p12, p16, None, p177, p50, i160, p155] ++2554: i185 = getfield_gc(p49, descr=) ++2558: i186 = int_is_true(i185) +guard_false(i186, descr=) [p1, p0, p49, p182, p170, p2, p5, p10, p12, p16, None, p177, p50, i160, p155] ++2568: p187 = getfield_gc(p49, descr=) +debug_merge_point(1, ' #12 LOOKUP_METHOD') ++2568: setfield_gc(p170, -3, descr=) +debug_merge_point(1, ' #15 LOAD_FAST') +debug_merge_point(1, ' #18 CALL_METHOD') ++2583: guard_not_invalidated(, descr=) [p1, p0, p49, p2, p5, p10, p12, p16, p182, None, p50, i160, p155] ++2583: i189 = strlen(p155) ++2594: i191 = int_gt(9223372036854775807, i189) +guard_true(i191, descr=) [p1, p0, p49, p182, p155, p2, p5, p10, p12, p16, None, None, p50, i160, None] ++2613: p192 = getfield_gc_pure(p182, descr=) ++2617: i193 = getfield_gc_pure(p182, descr=) ++2621: i194 = getarrayitem_gc_pure(p192, 0, descr=) ++2625: i195 = int_eq(i194, 17) +guard_true(i195, descr=) [p1, p0, p49, p182, p2, p5, p10, p12, p16, i193, p192, i189, None, None, p50, i160, p155] ++2635: i196 = getarrayitem_gc_pure(p192, 2, descr=) ++2639: i197 = int_and(i196, 1) ++2646: i198 = int_is_true(i197) +guard_true(i198, descr=) [p1, p0, p49, p182, i196, p2, p5, p10, p12, p16, i193, p192, i189, None, None, p50, i160, p155] ++2656: i199 = getarrayitem_gc_pure(p192, 5, descr=) ++2660: i200 = int_gt(i199, 1) +guard_false(i200, descr=) [p1, 
p0, p49, p182, p2, p5, p10, p12, p16, i193, p192, i189, None, None, p50, i160, p155] ++2670: i201 = getarrayitem_gc_pure(p192, 1, descr=) ++2674: i202 = int_add(i201, 1) ++2678: i203 = getarrayitem_gc_pure(p192, i202, descr=) ++2683: i204 = int_eq(i203, 19) +guard_true(i204, descr=) [p1, p0, p49, p182, i202, p2, p5, p10, p12, p16, i193, p192, i189, None, None, p50, i160, p155] ++2693: i205 = int_add(i202, 1) ++2700: i206 = getarrayitem_gc_pure(p192, i205, descr=) ++2705: i207 = int_add(i202, 2) ++2709: i209 = int_lt(0, i189) +guard_true(i209, descr=) [p1, p0, p49, p182, i206, i207, p2, p5, p10, p12, p16, i193, p192, i189, None, None, p50, i160, p155] ++2719: guard_value(i207, 11, descr=) [p1, p0, p49, p182, i206, i207, p192, p2, p5, p10, p12, p16, i193, None, i189, None, None, p50, i160, p155] ++2729: guard_value(i206, 51, descr=) [p1, p0, p49, p182, i206, p192, p2, p5, p10, p12, p16, i193, None, i189, None, None, p50, i160, p155] ++2739: guard_value(p192, ConstPtr(ptr133), descr=) [p1, p0, p49, p182, p192, p2, p5, p10, p12, p16, i193, None, i189, None, None, p50, i160, p155] +debug_merge_point(2, 're StrLiteralSearch at 11/51 [17. 8. 3. 1. 1. 1. 1. 51. 0. 19. 51. 1]') ++2758: i210 = force_token() +p211 = new_with_vtable(38602768) +p212 = new_with_vtable(38637968) ++2842: setfield_gc(p212, i160, descr=) setfield_gc(p49, p212, descr=) -+2896: setfield_gc(p0, i210, descr=) -+2907: setfield_gc(p211, i189, descr=) -+2911: setfield_gc(p211, p155, descr=) -+2915: setfield_gc(p211, ConstPtr(ptr133), descr=) -+2929: setfield_gc(p211, i193, descr=) -+2933: i213 = call_assembler(0, p211, descr=) -guard_not_forced(, descr=) [p1, p0, p49, p211, p182, i213, p212, p2, p5, p10, p12, p16, p50, p155] -+3026: guard_no_exception(, descr=) [p1, p0, p49, p211, p182, i213, p212, p2, p5, p10, p12, p16, p50, p155] -+3041: guard_false(i213, descr=) [p1, p0, p49, p211, p182, p212, p2, p5, p10, p12, p16, p50, p155] -debug_merge_point(1, ' #21 RETURN_VALUE') -+3050: p214 = getfield_gc(p49, descr=) -+3061: guard_isnull(p214, descr=) [p1, p0, p49, p214, p212, p2, p5, p10, p12, p16, p50, p155] -+3070: i215 = getfield_gc(p49, descr=) -+3074: i216 = int_is_true(i215) -guard_false(i216, descr=) [p1, p0, p49, p212, p2, p5, p10, p12, p16, p50, p155] -+3084: p217 = getfield_gc(p49, descr=) ++2893: setfield_gc(p0, i210, descr=) ++2904: setfield_gc(p211, ConstPtr(ptr133), descr=) ++2918: setfield_gc(p211, i193, descr=) ++2922: setfield_gc(p211, i189, descr=) ++2926: setfield_gc(p211, p155, descr=) ++2930: i213 = call_assembler(0, p211, descr=) +guard_not_forced(, descr=) [p1, p0, p49, p211, p182, i213, p212, p2, p5, p10, p12, p16, p155, p50] ++3023: guard_no_exception(, descr=) [p1, p0, p49, p211, p182, i213, p212, p2, p5, p10, p12, p16, p155, p50] ++3038: guard_false(i213, descr=) [p1, p0, p49, p211, p182, p212, p2, p5, p10, p12, p16, p155, p50] +debug_merge_point(1, ' #21 RETURN_VALUE') ++3047: p214 = getfield_gc(p49, descr=) ++3058: guard_isnull(p214, descr=) [p1, p0, p49, p214, p212, p2, p5, p10, p12, p16, p155, p50] ++3067: i215 = getfield_gc(p49, descr=) ++3071: i216 = int_is_true(i215) +guard_false(i216, descr=) [p1, p0, p49, p212, p2, p5, p10, p12, p16, p155, p50] ++3081: p217 = getfield_gc(p49, descr=) debug_merge_point(0, ' #65 POP_TOP') debug_merge_point(0, ' #66 JUMP_ABSOLUTE') setfield_gc(p49, p50, descr=) -+3120: setfield_gc(p212, -3, descr=) -+3135: guard_not_invalidated(, descr=) [p1, p0, p2, p5, p10, p12, p16, None, p155] -+3135: i219 = getfield_raw(43858376, descr=) -+3143: i220 = int_lt(i219, 0) 
-guard_false(i220, descr=) [p1, p0, p2, p5, p10, p12, p16, None, p155] ++3115: setfield_gc(p212, -3, descr=) ++3130: guard_not_invalidated(, descr=) [p1, p0, p2, p5, p10, p12, p16, p155, None] ++3130: i219 = getfield_raw(44057928, descr=) ++3138: i220 = int_lt(i219, 0) +guard_false(i220, descr=) [p1, p0, p2, p5, p10, p12, p16, p155, None] debug_merge_point(0, ' #44 FOR_ITER') -+3153: jump(p0, p1, p2, p5, p10, p12, p155, p16, i215, p49, p50, descr=TargetToken(140139656777744)) -+3164: --end of the loop-- -[7e18c787d7a5] jit-log-opt-loop} -[7e18c79879ff] {jit-backend -[7e18c799e36b] {jit-backend-dump ++3148: jump(p0, p1, p2, p5, p10, p12, p155, p16, i215, p49, p50, descr=TargetToken(139705792106272)) ++3159: --end of the loop-- +[101d93796427b] jit-log-opt-loop} +[101d937abc21f] {jit-backend +[101d937ad8b53] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf597c6 +0 488DA50000000049BBA0F282CE747F00004D8B3B4983C70149BBA0F282CE747F00004D893B4C8B7E404D0FB67C3F184983FF330F84000000004883C7014C8B7E084C39FF0F8C00000000B80000000048890425901A550141BBC0BAF20041FFD3B802000000488D65D8415F415E415D415C5B5DC349BB0060F5CB747F000041FFD31D1803AD00000049BB0060F5CB747F000041FFD31D1803AE000000 -[7e18c79a15f1] jit-backend-dump} -[7e18c79a1aed] {jit-backend-addr -bridge out of Guard 90 has address 7f74cbf597c6 to 7f74cbf5983a -[7e18c79a25f5] jit-backend-addr} -[7e18c79a2c21] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc744a7b0 +0 488DA50000000049BBA0E221CA0F7F00004D8B3B4983C70149BBA0E221CA0F7F00004D893B4C8B7E404D0FB67C3F184983FF330F84000000004883C7014C8B7E084C39FF0F8C00000000B80000000048890425D0D1550141BBD01BF30041FFD3B802000000488D65D8415F415E415D415C5B5DC349BB007044C70F7F000041FFD31D1803AD00000049BB007044C70F7F000041FFD31D1803AE000000 +[101d937add077] jit-backend-dump} +[101d937add7ab] {jit-backend-addr +bridge out of Guard 90 has address 7f0fc744a7b0 to 7f0fc744a824 +[101d937ade67b] jit-backend-addr} +[101d937adee83] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf597c9 +0 70FFFFFF -[7e18c79a9017] jit-backend-dump} -[7e18c79a9805] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc744a7b3 +0 70FFFFFF +[101d937adfeaf] jit-backend-dump} +[101d937ae063b] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf597fb +0 3B000000 -[7e18c79aa391] jit-backend-dump} -[7e18c79aa8c3] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc744a7e5 +0 3B000000 +[101d937ae1417] jit-backend-dump} +[101d937ae19a3] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf5980c +0 3E000000 -[7e18c79ab29d] jit-backend-dump} -[7e18c79abe0b] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc744a7f6 +0 3E000000 +[101d937ae25df] jit-backend-dump} +[101d937ae2deb] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf57e1f +0 A3190000 -[7e18c79ac6e5] jit-backend-dump} -[7e18c79ace71] jit-backend} -[7e18c79ad765] {jit-log-opt-bridge +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc7448e10 +0 9C190000 +[101d937ae3a13] jit-backend-dump} +[101d937ae42d7] jit-backend} +[101d937ae50ff] {jit-log-opt-bridge # bridge out of Guard 90 with 10 ops [i0, p1] -debug_merge_point(0, 'StrLiteralSearch at 11/51 [17. 8. 3. 1. 1. 1. 1. 51. 0. 19. 51. 1]') +debug_merge_point(0, 're StrLiteralSearch at 11/51 [17. 8. 3. 1. 
1. 1. 1. 51. 0. 19. 51. 1]') +37: p2 = getfield_gc(p1, descr=) +41: i3 = strgetitem(p2, i0) +47: i5 = int_eq(i3, 51) @@ -1912,150 +1913,150 @@ +61: i8 = getfield_gc_pure(p1, descr=) +65: i9 = int_lt(i7, i8) guard_false(i9, descr=) [i7, p1] -+74: finish(0, descr=) ++74: finish(0, descr=) +116: --end of the loop-- -[7e18c79b75e5] jit-log-opt-bridge} -[7e18c7efb137] {jit-backend -[7e18c7f2eb61] {jit-backend-dump +[101d937af18c7] jit-log-opt-bridge} +[101d9382744bb] {jit-backend +[101d9382b13eb] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf5987a +0 488DA50000000049BBB8F282CE747F00004D8B3B4983C70149BBB8F282CE747F00004D893B4C8BBD00FFFFFF4D8B77504D85F60F85000000004D8B77284983FE000F85000000004C8BB5F0FEFFFF41F6470401740F4C89FF4C89F641BBB0E5C40041FFD34D8977404C8BB5B8FEFFFF49C74608FDFFFFFF4C8B3425C8399D024983FE000F8C00000000488B0425B0685501488D5010483B1425C8685501761A49BB2D62F5CB747F000041FFD349BBC262F5CB747F000041FFD348891425B068550148C70088190000488B9518FFFFFF48895008488BBD10FFFFFF49BB90C10BCC747F00004D89DE41BD0000000041BA0400000048C78550FFFFFF2C00000048898540FFFFFF488B8D08FFFFFF48C78538FFFFFF0000000048C78530FFFFFF0000000048C78528FFFFFF0000000048C78520FFFFFF0000000049BB8C81F5CB747F000041FFE349BB0060F5CB747F000041FFD340703C389C014C445448749801940180016C03AF00000049BB0060F5CB747F000041FFD340703C9C014C445448749801940180016C03B000000049BB0060F5CB747F000041FFD340704C445448740707076C03B100000049BB0060F5CB747F000041FFD340704C445448740707076C03B2000000 -[7e18c7f34a0f] jit-backend-dump} -[7e18c7f350ad] {jit-backend-addr -bridge out of Guard 133 has address 7f74cbf5987a to 7f74cbf599bf -[7e18c7f35bfd] jit-backend-addr} -[7e18c7f36373] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc744a864 +0 488DA50000000049BBB8E221CA0F7F00004D8B3B4983C70149BBB8E221CA0F7F00004D893B4C8BBD00FFFFFF4D8B77504D85F60F85000000004D8B77284983FE000F85000000004C8BB5E8FEFFFF41F6470401740F4C89FF4C89F641BBF0C4C50041FFD34D8977404C8BB5B8FEFFFF49C74608FDFFFFFF4C8B34254845A0024983FE000F8C00000000488B042530255601488D5010483B142548255601761A49BB2D7244C70F7F000041FFD349BBC27244C70F7F000041FFD3488914253025560148C70088250000488B9508FFFFFF4889500849BB28DC58C70F7F00004D89DE41BD0000000041BA0400000048C78548FFFFFF2C00000048898538FFFFFF488B8D10FFFFFF48C78530FFFFFF0000000048C78528FFFFFF0000000048C78520FFFFFF0000000048C78518FFFFFF0000000049BB869144C70F7F000041FFE349BB007044C70F7F000041FFD34C483C389C0140504458709401749801840103AF00000049BB007044C70F7F000041FFD34C483C9C0140504458709401749801840103B000000049BB007044C70F7F000041FFD34C4840504458700774070703B100000049BB007044C70F7F000041FFD34C4840504458700774070703B2000000 +[101d9382b8807] jit-backend-dump} +[101d9382b8fa3] {jit-backend-addr +bridge out of Guard 133 has address 7f0fc744a864 to 7f0fc744a9a2 +[101d9382b9c93] jit-backend-addr} +[101d9382ba69f] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf5987d +0 E0FDFFFF -[7e18c7f36f51] jit-backend-dump} -[7e18c7f375bd] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc744a867 +0 E0FDFFFF +[101d9382bb6b3] jit-backend-dump} +[101d9382bbfeb] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf598af +0 0C010000 -[7e18c7f3816b] jit-backend-dump} -[7e18c7f386a1] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc744a899 +0 05010000 +[101d9382c799f] jit-backend-dump} +[101d9382c81c7] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python 
-CODE_DUMP @7f74cbf598bd +0 22010000 -[7e18c7f39187] jit-backend-dump} -[7e18c7f39711] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc744a8a7 +0 1B010000 +[101d9382c8f8b] jit-backend-dump} +[101d9382c9667] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf598ff +0 20010000 -[7e18c7f3a0e7] jit-backend-dump} -[7e18c7f3a6a7] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc744a8e9 +0 19010000 +[101d9382ca3a7] jit-backend-dump} +[101d9382cab93] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf586fe +0 78110000 -[7e18c7f3af31] jit-backend-dump} -[7e18c7f3b7bf] jit-backend} -[7e18c7f3c2fd] {jit-log-opt-bridge +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc74496ee +0 72110000 +[101d9382cba6f] jit-backend-dump} +[101d9382cc4f3] jit-backend} +[101d9382cd433] {jit-log-opt-bridge # bridge out of Guard 133 with 19 ops [p0, p1, p2, p3, p4, p5, p6, p7, p8, p9, p10, p11, p12] -debug_merge_point(1, ' #21 RETURN_VALUE') +debug_merge_point(1, ' #21 RETURN_VALUE') +37: p13 = getfield_gc(p2, descr=) -+48: guard_isnull(p13, descr=) [p0, p1, p2, p13, p5, p6, p7, p8, p9, p10, p3, p4, p11, p12] ++48: guard_isnull(p13, descr=) [p0, p1, p2, p13, p5, p6, p7, p8, p9, p10, p4, p12, p3, p11] +57: i14 = getfield_gc(p2, descr=) +61: i15 = int_is_true(i14) -guard_false(i15, descr=) [p0, p1, p2, p5, p6, p7, p8, p9, p10, p3, p4, p11, p12] +guard_false(i15, descr=) [p0, p1, p2, p5, p6, p7, p8, p9, p10, p4, p12, p3, p11] +71: p16 = getfield_gc(p2, descr=) debug_merge_point(0, ' #65 POP_TOP') debug_merge_point(0, ' #66 JUMP_ABSOLUTE') setfield_gc(p2, p11, descr=) +104: setfield_gc(p5, -3, descr=) -+119: guard_not_invalidated(, descr=) [p0, p1, p6, p7, p8, p9, p10, None, None, None, p12] -+119: i20 = getfield_raw(43858376, descr=) ++119: guard_not_invalidated(, descr=) [p0, p1, p6, p7, p8, p9, p10, None, p12, None, None] ++119: i20 = getfield_raw(44057928, descr=) +127: i22 = int_lt(i20, 0) -guard_false(i22, descr=) [p0, p1, p6, p7, p8, p9, p10, None, None, None, p12] +guard_false(i22, descr=) [p0, p1, p6, p7, p8, p9, p10, None, p12, None, None] debug_merge_point(0, ' #44 FOR_ITER') p24 = new_with_vtable(ConstClass(W_StringObject)) +200: setfield_gc(p24, p12, descr=) -+211: jump(p1, p0, p6, ConstPtr(ptr25), 0, p7, 4, 44, p8, p9, p24, p10, ConstPtr(ptr29), ConstPtr(ptr30), ConstPtr(ptr30), ConstPtr(ptr30), descr=TargetToken(140139656779344)) -+325: --end of the loop-- -[7e18c7f5ca77] jit-log-opt-bridge} -[7e18c7f94eb5] {jit-backend -[7e18c7fa297f] {jit-backend-dump ++211: jump(p1, p0, p6, ConstPtr(ptr25), 0, p7, 4, 44, p8, p9, p24, p10, ConstPtr(ptr29), ConstPtr(ptr30), ConstPtr(ptr30), ConstPtr(ptr30), descr=TargetToken(139705792106192)) ++318: --end of the loop-- +[101d9382ed8b7] jit-log-opt-bridge} +[101d93832def7] {jit-backend +[101d93833f3c7] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf59a40 +0 488DA50000000049BBD0F282CE747F00004D8B3B4983C70149BBD0F282CE747F00004D893B4989FF4883C70148897E1848C74620000000004C897E28B80100000048890425901A550141BBC0BAF20041FFD3B802000000488D65D8415F415E415D415C5B5DC3 -[7e18c7fa535b] jit-backend-dump} -[7e18c7fa5807] {jit-backend-addr -bridge out of Guard 87 has address 7f74cbf59a40 to 7f74cbf59aa6 -[7e18c7fa6217] jit-backend-addr} -[7e18c7fa67f9] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc744aa23 +0 
488DA50000000049BBD0E221CA0F7F00004D8B3B4983C70149BBD0E221CA0F7F00004D893B4989FF4883C70148897E1848C74620000000004C897E28B80100000048890425D0D1550141BBD01BF30041FFD3B802000000488D65D8415F415E415D415C5B5DC3 +[101d938342dd7] jit-backend-dump} +[101d93834346f] {jit-backend-addr +bridge out of Guard 87 has address 7f0fc744aa23 to 7f0fc744aa89 +[101d938349387] jit-backend-addr} +[101d938349d3f] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf59a43 +0 70FFFFFF -[7e18c7fa7353] jit-backend-dump} -[7e18c7fa7a73] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc744aa26 +0 70FFFFFF +[101d93834addf] jit-backend-dump} +[101d93834b6fb] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf57d62 +0 DA1C0000 -[7e18c7fa8387] jit-backend-dump} -[7e18c7fa8a1b] jit-backend} -[7e18c7fa90bb] {jit-log-opt-bridge +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc7448d53 +0 CC1C0000 +[101d93834c5a7] jit-backend-dump} +[101d93834cf63] jit-backend} +[101d93834dd27] {jit-log-opt-bridge # bridge out of Guard 87 with 5 ops [i0, p1] +37: i3 = int_add(i0, 1) +44: setfield_gc(p1, i3, descr=) +48: setfield_gc(p1, ConstPtr(ptr4), descr=) +56: setfield_gc(p1, i0, descr=) -+60: finish(1, descr=) ++60: finish(1, descr=) +102: --end of the loop-- -[7e18c7faefab] jit-log-opt-bridge} -[7e18c80e256d] {jit-backend -[7e18c80ef8fd] {jit-backend-dump +[101d93835cbfb] jit-log-opt-bridge} +[101d9384f8b1f] {jit-backend +[101d938509147] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf59aa6 +0 488DA50000000049BBE8F282CE747F00004D8B3B4983C70149BBE8F282CE747F00004D893B4989FF4883C70148897E1848C74620000000004C897E28B80100000048890425901A550141BBC0BAF20041FFD3B802000000488D65D8415F415E415D415C5B5DC3 -[7e18c80f2183] jit-backend-dump} -[7e18c80f265d] {jit-backend-addr -bridge out of Guard 89 has address 7f74cbf59aa6 to 7f74cbf59b0c -[7e18c80f3045] jit-backend-addr} -[7e18c80f3621] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc744aa89 +0 488DA50000000049BBE8E221CA0F7F00004D8B3B4983C70149BBE8E221CA0F7F00004D893B4989FF4883C70148897E1848C74620000000004C897E28B80100000048890425D0D1550141BBD01BF30041FFD3B802000000488D65D8415F415E415D415C5B5DC3 +[101d93850cbdb] jit-backend-dump} +[101d93850d29f] {jit-backend-addr +bridge out of Guard 89 has address 7f0fc744aa89 to 7f0fc744aaef +[101d93850de5b] jit-backend-addr} +[101d93850e5c7] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf59aa9 +0 70FFFFFF -[7e18c80f405b] jit-backend-dump} -[7e18c80f46fb] {jit-backend-dump +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc744aa8c +0 70FFFFFF +[101d93850f3e3] jit-backend-dump} +[101d93850fc33] {jit-backend-dump BACKEND x86_64 -SYS_EXECUTABLE python -CODE_DUMP @7f74cbf57e0e +0 941C0000 -[7e18c80f50fd] jit-backend-dump} -[7e18c80f57ab] jit-backend} -[7e18c80f5eab] {jit-log-opt-bridge +SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy +CODE_DUMP @7f0fc7448dff +0 861C0000 +[101d938510adf] jit-backend-dump} +[101d93851136f] jit-backend} +[101d938511de3] {jit-log-opt-bridge # bridge out of Guard 89 with 5 ops [i0, p1] +37: i3 = int_add(i0, 1) +44: setfield_gc(p1, i3, descr=) +48: setfield_gc(p1, ConstPtr(ptr4), descr=) +56: setfield_gc(p1, i0, descr=) -+60: finish(1, descr=) ++60: finish(1, descr=) +102: --end of the loop-- -[7e18c80fb77d] jit-log-opt-bridge} -[7e18c814b937] {jit-backend-counts 
+[101d938518e47] jit-log-opt-bridge} +[101d93857d417] {jit-backend-counts entry 0:4647 -TargetToken(140139616183984):4647 -TargetToken(140139616184064):9292 +TargetToken(139705745523760):4647 +TargetToken(139705745523840):9292 entry 1:201 -TargetToken(140139616188064):201 -TargetToken(140139616188144):4468 +TargetToken(139705745528160):201 +TargetToken(139705745528240):4468 bridge 16:4446 bridge 33:4268 -TargetToken(140139616190144):4268 +TargetToken(139705745530240):4268 entry 2:1 -TargetToken(140139656776704):1 -TargetToken(140139656776784):1938 +TargetToken(139705792105152):1 +TargetToken(139705792105232):1938 entry 3:3173 bridge 85:2882 bridge 88:2074 bridge 86:158 entry 4:377 -TargetToken(140139656779344):527 -TargetToken(140139656777744):1411 +TargetToken(139705792106192):527 +TargetToken(139705792106272):1411 bridge 90:1420 bridge 133:150 bridge 87:50 bridge 89:7 -[7e18c8157381] jit-backend-counts} +[101d938585943] jit-backend-counts} From noreply at buildbot.pypy.org Wed Feb 29 21:29:15 2012 From: noreply at buildbot.pypy.org (fijal) Date: Wed, 29 Feb 2012 21:29:15 +0100 (CET) Subject: [pypy-commit] jitviewer default: fix jitviewer by simplifying how bridges are displayed. various small fixes as well Message-ID: <20120229202915.C9AA18204C@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: Changeset: r189:f8c7f89f4f97 Date: 2012-02-29 12:28 -0800 http://bitbucket.org/pypy/jitviewer/changeset/f8c7f89f4f97/ Log: fix jitviewer by simplifying how bridges are displayed. various small fixes as well diff --git a/_jitviewer/app.py b/_jitviewer/app.py --- a/_jitviewer/app.py +++ b/_jitviewer/app.py @@ -62,6 +62,21 @@ def repr(self): return '???' +def mangle_descr(descr): + if descr.startswith('TargetToken('): + return descr[len('TargetToken('):-1] + if descr.startswith(' CUTOFF: extra_data = "Show all (%d) loops" % len(loops) else: @@ -98,11 +113,18 @@ extra_data=extra_data) def loop(self): - no = int(flask.request.args.get('no', '0')) - orig_loop = self.storage.loops[no] + name = mangle_descr(flask.request.args['name']) + orig_loop = self.storage.loop_dict[name] if hasattr(orig_loop, 'force_asm'): orig_loop.force_asm() - ops = adjust_bridges(orig_loop, flask.request.args) + ops = orig_loop.operations + for op in ops: + if op.is_guard(): + descr = mangle_descr(op.descr) + subloop = self.storage.loop_dict.get(descr, None) + if subloop is not None: + op.bridge = descr + op.count = getattr(subloop, 'count', '?') loop = FunctionHtml.from_operations(ops, self.storage, inputargs=orig_loop.inputargs) path = flask.request.args.get('path', '').split(',') @@ -143,7 +165,7 @@ # source = CodeReprNoFile(loop) d = {'html': flask.render_template('loop.html', source=source, - current_loop=no, + current_loop=name, upper_path=up, show_upper_path=bool(path)), 'scrollto': startline, @@ -191,7 +213,9 @@ storage = LoopStorage(extra_path) log, loops = import_log(filename, ParserWithHtmlRepr) parse_log_counts(extract_category(log, 'jit-backend-count'), loops) - storage.reconnect_loops(loops) + storage.loops = [loop for loop in loops + if not loop.descr.startswith('bridge')] + storage.loop_dict = create_loop_dict(loops) app = OverrideFlask('_jitviewer') server = Server(filename, storage) app.debug = True diff --git a/_jitviewer/parser.py b/_jitviewer/parser.py --- a/_jitviewer/parser.py +++ b/_jitviewer/parser.py @@ -106,9 +106,8 @@ field, self.wrap_html(self.args[1])) def repr_jump(self): - no = int(re.search("\d+", self.descr).group(0)) - return ("" % no + - self.default_repr() + "") + return ("" % 
self.descr + + self.default_repr() + "") def default_repr(self): args = [self.wrap_html(arg) for arg in self.args] @@ -122,6 +121,7 @@ return '%s(%s)' % (self.name, arglist) repr_call_assembler = repr_jump + repr_label = repr_jump #def repr_call_assembler(self): # xxxx diff --git a/_jitviewer/static/script.js b/_jitviewer/static/script.js --- a/_jitviewer/static/script.js +++ b/_jitviewer/static/script.js @@ -4,13 +4,13 @@ 'op': true, }; -function show_loop(no, path) +function show_loop(name, path) { - $("#loop-" + glob_bridge_state.no).removeClass("selected"); - $("#loop-" + no).addClass("selected"); - $("#title-text").html($("#loop-" + no).attr('name')); + $("#loop-" + glob_bridge_state.name).removeClass("selected"); + $("#loop-" + name).addClass("selected"); + $("#title-text").html($("#loop-" + name).attr('name')); $("#title").show(); - glob_bridge_state.no = no; + glob_bridge_state.name = name; if (path) { glob_bridge_state.path = path; } else { @@ -29,7 +29,7 @@ $('#callstack').html('') for (var index in arg.callstack) { var elem = arg.callstack[index]; - $('#callstack').append('"); + $('#callstack').append('"); } if (!glob_bridge_state.asm) { $(".asm").hide(); diff --git a/_jitviewer/templates/index.html b/_jitviewer/templates/index.html --- a/_jitviewer/templates/index.html +++ b/_jitviewer/templates/index.html @@ -29,12 +29,8 @@
      - {% for is_entry_bridge, index, item in loops %} - {% if is_entry_bridge %} -
    • Entry bridge: {{item.repr()}} run {{item.count}} times
    • - {% else %} -
    • {{item.repr()}} run {{item.count}} times
    • - {% endif %} + {% for item in loops %} +
    • {{item.repr()}} run {{item.count}} times
    • {% endfor %}
        {% if extra_data %}
    diff --git a/_jitviewer/templates/loop.html b/_jitviewer/templates/loop.html
    --- a/_jitviewer/templates/loop.html
    +++ b/_jitviewer/templates/loop.html
    @@ -13,7 +13,7 @@
         {% for op in chunk.operations %}
         {% if op.name != "debug_merge_point" %}
         {% if op.bridge %}
    -      {{op.html_repr()}} >>show bridge (taken {{op.percentage}}%)
    
    + {{op.html_repr()}} show bridge  (run {{op.count}} times)
    {% if op.asm %}

    {{op.asm}}

        {% endif %}
    @@ -26,7 +26,7 @@
         {% endif %}
         {% endfor %}
         {% else %}
    -      {{(chunk.html_repr())|safe}}
    
    + {{(chunk.html_repr())|safe}}
    {% endif %} {% endfor %}
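
    The common thread in the app.py, parser.py, script.js and template hunks above is that loops and bridges are now looked up by a name derived from their descr string instead of by a numeric position in storage.loops. Below is a minimal, self-contained sketch of that mapping, not the committed code: the Loop class is a stand-in, create_loop_dict() is assumed to simply key each loop by its mangled descr, and the branch of mangle_descr() that handles guard descrs is truncated in this archive, so only the TargetToken case visible above is shown.

        def mangle_descr(descr):
            # Unwrap "TargetToken(NNN)" so the bare token number becomes the
            # lookup key, as in the app.py hunk above; anything else falls
            # through unchanged in this sketch.
            if descr.startswith('TargetToken('):
                return descr[len('TargetToken('):-1]
            return descr

        class Loop(object):              # stand-in for the parsed loop objects
            def __init__(self, descr, count):
                self.descr, self.count = descr, count

        loops = [Loop('TargetToken(139705745523760)', 4647),
                 Loop('TargetToken(139705745528160)', 201)]

        # Presumed shape of create_loop_dict(): one entry per loop or bridge,
        # keyed by its mangled descr.
        loop_dict = dict((mangle_descr(l.descr), l) for l in loops)

        # The loop() view can then resolve flask.request.args['name'] directly:
        name = mangle_descr('TargetToken(139705745523760)')
        print(loop_dict[name].count)     # -> 4647

    With descr-derived names, the links produced by repr_jump() and the show_loop(name, path) calls in script.js no longer depend on bridges keeping a stable positional index, which is what lets the index and loop templates treat entry bridges and ordinary loops uniformly.

    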
    diff --git a/bin/jitviewer.py b/bin/jitviewer.py old mode 100644 new mode 100755 diff --git a/log.pypylog b/log.pypylog --- a/log.pypylog +++ b/log.pypylog @@ -1,131 +1,131 @@ -[101d9320139e7] {jit-backend-dump +[88d31d16aba] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc7447000 +0 4157415641554154415341524151415057565554535251504889E341BBD01BF30041FFD34889DF4883E4F041BB60C4D30041FFD3488D65D8415F415E415D415C5B5DC3 -[101d932027117] jit-backend-dump} -[101d932028a7b] {jit-backend-dump +CODE_DUMP @7fe3d152a000 +0 4157415641554154415341524151415057565554535251504889E341BBD01BF30041FFD34889DF4883E4F041BB60C4D30041FFD3488D65D8415F415E415D415C5B5DC3 +[88d31d297ea] jit-backend-dump} +[88d31d2b2de] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc7447043 +0 4157415641554154415341524151415057565554535251504889E341BB801BF30041FFD34889DF4883E4F041BB60C4D30041FFD3488D65D8415F415E415D415C5B5DC3 -[101d93202a9db] jit-backend-dump} -[101d93202e2e7] {jit-backend-dump +CODE_DUMP @7fe3d152a043 +0 4157415641554154415341524151415057565554535251504889E341BB801BF30041FFD34889DF4883E4F041BB60C4D30041FFD3488D65D8415F415E415D415C5B5DC3 +[88d31d2d1fe] jit-backend-dump} +[88d31d30a8e] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc7447086 +0 4157415641554154415341524151415057565554535251504889E34881EC80000000F20F110424F20F114C2408F20F11542410F20F115C2418F20F11642420F20F116C2428F20F11742430F20F117C2438F2440F11442440F2440F114C2448F2440F11542450F2440F115C2458F2440F11642460F2440F116C2468F2440F11742470F2440F117C247841BBD01BF30041FFD34889DF4883E4F041BB60C4D30041FFD3488D65D8415F415E415D415C5B5DC3 -[101d9320319ab] jit-backend-dump} -[101d932032bff] {jit-backend-dump +CODE_DUMP @7fe3d152a086 +0 4157415641554154415341524151415057565554535251504889E34881EC80000000F20F110424F20F114C2408F20F11542410F20F115C2418F20F11642420F20F116C2428F20F11742430F20F117C2438F2440F11442440F2440F114C2448F2440F11542450F2440F115C2458F2440F11642460F2440F116C2468F2440F11742470F2440F117C247841BBD01BF30041FFD34889DF4883E4F041BB60C4D30041FFD3488D65D8415F415E415D415C5B5DC3 +[88d31d33c7a] jit-backend-dump} +[88d31d35046] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc7447137 +0 4157415641554154415341524151415057565554535251504889E34881EC80000000F20F110424F20F114C2408F20F11542410F20F115C2418F20F11642420F20F116C2428F20F11742430F20F117C2438F2440F11442440F2440F114C2448F2440F11542450F2440F115C2458F2440F11642460F2440F116C2468F2440F11742470F2440F117C247841BB801BF30041FFD34889DF4883E4F041BB60C4D30041FFD3488D65D8415F415E415D415C5B5DC3 -[101d93203580f] jit-backend-dump} -[101d9320397f7] {jit-backend-dump +CODE_DUMP @7fe3d152a137 +0 4157415641554154415341524151415057565554535251504889E34881EC80000000F20F110424F20F114C2408F20F11542410F20F115C2418F20F11642420F20F116C2428F20F11742430F20F117C2438F2440F11442440F2440F114C2448F2440F11542450F2440F115C2458F2440F11642460F2440F116C2468F2440F11742470F2440F117C247841BB801BF30041FFD34889DF4883E4F041BB60C4D30041FFD3488D65D8415F415E415D415C5B5DC3 +[88d31d37afe] jit-backend-dump} +[88d31d3ba52] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc7447210 +0 41BBE01AF30041FFD3B803000000488D65D8415F415E415D415C5B5DC3 -[101d93203ae3f] jit-backend-dump} -[101d932041e57] {jit-backend-dump +CODE_DUMP @7fe3d152a210 
+0 41BBE01AF30041FFD3B803000000488D65D8415F415E415D415C5B5DC3 +[88d31d3d07a] jit-backend-dump} +[88d31d4415e] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc744722d +0 F20F11442410F20F114C2418F20F11542420F20F115C2428F20F11642430F20F116C2438F20F11742440F20F117C2448F2440F11442450F2440F114C2458F2440F11542460F2440F115C2468F2440F11642470F2440F116C2478F2440F11B42480000000F2440F11BC24880000004829C24C8955B048894D80488975904C8945A04C894DA848897D984889D741BB1096CF0041FFE3 -[101d932044b27] jit-backend-dump} -[101d93204bad3] {jit-backend-dump +CODE_DUMP @7fe3d152a22d +0 F20F11442410F20F114C2418F20F11542420F20F115C2428F20F11642430F20F116C2438F20F11742440F20F117C2448F2440F11442450F2440F114C2458F2440F11542460F2440F115C2468F2440F11642470F2440F116C2478F2440F11B42480000000F2440F11BC24880000004829C24C8955B048894D80488975904C8945A04C894DA848897D984889D741BB1096CF0041FFE3 +[88d31d46c12] jit-backend-dump} +[88d31d4d12e] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc74472c2 +0 4C8B55B0488B4D80488B75904C8B45A04C8B4DA8488B7D98F20F10442410F20F104C2418F20F10542420F20F105C2428F20F10642430F20F106C2438F20F10742440F20F107C2448F2440F10442450F2440F104C2458F2440F10542460F2440F105C2468F2440F10642470F2440F106C2478F2440F10B42480000000F2440F10BC24880000004885C07409488B142530255601C349BB107244C70F7F000041FFE3 -[101d93204e53b] jit-backend-dump} -[101d932051243] {jit-backend-dump +CODE_DUMP @7fe3d152a2c2 +0 4C8B55B0488B4D80488B75904C8B45A04C8B4DA8488B7D98F20F10442410F20F104C2418F20F10542420F20F105C2428F20F10642430F20F106C2438F20F10742440F20F107C2448F2440F10442450F2440F104C2458F2440F10542460F2440F105C2468F2440F10642470F2440F106C2478F2440F10B42480000000F2440F10BC24880000004885C07409488B142530255601C349BB10A252D1E37F000041FFE3 +[88d31d4fbc2] jit-backend-dump} +[88d31d52872] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc7447363 +0 57565251415041514883EC40F20F110424F20F114C2408F20F11542410F20F115C2418F20F11642420F20F116C2428F20F11742430F20F117C24384889E741BBD036A90041FFD3488B0425A046A0024885C0753CF20F107C2438F20F10742430F20F106C2428F20F10642420F20F105C2418F20F10542410F20F104C2408F20F1004244883C44041594158595A5E5FC341BB801BF30041FFD3B8030000004883C478C3 -[101d932053c4b] jit-backend-dump} -[101d93205af97] {jit-backend-counts -[101d93205bb73] jit-backend-counts} -[101d93272b48f] {jit-backend -[101d932e3d033] {jit-backend-dump +CODE_DUMP @7fe3d152a363 +0 57565251415041514883EC40F20F110424F20F114C2408F20F11542410F20F115C2418F20F11642420F20F116C2428F20F11742430F20F117C24384889E741BBD036A90041FFD3488B0425A046A0024885C0753CF20F107C2438F20F10742430F20F106C2428F20F10642420F20F105C2418F20F10542410F20F104C2408F20F1004244883C44041594158595A5E5FC341BB801BF30041FFD3B8030000004883C478C3 +[88d31d5505e] jit-backend-dump} +[88d31d56066] {jit-backend-counts +[88d31d56552] jit-backend-counts} +[88d32442602] {jit-backend +[88d32b5faa2] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc7447406 +0 
488B04254045A0024829E0483B0425E03C5101760D49BB637344C70F7F000041FFD3554889E5534154415541564157488DA50000000049BBF0E021CA0F7F00004D8B3B4983C70149BBF0E021CA0F7F00004D893B4C8B7F504C8B77784C0FB6AF960000004C8B67604C8B97800000004C8B4F584C8B4768498B5810498B5018498B4020498B48284D8B40304889BD70FFFFFF4889B568FFFFFF4C89BD60FFFFFF4C89A558FFFFFF4C898D50FFFFFF48899548FFFFFF48898D40FFFFFF4C898538FFFFFF49BB08E121CA0F7F00004D8B034983C00149BB08E121CA0F7F00004D89034983FA030F85000000008138806300000F85000000004C8B50104D85D20F84000000004C8B4008498B4A108139582D03000F85000000004D8B5208498B4A08498B52104D8B52184983F8000F8C000000004D39D00F8D000000004D89C14C0FAFC24989CC4C01C14983C1014C8948084983FD000F85000000004883FB017206813BF82200000F850000000049BB206055C70F7F00004D39DE0F85000000004C8B73084983C6010F8000000000488B1C254845A0024883FB000F8C0000000048898D30FFFFFF49BB20E121CA0F7F0000498B0B4883C10149BB20E121CA0F7F000049890B4D39D10F8D000000004C89C94C0FAFCA4C89E34D01CC4883C101488948084D89F14983C6010F80000000004C8B0C254845A0024983F9000F8C000000004C89A530FFFFFF4989C94989DCE993FFFFFF49BB007044C70F7F000041FFD32944404838354C510C5400585C030400000049BB007044C70F7F000041FFD34440004838354C0C54585C030500000049BB007044C70F7F000041FFD3444000284838354C0C54585C030600000049BB007044C70F7F000041FFD34440002104284838354C0C54585C030700000049BB007044C70F7F000041FFD3444000212909054838354C0C54585C030800000049BB007044C70F7F000041FFD34440002109054838354C0C54585C030900000049BB007044C70F7F000041FFD335444048384C0C54005C05030A00000049BB007044C70F7F000041FFD344400C48384C005C05030B00000049BB007044C70F7F000041FFD3444038484C0C005C05030C00000049BB007044C70F7F000041FFD344400C39484C0005030D00000049BB007044C70F7F000041FFD34440484C003905030E00000049BB007044C70F7F000041FFD34440484C003905030F00000049BB007044C70F7F000041FFD3444000250931484C3961031000000049BB007044C70F7F000041FFD3444039484C00312507031100000049BB007044C70F7F000041FFD34440484C0039310707031200000049BB007044C70F7F000041FFD34440484C00393107070313000000 -[101d932e5dfe7] jit-backend-dump} -[101d932e5ec67] {jit-backend-addr -Loop 0 ( #19 FOR_ITER) has address 7f0fc744743c to 7f0fc7447619 (bootstrap 7f0fc7447406) -[101d932e60303] jit-backend-addr} -[101d932e61047] {jit-backend-dump +CODE_DUMP @7fe3d152a406 +0 
488B04254045A0024829E0483B0425E03C5101760D49BB63A352D1E37F000041FFD3554889E5534154415541564157488DA50000000049BBF01030D4E37F00004D8B3B4983C70149BBF01030D4E37F00004D893B4C8B7F504C8B77784C0FB6AF960000004C8B67604C8B97800000004C8B4F584C8B4768498B5810498B5018498B4020498B48284D8B40304889BD70FFFFFF4889B568FFFFFF4C89BD60FFFFFF4C89A558FFFFFF4C898D50FFFFFF48899548FFFFFF48898D40FFFFFF4C898538FFFFFF49BB081130D4E37F00004D8B034983C00149BB081130D4E37F00004D89034983FA030F85000000008138806300000F85000000004C8B50104D85D20F84000000004C8B4008498B4A108139582D03000F85000000004D8B5208498B4A08498B52104D8B52184983F8000F8C000000004D39D00F8D000000004D89C14C0FAFC24989CC4C01C14983C1014C8948084983FD000F85000000004883FB017206813BF82200000F850000000049BB20A063D1E37F00004D39DE0F85000000004C8B73084983C6010F8000000000488B1C254845A0024883FB000F8C0000000048898D30FFFFFF49BB201130D4E37F0000498B0B4883C10149BB201130D4E37F000049890B4D39D10F8D000000004C89C94C0FAFCA4C89E34D01CC4883C101488948084D89F14983C6010F80000000004C8B0C254845A0024983F9000F8C000000004C89A530FFFFFF4989C94989DCE993FFFFFF49BB00A052D1E37F000041FFD32944404838354C510C5400585C030400000049BB00A052D1E37F000041FFD34440004838354C0C54585C030500000049BB00A052D1E37F000041FFD3444000284838354C0C54585C030600000049BB00A052D1E37F000041FFD34440002104284838354C0C54585C030700000049BB00A052D1E37F000041FFD3444000212909054838354C0C54585C030800000049BB00A052D1E37F000041FFD34440002109054838354C0C54585C030900000049BB00A052D1E37F000041FFD335444048384C0C54005C05030A00000049BB00A052D1E37F000041FFD344400C48384C005C05030B00000049BB00A052D1E37F000041FFD3444038484C0C005C05030C00000049BB00A052D1E37F000041FFD344400C39484C0005030D00000049BB00A052D1E37F000041FFD34440484C003905030E00000049BB00A052D1E37F000041FFD34440484C003905030F00000049BB00A052D1E37F000041FFD3444000250931484C3961031000000049BB00A052D1E37F000041FFD3444039484C00312507031100000049BB00A052D1E37F000041FFD34440484C0039310707031200000049BB00A052D1E37F000041FFD34440484C00393107070313000000 +[88d32b80f0a] jit-backend-dump} +[88d32b81cd2] {jit-backend-addr +Loop 0 ( #19 FOR_ITER) has address 7fe3d152a43c to 7fe3d152a619 (bootstrap 7fe3d152a406) +[88d32b83662] jit-backend-addr} +[88d32b84356] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc7447438 +0 30FFFFFF -[101d932e62273] jit-backend-dump} -[101d932e62f17] {jit-backend-dump +CODE_DUMP @7fe3d152a438 +0 30FFFFFF +[88d32b8556a] jit-backend-dump} +[88d32b8622a] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc74474ed +0 28010000 -[101d932e63cfb] jit-backend-dump} -[101d932e64423] {jit-backend-dump +CODE_DUMP @7fe3d152a4ed +0 28010000 +[88d32b8701a] jit-backend-dump} +[88d32b8761e] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc74474f9 +0 3B010000 -[101d932e65267] jit-backend-dump} -[101d932e658ff] {jit-backend-dump +CODE_DUMP @7fe3d152a4f9 +0 3B010000 +[88d32b88312] jit-backend-dump} +[88d32b888a6] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc7447506 +0 4B010000 -[101d932e666d3] jit-backend-dump} -[101d932e66c5f] {jit-backend-dump +CODE_DUMP @7fe3d152a506 +0 4B010000 +[88d32b89516] jit-backend-dump} +[88d32b89a86] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc744751a +0 55010000 -[101d932e678b7] jit-backend-dump} -[101d932e67e33] {jit-backend-dump +CODE_DUMP 
@7fe3d152a51a +0 55010000 +[88d32b8a7aa] jit-backend-dump} +[88d32b8ae2e] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc7447534 +0 5B010000 -[101d932e68a53] jit-backend-dump} -[101d932e68fd3] {jit-backend-dump +CODE_DUMP @7fe3d152a534 +0 5B010000 +[88d32b8bb9e] jit-backend-dump} +[88d32b8c22e] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc744753d +0 73010000 -[101d932e69df7] jit-backend-dump} -[101d932e6a463] {jit-backend-dump +CODE_DUMP @7fe3d152a53d +0 73010000 +[88d32b8ceee] jit-backend-dump} +[88d32b8d452] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc744755c +0 74010000 -[101d932e6b25f] jit-backend-dump} -[101d932e6b7d3] {jit-backend-dump +CODE_DUMP @7fe3d152a55c +0 74010000 +[88d32b8e0aa] jit-backend-dump} +[88d32b8e5ee] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc744756e +0 7F010000 -[101d932e6c417] jit-backend-dump} -[101d932e6c96b] {jit-backend-dump +CODE_DUMP @7fe3d152a56e +0 7F010000 +[88d32b8f282] jit-backend-dump} +[88d32b8f7c2] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc7447581 +0 87010000 -[101d932e6d66f] jit-backend-dump} -[101d932e6dbc3] {jit-backend-dump +CODE_DUMP @7fe3d152a581 +0 87010000 +[88d32b9061e] jit-backend-dump} +[88d32b90c9a] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc744758f +0 94010000 -[101d932e6e80b] jit-backend-dump} -[101d932e6f127] {jit-backend-dump +CODE_DUMP @7fe3d152a58f +0 94010000 +[88d32b91a6a] jit-backend-dump} +[88d32b921ea] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc74475a1 +0 B5010000 -[101d932e6ff33] jit-backend-dump} -[101d932e705bf] {jit-backend-dump +CODE_DUMP @7fe3d152a5a1 +0 B5010000 +[88d32b92e46] jit-backend-dump} +[88d32b933ba] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc74475cf +0 A0010000 -[101d932e744c3] jit-backend-dump} -[101d932e74c73] {jit-backend-dump +CODE_DUMP @7fe3d152a5cf +0 A0010000 +[88d32b93fe6] jit-backend-dump} +[88d32b9452e] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc74475f1 +0 9A010000 -[101d932e75abb] jit-backend-dump} -[101d932e761d3] {jit-backend-dump +CODE_DUMP @7fe3d152a5f1 +0 9A010000 +[88d32b9a81a] jit-backend-dump} +[88d32b9afda] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc7447603 +0 BE010000 -[101d932e76f5b] jit-backend-dump} -[101d932e77c97] jit-backend} -[101d932e7b5d7] {jit-log-opt-loop +CODE_DUMP @7fe3d152a603 +0 BE010000 +[88d32b9bdfa] jit-backend-dump} +[88d32b9cd1a] jit-backend} +[88d32ba091a] {jit-log-opt-loop # Loop 0 ( #19 FOR_ITER) : loop with 73 ops [p0, p1] +84: p2 = getfield_gc(p0, descr=) @@ -141,7 +141,7 @@ +131: p16 = getarrayitem_gc(p8, 3, descr=) +135: p18 = getarrayitem_gc(p8, 4, descr=) +139: p19 = getfield_gc(p0, descr=) -+139: label(p0, p1, p2, p3, i4, p5, i6, i7, p10, p12, p14, p16, p18, descr=TargetToken(139705745523760)) ++139: label(p0, p1, p2, p3, i4, p5, i6, i7, p10, p12, p14, p16, p18, descr=TargetToken(140616447296560)) debug_merge_point(0, ' #19 FOR_ITER') +225: guard_value(i6, 3, descr=) 
[i6, p1, p0, p2, p3, i4, p5, i7, p10, p12, p14, p16, p18] +235: guard_class(p14, 38562496, descr=) [p1, p0, p14, p2, p3, i4, p5, p10, p12, p16, p18] @@ -179,7 +179,7 @@ +405: i46 = int_lt(i44, 0) guard_false(i46, descr=) [p1, p0, p2, p5, p14, i42, i34] debug_merge_point(0, ' #19 FOR_ITER') -+415: label(p0, p1, p2, p5, i42, i34, p14, i36, i29, i28, i27, descr=TargetToken(139705745523840)) ++415: label(p0, p1, p2, p5, i42, i34, p14, i36, i29, i28, i27, descr=TargetToken(140616447296640)) debug_merge_point(0, ' #19 FOR_ITER') +452: i47 = int_ge(i36, i29) guard_false(i47, descr=) [p1, p0, p14, i36, i28, i27, p2, p5, i42, i34] @@ -200,100 +200,100 @@ +503: i54 = int_lt(i53, 0) guard_false(i54, descr=) [p1, p0, p2, p5, p14, i51, i49, None, None] debug_merge_point(0, ' #19 FOR_ITER') -+513: jump(p0, p1, p2, p5, i51, i49, p14, i50, i29, i28, i27, descr=TargetToken(139705745523840)) ++513: jump(p0, p1, p2, p5, i51, i49, p14, i50, i29, i28, i27, descr=TargetToken(140616447296640)) +531: --end of the loop-- -[101d932f3c31b] jit-log-opt-loop} -[101d933493233] {jit-backend -[101d93351df77] {jit-backend-dump +[88d32c61fee] jit-log-opt-loop} +[88d3318ffa2] {jit-backend +[88d3320c772] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc74477e0 +0 488B04254045A0024829E0483B0425E03C5101760D49BB637344C70F7F000041FFD3554889E5534154415541564157488DA50000000049BBD8E021CA0F7F00004D8B3B4983C70149BBD8E021CA0F7F00004D893B4C8B7F504C8B77784C0FB6AF960000004C8B67604C8B97800000004C8B4F584C8B4768498B5810498B5018498B40204D8B40284889BD70FFFFFF4889B568FFFFFF4C89BD60FFFFFF4C89A558FFFFFF4C898D50FFFFFF48898548FFFFFF4C898540FFFFFF49BB38E121CA0F7F00004D8B034983C00149BB38E121CA0F7F00004D89034983FA020F85000000004883FA017206813AF82200000F85000000004983FD000F850000000049BBD86055C70F7F00004D39DE0F85000000004C8B72084981FE102700000F8D0000000049BB00000000000000804D39DE0F84000000004C89F0B90200000048899538FFFFFF48898530FFFFFF489948F7F94889D048C1FA3F41BE020000004921D64C01F04883F8000F85000000004883FB017206813BF82200000F8500000000488B43084883C0010F8000000000488B9D30FFFFFF4883C3014C8B34254845A0024983FE000F8C0000000049BB50E121CA0F7F00004D8B334983C60149BB50E121CA0F7F00004D89334881FB102700000F8D0000000049BB00000000000000804C39DB0F840000000048898528FFFFFF4889D8B90200000048898520FFFFFF489948F7F94889D048C1FA3FBB020000004821D34801D84883F8000F8500000000488B8528FFFFFF4883C0010F8000000000488B9D20FFFFFF4883C301488B14254845A0024883FA000F8C00000000E958FFFFFF49BB007044C70F7F000041FFD32944404838354C510C085458031400000049BB007044C70F7F000041FFD34440084838354C0C5458031500000049BB007044C70F7F000041FFD335444048384C0C0858031600000049BB007044C70F7F000041FFD3444038484C0C0858031700000049BB007044C70F7F000041FFD3444008484C0C031800000049BB007044C70F7F000041FFD344400839484C0C031900000049BB007044C70F7F000041FFD34440484C0C5C01031A00000049BB007044C70F7F000041FFD344400C484C5C07031B00000049BB007044C70F7F000041FFD344400C01484C5C07031C00000049BB007044C70F7F000041FFD34440484C0D0107031D00000049BB007044C70F7F000041FFD34440484C0D0107031E00000049BB007044C70F7F000041FFD34440484C0D01031F00000049BB007044C70F7F000041FFD344400D484C0701032000000049BB007044C70F7F000041FFD34440484C016965032100000049BB007044C70F7F000041FFD3444001484C076965032200000049BB007044C70F7F000041FFD34440484C0D01070707032300000049BB007044C70F7F000041FFD34440484C0D010707070324000000 -[101d93353cbd3] jit-backend-dump} -[101d93353d813] {jit-backend-addr -Loop 1 ( #15 LOAD_FAST) has address 7f0fc7447816 to 7f0fc7447a30 (bootstrap 7f0fc74477e0) 
-[101d93353ee6f] jit-backend-addr} -[101d93353fb2f] {jit-backend-dump +CODE_DUMP @7fe3d152a7e0 +0 488B04254045A0024829E0483B0425E03C5101760D49BB63A352D1E37F000041FFD3554889E5534154415541564157488DA50000000049BBD81030D4E37F00004D8B3B4983C70149BBD81030D4E37F00004D893B4C8B7F504C8B77784C0FB6AF960000004C8B67604C8B97800000004C8B4F584C8B4768498B5810498B5018498B40204D8B40284889BD70FFFFFF4889B568FFFFFF4C89BD60FFFFFF4C89A558FFFFFF4C898D50FFFFFF48898548FFFFFF4C898540FFFFFF49BB381130D4E37F00004D8B034983C00149BB381130D4E37F00004D89034983FA020F85000000004883FA017206813AF82200000F85000000004983FD000F850000000049BBD8A063D1E37F00004D39DE0F85000000004C8B72084981FE102700000F8D0000000049BB00000000000000804D39DE0F84000000004C89F0B90200000048899538FFFFFF48898530FFFFFF489948F7F94889D048C1FA3F41BE020000004921D64C01F04883F8000F85000000004883FB017206813BF82200000F8500000000488B43084883C0010F8000000000488B9D30FFFFFF4883C3014C8B34254845A0024983FE000F8C0000000049BB501130D4E37F00004D8B334983C60149BB501130D4E37F00004D89334881FB102700000F8D0000000049BB00000000000000804C39DB0F840000000048898528FFFFFF4889D8B90200000048898520FFFFFF489948F7F94889D048C1FA3FBB020000004821D34801D84883F8000F8500000000488B8528FFFFFF4883C0010F8000000000488B9D20FFFFFF4883C301488B14254845A0024883FA000F8C00000000E958FFFFFF49BB00A052D1E37F000041FFD32944404838354C510C085458031400000049BB00A052D1E37F000041FFD34440084838354C0C5458031500000049BB00A052D1E37F000041FFD335444048384C0C0858031600000049BB00A052D1E37F000041FFD3444038484C0C0858031700000049BB00A052D1E37F000041FFD3444008484C0C031800000049BB00A052D1E37F000041FFD344400839484C0C031900000049BB00A052D1E37F000041FFD34440484C0C5C01031A00000049BB00A052D1E37F000041FFD344400C484C5C07031B00000049BB00A052D1E37F000041FFD344400C01484C5C07031C00000049BB00A052D1E37F000041FFD34440484C010D07031D00000049BB00A052D1E37F000041FFD34440484C010D07031E00000049BB00A052D1E37F000041FFD34440484C010D031F00000049BB00A052D1E37F000041FFD344400D484C0107032000000049BB00A052D1E37F000041FFD34440484C016569032100000049BB00A052D1E37F000041FFD3444001484C076569032200000049BB00A052D1E37F000041FFD34440484C0D01070707032300000049BB00A052D1E37F000041FFD34440484C0D010707070324000000 +[88d33223ede] jit-backend-dump} +[88d3322493e] {jit-backend-addr +Loop 1 ( #15 LOAD_FAST) has address 7fe3d152a816 to 7fe3d152aa30 (bootstrap 7fe3d152a7e0) +[88d33225f0a] jit-backend-addr} +[88d3322686e] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc7447812 +0 20FFFFFF -[101d933540ca7] jit-backend-dump} -[101d9335418d7] {jit-backend-dump +CODE_DUMP @7fe3d152a812 +0 20FFFFFF +[88d332279ea] jit-backend-dump} +[88d3322824a] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc74478bc +0 70010000 -[101d933542823] jit-backend-dump} -[101d933542e1b] {jit-backend-dump +CODE_DUMP @7fe3d152a8bc +0 70010000 +[88d33228ffe] jit-backend-dump} +[88d332295e2] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc74478ce +0 7C010000 -[101d933543c4f] jit-backend-dump} -[101d93354423b] {jit-backend-dump +CODE_DUMP @7fe3d152a8ce +0 7C010000 +[88d3322a282] jit-backend-dump} +[88d3322a7da] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc74478d8 +0 8E010000 -[101d933544f7b] jit-backend-dump} -[101d9335455b3] {jit-backend-dump +CODE_DUMP @7fe3d152a8d8 +0 8E010000 +[88d3322b46e] jit-backend-dump} +[88d3322bac6] {jit-backend-dump BACKEND 
x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc74478eb +0 96010000 -[101d9335461bf] jit-backend-dump} -[101d93354670b] {jit-backend-dump +CODE_DUMP @7fe3d152a8eb +0 96010000 +[88d3322c85a] jit-backend-dump} +[88d3322ce92] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc74478fc +0 9F010000 -[101d93354732b] jit-backend-dump} -[101d9335479cf] {jit-backend-dump +CODE_DUMP @7fe3d152a8fc +0 9F010000 +[88d3322dc9e] jit-backend-dump} +[88d3322e20a] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc744790f +0 A4010000 -[101d9335486b3] jit-backend-dump} -[101d933548d3b] {jit-backend-dump +CODE_DUMP @7fe3d152a90f +0 A4010000 +[88d3322ee5e] jit-backend-dump} +[88d3322f3a2] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc7447947 +0 85010000 -[101d933549a0f] jit-backend-dump} -[101d933549f5b] {jit-backend-dump +CODE_DUMP @7fe3d152a947 +0 85010000 +[88d33230026] jit-backend-dump} +[88d33230572] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc7447959 +0 8C010000 -[101d93354ab6b] jit-backend-dump} -[101d93354b0bb] {jit-backend-dump +CODE_DUMP @7fe3d152a959 +0 8C010000 +[88d332311c6] jit-backend-dump} +[88d33231802] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc7447967 +0 97010000 -[101d93354bcf3] jit-backend-dump} -[101d93354c46f] {jit-backend-dump +CODE_DUMP @7fe3d152a967 +0 97010000 +[88d332325fa] jit-backend-dump} +[88d33232e56] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc7447984 +0 AD010000 -[101d93354d1ff] jit-backend-dump} -[101d93354d887] {jit-backend-dump +CODE_DUMP @7fe3d152a984 +0 AD010000 +[88d33233ae2] jit-backend-dump} +[88d3323405e] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc74479af +0 9B010000 -[101d93354e61b] jit-backend-dump} -[101d93354ec37] {jit-backend-dump +CODE_DUMP @7fe3d152a9af +0 9B010000 +[88d33234ce2] jit-backend-dump} +[88d3323522a] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc74479c2 +0 A0010000 -[101d93354f8fb] jit-backend-dump} -[101d93354ff73] {jit-backend-dump +CODE_DUMP @7fe3d152a9c2 +0 A0010000 +[88d33235e96] jit-backend-dump} +[88d332363fa] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc74479f9 +0 82010000 -[101d933550b97] jit-backend-dump} -[101d9335510ef] {jit-backend-dump +CODE_DUMP @7fe3d152a9f9 +0 82010000 +[88d33237186] jit-backend-dump} +[88d332377b6] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc7447a0a +0 8A010000 -[101d933551d13] jit-backend-dump} -[101d93355230b] {jit-backend-dump +CODE_DUMP @7fe3d152aa0a +0 8A010000 +[88d332384f2] jit-backend-dump} +[88d33238ada] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc7447a27 +0 A2010000 -[101d9335530cb] jit-backend-dump} -[101d933553d03] jit-backend} -[101d933556d57] {jit-log-opt-loop +CODE_DUMP @7fe3d152aa27 +0 A2010000 +[88d33239722] jit-backend-dump} +[88d3323a282] jit-backend} +[88d3323c77a] {jit-log-opt-loop # Loop 1 ( #15 LOAD_FAST) : loop 
with 92 ops [p0, p1] +84: p2 = getfield_gc(p0, descr=) @@ -308,7 +308,7 @@ +127: p14 = getarrayitem_gc(p8, 2, descr=) +131: p16 = getarrayitem_gc(p8, 3, descr=) +135: p17 = getfield_gc(p0, descr=) -+135: label(p0, p1, p2, p3, i4, p5, i6, i7, p10, p12, p14, p16, descr=TargetToken(139705745528160)) ++135: label(p0, p1, p2, p3, i4, p5, i6, i7, p10, p12, p14, p16, descr=TargetToken(140616447300960)) debug_merge_point(0, ' #15 LOAD_FAST') +214: guard_value(i6, 2, descr=) [i6, p1, p0, p2, p3, i4, p5, i7, p10, p12, p14, p16] +224: guard_nonnull_class(p12, ConstClass(W_IntObject), descr=) [p1, p0, p12, p2, p3, i4, p5, p10, p14, p16] @@ -346,35 +346,35 @@ +395: i40 = int_add(i22, 1) debug_merge_point(0, ' #70 STORE_FAST') debug_merge_point(0, ' #73 JUMP_ABSOLUTE') -+406: guard_not_invalidated(, descr=) [p1, p0, p2, p5, i40, i38, None] ++406: guard_not_invalidated(, descr=) [p1, p0, p2, p5, i38, i40, None] +406: i42 = getfield_raw(44057928, descr=) +414: i44 = int_lt(i42, 0) -guard_false(i44, descr=) [p1, p0, p2, p5, i40, i38, None] +guard_false(i44, descr=) [p1, p0, p2, p5, i38, i40, None] debug_merge_point(0, ' #15 LOAD_FAST') -+424: label(p0, p1, p2, p5, i38, i40, descr=TargetToken(139705745528240)) ++424: label(p0, p1, p2, p5, i38, i40, descr=TargetToken(140616447301040)) debug_merge_point(0, ' #15 LOAD_FAST') debug_merge_point(0, ' #18 LOAD_CONST') debug_merge_point(0, ' #21 COMPARE_OP') +454: i45 = int_lt(i40, 10000) -guard_true(i45, descr=) [p1, p0, p2, p5, i40, i38] +guard_true(i45, descr=) [p1, p0, p2, p5, i38, i40] debug_merge_point(0, ' #24 POP_JUMP_IF_FALSE') debug_merge_point(0, ' #27 LOAD_FAST') debug_merge_point(0, ' #30 LOAD_CONST') debug_merge_point(0, ' #33 BINARY_MODULO') +467: i46 = int_eq(i40, -9223372036854775808) -guard_false(i46, descr=) [p1, p0, i40, p2, p5, None, i38] +guard_false(i46, descr=) [p1, p0, i40, p2, p5, i38, None] +486: i47 = int_mod(i40, 2) +513: i48 = int_rshift(i47, 63) +520: i49 = int_and(2, i48) +528: i50 = int_add(i47, i49) debug_merge_point(0, ' #34 POP_JUMP_IF_FALSE') +531: i51 = int_is_true(i50) -guard_false(i51, descr=) [p1, p0, p2, p5, i50, i40, i38] +guard_false(i51, descr=) [p1, p0, p2, p5, i50, i38, i40] debug_merge_point(0, ' #53 LOAD_FAST') debug_merge_point(0, ' #56 LOAD_CONST') debug_merge_point(0, ' #59 INPLACE_ADD') +541: i52 = int_add_ovf(i38, 1) -guard_no_overflow(, descr=) [p1, p0, i52, p2, p5, None, i40, i38] +guard_no_overflow(, descr=) [p1, p0, i52, p2, p5, None, i38, i40] debug_merge_point(0, ' #60 STORE_FAST') debug_merge_point(0, ' #63 LOAD_FAST') debug_merge_point(0, ' #66 LOAD_CONST') @@ -387,45 +387,45 @@ +577: i55 = int_lt(i54, 0) guard_false(i55, descr=) [p1, p0, p2, p5, i53, i52, None, None, None] debug_merge_point(0, ' #15 LOAD_FAST') -+587: jump(p0, p1, p2, p5, i52, i53, descr=TargetToken(139705745528240)) ++587: jump(p0, p1, p2, p5, i52, i53, descr=TargetToken(140616447301040)) +592: --end of the loop-- -[101d9335bb7f7] jit-log-opt-loop} -[101d9336ee0c7] {jit-backend -[101d933752597] {jit-backend-dump +[88d3329e7be] jit-log-opt-loop} +[88d3339de62] {jit-backend +[88d333fa652] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc7447bf5 +0 
488DA50000000049BB68E121CA0F7F00004D8B234983C40149BB68E121CA0F7F00004D89234C8BA558FFFFFF498B54241048C740100000000041813C24388F01000F85000000004D8B6424184983FC020F85000000004885D20F8500000000488B9570FFFFFF4C8B6268488B042530255601488D5020483B142548255601761A49BB2D7244C70F7F000041FFD349BBC27244C70F7F000041FFD3488914253025560148C700F8220000488B9570FFFFFF40C68295000000014C8B8D60FFFFFFF64204017417504151524889D74C89CE41BBF0C4C50041FFD35A4159584C894A50F6420401741D50524889D749BB206055C70F7F00004C89DE41BBF0C4C50041FFD35A5849BB206055C70F7F00004C895A7840C682960000000048C742600000000048C782800000000200000048C742582A00000041F644240401742641F6442404407518504C89E7BE000000004889C241BB50C2C50041FFD358EB0641804C24FF0149894424104889C24883C01048C700F82200004C8B8D30FFFFFF4C89480841F644240401742841F644240440751A52504C89E7BE010000004889C241BB50C2C50041FFD3585AEB0641804C24FF01498944241849C74424200000000049C74424280000000049C7442430000000004C89720848891425B039720141BBD01BF30041FFD3B801000000488D65D8415F415E415D415C5B5DC349BB007044C70F7F000041FFD344403048086139032500000049BB007044C70F7F000041FFD344403148086139032600000049BB007044C70F7F000041FFD34440084861390327000000 -[101d93375b0db] jit-backend-dump} -[101d93375bdb3] {jit-backend-addr -bridge out of Guard 16 has address 7f0fc7447bf5 to 7f0fc7447dee -[101d93375ce13] jit-backend-addr} -[101d93375d5b3] {jit-backend-dump +CODE_DUMP @7fe3d152abf5 +0 488DA50000000049BB681130D4E37F00004D8B234983C40149BB681130D4E37F00004D89234C8BA558FFFFFF498B54241048C740100000000041813C24388F01000F85000000004D8B6424184983FC020F85000000004885D20F8500000000488B9570FFFFFF4C8B6268488B042530255601488D5020483B142548255601761A49BB2DA252D1E37F000041FFD349BBC2A252D1E37F000041FFD3488914253025560148C700F8220000488B9570FFFFFF40C68295000000014C8B8D60FFFFFFF64204017417415152504889D74C89CE41BBF0C4C50041FFD3585A41594C894A50F6420401741D52504889D749BB20A063D1E37F00004C89DE41BBF0C4C50041FFD3585A49BB20A063D1E37F00004C895A7840C682960000000048C742600000000048C782800000000200000048C742582A00000041F644240401742641F6442404407518504C89E7BE000000004889C241BB50C2C50041FFD358EB0641804C24FF0149894424104889C24883C01048C700F82200004C8B8D30FFFFFF4C89480841F644240401742841F644240440751A50524C89E7BE010000004889C241BB50C2C50041FFD35A58EB0641804C24FF01498944241849C74424200000000049C74424280000000049C7442430000000004C89720848891425B039720141BBD01BF30041FFD3B801000000488D65D8415F415E415D415C5B5DC349BB00A052D1E37F000041FFD344403048086139032500000049BB00A052D1E37F000041FFD344403148086139032600000049BB00A052D1E37F000041FFD34440084861390327000000 +[88d33402c6a] jit-backend-dump} +[88d334039c2] {jit-backend-addr +bridge out of Guard 16 has address 7fe3d152abf5 to 7fe3d152adee +[88d33404b0a] jit-backend-addr} +[88d334053be] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc7447bf8 +0 A0FEFFFF -[101d93375e7df] jit-backend-dump} -[101d933766b53] {jit-backend-dump +CODE_DUMP @7fe3d152abf8 +0 A0FEFFFF +[88d33406682] jit-backend-dump} +[88d33406f22] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc7447c38 +0 B2010000 -[101d933767ccb] jit-backend-dump} -[101d933768297] {jit-backend-dump +CODE_DUMP @7fe3d152ac38 +0 B2010000 +[88d33407d7a] jit-backend-dump} +[88d3340e6d6] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc7447c47 +0 BC010000 -[101d93376905f] jit-backend-dump} -[101d9337695f3] {jit-backend-dump +CODE_DUMP @7fe3d152ac47 +0 BC010000 
+[88d3340f83a] jit-backend-dump} +[88d3340fec6] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc7447c50 +0 CC010000 -[101d93376a23b] jit-backend-dump} -[101d93376abf7] {jit-backend-dump +CODE_DUMP @7fe3d152ac50 +0 CC010000 +[88d33410c6a] jit-backend-dump} +[88d334115ae] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc74475cf +0 22060000 -[101d93376b887] jit-backend-dump} -[101d93376c36f] jit-backend} -[101d93376d483] {jit-log-opt-bridge +CODE_DUMP @7fe3d152a5cf +0 22060000 +[88d33412206] jit-backend-dump} +[88d33412bbe] jit-backend} +[88d33413c72] {jit-log-opt-bridge # bridge out of Guard 16 with 28 ops [p0, p1, p2, i3, i4, i5, p6, p7, i8, i9] debug_merge_point(0, ' #38 POP_BLOCK') @@ -457,153 +457,153 @@ +464: setfield_gc(p18, i8, descr=) +468: finish(p18, descr=) +505: --end of the loop-- -[101d9337940af] jit-log-opt-bridge} -[101d933e4fadb] {jit-backend -[101d9342684e3] {jit-backend-dump +[88d3343ab46] jit-log-opt-bridge} +[88d3411d8fa] {jit-backend +[88d3484d56e] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc7447e87 +0 488DA50000000049BB80E121CA0F7F0000498B034883C00149BB80E121CA0F7F0000498903488B8570FFFFFF4C8B780849BB302855C70F7F00004D39DF0F85000000004D8B771049BBF02855C70F7F00004D39DE0F850000000041BB201B8D0041FFD34C8B78404C8B70504D85F60F85000000004C8B70284983FE000F85000000004C8B342500D785014981FE201288010F85000000004C8B34254845A0024983FE000F8C0000000048898518FFFFFF488B042530255601488D9048010000483B142548255601761A49BB2D7244C70F7F000041FFD349BBC27244C70F7F000041FFD3488914253025560148C700488701004889C24881C09800000048C7008800000048C74008050000004989C64883C03848C700F82200004989C54883C01048C700F82200004989C44883C01048C700806300004989C24883C01848C700783600004989C14883C01848C7008800000048C74008000000004989C04883C01048C700508A010048896808488BBD18FFFFFFF6470401741E4151524152505741504889C641BBF0C4C50041FFD341585F58415A5A415948894740488BB570FFFFFF48896E184C897A3049C74508010000004D896E104D89661849C74110400FA10149BB80881ACA0F7F00004D8959084D894A1049C74208010000004D8956204C89726848C742700200000049BB302855C70F7F00004C895A0848C742581300000048C7828000000003000000C782900000001500000049BB206055C70F7F00004C895A7849BBA0881ACA0F7F00004C895A604C89422848899510FFFFFF48898508FFFFFF48C78578FFFFFF280000004889FE4889D749BB067444C70F7F000041FFD34883F80174154889C7488BB510FFFFFF41BB4091940041FFD3EB23488B8510FFFFFF48C7401800000000488B0425B039720148C70425B0397201000000004883BD78FFFFFF000F8C0000000048833C25A046A002000F8500000000488B9518FFFFFF488B7A504885FF0F8500000000488B7A28488BB510FFFFFF48C74650000000004883FF000F8500000000488B7A404C8B46304C0FB6B694000000F6420401741B5256504150574889D74C89C641BBF0C4C50041FFD35F4158585E5A4C8942404D85F60F85000000004C8BB508FFFFFF49C74608FDFFFFFF8138F82200000F85000000004C8B7008488B9528FFFFFF4C01F20F8000000000488B8520FFFFFF4883C0010F80000000004C8B34254845A0024983FE000F8C0000000049BB98E121CA0F7F00004D8B334983C60149BB98E121CA0F7F00004D89334881F8102700000F8D0000000049BB00000000000000804C39D80F8400000000B90200000048899500FFFFFF488985F8FEFFFF489948F7F94889D048C1FA3F41BE020000004921D64C01F04883F8000F8500000000488B8500FFFFFF4883C0010F80000000004C8BB5F8FEFFFF4983C601488B14254845A0024883FA000F8C000000004C89F349BB887944C70F7F000041FFE349BB007044C70F7F000041FFD344003C484C6569032900000049BB007044C70F7F000041FFD34400383C484C6569032A00000049BB007044C70F7F000041FFD344003C484C6569032B00000049BB007044C70F7F000041
FFD344400038484C3C156569032C00000049BB007044C70F7F000041FFD3444000484C3C156569032D00000049BB007044C70F7F000041FFD3444000484C3C156569032E00000049BB007044C70F7F000041FFD344400038484C3C156569032F00000049BB007044C70F7F000041FFD3444000484C3C156569033000000049BB437044C70F7F000041FFD344406C700074484C6569032800000049BB437044C70F7F000041FFD344406C700074484C6569033100000049BB007044C70F7F000041FFD344400800701C74484C6569033200000049BB007044C70F7F000041FFD3444000180874484C6569033300000049BB007044C70F7F000041FFD34440001C180874484C6569033400000049BB007044C70F7F000041FFD3444000484C6569033500000049BB007044C70F7F000041FFD344400009484C6569033600000049BB007044C70F7F000041FFD3444001484C090769033700000049BB007044C70F7F000041FFD34440484C01090707033800000049BB007044C70F7F000041FFD34440484C01090707033900000049BB007044C70F7F000041FFD34440484C0109033A00000049BB007044C70F7F000041FFD3444001484C0709033B00000049BB007044C70F7F000041FFD34440484C017D79033C00000049BB007044C70F7F000041FFD3444001484C077D79033D00000049BB007044C70F7F000041FFD34440484C0139070707033E00000049BB007044C70F7F000041FFD34440484C0139070707033F000000 -[101d9342870eb] jit-backend-dump} -[101d934287cc7] {jit-backend-addr -bridge out of Guard 33 has address 7f0fc7447e87 to 7f0fc74482b6 -[101d934289067] jit-backend-addr} -[101d934289bfb] {jit-backend-dump +CODE_DUMP @7fe3d152ae87 +0 488DA50000000049BB801130D4E37F0000498B034883C00149BB801130D4E37F0000498903488B8570FFFFFF4C8B780849BB306863D1E37F00004D39DF0F85000000004D8B771049BBF06863D1E37F00004D39DE0F850000000041BB201B8D0041FFD34C8B78404C8B70504D85F60F85000000004C8B70284983FE000F85000000004C8B342500D785014981FE201288010F85000000004C8B34254845A0024983FE000F8C0000000048898518FFFFFF488B042530255601488D9048010000483B142548255601761A49BB2DA252D1E37F000041FFD349BBC2A252D1E37F000041FFD3488914253025560148C700488701004889C24881C09800000048C7008800000048C74008050000004989C64883C03848C700F82200004989C54883C01048C700F82200004989C44883C01048C700806300004989C24883C01848C700783600004989C14883C01848C7008800000048C74008000000004989C04883C01048C700508A010048896808488BBD18FFFFFFF6470401741E5241524150505741514889C641BBF0C4C50041FFD341595F584158415A5A48894740488BB570FFFFFF48896E184C897A3049C74508010000004D896E104D89661849C74110400FA10149BB80A828D4E37F00004D8959084D894A1049C74208010000004D8956204C89726848C742700200000049BB306863D1E37F00004C895A0848C742581300000048C7828000000003000000C782900000001500000049BB20A063D1E37F00004C895A7849BBA0A828D4E37F00004C895A604C89422848899510FFFFFF48898508FFFFFF48C78578FFFFFF280000004889FE4889D749BB06A452D1E37F000041FFD34883F80174154889C7488BB510FFFFFF41BB4091940041FFD3EB23488B8510FFFFFF48C7401800000000488B0425B039720148C70425B0397201000000004883BD78FFFFFF000F8C0000000048833C25A046A002000F8500000000488B9518FFFFFF488B7A504885FF0F8500000000488B7A28488BB510FFFFFF48C74650000000004883FF000F8500000000488B7A404C8B46304C0FB6B694000000F6420401741B5652504150574889D74C89C641BBF0C4C50041FFD35F4158585A5E4C8942404D85F60F85000000004C8BB508FFFFFF49C74608FDFFFFFF8138F82200000F85000000004C8B7008488B9528FFFFFF4C01F20F8000000000488B8520FFFFFF4883C0010F80000000004C8B34254845A0024983FE000F8C0000000049BB981130D4E37F00004D8B334983C60149BB981130D4E37F00004D89334881F8102700000F8D0000000049BB00000000000000804C39D80F8400000000B90200000048899500FFFFFF488985F8FEFFFF489948F7F94889D048C1FA3F41BE020000004921D64C01F04883F8000F8500000000488B8500FFFFFF4883C0010F80000000004C8BB5F8FEFFFF4983C601488B14254845A0024883FA000F8C000000004C89F349BB88A952D1E37F000041FFE349BB00A052D1E37F000041FFD344003C484C6569032900000049BB00A052D1E37F000041FFD34
400383C484C6569032A00000049BB00A052D1E37F000041FFD344003C484C6569032B00000049BB00A052D1E37F000041FFD344400038484C3C156569032C00000049BB00A052D1E37F000041FFD3444000484C3C156569032D00000049BB00A052D1E37F000041FFD3444000484C3C156569032E00000049BB00A052D1E37F000041FFD344400038484C3C156569032F00000049BB00A052D1E37F000041FFD3444000484C3C156569033000000049BB43A052D1E37F000041FFD344406C700074484C6569032800000049BB43A052D1E37F000041FFD344406C700074484C6569033100000049BB00A052D1E37F000041FFD344400800701C74484C6569033200000049BB00A052D1E37F000041FFD3444000180874484C6569033300000049BB00A052D1E37F000041FFD34440001C180874484C6569033400000049BB00A052D1E37F000041FFD3444000484C6569033500000049BB00A052D1E37F000041FFD344400009484C6569033600000049BB00A052D1E37F000041FFD3444001484C090769033700000049BB00A052D1E37F000041FFD34440484C01090707033800000049BB00A052D1E37F000041FFD34440484C01090707033900000049BB00A052D1E37F000041FFD34440484C0901033A00000049BB00A052D1E37F000041FFD3444001484C0907033B00000049BB00A052D1E37F000041FFD34440484C01797D033C00000049BB00A052D1E37F000041FFD3444001484C07797D033D00000049BB00A052D1E37F000041FFD34440484C3901070707033E00000049BB00A052D1E37F000041FFD34440484C3901070707033F000000 +[88d3487067a] jit-backend-dump} +[88d3487132a] {jit-backend-addr +bridge out of Guard 33 has address 7fe3d152ae87 to 7fe3d152b2b6 +[88d34872842] jit-backend-addr} +[88d34873512] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc7447e8a +0 70FEFFFF -[101d93428af1b] jit-backend-dump} -[101d93428ba07] {jit-backend-dump +CODE_DUMP @7fe3d152ae8a +0 70FEFFFF +[88d3487491a] jit-backend-dump} +[88d348755de] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc7447ec6 +0 EC030000 -[101d93428c88b] jit-backend-dump} -[101d93428cfbf] {jit-backend-dump +CODE_DUMP @7fe3d152aec6 +0 EC030000 +[88d348763ea] jit-backend-dump} +[88d348769e2] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc7447edd +0 EE030000 -[101d93428dcd3] jit-backend-dump} -[101d93428e49b] {jit-backend-dump +CODE_DUMP @7fe3d152aedd +0 EE030000 +[88d348776fe] jit-backend-dump} +[88d34877e2e] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc7447ef7 +0 07040000 -[101d93428f367] jit-backend-dump} -[101d93428f9f3] {jit-backend-dump +CODE_DUMP @7fe3d152aef7 +0 07040000 +[88d34878ade] jit-backend-dump} +[88d3487917a] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc7447f05 +0 15040000 -[101d934290813] jit-backend-dump} -[101d934290f17] {jit-backend-dump +CODE_DUMP @7fe3d152af05 +0 15040000 +[88d34879fb6] jit-backend-dump} +[88d3487a6ba] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc7447f1a +0 36040000 -[101d934291c43] jit-backend-dump} -[101d9342950ef] {jit-backend-dump +CODE_DUMP @7fe3d152af1a +0 36040000 +[88d3487b3a6] jit-backend-dump} +[88d3487b906] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc7447f2c +0 40040000 -[101d93429620f] jit-backend-dump} -[101d93429687b] {jit-backend-dump +CODE_DUMP @7fe3d152af2c +0 40040000 +[88d3487c596] jit-backend-dump} +[88d3487caf6] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc7448131 +0 56020000 -[101d9342976bb] 
jit-backend-dump} -[101d934297d93] {jit-backend-dump +CODE_DUMP @7fe3d152b131 +0 56020000 +[88d34880546] jit-backend-dump} +[88d34880c56] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc7448140 +0 63020000 -[101d934298bd7] jit-backend-dump} -[101d934299233] {jit-backend-dump +CODE_DUMP @7fe3d152b140 +0 63020000 +[88d34881afa] jit-backend-dump} +[88d34882166] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc7448154 +0 6B020000 -[101d934299ec3] jit-backend-dump} -[101d93429a4ff] {jit-backend-dump +CODE_DUMP @7fe3d152b154 +0 6B020000 +[88d34882f72] jit-backend-dump} +[88d348835da] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc7448171 +0 6B020000 -[101d93429b213] jit-backend-dump} -[101d93429b7ab] {jit-backend-dump +CODE_DUMP @7fe3d152b171 +0 6B020000 +[88d3488424e] jit-backend-dump} +[88d348847ba] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc74481af +0 49020000 -[101d93429c3bb] jit-backend-dump} -[101d93429c923] {jit-backend-dump +CODE_DUMP @7fe3d152b1af +0 49020000 +[88d34885406] jit-backend-dump} +[88d3488596e] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc74481ca +0 4B020000 -[101d93429d653] jit-backend-dump} -[101d93429dccb] {jit-backend-dump +CODE_DUMP @7fe3d152b1ca +0 4B020000 +[88d348865c6] jit-backend-dump} +[88d34886b1e] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc74481de +0 50020000 -[101d93429e98b] jit-backend-dump} -[101d93429ef73] {jit-backend-dump +CODE_DUMP @7fe3d152b1de +0 50020000 +[88d348878f2] jit-backend-dump} +[88d34887f66] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc74481ef +0 59020000 -[101d93429fc8b] jit-backend-dump} -[101d9342a0737] {jit-backend-dump +CODE_DUMP @7fe3d152b1ef +0 59020000 +[88d34888c7a] jit-backend-dump} +[88d3488976e] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc7448201 +0 7B020000 -[101d9342a13ab] jit-backend-dump} -[101d9342a192f] {jit-backend-dump +CODE_DUMP @7fe3d152b201 +0 7B020000 +[88d3488a3de] jit-backend-dump} +[88d3488a966] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc744822c +0 6A020000 -[101d9342a25a7] jit-backend-dump} -[101d9342a2c27] {jit-backend-dump +CODE_DUMP @7fe3d152b22c +0 6A020000 +[88d3488b5e2] jit-backend-dump} +[88d3488bb6a] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc744823f +0 6F020000 -[101d9342a3acb] jit-backend-dump} -[101d9342a410b] {jit-backend-dump +CODE_DUMP @7fe3d152b23f +0 6F020000 +[88d3488c7d6] jit-backend-dump} +[88d3488ce32] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc7448274 +0 53020000 -[101d9342a4e27] jit-backend-dump} -[101d9342a53af] {jit-backend-dump +CODE_DUMP @7fe3d152b274 +0 53020000 +[88d3488db4a] jit-backend-dump} +[88d3488e1ae] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc7448285 +0 5B020000 -[101d9342a6027] jit-backend-dump} -[101d9342a65db] {jit-backend-dump +CODE_DUMP @7fe3d152b285 +0 
5B020000 +[88d3488ee86] jit-backend-dump} +[88d3488f426] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc74482a2 +0 73020000 -[101d9342a7263] jit-backend-dump} -[101d9342a7acb] {jit-backend-dump +CODE_DUMP @7fe3d152b2a2 +0 73020000 +[88d3489006e] jit-backend-dump} +[88d34890abe] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc74479f9 +0 8A040000 -[101d9342a88cb] jit-backend-dump} -[101d9342a9513] jit-backend} -[101d9342aabeb] {jit-log-opt-bridge +CODE_DUMP @7fe3d152a9f9 +0 8A040000 +[88d3489174a] jit-backend-dump} +[88d3489254e] jit-backend} +[88d34894446] {jit-log-opt-bridge # bridge out of Guard 33 with 138 ops [p0, p1, p2, p3, i4, i5, i6] debug_merge_point(0, ' #37 LOAD_FAST') debug_merge_point(0, ' #40 LOAD_GLOBAL') +37: p7 = getfield_gc(p1, descr=) -+48: guard_value(p7, ConstPtr(ptr8), descr=) [p0, p1, p7, p2, p3, i6, i5] ++48: guard_value(p7, ConstPtr(ptr8), descr=) [p0, p1, p7, p2, p3, i5, i6] +67: p9 = getfield_gc(p7, descr=) -+71: guard_value(p9, ConstPtr(ptr10), descr=) [p0, p1, p9, p7, p2, p3, i6, i5] -+90: guard_not_invalidated(, descr=) [p0, p1, p7, p2, p3, i6, i5] ++71: guard_value(p9, ConstPtr(ptr10), descr=) [p0, p1, p9, p7, p2, p3, i5, i6] ++90: guard_not_invalidated(, descr=) [p0, p1, p7, p2, p3, i5, i6] debug_merge_point(0, ' #43 CALL_FUNCTION') +90: p12 = call(ConstClass(getexecutioncontext), descr=) +99: p13 = getfield_gc(p12, descr=) +103: i14 = force_token() +103: p15 = getfield_gc(p12, descr=) -+107: guard_isnull(p15, descr=) [p0, p1, p12, p15, p2, p3, p13, i14, i6, i5] ++107: guard_isnull(p15, descr=) [p0, p1, p12, p15, p2, p3, p13, i14, i5, i6] +116: i16 = getfield_gc(p12, descr=) +120: i17 = int_is_zero(i16) -guard_true(i17, descr=) [p0, p1, p12, p2, p3, p13, i14, i6, i5] +guard_true(i17, descr=) [p0, p1, p12, p2, p3, p13, i14, i5, i6] debug_merge_point(1, ' #0 LOAD_CONST') debug_merge_point(1, ' #3 STORE_FAST') debug_merge_point(1, ' #6 SETUP_LOOP') debug_merge_point(1, ' #9 LOAD_GLOBAL') -+130: guard_not_invalidated(, descr=) [p0, p1, p12, p2, p3, p13, i14, i6, i5] ++130: guard_not_invalidated(, descr=) [p0, p1, p12, p2, p3, p13, i14, i5, i6] +130: p19 = getfield_gc(ConstPtr(ptr18), descr=) -+138: guard_value(p19, ConstPtr(ptr20), descr=) [p0, p1, p12, p19, p2, p3, p13, i14, i6, i5] ++138: guard_value(p19, ConstPtr(ptr20), descr=) [p0, p1, p12, p19, p2, p3, p13, i14, i5, i6] debug_merge_point(1, ' #12 LOAD_CONST') debug_merge_point(1, ' #15 CALL_FUNCTION') debug_merge_point(1, ' #18 GET_ITER') @@ -616,7 +616,7 @@ debug_merge_point(1, ' #35 JUMP_ABSOLUTE') +151: i22 = getfield_raw(44057928, descr=) +159: i24 = int_lt(i22, 0) -guard_false(i24, descr=) [p0, p1, p12, p2, p3, p13, i14, i6, i5] +guard_false(i24, descr=) [p0, p1, p12, p2, p3, p13, i14, i5, i6] debug_merge_point(1, ' #19 FOR_ITER') +169: i25 = force_token() p27 = new_with_vtable(38637192) @@ -649,33 +649,33 @@ +548: setfield_gc(p27, ConstPtr(ptr54), descr=) +562: setfield_gc(p27, p39, descr=) +566: p55 = call_assembler(p27, p12, descr=) -guard_not_forced(, descr=) [p0, p1, p12, p27, p55, p41, p2, p3, i6, i5] +guard_not_forced(, descr=) [p0, p1, p12, p27, p55, p41, p2, p3, i5, i6] +686: keepalive(p27) -+686: guard_no_exception(, descr=) [p0, p1, p12, p27, p55, p41, p2, p3, i6, i5] ++686: guard_no_exception(, descr=) [p0, p1, p12, p27, p55, p41, p2, p3, i5, i6] +701: p56 = getfield_gc(p12, descr=) -+712: guard_isnull(p56, descr=) [p0, p1, p12, p55, p27, p56, p41, p2, p3, i6, 
i5] ++712: guard_isnull(p56, descr=) [p0, p1, p12, p55, p27, p56, p41, p2, p3, i5, i6] +721: i57 = getfield_gc(p12, descr=) +725: setfield_gc(p27, ConstPtr(ptr58), descr=) +740: i59 = int_is_true(i57) -guard_false(i59, descr=) [p0, p1, p55, p27, p12, p41, p2, p3, i6, i5] +guard_false(i59, descr=) [p0, p1, p55, p27, p12, p41, p2, p3, i5, i6] +750: p60 = getfield_gc(p12, descr=) +754: p61 = getfield_gc(p27, descr=) +758: i62 = getfield_gc(p27, descr=) setfield_gc(p12, p61, descr=) -+803: guard_false(i62, descr=) [p0, p1, p55, p60, p27, p12, p41, p2, p3, i6, i5] ++803: guard_false(i62, descr=) [p0, p1, p55, p60, p27, p12, p41, p2, p3, i5, i6] debug_merge_point(0, ' #46 INPLACE_ADD') +812: setfield_gc(p41, -3, descr=) -+827: guard_class(p55, ConstClass(W_IntObject), descr=) [p0, p1, p55, p2, p3, i6, i5] ++827: guard_class(p55, ConstClass(W_IntObject), descr=) [p0, p1, p55, p2, p3, i5, i6] +839: i65 = getfield_gc_pure(p55, descr=) -+843: i66 = int_add_ovf(i6, i65) -guard_no_overflow(, descr=) [p0, p1, p55, i66, p2, p3, i6, i5] ++843: i66 = int_add_ovf(i5, i65) +guard_no_overflow(, descr=) [p0, p1, p55, i66, p2, p3, i5, i6] debug_merge_point(0, ' #47 STORE_FAST') debug_merge_point(0, ' #50 JUMP_FORWARD') debug_merge_point(0, ' #63 LOAD_FAST') debug_merge_point(0, ' #66 LOAD_CONST') debug_merge_point(0, ' #69 INPLACE_ADD') -+859: i68 = int_add_ovf(i5, 1) -guard_no_overflow(, descr=) [p0, p1, i68, p2, p3, i66, None, i5] ++859: i68 = int_add_ovf(i6, 1) +guard_no_overflow(, descr=) [p0, p1, i68, p2, p3, i66, None, i6] debug_merge_point(0, ' #70 STORE_FAST') debug_merge_point(0, ' #73 JUMP_ABSOLUTE') +876: guard_not_invalidated(, descr=) [p0, p1, p2, p3, i68, i66, None, None] @@ -683,29 +683,29 @@ +884: i73 = int_lt(i71, 0) guard_false(i73, descr=) [p0, p1, p2, p3, i68, i66, None, None] debug_merge_point(0, ' #15 LOAD_FAST') -+894: label(p1, p0, p2, p3, i66, i68, descr=TargetToken(139705745530240)) ++894: label(p1, p0, p2, p3, i66, i68, descr=TargetToken(140616447303040)) debug_merge_point(0, ' #18 LOAD_CONST') debug_merge_point(0, ' #21 COMPARE_OP') +924: i75 = int_lt(i68, 10000) -guard_true(i75, descr=) [p0, p1, p2, p3, i68, i66] +guard_true(i75, descr=) [p0, p1, p2, p3, i66, i68] debug_merge_point(0, ' #24 POP_JUMP_IF_FALSE') debug_merge_point(0, ' #27 LOAD_FAST') debug_merge_point(0, ' #30 LOAD_CONST') debug_merge_point(0, ' #33 BINARY_MODULO') +937: i77 = int_eq(i68, -9223372036854775808) -guard_false(i77, descr=) [p0, p1, i68, p2, p3, None, i66] +guard_false(i77, descr=) [p0, p1, i68, p2, p3, i66, None] +956: i79 = int_mod(i68, 2) +980: i81 = int_rshift(i79, 63) +987: i82 = int_and(2, i81) +996: i83 = int_add(i79, i82) debug_merge_point(0, ' #34 POP_JUMP_IF_FALSE') +999: i84 = int_is_true(i83) -guard_false(i84, descr=) [p0, p1, p2, p3, i83, i68, i66] +guard_false(i84, descr=) [p0, p1, p2, p3, i83, i66, i68] debug_merge_point(0, ' #53 LOAD_FAST') debug_merge_point(0, ' #56 LOAD_CONST') debug_merge_point(0, ' #59 INPLACE_ADD') +1009: i86 = int_add_ovf(i66, 1) -guard_no_overflow(, descr=) [p0, p1, i86, p2, p3, None, i68, i66] +guard_no_overflow(, descr=) [p0, p1, i86, p2, p3, None, i66, i68] debug_merge_point(0, ' #60 STORE_FAST') debug_merge_point(0, ' #63 LOAD_FAST') debug_merge_point(0, ' #66 LOAD_CONST') @@ -713,155 +713,155 @@ +1026: i88 = int_add(i68, 1) debug_merge_point(0, ' #70 STORE_FAST') debug_merge_point(0, ' #73 JUMP_ABSOLUTE') -+1037: guard_not_invalidated(, descr=) [p0, p1, p2, p3, i86, i88, None, None, None] ++1037: guard_not_invalidated(, descr=) [p0, p1, p2, p3, i88, i86, None, 
None, None] +1037: i90 = getfield_raw(44057928, descr=) +1045: i92 = int_lt(i90, 0) -guard_false(i92, descr=) [p0, p1, p2, p3, i86, i88, None, None, None] +guard_false(i92, descr=) [p0, p1, p2, p3, i88, i86, None, None, None] debug_merge_point(0, ' #15 LOAD_FAST') -+1055: jump(p1, p0, p2, p3, i86, i88, descr=TargetToken(139705745528240)) ++1055: jump(p1, p0, p2, p3, i86, i88, descr=TargetToken(140616447301040)) +1071: --end of the loop-- -[101d93433fbbf] jit-log-opt-bridge} -[101d934497aaf] {jit-backend-dump +[88d3492e66a] jit-log-opt-bridge} +[88d34aa7ade] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc7447976 +0 E9A1010000 -[101d93449ad5f] jit-backend-dump} -[101d93449b473] {jit-backend-dump +CODE_DUMP @7fe3d152a976 +0 E9A1010000 +[88d34aac14a] jit-backend-dump} +[88d34aac9da] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc7447a19 +0 E994010000 -[101d93449c38f] jit-backend-dump} -[101d93449ca23] {jit-backend-dump +CODE_DUMP @7fe3d152aa19 +0 E994010000 +[88d34aad96e] jit-backend-dump} +[88d34aae002] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc7447ee1 +0 E903040000 -[101d93449d84f] jit-backend-dump} -[101d93449de23] {jit-backend-dump +CODE_DUMP @7fe3d152aee1 +0 E903040000 +[88d34aaefb6] jit-backend-dump} +[88d34aaf5d2] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc7447f09 +0 E92B040000 -[101d93449eb0f] jit-backend-dump} -[101d93449f06f] {jit-backend-dump +CODE_DUMP @7fe3d152af09 +0 E92B040000 +[88d34ab027a] jit-backend-dump} +[88d34ab07f6] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc74481f3 +0 E96E020000 -[101d93449fd23] jit-backend-dump} -[101d9344a036f] {jit-backend-dump +CODE_DUMP @7fe3d152b1f3 +0 E96E020000 +[88d34ab14da] jit-backend-dump} +[88d34ab1a66] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc7448294 +0 E965020000 -[101d9344a1233] jit-backend-dump} -[101d93494cec3] {jit-backend -[101d934a39f17] {jit-backend-dump +CODE_DUMP @7fe3d152b294 +0 E965020000 +[88d34ab2722] jit-backend-dump} +[88d3500197e] {jit-backend +[88d350f661a] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc74485a8 +0 
488B04254045A0024829E0483B0425E03C5101760D49BB637344C70F7F000041FFD3554889E5534154415541564157488DA50000000049BBB0E121CA0F7F00004D8B3B4983C70149BBB0E121CA0F7F00004D893B4C8B7F504C8B77784C0FB6AF960000004C8B67604C8B97800000004C8B4F584C8B4768498B5810498B5018498B4020498B48284889BD70FFFFFF498B78304C89BD68FFFFFF4D8B783848898D60FFFFFF498B48404D8B40484889B558FFFFFF4C89A550FFFFFF4C898D48FFFFFF48899D40FFFFFF48899538FFFFFF48898530FFFFFF4C89BD28FFFFFF48898D20FFFFFF4C898518FFFFFF49BBC8E121CA0F7F00004D8B034983C00149BBC8E121CA0F7F00004D89034983FA050F8500000000813F806300000F85000000004C8B57104D85D20F84000000004C8B4708498B4A108139582D03000F85000000004D8B5208498B4A084D8B7A104D8B52184983F8000F8C000000004D39D00F8D000000004C89C04D0FAFC74889CA4C01C14883C001488947084983FD000F850000000049BB28DC58C70F7F00004D39DE0F85000000004C8BB570FFFFFF4D8B6E0849BB302855C70F7F00004D39DD0F85000000004D8B451049BBF02855C70F7F00004D39D80F85000000004C8B2C2500D785014981FD201288010F850000000048899510FFFFFF48898D08FFFFFF48898500FFFFFF4889BDF8FEFFFF4C8995F0FEFFFF4889CF41BBA01FEF0041FFD348833C25A046A002000F85000000004C8B9560FFFFFF498B7A10813FF0CE01000F8500000000498B7A08488B4F084889CA4883C101488985E8FEFFFF4889BDE0FEFFFF488995D8FEFFFF4889CE41BB9029790041FFD348833C25A046A002000F8500000000488B8DE0FEFFFF488B5110488BBDD8FEFFFF4C8B95E8FEFFFFF64204017432F6420440751E52575141524889FE4889D74C89D241BB50C2C50041FFD3415A595F5AEB0E5748C1EF074883F7F8480FAB3A5F4C8954FA104C8B14254845A0024983FA000F8C0000000049BBE0E121CA0F7F00004D8B334983C60149BBE0E121CA0F7F00004D89334C8BB500FFFFFF4C3BB5F0FEFFFF0F8D000000004D0FAFF74C8B9510FFFFFF4D01F24C8BB500FFFFFF4983C601488BBDF8FEFFFF4C8977084C899508FFFFFF48898DD0FEFFFF4C89D741BBA01FEF0041FFD348833C25A046A002000F8500000000488B8DD0FEFFFF4C8B51084C89D74983C2014889BDC8FEFFFF488985C0FEFFFF4889CF4C89D641BB9029790041FFD348833C25A046A002000F85000000004C8B95D0FEFFFF498B4A10488B85C8FEFFFF488BBDC0FEFFFFF64104017432F6410440751E51415257504889C64889FA4889CF41BB50C2C50041FFD3585F415A59EB0E5048C1E8074883F0F8480FAB015848897CC110488B3C254845A0024883FF000F8C000000004C89B500FFFFFF4C89D1E9CCFEFFFF49BB007044C70F7F000041FFD3294C404438355055585C60481C64686C034000000049BB007044C70F7F000041FFD34C401C44383550585C604864686C034100000049BB007044C70F7F000041FFD34C401C2844383550585C604864686C034200000049BB007044C70F7F000041FFD34C401C21042844383550585C604864686C034300000049BB007044C70F7F000041FFD34C401C21293D0544383550585C604864686C034400000049BB007044C70F7F000041FFD34C401C213D0544383550585C604864686C034500000049BB007044C70F7F000041FFD3354C40443850585C60481C686C05034600000049BB007044C70F7F000041FFD34C403844505C60481C686C05034700000049BB007044C70F7F000041FFD34C383444505C60481C686C05034800000049BB007044C70F7F000041FFD34C38203444505C60481C686C05034900000049BB007044C70F7F000041FFD34C383444505C60481C686C05034A00000049BB007044C70F7F000041FFD34C383444505C60481C686C05034B00000049BB437044C70F7F000041FFD34C380044505C60487C6C75034C00000049BB007044C70F7F000041FFD34C381C2844505C607C6C0075034D00000049BB437044C70F7F000041FFD34C388D018401880144505C60487C6C0775034E00000049BB007044C70F7F000041FFD34C3844505C60487C6C0775034F00000049BB007044C70F7F000041FFD34C407C393D7144505C60486C75035000000049BB007044C70F7F000041FFD34C4044505C60481C6C2907035100000049BB437044C70F7F000041FFD34C400044505C60487C6C7507035200000049BB437044C70F7F000041FFD34C4095019801900144505C60487C6C7507035300000049BB007044C70F7F000041FFD34C4044505C60487C6C75070354000000 -[101d934a5a737] jit-backend-dump} -[101d934a5b3bf] {jit-backend-addr -Loop 2 ( #13 FOR_ITER) has address 7f0fc74485de to 7f0fc74489b7 (bootstrap 
7f0fc74485a8) -[101d934a5ce03] jit-backend-addr} -[101d934a5da27] {jit-backend-dump +CODE_DUMP @7fe3d152b5a8 +0 488B04254045A0024829E0483B0425E03C5101760D49BB63A352D1E37F000041FFD3554889E5534154415541564157488DA50000000049BBB01130D4E37F00004D8B3B4983C70149BBB01130D4E37F00004D893B4C8B7F504C8B77784C0FB6AF960000004C8B67604C8B97800000004C8B4F584C8B4768498B5810498B5018498B4020498B48284889B570FFFFFF498B70304C89A568FFFFFF4D8B60384889BD60FFFFFF498B78404D8B40484C89BD58FFFFFF4C898D50FFFFFF48899D48FFFFFF48899540FFFFFF48898538FFFFFF4C89A530FFFFFF4889BD28FFFFFF4C898520FFFFFF49BBC81130D4E37F00004D8B034983C00149BBC81130D4E37F00004D89034983FA050F8500000000813E806300000F85000000004C8B56104D85D20F84000000004C8B4608498B7A10813F582D03000F85000000004D8B5208498B7A084D8B62104D8B52184983F8000F8C000000004D39D00F8D000000004C89C04D0FAFC44889FA4C01C74883C001488946084983FD000F850000000049BB281C67D1E37F00004D39DE0F85000000004C8BB560FFFFFF4D8B6E0849BB306863D1E37F00004D39DD0F85000000004D8B451049BBF06863D1E37F00004D39D80F85000000004C8B2C2500D785014981FD201288010F85000000004C899518FFFFFF48899510FFFFFF4889BD08FFFFFF48898D00FFFFFF488985F8FEFFFF4889B5F0FEFFFF41BBA01FEF0041FFD348833C25A046A002000F8500000000488BB500FFFFFF488B4E108139F0CE01000F8500000000488B4E08488B79084889FA4883C701488985E8FEFFFF48898DE0FEFFFF488995D8FEFFFF4889FE4889CF41BB9029790041FFD348833C25A046A002000F8500000000488BBDE0FEFFFF488B5710488B8DD8FEFFFF488BB5E8FEFFFFF64204017430F6420440751C525157564889D74889F24889CE41BB50C2C50041FFD35E5F595AEB0E5148C1E9074883F1F8480FAB0A59488974CA10488B34254845A0024883FE000F8C0000000049BBE01130D4E37F00004D8B334983C60149BBE01130D4E37F00004D89334C8BB5F8FEFFFF4C3BB518FFFFFF0F8D000000004D0FAFF4488BB510FFFFFF4C01F64C8BB5F8FEFFFF4983C601488B8DF0FEFFFF4C8971084889B508FFFFFF4889BDD0FEFFFF4889F741BBA01FEF0041FFD348833C25A046A002000F8500000000488BBDD0FEFFFF488B4F084889CE4883C101488985C8FEFFFF4889B5C0FEFFFF4889CE41BB9029790041FFD348833C25A046A002000F8500000000488B8DD0FEFFFF488B7910488BB5C0FEFFFF488B85C8FEFFFFF6470401742AF64704407516575156504889C241BB50C2C50041FFD3585E595FEB0E5648C1EE074883F6F8480FAB375E488944F710488B04254845A0024883F8000F8C000000004C89B5F8FEFFFF4889CFE9D7FEFFFF49BB00A052D1E37F000041FFD32940484C3835445154585C0418606468034000000049BB00A052D1E37F000041FFD34048184C38354454585C04606468034100000049BB00A052D1E37F000041FFD3404818284C38354454585C04606468034200000049BB00A052D1E37F000041FFD3404818211C284C38354454585C04606468034300000049BB00A052D1E37F000041FFD34048182129311D4C38354454585C04606468034400000049BB00A052D1E37F000041FFD340481821311D4C38354454585C04606468034500000049BB00A052D1E37F000041FFD33540484C384454585C041864681D034600000049BB00A052D1E37F000041FFD34048384C44585C041864681D034700000049BB00A052D1E37F000041FFD34038344C44585C041864681D034800000049BB00A052D1E37F000041FFD3403820344C44585C041864681D034900000049BB00A052D1E37F000041FFD34038344C44585C041864681D034A00000049BB00A052D1E37F000041FFD34038344C44585C041864681D034B00000049BB43A052D1E37F000041FFD34038004C44585C7880016875034C00000049BB00A052D1E37F000041FFD3403804184C44585C8001680075034D00000049BB43A052D1E37F000041FFD340388D01840188014C44585C788001680775034E00000049BB00A052D1E37F000041FFD340384C44585C788001680775034F00000049BB00A052D1E37F000041FFD3404880013931714C44585C786875035000000049BB00A052D1E37F000041FFD340484C44585C7804681907035100000049BB43A052D1E37F000041FFD34048004C44585C788001687507035200000049BB43A052D1E37F000041FFD340489901940190014C44585C788001687507035300000049BB00A052D1E37F000041FFD340484C44585C7880016875070354000000 +[88d3511802e] jit-backend-dump} 
+[88d35118d66] {jit-backend-addr +Loop 2 ( #13 FOR_ITER) has address 7fe3d152b5de to 7fe3d152b9aa (bootstrap 7fe3d152b5a8) +[88d3511a862] jit-backend-addr} +[88d3511b456] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc74485da +0 C0FEFFFF -[101d934a5eba3] jit-backend-dump} -[101d934a5f88f] {jit-backend-dump +CODE_DUMP @7fe3d152b5da +0 C0FEFFFF +[88d3511c6ca] jit-backend-dump} +[88d3511d1ee] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc74486b7 +0 FC020000 -[101d934a605fb] jit-backend-dump} -[101d934a60bdf] {jit-backend-dump +CODE_DUMP @7fe3d152b6b0 +0 F6020000 +[88d3511e252] jit-backend-dump} +[88d3511e97a] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc74486c3 +0 12030000 -[101d934a6187b] jit-backend-dump} -[101d934a61e07] {jit-backend-dump +CODE_DUMP @7fe3d152b6bc +0 0C030000 +[88d3511f76a] jit-backend-dump} +[88d3511fdc6] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc74486d0 +0 25030000 -[101d934a62c73] jit-backend-dump} -[101d934a63327] {jit-backend-dump +CODE_DUMP @7fe3d152b6c9 +0 1F030000 +[88d35120aba] jit-backend-dump} +[88d3512117e] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc74486e4 +0 32030000 -[101d934a6407f] jit-backend-dump} -[101d934a6461f] {jit-backend-dump +CODE_DUMP @7fe3d152b6dd +0 2C030000 +[88d35121eda] jit-backend-dump} +[88d3512247e] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc74486fe +0 3B030000 -[101d934a65297] jit-backend-dump} -[101d934a65837] {jit-backend-dump +CODE_DUMP @7fe3d152b6f7 +0 35030000 +[88d351230e6] jit-backend-dump} +[88d3512368e] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc7448707 +0 56030000 -[101d934a664cf] jit-backend-dump} -[101d934a66a3f] {jit-backend-dump +CODE_DUMP @7fe3d152b700 +0 50030000 +[88d35124492] jit-backend-dump} +[88d35124b06] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc7448726 +0 5A030000 -[101d934a6769b] jit-backend-dump} -[101d934a67d27] {jit-backend-dump +CODE_DUMP @7fe3d152b71f +0 54030000 +[88d35125862] jit-backend-dump} +[88d35125dd2] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc7448739 +0 67030000 -[101d934a6d79b] jit-backend-dump} -[101d934a6df23] {jit-backend-dump +CODE_DUMP @7fe3d152b732 +0 61030000 +[88d35126a3e] jit-backend-dump} +[88d35126fbe] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc7448757 +0 67030000 -[101d934a6ec23] jit-backend-dump} -[101d934a6f2a3] {jit-backend-dump +CODE_DUMP @7fe3d152b750 +0 61030000 +[88d35127c1e] jit-backend-dump} +[88d3512818a] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc744876e +0 6E030000 -[101d934a6ff77] jit-backend-dump} -[101d934a70607] {jit-backend-dump +CODE_DUMP @7fe3d152b767 +0 68030000 +[88d3512db1e] jit-backend-dump} +[88d3512e5ea] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc7448783 +0 96030000 -[101d934a7120f] jit-backend-dump} -[101d934a717bb] {jit-backend-dump 
+CODE_DUMP @7fe3d152b77c +0 90030000 +[88d3512f45e] jit-backend-dump} +[88d3512faaa] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc74487c1 +0 76030000 -[101d934a72643] jit-backend-dump} -[101d934a72ccb] {jit-backend-dump +CODE_DUMP @7fe3d152b7be +0 6C030000 +[88d351307e6] jit-backend-dump} +[88d35130d36] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc74487d8 +0 7C030000 -[101d934a73aa3] jit-backend-dump} -[101d934a74113] {jit-backend-dump +CODE_DUMP @7fe3d152b7d5 +0 73030000 +[88d351319a6] jit-backend-dump} +[88d35131f2a] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc7448817 +0 5B030000 -[101d934a74dc3] jit-backend-dump} -[101d934a75477] {jit-backend-dump +CODE_DUMP @7fe3d152b817 +0 50030000 +[88d35132b92] jit-backend-dump} +[88d3513321e] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc744887f +0 16030000 -[101d934a7612b] jit-backend-dump} -[101d934a766ab] {jit-backend-dump +CODE_DUMP @7fe3d152b87d +0 0E030000 +[88d35134046] jit-backend-dump} +[88d3513468e] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc74488b1 +0 01030000 -[101d934a7726f] jit-backend-dump} -[101d934a77913] {jit-backend-dump +CODE_DUMP @7fe3d152b8af +0 FA020000 +[88d35135452] jit-backend-dump} +[88d35135a42] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc74488fe +0 F0020000 -[101d934a7860f] jit-backend-dump} -[101d934a78c87] {jit-backend-dump +CODE_DUMP @7fe3d152b8fc +0 EA020000 +[88d351366ae] jit-backend-dump} +[88d35136c0e] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc744893c +0 D0020000 -[101d934a7998f] jit-backend-dump} -[101d934a7a003] {jit-backend-dump +CODE_DUMP @7fe3d152b937 +0 CE020000 +[88d35137892] jit-backend-dump} +[88d35137dee] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc74489a4 +0 8B020000 -[101d934a7aca7] jit-backend-dump} -[101d934a7ba63] jit-backend} -[101d934a7e39b] {jit-log-opt-loop +CODE_DUMP @7fe3d152b997 +0 92020000 +[88d35138b26] jit-backend-dump} +[88d351397d6] jit-backend} +[88d3513c30a] {jit-log-opt-loop # Loop 2 ( #13 FOR_ITER) : loop with 100 ops [p0, p1] +84: p2 = getfield_gc(p0, descr=) @@ -880,113 +880,113 @@ +157: p22 = getarrayitem_gc(p8, 6, descr=) +168: p24 = getarrayitem_gc(p8, 7, descr=) +172: p25 = getfield_gc(p0, descr=) -+172: label(p0, p1, p2, p3, i4, p5, i6, i7, p10, p12, p14, p16, p18, p20, p22, p24, descr=TargetToken(139705792105152)) ++172: label(p0, p1, p2, p3, i4, p5, i6, i7, p10, p12, p14, p16, p18, p20, p22, p24, descr=TargetToken(140616493869760)) debug_merge_point(0, ' #13 FOR_ITER') -+265: guard_value(i6, 5, descr=) [i6, p1, p0, p2, p3, i4, p5, i7, p10, p12, p14, p16, p18, p20, p22, p24] -+275: guard_class(p18, 38562496, descr=) [p1, p0, p18, p2, p3, i4, p5, p10, p12, p14, p16, p20, p22, p24] -+287: p28 = getfield_gc(p18, descr=) -+291: guard_nonnull(p28, descr=) [p1, p0, p18, p28, p2, p3, i4, p5, p10, p12, p14, p16, p20, p22, p24] -+300: i29 = getfield_gc(p18, descr=) -+304: p30 = getfield_gc(p28, descr=) -+308: guard_class(p30, 38745240, descr=) [p1, p0, p18, i29, p30, p28, p2, p3, i4, p5, p10, p12, p14, p16, p20, p22, p24] -+320: p32 = 
getfield_gc(p28, descr=) -+324: i33 = getfield_gc_pure(p32, descr=) -+328: i34 = getfield_gc_pure(p32, descr=) -+332: i35 = getfield_gc_pure(p32, descr=) -+336: i37 = int_lt(i29, 0) ++258: guard_value(i6, 5, descr=) [i6, p1, p0, p2, p3, i4, p5, i7, p10, p12, p14, p16, p18, p20, p22, p24] ++268: guard_class(p18, 38562496, descr=) [p1, p0, p18, p2, p3, i4, p5, p10, p12, p14, p16, p20, p22, p24] ++280: p28 = getfield_gc(p18, descr=) ++284: guard_nonnull(p28, descr=) [p1, p0, p18, p28, p2, p3, i4, p5, p10, p12, p14, p16, p20, p22, p24] ++293: i29 = getfield_gc(p18, descr=) ++297: p30 = getfield_gc(p28, descr=) ++301: guard_class(p30, 38745240, descr=) [p1, p0, p18, i29, p30, p28, p2, p3, i4, p5, p10, p12, p14, p16, p20, p22, p24] ++313: p32 = getfield_gc(p28, descr=) ++317: i33 = getfield_gc_pure(p32, descr=) ++321: i34 = getfield_gc_pure(p32, descr=) ++325: i35 = getfield_gc_pure(p32, descr=) ++329: i37 = int_lt(i29, 0) guard_false(i37, descr=) [p1, p0, p18, i29, i35, i34, i33, p2, p3, i4, p5, p10, p12, p14, p16, p20, p22, p24] -+346: i38 = int_ge(i29, i35) ++339: i38 = int_ge(i29, i35) guard_false(i38, descr=) [p1, p0, p18, i29, i34, i33, p2, p3, i4, p5, p10, p12, p14, p16, p20, p22, p24] -+355: i39 = int_mul(i29, i34) -+362: i40 = int_add(i33, i39) -+368: i42 = int_add(i29, 1) -+372: setfield_gc(p18, i42, descr=) -+376: guard_value(i4, 0, descr=) [i4, p1, p0, p2, p3, p5, p10, p12, p14, p16, p18, p22, p24, i40] ++348: i39 = int_mul(i29, i34) ++355: i40 = int_add(i33, i39) ++361: i42 = int_add(i29, 1) ++365: setfield_gc(p18, i42, descr=) ++369: guard_value(i4, 0, descr=) [i4, p1, p0, p2, p3, p5, p10, p12, p14, p16, p18, p22, p24, i40] debug_merge_point(0, ' #16 STORE_FAST') debug_merge_point(0, ' #19 LOAD_GLOBAL') -+386: guard_value(p3, ConstPtr(ptr44), descr=) [p1, p0, p3, p2, p5, p12, p14, p16, p18, p22, p24, i40] -+405: p45 = getfield_gc(p0, descr=) -+416: guard_value(p45, ConstPtr(ptr46), descr=) [p1, p0, p45, p2, p5, p12, p14, p16, p18, p22, p24, i40] -+435: p47 = getfield_gc(p45, descr=) -+439: guard_value(p47, ConstPtr(ptr48), descr=) [p1, p0, p47, p45, p2, p5, p12, p14, p16, p18, p22, p24, i40] -+458: guard_not_invalidated(, descr=) [p1, p0, p45, p2, p5, p12, p14, p16, p18, p22, p24, i40] -+458: p50 = getfield_gc(ConstPtr(ptr49), descr=) -+466: guard_value(p50, ConstPtr(ptr51), descr=) [p1, p0, p50, p2, p5, p12, p14, p16, p18, p22, p24, i40] ++379: guard_value(p3, ConstPtr(ptr44), descr=) [p1, p0, p3, p2, p5, p12, p14, p16, p18, p22, p24, i40] ++398: p45 = getfield_gc(p0, descr=) ++409: guard_value(p45, ConstPtr(ptr46), descr=) [p1, p0, p45, p2, p5, p12, p14, p16, p18, p22, p24, i40] ++428: p47 = getfield_gc(p45, descr=) ++432: guard_value(p47, ConstPtr(ptr48), descr=) [p1, p0, p47, p45, p2, p5, p12, p14, p16, p18, p22, p24, i40] ++451: guard_not_invalidated(, descr=) [p1, p0, p45, p2, p5, p12, p14, p16, p18, p22, p24, i40] ++451: p50 = getfield_gc(ConstPtr(ptr49), descr=) ++459: guard_value(p50, ConstPtr(ptr51), descr=) [p1, p0, p50, p2, p5, p12, p14, p16, p18, p22, p24, i40] debug_merge_point(0, ' #22 LOAD_FAST') debug_merge_point(0, ' #25 CALL_FUNCTION') -+479: p53 = call(ConstClass(ll_int_str__IntegerR_SignedConst_Signed), i40, descr=) -+526: guard_no_exception(, descr=) [p1, p0, p53, p2, p5, p12, p14, p16, p18, p24, i40] ++472: p53 = call(ConstClass(ll_int_str__IntegerR_SignedConst_Signed), i40, descr=) ++523: guard_no_exception(, descr=) [p1, p0, p53, p2, p5, p12, p14, p16, p18, p24, i40] debug_merge_point(0, ' #28 LIST_APPEND') -+541: p54 = getfield_gc(p16, descr=) -+552: 
guard_class(p54, 38655536, descr=) [p1, p0, p54, p16, p2, p5, p12, p14, p18, p24, p53, i40] -+564: p56 = getfield_gc(p16, descr=) -+568: i57 = getfield_gc(p56, descr=) -+572: i59 = int_add(i57, 1) -+579: p60 = getfield_gc(p56, descr=) -+579: i61 = arraylen_gc(p60, descr=) -+579: call(ConstClass(_ll_list_resize_ge_trampoline__v575___simple_call__function__), p56, i59, descr=) ++538: p54 = getfield_gc(p16, descr=) ++549: guard_class(p54, 38655536, descr=) [p1, p0, p54, p16, p2, p5, p12, p14, p18, p24, p53, i40] ++561: p56 = getfield_gc(p16, descr=) ++565: i57 = getfield_gc(p56, descr=) ++569: i59 = int_add(i57, 1) ++576: p60 = getfield_gc(p56, descr=) ++576: i61 = arraylen_gc(p60, descr=) ++576: call(ConstClass(_ll_list_resize_ge_trampoline__v575___simple_call__function__), p56, i59, descr=) +612: guard_no_exception(, descr=) [p1, p0, i57, p53, p56, p2, p5, p12, p14, p16, p18, p24, None, i40] +627: p64 = getfield_gc(p56, descr=) setarrayitem_gc(p64, i57, p53, descr=) debug_merge_point(0, ' #31 JUMP_ABSOLUTE') -+713: i66 = getfield_raw(44057928, descr=) -+721: i68 = int_lt(i66, 0) ++711: i66 = getfield_raw(44057928, descr=) ++719: i68 = int_lt(i66, 0) guard_false(i68, descr=) [p1, p0, p2, p5, p12, p14, p16, p18, p24, None, i40] debug_merge_point(0, ' #13 FOR_ITER') -+731: p69 = same_as(ConstPtr(ptr48)) -+731: label(p0, p1, p2, p5, i40, p12, p14, p16, p18, p24, i42, i35, i34, i33, p56, descr=TargetToken(139705792105232)) ++729: p69 = same_as(ConstPtr(ptr48)) ++729: label(p0, p1, p2, p5, i40, p12, p14, p16, p18, p24, i42, i35, i34, i33, p56, descr=TargetToken(140616493869840)) debug_merge_point(0, ' #13 FOR_ITER') -+761: i70 = int_ge(i42, i35) ++759: i70 = int_ge(i42, i35) guard_false(i70, descr=) [p1, p0, p18, i42, i34, i33, p2, p5, p12, p14, p16, p24, i40] -+781: i71 = int_mul(i42, i34) -+785: i72 = int_add(i33, i71) -+795: i73 = int_add(i42, 1) ++779: i71 = int_mul(i42, i34) ++783: i72 = int_add(i33, i71) ++793: i73 = int_add(i42, 1) debug_merge_point(0, ' #16 STORE_FAST') debug_merge_point(0, ' #19 LOAD_GLOBAL') -+806: setfield_gc(p18, i73, descr=) -+817: guard_not_invalidated(, descr=) [p1, p0, p2, p5, p12, p14, p16, p18, p24, i72, None] ++804: setfield_gc(p18, i73, descr=) ++815: guard_not_invalidated(, descr=) [p1, p0, p2, p5, p12, p14, p16, p18, p24, i72, None] debug_merge_point(0, ' #22 LOAD_FAST') debug_merge_point(0, ' #25 CALL_FUNCTION') -+817: p74 = call(ConstClass(ll_int_str__IntegerR_SignedConst_Signed), i72, descr=) -+843: guard_no_exception(, descr=) [p1, p0, p74, p2, p5, p12, p14, p16, p18, p24, i72, None] ++815: p74 = call(ConstClass(ll_int_str__IntegerR_SignedConst_Signed), i72, descr=) ++841: guard_no_exception(, descr=) [p1, p0, p74, p2, p5, p12, p14, p16, p18, p24, i72, None] debug_merge_point(0, ' #28 LIST_APPEND') -+858: i75 = getfield_gc(p56, descr=) -+869: i76 = int_add(i75, 1) -+876: p77 = getfield_gc(p56, descr=) -+876: i78 = arraylen_gc(p77, descr=) -+876: call(ConstClass(_ll_list_resize_ge_trampoline__v575___simple_call__function__), p56, i76, descr=) -+905: guard_no_exception(, descr=) [p1, p0, i75, p74, p56, p2, p5, p12, p14, p16, p18, p24, i72, None] -+920: p79 = getfield_gc(p56, descr=) ++856: i75 = getfield_gc(p56, descr=) ++867: i76 = int_add(i75, 1) ++874: p77 = getfield_gc(p56, descr=) ++874: i78 = arraylen_gc(p77, descr=) ++874: call(ConstClass(_ll_list_resize_ge_trampoline__v575___simple_call__function__), p56, i76, descr=) ++900: guard_no_exception(, descr=) [p1, p0, i75, p74, p56, p2, p5, p12, p14, p16, p18, p24, i72, None] ++915: p79 = 
getfield_gc(p56, descr=) setarrayitem_gc(p79, i75, p74, descr=) debug_merge_point(0, ' #31 JUMP_ABSOLUTE') -+1006: i80 = getfield_raw(44057928, descr=) -+1014: i81 = int_lt(i80, 0) ++993: i80 = getfield_raw(44057928, descr=) ++1001: i81 = int_lt(i80, 0) guard_false(i81, descr=) [p1, p0, p2, p5, p12, p14, p16, p18, p24, i72, None] debug_merge_point(0, ' #13 FOR_ITER') -+1024: jump(p0, p1, p2, p5, i72, p12, p14, p16, p18, p24, i73, i35, i34, i33, p56, descr=TargetToken(139705792105232)) -+1039: --end of the loop-- -[101d934aff5b7] jit-log-opt-loop} -[101d93520f017] {jit-backend -[101d935239d0b] {jit-backend-dump ++1011: jump(p0, p1, p2, p5, i72, p12, p14, p16, p18, p24, i73, i35, i34, i33, p56, descr=TargetToken(140616493869840)) ++1026: --end of the loop-- +[88d351c1c82] jit-log-opt-loop} +[88d357e62da] {jit-backend +[88d35804926] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc7448c50 +0 488B04254045A0024829E0483B0425E03C5101760D49BB637344C70F7F000041FFD3554889E5534154415541564157488DA50000000049BBF8E121CA0F7F00004D8B3B4983C70149BBF8E121CA0F7F00004D893B4C8B7E404D0FB67C3F184983FF330F85000000004989FF4883C70148897E1848C74620000000004C897E28B80100000048890425D0D1550141BBD01BF30041FFD3B802000000488D65D8415F415E415D415C5B5DC349BB007044C70F7F000041FFD31D180355000000 -[101d93523fa7b] jit-backend-dump} -[101d9352401a3] {jit-backend-addr -Loop 3 (re StrLiteralSearch at 11/51 [17, 8, 3, 1, 1, 1, 1, 51, 0, 19, 51, 1]) has address 7f0fc7448c86 to 7f0fc7448cf9 (bootstrap 7f0fc7448c50) -[101d93524156f] jit-backend-addr} -[101d93524215b] {jit-backend-dump +CODE_DUMP @7fe3d152bc4b +0 488B04254045A0024829E0483B0425E03C5101760D49BB63A352D1E37F000041FFD3554889E5534154415541564157488DA50000000049BBF81130D4E37F00004D8B3B4983C70149BBF81130D4E37F00004D893B4C8B7E404D0FB67C3F184983FF330F85000000004989FF4883C70148897E1848C74620000000004C897E28B80100000048890425D0D1550141BBD01BF30041FFD3B802000000488D65D8415F415E415D415C5B5DC349BB00A052D1E37F000041FFD31D180355000000 +[88d35809aea] jit-backend-dump} +[88d3580a26e] {jit-backend-addr +Loop 3 (re StrLiteralSearch at 11/51 [17, 8, 3, 1, 1, 1, 1, 51, 0, 19, 51, 1]) has address 7fe3d152bc81 to 7fe3d152bcf4 (bootstrap 7fe3d152bc4b) +[88d3580b41e] jit-backend-addr} +[88d3580bbca] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc7448c82 +0 70FFFFFF -[101d93524311f] jit-backend-dump} -[101d935243aef] {jit-backend-dump +CODE_DUMP @7fe3d152bc7d +0 70FFFFFF +[88d3580cac6] jit-backend-dump} +[88d3580d1fe] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc7448cb4 +0 41000000 -[101d935244943] jit-backend-dump} -[101d935245397] jit-backend} -[101d935247d5f] {jit-log-opt-loop +CODE_DUMP @7fe3d152bcaf +0 41000000 +[88d3580defa] jit-backend-dump} +[88d3580e8da] jit-backend} +[88d358110ce] {jit-log-opt-loop # Loop 3 (re StrLiteralSearch at 11/51 [17, 8, 3, 1, 1, 1, 1, 51, 0, 19, 51, 1]) : entry bridge with 10 ops [i0, p1] debug_merge_point(0, 're StrLiteralSearch at 11/51 [17. 8. 3. 1. 1. 1. 1. 51. 0. 19. 51. 
1]') @@ -1000,43 +1000,43 @@ +123: setfield_gc(p1, i0, descr=) +127: finish(1, descr=) +169: --end of the loop-- -[101d935264e27] jit-log-opt-loop} -[101d93579cc0f] {jit-backend -[101d9357b906f] {jit-backend-dump +[88d35827e02] jit-log-opt-loop} +[88d35d759b6] {jit-backend +[88d35d96556] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc7448d0d +0 488DA50000000049BB10E221CA0F7F00004D8B3B4983C70149BB10E221CA0F7F00004D893B4883C7014C8B7E084C39FF0F8D000000004C8B76404D0FB6743E184983FE330F84000000004883C7014C39FF0F8C00000000B80000000048890425D0D1550141BBD01BF30041FFD3B802000000488D65D8415F415E415D415C5B5DC349BB007044C70F7F000041FFD31D18035600000049BB007044C70F7F000041FFD31D18035700000049BB007044C70F7F000041FFD31D180358000000 -[101d9357bda0b] jit-backend-dump} -[101d9357be137] {jit-backend-addr -bridge out of Guard 85 has address 7f0fc7448d0d to 7f0fc7448d8e -[101d9357bf077] jit-backend-addr} -[101d9357bf8af] {jit-backend-dump +CODE_DUMP @7fe3d152bd08 +0 488DA50000000049BB101230D4E37F00004D8B3B4983C70149BB101230D4E37F00004D893B4883C7014C8B7E084C39FF0F8D000000004C8B76404D0FB6743E184983FE330F84000000004883C7014C39FF0F8C00000000B80000000048890425D0D1550141BBD01BF30041FFD3B802000000488D65D8415F415E415D415C5B5DC349BB00A052D1E37F000041FFD31D18035600000049BB00A052D1E37F000041FFD31D18035700000049BB00A052D1E37F000041FFD31D180358000000 +[88d35d9b142] jit-backend-dump} +[88d35d9b8b2] {jit-backend-addr +bridge out of Guard 85 has address 7fe3d152bd08 to 7fe3d152bd89 +[88d35d9c782] jit-backend-addr} +[88d35d9d186] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc7448d10 +0 70FFFFFF -[101d9357c0a7b] jit-backend-dump} -[101d9357c12cf] {jit-backend-dump +CODE_DUMP @7fe3d152bd0b +0 70FFFFFF +[88d35d9e3be] jit-backend-dump} +[88d35d9ecea] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc7448d3f +0 4B000000 -[101d9357c2137] jit-backend-dump} -[101d9357c282b] {jit-backend-dump +CODE_DUMP @7fe3d152bd3a +0 4B000000 +[88d35d9f9de] jit-backend-dump} +[88d35d9ff6e] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc7448d53 +0 4B000000 -[101d9357c3627] jit-backend-dump} -[101d9357c3c8f] {jit-backend-dump +CODE_DUMP @7fe3d152bd4e +0 4B000000 +[88d35da0bce] jit-backend-dump} +[88d35da110e] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc7448d60 +0 52000000 -[101d9357c491b] jit-backend-dump} -[101d9357c5153] {jit-backend-dump +CODE_DUMP @7fe3d152bd5b +0 52000000 +[88d35da1da2] jit-backend-dump} +[88d35da271e] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc7448cb4 +0 55000000 -[101d9357c5e43] jit-backend-dump} -[101d9357c6727] jit-backend} -[101d9357c7423] {jit-log-opt-bridge +CODE_DUMP @7fe3d152bcaf +0 55000000 +[88d35da34da] jit-backend-dump} +[88d35da3ffe] jit-backend} +[88d35da4e16] {jit-log-opt-bridge # bridge out of Guard 85 with 13 ops [i0, p1] +37: i3 = int_add(i0, 1) @@ -1053,38 +1053,38 @@ guard_false(i12, descr=) [i11, p1] +87: finish(0, descr=) +129: --end of the loop-- -[101d9357d6543] jit-log-opt-bridge} -[101d935beecd3] {jit-backend -[101d935c047bf] {jit-backend-dump +[88d35db4966] jit-log-opt-bridge} +[88d36200a5e] {jit-backend +[88d3621742a] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE 
/home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc7448dca +0 488DA50000000049BB28E221CA0F7F00004D8B3B4983C70149BB28E221CA0F7F00004D893B4C8B7E404D0FB67C3F184983FF330F84000000004883C7014C8B7E084C39FF0F8C00000000B80000000048890425D0D1550141BBD01BF30041FFD3B802000000488D65D8415F415E415D415C5B5DC349BB007044C70F7F000041FFD31D18035900000049BB007044C70F7F000041FFD31D18035A000000 -[101d935c088ab] jit-backend-dump} -[101d935c08f03] {jit-backend-addr -bridge out of Guard 88 has address 7f0fc7448dca to 7f0fc7448e3e -[101d935c09c47] jit-backend-addr} -[101d935c0a3ab] {jit-backend-dump +CODE_DUMP @7fe3d152bdc5 +0 488DA50000000049BB281230D4E37F00004D8B3B4983C70149BB281230D4E37F00004D893B4C8B7E404D0FB67C3F184983FF330F84000000004883C7014C8B7E084C39FF0F8C00000000B80000000048890425D0D1550141BBD01BF30041FFD3B802000000488D65D8415F415E415D415C5B5DC349BB00A052D1E37F000041FFD31D18035900000049BB00A052D1E37F000041FFD31D18035A000000 +[88d3621b586] jit-backend-dump} +[88d3621bbe6] {jit-backend-addr +bridge out of Guard 88 has address 7fe3d152bdc5 to 7fe3d152be39 +[88d3621c8ca] jit-backend-addr} +[88d3621d05a] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc7448dcd +0 70FFFFFF -[101d935c0b3ef] jit-backend-dump} -[101d935c0bbf7] {jit-backend-dump +CODE_DUMP @7fe3d152bdc8 +0 70FFFFFF +[88d3621dfb2] jit-backend-dump} +[88d3621e806] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc7448dff +0 3B000000 -[101d935c0c9e3] jit-backend-dump} -[101d935c0cf6b] {jit-backend-dump +CODE_DUMP @7fe3d152bdfa +0 3B000000 +[88d3621f4be] jit-backend-dump} +[88d3621fa9a] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc7448e10 +0 3E000000 -[101d935c15c27] jit-backend-dump} -[101d935c1657b] {jit-backend-dump +CODE_DUMP @7fe3d152be0b +0 3E000000 +[88d36220896] jit-backend-dump} +[88d36220f7e] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc7448d60 +0 66000000 -[101d935c17497] jit-backend-dump} -[101d935c17dff] jit-backend} -[101d935c18b1f] {jit-log-opt-bridge +CODE_DUMP @7fe3d152bd5b +0 66000000 +[88d36221c7a] jit-backend-dump} +[88d36222536] jit-backend} +[88d3622316a] {jit-log-opt-bridge # bridge out of Guard 88 with 10 ops [i0, p1] debug_merge_point(0, 're StrLiteralSearch at 11/51 [17. 8. 3. 1. 1. 1. 1. 51. 0. 19. 51. 
1]') @@ -1098,424 +1098,424 @@ guard_false(i9, descr=) [i7, p1] +74: finish(0, descr=) +116: --end of the loop-- -[101d935c25193] jit-log-opt-bridge} -[101d93608999f] {jit-backend -[101d936096717] {jit-backend-dump +[88d3623b6ee] jit-log-opt-bridge} +[88d366a7f56] {jit-backend +[88d366b4d3a] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc7448e66 +0 488DA50000000049BB40E221CA0F7F0000498B334883C60149BB40E221CA0F7F0000498933B80000000048890425D0D1550141BBD01BF30041FFD3B802000000488D65D8415F415E415D415C5B5DC3 -[101d936099bf7] jit-backend-dump} -[101d93609a247] {jit-backend-addr -bridge out of Guard 86 has address 7f0fc7448e66 to 7f0fc7448eb5 -[101d93609ae3f] jit-backend-addr} -[101d93609b56f] {jit-backend-dump +CODE_DUMP @7fe3d152be61 +0 488DA50000000049BB401230D4E37F0000498B334883C60149BB401230D4E37F0000498933B80000000048890425D0D1550141BBD01BF30041FFD3B802000000488D65D8415F415E415D415C5B5DC3 +[88d366b8426] jit-backend-dump} +[88d366b8a2e] {jit-backend-addr +bridge out of Guard 86 has address 7fe3d152be61 to 7fe3d152beb0 +[88d366b9692] jit-backend-addr} +[88d366b9de6] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc7448e69 +0 70FFFFFF -[101d93609c48f] jit-backend-dump} -[101d93609cccb] {jit-backend-dump +CODE_DUMP @7fe3d152be64 +0 70FFFFFF +[88d366bac7a] jit-backend-dump} +[88d366bb53a] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc7448d3f +0 23010000 -[101d93609da0b] jit-backend-dump} -[101d93609e263] jit-backend} -[101d93609edcb] {jit-log-opt-bridge +CODE_DUMP @7fe3d152bd3a +0 23010000 +[88d366bc196] jit-backend-dump} +[88d366bc9c2] jit-backend} +[88d366bd42e] {jit-log-opt-bridge # bridge out of Guard 86 with 1 ops [i0, p1] +37: finish(0, descr=) +79: --end of the loop-- -[101d9360a2113] jit-log-opt-bridge} -[101d9374ca7d7] {jit-backend -[101d9377009b7] {jit-backend-dump +[88d366c09d2] jit-log-opt-bridge} +[88d37b0c4be] {jit-backend +[88d37d43ed2] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc744909b +0 
488B04254045A0024829E0483B0425E03C5101760D49BB637344C70F7F000041FFD3554889E5534154415541564157488DA50000000049BB58E221CA0F7F00004D8B3B4983C70149BB58E221CA0F7F00004D893B4C8B7F504C8B77784C0FB6AF960000004C8B67604C8B97800000004C8B4F584C8B4768498B5810498B5018498B4020498B48284C89BD70FFFFFF4D8B783048899D68FFFFFF498B58384889BD60FFFFFF498B78404D8B40484889B558FFFFFF4C89A550FFFFFF4C898D48FFFFFF48899540FFFFFF48898538FFFFFF4C89BD30FFFFFF48899D28FFFFFF4889BD20FFFFFF4C898518FFFFFF49BB70E221CA0F7F00004D8B034983C00149BB70E221CA0F7F00004D89034983FA040F85000000008139806300000F85000000004C8B51104D85D20F84000000004C8B4108498B7A10813FF0CE01000F85000000004D8B5208498B7A084939F80F83000000004D8B52104F8B54C2104D85D20F84000000004983C0014C8941084983FD000F850000000049BB28DC58C70F7F00004D39DE0F85000000004C8BB560FFFFFF4D8B6E0849BB302855C70F7F00004D39DD0F85000000004D8B451049BBF02855C70F7F00004D39D80F850000000049BB782955C70F7F00004D8B2B49BB802955C70F7F00004D39DD0F850000000048898D10FFFFFF4C899508FFFFFF41BB201B8D0041FFD34C8B5040488B48504885C90F8500000000488B48284883F9000F850000000049BBD03B55C70F7F0000498B0B4883F9000F8F00000000488B0C2500D785014881F9201288010F850000000049BBA82955C70F7F0000498B0B813910E001000F850000000049BBA02955C70F7F0000498B0B48898500FFFFFF488B042530255601488D5040483B142548255601761A49BB2D7244C70F7F000041FFD349BBC27244C70F7F000041FFD3488914253025560148C7008800000048C74008030000004889C24883C02848C700508A0100488968084C8BAD00FFFFFF41F6450401741952415251504C89EF4889C641BBF0C4C50041FFD35859415A5A4989454049896E1848C7421060CE830149BBB03C58C70F7F00004C895A1849BB10EC54C70F7F00004C895A20488985F8FEFFFF48898DF0FEFFFF4C8995E8FEFFFF488995E0FEFFFF48C78578FFFFFF5B0000004889D741BB3036920041FFD34883BD78FFFFFF000F8C0000000048833C25A046A002000F8500000000488985D8FEFFFF488B042530255601488D5010483B142548255601761A49BB2D7244C70F7F000041FFD349BBC27244C70F7F000041FFD3488914253025560148C700E0300000488B9560FFFFFF48896A184C8BB5E0FEFFFF4C897008488985D0FEFFFF48C78578FFFFFF5C000000488BBDF0FEFFFF4889C6488B95D8FEFFFF41BBA02E790041FFD34883BD78FFFFFF000F8C0000000048833C25A046A002000F85000000004889C249BB00000000000000804C21D84883F8000F8500000000488B85F0FEFFFF488B4018486BD218488B5410184883FA017206813AB0EB03000F85000000004881FAC02C72010F8400000000488B8500FFFFFF4C8B70504D85F60F85000000004C8B70284983FE000F85000000004C8BB5F8FEFFFF49C74608FDFFFFFF4C8BB508FFFFFF4D8B6E1049BBFFFFFFFFFFFFFF7F4D39DD0F8D000000004C8B5210488B4A184D8B42104983F8110F85000000004D8B42204C89C74983E0014983F8000F8400000000498B7A384883FF010F8F00000000498B7A184883C7014D8B44FA104983F8130F85000000004989F84883C701498B7CFA104983C0024983FD000F8E000000004983F80B0F85000000004883FF330F850000000049BBC05E56C70F7F00004D39DA0F8500000000488995C8FEFFFF488B042530255601488D5060483B142548255601761A49BB2D7244C70F7F000041FFD349BBC27244C70F7F000041FFD3488914253025560148C700D00001004889C24883C04848C700508A0100488968084C8B9500FFFFFF41F6420401741951504152524C89D74889C641BBF0C4C50041FFD35A415A585949894240488BBD60FFFFFF48896F1849BBC05E56C70F7F00004C895A3848894A104C896A084C897240488995C0FEFFFF488985B8FEFFFF48C78578FFFFFF5D000000BF000000004889D649BB508C44C70F7F000041FFD34883F80274134889C7BE0000000041BB7053950041FFD3EB08488B0425D0D155014883BD78FFFFFF000F8C0000000048833C25A046A002000F85000000004885C00F8500000000488B8500FFFFFF488B78504885FF0F8500000000488B78284883FF000F85000000004C8B95E8FEFFFFF64004017417504152574889C74C89D641BBF0C4C50041FFD35F415A584C895040488B95B8FEFFFF48C74208FDFFFFFF488B14254845A0024883FA000F8C0000000049BB88E221CA0F7F0000498B134883C20149BB88E221CA0F7F0000498913488B9510FFFFFF4C8B72104D85F60F84000
000004C8B6A08498B4E108139F0CE01000F85000000004D8B7608498B4E084939CD0F83000000004D8B76104F8B74EE104D85F60F84000000004983C501488B8D60FFFFFF4C8B41084C896A0849BB302855C70F7F00004D39D80F85000000004D8B681049BBF02855C70F7F00004D39DD0F850000000049BB782955C70F7F00004D8B0349BB802955C70F7F00004D39D80F85000000004883FF000F850000000049BBD03B55C70F7F0000498B3B4883FF000F8F00000000488B3C2500D785014881FF201288010F850000000049BBA82955C70F7F0000498B3B813F10E001000F850000000049BBA02955C70F7F0000498B3B488985B0FEFFFF488B042530255601488D5040483B142548255601761A49BB2D7244C70F7F000041FFD349BBC27244C70F7F000041FFD3488914253025560148C7008800000048C74008030000004889C24883C02848C700508A0100488968084C8B85B0FEFFFF41F6400401741F50524152415051574C89C74889C641BBF0C4C50041FFD35F594158415A5A58498940404889691848C7421060CE830149BBB03C58C70F7F00004C895A1849BB10EC54C70F7F00004C895A204889BDA8FEFFFF4C8995A0FEFFFF48899598FEFFFF48898590FEFFFF4C89B508FFFFFF48C78578FFFFFF5E0000004889D741BB3036920041FFD34883BD78FFFFFF000F8C0000000048833C25A046A002000F850000000048898588FEFFFF488B042530255601488D5010483B142548255601761A49BB2D7244C70F7F000041FFD349BBC27244C70F7F000041FFD3488914253025560148C700E0300000488B9560FFFFFF48896A184C8BB598FEFFFF4C89700848898580FEFFFF48C78578FFFFFF5F000000488BBDA8FEFFFF4889C6488B9588FEFFFF41BBA02E790041FFD34883BD78FFFFFF000F8C0000000048833C25A046A002000F85000000004889C249BB00000000000000804C21D84883F8000F8500000000488B85A8FEFFFF488B4018486BD218488B5410184883FA017206813AB0EB03000F85000000004881FAC02C72010F8400000000488B85B0FEFFFF4C8B70504D85F60F85000000004C8B70284983FE000F85000000004C8BB590FEFFFF49C74608FDFFFFFF4C8BB508FFFFFF4D8B561049BBFFFFFFFFFFFFFF7F4D39DA0F8D000000004C8B4210488B4A18498B78104883FF110F8500000000498B78204989FD4883E7014883FF000F84000000004D8B68384983FD010F8F000000004D8B68184983C5014B8B7CE8104883FF130F85000000004C89EF4983C5014F8B6CE8104883C7024983FA000F8E000000004883FF0B0F85000000004983FD330F850000000049BBC05E56C70F7F00004D39D80F850000000048899578FEFFFF488B042530255601488D5060483B142548255601761A49BB2D7244C70F7F000041FFD349BBC27244C70F7F000041FFD3488914253025560148C700D00001004889C24883C04848C700508A0100488968084C8B85B0FEFFFF41F6400401741D525141504152504C89C74889C641BBF0C4C50041FFD358415A4158595A498940404C8BAD60FFFFFF49896D1849BBC05E56C70F7F00004C895A3848894A104C8952084C89724048898570FEFFFF48899568FEFFFF48C78578FFFFFF60000000BF000000004889D649BB508C44C70F7F000041FFD34883F80274134889C7BE0000000041BB7053950041FFD3EB08488B0425D0D155014883BD78FFFFFF000F8C0000000048833C25A046A002000F85000000004885C00F8500000000488B85B0FEFFFF4C8B68504D85ED0F85000000004C8B68284983FD000F85000000004C8BB5A0FEFFFFF64004017411504889C74C89F641BBF0C4C50041FFD3584C897040488B9570FEFFFF48C74208FDFFFFFF488B14254845A0024883FA000F8C000000004C89EF4D89F2E96BFAFFFF49BB007044C70F7F000041FFD3294C48403835505544585C046064686C036100000049BB007044C70F7F000041FFD34C48044038355044585C6064686C036200000049BB007044C70F7F000041FFD34C4804284038355044585C6064686C036300000049BB007044C70F7F000041FFD34C4804211C284038355044585C6064686C036400000049BB007044C70F7F000041FFD34C4804211D284038355044585C6064686C036500000049BB007044C70F7F000041FFD34C480421284038355044585C6064686C036600000049BB007044C70F7F000041FFD3354C4840385044585C0464686C28036700000049BB007044C70F7F000041FFD34C4838405044580464686C28036800000049BB007044C70F7F000041FFD34C3834405044580464686C28036900000049BB007044C70F7F000041FFD34C382034405044580464686C28036A00000049BB007044C70F7F000041FFD34C3834405044580464686C28036B00000049BB007044C70F7F000041FFD34C3834405044580464686C28036C00000049BB007044C70F7F0000
41FFD34C3800044050445870152874036D00000049BB007044C70F7F000041FFD34C38004050445870152874036E00000049BB007044C70F7F000041FFD34C38004050445870152874036F00000049BB007044C70F7F000041FFD34C3800054050445870152874037000000049BB007044C70F7F000041FFD34C380004405044587015152874037100000049BB007044C70F7F000041FFD34C380004405044587015152874037200000049BB437044C70F7F000041FFD34C48788001017C4050445870880184011574035B00000049BB437044C70F7F000041FFD34C48788001017C4050445870880184011574037300000049BB437044C70F7F000041FFD34C487890010180017C405044587084011574035C00000049BB437044C70F7F000041FFD34C487890010180017C405044587084011574037400000049BB007044C70F7F000041FFD34C487890010980017C405044587084011574037500000049BB007044C70F7F000041FFD34C48789001087C405044587084011574037600000049BB007044C70F7F000041FFD34C48787C405044587090010884011574037700000049BB007044C70F7F000041FFD34C480008387C405044587090010784011574037800000049BB007044C70F7F000041FFD34C4800087C405044587090010784011574037900000049BB007044C70F7F000041FFD34C48004050445870070884011574037A00000049BB007044C70F7F000041FFD34C480008384050445870070784011507037B00000049BB007044C70F7F000041FFD34C4800084050445870350528070784011538037C00000049BB007044C70F7F000041FFD34C4800081D4050445870350528070784011538037D00000049BB007044C70F7F000041FFD34C4800084050445870350528070784011538037E00000049BB007044C70F7F000041FFD34C4800081D4050445870350528070784011538037F00000049BB007044C70F7F000041FFD34C4800081D214050445870350528070784011538038000000049BB007044C70F7F000041FFD34C4800081D21284050445870350507070784011538038100000049BB007044C70F7F000041FFD34C4800081D284050445870350507070784011538038200000049BB007044C70F7F000041FFD34C480008284050445870350507070784011538038300000049BB437044C70F7F000041FFD34C487898019401019C014050445870840174035D00000049BB437044C70F7F000041FFD34C487898019401019C014050445870840174038400000049BB007044C70F7F000041FFD34C4878980194019C014050445870840174038500000049BB007044C70F7F000041FFD34C48001C9C014050445870840174038600000049BB007044C70F7F000041FFD34C48009C014050445870840174038700000049BB007044C70F7F000041FFD34C4840504458700774038800000049BB007044C70F7F000041FFD34C4840504458700774038900000049BB007044C70F7F000041FFD34C4808384050445874038A00000049BB007044C70F7F000041FFD34C48083504384050445874038B00000049BB007044C70F7F000041FFD34C48083505384050445874038C00000049BB007044C70F7F000041FFD34C480835384050445874038D00000049BB007044C70F7F000041FFD34C042040504458083807038E00000049BB007044C70F7F000041FFD34C04342040504458083807038F00000049BB007044C70F7F000041FFD34C042040504458083807039000000049BB007044C70F7F000041FFD34C042040504458083807039100000049BB007044C70F7F000041FFD34C0400405044580828153807039200000049BB007044C70F7F000041FFD34C04001D405044580828153807039300000049BB007044C70F7F000041FFD34C04001C40504458081528153807039400000049BB007044C70F7F000041FFD34C04001C40504458081528153807039500000049BB437044C70F7F000041FFD34C48A001A40101B0014050445870AC01A8011574035E00000049BB437044C70F7F000041FFD34C48A001A40101B0014050445870AC01A8011574039600000049BB437044C70F7F000041FFD34C48A001B80101A401B0014050445870A8011574035F00000049BB437044C70F7F000041FFD34C48A001B80101A401B0014050445870A8011574039700000049BB007044C70F7F000041FFD34C48A001B80109A401B0014050445870A8011574039800000049BB007044C70F7F000041FFD34C48A001B80108B0014050445870A8011574039900000049BB007044C70F7F000041FFD34C48A001B001405044587008B801A8011574039A00000049BB007044C70F7F000041FFD34C48000838B001405044587007B801A8011574039B00000049BB007044C70F7F000041FFD34C480008B001405044587007B801A8011574039C00000049BB007044C70F7F000041FFD34C4
80040504458700807A8011574039D00000049BB007044C70F7F000041FFD34C4800083840504458700707A8011507039E00000049BB007044C70F7F000041FFD34C48000840504458700520290707A8011538039F00000049BB007044C70F7F000041FFD34C4800083540504458700520290707A801153803A000000049BB007044C70F7F000041FFD34C48000840504458700520290707A801153803A100000049BB007044C70F7F000041FFD34C4800083540504458700520290707A801153803A200000049BB007044C70F7F000041FFD34C480008351D40504458700520290707A801153803A300000049BB007044C70F7F000041FFD34C480008351D2040504458700507290707A801153803A400000049BB007044C70F7F000041FFD34C480008352040504458700507290707A801153803A500000049BB007044C70F7F000041FFD34C4800082040504458700507290707A801153803A600000049BB437044C70F7F000041FFD34C48A001C401BC0101C001405044587074A801036000000049BB437044C70F7F000041FFD34C48A001C401BC0101C001405044587074A80103A700000049BB007044C70F7F000041FFD34C48A001C401BC01C001405044587074A80103A800000049BB007044C70F7F000041FFD34C480034C001405044587074A80103A900000049BB007044C70F7F000041FFD34C4800C001405044587074A80103AA00000049BB007044C70F7F000041FFD34C484050445870740703AB00000049BB007044C70F7F000041FFD34C484050445870740703AC000000 -[101d937749383] jit-backend-dump} -[101d93774a3eb] {jit-backend-addr -Loop 4 ( #44 FOR_ITER) has address 7f0fc74490d1 to 7f0fc7449cf2 (bootstrap 7f0fc744909b) -[101d93774be7f] jit-backend-addr} -[101d93774cd67] {jit-backend-dump +CODE_DUMP @7fe3d152c098 +0 488B04254045A0024829E0483B0425E03C5101760D49BB63A352D1E37F000041FFD3554889E5534154415541564157488DA50000000049BB581230D4E37F00004D8B3B4983C70149BB581230D4E37F00004D893B4C8B7F504C8B77784C0FB6AF960000004C8B67604C8B97800000004C8B4F584C8B4768498B5810498B5018498B4020498B48284C89BD70FFFFFF4D8B783048899D68FFFFFF498B58384889BD60FFFFFF498B78404D8B40484889B558FFFFFF4C89A550FFFFFF4C898D48FFFFFF48899540FFFFFF48898538FFFFFF4C89BD30FFFFFF48899D28FFFFFF4889BD20FFFFFF4C898518FFFFFF49BB701230D4E37F00004D8B034983C00149BB701230D4E37F00004D89034983FA040F85000000008139806300000F85000000004C8B51104D85D20F84000000004C8B4108498B7A10813FF0CE01000F85000000004D8B5208498B7A084939F80F83000000004D8B52104F8B54C2104D85D20F84000000004983C0014C8941084983FD000F850000000049BB281C67D1E37F00004D39DE0F85000000004C8BB560FFFFFF4D8B6E0849BB306863D1E37F00004D39DD0F85000000004D8B451049BBF06863D1E37F00004D39D80F850000000049BB786963D1E37F00004D8B2B49BB806963D1E37F00004D39DD0F85000000004C899510FFFFFF48898D08FFFFFF41BB201B8D0041FFD3488B48404C8B50504D85D20F85000000004C8B50284983FA000F850000000049BBD07B63D1E37F00004D8B134983FA000F8F000000004C8B142500D785014981FA201288010F850000000049BBA86963D1E37F00004D8B1341813A10E001000F850000000049BBA06963D1E37F00004D8B1348898500FFFFFF488B042530255601488D5040483B142548255601761A49BB2DA252D1E37F000041FFD349BBC2A252D1E37F000041FFD3488914253025560148C7008800000048C74008030000004889C24883C02848C700508A0100488968084C8BAD00FFFFFF41F6450401741950525141524C89EF4889C641BBF0C4C50041FFD3415A595A584989454049896E1848C7421060CE830149BBB07C66D1E37F00004C895A1849BB102C63D1E37F00004C895A204C8995F8FEFFFF48898DF0FEFFFF488995E8FEFFFF488985E0FEFFFF48C78578FFFFFF5B0000004889D741BB3036920041FFD34883BD78FFFFFF000F8C0000000048833C25A046A002000F8500000000488985D8FEFFFF488B042530255601488D5010483B142548255601761A49BB2DA252D1E37F000041FFD349BBC2A252D1E37F000041FFD3488914253025560148C700E0300000488B9560FFFFFF48896A184C8BADE8FEFFFF4C896808488985D0FEFFFF48C78578FFFFFF5C000000488BBDF8FEFFFF4889C6488B95D8FEFFFF41BBA02E790041FFD34883BD78FFFFFF000F8C0000000048833C25A046A002000F85000000004889C249BB00000000000000804C21D84883F8000F8500000000488B85F8FEFFFF4
88B4018486BD218488B5410184883FA017206813AB0EB03000F85000000004881FAC02C72010F8400000000488B8500FFFFFF4C8B68504D85ED0F85000000004C8B68284983FD000F85000000004C8BADE0FEFFFF49C74508FDFFFFFF4C8BAD10FFFFFF4D8B751049BBFFFFFFFFFFFFFF7F4D39DE0F8D00000000488B4A104C8B52184C8B41104983F8110F85000000004C8B41204C89C74983E0014983F8000F8400000000488B79384883FF010F8F00000000488B79184883C7014C8B44F9104983F8130F85000000004989F84883C701488B7CF9104983C0024983FE000F8E000000004983F80B0F85000000004883FF330F850000000049BBC09E64D1E37F00004C39D90F8500000000488995C8FEFFFF488B042530255601488D5060483B142548255601761A49BB2DA252D1E37F000041FFD349BBC2A252D1E37F000041FFD3488914253025560148C700D00001004889C24883C04848C700508A010048896808488B8D00FFFFFFF6410401741941525250514889CF4889C641BBF0C4C50041FFD359585A415A48894140488BBD60FFFFFF48896F1849BBC09E64D1E37F00004C895A384C8952104C8972084C896A40488985C0FEFFFF488995B8FEFFFF48C78578FFFFFF5D000000BF000000004889D649BB4BBC52D1E37F000041FFD34883F80274134889C7BE0000000041BB7053950041FFD3EB08488B0425D0D155014883BD78FFFFFF000F8C0000000048833C25A046A002000F85000000004885C00F8500000000488B8500FFFFFF488B78504885FF0F8500000000488B78284883FF000F8500000000488B95F0FEFFFFF640040174155750524889C74889D641BBF0C4C50041FFD35A585F48895040488B8DC0FEFFFF48C74108FDFFFFFF488B0C254845A0024883F9000F8C0000000049BB881230D4E37F0000498B0B4883C10149BB881230D4E37F000049890B488B8D08FFFFFF4C8B69104D85ED0F84000000004C8B71084D8B551041813AF0CE01000F85000000004D8B6D084D8B55084D39D60F83000000004D8B6D104F8B6CF5104D85ED0F84000000004983C6014C8B9560FFFFFF4D8B42084C89710849BB306863D1E37F00004D39D80F85000000004D8B701049BBF06863D1E37F00004D39DE0F850000000049BB786963D1E37F00004D8B0349BB806963D1E37F00004D39D80F85000000004883FF000F850000000049BBD07B63D1E37F0000498B3B4883FF000F8F00000000488B3C2500D785014881FF201288010F850000000049BBA86963D1E37F0000498B3B813F10E001000F850000000049BBA06963D1E37F0000498B3B488985B0FEFFFF488995A8FEFFFF488B042530255601488D5040483B142548255601761A49BB2DA252D1E37F000041FFD349BBC2A252D1E37F000041FFD3488914253025560148C7008800000048C74008030000004889C24883C02848C700508A0100488968084C8B85B0FEFFFF41F6400401741F41504152515052574C89C74889C641BBF0C4C50041FFD35F5A5859415A41584989404049896A1848C7421060CE830149BBB07C66D1E37F00004C895A1849BB102C63D1E37F00004C895A204889BDA0FEFFFF48899598FEFFFF48898590FEFFFF4C89AD10FFFFFF48C78578FFFFFF5E0000004889D741BB3036920041FFD34883BD78FFFFFF000F8C0000000048833C25A046A002000F850000000048898588FEFFFF488B042530255601488D5010483B142548255601761A49BB2DA252D1E37F000041FFD349BBC2A252D1E37F000041FFD3488914253025560148C700E0300000488B9560FFFFFF48896A184C8B8598FEFFFF4C89400848898580FEFFFF48C78578FFFFFF5F000000488BBDA0FEFFFF4889C6488B9588FEFFFF41BBA02E790041FFD34883BD78FFFFFF000F8C0000000048833C25A046A002000F85000000004889C249BB00000000000000804C21D84883F8000F8500000000488B85A0FEFFFF488B4018486BD218488B5410184883FA017206813AB0EB03000F85000000004881FAC02C72010F8400000000488B85B0FEFFFF4C8B40504D85C00F85000000004C8B40284983F8000F85000000004C8B8590FEFFFF49C74008FDFFFFFF4C8B8510FFFFFF4D8B501049BBFFFFFFFFFFFFFF7F4D39DA0F8D000000004C8B6A10488B4A18498B7D104883FF110F8500000000498B7D204989FE4883E7014883FF000F84000000004D8B75384983FE010F8F000000004D8B75184983C6014B8B7CF5104883FF130F85000000004C89F74983C6014F8B74F5104883C7024983FA000F8E000000004883FF0B0F85000000004983FE330F850000000049BBC09E64D1E37F00004D39DD0F850000000048899578FEFFFF488B042530255601488D5060483B142548255601761A49BB2DA252D1E37F000041FFD349BBC2A252D1E37F000041FFD3488914253025560148C700D00001004889C24883C04848C700508A0100488968084C8BADB0
FEFFFF41F6450401741D504150525141524C89EF4889C641BBF0C4C50041FFD3415A595A415858498945404C8BB560FFFFFF49896E1849BBC09E64D1E37F00004C895A3848894A104C8952084C89424048899570FEFFFF48898568FEFFFF48C78578FFFFFF60000000BF000000004889D649BB4BBC52D1E37F000041FFD34883F80274134889C7BE0000000041BB7053950041FFD3EB08488B0425D0D155014883BD78FFFFFF000F8C0000000048833C25A046A002000F85000000004885C00F8500000000488B85B0FEFFFF4C8B70504D85F60F85000000004C8B70284983FE000F85000000004C8B85A8FEFFFFF640040174154150504889C74C89C641BBF0C4C50041FFD35841584C894040488B9568FEFFFF48C74208FDFFFFFF488B14254845A0024883FA000F8C000000004C89F74C89C2E966FAFFFF49BB00A052D1E37F000041FFD3294C48403835505544585C046064686C036100000049BB00A052D1E37F000041FFD34C48044038355044585C6064686C036200000049BB00A052D1E37F000041FFD34C4804284038355044585C6064686C036300000049BB00A052D1E37F000041FFD34C4804211C284038355044585C6064686C036400000049BB00A052D1E37F000041FFD34C4804211D284038355044585C6064686C036500000049BB00A052D1E37F000041FFD34C480421284038355044585C6064686C036600000049BB00A052D1E37F000041FFD3354C4840385044585C0464686C28036700000049BB00A052D1E37F000041FFD34C4838405044580464686C28036800000049BB00A052D1E37F000041FFD34C3834405044580464686C28036900000049BB00A052D1E37F000041FFD34C382034405044580464686C28036A00000049BB00A052D1E37F000041FFD34C3834405044580464686C28036B00000049BB00A052D1E37F000041FFD34C3834405044580464686C28036C00000049BB00A052D1E37F000041FFD34C3800284050445874150470036D00000049BB00A052D1E37F000041FFD34C38004050445874150470036E00000049BB00A052D1E37F000041FFD34C38004050445874150470036F00000049BB00A052D1E37F000041FFD34C3800294050445874150470037000000049BB00A052D1E37F000041FFD34C380028405044587415150470037100000049BB00A052D1E37F000041FFD34C380028405044587415150470037200000049BB43A052D1E37F000041FFD34C48787C0188014050445874158401708001035B00000049BB43A052D1E37F000041FFD34C48787C0188014050445874158401708001037300000049BB43A052D1E37F000041FFD34C48789001017C8801405044587415708001035C00000049BB43A052D1E37F000041FFD34C48789001017C8801405044587415708001037400000049BB00A052D1E37F000041FFD34C48789001097C8801405044587415708001037500000049BB00A052D1E37F000041FFD34C48789001088801405044587415708001037600000049BB00A052D1E37F000041FFD34C48788801405044587408900115708001037700000049BB00A052D1E37F000041FFD34C480008348801405044587407900115708001037800000049BB00A052D1E37F000041FFD34C4800088801405044587407900115708001037900000049BB00A052D1E37F000041FFD34C48004050445874080715708001037A00000049BB00A052D1E37F000041FFD34C480008344050445874070715078001037B00000049BB00A052D1E37F000041FFD34C4800084050445874043929070715348001037C00000049BB00A052D1E37F000041FFD34C4800081D4050445874043929070715348001037D00000049BB00A052D1E37F000041FFD34C4800084050445874043929070715348001037E00000049BB00A052D1E37F000041FFD34C4800081D4050445874043929070715348001037F00000049BB00A052D1E37F000041FFD34C4800081D214050445874043929070715348001038000000049BB00A052D1E37F000041FFD34C4800081D21044050445874073929070715348001038100000049BB00A052D1E37F000041FFD34C4800081D044050445874073929070715348001038200000049BB00A052D1E37F000041FFD34C480008044050445874073929070715348001038300000049BB43A052D1E37F000041FFD34C48789C0194010198014050445874708001035D00000049BB43A052D1E37F000041FFD34C48789C0194010198014050445874708001038400000049BB00A052D1E37F000041FFD34C48789C01940198014050445874708001038500000049BB00A052D1E37F000041FFD34C48001C98014050445874708001038600000049BB00A052D1E37F000041FFD34C480098014050445874708001038700000049BB00A052D1E37F000041FFD34C4840504458747007038800000049BB00A052D1E37F000041FFD34C4
840504458747007038900000049BB00A052D1E37F000041FFD34C4804344050445870038A00000049BB00A052D1E37F000041FFD34C48043928344050445870038B00000049BB00A052D1E37F000041FFD34C48043929344050445870038C00000049BB00A052D1E37F000041FFD34C480439344050445870038D00000049BB00A052D1E37F000041FFD34C282040504458043407038E00000049BB00A052D1E37F000041FFD34C28382040504458043407038F00000049BB00A052D1E37F000041FFD34C282040504458043407039000000049BB00A052D1E37F000041FFD34C282040504458043407039100000049BB00A052D1E37F000041FFD34C2800405044580415083407039200000049BB00A052D1E37F000041FFD34C28001D405044580415083407039300000049BB00A052D1E37F000041FFD34C28001C40504458041515083407039400000049BB00A052D1E37F000041FFD34C28001C40504458041515083407039500000049BB43A052D1E37F000041FFD34C48A001A80101B001405044587470A40115AC01035E00000049BB43A052D1E37F000041FFD34C48A001A80101B001405044587470A40115AC01039600000049BB43A052D1E37F000041FFD34C48A001B80101A801B001405044587470A40115035F00000049BB43A052D1E37F000041FFD34C48A001B80101A801B001405044587470A40115039700000049BB00A052D1E37F000041FFD34C48A001B80109A801B001405044587470A40115039800000049BB00A052D1E37F000041FFD34C48A001B80108B001405044587470A40115039900000049BB00A052D1E37F000041FFD34C48A001B001405044587408B80170A40115039A00000049BB00A052D1E37F000041FFD34C48000820B001405044587407B80170A40115039B00000049BB00A052D1E37F000041FFD34C480008B001405044587407B80170A40115039C00000049BB00A052D1E37F000041FFD34C48004050445874080770A40115039D00000049BB00A052D1E37F000041FFD34C480008204050445874070707A40115039E00000049BB00A052D1E37F000041FFD34C4800084050445874053429070720A40115039F00000049BB00A052D1E37F000041FFD34C480008394050445874053429070720A4011503A000000049BB00A052D1E37F000041FFD34C4800084050445874053429070720A4011503A100000049BB00A052D1E37F000041FFD34C480008394050445874053429070720A4011503A200000049BB00A052D1E37F000041FFD34C480008391D4050445874053429070720A4011503A300000049BB00A052D1E37F000041FFD34C480008391D344050445874050729070720A4011503A400000049BB00A052D1E37F000041FFD34C48000839344050445874050729070720A4011503A500000049BB00A052D1E37F000041FFD34C480008344050445874050729070720A4011503A600000049BB43A052D1E37F000041FFD34C48A001C001BC0101C401405044587470A401036000000049BB43A052D1E37F000041FFD34C48A001C001BC0101C401405044587470A40103A700000049BB00A052D1E37F000041FFD34C48A001C001BC01C401405044587470A40103A800000049BB00A052D1E37F000041FFD34C480038C401405044587470A40103A900000049BB00A052D1E37F000041FFD34C4800C401405044587470A40103AA00000049BB00A052D1E37F000041FFD34C484050445874700703AB00000049BB00A052D1E37F000041FFD34C484050445874700703AC000000 +[88d37d8e88e] jit-backend-dump} +[88d37d8f74e] {jit-backend-addr +Loop 4 ( #44 FOR_ITER) has address 7fe3d152c0ce to 7fe3d152ccf2 (bootstrap 7fe3d152c098) +[88d37d9156a] jit-backend-addr} +[88d37d92536] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc74490cd +0 E0FDFFFF -[101d93774e163] jit-backend-dump} -[101d93774ee7b] {jit-backend-dump +CODE_DUMP @7fe3d152c0ca +0 E0FDFFFF +[88d37d9390e] jit-backend-dump} +[88d37d94662] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc74491aa +0 440B0000 -[101d93774fc9f] jit-backend-dump} -[101d937750303] {jit-backend-dump +CODE_DUMP @7fe3d152c1a7 +0 470B0000 +[88d37d95492] jit-backend-dump} +[88d37d95ac2] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc74491b6 +0 5A0B0000 -[101d937750fa3] jit-backend-dump} -[101d93775157b] 
{jit-backend-dump +CODE_DUMP @7fe3d152c1b3 +0 5D0B0000 +[88d37d9699a] jit-backend-dump} +[88d37d970be] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc74491c3 +0 6D0B0000 -[101d93775220f] jit-backend-dump} -[101d9377527ab] {jit-backend-dump +CODE_DUMP @7fe3d152c1c0 +0 700B0000 +[88d37d97e72] jit-backend-dump} +[88d37d9851a] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc74491d7 +0 7A0B0000 -[101d9377534c7] jit-backend-dump} -[101d937753b57] {jit-backend-dump +CODE_DUMP @7fe3d152c1d4 +0 7D0B0000 +[88d37d99206] jit-backend-dump} +[88d37d99792] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc74491e8 +0 8C0B0000 -[101d9377548bf] jit-backend-dump} -[101d937754e73] {jit-backend-dump +CODE_DUMP @7fe3d152c1e5 +0 8F0B0000 +[88d37d9a3c2] jit-backend-dump} +[88d37d9a94e] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc74491fa +0 9D0B0000 -[101d937755aa3] jit-backend-dump} -[101d937756027] {jit-backend-dump +CODE_DUMP @7fe3d152c1f7 +0 A00B0000 +[88d37d9b592] jit-backend-dump} +[88d37d9bc72] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc744920c +0 AD0B0000 -[101d937756c73] jit-backend-dump} -[101d9377571e7] {jit-backend-dump +CODE_DUMP @7fe3d152c209 +0 B00B0000 +[88d37d9caca] jit-backend-dump} +[88d37d9d17e] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc744921f +0 BA0B0000 -[101d937757e27] jit-backend-dump} -[101d9377584e7] {jit-backend-dump +CODE_DUMP @7fe3d152c21c +0 BD0B0000 +[88d37d9df1a] jit-backend-dump} +[88d37d9e4a6] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc744923d +0 BA0B0000 -[101d937759213] jit-backend-dump} -[101d9377598c3] {jit-backend-dump +CODE_DUMP @7fe3d152c23a +0 BD0B0000 +[88d37d9f0da] jit-backend-dump} +[88d37d9f692] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc7449254 +0 C10B0000 -[101d93775a527] jit-backend-dump} -[101d937760017] {jit-backend-dump +CODE_DUMP @7fe3d152c251 +0 C40B0000 +[88d37da02ce] jit-backend-dump} +[88d37da0b1e] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc7449274 +0 DE0B0000 -[101d9377610bb] jit-backend-dump} -[101d9377617a7] {jit-backend-dump +CODE_DUMP @7fe3d152c271 +0 E10B0000 +[88d37da18f2] jit-backend-dump} +[88d37da1f76] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc744929c +0 D40B0000 -[101d9377624f7] jit-backend-dump} -[101d937762b83] {jit-backend-dump +CODE_DUMP @7fe3d152c299 +0 D70B0000 +[88d37da2d72] jit-backend-dump} +[88d37da33da] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc74492aa +0 E40B0000 -[101d937763ad7] jit-backend-dump} -[101d937764243] {jit-backend-dump +CODE_DUMP @7fe3d152c2a7 +0 E70B0000 +[88d37da40ce] jit-backend-dump} +[88d37da47be] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc74492c1 +0 070C0000 -[101d937764ffb] jit-backend-dump} -[101d937765653] {jit-backend-dump +CODE_DUMP @7fe3d152c2be +0 0A0C0000 +[88d37da83de] 
jit-backend-dump} +[88d37da8aa2] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc74492d6 +0 100C0000 -[101d937766347] jit-backend-dump} -[101d9377668cb] {jit-backend-dump +CODE_DUMP @7fe3d152c2d3 +0 130C0000 +[88d37da982e] jit-backend-dump} +[88d37da9e3a] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc74492ef +0 160C0000 -[101d9377674cb] jit-backend-dump} -[101d937767a63] {jit-backend-dump +CODE_DUMP @7fe3d152c2ed +0 180C0000 +[88d37daab3e] jit-backend-dump} +[88d37dab1be] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc74493f0 +0 340B0000 -[101d93776864f] jit-backend-dump} -[101d937768d03] {jit-backend-dump +CODE_DUMP @7fe3d152c3ee +0 360B0000 +[88d37dabea2] jit-backend-dump} +[88d37dac44a] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc74493ff +0 490B0000 -[101d937769b47] jit-backend-dump} -[101d93776a217] {jit-backend-dump +CODE_DUMP @7fe3d152c3fd +0 4B0B0000 +[88d37dad0b2] jit-backend-dump} +[88d37dad662] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc7449495 +0 D70A0000 -[101d93776aeef] jit-backend-dump} -[101d93776b477] {jit-backend-dump +CODE_DUMP @7fe3d152c493 +0 D90A0000 +[88d37dae29e] jit-backend-dump} +[88d37dae842] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc74494a4 +0 EC0A0000 -[101d93776c07b] jit-backend-dump} -[101d93776c60f] {jit-backend-dump +CODE_DUMP @7fe3d152c4a2 +0 EE0A0000 +[88d37daf4e6] jit-backend-dump} +[88d37dafbde] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc74494be +0 F60A0000 -[101d93776d1fb] jit-backend-dump} -[101d93776d78b] {jit-backend-dump +CODE_DUMP @7fe3d152c4bc +0 F80A0000 +[88d37db08be] jit-backend-dump} +[88d37db0f7a] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc74494e4 +0 F40A0000 -[101d93776e5df] jit-backend-dump} -[101d93776ec87] {jit-backend-dump +CODE_DUMP @7fe3d152c4e2 +0 F60A0000 +[88d37db1be6] jit-backend-dump} +[88d37db218a] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc74494f1 +0 090B0000 -[101d93776f943] jit-backend-dump} -[101d93776feeb] {jit-backend-dump +CODE_DUMP @7fe3d152c4ef +0 0C0B0000 +[88d37db2dbe] jit-backend-dump} +[88d37db338a] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc7449505 +0 170B0000 -[101d937770aeb] jit-backend-dump} -[101d93777107b] {jit-backend-dump +CODE_DUMP @7fe3d152c503 +0 1B0B0000 +[88d37db3fda] jit-backend-dump} +[88d37db459e] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc7449513 +0 2D0B0000 -[101d937771c6b] jit-backend-dump} -[101d93777226b] {jit-backend-dump +CODE_DUMP @7fe3d152c511 +0 320B0000 +[88d37db5282] jit-backend-dump} +[88d37db5992] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc7449540 +0 430B0000 -[101d937772e6b] jit-backend-dump} -[101d937773613] {jit-backend-dump +CODE_DUMP @7fe3d152c53e +0 490B0000 +[88d37db672a] jit-backend-dump} +[88d37db6cee] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE 
/home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc7449556 +0 4F0B0000 -[101d9377742df] jit-backend-dump} -[101d93777494f] {jit-backend-dump +CODE_DUMP @7fe3d152c554 +0 550B0000 +[88d37db7922] jit-backend-dump} +[88d37db7ece] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc744956b +0 5E0B0000 -[101d93777564f] jit-backend-dump} -[101d937775bef] {jit-backend-dump +CODE_DUMP @7fe3d152c569 +0 640B0000 +[88d37db8b0a] jit-backend-dump} +[88d37db9092] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc7449579 +0 750B0000 -[101d9377767db] jit-backend-dump} -[101d937776d63] {jit-backend-dump +CODE_DUMP @7fe3d152c577 +0 7B0B0000 +[88d37db9cce] jit-backend-dump} +[88d37dba36a] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc7449590 +0 820B0000 -[101d937777963] jit-backend-dump} -[101d937777eeb] {jit-backend-dump +CODE_DUMP @7fe3d152c58e +0 880B0000 +[88d37dbb186] jit-backend-dump} +[88d37dbb7ae] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc74495aa +0 8D0B0000 -[101d937778cfb] jit-backend-dump} -[101d937779367] {jit-backend-dump +CODE_DUMP @7fe3d152c5a8 +0 930B0000 +[88d37dbc3c6] jit-backend-dump} +[88d37dbc94a] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc74495b4 +0 A90B0000 -[101d93777a047] jit-backend-dump} -[101d93777a5ef] {jit-backend-dump +CODE_DUMP @7fe3d152c5b2 +0 AF0B0000 +[88d37dbd57a] jit-backend-dump} +[88d37dbdb16] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc74495be +0 C60B0000 -[101d93777deb7] jit-backend-dump} -[101d93777e5b3] {jit-backend-dump +CODE_DUMP @7fe3d152c5bc +0 CC0B0000 +[88d37dbe736] jit-backend-dump} +[88d37dbecd2] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc74495d1 +0 D90B0000 -[101d93777f2ab] jit-backend-dump} -[101d93777f8ef] {jit-backend-dump +CODE_DUMP @7fe3d152c5cf +0 DF0B0000 +[88d37dbfac2] jit-backend-dump} +[88d37dc019a] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc74496d6 +0 F90A0000 -[101d937780583] jit-backend-dump} -[101d937780bef] {jit-backend-dump +CODE_DUMP @7fe3d152c6d3 +0 000B0000 +[88d37dc0ed2] jit-backend-dump} +[88d37dc14ae] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc74496e5 +0 0E0B0000 -[101d9377818b7] jit-backend-dump} -[101d937781f2f] {jit-backend-dump +CODE_DUMP @7fe3d152c6e2 +0 150B0000 +[88d37dc20f2] jit-backend-dump} +[88d37dc2692] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc74496ee +0 290B0000 -[101d937782c37] jit-backend-dump} -[101d9377831cb] {jit-backend-dump +CODE_DUMP @7fe3d152c6eb +0 300B0000 +[88d37dc5ae2] jit-backend-dump} +[88d37dc615e] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc7449702 +0 380B0000 -[101d937783dcf] jit-backend-dump} -[101d937784337] {jit-backend-dump +CODE_DUMP @7fe3d152c6ff +0 3F0B0000 +[88d37dc705a] jit-backend-dump} +[88d37dc7652] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc7449710 +0 4A0B0000 
-[101d937784f23] jit-backend-dump} -[101d93778551b] {jit-backend-dump +CODE_DUMP @7fe3d152c70d +0 510B0000 +[88d37dc8472] jit-backend-dump} +[88d37dc8b7a] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc7449759 +0 3B0B0000 -[101d93778625f] jit-backend-dump} -[101d93778690b] {jit-backend-dump +CODE_DUMP @7fe3d152c754 +0 440B0000 +[88d37dc98a2] jit-backend-dump} +[88d37dc9e62] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc744978b +0 240B0000 -[101d9377876fb] jit-backend-dump} -[101d937787d7b] {jit-backend-dump +CODE_DUMP @7fe3d152c786 +0 2D0B0000 +[88d37dcaaf6] jit-backend-dump} +[88d37dcb07a] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc744979f +0 2B0B0000 -[101d937788b13] jit-backend-dump} -[101d937789093] {jit-backend-dump +CODE_DUMP @7fe3d152c79b +0 330B0000 +[88d37dcbcaa] jit-backend-dump} +[88d37dcc236] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc74497b0 +0 370B0000 -[101d937789ccf] jit-backend-dump} -[101d93778a263] {jit-backend-dump +CODE_DUMP @7fe3d152c7ac +0 3F0B0000 +[88d37dccfde] jit-backend-dump} +[88d37dcd666] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc74497c2 +0 420B0000 -[101d93778aec7] jit-backend-dump} -[101d93778b453] {jit-backend-dump +CODE_DUMP @7fe3d152c7be +0 4A0B0000 +[88d37dce3ee] jit-backend-dump} +[88d37dce992] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc74497e8 +0 380B0000 -[101d93778c223] jit-backend-dump} -[101d93778c8d7] {jit-backend-dump +CODE_DUMP @7fe3d152c7e4 +0 400B0000 +[88d37dcf5d2] jit-backend-dump} +[88d37dcfba2] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc74497ff +0 3D0B0000 -[101d93778d687] jit-backend-dump} -[101d93778de53] {jit-backend-dump +CODE_DUMP @7fe3d152c7fb +0 450B0000 +[88d37dd07ce] jit-backend-dump} +[88d37dd0ffa] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc744981f +0 560B0000 -[101d93778ea7b] jit-backend-dump} -[101d93778f02b] {jit-backend-dump +CODE_DUMP @7fe3d152c81b +0 5E0B0000 +[88d37dd1c56] jit-backend-dump} +[88d37dd22ee] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc7449829 +0 680B0000 -[101d93778fc43] jit-backend-dump} -[101d9377901bf] {jit-backend-dump +CODE_DUMP @7fe3d152c825 +0 700B0000 +[88d37dd30d6] jit-backend-dump} +[88d37dd378a] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc7449840 +0 6F0B0000 -[101d937790e3f] jit-backend-dump} -[101d9377914b7] {jit-backend-dump +CODE_DUMP @7fe3d152c83c +0 770B0000 +[88d37dd4562] jit-backend-dump} +[88d37dd4af2] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc7449855 +0 790B0000 -[101d937792197] jit-backend-dump} -[101d937792813] {jit-backend-dump +CODE_DUMP @7fe3d152c851 +0 810B0000 +[88d37dd5746] jit-backend-dump} +[88d37dd5cce] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc744986e +0 800B0000 -[101d93779355b] jit-backend-dump} -[101d937793ad7] {jit-backend-dump +CODE_DUMP 
@7fe3d152c86a +0 880B0000 +[88d37dd693e] jit-backend-dump} +[88d37dd6f8e] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc744997c +0 920A0000 -[101d9377946e3] jit-backend-dump} -[101d937794c57] {jit-backend-dump +CODE_DUMP @7fe3d152c978 +0 9A0A0000 +[88d37dd7d9a] jit-backend-dump} +[88d37dd8472] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc744998b +0 A90A0000 -[101d93779585f] jit-backend-dump} -[101d937795dbf] {jit-backend-dump +CODE_DUMP @7fe3d152c987 +0 B10A0000 +[88d37dd9212] jit-backend-dump} +[88d37dd97ca] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc7449a21 +0 390A0000 -[101d937796bd7] jit-backend-dump} -[101d937797237] {jit-backend-dump +CODE_DUMP @7fe3d152ca1d +0 410A0000 +[88d37dda41e] jit-backend-dump} +[88d37dda9a2] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc7449a30 +0 500A0000 -[101d93779a9f3] jit-backend-dump} -[101d93779b1e3] {jit-backend-dump +CODE_DUMP @7fe3d152ca2c +0 580A0000 +[88d37ddb616] jit-backend-dump} +[88d37ddbb9e] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc7449a4a +0 5C0A0000 -[101d93779bfcf] jit-backend-dump} -[101d93779c647] {jit-backend-dump +CODE_DUMP @7fe3d152ca46 +0 640A0000 +[88d37ddc7c6] jit-backend-dump} +[88d37ddce62] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc7449a70 +0 5C0A0000 -[101d93779d2e3] jit-backend-dump} -[101d93779d947] {jit-backend-dump +CODE_DUMP @7fe3d152ca6c +0 640A0000 +[88d37dddcd6] jit-backend-dump} +[88d37dde37e] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc7449a7d +0 730A0000 -[101d93779e77f] jit-backend-dump} -[101d93779ee0b] {jit-backend-dump +CODE_DUMP @7fe3d152ca79 +0 7B0A0000 +[88d37ddf112] jit-backend-dump} +[88d37ddf69e] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc7449a91 +0 830A0000 -[101d93779fbff] jit-backend-dump} -[101d9377a024b] {jit-backend-dump +CODE_DUMP @7fe3d152ca8d +0 8B0A0000 +[88d37de2f1e] jit-backend-dump} +[88d37de3656] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc7449a9f +0 9A0A0000 -[101d9377a0f77] jit-backend-dump} -[101d9377a1597] {jit-backend-dump +CODE_DUMP @7fe3d152ca9b +0 A20A0000 +[88d37de4466] jit-backend-dump} +[88d37de4c32] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc7449acc +0 B10A0000 -[101d9377a219b] jit-backend-dump} -[101d9377a2737] {jit-backend-dump +CODE_DUMP @7fe3d152cac8 +0 B90A0000 +[88d37de5996] jit-backend-dump} +[88d37de600a] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc7449ae2 +0 BD0A0000 -[101d9377a336f] jit-backend-dump} -[101d9377a393b] {jit-backend-dump +CODE_DUMP @7fe3d152cade +0 C50A0000 +[88d37de6de6] jit-backend-dump} +[88d37de7496] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc7449af7 +0 CC0A0000 -[101d9377a4787] jit-backend-dump} -[101d9377a4e33] {jit-backend-dump +CODE_DUMP @7fe3d152caf3 +0 D40A0000 +[88d37de8166] jit-backend-dump} +[88d37de87ca] 
{jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc7449b05 +0 E30A0000 -[101d9377a5bbf] jit-backend-dump} -[101d9377a6193] {jit-backend-dump +CODE_DUMP @7fe3d152cb01 +0 EB0A0000 +[88d37de94f6] jit-backend-dump} +[88d37de9a7a] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc7449b1c +0 F00A0000 -[101d9377a6ddf] jit-backend-dump} -[101d9377a736b] {jit-backend-dump +CODE_DUMP @7fe3d152cb18 +0 F80A0000 +[88d37dea68a] jit-backend-dump} +[88d37dead02] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc7449b36 +0 FB0A0000 -[101d9377a7fab] jit-backend-dump} -[101d9377a852f] {jit-backend-dump +CODE_DUMP @7fe3d152cb32 +0 030B0000 +[88d37deba8e] jit-backend-dump} +[88d37dec0e2] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc7449b40 +0 170B0000 -[101d9377a918f] jit-backend-dump} -[101d9377a982f] {jit-backend-dump +CODE_DUMP @7fe3d152cb3c +0 1F0B0000 +[88d37dece36] jit-backend-dump} +[88d37ded3f6] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc7449b4a +0 340B0000 -[101d9377aa64b] jit-backend-dump} -[101d9377aad1b] {jit-backend-dump +CODE_DUMP @7fe3d152cb46 +0 3C0B0000 +[88d37dee02a] jit-backend-dump} +[88d37dee582] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc7449b5d +0 470B0000 -[101d9377abad7] jit-backend-dump} -[101d9377ac06b] {jit-backend-dump +CODE_DUMP @7fe3d152cb59 +0 4F0B0000 +[88d37def186] jit-backend-dump} +[88d37def706] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc7449c66 +0 630A0000 -[101d9377acca7] jit-backend-dump} -[101d9377ad277] {jit-backend-dump +CODE_DUMP @7fe3d152cc62 +0 6B0A0000 +[88d37df033e] jit-backend-dump} +[88d37df09b6] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc7449c75 +0 790A0000 -[101d9377aded7] jit-backend-dump} -[101d9377ae46b] {jit-backend-dump +CODE_DUMP @7fe3d152cc71 +0 810A0000 +[88d37df171a] jit-backend-dump} +[88d37df1d86] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc7449c7e +0 950A0000 -[101d9377af257] jit-backend-dump} -[101d9377af98b] {jit-backend-dump +CODE_DUMP @7fe3d152cc7a +0 9D0A0000 +[88d37df29b2] jit-backend-dump} +[88d37df2f3e] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc7449c92 +0 A50A0000 -[101d9377b075f] jit-backend-dump} -[101d9377b0d1f] {jit-backend-dump +CODE_DUMP @7fe3d152cc8e +0 AD0A0000 +[88d37df3b5a] jit-backend-dump} +[88d37df40c2] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc7449ca0 +0 B70A0000 -[101d9377b194f] jit-backend-dump} -[101d9377b1f5b] {jit-backend-dump +CODE_DUMP @7fe3d152cc9c +0 BF0A0000 +[88d37df4cfa] jit-backend-dump} +[88d37df530e] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc7449ce3 +0 AE0A0000 -[101d9377b2b8f] jit-backend-dump} -[101d9377b3a47] jit-backend} -[101d9377b6a83] {jit-log-opt-loop +CODE_DUMP @7fe3d152cce3 +0 B20A0000 +[88d37df611e] jit-backend-dump} +[88d37df707e] jit-backend} +[88d37dfa76e] {jit-log-opt-loop # Loop 4 ( 
#44 FOR_ITER) : loop with 351 ops [p0, p1] +84: p2 = getfield_gc(p0, descr=) @@ -1534,7 +1534,7 @@ +157: p22 = getarrayitem_gc(p8, 6, descr=) +168: p24 = getarrayitem_gc(p8, 7, descr=) +172: p25 = getfield_gc(p0, descr=) -+172: label(p0, p1, p2, p3, i4, p5, i6, i7, p10, p12, p14, p16, p18, p20, p22, p24, descr=TargetToken(139705792106192)) ++172: label(p0, p1, p2, p3, i4, p5, i6, i7, p10, p12, p14, p16, p18, p20, p22, p24, descr=TargetToken(140616493870800)) debug_merge_point(0, ' #44 FOR_ITER') +265: guard_value(i6, 4, descr=) [i6, p1, p0, p2, p3, i4, p5, i7, p10, p12, p14, p16, p18, p20, p22, p24] +275: guard_class(p16, 38562496, descr=) [p1, p0, p16, p2, p3, i4, p5, p10, p12, p14, p18, p20, p22, p24] @@ -1601,85 +1601,85 @@ debug_merge_point(2, ' #32 CALL_METHOD') +575: p64 = getfield_gc(ConstPtr(ptr63), descr=) +588: guard_class(p64, ConstClass(ObjectDictStrategy), descr=) [p1, p0, p49, p64, p2, p5, p10, p12, p16, i59, i51, p50, p36] -+600: p66 = getfield_gc(ConstPtr(ptr63), descr=) -+613: i67 = force_token() ++601: p66 = getfield_gc(ConstPtr(ptr63), descr=) ++614: i67 = force_token() p69 = new_array(3, descr=) p71 = new_with_vtable(38637968) -+705: setfield_gc(p71, i59, descr=) ++706: setfield_gc(p71, i59, descr=) setfield_gc(p49, p71, descr=) -+752: setfield_gc(p0, i67, descr=) -+756: setarrayitem_gc(p69, 0, ConstPtr(ptr73), descr=) -+764: setarrayitem_gc(p69, 1, ConstPtr(ptr75), descr=) -+778: setarrayitem_gc(p69, 2, ConstPtr(ptr77), descr=) -+792: i79 = call_may_force(ConstClass(hash_tuple), p69, descr=) -guard_not_forced(, descr=) [p1, p0, p49, p66, i79, p71, p2, p5, p10, p12, p16, p69, p50, i51, p36] -+857: guard_no_exception(, descr=) [p1, p0, p49, p66, i79, p71, p2, p5, p10, p12, p16, p69, p50, i51, p36] -+872: i80 = force_token() ++753: setfield_gc(p0, i67, descr=) ++757: setarrayitem_gc(p69, 0, ConstPtr(ptr73), descr=) ++765: setarrayitem_gc(p69, 1, ConstPtr(ptr75), descr=) ++779: setarrayitem_gc(p69, 2, ConstPtr(ptr77), descr=) ++793: i79 = call_may_force(ConstClass(hash_tuple), p69, descr=) +guard_not_forced(, descr=) [p1, p0, p49, p66, i79, p71, p2, p5, p10, p12, p16, i51, p69, p36, p50] ++858: guard_no_exception(, descr=) [p1, p0, p49, p66, i79, p71, p2, p5, p10, p12, p16, i51, p69, p36, p50] ++873: i80 = force_token() p82 = new_with_vtable(38549536) -+942: setfield_gc(p0, i80, descr=) -+953: setfield_gc(p82, p69, descr=) -+964: i84 = call_may_force(ConstClass(ll_dict_lookup_trampoline__v693___simple_call__function_l), p66, p82, i79, descr=) -guard_not_forced(, descr=) [p1, p0, p49, p82, i84, p66, p71, p2, p5, p10, p12, p16, p50, i51, p36] -+1022: guard_no_exception(, descr=) [p1, p0, p49, p82, i84, p66, p71, p2, p5, p10, p12, p16, p50, i51, p36] -+1037: i86 = int_and(i84, -9223372036854775808) -+1053: i87 = int_is_true(i86) -guard_false(i87, descr=) [p1, p0, p49, p82, i84, p66, p71, p2, p5, p10, p12, p16, p50, i51, p36] -+1063: p88 = getfield_gc(p66, descr=) -+1074: p89 = getinteriorfield_gc(p88, i84, descr=>) -+1083: guard_nonnull_class(p89, 38793968, descr=) [p1, p0, p49, p82, p89, p71, p2, p5, p10, p12, p16, p50, i51, p36] ++943: setfield_gc(p0, i80, descr=) ++954: setfield_gc(p82, p69, descr=) ++965: i84 = call_may_force(ConstClass(ll_dict_lookup_trampoline__v693___simple_call__function_l), p66, p82, i79, descr=) +guard_not_forced(, descr=) [p1, p0, p49, p82, i84, p66, p71, p2, p5, p10, p12, p16, i51, p36, p50] ++1023: guard_no_exception(, descr=) [p1, p0, p49, p82, i84, p66, p71, p2, p5, p10, p12, p16, i51, p36, p50] ++1038: i86 = int_and(i84, -9223372036854775808) 
++1054: i87 = int_is_true(i86) +guard_false(i87, descr=) [p1, p0, p49, p82, i84, p66, p71, p2, p5, p10, p12, p16, i51, p36, p50] ++1064: p88 = getfield_gc(p66, descr=) ++1075: p89 = getinteriorfield_gc(p88, i84, descr=>) ++1084: guard_nonnull_class(p89, 38793968, descr=) [p1, p0, p49, p82, p89, p71, p2, p5, p10, p12, p16, i51, p36, p50] debug_merge_point(2, ' #35 STORE_FAST') debug_merge_point(2, ' #38 LOAD_FAST') debug_merge_point(2, ' #41 LOAD_CONST') debug_merge_point(2, ' #44 COMPARE_OP') -+1101: i92 = instance_ptr_eq(ConstPtr(ptr91), p89) -guard_false(i92, descr=) [p1, p0, p49, p71, p2, p5, p10, p12, p16, p82, p89, p50, i51, p36] ++1102: i92 = instance_ptr_eq(ConstPtr(ptr91), p89) +guard_false(i92, descr=) [p1, p0, p49, p71, p2, p5, p10, p12, p16, p89, p82, i51, p36, p50] debug_merge_point(2, ' #47 POP_JUMP_IF_FALSE') debug_merge_point(2, ' #50 LOAD_FAST') debug_merge_point(2, ' #53 RETURN_VALUE') -+1114: p93 = getfield_gc(p49, descr=) -+1125: guard_isnull(p93, descr=) [p1, p0, p49, p89, p93, p71, p2, p5, p10, p12, p16, p82, None, p50, i51, p36] -+1134: i95 = getfield_gc(p49, descr=) -+1138: i96 = int_is_true(i95) -guard_false(i96, descr=) [p1, p0, p49, p89, p71, p2, p5, p10, p12, p16, p82, None, p50, i51, p36] -+1148: p97 = getfield_gc(p49, descr=) ++1115: p93 = getfield_gc(p49, descr=) ++1126: guard_isnull(p93, descr=) [p1, p0, p49, p89, p93, p71, p2, p5, p10, p12, p16, None, p82, i51, p36, p50] ++1135: i95 = getfield_gc(p49, descr=) ++1139: i96 = int_is_true(i95) +guard_false(i96, descr=) [p1, p0, p49, p89, p71, p2, p5, p10, p12, p16, None, p82, i51, p36, p50] ++1149: p97 = getfield_gc(p49, descr=) debug_merge_point(1, ' #12 LOOKUP_METHOD') -+1148: setfield_gc(p71, -3, descr=) ++1149: setfield_gc(p71, -3, descr=) debug_merge_point(1, ' #15 LOAD_FAST') debug_merge_point(1, ' #18 CALL_METHOD') -+1163: guard_not_invalidated(, descr=) [p1, p0, p49, p2, p5, p10, p12, p16, None, p89, p50, i51, p36] -+1163: i99 = strlen(p36) -+1174: i101 = int_gt(9223372036854775807, i99) -guard_true(i101, descr=) [p1, p0, p49, p89, p36, p2, p5, p10, p12, p16, None, None, p50, i51, None] -+1193: p102 = getfield_gc_pure(p89, descr=) -+1197: i103 = getfield_gc_pure(p89, descr=) -+1201: i105 = getarrayitem_gc_pure(p102, 0, descr=) -+1205: i107 = int_eq(i105, 17) -guard_true(i107, descr=) [p1, p0, p49, p89, p2, p5, p10, p12, p16, i99, i103, p102, None, None, p50, i51, p36] -+1215: i109 = getarrayitem_gc_pure(p102, 2, descr=) -+1219: i111 = int_and(i109, 1) -+1226: i112 = int_is_true(i111) -guard_true(i112, descr=) [p1, p0, p49, p89, i109, p2, p5, p10, p12, p16, i99, i103, p102, None, None, p50, i51, p36] -+1236: i114 = getarrayitem_gc_pure(p102, 5, descr=) -+1240: i116 = int_gt(i114, 1) -guard_false(i116, descr=) [p1, p0, p49, p89, p2, p5, p10, p12, p16, i99, i103, p102, None, None, p50, i51, p36] -+1250: i118 = getarrayitem_gc_pure(p102, 1, descr=) -+1254: i120 = int_add(i118, 1) -+1258: i121 = getarrayitem_gc_pure(p102, i120, descr=) -+1263: i123 = int_eq(i121, 19) -guard_true(i123, descr=) [p1, p0, p49, p89, i120, p2, p5, p10, p12, p16, i99, i103, p102, None, None, p50, i51, p36] -+1273: i125 = int_add(i120, 1) -+1280: i126 = getarrayitem_gc_pure(p102, i125, descr=) -+1285: i128 = int_add(i120, 2) -+1289: i130 = int_lt(0, i99) -guard_true(i130, descr=) [p1, p0, p49, p89, i126, i128, p2, p5, p10, p12, p16, i99, i103, p102, None, None, p50, i51, p36] -+1299: guard_value(i128, 11, descr=) [p1, p0, p49, p89, i126, i128, p102, p2, p5, p10, p12, p16, i99, i103, None, None, None, p50, i51, p36] -+1309: 
guard_value(i126, 51, descr=) [p1, p0, p49, p89, i126, p102, p2, p5, p10, p12, p16, i99, i103, None, None, None, p50, i51, p36] -+1319: guard_value(p102, ConstPtr(ptr133), descr=) [p1, p0, p49, p89, p102, p2, p5, p10, p12, p16, i99, i103, None, None, None, p50, i51, p36] ++1164: guard_not_invalidated(, descr=) [p1, p0, p49, p2, p5, p10, p12, p16, p89, None, i51, p36, p50] ++1164: i99 = strlen(p36) ++1175: i101 = int_gt(9223372036854775807, i99) +guard_true(i101, descr=) [p1, p0, p49, p89, p36, p2, p5, p10, p12, p16, None, None, i51, None, p50] ++1194: p102 = getfield_gc_pure(p89, descr=) ++1198: i103 = getfield_gc_pure(p89, descr=) ++1202: i105 = getarrayitem_gc_pure(p102, 0, descr=) ++1206: i107 = int_eq(i105, 17) +guard_true(i107, descr=) [p1, p0, p49, p89, p2, p5, p10, p12, p16, p102, i99, i103, None, None, i51, p36, p50] ++1216: i109 = getarrayitem_gc_pure(p102, 2, descr=) ++1220: i111 = int_and(i109, 1) ++1227: i112 = int_is_true(i111) +guard_true(i112, descr=) [p1, p0, p49, p89, i109, p2, p5, p10, p12, p16, p102, i99, i103, None, None, i51, p36, p50] ++1237: i114 = getarrayitem_gc_pure(p102, 5, descr=) ++1241: i116 = int_gt(i114, 1) +guard_false(i116, descr=) [p1, p0, p49, p89, p2, p5, p10, p12, p16, p102, i99, i103, None, None, i51, p36, p50] ++1251: i118 = getarrayitem_gc_pure(p102, 1, descr=) ++1255: i120 = int_add(i118, 1) ++1259: i121 = getarrayitem_gc_pure(p102, i120, descr=) ++1264: i123 = int_eq(i121, 19) +guard_true(i123, descr=) [p1, p0, p49, p89, i120, p2, p5, p10, p12, p16, p102, i99, i103, None, None, i51, p36, p50] ++1274: i125 = int_add(i120, 1) ++1281: i126 = getarrayitem_gc_pure(p102, i125, descr=) ++1286: i128 = int_add(i120, 2) ++1290: i130 = int_lt(0, i99) +guard_true(i130, descr=) [p1, p0, p49, p89, i126, i128, p2, p5, p10, p12, p16, p102, i99, i103, None, None, i51, p36, p50] ++1300: guard_value(i128, 11, descr=) [p1, p0, p49, p89, i126, i128, p102, p2, p5, p10, p12, p16, None, i99, i103, None, None, i51, p36, p50] ++1310: guard_value(i126, 51, descr=) [p1, p0, p49, p89, i126, p102, p2, p5, p10, p12, p16, None, i99, i103, None, None, i51, p36, p50] ++1320: guard_value(p102, ConstPtr(ptr133), descr=) [p1, p0, p49, p89, p102, p2, p5, p10, p12, p16, None, i99, i103, None, None, i51, p36, p50] debug_merge_point(2, 're StrLiteralSearch at 11/51 [17. 8. 3. 1. 1. 1. 1. 51. 0. 19. 51. 
1]') -+1338: i134 = force_token() ++1339: i134 = force_token() p136 = new_with_vtable(38602768) p137 = new_with_vtable(38637968) -+1422: setfield_gc(p137, i51, descr=) ++1423: setfield_gc(p137, i51, descr=) setfield_gc(p49, p137, descr=) +1469: setfield_gc(p0, i134, descr=) +1480: setfield_gc(p136, ConstPtr(ptr133), descr=) @@ -1687,68 +1687,68 @@ +1498: setfield_gc(p136, i99, descr=) +1502: setfield_gc(p136, p36, descr=) +1506: i138 = call_assembler(0, p136, descr=) -guard_not_forced(, descr=) [p1, p0, p49, p136, p89, i138, p137, p2, p5, p10, p12, p16, p50, p36] -+1599: guard_no_exception(, descr=) [p1, p0, p49, p136, p89, i138, p137, p2, p5, p10, p12, p16, p50, p36] -+1614: guard_false(i138, descr=) [p1, p0, p49, p136, p89, p137, p2, p5, p10, p12, p16, p50, p36] +guard_not_forced(, descr=) [p1, p0, p49, p136, p89, i138, p137, p2, p5, p10, p12, p16, p36, p50] ++1599: guard_no_exception(, descr=) [p1, p0, p49, p136, p89, i138, p137, p2, p5, p10, p12, p16, p36, p50] ++1614: guard_false(i138, descr=) [p1, p0, p49, p136, p89, p137, p2, p5, p10, p12, p16, p36, p50] debug_merge_point(1, ' #21 RETURN_VALUE') +1623: p139 = getfield_gc(p49, descr=) -+1634: guard_isnull(p139, descr=) [p1, p0, p49, p139, p137, p2, p5, p10, p12, p16, p50, p36] ++1634: guard_isnull(p139, descr=) [p1, p0, p49, p139, p137, p2, p5, p10, p12, p16, p36, p50] +1643: i140 = getfield_gc(p49, descr=) +1647: i141 = int_is_true(i140) -guard_false(i141, descr=) [p1, p0, p49, p137, p2, p5, p10, p12, p16, p50, p36] +guard_false(i141, descr=) [p1, p0, p49, p137, p2, p5, p10, p12, p16, p36, p50] +1657: p142 = getfield_gc(p49, descr=) debug_merge_point(0, ' #65 POP_TOP') debug_merge_point(0, ' #66 JUMP_ABSOLUTE') setfield_gc(p49, p50, descr=) -+1697: setfield_gc(p137, -3, descr=) -+1712: guard_not_invalidated(, descr=) [p1, p0, p2, p5, p10, p12, p16, None, p36] -+1712: i145 = getfield_raw(44057928, descr=) -+1720: i147 = int_lt(i145, 0) -guard_false(i147, descr=) [p1, p0, p2, p5, p10, p12, p16, None, p36] ++1695: setfield_gc(p137, -3, descr=) ++1710: guard_not_invalidated(, descr=) [p1, p0, p2, p5, p10, p12, p16, p36, None] ++1710: i145 = getfield_raw(44057928, descr=) ++1718: i147 = int_lt(i145, 0) +guard_false(i147, descr=) [p1, p0, p2, p5, p10, p12, p16, p36, None] debug_merge_point(0, ' #44 FOR_ITER') -+1730: label(p0, p1, p2, p5, p10, p12, p36, p16, i140, p49, p50, descr=TargetToken(139705792106272)) ++1728: label(p0, p1, p2, p5, p10, p12, p36, p16, i140, p49, p50, descr=TargetToken(140616493870880)) debug_merge_point(0, ' #44 FOR_ITER') -+1760: p148 = getfield_gc(p16, descr=) -+1771: guard_nonnull(p148, descr=) [p1, p0, p16, p148, p2, p5, p10, p12, p36] -+1780: i149 = getfield_gc(p16, descr=) -+1784: p150 = getfield_gc(p148, descr=) -+1788: guard_class(p150, 38655536, descr=) [p1, p0, p16, i149, p150, p148, p2, p5, p10, p12, p36] -+1800: p151 = getfield_gc(p148, descr=) -+1804: i152 = getfield_gc(p151, descr=) -+1808: i153 = uint_ge(i149, i152) ++1758: p148 = getfield_gc(p16, descr=) ++1769: guard_nonnull(p148, descr=) [p1, p0, p16, p148, p2, p5, p10, p12, p36] ++1778: i149 = getfield_gc(p16, descr=) ++1782: p150 = getfield_gc(p148, descr=) ++1786: guard_class(p150, 38655536, descr=) [p1, p0, p16, i149, p150, p148, p2, p5, p10, p12, p36] ++1799: p151 = getfield_gc(p148, descr=) ++1803: i152 = getfield_gc(p151, descr=) ++1807: i153 = uint_ge(i149, i152) guard_false(i153, descr=) [p1, p0, p16, i149, i152, p151, p2, p5, p10, p12, p36] -+1817: p154 = getfield_gc(p151, descr=) -+1821: p155 = getarrayitem_gc(p154, i149, descr=) 
-+1826: guard_nonnull(p155, descr=) [p1, p0, p16, i149, p155, p2, p5, p10, p12, p36] -+1835: i156 = int_add(i149, 1) ++1816: p154 = getfield_gc(p151, descr=) ++1820: p155 = getarrayitem_gc(p154, i149, descr=) ++1825: guard_nonnull(p155, descr=) [p1, p0, p16, i149, p155, p2, p5, p10, p12, p36] ++1834: i156 = int_add(i149, 1) debug_merge_point(0, ' #47 STORE_FAST') debug_merge_point(0, ' #50 LOAD_GLOBAL') -+1839: p157 = getfield_gc(p0, descr=) -+1850: setfield_gc(p16, i156, descr=) -+1854: guard_value(p157, ConstPtr(ptr42), descr=) [p1, p0, p157, p2, p5, p10, p12, p16, p155, None] -+1873: p158 = getfield_gc(p157, descr=) -+1877: guard_value(p158, ConstPtr(ptr44), descr=) [p1, p0, p158, p157, p2, p5, p10, p12, p16, p155, None] -+1896: guard_not_invalidated(, descr=) [p1, p0, p157, p2, p5, p10, p12, p16, p155, None] ++1838: p157 = getfield_gc(p0, descr=) ++1849: setfield_gc(p16, i156, descr=) ++1853: guard_value(p157, ConstPtr(ptr42), descr=) [p1, p0, p157, p2, p5, p10, p12, p16, p155, None] ++1872: p158 = getfield_gc(p157, descr=) ++1876: guard_value(p158, ConstPtr(ptr44), descr=) [p1, p0, p158, p157, p2, p5, p10, p12, p16, p155, None] ++1895: guard_not_invalidated(, descr=) [p1, p0, p157, p2, p5, p10, p12, p16, p155, None] debug_merge_point(0, ' #53 LOOKUP_METHOD') -+1896: p159 = getfield_gc(ConstPtr(ptr45), descr=) -+1909: guard_value(p159, ConstPtr(ptr47), descr=) [p1, p0, p159, p2, p5, p10, p12, p16, p155, None] ++1895: p159 = getfield_gc(ConstPtr(ptr45), descr=) ++1908: guard_value(p159, ConstPtr(ptr47), descr=) [p1, p0, p159, p2, p5, p10, p12, p16, p155, None] debug_merge_point(0, ' #56 LOAD_CONST') debug_merge_point(0, ' #59 LOAD_FAST') debug_merge_point(0, ' #62 CALL_METHOD') -+1928: i160 = force_token() -+1928: i161 = int_is_zero(i140) -guard_true(i161, descr=) [p1, p0, p49, p2, p5, p10, p12, p16, p50, i160, p155, None] ++1927: i160 = force_token() ++1927: i161 = int_is_zero(i140) +guard_true(i161, descr=) [p1, p0, p49, p2, p5, p10, p12, p16, i160, p50, p155, None] debug_merge_point(1, ' #0 LOAD_GLOBAL') debug_merge_point(1, ' #3 LOAD_FAST') debug_merge_point(1, ' #6 LOAD_FAST') debug_merge_point(1, ' #9 CALL_FUNCTION') -+1938: i162 = getfield_gc(ConstPtr(ptr55), descr=) -+1951: i163 = int_ge(0, i162) -guard_true(i163, descr=) [p1, p0, p49, i162, p2, p5, p10, p12, p16, p50, i160, p155, None] -+1961: i164 = force_token() ++1937: i162 = getfield_gc(ConstPtr(ptr55), descr=) ++1950: i163 = int_ge(0, i162) +guard_true(i163, descr=) [p1, p0, p49, i162, p2, p5, p10, p12, p16, i160, p50, p155, None] ++1960: i164 = force_token() debug_merge_point(2, ' #0 LOAD_GLOBAL') -+1961: p165 = getfield_gc(ConstPtr(ptr60), descr=) -+1969: guard_value(p165, ConstPtr(ptr62), descr=) [p1, p0, p49, p165, p2, p5, p10, p12, p16, i164, p50, i160, p155, None] ++1960: p165 = getfield_gc(ConstPtr(ptr60), descr=) ++1968: guard_value(p165, ConstPtr(ptr62), descr=) [p1, p0, p49, p165, p2, p5, p10, p12, p16, i164, i160, p50, p155, None] debug_merge_point(2, ' #3 LOAD_FAST') debug_merge_point(2, ' #6 LOAD_CONST') debug_merge_point(2, ' #9 BINARY_SUBSCR') @@ -1761,147 +1761,147 @@ debug_merge_point(2, ' #26 LOOKUP_METHOD') debug_merge_point(2, ' #29 LOAD_FAST') debug_merge_point(2, ' #32 CALL_METHOD') -+1982: p166 = getfield_gc(ConstPtr(ptr63), descr=) -+1995: guard_class(p166, ConstClass(ObjectDictStrategy), descr=) [p1, p0, p49, p166, p2, p5, p10, p12, p16, i164, p50, i160, p155, None] -+2007: p167 = getfield_gc(ConstPtr(ptr63), descr=) -+2020: i168 = force_token() ++1981: p166 = getfield_gc(ConstPtr(ptr63), descr=) 
++1994: guard_class(p166, ConstClass(ObjectDictStrategy), descr=) [p1, p0, p49, p166, p2, p5, p10, p12, p16, i164, i160, p50, p155, None] ++2006: p167 = getfield_gc(ConstPtr(ptr63), descr=) ++2019: i168 = force_token() p169 = new_array(3, descr=) p170 = new_with_vtable(38637968) -+2112: setfield_gc(p170, i164, descr=) ++2118: setfield_gc(p170, i164, descr=) setfield_gc(p49, p170, descr=) -+2165: setfield_gc(p0, i168, descr=) -+2169: setarrayitem_gc(p169, 0, ConstPtr(ptr73), descr=) -+2177: setarrayitem_gc(p169, 1, ConstPtr(ptr75), descr=) -+2191: setarrayitem_gc(p169, 2, ConstPtr(ptr174), descr=) -+2205: i175 = call_may_force(ConstClass(hash_tuple), p169, descr=) -guard_not_forced(, descr=) [p1, p0, p49, p167, i175, p170, p2, p5, p10, p12, p16, p169, p50, i160, p155] -+2277: guard_no_exception(, descr=) [p1, p0, p49, p167, i175, p170, p2, p5, p10, p12, p16, p169, p50, i160, p155] -+2292: i176 = force_token() ++2171: setfield_gc(p0, i168, descr=) ++2175: setarrayitem_gc(p169, 0, ConstPtr(ptr73), descr=) ++2183: setarrayitem_gc(p169, 1, ConstPtr(ptr75), descr=) ++2197: setarrayitem_gc(p169, 2, ConstPtr(ptr174), descr=) ++2211: i175 = call_may_force(ConstClass(hash_tuple), p169, descr=) +guard_not_forced(, descr=) [p1, p0, p49, p167, i175, p170, p2, p5, p10, p12, p16, p155, p50, i160, p169] ++2276: guard_no_exception(, descr=) [p1, p0, p49, p167, i175, p170, p2, p5, p10, p12, p16, p155, p50, i160, p169] ++2291: i176 = force_token() p177 = new_with_vtable(38549536) -+2362: setfield_gc(p0, i176, descr=) -+2373: setfield_gc(p177, p169, descr=) -+2384: i178 = call_may_force(ConstClass(ll_dict_lookup_trampoline__v693___simple_call__function_l), p167, p177, i175, descr=) -guard_not_forced(, descr=) [p1, p0, p49, p177, i178, p167, p170, p2, p5, p10, p12, p16, p50, i160, p155] -+2442: guard_no_exception(, descr=) [p1, p0, p49, p177, i178, p167, p170, p2, p5, p10, p12, p16, p50, i160, p155] -+2457: i179 = int_and(i178, -9223372036854775808) -+2473: i180 = int_is_true(i179) -guard_false(i180, descr=) [p1, p0, p49, p177, i178, p167, p170, p2, p5, p10, p12, p16, p50, i160, p155] -+2483: p181 = getfield_gc(p167, descr=) -+2494: p182 = getinteriorfield_gc(p181, i178, descr=>) -+2503: guard_nonnull_class(p182, 38793968, descr=) [p1, p0, p49, p177, p182, p170, p2, p5, p10, p12, p16, p50, i160, p155] ++2361: setfield_gc(p0, i176, descr=) ++2372: setfield_gc(p177, p169, descr=) ++2383: i178 = call_may_force(ConstClass(ll_dict_lookup_trampoline__v693___simple_call__function_l), p167, p177, i175, descr=) +guard_not_forced(, descr=) [p1, p0, p49, p177, i178, p167, p170, p2, p5, p10, p12, p16, p155, p50, i160] ++2441: guard_no_exception(, descr=) [p1, p0, p49, p177, i178, p167, p170, p2, p5, p10, p12, p16, p155, p50, i160] ++2456: i179 = int_and(i178, -9223372036854775808) ++2472: i180 = int_is_true(i179) +guard_false(i180, descr=) [p1, p0, p49, p177, i178, p167, p170, p2, p5, p10, p12, p16, p155, p50, i160] ++2482: p181 = getfield_gc(p167, descr=) ++2493: p182 = getinteriorfield_gc(p181, i178, descr=>) ++2502: guard_nonnull_class(p182, 38793968, descr=) [p1, p0, p49, p177, p182, p170, p2, p5, p10, p12, p16, p155, p50, i160] debug_merge_point(2, ' #35 STORE_FAST') debug_merge_point(2, ' #38 LOAD_FAST') debug_merge_point(2, ' #41 LOAD_CONST') debug_merge_point(2, ' #44 COMPARE_OP') -+2521: i183 = instance_ptr_eq(ConstPtr(ptr91), p182) -guard_false(i183, descr=) [p1, p0, p49, p170, p2, p5, p10, p12, p16, p182, p177, p50, i160, p155] ++2520: i183 = instance_ptr_eq(ConstPtr(ptr91), p182) +guard_false(i183, descr=) 
[p1, p0, p49, p170, p2, p5, p10, p12, p16, p182, p177, p155, p50, i160] debug_merge_point(2, ' #47 POP_JUMP_IF_FALSE') debug_merge_point(2, ' #50 LOAD_FAST') debug_merge_point(2, ' #53 RETURN_VALUE') -+2534: p184 = getfield_gc(p49, descr=) -+2545: guard_isnull(p184, descr=) [p1, p0, p49, p182, p184, p170, p2, p5, p10, p12, p16, None, p177, p50, i160, p155] -+2554: i185 = getfield_gc(p49, descr=) -+2558: i186 = int_is_true(i185) -guard_false(i186, descr=) [p1, p0, p49, p182, p170, p2, p5, p10, p12, p16, None, p177, p50, i160, p155] -+2568: p187 = getfield_gc(p49, descr=) ++2533: p184 = getfield_gc(p49, descr=) ++2544: guard_isnull(p184, descr=) [p1, p0, p49, p182, p184, p170, p2, p5, p10, p12, p16, None, p177, p155, p50, i160] ++2553: i185 = getfield_gc(p49, descr=) ++2557: i186 = int_is_true(i185) +guard_false(i186, descr=) [p1, p0, p49, p182, p170, p2, p5, p10, p12, p16, None, p177, p155, p50, i160] ++2567: p187 = getfield_gc(p49, descr=) debug_merge_point(1, ' #12 LOOKUP_METHOD') -+2568: setfield_gc(p170, -3, descr=) ++2567: setfield_gc(p170, -3, descr=) debug_merge_point(1, ' #15 LOAD_FAST') debug_merge_point(1, ' #18 CALL_METHOD') -+2583: guard_not_invalidated(, descr=) [p1, p0, p49, p2, p5, p10, p12, p16, p182, None, p50, i160, p155] -+2583: i189 = strlen(p155) -+2594: i191 = int_gt(9223372036854775807, i189) -guard_true(i191, descr=) [p1, p0, p49, p182, p155, p2, p5, p10, p12, p16, None, None, p50, i160, None] -+2613: p192 = getfield_gc_pure(p182, descr=) -+2617: i193 = getfield_gc_pure(p182, descr=) -+2621: i194 = getarrayitem_gc_pure(p192, 0, descr=) -+2625: i195 = int_eq(i194, 17) -guard_true(i195, descr=) [p1, p0, p49, p182, p2, p5, p10, p12, p16, i193, p192, i189, None, None, p50, i160, p155] -+2635: i196 = getarrayitem_gc_pure(p192, 2, descr=) -+2639: i197 = int_and(i196, 1) -+2646: i198 = int_is_true(i197) -guard_true(i198, descr=) [p1, p0, p49, p182, i196, p2, p5, p10, p12, p16, i193, p192, i189, None, None, p50, i160, p155] -+2656: i199 = getarrayitem_gc_pure(p192, 5, descr=) -+2660: i200 = int_gt(i199, 1) -guard_false(i200, descr=) [p1, p0, p49, p182, p2, p5, p10, p12, p16, i193, p192, i189, None, None, p50, i160, p155] -+2670: i201 = getarrayitem_gc_pure(p192, 1, descr=) -+2674: i202 = int_add(i201, 1) -+2678: i203 = getarrayitem_gc_pure(p192, i202, descr=) -+2683: i204 = int_eq(i203, 19) -guard_true(i204, descr=) [p1, p0, p49, p182, i202, p2, p5, p10, p12, p16, i193, p192, i189, None, None, p50, i160, p155] -+2693: i205 = int_add(i202, 1) -+2700: i206 = getarrayitem_gc_pure(p192, i205, descr=) -+2705: i207 = int_add(i202, 2) -+2709: i209 = int_lt(0, i189) -guard_true(i209, descr=) [p1, p0, p49, p182, i206, i207, p2, p5, p10, p12, p16, i193, p192, i189, None, None, p50, i160, p155] -+2719: guard_value(i207, 11, descr=) [p1, p0, p49, p182, i206, i207, p192, p2, p5, p10, p12, p16, i193, None, i189, None, None, p50, i160, p155] -+2729: guard_value(i206, 51, descr=) [p1, p0, p49, p182, i206, p192, p2, p5, p10, p12, p16, i193, None, i189, None, None, p50, i160, p155] -+2739: guard_value(p192, ConstPtr(ptr133), descr=) [p1, p0, p49, p182, p192, p2, p5, p10, p12, p16, i193, None, i189, None, None, p50, i160, p155] ++2582: guard_not_invalidated(, descr=) [p1, p0, p49, p2, p5, p10, p12, p16, p182, None, p155, p50, i160] ++2582: i189 = strlen(p155) ++2593: i191 = int_gt(9223372036854775807, i189) +guard_true(i191, descr=) [p1, p0, p49, p182, p155, p2, p5, p10, p12, p16, None, None, None, p50, i160] ++2612: p192 = getfield_gc_pure(p182, descr=) ++2616: i193 = getfield_gc_pure(p182, 
descr=) ++2620: i194 = getarrayitem_gc_pure(p192, 0, descr=) ++2624: i195 = int_eq(i194, 17) +guard_true(i195, descr=) [p1, p0, p49, p182, p2, p5, p10, p12, p16, i193, p192, i189, None, None, p155, p50, i160] ++2634: i196 = getarrayitem_gc_pure(p192, 2, descr=) ++2638: i197 = int_and(i196, 1) ++2645: i198 = int_is_true(i197) +guard_true(i198, descr=) [p1, p0, p49, p182, i196, p2, p5, p10, p12, p16, i193, p192, i189, None, None, p155, p50, i160] ++2655: i199 = getarrayitem_gc_pure(p192, 5, descr=) ++2659: i200 = int_gt(i199, 1) +guard_false(i200, descr=) [p1, p0, p49, p182, p2, p5, p10, p12, p16, i193, p192, i189, None, None, p155, p50, i160] ++2669: i201 = getarrayitem_gc_pure(p192, 1, descr=) ++2673: i202 = int_add(i201, 1) ++2677: i203 = getarrayitem_gc_pure(p192, i202, descr=) ++2682: i204 = int_eq(i203, 19) +guard_true(i204, descr=) [p1, p0, p49, p182, i202, p2, p5, p10, p12, p16, i193, p192, i189, None, None, p155, p50, i160] ++2692: i205 = int_add(i202, 1) ++2699: i206 = getarrayitem_gc_pure(p192, i205, descr=) ++2704: i207 = int_add(i202, 2) ++2708: i209 = int_lt(0, i189) +guard_true(i209, descr=) [p1, p0, p49, p182, i206, i207, p2, p5, p10, p12, p16, i193, p192, i189, None, None, p155, p50, i160] ++2718: guard_value(i207, 11, descr=) [p1, p0, p49, p182, i206, i207, p192, p2, p5, p10, p12, p16, i193, None, i189, None, None, p155, p50, i160] ++2728: guard_value(i206, 51, descr=) [p1, p0, p49, p182, i206, p192, p2, p5, p10, p12, p16, i193, None, i189, None, None, p155, p50, i160] ++2738: guard_value(p192, ConstPtr(ptr133), descr=) [p1, p0, p49, p182, p192, p2, p5, p10, p12, p16, i193, None, i189, None, None, p155, p50, i160] debug_merge_point(2, 're StrLiteralSearch at 11/51 [17. 8. 3. 1. 1. 1. 1. 51. 0. 19. 51. 1]') -+2758: i210 = force_token() ++2757: i210 = force_token() p211 = new_with_vtable(38602768) p212 = new_with_vtable(38637968) -+2842: setfield_gc(p212, i160, descr=) ++2841: setfield_gc(p212, i160, descr=) setfield_gc(p49, p212, descr=) -+2893: setfield_gc(p0, i210, descr=) -+2904: setfield_gc(p211, ConstPtr(ptr133), descr=) -+2918: setfield_gc(p211, i193, descr=) -+2922: setfield_gc(p211, i189, descr=) -+2926: setfield_gc(p211, p155, descr=) -+2930: i213 = call_assembler(0, p211, descr=) ++2892: setfield_gc(p0, i210, descr=) ++2903: setfield_gc(p211, ConstPtr(ptr133), descr=) ++2917: setfield_gc(p211, i193, descr=) ++2921: setfield_gc(p211, i189, descr=) ++2925: setfield_gc(p211, p155, descr=) ++2929: i213 = call_assembler(0, p211, descr=) guard_not_forced(, descr=) [p1, p0, p49, p211, p182, i213, p212, p2, p5, p10, p12, p16, p155, p50] -+3023: guard_no_exception(, descr=) [p1, p0, p49, p211, p182, i213, p212, p2, p5, p10, p12, p16, p155, p50] -+3038: guard_false(i213, descr=) [p1, p0, p49, p211, p182, p212, p2, p5, p10, p12, p16, p155, p50] ++3022: guard_no_exception(, descr=) [p1, p0, p49, p211, p182, i213, p212, p2, p5, p10, p12, p16, p155, p50] ++3037: guard_false(i213, descr=) [p1, p0, p49, p211, p182, p212, p2, p5, p10, p12, p16, p155, p50] debug_merge_point(1, ' #21 RETURN_VALUE') -+3047: p214 = getfield_gc(p49, descr=) -+3058: guard_isnull(p214, descr=) [p1, p0, p49, p214, p212, p2, p5, p10, p12, p16, p155, p50] -+3067: i215 = getfield_gc(p49, descr=) -+3071: i216 = int_is_true(i215) ++3046: p214 = getfield_gc(p49, descr=) ++3057: guard_isnull(p214, descr=) [p1, p0, p49, p214, p212, p2, p5, p10, p12, p16, p155, p50] ++3066: i215 = getfield_gc(p49, descr=) ++3070: i216 = int_is_true(i215) guard_false(i216, descr=) [p1, p0, p49, p212, p2, p5, p10, p12, p16, p155, 
p50] -+3081: p217 = getfield_gc(p49, descr=) ++3080: p217 = getfield_gc(p49, descr=) debug_merge_point(0, ' #65 POP_TOP') debug_merge_point(0, ' #66 JUMP_ABSOLUTE') setfield_gc(p49, p50, descr=) -+3115: setfield_gc(p212, -3, descr=) -+3130: guard_not_invalidated(, descr=) [p1, p0, p2, p5, p10, p12, p16, p155, None] -+3130: i219 = getfield_raw(44057928, descr=) -+3138: i220 = int_lt(i219, 0) ++3118: setfield_gc(p212, -3, descr=) ++3133: guard_not_invalidated(, descr=) [p1, p0, p2, p5, p10, p12, p16, p155, None] ++3133: i219 = getfield_raw(44057928, descr=) ++3141: i220 = int_lt(i219, 0) guard_false(i220, descr=) [p1, p0, p2, p5, p10, p12, p16, p155, None] debug_merge_point(0, ' #44 FOR_ITER') -+3148: jump(p0, p1, p2, p5, p10, p12, p155, p16, i215, p49, p50, descr=TargetToken(139705792106272)) -+3159: --end of the loop-- -[101d93796427b] jit-log-opt-loop} -[101d937abc21f] {jit-backend -[101d937ad8b53] {jit-backend-dump ++3151: jump(p0, p1, p2, p5, p10, p12, p155, p16, i215, p49, p50, descr=TargetToken(140616493870880)) ++3162: --end of the loop-- +[88d37fb8f32] jit-log-opt-loop} +[88d381268a6] {jit-backend +[88d3814d7d2] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc744a7b0 +0 488DA50000000049BBA0E221CA0F7F00004D8B3B4983C70149BBA0E221CA0F7F00004D893B4C8B7E404D0FB67C3F184983FF330F84000000004883C7014C8B7E084C39FF0F8C00000000B80000000048890425D0D1550141BBD01BF30041FFD3B802000000488D65D8415F415E415D415C5B5DC349BB007044C70F7F000041FFD31D1803AD00000049BB007044C70F7F000041FFD31D1803AE000000 -[101d937add077] jit-backend-dump} -[101d937add7ab] {jit-backend-addr -bridge out of Guard 90 has address 7f0fc744a7b0 to 7f0fc744a824 -[101d937ade67b] jit-backend-addr} -[101d937adee83] {jit-backend-dump +CODE_DUMP @7fe3d152d7b4 +0 488DA50000000049BBA01230D4E37F00004D8B3B4983C70149BBA01230D4E37F00004D893B4C8B7E404D0FB67C3F184983FF330F84000000004883C7014C8B7E084C39FF0F8C00000000B80000000048890425D0D1550141BBD01BF30041FFD3B802000000488D65D8415F415E415D415C5B5DC349BB00A052D1E37F000041FFD31D1803AD00000049BB00A052D1E37F000041FFD31D1803AE000000 +[88d3815c4d6] jit-backend-dump} +[88d3815d0fe] {jit-backend-addr +bridge out of Guard 90 has address 7fe3d152d7b4 to 7fe3d152d828 +[88d3815e662] jit-backend-addr} +[88d3815f21a] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc744a7b3 +0 70FFFFFF -[101d937adfeaf] jit-backend-dump} -[101d937ae063b] {jit-backend-dump +CODE_DUMP @7fe3d152d7b7 +0 70FFFFFF +[88d38160be6] jit-backend-dump} +[88d3816198e] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc744a7e5 +0 3B000000 -[101d937ae1417] jit-backend-dump} -[101d937ae19a3] {jit-backend-dump +CODE_DUMP @7fe3d152d7e9 +0 3B000000 +[88d38162d9e] jit-backend-dump} +[88d3816370e] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc744a7f6 +0 3E000000 -[101d937ae25df] jit-backend-dump} -[101d937ae2deb] {jit-backend-dump +CODE_DUMP @7fe3d152d7fa +0 3E000000 +[88d381649fe] jit-backend-dump} +[88d381658ce] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc7448e10 +0 9C190000 -[101d937ae3a13] jit-backend-dump} -[101d937ae42d7] jit-backend} -[101d937ae50ff] {jit-log-opt-bridge +CODE_DUMP @7fe3d152be0b +0 A5190000 +[88d38166bda] jit-backend-dump} +[88d38167ac2] jit-backend} +[88d381690da] {jit-log-opt-bridge # bridge out of 
Guard 90 with 10 ops [i0, p1] debug_merge_point(0, 're StrLiteralSearch at 11/51 [17. 8. 3. 1. 1. 1. 1. 51. 0. 19. 51. 1]') @@ -1915,87 +1915,87 @@ guard_false(i9, descr=) [i7, p1] +74: finish(0, descr=) +116: --end of the loop-- -[101d937af18c7] jit-log-opt-bridge} -[101d9382744bb] {jit-backend -[101d9382b13eb] {jit-backend-dump +[88d381785aa] jit-log-opt-bridge} +[88d388ceffe] {jit-backend +[88d3890f72a] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc744a864 +0 488DA50000000049BBB8E221CA0F7F00004D8B3B4983C70149BBB8E221CA0F7F00004D893B4C8BBD00FFFFFF4D8B77504D85F60F85000000004D8B77284983FE000F85000000004C8BB5E8FEFFFF41F6470401740F4C89FF4C89F641BBF0C4C50041FFD34D8977404C8BB5B8FEFFFF49C74608FDFFFFFF4C8B34254845A0024983FE000F8C00000000488B042530255601488D5010483B142548255601761A49BB2D7244C70F7F000041FFD349BBC27244C70F7F000041FFD3488914253025560148C70088250000488B9508FFFFFF4889500849BB28DC58C70F7F00004D89DE41BD0000000041BA0400000048C78548FFFFFF2C00000048898538FFFFFF488B8D10FFFFFF48C78530FFFFFF0000000048C78528FFFFFF0000000048C78520FFFFFF0000000048C78518FFFFFF0000000049BB869144C70F7F000041FFE349BB007044C70F7F000041FFD34C483C389C0140504458709401749801840103AF00000049BB007044C70F7F000041FFD34C483C9C0140504458709401749801840103B000000049BB007044C70F7F000041FFD34C4840504458700774070703B100000049BB007044C70F7F000041FFD34C4840504458700774070703B2000000 -[101d9382b8807] jit-backend-dump} -[101d9382b8fa3] {jit-backend-addr -bridge out of Guard 133 has address 7f0fc744a864 to 7f0fc744a9a2 -[101d9382b9c93] jit-backend-addr} -[101d9382ba69f] {jit-backend-dump +CODE_DUMP @7fe3d152d868 +0 488DA50000000049BBB81230D4E37F00004D8B3B4983C70149BBB81230D4E37F00004D893B4C8BBD00FFFFFF4D8B77504D85F60F85000000004D8B77284983FE000F85000000004C8BB5F0FEFFFF41F6470401740F4C89FF4C89F641BBF0C4C50041FFD34D8977404C8BB5C0FEFFFF49C74608FDFFFFFF4C8B34254845A0024983FE000F8C00000000488B042530255601488D5010483B142548255601761A49BB2DA252D1E37F000041FFD349BBC2A252D1E37F000041FFD3488914253025560148C70088250000488B9510FFFFFF4889500849BB281C67D1E37F00004D89DE41BD0000000041BA0400000048C78548FFFFFF2C00000048898538FFFFFF488B8D08FFFFFF48C78530FFFFFF0000000048C78528FFFFFF0000000048C78520FFFFFF0000000048C78518FFFFFF0000000049BB83C152D1E37F000041FFE349BB00A052D1E37F000041FFD34C483C3898014050445874709C019401800103AF00000049BB00A052D1E37F000041FFD34C483C98014050445874709C019401800103B000000049BB00A052D1E37F000041FFD34C4840504458747007070703B100000049BB00A052D1E37F000041FFD34C4840504458747007070703B2000000 +[88d38916aea] jit-backend-dump} +[88d3891725a] {jit-backend-addr +bridge out of Guard 133 has address 7fe3d152d868 to 7fe3d152d9a6 +[88d38917fde] jit-backend-addr} +[88d3891888e] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc744a867 +0 E0FDFFFF -[101d9382bb6b3] jit-backend-dump} -[101d9382bbfeb] {jit-backend-dump +CODE_DUMP @7fe3d152d86b +0 E0FDFFFF +[88d389198b6] jit-backend-dump} +[88d3891a142] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc744a899 +0 05010000 -[101d9382c799f] jit-backend-dump} -[101d9382c81c7] {jit-backend-dump +CODE_DUMP @7fe3d152d89d +0 05010000 +[88d3891aff2] jit-backend-dump} +[88d3891b5ea] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc744a8a7 +0 1B010000 -[101d9382c8f8b] jit-backend-dump} -[101d9382c9667] {jit-backend-dump +CODE_DUMP @7fe3d152d8ab +0 
1B010000 +[88d3891c4b2] jit-backend-dump} +[88d3891cca2] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc744a8e9 +0 19010000 -[101d9382ca3a7] jit-backend-dump} -[101d9382cab93] {jit-backend-dump +CODE_DUMP @7fe3d152d8ed +0 19010000 +[88d3891dad6] jit-backend-dump} +[88d3891e1ea] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc74496ee +0 72110000 -[101d9382cba6f] jit-backend-dump} -[101d9382cc4f3] jit-backend} -[101d9382cd433] {jit-log-opt-bridge +CODE_DUMP @7fe3d152c6eb +0 79110000 +[88d3891ee4e] jit-backend-dump} +[88d3891f6b2] jit-backend} +[88d3892047a] {jit-log-opt-bridge # bridge out of Guard 133 with 19 ops [p0, p1, p2, p3, p4, p5, p6, p7, p8, p9, p10, p11, p12] debug_merge_point(1, ' #21 RETURN_VALUE') +37: p13 = getfield_gc(p2, descr=) -+48: guard_isnull(p13, descr=) [p0, p1, p2, p13, p5, p6, p7, p8, p9, p10, p4, p12, p3, p11] ++48: guard_isnull(p13, descr=) [p0, p1, p2, p13, p5, p6, p7, p8, p9, p10, p11, p3, p4, p12] +57: i14 = getfield_gc(p2, descr=) +61: i15 = int_is_true(i14) -guard_false(i15, descr=) [p0, p1, p2, p5, p6, p7, p8, p9, p10, p4, p12, p3, p11] +guard_false(i15, descr=) [p0, p1, p2, p5, p6, p7, p8, p9, p10, p11, p3, p4, p12] +71: p16 = getfield_gc(p2, descr=) debug_merge_point(0, ' #65 POP_TOP') debug_merge_point(0, ' #66 JUMP_ABSOLUTE') -setfield_gc(p2, p11, descr=) +setfield_gc(p2, p12, descr=) +104: setfield_gc(p5, -3, descr=) -+119: guard_not_invalidated(, descr=) [p0, p1, p6, p7, p8, p9, p10, None, p12, None, None] ++119: guard_not_invalidated(, descr=) [p0, p1, p6, p7, p8, p9, p10, p11, None, None, None] +119: i20 = getfield_raw(44057928, descr=) +127: i22 = int_lt(i20, 0) -guard_false(i22, descr=) [p0, p1, p6, p7, p8, p9, p10, None, p12, None, None] +guard_false(i22, descr=) [p0, p1, p6, p7, p8, p9, p10, p11, None, None, None] debug_merge_point(0, ' #44 FOR_ITER') p24 = new_with_vtable(ConstClass(W_StringObject)) -+200: setfield_gc(p24, p12, descr=) -+211: jump(p1, p0, p6, ConstPtr(ptr25), 0, p7, 4, 44, p8, p9, p24, p10, ConstPtr(ptr29), ConstPtr(ptr30), ConstPtr(ptr30), ConstPtr(ptr30), descr=TargetToken(139705792106192)) ++200: setfield_gc(p24, p11, descr=) ++211: jump(p1, p0, p6, ConstPtr(ptr25), 0, p7, 4, 44, p8, p9, p24, p10, ConstPtr(ptr29), ConstPtr(ptr30), ConstPtr(ptr30), ConstPtr(ptr30), descr=TargetToken(140616493870800)) +318: --end of the loop-- -[101d9382ed8b7] jit-log-opt-bridge} -[101d93832def7] {jit-backend -[101d93833f3c7] {jit-backend-dump +[88d3894d382] jit-log-opt-bridge} +[88d38996006] {jit-backend +[88d389a7342] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc744aa23 +0 488DA50000000049BBD0E221CA0F7F00004D8B3B4983C70149BBD0E221CA0F7F00004D893B4989FF4883C70148897E1848C74620000000004C897E28B80100000048890425D0D1550141BBD01BF30041FFD3B802000000488D65D8415F415E415D415C5B5DC3 -[101d938342dd7] jit-backend-dump} -[101d93834346f] {jit-backend-addr -bridge out of Guard 87 has address 7f0fc744aa23 to 7f0fc744aa89 -[101d938349387] jit-backend-addr} -[101d938349d3f] {jit-backend-dump +CODE_DUMP @7fe3d152da27 +0 488DA50000000049BBD01230D4E37F00004D8B3B4983C70149BBD01230D4E37F00004D893B4989FF4883C70148897E1848C74620000000004C897E28B80100000048890425D0D1550141BBD01BF30041FFD3B802000000488D65D8415F415E415D415C5B5DC3 +[88d389aac8e] jit-backend-dump} +[88d389ab33e] {jit-backend-addr +bridge out of Guard 87 has address 7fe3d152da27 to 7fe3d152da8d 
+[88d389abf7e] jit-backend-addr} +[88d389ac726] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc744aa26 +0 70FFFFFF -[101d93834addf] jit-backend-dump} -[101d93834b6fb] {jit-backend-dump +CODE_DUMP @7fe3d152da2a +0 70FFFFFF +[88d389ad662] jit-backend-dump} +[88d389adf0a] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc7448d53 +0 CC1C0000 -[101d93834c5a7] jit-backend-dump} -[101d93834cf63] jit-backend} -[101d93834dd27] {jit-log-opt-bridge +CODE_DUMP @7fe3d152bd4e +0 D51C0000 +[88d389aeb9a] jit-backend-dump} +[88d389af402] jit-backend} +[88d389aff4e] {jit-log-opt-bridge # bridge out of Guard 87 with 5 ops [i0, p1] +37: i3 = int_add(i0, 1) @@ -2004,28 +2004,28 @@ +56: setfield_gc(p1, i0, descr=) +60: finish(1, descr=) +102: --end of the loop-- -[101d93835cbfb] jit-log-opt-bridge} -[101d9384f8b1f] {jit-backend -[101d938509147] {jit-backend-dump +[88d389b7b3e] jit-log-opt-bridge} +[88d38b4ff0a] {jit-backend +[88d38b60436] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc744aa89 +0 488DA50000000049BBE8E221CA0F7F00004D8B3B4983C70149BBE8E221CA0F7F00004D893B4989FF4883C70148897E1848C74620000000004C897E28B80100000048890425D0D1550141BBD01BF30041FFD3B802000000488D65D8415F415E415D415C5B5DC3 -[101d93850cbdb] jit-backend-dump} -[101d93850d29f] {jit-backend-addr -bridge out of Guard 89 has address 7f0fc744aa89 to 7f0fc744aaef -[101d93850de5b] jit-backend-addr} -[101d93850e5c7] {jit-backend-dump +CODE_DUMP @7fe3d152da8d +0 488DA50000000049BBE81230D4E37F00004D8B3B4983C70149BBE81230D4E37F00004D893B4989FF4883C70148897E1848C74620000000004C897E28B80100000048890425D0D1550141BBD01BF30041FFD3B802000000488D65D8415F415E415D415C5B5DC3 +[88d38b63f96] jit-backend-dump} +[88d38b6460a] {jit-backend-addr +bridge out of Guard 89 has address 7fe3d152da8d to 7fe3d152daf3 +[88d38b651b6] jit-backend-addr} +[88d38b658ea] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc744aa8c +0 70FFFFFF -[101d93850f3e3] jit-backend-dump} -[101d93850fc33] {jit-backend-dump +CODE_DUMP @7fe3d152da90 +0 70FFFFFF +[88d38b66806] jit-backend-dump} +[88d38b6708e] {jit-backend-dump BACKEND x86_64 SYS_EXECUTABLE /home/fijal/Downloads/pypy-1.8/pypy-1.8/bin/pypy -CODE_DUMP @7f0fc7448dff +0 861C0000 -[101d938510adf] jit-backend-dump} -[101d93851136f] jit-backend} -[101d938511de3] {jit-log-opt-bridge +CODE_DUMP @7fe3d152bdfa +0 8F1C0000 +[88d38b6ea8a] jit-backend-dump} +[88d38b6f412] jit-backend} +[88d38b700ca] {jit-log-opt-bridge # bridge out of Guard 89 with 5 ops [i0, p1] +37: i3 = int_add(i0, 1) @@ -2034,29 +2034,29 @@ +56: setfield_gc(p1, i0, descr=) +60: finish(1, descr=) +102: --end of the loop-- -[101d938518e47] jit-log-opt-bridge} -[101d93857d417] {jit-backend-counts +[88d38b77c9a] jit-log-opt-bridge} +[88d38be0b4e] {jit-backend-counts entry 0:4647 -TargetToken(139705745523760):4647 -TargetToken(139705745523840):9292 +TargetToken(140616447296560):4647 +TargetToken(140616447296640):9292 entry 1:201 -TargetToken(139705745528160):201 -TargetToken(139705745528240):4468 +TargetToken(140616447300960):201 +TargetToken(140616447301040):4468 bridge 16:4446 bridge 33:4268 -TargetToken(139705745530240):4268 +TargetToken(140616447303040):4268 entry 2:1 -TargetToken(139705792105152):1 -TargetToken(139705792105232):1938 +TargetToken(140616493869760):1 +TargetToken(140616493869840):1938 entry 3:3173 bridge 
85:2882 bridge 88:2074 bridge 86:158 entry 4:377 -TargetToken(139705792106192):527 -TargetToken(139705792106272):1411 +TargetToken(140616493870800):527 +TargetToken(140616493870880):1411 bridge 90:1420 bridge 133:150 bridge 87:50 bridge 89:7 -[101d938585943] jit-backend-counts} +[88d38be91e2] jit-backend-counts} From noreply at buildbot.pypy.org Wed Feb 29 21:56:22 2012 From: noreply at buildbot.pypy.org (fijal) Date: Wed, 29 Feb 2012 21:56:22 +0100 (CET) Subject: [pypy-commit] extradoc extradoc: add numpy example Message-ID: <20120229205622.9E7208204C@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: extradoc Changeset: r4108:a65e0376519b Date: 2012-02-29 12:55 -0800 http://bitbucket.org/pypy/extradoc/changeset/a65e0376519b/ Log: add numpy example diff --git a/talk/pycon2012/tutorial/examples.rst b/talk/pycon2012/tutorial/examples.rst --- a/talk/pycon2012/tutorial/examples.rst +++ b/talk/pycon2012/tutorial/examples.rst @@ -1,12 +1,20 @@ * Refcount example, where it won't work + 01_refcount + * A simple speedup example and show performance + 02_speedup + * Show memory consumption how it grows for tight instances + 03_memory + * Some numpy example (?) + 04_numpy + * An example how to use execnet * An example how to use matplotlib diff --git a/talk/pycon2012/tutorial/examples/04_numpy.py b/talk/pycon2012/tutorial/examples/04_numpy.py new file mode 100644 --- /dev/null +++ b/talk/pycon2012/tutorial/examples/04_numpy.py @@ -0,0 +1,15 @@ +try: + import numpypy +except ImportError: + pass +import numpy + +def f(): + a = numpy.zeros(10000, dtype=float) + b = numpy.zeros(10000, dtype=float) + c = numpy.zeros(10000, dtype=float) + (a + b)[0] = 3 + (a + b * c)[0] = 3 + (a + b * c).sum() + +f() From noreply at buildbot.pypy.org Wed Feb 29 21:56:23 2012 From: noreply at buildbot.pypy.org (fijal) Date: Wed, 29 Feb 2012 21:56:23 +0100 (CET) Subject: [pypy-commit] extradoc extradoc: merge Message-ID: <20120229205623.D135D8204C@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: extradoc Changeset: r4109:e8ce2e0db76c Date: 2012-02-29 12:56 -0800 http://bitbucket.org/pypy/extradoc/changeset/e8ce2e0db76c/ Log: merge diff --git a/planning/stm.txt b/planning/stm.txt --- a/planning/stm.txt +++ b/planning/stm.txt @@ -75,8 +75,8 @@ use 4-5 bits, where in addition we use some "thread hash" value if there is only one copy. -<< NOW: think of a minimal GC model with these properties. We probably -need GC_GLOBAL, a single bit of GC_WAS_COPIED, and the version number. >> +<< NOW: implemented a minimal GC model with these properties. We have +GC_GLOBAL, a single bit of GC_WAS_COPIED, and the version number. >> stm_read @@ -102,9 +102,8 @@ depending on cases). And if the read is accepted then we need to remember in a local list that we've read that object. -<< NOW: implement the thread's local dictionary in C, as say a search -tree. Should be easy enough if we don't try to be as efficient as -possible. The rest of the logic here is straightforward. >> +<< NOW: the thread's local dictionary is in C, as a search tree. +The rest of the logic here is straightforward. >> stm_write @@ -124,7 +123,7 @@ consistent copy (i.e. nobody changed the object in the middle of us reading it). If it is too recent, then we might have to abort. -<< NOW: straightforward >> +<< NOW: done, straightforward >> TODO: how do we handle MemoryErrors when making a local copy?? 
 Maybe force the transaction to abort, and then re-raise MemoryError
@@ -147,8 +146,7 @@
 We need to check that each of these global objects' versions have not
 been modified in the meantime.
 
-<< NOW: should be easy, but with unclear interactions between the C
-code and the GC. >>
+<< NOW: done, kind of easy >>
 
 
 Annotator support
@@ -167,7 +165,10 @@
 of a localobj are themselves localobjs. This would be useful for
 'PyFrame.fastlocals_w': it should also be known to always be a localobj.
 
-<< do later >>
+<< NOW: done in the basic form by translator/stm/transform.py.
+Runs late (just before C databasing). Should work well enough to
+remove the maximum number of write barriers, but still missing
+PyFrame.fastlocals_w. >>
 
 
 Local collections
@@ -243,7 +244,9 @@
 << at first the global area keeps growing unboundedly. The next step
 will be to add the LIL but run the global collection by keeping all
-other threads blocked. >>
+other threads blocked. NOW: think about, at least, doing "minor
+collections" on the global area *before* we even start running
+transactions. >>
 
 
 When not running transactively
@@ -267,19 +270,10 @@
 is called, we can try to do such a collection, but what about the
 pinned objects?
 
-<< NOW: let this mode be rather slow. Two solutions are considered:
-
-  1. we would have only global objects, and have the stm_write barrier
-     of 'obj' return 'obj'. Do only global collections (once we have
-     them; at first, don't collect at all). Allocation would allocate
-     immediately a global object, without being able to benefit from
-     bump-pointer allocation.
-
-  2. allocate in a nursery, never collected for now; but just do an
-     end-of-transaction collection when transaction.run() is first
-     called.
-
->>
+<< NOW: the global area is just the "nursery" for the main thread.
+stm_writebarrier of 'obj' return 'obj' in the main thread. All
+allocations get us directly a global object, but allocated from
+the "nursery" of the main thread, with bump-pointer allocation. >>
 
 
 Pointer equality
@@ -296,8 +290,11 @@
 dictionary if they map to each other. And we need to take care of the
 cases of NULL pointers.
 
-<< NOW: straightforward, if we're careful not to forget cases >>
-
+<< NOW: done, without needing the local dictionary:
+stm_normalize_global(obj) returns globalobj if obj is a local,
+WAS_COPIED object. Then a pointer comparison 'x == y' becomes
+stm_normalize_global(x) == stm_normalize_global(y). Moreover
+the call to stm_normalize_global() can be omitted for constants. >>
diff --git a/talk/pycon2012/tutorial/slides.rst b/talk/pycon2012/tutorial/slides.rst
--- a/talk/pycon2012/tutorial/slides.rst
+++ b/talk/pycon2012/tutorial/slides.rst
@@ -1,3 +1,44 @@
+First rule of optimization?
+===========================
+
+|pause|
+
+If it's not correct, it doesn't matter.
+
+Second rule of optimization?
+============================
+
+|pause|
+
+If it's not faster, you're wasting time.
+
+Third rule of optimization?
+===========================
+
+|pause|
+
+Measure twice, cut once.
+
+(C)Python performance tricks
+============================
+
+|pause|
+
+* ``map()`` instead of list comprehensions
+
+* ``def f(int=int):``, make globals local
+
+* ``append = my_list.append``, grab bound methods outside loop
+
+* Avoiding function calls
+
+Forget these
+============
+
+* PyPy has totally different performance characteristics
+
+* Which we're going to learn about now
+
 Why PyPy?
 =========
@@ -41,7 +82,7 @@
 
 * moving computations to C, example::
 
-    map(operator....
) # XXX some obscure example + map(operator.attrgetter("a"), my_list) PyPy's sweetpot =============== From noreply at buildbot.pypy.org Wed Feb 29 23:00:04 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Wed, 29 Feb 2012 23:00:04 +0100 (CET) Subject: [pypy-commit] pypy py3k: kill hex and oct from the baseobjectspace method table, and add them only in the flow objspace Message-ID: <20120229220004.37BB28204C@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r53026:6bfa14832536 Date: 2012-02-29 14:10 +0100 http://bitbucket.org/pypy/pypy/changeset/6bfa14832536/ Log: kill hex and oct from the baseobjectspace method table, and add them only in the flow objspace diff --git a/pypy/interpreter/baseobjspace.py b/pypy/interpreter/baseobjspace.py --- a/pypy/interpreter/baseobjspace.py +++ b/pypy/interpreter/baseobjspace.py @@ -1528,10 +1528,6 @@ ('neg', 'neg', 1, ['__neg__']), ('nonzero', 'truth', 1, ['__bool__']), ('abs' , 'abs', 1, ['__abs__']), - # hex and oct no longer calls special methods in py3k, but we need to keep - # them in this table for the flow object space - ('hex', 'hex', 1, []), - ('oct', 'oct', 1, []), ('ord', 'ord', 1, []), ('invert', '~', 1, ['__invert__']), ('add', '+', 2, ['__add__', '__radd__']), diff --git a/pypy/objspace/flow/operation.py b/pypy/objspace/flow/operation.py --- a/pypy/objspace/flow/operation.py +++ b/pypy/objspace/flow/operation.py @@ -19,6 +19,9 @@ ('getslice', 'getslice', 3, ['__getslice__']), ('setslice', 'setslice', 4, ['__setslice__']), ('delslice', 'delslice', 3, ['__delslice__']), + # hex and oct no longer calls special methods in py3k + ('hex', 'hex', 1, []), + ('oct', 'oct', 1, []), ]) From noreply at buildbot.pypy.org Wed Feb 29 23:00:05 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Wed, 29 Feb 2012 23:00:05 +0100 (CET) Subject: [pypy-commit] pypy py3k: kill support for ordering arbitrary objects. It makes test_unordeable_types passing Message-ID: <20120229220005.6C9798204C@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r53027:082078dbfca5 Date: 2012-02-29 14:31 +0100 http://bitbucket.org/pypy/pypy/changeset/082078dbfca5/ Log: kill support for ordering arbitrary objects. It makes test_unordeable_types passing diff --git a/pypy/objspace/descroperation.py b/pypy/objspace/descroperation.py --- a/pypy/objspace/descroperation.py +++ b/pypy/objspace/descroperation.py @@ -530,46 +530,12 @@ return space.wrap(-1) if space.is_w(w_obj2, space.w_None): return space.wrap(1) - if space.is_w(w_typ1, w_typ2): - #print "WARNING, comparison by address!" - lt = _id_cmpr(space, w_obj1, w_obj2, symbol) else: - #print "WARNING, comparison by type name!" - - # the CPython rule is to compare type names; numbers are - # smaller. 
So we compare the types by the following key: - # (not_a_number_flag, type_name, type_id) - num1 = number_check(space, w_obj1) - num2 = number_check(space, w_obj2) - if num1 != num2: - lt = num1 # if obj1 is a number, it is Lower Than obj2 - else: - name1 = w_typ1.getname(space, "") - name2 = w_typ2.getname(space, "") - if name1 != name2: - lt = name1 < name2 - else: - lt = _id_cmpr(space, w_typ1, w_typ2, symbol) - if lt: - return space.wrap(-1) - else: - return space.wrap(1) - -def _id_cmpr(space, w_obj1, w_obj2, symbol): - if symbol == "==": - return not space.is_w(w_obj1, w_obj2) - elif symbol == "!=": - return space.is_w(w_obj1, w_obj2) - w_id1 = space.id(w_obj1) - w_id2 = space.id(w_obj2) - return space.is_true(space.lt(w_id1, w_id2)) - - -def number_check(space, w_obj): - # avoid this as much as possible. It checks if w_obj "looks like" - # it might be a number-ish thing. - return (space.lookup(w_obj, '__int__') is not None or - space.lookup(w_obj, '__float__') is not None) + typename1 = space.type(w_obj1).getname(space) + typename2 = space.type(w_obj2).getname(space) + raise operationerrfmt(space.w_TypeError, + "unorderable types: %s %s %s", + typename1, symbol, typename2) # regular methods def helpers From noreply at buildbot.pypy.org Wed Feb 29 23:00:06 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Wed, 29 Feb 2012 23:00:06 +0100 (CET) Subject: [pypy-commit] pypy py3k: make sure that we cannot compare with None either Message-ID: <20120229220006.9F0A58204C@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r53028:182e7d13839c Date: 2012-02-29 14:34 +0100 http://bitbucket.org/pypy/pypy/changeset/182e7d13839c/ Log: make sure that we cannot compare with None either diff --git a/pypy/objspace/descroperation.py b/pypy/objspace/descroperation.py --- a/pypy/objspace/descroperation.py +++ b/pypy/objspace/descroperation.py @@ -526,10 +526,6 @@ # fall back to internal rules if space.is_w(w_obj1, w_obj2): return space.wrap(0) - if space.is_w(w_obj1, space.w_None): - return space.wrap(-1) - if space.is_w(w_obj2, space.w_None): - return space.wrap(1) else: typename1 = space.type(w_obj1).getname(space) typename2 = space.type(w_obj2).getname(space) diff --git a/pypy/objspace/test/test_descroperation.py b/pypy/objspace/test/test_descroperation.py --- a/pypy/objspace/test/test_descroperation.py +++ b/pypy/objspace/test/test_descroperation.py @@ -301,13 +301,13 @@ raises(AttributeError, getattr, x, 'a') def test_unordeable_types(self): - # incomparable objects sort by type name :-/ class A(object): pass class zz(object): pass raises(TypeError, "A() < zz()") raises(TypeError, "zz() > A()") raises(TypeError, "A() < A()") - # if in doubt, CPython sorts numbers before non-numbers + raises(TypeError, "A() < None") + raises(TypeError, "None < A()") raises(TypeError, "0 < ()") raises(TypeError, "0.0 < ()") raises(TypeError, "0j < ()") From noreply at buildbot.pypy.org Wed Feb 29 23:00:07 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Wed, 29 Feb 2012 23:00:07 +0100 (CET) Subject: [pypy-commit] pypy py3k: make sure that we can access the correct locals when evaluating the stmt inside 'raises' with -A Message-ID: <20120229220007.D44EF8204C@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r53029:e8b1a2d32c9b Date: 2012-02-29 15:30 +0100 http://bitbucket.org/pypy/pypy/changeset/e8b1a2d32c9b/ Log: make sure that we can access the correct locals when evaluating the stmt inside 'raises' with -A diff --git a/pypy/conftest.py b/pypy/conftest.py 
--- a/pypy/conftest.py +++ b/pypy/conftest.py @@ -201,6 +201,7 @@ if python is None: py.test.skip("Cannot find the default python3 interpreter to run with -A") helpers = r"""if 1: + import sys def skip(message): print(message) raise SystemExit(0) @@ -213,7 +214,8 @@ # it's probably an indented block, so we prefix if True: # to avoid SyntaxError func = "if True:\n" + func - exec(func) + frame = sys._getframe(1) + exec(func, frame.f_globals, frame.f_locals) else: func(*args, **kwargs) except exc as e: diff --git a/pypy/tool/pytest/test/conftest1_innertest.py b/pypy/tool/pytest/test/conftest1_innertest.py --- a/pypy/tool/pytest/test/conftest1_innertest.py +++ b/pypy/tool/pytest/test/conftest1_innertest.py @@ -31,3 +31,7 @@ def test_method(self): assert self.space +def app_test_raise_in_a_closure(): + def f(x): + raises(AttributeError, "x.foo") + f(42) diff --git a/pypy/tool/pytest/test/test_conftest1.py b/pypy/tool/pytest/test/test_conftest1.py --- a/pypy/tool/pytest/test/test_conftest1.py +++ b/pypy/tool/pytest/test/test_conftest1.py @@ -60,3 +60,11 @@ assert "test_code_in_docstring_ignored" in passed[0].nodeid assert "app_test_code_in_docstring_failing" in failed[0].nodeid assert "test_code_in_docstring_failing" in failed[1].nodeid + + def test_raises_inside_closure(self, testdir): + sorter = testdir.inline_run(innertest, '-k', 'app_test_raise_in_a_closure', + '--runappdirect') + passed, skipped, failed = sorter.listoutcomes() + assert len(passed) == 1 + print passed + assert "app_test_raise_in_a_closure" in passed[0].nodeid From noreply at buildbot.pypy.org Wed Feb 29 23:00:09 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Wed, 29 Feb 2012 23:00:09 +0100 (CET) Subject: [pypy-commit] pypy py3k: kill space.cmp and all the logic to look for __cmp__, which is gone in py3k; actually, space.cmp is still there (raising NotImplementedError) because we still need to kill it from the method table. test_descroperation still passes Message-ID: <20120229220009.13EE58204C@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r53030:8676582d3cb4 Date: 2012-02-29 16:14 +0100 http://bitbucket.org/pypy/pypy/changeset/8676582d3cb4/ Log: kill space.cmp and all the logic to look for __cmp__, which is gone in py3k; actually, space.cmp is still there (raising NotImplementedError) because we still need to kill it from the method table. test_descroperation still passes diff --git a/pypy/objspace/descroperation.py b/pypy/objspace/descroperation.py --- a/pypy/objspace/descroperation.py +++ b/pypy/objspace/descroperation.py @@ -441,23 +441,7 @@ space.get_and_call_function(w_del, w_obj) def cmp(space, w_v, w_w): - - if space.is_w(w_v, w_w): - return space.wrap(0) - - # The real comparison - if space.is_w(space.type(w_v), space.type(w_w)): - # for object of the same type, prefer __cmp__ over rich comparison. - w_cmp = space.lookup(w_v, '__cmp__') - w_res = _invoke_binop(space, w_cmp, w_v, w_w) - if w_res is not None: - return w_res - # fall back to rich comparison. 
- if space.eq_w(w_v, w_w): - return space.wrap(0) - elif space.is_true(space.lt(w_v, w_w)): - return space.wrap(-1) - return space.wrap(1) + raise NotImplementedError def issubtype(space, w_sub, w_type): return space._type_issubtype(w_sub, w_type) @@ -493,46 +477,6 @@ return w_res return None -# helper for invoking __cmp__ - -def _conditional_neg(space, w_obj, flag): - if flag: - return space.neg(w_obj) - else: - return w_obj - -def _cmp(space, w_obj1, w_obj2, symbol): - w_typ1 = space.type(w_obj1) - w_typ2 = space.type(w_obj2) - w_left_src, w_left_impl = space.lookup_in_type_where(w_typ1, '__cmp__') - do_neg1 = False - do_neg2 = True - if space.is_w(w_typ1, w_typ2): - w_right_impl = None - else: - w_right_src, w_right_impl = space.lookup_in_type_where(w_typ2, '__cmp__') - if (w_left_src is not w_right_src - and space.is_true(space.issubtype(w_typ2, w_typ1))): - w_obj1, w_obj2 = w_obj2, w_obj1 - w_left_impl, w_right_impl = w_right_impl, w_left_impl - do_neg1, do_neg2 = do_neg2, do_neg1 - - w_res = _invoke_binop(space, w_left_impl, w_obj1, w_obj2) - if w_res is not None: - return _conditional_neg(space, w_res, do_neg1) - w_res = _invoke_binop(space, w_right_impl, w_obj2, w_obj1) - if w_res is not None: - return _conditional_neg(space, w_res, do_neg2) - # fall back to internal rules - if space.is_w(w_obj1, w_obj2): - return space.wrap(0) - else: - typename1 = space.type(w_obj1).getname(space) - typename2 = space.type(w_obj2).getname(space) - raise operationerrfmt(space.w_TypeError, - "unorderable types: %s %s %s", - typename1, symbol, typename2) - # regular methods def helpers @@ -581,6 +525,12 @@ left, right = specialnames op = getattr(operator, left) def comparison_impl(space, w_obj1, w_obj2): + # for == and !=, we do a quick check for identity. This also + # guarantees that identity implies equality. + if left == '__eq__' or left == '__ne__': + if space.is_w(w_obj1, w_obj2): + return space.wrap(left == '__eq__') + # w_typ1 = space.type(w_obj1) w_typ2 = space.type(w_obj2) w_left_src, w_left_impl = space.lookup_in_type_where(w_typ1, left) @@ -610,10 +560,12 @@ w_res = _invoke_binop(space, w_right_impl, w_obj2, w_obj1) if w_res is not None: return w_res - # fallback: lt(a, b) <= lt(cmp(a, b), 0) ... 
- w_res = _cmp(space, w_first, w_second, symbol) - res = space.int_w(w_res) - return space.wrap(op(res, 0)) + # + typename1 = space.type(w_obj1).getname(space) + typename2 = space.type(w_obj2).getname(space) + raise operationerrfmt(space.w_TypeError, + "unorderable types: %s %s %s", + typename1, symbol, typename2) return func_with_new_name(comparison_impl, 'comparison_%s_impl'%left.strip('_')) From noreply at buildbot.pypy.org Wed Feb 29 23:00:10 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Wed, 29 Feb 2012 23:00:10 +0100 (CET) Subject: [pypy-commit] pypy py3k: kill builtins.cmp Message-ID: <20120229220010.43E1C8204C@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r53031:59e34c5ce798 Date: 2012-02-29 16:21 +0100 http://bitbucket.org/pypy/pypy/changeset/59e34c5ce798/ Log: kill builtins.cmp diff --git a/pypy/module/__builtin__/__init__.py b/pypy/module/__builtin__/__init__.py --- a/pypy/module/__builtin__/__init__.py +++ b/pypy/module/__builtin__/__init__.py @@ -55,7 +55,6 @@ 'repr' : 'operation.repr', 'hash' : 'operation.hash', 'round' : 'operation.round', - 'cmp' : 'operation.cmp', 'divmod' : 'operation.divmod', 'format' : 'operation.format', 'issubclass' : 'abstractinst.app_issubclass', diff --git a/pypy/module/__builtin__/operation.py b/pypy/module/__builtin__/operation.py --- a/pypy/module/__builtin__/operation.py +++ b/pypy/module/__builtin__/operation.py @@ -98,10 +98,6 @@ "Return the identity of an object: id(x) == id(y) if and only if x is y." return space.id(w_object) -def cmp(space, w_x, w_y): - """return 0 when x == y, -1 when x < y and 1 when x > y """ - return space.cmp(w_x, w_y) - def divmod(space, w_x, w_y): """Return the tuple ((x-x%y)/y, x%y). Invariant: div*y + mod == x.""" return space.divmod(w_x, w_y) From noreply at buildbot.pypy.org Wed Feb 29 23:00:11 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Wed, 29 Feb 2012 23:00:11 +0100 (CET) Subject: [pypy-commit] pypy default: ignore also ValueErrors when autoflushing _io files. This is suboptimal, because a ValueError might be an actual bug, but it's the exception which is raised when we try to flush a closed file: this has been reported to happen sometimes with e.g. gzip.GzipFile Message-ID: <20120229220011.6E09F8204C@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: Changeset: r53032:f9f3b57f1300 Date: 2012-02-29 22:59 +0100 http://bitbucket.org/pypy/pypy/changeset/f9f3b57f1300/ Log: ignore also ValueErrors when autoflushing _io files. This is suboptimal, because a ValueError might be an actual bug, but it's the exception which is raised when we try to flush a closed file: this has been reported to happen sometimes with e.g. 
gzip.GzipFile diff --git a/pypy/module/_io/interp_iobase.py b/pypy/module/_io/interp_iobase.py --- a/pypy/module/_io/interp_iobase.py +++ b/pypy/module/_io/interp_iobase.py @@ -326,8 +326,11 @@ try: space.call_method(w_iobase, 'flush') except OperationError, e: - # if it's an IOError, ignore it - if not e.match(space, space.w_IOError): + # if it's an IOError or ValueError, ignore it (ValueError is + # raised if by chance we are trying to flush a file which has + # already been closed) + if not (e.match(space, space.w_IOError) or + e.match(space, space.w_ValueError)): raise diff --git a/pypy/module/_io/test/test_fileio.py b/pypy/module/_io/test/test_fileio.py --- a/pypy/module/_io/test/test_fileio.py +++ b/pypy/module/_io/test/test_fileio.py @@ -178,7 +178,7 @@ space.finish() assert tmpfile.read() == '42' -def test_flush_at_exit_IOError(): +def test_flush_at_exit_IOError_and_ValueError(): from pypy import conftest from pypy.tool.option import make_config, make_objspace @@ -190,7 +190,12 @@ def flush(self): raise IOError + class MyStream2(io.IOBase): + def flush(self): + raise ValueError + s = MyStream() + s2 = MyStream2() import sys; sys._keepalivesomewhereobscure = s """) space.finish() # the IOError has been ignored From noreply at buildbot.pypy.org Wed Feb 29 23:15:02 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Wed, 29 Feb 2012 23:15:02 +0100 (CET) Subject: [pypy-commit] pypy py3k: kill also tests about builtins.cmp Message-ID: <20120229221502.3F59B8204C@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r53033:dfc1189e05b1 Date: 2012-02-29 23:08 +0100 http://bitbucket.org/pypy/pypy/changeset/dfc1189e05b1/ Log: kill also tests about builtins.cmp diff --git a/pypy/module/__builtin__/test/test_builtin.py b/pypy/module/__builtin__/test/test_builtin.py --- a/pypy/module/__builtin__/test/test_builtin.py +++ b/pypy/module/__builtin__/test/test_builtin.py @@ -432,40 +432,6 @@ obj = SomeClass() assert reversed(obj) == 42 - - def test_cmp(self): - assert cmp(9,9) == 0 - assert cmp(0,9) < 0 - assert cmp(9,0) > 0 - assert cmp(b"abc", 12) != 0 - assert cmp("abc", 12) != 0 - - def test_cmp_more(self): - class C(object): - def __eq__(self, other): - return True - def __cmp__(self, other): - raise RuntimeError - c1 = C() - c2 = C() - raises(RuntimeError, cmp, c1, c2) - - def test_cmp_cyclic(self): - if not self.sane_lookup: - skip("underlying Python implementation has insane dict lookup") - if not self.safe_runtimerror: - skip("underlying Python may raise random exceptions on stack ovf") - a = []; a.append(a) - b = []; b.append(b) - from UserList import UserList - c = UserList(); c.append(c) - raises(RuntimeError, cmp, a, b) - raises(RuntimeError, cmp, b, c) - raises(RuntimeError, cmp, c, a) - raises(RuntimeError, cmp, a, c) - # okay, now break the cycles - a.pop(); b.pop(); c.pop() - def test_return_None(self): class X(object): pass x = X() From noreply at buildbot.pypy.org Wed Feb 29 23:15:03 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Wed, 29 Feb 2012 23:15:03 +0100 (CET) Subject: [pypy-commit] pypy py3k: kill the __cmp__ multimethod Message-ID: <20120229221503.806758204C@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r53034:49df6ee6912a Date: 2012-02-29 23:14 +0100 http://bitbucket.org/pypy/pypy/changeset/49df6ee6912a/ Log: kill the __cmp__ multimethod diff --git a/pypy/interpreter/baseobjspace.py b/pypy/interpreter/baseobjspace.py --- a/pypy/interpreter/baseobjspace.py +++ b/pypy/interpreter/baseobjspace.py @@ -1566,7 
+1566,6 @@ ('ne', '!=', 2, ['__ne__', '__ne__']), ('gt', '>', 2, ['__gt__', '__lt__']), ('ge', '>=', 2, ['__ge__', '__le__']), - ('cmp', 'cmp', 2, ['__cmp__']), # rich cmps preferred ('contains', 'contains', 2, ['__contains__']), ('iter', 'iter', 1, ['__iter__']), ('next', 'next', 1, ['__next__']), diff --git a/pypy/interpreter/nestedscope.py b/pypy/interpreter/nestedscope.py --- a/pypy/interpreter/nestedscope.py +++ b/pypy/interpreter/nestedscope.py @@ -32,6 +32,7 @@ self.w_value = None def descr__cmp__(self, space, w_other): + # XXX fix me, cmp is gone other = space.interpclass_w(w_other) if not isinstance(other, Cell): return space.w_NotImplemented diff --git a/pypy/interpreter/typedef.py b/pypy/interpreter/typedef.py --- a/pypy/interpreter/typedef.py +++ b/pypy/interpreter/typedef.py @@ -887,7 +887,6 @@ GeneratorIterator.typedef.acceptable_as_base_class = False Cell.typedef = TypeDef("cell", - __cmp__ = interp2app(Cell.descr__cmp__), __hash__ = None, __reduce__ = interp2app(Cell.descr__reduce__), __setstate__ = interp2app(Cell.descr__setstate__), diff --git a/pypy/objspace/descroperation.py b/pypy/objspace/descroperation.py --- a/pypy/objspace/descroperation.py +++ b/pypy/objspace/descroperation.py @@ -440,9 +440,6 @@ if w_del is not None: space.get_and_call_function(w_del, w_obj) - def cmp(space, w_v, w_w): - raise NotImplementedError - def issubtype(space, w_sub, w_type): return space._type_issubtype(w_sub, w_type) diff --git a/pypy/objspace/flow/operation.py b/pypy/objspace/flow/operation.py --- a/pypy/objspace/flow/operation.py +++ b/pypy/objspace/flow/operation.py @@ -219,7 +219,6 @@ ('inplace_and', inplace_and), ('inplace_or', inplace_or), ('inplace_xor', inplace_xor), - ('cmp', cmp), ('iter', iter), ('next', next), ('get', get), diff --git a/pypy/objspace/std/builtinshortcut.py b/pypy/objspace/std/builtinshortcut.py --- a/pypy/objspace/std/builtinshortcut.py +++ b/pypy/objspace/std/builtinshortcut.py @@ -36,7 +36,7 @@ 'get', 'set', 'delete', # uncommon (except on functions) 'delitem', 'trunc', # rare stuff? 'abs', # rare stuff? - 'pos', 'divmod', 'cmp', # rare stuff? + 'pos', 'divmod', # rare stuff? 'float', 'long', # rare stuff? 
'isinstance', 'issubtype', ] diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -605,15 +605,6 @@ iter__Frozenset = iter__Set -def cmp__Set_settypedef(space, w_left, w_other): - # hack hack until we get the expected result - raise OperationError(space.w_TypeError, - space.wrap('cannot compare sets using cmp()')) - -cmp__Set_frozensettypedef = cmp__Set_settypedef -cmp__Frozenset_settypedef = cmp__Set_settypedef -cmp__Frozenset_frozensettypedef = cmp__Set_settypedef - init_signature = Signature(['some_iterable'], None, None) init_defaults = [None] def init__Set(space, w_set, __args__): diff --git a/pypy/objspace/std/test/test_identitydict.py b/pypy/objspace/std/test/test_identitydict.py --- a/pypy/objspace/std/test/test_identitydict.py +++ b/pypy/objspace/std/test/test_identitydict.py @@ -24,10 +24,6 @@ def __eq__(self, other): return True - class CustomCmp (object): - def __cmp__(self, other): - return 0 - class CustomHash(object): def __hash__(self): return 0 @@ -35,17 +31,11 @@ class TypeSubclass(type): pass - class TypeSubclassCustomCmp(type): - def __cmp__(self, other): - return 0 - assert self.compares_by_identity(Plain) assert not self.compares_by_identity(CustomEq) - assert not self.compares_by_identity(CustomCmp) assert not self.compares_by_identity(CustomHash) assert self.compares_by_identity(type) assert self.compares_by_identity(TypeSubclass) - assert not self.compares_by_identity(TypeSubclassCustomCmp) def test_modify_class(self): class X(object): diff --git a/pypy/objspace/std/test/test_intobject.py b/pypy/objspace/std/test/test_intobject.py --- a/pypy/objspace/std/test/test_intobject.py +++ b/pypy/objspace/std/test/test_intobject.py @@ -453,14 +453,6 @@ def test_getnewargs(self): assert 0 .__getnewargs__() == (0,) - - def test_cmp(self): - skip("This is a 'wont fix' case") - # We don't have __cmp__, we consistently have __eq__ & the others - # instead. In CPython some types have __cmp__ and some types have - # __eq__ & the others. 
-        assert 1 .__cmp__
-        assert int .__cmp__
 
     def test_bit_length(self):
         for val, bits in [
diff --git a/pypy/objspace/std/test/test_setobject.py b/pypy/objspace/std/test/test_setobject.py
--- a/pypy/objspace/std/test/test_setobject.py
+++ b/pypy/objspace/std/test/test_setobject.py
@@ -97,7 +97,6 @@
         assert d == a
 
     def test_compare(self):
-        raises(TypeError, cmp, set('abc'), set('abd'))
         assert set('abc') != 'abc'
         raises(TypeError, "set('abc') < 42")
         assert not (set('abc') < set('def'))
diff --git a/pypy/objspace/std/typeobject.py b/pypy/objspace/std/typeobject.py
--- a/pypy/objspace/std/typeobject.py
+++ b/pypy/objspace/std/typeobject.py
@@ -176,8 +176,7 @@
         # ^^^ conservative default, fixed during real usage
 
         if space.config.objspace.std.withidentitydict:
-            if (key is None or key == '__eq__' or
-                key == '__cmp__' or key == '__hash__'):
+            if (key is None or key == '__eq__' or key == '__hash__'):
                 w_self.compares_by_identity_status = UNKNOWN
 
         if space.config.objspace.std.newshortcut:
@@ -242,7 +241,6 @@
         my_eq = w_self.lookup('__eq__')
         overrides_eq = (my_eq and my_eq is not type_eq(w_self.space))
         overrides_eq_cmp_or_hash = (overrides_eq or
-                                    w_self.lookup('__cmp__') or
                                     w_self.lookup('__hash__') is not default_hash)
         if overrides_eq_cmp_or_hash:
             w_self.compares_by_identity_status = OVERRIDES_EQ_CMP_OR_HASH
diff --git a/pypy/objspace/test/test_descriptor.py b/pypy/objspace/test/test_descriptor.py
--- a/pypy/objspace/test/test_descriptor.py
+++ b/pypy/objspace/test/test_descriptor.py
@@ -112,11 +112,6 @@
             def __eq__(self, other): pass
         raises(TypeError, "hash(B())") # because we define __eq__ but not __hash__
 
-        # same as above for __cmp__
-        class C(object):
-            def __cmp__(self, other): pass
-        hash(C())
-
         class E(object):
             def __hash__(self):
                 return "something"
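
[Editor's note: the sketch below is not part of any changeset in this archive. It is an
illustrative summary, under the editor's own assumptions, of the comparison semantics the
py3k commits above converge on: no cmp()/__cmp__ fallback, TypeError for unorderable
types, '==' falling back to identity, and __eq__ without __hash__ making instances
unhashable. The class names A and B are hypothetical. It should run on any Python 3
interpreter (PyPy's py3k branch or CPython 3); only the exact wording of the TypeError
messages differs between implementations and versions.]

    # Illustrative sketch only -- not taken from any of the diffs above.

    class A(object):
        pass

    class B(object):
        def __eq__(self, other):       # defining __eq__ without __hash__ ...
            return NotImplemented

    try:
        A() < A()                      # no __lt__, and no __cmp__/cmp fallback anymore
    except TypeError as e:
        print("ordering:", e)          # e.g. "unorderable types: ..." on the py3k branch

    a = A()
    print(a == a)                      # True: default equality compares by identity

    try:
        hash(B())                      # ... makes instances unhashable
    except TypeError as e:
        print("hashing:", e)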